A survey of probabilistic approaches to modeling and understanding brain function. Neurophysiological, neuroanatomical, and brain imaging studies have helped to shed light on how the brain transforms raw sensory information into a form that is useful for goal-directed behavior. A fundamental question that is seldom addressed by these studies, however, is why the brain uses the types of representations it does and what evolutionary advantage, if any, these representations confer. It is difficult to address such questions directly via animal experiments. A promising alternative is to use probabilistic principles such as maximum likelihood and Bayesian inference to derive models of brain function. This book surveys some of the current probabilistic approaches to modeling and understanding brain function. Although most of the examples focus on vision, many of the models and techniques are applicable to other modalities as well. The book presents top-down computational models as well as bottom-up neurally motivated models of brain function. The topics covered include Bayesian and information-theoretic models of perception, probabilistic theories of neural coding and spike timing, computational models of lateral and cortico-cortical feedback connections, and the development of receptive field properties from natural signals.
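As a minimal illustration of the Bayesian framing that runs through models of this kind (the symbols are generic and not taken from the book), perception is cast as inference over the latent scene s that caused the sensory data d, combining a generative likelihood with a prior over scenes; a maximum a posteriori estimate then selects the most probable interpretation:

    P(s \mid d) = \frac{P(d \mid s)\, P(s)}{\sum_{s'} P(d \mid s')\, P(s')},
    \qquad
    \hat{s}_{\mathrm{MAP}} = \arg\max_{s}\; P(d \mid s)\, P(s)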
The Oxford Handbook of the Philosophy of Perception is a survey by leading philosophical thinkers of contemporary issues and new thinking in philosophy of perception. It includes sections on the history of the subject, introductions to contemporary issues in the epistemology, ontology and aesthetics of perception, treatments of the individual sense modalities and of the things we perceive by means of them, and a consideration of how perceptual information is integrated and consolidated. New analytic tools and applications to other areas of philosophy are discussed in depth. Each of the forty-five entries is written by a leading expert, some collaborating with younger figures; each seeks to introduce the reader to a broad range of issues. All contain new ideas on the topics covered; together they demonstrate the vigour and innovative zeal of a young field. The book is accessible to anybody who has an intellectual interest in issues concerning perception.
Experimental and theoretical neuroscientists use Bayesian approaches to analyze the brain mechanisms of perception, decision-making, and motor control.
An introduction to the Bayesian approach to statistical inference that demonstrates its superiority to orthodox frequentist statistical analysis. This book offers an introduction to the Bayesian approach to statistical inference, with a focus on nonparametric and distribution-free methods. It covers not only well-developed methods for doing Bayesian statistics but also novel tools that enable Bayesian statistical analyses for cases that previously did not have a full Bayesian solution. The book's premise is that there are fundamental problems with orthodox frequentist statistical analyses that distort the scientific process. Side-by-side comparisons of Bayesian and frequentist methods illustrate the mismatch between the needs of experimental scientists in making inferences from data and the properties of the standard tools of classical statistics.
The integrated nested Laplace approximation (INLA) is a recent computational method that can fit Bayesian models in a fraction of the time required by typical Markov chain Monte Carlo (MCMC) methods. INLA focuses on marginal inference for the parameters of latent Gaussian Markov random field models and exploits conditional independence properties in the model for computational speed. Bayesian Inference with INLA provides a description of INLA and its associated R package for model fitting. The book describes the underlying methodology as well as how to fit a wide range of models with R. Topics covered include generalized linear mixed-effects models, multilevel models, spatial and spatio-temporal models, smoothing methods, survival analysis, imputation of missing values, and mixture models. It also discusses advanced features of the INLA package and how to extend the set of priors and latent models available in the package. All examples in the book are fully reproducible, and the datasets and R code are available from the book's website. The book will be helpful to researchers from a range of fields who have some background in Bayesian inference and want to apply the INLA method in their work. The examples cover topics in biostatistics, econometrics, education, environmental science, epidemiology, public health, and the social sciences.
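As a minimal sketch of the kind of model-fitting workflow the book covers, the call below uses the R-INLA package to fit a Poisson generalized linear mixed model with an iid group-level random effect; the simulated data frame and variable names are illustrative assumptions, not an example from the book:

    library(INLA)   # R-INLA package for approximate Bayesian inference

    # Hypothetical data: counts y, covariate x, grouping factor for the random effect
    set.seed(1)
    dat <- data.frame(
      y     = rpois(50, lambda = 5),
      x     = rnorm(50),
      group = factor(rep(1:10, each = 5))
    )

    # Poisson GLMM: fixed effect of x plus an iid random effect per group
    res <- inla(y ~ x + f(group, model = "iid"),
                family = "poisson",
                data = dat)

    summary(res)              # posterior summaries of fixed effects and hyperparameters
    res$summary.fixed         # marginal posteriors of the fixed effects
    res$summary.random$group  # marginal posteriors of the group-level effects

INLA returns these marginal posteriors directly from the nested Laplace approximations, rather than from posterior samples as MCMC would.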
This book shows how the Bayesian approach to inference is applicable to partially identified models (PIMs) and examines the performance of Bayesian procedures in partially identified contexts. Drawing on his many years of research in this area, the author presents a thorough overview of the statistical theory, properties, and applications of PIMs.
Exciting new theories in neuroscience, psychology, and artificial intelligence are revealing minds like ours as predictive minds, forever trying to guess the incoming streams of sensory stimulation before they arrive. In this up-to-the-minute treatment, philosopher and cognitive scientist Andy Clark explores new ways of thinking about perception, action, and the embodied mind.
This is an entry-level book on Bayesian statistics written in a casual, conversational tone. The authors walk the reader through many sample problems step by step, giving those with little background in math or statistics the vocabulary, notation, and understanding of the calculations used in many Bayesian problems.