Simulation in NSL - Modeling in NSL - Schematic Capture System - User Interface and Graphical Windows - The Modeling Language NSLM - The Scripting Language NSLS - Adaptive Resonance Theory - Depth Perception - Retina - Receptive Fields - The Associative Search Network: Landmark Learning and Hill Climbing - A Model of Primate Visual-Motor Conditional Learning - The Modular Design of the Oculomotor System in Monkeys - Crowley-Arbib Saccade Model - A Cerebellar Model of Sensorimotor Adaptation - Learning to Detour - Face Recognition by Dynamic Link Matching - Appendix I: NSLM Methods - NSLJ Extensions - NSLC Extensions - NSLJ and NSLC Differences - NSLJ and NSLC Installation Instructions.
The purpose of this book is to introduce and survey the various quantitative methods that have been proposed for describing, simulating, embodying, or characterizing the processing of electrical signals in nervous systems. We believe that electrical signal processing is a vital determinant of the functional organization of the brain, and that unraveling the inherent complexities of this processing will require the methods of quantification and modeling that have led to crowning successes in the physical and engineering sciences. In comprehensive terms, we conceive neural modeling to be the attempt to relate, in nervous systems, function to structure on the basis of operation. Sufficient knowledge and appropriate tools are at hand to sustain serious and thorough study in the area. However, work in the area has yet to be satisfactorily integrated within contemporary brain research. Moreover, a good deal of inefficiency persists within the area, resulting from an overall lack of direction, critical self-evaluation, and cohesion: such theoretical and modeling studies as have appeared exist largely as fragmented islands in the literature or as sparsely attended sessions at neuroscience conferences. In writing this book, we were guided by three main immediate objectives. The first is to introduce the area to the coming generation of students of both the hard sciences and the psychological and biological sciences, in the hope that they might eventually help bring about the contributions it promises.
One of the most exciting and potentially rewarding areas of scientific research is the study of the principles and mechanisms underlying brain function. It also holds great promise for future generations of computers. A growing group of researchers, adapting knowledge and techniques from a wide range of scientific disciplines, has made substantial progress in understanding memory, the learning process, and self-organization by studying the properties of models of neural networks - idealized systems containing very large numbers of connected neurons, whose interactions give rise to the special qualities of the brain. This book introduces and explains the techniques brought from physics to the study of neural networks, and the insights they have stimulated. It is written at a level accessible to the wide range of researchers working on these problems - statistical physicists, biologists, computer scientists, computer technologists, and cognitive psychologists. The author gives a coherent, clear, and nontechnical presentation of all the basic ideas and results; more technical aspects are restricted, wherever possible, to special sections and appendices in each chapter. The book is suitable as a text for graduate courses in physics, electrical engineering, computer science, and biology.
How to Build a Brain provides a detailed exploration of a new cognitive architecture - the Semantic Pointer Architecture - that takes biological detail seriously while addressing cognitive phenomena. Topics ranging from semantics and syntax to neural coding and spike-timing-dependent plasticity are integrated to develop the world's largest functional brain model.
A survey of probabilistic approaches to modeling and understanding brain function. Neurophysiological, neuroanatomical, and brain imaging studies have helped to shed light on how the brain transforms raw sensory information into a form that is useful for goal-directed behavior. A fundamental question that is seldom addressed by these studies, however, is why the brain uses the types of representations it does and what evolutionary advantage, if any, these representations confer. It is difficult to address such questions directly via animal experiments. A promising alternative is to use probabilistic principles such as maximum likelihood and Bayesian inference to derive models of brain function. This book surveys some of the current probabilistic approaches to modeling and understanding brain function. Although most of the examples focus on vision, many of the models and techniques are applicable to other modalities as well. The book presents top-down computational models as well as bottom-up neurally motivated models of brain function. The topics covered include Bayesian and information-theoretic models of perception, probabilistic theories of neural coding and spike timing, computational models of lateral and cortico-cortical feedback connections, and the development of receptive field properties from natural signals.
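To give a concrete feel for the probabilistic principles this book builds on, the following minimal Python sketch (not code from the book; all numbers are illustrative assumptions) works out the textbook Gaussian case of perception as Bayesian inference: a stimulus with a Gaussian prior is estimated from one noisy measurement, and the posterior mean is a precision-weighted average of prior and data.

    # Perception as Bayesian inference, Gaussian case (illustrative sketch).
    # Prior:      x ~ N(mu0, s0^2)   -- what stimulus values are typical
    # Likelihood: m | x ~ N(x, sm^2) -- one noisy sensory measurement
    # Posterior:  Gaussian, with a precision-weighted mean.
    mu0, s0 = 0.0, 2.0   # assumed prior mean and standard deviation
    sm = 1.0             # assumed sensory noise standard deviation
    m = 1.5              # the measurement actually observed

    prec0, prec_m = 1.0 / s0**2, 1.0 / sm**2   # precisions (1/variance)
    post_prec = prec0 + prec_m                  # posterior precision
    post_mean = (prec0 * mu0 + prec_m * m) / post_prec

    print(f"posterior mean {post_mean:.3f}, sd {post_prec ** -0.5:.3f}")
    # Prints: posterior mean 1.200, sd 0.894 -- the estimate is pulled from
    # the measurement (1.5) toward the prior (0.0) in proportion to how
    # reliable each source of information is.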
This book is intended as a text for a one-semester course on Mathematical and Computational Neuroscience for upper-level undergraduate and beginning graduate students of mathematics, the natural sciences, engineering, or computer science. An undergraduate introduction to differential equations is more than enough mathematical background. Only a slim, high school-level background in physics is assumed, and none in biology. Topics include models of individual nerve cells and their dynamics, models of networks of neurons coupled by synapses and gap junctions, origins and functions of population rhythms in neuronal networks, and models of synaptic plasticity. An extensive online collection of Matlab programs generating the figures accompanies the book.
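The book's accompanying Matlab programs are not reproduced here, but the following Python sketch shows the kind of single-cell model such a course typically starts from: a leaky integrate-and-fire neuron driven by constant current, stepped forward with Euler's method. The parameter values are generic textbook assumptions, not taken from the book.

    # Leaky integrate-and-fire neuron (illustrative sketch, Euler method).
    tau_m = 10.0      # membrane time constant (ms)
    v_rest = -65.0    # resting potential (mV)
    v_thresh = -50.0  # spike threshold (mV)
    v_reset = -65.0   # post-spike reset potential (mV)
    r_m = 10.0        # membrane resistance (megaohms)
    i_ext = 2.0       # constant injected current (nA)
    dt = 0.1          # integration time step (ms)

    v, spikes = v_rest, []
    for step in range(int(200 / dt)):             # simulate 200 ms
        # Membrane equation: tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:                         # threshold crossing
            spikes.append(step * dt)              # record spike time (ms)
            v = v_reset                           # reset the membrane
    print(f"{len(spikes)} spikes; first at {spikes[0]:.1f} ms")

With the drive R*I sitting 20 mV above rest and the threshold only 15 mV above rest, the model fires repetitively; lowering i_ext below 1.5 nA silences it, which is exactly the kind of parameter exploration the book's figure-generating programs invite.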
This second edition presents the enormous progress made in recent years in the many subfields related to the two great questions: How does the brain work? How can we build intelligent machines? It greatly increases the coverage of models of fundamental neurobiology, cognitive neuroscience, and neural network approaches to language.
A comprehensive introduction to the world of computational models of brain and behavior. This book provides a broad collection of articles covering different aspects of computational modeling efforts in psychology and neuroscience. Specifically, it discusses models that span different brain regions (hippocampus, amygdala, basal ganglia, visual cortex), different species (humans, rats, fruit flies), and different modeling methods (neural network, Bayesian, reinforcement learning, data fitting, and Hodgkin-Huxley models, among others). Computational Models of Brain and Behavior is divided into four sections: (a) Models of brain disorders; (b) Neural models of behavioral processes; (c) Models of neural processes, brain regions, and neurotransmitters; and (d) Neural modeling approaches. It provides in-depth coverage of models of psychiatric disorders, including depression, posttraumatic stress disorder (PTSD), schizophrenia, and dyslexia; models of neurological disorders, including Alzheimer's disease, Parkinson's disease, and epilepsy; early sensory and perceptual processes; models of olfaction; higher/systems-level models and low-level models; Pavlovian and instrumental conditioning; linking information theory to neurobiology; and more. It covers computational approximations to intellectual disability in Down syndrome; discusses computational models of pharmacological and immunological treatment in Alzheimer's disease; examines neural-circuit models of the serotonergic system, from microcircuits to cognition; and treats information theory, memory, prediction, and timing in associative learning. Computational Models of Brain and Behavior is written for advanced undergraduate, Master's, and PhD-level students, as well as researchers involved in computational neuroscience modeling research.
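As one concrete instance of the associative-learning material listed above, here is a hedged Python sketch of the Rescorla-Wagner rule, the classic error-driven model of Pavlovian conditioning; the learning rate and trial counts are illustrative assumptions, and this is a generic textbook formulation rather than code from the book.

    # Rescorla-Wagner rule: w += alpha * (lambda - w), a prediction-error
    # update of the CS-US associative strength (illustrative parameters).
    alpha = 0.1              # assumed learning rate
    w = 0.0                  # associative strength of the CS

    for trial in range(100):
        lam = 1.0 if trial < 50 else 0.0   # US paired for 50 trials, then omitted
        w += alpha * (lam - w)             # error-driven update
        if trial in (0, 49, 99):
            print(f"trial {trial + 1:3d}: w = {w:.3f}")
    # Acquisition drives w toward 1.0; extinction decays it back toward 0.0.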
Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in the many specific areas related to two great questions: How does the brain work? How can we build intelligent machines? While many books discuss limited aspects of one subfield or another of brain theory and neural networks, the Handbook covers the entire sweep of topics, from detailed models of single neurons, analyses of a wide variety of biological neural networks, and connectionist studies of psychology and language, to mathematical analyses of a variety of abstract neural networks and technological applications of adaptive, artificial neural networks. Expository material makes the book accessible to readers with varied backgrounds while still offering a clear view of recent, specialized research on specific topics.