A comprehensive introduction to the neural network models currently under intensive study for computational applications. It also covers neural network applications in a variety of problems of both theoretical and practical interest.
Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics. Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning. Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems and new presentations of the standard proofs. The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
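For orientation, the criterion at the heart of Valiant's model can be stated in standard form (a textbook formulation, not quoted from this book): a concept class $\mathcal{C}$ is PAC-learnable if there is an algorithm that, for every target concept $c \in \mathcal{C}$, every distribution $D$, and accuracy and confidence parameters $\epsilon, \delta \in (0,1)$, draws $\mathrm{poly}(1/\epsilon, 1/\delta)$ examples from $D$ and outputs a hypothesis $h$ satisfying
\[
\Pr\bigl[\operatorname{err}_D(h) \le \epsilon\bigr] \ge 1 - \delta,
\qquad
\operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr].
\]
Efficient PAC learning additionally requires the algorithm's running time to be polynomial in the same parameters, which is the computational emphasis of the book.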
This book provides a comprehensive introduction to the computational material that forms the underpinnings of the currently evolving set of brain models. It is now clear that the brain is unlikely to be understood without recourse to computational theories. The theme of An Introduction to Natural Computation is that ideas from diverse areas such as neuroscience, information theory, and optimization theory have recently been extended in ways that make them useful for describing the brain's programs. The book stresses the broad spectrum of learning models, ranging from neural network learning through reinforcement learning to genetic learning, and situates the various models in their appropriate neural context. To write about models of the brain before the brain is fully understood is a delicate matter. Very detailed models of the neural circuitry risk losing track of the task the brain is trying to solve. At the other extreme, models that represent cognitive constructs can be so abstract that they lose all relationship to neurobiology. An Introduction to Natural Computation takes the middle ground and stresses the computational task while staying near the neurobiology.
Presenting research on the computational abilities of connectionist, neural, and neurally inspired systems, this series emphasizes the question of how connectionist or neural network models can be made to perform rapid, short-term types of computation that are useful in higher level cognitive processes. The most recent volumes are directed mainly at researchers in connectionism, analogy, metaphor, and case-based reasoning, but are also suitable for graduate courses in those areas.
A detailed formulation of neural networks from the information-theoretic viewpoint. The authors show how this perspective provides new insights into the design theory of neural networks. In particular, they demonstrate how these methods may be applied to the topics of supervised and unsupervised learning, including feature extraction, linear and nonlinear independent component analysis, and Boltzmann machines. Readers are assumed to have a basic understanding of neural networks, but all the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from varied scientific disciplines, notably cognitive scientists, engineers, physicists, statisticians, and computer scientists, will find this an extremely valuable introduction to the topic.
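The central quantity behind such methods is mutual information (given here in its standard form, not the book's own notation):
\[
I(X;Y) = H(X) - H(X \mid Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
\]
where $H$ denotes Shannon entropy. Infomax-style learning rules of this general kind adjust network weights to maximize $I$ between inputs and outputs, yielding feature-extraction and independent component analysis algorithms as special cases.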
Neural computing is one of the most interesting and rapidly growing areas of research, attracting researchers from a wide variety of scientific disciplines. Starting from the basics, Neural Computing covers all the major approaches, putting each in perspective in terms of its capabilities, advantages, and disadvantages. The book also highlights the applications of each approach and explores the relationships among the models developed, and between the brain and its function. A comprehensive and comprehensible introduction to the subject, this book is ideal for undergraduates in computer science, physicists, communications engineers, workers involved in artificial intelligence, biologists, psychologists, and physiologists.
Quantum Neural Computation is a graduate-level monographic textbook. It presents a comprehensive introduction, both non-technical and technical, to modern quantum neural computation, the science behind the science-fiction film Stealth. Classical computing systems perform classical computations (i.e., Boolean operations, such as AND, OR, and NOT gates) using devices that can be described classically (e.g., MOSFETs). Quantum computing systems, on the other hand, perform classical computations using quantum devices (quantum dots), that is, devices that can be described only by quantum mechanics. Any information transfer between such computing systems involves a state measurement. The book describes this information transfer at the edge of classical and quantum chaos and turbulence, where the mysterious linearity of quantum mechanics meets the even more mysterious nonlinear complexity of the brain, in order to perform super-high-speed, error-free computations. The monograph thus sits at a crossroads between quantum field theory, brain science, and computational intelligence.
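For readers unfamiliar with the notation, the state measurement mentioned above can be written in standard quantum-mechanical form (a textbook statement, not taken from this monograph): a qubit in the superposition
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]
yields the classical bit $0$ with probability $|\alpha|^2$ and $1$ with probability $|\beta|^2$ when measured in the computational basis, collapsing the superposition in the process; this collapse is what makes the classical-quantum interface nontrivial.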
This richly illustrated book shows how Shannon's mathematical theory of information defines absolute limits on neural efficiency, limits which ultimately determine the neuroanatomical microstructure of the eye and brain. Written in an informal style, it is an ideal introduction to cutting-edge research in neural information theory.
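The kind of absolute limit at issue is exemplified by Shannon's capacity formula for a noisy Gaussian channel (a standard result, quoted here only for context):
\[
C = B \log_2\!\left(1 + \frac{S}{N}\right),
\]
which bounds the rate, in bits per second, at which any channel of bandwidth $B$ and signal-to-noise ratio $S/N$, a neural axon included, can transmit information.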