The book reviews developments in the following fields: optimal adaptive control; online differential games; reinforcement learning principles; and dynamic feedback control systems.
Evaluating statistical procedures through decision and game theory, as first proposed by Neyman and Pearson and extended by Wald, is the goal of this problem-oriented text in mathematical statistics. First-year graduate students in statistics and other students with a background in statistical theory and advanced calculus will find a rigorous, thorough presentation of statistical decision theory treated as a special case of game theory. The work of Borel, von Neumann, and Morgenstern in game theory, of prime importance to decision theory, is covered in its relevant aspects: reduction of games to normal forms, the minimax theorem, and the utility theorem. With this introduction, Blackwell and Girshick look at: Values and Optimal Strategies in Games; General Structure of Statistical Games; Utility and Principles of Choice; Classes of Optimal Strategies; Fixed Sample-Size Games with Finite Ω and with Finite A; Sufficient Statistics and the Invariance Principle; Sequential Games; Bayes and Minimax Sequential Procedures; Estimation; and Comparison of Experiments. A few topics not directly applicable to statistics, such as the theory of games with perfect information, are also discussed. Prerequisites for full understanding of the procedures in this book include knowledge of elementary analysis and some familiarity with matrices, determinants, and linear dependence. For purposes of formal development, only discrete distributions are used, though continuous distributions are employed as illustrations. The number and variety of problems presented will be welcomed by all students, computer experts, and others using statistics and game theory. This comprehensive and sophisticated introduction remains one of the strongest and most useful approaches to a field which today touches areas as diverse as gambling and particle physics.
Adaptive control has been one of the main problems studied in control theory. The subject is well understood, yet it retains a very active research frontier. This book focuses on a specific subclass of adaptive control, namely, learning-based adaptive control. As systems evolve over time or are exposed to unstructured environments, some of their characteristics can be expected to change. This book offers a new perspective on how to deal with these variations. By merging model-free and model-based learning algorithms, the author demonstrates, using a number of mechatronic examples, how the learning process can be shortened and optimal control performance can be reached and maintained. - Includes numerous mechatronics examples of the techniques. - Compares and blends model-free and model-based learning algorithms. - Covers fundamental concepts, state-of-the-art research, and the tools needed for modeling and control.
Using a common unifying framework, this volume explores the main topics of linear quadratic control, predictive control, and adaptive predictive control -- in terms of theoretical foundations, analysis and design methodologies, and application-oriented tools. Presents LQ and LQG control via two alternative approaches: the Dynamic Programming (DP) and the Polynomial Equation (PE) approach. Discusses predictive control, an important tool in industrial applications, within the framework of LQ control, and presents innovative predictive control schemes having guaranteed stability properties. Offers a unique, thorough presentation of indirect adaptive multi-step predictive controllers, with detailed proofs of globally convergent schemes for both the ideal and the bounded-disturbance case. Extends the self-tuning property of one-step-ahead control to multi-step control. For engineers and mathematicians interested in the theory, analysis and design methodologies, and application-oriented tools of optimal, predictive, and adaptive control.
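The dynamic-programming route to LQ control mentioned above can be illustrated outside any particular book. The following is a minimal numpy sketch, not taken from the text: it iterates the discrete-time Riccati difference equation backwards to obtain the steady-state feedback gain; the double-integrator plant and all numbers are illustrative assumptions.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Infinite-horizon discrete-time LQ gain via backward Riccati iteration
    (the dynamic-programming view of the LQ problem)."""
    P = Q.copy()
    for _ in range(iters):
        # K minimizes the one-step cost-to-go given the current P.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        P = 0.5 * (P + P.T)  # keep P numerically symmetric
    return K, P

# Illustrative plant: a double integrator sampled with step h = 0.1.
h = 0.1
A = np.array([[1.0, h], [0.0, 1.0]])
B = np.array([[0.5 * h**2], [h]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = dlqr(A, B, Q, R)
# All eigenvalues of the closed loop A - B K should lie inside the unit circle.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The same fixed point can also be reached by solving the algebraic Riccati equation directly; the iteration above is simply the DP recursion run to convergence.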
In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques, including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory, and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank matrix approximations; hybrid methods based on a combination of iterative procedures and best operator approximation; and methods for information compression and filtering under the condition that a filter model should satisfy restrictions associated with causality and different types of memory. As a result, the book represents a blend of new methods in general computational analysis and specific, but also generic, techniques for the study of systems theory and its particular branches, such as optimal filtering and information compression. - Best operator approximation. - Non-Lagrange interpolation. - Generic Karhunen-Loeve transform. - Generalised low-rank matrix approximation. - Optimal data compression. - Optimal nonlinear filtering.
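Low-rank matrix approximation, one of the topics listed above, has a classical baseline worth keeping in mind. The sketch below is not from the book: it is the standard truncated-SVD construction, which by the Eckart-Young theorem is the best rank-k approximation in the Frobenius norm; the random matrix and the choice k = 5 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 30))  # illustrative data matrix

# Best rank-k approximation (Eckart-Young): keep the k largest
# singular triplets and discard the rest.
k = 5
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The Frobenius error equals the norm of the discarded singular values.
err = np.linalg.norm(X - X_k, "fro")
expected = np.sqrt(np.sum(s[k:] ** 2))
```

The identity between `err` and `expected` is exact in theory and holds to machine precision numerically, which makes it a convenient sanity check for any low-rank approximation code.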
Presented in a tutorial style, this comprehensive treatment unifies, simplifies, and explains most of the techniques for designing and analyzing adaptive control systems. Numerous examples clarify procedures and methods. 1995 edition.
Exploring connections between adaptive control theory and practice, this book treats the techniques of linear quadratic optimal control and estimation (Kalman filtering), recursive identification, linear systems theory, and robustness arguments.
Designed to meet the needs of a wide audience without sacrificing mathematical depth and rigor, Adaptive Control Tutorial presents the design, analysis, and application of a wide variety of algorithms that can be used to manage dynamical systems with unknown parameters. Its tutorial-style presentation of the fundamental techniques and algorithms in adaptive control makes it suitable as a textbook. Adaptive Control Tutorial is designed to serve the needs of three distinct groups of readers: engineers and students interested in learning how to design, simulate, and implement parameter estimators and adaptive control schemes without having to fully understand the analytical and technical proofs; graduate students who, in addition to attaining the aforementioned objectives, also want to understand the analysis of simple schemes and get an idea of the steps involved in more complex proofs; and advanced students and researchers who want to study and understand the details of long and technical proofs with an eye toward pursuing research in adaptive control or related topics. The authors achieve these multiple objectives by enriching the book with examples demonstrating the design procedures and basic analysis steps and by detailing their proofs in both an appendix and electronically available supplementary material; online examples are also available. A solution manual for instructors can be obtained by contacting SIAM or the authors. Preface; Acknowledgements; List of Acronyms; Chapter 1: Introduction; Chapter 2: Parametric Models; Chapter 3: Parameter Identification: Continuous Time; Chapter 4: Parameter Identification: Discrete Time; Chapter 5: Continuous-Time Model Reference Adaptive Control; Chapter 6: Continuous-Time Adaptive Pole Placement Control; Chapter 7: Adaptive Control for Discrete-Time Systems; Chapter 8: Adaptive Control of Nonlinear Systems; Appendix; Bibliography; Index
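Parameter identification, the subject of two chapters listed above, is easy to demonstrate in miniature. The following is a generic sketch, not taken from the book: a recursive least squares (RLS) estimator that updates the parameter estimate one regressor/measurement pair at a time; the true parameter vector, the random regressors, and the noise-free measurements are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0, 0.5])  # unknown to the estimator

# Recursive least squares: refine the estimate sample by sample
# instead of solving the batch normal equations once.
theta = np.zeros(3)
P = 1e3 * np.eye(3)  # large initial covariance = low initial confidence
for _ in range(200):
    phi = rng.standard_normal(3)         # regressor (persistently exciting)
    y = phi @ theta_true                 # noise-free measurement y = phi' theta*
    K = P @ phi / (1.0 + phi @ P @ phi)  # update gain
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
```

With persistently exciting regressors the estimate converges to the true parameters; with noisy measurements it converges to the least-squares solution instead.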
Much of the information on adaptive control has been difficult to access: it has been widely scattered in papers, reports, and proceedings of symposia, with different authors employing different symbols and terms. But now there is a book that covers all aspects of this dynamic topic in a systematic manner. Featuring consistent terminology and compatible notation, and emphasizing unified strategies, Adaptive Control Systems provides a comprehensive, integrated account of basic concepts, analytical tools, algorithms, and a wide variety of application trends and techniques. Adaptive Control Systems deals not only with the two principal approaches, model reference adaptive control and self-tuning regulators, but also considers other adaptive strategies involving variable structure systems, reduced order schemes, predictive control, fuzzy logic, and more. In addition, it highlights a large number of practical applications in a range of fields from electrical to biomedical and aerospace engineering, and includes coverage of industrial robots. The book identifies current trends in the development of adaptive control systems, delineates areas for further research, and provides an invaluable bibliography of over 1,200 references to the literature. The first authoritative reference in this important area of work, Adaptive Control Systems is an essential information source for electrical and electronics, R&D, chemical, mechanical, aerospace, biomedical, metallurgical, marine, transportation, and power plant engineers. It is also useful as a text in professional society seminars and in-house training programs for personnel involved with the control of complex systems, and for graduate students engaged in the study of adaptive control systems.
Robust and Adaptive Control (second edition) shows readers how to produce consistent and accurate controllers that operate in the presence of uncertainties and unforeseen events. Driven by aerospace applications, the focus of the book is primarily on continuous-time dynamical systems. The two-part text begins with robust and optimal linear control methods and moves on to a self-contained presentation of the design and analysis of model reference adaptive control for nonlinear uncertain dynamical systems. Features of the second edition include: sufficient conditions for closed-loop stability under output feedback observer-based loop-transfer recovery (OBLTR) with adaptive augmentation; OBLTR applications to aerospace systems; case studies that demonstrate the benefits of robust and adaptive control for piloted, autonomous and experimental aerial platforms; realistic examples and simulation data illustrating key features of the methods described; and problem solutions for instructors and MATLAB® code provided electronically. The theory and practical applications address real-life aerospace problems, being based on numerous transitions of control-theoretic results into operational systems and airborne vehicles drawn from the authors’ extensive professional experience with The Boeing Company. The systems covered are challenging—often open-loop unstable with uncertainties in their dynamics—and thus require both persistently reliable control and the ability to track commands either from a pilot or a guidance computer. Readers should have a basic understanding of root locus, Bode diagrams, and Nyquist plots, as well as linear algebra, ordinary differential equations, and the use of state-space methods in analysis and modeling of dynamical systems. The second edition contains a background summary of linear systems and control systems and an introduction to state observers and output feedback control, helping to make it self-contained. 
Robust and Adaptive Control teaches senior undergraduate and graduate students how to construct stable and predictable control algorithms for realistic industrial applications. Practicing engineers and academic researchers will also find the book of great instructional value.
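Model reference adaptive control, the central design method of the book described above, can be sketched in a few lines for a first-order plant. This example is not from the book: the plant, gains, and reference signal are illustrative assumptions, and the update law is the standard Lyapunov-based one for a scalar plant with one unknown parameter.

```python
import numpy as np

# Scalar plant  x' = a*x + u  with a unknown to the controller.
# Reference model  xm' = -am*xm + r.  Control u = -theta*x + r with the
# Lyapunov-based update theta' = gamma*e*x, where e = x - xm; the ideal
# gain theta* = a + am makes the closed loop match the reference model.
a, am, gamma = -1.0, 3.0, 10.0
dt, T = 1e-3, 40.0
x = xm = theta = 0.0
for k in range(int(T / dt)):
    r = np.sin(k * dt)        # reference input (persistently exciting)
    e = x - xm                # tracking error drives the adaptation
    u = -theta * x + r
    x += dt * (a * x + u)     # forward-Euler integration of plant,
    xm += dt * (-am * xm + r) # reference model,
    theta += dt * (gamma * e * x)  # and adaptive gain
```

With the sinusoidal reference, the tracking error decays and the adaptive gain approaches the ideal value a + am; with a non-exciting reference the error still converges, but the gain need not.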