How do you fly an airplane from one point to another as fast as possible? What is the best way to administer a vaccine to fight the harmful effects of disease? What is the most efficient way to produce a chemical substance? This book presents practical methods for solving real optimal control problems such as these. Practical Methods for Optimal Control Using Nonlinear Programming, Third Edition focuses on the direct transcription method for optimal control. It features a summary of relevant material in constrained optimization, including nonlinear programming; discretization techniques appropriate for ordinary differential equations and differential-algebraic equations; and several examples and descriptions of computational algorithm formulations that implement this discretize-then-optimize strategy. The third edition has been thoroughly updated and includes new material on implicit Runge–Kutta discretization techniques, new chapters on partial differential equations and delay equations, and more than 70 test problems and open source FORTRAN code for all of the problems. This book will be valuable for academic and industrial research and development in optimal control theory and applications. It is appropriate as a primary or supplementary text for advanced undergraduate and graduate students.
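As a rough illustration of the discretize-then-optimize strategy described above (not the book's own FORTRAN implementation), the sketch below transcribes a minimum-effort double-integrator problem with trapezoidal collocation and hands the resulting nonlinear program to SciPy's SLSQP solver; the grid size, boundary conditions, and solver choice are illustrative assumptions only.

```python
# A minimal discretize-then-optimize sketch: trapezoidal transcription of a
# double-integrator minimum-effort problem, solved with a generic NLP solver.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0          # number of grid intervals and final time (assumed)
h = T / N               # uniform step size

def unpack(z):
    # decision vector z = [p_0..p_N, v_0..v_N, u_0..u_N]
    p, v, u = np.split(z, 3)
    return p, v, u

def objective(z):
    # trapezoidal quadrature of the control effort, integral of u^2 dt
    _, _, u = unpack(z)
    return h * np.sum((u[:-1]**2 + u[1:]**2) / 2)

def defects(z):
    # collocation "defect" constraints: trapezoidal rule applied to
    # p' = v, v' = u on every interval, plus boundary conditions
    p, v, u = unpack(z)
    d1 = p[1:] - p[:-1] - h * (v[:-1] + v[1:]) / 2
    d2 = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2
    bc = [p[0], v[0], p[-1] - 1.0, v[-1]]   # start at rest at 0, stop at 1
    return np.concatenate([d1, d2, bc])

z0 = np.zeros(3 * (N + 1))                  # crude initial guess
sol = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
p_opt, v_opt, u_opt = unpack(sol.x)
print(sol.success, objective(sol.x))
```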
Recent developments in constrained control and estimation have created a need for this comprehensive introduction to the underlying fundamental principles. These advances have significantly broadened the realm of application of constrained control.
- Using the principal tools of prediction and optimisation, examples of how to deal with constraints are given, placing emphasis on model predictive control (a minimal sketch follows this entry).
- New results combine a number of methods in a unique way, enabling you to build on your background in estimation theory, linear control, stability theory and state-space methods.
- Companion web site, continually updated by the authors.
Easy to read and at the same time containing a high level of technical detail, this self-contained, new approach to methods for constrained control design will give you a full understanding of the subject.
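To make the receding-horizon idea concrete, here is a minimal model predictive control loop in Python. It is a sketch under assumed conditions (a discretized double integrator, an input bound of |u| <= 0.5, and a generic SciPy solver) rather than anything drawn from the book or its companion web site.

```python
# A small receding-horizon (MPC) sketch: at every step a finite-horizon
# quadratic cost is minimized over a bounded input sequence, and only the
# first move is applied before the horizon slides forward.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
H, Q, R = 15, np.diag([10.0, 1.0]), 0.1  # horizon and weights (assumed values)

def cost(u_seq, x0):
    # roll the model forward and accumulate the quadratic regulation cost
    x, J = x0.copy(), 0.0
    for u in u_seq:
        J += x @ Q @ x + R * u**2
        x = A @ x + B.flatten() * u
    return J + x @ Q @ x                  # terminal cost term

x = np.array([1.0, 0.0])                  # start one unit from the origin
for _ in range(50):
    res = minimize(cost, np.zeros(H), args=(x,),
                   bounds=[(-0.5, 0.5)] * H, method="L-BFGS-B")
    u0 = res.x[0]                         # apply only the first control
    x = A @ x + B.flatten() * u0
print(x)                                  # state should be driven near the origin
```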
Upper-level undergraduate text introduces aspects of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization. Numerous figures, tables. Solution guide available upon request. 1970 edition.
This book addresses modern nonlinear programming (NLP) concepts and algorithms, especially as they apply to challenging applications in chemical process engineering. The author provides a firm grounding in fundamental NLP properties and algorithms, and relates them to real-world problem classes in process optimization, thus making the material understandable and useful to chemical engineers and experts in mathematical optimization.
This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study.
- Offers a concise yet rigorous introduction
- Requires limited background in control theory or advanced mathematics
- Provides a complete proof of the maximum principle
- Uses consistent notation in the exposition of classical and modern topics
- Traces the historical development of the subject
- Solutions manual (available only to teachers)
Leading universities that have adopted this book include:
- University of Illinois at Urbana-Champaign, ECE 553: Optimum Control Systems
- Georgia Institute of Technology, ECE 6553: Optimal Control and Optimization
- University of Pennsylvania, ESE 680: Optimal Control Theory
- University of Notre Dame, EE 60565: Optimal Control
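The linear-quadratic problem mentioned in the entry above has a compact computational counterpart: the finite-horizon Riccati recursion. The NumPy sketch below (not drawn from the book; the system matrices, horizon, and terminal weight are arbitrary examples) computes the optimal state-feedback gains and simulates the resulting closed loop.

```python
# Finite-horizon discrete-time LQR via the backward Riccati recursion for
# x_{k+1} = A x_k + B u_k with stage cost x'Qx + u'Ru.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R, N = np.eye(2), np.array([[1.0]]), 30

P = Q.copy()                     # terminal cost weight (assumed equal to Q)
gains = []
for _ in range(N):               # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()                  # gains[k] is the feedback gain for step k

x = np.array([[5.0], [0.0]])     # simulate the closed loop u_k = -K_k x_k
for K in gains:
    x = A @ x - B @ (K @ x)
print(x.ravel())                 # state is driven toward the origin
```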
The book describes how sparse optimization methods can be combined with discretization techniques for differential-algebraic equations and used to solve optimal control and estimation problems. The interaction between optimization and integration is emphasized throughout the book.
This best-selling text focuses on the analysis and design of complicated dynamic systems. CHOICE called it “a high-level, concise book that could well be used as a reference by engineers, applied mathematicians, and undergraduates. The format is good, the presentation clear, the diagrams instructive, the examples and problems helpful... References and a multiple-choice examination are included.”
A bottom-up approach that enables readers to master and apply the latest techniques in state estimation
This book offers the best mathematical approaches to estimating the state of a general system. The author presents state estimation theory clearly and rigorously, providing the right amount of advanced material, recent research results, and references to enable the reader to apply state estimation techniques confidently across a variety of fields in science and engineering. While there are other textbooks that treat state estimation, this one offers special features and a unique perspective and pedagogical approach that speed learning:
* Straightforward, bottom-up approach begins with basic concepts and then builds step by step to more advanced topics for a clear understanding of state estimation
* Simple examples and problems that require only paper and pen to solve lead to an intuitive understanding of how theory works in practice
* MATLAB®-based source code that corresponds to examples in the book, available on the author's Web site, enables readers to recreate results and experiment with other simulation setups and parameters
Armed with a solid foundation in the basics, readers are presented with a careful treatment of advanced topics, including unscented filtering, high-order nonlinear filtering, particle filtering, constrained state estimation, reduced-order filtering, robust Kalman filtering, and mixed Kalman/H∞ filtering. Problems at the end of each chapter include both written exercises and computer exercises. Written exercises focus on improving the reader's understanding of theory and key concepts, whereas computer exercises help readers apply theory to problems similar to ones they are likely to encounter in industry. With its expert blend of theory and practice, coupled with its presentation of recent research results, Optimal State Estimation is strongly recommended for undergraduate and graduate-level courses in optimal control and state estimation theory. It also serves as a reference for engineers and science professionals across a wide array of industries.
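As a small companion to the filtering topics listed above, here is a bare-bones linear Kalman filter in NumPy. The book's own source code is MATLAB-based and hosted on the author's web site; the constant-velocity model and noise levels below are illustrative assumptions, not the author's examples.

```python
# A minimal linear Kalman filter: predict/update cycle for a constant-velocity
# model in which only the position is measured.
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # measurement picks out position
Q = 1e-3 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[0.5]])                    # measurement noise covariance (assumed)

x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), np.eye(2)        # initial estimate and covariance

for _ in range(50):
    # simulate the true system and a noisy position measurement
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)

    # predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("true:", x_true, "estimate:", x_hat)
```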