New Dynamic Programming Approaches to Stochastic Optimal Control Problems in Chemical Engineering [microform]

Author: Adrian Martell Thompson

Publisher: Library and Archives Canada = Bibliothèque et Archives Canada

Published: 2005

Total Pages: 460

ISBN-13: 9780494025994

Optimal control of chemical processes in the presence of stochastic model uncertainty is addressed. Contributions are made in two areas of process control interest: dual adaptive control (DAC) and robust optimal control (ROC). These are synergistic in that DAC involves sequences of stochastic ROC problems. In chemical engineering, these problems typically have continuous state and control spaces, and are subject to a curse of dimensionality (COD) within the stochastic dynamic programming (SDP) framework. The main novelty presented here is the method by which this COD is mitigated. Existing methods to mitigate the COD include state space aggregation, function approximation (FA), or exploitation of problem structure, e.g. system linearity. The first two yield problems of reduced but still large complexity. The third is problem specific and does not generalize well to non-linear, non-convex or non-Gaussian structures. Here, two new algorithms are developed that mitigate the COD without these simplifications, with only minimal restrictions imposed on problem structure.

The first, a Monte Carlo extension of iterative dynamic programming (IDP), reduces discretization requirements by restricting the control policy to the dominant portion of the state space. A proof of strong probabilistic convergence of IDP is derived, and is shown to extend to the new stochastic IDP (SIDP) algorithm. Simulations demonstrate that SIDP can provide significant COD mitigation in DAC applications, relative to the standard SDP approach. Specifically, a 96% computation reduction, 92% storage reduction and less than 2% accuracy loss were simultaneously achieved using SIDP.

The second algorithm, a policy iteration (PI) variant employing Nyström's discretization method, allows computation of continuous stochastic ROC policies without quadrature, function approximation, interpolation, or Monte Carlo methods. Lipschitz continuity assumptions allow reformulation of the original problem into an equivalent finite-state problem solvable in a Luus-Jaakola global optimization framework. This enables exponential computation reductions relative to standard PI. Simulations, involving stochastic ROC of a nonlinear reactor, exhibited a 99.9% reduction in computation with identical accuracy. Additionally, the average performance of the policy obtained was 58.2% better than that of the certainty-equivalence policy.
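
The Nyström-based PI variant is only summarized above; for orientation, the sketch below shows the standard policy iteration baseline it is compared against, on a generic finite-state, finite-action stochastic problem. The toy transition data and all names are illustrative, not taken from the thesis.

```python
# Minimal sketch: textbook policy iteration on a finite-state problem.
# This is the standard PI baseline, NOT the thesis's Nystrom variant.
import numpy as np

n_states, n_actions, gamma = 5, 3, 0.95
rng = np.random.default_rng(0)

# Illustrative stochastic transition kernel P[a, s, s'] and stage costs c[s, a].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((n_states, n_actions))

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = c_pi exactly.
    P_pi = P[policy, np.arange(n_states), :]
    c_pi = c[np.arange(n_states), policy]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
    # Policy improvement: greedy one-step lookahead.
    Q = c + gamma * np.einsum("ast,t->sa", P, V)
    new_policy = Q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("converged policy:", policy)
```

Each sweep evaluates the current policy over every discretized state, which is exactly where the curse of dimensionality bites; the exponential reduction claimed above refers to shrinking that per-sweep cost.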


Dynamic Programming in Chemical Engineering and Process Control by Sanford M Roberts

Author: Sanford M. Roberts

Publisher: Elsevier

Published: 1964-01-01

Total Pages: 473

ISBN-13: 0080955193

In this book, we study theoretical and practical aspects of computing methods for the mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques, including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank matrix approximations; hybrid methods based on a combination of iterative procedures and best operator approximation; and methods for information compression and filtering under the condition that a filter model should satisfy restrictions associated with causality and different types of memory. As a result, the book represents a blend of new methods in general computational analysis and specific, but also generic, techniques for the study of systems theory and its particular branches, such as optimal filtering and information compression.

- Best operator approximation
- Non-Lagrange interpolation
- Generic Karhunen-Loeve transform
- Generalised low-rank matrix approximation
- Optimal data compression
- Optimal nonlinear filtering
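
Among the methods listed, low-rank matrix approximation has a particularly compact optimal solution: by the Eckart-Young theorem, the truncated SVD gives the best rank-k approximation. A brief, self-contained illustration (not an example from the book):

```python
# Best rank-k approximation in the spectral/Frobenius norm via truncated
# SVD (Eckart-Young theorem). Illustrative data only.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((8, 6))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Sanity check: the spectral-norm error equals the first discarded
# singular value.
print(np.linalg.norm(A - A_k, 2), s[k])
```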


Stochastic Optimal Control in Infinite Dimension

Author: Giorgio Fabbri

Publisher: Springer

Published: 2017-06-22

Total Pages: 928

ISBN-13: 3319530674

Providing an introduction to stochastic optimal control in infinite dimension, this book gives a complete account of the theory of second-order HJB equations in infinite-dimensional Hilbert spaces, focusing on its applicability to associated stochastic optimal control problems. It features a general introduction to optimal stochastic control, including basic results (e.g. the dynamic programming principle) with proofs, and provides examples of applications. A complete and up-to-date exposition of the existing theory of viscosity solutions and regular solutions of second-order HJB equations in Hilbert spaces is given, together with an extensive survey of other methods, with a full bibliography. In particular, Chapter 6, written by M. Fuhrman and G. Tessitore, surveys the theory of regular solutions of HJB equations arising in infinite-dimensional stochastic control, via BSDEs. The book is of interest to both pure and applied researchers working in the control theory of stochastic PDEs, and in PDEs in infinite dimension. Readers from other fields who want to learn the basic theory will also find it useful. The prerequisites are: standard functional analysis, the theory of semigroups of operators and its use in the study of PDEs, some knowledge of the dynamic programming approach to stochastic optimal control problems in finite dimension, and the basics of stochastic analysis and stochastic equations in infinite-dimensional spaces.
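
For orientation, the finite-dimensional prototype of the equations studied in the book is the second-order HJB equation below; in the infinite-dimensional setting the state space R^n is replaced by a Hilbert space and an unbounded operator enters the drift. The notation here is schematic, not the book's exact conventions:

```latex
% Schematic second-order HJB equation of stochastic optimal control
% (finite-dimensional prototype; notation illustrative).
\[
  \partial_t v(t,x)
  + \inf_{a \in \Lambda} \Big\{
      \tfrac{1}{2}\,\mathrm{Tr}\!\big(\sigma(x,a)\sigma(x,a)^{\top} D^2 v(t,x)\big)
      + \big\langle b(x,a),\, D v(t,x) \big\rangle
      + l(x,a)
    \Big\} = 0,
  \qquad v(T,x) = g(x),
\]
% where b is the drift, sigma the diffusion coefficient, l the running
% cost, g the terminal cost, and Lambda the control set.
```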


Optimal Design of Control Systems

Author: Gennadii E. Kolosov

Publisher: CRC Press

Published: 2020-08-26

Total Pages: 420

ISBN-13: 1000103323

"Covers design methods for optimal (or quasioptimal) control algorithms in the form of synthesis for deterministic and stochastic dynamical systems-with applications in aerospace, robotic, and servomechanical technologies. Providing new results on exact and approximate solutions of optimal control problems."


Stochastic Control Theory

Author: Makiko Nisio

Publisher: Springer

Published: 2014-11-27

Total Pages: 263

ISBN-13: 4431551239

This book offers a systematic introduction to optimal stochastic control theory via the dynamic programming principle (DPP), which is a powerful tool for analyzing control problems. First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the DPP, whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup, alongside the viscosity solution theory.

When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. These are treated in the same framework, via the nonlinear semigroup, and the results are applicable to the American option pricing problem.

Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to the DPP (the min-max principle, to be precise). Using semi-discretization arguments, we construct the nonlinear semigroups whose generators provide the lower and upper Isaacs equations.

Concerning partially observable control problems, we turn to stochastic parabolic equations driven by colored Wiener noise, in particular the Zakai equation. The existence and uniqueness of solutions and their regularities, as well as Itô's formula, are stated. A control problem for the Zakai equation has a nonlinear semigroup whose generator provides the HJB equation on a Banach space; the value function turns out to be the unique viscosity solution of this HJB equation under mild conditions.

This edition provides a more generalized treatment of the topic than the earlier book Lectures on Stochastic Control Theory (ISI Lecture Notes 9), which dealt with time-homogeneous cases. Here, for finite time-horizon control problems, the DPP is formulated as a one-parameter nonlinear semigroup, whose generator provides the HJB equation, by using a time-discretization method. The semigroup corresponds to the value function and is characterized as the envelope of Markovian transition semigroups of responses to constant control processes. Besides finite time-horizon controls, the book discusses control-stopping problems in the same framework.
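
A compact way to state the semigroup viewpoint used throughout the book: the DPP says that the operators mapping a terminal cost to the value function form a one-parameter nonlinear semigroup, and its generator yields the HJB equation. Schematic notation, not the book's exact one:

```latex
% Dynamic programming principle as a nonlinear semigroup (schematic).
% S_t maps a terminal cost g to the value function at horizon t.
\[
  (S_t g)(x) = \inf_{u(\cdot)}\,
    \mathbb{E}\!\left[ \int_0^{t} l\big(X_s^{x,u}, u_s\big)\,ds
      + g\big(X_t^{x,u}\big) \right],
  \qquad S_{t+s} = S_t \circ S_s,
\]
\[
  \lim_{t \downarrow 0} \frac{(S_t g)(x) - g(x)}{t}
  = \inf_{a}\big\{ L^{a} g(x) + l(x,a) \big\},
\]
% where L^a is the second-order generator of the controlled diffusion
% under the constant control a; the right-hand side is the HJB operator.
```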


Dynamic Programming and Optimal Control

Author: Dimitri Bertsekas

Publisher: Athena Scientific

Published:

Total Pages: 613

ISBN-13: 1886529434

This is the leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an extensive treatment of the far-reaching methodology of Neuro-Dynamic Programming/Reinforcement Learning. Among its special features, the book:

1) provides a unifying framework for sequential decision making;

2) treats simultaneously deterministic and stochastic control problems popular in modern control theory and Markovian decision problems popular in operations research;

3) develops the theory of deterministic optimal control problems, including the Pontryagin Minimum Principle;

4) introduces recent suboptimal control and simulation-based approximation techniques (neuro-dynamic programming), which allow the practical application of dynamic programming to complex problems that involve the dual curse of large dimension and lack of an accurate mathematical model;

5) provides a comprehensive treatment of infinite horizon problems in the second volume, and an introductory treatment in the first volume.

The electronic version of the book includes 29 theoretical problems, with high-quality solutions, which enhance the range of coverage of the book.
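
The unifying computation behind all of this is the backward recursion of dynamic programming. A minimal finite-horizon, finite-state sketch with toy data (not an example from the book):

```python
# Finite-horizon dynamic programming: backward recursion
#   J_k(s) = min_a [ cost(s, a) + E[ J_{k+1}(s') ] ].
# Toy data only.
import numpy as np

n_states, n_actions, horizon = 4, 2, 6
rng = np.random.default_rng(2)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)          # transition probabilities P[a, s, s']
cost = rng.random((n_states, n_actions))   # stage costs
J = np.zeros(n_states)                     # terminal cost J_N = 0

policy = np.zeros((horizon, n_states), dtype=int)
for k in reversed(range(horizon)):
    Q = cost + np.einsum("ast,t->sa", P, J)  # one-step lookahead
    policy[k] = Q.argmin(axis=1)
    J = Q.min(axis=1)

print("optimal cost-to-go from each initial state:", J)
```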


Applied and Computational Optimal Control

Author: Kok Lay Teo

Publisher: Springer Nature

Published: 2021-05-24

Total Pages: 581

ISBN-13: 3030699137

The aim of this book is to furnish the reader with a rigorous and detailed exposition of the concepts of control parametrization and the time scaling transformation. It presents computational solution techniques for a special class of constrained optimal control problems as well as applications to some practical examples. The book may be considered an extension of the 1991 monograph A Unified Computational Approach to Optimal Control Problems by K.L. Teo, C.J. Goh, and K.H. Wong. This publication discusses the development of new theory and computational methods for solving various optimal control problems numerically and in a unified fashion. To keep the book accessible and uniform, it includes those results developed by the authors, their students, and their past and present collaborators. A brief review of methods that are not covered in this exposition is also included. Knowledge gained from this book may inspire the advancement of new techniques for solving the complex problems that arise in the future. This book is intended as a reference for researchers in mathematics, engineering, and other sciences, and for graduate students and practitioners who apply optimal control methods in their work. It may also serve as reading material for a graduate-level seminar or as a text for a course in optimal control.
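
In outline, control parametrization replaces the control function by a piecewise-constant approximation, and the time scaling transformation maps the unknown switching times onto a fixed grid, so that both the control heights and the interval durations become ordinary decision variables. Schematically (notation mine, not the book's):

```latex
% Control parametrization: approximate u(t) by a piecewise-constant
% function with heights sigma_k on p subintervals.
\[
  u(t) \approx \sum_{k=1}^{p} \sigma_{k}\,\chi_{[t_{k-1},\,t_{k})}(t).
\]
% Time scaling transformation: a new time s on the fixed grid
% [(k-1)/p, k/p) with dt/ds = theta_k turns the unknown switching
% times t_k into decision variables theta_k.
\[
  \frac{dt}{ds} = \theta_{k}, \quad s \in \Big[\tfrac{k-1}{p},\, \tfrac{k}{p}\Big),
  \qquad \theta_{k} \ge 0, \qquad \frac{1}{p}\sum_{k=1}^{p} \theta_{k} = T,
\]
% so the optimal control problem becomes a finite-dimensional nonlinear
% program in (sigma_1, ..., sigma_p, theta_1, ..., theta_p).
```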


Dynamic Programming in Chemical Engineering and Process Control

Author: Sanford M. Roberts

Publisher: Elsevier Science & Technology

Published: 1964

Total Pages: 480

ISBN-13:

In this book, we study theoretical and practical aspects of computing methods for the mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques, including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank matrix approximations; hybrid methods based on a combination of iterative procedures and best operator approximation; and methods for information compression and filtering under the condition that a filter model should satisfy restrictions associated with causality and different types of memory. As a result, the book represents a blend of new methods in general computational analysis and specific, but also generic, techniques for the study of systems theory and its particular branches, such as optimal filtering and information compression.

- Best operator approximation
- Non-Lagrange interpolation
- Generic Karhunen-Loeve transform
- Generalised low-rank matrix approximation
- Optimal data compression
- Optimal nonlinear filtering


Adaptive Dynamic Programming for Control

Author: Huaguang Zhang

Publisher: Springer Science & Business Media

Published: 2012-12-14

Total Pages: 432

ISBN-13: 144714757X

There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming in Discrete Time approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive; affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and the techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization and for tracking and games benefit from the incorporation of optimal control methods:

• infinite-horizon control, for which the difficulty of solving partial differential Hamilton–Jacobi–Bellman equations directly is overcome, with proof provided that the iterative value function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences;

• finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps and with results more easily applied in real systems than those usually gained from infinite-horizon control;

• nonlinear games, for which a pair of mixed optimal policies is derived for solving games both when the saddle point does not exist and, when it does, avoiding the existence conditions of the saddle point. Non-zero-sum games are studied in the context of a single network scheme in which policies are obtained guaranteeing system stability and minimizing the individual performance function, yielding a Nash equilibrium.

In order to make the coverage suitable for the student as well as for the expert reader, the book:

• establishes the fundamental theory involved clearly, with each chapter devoted to a clearly identifiable control paradigm;

• demonstrates convergence proofs of the ADP algorithms, to deepen understanding of the derivation of stability and convergence with the iterative computational methods used; and

• shows how ADP methods can be put to use both in simulation and in real applications.

This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.
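
The value iteration scheme referred to above has, in its generic discrete-time form, the following structure (schematic ADP notation, not copied from the book):

```latex
% Generic discrete-time ADP value iteration (schematic).
\[
  \mu_{i}(x) = \arg\min_{u}\,\big\{ U(x,u) + v_{i}\big(F(x,u)\big) \big\},
  \qquad
  v_{i+1}(x) = \min_{u}\,\big\{ U(x,u) + v_{i}\big(F(x,u)\big) \big\},
\]
% starting from v_0 = 0, where F is the system dynamics and U the
% utility (stage cost); under the admissibility conditions discussed in
% the book, v_i converges to the optimal value function solving the
% discrete-time Bellman equation.
```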


Self-Learning Optimal Control of Nonlinear Systems

Author: Qinglai Wei

Publisher: Springer

Published: 2017-06-13

Total Pages: 242

ISBN-13: 981104080X

This book presents a class of novel, self-learning, optimal control schemes based on adaptive dynamic programming techniques, which quantitatively obtain the optimal control laws of the systems under study. It analyzes the properties of these methods, including the convergence of the iterative value functions and the stability of the system under the iterative control laws, helping to guarantee the effectiveness of the methods developed. When the system model is known, self-learning optimal control is designed on the basis of the model; when the system model is not known, adaptive dynamic programming is implemented using system data, effectively making the performance of the system converge to the optimum. With various real-world examples to complement and substantiate the mathematical analysis, the book is a valuable guide for engineers, researchers, and students in control science and engineering.
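
When the model is unknown and control must be learned from system data, the simplest widely known stand-in is tabular Q-learning, sketched below; this illustrates the data-driven idea only and is not the book's specific algorithm:

```python
# Model-free learning from sampled transitions: tabular Q-learning as
# the simplest stand-in for data-driven ADP. Toy problem, illustrative.
import numpy as np

n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1
rng = np.random.default_rng(3)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)   # hidden true dynamics (used only to sample)
cost = rng.random((n_states, n_actions))

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(50_000):
    # Epsilon-greedy action selection (costs, so greedy = argmin).
    a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmin())
    s_next = rng.choice(n_states, p=P[a, s])        # observed transition
    target = cost[s, a] + gamma * Q[s_next].min()   # sampled Bellman backup
    Q[s, a] += alpha * (target - Q[s, a])
    s = s_next

print("greedy policy learned from data:", Q.argmin(axis=1))
```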