In a mathematically precise manner, this book presents a unified introduction to deterministic control theory. It includes material on the realization of both linear and nonlinear systems, impulsive control, and positive linear systems.
This book is devoted to problems of stochastic control and stopping that are time inconsistent in the sense that they do not admit a Bellman optimality principle. These problems are cast in a game-theoretic framework, with the focus on subgame-perfect Nash equilibrium strategies. The general theory is illustrated with a number of finance applications. In dynamic choice problems, time inconsistency is the rule rather than the exception. Indeed, as Robert H. Strotz pointed out in his seminal 1955 paper, relaxing the widely used ad hoc assumption of exponential discounting gives rise to time inconsistency. Other famous examples of time inconsistency include mean-variance portfolio choice and prospect theory in a dynamic context. For such models, the very concept of optimality becomes problematic, as the decision maker’s preferences change over time in a temporally inconsistent way. In this book, a time-inconsistent problem is viewed as a non-cooperative game between the agent’s current and future selves, with the objective of finding intrapersonal equilibria in the game-theoretic sense. A range of finance applications are provided, including problems with non-exponential discounting, mean-variance objective, time-inconsistent linear quadratic regulator, probability distortion, and market equilibrium with time-inconsistent preferences. Time-Inconsistent Control Theory with Finance Applications offers the first comprehensive treatment of time-inconsistent control and stopping problems, in both continuous and discrete time, and in the context of finance applications. Intended for researchers and graduate students in the fields of finance and economics, it includes a review of the standard time-consistent results, bibliographical notes, as well as detailed examples showcasing time inconsistency problems. For the reader unacquainted with standard arbitrage theory, an appendix provides a toolbox of material needed for the book.
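Strotz's observation can be made concrete with a small numerical sketch. The example below is not from the book; it is a standard illustration, under assumed parameter values, of how hyperbolic discounting d(t) = 1/(1 + kt) produces a preference reversal that exponential discounting d(t) = δ^t cannot.

```python
# Illustrative sketch (assumed rewards and parameters): an agent chooses between
# a small reward of 10 at time `delay` and a large reward of 15 five periods later,
# both evaluated from today (time 0).

def exp_discount(t, delta=0.9):
    # Exponential discounting: the ratio d(t+5)/d(t) = delta**5 is constant,
    # so the ranking of the two rewards never depends on `delay`.
    return delta ** t

def hyp_discount(t, k=1.0):
    # Hyperbolic discounting: the ratio d(t+5)/d(t) rises toward 1 as t grows,
    # so far-off rewards are discounted at a relatively milder rate.
    return 1.0 / (1.0 + k * t)

def prefers_small_early(discount, delay):
    # True if today's self prefers 10 at `delay` over 15 at `delay + 5`.
    return 10 * discount(delay) > 15 * discount(delay + 5)

# Exponential: the choice is the same no matter how far away it is.
print(prefers_small_early(exp_discount, 0))    # prefers 10 now
print(prefers_small_early(exp_discount, 20))   # same ranking at delay 20

# Hyperbolic: the ranking flips as the choice recedes into the future,
# i.e., today's self and the future self disagree -- time inconsistency.
print(prefers_small_early(hyp_discount, 0))    # prefers 10 now
print(prefers_small_early(hyp_discount, 20))   # prefers 15 later
```

The flip in the hyperbolic case is exactly why the Bellman principle fails: the plan that looks optimal today will not be followed by the future self, which motivates the book's game-theoretic search for subgame-perfect equilibria between selves.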
Control theory provides a large set of theoretical and computational tools with applications in a wide range of fields, running from "pure" branches of mathematics, like geometry, to more applied areas where the objective is to find solutions to "real life" problems, as is the case in robotics, control of industrial processes, or finance. The "high tech" character of modern business has increased the need for advanced methods. These rely heavily on mathematical techniques and seem indispensable for the competitiveness of modern enterprises. It has become essential for the financial analyst to possess a high level of mathematical skill. Conversely, the complex challenges posed by the problems and models relevant to finance have, for a long time, been an important source of new research topics for mathematicians. The use of techniques from stochastic optimal control constitutes a well-established and important branch of mathematical finance. Up to now, other branches of control theory have found comparatively less application in financial problems. To some extent, deterministic and stochastic control theories developed as different branches of mathematics. However, there are many points of contact between them, and in recent years the exchange of ideas between these fields has intensified. Some concepts from stochastic calculus (e.g., rough paths) have drawn the attention of the deterministic control theory community. Also, some ideas and tools usual in deterministic control (e.g., geometric, algebraic, or functional-analytic methods) can be successfully applied to stochastic control.
A rigorous introduction to optimal control theory, with an emphasis on applications in economics. This book bridges optimal control theory and economics, discussing ordinary differential equations, optimal control, game theory, and mechanism design in one volume. Technically rigorous and largely self-contained, it provides an introduction to the use of optimal control theory for deterministic continuous-time systems in economics. The theory of ordinary differential equations (ODEs) is the backbone of the theory developed in the book, and chapter 2 offers a detailed review of basic concepts in the theory of ODEs, including the solution of systems of linear ODEs, state-space analysis, potential functions, and stability analysis. Following this, the book covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions; game theory, with an emphasis on differential games; and the application of control-theoretic concepts to the design of economic mechanisms. Appendixes provide a mathematical review and full solutions to all end-of-chapter problems. The material is presented at three levels: single-person decision making; games, in which a group of decision makers interact strategically; and mechanism design, which is concerned with a designer's creation of an environment in which players interact to maximize the designer's objective. The book focuses on applications; the problems are an integral part of the text. It is intended for use as a textbook or reference for graduate students, teachers, and researchers interested in applications of control theory beyond its classical use in economic growth. The book will also appeal to readers interested in a modeling approach to certain practical problems involving dynamic continuous-time models.
This volume contains survey and research articles by some of the leading researchers in mathematical systems theory - a vibrant research area in its own right. Many authors have taken special care that their articles are self-contained and accessible also to non-specialists.
Geared primarily to an audience consisting of mathematically advanced undergraduate or beginning graduate students, this text may additionally be used by engineering students interested in a rigorous, proof-oriented systems course that goes beyond the classical frequency-domain material and more applied courses. The minimal mathematical background required is a working knowledge of linear algebra and differential equations. The book covers what constitutes the common core of control theory and is unique in its emphasis on foundational aspects. While covering a wide range of topics written in a standard theorem/proof style, it also develops the necessary techniques from scratch. In this second edition, new chapters and sections have been added, dealing with time optimal control of linear systems, variational and numerical approaches to nonlinear control, nonlinear controllability via Lie-algebraic methods, and controllability of recurrent nets and of linear systems with bounded controls.
The goal of this textbook is to introduce students to the stochastic analysis tools that play an increasing role in the probabilistic approach to optimization problems, including stochastic control and stochastic differential games. While optimal control is taught in many graduate programs in applied mathematics and operations research, the author was intrigued by the lack of coverage of the theory of stochastic differential games. This is the first title in SIAM's Financial Mathematics book series and is based on the author's lecture notes. It will be helpful to students who are interested in stochastic differential equations (forward, backward, forward-backward); the probabilistic approach to stochastic control (dynamic programming and the stochastic maximum principle); and mean field games and control of McKean–Vlasov dynamics. The theory is illustrated by applications to models of systemic risk, macroeconomic growth, flocking/schooling, crowd behavior, and predatory trading, among others.
The calculus of variations is used to find functions that optimize quantities expressed in terms of integrals. Optimal control theory seeks to find functions that minimize cost integrals for systems described by differential equations. This book is an introduction to both the classical theory of the calculus of variations and the more modern developments of optimal control theory from the perspective of an applied mathematician. It focuses on understanding concepts and how to apply them. The range of potential applications is broad: the calculus of variations and optimal control theory have been widely used in numerous ways in biology, criminology, economics, engineering, finance, management science, and physics. Applications described in this book include cancer chemotherapy, navigational control, and renewable resource harvesting. The prerequisites for the book are modest: the standard calculus sequence, a first course on ordinary differential equations, and some facility with the use of mathematical software. It is suitable for an undergraduate or beginning graduate course, or for self study. It provides excellent preparation for more advanced books and courses on the calculus of variations and optimal control theory.
Stochastic optimization problems arise in decision-making problems under uncertainty, and find various applications in economics and finance. On the other hand, problems in finance have recently led to new developments in the theory of stochastic control. This volume provides a systematic treatment of stochastic optimization problems applied to finance by presenting the different existing methods: dynamic programming, viscosity solutions, backward stochastic differential equations, and martingale duality methods. The theory is discussed in the context of recent developments in this field, with complete and detailed proofs, and is illustrated by means of concrete examples from the world of finance: portfolio allocation, option hedging, real options, optimal investment, etc. This book is directed towards graduate students and researchers in mathematical finance, and will also benefit applied mathematicians interested in financial applications and practitioners wishing to know more about the use of stochastic optimization methods in finance.