Stochastic control is a very active area of research. This monograph, written by two leading authorities in the field, has been updated to reflect the latest developments. It covers effective numerical methods for stochastic control problems in continuous time on two levels: that of practice and that of mathematical development. It is broadly accessible to graduate students and researchers.
This book contains an introduction to three topics in stochastic control: discrete-time stochastic control, i.e., stochastic dynamic programming (Chapter 1), piecewise deterministic control problems (Chapter 3), and control of Itô diffusions (Chapter 4). The chapters include treatments of optimal stopping problems. An appendix recalls material from elementary probability theory and gives heuristic explanations of certain more advanced tools in probability theory. The book will hopefully be of interest to students in several fields: economics, engineering, operations research, finance, business, and mathematics. In economics and business administration, graduate students should readily be able to read it, and the mathematical level can be suitable for advanced undergraduates in mathematics and science. The prerequisites for reading the book are only a calculus course and a course in elementary probability. (Certain technical comments may demand a slightly better background.) As this book will perhaps (and hopefully) be read by readers with widely differing backgrounds, some general advice may be useful: don't be put off if paragraphs, comments, or remarks contain material of a seemingly more technical nature that you don't understand. Just skip such material and continue reading; it will surely not be needed in order to understand the main ideas and results. The presentation avoids the use of measure theory.
This book gives a systematic treatment of singularly perturbed systems that naturally arise in control and optimization, queueing networks, manufacturing systems, and financial engineering. It presents results on asymptotic expansions of solutions of Kolmogorov forward and backward equations, properties of functional occupation measures, exponential upper bounds, and functional limit results for Markov chains with weak and strong interactions. To bridge the gap between theory and applications, a large portion of the book is devoted to applications in controlled dynamic systems, production planning, and numerical methods for controlled Markovian systems with large-scale and complex structures in real-world problems. This second edition has been updated throughout and includes two new chapters on asymptotic expansions of solutions for backward equations and hybrid LQG problems. The chapters on analytic and probabilistic properties of two-time-scale Markov chains have been almost completely rewritten, and the notation has been streamlined and simplified. This book is written for applied mathematicians, engineers, operations researchers, and applied scientists. Selected material from the book can also be used for a one-semester advanced graduate-level course in applied probability and stochastic processes.
This paper introduces time-continuous numerical schemes to simulate stochastic differential equations (SDEs) arising in mathematical finance, population dynamics, chemical kinetics, epidemiology, biophysics, and polymeric fluids. These schemes are obtained by spatially discretizing the Kolmogorov equation associated with the SDE in such a way that the resulting semi-discrete equation generates a Markov jump process that can be realized exactly using a Monte Carlo method. In this construction the jump size of the approximation can be bounded uniformly in space, which often guarantees that the schemes are numerically stable for both finite-time and long-time simulation of SDEs.
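To make the construction concrete, the following is a minimal sketch of this class of schemes, not code from the paper itself: for an illustrative scalar Ornstein-Uhlenbeck SDE (my choice of example), an upwind finite-difference discretization of the generator L f = b f' + (sigma^2/2) f'' on a grid of spacing h yields nonnegative jump rates, so the semi-discrete equation is the generator of a birth-death jump process with jump size h, which can be realized exactly with a Gillespie-type Monte Carlo method.

```
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SDE (an assumption, not the paper's example):
# Ornstein-Uhlenbeck, dX = -theta*X dt + sigma dW.
theta, sigma = 1.0, 0.5

def drift(x):
    return -theta * x

def diffusion2(x):  # sigma(x)^2
    return sigma ** 2

def simulate_jump_process(x0, T, h):
    """Exact (Gillespie-style) realization of the Markov jump process whose
    generator is an upwind finite-difference discretization, with grid
    spacing h, of the SDE generator L f = b f' + (sigma^2/2) f''.
    Every jump has size h, so the jump size is bounded uniformly in space."""
    t, x = 0.0, x0
    while True:
        b = drift(x)
        d = diffusion2(x) / (2.0 * h ** 2)
        rate_up = d + max(b, 0.0) / h    # rate of jumping x -> x + h
        rate_dn = d + max(-b, 0.0) / h   # rate of jumping x -> x - h
        total = rate_up + rate_dn
        t += rng.exponential(1.0 / total)  # exponential holding time
        if t >= T:
            return x                       # state occupied at time T
        x += h if rng.random() < rate_up / total else -h

# Sample the approximate law of X_T; for small h the empirical mean and
# variance should approach those of the exact OU process.
samples = [simulate_jump_process(x0=1.0, T=2.0, h=0.05) for _ in range(2000)]
print(np.mean(samples), np.var(samples))
```

Because the rates rate_up and rate_dn are nonnegative by construction, the semi-discrete operator is a genuine Markov generator, which is what allows the exact realization above rather than a time-stepping approximation.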
To harness the full power of computer technology, economists need to use a broad range of mathematical techniques. In this book, Kenneth Judd presents techniques from the numerical analysis and applied mathematics literatures and shows how to use them in economic analyses. The book is divided into five parts. Part I provides a general introduction. Part II presents basics from numerical analysis on R^n, including linear equations, iterative methods, optimization, nonlinear equations, approximation methods, numerical integration and differentiation, and Monte Carlo methods. Part III covers methods for dynamic problems, including finite difference methods, projection methods, and numerical dynamic programming. Part IV covers perturbation and asymptotic solution methods. Finally, Part V covers applications to dynamic equilibrium analysis, including solution methods for perfect foresight models and rational expectation models. A website contains supplementary material including programs and answers to exercises.
This book is an introduction to optimal stochastic control for continuous-time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, as well as two-controller, zero-sum differential games.
The aim of this Special Issue of Mathematics is to commemorate the outstanding Russian mathematician Vladimir Zolotarev, whose 90th birthday will be celebrated on February 27th, 2021. The present Special Issue contains a collection of new papers by participants in sessions of the International Seminar on Stability Problems for Stochastic Models founded by Zolotarev. Along with research on the theory of probability distributions, limit theorems of probability theory, stochastic processes, mathematical statistics, and queuing theory, this collection contains papers dealing with applications of stochastic models to the modeling of pension schemes, the modeling of extreme precipitation, the construction of statistical indicators of scientific publication importance, and other fields.
Applied Stochastic Models and Control for Finance and Insurance presents, at an introductory level, some essential stochastic models applied in economics, finance and insurance. Markov chains, random walks, stochastic differential equations and other stochastic processes are used throughout the book and systematically applied to economic and financial applications. In addition, a dynamic programming framework is used to deal with some basic optimization problems. The book begins by introducing problems of economics, finance and insurance which involve time, uncertainty and risk. A number of cases are treated in detail, spanning risk management, volatility, memory, the time structure of preferences, interest rates and yields, etc. The second and third chapters provide an introduction to stochastic models and their application. Stochastic differential equations and stochastic calculus are presented in an intuitive manner, and numerous applications and exercises are used to facilitate their understanding and their use in Chapter 3. A number of other processes which are increasingly used in finance and insurance are introduced in Chapter 4. In the fifth chapter, ARCH and GARCH models are presented and their application to modeling volatility is emphasized. An outline of decision-making procedures is presented in Chapter 6, which also introduces the essentials of stochastic dynamic programming and control and provides first steps for the student who seeks to apply these techniques. Finally, in Chapter 7, numerical techniques and approximations to stochastic processes are examined. This book can be used in business, economics, financial engineering and decision sciences schools for second-year Master's students, as well as in a number of courses widely given in departments of statistics, systems and decision sciences.
Optimal Control and Dynamic Games has been edited to honor the outstanding contributions of Professor Suresh Sethi to the field of applied optimal control. Professor Sethi is internationally one of the foremost experts in this field. Among other works, he is co-author of the popular textbook "Optimal Control Theory: Applications to Management Science and Economics" (Sethi and Thompson). The book consists of a collection of essays by some of the best-known scientists in the field, covering diverse applications of optimal control and dynamic games to problems in Finance, Management Science, Economics, and Operations Research. In doing so, it provides both a state-of-the-art overview of recent developments in the field and a reference work covering the wide variety of contemporary questions that can be addressed with optimal control tools, and it demonstrates the fruitfulness of the methodology.