The goal of this textbook is to introduce students to the stochastic analysis tools that play an increasing role in the probabilistic approach to optimization problems, including stochastic control and stochastic differential games. While optimal control is taught in many graduate programs in applied mathematics and operations research, the author was intrigued by the lack of coverage of the theory of stochastic differential games. This is the first title in SIAM's Financial Mathematics book series and is based on the author's lecture notes. It will be helpful to students who are interested in stochastic differential equations (forward, backward, forward-backward); the probabilistic approach to stochastic control (dynamic programming and the stochastic maximum principle); and mean field games and control of McKean-Vlasov dynamics. The theory is illustrated by applications to models of systemic risk, macroeconomic growth, flocking/schooling, crowd behavior, and predatory trading, among others.
A comprehensive, self-contained survey of the theory and applications of differential games, one of the most commonly used tools for modelling and analysing problems in economics and management that are characterised by both multiperiod and strategic decision making. Although no prior knowledge of game theory is required, a basic knowledge of linear algebra, ordinary differential equations, mathematical programming and probability theory is necessary. Part One presents the theory of differential games, starting with the basic concepts of game theory and going on to cover control-theoretic models, Markovian equilibria with simultaneous play, differential games with hierarchical play, trigger strategy equilibria, differential games with special structures, and stochastic differential games. Part Two offers applications to capital accumulation games, industrial organization and oligopoly games, marketing, and resource and environmental economics.
This volume contains fifteen articles on the topic of differential and dynamic games, focusing on both theory and applications. It covers a variety of areas and presents recent developments on topics of current interest. It should be useful to researchers in differential and dynamic games, systems and control, operations research and mathematical economics.
Stochastic Differential Equations and Applications, Volume 1 covers the development of the basic theory of stochastic differential equation systems. This volume is divided into nine chapters. Chapters 1 to 5 deal with the basic theory of stochastic differential equations, including discussions of Markov processes, Brownian motion, and the stochastic integral. Chapter 6 examines the connections between solutions of partial differential equations and stochastic differential equations, while Chapter 7 describes Girsanov's formula, which is useful in stochastic control theory. Chapters 8 and 9 evaluate the behavior of sample paths of the solution of a stochastic differential system as time increases to infinity. This book is intended primarily for undergraduate and graduate mathematics students.
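For readers unfamiliar with the result mentioned above, the following is a minimal sketch of one common statement of Girsanov's theorem; the symbols (theta, Z, P, Q) are generic notation chosen for illustration, not the book's.

% A minimal sketch of Girsanov's theorem (one common form; notation is
% illustrative, not taken from the book). Let W be a Brownian motion under P
% and theta an adapted process satisfying Novikov's condition.
\[
  Z_T = \exp\!\Bigl( \int_0^T \theta_t \, dW_t - \tfrac{1}{2} \int_0^T |\theta_t|^2 \, dt \Bigr),
  \qquad \frac{dQ}{dP} = Z_T .
\]
% Under the new measure Q, the shifted process
\[
  \widetilde{W}_t = W_t - \int_0^t \theta_s \, ds
\]
% is a Brownian motion on [0,T]. In stochastic control, this change of
% measure is typically used to absorb or modify the drift of a controlled
% diffusion.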
This book aims to further develop the theory of stochastic functional inclusions and their applications for describing the solutions of initial and boundary value problems for partial differential inclusions. The self-contained volume is designed to introduce the reader, in a systematic fashion, to new methods of stochastic optimal control theory from the very beginning. The exposition contains detailed proofs and uses new and original methods to characterize the properties of stochastic functional inclusions that have only recently been published by the author. The work is divided into seven chapters, with the first two acting as an introduction, containing selected material dealing with point- and set-valued stochastic processes, and the final two devoted to applications and optimal control problems. The book presents recent and pressing issues in stochastic processes, control, differential games, and optimization, and their application in finance, manufacturing, queueing networks, and climate control. Written by an award-winning author in the field of stochastic differential inclusions and their application to control theory, this book is intended for students and researchers in mathematics and its applications, particularly those studying optimal control theory. It is also highly relevant for students of economics and engineering, and it can serve as a reference on stochastic differential inclusions. Knowledge of select topics in analysis and probability theory is required.
This book focuses on various aspects of dynamic game theory, presenting state-of-the-art research and serving as a testament to the vitality and growth of the field of dynamic games and their applications. Its contributions, written by experts in their respective disciplines, are outgrowths of presentations originally given at the 14th International Symposium on Dynamic Games and Applications held in Banff. Advances in Dynamic Games covers a variety of topics, ranging from evolutionary games, theoretical developments in game theory, and algorithmic methods to applications, examples, and analysis in fields as varied as mathematical biology, environmental management, finance and economics, engineering, guidance and control, and social interaction. Featured throughout are valuable tools and resources for researchers, practitioners, and graduate students interested in dynamic games and their applications to mathematics, engineering, economics, and management science.
The theory of two-person, zero-sum differential games started at the beginning of the 1960s with the works of R. Isaacs in the United States and L. S. Pontryagin and his school in the former Soviet Union. Isaacs based his work on the Dynamic Programming method. He analyzed many special cases of the partial differential equation now called Hamilton-Jacobi-Isaacs (briefly, HJI), trying to solve them explicitly and synthesizing optimal feedbacks from the solution. He began a study of singular surfaces that was continued mainly by J. Breakwell and P. Bernhard and led to the explicit solution of some low-dimensional but highly nontrivial games; a recent survey of this theory can be found in the book by J. Lewin entitled Differential Games (Springer, 1994). Since the early stages of the theory, several authors worked on making the notion of value of a differential game precise and providing a rigorous derivation of the HJI equation, which does not have a classical solution in most cases; we mention here the works of W. Fleming, A. Friedman (see his book, Differential Games, Wiley, 1971), P. P. Varaiya, E. Roxin, R. J. Elliott and N. J. Kalton, N. N. Krasovskii, and A. I. Subbotin (see their book Positional Differential Games, Nauka, 1974, and Springer, 1988), and L. D. Berkovitz. A major breakthrough was the introduction in the 1980s of two new notions of generalized solution for Hamilton-Jacobi equations, namely, viscosity solutions, by M. G. Crandall and P.-L. Lions, and minimax solutions, by A. I. Subbotin.
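For concreteness, one common form of the HJI equation referred to above is sketched below; the notation (f, ell, g, A, B) and the min/max ordering are illustrative assumptions, since sign and value conventions vary across the literature.

% A sketch of the Hamilton-Jacobi-Isaacs (HJI) equation for the (upper)
% value V of a two-person, zero-sum game with dynamics x' = f(t,x,a,b),
% running cost ell, and terminal cost g; notation is illustrative.
\[
  -\partial_t V(t,x)
  = \min_{b \in B} \, \max_{a \in A}
    \bigl\{ f(t,x,a,b) \cdot \nabla_x V(t,x) + \ell(t,x,a,b) \bigr\},
  \qquad V(T,x) = g(x).
\]
% As the passage notes, V is typically not differentiable everywhere, so
% this equation must be interpreted in a generalized (viscosity or minimax)
% sense rather than classically.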
Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.
This two-volume book offers a comprehensive treatment of the probabilistic approach to mean field game models and their applications. The book is self-contained and includes original material and applications with explicit examples throughout, including numerical solutions. Volume I of the book is entirely devoted to the theory of mean field games without a common noise. The first half of the volume provides a self-contained introduction to mean field games, starting from concrete illustrations of games with a finite number of players, and ending with ready-for-use solvability results. Readers are provided with the tools necessary for the solution of forward-backward stochastic differential equations of the McKean-Vlasov type at the core of the probabilistic approach. The second half of this volume focuses on the main principles of analysis on the Wasserstein space. It includes Lions' approach to the Wasserstein differential calculus, and the applications of its results to the analysis of stochastic mean field control problems. Together, both Volume I and Volume II will greatly benefit mathematical graduate students and researchers interested in mean field games. The authors provide a detailed road map through the book, allowing different access points for different readers and gradually building up the level of technical detail. The accessible approach will allow interested researchers in the applied sciences to obtain a clear overview of the state of the art in mean field games.
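To indicate the kind of object this blurb refers to, a forward-backward system of McKean-Vlasov type couples the coefficients to the marginal law of the forward state; the generic coefficients (b, sigma, f, g) and control alpha below are illustrative placeholders, not the authors' notation.

% A sketch of a forward-backward SDE of McKean-Vlasov type: the coefficients
% depend on L(X_t), the law of X_t. All symbols are generic placeholders,
% not notation taken from the book.
\[
\begin{aligned}
  dX_t &= b\bigl(t, X_t, \mathcal{L}(X_t), \alpha_t\bigr)\,dt
          + \sigma\bigl(t, X_t, \mathcal{L}(X_t)\bigr)\,dW_t, \\
  dY_t &= -f\bigl(t, X_t, \mathcal{L}(X_t), Y_t, Z_t\bigr)\,dt + Z_t\,dW_t,
  \qquad Y_T = g\bigl(X_T, \mathcal{L}(X_T)\bigr),
\end{aligned}
\]
% Here \mathcal{L}(X_t) denotes the distribution of X_t; in the probabilistic
% approach to mean field games, the backward component Y_t carries the
% adjoint or value information associated with the forward state X_t.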