Numerical Methods for Stochastic Control Problems in Continuous Time

Author: Harold Kushner

Publisher: Springer Science & Business Media

Published: 2013-11-27

Total Pages: 480

ISBN-10: 146130007X

Stochastic control is a very active area of research. This monograph, written by two leading authorities in the field, has been updated to reflect the latest developments. It covers effective numerical methods for stochastic control problems in continuous time on two levels: that of practice and that of mathematical development. It is broadly accessible to graduate students and researchers.
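
The description above does not spell out how the book's numerical schemes work; the approach Kushner is best known for is the Markov chain approximation method, in which the controlled diffusion is replaced by a locally consistent controlled Markov chain on a grid and the discrete problem is solved by dynamic programming. The following is a minimal one-dimensional sketch of that idea, not code from the book; the dynamics dX = b(X, u) dt + sigma dW, the cost rate, the discount rate, and the grid are all illustrative assumptions.

```python
import numpy as np

# Minimal Markov chain approximation sketch for a 1-D controlled diffusion
#   dX = b(X, u) dt + sigma dW   on [0, 1], with discounted running cost c(X, u).
# Every model ingredient below is an illustrative assumption, not taken from the book.
sigma, beta, h = 0.3, 0.5, 0.05           # noise level, discount rate, grid spacing
xs = np.arange(0.0, 1.0 + h, h)           # spatial grid
us = np.linspace(-1.0, 1.0, 11)           # finite set of admissible controls

b = lambda x, u: u                        # drift chosen by the controller
c = lambda x, u: x**2 + 0.1 * u**2        # running cost

V = np.zeros_like(xs)
for _ in range(5000):                     # value iteration on the approximating chain
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        best = np.inf
        for u in us:
            Q = sigma**2 + h * abs(b(x, u))              # normalizing factor
            dt = h**2 / Q                                # interpolation interval
            p_up = (0.5 * sigma**2 + h * max(b(x, u), 0.0)) / Q
            p_dn = (0.5 * sigma**2 + h * max(-b(x, u), 0.0)) / Q
            up = V[min(i + 1, len(xs) - 1)]              # reflecting boundary
            dn = V[max(i - 1, 0)]
            best = min(best, c(x, u) * dt + np.exp(-beta * dt) * (p_up * up + p_dn * dn))
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("approximate value function:", np.round(V, 3))
```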


Controlled Markov Processes and Viscosity Solutions

Author: Wendell H. Fleming

Publisher: Springer Science & Business Media

Published: 2006-02-04

Total Pages: 436

ISBN-10: 0387310711

This book is an introduction to optimal stochastic control for continuous-time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, as well as two-controller, zero-sum differential games.


Markov Processes and Controlled Markov Chains

Author: Zhenting Hou

Publisher: Springer Science & Business Media

Published: 2002-09-30

Total Pages: 536

ISBN-13: 9781402008030

The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have long been aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern-day Markov processes and controlled Markov chains. They will also provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by European, US, Central and South American, and Asian scholars.


Controlled Markov Processes

Author: E. B. Dynkin

Publisher: Springer

Published: 2012-04-13

Total Pages: 0

ISBN-13: 9781461567486

This book is devoted to the systematic exposition of the contemporary theory of controlled Markov processes with a discrete time parameter, or, in another terminology, multistage Markovian decision processes. We discuss the applications of this theory to various concrete problems. Particular attention is paid to mathematical models of economic planning, taking account of stochastic factors. The authors strove to construct the exposition in such a way that a reader interested in the applications can get through the book with a minimal mathematical apparatus. On the other hand, a mathematician will find, in the appropriate chapters, a rigorous theory of general control models, based on advanced measure theory, analytic set theory, measurable selection theorems, and so forth. We have abstained from the manner of presentation of many mathematical monographs, in which one presents immediately the most general situation and only then discusses simpler special cases and examples. Wishing to separate out difficulties, we introduce new concepts and ideas in the simplest setting, where they already begin to work. Thus, before considering control problems on an infinite time interval, we investigate in detail the case of the finite interval. Here we first study models with finite state and action spaces, a case not requiring a departure from the realm of elementary mathematics, and at the same time illustrating the most important principles of the theory.
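
The "elementary" finite-state, finite-action, finite-horizon setting described above is solved by backward induction on the dynamic programming equation. The sketch below illustrates that principle on a made-up model; the transition probabilities and rewards are arbitrary and are not taken from the book.

```python
import numpy as np

# Backward induction for a finite-horizon Markov decision process with finite
# state and action spaces. P[a, s, s'] and r[s, a] are illustrative placeholders.
n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)

P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)          # each row of P[a] sums to one
r = rng.random((n_states, n_actions))      # one-step reward r(s, a)

V = np.zeros(n_states)                     # terminal value
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    # Q[s, a] = r(s, a) + sum_{s'} P(s' | s, a) * V_{t+1}(s')
    Q = r + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    policy[t] = Q.argmax(axis=1)           # greedy action at stage t
    V = Q.max(axis=1)                      # optimal value-to-go from stage t

print("optimal value at t = 0:", V)
```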


Markov Processes and Controlled Markov Chains

Author: Zhenting Hou

Publisher: Springer Science & Business Media

Published: 2013-12-01

Total Pages: 501

ISBN-10: 146130265X

The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have long been aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern-day Markov processes and controlled Markov chains. They will also provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by European, US, Central and South American, and Asian scholars.


Markov Processes for Stochastic Modeling

Author: Oliver Ibe

Publisher: Newnes

Published: 2013-05-22

Total Pages: 515

ISBN-10: 0124078397

Markov processes are processes that have limited memory. In particular, their dependence on the past is only through the previous state. They are used to model the behavior of many systems, including communications systems, transportation networks, image segmentation and analysis, biological systems and DNA sequence analysis, random atomic motion and diffusion in physics, social mobility, population studies, epidemiology, animal and insect migration, queueing systems, resource management, dams, financial engineering, actuarial science, and decision systems. Covering a wide range of areas of application of Markov processes, this second edition is revised to highlight the most important aspects as well as the most recent trends and applications of Markov processes. The author spent over 16 years in industry before returning to academia, and he has applied many of the principles covered in this book in multiple research projects. This is therefore an applications-oriented book that also includes enough theory to provide a solid grounding in the subject for the reader.

- Presents both the theory and applications of the different aspects of Markov processes
- Includes numerous solved examples as well as detailed diagrams that make it easier to understand the principle being presented
- Discusses different applications of hidden Markov models, such as DNA sequence analysis and speech analysis
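
As a minimal illustration of the "limited memory" (Markov) property described above, the following sketch simulates a three-state discrete-time Markov chain; the transition matrix is an arbitrary example, not one drawn from the book.

```python
import numpy as np

# Simulate a three-state discrete-time Markov chain: the next state is drawn
# using only the current state, which is exactly the "limited memory" property.
P = np.array([[0.7, 0.2, 0.1],    # illustrative transition matrix (rows sum to 1)
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

rng = np.random.default_rng(42)
state, path = 0, [0]
for _ in range(10_000):
    state = rng.choice(3, p=P[state])   # depends on `state` alone, not on `path`
    path.append(state)

# Empirical occupation frequencies approach the stationary distribution pi = pi P.
print(np.bincount(path, minlength=3) / len(path))
```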


Markov Decision Processes with Applications to Finance

Author: Nicole Bäuerle

Publisher: Springer Science & Business Media

Published: 2011-06-06

Total Pages: 393

ISBN-10: 3642183247

The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time show its application by means of numerous examples, mostly taken from the fields of finance and operations research. By using a structural approach, the authors avoid many measure-theoretic technicalities. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes, and stopping problems. The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students, and researchers in both applied probability and finance, and provides exercises (without solutions).
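
One of the stopping problems mentioned above can serve as a small worked example with the book's finance flavor: valuing an American-style put by backward induction on a binomial tree, where the decision at each state is "stop (exercise)" or "continue". The market parameters below are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Optimal stopping as a Markov decision process: value an American put on a
# binomial (CRR) tree by comparing "stop" (exercise) and "continue" at each node.
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200
dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
q = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up-probability
disc = np.exp(-r * dt)

# Terminal stock prices and payoffs (index j = number of down-moves).
S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
V = np.maximum(K - S, 0.0)

for t in range(n - 1, -1, -1):
    S = S0 * u ** np.arange(t, -1, -1) * d ** np.arange(0, t + 1)
    continuation = disc * (q * V[:-1] + (1 - q) * V[1:])
    V = np.maximum(K - S, continuation)   # stop (exercise) vs. continue

print("American put value:", V[0])
```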


Adaptive Markov Control Processes

Author: Onesimo Hernandez-Lerma

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 160

ISBN-10: 1441987142

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments in the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full, and references are supplied for further discussion if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions of the stochastic control problems we are interested in; a brief description of some applications is also provided.
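
The mechanism sketched above (estimate the unknown parameters at each decision time, then control as if the estimates were correct) is often called certainty-equivalence adaptive control. The toy example below illustrates it on a two-state, two-action controlled Markov chain whose transition probabilities are unknown to the controller; the model, the count-based estimator, and the purely greedy action choice are all illustrative simplifications, not the book's constructions.

```python
import numpy as np

# Certainty-equivalence adaptive control on a tiny controlled Markov chain:
# at each step, estimate the unknown transition probabilities from counts,
# solve the estimated model by value iteration, and apply the greedy action.
rng = np.random.default_rng(1)

P_true = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P_true[a, s, s'] (unknown to controller)
                   [[0.4, 0.6], [0.7, 0.3]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])          # reward r[s, a]
gamma = 0.9

counts = np.ones((2, 2, 2))                     # pseudo-counts for the estimator
state = 0
for step in range(2000):
    P_hat = counts / counts.sum(axis=2, keepdims=True)   # estimated model
    V = np.zeros(2)
    for _ in range(100):                        # value iteration under the estimate
        Q = r + gamma * np.einsum('asq,q->sa', P_hat, V)
        V = Q.max(axis=1)
    action = int(Q[state].argmax())             # certainty-equivalent greedy action
    next_state = rng.choice(2, p=P_true[action, state])
    counts[action, state, next_state] += 1      # adapt: update the estimates
    state = next_state
    # (A practical scheme would add exploration; this sketch acts purely greedily.)

print("estimated transition kernel:\n", counts / counts.sum(axis=2, keepdims=True))
```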


Continuous-Time Markov Decision Processes

Author: Xianping Guo

Publisher: Springer Science & Business Media

Published: 2009-09-18

Total Pages: 240

ISBN-10: 3642025471

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
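
For the bounded-rate case, one standard way to compute with a continuous-time MDP is uniformization: rescale every state's exit rate to a common constant and solve the resulting discrete-time MDP. The admission-control queue below is an illustrative example of this conversion (the book also covers unbounded rates, which this sketch does not handle); all rates, costs, and the capacity bound are assumptions made for the example.

```python
import numpy as np

# Uniformization of a small continuous-time MDP: admission control in an M/M/1 queue
# capped at N customers. Actions: admit or reject arrivals. Reward R per admission,
# holding cost per customer per unit time, exponential discounting at rate alpha.
lam, mu, alpha = 1.0, 1.2, 0.1
R, h_cost, N = 5.0, 1.0, 20

Lam = lam + mu                      # uniformization constant (bounds every exit rate)
beta = Lam / (alpha + Lam)          # equivalent discrete-time discount factor

V = np.zeros(N + 1)
for _ in range(2000):               # value iteration on the uniformized chain
    V_new = np.empty_like(V)
    for s in range(N + 1):
        vals = []
        for admit in (0, 1):
            # Reward rate: admissions earn R at rate lam when admitting; holding cost always accrues.
            r_rate = (R * lam if admit and s < N else 0.0) - h_cost * s
            up = V[min(s + 1, N)] if admit else V[s]       # arrival (if admitted)
            down = V[max(s - 1, 0)] if s > 0 else V[s]     # service completion (fictitious at s = 0)
            cont = (lam * up + mu * down) / Lam            # one uniformized transition
            vals.append(r_rate / (alpha + Lam) + beta * cont)
        V_new[s] = max(vals)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("value at empty queue:", V[0])
```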


Continuous-Time Markov Chains and Applications

Author: G. George Yin

Publisher: Springer Science & Business Media

Published: 2012-11-14

Total Pages: 442

ISBN-10: 1461443466

This book gives a systematic treatment of singularly perturbed systems that naturally arise in control and optimization, queueing networks, manufacturing systems, and financial engineering. It presents results on asymptotic expansions of solutions of Kolmogorov forward and backward equations, properties of functional occupation measures, exponential upper bounds, and functional limit results for Markov chains with weak and strong interactions. To bridge the gap between theory and applications, a large portion of the book is devoted to applications in controlled dynamic systems, production planning, and numerical methods for controlled Markovian systems with large-scale and complex structures arising in real-world problems. This second edition has been updated throughout and includes two new chapters on asymptotic expansions of solutions for backward equations and hybrid LQG problems. The chapters on analytic and probabilistic properties of two-time-scale Markov chains have been almost completely rewritten, and the notation has been streamlined and simplified. This book is written for applied mathematicians, engineers, operations researchers, and applied scientists. Selected material from the book can also be used for a one-semester advanced graduate-level course in applied probability and stochastic processes.