Further Topics on Discrete-Time Markov Control Processes


Author: Onesimo Hernandez-Lerma

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 286

ISBN-10: 1461205611


Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the author's earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.


Adaptive Markov Control Processes


Author: Onesimo Hernandez-Lerma

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 160

ISBN-10: 1441987142


This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMP's), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMP's have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments in the theory of adaptive CMP's, i.e., CMP's that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions of the stochastic control problems we are interested in; a brief description of some applications is also provided.
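The estimate-then-adapt loop described in the blurb can be sketched in a few lines. The following is a hypothetical toy example, not the book's model: a certainty-equivalence controller for a two-state, two-action MDP in which the single unknown parameter is p = P(stay in state 0 | state 0, action 0). At each decision time the controller re-estimates p from observed transitions and then acts greedily in the estimated model; all numbers (TRUE_P, rewards, discount) are made up for illustration.

```python
import random

# Hypothetical example of certainty-equivalence adaptive control.
random.seed(0)
TRUE_P = 0.8            # unknown to the controller
REWARD = [[1.0, 0.0],   # REWARD[state][action]
          [0.0, 0.5]]
GAMMA = 0.9

def transition(s, a, p):
    """True kernel: (s, a) = (0, 0) stays in state 0 w.p. p; all else uniform."""
    if (s, a) == (0, 0):
        return 0 if random.random() < p else 1
    return random.randint(0, 1)

def q_values(p_hat):
    """Value iteration in the *estimated* model; returns the Q-function."""
    V = [0.0, 0.0]
    for _ in range(100):
        Q = [[REWARD[s][a] + GAMMA * sum(pr * V[t] for t, pr in enumerate(
                  [p_hat, 1 - p_hat] if (s, a) == (0, 0) else [0.5, 0.5]))
              for a in range(2)] for s in range(2)]
        V = [max(q) for q in Q]
    return Q

stays, tries = 1, 2     # Laplace-smoothed counts for (s, a) = (0, 0) -> s' = 0
s = 0
for _ in range(2000):
    p_hat = stays / tries                 # estimation step
    Q = q_values(p_hat)                   # certainty-equivalence planning step
    a = 0 if Q[s][0] >= Q[s][1] else 1    # adapted (greedy) action
    s_next = transition(s, a, TRUE_P)
    if (s, a) == (0, 0):
        tries += 1
        stays += s_next == 0
    s = s_next

print(round(p_hat, 2))  # the estimate should approach the true value 0.8
```

As the controller keeps choosing the informative action in state 0, the estimate is updated on almost every visit, so p_hat converges to the true parameter and the adapted policy becomes optimal for the true model.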


Selected Topics on Continuous-time Controlled Markov Chains and Markov Games


Author: Tomas Prieto-Rumeau

Publisher: World Scientific

Published: 2012

Total Pages: 292

ISBN-10: 1848168497


This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective function. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward), and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it could also be of interest to undergraduate and beginning graduate students, since the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.
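For the discounted criterion mentioned above, a standard computational device (not specific to this book) is uniformization: a continuous-time controlled chain is reduced to an equivalent discrete-time problem with discount factor Λ/(Λ+α). The sketch below uses a made-up two-state repair model; all rates, costs, and the discount rate ALPHA are illustrative assumptions.

```python
# Hypothetical sketch: discounted-cost continuous-time controlled Markov chain
# solved by uniformization.  State 0 = "failed" (controller picks a repair
# rate), state 1 = "working" (uncontrolled, fails at rate FAIL_RATE).

ALPHA = 0.1                       # continuous-time discount rate
ACTIONS = [1.0, 3.0]              # available repair rates in state 0 (assumed)
COST = {0: {1.0: 2.0, 3.0: 5.0},  # cost rates: faster repair is pricier
        1: {None: 0.0}}
FAIL_RATE = 0.5
LAM = 4.0                         # uniformization constant >= every total rate
BETA = LAM / (LAM + ALPHA)        # equivalent discrete-time discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(300):
    # State 0: choose rate a; uniformized kernel gives P(1 | 0, a) = a / LAM.
    V0 = min(COST[0][a] / (LAM + ALPHA)
             + BETA * ((a / LAM) * V[1] + (1 - a / LAM) * V[0])
             for a in ACTIONS)
    # State 1: no control; P(0 | 1) = FAIL_RATE / LAM.
    V1 = (COST[1][None] / (LAM + ALPHA)
          + BETA * ((FAIL_RATE / LAM) * V[0] + (1 - FAIL_RATE / LAM) * V[1]))
    V = {0: V0, 1: V1}

print(V[0], V[1])
```

The failed state carries all the running cost, so its value dominates; the working state is still costly because the chain eventually fails and returns to state 0.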


Finite Approximations in Discrete-Time Stochastic Control


Author: Naci Saldi

Publisher: Birkhäuser

Published: 2018-05-11

Total Pages: 196

ISBN-10: 3319790331


In a unified form, this monograph presents fundamental results on the approximation of centralized and decentralized stochastic control problems, with uncountable state, measurement, and action spaces. It demonstrates how quantization provides a system-independent and constructive method for the reduction of a system with Borel spaces to one with finite state, measurement, and action spaces. In addition to this constructive view, the book considers both the information transmission approach for discretization of actions, and the computational approach for discretization of states and actions. Part I of the text discusses Markov decision processes and their finite-state or finite-action approximations, while Part II builds from there to finite approximations in decentralized stochastic control problems. This volume is well suited to researchers and graduate students interested in stochastic control. With the tools presented, readers will be able to establish the convergence of approximation models to the original models, and the methods are general enough that researchers can derive corresponding approximation results, typically with no additional assumptions.
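The quantization idea can be illustrated with a deliberately simple toy problem (an assumption-laden sketch, not the book's construction): a control problem with continuous state space [0, 1] is mapped through a uniform nearest-neighbor quantizer to a finite MDP, which is then solved exactly by value iteration. The dynamics, cost, and grid size are all made up for the example, and the dynamics are deterministic for brevity.

```python
# Hypothetical example: reducing a [0, 1]-state control problem to a finite
# MDP by uniform quantization, then solving the finite model.

N = 21                          # number of quantization levels
GRID = [i / (N - 1) for i in range(N)]
ACTIONS = [-0.1, 0.0, 0.1]      # a finite action set
GAMMA = 0.95
TARGET = 0.5                    # cost penalizes distance from the target state

def quantize(x):
    """Nearest-neighbor quantizer: map x in [0, 1] to a grid index."""
    x = min(max(x, 0.0), 1.0)
    return round(x * (N - 1))

def step(x, a):
    """Toy deterministic dynamics on [0, 1] (stands in for a Borel kernel)."""
    return min(max(x + a, 0.0), 1.0)

def cost(x):
    return (x - TARGET) ** 2

# Value iteration on the quantized (finite) model.
V = [0.0] * N
for _ in range(500):
    V = [min(cost(GRID[i]) + GAMMA * V[quantize(step(GRID[i], a))]
             for a in ACTIONS)
         for i in range(N)]

# Greedy policy of the finite model, one action per grid point.
policy = [min(ACTIONS,
              key=lambda a: cost(GRID[i]) + GAMMA * V[quantize(step(GRID[i], a))])
          for i in range(N)]
print(policy[0], policy[N - 1])   # drives toward the target from either end
```

Refining the grid (larger N) shrinks the quantization error; convergence of the quantized value functions and policies to those of the original Borel-space model is exactly the kind of result the monograph establishes in general.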


Continuous Average Control of Piecewise Deterministic Markov Processes


Author: Oswaldo Luiz do Valle Costa

Publisher: Springer Science & Business Media

Published: 2013-04-12

Total Pages: 124

ISBN-10: 146146983X


The intent of this book is to present recent results in the control theory for the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs). The book focuses mainly on the long run average cost criteria and extends to the PDMPs some well-known techniques related to discrete-time and continuous-time Markov decision processes, including the so-called "average inequality approach", "vanishing discount technique" and "policy iteration algorithm". We believe that what is unique about our approach is that, by using the special features of the PDMPs, we trace a parallel with the general theory for discrete-time Markov decision processes rather than the continuous-time case. The two main reasons for doing that are to use the powerful tools developed in the discrete-time framework and to avoid working with the infinitesimal generator associated with a PDMP, whose domain of definition is, in most cases, difficult to characterize. Although the book is mainly intended to be a theoretically oriented text, it also contains some motivational examples. The book is targeted primarily at advanced students and practitioners of control theory. The book will be a valuable source for experts in the field of Markov decision processes. Moreover, the book should be suitable for certain advanced courses or seminars. As background, one needs an acquaintance with the theory of Markov decision processes and some knowledge of stochastic processes and modern analysis.
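The "policy iteration algorithm" named in the blurb can be shown in its simplest discounted, finite discrete-time form (a minimal sketch under that assumption, not the PDMP or average-cost setting of the book). All transition probabilities and costs below are made-up numbers.

```python
# Policy iteration on a tiny two-state, two-action MDP (illustrative data).
GAMMA = 0.9
C = [[1.0, 2.0], [0.0, 0.5]]        # C[state][action]: running costs
P = [[[0.9, 0.1], [0.2, 0.8]],      # P[state][action]: next-state distribution
     [[0.5, 0.5], [0.1, 0.9]]]

def evaluate(policy):
    """Policy evaluation: iterate V = c_pi + GAMMA * P_pi V to convergence."""
    V = [0.0, 0.0]
    for _ in range(500):
        V = [C[s][policy[s]]
             + GAMMA * sum(P[s][policy[s]][t] * V[t] for t in range(2))
             for s in range(2)]
    return V

policy = [0, 0]
while True:
    V = evaluate(policy)
    # Policy improvement: act greedily with respect to the current V.
    improved = [min(range(2),
                    key=lambda a: C[s][a]
                    + GAMMA * sum(P[s][a][t] * V[t] for t in range(2)))
                for s in range(2)]
    if improved == policy:        # no change => the policy is optimal
        break
    policy = improved

print(policy, V)
```

For finite models the loop terminates after finitely many improvements; the book's contribution is extending this kind of scheme to the continuous-time, average-cost PDMP setting via its discrete-time parallel.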


Zero-Sum Discrete-Time Markov Games with Unknown Disturbance Distribution


Author: J. Adolfo Minjárez-Sosa

Publisher: Springer Nature

Published: 2020-01-27

Total Pages: 129

ISBN-10: 3030357201


This SpringerBrief deals with a class of discrete-time zero-sum Markov games with Borel state and action spaces, and possibly unbounded payoffs, under discounted and average criteria, whose state process evolves according to a stochastic difference equation. The corresponding disturbance process is an observable sequence of independent and identically distributed random variables with unknown distribution for both players. Unlike the standard case, the game is played over an infinite horizon and evolves as follows. At each stage, once the players have observed the state of the game, and before choosing the actions, players 1 and 2 implement a statistical estimation process to obtain estimates of the unknown distribution. Then, independently, the players adapt their decisions to such estimators to select their actions and construct their strategies. This book presents a systematic analysis of recent developments in this kind of games. Specifically, the theoretical foundations of the procedures combining statistical estimation and control techniques for the construction of strategies of the players are introduced, with illustrative examples. In this sense, the book is an essential reference for theoretical and applied researchers in the fields of stochastic control and game theory, and their applications.
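One round of the estimate-then-play scheme described above can be sketched as follows. This is a hypothetical illustration, not the book's model: the payoff matrix of a one-shot zero-sum game depends on the unknown mean THETA of an i.i.d. disturbance sequence, both players estimate that mean from the observed disturbances, and they then play the game the estimate induces. The matrix is chosen so that a pure saddle point exists, avoiding the need to compute mixed strategies.

```python
import random

# Hypothetical estimate-then-play round for a zero-sum matrix game.
random.seed(1)
THETA = 0.3    # true disturbance mean, unknown to both players

def payoff_matrix(theta):
    """Payoff to player 1 (player 2 pays); depends on the disturbance mean."""
    return [[theta, 1.0],
            [-1.0, 0.5]]

def pure_saddle(A):
    """Return (row, col, value) at a pure saddle point, or None if none exists."""
    i_star = max(range(len(A)), key=lambda i: min(A[i]))
    j_star = min(range(len(A[0])), key=lambda j: max(row[j] for row in A))
    if min(A[i_star]) == max(row[j_star] for row in A):
        return i_star, j_star, A[i_star][j_star]
    return None

# Estimation step (shared observations), then the adapted stage game.
xi = [random.gauss(THETA, 0.5) for _ in range(5000)]
theta_hat = sum(xi) / len(xi)
result = pure_saddle(payoff_matrix(theta_hat))
print(result)   # the estimated game value tracks the true mean THETA
```

As the number of observed disturbances grows, the empirical estimate converges to the true mean, so the value and equilibrium strategies of the estimated game approach those of the true game, which is the flavor of result the book develops rigorously.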


Markov Processes and Controlled Markov Chains


Author: Zhenting Hou

Publisher: Springer Science & Business Media

Published: 2013-12-01

Total Pages: 501

ISBN-10: 146130265X


The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern-day Markov processes and controlled Markov chains. They will also provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by European, US, Central and South American, and Asian scholars.


Continuous-Time Markov Decision Processes


Author: Alexey Piunovskiy

Publisher: Springer Nature

Published: 2020-11-09

Total Pages: 605

ISBN-10: 3030549879


This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and the realizability of strategies. Three major methods of investigations are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.


Continuous-Time Markov Decision Processes


Author: Xianping Guo

Publisher: Springer Science & Business Media

Published: 2009-09-18

Total Pages: 240

ISBN-10: 3642025471


Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.


Control and System Theory of Discrete-Time Stochastic Systems


Author: Jan H. van Schuppen

Publisher: Springer Nature

Published: 2021-08-02

Total Pages: 940

ISBN-10: 3030669521


This book helps students, researchers, and practicing engineers to understand the theoretical framework of control and system theory for discrete-time stochastic systems so that they can then apply its principles to their own stochastic control systems and to the solution of control, filtering, and realization problems for such systems. Applications of the theory in the book include the control of ships, shock absorbers, traffic and communications networks, and power systems with fluctuating power flows. The focus of the book is a stochastic control system defined for a spectrum of probability distributions including Bernoulli, finite, Poisson, beta, gamma, and Gaussian distributions. The concepts of observability and controllability of a stochastic control system are defined and characterized. Each output process considered is, under appropriate conditions, represented by a stochastic system called a stochastic realization. The existence of a control law is related to stochastic controllability, while the existence of a filter system is related to stochastic observability. Stochastic control with partial observations is based on the existence of a stochastic realization of the filtration of the observed process.