Dynamic Management Decision and Stochastic Control Processes

Author: Toshio Odanaka

Publisher: World Scientific

Published: 1990

Total Pages: 240

ISBN-13: 9789810200923

This book treats stochastic control theory and its applications in management, presenting the main numerical techniques such applications require. Several advanced topics leading to optimal processes are discussed. The book also considers the theory of some stochastic control processes, with several applications to illustrate the ideas.


Managerial Planning

Author: Charles S. Tapiero

Publisher: Taylor & Francis

Published: 1977

Total Pages: 698

ISBN-13: 9780677054001

Management is a dynamic process reflected in three essential functions: management of time, change and people. Each of these functions entails problems whose origin can be traced to the special character of time and of activities that take place over time. The book bridges the gap between quantitative theories embedded in the systems approach and managerial decision-making over time and under risk. The conventional wisdom that management is a dynamic process is rendered operational. Contents of volume 1: On time. Planning and planning models over time. Planning decisions over time.


Managerial Planning

Author: Charles S. Tapiero

Publisher: Routledge

Published: 2018-04-17

Total Pages: 410

ISBN-13: 1351243209

Originally published in 1977. Management is a dynamic process reflected in three essential functions: management of time, change and people. The book bridges the gap between quantitative theories embedded in the systems approach and managerial decision-making over time and under risk. The conventional wisdom that management is a dynamic process is rendered operational. This title will be of interest to students of business studies and management.


Managerial Planning

Author: Charles S. Tapiero

Publisher: Routledge

Published: 2018-04-17

Total Pages: 269

ISBN-13: 1351260510

Originally published in 1977. Management is a dynamic process reflected in three essential functions: management of time, change and people. The book bridges the gap between quantitative theories embedded in the systems approach and managerial decision-making over time and under risk. The conventional wisdom that management is a dynamic process is rendered operational. This title will be of interest to students of business studies and management.


Stochastic Dynamic Programming and the Control of Queueing Systems

Author: Linn I. Sennott

Publisher: John Wiley & Sons

Published: 1998-09-30

Total Pages: 360

ISBN-13: 9780471161202

A compilation of the fundamentals of stochastic dynamic programming (also known as Markov decision processes or Markov chains), with an emphasis on applications to queueing theory. Theoretical and computational aspects are usefully combined; in all, nine numerical programs for queueing control are discussed in detail in the text. Supplementary material is available from the accompanying ftp server. (12/98)


Optimization, Control, and Applications of Stochastic Systems

Author: Daniel Hernández-Hernández

Publisher: Springer Science & Business Media

Published: 2012-08-15

Total Pages: 331

ISBN-13: 0817683372

This volume provides a general overview of discrete- and continuous-time Markov control processes and stochastic games, along with a look at the range of applications of stochastic control and some of its recent theoretical developments. These topics include various aspects of dynamic programming, approximation algorithms, and infinite-dimensional linear programming. In all, the work comprises 18 carefully selected papers written by experts in their respective fields. Optimization, Control, and Applications of Stochastic Systems will be a valuable resource for all practitioners, researchers, and professionals in applied mathematics and operations research who work in the areas of stochastic control, mathematical finance, queueing theory, and inventory systems. It may also serve as a supplemental text for graduate courses in optimal control and dynamic games.


Optimization of Stochastic Discrete Systems and Control on Complex Networks

Author: Dmitrii Lozovanu

Publisher: Springer

Published: 2014-11-27

Total Pages: 420

ISBN-13: 3319118331

This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies Markov processes with finite state spaces and reviews the existing methods and algorithms for determining the main characteristics of Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite-horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite-horizon stochastic control problems and Markov decision processes. The algorithms developed represent a valuable contribution to the important field of computational network theory.


Applied Stochastic Models and Control for Finance and Insurance

Author: Charles S. Tapiero

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 352

ISBN-13: 1461558239

Applied Stochastic Models and Control for Finance and Insurance presents at an introductory level some essential stochastic models applied in economics, finance and insurance. Markov chains, random walks, stochastic differential equations and other stochastic processes are used throughout the book and systematically applied to economic and financial applications. In addition, a dynamic programming framework is used to deal with some basic optimization problems. The book begins by introducing problems of economics, finance and insurance which involve time, uncertainty and risk. A number of cases are treated in detail, spanning risk management, volatility, memory, the time structure of preferences, interest rates and yields, etc. The second and third chapters provide an introduction to stochastic models and their application. Stochastic differential equations and stochastic calculus are presented in an intuitive manner, and numerous applications and exercises in Chapter 3 are used to facilitate their understanding and use. A number of other processes increasingly used in finance and insurance are introduced in Chapter 4. In the fifth chapter, ARCH and GARCH models are presented and their application to modeling volatility is emphasized. An outline of decision-making procedures is given in Chapter 6, which also introduces the essentials of stochastic dynamic programming and control and provides first steps for the student who seeks to apply these techniques. Finally, in Chapter 7, numerical techniques and approximations to stochastic processes are examined. This book can be used in business, economics, financial engineering and decision sciences schools for second-year Master's students, as well as in a number of courses widely given in departments of statistics, systems and decision sciences.


Stochastic Control Theory

Author: Makiko Nisio

Publisher: Springer

Published: 2014-11-27

Total Pages: 263

ISBN-13: 4431551239

This book offers a systematic introduction to optimal stochastic control theory via the dynamic programming principle, a powerful tool for analyzing control problems. First, completely observable control problems with finite horizons are considered. Using a time discretization, a nonlinear semigroup related to the dynamic programming principle (DPP) is constructed, whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and the value function is characterized both via the nonlinear semigroup and via viscosity solution theory. When not only the dynamics of a system but also the terminal time of its evolution are controlled, control-stopping problems arise. These are treated in the same framework, via the nonlinear semigroup, and the results are applicable to the American option pricing problem. Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to the DPP (the min-max principle, to be precise). Using semi-discretization arguments, nonlinear semigroups are constructed whose generators provide the lower and upper Isaacs equations. For partially observable control problems, the book turns to stochastic parabolic equations driven by colored Wiener noises, in particular the Zakai equation. The existence, uniqueness, and regularity of solutions, as well as Itô's formula, are established. A control problem for the Zakai equations has a nonlinear semigroup whose generator provides the HJB equation on a Banach space; the value function turns out to be the unique viscosity solution of this HJB equation under mild conditions. This edition treats the topic more generally than the earlier book Lectures on Stochastic Control Theory (ISI Lecture Notes 9), which dealt with time-homogeneous cases. Here, for finite time-horizon control problems, the DPP is formulated as a one-parameter nonlinear semigroup whose generator provides the HJB equation, using a time-discretization method. The semigroup corresponds to the value function and is characterized as the envelope of Markovian transition semigroups of responses to constant control processes. Besides finite time-horizon controls, the book discusses control-stopping problems in the same framework.


Discrete-Time Markov Control Processes

Author: Onesimo Hernandez-Lerma

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 223

ISBN-13: 1461207290

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, possibly unbounded costs, and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, (control of) epidemics, etc. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.