Discrete–Time Stochastic Control and Dynamic Potential Games

Author: David González-Sánchez

Publisher: Springer Science & Business Media

Published: 2013-09-20

Total Pages: 81

ISBN-10: 331901059X

There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and it is here that the Euler equation approach comes in, because it is particularly well-suited to solving inverse problems. Despite the importance of dynamic potential games, there has been no systematic study of them. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.


Potential Game Theory

Author: Quang Duy Lã

Publisher: Springer

Published: 2016-05-26

Total Pages: 172

ISBN-10: 3319308696

This book offers a thorough examination of potential game theory and its applications in radio resource management for wireless communications systems and networking. The book addresses two major research goals: how to identify a given game as a potential game, and how to design utility functions and potential functions with certain special properties so that the resulting game is a potential game. After proposing a unifying mathematical framework for the identification of potential games, the text surveys existing applications of this technique to wireless communications and networking problems found in OFDMA 3G/4G/WiFi networks, as well as next-generation systems such as cognitive radios and dynamic spectrum access networks. Professionals interested in the theoretical aspects of this specialized field will find Potential Game Theory a valuable resource, as will advanced-level engineering students. It paves the way for extensive and rigorous research on a topic whose potential for practical applications is vast but not yet fully exploited.


Mathematical Optimization Theory and Operations Research

Author: Michael Khachay

Publisher: Springer

Published: 2019-06-12

Total Pages: 742

ISBN-10: 3030226298

This book constitutes the proceedings of the 18th International Conference on Mathematical Optimization Theory and Operations Research, MOTOR 2019, held in Ekaterinburg, Russia, in July 2019. The 48 full papers presented in this volume were carefully reviewed and selected from 170 submissions. MOTOR 2019 is the successor of the well-known series of International and All-Russian conferences that had long been organized in the Urals, Siberia, and the Russian Far East. The selected papers are organized in the following topical sections: mathematical programming; bi-level optimization; integer programming; combinatorial optimization; optimal control and approximation; data mining and computational geometry; games and mathematical economics.


Finite Approximations in Discrete-Time Stochastic Control

Author: Naci Saldi

Publisher: Birkhäuser

Published: 2018-05-11

Total Pages: 196

ISBN-10: 3319790331

In a unified form, this monograph presents fundamental results on the approximation of centralized and decentralized stochastic control problems with uncountable state, measurement, and action spaces. It demonstrates how quantization provides a system-independent and constructive method for reducing a system with Borel spaces to one with finite state, measurement, and action spaces. In addition to this constructive view, the book considers both the information transmission approach for the discretization of actions and the computational approach for the discretization of states and actions. Part I of the text discusses Markov decision processes and their finite-state or finite-action approximations, while Part II builds from there to finite approximations in decentralized stochastic control problems. This volume is well suited to researchers and graduate students interested in stochastic control. With the tools presented, readers will be able to establish the convergence of approximation models to original models, and the methods are general enough that researchers can derive corresponding approximation results, typically without additional assumptions.


Optimal Control and Dynamic Games

Author: Christophe Deissenberg

Publisher: Springer Science & Business Media

Published: 2005-11-03

Total Pages: 351

ISBN-10: 0387258051

Optimal Control and Dynamic Games has been edited to honor the outstanding contributions of Professor Suresh Sethi to the field of applied optimal control. Professor Sethi is internationally one of the foremost experts in this field; he is, among other works, co-author of the popular textbook "Optimal Control Theory: Applications to Management Science and Economics" by Sethi and Thompson. The book consists of a collection of essays by some of the best-known scientists in the field, covering diverse applications of optimal control and dynamic games to problems in finance, management science, economics, and operations research. In doing so, it provides both a state-of-the-art overview of recent developments in the field and a reference work covering the wide variety of contemporary questions that can be addressed with optimal control tools, and it demonstrates the fruitfulness of the methodology.


Neural Approximations for Optimal Control and Decision

Author: Riccardo Zoppoli

Publisher: Springer Nature

Published: 2019-12-17

Total Pages: 532

ISBN-10: 3030296938

Neural Approximations for Optimal Control and Decision provides a comprehensive methodology for the approximate solution of functional optimization problems using neural networks and other nonlinear approximators, in settings where traditional optimal control tools are ruled out by complicating factors such as non-Gaussian noise, strong nonlinearities, and large state and control vectors. Features of the text include:

• a general functional optimization framework;

• thorough illustration of recent theoretical insights into the approximate solution of complex functional optimization problems;

• comparison of classical and neural-network-based methods of approximate solution;

• bounds on the errors of approximate solutions;

• solution algorithms for optimal control and decision in deterministic or stochastic environments, with perfect or imperfect state measurements, over a finite or infinite time horizon, and with one decision maker or several;

• applications of current interest: routing in communications networks, traffic control, water resource management, etc.; and

• numerous, numerically detailed examples.

The authors' diverse backgrounds in systems and control theory, approximation theory, machine learning, and operations research lend the book a range of expertise and subject matter appealing to academics and graduate students in any of those disciplines, together with computer science and other areas of engineering.


Advances in Dynamic Games

Author: Andrzej S. Nowak

Publisher: Springer Science & Business Media

Published: 2007-12-24

Total Pages: 674

ISBN-10: 0817644296

This book focuses on various aspects of dynamic game theory, presenting state-of-the-art research and serving as a guide to the vitality and growth of the field. A valuable reference for researchers and practitioners in dynamic game theory, it covers a broad range of topics and applications, including repeated and stochastic games, differential games, optimal stopping games, and numerical methods and algorithms for solving dynamic games. The diverse topics included will also benefit researchers and graduate students in applied mathematics, economics, engineering, systems and control, and environmental science.


Modeling Uncertainty

Author: Moshe Dror

Publisher: Springer

Published: 2019-11-05

Total Pages: 782

ISBN-10: 0306481022

Modeling Uncertainty: An Examination of Stochastic Theory, Methods, and Applications is a volume undertaken by the friends and colleagues of Sid Yakowitz in his honor. Fifty internationally known scholars have collectively contributed 30 papers on modeling uncertainty to this volume. Each of these papers was carefully reviewed, and in the majority of cases the original submission was revised before being accepted for publication in the book. The papers cover a great variety of topics in probability, statistics, economics, stochastic optimization, control theory, regression analysis, simulation, stochastic programming, Markov decision processes, applications in the HIV context, and others. Some papers have a theoretical emphasis and others focus on applications. A number of papers survey the work in a particular area, and in a few papers the authors present their personal view of a topic. The book contains a considerable number of expository articles that are accessible to a nonexpert: a graduate student in mathematics, statistics, engineering, or economics, or anyone with some mathematical background who is interested in a preliminary exposition of a particular topic. Many of the papers present the state of the art of a specific area or represent original contributions that advance the present state of knowledge. In sum, it is a book of considerable interest to a broad range of academic researchers and students of stochastic systems.