Foundations of Average-Cost Nonhomogeneous Controlled Markov Chains

Author: Xi-Ren Cao

Publisher: Springer Nature

Published: 2020-09-09

Total Pages: 128

ISBN-10: 3030566781

This Springer brief addresses the challenges encountered in the study of the optimization of time-nonhomogeneous Markov chains. It develops new insights and new methodologies for systems in which concepts such as stationarity, ergodicity, periodicity and connectivity do not apply. The brief introduces the novel concept of confluencity and applies a relative optimization approach, developing a comprehensive theory for the optimization of the long-run average of time-nonhomogeneous Markov chains. The book shows that confluencity is the most fundamental concept in this optimization and that relative optimization is better suited to the systems under consideration than standard dynamic programming. Using confluencity and relative optimization, the author classifies states as confluent or branching and shows how the under-selectivity issue of the long-run average can be addressed, multi-class optimization implemented, and Nth biases and Blackwell optimality conditions derived. These results appear in book form for the first time and may enhance the understanding of optimization and motivate new research ideas in the area.


Social Informatics

Author: Anwitaman Datta

Publisher: Springer Science & Business Media

Published: 2011-10-12

Total Pages: 357

ISBN-10: 3642247032

This book constitutes the proceedings of the Third International Conference on Social Informatics, SocInfo 2011, held in Singapore in October 2011. The 15 full papers, 8 short papers and 13 posters included in this volume were carefully reviewed and selected from 68 full paper and 13 poster submissions. The papers are organized in topical sections named: network analysis; eGovernance and knowledge management; applications of network analysis; community dynamics; case studies; trust, privacy and security; peer-production.


Continuous-Time Markov Decision Processes

Author: Alexey Piunovskiy

Publisher: Springer Nature

Published: 2020-11-09

Total Pages: 605

ISBN-10: 3030549879

This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and to the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes, and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.


Essentials of Stochastic Processes

Author: Richard Durrett

Publisher: Springer

Published: 2016-11-07

Total Pages: 282

ISBN-10: 3319456148

Building upon the previous editions, this textbook is a first course in stochastic processes for undergraduate and graduate students (MS and PhD students from math, statistics, economics, computer science, engineering, and finance departments) who have had a course in probability theory. It covers Markov chains in discrete and continuous time, Poisson processes, renewal processes, martingales, and option pricing. One can only learn a subject by seeing it in action, so the book offers a large number of examples and more than 300 carefully chosen exercises to deepen the reader's understanding. Drawing on teaching experience and student feedback, the author has added many new examples and problems with solutions that use the TI-83 calculator to eliminate the tedious details of solving linear equations by hand, and the collection of exercises is much improved, with many more biological examples. Material from previous editions that is too advanced for a first course in stochastic processes has been eliminated, while the treatment of other topics useful for applications has been expanded. In addition, the ordering of topics has been improved; for example, the difficult subject of martingales is delayed until it can be applied in the treatment of mathematical finance.


Foundations of Data Science

Author: Avrim Blum

Publisher: Cambridge University Press

Published: 2020-01-23

Total Pages: 433

ISBN-10: 1108617360

This book provides an introduction to the mathematical and algorithmic foundations of data science, including machine learning, high-dimensional geometry, and analysis of large networks. Topics include the counterintuitive nature of data in high dimensions, important linear algebraic techniques such as singular value decomposition, the theory of random walks and Markov chains, the fundamentals of and important algorithms for machine learning, algorithms and analysis for clustering, probabilistic models for large networks, representation learning including topic modelling and non-negative matrix factorization, wavelets and compressed sensing. Important probabilistic techniques are developed including the law of large numbers, tail inequalities, analysis of random projections, generalization guarantees in machine learning, and moment methods for analysis of phase transitions in large random graphs. Additionally, important structural and complexity measures are discussed such as matrix norms and VC-dimension. This book is suitable for both undergraduate and graduate courses in the design and analysis of algorithms for data.


Adaptive Markov Control Processes

Author: Onesimo Hernandez-Lerma

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 160

ISBN-10: 1441987142

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMP's), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMP's have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMP's, i.e., CMP's that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.


Basics of Applied Stochastic Processes

Author: Richard Serfozo

Publisher: Springer Science & Business Media

Published: 2009-01-24

Total Pages: 452

ISBN-10: 3540893326

Stochastic processes are mathematical models of random phenomena that evolve according to prescribed dynamics. Processes commonly used in applications are Markov chains in discrete and continuous time, renewal and regenerative processes, Poisson processes, and Brownian motion. This volume gives an in-depth description of the structure and basic properties of these stochastic processes. A main focus is on equilibrium distributions, strong laws of large numbers, and ordinary and functional central limit theorems for cost and performance parameters. Although these results differ for various processes, they have a common trait of being limit theorems for processes with regenerative increments. Extensive examples and exercises show how to formulate stochastic models of systems as functions of a system’s data and dynamics, and how to represent and analyze cost and performance measures. Topics include stochastic networks, spatial and space-time Poisson processes, queueing, reversible processes, simulation, Brownian approximations, and varied Markovian models. The technical level of the volume is between that of introductory texts that focus on highlights of applied stochastic processes, and advanced texts that focus on theoretical aspects of processes.