A Concise Introduction to Decentralized POMDPs

Author: Frans A. Oliehoek

Publisher: Springer

Published: 2016-06-03

Total Pages: 146

ISBN-13: 3319289292

This book introduces multiagent planning under uncertainty as formalized by decentralized partially observable Markov decision processes (Dec-POMDPs). The intended audience is researchers and graduate students working in the fields of artificial intelligence related to sequential decision making: reinforcement learning, decision-theoretic planning for single agents, classical multiagent planning, decentralized control, and operations research.
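
For readers new to the formalism, it may help to note that a Dec-POMDP is usually written as the tuple below; this is the standard formulation of the model, not text quoted from the book.

\[
\mathcal{M} = \langle \mathcal{D}, S, \{A_i\}, T, R, \{O_i\}, O, h, b_0 \rangle
\]

where \(\mathcal{D} = \{1, \dots, n\}\) is the set of agents, \(S\) the set of states, \(A_i\) the actions of agent \(i\) (a joint action is \(\mathbf{a} = \langle a_1, \dots, a_n \rangle\)), \(T(s' \mid s, \mathbf{a})\) the transition probabilities, \(R(s, \mathbf{a})\) the single shared reward, \(O_i\) the observations of agent \(i\), \(O(\mathbf{o} \mid \mathbf{a}, s')\) the joint observation probabilities, \(h\) the horizon, and \(b_0\) the initial state distribution. The decentralization lies in the policies: each agent must select its actions based only on its own observation history, even though the reward is shared.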


A Concise Introduction to Models and Methods for Automated Planning

Author: Hector Geffner

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 132

ISBN-13: 3031015649

Planning is the model-based approach to autonomous behavior, where the agent's behavior is derived automatically from a model of its actions, sensors, and goals. The main challenges in planning are computational, as all models, whether featuring uncertainty and feedback or not, are intractable in the worst case when represented in compact form. In this book, we look at a variety of models used in AI planning and at the methods that have been developed for solving them. The goal is to provide a modern and coherent view of planning that is precise, concise, and mostly self-contained, without being shallow. For this, we make no attempt at covering the whole variety of planning approaches, ideas, and applications, and focus on the essentials. The target audience of the book is students and researchers interested in autonomous behavior and planning from an AI, engineering, or cognitive science perspective.

Table of Contents: Preface / Planning and Autonomous Behavior / Classical Planning: Full Information and Deterministic Actions / Classical Planning: Variations and Extensions / Beyond Classical Planning: Transformations / Planning with Sensing: Logical Models / MDP Planning: Stochastic Actions and Full Feedback / POMDP Planning: Stochastic Actions and Partial Feedback / Discussion / Bibliography / Author's Biography
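
As a flavor of the classical planning model the book starts from, here is a minimal sketch of planning as forward state-space search over a tiny hand-written STRIPS-style domain. The domain, fact names, and breadth-first strategy are illustrative assumptions, not an example taken from the book.

from collections import deque

# Each action: (name, preconditions, add effects, delete effects), as sets of facts.
ACTIONS = [
    ("pick", frozenset({"at-table"}), frozenset({"holding"}), frozenset({"at-table"})),
    ("place", frozenset({"holding"}), frozenset({"at-goal"}), frozenset({"holding"})),
]

def successors(state):
    # An action is applicable when its preconditions hold in the current state.
    for name, pre, add, delete in ACTIONS:
        if pre <= state:
            yield name, (state - delete) | add

def plan(init, goal):
    # Breadth-first forward search: returns a shortest action sequence reaching the goal.
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(plan({"at-table"}, {"at-goal"}))  # -> ['pick', 'place']

Real planners differ mainly in scale, not in kind: they search the same induced state space, but guide it with heuristics extracted automatically from the compact model.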


Deep Reinforcement Learning

Author: Aske Plaat

Publisher: Springer Nature

Published: 2022-06-10

Total Pages: 414

ISBN-13: 9811906386

Deep reinforcement learning has attracted considerable attention recently. Impressive results have been achieved in such diverse fields as autonomous driving, game playing, molecular recombination, and robotics. In all these fields, computer programs have taught themselves to solve problems that were previously considered very difficult. In the game of Go, the program AlphaGo has even learned to outmatch three of the world's leading players. Deep reinforcement learning takes its inspiration from the fields of biology and psychology. Biology has inspired the creation of artificial neural networks and deep learning, while psychology studies how animals and humans learn, and how subjects' desired behavior can be reinforced with positive and negative stimuli. When we see how reinforcement learning teaches a simulated robot to walk, we are reminded of how children learn through playful exploration. Techniques inspired by biology and psychology work amazingly well in computers, with animal behavior and the structure of the brain serving as new blueprints for science and engineering. In fact, computers truly seem to possess aspects of human behavior; as such, this field goes to the heart of the dream of artificial intelligence. These research advances have not gone unnoticed by educators, and many universities have begun offering courses on deep reinforcement learning. The aim of this book is to provide an overview of the field at the proper level of detail for a graduate course in artificial intelligence. It covers the complete field, from the basic algorithms of deep Q-learning to advanced topics such as multi-agent reinforcement learning and meta-learning.
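
To make the starting point concrete, the sketch below shows the tabular Q-learning update that deep Q-learning generalizes by replacing the table with a neural network. The environment interface (reset, step, n_actions) is a hypothetical Gym-style stand-in assumed for illustration, not an API taken from the book.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)                    # Q[(state, action)] -> value estimate
    actions = list(range(env.n_actions))      # assumes a discrete action space
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Temporal-difference target: r + gamma * max_a' Q(s', a').
            target = reward + (0.0 if done else gamma * max(Q[(next_state, a)] for a in actions))
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q

Deep Q-learning keeps exactly this update rule but estimates Q with a network, which is what allows it to cope with state spaces far too large for a table.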


Algorithms for Decision Making

Author: Mykel J. Kochenderfer

Publisher: MIT Press

Published: 2022-08-16

Total Pages: 701

ISBN-13: 0262370239

A broad introduction to algorithms for decision making under uncertainty, introducing the underlying mathematical problem formulations and the algorithms for solving them. Automated decision-making systems or decision-support systems—used in applications that range from aircraft collision avoidance to breast cancer screening—must be designed to account for various sources of uncertainty while carefully balancing multiple objectives. This textbook provides a broad introduction to algorithms for decision making under uncertainty, covering the underlying mathematical problem formulations and the algorithms for solving them. The book first addresses the problem of reasoning about uncertainty and objectives in simple decisions at a single point in time, and then turns to sequential decision problems in stochastic environments where the outcomes of our actions are uncertain. It goes on to address model uncertainty, when we do not start with a known model and must learn how to act through interaction with the environment; state uncertainty, in which we do not know the current state of the environment due to imperfect perceptual information; and decision contexts involving multiple agents. The book focuses primarily on planning and reinforcement learning, although some of the techniques presented draw on elements of supervised learning and optimization. Algorithms are implemented in the Julia programming language. Figures, examples, and exercises convey the intuition behind the various approaches presented.
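
As an illustration of the sequential-decision material, here is a minimal value-iteration sketch for a finite Markov decision process. The book's own implementations are in Julia; the Python version and the nested-dictionary data layout below are assumptions made for brevity.

def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    # P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the expected reward.
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]) for a in actions)
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

A greedy policy is then recovered by choosing, in each state, the action that maximizes the same one-step lookahead under the converged values.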


Decision Making Under Uncertainty

Author: Mykel J. Kochenderfer

Publisher: MIT Press

Published: 2015-07-24

Total Pages: 350

ISBN-13: 0262331713

An introduction to decision making under uncertainty from a computational perspective, covering both theory and applications ranging from speech recognition to airborne collision avoidance. Many important problems involve decision making under uncertainty—that is, choosing actions based on often imperfect observations, with unknown outcomes. Designers of automated decision support systems must take into account the various sources of uncertainty while balancing the multiple objectives of the system. This book provides an introduction to the challenges of decision making under uncertainty from a computational perspective. It presents both the theory behind decision making models and algorithms and a collection of example applications that range from speech recognition to aircraft collision avoidance. Focusing on two methods for designing decision agents, planning and reinforcement learning, the book covers probabilistic models, introducing Bayesian networks as a graphical model that captures probabilistic relationships between variables; utility theory as a framework for understanding optimal decision making under uncertainty; Markov decision processes as a method for modeling sequential problems; model uncertainty; state uncertainty; and cooperative decision making involving multiple interacting agents. A series of applications shows how the theoretical concepts can be applied to systems for attribute-based person search, speech applications, collision avoidance, and unmanned aircraft persistent surveillance. Decision Making Under Uncertainty unifies research from different communities using consistent notation, and is accessible to students and researchers across engineering disciplines who have some prior exposure to probability theory and calculus. It can be used as a text for advanced undergraduate and graduate students in fields including computer science, aerospace and electrical engineering, and management science. It will also be a valuable professional reference for researchers in a variety of disciplines.
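
As a small worked example of the single-shot case described above, the sketch below picks the action with the highest expected utility under a belief over hidden states. The surveillance-style scenario and the numbers are invented for illustration and do not come from the book.

def expected_utility(action, belief, utility):
    # Expected utility of an action: sum over hidden states of P(state) * U(state, action).
    return sum(p * utility[(state, action)] for state, p in belief.items())

belief = {"intruder": 0.2, "clear": 0.8}           # belief over the hidden state
utility = {                                        # utility of each (state, action) pair
    ("intruder", "alert"): 10, ("intruder", "ignore"): -100,
    ("clear", "alert"): -1,    ("clear", "ignore"): 0,
}
best = max(["alert", "ignore"], key=lambda a: expected_utility(a, belief, utility))
print(best)  # -> 'alert' (expected utilities: 1.2 vs -20.0)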


Machine Learning and Knowledge Discovery in Databases

Author: Massih-Reza Amini

Publisher: Springer Nature

Published: 2023-03-16

Total Pages: 680

ISBN-13: 3031264126

The multi-volume set LNAI 13713 to 13718 constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2022, which took place in Grenoble, France, in September 2022. The 236 full papers presented in these proceedings were carefully reviewed and selected from a total of 1060 submissions. In addition, the proceedings include 17 Demo Track contributions. The volumes are organized in topical sections as follows: Part I: Clustering and dimensionality reduction; anomaly detection; interpretability and explainability; ranking and recommender systems; transfer and multitask learning; Part II: Networks and graphs; knowledge graphs; social network analysis; graph neural networks; natural language processing and text mining; conversational systems; Part III: Deep learning; robust and adversarial machine learning; generative models; computer vision; meta-learning, neural architecture search; Part IV: Reinforcement learning; multi-agent reinforcement learning; bandits and online learning; active and semi-supervised learning; private and federated learning; Part V: Supervised learning; probabilistic inference; optimal transport; optimization; quantum, hardware; sustainability; Part VI: Time series; financial machine learning; applications; applications: transportation; demo track.


Handbook of Reinforcement Learning and Control

Author: Kyriakos G. Vamvoudakis

Publisher: Springer Nature

Published: 2021-06-23

Total Pages: 833

ISBN-13: 3030609901

This handbook presents state-of-the-art research in reinforcement learning, focusing on its applications in the control and game theory of dynamic systems and future directions for related research and technology. The contributions gathered in this book deal with challenges faced when using learning and adaptation methods to solve academic and industrial problems, such as optimization in dynamic environments with single and multiple agents, convergence and performance analysis, and online implementation. They explore means by which these difficulties can be solved, and cover a wide range of related topics including: deep learning; artificial intelligence; applications of game theory; mixed modality learning; and multi-agent reinforcement learning. Practicing engineers and scholars in the field of machine learning, game theory, and autonomous control will find the Handbook of Reinforcement Learning and Control to be thought-provoking, instructive and informative.


Artificial Intelligence in HCI

Author: Helmut Degen

Publisher: Springer Nature

Published: 2023-07-08

Total Pages: 638

ISBN-13: 3031358945

This two-volume set constitutes the refereed proceedings of the 4th International Conference on Artificial Intelligence in HCI, AI-HCI 2023, held as part of the 25th HCI International Conference, HCII 2023, which took place virtually in Copenhagen, Denmark, in July 2023. A total of 1578 papers and 396 posters included in the HCII 2023 proceedings were carefully reviewed and selected from 7472 submissions. The first volume focuses on topics related to human-centered artificial intelligence; explainability, transparency, and trustworthiness; ethics and fairness; and AI-supported user experience design. The second volume focuses on topics related to AI for language, text, and speech-related tasks; human-AI collaboration; AI for decision-support and perception analysis; and innovations in AI-enabled systems.


Collaborative Computing: Networking, Applications and Worksharing

Author: Honghao Gao

Publisher: Springer Nature

Published: 2023-01-24

Total Pages: 544

ISBN-13: 3031243862

The two-volume set LNICST 460 and 461 constitutes the proceedings of the 18th EAI International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2022, held in Hangzhou, China, in October 2022. The 57 full papers presented in the proceedings were carefully reviewed and selected from 171 submissions. The papers are organized in the following topical sections: Recommendation System; Federated Learning and application; Edge Computing and Collaborative working; Blockchain applications; Security and Privacy Protection; Deep Learning and application; Collaborative working; Image processing and recognition.


Neural Information Processing

Author: Biao Luo

Publisher: Springer Nature

Published: 2023-11-14

Total Pages: 607

ISBN-13: 9819980828

The six-volume set LNCS 14447 to 14452 constitutes the refereed proceedings of the 30th International Conference on Neural Information Processing, ICONIP 2023, held in Changsha, China, in November 2023. The 652 papers presented in the proceedings set were carefully reviewed and selected from 1274 submissions. They focus on theory and algorithms; cognitive neurosciences; human-centred computing; and applications in neuroscience, neural networks, deep learning, and related fields.