Real-Time Search for Learning Autonomous Agents

Author: Toru Ishida

Publisher: Springer Science & Business Media

Published: 1997-06-30

Total Pages: 137

ISBN-13: 0792399447

Autonomous agents and multiagent systems are computational systems in which several computational agents interact or work together to perform a set of tasks; the agents involved may share common goals or pursue distinct ones. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: it always keeps a record of its moves, yet it cannot utilize and improve on previous experience. Furthermore, although the algorithms interleave planning and execution, they cannot be applied directly to a multiagent world: the problem solver can neither adapt to dynamically changing goals nor solve problems cooperatively with other problem solvers. This book deals with all of these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers seeking both practical references and a theoretical basis for agent/multiagent systems, and it can also be used as a text for advanced courses on the subject.
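The real-time search framework this line of work extends descends from Korf's LRTA* (Learning Real-Time A*), which interleaves a one-step lookahead, a heuristic update, and a move. A minimal illustrative sketch, with a made-up corridor environment that is not from the book:

```python
# Minimal LRTA* (Learning Real-Time A*) sketch on a toy 1-D corridor.
# The agent repeatedly looks one step ahead, updates the heuristic estimate of
# its current state, and moves to the apparently best neighbor -- interleaving
# planning and execution, as real-time search requires.

def lrta_star(start, goal, neighbors, h, max_steps=1000):
    """h: dict mapping state -> heuristic estimate, updated in place (learned)."""
    s = start
    path = [s]
    for _ in range(max_steps):
        if s == goal:
            return path
        # Score each neighbor by unit edge cost plus its current estimate.
        best = min(neighbors(s), key=lambda n: 1 + h.get(n, 0))
        # Learning step: raise h(s) to the backed-up one-step value.
        h[s] = max(h.get(s, 0), 1 + h.get(best, 0))
        s = best
        path.append(s)
    return path

# Toy corridor: states 0..9, goal at 9, admissible heuristic = distance to goal.
def neighbors(s):
    return [n for n in (s - 1, s + 1) if 0 <= n <= 9]

h = {s: 9 - s for s in range(10)}
print(lrta_star(0, 9, neighbors, h))  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With an admissible, consistent heuristic the agent heads straight to the goal; the learning step matters when the initial estimates are misleading, since repeated trials then raise them toward the true distances.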


Real-Time Search for Learning Autonomous Agents

Author: Toru Ishida

Publisher: Springer

Published: 2007-08-28

Total Pages: 137

ISBN-13: 0585345074


Autonomy Through Real-time Learning and OpenNARS for Applications

Author: Patrick Hammer

Publisher:

Published: 2021

Total Pages: 162

ISBN-13:

This work is an attempt to enhance the autonomy of intelligent agents through real-time learning. In nature, the ability to learn at runtime gives the species that have it key advantages over others. While most AI systems do not need this ability and can instead be trained before deployment, it allows agents to adapt, at runtime, to changing and generally unknown circumstances, and then to exploit their environment for their own purposes. To reach this goal, this thesis explores a pragmatic design (ONA) for a general-purpose reasoner incorporating Non-Axiomatic Reasoning System (NARS) theory. The design and implementation are presented in detail, together with their theoretical foundation. Experiments on various system capabilities are then carried out and summarized, alongside application projects in which ONA is utilized: a traffic surveillance application in the Smart City domain that identifies traffic anomalies through real-time reasoning and learning, and a system that helps first responders by providing driving assistance and presenting mission-critical information. It is also shown how reliable real-time learning can increase the autonomy of intelligent agents beyond the current state of the art. Theoretical and practical comparisons are made with established frameworks and with specific techniques such as Q-learning, and it is shown that ONA also works in non-Markovian environments where Q-learning cannot be applied. Some of the reasoner's capabilities are demonstrated on real robotic hardware as well. These experiments show that combining knowledge learned at runtime with only partly complete mission-related background knowledge supplied by the designer allows the agent to perform a complex task from a minimal mission specification that omits learnable details.
Overall, ONA is suitable for autonomous agents because it combines, in a single technique, the strengths of behavior learning, usually captured by reinforcement learning, with means-end reasoning (such as Belief-Desire-Intention models with a planner), effectively utilizing knowledge expressed by a designer.
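As a point of reference for the Q-learning comparison above, the baseline can be sketched as tabular Q-learning on a toy Markovian chain; the environment and hyperparameters here are illustrative, not from the thesis:

```python
import random

# Tabular Q-learning on a toy 5-state chain: actions move left/right, reward 1
# on reaching the rightmost state (terminal). Q-learning assumes a Markovian
# state signal -- exactly the assumption the thesis argues ONA does not need.
random.seed(0)
n_states = 5
actions = [+1, -1]          # +1 listed first so greedy ties break toward "right"
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != n_states - 1:            # state 4 is terminal; Q(4, .) stays 0
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # One-step Q-learning backup.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# Greedy policy after learning: +1 (move right) in every non-terminal state.
policy = [max(actions, key=lambda b: Q[(s, b)]) for s in range(n_states - 1)]
print(policy)
```

The update bootstraps from the max over next-state action values keyed on the current state alone, which is why the method breaks down when the observable state does not summarize the history (the non-Markovian case discussed above).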


Transfer Learning for Multiagent Reinforcement Learning Systems

Author: Felipe Leno da Silva

Publisher: Morgan & Claypool Publishers

Published: 2021-05-27

Total Pages: 131

ISBN-13: 1636391354

Learning to solve sequential decision-making tasks is difficult. Humans take years exploring the environment, essentially at random, before they are able to reason, solve difficult tasks, and collaborate with other humans towards a common goal. Artificially intelligent agents are like humans in this respect. Reinforcement Learning (RL) is a well-known technique for training autonomous agents through interactions with the environment. Unfortunately, the learning process has high sample complexity when inferring an effective actuation policy, especially when multiple agents are acting in the environment simultaneously. However, previous knowledge can be leveraged to accelerate learning and enable solving harder tasks. In the same way that humans build skills and reuse them across related tasks, RL agents can reuse knowledge from previously solved tasks and from the exchange of knowledge with other agents in the environment. In fact, virtually all of the most challenging tasks currently solved by RL rely on embedded knowledge-reuse techniques such as Imitation Learning, Learning from Demonstration, and Curriculum Learning. This book surveys the literature on knowledge reuse in multiagent RL. The authors define a unifying taxonomy of state-of-the-art solutions for reusing knowledge, providing a comprehensive discussion of recent progress in the area. Readers will find a comprehensive discussion of the many ways in which knowledge can be reused in multiagent sequential decision-making tasks, as well as the scenarios in which each approach is more efficient. The authors also give their view of the current low-hanging fruit in the area, as well as the still-open big questions that could lead to breakthrough developments. Finally, the book provides resources for researchers who intend to join this area or leverage its techniques, including a list of conferences, journals, and implementation tools.
This book will be useful for a wide audience; and will hopefully promote new dialogues across communities and novel developments in the area.
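One of the simplest knowledge-reuse mechanisms in this setting is value-function transfer: initializing a learner on a new task with the Q-table learned on a related source task, via a hand-written inter-task state mapping. A hypothetical sketch; the function names, mapping, and task shapes are invented for illustration:

```python
# Value-function transfer: warm-start a target task's Q-table from a source
# task's Q-table through an inter-task state mapping supplied by the designer.

def transfer_q(source_q, state_map, target_states, actions, default=0.0):
    """Build an initial Q-table for the target task.

    state_map: function from target state to the analogous source state.
    Unmapped (state, action) pairs fall back to `default`.
    """
    q = {}
    for s in target_states:
        src = state_map(s)
        for a in actions:
            q[(s, a)] = source_q.get((src, a), default)
    return q

# Source task: a 3-state chain already solved (values are placeholders);
# target task: a longer 6-state chain reusing the source estimates.
source_q = {(s, a): float(s + (1 if a == +1 else 0))
            for s in range(3) for a in (+1, -1)}
q0 = transfer_q(source_q, lambda s: min(s, 2), range(6), (+1, -1))
print(q0[(5, +1)])  # prints 3.0 -- inherits the source estimate for state 2
```

The learner then proceeds with ordinary Q-learning from `q0` instead of from zeros; when the mapping is apt, early exploration is biased toward behavior that was good in the source task.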


Learning Action Models for Reactive Autonomous Agents

Author: Scott Sherwood Benson

Publisher:

Published: 1997

Total Pages: 220

ISBN-13:

Previous work on action-model learning has focused on domains that contain only deterministic, atomic action models that explicitly describe all changes that can occur in the environment. The thesis extends this previous work to cover domains that contain durative actions, continuous variables, nondeterministic action effects, and actions taken by other agents. Results have been demonstrated in several robot simulation environments and the Silicon Graphics, Inc. flight simulator.
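In the deterministic, atomic setting that the prior work addressed, an action model can be induced directly from observed transitions by intersecting the states in which the action applied and recording its added and deleted propositions. A toy sketch; the STRIPS-like representation and the example propositions are illustrative, not the thesis's actual formalism:

```python
# Learn a STRIPS-like action model from (state_before, action, state_after)
# observations. States are sets of propositions; the precondition estimate is
# the intersection of every state in which the action was seen to succeed.

def learn_model(observations):
    model = {}  # action -> {"pre": set, "add": set, "delete": set}
    for before, action, after in observations:
        m = model.setdefault(action, {"pre": set(before), "add": set(), "delete": set()})
        m["pre"] &= before          # keep only propositions true in every use
        m["add"] |= after - before  # effects that became true
        m["delete"] |= before - after  # effects that became false
    return model

obs = [
    ({"door_closed", "at_door"}, "open", {"door_open", "at_door"}),
    ({"door_closed", "at_door", "light_on"}, "open", {"door_open", "at_door", "light_on"}),
]
m = learn_model(obs)["open"]
print(sorted(m["pre"]), sorted(m["add"]), sorted(m["delete"]))
# prints ['at_door', 'door_closed'] ['door_open'] ['door_closed']
```

The thesis's extensions (durative actions, continuous variables, nondeterministic effects, other agents) are precisely the cases where this simple intersection-and-difference scheme no longer suffices.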


Learnable Knowledge for Autonomous Agents

Author: Saminda W Abeyruwan

Publisher:

Published: 2015

Total Pages:

ISBN-13:

While computational power has increased and statistical machine learning methods have made substantial advances, many problems that would benefit from real-time interpretation have not exploited their combined strengths; for instance, the problem of gathering data from the environment, transforming it into knowledge, and updating that knowledge as new data become available. Currently, high-level languages, first-order predicate logic, or model-based machine learning, which offer substantial expressivity at moderate computational cost, are used for static representations of knowledge that support reasoning and inference. In this dissertation, we address how an entity can dynamically gather knowledge from environmental data, use it to infer evolving events, and dynamically update its current knowledge. We develop theoretical and empirical solutions using Description Logic representation and reasoning, and General Value Functions from Reinforcement Learning. The proposed solutions dynamically extract low-level knowledge from available data and update the high-level knowledge, which is used to predict evolving future events. We show applications in three real-world domains: 1) the RoboCup 3D Soccer Simulation environment, 2) high-throughput screening, and 3) axon regeneration.
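The General Value Functions mentioned above are, at bottom, TD-learned predictions of the discounted sum of an arbitrary cumulant signal rather than of reward. A minimal on-policy sketch; the observation stream and cumulant are made up for illustration:

```python
# A General Value Function (GVF) predicts the discounted sum of an arbitrary
# cumulant signal, learned online with TD(0). Here the "cumulant" is a toy
# sensor event on a repeating stream of observations.

def td_gvf(stream, cumulant, gamma=0.9, alpha=0.1):
    v = {}
    prev = None
    for obs in stream:
        if prev is not None:
            # TD(0): move v(prev) toward cumulant + discounted prediction at obs.
            delta = cumulant(obs) + gamma * v.get(obs, 0.0) - v.get(prev, 0.0)
            v[prev] = v.get(prev, 0.0) + alpha * delta
        prev = obs
    return v

# Repeating stream A -> B -> A -> B ...; the cumulant fires (=1) on B.
stream = ["A", "B"] * 2000
v = td_gvf(stream, lambda o: 1.0 if o == "B" else 0.0)
# Fixed point: v(A) = 1 + 0.9*v(B), v(B) = 0.9*v(A), so v(A) ~ 5.26, v(B) ~ 4.74.
print(round(v["A"], 2), round(v["B"], 2))  # prints 5.26 4.74
```

Swapping the cumulant and discount per question is what makes the construction "general": the same update can predict any measurable signal about the agent's experience, not just reward.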


Deep Learning for Unmanned Systems

Author: Anis Koubaa

Publisher: Springer Nature

Published: 2021-10-01

Total Pages: 731

ISBN-13: 3030779394

This book is suitable for use at the graduate or advanced undergraduate level, among others. Manned and unmanned ground, aerial and marine vehicles enable many promising and revolutionary civilian and military applications that will change our lives in the near future. These applications include, but are not limited to, surveillance, search and rescue, environment monitoring, infrastructure monitoring, self-driving cars, contactless last-mile delivery vehicles, autonomous ships, precision agriculture and transmission line inspection, to name just a few. These vehicles will benefit from advances in deep learning, a subfield of machine learning able to endow them with capabilities such as perception, situation awareness, planning and intelligent control. Deep learning models can also generate actionable insights into the complex structures of large data sets. In recent years, deep learning research has received increasing attention from researchers in academia, government laboratories and industry, and these research activities have borne fruit in tackling some of the still-open challenging problems of manned and unmanned ground, aerial and marine vehicles. Moreover, deep learning methods have recently been actively developed in other areas of machine learning, including reinforcement learning and transfer/meta-learning, alongside standard deep learning methods such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs). The book is primarily meant for researchers from academia and industry working in research areas such as control engineering, robotics, mechatronics, biomedical engineering, mechanical engineering and computer science.
The book chapters deal with recent research problems in the areas of reinforcement learning-based control of UAVs and deep learning for unmanned aerial systems (UAS). They present various deep learning techniques for robotic applications and contain thorough literature surveys with long lists of references. The chapters are well written, with a clear exposition of the research problem, methodology, block diagrams and mathematical techniques; they are lucidly illustrated with numerical examples and simulations, and they discuss details of applications and future research areas.


Data Driven Modeling Using Reinforcement Learning in Autonomous Agents

Author: Murat Karakurt

Publisher:

Published: 2003

Total Pages: 150

ISBN-13:

This research aspired to build a system capable of solving problems by means of its past experience, in particular an autonomous agent that can learn from trial-and-error sequences. To achieve this, connectionist neural network architectures are combined with reinforcement learning methods, and the credit assignment scheme in multi-layer perceptron (MLP) architectures is altered. In classical credit assignment, the actual output of the system is compared with previously known target data that the system tries to approximate, and the discrepancy between them is minimized. Temporal-difference credit assignment, by contrast, depends on temporally successive outputs; with this method it is more feasible to find the relation between successive events rather than only their final consequences. The k-means algorithm is also modified in this thesis. Moreover, MLP architectures including Backpropagation, Radial Basis Function Networks, Radial Basis Function Link Net, self-organizing neural networks and the k-means algorithm were implemented in C++, and by combining them, reinforcement learning, temporal-difference learning and Q-learning architectures were realized; all of these algorithms were simulated in the C++ environment. As a result, the reinforcement learning methods used showed two main disadvantages in the process of creating an autonomous agent: training time is too long, and too many input parameters are needed to train the system. Hence hardware implementation is not yet feasible, and further research is considered necessary.
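The contrast drawn above can be stated in one pair of update rules: supervised credit assignment corrects every prediction against the known final target, while temporal-difference credit assignment corrects each prediction against the next prediction, with only the last one seeing the true outcome. A minimal illustration with toy numbers, not taken from the thesis:

```python
# Supervised vs. temporal-difference credit assignment applied to a sequence
# of predictions v_0..v_2 of a final outcome z observed only at the end.

alpha = 0.5
v = [0.2, 0.4, 0.6]   # successive predictions along one trajectory
z = 1.0               # actual outcome, known only at the end

# Supervised: every prediction is corrected against the final outcome z.
supervised = [p + alpha * (z - p) for p in v]

# TD(0): each prediction is corrected against the NEXT prediction;
# only the last prediction is corrected against the true outcome.
targets = v[1:] + [z]
td = [p + alpha * (t - p) for p, t in zip(v, targets)]

print([round(x, 2) for x in supervised])  # prints [0.6, 0.7, 0.8]
print([round(x, 2) for x in td])          # prints [0.3, 0.5, 0.8]
```

The TD updates propagate information between neighboring time steps, which is why they can be applied online, before the outcome is known, while the supervised updates must wait for the end of the sequence.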


Layered Learning in Multiagent Systems

Author: Peter Stone

Publisher: MIT Press

Published: 2000-03-03

Total Pages: 300

ISBN-13: 9780262264600

This book looks at multiagent systems that consist of teams of autonomous agents acting in real-time, noisy, collaborative, and adversarial environments. The book makes four main contributions to the fields of machine learning and multiagent systems. First, it describes an architecture within which a flexible team structure allows member agents to decompose a task into flexible roles and to switch roles while acting. Second, it presents layered learning, a general-purpose machine-learning method for complex domains in which learning a mapping directly from agents' sensors to their actuators is intractable with existing machine-learning methods. Third, the book introduces a new multiagent reinforcement learning algorithm—team-partitioned, opaque-transition reinforcement learning (TPOT-RL)—designed for domains in which agents cannot necessarily observe the state-changes caused by other agents' actions. The final contribution is a fully functioning multiagent system that incorporates learning in a real-time, noisy domain with teammates and adversaries—a computer-simulated robotic soccer team. Peter Stone's work is the basis for the CMUnited Robotic Soccer Team, which has dominated recent RoboCup competitions. RoboCup not only helps roboticists to prove their theories in a realistic situation, but has drawn considerable public and professional attention to the field of intelligent robotics. The CMUnited team won the 1999 Stockholm simulator competition, outscoring its opponents by the rather impressive cumulative score of 110-0.
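The layered-learning idea described above, training a low-level skill first and then feeding its output into the next layer's learning problem instead of the raw sensors, can be sketched generically. The two "layers" below are deliberately trivial nearest-centroid stand-ins, not the book's actual learners:

```python
# Layered learning, schematically: layer 1 learns a low-level mapping from raw
# sensor values to a symbolic feature; layer 2 is then trained on layer 1's
# OUTPUT vocabulary, never on the raw sensors.

def train_centroids(examples):
    """examples: list of (value, label); returns label -> mean value."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(centroids[y] - x))

# Layer 1: classify a raw distance reading as "near" / "far".
layer1 = train_centroids([(0.5, "near"), (1.0, "near"), (8.0, "far"), (9.0, "far")])

# Layer 2: choose an action from layer 1's symbolic output (encoded 0/1),
# trained on data expressed entirely in layer 1's vocabulary.
encode = {"near": 0.0, "far": 1.0}
layer2 = train_centroids([(0.0, "avoid"), (1.0, "approach")])

def policy(raw_distance):
    return predict(layer2, encode[predict(layer1, raw_distance)])

print(policy(0.7), policy(8.5))  # prints: avoid approach
```

The point of the decomposition is that each layer's learning problem is small enough to be tractable, whereas a direct sensors-to-actuators mapping, as the book notes, often is not.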