Event-triggered Near Optimal Adaptive Control of Interconnected Systems


Author: Vignesh Narayanan

Publisher:

Published: 2017

Total Pages: 199

ISBN-13:


"Increased interest in complex interconnected systems, such as the smart grid and cyber manufacturing, has attracted researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate an optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems using state and output feedback. To relax the need for state vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Next, the control policy and the event-sampling errors are considered as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems using a zero-sum game approach for simultaneous optimization of both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems" --Abstract, page iv.
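The event-sampling idea the abstract builds on can be illustrated with a minimal sketch. All numbers below (the plant matrices, the gain `K`, and the threshold `sigma`) are illustrative assumptions, not taken from the dissertation: the sensor transmits the state to the controller only when the gap between the current state and the last transmitted sample exceeds a state-dependent threshold.

```python
import numpy as np

def simulate_event_triggered(A, B, K, x0, steps, sigma=0.1):
    """Discrete-time loop x[k+1] = A x[k] + B u[k] with u[k] = -K x_hat[k],
    where x_hat is the most recently *transmitted* state sample."""
    x = np.asarray(x0, dtype=float)
    x_hat = x.copy()              # sample currently held by the controller
    transmissions = 0
    for _ in range(steps):
        # Event-trigger condition: transmit only when the measurement error
        # ||x - x_hat|| exceeds a fraction sigma of the current state norm.
        if np.linalg.norm(x - x_hat) > sigma * np.linalg.norm(x):
            x_hat = x.copy()
            transmissions += 1
        u = -K @ x_hat
        x = A @ x + B @ u
    return x, transmissions

# Illustrative double-integrator-like plant with a hand-picked stabilizing gain.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[2.0, 3.0]])        # closed-loop eigenvalues 0.9 and 0.8
x_final, n_tx = simulate_event_triggered(A, B, K, [1.0, 0.0], steps=300)
print(n_tx, np.linalg.norm(x_final))
```

Compared with periodic sampling (one transmission per step), the relative-threshold rule transmits only when the held sample has drifted; this is the network-resource saving the event-sampled schemes above exploit, with larger `sigma` trading fewer transmissions against a tighter stability margin.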


Optimal Event-Triggered Control Using Adaptive Dynamic Programming


Author: Sarangapani Jagannathan

Publisher: CRC Press

Published: 2024-06-21

Total Pages: 348

ISBN-13: 1040049168


Optimal Event-Triggered Control Using Adaptive Dynamic Programming discusses event-triggered controller design, including optimal control and event-sampling design, for linear and nonlinear dynamic systems, among them networked control systems (NCS), for both known and uncertain system dynamics. NCS are a first step toward realizing cyber-physical systems (CPS) and the Industry 4.0 vision. The authors apply several powerful modern control techniques to the design of event-triggered controllers, derive event-trigger conditions, and demonstrate closed-loop stability. Detailed derivations, rigorous stability proofs, computer simulation examples, and downloadable MATLAB® codes are included for each case. The book begins by providing background on linear and nonlinear systems, NCS, network imperfections, distributed systems, adaptive dynamic programming and optimal control, stability theory, and optimal adaptive event-triggered controller design in continuous and discrete time for linear, nonlinear, and distributed systems. It lays the foundation for the use of reinforcement learning-based optimal adaptive controllers over infinite horizons.
The text then:
- Introduces event-triggered control of linear and nonlinear systems, describing the design of adaptive controllers for them
- Presents neural network-based optimal adaptive control and game-theoretic formulations of linear and nonlinear systems enclosed by a communication network
- Addresses the stochastic optimal control of linear and nonlinear NCS using neurodynamic programming
- Explores optimal adaptive designs for nonlinear two-player zero-sum games under communication constraints, solving for both the optimal policy and the event-trigger condition
- Treats event-sampled distributed linear and nonlinear systems so as to minimize the transmission of state and control signals within the feedback loop via the communication network
- Covers numerous examples along the way, with applications of event-triggered control to robot manipulators, UAVs, and distributed joint optimal network scheduling and control design for wireless NCS/CPS in order to realize the Industry 4.0 vision
An ideal textbook for senior undergraduate students, graduate students, university researchers, and practicing engineers, Optimal Event-Triggered Control Using Adaptive Dynamic Programming instills a solid understanding of neural network-based optimal controllers under event sampling and how to build them so as to attain the CPS or Industry 4.0 vision.


Robust Adaptive Dynamic Programming


Author: Yu Jiang

Publisher: John Wiley & Sons

Published: 2017-04-13

Total Pages: 220

ISBN-13: 1119132657


A comprehensive look at state-of-the-art ADP theory and real-world applications. This book fills a gap in the literature by providing a theoretical framework for integrating techniques from adaptive dynamic programming (ADP) and modern nonlinear control to address data-driven optimal control design challenges arising from both parametric and dynamic uncertainties. Traditional model-based approaches leave much to be desired when addressing the challenges posed by the ever-increasing complexity of real-world engineering systems. An alternative that has received much interest in recent years is biologically inspired approaches, primarily robust adaptive dynamic programming (RADP). Despite their growing popularity worldwide, books on ADP have until now focused nearly exclusively on analysis and design, with scant consideration given to how it can be applied to address robustness issues, a new challenge arising from dynamic uncertainties encountered in common engineering problems. Robust Adaptive Dynamic Programming zeros in on the practical concerns of engineers. The authors develop RADP theory from linear systems to partially linear, large-scale, and completely nonlinear systems. They provide in-depth coverage of state-of-the-art applications in power systems, supplemented with numerous real-world examples implemented in MATLAB. They also explore fascinating reverse-engineering topics, such as how ADP theory can be applied to the study of the human brain and cognition.
In addition, the book:
- Covers the latest developments in RADP theory and applications for solving problems in systems of varying complexity
- Explores multiple real-world implementations in power systems, with illustrative examples backed by reusable MATLAB code and Simulink block sets
- Provides an overview of nonlinear control, machine learning, and dynamic control
- Features discussions of novel applications of RADP theory, including an entire chapter on how it can serve as a computational mechanism of human movement control
Robust Adaptive Dynamic Programming is both a valuable working resource and an intriguing exploration of contemporary ADP theory and applications for practicing engineers and advanced students in systems theory, control engineering, computer science, and applied mathematics.
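For the linear core of RADP, the underlying policy-iteration loop can be sketched in a model-based form; the system matrices and initial gain below are illustrative assumptions, and the book's data-driven variants replace the Lyapunov-equation step with one solved from measured input/state trajectories. This is Kleinman's iteration for the LQR problem dx/dt = Ax + Bu with cost ∫(x'Qx + u'Ru)dt:

```python
import numpy as np

def lyap(Acl, Qk):
    """Solve the continuous Lyapunov equation Acl' P + P Acl + Qk = 0
    via Kronecker products (adequate for small illustrative systems)."""
    n = Acl.shape[0]
    M = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    return np.linalg.solve(M, -Qk.reshape(-1)).reshape(n, n)

def kleinman(A, B, Q, R, K0, iters=20):
    """Policy iteration: policy evaluation (a Lyapunov solve) followed by
    policy improvement, starting from a stabilizing gain K0."""
    K = K0
    for _ in range(iters):
        P = lyap(A - B @ K, Q + K.T @ R @ K)   # cost of the current policy
        K = np.linalg.solve(R, B.T @ P)        # improved policy
    return K, P

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
K0 = np.array([[1.0, 1.0]])                    # stabilizing initial gain
K, P = kleinman(A, B, Q, R, K0)
# At convergence, P solves the algebraic Riccati equation and K = R^-1 B' P.
```

Each evaluation step prices the current policy, and each improvement step is greedy with respect to that price; the iterates converge quadratically to the optimal LQR gain from any stabilizing start.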


Adaptive Dynamic Programming with Applications in Optimal Control


Author: Derong Liu

Publisher: Springer

Published: 2017-01-04

Total Pages: 609

ISBN-13: 3319508156


This book covers the most recent developments in adaptive dynamic programming (ADP). The text begins with a thorough background review of ADP, ensuring that readers are sufficiently familiar with the fundamentals. In the core of the book, the authors address first discrete-time and then continuous-time systems. Coverage of discrete-time systems starts with a more general form of value iteration, demonstrating its convergence, optimality, and stability with complete and thorough theoretical analysis. A more realistic form of value iteration is then studied, in which the value function approximations are assumed to have finite errors. The book also details another avenue of the ADP approach: policy iteration. Both basic and generalized forms of policy-iteration-based ADP are studied with complete theoretical analysis of convergence, optimality, stability, and error bounds. Among continuous-time systems, the control of affine and nonaffine nonlinear systems is studied using the ADP approach, which is then extended to other branches of control theory, including decentralized control, robust and guaranteed-cost control, and game theory. The last part of the book presents the real-world significance of ADP theory, focusing on three application examples developed from the authors' work:
- renewable energy scheduling for smart power grids;
- coal gasification processes; and
- water–gas shift reactions.
Researchers studying intelligent control methods and practitioners looking to apply them in the chemical-process and power-supply industries will find much to interest them in this thorough treatment of an advanced approach to control.
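The value-iteration scheme described above can be made concrete for the special case of discrete-time LQR, where every value function iterate is quadratic, V_j(x) = x'P_j x, and the minimization over u has a closed form. The plant below is an illustrative assumption, not one of the book's examples:

```python
import numpy as np

def vi_step(P, A, B, Q, R):
    """One value-iteration sweep V_{j+1}(x) = min_u [x'Qx + u'Ru + V_j(Ax+Bu)].
    For quadratic V_j the minimizing u is linear in x, so the sweep reduces
    to the Riccati difference equation returned below."""
    S = R + B.T @ P @ B
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)

P = np.zeros((2, 2))          # V_0 = 0, the zero-initial-value case
for _ in range(500):
    P = vi_step(P, A, B, Q, R)
# P has numerically reached the fixed point, i.e. the optimal value function.
```

Starting from V_0 = 0, the iterates are monotonically nondecreasing and converge to the optimal value function for this stabilizable pair (A, B); the book's analysis treats the general nonlinear version of this argument, including the realistic case where each V_j is only approximated with finite error.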


Adaptive Dynamic Programming: Single and Multiple Controllers


Author: Ruizhuo Song

Publisher: Springer

Published: 2018-12-28

Total Pages: 271

ISBN-13: 9811317127


This book presents a class of novel optimal control methods and game schemes based on adaptive dynamic programming (ADP) techniques. For systems with a single control input, ADP-based optimal control is designed for different objectives, while for systems with multiple players, the optimal control inputs are derived from game formulations. To verify the effectiveness of the proposed methods, the book analyzes the properties of the ADP methods, including the convergence of the iterative value functions and the stability of the system under the iterative control laws. Further, to substantiate the mathematical analysis, it presents various application examples that provide reference to real-world practice.


Reinforcement Learning and Approximate Dynamic Programming for Feedback Control


Author: Frank L. Lewis

Publisher: John Wiley & Sons

Published: 2013-01-28

Total Pages: 498

ISBN-13: 1118453972


Reinforcement learning (RL) and adaptive dynamic programming (ADP) have been among the most critical research fields in science and engineering for modern complex systems. This book describes the latest RL and ADP techniques for decision and control in human-engineered systems, covering both single-player decision and control and multi-player games. Edited by pioneers of RL and ADP research, the book brings together ideas and methods from many fields and provides important and timely guidance on controlling a wide variety of systems, such as robots, industrial processes, and economic decision-making.


Robust Event-Triggered Control of Nonlinear Systems


Author: Tengfei Liu

Publisher: Springer Nature

Published: 2020-06-25

Total Pages: 264

ISBN-13: 9811550131


This book presents a study on the novel concept of "event-triggered control of nonlinear systems subject to disturbances", discussing the theory and practical applications. Richly illustrated, it is a valuable resource for researchers, engineers and graduate students in automation engineering who wish to learn the theories, technologies, and applications of event-triggered control of nonlinear systems.


Learning-Based Control


Author: Zhong-Ping Jiang

Publisher: Now Publishers

Published: 2020-12-07

Total Pages: 122

ISBN-13: 9781680837520


The recent success of Reinforcement Learning and related methods can be attributed to several key factors. First, it is driven by reward signals obtained through interaction with the environment. Second, it is closely related to human learning behavior. Third, it has a solid mathematical foundation. Nonetheless, conventional Reinforcement Learning theory exhibits some shortcomings, particularly in continuous environments or when considering the stability and robustness of the controlled process. In this monograph, the authors build on Reinforcement Learning to present a learning-based approach for controlling dynamical systems from real-time data and review some major developments in this relatively young field. In doing so, the authors develop a framework for learning-based control theory that shows how to learn suboptimal controllers directly from input-output data. There are three main challenges in the development of learning-based control. First, there is a need to generalize existing recursive methods. Second, as a fundamental difference between learning-based control and Reinforcement Learning, stability and robustness are important issues that must be addressed for safety-critical engineering systems such as self-driving cars. Third, the data efficiency of Reinforcement Learning algorithms needs to be addressed for such safety-critical systems. This monograph provides the reader with an accessible primer on a new direction in control theory still in its infancy, namely Learning-Based Control Theory, which is closely tied to the literature on safe Reinforcement Learning and Adaptive Dynamic Programming.


Cooperative Control of Multi-Agent Systems with Uncertainties


Author: Hao Zhang

Publisher: Elsevier

Published: 2024-04-04

Total Pages: 300

ISBN-13: 0443218609


Multi-agent coordination is an emerging engineering field, inspired by observations and descriptions of collective behavior in nature, such as fish schooling, bird flocking, and insect swarming. Its advantages include reduced cost and complexity, from the hardware platform to the software and algorithms; in addition, multi-agent systems are capable of many tasks that could not be performed effectively by a single-robot system, such as surveillance. The book proposes a hierarchical design framework that places uncertainties related to system models in the decentralized control layer (bottom layer) and those related to communication (as well as physical interaction) between the agents in the distributed decision-making layer (top layer). The book shows that the two layers satisfy the separation principle under certain conditions, so that through the two-layer design framework the challenges in each layer can be resolved independently, and the design complexity does not grow with the level of uncertainty. In addition, to address the energy limitations of agents, the book also studies event-driven cooperative control of multi-agent systems, which can effectively reduce the agents' energy consumption and extend their operational life span.
The book:
- Bridges the gap for engineers and technicians in the automation industry, including theory and practice
- Provides a general framework for dealing with various uncertainties in multi-agent cooperative control problems
- Contains contributions surrounding the development of multi-agent systems control theory
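As a toy illustration of the distributed coordination such books build on (the graph, step size, and single-integrator agents are assumptions for this sketch, not the book's model), a standard consensus update drives every agent to the average of the initial states using only neighbor-to-neighbor information:

```python
import numpy as np

# Graph Laplacian of an undirected line graph over 4 agents:
# each agent exchanges information only with its immediate neighbors.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

eps = 0.3                            # step size; eps < 1/max_degree keeps the update stable
x = np.array([0.0, 1.0, 2.0, 5.0])   # initial states, average = 2.0
for _ in range(200):
    x = x - eps * (L @ x)            # each agent moves toward its neighbors' values
# x converges to the consensus value 2.0 for every agent
```

Because the row sums of the Laplacian are zero, the sum of the states is conserved at every step, so the agents agree on exactly the initial average; event-driven versions of this update transmit a neighbor's state only when it has changed enough, which is the energy saving discussed above.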


Proceedings of 2022 International Conference on Autonomous Unmanned Systems (ICAUS 2022)


Author: Wenxing Fu

Publisher: Springer Nature

Published: 2023-03-10

Total Pages: 3985

ISBN-13: 981990479X


This book includes original, peer-reviewed research papers from ICAUS 2022, which offered a unique and interesting platform for scientists, engineers, and practitioners throughout the world to present and share their most recent research and innovative ideas. The aim of ICAUS 2022 is to stimulate researchers active in areas pertinent to intelligent unmanned systems. The topics covered include, but are not limited to: Unmanned Aerial/Ground/Surface/Underwater Systems; Robotics; Autonomous Control, Navigation, and Positioning; Architecture, Energy, and Task Planning; Effectiveness Evaluation Technologies; and Artificial Intelligence Algorithms, Bionic Technology, and Their Applications in Unmanned Systems. The papers showcased here share the latest findings on Unmanned Systems, Robotics, Automation, Intelligent Systems, Control Systems, Integrated Networks, Modeling, and Simulation, making the book a valuable asset for researchers, engineers, and university students alike.