This volume is composed of invited papers on learning and control. The contents form the proceedings of a workshop held in Hyderabad in January 2008 to honor the 60th birthday of Dr. Mathukumalli Vidyasagar. The 14 papers, written by international specialists in the field, cover a variety of interests within the broader field of learning and control, and the diversity of the research provides a comprehensive overview of a field of great interest to control and system theorists.
Advances in Motor Learning and Control surveys the latest, most important advances in the field, moving beyond the confines of the debate between proponents of the information-processing and dynamical-systems perspectives. Zelaznik, editor of the Journal of Motor Behavior from 1989 to 1996, brings together a variety of perspectives. Some of the more difficult topics, such as behavioral analysis of trajectory formation and the dynamic pattern perspective of rhythmic movement, are presented in tutorial fashion. Other chapters provide a foundation for understanding increasingly specialized areas of study.
Recent Advances in Robot Learning contains seven papers on robot learning written by leading researchers in the field. As the selection of papers illustrates, the field of robot learning is both active and diverse. A variety of machine learning methods, ranging from inductive logic programming to reinforcement learning, is being applied to many subproblems in robot perception and control, often with objectives as diverse as parameter calibration and concept formulation. While no unified robot learning framework has yet emerged to cover the variety of problems and approaches described in these papers and other publications, a clear set of shared issues underlies many robot learning problems. Machine learning, when applied to robotics, is situated: it is embedded into a real-world system that tightly integrates perception, decision making and execution. Since robot learning involves decision making, there is an inherent active learning issue. Robotic domains are usually complex, yet the expense of using actual robotic hardware often prohibits the collection of large amounts of training data. Most robotic systems are real-time systems. Decisions must be made within critical or practical time constraints. These characteristics present challenges and constraints to the learning system. Since these characteristics are shared by other important real-world application domains, robotics is a highly attractive area for research on machine learning. On the other hand, machine learning is also highly attractive to robotics. There is a great variety of open problems in robotics that defy a static, hand-coded solution. Recent Advances in Robot Learning is an edited volume of peer-reviewed original research comprising seven invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 23, Numbers 2 and 3).
This book presents selected contributions to the 16th International Conference on Global Research and Education Inter-Academia 2017, hosted by Alexandru Ioan Cuza University of Iași, Romania, from 25 to 28 September 2017. It is the third volume in the series, following the editions from 2015 and 2016. Fundamental and applied research in the natural sciences has led to crucial developments in the ongoing 4th global industrial revolution, in the course of which information technology has become deeply embedded in industrial management, research and innovation – and just as deeply in education and everyday life. Materials science and nanotechnology, plasma and solid state physics, photonics, electrical and electronic engineering, robotics and metrology, signal processing, e-learning, and intelligent and soft computing have long been central research priorities for the Inter-Academia Community (I-AC) – a body comprising 14 universities and research institutes from Japan and Central/East-European countries that agreed, in 2002, to coordinate their research and education programs so as to better address today’s challenges. The book is intended for use in academic, government, and industrial R&D departments as a reference tool in research and technology education. The 42 peer-reviewed papers were written by more than 119 leading scientists from 14 countries, most of them affiliated with the I-AC.
Recent Advances in Reinforcement Learning addresses current research in an exciting area that is gaining a great deal of popularity in the Artificial Intelligence and Neural Network communities. Reinforcement learning has become a primary paradigm of machine learning. It applies to problems in which an agent (such as a robot, a process controller, or an information-retrieval engine) has to learn how to behave given only information about the success of its current actions. This book is a collection of important papers that address topics including the theoretical foundations of dynamic programming approaches, the role of prior knowledge, and methods for improving performance of reinforcement-learning techniques. These papers build on previous work and will form an important resource for students and researchers in the area. Recent Advances in Reinforcement Learning is an edited volume of peer-reviewed original research comprising twelve invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 22, Numbers 1, 2 and 3).
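The learning setting described above (an agent that must learn how to behave given only a reward signal about the success of its actions) can be illustrated with tabular Q-learning on a toy chain environment. This is a minimal sketch, not an example from the book; the environment, parameter values, and helper names below are all hypothetical.

```python
import random

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right), reward 1
# only on reaching state 4. The agent observes nothing but this
# reward signal about the success of its actions.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Environment transition: move left or right, reward at the goal."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

random.seed(0)
for _ in range(200):                       # learning episodes
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # one-step temporal-difference update driven by the reward signal
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N_STATES)]
```

After training, the greedy policy moves right from every non-goal state, which is optimal for this chain; the agent discovered this purely from trial, error, and reward.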
During the past two decades, there has been a dramatic increase in interest in the study of motor control and learning. In this volume authors from a variety of backgrounds and theoretical perspectives review their research with particular emphasis on the methods and paradigms employed, and the future direction of their work. The book is divided into four main sections. The first section contains chapters examining general issues and trends in the movement behaviour field. The remaining three sections contain chapters from scientists working in three broadly defined areas of interest: coordination and control; visuo-motor processes; and movement disorders. Each section provides an overview of the different approaches and different levels of analysis being used to examine specific topics within the motor domain.
The role of manufacturing in a country’s economy and societal development has long been established through its wealth-generating capabilities. To enhance and widen our knowledge of materials, and to increase innovation and responsiveness to ever-increasing international needs, more in-depth studies of functionally graded and tailor-made materials, recent advancements in manufacturing processes, and new design philosophies are needed. The objective of this volume is to bring together experts from academic institutions, industry and research organizations, along with professional engineers, to share knowledge, expertise and experience in the emerging trends related to design, advanced materials processing and characterization, and advanced manufacturing processes.
This book focuses on distributed and economic Model Predictive Control (MPC) with applications in different fields. MPC is one of the most successful advanced control methodologies due to the simplicity of the basic idea (measure the current state, predict and optimize the future behavior of the plant to determine an input signal, and repeat this procedure ad infinitum) and its capability to deal with constrained nonlinear multi-input multi-output systems. While the basic idea is simple, the rigorous analysis of the MPC closed loop can be quite involved. Here, distributed means that either the computation is distributed to meet real-time requirements for (very) large-scale systems or that distributed agents act autonomously while being coupled via the constraints and/or the control objective. In the latter case, communication is necessary to maintain feasibility or to recover system-wide optimal performance. The term economic refers to general control tasks and, thus, goes beyond the typically predominant control objective of set-point stabilization. Here, recently developed concepts like (strict) dissipativity of optimal control problems or turnpike properties play a crucial role. The book collects research and survey articles on recent ideas and it provides perspectives on current trends in nonlinear model predictive control. Indeed, the book is the outcome of a series of six workshops funded by the German Research Foundation (DFG) involving early-stage career scientists from different countries and from leading European industry stakeholders.
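The basic MPC loop in parentheses above (measure the current state, optimize the predicted future behavior, apply the first input, repeat) can be sketched in a few lines. The plant, cost weights, and projected-gradient solver below are illustrative choices for a constrained double integrator, not material from the book.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discretized double integrator (dt = 0.1)
B = np.array([[0.005],
              [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)   # stage-cost weights
N, U_MAX = 8, 1.0                   # prediction horizon and input bound

def cost(x0, u_seq):
    """Open-loop cost of an input sequence, predicted from state x0."""
    x, c = x0, 0.0
    for u in u_seq:
        c += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return c + x @ Q @ x            # terminal state cost

def mpc_step(x0, u_warm, iters=60, lr=0.2, eps=1e-5):
    """One receding-horizon step: projected gradient on the input sequence,
    with the box constraint |u| <= U_MAX enforced by clipping."""
    u = u_warm.copy()
    for _ in range(iters):
        g = np.zeros_like(u)
        for i in range(N):          # simple numerical gradient
            d = np.zeros_like(u); d[i, 0] = eps
            g[i, 0] = (cost(x0, u + d) - cost(x0, u - d)) / (2 * eps)
        u = np.clip(u - lr * g, -U_MAX, U_MAX)
    return u

x, u = np.array([2.0, 0.0]), np.zeros((N, 1))
for _ in range(100):                # measure, optimize, apply, repeat
    u = mpc_step(x, u)
    x = A @ x + B @ u[0]            # apply only the first input
    u = np.roll(u, -1, axis=0); u[-1] = 0.0   # warm start the next step
```

The closed loop drives the state toward the origin while the input respects its bound at every step, which is exactly the constraint-handling capability the blurb highlights.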
The recent success of Reinforcement Learning and related methods can be attributed to several key factors. First, it is driven by reward signals obtained through interaction with the environment. Second, it is closely related to human learning behavior. Third, it has a solid mathematical foundation. Nonetheless, conventional Reinforcement Learning theory exhibits some shortcomings, particularly in continuous environments or in accounting for the stability and robustness of the controlled process. In this monograph, the authors build on Reinforcement Learning to present a learning-based approach for controlling dynamical systems from real-time data and review some major developments in this relatively young field. In doing so, the authors develop a framework for learning-based control theory that shows how to learn suboptimal controllers directly from input-output data. There are three main challenges in the development of learning-based control. First, there is a need to generalize existing recursive methods. Second, as a fundamental difference between learning-based control and Reinforcement Learning, stability and robustness are important issues that must be addressed for safety-critical engineering systems such as self-driving cars. Third, the data efficiency of Reinforcement Learning algorithms needs to be addressed for safety-critical engineering systems. This monograph provides the reader with an accessible primer on a new direction in control theory still in its infancy, namely Learning-Based Control Theory, which is closely tied to the literature on safe Reinforcement Learning and Adaptive Dynamic Programming.
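The idea of obtaining a controller from data rather than from a known model can be sketched with a simple certainty-equivalence baseline: identify the dynamics by least squares, then compute an LQR gain from the learned model. This is an indirect sketch chosen for brevity, not the monograph's direct learning-based methods; the plant matrices, noise level, and sample counts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.05, 0.1],    # unstable plant, unknown to the learner
                   [0.0,  0.9]])
B_true = np.array([[0.0],
                   [1.0]])

# 1) Collect input-state data by exciting the plant with random inputs.
X, U = [np.zeros(2)], []
for _ in range(100):
    u = rng.normal(size=1)
    U.append(u)
    X.append(A_true @ X[-1] + B_true @ u + 0.01 * rng.normal(size=2))

# 2) Fit [A B] by least squares: x_{t+1} ~ [A B] [x_t; u_t].
Z = np.hstack([np.array(X[:-1]), np.array(U)])   # regressors (100, 3)
Y = np.array(X[1:])                              # targets    (100, 2)
AB = np.linalg.lstsq(Z, Y, rcond=None)[0].T
A_hat, B_hat = AB[:, :2], AB[:, 2:]

# 3) LQR gain from the learned model via Riccati difference iteration:
#    P <- Q + A'P(A - BK),  K = (R + B'PB)^{-1} B'PA.
Q, R = np.eye(2), np.eye(1)
P = np.eye(2)
for _ in range(200):
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)

# The learned gain should stabilize the TRUE plant: spectral radius < 1.
eigs = np.linalg.eigvals(A_true - B_true @ K)
```

Under the monograph's framing, the open questions begin exactly where this sketch stops: guaranteeing stability and robustness of such data-derived controllers, and doing so with far less data than naive exploration requires.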