Deep Reinforcement Learning Methods for Autonomous Driving Safety and Interactivity

Author: Xiaobai Ma

Publisher:

Published: 2021

Total Pages:

ISBN-13:

To drive a vehicle fully autonomously, an intelligent system needs accurate perception and a comprehensive understanding of its surroundings, the ability to make reasonable predictions about how a scenario will evolve, and the ability to execute safe, comfortable, and efficient control actions. Currently, these requirements are mostly fulfilled by the intelligence of human drivers. Over the past decades, with the development of machine learning and computer science, artificial intelligence has begun to show better-than-human performance on more and more practical applications, while autonomous driving remains one of the most attractive and difficult unconquered challenges. This thesis studies the challenges of autonomous driving safety and of interaction with surrounding vehicles, and how deep reinforcement learning methods can help address them. Reinforcement learning (RL) is an important paradigm of machine learning that focuses on learning sequential decision-making policies through interaction with the task environment. Combined with deep neural networks, recent developments in deep reinforcement learning have shown promising results on control and decision-making tasks with high-dimensional observations and complex strategies, indicating a wide range of potential applications in autonomous driving. Focusing on autonomous driving safety and interactivity, this thesis presents novel contributions on safe and robust reinforcement learning, reinforcement learning-based safety testing, human driver modeling, and multi-agent reinforcement learning.

The thesis begins with the study of deep reinforcement learning methods for autonomous driving safety, the most critical concern for any autonomous driving system. We study the safety problem from two points of view: the first is the risk introduced by reinforcement learning control policies due to the mismatch between simulation and the real world; the second is deep reinforcement learning-based safety testing. In both problems we explore the use of adversarial reinforcement learning agents to find failures of the system, with different focuses: in the first, the RL adversary is trained and applied during the learning stage of the control policy to guide it toward more robust behaviors; in the second, the RL adversary is used at the test stage to find the most likely failures of the system. Different learning approaches are proposed and studied for the two problems.

Another fundamental challenge for autonomous driving is the interaction between the autonomous vehicle and its surrounding vehicles, which requires accurate modeling of the behavior of surrounding drivers. In the second and third parts of the thesis, we study the surrounding-driver modeling problem on three levels: the action distribution level, the latent state level, and the reasoning level. On the action distribution level, we explore advanced policy representations for modeling the complex distribution of drivers' control actions. On the latent state level, we study how to efficiently infer the latent states of surrounding drivers, such as their driving characteristics and intentions, and how this inference can be combined with the learning of autonomous driving decision-making policies. On the reasoning level, we investigate the reasoning process between multiple interacting agents and use it to build their behavior models through multi-agent reinforcement learning.
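
As a rough illustration of the adversarial setup described above (the environment, action sets, and learning rule here are invented for the example and are not the thesis's method), the following sketch alternates Q-learning updates for a protagonist that maximizes reward and an adversary that minimizes it in a toy car-following task:

```python
# Minimal sketch of adversarial RL for robust policy training (illustrative
# assumptions only; not the thesis implementation). A protagonist learns to
# keep a safe headway while an adversary picks worst-case lead-car braking.
import numpy as np

rng = np.random.default_rng(0)
N_GAP = 10              # discretized headway states
EGO_ACTS = [-1, 0, 1]   # brake / keep / accelerate
ADV_ACTS = [0, -1]      # lead car keeps speed / brakes hard

Q_ego = np.zeros((N_GAP, len(EGO_ACTS)))
Q_adv = np.zeros((N_GAP, len(ADV_ACTS)))

def step(gap, a_ego, a_adv):
    """Toy dynamics: headway changes with relative acceleration plus noise."""
    gap = int(np.clip(gap + a_ego + a_adv + rng.integers(-1, 2), 0, N_GAP - 1))
    crashed = gap == 0
    reward = -10.0 if crashed else 1.0 - 0.1 * abs(a_ego)  # ego's reward
    return gap, reward, crashed

def eps_greedy(Q, s, eps=0.1):
    return rng.integers(Q.shape[1]) if rng.random() < eps else int(Q[s].argmax())

for episode in range(2000):
    gap = N_GAP // 2
    for t in range(50):
        ae, aa = eps_greedy(Q_ego, gap), eps_greedy(Q_adv, gap)
        nxt, r, done = step(gap, EGO_ACTS[ae], ADV_ACTS[aa])
        # protagonist maximizes reward, adversary minimizes it (zero-sum)
        Q_ego[gap, ae] += 0.1 * (r + 0.95 * Q_ego[nxt].max() - Q_ego[gap, ae])
        Q_adv[gap, aa] += 0.1 * (-r + 0.95 * Q_adv[nxt].max() - Q_adv[gap, aa])
        gap = nxt
        if done:
            break

print("Ego policy (per headway state):", Q_ego.argmax(axis=1))
```

The same zero-sum pattern can be read either as robust training (adversary active during learning) or as safety testing (adversary run against a frozen policy to search for likely failures), matching the two uses of the RL adversary described in the thesis.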


Deep Learning for Autonomous Vehicle Control

Author: Sampo Kuutti

Publisher: Morgan & Claypool Publishers

Published: 2019-08-08

Total Pages: 82

ISBN-13: 168173608X

The next generation of autonomous vehicles will provide major improvements in traffic flow, fuel efficiency, and vehicle safety. Several challenges currently prevent the deployment of autonomous vehicles, one of which is robust and adaptable vehicle control. Designing a controller for autonomous vehicles that provides adequate performance in all driving scenarios is challenging due to the highly complex environment and the inability to test the system in the wide variety of scenarios it may encounter after deployment. However, deep learning methods have shown great promise not only in providing excellent performance for complex and non-linear control problems, but also in generalizing previously learned rules to new scenarios. For these reasons, the use of deep neural networks for vehicle control has gained significant interest. In this book, we introduce relevant deep learning techniques, discuss recent algorithms applied to autonomous vehicle control, identify strengths and limitations of available methods, discuss research challenges in the field, and provide insights into future trends in this rapidly evolving field.
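
As a minimal, hedged sketch of the kind of end-to-end neural vehicle control the book surveys (the architecture, shapes, and training data here are illustrative placeholders, not taken from the book), a small network can be trained by behavior cloning to map camera frames to a steering command:

```python
# Behavior-cloning sketch: a small network maps a flattened camera frame to a
# steering angle. Dummy tensors stand in for (image, expert steering) pairs.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self, in_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),   # steering normalized to [-1, 1]
        )

    def forward(self, x):
        return self.net(x.flatten(start_dim=1))

images = torch.randn(512, 3, 64, 64)      # placeholder camera frames
steer = torch.rand(512, 1) * 2 - 1        # placeholder expert labels

model = SteeringNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    pred = model(images)
    loss = loss_fn(pred, steer)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```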


Explainable Artificial Intelligence for Autonomous Vehicles

Author: Kamal Malik

Publisher: CRC Press

Published: 2024-08-14

Total Pages: 205

ISBN-13: 1040099297

Explainable AI for Autonomous Vehicles: Concepts, Challenges, and Applications is a comprehensive guide to developing and applying explainable artificial intelligence (XAI) in the context of autonomous vehicles. It begins with an introduction to XAI and its importance in developing autonomous vehicles, and provides an overview of the challenges and limitations of traditional black-box AI models and of how XAI can help address them by bringing transparency and interpretability to the decision-making process of autonomous vehicles. The book then covers state-of-the-art techniques and methods for XAI in autonomous vehicles, including model-agnostic approaches, post-hoc explanations, and local and global interpretability techniques. It also discusses the challenges and applications of XAI in autonomous vehicles, such as enhancing safety and reliability, improving user trust and acceptance, and enhancing overall system performance. Ethical and social considerations are addressed as well, such as the impact of XAI on user privacy and autonomy and the potential for bias and discrimination in XAI-based systems. Furthermore, the book provides insights into future directions and emerging trends in XAI for autonomous vehicles, such as integrating XAI with other advanced technologies like machine learning and blockchain, and the potential for XAI to enable new applications and services in the autonomous vehicle industry. Overall, the book aims to provide a comprehensive understanding of XAI and its applications in autonomous vehicles, helping readers develop XAI solutions that enhance the safety, reliability, and performance of autonomous vehicle systems while improving user trust and acceptance.

This book:

- Discusses authentication mechanisms for camera access, encryption protocols for data protection, and access control measures for camera systems.
- Showcases challenges such as integration with existing systems and privacy and security concerns when implementing explainable artificial intelligence in autonomous vehicles.
- Covers explainable artificial intelligence for resource management, optimization, adaptive control, and decision-making.
- Explains important topics such as vehicle-to-vehicle (V2V) communication, vehicle-to-infrastructure (V2I) communication, remote monitoring, and control.
- Emphasizes enhancing safety, reliability, and overall system performance, and improving user trust in autonomous vehicles.

The book is intended to provide researchers, engineers, and practitioners with a comprehensive understanding of XAI's key concepts, challenges, and applications in the context of autonomous vehicles. It is primarily written for senior undergraduate students, graduate students, and academic researchers in the fields of electrical engineering, electronics and communication engineering, computer science and engineering, information technology, and automotive engineering.
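
For a concrete flavor of the model-agnostic, post-hoc explanation techniques the book covers (the feature set, model, and baseline below are invented for illustration), an occlusion-style importance score can be computed for a black-box braking decision by replacing one input feature at a time with a reference value and measuring the change in the model's output:

```python
# Minimal sketch of a model-agnostic, post-hoc explanation: occlusion-based
# feature importance for a black-box "brake / no-brake" scorer.
import numpy as np

def black_box_brake_score(features):
    """Stand-in for an opaque driving model: returns P(brake).
    Features: [lead_gap_m, closing_speed_mps, ego_speed_mps, rain_flag]."""
    gap, closing, speed, rain = features
    logit = -0.3 * gap + 0.5 * closing + 0.05 * speed + 0.8 * rain
    return 1.0 / (1.0 + np.exp(-logit))

def occlusion_importance(x, baseline, score_fn):
    """Importance of feature i = score drop when it is replaced by its baseline."""
    base_score = score_fn(x)
    importances = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]
        importances.append(base_score - score_fn(x_pert))
    return np.array(importances)

x = np.array([12.0, 6.0, 20.0, 1.0])         # one specific driving situation
baseline = np.array([50.0, 0.0, 20.0, 0.0])  # "nothing unusual" reference
names = ["lead_gap", "closing_speed", "ego_speed", "rain"]
for name, imp in zip(names, occlusion_importance(x, baseline, black_box_brake_score)):
    print(f"{name:>14}: {imp:+.3f}")
```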


Exploiting Multi-Modal Fusion for Urban Autonomous Driving Using Latent Deep Reinforcement Learning

Author: Yasser Khalil

Publisher:

Published: 2022

Total Pages:

ISBN-13:

Human driving decisions are the leading cause of road fatalities. Autonomous driving naturally eliminates such incompetent decisions and thus can improve traffic safety and efficiency. Deep reinforcement learning (DRL) has shown great potential in learning complex tasks, and researchers have recently investigated various DRL-based approaches for autonomous driving. However, exploiting multi-modal fusion to generate pixel-wise perception and motion prediction and then leveraging these predictions to train a latent DRL agent has not yet been targeted. Unlike other DRL algorithms, a latent DRL algorithm separates representation learning from task learning, enhancing the sampling efficiency of reinforcement learning. In addition, supplying the latent DRL algorithm with accurate perception and motion prediction simplifies the surrounding urban scene, improving training and thus yielding a better driving policy. To that end, this Ph.D. research first develops LiCaNext, a novel real-time multi-modal fusion network that produces accurate joint perception and motion prediction at the pixel level. The proposed approach relies solely on a LIDAR sensor, whose multi-modal input is composed of bird's-eye view (BEV), range view (RV), and range residual images. The thesis then proposes leveraging these predictions, together with another simple BEV image, to train a sequential latent maximum entropy reinforcement learning (MaxEnt RL) algorithm. A sequential latent model is deployed to learn a compact latent representation from high-dimensional inputs, and the MaxEnt RL model is trained on this latent space to learn a driving policy. LiCaNext is trained on the public nuScenes dataset. Results demonstrate that LiCaNext operates in real time and outperforms the state of the art in perception and motion prediction, especially for small and distant objects. Furthermore, simulation experiments are conducted in CARLA to evaluate the proposed approach, which exploits LiCaNext predictions to train the sequential latent MaxEnt RL algorithm. These experiments show that the approach learns a better driving policy, outperforming other prevalent DRL-based algorithms. The learned policy achieves the objectives of safety, efficiency, and comfort, and remains effective under different environments and varying weather conditions.
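
A minimal sketch of the latent-DRL idea described above, assuming a convolutional encoder and a Gaussian policy head (the module names, input shapes, and dimensions are illustrative assumptions, not the thesis's LiCaNext or MaxEnt RL implementation): the encoder compresses a fused BEV-style observation into a compact latent vector, and the stochastic policy acts on that latent instead of raw pixels.

```python
# Latent-space policy sketch: encode a BEV-style observation, then sample an
# action from a Gaussian policy over the compact latent representation.
import torch
import torch.nn as nn

class BEVEncoder(nn.Module):
    def __init__(self, in_ch=3, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, in_ch, 64, 64)).shape[1]
        self.fc = nn.Linear(flat, latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

class GaussianPolicy(nn.Module):
    """Stochastic policy over [steer, throttle], MaxEnt-RL style
    (tanh-squash correction to the log-prob omitted for brevity)."""
    def __init__(self, latent_dim=32, act_dim=2):
        super().__init__()
        self.mu = nn.Linear(latent_dim, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, z):
        dist = torch.distributions.Normal(self.mu(z), self.log_std.exp())
        a = dist.rsample()
        return torch.tanh(a), dist.log_prob(a).sum(-1)

encoder, policy = BEVEncoder(), GaussianPolicy()
obs = torch.randn(4, 3, 64, 64)           # batch of fused BEV observations
action, logp = policy(encoder(obs))
print(action.shape, logp.shape)           # torch.Size([4, 2]) torch.Size([4])
```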


Deep Multi Agent Reinforcement Learning for Autonomous Driving

Author: Sushrut Bhalla

Publisher:

Published: 2020

Total Pages:

ISBN-13:

Deep learning and back-propagation have been successfully used to perform centralized training with communication protocols among multiple agents in cooperative Multi-Agent Deep Reinforcement Learning (MARL) environments. In this work, I present techniques for centralized training of MARL agents in large-scale environments and compare my work against current state-of-the-art techniques. This work uses a model-free Deep Q-Network (DQN) as the baseline model and allows inter-agent communication for cooperative policy learning. I present two novel, scalable, and centralized MARL training techniques (MA-MeSN, MA-BoN), developed under the principle that the behavior policy and the message/communication policy have different optimization criteria. Thus, this work presents models that separate the message learning module from the behavior policy learning module. As shown in the experiments, separating these modules leads to faster convergence in complex domains such as autonomous driving simulators and achieves better results than current techniques in the literature. Subsequently, this work presents two novel techniques for achieving decentralized execution of the communication-based cooperative policy. The first technique uses behavior cloning to clone an expert cooperative policy to a decentralized agent without message sharing. In the second, the behavior policy is coupled with a memory module that is local to each agent. This memory module is used by the independent agents to mimic the communication policies of the other agents and thus generate an independent behavior policy. This decentralized approach causes minimal degradation of the overall cumulative reward achieved by the centralized policy, and a fully decentralized approach allows us to address the challenges of noise and communication bottlenecks in real-time communication channels. In this work, I theoretically and empirically compare the centralized and decentralized training algorithms to current research in the field of MARL. As part of this thesis, I also developed a large-scale multi-agent testing environment: a new OpenAI Gym environment for large-scale multi-agent research that simulates multiple autonomous cars driving cooperatively on a highway in the presence of a bad actor. I compare the performance of the centralized algorithms to existing state-of-the-art algorithms, such as DIAL and IMS, based on the cumulative reward achieved per episode and other metrics. MA-MeSN and MA-BoN achieve a cumulative reward at least 263% higher than that achieved by DIAL and IMS. I also present an ablation study of the scalability of MA-BoN and show that the MA-MeSN and MA-BoN algorithms exhibit only a linear increase in inference time and number of trainable parameters, compared to a quadratic increase for DIAL.
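
The following sketch illustrates the separation of message and behavior-policy modules in a communication-based MARL agent (layer sizes, observation dimensions, and the greedy action rule are assumptions for illustration and do not reproduce MA-MeSN or MA-BoN):

```python
# Each agent encodes a message from its own observation; the behavior head
# conditions on its observation plus the other agents' messages.
import torch
import torch.nn as nn

class CommAgent(nn.Module):
    def __init__(self, obs_dim=8, msg_dim=4, n_actions=3, n_agents=3):
        super().__init__()
        self.msg_net = nn.Sequential(nn.Linear(obs_dim, 16), nn.ReLU(),
                                     nn.Linear(16, msg_dim))
        in_dim = obs_dim + msg_dim * (n_agents - 1)
        self.policy_net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                        nn.Linear(32, n_actions))

    def message(self, obs):
        return self.msg_net(obs)

    def act(self, obs, other_msgs):
        q = self.policy_net(torch.cat([obs] + other_msgs, dim=-1))
        return q.argmax(dim=-1), q

n_agents = 3
agents = [CommAgent(n_agents=n_agents) for _ in range(n_agents)]
obs = [torch.randn(1, 8) for _ in range(n_agents)]          # per-agent observations
msgs = [agent.message(o) for agent, o in zip(agents, obs)]  # communication round

for i, agent in enumerate(agents):
    others = [m for j, m in enumerate(msgs) if j != i]
    action, q_values = agent.act(obs[i], others)
    print(f"agent {i}: action {action.item()}, Q {q_values.detach().numpy().round(2)}")
```

Because the message network and the behavior (Q) network are separate modules, each can be optimized against its own criterion, which is the design principle the work describes.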


AI-enabled Technologies for Autonomous and Connected Vehicles

Author: Yi Lu Murphey

Publisher: Springer Nature

Published: 2022-09-07

Total Pages: 563

ISBN-13: 3031067800

This book reports on cutting-edge research and advances in the field of intelligent vehicle systems. It presents a broad range of AI-enabled technologies, with a focus on automated, autonomous, and connected vehicle systems. It covers advanced machine learning technologies, including deep and reinforcement learning algorithms, transfer learning, and learning from big data, as well as control theory applied to mobility and vehicle systems. Furthermore, it reports on cutting-edge technologies for environmental perception and vehicle-to-everything (V2X) communication, and discusses the socioeconomic and environmental implications of automated mobility as well as aspects related to human factors and energy efficiency. Gathering chapters written by renowned researchers and professionals, this book offers a good balance of theoretical and practical knowledge. It provides researchers, practitioners, and policy makers with a comprehensive and timely guide to the field of autonomous driving technologies.


Safe and Scalable Planning Under Uncertainty for Autonomous Driving

Author: Maxime Thomas Marcel Bouton

Publisher:

Published: 2020

Total Pages:

ISBN-13:

Autonomous driving has the potential to significantly improve safety. Although progress has been made in recent years toward deploying automated driving technologies, many situations that human drivers handle on a daily basis remain challenging for autonomous vehicles, such as navigating urban environments. They must reach their goal safely and efficiently while considering a multitude of traffic participants with rapidly changing behavior. Hand-engineering strategies to navigate such environments requires anticipating many possible situations and finding a suitable behavior for each, which places a large burden on the designer and is unlikely to scale to complicated situations. In addition, autonomous vehicles rely on on-board perception systems that give noisy estimates of the location and velocity of other road users and are sensitive to occlusions. Autonomously navigating urban environments therefore requires algorithms that reason about interactions with and between traffic participants using limited information. This thesis addresses the problem of automatically generating decision-making strategies for autonomous vehicles in urban environments. Previous approaches have relied on planning with respect to a mathematical model of the environment but have many limitations. A partially observable Markov decision process (POMDP) is a standard model for sequential decision-making problems in dynamic, uncertain environments with imperfect sensor measurements. This thesis demonstrates a generic representation of driving scenarios as POMDPs that accounts for sensor occlusions and interactions between road users. A key contribution of this thesis is a methodology for scaling POMDP approaches to complex environments involving a large number of traffic participants. To reduce the computational cost of considering multiple traffic participants, a decomposition method is introduced that leverages strategies for interacting with a subset of road users. Decomposition methods can approximate the solutions to large sequential decision-making problems at the expense of optimality. This thesis introduces a new algorithm that uses deep reinforcement learning to bridge the gap with the optimal solution. Establishing trust in the generated decision strategies is also necessary for the deployment of autonomous vehicles. Methods to constrain a policy trained using reinforcement learning are introduced and combined with the proposed decomposition techniques, making it possible to learn policies with safety constraints. To address state uncertainty, a new methodology for computing probabilistic safety guarantees in partially observable domains is introduced and shown to be more flexible and more scalable than previous work. The algorithmic contributions of this thesis are applied to a variety of driving scenarios. Each algorithm is evaluated in simulation and compared to previous work. It is shown that the POMDP formulation, in combination with scalable solving methods, provides a flexible framework for planning under uncertainty for autonomous driving.
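
A minimal sketch of the decomposition idea (the pairwise Q-function, safety estimates, and threshold below are stand-ins invented for illustration, not the thesis's solvers): the ego value of each action is combined conservatively across per-participant sub-problems, and actions whose estimated probability of remaining safe falls below a threshold are masked out.

```python
# Combine per-participant action values with a worst-case (min) operator and
# constrain the choice to actions estimated to stay safe.
import numpy as np

ACTIONS = ["brake", "keep", "accelerate"]

def pairwise_q(ego_state, participant_state):
    """Stand-in for a Q-function solved for a single ego/participant pair."""
    gap = participant_state["gap"]
    return np.array([gap - 2.0, gap - 5.0, gap - 8.0])  # tighter gap favors braking

def p_safe(ego_state, participant_state):
    """Stand-in for an estimated probability of remaining collision-free."""
    gap = participant_state["gap"]
    return np.clip(np.array([0.99, gap / 20.0, gap / 40.0]), 0.0, 1.0)

def decomposed_policy(ego_state, participants, safety_threshold=0.9):
    # Worst case over participants for both value and safety estimates.
    q = np.min([pairwise_q(ego_state, p) for p in participants], axis=0)
    safe = np.min([p_safe(ego_state, p) for p in participants], axis=0)
    q = np.where(safe >= safety_threshold, q, -np.inf)   # mask unsafe actions
    return ACTIONS[int(np.argmax(q))]

participants = [{"gap": 15.0}, {"gap": 7.0}]
print(decomposed_policy({"speed": 10.0}, participants))   # -> "brake"
```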


Formal Language Constraints in Deep Reinforcement Learning for Self-driving Vehicles

Author: Tyler Bienhoff

Publisher:

Published: 2020

Total Pages: 70

ISBN-13:

In recent years, self-driving vehicles have become a holy grail technology that, once fully developed, could radically change people's daily behaviors and enhance safety. The complexities of controlling a car in a constantly changing environment are too great to directly program how the vehicle should behave in every specific scenario. Thus, a common technique when developing autonomous vehicles is reinforcement learning, in which vehicles can be trained in simulated and real-world environments to make proper decisions in a wide variety of scenarios. Reinforcement learning models, however, are uncertain in how the vehicle will act, especially in previously unseen situations, which can be dangerous with humans onboard or nearby. To improve the safety of the agent, we propose formal language constraints that augment a standard reinforcement learning agent while it is trained in a simulated self-driving environment. The constraints help the vehicle navigate turns and other situations by penalizing the agent when it chooses an action that could lead to a dangerous outcome such as a collision. Empirically, we show that the constrained agent achieves a slight performance improvement as well as a significant decrease in collisions. Future work can expand on the current constraints and evaluate different reinforcement learning algorithms with constraints for training the self-driving agent.
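
As a hedged sketch of how a formal-language constraint can be wired into reward shaping (the automaton, action names, and penalty value are illustrative assumptions rather than the thesis's constraints), a small finite-state monitor flags a forbidden action pattern and subtracts a penalty from the environment reward when it fires:

```python
# A tiny automaton rejects the two-symbol pattern (accelerate, hard_brake);
# violations are penalized in the shaped reward seen by the RL agent.
class ConstraintAutomaton:
    def __init__(self):
        self.state = "start"

    def step(self, action):
        violated = self.state == "after_accel" and action == "hard_brake"
        self.state = "after_accel" if action == "accelerate" else "start"
        return violated

def shaped_reward(env_reward, action, automaton, penalty=5.0):
    return env_reward - (penalty if automaton.step(action) else 0.0)

automaton = ConstraintAutomaton()
actions = ["accelerate", "hard_brake", "keep", "accelerate", "keep"]
for a in actions:
    r = shaped_reward(env_reward=1.0, action=a, automaton=automaton)
    print(f"{a:>10}: shaped reward {r:+.1f}")
```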


Deep Neural Networks and Data for Automated Driving

Author: Tim Fingscheidt

Publisher: Springer Nature

Published: 2022-07-19

Total Pages: 435

ISBN-13: 303101233X

This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How can we use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: how do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem, particularly for DNNs employed in automated driving: what are useful validation techniques, and how about safety? This book unites the views of academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at its core. This book is unique: its first part provides an extended survey of all the relevant aspects, and the second part contains the detailed technical elaboration of the various questions mentioned above.
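
One common way to obtain the kind of uncertainty signal the book asks about is Monte Carlo dropout; the sketch below (model, shapes, and sample count are illustrative assumptions, not taken from the book) keeps dropout active at inference time and uses the predictive entropy of the averaged class probabilities as a per-input uncertainty score:

```python
# Monte Carlo dropout: run several stochastic forward passes and measure the
# spread of the predictions as an uncertainty estimate.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_dim=16, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    model.train()                      # keep dropout active on purpose
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    # predictive entropy as a simple per-input uncertainty score
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=-1)
    return mean, entropy

model = TinyClassifier()
x = torch.randn(5, 16)                 # stand-in for perception features
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1), uncertainty)
```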


Autonomous Driving Algorithms and Its IC Design

Author: Jianfeng Ren

Publisher: Springer Nature

Published: 2023-08-09

Total Pages: 306

ISBN-13: 9819928974

With the rapid development of artificial intelligence and the emergence of various new sensors, autonomous driving has grown in popularity in recent years. The implementation of autonomous driving requires new sources of sensory data, such as cameras, radars, and lidars, and the algorithmic processing requires a high degree of parallel computing. In this regard, traditional CPUs have insufficient computing power, while DSPs are good at image processing but lack sufficient performance for deep learning. Although GPUs are good at training, they are too “power-hungry,” which can affect vehicle performance. Therefore, this book looks to the future, arguing that custom ASICs are bound to become mainstream. With the goal of IC design for autonomous driving, this book discusses the theory and engineering practice of designing future-oriented autonomous driving SoC chips. The content is divided into thirteen chapters. Chapter 1 introduces readers to the current challenges and research directions in autonomous driving. Chapters 2–6 focus on algorithm design for perception and planning control. Chapters 7–10 address the optimization of deep learning models and the design of deep learning chips, while Chapters 11–12 cover autonomous driving software architecture design. Chapter 13 discusses 5G applications in autonomous driving. This book is suitable for all undergraduates, graduate students, and engineering technicians who are interested in autonomous driving.