This book is about making machine learning models and their decisions interpretable. After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules, and linear regression. Later chapters focus on general model-agnostic methods for interpreting black-box models, such as feature importance and accumulated local effects, and on explaining individual predictions with Shapley values and LIME. All interpretation methods are explained in depth and discussed critically: How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project.
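To make two of the methods named above concrete, here is a minimal sketch in Python. It is not taken from the book: it assumes scikit-learn and the shap package are available, and the dataset and model are illustrative choices only. It computes a global permutation feature importance and then Shapley values for a single prediction.

```python
# Minimal sketch of two methods named above: permutation feature importance
# (global, model-agnostic) and Shapley values (local explanation).
# Assumes scikit-learn and the shap package; the dataset and model are
# illustrative choices, not taken from the book.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
import shap

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Global view: how much does shuffling each feature degrade the model?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1])[:3]:
    print(f"{name}: mean importance {imp:.3f}")

# Local view: Shapley values attributing one prediction to the features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
print(dict(zip(X.columns, shap_values[0].round(2))))
```

LIME (via the lime package) and accumulated local effects follow the same spirit: perturb the inputs, observe the model's responses, and summarize them into an explanation.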
The key idea of this book is that hinging hyperplanes, neural networks and support vector machines can be transformed into fuzzy models, and that the interpretability of the resulting rule-based systems can be ensured by special model reduction and visualization techniques. The first part of the book deals with the identification of hinging hyperplane-based regression trees. The next part deals with the validation, visualization and structural reduction of neural networks, based on the transformation of the hidden layer of the network into an additive fuzzy rule-base system. Finally, based on the analogy between support vector regression and fuzzy models, a three-step model reduction algorithm is proposed to obtain interpretable fuzzy regression models from support vector regression. The authors demonstrate real-world use of the algorithms with examples taken from process engineering, and they support the text with downloadable MATLAB code. The book is suitable for researchers, graduate students and practitioners in the areas of computational intelligence and machine learning.
This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning. Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: What is the rationale behind the decisions made? What in the model structure explains its functioning? Hence, while good performance is a critical characteristic of learning machines, explainability and interpretability capabilities are needed to take learning machines to the next step and include them in decision support systems involving human supervision. This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following:
· Evaluation and Generalization in Interpretable Machine Learning
· Explanation Methods in Deep Learning
· Learning Functional Causal Models with Generative Neural Networks
· Learning Interpretable Rules for Multi-Label Classification
· Structuring Neural Networks for More Explainable Predictions
· Generating Post Hoc Rationales of Deep Visual Classification Decisions
· Ensembling Visual Explanations
· Explainable Deep Driving by Visualizing Causal Attention
· Interdisciplinary Perspective on Algorithmic Job Candidate Search
· Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions
· Inherent Explainability of Pattern Theory-based Video Event Interpretations
The development of “intelligent” systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques proposed in recent years, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
The work described in this book was first presented at the Second Workshop on Genetic Programming Theory and Practice, organized by the Center for the Study of Complex Systems at the University of Michigan, Ann Arbor, 13-15 May 2004. The goal of this workshop series is to promote the exchange of research results and ideas between those who focus on Genetic Programming (GP) theory and those who focus on the application of GP to various real-world problems. In order to facilitate these interactions, the number of talks and participants was kept small and the time for discussion was large. Further, participants were asked to review each other's chapters before the workshop. Those reviewer comments, as well as discussion at the workshop, are reflected in the chapters presented in this book. Additional information about the workshop, addenda to chapters, and a site for continuing discussions by participants and by others can be found at http://cscs.umich.edu:8000/GPTP-2004. We thank all the workshop participants for making the workshop an exciting and productive three days. In particular we thank all the authors, without whose hard work and creative talents neither the workshop nor the book would have been possible. We also thank our keynote speakers Lawrence ("Dave") Davis of NuTech Solutions, Inc., Jordan Pollack of Brandeis University, and Richard Lenski of Michigan State University, who delivered three thought-provoking speeches that inspired a great deal of discussion among the participants.
Machine Learning and Artificial Intelligence in Radiation Oncology: A Guide for Clinicians is designed for the application of practical concepts in machine learning to clinical radiation oncology. It addresses the existing void in resources to educate practicing clinicians about how machine learning can be used to improve clinical and patient-centered outcomes. This book is divided into three sections: the first addresses fundamental concepts of machine learning and radiation oncology, detailing techniques applied in genomics; the second section discusses translational opportunities, such as in radiogenomics and autosegmentation; and the final section encompasses current clinical applications in clinical decision making, how to integrate AI into the workflow, use cases, and cross-collaborations with industry. The book is a valuable resource for oncologists, radiologists and other members of the biomedical field who need to learn more about machine learning as a support for radiation oncology.
- Presents content written by practicing clinicians and research scientists, allowing a healthy mix of new clinical ideas and perspectives on how to translate research findings into the clinic
- Provides perspectives from artificial intelligence (AI) industry researchers to discuss novel theoretical approaches and possibilities for academic collaboration
- Brings diverse points of view from an international group of experts to provide more balanced viewpoints on a complex topic
This two-volume set LNAI 14471-14472 constitutes the refereed proceedings of the 36th Australasian Joint Conference on Artificial Intelligence, AI 2023, held in Brisbane, QLD, Australia, during November 28 – December 1, 2023. The 23 full papers presented together with 59 short papers were carefully reviewed and selected from 213 submissions. They are organized under the following topics: computer vision; deep learning; machine learning and data mining; optimization; medical AI; knowledge representation and NLP; explainable AI; reinforcement learning; and genetic algorithms.
This book is written both for readers entering the field and for practitioners with a background in AI and an interest in developing real-world applications. The book is a great resource for practitioners and researchers in both industry and academia, and the discussed case studies and associated material can serve as inspiration for a variety of projects and hands-on assignments in a classroom setting. I will certainly keep this book as a personal resource for the courses I teach, and strongly recommend it to my students. --Dr. Carlotta Domeniconi, Associate Professor, Computer Science Department, GMU

This book offers a curriculum for introducing interpretability to machine learning at every stage. The authors provide compelling examples showing that a core teaching practice like leading interpretive discussions can be taught and learned through sustained effort. And what better way to strengthen the quality of AI and machine learning outcomes? I hope that this book will become a primer for teachers, data science educators, and ML developers, and that together we practice the art of interpretive machine learning. --Anusha Dandapani, Chief Data and Analytics Officer, UNICC and Adjunct Faculty, NYU

This is a wonderful book! I’m pleased that the next generation of scientists will finally be able to learn this important topic. This is the first book I’ve seen that has up-to-date and well-rounded coverage. Thank you to the authors! --Dr. Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, and Biostatistics & Bioinformatics

Literature on Explainable AI has until now been relatively scarce and has featured mainly mainstream algorithms like SHAP and LIME. This book closes that gap by providing an extremely broad review of the algorithms proposed in scientific circles over the previous 5-10 years. It is a great guide for anyone who is new to the field of XAI, or who is already familiar with the field and willing to expand their knowledge. A comprehensive review of state-of-the-art explainable AI methods, starting from visualization and interpretable methods through local and global explanations and time series methods, and finishing with deep learning, provides an unparalleled source of information currently unavailable anywhere else. Additionally, notebooks with vivid examples are a great supplement that makes the book even more attractive for practitioners of any level. Overall, the authors provide readers with an enormous breadth of coverage without losing sight of practical aspects, which makes this book truly unique and a great addition to the library of any data scientist. --Dr. Andrey Sharapov, Product Data Scientist, Explainable AI Expert and Speaker, Founder of Explainable AI-XAI Group
Unveiling the Future: Your Portal to Artificial Intelligence Proficiency

In the epoch of digital metamorphosis, Artificial Intelligence (AI) stands as the vanguard of a new dawn, a nexus where human ingenuity intertwines with machine precision. As we delve deeper into this uncharted realm, the boundary between the conceivable and the fantastical continually blurs, heralding a new era of endless possibilities. The Dictionary of Artificial Intelligence, embracing a compendium of 3,300 meticulously curated titles, endeavors to be the torchbearer in this journey of discovery, offering a wellspring of knowledge to both the uninitiated and the adept.

Turning the pages of this dictionary is akin to embarking on a voyage through the vast and often turbulent seas of AI. Each entry serves as a beacon, illuminating complex terminologies, core principles, and the avant-garde advancements that characterize this dynamic domain. The dictionary is more than a mere compilation of terms; it is a labyrinth of understanding waiting to be traversed.

The Dictionary of Artificial Intelligence is an endeavor to demystify the arcane, to foster a shared lexicon that enhances collaboration, innovation, and comprehension across the AI community. Its mission is to bridge the chasm between ignorance and insight, to unravel the intricacies of AI that often seem enigmatic to outsiders. This reference transcends being a passive repository of terms; it is an engagement with the multifaceted domain of artificial intelligence. Each title encapsulated within these pages is a testament to the audacity of human curiosity and the unyielding quest for advancement that propels the AI domain forward.

The Dictionary of Artificial Intelligence is an invitation to delve deeper, to grapple with the lexicon of a field that stands at the cusp of redefining the very fabric of society. It is a conduit through which the curious become enlightened, the proficient become masters, and the innovators find inspiration. As you traverse its entries, you embark on a journey of discovery, one that not only augments your understanding but also ignites the spark of curiosity and the drive for innovation that are quintessential in navigating the realms of AI. We beckon you to commence this educational expedition, to explore the breadth and depth of the AI lexicon, and to emerge with a deeper understanding and an unyielding resolve to contribute to the ever-evolving narrative of artificial intelligence. Through The Dictionary of Artificial Intelligence, may your quest for knowledge be as boundless and exhilarating as the domain it explores.