Multimodal Scene Understanding: Algorithms, Applications and Deep Learning presents recent advances in multi-modal computing, with a focus on computer vision and photogrammetry. It provides the latest algorithms and applications that involve combining multiple sources of information, and describes the role and approaches of multi-sensory data and multi-modal deep learning. The book is ideal for researchers from the fields of computer vision, remote sensing, robotics, and photogrammetry, helping to foster interdisciplinary interaction and collaboration between these fields. Researchers collecting and analyzing multi-sensory data collections – for example, the KITTI benchmark (stereo + laser) – from different platforms, such as autonomous vehicles, surveillance cameras, UAVs, planes, and satellites, will find this book very useful.
- Contains state-of-the-art developments on multi-modal computing
- Focuses on algorithms and applications
- Presents novel deep learning topics on multi-sensor fusion and multi-modal deep learning
Dedicated to remote sensing images, from their acquisition to their use in various applications, this book covers the global lifecycle of images, including sensors and acquisition systems, applications such as movement monitoring or data assimilation, and image and data processing. It is organized in three main parts. The first part presents technological information about remote sensing (choice of satellite orbit and sensors) and elements of physics related to sensing (optics and microwave propagation). The second part presents image processing algorithms and their specificities for radar or optical, multi- and hyperspectral images. The final part is devoted to applications: change detection and analysis of time series, elevation measurement, displacement measurement, and data assimilation. Offering a comprehensive survey of the domain of remote sensing imagery with a multi-disciplinary approach, this book is suitable for graduate students and engineers with backgrounds either in computer science and applied mathematics (signal and image processing) or geophysics.
About the Authors
Florence Tupin is Professor at Telecom ParisTech, France. Her research interests include remote sensing imagery, image analysis and interpretation, three-dimensional reconstruction, and synthetic aperture radar, especially for urban remote sensing applications.
Jordi Inglada works at the Centre National d’Études Spatiales (French Space Agency), Toulouse, France, in the field of remote sensing image processing at the CESBIO laboratory. He is in charge of the development of image processing algorithms for the operational exploitation of Earth observation images, mainly in the field of multi-temporal image analysis for land use and cover change.
Jean-Marie Nicolas is Professor at Telecom ParisTech in the Signal and Imaging department. His research interests include the modeling and processing of synthetic aperture radar images.
Micro-videos, a new form of user-generated content, have been spreading widely across various social platforms, such as Vine, Kuaishou, and TikTok. Unlike traditional long videos, micro-videos are usually recorded by smart mobile devices at any place within a few seconds. Due to their brevity and low bandwidth cost, micro-videos are gaining increasing user enthusiasm. The blossoming of micro-videos opens the door to many promising applications, ranging from network content caching to online advertising. Thus, it is highly desirable to develop an effective scheme for high-order micro-video understanding. Micro-video understanding is, however, non-trivial due to the following challenges: (1) how to represent micro-videos that convey only one or a few high-level themes or concepts; (2) how to utilize the hierarchical structure of venue categories to guide micro-video analysis; (3) how to alleviate the influence of low quality caused by complex surrounding environments and camera shake; (4) how to model multimodal sequential data, i.e., textual, acoustic, visual, and social modalities, to enhance micro-video understanding; and (5) how to construct large-scale benchmark datasets for the analysis. These challenges have been largely unexplored to date. In this book, we focus on addressing these challenges by proposing some state-of-the-art multimodal learning theories. To demonstrate the effectiveness of these models, we apply them to three practical tasks of micro-video understanding: popularity prediction, venue category estimation, and micro-video routing. In particular, we first build three large-scale real-world micro-video datasets for these practical tasks. We then present a multimodal transductive learning framework for micro-video popularity prediction. Furthermore, we introduce several multimodal cooperative learning approaches and a multimodal transfer learning scheme for micro-video venue category estimation. Meanwhile, we develop a multimodal sequential learning approach for micro-video recommendation. Finally, we conclude the book and outline future research directions in multimodal learning toward micro-video understanding.
Natural interaction between humans and machines mainly involves human-machine dialogue, multi-modal sentiment analysis, human-machine cooperation, and related capabilities. To interact naturally, intelligent computers must be equipped with strong multi-modal sentiment analysis capabilities during human-computer interaction; this is one of the key technologies for efficient and intelligent human-computer interaction. This book focuses on the research and practical applications of multi-modal sentiment analysis for human-computer natural interaction, particularly in the areas of multi-modal information feature representation, feature fusion, and sentiment classification. Multi-modal sentiment analysis for natural interaction is a comprehensive research field that involves the integration of natural language processing, computer vision, machine learning, pattern recognition, algorithms, intelligent robotic systems, human-computer interaction, etc. Research on multi-modal sentiment analysis in natural interaction is currently developing rapidly. This book can be used as a professional textbook in the fields of natural interaction, intelligent question answering (customer service), natural language processing, human-computer interaction, etc. It can also serve as an important reference book for the development of systems and products in intelligent robots, natural language processing, human-computer interaction, and related fields.
Multimodal Behavioral Analysis in the Wild: Advances and Challenges presents the state-of-the-art in behavioral signal processing using different data modalities, with a special focus on identifying the strengths and limitations of current technologies. The book focuses on audio and video modalities, while also emphasizing emerging modalities such as accelerometer or proximity data. It covers tasks at different levels of complexity, from low level (speaker detection, sensorimotor links, source separation), through middle level (conversational group detection, addresser and addressee identification), to high level (personality and emotion recognition), providing insights on how to exploit inter-level and intra-level links. This is a valuable resource on the state-of-the-art and future research challenges of multi-modal behavioral analysis in the wild. It is suitable for researchers and graduate students in the fields of computer vision, audio processing, pattern recognition, machine learning, and social signal processing.
- Gives a comprehensive collection of information on the state-of-the-art, limitations, and challenges associated with extracting behavioral cues from real-world scenarios
- Presents numerous applications showing how different behavioral cues have been successfully extracted from different data sources
- Provides a wide variety of methodologies used to extract behavioral cues from multi-modal data
As explored in this open access book, higher education in STEM fields is influenced by many factors, including education research, government and school policies, financial considerations, technology limitations, and acceptance of innovations by faculty and students. In 2018, Drs. Ryoo and Winkelmann explored the opportunities, challenges, and future research initiatives of innovative learning environments (ILEs) in higher education STEM disciplines in their pioneering project: eXploring the Future of Innovative Learning Environments (X-FILEs). Workshop participants evaluated four main ILE categories: personalized and adaptive learning, multimodal learning formats, cross/extended reality (XR), and artificial intelligence (AI) and machine learning (ML). This open access book gathers the perspectives expressed during the X-FILEs workshop and its follow-up activities. It is designed to help inform education policy makers, researchers, developers, and practitioners about the adoption and implementation of ILEs in higher education.
Kickstart your NLP journey by exploring BERT and its variants such as ALBERT, RoBERTa, DistilBERT, VideoBERT, and more with Hugging Face's transformers library.
Key Features
- Explore the encoder and decoder of the transformer model
- Become well-versed with BERT along with ALBERT, RoBERTa, and DistilBERT
- Discover how to pre-train and fine-tune BERT models for several NLP tasks
Book Description
BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through M-BERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT. By the end of this BERT book, you'll be well-versed with using BERT and its variants for performing practical NLP tasks.
What you will learn
- Understand the transformer model from the ground up
- Find out how BERT works and pre-train it using masked language model (MLM) and next sentence prediction (NSP) tasks
- Get hands-on with BERT by learning to generate contextual word and sentence embeddings
- Fine-tune BERT for downstream tasks
- Get to grips with ALBERT, RoBERTa, ELECTRA, and SpanBERT models
- Get the hang of the BERT models based on knowledge distillation
- Understand cross-lingual models such as XLM and XLM-R
- Explore Sentence-BERT, VideoBERT, and BART
Who this book is for
This book is for NLP professionals and data scientists looking to simplify NLP tasks to enable efficient language understanding using BERT. A basic understanding of NLP concepts and deep learning is required to get the best out of this book.
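Since this blurb centers on generating contextual embeddings with pre-trained BERT and running downstream tasks through the Hugging Face transformers library, a minimal sketch of that workflow follows. It is not code from the book; the checkpoint names (bert-base-uncased, distilbert-base-uncased-finetuned-sst-2-english) and the [CLS]-token pooling are illustrative assumptions.

# Minimal sketch (not from the book): contextual embeddings from a
# pre-trained BERT, plus a downstream sentiment task via a pipeline.
import torch
from transformers import AutoTokenizer, AutoModel, pipeline

# Load a pre-trained BERT checkpoint and its tokenizer (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Contextual word embeddings: one vector per (sub)word token.
inputs = tokenizer("I love Paris", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
token_embeddings = outputs.last_hidden_state    # shape: (1, seq_len, 768)

# A crude sentence embedding via the [CLS] token's hidden state
# (Sentence-BERT, covered later in the book, does this far better).
sentence_embedding = token_embeddings[:, 0, :]  # shape: (1, 768)

# Downstream task with an already fine-tuned checkpoint: sentiment analysis.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("I love Paris"))  # e.g. [{'label': 'POSITIVE', 'score': ...}]

In practice, fine-tuning for your own task replaces the ready-made pipeline checkpoint with a classification head trained on your labeled data; the pipeline call here simply shows the inference end of that workflow.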
This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning. The theory and algorithms of neural networks are treated in depth so that the reader can grasp the design principles behind neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book also discusses a wide range of applications to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications in many different areas, such as recommender systems, machine translation, image captioning, image classification, reinforcement-learning-based gaming, and text analytics, are covered. The chapters of this book span three categories:
- The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. An emphasis is placed in the first two chapters on understanding the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. These methods are studied together with recent feature engineering methods like word2vec. (A small sketch of this idea follows the list below.)
- Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.
- Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks. Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10.
The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.
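To make the claim concrete that traditional models are special cases of neural networks, here is a minimal sketch (not the book's code; the synthetic data, learning rate, and iteration count are arbitrary choices) of logistic regression written as a single sigmoid neuron trained by gradient descent:

# Minimal sketch: logistic regression as a one-neuron neural network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # 200 samples, 2 features (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w = np.zeros(2)  # the neuron's weights
b = 0.0          # the neuron's bias
lr = 0.1         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)           # forward pass of the single "neuron"
    grad_w = X.T @ (p - y) / len(y)  # gradient of the mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                 # for one layer, backpropagation reduces
    b -= lr * grad_b                 # to this single gradient step

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")

Stacking further layers of such neurons, with nonlinear activations between them, is exactly what turns this special case into a general feed-forward network.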
This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning. Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: What is the rationale behind the decision made? What in the model structure explains its functioning? Hence, while good performance is a critical characteristic of learning machines, explainability and interpretability capabilities are needed to take learning machines to the next step and include them in decision support systems involving human supervision. This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following:
- Evaluation and Generalization in Interpretable Machine Learning
- Explanation Methods in Deep Learning
- Learning Functional Causal Models with Generative Neural Networks
- Learning Interpretable Rules for Multi-Label Classification
- Structuring Neural Networks for More Explainable Predictions
- Generating Post Hoc Rationales of Deep Visual Classification Decisions
- Ensembling Visual Explanations
- Explainable Deep Driving by Visualizing Causal Attention
- Interdisciplinary Perspective on Algorithmic Job Candidate Search
- Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions
- Inherent Explainability: Pattern Theory-based Video Event Interpretations