This book provides a synthesis of the multifaceted field of interactive multimodal information management. The subjects treated include spoken language processing, image and video processing, document and handwriting analysis, identity information, and interfaces. The book concludes with an overview of highlights of the field's progress.
This volume presents high-quality, state-of-the-art research ideas and results from theoretic, algorithmic, and application viewpoints. It contains contributions by leading experts in the scientific and technological field of multimedia. The book specifically focuses on interaction with multimedia content, with special emphasis on multimodal interfaces for accessing multimedia information. The book is designed for a professional audience of practitioners and researchers in industry. It is also suitable for advanced-level students in computer science.
In the past twenty years, computers and networks have gained a prominent role in supporting human communications. This book presents recent research in multimodal information processing, which demonstrates that computers can support far more than telephone calls or videoconferencing. The book offers a snapshot of current capabilities for the analysis of human communications in several modalities – audio, speech, language, images, video, and documents – and for accessing this information interactively. The book has a clear application goal: the capture, automatic analysis, storage, and retrieval of multimodal signals from human interaction in meetings. This goal provides a controlled experimental framework and helps generate the shared data required by methods based on machine learning. It has shaped the vision of the contributors to the book and of many other researchers cited in it, and it has received significant long-term support through a series of projects, including the Swiss National Center of Competence in Research (NCCR) in Interactive Multimodal Information Management (IM2), to which the contributors to the book have been connected.
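To make the capture-and-retrieval goal concrete, here is a minimal Python sketch of an archive of annotated multimodal meeting segments. The `Segment` and `MeetingArchive` types and their fields are illustrative inventions, not an API from the book or the IM2 project.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One annotated span of a recorded meeting, in a single modality."""
    modality: str    # e.g. "speech", "video", "slides", "handwriting"
    start: float     # seconds from the start of the meeting
    end: float
    annotation: str  # e.g. an ASR transcript or a detected-event label

@dataclass
class MeetingArchive:
    """Toy store for captured multimodal meeting signals."""
    segments: list[Segment] = field(default_factory=list)

    def add(self, segment: Segment) -> None:
        self.segments.append(segment)

    def query(self, keyword: str, modality: str | None = None) -> list[Segment]:
        """Retrieve segments whose annotation mentions a keyword,
        optionally restricted to one modality."""
        return [
            s for s in self.segments
            if keyword.lower() in s.annotation.lower()
            and (modality is None or s.modality == modality)
        ]

archive = MeetingArchive()
archive.add(Segment("speech", 12.0, 18.5, "let's revisit the project budget"))
archive.add(Segment("slides", 12.0, 60.0, "Q3 budget overview"))
print(archive.query("budget", modality="speech"))
```

A real system would of course back this with signal processing and a proper index; the point is only that a shared, queryable representation of multimodal recordings is what makes the machine-learning methods above feasible.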
This book systematically addresses the quantification of quality aspects of multimodal interactive systems. The conceptual structure is based on a schematic view of human-computer interaction in which the user interacts with the system and perceives it via input and output interfaces. Aspects of multimodal interaction are analyzed first, followed by a discussion of the evaluation of output and input, and finally a view on the evaluation of a complete system.
The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also takes an in-depth look at the most common multimodal-multisensor combinations, for example touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. The chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of the volume, experts exchange views on a timely and controversial challenge topic and on how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.
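As a rough illustration of how a multimodal-multisensor interface might combine modalities, the following sketch performs simple weighted late fusion of per-modality recognition scores. The function, labels, and weights are hypothetical stand-ins, not an algorithm taken from the handbook.

```python
def late_fusion(hypotheses: dict[str, dict[str, float]],
                weights: dict[str, float]) -> str:
    """Combine per-modality recognition scores with a weighted sum.

    `hypotheses` maps a modality name (e.g. "speech", "gesture") to a
    score distribution over candidate interpretations; `weights` expresses
    how much each modality is trusted, e.g. tuned per user or per context.
    """
    combined: dict[str, float] = {}
    for modality, scores in hypotheses.items():
        w = weights.get(modality, 1.0)
        for label, score in scores.items():
            combined[label] = combined.get(label, 0.0) + w * score
    return max(combined, key=combined.get)

# Speech alone is ambiguous between "delete" and "repeat"; a pointing
# gesture toward the trash icon tips the decision.
command = late_fusion(
    {"speech":  {"delete": 0.48, "repeat": 0.52},
     "gesture": {"delete": 0.90, "repeat": 0.10}},
    weights={"speech": 1.0, "gesture": 0.6},
)
print(command)  # "delete": 0.48 + 0.54 = 1.02 beats "repeat": 0.52 + 0.06
```

Real systems often fuse earlier (at the feature level) or use learned combination models, but weighted late fusion is the simplest way to see why a second modality can disambiguate a first.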
Adaptive Multimodal Interactive Systems introduces a general framework for adapting multimodal interactive systems and comprises a detailed discussion of each of the steps required for adaptation. The book also investigates how interactive systems may be improved in terms of usability and user friendliness, describing the extensive user tests employed to evaluate the presented approaches. After introducing general theory, a generic approach for user modeling in interactive systems is presented, ranging from an observation of basic events to a description of higher-level user behavior. Adaptations are presented as a set of patterns similar to those known from software or usability engineering. These patterns describe recurring problems and present proven solutions. The authors include a discussion on when and how to employ patterns and provide guidance to the system designer who wants to add adaptivity to interactive systems. In addition to these patterns, the book introduces an adaptation framework that provides an abstraction layer using Semantic Web technology. Adaptations are implemented on top of this abstraction layer by creating a semantic representation of the adaptation patterns. The patterns cover graphical interfaces as well as speech-based and multimodal interactive systems.
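A minimal sketch of what a semantic representation of an adaptation pattern could look like, using the rdflib library; the `adapt` vocabulary, the pattern name, and its properties are invented for illustration and are not the book's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

ADAPT = Namespace("http://example.org/adaptation#")  # hypothetical vocabulary

g = Graph()
g.bind("adapt", ADAPT)

# A recurring problem ("user repeatedly fails a speech command") paired
# with a proven solution ("offer a graphical fallback"), expressed as
# triples so a runtime can reason over patterns independently of the
# concrete UI toolkit.
pattern = ADAPT.GraphicalFallback
g.add((pattern, RDF.type, ADAPT.AdaptationPattern))
g.add((pattern, ADAPT.problem, Literal("repeated speech recognition failures")))
g.add((pattern, ADAPT.solution, Literal("expose an equivalent on-screen control")))
g.add((pattern, ADAPT.appliesTo, ADAPT.SpeechInterface))

print(g.serialize(format="turtle"))
```

Encoding patterns this way is one plausible reading of the book's abstraction-layer idea: the same pattern description can then drive adaptations in graphical, speech-based, and multimodal front ends.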
This book contains the outcome of the 9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013, held in Lisbon, Portugal, in July/August 2013. The 9 papers included in this book represent the results of a 4-week workshop, where senior and junior researchers worked together on projects tackling new trends in human-machine interaction (HMI). The papers are organized in two topical sections. The first presents different proposals focused on fundamental issues regarding multimodal interactions, namely telepresence, speech synthesis, and interactive modeling. The second is a set of development examples in key areas of HMI applications, namely education, entertainment, and assistive technologies.
Here is the third of a four-volume set that constitutes the refereed proceedings of the 12th International Conference on Human-Computer Interaction, HCII 2007, held in Beijing, China, in July 2007, jointly with eight other thematically similar conferences. It covers multimodality and conversational dialogue; adaptive, intelligent and emotional user interfaces; gesture and eye gaze recognition; and interactive TV and media.
This book is the result of a group of researchers from different disciplines asking themselves one question: what does it take to develop a computer interface that listens, talks, and can answer questions in a specific domain? First, obviously, it takes specialized modules for speech recognition and synthesis, human interaction management (dialogue, input fusion, and multimodal output fusion), basic question understanding, and answer finding. While all of these modules are researched as independent subfields, this book describes the development of state-of-the-art modules and their integration into a single, working application capable of answering medical (encyclopedic) questions such as "How long is a person with measles contagious?" or "How can I prevent RSI?". The contributions in this book, which grew out of the IMIX project funded by the Netherlands Organisation for Scientific Research, document the development of this system, but also address more general issues in natural language processing, such as the development of multidimensional dialogue systems, the acquisition of taxonomic knowledge from text, answer fusion, sequence processing for domain-specific entity recognition, and syntactic parsing for question answering. Together, they offer an overview of the most important findings and lessons learned in the scope of the IMIX project, making the book of interest to both academic and commercial developers of human-machine interaction systems in Dutch or any other language. Highlights include: integrating multimodal input fusion in dialogue management (Van Schooten and Op den Akker), state-of-the-art approaches to the extraction of term variants (Van der Plas, Tiedemann, and Fahmi; Tjong Kim Sang, Hofmann, and De Rijke), and multimodal answer fusion (two chapters by Van Hooijdonk, Bosma, Krahmer, Maes, Theune, and Marsi). Watch the IMIX movie at www.nwo.nl/imix-film. Like IBM's Watson, the IMIX system described in the book gives naturally phrased responses to naturally posed questions. Whereas Watson can only generate synthetic speech, the IMIX system also recognizes speech. On the other hand, Watson is able to win a television quiz, while the IMIX system is domain-specific, answering only medical questions. "The Netherlands has always been one of the leaders in the general field of Human Language Technology, and IMIX is no exception. It was a very ambitious program, with a remarkably successful performance leading to interesting results. The teams covered a remarkable amount of territory in the general sphere of multimodal question answering and information delivery, question answering, information extraction and component technologies." – Eduard Hovy, USC, USA; Jon Oberlander, University of Edinburgh, Scotland; and Norbert Reithinger, DFKI, Germany
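For intuition, here is a toy sketch of the module pipeline such a system chains together (speech recognition, question understanding, answer finding, output fusion). Every function below is a hypothetical stub, not IMIX code, and the canned answer is a placeholder.

```python
def recognize_speech(audio: bytes) -> str:
    # Stub: a real module would run ASR on the audio signal.
    return "how long is a person with measles contagious"

def understand_question(text: str) -> dict:
    # Stub: a real module would parse the question into a structured query.
    return {"topic": "measles", "asked_for": "duration of contagiousness"}

def find_answer(query: dict) -> str:
    # Stub: a real module would search an encyclopedic medical corpus.
    return "From about four days before until four days after the rash appears."

def fuse_output(answer: str) -> tuple[str, str]:
    # Stub: a real module would decide what to speak aloud vs. show on screen.
    return answer, answer

def answer_turn(audio: bytes) -> tuple[str, str]:
    """One dialogue turn: ASR -> question understanding -> answer finding -> output fusion."""
    text = recognize_speech(audio)
    query = understand_question(text)
    answer = find_answer(query)
    return fuse_output(answer)

spoken, shown = answer_turn(b"<audio bytes>")
print(spoken)
```

The interest of the book lies precisely in what the stubs hide: each stage is a research field of its own, and the dialogue manager must also route clarifications, fused pointing input, and multimodal output back through the same loop.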