This book constitutes the proceedings of the 23rd International Conference on Speech and Computer, SPECOM 2021, held in St. Petersburg, Russia, in September 2021; due to the COVID-19 pandemic, the conference was held as a hybrid event. The 74 papers presented were carefully reviewed and selected from 163 submissions. The papers present current research in the area of computer speech processing, including audio signal processing, automatic speech recognition, speaker recognition, computational paralinguistics, speech synthesis, sign language and multimodal processing, and speech and language resources.
This book gives an overview of the research and application of speech technologies in different areas. A distinguishing characteristic of the book is that its authors take a broad, multidisciplinary view of the topics across multiple research areas. A central goal of the book is to emphasize applications: user experience, human factors, and usability issues are the focus throughout.
New material treats such contemporary subjects as automatic speech recognition and speaker verification for computer-based banking and for controlling access to privileged (medical, military, diplomatic) information. The book also focuses on speech and audio compression for mobile communication and the Internet, stressing the importance of subjective quality criteria. It also contains introductions to human monaural and binaural hearing and to the basic concepts of signal analysis. Beyond speech processing, this revised and extended new edition of Computer Speech gives an overview of natural language technology and presents the nuts and bolts of state-of-the-art speech dialogue systems.
Interfaces that talk and listen are populating computers, cars, call centers, and even home appliances and toys, but voice interfaces invariably frustrate rather than help. In Wired for Speech, Clifford Nass and Scott Brave reveal how interactive voice technologies can readily and effectively tap into the automatic responses all speech—whether from human or machine—evokes. Wired for Speech demonstrates that people are "voice-activated": we respond to voice technologies as we respond to actual people and behave as we would in any social situation. By leveraging this powerful finding, voice interfaces can truly emerge as the next frontier for efficient, user-friendly technology. Wired for Speech presents new theories and experiments and applies them to critical issues concerning how people interact with technology-based voices. It considers how people respond to a female voice in e-commerce (does stereotyping matter?), how a car's voice can promote safer driving (are "happy" cars better cars?), whether synthetic voices have personality and emotion (is sounding like a person always good?), whether an automated call center should apologize when it cannot understand a spoken request ("To Err is Interface; To Blame, Complex"), and much more. Nass and Brave's deep understanding of both social science and design, drawn from ten years of research at Nass's Stanford laboratory, produces results that often challenge conventional wisdom and common design practices. These insights will help designers and marketers build better interfaces, scientists construct better theories, and everyone gain better understandings of the future of the machines that speak with us.
It is with great pleasure that I present this third volume of the series "Advanced Applications in Pattern Recognition." It represents the summary of many man- (and woman-) years of effort in the field of speech recognition by the author's former team at the University of Turin. It combines the best results in fuzzy-set theory and artificial intelligence to point the way to definitive solutions to the speech-recognition problem. It is my hope that it will become a classic work in this field. I take this opportunity to extend my thanks and appreciation to Sy Marchand, Plenum's Senior Editor responsible for overseeing this series, and to Susan Lee and Jo Winton, who had the monumental task of preparing the camera-ready master sheets for publication. Morton Nadler, General Editor. PREFACE: Si parva licet componere magnis (Virgil, Georgics, IV, 176). The work reported in this book results from years of research oriented toward the goal of building an experimental model capable of understanding spoken sentences of a natural language. This is, of course, a modest attempt compared to the complexity of the functions performed by the human brain. A method is introduced for conceiving modules that perform perceptual tasks and for combining them into a speech understanding system.
An examination of more than sixty years of successes and failures in developing technologies that allow computers to understand human spoken language. Stanley Kubrick's 1968 film 2001: A Space Odyssey famously featured HAL, a computer with the ability to hold lengthy conversations with his fellow space travelers. More than forty years later, we have advanced computer technology that Kubrick never imagined, but we do not have computers that talk and understand speech as HAL did. Is it a failure of our technology that we have not gotten much further than an automated voice that tells us to "say or press 1"? Or is there something fundamental in human language and speech that we do not yet understand deeply enough to be able to replicate in a computer? In The Voice in the Machine, Roberto Pieraccini examines six decades of work in science and technology to develop computers that can interact with humans using speech, and the industry that has arisen around the quest for these technologies. He shows that although today's computers that understand speech may not have HAL's capacity for conversation, they have capabilities that make them usable in many applications and are on a fast track of improvement and innovation. Pieraccini describes the evolution of speech recognition and speech understanding processes from waveform methods to artificial intelligence approaches to statistical learning and modeling of human speech based on a rigorous mathematical model--specifically, Hidden Markov Models (HMMs). He details the development of dialog systems, the ability to produce speech, and the process of bringing talking machines to the market. Finally, he asks a question that only the future can answer: will we end up with HAL-like computers or something completely unexpected?
This book deals with two important technologies in human-computer interaction: computer generation of synthetic speech and computer recognition of human speech. It addresses the problems in generating speech with varying precision of articulation and how to convey moods and attitudes.
The advances in computing and networking have sparked an enormous interest in deploying automatic speech recognition on mobile devices and over communication networks. This book brings together academic researchers and industrial practitioners to address the issues in this emerging realm and presents the reader with a comprehensive introduction to the subject of speech recognition in devices and networks. It covers network, distributed and embedded speech recognition systems.
This new book is the first to provide a truly understandable, non-technical overview of all the major areas in the computer processing of human speech -- speech recognition, speech synthesis, speaker recognition, language identification, lip synchronization, and co-channel separation. It takes a unique, nonmathematical approach in exploring the nature of human language and its impact on the science and methodologies of computer voice technology. In one, easy-to-read source, you gain a deeper understanding of the fundamentals, uses, and applications of the technology itself and of the strengths and weaknesses of various systems. A time-saving glossary of technical terms is included.
Speech recognition has a long history as one of the difficult problems in Artificial Intelligence and Computer Science. As one goes from problem-solving tasks such as puzzles and chess to perceptual tasks such as speech and vision, the problem characteristics change dramatically: from knowledge-poor to knowledge-rich; from low data rates to high data rates; from slow response times (minutes to hours) to instantaneous response times. Taken together, these characteristics increase the computational complexity of the problem by several orders of magnitude. Further, speech provides a challenging task domain that embodies many of the requirements of intelligent behavior: operating in real time; exploiting vast amounts of knowledge; tolerating errorful, unexpected, and unknown input; using symbols and abstractions; communicating in natural language; and learning from the environment. Voice input to computers offers a number of advantages: it provides a natural, fast, hands-free, eyes-free, location-free input medium. However, many as yet unsolved problems prevent routine use of speech as an input device by non-experts. These include cost, real-time response, speaker independence, robustness to variations such as noise, microphone, speech rate, and loudness, and the ability to handle non-grammatical speech. Satisfactory solutions to each of these problems can be expected within the next decade. Recognition of unrestricted spontaneous continuous speech appears unsolvable at present. However, by adding simple constraints, such as a clarification dialog to resolve ambiguity, we believe it will be possible to develop systems capable of accepting very-large-vocabulary continuous speech dictation.