This book is appropriate for those specializing in speech science, hearing science, neuroscience, or computer science, as well as for engineers working on applications such as automatic speech recognition, cochlear implants, hands-free telephones, sound recording, and multimedia indexing and retrieval.
When Speech and Audio Signal Processing was published in 1999, it stood out from its competition in its breadth of coverage and its accessible, intuition-based style. The book was aimed at individual students and engineers excited about the broad span of audio processing and curious to understand the available techniques. Since then, with the advent of the iPod in 2001, the field of digital audio and music has exploded, leading to a much greater interest in the technical aspects of audio processing. This Second Edition updates and revises the original book, augmenting it with new material describing both the enabling technologies of digital music distribution (most significantly the MP3) and a range of exciting new research areas in automatic music content processing (such as automatic transcription and music similarity) that have emerged in the past five years, driven by the digital music revolution. New chapter topics include: Psychoacoustic Audio Coding, describing MP3 and related audio coding schemes based on psychoacoustic masking of quantization noise; Music Transcription, including automatically deriving notes, beats, and chords from music signals; Music Information Retrieval, primarily focusing on audio-based genre classification, artist/style identification, and similarity estimation; and Audio Source Separation, including multi-microphone beamforming, blind source separation, and the perception-inspired techniques usually referred to as Computational Auditory Scene Analysis (CASA).
This is the world’s first edited book on independent component analysis (ICA)-based blind source separation (BSS) of convolutive mixtures of speech. It brings together a small number of leading researchers to provide tutorial-like, in-depth treatments of major ICA-based BSS topics, with the objective of becoming the definitive, current, comprehensive, authoritative, and yet accessible reference on the subject.
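The core building block in such treatments is ICA applied to an instantaneous mixture; the convolutive (reverberant) case addressed by the book is commonly handled by applying ICA in each STFT frequency bin and then resolving the resulting permutation and scaling ambiguities. The sketch below illustrates only the instantaneous building block on synthetic toy signals, using scikit-learn's FastICA; the signals, mixing matrix, and parameter choices are illustrative assumptions, not material from the book.

```python
# A minimal sketch of instantaneous-mixture ICA, the building block that
# frequency-domain convolutive BSS extends bin by bin (synthetic toy signals,
# not an example from the book).
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
sources = np.c_[np.sin(2 * np.pi * 5 * t),            # toy source 1: sinusoid
                np.sign(np.sin(2 * np.pi * 3 * t))]   # toy source 2: square wave
A = np.array([[1.0, 0.6],                             # instantaneous 2x2 mixing
              [0.4, 1.0]])
mixtures = sources @ A.T                              # what two microphones observe

ica = FastICA(n_components=2, random_state=0)
estimates = ica.fit_transform(mixtures)               # sources recovered up to
                                                      # permutation and scaling
```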
Provides a comprehensive and coherent account of the state of the art in CASA, in terms of the underlying principles, the algorithms and system architectures that are employed, and the potential applications of this exciting new technology.
Human and Machine Hearing is the first book to comprehensively describe how human hearing works and how to build machines to analyze sounds in the same way that people do. Drawing on over thirty-five years of experience in analyzing hearing and building systems, Richard F. Lyon explains how we can now build machines with close-to-human abilities in speech, music, and other sound-understanding domains. He explains human hearing in terms of engineering concepts, and describes how to incorporate those concepts into machines for a wide range of modern applications. The details of this approach are presented at an accessible level, to bring a diverse range of readers, from neuroscience to engineering, to a common technical understanding. The description of hearing as signal-processing algorithms is supported by corresponding open-source code, for which the book serves as motivating documentation.
We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise reduction but also dereverberation and separation of independent signals; these topics are also covered in this book. However, the general emphasis is on noise reduction because of the large number of applications that can benefit from this technology. The goal of this book is to provide a strong reference for researchers, engineers, and graduate students who are interested in the problem of signal and speech enhancement. To do so, we invited well-known experts to contribute chapters covering the state of the art in this focused field. Contents: Introduction; Study of the Wiener Filter for Noise Reduction; Statistical Methods for the Enhancement of Noisy Speech; Single- and Multi-Microphone Spectral Amplitude Estimation Using a Super-Gaussian Speech Model; From Volatility Modeling of Financial Time-Series to Stochastic Modeling and Enhancement of Speech Signals; Single-Microphone Noise Suppression for 3G Handsets Based on Weighted Noise Estimation; Signal Subspace Techniques for Speech Enhancement; Speech Enhancement: Application of the Kalman Filter in the Estimate-Maximize (EM) Framework; Speech Distortion Weighted Multichannel Wiener Filtering Techniques for Noise Reduction; Adaptive Microphone Arrays Employing Spatial Quadratic Soft Constraints and Spectral Shaping; Single-Microphone Blind Dereverberation; Separation and Dereverberation of Speech Signals with Multiple Microphones; Frequency-Domain Blind Source Separation; Subband-Based Blind Source Separation; Real-Time Blind Source Separation for Moving Speech Signals; Separation of Speech by Computational Auditory Scene Analysis.
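To give a flavour of the single-channel noise-reduction methods surveyed in chapters such as the one on the Wiener filter, the sketch below applies a basic spectral Wiener-type gain in the STFT domain. The noise-only-leading-frames assumption, frame size, and gain floor are illustrative choices of ours, not the book's specific algorithms.

```python
# A minimal sketch of single-channel spectral Wiener-type noise reduction.
# Assumes the first `noise_frames` STFT frames contain noise only; all
# parameter values are illustrative.
import numpy as np
from scipy.signal import stft, istft

def wiener_denoise(y, fs, noise_frames=10, gain_floor=0.1):
    """Return an enhanced signal given a noisy mixture y sampled at fs."""
    _, _, Y = stft(y, fs, nperseg=512)
    noise_psd = np.mean(np.abs(Y[:, :noise_frames]) ** 2, axis=1, keepdims=True)
    snr_post = np.abs(Y) ** 2 / (noise_psd + 1e-12)           # a posteriori SNR
    gain = np.maximum(1.0 - 1.0 / np.maximum(snr_post, 1e-12), gain_floor)
    _, x_hat = istft(gain * Y, fs, nperseg=512)               # Wiener-type gain
    return x_hat
```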
This book provides the first comprehensive overview of the fascinating topic of audio source separation based on non-negative matrix factorization, deep neural networks, and sparse component analysis. The first section of the book covers single-channel source separation based on non-negative matrix factorization (NMF). After an introduction to the technique, two further chapters describe separation of known sources using non-negative spectrogram factorization, and temporal NMF models. In section two, NMF methods are extended to multi-channel source separation. Section three introduces deep neural network (DNN) techniques, with chapters on multichannel and single-channel separation, and a further chapter on DNN-based mask estimation for monaural speech separation. In section four, sparse component analysis (SCA) is discussed, with chapters on source separation using audio directional statistics modelling, multi-microphone MMSE-based techniques, and diffusion map methods. The book brings together leading researchers to provide tutorial-like and in-depth treatments of major audio source separation topics, with the objective of becoming the definitive, comprehensive, authoritative, and accessible reference on the subject. This book is written for graduate students and researchers who are interested in audio source separation techniques based on NMF, DNN and SCA.
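To make the NMF starting point concrete, here is a minimal sketch of the classic multiplicative-update factorisation of a magnitude spectrogram under a Euclidean cost; the rank, iteration count, and initialisation are illustrative assumptions rather than settings from the book.

```python
# A minimal sketch of single-channel NMF spectrogram factorization using
# Lee-Seung multiplicative updates (Euclidean cost). Variable names and
# hyperparameters are illustrative only.
import numpy as np

def nmf(V, n_components=8, n_iter=200, eps=1e-9):
    """Factorise a non-negative magnitude spectrogram V ~= W @ H."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, n_components)) + eps    # spectral basis vectors
    H = rng.random((n_components, T)) + eps    # per-frame activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H
```

In the standard pipeline, each column of W acts as a spectral template and each row of H as its activation over time; sources are then reconstructed by grouping components and applying Wiener-like masks to the mixture STFT.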
Machine audition is the study of algorithms and systems for the automatic analysis and understanding of sound by machine. It has recently attracted increasing interest within several research communities, such as signal processing, machine learning, auditory modeling, perception and cognition, psychology, pattern recognition, and artificial intelligence. However, the developments made so far are fragmented across these disciplines, lacking connections and leading to potentially overlapping research activities in this subject area. Machine Audition: Principles, Algorithms and Systems brings together advances in algorithmic developments, theoretical frameworks, and experimental research findings. This book is useful for professionals who want a better understanding of how to design algorithms for the automatic analysis of audio signals, construct computing systems for understanding sound, and build advanced human-computer interactive systems.