Most data compression methods that are based on variable-length codes employ the Huffman or Golomb codes. However, there are a large number of lesser-known codes that have useful properties, and these can be valuable in practice. This book brings this large set of codes to the attention of workers in the field and to students of computer science. The author’s crystal-clear style of writing and presentation allows easy access to the topic.
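As a quick illustration of the kind of variable-length code mentioned above (a minimal sketch, not an excerpt from the book), a Golomb encoder with parameter m can be written in a few lines of Python; the bit-string output and the choice m = 3 in the example are purely illustrative:

```python
import math

def golomb_encode(n, m):
    """Golomb code of a nonnegative integer n with parameter m >= 1.

    The codeword is the quotient n // m written in unary (q ones then a
    terminating zero) followed by the remainder n % m in truncated binary.
    Returned as a string of '0'/'1' characters for readability.
    """
    q, r = divmod(n, m)
    code = "1" * q + "0"              # unary part for the quotient
    if m > 1:
        b = math.ceil(math.log2(m))   # bits needed for a full remainder
        cutoff = (1 << b) - m         # remainders below this use only b-1 bits
        if r < cutoff:
            code += format(r, "0{}b".format(b - 1))
        else:
            code += format(r + cutoff, "0{}b".format(b))
    return code

# Example: parameter m = 3, so remainders use 1 or 2 bits.
for n in range(7):
    print(n, golomb_encode(n, 3))
```

When m is a power of two, the truncated-binary step reduces to plain fixed-width binary and the code coincides with the Rice code.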
A comprehensive reference for the many different types and methods of compression, including a detailed and helpful taxonomy, an analysis of the most common methods, and discussions on their use and comparative benefits. The presentation is organized into the main branches of the field: run-length encoding, statistical methods, dictionary-based methods, image compression, audio compression, and video compression. Detailed descriptions and explanations of the most well-known and frequently used methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and nonspecialists. In short, the book provides an invaluable reference and guide for all computer scientists, computer engineers, electrical engineers, signal/image processing engineers, and other scientists needing a comprehensive compilation of a broad range of compression methods.
If you want to attract and retain users in the booming mobile services market, you need a quick-loading app that won’t churn through their data plans. The key is to compress multimedia and other data into smaller files, but finding the right method is tricky. This witty book helps you understand how data compression algorithms work, in theory and practice, so you can choose the best solution among all the available compression tools. With tables, diagrams, games, and as little math as possible, authors Colt McAnlis and Aleks Haecky neatly explain the fundamentals. Learn how compressed files are better, cheaper, and faster to distribute and consume, and how they’ll give you a competitive edge.
- Learn why compression has become crucial as data production continues to skyrocket
- Know your data, circumstances, and algorithm options when choosing compression tools
- Explore variable-length codes, statistical compression, arithmetic numerical coding, dictionary encodings, and context modeling
- Examine tradeoffs between file size and quality when choosing image compressors
- Learn ways to compress client- and server-generated data objects
- Meet the inventors and visionaries who created data compression algorithms
This clearly written book offers readers a succinct foundation in the most important topics in the field of data compression. Part I presents the basic approaches to data compression and describes a few popular techniques and methods that are commonly used to compress data; along the way, the reader will discover the field's essential concepts. Part II concentrates on advanced techniques, such as arithmetic coding, orthogonal transforms, subband transforms, and the Burrows-Wheeler transform. This book is the perfect reference for advanced undergraduates in computer science and requires a minimum of mathematics. An author-maintained website provides errata and auxiliary material.
Described by Jeff Prosise of PC Magazine as "one of my favorite books on applied computer technology," this updated second edition brings you fully up to date on the latest developments in the data compression field. It thoroughly covers the various data compression techniques, including compression of binary programs, data, sound, and graphics. Each technique is illustrated with a completely functional C program that demonstrates how data compression works and how it can be readily incorporated into your own compression programs. The accompanying disk contains the code files that demonstrate the various techniques of data compression found in the book.
- Treats joint source and channel decoding in an integrated way
- Gives a clear description of the problems in the field together with the mathematical tools for their solution
- Contains many detailed examples useful for practical applications of the theory to video broadcasting over mobile and wireless networks

Traditionally, cross-layer and joint source-channel coding were seen as incompatible with classically structured networks, but recent advances in theory have changed this situation. Joint source-channel decoding is now seen as a viable alternative to separate decoding of source and channel codes, provided the protocol layers are taken into account. A joint source/protocol/channel approach is thus addressed in this book: all levels of the protocol stack are considered, showing how the information in each layer influences the others. This book provides the tools to show how cross-layer and joint source-channel coding and decoding are now compatible with present-day mobile and wireless networks, with a particular application to the key area of video transmission to mobiles. Typical applications are broadcasting or point-to-point delivery of multimedia content, which are very timely in the context of the current development of mobile services such as audio (MPEG-4 AAC) or video (H.263, H.264) transmission using recent wireless transmission standards (DVB-H, DVB-SH, WiMAX, LTE). This cross-disciplinary book is ideal for graduate students, researchers, and, more generally, professionals working either in signal processing for communications or in networking applications who are interested in reliable multimedia transmission. It is also of interest to people involved in cross-layer optimization of mobile networks: its content may provide them with other points of view on their optimization problem, enlarging the set of tools they could use. Pierre Duhamel is director of research at CNRS/LSS and has previously held research positions at Thomson-CSF, CNET, and ENST, where he was head of the Signal and Image Processing Department. He has served as chairman of the DSP committee and associate editor of the IEEE Transactions on Signal Processing and Signal Processing Letters, as well as acting as a co-chair of the MMSP and ICASSP conferences. He was awarded the Grand Prix France Telecom by the French Science Academy in 2000. He is co-author of more than 80 papers in international journals, 250 conference proceedings, and 28 patents. Michel Kieffer is an assistant professor in signal processing for communications at the Université Paris-Sud and a researcher at the Laboratoire des Signaux et Systèmes, Gif-sur-Yvette, France. His research interests are in joint source-channel coding and decoding techniques for the reliable transmission of multimedia contents. He serves as associate editor of Signal Processing (Elsevier). He is co-author of more than 90 contributions to journals, conference proceedings, and book chapters.
Data compression is one of the most important fields and tools in modern computing. From archiving data to CD-ROMs, and from coding theory to image analysis, many facets of modern computing rely upon data compression. This book provides a comprehensive reference for the many different types and methods of compression. Included are a detailed and helpful taxonomy, an analysis of the most common methods, and discussions of their use, comparative benefits, and how to apply them. Detailed descriptions and explanations of the most well-known and frequently used compression methods are covered in a self-contained fashion, with an accessible style and technical level for specialists and non-specialists.
Fundamental Data Compression provides all the information students need to be able to use this essential technology in their future careers. A huge, active research field, and a part of many people's everyday lives, compression technology is an essential part of today's Computer Science and Electronic Engineering courses. With the help of this book, students can gain a thorough understanding of the underlying theory and algorithms, as well as specific techniques used in a range of scenarios, including the application of compression techniques to text, still images, video and audio. Practical exercises, projects and exam questions reinforce learning, along with suggestions for further reading.
- Dedicated data compression textbook for use on undergraduate courses
- Provides essential knowledge for today's web/multimedia applications
- Accessible, well-structured text backed up by extensive exercises and sample exam questions
"Khalid Sayood provides an extensive introduction to the theory underlying today's compression techniques with detailed instruction for their applications using several examples to explain the concepts. Encompassing the entire field of data compression Introduction to Data Compression, includes lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context based compression, scalar and vector quantization. Khalid Sayood provides a working knowledge of data compression, giving the reader the tools to develop a complete and concise compression package upon completion of his book."--BOOK JACKET.
Lossless data compression is a facet of source coding and a well-studied problem of information theory. Its goal is to find the shortest possible code that can be unambiguously recovered. Here, we focus on rigorous analysis of code redundancy for known sources. The redundancy rate problem asks by how much the actual code length exceeds the optimal code length. We present precise analyses of three types of lossless data compression schemes, namely fixed-to-variable (FV) length codes, variable-to-fixed (VF) length codes, and variable-to-variable (VV) length codes. In particular, we investigate the average redundancy of Shannon, Huffman, Tunstall, Khodak, and Boncelet codes. These codes have succinct representations as trees, either coding or parsing trees, and we analyze some of their parameters (e.g., the average path from the root to a leaf). Such trees are precisely analyzed by analytic methods, also known as analytic combinatorics, in which complex analysis plays a decisive role. These tools include generating functions, the Mellin transform, Fourier series, the saddle point method, analytic poissonization and depoissonization, Tauberian theorems, and singularity analysis. The term analytic information theory has been coined to describe problems of information theory studied by analytic tools. This approach lies at the crossroads of information theory, analysis of algorithms, and combinatorics.
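For reference (a standard formulation, not quoted from this text), the average redundancy analyzed here can be stated, for a memoryless source $X$ with entropy $H(X)$ and a fixed-to-variable code $C_n$ acting on blocks $X_1^n$, as

\bar{R}_n = \mathbb{E}\left[ L(C_n, X_1^n) \right] - n\,H(X),

and the redundancy rate problem asks for the precise asymptotic behavior of $\bar{R}_n$ as the block length $n$ grows.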