Channel and Source Coding for Non-Volatile Flash Memories

Author: Mohammed Rajab

Publisher: Springer Nature

Published: 2020-01-02

Total Pages: 143

ISBN-10: 3658289821

Mohammed Rajab proposes techniques such as error correction coding (ECC), source coding, and offset calibration that aim to improve the reliability of NAND flash memory at low implementation cost for industrial applications. The author examines different ECC schemes based on concatenated codes, such as generalized concatenated codes (GCC), which are applicable to NAND flash memories using hard- and soft-input decoding. Furthermore, different data compression schemes are examined in order to reduce the write amplification effect and also to improve the error correction capability of the ECC by combining both schemes.
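
A rough feel for the compression/ECC interplay described above can be conveyed with a small page-budget calculation. The sketch below is not from the book; the page size, the baseline parity budget, and the assumption that every byte freed by compression is reallocated to parity are purely illustrative choices.

# Illustrative sketch (not from the book): when user data compresses, the bytes freed
# inside a fixed flash page can be reallocated to ECC parity, raising the parity
# fraction and hence the achievable correction capability. All sizes are assumptions.

PAGE_BYTES = 2048          # assumed physical page size (user data + parity)
BASE_PARITY_BYTES = 112    # assumed parity budget without compression

def parity_budget(compression_ratio):
    """Parity bytes available when the user data shrinks by `compression_ratio`."""
    user_bytes = PAGE_BYTES - BASE_PARITY_BYTES
    compressed = int(user_bytes * compression_ratio)
    saved = user_bytes - compressed
    return BASE_PARITY_BYTES + saved   # freed bytes are reallocated to parity

for ratio in (1.0, 0.9, 0.75, 0.5):
    p = parity_budget(ratio)
    print(f"compression ratio {ratio:4.2f}: parity {p:4d} bytes "
          f"({100 * p / PAGE_BYTES:.1f}% of the page)")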


Source and Channel Coding

Author: John B. Anderson

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 438

ISBN-10: 1461539986

How should coded communication be approached? Is it about probability theorems and bounds, or about algorithms and structures? The traditional course in information theory and coding teaches these together in one course in which the Shannon theory, a probabilistic theory of information, dominates. The theory's predictions and bounds on performance are valuable to the coding engineer, but coding today is mostly about structures and algorithms and their size, speed and error performance. While coding has a theoretical basis, it has a practical side as well, an engineering side in which costs and benefits matter. It is safe to say that most of the recent advances in information theory and coding are in the engineering of coding. These thoughts motivate the present textbook: a coded communication book based on methods and algorithms, with information theory in a necessary but supporting role. There has been much recent progress in coding, both in the theory and the practice, and these pages report many new advances. Chapter 2 covers traditional source coding, but also the coding of real one-dimensional sources like speech and new techniques like vector quantization. Chapter 4 is a unified treatment of trellis codes, beginning with binary convolutional codes and passing to the new trellis modulation codes.
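
As a small companion to the mention of binary convolutional codes above, here is a minimal sketch of the standard rate-1/2 encoder with octal generators (7, 5). It is the common textbook example, not code taken from this book.

# Minimal sketch of a rate-1/2 binary convolutional encoder (constraint length 3,
# octal generators 7 and 5), the classic textbook example.

def conv_encode(bits, g1=0b111, g2=0b101):
    """Encode a list of 0/1 input bits; two output bits are produced per input bit."""
    state = 0                       # two-bit shift register (previous two inputs)
    out = []
    for b in bits:
        reg = (b << 2) | state      # current input ahead of the register contents
        out.append(bin(reg & g1).count("1") % 2)   # parity against generator 7
        out.append(bin(reg & g2).count("1") % 2)   # parity against generator 5
        state = reg >> 1            # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1]))    # [1, 1, 1, 0, 0, 0, 0, 1]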


Flash Memories

Author: Igor Stievano

Publisher: BoD – Books on Demand

Published: 2011-09-06

Total Pages: 278

ISBN-10: 9533072725

Flash memories and memory systems are key resources for the development of electronic products implementing converging technologies or exploiting solid-state memory disks. This book illustrates state-of-the-art technologies and research studies on Flash memories. Topics in modeling, design, programming, and materials for memories are covered along with real application examples.


Coding for Flash Memories

Author: Eitan Yaakobi

Publisher:

Published: 2011

Total Pages: 164

ISBN-13: 9781124801131

Flash memories are, by far, the most important type of non-volatile memory in use today. They are employed widely in mobile, embedded, and mass-storage applications, and the growth in this sector continues at a staggering pace. Moreover, since flash memories do not suffer from the mechanical limitations of magnetic disk drives, solid-state drives have the potential to upstage the magnetic recording industry in the foreseeable future. The research goal of this dissertation is the discovery of new coding theory methods that support the efficient design of flash memories. Flash memory is composed of blocks of cells, wherein each cell can take on q ≥ 2 levels. While increasing the cell level is easy, reducing its level can be accomplished only by erasing an entire block. Such block erasures are not only time-consuming, but also degrade the memory lifetime. Our main contribution in this research is the design of rewriting codes that maximize the number of times that information can be written prior to incurring a block erasure. Examples of such coding schemes are flash/floating codes and buffer codes, introduced by Jiang, Bruck, et al. in 2007, and WOM-codes that were presented by Rivest and Shamir almost three decades ago. The overall goal in these codes is to maximize the amount of information written to a fixed number of cells in a fixed number of writes. Furthermore, the design of error-correcting codes for flash memories is extensively studied. It is shown how to modify WOM-codes to support an error-correction capability. Motivated by the asymmetry of the error behavior of flash memories and the work by Cassuto et al., a coding scheme to correct asymmetric errors is presented. An extensive empirical database of errors was used to develop a comprehensive understanding of the error behavior as well as to design specific error-correcting codes for flash memories. This research on flash memories is extended in other directions. Wear leveling techniques are widely used in flash memories in order to reduce and balance block erasures. It is shown that coding schemes to be used in these techniques can significantly reduce the number of block erasures incurred during data movement. Also, the design of parallel cell programming algorithms is studied for the specific constraints and behavior of flash cells.
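
The rewriting idea behind the WOM-codes of Rivest and Shamir mentioned above can be illustrated with their classic construction, which writes 2 bits twice into 3 write-once cells. The sketch below is a textbook illustration of that construction, not code from the dissertation.

# Sketch of the classic Rivest-Shamir WOM code: 2 bits written twice into 3
# write-once cells (a cell may only change from 0 to 1, never back).

FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0), (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
SECOND = {d: tuple(1 - c for c in w) for d, w in FIRST.items()}   # complement codewords

def decode(cells):
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(d for d, w in table.items() if w == cells)

def write(cells, data):
    """Return the new cell state for `data`; the write-once constraint is checked."""
    if cells == (0, 0, 0):                 # first write: use the low-weight codeword
        new = FIRST[data]
    elif decode(cells) == data:            # value unchanged: leave the cells alone
        new = cells
    else:                                  # second write: use the complement codeword
        new = SECOND[data]
    assert all(n >= c for n, c in zip(new, cells)), "write-once constraint violated"
    return new

state = (0, 0, 0)
state = write(state, (1, 0))   # first write of 10 -> cells 010
state = write(state, (0, 1))   # rewrite to 01 without erasing -> cells 011
print(state, decode(state))    # (0, 1, 1) (0, 1)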


Joint Source Channel Coding Using Arithmetic Codes

Author: Bi Dongsheng

Publisher: Springer

Published: 2009-11-06

Total Pages: 69

ISBN-13: 9783031005473

Based on the encoding process, arithmetic codes can be viewed as tree codes, and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced, and techniques used for decoding convolutional codes, such as the list Viterbi decoding algorithm, can be applied directly on the trellis. The finite state machine interpretation can easily be migrated to the Markov source case. We can encode Markov sources without considering the conditional probabilities, while using the list Viterbi decoding algorithm, which utilizes the conditional probabilities. We can also use context-based arithmetic coding to exploit the conditional probabilities of the Markov source and apply a finite state machine interpretation to this problem. The finite state machine interpretation also allows us to understand arithmetic codes with forbidden symbols more systematically, and to find their partial distance spectrum. We also propose arithmetic codes with memory, which use high-memory but low-implementation-precision arithmetic codes. The low implementation precision results in a state machine with lower complexity. The introduced input memories allow us to switch the probability functions used for arithmetic coding. Combining these two methods gives us a huge parameter space of arithmetic codes with forbidden symbols, so we can choose codes with better distance properties while maintaining the encoding efficiency and decoding complexity. A construction and search method is proposed, and simulation results show that we can achieve performance similar to that of turbo codes when this approach is applied to rate 2/3 arithmetic codes. Table of Contents: Introduction / Arithmetic Codes / Arithmetic Codes with Forbidden Symbols / Distance Property and Code Construction / Conclusion
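
The role of the forbidden symbol can be illustrated with a deliberately simplified floating-point arithmetic coder that reserves a slice of probability mass the encoder never uses; a decoder that lands in that slice knows a channel error has occurred. The alphabet, probabilities, and reserved fraction below are assumptions made for the demonstration; the monograph's finite-state-machine and trellis treatment is not reproduced here.

# Toy arithmetic coder with a forbidden symbol: a fraction EPS of every coding
# interval is reserved and never produced by the encoder, so decoding into that
# region signals an error. Values here are illustrative assumptions.

EPS = 0.1                      # probability mass reserved for the forbidden symbol
PROBS = {"a": 0.6, "b": 0.4}   # assumed i.i.d. source model

def _cumulative(probs):
    cum, lo = {}, 0.0
    for sym, p in probs.items():           # symbol intervals fill only [0, 1 - EPS)
        cum[sym] = (lo, lo + p * (1.0 - EPS))
        lo += p * (1.0 - EPS)
    return cum

CUM = _cumulative(PROBS)

def encode(message):
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        lo_frac, hi_frac = CUM[sym]
        low, high = low + span * lo_frac, low + span * hi_frac
    return (low + high) / 2.0              # any value inside the final interval

def decode(value, length):
    low, high = 0.0, 1.0
    out = []
    for _ in range(length):
        span = high - low
        frac = (value - low) / span
        for sym, (lo_frac, hi_frac) in CUM.items():
            if lo_frac <= frac < hi_frac:
                out.append(sym)
                low, high = low + span * lo_frac, low + span * hi_frac
                break
        else:
            return "".join(out), True      # landed in the forbidden gap: error detected
    return "".join(out), False

msg = "abbaab"
code = encode(msg)
print(decode(code, len(msg)))                      # ('abbaab', False)
print(decode(min(code + 0.3, 0.999), len(msg)))    # the shifted value soon hits the gap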


Channel Codes

Author: William E. Ryan

Publisher:

Published: 2009

Total Pages: 692

ISBN-13: 9781139931205

Channel coding lies at the heart of digital communication and data storage, and this detailed introduction describes the core theory as well as decoding algorithms, implementation details, and performance analyses. In this book, Professors Ryan and Lin provide clear information on modern channel codes, including turbo and low-density parity-check (LDPC) codes. They also present detailed coverage of BCH codes, Reed-Solomon codes, convolutional codes, finite geometry codes, and product codes, providing a one-stop resource for both classical and modern coding techniques. Assuming no prior knowledge in the field of channel coding, the opening chapters begin with basic theory to introduce newcomers to the subject. Later chapters then extend to advanced topics such as code ensemble performance analyses and algebraic code design. 250 varied and stimulating end-of-chapter problems are also included to test and enhance learning, making this an essential resource for students and practitioners alike.