Low-Complexity Decoding of Low-Density Parity Check Codes Through Optimal Quantization and Machine Learning and Optimal Modulation and Coding for Short Block-Length Transmissions

Author: Linfang Wang

Published: 2023

This dissertation investigates two topics in channel coding theory: low-complexity decoder design for low-density parity-check (LDPC) codes and reliable communication in the short-blocklength regime. For the first topic, we propose a finite-precision decoding method that features the three steps of Reconstruction, Computation, and Quantization (RCQ). The parameters of the RCQ decoder, for both the flooding-scheduled and the layered-scheduled variants, can be designed efficiently using discrete density evolution featuring hierarchical dynamic quantization (HDQ). To further reduce the hardware usage of the RCQ decoder, we propose a second RCQ framework called weighted RCQ (W-RCQ). Unlike the RCQ decoder, whose quantization and reconstruction parameters change in every layer and iteration, the W-RCQ decoder limits the quantization and reconstruction functions to a small number, for example three or four, throughout the decoding process. Instead, the W-RCQ decoder weights check-to-variable-node messages using dynamic parameters optimized by a quantized neural network. The W-RCQ decoder uses fewer parameters than the RCQ decoder and therefore requires far fewer hardware resources, such as lookup tables.

For the second topic, we apply probabilistic amplitude shaping (PAS) to cyclic redundancy check (CRC)-aided tail-biting trellis-coded modulation (TCM). CRC-TCM-PAS produces practical codes for short block lengths on the additive white Gaussian noise (AWGN) channel. In the transmitter, equally likely message bits are encoded by a distribution matcher (DM), generating amplitude symbols with a desired distribution. A CRC is appended to the sequence of amplitude symbols, and this sequence is then encoded and modulated by TCM to produce real-valued channel input signals. We prove that the sign values produced by the TCM are asymptotically equally likely to be positive or negative, so the CRC-TCM-PAS scheme can generate channel input symbols with a symmetric, capacity-approaching probability mass function. We also provide an analytical upper bound on the frame error rate (FER) of the CRC-TCM-PAS system over the AWGN channel; this FER upper bound serves as the objective function for jointly optimizing the CRC and convolutional code. The dissertation also proposes a multi-composition DM, a collection of multiple constant-composition DMs. The optimized CRC-TCM-PAS systems achieve frame error rates below the random coding union (RCU) bound in AWGN and outperform short-blocklength PAS systems built on various other forward error correction codes.
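To make the Reconstruction-Computation-Quantization idea concrete, below is a minimal sketch of one check-node update in an RCQ-style decoder. It assumes a simple uniform reconstruction/quantization pair and a min-sum computation step purely for illustration; the actual RCQ parameters in the dissertation are designed via discrete density evolution with HDQ, which is not reproduced here.

```python
import numpy as np

def reconstruct(q_msgs, step=0.5):
    """Map low-precision integer messages back to real LLR values.
    A uniform reconstruction is assumed here for illustration."""
    return np.asarray(q_msgs) * step

def quantize(llrs, step=0.5, bits=4):
    """Quantize real LLR values back to signed integers of the given bit width."""
    max_level = 2 ** (bits - 1) - 1
    q = np.round(llrs / step)
    return np.clip(q, -max_level, max_level).astype(int)

def rcq_check_node_update(q_incoming, step=0.5, bits=4):
    """One RCQ-style check-node update:
    Reconstruction -> Computation (min-sum) -> Quantization."""
    llrs = reconstruct(q_incoming, step)                  # Reconstruction
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)                       # extrinsic messages only
        sign = np.prod(np.sign(others))
        mag = np.min(np.abs(others))
        out[i] = sign * mag                               # Computation (min-sum)
    return quantize(out, step, bits)                      # Quantization

if __name__ == "__main__":
    # Four 4-bit variable-to-check messages entering one check node.
    print(rcq_check_node_update([3, -5, 2, 7]))
```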


Low-density Parity-check Codes with Erasures and Puncturing

Author: Jeongseok Ha

Published: 2003

In this thesis, we extend applications of Low-Density Parity-Check (LDPC) codes to a combination of constituent sub-channels: a mixture of Gaussian channels with erasures. This model, for example, represents a common channel in magnetic recording, where thermal asperities in the system are detected and represented at the decoder as erasures. Although this channel is practically useful, we are not aware of any previous work that evaluates the performance of LDPC codes over it. We are also interested in practical issues such as designing robust LDPC codes for the mixture channel and predicting performance variations due to erasure patterns (random and burst) and finite block lengths. On time-varying channels, a common error-control strategy is to adapt the coding rate according to available channel state information (CSI). An effective way to realize this strategy is to use a single code and puncture it in a rate-compatible fashion, a so-called rate-compatible punctured code (RCPC). We are interested in the existence of good puncturing patterns for rate changes that minimize performance loss. We show the existence of good puncturing patterns with analysis and verify the results with simulations. Universality of a channel code across a broad range of coding rates is a theoretically interesting topic, and we are interested in the possibility of using the puncturing technique proposed in this thesis for designing universal LDPC codes. We also consider how to design high-rate LDPC codes by puncturing low-rate LDPC codes. The new design method can take advantage of the longer effective block lengths, sparser parity-check matrices, and larger minimum distances of low-rate LDPC codes.
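As a hedged illustration of the rate-compatible puncturing idea discussed above (not the specific puncturing-pattern search from the thesis), the sketch below shows how punctured positions are typically handled at a belief-propagation decoder: their channel LLRs are set to zero, exactly as if they had been erased, while the effective code rate rises with the number of punctured bits. All code lengths and the puncturing pattern are hypothetical.

```python
import numpy as np

def effective_rate(k, n, punctured_positions):
    """Code rate after puncturing: k information bits are carried by
    n - p transmitted symbols, where p is the number of punctured bits."""
    p = len(punctured_positions)
    return k / (n - p)

def channel_llrs_with_puncturing(received_llrs, n, punctured_positions):
    """Build the decoder input for a mother code of length n.
    Punctured (untransmitted) bits get LLR 0, i.e. they are treated as erasures."""
    llrs = np.zeros(n)
    transmitted = [i for i in range(n) if i not in set(punctured_positions)]
    llrs[transmitted] = received_llrs
    return llrs

if __name__ == "__main__":
    n, k = 12, 6                      # toy mother code: rate 1/2
    punctured = [1, 5, 9]             # hypothetical puncturing pattern
    print("rate after puncturing:", effective_rate(k, n, punctured))   # 6/9 = 2/3
    rx = np.random.randn(n - len(punctured))    # placeholder channel LLRs
    print(channel_llrs_with_puncturing(rx, n, punctured))
```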


Low Density Parity Check Code for Next Generation Communication System

Author: Mayank Ardeshana

Publisher: LAP Lambert Academic Publishing

Published: 2011-12

Total Pages: 72

ISBN-13: 9783845420417

Channel coding provides a means of patterning signals so as to reduce their energy or bandwidth consumption for a given error performance. LDPC codes have been shown to have good error-correcting performance, which enables efficient and reliable communication. LDPC codes have linear decoding complexity, yet their performance approaches the Shannon capacity under iterative probabilistic decoding. In this dissertation, the performance of different error-correcting codes, such as convolutional, Reed-Solomon (RS), Hamming, and block codes, is evaluated with respect to parameters such as code rate, bit error rate (BER), Eb/No, complexity, and coding gain, and compared with LDPC codes. In general, message-passing algorithms, in particular the sum-product algorithm, are used to decode the messages. We show that the logarithmic sum-product algorithm with long block-length codes reduces multiplications to additions by introducing log-likelihood ratios, and that it achieves the best BER performance among the decoding algorithms considered. This performance, combined with the proposed modified min-sum (MS) decoding algorithm, makes these codes very attractive for next-generation digital broadcasting systems (ABS-S).
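The remark that the log-domain sum-product algorithm turns multiplications into additions can be illustrated with the standard LLR-domain node updates. The snippet below is a generic textbook sketch, not the modified min-sum algorithm proposed in the book: the variable-node update becomes a plain sum of LLRs, while the check-node update uses either the tanh rule or its min-sum approximation.

```python
import numpy as np

def variable_node_update(channel_llr, incoming_check_llrs):
    """In the LLR domain the variable-node update is a plain sum;
    in the probability domain it would be a product."""
    return channel_llr + np.sum(incoming_check_llrs)

def check_node_update_spa(incoming_var_llrs):
    """Sum-product (tanh-rule) check-node update for one outgoing edge,
    given the extrinsic incoming LLRs."""
    t = np.prod(np.tanh(np.asarray(incoming_var_llrs) / 2.0))
    return 2.0 * np.arctanh(t)

def check_node_update_min_sum(incoming_var_llrs):
    """Min-sum approximation: replace the tanh rule by a sign/minimum rule."""
    llrs = np.asarray(incoming_var_llrs, dtype=float)
    return np.prod(np.sign(llrs)) * np.min(np.abs(llrs))

if __name__ == "__main__":
    extrinsic = [1.2, -0.8, 2.5]
    print("SPA:    ", check_node_update_spa(extrinsic))
    print("min-sum:", check_node_update_min_sum(extrinsic))
    print("VN:     ", variable_node_update(0.4, extrinsic))
```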


A Multistage Scheduled Decoder for Short Block Length Low-density Parity-check Codes

Author: Nazanin Elhami-Khorasani

Published: 2007

Total Pages: 86

Recent advances in coding theory have uncovered the previously forgotten power of Low-Density Parity-Check (LDPC) codes. Their popularity can be attributed to their relatively simple iterative decoders and their potential to achieve performance close to the Shannon limit. These properties make them an attractive candidate for error-correction applications in communication systems.


Design of Rate-compatible Structured Low-density Parity-check Codes

Author: Jaehong Kim

Published: 2006

The main objective of our research is to design practical low-density parity-check (LDPC) codes that provide a wide range of code rates in a rate-compatible fashion. To this end, we first propose a rate-compatible puncturing algorithm for LDPC codes at short block lengths (up to several thousand symbols). The proposed algorithm is based on the claim that a punctured LDPC code with a smaller level of recoverability has better performance. The algorithm is verified by comparing the performance of intentionally punctured LDPC codes (using the proposed algorithm) with that of randomly punctured LDPC codes; the intentionally punctured codes show better bit error rate (BER) performance at practically short block lengths. Even though the proposed puncturing algorithm performs well, two problems remain for our research objective: first, how to design an LDPC code whose structure is well suited to the puncturing algorithm, and second, how to provide a wide range of rates, since the proposed puncturing algorithm has a puncturing limit. To address these problems, we propose a new class of LDPC codes, called efficiently-encodable rate-compatible (E2RC) codes, in which the concept behind the proposed puncturing algorithm is embedded. The E2RC codes have several strong points. First, they can be efficiently encoded: we present a low-complexity encoder implementation with shift-register circuits and show that a simple erasure decoder can also be used for linear-time encoding, so a message-passing decoder can be shared for both encoding and decoding in transceiver systems that require an encoder/decoder pair. Second, we show that the non-systematic part of the parity-check matrix is cycle-free, which ensures good code characteristics. Finally, the E2RC codes, having a systematic rate-compatible puncturing structure, show better puncturing performance than other LDPC codes across the full range of code rates.
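The claim that a simple erasure decoder can carry out linear-time encoding can be pictured as follows: the systematic bits are known, all parity bits are marked as erasures, and an iterative peeling decoder (as used on the binary erasure channel) recovers them one check equation at a time. The sketch below is a generic peeling-based encoder for a small toy parity-check matrix, not the E2RC construction itself; the matrix and bit positions are made up for illustration.

```python
import numpy as np

def encode_by_erasure_decoding(H, info_bits, info_positions):
    """Fill in parity bits by iterative (peeling) erasure decoding:
    a check equation with exactly one unknown bit determines that bit."""
    n = H.shape[1]
    codeword = np.full(n, -1, dtype=int)          # -1 marks an erased position
    codeword[info_positions] = info_bits
    progress = True
    while progress and (codeword == -1).any():
        progress = False
        for row in H:
            unknown = [j for j in np.flatnonzero(row) if codeword[j] == -1]
            if len(unknown) == 1:                 # check solvable: one unknown left
                j = unknown[0]
                known = [codeword[k] for k in np.flatnonzero(row) if k != j]
                codeword[j] = int(np.sum(known)) % 2
                progress = True
    return codeword

if __name__ == "__main__":
    # Toy parity-check matrix whose parity part (columns 3..5) is lower
    # triangular, which guarantees the peeling encoder always succeeds.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 1, 0],
                  [1, 0, 1, 0, 1, 1]])
    print(encode_by_erasure_decoding(H, info_bits=[1, 0, 1], info_positions=[0, 1, 2]))
```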


Low-density Parity-check Codes with Reduced Decoding Complexity

Author: Benjamin Smith

Published: 2007

Total Pages: 156

ISBN-13: 9780494273289

This thesis presents new methods to design low-density parity-check (LDPC) codes with reduced decoding complexity. An accurate measure of iterative decoding complexity is introduced. In conjunction with extrinsic information transfer (EXIT) chart analysis, an efficient optimization program is developed, for which the complexity measure is the objective function, and its utility is demonstrated by designing LDPC codes with reduced decoding complexity. For long block lengths, codes designed by these methods match the performance of threshold-optimized codes, but reduce the decoding complexity by approximately one-third. The performance of LDPC codes is investigated when the decoder is constrained to perform a sub-optimal decoding algorithm. Due to their practical relevance, the focus is on the design of LDPC codes for quantized min-sum decoders. For such a decoder, codes designed for the sum-product algorithm are sub-optimal, and an alternative design strategy is proposed, resulting in gains of more than 0.5 dB.
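As a rough illustration of using a decoding-complexity measure as an optimization objective, the sketch below computes a simple proxy, the number of graph edges processed per information bit multiplied by the number of iterations, from edge-perspective degree distributions. The thesis's actual complexity measure and its EXIT-chart machinery are more refined, so treat this, and the example degree distributions, purely as assumed stand-ins.

```python
def avg_degree_from_edge_dist(coeffs):
    """Average node degree implied by an edge-perspective distribution
    sum_d coeffs[d] * x^(d-1): it equals 1 / sum_d coeffs[d]/d."""
    return 1.0 / sum(c / d for d, c in coeffs.items())

def design_rate(lam, rho):
    """Design rate of an LDPC ensemble: R = 1 - (sum rho_d/d)/(sum lambda_d/d)."""
    return 1.0 - sum(c / d for d, c in rho.items()) / sum(c / d for d, c in lam.items())

def complexity_proxy(lam, rho, iterations):
    """Crude complexity proxy: edges processed per information bit,
    times the number of decoding iterations."""
    dv = avg_degree_from_edge_dist(lam)     # average variable-node degree
    R = design_rate(lam, rho)
    edges_per_info_bit = dv / R             # n*dv edges serve k = R*n info bits
    return iterations * edges_per_info_bit

if __name__ == "__main__":
    # Hypothetical edge-perspective degree distributions {degree: coefficient}.
    lam = {2: 0.3, 3: 0.4, 8: 0.3}
    rho = {6: 0.5, 7: 0.5}
    print("design rate:", round(design_rate(lam, rho), 3))
    print("complexity proxy:", round(complexity_proxy(lam, rho, iterations=30), 1))
```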


A Study of Low Density Parity-Check Codes Using Systematic Repeat-Accumulate Codes

Published: 2015

Total Pages: 82

Low-Density Parity-Check, or LDPC, codes have been a popular error-correction choice in recent years. Their use of soft-decision decoding through a message-passing algorithm and their channel-capacity-approaching performance have made LDPC codes a strong alternative to Turbo codes. However, their disadvantages, such as encoding complexity, discourage designers from implementing these codes. This thesis presents a type of error-correction code that can be considered a subset of LDPC codes: Repeat-Accumulate (RA) codes, named after their encoder structure. RA codes can be seen as LDPC codes with a simple encoding method similar to that of Turbo codes. What makes these codes special is that they combine a simple encoding process with good performance under soft-decision decoding. At the same time, RA codes have been shown to work well at short to medium block lengths when they are systematic. This thesis therefore argues that LDPC codes can avoid some of their encoding disadvantages by being realized as systematic RA codes. The thesis also shows in detail how RA codes serve as good LDPC codes by comparing their bit error performance against other LDPC simulation results at short to medium code lengths and with different LDPC parity-check matrix constructions. With an RA parity-check matrix describing our LDPC code, we examine how changing the interleaver from a random construction to a structured one can improve performance. To that end, this thesis experiments with three different types of interleavers that preserve the simplicity of the encoder while showing potential improvement in bit error performance compared with what has previously been observed for regular LDPC codes.
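To make the repeat-accumulate encoder structure concrete, here is a minimal sketch of a systematic RA encoder: repeat each information bit q times, permute the repeated stream with an interleaver, and pass it through a 1/(1+D) accumulator (a running XOR). The interleaver below is a random placeholder, whereas the thesis studies structured interleavers; the repetition factor q=3 is likewise only an example.

```python
import numpy as np

def systematic_ra_encode(info_bits, q=3, interleaver=None, seed=0):
    """Systematic repeat-accumulate encoding:
    repeat each bit q times -> interleave -> accumulate (running XOR).
    The codeword is the information bits followed by the accumulator output."""
    info_bits = np.asarray(info_bits) % 2
    repeated = np.repeat(info_bits, q)                 # repetition stage
    if interleaver is None:                            # placeholder random interleaver
        interleaver = np.random.default_rng(seed).permutation(len(repeated))
    permuted = repeated[interleaver]                   # interleaving stage
    parity = np.bitwise_xor.accumulate(permuted)       # accumulator: p_i = p_{i-1} XOR u_i
    return np.concatenate([info_bits, parity])         # systematic codeword

if __name__ == "__main__":
    cw = systematic_ra_encode([1, 0, 1, 1], q=3)
    print(cw)    # 4 systematic bits followed by 12 parity bits (rate 1/4)
```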


On Constructing Low-density Parity-check Codes

Author: Xudong Ma

Published: 2007

Total Pages: 125

ISBN-13: 9780494433096

This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communication over packet networks. We investigate two code design issues that are important in the target application scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterations. The brute-force approach to such optimization is numerically intractable, because it involves a difficult discrete optimization program. In this thesis, we derive an asymptotic approximation to the number of decoding iterations. Based on this approximation, we propose an approximate, numerically tractable optimization framework for finding near-optimal code parameters that minimize the number of decoding iterations. Numerical results confirm that the proposed approach has excellent numerical properties and yields codes with excellent performance in terms of the number of decoding iterations: the number of iterations needed by codes obtained with the proposed design approach can be as small as one-fifth that of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight even for cases that are not extremely close to the limiting regime. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping-set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem of previously constructed capacity-approaching codes, preventing them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained with the proposed construction scheme and design criteria. Compared with codes from previous standard construction schemes, which have error floors at levels of 10^-3 to 10^-4, the codes from the proposed approach show no observable error floors at levels above 10^-7. The error floors of the codes from the proposed approach are also significantly lower than those of codes from previous approaches aimed at low error floors.
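The 2-lift construction can be illustrated at the parity-check-matrix level: every zero of the base matrix becomes a 2x2 zero block, and every one becomes either a 2x2 identity or a 2x2 swap block, chosen by an assignment over the edges. The sketch below is a generic 2-lift with a random choice of permutations standing in for the stopping-set-aware design criteria developed in the thesis; the base matrix is a toy example.

```python
import numpy as np

I2 = np.eye(2, dtype=int)                 # identity block
P2 = np.array([[0, 1], [1, 0]])           # swap (cyclic shift) block
Z2 = np.zeros((2, 2), dtype=int)          # all-zero block

def two_lift(H_base, flips):
    """Build a 2-lift of the base parity-check matrix H_base.
    flips[i, j] = 1 lifts edge (i, j) with the swap block,
    flips[i, j] = 0 lifts it with the identity block."""
    m, n = H_base.shape
    blocks = [[(P2 if flips[i, j] else I2) if H_base[i, j] else Z2
               for j in range(n)] for i in range(m)]
    return np.block(blocks)               # (2m) x (2n) lifted matrix

if __name__ == "__main__":
    H_base = np.array([[1, 1, 0, 1],
                       [0, 1, 1, 1],
                       [1, 0, 1, 0]])
    rng = np.random.default_rng(1)
    flips = rng.integers(0, 2, size=H_base.shape)   # random lift, for illustration only
    H_lifted = two_lift(H_base, flips)
    print(H_lifted.shape)                 # (6, 8)
    print(H_lifted)
```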


Incremental Redundancy Low-density Parity-check Codes for Hybrid FEC/ARQ Schemes

Author: Woonhaing Hur

Published: 2007

Total Pages: 125

This dissertation also examines how to improve throughput performance in hybrid ARQ schemes with low complexity by exploiting irregular repeat-accumulate (IRA) codes. The proposed adaptive transmission method, which uses adaptive puncturing patterns for the IRA codes, achieves higher throughput across the entire range of operating code rates than any single fixed mode in incremental-redundancy (IR) hybrid ARQ schemes.
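The incremental-redundancy idea behind such schemes can be sketched as a simple transmit loop: start from a heavily punctured (high-rate) codeword and, on each NACK, send another batch of previously punctured parity bits so the decoder works with an ever lower-rate code. The decoder callback, codeword, and transmission sizes below are placeholders; the adaptive puncturing patterns of the dissertation are not modeled.

```python
import random

def ir_harq_session(codeword, puncture_order, first_tx, step, try_decode, max_rounds=4):
    """Incremental-redundancy HARQ sketch.
    codeword: full low-rate mother codeword (list of bits).
    puncture_order: symbol indices sorted from 'sent first' to 'sent last'.
    first_tx / step: positions revealed by the first transmission / each retransmission.
    try_decode: callback that returns True when the received set suffices."""
    revealed = set(puncture_order[:first_tx])           # initial high-rate transmission
    for rnd in range(max_rounds):
        if try_decode(revealed, codeword):              # ACK: decoding succeeded
            return rnd + 1, len(revealed)
        nxt = puncture_order[len(revealed):len(revealed) + step]
        revealed.update(nxt)                            # NACK: send more parity bits
    return None, len(revealed)                          # give up after max_rounds

if __name__ == "__main__":
    random.seed(3)
    cw = [random.randint(0, 1) for _ in range(24)]      # placeholder mother codeword
    order = list(range(24))
    # Placeholder decoder: "succeeds" once at least 18 of 24 positions are available.
    ok = lambda rx, cw: len(rx) >= 18
    rounds, received = ir_harq_session(cw, order, first_tx=12, step=4, try_decode=ok)
    print("decoded after", rounds, "transmissions using", received, "symbols")
```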