Quantized Iterative Message Passing Decoders with Low Error Floor for LDPC Codes

Xiaojie Zhang and Paul H. Siegel. This research was supported in part by the Center for Magnetic Recording Research at the University of California, San Diego, by the National Science Foundation under Grants CCF-0829865 and CCF-1116739, and by the University of California Lab Fees Research Program, Award No. 09-LR-06-118620-SIEP. The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Cambridge, MA, July 1–5, 2012, and at the IEEE International Conference on Signal Processing and Communication, Bangalore, India, July 22–25, 2012. Xiaojie Zhang and Paul H. Siegel are with the Department of Electrical and Computer Engineering and the Center for Magnetic Recording Research, University of California, San Diego, La Jolla, CA 92093 (email: {ericzhang, psiegel}@ucsd.edu).
Abstract

The error floor phenomenon observed with LDPC codes and their graph-based, iterative, message-passing (MP) decoders is commonly attributed to the existence of error-prone substructures – variously referred to as near codewords, trapping sets, absorbing sets, or pseudocodewords – in a Tanner graph representation of the code. Many approaches have been proposed to lower the error floor by designing new LDPC codes with fewer such substructures or by modifying the decoding algorithm. Using a theoretical analysis of iterative MP decoding in an idealized trapping set scenario, we show that a contributor to the error floors observed in the literature may be the imprecise implementation of decoding algorithms and, in particular, the message quantization rules used. We then propose a new quantization method – $(q+1)$-bit quasi-uniform quantization – that efficiently increases the dynamic range of messages, thereby overcoming a limitation of conventional quantization schemes. Finally, we use the quasi-uniform quantizer to decode several LDPC codes that suffer from high error floors with traditional fixed-point decoder implementations. The performance simulation results provide evidence that the proposed quantization scheme can, for a wide variety of codes, significantly lower error floors with minimal increase in decoder complexity.

Index Terms: Low-density parity-check (LDPC) codes, iterative message-passing decoding, sum-product algorithm, message quantization, error floors, trapping sets.

I Introduction

The outstanding performance of low-density parity-check (LDPC) codes and iterative, message-passing (MP) decoding algorithms [1, 2] has attracted considerable attention over the past decade and these techniques are being deployed in a growing number of practical applications. At high signal-to-noise ratio (SNR), however, LDPC codes and MP decoders may be subject to the error floor phenomenon, which manifests itself as an abrupt change in the slope of the error-rate curve. Since many important applications, such as data storage and high-speed digital communication, often require extremely low error rates, the study of error floors in LDPC codes remains of considerable practical, as well as theoretical, interest.

The error floor phenomenon is commonly attributed to the existence of certain error-prone substructures (EPSs) in a Tanner graph representation of the code. In the binary erasure channel (BEC), it has been shown that substructures known as stopping sets determine the error-rate performance and the observed error floor [3]. However, for general memoryless binary-input output-symmetric (MBIOS) channels such as the binary symmetric channel (BSC) and the additive white Gaussian noise channel (AWGNC), the EPSs that dominate the error floor performance have not yet been fully characterized, although some classes of EPSs have been identified and studied, such as near-codewords [4], trapping sets [5], absorbing sets [6], and pseudocodewords [7].

One common way to improve the error floor performance of LDPC codes has been to redesign the codes to have Tanner graphs with large girth and without problematic EPSs, which usually consist of a small number of variable nodes [8, 9, 10]. However, for LDPC codes that have been standardized, approaches are needed that do not modify the codes. In the literature, many modifications to the iterative MP decoding algorithms have been proposed in order to improve high-SNR performance, such as averaged decoders [11], reordered decoders [12, 13], and decoders with post-processing [14, 15, 18, 16, 17]. In [11], the authors noticed that the emergence of errors in EPSs is heuristically related to a sudden magnitude change in the values of certain variable nodes (VNs). Hence, it was proposed to average the messages in a belief-propagation (BP) decoder over several iterations to avoid such sudden changes, thereby slowing down the convergence rate for variable nodes in a trapping set and decreasing the frequency of trapping set errors. Another heuristic approach is to process messages based on the order of node reliabilities computed at each iteration [12], and it was suggested that such scheduled decoders are able to resolve some standard trapping set errors [13]. Although these general approaches are capable of improving the average error rate performance to some extent, the resulting decoders still fail on small EPSs, and their effect on the error floor is not significant.

To further improve the error floor behavior, decoders that make use of prior knowledge of some small EPSs have been designed to reduce the decoding failures due to such EPSs. In [14] and [15], the authors proposed a post-processing decoder that matches the configuration of unsatisfied check nodes (CNs) to trapping sets in a precomputed list after conventional MP decoding has failed. The size and completeness of the trapping set list directly affect the performance gain of such decoders, but obtaining a complete list of small trapping sets of a given LDPC code is generally quite computationally complex. A symbol-selecting post-processing technique was also developed in [16]. It saturates the channel messages on a set of selected variable nodes at each stage after the conventional MP algorithm fails. In [17], Han and Ryan proposed a bi-mode erasure decoder that combines several problematic check nodes into a generalized constraint processor, to which a corresponding maximum a posteriori (MAP) algorithm, such as the BCJR algorithm, is then applied. Another post-processing approach that utilizes the graph-theoretic structure of absorbing sets, proposed in [18], adjusts the appropriate messages in the iterative MP decoding once the decoder enters and remains in the absorbing set of interest.

All the above approaches either change the message update rules of MP decoders or require extra processing steps after conventional MP decoding fails, both of which increase the decoding complexity relative to the original iterative MP algorithms. Moreover, the post-processing approaches that require prior knowledge of the set of EPSs causing the error floor are only effective when applied to LDPC codes whose EPSs have been carefully studied.

In fixed-point implementations of iterative MP decoding, efforts have also been made to improve the error-rate performance in the waterfall region and/or error-floor region by optimizing the parameters of uniform quantization [19, 21, 22, 20]. In [19], Zhao et al. studied the effect of message clipping and uniform quantization on the performance of the min-sum decoder in the waterfall region and heuristically optimized the number of quantization bits and the quantization step size for selected LDPC codes. In [20], a dual-mode adaptive uniform quantization scheme was proposed to better approximate the log-tanh function used in sum-product algorithm (SPA) decoding. Specifically, for magnitudes less than 1, all quantization bits were used to represent the fractional part; for magnitudes greater than or equal to 1, all bits were dedicated to the representation of the integer part. In [21, 22], Zhang et al. proposed a conceptually similar idea to increase precision in the quantization of the log-tanh function. Uniform quantization was applied to messages generated by both variable nodes and check nodes, but the quantization step sizes used in the two cases were separately optimized. We note, however, that none of these modified quantization schemes were primarily intended to significantly increase the saturation level, or range, of quantized messages, and in their reported simulation results, error floors can still be clearly observed.

It has been observed that the high error floors associated with certain EPSs of some LDPC codes are closely related to the saturation level imposed on messages passed in the SPA decoder. (See, for example, [23] and references cited therein.) In this work, we investigate the cause of error floors in binary LDPC codes from the perspective of the MP decoder implementation, with special attention to limitations that decrease the dynamic range of messages passed during decoding. We show that, under certain idealized assumptions, the EPSs which are commonly associated with high error floors of some LDPC codes will not trap iterative MP decoders and cause high error floors if message magnitudes and the number of iterations are not limited. Based upon an analysis of the growth rate of messages outside an EPS in an idealized scenario, we propose a novel quasi-uniform quantization method that captures the essence of messages in different ranges of reliability. The proposed quantization method has an extremely large saturation level, which prevents iterative MP decoders from being trapped by an EPS. This property, to the best of our knowledge, distinguishes it from other quantization techniques for iterative MP decoding that have appeared in the literature. With the new quantization method, it is possible to have a fixed-point implementation of iterative MP decoders that achieves low error floors without an additional post-processing stage or a modification of either the decoding update rules or the graphical code representation upon which the iterative MP decoder operates. We present simulation results for min-sum decoding, SPA decoding, and some of their variants, that demonstrate a significant reduction in the error floors of four representative LDPC codes, with no increase in decoding complexity.

The remainder of the paper is organized as follows. Section II gives some notation and definitions used throughout the paper. In Section III, we analytically investigate the impact that message quantization can have on MP decoder performance and the error floor phenomenon. In Section IV, we propose an enhanced quantization method intended to overcome the limitations of traditional quantization rules. In Section V, we incorporate the new quantizer into SPA and min-sum decoding and, through computer simulation of several LDPC codes known for their high error floors, demonstrate the significant improvement in error-rate performance that this new quantization approach can afford. Section VI concludes the paper.

II Notation and Definitions

The study of the error floor phenomenon began shortly after LDPC codes were rediscovered about a decade ago. It has been shown that the EPSs known as stopping sets cause the error floor in the binary erasure channel (BEC), and such EPSs have a clear combinatorial description. Enumeration of these structures makes it possible to accurately estimate the error floor [3]. However, for other MBIOS channels, such as the BSC and the AWGNC, it is more difficult to establish the relationship between EPSs and error floors. In [4], it was first pointed out that near-codewords caused error floors in simulations of the Margulis and Ramanujan–Margulis LDPC codes on the AWGNC. The term trapping set, proposed by Richardson [5], is operationally defined as a subset of variable nodes (VNs) that is susceptible to errors under a certain iterative MP decoder over an MBIOS channel; hence, this concept depends on both the channel and the decoding algorithm. In [6], the error floor is associated with combinatorial substructures within the Tanner graph, named absorbing sets, which are defined independently of the channel. Absorbing sets correspond to a particular type of near-codewords or trapping sets that are stable under bit-flipping operations. All of these EPSs are believed to be causes of error floors, and for some LDPC codes, techniques such as importance sampling used to estimate the error floor are based on the probability of decoding failures on such EPSs [5, 24]. In this section, we will show that, under certain idealized assumptions about the computation trees of variable nodes within a given EPS, as well as the correctness of variable node messages outside the EPS, conventional iterative decoders that accurately represent messages will eventually correct errors supported by the EPS.

To facilitate our discussion, we define a substructure called an absolute trapping set from a purely graph-theoretic perspective, independent of the channel and the decoder. Let $G = (V \cup C, E)$ denote the Tanner graph of a binary LDPC code with VN set $V = \{v_1, \ldots, v_n\}$, CN set $C = \{c_1, \ldots, c_m\}$, and edge set $E$.

Definition 1

A stopping set of size $a$ is a configuration of $a$ variable nodes such that the induced subgraph has no check nodes of degree one. An $(a,b)$ trapping set is a configuration of $a$ variable nodes for which the induced subgraph is connected and has $b$ odd-degree check nodes. If the induced subgraph of an $(a,b)$ trapping set does not contain a stopping set, it is called an absolute trapping set.

In the literature, all trapping sets of interest that contribute to the error floor of an LDPC code are of size smaller than the minimum stopping set size of the code, since otherwise the stopping sets would be the dominant contributor to the error floor [3]. Note that the requirement that an absolute trapping set contain no stopping set also implies that it must have at least one degree-one check node. As we will discuss later in this section, these degree-one check nodes are essential because they are able to pass correct extrinsic messages into the trapping set. To the best of our knowledge, almost all trapping sets of interest in the literature are absolute trapping sets. For example, the well-known (5,3) trapping sets in the Tanner code of length 155, the notorious (12,4) trapping sets in the (2640,1320) Margulis code, and the (5,5) trapping sets in some codes of variable degree five are all absolute trapping sets. Unless otherwise indicated, all trapping sets referred to in this paper are absolute trapping sets as well.

In analogy to the definition of the computation tree in [25], we define a $k$-iteration computation tree as follows.

Definition 2

A $k$-iteration computation tree $T_k(v)$ for an iterative decoder in the Tanner graph $G$ is a tree graph constructed by choosing variable node $v \in V$ as its root and then recursively adding the edges and leaf nodes of $G$ that participate in the iterative message-passing decoding during $k$ iterations. To each vertex $u$ that is created in $T_k(v)$, we associate the corresponding node update function in $G$.

Let $T$ be the induced subgraph of an $(a,b)$ trapping set contained in $G$, with VN set $V(T)$ and CN set $C(T)$. Let $D$ be the set of degree-one CNs in the subgraph $T$, and let $V_D$ be the set of VNs neighboring the CNs in $D$. We refer to a message on an edge adjacent to VN $v$ as a correct message if its sign reflects the correct value of $v$, and as an incorrect message otherwise. Let $\mathcal{D}(u)$ be the set of all descendants of the vertex $u$ in a given computation tree.

Definition 3

Given a Tanner graph $G$ and an induced subgraph $T$ of a trapping set, a variable node $v \in V_D$ is said to be $d$-separated if, for at least one of its neighboring degree-one check nodes $c \in D$, no variable node in $V(T)$ belongs to $\mathcal{D}(c)$ in the computation tree $T_d(v)$. If every $v \in V_D$ is $d$-separated, the induced subgraph $T$ is said to satisfy the $d$-separation assumption.

Fig. 1: Example of a trapping set and its corresponding computation tree. (a) Trapping set and part of its neighboring nodes. (b) Computation tree with root $v_1$.

In Fig. 1(a), we show the graph of a trapping set and some of its neighboring nodes. The VNs of the trapping set, forming the set $V(T)$, are represented as solid black circles, and the CNs of the trapping set form the set $C(T)$. In this trapping set, every VN has a neighboring degree-one CN, i.e., $V_D = V(T)$. For example, the 3-iteration computation tree $T_3(v_1)$ of VN $v_1$ is shown in Fig. 1(b). It can be verified from this computation tree that $v_1$ is 2-separated but not 3-separated, because a VN of the trapping set appears among the descendants of the degree-one CN in $T_3(v_1)$, but not in $T_2(v_1)$. It is worth noting that whether or not a trapping set satisfies the $d$-separation assumption depends on the Tanner graph outside the trapping set, not on the trapping set itself.

We want to point out that the $d$-separation assumption is much weaker than the isolation assumption in [26]. The separation assumption here applies only to the VNs in $V_D$, i.e., those having a neighboring degree-one CN in the induced subgraph $T$, and it requires only that these degree-one CNs have no VNs from the trapping set among their descendants in the corresponding $d$-iteration computation tree. Under the separation assumption, the descendants of such a degree-one CN $c$ are separated from all the nodes in the trapping set, meaning that the incorrect messages passed within the trapping set do not affect the extrinsic messages sent towards $c$ in the computation tree.

III Error Floors of LDPC Codes

III-A Trapping Sets and Min-Sum Decoding

To get further insight into the connection between trapping sets and decoding failures of iterative MP decoders, we first consider a simple iterative MP decoder, the min-sum (MS) decoder, which can be viewed as a simple approximation of the sum-product algorithm. We now briefly recall the VN and CN update rules of min-sum decoding.

A VN receives an input message from the channel, typically the log-likelihood ratio (LLR) of the corresponding channel output, defined as follows

(1)   $L(y_i) = \log\dfrac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)},$

where $x_i$ is the code bit and $y_i$ is the corresponding received symbol.
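As a concrete illustration of (1), the following Python sketch computes channel LLRs for the two channels considered in this paper, assuming the BPSK mapping of bit 0 to $+1$ and bit 1 to $-1$ on the AWGNC; the function names are our own illustrative choices.

import math

def llr_bsc(y, p):
    # BSC with crossover probability p: L = +/- log((1 - p) / p),
    # positive when the received bit is 0.
    mag = math.log((1.0 - p) / p)
    return mag if y == 0 else -mag

def llr_awgn(y, sigma2):
    # AWGNC with noise variance sigma2 and BPSK mapping 0 -> +1, 1 -> -1:
    # L = 2 * y / sigma2.
    return 2.0 * y / sigma2

print(llr_bsc(0, 0.01))    # about +4.60
print(llr_awgn(0.8, 0.5))  # +3.2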

Denote by $m^{(\ell)}_{v \to c}$ and $m^{(\ell)}_{c \to v}$ the messages sent from VN $v$ to CN $c$ and from CN $c$ to VN $v$ in the $\ell$-th iteration, respectively, and denote by $N(v)$ (resp. $N(c)$) the set of neighboring nodes of VN $v$ (resp. CN $c$). Then, the message sent from $v$ to $c$ in min-sum decoding is given by

(2)   $m^{(\ell)}_{v \to c} = L(y_v) + \displaystyle\sum_{c' \in N(v) \setminus \{c\}} m^{(\ell-1)}_{c' \to v},$

and the message from CN $c$ to VN $v$ is computed as

(3)   $m^{(\ell)}_{c \to v} = \left( \displaystyle\prod_{v' \in N(c) \setminus \{v\}} \operatorname{sgn}\!\left( m^{(\ell)}_{v' \to c} \right) \right) \cdot \displaystyle\min_{v' \in N(c) \setminus \{v\}} \left| m^{(\ell)}_{v' \to c} \right|.$

In the initialization step, we set $m^{(0)}_{c \to v} = 0$ on all edges. It can be seen from (2) and (3) that the min-sum decoding algorithm is insensitive to linear scaling, meaning that linearly scaling all input messages from the channel does not affect the decoding performance.
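To make the update rules concrete, the following minimal sketch implements (2) and (3) for a single edge; the dictionary-based message containers are our own illustrative convention, not part of any standard decoder implementation.

def vn_update(channel_llr, incoming, exclude):
    # Eq. (2): the VN-to-CN message is the channel LLR plus the sum of
    # all incoming CN messages except the one on the target edge.
    return channel_llr + sum(m for c, m in incoming.items() if c != exclude)

def cn_update_ms(incoming, exclude):
    # Eq. (3): the CN-to-VN message combines the product of the signs with
    # the minimum of the magnitudes of the other incoming VN messages.
    others = [m for v, m in incoming.items() if v != exclude]
    sign = 1.0
    for m in others:
        if m < 0:
            sign = -sign
    return sign * min(abs(m) for m in others)

print(vn_update(1.0, {"c2": -0.5, "c3": 2.0}, exclude="c1"))           # 2.5
print(cn_update_ms({"v2": -3.0, "v3": 0.5, "v4": 4.0}, exclude="v1"))  # -0.5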

For the MS decoder, we can show that a trapping set does not cause decoding failure if its induced subgraph in the Tanner graph satisfies certain criteria.

Theorem 1

Let $G$ be the Tanner graph of a variable-regular LDPC code that contains a subgraph $T$ induced by an absolute trapping set. Assume that the channel is either a BSC or an AWGNC, and that the messages from the channel to all VNs outside $T$ are correct. If $T$ satisfies the $d$-separation assumption for sufficiently large $d$, then the corresponding min-sum decoder will successfully correct all erroneous VNs in $V(T)$.

Proof:

See Appendix A. \qed

In general, the error-rate performance of MS decoding is not as good as that of SPA decoding. However, there are several quite simple but effective ways to adjust the CN update rule of MS decoding to get comparable performance to SPA decoding. One method is attenuated-min-sum (AMS) decoding [27], where the magnitudes of messages are attenuated at CNs. The corresponding CN update rule of AMS is as follows

(4)   $m^{(\ell)}_{c \to v} = \alpha \cdot \left( \displaystyle\prod_{v' \in N(c) \setminus \{v\}} \operatorname{sgn}\!\left( m^{(\ell)}_{v' \to c} \right) \right) \cdot \displaystyle\min_{v' \in N(c) \setminus \{v\}} \left| m^{(\ell)}_{v' \to c} \right|,$

where $\alpha$, $0 < \alpha < 1$, is the attenuation factor, which can be a fixed constant or adaptively adjusted. Another way to improve the error-rate performance of MS decoding is offset-min-sum (OMS) decoding, which applies an offset to reduce the magnitudes of CN output messages. The resulting CN update equation is

(5)   $m^{(\ell)}_{c \to v} = \left( \displaystyle\prod_{v' \in N(c) \setminus \{v\}} \operatorname{sgn}\!\left( m^{(\ell)}_{v' \to c} \right) \right) \cdot \max\!\left( \displaystyle\min_{v' \in N(c) \setminus \{v\}} \left| m^{(\ell)}_{v' \to c} \right| - \beta,\ 0 \right),$

where $\beta$ is the offset which, like the attenuation factor, can be a fixed constant or adaptively adjusted. In some implementations, for additional simplicity, the attenuation factor or offset is set to be the same fixed constant for all CNs and all iterations [27].
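A sketch of the AMS and OMS check node updates in (4) and (5), reusing cn_update_ms from the sketch above; the default values of alpha and beta are placeholders, not tuned recommendations.

def cn_update_ams(incoming, exclude, alpha=0.8):
    # Eq. (4): attenuated min-sum scales the MS magnitude by alpha (0 < alpha < 1).
    return alpha * cn_update_ms(incoming, exclude)

def cn_update_oms(incoming, exclude, beta=0.5):
    # Eq. (5): offset min-sum subtracts beta from the MS magnitude,
    # clamping at zero so that the sign is never flipped.
    m = cn_update_ms(incoming, exclude)
    sign = -1.0 if m < 0 else 1.0
    return sign * max(abs(m) - beta, 0.0)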

Theorem 1 can be extended to both AMS and OMS decoding, where we assume that, in each iteration, all CNs use the same attenuation factor $\alpha$ in AMS or the same offset $\beta$ in OMS.

Corollary 2

Let $G$ be the Tanner graph of a variable-regular LDPC code that contains a subgraph $T$ induced by an absolute trapping set. Assume that the channel is either a BSC or an AWGNC, and that the messages from the channel to all VNs outside $T$ are correct. If $T$ satisfies the $d$-separation assumption for sufficiently large $d$, then both the AMS and OMS decoders will successfully correct all erroneous VNs in $V(T)$.

Proof:

See Appendix B. \qed

As shown in Appendix B, the extension to AMS decoding follows easily from Theorem 1. On the other hand, the proof of the extension to the OMS decoder makes use of ideas introduced in the analysis of SPA decoding in the next subsection.

III-B Trapping Sets and Sum-Product Algorithm Decoding

In this subsection, we further extend Theorem 1 to sum-product algorithm decoding. The optimality criterion in the design of the SPA decoder is symbol-wise maximum a posteriori (MAP) probability, and the SPA is an optimal symbol-wise decoder on Tanner graphs without cycles.

In SPA decoding, VNs take the log-likelihood ratios of the information received from the channel as initial input messages. The VN update rule is the same as that of MS decoding described in (2), which involves the summation of all incoming extrinsic messages. In the CN update rule of SPA decoding, the message sent from CN $c$ to VN $v$ is computed as

(6)   $m_{c \to v} = 2\tanh^{-1}\!\left( \displaystyle\prod_{v' \in N(c) \setminus \{v\}} \tanh\!\left( \frac{m_{v' \to c}}{2} \right) \right).$

In practical implementations of the SPA, the following equivalent CN update rule is often used

(7)   $m_{c \to v} = \left( \displaystyle\prod_{v' \in N(c) \setminus \{v\}} \operatorname{sgn}\!\left( m_{v' \to c} \right) \right) \cdot \phi\!\left( \displaystyle\sum_{v' \in N(c) \setminus \{v\}} \phi\!\left( \left| m_{v' \to c} \right| \right) \right),$

where $\phi(x) = -\log\tanh(x/2) = \log\frac{e^x + 1}{e^x - 1}$ for $x > 0$, and $\phi^{-1}(x) = \phi(x)$, i.e., $\phi$ is its own inverse. In some fixed-point implementations, in order to obtain a better approximation, different look-up tables can be used to compute $\phi(\cdot)$ and $\phi^{-1}(\cdot)$ [22].

We note that the hyperbolic tangent function, $\tanh(x/2)$, has numerical saturation problems when computed with finite precision. For example, in a 64-bit floating-point (IEEE 754 standard format [28]) computer implementation, it can be shown that $\tanh(x/2)$ is rounded to 1 when $x$ exceeds approximately 37.4, meaning that the computed value of $2\tanh^{-1}(\tanh(x/2))$ is infinite for such $x$ [29]. In order to avoid such problems that can arise from limited precision, thresholds on the magnitudes of messages must be applied in simulation studies [22].

In order to maintain the performance advantage of SPA decoding over MS decoding, the quantization method has to preserve the self-inverse property of the $\phi$ function and to accurately compute the CN update function in (7). However, it is difficult to approximate the $\phi$ function well with limited resolution, because this requires both fine precision and large range. Efforts have been made to design quantization methods that work effectively with the $\phi$ function. For example, a variable-precision quantization scheme proposed in [20] uses a larger quantization step size for magnitudes greater than 1, and a smaller step size for magnitudes less than 1. An adaptive uniform quantization method proposed in [21] uses different quantization step sizes for the outputs of the $\phi$ function and of the $\phi^{-1}$ function in (7). If the output of the $\phi$ function is quantized with finest precision $\epsilon$, inputs greater than $\phi(\epsilon)$ cannot be distinguished, and since $\phi(\epsilon) \approx \log(2/\epsilon)$ for small $\epsilon$, this limit is quite small even for extremely fine precision; e.g., $\phi(\epsilon) \approx 16.8$ for $\epsilon = 10^{-7}$. Hence, the largest magnitude supported during decoding depends on the finest precision of the quantization. This means that increasing the quantization range without improving the precision is not beneficial.
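The saturation effect is easy to reproduce numerically; the short experiment below (our own illustration) shows the double-precision rounding of $\tanh(x/2)$ to 1 and the slow, logarithmic growth of the usable range of the $\phi$ function as the output precision $\epsilon$ is refined.

import math

def phi(x):
    # phi(x) = -log(tanh(x / 2)), self-inverse on (0, infinity)
    return -math.log(math.tanh(x / 2.0))

# tanh(x/2) rounds to exactly 1.0 in double precision once x is near 37.4,
# at which point 2 * atanh(tanh(x/2)) is no longer finite.
for x in (30.0, 36.0, 38.0, 40.0):
    print(x, math.tanh(x / 2.0) == 1.0)   # False, False, True, True

# Quantizing phi outputs with finest precision eps caps the usable input
# range at phi(eps), which is approximately log(2 / eps).
for eps in (1e-3, 1e-5, 1e-7):
    print(eps, phi(eps))                  # about 7.6, 12.2, 16.8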

In order to avoid dealing with the $\phi$ function, a variety of other CN update rules, most of which are approximations to the SPA, have been proposed. Some of these approximations are based on the following equivalent version of the SPA CN update rule represented by (6) or (7),

(8)   $m_{c \to v} = \mathop{\boxplus}\limits_{v' \in N(c) \setminus \{v\}} m_{v' \to c},$

where $\boxplus$ is the pairwise “box-plus” operator defined as

(9)    $x \boxplus y \triangleq \log\dfrac{1 + e^{x+y}}{e^x + e^y}$
(10)   $\phantom{x \boxplus y} = \operatorname{sgn}(x)\operatorname{sgn}(y) \cdot \min(|x|, |y|) + s(x, y),$

with

(11)   $s(x, y) = \log\!\left( 1 + e^{-|x+y|} \right) - \log\!\left( 1 + e^{-|x-y|} \right).$

The proof of equivalence between (6) and (8) can be found in [30]. We call such an implementation box-plus SPA decoding. The formulation above does not have the precision problem that (6) and (7) have; in fact, in a 64-bit double-precision floating-point implementation, the maximum magnitude of a message that can be supported is approximately $1.8 \times 10^{308}$, which is the largest double-precision value supported by the IEEE 754 standard. Moreover, unlike the $\phi$ function, the function $\log(1 + e^{-x})$ used in the correction term can be accurately quantized or approximated with piecewise linear functions [29, 30, 31].
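A minimal sketch of the pairwise operator in (9)–(11) and its fold over the check node inputs in (8); the list-based message convention is ours.

import math
from functools import reduce

def box_plus(x, y):
    # Eqs. (10)-(11): min-sum core term plus the bounded correction s(x, y).
    sign = -1.0 if (x < 0) != (y < 0) else 1.0
    core = sign * min(abs(x), abs(y))
    s = math.log1p(math.exp(-abs(x + y))) - math.log1p(math.exp(-abs(x - y)))
    return core + s

def cn_update_spa(extrinsic_inputs):
    # Eq. (8): fold the pairwise operator over all extrinsic VN messages.
    return reduce(box_plus, extrinsic_inputs)

print(box_plus(3.0, 4.0))               # about 2.69: same sign as the MS
                                        # output 3.0, but smaller magnitude
print(cn_update_spa([3.0, -4.0, 5.0]))  # about -2.59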

If the term $s(x,y)$ is omitted when using (8) to calculate the CN output in box-plus decoding, the result is the same as that produced by the MS algorithm using (3). Therefore, box-plus SPA decoding can be viewed as MS decoding with a correction factor. It is known that the magnitude of $s(x,y)$ is bounded above by $\log 2$ (see, for example, [33, p. 232]). In fact, as shown in [27],[32], given the same inputs, a message produced by a CN in SPA decoding has the same sign as the corresponding message in MS decoding, with equal or smaller magnitude. Because of their relevance to the proof of Theorem 4 below, we summarize these observations relating the CN updates produced by the SPA and MS decoders in Lemma 3.

Lemma 3

Let $m^{\mathrm{MS}}_{c \to v}$ denote the message from CN $c$ to VN $v$ as computed in (3), and let $m^{\mathrm{SPA}}_{c \to v}$ denote the message from CN $c$ to VN $v$ as computed in (6), (7), and (8). Then $\operatorname{sgn}\big(m^{\mathrm{SPA}}_{c \to v}\big) = \operatorname{sgn}\big(m^{\mathrm{MS}}_{c \to v}\big)$ and $\big|m^{\mathrm{SPA}}_{c \to v}\big| \le \big|m^{\mathrm{MS}}_{c \to v}\big|.$ The correction term in (11) satisfies $|s(x,y)| \le \log 2$, and $s(x,y) = 0$ when $\min(|x|,|y|) = 0$.

Proof:

See Appendix C. \qed

Finally, we note that if the correction term $s(x,y)$ is replaced with a fixed constant, the resulting CN update rule corresponds to that of the OMS decoder in (5).

As we discussed earlier, no matter how one designs the fixed-point implementation of the original SPA using the $\phi$ function, or even with a floating-point implementation, the function $\phi(x)$ is unbounded as $x$ approaches 0. Even if we saturate both the input and the output of the $\phi$ function, the sum $\sum_{v' \in N(c)\setminus\{v\}} \phi(|m_{v' \to c}|)$ in (7) remains unbounded, growing linearly in the check node degree. Therefore, the CN output of a practical implementation of (6) or (7) can significantly differ from the true computed value. However, since box-plus SPA decoding can be considered as min-sum decoding with a correction factor, the implementation error comes mainly from the computation and quantization of the correction factor, which is a small bounded value, as shown in Lemma 3. Now, we can extend Theorem 1 to SPA decoding.

Theorem 4

Let $G$ be the Tanner graph of a variable-regular LDPC code that contains a subgraph $T$ induced by an absolute trapping set. Assume that the channel is either a BSC or an AWGNC, and that the messages from the channel to all VNs outside $T$ are correct. If $T$ satisfies the $d$-separation assumption for sufficiently large $d$, then the SPA decoder will successfully correct all erroneous VNs in $V(T)$.

Proof:

See Appendix D. \qed

Remark 1

As will be shown in the simulation results, linear scaling of the input LLRs to the SPA decoder will indeed affect the decoding performance, because the correction factor $s(x,y)$ is not linear in either $x$ or $y$.

For most LDPC codes, the trapping sets typically satisfy the $d$-separation assumption only for small values of $d$. Nevertheless, as described more fully in Section V, in our 64-bit double-precision floating-point computer simulations of MS decoding and box-plus SPA decoding applied to several LDPC codes traditionally associated with high error floors, we have not observed, in tens of billions of channel realizations of both the BSC and the AWGNC, any decoding failure in which the error pattern corresponds to the support of a small trapping set. Moreover, when we force every VN in a trapping set to be in error and all other VNs to be correct, the floating-point decoders can successfully decode, whereas a decoder implementation that limits the magnitude of messages may not be able to resolve the errors in the trapping set and would then fail to decode to the correct codeword.

We emphasize that the analytical and numerical results in this paper are mainly for variable-regular LDPC codes. Extension of this analysis to variable-irregular LDPC codes does not appear to be straightforward.

IV New Quantized Decoders with Low Error Floors

As mentioned above, several empirical studies have shown that the range and the precision of quantized messages in iterative LDPC decoders can influence the observed error floor. Moreover, analytical models used to study the dynamical evolution of messages show that message magnitudes can exhibit exponential growth behavior as a function of the number of decoder iterations. Likewise, the proofs of the theorems and corollaries in Section III suggest that iterative decoder performance can be improved by allowing for the exponential growth of message magnitudes. These results serve as the motivation for a new quantization method that we refer to as $(q+1)$-bit quasi-uniform quantization, which we now describe.

Consider first the uniform quantizer with quantization step $\Delta$. For any real number $x$, it is defined by

$Q_\Delta(x) = \operatorname{sgn}(x)\,\Delta \left\lfloor \dfrac{|x|}{\Delta} + \dfrac{1}{2} \right\rfloor.$

The outputs of the uniform quantizer are of the form $i\Delta$, $i \in \mathbb{Z}$. The quantization intervals can be visualized by expressing the quantization rule as

(12)   $Q_\Delta(x) = i\Delta \quad \text{for } \left| x - i\Delta \right| \le \dfrac{\Delta}{2},\ i \in \mathbb{Z}.$

Now, let $N = 2^{q-1} - 1$, where $q \ge 2$ is an integer. The $q$-bit uniform quantizer combines the uniform quantization intervals corresponding to the output values $N\Delta, (N+1)\Delta, \ldots$ into a single semi-infinite interval whose elements are quantized to $N\Delta$ and, similarly, combines the intervals corresponding to the output values $-N\Delta, -(N+1)\Delta, \ldots$ into a single semi-infinite interval whose elements are quantized to $-N\Delta$. Denoting the $q$-bit quantizer with step $\Delta$ by $Q^q_\Delta$, we have

(13)   $Q^q_\Delta(x) = \begin{cases} N\Delta, & x > \left(N - \tfrac{1}{2}\right)\Delta \\ Q_\Delta(x), & |x| \le \left(N - \tfrac{1}{2}\right)\Delta \\ -N\Delta, & x < -\left(N - \tfrac{1}{2}\right)\Delta. \end{cases}$

The number of intervals is $2N + 1 = 2^q - 1$, and the quantizer output levels $i\Delta$, $-N \le i \le N$, can be denoted by the signed $q$-bit binary representation of $i$, in which the last $q-1$ bits are the binary representation of $|i|$ and the first bit is the sign bit, with value 0 (resp. 1) when $i$ is positive (resp. negative). Note that the output level 0 has two such binary representations; one of them can be selected using any preferred convention.
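A minimal sketch of the $q$-bit uniform quantizer (13); ties at interval endpoints are resolved upward here, which is immaterial for the present discussion.

import math

def quantize_uniform(x, delta=1.0, q=3):
    # Eq. (13): round |x| to the nearest multiple of delta, then saturate
    # at N * delta, where N = 2**(q - 1) - 1.
    n_max = 2 ** (q - 1) - 1
    i = min(int(math.floor(abs(x) / delta + 0.5)), n_max)
    return math.copysign(i * delta, x)

print([quantize_uniform(x) for x in (0.2, 0.6, 2.4, 5.0, -9.0)])
# [0.0, 1.0, 2.0, 3.0, -3.0]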

One approach to expanding the range of quantized messages is to increase the step size $\Delta$ without changing the resolution $q$. This approach, however, sacrifices the precision of the quantization. Alternatively, one could maintain the value of $\Delta$ and increase $q$ to resolve larger magnitudes. This would increase implementation complexity when incorporated into the decoding hardware.

In the context of our application, the $(q+1)$-bit quasi-uniform quantizer represents a compromise between these conflicting objectives of retaining fine precision, allowing large dynamic range, and controlling implementation complexity as messages grow exponentially in the number of decoder iterations. The definition of the quantizer involves another parameter $\lambda > 1$, which we refer to as the growth rate parameter. Roughly speaking, the underlying idea behind the quantizer is as follows. Let $M = (2^{q-1} - 1)\Delta$ denote the saturation level of the $q$-bit uniform quantizer. For input values with magnitude less than $M\lambda$, we use $q$-bit uniform quantization with step size $\Delta$. The intervals corresponding to the quantized values $0, \pm\Delta, \ldots, \pm(M - \Delta)$ are exactly like those of the $q$-bit uniform quantizer; for the values $M$ and $-M$, the semi-infinite intervals are shortened to $\left(M - \tfrac{\Delta}{2},\ M\lambda\right)$ and $\left(-M\lambda,\ -(M - \tfrac{\Delta}{2})\right)$, respectively. For input values with magnitude at least $M\lambda$, the quantizer output can take an additional $2 \cdot 2^{q-1}$ values of the form $\pm M\lambda^j$, $1 \le j \le 2^{q-1}$, with corresponding intervals whose lengths increase exponentially with growth rate $\lambda$. More precisely, the $(q+1)$-bit quasi-uniform quantizer, denoted by $Q^{q+1}_{\Delta,\lambda}$, is defined as follows.

(14)   $Q^{q+1}_{\Delta,\lambda}(x) = \begin{cases} Q_\Delta(x), & |x| \le M - \tfrac{\Delta}{2} \\ \operatorname{sgn}(x)\, M, & M - \tfrac{\Delta}{2} < |x| < M\lambda \\ \operatorname{sgn}(x)\, M\lambda^{j}, & M\lambda^{j} \le |x| < M\lambda^{j+1},\ 1 \le j \le 2^{q-1} - 1 \\ \operatorname{sgn}(x)\, M\lambda^{2^{q-1}}, & |x| \ge M\lambda^{2^{q-1}}. \end{cases}$


From Definition (14), we see that the quantization levels can be represented with only $q+1$ bits. The uniform levels $i\Delta$, $-(2^{q-1}-1) \le i \le 2^{q-1}-1$, are represented by the signed $q$-bit binary representation of the integer $i$ followed by a final indicator bit set to 0, to reflect the fact that the $q$-bit uniform quantizer has been applied. The quantized levels $M\lambda^{j+1}$, $0 \le j \le 2^{q-1}-1$, are denoted by the signed $q$-bit binary representation of $j$ followed by the indicator bit set to 1, to indicate that non-uniform quantization has been used. Similarly, we denote the quantized levels $-M\lambda^{j+1}$ by the signed $q$-bit binary representation of $-j$ followed by the indicator bit, again set to 1. It is sometimes convenient to represent these quantization levels in the form $(i, t)$, where $i$ is the decimal integer representation of the signed binary $q$-tuple and $t \in \{0, 1\}$ is the indicator bit.

Table I shows an example of (3+1)-bit quasi-uniform quantization with $q = 3$, $\Delta = 1$, and $\lambda = 3$. Here $M = (2^{q-1} - 1)\Delta = 3$. The operation of the quantizer is shown only for non-negative real inputs; the operation on negative reals can be obtained by odd symmetry. The first bit is the sign bit, and the last bit is the indicator bit. The quantizer behaves just like the 3-bit uniform quantizer on the interval $[0, 2.5]$. For larger inputs, the quantizer uses intervals of exponentially increasing length, with each input in the exponential range quantized to the left endpoint of the interval in which it falls. For example, all values within the quantization interval $[27, 81)$ are quantized to 27. The decimal values are used in the VN and CN update computations, and then the corresponding quantized binary messages are passed between VNs and CNs.

Input range    Quantized value (decimal)    Binary form
[0, 0.5]       0                            0000
(0.5, 1.5]     1                            0010
(1.5, 2.5]     2                            0100
(2.5, 9)       3                            0110
[9, 27)        9                            0001
[27, 81)       27                           0011
[81, 243)      81                           0101
[243, ∞)       243                          0111
TABLE I: (3+1)-bit quasi-uniform quantization with $q = 3$, $\Delta = 1$, and $\lambda = 3$.
Input range    Quantized value (decimal)    Binary form
[0, 0.5]       0                            0000
(0.5, 1.5]     1                            0001
(1.5, 2.5]     2                            0010
(2.5, 3.5]     3                            0011
(3.5, 12)      4                            0100
[12, 36)       12                           0101
[36, 108)      36                           0110
[108, ∞)       108                          0111
TABLE II: General 4-bit quasi-uniform quantization with $\Delta = 1$, $\lambda = 3$, and $p = 5$.

We can further extend the idea of $(q+1)$-bit quasi-uniform quantization, as follows. The $(q+1)$-bit quasi-uniform quantizer uses $q+1$ bits in total to represent $2^q$ different output magnitudes, or $2^{q+1} - 1$ quantization intervals if signs are taken into account. As described in (14) and illustrated in Table I, $2^{q-1}$ output magnitudes, including 0, are allocated to the uniform quantization domain and the remaining $2^{q-1}$ magnitudes correspond to exponentially growing quantization interval lengths. The generalized (symmetric) $(q+1)$-bit quasi-uniform quantizer represents the same number of magnitudes, but it can assign any number of them, say $p$, to the uniform quantization range and the remaining $2^q - p$ magnitudes to the exponential quantization range. With a quantization rule similar to (14), the quantized magnitudes of the general $(q+1)$-bit quasi-uniform quantizer are $i\Delta$ for $0 \le i \le p-1$ and $(p-1)\Delta\lambda^{\,i-p+1}$ for $p \le i \le 2^q - 1$, with negative levels defined symmetrically. Table II shows an example of a general 4-bit quasi-uniform quantization with $\Delta = 1$, $\lambda = 3$, and $p = 5$. The uniform quantization range in this example is from $-4$ to 4 with uniform step size 1, and the exponential range is above 4 or below $-4$ with quantization levels $\pm 4\lambda^{j}$ for $j = 1, 2, 3$.

The motivation for the proposed quasi-uniform quantization method was the analysis of message-passing decoder behavior on trapping sets that satisfy the $d$-separation assumption for large $d$. Although this property is generally not satisfied by trapping sets in practical LDPC codes, the simulation results in the next section demonstrate that, for a variety of LDPC codes that were examined, this quantization approach can substantially lower error floors when used with standard MS-based and SPA-based decoders.

V Numerical Results

In this section we compare the error-rate performance obtained with the proposed quasi-uniform quantization method to that obtained using uniform quantization. We consider four known LDPC codes covering a range of rates and lengths: a rate-0.3, (640,192) quasi-cyclic (QC) LDPC code [17]; the rate-1/2, (2640,1320) Margulis LDPC code [4]; the rate-0.8, (1280,1024) AR4JA LDPC code [36]; and MacKay's (4095,3358) regular LDPC code (the 4095.737.3.101 code in [35]) with rate approximately 0.82. Results are shown for various combinations of the BSC and AWGN channels using the MS, OMS, AMS, SPA, and approximated-SPA decoders.

All of the frame error rate (FER) curves are based on Monte Carlo simulations that generated at least 200 error frames for each plotted error rate, and the maximum number of decoding iterations was set to 200, unless otherwise indicated.

V-A Dynamic Range of Message Magnitudes

We first present some empirical data in support of the contention that some benefit may come from allowing message magnitudes to grow during iterative decoding.

Fig. 2: Probability density function of message magnitudes. (a) MS decoder on the (640,192) QC-LDPC code over the BSC. (b) SPA decoder on the (2640,1320) Margulis code over the AWGNC.

Fig. 2 shows the empirical probability density functions (pdfs) of the message magnitudes observed during decoding simulations for two LDPC codes. Fig. 2(a) shows the pdf for the MS decoder applied to the (640,192) QC-LDPC code on the BSC, where the magnitude of all input LLRs is scaled to 1. Fig. 2(b) shows the pdf for the SPA decoder applied to the Margulis code on the AWGNC. The data used to create these figures were obtained using floating-point decoder implementations and more than 10 million channel output symbols. The messages passed on all edges during all decoding iterations were collected to generate the pdfs. In the simulations, the iterative MP decoders stopped when a codeword was found or when the maximum number of iterations (200) was reached. The figures confirm that a substantial fraction of messages had “large” magnitudes. Moreover, upon further examination of the simulation data, we found that such “strong” messages, in general, helped to successfully decode the received symbols, as suggested by the idealized theoretical analysis in Section III.

Fig. 3: FER results of the min-sum (MS) decoder on the (640,192) QC-LDPC code on the BSC.
Fig. 4: FER results of the min-sum (MS) decoder on the (640,192) QC-LDPC code on the AWGNC.

V-B Simulation Results for Min-Sum Decoding and Variants

Figs. 3 and 4 show simulation results for the (640,192) QC-LDPC code using various types of quantized MS decoders and floating-point MS decoders, extending some of the results presented in [37]. For the BSC, we scaled the magnitudes of decoder input messages from the channel to 1 since, for linear decoders such as Gallager-B and MS, the scaling of channel input messages does not affect the decoding performance. The uniform quantization step size $\Delta$ is set to 1 or 0.5. So, for example, when $\Delta = 1$, the 3-bit uniform quantizer produces the values $\{0, \pm 1, \pm 2, \pm 3\}$, and the (3+1)-bit quasi-uniform quantizer with $\lambda = 3$ yields the values described in Table I. In the simulations, the parameter $\lambda$ was heuristically chosen by testing different values. When $\lambda$ is large, a small $q$ proved to be enough to represent a large range of message magnitudes.

In Fig. 3, we see that the slope of the error floor resulting from uniform quantization with either step size, $\Delta = 1$ or $\Delta = 0.5$, is similar to that of the Gallager-B decoder error floor. This is because, when most messages have the same magnitude, MS decoding essentially degenerates to Gallager-B decoding, which relies solely upon the signs of messages.

Comparing uniform quantizers with the same number of bits but different step sizes, we see that the smaller step size produces better performance in the waterfall region but a higher error floor. This observation can be explained by the saturation level of these quantizers. For example, 3-bit and 4-bit uniform quantizers with step size $\Delta = 1$ saturate at magnitudes 3 and 7, respectively, whereas with step size $\Delta = 0.5$ they saturate at magnitudes 1.5 and 3.5, respectively. The stronger messages, i.e., the messages with larger magnitudes, can be helpful or harmful to the decoding process, depending on whether they are correct or not. The correct ones can help overcome the incorrectly received bits, but the incorrect ones tend to negatively influence the recovery of correctly received bits. In the error-floor region, when channel conditions are good, very few bits are received incorrectly, and as suggested by the proofs of Theorems 1 and 4, large saturation levels allow messages corresponding to correct bits to grow sufficiently to overcome the “incorrect” messages in trapping sets. This behavior is evident in Fig. 3, where the error floors produced by the different uniform quantizers monotonically decrease as the saturation levels increase.

On the other hand, in the waterfall region where many bits are received incorrectly, reducing the saturation level may limit the propagation of strong incorrect messages. Moreover, in this specific case, quantization with the smaller step size $\Delta = 0.5$ may be expected to improve performance relative to that achieved with the larger step size or with a floating-point MS decoder implementation. The reasoning is that, since the magnitudes of input LLRs to the MS decoder from the BSC are scaled to 1, the low saturation level and the possible appearance of non-integral saturated messages may reduce the possibility of the messages at a VN summing to zero. Because VN sums equal to zero could result in oscillatory behavior in the decoder and failure to decode correctly, this could explain why, in Fig. 3, the MS decoder using (3+1)-bit quasi-uniform quantization and step size $\Delta = 0.5$ yields better performance than the floating-point decoder.

Fig. 4 shows the performance of MS decoding of the (640,192) QC-LDPC code on the AWGNC. Here the $(q+1)$-bit quasi-uniform quantizer yields a substantial reduction of the error floor in comparison not only to 8-bit uniform quantization but also to the floating-point results. This is consistent with, and more impressive than, the results shown in [37] for the Margulis code, where $(q+1)$-bit quasi-uniform quantization surpassed 6-bit uniform quantization and paralleled floating-point results. Heuristic reasoning along the lines used above suggests that codes with higher variable-node degree would benefit even more from quasi-uniform quantization. However, it is important to point out that the gains can be code-dependent, so further performance studies are needed to confirm this.

Fig. 5: FER results of the OMS decoder on the rate-0.8, (1280,1024) AR4JA LDPC code on the AWGNC.
Fig. 6: FER results of the AMS decoder on the (4095,3358) LDPC code on the AWGNC, with attenuation factor 0.7.

Quasi-uniform quantization can be directly applied to modified MS decoders, such as AMS and OMS, with the possibility of significant reduction in the error floor. This was illustrated in [37] for the (640,192) QC-LDPC code with AMS decoding on the BSC and with OMS decoding on the AWGNC. In the case of AMS decoding, quasi-uniform quantization dramatically reduced the error floor relative to 4-bit uniform quantization, achieving the performance of the unsaturated AMS decoder. For OMS decoding with quasi-uniform quantization, the comparisons to 5-bit uniform quantization and unsaturated decoding were analogous.

Here we consider the performance of AMS and OMS decoding on longer codes with higher rates, specifically, the rate-0.82, (4095,3358) regular code and the rate-0.8, (1280,1024) irregular AR4JA code. Fig. 5 compares the quasi-uniform quantization method with uniform quantization in OMS decoding. The performance of the floating-point OMS decoder is also shown. With uniform quantization ranging from 5 bits to 9 bits, we can see that 8 bits suffice to closely approach the error-rate performance of floating-point OMS, whereas the (4+1)-bit quasi-uniform quantization actually surpasses the floating-point decoder. Fig. 6 shows a similar comparison for AMS decoding of MacKay's (4095,3358) LDPC code. The attenuation factor was set to the value 0.7, which was found empirically to give the best error-floor performance among integer multiples of 0.1 in the range [0.5, 0.9]. After normalization by this factor in every CN update, we found that the quantized values lost precision due to the coarse quantization step size. As a consequence, the floating-point AMS decoder had better performance than any of the quantized decoders, most noticeably in the waterfall region. Uniform quantization with 7 or more bits appears to eventually achieve floating-point performance at sufficiently low FER, as does quasi-uniform quantization.

V-C Simulation Results for Sum-Product Algorithm Decoding

We now consider the application of quasi-uniform quantization to SPA decoding. In our simulations of quantized SPA decoding, the input LLRs and the messages passed between CNs and VNs are quantized values. For convenience, the CN updates are carried out with floating-point arithmetic using the box-plus update rule in (8); the resulting message is then quantized appropriately.

Fig. 7: FER results of the SPA decoder on the (640,192) QC-LDPC code on the BSC.
Fig. 8: FER results of the approximated-SPA decoder on the (2640,1320) Margulis code on the AWGNC.

In [38], we illustrated the performance of quasi-uniform quantization with SPA decoding of the (640,192) QC-LDPC code on the BSC. We saw that, with LLR magnitudes scaled to 2, the (6+1)-bit quasi-uniform quantizer performs significantly better than 7-bit uniform quantization with the same step size. Its performance is comparable to that of the floating-point SPA decoder, which is superior to floating-point SPA decoding with exact LLR magnitudes when the channel error probability is small.

Here we consider the same code and channel, with the same step size, but with the number of quantization bits reduced by one. With LLR magnitudes scaled to 2, we simulated 6-bit through 10-bit uniform quantization, (5+1)-bit quasi-uniform quantization, and floating-point SPA decoding with LLR magnitudes scaled to 2 as well as with exact LLR magnitudes.

The simulation results, shown in Fig. 7, indicate that the (5+1)-bit quasi-uniform quantizer provides the best performance at small crossover probabilities. Compared to the results in [38], the performance of the (5+1)-bit quantizer is only slightly worse than that of the (6+1)-bit quantizer.

We note that the selection of the input LLR magnitude, here set to 2, is heuristic and code-dependent. The value 2 was found empirically to give much better performance than, for example, the value 1, but does not necessarily represent the optimal LLR magnitude scaling.

Results for SPA decoding of the (640,192) QC-LDPC code on the AWGN channel were also presented in [38]. There, the quasi-uniform quantizer was found to significantly improve upon 7-bit uniform quantization and to match the performance of the floating-point box-plus SPA decoder.

In [38], we found similar relative performance for the Margulis code on the AWGNC. The quasi-uniform quantizer outperformed 7-bit uniform quantization with the same step size parameters, and its performance equaled that of the “approximated box-plus SPA” decoder. The latter made use of a two-piece linear approximation, taken from [31], in computing the correction factor $s(x,y)$ for box-plus SPA decoding in (11), namely,

(15)

The approximated decoder ran about five times faster than the floating-point SPA decoder, with a performance penalty of less than 0.02 dB in the waterfall region.

In Fig. 8 we show further results for the Margulis code on the AWGNC. The plot shows the FER results for quasi-uniform quantization, as well as for 6-, 7-, and 8-bit uniform quantizers. We also evaluated the dual quantization SPA decoding proposed by Zhang et al. [21], where the $\phi$ function is quantized into a mapping table. Following the notation in [21], we considered dual quantization with parameters Q4.2/1.5, Q5.2/1.6, and Q6.2/1.7 for the 6-bit, 7-bit, and 8-bit quantizers, respectively. The Q$m.f$ quantizer uses uniform quantization to represent a signed fixed-point number with $m$ bits to the left of the radix point for the integer part and $f$ bits to the right of the radix point for the fractional part. For example, a Q4.2 quantizer has a uniform quantization step size of 0.25 and a range of $[-8, 7.75]$. Hence, all the quantization methods compared here have the same uniform step size of 0.25 when quantizing the input LLRs.
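For reference, a small sketch of Q$m.f$ fixed-point quantization as we interpret the notation of [21]; the helper name is hypothetical.

def quantize_qmf(x, m=4, f=2):
    # Qm.f: m bits for the signed integer part, f fractional bits;
    # Q4.2 has step 0.25 and range [-8.0, 7.75].
    step = 2.0 ** (-f)
    lo, hi = -2.0 ** (m - 1), 2.0 ** (m - 1) - step
    return min(max(round(x / step) * step, lo), hi)

print(quantize_qmf(3.14))  # 3.25
print(quantize_qmf(-9.0))  # -8.0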

We know that the saturation level is limited by the quantization step size, because it is desirable that the quantized value of $\phi(x)$ remain nonzero for all $x$ within the supported input range. In other words, in the dual quantization scheme, the saturation level has to match the resolution of the quantizer; otherwise the error-rate performance in both the waterfall region and the error-floor region will be significantly degraded. Based on error-rate simulations using a range of saturation levels for the dual quantization methods, we chose the saturation level to be 5.5, 7, and 8 for the 6-bit, 7-bit, and 8-bit dual quantizers, respectively. As the figure reveals, the quasi-uniform quantizer yields the best FER performance in the error-floor region.

Fig. 9: FER results of the approximated-SPA decoder on the rate-0.8, (1280,1024) AR4JA LDPC code on the AWGNC.
Fig. 10: FER results of the approximated-SPA decoder on the (4095,3358) LDPC code on the AWGNC.

We also evaluated the performance of quasi-uniform quantization in the context of decoding an irregular LDPC code, namely the rate-0.8, (1280,1024) AR4JA code. This protograph-based code has variable node degrees ranging from 1 to 6. Fig. 9 shows the FER obtained with approximated-SPA decoding and (5+1)-bit quasi-uniform quantization. Also shown are the results obtained with the floating-point decoder, as well as those produced by 8-bit uniform quantization with the same step size. The (5+1)-bit scheme was superior to both of these alternatives. The figure also includes two curves taken from [36], corresponding to an 8-bit quantized SPA decoder with modified VN update rules that were designed specifically for this code, as well as a “fully-optimized” 8-bit decoder with more sophisticated VN/CN update rules. The (5+1)-bit quasi-uniform quantizer's performance surpassed that of the former, but it could not match that of the fully-optimized 8-bit decoder.

V-D Effect of Iteration Limits

Figs. 4–10 show that $(q+1)$-bit quasi-uniform quantization can provide attractive error-floor performance, sometimes even better than that of the double-precision floating-point box-plus SPA decoder. In generating these results, we observed from the simulation data that the floating-point SPA generally requires more iterations to decode a codeword than the quasi-uniform quantized SPA, especially in the high-SNR region. Since the maximum number of iterations was set to 200 in our simulations, the faster convergence of the quasi-uniform quantized SPA allowed it to outperform the floating-point SPA scheme. The convergence properties of the quasi-uniform quantized SPA decoder appear to derive from its use of non-uniform, exponentially growing step sizes. From the theoretical analysis discussed in Section III, we know that the exponential growth rate of correct messages is larger than that of incorrect messages. We might expect that, with a properly designed quasi-uniform quantizer, the correct messages reach the higher magnitude levels earlier than the incorrect messages, so that incorrect messages are more likely to be quantized to lower magnitude levels. Hence, the correct messages can “overcome” the incorrect messages more rapidly, allowing the decoder to converge to a codeword after fewer iterations.

In Fig. 10, we explore the effect of limiting the number of iterations in approximated-SPA decoding of MacKay's rate-0.82, (4095,3358) LDPC code. With the maximum number of iterations set to 200, we show the results for 6-bit and 10-bit uniform quantizers, the quasi-uniform quantizer, and the floating-point decoder. We also compare the performance of quasi-uniform quantization and the floating-point decoder when the maximum number of iterations is raised to 10K and even further to 100K.

With a limit of 200 iterations, this code manifested a high error floor with floating-point SPA decoding. The error floor was lower when the number of iterations could go as high as 10K, and dropped even further when up to 100K iterations were allowed. However, even in the latter case, the FER was only slightly lower than that achieved with the quasi-uniform quantizer using no more than 200 iterations. The performance of the quasi-uniform quantizer continued to improve as the limit was raised to 10K and then to 100K. These results are consistent with the intuition suggested by the theoretical analysis.

VI Conclusion

Trapping sets and other error-prone substructures are known to influence the error-rate performance of LDPC codes under iterative message-passing decoding. In this paper, we have shown that the use of conventional uniform quantization in iterative MP decoding can be a significant factor contributing to the error floor phenomenon in LDPC code performance. An analysis of iterative MP decoding in an idealized setting suggests that decoder message saturation plays a key role in the occurrence of errors on small trapping sets, leading to the observed error floor behaviors. To address this problem, we proposed a novel quasi-uniform quantization method that effectively extends the dynamic range of the quantizer. Without modifying the CN and VN update rules or adding extra stages to standard iterative decoding algorithms, the use of this quantizer with various iterative MP decoding algorithms on the BSC and AWGNC was shown, through simulation, to significantly lower the error floors of several well-studied LDPC codes with essentially no increase in decoding complexity.

Appendix A Proof of Theorem 1

Proof:

Assume VN $v \in V_D$ is $d$-separated and the corresponding computation tree is $T_d(v)$. Let $c \in D$ be a neighboring degree-one CN of $v$ in $T$. From the separation assumption and the assumed correctness of channel messages for VNs outside $T$, all descendants of $c$ in $T_d(v)$ receive correct initial messages from the BSC. Like the LLRs of the BSC outputs, all the initial messages in the decoder, $L(y_i)$, $1 \le i \le n$, have the same magnitude. Denote the subtree starting with CN $c$ by $T_d(c)$. Using the VN/CN update rules of the MS decoder, we analyze the messages sent from the descendants of $c$ in $T_d(c)$. First, according to the CN update rule described in (3), all messages received by a VN from its children CNs in $T_d(c)$ must have the same sign as the message received from the channel by this VN, because all the messages passed in $T_d(c)$ are correct. Therefore, the outgoing message from any VN $v'$ to its parent CN $c'$ in $T_d(c)$ satisfies the following equality

(16)   $\left| m_{v' \to c'} \right| = \left| L(y_{v'}) \right| + \displaystyle\sum_{c'' \in N(v') \setminus \{c'\}} \left| m_{c'' \to v'} \right|.$

Moreover, since the LDPC code considered is variable-regular and all the channel messages from the BSC have the same magnitude, all incoming messages received by a VN from its children CNs in $T_d(c)$ must have the same magnitude as well. Therefore, all the messages sent from VNs in the same level of the computation tree have the same magnitude. Let $\mu_k$ be the magnitude of the messages sent by the VNs whose shortest path to a leaf VN contains $k$ CNs in $T_d(c)$; in particular, $\mu_0$ is the magnitude of messages sent by leaf VNs, as well as the magnitude of channel inputs. The discussion above implies that

(17)   $\mu_k = \mu_0 + (d_v - 1)\,\mu_{k-1},$

where $d_v$ is the variable node degree. Hence, it can be seen that the magnitudes of messages sent towards the root CN of the computation tree grow exponentially, with $d_v - 1$ as the base, in every upper VN level; in particular, $\mu_k \ge (d_v - 1)^k \mu_0$. Therefore, for $d_v \ge 3$, the magnitude of the message sent in the $d$-th iteration from $c$ to its parent node $v$, the $d$-separated root VN of $T_d(v)$, is greater than $(d_v - 1)^{d-1}\mu_0$.
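A quick numeric check of the recursion (17) under the stated assumptions (unit channel magnitudes and $d_v = 3$):

d_v, mu = 3, [1.0]                         # mu_0 = 1 for unit-magnitude LLRs
for k in range(1, 11):
    mu.append(1.0 + (d_v - 1) * mu[-1])    # Eq. (17)
print(mu)  # 1, 3, 7, 15, ..., i.e., mu_k = 2**(k+1) - 1 >= (d_v - 1)**k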

Now, let us look at a subtree of $T_d(v)$ that has as its root a child CN $c' \ne c$ of the root $v$. Denote this subtree by $T_d(c')$. We assume that the message received by $v$ from $c'$ after $d$ iterations has a different sign than the message received from $c$; otherwise, $v$ would already have been corrected. Now consider any subtree of $T_d(c')$ that has as its root a VN $u \in V(T)$ and contains $a$ levels of VNs. We denote such a tree by $T_a(u)$. The subtree $T_a(u)$ must include at least one CN from the set $D$. To see this, recall that the induced subgraph $T$ of the trapping set is connected. Since there are $a$ VNs in the trapping set, it follows that any two VNs in the trapping set can be connected by a path containing fewer than $a$ CNs. Therefore, $T_a(u)$ actually includes all the CNs and VNs in the induced subgraph of the trapping set, in particular a CN from $D$. Of course, for most trapping sets, a computation tree with far fewer than $a$ levels already includes a CN from $D$.

Now, consider $T_a(u)$ as a “super-node” with its leaf VNs as children. Since $T_a(u)$ includes a CN from $D$, at least one of these children VNs has the property that all of its descendants receive correct messages from the channel. This means that at least one of the incorrect messages going into the super-node is canceled out by one or more such correct messages. So if the output message, $m_s$, of such a super-node is incorrect, its magnitude satisfies

(18)   $|m_s| \le (d_v - 2)\,\mu' + N_a\,\mu_0,$

where $\mu'$ is the largest magnitude of all incoming incorrect messages, $N_a$ denotes the number of VNs in the $a$-level subtree $T_a(u)$, and the second term is an upper bound on the sum of the channel input LLRs to all of the VNs in the $a$-level subtree. Note that the leaf VNs of $T_a(u)$ are not necessarily the leaf VNs of $T_d(v)$. Thus, we can upper bound the magnitude of the incorrect message $m'$ sent from $c'$ to $v$ after $d$ iterations by

(19)   $|m'| \le C \left[ (d_v - 2)\,(d_v - 1)^{a-1} \right]^{\lceil d/a \rceil} \mu_0,$

where $\lceil d/a \rceil$ is the smallest integer greater than or equal to $d/a$, the factor $(d_v - 1)^{a-1}$ accounts for message growth across the $a$ VN levels within each super-node, and $C$ is a constant, independent of $d$, that accounts for the accumulated channel LLR terms. The upper bound in (19) is extremely loose, and for most small trapping sets the actual incorrect message magnitudes are generally much smaller.

Therefore, by taking the logarithms of $\mu_d$ in (17) and $|m'|$ in (19), respectively, we have

(20)   $\log \mu_d \ge \log \mu_0 + d\,\log(d_v - 1)$

and

(21)   $\log |m'| \le \log C + \log \mu_0 + \left\lceil \dfrac{d}{a} \right\rceil \left[ \log(d_v - 2) + (a - 1)\log(d_v - 1) \right].$

Note that the first term in (20) and the first two terms in (A) are constants and independent of the number of iterations .

Comparing the growth rates in (20) and (21), if $l$ is large enough and there is no limitation imposed on the magnitude of messages, it is easy to see that $\Lambda_l$ would be greater than $\lambda_l$ multiplied by any constant. This means that the correct messages coming from outside of the trapping set to VNs in the trapping set through their neighboring degree-one CNs will eventually have greater magnitude than the sum of incorrect messages from the other neighboring CNs. Hence, all the erroneous VNs in the trapping set that are adjacent to degree-one CNs will be corrected. Since, by definition, an absolute trapping set does not contain a stopping set, the remaining erroneous VNs must form a smaller absolute trapping set. Therefore, we can use the same argument to show that, as the number of iterations continues to grow, the correct messages eventually become large enough to correct all erroneous VNs.
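As a rough numerical illustration of this comparison (a toy model of ours, not the exact bounds in (20) and (21)): suppose correct messages gain a factor of $d_v - 1$ every iteration, while incorrect messages forfeit one such factor each time they traverse a $k$-level super-node containing a degree-one CN. The ratio of correct to incorrect magnitudes then diverges:

    # Toy comparison of correct vs. incorrect message growth (our own model):
    # correct ~ (d_v - 1)^(l - 1); incorrect loses one factor per super-node
    # crossing, of which there are ceil(l / k) after l iterations.
    import math

    d_v, k, L0 = 3, 4, 1.0              # assumed illustrative parameters
    for l in range(4, 41, 4):
        correct = L0 * (d_v - 1) ** (l - 1)
        incorrect = L0 * (d_v - 1) ** (l - math.ceil(l / k))
        print(l, correct / incorrect)   # ratio grows without bound in l

Under these assumptions the ratio equals $(d_v - 1)^{\lceil l/k \rceil - 1}$, which grows without bound, mirroring the argument above.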

Now, we show that the proof technique above can be extended to the AWGNC. Define $L_{\min}$ and $L_{\max}$ to be the minimum and maximum magnitudes, respectively, of the input LLRs from the AWGNC. In this setting, the bounds on $\Lambda_l$ and $\lambda_l$ corresponding to those in (20) and (21) take the form

(22)

and

(23)

Since the quantities $L_{\min}$ and $L_{\max}$ are constant and do not change as $l$ increases, we can conclude, as we did for the BSC, that the correct messages from outside the trapping set will eventually have greater magnitude than the incorrect messages from within the trapping set. Therefore, all of the VNs will eventually be correctly decoded. \qed

Appendix B Proof of Corollary 2

Proof:

We first consider AMS decoding. Referring to the proof of Theorem 1 for the BSC case, we can replace the growth base $d_v - 1$ in (20) and (21) by $\alpha(d_v - 1)$, where $\alpha$ is the attenuation factor. In practice, we would always choose $\alpha$ such that $\alpha(d_v - 1)$ is greater than 1; otherwise, the error-correction performance of the AMS decoder would be inferior to that of the MS decoder. Reasoning similar to that used in the proof of the theorem then leads to the desired conclusion. For the AWGNC case, we make the corresponding changes in (22) and (23) and argue similarly.
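As a worked example of this condition (under the substitution just described, which is our reading of the attenuated update):

$\alpha (d_v - 1) > 1 \iff \alpha > \frac{1}{d_v - 1},$

so, e.g., for a variable-degree-3 code any attenuation factor $\alpha > 1/2$ preserves the exponential growth of the correct messages, while $\alpha \le 1/2$ does not.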

For the OMS decoder, the proof follows from the proof of Theorem 4 in Appendix D; there, we simply replace the quantity $(d_c - 2)\log 2$ by the offset $\beta$. \qed

Appendix C Proof of Lemma 3

Proof:

The first statement, regarding the relationship between the sign and magnitude of the corresponding CN messages in SPA and MS decoding, is proved in [32], [27]. For completeness, we include here an elementary alternative proof.

First note that if $L_1 = 0$ or $L_2 = 0$, then $L_1 \boxplus L_2 = 0$. Now, if $L_1$ and $L_2$ are nonzero and have the same sign (i.e., $\operatorname{sgn}(L_1)\operatorname{sgn}(L_2) = 1$), then $L_1 \boxplus L_2 > 0$, and hence $\operatorname{sgn}(L_1 \boxplus L_2) = \operatorname{sgn}(L_1)\operatorname{sgn}(L_2)$. Hence, we can see from (9) that the first statement is true if the inequality $L_1 \boxplus L_2 \le \min(L_1, L_2)$ holds for any positive real values $L_1$ and $L_2$. Without loss of generality, if we assume $L_1 \ge L_2 > 0$, then the following inequalities are equivalent:

$\log \frac{1 + e^{L_1 + L_2}}{e^{L_1} + e^{L_2}} \le L_2,$

$1 + e^{L_1 + L_2} \le e^{L_1 + L_2} + e^{2 L_2},$

$1 \le e^{2 L_2}.$

Since $L_2 > 0$ and $e^x > 1$ for all $x > 0$, the final inequality holds. Hence, the first statement is proved.

To prove the second statement, note that

$L_1 \boxplus L_2 = \operatorname{sgn}(L_1)\operatorname{sgn}(L_2) \min(|L_1|, |L_2|) + \log\left(1 + e^{-|L_1 + L_2|}\right) - \log\left(1 + e^{-|L_1 - L_2|}\right).$

When $\operatorname{sgn}(L_1)\operatorname{sgn}(L_2) = 1$, we have $|L_1 + L_2| \ge |L_1 - L_2|$, so the difference of the two logarithmic terms lies in $(-\log 2, 0]$, and therefore $\min(|L_1|, |L_2|) - \log 2 < |L_1 \boxplus L_2| \le \min(|L_1|, |L_2|)$. When $\operatorname{sgn}(L_1)\operatorname{sgn}(L_2) = -1$, a similar line of reasoning shows that the difference of the logarithmic terms lies in $[0, \log 2)$ and, again, $|L_1 \boxplus L_2| \ge \min(|L_1|, |L_2|) - \log 2$. \qed
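These pairwise properties, and the resulting $(d_c - 2)\log 2$ gap used in Appendix D, are easy to verify numerically. The following Python sketch (ours, for illustration; it assumes the standard pairwise box-plus formula $L_1 \boxplus L_2 = \log\frac{1 + e^{L_1 + L_2}}{e^{L_1} + e^{L_2}}$) checks both statements of the lemma, and the multi-input gap, on random inputs:

    # Numerical sanity check of Lemma 3 (illustrative sketch, not from the paper).
    import math, random

    def boxplus(L1, L2):
        # pairwise box-plus of two LLRs
        return math.log((1.0 + math.exp(L1 + L2)) / (math.exp(L1) + math.exp(L2)))

    random.seed(0)
    for _ in range(10000):
        L1, L2 = random.uniform(-8, 8), random.uniform(-8, 8)
        b = boxplus(L1, L2)
        ms = math.copysign(min(abs(L1), abs(L2)), L1 * L2)   # MS check-node rule
        assert b * ms >= -1e-9                               # same sign (statement 1)
        assert abs(b) <= min(abs(L1), abs(L2)) + 1e-9        # |SPA| <= |MS| (statement 1)
        assert abs(b) >= min(abs(L1), abs(L2)) - math.log(2) - 1e-9   # statement 2

    # Degree-d_c check node: d_c - 1 = 5 inputs combined by d_c - 2 = 4 pairwise ops.
    for _ in range(1000):
        msgs = [random.uniform(-8, 8) for _ in range(5)]
        out = msgs[0]
        for L in msgs[1:]:
            out = boxplus(out, L)
        gap = min(abs(L) for L in msgs) - abs(out)
        assert -1e-9 <= gap <= 4 * math.log(2) + 1e-9        # gap <= (d_c - 2) log 2
    print("all checks passed")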

Appendix D Proof of Theorem 4

Proof:

From Lemma 3, we know that a CN message in SPA decoding has the same sign as the corresponding CN message in MS decoding, and that the magnitude of the former is less than or equal to that of the latter. To compute the output for a CN of degree $d_c$, the box-plus SPA uses the pairwise box-plus operation (10) at most $d_c - 2$ times. Hence, the difference between the output messages of the SPA and the MS algorithm is upper bounded by $(d_c - 2)\log 2$.

By applying an approach similar to that used in the proof of Theorem 1, we can lower bound the magnitude of messages in SPA decoding as follows:

$L_k \ge L_0 + (d_v - 1)\left(L_{k-1} - (d_c - 2)\log 2\right).$

Since all input messages to the decoder from the BSC have the same magnitude, if we scale the magnitudes of all initial messages such that

$L_0 > (d_v - 1)(d_c - 2)\log 2,$   (24)

then the magnitudes of messages sent towards $v$ in the computation tree grow exponentially in the number of iterations, with base $d_v - 1$. Hence, using the same reasoning as in the proof of Theorem 1, it can be shown that, if $l$ is large enough and there is no limit on the magnitudes of messages, the correct messages from outside the trapping set eventually overcome the incorrect messages passed within the trapping set, thereby correcting all erroneous VNs in the trapping set.
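The effect of the scaling condition (24) can be seen in a few lines of Python (an illustrative sketch under the pessimistic recurrence reconstructed above; the degrees are assumed values):

    # Illustrative check of the scaling condition (24): once L0 exceeds
    # (d_v - 1)(d_c - 2) log 2, the SPA lower-bound recurrence
    # L_k = L0 + (d_v - 1) * (L_{k-1} - (d_c - 2) * log(2)) keeps growing.
    import math

    d_v, d_c = 3, 6                          # assumed degrees for illustration
    loss = (d_c - 2) * math.log(2)           # per-CN box-plus loss bound
    L0 = 1.1 * (d_v - 1) * loss              # scaled so that (24) holds
    L = L0
    for k in range(1, 11):
        L = L0 + (d_v - 1) * (L - loss)
        print(k, L)                          # grows roughly like (d_v - 1)^k

If $L_0$ is instead set well below the threshold (e.g., below $(d_c - 2)\log 2$ in this model), the same recurrence decreases rather than grows, which is why the scaling step is needed before the Theorem 1 argument can be reused.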

The extension to the AWGNC case is analogous to that used in Theorem 1. Let $L_{\min}$ denote the minimum magnitude of all input LLRs from the AWGNC, and linearly scale the magnitudes of all the input messages so that inequality (24) is satisfied with $L_0$ replaced by $L_{\min}$. Then, reasoning as in the BSC case above, we can show that the magnitudes of correct messages outside the trapping set still grow exponentially with $d_v - 1$ as the base, and eventually they correct all erroneous VNs in the trapping set. \qed

Acknowledgment

The authors would like to thank Yang Han and William Ryan for providing the parity check matrix of the (640,192) QC LDPC code, Brian Butler for helpful discussions, and the anonymous reviewers for their numerous and detailed suggestions that helped to improve this paper.

References

  • [1] R. G. Gallager, “Low-density parity-check codes,” IRE Trans. Inform. Theory, vol. 8, pp. 21–28, Jan. 1962.
  • [2] D. J. MacKay and R. M. Neal, “Near Shannon-limit performance of low-density parity check codes,” Electron. Lett., vol. 33, pp. 457–458, Mar. 1997.
  • [3] C. Di, D. Proietti, E. Telatar, T. Richardson, and R. Urbanke, “Finite length analysis of low-density parity-check codes on the binary erasure channel,” IEEE Trans. Inform. Theory, vol. 48, no. 6, pp. 1570–1579, Jun. 2002.
  • [4] D. MacKay and M. Postol, “Weakness of Margulis and Ramanujan-Margulis low-density parity check codes,” Electron. Notes Theor. Comp. Sci., vol. 74, 2003.
  • [5] T. Richardson, “Error-floors of LDPC codes,” in Proc. 41st Annual Allerton Conf. Communication, Control, and Computing, Monticello, IL, Oct. 1–3, 2003, pp. 1426–1435.
  • [6] L. Dolecek, Z. Zhang, V. Anantharam, M. Wainwright, and B. Nikolic, “Analysis of absorbing sets and fully absorbing sets of array-based LDPC codes,” IEEE Trans. Inform. Theory, vol. 56, no. 1, pp. 181–201, Jan. 2010.
  • [7] P. O. Vontobel and R. Koetter, “Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes,” CoRR, arxiv.org/abs/cs.IT/0512078.
  • [8] D. Divsalar and C. Jones, “Protograph based low error floor LDPC coded modulation,” in Proc. IEEE Military Commun. Conf., vol. 1, Atlantic City, NJ, Oct. 2005, pp. 378–385.
  • [9] J. Lu and J. M. F. Moura, “Structured LDPC codes for high-density recording: large girth and low error floor,” IEEE Trans. Magnetics, vol. 42, pp. 208–213, Feb. 2006.
  • [10] S. K. Chilappagari, S. Sankaranarayanan, and B. Vasic, “Error floors of LDPC codes on the binary symmetric channel,” in Proc. IEEE Int. Conf. Commun., Istanbul, Turkey, Jun. 2006, pp. 1089–1094.
  • [11] S. Laendner and O. Milenkovic, “Algorithmic and combinatorial analysis of trapping sets in structured LDPC codes,” in Proc. 2005 Int. Conf. Wireless Networks, Commun., Mobile Comp., Maui, HI, Jun. 2005, pp. 630–635.
  • [12] V. Savin, “Iterative LDPC decoding using neighborhood reliabilities,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), Nice, France, Jun. 2007, pp. 221–225.
  • [13] A. Casado, M. Griot, and R. Wesel, “Informed dynamic scheduling for belief-propagation decoding of LDPC codes,” in Proc. IEEE Int. Conf. Commun., Glasgow, UK, Jun. 2007, pp. 932–937.
  • [14] E. Cavus and B. Daneshrad, “A performance improvement and error floor avoidance technique for belief propagation decoding of LDPC codes,” in Proc. IEEE Int. Symp. Pers., Indoor and Mobile Radio Comm., Berlin, Germany, Sept. 2005, pp. 2386–2390.
  • [15] G. Kyung and C. Wang, “Exhaustive search for small fully absorbing sets and the corresponding low error-floor decoder,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), Austin, TX, Jul. 2010, pp. 739–743.
  • [16] N. Varnica, M. P. C. Fossorier, and A. Kavcic, “Augmented belief propagation decoding of low-density parity-check codes,” IEEE Trans. Commun., vol. 55, no. 7, pp. 1308–1317, Jul. 2007.
  • [17] Y. Han and W. E. Ryan, “Low-floor decoders for LDPC codes,” IEEE Trans. Commun., vol. 57, no. 6, pp. 1663–1673, Jun. 2009.
  • [18] Z. Zhang, L. Dolecek, B. Nikolic, V. Anantharam, and M. Wainwright, “Lowering LDPC error floors by postprocessing,” in Proc. IEEE Glob. Telecom. Conf., New Orleans, LA, Nov.-Dec. 2008, pp. 1–6.
  • [19] J. Zhao, F. Zarkeshvari, and A. Banihashemi, “On implementation of min-sum algorithm and its modifications for decoding LDPC codes,” IEEE Trans. Commun., vol. 53, no. 4, pp. 549–554, Apr. 2005.
  • [20] T. Zhang, Z. Wang, and K. Parhi, “On finite precision implementation of LDPC codes decoder,” in Proc. IEEE ISCAS, Sydney, Australia, May 2001, pp. 201–205.
  • [21] Z. Zhang, L. Dolecek, B. Nikolic, V. Anantharam, and M. Wainwright, “Design of LDPC decoders for improved low error rate performance: quantization and algorithm choices,” IEEE Trans. Wireless Commun., vol. 8, no. 11, pp. 3258–3268, Nov. 2009.
  • [22] Z. Zhang, “Design of LDPC decoders for improved low error rate performance,” Ph.D. dissertation, Univ. of California at Berkeley, 2009.
  • [23] B. Butler and P. Siegel, “Error floor approximation for LDPC codes in the AWGN channel,” in Proc. 49th Annual Allerton Conf. Communication, Control, and Computing, Monticello, IL, Sep. 2011, pp. 204–211.
  • [24] L. Dolecek, Z. Zhang, M. Wainwright, and V. Anantharam, “Evaluation of the low frame error rate performance of LDPC codes using importance sampling,” in Proc. IEEE Inform. Theory Workshop (ITW), Lake Tahoe, CA, Sep. 2007, pp. 202–207.
  • [25] B. Frey, R. Koetter, and A. Vardy, “Signal-space characterization of iterative decoding,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 766–781, Feb. 2001.
  • [26] S. K. Planjery, D. Declercq, S. K. Chilappagari, and B. Vasic, “Multilevel decoders surpassing belief propagation on the binary symmetric channel,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), Austin, TX, Jul. 2010, pp. 769–773.
  • [27] J. Chen, A. Dholakia, E. Eleftheriou, M. Fossorier, and X. Hu, “Reduced-complexity decoding of LDPC codes,” IEEE Trans. Commun., vol. 53, no. 8, pp. 1288–1299, Aug. 2005.
  • [28] IEEE Standard for Floating-Point Arithmetic, IEEE Standard 754-2008, Aug. 29, 2008.
  • [29] B. Butler and P. Siegel, “Numerical problems of belief propagation decoders and solutions,” in Proc. IEEE Glob. Telecom. Conf., Anaheim, CA, Dec. 2012, pp. 3201–3207.
  • [30] X. Hu, E. Eleftheriou, D. Arnold, and A. Dholakia, “Efficient implementations of the sum-product algorithm for decoding LDPC codes,” in Proc. IEEE Global Telecommun. Conf., vol. 2, San Antonio, TX, Nov. 2001, pp. 1036–1036E.
  • [31] G. Richter, G. Schmidt, M. Bossert, and E. Costa, “Optimization of a reduced-complexity decoding algorithm for LDPC codes by density evolution,” in Proc. IEEE Int. Conf. Commun., vol. 1, Seoul, Korea, May 2005, pp. 642–646.
  • [32] J. Chen and M. Fossorier, “Near optimum universal belief propagation based decoding of low-density parity check codes,” IEEE Trans. Commun., vol. 50, no. 3, pp. 406–414, Mar. 2002.
  • [33] W. E. Ryan and S. Lin, Channel Codes: Classical and Modern. Cambridge, U.K.: Cambridge Univ. Press, 2009.
  • [34] X. Zhang and P. H. Siegel, “Efficient algorithms to find all small error-prone substructures in LDPC codes,” in Proc. IEEE Glob. Telecom. Conf., Houston, TX, Dec. 2011, pp. 1–6.
  • [35] D. J. C. MacKay, Encyclopedia of Sparse Graph Codes. [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html
  • [36] J. Hamkins, “Performance of low-density parity-check coded modulation,” IPN Progress Report 42-184, Feb. 2011. [Online]. Available: http://ipnpr.jpl.nasa.gov/progress_report/42-184/184D.pdf
  • [37] X. Zhang and P. Siegel, “Quantized min-sum decoders with low error floor for LDPC codes,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), Cambridge, MA, July 2–5, 2012, pp. 2871–2875.
  • [38] X. Zhang and P. Siegel, “Will the real error floor please stand up?” in Proc. IEEE Int. Conf. Signal Process. Commun. (SPCOM), Bangalore, India, July 22–25, 2012, pp. 1–5.