Enhancing the Error Correction of Finite Alphabet Iterative Decoders via Adaptive Decimation

Shiva Kumar Planjery, Bane Vasić
Dept. of Electrical and Computer Eng.
University of Arizona
Tucson, AZ 85721, U.S.A.
Email: {shivap,vasic}@ece.arizona.edu
David Declercq
ETIS
ENSEA/UCP/CNRS UMR 8051
95014 Cergy-Pontoise, France
Email: declercq@ensea.fr
Abstract

Finite alphabet iterative decoders (FAIDs) for LDPC codes were recently shown to be capable of surpassing the belief propagation (BP) decoder in the error floor region on the binary symmetric channel (BSC). More recently, the technique of decimation, which involves fixing the values of certain bits during decoding, was proposed for FAIDs in order to make them more amenable to analysis while maintaining their good performance. In this paper, we show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The proposed adaptive decimation scheme adds only marginal complexity but can significantly improve the slope of the error floor performance of a particular FAID. We describe the adaptive decimation scheme for 7-level FAIDs, which propagate only 3-bit messages, and provide numerical results for column-weight-three codes. Analysis suggests that the failures of the new decoders are linked to stopping sets of the code.

I Introduction

The error floor problem of low-density parity-check (LDPC) codes under iterative decoding is now a well-known problem, where the codes suffer from an abrupt degradation in their error-rate performance in spite of having good minimum distance. The problem has been attributed to the presence of harmful configurations generically termed as trapping sets [1] present in the Tanner graph, which cause the iterative decoder to fail for some low-noise configurations, thereby reducing its guaranteed error correction capability to an extent that is far from the limits of maximum likelihood decoding. More importantly for the BSC, the slope of the error floor is governed by the guaranteed correction capability [2].

Recently, a new class of finite alphabet iterative decoders (FAIDs), which have much lower complexity than the BP decoder, was proposed for LDPC codes on the BSC [3], [4] and shown to be capable of outperforming BP in the error floor. Numerical results on several column-weight-three codes showed that there exist 7-level FAIDs requiring only 3 bits of precision that can achieve a better guaranteed error correction ability than BP, thereby surpassing it in the error floor region. However, analyzing these decoders in order to provide performance guarantees proved to be difficult.

More recently, decimation-enhanced FAIDs (DFAIDs) [5] were proposed for the BSC in order to make FAIDs more amenable to analysis while maintaining their good performance. The technique of decimation involves guessing the values of certain bits and fixing them to these values while continuing to estimate the remaining bits (see [5] for references). In [5], the decimation was carried out by the FAID based on the messages passed during some initial iterations, and a decimation scheme was provided such that a 7-level DFAID matched the good performance of the original 7-level FAID while being analyzable at the same time.

In this paper, we show how decimation can be used adaptively to further increase the guaranteed error correction capability of FAIDs. The adaptive scheme has only marginally increased complexity, but can significantly improve the error-rate performance compared to the FAIDs. We specifically focus on decoders that propagate only 3-bit messages and column-weight three codes since these enable simple implementations and thus have high practical value. We also provide some analysis of the decoders which suggests that the failures are linked to stopping sets of the code. Numerical results are also provided to validate the efficacy of the proposed scheme.

II Preliminaries

Let $G$ denote the Tanner graph of an $(N, K)$ binary LDPC code $\mathcal{C}$, with $V$ the set of variable nodes and $C$ the set of check nodes. $E$ is the set of edges in $G$. A code is said to be $d_v$-left-regular if all variable nodes in $V$ of graph $G$ have the same degree $d_v$. The degree of a node is the number of its neighbors. $d_{min}$ is the minimum distance of the code $\mathcal{C}$.

A trapping set is a non-empty set of variable nodes in $G$ that are not eventually corrected by the decoder [1].

A multilevel FAID $\mathcal{D}$ is a 4-tuple given by $\mathcal{D} = (\mathcal{M}, \mathcal{Y}, \Phi_v, \Phi_c)$ [4]. The messages are levels confined to an alphabet $\mathcal{M}$ of size $2s+1$ defined as $\mathcal{M} = \{-L_s, \ldots, -L_1, 0, L_1, \ldots, L_s\}$, where $L_i > 0$ and $L_i > L_j$ for any $i > j$. $\mathcal{Y}$ denotes the set of possible channel values that are input to the decoder. For the case of the BSC, $\mathcal{Y} = \{\pm C\}$ with $C > 0$, and for each variable node $v_i$, the channel value $y_i \in \mathcal{Y}$ is determined by $y_i = (-1)^{r_i} C$, where $r_i$ is the bit received from the BSC at $v_i$.

Let $m_1, \ldots, m_{l-1}$ denote the incoming messages to a node of degree $l$, excluding the edge on which the outgoing message is computed. The update function $\Phi_c: \mathcal{M}^{d_c - 1} \to \mathcal{M}$ is used at a check node with degree $d_c$ and is defined as

$$\Phi_c(m_1, \ldots, m_{d_c-1}) = \left(\prod_{j=1}^{d_c-1} \mathrm{sgn}(m_j)\right) \min_{j \in \{1, \ldots, d_c-1\}} |m_j|,$$

where sgn denotes the standard signum function.
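As an illustration, the following Python sketch implements this check node rule for a 7-level alphabet; the integer encoding of the levels ($-3, \ldots, 3$ standing for $-L_3, \ldots, L_3$) is an assumption made here for convenience and is not a convention from the paper.

```python
def sgn(x: int) -> int:
    # Standard signum; its value at 0 is immaterial here because min|m_j| is then 0.
    return (x > 0) - (x < 0)

def phi_c(incoming):
    """Check node update: product of the signs times the smallest magnitude."""
    sign = 1
    for m in incoming:
        sign *= sgn(m)
    return sign * min(abs(m) for m in incoming)

# Example: incoming messages (L2, -L1, L3) on a degree-4 check -> outgoing -L1.
assert phi_c([2, -1, 3]) == -1
```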

The update function $\Phi_v: \mathcal{Y} \times \mathcal{M}^{d_v - 1} \to \mathcal{M}$ is a symmetric rule used at a variable node with degree $d_v$ and is defined as

$$\Phi_v(y_i, m_1, \ldots, m_{d_v-1}) = Q\!\left(\sum_{j=1}^{d_v-1} m_j + \omega_i \cdot y_i\right).$$

The function $Q$ is defined based on a threshold set $T = \{T_i : 1 \le i \le s+1\}$ where $T_i > 0$, $T_i > T_j$ if $i > j$, and $T_{s+1} = \infty$, such that $Q(x) = \mathrm{sgn}(x) L_i$ if $T_i \le |x| < T_{i+1}$, and $Q(x) = 0$ otherwise. The weight $\omega_i$ is computed at node $v_i$ using a symmetric function $\Omega$. Based on this, $\Phi_v$ can be described as a linear-threshold (LT) or non-linear-threshold (NLT) function. If $\Omega$ is a constant, then it is an LT function, else it is an NLT function. $\Phi_v$ can also be described as a look-up table (LUT) as shown in Table I for $y_i = -C$ (the table for $y_i = +C$ can be obtained from symmetry). This rule is an NLT rule and will be one of the rules used in the proposed decimation scheme.
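For concreteness, the following sketch shows the LT form of such a rule in Python. The threshold values, the channel weight, and the encoding of $\pm C$ as $\pm 1$ below are illustrative assumptions only; in particular they are not the parameters of the NLT rule of Table I.

```python
LEVELS = (1, 2, 3)  # stand for L1 < L2 < L3 in the 7-level alphabet

def quantize(x, thresholds):
    """Q(x): return sgn(x)*L_i when T_i <= |x| < T_{i+1} (with T_4 = infinity), else 0."""
    mag = 0
    for level, t in zip(LEVELS, thresholds):
        if abs(x) >= t:
            mag = level
    if mag == 0:
        return 0
    return mag if x > 0 else -mag

def phi_v_lt(y, incoming, thresholds=(1.0, 3.0, 5.0), weight=1.5):
    """LT variable node rule: quantize(sum of incoming messages + weight * channel value)."""
    return quantize(sum(incoming) + weight * y, thresholds)

# Channel value -C (encoded as y = -1) and incoming messages (L3, L1):
print(phi_v_lt(-1, [3, 1]))  # 3 + 1 - 1.5 = 2.5, which quantizes to L1 here
```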

TABLE I: $\Phi_v$ of a 7-level FAID for $y_i = -C$ on a code with $d_v = 3$ (the table for $y_i = +C$ can be obtained from symmetry).

Let $\mathcal{T}_i^k(\mathcal{D})$ denote the computation tree of graph $G$ corresponding to a decoder $\mathcal{D}$ enumerated for $k$ iterations with variable node $v_i \in V$ as its root. A node $w$ is a descendant of a node $u$ if there exists a path starting from node $w$ to the root $v_i$ that traverses through node $u$.

Definition 1 (Isolation assumption)

Let $H$ be a subgraph of $G$ induced by $P \subseteq V$ with check node set $W$. The computation tree $\mathcal{T}_i^k(\mathcal{D})$ with the root $v_i \in P$ is said to be isolated if and only if for any node $u \notin P \cup W$ in $\mathcal{T}_i^k(\mathcal{D})$, $u$ does not have any descendant belonging to $P \cup W$. If $\mathcal{T}_i^k(\mathcal{D})$ is isolated for every $v_i \in P$, then the subgraph $H$ is said to satisfy the isolation assumption in $G$ for $k$ iterations.

Remark: The above definition is a revised version of the one given in [3].

The critical number of a FAID $\mathcal{D}$ on a subgraph $H$ is the smallest number of errors for which $\mathcal{D}$ fails on $H$ under the isolation assumption.

Let $N(u)$ denote the set of neighbors of a node $u$ in the graph $G$ and let $N(U)$ denote the set of neighbors of all nodes $u \in U$. Let $m_k(v_i)$ denote the set of outgoing messages from $v_i$ to all its neighbors in the $k$-th iteration. Let $\hat{x}_i$ denote the bit associated with a variable node $v_i$ that is decided by the iterative decoder at the end of the $k$-th iteration.

III Adaptive Decimation-Enhanced FAIDs

We will first provide some definitions and notations related to the concept of decimation and discuss its incorporation into the framework of FAIDs before we delve into the details of adaptive decimation.

Definition 2

A variable node $v_i$ is said to be decimated at the end of iteration $k$ if $\hat{x}_i$ is set to a value $b_i \in \{0, 1\}$ and, for all iterations $k' > k$, every outgoing message from $v_i$ is $(-1)^{b_i} L_s$ irrespective of its incoming messages, i.e., $v_i$ always sends the strongest possible messages.

Remark: If a node is decimated, then all its descendants in the computation tree can be deleted since the node always sends $(-1)^{b_i} L_s$ to its parent.

A decimation rule $\beta$ is a function used by the decoder to decide whether a variable node should be decimated and what value it should be decimated to. Let $\gamma_i$ denote the output of a decimation rule applied to a node $v_i$. If $\gamma_i = 0$, then the node is not decimated. If $\gamma_i = 1$, then $\hat{x}_i = 0$, and if $\gamma_i = -1$, then $\hat{x}_i = 1$.

Remark: The decimation rule is a function of the channel value and the most recent incoming messages received by the variable node before the decimation rule is applied.
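As a small illustration, a decimation decision could be recorded and enforced as follows; the dictionary-based node representation and the function names are hypothetical and chosen only for this sketch.

```python
L_MAX = 3  # strongest level L3, encoded as the integer 3

def apply_decimation(node, gamma):
    """Record a decimation decision: gamma = +1 decimates to 0, gamma = -1 decimates to 1."""
    if gamma == 0:
        return                                   # node stays undecimated
    node["decimated"] = True
    node["decision"] = 0 if gamma == 1 else 1    # hard value the bit is fixed to
    node["forced_msg"] = gamma * L_MAX           # message sent on every edge from now on

def outgoing_message(node, computed_msg):
    """A decimated node overrides whatever the variable node update computed."""
    return node["forced_msg"] if node.get("decimated") else computed_msg

v = {}
apply_decimation(v, 1)
print(outgoing_message(v, -2))  # prints 3: the node now always sends +L3
```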

We shall refer to each instance of applying a decimation rule on all the variable nodes as a decimation round.

There are two key aspects to note regarding the application of a decimation rule: 1) the decimation rule is applied only after messages have been passed iteratively for some iterations, and 2) after each instance of applying the decimation rule, all messages are cleared to zero (which practically restarts the decoder, except that the decimated nodes remain decimated).

Let $N_D$ denote the number of decimation rounds carried out by the decoder with a given decimation rule beyond which no more variable nodes are decimated.

Definition 3

The residual graph $R$ is the induced subgraph of the set of variable nodes in $V$ that are not decimated after $N_D$ decimation rounds.

We can now formally define the class of adaptive decimation-enhanced multilevel FAIDs (ADFAIDs) as follows. A decoder belonging to such a class, denoted by $\mathcal{D}_{AD}$, is defined as $\mathcal{D}_{AD} = (\mathcal{M}, \mathcal{Y}, \Phi_c, \tilde{\Phi}_v, \mathcal{B})$, where the sets $\mathcal{M}$ and $\mathcal{Y}$, and the map $\Phi_c$ are the same as the ones defined for a multilevel FAID. The map $\tilde{\Phi}_v$ is the update rule used at the variable nodes. It takes the output $\gamma_i$ of a decimation rule as one of its arguments and uses the maps $\Phi_v^d$ and $\Phi_v$ to compute its output. For simplicity, we define it for the case of $d_v = 3$ as

$$\tilde{\Phi}_v(y_i, \gamma_i, m_1, m_2) = \begin{cases} \gamma_i \cdot L_s, & \text{if } \gamma_i \neq 0,\\ \Phi_v^d(y_i, m_1, m_2), & \text{if } \gamma_i = 0 \text{ and } p < N_D,\\ \Phi_v(y_i, m_1, m_2), & \text{if } \gamma_i = 0 \text{ and } p \geq N_D, \end{cases}$$

where $p$ denotes the decimation round completed by the decoder. The maps $\Phi_v^d$ and $\Phi_v$ are defined as either LT or NLT functions or as look-up tables, similar to $\Phi_v$ of a FAID $\mathcal{D}$.

Remark: The new class of decoders proposed in this paper uses two different maps, $\Phi_v^d$ and $\Phi_v$, for updating the messages on non-decimated variable nodes. $\Phi_v^d$ is the map used to update messages specifically during the decimation procedure, whereas $\Phi_v$ is the map used to decode the remaining non-decimated nodes after the decimation procedure is completed. Also note that for the case of $d_v = 3$, we impose additional constraints on the definition of $\Phi_v^d$; $\Phi_v$ is constrained similarly.
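A minimal sketch of this selection logic is given below. The stand-in rules phi_v_d and phi_v are simple clipped sums so that the snippet is self-contained; they are placeholders and not the actual maps used in the paper.

```python
L_MAX = 3  # strongest level L3

def phi_v_d(y, msgs):   # placeholder decimation-phase rule (not the paper's map)
    return max(-L_MAX, min(L_MAX, sum(msgs) + y))

def phi_v(y, msgs):     # placeholder final decoding rule (not the paper's map)
    return max(-L_MAX, min(L_MAX, sum(msgs) + 2 * y))

def phi_v_tilde(y, gamma, msgs, decimation_done):
    """Combined update: decimated nodes always send the strongest message of their sign."""
    if gamma != 0:
        return gamma * L_MAX
    return phi_v(y, msgs) if decimation_done else phi_v_d(y, msgs)

print(phi_v_tilde(-1, 0, [2, 1], decimation_done=False))  # phi_v_d: 2 + 1 - 1 = 2
print(phi_v_tilde(-1, 1, [2, 1], decimation_done=False))  # decimated to 0 -> +3
```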

Proposition 1

Given a decimation rule $\beta$, if the number of decimated nodes after the $p$-th decimation round is the same as the number of decimated nodes after the $(p-1)$-th decimation round, then no additional nodes will be decimated in the subsequent decimation rounds.

Remark: This is the stopping criterion used for the decimation procedure, and it determines $N_D$.
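In terms of control flow, the decimation procedure can be driven by a loop like the following sketch, where run_decimation_round is a hypothetical callback that applies the current rule once (to the eligible nodes) and returns the total number of decimated nodes; the loop then stops exactly when a round adds no new node.

```python
def count_decimation_rounds(run_decimation_round, max_rounds=100):
    """Repeat decimation rounds until the number of decimated nodes stops growing."""
    previous = -1
    rounds = 0
    while rounds < max_rounds:
        total = run_decimation_round()   # apply the rule once to all eligible nodes
        rounds += 1
        if total == previous:            # Proposition 1: nothing new will ever be added
            break
        previous = total
    return rounds
```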

The set $\mathcal{B}$ is the set of decimation rules used for adaptive decimation, and any $\beta \in \mathcal{B}$ satisfies the following properties (specified for $d_v = 3$).


  1. $\beta(C, m_1, m_2, m_3) = 1$ if and only if $\beta(-C, -m_1, -m_2, -m_3) = -1$, and $\beta(C, m_1, m_2, m_3) = 0$ if and only if $\beta(-C, -m_1, -m_2, -m_3) = 0$ (symmetry).

  2. Given a node $v_i$ with channel value $y_i$, if $\beta(y_i, m_1, m_2, m_3) \neq 0$, then $\beta(y_i, m_1, m_2, m_3) = 1$ only when $y_i = +C$, and $\beta(y_i, m_1, m_2, m_3) = -1$ only when $y_i = -C$.

Remark: Property 2 implies that a node can be decimated to zero only if $y_i = +C$ and to one only if $y_i = -C$. Consequently, a node that is initially correct will never be decimated to a wrong value, and a node that is initially wrong will never be decimated to the correct value. Then, a necessary condition for successful decoding is that no node initially in error is decimated. We shall now restrict our discussion to 7-level FAIDs for the remainder of the paper.

For a given decimation rule $\beta$, a set $\Gamma(\beta)$ can be used to completely specify $\beta$, where $\Gamma(\beta)$ is defined as the set of all unordered triples $(m_1, m_2, m_3)$ such that $\beta(C, m_1, m_2, m_3) = 1$. Note that for any unordered triple $(m_1, m_2, m_3) \in \Gamma(\beta)$, $\beta(-C, -m_1, -m_2, -m_3) = -1$ by property 1, so $\Gamma(\beta)$ is sufficient to completely specify $\beta$. A rule $\beta$ is considered to be a conservative decimation rule if $|\Gamma(\beta)|$ is small and an aggressive rule if $|\Gamma(\beta)|$ is large.
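A rule specified in this way can be implemented as a simple membership test on the sorted triple of incoming messages, as in the sketch below; the triples placed in GAMMA are placeholders and not the sets used in the paper.

```python
# Triples are stored sorted, for channel value +C; by the symmetry property the
# rule for -C is obtained by negating the messages.  Illustrative content only.
GAMMA = {(2, 3, 3), (3, 3, 3)}

def decimation_rule(y, m1, m2, m3, gamma_set=GAMMA):
    """Return +1 (decimate to 0), -1 (decimate to 1), or 0 (do not decimate)."""
    if y > 0:  # channel value +C: may only be decimated to 0
        return 1 if tuple(sorted((m1, m2, m3))) in gamma_set else 0
    # channel value -C: may only be decimated to 1
    return -1 if tuple(sorted((-m1, -m2, -m3))) in gamma_set else 0

print(decimation_rule(+1, 3, 3, 2))     # 1: the sorted triple (2, 3, 3) is in GAMMA
print(decimation_rule(-1, -3, -3, -2))  # -1: the negated triple is in GAMMA
print(decimation_rule(+1, 1, 3, 3))     # 0: (1, 3, 3) is not in GAMMA
```

In this representation, enlarging GAMMA makes the rule more aggressive, while shrinking it makes the rule more conservative.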

Note that the class of decimation-enhanced FAIDs defined in our previous work [5] is a special case of the newly proposed decoders, where $|\mathcal{B}| = 1$ and $\Phi_v^d = \Phi_v$. In other words, only a single non-adaptive decimation rule and a single map are used for updating messages in the DFAIDs of [5].

For the remainder of the paper, we shall refer to variable nodes that are initially in error as error nodes and variable nodes that are initially correct as correct nodes.

III-A Motivation for adaptive decimation

Given an error pattern of relatively low weight, the primary role of decimation is to isolate the subgraph associated with the error pattern from the rest of the graph by decimating as many correct nodes outside this subgraph as possible. The rationale behind resetting the messages to zero at the end of each decimation round is to allow more non-decimated correct nodes that are close to the neighborhood of the decimated correct nodes to possibly be decimated, as long as none of the error nodes have been decimated. This is possible since the decimated nodes always send the strongest messages ($\pm L_s$).

Now if a given error pattern is such that the error nodes are relatively clustered, with many interconnections between them through their neighboring check nodes, then a more conservative decimation rule would have to be used by the decoder to ensure that none of the error nodes are decimated. However, if the error pattern is such that the error nodes are more spread out, then it may be desirable to use a more aggressive rule, as there will be many correct nodes in the neighborhood of the error nodes that can be decimated without decimating the error nodes and, in turn, possibly help the decoder to converge. This is our main motivation for the use of adaptive decimation in the newly proposed decoders, and we will eventually show that adaptive decimation can help achieve an increase in the guaranteed error correction capability on a code.

III-B Proposed scheme

We will now describe a particular adaptive decimation scheme used by the decoder in order to enhance the guaranteed error correction capability. In the proposed scheme, the set $\mathcal{B}$ consists of two decimation rules, namely $\mathcal{B} = \{\beta_1, \beta_2\}$, where $\Gamma(\beta_1)$ and $\Gamma(\beta_2)$ are the sets of unordered triples that completely specify the rules $\beta_1$ and $\beta_2$, respectively. The rule $\beta_1$ is used only once, at the end of the third iteration, and from that point on, $\beta_2$ is used after every two iterations. The adaptation is carried out only through $\beta_2$, as follows.

We define a sequence of decimation rules $\beta_2^{(1)}, \beta_2^{(2)}, \ldots, \beta_2^{(Q)}$ from $\beta_2$ by considering ordered subsets of $\Gamma(\beta_2)$ of increasing size. Let $Q$ be the number of rules in the sequence and let $\Gamma(\beta_2^{(j)})$ denote the set that specifies the rule $\beta_2^{(j)}$. The sets are chosen such that $\Gamma(\beta_2^{(1)}) \subset \Gamma(\beta_2^{(2)}) \subset \cdots \subset \Gamma(\beta_2^{(Q)}) = \Gamma(\beta_2)$. This implies that the rule $\beta_2^{(j+1)}$ is less conservative than $\beta_2^{(j)}$, with $\beta_2^{(1)}$ being the most conservative and $\beta_2^{(Q)}$ being the least conservative (or most aggressive). Note that each subset must be chosen in a manner that ensures that its corresponding rule satisfies the properties of decimation rules mentioned previously.
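As a sketch, such a nested sequence could be generated from a fixed part and an ordered code-dependent part (anticipating the split $\Gamma_a \cup \Gamma_b$ of Section III-E); the triples used below are placeholders only.

```python
GAMMA_A = [(3, 3, 3)]                                # always included (illustrative)
GAMMA_B_ORDERED = [(2, 3, 3), (1, 3, 3), (2, 2, 3)]  # code-dependent order (illustrative)

def rule_sequence(gamma_a, gamma_b_ordered):
    """Return nested sets: the j-th set adds the first j triples of the ordered part."""
    return [set(gamma_a) | set(gamma_b_ordered[:j])
            for j in range(len(gamma_b_ordered) + 1)]

for j, g in enumerate(rule_sequence(GAMMA_A, GAMMA_B_ORDERED), start=1):
    print(f"rule {j}: {sorted(g)}")  # the first rule is the most conservative, the last the most aggressive
```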

For a given error pattern, the decoder starts the decimation procedure by passing messages using the map $\Phi_v^d$ and applying the decimation rule $\beta_1$ at the end of the third iteration, after which the messages are reset to zero. Then the most conservative rule in the sequence, $\beta_2^{(1)}$, is used after every two iterations (followed by resetting the messages) until no more nodes can be decimated. The map $\Phi_v$ is then used to decode the remaining non-decimated nodes. If the decoder still does not converge, the whole decoding process is repeated with the next, more aggressive rule in place of the current one. This continues until the decoder converges or until all rules in the sequence have been used. Let $s$ denote the number of decimated nodes at the end of a decimation round. The decoding scheme can be summarized as follows (a Python sketch of the same control flow is given after the remarks below). Note that this scheme is devised particularly for the case of $d_v = 3$.

  1. Set $l = 1$. Note that $\Phi_c$ will always be used to update messages at the check nodes.

  2. Initialize the decoding attempt: clear all previous decimations, reset all messages to zero, and set $\beta_2 \leftarrow \beta_2^{(l)}$.

  3. Start the decimation procedure by passing messages for three iterations using $\Phi_v^d$. If the decoder converges within those three iterations, STOP.

  4. Apply decimation rule $\beta_1$ to every $v_i \in V$. Then reset all messages to zero and set $s$ to the number of decimated nodes.

  5. Pass messages for two iterations using $\Phi_v^d$ for the update at the non-decimated nodes. If the decoder converges within those two iterations, STOP.

  6. Apply decimation rule $\beta_2$ only on nodes $v_i$ for which $\gamma_i = 0$. Then reset all messages to zero. If the number of decimated nodes is larger than $s$, update $s$ and go back to step 5, else go to step 7.

  7. Pass messages using $\Phi_v$ on the nodes for which $\gamma_i = 0$.

  8. If the decoder converges or has reached the maximum allowed number of iterations, STOP. Else set $l \leftarrow l + 1$.

  9. If $l > Q$, STOP. Else go to step 2.

Algorithm 1 Adaptive decimation-enhanced FAID algorithm

Remarks: 1) The only criterion used by the decoder to decide when to use a more aggressive rule on a given error pattern is whether the decoding has failed. 2) The reason for applying $\beta_1$ at the end of the third iteration is that at least three iterations are required for an $L_3$ message to be passed. 3) The reason for applying $\beta_2$ every two iterations is that two iterations are small enough to help prevent the growth of wrong message strengths but sufficient to allow all levels in $\mathcal{M}$ to be passed.
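To make the control flow of Algorithm 1 concrete, the following sketch walks through the same steps. The graph object and its methods (pass_iterations, apply_rule, reset_messages, converged, decisions, clear_decimation) are a hypothetical message-passing interface invented for this illustration; only the sequencing of the steps follows the algorithm above.

```python
def adfaid_decode(graph, beta1, beta2_sequence, phi_v_d, phi_v, max_iters=100):
    """Adaptive decimation-enhanced decoding: try the rules from conservative to aggressive."""
    for beta2 in beta2_sequence:                  # steps 1, 2, 8, 9: sweep the rule sequence
        graph.clear_decimation()                  # restart the whole procedure for this rule
        graph.reset_messages()
        # Step 3: three iterations with the decimation-phase update.
        graph.pass_iterations(phi_v_d, 3)
        if graph.converged():
            return graph.decisions()
        # Step 4: apply beta_1 once to every variable node, then reset the messages.
        decimated = graph.apply_rule(beta1, only_undecimated=False)
        graph.reset_messages()
        # Steps 5 and 6: alternate two iterations of Phi_v^d with one application of beta_2
        # until a round decimates no new node (the stopping criterion of Proposition 1).
        while True:
            graph.pass_iterations(phi_v_d, 2)
            if graph.converged():
                return graph.decisions()
            new_total = graph.apply_rule(beta2, only_undecimated=True)
            graph.reset_messages()
            if new_total == decimated:
                break
            decimated = new_total
        # Steps 7 and 8: decode the residual (non-decimated) nodes with the original FAID rule.
        graph.pass_iterations(phi_v, max_iters)
        if graph.converged():
            return graph.decisions()
    return graph.decisions()                      # all rules exhausted: report the failure
```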

III-C Choice of $\Phi_v$ and $\Phi_v^d$

For the proposed decoders, the map $\Phi_v$ is simply chosen to be the $\Phi_v$ of a particular FAID already known to be good on a given code and whose guaranteed error correction capability we want to improve. For the numerical results, $\Phi_v$ is chosen to be the rule of the 7-level FAID defined by Table I.

The choice of $\Phi_v^d$, on the other hand, is non-trivial. It is designed by analyzing the messages passed within dense subgraphs that could potentially be trapping sets for a given FAID when errors are introduced in them under the isolation assumption. The rule is chosen under the premise that the growth of message strengths within the subgraph should be slow, since many correct nodes in the subgraph would most likely be connected to error nodes, and multiple error nodes may be interconnected through the check nodes of the subgraph (if the number of errors introduced is comparable to the size of the subgraph). Explicit design methods for $\Phi_v^d$ are not discussed in this paper, but the particular $\Phi_v^d$ used for the numerical results was designed based on the above philosophy. It is an LT rule (see Section II), so it is described by assigning values to the levels in $\mathcal{M}$, the thresholds in $T$, and the channel weight; this was found to be a good rule for decimation.

III-D Analysis

Due to page constraints, no proofs are provided here; they will be given in the journal version of this paper. For the analysis, we assume that the all-zero codeword is transmitted, which is valid since the decoders considered are symmetric.

Proposition 2

A node $v_i$ can receive a $\pm L_3$ from a neighboring check node $c_j$ in the first or second iteration after resetting the messages only if all nodes in $N(c_j) \setminus \{v_i\}$ have been decimated.

Lemma 1

If $\Gamma(\beta)$ is defined as prescribed in Section III-E and all error nodes in $V$ are non-decimated, then a correct node will be decimated if it receives an $L_3$ during a decimation round.

Remark: Note that $\Gamma(\beta)$ will always be defined so that the condition of the above lemma holds for any $\beta \in \mathcal{B}$, as explained in the next subsection. Also note how resetting the messages at the end of each decimation round can help decimate more correct nodes, due to the above lemma.

Theorem 1

If the condition of Lemma 1 holds and no error node is decimated, then any correct node in the residual graph $R$ is connected to check nodes that have degree at least two in $R$.

Corollary 1

If Theorem 1 holds and no error node in the residual graph $R$ is connected to a degree-one check node, then the set of variable nodes in $R$ is a stopping set.

Remark: Note that if an error node in the residual graph is connected to a degree-one check node, it would receive the strongest correct message ($L_3$) in every iteration for the remainder of the decoding (again assuming no error nodes are decimated), and this will most likely lead to decoder convergence. Therefore, if no error node is decimated, the decoder is more likely to fail when the residual graph is a stopping set (refer to [1] for details).
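As a concrete illustration of the stopping-set condition underlying Corollary 1 (every check node touching the set has at least two neighbors inside it), a membership test could look as follows; the dictionary-based graph representation is an assumption made for this sketch.

```python
def is_stopping_set(check_neighborhoods, S):
    """check_neighborhoods: {check_id: set of variable ids}; S: candidate set of variable ids."""
    S = set(S)
    for neighbors in check_neighborhoods.values():
        touched = neighbors & S
        if len(touched) == 1:  # a degree-one check in the induced subgraph
            return False
    return True

# Toy example: checks c0..c2, each touching two of the variables {0, 1, 2}.
toy = {"c0": {0, 1}, "c1": {1, 2}, "c2": {0, 2}}
print(is_stopping_set(toy, {0, 1, 2}))  # True: every check sees two nodes of the set
print(is_stopping_set(toy, {0, 1}))     # False: c1 and c2 each see exactly one
```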

This observation is important since we can now design the rules $\beta_1$ and $\beta_2$ and the sequence $\beta_2^{(1)}, \ldots, \beta_2^{(Q)}$ based on analyzing error patterns whose errors are entirely contained in the minimal stopping sets of a given code. For instance, if our goal is to correct up to $t$ errors, then we consider all error patterns of weight up to $t$ in the stopping sets in order to design $\beta_1$ and $\beta_2$.

If a FAID $\mathcal{D}$ with map $\Phi_v$ has a critical number of $t+1$ on a stopping set whose induced subgraph is $H$, then $\mathcal{D}_{AD}$ is guaranteed to correct up to $t$ errors introduced in $H$ on the code if the residual graph is $H$. In other words, on a particular code, $\mathcal{D}_{AD}$ is more likely to correct all error patterns up to weight $t$ whose support lies in a stopping set of the code if $\mathcal{D}$ has a critical number of $t+1$ on the stopping set.

III-E Discussion on the design of decimation rules $\beta_1$ and $\beta_2$

The design of $\beta_1$ involves selecting the triples that should be included in $\Gamma(\beta_1)$, which depends on the number of errors we are trying to correct and the type of harmful subgraphs present in the code. $\beta_1$ should be chosen to be conservative enough so that no error nodes are decimated. The design of $\beta_2$, on the other hand, involves not only selecting the triples that should be included in $\Gamma(\beta_2)$, but also determining a specific ordering of those triples; this ordering defines the subsets that specify the sequence of rules, used starting from the most conservative rule, and it is dependent on the structure of the code. Both rules can be designed by analyzing them on errors introduced in stopping sets of the code.

In order to specify the set $\Gamma(\beta_1)$, we simply specify the message triples with the weakest values. For specifying $\Gamma(\beta_2)$ in a concise way, we introduce some notation. Let $\Gamma(\beta_2)$ be divided into two disjoint subsets, i.e., $\Gamma(\beta_2) = \Gamma_a \cup \Gamma_b$, where $\Gamma_a$ is a subset that contains all triples satisfying a fixed condition. Based on the analysis described previously, any $\Gamma(\beta_2)$ defined should always have $\Gamma_a$ as its subset, regardless of the code. The subset $\Gamma_b$, which is dependent on the code, is an ordered set whose ordering determines the subsets used to specify the sequence of rules $\beta_2^{(1)}, \ldots, \beta_2^{(Q)}$.

TABLE II: Subset $\Gamma_b$ of $\Gamma(\beta_2)$ designed for the structured rate-0.753 code
TABLE III: Subset $\Gamma_b$ of $\Gamma(\beta_2)$ designed for the Tanner code

IV Numerical Results and Discussion

Numerical results are provided in Fig. 1 and Fig. 2 for two codes: the well-known $(155, 64)$ Tanner code and a structured rate-0.753 code constructed based on Latin squares [6]. For the Tanner code and for the high-rate structured code, the set $\Gamma(\beta_1)$ is specified by conditions on the triples $(m_1, m_2, m_3)$ that include a componentwise comparison against a minimum triple; the specific conditions differ between the two codes, as do the number of rules $Q$ in the sequence and the cardinality of the subset $\Gamma_b$ used. The sets $\Gamma_b$ for the structured code and the Tanner code are shown in Tables II and III, respectively. The maximum number of iterations allowed for BP and the 7-level FAID, and for $\Phi_v$ of the 7-level ADFAID, was 100.

The significant improvement in the slope of the error floor achieved by the 7-level ADFAID is evident. For the Tanner code, it was verified that all 6-error patterns are corrected by the 7-level ADFAID, while the 7-level FAID corrects all 5-error patterns and BP fails on some 5-error patterns. For the high-rate structured code, no failed 5-error patterns were found in the region of simulation shown in Fig. 2, which is significant given the code's minimum distance. This shows that for certain high-rate codes whose graphs are relatively dense, and for which it is difficult to ensure a large minimum distance, FAIDs with adaptive decimation can possibly come close to achieving the guaranteed error correction of maximum likelihood decoding. Note that the 7-level ADFAIDs are still 3-bit message-passing decoders whose complexity remains reasonable and lower than that of BP.

Fig. 1: FER performance comparison on the Tanner code
Fig. 2: FER performance comparison on the structured rate-0.753 code

Acknowledgment

This work was funded by NSF grants CCF-0830245 and CCF-0963726, and by an Institut Universitaire de France grant.

References

  • [1] T. Richardson, “Error floors of LDPC codes,” in Proc. 41st Annual Allerton Conf. on Commun., Control and Computing, 2003.
  • [2] M. Ivkovic, S. K. Chilappagari, B. Vasic, “Eliminating trapping sets in low-density parity-check codes by using Tanner graph covers,” IEEE Trans. Inf. Theory, vol. 54, no. 8, pp. 3763–3768, 2008.
  • [3] S. K. Planjery, D. Declercq, S. K. Chilappagari, B. Vasic, “Multilevel decoders surpassing belief propagation on the binary symmetric channel,” Proc. Int. Symp. Inf. Theory, pp. 769–773, Austin, Jul. 2010.
  • [4] S. K. Planjery, D. Declercq, L. Danjean, B. Vasic, “Finite alphabet iterative decoders for LDPC codes surpassing floating-point iterative decoders,” Electron. Lett., vol. 47, no. 16, pp. 919–921, Aug. 2011.
  • [5] S. K. Planjery, B. Vasic, D. Declercq, “Decimation-enhanced finite alphabet iterative decoders for LDPC codes on the BSC,” Proc. Int. Symp. Inf. Theory (ISIT’2011), pp. 2383–2387, St. Petersburg, Jul. 2011.
  • [6] D. V. Nguyen, B. Vasic, M. Marcellin, S. K. Chilappagari, “On the construction of structured LDPC codes free of small trapping sets,” IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2280–2302, Apr. 2012.