Recursive constructions and their maximum likelihood decoding

Abstract

We consider recursive decoding techniques for RM codes, their subcodes, and newly designed codes. For moderate lengths up to 512, we obtain near-optimum decoding with feasible complexity.

1 Introduction

In this paper, we consider decoding algorithms that achieve good performance with low complexity at moderate blocklengths. Our goal is to fill the void left by the best known algorithms: optimum maximum likelihood (ML) decoding, which has infeasible complexity even on relatively short blocks, and iterative decoding, which becomes efficient only at lengths of tens of thousands. More specifically, we wish to achieve near-optimum performance at lengths ranging from 128 to 512, where neither of these two algorithms can yet combine good performance with low complexity.

To achieve this goal, we will use recursive techniques. One particular class of codes generated by (multilevel) recursion is Reed-Muller (RM) codes and their subcodes. Also, RM codes are only slightly inferior to the best known codes at moderate lengths. We will see below that recursive decoding substantially outperforms other (nonexponential) algorithms known for RM codes. Our basic recursive procedure will split the RM code of length n = 2^m into two constituent RM codes of length n/2. Decoding is then relegated further to the shorter codes until we reach basic codes with feasible ML decoding. In all intermediate steps, we only recalculate the reliabilities of the newly defined symbols.

To improve decoding performance, we also generalize the recursive design. In particular, we use subcodes of RM codes and their modifications. We also use relatively short lists of code candidates in the intermediate steps of the recursion. As a result, we closely approach ML decoding performance at blocklengths up to 512.

2 Reed-Muller codes

We use the notation RM(m, r) for RM codes of length n = 2^m, dimension k = C(m, 0) + ... + C(m, r), and distance d = 2^(m−r). RM codes have found numerous applications thanks to fast decoding procedures. First, the majority algorithm [6] enables feasible bounded-distance decoding and can even correct [3] most error patterns of weight up to (d ln d)/2 on long codes of fixed rate R.

Other efficient decoding schemes are based on the recursive techniques of [5] and [2]. These algorithms enable bounded-distance decoding with the lowest complexity order known for RM codes. Simulation results [8] show that recursive algorithms extend the decoding domain of bounded-distance decoding. Subsequently, these algorithms were slightly refined in [9]. It was shown that (similar to majority decoding) the recursive algorithms of [5] and [2] correct most error patterns of weight up to (d ln d)/2 when used on long codes of fixed rate R.

For long low-rate RM codes of fixed order r, both majority decoding and the recursive schemes correct most error patterns of Hamming weight up to n(1 − ε)/2, where the residual term ε has vanishing order

(1)

as m → ∞. Note that (1) gives a threshold-type capacity that approaches the upper limit of n/2. However, the rate of convergence is relatively slow even for low-order codes. Much better results are obtained for ML decoding: for long codes of fixed order r, it is proven in [7] that ML decoding further reduces the residual term ε to the order of

(2)

3 Recursive structure

In essence, all recursive techniques known for RM codes are based on the Plotkin construction. Here the original code RM(m, r) is represented in the form (u, u + v) by taking any subblock u from RM(m − 1, r) and any v from RM(m − 1, r − 1). These two subcodes have length n/2. By continuing this process, we again obtain shorter RM codes of length n/4, and so on. Finally, we arrive at the end nodes, which are repetition codes RM(s, 0) and full spaces RM(s, s). This is shown schematically in Fig. 1 for RM codes of length 32. In Fig. 2, we consider an incomplete decomposition terminated at the biorthogonal codes RM(s, 1) and the single-parity-check codes RM(s, s − 1).

Now let a denote the block of k information bits that encodes a vector (u, u + v). It is also important that our recursion splits a into two information subblocks a_u and a_v that encode the vectors u and v, respectively. Correspondingly, the code dimensions satisfy the recursion k(m, r) = k(m − 1, r) + k(m − 1, r − 1). In this way, the shorter information subblocks can be split again until we arrive at the end nodes. Thus, any specific codeword can be encoded from the (multiple) information strings assigned to the end nodes RM(s, 0) or RM(s, s). Following [2], it can be proven that recursive encoding of the code RM(m, r) has complexity

(3)

This observation comes from two facts. First, the end nodes RM(s, 0) and RM(s, s) satisfy the bound (3). Second, consider the two constituent codes RM(m − 1, r) and RM(m − 1, r − 1). The (u, u + v) construction then adds at most n/2 binary additions for the code RM(m, r). Using this recursion, one can show that RM(m, r) also satisfies (3) if its constituent codes do.
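As an illustration of the dimension recursion k(m, r) = k(m − 1, r) + k(m − 1, r − 1) and of the (u, u + v) encoding map, the following Python sketch encodes RM(m, r) recursively down to the repetition and full-space end nodes. The function names are ours, introduced only for illustration:

```python
def dim(m, r):
    # dimension k(m, r) via the recursion k(m, r) = k(m-1, r) + k(m-1, r-1)
    if r == 0:
        return 1            # repetition code RM(m, 0)
    if r == m:
        return 2 ** m       # full space RM(m, m)
    return dim(m - 1, r) + dim(m - 1, r - 1)

def encode(m, r, bits):
    # recursive Plotkin encoding: RM(m, r) = {(u, u + v)} with
    # u from RM(m-1, r) and v from RM(m-1, r-1)
    assert len(bits) == dim(m, r)
    if r == 0:
        return bits * 2 ** m          # repeat the single bit 2^m times
    if r == m:
        return list(bits)             # full space: bits map to themselves
    ku = dim(m - 1, r)                # split the info block into a_u | a_v
    u = encode(m - 1, r, bits[:ku])
    v = encode(m - 1, r - 1, bits[ku:])
    return u + [x ^ y for x, y in zip(u, v)]
```

For example, RM(5, 2) has dimension 16, and every nonzero codeword of the biorthogonal code RM(3, 1) produced this way has weight 4 or 8, as expected.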

[Figs. 1 and 2: decomposition trees, with each node labeled by its code parameters (m, r). Fig. 1 decomposes the codes of length 32 down to the end nodes (s, 0) and (s, s); Fig. 2 terminates at the biorthogonal codes (s, 1) and the single-parity-check codes (s, s − 1).]

Fig. 1 Full decomposition             Fig. 2 Partial decomposition

4  New decoding techniques

Our algorithm also uses the (u, u + v) construction and relegates decoding to the two constituent RM codes. The decoder receives a block that consists of two halves corrupted by noise. We first try to find the better protected codeword v from RM(m − 1, r − 1). Then we proceed with the block u from the code RM(m − 1, r). In a more general scheme, we repeat this recursion by decomposing the subblocks u and v further. In all intermediate steps, we only recalculate the probabilities of the newly defined symbols. Finally, we perform soft-decision ML decoding once we reach the end nodes. The most important difference from the previous work [9] is that in each step we keep a list of the most probable candidates obtained prior to this step. This difference is discussed in Section 5. In this section, we first assume that our decoding is terminated at the biorthogonal codes depicted in Fig. 2.

Step 1.  To find the subblock v in hard-decision decoding, one would use its corrupted version, the binary sum of the two received halves. Using a more general approach, we find the posterior probabilities of the received symbols. On the left half, each symbol has posterior probability p'_i of being 0, given the received symbol.

Similarly, we use the right half to find the posterior probability p''_i of any symbol.

Given the probabilities p'_i and p''_i of the symbols u_i and u_i + v_i, we then find the posterior probability q_i of their binary sum v_i. Here we use the formula of total probability and find

q_i = p'_i p''_i + (1 − p'_i)(1 − p''_i).        (4)

Here we use the fact that the two symbols u_i and u_i + v_i are independent for equiprobable messages. Also, both symbols are corrupted by independent Gaussian noise. Now we can use any soft-decision decoding algorithm that accepts the probabilities q_i to find the most probable vector v from the code RM(m − 1, r − 1). This completes Step 1 of our algorithm. The vector v is then passed to Step 2.

Step 2.  Now we use both the received block and the decoded vector v to estimate each symbol u_i on the right half. Assuming that v is correct, we find that each symbol u_i has posterior probability t_i obtained from the right half with the contribution of v removed.

Now we have two posterior probabilities, p'_i and t_i, for the symbols u_i, obtained on the two corrupted halves. Using Bayes' rule, we find the combined estimate

P_i = p'_i t_i / (p'_i t_i + (1 − p'_i)(1 − t_i)).        (5)

Finally, we perform soft-decision decoding and find the subblock u from the code RM(m − 1, r).

Thus, the procedure has a recursive structure: decoding of RM(m, r) calls the decoding procedures for RM(m − 1, r − 1) and RM(m − 1, r), and so on. By recalculating the probabilities (4) and (5), we finally arrive at the biorthogonal Reed-Muller codes on our way to the left, or full codes on the way to the right. Maximum likelihood decoding is executed on the end nodes. Each such decoding retrieves a new subset of information symbols associated with the current end node. In both cases, ML decoding has complexity order at most n log2 n [4]. Simple analysis also shows that recalculating all posterior probabilities in (4) and (5) takes a number of operations linear in n. Therefore our decoding complexity satisfies the recursion

This brings the overall complexity to the order of n log2 n real operations. A slightly more efficient version gives an even lower complexity.
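The recursion of Steps 1 and 2 can be sketched as follows. This is a minimal illustration under our assumptions: full decomposition down to the repetition and full-space end nodes, input probabilities strictly inside (0, 1), and recalculations corresponding to formulas (4) and (5):

```python
import math

def decode(m, r, p):
    # Recursive soft-decision decoder sketch for RM(m, r);
    # p[i] is the posterior probability that code symbol i equals 0.
    n = 2 ** m
    if r == 0:
        # repetition end node: ML choice between all-zero and all-one words
        ll0 = sum(math.log(max(x, 1e-12)) for x in p)
        ll1 = sum(math.log(max(1.0 - x, 1e-12)) for x in p)
        return [0] * n if ll0 >= ll1 else [1] * n
    if r == m:
        # full-space end node: symbol-by-symbol hard decisions
        return [0 if x >= 0.5 else 1 for x in p]
    half = n // 2
    pl, pr = p[:half], p[half:]
    # Step 1: probability that v_i = 0, combining both halves, as in (4)
    q = [a * b + (1 - a) * (1 - b) for a, b in zip(pl, pr)]
    v = decode(m - 1, r - 1, q)
    # Step 2: strip the decoded v from the right half, then combine the
    # two independent estimates of u_i by Bayes' rule, as in (5)
    t = [b if vi == 0 else 1 - b for b, vi in zip(pr, v)]
    comb = [a * b / (a * b + (1 - a) * (1 - b)) for a, b in zip(pl, t)]
    u = decode(m - 1, r, comb)
    return u + [x ^ y for x, y in zip(u, v)]
```

For instance, feeding symbol probabilities 0.9/0.1 obtained from a mildly noisy observation of a codeword of RM(3, 1) recovers that codeword exactly.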

5 Analysis and improvements

Given the code RM(m, r), we first decode the leftmost code, followed by the subsequent codes in decoding order, and so on. With the exception of the leftmost and rightmost nodes, the procedure enters each node multiple times, by taking all the paths leading to this node. It turns out that the output bit error rate (BER) varies significantly over the different nodes, and even over the different paths leading to the same node. Therefore our first problem is to identify the most error-prone paths. We start our analysis with two examples.

Example 1. For simplicity, assume that the all-zero codeword of the code RM(m, r) is transmitted over a binary symmetric channel with crossover probability p. Then we use formula (4) with p'_i = p''_i = 1 − p to find the probability q of a correct symbol in the block v. From (4) we see that q = (1 − p)^2 + p^2, so the error probability 1 − q = 2p(1 − p) is nearly double the original p. Subsequently, this probability of correct decision rapidly converges to 1/2 in a few more steps. On the positive side, we note that each step gives a better protected code that has twice the relative distance of the former one. In particular, the leftmost node has the largest relative distance. Its ML decoding gives asymptotically vanishing BER if the residual term still exceeds the order given in (2).
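A quick numeric check of this degradation: combining two independent symbol estimates, each correct with probability q, leaves a symbol of v correct with probability q^2 + (1 − q)^2. Iterating this map from q = 0.9 (a BSC with p = 0.1) shows the rapid convergence to 1/2 noted above:

```python
# probability that a symbol of v is correct after each left move,
# starting from a BSC with crossover probability p = 0.1
p = 0.1
q = 1 - p                   # initial probability of a correct symbol
history = []
for step in range(4):
    q = q * q + (1 - q) * (1 - q)   # combine two independent estimates
    history.append(round(q, 4))
print(history)   # approaches 1/2 from above
```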

Example 2. Suppose that our original code from the previous example has already received the correct subblock v from the code RM(m − 1, r − 1). Now we need to find the remaining subblock u from the code RM(m − 1, r). Given correct v, we can use (5). Then we find that a symbol u_i is correct with probability (1 − p)^2 / ((1 − p)^2 + p^2), which exceeds the original 1 − p. Now we see that the probability of correct decision rapidly increases as we move to the right. Note, however, that each new code has half the relative distance of its parent code. In other words, we subsequently improve the channel while entering new codes with weaker correcting capabilities. Finally, the last (rightmost) code has no error protection and gives an output BER equal to its input error probability.

Asymptotic analysis. For AWGN channels, we assume that the all-zero codeword is transmitted as a sequence of +1s. Then we receive independent random variables (RVs) with normal distribution N(1, σ²). Accordingly, it can readily be seen that the posterior probabilities p'_i and p''_i become independent RVs with a non-Gaussian distribution (here tanh is the hyperbolic tangent):

(6)

In the next step, we obtain the RVs defined by (4) and (5). Their distributions can also be written (see [9]) in the form (6), where the residual terms are:

(7)

Similar to Example 1, it can be shown that the first RV has a smaller expectation than the original estimate. By contrast, the second RV has a greater expected value; here the analysis is similar to Example 2. We also use the fact that the newly defined RVs are independent at each new step. Now consider the asymptotic case of high noise power σ². (Note that this case is relevant to long RM codes of fixed order r as m → ∞.) We then use asymptotic approximations in formulas (4) and (5) and arrive at the following conclusions.

We prove that moving to the left, from RM(m, r) to RM(m − 1, r − 1) and further, is equivalent to squaring the noise power (bringing it to σ⁴, then σ⁸, and so on), while keeping the signal energy equal to 1. By contrast, the original noise power is cut in half when the algorithm moves to the right (bringing it to σ²/2, then σ²/4, and so on).

We prove that the left-hand movement makes our subcodes much more vulnerable. In this case, doubling the relative code distance does not compensate for the stronger noise. In particular, the highest (worst) BER is obtained at the leftmost node, which is decoded first. The second worst BER is obtained at the next decoded node, and so on. Using the conventional Q-function notation, we prove that

(8)

Now we see that even two adjacent nodes give very different results, with the BER of the first greatly exceeding that of the second. By contrast, moving to the right does not increase the output BER relative to the parent code. In this case, the lowest BER is obtained at the rightmost node.

Asymptotic comparison.  For long RM codes, the new recursive decoding increasingly outperforms both the majority algorithm and the former recursive techniques of [5], [8] as the block length grows. Further, it can be shown that for long RM codes of fixed rate R, the above decoding corrects most error patterns of weight up to d ln d, thus:

increasing the capacity of bounded-distance decoding 2 ln d times;

doubling the capacity of the former recursive technique.

Improvements. An important conclusion from the above analysis is to set the leftmost information bits to zero. In this way, we arrive at subcodes of the original code RM(m, r) obtained by eliminating only a few of the least protected information bits. In particular, even eliminating the first information bits that form the leftmost code can immediately reduce the output BER to roughly its square for sufficiently long codes.

Decoding performance can be further improved by using list decoding. To simplify the analysis, we now terminate the recursion at the repetition codes RM(s, 0). In particular, we start with the leftmost code and take both of its codewords, the all-zero and all-one words. Correspondingly, we keep both posterior probabilities instead of choosing the more probable codeword. This step gives the two initial edges of a tree. Each edge is associated with a cost function equal to the log of the corresponding posterior probability.

Then we decode the next code. Note that the two former codewords give different probability distributions at this node. Therefore our new decoding is performed twice, once for each of them. The result is a full tree of depth 2 that has 4 new edges along with their cost functions. The next step includes 4 decodings of the third code, performed on each path of the tree. By continuing this process, we arrive at the subsequent codes, while accumulating the posterior probabilities of our paths. It can be seen that the resulting paths give a full biorthogonal code, so choosing the best path at this point becomes equivalent to the original termination at the biorthogonal codes.

To improve our decoding, we keep a list of L best paths instead of selecting the single best path. In a more general scheme, the threshold L can vary. In any case, we start at the repetition codes and keep doubling the number of paths until L paths are formed. Once 2L paths are constructed, we choose the L paths with the maximum cost functions. In the end, the most probable path (that is, the path with the maximum cost function) is chosen among the L paths that survive at the rightmost node.
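The path bookkeeping can be sketched as follows; the helper names and the use of log-probabilities as cost functions are our illustration, not the paper's notation:

```python
import math
import heapq

def repetition_candidates(p):
    # End node RM(s, 0): return both codewords along with their
    # log-probability cost functions; p[i] = Pr{symbol i is 0}.
    ll0 = sum(math.log(max(x, 1e-12)) for x in p)        # all-zero word
    ll1 = sum(math.log(max(1.0 - x, 1e-12)) for x in p)  # all-one word
    s = len(p)
    return [(ll0, [0] * s), (ll1, [1] * s)]

def prune(paths, L):
    # After the number of paths doubles, keep the L paths with the
    # largest accumulated cost functions.
    return heapq.nlargest(L, paths, key=lambda t: t[0])
```

For example, pruning the two candidates of a reliable all-zero observation down to a single path retains the all-zero word, since its accumulated log-probability is larger.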

Both simulation results and calculations show that this continuous regeneration of the best candidates improves our original algorithm, which selected the single best path at each node. In other words, keeping the longer paths allows us to better separate the transmitted vector from the remaining candidates. As a result, we substantially reduce the overall BER, even when compared to the expurgated subcodes. Note, however, that our list decoding increases complexity about L times. To refine this scheme further, recall that the channel quality constantly improves as we move from left to right. Therefore, we can choose a variable threshold that becomes smaller as our decoding progresses toward the rightmost nodes. In this way, we can substantially reduce the list-decoding complexity.

Simulation results. Our results are described below in Figures 4 to 9. These figures also reflect the drastic improvements obtained when the two techniques, subcodes and short decoding lists, were combined. The labeled curves show the performance of the refined version of the former recursive techniques from [5], [2], and [8]. For codes of length 256 and 512, the results are now improved by 3.5 to 5 dB.

While using the maximum list sizes depicted in each figure, simulation also showed that in most cases of incorrect decoding, the erroneous result is more probable than the transmitted vector. This fact shows that our block error rate (BLER) is very close to that of ML decoding. In turn, this gives a new (experimental) bound on the BLER of ML decoding. Also, our results substantially surpass those of other codes with similar parameters (see the current “world records” at http://www331.jpl.nasa.gov/). In Fig. 9, we summarize the results on the BLER of ML decoding for RM codes of length 256.

It is also interesting that subcodes usually achieve near-ML decoding using much smaller lists than the original RM codes. In particular, a (256, 78) subcode approaches ML decoding using only 32 intermediate paths. Note that even one of the most efficient algorithms developed in [1] uses substantially more paths for BCH codes of length 256. On the other hand, our simulation results show that codes of length 512 approach ML decoding using much bigger lists than codes of length 256. To extend the results to longer codes, we use the slightly different constructions described in the next section.

6 More general recursive constructions

Multiple splitting of RM codes. Here we wish to change the original Plotkin representation. Namely, one can apply more sophisticated partitions that directly split RM codes into four or more codes of shorter lengths. For example, by applying the Plotkin construction twice, we can split the original block into four quarters. Here one quarter is taken from the least protected code RM(m − 2, r − 2), two belong to the medium-protected code RM(m − 2, r − 1), while the last is taken from the best protected code RM(m − 2, r). Simulation performed for this construction did not improve the results presented in Figures 4 to 9.

Slightly better results were obtained at low SNR, when these four codes were combined in a different way. It can be proven that for low rates the latter construction gives an asymptotic improvement over our original Plotkin representation. This conclusion stems from the following facts. As before, the leftmost code is decoded first and is the most vulnerable in recursive decoding. Note also that its corrupted version is obtained directly in one step, by adding the third quarter and the fourth quarter of our construction. Asymptotically, such a step squares the noise power, as described above. On the other hand, we reduce the length four times in each step. Accordingly, the new recursive construction reaches the leftmost nodes in half as many steps as before. As a result, we can replace the former term in (8) by a greater term.

Despite the substantial asymptotic improvements, simulation showed that these gains start accumulating only at lengths 2048 and above.

Alternating recursions. Suppose that we use the Plotkin construction of Fig. 1 but change the left constituent code from RM(m − 1, r − 1) to RM(m − 1, r − 2). In other words, we move one more step to the left relative to the Plotkin construction, as shown in Fig. 3. As a result, the new code has better error protection. This alteration also doubles the distance of the left code and gives unequal error protection within the original code. On the other hand, we also reduce the overall code rate and the SNR per channel symbol (given the same SNR per information bit). This lower rate can eliminate the advantages of the better protection. To increase the code rate, we then add extra symbols in the next splitting step. For example, we split RM(m − 1, r) into the codes RM(m − 2, r − 1) and RM(m − 2, r) by taking one more step to the right, as presented in Fig. 3.

[Fig. 3: alternating decomposition with nodes (m, r); (m − 1, r − 2), (m − 1, r); and (m − 2, r − 3), (m − 2, r − 1), (m − 2, r).]

Figure 3a Alternating decompositions    Figure 3b Underlying structure

Note that in the general alternating construction, we can no longer use RM codes throughout. These only form the first “building blocks.” By contrast, the various nodes only label the edges and paths that correspond to our new codes. The first simulation results obtained in this direction used a combination of two different codes in place of the original code. Even this simple combination improved the original code at low SNR. More sophisticated constructions similar to the one in Fig. 3a also outperform RM codes. However, the alternating constructions that we have considered to date have not yet improved on the performance of the subcodes presented in Figures 4 to 9.

Footnotes

  1. We can also increase the number of paths on selected intermediate nodes.

References

  1. Y.S. Han, C.R.P. Hartmann, and C.K. Mohan, “Efficient heuristic search algorithms for soft-decision decoding of linear block codes,” IEEE Trans. Inform. Theory, vol. 44,  pp.  3023-3038, 1998.
  2. G.A. Kabatyanskii, “On decoding of Reed-Muller codes in semicontinuous channels,” Proc. 2 Int. Workshop “Algebr. and Combin. Coding Theory”, Leningrad, USSR, 1990, pp. 87-91 (in Russian).
  3. R.E. Krichevskiy, “On the Number of Reed-Muller Code Correctable Errors,” Dokl. Soviet Acad. Sciences, vol. 191, pp. 541-547, 1970.
  4. S.N. Litsyn, “Fast algorithms for decoding orthogonal and related codes,” Lecture Notes in Comp. Science, no. 539, pp. 39-47, 1991.
  5. S.N. Litsyn, “On decoding complexity of low-rate Reed-Muller codes,” Proc. 9 All-Union Conf. on Coding Theory and Info. Transmission, Part 1, Odessa, USSR, pp. 202-204, 1988 (in Russian).
  6. I.S. Reed, “A class of multiple-error-correcting codes and the decoding scheme,” IRE Trans. Inform. Theory, vol. PGIT-4, pp. 38-49, 1954.
  7. V. Sidel’nikov and A. Pershakov, “Decoding of Reed-Muller codes with a large number of errors,” Probl. Info. Transmission, vol. 28, no. 3, pp. 80-94, 1992 (in Russian).
  8. G. Schnabl and M. Bossert, “Soft-decision decoding of Reed-Muller codes as generalized multiple concatenated codes,” IEEE Trans. Info. Theory, vol. 41, pp. 304-308, 1995.
  9. I. Dumer, “Recursive decoding of Reed-Muller codes,” Proc. 37 Annual Allerton Conf. on Commun., Control, and Comp., Monticello, IL, Sept. 22-24, 1999, pp. 61-69.