# Optimal Thresholds for GMD Decoding with $\frac{\ell+1}{\ell}$–extended Bounded Distance Decoders

Christian Senger, Vladimir R. Sidorenko, Martin Bossert
Institute of Telecommunications and Applied Information Theory, Ulm University, Ulm, Germany

Victor V. Zyablov
Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow, Russia
zyablov@iitp.ru

This work has been supported by DFG, Germany, under grants BO 867/17 and BO 867/21-1. Vladimir Sidorenko is on leave from IITP, Russian Academy of Sciences, Moscow, Russia.
###### Abstract

We investigate threshold–based multi–trial decoding of concatenated codes with an inner Maximum–Likelihood decoder and an outer error/erasure $\frac{\ell+1}{\ell}$–extended Bounded Distance decoder, i.e. a decoder which corrects $\varepsilon$ errors and $\tau$ erasures if $\frac{\ell+1}{\ell}\varepsilon + \tau \le d-1$, where $d$ is the minimum distance of the outer code and $\ell \in \mathbb{N}$. This is a generalization of Forney's GMD decoding, which was considered only for $\ell = 1$, i.e. outer Bounded Minimum Distance decoding. One important example of $\frac{\ell+1}{\ell}$–extended Bounded Distance decoders is decoding of $\ell$–Interleaved Reed–Solomon codes. Our main contribution is a threshold location formula, which allows us to optimally erase unreliable inner decoding results, for a given number $z$ of decoding trials and parameter $\ell$. Here, the term optimal means that the residual codeword error probability of the concatenated code is minimized. We give an estimate of this probability for any number of decoding trials.

## I Introduction

One of Forney’s seminal contributions to algebraic coding was the invention of Generalized Minimum Distance (GMD) decoding [1, 2]. It provides a means to exploit soft information from the channel using a hard–decision algebraic decoder by multi–trial error/erasure decoding with a varying number of erased unreliable input symbols. Most intriguing about the GMD scheme is that it performs as well as Maximum Likelihood (ML) decoding if the channel is good. This gives rise to the frequent application of GMD decoding to concatenated codes. There, the inner code is responsible for correcting a considerable amount of transmission channel errors. Thus, the input symbols for the outer decoder can be viewed as being transmitted over a super channel, which is composed of the transmission channel and the inner decoder. This super channel is always good if the parameters of the inner code are chosen appropriately.

Any decoder’s performance can be measured by its guaranteed decoding radius and its residual codeword error probability, the latter being a function of the channel. The fundamental research problem of threshold–based multi–trial decoding is to find, for a given number $z$ of decoding trials, the set of thresholds $\mathcal{T}^{(z)} = \{T_1^{(z)}, \dots, T_z^{(z)}\}$ which optimizes the respective performance measure. Generally, the output of multi–trial decoding is a result list. In this paper, we denote the case that the transmitted codeword is among the elements of the result list as a decoding success. Both the decoding radius and the residual codeword error probability are to be understood in this context.

Maximization of the guaranteed decoding radius of GMD decoding for concatenated codes was considered by Blokh and Zyablov [3]. Using Linear Programming, they obtained optimal threshold sets when both inner and outer code are BMD–decoded. In previous work [4, 5], we generalized their result to the case where the outer code is $\frac{\ell+1}{\ell}$–extended Bounded Distance (BD)–decoded for the full (real) range of the parameter $\ell$.

Blokh and Zyablov also considered the arguably more practical performance measure, i.e. minimization of the residual codeword error rate. For concatenated codes with inner ML and outer BMD decoding, they derived optimal threshold sets using results on the Binary Symmetric Channel (BSC) error exponent from Gallager [6, 7] and Forney [8]. In this paper, we tackle the generalization to the case of an outer $\frac{\ell+1}{\ell}$–extended Bounded Distance (BD) decoder, building on our previous results [9]. Here, $\ell \in \mathbb{N}$.

The paper is organized as follows. In Section II, we describe the structure and threshold–based multi–trial decoding of concatenated codes; in Section III, we derive necessary and sufficient conditions for an optimal threshold set. We do this on a high level, using the error and erasure probabilities for each threshold pair $(T_k^{(z)}, T_{k+1}^{(z)})$, $k = 1, \dots, z-1$, as parameters. In Section IV, we recall Forney’s generalization of Gallager’s error exponent of the BSC to the error/erasure decoding case. Simple approximations of the error and erasure probabilities are derived in Section V. This allows us to analytically calculate the set of optimal thresholds in Section VI, together with results on the residual codeword error probability. In Section VII, we wrap up the paper and draw conclusions for further research.

## II GMD Decoding of Concatenated Codes

A concatenated code consists of two constituent codes: an inner code $\mathcal{B}$ of length $n_i$, dimension $k_i$, and rate $R_i$ over $\mathbb{F}_2$, and an outer code $\mathcal{A}$ of length $n_o$ and minimum distance $d_o$ over $\mathbb{F}_{2^{k_i}}$. We denote $\mathcal{B}$ as the inner code, $\mathcal{A}$ as the outer code and $\mathcal{C}$ as the concatenated code. Since $\mathcal{B}$ is binary, so is $\mathcal{C}$. W.l.o.g. we restrict ourselves to this most practical case.

An information vector is first encoded into an outer codeword $a = (a_0, \dots, a_{n_o-1}) \in \mathcal{A}$. The $2^{k_i}$–ary symbols $a_j$, $j = 0, \dots, n_o-1$, of $a$ are then converted into binary vectors of length $k_i$ and encoded into inner codewords $b_j \in \mathcal{B}$. The binary matrix consisting of the columns $b_j$ is then transmitted over a BSC with crossover probability $e$.

At the receiver, erroneous vectors $r_j$ are received. They are fed into an ML decoder for $\mathcal{B}$. The resulting codeword estimates $\hat{b}_j$ are mapped to their information vectors and converted into symbols $r_{o,j}$. The vector $r_o = (r_{o,0}, \dots, r_{o,n_o-1})$ and the number $z$ of thresholds are the input for the GMD decoder of $\mathcal{A}$.

Inside the GMD decoder, $r_o$ is processed in the following way. First, for every symbol $r_{o,j}$, the reliability value $v_j$,

$$v_j := \frac{1}{n_i}\ln\frac{\Pr\bigl(r_j \mid \hat{b}_j\bigr)}{\sum_{b \in \mathcal{B} \setminus \{\hat{b}_j\}} \Pr\bigl(r_j \mid b\bigr)}, \qquad (1)$$

is calculated. Then, the threshold set $\mathcal{T}^{(z)} = \{T_1^{(z)}, \dots, T_z^{(z)}\}$, $T_1^{(z)} \le \cdots \le T_z^{(z)}$, is applied as

$$\hat{r}_{o,k,j} := \begin{cases} r_{o,j}, & \text{if } v_j \ge T_k^{(z)},\\ \vartimes, & \text{if } v_j < T_k^{(z)}, \end{cases} \qquad (2)$$

where $\vartimes$ denotes the erasure symbol,

resulting in an input list $\{\hat{r}_{o,1}, \dots, \hat{r}_{o,z}\}$, where $\hat{r}_{o,k} = (\hat{r}_{o,k,0}, \dots, \hat{r}_{o,k,n_o-1})$. Thus, a decoding result of the inner decoder is discarded in decoding trial $k$, $k = 1, \dots, z$, if its reliability value falls below the threshold $T_k^{(z)}$. Finally, an error/erasure BD decoder (in our case $\frac{\ell+1}{\ell}$–extended) is applied to every element of the input list, resulting in a result list $\mathcal{L}$. Whenever the transmitted codeword is contained in $\mathcal{L}$, we have a decoding success.
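The erasing rule (2) and the per-trial input lists can be sketched as follows; the symbols, reliability values, and thresholds are hypothetical example values chosen only for illustration, not taken from the paper.

```python
# Sketch of the threshold-erasing step (2): in trial k, an inner decoding
# result is kept if its reliability reaches T_k, otherwise it is erased.
# All values below are made-up examples.
ERASURE = None  # stands in for the erasure symbol

def erase_below(symbols, reliabilities, threshold):
    """One decoding trial: keep r_{o,j} if v_j >= T_k, else erase it."""
    return [s if v >= threshold else ERASURE
            for s, v in zip(symbols, reliabilities)]

def gmd_input_lists(symbols, reliabilities, thresholds):
    """One erased word per trial k = 1, ..., z (thresholds ascending)."""
    return [erase_below(symbols, reliabilities, t) for t in sorted(thresholds)]

symbols = [3, 1, 4, 1, 5]             # hypothetical outer symbols r_o
reliab  = [0.9, 0.2, 0.6, 0.05, 0.8]  # hypothetical reliability values v_j
trials  = gmd_input_lists(symbols, reliab, [0.1, 0.5, 0.7])
# Because the thresholds are ordered, erasures are nested across trials:
# every symbol erased in trial k is also erased in trial k+1.
```

Each element of `trials` would then be handed to the outer error/erasure BD decoder.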

## III Necessary and Sufficient Conditions

This section generalizes our result from [9], which was obtained for the simple case where the inner code is BPSK modulation and the outer BD decoder has parameter $\ell = 1$, i.e. it is a BMD decoder.

Considering (2), it is clear that two cases can occur while advancing from threshold $T_k^{(z)}$ to $T_{k+1}^{(z)}$. First, it is possible that a correct symbol from the super channel is erased for $T_{k+1}^{(z)}$ while it was not for $T_k^{(z)}$. Second, it is possible that a wrong symbol is erased for $T_{k+1}^{(z)}$ while it was not for $T_k^{(z)}$. The probabilities for these two cases are defined by

$$\bar{p}_k := \Pr\bigl(\text{correct symbol erased for } T_{k+1}^{(z)} \text{ but not for } T_k^{(z)}\bigr),$$
$$\underline{p}_k := \Pr\bigl(\text{erroneous symbol erased for } T_{k+1}^{(z)} \text{ but not for } T_k^{(z)}\bigr),$$

and

$$p_r := \Pr(\text{correct symbol never erased}),$$
$$p_c := \Pr(\text{correct or erroneous symbol always erased}),$$
$$p_l := \Pr(\text{erroneous symbol never erased})$$

for the three border cases. We shall find useful approximations for these probabilities in Section V. Note that $p_r + p_c + p_l + \sum_{k=1}^{z-1}(\bar{p}_k + \underline{p}_k) = 1$.

With each of the probabilities we associate the number of symbols from $r_o$ falling into the respective case, i.e. $t_r$, $t_c$, $t_l$, $\bar{t}_k$, and $\underline{t}_k$. Obviously, $t_r + t_c + t_l + \sum_{k=1}^{z-1}(\bar{t}_k + \underline{t}_k) = n_o$.

Let $\varepsilon^{(k)}$ and $\tau^{(k)}$ be the numbers of erroneous and erased symbols, respectively, in $\hat{r}_{o,k}$. An $\frac{\ell+1}{\ell}$–extended BD decoder for $\mathcal{A}$ succeeds in decoding $\hat{r}_{o,k}$ as long as

$$\frac{\ell+1}{\ell}\,\varepsilon^{(k)} + \tau^{(k)} \le d_o - 1. \qquad (3)$$

This inequality can be expressed in terms of $t_l$, $t_c$, $\bar{t}_\nu$, and $\underline{t}_\nu$ as

$$\frac{\ell+1}{\ell}\Bigl(t_l + \sum_{\nu=k}^{z-1}\underline{t}_\nu\Bigr) + t_c + \sum_{\nu=1}^{k-1}\bigl(\bar{t}_\nu + \underline{t}_\nu\bigr) \le d_o - 1, \qquad (4)$$

since it follows from (2) and the orderliness of the threshold set that a symbol is erased by $T_{k+1}^{(z)}$ if it is erased by $T_k^{(z)}$. Inequality (4) is then obtained by simply counting all symbols which are errors and erasures, respectively, for decoding trial $k$ and replacing $\varepsilon^{(k)}$ and $\tau^{(k)}$ in (3). Let

$$C := \Bigl[\,\forall k = 1, \dots, z:\ \frac{\ell+1}{\ell}\Bigl(t_l + \sum_{\nu=k}^{z-1}\underline{t}_\nu\Bigr) + t_c + \sum_{\nu=1}^{k-1}\bigl(\bar{t}_\nu + \underline{t}_\nu\bigr) > d_o - 1\,\Bigr]$$

be the condition that decoding fails in all $z$ trials. Then, by

$$P_e = \sum_{C} \binom{n_o}{t_l,\, t_c,\, t_r,\, \underline{t}_1,\, \bar{t}_1,\, \dots,\, \underline{t}_{z-1},\, \bar{t}_{z-1}}\, p_l^{t_l}\, p_c^{t_c}\, p_r^{t_r} \prod_{k=1}^{z-1} \underline{p}_k^{\underline{t}_k}\, \bar{p}_k^{\bar{t}_k}$$

we obtain an exact formula for the residual codeword error probability of the GMD decoder with $z$ thresholds. We can replace the condition $C$ by

$$C_2 := \Bigl[\,\forall k = 1, \dots, z:\ \frac{\ell+1}{\ell}\Bigl(t_l + \sum_{\nu=k}^{z-1}\underline{t}_\nu\Bigr) + t_c + \sum_{\nu=1}^{k-1}\bigl(\bar{t}_\nu + \underline{t}_\nu\bigr) = d_o - 1\,\Bigr]$$

to obtain a good approximation of $P_e$ for a good super channel. Condition $C_2$ can be compressed to

$$C_3 := \Bigl[\,\frac{\ell+1}{\ell}\, t_l + t_c + (\ell+1)\sum_{k=1}^{z-1}\bar{t}_k = d_o - 1\,\Bigr],$$

if we consider $\underline{t}_k = \ell\,\bar{t}_k$, $k = 1, \dots, z-1$. The latter set of equalities can be seen by subtracting two subsequent equations of $C_2$ from each other. Since the super channel can be assumed to be good, we can further approximate $P_e$ by

$$P_e \approx \max_{C_3}\Bigl\{p_l^{t_l}\, p_c^{t_c} \prod_{k=1}^{z-1}\bigl(\underline{p}_k^{\ell}\,\bar{p}_k\bigr)^{\bar{t}_k}\Bigr\}. \qquad (5)$$

The previous observations allow us to prove the following theorem.

###### Theorem 1

The following conditions are necessary and sufficient for an optimal threshold set $\mathcal{T}^{(z)}$, which minimizes the residual codeword error rate $P_e$.

$$p_l^{\frac{\ell}{\ell+1}} = p_c, \qquad (6)$$
$$p_c = \bigl(\underline{p}_1^{\ell}\,\bar{p}_1\bigr)^{\frac{1}{\ell+1}}, \qquad (7)$$

and

$$\forall k = 1, \dots, z-2:\quad \underline{p}_k^{\ell}\,\bar{p}_k = \underline{p}_{k+1}^{\ell}\,\bar{p}_{k+1}. \qquad (8)$$
###### Proof.

Equivalent to the maximization in (5), we can also express the approximation for $P_e$ in logarithmic form, i.e.

$$\ln(P_e) \approx \max_{C_3}\Bigl\{t_l \ln(p_l) + t_c \ln(p_c) + \sum_{k=1}^{z-1}\bar{t}_k \ln\bigl(\underline{p}_k^{\ell}\,\bar{p}_k\bigr)\Bigr\}.$$

Then, the maximization term is a linear function of the $t_l$, $t_c$, and $\bar{t}_k$. Thus, its maximum is attained at some boundary point fulfilling condition $C_3$, i.e.

$$\ln(P_e) \approx \max\Bigl\{\frac{\ell(d_o-1)}{\ell+1}\ln(p_l),\ (d_o-1)\ln(p_c),\ \frac{d_o-1}{\ell+1}\ln\bigl(\underline{p}_1^{\ell}\bar{p}_1\bigr),\ \dots,\ \frac{d_o-1}{\ell+1}\ln\bigl(\underline{p}_{z-1}^{\ell}\bar{p}_{z-1}\bigr)\Bigr\}.$$

In non–logarithmic form:

$$P_e \approx \max\Bigl\{p_l^{\frac{\ell(d_o-1)}{\ell+1}},\ p_c^{d_o-1},\ \bigl(\underline{p}_1^{\ell}\bar{p}_1\bigr)^{\frac{d_o-1}{\ell+1}},\ \dots,\ \bigl(\underline{p}_{z-1}^{\ell}\bar{p}_{z-1}\bigr)^{\frac{d_o-1}{\ell+1}}\Bigr\}. \qquad (9)$$

Let $\mathcal{T}^{(z)}$ fulfill the statement of the theorem and let $\tilde{\mathcal{T}}^{(z)}$ be a set of thresholds in which at least one threshold differs from $\mathcal{T}^{(z)}$. Assume that $\tilde{\mathcal{T}}^{(z)}$ is optimal. The only possible way for $\tilde{\mathcal{T}}^{(z)}$ to achieve a smaller $P_e$ would be to decrease all terms in (9) simultaneously. This is impossible, since decreasing any of the probabilities would increase at least one of the others. This proves that $\mathcal{T}^{(z)}$ is both unique and optimal. ∎
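The equalization argument can be illustrated numerically; the probabilities below are arbitrary made-up values, chosen only so that conditions (6), (7), and (8) hold, which forces all terms of the maximization (9) to coincide.

```python
# Numeric check of Theorem 1 with hypothetical probabilities: conditions
# (6)-(8) force all boundary terms of (9) to be equal.
l, d_o = 2, 9                       # example extension parameter and outer distance
p_under = [1e-4, 1e-4]              # erroneous-symbol probabilities (z - 1 = 2 pairs)
p_bar   = [1e-3, 1e-3]              # correct-symbol probabilities; (8) holds trivially
prod = p_under[0]**l * p_bar[0]
p_c  = prod ** (1 / (l + 1))        # enforce condition (7)
p_l  = p_c ** ((l + 1) / l)         # enforce condition (6)

terms  = [p_l ** (l * (d_o - 1) / (l + 1)), p_c ** (d_o - 1)]
terms += [(pu**l * pb) ** ((d_o - 1) / (l + 1)) for pu, pb in zip(p_under, p_bar)]
# All four terms of the max in (9) agree, so no term can be decreased
# without increasing at least one other.
```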

## IV Forney’s Generalization of Gallager’s Error Exponent

Let us for a while consider one specific decoding trial $k$ and the corresponding threshold $T_k^{(z)}$. Reliability values are calculated according to (1) and thresholds are applied as in (2). Hence, by erasing, the inner ML decoder becomes a decoder with erasing option. Its decoding criterion is defined by

$$\frac{\Pr\bigl(r_j \mid \hat{b}_j\bigr)}{\sum_{b \in \mathcal{B} \setminus \{\hat{b}_j\}} \Pr\bigl(r_j \mid b\bigr)} \ge e^{n_i T_k^{(z)}}; \qquad (10)$$

if the criterion is not fulfilled, the decoding result is erased.

It was shown by Forney that this criterion is optimal in the sense that no other criterion can decrease both the error and the error–or–erasure probability [8]. In the same publication, it was shown that both probabilities can be expressed in terms of Gallager’s error exponent for the BSC [6, 7]. They are given by

$$p_E := \exp\bigl(-(E_0(R_i) + sT)\, n_i\bigr), \qquad (11)$$
$$p_{E,X} := \exp\bigl(-(E_0(R_i) - sT)\, n_i\bigr), \qquad (12)$$

where $E_0(R_i)$ is Gallager’s error exponent and $s$ is the corresponding optimization parameter.
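For concreteness, (11) and (12) can be evaluated directly; `E0`, `s`, `T`, and `n_i` below are made-up example values, not parameters from the paper.

```python
import math

# Error probability (11) and erasure-or-error probability (12) for one
# threshold T; the parameter values are hypothetical examples.
def p_E(E0, s, T, n_i):
    return math.exp(-(E0 + s * T) * n_i)

def p_EX(E0, s, T, n_i):
    return math.exp(-(E0 - s * T) * n_i)

E0, s, n_i = 0.1, 0.5, 50
# Raising T lowers the error probability and raises the erasure-or-error
# probability; their product exp(-2 E0 n_i) does not depend on T.
assert p_E(E0, s, 0.1, n_i) < p_E(E0, s, 0.05, n_i)
assert p_EX(E0, s, 0.05, n_i) < p_EX(E0, s, 0.1, n_i)
assert abs(p_E(E0, s, 0.1, n_i) * p_EX(E0, s, 0.1, n_i)
           - math.exp(-2 * E0 * n_i)) < 1e-12
```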

## V Approximated Probabilities

In this section, we shall find simple approximations for the probabilities $p_c$, $p_l$, $\bar{p}_k$, and $\underline{p}_k$, which were defined in Section III. The approximations are required to obtain an analytic threshold location formula fulfilling the necessary and sufficient conditions of Theorem 1. Let us start with the following observation. With Gallager’s exponent, the error probability of an ML decoder is $\exp(-E_0(R_i)\, n_i)$. For such decoding, there exists a Hamming distance radius $\Delta$ such that decoding of a received vector within distance $\Delta$ almost always succeeds and decoding of a received vector beyond distance $\Delta$ almost always yields an erroneous result. This radius can be thought of as an approximation of the borders of the Voronoi cells of $\mathcal{B}$. For decoding with erasure option as in (10) and the error exponents from (11) and (12), this gives the following approximations of $p_E$ and $p_{E,X}$ in terms of radii $\Delta_E$ and $\Delta_{E,X}$. Recall that $e$ is the crossover probability of the BSC.

$$p_E \approx \sum_{\nu=\Delta_E}^{n_i} \binom{n_i}{\nu} e^{\nu} (1-e)^{n_i-\nu}, \qquad p_{E,X} \approx \sum_{\nu=\Delta_{E,X}}^{n_i} \binom{n_i}{\nu} e^{\nu} (1-e)^{n_i-\nu}.$$

Since $p_{E,X} \ge p_E$, we also have $\Delta_{E,X} \le \Delta_E$. Of course, the probabilities and radii vary for different thresholds. Hence, we append the threshold index as a parameter, i.e. we denote the probabilities and radii for $T_k^{(z)}$ by $p_E(k)$, $p_{E,X}(k)$ and $\Delta_E(k)$, $\Delta_{E,X}(k)$, respectively. Note that from $T_k^{(z)} \le T_{k+1}^{(z)}$ follows $\Delta_E(k) \le \Delta_E(k+1)$ and $\Delta_{E,X}(k) \ge \Delta_{E,X}(k+1)$.
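The binomial tail sums above are easy to evaluate; the block length, crossover probability, and radii below are hypothetical example values chosen only to illustrate the relation between the two radii.

```python
import math

# Binomial tail of the BSC: probability that the error pattern has
# weight between lo and hi (inclusive). Example parameters only.
def binom_tail(n, e, lo, hi=None):
    hi = n if hi is None else hi
    return sum(math.comb(n, v) * e**v * (1 - e)**(n - v)
               for v in range(lo, hi + 1))

n_i, e = 31, 0.02
delta_E, delta_EX = 6, 4            # Delta_EX <= Delta_E, as argued above
pE  = binom_tail(n_i, e, delta_E)   # approximates p_E
pEX = binom_tail(n_i, e, delta_EX)  # approximates p_EX
# The band between the two radii is the difference of the tails, which is
# the structure behind approximations such as (13):
band = binom_tail(n_i, e, delta_EX, delta_E - 1)
```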

Let us consider the probability $p_c$ for a symbol which is erased for each threshold from $\mathcal{T}^{(z)}$. Its Hamming distance to the transmitted codeword must be at least $\Delta_{E,X}(1)$ and at most $\Delta_E(1) - 1$. Otherwise, there would be a threshold for which the symbol would not be erased. Consequently, we have

$$p_c \approx \sum_{\nu=\Delta_{E,X}(1)}^{\Delta_E(1)-1} \binom{n_i}{\nu} e^{\nu} (1-e)^{n_i-\nu} = p_{E,X}(1) - p_E(1). \qquad (13)$$

Now, let us approximate the probability $p_l$ for a wrong symbol which is never erased. Its Hamming distance to the transmitted codeword must be at least $\Delta_E(z)$. We obtain

$$p_l \approx \sum_{\nu=\Delta_E(z)}^{n_i} \binom{n_i}{\nu} e^{\nu} (1-e)^{n_i-\nu} = p_E(z). \qquad (14)$$

The probabilities $\bar{p}_k$ and $\underline{p}_k$, $k = 1, \dots, z-1$, can be approximated as follows. Symbols counting towards $\bar{p}_k$ must lie between $\Delta_{E,X}(k+1)$ and $\Delta_{E,X}(k) - 1$, as can be seen in Figure 1. For $\underline{p}_k$ we observe that the symbols must lie between $\Delta_E(k)$ and $\Delta_E(k+1) - 1$. We obtain

$$\bar{p}_k \approx \sum_{\nu=\Delta_{E,X}(k+1)}^{\Delta_{E,X}(k)-1} \binom{n_i}{\nu} e^{\nu} (1-e)^{n_i-\nu} = p_{E,X}(k+1) - p_{E,X}(k) - \bigl(p_E(k) - p_E(k+1)\bigr), \qquad (15)$$
$$\underline{p}_k \approx \sum_{\nu=\Delta_E(k)}^{\Delta_E(k+1)-1} \binom{n_i}{\nu} e^{\nu} (1-e)^{n_i-\nu} = p_E(k) - p_E(k+1). \qquad (16)$$

With equations (13), (14), (15), and (16) we have expressed the probabilities $p_c$, $p_l$, $\bar{p}_k$, and $\underline{p}_k$ in terms of the probabilities $p_E(k)$ and $p_{E,X}(k)$, for which we can use the error and erasure exponents from (11) and (12), respectively. After some rather technical simplifications, this yields the following simple expressions.

###### Lemma 1

The probabilities $p_c$, $p_l$, $\bar{p}_k$, and $\underline{p}_k$ can be approximated by

$$p_c \approx \exp\bigl(-(E_0(R_i) - sT_1^{(z)})\, n_i\bigr), \qquad p_l \approx \exp\bigl(-(E_0(R_i) + sT_z^{(z)})\, n_i\bigr),$$
$$\bar{p}_k \approx \exp\bigl(-(E_0(R_i) - sT_{k+1}^{(z)})\, n_i\bigr), \qquad \underline{p}_k \approx \exp\bigl(-(E_0(R_i) + sT_k^{(z)})\, n_i\bigr),$$

for $k = 1, \dots, z-1$.

## VI Optimal Thresholds and Residual Codeword Error Probability

With the results from Sections III and V we can now derive an analytic formula for the optimal thresholds $T_k^{(z)}$, $k = 1, \dots, z$. Consider Theorem 1. It basically states a system of $z$ equations, the optimal threshold set $\mathcal{T}^{(z)}$ with elements $T_k^{(z)}$ being its solution. Let us express the equations using the approximated probabilities from Lemma 1. For (6), this gives

$$p_l^{\frac{\ell}{\ell+1}} = p_c \iff \frac{\ell}{\ell+1}\bigl(E_0(R_i) + sT_z^{(z)}\bigr) = E_0(R_i) - sT_1^{(z)} \iff \frac{E_0(R_i)}{(\ell+1)s} = \frac{\ell}{\ell+1}\, T_z^{(z)} + T_1^{(z)}. \qquad (17)$$

We express (7) and (8) in the same way and obtain

$$p_c = \bigl(\underline{p}_1^{\ell}\,\bar{p}_1\bigr)^{\frac{1}{\ell+1}} \iff \frac{E_0(R_i)(1-\ell^2)}{\ell s} = \frac{\ell^2+\ell+1}{\ell}\, T_1^{(z)} - T_2^{(z)} \qquad (18)$$

and, for $k = 1, \dots, z-2$,

$$\underline{p}_k^{\ell}\,\bar{p}_k = \underline{p}_{k+1}^{\ell}\,\bar{p}_{k+1} \iff 0 = \ell\, T_k^{(z)} - (\ell+1)\, T_{k+1}^{(z)} + T_{k+2}^{(z)}. \qquad (19)$$
###### Theorem 2

The optimal threshold set for GMD decoding of a concatenated code, with inner ML and outer $\frac{\ell+1}{\ell}$–extended BD decoding, $\ell > 1$, is given by

$$T_k^{(z)} := \frac{E_0(R_i)\bigl(2\ell^k(\ell^2+\ell-1) - \ell^z(\ell^2-1) - \ell^3 - \ell^2\bigr)}{s\bigl(\ell^z(\ell^2+1) + \ell^3 - \ell^2 - 2\ell\bigr)}, \qquad (20)$$

where $E_0(R_i)$ is Gallager’s error exponent for the BSC and $s$ is the corresponding optimization parameter.

###### Proof.

The statement is given by the unique solution of the system (17), (18), and (19) for $\ell > 1$. ∎
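The threshold formula can be cross-checked numerically against the system it solves. The code below assumes the closed form $T_k^{(z)} = E_0(R_i)\bigl(2\ell^k(\ell^2+\ell-1) - \ell^z(\ell^2-1) - \ell^3 - \ell^2\bigr)/\bigl(s(\ell^z(\ell^2+1)+\ell^3-\ell^2-2\ell)\bigr)$ as the reading of (20), and uses made-up values $E_0 = s = 1$.

```python
# Thresholds (20) (as reconstructed here) for hypothetical E0 = s = 1;
# we verify the recurrence (19) and the boundary conditions (17), (18).
def threshold(k, z, l, E0=1.0, s=1.0):
    num = 2 * l**k * (l**2 + l - 1) - l**z * (l**2 - 1) - l**3 - l**2
    den = l**z * (l**2 + 1) + l**3 - l**2 - 2 * l
    return E0 * num / (s * den)

l, z = 2, 3
T = [threshold(k, z, l) for k in range(1, z + 1)]
assert T == sorted(T)                                 # non-decreasing in k
assert abs(l * T[0] - (l + 1) * T[1] + T[2]) < 1e-12  # recurrence (19)
assert abs(1 / (l + 1) - (l / (l + 1)) * T[-1] - T[0]) < 1e-12          # (17)
assert abs((1 - l**2) / l - ((l**2 + l + 1) / l) * T[0] + T[1]) < 1e-12 # (18)
```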

###### Corollary 1

For outer BMD decoding, i.e. $\ell = 1$, the optimal threshold set is given by

$$T_k^{(z)} := \frac{E_0(R_i)\,(2k-1)}{s\,(2z+1)}.$$
###### Proof.

The statement is given by the unique solution of the system (17), (18), and (19) for $\ell = 1$. ∎

The corollary coincides with the results of Blokh and Zyablov [3]. Thus, we obtain their result as a special case of our main result, i.e. Theorem 2. Note that both $E_0(R_i)$ and $s$, and thereby also the $T_k^{(z)}$, are functions of the BSC’s crossover probability $e$. Also note that in both cases $T_k^{(z)}$ is non–decreasing in $k$.
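The coincidence can also be checked numerically. Assuming the closed form $T_k^{(z)} = E_0(R_i)\bigl(2\ell^k(\ell^2+\ell-1) - \ell^z(\ell^2-1) - \ell^3 - \ell^2\bigr)/\bigl(s(\ell^z(\ell^2+1)+\ell^3-\ell^2-2\ell)\bigr)$ as the reading of (20), evaluating it slightly off the degenerate point $\ell = 1$ (where numerator and denominator both vanish) reproduces the BMD thresholds of Corollary 1; $E_0 = s = 1$ are made-up values.

```python
# l -> 1 limit of the thresholds (20), checked against Corollary 1.
def threshold(k, z, l, E0=1.0, s=1.0):
    num = 2 * l**k * (l**2 + l - 1) - l**z * (l**2 - 1) - l**3 - l**2
    den = l**z * (l**2 + 1) + l**3 - l**2 - 2 * l
    return E0 * num / (s * den)

z = 4
for k in range(1, z + 1):
    near_one = threshold(k, z, 1 + 1e-7)   # (20) is 0/0 at l = 1 exactly
    bmd      = (2 * k - 1) / (2 * z + 1)   # Corollary 1 with E0 = s = 1
    assert abs(near_one - bmd) < 1e-5
```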

We shall now state the residual codeword error probability which can be achieved using an optimal set of thresholds for a given number $z$ of decoding trials. To do this, we return to Theorem 1, more precisely to (9) in its proof. We saw that all terms in the maximization must be equal. Hence, we have the expression

$$P_e \approx p_l^{\frac{\ell(d_o-1)}{\ell+1}}.$$

Using Lemma 1 gives

$$P_{e,\ell}^{(z)} \approx \Bigl(\exp\bigl(-(E_0(R_i) + sT_z^{(z)})\, n_i\bigr)\Bigr)^{\frac{\ell(d_o-1)}{\ell+1}} = \exp\Bigl(-\frac{2\ell(d_o-1)(\ell^z-1)}{\ell^{z+1} + \ell^{z-1} + \ell^2 - \ell - 2}\, E_0(R_i)\, n_i\Bigr), \qquad (21)$$

i.e. $P_{e,\ell}^{(z)}$ is determined by the largest threshold within $\mathcal{T}^{(z)}$.

If we are restricted to one single threshold, $P_{e,\ell}^{(z)}$ becomes

$$P_{e,\ell}^{(1)} \approx \Bigl(\exp\bigl(-(E_0(R_i) + sT_1^{(1)})\, n_i\bigr)\Bigr)^{\frac{\ell(d_o-1)}{\ell+1}} = \exp\Bigl(-\frac{2\ell(d_o-1)}{2\ell+1}\, E_0(R_i)\, n_i\Bigr). \qquad (22)$$

The opposite extremal case is an unlimited number of thresholds. To calculate $P_{e,\ell}^{(\infty)}$, we require the largest possible threshold, i.e. $T_\infty^{(\infty)} := \lim_{z\to\infty} T_z^{(z)}$. L’Hospital’s rule for (20) yields

$$T_\infty^{(\infty)} = \frac{E_0(R_i)\,(\ell^2 + 2\ell - 1)}{s\,(\ell^2+1)}.$$

In the same manner as before, we obtain

$$P_{e,\ell}^{(\infty)} \approx \Bigl(\exp\bigl(-(E_0(R_i) + sT_\infty^{(\infty)})\, n_i\bigr)\Bigr)^{\frac{\ell(d_o-1)}{\ell+1}} = \exp\Bigl(-\frac{2\ell(d_o-1)}{\ell + \frac{1}{\ell}}\, E_0(R_i)\, n_i\Bigr). \qquad (23)$$
###### Theorem 3

For GMD decoding of a concatenated code with inner ML and outer $\frac{\ell+1}{\ell}$–extended BD decoding, $\ell > 1$, and a threshold set from Theorem 2, the achievable residual codeword error rate is in the range

$$P_{e,\ell}^{(\infty)} \le P_{e,\ell}^{(z)} \le P_{e,\ell}^{(1)}, \qquad (24)$$

where $P_{e,\ell}^{(\infty)}$ is given by (23) and $P_{e,\ell}^{(1)}$ is given by (22).

For small BSC crossover probability $e$, the probabilities in (24) are almost equal. Moreover, $P_{e,\ell}^{(1)}$ approaches the ML error probability as $e$ goes to zero. By (24), this happens even faster for larger $z$. In fact, our experiments show that the ML error probability is already achieved for moderate channel conditions, especially for larger $\ell$.
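The exponent coefficients, i.e. the factor multiplying $E_0(R_i)\, n_i$ in (21), (22), and (23), can be compared directly; $\ell = 2$ and $d_o = 9$ below are arbitrary example values, and the coefficient expressions are read off the right-hand sides of those equations.

```python
# Exponent coefficients of (21)-(23): z = 1 recovers (22), large z
# approaches (23), and the coefficient grows with z, which gives (24).
def coeff_z(z, l, d_o):                 # from (21), for l > 1
    return 2*l*(d_o - 1)*(l**z - 1) / (l**(z+1) + l**(z-1) + l**2 - l - 2)

def coeff_one(l, d_o):                  # from (22)
    return 2*l*(d_o - 1) / (2*l + 1)

def coeff_inf(l, d_o):                  # from (23)
    return 2*l*(d_o - 1) / (l + 1/l)

l, d_o = 2, 9
assert abs(coeff_z(1, l, d_o) - coeff_one(l, d_o)) < 1e-12
assert abs(coeff_z(50, l, d_o) - coeff_inf(l, d_o)) < 1e-9
cs = [coeff_z(z, l, d_o) for z in range(1, 10)]
assert cs == sorted(cs)   # larger exponent, smaller P_e: the ordering (24)
# Extended BD (l = 2) beats BMD for the same z; the BMD coefficient
# 2z(d_o - 1)/(2z + 1) follows from the thresholds of Corollary 1.
assert coeff_z(3, l, d_o) > 2*3*(d_o - 1) / (2*3 + 1)
```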

Figure 2 shows exemplary residual error probability curves for a concatenated code with $\frac{\ell+1}{\ell}$–extended outer BD decoding and a varying number of optimal thresholds. We observe that $\frac{\ell+1}{\ell}$–extended BD decoding always beats BMD decoding. This could have been expected, since it can be shown that

$$P_{e,1}^{(z)} := \exp\Bigl(-\frac{2z(d_o-1)}{2z+1}\, E_0(R_i)\, n_i\Bigr) \ge P_{e,\ell}^{(z)}.$$

## VII Conclusions

We investigated threshold–based multi–trial decoding of concatenated codes with inner ML and outer $\frac{\ell+1}{\ell}$–extended BD decoding. For any integer number of decoding trials, i.e. thresholds, we gave an analytic formula for the locations of the thresholds which are optimal in the sense that the residual codeword error probability is minimized. We showed that for an arbitrary number of thresholds, outer $\frac{\ell+1}{\ell}$–extended BD decoding outperforms outer BMD decoding, and we gave a range of achievable error probabilities. Within this range, the system designer can select the number of thresholds $z$ to meet given performance and complexity constraints.

Our results can be applied to standardized concatenated coding schemes, e.g. for the CCSDS Telemetry Channel [10]. It utilizes a set of outer Reed–Solomon (RS) codes and an inner convolutional code. With a small modification of the standard [11], the RS codes can be decoded collaboratively, i.e. they can be considered as an $\ell$–IRS code. For such codes, an efficient error/erasure BD decoding algorithm has been proposed in [12]. Its complexity is the same as for decoding the RS codes separately. Hence, the complexity of multi–trial decoding with the outer BD decoder grows only linearly in the number of decoding trials $z$.

Another application of our results is decoding of generalized concatenated codes [3, 13]. There, groups of outer RS codes can be combined into IRS codes as we already pointed out in [4].

## References

• [1] G. D. Forney, “Generalized Minimum Distance decoding,” IEEE Trans. Inform. Theory, vol. IT-12, pp. 125–131, April 1966.
• [2] G. D. Forney, Concatenated Codes. Cambridge, MA, USA: M.I.T. Press, 1966.
• [3] E. L. Blokh and V. V. Zyablov, Linear Concatenated Codes. Nauka, 1982. In Russian.
• [4] C. Senger, V. R. Sidorenko, M. Bossert, and V. V. Zyablov, “Decoding generalized concatenated codes using interleaved Reed–Solomon codes,” in Proc. IEEE Int. Symposium on Inform. Theory, (Toronto, ON, Canada), July 2008.
• [5] C. Senger, V. R. Sidorenko, M. Bossert, and V. V. Zyablov, “Multi-trial decoding of concatenated codes using fixed thresholds.” Preprint, 2008.
• [6] R. G. Gallager, “A simple derivation of the coding theorem and some applications,” IEEE Trans. Inform. Theory, vol. IT-11, pp. 3–18, Jan 1965.
• [7] R. G. Gallager, Information Theory and Reliable Communication. New York: John Wiley & Sons, 1968. ISBN 0-471-29048-3.
• [8] G. D. Forney, “Exponential error bounds for erasure, list, and decision feedback schemes,” IEEE Trans. Inform. Theory, vol. IT-14, pp. 206–220, March 1968.
• [9] C. Senger, V. R. Sidorenko, and V. V. Zyablov, “On Generalized Minimum Distance decoding thresholds for the AWGN channel,” in Proc. XII Symposium Problems of Redundancy in Information and Control Systems, (St. Petersburg, Russia), May 2009.
• [10] Consultative Committee for Space Data Systems, Telemetry Channel Coding, October 2002. Recommendation for Space Data System Standards, CCSDS 101.0-B-6, Blue Book, Issue 6.
• [11] G. Schmidt, C. Senger, and M. Bossert, “Concatenated code designs with outer interleaved Reed–Solomon codes and inner tailbiting convolutional codes,” in Proc. International ITG Conference on Source and Channel Coding, (Ulm, Germany), January 2008.
• [12] G. Schmidt, V. R. Sidorenko, and M. Bossert, “Collaborative decoding of interleaved Reed–Solomon codes and concatenated code designs,” IEEE Trans. Inform. Theory, vol. IT-55, pp. 2991–3012, July 2009.
• [13] I. I. Dumer, “Concatenated codes and their multilevel generalizations,” in Handbook of Coding Theory, vol. II, ch. 23, Amsterdam: North-Holland, 1998. ISBN 0-444-50087-1.