A Rate-Distortion Perspective on Multiple Decoding Attempts for Reed-Solomon Codes

Phong S. Nguyen, Henry D. Pfister, and Krishna R. Narayanan
Department of Electrical and Computer Engineering, Texas A&M University
College Station, TX 77840, U.S.A.
{psn, hpfister, krn}@tamu.edu
Abstract

Recently, a number of authors have proposed decoding schemes for Reed-Solomon (RS) codes based on multiple trials of a simple RS decoding algorithm. In this paper, we present a rate-distortion (R-D) approach to analyze these multiple-decoding algorithms for RS codes. This approach is first used to understand the asymptotic performance-versus-complexity trade-off of multiple error-and-erasure decoding of RS codes. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the condition for a single error-and-erasure decoding to succeed reduces to a form where the distortion is compared to a fixed threshold. Finding the best set of erasure patterns for multiple decoding trials then turns out to be a covering problem which can be solved asymptotically by rate-distortion theory. Next, this approach is extended to analyze multiple algebraic soft-decision (ASD) decoding of RS codes. Both analytical and numerical computations of the R-D functions for the corresponding distortion measures are discussed. Simulation results show that the proposed algorithms using this approach perform better than other algorithms with the same complexity.

I Introduction

Reed-Solomon (RS) codes are one of the most widely used error-correcting codes in digital communication and data storage systems. This is primarily due to the fact that RS codes are maximum distance separable (MDS) codes, can correct long bursts of errors, and have efficient hard-decision decoding (HDD) algorithms, such as the Berlekamp-Massey (BM) algorithm, which can correct up to half the minimum distance $\lfloor (d_{\min}-1)/2 \rfloor$ of the code. An $(n, k)$ RS code of length $n$ and dimension $k$ is known to have $d_{\min} = n - k + 1$ due to its MDS nature.

Since the introduction of RS codes, considerable effort has been put into improving the decoding performance at the expense of complexity. A breakthrough result of Guruswami and Sudan (GS) introduces a hard-decision list-decoding algorithm based on algebraic bivariate interpolation and factorization techniques that can correct errors beyond half the minimum distance of the code [1]. Nevertheless, HDD algorithms do not fully exploit the information provided by the channel output. Koetter and Vardy (KV) later extended the GS decoder to an algebraic soft-decision (ASD) decoding algorithm by converting the probabilities observed at the channel output into algebraic interpolation conditions in terms of a multiplicity matrix [2]. Both of these algorithms, however, have significant computational complexity. Thus, multiple runs of error-and-erasure and error-only decoding with some low-complexity algorithm, such as the BM algorithm, have renewed the interest of researchers. These algorithms essentially first construct a set of either erasure patterns [3, 4], test patterns [5], or patterns combining both [6] and then attempt to decode using each pattern. There has also been recent interest in lowering the complexity per decoding trial, as can be seen in [7, 8, 9].

In the scope of multiple error-and-erasure decoding, there have been several algorithms using different sets of erasure patterns. After multiple decoding trials, these algorithms produce a list of candidate codewords and then pick the best codeword on this list, whose size is usually small. The nature of multiple error-and-erasure decoding is to erase some of the least reliable symbols, since those symbols are more prone to be erroneous. The first algorithm of this type is called Generalized Minimum Distance (GMD) decoding [3], and it repeats error-and-erasure decoding while successively erasing an even number of the least reliable positions (LRPs) (assuming that $d_{\min}$ is odd). More recent work by Lee and Kumar [4] proposes a soft-information successive (multiple) error-and-erasure decoding (SED) that achieves better performance but also increases the number of decoding attempts. Specifically, the Lee-Kumar SED$(l, e)$ algorithm runs multiple error-and-erasure decoding trials with every combination of an even number (at most $e$) of erasures within the $l$ LRPs.

A natural question that arises is how to construct the "best" set of erasure patterns for multiple error-and-erasure decoding. Inspired by this, we first design a rate-distortion framework to analyze the asymptotic trade-off between performance and complexity of multiple error-and-erasure decoding of RS codes. The framework is also extended to analyze multiple algebraic soft-decision (ASD) decoding. Next, we propose a group of multiple-decoding algorithms based on this approach that achieve a better performance-versus-complexity trade-off than other algorithms. The multiple-decoding algorithm that achieves the best trade-off turns out to be a multiple error-only decoding that uses the set of patterns generated by random codes combined with covering codes. These are the main results of this paper.

I-A Outline of the paper

The paper is organized as follows. In Section II, we design an appropriate distortion measure and present a rate-distortion framework to analyze the performance-versus-complexity trade-off of multiple error-and-erasure decoding of RS codes. Also in this section, we propose a general multiple-decoding algorithm that can be applied to error-and-erasure decoding. Then, in Section III, we discuss a numerical computation of the R-D function, which is needed for the proposed algorithm. In Section IV, we analyze both bit-level and symbol-level ASD decoding and design distortion measures so that they fit into the general algorithm. In Section V, we offer some extensions that help the algorithm achieve better performance and running time. Simulation results are presented in Section VI and, finally, conclusions are given in Section VII.

II Multiple Error-and-Erasure Decoding

In this section, we set up a rate-distortion framework to analyze multiple attempts of conventional hard-decision error-and-erasure decoding.

Let $\mathbb{F}_q$ be the Galois field with $q$ elements denoted as $\alpha_1, \alpha_2, \ldots, \alpha_q$. We consider an $(n, k)$ RS code of length $n$ and dimension $k$ over $\mathbb{F}_q$. Assume that we transmit a codeword $\mathbf{c} = (c_1, \ldots, c_n)$ over some channel and receive a vector $\mathbf{r} = (r_1, \ldots, r_n) \in \mathcal{Y}^n$, where $\mathcal{Y}$ is the receive alphabet for a single RS symbol. In this paper, we assume that $q = 2^m$, and all simulations are based on transmitting each of the $m$ bits in a symbol using Binary Phase-Shift Keying (BPSK) on an Additive White Gaussian Noise (AWGN) channel. For each codeword index $i$, let $\pi_i$ be the permutation of $\{1, \ldots, q\}$ given by sorting the a posteriori probabilities $p_{i,j} \triangleq \Pr(c_i = \alpha_j \mid r_i)$ in decreasing order, so that $p_{i, \pi_i(1)} \geq p_{i, \pi_i(2)} \geq \cdots \geq p_{i, \pi_i(q)}$. Then, we can specify $\alpha_{\pi_i(j)}$ as the $j$-th most reliable symbol at codeword index $i$. To obtain the reliability of the codeword positions (indices), we construct the permutation $\sigma$ of $\{1, \ldots, n\}$ given by sorting the probabilities $p_{i, \pi_i(1)}$ of the most likely symbols in increasing order. Thus, codeword position $\sigma(i)$ is the $i$-th LRP. These notations will be used throughout this paper.

Example 1

Consider $q = 4$ and $n = 3$. Assume that we have the probabilities $p_{i,j}$ written in matrix form (rows indexed by the symbol index $j$, columns by the position $i$) as follows

$$\{p_{i,j}\} = \begin{pmatrix} 0.3 & 0.4 & 0.9 \\ 0.5 & 0.3 & 0.05 \\ 0.1 & 0.2 & 0.03 \\ 0.1 & 0.1 & 0.02 \end{pmatrix},$$

then $\pi_1 = (2\ 1\ 3\ 4)$, $\pi_2 = \pi_3 = (1\ 2\ 3\ 4)$, and $\sigma = (2\ 1\ 3)$. For instance, the most reliable symbol at position 1 is $\alpha_2$, and position 2 is the first LRP because $p_{2, \pi_2(1)} = 0.4$ is the smallest of the top probabilities.

Condition 1

(Classical decoding threshold, see [10, 11]): If $e$ symbols are erased, a conventional hard-decision error-and-erasure decoder such as the BM algorithm is able to correct $\nu$ errors in the unerased positions if

$$2\nu + e < n - k + 1. \qquad (1)$$

II-A Conventional error and erasure patterns

Definition 1

(Conventional error and erasure patterns) We define $\mathbf{x} = (x_1, \ldots, x_n) \in \{0, 1\}^n$ and $\mathbf{e} = (e_1, \ldots, e_n) \in \{0, 1\}^n$ as an error pattern and an erasure pattern, respectively, where $x_i = 1$ means that an error occurs (i.e., the most likely symbol is incorrect) and $e_i = 1$ means that an erasure occurs at index $i$.

Example 2

If $d_{\min}$ is odd, then $\{(0, 0, \ldots, 0),\ (1, 1, 0, \ldots, 0),\ (1, 1, 1, 1, 0, \ldots, 0),\ \ldots\}$, up to erasure weight $d_{\min} - 1$, is the set of erasure patterns for the GMD algorithm. For the SED$(l, e)$ algorithm, the set of erasure patterns consists of all patterns with an even number (at most $e$) of 1s confined to the $l$ LRPs. Here, in each erasure pattern the letters are written in increasing reliability order of the codeword positions, and 1 denotes an erasure.
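As an illustration, the following Python sketch (function and parameter names ours) enumerates both sets of erasure patterns, with positions indexed in increasing reliability order so that index 0 is the first LRP.

```python
from itertools import combinations

def gmd_patterns(n, d_min):
    """GMD [3]: erase the 0, 2, 4, ..., d_min - 1 least reliable positions;
    the letter 1 marks an erasure."""
    return [tuple([1] * w + [0] * (n - w)) for w in range(0, d_min, 2)]

def sed_patterns(n, l, e_max):
    """SED(l, e_max) [4]: every pattern with an even number (<= e_max) of
    erasures placed among the l least reliable positions."""
    patterns = []
    for w in range(0, e_max + 1, 2):
        for idx in combinations(range(l), w):
            e = [0] * n
            for i in idx:
                e[i] = 1
            patterns.append(tuple(e))
    return patterns

# SED(12,12) for the (255,239) code gives 2**11 = 2048 decoding trials.
assert len(sed_patterns(255, 12, 12)) == 2**11
```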

Let us revisit the question of how to construct the best set of erasure patterns for multiple error-and-erasure decoding. First, it can be seen that a multiple error-and-erasure decoding succeeds if the condition (1) is satisfied during at least one round of decoding. Thus, our approach is to design a distortion measure that converts the condition (1) into a form where the distortion between an error pattern $\mathbf{x}$ and an erasure pattern $\mathbf{e}$, denoted as $d(\mathbf{x}, \mathbf{e})$, is less than a fixed threshold.

Definition 2

Given a letter-by-letter distortion measure $d : \mathcal{X} \times \hat{\mathcal{X}} \to \mathbb{R}_{\geq 0}$, the distortion between an error pattern $\mathbf{x}$ and an erasure pattern $\mathbf{e}$ is defined by

$$d(\mathbf{x}, \mathbf{e}) \triangleq \sum_{i=1}^{n} d(x_i, e_i).$$

Proposition 1

If we choose the letter-by-letter distortion measure $d : \{0, 1\} \times \{0, 1\} \to \mathbb{R}_{\geq 0}$ as follows

$$d(x, e) = \begin{cases} 0, & (x, e) = (0, 0) \\ 1, & e = 1 \\ 2, & (x, e) = (1, 0), \end{cases} \qquad (2)$$

then the condition (1) for a successful error-and-erasure decoding reduces to the form where the distortion is less than a fixed threshold:

$$d(\mathbf{x}, \mathbf{e}) < n - k + 1.$$

Proof: First, we define $N_{ab}$ to count the number of $(x_i, e_i)$ pairs equal to $(a, b)$ for every $a, b \in \{0, 1\}$. Noticing that $\nu = N_{10}$ and $e = N_{01} + N_{11}$, the condition (1) for one error-and-erasure decoding attempt to succeed becomes $2 N_{10} + N_{01} + N_{11} < n - k + 1$. By seeing that $d(\mathbf{x}, \mathbf{e}) = 2 N_{10} + N_{01} + N_{11}$, we conclude the proof.

Next, we try to maximize the chance that this successful decoding condition is satisfied by at least one of the decoding attempts (i.e., $d(\mathbf{x}, \mathbf{e}) < n - k + 1$ for at least one erasure pattern $\mathbf{e}$). Mathematically, we want to build a set $\mathcal{E}$ of no more than $2^{nR}$ erasure patterns in order to

$$\text{maximize} \quad \Pr\Big( \min_{\mathbf{e} \in \mathcal{E}} d(\mathbf{X}, \mathbf{e}) < n - k + 1 \Big). \qquad (3)$$

The exact answer to this problem is difficult to find. However, one can see it as a covering problem where one wants to cover the space of error patterns using a minimum number of balls centered at the chosen erasure patterns.

This view leads to an asymptotic solution of the problem based on rate-distortion theory. More precisely, we view the error pattern $\mathbf{x}$ as a source sequence and the erasure pattern $\mathbf{e}$ as a reproduction sequence.

Rate-distortion theory shows that the set $\mathcal{E}$ of $2^{nR}$ reproduction sequences can be generated randomly so that the expected minimum distortion $\mathbb{E}\big[\min_{\mathbf{e} \in \mathcal{E}} d(\mathbf{X}, \mathbf{e})\big]$ approaches $n D$, where the distortion $D$ is minimized for a given rate $R$. Thus, for large enough $n$, we have

$$\min_{\mathbf{e} \in \mathcal{E}} d(\mathbf{X}, \mathbf{e}) \leq n (D + \epsilon)$$

with high probability. Here, $R$ and $D$ are closely related to the complexity and the performance, respectively, of the decoding algorithm. Therefore, we characterize the trade-off between those two aspects using the relationship between $R$ and $D$.
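To make the random-coding view concrete, the following Python sketch draws erasure patterns from a per-position test-channel distribution and tests the threshold of Proposition 1; the distribution `q_erase` here is illustrative only, whereas the algorithm of Section II-C derives it from the R-D computation.

```python
import numpy as np

# Letter-by-letter distortion of Proposition 1:
# rows: error letter x (0 = most likely symbol correct, 1 = error);
# cols: erasure letter e (0 = keep the hard decision, 1 = erase).
DIST = np.array([[0, 1],
                 [2, 1]])

def random_patterns(q_erase, num_patterns, rng):
    """Draw patterns i.i.d. with Pr(e_i = 1) = q_erase[i] (reliability order)."""
    return [(rng.random(len(q_erase)) < q_erase).astype(int)
            for _ in range(num_patterns)]

def decoding_succeeds(x, patterns, n, k):
    """True iff some pattern brings the total distortion below n - k + 1."""
    return any(int(DIST[x, e].sum()) < n - k + 1 for e in patterns)

rng = np.random.default_rng(0)
n, k = 255, 239
q_erase = np.linspace(0.9, 0.0, n)          # illustrative: erase LRPs more often
patterns = random_patterns(q_erase, 2**11, rng)
x = (rng.random(n) < np.linspace(0.4, 0.0, n)).astype(int)  # toy error pattern
print(decoding_succeeds(x, patterns, n, k))
```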

II-B Generalized error and erasure patterns

In this subsection, we consider a generalization of the conventional error and erasure patterns under the same framework to make better use of the soft information. At each index of the RS codeword, besides erasing the symbol, we can try to decode using not only the most likely symbol but also less likely ones as the hard-decision (HD) symbol. To handle up to the $\ell$ most likely symbols at each index $i$, we let the pattern alphabets be $\{0, 1, \ldots, \ell\}$ and consider the following definition.

Definition 3

(Generalized error patterns and erasure patterns) Consider a positive integer $\ell$. Let us define $\mathbf{x} \in \{0, 1, \ldots, \ell\}^n$ as the generalized error pattern where, at index $i$, $x_i = j$ (for $1 \leq j \leq \ell$) implies that the $j$-th most likely symbol is correct, and $x_i = 0$ implies that none of the first $\ell$ most likely symbols is correct. Let $\mathbf{e} \in \{0, 1, \ldots, \ell\}^n$ be the generalized erasure pattern used for decoding where, at index $i$, $e_i = j$ (for $1 \leq j \leq \ell$) implies that the $j$-th most likely symbol is used as the hard-decision symbol, and $e_i = 0$ implies that an erasure is used at that index.

For simplicity, we will refer to $\mathbf{x}$ as the error pattern and $\mathbf{e}$ as the erasure pattern, like in the conventional case. Next, we also want to convert the condition (1) to the form where $d(\mathbf{x}, \mathbf{e})$ is less than a fixed threshold. Proposition 1 is thereby generalized into the following theorem.

Theorem 1

We choose the letter-by-letter distortion measure $d : \{0, \ldots, \ell\} \times \{0, \ldots, \ell\} \to \mathbb{R}_{\geq 0}$ defined by $d(j, j') \triangleq [\Gamma]_{j, j'}$ in terms of the $(\ell + 1) \times (\ell + 1)$ matrix

$$[\Gamma]_{j, j'} = \begin{cases} 1, & j' = 0 \\ 0, & j' = j \geq 1 \\ 2, & \text{otherwise}, \end{cases}$$

where the row index $j$ is the error letter and the column index $j'$ is the erasure letter. Using this, the condition (1) for a successful error-and-erasure decoding becomes

$$d(\mathbf{x}, \mathbf{e}) < n - k + 1.$$

Proof: The reasoning is similar to Proposition 1, using the fact that the number of errors is $\nu = |\{i : e_i \neq 0 \text{ and } x_i \neq e_i\}|$ and the number of erasures is $e = |\{i : e_i = 0\}|$, so that $d(\mathbf{x}, \mathbf{e}) = 2\nu + e$.

For each $\ell$, we will refer to this generalized case as mBM-$\ell$ decoding.

Example 3

We consider the case of mBM-2 decoding, where $\ell = 2$. The distortion measure is given by the matrix

$$\Gamma = \begin{pmatrix} 1 & 2 & 2 \\ 1 & 0 & 2 \\ 1 & 2 & 0 \end{pmatrix}.$$

Here, at each codeword position, we consider the first and second most likely symbols as the two hard-decision choices, like in the Chase-type decoding method proposed by Bellorado and Kavcic [7].

II-C Proposed General Multiple-Decoding Algorithm

In this section, we propose a general multiple-decoding algorithm for RS codes based on the rate-distortion approach. This general algorithm applies not only to multiple error-and-erasure decoding but also to multiple-decoding of other decoding schemes that we will discuss later. The first step is designing a distortion measure that converts the condition for a single decoding attempt to succeed into the form where the distortion is less than a fixed threshold. After that, decoding proceeds as described below (a code sketch of Phase II follows the steps).

  • Phase I: Compute rate-distortion function.

Step 1: Transmit $T$ arbitrary test RS codewords, indexed by time $t = 1, \ldots, T$, over the channel and compute a set of $T$ matrices $P^{(t)}$, where $[P^{(t)}]_{j,i}$ is the probability of the $j$-th most likely symbol at position $i$ during time $t$.

Step 2: For each time $t$, obtain the matrix $\tilde{P}^{(t)}$ from $P^{(t)}$ through a permutation $\sigma_t$ that sorts the probabilities of the most likely symbols in increasing order, so that the columns are arranged in reliability order. Take the entry-wise average of all $T$ matrices $\tilde{P}^{(t)}$ to get an average matrix $\bar{P}$.

Step 3: Compute the R-D function of a source sequence (error pattern) with the probabilities of the source letters derived from $\bar{P}$ and the designed distortion measure (see Section III and Section V-B). Determine the point on the R-D curve that corresponds to a designated rate $R$, along with the test-channel input-probability distribution vector $\mathbf{q}$ that achieves that point.

  • Phase II: Run actual decoder.

Step 4: Based on the actual received signal sequence, compute the probabilities $p_{i,j}$ and determine the permutation $\sigma$ that gives the reliability order of the codeword positions by sorting the probabilities of the most likely symbols in increasing order.

Step 5: Randomly generate a set of $2^{nR}$ erasure patterns using the test-channel input-probability distribution vector $\mathbf{q}$, and permute the indices of each erasure pattern by the permutation $\sigma$.

Step 6: Run multiple attempts of the corresponding decoding scheme (e.g., error-and-erasure decoding) using the set of erasure patterns from Step 5 to produce a list of candidate codewords.

Step 7: Use Maximum-Likelihood (ML) decoding to pick the best codeword on the list.
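A minimal Python sketch of Phase II is given below; `decode_trial` is a hypothetical stand-in for one error-and-erasure BM decoding attempt, and `q_vec` collects the per-position test-channel distributions computed in Step 3.

```python
def multiple_decode(r, q_vec, sigma, num_patterns, decode_trial, rng):
    """Sketch of Steps 4-7. q_vec[i] is the test-channel input distribution
    over erasure letters for the i-th LRP, sigma[i] is the channel position
    of the i-th LRP, and decode_trial(r, e) performs one error-and-erasure
    decoding attempt (returning None on failure)."""
    n = len(r)
    candidates = []
    for _ in range(num_patterns):
        # Step 5: draw each letter from its test-channel distribution ...
        e_sorted = [rng.choice(len(q_vec[i]), p=q_vec[i]) for i in range(n)]
        # ... and permute from reliability order back into channel order.
        e = [0] * n
        for i in range(n):
            e[sigma[i]] = e_sorted[i]
        # Step 6: one decoding attempt with this erasure pattern.
        cw = decode_trial(r, e)
        if cw is not None and cw not in candidates:
            candidates.append(cw)
    # Step 7: the caller picks the ML codeword among the candidates.
    return candidates
```

Here `rng` is, e.g., `numpy.random.default_rng()`, whose `choice` method samples an erasure letter according to the probability vector `p`.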

III Computing the Rate-Distortion Function

In this section, we present a numerical method to compute the R-D function and the test-channel input-probability distribution that achieves a specific point on the R-D curve. This probability distribution is needed to randomly generate the set of erasure patterns in the general multiple-decoding algorithm that we have proposed.

For an arbitrary discrete distortion measure, it can be difficult to compute the R-D function analytically. Fortunately, the Blahut-Arimoto (B-A) algorithm (see details in [12, 13]) gives an alternating minimization technique that efficiently computes the R-D function of a single discrete source. More precisely, given a parameter $s$, which represents the slope of the $R(D)$ curve at a specific point, and an arbitrary all-positive initial test-channel input-probability distribution vector $\mathbf{q}^{(0)}$, the B-A algorithm shows us how to compute the rate-distortion point $(R, D)$ by means of computing the test-channel input-probability distribution vector $\mathbf{q}$ and the test-channel transition probability matrix $Q$ that achieve that point.

However, it is not straightforward to apply the B-A algorithm to compute the R-D function for a discrete source sequence (an error pattern in our context) of $n$ independent but non-identical source components. In order to do that, we consider the group of source letters $(x_1, \ldots, x_n)$ as a super-source letter, the group of reproduction letters $(\hat{x}_1, \ldots, \hat{x}_n)$ as a super-reproduction letter, and the source sequence as a single source. The probability of each super-source letter factors into a product over the indices by the independence of the source components. While we could apply the B-A algorithm to this source directly, the complexity is a problem because the alphabet sizes become the super-alphabet sizes $|\mathcal{X}|^n$ and $|\hat{\mathcal{X}}|^n$, respectively. Instead, we avoid this computational challenge by choosing the initial test-channel input-probability distribution so that it can be factorized into a product of $n$ initial test-channel input-probability components. Then, we see that this factorization rule still applies after every step of the iterative process. By doing this, for each parameter $s$ we only need to compute the rate-distortion pair for each component (or index $i$) separately and sum them together. This is captured in the following theorem.

Theorem 2

(Factored Blahut-Arimoto algorithm) Consider a discrete source sequence of $n$ independent but non-identical source components. Given a parameter $s$, the rate and the distortion for this source sequence are given by

$$R = \sum_{i=1}^{n} R_i \qquad \text{and} \qquad D = \sum_{i=1}^{n} D_i,$$

where the components $R_i$ and $D_i$ are computed by the B-A algorithm with the same parameter $s$. This pair of rate and distortion can be achieved by the test-channel input-probability distribution $\mathbf{q} = \prod_{i=1}^{n} \mathbf{q}_i$, where $\mathbf{q}_i$ is the component distribution that achieves $(R_i, D_i)$.

Proof: See Appendix A.
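A compact Python sketch of this computation is given below (names ours). It runs the standard B-A iteration for each component with a common slope parameter `s` and sums the component rate-distortion pairs, as in Theorem 2.

```python
import numpy as np

def blahut_arimoto(p_x, dist, s, num_iter=200):
    """Standard B-A iteration for one source component.
    p_x: source distribution; dist: |X| x |Xhat| distortion matrix;
    s < 0 is the slope parameter. Returns (R_i, D_i, q_i)."""
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])  # all-positive start
    A = np.exp(s * dist)                             # A[x, xh] = exp(s d(x, xh))
    for _ in range(num_iter):
        Q = q * A                                    # unnormalized test channel
        Q /= Q.sum(axis=1, keepdims=True)            # Q[x, xh] = Q(xh | x)
        q = p_x @ Q                                  # updated output marginal
    D = float((p_x[:, None] * Q * dist).sum())
    R = float((p_x[:, None] * Q * np.log2(Q / q)).sum())  # bits
    return R, D, q

def factored_ba(p_list, dist, s):
    """Theorem 2: run B-A per component with a common slope s and sum."""
    comps = [blahut_arimoto(np.asarray(p, float), np.asarray(dist, float), s)
             for p in p_list]
    return (sum(c[0] for c in comps), sum(c[1] for c in comps),
            [c[2] for c in comps])

# Example: per-position error probabilities with the measure of Proposition 1.
dist = [[0, 1], [2, 1]]
p_list = [[1 - p, p] for p in np.linspace(0.4, 0.001, 255)]
R, D, q_vecs = factored_ba(p_list, dist, s=-1.0)
```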

IV Multiple Algebraic Soft-Decision Decoding (ASD)

In this section, we analyze and design a distortion measure to convert the condition for successful ASD decoding to a suitable form so that we can apply the general multiple-decoding algorithm to ASD decoding.

First, let us give a brief review of ASD decoding of RS codes. Given a set $\{\beta_1, \beta_2, \ldots, \beta_n\}$ of $n$ distinct elements in $\mathbb{F}_q$, each message polynomial $f(X)$ of degree less than $k$ yields a codeword $\mathbf{c} = (c_1, \ldots, c_n)$ by evaluating the message polynomial at the $\beta_i$, i.e., $c_i = f(\beta_i)$ for $i = 1, \ldots, n$. Given a received vector $\mathbf{r}$, we can compute the $q \times n$ a posteriori probability (APP) matrix $P$ with entries $[P]_{j,i} \triangleq \Pr(c_i = \alpha_j \mid r_i)$.

ASD decoding as in [2] has the following main steps.

  1. Multiplicity Assignment: Use a particular multiplicity assignment scheme (MAS) to derive a $q \times n$ multiplicity matrix, denoted as $M$, of non-negative integer entries $m_{j,i}$ from the APP matrix $P$.

  2. Interpolation: Construct a bivariate polynomial $Q(X, Y)$ of minimum $(1, k-1)$ weighted degree that passes through each of the points $(\beta_i, \alpha_j)$ with multiplicity $m_{j,i}$ for $1 \leq j \leq q$ and $1 \leq i \leq n$.

  3. Factorization: Find all polynomials $f(X)$ of degree less than $k$ such that $Y - f(X)$ is a factor of $Q(X, Y)$ and re-evaluate these polynomials to form a list of candidate codewords.

In this paper, we denote $m_{\max}$ as the maximum multiplicity, i.e., the largest entry allowed in $M$. Intuitively, higher multiplicity should be put on more likely symbols. Increasing $m_{\max}$ generally improves the performance of ASD decoding. However, one of the drawbacks of ASD decoding is that its decoding complexity grows roughly as the fourth power of $m_{\max}$, which sharply increases with $m_{\max}$. Thus, in this section we will work with small $m_{\max}$ to keep the complexity affordable.

One of the main contributions of [2] is to offer a condition for successful ASD decoding represented in terms of two quantities, specified as the score and the cost, defined as follows.

Definition 4

The score $S_M(\mathbf{c})$ with respect to a codeword $\mathbf{c}$ and a multiplicity matrix $M$ is defined as $S_M(\mathbf{c}) \triangleq \sum_{i=1}^{n} m_{j_i, i}$, where $j_i$ is the index such that $\alpha_{j_i} = c_i$. The cost $C_M$ of a multiplicity matrix $M$ is defined as $C_M \triangleq \sum_{i=1}^{n} \sum_{j=1}^{q} \frac{1}{2} m_{j,i} (m_{j,i} + 1)$.

Condition 2

(ASD decoding threshold, see [2, 14, 15]). The transmitted codeword will be on the list if

$$S_M(\mathbf{c}) > \min\big\{ \delta \in \mathbb{Z}_{\geq 0} : N_{k-1}(\delta) > C_M \big\}, \qquad (4)$$

where $N_{k-1}(\delta)$ denotes the number of monomials $X^a Y^b$ whose $(1, k-1)$-weighted degree $a + (k-1) b$ is at most $\delta$.

To match the general framework, the ASD decoding threshold (or condition for successful ASD decoding) should be converted to the form where the distortion is smaller than a fixed threshold.

IV-A Bit-level ASD case

In this subsection, we consider multiple trials of ASD decoding using bit-level erasure patterns. A bit-level error pattern $\mathbf{x} \in \{0, 1\}^{nm}$ and a bit-level erasure pattern $\mathbf{e} \in \{0, 1\}^{nm}$ have length $nm$ since each symbol consists of $m$ bits. Similar to Definition 1 of a conventional error pattern and erasure pattern, $x_i = 1$ in a bit-level error pattern implies that a bit-level error occurs, and $e_i = 1$ in a bit-level erasure pattern implies that a bit-level erasure occurs at bit index $i$.

From each bit-level erasure pattern we can specify the entries of the multiplicity matrix $M$ using the bit-level MAS proposed in [14] as follows: for each codeword position, assign multiplicity 2 to the symbol with no bit erased, assign multiplicity 1 to each of the two candidate symbols if there is 1 bit erased, and assign multiplicity zero to all the symbols if there are two or more bits erased. All the other entries are zero by default. This MAS has a larger decoding region compared to the conventional error-and-erasure decoding scheme. A sketch of this column assignment follows.
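The following Python sketch (our own naming and bit conventions, with bits listed most significant first) maps one symbol's hard-decision bits and erased-bit mask to the column assignment just described.

```python
def bitlevel_multiplicity_column(hd_bits, erased):
    """Bit-level MAS of [14], sketched: given the hard-decision bits of one
    RS symbol and a boolean mask marking erased bit positions, return the
    {symbol: multiplicity} entries for that column of M."""
    n_erased = sum(erased)
    if n_erased == 0:
        sym = int("".join(map(str, hd_bits)), 2)
        return {sym: 2}                    # multiplicity 2 on the HD symbol
    if n_erased == 1:
        i = erased.index(True)
        candidates = []
        for b in (0, 1):                   # try both values of the erased bit
            bits = list(hd_bits)
            bits[i] = b
            candidates.append(int("".join(map(str, bits)), 2))
        return {c: 1 for c in candidates}  # multiplicity 1 on each candidate
    return {}                              # 2+ bits erased: all-zero column

# Example for m = 8 bits: second bit erased -> two candidate symbols.
print(bitlevel_multiplicity_column([1, 0, 1, 1, 0, 0, 1, 0],
                                   [False, True] + [False] * 6))
```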

Condition 3

(Bit-level ASD decoding threshold, see [14]) For RS codes of sufficiently high rate, ASD decoding using the proposed bit-level MAS will succeed (i.e., the transmitted codeword is on the list) if

$$2\nu + e < T_b, \qquad (5)$$

where $T_b$ is a fixed threshold determined by $n$ and $k$ [14], $e$ is the number of bit-level erasures, and $\nu$ is the number of bit-level errors in unerased locations.

We can choose an appropriate distortion measure according to the following proposition, which is a natural extension of Proposition 1 from the symbol level to the bit level.

Proposition 2

If we choose the bit-level letter-by-letter distortion measure $d : \{0, 1\} \times \{0, 1\} \to \mathbb{R}_{\geq 0}$ exactly as in (2), i.e.,

$$d(x, e) = \begin{cases} 0, & (x, e) = (0, 0) \\ 1, & e = 1 \\ 2, & (x, e) = (1, 0), \end{cases} \qquad (6)$$

then the condition (5) becomes

$$d(\mathbf{x}, \mathbf{e}) < T_b. \qquad (7)$$
Proof: The proof uses the same reasoning as the proof of Proposition 1.

Remark 1

We refer to the multiple-decoding version of bit-level ASD as m-b-ASD.

IV-B Symbol-level ASD case

In this subsection, we try to convert the condition for successful ASD decoding in general to the form that suits our goal. We will also determine which multiplicity assignment schemes allow us to do so.

Definition 5

(Multiplicity type) For some codeword position $i$, let us assign multiplicity $m_j$ to the $j$-th most likely symbol $\alpha_{\pi_i(j)}$ for $1 \leq j \leq \ell$, where $0 \leq m_j \leq m_{\max}$. The remaining entries in the column are zeros by default. We call the sequence $(m_1, m_2, \ldots, m_\ell)$ the column multiplicity type for "top-$\ell$" decoding.

First, we notice that a choice of multiplicity type in ASD decoding at each codeword position has a similar meaning to a choice of erasure decision in the conventional error-and-erasure decoding. However, in ASD decoding we are more flexible and may have more types of erasures. For example, assigning multiplicity zero to all the symbols (the all-zero multiplicity type) at codeword position $i$ corresponds to the case when we have a complete erasure at that position. Assigning the maximum multiplicity $m_{\max}$ to one symbol corresponds to the case when we choose that symbol as the hard-decision one. Hence, with some abuse of terminology, we also use the term (generalized) erasure pattern $\mathbf{e}$ for the multiplicity assignment scheme in the ASD context. Each erasure letter $e_i$ gives the multiplicity type for the corresponding column of the multiplicity matrix $M$.

Definition 6

(Error and erasure patterns for ASD decoding) Consider a MAS with $\tau$ multiplicity types. Let $\mathbf{e} \in \{1, \ldots, \tau\}^n$ be an erasure pattern where, at index $i$, $e_i = t$ implies that multiplicity type $t$ is used for column $i$ of the multiplicity matrix $M$. Notice that the definition of an error pattern $\mathbf{x}$ in Definition 3 applies unchanged here.

Rate-distortion theory gives us the intuition that, in general, the more multiplicity types (erasure choices) we have, the better the performance of multiple ASD decoding becomes as the number of decoding attempts grows. Thus, we want to find as many multiplicity types for "top-$\ell$" decoding as possible that allow us to convert the condition for successful ASD decoding to the correct form.

Example 4

Choosing $m_{\max} = 2$ and $\ell = 2$, for example, gives four column multiplicity types for "top-2" decoding as follows: the first is $(2, 0)$, where we assign multiplicity 2 to the most likely symbol; the second is $(1, 1)$, where we assign equal multiplicity 1 to the first and second most likely symbols; the third is $(0, 2)$, where we assign multiplicity 2 to the second most likely symbol; and the fourth is $(0, 0)$, where we assign multiplicity zero to all the symbols at index $i$ (i.e., the $i$-th column of $M$ is an all-zero column). As a corollary of Theorem 3 below, the distortion matrix that converts (4) to the correct form for this case is

$$\Gamma = \begin{pmatrix} 4 & 3 & 4 & 1 \\ 0 & 1 & 4 & 1 \\ 4 & 1 & 0 & 1 \end{pmatrix},$$

where the rows are indexed by the error letters $x_i \in \{0, 1, 2\}$ and the columns by the four multiplicity types in the order listed.

The following definition and theorem provide a set of allowable multiplicity types that converts the condition for successful ASD decoding into the form where distortion is less than a fixed threshold.

Definition 7

The set $\mathcal{A}_{m_{\max}, \ell}$ of allowable multiplicity types for "top-$\ell$" decoding with maximum multiplicity $m_{\max}$ is defined to be the set of column multiplicity types $(m_1, \ldots, m_\ell)$, with $0 \leq m_j \leq m_{\max}$, for which the condition (4) for successful ASD decoding reduces to the distortion-threshold form given in Theorem 3 below.

Taking the elements of this set in an arbitrary order, we let the $t$-th multiplicity type in the allowable set be $(m_{t,1}, \ldots, m_{t,\ell})$.

Example 5

For $m_{\max} = 2$ and $\ell = 2$, the allowable set comprises the four multiplicity types listed in Example 4, and we refer to the multiple ASD decoding algorithm using this set of multiplicity types as mASD-2. The allowable set for $m_{\max} = 3$ and $\ell = 2$ gives the algorithm referred to as mASD-3. We also consider another case, called mASD-2a, that uses a reduced set of the $m_{\max} = 2$ multiplicity types.

Theorem 3

Let $\tau$ be the number of multiplicity types in a MAS for "top-$\ell$" decoding with maximum multiplicity $m_{\max}$, all taken from the allowable set $\mathcal{A}_{m_{\max}, \ell}$. Let $d : \{0, \ldots, \ell\} \times \{1, \ldots, \tau\} \to \mathbb{R}_{\geq 0}$ be a letter-by-letter distortion measure defined by $d(j, t) \triangleq [\Gamma]_{j,t}$, where $\Gamma$ is the $(\ell + 1) \times \tau$ matrix with entries

$$[\Gamma]_{j,t} = 1 + C_t - 2\, m_{t,j},$$

with $m_{t,0} \triangleq 0$ and $C_t \triangleq \sum_{u=1}^{\ell} m_{t,u}(m_{t,u} + 1)/2$ the cost of a column with multiplicity type $t$. Then, for an RS code whose rate satisfies the high-rate constraint given in [16], the condition (4) for successful ASD decoding is equivalent to

$$d(\mathbf{x}, \mathbf{e}) < n - k + 1. \qquad (9)$$
Proof: [Sketch of proof] (See details in [16]) Let $S$ and $C$ be the score and cost of the multiplicity assignment. First, one shows that, for multiplicity types in the allowable set and rates satisfying the high-rate constraint, the minimizing weighted degree $\delta$ in (4) is smaller than $2(k-1)$; assuming otherwise, a monomial-counting argument gives a contradiction. Hence only monomials with $Y$-degree at most 1 need to be counted, so that $N_{k-1}(\delta) = 2\delta + 3 - k$.

Therefore, the condition in (4) is equivalent to $2S - C > k - 1$. Finally, one can show that $2S - C > k - 1$ is equivalent to $d(\mathbf{x}, \mathbf{e}) < n - k + 1$ with the chosen distortion matrix, since each column of $M$ contributes $[\Gamma]_{x_i, e_i} = 1 + C_{e_i} - 2 m_{e_i, x_i}$ and summing over the columns gives $d(\mathbf{x}, \mathbf{e}) = n + C - 2S$.
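A small Python helper (ours) tabulates the distortion matrix of Theorem 3 from a list of multiplicity types and reproduces the mASD-2 matrix of Example 4.

```python
from math import comb

def distortion_matrix(types, ell):
    """Gamma[j][t] = 1 + C_t - 2 m_{t,j}, with row j = 0 meaning that none
    of the top-ell symbols is correct and C_t the column cost of type t."""
    rows = []
    for j in range(ell + 1):
        row = []
        for m in types:
            cost = sum(comb(mu + 1, 2) for mu in m)   # sum of m_u(m_u+1)/2
            mult = 0 if j == 0 else m[j - 1]
            row.append(1 + cost - 2 * mult)
        rows.append(row)
    return rows

# The mASD-2 types of Example 4:
print(distortion_matrix([(2, 0), (1, 1), (0, 2), (0, 0)], ell=2))
# -> [[4, 3, 4, 1], [0, 1, 4, 1], [4, 1, 0, 1]]
```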

Remark 2

For a fixed $\ell$, the size of the allowable set depends on the choice of $m_{\max}$. The all-zero multiplicity type and any permutation of $(m_{\max}, 0, \ldots, 0)$ are always in the allowable set $\mathcal{A}_{m_{\max}, \ell}$.

V Some Extensions and Generalizations

V-A Erasure patterns using covering codes

The R-D framework we use is most suitable as the block length $n \to \infty$. For a finite $n$, the random-coding approach may have problems handling the few most important LRPs well. We can instead use good covering codes to handle these LRPs. In the scope of covering problems, one can use an $a$-ary $t$-covering code (e.g., a perfect Hamming or Golay code) with covering radius $t$ to cover the whole space of $a$-ary vectors of the same length. The covering may still work well if the distortion measure is close to, but not exactly equal to, the Hamming distortion.

In order to take care of up to the $\ell$ most likely symbols at each of the $n_1$ LRPs of an $(n, k)$ RS code, we consider an $(n_1, k_1)$ $\ell$-ary $t$-covering code whose codeword alphabet is $\{1, 2, \ldots, \ell\}$. Then, we give a definition of the (generalized) error patterns and erasure patterns for this case. In order to draw similarities between this case and the previous cases, we still use the terminology "generalized erasure pattern," and shorten it to erasure pattern, even though error-only decoding is used. For error-only decoding, Condition 1 for successful decoding becomes $2\nu < n - k + 1$.

Definition 8

(Error and erasure patterns for error-only decoding) Let us define $\mathbf{x} \in \{0, 1, \ldots, \ell\}^n$ as an error pattern where, at index $i$, $x_i = j$ (for $1 \leq j \leq \ell$) implies that the $j$-th most likely symbol is correct, and $x_i = 0$ implies that none of the first $\ell$ most likely symbols is correct. Let $\mathbf{e} \in \{1, \ldots, \ell\}^n$ be an erasure pattern where, at index $i$, $e_i = j$ implies that the $j$-th most likely symbol is chosen as the hard-decision symbol.

Proposition 3

If we choose the letter-by-letter distortion measure $d : \{0, \ldots, \ell\} \times \{1, \ldots, \ell\} \to \mathbb{R}_{\geq 0}$ defined by $d(j, j') \triangleq [\Gamma]_{j,j'}$ in terms of the $(\ell + 1) \times \ell$ matrix

$$[\Gamma]_{j,j'} = \begin{cases} 0, & j' = j \\ 1, & \text{otherwise}, \end{cases} \qquad (10)$$

then the condition for successful error-only decoding becomes

$$d(\mathbf{x}, \mathbf{e}) < \frac{n - k + 1}{2}. \qquad (11)$$
Proof: It follows directly from $\nu = |\{i : x_i \neq e_i\}| = d(\mathbf{x}, \mathbf{e})$ together with Condition 1 for $e = 0$ erasures.

Remark 3

If we delete the first row of the matrix in (10), which corresponds to the case where none of the first $\ell$ most likely symbols is correct, then the distortion measure is exactly the Hamming distortion.

Split covering approach:

We can break an error pattern $\mathbf{x}$ into two sub-error patterns: $\mathbf{x}^{(1)}$ on the $n_1$ least reliable positions and $\mathbf{x}^{(2)}$ on the $n - n_1$ most reliable positions. Similarly, we can break an erasure pattern $\mathbf{e}$ into two sub-erasure patterns $\mathbf{e}^{(1)}$ and $\mathbf{e}^{(2)}$. Let $\theta$ be the number of positions in the $n_1$ LRPs where none of the first $\ell$ most likely symbols is correct, i.e., where $x_i = 0$. If we choose the set of all sub-erasure patterns $\mathbf{e}^{(1)}$ to be an $(n_1, k_1)$ $t$-covering code, then $d(\mathbf{x}^{(1)}, \mathbf{e}^{(1)}) \leq t + \theta$ because this covering code has covering radius $t$. Since $d(\mathbf{x}, \mathbf{e}) = d(\mathbf{x}^{(1)}, \mathbf{e}^{(1)}) + d(\mathbf{x}^{(2)}, \mathbf{e}^{(2)})$, in order to increase the probability that the condition (11) is satisfied we want to make $d(\mathbf{x}^{(2)}, \mathbf{e}^{(2)})$ as small as possible by the use of the R-D approach. The following proposition summarizes how to generate a set of erasure patterns for multiple runs of error-only decoding.

Proposition 4

In each erasure pattern, the letter sequence at the $n_1$ LRPs is set to be a codeword of an $(n_1, k_1)$ $\ell$-ary covering code. The letter sequence of the remaining $n - n_1$ positions is generated randomly by the R-D method (see Section II-C) with rate $R_2$ and the distortion measure in (10). Since this covering code has $\ell^{k_1}$ codewords, the total number of erasure patterns is $\ell^{k_1} \cdot 2^{(n - n_1) R_2}$.

Example 6

For the (7,4,3) binary Hamming code, which has covering radius $t = 1$, we take care of the $\ell = 2$ most likely symbols at each of the 7 LRPs. Each erasure pattern has the form $\mathbf{e} = (c_1, \ldots, c_7, e_8, \ldots, e_n)$, where $(c_1, \ldots, c_7)$ is a codeword of this Hamming code (with the binary code alphabet mapped to the letters $\{1, 2\}$) and the positions are written in increasing reliability order. The sub-erasure patterns $(e_8, \ldots, e_n)$ are generated randomly using the R-D approach with rate $R_2$. A code sketch of this construction follows.
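A Python sketch of this combined construction is given below (the generator matrix and names are ours; for brevity one random tail is drawn per covering codeword, while Proposition 4 pairs each codeword with $2^{(n - n_1) R_2}$ random tails).

```python
from itertools import product
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # a generator of the (7,4) Hamming code

def covering_plus_random(n, q2, rng):
    """Erasure patterns in increasing reliability order: the 7 LRPs take a
    Hamming codeword (bits 0/1 mapped to letters 1/2, i.e., first or second
    most likely symbol); the remaining n - 7 letters are drawn i.i.d. with
    Pr(e_i = 2) = q2[i] from the R-D test-channel distribution."""
    patterns = []
    for msg in product((0, 1), repeat=4):
        prefix = tuple(int(b) + 1 for b in np.mod(np.dot(msg, G), 2))
        tail = tuple(1 + (rng.random(n - 7) < q2).astype(int))
        patterns.append(prefix + tail)
    return patterns

rng = np.random.default_rng(0)
pats = covering_plus_random(255, np.linspace(0.5, 0.0, 248), rng)
assert len(pats) == 16 and all(len(p) == 255 for p in pats)
```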

Remark 4

While it also makes sense to use a covering code for the $n_1$ LRPs of the erasure patterns and set the rest to the letter 1 (i.e., choose the most likely symbol as the hard decision), our simulation results show that the performance can be improved by using a combination of covering codes and random codes (i.e., codes generated by the R-D approach).

V-B Closed-form rate-distortion functions

For some simple distortion measures, we can compute the R-D functions analytically in closed form. First, we view an error pattern as a sequence of independent but non-identical random sources. Then, we compute the component R-D functions at each index of the sequence and use convex optimization techniques to allocate the total rate and distortion to the various components. This method converges to the solution faster than the numerical method in Section III. The following two theorems describe how to compute the R-D functions for the simple distortion measures of Propositions 1 and 2.

Theorem 4

(Conventional error-and-erasure decoding) Let $p_i$ be the probability that the most likely symbol at the $i$-th position is in error. The overall rate-distortion function is given by $R(D) = \sum_{i=1}^{n} R_i(D_i)$, where the component distortions $D_i$ can be found by a reverse water-filling procedure in which a common water level $\lambda$ is chosen so that $\sum_{i=1}^{n} D_i = D$. The $R(D)$ function can be achieved by a product test-channel input-probability distribution whose $i$-th component depends only on $p_i$ and $\lambda$.

Proof: [Sketch of proof] (See [16] for details) With the distortion measure in (2), we follow the method in [17] to compute the rate-distortion function component $R_i(D_i)$ and the test-channel input-probability distribution for each index $i$. Then, one can show that the optimal allocation of rate and distortion to the various components is given by a reverse water-filling procedure like in [18].
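The water level can also be located numerically: the sketch below (ours) bisects on the common slope parameter $s$, reusing `factored_ba` from the sketch in Section III, until the summed component distortions hit a target.

```python
def allocate_rate(p_list, dist, D_target, s_lo=-50.0, s_hi=-1e-9, tol=1e-6):
    """Bisect on the common slope s (the analogue of the water level) so
    that the total distortion approaches D_target; a more negative s gives
    lower distortion and higher rate."""
    while s_hi - s_lo > tol:
        s = 0.5 * (s_lo + s_hi)
        _, D, _ = factored_ba(p_list, dist, s)
        if D > D_target:
            s_hi = s    # too much distortion: move toward steeper slopes
        else:
            s_lo = s
    return factored_ba(p_list, dist, 0.5 * (s_lo + s_hi))
```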

Theorem 5

(Bit-level ASD case in Proposition 2) The overall rate-distortion function in this case is given by $R(D) = \sum_{i=1}^{nm} R_i(D_i)$, where the component distortions $D_i$ are determined by a common Lagrange multiplier $\lambda$ chosen so that $\sum_{i=1}^{nm} D_i = D$. The $R(D)$ function can be achieved by a product test-channel input-probability distribution over the bit-level erasure letters.

Proof: [Sketch of proof] (See [16] for details) With the distortion measure in (6), using the method in [17] we can compute the rate-distortion function component $R_i(D_i)$, where $\lambda$ is a Lagrange multiplier chosen to meet the total distortion constraint. Then, the Kuhn-Tucker conditions define the overall rate allocation.

VI Simulation results

Figure 1: A realization of the R-D curves for various decoding algorithms for the (255,239) RS code over an AWGN channel at a fixed SNR.
Figure 2: Performance of various decoding algorithms for the (255,239) RS code over an AWGN channel.

Using simulations, we consider the performance of the (255,239) RS code over an AWGN channel with BPSK as the modulation format. The mBM-1 curve corresponds to standard error-and-erasure BM decoding with multiple erasure patterns. For $\ell \geq 2$, the mBM-$\ell$ curves correspond to error-and-erasure BM decoding with multiple decoding trials using both erasures and the top-$\ell$ symbols. The mASD curves correspond to multiple ASD decoding trials with the corresponding maximum multiplicity. The number of decoding patterns is $2^c$, where $c$ is given in parentheses in each algorithm's acronym (e.g., mBM-2(11) uses $2^{11}$ patterns).

Fig. 1 shows the R-D curves for various algorithms at a fixed SNR. For this code, the fixed threshold for decoding is $n - k + 1 = 17$. Therefore, one might expect that algorithms whose average distortion is less than 17 should have a frame error rate (FER) less than 1/2. The R-D curve allows one to estimate the number of decoding patterns required to achieve this FER. Notice that the mBM-1 algorithm at rate 0, which is very similar to conventional BM decoding, has an expected distortion of roughly 24; for this reason, the FER of conventional decoding is close to 1. The R-D curve tells us roughly how many erasure patterns must be tried before the FER drops to roughly 1/2, because this is where the expected distortion drops down to 17. Likewise, the mBM-2(11) algorithm has an expected distortion of less than 14, so we expect (and our simulations confirm) that its FER should be less than 1/2. One weakness of this approach is that the R-D function describes only the average distortion and does not directly consider the probability that the distortion is greater than 17. Still, we can make the following observations from the R-D curves. Even at low rates, the distortion achieved by mBM-2 is roughly the same as that of mBM-3, mASD-2, and mASD-3, but smaller than that of mASD-2a and mBM-1. This implies that mBM-2 is no worse than the more complicated ASD-based approaches over a wide range of rates.

The FER of various algorithms can be seen in Fig. 2. The focus on $2^{11}$ decoding patterns allows us to make fair comparisons with SED(12,12), which uses the same number of trials. With the same number of decoding trials, mBM-2(11) outperforms SED(12,12) by 0.3 dB at the FERs shown. Even mBM-2(7), with many fewer decoding trials, outperforms both SED(12,12) and the KV algorithm. Among all our proposed algorithms with $2^{11}$ decoding patterns, the mBM-HM74(11) algorithm achieves the best performance. This algorithm uses the Hamming (7,4) covering code for the 7 LRPs and the R-D approach for the remaining codeword positions. Meanwhile, the small differences in performance between mBM-2(11), mBM-3(11), mASD-2(11), and mASD-3(11) suggest that: (i) taking care of the $\ell = 2$ most likely symbols at each codeword position is good enough for multiple decoding of high-rate RS codes, and (ii) multiple runs of error-and-erasure decoding are almost as good as multiple runs of ASD decoding. Recall that this result is also correctly predicted by the R-D analysis. Moreover, it is quite reasonable since we know that the gain of GS decoding, with infinite multiplicity, over the BM algorithm is negligible for high-rate RS codes. To compare with the LCC Chase-type approach used in [7], we also consider the mBM-HM74(4) algorithm, which uses the Hamming (7,4) covering code for the 7 LRPs and the hard-decision pattern for the remaining codeword positions. This shows that the covering code achieves better performance with the same number ($2^4 = 16$) of decoding attempts. The comparison is not entirely fair, however, because of their low-complexity approach to multiple decoding. We believe, nevertheless, that their technique can be generalized to covering codes.

VII Conclusion

A rate-distortion approach is proposed as a unified framework to analyze multiple decoding trials, with various algorithms, of RS codes in terms of performance and complexity. A connection is made between the complexity and performance (in some asymptotic sense) of these multiple-decoding algorithms and the rate and distortion of an associated R-D problem. Covering codes are also combined with the rate-distortion approach to mitigate the suboptimality of random codes when the effective block-length is not large. As part of this analysis, we also present numerical and analytical computations of the rate-distortion function for sequences of independent but non-identical sources. Finally, the simulation results show that our proposed algorithms based on the R-D approach achieve a better performance-versus-complexity trade-off than previously proposed algorithms. One key result is that, for high-rate RS codes, multiple-decoding using the standard BM algorithm is as good as multiple-decoding using more complex ASD algorithms.

In this paper, we only discuss the rate-distortion approach to solve the problem in (3). However, the performance can be further improved by focusing on the rate-distortion error-exponent. This allows us to approximately solve the covering problem for finite $n$ rather than just as $n \to \infty$. The complexity of multiple decoding can also be decreased by using clever techniques to lower the complexity per decoding trial (e.g., [7]). We will address these two improvements in a future paper.

Appendix A Proof of Theorem 2

First, let us recall that, for each source component $X_i$, the B-A algorithm computes the R-D pair $(R_i, D_i)$ in the following steps:

  1. Choose an arbitrary all-positive initial test-channel input-probability distribution vector $\mathbf{q}_i^{(0)}$.

  2. Iterate the following steps for $r = 0, 1, 2, \ldots$:

    $$Q_i^{(r)}(\hat{x} \mid x) = \frac{q_i^{(r)}(\hat{x})\, e^{s\, d(x, \hat{x})}}{\sum_{\hat{x}'} q_i^{(r)}(\hat{x}')\, e^{s\, d(x, \hat{x}')}}, \qquad q_i^{(r+1)}(\hat{x}) = \sum_{x} p_i(x)\, Q_i^{(r)}(\hat{x} \mid x),$$

    where $Q_i^{(r)}(\hat{x} \mid x)$ is the test-channel transition probability. It is shown by B-A that $q_i^{(r)} \to q_i^{*}$ and $Q_i^{(r)} \to Q_i^{*}$ as $r \to \infty$.

The rate and distortion can be computed by

$$R_i = \sum_{x, \hat{x}} p_i(x)\, Q_i^{*}(\hat{x} \mid x) \log_2 \frac{Q_i^{*}(\hat{x} \mid x)}{q_i^{*}(\hat{x})} \qquad \text{and} \qquad D_i = \sum_{x, \hat{x}} p_i(x)\, Q_i^{*}(\hat{x} \mid x)\, d(x, \hat{x}).$$