On the Threshold of Maximum-Distance Separable Codes

Bruno Kindarji¹,², Gérard Cohen², Hervé Chabanne¹,²
¹ Sagem Sécurité, Osny, France
² Institut Télécom, Télécom ParisTech, Paris, France

Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published at Indocrypt '09, this paper deals with the threshold of linear q-ary error-correcting codes. The security of that scheme relies on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretic point of view: is there a class of elements so far away from the code that the list size is always superpolynomial? Or, dually speaking, is Maximum-Likelihood decoding almost surely impossible?

We relate this issue to the decoding threshold of a code, and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of Maximum-Distance Separable codes, such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.

I Introduction

In [1], Bringer et al. proposed a low-cost mutual authentication protocol that uses a Reed-Solomon code structure. The protocol is quite simple: Bob owns two secret polynomials of bounded degree, known only to Alice; to authenticate herself to Bob, Alice proves knowledge of these polynomials by sending an evaluation of the first one at a point, and Bob proves his identity by replying with the evaluation of the second one at the same point. The protocol is designed so that, if Alice speaks to many parties, it is hard to trace Bob out of all the conversations, and it is hard to impersonate Alice (or Bob). The security of the protocol rests on an algorithmic assumption, saying that the polynomial reconstruction problem is hard for vectors that are far enough from the code. Indeed, the best known algorithms solving polynomial reconstruction are those of Guruswami-Sudan [2] and, on a related problem, Guruswami-Rudra [3], which can basically reconstruct a polynomial given enough correct values.
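The exchange above can be sketched with a toy implementation. All names, the field size, and the degree bound below are hypothetical placeholders for illustration, not the actual parameters of [1]:

```python
# Toy sketch of the challenge-response flow described above (all names and
# parameters are hypothetical illustrations, not the exact scheme of [1]).
import random

P_MOD = 2**31 - 1           # a prime, standing in for the large field GF(q)
DEG = 4                     # assumed degree bound of the secret polynomials

def poly_eval(coeffs, x, p=P_MOD):
    """Evaluate the polynomial with coefficient list coeffs at x, mod p (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# Alice and Bob share two secret polynomials of degree < DEG.
secret_P = [random.randrange(P_MOD) for _ in range(DEG)]
secret_Q = [random.randrange(P_MOD) for _ in range(DEG)]

# Alice's move: evaluate one secret polynomial at a fresh point c.
c = random.randrange(P_MOD)
alice_msg = (c, poly_eval(secret_P, c))

# Bob's reply: the evaluation of the other polynomial at the same point.
bob_msg = poly_eval(secret_Q, c)

# Each side checks the received evaluation against its own secrets.
assert alice_msg[1] == poly_eval(secret_P, c)
assert bob_msg == poly_eval(secret_Q, c)
```

An eavesdropper sees only pairs (c, P(c)) perturbed by the channel, which is where polynomial reconstruction enters the picture.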

This algorithmic security result is somewhat unsatisfying, as it may be possible to exhibit better decoding algorithms. We therefore take interest in the information-theoretic side of the problem.

The solution to the problem raised by [1] is to look at the output of a list-decoder centered on the received values, and to output the possible polynomials as candidate values for the secrets. Our approach consists in looking at a usually ignored side of list-decoding: we seek radii such that list-decoding a word at such a radius yields a list whose size is always lower-bounded by a large enough number. This differs from the literature concerning list-decoding, which usually looks for radii at which the list size is always upper-bounded by a maximum list size, or tries to exhibit a counter-example.

The “large enough” list size can be obtained simply by requiring that Maximum-Likelihood (ML) decoding be most improbable. For that, we focus on the all-or-nothing behaviour of the ML decoder. Inspired by percolation theory [4] and by graph-theoretic techniques applied to codes [5], we show how it is possible to conservatively estimate, before, after, and around a threshold, the all-or-nothing probability of ML decoding.

II The Threshold of a Code

The existence of a threshold is motivated by the classical question of percolation: given a graph with a source and a sink, and given the probability for a “wet” node of the graph to “wet” an adjacent node, what is the probability for the source to wet the sink? It appears that this probability has a threshold effect; in other words, there exists a critical probability such that above it the sink is almost surely wet, and below it the sink is almost never wet. The threshold effect is illustrated in Fig 1.

This question can be transposed to the probability of correctly decoding a code. Given a proportion of errors and a decoding algorithm, what is the probability of correctly recovering the sent codeword? It was shown in [6] that, for every binary code and every decoding algorithm, this probability also follows a threshold.
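This all-or-nothing behaviour is easy to reproduce experimentally. The following Monte Carlo sketch uses a q-ary repetition code under majority (i.e. ML) decoding as an illustrative stand-in; the code, parameters, and seed are assumptions for the sake of the example, not taken from [6]:

```python
# Monte Carlo illustration of the threshold effect, using a q-ary repetition
# code: ML decoding of a repetition code keeps the most frequent symbol, and
# its critical probability over the q-ary symmetric channel is p_c = (q-1)/q.
import random
from collections import Counter

def simulate(n, q, p, trials=2000, rng=random.Random(1)):
    """Fraction of trials where majority (ML) decoding recovers the sent symbol 0."""
    ok = 0
    for _ in range(trials):
        # q-ary symmetric channel: each coordinate flips to a uniform
        # non-null symbol with probability p.
        received = [0 if rng.random() > p else rng.randrange(1, q) for _ in range(n)]
        best, _ = Counter(received).most_common(1)[0]
        ok += (best == 0)
    return ok / trials

# Here p_c = 3/4: success probability drops abruptly around that value.
for p in (0.5, 0.7, 0.8, 0.9):
    print(p, simulate(101, 4, p))
```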

In this paper, we show that this property also applies to q-ary codes: the threshold behaviour observed for binary codes can be obtained again.

II-A The Margulis-Russo Identity

The standard technique to derive threshold effects in discrete spaces is to integrate an isoperimetric inequality; for that, the Margulis-Russo identity is required.

Let {0,1}^n be the binary Hamming space; the Hamming distance d(x, y) is the number of coordinates in which the vectors x and y differ. Consider the measure μ_p defined by μ_p(x) = p^{w(x)} (1−p)^{n−w(x)}, where w(x) is the Hamming weight of x. The number of limit-vectors of a subset A ⊂ {0,1}^n is the function h_A defined, for x ∈ A, by h_A(x) = #{y ∉ A : d(x, y) = 1 and y ≼ x}.

For A ⊂ {0,1}^n such that A is increasing (i.e. if x ∈ A and x ≼ y, with ≼ defined component-wise, then y ∈ A), Margulis and Russo showed:

    d/dp μ_p(A) = (1/p) Σ_{x∈A} h_A(x) μ_p(x).

This section shows that this equality is also true over the q-ary Hamming space F_q^n.

For a vector x ∈ F_q^n, the support of x is the set of all its non-null coordinates, i.e. supp(x) = {i : x_i ≠ 0}. Define the measure μ_p(x) = (p/(q−1))^{w(x)} (1−p)^{n−w(x)}, with w(x) = |supp(x)| the weight of x. This definition is consistent with a probability measure, as Σ_{x∈F_q^n} μ_p(x) = 1.
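As a quick sanity check, the measure can be summed numerically over a small space; this sketch assumes the q-ary symmetric-channel form μ_p(x) = (p/(q−1))^{w(x)} (1−p)^{n−w(x)}:

```python
# Numerical sanity check that mu_p is a probability measure on F_q^n
# (small n and q; mu_p(x) depends only on the Hamming weight w(x)).
from itertools import product

def mu_p(x, p, q):
    w = sum(1 for xi in x if xi != 0)          # Hamming weight of x
    return (p / (q - 1)) ** w * (1 - p) ** (len(x) - w)

n, q, p = 4, 3, 0.3
total = sum(mu_p(x, p, q) for x in product(range(q), repeat=n))
print(total)   # should be 1 up to floating-point error
```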

Write x ⊑ y for the support-inclusion relation: supp(x) is a subset of supp(y), and y agrees with x on it (i.e. y_i = x_i for all i ∈ supp(x)). The support inclusion ⊑ generalises the component-wise order ≼ that was used in the binary case.

Lemma 1 (Margulis-Russo Identity over q-ary alphabets)

Let A be an increasing subset of F_q^n, i.e. such that if x ∈ A, then for all y such that x ⊑ y, y ∈ A. Then

    d/dp μ_p(A) = (1/p) Σ_{x∈A} h_A(x) μ_p(x).
The proof of this lemma is an adaptation of Margulis' proof in [7]. For this, we use the following notation:

  • where , is the number of links from to

  • for , ,

  • for , ( is the union of the );

  • is the number of limit-vectors next to elements of weight .

Trivially, . We now note that:

  • , as to go from to , the only way (in one move) is to put one coordinate to ;

  • with the same reasoning;

  • as is increasing.

  • Combining these equalities, we get ;

  • as it is necessary to put a non-null coordinate to and a null one to .

Finally for and (or ).

Back to the desired identity, we observe that

Hence the identity. \endproof

This lemma shows that the Margulis-Russo identity also holds over F_q^n; this identity was the keystone of the reasoning in [5] that gave an explicit form to the threshold behaviour of Maximum-Likelihood error correction.

II-B A Threshold for Error-Decoding q-ary Codes

In the following, we use the normal density φ, the cumulative normal function Φ, and the inverse function Φ⁻¹ (so that Φ ∘ Φ⁻¹ = id).

A monotone property is a set A ⊂ F_q^n such that either A is increasing, or its complement is increasing.

Theorem 1

Let A be a monotone property of F_q^n. Suppose that A is increasing, or that its complement is.

Let p_c be the unique real such that μ_{p_c}(A) = 1/2.

Then the measure of A, μ_p(A), is bounded by:

Sketch of Proof
The proof is exactly the same as the one from [5]. The key idea is to derive the upper bound:

Integrating this inequality, together with the Margulis-Russo lemma, gives the result. \endproof

To conclude this part, we remark that the non-decoding region of a given point, for a q-ary code, is an increasing region of F_q^n. For linear codes, this non-decoding region can always be translated to that of 0 without loss of generality; let A denote it. The probability of a decoding error is then μ_p(A).

For x ∈ F_q^n, we show that either x ∉ A, or w(x) ≥ d/2. Indeed, if x ∈ A, then x is nearer to some non-null codeword c than to 0, so that d(x, c) ≤ d(x, 0) = w(x). Note also that every vector obtained from x by replacing one of its null coordinates with a non-null value remains at least as close to c as to 0, so it stays out of the decoding region of 0. Let w be the weight of x; as x is nearer to c than to 0 and d(0, c) ≥ d, the triangle inequality gives w ≥ d/2. Thus the previous assertion.

Combining the previous results, we have just shown that for any q-ary code, the probability of error is, as for binary codes, bounded by a threshold function. This can be expressed by the following theorem, which has the same form as the one shown in [5]:

Theorem 2

Let C be a code of any length and of minimal distance d. Over the q-ary symmetric channel with transition probability p, the probability of decoding error P_err(p) associated with C is such that there exists a unique p_c with P_err(p_c) = 1/2, and P_err is bounded by:

The upper bound is true when p ≤ p_c; the lower bound is true when p ≥ p_c.

Even though linearity was used above in order not to lose generality, it is not a requirement for this theorem. Indeed, the bounding equations hold for every codeword c, replacing the region A by its translate by c. Assuming that the codewords sent are distributed uniformly over the code, we thus obtain the result.

The behaviour of this function is illustrated in Fig 1. Below the threshold, P_err is extremely flat around 0; symmetrically, above the threshold, P_err is extremely flat around 1. Finally, around the threshold p_c, the slope grows with the minimal distance (of order √d), which makes the transition almost vertical when the minimal distance is large.

Figure 1: Illustration of the threshold effect.
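The sharpening of the transition with the minimal distance can be reproduced with a simple sigmoid of the form Φ(√d · (p − p_c)); this is only an illustrative shape with an assumed threshold value, not the exact bound of Theorem 2:

```python
# Illustration of how the transition sharpens as the minimal distance d grows:
# a sigmoid of the form Phi(sqrt(d) * (p - p_c)) (illustrative shape only).
import math

def Phi(x):
    """Cumulative normal function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sigmoid_error(p, p_c, d):
    """Illustrative error-probability profile around the threshold p_c."""
    return Phi(math.sqrt(d) * (p - p_c))

p_c = 0.4   # assumed threshold, for illustration
for d in (10, 100, 1000):
    profile = [round(sigmoid_error(p, p_c, d), 3) for p in (0.3, 0.4, 0.5)]
    print(d, profile)   # flat near 0 below p_c, 0.5 at p_c, near 1 above
```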

III Explicit Computation of the Threshold for Maximum-Distance Separable Codes

In this section, we only take interest in linear codes over F_q.

III-A Another Estimation of the Decoding Threshold

By linearity, we can again assume without loss of generality that the sent codeword is the all-null vector. A rough estimate of the probability of wrongly decoding a vector transmitted with crossover probability p can be obtained by computing the proportion of vectors of weight at most np that are closer to a non-null codeword than to 0. Let f(p) be this proportion.
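For a toy example, this proportion can be computed by brute force. The sketch below uses a [4, 2] Reed-Solomon code over F_5 (an assumed illustration, far smaller than the cryptographic parameters of [1]), and counts ties as decoding failures, which is an assumption:

```python
# Brute-force computation of the proportion f described above, on a toy
# MDS code: the [4, 2] Reed-Solomon code over F_5 (evaluation points 1..4).
# A vector counts as badly decodable if some non-null codeword is at least
# as close to it as 0 is (ties counted as failures -- an assumption).
from itertools import product

q, n, k = 5, 4, 2
points = [1, 2, 3, 4]
codewords = [tuple((a + b * x) % q for x in points)
             for a in range(q) for b in range(q)]
nonzero = [c for c in codewords if any(c)]

def weight(v):
    return sum(1 for vi in v if vi != 0)

def dist(u, v):
    return sum(1 for ui, vi in zip(u, v) if ui != vi)

def bad_ratio(r):
    """Proportion of vectors of weight <= r closer to a non-null codeword."""
    ball = [v for v in product(range(q), repeat=n) if weight(v) <= r]
    bad = sum(1 for v in ball
              if any(dist(v, c) <= weight(v) for c in nonzero))
    return bad / len(ball)

for r in range(n + 1):
    print(r, round(bad_ratio(r), 3))
```

Since the code has minimal distance 3, no vector of weight at most 1 can be wrongly decoded, and the proportion only becomes positive for larger radii.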

Let V(r) = |B(0, r)|, where B(x, r) is the Hamming ball of radius r (centered on x, for example) in F_q^n. It is well known that, as n → ∞, (1/n) log_q V(ρn) → h_q(ρ), where h_q is the q-ary entropy function.
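The asymptotic behaviour of the ball volume can be checked numerically. This sketch assumes the standard expressions V(r) = Σ_i C(n, i)(q−1)^i and h_q(x) = x log_q(q−1) − x log_q x − (1−x) log_q(1−x):

```python
# Volume of the q-ary Hamming ball and its exponential rate, the q-ary entropy.
import math

def ball_volume(n, r, q):
    """|B(0, r)| in F_q^n: sum over weights of C(n, i) * (q-1)^i."""
    return sum(math.comb(n, i) * (q - 1) ** i for i in range(r + 1))

def h_q(x, q):
    """q-ary entropy function, for 0 < x < 1."""
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

n, q, rho = 200, 4, 0.3
rate = math.log(ball_volume(n, int(rho * n), q), q) / n
print(rate, h_q(rho, q))   # the two values are close for large n
```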

To compute the numerator, we suggest, for each codeword c of weight w between d and 2np (beyond which no vector of the ball can be closer to c than to 0), to compute the number of vectors that are nearer to c than to 0. This number actually only depends on the weight w of c, and will be noted N(w). As there are A_w codewords of weight w in the code (with the standard notation), the function f can be approximated by:

    f(p) ≈ ( Σ_w A_w · N(w) ) / V(np).    (1)
The different quantities used in this equation are illustrated in Fig 2.

Figure 2: Different quantities used in Eq 1

N(w) is made explicit hereafter. Let c be a codeword of weight w. Let y be a vector with the following constraints:

  • d(0, y) ≤ np, i.e. y is the result of the transmission of 0 with at most np errors.

  • d(c, y) ≤ d(0, y), i.e. y is wrongly decoded.

We denote by n₁ the number of coordinates i such that c_i ≠ 0 and y_i = c_i; n₂ is the number of coordinates such that c_i ≠ 0 and y_i ∉ {0, c_i}; n₃ is the number of coordinates such that c_i = 0 and y_i ≠ 0.

The previous constraints on y can be rewritten into a system S of linear inequalities in (n₁, n₂, n₃):

We then obtain

    N(w) = Σ_{(n₁,n₂,n₃) ∈ S} C(w, n₁) C(w−n₁, n₂) (q−2)^{n₂} C(n−w, n₃) (q−1)^{n₃}.

Remark 1

It is easy to see that N(w) is at most the volume of a ball of radius np; this estimate will be used in the next part.

III-B Application to MDS codes

Maximum-Distance Separable (MDS) codes are codes whose dimension and minimal distance fulfil the Singleton bound with equality, so that:

    d = n − k + 1.

A well-known family of MDS codes is that of the Reed-Solomon codes, for which a codeword is made of the evaluations of a polynomial of degree less than k at n distinct field elements. Reed-Solomon codes over F_q can have a length up to the field size, but shorter such codes are also MDS.

For MDS codes, the number A_w of codewords of a given weight w is known. This number is:

    A_w = C(n, w) Σ_{j=0}^{w−d} (−1)^j C(w, j) (q^{w−d+1−j} − 1),   for d ≤ w ≤ n.

From this identity, it is easy to derive the more usable formula:

    A_w = C(n, w) (q − 1) Σ_{j=0}^{w−d} (−1)^j C(w − 1, j) q^{w−d−j}.    (2)
It is now possible to approximate the error probability quite nicely while under the threshold. Indeed, the numerator and denominator are correct as long as a vector is not close to two different codewords with a weight in the considered range, i.e. as long as the list of codewords within the given distance of the vector reduces to a single element.
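As a sanity check, the weight distribution of an MDS code can be evaluated on a toy example (a [4, 2, 3] code over F_5, chosen here purely for illustration): summing A_w over all weights must give q^k.

```python
# Weight distribution of an MDS [n, k, d = n-k+1] code over F_q, and the
# sanity check sum_w A_w = q^k.
import math

def mds_weight_enumerator(n, k, q):
    """A_w for an MDS code: A_0 = 1, A_w = 0 for 0 < w < d, and the
    standard alternating-sum formula for w >= d = n - k + 1."""
    d = n - k + 1
    A = [0] * (n + 1)
    A[0] = 1
    for w in range(d, n + 1):
        A[w] = math.comb(n, w) * (q - 1) * sum(
            (-1) ** j * math.comb(w - 1, j) * q ** (w - d - j)
            for j in range(w - d + 1))
    return A

A = mds_weight_enumerator(4, 2, 5)   # e.g. a [4, 2, 3] MDS code over F_5
print(A, sum(A))                     # the sum must equal q^k = 25
```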

III-C Short MDS Codes over Large Fields

We now focus on the specific problem presented in the Introduction, motivated by the beaconing and authentication protocol from [1]. This setting is characterized by the following:

  • The underlying code is a Reed-Solomon code over a field F_q;

  • The field size q is very large, for cryptographic reasons;

  • The code length n is very short (with respect to q), as it is limited by the memory of embedded low-cost devices.

This application fits into the framework depicted in the previous sections. Moreover, the information that n is much smaller than q enables us to compute an asymptotic first-order estimation of the threshold of such codes.

Indeed, when n ≪ q, we can compute an upper bound on f, and derive from it an estimate of the threshold. More precisely, we aim at computing its first-order value; the result is a lower approximation of the threshold.

To estimate the weight enumerator A_w, we use formula (2) to derive

The number N(w) of targeted vectors for each codeword is not easy to evaluate; we write out its first-order development. (Here, the remaining term is bounded by a polynomial.) We know that


Combining these elements with equation (1), we obtain .

As , the first order of is bounded by: .

The bound (3) shows that the right-hand side of this inequality is between and , so that the threshold is asymptotically between and .

Unfortunately, a more precise evaluation of N(w) strongly depends on the context. Indeed, according to Section III-A,

This maximum can be obtained by evaluating the term to be maximized at all vertices of the polytope defined by the system S (S is made of 9 inequalities in 3 unknowns; the vertices are obtained by selecting 3 of these inequalities as equalities, hence at most C(9,3) = 84 vertices); however, it is not possible to exhibit a general answer here, as the solution depends on the minimal distance of the code, i.e. on the rate of the Reed-Solomon code.
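The vertex-enumeration strategy above can be sketched generically: solve each subset of 3 constraints taken as equalities, keep the feasible points, and take the best objective value. The polytope below is a placeholder (a simplex), not the actual system S, since the exact inequalities depend on the code parameters:

```python
# Generic vertex-enumeration sketch for maximising a linear objective over a
# small polytope {x : M x <= b}.  The constraints used at the bottom are
# placeholders, not the actual system S of Section III-A.
from itertools import combinations
from fractions import Fraction

def solve3(rows, rhs):
    """Solve a 3x3 linear system by Gauss-Jordan elimination over the
    rationals; return None if the system is singular."""
    m = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(rows, rhs)]
    for col in range(3):
        piv = next((r for r in range(col, 3) if m[r][col] != 0), None)
        if piv is None:
            return None
        m[col], m[piv] = m[piv], m[col]
        m[col] = [v / m[col][col] for v in m[col]]
        for r in range(3):
            if r != col and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[col])]
    return [m[r][3] for r in range(3)]

def maximise_over_polytope(M, b, objective):
    best = None
    for idx in combinations(range(len(M)), 3):
        x = solve3([M[i] for i in idx], [b[i] for i in idx])
        if x is None:
            continue
        # keep the candidate vertex only if it satisfies every constraint
        if all(sum(mi * xi for mi, xi in zip(row, x)) <= bi
               for row, bi in zip(M, b)):
            val = sum(c * xi for c, xi in zip(objective, x))
            best = val if best is None else max(best, val)
    return best

# Placeholder polytope: the simplex {x >= 0, x1 + x2 + x3 <= 6}
M = [[-1, 0, 0], [0, -1, 0], [0, 0, -1], [1, 1, 1]]
b = [0, 0, 0, 6]
print(maximise_over_polytope(M, b, [1, 2, 3]))   # the maximum is at a vertex
```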

III-D Numerical Application to an MDS Code

In the case of a code over a finite field of reasonable dimension, it is possible to compute exactly the ratio that approximates the Maximum-Likelihood threshold. However, the exact threshold cannot yet be computed easily; this remains an open problem, related to the list-decoding capacity of Reed-Solomon codes.

We therefore used the NTL open-source library [8] to compute the relevant values, in order to obtain an accurate enough approximation of the function f described earlier. The parameters are those proposed in [1], and show that the decoding threshold of such a code lies between and .

The slope around the threshold is around 115, so for “small” values of p (in fact, a bit smaller than the threshold) f is very near 0, while as p grows past the threshold, f is much greater. This was predicted earlier, and expresses the fact that the list size at this radius is always greater than 1. The value obtained is a lower bound for the threshold of the code, though intuition suggests that this lower bound is quite near the real threshold.

IV Conclusion

As a conclusion, let us look back at the starting point of our reasoning. The initial goal was to revisit the security conditions of the construction depicted in [1]: for a received vector of F_q^n, for what parameters is the size of the list at a given radius exponentially large? This problem reduces to that of the threshold probability of a linear error-correcting code. Indeed, when the minimal distance of the code is large enough, the decoding-error probability is exponentially small below the threshold, and exponentially close to 1 above it. For our class of parameters, ensuring that the error rate is above the threshold is enough to show the security of the scheme.

We then showed that the threshold behaviour can be made explicit for q-ary codes as well as for binary codes, and gave an explicit lower bound on the threshold of MDS codes.

Applying these results to the initial problem, we showed that the threshold for a (highly) truncated Reed-Solomon code over a finite field is very near the normalized minimal distance of the code. To switch from an algorithmic assumption (the hardness of the Polynomial Reconstruction Problem, see [9]) to information-theoretic security, we therefore recommend raising the dimension of the underlying code. This lowers the decoding threshold of the code; the downside is that storing a codeword becomes more costly.

V Acknowledgements

We thank Gilles Zémor for his useful comments and fruitful discussions.


  • [1] J. Bringer, H. Chabanne, G. D. Cohen, and B. Kindarji, “Private interrogation of devices via identification codes,” in INDOCRYPT, ser. Lecture Notes in Computer Science, B. K. Roy and N. Sendrier, Eds., vol. 5922.   Springer, 2009, pp. 272–289.
  • [2] V. Guruswami and M. Sudan, “Reflections on ‘Improved decoding of Reed-Solomon and algebraic-geometric codes’,” 2002.
  • [3] V. Guruswami and A. Rudra, “Better binary list decodable codes via multilevel concatenation,” IEEE Transactions on Information Theory, vol. 55, no. 1, pp. 19–26, Jan. 2009.
  • [4] G. R. Grimmett, “Percolation,” 1997.
  • [5] J.-P. Tillich and G. Zémor, “Discrete isoperimetric inequalities and the probability of a decoding error,” Comb. Probab. Comput., vol. 9, no. 5, pp. 465–479, 2000.
  • [6] G. Zémor, “Threshold effects in codes,” in Algebraic Coding, ser. Lecture Notes in Computer Science, G. D. Cohen, S. Litsyn, A. Lobstein, and G. Zémor, Eds., vol. 781.   Springer, 1993, pp. 278–286.
  • [7] G. A. Margulis, “Probabilistic characteristics of graphs with large connectivity,” Problemy Peredači Informacii, vol. 10, no. 2, pp. 101–108, 1974.
  • [8] V. Shoup, “Ntl: A library for doing number theory.” [Online]. Available: http://www.shoup.net/ntl
  • [9] A. Kiayias and M. Yung, “Cryptographic hardness based on the decoding of reed-solomon codes,” in ICALP, ser. Lecture Notes in Computer Science, P. Widmayer, F. T. Ruiz, R. M. Bueno, M. Hennessy, S. Eidenbenz, and R. Conejo, Eds., vol. 2380.   Springer, 2002, pp. 232–243.