Restricted Isometry of Fourier Matrices and List Decodability of Random Linear Codes


Abstract

We prove that a random linear code over $\mathbb{F}_q$, with probability arbitrarily close to $1$, is list decodable at radius $1 - 1/q - \epsilon$ with list size $L = O(1/\epsilon^2)$ and rate $R = \Omega_q(\epsilon^2/\log^3(1/\epsilon))$. Up to the polylogarithmic factor in $1/\epsilon$ and constant factors depending on $q$, this matches the lower bound $L = \Omega(1/\epsilon^2)$ for the list size and upper bound $R = O(\epsilon^2)$ for the rate. Previously only existence (and not abundance) of such codes was known for the special case $q = 2$ (Guruswami, Håstad, Sudan and Zuckerman, 2002).

In order to obtain our result, we employ a relaxed version of the well known Johnson bound on list decoding that translates the average Hamming distance between codewords to list decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature.

Finally, we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of $k$-sparse signals of length $N$. Specifically, we improve the number of samples from $O(k \log N \cdot \log^2 k \cdot \log(k \log N))$ to $O(k \log N \cdot \log^3 k)$. The proof involves bounding the expected supremum of a related Gaussian process by using an improved analysis of the metric defined by the process. This improvement is crucial for our application in list decoding.

1 Introduction

This work is motivated by the list decodability properties of random linear codes for correcting a large fraction of errors, approaching the information-theoretic maximum limit. We prove a near-optimal bound on the rate of such codes, by making a connection to and establishing improved bounds on the restricted isometry property of random submatrices of Hadamard matrices.

A $q$-ary error correcting code $\mathcal{C}$ of block length $n$ is a subset of $[q]^n$, where $[q]$ denotes any alphabet of size $q$. The rate of such a code is defined to be $\log_q |\mathcal{C}| / n$. A good code should be large (rate bounded away from $0$) and have its elements (codewords) well “spread out.” The latter property is motivated by the task of recovering a codeword $c \in \mathcal{C}$ from a noisy version of it that differs from $c$ in a bounded number of coordinates. Since a random string will differ from $c$ on an expected $(1 - 1/q)n$ positions, the information-theoretically maximum fraction of errors one can correct is bounded by the limit $1 - 1/q$. In fact, when the fraction of errors exceeds $(1 - 1/q)/2$, it is not possible to unambiguously identify the close-by codeword to the noisy string (unless the code has very few codewords, i.e., a rate approaching zero).

In the model of list decoding, however, recovery from a fraction of errors approaching the limit $1 - 1/q$ becomes possible. Under list decoding, the goal is to recover a small list of all codewords of $\mathcal{C}$ differing from an input string in at most $\rho n$ positions, where $\rho$ is the error fraction (our interest in this paper being the case when $\rho$ is close to $1 - 1/q$). This requires that $\mathcal{C}$ have the following sparsity property, called $(\rho, L)$-list decodability, for some small $L$: for every $y \in [q]^n$, there are at most $L$ codewords within Hamming distance $\rho n$ from $y$. We will refer to the parameter $L$ as the “list size” — it refers to the maximum number of codewords that the decoder may output when correcting a $\rho$ fraction of errors. Note that $(\rho, L)$-list decodability is a strictly combinatorial notion, and does not promise an efficient algorithm to compute the list of close-by codewords. In this paper, we only focus on this combinatorial aspect, and study the basic trade-off between $\rho$, $L$, and the rate for the important class of random linear codes, when $\rho = 1 - 1/q - \epsilon$ for small $\epsilon > 0$. We describe the prior results in this direction and state our results next.

For integers $q, L \ge 2$, a random $q$-ary code of rate $1 - h_q(\rho) - 1/L$ is $(\rho, L)$-list decodable with high probability. Here $h_q$ is the $q$-ary entropy function: $h_q(x) := x \log_q(q-1) - x \log_q x - (1 - x)\log_q(1 - x)$. This follows by a straightforward application of the probabilistic method, based on a union bound, over all centers $y \in [q]^n$ and all $(L+1)$-element subsets $S$ of codewords, on the probability that all codewords in $S$ lie in the Hamming ball of radius $\rho n$ centered at $y$. For $\rho = 1 - 1/q - \epsilon$, where we think of $q$ as fixed and $\epsilon \to 0$, this implies that a random code of rate $\Omega_q(\epsilon^2)$ is $(1 - 1/q - \epsilon, O(1/\epsilon^2))$-list decodable. (Here and below, the notations $O_q(\cdot)$ and $\Omega_q(\cdot)$ hide constant factors that depend only on $q$.)
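For concreteness, the following Python sketch (the helper names are ours, for illustration) evaluates the $q$-ary entropy function and the resulting rate guarantee $1 - h_q(\rho) - 1/L$ at $\rho = 1 - 1/q - \epsilon$:

    import math

    def h_q(x, q):
        # q-ary entropy function h_q(x).
        if x <= 0:
            return 0.0
        return (x * math.log(q - 1, q)
                - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q))

    def random_code_rate(q, eps, L):
        # Rate 1 - h_q(rho) - 1/L at which a random q-ary code is
        # (rho, L)-list decodable w.h.p., for rho = 1 - 1/q - eps.
        rho = 1 - 1 / q - eps
        return 1 - h_q(rho, q) - 1 / L

    # Example: q = 2, eps = 0.05, L = 1/eps^2 = 400.
    print(random_code_rate(2, 0.05, 400))   # ~ 0.0047, i.e., Omega(eps^2)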

Understanding list decodable codes at the extremal radii $\rho = 1 - 1/q - \epsilon$, for small $\epsilon$, is of particular significance, mainly due to numerous applications that depend on this regime of parameters. Examples include hardness amplification of Boolean functions [28], construction of hardcore predicates from one-way functions [18], construction of pseudorandom generators [28] and randomness extractors [29], inapproximability of witnesses [23], and approximating the VC dimension [25]. Moreover, linear list-decodable codes are further appealing due to their symmetries, succinct description, and efficient encoding. For some applications, linearity of list decodable codes is of crucial importance. For example, the black-box reduction from list decodable codes to capacity-achieving codes for additive noise channels in [21], as well as certain applications of Trevisan’s extractor [29] (e.g., [7]), rely on linearity of the underlying list decodable code. Furthermore, list decoding of linear codes features an interplay between linear subspaces and Hamming balls and their intersection properties, which is of significant interest from a combinatorial perspective.

This work is focused on random linear codes, which are subspaces of $\mathbb{F}_q^n$, where $\mathbb{F}_q$ is the finite field with $q$ elements. A random linear code $\mathcal{C}$ of rate $R$ is sampled by picking $Rn$ random vectors in $\mathbb{F}_q^n$ and letting $\mathcal{C}$ be their $\mathbb{F}_q$-span. Since the codewords of $\mathcal{C}$ are now not all independent (in fact they are not even $3$-wise independent), the above naive argument only proves the $(\rho, L)$-list decodability property for codes of rate $1 - h_q(\rho) - 1/\log_q(L+1)$ [31]. For the setting $\rho = 1 - 1/q - \epsilon$, this implies a list size bound of $q^{O_q(1/\epsilon^2)}$ for random linear codes of rate $\Omega_q(\epsilon^2)$, which is exponentially worse than the $O(1/\epsilon^2)$ bound for random codes. Understanding whether this exponential discrepancy between general and linear codes is inherent was raised as an open question by Elias [13]. Despite much research, the exponential bound was the best known for random linear codes (except for the case of $q = 2$, and even for $q = 2$ only an existence result was known; see the related results section below for more details).

Our main result in this work closes this gap between random linear and random codes, up to polylogarithmic factors in the rate. We state a simplified version of the main theorem (Theorem ?) below: for every fixed $q$ and every desired constant $\gamma > 0$, a random linear code over $\mathbb{F}_q$ of rate $\Omega_q(\epsilon^2/\log^3(1/\epsilon))$ is, with probability at least $1 - \gamma$, $(1 - 1/q - \epsilon, O(1/\epsilon^2))$-list decodable.

We remark that both the rate and list size are close to optimal for list decoding from a $1 - 1/q - \epsilon$ fraction of errors. For the rate, this follows from the fact that the $q$-ary “list decoding capacity” is given by $1 - h_q(\rho)$, which is $\Theta_q(\epsilon^2)$ for $\rho = 1 - 1/q - \epsilon$. For the list size, a lower bound of $\Omega_q(1/\epsilon^2)$ is known — this follows from [2] for $q = 2$, and was shown for all $q$ in [22] (and also in [3] under a convexity conjecture that was later proved in [4]). We have also assumed that the alphabet size $q$ is fixed and have not attempted to obtain the best possible dependence of the constants on the alphabet size.

1.1 Related results

We now discuss some other previously known results concerning list decodability of random linear codes.

First, it is well known that a random linear code of rate $\Omega_q(\epsilon^4)$ is $(1 - 1/q - \epsilon, O(1/\epsilon^2))$-list decodable with high probability. This follows by combining the Johnson bound for list decoding (see, for example, [20]) with the fact that such codes lie on the Gilbert-Varshamov bound and have relative distance $1 - 1/q - O(\epsilon^2)$ with high probability. This result gets the correct quadratic dependence of the list size on $1/\epsilon$, but the rate is worse by a factor of roughly $\epsilon^2$.

Second, for the case of $q = 2$, the existence of $(\rho, L)$-list decodable binary linear codes of rate $1 - h(\rho) - 1/L$ was proved in [16]. For $\rho = 1/2 - \epsilon$, this implies the existence of binary linear codes of rate $\Omega(\epsilon^2)$ list decodable with list size $O(1/\epsilon^2)$ from an error fraction $1/2 - \epsilon$. This matches the bounds for random codes, and is optimal up to constant factors. However, there are two shortcomings with this result: (i) it only works for $q = 2$ (the proof makes use of this in a crucial way, and extensions of the proof to larger $q$ have been elusive), and (ii) the proof is based on the semi-random method: it only shows the existence of such a code while failing to give any sizeable lower bound on the probability that a random linear code has the claimed list decodability property.

Motivated by this state of affairs, in [15], the authors proved that a random $q$-ary linear code of rate $1 - h_q(\rho) - C_{\rho,q}/L$ is $(\rho, L)$-list decodable with high probability, for some constant $C_{\rho,q}$ that depends only on $\rho$ and $q$. This matches the result for completely random codes up to the leading constant $C_{\rho,q}$ in front of $1/L$. Unfortunately, for $\rho = 1 - 1/q - \epsilon$, the constant $C_{\rho,q}$ depends exponentially on $1/\epsilon$. Thus, this result only implies a list size exponential in $1/\epsilon$ at rates $\Omega_q(\epsilon^2)$, as opposed to the optimal $O(1/\epsilon^2)$ that we seek.

Summarizing, for random linear codes to achieve a list size bound polynomial in $1/\epsilon$ for error fraction $1 - 1/q - \epsilon$, the best known lower bound on the rate was $\Omega_q(\epsilon^4)$. We are able to show that random linear codes achieve a list size growing quadratically in $1/\epsilon$ for a rate of $\Omega_q(\epsilon^2/\log^3(1/\epsilon))$. One downside of our result is that we do not get a probability bound of $1 - o(1)$, but only $1 - \gamma$ for any desired constant $\gamma > 0$ (essentially, our rate bound degrades by a constant factor depending on $\gamma$).

Finally, there are also some results showing limitations on the list decodability of random codes. It is known that both random codes and random linear codes of rate $1 - h_q(\rho) - \gamma$ are, with high probability, not $(\rho, c/\gamma)$-list decodable, for some constant $c = c_{\rho,q} > 0$ [26]. For arbitrary (not necessarily random) codes, the best known lower bound on list size remains that of [2].

1.2 Proof technique

The proof of our result uses a different approach from the earlier works on list decodability of random linear codes [31]. Our approach consists of three steps.

Step 1: Our starting point is a relaxed version of the Johnson bound for list decoding that only requires the average pairwise distance of every set of $L + 1$ codewords to be large (where $L$ is the target list size), instead of the minimum distance of the code.

Technically, this extension is easy and pretty much follows by inspecting the proof of the Johnson bound. It has recently been observed for the binary case by Cheraghchi, and is implicit in the survey [8]. Here, we give a proof of the relaxed Johnson bound for a more general setting of parameters, and apply it in a setting where the usual Johnson bound is insufficient. Furthermore, as a side application, we show how the average version can be used to bound the list decoding radius of codes that do not have too many codewords close to any codeword — such a bound was shown via a different proof in [17], where it was used to establish the list decodability of binary Reed-Muller codes up to their distance.
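As a toy illustration of the quantity involved, the following Python sketch (function names are ours) computes, for a small code, both the minimum distance and the smallest average pairwise distance over subsets of $L + 1$ codewords; the relaxed bound only needs the latter to be large:

    from itertools import combinations

    def rel_dist(x, y):
        # Relative Hamming distance between equal-length tuples.
        return sum(a != b for a, b in zip(x, y)) / len(x)

    def min_avg_pairwise_dist(code, ell):
        # Smallest average pairwise relative distance over ell-subsets;
        # the relaxed Johnson bound uses this with ell = L + 1.
        best = 1.0
        for subset in combinations(code, ell):
            pairs = list(combinations(subset, 2))
            avg = sum(rel_dist(x, y) for x, y in pairs) / len(pairs)
            best = min(best, avg)
        return best

    code = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 0, 0), (0, 0, 1, 1)]
    print(min(rel_dist(x, y) for x, y in combinations(code, 2)))  # 0.5
    print(min_avg_pairwise_dist(code, 3))                         # 0.667 > 0.5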

Step 2: Prove that the $(L+1)$-wise average distance property of random linear codes is implied by the order-$(L+1)$ restricted isometry property (RIP-2) of random submatrices of the Hadamard matrix (or, in general, matrices related to the Discrete Fourier Transform).

This part is also easy technically, and our contribution lies in making this connection between restricted isometry and list decoding. The restricted isometry property has received much attention lately due to its relevance to compressed sensing (cf. [5]) and is also connected to the Johnson-Lindenstrauss dimension reduction lemma [1]. Our work shows another interesting application of this concept.

Step 3: Prove the needed restricted isometry property of the matrix obtained by sampling rows of the Hadamard matrix.

This is the most technical part of our proof. Let us focus on $q = 2$ for simplicity, and let $H \in \{-1, +1\}^{N \times N}$ be the Hadamard (binary Discrete Fourier Transform) matrix with $N = 2^h$, whose $(x, y)$'th entry is $(-1)^{\langle x, y \rangle}$ for $x, y \in \mathbb{F}_2^h$. We prove that (the scaled version of) a random submatrix of $H$ formed by sampling a subset of $m = O(k \log^3 k \cdot \log N)$ rows of $H$ satisfies RIP of order $k$ with high probability. This means that every $k$ columns of this sampled matrix are nearly orthogonal — formally, every $m \times k$ submatrix of the sampled matrix has all its singular values close to $1$.
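This near-orthogonality is easy to probe experimentally. The following Python sketch (a simple illustration, not the machinery of the proof; all sizes are arbitrary) subsamples rows of a Hadamard matrix, rescales so that columns have unit norm, and spot-checks the singular values of random $m \times k$ column-submatrices:

    import numpy as np

    rng = np.random.default_rng(0)
    h, k, m = 8, 8, 100                 # N = 2^h columns; order k; m sampled rows
    N = 2 ** h

    H = np.array([[1]])
    for _ in range(h):                  # Hadamard matrix via tensor powers
        H = np.kron(H, np.array([[1, 1], [1, -1]]))

    A = H[rng.choice(N, m, replace=False), :] / np.sqrt(m)  # unit-norm columns

    worst = 0.0
    for _ in range(2000):               # random m x k column-submatrices
        cols = rng.choice(N, k, replace=False)
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        worst = max(worst, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    print("largest deviation of squared singular values:", worst)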

For random matrices with i.i.d. Gaussian or $\pm 1$ entries, it is relatively easy to prove RIP-2 of order $k$ when the number of rows is $m = O(k \log(N/k))$ [1]. Proving such a bound for submatrices of the Discrete Fourier Transform (DFT) matrix (as conjectured in [27]) has been an open problem for many years. The difficulty is that the entries within a row are no longer independent, and not even triple-wise independent. The best proven upper bound on $m$ for this case was $m = O(k \log^2 k \cdot \log(k \log N) \cdot \log N)$ [27], improving an earlier upper bound of Candès and Tao [11]. We improve the bound to $m = O(k \log^3 k \cdot \log N)$ — the key gain is that we do not have the $\log(k \log N)$ factor, whose $\log\log N$ term grows with the block length. This is crucial for our list decoding connection, as the rate of the code associated with the matrix will be $\Omega((\log N)/m)$, which would be $o(1)$ if $m = \omega(k \cdot \mathrm{polylog}(k) \cdot \log N)$. We will take $k = O(1/\epsilon^2)$ (the target list size), and the rate of the random linear code will be $\Omega(\epsilon^2/\log^3(1/\epsilon))$, giving the bounds claimed in Theorem ?. We remark that any improvement of the RIP bound towards the information-theoretic limit $m = O(k \log(N/k))$, a challenging open problem, would immediately translate into an improvement on the list decoding rate of random linear codes via our reductions.

Our RIP-2 proof for row-subsampled DFT matrices proceeds along the lines of [27], and is based on upper bounding the expectation of the supremum of a related Gaussian process [24]. The index set of the Gaussian process is $B$, the set of all $k$-sparse unit vectors in $\mathbb{C}^N$, and the Gaussian random variable $G_x$ associated with $x \in B$ is a Gaussian linear combination of the squared projections of $x$ on the rows sampled from the DFT matrix (in the binary case these are just squared Fourier coefficients). The key to analyzing the Gaussian process is an understanding of the associated (pseudo)-metric on the index set, defined by $d(x, y) := (\mathbb{E}|G_x - G_y|^2)^{1/2}$. This metric is difficult to work with directly, so we upper bound distances under $d$ in terms of distances under a different metric $d'$. The principal difference in our analysis compared to [27] is in the choice of $d'$ — instead of the max norm used in [27], we use a large finite norm applied to the sampled Fourier coefficients. We then estimate the covering numbers for $d'$ and use Dudley’s theorem to bound the supremum of the Gaussian process.
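For reference, the form of Dudley's entropy bound used here, stated generically (here $\mathcal{N}(T, d, u)$ denotes the covering number of the index set $T$ at scale $u$ under the process metric $d$, and $C$ is an absolute constant):

    \[
      \mathbb{E} \sup_{x \in T} G_x \;\le\; C \int_0^{\mathrm{diam}(T, d)}
          \sqrt{\log \mathcal{N}(T, d, u)} \,\mathrm{d}u,
      \qquad
      d(x, y) := \big( \mathbb{E}\,|G_x - G_y|^2 \big)^{1/2}.
    \]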

It is worth pointing out that, as we prove in this work, for low-rate random linear codes the average-distance quantity discussed in Step 1 above is substantially larger than the minimum distance of the code. This allows the relaxed version of the Johnson bound to attain better bounds than what the standard (minimum-distance based) Johnson bound would obtain on the list decodability of random linear codes. While explicit examples of linear codes surpassing the standard Johnson bound are already known in the literature (see [14] and the references therein), a by-product of our result is that in fact most linear codes (at least in the low-rate regime) surpass the standard Johnson bound. However, an interesting question is to see whether there are codes that are list decodable even beyond the relaxed version of the Johnson bound studied in this work.

Organization of the paper. The rest of the paper is organized as follows. After fixing some notation, in Section 2 we prove the average-case Johnson bound, which relates a lower bound on the average pairwise distances of subsets of codewords in a code to list decoding guarantees on the code. We also show, in Section 2.3, an application of this bound to proving the list decodability of “locally sparse” codes, which is of independent interest and simplifies some earlier list decoding results. In Section 3, we prove our main theorem on the list decodability of random linear codes by demonstrating a reduction from RIP-2 guarantees of DFT-based complex matrices to the average distance of random linear codes, combined with the Johnson bound. Finally, the RIP-2 bounds on matrices related to random linear codes are proved in Section 4.

Notation. Throughout the paper, we will be interested in the list decodability of $q$-ary codes. We will denote an alphabet of size $q$ by $[q]$ (which one can identify with $\{0, 1, \ldots, q-1\}$); for linear codes, the alphabet will be $\mathbb{F}_q$, the finite field with $q$ elements (in which case $q$ is a prime power).

We use the notation $[n] := \{1, \ldots, n\}$. When $f \le C g$ (resp., $f \ge C g$) for some absolute constant $C > 0$, we use the shorthand $f \lesssim g$ (resp., $f \gtrsim g$). We use the notation $\log x$ when the base of the logarithm is not of significance (e.g., $O(\log x)$). Otherwise the base is subscripted, as in $\log_2 x$. The natural logarithm is denoted by $\ln x$.

For a matrix $M$ and a multiset of rows $\Omega$, define $M_\Omega$ to be the matrix with $|\Omega|$ rows, formed by the rows of $M$ picked by $\Omega$ (in some arbitrary order). Each row of $M$ may be repeated the appropriate number of times specified by $\Omega$.
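In code, the row-sampling operator is simply the following (our own illustration):

    import numpy as np

    def restrict_rows(M, omega):
        # M restricted to a multiset omega of row indices: |omega| rows,
        # with repeats kept as prescribed by the multiset.
        return np.asarray(M)[list(omega), :]

    M = np.arange(12).reshape(4, 3)
    print(restrict_rows(M, [0, 2, 2]))   # row 2 appears twice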

2 Average-distance based Johnson bound

In this section, we show how the average pairwise distances of subsets of codewords in a $q$-ary code translate into list decodability guarantees on the code.

Recall that the relative Hamming distance between strings $x, y \in [q]^n$, denoted $\delta(x, y)$, is defined to be the fraction of positions $i \in [n]$ for which $x_i \neq y_i$. The relative distance of a code $\mathcal{C}$ is the minimum value of $\delta(c, c')$ over all pairs of distinct codewords $c, c' \in \mathcal{C}$. We define list decodability as follows: a code $\mathcal{C} \subseteq [q]^n$ is $(\rho, L)$-list decodable if, for every $y \in [q]^n$, the number of codewords within relative distance $\rho$ from $y$ is at most $L$.

The following definition captures a crucial function that allows one to generically pass from a distance property to list decodability: for $0 \le \delta \le 1 - 1/q$, the ($q$-ary) Johnson radius is defined as

$$J_q(\delta) := \Big(1 - \frac{1}{q}\Big)\Bigg(1 - \sqrt{1 - \frac{q\,\delta}{q-1}}\Bigg).$$

The well known Johnson bound in coding theory states that a $q$-ary code of relative distance $\delta$ is list decodable up to radius $J_q(\delta)$ with small list size (see for instance [20]). Below we prove a version of this bound which does not need every pair of codewords to be far apart but instead works when the average distance of a set of codewords is large: if every set of $L + 1$ distinct codewords of $\mathcal{C}$ has average pairwise relative distance at least $\delta$, then $\mathcal{C}$ is $(J_q(\frac{L}{L+1}\,\delta), L)$-list decodable (Theorem ?). The proof of this version of the Johnson bound is a simple modification of earlier proofs, but working with this version is a crucial step in our near-tight analysis of the list decodability of random linear codes.

Thus, if one is interested in a bound for list decoding with list size $L$, it is enough to consider the average pairwise Hamming distance of subsets of $L + 1$ codewords.

2.1 Geometric encoding of $q$-ary symbols

We will give a geometric proof of the above result. For this purpose, we will map vectors in $[q]^n$ to complex vectors and argue about the inner products of the resulting vectors. The key gadget is the simplex encoding: for a symbol $x \in [q]$ (identified with $\mathbb{Z}_q$), define $\phi(x) \in \mathbb{C}^{q-1}$ as the vector whose $j$'th coordinate, for $j \in \{1, \ldots, q-1\}$, equals $\omega^{jx}/\sqrt{q-1}$, where $\omega := e^{2\pi i/q}$.

For complex vectors $u, v \in \mathbb{C}^d$, we define their inner product as $\langle u, v \rangle := \sum_{j=1}^{d} u_j \overline{v_j}$. From the definition of the simplex encoding, the following immediately follows: $\langle \phi(x), \phi(y) \rangle = 1$ when $x = y$, and $\langle \phi(x), \phi(y) \rangle = -1/(q-1)$ when $x \neq y$.

We can extend the above encoding to map elements of $[q]^n$ into $\mathbb{C}^{n(q-1)}$ in the natural way by applying this encoding to each coordinate separately. From the above inner product formula, it follows that for $x, y \in [q]^n$ we have

$$\langle \phi(x), \phi(y) \rangle = n\Big(1 - \frac{q}{q-1}\,\delta(x, y)\Big).$$
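A quick numerical check of these identities (helper names are ours):

    import cmath

    def simplex(x, q):
        # Simplex encoding: unit vector in C^(q-1) with
        # <phi(x), phi(y)> = 1 if x == y, else -1/(q-1).
        w = cmath.exp(2j * cmath.pi / q)
        return [w ** (j * x) / (q - 1) ** 0.5 for j in range(1, q)]

    def inner(u, v):
        return sum(a * b.conjugate() for a, b in zip(u, v))

    q = 5
    print(inner(simplex(2, q), simplex(2, q)))   # ~ 1
    print(inner(simplex(2, q), simplex(4, q)))   # ~ -1/(q-1) = -0.25

    # Coordinate-wise extension to strings in [q]^n:
    # <phi(x), phi(y)> = n * (1 - q/(q-1) * delta(x, y)).
    x, y = [0, 1, 2, 3], [0, 1, 4, 0]            # n = 4, delta = 1/2
    px = [c for s in x for c in simplex(s, q)]
    py = [c for s in y for c in simplex(s, q)]
    print(inner(px, py))                          # ~ 4 * (1 - (5/4)(1/2)) = 1.5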

Similarly, we overload the notation $\phi(\cdot)$ for matrices with entries over $[q]$. Let $M$ be a matrix in $[q]^{n \times k}$. Then, $\phi(M)$ is an $n(q-1) \times k$ complex matrix obtained from $M$ by replacing each entry with its simplex encoding, considered as a column complex vector.

Finally, we extend the encoding to sets of vectors (i.e., codes) as well. For a set $X \subseteq [q]^n$, $\phi(X)$ is defined as a matrix with $|X|$ columns indexed by the elements of $X$, where the column corresponding to each $x \in X$ is set to be $\phi(x)$.

2.2 Proof of average-distance Johnson bound

We now prove the Johnson bound based on average distance.

Suppose $c_1, \ldots, c_{L+1} \in \mathcal{C}$ are such that their average pairwise relative distance is at least $\delta$, i.e.,

$$\binom{L+1}{2}^{-1} \sum_{1 \le i < j \le L+1} \delta(c_i, c_j) \ge \delta.$$

We will prove that $c_1, \ldots, c_{L+1}$ cannot all lie in a Hamming ball of radius less than $J_q(\frac{L}{L+1}\,\delta) \cdot n$. Since every subset of $L + 1$ codewords of $\mathcal{C}$ satisfies the average-distance assumption, this will prove that $\mathcal{C}$ is $(J_q(\frac{L}{L+1}\,\delta), L)$-list decodable.

Suppose, for contradiction, that there exists $y \in [q]^n$ such that $\delta(y, c_i) \le \rho$ for all $i \in [L+1]$ and some $\rho < J_q(\frac{L}{L+1}\,\delta)$. Recalling the definition of $J_q$, note that the assumption about $\rho$ implies

$$\Big(1 - \frac{q\,\rho}{q-1}\Big)^2 > 1 - \frac{q}{q-1} \cdot \frac{L\,\delta}{L+1}.$$

For $i \in [L+1]$, define the vector $v_i := \phi(c_i) - \alpha\, \phi(y)$, for some parameter $\alpha > 0$ to be chosen later. By the inner product formula and the assumptions about the $c_i$, we have $\langle \phi(c_i), \phi(y) \rangle \ge n(1 - \frac{q\rho}{q-1})$ for every $i$, and $\sum_{i \neq j} \langle \phi(c_i), \phi(c_j) \rangle \le (L+1)L\,n(1 - \frac{q\delta}{q-1})$. We have

$$0 \le \Big\| \sum_{i=1}^{L+1} v_i \Big\|^2 = \sum_{i} \|v_i\|^2 + \sum_{i \neq j} \langle v_i, v_j \rangle \le (L+1)\,n\Big(1 + \alpha^2 - 2\alpha\Big(1 - \frac{q\rho}{q-1}\Big)\Big) + (L+1)L\,n\Big(1 - \frac{q\delta}{q-1} - 2\alpha\Big(1 - \frac{q\rho}{q-1}\Big) + \alpha^2\Big).$$

Picking $\alpha = 1 - \frac{q\rho}{q-1}$, the right hand side becomes $(L+1)\,n\,\big(1 + L(1 - \frac{q\delta}{q-1}) - (L+1)\alpha^2\big)$, and recalling the inequality on $\rho$ above, we see that the above expression is negative, a contradiction.

2.3 An application: List decodability of Reed-Muller and locally sparse codes

Our average-distance Johnson bound implies the following combinatorial result for the list decodability of codes that have few codewords in a certain vicinity of every codeword. The result allows one to translate a bound on the number of codewords in balls centered at codewords into a bound on the number of codewords in an arbitrary Hamming ball of smaller radius. An alternate proof of the bound below (using a “deletion” technique) was given by Gopalan, Klivans, and Zuckerman [17], where they used it to argue the list decodability of (binary) Reed-Muller codes up to their distance. A mild strengthening of the deletion lemma was later used in [14] to prove combinatorial bounds on the list decodability of tensor products and interleavings of binary linear codes.

More precisely, suppose that for every codeword $c \in \mathcal{C}$ there are at most $A$ codewords (including $c$ itself) at relative distance less than $\delta$ from $c$. Then, for every integer $L \ge A$, the code $\mathcal{C}$ is $(J_q(\frac{L - A + 1}{L+1}\,\delta), L)$-list decodable (Theorem ?). Note that setting $A = 1$ above gives the usual Johnson bound for a code of relative distance at least $\delta$.

We will lower bound the average pairwise relative distance of every subset of $L + 1$ codewords of $\mathcal{C}$, and then apply Theorem ?.

Let $c_1, \ldots, c_{L+1}$ be distinct codewords of $\mathcal{C}$. For each $i$, the sum of relative distances of the codewords $c_j$, $j \neq i$, from $c_i$ is at least $(L - A + 1)\,\delta$, since there are at most $A$ codewords (including $c_i$ itself) at relative distance less than $\delta$ from $c_i$. Therefore

$$\binom{L+1}{2}^{-1} \sum_{i < j} \delta(c_i, c_j) \ge \frac{(L+1)(L - A + 1)\,\delta}{(L+1)L} = \frac{L - A + 1}{L}\,\delta.$$

Setting $\delta' := \frac{L - A + 1}{L}\,\delta$, Theorem ? implies that $\mathcal{C}$ is $(J_q(\frac{L}{L+1}\,\delta'), L)$-list decodable. But $\frac{L}{L+1}\,\delta' = \frac{L - A + 1}{L+1}\,\delta$, so the claim follows.

3 Proof of the list decoding result

In this section, we prove our main result on the list decodability of random linear codes. The main idea is to use the restricted isometry property (RIP) of complex matrices arising from random linear codes for bounding average pairwise distances of subsets of codewords. Combined with the average-distance based Johnson bound shown in the previous section, this proves the desired list decoding bounds. The RIP-2 condition that we use in this work is defined as follows: a complex matrix $M$ with unit-norm columns satisfies RIP-2 of order $k$ with constant $\delta$ if, for every $k$-sparse vector $x$,

$$(1 - \delta)\,\|x\|_2^2 \le \|Mx\|_2^2 \le (1 + \delta)\,\|x\|_2^2.$$

Since we will be working with list decoding radii close to $1 - 1/q$, we derive a simplified expression for the Johnson bound in this regime; namely, the following: if every set of $L + 1$ codewords of $\mathcal{C}$ has average pairwise relative distance at least $(1 - 1/q)(1 - \epsilon^2)$, then $\mathcal{C}$ is $\big((1 - \frac{1}{q})\big(1 - \sqrt{\epsilon^2 + \frac{1}{L+1}}\big),\, L\big)$-list decodable.

The proof is nothing but a simple manipulation of the bound given by Theorem ?. Let $\delta := (1 - 1/q)(1 - \epsilon^2)$. Theorem ? implies that $\mathcal{C}$ is $(J_q(\frac{L}{L+1}\,\delta), L)$-list decodable. Now,

$$J_q\Big(\frac{L}{L+1}\,\delta\Big) = \Big(1 - \frac{1}{q}\Big)\Bigg(1 - \sqrt{1 - \frac{L}{L+1}\,(1 - \epsilon^2)}\Bigg) = \Big(1 - \frac{1}{q}\Big)\Bigg(1 - \sqrt{\frac{1 + L\,\epsilon^2}{L+1}}\Bigg) \ge \Big(1 - \frac{1}{q}\Big)\Bigg(1 - \sqrt{\epsilon^2 + \frac{1}{L+1}}\Bigg).$$
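A numerical sanity check of this regime (using the standard formula for $J_q$; helper names are ours):

    import math

    def J_q(delta, q):
        # Johnson radius: (1 - 1/q) * (1 - sqrt(1 - q*delta/(q-1))).
        return (1 - 1 / q) * (1 - math.sqrt(max(0.0, 1 - q * delta / (q - 1))))

    q, eps = 2, 0.05
    L = math.ceil(1 / eps ** 2)                 # list size ~ 1/eps^2
    delta = (1 - 1 / q) * (1 - eps ** 2)        # average-distance guarantee
    radius = J_q(L / (L + 1) * delta, q)
    print(radius, 1 - 1 / q - eps)              # 0.464... vs 0.45

So for these parameters the guaranteed decoding radius indeed exceeds $1 - 1/q - \epsilon$.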

In order to prove lower bounds on the average distance of random linear codes, we will use the simplex encoding of vectors (Definition ?), along with the following simple geometric lemma.

The proof is a simple application of the inner product formula. The squared norm on the right hand side can be expanded as

$$\Big\| \sum_{i=1}^{\ell} \phi(x_i) \Big\|^2 = \sum_{i} \|\phi(x_i)\|^2 + \sum_{i \neq j} \langle \phi(x_i), \phi(x_j) \rangle = \ell\, n + \sum_{i \neq j} n\Big(1 - \frac{q}{q-1}\,\delta(x_i, x_j)\Big),$$

and the bound follows.

Now we are ready to formulate our reduction from RIP-2 to average distance of codes.

Consider any set $S$ of $L + 1$ codewords, and the real vector $x$ with entries in $\{0, 1\}$ that is exactly supported on the positions indexed by the codewords in $S$. Obviously, $\|x\|_2^2 = L + 1$. Thus, by the definition of RIP-2 (Definition ?), we know that, defining $M := \phi(\mathcal{C})/\sqrt{n}$ (so that the columns of $M$ have unit norm),

$$\|Mx\|_2^2 \le (1 + \delta)(L + 1).$$

Observe that $\|Mx\|_2^2 = \frac{1}{n}\|\sum_{c \in S} \phi(c)\|^2$. Let $\bar{\delta}$ be the average pairwise distance between codewords in $S$. By Lemma ? we conclude that

$$(L+1) + (L+1)L\Big(1 - \frac{q\,\bar{\delta}}{q-1}\Big) \le (1 + \delta)(L+1), \qquad \text{and therefore} \qquad \bar{\delta} \ge \frac{q-1}{q}\Big(1 - \frac{\delta}{L}\Big).$$
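As a concrete sanity check of the identity underlying this reduction, on a toy random linear code (illustrative code; all names are ours):

    import numpy as np
    from itertools import product, combinations

    q, n, h = 3, 60, 2                       # small random F_q-linear code
    rng = np.random.default_rng(2)
    G = rng.integers(0, q, (n, h))
    code = [tuple(G @ np.array(msg) % q) for msg in product(range(q), repeat=h)]

    w = np.exp(2j * np.pi / q)
    def phi(c):                              # simplex encoding of a codeword
        return np.array([w ** (j * s) for s in c
                         for j in range(1, q)]) / np.sqrt(q - 1)

    M = np.stack([phi(c) / np.sqrt(n) for c in code], axis=1)  # unit columns

    S = [1, 2, 4, 7]                         # a set of L + 1 = 4 codewords
    L = len(S) - 1
    x = np.zeros(len(code)); x[S] = 1.0
    dbar = np.mean([np.mean([a != b for a, b in zip(code[i], code[j])])
                    for i, j in combinations(S, 2)])
    print(np.linalg.norm(M @ x) ** 2)                          # ||Mx||^2 ...
    print((L + 1) * (1 + L * (1 - q * dbar / (q - 1))))        # ... equals this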

We remark that the exact choice of the RIP constant in the above result is arbitrary, as long as it remains an absolute constant. Contrary to applications in compressed sensing, for our application it also makes sense to have RIP-2 with constants larger than one, since the proof only requires the upper bound in Definition ?.

By combining Lemma ? with the simplified Johnson bound of Theorem ?, we obtain the following corollary.

The matrix $\phi(\mathcal{C})$ for a linear code $\mathcal{C}$ has a special form. It is straightforward to observe that, when $q = 2$, the matrix $\phi(\mathcal{C})$ is an incomplete Hadamard-Walsh transform matrix with $2^h$ columns, where $h$ is the dimension of the code. In general, $\phi(\mathcal{C})$ turns out to be related to a Discrete Fourier Transform matrix. Specifically, we have the following observation.

The RIP-2 condition for random complex matrices arising from random linear codes is proved in Theorem ? of Section 4. We now combine this theorem with the preceding results of this section to prove our main theorem on list decodability of random linear codes.

Let $\mathcal{C}$ be a uniformly random linear code associated to a random generator matrix $G$ over $\mathbb{F}_q$, for a rate parameter $R$ to be determined later. Consider the random matrix $M$ from Observation ?, where $M := \phi(\mathcal{C})/\sqrt{n}$. Recall that $M$ is an $m \times N$ complex matrix, where $m = n(q-1)$ and $N = q^{Rn}$. Let $k := L + 1$. By Theorem ?, for large enough $n$ (thus, large enough $N$) and with the desired probability, the matrix $M$ satisfies RIP-2 of order $k$ with an absolute constant, for some choice of $m$ bounded by

$$m = O(k \log^3 k \cdot \log N).$$

Suppose $n$ is large enough and $m$ satisfies the above bound, so that the RIP-2 condition holds. By Theorem ?, this ensures that the code $\mathcal{C}$ is $(1 - 1/q - \epsilon, L)$-list decodable with probability at least $1 - \gamma$.

It remains to verify the bound on the rate of $\mathcal{C}$. We observe that, whenever the RIP-2 condition is satisfied, the generator matrix must have rank exactly $Rn$, since otherwise, there would be distinct vectors $x, y \in \mathbb{F}_q^{Rn}$ such that $Gx = Gy$. Thus in that case, the columns of $M$ corresponding to $x$ and $y$ become identical, implying that $M$ cannot satisfy RIP-2 of any nontrivial order. Thus we can assume that the rate of $\mathcal{C}$ is indeed equal to $R$. Now we have

$$R = \frac{\log_q N}{n} = \frac{(q-1)\log_q N}{m} = \Omega\Big(\frac{1}{k \log^3 k}\Big).$$

Substituting $k = L + 1 = O(1/\epsilon^2)$ into the above expression yields the desired bound $R = \Omega_q(\epsilon^2/\log^3(1/\epsilon))$.
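Numerically, the parameters fit together as follows (the constant here is illustrative, not the one from the proof):

    import math

    def ld_rate(eps, C=1.0):
        # Rate ~ 1/(C * k * log^3 k) with k = 1/eps^2, i.e.,
        # Omega(eps^2 / log^3(1/eps)) up to constant factors.
        k = 1 / eps ** 2
        return 1 / (C * k * math.log(k) ** 3)

    for eps in (0.1, 0.05, 0.01):
        print(eps, ld_rate(eps), eps ** 2 / math.log(1 / eps) ** 3)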

4 Restricted isometry property of DFT-based matrices

In this section, we prove RIP-2 for random incomplete Discrete Fourier Transform matrices. Namely, we prove the following theorem.

The proof extends and closely follows the original proof of Rudelson and Vershynin [27]. However, we modify the proof at a crucial point to obtain a strict improvement over their original analysis, which is necessary for our list decoding application. We present our improved analysis in this section.

Let $N = q^h$. Each row of $M$ is indexed by an element $u \in \mathbb{F}_q^h$ and some $j \in \{1, \ldots, q-1\}$ (recall that $M$ has $(q-1)q^h$ rows in total, where $h$ is the dimension of the underlying code). Denote the row corresponding to $u$ and $j$ by $r_{u,j}$ (and, for a sampled row index $\omega = (u, j) \in \Omega$, simply by $r_\omega$), and moreover, denote the set of $k$-sparse unit vectors in $\mathbb{C}^N$ by $B$.

In order to show that (a suitably scaled version of) $M_\Omega$ satisfies RIP of order $k$, we need to verify that for any $x \in B$,

$$1 - \delta \le \frac{1}{m} \sum_{\omega \in \Omega} |\langle r_\omega, x \rangle|^2 \le 1 + \delta,$$

where $m := |\Omega|$.

In light of Proposition ?, without loss of generality we can assume that $x$ is real-valued (since the inner product between any pair of columns of $M$ is real-valued).

For $i \in [N]$, denote the $i$th column of the normalized sampled matrix $\frac{1}{\sqrt{m}} M_\Omega$ by $M_i$. For $x \in B$, define the random variable

$$X_x := \Big|\, \big\|{\textstyle\frac{1}{\sqrt{m}}} M_\Omega\, x \big\|_2^2 - 1 \,\Big| = \Big| \sum_{i \neq j} x_i x_j \langle M_i, M_j \rangle \Big|,$$

where the second equality holds since each column of $\frac{1}{\sqrt{m}} M_\Omega$ has norm $1$ and $\|x\|_2 = 1$. Thus, the RIP condition is equivalent to

$$\sup_{x \in B} X_x \le \delta.$$

Recall that $X_x$ is a random variable depending on the randomness in $\Omega$. The proof of the RIP condition involves two steps: first, bounding $\sup_{x \in B} X_x$ in expectation, and second, a tail bound. The first step is proved, in detail, in the following lemma.

We begin by observing that the columns of $M$ are orthogonal in expectation; i.e., for any $i \neq j$, we have

$$\mathbb{E}_\Omega\, \langle M_i, M_j \rangle = 0.$$

This follows from the inner product formula and the fact that the expected relative Hamming distance between the codewords corresponding to $i$ and $j$, when $i \neq j$, is exactly $1 - 1/q$. It follows that for every $x \in B$, $\mathbb{E}\big[\|{\textstyle\frac{1}{\sqrt{m}}} M_\Omega\, x\|_2^2 - 1\big] = 0$; namely, the stochastic process is centered.

Recall that we wish to estimate

$$\mathbb{E}_\Omega\, \sup_{x \in B} X_x.$$

The random variables $|\langle r_\omega, x \rangle|^2$ and $|\langle r_{\omega'}, x \rangle|^2$ are independent whenever $\omega \neq \omega'$, as the elements of the multiset $\Omega$ are drawn independently. Therefore, we can use the standard symmetrization technique on summation of independent random variables in a stochastic process (Proposition ?) and conclude from the above that

$$\mathbb{E}_\Omega \sup_{x \in B} X_x \;\lesssim\; \frac{1}{m}\, \mathbb{E}_{\Omega, g}\, \sup_{x \in B} \Big| \sum_{\omega \in \Omega} g_\omega\, |\langle r_\omega, x \rangle|^2 \Big|,$$

where $(g_\omega)_{\omega \in \Omega}$ is a sequence of independent standard Gaussian random variables. Denote the term inside the supremum in the right hand side by $G_x$; namely,

$$G_x := \sum_{\omega \in \Omega} g_\omega\, |\langle r_\omega, x \rangle|^2.$$

Now we observe that, for any fixed $\Omega$, the quantity $\sup_{x \in B} G_x$ defines the supremum of a Gaussian process. The Gaussian process induces a pseudo-metric on $B$ (and more generally, on $\mathbb{C}^N$), where for $x, y$, the (squared) distance is given by

$$d(x, y)^2 := \mathbb{E}_g\, |G_x - G_y|^2 = \sum_{\omega \in \Omega} \big( |\langle r_\omega, x \rangle|^2 - |\langle r_\omega, y \rangle|^2 \big)^2.$$

By Cauchy-Schwarz, $d(x, y)$ can be bounded as

$$d(x, y) \le \Big( \sum_{\omega \in \Omega} \big( |\langle r_\omega, x \rangle| + |\langle r_\omega, y \rangle| \big)^2 \Big)^{1/2} \max_{\omega \in \Omega} \big|\, |\langle r_\omega, x \rangle| - |\langle r_\omega, y \rangle| \,\big|.$$

Here is where our analysis differs from [27]. The bound above, with the maximum norm in the second factor, is exactly how the Gaussian metric is bounded in [27]. We obtain our improvement by bounding the metric in a different way. Specifically, let $p > 1$ be a real parameter to be determined later and let $p'$ be such that $1/p + 1/p' = 1$. We assume that $p$ is chosen so that $p'$ becomes an integer. We use Hölder's inequality with parameters $p$ and $p'$ along with the factorization above to bound the metric as follows:

$$d(x, y) \le \Big( \sum_{\omega \in \Omega} \big( |\langle r_\omega, x \rangle| + |\langle r_\omega, y \rangle| \big)^{2p} \Big)^{1/(2p)} \Big( \sum_{\omega \in \Omega} \big|\, |\langle r_\omega, x \rangle| - |\langle r_\omega, y \rangle| \,\big|^{2p'} \Big)^{1/(2p')}.$$

Since $x \in B$, $x$ is $k$-sparse with $\|x\|_2 = 1$, and the entries of $r_\omega$ are bounded by $1$ in absolute value for all choices of $\omega$; Cauchy-Schwarz implies that $|\langle r_\omega, x \rangle| \le \sqrt{k}$ and thus, using the triangle inequality, we know that

$$|\langle r_\omega, x \rangle| + |\langle r_\omega, y \rangle| \le 2\sqrt{k}.$$

Therefore, for every $x, y \in B$, seeing that $(|\langle r_\omega, x \rangle| + |\langle r_\omega, y \rangle|)^{2p-2} \le (2\sqrt{k})^{2p-2}$, we have

$$\Big( \sum_{\omega \in \Omega} \big( |\langle r_\omega, x \rangle| + |\langle r_\omega, y \rangle| \big)^{2p} \Big)^{1/(2p)} \le (2\sqrt{k})^{1 - 1/p} \Big( 4 \sup_{z \in B} \sum_{\omega \in \Omega} |\langle r_\omega, z \rangle|^2 \Big)^{1/(2p)},$$

which, applied to the bound on the metric, yields

$$d(x, y) \le (2\sqrt{k})^{1 - 1/p} \Big( 4 \sup_{z \in B} \sum_{\omega \in \Omega} |\langle r_\omega, z \rangle|^2 \Big)^{1/(2p)} \Big( \sum_{\omega \in \Omega} \big|\, |\langle r_\omega, x \rangle| - |\langle r_\omega, y \rangle| \,\big|^{2p'} \Big)^{1/(2p')}.$$

Now,

$$d(x, y) \le (2\sqrt{k})^{1 - 1/p}\, (4E)^{1/(2p)}\, \Big( \sum_{\omega \in \Omega} \big|\, |\langle r_\omega, x \rangle| - |\langle r_\omega, y \rangle| \,\big|^{2p'} \Big)^{1/(2p')},$$

where we have defined

$$E := \sup_{z \in B} \sum_{\omega \in \Omega} |\langle r_\omega, z \rangle|^2.$$

Observe that, by the triangle inequality,

$$E \le m\, \Big( 1 + \sup_{z \in B} X_z \Big).$$

Plugging back in, we so far have

$$d(x, y) \le (2\sqrt{k})^{1 - 1/p}\, \Big( 4m \big( 1 + \sup_{z \in B} X_z \big) \Big)^{1/(2p)}\, \Big( \sum_{\omega \in \Omega} \big|\, |\langle r_\omega, x \rangle| - |\langle r_\omega, y \rangle| \,\big|^{2p'} \Big)^{1/(2p')}.$$

For a real parameter $u > 0$, define $\mathcal{N}(B, d, u)$ as the minimum number of balls of radius $u$ (with respect to the metric $d$) required to cover $B$. We can now apply Dudley's theorem on the supremum of Gaussian processes (cf. [24]) and deduce that

$$\mathbb{E}_g \sup_{x \in B} G_x \;\lesssim\; \int_0^\infty \sqrt{\log \mathcal{N}(B, d, u)}\; \mathrm{d}u.$$

In order to make the metric easier to work with, we define a related metric $d'$ on $\mathbb{C}^N$, according to the right hand side of the bound above, as follows:

$$d'(x, y) := \Big( \sum_{\omega \in \Omega} \big|\, |\langle r_\omega, x \rangle| - |\langle r_\omega, y \rangle| \,\big|^{2p'} \Big)^{1/(2p')}.$$

Let $\Delta$ denote the diameter of $B$ under the metric $d'$. Trivially, $\Delta \le \sqrt{k}\, m^{1/(2p')}$. By the bound above, we know that

$$d(x, y) \le K\, d'(x, y), \qquad \text{where } K := (2\sqrt{k})^{1 - 1/p} \Big( 4m \big( 1 + \sup_{z \in B} X_z \big) \Big)^{1/(2p)}.$$

Define $\mathcal{N}(B, d', u)$ similarly to $\mathcal{N}(B, d, u)$, but with respect to the new metric $d'$. The preceding upper bound thus implies that

$$\mathcal{N}(B, d, u) \le \mathcal{N}(B, d', u/K).$$

Now, using this bound in the Dudley integral and after a change of variables, we have

$$\mathbb{E}_g \sup_{x \in B} G_x \;\lesssim\; K \int_0^\infty \sqrt{\log \mathcal{N}(B, d', u)}\; \mathrm{d}u.$$

Now we take an expectation over $\Omega$. Note that the symmetrization bound combined with the preceding estimate implies

$$\mathbb{E}_\Omega \sup_{x \in B} X_x \;\lesssim\; \frac{(2\sqrt{k})^{1 - 1/p}\, (4m)^{1/(2p)}}{m}\; \mathbb{E}_\Omega \Big[ \Big( 1 + \sup_{z \in B} X_z \Big)^{1/(2p)} \int_0^\infty \sqrt{\log \mathcal{N}(B, d', u)}\; \mathrm{d}u \Big].$$

Using Hölder's inequality over the expectation (with appropriate exponents), we get

$$\mathbb{E}_\Omega \sup_{x \in B} X_x \;\lesssim\; \frac{(2\sqrt{k})^{1 - 1/p}\, (4m)^{1/(2p)}}{m}\; \Big( 1 + \mathbb{E}_\Omega \sup_{z \in B} X_z \Big)^{1/(2p)}\; \mathbb{E}_\Omega \Big[ \Big( \int_0^\infty \sqrt{\log \mathcal{N}(B, d', u)}\; \mathrm{d}u \Big)^{p'} \Big]^{1/p'}.$$

Define

$$I := \mathbb{E}_\Omega \Big[ \Big( \int_0^{\Delta} \sqrt{\log \mathcal{N}(B, d', u)}\; \mathrm{d}u \Big)^{p'} \Big]^{1/p'}.$$

Therefore, recalling the definition of $I$, the above inequality simplifies to

$$\mathbb{E}_\Omega \sup_{x \in B} X_x \;\lesssim\; \frac{(2\sqrt{k})^{1 - 1/p}\, (4m)^{1/(2p)}}{m}\; \Big( 1 + \mathbb{E}_\Omega \sup_{z \in B} X_z \Big)^{1/(2p)}\; I,$$

where, inside $I$, we have replaced the upper limit of integration by the diameter $\Delta$ of $B$ under the metric $d'$ (obviously, $\mathcal{N}(B, d', u) = 1$ for all $u \ge \Delta$).

Now we estimate $\mathcal{N}(B, d', u)$ in two ways. The first estimate is the simple volumetric estimate (cf. [27]), which, since $B$ is a union of $\binom{N}{k}$ $k$-dimensional unit Euclidean balls, gives

$$\mathcal{N}(B, d', u) \le \binom{N}{k} \Big( 1 + \frac{2\Delta}{u} \Big)^{2k}.$$

This estimate is useful when $u$ is small. For larger values of $u$, we use a different estimate as follows.

We use the method used in [27] (originally attributed to B. Maurey, cf. [6]) and empirically estimate any fixed real vector $x \in B$ by an (at most) $s$-sparse random vector $z$, for sufficiently large $s$. The vector $z$ is an average

$$z = \frac{1}{s} \sum_{t=1}^{s} z_t,$$

where each $z_t$ is a $1$-sparse (or zero) vector in $\mathbb{R}^N$ and $\|z_t\|_\infty \le \sqrt{k}$. The $z_t$ are independent and identically distributed.

The way each $z_t$ is sampled is as follows. Let $\beta := \|x\|_1/\sqrt{k}$, so that $\beta \le 1$. With probability $1 - \beta$, we set $z_t = 0$. With the remaining probability, $z_t$ is sampled by picking a random $i \in [N]$ according to the probabilities $|x_i|/\|x\|_1$ defined by the absolute values of the entries of $x$, and setting $z_t = \mathrm{sign}(x_i)\, \sqrt{k}\, e_i$, where $e_i$ is the $i$th standard basis vector. This ensures that $\mathbb{E}[z_t] = x$. Thus, by linearity of expectation, it is clear that $\mathbb{E}[z] = x$. Now, consider

$$\mathbb{E}\big[ d'(z, x) \big].$$

If we pick $s$ large enough to ensure that $\mathbb{E}[d'(z, x)] \le u$, regardless of the initial choice of $x$, then we can conclude that for every $x \in B$, there exists a $z$ of the form $\frac{1}{s}\sum_t z_t$ that is at distance at most $u$ from $x$ (since there is always some fixing of the randomness that attains the expectation). In particular, the set of balls of radius $u$ centered at all possible realizations of $z$ would cover $B$. Since the number of possible choices of $z$ of the form $\frac{1}{s}\sum_t z_t$ is at most $(2N + 1)^s$, we have

$$\mathcal{N}(B, d', u) \le (2N + 1)^s.$$
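The empirical approximation step is easy to simulate. Below is a small Python sketch of Maurey's sampling argument (our own illustration; for simplicity it measures the deviation in the Euclidean norm rather than the metric $d'$ from the proof):

    import numpy as np

    rng = np.random.default_rng(1)

    def maurey_sample(x, s, k):
        # Average of s i.i.d. 1-sparse (or zero) vectors with mean x:
        # w.p. 1 - |x|_1/sqrt(k) take 0; otherwise pick index i with
        # prob. |x_i|/|x|_1 and take sign(x_i) * sqrt(k) * e_i.
        N, l1 = len(x), np.abs(x).sum()
        z = np.zeros(N)
        for _ in range(s):
            if rng.random() < l1 / np.sqrt(k):
                i = rng.choice(N, p=np.abs(x) / l1)
                z[i] += np.sign(x[i]) * np.sqrt(k)
        return z / s

    N, k = 64, 8
    x = np.zeros(N)
    support = rng.choice(N, k, replace=False)
    x[support] = rng.standard_normal(k)
    x /= np.linalg.norm(x)

    for s in (10, 100, 1000):
        print(s, np.linalg.norm(maurey_sample(x, s, k) - x))  # ~ 1/sqrt(s) decay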

In order to estimate the number of independent samples $s$, we use symmetrization again to estimate the deviation of $z$ from its expectation $x$. Namely, since the