Subspace Evasive Sets
Abstract
In this work we describe an explicit, simple, construction of large subsets of F^n, where F is a finite field, that have small intersection with every k-dimensional affine subspace. Interest in the explicit construction of such sets, termed subspace-evasive sets, started in the work of Pudlák and Rödl [PR04], who showed how such constructions over the binary field can be used to construct explicit Ramsey graphs. More recently, Guruswami [Gur11] showed that, over large finite fields (of size polynomial in n), subspace-evasive sets can be used to obtain explicit list-decodable codes with optimal rate and constant list-size. In this work we construct subspace-evasive sets over large fields and use them, as described in [Gur11], to reduce the list size of folded Reed-Solomon codes from polynomial in the block length to a constant.
1 Introduction
1.1 Subspace evasive sets
Defined formally, a (k, c)-subspace evasive set S ⊆ F^n has intersection of size at most c with every k-dimensional affine subspace of F^n. This definition makes sense over finite fields, as well as over infinite fields. Over finite fields, a simple probabilistic argument shows that a random set of size |F|^{(1-ε)n} will have intersection of size O(k/ε) with any k-dimensional affine subspace. In this work we give the first explicit construction of a subspace-evasive set of size |F|^{(1-ε)n} that has intersection of size at most (k/ε)^k with every k-dimensional affine subspace. This is stated in the next theorem. We postpone the exact definition of the term explicit to the following sections (see Theorem 3.2 for the formal statement of this theorem and Section 4 for a discussion of explicitness).
Theorem 1 (Main theorem).
For any finite field F and parameters k and ε > 0, there exists an explicit construction of a set S ⊆ F^n of size |F|^{(1-ε)n} that is (k, c)-subspace evasive with c = (k/ε)^k.
While being far from the optimal bound of O(k/ε), and despite being exponential in k, the bound we obtain is useful when k is small and the field F is sufficiently large. As we will see below, this is precisely the setting that was raised by Guruswami in connection to error-correcting codes.
The main ingredient in our construction is an explicit family of polynomials f_1, …, f_k in n variables, for all k ≤ n, such that for every injective (i.e., full-rank) affine map L : F^k → F^n the system of equations
f_1(L(t)) = ⋯ = f_k(L(t)) = 0
has at most a bounded number of solutions (depending only on the degrees). The degrees can be chosen within a wide range. Using algebraic-geometry terminology, the set of common zeros of f_1, …, f_k forms an (n − k)-dimensional variety which has finite intersection with any k-dimensional affine subspace. We call such varieties everywhere-finite varieties (see Section 2 for a longer discussion of this particular choice of name).
Constructing subspace evasive sets as in Theorem 1 is then obtained by partitioning the coordinates of the space into blocks of size k/ε and applying the basic construction (of an everywhere-finite variety) on each block independently. The polynomials we use in the basic construction are extremely simple (weighted sums of powers of the variables), which makes the final construction explicit enough to be useful for the list-decoding application described in [Gur11] (allowing for both efficient encoding and list-decoding). Our proofs are elementary and do not use any sophisticated algebraic machinery, apart from Bezout’s theorem. (We do use Weil’s exponential sum estimates to analyze a certain variant of the construction, but this part of the proof can be omitted by choosing the polynomials more carefully, as described in Section 4.)
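The block structure is easy to express in code. The following Python sketch tests membership in the product set; the matrix A, the degree sequence, and the prime modulus are illustrative placeholders, and the paper's actual requirements (a regular matrix, distinct decreasing degrees, a suitable field size) are assumptions the caller must supply.

```python
def eval_polys(A, degs, x, p):
    """Evaluate f_i(x) = sum_j A[i][j] * x[j]**degs[j] (mod p) for each row i.

    Weighted sums of powers of the variables, as in the basic construction."""
    return [sum(A[i][j] * pow(x[j], degs[j], p) for j in range(len(x))) % p
            for i in range(len(A))]

def in_block_variety(A, degs, x, p):
    """x lies in the base variety iff all k polynomials vanish at x."""
    return all(v == 0 for v in eval_polys(A, degs, x, p))

def in_evasive_set(A, degs, point, p):
    """Membership in the product set: every length-n1 block of the point
    must lie in the base variety (n1 = number of columns of A)."""
    n1 = len(A[0])
    assert len(point) % n1 == 0
    return all(in_block_variety(A, degs, point[i:i + n1], p)
               for i in range(0, len(point), n1))
```

Here p is a prime, so arithmetic mod p models the field; k is the number of rows of A and the block length is the number of columns.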
1.2 List-decodable codes
An error-correcting code allows one to encode a message into a codeword so that encodings of different messages differ in many coordinates. This allows one to recover the original message from an encoding that is corrupted in a small number of coordinates. More formally, a code is a subset C ⊆ Σ^n, where Σ is some finite alphabet. The rate of the code is R = log|C| / (n log|Σ|) and the distance of the code, denoted δ, is the minimal Hamming distance between two distinct codewords divided by n. It is easy to show that R ≤ 1 − δ + o(1) and that unique decoding (i.e., decoding a message uniquely from a corrupted codeword) is only possible from a δ/2 fraction of errors. When the number of errors goes beyond δ/2 one has to be satisfied with list-decoding, in which a short list of possible messages is returned (i.e., all messages whose encodings are close to the received word). Non-explicitly, one can show the existence of a code of rate R that can be list-decoded from a 1 − R − ε fraction of errors with list-size bounded by O(1/ε). Obtaining an explicit construction of such a code (with efficient encoding/decoding) is a major open problem in coding theory. The first work to give explicit codes that can be list-decoded from a 1 − R − ε fraction of errors was the paper of Guruswami and Rudra [GR08], which builds on earlier work by Parvaresh and Vardy [PV05]. Their work showed that a certain family of codes, called folded Reed-Solomon (RS) codes, can be list-decoded from a 1 − R − ε fraction of errors with list size polynomial in n, where n is the number of coordinates (or block length) of the code.
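As a concrete illustration of these definitions (using a toy code, not the codes discussed in this paper), the rate, relative distance, and unique-decoding radius can be computed directly:

```python
import itertools
import math

def hamming(u, v):
    """Number of coordinates where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def code_params(C, alphabet_size):
    """Rate R = log|C| / (n log|alphabet|); relative distance = minimal
    pairwise Hamming distance divided by the block length n."""
    n = len(next(iter(C)))
    rate = math.log(len(C)) / (n * math.log(alphabet_size))
    dist = min(hamming(u, v) for u, v in itertools.combinations(C, 2)) / n
    return rate, dist

# Toy example: the binary repetition code of length 5.
C = {(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)}
R, delta = code_params(C, 2)
# Unique decoding is possible up to (just under) a delta/2 fraction of
# errors; beyond that one must settle for list-decoding.
```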
In a recent work, Guruswami [Gur11] gave a new list-decoding algorithm for folded RS codes which has some nice advantages over previous decoding algorithms. Among these advantages is the property that the list of possible messages, returned by the decoder, is contained in a low-dimensional subspace. More precisely, the code represents messages as elements of F^n, where F is a finite field, and the list returned by the decoder is (quite surprisingly) contained in a subspace of dimension O(1/ε). This immediately gives the polynomial size bound for the list mentioned above, but also suggests a way to further improve the list size. Guruswami observed that restricting the messages to come from a subspace-evasive set S ⊆ F^n, instead of coming from the entire space F^n, will reduce the list size to a constant and remove the dependency on the block length. In order for the rate to not degrade by much we need the size of S to be sufficiently large, say |F|^{(1-ε)n}.
For this application to produce codes with efficient encoding/decoding, the evasive set S must satisfy two explicitness conditions. The first is that messages can be encoded and decoded efficiently into S. The second condition is that, given a subspace (say, as a list of basis vectors), one can efficiently compute the intersection of this subspace with S. Our construction of subspace-evasive sets satisfies both of these conditions (see Section 4) and so we obtain the following theorem.
Theorem 2.
For every ε > 0 and rate 0 < R < 1, there exists an explicit family of codes with rate R that can be list-decoded from a 1 − R − ε fraction of errors in quadratic time and with list size bounded by a constant depending only on ε.
The use of evasive sets to enhance list-decoding is completely black-box and only uses the property that the returned list is a subspace of a certain dimension in a sufficiently large field. We give the proof of Theorem 2 in Section 5, stating the relevant claims from [Gur11] that are needed for the black-box application.
Following [Gur11], Guruswami and Wang [GW11] showed another family of codes with optimal-distance list decoding and with the additional property that the list returned by the decoder is a subspace. This family of codes, called derivative codes (also called multiplicity codes in [KSY11]), achieves roughly the same parameters as folded RS codes and can also be combined with our construction of evasive sets in the same way to reduce the list size.
1.3 Affine and two-source extractors
The work of Pudlák and Rödl [PR04] showed that constructing subspace-evasive sets gives explicit constructions of bipartite Ramsey graphs. These are bipartite graphs that do not contain bipartite cliques or independent sets of a certain size. A recent work of Ben-Sasson and Zewi [BSZ11] explored this connection further and showed (under some number-theoretic conjectures) that such sets can also be used to construct two-source extractors, which are strong variants of bipartite Ramsey graphs. Another application given in [BSZ11] was to the construction of affine extractors, which are functions that have uniform output whenever the input is chosen uniformly from a subspace of sufficiently high dimension. Both of these applications require that the construction be over the field of two elements. Our construction requires the field to be large and so is not useful for these applications. An important direction for progress is to generalize our construction to smaller fields. Alternatively, one can try to generalize the approach of [BSZ11] to larger fields and then try to use our construction to obtain better extractors (affine or two-source).
1.4 Organization
Section 2 contains the main construction of everywhere-finite varieties (Theorem 2.4). In Section 3 we show how to compose this basic construction to obtain our main theorem, Theorem 3.2, which gives explicit subspace-evasive sets. In Section 4 we prove several claims which deal with the explicitness of our construction, and use them in Section 5 to derive Theorem 2. Appendix A contains some basic results on Fourier analysis that are used in part of Section 3.
2 Everywhere-finite varieties
Let F be a field and F̄ its algebraic closure (recall that the algebraic closure is always infinite, even if F is finite). A variety in F̄^n is the set of common zeros of one or more polynomials. Given polynomials f_1, …, f_k ∈ F[x_1, …, x_n], we denote the variety they define as V(f_1, …, f_k) = {x ∈ F̄^n : f_1(x) = ⋯ = f_k(x) = 0}.
The dimension of a variety is a generalization of the notion of dimension for subspaces and can be thought of, informally, as the number of ‘degrees of freedom’ the variety has. In particular, k generic polynomials in n variables define a variety of dimension n − k. It is well known that the intersection of an (n − k)-dimensional variety with a generic k-dimensional affine subspace is finite. (For a precise definition of dimension and proofs of its basic properties we refer the reader to any elementary text on Algebraic Geometry, e.g., [Sha94].) In the following we will not rely on any of these properties and keep the discussion self-contained. Our main result in this section is a construction of an explicit variety V for which this holds for all affine subspaces of dimension k. Using Bezout’s theorem (Theorem 2.2) and the bound on the degrees of the polynomials defining V, we will also get an explicit uniform bound on the size of the intersections. We start with the formal definition.
Definition 2.1 (Everywhere-finite variety).
Let f_1, …, f_k ∈ F[x_1, …, x_n] be polynomials. The variety V(f_1, …, f_k) is everywhere-finite if for any affine subspace U ⊆ F̄^n of dimension k, the intersection V(f_1, …, f_k) ∩ U is finite.
The importance of showing that the intersection is finite comes from Bezout’s theorem, which allows one to give explicit bounds on the intersection size, given that it is finite. This result can be found in most introductory texts on Algebraic Geometry [Sha94] (for an elementary proof of this particular formulation see [Sch95]).
Theorem 2.2 (Bezout).
Let f_1, …, f_k be polynomials. If V(f_1, …, f_k) is finite then |V(f_1, …, f_k)| ≤ ∏_{i=1}^{k} deg(f_i).
For everywhere-finite varieties this gives the following immediate corollary.
Corollary 2.3.
Let f_1, …, f_k be polynomials such that V(f_1, …, f_k) is everywhere-finite. Then for any k-dimensional affine subspace U we have |V(f_1, …, f_k) ∩ U| ≤ ∏_{i=1}^{k} deg(f_i).
Proof.
Let the k-dimensional affine subspace U be given as the image of an affine map L : F̄^k → F̄^n. Let g_i denote the restriction of f_i to U, i.e.
g_i(t) = f_i(L(t)). Clearly deg(g_i) ≤ deg(f_i) and |V(f_1, …, f_k) ∩ U| = |V(g_1, …, g_k)|. The corollary now follows from Theorem 2.2. ∎
We will now describe an explicit construction of an everywhere-finite variety. We will need the following definition: a k × n matrix A (where k ≤ n) is regular if all its k × k minors are regular (i.e., have nonzero determinant). For example, if F is a field with at least n distinct nonzero elements a_1, …, a_n, then the matrix A with A_{i,j} = a_j^{i−1} is regular, since each k × k submatrix is a Vandermonde matrix.
Theorem 2.4 (Construction of an everywhere-finite variety).
Let k ≤ n be parameters and let F be a field. Let A = (a_{i,j}) be a k × n matrix with coefficients in F which is regular. Let d_1 > d_2 > ⋯ > d_n ≥ 1 be distinct integers. Let the polynomials f_1, …, f_k be defined as follows:
f_i(x_1, …, x_n) = Σ_{j=1}^{n} a_{i,j} · x_j^{d_j},  for i = 1, …, k.
Then V(f_1, …, f_k) is everywhere-finite. In particular, for any k-dimensional affine subspace U we have |V(f_1, …, f_k) ∩ U| ≤ d_1 ⋯ d_k.
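Before the proof, the everywhere-finiteness guarantee can be sanity-checked by brute force over a small prime field. The toy instance below (k = 1, n = 2, with an illustrative coefficient row and degree sequence of our choosing) only observes the F_p-rational points, so it is a numerical check rather than a proof: it verifies that every affine line meets the variety in at most deg(f) points.

```python
def f(x1, x2, p):
    """A toy instance of the construction: f(x) = 1*x1**3 + 1*x2**2 (mod p),
    i.e. k = 1, coefficient row [1, 1], distinct decreasing degrees (3, 2)."""
    return (pow(x1, 3, p) + pow(x2, 2, p)) % p

p = 7
max_on_a_line = 0
# Enumerate all affine lines {u + t*v : t in F_p} with direction v != 0.
for u1 in range(p):
    for u2 in range(p):
        for v1 in range(p):
            for v2 in range(p):
                if v1 == 0 and v2 == 0:
                    continue
                count = sum(1 for t in range(p)
                            if f((u1 + t * v1) % p, (u2 + t * v2) % p, p) == 0)
                max_on_a_line = max(max_on_a_line, count)
# f restricted to a line is a nonzero univariate polynomial of degree <= 3
# (the distinct degrees rule out cancellation of the leading term), so no
# line can carry more than 3 points of the variety.
```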
We prove Theorem 2.4 in the remainder of this section. Let U be a k-dimensional affine subspace. Our goal is to show that V(f_1, …, f_k) ∩ U is finite; the size bound then follows from Corollary 2.3. The first step is to present U as the image of an affine map with a convenient choice of basis. In the following we fix such a subspace U and write V = V(f_1, …, f_k).
Claim 2.5.
There exists an affine map whose image is and such that the following holds. There exist indices such that

For all , .

If then (i.e is constant).

If for then is an affine function just of the variables .
Proof.
Let be an arbitrary affine map whose image is . We construct by a basis change of which puts it in an upper-echelon form. That is, let be the minimal index such that is not constant. We take . Let be the minimal index after such that is not an affine function of . We take , and we have that for are affine functions of . Generally, let be the minimal index after such that is not an affine function of . We take and have that for are affine functions of . Obviously, for we have that are affine functions of all . ∎
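The basis change used in this proof is ordinary column reduction over the field. The sketch below is an illustrative implementation for a prime field: it column-reduces an affine map t ↦ Bt + c into upper-echelon form and reports the pivot indices; the concrete matrices are made-up examples, not objects from the paper.

```python
def echelon_affine(B, c, p):
    """Column-reduce the affine map t -> B*t + c (mod p), B an n x k matrix.

    Returns (B, c, pivots) where, writing pivots = [j_1 < ... < j_k]:
    row j_i of B is the i-th unit vector and c[j_i] = 0 (so coordinate j_i
    of the map is exactly t_i); rows before j_1 are constant; rows between
    j_i and j_{i+1} involve only t_1, ..., t_i."""
    n, k = len(B), len(B[0])
    B = [row[:] for row in B]
    c = list(c)
    pivots, col = [], 0
    for row in range(n):
        if col >= k:
            break
        piv = next((j for j in range(col, k) if B[row][j] % p), None)
        if piv is None:
            continue  # this row is affine in the earlier pivot variables
        for r in range(n):  # move the pivot column into place
            B[r][col], B[r][piv] = B[r][piv], B[r][col]
        inv = pow(B[row][col], p - 2, p)
        for r in range(n):  # normalize the pivot entry to 1
            B[r][col] = B[r][col] * inv % p
        for j in range(k):  # eliminate the rest of this row
            if j != col and B[row][j] % p:
                fac = B[row][j]
                for r in range(n):
                    B[r][j] = (B[r][j] - fac * B[r][col]) % p
        pivots.append(row)
        col += 1
    # Translate the domain so the pivot coordinates have no constant term.
    shift = [c[j] for j in pivots]
    c = [(c[r] - sum(B[r][i] * shift[i] for i in range(len(pivots)))) % p
         for r in range(n)]
    return B, c, pivots
```

Column operations and the final translation are exactly a basis change of the domain, so the image of the map is unchanged.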
Let be given by Claim 2.5 and let be the indices given by the claim. Let . Our goal is to show that the following system has a finite number of solutions:
Clearly, applying an invertible linear transformation on the set (replacing each with a linear combination of ) will not affect the number of solutions. Our next step is to find such a linear transformation that will put the ’s in a more convenient form, eliminating some of their coefficients.
Claim 2.6.
Let . There exist linearly independent vectors such that, for all ,
(1) 
where the coefficients are elements of .
Proof.
Recall that by definition where is a regular matrix. Let be the minor of given by restriction to columns . Since is regular we have that is regular. Let denote the rows of . We thus have that where is the th unit vector. That is, where is the inner product of and the th column of . ∎
Let be the vectors given by Claim 2.6 and denote
Let us also denote
Recall that, from the above discussion, our goal is to show that the system has a finite number of solutions in . By Claims 2.5 and 2.6 we have that
(2) 
We now perform one final transformation on our system. Contrary to the previous transformations which were linear transformations, this will be a polynomial transformation. Let
and let
For define
We first note that in order to show that is finite it suffices to show that is finite.
Claim 2.7.
.
Proof.
For each we can define by letting be some root of (it exists since is algebraically closed). Clearly distinct elements in are mapped to distinct elements in . ∎
The reason for these transformations is that the final polynomials have an especially nice form: each is the sum of a power of a single variable with a polynomial of lower total degree.
Claim 2.8.
For all we have that
where .
Proof.
By definition
To prove the claim we need to show that for all . If then is constant. Otherwise let be maximal such that . By Claim 2.5 we have that is an affine function of . Since we have that
since . ∎
To complete the proof of Theorem 2.4 we need to show that the variety defined by the transformed polynomials is finite. This follows from a general bound for polynomials of the form g_i = t_i^{d_i} + h_i where deg(h_i) < d_i.
Lemma 2.9.
Let g_1, …, g_k ∈ F̄[t_1, …, t_k] be polynomials such that g_i = t_i^{d_i} + h_i where deg(h_i) < d_i. Then |V(g_1, …, g_k)| ≤ d_1 ⋯ d_k.
Lemma 2.9 follows immediately from the following two claims. In the following, let R = F̄[t_1, …, t_k] be the ring of polynomials, let I be the ideal in R generated by g_1, …, g_k, and let R/I be their quotient. Note that R/I is a vector space over F̄.
Claim 2.10.
|V(g_1, …, g_k)| ≤ dim(R/I).
Proof.
Assume by contradiction there exist where . Let be polynomials such that and for all . Let be the image of in . Since there must exist a nonzero linear dependency among . That is, there exist not all zero such that
Equivalently put,
The key observation is that for any polynomial we have that for all . This is because for all by assumption. Thus substituting we get that
which contradicts the assumption that they are not all zero. ∎
Claim 2.11.
dim(R/I) ≤ d_1 ⋯ d_k.
Proof.
We will show that is spanned by the image in of the monomials where . Thus in particular . In order to do so, we need to show that if is a polynomial then there exists a polynomial such that and the degree of each variable in is at most . It suffices to show that if has some variable of degree at least then we can find such that and such that . The claim then follows by iterating this process until all variables have degrees below . Moreover, it suffices to prove this in the case where is a monomial, as this process can be applied to each monomial individually.
Thus, let be a monomial where for some . Define
We have that since by assumption; and as required. ∎
3 Subspace evasive sets
In this section we construct subspace-evasive sets, based on the construction of everywhere-finite varieties given in Theorem 2.4. We first recall the definition of subspace evasive sets.
Definition 3.1 (Subspace evasive sets).
Let S ⊆ F^n. We say S is (k, c)-subspace evasive if for all k-dimensional affine subspaces V ⊆ F^n we have |S ∩ V| ≤ c.
We next give some necessary definitions. For polynomials f_1, …, f_k ∈ F[x_1, …, x_n] we define their common solutions in F^n (as opposed to their solutions over the algebraic closure) as V_F(f_1, …, f_k) = {x ∈ F^n : f_1(x) = ⋯ = f_k(x) = 0}.
We say that a k × n matrix A is strongly regular if, for every 1 ≤ r ≤ k, the submatrix formed by its first r rows is regular (i.e., all its r × r minors have nonzero determinant). For example, if F is a field with at least n distinct nonzero elements a_1, …, a_n, then the matrix A with A_{i,j} = a_j^{i−1} is strongly regular.
Theorem 3.2.
Let k ≤ n and let F be a finite field. Let n₁ = k/ε and assume n₁ is an integer that divides n. Let A = (a_{i,j}) be a k × n₁ matrix with coefficients in F which is strongly regular. Let d_1 > ⋯ > d_{n₁} be distinct integers. For i = 1, …, k let
f_i(x_1, …, x_{n₁}) = Σ_{j=1}^{n₁} a_{i,j} · x_j^{d_j},
let S₁ = V_F(f_1, …, f_k) ⊆ F^{n₁}, and define S to be the (n/n₁)-times cartesian product of S₁. That is,
S = S₁ × ⋯ × S₁ ⊆ F^n.
Then S is subspace evasive. Moreover,

If , and then .

If at least k of the degrees d_j are coprime to |F| − 1 then |S| = |F|^{(1−ε)n}.
We prove Theorem 3.2 in the remainder of this section. We first show that S₁ has small intersection with affine subspaces of dimension at most k (this is a stronger statement than the one we proved in Section 2, since the dimension of the subspace can be smaller than k).
Claim 3.3.
Let V ⊆ F^{n₁} be an r-dimensional affine subspace for r ≤ k. Then |S₁ ∩ V| ≤ d_1 ⋯ d_r.
Proof.
Note that S₁ ⊆ V_F(f_1, …, f_r) since r ≤ k. We will show that in fact |V(f_1, …, f_r) ∩ V| ≤ d_1 ⋯ d_r, from which the claim will follow since S₁ ∩ V ⊆ V(f_1, …, f_r) ∩ V. Now, since the matrix A is strongly regular, its restriction to the first r rows is regular; hence V(f_1, …, f_r) is everywhere-finite (as an (n₁ − r)-dimensional variety) and, by Bezout’s Theorem (Theorem 2.2), we have |V(f_1, …, f_r) ∩ V| ≤ d_1 ⋯ d_r. ∎
We now prove that S is subspace evasive for dimensions up to k.
Claim 3.4.
Let be an dimensional affine subspace for . Then .
Proof.
Let . We prove the claim by induction of the number of blocks . If then and the claim follows from Claim 3.3. We thus assume that . Decompose as a disjoint union of subspaces based on the restriction to the first coordinates (i.e. the first block). That is, let and for each let . Thus and we have that
Now, since is an affine subspace so is . Let where . We also have that is an dimensional affine subspace for all . Now by Claim 3.3 we have that ; and by induction we have that for all . Hence as claimed. ∎
We now turn to prove the ‘Moreover’ part of Theorem 3.2, namely to lower bound the size of S. To do so, it is enough to bound the size of S₁ (since S is a product of such sets). We begin with the unrestricted case, where all we assume are some (rather weak) bounds on the size of the field. We refer the reader to Appendix A for the notation/preliminaries on Fourier analysis and Weil’s theorem used in the proof.
Claim 3.5.
Assume that , and . Then . In particular .
Proof.
Let be chosen uniformly. Our goal is to estimate the probability that for all . Equivalently, let be a random variable defined as and let . We need to estimate the probability that .
To this end, we apply Fourier analysis (for definitions see Appendix A). Assume where . The characters of are given by for where is the trace operator. Since are independent we have that
We proceed to estimate the Fourier coefficients of . Let denote the th column of . We have that
Thus, if the inner product of and is nonzero, we have by the Weil bound (Theorem A.1) that
Since we assume is stronglyregular, for any nonzero there could by at most columns of which are orthogonal to ; hence we deduce that for any nonzero ,
by our choice of parameters. We now apply these bounds to estimate the probability that . We have that and ; hence
Thus and . ∎
We now prove the second item of the ‘Moreover’ part of Theorem 3.2, in which we assume that at least k of the degrees are coprime to |F| − 1, and use this extra condition to obtain a precise formula for the size of S₁. In Section 4 we show that this condition on the degrees is relatively easy to satisfy if we are the ones choosing the field F.
Claim 3.6.
If at least k of the degrees d_j are coprime to |F| − 1 then |S₁| = |F|^{n₁ − k}. This implies |S| = |F|^{(1−ε)n}.
Proof.
Let . Let be degrees among coprime to and let . We will show that for any setting of there exists a unique setting of which makes . This will clearly show that as claimed.
Substitute for all . We have that if
Let be the minor of given by restricting to columns in . Let and let be given by . Then if
We have that the minor M is regular since A is strongly regular; hence there exists a unique solution to the linear system. We now apply our assumption that the degrees d_j for j ∈ J are coprime to |F| − 1. This implies that raising to the power d_j is a bijection of F (an automorphism of the multiplicative group that also fixes zero). That is, for each j ∈ J there exists a unique solution x_j to x_j^{d_j} = c_j. ∎
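The last step uses the standard fact that x ↦ x^d permutes F_q when gcd(d, q − 1) = 1, with the inverse map given by raising to the exponent d^{-1} mod (q − 1). A minimal sketch over a prime field (the function name and parameters are ours, for illustration):

```python
from math import gcd

def solve_power(c, d, q):
    """Return the unique x in F_q with x**d == c, assuming gcd(d, q-1) == 1.

    Raising to the d-th power is an automorphism of the multiplicative group
    (and fixes 0), so its inverse is raising to e = d^{-1} mod (q-1)."""
    assert gcd(d, q - 1) == 1
    e = pow(d, -1, q - 1)
    return pow(c, e, q)

# Example over F_13 with d = 5 (gcd(5, 12) = 1): the map is a bijection.
q, d = 13, 5
images = sorted(pow(x, d, q) for x in range(q))
```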
4 Explicitness of the construction
In this section we discuss the explicitness of our construction of subspace-evasive sets. The construction of everywhere-finite varieties accomplished in Theorem 2.4 is given as the zero set of explicitly defined polynomials. One can use our construction over any finite field, which is convenient for applications. The construction requires an explicit strongly regular matrix A. Such a matrix can be easily obtained when |F| > n by taking A with A_{i,j} = a_j^{i−1}, where a_1, …, a_n are nonzero distinct elements in F (this is because each submatrix in question is a Vandermonde matrix).
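The strong-regularity condition can be verified by brute force for small parameters. The sketch below checks one plausible reading of the definition (for every r, all r × r minors of the first r rows are nonsingular) for a Vandermonde-type matrix over a prime field; it is an illustrative test, not an algorithm from the paper.

```python
import itertools

def det_mod(M, p):
    """Determinant of a square matrix mod prime p, by Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] % p), None)
        if piv is None:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det
        det = det * M[i][i] % p
        inv = pow(M[i][i], p - 2, p)
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            for c in range(i, n):
                M[r][c] = (M[r][c] - f * M[i][c]) % p
    return det % p

def strongly_regular(A, p):
    """Check: for each r, every r x r minor of the first r rows of A is
    invertible mod p (one plausible reading of 'strongly regular')."""
    k, n = len(A), len(A[0])
    return all(det_mod([[A[i][j] for j in cols] for i in range(r)], p) != 0
               for r in range(1, k + 1)
               for cols in itertools.combinations(range(n), r))

# Vandermonde-type matrix A[i][j] = a_j**i with distinct nonzero a_j.
p, k, n = 31, 3, 5
a = [1, 2, 3, 4, 5]
A = [[pow(a[j], i, p) for j in range(n)] for i in range(k)]
```

For this matrix the check succeeds because each minor of the first r rows is a Vandermonde determinant in distinct nonzero elements.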
4.1 Efficient encoding of vectors as elements of S
It is trivial to decide in polynomial time if a given point is in S or not. The first nontrivial issue regarding explicitness is how to sample an element of the set uniformly. More precisely, for an evasive set S of size |F|^{(1−ε)n} we would like to have an efficiently computable bijection onto S. This is needed for the list-decoding application (see Section 5) because we would like to encode messages as strings in S without losing much in the rate of the code, and so that we can efficiently recover the original messages from their representation as elements of S. We now show how one can sample from the variety