Probabilistic existence of rigid combinatorial structures
(extended abstract version)
Abstract
We show the existence of rigid combinatorial objects which previously were not known to exist. Specifically, for a wide range of the underlying parameters, we show the existence of nontrivial orthogonal arrays, $t$-designs, and $k$-wise permutations. In all cases, the sizes of the objects are optimal up to polynomial overhead. The proof of existence is probabilistic: we show that a randomly chosen such object has the required properties with positive yet tiny probability. The main technical ingredient is a special local central limit theorem for suitable lattice random walks with finitely many steps.
1 Introduction
We introduce a new framework for establishing the existence of rigid combinatorial structures, such as orthogonal arrays, $t$-designs and $k$-wise permutations. Let $X$ be a finite set and let $V$ be a vector space of functions from $X$ to the rational numbers $\mathbb{Q}$. We study when there is a small subset $S \subseteq X$ satisfying
(1)  $\frac{1}{|S|} \sum_{s \in S} f(s) \;=\; \frac{1}{|X|} \sum_{x \in X} f(x) \qquad \text{for all } f \in V.$
In probabilistic terminology, equation (1) means that if $s$ is a uniformly random element in $S$ and $x$ is a uniformly random element in $X$ then
(2)  $\mathbb{E}[f(s)] = \mathbb{E}[f(x)] \qquad \text{for all } f \in V,$
where $\mathbb{E}$ denotes expectation. Of course, (1) holds trivially when $S = X$. Our goal is to find conditions on $X$ and $V$ that yield a small subset $S$ that satisfies (1), where in our situations, small will mean polynomial in the dimension of $V$. (In many natural problems one might encounter a function space over $\mathbb{R}$ or $\mathbb{C}$ instead. However, since (1) is a rational equation, we can always reduce to the case of rational vector spaces.)
Our main theorem, Theorem 2.1, gives sufficient conditions for the existence of a small subset $S$ satisfying (1). We apply the theorem to establish results in three interesting cases of the general framework: orthogonal arrays, $t$-designs, and $k$-wise permutations. These are detailed in the next sections. Our methods solve an open problem: whether there exist nontrivial $k$-wise permutations for every $k$. They strengthen Teirlinck's theorem [Tei87], which was the first theorem to show the existence of $t$-designs for every $t$. And they improve existence results for orthogonal arrays when the size of the alphabet is divisible by many distinct primes. Moreover, in all three cases considered, we show the existence of a structure whose size is optimal up to polynomial overhead.
Our approach to the problem is via probabilistic arguments. In essence, we prove that a random subset of $X$ satisfies equation (1) with positive, albeit tiny, probability. Thus our method is one of the few known methods for showing the existence of rare objects; this class includes the Lovász local lemma [EL75] and Spencer's "six deviations suffice" method [Spe85]. However, our method does not rely on these previous approaches. Instead, our technical ingredient is a special version of the (multidimensional) local central limit theorem with only finitely many available steps. Since only finitely many steps are available, and since we can only gain access to more steps by increasing the dimension of the random walk, we cannot use any "off the shelf" local central limit theorem, not even one enhanced by a Berry-Esseen-type estimate of the rate of convergence. Instead, we prove the local central limit theorem that we need directly using Fourier analysis. Section 1.4 gives an overview of our approach.
We also mention that efficient randomized algorithmic versions of the Lovász local lemma [Mos09, MT10] and of Spencer's method [Ban10] have recently been found; relative to these new algorithms, the objects that they produce are no longer rare. Our method is, to our knowledge, the only one that shows the existence of combinatorial structures which remain rare relative to every known efficient randomized algorithm.
1.1 Orthogonal arrays
A subset $S \subseteq [q]^n$ is an orthogonal array of alphabet size $q$, length $n$ and strength $t$ if, when restricted to any $t$ coordinates, it yields all strings of length $t$ with equal frequency. In other words, for any distinct indices $i_1, \dots, i_t \in [n]$ and any (not necessarily distinct) values $b_1, \dots, b_t \in [q]$,
$|\{x \in S : x_{i_1} = b_1, \dots, x_{i_t} = b_t\}| \;=\; \frac{|S|}{q^t}.$
Equivalently, choosing $x \in S$ uniformly, the distribution of $x$ is $t$-wise independent. For an introduction to orthogonal arrays see [HSS99].
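As a concrete illustration of the defining property, it can be checked by brute force for small parameters. The following Python sketch (the function name and the parity-code example are ours, not from the paper) verifies whether a given set of strings is an orthogonal array of strength $t$:

```python
from collections import Counter
from itertools import combinations, product

def is_orthogonal_array(S, q, n, t):
    """Return True if S (a list of length-n tuples over {0..q-1}) hits
    every value pattern equally often on every set of t coordinates."""
    target, rem = divmod(len(S), q ** t)
    if rem:  # |S| must be divisible by q^t
        return False
    for cols in combinations(range(n), t):
        counts = Counter(tuple(x[c] for c in cols) for x in S)
        if any(counts[b] != target for b in product(range(q), repeat=t)):
            return False
    return True

# The even-weight binary strings of length 3 form an orthogonal array
# of strength 2 (each pair of coordinates takes each of the 4 values once):
parity_code = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

Note that the same four strings fail at strength 3, since $|S| = 4$ is not divisible by $2^3$.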
Orthogonal arrays fit into our general framework as follows. We take $X$ to be $[q]^n$ and $V$ to be the space spanned by all functions of the form
(3)  $f(x) = \prod_{i \in I} \mathbb{1}_{\{x_i = b_i\}}$
with $I \subseteq [n]$ a subset of size $|I| \le t$ and $b \in [q]^I$. With this choice, a subset $S \subseteq X$ satisfying (1) is precisely an orthogonal array of alphabet size $q$, length $n$ and strength $t$.
It is well known that if $S \subseteq [q]^n$ is $t$-wise independent then $|S| \ge n^{ct}$ for some universal constant $c > 0$ (see, e.g., [Rao73]). Matching constructions of size $n^{Ct}$ are known; however, as these rely on finite field properties, the constant $C$ generally tends to infinity with the number of prime factors of $q$. Our technique provides the first upper bound on the size of orthogonal arrays in which the constant in the exponent is independent of $q$.
Theorem 1.1 (Existence of orthogonal arrays).
For all integers $n$, $q$ and $t$ there exists an orthogonal array of alphabet size $q$, length $n$ and strength $t$ of size at most $(qn)^{ct}$ for some universal constant $c$.
1.2 Designs
A (simple) $t$-$(n, k, \lambda)$ design is a family $S$ of distinct subsets of $[n]$, where each set is of size $k$, such that every set of $t$ elements is contained in exactly $\lambda$ of the sets. In other words, denoting by $\binom{[n]}{k}$ the family of all subsets of $[n]$ of size $k$, a set $S \subseteq \binom{[n]}{k}$ is a design if for any $t$ distinct elements $x_1, \dots, x_t \in [n]$,
(4)  $|\{B \in S : x_1, \dots, x_t \in B\}| = \lambda.$
For an introduction to combinatorial designs see [CD07].
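For small parameters, condition (4) can again be checked directly. A minimal Python sketch (the function name and the Fano-plane example are ours):

```python
from itertools import combinations

def design_lambdas(blocks, n, t):
    """Count, for each t-subset of range(n), how many blocks contain it.
    The family is a t-design iff all counts agree, i.e. the returned set
    of counts is a singleton {lambda}."""
    counts = {T: 0 for T in combinations(range(n), t)}
    for B in blocks:
        for T in combinations(sorted(B), t):
            counts[T] += 1
    return set(counts.values())

# The Fano plane is a 2-(7,3,1) design: every pair of points lies in
# exactly one of its seven lines.
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
```

For $t = 2$ the Fano plane returns the singleton $\{1\}$; for $t = 3$ the counts are no longer constant, reflecting that it is not a 3-design.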
Our general framework includes designs as follows. We take $X$ to be $\binom{[n]}{k}$ and $V$ to be the space spanned by all functions of the form
(5)  $f_{x_1, \dots, x_t}(B) = \mathbb{1}_{\{x_1, \dots, x_t \in B\}}$
with distinct $x_1, \dots, x_t \in [n]$. With this choice, a subset $S \subseteq X$ satisfying (1) is precisely a simple design.
Although designs have been investigated for many years, the basic question of the existence of a design for a given set of parameters remains mostly unanswered unless $t$ is quite small. The case $t = 2$ is known as a block design, and much more is known about it than for larger $t$. Explicit constructions of designs for larger $t$ are known for various specific constant settings of the parameters (e.g., the 5-(24,8,1) Witt design). The breakthrough result of Teirlinck [Tei87] was the first to establish the existence of nontrivial $t$-designs for every $t$. In Teirlinck's construction, $k = t + 1$ and $\lambda$ satisfies congruences that grow very quickly as a function of $t$. Other sporadic and infinite examples have been found since then (see [CD07] or [Mag09] and the references within); however, the set of parameters which they cover is still very sparse. Moreover, it follows from (4) by double counting that any $t$-$(n,k,\lambda)$ design has size $\lambda \binom{n}{t} / \binom{k}{t} \ge \binom{n}{t} / \binom{k}{t}$. Even when existence has been shown, the designs obtained are often inefficient in the sense that their size is far from this lower bound. One of the main results of our work is to establish the existence of efficient designs for a wide range of parameters.
Theorem 1.2 (Existence of designs).
For all integers $n$, $t$ and $k$ with $n \ge k \ge t$ there exists a $t$-$(n,k,\lambda)$ design, for some value of $\lambda$, whose size is at most $n^{ct}$ for some universal constant $c$.
1.3 Permutations
A family of permutations $F \subseteq S_n$ is called a $k$-wise permutation if its action on any $k$-tuple of elements is uniform. In other words, for any distinct elements $x_1, \dots, x_k \in [n]$ and distinct elements $y_1, \dots, y_k \in [n]$,
(6)  $\Pr_{\pi \in F}[\pi(x_1) = y_1, \dots, \pi(x_k) = y_k] \;=\; \frac{1}{n(n-1)\cdots(n-k+1)}.$
Our general framework includes $k$-wise permutations as follows. We take $X = S_n$ and $V$ to be the space spanned by all functions of the form
$f_{\bar{x}, \bar{y}}(\pi) = \mathbb{1}_{\{\pi(x_1) = y_1, \dots, \pi(x_k) = y_k\}},$
where $\bar{x} = (x_1, \dots, x_k)$ and $\bar{y} = (y_1, \dots, y_k)$ are tuples of distinct elements in $[n]$. With this choice, a subset $S \subseteq X$ satisfying (1) is precisely a $k$-wise permutation.
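Condition (6) is likewise easy to test exhaustively for tiny $n$ and $k$. A Python sketch (names and the cyclic-shift example are ours):

```python
from collections import Counter
from itertools import permutations

def is_k_wise(F, n, k):
    """Return True if the family F (each member a length-n tuple that
    permutes range(n)) maps every k-tuple of distinct points uniformly."""
    targets = list(permutations(range(n), k))   # all distinct k-tuples
    share, rem = divmod(len(F), len(targets))
    if rem:  # |F| must be divisible by n(n-1)...(n-k+1)
        return False
    for xs in permutations(range(n), k):
        counts = Counter(tuple(pi[x] for x in xs) for pi in F)
        if any(counts[ys] != share for ys in targets):
            return False
    return True

# The n cyclic shifts form a 1-wise (but not 2-wise) permutation family:
shifts = [tuple((i + s) % 4 for i in range(4)) for s in range(4)]
```

Here the four shifts pass for $k = 1$ and fail for $k = 2$, already on the divisibility count.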
Constructions of families of $k$-wise permutations are known only for $k \le 3$: the group of cyclic shifts $x \mapsto x + a \pmod{n}$ is a $1$-wise permutation; the group of invertible affine transformations $x \mapsto ax + b$ over a finite field yields a $2$-wise permutation; and the group of Möbius transformations $x \mapsto \frac{ax+b}{cx+d}$ with $ad - bc \ne 0$ over the projective line yields a $3$-wise permutation. For $k \ge 4$ (and $n$ large enough), however, no $k$-wise permutation is known, other than the full symmetric group and the alternating group [KNR05, AL11]. In fact, it is known (cf., e.g., [Cam95], Theorem 5.2) that for $k \ge 4$ and $n \ge 25$ there are no other subgroups of $S_n$ which form a $k$-wise permutation. (In other words, there are no other $k$-transitive subgroups of $S_n$ for $k \ge 4$ and $n \ge 25$.) One of our main results is to show the existence of small $k$-wise permutations for all $k$.
Theorem 1.3 (Existence of wise permutations).
For all integers $n$ and $k$ there exists a $k$-wise permutation $F \subseteq S_n$ of size at most $n^{ck}$ for some universal constant $c$.
It is clear from the definition (6) above that any $k$-wise permutation must satisfy $|F| \ge n(n-1)\cdots(n-k+1)$. Thus, for fixed $k$, the $k$-wise permutations we exhibit are of optimal size up to polynomial overhead. For $k$ growing with $n$ these $k$-wise permutations may be larger, but still no larger than $n^{ck}$ for some universal constant $c$.
1.4 Proof overview
The idea of our approach is as follows. Let $S$ be a random multiset of $X$ of some fixed size $N$, chosen by sampling uniformly and independently $N$ times (with replacement). Let $\{f_i : i \in I\}$ be a spanning set of integer-valued functions for $V$ (where $I$ is some finite index set). Observe that $S$ satisfies (1) if and only if
(7)  $\sum_{s \in S} f_i(s) \;=\; \frac{N}{|X|} \sum_{x \in X} f_i(x) \qquad \text{for all } i \in I.$
Thus, defining an integer-valued random vector
$Y := \Big( \sum_{s \in S} f_i(s) \Big)_{i \in I},$
we see that the existence of a subset of size $N$ satisfying (1) will follow if we can show that $\Pr[Y = \mathbb{E}Y] > 0$. To this end we examine more closely the distribution of $Y$. Let $s_1, \dots, s_N$ be the random elements chosen in forming $S$. The spanning set defines a mapping $\varphi \colon X \to \mathbb{Z}^I$ by the trivial
$\varphi(x)_i := f_i(x).$
Observe that our choice of random model implies that the vectors $\varphi(s_1), \dots, \varphi(s_N)$ are independent and identically distributed. Hence,
(8)  $Y = \sum_{j=1}^{N} \varphi(s_j)$
may be viewed as the end position of an $N$-step random walk in the lattice $\mathbb{Z}^I$. Thus we may hope that if $N$ is sufficiently large, then $Y$ has an approximately (multidimensional) Gaussian distribution by the central limit theorem. If the relevant local central limit theorem holds as well, then the point probability $\Pr[Y = \mathbb{E}Y]$ also satisfies a Gaussian approximation. In particular, since a (non-degenerate) Gaussian always has positive density at its expectation, we could conclude that $\Pr[Y = \mathbb{E}Y] > 0$ as desired.
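The phenomenon driving this argument is already visible in one dimension. The toy sketch below (ours, not the paper's walk) estimates the point probability that a fair $\pm 1$ walk returns exactly to its mean after $N$ steps; local-central-limit behavior predicts order $N^{-1/2}$, and in our setting the walk lives in $m$ dimensions, so the analogous point probability is exponentially small in $m$:

```python
import math
import random

def return_probability(N, trials=20000, seed=0):
    """Monte Carlo estimate of P(S_N = 0), where S_N is a sum of N
    independent fair +/-1 steps (take N even so the event is possible)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.choice((-1, 1)) for _ in range(N)) == 0
    )
    return hits / trials

# Exact value: C(N, N/2) / 2^N, which is ~ sqrt(2 / (pi * N)) for even N.
exact = math.comb(16, 8) / 2 ** 16   # roughly 0.196
```

The Monte Carlo estimate matches the exact binomial point probability, illustrating the polynomially small per-coordinate probabilities whose product the proof must control.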
The above description is the essence of our approach. The main obstacle is, of course, the last step. We must control the rate of convergence in the local central limit theorem well enough that the convergence error does not outweigh the probability density of the Gaussian distribution at $\mathbb{E}Y$. Recall that the order of magnitude of such a density is typically $N^{-cm}$ for some constant $c$, where $m$ is at least the dimension of $V$, which is the main parameter of our problem. So we indeed have very small probabilities. For this reason, and because we want convergence when $N$ is only polynomial in the dimension of $V$, we were unable to use any standard local central limit theorem. Instead, we develop an ad hoc version using direct Fourier analysis.
In our proof of the main theorem, we modify the above description in one respect. It is technically more convenient to work with a slightly different probability model. Instead of choosing $S$ as above, we set $p := N/|X|$ and define $S$ by taking each element of $X$ into $S$ independently with probability $p$. This has the benefit of guaranteeing that $S$ is a proper set instead of a multiset. However, it also has the disadvantage that it does not guarantee that $|S| = N$. To remedy this, we assume that the space $V$ contains the constant function $1$; if not, we can add it to $V$ at the minor cost of increasing the dimension of $V$ by 1. With this assumption, we note that applying (7) to the constant function $f \equiv 1$ yields $|S| = N$.
Thus (7), or equivalently $Y = \mathbb{E}Y$, also implies that $|S| = N$ as required. Another disadvantage is that in this new probability model, the vector $Y$ is no longer a sum of identically distributed variables. However, since the summands in (8) are still independent, we can continue to use Fourier analysis methods in our proof.
We cannot expect there to always be a small subset $S$ that satisfies (1). For instance, Alon and Vu [AV97] found a regular hypergraph with no proper regular subhypergraph. Here, the degree of a vertex is the number of hyperedges incident to it, and a regular hypergraph is one in which the degrees of all vertices are equal. We may describe their example in our language by letting $X$ be the set of edges of this hypergraph, letting $I$ be its vertex set, and defining $\varphi$ by letting $\varphi(x)$ be the indicator vector of the set of vertices incident to the edge $x$. The result of [AV97] implies that while the vector $\sum_{x \in X} \varphi(x)$ has all entries equal, this property is not shared by $\sum_{x \in S} \varphi(x)$ for any nonempty, proper subset $S \subsetneq X$. Thus, we need to impose certain conditions on $X$ and $V$, or equivalently on the map $\varphi$. We start by requiring certain divisibility, boundedness and symmetry assumptions.
 Divisibility:

$p := N/|X|$ is such that $p \sum_{x \in X} \varphi(x)$ is an integer vector. This property is clearly necessary for (7) to hold and is typically a mild restriction on $N$.
 Boundedness:

The entries of $\varphi$ must be small; our method requires $N$ to be at least some polynomial in $\max_{x \in X,\, i \in I} |\varphi(x)_i|$.
 Symmetry:

A symmetry of $\varphi$ is a pair $(\pi, T)$ consisting of a permutation $\pi$ of $X$ and an invertible linear transformation $T$ of $\mathbb{Q}^I$ which satisfies $\varphi(\pi(x)) = T\varphi(x)$ for all $x \in X$. The set of symmetries of $\varphi$ is a group. We require that the projection of the group of symmetries to the permutations of $X$ is transitive. In other words, for any $x, y \in X$ there exists a symmetry $(\pi, T)$ of $\varphi$ satisfying $\pi(x) = y$.
It is not hard to verify that the third condition is intrinsic to the structure of $V$ and does not depend on the specific choice of spanning set. In our applications it follows easily from the overall symmetry of the setup.
However, we also have a fourth assumption which is more technical than the others. First, we require that $\{f_i : i \in I\}$ forms a basis of $V$. This implies that for any $i \in I$, we may express $e_i$, the unit vector with $1$ at its $i$'th coordinate, as a linear combination of the form $e_i = \sum_{x \in X} \alpha_x \varphi(x)$. We call any such linear combination an isolating combination for $i$. We assume that for each $i$, there are many isolating combinations supported on disjoint subsets of $X$. Moreover, we require the coefficients $\alpha_x$ of these combinations to have small norm and to be rational with a small common denominator. This is the most difficult assumption to verify in our applications. Section 2 gives more details about all of these assumptions.
Our main theorem shows that these four conditions yield the existence of a small solution of (1).
Theorem (Main theorem, informal statement).
Let $X$ be a finite set and let $V$ be a vector space of functions from $X$ to $\mathbb{Q}$ which contains the constant functions. If there exists a basis of $V$, consisting of integer-valued functions, which satisfies the boundedness, symmetry and isolation conditions above, then there is a small subset $S \subseteq X$ such that
$\frac{1}{|S|} \sum_{s \in S} f(s) = \frac{1}{|X|} \sum_{x \in X} f(x)$
for all $f$ in $V$.
We note that the size $N$ of the subset obtained must satisfy the divisibility condition above. The existence theorems for orthogonal arrays, designs and $k$-wise permutations follow by showing that for the choices of $X$ and $V$ detailed in Sections 1.1 through 1.3 there exists a choice of basis and a small $N$ for which all four conditions above hold.
1.5 Related work
In the probabilistic formulation (2) of our problem we seek a small subset such that the uniform distribution over simulates the uniform distribution over with regards to certain tests. There are two ways to relax the problem to make its solution easier, and raise new questions regarding explicit solutions.
One relaxation is to allow a set $S$ with a nonuniform distribution $\mu$. For many practical applications of designs and $k$-wise permutations in statistics and computer science, though not quite every application, this relaxation is as good as the uniform question. The existence of a solution with small support is guaranteed by Carathéodory's theorem, using the fact that the constraints on $\mu$ are all linear equalities and inequalities. Moreover, such a solution can be found efficiently, as was shown by Karp and Papadimitriou [KP82] and, in more general settings, by Koller and Megiddo [KM94]. Alon and Lovett [AL11] give a strongly explicit analog of this in the case of $k$-wise permutations and, more generally, in the case of group actions.
A different relaxation is to require the uniform distribution on $S$ to only approximately satisfy equation (2). Then it is trivial that a sufficiently large random subset satisfies the requirement with high probability, and the question is to find an explicit solution. For instance, we can relax the problem of $k$-wise permutations to almost $k$-wise permutations. For this variant an optimal solution (up to polynomial factors) was achieved by Kaplan, Naor and Reingold [KNR05], who gave an explicit construction of such an almost $k$-wise permutation. Alternatively, one can start with the constant-size expanding generating set of $S_n$ given by Kassabov [Kas07] and take a random walk on it of the appropriate length.
1.6 Paper organization
We give a precise description of the general framework and our main theorem in Section 2. We apply it to show the existence of orthogonal arrays and designs in Section 3. The case of $k$-wise permutations requires a detour to the representation theory of the symmetric group, and we defer it to the full version of this paper. The proof of our main theorem is given in Section 4. We summarize and give some open problems in Section 5.
2 Main Theorem
Let $X$ be a finite set and let $V$ be a vector space of functions from $X$ to $\mathbb{Q}$. We ask for conditions for the existence of a small set $S \subseteq X$ for which (1) holds. Our theorem uses the following notation.
For a basis $\{f_i : i \in I\}$ (where $I$ is some finite index set) of $V$ we define $\varphi \colon X \to \mathbb{Q}^I$ by $\varphi(x)_i := f_i(x)$. This definition is extended linearly to rational combinations of elements of $X$. In the same manner, a set $S \subseteq X$ is identified with its indicator vector, so that $\varphi(S) = \sum_{s \in S} \varphi(s)$. Finally, we recall from Section 1.4 that a symmetry of $\varphi$ is a pair $(\pi, T)$, with $\pi$ a permutation of $X$ and $T$ an invertible linear transformation of $\mathbb{Q}^I$, such that $\varphi(\pi(x)) = T\varphi(x)$ for all $x$ in $X$. We now state formally our main theorem.
Theorem 2.1 (Main Theorem).
Let $X$ be a finite set and $V$ be a vector space of functions from $X$ to $\mathbb{Q}$ which contains the constant functions. Suppose that there exist an integer $N$, suitable bounding parameters, and a basis $\{f_i : i \in I\}$ of $V$ consisting of integer-valued functions such that:
 Divisibility:

$\frac{N}{|X|} \sum_{x \in X} \varphi(x)$ is an integer vector.
 Boundedness:

$|f_i(x)|$ is suitably bounded for all $i \in I$ and $x \in X$.
 Symmetry:

For each $x, y \in X$ there exists a symmetry $(\pi, T)$ of $\varphi$ such that $\pi(x) = y$.
 Isolation:

For any $i \in I$ there exist vectors $v_1, \dots, v_r \in \mathbb{Q}^X$, for suitably large $r$, such that

$\varphi(v_j) = e_i$ for all $1 \le j \le r$.

The vectors $v_1, \dots, v_r$ have disjoint supports, where the support of a vector is the set of coordinates on which it is nonzero.

The entries of $v_j$ are suitably bounded rationals with a small common denominator, for all $1 \le j \le r$.

Then there exists a subset $S \subseteq X$ with $|S| = N$ such that $\frac{1}{|S|} \sum_{s \in S} f(s) = \frac{1}{|X|} \sum_{x \in X} f(x)$ for all $f \in V$.
We prove Theorem 2.1 in Section 4. A careful examination of the proof shows that we can choose for any which satisfies the following constraints:

divides ;

;

.
Of course, if the parameters are so large that the second and third conditions contradict each other, then our theorem remains trivially true by taking $S = X$.
3 Applications
In this section we apply our main theorem, Theorem 2.1, to prove the existence results for orthogonal arrays and designs, Theorems 1.1 and 1.2. The existence result for $k$-wise permutations, Theorem 1.3, is more complicated because it requires a discussion of the representation theory of the symmetric group. We defer it to the full version of this paper.
3.1 Orthogonal arrays
We use the choice of $X = [q]^n$ and $V$ described in Section 1.1 and recall the definition (3) of the functions of that section. We note that for every subset $I$ of coordinates, summing the functions (3) over all values $b$ yields the constant function $1$. Thus $V$ contains the constant functions, as Theorem 2.1 requires. We start by choosing a convenient basis for $V$ of integer-valued functions. Recall that the alphabet is $[q] = \{0, 1, \dots, q-1\}$, and let $[q]^* = \{1, \dots, q-1\}$ be all symbols other than $0$. Extend the definition (3) of $f_{I,b}$ to apply to all subsets $I$ with $|I| \le t$ and $b \in ([q]^*)^I$. Here, we mean that $f_{\emptyset, \emptyset}$ is the constant function $1$. Finally, let $\mathcal{B}$ be the collection of these functions.
Claim 3.1.
The span of the functions is .
Proof.
Clearly for all . To see that spans , we will show that any with and is spanned by . We do this by induction on the number of elements in which are equal to . First, if then . Otherwise, let with , and assume WLOG that . Then
and by induction, the right hand side belongs to the linear span of . ∎
Recall that is defined as . We now choose integers and real numbers such that the conditions of divisibility, boundedness, symmetry and isolation required by Theorem 2.1 are satisfied. First, let . Note that . Thus we set so that is an integer vector. Second, we clearly have for any that . Hence we set .
Third, to witness the symmetry condition, fix and consider the permutation given by . We need to show that there exists a linear map acting on such that for all . This holds since for we have
and is in the linear span of by Claim 3.1.
The fourth condition we need to verify is the existence of many disjoint isolation vectors for each . Note that this condition also implies that is a basis for . This is established in the following lemma.
Lemma 3.2.
Let . There exist disjoint vectors with and such that .
We prove Lemma 3.2 in two steps. First we fix some notations. Let be of size , and let . For let be the restriction of to the coordinates of . Abusing notation, we also think of by setting coordinates outside to zero. Note that in this notation, . We define the vector as
where we recall that for , is the corresponding unit vector. Note that if then .
Claim 3.3.
Let . Then
Proof.
We compute the value of in coordinate . We have
Suppose first that . Then there exists . Flipping the th element in doesn’t change the expression and hence the alternating sign sum cancels. We thus assume from now on that . We thus have
This expression evaluates to only if and . ∎
We next prove Lemma 3.2, showing that we can build many disjoint isolation vectors for any . The proof uses the vectors we just analyzed.
Proof of Lemma 3.2.
Fix . Let be such that . We will construct a vector such that . We will do so by backward induction on . If we take
and if we construct recursively
It is easy to verify using Claim 3.3 that indeed as claimed. We further claim that . This clearly holds if . If we bound by induction
To conclude, we need to show that by choosing different values of the parameters we can achieve many disjoint vectors which isolate the given coordinate. The key observation is that each such vector is supported on elements whose Hamming distance from a fixed center is small. Thus, if we choose the centers so that the Hamming distance between each pair of them is large enough, the resulting vectors have disjoint supports. We can achieve this by a simple greedy process: choose the centers iteratively; after choosing each one, delete all elements whose Hamming distance from it is too small. Since the number of elements deleted at each step is bounded, the claim follows. ∎
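The greedy selection of pairwise-far centers used in the proof above can be made concrete. The following Python sketch (function names ours) greedily keeps points whose pairwise Hamming distance meets a threshold:

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length tuples."""
    return sum(x != y for x, y in zip(a, b))

def greedy_separated(points, min_dist):
    """Greedily keep points that are pairwise at Hamming distance
    at least min_dist; a scan-and-discard process as in the proof."""
    chosen = []
    for p in points:
        if all(hamming(p, c) >= min_dist for c in chosen):
            chosen.append(p)
    return chosen

# Over {0,1}^3 with pairwise distance >= 2, the greedy scan keeps
# 4 of the 8 points (the even-weight strings, as it happens).
picked = greedy_separated(list(product(range(2), repeat=3)), 2)
```

Since each chosen point excludes only a bounded Hamming ball, the process is guaranteed to collect many centers, which is exactly the counting step of the proof.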
We now have all the conditions to apply Theorem 2.1. We have and . Hence we obtain that there exists an orthogonal array of strength and size for some universal constant .
3.2 Designs
In this section, we prove Theorem 1.2. It suffices to prove the theorem for , since if then the complete design (the design containing all subsets of size ) establishes the theorem. We use the choice of and described in Section 1.2 and recall the definition (5) of the functions of that section. We set and note that and thus contains the constant functions as Theorem 2.1 requires. As a convenient basis for of integervalued functions, we take with . By definition, spans and the fact that is a basis for will be implied by showing the isolation condition of Theorem 2.1.
We choose integers and real numbers to satisfy the conditions of divisibility, boundedness, symmetry and isolation in Theorem 2.1. First, and hence we set so that is an integer vector. Second, . Hence we set . Third, the symmetry condition also follows simply: let be a permutation on . It acts naturally on and (by permuting subsets of ) and gives two permutations and that satisfy . The linear transformation then corresponds to the permutation .
Finally, we need to show that for each there exist many disjoint vectors which isolate it. This is accomplished in the following lemma.
Lemma 3.4.
Assume . For any there exist vectors with such that . Moreover, have disjoint supports and for .
We will need the following technical claim for the proof of Lemma 3.4. In the following we consider binomial coefficients whenever .
Claim 3.5.
Let and . Then
Proof.
Let . If we have and hence . So, it is enough to verify the claim whenever or . If then since . If then . ∎
Proof of Lemma 3.4.
Let be a coordinate we wish to isolate. Let be a set disjoint from and let . Define to be the indicator vector for all subsets such that and , that is
We define vectors as
We will shortly show that
First we bound the norm of and show the existence of many disjoint vectors. It is easy to check that . Also, the vector is supported on coordinates such that . Thus, if we choose such that we get that the vectors have disjoint support. We can choose by a simple greedy argument: choose iteratively, where in each step after choosing we remove all subsets whose intersection with is at least . The number of subsets eliminated in each step is at most hence we will get .
To conclude the proof, we need to compute . Let . Clearly if then . We thus assume that . Let where . We have that if , and that
Hence we have that
(9) 
If then as claimed. To conclude we need to prove that if then . We have and let . Thus
We now apply Claim 3.5 with and conclude that . ∎
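The greedy step in the proof of Lemma 3.4 — discarding sets that overlap an already-chosen set too much — can be sketched in the same spirit (the function name and the toy parameters are ours):

```python
from itertools import combinations

def greedy_low_intersection(n, k, max_overlap):
    """Greedily collect k-subsets of range(n) whose pairwise
    intersections have size at most max_overlap, scanning all
    k-subsets once and discarding those that overlap a chosen set."""
    chosen = []
    for B in combinations(range(n), k):
        if all(len(set(B) & C) <= max_overlap for C in chosen):
            chosen.append(set(B))
    return chosen

# With n = 6, k = 3 and no overlap allowed, the greedy scan finds
# two disjoint triples, i.e. a perfect partition of the ground set.
families = greedy_low_intersection(6, 3, 0)
```

As in the proof, each chosen set eliminates only a bounded number of candidates, so the scan produces many nearly disjoint sets.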
We are now ready to apply Theorem 2.1. We have and . Thus the theorem implies the existence of a design with