AND-compression of NP-complete problems: Streamlined proof and minor observations
Abstract
\textcite{Drucker} proved the following result: Unless the unlikely complexity-theoretic collapse $\mathsf{NP} \subseteq \mathsf{coNP}/\mathsf{poly}$ occurs, there is no AND-compression for SAT. The result has implications for the compressibility and kernelizability of a whole range of NP-complete parameterized problems. We present a streamlined proof of Drucker's theorem.
An AND-compression is a deterministic polynomial-time algorithm that maps a set of SAT-instances $x_1, \dots, x_t$ to a single SAT-instance $y$ of size $\mathrm{poly}(\max_i |x_i|)$ such that $y$ is satisfiable if and only if all $x_i$ are satisfiable. The “AND” in the name stems from the fact that the predicate “$y$ is satisfiable” can be written as the AND of all predicates “$x_i$ is satisfiable”. Drucker's theorem complements the result by \textcite{BDFH} and \textcite{FortnowSanthanam}, who proved the analogous statement for OR-compressions, and Drucker's proof not only subsumes their result but also extends it to randomized compression algorithms that are allowed to have a certain probability of failure.
\textcite{Drucker} presented two proofs: The first uses information theory and the minimax theorem from game theory, and the second is an elementary, iterative proof that is not as general. In our proof, we realize the iterative structure as a generalization of the arguments of \textcite{Ko} for selective sets, which use the fact that tournaments have dominating sets of logarithmic size. We generalize this fact to hypergraph tournaments. Our proof achieves the full generality of Drucker's theorem, avoids the minimax theorem, and restricts the use of information theory to a single, intuitive lemma about the average noise sensitivity of compressive maps. To prove this lemma, we use the same information-theoretic inequalities as Drucker.
1 Introduction
The influential “OR-conjecture” by \textcite{BDFH} asserts that $t$ instances $x_1, \dots, x_t$ of SAT cannot be mapped in polynomial time to an instance $y$ of size $\mathrm{poly}(\max_i |x_i|)$ so that $y$ is a yes-instance if and only if at least one $x_i$ is a yes-instance. Conditioned on the OR-conjecture, the “composition framework” of \textcite{BDFH} has been used to show that many different problems in parameterized complexity do not have polynomial kernels. \textcite{FortnowSanthanam} were able to prove that the OR-conjecture holds unless $\mathsf{coNP} \subseteq \mathsf{NP}/\mathsf{poly}$, thereby connecting the OR-conjecture with a standard hypothesis in complexity theory.
The results of \textcite{BDFH,FortnowSanthanam} can be used not only to rule out deterministic kernelization algorithms, but also to rule out randomized kernelization algorithms with one-sided error, as long as the success probability is bigger than zero. Left open was the question whether the complexity-theoretic hypothesis $\mathsf{coNP} \not\subseteq \mathsf{NP}/\mathsf{poly}$ (or some other hypothesis believed by complexity theorists) suffices to rule out kernelization algorithms that are randomized and have two-sided error. \textcite{Drucker} resolves this question affirmatively; his results can rule out kernelization algorithms that have a constant gap in their error probabilities. This result indicates that randomness does not help to decrease the size of kernels significantly.
With the same proof, \textcite{Drucker} resolves a second important question: whether the “AND-conjecture”, which \textcite{BDFH} formulated analogously to the OR-conjecture, can be derived from existing complexity-theoretic assumptions. This is an intriguing question in itself, and it is also relevant for parameterized complexity because, for some parameterized problems, we can rule out polynomial kernels under the AND-conjecture, but we do not know how to do so under the OR-conjecture. \textcite{Drucker} proves that the AND-conjecture is true if $\mathsf{coNP} \not\subseteq \mathsf{NP}/\mathsf{poly}$ holds.
The purpose of this paper is to discuss Drucker’s theorem and its proof. To this end, we attempt to present a simpler proof of his theorem. Our proof in §3 gains in simplicity with a small loss in generality: the bound that we get is worse than Drucker’s bound by a factor of two. Using the slightly more complicated approach in §4, it is possible to get the same bounds as Drucker. These differences, however, do not matter for the basic version of the main theorem, which we state in §1.1 and further discuss in §1.2. For completeness, we briefly discuss a formulation of the composition framework in §1.3.
1.1 Main Theorem: Ruling out OR- and AND-compressions
An AND-compression for a language $L$ is a polynomial-time reduction that maps a set $\{x_1, \dots, x_t\}$ to some instance $y$ of a language $L'$ such that $y \in L'$ holds if and only if $x_1 \in L$ and $x_2 \in L$ and $\dots$ and $x_t \in L$. By De Morgan's law, the same algorithm is an OR-compression for the complement $\overline{L}$ because $y \in \overline{L'}$ holds if and only if $x_1 \in \overline{L}$ or $x_2 \in \overline{L}$ or $\dots$ or $x_t \in \overline{L}$. \textcite{Drucker} proved that an OR-compression for $L$ implies $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$, which is a complexity consequence that is closed under complementation, that is, it is equivalent to $\overline{L} \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$. For this reason, and as opposed to earlier work \parencite{BDFH,FortnowSanthanam,DellVanMelkebeek}, it is without loss of generality that we restrict our attention to OR-compressions for the remainder of this paper. We now formally state Drucker's theorem.
Theorem (Drucker’s theorem).
Let $L$ and $L'$ be languages, let $e_s, e_c \in [0,1]$ be error probabilities with $e_s + e_c < 1$, and let $\epsilon > 0$. Assume that there exists a randomized polynomial-time algorithm $\mathcal{A}$ that maps any set $x = \{x_1, \dots, x_t\} \subseteq \{0,1\}^n$ for some $n$ and $t$ to $y = \mathcal{A}(x)$ such that:

(Soundness) If all $x_i$'s are no-instances of $L$, then $y$ is a no-instance of $L'$ with probability at least $1 - e_s$.

(Completeness) If exactly one $x_i$ is a yes-instance of $L$, then $y$ is a yes-instance of $L'$ with probability at least $1 - e_c$.

(Size bound) The size of $y$ is bounded by $t^{1-\epsilon} \cdot \mathrm{poly}(n)$.

Then $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$.
The procedure $\mathcal{A}$ above does not need to be a “full” OR-compression, which makes the theorem more general. In particular, $\mathcal{A}$ is relaxed in two ways: it only needs to work, or be analyzed, in the case that all input instances have the same length; this is useful in hardness of kernelization proofs as it allows similar instances to be grouped together. Furthermore, $\mathcal{A}$ only needs to work, or be analyzed, in the case that at most one of the input instances is a yes-instance of $L$; we believe that this property will be useful in future work on hardness of kernelization.
The fact that “relaxed” OR-compressions suffice in Theorem 1.1 is implicit in the proof of \textcite{Drucker}, but not stated explicitly. Before Drucker's work, \textcite{FortnowSanthanam} proved the special case of Theorem 1.1 in which $e_s = e_c = 0$, but they only obtain a weaker consequence that is not closed under complementation, which prevents their result from applying to AND-compressions in a nontrivial way. Moreover, their proof uses the full completeness requirement and does not seem to work for relaxed OR-compressions.
1.2 Comparison and overview of the proof
The simplification of our proof stems from two main sources: (1) The “scaffolding” of our proof, its overall structure, is more modular and more similar to arguments used previously by \textcite{Ko}, \textcite{FortnowSanthanam}, and \textcite{DellVanMelkebeek} for compression-type procedures and by \textcite{isolation} for isolation procedures. (2) While the information-theoretic part of our proof uses the same set of information-theoretic inequalities as Drucker's, the simple version in §3 applies these inequalities to distributions that have a simpler structure. Moreover, our calculations have a somewhat more mechanical nature. Both Drucker's proof and ours use the relaxed OR-compression $\mathcal{A}$ to design a reduction from $L$ to the statistical distance problem, which is known to be in the intersection of $\mathsf{NP}/\mathsf{poly}$ and $\mathsf{coNP}/\mathsf{poly}$ by previous work (cf. \textcite{XiaoThesis}). \textcite{Drucker} uses the minimax theorem and a game-theoretic sparsification argument to construct the polynomial advice of the reduction. He also presents an alternative proof \parencite[Section 3]{Druckerfull} in which the advice is constructed without these arguments and also without any explicit invocation of information theory; however, the alternative proof does not achieve the full generality of his theorem, and we feel that avoiding information theory entirely leads to a less intuitive proof structure. In contrast, our proof achieves full generality up to a factor of two in the simplest proof, it avoids game-theoretic arguments, and it limits information theory to a single, intuitive lemma about the average noise sensitivity of compressive maps. Using this information-theoretic lemma as a black box, we design the reduction in a purely combinatorial way: We generalize the fact that tournaments have dominating sets of logarithmic size to hypergraph tournaments; these are complete uniform hypergraphs with the additional property that, for each hyperedge, one of its elements gets “selected”.
In particular, for each set $e$ of no-instances, we select one element of $e$ based on the fact that $\mathcal{A}$'s behavior on $e$ somehow proves that the selected instance is a no-instance of $L$. The advice of the reduction is going to be a small dominating set of this hypergraph tournament on the set of no-instances of $L$. The crux is that we can efficiently test, with the help of the statistical distance problem oracle, whether an instance is dominated or not. Since any instance is dominated if and only if it is a no-instance of $L$, this suffices to solve $L$. In the information-theoretic lemma, we generalize the notion of average noise sensitivity of Boolean functions (which can attain two values) to compressive maps (which can attain only relatively few values compared to the input length). We show that compressive maps have small average noise sensitivity. Drucker's “distributional stability” is a closely related notion, which we make implicit use of in our proof. Using the latter notion as the anchor of the overall reduction, however, leads to some additional technicalities in Drucker's proof, which we also run into in §4 where we obtain the same bounds as Drucker's theorem. In §3 we instead use the average noise sensitivity as the anchor of the reduction, which avoids these technicalities at the cost of losing a factor of two in the bounds.
1.3 Application: The composition framework for ruling out kernels
We briefly describe a modern variant of the composition framework that is sufficient to rule out kernels of size $O(k^{d-\epsilon})$ using Theorem 1.1. It is almost identical to Lemma 1 of \textcite{DellMarx,DellVanMelkebeek} and the notion defined by \textcite[Definition 2.2]{HermelinWu}. By applying the framework for unbounded $d$, we can also use it to rule out polynomial kernels.
Definition 1.
Let $L$ be a language, and let $Q$ with parameter $k$ be a parameterized problem. A $d$-partite composition of $L$ into $Q$ is a polynomial-time algorithm $\mathcal{C}$ that maps any set $x = \{x_1, \dots, x_t\} \subseteq \{0,1\}^n$ for some $n$ and $t$ to $(y, k) = \mathcal{C}(x)$ such that:

(1) If all $x_i$'s are no-instances of $L$, then $(y, k)$ is a no-instance of $Q$.

(2) If exactly one $x_i$ is a yes-instance of $L$, then $(y, k)$ is a yes-instance of $Q$.

(3) The parameter $k$ is bounded by $t^{1/d} \cdot \mathrm{poly}(n)$.
This notion of composition has one crucial advantage over previous notions of OR-composition: The algorithm $\mathcal{C}$ does not need to work, or be analyzed, in the case that two or more of the $x_i$'s are yes-instances.
Definition 2.
Let $Q$ be a parameterized problem. We call $Q$ $d$-compositional if there exists an $\mathsf{NP}$-hard or $\mathsf{coNP}$-hard problem $L$ that has a $d$-partite composition algorithm into $Q$.
The above definition encompasses both AND-compositions and OR-compositions because an AND-composition of $L$ into $Q$ is the same as an OR-composition of $\overline{L}$ into $\overline{Q}$. We have the following corollary of Drucker's theorem.
Corollary.
If $\mathsf{coNP} \not\subseteq \mathsf{NP}/\mathsf{poly}$, then for all $\epsilon > 0$, no $d$-compositional problem has kernels of size $O(k^{d-\epsilon})$. Moreover, this even holds when the kernelization algorithm is allowed to be a randomized algorithm with at least a constant gap in error probability.
Proof.
Let $L$ be an $\mathsf{NP}$-hard or $\mathsf{coNP}$-hard problem that has a $d$-partite composition $\mathcal{C}$ into $Q$. Assume for the sake of contradiction that $Q$ has a kernelization algorithm of size $O(k^{d-\epsilon})$ with soundness error at most $e_s$ and completeness error at most $e_c$ so that $e_s + e_c$ is bounded by a constant smaller than one. The concatenation of $\mathcal{C}$ with the assumed kernelization gives rise to an algorithm that satisfies the conditions of Theorem 1.1, for example with a sufficiently small choice of the constant $\epsilon$ there. Therefore, we get $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$ and thus $\mathsf{coNP} \subseteq \mathsf{NP}/\mathsf{poly}$, a contradiction.
Several variants of the framework provided by this corollary are possible:

In order to rule out polynomial kernels for a parameterized problem $Q$, we just need to prove that $Q$ is $d$-compositional for all $d \in \mathbb{N}$; let's call $Q$ compositional in this case. One way to show that $Q$ is compositional is to construct a single composition from a hard problem $L$ into $Q$; this is an algorithm as in Definition 1, except that we replace (3) with the stronger bound $k \le t^{o(1)} \cdot \mathrm{poly}(n)$.

Since all $x_i$'s in Definition 1 are promised to have the same length $n$, we can consider a padded version $L'$ of the language $L$ in order to filter the input instances of length $n$ of the original $L$ into a polynomial number of equivalence classes. Each input length of $L'$ in some interval corresponds to one equivalence class of length-$n$ instances of $L$. So long as $L'$ remains $\mathsf{NP}$-hard or $\mathsf{coNP}$-hard, it is sufficient to consider a composition from $L'$ into $Q$. \textcite[Definition 4]{crosscomposition} formalize this approach.

The composition algorithm can also use randomness, as long as the overall probability gap of the concatenation of composition and kernelization is not negligible.

In the case that $L$ is $\mathsf{NP}$-hard, \textcite{FortnowSanthanam} and \textcite{DellVanMelkebeek} prove that the composition algorithm can also be a co-nondeterministic algorithm or even an oracle communication game in order to get the collapse. Interestingly, this does not seem to follow from Drucker's proof nor from the proof presented here, and it seems to require the full completeness condition for the OR-composition. \textcite{kratsch2012co,pointlinecover} exploit these variants of the composition framework to prove kernel lower bounds.
2 Preliminaries
For any set $A \subseteq \{0,1\}^*$ and any $n \in \mathbb{N}$, we write $A^{=n}$ for the set of all length-$n$ strings inside of $A$. For any $n \in \mathbb{N}$, we write $[n] = \{1, \dots, n\}$. For a set $V$, we write $\binom{V}{\le t}$ for the set of all subsets of $V$ that have size at most $t$. We will work over a finite alphabet, usually $\Sigma = \{0,1\}$. For a vector $x \in \Sigma^n$, a number $i \in [n]$, and a value $v \in \Sigma$, we write $x[i \to v]$ for the string that coincides with $x$ except in position $i$, where it has value $v$. For background in complexity theory, we defer to the book by \textcite{ABbook}. We assume some familiarity with the complexity classes $\mathsf{NP}$ and $\mathsf{coNP}$ as well as their non-uniform versions $\mathsf{NP}/\mathsf{poly}$ and $\mathsf{coNP}/\mathsf{poly}$.
2.1 Distributions and Randomized Mappings
A distribution on a finite ground set $\Omega$ is a function $\mathcal{D} \colon \Omega \to [0,1]$ with $\sum_{\omega \in \Omega} \mathcal{D}(\omega) = 1$. The support of $\mathcal{D}$ is the set $\mathrm{supp}(\mathcal{D}) = \{\omega \in \Omega : \mathcal{D}(\omega) > 0\}$. The uniform distribution on $\Omega$ is the distribution $\mathcal{U}_\Omega$ with $\mathcal{U}_\Omega(\omega) = 1/|\Omega|$ for all $\omega \in \Omega$. We often view distributions as random variables, that is, we may write $f(\mathcal{D})$ to denote the distribution that first produces a sample $\omega \sim \mathcal{D}$ and then outputs $f(\omega)$. We use any of the following notations:
$$\Pr_{\omega \sim \mathcal{D}}\big[ f(\omega) = y \big] \;=\; \Pr\big[ f(\mathcal{D}) = y \big] \;=\; f(\mathcal{D})(y)\,.$$
This makes immediate sense if $f$ is a deterministic function, but we will also allow $f$ to be a randomized mapping, that is, $f$ has access to some “internal” randomness. This is modeled as a function $f \colon \Omega \times \{0,1\}^r \to \Omega'$ for some $r \in \mathbb{N}$, and we write $f(\mathcal{D})$ as a shorthand for $f(\mathcal{D}, \mathcal{U}_{\{0,1\}^r})$. That is, the internal randomness consists of a sequence of independent and fair coin flips.
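As a small illustration of this convention (the function and names below are hypothetical, not from the paper), a randomized mapping can be represented as a deterministic function of its input together with $r$ fair coin flips, and its output distribution is the exact average over all $2^r$ coin sequences:

```python
import itertools

def f(omega, coins):
    """A toy randomized mapping f: Omega x {0,1}^2 -> Omega', modeled as a
    deterministic function of the input and two internal fair coin flips."""
    # Flip the input bit only when both internal coins come up 1.
    return omega ^ (coins[0] & coins[1])

def distribution_of(f, omega, r=2):
    """Exact output distribution of f(omega, U_{{0,1}^r}): average over all
    2^r equally likely internal coin-flip sequences."""
    outcomes = {}
    for coins in itertools.product((0, 1), repeat=r):
        y = f(omega, coins)
        outcomes[y] = outcomes.get(y, 0.0) + 2.0 ** (-r)
    return outcomes

print(distribution_of(f, 0))  # {0: 0.75, 1: 0.25}
```

This matches the convention above: the coin flips are independent and fair, so each of the four coin sequences contributes probability $1/4$ to the output distribution.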
2.2 Statistical Distance
The statistical distance $d(\mathcal{D}, \mathcal{D}')$ between two distributions $\mathcal{D}$ and $\mathcal{D}'$ on $\Omega$ is defined as
(1) $\displaystyle d(\mathcal{D}, \mathcal{D}') \;=\; \max_{T \subseteq \Omega} \Big| \Pr[\mathcal{D} \in T] - \Pr[\mathcal{D}' \in T] \Big|\,.$
The statistical distance between $\mathcal{D}$ and $\mathcal{D}'$ is a number in $[0,1]$, with $d(\mathcal{D}, \mathcal{D}') = 0$ if and only if $\mathcal{D} = \mathcal{D}'$ and $d(\mathcal{D}, \mathcal{D}') = 1$ if and only if the support of $\mathcal{D}$ is disjoint from the support of $\mathcal{D}'$. It is an exercise to show the standard equivalence between the statistical distance and the $\ell_1$-norm:
$$d(\mathcal{D}, \mathcal{D}') \;=\; \tfrac{1}{2} \big\| \mathcal{D} - \mathcal{D}' \big\|_1 \;=\; \tfrac{1}{2} \sum_{\omega \in \Omega} \big| \mathcal{D}(\omega) - \mathcal{D}'(\omega) \big|\,.$$
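Both formulations are easy to check against each other numerically on small examples; the helper names below are illustrative only:

```python
from itertools import chain, combinations

def sd_max_over_events(D1, D2):
    """Statistical distance as max_T |Pr[D1 in T] - Pr[D2 in T]| over all events T."""
    omega = sorted(set(D1) | set(D2))
    events = chain.from_iterable(combinations(omega, k) for k in range(len(omega) + 1))
    return max(abs(sum(D1.get(w, 0) for w in T) - sum(D2.get(w, 0) for w in T))
               for T in events)

def sd_l1(D1, D2):
    """Statistical distance as half the l1-distance of the probability vectors."""
    omega = set(D1) | set(D2)
    return 0.5 * sum(abs(D1.get(w, 0) - D2.get(w, 0)) for w in omega)

D1 = {'a': 0.5, 'b': 0.3, 'c': 0.2}
D2 = {'a': 0.1, 'b': 0.3, 'c': 0.6}
assert abs(sd_max_over_events(D1, D2) - sd_l1(D1, D2)) < 1e-12
print(sd_l1(D1, D2))  # 0.4
```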
2.3 The Statistical Distance Problem
For $\alpha, \beta \colon \mathbb{N} \to [0,1]$ with $\alpha > \beta$, let $\mathrm{SD}_{\alpha,\beta}$ be the following promise problem:

yes-instances: Pairs of circuits $(C, C')$ so that $d\big(C(\mathcal{U}), C'(\mathcal{U})\big) \ge \alpha$.

no-instances: Pairs of circuits $(C, C')$ so that $d\big(C(\mathcal{U}), C'(\mathcal{U})\big) \le \beta$.

Here $\mathcal{U}$ denotes the uniform distribution on the input bits of the circuits.
The statistical distance problem is not known to be polynomial-time computable, and in fact it is not believed to be. On the other hand, the problem is also not believed to be $\mathsf{NP}$-hard because the problem is computationally easy in the following sense.
Theorem (\textcite{XiaoThesis} + \textcite{Adleman}).
If $\alpha > \beta$ are constants, we have $\mathrm{SD}_{\alpha,\beta} \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$. Moreover, the same holds when $\alpha$ and $\beta$
are functions of the input length that satisfy
$\alpha - \beta \ge 1/\mathrm{poly}$.
This is the only fact about the SD-problem that we will use in this paper.
Slightly stronger versions of this theorem are known: For example, \textcite[p. 144ff]{XiaoThesis} proves that $\mathrm{SD}_{\alpha,\beta} \in \mathsf{AM} \cap \mathsf{coAM}$ holds. In fact, Theorem 2.3 is established by combining his theorem with the standard fact that $\mathsf{AM} \subseteq \mathsf{NP}/\mathsf{poly}$, i.e., that Arthur–Merlin games can be derandomized with polynomial advice \parencite{Adleman}. Moreover, when we have the stronger guarantee that $\alpha^2 > \beta$ holds, then $\mathrm{SD}_{\alpha,\beta}$ can be solved using statistical zero-knowledge proof systems \parencite{SV03,GV11}. Finally, if $\beta = 0$, the problem can be solved with perfect zero-knowledge proof systems \parencite[Proposition 5.7]{SV03}. Using these stronger results whenever possible gives slightly stronger complexity collapses in the main theorem.
3 Ruling out ORcompressions
In this section we prove Theorem 1.1: Any language $L$ that has a relaxed OR-compression is in $\mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$. We rephrase the theorem in a form that reveals the precise inequality between the error probabilities and the compression ratio needed to get the complexity consequence.
Theorem (compressive version of Drucker’s theorem).
Let $L$ and $L'$ be languages, and let $e_s, e_c \in [0,1]$ be constants denoting the error probabilities. Let $t = t(n)$ be a polynomial, let $m = m(n)$ be a size bound, and let
(2) $\mathcal{A} \colon \binom{\{0,1\}^n}{\le t(n)} \to \{0,1\}^{\le m(n)}$
be a randomized polynomial-time algorithm such that, for all $x \in \binom{\{0,1\}^n}{\le t(n)}$,

– if $x \subseteq \overline{L}$, then $\mathcal{A}(x) \in \overline{L'}$ holds with probability at least $1 - e_s$, and

– if $|x \cap L| = 1$, then $\mathcal{A}(x) \in L'$ holds with probability at least $1 - e_c$.

If $e_s + e_c < 1 - \sqrt{2 \ln 2 \cdot (m+1)/t}$, then $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$.
This is Theorem 7.1 in \textcite{Druckerfull}. However, there are two noteworthy differences:

Drucker obtains complexity consequences even under a bound that is weaker than ours by a factor of two, which makes his theorem more general. The difference stems from the fact that we optimized the proof in this section for simplicity and not for the optimality of the bound. He also obtains complexity consequences under an additional, incomparable bound. Using the slightly more complicated setup of §4, we would be able to achieve both of these bounds.

To get a meaningful result for OR-compression of $\mathsf{NP}$-complete problems, we need the complexity consequence $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$ rather than just one of the two inclusions. To get the stronger consequence, Drucker relies on the fact that the statistical distance problem has statistical zero-knowledge proofs. This is only known to be true when $\alpha^2 > \beta$ holds, which translates to the more restrictive assumption in his theorem. We instead use Theorem 2.3, which does not go through statistical zero knowledge and proves more directly that $\mathrm{SD}_{\alpha,\beta}$ is in $\mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$ whenever the gap $\alpha - \beta$ is at least inverse polynomial. Doing so in Drucker's paper immediately improves all of his consequences to $\mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$.
To obtain Theorem 1.1, the basic version of Drucker's theorem, as a corollary of Theorem 3, none of these differences matter. This is because we could choose the relevant constants to be sufficiently small in the proof of Theorem 1.1, which we provide now before we turn to the proof of Theorem 3.
Proof (of Theorem 1.1).
Let $\mathcal{A}'$ be the algorithm assumed in Theorem 1.1, and let $c$ be large enough so that the output size of $\mathcal{A}'$ is bounded by $t^{1-\epsilon} \cdot n^c$. We transform $\mathcal{A}'$ into an algorithm as required for Theorem 3. Let $\delta > 0$ be a small enough constant so that $e_s + e_c < 1 - \sqrt{2 \ln 2 \cdot \delta}$ holds. Moreover, let $t(n)$ be a large enough polynomial so that $t(n)^{1-\epsilon} \cdot n^c + 1 \le \delta \cdot t(n)$ holds. Then we restrict $\mathcal{A}'$ to a family of functions $\mathcal{A} \colon \binom{\{0,1\}^n}{\le t(n)} \to \{0,1\}^{\le \delta t(n)}$. Now a minor observation is needed to get an algorithm of the form (2): The output of $\mathcal{A}'$ can be efficiently encoded as a binary string (which changes the output language from $L'$ to some language $L''$). Thus we constructed a family as required by Theorem 3, which proves the claim.
3.1 ORs are sensitive to yes-instances
The semantic property of relaxed OR-compressions is that they are “sensitive”: They show a dramatically different behavior for all-no input sets vs. input sets that contain a single yes-instance of $L$. The following simple fact is the only place in the overall proof where we use the soundness and completeness properties of $\mathcal{A}$.
Lemma.
For all distributions $\mathcal{D}$ on $\binom{\overline{L} \cap \{0,1\}^n}{\le t-1}$ and all yes-instances $y \in L \cap \{0,1\}^n$, we have
(3) $d\big(\mathcal{A}(\mathcal{D}),\ \mathcal{A}(\mathcal{D} \cup \{y\})\big) \;\ge\; 1 - e_s - e_c\,.$
Proof.
The probability that $\mathcal{A}(\mathcal{D})$ outputs an element of $L'$ is at most $e_s$, and similarly, the probability that $\mathcal{A}(\mathcal{D} \cup \{y\})$ outputs an element of $L'$ is at least $1 - e_c$. By (1) with $T = L'$, the statistical distance between the two distributions is at least $1 - e_c - e_s$.
Despite the fact that relaxed OR-compressions are sensitive to the presence or absence of a yes-instance, we argue next that their behavior within the set of no-instances is actually quite predictable.
3.2 The average noise sensitivity of compressive maps is small
Relaxed OR-compressions are in particular compressive maps. The following lemma says that the average noise sensitivity of any compressive map is low. Here, “average noise sensitivity” refers to the difference in the behavior of a function when the input is subject to random noise; in our case, we change the input in a single random location and notice that the behavior of a compressive map does not change much.
Lemma.
Let $t \in \mathbb{N}$, let $\mathcal{U}$ be the uniform distribution on $\{0,1\}^t$, and let $R$ be a finite set. Then, for all randomized mappings $f \colon \{0,1\}^t \to R$, we have
(4) $\displaystyle \mathop{\mathbb{E}}_{i \sim [t]}\ d\big( f(\mathcal{U}[i \to 0]),\ f(\mathcal{U}[i \to 1]) \big) \;\le\; \sqrt{\frac{2 \ln 2 \cdot \log_2 |R|}{t}}\,.$
We defer the purely information-theoretic and mechanical proof of this lemma to §3.5. In the special case where $f$ is a Boolean function, the left-hand side of (4) coincides with the usual definition of the average noise sensitivity.
We translate Lemma 3.2 to our relaxed OR-compression $\mathcal{A}$ as follows.
Lemma.
Let $m$ be the bound on the output length of $\mathcal{A}$ in bits. For all $S \in \binom{\overline{L} \cap \{0,1\}^n}{t}$, there exists $x \in S$ so that
(5) $\displaystyle d\big( \mathcal{A}(\mathcal{D}),\ \mathcal{A}(\mathcal{D} \cup \{x\}) \big) \;\le\; \sqrt{\frac{2 \ln 2 \cdot (m+1)}{t}}\,.$
Here $\mathcal{D}$ samples a subset of $S \setminus \{x\}$ uniformly at random. Note that we replaced the expectation over $i \sim [t]$ from (4) with the mere existence of an element $x$ in (5) since this is all we need; the stronger property also holds.
Proof.
Identify a subset of $S$ with its indicator vector in $\{0,1\}^S$, and apply Lemma 3.2 to the randomized mapping that takes an indicator vector $\chi \in \{0,1\}^S$ to $\mathcal{A}(\{x \in S : \chi_x = 1\})$; some $x \in S$ then achieves at most the average in (4), which yields (5).

This lemma suggests the following tournament idea. We let $V = \overline{L} \cap \{0,1\}^n$ be the set of no-instances, and we let them compete in matches consisting of $t$ players each. That is, a match corresponds to a hyperedge $e$ of size $t$ and every such hyperedge is present, so we are looking at a complete $t$-uniform hypergraph. We say that a player $x$ is “selected” in the hyperedge $e$ if the behavior of $\mathcal{A}$ on random subsets of $e \setminus \{x\}$ is not very different from the behavior of $\mathcal{A}$ when $x$ is added, that is, if (5) holds with $S = e$. The point of this construction is that being selected proves that $x$ must be a no-instance because otherwise (3) would be violated. We obtain a “selector” function $\mathcal{S}$ that, given $e \in \binom{V}{t}$, selects an element $\mathcal{S}(e) \in e$. We call $\mathcal{S}$ a hypergraph tournament on $V$.
3.3 Hypergraph tournaments have small dominating sets
Tournaments are complete directed graphs, and it is well-known that they have dominating sets of logarithmic size. A straightforward generalization applies to hypergraph tournaments, that is, functions $\mathcal{S} \colon \binom{V}{k} \to V$ with $\mathcal{S}(e) \in e$ for all hyperedges $e$. We say that a set $e \in \binom{V}{k-1}$ dominates a vertex $v$ if $v \in e$ or $\mathcal{S}(e \cup \{v\}) = v$ holds. A set $D \subseteq \binom{V}{k-1}$ is a dominating set of $\mathcal{S}$ if all vertices $v \in V$ are dominated by at least one element of $D$.
Lemma.
Let $V$ be a finite set, and let $\mathcal{S} \colon \binom{V}{k} \to V$ be a hypergraph tournament.
Then $\mathcal{S}$ has a dominating set of size at most $O(k \cdot \log |V|)$.
Proof.
We construct the set $D$ inductively. Initially, it has no elements. After the $i$-th step of the construction, we will preserve the invariant that $D$ is of size exactly $i$ and that $|R_i| \le (1 - \tfrac{1}{k})^i \cdot |V|$ holds, where $R_i$ is the set of vertices that are not yet dominated, that is,
$$R_i \;=\; \big\{\, v \in V \;:\; v \text{ is not dominated by any } e \in D \,\big\}\,.$$
If $|R_i| < k$, we can add an arbitrary edge $e \in \binom{V}{k-1}$ with $R_i \subseteq e$ to $D$ to finish the construction. Otherwise, the following averaging argument shows that there is an element $e \in \binom{R_i}{k-1}$ that dominates at least a $\tfrac{1}{k}$-fraction of the elements $v \in R_i$: every hyperedge $e' \in \binom{R_i}{k}$ selects exactly one of its $k$ elements, so the average number of $v \in R_i \setminus e$ with $\mathcal{S}(e \cup \{v\}) = v$, taken over all $e \in \binom{R_i}{k-1}$, is $\binom{|R_i|}{k} \big/ \binom{|R_i|}{k-1} = \frac{|R_i| - k + 1}{k}$, and each such $e$ additionally dominates its own $k-1$ members.
Thus, the number of elements of $R_i$ left undominated by the best such $e$ is at most $(1 - \tfrac{1}{k}) \cdot |R_i|$, so the inductive invariant holds. Since $(1 - \tfrac{1}{k})^i \le e^{-i/k}$, we have $|R_i| < k$ after $O(k \log |V|)$ steps of the construction, and in particular, $D$ has at most $O(k \log |V|)$ elements.
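The greedy construction in the proof can be sketched as follows. This is a toy instantiation: the selector `min`, which always returns an element of its hyperedge, is a valid (if degenerate) hypergraph tournament, and `k` plays the role of the hyperedge size:

```python
from itertools import combinations

def greedy_dominating_set(V, k, select):
    """Greedily build a set D of (k-1)-subsets dominating every vertex:
    a set e dominates v if v is in e or if select(e | {v}) == v."""
    V = set(V)
    D = []
    undominated = set(V)
    while len(undominated) >= k:
        # Pick the (k-1)-subset of undominated vertices dominating the most.
        best = max(combinations(sorted(undominated), k - 1),
                   key=lambda e: sum(1 for v in undominated
                                     if v in e or select(frozenset(e) | {v}) == v))
        best = frozenset(best)
        D.append(best)
        undominated = {v for v in undominated
                       if v not in best and select(best | {v}) != v}
    if undominated:  # fewer than k vertices left: one edge containing them all
        rest = sorted(V - undominated)[: k - 1 - len(undominated)]
        D.append(frozenset(undominated) | frozenset(rest))
    return D

select = min          # a valid selector: always returns an element of the edge
V = range(32)
D = greedy_dominating_set(V, 3, select)
# every vertex is dominated by some edge in D
assert all(any(v in e or select(e | {v}) == v for e in D) for v in V)
```

For this particular selector a single edge suffices, because the two largest vertices dominate everything below them; adversarial selectors force the logarithmic number of rounds from the proof.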
3.4 Proof of the main theorem: Reduction to statistical distance
Proof (of Theorem 3).
We describe a deterministic reduction from $L$ to the statistical distance problem $\mathrm{SD}_{\alpha,\beta}$ with $\alpha = 1 - e_s - e_c$ and $\beta$ equal to the right-hand side of (5). The reduction outputs the conjunction of polynomially many instances of $\mathrm{SD}_{\alpha,\beta}$. Since $\mathrm{SD}_{\alpha,\beta}$ is contained in the intersection of $\mathsf{NP}/\mathsf{poly}$ and $\mathsf{coNP}/\mathsf{poly}$ by Theorem 2.3, and since this intersection is closed under taking polynomial conjunctions, we obtain $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$. Thus it remains to find such a reduction. To simplify the discussion, we describe the reduction in terms of an algorithm that solves $L$ and uses $\mathrm{SD}_{\alpha,\beta}$ as an oracle. However, the algorithm only makes nonadaptive queries at the end of the computation and accepts if and only if all oracle queries accept; this corresponds to a reduction that maps an instance of $L$ to a conjunction of instances of $\mathrm{SD}_{\alpha,\beta}$ as required.
To construct the advice at input length $n$, we use Lemma 3.2 to obtain a hypergraph tournament $\mathcal{S}$ on $V = \overline{L} \cap \{0,1\}^n$, which in turn gives rise to a small dominating set $D$ by Lemma 3.3. We remark the triviality that if $|V| < t$, then we can use $V$, the set of all no-instances of $L$ at this input length, as the advice. Otherwise, we define the hypergraph tournament for all $e \in \binom{V}{t}$ as follows:
$$\mathcal{S}(e) \;=\; \min \big\{\, x \in e \;:\; \text{(5) holds with } S = e \,\big\}\,.$$
By Lemma 3.2, the set over which the minimum is taken is non-empty, and thus $\mathcal{S}$ is well-defined. Furthermore, the hypergraph tournament has a dominating set $D$ of size polynomial in $n$ by Lemma 3.3. As advice for input length $n$, we choose this set $D$. Now we have $x \in \overline{L} \cap \{0,1\}^n$ if and only if $x$ is dominated by $D$. The idea of the reduction is to efficiently check the latter property.
The algorithm works as follows: Let $x \in \{0,1\}^n$ be an instance of $L$ given as input. If $x \in e$ holds for some $e \in D$, the algorithm rejects and halts. Otherwise, it queries the SD-oracle on the instance $\big(\mathcal{A}(\mathcal{D}_e),\ \mathcal{A}(\mathcal{D}_e \cup \{x\})\big)$ for each $e \in D$, where $\mathcal{D}_e$ samples a subset of $e$ uniformly at random. If the oracle claims that all queries are yes-instances, our algorithm accepts, and otherwise, it rejects.
First note that distributions of the form $\mathcal{A}(\mathcal{D}_e)$ and $\mathcal{A}(\mathcal{D}_e \cup \{x\})$ can be sampled by polynomial-size circuits, and so they form syntactically correct instances of the SD-problem: The information about $e$, $x$, and $n$ is hardwired into these circuits, the input bits of the circuits are used to produce a sample from $\mathcal{D}_e$, and they serve as internal randomness of $\mathcal{A}$ in case $\mathcal{A}$ is a randomized algorithm.
It remains to prove the correctness of the reduction. If $x \in L$, we have for all $e \in D$ that $x \notin e$ and that the statistical distance of the query corresponding to $e$ is at least $1 - e_s - e_c$ by Lemma 3.1. Thus all queries that the reduction makes satisfy the promise of the SD-problem and the oracle answers the queries correctly, leading our reduction to accept. On the other hand, if $x \notin L$, then, since $D$ is a dominating set with respect to the hypergraph tournament $\mathcal{S}$, there is at least one $e \in D$ so that $x \in e$ or $\mathcal{S}(e \cup \{x\}) = x$ holds. If $x \in e$, the reduction rejects. The other case implies that the statistical distance between $\mathcal{A}(\mathcal{D}_e)$ and $\mathcal{A}(\mathcal{D}_e \cup \{x\})$ is at most $\beta$. The query corresponding to this particular $e$ therefore satisfies the promise of the SD-problem, which means that the oracle answers correctly on this query and our reduction rejects.
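The accept/reject logic of this oracle algorithm can be exercised on a toy example. Everything below — the language, the oversized advice set, and the stubbed SD-oracle, which simply answers queries according to the promise — is an illustrative stand-in rather than the construction of the paper:

```python
from itertools import combinations

# Toy universe: binary strings of length 4; L = strings containing "11".
n = 4
universe = [format(i, "04b") for i in range(2 ** n)]
L = {x for x in universe if "11" in x}
no_instances = [x for x in universe if x not in L]

# A toy "dominating set" advice: all (k-1)-subsets of no-instances, which
# trivially dominates; the real advice would be the small set guaranteed
# by the tournament lemma.
k = 2
advice = [frozenset(e) for e in combinations(no_instances, k - 1)]

def sd_oracle_stub(e, x):
    """Stand-in for the SD-oracle: on queries satisfying the promise, it
    reports a large statistical distance exactly when x is a yes-instance,
    mirroring Lemma 3.1."""
    return x in L

def decide(x):
    # Reject if x occurs in some advice edge: then x is surely a no-instance.
    if any(x in e for e in advice):
        return False
    # Otherwise accept iff the oracle claims all queries have large distance.
    return all(sd_oracle_stub(e, x) for e in advice)

assert all(decide(x) == (x in L) for x in universe)
```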
3.5 Informationtheoretic arguments
We now prove Lemma 3.2. The proof uses the Kullback–Leibler divergence as an intermediate step. Just like the statistical distance, this notion measures how similar two distributions are, but it does so in an information-theoretic way rather than in a purely statistical way. In fact, it is well-known in the area that the Kullback–Leibler divergence and the mutual information are almost interchangeable in a certain sense. We prove a version of this paradigm formally in Lemma 3.5 below; then we prove Lemma 3.2 by bounding the statistical distance in terms of the Kullback–Leibler divergence using standard inequalities.
We introduce some basic information-theoretic notions. The Shannon entropy of a random variable $X$ is
$$H(X) \;=\; - \sum_x \Pr[X = x] \cdot \log_2 \Pr[X = x]\,.$$
The conditional Shannon entropy is
$$H(X \mid Y) \;=\; \sum_y \Pr[Y = y] \cdot H(X \mid Y = y)\,.$$
The mutual information between $X$ and $Y$ is $I(X ; Y) = H(X) - H(X \mid Y)$. Note that $I(X ; Y) \le H(X) \le \log_2 s$, where $s$ is the size of the support of $X$. The conditional mutual information can be defined by the chain rule of mutual information $I(X ; Y Z) = I(X ; Z) + I(X ; Y \mid Z)$. If $Y$ and $Z$ are independent, then a simple calculation reveals that $I(X ; Y \mid Z) \ge I(X ; Y)$ holds.
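For small joint distributions these quantities can be computed exactly; the sketch below checks the identity $I(X;Y) = H(X) - H(X \mid Y)$ is nonnegative and the bound $I(X;Y) \le H(X)$ on a toy example:

```python
from math import log2

# Joint distribution p(x, y) of a toy pair (X, Y) on {0,1} x {0,1}.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(marginal):
    """Shannon entropy of a distribution given as a dict of probabilities."""
    return -sum(q * log2(q) for q in marginal.values() if q > 0)

px = {x: sum(p[x, y] for y in (0, 1)) for x in (0, 1)}
py = {y: sum(p[x, y] for x in (0, 1)) for y in (0, 1)}

# Conditional entropy H(X|Y) = sum_y p(y) * H(X | Y=y).
HxGivenY = sum(py[y] * H({x: p[x, y] / py[y] for x in (0, 1)}) for y in (0, 1))

I = H(px) - HxGivenY                # mutual information I(X;Y)
assert 0 <= I <= H(px) + 1e-12     # I(X;Y) <= H(X) <= log2 |supp(X)|
print(round(I, 4))  # 0.2781
```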
We now establish a bound on the Kullback–Leibler divergence. The application in Lemma 3.2 only uses the case $k = 1$. The proof does not become more complicated for general $k$, and we will need the more general version later in this paper.
Lemma.
Let $t, k \in \mathbb{N}$ with $k \le t$, let $X_1, \dots, X_t$ be independent distributions on some finite set $\Sigma$, and write $X = (X_1, \dots, X_t)$. Then, for all randomized mappings $f \colon \Sigma^t \to R$, we have the following upper bound on the expected value of the Kullback–Leibler divergence, where $I$ is a uniformly random size-$k$ subset of $[t]$:
$$\mathop{\mathbb{E}}_{I}\ \mathop{\mathbb{E}}_{x_I \sim X_I}\ \mathrm{KL}\Big( \big( f(X) \mid X_I = x_I \big) \;\Big\|\; f(X) \Big) \;\le\; \frac{k \cdot \log_2 |R|}{t}\,.$$
Proof.
The result follows by a basic calculation with entropy notions. The first equality is the definition of the Kullback–Leibler divergence, which we rewrite using the logarithm rule $\log \frac{a}{b} = \log a - \log b$ and the linearity of expectation:
$$\mathop{\mathbb{E}}_{x_I \sim X_I} \mathrm{KL}\Big( \big( f(X) \mid X_I = x_I \big) \,\Big\|\, f(X) \Big) \;=\; \mathop{\mathbb{E}}_{x_I} \sum_z \Pr\big[ f(X) = z \mid x_I \big] \Big( \log_2 \Pr\big[ f(X) = z \mid x_I \big] - \log_2 \Pr\big[ f(X) = z \big] \Big)\,.$$
As the expectation is over $x_I \sim X_I$, both terms of the sum above are entropies, and we can continue the calculation as follows:
$$\begin{aligned}
&= - H\big( f(X) \mid X_I \big) + H\big( f(X) \big) && \text{(definition of entropy and conditional entropy)}\\
&= I\big( f(X) ; X_I \big) && \text{(definition of mutual information)}\,.
\end{aligned}$$
Taking the expectation over the random size-$k$ set $I$, the independence of the $X_i$'s and the chain rule of mutual information yield
$$\mathop{\mathbb{E}}_{I}\ I\big( f(X) ; X_I \big) \;\le\; \frac{k}{t} \cdot I\big( f(X) ; X_1 \cdots X_t \big) \;\le\; \frac{k \cdot \log_2 |R|}{t}\,.$$
We now turn to the proof of Lemma 3.2, where we bound the statistical distance in terms of the Kullback–Leibler divergence.
Proof (of Lemma 3.2).
We observe that $f(\mathcal{U}[i \to b])$ is distributed like $f(\mathcal{U})$ conditioned on the event that the $i$-th input bit equals $b$, and so we are in the situation of Lemma 3.5 with $k = 1$. We first apply the triangle inequality to the left-hand side of (4). Then we use Pinsker's inequality \parencite[Lemma 11.6.1]{cover2012elements} to bound the statistical distance in terms of the Kullback–Leibler divergence, which we can in turn bound by using Lemma 3.5:
$$\begin{aligned}
\mathop{\mathbb{E}}_{i \sim [t]} d\big( f(\mathcal{U}[i \to 0]),\ f(\mathcal{U}[i \to 1]) \big)
&\le \mathop{\mathbb{E}}_{i \sim [t]} \sum_{b \in \{0,1\}} d\big( f(\mathcal{U}[i \to b]),\ f(\mathcal{U}) \big) && \text{(triangle inequality)}\\
&= 2 \cdot \mathop{\mathbb{E}}_{i,\, b}\ d\big( f(\mathcal{U}[i \to b]),\ f(\mathcal{U}) \big)\\
&\le 2 \cdot \mathop{\mathbb{E}}_{i,\, b} \sqrt{ \tfrac{\ln 2}{2} \cdot \mathrm{KL}\big( f(\mathcal{U}[i \to b]) \,\big\|\, f(\mathcal{U}) \big) } && \text{(Pinsker's inequality)}\\
&\le 2 \cdot \sqrt{ \tfrac{\ln 2}{2} \cdot \mathop{\mathbb{E}}_{i,\, b} \mathrm{KL}\big( f(\mathcal{U}[i \to b]) \,\big\|\, f(\mathcal{U}) \big) } && \text{(Jensen's inequality)}\\
&\le 2 \cdot \sqrt{ \tfrac{\ln 2}{2} \cdot \tfrac{\log_2 |R|}{t} } \;=\; \sqrt{ \tfrac{2 \ln 2 \cdot \log_2 |R|}{t} } && \text{(Lemma 3.5)}\,.
\end{aligned}$$
The equality above uses the fact that the $i$-th input bit, averaged over the uniformly random value $b$, is the uniform distribution on $\{0,1\}$.
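The chain of inequalities can be sanity-checked exhaustively for small parameters. The sketch below measures the average noise sensitivity $\mathbb{E}_i\, d\big(f(\mathcal{U}[i \to 0]), f(\mathcal{U}[i \to 1])\big)$ of an arbitrary compressive map and compares it against the bound $\sqrt{2 \ln 2 \cdot \log_2|R| / t}$; both the map and the parameter choices are illustrative assumptions of this sketch:

```python
from itertools import product
from math import log, sqrt

t, R = 10, 2                     # input length and range size of the map
f = lambda x: x[0] & x[1]        # an arbitrary compressive map {0,1}^t -> {0,1}

def dist(points):
    """Exact output distribution of f over the given equally likely inputs."""
    d = {}
    for x in points:
        d[f(x)] = d.get(f(x), 0.0) + 1.0 / len(points)
    return d

def sd(D1, D2):
    return 0.5 * sum(abs(D1.get(v, 0) - D2.get(v, 0)) for v in set(D1) | set(D2))

cube = list(product((0, 1), repeat=t))
total = 0.0
for i in range(t):
    fixed = [dist([x for x in cube if x[i] == b]) for b in (0, 1)]
    total += sd(fixed[0], fixed[1])
avg_sensitivity = total / t

bound = sqrt(2 * log(2) * log(R, 2) / t)   # right-hand side of (4)
assert avg_sensitivity <= bound + 1e-12
print(avg_sensitivity, bound)
```

For this map only the two coordinates that $f$ reads contribute, so the average sensitivity is $0.1$, comfortably below the bound of about $0.37$.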
4 Extension: Ruling out OR-compressions of size $O(t \log t)$
In this section we tweak the proof of Theorem 3 so that it works even when the $t$ instances of $L$ are mapped to an instance of $L'$ of size $O(t \log t)$. The drawback is that we cannot handle positive constant error probabilities for randomized relaxed OR-compression anymore. For simplicity, we restrict ourselves to deterministic relaxed OR-compressions throughout this section.
Theorem (compressive version of Drucker’s theorem).
Let $L$ and $L'$ be languages.
Let $t = t(n)$ be a polynomial.
Assume there exists a deterministic polynomial-time algorithm
$\mathcal{A} \colon \binom{\{0,1\}^n}{\le t(n)} \to \{0,1\}^{\le O(t \log t)}$
such that, for all $x \in \binom{\{0,1\}^n}{\le t(n)}$,

– if $x \subseteq \overline{L}$, then $\mathcal{A}(x) \in \overline{L'}$, and

– if $|x \cap L| = 1$, then $\mathcal{A}(x) \in L'$.

Then $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$.
This is Theorem 7.1 in \textcite{Druckerfull}. The main reason why the proof in §3 breaks down for compressions to size $m$ is that the bound on the statistical distance in Lemma 3.2 becomes trivial. This happens already when $m = \Theta(t)$. On the other hand, the bound that Lemma 3.5 gives for the Kullback–Leibler divergence remains nontrivial even for $m = O(t \log t)$. To see this, note that the largest possible divergence between the conditional and the unconditional output distribution, that is, the divergence without any bound on the mutual information between input and output, is roughly $m$, and the bound that Lemma 3.5 yields for $k = 1$ is logarithmic in that.
Inspecting the proof of Lemma 3.2, we realize that the loss in meaningfulness stems from Pinsker's inequality, which becomes trivial in the parameter range under consideration. Luckily, there is a different inequality between the statistical distance and the Kullback–Leibler divergence, Vajda's inequality, that still gives a nontrivial bound on the statistical distance when the divergence is large. The inequality works out such that if the divergence is logarithmic, then the statistical distance is an inverse polynomial away from $1$. We obtain the following analogue of Lemma 3.2.
Lemma.
Let $X_1, \dots, X_t$ be independent uniform distributions on some finite set $\Sigma$, and write $X = (X_1, \dots, X_t)$. Then, for all randomized mappings $f$ with suitably bounded range, we have
(6)
Here the one random variable samples $X_i$ independently for each $i$ as usual, and the other samples from the distribution conditioned on the event that the selected coordinates attain the fixed values. The remaining notation is as before, that is, the conditioned coordinates are fixed.
We defer the proof of the lemma to the end of this section and discuss now how to use it to obtain the stronger result for compressions of size $O(t \log t)$. First note that we could not have directly used Lemma 4 in place of Lemma 3.2 in the proof of the main result, Theorem 3. This is because for $k = 1$, the right-hand side of (6) becomes bigger than $1$ and thus trivial. In fact, this is the reason why we formulated Lemma 3.5 for general $k$. We need to choose $k$ large enough to get anything meaningful out of (6).
4.1 A different hypergraph tournament
To be able to work with larger $k$, we need to define the hypergraph tournament in a different way; not much is changing on a conceptual level, but the required notation becomes a bit less natural. We do this as follows.
Lemma.
Let $k \le t$. There exists a large enough constant $c$ such that the following holds: For all sets $S$ of no-instances with $|S| \ge c \cdot k$, there exists an element $x \in S$ so that
(7)
where the one distribution samples a $k$-element subset of $S$, and the other is the same distribution conditioned on the event that the sampled set does not contain $x$. The proof of this lemma is analogous to the proof of Lemma 3.2.
Proof.
We choose $c$ as a constant that is large enough so that the right-hand side of (6) becomes bounded away from $1$ by an inverse polynomial. We partition $S$ into blocks and define the randomized mapping $f$ so that it applies $\mathcal{A}$ to the set of elements indicated by its input. We apply Lemma 4 to $f$ and obtain indices minimizing the statistical distance on the left-hand side of (6); translating these indices back to elements of $S$, we obtain the claim.
4.2 Proof of Theorem 4
Proof (of Theorem 4).
As in the proof of Theorem 3, we construct a deterministic reduction from $L$ to a conjunction of polynomially many instances of the statistical distance problem $\mathrm{SD}_{\alpha,\beta}$, but this time we let $\alpha = 1$ and let $\beta$ be equal to the right-hand side of (7). Since there is a polynomial gap between $\alpha$ and $\beta$, Theorem 2.3 implies that $\mathrm{SD}_{\alpha,\beta}$ is contained in the intersection of $\mathsf{NP}/\mathsf{poly}$ and $\mathsf{coNP}/\mathsf{poly}$. Since the intersection is closed under polynomial conjunctions, we obtain $L \in \mathsf{NP}/\mathsf{poly} \cap \mathsf{coNP}/\mathsf{poly}$. Thus it remains to find such a reduction.
To construct the advice at input length $n$, we use Lemma 4.1, which guarantees that the following hypergraph tournament is well-defined: