Properly Learning Poisson Binomial Distributions
in Almost Polynomial Time

Ilias Diakonikolas
University of Edinburgh
ilias.d@ed.ac.uk.
Supported by EPSRC grant EP/L021749/1 and a Marie Curie Career Integration grant.
   Daniel M. Kane
University of California, San Diego
dakane@cs.ucsd.edu.
Some of this work was performed while visiting the University of Edinburgh.
   Alistair Stewart
University of Edinburgh
stewart.al@gmail.com.
Supported by EPSRC grant EP/L021749/1.
Abstract

We give an algorithm for properly learning Poisson binomial distributions. A Poisson binomial distribution (PBD) of order $n \in \mathbb{Z}_+$ is the discrete probability distribution of the sum of $n$ mutually independent Bernoulli random variables. Given $\widetilde{O}(1/\epsilon^2)$ samples from an unknown PBD $\mathbf{P}$, our algorithm runs in time $(1/\epsilon)^{O(\log\log(1/\epsilon))}$, and outputs a hypothesis PBD that is $\epsilon$-close to $\mathbf{P}$ in total variation distance. The sample complexity of our algorithm is known to be nearly-optimal, up to logarithmic factors, as established in previous work [DDS12]. However, the previously best known running time for properly learning PBDs [DDS12, DKS15b] was $(1/\epsilon)^{O(\log(1/\epsilon))}$, and was essentially obtained by enumeration over an appropriate $\epsilon$-cover. We remark that the running time of this cover-based approach cannot be improved, as any $\epsilon$-cover for the space of PBDs has size $(1/\epsilon)^{\Omega(\log(1/\epsilon))}$ [DKS15b].

As one of our main contributions, we provide a novel structural characterization of PBDs, showing that any PBD $\mathbf{P}$ is $\epsilon$-close to another PBD $\mathbf{Q}$ with $O(\log(1/\epsilon))$ distinct parameters. More precisely, we prove that, for all $\epsilon > 0$, there exists an explicit collection $\mathcal{M}$ of $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ vectors of multiplicities, such that for any PBD $\mathbf{P}$ there exists a PBD $\mathbf{Q}$ with $O(\log(1/\epsilon))$ distinct parameters whose multiplicities are given by some element of $\mathcal{M}$, such that $\mathbf{Q}$ is $\epsilon$-close to $\mathbf{P}$. Our proof combines tools from Fourier analysis and algebraic geometry.

Our approach to the proper learning problem is as follows: Starting with an accurate non-proper hypothesis, we fit a PBD to this hypothesis. More specifically, we essentially start with the hypothesis computed by the computationally efficient non-proper learning algorithm in our recent work [DKS15b]. Our aforementioned structural characterization allows us to reduce the corresponding fitting problem to a collection of systems of low-degree polynomial inequalities. We show that each such system can be solved in time $(1/\epsilon)^{O(\log\log(1/\epsilon))}$, which yields the overall running time of our algorithm.

1 Introduction

The Poisson binomial distribution (PBD) of order $n$ is the discrete probability distribution of a sum of $n$ mutually independent Bernoulli random variables. PBDs comprise one of the most fundamental nonparametric families of discrete distributions. They have been extensively studied in probability and statistics [Poi37, Che52, Hoe63, DP09b], and are ubiquitous in various applications (see, e.g., [CL97] and references therein). Recent years have witnessed a flurry of research activity on PBDs and generalizations from several perspectives of theoretical computer science, including learning [DDS12, DDO13, DKS15b, DKT15, DKS15a], pseudorandomness and derandomization [GMRZ11, BDS12, De15, GKM15], property testing [AD15, CDGR15], and computational game theory [DP07, DP09a, DP14a, DP14b, GT14].

Despite their seeming simplicity, PBDs have surprisingly rich structure, and basic questions about them can be unexpectedly challenging to answer. We cannot do justice to the probability literature studying the following question: Under what conditions can we approximate PBDs by simpler distributions? See Section 1.2 of [DDS15] for a summary. In recent years, a number of works in theoretical computer science [DP07, DP09a, DDS12, DP14a, DKS15b] have studied, and essentially resolved, the following questions: Is there a small set of distributions that approximately cover the set of all PBDs? What is the number of samples required to learn an unknown PBD?

We study the following natural computational question: Given independent samples from an unknown PBD $\mathbf{P}$, can we efficiently find a hypothesis PBD $\mathbf{Q}$ that is close to $\mathbf{P}$ in total variation distance? That is, we are interested in properly learning PBDs, a problem that has resisted recent efforts [DDS12, DKS15b] at designing efficient algorithms. In this work, we propose a new approach to this problem that leads to a significantly faster algorithm than was previously known. At a high level, we establish an interesting connection of this problem to algebraic geometry and polynomial optimization. By building on this connection, we provide a new structural characterization of the space of PBDs, on which our algorithm relies, that we believe is of independent interest. In the following, we motivate and describe our results in detail, and elaborate on our ideas and techniques.

Distribution Learning. We recall the standard definition of learning an unknown probability distribution from samples [KMR94, DL01]: Given access to independent samples drawn from an unknown distribution $\mathbf{P}$ in a given family $\mathcal{C}$, and an error parameter $\epsilon > 0$, a learning algorithm for $\mathcal{C}$ must output a hypothesis $\mathbf{H}$ such that, with probability at least $9/10$, the total variation distance between $\mathbf{H}$ and $\mathbf{P}$ is at most $\epsilon$. The performance of a learning algorithm is measured by its sample complexity (the number of samples drawn from $\mathbf{P}$) and its computational complexity.

In non-proper learning (density estimation), the goal is to output an approximation to the target distribution without any constraints on its representation. In proper learning, we require in addition that the hypothesis $\mathbf{H}$ is a member of the family $\mathcal{C}$. Note that these two notions of learning are essentially equivalent in terms of sample complexity (given any accurate hypothesis, we can do a brute-force search to find its closest distribution in $\mathcal{C}$), but not necessarily equivalent in terms of computational complexity. A typically more demanding notion of learning is that of parameter estimation. The goal here is to identify the parameters of the unknown model, e.g., the means of the individual Bernoulli components for the case of PBDs, up to a desired accuracy $\epsilon$.

Discussion. In many learning situations, it is desirable to compute a proper hypothesis, i.e., one that belongs to the underlying distribution family . A proper hypothesis is typically preferable due to its interpretability. In the context of distribution learning, a practitioner may not want to use a density estimate, unless it is proper. For example, one may want the estimate to have the properties of the underlying family, either because this reflects some physical understanding of the inference problem, or because one might only be using the density estimate as the first stage of a more involved procedure. While parameter estimation may arguably provide a more desirable guarantee than proper learning in some cases, its sample complexity is typically prohibitively large.

For the class of PBDs, we show (Proposition 14, Appendix A) that parameter estimation requires $2^{\Omega(1/\epsilon)}$ samples, for PBDs with $n = O(1/\epsilon)$ Bernoulli components, where $\epsilon > 0$ is the accuracy parameter. In contrast, the sample complexity of (non-)proper learning is known to be $\widetilde{\Theta}(1/\epsilon^2)$ [DDS12]. Hence, proper learning serves as an attractive middle ground between non-proper learning and parameter estimation. Ideally, one could obtain a proper learner for a given family whose running time matches that of the best non-proper algorithm.

Recent work by the authors [DKS15b] has characterized the computational complexity of non-properly learning PBDs, which was shown to be $\widetilde{O}(1/\epsilon^2)$, i.e., nearly-linear in the sample complexity of the problem. Motivated by this progress, a natural research direction is to obtain a computationally efficient proper learning algorithm, i.e., one that runs in time $\mathrm{poly}(1/\epsilon)$ and outputs a PBD as its hypothesis. Besides practical applications, we feel that this is an interesting algorithmic problem, with intriguing connections to algebraic geometry and polynomial optimization (as we point out in this work). We remark that several natural approaches fall short of yielding a polynomial-time algorithm. More specifically, proper learning of PBDs can be phrased in a number of ways as a structured non-convex optimization problem, but it is unclear whether any such formulation leads to a polynomial-time algorithm.

This work is part of a broader agenda of systematically investigating the computational complexity of proper distribution learning. We believe that this is a fundamental goal that warrants study for its own sake. The complexity of proper learning has been extensively investigated in the supervised setting of PAC learning Boolean functions [KV94, Fel15], with several algorithmic and computational intractability results obtained in the past couple of decades. In sharp contrast, very little is known about the complexity of proper learning in the unsupervised setting of learning probability distributions.

1.1 Preliminaries.

For $m, n \in \mathbb{Z}$ with $m \leq n$, we will denote $[n] := \{1, \ldots, n\}$ and $[m, n] := \{m, m+1, \ldots, n\}$. For a distribution $\mathbf{P}$ supported on $[m, n]$, $m \leq n$, we write $\mathbf{P}(i)$ to denote the value $\Pr[\mathbf{P} = i]$ of the probability mass function (pmf) at point $i$. The total variation distance between two distributions $\mathbf{P}$ and $\mathbf{Q}$ supported on a finite domain $A$ is $d_{TV}(\mathbf{P}, \mathbf{Q}) := \max_{S \subseteq A} |\mathbf{P}(S) - \mathbf{Q}(S)| = (1/2) \cdot \sum_{x \in A} |\mathbf{P}(x) - \mathbf{Q}(x)|$. If $X$ and $Y$ are random variables, their total variation distance is defined as the total variation distance between their distributions.

Poisson Binomial Distribution. A Poisson binomial distribution of order $n \in \mathbb{Z}_+$, or $n$-PBD, is the discrete probability distribution of the sum $\sum_{i=1}^{n} X_i$ of $n$ mutually independent Bernoulli random variables $X_1, \ldots, X_n$. An $n$-PBD $\mathbf{P}$ can be represented uniquely as the vector of its parameters, i.e., as $(p_1, \ldots, p_n)$, where we can assume that $0 \leq p_1 \leq p_2 \leq \cdots \leq p_n \leq 1$. To go from $\mathbf{P}$ to its corresponding vector, we find a collection of mutually independent Bernoullis $X_1, \ldots, X_n$ such that $\sum_{i=1}^{n} X_i$ is distributed according to $\mathbf{P}$ with $\mathbf{E}[X_1] \leq \cdots \leq \mathbf{E}[X_n]$, and we set $p_i = \mathbf{E}[X_i]$ for all $i$. An equivalent unique representation of an $n$-PBD with parameter vector $(p_1, \ldots, p_n)$ is via the vector of its distinct parameters $\hat{p}_1 < \hat{p}_2 < \cdots < \hat{p}_{\kappa}$, together with their corresponding integer multiplicities $m_1, \ldots, m_{\kappa}$. Note that $\kappa \leq n$, $m_i \geq 1$, and $\sum_{i=1}^{\kappa} m_i = n$. This representation will be crucial for the results and techniques of this paper.
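To make the two representations concrete, here is a minimal Python sketch (ours, for illustration only; the function names are not from the paper) that builds the pmf of a PBD from its distinct-parameter representation and evaluates total variation distance between two pmfs:

```python
import numpy as np

def pbd_pmf(distinct_params, multiplicities):
    """pmf of a PBD in which parameter p appears with multiplicity m."""
    pmf = np.array([1.0])  # point mass at 0 (the empty sum)
    for p, m in zip(distinct_params, multiplicities):
        for _ in range(m):
            pmf = np.convolve(pmf, [1.0 - p, p])  # add one Bernoulli(p)
    return pmf  # pmf[i] = Pr[X_1 + ... + X_n = i]

def tv_distance(P, Q):
    """Total variation distance between two pmfs on {0, 1, 2, ...}."""
    n = max(len(P), len(Q))
    P = np.pad(P, (0, n - len(P)))
    Q = np.pad(Q, (0, n - len(Q)))
    return 0.5 * np.abs(P - Q).sum()
```

For example, `pbd_pmf([0.1, 0.5], [2, 1])` is the pmf of the $3$-PBD with parameter vector $(0.1, 0.1, 0.5)$.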

Discrete Fourier Transform. For $x \in \mathbb{R}$, we will denote $e(x) := \exp(2\pi i x)$, where $i = \sqrt{-1}$. The Discrete Fourier Transform (DFT) modulo $M$ of a function $F : [m, n] \to \mathbb{C}$ is the function $\widehat{F} : \{0, 1, \ldots, M-1\} \to \mathbb{C}$ defined as $\widehat{F}(\xi) := \sum_{x=m}^{n} e(-\xi x / M) F(x)$, for integers $0 \leq \xi \leq M-1$. The DFT modulo $M$, $\widehat{\mathbf{P}}$, of a distribution $\mathbf{P}$ is the DFT modulo $M$ of its probability mass function. The inverse DFT modulo $M$ onto the range $[m, m+M-1]$ of $\widehat{F} : \{0, 1, \ldots, M-1\} \to \mathbb{C}$ is the function $F : [m, m+M-1] \to \mathbb{C}$ defined by $F(x) := \frac{1}{M} \sum_{\xi=0}^{M-1} e(\xi x / M) \widehat{F}(\xi)$, for $x \in [m, m+M-1]$. The $L_2$ norm of the DFT is defined as $\|\widehat{F}\|_2 := \sqrt{\frac{1}{M} \sum_{\xi=0}^{M-1} |\widehat{F}(\xi)|^2}$.
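The following sketch (ours; it fixes the sign convention $e(x) = \exp(2\pi i x)$ as above, which may differ from other sources by conjugation) implements the DFT modulo $M$ and its inverse for a pmf stored as an array:

```python
import numpy as np

def dft_mod_M(pmf, start, M):
    """DFT modulo M of a pmf supported on {start, ..., start + len(pmf) - 1}:
    hatF(xi) = sum_x F(x) * exp(-2*pi*i*xi*x / M), for xi = 0, ..., M - 1."""
    xs = start + np.arange(len(pmf))
    xis = np.arange(M)
    return np.exp(-2j * np.pi * np.outer(xis, xs) / M) @ pmf

def inverse_dft_mod_M(hatF, start):
    """Inverse DFT modulo M = len(hatF) onto {start, ..., start + M - 1}."""
    M = len(hatF)
    xs = start + np.arange(M)
    xis = np.arange(M)
    return np.exp(2j * np.pi * np.outer(xs, xis) / M) @ hatF / M
```

Provided the pmf's support fits in a window of length $M$, `inverse_dft_mod_M(dft_mod_M(pmf, s, M), s)` recovers the pmf up to floating-point error.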

1.2 Our Results and Comparison to Prior Work.

We are ready to formally describe the main contributions of this paper. As our main algorithmic result, we obtain a near-sample-optimal and almost polynomial-time algorithm for properly learning PBDs:

Theorem 1 (Proper Learning of PBDs).

For all $n \in \mathbb{Z}_+$ and $\epsilon > 0$, there is a proper learning algorithm for $n$-PBDs with the following performance guarantee: Let $\mathbf{P}$ be an unknown $n$-PBD. The algorithm uses $\widetilde{O}(1/\epsilon^2)$ samples from $\mathbf{P}$, runs in time $(1/\epsilon)^{O(\log\log(1/\epsilon))} \cdot \log n$ (we work in the standard “word RAM” model, in which basic arithmetic operations on $O(\log n)$-bit integers are assumed to take constant time), and outputs (a succinct description of) an $n$-PBD $\mathbf{Q}$ such that, with probability at least $9/10$, it holds that $d_{TV}(\mathbf{Q}, \mathbf{P}) \leq \epsilon$.

We now provide a comparison of Theorem 1 to previous work. The problem of learning PBDs was first explicitly considered by Daskalakis et al. [DDS12], who gave two main results: (i) a non-proper learning algorithm with sample complexity and running time $\widetilde{O}(1/\epsilon^3)$, and (ii) a proper learning algorithm with sample complexity $\widetilde{O}(1/\epsilon^2)$ and running time $(1/\epsilon)^{O(\log^2(1/\epsilon))}$. In recent work [DKS15b], the authors of the current paper obtained a near-optimal sample and time algorithm to non-properly learn a more general family of discrete distributions (containing PBDs). For the special case of PBDs, the aforementioned work [DKS15b] yields the following implications: (i) a non-proper learning algorithm with sample and time complexity $\widetilde{O}(1/\epsilon^2)$, and (ii) a proper learning algorithm with sample complexity $\widetilde{O}(1/\epsilon^2)$ and running time $(1/\epsilon)^{O(\log(1/\epsilon))}$. Prior to this paper, this was the fastest algorithm for properly learning PBDs. Hence, Theorem 1 represents a super-polynomial improvement in the running time, while still using a near-optimal sample size.

In addition to obtaining a significantly more efficient algorithm, the proof of Theorem 1 offers a novel approach to the problem of properly learning PBDs. The proper algorithms of [DDS12, DKS15b] exploit the cover structure of the space of PBDs, and (essentially) proceed by running an appropriate tournament procedure over an $\epsilon$-cover (see, e.g., Lemma 10 in [DDS15]). (Note that any $\epsilon$-cover for the space of $n$-PBDs has size $\Omega(n)$. However, for the task of properly learning PBDs, by a simple (known) reduction, one can assume without loss of generality that $n = \mathrm{poly}(1/\epsilon)$. Hence, the tournament-based algorithm only needs to consider $\epsilon$-covers over PBDs with $\mathrm{poly}(1/\epsilon)$ Bernoulli components.) This cover-based approach, when applied to an $\epsilon$-covering set of size $N$, clearly has runtime $\Omega(N)$, and can be easily implemented in time $O(N \log N / \epsilon^2)$. [DDS12] applies the cover-based approach to the $\epsilon$-cover construction of [DP14a], which has size $n^2 + n \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}$, while [DKS15b] proves and uses a new cover construction of size $n^2 + n \cdot (1/\epsilon)^{O(\log(1/\epsilon))}$. Observe that if there existed an explicit $\epsilon$-cover of size $\mathrm{poly}(n/\epsilon)$, the aforementioned cover-based approach would immediately yield a $\mathrm{poly}(n/\epsilon)$-time proper learning algorithm. Perhaps surprisingly, it was shown in [DKS15b] that any $\epsilon$-cover for $n$-PBDs with $n = \Omega(\log(1/\epsilon))$ Bernoulli coordinates has size $(1/\epsilon)^{\Omega(\log(1/\epsilon))}$. In conclusion, the cover-based approach for properly learning PBDs inherently leads to runtime $(1/\epsilon)^{\Omega(\log(1/\epsilon))}$.

In this work, we circumvent the cover size lower bound by establishing a new structural characterization of the space of PBDs. Very roughly speaking, our structural result allows us to reduce the proper learning problem to the case that the underlying PBD has $O(\log(1/\epsilon))$ distinct parameters. Indeed, as a simple corollary of our main structural result (Theorem 4 in Section 2), we obtain the following:

Theorem 2 (A “Few” Distinct Parameters Suffice).

For all $n \in \mathbb{Z}_+$ and $\epsilon > 0$ the following holds: For any $n$-PBD $\mathbf{P}$, there exists an $n$-PBD $\mathbf{Q}$ with $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon$ such that $\mathbf{Q}$ has $O(\log(1/\epsilon))$ distinct parameters.

We note that in subsequent work [DKS15a] the authors generalize the above theorem to Poisson multinomial distributions.

Remark. We remark that Theorem 2 is quantitatively tight, i.e., $\Omega(\log(1/\epsilon))$ distinct parameters are in general necessary to $\epsilon$-approximate PBDs. This follows directly from the explicit cover lower bound construction of [DKS15b].

We view Theorem 2 as a natural structural result for PBDs. Alas, its statement does not quite suffice for our algorithmic application. While Theorem 2 guarantees that $O(\log(1/\epsilon))$ distinct parameters are enough to consider for an $\epsilon$-approximation, it gives no information on the multiplicities these parameters may have. In particular, the upper bound on the number of different combinations of multiplicities one can derive from it is $n^{O(\log(1/\epsilon))}$, which is not strong enough for our purposes. The following stronger structural result (see Theorem 4 and Lemma 5 for detailed statements) is critical for our improved proper algorithm:

Theorem 3 (A “Few” Multiplicities and Distinct Parameters Suffice).

For all $n \in \mathbb{Z}_+$ and $\epsilon > 0$ the following holds: There exists an explicit collection $\mathcal{M}$ of $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ vectors of multiplicities, computable in $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ time, so that for any $n$-PBD $\mathbf{P}$ with variance $\mathrm{poly}(1/\epsilon)$ there exists a PBD $\mathbf{Q}$ with $O(\log(1/\epsilon))$ distinct parameters whose multiplicities are given by some element of $\mathcal{M}$, such that $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon$.

Now suppose we would like to properly learn an unknown PBD with $O(\log(1/\epsilon))$ distinct parameters and known multiplicities for each parameter. Even for this very restricted subset of PBDs, the construction of [DKS15b] implies a cover lower bound of $(1/\epsilon)^{\Omega(\log(1/\epsilon))}$. To handle such PBDs, we combine ingredients from Fourier analysis and algebraic geometry with careful Taylor series approximations, to construct an appropriate system of low-degree polynomial inequalities whose solution approximately recovers the unknown distinct parameters.

In the following subsection, we provide a detailed intuitive explanation of our techniques.

1.3 Techniques.

The starting point of this work lies in the non-proper learning algorithm from our recent work [DKS15b]. Roughly speaking, our new proper algorithm can be viewed as a two-step process: We first compute an accurate non-proper hypothesis $\mathbf{H}$ using the algorithm in [DKS15b], and we then post-process $\mathbf{H}$ to find a PBD that is close to it. We note that the non-proper hypothesis output by [DKS15b] is represented succinctly via its Discrete Fourier Transform; this property is crucial for the computational complexity of our proper algorithm. (We note that the description of our proper algorithm and its analysis, presented in Section 3, are entirely self-contained. The above description is only to provide intuition.)

We now proceed to explain the connection in detail. The crucial fact, established in [DKS15b] for a more general setting, is that the Fourier transform of a PBD has small effective support (in particular, the effective support of the Fourier transform has size roughly inverse to that of the effective support of the PBD itself). Hence, in order to learn an unknown PBD $\mathbf{P}$, it suffices to find another PBD, $\mathbf{Q}$, with similar mean and standard deviation to $\mathbf{P}$, so that the Fourier transform of $\mathbf{Q}$ approximates the Fourier transform of $\mathbf{P}$ on this small region. (The non-proper algorithm of [DKS15b] for PBDs essentially outputs the empirical DFT of $\mathbf{P}$ over its effective support.)

Note that the Fourier transform of a PBD is the product of the Fourier transforms of its individual component variables. By Taylor expanding the logarithm of the Fourier transform, we can write the log Fourier transform of a PBD as a Taylor series whose coefficients are related to the moments of the parameters of the PBD (see Equation (2)). We show that for our purposes it suffices to find a PBD $\mathbf{Q}$ so that the first $O(\log(1/\epsilon))$ moments of its parameters approximate the corresponding moments of $\mathbf{P}$. Unfortunately, we do not actually know the moments of $\mathbf{P}$, but since we can easily approximate the Fourier transform of $\mathbf{P}$ from samples, we can derive conditions that are sufficient for the moments of $\mathbf{Q}$ to satisfy. This step essentially gives us a system of polynomial inequalities, in the moments of the parameters of $\mathbf{Q}$, that we need to satisfy.
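As an illustration of this step (ours, not the paper's code; the sign convention follows Section 1.1), the truncated series for the log Fourier transform depends on the parameters only through their power sums $S_t = \sum_j p_j^t$:

```python
import numpy as np

def power_sums(params, K):
    """First K power sums S_t = sum_j p_j**t (the 'moments of the parameters')."""
    params = np.asarray(params, dtype=float)
    return [float((params ** t).sum()) for t in range(1, K + 1)]

def log_dft_truncated(S, xi, M):
    """Truncated Taylor series of log hatP(xi) = sum_j log(1 + p_j * z),
    with z = exp(-2*pi*i*xi/M) - 1:  sum_t (-1)**(t+1)/t * z**t * S_t.
    Converges when max_j p_j * |z| < 1, e.g. for all parameters below 1/2;
    parameters near 1 are handled symmetrically via 1 - p_j."""
    z = np.exp(-2j * np.pi * xi / M) - 1.0
    return sum((-1) ** (t + 1) / t * z ** t * S[t - 1]
               for t in range(1, len(S) + 1))
```

Hence approximately matching the first $K$ power sums of the parameters approximately matches the log Fourier transform, which is exactly the kind of condition the system above encodes.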

A standard way to solve such a polynomial system is by appealing to Renegar's algorithm [Ren92b, Ren92a], which allows us to solve a system of degree-$d$ polynomial inequalities in $v$ real variables in time roughly $d^{O(v)}$. In our case, the degree $d$ will be at most poly-logarithmic in $1/\epsilon$, but the number of variables corresponds to the number of parameters of $\mathbf{Q}$, which is $n$. Hence, this approach is insufficient to obtain a faster proper algorithm.

To circumvent this obstacle, we show that it actually suffices to consider only PBDs with $O(\log(1/\epsilon))$ many distinct parameters (Theorem 2). To prove this statement, we use a recent result from algebraic geometry due to Riener [Rie11] (Theorem 6), which can be used to relate the number of distinct parameters of a solution of a polynomial system to the degree of the polynomials involved. Note that the problem of matching moments can be expressed as a system of polynomial equations, where each polynomial has degree $O(\log(1/\epsilon))$. We can thus find a PBD $\mathbf{Q}$, which has the same first $O(\log(1/\epsilon))$ moments as $\mathbf{P}$, with only $O(\log(1/\epsilon))$ distinct parameters, such that $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon$. For PBDs with $O(\log(1/\epsilon))$ distinct parameters and known multiplicities for these parameters, we can reduce the runtime of solving the polynomial system to $(1/\epsilon)^{O(\log\log(1/\epsilon))}$.

Unfortunately, the above structural result is not strong enough, since in order to set up an appropriate system of polynomial inequalities for the parameters of $\mathbf{Q}$, we must first guess the multiplicities with which the distinct parameters appear. A simple counting argument shows that there are roughly $n^{O(\log(1/\epsilon))}$ ways to choose these multiplicities. To overcome this second obstacle, we need the following refinement of our structural result on distinct parameters: We divide the parameters of $\mathbf{Q}$ into categories based on how close they are to $0$ or $1$. We show that there is a tradeoff between the number of parameters in a given category and the number of distinct parameters in that category (see Theorem 4). With this more refined result in hand, we show that there are only $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ many possible collections of multiplicities that need to be considered (see Lemma 5).

Given this stronger structural characterization, our proper learning algorithm is fairly simple. We enumerate over the set of possible collections of multiplicities described above. For each such collection, we set up a system of polynomial inequalities in the distinct parameters of $\mathbf{Q}$, so that solutions to the system correspond to PBDs whose distinct parameters have the specified multiplicities and which are also $O(\epsilon)$-close to $\mathbf{P}$. For each system, we attempt to solve it using Renegar's algorithm. Since there exists at least one PBD close to $\mathbf{P}$ with such a set of multiplicities, we are guaranteed to find a solution, which in turn must describe a PBD close to $\mathbf{P}$.
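Schematically, the enumerate-and-fit structure looks as follows. The sketch below is ours and heavily simplified: it replaces Renegar's algorithm with a numerical least-squares fit of the parameters to the empirical DFT (so it illustrates the structure of the loop, not the actual guarantees), and the caller supplies the candidate multiplicity vectors of Lemma 5 plus an array `freqs` of integer frequencies:

```python
import numpy as np
from scipy.optimize import least_squares

def empirical_dft(samples, M, freqs):
    """Empirical DFT modulo M of the sample distribution at each frequency."""
    samples = np.asarray(samples)
    return np.array([np.exp(-2j * np.pi * xi * samples / M).mean() for xi in freqs])

def fit_pbd_to_dft(samples, mults, M, freqs):
    """Fit one distinct parameter per multiplicity to the empirical DFT."""
    target = empirical_dft(samples, M, freqs)
    def residual(ps):
        vals = np.ones(len(freqs), dtype=complex)
        for p, m in zip(ps, mults):
            vals *= (1 - p + p * np.exp(-2j * np.pi * freqs / M)) ** m
        r = vals - target
        return np.concatenate([r.real, r.imag])
    fit = least_squares(residual, x0=np.full(len(mults), 0.5), bounds=(0.0, 1.0))
    return fit.x, fit.cost

def proper_learn_pbd(samples, candidate_multiplicities, M, freqs):
    """Try every candidate multiplicity vector; keep the best-fitting PBD."""
    best = min((fit_pbd_to_dft(samples, mults, M, freqs) + (mults,)
                for mults in candidate_multiplicities),
               key=lambda t: t[1])
    params, _, mults = best
    return params, mults
```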

One technical issue that arises in the above program occurs when $\mathrm{Var}[\mathbf{P}]$ is small. In this case, the effective support of the Fourier transform of $\mathbf{P}$ cannot be restricted to a small subset. This causes problems with the convergence of our Taylor expansion of the log Fourier transform for parameters near $1/2$. However, in this case only $O(\mathrm{Var}[\mathbf{P}])$ many parameters are not close to $0$ or $1$, and we can deal with such parameters separately.

1.4 Related Work.

Distribution learning is a classical problem in statistics with a rich history and extensive literature (see e.g., [BBBB72, DG85, Sil86, Sco92, DL01]). During the past couple of decades, a body of work in theoretical computer science has been studying these questions from a computational complexity perspective; see e.g., [KMR94, FM99, AK01, CGG02, VW02, FOS05, BS10, KMV10, MV10, DDS12, DDO13, CDSS14a, CDSS14b, ADLS15].

We remark that the majority of the literature has focused either on non-proper learning (density estimation) or on parameter estimation. Regarding proper learning, a number of recent works in the statistics community have given proper learners for structured distribution families, by using a maximum likelihood approach. See e.g., [DR09, GW09, Wal09, DW13, CS13, KS14, BD14] for the case of continuous log-concave densities. Alas, the computational complexity of these approaches has not been analyzed. Two recent works [ADK15, CDGR15] yield computationally efficient proper learners for discrete log-concave distributions, by using an appropriate convex formulation. Proper learning has also been recently studied in the context of mixture models [FOS05, DK14, SOAJ14, LS15]. Here, the underlying optimization problems are non-convex, and efficient algorithms are known only when the number of mixture components is small.

1.5 Organization.

In Section 2, we prove our main structural result, and in Section 3, we describe our algorithm and prove its correctness. In Section 4, we conclude with some directions for future research.

2 Main Structural Result

In this section, we prove our main structural results, thereby establishing Theorems 2 and 3. Our proofs rely on an analysis of the Fourier transform of PBDs, combined with recent results from algebraic geometry on the solution structure of systems of symmetric polynomial equations. We show the following:

Theorem 4.

Given any $n$-PBD $\mathbf{P}$ with $n = \mathrm{poly}(1/\epsilon)$, and any sufficiently small $\epsilon > 0$, there is an $n$-PBD $\mathbf{Q}$ with $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon$ such that $\mathbf{E}[\mathbf{Q}] = \mathbf{E}[\mathbf{P}]$ and $\mathrm{Var}[\mathbf{Q}] \leq \mathrm{Var}[\mathbf{P}]$, satisfying the following properties:

Let $k := \Theta(\log(1/\epsilon))$. Let $c_\ell := 2^{-2^{\ell}}$, for the integers $0 \leq \ell \leq L$, where $L = O(\log\log(1/\epsilon))$ is selected such that $c_L = O(\epsilon/(\mathrm{Var}[\mathbf{P}] + 1))$. Consider the partition of $(0, 1)$ into the following set of intervals: $I_0 = (c_1, c_0]$ with $c_0 = 1/2$, $I_1 = (c_2, c_1]$, \ldots, $I_{L-1} = (c_L, c_{L-1}]$, $I_L = (0, c_L]$; and their reflections $I'_\ell := \{1 - x : x \in I_\ell\}$, $0 \leq \ell \leq L$. Then we have the following:

  1. For each $\ell < L$, each of the intervals $I_\ell$ and $I'_\ell$ contains at most $O(\max(1, k/2^{\ell}))$ distinct parameters of $\mathbf{Q}$.

  2. $\mathbf{Q}$ has at most one parameter in each of the intervals $I_L$ and $I'_L$.

  3. The number of parameters of $\mathbf{Q}$ equal to $1$ is within an additive $O(\mathrm{Var}[\mathbf{Q}]/c_L)$ of $\mathbf{E}[\mathbf{Q}]$.

  4. For each $\ell < L$, each of the intervals $I_\ell$ and $I'_\ell$ contains at most $O(\mathrm{Var}[\mathbf{Q}]/c_{\ell+1})$ parameters of $\mathbf{Q}$.

Theorem 4 implies that one need only consider $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ different combinations of multiplicities:

Lemma 5.

For every $\epsilon > 0$ and $n$ as in Theorem 4, there exists an explicit set $\mathcal{M}$ of multisets of triples $(I_j, m_j, m'_j)$ so that

  1. For each element of $\mathcal{M}$ and each $j$, $I_j$ is either one of the intervals $I_\ell$ or $I'_\ell$ as in Theorem 4, or $\{0\}$ or $\{1\}$.

  2. For each element of $\mathcal{M}$, $\sum_j m'_j \leq n$.

  3. There exist an element $\{(I_j, m_j, m'_j)\}_j$ of $\mathcal{M}$ and a PBD $\mathbf{Q}$ as in the statement of Theorem 4 with $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon$ so that, for each $j$, $\mathbf{Q}$ has a parameter in $I_j$ of multiplicity between $m_j$ and $m'_j$, and no other parameters.

  4. $\mathcal{M}$ has size $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ and can be enumerated in $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ time.

This is proved in Appendix B.1 by a simple counting argument: we multiply the number of possible multiplicity choices over the intervals, which for each interval is at most the maximum number of parameters it may contain raised to the power of the maximum number of distinct parameters it may contain, giving $(1/\epsilon)^{O(\log\log(1/\epsilon))}$ possibilities.
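As a toy illustration of this count (ours; the per-interval budgets below are placeholders, not the paper's actual values), one can enumerate the multiplicity choices as a product over intervals:

```python
from itertools import product

def multiplicity_vectors(interval_budgets):
    """interval_budgets: list of (max_params, max_distinct) pairs, one per
    interval.  Yields one tuple of multiplicities per interval; the number of
    outputs is about the product over intervals of max_params ** max_distinct,
    mirroring the counting argument above."""
    per_interval = []
    for n_max, d_max in interval_budgets:
        choices, frontier = [()], [()]
        for _ in range(d_max):  # allow up to d_max distinct parameters
            frontier = [c + (m,) for c in frontier for m in range(1, n_max + 1)]
            choices = choices + frontier
        per_interval.append(choices)
    yield from product(*per_interval)  # one multiplicity tuple per interval
```

For instance, `sum(1 for _ in multiplicity_vectors([(3, 2), (2, 1)]))` returns $(1 + 3 + 9)(1 + 2) = 39$.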

We now proceed to prove Theorem 4. We will require the following result from algebraic geometry:

Theorem 6 (Part of Theorem 4.2 from [Rie11]).

Given symmetric polynomials $f, g_1, \ldots, g_m$ in the variables $x = (x_1, \ldots, x_n)$, let $d := \max(2, \lceil \deg(f)/2 \rceil, \deg(g_1), \ldots, \deg(g_m))$. Let $S := \{x \in \mathbb{R}^n : g_i(x) \geq 0 \text{ for } i = 1, \ldots, m\}$. Then, the minimum value of $f$ on $S$ is achieved by a point with at most $d$ distinct coordinates.

As an immediate corollary, we obtain the following:

Corollary 7.

If a set of multivariate symmetric polynomial equations $f_i(x) = 0$, $i = 1, \ldots, m$, with the degree of each $f_i$ being at most $d$, has a solution $x \in \mathbb{R}^n$, then it has a solution with at most $\max(2, d)$ distinct values of the variables in $x$.

The following lemma will be crucial:

Lemma 8.

Let $\epsilon > 0$ be sufficiently small. Let $\mathbf{P}$ and $\mathbf{Q}$ be $n$-PBDs, with $\mathbf{P}$ having parameters $p_1, \ldots, p_n$ and $\mathbf{Q}$ having parameters $q_1, \ldots, q_n$. Suppose furthermore that $\sigma^2 := \mathrm{Var}[\mathbf{P}] \geq 1$, and let $C$ be a sufficiently large constant. Suppose furthermore that, for $k = \Theta(\log(1/\epsilon))$ and for all positive integers $\ell$, it holds

$$\Big| \sum_{i=1}^{n} p_i^{\ell} - \sum_{i=1}^{n} q_i^{\ell} \Big| \;\leq\; \frac{\epsilon}{k} \left( \frac{\sigma}{C\sqrt{k}} \right)^{\ell}. \qquad (1)$$

Then $d_{TV}(\mathbf{P}, \mathbf{Q}) = O(\epsilon)$.

In practice, we shall only need to deal with a finite number of $\ell$'s, since we will be considering the case where all $p_i$ or $q_i$ that do not appear in pairs will have size less than $1/2$. Therefore, the size of the sum in question will be sufficiently small automatically for $\ell$ larger than $Ck$.
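The moment-matching phenomenon behind Lemma 8 is easy to observe numerically. In the toy check below (ours; it reuses `pbd_pmf` and `tv_distance` from the sketch in Section 1.1), the two parameter multisets agree in their first two power sums but not the third, and the resulting PBDs are already close in total variation:

```python
# P and Q below have matching parameter power sums S_1 = 0.8 and S_2 = 0.34,
# but different S_3; Lemma 8 predicts closeness in total variation.
P_params = [0.3, 0.5]
Q_params = [0.05, 0.20729, 0.54271]  # chosen so that S_1 and S_2 agree

for t in (1, 2, 3):
    print(t, sum(p ** t for p in P_params), sum(q ** t for q in Q_params))

P = pbd_pmf(P_params, [1] * len(P_params))
Q = pbd_pmf(Q_params, [1] * len(Q_params))
print("d_TV:", tv_distance(P, Q))  # roughly 0.02; matching more moments shrinks it
```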

The basic idea of the proof will be to show that the Fourier transforms of $\mathbf{P}$ and $\mathbf{Q}$ are close to each other. In particular, we will need to make use of the following intermediate lemma:

Lemma 9.

Let $\mathbf{P}$, $\mathbf{Q}$ be PBDs with $|\mathbf{E}[\mathbf{P}] - \mathbf{E}[\mathbf{Q}]| \leq \sigma$ and $\mathrm{Var}[\mathbf{P}], \mathrm{Var}[\mathbf{Q}] = \Theta(\sigma^2)$. Let $k = \Omega(\log(1/\epsilon))$ and $M = \Omega(\sqrt{k}\,\sigma)$ be positive integers with the implied constants sufficiently large. If $|\widehat{\mathbf{P}}(\xi) - \widehat{\mathbf{Q}}(\xi)| \leq \epsilon/k$ for all integers $\xi$ with $|\xi| \leq \sqrt{k}\,M/\sigma$, where the DFTs are taken modulo $M$, then $d_{TV}(\mathbf{P}, \mathbf{Q}) = O(\epsilon)$.

The proof of this lemma, which is given in Appendix B.2, is similar to (part of) the correctness analysis of the non-proper learning algorithm in [DKS15b].

Proof of Lemma 8.

We proceed by means of Lemma 9. We need only show that, for all $\xi$ with $|\xi| \leq \sqrt{k}\,M/\sigma$, we have $|\widehat{\mathbf{P}}(\xi) - \widehat{\mathbf{Q}}(\xi)| \leq \epsilon/k$. For this we note that

$$\widehat{\mathbf{P}}(\xi) = \prod_{i=1}^{n} \left( 1 + p_i \left( e(-\xi/M) - 1 \right) \right).$$

Taking a logarithm and Taylor expanding, we find that

$$\log \widehat{\mathbf{P}}(\xi) = \sum_{\ell=1}^{\infty} \frac{(-1)^{\ell+1}}{\ell} \left( e(-\xi/M) - 1 \right)^{\ell} \sum_{i=1}^{n} p_i^{\ell}. \qquad (2)$$

A similar formula holds for $\log \widehat{\mathbf{Q}}(\xi)$. Therefore, we have that

$$\left| \log \widehat{\mathbf{P}}(\xi) - \log \widehat{\mathbf{Q}}(\xi) \right| \;\leq\; \sum_{\ell=1}^{\infty} \frac{|e(-\xi/M) - 1|^{\ell}}{\ell} \left| \sum_{i=1}^{n} p_i^{\ell} - \sum_{i=1}^{n} q_i^{\ell} \right|,$$

which is at most

$$\sum_{\ell=1}^{\infty} \left( \frac{2\pi\sqrt{k}}{\sigma} \right)^{\ell} \cdot \frac{\epsilon}{k} \left( \frac{\sigma}{C\sqrt{k}} \right)^{\ell} \;=\; \frac{\epsilon}{k} \sum_{\ell=1}^{\infty} \left( \frac{2\pi}{C} \right)^{\ell} \;=\; O(\epsilon/k).$$

An application of Lemma 9 completes the proof. ∎

Proof of Theorem 4.

The basic idea of the proof is as follows. First, we will show that it is possible to modify $\mathbf{P}$ in order to satisfy condition 2 without changing its mean, without increasing its variance (or decreasing it by too much), and without changing it substantially in total variation distance. Next, for each of the other intervals $I_\ell$ or $I'_\ell$, we will show that it is possible to modify the parameters that $\mathbf{P}$ has in this interval so as to have the appropriate number of distinct parameters, without substantially changing the distribution in variation distance. Once this holds for each $\ell$, conditions 3 and 4 will follow automatically.

To begin with, we modify $\mathbf{P}$ to have at most one parameter in $I_L = (0, c_L]$ in the following way. We repeat the following procedure: So long as $\mathbf{P}$ has two parameters, $p_1$ and $p_2$, in $I_L$, we replace those parameters by $0$ and $p_1 + p_2$. We note that this operation has the following properties:

  • The expectation of $\mathbf{P}$ remains unchanged.

  • The total variation distance between the old and new distributions is $O(p_1 p_2)$, as is the change in variance between the distributions.

  • The variance of $\mathbf{P}$ is decreased.

  • The number of nonzero parameters of $\mathbf{P}$ in $I_L$ is decreased by at least $1$.

All of these properties are straightforward to verify by considering the effect of just the sum of the two changed variables. By repeating this procedure, we eventually obtain a new PBD $\mathbf{P}'$ with the same mean as $\mathbf{P}$, smaller variance, and at most one parameter in $I_L$. We also claim that $d_{TV}(\mathbf{P}, \mathbf{P}')$ is small. To show this, we note that in each replacement, the error in variation distance is at most a constant times the increase in the sum of the squares of the parameters of the relevant PBD. Therefore, letting $p_i$ be the parameters of $\mathbf{P}$ and $p'_i$ be the parameters of $\mathbf{P}'$, we have that $d_{TV}(\mathbf{P}, \mathbf{P}') = O\big(\sum_i (p'_i)^2 - \sum_i p_i^2\big)$. We note that this difference is entirely due to the parameters that were modified by this procedure. Therefore, it is at most $O(c_L^2)$ times the number of non-zero parameters created. Note that all but one of these parameters contributes at least $c_L/2$ to the variance of $\mathbf{P}'$. Therefore, this number is at most $O(\mathrm{Var}[\mathbf{P}']/c_L + 1)$. Hence, the total variation distance between $\mathbf{P}$ and $\mathbf{P}'$ is at most $O(c_L \mathrm{Var}[\mathbf{P}] + c_L^2) = O(\epsilon)$. Similarly, the variance of our distribution is decreased by at most this much. This implies that it suffices to consider $\mathbf{P}$ that have at most one parameter in $I_L$. Symmetrically, we can also remove all but one of the parameters in $I'_L$, and thus it suffices to consider $\mathbf{P}$ that satisfy condition 2.
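The merging step used in this argument is mechanical enough to state in code. A sketch (ours), writing $c$ for the right endpoint of the interval $I_L = (0, c]$ and assuming $c \leq 1/2$:

```python
def merge_small_params(params, c):
    """While two parameters p1, p2 lie in (0, c], replace them by 0 and p1 + p2.
    Each step preserves the mean, decreases the variance by exactly 2*p1*p2
    (so the total variation change is O(p1*p2)), and reduces the number of
    nonzero parameters in (0, c] by at least one, so the loop terminates."""
    params = list(params)
    while True:
        small = [i for i, p in enumerate(params) if 0.0 < p <= c]
        if len(small) < 2:
            return params
        i, j = small[0], small[1]
        p1, p2 = params[i], params[j]
        params[i], params[j] = 0.0, p1 + p2
```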

Next, we show that for any $\ell < L$ it is possible to modify the parameters that $\mathbf{P}$ has in $I_\ell$ or $I'_\ell$ so that we leave the expectation and variance unchanged, introduce at most $\epsilon/(4L)$ error in variation distance, and leave only $O(\max(1, k/2^{\ell}))$ distinct parameters in this range. The basic idea is as follows. By Lemma 8, it suffices to keep $\sum_i p_i^t$ (or, for the reflected intervals, $\sum_i (1 - p_i)^t$) constant for the parameters in that range, for some range of values of $t$. On the other hand, Theorem 6 implies that this can be done while producing only a small number of distinct parameters.

Without loss of generality, assume that we are dealing with the interval $I_\ell$. Note that if $\mathrm{Var}[\mathbf{P}] = O(c_{\ell+1} \max(1, k/2^{\ell}))$, then there can be at most $O(\max(1, k/2^{\ell}))$ parameters in $I_\ell$ to begin with, since each such parameter contributes at least $c_{\ell+1}/2$ to the variance. Hence, in this case there is nothing to show. Thus, assume that $\mathrm{Var}[\mathbf{P}] \geq C' c_{\ell+1} \max(1, k/2^{\ell})$ with $C'$ a sufficiently large constant. Let $p_1, \ldots, p_r$ be the parameters of $\mathbf{P}$ that lie in $I_\ell$. Consider replacing them with parameters $q_1, \ldots, q_r$ also in $I_\ell$ to obtain $\mathbf{Q}$. By Lemma 8, we have that $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon/(4L)$ so long as the first two moments of $(p_1, \ldots, p_r)$ and $(q_1, \ldots, q_r)$ agree and

$$\Big| \sum_{i=1}^{r} p_i^{t} - \sum_{i=1}^{r} q_i^{t} \Big| \;\leq\; \frac{\epsilon}{4Lk} \left( \frac{\sigma}{C\sqrt{k}} \right)^{t} \qquad (3)$$

for all $t > 2$ (the terms in the sum in Equation (1) coming from the parameters not being changed cancel out). Note that $\big| \sum_i p_i^t - \sum_i q_i^t \big| \leq r \cdot c_\ell^t$, since all of the $p_i$ and $q_i$ lie in $I_\ell$. Furthermore, note that $r = O(\sigma^2/c_{\ell+1})$. Therefore, $\big| \sum_i p_i^t - \sum_i q_i^t \big| = O(\sigma^2 c_\ell^t / c_{\ell+1})$. Combining the above, we find that Equation (3) is automatically satisfied for any $t$ larger than a sufficiently large multiple of $\max(1, k/2^{\ell})$. On the other hand, Theorem 6 implies that there is some choice of the $q_i$, taking on only $O(\max(1, k/2^{\ell}))$ distinct values, so that $\sum_i q_i^t$ is exactly $\sum_i p_i^t$ for all $t$ up to this threshold. Thus, replacing the $p_i$'s in this range by these $q_i$'s, we leave the expectation and variance the same (as we have fixed the first two moments), and we have changed our distribution in variation distance by at most $\epsilon/(4L)$.

Repeating the above procedure for each interval $I_\ell$ or $I'_\ell$ in turn, we replace $\mathbf{P}$ by a new PBD $\mathbf{Q}$ with the same expectation, smaller variance, and $d_{TV}(\mathbf{P}, \mathbf{Q}) \leq \epsilon$, so that $\mathbf{Q}$ satisfies conditions 1 and 2. We claim that conditions 3 and 4 are then necessarily satisfied. Condition 3 follows from noting that the number of parameters not $0$ or $1$ is at most $\sum_{\ell} O(\mathrm{Var}[\mathbf{Q}]/c_{\ell+1}) + 2$, which is $O(\mathrm{Var}[\mathbf{Q}]/c_L)$. Therefore, the expectation of $\mathbf{Q}$ is within $O(\mathrm{Var}[\mathbf{Q}]/c_L)$ of the number of parameters equal to $1$. Condition 4 follows upon noting that $\mathrm{Var}[\mathbf{Q}]$ is at least the number of parameters in $I_\ell$ or $I'_\ell$ times $c_{\ell+1}/2$ (as each such parameter contributes at least $c_{\ell+1}(1 - c_\ell) \geq c_{\ell+1}/2$ to the variance). This completes the proof of Theorem 4. ∎

3 Proper Learning Algorithm

Given samples from an unknown PBD $\mathbf{P}$, and given a collection of intervals and multiplicities as described in Theorem 4, we wish to find a PBD $\mathbf{Q}$ with those multiplicities that approximates $\mathbf{P}$. By Lemma 8, it is sufficient to find such a $\mathbf{Q}$ so that $\widehat{\mathbf{Q}}(\xi)$ is close to $\widehat{\mathbf{P}}(\xi)$ for all small $|\xi|$. On the other hand, by Equation (2), the logarithm of $\widehat{\mathbf{Q}}(\xi)$ is given by an appropriate Taylor expansion in the parameters of $\mathbf{Q}$. Note that if a parameter $q_i$ is small then, due to the $q_i^t$ factor, the terms of our sum with $t$ large will automatically be small. By truncating the Taylor series, we get a polynomial in the parameters that gives us an approximation to $\log \widehat{\mathbf{Q}}(\xi)$. By applying a truncated Taylor series for the exponential function, we obtain a polynomial in the parameters of $\mathbf{Q}$ which approximates its Fourier coefficients. This procedure yields a system of polynomial inequalities whose solutions give the parameters of PBDs that approximate $\mathbf{P}$. Our main technique will be to solve this system to obtain our output distribution, using the following result:

Theorem 10 ([Ren92b, Ren92a]).

Let $p_1, \ldots, p_m$ be polynomials over the reals in $v$ variables, each of maximum degree at most $d$. Let $S := \{x \in \mathbb{R}^v : p_i(x) \geq 0 \text{ for } i = 1, \ldots, m\}$. If the coefficients of the $p_i$'s are rational numbers with bit complexity at most $\tau$, there is an algorithm that runs in time $\mathrm{poly}(\tau) \cdot (md)^{O(v)}$ and decides if $S$ is empty or not. Further, if $S$ is non-empty, the algorithm runs in time $\mathrm{poly}(\tau, \log(1/\delta)) \cdot (md)^{O(v)}$ and outputs a point in $S$ up to an error $\delta$.
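No off-the-shelf library implements Renegar's algorithm, but the Taylor-truncation device used in the construction below is easy to make concrete. The sketch (ours) evaluates the degree-$K$ Taylor polynomial of the exponential; the factorial error decay is what lets a poly-logarithmic degree achieve $\epsilon$-accuracy:

```python
def taylor_exp(z, K):
    """Degree-K Taylor polynomial of exp at 0: sum_{t=0}^{K} z**t / t!.
    By Taylor's theorem the error is at most |z|**(K+1)/(K+1)! * e**|z|,
    which is 2**(-Omega(K)) once |z| is at most a small multiple of K."""
    term, total = 1.0 + 0.0j, 0.0 + 0.0j
    for t in range(K + 1):
        total += term
        term *= z / (t + 1)
    return total
```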

In order to set up the necessary system of polynomial equations, we have the following theorem:

Theorem 11.

Consider a PBD $\mathbf{P}$ with $\mathrm{Var}[\mathbf{P}] = \mathrm{poly}(1/\epsilon)$, and real numbers $\hat\mu$ and $\hat\sigma$ with $|\hat\mu - \mathbf{E}[\mathbf{P}]| \leq \hat\sigma$ and $\hat\sigma^2 = \Theta(\mathrm{Var}[\mathbf{P}] + 1)$. Let $k = \Theta(\log(1/\epsilon))$ be as above, and let $M$ be a sufficiently large multiple of $\sqrt{k}\,\hat\sigma$. Let $h_\xi$ be complex numbers, for each integer $\xi$ with $|\xi| \leq \sqrt{k}\,M/\hat\sigma$, so that $|h_\xi - \widehat{\mathbf{P}}(\xi)| \leq \epsilon/k$.

Consider another PBD $\mathbf{Q}$ with distinct parameters $q_j$ of multiplicities $m_j$ contained in intervals $I_j$ as described in Theorem 4. There exists an explicit system $\mathcal{S}$ of real polynomial inequalities, each of degree $\mathrm{poly}(k)$ in the $q_j$, so that:

  • If there exists such a PBD $\mathbf{Q}$ of this form with $|\mathbf{E}[\mathbf{Q}] - \hat\mu| \leq \hat\sigma$, $\mathrm{Var}[\mathbf{Q}] = \Theta(\hat\sigma^2)$, and $|\widehat{\mathbf{Q}}(\xi) - \widehat{\mathbf{P}}(\xi)| \leq \epsilon/k$ for all integers $|\xi| \leq \sqrt{k}\,M/\hat\sigma$, then its parameters yield a solution to $\mathcal{S}$.

  • Any solution to $\mathcal{S}$ corresponds to a PBD $\mathbf{Q}$ with $d_{TV}(\mathbf{P}, \mathbf{Q}) = O(\epsilon)$.

Furthermore, such a system can be found with rational coefficients of encoding size $\mathrm{poly}\log(1/\epsilon)$ bits.

Proof.

For technical reasons, we begin by considering the case that $\hat\sigma$ is larger than a sufficiently large multiple of $k$, as we will need to make use of slightly different techniques in the other case. In this case, we construct our system in the following manner. We begin by putting appropriate constraints on the mean and variance of $\mathbf{Q}$, and requiring that the $q_j$'s lie in the appropriate intervals:

$$\Big| \sum_j m_j q_j - \hat\mu \Big| \;\leq\; \hat\sigma, \qquad (4)$$

$$\hat\sigma^2 / 2 \;\leq\; \sum_j m_j q_j (1 - q_j) \;\leq\; 2\hat\sigma^2, \qquad (5)$$

$$q_j \in I_j \quad \text{for all } j. \qquad (6)$$

Next, we need a low-degree polynomial to express the condition that the Fourier coefficients of $\mathbf{Q}$ are approximately correct. To do this, we let $A$ denote the set of indices $j$ so that $q_j \leq 1/2$, and $B$ the set of indices so that $q_j > 1/2$ (this is determined by the interval $I_j$). We let

$$g(\xi) := \sum_{j \in A} m_j \sum_{t=1}^{k} \frac{(-1)^{t+1}}{t} \big( e(-\xi/M) - 1 \big)^{t} q_j^{t} \;+\; \sum_{j \in B} m_j \Big( -2\pi i \xi / M + \sum_{t=1}^{k} \frac{(-1)^{t+1}}{t} \big( e(\xi/M) - 1 \big)^{t} (1 - q_j)^{t} \Big) \qquad (7)$$

be an approximation to the logarithm of $\widehat{\mathbf{Q}}(\xi)$. We next define $E_K(z) := \sum_{t=0}^{K} z^{t}/t!$, for $K$ a sufficiently large multiple of $k$, to be a Taylor approximation to the exponential function.

By Taylor's theorem, we have that

$$\big| E_K(z) - \exp(z) \big| \;\leq\; \frac{|z|^{K+1}}{(K+1)!} \, e^{|z|},$$

and in particular that if $|z| \leq K/10$ then $|E_K(z) - \exp(z)| \leq 2^{-K}$.

We would ideally like to use $E_K(g(\xi))$ as an approximation to $\widehat{\mathbf{Q}}(\xi)$. Unfortunately, $g(\xi)$ may have a large imaginary part. To overcome this issue, we let $n_\xi$, defined as the nearest integer to $\mathrm{Im}(g(\xi))/(2\pi)$, be an approximation to the imaginary part, and we set

$$h'_\xi := E_K\big( g(\xi) - 2\pi i\, n_\xi \big). \qquad (8)$$

We complete our system with the final inequality:

$$\big| h'_\xi - h_\xi \big|^2 \;\leq\; (3\epsilon/k)^2 \quad \text{for all integers } |\xi| \leq \sqrt{k}\,M/\hat\sigma. \qquad (9)$$

In order for our analysis to work, we will need $h'_\xi$ to approximate $\widehat{\mathbf{Q}}(\xi)$. Thus, we make the following claim:

Claim 12.

If Equations (4), (5), (6), (7), and (8) hold, then $|h'_\xi - \widehat{\mathbf{Q}}(\xi)| \leq \epsilon/k$ for all integers $\xi$ with $|\xi| \leq \sqrt{k}\,M/\hat\sigma$.

This is proved in Appendix C by showing that $g(\xi)$ is close to a branch of the logarithm of $\widehat{\mathbf{Q}}(\xi)$ and that $|g(\xi) - 2\pi i\, n_\xi| \leq K/10$, so $E_K$ is a good enough approximation to the exponential.

Hence, our system is defined as follows:

Variables:

  • $q_j$, for each distinct parameter of $\mathbf{Q}$.

  • $n_\xi$, for each $\xi$.

  • $h'_\xi$, for each $\xi$.

Equations: Equations (4), (5), (6), (7), (8), and (9).

To prove (i), we note that such a $\mathbf{Q}$ will satisfy Equations (4) and (5), because of the bounds on its mean and variance, and will satisfy Equation (6) by assumption. Therefore, by Claim 12, $h'_\xi$ is within $\epsilon/k$ of $\widehat{\mathbf{Q}}(\xi)$ for all relevant $\xi$. On the other hand, by assumption, $\widehat{\mathbf{Q}}(\xi)$ is within $\epsilon/k$ of $\widehat{\mathbf{P}}(\xi)$, which is in turn within $\epsilon/k$ of $h_\xi$, for all such $\xi$. Therefore, setting $n_\xi$ and $h'_\xi$ as specified, Equation (9) follows. To prove (ii), we note that a $\mathbf{Q}$ whose parameters satisfy $\mathcal{S}$ will, by Claim 12, satisfy the hypotheses of Lemma 9 with respect to $\mathbf{P}$. Therefore, $d_{TV}(\mathbf{P}, \mathbf{Q}) = O(\epsilon)$.

As we have defined it so far, the system does not have rational coefficients: Equation (7) makes use of the quantities $e(\pm\xi/M)$ and $\pi$, as does Equation (8). To fix this issue, we note that if we approximate the appropriate powers of $e(\pm\xi/M)$ and $\pi$ each to accuracy $2^{-\Theta(K)}$, this produces an error of size at most $2^{-\Omega(K)}$ in the value of $g(\xi)$, and therefore an error of size at most $2^{-\Omega(K)} = o(\epsilon/k)$ for $h'_\xi$, and this leaves the above argument unchanged.

Also, as defined above, the system has complex constants and variables, and many of the equations equate complex quantities. The system can be expressed as a set of real inequalities by doubling the number of equations and variables to deal with the real and imaginary parts separately. Doing so introduces binomial coefficients into the coefficients, which are no bigger than $2^{O(K)}$ in magnitude. To express the rational approximations of the powers of $e(\pm\xi/M)$ and $\pi$, we need denominators with a factor of $2^{\Theta(K)}$. All other constants can be expressed as rationals with numerator and denominator bounded by $2^{O(K)}$. So, the encoding size of any of the rationals that appear in the system is $O(K) = \mathrm{poly}\log(1/\epsilon)$ bits.

One slightly more difficult problem is that the proof of Claim 12 depends upon the fact that $\hat\sigma$ is larger than a sufficiently large multiple of $k$. If this is not the case, we will in fact need to slightly modify our system of equations. In particular, we redefine $A$ to be the set of indices, $j$, so that $q_j \leq 1/k$ (rather than $q_j \leq 1/2$), and let $B$ be the set of indices so that $q_j \geq 1 - 1/k$. Finally, we let $D$ be the set of indices for which $1/k < q_j < 1 - 1/k$. We note that, since each parameter with index in $D$ contributes at least $1/(2k)$ to $\mathrm{Var}[\mathbf{Q}]$, if Equations (6) and (5) both hold, we must have $\sum_{j \in D} m_j = O(k\hat\sigma^2) = O(k^3)$.

We then slightly modify Equation (8), replacing it by

$$h'_\xi := E_K\big( g(\xi) - 2\pi i\, n_\xi \big) \cdot \prod_{j \in D} \big( 1 + q_j ( e(-\xi/M) - 1 ) \big)^{m_j}, \qquad (10)$$

where $g(\xi)$ now includes only the terms with $j \in A \cup B$. Note that by our bound on $\sum_{j \in D} m_j$, this is of degree $O(k^3)$ in the $q_j$.

We now need only prove the analogue of Claim 12 in order for the rest of our analysis to follow.

Claim 13.

If Equations (4), (