# Learning Coverage Functions and Private Release of Marginals

## Abstract

We study the problem of approximating and learning coverage functions. A function $c : \{0,1\}^n \to \mathbf{R}^+$ is a coverage function if there exists a universe $U$ with non-negative weights $w(u)$ for each $u \in U$ and subsets $A_1, \ldots, A_n$ of $U$ such that $c(x) = \sum_{u \in \bigcup_{i : x_i = 1} A_i} w(u)$. Alternatively, coverage functions can be described as non-negative linear combinations of monotone disjunctions. They are a natural subclass of submodular functions and arise in a number of applications.

We give an algorithm that for any $\gamma, \delta > 0$, given random and uniform examples of an unknown coverage function $c$, finds a function $h$ that approximates $c$ within factor $1+\gamma$ on all but a $\delta$-fraction of the points in time $\mathrm{poly}(n, 1/\gamma, 1/\delta)$. This is the first fully-polynomial algorithm for learning an interesting class of functions in the demanding PMAC model of [3]. Our algorithms are based on several new structural properties of coverage functions. Using the results in [24], we also show that coverage functions are learnable agnostically with excess $\ell_1$-error $\epsilon$ over all product and symmetric distributions in time $n^{O(\log(1/\epsilon))}$. In contrast, we show that, without assumptions on the distribution, learning coverage functions is at least as hard as learning polynomial-size disjoint DNF formulas, a class of functions for which the best known algorithm runs in time $2^{\tilde{O}(n^{1/3})}$ [39].

As an application of our learning results, we give simple differentially-private algorithms for releasing monotone conjunction counting queries with low *average* error. In particular, for any $k$, we obtain private release of $k$-way marginals with average error $\alpha$ in time $n^{O(\log(1/\alpha))}$.

## 1 Introduction

We consider learning and approximation of the class of *coverage* functions over the Boolean hypercube $\{0,1\}^n$. A function $c : \{0,1\}^n \to \mathbf{R}^+$ is a coverage function if there exists a family of sets $A_1, \ldots, A_n$ on a universe $U$ equipped with a weight function $w : U \to \mathbf{R}^+$ such that $c(x) = \sum_{u \in A(x)} w(u)$ for any $x \in \{0,1\}^n$, where $A(x) = \bigcup_{i : x_i = 1} A_i$. We view these functions over $\{0,1\}^n$ by associating each subset $S \subseteq [n]$ with the vector $x \in \{0,1\}^n$ such that $x_i = 1$ iff $i \in S$. We define the size (denoted by $\mathrm{size}(c)$) of a coverage function $c$ as the size of a smallest-size universe $U$ that can be used to define $c$. As is well-known, coverage functions also have an equivalent and natural representation as non-negative linear combinations of monotone disjunctions, with the size being the number of disjunctions in the combination.
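The two equivalent views can be checked directly on a toy instance. A minimal sketch (the universe, weights, and sets below are made-up example data, not taken from the paper):

```python
from itertools import product

# Made-up example: universe U, weights w, and sets A_1..A_n.
n = 3
U = ["u1", "u2", "u3"]
w = {"u1": 0.5, "u2": 0.3, "u3": 0.2}
A = {1: {"u1", "u2"}, 2: {"u2"}, 3: {"u3"}}

def coverage(x):
    """c(x) = total weight of universe elements covered by the A_i with x_i = 1."""
    covered = set().union(*(A[i] for i in range(1, n + 1) if x[i - 1] == 1))
    return sum(w[u] for u in covered)

def coverage_as_disjunctions(x):
    """Equivalent view: a non-negative combination of monotone disjunctions,
    one disjunction per universe element u, over the indices {i : u in A[i]}."""
    total = 0.0
    for u in U:
        S = [i for i in range(1, n + 1) if u in A[i]]
        total += w[u] * (1 if any(x[i - 1] for i in S) else 0)
    return total

# The two representations agree on every point of the cube.
for x in product([0, 1], repeat=n):
    assert abs(coverage(x) - coverage_as_disjunctions(x)) < 1e-12
```

Each universe element $u$ contributes the disjunction over exactly those indices $i$ with $u \in A_i$, weighted by $w(u)$, which is the folklore equivalence used throughout the paper.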

Coverage functions form a relatively simple but important subclass of the broad class of submodular functions. Submodular functions have been studied in a number of contexts and play an important role in combinatorial optimization [43] with several applications to machine learning [30] and in algorithmic game theory, where they are used to model valuation functions [1]. Coverage functions themselves figure in several applications such as facility location [14], private data release of conjunctions [31] and algorithmic game theory where they are used to model the utilities of agents in welfare maximization and design of combinatorial auctions [16].

In this paper, we investigate the learnability of coverage functions from random examples. The study of learnability from random examples of larger classes of functions, such as submodular and fractionally-subadditive functions, was initiated by [3], who were motivated by applications in algorithmic game theory. They introduced the PMAC model of learning in which, given random and independent examples of an unknown function, the learner is required to output a hypothesis that is multiplicatively close (which is the standard notion of approximation in the optimization setting) to the unknown target on at least a $1-\delta$ fraction of the points. This setting is also considered in [4]. Learning of submodular functions with less demanding (and more common in machine learning) additive guarantees was first considered by [31], who were motivated by problems in private data release. In this setting the goal of the learner is equivalent to producing a hypothesis that $\epsilon$-approximates the target function in $\ell_1$ or $\ell_2$ distance. That is, for functions $f$ and $h$, $\mathbf{E}_{x \sim D}[|f(x) - h(x)|] \le \epsilon$ or $\sqrt{\mathbf{E}_{x \sim D}[(f(x) - h(x))^2]} \le \epsilon$, where $D$ is the underlying distribution on the domain (with the uniform distribution being the most common). The same notion of error and restriction to the uniform distribution are also used in several subsequent works on learning of submodular functions [13]. We consider both these models in the present work. For a more detailed survey of submodular function learning the reader is referred to [3].

### 1.1 Our Results

#### Distribution-independent learning

Our main results are for learning of coverage functions over the uniform, product and symmetric distributions. However it is useful to first understand the complexity of learning these functions without any distributional assumptions (for a formal definition and details of the models of learning see Section 2). We prove (see Section 6) that distribution-independent learning of coverage functions is at least as hard as PAC learning the class of polynomial-size disjoint DNF formulas over arbitrary distributions (that is, DNF formulas where each point satisfies at most 1 term). Polynomial-size disjoint DNF formulas is an expressive class of Boolean functions that includes the class of polynomial-size decision trees, for example. Moreover, there is no known algorithm for learning polynomial-size disjoint DNFs that runs faster than the algorithm for learning general DNF formulas, the best known algorithm for which runs in time $2^{\tilde{O}(n^{1/3})}$ [39]. Let $\mathcal{CV}$ denote the class of coverage functions over $\{0,1\}^n$ with range in $[0,1]$.

This reduction gives a computational impediment to fully-polynomial PAC (and consequently PMAC) learning of coverage functions of polynomial size or any class that includes coverage functions. Previously, hardness results for learning various classes of submodular and fractionally-subadditive functions were information-theoretic [3] or required encodings of cryptographic primitives in the function [3].

On the positive side, in Section 6.2 we show that learning (both distribution-specific and distribution-independent) of coverage functions of size $s$ is at most as hard as learning the class of linear thresholds of $s$ monotone Boolean disjunctions (which, for example, include monotone CNF with $s$ clauses). A special case of this simple reduction appears in [32].

#### PAC and PMAC learning over the uniform distribution

Learning of submodular functions becomes substantially easier when the distribution is restricted to be uniform (denoted by $\mathcal{U}$). For example, all submodular functions are learnable with $\ell_1$-error of $\epsilon$ in time $2^{\mathrm{poly}(1/\epsilon)} \cdot \mathrm{poly}(n)$ [26], whereas there is a constant $\epsilon_0 > 0$ and a distribution $D$ such that no polynomial-time algorithm can achieve $\ell_1$-error of $\epsilon_0$ when learning submodular functions relative to $D$ [3]. At the same time achieving fully-polynomial time is often hard even under this strong assumption on the distribution. For example, polynomial-size disjoint DNF or monotone DNF/CNF are not known to be learnable efficiently in this setting and the best algorithms run in $n^{O(\log n)}$ time. But, as we show below, when restricted to the uniform distribution, coverage functions are easier than disjoint DNF and are PAC learnable efficiently. Further, they are learnable in fully-polynomial time even with the stronger multiplicative approximation guarantees of the PMAC learning model [3]. We first state the PAC learning result, which is easier to prove and serves as a step toward the PMAC algorithm.

We note that for general submodular functions an exponential dependence on $1/\epsilon$ is necessary information-theoretically [26]. To obtain an algorithm with multiplicative guarantees we show that for every monotone submodular (and not just coverage) function, multiplicative approximation can be easily reduced to additive approximation. The reduction decomposes the domain into subcubes on which the target function is relatively large with high probability; specifically, with high probability the value of $f$ on each subcube is at least a constant fraction of the maximum value of $f$ on the subcube. The reduction is based on concentration results for submodular functions [9] and the fact that for any non-negative monotone submodular function $f$, $\mathbf{E}_{\mathcal{U}}[f] \ge \frac{1}{2}\max_x f(x)$ [22]. This reduction together with Thm. ? yields our PMAC learning algorithm for coverage functions.

This is the first fully-polynomial (that is, polynomial in $n$, $1/\gamma$ and $1/\delta$) algorithm for PMAC learning a natural subclass of submodular functions even when the distribution is restricted to be uniform. As a point of comparison, the sketching result of [2] shows that for every coverage function $c$ and $\gamma > 0$, there exists a coverage function $c'$ of polynomial (in $n$ and $1/\gamma$) size that approximates $c$ within factor $1+\gamma$ everywhere. Unfortunately, it is unknown how to compute this strong approximation even in subexponential time and even with value queries^{1}.

The key property that we identify and exploit in designing the PAC algorithm is that the Fourier coefficients of coverage functions satisfy a form of (anti-)monotonicity.

This lemma allows us to find all significant Fourier coefficients of a coverage function efficiently using a search procedure analogous to that in the Kushilevitz-Mansour algorithm [42] (but without the need for value queries). An additional useful property we prove is that any coverage function can be approximated by a function of few variables (referred to as a *junta*).

By identifying the variables of an approximating junta we make the learning algorithm computationally more efficient and achieve logarithmic dependence of the number of random examples on $n$. This, in particular, implies *attribute efficiency* [7] of our algorithm. Our bound on junta size is tight since coverage functions include monotone linear functions, which require a junta of size $\Omega(1/\epsilon^2)$ for $\epsilon$-approximation (e.g. [25]). This clearly distinguishes coverage functions from disjunctions themselves, which can always be approximated using a function of just $O(\log(1/\epsilon))$ variables. We note that in a subsequent work [25] showed approximation by $\tilde{O}(1/\epsilon^2)$-juntas for all submodular functions using a more involved approach. They also show that this approximation leads to a PMAC learning algorithm for all submodular functions.

Exploiting the representation of coverage functions as non-negative linear combinations of monotone disjunctions, we show that we can actually get a PAC learning algorithm that outputs a hypothesis that is guaranteed to be a coverage function. That is, the algorithm is *proper*. The running time of this algorithm is polynomial in $n$ and $1/\epsilon$ and, in addition, depends polynomially on the size of the target coverage function.

#### Agnostic Learning on Product and Symmetric Distributions

We then consider learning of coverage functions over general product and symmetric distributions (that is, those whose PDF is symmetric with respect to the variables). These are natural generalizations of the uniform distribution studied in a number of prior works. In our case the motivation comes from the application to differentially-private release of monotone $k$-conjunction counting queries, referred to as *$k$-way marginals* in this context. Releasing $k$-way marginals with low average error corresponds to learning of coverage functions over the uniform distribution on points of Hamming weight $k$, which is a symmetric distribution (we describe the applications in more detail in the next subsection).

As usual with Fourier transform-based techniques, on general product distributions the running time of our PAC learning algorithm becomes polynomial in $1/p$, where $p$ is the smallest bias of a variable in the distribution. It also relies heavily on the independence of variables and therefore does not apply to general symmetric distributions. Therefore, we use a different approach to the problem which learns coverage functions by learning disjunctions in the agnostic learning model [33]. This approach is based on a simple and known observation that if disjunctions can be approximated in $\ell_1$ distance by linear combinations of some basis functions then so are coverage functions. As a result, the learning algorithm for coverage functions also has agnostic guarantees relative to $\ell_1$-error.

For product distributions, this algorithm relies on the fact that disjunctions can be $\epsilon$-approximated within $\ell_2$ by polynomials of degree $O(\log(1/\epsilon))$ [6].

A simpler proof for this approximation appears in [24] where it is also shown that the same result holds for all symmetric distributions.

For the special case of product distributions that have their one-dimensional marginal expectations bounded away from $0$ and $1$ by constants, we show that we can in fact make our agnostic learning algorithm *proper*, that is, the hypothesis returned by our algorithm is a coverage function. In particular, we give a proper agnostic learning algorithm for coverage functions over the uniform distribution running in time $n^{O(\log(1/\epsilon))}$. It is not hard to show that this algorithm is essentially the best possible assuming hardness of learning sparse parities with noise.

#### Applications to Differentially Private Data Release

We now briefly overview the problem of differentially private data release and state our results. Formal definitions and details of our applications to privacy appear in Sec. ? and a more detailed background discussion can, for example, be found in [45]. The objective of a private data release algorithm is to release answers to all *counting queries* from a given class with low error while protecting the privacy of participants in the data set. Specifically, we are given a data set $D$ which is a subset of a fixed domain (in our case $\{0,1\}^n$). Given a query class $C$ of Boolean functions on $\{0,1\}^n$, the objective is to output a data structure $H$ that allows answering *counting queries* from $C$ on $D$ with low error. A counting query for $q \in C$ gives the fraction of elements in $D$ on which $q$ equals $1$. The algorithm producing $H$ should be *differentially private* [17]. The efficiency of a private release algorithm for releasing a class of queries with error $\alpha$ on a data set is measured by its running time (in the size of the data set, the dimension $n$ and the error parameter) and the minimum data set size required for achieving certain error. Informally speaking, a release algorithm is differentially private if adding an element of $\{0,1\}^n$ to $D$ (or removing an element of $D$ from it) does not significantly affect the probability that any specific $H$ will be output by the algorithm. A natural and often useful way of private data release for a data set $D$ is to output another data set $\hat{D}$ (in a differentially private way) such that answers to counting queries based on $\hat{D}$ approximate answers based on $D$. Such release is referred to as data *sanitization* and the data set $\hat{D}$ is referred to as a *synthetic data set*.

Releasing Boolean conjunction counting queries is likely the single best-motivated and most well-studied problem in private data analysis [5]. Conjunction counting queries form a part of official statistics reported by the US Census Bureau, the Bureau of Labor Statistics and the Internal Revenue Service.

Despite the relative simplicity of this class of functions, the best known algorithm for releasing all $k$-way marginals with a constant worst-case error runs in polynomial time only for data sets of size at least $n^{\Theta(\sqrt{k})}$ [45]. Starting with the work of [31], researchers have also considered the private release problem with low *average* error with respect to some distribution, most commonly uniform, on the class of queries [13]. However, in most applications only relatively short marginals are of interest and therefore the average error relative to the uniform distribution can be completely uninformative in this case. As can be easily seen (e.g. [31]), the function mapping a monotone conjunction to a counting query for it on a data set $D$ can be written as a convex combination of monotone disjunctions corresponding to the points in $D$, which is a coverage function. In this translation the distribution on conjunctions becomes a distribution over points on which the coverage function is defined, and the error in approximating the coverage function becomes the average error of the data release. Therefore, using standard techniques, we adapt our learning algorithms to this problem. Thm. ? gives the following algorithm for release of $k$-way marginals.
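The translation from conjunction counting queries to a coverage function can be verified computationally. A minimal sketch on a made-up toy data set (all records are illustrative): for each record $x$, De Morgan's law turns the conjunction into one minus a disjunction over the zero coordinates of $x$.

```python
import itertools

# Toy binary data set D over n attributes (made-up example data).
n, D = 4, [(1, 1, 0, 1), (0, 1, 1, 1), (1, 0, 0, 0)]

def conj_query(T):
    """Fraction of records satisfying the monotone conjunction AND_{i in T} x_i."""
    return sum(all(x[i] for i in T) for x in D) / len(D)

def coverage_part(y):
    """g(y) = (1/|D|) * sum_x OR_{S_x}(y) with S_x = {i : x_i = 0}:
    a convex combination of monotone disjunctions, i.e. a coverage function."""
    return sum(any(y[i] and not x[i] for i in range(n)) for x in D) / len(D)

# The counting query for conjunction T equals 1 - g(indicator vector of T).
for r in range(n + 1):
    for T in itertools.combinations(range(n), r):
        y = tuple(1 if i in T else 0 for i in range(n))
        assert abs(conj_query(T) - (1 - coverage_part(y))) < 1e-12
```

Approximating the coverage function $g$ in $\ell_1$ over a distribution on indicator vectors thus directly bounds the average release error over the corresponding distribution on conjunctions.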

Note that there is no dependence on $k$ in the bounds and the result applies to any symmetric distribution. Without assumptions on the distribution, [19] give an algorithm that releases $k$-way marginals with average error $\alpha$ given a data set of sufficiently large size, and runs in time polynomial in this size. (They also give a method to obtain the stronger worst-case error guarantees by using *private boosting*.)

We then adapt our PAC learning algorithms for coverage functions to give two algorithms for privately releasing monotone conjunction counting queries over the uniform distribution. Our first algorithm uses Thm. ? to obtain a differentially private algorithm for releasing monotone conjunction counting queries in time polynomial in $n$ (the data set dimension) and $1/\alpha$ (the inverse of the error).

The previous best algorithm for this problem runs in time $n^{O(1/\alpha^2)}$ [13]. In addition, using a general framework from [32], one can reduce private release of monotone conjunction counting queries to PAC learning with value queries of linear thresholds of a polynomial number of conjunctions over a certain class of “smooth” distributions. [32] show how to use their framework together with Jackson’s algorithm for learning majorities of parities [35] to privately release parity counting queries. Using a similar argument one can also obtain a polynomial-time algorithm for privately releasing monotone conjunction counting queries. Our algorithm is substantially simpler and more efficient than the one obtained via the reduction in [32].

We can also use our proper learning algorithm to obtain a differentially private sanitization for releasing marginals in time polynomial in $n$ and quasi-polynomial in $1/\alpha$.

Note that our algorithm for privately releasing monotone conjunction queries with low average error via a synthetic data set runs in polynomial time for any error $\alpha$ that is a constant.

### 1.2 Related Work

[2] study *sketching* of coverage functions and prove that for any coverage function there exists a small (polynomial in the dimension and the inverse of the error parameter) approximate representation that multiplicatively approximates the function on all points. Their result implies an algorithm for learning coverage functions in the PMAC model [3] that uses a polynomial number of examples but requires time exponential in the dimension $n$. [11] study the problem of *testing* coverage functions (under what they call the *W-distance*) and show that the class of coverage functions of polynomial size can be reconstructed, that is, one can obtain in polynomial time a representation $h$ of an unknown coverage function $c$ such that $\mathrm{size}(h)$ is bounded by some polynomial in $n$ (in general, for a coverage function $c$ on $\{0,1\}^n$, $\mathrm{size}(c)$ can be as high as $2^n - 1$), and $h$ computes $c$ correctly at all points, using polynomially many value queries. Their reconstruction algorithm can be seen as an *exact* learning algorithm with value queries for coverage functions of small size.

In a recent (and independent) work, [52] develop a subroutine for learning sums of monotone conjunctions that also relies on the monotonicity of the Fourier coefficients (as in Lemma ?). Their application is in a very different context of learning DNF expressions from *numerical pairwise* queries which, given two assignments to the variables, expect in reply the number of terms of the target DNF satisfied by both assignments.

A general result of [31] shows that releasing all counting queries from a concept class $C$ (when accessing the data set using counting queries) requires as many counting queries as agnostically learning $C$ using statistical queries. Using lower bounds on the statistical query complexity of agnostic learning of conjunctions [23], they derived a lower bound on the counting query complexity of releasing all conjunction counting queries of certain length. This rules out a fully polynomial (in the error parameter, the data set size and the dimension $n$) algorithm to privately release short conjunction counting queries with low worst-case error.

Since our algorithms access the data set using counting queries, the lower bounds from [31] apply to our setting. However the lower bound in [31] is only significant when the length of conjunctions is at most logarithmic in $n$. Building on the work of [18], [46] showed that there exists a constant $\gamma > 0$ such that there is no polynomial time algorithm for releasing a synthetic data set that answers all conjunction counting queries with worst-case error of at most $\gamma$, under some mild cryptographic assumptions.

## 2 Preliminaries

We use $\{0,1\}^n$ to denote the $n$-dimensional Boolean hypercube with “false” mapped to $0$ and “true” mapped to $1$. Let $[n]$ denote the set $\{1, 2, \ldots, n\}$. For $S \subseteq [n]$, we denote by $\mathsf{OR}_S$ the monotone Boolean disjunction on variables with indices in $S$, that is, for any $x \in \{0,1\}^n$, $\mathsf{OR}_S(x) = \bigvee_{i \in S} x_i$. A monotone Boolean disjunction is a simple example of a coverage function. To see this, consider a universe $U$ of size $1$ containing a single element, say $u$, with the associated weight $w(u) = 1$, and the sets $A_1, \ldots, A_n$ such that $A_i$ contains $u$ if and only if $i \in S$. In the following lemma we describe a natural and folklore characterization of coverage functions as non-negative linear combinations of non-empty monotone disjunctions (e.g. [31]). For completeness we include the proof in Appendix A.

For simplicity and without loss of generality we scale coverage functions to the range $[0,1]$. Note that in this case, for $c = \sum_{S} \alpha_S \mathsf{OR}_S$ we have $\sum_{S} \alpha_S \le 1$. In the discussion below we always represent coverage functions as linear combinations of monotone disjunctions with the sum of coefficients upper bounded by 1. For convenience, we also allow the empty disjunction (or constant 1) in the combination. Note that $\mathsf{OR}_{[n]}$ differs from the constant 1 only on one point and therefore this more general definition is essentially equivalent for the purposes of our discussion. Note that for every $S$, the coefficient $\alpha_S$ is determined uniquely by the function since $\mathsf{OR}_S$ is a monomial when viewed over $\{0,1\}^n$ with $0$ corresponding to “true”.

### 2.1 Learning Models

Our learning algorithms are in several models based on the PAC model [47]. In the PAC learning model the learner has access to random examples of an unknown function from a known class of functions and the goal is to output a hypothesis with low error. The PAC model was defined for Boolean functions with the probability of disagreement being used to measure the error. For our real-valued setting we use $\ell_1$-error, which generalizes the disagreement error.

We also consider learning from random examples with multiplicative guarantees, introduced by [3] and referred to as PMAC learning. For a class of non-negative functions $\mathcal{F}$, a PMAC learner with approximation factor $\alpha \ge 1$ and error $\delta$ is an algorithm which, with probability at least $2/3$, outputs a hypothesis $h$ that satisfies $\Pr_{x \sim D}[h(x) \le f(x) \le \alpha \cdot h(x)] \ge 1 - \delta$. We say that $h$ multiplicatively $\alpha$-approximates $f$ over $D$ in this case.

We are primarily interested in the regime when the approximation ratio $\alpha$ is close to $1$ and hence use $\alpha = 1 + \gamma$ instead. We say that the learner is *fully-polynomial* if it is polynomial in $n$, $1/\gamma$ and $1/\delta$.

### 2.2 Fourier Analysis on the Boolean Cube

When learning with respect to the uniform distribution we use several standard tools and ideas from Fourier analysis on the Boolean hypercube. For any functions $f, g : \{0,1\}^n \to \mathbf{R}$, the inner product of $f$ and $g$ is defined as $\langle f, g \rangle = \mathbf{E}_{x \sim \mathcal{U}}[f(x) \cdot g(x)]$. The $\ell_1$ and $\ell_2$ norms of $f$ are defined by $\|f\|_1 = \mathbf{E}[|f(x)|]$ and $\|f\|_2 = \sqrt{\mathbf{E}[f(x)^2]}$, respectively. Unless noted otherwise, in this context all expectations are with respect to $x$ chosen from the uniform distribution.

For $S \subseteq [n]$, the parity function $\chi_S$ is defined as $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$. Parities form an orthonormal basis for functions on $\{0,1\}^n$ (for the inner product defined above). Thus, every function $f : \{0,1\}^n \to \mathbf{R}$ can be written as a real linear combination of parities. The coefficients of the linear combination are referred to as the Fourier coefficients of $f$. For $f : \{0,1\}^n \to \mathbf{R}$ and $S \subseteq [n]$, the Fourier coefficient $\hat{f}(S)$ is given by $\hat{f}(S) = \mathbf{E}[f(x)\chi_S(x)]$. The Fourier expansion of $f$ is given by $f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\chi_S(x)$. For any function $f$ on $\{0,1\}^n$ its spectral $\ell_1$-norm is defined as $\|\hat{f}\|_1 = \sum_{S \subseteq [n]} |\hat{f}(S)|$.
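A brute-force implementation of these definitions (exponential in $n$, for illustration only) makes the expansion and the spectral $\ell_1$-norm concrete; the example function is a single disjunction on all three variables:

```python
from itertools import product, combinations

n = 3
points = list(product([0, 1], repeat=n))

def chi(S, x):
    """Parity chi_S(x) = (-1)^{sum_{i in S} x_i} over the {0,1} cube."""
    return (-1) ** sum(x[i] for i in S)

def fourier_coeff(f, S):
    """hat f(S) = E_x[f(x) chi_S(x)] under the uniform distribution."""
    return sum(f(x) * chi(S, x) for x in points) / len(points)

# Example: f = OR on all n variables; verify f(x) = sum_S hat f(S) chi_S(x).
f = lambda x: 1 if any(x) else 0
subsets = [S for r in range(n + 1) for S in combinations(range(n), r)]
coeffs = {S: fourier_coeff(f, S) for S in subsets}
for x in points:
    assert abs(f(x) - sum(coeffs[S] * chi(S, x) for S in subsets)) < 1e-12

spectral_l1 = sum(abs(a) for a in coeffs.values())  # here 2 - 2^{1-n} = 1.75
```

The computed coefficients also illustrate the structure used later: the empty coefficient is $\mathbf{E}[f] = 7/8$ and every non-empty coefficient equals $-2^{-n}$.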

It is easy to estimate any Fourier coefficient of a function $f$, given access to an oracle that outputs the value of $f$ at a uniformly random point in the hypercube. Given any parameters $\epsilon, \delta > 0$, we choose a set of $m = O(\log(1/\delta)/\epsilon^2)$ points drawn uniformly at random from $\{0,1\}^n$ and estimate $\hat{f}(S)$ by $\tilde{f}(S) = \frac{1}{m}\sum_{j \le m} f(x^j)\chi_S(x^j)$. Standard Chernoff bounds can then be used to show that, with probability at least $1-\delta$, $|\tilde{f}(S) - \hat{f}(S)| \le \epsilon$. For any $\epsilon > 0$, a function $f$ is said to be $\epsilon$-concentrated on a set $\mathcal{S} \subseteq 2^{[n]}$ of indices, if $\sum_{S \notin \mathcal{S}} \hat{f}(S)^2 \le \epsilon$.
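The estimator is a few lines of code. In the sketch below, the target function, its weights, and the sample size are illustrative (with a fixed seed for reproducibility):

```python
import random

random.seed(0)
n, m = 10, 20000

def f(x):
    """A toy coverage function: a convex combination of two disjunctions."""
    return 0.5 * (x[0] | x[1]) + 0.5 * (x[2] | x[3] | x[4])

def chi(S, x):
    return (-1) ** sum(x[i] for i in S)

def estimate_coeff(S, m):
    """Estimate hat f(S) = E[f * chi_S] from m uniform random examples."""
    total = 0.0
    for _ in range(m):
        x = tuple(random.randint(0, 1) for _ in range(n))
        total += f(x) * chi(S, x)
    return total / m

# True value: hat f({0}) = 0.5 * (-2^{-2}) = -0.125; the empirical estimate
# concentrates around it at rate roughly 1/sqrt(m) by Chernoff bounds.
est = estimate_coeff((0,), m)
```

With $m = 20000$ the standard deviation of the estimate is about $1/\sqrt{m} \approx 0.007$, so the estimate lands well within $\pm 0.03$ of the true coefficient.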

The following simple observation (implicit in [42]) can be used to obtain spectral concentration from bounded spectral $\ell_1$-norm for any function $f$. In addition, it shows that approximating each large Fourier coefficient to a sufficiently small additive error yields a sparse linear combination of parities that approximates $f$. For completeness we include a proof in Appendix A.

## 3 Learning Coverage Functions on the Uniform Distribution

Here we present our PAC and PMAC learning algorithms for coverage functions over the uniform distribution.

### 3.1 Structural Results

We start by proving several structural lemmas about the Fourier spectrum of coverage functions. First, we observe that the spectral $\ell_1$-norm of coverage functions is at most $2$.

From Lem. ? we have that there exist non-negative coefficients $\alpha_S$ for every $S \neq \emptyset$ such that $c = \sum_{S \neq \emptyset} \alpha_S \mathsf{OR}_S$ and $\sum_S \alpha_S \le 1$. By triangle inequality, we have: $\|\hat{c}\|_1 \le \sum_{S} \alpha_S \|\widehat{\mathsf{OR}_S}\|_1$. To complete the proof, we verify that $\|\widehat{\mathsf{OR}_S}\|_1 \le 2$ for every $S \neq \emptyset$. For this note that $\mathsf{OR}_S = 1 - 2^{-|S|}\sum_{T \subseteq S} \chi_T$ and thus $\|\widehat{\mathsf{OR}_S}\|_1 = 1 - 2^{-|S|} + (2^{|S|} - 1) \cdot 2^{-|S|} = 2 - 2^{1-|S|} \le 2$.
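The closed form for the spectral $\ell_1$-norm of a disjunction is easy to confirm numerically by brute force over a small cube (the dimensions below are illustrative):

```python
from itertools import product, combinations

def spectral_l1_of_or(k, n):
    """Spectral l1-norm of OR on the first k of n variables, by brute force."""
    points = list(product([0, 1], repeat=n))
    f = lambda x: 1 if any(x[:k]) else 0
    l1 = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            coeff = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in points)
            l1 += abs(coeff / len(points))
    return l1

# Matches the closed form 2 - 2^{1-k} <= 2 for every non-empty disjunction.
for k in range(1, 5):
    assert abs(spectral_l1_of_or(k, 5) - (2 - 2 ** (1 - k))) < 1e-9
```

Since a coverage function is a combination of disjunctions with coefficients summing to at most 1, its spectral $\ell_1$-norm inherits the bound of $2$ by the triangle inequality.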

The small spectral $\ell_1$-norm guarantees that any coverage function has its Fourier spectrum $\epsilon$-concentrated on some set of indices of size $O(1/\epsilon)$ (Lem. ?). This means that given an efficient algorithm that finds a set of indices $\mathcal{S}$ of size $\mathrm{poly}(1/\epsilon)$ on which the target function is concentrated, we obtain a way to PAC learn coverage functions to $\ell_1$-error of $\epsilon$. In general, given only random examples labeled by a function that is concentrated on a small set of indices, it is not known how to efficiently find such a small set $\mathcal{S}$ without additional information about the function (such as all indices in $\mathcal{S}$ being of small cardinality). However, for coverage functions, we can utilize a simple monotonicity property of their Fourier coefficients to efficiently retrieve such a set and obtain a PAC learning algorithm with running time that depends only polynomially on $n$ and $1/\epsilon$.

From Lem. ? we have that there exist constants $\alpha_S \ge 0$ for every $S \neq \emptyset$ such that $c = \sum_{S \neq \emptyset} \alpha_S \mathsf{OR}_S$ and $\sum_S \alpha_S \le 1$. The Fourier transform of $c$ can now be obtained simply by observing, as before in Lem. ?, that $\mathsf{OR}_S = 1 - 2^{-|S|}\sum_{T \subseteq S} \chi_T$. Thus for every $T \neq \emptyset$, $\hat{c}(T) = -\sum_{S \supseteq T} \alpha_S 2^{-|S|}$. Notice that since all the coefficients $\alpha_S$ are non-negative, all non-empty Fourier coefficients of $c$ are non-positive, and for $T \subseteq T'$, $|\hat{c}(T)| \ge |\hat{c}(T')|$. For an upper bound on the magnitude $|\hat{c}(T)|$, we have: $|\hat{c}(T)| = \sum_{S \supseteq T} \alpha_S 2^{-|S|} \le 2^{-|T|}\sum_S \alpha_S \le 2^{-|T|}$.
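Both facts, non-positivity of the non-empty coefficients and their anti-monotonicity under set inclusion, can be spot-checked by brute force on a small made-up coverage function (the weights below are illustrative):

```python
from itertools import product, combinations

n = 4
points = list(product([0, 1], repeat=n))

# A toy coverage function: non-negative combination of monotone disjunctions
# with coefficients summing to at most 1 (made-up example weights).
terms = [(0.4, (0, 1)), (0.3, (1, 2, 3)), (0.2, (2,))]
c = lambda x: sum(a * (1 if any(x[i] for i in S) else 0) for a, S in terms)

def coeff(T):
    return sum(c(x) * (-1) ** sum(x[i] for i in T) for x in points) / len(points)

# Every non-empty coefficient is non-positive, and |c^(T)| can only shrink
# as T grows: T subset of T' implies |c^(T)| >= |c^(T')|.
for r in range(1, n + 1):
    for T in combinations(range(n), r):
        assert coeff(T) <= 1e-12
        for i in range(n):
            if i not in T:
                bigger = tuple(sorted(T + (i,)))
                assert abs(coeff(bigger)) <= abs(coeff(T)) + 1e-12
```

This anti-monotonicity is exactly what later licenses a breadth-first search for large coefficients: a set can have a large coefficient only if all of its subsets do.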

We will now use Lemmas ? and ? to show that for any coverage function $c$, there exists another coverage function $c'$ that depends on just $\mathrm{poly}(1/\epsilon)$ variables and $\epsilon$-approximates it in $\ell_1$. Using Lem. ?, we also obtain spectral concentration for $c$. We start with some notation: for any $x \in \{0,1\}^n$ and a subset $J \subseteq [n]$ of variables, let $x_J$ denote the projection of $x$ on $J$. Given $y \in \{0,1\}^J$ and $z \in \{0,1\}^{\bar{J}}$, let $y \circ z$ denote the string in $\{0,1\}^n$ such that $(y \circ z)_J = y$ and $(y \circ z)_{\bar{J}} = z$ (where $\bar{J}$ denotes the set $[n] \setminus J$). We will need the following simple lemma that expresses the Fourier coefficients of the function $f_J$ which is obtained by averaging a function $f$ over all variables outside of $J$ (a proof can be found for example in [42]).

We now show that coverage functions can be approximated by functions of few variables.

Since $c$ is a coverage function, it can be written as a non-negative weighted sum of monotone disjunctions. Thus, for every $z \in \{0,1\}^{\bar{J}}$ the function $c_z$, defined as $c_z(y) = c(y \circ z)$ for every $y \in \{0,1\}^J$, is also a non-negative linear combination of monotone disjunctions, that is, a coverage function. By definition, for every $y \in \{0,1\}^J$, $c_J(y) = \mathbf{E}_z[c_z(y)]$. In other words, $c_J$ is a convex combination of the $c_z$'s and therefore is a coverage function itself. Note that for every disjunction, if its coefficient in $c_J$ is non-zero then there must exist $z$ for which its coefficient in $c_z$ is non-zero. This implies that $\mathrm{size}(c_J) \le \mathrm{size}(c)$. We will now establish that $c_J$ approximates $c$. Using Lem. ?, $\widehat{c_J}(T) = \hat{c}(T)$ for every $T \subseteq J$, and thus $\|c - c_J\|_2^2 = \sum_{T \not\subseteq J} \hat{c}(T)^2$. We first observe that every such coefficient is small. To see this, consider any $T \not\subseteq J$. Then there exists $i \in T \setminus J$ and therefore, by Lem. ?, $|\hat{c}(T)| \le |\hat{c}(\{i\})|$, which is small for every $i \notin J$. By Lem. ?, $c$ is therefore concentrated on subsets of $J$, and using the Cauchy-Schwarz inequality the $\ell_2$ bound yields the claimed $\ell_1$ approximation.
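The averaging projection $c_J$ can be implemented directly by brute force. A minimal sketch (the weights and the variable split below are made-up for illustration):

```python
from itertools import product

n = 4
J = (0, 1)  # keep these variables; average over the rest

# Toy coverage function (made-up example weights).
terms = [(0.5, (0, 1)), (0.3, (0, 2)), (0.2, (3,))]
c = lambda x: sum(a * (1 if any(x[i] for i in S) else 0) for a, S in terms)

def c_J(y):
    """Average c over all settings z of the variables outside J. The result
    is a convex combination of coverage functions, hence a coverage function."""
    rest = [i for i in range(n) if i not in J]
    vals = []
    for z in product([0, 1], repeat=len(rest)):
        x = [0] * n
        for j, yj in zip(J, y):
            x[j] = yj
        for i, zi in zip(rest, z):
            x[i] = zi
        vals.append(c(tuple(x)))
    return sum(vals) / len(vals)
```

For instance, `c_J((0, 0))` averages the target over the four settings of the remaining two variables, which here gives $(0 + 0.3 + 0.2 + 0.5)/4 = 0.25$.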

### 3.2 PAC Learning

We now describe our PAC learning algorithm for coverage functions. This algorithm is used for our application to private query release and also as a subroutine for our PMAC learning algorithm. Given the structural results above the algorithm itself is quite simple. Using random examples of the target coverage function, we compute all the singleton Fourier coefficients and isolate the set $J$ of coordinates corresponding to large (estimated) singleton coefficients. Thm. ? guarantees that the target coverage function is concentrated on the large Fourier coefficients, the indices of which are subsets of $J$. We then find a collection $\mathcal{S}$ of indices that contains all $S \subseteq J$ such that $|\hat{c}(S)|$ is large. This can be done efficiently since, by Lem. ?, $|\hat{c}(S)|$ can be large only if $|\hat{c}(T)|$ is large for all $T \subseteq S$. We can only estimate Fourier coefficients up to some additive error with high probability and therefore we keep all coefficients in the set whose estimated magnitude is at least half the threshold. Once we have a set $\mathcal{S}$ on which the target function is concentrated, we use Lem. ? to get our hypothesis. We give the pseudocode of the algorithm below.
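A condensed sketch of the two phases follows. For determinism, exact brute-force coefficients stand in for the sample-based estimates, and the target function, its weights, and the threshold are made-up illustrations:

```python
from itertools import product

n = 6
points = list(product([0, 1], repeat=n))

# Target: a small coverage function (made-up example weights).
terms = [(0.6, (0, 1)), (0.4, (2,))]
c = lambda x: sum(a * (1 if any(x[i] for i in S) else 0) for a, S in terms)

def coeff(T):
    """Exact coefficient, standing in for a sample-based estimate."""
    return sum(c(x) * (-1) ** sum(x[i] for i in T) for x in points) / len(points)

theta = 0.05  # illustrative detection threshold

# Phase 1: keep variables with large singleton coefficients.
J = [i for i in range(n) if abs(coeff((i,))) >= theta / 2]

# Phase 2: grow the collection level by level; anti-monotonicity of the
# coefficients guarantees every large coefficient is reached this way.
S_coll, frontier = {(): coeff(())}, [()]
while frontier:
    new = []
    for T in frontier:
        for i in J:
            if i not in T:
                Tnew = tuple(sorted(T + (i,)))
                if Tnew not in S_coll and abs(coeff(Tnew)) >= theta / 2:
                    S_coll[Tnew] = coeff(Tnew)
                    new.append(Tnew)
    frontier = new

def h(x):
    """Hypothesis: a sparse linear combination of parities."""
    return sum(a * (-1) ** sum(x[i] for i in T) for T, a in S_coll.items())
```

On this toy target the search recovers every non-zero coefficient, so the hypothesis agrees with the target everywhere; with sampled estimates one gets the same guarantee up to the estimation error.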

Let $c$ be the target coverage function and let $\epsilon$ be the error parameter. By Lem. ?, it is sufficient to find a collection $\mathcal{S}$ of indices and estimates $\tilde{c}(S)$ for each $S \in \mathcal{S}$ such that:

$c$ is $O(\epsilon^2)$-concentrated on $\mathcal{S}$,

and

$|\tilde{c}(S) - \hat{c}(S)|$ is sufficiently small for every $S \in \mathcal{S}$.

Let $\theta$ be a threshold polynomial in $\epsilon$. In the first stage our algorithm finds a set $J$ of variables that contains every $i$ with $|\hat{c}(\{i\})| \ge \theta$. We do this by estimating all the singleton Fourier coefficients within $\theta/4$ with (overall) probability at least $1-\delta/2$ (as before we denote the estimate of $\hat{c}(S)$ by $\tilde{c}(S)$). We set $J = \{i \mid |\tilde{c}(\{i\})| \ge 3\theta/4\}$. If all the estimates are within $\theta/4$ of the corresponding coefficients then for every $i$ with $|\hat{c}(\{i\})| \ge \theta$, $|\tilde{c}(\{i\})| \ge 3\theta/4$. Therefore $J$ contains all such variables and, in addition, $|\hat{c}(\{i\})| \ge \theta/2$ for every $i \in J$.

In the second phase, the algorithm finds a collection $\mathcal{S}$ such that the set of all large Fourier coefficients is included in $\mathcal{S}$. This is done iteratively starting with $\mathcal{S} = \{\emptyset\}$. In every iteration, for every set $S$ that was added in the previous iteration and every $i \in J$, it estimates $\hat{c}(S \cup \{i\})$ within $\theta/4$ (the success probability for estimates in this whole phase will be $1-\delta/2$). If $|\tilde{c}(S \cup \{i\})| \ge 3\theta/4$ then $S \cup \{i\}$ is added to $\mathcal{S}$. This iterative process runs until no sets are added in an iteration. At the end of the last iteration, the algorithm returns $h = \sum_{S \in \mathcal{S}} \tilde{c}(S)\chi_S$ as the hypothesis.

We first prove the correctness of the algorithm assuming that all the estimates are successful. Let $S$ be such that $|\hat{c}(S)| \ge \theta$. Then, by Thm. ?, $S \subseteq J$. In addition, by Lem. ?, for all non-empty $T \subseteq S$, $|\hat{c}(T)| \ge |\hat{c}(S)| \ge \theta$. This means that for all $T \subseteq S$, an estimate of $\hat{c}(T)$ within $\theta/4$ will be at least $3\theta/4$ in magnitude. By induction on $|T|$ this implies that in iteration $t$, all subsets of $S$ of size $t$ will be added to $\mathcal{S}$, and $S$ itself will be added in iteration $|S|$. Hence the algorithm outputs a collection $\mathcal{S}$ that contains every $S$ with $|\hat{c}(S)| \ge \theta$. By definition, the estimate $\tilde{c}(S)$ is within $\theta/4$ of $\hat{c}(S)$ for every $S \in \mathcal{S}$. By Lem. ?, the hypothesis $h$ then achieves the claimed error.

We now analyze the running time and sample complexity of the algorithm. We make the following observations regarding the algorithm.

By Chernoff bounds, $O(\log(n/\delta)/\theta^2)$ examples suffice to estimate all singleton coefficients within $\theta/4$ (where $\theta$ is the detection threshold) with probability at least $1-\delta/2$. To estimate a singleton coefficient of $c$, the algorithm needs to look at only one coordinate and the label of a random example. Thus all the singleton coefficients can be estimated in time $O(n \log(n/\delta)/\theta^2)$.

For every $S$ such that $\hat{c}(S)$ was estimated within $\theta/4$ and $|\tilde{c}(S)| \ge 3\theta/4$, we have that $|\hat{c}(S)| \ge \theta/2$. This implies that every set added to $\mathcal{S}$ has a true coefficient of magnitude at least $\theta/2$. Since $\|\hat{c}\|_1 \le 2$, this also implies that $|\mathcal{S}| = O(1/\theta)$.

By Lem. ?, for any $T$, $|\hat{c}(T)| \le 2^{-|T|}$. Thus, if $|\hat{c}(T)| \ge \theta/2$ then $|T| \le \log(2/\theta)$. This means that the number of iterations in the second phase is bounded by $\log(2/\theta)$ and, for all $S \in \mathcal{S}$, $|S| \le \log(2/\theta)$.

In the second phase, the algorithm only estimates coefficients for subsets in $\mathcal{S}' = \{S \cup \{i\} \mid S \in \mathcal{S}, i \in J\}$.

Let $m = |\mathcal{S}'|$. By Chernoff bounds, a random sample of size $O(\log(m/\delta)/\theta^2)$ can be used to ensure that, with probability at least $1-\delta/2$, the estimates of all coefficients on subsets in $\mathcal{S}'$ are within $\theta/4$. When the estimates are successful we also know that $m = O(|J|/\theta)$, and therefore all coefficients estimated by the algorithm in the second phase are within $\theta/4$ of the true values with probability at least $1-\delta/2$. Overall in the second phase the algorithm estimates $O(|J|/\theta)$ coefficients. To estimate any single one of those coefficients, the algorithm needs to examine only $O(\log(2/\theta))$ coordinates and the label of an example. Thus, the estimation of each Fourier coefficient takes time linear in the sample size, and time polynomial in $n$, $1/\theta$ and $\log(1/\delta)$ is sufficient to estimate all the coefficients.

Thus, in total the algorithm runs in time polynomial in $n$, $1/\epsilon$ and $\log(1/\delta)$, uses $\mathrm{poly}(1/\epsilon) \cdot \log(n/\delta)$ random examples and succeeds with probability at least $1-\delta$.

### 3.3 PMAC Learning

We now describe our PMAC learning algorithm, which is based on a reduction from multiplicative to additive approximation. First we note that if we knew that the values of the target coverage function $c$ are lower bounded by some $m > 0$, then we could obtain multiplicative $(1+\gamma)$-approximation using a hypothesis with $\ell_1$-error of $\gamma\delta m$. To see this note that, by Markov's inequality, $\mathbf{E}[|c - h|] \le \gamma\delta m$ implies that $\Pr[|c - h| \ge \gamma m] \le \delta$. Let $x$ be any point where $|c(x) - h(x)| < \gamma m$. Then, since $c(x) \ge m$, we get $(1-\gamma)\,c(x) < h(x) < (1+\gamma)\,c(x)$.
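The Markov step can be written out as the following chain (with $m$ the assumed lower bound on the values of $c$):

```latex
\Pr_{x \sim \mathcal{U}}\bigl[\, |c(x) - h(x)| \ge \gamma m \,\bigr]
  \;\le\; \frac{\mathbf{E}\bigl[\,|c(x) - h(x)|\,\bigr]}{\gamma m}
  \;\le\; \frac{\gamma \delta m}{\gamma m}
  \;=\; \delta,
\qquad\text{and on every other point}\quad
|c(x) - h(x)| < \gamma m \le \gamma\, c(x)
  \;\Longrightarrow\; (1-\gamma)\,c(x) < h(x) < (1+\gamma)\,c(x).
```

So an additive $\ell_1$ guarantee of $\gamma\delta m$ yields a multiplicative $(1+\gamma)$-approximation on all but a $\delta$-fraction of the points.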

Now, we might not have such a lower bound on the value of . To make this idea work for all coverage functions, we show that any monotone submodular function can be decomposed into regions where it is relatively large (compared to the maximum in that region) with high probability. The decomposition is based on the following lemma: given a monotone submodular function with maximum value , either or there is an index such that for every satisfying . In the first case we can obtain multiplicative approximation from additive approximation using a slight refinement of our observation above (since the lower bound on only holds with probability ). In the second case we can reduce the problem to additive approximation on the half of the domain where . For the other half we use the same argument recursively. After levels of recursion at most fraction of the points will remain where we have no approximation. Those are included in the probability of error. We will need the following concentration inequality for 1-Lipschitz (with respect to the Hamming distance) submodular functions [9].

Another property of non-negative monotone submodular functions that we need is that their expectation is at least half the maximum value [22]. For the special case of coverage functions this lemma follows simply from the fact that the expectation of any non-empty monotone disjunction under the uniform distribution is at least 1/2.
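The disjunction fact is easy to verify by exhaustive enumeration: under the uniform distribution on the Boolean cube, a monotone disjunction over k variables has expectation exactly 1 - 2^(-k), which is at least 1/2 for k >= 1. A toy check:

```python
from itertools import product

def disjunction_expectation(n, S):
    """Exact expectation of the monotone disjunction OR_{i in S} x_i under
    the uniform distribution on {0,1}^n: equals 1 - 2**(-len(S))."""
    hits = sum(any(x[i] for i in S) for x in product((0, 1), repeat=n))
    return hits / 2 ** n
```

For example, `disjunction_expectation(4, [0, 2])` returns `0.75`, i.e. `1 - 2**-2`, and the value is at least `0.5` for any non-empty `S`.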

We now prove our lemma that lower bounds the relative value of a monotone submodular function.

Let equal the bit string that has in its coordinate and everywhere else. For any let denote the string such that for every . Suppose that for every , there exists such that and . By monotonicity of , this implies that for every , . Since is a submodular function, for any and such that , we have: This implies that is -Lipschitz. Then, is a -Lipschitz, non-negative submodular function. Also, by Lem. ?, and . Now, using Thm. ?, we obtain:
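Since the displayed bounds are elided in this copy, here is a brute-force check of the Lipschitz property on a toy coverage function (the weights and sets below are illustrative, not from the paper): flipping one bit changes the value by at most the largest single-coordinate value of the function.

```python
from itertools import product

def coverage_fn(weights, sets):
    """Coverage function over {0,1}^n: f(x) sums the weight w_u of every
    universe element u whose set A_u contains a coordinate i with x_i = 1."""
    def f(x):
        return sum(w for w, A in zip(weights, sets) if any(x[i] for i in A))
    return f

# Toy instance (illustrative): 3 universe elements, 3 coordinates.
weights = [0.5, 1.0, 0.25]
sets = [{0}, {0, 1}, {2}]
f, n = coverage_fn(weights, sets), 3

# Lipschitz constant: by submodularity, the value of f at a single
# coordinate bounds that coordinate's marginal contribution anywhere.
L = max(f([1 if j == i else 0 for j in range(n)]) for i in range(n))
assert all(abs(f(list(x)) - f([*x[:i], 1 - x[i], *x[i + 1:]])) <= L
           for x in product((0, 1), repeat=n) for i in range(n))
```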

Recall that for any set of variables and , is defined as the substring of that contains the bits in coordinates indexed by . We are now ready to describe our reduction that gives a PMAC algorithm for coverage functions.

Algorithm consists of a call to , where is a recursive procedure described below.

**Procedure on examples labeled by **:

If , then returns the hypothesis and halts.

Otherwise, computes a -approximation to the maximum of the target function (with confidence at least for to be defined later). As we show later, this can be done by drawing a sufficient number of random examples labeled by and choosing to be the maximum label. Thus, . If , **return** . Otherwise, set (note that, with probability at least , for every ). Estimate within an additive error of by with confidence at least . Then, .

If : run Algorithm from Thm. ? on random examples labeled by with accuracy and confidence (note that Algorithm from Thm. ? only gives confidence but the confidence can be boosted to using repetitions with standard hypothesis testing). Let be the hypothesis output by the algorithm.

**Return** hypothesis . If ,

Find such that for every such that with confidence at least . This can be done by drawing a sufficient number of random examples and checking the labels. If no such exists, we output . Otherwise, define to be the restriction of to where and . Run the algorithm from Thm. ? on examples labeled by with accuracy and confidence . Let be the hypothesis returned by the algorithm. Set .

Let be the restriction of to where . Run on examples labeled by and let be the hypothesis returned by the algorithm.

**Return** hypothesis defined by

The algorithm can simulate random examples labeled by (or ) by drawing random examples labeled by , selecting such that (or ) and removing the -th coordinate. Since bits will need to be fixed, the expected number of random examples required to simulate one example from any function in the run of is at most .
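The simulation described above is plain rejection sampling. A sketch, assuming examples arrive as `(x, f(x))` pairs and `fixed` maps coordinate indices to their required bits:

```python
def simulate_restricted(draw_example, fixed):
    """Draw examples labeled by f until one agrees with the bits in `fixed`
    (a dict i -> bit), then drop the fixed coordinates. With k fixed bits,
    the expected number of draws under the uniform distribution is 2**k."""
    while True:
        x, y = draw_example()
        if all(x[i] == b for i, b in fixed.items()):
            return [x[i] for i in range(len(x)) if i not in fixed], y
```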

We now prove the correctness of the algorithm assuming that all random estimations and runs of the PAC learning algorithm are successful. To see that one can estimate the maximum of a coverage function within a multiplicative factor of , recall that by Lem. ?, . Thus, for a randomly and uniformly chosen , with probability at least , . This means that random examples will suffice to get confidence .
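A sketch of the maximum-estimation step, assuming (as the argument above states) that a uniformly random point attains the required fraction of the maximum with some constant probability `p`; the default `p = 0.25` is illustrative, since the exact constant is elided in this copy.

```python
import math

def approx_max(draw_label, delta, p=0.25):
    """Take the largest label among m = O(log(1/delta)) random examples.
    If a random point reaches the desired fraction of the maximum with
    probability >= p, all m draws miss with probability <= (1 - p)**m, so
    m = ceil(ln(1/delta) / ln(1/(1 - p))) gives confidence 1 - delta."""
    m = math.ceil(math.log(1 / delta) / math.log(1 / (1 - p)))
    return max(draw_label() for _ in range(m))
```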

We now observe that if the condition in step holds then -multiplicatively approximates . To see this, first note that in this case, . Then, . By Thm. ?, . Then, by Markov’s inequality,

Let . By the same argument as in eq. (1), we get that

Therefore,

or, equivalently,

If the condition in step does not hold, then . Thus, , which by Lem. ? yields that there exists such that . Now, by drawing examples and choosing such that for all examples where , we can ensure that, with probability at least ,

Now, by the same analysis as in step 4, we obtain that satisfies .

Now, observe that the set of points in the domain can be partitioned into two disjoint sets.

The set such that for every , has fixed the value of the hypothesis given by based on some hypothesis returned by the PAC learning algorithm (Thm. ?) or when .

The set where the recursion has reached depth and step sets on every point in .

By the construction, the points in can be divided into disjoint sub-cubes such that in each of them, the conditional probability that the hypothesis we output does not satisfy the multiplicative guarantee is at most . Therefore, the hypothesis does not satisfy the multiplicative guarantee on at most fraction of the points in . It is easy to see that has probability mass at most . This is because and thus, when , the dimension of the subcube that is invoked on is at most . Thus, the total probability mass of points where the multiplicative approximation does not hold is at most .

We now bound the running time and sample complexity of the algorithm. First note that for some all the random estimations and runs of the PAC learning algorithm will be successful with probability at least (by union bound).

From Thm. ?, any run of the PAC learning algorithm in some recursive call to requires at most examples from its respective target function. Each such example can be simulated using examples labeled by . Thus, in total, in all recursive calls, examples will suffice.

Each run of the PAC learning algorithm requires