Teaching and compressing for low VC-dimension
Abstract
In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching, namely the quality of sample compression schemes and of teaching sets for classes of low VC-dimension. Let $C$ be a binary concept class of size $m$ and VC-dimension $d$. Prior to this work, the best known upper bounds for both parameters were $\log m$, while the best lower bounds are linear in $d$. We present significantly better upper bounds on both, as follows. Set $k = O(d 2^d \log \log m)$.
We show that there always exists a concept $c$ in $C$ with a teaching set (i.e. a list of $c$-labeled examples uniquely identifying $c$ in $C$) of size $k$. This problem was studied by Kuhlmann (1999). Our construction implies that the recursive teaching (RT) dimension of $C$ is at most $k$ as well. The RT-dimension was suggested by Zilles et al. and Doliwa et al. (2010). The same notion (under the name partial-ID width) was independently studied by Wigderson and Yehudayoff (2013). An upper bound on this parameter that depends only on $d$ is known just for the very simple case $d = 1$, and is open even for $d = 2$. We also make small progress towards this seemingly modest goal.
We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log(k)$ bits. Roughly speaking, given any list of $C$-labelled examples of arbitrary length, we can retain only $k$ labeled examples in a way that allows us to recover the labels of all other examples in the list, using the additional $k \log(k)$ information bits. This problem was first suggested by Littlestone and Warmuth (1986).
1 Introduction
The study of the mathematical foundations of learning and teaching has been very fruitful, revealing fundamental connections to various other areas of mathematics, such as geometry, topology, and combinatorics. Many key ideas and notions emerged from this study: Vapnik and Chervonenkis's VC-dimension [45], Valiant's seminal definition of PAC learning [44], Littlestone and Warmuth's sample compression schemes [32], Goldman and Kearns's teaching dimension [19], the recursive teaching dimension (RT-dimension, for short) [48, 12, 40], and more.
While it is known that some of these measures are tightly linked, the exact relationship between them is still not well understood. In particular, it is a long-standing question whether the VC-dimension can be used to give a universal bound on the size of sample compression schemes, or on the RT-dimension.
In this work, we make progress on these two questions. First, we prove that the RT-dimension of a boolean concept class $C$ of VC-dimension $d$ is upper bounded by $O(d\,2^d \log\log|C|)$. Second, we construct sample compression schemes of essentially the same size for such classes.
Our proofs are based on a similar technique of recursively applying Haussler's Packing Lemma to the dual class. This similarity provides another example of the informal connection between sample compression schemes and the RT-dimension, a connection that also appears in other works studying their relationship with the VC-dimension [12, 35, 9].
1.1 VC-dimension
VC-dimension and size.
A concept class over the universe $X$ is a set $C \subseteq \{0,1\}^X$. When $X$ is finite, we denote $n = |X|$. The VC-dimension of $C$, denoted $VC(C)$, is the maximum size of a shattered subset of $X$, where a set $Y \subseteq X$ is shattered (by $C$) if for every $Z \subseteq Y$ there is $c \in C$ so that $c(x) = 1$ for all $x \in Z$ and $c(x) = 0$ for all $x \in Y \setminus Z$.
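To make the definition concrete, here is a small brute-force sketch (our own illustration, not code from the paper; function names are ours) that checks shattering and computes the VC-dimension of a finite class given as 0/1 tuples:

```python
from itertools import combinations

def is_shattered(concepts, Y):
    """Y is shattered if every 0/1 pattern on Y is realized by some concept."""
    patterns = {tuple(c[i] for i in Y) for c in concepts}
    return len(patterns) == 2 ** len(Y)

def vc_dimension(concepts):
    """Brute-force VC-dimension of a class of concepts, each a 0/1 tuple."""
    n = len(concepts[0]) if concepts else 0
    d = 0
    for size in range(1, n + 1):
        if any(is_shattered(concepts, Y) for Y in combinations(range(n), size)):
            d = size
        else:
            break  # if no set of this size is shattered, no larger set is
    return d
```

For example, the class of singletons together with the empty set (discussed later in the paper) has VC-dimension 1, since no pair of points can realize the pattern (1, 1).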
The most basic result concerning VC-dimension is the Sauer–Shelah–Perles Lemma, which upper bounds $|C|$ in terms of $n$ and $VC(C)$. It has been independently proved several times, e.g. in [42].
Theorem 1.1 (Sauer–Shelah–Perles).
Let $C$ be a boolean concept class of VC-dimension $d$ over a universe of size $n$. Then,
$|C| \le \sum_{i=0}^{d} \binom{n}{i}.$
In particular, if $n \ge d$ then $|C| \le (en/d)^d$.
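The bound is easy to verify by brute force on small classes. The sketch below (our own illustration) compares $|C|$ against the Sauer–Shelah–Perles bound for the class of indicators of all subsets of size at most 2 of a 5-element universe, a class that attains the bound with equality:

```python
from itertools import combinations
from math import comb

def vc_dim(C):
    """Brute-force VC-dimension of a class of 0/1 tuples."""
    n = len(C[0])
    def shattered(Y):
        return len({tuple(c[i] for i in Y) for c in C}) == 2 ** len(Y)
    return max((s for s in range(1, n + 1)
                if any(shattered(Y) for Y in combinations(range(n), s))),
               default=0)

def sauer_shelah_bound(n, d):
    """sum_{i=0}^{d} C(n, i): the Sauer-Shelah-Perles upper bound on |C|."""
    return sum(comb(n, i) for i in range(d + 1))

# Indicators of all subsets of size <= 2 of a 5-point universe.
C = [tuple(int(i in S) for i in range(5))
     for r in range(3) for S in combinations(range(5), r)]
```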
VC-dimension and PAC learning.
The VC-dimension is one of the most basic complexity measures for concept classes. It is perhaps best known in the context of the PAC learning model. PAC learning was introduced in Valiant's seminal work [44] as a theoretical model for learning from random examples drawn from an unknown distribution (see the book [28] for more details).
A fundamental and well-known result of Blumer, Ehrenfeucht, Haussler, and Warmuth [8], which is based on an earlier work of Vapnik and Chervonenkis [45], states that the PAC learning sample complexity of a concept class is governed by its VC-dimension. The proof of this theorem uses Theorem 1.1 and an argument commonly known as double sampling (see Section A in the appendix for a short and self-contained description of this well-known argument).
VC-dimension and the metric structure.
Another fundamental result in this area is Haussler's [23] description of the metric structure of concept classes of low VC-dimension (see also the work of Dudley [14]). Roughly, it says that a concept class of VC-dimension $d$, when thought of as an $L_1$ metric space, behaves like a $d$-dimensional space, in the sense that the size of an $\epsilon$-separated set in it is at most roughly $(1/\epsilon)^d$. More formally, every probability distribution $\mu$ on $X$ induces the (pseudo-)metric
$dist_\mu(c, c') = \Pr_{x \sim \mu}[c(x) \neq c'(x)]$
on $C$.
A set $S \subseteq C$ is called $\epsilon$-separated with respect to $dist_\mu$ if for every two distinct concepts $c, c'$ in $S$ we have $dist_\mu(c, c') > \epsilon$. A set $A \subseteq C$ is called an $\epsilon$-approximating set for $C$ with respect to $dist_\mu$ if it is a maximal $\epsilon$-separated subset of $C$; by maximality, every $c \in C$ is within distance $\epsilon$ of some member of $A$.
An $\epsilon$-approximating set can be thought of as a metric approximation of the possibly complicated concept class $C$, and for many practical purposes it is a good enough substitute for $C$. Haussler proved that there are always small $\epsilon$-approximating sets.
Theorem 1.3 (Haussler).
Let $C$ be a concept class of VC-dimension $d$, let $\mu$ be a distribution on $X$, and let $\epsilon > 0$. If $S \subseteq C$ is $\epsilon$-separated with respect to $dist_\mu$ then $|S| \le e(d+1)(2e/\epsilon)^d$.
A proof of a weaker statement.
For $m = \lceil 2\ln|S|/\epsilon \rceil$, let $B$ be a multiset of $m$ independent samples from $\mu$. For every $c \neq c'$ in $S$,
$\Pr[c|_B = c'|_B] \le (1-\epsilon)^m < 1/|S|^2.$
The union bound implies that there is a choice of $B$ of size $m$ so that $|S|_B| = |S|$. Theorem 1.1 implies $|S| = |S|_B| \le (em/d)^d$. Thus, $|S| \le (2e\ln|S|/(\epsilon d))^d$ (for $m \ge d$), and rearranging gives a bound of the form $|S| \le (c \log(1/\epsilon)/\epsilon)^{cd}$ for an absolute constant $c$, weaker than Theorem 1.3 by a logarithmic factor. ∎
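The greedy construction implicit in the notion of a maximal $\epsilon$-separated set can be sketched as follows (our own illustration; `mu` is a probability vector over the finite universe):

```python
def dist(c1, c2, mu):
    """dist_mu(c1, c2) = Pr_{x ~ mu}[c1(x) != c2(x)]."""
    return sum(p for p, a, b in zip(mu, c1, c2) if a != b)

def greedy_approximating_set(C, mu, eps):
    """Greedily build a maximal eps-separated subset A of C; by maximality
    every concept of C is within distance eps of some member of A."""
    A = []
    for c in C:
        if all(dist(c, a, mu) > eps for a in A):
            A.append(c)
    return A
```

For instance, over the uniform distribution on three points, taking `eps = 0.4` keeps only concepts that pairwise differ in at least two coordinates.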
1.2 Teaching
Imagine a teacher that helps a student to learn a concept $c$ by picking insightful examples. The concept $c$ is known only to the teacher, but belongs to a class of concepts $C$ known to both the teacher and the student. The teacher carefully chooses a set of examples that is tailored for $c$, and then provides these examples to the student. Now, the student should be able to recover $c$ from these examples.
A central issue that is addressed in the design of mathematical teaching models is "collusion." Roughly speaking, a collusion occurs when the teacher and the student agree in advance on some unnatural encoding of information about $c$ using the bit description of the chosen examples, instead of using attributes that separate $c$ from other concepts. Many mathematical models for teaching were suggested: Shinohara and Miyano [43], Jackson and Tomkins [27], Goldman, Rivest and Schapire [21], Goldman and Kearns [19], Goldman and Mathias [20], Angluin and Krikis [2], Balbach [5], and Kobayashi and Shinohara [29]. We now discuss some of these models in more detail.
Teaching sets.
The first mathematical models for teaching [19, 43, 3] handle collusions in a fairly restrictive way, by requiring that the teacher provide a set of examples that uniquely identifies $c$ in $C$. Formally, this is captured by the notion of a teaching set, which was independently introduced by Goldman and Kearns [19], Shinohara and Miyano [43] and Anthony et al. [3]. A set $Y \subseteq X$ is a teaching set for $c$ in $C$ if for all $c' \neq c$ in $C$, we have $c'|_Y \neq c|_Y$. The teaching complexity in these models is captured by the hardest concept to teach, i.e., $\max_{c \in C} \min\{|Y| : Y \text{ is a teaching set for } c \text{ in } C\}$.
Teaching sets also appear in other areas of learning theory: Hanneke [22] used them in his study of the label complexity of active learning, and the authors of [47] used variants of them to design efficient algorithms for learning distributions from imperfect data.
Defining the teaching complexity using the hardest concept is often too restrictive. Consider, for example, the concept class consisting of all singletons and the empty set over a domain of size $n$. Its teaching complexity in these models is $n$, since the only teaching set for the empty set is all of $X$: any proper subset of $X$ leaves some singleton consistent with the examples. Thus, a fairly simple concept class has the maximum possible complexity.
Recursive teaching dimension.
Goldman and Mathias [20] and Angluin and Krikis [2] therefore suggested less restrictive teaching models, and more efficient teaching schemes were indeed discovered in these models. One approach, studied by Zilles et al. [48], Doliwa et al. [12], and Samei et al. [40], uses a natural hierarchy on the concept class which is defined as follows. The first layer in the hierarchy consists of all concepts whose teaching set has minimal size. Then, these concepts are removed, and the second layer consists of all concepts whose teaching set with respect to the remaining concepts has minimal size. Then, these concepts are removed, and so on, until all concepts are removed. The maximum size of a set that is chosen in this process is called the recursive teaching (RT) dimension. One way of thinking about this model is that the teaching process satisfies an Occam's razor-type rule of preferring simpler concepts. For example, the concept class consisting of singletons and the empty set, which was considered earlier, has recursive teaching dimension $1$: the first layer in the hierarchy consists of all singletons, which have teaching sets of size $1$. Once all singletons are removed, we are left with a concept class of size $1$, and in it the empty set has a teaching set of size $0$.
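The peeling process described above can be sketched directly (a brute-force illustration of ours, feasible only for tiny classes). On the singletons-plus-empty-set class it reports the maximum possible classical teaching complexity but recursive teaching dimension 1:

```python
from itertools import combinations

def min_teaching_set_size(c, C, n):
    """Size of a smallest Y of coordinates such that c is the only concept
    of C agreeing with c on Y."""
    others = [c2 for c2 in C if c2 != c]
    for size in range(n + 1):
        for Y in combinations(range(n), size):
            if all(any(c2[i] != c[i] for i in Y) for c2 in others):
                return size
    return n

def recursive_teaching_dimension(C):
    """Peel off, layer by layer, the concepts that are easiest to teach."""
    n = len(C[0])
    C, rt = list(C), 0
    while C:
        sizes = {c: min_teaching_set_size(c, C, n) for c in C}
        layer = min(sizes.values())
        rt = max(rt, layer)
        C = [c for c in C if sizes[c] > layer]  # remove the current layer
    return rt
```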
A similar notion to the RT-dimension was independently suggested in [47] under the terminology of partial IDs. There the focus was on getting a simultaneous upper bound on the size of the sets as well as the number of layers in the recursion, and it was shown that for any finite concept class both can be made at most $O(\log|C|)$. Motivation for this study comes from the population recovery learning problem defined in [15].
Previous results.
Doliwa et al. [12] and Zilles et al. [48] asked whether small VC-dimension implies small recursive teaching dimension. An equivalent question was asked 10 years earlier by Kuhlmann [30]. Since the VC-dimension does not increase when concepts are removed from the class, this question is equivalent to asking whether every class of small VC-dimension contains some concept with a small teaching set. Given the semantics of the recursive teaching dimension and the VC-dimension, an interpretation of this question is whether exact teaching is not much harder than approximate learning (i.e., PAC learning).
For infinite classes the answer to this question is negative: there is an infinite concept class with VC-dimension $1$ in which no concept has a finite teaching set. An example of such a class is $C = \{c_r : r \in \mathbb{R}\}$ over the universe of rational numbers, where $c_r$ is the indicator function of the set of rationals smaller than $r$. The VC-dimension of $C$ is $1$, but every teaching set for some $c_r$ must contain a sequence of rationals converging to $r$.
For finite classes this question is open. However, in some special cases it is known that the answer is affirmative. In [30] it is shown that if $C$ has VC-dimension $1$, then its recursive teaching dimension is also $1$. It is also known that if $C$ is a maximum class (i.e., it meets the bound of Theorem 1.1 with equality) then its recursive teaching dimension equals its VC-dimension [12, 40].
Our contribution.
Our first main result is the following general bound, which exponentially improves over the $\log|C|$ bound when the VC-dimension is small (the proof is given in Section 3).
Theorem 1.4 (RT-dimension).
Let $C$ be a concept class of VC-dimension $d$. Then there exists $c \in C$ with a teaching set of size at most $O(d\,2^d \log\log|C|)$.
It follows that the recursive teaching dimension of concept classes of VC-dimension $d$ is at most $O(d\,2^d \log\log|C|)$ as well.
1.3 Sample compression schemes
A fundamental and well-known statement in learning theory says that if the VC-dimension of a concept class is small, then any consistent algorithm (one that outputs a hypothesis in the class agreeing with all the examples it has seen) successfully PAC learns the class from a small number of examples.
Before giving the formal definition of compression schemes, let us consider a simple illustrative example. Assume we are interested in learning the concept class of intervals on the real line. We get a collection of 100 samples of the form $(x, y)$, where $x \in \mathbb{R}$ and $y \in \{0,1\}$ indicates whether $x$ belongs to the unknown interval. Can we retain just a few of the samples in a way that allows recovering the labels of all 100 of them? Here the answer is affirmative: keep only the leftmost and the rightmost positive samples (at most two samples). Every sampled point between them is labeled $1$, and every other sampled point is labeled $0$.
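In code, the intervals scheme might look as follows (a sketch under our reading of the example; it keeps at most two positive samples and uses no side information):

```python
def compress(sample):
    """sample: list of (x, y) with y = 1 iff x lies in the hidden interval.
    Keep only the leftmost and rightmost positive examples (if any)."""
    pos = [x for x, y in sample if y == 1]
    if not pos:
        return []                      # no positives: compress to nothing
    return [(min(pos), 1), (max(pos), 1)]

def reconstruct(compressed):
    """Return a hypothesis h(x) that labels the original sample correctly:
    negatives lie outside the span of the retained positive points."""
    if not compressed:
        return lambda x: 0             # empty interval
    lo, hi = compressed[0][0], compressed[-1][0]
    return lambda x: int(lo <= x <= hi)
```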
The formal definition.
Littlestone and Warmuth [32] formally defined sample compression schemes as follows. Let $C \subseteq \{0,1\}^X$ with $|X| = n$. For $k_1 \le k_2$, let $L_C(k_1, k_2)$ be the set of labelled samples from $C$ of sizes between $k_1$ and $k_2$; that is, pairs $(Y, y)$ with $Y \subseteq X$, $k_1 \le |Y| \le k_2$, and $y = c|_Y$ for some $c \in C$. A sample compression scheme for $C$ of size $k$, with additional information $Q$ (a finite set), consists of two maps $\kappa$ and $\rho$ for which the following hold:

($\kappa$) The compression map
$\kappa : L_C(1, n) \to L_C(0, k) \times Q$
takes $(Y, y)$ to $((Z, z), q)$ with $Z \subseteq Y$ and $z = y|_Z$.

($\rho$) The reconstruction map
$\rho : L_C(0, k) \times Q \to \{0,1\}^X$
is so that for all $(Y, y)$ in $L_C(1, n)$,
$\rho(\kappa(Y, y))|_Y = y.$

The size of the scheme is $k$.
Intuitively, the compression map takes a long list of samples $(Y, y)$ and encodes it as a short sub-list of samples $(Z, z)$ together with some small amount of side information $q \in Q$, which helps in the reconstruction phase. The reconstruction takes a short list of samples $(Z, z)$ and decodes it using the side information $q$, without any knowledge of $(Y, y)$, to a hypothesis in a way that essentially inverts the compression. Specifically, the following property must always hold: if the compression of $(Y', y')$ is the same as that of $(Y, y)$, then the reconstructed hypothesis is consistent with both samples.
A different perspective on the side information is as list decoding, in which the small set of labelled examples $(Z, z)$ is mapped to the set of hypotheses $\{\rho((Z, z), q) : q \in Q\}$, one of which is correct.
We note that it is not necessarily the case that the reconstructed hypothesis $h$ belongs to the original class $C$. All it has to satisfy is that for any $(Y, y)$ such that $\kappa(Y, y) = ((Z, z), q)$ we have $h|_Y = y$. Thus, $h$ has to be consistent only on the sampled coordinates that were compressed, and not elsewhere.
Let us consider a simple example of a sample compression scheme, to help digest the definition. Let $C$ be a concept class and let $r$ be the rank over, say, $GF(2)$ of the matrix $M$ whose rows correspond to the concepts in $C$. We claim that there is a sample compression scheme of size $r$ for $C$ with no side information. Indeed, for any $Y \subseteq X$, let $s(Y) \subseteq Y$ be a set of at most $r$ columns that span the columns of the matrix $M|_Y$ (the restriction of $M$ to the columns in $Y$). Given a sample $(Y, y)$, compress it to $(Z, y|_Z)$ for $Z = s(Y)$. The reconstruction map takes $(Z, z)$ to any concept $c \in C$ so that $c|_Z = z$. This sample compression scheme works since every two rows of $M|_Y$ that differ somewhere on $Y$ must already disagree on $s(Y)$: each column of $M|_Y$ is a linear combination of the columns in $s(Y)$.
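A sketch of this rank-based scheme over GF(2) (our own illustration; `gf2_spanning_columns` performs the greedy Gaussian elimination that picks the spanning set $s(Y)$):

```python
def gf2_spanning_columns(M, cols):
    """Return a subset Z of `cols` whose columns span, over GF(2), all the
    columns of M restricted to `cols` (greedy Gaussian elimination)."""
    basis, Z = [], []          # basis: list of (pivot index, reduced vector)
    for j in cols:
        v = [row[j] for row in M]
        for pivot, b in basis:
            if v[pivot]:
                v = [a ^ c for a, c in zip(v, b)]
        if any(v):             # independent of the current basis
            basis.append((v.index(1), v))
            Z.append(j)
    return Z

def compress(M, sample):
    """sample: list of (column, label) pairs consistent with some row of M.
    Keep only the labels on a spanning set of columns."""
    Z = set(gf2_spanning_columns(M, [j for j, _ in sample]))
    return [(j, y) for j, y in sample if j in Z]

def reconstruct(M, compressed):
    """Any row consistent with the compressed sample; rows that agree on a
    spanning set agree on every column it spans."""
    return next(row for row in M if all(row[j] == y for j, y in compressed))
```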
Connections to learning.
Sample compression schemes are known to yield practical learning algorithms (see e.g. [34]), and allow learning for multi-labelled concept classes [41].
They can also be interpreted as a formal manifestation of Occam's razor. Occam's razor is a philosophical principle attributed to William of Ockham from the late Middle Ages. It says that in the quest for an explanation or a hypothesis, one should prefer the simplest one which is consistent with the data. There are many works on the role of Occam's razor in learning theory; a partial list includes [32, 7, 16, 37, 26, 17, 13]. In the context of sample compression schemes, simplicity is captured by the size of the compression scheme. Interestingly, this manifestation of Occam's razor is provably useful [32]: sample compression schemes imply PAC learnability.
Theorem 1.5 (Littlestone–Warmuth).
Let $k \ge 0$, let $Q$ be a finite set, and let $\epsilon > 0$. Let $\mu$ be a distribution on $X$, let $c \in C$, and let $x_1, \ldots, x_m$ be $m > k$ independent samples from $\mu$. Let $Y = \{x_1, \ldots, x_m\}$ and $y = c|_Y$. Let $(\kappa, \rho)$ be a sample compression scheme of size $k$ for $C$ with additional information $Q$. Let $h = \rho(\kappa(Y, y))$. Then,
$\Pr\big[dist_\mu(h, c) > \epsilon\big] \le |Q| \sum_{i=0}^{k} \binom{m}{i} (1-\epsilon)^{m-k}.$
Proof sketch.
There are at most $\sum_{i=0}^{k} \binom{m}{i}$ subsets of $Y$ of size at most $k$, and there are $|Q|$ choices for the additional information $q \in Q$. Each choice of a compressed sample $(Z, z)$ and of $q$ yields a function $h_{Z,q} = \rho((Z, z), q)$ that is determined by the samples in $Z$ and by $q$. The reconstructed hypothesis $h$ is one of these functions. For each such function, the at least $m - k$ samples outside $Z$ are independent of it, and so if $dist_\mu(h_{Z,q}, c) > \epsilon$ then the probability that all these samples agree with $h_{Z,q}$ is less than $(1-\epsilon)^{m-k}$. The union bound completes the proof. ∎
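The union bound at the heart of this argument can be written out as follows (our reconstruction of the standard Littlestone–Warmuth calculation; $h_{Z,q}$ denotes the hypothesis reconstructed from a compressed sample on $Z \subseteq Y$ with side information $q$):

```latex
\Pr_{x_1,\dots,x_m \sim \mu}\bigl[\operatorname{dist}_\mu(h, c) > \epsilon\bigr]
  \;\le\; \sum_{\substack{Z \subseteq \{x_1,\dots,x_m\} \\ |Z| \le k}}
          \;\sum_{q \in Q}
          \Pr\bigl[\text{all samples outside } Z \text{ agree with } h_{Z,q}\bigr]
  \;\le\; |Q| \sum_{i=0}^{k} \binom{m}{i} (1-\epsilon)^{m-k}.
```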
The sample complexity of PAC learning is essentially the VC-dimension. Thus, in light of Theorem 1.5, we expect the VC-dimension to bound from below the size of sample compression schemes. Indeed, [17] proved that there are concept classes of VC-dimension $d$ for which any sample compression scheme has size at least $d$.
This is part of the motivation for the following basic question, asked by Littlestone and Warmuth [32] nearly 30 years ago: Does a concept class of VC-dimension $d$ have a sample compression scheme of size depending only on $d$ (and not on the universe size)?
In fact, unlike the VC-dimension, the definition of sample compression schemes, as well as the fact that they imply PAC learnability, naturally generalizes to multiclass classification problems [41]. Thus, Littlestone and Warmuth's question above can be seen as the boolean instance of a much broader question: Is it true that the size of an optimal sample compression scheme for a given concept class (not necessarily binary-labeled) matches the sample complexity of PAC learning of this class?
Previous constructions.
Floyd [16] and Floyd and Warmuth [17] constructed sample compression schemes of size $\log|C|$. The construction in [17] uses a transformation that converts certain online learning algorithms into compression schemes. Helmbold and Warmuth [26] and Freund [18] showed how to compress a sample of size $m$ to a sample of size $O(\log m)$ using some side information, for classes of constant VC-dimension (the implicit constant in the $O(\cdot)$ depends on the VC-dimension).
In a long line of works, several interesting compression schemes for special cases were constructed. A partial list includes Helmbold et al. [25], Floyd and Warmuth [17], Ben-David and Litman [6], Chernikov and Simon [10], Kuzmin and Warmuth [31], Rubinstein et al. [38], Rubinstein and Rubinstein [39], Livni and Simon [33] and more. These works provided connections between compression schemes and geometry, topology and model theory.
Our contribution.
Here we make the first quantitative progress on this question since the work of Floyd [16]. The following theorem shows that low VC-dimension implies the existence of relatively efficient compression schemes. The constructive proof is provided in Section 4.
Theorem 1.6 (Sample compression scheme).
If $C$ has VC-dimension $d$ then it has a sample compression scheme of size $k$ with additional information $Q$, where $k = O(d\,2^d \log\log|C|)$ and $\log|Q| = O(k \log k)$.
Subsequent to this paper, the first and the last authors improved this bound [35], showing that any concept class of VC-dimension $d$ has a sample compression scheme of size at most $2^{O(d)}$. The techniques used in [35] differ from the techniques we use in this paper. In particular, our scheme relies on Haussler's Packing Lemma (Theorem 1.3) and recursion, while the scheme in [35] relies on von Neumann's minimax theorem [36] and the $\epsilon$-approximation theorem [45, 24], which follows from the double-sampling argument of [45]. Thus, despite the fact that our scheme is weaker than the one in [35], it provides a different angle on sample compression, which may be useful in further improving the exponential dependence on the VC-dimension to an optimal linear dependence, as conjectured by Floyd and Warmuth [17, 46].
1.4 Discussion and open problems
This work provides relatively efficient constructions of teaching sets and sample compression schemes. However, the exact relationship between VCdimension, sample compression scheme size, and the RTdimension remains unknown. Is there always a concept with a teaching set of size depending only on the VCdimension? (The interesting case is finite concept classes, as mentioned above.) Are there always sample compression schemes of size linear (or even polynomial) in the VCdimension?
The simplest case that is still open is VC-dimension $2$. One can refine this case even further. VC-dimension $2$ means that on any three coordinates $x_1, x_2, x_3$, the projection of $C$ has at most $7$ patterns. A more restricted family is that of $(3,6)$ concept classes, for which on any three coordinates there are at most $6$ patterns. We can show that the recursive teaching dimension of $(3,6)$ classes is at most $3$.
Lemma 1.7.
Let $C$ be a finite $(3,6)$ concept class. Then there exists some $c \in C$ with a teaching set of size at most $3$.
Proof.
Assume that $C \subseteq \{0,1\}^X$ with $|C| \ge 2$. If $C$ has VC-dimension at most $1$ then there exists $c \in C$ with a teaching set of size at most $1$ (see [30, 1]). Therefore, assume that the VC-dimension of $C$ is $2$. Every shattered pair $\{x_1, x_2\} \subseteq X$ partitions $C$ into $4$ nonempty sets:
$C_{b_1, b_2} = \{c \in C : c(x_1) = b_1, c(x_2) = b_2\}$
for $b_1, b_2 \in \{0,1\}$. Pick a shattered pair $\{x_1, x_2\}$ and $b_1, b_2$ for which the size of $C_{b_1, b_2}$ is minimal. To simplify notation, we denote $C_{b_1, b_2}$ simply by $C'$.
We prove below that $C'$ has VC-dimension at most $1$. This completes the proof, since then there is some $c \in C'$ and some $x_3$ such that $\{x_3\}$ is a teaching set for $c$ in $C'$. Therefore, $\{x_1, x_2, x_3\}$ is a teaching set for $c$ in $C$.
First, a crucial observation is that since $C$ is a $(3,6)$ class, no pair $\{y_1, y_2\}$ is shattered by both $C'$ and $C \setminus C'$. Indeed, if $C'$ shatters $\{y_1, y_2\}$ then $C$ has at least $4$ patterns on each of $\{x_1, y_1, y_2\}$ and $\{x_2, y_1, y_2\}$, all with the fixed value on the first coordinate. If in addition $C \setminus C'$ shatters $\{y_1, y_2\}$ then $C$ has at least $7$ patterns on $\{x_1, y_1, y_2\}$ or on $\{x_2, y_1, y_2\}$, contradicting the assumption that $C$ is a $(3,6)$ class.
Now, assume towards contradiction that $C'$ shatters some pair $\{y_1, y_2\}$. Thus, $\{y_1, y_2\}$ is not shattered by $C \setminus C'$, which means that there is some pattern $(a_1, a_2)$ so that $C'' = \{c \in C : c(y_1) = a_1, c(y_2) = a_2\}$ is contained in $C'$. This implies that $C''$ is a nonempty proper subset of $C'$ over the shattered pair $\{y_1, y_2\}$, contradicting the minimality of $C'$. ∎
2 The dual class
We shall repeatedly use the dual concept class of $C$ and its properties. The dual concept class $C^* \subseteq \{0,1\}^C$ of $C$ is defined by $C^* = \{x^* : x \in X\}$, where $x^* : C \to \{0,1\}$ is the map so that $x^*(c) = 1$ iff $c(x) = 1$. If we think of $C$ as a binary matrix whose rows are the concepts in $C$, then $C^*$ corresponds to the set of distinct rows of the transposed matrix (so it may be that $|C^*| < |X|$).
We use the following well-known property (see [4]).
Claim 2.1 (Assouad).
If the VC-dimension of $C$ is $d$ then the VC-dimension of $C^*$ is at most $2^{d+1}$.
Proof sketch.
If the VC-dimension of $C^*$ is at least $2^{d+1}$ then in the matrix representing $C$ there are $2^{d+1}$ rows that are shattered, and among these rows there are $d+1$ columns that are shattered, so the VC-dimension of $C$ is at least $d+1$. ∎
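The claim is easy to check by brute force on small examples (our own sketch, not code from the paper):

```python
from itertools import combinations

def vc(C):
    """Brute-force VC-dimension of a class given as a list of 0/1 tuples."""
    if not C:
        return 0
    n = len(C[0])
    def shattered(Y):
        return len({tuple(c[i] for i in Y) for c in C}) == 2 ** len(Y)
    return max((s for s in range(1, n + 1)
                if any(shattered(Y) for Y in combinations(range(n), s))),
               default=0)

def dual(C):
    """One dual concept per distinct column of the matrix whose rows are C."""
    n = len(C[0])
    return list({tuple(c[j] for c in C) for j in range(n)})
```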
We also define the dual approximating set (recall the definition of $dist_\mu$ from Section 1.1). Denote by $C^*(\epsilon)$ a fixed $\epsilon$-approximating set for $C^*$ with respect to $dist_u$, where $u$ is the uniform distribution on $C$.
3 Teaching sets
In this section we prove Theorem 1.4. The high-level idea is to use Theorem 1.3 and Claim 2.1 to identify two points $x \neq x'$ in $X$ so that the set of concepts $c \in C$ with $c(x) \neq c(x')$ is nonempty but much smaller than $C$, add $x, x'$ to the teaching set, and continue inductively.
Proof of Theorem 1.4.
For classes of VC-dimension $1$ there is always a concept with a teaching set of size $1$, see e.g. [12]. We may therefore assume that $d \ge 2$.
We show that if $|C|$ is sufficiently large as a function of $d$, then there exist $x \neq x'$ in $X$ such that
(1) $0 < |C_{x \neq x'}| \le \epsilon\,|C|$, where $C_{x \neq x'} = \{c \in C : c(x) \neq c(x')\}$ and $\epsilon = \Theta_d\big(|C|^{-1/(d\,2^{d+1})}\big)$.
From this the theorem follows, since if we iteratively add such a pair $x, x'$ to the teaching set and restrict ourselves to $C_{x \neq x'}$, then after at most $O(d\,2^d \log\log|C|)$ iterations, the size of the remaining class is reduced below the threshold for which (1) is guaranteed. At this point we can identify a unique concept by adding at most $O(d\,2^d)$ additional indices to the teaching set, using the halving argument of [12, 47]. This gives a teaching set of size at most $O(d\,2^d \log\log|C|)$ for some $c \in C$, as required.
In order to prove (1), note that $|C_{x \neq x'}| = dist_u(x^*, x'^*) \cdot |C|$, where $u$ is the uniform distribution on $C$; it is thus enough to show that there exist two distinct concepts in $C^*$ whose distance is positive but at most $\epsilon$. Assume towards contradiction that the distance between every two concepts in $C^*$ is more than $\epsilon$, and assume without loss of generality that $|C^*| = |X|$ (that is, all the columns in $C$ are distinct). By Claim 2.1, the VC-dimension of $C^*$ is at most $2^{d+1}$. Theorem 1.3 thus implies that
(2) $|X| = |C^*| \le e(2^{d+1}+1)(2e/\epsilon)^{2^{d+1}} < \frac{d}{e}\,|C|^{1/d},$
where the last inequality follows from the definition of $\epsilon$ and the assumption on the size of $C$. Therefore, we arrive at the following contradiction:
$|C| \le (e|X|/d)^d$  (by Theorem 1.1, since $VC(C) = d$)
$< \big((e/d) \cdot (d/e)\,|C|^{1/d}\big)^d$  (by Equation (2) above)
$= |C|$  (by the definition of $\epsilon$)
∎
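The recursive construction in the proof can be sketched as follows (an illustrative implementation of ours: the `threshold` parameter and the brute-force finishing step stand in for the halving argument, and we search directly for the pair of coordinates with the smallest nonempty disagreement set instead of going through the dual metric):

```python
from itertools import combinations

def teaching_set_by_close_columns(C, threshold=4):
    """While the class is large, pick coordinates x, x' on which few (but at
    least one) concepts disagree, keep only the disagreeing concepts, and add
    x, x' to the teaching set.  Concepts removed this way agree on {x, x'},
    so the target's labels on x, x' rule them all out.  Finish by brute force."""
    C = list(C)
    n = len(C[0])
    Y = []
    while len(C) > threshold:
        def split_size(p):
            s = sum(1 for c in C if c[p[0]] != c[p[1]])
            return s if s > 0 else len(C) + 1   # the split must be nonempty
        x, x2 = min(combinations(range(n), 2), key=split_size)
        C = [c for c in C if c[x] != c[x2]]
        Y += [x, x2]
    # Brute force: extend Y so that some remaining concept is unique in C.
    c = C[0]
    others = [c2 for c2 in C if c2 != c]
    for size in range(n + 1):
        for Z in combinations(range(n), size):
            if all(any(c2[i] != c[i] for i in Z) for c2 in others):
                return c, Y + list(Z)
```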
4 Sample compression schemes
In this section we prove Theorem 1.6. The theorem statement and the definition of sample compression schemes appear in Section 1.3.
While the details are somewhat involved, due to the complexity of the definitions, the high-level idea may be (somewhat simplistically) summarized as follows.
For an appropriate choice of $\epsilon$, we pick an $\epsilon$-approximating set of the dual class $C^*$. It is helpful to think of this set as a subset $T$ of the domain $X$. Now, either $T$ faithfully represents the sample $(Y, y)$ or it does not (we do not formally define "faithfully represents" here). We identify the following win-win situation: in both cases, we can reduce the compression task to one over a much smaller class of concepts, similarly to the argument for teaching sets in Section 3. This yields the same double-logarithmic behavior.
In the case that $T$ faithfully represents $(Y, y)$, Case 2 below, we recursively compress within the smaller class. In the unfaithful case, Case 1 below, we recursively compress within a (small) set of concepts for which disagreement occurs on some point of $T$, just as in Section 3. In both cases, we have to extend the recursive solution, and the cost is adding one sample point to the compressed sample (and a small amount of additional information encoding whether Case 1 or Case 2 occurred).
The compression we describe is defined inductively, and has the following additional structure. Let $((Z, z), q)$ be in the image of the compression map $\kappa$. The information $q$ is of the form $(q_1, f)$, where $q_1$ is an integer bounded in terms of the size of the scheme, and $f$ is a partial one-to-one function defined on a subset of the points of the compressed sample.
The rest of this section is organized as follows. In Section 4.1 we define the compression map $\kappa$. In Section 4.2 we give the reconstruction map $\rho$. The proof of correctness is given in Section 4.3 and the upper bound on the size of the compression is calculated in Section 4.4.
4.1 Compression map: defining $\kappa$
Let $C$ be a concept class. The compression map $\kappa$ is defined by induction on the size of the class.
In what follows we shall routinely use the dual approximating set $C^*(\epsilon)$. There are several possible $\epsilon$-approximating sets, and so we fix one of them, say, the one obtained by greedily adding columns to the separated set, starting from the first column.
Induction base.
The base of the induction applies to all concept classes whose size is at most a fixed threshold. In this case, we use the compression scheme of Floyd and Warmuth [16, 17], which has size $\log|C|$. This compression scheme has no additional information. Therefore, to maintain the structure of our compression scheme, we append to it redundant additional information by setting $q_1 = 0$ and $f$ to be the empty function.
Induction step.
Let $C$ be so that its size exceeds the base-case threshold. Let $\epsilon$ be so that
(3) the sizes of the classes in the two recursive cases below are balanced.
This choice balances the recursive size. By Claim 2.1, the VC-dimension of $C^*$ is at most $2^{d+1}$ (recall that $d = VC(C)$). Theorem 1.3 thus implies that
(4) $|C^*(\epsilon)| \le e(2^{d+1}+1)(2e/\epsilon)^{2^{d+1}}$
(where the inequality follows since $C^*(\epsilon)$ is an $\epsilon$-separated subset of $C^*$; further bounds on it follow from the definition of $\epsilon$ and Theorem 1.1).
Let $T = C^*(\epsilon)$, viewed as a subset of $X$.
Every $x \in X$ has a rounding in $T$: an element $r(x) \in T$ with $dist_u(x^*, r(x)^*) \le \epsilon$, which exists since $C^*(\epsilon)$ is an $\epsilon$-approximating set. Distinguish between two cases:

(Case 1) There exist $x \in Y$ and $x' \in T$ such that $r(x) = x'$ and the labels of $x$ and $x'$ disagree on the sample.

(Case 2) For all $x \in Y$ and $x' \in T$ such that $r(x) = x'$, the labels of $x$ and $x'$ agree on the sample.
The following lemma summarizes two key properties of the compression scheme. The correctness of this lemma follows directly from the definitions of Cases 1 and 2 above.
Lemma 4.1.
Let $(Y, y) \in L_C(1, n)$ and let $((Z, z), (q_1, f))$ be the compression of $(Y, y)$ described above. The following properties hold:

1. $f$ is defined on $x'$ iff $x' \in T$ and there exists $x \in Y$ such that $r(x) = x'$ and the labels of $x$ and $x'$ disagree on the sample.

2. $f$ is not defined on $x'$ iff for all $x \in Y$ such that $r(x) = x'$, it holds that the labels of $x$ and $x'$ agree on the sample.
4.2 Reconstruction map: defining $\rho$
The reconstruction map $\rho$ is similarly defined by induction on the size of the class.
Let $C$ be a concept class and let $((Z, z), (q_1, f))$ be in the image of the compression map $\kappa$.
Induction base.
In the induction base, output the hypothesis reconstructed by the Floyd–Warmuth scheme.
Induction step.
Distinguish between two cases:

Case 1: $f$ is defined on the relevant point of $T$.
Let $x'$ be this point and let $x = f(x')$. Denote by $C_1$ the class from Case 1 of the definition of $\kappa$ (the concepts on which the labels of $x$ and $x'$ disagree).
Apply $\rho$ recursively on $C_1$. Let $h_1$ be the result. Output $h$, where $h = h_1$.

Case 2: $f$ is not defined on the relevant point of $T$.
Consider $T = C^*(\epsilon)$. For each $x \in X$, pick $r(x)$ to be an element of $T$ such that $dist_u(x^*, r(x)^*) \le \epsilon$. Let $C_2$ be the class from Case 2 of the definition of $\kappa$.
Apply $\rho$ recursively on $C_2$ and let $h_2$ be the result. Output $h$ satisfying $h(x) = h_2(r(x))$ for all $x \in X$.
4.3 Correctness
The following lemma yields the correctness of the compression scheme.
Lemma 4.2.
Let $C$ be a concept class, $(Y, y) \in L_C(1, n)$, and $((Z, z), q) = \kappa(Y, y)$. Then,

1. $Z \subseteq Y$ and $z = y|_Z$, and

2. $\rho((Z, z), q)|_Y = y$.
Proof.
We proceed by induction on the size of the concept class. In the base case $C$ is small, and the lemma follows from the correctness of Floyd and Warmuth's compression scheme (this is the only case in which the appended additional information is empty). In the induction step, assume that $|C|$ exceeds the base-case threshold. We distinguish between two cases:

Case 1: $f$ is defined on the relevant point $x' \in T$.
Let $x = f(x')$. This case corresponds to Case 1 in the definition of $\kappa$ and Case 1 in the definition of $\rho$. By Item 1 of Lemma 4.1, $x' \in T$ and there exists $x \in Y$ such that $r(x) = x'$ and the labels of $x$ and $x'$ disagree on the sample. Let $C_1$ be the class defined in Case 1 in the definition of $\kappa$. Since $|C_1| < |C|$, we know that $\kappa$ and $\rho$ on $C_1$ satisfy the induction hypothesis. Let $((Z_1, z_1), q_1)$ and $h_1$
be the resulting compression and reconstruction. Since we are in Case 1 in the definitions of $\kappa$ and of $\rho$, the compression $\kappa(Y, y)$ and the output $h$ are obtained from $((Z_1, z_1), q_1)$ and $h_1$ as described above.
Consider item 1 in the conclusion of the lemma. By the definition of $\kappa$ in this case, and by the induction hypothesis applied to $C_1$, we get $Z \subseteq Y$ and $z = y|_Z$. Therefore, item 1 holds.

Case 2: $f$ is not defined on the relevant point of $T$.
This corresponds to Case 2 in the definition of $\kappa$ and Case 2 in the definition of $\rho$. Let $C_2$ be the class from Case 2 in the definition of $\kappa$. Since $|C_2| < |C|$, we know that $\kappa$ and $\rho$ on $C_2$ satisfy the induction hypothesis. Let $((Z_2, z_2), q_2)$ and $h_2$
be as defined in Case 2 in the definitions of $\kappa$ and of $\rho$. By construction, $\kappa(Y, y)$ and $h$ are obtained from them as described above.
Consider item 1 in the conclusion of the lemma. Let $x \in Z$. By the induction hypothesis applied to $C_2$, $x$ belongs to the recursively compressed sample. Thus, $x$ corresponds to $r(x'')$ for some sampled point $x'' \in Y$. Since the range of $r$ is $T$, it follows that $x$ is a sampled point whose rounding was retained. This shows that $Z \subseteq Y$ and $z = y|_Z$.