Teaching and compressing for low VC-dimension

(A preliminary version of this work, combined with the paper “Sample compression schemes for VC classes” by the first and the last authors, was published in the proceedings of FOCS’15.)


Abstract

In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching, namely the quality of sample compression schemes and of teaching sets for classes of low VC-dimension. Let $C$ be a binary concept class of size $n = |C|$ and VC-dimension $d$. Prior to this work, the best known upper bounds for both parameters were $O(\log n)$, while the best lower bounds are linear in $d$. We present significantly better upper bounds on both as follows. Set $k = O\!\left(d\,2^{d}\log\log n\right)$.

We show that there always exists a concept $c$ in $C$ with a teaching set (i.e. a list of $c$-labeled examples uniquely identifying $c$ in $C$) of size $k$. This problem was studied by Kuhlmann (1999). Our construction implies that the recursive teaching (RT) dimension of $C$ is at most $k$ as well. The RT-dimension was suggested by Zilles et al. and Doliwa et al. (2010). The same notion (under the name partial-ID width) was independently studied by Wigderson and Yehudayoff (2013). An upper bound on this parameter that depends only on $d$ is known just for the very simple case $d = 1$, and is open even for $d = 2$. We also make small progress towards this seemingly modest goal.

We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log k$ bits. Roughly speaking, given any list of $C$-labelled examples of arbitrary length, we can retain only $k$ labeled examples in a way that allows to recover the labels of all other examples in the list, using additional $k \log k$ information bits. This problem was first suggested by Littlestone and Warmuth (1986).



1 Introduction

The study of mathematical foundations of learning and teaching has been very fruitful, revealing fundamental connections to various other areas of mathematics, such as geometry, topology, and combinatorics. Many key ideas and notions emerged from this study: Vapnik and Chervonenkis’s VC-dimension [45], Valiant’s seminal definition of PAC learning [44], Littlestone and Warmuth’s sample compression schemes [32], Goldman and Kearns’s teaching dimension [19], recursive teaching dimension (RT-dimension, for short) [48, 12, 40] and more.

While it is known that some of these measures are tightly linked, the exact relationship between them is still not well understood. In particular, it is a long standing question whether the VC-dimension can be used to give a universal bound on the size of sample compression schemes, or on the RT-dimension.

In this work, we make progress on these two questions. First, we prove that the RT-dimension of a boolean concept class $C$ having VC-dimension $d$ is upper bounded by $O\!\left(d\,2^{d}\log\log|C|\right)$. Secondly, we give a sample compression scheme of size $k = O\!\left(d\,2^{d}\log\log|C|\right)$ that uses $O(k\log k)$ bits of additional information. Both results were subsequently improved to bounds that are independent of the size of the concept class [35, 9].

Our proofs are based on a similar technique of recursively applying Haussler’s Packing Lemma on the dual class. This similarity provides another example of the informal connection between sample compression schemes and RT-dimension. This connection also appears in other works that study their relationship with the VC-dimension [12, 35, 9].

1.1 VC-dimension

VC-dimension and size.

A concept class $C$ over the universe $X$ is a set $C \subseteq \{0,1\}^X$. When $X$ is finite, we denote $|X|$ by $n$. The VC-dimension of $C$, denoted $\mathrm{VC}(C)$, is the maximum size of a shattered subset of $X$, where a set $Y \subseteq X$ is shattered if for every $Z \subseteq Y$ there is $c \in C$ so that $c(x) = 1$ for all $x \in Z$ and $c(x) = 0$ for all $x \in Y \setminus Z$.

The most basic result concerning VC-dimension is the Sauer-Shelah-Perles Lemma, which upper bounds $|C|$ in terms of $|X|$ and $\mathrm{VC}(C)$. It has been independently proved several times, e.g. in [42].

Theorem 1.1 (Sauer-Shelah-Perles).

Let $C \subseteq \{0,1\}^X$ be a boolean concept class with VC-dimension $d$. Then,

$$|C| \le \sum_{i=0}^{d} \binom{|X|}{i}.$$

In particular, if $|X| = n \ge d \ge 1$ then $|C| \le \left(\frac{en}{d}\right)^{d}$.
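To make these definitions concrete, the following small Python program (ours, for illustration only; the helper names is_shattered, vc_dimension and sauer_bound are not from the paper) computes the VC-dimension of a tiny concept class by brute force and checks the Sauer-Shelah-Perles bound on it.

from itertools import combinations, product
from math import comb

def is_shattered(concepts, coords):
    # `coords` is shattered iff all 2^|coords| patterns appear on it.
    patterns = {tuple(c[i] for i in coords) for c in concepts}
    return len(patterns) == 2 ** len(coords)

def vc_dimension(concepts):
    # Brute-force VC-dimension of a class of 0/1 tuples (exponential time).
    n = len(next(iter(concepts)))
    d = 0
    for size in range(1, n + 1):
        if any(is_shattered(concepts, S) for S in combinations(range(n), size)):
            d = size
    return d

def sauer_bound(n, d):
    # Right-hand side of the Sauer-Shelah-Perles bound: sum_{i <= d} C(n, i).
    return sum(comb(n, i) for i in range(d + 1))

if __name__ == "__main__":
    # All concepts on 5 points with at most 2 ones: VC-dimension 2, 16 concepts.
    C = {c for c in product((0, 1), repeat=5) if sum(c) <= 2}
    d = vc_dimension(C)
    print(d, len(C), sauer_bound(5, d))  # 2 16 16 -- the bound is met with equality here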

VC-dimension and PAC learning.

The VC-dimension is one of the most basic complexity measures for concept classes. It is perhaps mostly known in the context of the PAC learning model. PAC learning was introduced in Valiant’s seminal work [44] as a theoretical model for learning from random examples drawn from an unknown distribution (see the book [28] for more details).

A fundamental and well-known result of Blumer, Ehrenfeucht, Haussler, and Warmuth [8], which is based on an earlier work of Vapnik and Chervonenkis [45], states that PAC learning sample complexity is equivalent to VC-dimension. The proof of this theorem uses Theorem 1.1 and an argument commonly known as double sampling (see Section A in the appendix for a short and self contained description of this well known argument).

Theorem 1.2 ([45],[8]).

Let $X$ be a set and $C \subseteq \{0,1\}^X$ be a concept class of VC-dimension $d$. Let $\mu$ be a distribution over $X$. Let $\epsilon, \delta \in (0,1)$ and $m$ an integer satisfying $m \ge \Omega\!\left(\frac{d\log(1/\epsilon) + \log(1/\delta)}{\epsilon}\right)$. Let $c \in C$ and let $x_1, \ldots, x_m$ be a multiset of independent samples from $\mu$. Then, the probability that there is $c' \in C$ so that $c'(x_i) = c(x_i)$ for all $i$ but $\mu(\{x : c'(x) \neq c(x)\}) > \epsilon$ is at most $\delta$.

VC-dimension and the metric structure.

Another fundamental result in this area is Haussler’s [23] description of the metric structure of concept classes with low VC-dimension (see also the work of Dudley [14]). Roughly, it says that a concept class $C$ of VC-dimension $d$, when thought of as an $L_1$ metric space, behaves like a $d$-dimensional space in the sense that the size of an $\epsilon$-separated set in $C$ is at most roughly $(1/\epsilon)^{d}$. More formally, every probability distribution $\mu$ on $X$ induces the (pseudo) metric

$$\mathrm{dist}_\mu(c, c') = \mu\big(\{x \in X : c(x) \neq c'(x)\}\big)$$

on $C$. A set $S \subseteq C$ is called $\epsilon$-separated with respect to $\mu$ if for every two distinct concepts $c, c'$ in $S$ we have $\mathrm{dist}_\mu(c, c') > \epsilon$. A set $S \subseteq C$ is called an $\epsilon$-approximating set for $C$ with respect to $\mu$ if it is a maximal $\epsilon$-separated set with respect to $\mu$. The maximality of $S$ implies that for every $c \in C$ there is some rounding $r(c)$ in $S$ so that $r(c)$ is a good approximation to $c$, that is, $\mathrm{dist}_\mu(c, r(c)) \le \epsilon$. We call $r(c)$ a rounding of $c$ in $S$.

An approximating set can be thought of as a metric approximation of the possibly complicated concept class , and for many practical purposes it is a good enough substitute for . Haussler proved that there are always small approximating sets.
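As a concrete rendering of these definitions, here is a short Python sketch (ours, not from the paper) that builds an $\epsilon$-approximating set greedily: scan the concepts and keep each one that is more than $\epsilon$-far, under $\mathrm{dist}_\mu$, from everything kept so far. The result is a maximal $\epsilon$-separated set, so every concept has a rounding within distance $\epsilon$. The names dist_mu, approximating_set and rounding are ours.

def dist_mu(c1, c2, mu):
    # The (pseudo) metric dist_mu(c1, c2) = mu({x : c1(x) != c2(x)}).
    return sum(p for x, p in enumerate(mu) if c1[x] != c2[x])

def approximating_set(concepts, mu, eps):
    # Greedy maximal eps-separated subset: every concept in `concepts`
    # is within dist_mu <= eps of some member (its "rounding").
    S = []
    for c in concepts:
        if all(dist_mu(c, s, mu) > eps for s in S):
            S.append(c)
    return S

def rounding(c, S, mu):
    # A rounding of c in the approximating set S.
    return min(S, key=lambda s: dist_mu(c, s, mu))

if __name__ == "__main__":
    # Uniform distribution over 4 points; threshold concepts on a line.
    mu = [0.25, 0.25, 0.25, 0.25]
    C = [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)]
    S = approximating_set(C, mu, eps=0.3)
    print(S)                                # a 0.3-separated subset of C
    print(rounding((1, 1, 1, 0), S, mu))    # its nearest kept concept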

Theorem 1.3 (Haussler).

Let $C$ be a concept class with VC-dimension $d$. Let $\mu$ be a distribution on $X$. Let $\epsilon > 0$. If $S \subseteq C$ is $\epsilon$-separated with respect to $\mu$ then

$$|S| \le e(d+1)\left(\frac{2e}{\epsilon}\right)^{d}.$$

A proof of a weaker statement.

For $k$ to be chosen below, let $x_1, \ldots, x_k$ be independent samples from $\mu$. For every two distinct $c, c'$ in $S$,

$$\Pr\big[\,c(x_i) = c'(x_i) \text{ for all } i \le k\,\big] \le (1-\epsilon)^{k} \le e^{-\epsilon k}.$$

The union bound implies that there is a choice of $x_1, \ldots, x_k$ of size $k = O(\epsilon^{-1}\log|S|)$ so that every two distinct concepts in $S$ disagree on some $x_i$. Theorem 1.1 implies $|S| = \big|S|_{\{x_1, \ldots, x_k\}}\big| \le (ek/d)^{d}$. Thus, $|S| \le \left(O\!\left(\frac{\log|S|}{\epsilon}\right)\right)^{d}$. ∎

1.2 Teaching

Imagine a teacher that helps a student to learn a concept by picking insightful examples. The concept is known only to the teacher, but belongs to a class of concepts known to both the teacher and the student. The teacher carefully chooses a set of examples that is tailored for , and then provides these examples to the student. Now, the student should be able to recover from these examples.

A central issue that is addressed in the design of mathematical teaching models is “collusions.” Roughly speaking, a collusion occurs when the teacher and the student agree in advance on some unnatural encoding of information about $c$ using the bit description of the chosen examples, instead of using attributes that separate $c$ from other concepts. Many mathematical models for teaching were suggested: Shinohara and Miyano [43], Jackson and Tomkins [27], Goldman, Rivest and Schapire [21], Goldman and Kearns [19], Goldman and Mathias [20], Angluin and Krikis [2], Balbach [5], and Kobayashi and Shinohara [29]. We now discuss some of these models in more detail.

Teaching sets.

The first mathematical models for teaching [19, 43, 3] handle collusions in a fairly restrictive way, by requiring that the teacher provides a set of examples that uniquely identifies $c$ in $C$. Formally, this is captured by the notion of a teaching set, which was independently introduced by Goldman and Kearns [19], Shinohara and Miyano [43] and Anthony et al. [3]. A set $Y \subseteq X$ is a teaching set for $c$ in $C$ if for all $c' \neq c$ in $C$, we have $c'|_Y \neq c|_Y$. The teaching complexity in these models is captured by the hardest concept to teach, i.e., $\max_{c \in C} \min\{|Y| : Y \text{ is a teaching set for } c \text{ in } C\}$.

Teaching sets also appear in other areas of learning theory: Hanneke [22] used it in his study of the label complexity in active learning, and the authors of [47] used variants of it to design efficient algorithms for learning distributions using imperfect data.

Defining the teaching complexity using the hardest concept is often too restrictive. Consider for example the concept class consisting of all singletons and the empty set over a domain of size $n$. Its teaching complexity in these models is $n$, since the only teaching set for the empty set is all of the domain. This is a fairly simple concept class that has the maximum possible complexity.

Recursive teaching dimension.

Goldman and Mathias [20] and Angluin and Krikis [2] therefore suggested less restrictive teaching models, and more efficient teaching schemes were indeed discovered in these models. One approach, studied by Zilles et al. [48], Doliwa et al. [12], and Samei et al. [40], uses a natural hierarchy on the concept class which is defined as follows. The first layer in the hierarchy consists of all concepts whose teaching set has minimal size. Then, these concepts are removed and the second layer consists of all concepts whose teaching set with respect to the remaining concepts has minimal size. Then, these concepts are removed and so on, until all concepts are removed. The maximum size of a set that is chosen in this process is called the recursive teaching (RT) dimension. One way of thinking about this model is that the teaching process satisfies an Occam’s razor-type rule of preferring simpler concepts. For example, the concept class consisting of singletons and the empty set, which was considered earlier, has recursive teaching dimension $1$: The first layer in the hierarchy consists of all singletons, which have teaching sets of size $1$. Once all singletons are removed, we are left with a concept class of size $1$, the concept class $\{\emptyset\}$, and in it the empty set has a teaching set of size $0$.
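Both definitions are easy to test on small classes by brute force. The Python sketch below (ours; exponential time, for illustration only) computes a minimum teaching set for each concept and the recursive teaching dimension by repeatedly peeling off the easiest-to-teach concepts; on singletons plus the empty set it reports teaching dimension $n = 5$ but recursive teaching dimension $1$, matching the discussion above.

from itertools import combinations

def teaching_set_size(c, concepts):
    # Size of a smallest set of coordinates whose labels under c
    # distinguish c from every other concept in `concepts`.
    others = [h for h in concepts if h != c]
    if not others:
        return 0
    n = len(c)
    for k in range(1, n + 1):
        for T in combinations(range(n), k):
            if all(any(h[i] != c[i] for i in T) for h in others):
                return k
    return n  # unreachable when the concepts are distinct

def rt_dimension(concepts):
    # Recursive teaching dimension: peel off minimally teachable concepts.
    remaining = set(concepts)
    rt = 0
    while remaining:
        sizes = {c: teaching_set_size(c, remaining) for c in remaining}
        layer_size = min(sizes.values())
        rt = max(rt, layer_size)
        remaining -= {c for c, s in sizes.items() if s == layer_size}
    return rt

if __name__ == "__main__":
    n = 5
    singletons = [tuple(int(i == j) for i in range(n)) for j in range(n)]
    C = singletons + [tuple(0 for _ in range(n))]
    print(max(teaching_set_size(c, C) for c in C))  # teaching dimension: 5
    print(rt_dimension(C))                          # recursive teaching dimension: 1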

A similar notion to RT-dimension was independently suggested in [47] under the terminology of partial IDs. There the focus was on getting a simultaneous upper bound on the size of the sets, as well as the number of layers in the recursion, and it was shown that for any concept class both can be made at most $O(\log|C|)$. Motivation for this study comes from the population recovery learning problem defined in [15].

Previous results.

Doliwa et al. [12] and Zilles et al. [48] asked whether small VC-dimension implies small recursive teaching dimension. An equivalent question was asked 10 years earlier by Kuhlmann [30]. Since the VC-dimension does not increase when concepts are removed from the class, this question is equivalent to asking whether every class with small VC-dimension has some concept in it with a small teaching set. Given the semantics of the recursive teaching dimension and the VC-dimension, an interpretation of this question is whether exact teaching is not much harder than approximate learning (i.e., PAC learning).

For infinite classes the answer to this question is negative: there is an infinite concept class with VC-dimension $1$ so that no concept in it has a finite teaching set. An example of such a class is $C = \{c_r : r \in \mathbb{R}\}$ over the universe $X = \mathbb{Q}$, where $c_r$ is the indicator function of all rational numbers that are smaller than $r$. The VC-dimension of $C$ is $1$, but every teaching set for some $c_r$ must contain a sequence of rationals that converges to $r$.

For finite classes this question is open. However, in some special cases it is known that the answer is affirmative. In [30] it is shown that if $C$ has VC-dimension $1$, then its recursive teaching dimension is also $1$. It is known that if $C$ is a maximum class (i.e., it meets the Sauer-Shelah-Perles bound with equality) then its recursive teaching dimension is equal to its VC-dimension [12, 39]. Other families of concept classes for which the recursive teaching dimension is at most the VC-dimension are discussed in [12]. In the other direction, [30] provided examples of concept classes whose recursive teaching dimension is strictly larger than their VC-dimension.

The only bound on the recursive teaching dimension for general classes was observed by both [12, 47]. It states that the recursive teaching dimension of $C$ is at most $\log|C|$. This bound follows from a simple halving argument which shows that for every class $C'$ there exists some $c \in C'$ with a teaching set of size at most $\log|C'|$.
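For completeness, here is a minimal Python sketch (ours) of that halving argument: repeatedly pick a coordinate on which the remaining concepts disagree and keep the minority label; the class at least halves in each step, and the collected labeled examples form a teaching set, of size at most $\lceil\log_2|C|\rceil$, for the single concept that survives.

from math import log2, ceil

def halving_teaching_set(concepts):
    # Returns (c, examples): a concept and a teaching set for it, of size at
    # most ceil(log2(|concepts|)), produced by the halving argument.
    remaining = list(concepts)
    n = len(remaining[0])
    examples = []                      # list of (coordinate, label) pairs
    while len(remaining) > 1:
        # Any coordinate on which the remaining concepts disagree.
        i = next(i for i in range(n) if len({c[i] for c in remaining}) == 2)
        ones = [c for c in remaining if c[i] == 1]
        zeros = [c for c in remaining if c[i] == 0]
        remaining = ones if len(ones) <= len(zeros) else zeros   # minority side
        examples.append((i, remaining[0][i]))
    return remaining[0], examples

if __name__ == "__main__":
    from itertools import product
    C = list(product((0, 1), repeat=4))            # the full cube, |C| = 16
    c, T = halving_teaching_set(C)
    print(c, T, len(T) <= ceil(log2(len(C))))      # a teaching set of size 4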

Our contribution.

Our first main result is the following general bound, which exponentially improves over the $\log|C|$ bound when the VC-dimension is small (the proof is given in Section 3).

Theorem 1.4 (RT-dimension).

Let $C$ be a concept class of VC-dimension $d$. Then there exists $c \in C$ with a teaching set of size at most $O\!\left(d\,2^{d}\log\log|C|\right)$.

It follows that the recursive teaching dimension of concept classes of VC-dimension $d$ is at most $O\!\left(d\,2^{d}\log\log|C|\right)$ as well.

Subsequent to this paper, Chen, Cheng, and Tang [9] proved that the RT-dimension is at most $2^{O(d)}$, independently of $|C|$. Their proof is based on ideas from this work, in particular they follow and improve the argument from the proof of Lemma 1.7.

1.3 Sample compression schemes

A fundamental and well known statement in learning theory says that if the VC-dimension of a concept class $C$ is small, then any consistent algorithm successfully PAC learns concepts from $C$ after seeing just a few labelled examples [45, 7]. In practice, however, a major challenge one has to face when designing a learning algorithm is the construction of an hypothesis that is consistent with the examples seen. Many learning algorithms share the property that the output hypothesis is constructed using a small subset of the examples. For example, in support vector machines, only the set of support vectors is needed to construct the separating hyperplane [11]. Sample compression schemes provide a formal meaning for this algorithmic property.

Before giving the formal definition of compression schemes, let us consider a simple illustrative example. Assume we are interested in learning the concept class of intervals on the real line. We get a collection of 100 samples of the form $(x, b)$, where $x \in \mathbb{R}$ and $b \in \{0,1\}$ indicates if $x$ is in the underlying interval. Can we remember just a few of the samples in a way that allows to recover all the 100 samples? In this case, the answer is affirmative and in fact it is easy to do so. Just remember two locations, those of the left most $1$ and of the right most $1$ (if there are no $1$s, just remember one of the $0$s). From this data, we can reconstruct the value of $b$ on all the other 100 samples.
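In code, this interval scheme could look as follows (a Python sketch of ours, purely to illustrate the idea; it is not the formal definition given below).

def compress(sample):
    # sample: list of (x, b) with b = 1 iff x lies in the unknown interval.
    # Keep at most two labeled points that determine the labels of the rest.
    positives = [x for x, b in sample if b == 1]
    if positives:
        return [(min(positives), 1), (max(positives), 1)]
    return sample[:1]  # no positives: a single negative example suffices

def reconstruct(compressed):
    # A hypothesis consistent with every example that compresses to `compressed`.
    if compressed and compressed[0][1] == 1:
        left, right = compressed[0][0], compressed[-1][0]
        return lambda x: 1 if left <= x <= right else 0
    return lambda x: 0  # the empty interval

if __name__ == "__main__":
    sample = [(0.1, 0), (0.3, 1), (0.5, 1), (0.7, 1), (0.9, 0)]
    h = reconstruct(compress(sample))
    print(all(h(x) == b for x, b in sample))  # True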

The formal definition.

Littlestone and Warmuth [32] formally defined sample compression schemes as follows. Let $C \subseteq \{0,1\}^X$ with $n = |X|$. Let

$$L_C(k_1, k_2) = \{(Y, y) : Y \subseteq X,\ k_1 \le |Y| \le k_2,\ y = c|_Y \text{ for some } c \in C\},$$

the set of labelled samples from $C$, of sizes between $k_1$ and $k_2$. A $k$-sample compression scheme for $C$ with information $Q$, where $Q$ is a finite set, consists of two maps $\kappa$ and $\rho$ for which the following hold:

($\kappa$)

The compression map

$$\kappa : L_C(1, n) \to L_C(0, k) \times Q$$

takes $(Y, y)$ to $((Z, z), q)$ with $Z \subseteq Y$ and $z = y|_Z$.

($\rho$)

The reconstruction map

$$\rho : L_C(0, k) \times Q \to \{0,1\}^X$$

is so that for all $(Y, y)$ in $L_C(1, n)$,

$$\rho(\kappa(Y, y))|_Y = y.$$

The size of the scheme is $k + \log|Q|$.

Intuitively, the compression map $\kappa$ takes a long list of samples $(Y, y)$ and encodes it as a short sub-list of samples $(Z, z)$ together with some small amount of side information $q \in Q$, which helps in the reconstruction phase. The reconstruction map $\rho$ takes a short list of samples and decodes it, using the side information but without any knowledge of $C$, to an hypothesis $h \in \{0,1\}^X$ in a way that essentially inverts the compression. Specifically, the following property must always hold: if the compression of $(Y, y)$ is the same as that of $(Y', y')$ then the single reconstructed hypothesis must be consistent with both, i.e. $h|_Y = y$ and $h|_{Y'} = y'$.

A different perspective of the side information is as a list decoding in which the small set of labelled examples $(Z, z)$ is mapped to the set of hypotheses $\{\rho((Z, z), q) : q \in Q\}$, one of which is correct.

We note that it is not necessarily the case that the reconstructed hypothesis $h$ belongs to the original class $C$. All it has to satisfy is that for any $(Y, y)$ whose compression is $((Z, z), q)$ we have that $\rho((Z, z), q)|_Y = y$. Thus, $h$ has to be consistent only on the sample that was compressed and not elsewhere.

Let us consider a simple example of a sample compression scheme, to help digest the definition. Let $C$ be a concept class and let $r$ be the rank over, say, $\mathbb{F}_2$ of the matrix whose rows correspond to the concepts in $C$. We claim that there is an $r$-sample compression scheme for $C$ with no side information. Indeed, for any $Y \subseteq X$, let $Y' \subseteq Y$ be a set of at most $r$ columns that span the columns of the matrix $C$ restricted to $Y$. Given a sample $(Y, y)$, compress it to $(Y', y|_{Y'})$. The reconstruction map takes $(Y', y|_{Y'})$ to any concept $c \in C$ so that $c|_{Y'} = y|_{Y'}$. This sample compression scheme works since every two rows that disagree somewhere on $Y$ must already disagree on $Y'$ (the columns in $Y$ are spanned by the columns in $Y'$).
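The following Python sketch (ours) spells out this rank-based scheme over GF(2): a sample is compressed to its restriction to a set of at most rank(C) spanning columns, and reconstruction returns any concept that agrees there; agreement then propagates to every spanned column. The routine gf2_basis_columns is a small Gaussian-elimination helper written for this illustration, and the sample is assumed to be realizable by some concept in the class.

def gf2_basis_columns(matrix, cols):
    # Greedily pick indices from `cols` whose columns are GF(2)-independent
    # and span all columns of `matrix` indexed by `cols`.
    chosen, reduced = [], []          # `reduced`: bitmasks with distinct leading bits
    for j in cols:
        v = 0
        for i, row in enumerate(matrix):
            v |= row[j] << i          # the j-th column, encoded as a bitmask over rows
        for r in sorted(reduced, reverse=True):
            if v.bit_length() == r.bit_length():
                v ^= r                # cancel the leading bit shared with r
        if v:                         # independent of the columns chosen so far
            chosen.append(j)
            reduced.append(v)
    return chosen

def compress(sample, matrix):
    # Keep only the labeled examples on a spanning set of columns.
    domain = [x for x, _ in sample]
    keep = set(gf2_basis_columns(matrix, domain))
    return [(x, b) for x, b in sample if x in keep]

def reconstruct(compressed, matrix):
    # Any concept agreeing with the compressed sample; it then agrees with the
    # whole original sample, since the dropped columns are GF(2)-spanned.
    return next(row for row in matrix if all(row[x] == b for x, b in compressed))

if __name__ == "__main__":
    # Rows: the four GF(2) linear functionals on {0,1}^2, evaluated at 01, 10, 11.
    C = [(0, 0, 0), (1, 0, 1), (0, 1, 1), (1, 1, 0)]   # rank 2 over GF(2)
    sample = [(0, 1), (1, 1), (2, 0)]                   # labeled by the concept (1, 1, 0)
    h = reconstruct(compress(sample, C), C)
    print(h, all(h[x] == b for x, b in sample))         # (1, 1, 0) True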

Connections to learning.

Sample compression schemes are known to yield practical learning algorithms (see e.g. [34]), and allow learning for multi-labelled concept classes [41].

They can also be interpreted as a formal manifestation of Occam’s razor. Occam’s razor is a philosophical principle attributed to William of Ockham from the late middle ages. It says that in the quest for an explanation or an hypothesis, one should prefer the simplest one which is consistent with the data. There are many works on the role of Occam’s razor in learning theory, a partial list includes [32, 7, 16, 37, 26, 17, 13]. In the context of sample compression schemes, simplicity is captured by the size of the compression scheme. Interestingly, this manifestation of Occam’s razor is provably useful [32]: Sample compression schemes imply PAC learnability.

Theorem 1.5 (Littlestone-Warmuth).

Let $k \le m$ be integers and let $\epsilon \in (0,1)$. Let $\mu$ be a distribution on $X$, let $c \in C$, and let $x_1, \ldots, x_m$ be independent samples from $\mu$. Let $Y = \{x_1, \ldots, x_m\}$ and $y = c|_Y$. Let $(\kappa, \rho)$ be a $k$-sample compression scheme for $C$ with additional information $Q$. Let $h = \rho(\kappa(Y, y))$. Then,

$$\Pr\big[\,\mathrm{dist}_\mu(h, c) > \epsilon\,\big] \le |Q|\sum_{i=0}^{k}\binom{m}{i}(1-\epsilon)^{m-i}.$$

Proof sketch..

There are at most $\sum_{i=0}^{k}\binom{m}{i}$ subsets of $\{x_1, \ldots, x_m\}$ of size at most $k$. There are $|Q|$ choices for $q \in Q$. Each choice of a subset $Z$ and of $q \in Q$ yields a function $h_{Z,q} = \rho((Z, c|_Z), q)$ that is measurable with respect to the samples in $Z$. The function $h = \rho(\kappa(Y, y))$ is one of the functions $h_{Z,q}$. For each $h_{Z,q}$, the coordinates in $Y \setminus Z$ are independent of it, and so if $\mathrm{dist}_\mu(h_{Z,q}, c) > \epsilon$ then the probability that all these samples agree with $h_{Z,q}$ is less than $(1-\epsilon)^{m-|Z|}$. The union bound completes the proof. ∎

The sample complexity of PAC learning is essentially the VC-dimension. Thus, from Theorem 1.5 we expect the VC-dimension to bound from below the size of sample compression schemes. Indeed, [17] proved that there are concept classes of VC-dimension $d$ for which any sample compression scheme has size $\Omega(d)$.

This is part of the motivation for the following basic question that was asked by Littlestone and Warmuth [32] nearly 30 years ago: Does a concept class of VC-dimension $d$ have a sample compression scheme of size depending only on $d$ (and not on the universe size)?

In fact, unlike the VC-dimension, the definition of sample compression schemes as well as the fact that they imply PAC learnability naturally generalizes to multi-class classification problems [41]. Thus, Littlestone and Warmuth’s question above can be seen as the boolean instance of a much broader question: Is it true that the size of an optimal sample compression scheme for a given concept class (not necessarily binary-labeled) is the sample complexity of PAC learning of this class?

Previous constructions.

Floyd [16] and Floyd and Warmuth [17] constructed sample compression schemes of size $\log|C|$. The construction in [17] uses a transformation that converts certain online learning algorithms to compression schemes. Helmbold and Warmuth [26] and Freund [18] showed how to compress a sample of size $m$ to a sample of size $O(\log m)$ using some side information for classes of constant VC-dimension (the implicit constant in the $O(\cdot)$ depends on the VC-dimension).

In a long line of works, several interesting compression schemes for special cases were constructed. A partial list includes Helmbold et al. [25], Floyd and Warmuth [17], Ben-David and Litman [6], Chernikov and Simon [10], Kuzmin and Warmuth [31], Rubinstein et al. [38], Rubinstein and Rubinstein [39], Livni and Simon [33] and more. These works provided connections between compression schemes and geometry, topology and model theory.

Our contribution.

Here we make the first quantitative progress on this question since the work of Floyd [16]. The following theorem shows that low VC-dimension implies the existence of relatively efficient compression schemes. The constructive proof is provided in Section 4.

Theorem 1.6 (Sample compression scheme).

If $C$ has VC-dimension $d$ then it has a $k$-sample compression scheme with additional information $Q$, where $k = O\!\left(d\,2^{d}\log\log|C|\right)$ and $\log|Q| = O\!\left(k\log k\right)$.

Subsequent to this paper, the first and the last authors improved this bound [35], showing that any concept class of VC-dimension $d$ has a sample compression scheme of size at most $2^{O(d)}$. The techniques used in [35] differ from the techniques we use in this paper. In particular, our scheme relies on Haussler’s Packing Lemma (Theorem 1.3) and recursion, while the scheme in [35] relies on von Neumann’s minimax theorem [36] and the $\epsilon$-approximation theorem [45, 24], which follows from the double-sampling argument of [45]. Thus, despite the fact that our scheme is weaker than the one in [35], it provides a different angle on sample compression, which may be useful in further improving the exponential dependence on the VC-dimension to an optimal linear dependence, as conjectured by Floyd and Warmuth [17, 46].

1.4 Discussion and open problems

This work provides relatively efficient constructions of teaching sets and sample compression schemes. However, the exact relationship between VC-dimension, sample compression scheme size, and the RT-dimension remains unknown. Is there always a concept with a teaching set of size depending only on the VC-dimension? (The interesting case is finite concept classes, as mentioned above.) Are there always sample compression schemes of size linear (or even polynomial) in the VC-dimension?

The simplest case that is still open is VC-dimension $2$. One can refine this case even further. VC-dimension $2$ means that on any three coordinates $x_1, x_2, x_3$, the projection of $C$ has at most $7$ patterns. A more restricted family of classes is $(3,6)$ concept classes, for which on any three coordinates there are at most $6$ patterns. We can show that the recursive teaching dimension of $(3,6)$ classes is at most $3$.

Lemma 1.7.

Let $C$ be a finite $(3,6)$ concept class. Then there exists some $c \in C$ with a teaching set of size at most $3$.

Proof.

Assume that $C \subseteq \{0,1\}^X$ with $|C| > 1$. If $C$ has VC-dimension $1$ then there exists $c \in C$ with a teaching set of size $1$ (see [30, 1]). Therefore, assume that the VC-dimension of $C$ is $2$. Every shattered pair $\{x, x'\} \subseteq X$ partitions $C$ to $4$ nonempty sets:

$C_{b,b'} = \{c \in C : c(x) = b,\ c(x') = b'\}$ for $b, b' \in \{0,1\}$. Pick a shattered pair and $b, b' \in \{0,1\}$ for which the size of $C_{b,b'}$ is minimal. Without loss of generality assume that this pair is $\{x, x'\}$ and that $b = b' = 0$. To simplify notation, we denote $C_{0,0}$ simply by $C'$.

We prove below that $C'$ has VC-dimension at most $1$. This completes the proof since then there is some $c \in C'$ and some $z \in X$ such that $\{z\}$ is a teaching set for $c$ in $C'$. Therefore, $\{x, x', z\}$ is a teaching set for $c$ in $C$.

First, a crucial observation is that since $C$ is a $(3,6)$ class, no pair $\{y, y'\}$ is shattered by both $C'$ and $C \setminus C'$. Indeed, assume that $C'$ shatters $\{y, y'\}$; then all four patterns with $c(x) = 0$ appear in the projection of $C$ on $\{x, y, y'\}$, and all four patterns with $c(x') = 0$ appear in the projection of $C$ on $\{x', y, y'\}$. If in addition $C \setminus C'$ shatters $\{y, y'\}$, then every pattern on $\{y, y'\}$ is realized by a concept with $c(x) = 1$ or by a concept with $c(x') = 1$; since any concept in $C_{1,1}$ (which is nonempty, as $\{x, x'\}$ is shattered) realizes a pattern of both kinds, one of the two kinds realizes at least $3$ of the $4$ patterns. Hence $C$ has at least $4 + 3 = 7$ patterns on $\{x, y, y'\}$ or on $\{x', y, y'\}$, contradicting the assumption that $C$ is a $(3,6)$ class.

Now, assume towards contradiction that $C'$ shatters some pair $\{y, y'\}$. Thus, $\{y, y'\}$ is not shattered by $C \setminus C'$, which means that there is some pattern $(a, a')$ so that no concept of $C \setminus C'$ satisfies $c(y) = a$ and $c(y') = a'$. This implies that $\{c \in C : c(y) = a,\ c(y') = a'\}$ is a proper subset of $C'$ (it is contained in $C'$ since it avoids $C \setminus C'$, and it is proper since $C'$ realizes all four patterns on $\{y, y'\}$), contradicting the minimality of $C'$, as $\{y, y'\}$ is itself a shattered pair of $C$. ∎

2 The dual class

We shall repeatedly use the dual concept class of $C$ and its properties. The dual concept class of $C$ is defined by $C^* = \{c^*_x : x \in X\} \subseteq \{0,1\}^C$, where $c^*_x : C \to \{0,1\}$ is the map so that $c^*_x(c) = 1$ iff $c(x) = 1$. If we think of $C$ as a binary matrix whose rows are the concepts in $C$, then $C^*$ corresponds to the set of distinct rows of the transposed matrix (so it may be that $|C^*| < |X|$).
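In code, passing to the dual class is just transposing the matrix and removing duplicate rows; the short Python sketch below (ours) does exactly that.

def dual_class(concepts):
    # View the class as a matrix (rows = concepts, columns = points of X),
    # transpose it, and keep the distinct rows of the transpose.
    matrix = list(concepts)
    n = len(matrix[0])
    return {tuple(row[x] for row in matrix) for x in range(n)}

if __name__ == "__main__":
    C = [(0, 0, 1), (0, 1, 1), (1, 1, 1)]
    print(sorted(dual_class(C)))   # three distinct columns, so |C*| = 3 here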

We use the following well known property (see [4]).

Claim 2.1 (Assouad).

If the VC-dimension of $C$ is $d$ then the VC-dimension of $C^*$ is at most $2^{d+1}$.

Proof sketch.

If the VC-dimension of $C^*$ is at least $2^{d+1}$ then in the matrix representing $C$ there are $2^{d+1}$ rows that are shattered (by the columns), and in these rows there are $d+1$ columns that are shattered (by the rows), so the VC-dimension of $C$ is at least $d+1$. ∎

We also define the dual approximating set (recall the definition of an $\epsilon$-approximating set from Section 1.1): for $\epsilon > 0$, it is an $\epsilon$-approximating set of the dual class $C^*$ with respect to the uniform distribution on $C$.

3 Teaching sets

In this section we prove Theorem 1.4. The high level idea is to use Theorem 1.3 and Claim 2.1 to identify two distinct points $x, x'$ in $X$ so that the set of concepts $c \in C$ with $c(x) \neq c(x')$ is nonempty but much smaller than $C$, add $x, x'$ to the teaching set, and continue inductively.

Proof of Theorem 1.4.

For classes with VC-dimension $1$ there is $c \in C$ with a teaching set of size $1$, see e.g. [12]. We may therefore assume that $d \ge 2$.

We show that if $|C|$ is larger than a certain function of $d$, then there exist $x \neq x'$ in $X$ such that

(1)  $1 \le \big|\{c \in C : c(x) \neq c(x')\}\big| \le \epsilon \cdot |C|$, where $\epsilon = |C|^{-\alpha}$ and $\alpha = \Theta\!\left(\frac{1}{d\,2^{d}}\right)$.

From this the theorem follows, since if we iteratively add such a pair $x, x'$ to the teaching set and restrict ourselves to $\{c \in C : c(x) \neq c(x')\}$, then after at most $O\!\left(d\,2^{d}\log\log|C|\right)$ iterations, the size of the remaining class is reduced to less than a constant depending only on $d$. At this point we can identify a unique concept by adding a number of additional indices that depends only on $d$ to the teaching set, using the halving argument of [12, 47]. This gives a teaching set of size at most $a \cdot d\,2^{d}\log\log|C|$ for some constant $a > 0$, as required.

In order to prove (1), it is enough to show that there exist $x \neq x'$ in $X$ such that the normalized Hamming distance between the corresponding columns, i.e. $\mathrm{dist}_\mu(c^*_x, c^*_{x'})$ for $\mu$ the uniform distribution on $C$, is at most $\epsilon$. Assume towards contradiction that the distance between every two concepts in $C^*$ is more than $\epsilon$, and assume without loss of generality that $|C^*| = |X|$ (that is, all the columns in $C$ are distinct). By Claim 2.1, the VC-dimension of $C^*$ is at most $2^{d+1}$. Theorem 1.3 thus implies that

(2)  $|X| = |C^*| \le e\left(2^{d+1}+1\right)\left(\frac{2e}{\epsilon}\right)^{2^{d+1}} \le |C|^{1/(2d)},$

where the last inequality follows from the definition of $\epsilon$ and the assumption on the size of $C$. Therefore, we arrive at the following contradiction:

$|C| \le \left(\frac{e\,|X|}{d}\right)^{d}$  (by Theorem 1.1, since $\mathrm{VC}(C) = d$)

$\le \left(e\,|C|^{1/(2d)}\right)^{d}$  (by Equation 2 above)

$< |C|$  (by the definition of $\epsilon$ and since $|C|$ is large enough),

a contradiction. ∎
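To summarize the construction just proved, here is a simplified Python rendering (ours). Where the proof uses Haussler’s packing lemma on the dual class to guarantee that some pair of coordinates is separated by few (but at least one) of the remaining concepts, the sketch below simply searches for the best such pair by brute force; it then restricts the class, repeats, and finishes with the halving argument, so the two labeled endpoints of every chosen pair together with the halving examples form a teaching set.

from itertools import combinations

def teaching_set_via_pairs(concepts, small=2):
    # `concepts`: list of distinct 0/1 tuples. Repeatedly find a pair of
    # coordinates on which few (but at least one) of the remaining concepts
    # disagree, keep only those concepts and record the pair; once the class
    # is small, finish with the halving argument.
    remaining = list(concepts)
    n = len(remaining[0])
    pairs = []
    while len(remaining) > small:
        def separators(p):
            return [c for c in remaining if c[p[0]] != c[p[1]]]
        candidates = [p for p in combinations(range(n), 2) if separators(p)]
        x, y = min(candidates, key=lambda p: len(separators(p)))
        remaining = separators((x, y))
        pairs.append((x, y))
    halving = []
    while len(remaining) > 1:
        i = next(i for i in range(n) if len({c[i] for c in remaining}) == 2)
        zeros = [c for c in remaining if c[i] == 0]
        ones = [c for c in remaining if c[i] == 1]
        remaining = zeros if len(zeros) <= len(ones) else ones
        halving.append(i)
    c = remaining[0]
    coords = sorted({x for p in pairs for x in p} | set(halving))
    return c, [(x, c[x]) for x in coords]

def is_teaching_set(c, examples, concepts):
    return all(h == c or any(h[x] != b for x, b in examples) for h in concepts)

if __name__ == "__main__":
    n = 5
    singletons = [tuple(int(i == j) for i in range(n)) for j in range(n)]
    C = singletons + [tuple(0 for _ in range(n))]
    c, T = teaching_set_via_pairs(C)
    print(c, T, is_teaching_set(c, T, C))   # a teaching set of size 2 for a singleton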

4 Sample compression schemes

In this section we prove Theorem 1.6. The theorem statement and the definition of sample compression schemes appear in Section 1.3.

While the details are somewhat involved, due to the complexity of the definitions, the high level idea may be (somewhat simplistically) summarized as follows.

For an appropriate choice of $\epsilon$, we pick an $\epsilon$-approximating set of the dual class $C^*$. It is helpful to think of this approximating set as a subset of the domain $X$. Now, either the approximating set faithfully represents the sample or it does not (we do not formally define “faithfully represents” here). We identify the following win-win situation: In both cases, we can reduce the compression task to one in a much smaller set of concepts, similarly to what we did for teaching sets in Section 3. This yields the same double-logarithmic behavior.

In the case that the approximating set faithfully represents the sample, Case 2 below, we recursively compress in a small class derived from it. In the unfaithful case, Case 1 below, we recursively compress in a (small) set of concepts for which a disagreement occurs on some point of the approximating set, just as in Section 3. In both cases, we have to extend the recursive solution, and the cost is adding one sample point to the compressed sample (and some small amount of additional information by which we encode whether Case 1 or 2 occurred).

The compression we describe is inductively defined, and has the following additional structure. Let $((Z, z), q)$ be in the image of the compression map. The information $q$ is of the form $q = (i, g)$, where $i$ is an integer and $g$ is a partial one-to-one function.

The rest of this section is organized as follows. In Section 4.1 we define the compression map $\kappa$. In Section 4.2 we give the reconstruction map $\rho$. The proof of correctness is given in Section 4.3 and the upper bound on the size of the compression is calculated in Section 4.4.

4.1 Compression map: defining $\kappa$

Let $C$ be a concept class. The compression map $\kappa$ is defined by induction on $|C|$. For simplicity of notation, let $m = |C|$.

In what follows we shall routinely use $\epsilon$-approximating sets of the dual class. There are several $\epsilon$-approximating sets and so we would like to fix one of them, say, the one obtained by greedily adding columns starting from the first column (recall that we can think of $C$ as a matrix whose rows correspond to concepts in $C$ and whose columns are concepts in the dual class $C^*$). To keep notation simple, we shall use the same symbol to denote both the approximating set in $C^*$ and the subset of $X$ composed of columns that give rise to it. This is a slight abuse of notation but the relevant meaning will always be clear from the context.

Induction base.

The base of the induction applies to all concept classes $C$ that are small enough (as determined by the threshold in the induction step below). In this case, we use the compression scheme of Floyd and Warmuth [16, 17], which has size $\log|C|$. This compression scheme has no additional information. Therefore, to maintain the structure of our compression scheme we append to it redundant additional information by setting $i$ and $g$ to be empty.

Induction step.

Let be so that . Let be so that

(3)

This choice balances the recursive size. By Claim 2.1, the VC-dimension of $C^*$ is at most $2^{d+1}$ (recall that $d$ is the VC-dimension of $C$). Theorem 1.3 thus implies that

(4)

(where the second inequality follows from the definition of $\epsilon$ and the assumption on the size of $C$, and the last inequality follows from the definition of $\epsilon$ and Theorem 1.1).

Let . Every has a rounding in . We distinguish between two cases:

  1. There exist and such that and .

    This is the unfaithful case in which we recurse as in Section 3. Let

Apply recursively on and the sample . Let be the result of this compression. Output defined as

    ( is defined on , marking that Case 1 occurred)
  2. For all and such that , we have .

This is the faithful case, in which we compress by restricting to . Consider . For each , pick to be an element such that . Let

    By (4), we know . Therefore, we can recursively apply on and and get . Output defined as

    ()
    ( is not defined on , marking that Case 2 occurred)

The following lemma summarizes two key properties of the compression scheme. The correctness of this lemma follows directly from the definitions of Cases 1 and 2 above.

Lemma 4.1.

Let and be the compression of described above, where . The following properties hold:

  1. is defined on and iff and there exists such that and .

  2. is not defined on iff for all and such that , it holds that .

4.2 Reconstruction map: defining $\rho$

The reconstruction map $\rho$ is similarly defined by induction on the size of the concept class. Let $C$ be a concept class and let $((Z, z), q)$ be in the image of $\kappa$ with respect to $C$. Let $\epsilon$ be as in (3).

Induction base.

The induction base here applies to the same classes as the induction base of the compression map. This is the only case where the additional information is empty, and we apply the reconstruction map of Floyd and Warmuth [16, 17].

Induction step.

Distinguish between two cases:

  1. is defined on .

    Let . Denote

    Apply recursively on . Let be the result. Output where

  2. is not defined on .

    Consider . For each , pick to be an element such that . Let

    Apply recursively on and let be the result. Output satisfying

4.3 Correctness

The following lemma yields the correctness of the compression scheme.

Lemma 4.2.

Let be a concept class, , and . Then,

  1. and , and

  2. .

Proof.

We proceed by induction on the size of $C$. In the base case, $C$ is small and the lemma follows from the correctness of Floyd and Warmuth’s compression scheme (this is the only case in which the additional information is empty). In the induction step, assume $C$ is large. We distinguish between two cases:

  1. is defined on .

    Let . This case corresponds to Case 1 in the definitions of and Case 1 in the definition of . By Item 1 of Lemma 4.1, and there exists and such that and . Let be the class defined in Case 1 in the definition of . Since , we know that on satisfy the induction hypothesis. Let

    be the resulting compression and reconstruction. Since we are in Case 1 in the definitions of and Case 1 in the definition of , and have the following form:

    and

    Consider item 1 in the conclusion of the lemma. By the definition of and ,

    (by the definition of )
    (by the induction hypothesis)

    Therefore, .

    Consider item 2 in the conclusion of the lemma. By construction and induction,

    Thus, .

  2. is not defined on .

    This corresponds to Case 2 in the definitions of and Case 2 in the definition of . Let be the result of Case 2 in the definition of . Since , we know that on satisfy the induction hypothesis. Let

    as defined in Case 2 in the definitions of and Case 2 in the definition of . By construction, and have the following form:

    and

Consider item 1 in the conclusion of the lemma. Let . By the induction hypothesis, . Thus, for some . Since the range of is , it follows that . This shows that .

Consider item 2 in the conclusion of the lemma. For ,

(by the definition of )
(by the induction hypothesis)
(by the definition of in Case 2 of )