Do Outliers Ruin Collaboration?

Mingda Qiao
Institute for Interdisciplinary Information Sciences (IIIS)
Tsinghua University
Abstract

We consider the problem of learning a binary classifier from $k$ different data sources, among which at most an $\eta$ fraction are adversarial. The overhead is defined as the ratio between the sample complexity of learning in this setting and that of learning the same hypothesis class on a single data distribution. We present an algorithm that achieves an $O(\eta k + \ln k)$ overhead, which is proved to be worst-case optimal. We also discuss the potential challenges to the design of a computationally efficient learning algorithm with a small overhead.

1 Introduction

Consider the following real-world scenario: we would like to train a speech recognition model based on labeled examples collected from $k$ different users. For this particular application, a high average accuracy over all users is far from satisfactory: a model that is correct on 99% of the data may still go seriously wrong for a small yet non-negligible fraction of the users. Instead, a more desirable objective would be finding personalized speech recognition solutions that are accurate for every single user.

There are two major challenges to achieving this goal, the first being user heterogeneity: a model trained exclusively for users with a particular accent may fail miserably for users from another region. This challenge hints that a successful learning algorithm should be adaptive: more samples need to be collected from users with atypical data distributions. Equally crucial is that a small fraction of the users are malicious (e.g., they are controlled by a competing corporation); these users intend to mislead the speech recognition model into generating inaccurate or even ludicrous outputs.

Motivated by these practical concerns, we propose the Robust Collaborative Learning model and study from a theoretical perspective the complexity of learning in the presence of untrusted collaborators. In our model, a learning algorithm interacts with $k$ different users, each associated with a data distribution $\mathcal{D}_i$. As mentioned above, a successful learning algorithm should, ideally, find personalized classifiers $f_1, f_2, \ldots, f_k$ for different distributions, such that

$$\Pr_{x \sim \mathcal{D}_i}\left[f_i(x) \ne f^*(x)\right] \le \varepsilon$$

holds for every $i \in [k]$, where $f^*(x)$ denotes the true label of sample $x$. Further complicating the situation is that the algorithm can only interact with the data distributions via the users, each of which is either truthful or adversarial. A truthful user always provides the learning algorithm with independent samples drawn from his distribution together with the correct labels, whereas the labeled samples collected from adversarial users are arbitrary.

In the presence of malicious users, it is clearly impossible to learn an accurate classifier for every single distribution: an adversary may choose to provide no information about his data distribution. Therefore, a more realistic objective is to satisfy all the truthful users, i.e., to learn classifiers $f_1, f_2, \ldots, f_k$ such that $\Pr_{x \sim \mathcal{D}_i}[f_i(x) \ne f^*(x)] \le \varepsilon$ holds for every truthful user $i$.

Naïvely, one could ignore the prior knowledge that samples from truthful users are labeled by the same function, and run $k$ independent copies of the same learning algorithm for the $k$ users. This straightforward approach clearly needs at least $k$ times as many samples as that required by learning on a single data distribution. Following the terminology of Blum et al. [4], we say that this naïve algorithm leads to a $\Theta(k)$ sample complexity overhead. The notion of overhead measures the extent to which learning benefits from the collaboration and sharing of information among different parties. Blum et al. [4] proposed a learning algorithm that achieves an $O(\ln k)$ overhead for the case that all users are truthful, i.e., $\eta = 0$. We are then interested in answering the following natural question: can we still achieve a sublinear overhead for the case that $\eta > 0$, at least when $\eta$ is sufficiently small? In other words, do adversaries ruin the efficiency of collaboration?

1.1 Model and Preliminaries

Similar to the classic Probably Approximately Correct (PAC) learning framework due to Valiant [12], we consider the binary classification problem on a set $\mathcal{X}$. The hypothesis class $\mathcal{F}$ is a collection of binary functions on $\mathcal{X}$ with VC-dimension $d$. The elements in $\mathcal{X}$ are labeled by an unknown target function $f^* \in \mathcal{F}$ (this is known as the realizable setting of PAC learning).

Suppose that $\mathcal{D}$ is a probability distribution on the set $\mathcal{X}$. For a classifier $f$, we write $\mathrm{err}_{\mathcal{D}}(f) = \Pr_{x \sim \mathcal{D}}[f(x) \ne f^*(x)]$ for its error on $\mathcal{D}$. Let $\mathcal{O}_{\mathcal{F}}$ denote the oracle that, given a set $S$ of labeled examples, either returns a classifier $f \in \mathcal{F}$ that is consistent with the examples (i.e., $f(x) = y$ for every $(x, y) \in S$) or returns $\bot$ if $\mathcal{F}$ contains no such consistent classifiers. A classic result in PAC learning states that if

$$m_{\varepsilon,\delta} = O\left(\frac{1}{\varepsilon}\left(d\ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right)$$

independent labeled samples are drawn from $\mathcal{D}$, with probability at least $1 - \delta$, inequality $\mathrm{err}_{\mathcal{D}}(f) \le \varepsilon$ holds for every possible output $f$ of $\mathcal{O}_{\mathcal{F}}$ [5].
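
For concreteness, the consistency oracle $\mathcal{O}_{\mathcal{F}}$ can be pictured as follows. This is a minimal Python sketch for a finite hypothesis class given as an explicit list of functions; the names are our own, and a real implementation would use a proper ERM routine rather than enumeration.

def consistency_oracle(hypotheses, examples):
    # Return some hypothesis consistent with every labeled example,
    # or None (playing the role of "bot") if no such hypothesis exists.
    for f in hypotheses:
        if all(f(x) == y for x, y in examples):
            return f
    return None

# Example: hypotheses are threshold functions on the integers 0..9.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(10)]
print(consistency_oracle(hypotheses, [(2, 0), (7, 1)]))  # some f with threshold in (2, 7]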

In the Robust Collaborative Learning setting, we consider $k$ different data distributions $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_k$ supported on $\mathcal{X}$. A learning algorithm interacts with these distributions via user oracles $\mathcal{O}_1, \mathcal{O}_2, \ldots, \mathcal{O}_k$, each of which operates in one of two different modes: truthful or adversarial. Upon each call to a truthful oracle $\mathcal{O}_i$, a sample $x$ is drawn from distribution $\mathcal{D}_i$ and the labeled sample $(x, f^*(x))$ is returned. On the other hand, an adversarial oracle may output an arbitrary pair in $\mathcal{X} \times \{0, 1\}$ each time. (Our results hold even if the adversarial oracles are allowed to collude and know the samples previously drawn by truthful oracles.)

We define $(\varepsilon, \delta)$-learning in the Robust Collaborative Learning model as the task of learning an $\varepsilon$-accurate classifier for each truthful user with probability $1 - \delta$, under the assumption that at most an $\eta$ fraction of the $k$ oracles are adversarial.

Definition 1.1 ($(\varepsilon, \delta)$-learning).

Algorithm $\mathcal{A}$ is an $(\varepsilon, \delta)$-learning algorithm if, given a concept class $\mathcal{F}$ and access to $k$ user oracles among which at most $\eta k$ oracles are adversarial, $\mathcal{A}$ outputs functions $f_1, f_2, \ldots, f_k$, such that with probability at least $1 - \delta$, $\mathrm{err}_{\mathcal{D}_i}(f_i) \le \varepsilon$ holds simultaneously for every truthful oracle $\mathcal{O}_i$.

We also formally define the sample complexity of $(\varepsilon, \delta)$-learning.

Definition 1.2 (Sample Complexity).

Let $m(\mathcal{A}, \mathcal{F}, \mathcal{O}_1, \ldots, \mathcal{O}_k)$ denote the expected number of times that algorithm $\mathcal{A}$ calls oracles $\mathcal{O}_1, \ldots, \mathcal{O}_k$ in total, when it runs on hypothesis class $\mathcal{F}$ and user oracles $\mathcal{O}_1, \ldots, \mathcal{O}_k$. The sample complexity of $(\varepsilon, \delta)$-learning a concept class with VC-dimension $d$ from $k$ users is defined as:

$$m_{\varepsilon,\delta}(d, k, \eta) = \inf_{\mathcal{A}} \sup_{\mathcal{F},\, \mathcal{O}_1, \ldots, \mathcal{O}_k} m(\mathcal{A}, \mathcal{F}, \mathcal{O}_1, \ldots, \mathcal{O}_k).$$

Here the infimum is over all $(\varepsilon, \delta)$-learning algorithms $\mathcal{A}$. The supremum is taken over all hypothesis classes $\mathcal{F}$ with VC-dimension $d$ and user oracles $\mathcal{O}_1, \ldots, \mathcal{O}_k$, among which at most an $\eta$ fraction are adversarial.

The overhead of Robust Collaborative Learning is defined as the ratio between the sample complexity $m_{\varepsilon,\delta}(d, k, \eta)$ and its counterpart in the classic PAC learning setting, $m_{\varepsilon,\delta}(d, 1, 0)$. To simplify the notation and restrict our attention to the dependence of the overhead on parameters $d$, $k$ and $\eta$, we assume that $\varepsilon = \delta = 0.01$ in our definition of overhead. (This definition only changes by a constant factor when $0.01$ is replaced by other sufficiently small constants.)

Definition 1.3 (Overhead).

For $d, k \in \mathbb{N}$ and $\eta \in [0, 1)$, the sample complexity overhead of Robust Collaborative Learning is defined as

$$\mathrm{OH}(d, k, \eta) = \frac{m_{0.01,\,0.01}(d, k, \eta)}{m_{0.01,\,0.01}(d, 1, 0)},$$

where the denominator is the sample complexity of classic PAC learning on a single distribution.

Following our definition of the overhead, the results in [4] imply that when all users are truthful (i.e., when $\eta = 0$) and $k = O(d)$, $\mathrm{OH}(d, k, 0) = O(\ln k)$. They also proved the tightness of this bound in the special case that $k = \Theta(d)$.

1.2 Our Results

Information-theoretically, collaboration can be robust.

In Section 3, we present our main positive result: a learning algorithm that achieves an $O(\eta k + \ln k)$ sample complexity overhead when $k = O(d)$. Our result recovers the $O(\ln k)$ overhead upper bound due to Blum et al. [4] for the special case $\eta = 0$. In Section 4, we complement our positive result with a lower bound, which states that an $\Omega(\eta k)$ overhead is inevitable in the worst case. In light of the previous $\Omega(\ln k)$ overhead lower bound for the special case that $\eta = 0$ [4], our learning algorithm achieves an optimal $\Theta(\eta k + \ln k)$ overhead when parameters $d$ and $k$ differ by a bounded constant factor.

Our characterization of the sample complexity in Robust Collaborative Learning indicates that efficient cooperation is possible even if a small fraction of arbitrary outliers are present. Moreover, the overhead is largely determined by $\eta k$, the maximum possible number of adversaries. Our results suggest that for practical applications, the learning algorithm could greatly benefit from a relatively clean pool of data sources.

Computationally, outliers may ruin collaboration.

Our study focuses on the sample complexity of Robust Collaborative Learning, yet also important in practice is the amount of computational power required by the learning task. Indeed, the algorithm that we propose in Section 3 is inefficient due to an exhaustive enumeration of the set of truthful users, which takes exponential time. In Section 5, we provide evidence that hints at a time-sample complexity tradeoff in Robust Collaborative Learning. Informally, we conjecture that any learning algorithm with a sublinear overhead must run in super-polynomial time. In other words, while the presence of adversaries does not seriously increase the sample complexity of learning, it may still ruin the efficiency of collaboration by significantly increasing the computational burden of this learning task. We support our conjecture with known hardness results in computational complexity theory.

2 Related Work

Most related to our work is the recent Collaborative PAC Learning model proposed by Blum et al. [4]. They also considered the task of learning the same binary classifier on $k$ different data distributions, yet all users are assumed to be truthful in their model. In fact, the Robust Collaborative Learning model reduces to the personalized setting of their model when $\eta = 0$. Here the word “personalized” emphasizes the assumption that each user may receive a specialized classifier tailored to his distribution.

In addition to the personalized setting, they also studied the centralized setting, in which all the users should receive the same classifier. They proved that a poly-logarithmic overhead is still achievable in this more challenging setting. In our Robust Collaborative Learning model, however, centralized learning is in general impossible due to the indistinguishability between truthful and adversarial users. The following simple impossibility result holds for extremely simple concept classes and even when infinitely many samples are available.

Proposition 2.1.

For any $\varepsilon \in (0, 1)$, $\delta \in (0, 1/2)$ and $\eta > 0$, no algorithms $(\varepsilon, \delta)$-learn any concept class of VC-dimension at least $2$, under the restriction that all users should receive the same classifier.

Proof of Proposition 2.1.

Let $x_1$ and $x_2$ be two different samples that can be shattered by $\mathcal{F}$. Choose $f_1, f_2 \in \mathcal{F}$ such that $f_1(x_2) = 0$ and $f_2(x_2) = 1$; since $\{x_1, x_2\}$ is shattered, we may further require $f_1(x_1) = f_2(x_1)$. Let $k$ be large enough such that $\eta k \ge 1$. Construct (degenerate) distributions such that $\mathcal{D}_1(\{x_2\}) = \mathcal{D}_2(\{x_2\}) = 1$ and $\mathcal{D}_i(\{x_1\}) = 1$ for each $i \ge 3$.

Consider the following two cases:

  • The target function is $f_1$. The only adversarial user, user $2$, misleads the learning algorithm by outputting the labeled example $(x_2, 1)$.

  • The target function is $f_2$. The only adversarial user, user $1$, misleads the learning algorithm by outputting the labeled example $(x_2, 0)$.

Note that in both cases, oracles $\mathcal{O}_1$ and $\mathcal{O}_2$ always return $(x_2, 0)$ and $(x_2, 1)$ respectively, while all other oracles return $(x_1, f_1(x_1))$. Consequently, no algorithms can distinguish these two cases with success probability strictly greater than $1/2$. Moreover, since $\varepsilon < 1$ and the truthful user among the first two users has his distribution concentrated on $x_2$, the common classifier must label $x_2$ with $0$ in the first case and with $1$ in the second. Thus, any learning algorithm would have a failure probability of at least $1/2 > \delta$. ∎

A related line of research is multi-task learning [6, 1, 2, 3], which studies the problem of learning multiple related tasks simultaneously with significantly fewer samples. Most work in this direction assumes a certain relation (e.g., a transfer function) between the given learning tasks. In contrast to multi-task learning, our work focuses on the problem of learning the same classifier on multiple data distributions, without assuming any similarity between these underlying distributions.

Also relevant to our study is the work on robust statistics, i.e., the study of learning and estimation in the presence of noisy data and arbitrary outliers; see Lai et al. [11], Charikar et al. [7], Diakonikolas et al. [8, 9, 10] and the references therein for some recent work in this line of research. Classic problems in this regime include the estimation of the mean and covariance of a high-dimensional distribution, given a dataset consisting of samples drawn from the distribution and a small fraction of arbitrary outliers. Our model differs from this line of research in that we consider a general classification setting, and the learning algorithm is allowed to sample different sources adaptively, instead of learning from a given dataset of fixed size.

3 An Iterative Learning Algorithm

In this section, we present an iterative $(\varepsilon, \delta)$-learning algorithm that achieves an $O(\eta k + \ln k)$ overhead when $k = O(d)$. Here $k$ is the number of users, and $d$ denotes the VC-dimension of the hypothesis class $\mathcal{F}$. Since $\mathcal{F}$ can be large and even infinite, we assume that the algorithm accesses $\mathcal{F}$ via an oracle $\mathcal{O}_{\mathcal{F}}$ that, given a set $S$ of labeled examples, either returns a classifier $f \in \mathcal{F}$ such that $f(x) = y$ holds for each pair $(x, y) \in S$, or returns $\bot$ if $\mathcal{F}$ does not contain any consistent functions. The algorithm interacts with the underlying data distributions via example oracles $\mathcal{O}_1, \mathcal{O}_2, \ldots, \mathcal{O}_k$, among which at most an $\eta$ fraction are adversarial.

3.1 Algorithm

Our algorithm is formally described in Algorithms 1 through 3. The main algorithm proceeds in rounds and maintains a set $A_t$ of the indices of active users at the beginning of round $t$, i.e., users who have not received an $\varepsilon$-accurate classifier so far. When $\eta k$, the maximum possible number of adversaries, is below $|A_t|/8$, the algorithm invokes the grouping subroutine (Algorithm 2) to find a candidate classifier $\hat{f}_t$. Then, Algorithm 1 calls the validation procedure (Algorithm 3) to check whether $\hat{f}_t$ is accurate for each user $i \in A_t$ (with respect to accuracy threshold $3\varepsilon/4$). If so, the algorithm marks the output for user $i$ as $f_i = \hat{f}_t$; otherwise, user $i$ stays in set $A_{t+1}$ for the next round. When the proportion of adversaries potentially reaches $1/8$, the algorithm learns for the remaining (at most $8\eta k$) users independently: for each active user, it draws $m_{\varepsilon,\,\delta/(2k)}$ samples from his oracle and outputs an arbitrary classifier that is consistent with his data.

Input: Parameters $\varepsilon$, $\delta$, $k$, $d$, $\eta$.
Output: Classifiers $f_1, f_2, \ldots, f_k$.
$t \leftarrow 1$; $A_1 \leftarrow [k]$;
while $|A_t| > 8\eta k$ do
       $\delta_t \leftarrow \delta/(4t(t+1))$;
       $\hat{f}_t \leftarrow$ output of Algorithm 2 on $(A_t, \varepsilon, \delta_t)$;
       $A_{t+1} \leftarrow$ output of Algorithm 3 on $(A_t, \hat{f}_t, \varepsilon, \delta_t)$;
       Set $f_i \leftarrow \hat{f}_t$ for each $i \in A_t \setminus A_{t+1}$;
       $t \leftarrow t + 1$;
      
end while
for $i \in A_t$ do
       $S_i \leftarrow m_{\varepsilon,\,\delta/(2k)}$ labeled samples from $\mathcal{O}_i$;
       $f_i \leftarrow \mathcal{O}_{\mathcal{F}}(S_i)$;
      
end for
return $(f_1, f_2, \ldots, f_k)$;
Algorithm 1 Iterative Robust Collaborative Learning
Input: Index set $A$, parameters $\varepsilon$, $\delta$ and $\eta$.
Output: Candidate classifier $\hat{f}$.
$m \leftarrow \max\left\{m_{\varepsilon/6,\; \delta\cdot 2^{-|A|-1}},\; C|A|\ln(2|A|/\delta)\right\}$ for a sufficiently large constant $C$;
for $i \in A$ do
       $S_i \leftarrow \lceil 4m/|A| \rceil$ labeled samples from $\mathcal{O}_i$;
      
end for
$\mathcal{G} \leftarrow \{G \subseteq A : |G| = |A| - \lfloor \eta k \rfloor\}$;
for $G \in \mathcal{G}$ do
       $f \leftarrow \mathcal{O}_{\mathcal{F}}\left(\bigcup_{i \in G} S_i\right)$;
       if $f \ne \bot$ then return $f$;
      
end for
Algorithm 2 The grouping subroutine
Input: Set $A$ of indices, candidate function $f$, parameters $\varepsilon$ and $\delta$.
Output: Set of surviving indices.
for $i \in A$ do
       $T_i \leftarrow \lceil C\ln(2|A|/\delta)/\varepsilon \rceil$ labeled samples from $\mathcal{O}_i$;
       $\hat{e}_i \leftarrow \frac{1}{|T_i|}\sum_{(x, y) \in T_i} \mathbb{1}\left[f(x) \ne y\right]$;
      
end for
return $\{i \in A : \hat{e}_i > 3\varepsilon/4\}$;
Algorithm 3 The validation subroutine
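
To make the control flow concrete, the following is a simplified Python sketch of Algorithms 1 through 3 for a finite hypothesis class. All function names, the toy oracles, and the specific sample sizes are our own illustrative choices; the sketch mirrors the structure of the pseudocode rather than the exact constants of the analysis.

import itertools
import math
import random

def group(active, oracles, hypotheses, n_adv, m):
    # Algorithm 2 (sketch): spread roughly 4m samples over the active users,
    # then exhaustively search for a subset of size |A| - n_adv whose pooled
    # samples admit a consistent hypothesis.
    per_user = math.ceil(4 * m / len(active))
    samples = {i: [oracles[i]() for _ in range(per_user)] for i in active}
    for subset in itertools.combinations(active, len(active) - n_adv):
        pooled = [ex for i in subset for ex in samples[i]]
        for f in hypotheses:  # plays the role of the consistency oracle O_F
            if all(f(x) == y for x, y in pooled):
                return f
    raise RuntimeError("no consistent subset; cannot happen in the realizable setting")

def test(active, oracles, f, eps, m_test):
    # Algorithm 3 (sketch): a user survives (stays active) iff the empirical
    # error of f on fresh samples exceeds the threshold 3*eps/4.
    survivors = []
    for i in active:
        errors = sum(f(x) != y for x, y in (oracles[i]() for _ in range(m_test)))
        if errors / m_test > 3 * eps / 4:
            survivors.append(i)
    return survivors

def robust_collaborative_learn(k, oracles, hypotheses, eps, eta, m, m_test, m_single):
    # Algorithm 1 (sketch): iterate group/test while the adversary budget is
    # small relative to the active set; learn the remaining users separately.
    active, out = list(range(k)), {}
    while len(active) > 8 * eta * k:
        f_hat = group(active, oracles, hypotheses, n_adv=int(eta * k), m=m)
        still_active = test(active, oracles, f_hat, eps, m_test)
        for i in active:
            if i not in still_active:
                out[i] = f_hat
        active = still_active
    for i in active:  # individual learning for the remaining users
        data = [oracles[i]() for _ in range(m_single)]
        consistent = [f for f in hypotheses if all(f(x) == y for x, y in data)]
        out[i] = consistent[0] if consistent else hypotheses[0]  # arbitrary on "bot"
    return out

# Toy usage: 9 truthful users sharing a target, 1 label-flipping adversary.
if __name__ == "__main__":
    random.seed(0)
    domain = list(range(8))
    hypotheses = [lambda x, t=t: (x >> t) & 1 for t in range(3)]
    f_star = hypotheses[1]
    truthful = lambda: (lambda x: (x, f_star(x)))(random.choice(domain))
    adversarial = lambda: (lambda x: (x, 1 - f_star(x)))(random.choice(domain))
    oracles = {i: truthful for i in range(9)}
    oracles[9] = adversarial
    out = robust_collaborative_learn(10, oracles, hypotheses, eps=0.1, eta=0.1,
                                     m=40, m_test=200, m_single=40)
    print(all(out[i](x) == f_star(x) for i in range(9) for x in domain))  # True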

3.2 Analysis of Subroutines

The grouping subroutine (Algorithm 2) is the key to the sample efficiency of our algorithm, as it enables us to learn a candidate classifier that is accurate simultaneously for a constant fraction of the active users, using only a nearly-linear number of samples (with respect to parameters $d$ and $1/\varepsilon$). The validation subroutine (Algorithm 3) further checks whether the learned classifier is accurate enough for each active user. This allows us to determine whether a user should remain active in the next iteration. We devote this subsection to the analysis of these two subroutines.

Lemma 3.1.

Suppose $A$ denotes the indices of a set of users, among which at most $\eta k \le |A|/8$ are adversarial. Let $\hat{f}$ denote the output of Algorithm 2 on input $(A, \varepsilon, \delta)$. With probability $1 - \delta$, the following two conditions hold simultaneously for at least $|A|/2$ indices $i \in A$: (1) $\mathrm{err}_{\mathcal{D}_i}(\hat{f}) \le \varepsilon/2$; (2) oracle $\mathcal{O}_i$ is truthful.

The proof of Lemma 3.1 relies on the following technical claim, which enables us to relate the union of several equal-size datasets to the samples drawn from the uniform mixture of the corresponding distributions.

Claim 3.2.

Suppose $m = \Omega(n\ln(n/\delta))$ balls are thrown into $n$ bins independently and uniformly at random. Then with probability $1 - \delta$, no bins contain more than $2m/n$ balls.

Proof of Claim 3.2.

Let random variable $X$ denote the number of balls in a fixed bin, so $X/m$ is the average of $m$ i.i.d. Bernoulli random variables with mean $1/n$. The Chernoff bound implies that

$$\Pr\left[X > \frac{2m}{n}\right] \le \exp\left(-\frac{m}{3n}\right) \le \frac{\delta}{n},$$

where the last step holds if we choose a sufficiently large hidden constant in $m = \Omega(n\ln(n/\delta))$. The claim follows from a union bound over the $n$ bins. ∎
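
As a quick numerical illustration of Claim 3.2 (not part of the proof; the parameters below are arbitrary choices of ours), one can check that the maximum load stays below $2m/n$ once $m$ is comfortably in the $\Omega(n\ln(n/\delta))$ regime:

import random

def max_load(m, n):
    # Throw m balls into n bins uniformly at random; return the maximum load.
    bins = [0] * n
    for _ in range(m):
        bins[random.randrange(n)] += 1
    return max(bins)

random.seed(1)
n, m = 50, 2000  # m is well above n * ln(n / delta) for, say, delta = 0.01
loads = [max_load(m, n) for _ in range(100)]
print(2 * m / n, max(loads))  # the bound 2m/n = 80 exceeds every observed load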

Proof of Lemma 3.1.

Let $T$ denote the indices of truthful users in $A$. By assumption, $|T| \ge |A| - \eta k \ge \frac{7}{8}|A|$, and $\mathcal{F}$ contains a function (namely $f^*$) that is consistent with $\bigcup_{i \in T} S_i$. This guarantees that Algorithm 2 should return a classifier as the output when it reaches a subset $G \subseteq T$, so function $\hat{f}$ is well-defined.

Recall that in Algorithm 2, we set

$$m = \max\left\{m_{\varepsilon/6,\; \delta\cdot 2^{-|A|-1}},\; C|A|\ln\frac{2|A|}{\delta}\right\}$$

for a sufficiently large constant $C$.

Consider the following thought experiment. For each non-empty $G' \subseteq A$, we draw a sequence $\sigma^{G'} = (\sigma^{G'}_1, \sigma^{G'}_2, \ldots, \sigma^{G'}_m)$ of integers, each of which is chosen uniformly and independently at random from $G'$. We also draw $\lceil 4m/|A| \rceil$ samples from oracle $\mathcal{O}_i$ for each $i \in G'$. If all users in $G'$ are truthful, the samples together with $\sigma^{G'}$ naturally specify a realization of drawing $m$ samples from the uniform mixture distribution $\mathcal{D}_{G'} = \frac{1}{|G'|}\sum_{i \in G'}\mathcal{D}_i$: we arrange the samples drawn from each distribution $\mathcal{D}_i$ into a queue, and when we would like to draw the $j$-th sample, we simply take the sample at the front of queue $\sigma^{G'}_j$.

For a fixed non-empty subset $G' \subseteq A$ that only contains truthful users, the VC theorem implies that with probability $1 - \delta\cdot 2^{-|A|-1}$ (over the randomness in both the samples and the choice of $\sigma^{G'}$), when we draw $m$ samples from the uniform mixture $\mathcal{D}_{G'}$ as described above, any function $f \in \mathcal{F}$ that is consistent with the $m$ labeled samples satisfies $\mathrm{err}_{\mathcal{D}_{G'}}(f) \le \varepsilon/6$. By a union bound over the at most $2^{|A|}$ different sets $G'$, the above holds for every $G'$ simultaneously with probability $1 - \delta/2$.

Recall that in Algorithm 2, we first query each oracle $\mathcal{O}_i$ to obtain a “training set” $S_i$ of size $\lceil 4m/|A| \rceil$ for each $i \in A$. Then we find a set $G$ and a classifier $\hat{f}$ such that: (1) $|G| = |A| - \lfloor \eta k \rfloor$; (2) $\hat{f}$ is consistent with all labeled samples in $\bigcup_{i \in G} S_i$. Suppose that $G$ is the set associated with the output of Algorithm 2, and let $G' = G \cap T$. Note that $|G'| \ge |G| - \eta k \ge |A| - 2\eta k \ge \frac{3}{4}|A|$.

The crucial observation is that since

$$m = \Omega\left(|A|\ln\frac{|A|}{\delta}\right),$$

Claim 3.2 implies that with probability at least $1 - \delta/2$, each index $i \in G'$ appears less than $2m/|G'| \le 4m/|A|$ times in $\sigma^{G'}$. In other words, $S_i$ is a superset of the samples that are supposed to be drawn from $\mathcal{D}_i$ (in our thought experiment). Since $\hat{f}$ is consistent with $\bigcup_{i \in G'} S_i$, a union bound shows that with probability $1 - \delta$, we have

$$\mathrm{err}_{\mathcal{D}_{G'}}(\hat{f}) \le \frac{\varepsilon}{6}.$$

This further implies that $\mathrm{err}_{\mathcal{D}_i}(\hat{f}) \le \varepsilon/2$ holds for at least $|A|/2$ indices $i \in G'$; otherwise, we would have

$$\mathrm{err}_{\mathcal{D}_{G'}}(\hat{f}) = \frac{1}{|G'|}\sum_{i \in G'}\mathrm{err}_{\mathcal{D}_i}(\hat{f}) > \frac{1}{|G'|}\left(|G'| - \frac{|A|}{2}\right)\cdot\frac{\varepsilon}{2} \ge \frac{\varepsilon}{6},$$

which leads to a contradiction. Here the second step applies $|G'| \ge \frac{3}{4}|A|$. This proves the lemma. ∎

The following lemma, which directly follows from a Chernoff bound and a union bound, states that with probability $1 - \delta$, Algorithm 3 correctly determines whether $f$ has a small error on $\mathcal{D}_i$ for each truthful user $i$ in $A$.

Lemma 3.3.

Let $A'$ denote the output of Algorithm 3 on input $(A, f, \varepsilon, \delta)$. With probability $1 - \delta$, the following holds for every truthful $i \in A$ simultaneously: (1) if $\mathrm{err}_{\mathcal{D}_i}(f) \le \varepsilon/2$, $i \notin A'$; (2) if $\mathrm{err}_{\mathcal{D}_i}(f) > \varepsilon$, $i \in A'$.

Proof of Lemma 3.3.

Fix a truthful oracle $\mathcal{O}_i$ with $i \in A$. Recall that Algorithm 3 sets

$$|T_i| = \left\lceil \frac{C\ln(2|A|/\delta)}{\varepsilon} \right\rceil$$

for a sufficiently large constant $C$. Note that $\hat{e}_i$ is the average of $|T_i|$ independent Bernoulli random variables, each with mean $\mathrm{err}_{\mathcal{D}_i}(f)$. Thus, the Chernoff bound implies that with probability $1 - \delta/|A|$, the following two conditions hold simultaneously: (1) if $\mathrm{err}_{\mathcal{D}_i}(f) \le \varepsilon/2$, $\hat{e}_i \le 3\varepsilon/4$; (2) if $\mathrm{err}_{\mathcal{D}_i}(f) > \varepsilon$, $\hat{e}_i > 3\varepsilon/4$. The lemma follows from a union bound over all truthful $i \in A$. ∎

3.3 Correctness and Sample Complexity

Now we are ready to prove our main result.

Theorem 3.4.

For any $\varepsilon, \delta \in (0, 1)$, Algorithm 1 is an $(\varepsilon, \delta)$-learning algorithm and takes at most

$$O\left(\frac{(\eta k + \ln k)\, d\ln(1/\varepsilon) + k\ln(k/\delta)}{\varepsilon}\right)$$

samples.

By Theorem 3.4, the sample complexity reduces to $O(d\,(\eta k + \ln k))$ when $\varepsilon$ and $\delta$ are constants and $k \le cd$ for some constant $c$. Therefore, when $k = O(d)$, we have the following overhead upper bound: $\mathrm{OH}(d, k, \eta) = O(\eta k + \ln k)$.
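
Spelling out this computation: with $\varepsilon = \delta = 0.01$, the bound of Theorem 3.4 becomes $O((\eta k + \ln k)\,d + k\ln k)$, and the single-distribution sample complexity is $m_{0.01,\,0.01}(d, 1, 0) = \Theta(d)$ by standard PAC bounds, so

$$\mathrm{OH}(d, k, \eta) = \frac{O\big((\eta k + \ln k)\,d + k\ln k\big)}{\Theta(d)} = O\left(\eta k + \ln k + \frac{k\ln k}{d}\right) = O(\eta k + \ln k),$$

where the last step uses $k \le cd$, so that $k\ln k/d = O(\ln k)$.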

Proof of Theorem 3.4.

The proof proceeds by applying Lemmas 3.1 and 3.3 iteratively. In each round $t$, Lemma 3.1 guarantees that with probability $1 - \delta_t$, the learned classifier $\hat{f}_t$ has an error below $\varepsilon/2$ for at least $|A_t|/2$ truthful users. By Lemma 3.3, for each such distribution, the “validation error” should be below $3\varepsilon/4$, so these users will exit the algorithm by receiving $\hat{f}_t$ as the classifier, and the number of active users decreases by a factor of $2$. Therefore, the while-loop in Algorithm 1 terminates after at most $\lceil \log_2 k \rceil$ iterations. Moreover, by Lemma 3.3, every truthful user that exits receives an $\varepsilon$-accurate classifier. Finally, the algorithm satisfies the remaining active users by drawing $m_{\varepsilon,\,\delta/(2k)}$ samples from each of them. Thus, the VC theorem guarantees that for each such truthful user, the learned classifier is $\varepsilon$-accurate with probability at least $1 - \delta/(2k)$. By a union bound, with probability at least

$$1 - \sum_{t \ge 1} 2\delta_t - k\cdot\frac{\delta}{2k} \ge 1 - \frac{\delta}{2} - \frac{\delta}{2} = 1 - \delta,$$

Algorithm 1 returns an $\varepsilon$-accurate classifier for each truthful user.

It remains to bound the sample complexity of Algorithm 1. In round $t$, the number of active users is at most $|A_t| \le 2^{1-t}k$. Recall that $\delta_t = \delta/(4t(t+1))$. The number of samples that Algorithm 2 and Algorithm 3 draw in round $t$ is then upper bounded by

$$|A_t|\left\lceil\frac{4m}{|A_t|}\right\rceil + |A_t|\cdot O\left(\frac{\ln(|A_t|/\delta_t)}{\varepsilon}\right) = O\left(\frac{d\ln(1/\varepsilon) + |A_t|\ln(|A_t|/\delta_t)}{\varepsilon}\right).$$

Therefore, the number of samples drawn in the at most $\lceil\log_2 k\rceil$ iterations is upper bounded by:

$$\sum_{t=1}^{\lceil\log_2 k\rceil} O\left(\frac{d\ln(1/\varepsilon) + |A_t|\ln(|A_t|/\delta_t)}{\varepsilon}\right) = O\left(\frac{d\ln(1/\varepsilon)\ln k + k\ln(k/\delta)}{\varepsilon}\right). \quad (1)$$

When the while-loop in Algorithm 1 terminates, it holds that $|A_t| \le 8\eta k$. After that, we learn on the remaining distributions separately, using

$$8\eta k \cdot m_{\varepsilon,\,\delta/(2k)} = O\left(\frac{\eta k\left(d\ln(1/\varepsilon) + \ln(k/\delta)\right)}{\varepsilon}\right) \quad (2)$$

samples in total. Adding (1) and (2) gives the desired sample complexity upper bound. ∎

4 Overhead Lower Bound

In this section, we show that an $\Omega(\eta k)$ overhead is unavoidable when $\eta \ge 1/k$. Therefore, the $O(\eta k + \ln k)$ overhead achieved by Algorithm 1 is optimal up to a constant factor, when the number of users is commensurate with the complexity of the hypothesis class. Formally, we have the following theorem.

Theorem 4.1.

For any $\varepsilon \in (0, 1/2)$, $\delta \in (0, 1/2)$ and $\eta \ge 1/k$,

$$m_{\varepsilon,\delta}(d, k, \eta) = \Omega\left(\frac{\eta k d}{\varepsilon}\right).$$

Theorem 4.1 directly implies the following lower bound on the overhead:

$$\mathrm{OH}(d, k, \eta) = \Omega(\eta k).$$

Combining this with the previous $\Omega(\ln k)$ lower bound when $\eta = 0$ and $k = \Theta(d)$ [4] (their proof is for the special case that $\eta = 0$, yet it directly implies the same lower bound when $\eta > 0$), we obtain the desired worst-case lower bound of $\Omega(\eta k + \ln k)$.

Proof of Theorem 4.1.

Assume without loss of generality that $\eta k$ is an integer between $1$ and $k - 1$. We consider the binary classification problem on the set $\mathcal{X} = \{0, 1, \ldots, d\}$, while the hypothesis class $\mathcal{F}$ contains all the binary functions on $\mathcal{X}$ that map $0$ to $0$. The target function $f^*$ is chosen uniformly at random from $\mathcal{F}$.

Suppose that for $k - \eta k - 1$ truthful users, the data distribution is the degenerate distribution on $0$, so these truthful users provide no information on the correct classifier $f^*$. On the other hand, the data distribution $\mathcal{D}$ of the only remaining truthful user $i^*$ satisfies $\mathcal{D}(\{j\}) = 2\varepsilon/d$ for any $j \in \{1, 2, \ldots, d\}$ and $\mathcal{D}(\{0\}) = 1 - 2\varepsilon$. By construction, a learning algorithm must draw $\Omega(d/\varepsilon)$ samples from $\mathcal{O}_{i^*}$ in order to learn an $\varepsilon$-accurate classifier for $\mathcal{D}$ with a non-trivial success probability.
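
To spell out the $\Omega(d/\varepsilon)$ claim (a short counting argument consistent with the construction above): since each point $j \in \{1, \ldots, d\}$ carries probability mass $2\varepsilon/d$,

$$\mathrm{err}_{\mathcal{D}}(f) = \frac{2\varepsilon}{d}\cdot\big|\{j \in \{1, \ldots, d\} : f(j) \ne f^*(j)\}\big|,$$

so an $\varepsilon$-accurate classifier must agree with $f^*$ on at least $d/2$ of these points. As $f^*$ is uniformly random, every point that never appears in the sample is mislabeled with probability $1/2$, so the learner must observe $\Omega(d)$ distinct points of $\{1, \ldots, d\}$; each sample from $\mathcal{D}$ lands in this set with probability only $2\varepsilon$, which requires $\Omega(d/\varepsilon)$ draws.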

Now suppose that each of the $\eta k$ adversarial users tries to pretend that he is the truthful user $i^*$. More specifically, each adversarial user $i$ chooses a function $f_i \in \mathcal{F}$ uniformly at random, and answers the queries as if he were the truthful user $i^*$ with a different target function $f_i$. In other words, upon each request from the learning algorithm, oracle $\mathcal{O}_i$ draws $x$ from $\mathcal{D}$ and returns $(x, f_i(x))$.

Recall that the actual target function $f^*$ is also uniformly distributed in $\mathcal{F}$, so from the perspective of the learning algorithm, the truthful user $i^*$ is indistinguishable from the $\eta k$ adversarial users. Therefore, an $(\varepsilon, \delta)$-learning algorithm must guarantee that each of these $\eta k + 1$ users receives an $\varepsilon$-accurate classifier with probability at least $1 - \delta$. The sample complexity lower bound from PAC learning theory implies that we must draw $\Omega(d/\varepsilon)$ samples from each such user and thus

$$(\eta k + 1)\cdot\Omega\left(\frac{d}{\varepsilon}\right) = \Omega\left(\frac{\eta k d}{\varepsilon}\right)$$

samples in total. ∎

5 Discussion: A Computationally Efficient Algorithm?

Although Algorithm 1 is proved to achieve an optimal sample complexity overhead in certain cases, the algorithm is computationally inefficient and of limited practical use when there are a large number of users. In particular, the grouping subroutine (Algorithm 2) performs an exhaustive search over all user subsets of size $|A| - \lfloor \eta k \rfloor$, and thus may potentially call oracle $\mathcal{O}_{\mathcal{F}}$ exponentially many times. In contrast, the naïve approach that learns for the $k$ different users separately, though incurring a $\Theta(k)$ overhead, only makes $k$ calls to oracle $\mathcal{O}_{\mathcal{F}}$. Naturally, one may wonder whether we can achieve the best of both worlds by finding a computationally efficient learning algorithm with a small overhead. We conjecture that such an algorithm, unfortunately, does not exist.

Conjecture 5.1.

For any constants $\eta \in (0, 1)$ and $c > 0$, no learning algorithms that make polynomially many calls to oracle $\mathcal{O}_{\mathcal{F}}$ achieve an $O(k^{1-c})$ overhead when $k = \Theta(d)$.

In words, when there is a non-trivial number of adversaries, any efficient learning algorithm would incur a nearly-linear overhead. We remark that it is necessary to assume $\eta k = \omega(1)$, since when $\eta k$, the maximum possible number of adversaries, is a constant, the learning algorithm can enumerate the subsets of adversarial users in polynomial time, thus achieving an optimal overhead efficiently. Proving or refuting Conjecture 5.1 would greatly further our understanding of the impact of arbitrary outliers on collaborative learning.

The key to our sample-efficient learning algorithm is that the grouping subroutine (Algorithm 2) identifies a large user group such that some classifier $\hat{f}$ is consistent with all their labeled samples. Lemma 3.1 further guarantees that $\hat{f}$ is $(\varepsilon/2)$-accurate for at least half of the active users. This allows us to satisfy almost all the users in $O(\ln k)$ iterations, resulting in the $O(\ln k)$ term in the overhead.

We note that finding a group of users with consistent datasets generalizes the problem of finding a large clique in a graph: for an undirected graph $G = (V, E)$ with vertices labeled from $1$ to $n$, we construct the user oracles such that $\mathcal{O}_u$ and $\mathcal{O}_v$ produce conflicting labels on the same data point if the edge $(u, v)$ is absent from the graph. Then a group of users have consistent datasets if and only if they form a clique in the corresponding graph.
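
As an illustration, here is a minimal Python sketch of this reduction; the particular encoding of non-edges into conflicting labels is our own concrete choice:

import itertools

def clique_to_datasets(n, edges):
    # Each user u gets one labeled example per pair {u, v}: the data point is
    # the pair itself, and u, v assign it conflicting labels iff {u, v} is
    # NOT an edge. A user group has consistent datasets iff it is a clique.
    datasets = {u: [] for u in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        conflict = (u, v) not in edges and (v, u) not in edges
        datasets[u].append(((u, v), 0))
        datasets[v].append(((u, v), 1 if conflict else 0))
    return datasets

def consistent(users, datasets):
    # Check that the pooled dataset assigns a single label to each data point.
    seen = {}
    for u in users:
        for point, label in datasets[u]:
            if seen.setdefault(point, label) != label:
                return False
    return True

# A 4-vertex graph whose maximum clique is {0, 1, 2}.
datasets = clique_to_datasets(4, {(0, 1), (0, 2), (1, 2), (2, 3)})
print(consistent({0, 1, 2}, datasets), consistent({0, 1, 3}, datasets))  # True False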

Unfortunately, Zuckerman [13] proved that the maximum clique is NP-hard to approximate within a factor of $n^{1-\varepsilon_0}$ for any constant $\varepsilon_0 > 0$; in particular, even if the graph is known to contain a hidden clique of size $\Omega(n)$ (analogously, in our setting, we know that a large fraction of users have non-conflicting datasets), it is still NP-hard to find a clique of size $n^{\varepsilon_0}$ for any constant $\varepsilon_0 < 1$. This indicates that, following the approach of Algorithm 1, a computationally efficient algorithm can only find accurate classifiers for at most $k^{\varepsilon_0}$ users in each iteration. As a result, $k^{1-\varepsilon_0}$ iterations would be necessary to satisfy all the users. The algorithm consequently incurs a nearly-linear, $k^{1-o(1)}$, overhead.

References

  • Baxter [2000] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research (JAIR), 12(1):149–198, 2000.
  • Ben-David et al. [2002] Shai Ben-David, Johannes Gehrke, and Reba Schuller. A theoretical framework for learning from a pool of disparate data sources. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 443–449, 2002.
  • Ben-David et al. [2003] Shai Ben-David, Reba Schuller, et al. Exploiting task relatedness for multiple task learning. Lecture Notes in Computer Science (LNCS), pages 567–580, 2003.
  • Blum et al. [2017] Avrim Blum, Nika Haghtalab, Ariel D Procaccia, and Mingda Qiao. Collaborative PAC learning. In Advances in Neural Information Processing Systems (NIPS), pages 2389–2398, 2017.
  • Blumer et al. [1989] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM), 36(4):929–965, 1989.
  • Caruana [1997] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
  • Charikar et al. [2017] Moses Charikar, Jacob Steinhardt, and Gregory Valiant. Learning from untrusted data. In Symposium on Theory of Computing (STOC), pages 47–60, 2017.
  • Diakonikolas et al. [2016] Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high dimensions without the computational intractability. In Foundations of Computer Science (FOCS), pages 655–664, 2016.
  • Diakonikolas et al. [2017] Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Being robust (in high dimensions) can be practical. In International Conference on Machine Learning (ICML), pages 999–1008, 2017.
  • Diakonikolas et al. [2018] Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robustly learning a Gaussian: Getting optimal error, efficiently. In Symposium on Discrete Algorithms (SODA), pages 2683–2702, 2018.
  • Lai et al. [2016] Kevin A Lai, Anup B Rao, and Santosh Vempala. Agnostic estimation of mean and covariance. In Foundations of Computer Science (FOCS), pages 665–674, 2016.
  • Valiant [1984] Leslie G Valiant. A theory of the learnable. Communications of the ACM (CACM), 27(11):1134–1142, 1984.
  • Zuckerman [2006] David Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. In Symposium on Theory of Computing (STOC), pages 681–690, 2006.