Abstract
Domain generalization is the problem of machine learning when the training data and the test data come from different data domains. We present a simple theoretical model of learning to generalize across domains in which there is a metadistribution over data distributions, and those data distributions may even have different supports. In our model, the training data given to a learning algorithm consists of multiple datasets, each from a single domain drawn in turn from the metadistribution. We study this model in three different problem settings—a multi-domain Massart noise setting, a decision tree multi-dataset setting, and a feature selection setting—and find that computationally efficient, polynomial-sample domain generalization is possible in each. Experiments demonstrate that our feature selection algorithm indeed ignores spurious correlations and improves generalization.
Learn to Expect the Unexpected:
Probably Approximately Correct Domain Generalization
\coltauthor\NameVikas K. Garg \Emailvgarg@csail.mit.edu
\addrMIT
\AND\NameAdam Kalai \Emailadam.kalai@microsoft.com
\addrMicrosoft Research
\AND\NameKatrina Ligett \Emailkatrina@cs.huji.ac.il
\addrHebrew University
\AND\NameZhiwei Steven Wu
\Emailzsw@umn.edu
\addrUniversity of Minnesota
1 Introduction
Machine learning algorithms often fail to generalize in certain ways that come naturally to humans. For example, many people learn to drive in California, and after driving there for many years are able to drive in U.S. states they have never before visited, despite variations in roads and climate. However, even a simple road-sign machine learning classifier would likely have decreased accuracy when tested on out-of-state road signs.
More generally, a common problem in real-world machine learning is that the training data do not match the test data. One well-studied instance of this issue is situations where the training and test data are drawn from different distributions over the same data domain. We are interested in a somewhat different problem—situations where the training data and the test data come from different (though potentially overlapping) domains. A change in the data domain could occur because the underlying data distribution is changing over time, but it could also occur because an algorithm trained on data from a particular geographical location or context is later expected to perform in a different location or context.
While this problem of domain generalization has been studied empirically, our main contribution is a simple model of domain generalization in which theoretical results can be obtained. One challenge in formalizing the model is that arbitrary domain generalization is clearly impossible—an algorithm should not be expected to recognize a yield sign if it has never seen one (nor anything like it) before. We present a simple theoretical model of learning to generalize across domains in which there is a metadistribution over data distributions, and those data distributions may have different domains (in the mathematical sense). In our model, the training data given to a learning algorithm consists of multiple datasets with each dataset drawn conditional on a single domain. The learning algorithm is expected to perform well on future domains drawn from the same distribution.
For example, there might be a metadistribution over US states, and for each state there might be a distribution over examples, say, features based on image and location (latitude/longitude), taken in that state. The algorithm would be trained on multiple datasets—perhaps a Florida image dataset, a Wyoming image dataset, and an Alabama image dataset—and then would be expected to perform well not just on new images from Florida, Wyoming, and Alabama, but also on images from never-before-seen states. It may be, for example, that if each intersection were visited numerous times, location features would be useful for predicting which type of sign is where, because signs rarely move. However, such features may be seen not to generalize well across datasets.
We then investigate this model in three quite distinct settings, and demonstrate that we can leverage the multi-domain structure in the problem to derive computationally efficient and conceptually simple algorithms. Our first result focuses on a multi-domain variant of the Massart noise model (Massart and Nédélec, 2006), where there is a common target concept across different domains but each domain has a different noise rate in the labels. We provide a general reduction from computationally efficient learning in this model to PAC learning under random classification noise (Angluin and Laird, 1987). Our result can potentially provide new directions toward resolving open questions in the standard Massart noise model, where each individual example has its own label noise rate (Diakonikolas et al., 2019). See Section 4 for a discussion.
In our second result, we turn to another notoriously difficult computational problem—PAC learning decision trees. We make the assumption that there is a target decision tree that labels the examples across all domains, but the examples in each domain all belong to a single leaf of this tree. Under this assumption, we provide an efficient algorithm with runtime polynomial in $d$ and $s$, where $d$ denotes the dimension of the data and $s$ denotes the number of nodes in the target tree. (Without any assumption, the fastest known algorithm runs in time $d^{O(\log s)}$.)
Finally, our third result provides a simple algorithm for selecting features that are predictive across multiple domains. Our algorithm augments a black-box PAC learner with an additional correlation-based selection step based on data across different domains. To empirically demonstrate its effectiveness, we also evaluate our algorithm on the "Universities" dataset of webpages, for which the learning goal is to predict the category of each example (e.g., faculty, student, course, etc.). We show that our approach provides stronger cross-domain generalization than the standard baseline. As hypothesized, we find that features that are highly predictive in one university but not in another are in fact idiosyncratic; removing them improves prediction on data from further universities not in the training set.
We observe that our model of domain generalization enables two distinct advantages over the traditional PAC learning model. Most obviously, PAC-learned models do not come with any guarantee of performance on data points drawn from unobserved domains. Furthermore, the additional structure of training on multiple datasets enables in-sample guarantees that are not achievable in the PAC model.
2 Related Work
A rich literature sometimes known as domain adaptation (e.g., Daume III and Marcu (2006); Blitzer et al. (2006); Ben-David et al. (2007); Blitzer et al. (2008); Mansour et al. (2009a, b); Ben-David et al. (2010); Ganin and Lempitsky (2015); Tzeng et al. (2017); Morerio et al. (2018); Volpi et al. (2018a)) considers settings where the learner has access not only to labeled training data, but also to unlabeled data from the test domain. This is a quite different setting from ours; our learner is given no access to data from the test domain, either labeled or unlabeled.
There is also a rich literature (e.g., Li and Zong (2008); Luo et al. (2008); Crammer et al. (2008); Mansour et al. (2009c); Guo et al. (2018)) that does not always rely on unlabeled data from the test distribution, but rather leverages information about similarity between domains to produce labels for new points. Zhang et al. (2012), relatedly, study the distance between domains in order to draw conclusions about generalization.
Adversarial approaches have recently gained attention (e.g. Zhao et al., 2018), and in particular, Volpi et al. (2018b), like us, generalize to unseen domains, but they attack the problem of domain generalization by augmenting the training data with fictitious, “hard” points. There are also many other empirical approaches to the problem of domain generalization (e.g., Muandet et al., 2013; Khosla et al., 2012; Ghifary et al., 2015; Li et al., 2017; Finn et al., 2017; Li et al., 2018; Mancini et al., 2018; Balaji et al., 2018; Wang et al., 2019; Carlucci et al., 2019; Dou et al., 2019; Li et al., 2019).
There are of course many other related fields of study, including covariate shift (wherein the source and target data generally have different distributions of unlabeled points but the same labeling rule), concept drift and model decay (wherein the distribution over unlabeled points generally remains static, but the labeling rule drifts over time), and multitask learning (wherein the goal is generally to leverage access to multiple domains to improve performance on each of them, rather than generalizing to new domains).
3 Definitions
For mathematical notation, we let $[n]$ denote $\{1, 2, \ldots, n\}$ and $\mathbb{1}[P]$ denote the indicator function that is 1 if predicate $P$ holds and 0 otherwise. For a vector $v$, let $v_i$ denote the $i$th coordinate of $v$. Finally, let $\Delta(S)$ denote the set of probability distributions over set $S$. We now define our model of learning from independent datasets.
3.1 Generalizing from multiple domains
We consider a model of classification with datasets from independent domains, where the training data consists of $n$ datasets, each of $m$ examples. These datasets are chosen iid from a dataset distribution $\mu$ over $(X \times Y)^m$. In particular, it is assumed that there is a distribution $D \in \Delta(X \times Y \times Z)$, where $X$ is a set of examples, $Y$ is a set of labels, and $Z$ is a set of domains. Based on this $D$, $\mu$ selects $m$ labeled examples from a common latent domain as follows: a domain $z$ is picked from the marginal of $D$ over $Z$, and $(x_j, y_j)$ is picked from $D$ conditional on its domain being $z$, for $j = 1, \ldots, m$. For simplicity, in this paper we will focus on classification with equal-sized datasets and latent domains, but the model can be generalized to other models of learning, unequal dataset sizes, and observed domains. A domain-generalization learner $L$ takes training data divided into multiple datasets of examples as input and outputs a classifier $c: X \to Y$. $L$ is said to be computationally efficient if it runs in time polynomial in its input length.
The error of a classifier $c$ is denoted by $\mathrm{err}_D(c) = \Pr_{(x,y,z) \sim D}[c(x) \neq y]$, where the subscript $D$ may be omitted when clear from context. This can be thought of in two ways: it is the expected error on test datasets of examples, and it is also the average performance across domains, i.e., the error rate on a random example (from a random domain) drawn from $D$.
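To make the sampling process concrete, here is a minimal Python sketch of the data-generation model; the metadistribution, the domain sampler, and the toy "states" are all hypothetical stand-ins, not part of the formal model:

```python
import random

def draw_training_data(domain_weights, domain_sampler, n, m, seed=0):
    """Draw n datasets of m labeled examples each: for each dataset, a
    latent domain z is drawn from the metadistribution, then m examples
    are drawn iid conditional on that domain."""
    rng = random.Random(seed)
    domains = list(domain_weights)
    weights = [domain_weights[z] for z in domains]
    datasets = []
    for _ in range(n):
        # pick a latent domain z (not revealed to the learner)
        z = rng.choices(domains, weights=weights)[0]
        datasets.append([domain_sampler(z, rng) for _ in range(m)])
    return datasets

# Toy instance: two "states" whose feature distributions have disjoint
# supports but share the same labeling rule.
def sampler(z, rng):
    x = rng.uniform(0.0, 1.0) if z == "FL" else rng.uniform(2.0, 3.0)
    return (x, int(x >= 0.5))  # common target concept across domains

data = draw_training_data({"FL": 0.5, "WY": 0.5}, sampler, n=4, m=3)
```

Each dataset is internally single-domain, which is exactly the structure the algorithms in later sections exploit.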
We first define a model of sample-efficient learning, for large $n$, with respect to a family of classifiers $C$. Following the agnostic-learning definition of Kearns et al. (1992), we also consider an assumption $\mathcal{A} \subseteq \Delta(X \times Y \times Z)$, where $\mathcal{A}$ is a set of distributions over $X \times Y \times Z$. {definition}[Efficient Domain Generalization] A computationally-efficient domain-generalization learner $L$ is an efficient domain-generalization learner for classifiers $C$ over assumption $\mathcal{A}$ if there exist polynomials $n_0$ and $m_0$ such that, for all $\epsilon, \delta > 0$, all $D \in \mathcal{A}$, and all $n \geq n_0(m, 1/\epsilon, 1/\delta)$, $m \geq m_0(1/\epsilon, 1/\delta)$,
$$\Pr\left[\mathrm{err}_D(L(\text{training data})) \leq \min_{c \in C} \mathrm{err}_D(c) + \epsilon\right] \geq 1 - \delta.$$
Standard models of learning can be fit into this model using iid and noiseless assumptions: the iid assumption $\mathcal{A}_{\mathrm{iid}}$ requires that the domain $z$ is independent of the labeled example $(x, y)$, so that all examples are effectively iid draws from a single distribution, and the noiseless assumption $\mathcal{A}_{\mathrm{nl}}(C)$ requires that $y = c(x)$ with probability 1 for some $c \in C$.
In particular, agnostic learning can be defined as efficient domain-generalization learning subject to $\mathcal{A}_{\mathrm{iid}}$, while PAC learning (Valiant, 1984) can be defined as efficient domain-generalization learning with $\mathcal{A}_{\mathrm{iid}} \cap \mathcal{A}_{\mathrm{nl}}(C)$.
It is not difficult to see that Definition 3.1 is not substantially different from PAC and agnostic learning, when the number of datasets is large:
Observation
If $C$ is PAC learnable, then $C$ is efficiently domain-generalization learnable with the noiseless assumption $\mathcal{A}_{\mathrm{nl}}(C)$. If $C$ is agnostically learnable, then $C$ is efficiently domain-generalization learnable without assumption, i.e., with $\mathcal{A} = \Delta(X \times Y \times Z)$.
Simply take a PAC (or agnostic) learning algorithm for $C$ and run it on the first example in each dataset. Since these first examples are in fact iid draws from $D$, the guarantees of PAC (or agnostic) learning apply to the error on future examples drawn from $D$. This is somewhat dissatisfying, as one might hope that error rates would decrease as the number of data points per domain increases. This motivates the following definition, which considers the rate at which the error decreases separately in terms of the number of datasets $n$ and the number of examples per dataset $m$. {definition}[Dataset-efficient learning] A computationally-efficient learner $L$ is a dataset-efficient learner for classifiers $C$ over assumption $\mathcal{A}$ if there exist polynomials $n_0$ and $m_0$ such that, for all $\epsilon, \delta > 0$, all $D \in \mathcal{A}$, and all $n \geq n_0(1/\epsilon, 1/\delta)$, $m \geq m_0(1/\epsilon, 1/\delta)$,
$$\Pr\left[\mathrm{err}_D(L(\text{training data})) \leq \min_{c \in C} \mathrm{err}_D(c) + \epsilon\right] \geq 1 - \delta.$$
This definition requires fewer datasets than the previous definition, requiring a number of datasets that depends only on $\epsilon$ and $\delta$, regardless of $m$.
In PAC and agnostic learning, many problems have a natural complexity parameter $q$, where $X = X_q$, $Y = Y_q$, $C = C_q$, and $\mathcal{A} = \mathcal{A}_q$, such as $X_q = \{0,1\}^q$. In those cases, we allow the number of examples and datasets in the two definitions above to also grow polynomially with $q$. Also note that the set $\mathcal{A}$ can capture a host of other assumptions, such as a margin between positive and negative examples. It is not difficult to see that the model we use is equivalent to a metadistribution over domains paired with domain-specific distributions over labeled examples, where the domain-specific distributions would simply be the distribution $D$ conditioned on the given domain $z$. Finally, we assume that the chosen zones $z$ themselves are not given to the learner; this is without loss of generality, as the zones could be redundantly encoded in the examples $x$.
4 MultiDomain Massart Noise Model
In the Massart noise model (Massart and Nédélec, 2006), each individual example $x$ has its own label noise rate $\eta(x)$ that is at most a given upper bound $\eta_{\max} < 1/2$. Learning under this model is computationally challenging, and no efficient algorithms are known even for simple concept classes (Diakonikolas et al., 2019), despite the fact that the statistical complexity of learning in this model is no worse than that of learning with a uniform noise rate of $\eta_{\max}$. We study a multi-domain variant of the Massart model, in which the learner receives examples with noisy labels from multiple domains such that each domain has its own fixed noise rate. We demonstrate that by leveraging the cross-domain structure of the problem, we can obtain a broad class of computationally efficient algorithms. In particular, we provide a reduction from a multi-domain variant of the Massart noise model to PAC learning under random classification noise (Angluin and Laird, 1987). Let us first state the model formally as an assumption on the distribution $D$.
Assumption (multi-domain Massart noise).
There exists an unknown classifier $c^* \in C$ and an unknown noise rate function $\eta: Z \to [0, \eta_{\max}]$ such that the distribution $D$ over $(x, y, z)$ satisfies $\Pr[y \neq c^*(x) \mid x, z] = \eta(z)$. We assume the quantity $\eta_{\max} < 1/2$ is known to the learner.
Note that the minimal error rate $\mathrm{err}_D(c^*)$, achieved by the "true" classifier $c^*$, can be much smaller than $\eta_{\max}$. Our multi-domain variant is a generalization in the sense that the marginal distribution over labeled examples, ignoring zones, fits the Massart noise model. We will leverage the zone structure to provide a reduction from the learning problem in this model to PAC learning under classification noise, defined below.
PAC learning under classification noise (CN) (Angluin and Laird, 1987)
Let $D_X$ be a distribution over $X$. For any noise rate $\eta < 1/2$, the example oracle $\mathrm{EX}_{\mathrm{CN}}^{\eta}(c, D_X)$ on each call returns an example $(x, \tilde{y})$ by first drawing an example $x$ from $D_X$ and then drawing a random noisy label $\tilde{y}$ such that $\Pr[\tilde{y} \neq c(x)] = \eta$, where $\eta_b \geq \eta$ is a known upper bound with $\eta_b < 1/2$. The concept class $C$ is CN learnable if there exists a learning algorithm $L$ and a polynomial $p$ such that, for any distribution $D_X$ over $X$, any noise rate $\eta \leq \eta_b < 1/2$, and any $\epsilon, \delta \in (0, 1)$, the following holds: $L$ will run in time bounded by $p(1/\epsilon, 1/\delta, 1/(1 - 2\eta_b))$ and output a hypothesis $h$ that with probability at least $1 - \delta$ satisfies $\Pr_{x \sim D_X}[h(x) \neq c(x)] \leq \epsilon$.
Let $C$ be a concept class that is CN learnable. Then there exists an efficient domain-generalization learner for $C$ under the multi-domain Massart assumption. The basic idea behind the proof is to "denoise" the data from each dataset by training a classifier within each dataset and then using that classifier to label another held-out example from that zone. If that classifier has high accuracy, then with high probability the predicted labels will be correct. A noiseless classification algorithm can then be applied to the denoised data. {proof} Let $L_{\mathrm{CN}}$ be a CN learner for $C$ with runtime polynomial $p$. To leverage this learner to learn under the multi-domain Massart model, we will aim to simulate a noiseless example oracle. Let $c^*$ be the target concept, and let $\epsilon, \delta$ be the target accuracy parameters. We will first draw a collection of $n$ datasets $S_1, \ldots, S_n$ from $\mu$. For each dataset $S_i$, we will run the CN learner with a random subset of $S_i$ of size $m - 1$ as input and obtain a hypothesis $h_i$ such that with probability $1 - \delta'$,
$$\Pr_{x \sim D_{z_i}}[h_i(x) \neq c^*(x)] \leq \epsilon', \tag{1}$$
where $D_{z_i}$ denotes the conditional distribution over $X$ conditioned on the zone being $z_i$. By a union bound, we know that except with probability $n\delta'$, (1) holds for all $n$ datasets. We will condition on this level of accuracy (event $E_1$). Let $(x_i, y_i)$ denote an example in $S_i$ that was not used for learning $h_i$. This provides another dataset $S' = \{(x_i, h_i(x_i))\}_{i=1}^n$. Note that the $x_i$'s are i.i.d. draws from $D_X$, the marginal distribution of $D$ over $X$. Furthermore, by the accuracy guarantee of each $h_i$, we have $\Pr[h_i(x_i) \neq c^*(x_i)] \leq \epsilon'$. By a union bound, we know that except with probability $n\epsilon'$, $h_i(x_i) = c^*(x_i)$ for all $i$. We will condition on this event of correct labeling (event $E_2$). This means the examples in $S'$ can simulate random draws from a noiseless example oracle for $c^*$. Finally, we will run the CN learner (with noise rate 0) over the set $S'$, and by our choice of $n$, it will output a hypothesis $h$ with $\mathrm{err}(h) \leq \epsilon$ with probability at least $1 - \delta/3$ (event $E_3$). Finally, our learning guarantee follows by combining the failure probabilities of the three events with a union bound.
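As a concrete illustration, here is a minimal Python sketch of the reduction; the toy threshold learner standing in for both the CN learner and the noiseless learner is a hypothetical example, not part of the formal result:

```python
def threshold_learner(points):
    """Toy stand-in for a learner over the class {x > t}: pick the
    threshold t on a fixed grid that minimizes training error."""
    _, t = min((sum((x > t) != bool(y) for x, y in points), t)
               for t in [i / 10 for i in range(1, 10)])
    return lambda x: x > t

def denoise_and_learn(datasets, cn_learner, noiseless_learner):
    """Sketch of the proof's reduction: within each dataset (one domain,
    hence one fixed noise rate), fit h_i on all but one example via the
    CN learner, relabel the held-out example with h_i, and finally run
    a noiseless learner on the relabeled examples."""
    relabeled = []
    for S in datasets:
        train, (x_held, _noisy_y) = S[:-1], S[-1]
        h_i = cn_learner(train)                  # tolerates this domain's noise
        relabeled.append((x_held, h_i(x_held)))  # denoised label
    return noiseless_learner(relabeled)

# Two tiny datasets; the held-out example in S2 carries a flipped (noisy)
# label, which the per-domain classifier corrects before the final step.
S1 = [(0.2, 0), (0.45, 0), (0.7, 1), (0.9, 1), (0.4, 0)]
S2 = [(0.1, 0), (0.6, 1), (0.8, 1), (0.25, 0), (0.55, 0)]  # true label of 0.55 is 1
h = denoise_and_learn([S1, S2], threshold_learner, threshold_learner)
```

The final hypothesis `h` is trained only on denoised labels, so the noisy held-out label in `S2` does not affect it.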
Open problem in the (multidomain) Massart model.
An open question in the multidomain Massart noise model is whether there exists an efficient algorithm that only relies on a constant number of examples from each domain. If we can decrease the number of examples in each domain down to 1, we recover the standard Massart noise model. Thus, we view this as an intermediate step towards an efficient algorithm for the standard Massart model (Diakonikolas et al., 2019).
5 Decision Tree MultiDataset Model
We next consider learning binary decision trees on $\{0,1\}^d$. Despite years of study, there is no known polynomial-time PAC learner for decision trees, with the fastest known algorithm learning binary decision trees of size $s$ in time $d^{O(\log s)}$ (Hellerstein and Servedio, 2007). Formally, a decision tree $T$ is a rooted binary tree in which each internal node is annotated with an attribute $i \in [d]$, and the two child edges are annotated with 0 and 1, corresponding to the restrictions $x_i = 0$ and $x_i = 1$. Each leaf is annotated with a label $y \in \{0, 1\}$, and on input $x$ the classifier computes the function $T(x)$ that is the label of the leaf reached by starting at the root of the tree and following the corresponding restrictions.
Assumption (decision tree domains).
Let $C$ be the class of decision trees with at most $s$ leaves. The domains simply correspond to the leaves of the tree to which the (noiseless) examples belong. To make this assumption formal, let the set of domains $Z$ be the set of all $3^d$ possible conjunctions on $d$ variables (each variable can appear as positive, negative, or not at all). We identify each leaf in a tree with the domain $z = \ell_1 \wedge \cdots \wedge \ell_k$, where $k$ is the depth of the leaf and the literals $\ell_1, \ldots, \ell_k$ correspond to the annotations of the internal nodes and edges on the path to that leaf. Using this notation, the assumption is that there is a tree $T \in C$ for which, with probability 1 over $D$, every example $(x, y, z)$ satisfies: $z$ is the leaf to which $x$ belongs, i.e., the conjunction $z(x)$ holds, and $y = T(x)$, i.e., noiselessness.
Recall that the chosen domains themselves are not observed; otherwise, the learning problem would be trivial. Also note that the natural algorithm that tries to learn, for each dataset, a classifier distinguishing its examples from the examples in other datasets will not work, because multiple datasets may represent the same leaf (zone). Instead, we leverage the fact that conjunctions can be learned from positive examples alone.
In particular, we think of the decision tree simply as the union (OR) of the conjunctions corresponding to leaves labeled positively. It is known to be easy to PAClearn conjunctions from positive examples alone by outputting the largest consistent conjunction (Kearns et al., 1994, Section 1.3): the hypothesis given by the conjunction of the subset of possible terms that are consistent with every positively labeled example.

Input: training data $S_1, \ldots, S_n$, each of $m$ examples.

Let $I = \{i \in [n] : \text{the examples in } S_i \text{ are labeled positive}\}$.

For each $i \in I$, find the largest consistent conjunction $h_i$ for $S_i$.

Output the classifier $h(x) = \bigvee_{i \in I} h_i(x)$.
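A minimal Python sketch of this algorithm (the function and variable names are ours, not the paper's):

```python
def largest_consistent_conjunction(positives):
    """Largest conjunction consistent with a set of positive examples
    (Kearns et al., 1994, Sec. 1.3): keep literal x_j if every positive
    example has x[j] == 1, and literal NOT x_j if every one has x[j] == 0."""
    d = len(positives[0])
    pos_lits = [j for j in range(d) if all(x[j] == 1 for x in positives)]
    neg_lits = [j for j in range(d) if all(x[j] == 0 for x in positives)]
    return lambda x: (all(x[j] == 1 for j in pos_lits)
                      and all(x[j] == 0 for j in neg_lits))

def learn_union_of_leaves(datasets):
    """OR, over the all-positive datasets, of their largest consistent
    conjunctions; under the assumption, each dataset lies in one leaf."""
    hs = [largest_consistent_conjunction([x for x, _ in S])
          for S in datasets if all(y == 1 for _, y in S)]
    return lambda x: any(h(x) for h in hs)

# Two datasets: one from a positive leaf, one from a negative leaf.
S1 = [((0, 1, 1), 1), ((0, 1, 0), 1)]   # positive leaf: x0 = 0 AND x1 = 1
S2 = [((1, 0, 0), 0), ((1, 0, 1), 0)]
f = learn_union_of_leaves([S1, S2])
```

Because the learned conjunctions are conservative, `f` never predicts positively outside the union of the positive leaves it has seen, mirroring the no-false-positives step of the proof below.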
Let $d, s \geq 1$ and let $C$ be the family of binary decision trees of size at most $s$ on $\{0,1\}^d$. Then the above algorithm is an efficient domain-generalization learner for $C$ with complexity parameter $q = d + s$. For decision trees, the complexity of the class depends on both the number of variables and the size of the tree, hence we use $q = d + s$ as a complexity measure. {proof} For high-probability bounds, it suffices to guarantee an expected error rate of at most $\epsilon\delta$, for $n$ and $m$ at least some polynomial in $q/(\epsilon\delta)$, since Markov's inequality then implies error at most $\epsilon$ with probability at least $1 - \delta$.
First, it is not difficult to see that the algorithm will never have any false positives, i.e., it will never predict positively when the true label is negative. To see this, note that each positive prediction must arise from at least one $h_i$. As mentioned above, Kearns et al. (1994) show that the largest conjunction consistent with any set of (noiseless) positive data is conservative in that it never produces any false positives. Hence the above algorithm will never have any false positives.
We bound the expected rate of false negatives (which is equal to the expected error rate) by summing over leaves and using linearity of expectation. False negatives in a positive leaf $z$ can arise in two ways: (a) leaf $z$ was simply never chosen as a domain, or (b) leaf $z$ was chosen for some dataset $S_i$, but there is a term $t$ which occurs in $h_i$ but not in $z$, in which case any positive example from $z$ that violates $t$ will be a false negative. Moreover, these are the only types of false negatives. Hence, the expected rate of false negatives coming from a leaf $z$ of probability $p_z$ due to (a) is $p_z (1 - p_z)^n$, the fraction of examples from leaf $z$ times the probability that domain $z$ was never chosen. The expected rate of false negatives due to (b) is at most $p_z \cdot 2d/(m+1)$, again the probability of leaf $z$ times $2d/(m+1)$. To see why, note that there are at most $2d$ terms not in the true conjunction and, for each such term, the expected error contribution can be upper bounded by imagining picking $m + 1$ examples at random, $m$ for training and 1 for test. The probability that, among the $m + 1$ positive examples, the only example which would violate that term is the one chosen for test is at most $1/(m+1)$. Hence the expected rate of false negatives, and hence also the expected error rate, is at most
$$\sum_{z} \left( p_z (1 - p_z)^n + p_z \cdot \frac{2d}{m+1} \right) \leq \frac{s}{en} + \frac{2d}{m+1}. \tag{2}$$
The inequality above holds for the left term because $p(1-p)^n \leq 1/(en)$ for all $p \in [0, 1]$ and there are at most $s$ leaves, and for the right term because the $p_z$ sum to at most 1. Note that the above error rate is bounded by $\epsilon\delta$ if we have $n \geq 2s/(e\epsilon\delta)$ and $m + 1 \geq 4d/(\epsilon\delta)$, which completes our proof.
6 Feature Selection Using Domains
Finally, we use access to training data from multiple domains to aid in performing feature selection.
In this section, we fix $X = \{0,1\}^d$ and $Y = \{0,1\}$. For a set $R \subseteq [d]$, let $x_R$ denote the selected features of example $x$. Let $z_i$ denote the domain corresponding to training dataset $S_i$, for each $i \in [n]$. Define $\rho_j$ to be the correlation of $x_j$ and $y$ over $D$, and let $\rho_j^z$ denote the usual (Pearson) correlation coefficient of feature $x_j$ with $y$ conditioned on the example having domain $z$. Let $\hat{\rho}_j^i$ denote the empirical correlation of $x_j$ and $y$ on $S_i$.
The following algorithm (FUD) performs feature selection using domains.

Input: class $C$, parameters $\epsilon, \delta, \theta$, training data consisting of $n$ splits of $m$ examples each.

If the overall fraction of positive or negative examples is less than $\epsilon$ (massive class imbalance), stop and output the constant classifier 1 or 0, respectively.

For each variable $j \in [d]$, compute the empirical correlation $\hat{\rho}_j^i$ of $x_j$ and $y$ over each dataset $S_i$.

Let $R = \{j \in [d] : \hat{\rho}_j^i \geq \theta \text{ for all } i \in [n]\}$.

Find any $c \in C$ such that $c(x_R) = y$ for all training examples $(x, y)$, and output the classifier $x \mapsto c(x_R)$. If no such $c$ exists, output FAIL.
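The correlation-based selection step can be sketched in Python as follows; this is a simplified stand-in, and the threshold `theta` and all names are our hypothetical choices:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length numeric sequences
    (returns 0.0 when either sequence is constant)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    denom = (va * vb) ** 0.5
    return cov / denom if denom > 0 else 0.0

def fud_select(datasets, d, theta):
    """Keep feature j only if its empirical correlation with the label is
    at least theta in *every* training dataset; features predictive in
    some domains but not others are discarded as spurious."""
    return [j for j in range(d)
            if all(pearson([x[j] for x, _ in S], [y for _, y in S]) >= theta
                   for S in datasets)]

# Feature 0 predicts the label in both domains; feature 1 flips sign
# across domains and is rejected.
S1 = [((1, 1), 1), ((0, 0), 0), ((1, 1), 1), ((0, 0), 0)]
S2 = [((1, 0), 1), ((0, 1), 0), ((1, 0), 1), ((0, 1), 0)]
```

For instance, `fud_select([S1, S2], d=2, theta=0.5)` keeps only feature 0, since feature 1 is anti-correlated with the label in the second domain.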
Assumption

For $\theta > 0$, we define the Feature Selection assumption $\mathcal{A}_{\mathrm{FS}}$ to require that there exists a robust set of features $R^* \subseteq [d]$ such that:

Noiselessness: For some $c \in C$, $y = c(x_{R^*})$ with probability 1.

Independence: $x_{R^*}$ and $x_{[d] \setminus R^*}$ are independent over $D$.

Correlation: For all $j \in R^*$ and all domains $z$, $\rho_j^z \geq 1.1\,\theta$.

Idiosyncrasy: For all $j \notin R^*$, $\Pr_z[\rho_j^z \leq 0.9\,\theta] \geq 0.1$.
Note that the constants 1.1, 0.9, and 0.1 in the above assumption can be replaced by parameters, and the dependence of $n$ and $m$ on these parameters in the following theorem would be inverse polynomial. {theorem} For any $C$ of finite VC dimension $v$, any $\theta > 0$, and any $\epsilon, \delta > 0$, FUD is a dataset-efficient learner under assumption $\mathcal{A}_{\mathrm{FS}}$. In particular, it suffices to take $n$ and $m$ polynomial in $1/\epsilon$, $1/\delta$, $1/\theta$, $v$, and $\log d$.
Fix $D \in \mathcal{A}_{\mathrm{FS}}$. Note that by the noiselessness and independence assumptions, the fraction of positives is the same in each domain, i.e., $\Pr[y = 1 \mid z] = \Pr[y = 1]$. We first bound the failure probability of outputting the all-0 or all-1 classifier in the second step. If $\Pr[y = 1] \geq 2\epsilon$, the probability that the algorithm outputs the all-0 classifier is at most $\delta/4$ by multiplicative Chernoff bounds over the $nm$ labeled examples. Similarly, if $\Pr[y = 0] \geq 2\epsilon$, the probability we output the all-1 classifier is at most $\delta/4$. Conversely, if $\Pr[y = 1] \leq \epsilon/2$, then multiplicative Chernoff bounds also imply that with probability at least $1 - \delta/4$, we will output the all-0 classifier (and hence have error at most $\epsilon$), and similarly if $\Pr[y = 0] \leq \epsilon/2$.
Henceforth, let us assume that the fractions of positive and negative examples are both at least $\epsilon/2$.
Next, note that the set described in the assumption is uniquely determined for $D \in \mathcal{A}_{\mathrm{FS}}$. Call this set $R^*$. It suffices to show that with probability at least $1 - \delta/4$, we have $R = R^*$ for the set $R$ defined in the algorithm. This is because, if $R = R^*$, then by a standard VC bound of Haussler et al. (1991), since the data is iid and the total number of examples observed is $nm$, with probability at least $1 - \delta/4$ the error is at most $\epsilon$, because learning $c(x_{R^*})$ is standard PAC learning of $C$ over the features in $R^*$.
Using Lemma 6 below, a number of examples per dataset $m$ polynomial in $1/\theta$ and $\log(nd/\delta)$ suffices to estimate all $nd$ correlations accurately to within $0.1\,\theta$ with probability at least $1 - \delta/4$. Assuming this happens, all $j \in R^*$ will necessarily also be in $R$.
It remains to argue that with probability at least $1 - \delta/4$, no $j \notin R^*$ is in $R$. To see this, note that for each $j \notin R^*$, the Idiosyncrasy assumption means that with probability at most $0.9^n$ would there be no dataset $i$ for which $\rho_j^{z_i} \leq 0.9\,\theta$. Hence, by a union bound, with probability at least $1 - d \cdot 0.9^n$, there will simultaneously be, for each $j \notin R^*$, some dataset $i$ such that $\rho_j^{z_i} \leq 0.9\,\theta$. Since we are assuming that all correlations are estimated correctly to within $0.1\,\theta$, it is straightforward to see that $R = R^*$.
We now bound the number of examples needed to estimate correlations.
For any jointly distributed binary random variables $(A, B)$ with non-degenerate marginals, and for any $\epsilon, \delta > 0$, the probability that the empirical correlation coefficient of $N$ iid samples differs by more than $\epsilon$ from the true correlation is at most $\delta$, for $N$ polynomial in $1/\epsilon$ and $\log(1/\delta)$. The proof of this lemma is given in Appendix A.
Table 1: Statistics of the Universities data, by domain.

Domain | Pages | Faculty proportion | Bag density (student pages, faculty pages)
Cornell | 162 | 21% | 23% (22%, 28%)
Texas | 194 | 24% | 23% (23%, 22%)
Washington | 157 | 20% | 24% (24%, 20%)
Wisconsin | 198 | 21% | 23% (21%, 29%)
Test | 2,054 | 47% | 21% (22%, 21%)
7 Experiments
We conducted simple experiments to evaluate the quality of the features selected by our methodology from Section 6. We experimented with the "Universities" data set.
We summarize the statistics of our data in Table 1. Note that we computed the bag density of a domain as the average of the entries of the mean vector of the binary vectors in that domain. The respective densities for student and faculty pages are also shown. Note that the faculty proportion in the test data (47%) is about twice the proportion in any domain from the training data (where the fraction of faculty pages hovers around 20%). Thus, investigating this data for domain generalization is a worthwhile exercise.
We compare the performance of our algorithm with a standard feature-selection baseline. Specifically, the baseline selects words whose Pearson correlation coefficient with the training labels (i.e., faculty or student) is high. We implemented a regularized version of our feature selection algorithm FUD that penalizes features that have a large standard deviation (stdev) of the Pearson coefficient across the training domains. In other words, we computed scores $s_j = \mu_j - \lambda \sigma_j$, where $\mu_j$ and $\sigma_j$ are the mean and stdev of the Pearson coefficient of feature $j$ across the training domains, and selected the features with the highest scores $s_j$. We set the value of the regularization parameter $\lambda$ to 2. We call our regularized algorithm FSUS. We trained several classifiers, namely, decision tree, K-nearest neighbor, and logistic regression, on the features selected by each algorithm (using default values of hyperparameters in the Python sklearn library). The performance of the algorithms was measured in terms of the standard balanced error rate, i.e., the average of the prediction errors on each class. Besides the performance on test data, we also show the mean validation error to estimate the generalization performance on domains in the training set. Specifically, we first trained a separate classifier for each domain and measured its prediction error on the data from the other domains in the training set, and then averaged these errors to compute the estimate of validation error, denoted by (K=1) in Figure 1. Likewise, for K=2, classifiers were trained on data from two domains at a time and evaluated for performance on the other domains; similarly for K=3. As Figure 1 illustrates, our algorithm generally outperformed the baseline method, for different numbers of selected features (horizontal axis) and for different K, across classifiers. Note that instead of fixing the number of selected features beforehand, we could tune it based on the validation error. We found that the performance of our algorithm deteriorated only slightly using the tuned value. We omit the details for brevity.
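The regularized FSUS score described above can be computed as follows (a sketch; `corr_by_domain` holds the per-domain Pearson coefficients of each feature, and the names are ours):

```python
from statistics import mean, stdev

def fsus_scores(corr_by_domain, lam=2.0):
    """Score each feature by its mean per-domain correlation with the
    label, penalized by lam times the standard deviation of that
    correlation across the training domains (lam = 2 in the experiments)."""
    num_features = len(corr_by_domain[0])
    return [mean(row[j] for row in corr_by_domain)
            - lam * stdev([row[j] for row in corr_by_domain])
            for j in range(num_features)]

# Feature 0: equally correlated in both domains; feature 1: strong in one
# domain only, so it is penalized despite a similar average correlation.
scores = fsus_scores([[0.5, 0.9], [0.5, 0.1]])
```

Features are then ranked by score, so a feature must be consistently correlated across training domains to be selected.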
These empirical findings corroborate our theoretical results, suggesting the benefits of leveraging multiple training domains for generalization.
Figure 2 shows a scatterplot of the correlations of features (words, in this instance) and the robustness of this correlation across datasets. Interestingly, one of the most correlated features was the token "19", which was later discovered to be correlated in certain datasets simply because the student webpages at certain universities were downloaded at 7pm (i.e., hour 19), and the data files included header information that revealed the download times. It is normally considered the job of a data scientist to decide to ignore features such as data collection time, but this illustrates how our algorithm identified this problem automatically using the idea of robustness across domains.
8 Conclusions and Open Directions
The goal of this paper is to suggest a simple theoretical model of domain generalization, and to demonstrate its power to obtain results that leverage access to multiple domains.
Even in settings where training data are not explicitly partitioned into domains, ideas from this work can potentially be helpful in developing algorithms that will be robust to unfamiliar data. One approach is to create splits of the training data based on clustering it or dividing it along settings of its variables, such that a domain expert believes that the resulting division into splits may be analogous to future changes in the data to be handled. (Some of the training data could even potentially be used to test out the usefulness of a candidate partition into splits.)
Appendix A Proving bounds on correlation coefficients
This section includes the proof of Lemma 6.
[Proof of Lemma 6] Let $\rho$ and $\hat{\rho}$ be the correlation coefficient and the empirical correlation of $(A, B)$ on a sample of size $N$. Let $\hat{p}_A$ and $\hat{p}_B$ be the corresponding realized empirical fractions (of $A = 1$ and of $B = 1$) over the samples.
By Chernoff bounds, for any , the probability that is at most for . Hence, with probability , for and for all . We now argue that if this happens, then .
As shorthand, let $p_{ab} = \Pr[A = a, B = b]$ and let $\hat{p}_{ab}$ be the analogous empirical quantities. It may be helpful for the reader to draw a 2x2 table of the possible values of $(A, B)$ and the associated probabilities.
Case 1: . In this case we use and argue that both . To see this, the definition of correlation coefficient applied to binary random variables means that correlation can be written as
(3) 
and similarly for . Since all quantities are nonnegative, we can remove terms to get
and similarly for . In turn this implies that . Since , we have that and since we have that , in turn implying . Hence,
Similarly, for , we have
This upper bound is greater than the one we have for . Hence,
In the above we have used the fact that .
Case 2: . We use the fact that, given that ,
(4) 
because is increasing in and decreasing in . From (3), one can see that
(5) 
The bounds on imply that
(6) 
In the last step above, we used the fact that since . Similarly to (4), we have
Combining with a similar lower bound to (6) gives
Applying the same argument replacing with and substituting into (5) gives
By assumption hence and , and since , we have . Combining with the above gives
Case 3: . Replacing by negates and also negates . This transformation swaps with and with but preserves . Hence, we can use the prior two cases which cover .
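As an informal numerical sanity check of the concentration phenomenon formalized above (illustrative code, not part of the paper), one can compare the population correlation of two binary variables, computed directly from the $2\times2$ cell probabilities, with the empirical correlation on a large sample:

```python
import numpy as np

def binary_corr(p11, p10, p01, p00):
    """Correlation coefficient of binary (x, y) from the 2x2 cell probabilities."""
    p1_, p_1 = p10 + p11, p01 + p11                 # marginals Pr[x=1], Pr[y=1]
    num = p11 * p00 - p10 * p01                     # covariance of binary variables
    den = np.sqrt(p1_ * (1 - p1_) * p_1 * (1 - p_1))
    return num / den

# Population cell probabilities and the induced correlation.
p = {"11": 0.30, "10": 0.20, "01": 0.10, "00": 0.40}
rho = binary_corr(p["11"], p["10"], p["01"], p["00"])

# Empirical correlation on m samples concentrates around rho.
rng = np.random.default_rng(1)
m = 200_000
cells = rng.choice(4, size=m, p=[p["11"], p["10"], p["01"], p["00"]])
x = (cells <= 1).astype(float)                     # cells 0 and 1 have x = 1
y = ((cells == 0) | (cells == 2)).astype(float)    # cells 0 and 2 have y = 1
rho_hat = np.corrcoef(x, y)[0, 1]
gap = abs(rho_hat - rho)                           # small for large m, as the lemma predicts
```

Here the cell probabilities are arbitrary illustrative values; the gap between the empirical and population correlation shrinks at roughly the $1/\sqrt{m}$ rate one would expect from the Chernoff argument.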
Footnotes
 For example, for the two positive examples $110$ and $100$, the largest consistent conjunction is $x_{1}\wedge\bar{x}_{3}$.
 Available at: http://www.cs.cmu.edu/afs/cs/project/theo-20/www/data/
References
 Learning from noisy examples. Machine Learning 2 (4), pp. 343–370.
 MetaReg: towards domain generalization using meta-regularization. In Advances in Neural Information Processing Systems, pp. 998–1008.
 A theory of learning from different domains. Machine Learning 79 (1–2), pp. 151–175.
 Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems, pp. 137–144.
 Learning bounds for domain adaptation. In Advances in Neural Information Processing Systems, pp. 129–136.
 Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pp. 120–128.
 Domain generalization by solving jigsaw puzzles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2229–2238.
 Learning from multiple sources. Journal of Machine Learning Research 9 (Aug), pp. 1757–1774.
 Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research 26, pp. 101–126.
 Distribution-independent PAC learning of halfspaces with Massart noise. In Advances in Neural Information Processing Systems 32, pp. 4751–4762.
 Domain generalization via model-agnostic learning of semantic features. In Advances in Neural Information Processing Systems, pp. 6447–6458.
 Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1126–1135.
 Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pp. 1180–1189.
 Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2551–2559.
 Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4694–4703.
 Equivalence of models for polynomial learnability. Information and Computation 95 (2), pp. 129–161.
 On PAC learning algorithms for rich Boolean function classes. Theoretical Computer Science 384 (1), pp. 66–76.
 An Introduction to Computational Learning Theory. MIT Press.
 Toward efficient agnostic learning. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pp. 341–352.
 Undoing the damage of dataset bias. In European Conference on Computer Vision, pp. 158–171.
 Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5542–5550.
 Learning to generalize: meta-learning for domain generalization. In Thirty-Second AAAI Conference on Artificial Intelligence.
 Episodic training for domain generalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1446–1455.
 Multi-domain sentiment classification. In Proceedings of ACL-08: HLT, Short Papers, pp. 257–260.
 Transfer learning from multiple source domains via consensus regularization. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, pp. 103–112.
 Best sources forward: domain generalization through source-specific nets. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 1353–1357.
 Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems, pp. 1041–1048.
 Domain adaptation: learning bounds and algorithms. In 22nd Conference on Learning Theory (COLT 2009).
 Multiple source adaptation and the Rényi divergence. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp. 367–374.
 Risk bounds for statistical learning. The Annals of Statistics 34 (5), pp. 2326–2366.
 Minimal-entropy correlation alignment for unsupervised deep domain adaptation. In International Conference on Learning Representations.
 Domain generalization via invariant feature representation. In International Conference on Machine Learning, pp. 10–18.
 Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176.
 A theory of the learnable. Communications of the ACM 27, pp. 1134–1142.
 Adversarial feature augmentation for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5495–5504.
 Generalizing to unseen domains via adversarial data augmentation. In Advances in Neural Information Processing Systems 31.
 Learning robust representations by projecting superficial statistics out. In International Conference on Learning Representations.
 Generalization bounds for domain adaptation. In Advances in Neural Information Processing Systems, pp. 3320–3328.
 Adversarial multiple source domain adaptation. In Advances in Neural Information Processing Systems, pp. 8559–8570.