Learning to Transfer Examples for Partial Domain Adaptation
Domain adaptation is critical for learning in new and unseen environments. With domain adversarial training, deep networks can learn disentangled and transferable features that effectively diminish the dataset shift between the source and target domains for knowledge transfer. In the era of Big Data, the ready availability of large-scale labeled datasets has stimulated wide interest in partial domain adaptation (PDA), which transfers a recognizer from a labeled large domain to an unlabeled small domain. It extends standard domain adaptation to the scenario where target labels are only a subset of source labels. Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer. In this work, we propose a unified approach to PDA, Example Transfer Network (ETN), which jointly learns domain-invariant representations across source and target domains, and a progressive weighting scheme that quantifies the transferability of source examples while controlling their importance to the learning task in the target domain. A thorough evaluation on several benchmark datasets shows that our approach achieves state-of-the-art results for partial domain adaptation tasks.
Deep neural networks have significantly advanced the state-of-the-art performance for various machine learning problems [13, 15] and applications [11, 20, 30]. A common prerequisite of deep neural networks is rich labeled data to train a high-capacity model with sufficient generalization power. Such rich supervision is often prohibitive in real-world applications due to the huge cost of data annotation. Thus, to reduce the labeling cost, there is a strong need to develop versatile algorithms that can leverage rich labeled data from a related source domain. However, this domain adaptation paradigm is hindered by the dataset shift underlying different domains, which forms a major bottleneck to adapting category models to novel target tasks [29, 36].
A major line of the existing domain adaptation methods bridge different domains by learning domain-invariant feature representations in the absence of target labels, i.e. unsupervised domain adaptation. Existing methods assume that the source and target domains share the same set of class labels [32, 12], which is crucial for the classifier trained on the source domain to be directly applicable to the target domain. Recent studies in deep learning reveal that deep networks can disentangle explanatory factors of variations behind domains [8, 42], thus learning more transferable features to improve domain adaptation significantly. These deep domain adaptation methods typically embed distribution matching modules, including moment matching [38, 21, 22, 23] and adversarial training [10, 39, 37, 24, 17], into deep architectures for end-to-end learning of transferable representations.
Although these methods can close the domain shift in the feature space, they assume an identical set of category labels across the source and target domains. In real-world applications, it is often formidable to find a relevant dataset with a label space exactly identical to the target dataset of interest, making "label-space engineering" a cumbersome process. A more practical scenario is Partial Domain Adaptation (PDA) [5, 43, 6], which assumes that the source label space is a superspace of the target label space, relaxing the constraint of identical label spaces made by standard domain adaptation methods [21, 10]. PDA enables knowledge transfer from a big domain of many labels to a small domain of few labels. With the emergence of Big Data, many large-scale labeled datasets such as ImageNet-1K and Google Open Images are readily accessible to empower data-driven artificial intelligence. These repositories are nearly universal and thus likely to subsume the categories of a given target domain, making partial domain adaptation feasible for many applications. It is intriguing to leverage a universal source domain for learning a variety of small target domains in an on-the-fly way.
As a generalization of standard domain adaptation, partial domain adaptation is more challenging in that the target labels are unknown at training, and there may be many "outlier" source classes that are not useful for the target task, but we do not know which they are. This technical challenge is intuitively illustrated in Figure 1(a), where outlier source classes (like pencil) will be forcefully misaligned to non-corresponding target classes (like pen) by existing domain adaptation methods. As a result, negative transfer will happen when the source classifier is used to discriminate the target classes. Negative transfer is the dilemma that a transfer learner migrates harmful knowledge from the source domain to the target domain and performs worse than a source-only classifier; it is the key obstacle to the wide application of domain adaptation techniques.
Thus, matching the whole source and target domains as in previous methods [21, 10] is not a safe solution to the PDA problem. We need to develop algorithms versatile enough to transfer useful examples from the many-class dataset (source domain) to the few-class dataset (target domain) while remaining robust to irrelevant or outlier examples. The three earliest approaches to partial domain adaptation [5, 43, 6] address this goal by weighing each data point in domain-adversarial networks, where a domain discriminator is learned to distinguish the source and target. While decreasing the impact of irrelevant examples on domain alignment, they do not undo the negative effect of the outlier classes on the source classifier. Moreover, they match the feature distributions without considering the underlying discriminative and multimodal structures. As a result, they remain vulnerable to aligning the features of outlier source classes with target classes, giving way to negative transfer.
Towards a safe approach to partial adaptation, this work presents an Example Transfer Network (ETN), which improves the previous work [5, 43, 6] by learning to transfer useful examples. ETN automatically quantifies the transferability of source examples based on their similarities to the target domain, while simultaneously weighing their contributions to both the source classifier and the domain discriminator. In the meantime, ETN matches the feature distributions of transferable source examples to the target domain by further revealing the discriminative information to the domain discriminator. By this means, the irrelevant source examples belonging to the outlier classes can be potentially detected and filtered out from both the source classifier and the domain-adversarial network. A key improvement of ETN over the previous methods is the capability to simultaneously confine the source classifier and the domain-adversarial network within the auto-discovered shared label space, thus promoting positive transfer of relevant examples and mitigating negative transfer of irrelevant examples. Comprehensive experiments demonstrate that our model achieves state-of-the-art results on several benchmark datasets, including Office-31, Office-Home, ImageNet-1K and Caltech-256.
2 Related Work
Domain adaptation, a special scenario of transfer learning, bridges domains of different distributions to mitigate the burden of annotating target data for machine learning [28, 9, 44, 41], computer vision [32, 12, 16] and natural language processing. The main technical difficulty of domain adaptation is to formally reduce the distribution discrepancy across different domains. Deep networks can learn representations that suppress explanatory factors of variations behind data and manifest invariant factors across different populations. These invariant factors enable knowledge transfer across relevant domains. Deep networks have been extensively explored for domain adaptation [27, 16], yielding significant performance gains against shallow domain adaptation methods.
While deep representations can disentangle complex data distributions, recent advances reveal that they can only reduce, but not remove, the cross-domain discrepancy . Thus deep learning alone cannot bound the generalization risk for the target task [25, 1]. Recent works bridge deep learning and domain adaptation [38, 21, 10, 39, 22]. They extend deep networks to domain adaptation by adding adaptation layers through which high-order statistics of distributions are explicitly matched [38, 21, 22], or by adding a domain discriminator to distinguish features of the source and target domains, while the features are learned adversarially to deceive the discriminator in a minimax game [10, 39].
Partial Domain Adaptation
While evident advances have been achieved in standard domain adaptation, these methods still rely on the vanilla assumption that source and target domains share the same label space. This assumption does not hold in partial domain adaptation (PDA), which transfers models from many-class domains to few-class domains. There are three valuable efforts towards the PDA problem. Selective Adversarial Network (SAN) adopts multiple adversarial networks with a weighting mechanism to filter out source examples in the outlier classes. Partial Adversarial Domain Adaptation (PADA) improves SAN by employing only one adversarial network and further adding the class-level weights to the source classifier. Importance Weighted Adversarial Nets (IWAN) uses the sigmoid output of an auxiliary domain classifier (not involved in domain-adversarial training) to estimate the probability of a source example belonging to the target domain, which is used to weigh source examples in the domain-adversarial network. These pioneering approaches achieve dramatic performance gains over standard methods when applied to partial domain adaptation tasks.
These valuable efforts mitigate negative transfer caused by outlier source classes and promote positive transfer among shared classes. However, as outlier classes are only filtered out for the domain discriminators, the source classifier is still trained with all classes, so its performance on the shared classes may be degraded by the outlier classes. Furthermore, the domain discriminator of IWAN for obtaining the importance weights distinguishes the source and target domains based only on the feature representations, without exploiting the discriminative information in the source domain. This results in importance weights that are not discriminative enough to distinguish shared classes from outlier classes. This paper proposes an Example Transfer Network (ETN) that further down-weighs the irrelevant examples of outlier classes on the source classifier and adopts a discriminative domain discriminator to quantify the example transferability.
Open-Set Domain Adaptation
On par with domain adaptation, research has been dedicated to open set recognition, whose goal is to reject outliers while correctly recognizing inliers during testing. Open Set SVM trains a probabilistic SVM and rejects unknown samples by a threshold. Open Set Neural Network generalizes deep neural networks to open set recognition by introducing an OpenMax layer, which estimates the probability of an input belonging to an unknown class and rejects unknown points by a threshold. Open Set Domain Adaptation (OSDA) [4, 33] tackles the setting where the training and testing data come from different distributions and label sets. Since this scenario is generally challenging, OSDA assumes that the classes shared by the source and target domains are known at training. Unlike OSDA, in our scenario the target classes are entirely unknown at training. It would be interesting to extend our work to the open-set scenario under the generic assumption that all target classes are unknown.
3 Example Transfer Network
The scenario of partial domain adaptation (PDA) constitutes a source domain $\mathcal{D}_s = \{(\mathbf{x}_i^s, \mathbf{y}_i^s)\}_{i=1}^{n_s}$ of $n_s$ labeled examples associated with $|\mathcal{C}_s|$ classes and a target domain $\mathcal{D}_t = \{\mathbf{x}_j^t\}_{j=1}^{n_t}$ of $n_t$ unlabeled examples drawn from $|\mathcal{C}_t|$ classes. Note that in PDA the source domain label space is a superspace of the target domain label space, i.e. $\mathcal{C}_s \supset \mathcal{C}_t$. The source and target domains follow different probability distributions $p$ and $q$ respectively. Besides $p \neq q$ as in standard domain adaptation, we further have $p_{\mathcal{C}_t} \neq q$ in partial domain adaptation, where $p_{\mathcal{C}_t}$ denotes the distribution of the source domain data belonging to label space $\mathcal{C}_t$. The goal of PDA is to learn a deep network that enables end-to-end training of a transferable feature extractor $F$ and an adaptive classifier $G$ to sufficiently close the distribution discrepancy across domains and bound the target risk.
We incur deteriorated performance when directly applying the source classifier trained with standard domain adaptation methods to the target domain. In partial domain adaptation, it is difficult to identify which part of the source label space $\mathcal{C}_s$ is shared with the target label space $\mathcal{C}_t$, because the target domain is fully unlabeled and $\mathcal{C}_t$ is unknown at the training stage. Under this condition, most existing deep domain adaptation methods [21, 10, 39, 22] are prone to negative transfer, a degenerated case where the classifier with adaptation performs even worse than the classifier without adaptation. Negative transfer happens because they assume that the source and target domains have identical label spaces and match the whole distributions $p$ and $q$, even though $p_{\mathcal{C}_s \backslash \mathcal{C}_t}$ and $q$ are non-overlapping and cannot be matched in principle. Thus, decreasing the negative effect of the source examples in the outlier label space $\mathcal{C}_s \backslash \mathcal{C}_t$ is the key to mitigating negative transfer in partial domain adaptation. Besides, we also need to reduce the distribution shift across $p_{\mathcal{C}_t}$ and $q$ to enhance positive transfer in the shared label space $\mathcal{C}_t$. Note that the irrelevant source examples may come from both outlier classes and shared classes, thus requiring a versatile algorithm to identify them.
3.1 Transferability Weighting Framework
The key technical problem of domain adaptation is to reduce the distribution shift between the source and target domains. Domain adversarial networks [10, 39] tackle this problem by learning transferable features in a two-player minimax game: the first player is a domain discriminator trained to distinguish the feature representations of the source domain from the target domain, and the second player is a feature extractor trained simultaneously to deceive the domain discriminator.
Specifically, the domain-invariant features are learned in a minimax optimization procedure: the parameters $\theta_F$ of the feature extractor $F$ are trained by maximizing the loss of the domain discriminator $D$, while the parameters $\theta_D$ of the domain discriminator are trained by minimizing the loss of the domain discriminator. Note that our goal is to learn a source classifier $G$ that transfers to the target, hence the loss of the source classifier is also minimized. This leads to the optimization problem of domain-adversarial training [10]:
$$E(\theta_F, \theta_G, \theta_D) = \frac{1}{n_s}\sum_{\mathbf{x}_i \in \mathcal{D}_s} L\big(G(F(\mathbf{x}_i)), \mathbf{y}_i\big) - \frac{1}{n}\sum_{\mathbf{x}_i \in \mathcal{D}} L_d\big(D(F(\mathbf{x}_i)), d_i\big) \tag{1}$$
where $\mathcal{D} = \mathcal{D}_s \cup \mathcal{D}_t$ is the union of the source and target domains, $n = n_s + n_t$ is the total number of examples, and $d_i$ is the domain label of $\mathbf{x}_i$; $L$ and $L_d$ are the cross-entropy loss functions.
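As a concrete illustration of this objective, the following sketch computes the source classification loss minus the domain-discriminator loss over both domains. It is plain Python, not the paper's PyTorch implementation; all argument names are hypothetical placeholders standing in for the softmax outputs of the classifier and the discriminator:

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy loss -log p(label) for one example."""
    return -math.log(probs[label])

def dann_objective(src_cls_probs, src_labels, dom_probs, dom_labels):
    """Domain-adversarial objective: the source classification loss
    averaged over n_s source examples, minus the domain-discriminator
    loss averaged over all n = n_s + n_t examples.

    src_cls_probs: per-source-example class probabilities from G(F(x)).
    dom_probs: per-example [p(source), p(target)] from D(F(x)).
    dom_labels: 0 for source examples, 1 for target examples.
    """
    n_s = len(src_cls_probs)
    n = len(dom_probs)
    cls_loss = sum(cross_entropy(p, y)
                   for p, y in zip(src_cls_probs, src_labels)) / n_s
    dom_loss = sum(cross_entropy(p, d)
                   for p, d in zip(dom_probs, dom_labels)) / n
    return cls_loss - dom_loss
```

In training, the feature extractor descends on this value while the discriminator ascends on it (minimizing its own loss), which is typically implemented with a gradient reversal layer.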
While domain adversarial networks yield strong results for standard domain adaptation, they will incur performance degeneration on partial domain adaptation, where $\mathcal{C}_t \subsetneq \mathcal{C}_s$. This degeneration is caused by the outlier classes $\mathcal{C}_s \backslash \mathcal{C}_t$ in the source domain, which are undesirably matched to the target classes $\mathcal{C}_t$. Due to the domain gap, even the source examples in the shared label space $\mathcal{C}_t$ may not transfer well to the target domain. As a consequence, we need to design a new framework for partial domain adaptation.
This paper presents a novel transferability weighting framework to address the technical difficulties of partial domain adaptation. Denote by $w(\mathbf{x}_i^s)$ the weight of each source example $\mathbf{x}_i^s$, which quantifies the transferability of that example, i.e. how useful it is for the target domain. For a source example with a larger weight, we should increase its contribution to the final model to enhance positive transfer; otherwise we should decrease its contribution to mitigate negative transfer. Unlike IWAN, a previous work for partial domain adaptation that reweighs the source examples only in the loss of the domain discriminator $D$, in this paper we further put the weights in the loss of the source classifier $G$. This significantly enhances our ability to diminish the irrelevant source examples that deteriorate our final model.
Furthermore, the unknownness of target labels makes the identification of shared classes difficult, making partial domain adaptation more entangled. We thus believe that the exploitation of unlabeled target examples by semi-supervised learning is indispensable. We make use of the entropy minimization principle. Let $\hat{\mathbf{y}}_j = G(F(\mathbf{x}_j^t))$; the entropy loss to quantify the uncertainty of a target example's predicted label is $H(\hat{\mathbf{y}}_j) = -\sum_{c=1}^{|\mathcal{C}_s|} \hat{y}_{j,c} \log \hat{y}_{j,c}$.
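The entropy criterion is simple to state in code; a minimal sketch, assuming the predicted class distribution is given as a plain list of probabilities:

```python
import math

def entropy(probs, eps=1e-12):
    """H(y) = -sum_c y_c * log(y_c) for a predicted class distribution.
    eps avoids log(0) for zero-probability classes."""
    return -sum(p * math.log(p + eps) for p in probs)
```

A confident (near one-hot) prediction has entropy near 0, while a uniform prediction over K classes has entropy log K; minimizing this quantity on target examples pushes the classifier toward confident target predictions.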
The transferability weighting framework is shown in Figure 2. By weighting the losses of the source classifier and the domain discriminator using the transferability of each source example, and combining the entropy minimization criterion, we achieve the following objective:
$$E_G = \frac{1}{n_s}\sum_{\mathbf{x}_i^s \in \mathcal{D}_s} w(\mathbf{x}_i^s)\, L\big(G(F(\mathbf{x}_i^s)), \mathbf{y}_i^s\big) + \frac{\gamma}{n_t}\sum_{\mathbf{x}_j^t \in \mathcal{D}_t} H\big(G(F(\mathbf{x}_j^t))\big) \tag{2}$$
$$E_D = -\frac{1}{n_s}\sum_{\mathbf{x}_i^s \in \mathcal{D}_s} w(\mathbf{x}_i^s)\log D\big(F(\mathbf{x}_i^s)\big) - \frac{1}{n_t}\sum_{\mathbf{x}_j^t \in \mathcal{D}_t}\log\big(1 - D(F(\mathbf{x}_j^t))\big) \tag{3}$$
where $\gamma$ is a hyper-parameter to trade off the labeled source examples and the unlabeled target examples.
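A minimal numeric sketch of the two weighted losses, assuming the model outputs (classifier softmax probabilities and the domain discriminator's source-probability) are given as plain lists; all names here are illustrative placeholders, not the paper's implementation:

```python
import math

def weighted_objectives(w, src_cls_probs, src_labels,
                        src_dom_p, tgt_dom_p, tgt_cls_probs, gamma):
    """Weighted source-classifier loss (with the entropy term on
    target predictions) and weighted domain-discriminator loss.

    w: transferability weight per source example.
    src_dom_p / tgt_dom_p: D(F(x)), the predicted probability of an
    example belonging to the source domain.
    """
    n_s, n_t = len(w), len(tgt_dom_p)

    def entropy(p):
        return -sum(pc * math.log(pc + 1e-12) for pc in p)

    # Weighted classification loss on source + entropy on target.
    e_g = (sum(wi * -math.log(pc[y]) for wi, pc, y
               in zip(w, src_cls_probs, src_labels)) / n_s
           + gamma / n_t * sum(entropy(p) for p in tgt_cls_probs))
    # Weighted binary cross-entropy for the domain discriminator.
    e_d = (-sum(wi * math.log(ds) for wi, ds in zip(w, src_dom_p)) / n_s
           - sum(math.log(1.0 - dt) for dt in tgt_dom_p) / n_t)
    return e_g, e_d
```

Source examples with small weights thus contribute little to either loss, which is exactly how outlier-class examples are suppressed on both the classifier and the discriminator.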
The transferability weighting framework can be trained end-to-end by a minimax optimization procedure as follows, yielding a saddle point solution $(\hat{\theta}_F, \hat{\theta}_G, \hat{\theta}_D)$:
$$(\hat{\theta}_F, \hat{\theta}_G) = \arg\min_{\theta_F, \theta_G} E_G - E_D, \qquad \hat{\theta}_D = \arg\min_{\theta_D} E_D. \tag{4}$$
3.2 Example Transferability Quantification
With the proposed transferability weighting framework in Equations (2) and (3), the key technical problem is how to quantify the transferability $w(\mathbf{x}_i^s)$ of each source example. We introduce an auxiliary domain discriminator $\tilde{D}$, which is also trained to distinguish the representations of the source domain from the target domain, using a loss similar to Equation (3) but dropping the weights $w(\mathbf{x}_i^s)$. It is not involved in the adversarial training procedure, i.e. the features are not learned to confuse $\tilde{D}$. Such an auxiliary domain discriminator can roughly quantify the transferability of the source examples through the sigmoid probability of classifying each source example as belonging to the target distribution.
Such an auxiliary domain discriminator discriminates the source and target domains based on the assumption that source examples of the shared classes $\mathcal{C}_t$ are closer to the target domain than are the source examples in the outlier classes $\mathcal{C}_s \backslash \mathcal{C}_t$, thus having a higher probability of being predicted as from the target domain and a lower probability of being predicted as from the source domain. However, the auxiliary domain discriminator distinguishes the source and target based only on domain information. Because the gap between its outputs for transferable source examples and irrelevant source examples can be small, especially when the auxiliary domain discriminator is trained well, the model is still exposed to the risk of mixing up the transferable and irrelevant source examples, yielding unsatisfactory transferability measures. In partial domain adaptation, the source examples in $\mathcal{C}_t$ differ from those in $\mathcal{C}_s \backslash \mathcal{C}_t$ mainly in that $\mathcal{C}_t$ is shared with the target domain while $\mathcal{C}_s \backslash \mathcal{C}_t$ has no overlap with the target domain. Thus, it is natural to integrate discriminative information into our weight design to resolve the ambiguity between shared and outlier classes.
Inspired by AC-GANs, which integrate label information into the discriminator, we aim to integrate the label information into the auxiliary domain discriminator $\tilde{D}$. Specifically, we hope to develop a transferability measure that combines discriminative information and domain information to generate clearly separable weights for source data in $\mathcal{C}_t$ and $\mathcal{C}_s \backslash \mathcal{C}_t$ respectively. Thus, we add an auxiliary label predictor $\tilde{G}$ followed by a leaky-softmax activation. $\tilde{G}$ accepts the feature $F(\mathbf{x})$ from the feature extractor and produces a $|\mathcal{C}_s|$-dimensional output $\mathbf{z}$, which is passed through the leaky-softmax activation as follows,
$$\sigma_c(\mathbf{z}) = \frac{\exp(z_c)}{|\mathcal{C}_s| + \sum_{c'=1}^{|\mathcal{C}_s|} \exp(z_{c'})}, \tag{5}$$
where $z_c$ is the $c$-th dimension of $\mathbf{z}$. The leaky-softmax has the property that the element-sum of its outputs is smaller than $1$; when the logit of class $c$ is very large, the probability to classify an example as class $c$ is high. As the auxiliary label predictor is trained on source examples and labels, the source examples will have a higher probability of being classified as a specific class, while the target examples will have smaller logits and uncertain predictions. Therefore, the element-sum of the leaky-softmax outputs is closer to $1$ for source examples and closer to $0$ for target examples. As such, the auxiliary domain discriminator is defined as
$$\tilde{D}\big(F(\mathbf{x})\big) = \sum_{c=1}^{|\mathcal{C}_s|} \tilde{g}_c,$$
where $\tilde{g}_c = \sigma_c(\tilde{G}(F(\mathbf{x})))$ is the probability that the auxiliary classifier predicts the example to class $c$, while $\tilde{D}(F(\mathbf{x}))$ computes the probability of the example belonging to the source domain. The larger the value of $\tilde{D}(F(\mathbf{x}))$, the more probable that $\mathbf{x}$ comes from the source domain; the smaller the value, the more probable that $\mathbf{x}$ comes from the target domain.
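The leaky-softmax and the derived auxiliary domain probability can be sketched directly from the definitions above (plain Python over a list of logits, as an illustration):

```python
import math

def leaky_softmax(z):
    """Leaky-softmax over |C_s| logits:
    exp(z_c) / (|C_s| + sum_c' exp(z_c')).
    Unlike softmax, the outputs sum to strictly less than 1."""
    denom = len(z) + sum(math.exp(zc) for zc in z)
    return [math.exp(zc) / denom for zc in z]

def aux_domain_prob(z):
    """D~(F(x)): the element-sum of the leaky-softmax outputs,
    interpreted as the probability of coming from the source domain."""
    return sum(leaky_softmax(z))
```

For a source-like example with one large logit, the element-sum approaches 1; for a target-like example with small, uncertain logits, it stays well below 1, which is what makes the sum usable as a domain probability.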
We train the auxiliary label predictor $\tilde{G}$ with the leaky-softmax by a multitask loss over $|\mathcal{C}_s|$ one-vs-rest binary classification tasks for the $|\mathcal{C}_s|$-class classification problem:
$$E_{\tilde{G}} = -\frac{1}{n_s}\sum_{\mathbf{x}_i^s \in \mathcal{D}_s}\sum_{c=1}^{|\mathcal{C}_s|}\Big[\, y_{i,c}^s \log \tilde{g}_{i,c} + \big(1 - y_{i,c}^s\big)\log\big(1 - \tilde{g}_{i,c}\big) \Big], \tag{6}$$
where $y_{i,c}^s$ denotes whether class $c$ is the ground-truth label for source example $\mathbf{x}_i^s$, and the trade-off for this auxiliary loss is a hyper-parameter. We also train the auxiliary domain discriminator $\tilde{D}$ to distinguish the features of the source domain and the target domain as
$$E_{\tilde{D}} = -\frac{1}{n_s}\sum_{\mathbf{x}_i^s \in \mathcal{D}_s}\log \tilde{D}\big(F(\mathbf{x}_i^s)\big) - \frac{1}{n_t}\sum_{\mathbf{x}_j^t \in \mathcal{D}_t}\log\big(1 - \tilde{D}(F(\mathbf{x}_j^t))\big). \tag{7}$$
From Equations (6) and (7), we observe that the outputs of the auxiliary domain discriminator $\tilde{D}$ depend on the outputs of the auxiliary label predictor $\tilde{G}$. This guarantees that $\tilde{D}$ is trained under the influence of the label information, resolving the ambiguity between shared and outlier classes to better quantify the example transferability.
Finally, with the help of the auxiliary label predictor $\tilde{G}$ and the auxiliary domain discriminator $\tilde{D}$, we can derive more accurate and discriminative weights to quantify the transferability of each source example $\mathbf{x}_i^s$ as
$$w(\mathbf{x}_i^s) = 1 - \tilde{D}\big(F(\mathbf{x}_i^s)\big). \tag{8}$$
Since the output of $\tilde{D}$ for source examples is closer to $1$, implying very small weights, we normalize the weights in each mini-batch of batch size $B$ as $w(\mathbf{x}_i^s) \leftarrow w(\mathbf{x}_i^s) \big/ \frac{1}{B}\sum_{j=1}^{B} w(\mathbf{x}_j^s)$.
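The weighting rule (one minus the auxiliary domain probability, normalized within the mini-batch so the weights average to 1) can be sketched as:

```python
def transferability_weights(aux_domain_probs):
    """w(x) = 1 - D~(F(x)) per source example, then normalized by the
    mini-batch mean so that the weights average to 1.
    aux_domain_probs: list of D~(F(x)) values, one per source example
    in the batch (assumed not all exactly 1, so the mean is nonzero)."""
    raw = [1.0 - d for d in aux_domain_probs]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]
```

A source example that the auxiliary discriminator confidently assigns to the source domain (probability near 1) thus receives a near-zero weight, while target-like source examples receive weights above the batch average.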
3.3 Minimax Optimization Problem
With the aforementioned derivations, we now formulate our final model, Example Transfer Network (ETN). We unify the transferability weighting framework in Equations (2)–(3) and the example transferability quantification in Equations (7)–(8). Denoting by $\theta_{\tilde{G}}$ the parameters of the auxiliary label predictor $\tilde{G}$, the proposed ETN model can be solved by a minimax optimization problem that finds saddle-point solutions $\hat{\theta}_F$, $\hat{\theta}_G$, $\hat{\theta}_D$ and $\hat{\theta}_{\tilde{G}}$ to the parameters as follows,
$$(\hat{\theta}_F, \hat{\theta}_G) = \arg\min_{\theta_F, \theta_G} E_G - E_D,$$
$$(\hat{\theta}_D, \hat{\theta}_{\tilde{G}}) = \arg\min_{\theta_D, \theta_{\tilde{G}}} E_D + E_{\tilde{G}} + E_{\tilde{D}}.$$
ETN enhances partial domain adaptation by learning to transfer relevant examples and diminish outlier examples for both the source classifier $G$ and the domain discriminator $D$. It exploits a progressive weighting scheme derived from the auxiliary domain discriminator $\tilde{D}$ and the auxiliary label predictor $\tilde{G}$, which quantifies the transferability of source examples well.
| Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ETN w/o classifier | 56.18 | 71.93 | 79.32 | 65.11 | 65.57 | 73.66 | 65.47 | 52.9 | 82.88 | 72.93 | 56.93 | 82.91 | 68.93 |
| ETN w/o auxiliary | 48.36 | 50.42 | 79.13 | 56.57 | 45.88 | 65.49 | 56.38 | 49.07 | 77.53 | 75.57 | 58.81 | 78.32 | 61.79 |
We conduct experiments on three datasets to evaluate our approach with state-of-the-art deep (partial) domain adaptation methods. Codes and datasets will be available online.
Office-31 is the de facto standard dataset for domain adaptation. It is relatively small, with 4,652 images in 31 categories. Three domains, namely A, D and W, are collected by downloading images from amazon.com (A), and by taking photos with a DSLR camera (D) and a web camera (W). Following the standard protocol, we select images from the 10 categories shared by Office-31 and Caltech-256 to build the new target domains, creating six partial domain adaptation tasks: A→W, D→W, W→D, A→D, D→A and W→A. Note that there are 31 categories in the source domain and 10 categories in the target domain.
Office-Home is a larger dataset, with 4 domains of distinct styles: Artistic images, Clip Art, Product images and Real-World images. Each domain contains images of 65 object categories. Denoting them as Ar, Cl, Pr and Rw, we obtain twelve partial domain adaptation tasks: Ar→Cl, Ar→Pr, Ar→Rw, Cl→Ar, Cl→Pr, Cl→Rw, Pr→Ar, Pr→Cl, Pr→Rw, Rw→Ar, Rw→Cl, and Rw→Pr. For partial domain adaptation, we use images from the first 25 categories in alphabetical order as the target domain and images from all 65 categories as the source domain.
ImageNet-Caltech is a large-scale benchmark built with ImageNet-1K and Caltech-256. They share 84 classes, thus we form two partial domain adaptation tasks: ImageNet (1000) → Caltech (84) and Caltech (256) → ImageNet (84). As most base networks are trained on the ImageNet training set, we use images from the ImageNet validation set as the target domain for the Caltech (256) → ImageNet (84) task.
We compare the proposed ETN with state-of-the-art deep learning and (partial) domain adaptation methods: ResNet , Deep Adaptation Network (DAN) , Domain-Adversarial Neural Networks (DANN) , Residual Transfer Networks (RTN) , Selective Adversarial Network (SAN) , Importance Weighted Adversarial Network (IWAN)  and Partial Adversarial Domain Adaptation (PADA) .
Besides ResNet-50, we also evaluate ETN and all compared methods based on VGG on Office-31. We perform an ablation study to inspect the example transfer mechanism by evaluating two variants of ETN: 1) ETN w/o classifier is the variant without weights on the source classifier; 2) ETN w/o auxiliary is the variant without the auxiliary label predictor on the auxiliary domain discriminator.
We implement all methods based on PyTorch, and fine-tune ResNet-50 and VGG pre-trained on ImageNet. We add a bottleneck layer before the classifier layer as in DANN. For ETN, we train the bottleneck layer, the classifier layer and all adversarial networks. As these new layers and networks are trained from scratch, we set their learning rate to be 10 times that of the other layers. We use mini-batch SGD with momentum of 0.9 and the learning rate decay strategy implemented in DANN: the learning rate is adjusted during SGD by $\eta_p = \frac{\eta_0}{(1 + \alpha p)^{\beta}}$, where $p$ is the training progress linearly changing from $0$ to $1$, and $\eta_0$, $\alpha$ and $\beta$ are optimized with importance-weighted cross-validation. The trade-off hyper-parameters of all the adversarial networks are increased gradually from $0$ to $1$ as in DANN.
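The two schedules can be sketched as below. Note the default constants are the values commonly used with this DANN-style schedule ($\eta_0 = 0.01$, $\alpha = 10$, $\beta = 0.75$, ramp constant 10), not necessarily those selected by cross-validation here:

```python
import math

def lr_schedule(p, eta0=0.01, alpha=10.0, beta=0.75):
    """DANN-style learning-rate annealing:
    eta_p = eta0 / (1 + alpha * p) ** beta, with p in [0, 1] the
    training progress. Defaults are the commonly used values."""
    return eta0 / (1.0 + alpha * p) ** beta

def adversarial_ramp(p, delta=10.0):
    """Gradually increases the adversarial trade-off from 0 to 1:
    lambda_p = 2 / (1 + exp(-delta * p)) - 1, the ramp used in DANN
    to suppress noisy adversarial signals early in training."""
    return 2.0 / (1.0 + math.exp(-delta * p)) - 1.0
```

The ramp keeps the adversarial losses nearly switched off at the start of training, when the discriminator's signal is still noisy, and lets them reach full strength as training progresses.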
The classification results based on ResNet-50 on the six tasks of Office-31, the twelve tasks of Office-Home and the two large-scale tasks of ImageNet-Caltech are respectively shown in Tables 3 and 2. We also compare all methods on Office-31 based on VGG in Table 4. ETN outperforms all other methods in terms of average accuracy, showing that ETN performs well with different base networks.
Specifically, we have several observations. 1) ADDA, DANN and DAN outperform ResNet on only some tasks, implying that they suffer from the negative transfer issue. 2) RTN exploits the entropy minimization criterion to amend itself with semi-supervised learning. Thus, it shows some improvement over ResNet but still suffers from negative transfer on some tasks. 3) Partial domain adaptation methods (SAN and IWAN) perform better than ResNet and other domain adaptation methods on most tasks, due to their weighting mechanisms that mitigate negative transfer caused by outlier classes and promote positive transfer among shared classes. 4) ETN outperforms SAN and IWAN on most tasks, showing its power to discriminate the outlier classes from the shared classes and to transfer relevant examples.
In particular, ETN outperforms SAN and IWAN by a much larger margin on the large-scale ImageNet-Caltech dataset, indicating that ETN is more robust to outlier classes and performs better even on datasets where the number of outlier classes (916 in ImageNet → Caltech) is large relative to the number of shared classes (84). ETN has two advantages: learning label-aware weights, and filtering outlier classes out from both the source classifier and the domain discriminator, which boosts partial domain adaptation performance.
We inspect the efficacy of the different modules by comparing in Table 3 the results of the ETN variants. 1) ETN outperforms ETN w/o classifier, proving that the weighting mechanism on the source classifier can reduce the negative influence of outlier-class data and force the source classifier to focus on the data belonging to the target label space. 2) ETN outperforms ETN w/o auxiliary by an even larger margin, proving that the auxiliary classifier can inject label information into the domain discriminator to yield discriminative weights, which in turn enables ETN to filter out irrelevant examples.
#Target Classes: We conduct a wide range of partial domain adaptation tasks with different numbers of target classes. Figure 5 shows that as the number of target classes decreases, the performance of DANN degrades quickly, implying that negative transfer becomes more severe as the label-space overlap diminishes. The performance of SAN decreases slowly and stably, indicating that SAN partially eliminates the influence of outlier classes. IWAN only performs better than DANN when the label-space overlap is very small and negative transfer is very severe. ETN performs stably and consistently better than all compared methods, showing the advantage of ETN for partial domain adaptation. ETN also performs better than DANN in standard domain adaptation when the label spaces overlap completely, implying that the weighting mechanism does not degrade performance when there are no outlier classes.
Convergence Performance: As shown in Figure 6, the test errors of the compared methods converge fast but to high error rates, while ETN converges to the lowest test error. This phenomenon implies that ETN can be trained more efficiently and stably than previous domain adaptation methods.
Class-Wise Weights: We plot in Figure 3 the weights generated by IWAN and ETN for source examples in the shared classes and the outlier classes on Cl→Pr. Compared to IWAN, our approach assigns much larger weights to shared classes and much smaller weights to outlier classes. Most examples of outlier classes have nearly zero weights, explaining the strong results of ETN on these datasets.
Feature Visualization: We plot in Figure 4 the t-SNE embeddings of the features learned by DANN, SAN, IWAN and ETN on A (31 classes) → W (10 classes) with class information, where we randomly select several shared classes and source-specific classes. We observe that the features learned by DANN, IWAN and SAN are not clustered as clearly as those of ETN, indicating that ETN can better discriminate both source and target examples than the compared methods.
This paper presented Example Transfer Network, an end-to-end approach to partial domain adaptation. The proposed approach quantifies the transferability of source examples by integrating the discriminative information into the adversarial domain discriminator, and down-weighs the negative influence of the outlier source examples to both the source classifier and the domain discriminator. Based on evaluation, our model performs strongly for partial domain adaptation.
-  S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. Machine Learning Journal (MLJ), 79(1-2):151–175, 2010.
-  A. Bendale and T. E. Boult. Towards open set deep networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1563–1572, 2016.
-  Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(8):1798–1828, 2013.
-  P. P. Busto and J. Gall. Open set domain adaptation. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 754–763, 2017.
-  Z. Cao, M. Long, J. Wang, and M. I. Jordan. Partial transfer learning with selective adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  Z. Cao, L. Ma, M. Long, and J. Wang. Partial adversarial domain adaptation. In The European Conference on Computer Vision (ECCV), September 2018.
-  R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.
-  L. Duan, I. W. Tsang, and D. Xu. Domain transfer multiple kernel learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34(3):465–479, 2012.
-  Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. S. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17:59:1–59:35, 2016.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
-  B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
-  Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems (NIPS), pages 529–536. MIT Press, 2005.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  J. Hoffman, S. Guadarrama, E. Tzeng, R. Hu, J. Donahue, R. Girshick, T. Darrell, and K. Saenko. LSDA: Large scale detection through adaptation. In Advances in Neural Information Processing Systems (NIPS), 2014.
-  J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International Conference on Machine Learning (ICML), pages 1994–2003, 2018.
-  L. P. Jain, W. J. Scheirer, and T. E. Boult. Multi-class open set recognition using probability of inclusion. In European Conference on Computer Vision (ECCV), pages 393–409, 2014.
-  I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, S. Kamali, M. Malloci, J. Pont-Tuset, A. Veit, S. Belongie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://storage.googleapis.com/openimages/web/index.html, 2017.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431–3440, 2015.
-  M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015.
-  M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems (NIPS), pages 136–144, 2016.
-  M. Long, H. Zhu, J. Wang, and M. I. Jordan. Deep transfer learning with joint adaptation networks. In International Conference on Machine Learning (ICML), pages 2208–2217, 2017.
-  Z. Luo, Y. Zou, J. Hoffman, and F. Li. Label efficient learning of transferable representations across domains and tasks. In Advances in Neural Information Processing Systems (NIPS), pages 164–176, 2017.
-  Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Conference on Computational Learning Theory (COLT), 2009.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In International Conference on Machine Learning (ICML), pages 2642–2651, 2017.
-  M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
-  S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 22(2):199–210, 2011.
-  S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering (TKDE), 22(10):1345–1359, 2010.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), pages 91–99, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-  K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV), 2010.
-  K. Saito, S. Yamamoto, Y. Ushiku, and T. Harada. Open set domain adaptation by backpropagation. In European Conference on Computer Vision (ECCV), September 2018.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
-  M. Sugiyama, M. Krauledat, and K.-R. Muller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research (JMLR), 8(May):985–1005, 2007.
-  A. Torralba and A. A. Efros. Unbiased look at dataset bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1521–1528, 2011.
-  E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
-  E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Simultaneous deep transfer across domains and tasks. In IEEE International Conference on Computer Vision (ICCV), 2015.
-  H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  X. Wang and J. Schneider. Flexible transfer learning under support and model shift. In Advances in Neural Information Processing Systems (NIPS), 2014.
-  J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems (NIPS), 2014.
-  J. Zhang, Z. Ding, W. Li, and P. Ogunbona. Importance weighted adversarial nets for partial domain adaptation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  K. Zhang, B. Schölkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning (ICML), 2013.