Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation

Shuyang Dai    Yu Cheng    Yizhe Zhang    Zhe Gan    Jingjing Liu    Lawrence Carin
     Duke University      Microsoft Dynamics 365 AI Research
{shuyang.dai, lcarin}
{yu.cheng, yizhe.zhang, zhe.gan, jingjl}

Recent unsupervised approaches to domain adaptation primarily focus on minimizing the gap between the source and the target domains through refining the feature generator, in order to learn a better alignment between the two domains. This minimization can be achieved via a domain classifier that detects target-domain features divergent from source-domain features. However, by optimizing via such domain classification discrepancy, ambiguous target samples that are not smoothly distributed on the low-dimensional data manifold are often missed. To solve this issue, we propose a novel Contrastively Smoothed Class Alignment (CoSCA) model that explicitly incorporates both intra- and inter-class domain discrepancy to better align ambiguous target samples with the source domain. CoSCA estimates the underlying label hypothesis of target samples, and simultaneously adapts their feature representations by optimizing a proposed contrastive loss. In addition, Maximum Mean Discrepancy (MMD) is utilized to directly match features between source and target samples for better global alignment. Experiments on several benchmark datasets demonstrate that CoSCA outperforms state-of-the-art approaches for unsupervised domain adaptation by producing more discriminative features.


Deep neural networks (DNNs) have significantly improved the state of the art on many supervised tasks [8, 46, 39, 18]. However, without sufficient training data, DNNs have weak generalization ability to new tasks or new environments [40]. This is known as the dataset bias or domain-shift problem [15]. Unsupervised domain adaptation (UDA) [30, 12] aims to generalize a model learned from a source domain with rich annotated data to a new target domain without any labeled data. Recently, many approaches have been proposed to learn transferable representations, by simultaneously matching feature distributions across different domains [17, 42].

Figure 1: Comparison between previous classifier-discrepancy-based methods and our proposed CoSCA in the feature space. Top: The region of vacancy created by maximum discrepancy reduces the smoothness of alignment between ambiguous target samples and source samples, leading to sub-optimal solutions. This problem becomes more severe when global domain alignment is not considered. Bottom: Demonstration of global alignment and class-conditional adaptation by using the proposed CoSCA. After classifier discrepancy is maximized, the proposed contrastive loss moves ambiguous target samples near the decision boundary towards their neighbors and separates them from non-neighbors.

Motivated by [13], [43, 11] introduced a min-max game: a domain discriminator is learned by minimizing the error of distinguishing data samples from the source and the target domains, while a feature generator learns transferable features that are indistinguishable by the domain discriminator. This enforces that the learned features be domain-invariant. Meanwhile, a feature classifier (trained only on source-domain features) ensures that the learned features are class-conditional. Despite promising results, these adversarial methods suffer from inherent algorithmic weaknesses [38]. Specifically, the generator may generate ambiguous features near class boundaries [36]: while the generator manages to fool the discriminator, some target-domain features may still be misclassified. In other words, the model merely aligns the global marginal distributions of the two domains and ignores the class-conditional decision boundaries.

To overcome this issue, recent UDA models further align class-level distributions by taking the decision boundary into consideration. These methods either rely on iteratively refining the decision boundary with empirical data [38, 34], or utilize multiple-view information [23]. Alternatively, the maximum classifier discrepancy (MCD) model [36] conducts a min-max game between a feature generator and two classifiers. Ambiguous target samples that are far from source-domain samples can be detected when the discrepancy between the two classifiers is maximized, as shown in Figure 1(b). Meanwhile, as the generator fools the classifiers, the generated target features may fall into the source feature regions. However, the target samples may not be smooth on the low-dimensional manifold [6, 28], meaning that neighboring samples may not belong to the same class. As a result, some generated target features could be miscategorized, as shown in Figure 1(c).

We propose the Contrastively Smoothed Class Alignment (CoSCA) model to improve the alignment of class-conditional feature distributions between source and target domains, by alternately estimating the underlying label hypothesis of target samples to map them into tighter clusters, and adapting feature representations based on a proposed contrastive loss. Specifically, by aligning ambiguous target samples near the decision boundaries with their neighbors and distancing them from non-neighbors, CoSCA enhances the alignment of each class in a contrastive manner. Figure 1(f) demonstrates an enhanced and smoothed version of the class-conditional alignment. Moreover, as shown in Figure 1(d), Maximum Mean Discrepancy (MMD) is included to better merge the source and target domain feature representations. The overall framework is trained end-to-end in an adversarial manner.

Our main contributions are summarized as follows:

  • We propose CoSCA, a novel approach that smooths class alignment for maximizing classifier discrepancy with a contrastive loss. CoSCA also provides better global domain alignment via the use of MMD loss.

  • We validate the proposed approach on several domain adaptation benchmarks. Extensive experiments demonstrate that CoSCA achieves state-of-the-art results.

Related Work

Unsupervised Domain Adaptation. A practical solution for domain adaptation is to learn domain-invariant features whose distribution is similar across the source and target domains. For example, [37] designed discriminative features by using clustering techniques and pseudo-labels. DAN [25] and JAN [27] minimized the MMD loss between two domains. Adversarial domain adaptation was proposed to integrate adversarial learning and domain adaptation in a two-player game [11, 43, 42]. Following this idea, most existing adversarial-learning methods reduce feature differences by fooling a domain discriminator [26, 12]. However, these methods do not consider the relationship between target samples and the class-conditional decision boundaries when aligning features [36].

Class-conditional Alignment. Recent work enforces class-level alignment while aligning global marginal distributions. Adversarial Dropout Regularization (ADR) [34] and Maximum Classifier Discrepancy (MCD) [36] were proposed to train a neural network in an adversarial manner, avoiding the generation of non-discriminative features lying in the region near the decision boundary. [31, 27] considered class information when measuring domain discrepancy. Co-regularized Domain Adaptation (Co-DA) [23] utilized multi-view information to match the marginal feature distributions corresponding to the class-conditional distributions. Compared with previous work that executed the alignment by optimizing on "hard" metrics [36, 23], we propose to smooth the alignment iteratively, with an explicitly defined loss.

Contrastive Learning. The intuition for contrastive learning is to let the model understand the difference between one set (e.g., of data points) and another, instead of only characterizing a single set [48]. This idea has been explored in previous works that model intra-class compactness and inter-class separability (e.g., distinctiveness loss [7], contrastive loss [16], triplet loss [45]) and tangent distance [33]. It has also been extended to consider several assumptions in semi-supervised and unsupervised learning [28, 24], such as the low-density region (or cluster) assumption [28, 33] that the decision boundary should lie in the low-density region, rather than crossing the high-density region. Recently, contrastive learning was applied in UDA [20], in which the intra/inter-class domain discrepancy was modeled. In comparison, our work is based on the MCD framework, utilizing the low-density and smoothness assumptions and focusing on separating the ambiguous target data points by optimizing the contrastive objective, allowing the decision boundary to sit in the low-density region (i.e., the region of vacancy).

Figure 2: Framework of the proposed CoSCA. The inputs are labeled samples $(x_s, y_s)$ from the source domain and unlabeled samples $x_t$ from the target domain. The model contains a shared feature generator $G$ and two feature classifiers $F_1$ and $F_2$. $\mathcal{L}_{mmd}$ is calculated using the generated feature means of the source and target domains. $\mathcal{L}_{dis}$ is the classifier discrepancy calculated based on the probability outputs $p_1$ and $p_2$ of $F_1$ and $F_2$, respectively. $\mathcal{L}_{ctr}$ is the contrastive loss calculated for both source-and-target and target-and-target sample pairs.


The task of unsupervised domain adaptation seeks to generalize a learned model from a source domain to a target domain, the latter following a different (but related) data distribution from the former. Specifically, the source- and target-domain samples are denoted $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{N_s}$ and $\mathcal{D}_t = \{x_j^t\}_{j=1}^{N_t}$, respectively, where $x^s$ and $x^t$ are the inputs, and $y^s$ represents the data labels of $K$ classes in the source domain. The target domain shares the same label types as the source domain, but we possess no labeled examples from the target domain. We are interested in learning a deep network that reduces domain shift in the data distribution across $\mathcal{D}_s$ and $\mathcal{D}_t$, in order to make accurate predictions for the target domain. We use the notation $(X_s, Y_s)$ to describe the source-domain samples and labels, and $X_t$ for the unlabeled target-domain samples.

Adversarial domain adaptation approaches such as [36, 21] achieve this goal via a two-step procedure: 1) train a feature generator $G$ and the feature classifiers $F_1$ and $F_2$ with the source-domain data, to ensure the generated features are class-conditional; 2) train $F_1$ and $F_2$ so that the prediction discrepancy between the two classifiers is maximized, and train $G$ to generate features that are distinctively separated. The maximum classifier discrepancy detects the target features that are far from the support of the source domain. As the generator tries to fool the classifiers (i.e., minimizing the discrepancy), these target-domain features are enforced to be categorized and aligned with the source-domain features.

However, only measuring the divergence between the two classifiers' outputs $p_1$ and $p_2$ can be considered first-order moment matching, which may be insufficient for adversarial training. Previous work has observed similar issues [2, 41]. We tackle this challenge by adding the Maximum Mean Discrepancy (MMD) loss, which matches the difference via higher-order moments. Also, the class alignment in existing UDA methods takes into account the intra-class domain discrepancy only, which makes it difficult to separate samples from different classes that are close to the decision boundary. Thus, in addition to the discrepancy loss, we also measure both intra- and inter-class discrepancy across domains. Specifically, we propose to minimize the distance among target-domain features that fall into the same class based on the decision boundaries, and to separate features from different categories. During this process, ambiguous target features are simultaneously kept away from the decision boundaries and mapped into the high-density region, achieving better class alignment.

Global Alignment with MMD

Following [36], we first train a feature generator $G$ and two classifiers $F_1$ and $F_2$ to minimize the softmax cross-entropy loss using the data from the labeled source domain $\mathcal{D}_s$, defined as:

$$\mathcal{L}_{cls}(X_s, Y_s) = -\,\mathbb{E}_{(x_s, y_s) \sim \mathcal{D}_s} \sum_{k=1}^{K} \mathbb{1}[k = y_s] \big( \log p_1(y = k \mid x_s) + \log p_2(y = k \mid x_s) \big) \quad (1)$$
where $p_1(y \mid x_s)$ and $p_2(y \mid x_s)$ are the probabilistic outputs of the two classifiers $F_1$ and $F_2$, respectively.

In addition to (1), we explicitly minimize the distance between the source and target feature distributions with MMD. The main idea of MMD is to estimate the distance between two distributions as the distance between the sample means of the projected embeddings in a Hilbert space. Minimizing MMD is equivalent to minimizing all orders of moments [14]. In practice, the squared value of MMD is estimated with empirical kernel mean embeddings:

$$\mathcal{L}_{mmd} = \Big\| \frac{1}{n_s} \sum_{i=1}^{n_s} \phi\big(G(x_i^s)\big) - \frac{1}{n_t} \sum_{j=1}^{n_t} \phi\big(G(x_j^t)\big) \Big\|_2^2 \quad (2)$$
where $\phi(\cdot)$ is the kernel mapping, and $n_s$ and $n_t$ denote the size of a training mini-batch of the data from the source domain $\mathcal{D}_s$ and the target domain $\mathcal{D}_t$, respectively; $\|\cdot\|_2$ denotes the $\ell_2$-norm. With the MMD loss $\mathcal{L}_{mmd}$, the normalized features in the two domains are encouraged to be identically distributed, leading to better global domain alignment.
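For concreteness, the empirical estimator above can be sketched in a few lines (a minimal NumPy version with an RBF kernel; the kernel choice, the bandwidth `gamma`, and the biased estimator are illustrative assumptions rather than the paper's exact configuration):

```python
import numpy as np

def mmd_squared(xs, xt, gamma=1.0):
    """Biased empirical estimate of squared MMD with an RBF kernel.

    xs: (n_s, d) source features; xt: (n_t, d) target features.
    gamma is the (assumed) RBF bandwidth hyper-parameter.
    """
    def rbf(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # E[k(s,s)] + E[k(t,t)] - 2 E[k(s,t)]: the kernelized form of the
    # squared distance between the two kernel mean embeddings.
    return rbf(xs, xs).mean() + rbf(xt, xt).mean() - 2.0 * rbf(xs, xt).mean()
```

In training, this quantity would be computed on the mini-batch features $G(x^s)$ and $G(x^t)$ and added to the loss.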

Contrastively Smoothed Class Alignment

Discrepancy Loss. The discrepancy loss represents the level of disagreement between the two feature classifiers in their predictions for target-domain samples. Specifically, the discrepancy loss between $F_1$ and $F_2$ is defined as:

$$d\big(p_1, p_2\big) = \frac{1}{K} \sum_{k=1}^{K} \big| p_{1,k} - p_{2,k} \big| \quad (3)$$
where $|\cdot|$ denotes the $\ell_1$-norm, and $p_{1,k}$ and $p_{2,k}$ are the probability outputs of $F_1$ and $F_2$ for the $k$-th class, respectively. Accordingly, we can define the discrepancy loss over the target domain $\mathcal{D}_t$:

$$\mathcal{L}_{dis}(X_t) = \mathbb{E}_{x_t \sim \mathcal{D}_t}\, d\big(p_1(y \mid x_t),\, p_2(y \mid x_t)\big) \quad (4)$$
Adversarial training is conducted in the Maximum Classifier Discrepancy (MCD) setup [36]:

$$\min_{F_1, F_2}\ \mathcal{L}_{cls}(X_s, Y_s) - \lambda\, \mathcal{L}_{dis}(X_t), \qquad \min_{G}\ \mathcal{L}_{dis}(X_t) \quad (5)$$
where $\lambda$ is a hyper-parameter. Minimizing the discrepancy between the two classifiers $F_1$ and $F_2$ induces smoothness for the clearly classified target-domain features, while the region of vacancy among the ambiguous ones remains non-smooth. Moreover, MCD only utilizes the unlabeled target-domain samples, while ignoring the labeled source-domain data when estimating the discrepancy.
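A minimal sketch of the discrepancy term, following the $\ell_1$ form used in MCD (averaging over both classes and the mini-batch is an assumed reduction):

```python
import numpy as np

def classifier_discrepancy(p1, p2):
    """L1 discrepancy between the two classifiers' probability outputs.

    p1, p2: (batch, K) softmax outputs of F1 and F2.  Returns the
    absolute difference averaged over classes and the mini-batch.
    """
    return np.abs(p1 - p2).mean()
```

The classifiers are updated to maximize this quantity on target samples, while the generator is updated to minimize it.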

Contrastive Loss. To further optimize $G$ to estimate the underlying label hypothesis of target-domain samples, we propose to measure the intra- and inter-class discrepancy across domains, conditional on class information. By using an indicator defined as $\delta_{ij} = \mathbb{1}[y_i^s = \hat{y}_j^t]$, we define the contrastive loss between $\mathcal{D}_s$ and $\mathcal{D}_t$ as:

$$\mathcal{L}_{st} = \frac{1}{n_s n_t} \sum_{i=1}^{n_s} \sum_{j=1}^{n_t} D_{\delta_{ij}}\big(G(x_i^s),\, G(x_j^t)\big) \quad (6)$$
where $D_{\delta}(\cdot, \cdot)$ is a distance measure (defined below), and $\hat{y}_j^t$ is the predicted target label for $x_j^t$. Specifically, (6) covers two types of class-aware domain discrepancy: 1) intra-class domain discrepancy ($\delta_{ij} = 1$); and 2) inter-class domain discrepancy ($\delta_{ij} = 0$). Note that $y_i^s$ is known, providing some supervision for parameter learning. Similarly, we can define the contrastive loss between pairs of target samples as:

$$\mathcal{L}_{tt} = \frac{1}{n_t^2} \sum_{j=1}^{n_t} \sum_{j'=1}^{n_t} D_{\delta_{jj'}}\big(G(x_j^t),\, G(x_{j'}^t)\big), \qquad \delta_{jj'} = \mathbb{1}\big[\hat{y}_j^t = \hat{y}_{j'}^t\big] \quad (7)$$
To obtain the indicator $\delta$, the estimated target label $\hat{y}^t$ is required. Specifically, for each data sample $x_t$, a pseudo label is predicted based on the maximum posterior probability of the two classifiers:

$$\hat{y}_t = \arg\max_{k}\ \tfrac{1}{2} \big( p_1(y = k \mid x_t) + p_2(y = k \mid x_t) \big) \quad (8)$$
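A sketch of this pseudo-labeling step, reading it as the argmax of the two classifiers' averaged posteriors (the exact combination rule is an assumption):

```python
import numpy as np

def pseudo_labels(p1, p2):
    """Assign each target sample the class with the highest posterior,
    averaging the two classifiers' probability outputs before the argmax."""
    return np.argmax((p1 + p2) / 2.0, axis=1)
```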
Ideally, based on the indicator, the distance measure $D$ should ensure the gathering of features that fall in the same class, while separating those in different categories. Following [28], we utilize contrastive Siamese networks [5], which can learn an invariant mapping to a smooth and coherent feature space and perform well in practice:

$$D_{\delta}(f_1, f_2) = \begin{cases} \| f_1 - f_2 \|_2^2, & \delta = 1 \\ \max\big(0,\; m - \| f_1 - f_2 \|_2 \big)^2, & \delta = 0 \end{cases} \quad (9)$$
where $f_1$ and $f_2$ are generated features, and $m$ is a pre-defined margin. The margin loss constrains the neighboring features to be consistent. Based on the above definitions of source-and-target and target-and-target contrastive losses, the overall objective is obtained:

$$\mathcal{L}_{ctr} = \mathcal{L}_{st} + \mathcal{L}_{tt} \quad (10)$$
Minimizing the contrastive loss encourages features in the same class to aggregate together while pushing unrelated pairs away from each other. In other words, the semantic feature approximation is enhanced to induce smoothness between data in the feature space.
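The pairwise distance measure behind these losses can be sketched as the classic Siamese contrastive loss (squared distances and hinge margin follow [16]; treating it per-pair with a unit default margin is an illustrative simplification):

```python
import numpy as np

def contrastive_pair_loss(f1, f2, same_class, margin=1.0):
    """Siamese contrastive loss for one pair of generated features.

    Same-class pairs (indicator = 1) are pulled together via the squared
    Euclidean distance; different-class pairs (indicator = 0) are pushed
    at least `margin` apart via a hinge term.
    """
    d = np.linalg.norm(f1 - f2)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

Averaging this over source-and-target and target-and-target pairs, selected by the indicator, gives the overall contrastive objective.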

1:  Input: Source domain samples $\{X_s, Y_s\}$ and target domain samples $X_t$. Inner-loop iterations $n_B$ and $n_C$.
2:  Output: Classifiers $F_1$ and $F_2$, and generator $G$.
3:  for $t$ from 1 to $T$ do
4:     Sample a mini-batch of source samples $\{x_s, y_s\}$ and target samples $\{x_t\}$.
5:     Compute $\mathcal{L}_{cls}$ on $\{x_s, y_s\}$.
6:     Compute $\mathcal{L}_{mmd}$ on $\{x_s, x_t\}$.
7:     Update $G$, $F_1$ and $F_2$ using (11).
8:     for $i$ from 1 to $n_B$ do
9:        Compute $\mathcal{L}_{cls}$ on $\{x_s, y_s\}$.
10:        Compute $\mathcal{L}_{dis}$ on $\{x_t\}$.
11:        Fix $G$, update $F_1$ and $F_2$ using (12).
12:     end for
13:     for $i$ from 1 to $n_C$ do
14:        Compute $\mathcal{L}_{dis}$ on $\{x_t\}$.
15:        Compute $\mathcal{L}_{ctr}$ on $\{x_s, x_t\}$.
16:        Fix $F_1$ and $F_2$, update $G$ using (13).
17:     end for
18:  end for
Algorithm 1 Training procedure of CoSCA.

Training Procedure

We need to optimize $G$, $F_1$ and $F_2$ by combining all the aforementioned losses, performed in an adversarial training manner. Specifically, we first train the classifiers $F_1$ and $F_2$ and the generator $G$ to minimize the objective:

$$\min_{G, F_1, F_2}\ \mathcal{L}_{cls}(X_s, Y_s) + \lambda_{mmd}\, \mathcal{L}_{mmd} \quad (11)$$
We then train the classifiers $F_1$ and $F_2$ while keeping the generator $G$ fixed. The objective is:

$$\min_{F_1, F_2}\ \mathcal{L}_{cls}(X_s, Y_s) - \lambda_{dis}\, \mathcal{L}_{dis}(X_t) \quad (12)$$
Lastly, we train the generator $G$ with the following objective, while keeping both $F_1$ and $F_2$ fixed:

$$\min_{G}\ \lambda_{dis}\, \mathcal{L}_{dis}(X_t) + \lambda_{ctr}\, \mathcal{L}_{ctr} \quad (13)$$
where $\lambda_{mmd}$, $\lambda_{dis}$ and $\lambda_{ctr}$ are hyper-parameters that balance the different objectives in (11)-(13). These steps are repeated, with the full algorithm summarized in Algorithm 1. In our experiments, the inner-loop iteration numbers $n_B$ and $n_C$ are both set to 2.
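The alternating updates can be summarized structurally as below; each `step_*` callable stands for one optimizer update on the corresponding objective, and the callables themselves (names included) are placeholders, not the paper's code:

```python
def train_cosca(step_A, step_B, step_C, num_epochs, n_b=2, n_c=2):
    """Skeleton of the alternating training procedure (Algorithm 1).

    step_A: update G, F1, F2 on the first objective (classification + MMD).
    step_B: update F1, F2 with G fixed (maximize classifier discrepancy).
    step_C: update G with F1, F2 fixed (minimize discrepancy + contrastive).
    n_b, n_c are the inner-loop counts (both set to 2 in the experiments).
    """
    for _ in range(num_epochs):
        step_A()
        for _ in range(n_b):
            step_B()
        for _ in range(n_c):
            step_C()
```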

Class-aware sampling. When training with the contrastive loss, it is important to sample a mini-batch of data that covers multiple classes, to allow (10) to be fully trained. We propose a class-aware sampling strategy to enable efficient updates of the network. Specifically, we randomly select a subset of classes and then sample data from each selected class. Consequently, in each mini-batch of the data, we are able to estimate the intra/inter-class discrepancy for each selected class.
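One way to realize such a sampler (the selection sizes and sampling with replacement are illustrative assumptions, not necessarily the paper's choices):

```python
import numpy as np

def class_aware_batch(labels, n_classes_per_batch, n_per_class, rng):
    """Sample mini-batch indices balanced over a random subset of classes.

    labels: (pseudo-)labels for the candidate pool.  Every selected class
    contributes n_per_class samples, so each class in the batch yields
    both intra-class and inter-class pairs for the contrastive loss.
    """
    classes = rng.choice(np.unique(labels), size=n_classes_per_batch,
                         replace=False)
    idx = []
    for c in classes:
        pool = np.flatnonzero(labels == c)
        idx.extend(rng.choice(pool, size=n_per_class, replace=True))
    return np.array(idx)
```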

Dynamic parameterization of $\lambda_{ctr}$. In our implementation, we adopt a dynamic weight for $\lambda_{ctr}$, following a Gaussian curve that ramps up from 0 to its maximum value. This prevents unlabeled target features from gathering in the early stage of training, when the pseudo labels might not be reliable.
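The exact curve is not spelled out in the text; a common Gaussian ramp-up of this shape, borrowed from the semi-supervised learning literature, could look like:

```python
import math

def dynamic_lambda(step, total_steps, lam_max):
    """Gaussian ramp-up for the contrastive-loss weight: near 0 early in
    training (when pseudo labels are unreliable) and rising to lam_max.
    The exp(-5 * (1 - p)^2) schedule is an assumed stand-in, not the
    paper's published formula.
    """
    p = min(step / float(total_steps), 1.0)
    return lam_max * math.exp(-5.0 * (1.0 - p) ** 2)
```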

Model MNIST→SVHN SVHN→MNIST MNIST→MNISTM MNIST→USPS CIFAR→STL STL→CIFAR
MMD [25] - 71.1 76.9 81.1 - -
DANN [12] 35.7 71.1 81.5 77.1 - -
DSN [4] 40.1 82.7 83.2 91.3 - -
ATT [35] 52.8 86.2 94.2 - - -
With Instance-Normalized Input:
Source-Only 40.9 82.4 59.9 76.7 77.0 62.6
VADA [38] 73.3 94.5 95.7 - 78.3 71.4
Co-DA [23] 81.3 98.6 97.3 - 80.3 74.5
MCD [36] 68.7 96.2 96.7 94.2 78.1 69.2
CoSCA 80.7 98.7 98.9 99.3 81.7 75.2
Table 1: Results (test accuracy, %) on visual domain adaptation tasks. Source-Only means training a classifier in the source domain and applying it directly to the target domain without any adaptation. Results of compared methods are reported in [36].
Model plane bcycl bus car horse knife mcycl person plant sktbrd train truck mean
Source Only 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81.0 26.5 73.5 8.5 52.4
MMD [25] 87.1 63.0 76.5 42.0 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1
DANN [12] 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4
MCD [36] 89.1 80.8 82.9 70.9 91.6 56.5 89.5 79.3 90.9 76.1 88.3 29.3 77.1
SEDA [10] 95.3 87.1 84.2 58.3 94.4 89.6 87.9 79.1 92.8 91.3 89.6 37.4 82.2
CoSCA 95.7 87.4 85.7 73.5 95.3 72.8 91.5 84.8 94.6 87.9 87.9 36.8 82.9
Table 2: Test accuracy of the ResNet101 model fine-tuned on the VisDA dataset. Results of compared methods are reported in [36].


We evaluate the proposed model mainly on image datasets. To compare with MCD [36] as well as the state-of-the-art results in [38, 23], we evaluate on the same datasets used in those studies: the digit datasets (i.e., MNIST, MNISTM, Street View House Numbers (SVHN), and USPS), CIFAR-10, and STL-10. We also conduct experiments on the large-scale VisDA image dataset. Our model can also be applied to non-visual domain adaptation tasks: to show its flexibility, we also evaluate it on the Amazon Reviews dataset.

For visual domain adaptation tasks, the proposed model is implemented based on VADA [38] and Co-DA [23] to avoid any incidental difference caused by network architecture. However, different from these models, our model does not require a discriminator, and only adopts the architectures of the feature generator $G$ and the classifiers. We also include instance normalization [38, 44], achieving superior results on several benchmarks. For the VisDA dataset, we implemented our model based on the codebase of self-ensembling domain adaptation (SEDA) [10]. To compare with MCD [36], we re-implemented it using the exact architecture of our model.

In addition to the aforementioned baseline models, we also include the results from recently proposed unsupervised domain adaptation models. Note that standard domain adaptation methods (such as Transfer Component Analysis (TCA) [29] and Subspace Alignment (SA) [9]) are not included; these models only work on pre-extracted features, and are often not scalable to large datasets. Instead, we mainly compare our model with methods based on adversarial neural networks.

For the non-visual task, we adopt a one-layer CNN structure from previous work [22]. The feature generator consists of three components: a 300-dimensional word embedding layer using GloVe [32], a one-layer CNN with ReLU, and max-over-time pooling, through which the final sentence representation is obtained. The classifiers $F_1$ and $F_2$ can be decomposed into one dropout layer and one fully connected output layer.

Digit Datasets

There are four types of digit images (i.e., four domains). MNIST and USPS are both hand-written gray-scale images, between which the domain difference is relatively small. MNISTM [12] is a dataset built upon MNIST by adding randomly colored image patches from the BSD500 dataset [1]. SVHN includes colored images of street numbers. All images are rescaled to the same size.

Figure 3: t-SNE embedding of the features for MNIST→SVHN and STL→CIFAR. Color indicates domain, and the digit number is the label. The ideal situation is to mix the two colors with the same label, representing domain-invariant features. The t-SNE plots for the other datasets are provided in the Supplementary Material.

MNIST→SVHN. As gray-scale handwritten digits, images from MNIST have much lower dimensionality than the colored house numbers from SVHN. With such a large domain gap, MCD fails to align the features of the two domains. Figure 3(a) plots the t-SNE embedding of the features learned by MCD. Domains are indicated by different colors, and classes are indicated by different digit numbers. The maximized discrepancy produces too many ambiguous target-domain samples. As a result, the feature generator may not properly align them with the source-domain samples. In comparison, as shown in Figure 3(b), CoSCA utilizes the MMD between the source and the target domain features, thus maintaining better global domain alignment. With further smoothed class-conditional adaptation, it achieves a test accuracy of 80.7, as shown in Table 1, competitive with the state-of-the-art result from [23].

SVHN→MNIST. Classification on the MNIST dataset is easier than on the others. As shown in the table, Source-Only achieves 82.4 on SVHN→MNIST with instance normalization. Therefore, even with the same amount of domain difference, performance on SVHN→MNIST is much better than on MNIST→SVHN across all compared models. Our model achieves a test accuracy of 98.7.

MNIST→MNISTM. Since MNISTM is a colored version of MNIST, there exists a one-to-one matching between the two datasets, i.e., a domain adaptation model will perform well as long as domain-invariant features are properly extracted. CoSCA provides better results than Co-DA, yielding a test accuracy of 98.9.

MNIST→USPS. Evaluation on the MNIST and USPS datasets is also conducted to compare our model with other baselines. Ours achieves a superb result of 99.3.

CIFAR-10 and STL-10 Datasets

CIFAR-10 and STL-10 are both 10-class datasets, with each image containing an animal or a type of transportation. Images from each class are much more diverse than in the digit datasets, with higher intrinsic dimensionality, which makes this a harder domain adaptation task. There are 9 overlapping classes between these two datasets. CIFAR provides images of size 32×32 and a large training set of 50,000 image samples, while STL contains higher-quality images of size 96×96, but with a much smaller training set of 5,000 samples. Following [10, 38, 23], we remove the non-overlapping classes from these two datasets and resize the images from STL to 32×32.

Due to the small training set of STL, STL→CIFAR is more difficult than CIFAR→STL. For the latter, the Source-Only model with no adaptation involved achieves an accuracy of 77.0. With adaptation, the margin-of-improvement is relatively small, while CoSCA provides the best improvement of 4.7 among all the models (Table 1). For STL→CIFAR, our model yields a 12.6 margin-of-improvement and an accuracy of 75.2. Figures 3(c) and 3(d) provide t-SNE plots for MCD and our model, respectively, which show that our model achieves much better alignment for each class.

VisDA Dataset

The VisDA dataset is a large-scale image dataset that evaluates adaptation from synthetic-object to real-object images. Images from the source domain are synthetic renderings of 3D models from different angles and under different lighting conditions. There are 152,397 image samples in the source domain, and 55,388 in the target domain. Images are rescaled following [36]. A model architecture with ResNet101 [18] pre-trained on ImageNet is required. There are 12 object categories in VisDA, shared by the source and the target domains.

Table 2 shows the test accuracy of different models on all object classes. The class-aware methods, namely MCD [36], SEDA [10], and our proposed CoSCA, outperform the Source-Only model in all categories. In comparison, the methods mainly based on distribution matching do not perform well in some of the categories. CoSCA outperforms MCD, showing the effectiveness of the contrastive loss and MMD global alignment. In addition, it performs better than SEDA in most categories, demonstrating its robustness in handling large-scale images.

Text Dataset

We also evaluate CoSCA on the Amazon Reviews dataset collected by [3]. It contains reviews from several different domains, with 1000 positive and 1000 negative reviews in each domain.

Model Accuracy
Source-Only 79.13
DANN [12] 80.29
PBLM [47] 80.40
MCD [36] 81.35
DAS [19] 81.96
CoSCA 83.17
Table 3: Results on the text classification task. Results of compared methods are reported by [19, 47].

Table 3 shows the average classification accuracy of different methods. We use the same model architecture and parameter settings for MCD and the Source-Only model. Results show that the proposed CoSCA outperforms all other methods. Specifically, it improves test accuracy from 81.96 to 83.17 compared with the state-of-the-art method DAS. MCD achieves 81.35, also outperformed by CoSCA.

Ablation Study

To further demonstrate the improvement of CoSCA over MCD [36], we conduct ablation studies. Specifically, with the same network architecture and setup, we compare model performance among 1) MCD, 2) MCD with only smooth alignment (MCD+Contras), 3) MCD with only global alignment (MCD+MMD), and 4) CoSCA, to validate the effectiveness of adding the contrastive loss and MMD loss to MCD. As MCD already achieves strong performance on some of the benchmark datasets, we mainly choose tasks on which MCD does not perform very well, in order to better analyze the margin-of-improvement. Therefore, MNIST→SVHN, STL→CIFAR, and Amazon Reviews are selected for this experiment, and the results are provided in Table 4.

Model MNIST→SVHN STL→CIFAR Amazon Reviews
MCD [36] 68.7 69.2 81.35
MCD+MMD 72.1 70.2 81.73
MCD+Contras 75.9 73.4 82.56
CoSCA 80.7 75.2 83.17
Table 4: Ablation study comparing CoSCA with different variations of MCD on MNIST→SVHN, STL→CIFAR, and Amazon Reviews.

Effect of Contrastive Alignment. We compare CoSCA with MCD as well as a few of its variants, to validate the effectiveness of the proposed contrastive alignment. Table 4 provides the test accuracy of every model on the selected benchmark datasets. For MNIST→SVHN, MCD+Contras outperforms MCD by 7.2. For STL→CIFAR and Amazon Reviews, the margin-of-improvement is 4.2 and 1.21, respectively (less significant than on MNIST→SVHN, possibly due to the smaller domain difference). Note that the results of MCD+Contras are still worse than CoSCA, demonstrating the effectiveness of the global domain alignment and the framework design of our model.

Effect of MMD. We further investigate how the MMD loss impacts the performance of our proposed CoSCA. Specifically, MCD+MMD achieves a test accuracy of 72.1 for MNIST→SVHN, lifting the original MCD result by only 3.4. For STL→CIFAR and Amazon Reviews, the margin-of-improvement is 1.0 and 0.38, respectively. While this validates the effectiveness of global alignment in the MCD framework, the improvement is small: without smoothed class-conditional alignment, MCD still encounters misclassified target features during training, leading to a sub-optimal solution. Notice that when comparing CoSCA with MCD+Contras, the improvement is significant for MNIST→SVHN, with both validation accuracy and training stability enhanced. This demonstrates the importance of global alignment when there exists a large domain difference.


We have proposed Contrastively Smoothed Class Alignment (CoSCA) for the UDA problem, by explicitly combining intra-class and inter-class domain discrepancy and optimizing class alignment through end-to-end training. Experiments on several benchmarks demonstrate that our model can outperform state-of-the-art baselines. Our experimental analysis shows that CoSCA learns more discriminative target-domain features, and the introduced MMD feature matching improves the global domain alignment. For future work, we want to develop a theoretical interpretation of contrastive learning for domain adaptation, particularly characterizing its effects on the alignment of source and target domain feature distributions.


  • [1] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik (2011) Contour detection and hierarchical image segmentation. PAMI. Cited by: Digit Datasets.
  • [2] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang (2017) Generalization and equilibrium in generative adversarial nets (GANs). In ICML, Cited by: Approach.
  • [3] J. Blitzer, M. Dredze, and F. Pereira (2007) Domain adaptation for sentiment classification. In ACL, Cited by: Text Dataset.
  • [4] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan (2016) Domain separation networks. In NeurIPS, Cited by: Table 1.
  • [5] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah (1994) Signature verification using a ”siamese” time delay neural network. In NeurIPS, Cited by: Contrastively Smoothed Class Alignment.
  • [6] O. Chapelle, B. Scholkopf, and A. Zien (2009) Semi-supervised learning. IEEE Transactions on Neural Networks. Cited by: Introduction.
  • [7] B. Dai and D. Lin (2017) Contrastive learning for image captioning. In NeurIPS, Cited by: Related Work.
  • [8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell (2014) Decaf: a deep convolutional activation feature for generic visual recognition. In ICML, Cited by: Introduction.
  • [9] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars (2013) Unsupervised visual domain adaptation using subspace alignment. In ICCV, Cited by: Experiments.
  • [10] G. French, M. Mackiewicz, and M. Fisher (2018) Self-ensembling for domain adaptation. In ICLR, Cited by: Table 2, CIFAR-10 and STL-10 Datasets, VisDA Dataset, Experiments.
  • [11] Y. Ganin and V. Lempitsky (2014) Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Cited by: Introduction, Related Work.
  • [12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky (2016) Domain-adversarial training of neural networks. JMLR. Cited by: Introduction, Related Work, Table 1, Table 2, Digit Datasets, Table 3.
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NeurIPS, Cited by: Introduction.
  • [14] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola (2012) A kernel two-sample test. JMLR. Cited by: Global Alignment with MMD.
  • [15] A. Gretton, A. J. Smola, J. Huang, M. Schmittfull, K. M. Borgwardt, and B. Schölkopf (2009) Covariate shift by kernel mean matching. In MIT press, Cited by: Introduction.
  • [16] R. Hadsell, S. Chopra, and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In CVPR, Cited by: Related Work.
  • [17] P. Haeusser, T. Frerix, A. Mordvintsev, and D. Cremers (2017) Associative domain adaptation. In ICCV, Cited by: Introduction.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: Introduction, VisDA Dataset.
  • [19] R. He, W. S. Lee, H. T. Ng, and D. Dahlmeier (2018) Adaptive semi-supervised learning for cross-domain sentiment classification. In ACL, Cited by: Table 3.
  • [20] G. Kang, L. Jiang, Y. Yang, and A. G. Hauptmann (2019) Contrastive adaptation network for unsupervised domain adaptation. In CVPR, Cited by: Related Work.
  • [21] M. Kim, P. Sahu, B. Gholami, and V. Pavlovic (2019) Unsupervised visual domain adaptation: A deep max-margin Gaussian process approach. arXiv preprint arXiv:1902.08727. Cited by: Approach.
  • [22] Y. Kim (2014) Convolutional neural networks for sentence classification. In EMNLP, Cited by: Experiments.
  • [23] A. Kumar, P. Sattigeri, K. Wadhawan, L. Karlinsky, R. Feris, B. Freeman, and G. Wornell (2018) Co-regularized alignment for unsupervised domain adaptation. In NeurIPS, Cited by: Introduction, Related Work, Table 1, Digit Datasets, CIFAR-10 and STL-10 Datasets, Experiments, Experiments.
  • [24] C. Li, K. Xu, J. Zhu, and B. Zhang (2017) Triple generative adversarial nets. In NeurIPS, Cited by: Related Work.
  • [25] M. Long, Y. Cao, J. Wang, and M. Jordan (2015) Learning transferable features with deep adaptation networks. In ICML, Cited by: Related Work, Table 1, Table 2.
  • [26] M. Long, Z. Cao, J. Wang, and M. I. Jordan (2018) Conditional adversarial domain adaptation. In NeurIPS, Cited by: Related Work.
  • [27] M. Long, H. Zhu, J. Wang, and M. I. Jordan (2016) Unsupervised domain adaptation with residual transfer networks. In NeurIPS, Cited by: Related Work, Related Work.
  • [28] Y. Luo, J. Zhu, M. Li, Y. Ren, and B. Zhang (2017) Smooth neighbors on teacher graphs for semi-supervised learning. In CVPR, Cited by: Introduction, Related Work, Contrastively Smoothed Class Alignment.
  • [29] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang (2011) Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks. Cited by: Experiments.
  • [30] S. J. Pan, Q. Yang, et al. (2010) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering. Cited by: Introduction.
  • [31] Z. Pei, Z. Cao, M. Long, and J. Wang (2018) Multi-adversarial domain adaptation. In AAAI, Cited by: Related Work.
  • [32] J. Pennington, R. Socher, and C. Manning (2014) GloVe: global vectors for word representation. In EMNLP, Cited by: Experiments.
  • [33] S. Rifai, Y. N. Dauphin, P. Vincent, Y. Bengio, and X. Muller (2011) The manifold tangent classifier. In NeurIPS, Cited by: Related Work.
  • [34] K. Saito, Y. Ushiku, T. Harada, and K. Saenko (2017) Adversarial dropout regularization. arXiv preprint arXiv:1711.01575. Cited by: Introduction, Related Work.
  • [35] K. Saito, Y. Ushiku, and T. Harada (2017) Asymmetric tri-training for unsupervised domain adaptation. In ICML, Cited by: Table 1.
  • [36] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada (2018) Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, Cited by: Introduction, Introduction, Related Work, Related Work, Global Alignment with MMD, Contrastively Smoothed Class Alignment, Table 1, Table 2, Approach, VisDA Dataset, VisDA Dataset, Ablation Study, Table 3, Table 4, Experiments, Experiments.
  • [37] O. Sener, H. O. Song, A. Saxena, and S. Savarese (2016) Learning transferrable representations for unsupervised domain adaptation. In NeurIPS, Cited by: Related Work.
  • [38] R. Shu, H. H. Bui, H. Narui, and S. Ermon (2018) A DIRT-T approach to unsupervised domain adaptation. In ICLR, Cited by: Introduction, Introduction, Table 1, CIFAR-10 and STL-10 Datasets, Experiments, Experiments.
  • [39] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: Introduction.
  • [40] A. Torralba and A. A. Efros (2011) Unbiased look at dataset bias. In CVPR, Cited by: Introduction.
  • [41] Y. Tsai, W. Hung, S. Schulter, K. Sohn, M. Yang, and M. Chandraker (2018) Learning to adapt structured output space for semantic segmentation. In CVPR, Cited by: Approach.
  • [42] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko (2015) Simultaneous deep transfer across domains and tasks. In ICCV, Cited by: Introduction, Related Work.
  • [43] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In CVPR, Cited by: Introduction, Related Work.
  • [44] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: Experiments.
  • [45] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu (2014) Learning fine-grained image similarity with deep ranking. In CVPR, Cited by: Related Work.
  • [46] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In NeurIPS, Cited by: Introduction.
  • [47] Y. Ziser and R. Reichart (2018) Pivot based language modeling for improved neural domain adaptation. In ACL, Cited by: Table 3.
  • [48] J. Y. Zou, D. J. Hsu, D. C. Parkes, and R. P. Adams (2013) Contrastive learning using spectral methods. In NeurIPS, Cited by: Related Work.