Unsupervised Domain Adaptation Using Approximate Label Matching


Jordan T. Ash    Robert E. Schapire    Barbara E. Engelhardt
Abstract

Domain adaptation addresses the problem created when training data is generated by a so-called source distribution, but test data is generated by a significantly different target distribution. In this work, we present approximate label matching (ALM), a new unsupervised domain adaptation technique that creates and leverages a rough labeling on the test samples, then uses these noisy labels to learn a transformation that aligns the source and target samples. We show that the transformation estimated by ALM has favorable properties compared to transformations estimated by other methods, which do not use any kind of target labeling. Our model is regularized by requiring that a classifier trained to discriminate source from transformed target samples cannot distinguish between the two. We experiment with ALM on simulated and real data, and show that it outperforms techniques commonly used in the field.

domain adaptation, neural networks, deep learning, generative adversarial networks

1 Introduction

Intuitively, intelligent agents should be able to improve on one task after having learned a similar kind of task. A human, for example, might be more capable of understanding Italian after having learned Spanish, or of playing tennis after having learned to play badminton. The goal of domain adaptation is to endow machine intelligence with this same sort of capability.

In the usual classification or regression framework, we assume that training and test data are generated by the same underlying distribution. When this assumption does not hold, test performance can be significantly worse than training performance. This problem comes up in many areas of machine learning, particularly in natural language understanding (Glorot et al., 2011) and computer vision (Gong et al., 2012). As an example, we may want to build a general-purpose sentiment classifier, but only have access to highly-biased training data, such as book reviews from an online store. Similarly, a model trained on photos taken by a webcam will likely generalize poorly to photos taken by a higher-resolution DSLR camera.

The ability to improve the generalizability of fitted models is crucial to the real-world effectiveness of data-hungry methods like deep learning. Domain adaptation holds the promise of allowing these models to be fitted on datasets where labeled examples are abundant, then used to make predictions on a separate dataset where labeled samples are much more scarce, and that may be generated by a distribution different than that of the samples on which the network was originally trained. Domain adaptation has also been connected to a wide array of important machine learning tasks, including counterfactual inference (Johansson et al., 2015) and off-policy reinforcement learning (Sutton & Barto, 1998).

Domain adaptation comes in three specific forms. In supervised domain adaptation, we are given several fully-labeled sources, and the goal is to learn a model that is stronger than one trained on any of the sources alone. In semi-supervised domain adaptation, we again have fully-labeled sources, but also a partially-labeled target domain. In this setting we are interested in classifying the unlabeled target data well by making use of all available information. In this article, we consider unsupervised domain adaptation, where we are given labeled source examples but do not have access to labels on target examples.

To coerce a model trained on one domain into performing well on another, domain adaptation methods often learn a transformation that makes the source samples statistically similar to the target samples. Correspondingly, the difficulty of domain adaptation lies in learning a high-quality transformation. Previous methods typically learn this transformation by considering only source and target samples, and sometimes source labels, but do not make any use of the target labels since they are not provided in this version of the problem.

In this work, we propose a new approach to unsupervised domain adaptation called approximate label matching (ALM). Our method constructs a rough labeling of the target data that we then exploit to dramatically improve the quality of the learned transformation between source and target data.

As a concrete illustration, we show a synthetic unsupervised domain adaptation problem, discussed further in Section 3, that highlights the potential importance of target label information in domain adaptation tasks (Figure 1). We place three circular domains on each of the three vertices of an equilateral triangle. Points that are inside the triangle’s perimeter are assigned negative labels and those outside are assigned positive labels. Despite the obvious similarities of each domain, without somehow incorporating target labels, it would clearly be impossible to uncover the rotation and translation that aligns the target data to either of the available sources. We find that ALM performs well in this adversarial situation, while other methods struggle. This example might seem contrived, but we show in experiments (Section 4.5) that real data can have similar properties.


Figure 1: The synthetic dataset, which includes two sources and one target. Blue points are positive source samples, orange points are negative source samples, and purple and green points are positive and negative target samples respectively. Target domain labels are shown here for clarity, but the classifier does not have access to labels from the target dataset.

In Sections 2 and 2.1 we introduce and provide an intuitive justification for our method. In Section 2.2 we discuss what it means for our model to overfit, and provide a simple regularization scheme that prevents this behavior. Finally, in Sections 3 and 4 we present comparative results of ALM applied to both simulated and real data, and we show that ALM outperforms related methods for domain adaptation.

1.1 Related Work

Domain adaptation has been approached in two distinct ways. In the first, training samples are re-weighted to make the resulting hypothesis better suited for classification on the test set. Kernel Mean Matching (KMM) is an example of a domain adaptation technique that falls into this category (Gretton et al., 2009). KMM re-weights source points in an effort to make the means of the source and target datasets as close as possible in a reproducing kernel Hilbert space, which circumvents the need to approximate source or target distributions. Like other kernel methods, a vanilla implementation of KMM requires constructing a kernel matrix that is square in the number of samples, which can cause difficulty when working with large amounts of data.
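To make the idea concrete, the following sketch (our own simplified rendering, not the authors' code) computes KMM-style importance weights by matching kernel means in closed form; the full method instead solves a constrained quadratic program with box and normalization constraints on the weights.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmm_weights(X_src, X_tgt, gamma=1.0, ridge=1e-3):
    """Rough KMM-style weights: solve K_ss beta = kappa, where kappa matches
    each source point against the target kernel mean. The real method solves a
    constrained QP; this is an unconstrained sketch."""
    n_s, n_t = len(X_src), len(X_tgt)
    K_ss = rbf_kernel(X_src, X_src, gamma)
    kappa = (n_s / n_t) * rbf_kernel(X_src, X_tgt, gamma).sum(axis=1)
    beta = np.linalg.solve(K_ss + ridge * np.eye(n_s), kappa)
    return np.clip(beta, 0.0, None)  # keep weights non-negative

# Example: re-weight source points drawn from a shifted distribution.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(200, 2))
X_tgt = rng.normal(0.5, 1.0, size=(200, 2))
w = kmm_weights(X_src, X_tgt, gamma=0.5)
```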

In the alternative approach, one or both of the domains are transformed into a space where they better match each other. Once in this new representation, a classifier trained on a source is expected to perform well on the unlabeled target samples. A simple and effective algorithm that takes this approach is Subspace Alignment (SA), which finds a matrix that minimizes the Bregman divergence between the transformed source and target (Fernando et al., 2013). The PCA-based solution for SA is extremely efficient.
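A minimal sketch of the SA idea using scikit-learn's PCA; the function and variable names are illustrative. A classifier fit on the aligned source representation is then applied to the target representation.

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(X_src, X_tgt, n_components=10):
    """Sketch of SA: project each domain onto its top principal components,
    then align the source basis to the target basis with a single matrix."""
    P_s = PCA(n_components).fit(X_src).components_.T   # (d, k) source basis
    P_t = PCA(n_components).fit(X_tgt).components_.T   # (d, k) target basis
    M = P_s.T @ P_t                                    # (k, k) alignment matrix
    return X_src @ P_s @ M, X_tgt @ P_t                # aligned source, target

# Usage sketch: a source classifier is trained on Z_src and applied to Z_tgt.
rng = np.random.default_rng(0)
Z_src, Z_tgt = subspace_alignment(rng.normal(size=(300, 50)),
                                  rng.normal(size=(200, 50)), n_components=10)
```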

Unlike ALM, SA and KMM do not make use of label information from either the source or target datasets. Like SA, ALM learns a transformation to match the source and target domain, but SA is restricted to learning linear transformations, whereas ALM is able to estimate nonlinear transformations.

More recently, several deep learning-based solutions to domain adaptation have been proposed (Ghifary et al., 2015; Ganin et al., 2016). One popular example of this is domain adversarial neural networks (DANNs), which consist of three parts: a transformation module, label classification module, and domain classification module (Ganin et al., 2016). Their goal, like with SA, is to uncover a transformation that projects both the source and target data into a canonical space where a model trained on the source data will generalize well to the unlabeled target. The transformation module embeds both the source and target datasets, and the label classification module uses that representation to distinguish between labeled source samples. The domain classification module is attached to the transformation module via a gradient-reversal layer, and is trained to distinguish between source and target examples. Gradient reversal causes the transformation module to learn to confuse the domain classification module, ideally forcing the transformed source and transformed target data to have similar distributions. All of the DANN modules are trained simultaneously.
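The gradient-reversal layer at the heart of DANNs is commonly implemented as a custom autograd function, as in the PyTorch sketch below (an illustrative rendering, not the authors' code).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda on the
    way back, so the feature extractor learns to confuse the domain classifier."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# In a DANN forward pass (sketch): features feed the label classifier directly,
# and feed the domain classifier through the reversal layer.
# label_logits  = label_head(features)
# domain_logits = domain_head(grad_reverse(features, lamb))
```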

The ALM architecture consists of three neural network modules that are similar to those used in DANNs, but the two methods differ in a number of important ways. A significant difference between the two procedures is our use of a rough, better-than-guessing hypothesis on the target samples, which allows ALM to uncover transformations that other techniques cannot. A subtle difference between the two methods is that the transformation learned by ALM is applied only to the target, not the source, which we find is important for preventing overfitting in ALM. Also, unlike DANNs, our classification module is trained offline on the source dataset, and we make use of a true discriminator (discussed in Section 2.2) rather than a gradient reversal layer, because gradient reversal can cause instability while training (Arjovsky & Bottou, 2017).

Approximate label matching (ALM) is somewhat reminiscent of co-training, which was developed as a method for classifying data when there are few training examples available (Blum & Mitchell, 1998). The idea is to create two different hypotheses by training two classifiers on two different representations of the same data. Test samples on which the two hypotheses strongly agree are added to the training set, and the process is repeated. Like co-training, ALM takes advantage of two candidate hypotheses: a rough labeling that is obtained offline, and an alternate labeling created by passing transformed target samples through a classifier trained on the source dataset.

2 Approximate Label Matching (ALM)

In unsupervised domain adaptation, the predictor is given one or more source datasets $S_1, \ldots, S_k$ and one target input sequence $x^T_1, \ldots, x^T_m$. Each source dataset $S_i$ includes an input sequence $x^i_1, \ldots, x^i_{n_i}$ and a label sequence $y^i_1, \ldots, y^i_{n_i}$, each with $n_i$ elements. In a binary classification problem, every $y^i_j$ is a binary label for a $d$-dimensional input $x^i_j \in \mathbb{R}^d$. For each $x^T_j$ in the target sequence, the goal of the classifier is to predict the (unknown) label $y^T_j$. Throughout this article, we use notation like $f(x^T)$ to denote the sequence of predictions produced by $f$ on all $x^T_j$ for simplicity.

We assume that, within each source $S_i$, all labeled examples are generated according to the same distribution, and likewise for the target. We further assume that each domain is generated by a different distribution, but that they are similar enough that a model trained on one can be transferred, to some degree, to another. Namely, as discussed in Section 1, we suppose that the target domain can be made to resemble any fixed source domain $S_i$ via a transformation $\Phi_i$. Once transformed, samples from the target domain could be classified using a model trained on the $i$-th source domain. The key question then is how to find such a transformation.

As a natural starting point, we first train a classifier $f_i$ to label samples from source dataset $S_i$. Because the distributions generating the source and target are different, the accuracy of $f_i$ on the target will tend to be much worse than its accuracy on $S_i$. The hope is that predictions obtained by transforming the target samples before classification, that is, $f_i(\Phi_i(x^T))$, will yield more favorable results.

In addition to the classifier $f_i$, we suppose the availability of an approximate labeling $\tilde{y} = (\tilde{y}_1, \ldots, \tilde{y}_m)$ on the target samples $x^T$. Such a rough labeling could come from a variety of simple learning procedures, and we outline some options in Section 2.1. To estimate the transformation $\Phi_i$, we leverage both $f_i$ and the approximate labeling $\tilde{y}$.

Specifically, if $f_i$ is the ideal decision boundary for an optimally-transformed target, and $\tilde{y}$ is a labeling on the target that is usually correct, then we may be able to uncover the optimal transformation as the function $\Phi_i$ that causes $\tilde{y}$ to best agree with $f_i$'s predictions on the transformed target samples $\Phi_i(x^T)$. That is, we find $\Phi_i$ to minimize the squared error:

$$\Phi_i \;=\; \operatorname*{arg\,min}_{\Phi} \; \sum_{j=1}^{m} \Big( f_i\big(\Phi(x^T_j)\big) - \tilde{y}_j \Big)^2 \qquad (1)$$

The nested nature of $f_i(\Phi(\cdot))$ makes gradient-based methods like neural networks an ideal model choice for this optimization.
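A minimal PyTorch sketch of solving Eqn. (1), assuming binary labels encoded in {0, 1}; the architectures and variable names (f_src, phi, y_rough) are illustrative rather than the exact configuration used in our experiments.

```python
import torch
from torch import nn

d = 2
f_src = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, 1))  # trained on the source, then frozen
phi   = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, d))  # transformation to be learned

for p in f_src.parameters():       # f_i is fixed while solving for Phi_i
    p.requires_grad_(False)

x_tgt   = torch.randn(128, d)                        # unlabeled target inputs (toy data)
y_rough = torch.randint(0, 2, (128, 1)).float()      # approximate labels, as in Section 2.1

opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    pred = torch.sigmoid(f_src(phi(x_tgt)))          # f_i(Phi(x^T)), nested as in Eqn. (1)
    loss = ((pred - y_rough) ** 2).mean()            # squared error against the rough labeling
    loss.backward()                                  # gradients flow through the frozen f_src into phi
    opt.step()
```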

We refer to this procedure, which treats each source separately, as approximate label matching. When there are multiple sources available ($k > 1$), we can run ALM on each to create multiple predictors, then take a simple average:

$$\hat{y}_j \;=\; \frac{1}{k} \sum_{i=1}^{k} f_i\big(\Phi_i(x^T_j)\big) \qquad (2)$$

This average hypothesis can be similarly obtained for any domain adaptation algorithm.

Our method can be generalized to multiclass problems by using a vector encoding for labels and predictions instead of scalar values.

2.1 Obtaining a Rough Labeling

As discussed above, approximate label matching requires a better-than-guessing estimate of the target domain labels in order to be effective. In this section, we overview two ways in which these approximate labels can be acquired.

In the first approach, we obtain a rough labeling by treating all sources in aggregate as one larger training set, and fitting a classifier as one would in an ordinary supervised (non-domain-adaptation) setting. We could then use the predictions of this classifier on the target samples as $\tilde{y}$. We call this a pseudo-supervised rough labeling. This works particularly well when there are multiple sources available, because the resulting model will be able to leverage features that generalize well across the different sources, and that may also generalize reasonably well to an unobserved target domain.

Another approach for estimating target domain labels, which may be preferred when only one source dataset is available, is to simply obtain a hypothesis from an alternative unsupervised domain adaptation algorithm, and use it to produce a rough labeling on target samples. When using ALM in conjunction with this kind of rough labeling, we call the overall technique refinement, since ALM is in a sense refining the labeling produced by the given domain adaptation algorithm.
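As a sketch, either source of rough labels might be obtained as follows; the helper names and the use of scikit-learn are illustrative assumptions, not the implementation used here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pseudo_supervised_labels(sources, X_tgt):
    """Pool every labeled source and train an ordinary classifier; its
    predictions on the target serve as the rough labeling y_tilde."""
    X = np.vstack([X for X, _ in sources])
    y = np.concatenate([y for _, y in sources])
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)
    return clf.predict(X_tgt)

def refinement_labels(base_da_predict, X_tgt):
    """Alternatively, take the target predictions of any other unsupervised
    domain adaptation method (e.g., SA) as the rough labeling to be refined."""
    return base_da_predict(X_tgt)
```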

2.2 Adversarial Regularization

Approximate label matching attempts to align the transformed target $\Phi_i(x^T)$ with the source $S_i$. However, when the transformation $\Phi_i$ is highly expressive, we may find that it contorts the target samples so as to match the rough labeling $\tilde{y}$ without aligning to $S_i$. We have observed this phenomenon anecdotally and frequently in experiments; a detailed example is given in Section 3. This is a form of overfitting, in the sense that ALM is finding a hypothesis that agrees with $\tilde{y}$ too precisely due to underconstrained optimization.

In order to prevent this type of overfitting, we propose an adversarial form of regularization, which attempts to force the transformed target samples to look enough like the source samples that they could confuse a classifier trained to distinguish between the two. We do this by adding a discriminating function $D$ that is trained jointly with the other ALM machinery. At each step of learning, $D$ is updated to better distinguish between source samples and transformed target samples, and $\Phi_i$ is updated both to deteriorate the performance of $D$ and to solve Eqn. (1). Our regularization technique is very similar to traditional Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), but instead of generating samples from random noise, we generate source-looking samples from target samples. The discriminating function takes as input either source samples or transformed target samples, and is trained to predict 0 for a source sample and 1 for a target sample. The loss minimized by $D$ is simply the binary cross-entropy on the source and transformed target samples,

$$L_D \;=\; \sum_{j=1}^{n_i} \ell\big(D(x^i_j),\, 0\big) \;+\; \sum_{j=1}^{m} \ell\big(D(\Phi_i(x^T_j)),\, 1\big),$$

where $\ell(o, y)$ is the binary cross-entropy loss resulting from output $o$ with label $y$. This approach changes the loss function for $\Phi_i$ to

$$L_{\Phi_i} \;=\; \sum_{j=1}^{m} \Big( f_i\big(\Phi_i(x^T_j)\big) - \tilde{y}_j \Big)^2 \;+\; \lambda \sum_{j=1}^{m} D\big(\Phi_i(x^T_j)\big),$$

where $\lambda$ is the magnitude of the regularization. The second term, $D(\Phi_i(x^T_j))$, denotes the discriminating function's confidence that the transformed target sample is a target sample. Minimizing this confidence corresponds to "confusing" the discriminator, effectively making the transformed target samples resemble source samples. The fitted ALM now needs to match the labeling $\tilde{y}$ as well as possible, while also aligning $\Phi_i(x^T)$ to the source $S_i$.

Adversarial regularization makes the source and transformed target samples appear relatively similar without the need to approximate generative distributions or define an explicit distance metric between them.
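The two per-step losses described above might be computed as in the following sketch, which assumes the discriminator ends in a sigmoid and that labels are encoded in {0, 1}; the names are illustrative.

```python
import torch
from torch import nn

bce = nn.BCELoss()

def discriminator_loss(D, x_src, x_tgt_transformed):
    # D is trained to output 0 on source samples and 1 on transformed target samples.
    pred_src = D(x_src)
    pred_tgt = D(x_tgt_transformed.detach())   # do not backpropagate into Phi here
    return bce(pred_src, torch.zeros_like(pred_src)) + \
           bce(pred_tgt, torch.ones_like(pred_tgt))

def transformation_loss(f_src, D, phi, x_tgt, y_rough, lam):
    # Squared error against the rough labels, plus lambda times the
    # discriminator's confidence that the transformed samples are target samples.
    z = phi(x_tgt)
    fit_term = ((torch.sigmoid(f_src(z)) - y_rough) ** 2).mean()
    confuse_term = D(z).mean()
    return fit_term + lam * confuse_term
```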

2.3 Implementation of ALM

We implement the ALM functions using neural networks. Specifically, we let $f_i$ be a neural network of arbitrary architecture, and we then use this network to estimate $\Phi_i$. When solving for $\Phi_i$, $f_i$ and $\tilde{y}$ have already been approximated and are fixed. The transformation $\Phi_i$ is represented by adding additional network layers before the existing input layer of the already-trained network $f_i$. We further add a discriminator $D$ to the output of $\Phi_i$, which, as described earlier, is iteratively trained to distinguish between source samples and transformed target samples.


Figure 2: A diagram of ALM for a fixed source $S_i$. Orange modules are being trained together while blue modules were trained offline and are fixed. Solid arrows correspond to standard connections and dashed arrows are adversarial connections.

When training this aggregate network, we use $\tilde{y}$ as labels on $x^T$ for backpropagation with a mean squared error criterion, and we do not update any of the weights in the layers corresponding to $f_i$. The compositionality of backpropagation allows us to easily solve Eqn. (1) even though $\Phi_i$ is nested within $f_i$.
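A hedged sketch of this composition in PyTorch (layer sizes and names are illustrative): the transformation is prepended to the frozen, already-trained classifier, and a separate discriminator reads the transformation's output.

```python
from torch import nn

d, n_classes = 100, 2

# f: classifier trained offline on the source (its training loop is omitted here).
f_src = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Dropout(0.5), nn.Linear(50, n_classes))

# Phi: extra layers prepended before f's input layer; only these are trained
# when solving Eqn. (1), so f's weights are frozen.
phi = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, d))
for p in f_src.parameters():
    p.requires_grad_(False)

alm_net = nn.Sequential(phi, f_src)   # backpropagation flows through f_src into phi

# D: discriminator attached to Phi's output, trained to tell source samples
# from transformed target samples.
D = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, 1), nn.Sigmoid())
```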

3 A Synthetic Example

To illustrate how our algorithm works, we demonstrate its performance on the artificial dataset described in Section 1, which is difficult to classify but conceptually simple. The data consists of three isotropic Gaussian domains distributed on each of the three vertices of an equilateral triangle (Figure 3). Points that are inside the triangle’s perimeter are assigned negative labels, and those outside are assigned positive labels. For this example, we will try to adapt the target domain to the first source domain $S_1$, but performance is similar for any source-target pair chosen.


Figure 3: A visualization of the synthetic dataset and a few domain adaptation techniques. Top Left: The original dataset as shown in Figure 1. Top Right: The source and target after being transformed by SA. Bottom Left: The transformation learned by ALM without adversarial regularization applied to the target data. Bottom Right: The transformation learned by ALM with adversarial regularization applied to the target data.

Because our example includes two sources, the rough labeling $\tilde{y}$ is obtained by training a multi-layer perceptron (MLP) on both sources simultaneously. Such a classifier is able to correctly predict 79.9% of the samples in the held-out target domain.

ALM trained without adversarial regularization overfits in exactly the way described in Section 2.2: it learns to make predictions that almost exactly match the rough labeling $\tilde{y}$, but contorts $\Phi(x^T)$ such that it does not resemble $S_1$ at all (Figure 3). Once adversarial regularization is included, the learned transformation aligns $\Phi(x^T)$ to $S_1$ fairly well, yielding 90.0% accuracy.

This unsupervised domain adaptation simulation demonstrates ALM’s ability to find transformations that other algorithms cannot. In this specific example, recovering the correct translation of target to source inputs is straightforward even without using a target labeling, but recovering the correct domain rotation is challenging because of the circular shape of the datasets. An algorithm that does not make use of a target prediction cannot be expected to perform well on this task.

As an example, the SA algorithm, which does not use a target labeling, lumps the source and target together so that they appear to be from the same distribution, but cannot uncover the necessary target domain rotation (Figure 3). Accordingly, SA does not perform better than guessing on this adversarial simulation.

4 Experiments and Results

We turn next to an empirical evaluation of ALM compared to some leading domain adaptation algorithms. We evaluate these methods on several domain adaptation problems, which are described in detail below. We begin by discussing our experimental setup.

4.1 Experimental Setting

Each domain-adaptation problem consists of several domains. In our experiments we use one domain as a target and another as a source. We repeat this for all possible source-target pairs and report accuracies for each.

In each experiment, we chose $\Phi$ to be an MLP that includes three layers, allowing for nonlinear transformations, with 50 hidden nodes and input and output sizes equal to the dimensionality of the data. Similarly, we chose $f$ and the predictor used to produce pseudo-supervised rough labelings to both be MLPs with three layers, a hidden layer size of 50 nodes, and as many output dimensions as there are classes in the problem. We fix $D$ to also be an MLP with three layers, and again with 50 nodes in the hidden layer. In all experiments, dropout of 50% is applied to help in training all networks except for the transformation $\Phi$.
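For concreteness, the following PyTorch sketch shows networks of the stated shape; the input dimension, class count, and the choice to omit dropout from the transformation reflect our reading of the setup above and are illustrative rather than exact.

```python
from torch import nn

def mlp(d_in, d_out, hidden=50, dropout=0.5):
    """Three-layer MLP matching the architecture described above (input,
    50-node hidden layer, output), with optional 50% dropout."""
    return nn.Sequential(
        nn.Linear(d_in, hidden), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden, d_out),
    )

# Illustrative instantiations for a d-dimensional, c-class problem.
d, c = 100, 2
phi = mlp(d, d, dropout=0.0)   # transformation (no dropout, per our reading of the text)
f   = mlp(d, c)                # label classifier
D   = mlp(d, 1)                # discriminator
```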

We compare ALM to both DANNs and SA on all datasets. The SA algorithm takes a source and target as inputs and returns a canonical feature representation. To make a reasonable comparison, once in this space, we use a neural network with the same architecture as $f$ in ALM for classification. Similarly, for DANNs, we fix the transformation, label classification, and domain classification modules to have the same architecture as $\Phi$, $f$, and $D$, respectively. While these architectures are certainly not designed to be optimal for each application, we kept them fixed as a control to highlight the relative performance of the various domain adaptation approaches and the generality of our solution.

We also include the accuracy of the approximate labeling $\tilde{y}$, as well as that of $f_i(x^T)$, that is, $f_i$ applied to the target directly, without transformation by $\Phi_i$. Finally, we report the results of a “target only” experiment (denoted TO in the tables), in which the target domain is used for both training and testing (still using the same architecture as $f$). These accuracies are computed using twenty-fold cross validation. The purpose of this design is to show for comparison how well a model can perform when training and test examples come from the same distribution.

The domain adaptation techniques used here, especially DANNs and ALM, can be hyperparameter sensitive. The unsupervised selection of such values is an active area of research outside the scope of this work. To obtain accuracy values, we ran each algorithm 100 times with randomly-chosen hyperparameters for each source-target pair, and reported the best results. As an aside, we did notice that the value of $D(\Phi(x^T))$, the discriminator’s confidence that the transformed target samples are indeed target samples, tends to correlate strongly with accuracy in both ALM and DANN: exploiting this observation for hyperparameter tuning would be an interesting direction for future research. All neural networks were implemented in Torch (Collobert et al., 2011) and optimized using the Adam variant of SGD (Kingma & Ba, 2015).
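The selection protocol amounts to a simple random search, sketched below with illustrative hyperparameter ranges (not the ones actually used); run_alm is a hypothetical function that trains ALM under a given configuration and returns target accuracy.

```python
import random

def random_config():
    # Illustrative ranges only; the paper does not specify these values.
    return {
        "lr": 10 ** random.uniform(-4, -2),
        "lambda_reg": 10 ** random.uniform(-3, 1),
        "epochs": random.choice([50, 100, 200]),
    }

def best_of_n_runs(run_alm, n_runs=100):
    # run_alm(config) is assumed to train the model and return target accuracy.
    results = [run_alm(random_config()) for _ in range(n_runs)]
    return max(results)
```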

4.2 Sentiment Classification

In the sentiment dataset, we are provided with product reviews from amazon.com (Blitzer et al., 2007). All reviews come from items in either Amazon’s book, DVD, electronics, or kitchen departments, and data from each department contains 2,000 samples; each department is a separate domain. Each review is represented as term frequencies for a specified vocabulary. The goal is to distinguish between positive and negative reviews.

Because there are multiple sources available in this problem, we used a pseudo-supervised rough labeling (see Section 2.1) for ALM. However, because of this choice, for a fixed source-target pair, $\tilde{y}$ implicitly uses more information than DANNs and SA, which only use information from that single source. Thus, we also present results with refinement of SA and DANN labelings.

We find that, for this task, DANN generally outperforms SA, sometimes by a large margin (Table 1). ALM consistently achieves a higher accuracy than the rough labeling used during its optimization, regardless of the procedure used to generate that labeling.


        TO    f_i   ỹ     ALM_ỹ  DANN  ALM_DANN  SA    ALM_SA
DB      85.5  54.6  80.3  82.9   76.8  78.4      74.8  77.0
EB      85.5  52.0  80.3  82.8   63.1  67.1      65.2  66.0
KB      85.5  55.6  80.3  82.7   65.1  65.7      67.6  70.5
BD      82.3  55.0  80.2  82.7   80.0  81.2      74.5  76.4
ED      82.3  53.9  80.2  83.2   66.1  68.0      63.9  64.3
KD      82.3  55.6  80.2  82.6   67.2  68.4      60.1  61.4
BE      81.8  50.7  82.8  85.2   68.9  69.1      62.0  66.6
DE      81.8  54.4  82.8  84.9   70.2  70.3      64.8  65.0
KE      81.8  65.8  82.8  85.1   76.2  79.1      54.3  56.2
BK      80.0  53.6  87.0  89.0   75.1  75.4      64.4  68.6
DK      80.0  54.4  87.0  89.2   71.5  72.2      56.6  60.0
EK      80.0  60.9  87.0  89.2   76.2  78.7      55.6  56.7
Avg     82.4  55.4  82.6  85.0   76.2  72.8      63.7  65.7
Table 1: Percent accuracies for the sentiment classification task. Here, D, B, E, and K represent the DVD, Book, Electronics, and Kitchen Appliances domains respectively. Pairs of letters (e.g., DB) correspond to using the DVD domain as a source for labeling the Books domain, which is used as a target. ALM_ỹ, ALM_SA, and ALM_DANN show the result of performing ALM using a pseudo-supervised rough labeling ỹ, refinement of SA, and refinement of DANN, respectively. Average accuracies are shown in the last row.

4.3 Digit Classification

In this experiment, we are supplied with standard MNIST digits, as well as handwritten digits extracted from USPS parcels. These two datasets look very similar (Figure 4), but a classifier trained on one still performs significantly worse on the other. Digits from the USPS domain are 16 pixels smaller in both dimensions than MNIST images, so we zero-pad them by 8 pixels on each side for preprocessing.
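The padding step is a one-liner, sketched here with NumPy; the placeholder array stands in for an actual USPS image.

```python
import numpy as np

# Pad a 16x16 USPS image with 8 zero-valued pixels on every side (yielding 32x32).
usps_image = np.zeros((16, 16), dtype=np.float32)   # placeholder image
padded = np.pad(usps_image, pad_width=8, mode="constant", constant_values=0)
assert padded.shape == (32, 32)
```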


Figure 4: Example MNIST and USPS digit images. The top row shows MNIST images and the bottom row shows USPS images. The USPS images are actually 16x16 pixels, so they were scaled to the same size as MNIST for this figure.

Because only one source is available, we use refinement of SA to acquire rough labels . To obtain image features, we convolutionally embed both datasets into 84 dimensions. This is done by training a convolutional network of the LeNet-5 (LeCun et al., 1998) architecture on the source data, then taking the activations in the layer before the output layer as a new representation of both the source and the target (Table 2).
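A sketch of a LeNet-style feature extractor whose 84-dimensional penultimate activations serve as the shared representation for both domains; the exact layer choices are illustrative, not the precise network used here.

```python
from torch import nn

class LeNetFeatures(nn.Module):
    """LeNet-5-style network; embed() returns the activations of the layer
    before the output layer, used as features for domain adaptation."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.fc = nn.Sequential(nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
                                nn.Linear(120, 84), nn.Tanh())   # 84-d embedding
        self.out = nn.Linear(84, 10)                             # digit classes

    def embed(self, x):                 # x: (batch, 1, 32, 32)
        h = self.conv(x).flatten(1)
        return self.fc(h)

    def forward(self, x):
        return self.out(self.embed(x))
```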


        TO    f_i   ALM   DANN  SA
U       90.8  90.0  94.5  92.4  93.1
M       88.0  86.9  89.7  87.8  88.3
Avg     89.2  88.5  92.2  90.1  90.7
Table 2: Percent accuracies for the digit classification task. The first row of results corresponds to treating USPS images as the target and the second row corresponds to using MNIST images as the target. Average accuracies are shown in the last row.

Our results show that even in a situation where the source and target appear very similar, domain adaptation can be used to improve test accuracy. We find that ALM achieves a more favorable result than that produced by other techniques on these digit datasets.

4.4 Office Object Classification

The Stanford Office dataset consists of images of 31 different objects typically found around an office (Saenko et al., 2010). It is composed of three domains: pictures taken by a low-quality webcam, pictures taken by a high-quality DSLR camera, and product images from amazon.com (Figure 5).

The webcam, DSLR, and Amazon domains consist of 795, 498, and 2,817 images, respectively. Because state-of-the-art image classifiers typically require much more data to train than is available here, we use a network of the AlexNet (Krizhevsky et al., 2012) architecture that has been trained on the ImageNet dataset as a feature extractor. As in the previous experiment, we pass each image through the network and use the activations at the last hidden layer as a lower-dimensional representation of the data. In this section, we use refinement of SA for the rough labeling .
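A hedged sketch of this feature-extraction step, assuming a recent torchvision release; the preprocessing constants are the standard ImageNet ones and are assumptions rather than the exact pipeline used here.

```python
import torch
from torchvision import models, transforms

# Pretrained AlexNet used as a fixed embedding: activations at the last hidden
# layer (4096-d) represent each Office image.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final 1000-way layer
alexnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_image):
    return alexnet(preprocess(pil_image).unsqueeze(0)).squeeze(0)   # 4096-d feature
```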


Figure 5: Example images for 20 (randomly chosen) of the 31 classes in the Office Dataset. Classes from top left to bottom right are ring binder, pen, printer, laptop computer, helmet, paper notebook, trash can, speaker, desk chair, projector, desktop computer, bookshelf, mobile phone, monitor, stapler, phone, mouse, keyboard, file cabinet, and letter tray. The top two rows of images are from the webcam domain, and tend to look overexposed and washed-out, while more balanced images on the third and fourth rows are generated from the DSLR domain. The last two rows of images correspond to Amazon product images, which generally feature a white background.

        TO    f_i   ALM   DANN  SA
DA      69.5  30.1  35.9  35.3  35.2
WA      69.5  32.2  37.5  37.5  36.9
AD      77.4  42.0  53.6  47.0  49.4
WD      77.4  94.1  98.0  94.4  96.4
AW      80.0  38.3  48.1  43.8  45.9
DW      80.0  80.4  90.8  86.0  88.3
Avg     75.6  52.9  60.7  57.3  58.7
Table 3: Percent accuracies for the object classification task. Here, A, D, and W represent the Amazon, DSLR, and webcam domains respectively. Paired notation (e.g., DA) corresponds to using the DSLR domain as a source for labeling the target Amazon domain. Average accuracies are shown in the last row.

In these results, the most challenging source-target pairs are those that involve the Amazon domain (Table 3). Unlike the webcam and DSLR domains, Amazon images could contain more than a single object, and that object may not resemble typical items found around an office. Even in these difficult situations, we find that ALM performs as well or better than other methods.

We note that these results are somewhat worse than those reported in the original DANN article. In that work, a pretrained AlexNet is refined during training as the transformation module, rather than used to produce an image embedding and training a transformation module from scratch as we do here (Ganin et al., 2016). Although not explored in this work, we would expect a similar boost in performance from using the alternative approach with ALM.

4.5 MEG Signal Classification

Magnetoencephalography (MEG) machines use magnetic fields to measure the brain’s electrical activity and are frequently used for experiments in neuroscience and psychology. In the specific problem we consider, originally part of an online competition, the data come from sixteen different people who were shown either pictures of human faces or non-faces (images with no discernible face-like structures) while having their brain activities recorded using MEG. Each subject’s data comprises a single domain, and the objective is to classify instances where a held-out subject was looking at a face from those in which the subject was looking at a non-face (Henson et al., 2011).

Domain adaptation is a problem that comes up frequently in brain-computer interface problems in general, not just those dealing with MEG data. Advancing classification methods in this space could improve existing techniques in areas like fMRI analysis, neuropsychiatric disease diagnosis, prosthetics, and human-computer interaction (Sugiyama et al., 2007), where the locations of neurons and voxels are often person-specific (Xu et al., 2012).

We represent this data according to the method discussed by Barachant et al. (2013), which maps the raw time-series data into 2,014 dimensions. Source labels are used to obtain this representation, so each held-out subject operates in a different space. Figure 6 shows an example representation embedded into two dimensions. Each domain contains between 500 and 600 samples.

These data, at least in an embedded space, seem to have visual properties similar to the synthetic example discussed earlier. Most domains appear to be rotated and translated with respect to each other, and there is no geometric overlap between them. This suggests that ALM might be particularly well-suited for this problem.

Unlike in previous experiments, instead of reporting the accuracy of a technique that performs domain adaptation on a fixed source and target, we average the hypotheses resulting from running that technique on each available source and a fixed target. For ALM, this average hypothesis is described by Eqn. (2). In this problem we again use a pseudo-supervised rough labeling as $\tilde{y}$.


Figure 6: A visualization of the MEG data embedded into two dimensions using t-SNE (Maaten & Hinton, 2008). Blue points are positive source samples, orange points are negative source samples, and purple and green points are positive and negative target samples respectively. The data shows distinct clusters corresponding to each subject, making this a clear domain adaptation problem. Target domain labels are shown here for clarity, but the classifier does not have access to labels from the target dataset.

We also compare to stacked generalization, which in the MEG literature is often used to solve such problems (Olivetti et al., 2014). In this hierarchical approach, a separate base classifier is trained on each of the available sources. The predictions of these classifiers on all available source samples are then aggregated and used as a new feature space, with one dimension per base classifier, for a higher-level classifier. Again, all models are MLPs with one hidden layer composed of 50 nodes.
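A simplified sketch of stacked generalization; the names are illustrative, and a more careful implementation would use held-out predictions when building the meta-features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def stacked_generalization(sources, X_tgt):
    """One base MLP per source; their predicted probabilities on all source
    samples become the feature space for a higher-level MLP."""
    base = [MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)
            for X, y in sources]
    X_all = np.vstack([X for X, _ in sources])
    y_all = np.concatenate([y for _, y in sources])
    # Meta-features: each base classifier contributes one column.
    meta_X = np.column_stack([clf.predict_proba(X_all)[:, 1] for clf in base])
    meta = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(meta_X, y_all)
    meta_X_tgt = np.column_stack([clf.predict_proba(X_tgt)[:, 1] for clf in base])
    return meta.predict(meta_X_tgt)
```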


        TO    f     ỹ     ALM   DANN  SA    SG
S01     85.6  75.1  77.2  85.2  77.1  76.3  72.7
S02     80.7  66.4  64.8  74.7  69.9  66.8  62.1
S03     82.9  59.8  61.1  73.0  65.2  64.0  57.1
S04     90.5  72.2  72.2  90.6  79.9  75.9  80.4
S05     86.2  66.5  66.0  82.5  76.1  70.8  62.9
S06     86.1  69.0  66.1  75.7  70.4  73.1  63.4
S07     86.1  53.9  69.9  83.0  68.7  70.0  53.7
S08     87.3  70.8  66.8  82.4  76.8  71.9  64.9
S09     88.2  73.7  71.7  84.1  79.4  73.7  76.1
S10     87.7  64.0  68.6  81.4  75.0  71.3  60.5
S11     78.7  62.5  67.4  72.5  64.4  72.5  59.3
S12     84.0  78.4  76.9  86.2  76.4  74.7  75.1
S13     83.3  69.9  67.8  79.3  75.0  72.9  70.2
S14     91.4  73.6  74.5  88.6  79.6  74.4  72.9
S15     90.7  65.0  68.2  83.6  72.7  70.7  67.4
S16     88.2  56.1  63.0  78.0  66.1  67.4  58.6
Avg     86.1  67.3  68.9  81.3  73.3  71.6  66.1
Table 4: Percent accuracies for the MEG classification task. Each row presents results corresponding to holding out the denoted subject as a target and averaging the hypotheses resulting from performing domain adaptation with each available source. For this experiment we also include the results of stacked generalization, shown as SG. Average accuracies are shown in the last row.

ALM outperforms other techniques on this data by a margin larger than in any of the preceding experiments (Table 4).

5 Discussion and Future Work

In this article, we presented approximate label matching, a novel and powerful technique for unsupervised domain adaptation.

In our results, we found that the performance of ALM is never worse than that of the rough labeling used in its optimization. In several cases, ALM was only able to improve slightly, highlighting the difficulty of the unsupervised domain adaptation task. Still, our approach can be used in conjunction with any domain adaptation technique. In our results, ALM outperformed other state-of-the-art domain adaptation methods, sometimes by a substantial margin.

Separately, it might be useful to apply this algorithm iteratively to see if performance continues to improve. In that setting, the output of the current iteration of ALM would be the approximate labeling used for the next iteration of ALM.

Although we only address unsupervised domain adaptation in this article, our approach could be straightforwardly modified to accommodate supervised or semi-supervised situations as well. For supervised domain adaptation, we could learn a transformation by simply setting $\tilde{y}$ to the true labels of the target. Similarly, in semi-supervised situations, the available target labels could be used to supplement the source data when learning $\Phi$.

References

  • Arjovsky & Bottou (2017) Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations, volume 5, 2017.
  • Barachant et al. (2013) Barachant, A., Bonnet, S., Congedo, M., and Jutten, C. Classification of covariance matrices using a Riemannian-based kernel for BCI applications. Neurocomputing, 112:172–178, 2013.
  • Blitzer et al. (2007) Blitzer, J., Dredze, M., and Pereira, F. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Association for Computational Linguistics, volume 7, pp. 440–447, 2007.
  • Blum & Mitchell (1998) Blum, A. and Mitchell, T. Combining labeled and unlabeled data with co-training. In Computational learning theory, pp. 92–100. ACM, 1998.
  • Collobert et al. (2011) Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: A Matlab-like environment for machine learning. In NIPS Workshop, 2011.
  • Fernando et al. (2013) Fernando, B., Habrard, A., Sebban, M., and Tuytelaars, T. Unsupervised visual domain adaptation using subspace alignment. In IEEE International Conference on Computer Vision, pp. 2960–2967, 2013.
  • Ganin et al. (2016) Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
  • Ghifary et al. (2015) Ghifary, M., Bastiaan, K.W., Zhang, M., and Balduzzi, D. Domain generalization for object recognition with multi-task autoencoders. In IEEE International Conference on Computer Vision, pp. 2551–2559, 2015.
  • Glorot et al. (2011) Glorot, X., Bordes, A., and Bengio, Y. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11), pp. 513–520, 2011.
  • Gong et al. (2012) Gong, B., Shi, Y., Sha, F., and Grauman, K. Geodesic flow kernel for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2066–2073. IEEE, 2012.
  • Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
  • Gretton et al. (2009) Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., and Schölkopf, B. Covariate shift by kernel mean matching. Dataset shift in machine learning, 3(4):5, 2009.
  • Henson et al. (2011) Henson, R.N., Wakeman, D.G., Litvak, V., and Friston, K.J. A parametric empirical Bayesian framework for the EEG/MEG inverse problem: generative models for multi-subject and multi-modal integration. Frontiers in Human Neuroscience, 5:76, 2011.
  • Johansson et al. (2015) Johansson, F.D., Shalit, U., and Sontag, D. Learning representations for counterfactual inference. In International Conference on Machine Learning (ICML-15), 2015.
  • Kingma & Ba (2015) Kingma, D. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Maaten & Hinton (2008) Maaten, L. van der and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
  • Olivetti et al. (2014) Olivetti, E., Mostafa, S.K., and Avesani, P. MEG decoding across subjects. In Pattern Recognition in Neuroimaging, 2014 International Workshop on, pp. 1–4. IEEE, 2014.
  • Saenko et al. (2010) Saenko, K., Kulis, B., Fritz, M., and Darrell, T. Adapting visual category models to new domains. In European conference on computer vision, pp. 213–226. Springer, 2010.
  • Sugiyama et al. (2007) Sugiyama, M., Krauledat, M., and Müller, K.R. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(May):985–1005, 2007.
  • Sutton & Barto (1998) Sutton, R.S. and Barto, A.G. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
  • Xu et al. (2012) Xu, H., Lorbert, A., Ramadge, P.J., Guntupalli, J.S., and Haxby, J.V. Regularized hyperalignment of multi-set fmri data. In Statistical Signal Processing Workshop (SSP), 2012 IEEE, pp. 229–232. IEEE, 2012.