Adversarial Transfer Learning for Cross-domain Visual Recognition

Abstract

In many practical visual recognition scenarios, the feature distribution of the source domain generally differs from that of the target domain, which gives rise to general cross-domain visual recognition problems. To address such visual domain mismatch, we propose a novel semi-supervised adversarial transfer learning approach, called Coupled adversarial transfer Domain Adaptation (CatDA), for distribution alignment between two domains. The proposed CatDA approach is inspired by cycleGAN, but leverages multiple shallow multilayer perceptrons (MLPs) instead of deep networks. Specifically, CatDA comprises two symmetric and slim sub-networks, forming a coupled adversarial learning framework. Owing to the symmetry of the two generators, input data from the source/target domain can be fed into the MLP network to generate target/source domain data, supervised by two confrontation-oriented coupled discriminators. Notably, to avoid the critical flaw that the feature extraction function has high capacity during domain adversarial training, a domain specific loss and a domain knowledge fidelity loss are proposed in each generator, so that the effectiveness of the proposed transfer network is guaranteed. Additionally, the essential difference from cycleGAN is that our method aims to generate domain-agnostic and aligned features for domain adaptation and transfer learning rather than to synthesize realistic images. Experiments on a number of benchmark datasets show that the proposed approach achieves competitive performance over state-of-the-art domain adaptation and transfer learning approaches.

Transfer learning, domain adaptation, adversarial learning, image classification.

I Introduction

In cross-domain visual recognition systems, traditional (task-specific) classifiers usually do not work well on tasks that are semantically related but differently distributed. A typical cross-domain problem is presented in Fig. 1, which illustrates some example images with similar semantics but different distributions. Observing Fig. 1, it is not difficult to see that a classifier trained on the images in the first row cannot work well when classifying the remaining images, due to the explicit heterogeneity of the multiple visual tasks. Mathematically, the reason is that the training data and test samples have different feature distributions (i.e., data bias) and do not satisfy the condition of independent identical distribution (i.i.d.) [38, 32, 57, 45, 53]. Additionally, to train an accurate classification model, sufficient labeled samples are needed according to statistical machine learning theory. However, collecting data is an expensive and time-consuming process. Generally, this data problem can be relieved by exploiting a little label information from related source domains and leveraging a number of unlabeled instances from the target domain for recognition model training. Although the scarcity of data and labels can be partially solved by fusing data drawn from multiple different domains, this results in another dilemma: domain discrepancy. Recently, domain adaptation (DA) [6, 23, 58] techniques that can effectively ease the domain shift problem have been proposed, and have demonstrated great success on various cross-domain visual datasets.

It is of great practical importance to explore DA methods. DA allows a machine learning model to be self-adapted among multiple visual knowledge domains, i.e., a model trained on one data domain can be adapted to another domain, which is the key objective of this paper. A fundamental underlying assumption is that, although the domains differ, there is sufficient commonality between them to support such adaptation.

Fig. 1: Different distributions from different domains. It is not difficult to understand that if the images in the first row are used to train an effective classifier, the model cannot work well for classifying the remaining images.

Most existing DA algorithms seek to bridge the gap between domains by constructing a common feature subspace for general feature representation. In this paper, we reformulate DA as a conditional generation problem that bridges the gap by generating domain specific data. The mapping function from one domain to another can be viewed as the modeling process of a generator, which achieves automatic domain shift alignment during data sampling and generation [22]. Recently, the generative adversarial network (GAN) proposed in [11], which generates user-defined images through an adversarial mechanism between a generator and a discriminator, has become a mainstream DA approach [9]. This is usually modeled by minimizing the approximate domain discrepancy via an adversarial objective function. A GAN generally carries two networks, a generator and a discriminator, which work against each other. The generator is trained to produce images that confuse the discriminator, while the discriminator tries to distinguish the generated images from real images. This adversarial strategy is very suitable for the DA problem [47, 41], because domain discrepancy can be easily reduced by adversarial generation. Therefore, this confrontation principle is exploited to ensure that the discriminator cannot distinguish the source domain from the generated target domain. DANN [9] is one of the first works in deep domain adaptation, in which the adversarial mechanism of GAN was used to bridge the gap between the source and target domains. Similarly, the GAN-inspired domain adaptation method ADDA [47], built on a convolutional neural network (CNN) architecture, has also achieved surprisingly good performance. Motivated by the success of GAN in domain adaptation, this work studies an adversarial domain adaptation framework with domain generators and domain discriminators, as GAN does, for cross-domain visual recognition.

It is worth noting that, in GANs [35, 11], the realism of the generated images is important. However, the purpose of DA methods is to reduce the domain discrepancy, so the realism of the generated images is less important. Therefore, our focus lies in domain distribution alignment for cross-domain visual recognition instead of pure image generation as in GAN. To this end, we propose a simple, slim but effective Coupled adversarial transfer based Domain Adaptation (CatDA) algorithm. Specifically, the proposed CatDA is formulated with a slim and symmetric multilayer perceptron (MLP) structure, instead of a deep structure, for generative adversarial adaptation. CatDA comprises two symmetric and coupled sub-networks, each of which is formulated with a generator, a discriminator, a domain specific term and a domain knowledge fidelity term. CatDA is then implemented by coupled learning of the two sub-networks. With this symmetry, each domain can be generated from the other with an adversarial mechanism supervised by the coupled discriminators, such that the network's compatibility for arbitrary domain generation is guaranteed.

To avoid the critical flaw that the network mapping function has high capacity during domain adversarial training, a semi-supervised mode is adopted in our method. In addition, a content fidelity term and a domain loss term are proposed in the generators for joint domain-knowledge preservation in the source and target domains. The structure of CatDA can be simply described as two generators and two discriminators. Specifically, the main contributions and novelty of this work are fourfold:

  • In order to reduce the distribution discrepancy between domains, we propose a simple but effective coupled adversarial transfer network (CatDA), which is a slim and symmetric adversarial domain adaptation network structured by shallow multilayer perceptrons (MLPs). Through the proposed network, source and target domains can be generated against each other with an adversarial mechanism supervised by the coupled discriminators.

  • Inspired by cycleGAN, CatDA adopts a cycling structure and formulates a generative adversarial domain adaptation framework comprising two sub-networks, each of which carries a generator and a discriminator. The coupled learning algorithm follows a two-way strategy, such that arbitrary domain generation can be achieved without constraining the input to be source or target.

  • To avoid the limitation of domain adversarial training that the feature extraction function has high capacity, a semi-supervised domain knowledge fidelity loss and a domain specific loss are designed for domain content self-preservation and domain realism during domain alignment. In this way, domain distortion during domain generation is avoided, and the domain-agnostic feature representation becomes more stable and discriminative.

  • A simple but effective MLP network is tailored to handle small-scale cross-domain visual recognition datasets. In this way, both the requirement of large-scale samples and pre-training are avoided. Extensive experiments and comparisons on a number of benchmark datasets demonstrate the effectiveness and competitiveness of the proposed CatDA over state-of-the-art methods.

The rest of this paper is organized as follows. In Section II, related work in domain adaptation is reviewed. In Section III, we present the proposed CatDA method in detail. In Section IV, experiments and comparisons on a number of common datasets are presented. A discussion is presented in Section V, and finally Section VI concludes this paper.

II Related Work

Our approach involves domain adaptation (DA) and generative adversarial methods. Therefore, in this section, shallow domain adaptation, deep domain adaptation, and generative adversarial networks are briefly introduced.

II-A Shallow Domain Adaptation

In recent years, a number of shallow learning approaches have been proposed in domain adaptation. Generally, these shallow domain adaptation methods can be divided into three categories.

Classifier based approaches. A generic way in this category is to learn a common classifier on the source domain by leveraging source data and a few labeled target data. Duan et al. [6] proposed AMKL, an adaptive multiple kernel learning method for video event recognition. A domain transfer MKL (DTMKL) [5] was also proposed, jointly learning an SVM and a kernel function for classifier adaptation. Li et al. [24] proposed a cross-domain extreme learning machine for visual domain adaptation [18, 56], in which maximum mean discrepancy (MMD) was formulated for characterizing and matching the marginal and conditional distributions between domains. Zhang et al. [59] proposed a robust extreme domain adaptation (EDA) classifier with manifold regularization for cross-domain visual recognition.

Feature augmentation/transformation based approaches. Hoffman et al. [14] proposed MMDT, a Max-Margin Domain Transforms method in which a category specific transformation is optimized for domain transfer. Long et al. [27] proposed a Transfer Sparse Coding (TSC) method to learn robust sparse representations, in which the empirical Maximum Mean Discrepancy (MMD) [20] is used as the distance measure. Long et al. [28] also proposed a Transfer Joint Matching (TJM) approach, which learns a non-linear transformation across domains by minimizing the MMD-based distribution discrepancy. Zhang et al. [60] proposed a regularized subspace alignment method for cross-domain odor recognition with signal shift.

Feature reconstruction based approaches. Different from the methods above, these achieve domain adaptation by feature reconstruction between domains. Jhuo et al. [21] proposed RDALR, in which the source samples are reconstructed from the target domain with a low-rank constraint model. Similarly, Shao et al. [43] proposed LTSL, which pre-learns a subspace through principal component analysis (PCA) or linear discriminant analysis (LDA) and then models low-rank regularization across domains. Zhang et al. [61] proposed a Latent Sparse Domain Transfer (LSDT) approach that jointly learns a common subspace and a sparse reconstruction matrix across domains, achieving good results.

Fig. 2: (a) The standard GAN. (b) The conditional GAN. (c) The structure of the proposed CatDA, a slim but symmetric network in which each stream contains a generator, a discriminator, a domain loss and a content loss. The two-way generation function is a bijective mapping. Besides the inherent GAN losses, a domain knowledge fidelity loss (content loss) and a domain specific loss (domain loss) are designed for domain content self-preservation and domain realism. $X_s$ and $X_t$ represent the labeled samples in the source and target domain for training; $X_{st}$ represents the data generated from $X_s$; $X_{ts}$ represents the data generated from $X_t$. We aim to learn a discriminative classifier on $X_{st}$ together with the corresponding labels to classify the unlabeled target test data $X_{tu}$.

II-B Deep Domain Adaptation

The other category of data-driven domain adaptation methods is based on deep learning, which has witnessed a series of great successes [46, 36, 51, 34, 33]. However, for small-data tasks, deep learning may not improve performance much. Thus, deep domain adaptation methods for small-scale tasks have recently emerged.

Donahue et al. [4] proposed a deep transfer strategy leveraging a CNN trained on the large-scale ImageNet for small-scale object recognition tasks. Similarly, Razavian et al. [44] also proposed training a deep network on ImageNet as a high-level domain feature extractor. Tzeng et al. [46] proposed DDC, a CNN-based method that aligns both domains and tasks. In DAN, Long et al. [26] proposed a deep transfer network imposing an MMD loss on the high-level features of two-streamed fully-connected layers from the two domains. Long et al. subsequently proposed two other well-known methods. In RTN [30], a residual transfer network is proposed, which aims to learn a residual function between the source and target domains. Joint adaptation networks (JAN) [29] align the joint distributions across domains with a joint maximum mean discrepancy (JMMD) criterion. Hu et al. [17] proposed a deep transfer metric learning (DTML) method leveraging MLPs instead of CNNs; its novelty is to learn a set of hierarchical nonlinear transformations. The autoencoder is an unsupervised feature representation [54], and Wen et al. [50] proposed a deep autoencoder based feature reconstruction for domain adaptation, which shares the feature representation parameters between the source and target domains. Recently, Chen et al. [3] proposed a broad learning system instead of a deep system, which can also be considered for transfer learning.

II-C Generative Adversarial Networks

The generative adversarial network (GAN) was first proposed by Goodfellow et al. [11] to generate images and has had a high impact on deep learning. A GAN generally comprises two operators: a generator (G) and a discriminator (D). The discriminator discriminates whether a sample is fake or real, while the generator produces samples that are as real as possible to cheat the discriminator. Mirza et al. [35] proposed the conditional generative adversarial net (CGAN), where both G and D receive an additional information vector as input. Salimans et al. [41] achieved state-of-the-art results in semi-supervised classification and improved visual realism and image quality compared to GAN. Zhu et al. [63] proposed cycleGAN for discovering cross-domain relations and transferring style from one domain to another, in which key attributes such as orientation and face identity are preserved; it is similar to DiscoGAN [22] and DualGAN [55].

DANN [9] is one of the first works in deep domain adaptation, in which the adversarial mechanism of GAN was used to bridge the gap between the source and target domains. The ADDA method [47] performs adversarial domain adaptation using a convolutional neural network (CNN) for adversarial discriminative feature learning. A GAN based model [1] was proposed to adapt source domain images so that they appear to be drawn from the target domain, focusing on domain image generation. These three works have shown the potential of adversarial learning in domain adaptation. Additionally, the CyCADA method [16] was proposed for cycle generation, in which representations at both the pixel level and the feature level are adapted by enforcing cycle-consistency.

III The Proposed CatDA

III-A Notation

In our method, the source domain and target domain are indicated by subscripts "$s$" and "$t$", respectively. The training sets of the source and target domain are denoted as $X_s$ and $X_t$, respectively, and $Y_s$ denotes the source data labels. A generator network is denoted as $G_{st}: X_s \to X_{st}$, which maps data from the source domain to its co-domain $X_{st}$. The discriminator network is denoted as $D_t$, which tries to discriminate the real samples in the target domain from the generated samples in the co-domain $X_{st}$. Similarly, $G_{ts}$ aims to map data from the target domain to its co-domain $X_{ts}$, and $D_s$ tries to discriminate the real samples in the source domain from the generated samples in the co-domain $X_{ts}$. $X_{st}$ represents the target data generated from $X_s$; $X_{ts}$ represents the source data generated from $X_t$. $X_{tu}$ represents the unlabeled target test data.

III-B Motivation

Directly learning an effective supervised classifier on the target domain is not possible due to label scarcity. Therefore, in this paper, we ask whether target data can be generated from the labeled source data, such that a classifier can be trained on the generated target data with labels. Our key idea is to learn a source-to-target generative feature representation through an adversarial domain generator; a domain-agnostic classifier can then be learnt on the generated features for cross-domain applications. Notably, our aim is to minimize the feature discrepancy between domains via similarly distributed feature generation rather than to generate vivid target images. Therefore, a simple and flexible network is preferable for homogeneous image feature generation, instead of a very complicated structure (encoder vs. decoder) for realistic image rendering.

Visually, the structures of the standard GAN and the conditional GAN are shown in Fig. 2 (a) and Fig. 2 (b), respectively. There are several limitations to these two models. In the standard GAN, explicitly supervised data is seldom available, and the randomly generated samples can become unreliable if the content information is lost, so the trained classifier may not work well. In the conditional GAN, although a label constraint is imposed, the cross-domain relation is not guaranteed because of the one-directional domain mapping. Since the conditional GAN architecture only learns one mapping from one domain to another (one-way), a two-way coupled adversarial domain generation method with more freedom is proposed in this paper, as shown in Fig. 2 (c). The core of our CatDA model consists of two symmetric GANs, which result in a pair of symmetric generative and discriminative functions. The two-way generator function can be recognized as a bijective mapping, and the workflow of the proposed CatDA in implementation can be described as follows.

First, different from GAN, images or features instead of noise are fed as input into the model. Way-1 of CatDA comprises the generator $G_{st}$ and the discriminator $D_t$. Way-2 comprises the generator $G_{ts}$ and the discriminator $D_s$. For way-1, the source data $X_s$ is fed into the generator, and the co-target data $X_{st}$ is generated. Then the generated target data $X_{st}$ and the real target data $X_t$ are fed into the discriminator network for adversarial training. For way-2, the operation is similar to way-1. To achieve the bijective mapping, we expect that the real source data can be recovered by feeding the generated $X_{st}$ into the generator $G_{ts}$ for progressive learning supervised by $D_s$. Similarly, $G_{st}$ is also fine-tuned by feeding in $X_{ts}$, supervised by $D_t$, to recover the real target training data.
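The two-way data flow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generators are stood in by hypothetical linear maps, and only the shapes of the generated co-domain data are checked.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                     # toy feature dimensionality
X_s = rng.normal(size=(20, d))            # source features
X_t = rng.normal(loc=1.0, size=(15, d))   # target features (shifted distribution)

# Hypothetical linear generators standing in for the two MLP generators.
W_st = rng.normal(scale=0.1, size=(d, d))
W_ts = rng.normal(scale=0.1, size=(d, d))
G_st = lambda X: X @ W_st                 # way-1: source -> co-target
G_ts = lambda X: X @ W_ts                 # way-2: target -> co-source

X_st = G_st(X_s)   # co-target data, to be judged against X_t by D_t
X_ts = G_ts(X_t)   # co-source data, to be judged against X_s by D_s
# Bijective-mapping check: after training, G_ts(X_st) should recover X_s;
# here we only verify that the round trip preserves shapes.
X_sts = G_ts(X_st)
assert X_st.shape == X_s.shape and X_ts.shape == X_t.shape
assert X_sts.shape == X_s.shape
```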

III-C Model Formulation

Fig. 3: The pipeline of the class-wise CatDA. To avoid the limitation of domain adversarial training that the feature extraction function has high capacity, a semi-supervised method is used in our proposed CatDA to preserve the raw content information in the generated samples. We process the samples class by class, which results in a semi-supervised structure. The training samples of each class in the source and target domain are preprocessed independently. Note that $c$ represents the index of a class in the source and target domains and $C$ denotes the number of classes. The number of networks to be trained equals the number of classes.

To avoid the limitation of domain adversarial training that the feature extraction function has high capacity, a semi-supervised strategy is used in our proposed CatDA to preserve the content information of the generated samples. We process the samples class by class, which results in a semi-supervised structure. The training data of each class in the source and target domain are preprocessed independently; thus, the number of networks to be trained equals the number of classes. Specifically, the pipeline of the class-wise CatDA is shown in Fig. 3, and class conditional information is used. For each CatDA model, the generator is a two-layered perceptron and the discriminator is a three-layered perceptron. The sigmoid function is selected as the activation function in the hidden layers. The network structure of the joint generator and discriminator is described in Fig. 4.
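A forward-pass sketch of the two perceptrons is given below. The layer counts are interpreted as one sigmoid hidden layer per network (an assumption consistent with Section IV-A, which sets the discriminator's hidden layer to 100 neurons and its output layer to 1 neuron); the class names and weight initialization are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MLPGenerator:
    """Generator MLP: input -> sigmoid hidden -> linear output, where the
    output dimensionality equals the input feature dimensionality."""
    def __init__(self, dim, hidden=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(dim, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim)); self.b2 = np.zeros(dim)
    def forward(self, X):
        return sigmoid(X @ self.W1 + self.b1) @ self.W2 + self.b2

class MLPDiscriminator:
    """Discriminator MLP ending in a single sigmoid unit that scores how
    likely a sample is drawn from the real domain."""
    def __init__(self, dim, hidden=100, seed=1):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(dim, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, 1)); self.b2 = np.zeros(1)
    def forward(self, X):
        return sigmoid(sigmoid(X @ self.W1 + self.b1) @ self.W2 + self.b2)

X = np.random.default_rng(2).normal(size=(5, 64))   # 5 toy feature vectors
G, D = MLPGenerator(64), MLPDiscriminator(64)
fake = G.forward(X)            # generated co-domain features
scores = D.forward(fake)       # probability of being "real"
assert fake.shape == (5, 64) and scores.shape == (5, 1)
assert np.all((scores > 0) & (scores < 1))
```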

The proposed CatDA model has a symmetric structure comprising two generators and two discriminators, which are described as two ways across domains ($s \to t$ and $t \to s$). We first describe the model of way-1 ($s \to t$), which shares the same model and algorithm as way-2 ($t \to s$).

  • Way-1 ($s \to t$):

A target domain discriminator $D_t$ is formulated to classify whether a generated sample is drawn from the target domain (real). Thus, the discriminator loss is formulated as

$$L_{D_t} = -\mathbb{E}_{x_t \sim X_t}[\log D_t(x_t)] - \mathbb{E}_{x_s \sim X_s}[\log(1 - D_t(G_{st}(x_s)))] \quad (1)$$

where $X_{st} = G_{st}(X_s)$. $G_{st}$ is the generator for generating realistic data similar to the target domain. Therefore, the supervised generator loss is formulated as

$$L_{G_{st}} = -\mathbb{E}_{x_s \sim X_s}[\log D_t(G_{st}(x_s))] \quad (2)$$
Fig. 4: The network structure of the adversarial training model in CatDA. The input sample for each class is fed into the generator (two-layered perceptrons), and then the generated sample is sent to the discriminator (three-layered perceptrons) for classifying whether it belongs to the target domain. In order to reduce the distribution mismatch across the generated samples and the real samples, a domain specific loss (i.e. domain loss) is imposed. Further, for preserving the content information of each sample, a domain knowledge fidelity term (i.e. content loss) is established in our model.

The two losses in Eq. (1) and (2) are the inherent loss functions in traditional GAN models. The focus of CatDA is to reduce the distribution difference across domains. Therefore, in the proposed CatDA model, two novel loss functions are proposed, which are the domain specific loss (domain loss) and the domain knowledge fidelity loss (content loss).

One feasible strategy for reducing the domain discrepancy is to find an abstract feature representation under which the source and target domains are similar. This idea has been explored in [26, 30, 29] by leveraging the Maximum Mean Discrepancy (MMD) criterion, which is used when the source and target data distributions differ. Therefore, in this paper, we construct the domain specific loss from a simple, approximate MMD, which is formalized to maximize the two-sample test power and minimize the Type II error (the failure to reject a false null hypothesis). For convenience, we define the proposed domain loss as $L_{dom}$, which is minimized to help the learning of the generator as shown in Fig. 2 (c). Specifically, to reduce the distribution mismatch between the generated target data $X_{st}$ and the original target data $X_t$, the domain specific loss (domain loss) can be formulated as

$$L_{dom}^{t} = \left\| c_{st} - c_{t} \right\|_2 \quad (3)$$

where $\|\cdot\|_2$ denotes the $\ell_2$-norm, and $c_{st}$ and $c_t$ represent the centers of the co-target data $X_{st}$ and the target data $X_t$, respectively. Noteworthily, during the network training phase, the sigmoid function is imposed on the domain loss so that the output is normalized to $[0, 1]$ as a probability. Therefore, the target domain loss in Eq. (3) can be further written as

$$L_{dom}^{t} = \mathrm{sigmoid}\left(\left\| c_{st} - c_{t} \right\|_2\right) \quad (4)$$
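The center-based domain loss of Eqs. (3) and (4) can be sketched numerically as follows; this is a minimal illustration with synthetic data, taking the "center" of each set to be its sample mean (an assumption consistent with the approximate-MMD reading above).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_loss(X_gen, X_real):
    """Eqs. (3)-(4): sigmoid-normalized l2 distance between the centers
    (sample means) of generated co-domain data and real domain data."""
    c_gen = X_gen.mean(axis=0)    # center of generated samples
    c_real = X_real.mean(axis=0)  # center of real samples
    return sigmoid(np.linalg.norm(c_gen - c_real, ord=2))

rng = np.random.default_rng(0)
X_t = rng.normal(size=(30, 16))                    # real target data
X_st_far = rng.normal(loc=3.0, size=(30, 16))      # badly aligned generation
X_st_near = rng.normal(loc=0.05, size=(30, 16))    # well aligned generation
# A well-aligned generator incurs a smaller domain loss.
assert domain_loss(X_st_near, X_t) < domain_loss(X_st_far, X_t)
```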
Algorithm 1 The Proposed CatDA
Input:   Source data $X_s$,
          target training data $X_t$,
          iterations $T_1$, $T_2$, the number $C$ of classes;
Procedure:
For $c = 1, \dots, C$ do
1.  Initialize $G_{st}$ and $G_{ts}$ using traditional GAN:
   While $iter_1 \le T_1$ do
     Step1: Train the generator $G_{st}$ and discriminator $D_t$ by solving
          problems (1) and (2) using the back-propagation (BP) algorithm;
     Step2: Compute $X_{st} = G_{st}(X_s)$;
     Step3: Train the generator $G_{ts}$ and discriminator $D_s$ by solving
          problem (7) using the BP algorithm;
     Step4: Compute $X_{ts} = G_{ts}(X_t)$;
     $iter_1 \leftarrow iter_1 + 1$;
   end while
2.  Update $G_{st}$ and $G_{ts}$ using the proposed model:
   While $iter_2 \le T_2$ do
     Step1: Train the generator $G_{st}$ and the discriminator $D_t$ by
          solving problems (1) and (6) using the BP algorithm;
     Step2: Update $X_{st}$ and $c_{st}$;
     Step3: Train the generator $G_{ts}$ and the discriminator $D_s$ by
          solving problems (7) and (8) using the BP algorithm;
     Step4: Update $X_{ts}$ and $c_{ts}$;
     $iter_2 \leftarrow iter_2 + 1$;
   end while
End
Output: $X_{st}$, $X_{ts}$.

Additionally, to preserve the content of the source data, we establish a content fidelity term in our model. Ideally, the equality $G_{ts}(G_{st}(X_s)) = X_s$ should be satisfied, that is, the generation is reversible. However, this hard constraint is difficult to guarantee, so a relaxed soft constraint is more desirable. To this end, we minimize the distance between $G_{ts}(X_{st})$ and $X_s$ and formulate a content loss function $L_{con}^{s}$, i.e. the source content loss, as follows

$$L_{con}^{s} = \left\| G_{ts}(X_{st}) - X_s \right\|_2^2 \quad (5)$$

where $X_{st} = G_{st}(X_s)$, and $G_{ts}$ is the generator of way-2 ($t \to s$). Finally, the objective function of the way-1 generator is composed of three parts:

$$L_{G_{st}}^{total} = L_{G_{st}} + L_{dom}^{t} + L_{con}^{s} \quad (6)$$
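Putting Eqs. (2), (4) and (5) together, the way-1 generator objective of Eq. (6) can be evaluated as sketched below. This is an illustrative sketch with an unweighted sum of the three terms (any trade-off weights are unspecified in the text) and hypothetical callables standing in for $G_{st}$, $G_{ts}$ and $D_t$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator_objective(X_s, X_t, G_st, G_ts, D_t):
    """Way-1 generator objective, Eq. (6): adversarial loss + domain loss
    + content (cycle) loss. G_st, G_ts, D_t are callables on arrays."""
    X_st = G_st(X_s)
    # Eq. (2): adversarial generator loss -- fool the target discriminator.
    L_gan = -np.mean(np.log(D_t(X_st) + 1e-12))
    # Eq. (4): sigmoid-normalized center distance between X_st and X_t.
    L_dom = sigmoid(np.linalg.norm(X_st.mean(0) - X_t.mean(0)))
    # Eq. (5): round-trip reconstruction fidelity of the source content.
    L_con = np.mean(np.sum((G_ts(X_st) - X_s) ** 2, axis=1))
    return L_gan + L_dom + L_con

# Toy check with identity generators and a constant 0.5 discriminator.
rng = np.random.default_rng(0)
X_s = rng.normal(size=(10, 4)); X_t = rng.normal(size=(12, 4))
loss = generator_objective(X_s, X_t, lambda X: X, lambda X: X,
                           lambda X: np.full((len(X), 1), 0.5))
assert np.isfinite(loss) and loss > 0
```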
  • Way-2 ($t \to s$):

For way-2, models similar to way-1 are formulated, including the source discriminator loss $L_{D_s}$, the source data generator loss $L_{G_{ts}}$, the source domain loss $L_{dom}^{s}$, and the target content loss $L_{con}^{t}$. Specifically, the loss functions of way-2 can be formulated as follows

$$\begin{aligned} L_{D_s} &= -\mathbb{E}_{x_s \sim X_s}[\log D_s(x_s)] - \mathbb{E}_{x_t \sim X_t}[\log(1 - D_s(G_{ts}(x_t)))] \\ L_{G_{ts}} &= -\mathbb{E}_{x_t \sim X_t}[\log D_s(G_{ts}(x_t))] \\ L_{dom}^{s} &= \mathrm{sigmoid}\left(\left\| c_{ts} - c_{s} \right\|_2\right) \\ L_{con}^{t} &= \left\| G_{st}(X_{ts}) - X_t \right\|_2^2 \end{aligned} \quad (7)$$

where $X_{ts} = G_{ts}(X_t)$. $c_{ts}$ and $c_s$ are the centers of the generated source data $X_{ts}$ and the real source data $X_s$. Similar to Eq. (6), the objective function of the way-2 generator can be formulated as

$$L_{G_{ts}}^{total} = L_{G_{ts}} + L_{dom}^{s} + L_{con}^{t} \quad (8)$$
  • Complete CatDA Model:

The proposed CatDA model is a coupled network of way-1 and way-2, each of which learns a bijective mapping from one domain to the other. The two ways in CatDA are jointly trained in an alternating manner. The generated data $X_{st}$ and $X_{ts}$ are fed into the discriminators $D_t$ and $D_s$, respectively.

By jointly learning way-1 and way-2, the complete model of CatDA, including the generators and the discriminators, can be formulated as follows.

$$\min_{G_{st},\, G_{ts}} \; \max_{D_t,\, D_s} \; L_{G_{st}}^{total} + L_{G_{ts}}^{total} - L_{D_t} - L_{D_s} \quad (9)$$

In detail, the complete training procedure of the proposed CatDA approach is summarized in Algorithm 1.
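The alternating, class-wise procedure of Algorithm 1 can be sketched at a high level as follows. This is a runnable skeleton only: `train_catda_for_class` is a hypothetical stand-in for the per-class BP updates of steps 1 and 2 (here it merely shifts each class's source samples to the target class center), while the outer loop mirrors the one-network-per-class structure.

```python
import numpy as np

def train_catda_for_class(Xs_c, Xt_c):
    """Placeholder for Algorithm 1's per-class adversarial training; a
    real implementation would alternate BP updates of problems (1)/(2)
    and then (1)/(6) and (7)/(8). Here we return the source samples
    shifted to the target class center as a stand-in for X_st."""
    return Xs_c - Xs_c.mean(0) + Xt_c.mean(0)

def classwise_catda(X_s, y_s, X_t, y_t):
    """Outer loop of Algorithm 1: one CatDA network per class, producing
    the generated co-target data X_st (with labels) class by class."""
    X_st, y_st = [], []
    for c in np.unique(y_s):
        Xs_c = X_s[y_s == c]                  # source samples of class c
        Xt_c = X_t[y_t == c]                  # few target samples of class c
        X_st.append(train_catda_for_class(Xs_c, Xt_c))
        y_st.append(np.full(len(Xs_c), c))
    return np.vstack(X_st), np.concatenate(y_st)

rng = np.random.default_rng(0)
X_s = rng.normal(size=(40, 6)); y_s = np.repeat([0, 1], 20)
X_t = rng.normal(loc=2.0, size=(10, 6)); y_t = np.repeat([0, 1], 5)
X_st, y_st = classwise_catda(X_s, y_s, X_t, y_t)
assert X_st.shape == X_s.shape and len(y_st) == len(y_s)
```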

III-D Classification

For classification, general classifiers can be trained on the domain-aligned and augmented training samples $[X_{st}; X_t]$ with labels $[Y_s; Y_t]$. Note that $X_{st}$ is the output of Algorithm 1. Finally, the recognition accuracy on the unlabeled target test data $X_{tu}$ is reported and compared.

The whole procedure of the proposed CatDA for cross-domain visual recognition is described in Algorithm 2, following which experiments are conducted to verify the effectiveness and superiority of the proposed method.

Algorithm 2 The Proposed Cross-domain Visual Recognition Method
Input: Source data $X_s$, a very few target training data $X_t$,
         source label $Y_s$, target label $Y_t$, target test data $X_{tu}$;
Procedure:
1. Step1: Compute $X_{st}$ by the CatDA method using Algorithm 1;
2. Step2: Train the classifier $f(\cdot)$ on the augmented training data $[X_{st}; X_t]$
         with label $[Y_s; Y_t]$;
3. Step3: Predict the label by the classifier, i.e. $Y_{tu} = f(X_{tu})$.
Output: Predicted label $Y_{tu}$.
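The classification step of Algorithm 2 can be illustrated as follows. This is a minimal sketch on synthetic data: the paper trains standard classifiers (e.g., SVM) on the augmented set, which we replace here with a simple nearest-centroid stand-in so the example stays self-contained.

```python
import numpy as np

def fit_nearest_centroid(X, y):
    """Stand-in for the final classifier of Algorithm 2; stores one
    centroid per class from the augmented training data."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(0) for c in classes])

def predict(model, X):
    """Assign each sample to the class of its nearest centroid."""
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
# Synthetic augmented training set standing in for [X_st; X_t] with
# labels [Y_s; Y_t], three well-separated classes.
X_aug = rng.normal(size=(30, 5)); y_aug = np.repeat([0, 1, 2], 10)
X_aug[y_aug == 1] += 4; X_aug[y_aug == 2] -= 4
model = fit_nearest_centroid(X_aug, y_aug)
# Toy "unlabeled target test data": perturbed copies of the training set.
X_tu = X_aug + rng.normal(scale=0.1, size=X_aug.shape)
acc = np.mean(predict(model, X_tu) == y_aug)
assert acc > 0.9
```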

III-E Remarks

In CatDA, one key difference from previous GAN models is the proposed two-way coupled architecture, in which a domain loss and a content loss are designed for each way for domain alignment and content fidelity. Note that the proposed CatDA has a structure similar to cycleGAN, but is essentially different in the following aspects.

  • CatDA aims at domain adaptation in feature representation for cross-domain applications via domain alignment and content preservation, rather than generating realistic pictures. Therefore, a simple yet effective shallow multilayer perceptron model instead of a deep model is proposed in our approach.

  • To avoid the limitation of domain adversarial training that the feature extraction function has high capacity, a semi-supervised domain adaptation model is adopted in this paper to help preserve the rich content information.

  • To minimize the domain discrepancy while preserving the content information, a novel domain loss and a content loss are designed. In contrast, cycleGAN focuses on image generation rather than cross-domain visual recognition.

IV Experiments

IV-A Experimental Protocol

In our method, the total number of layers is set to 3. The number of neurons in the output layer of the generator network is the same as the number of input neurons (i.e., the feature dimensionality). The output is fed into the discriminator network, whose hidden layer has 100 neurons and whose output layer has 1 neuron. The network weights are optimized by the gradient descent based back-propagation algorithm.

IV-B Compared Methods

The proposed model is flexible and simple, and is therefore regarded as a shallow domain adaptation approach. Hence, both shallow features (e.g., pixel-level [15] or hand-crafted features) and deep features can be fed as input.

In the shallow protocol, the following typical DA approaches are exploited for performance comparison.

  • No Adaptation (NA): a naive baseline that learns a linear SVM classifier on the source domain and then applies it to the target domain, which could be regarded as a lower bound;

  • Geodesic Flow Kernel (GFK) [10]: a classic cross-domain subspace learning approach that aligns two domains with a geodesic flow in a Grassmann manifold;

  • Subspace Alignment (SA) [8]: a widely-used feature alignment approach that projects the source subspace to the target subspace computed by PCA;

  • Robust Domain Adaptation via Low Rank (RDALR) [2]: a transfer approach by reconstructing the rotated source data with the target data via low-rank representation;

  • Low-Rank Transfer Subspace Learning (LTSL) [43]: a subspace-based reconstruction approach that tends to align the source subspace to the target using low-rank representation, similar to RDALR [2];

  • Discriminative Transfer (DTSL) [52]: a subspace-based reconstruction approach that tends to align the source to the target by joint low rank and sparse representation;

  • Latent Sparse Domain Transfer (LSDT) [61]: a reconstruction-based latent sparse domain transfer method that tends to jointly learning the subspace and the sparse representation.

For Office-31 recognition, we follow ADGANet [42] to compare with recent deep DA approaches, such as DDC [48], DAN [26], RTN [30], DANN [9], ADDA [47], JAN [31] and ADGANet [42]. For fair comparison, the deep features of the Office-31 dataset are extracted from a ResNet-50 convolutional neural network architecture. Additionally, we also compare with some shallow semi-supervised methods following [12].

In the deep protocol for cross-domain handwritten digit recognition, we compare with recent deep DA approaches, such as DANN [9], Domain confusion [46], CoGAN [25], ADDA [47] and ADGANet [42]. Note that, for fair comparison, the deep features of the handwritten digit datasets, extracted with a LeNet convolutional neural network, are fed as the input of our CatDA model.

IV-C Comparison with Shallow Domain Adaptation

Fig. 5: Some samples from the 4DA office datasets (Caltech 256, Amazon, DSLR and Webcam)

In this section, five benchmark datasets are used for cross-domain visual recognition: (1) the 4DA office datasets, (2) the Office-31 dataset, (3) the COIL-20 object dataset, (4) the MSRC and VOC 2007 datasets and (5) cross-domain handwritten digit datasets. Fig. 5 shows some samples from the 4DA office datasets, several example images from the COIL-20 object dataset are shown in Fig. 6, example images from the MSRC and VOC 2007 datasets are shown in Fig. 7, and example images from the handwritten digit datasets are shown in Fig. 8. From these cross-domain images, the visual heterogeneity is clearly observed, and multiple cross-domain tasks are naturally formulated.

Office 4DA (Amazon, Webcam, DSLR and Caltech 256) datasets [10]:

There are four domains in the Office 4DA datasets: Amazon (A), Webcam (W)1, DSLR (D) and Caltech (C)2, each containing 10 object classes. It is a common benchmark for cross-domain problems. Our experimental configuration follows [10], and 800-bin SURF features [10] are extracted for comparison. Every two domains are selected as the source and target domain in turn. As the source domain, 20 samples per class are selected from Amazon, and 8 samples per class are chosen from Webcam, DSLR and Caltech. As target training samples, 3 images per class are selected, while the remaining target samples are used as target testing data. The experimental results of different domain adaptation methods are shown in Table I, from which we observe that the proposed CatDA achieves competitive recognition performance. Noteworthily, we also report the unsupervised version of our method on this dataset: the asterisk (*) in Table I indicates that our method is run in an unsupervised manner, and the results are accordingly worse. From the last lines in Table I, it is obvious that the semi-supervised manner is necessary and competitive in our simple network.
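The per-class sampling protocol above can be sketched as follows. This is a minimal illustration with synthetic labels; the helper name `split_indices` and the toy label arrays are ours, not part of the paper's code, and only the per-class counts (20 for Amazon, 8 for the other domains, 3 for target training) follow the text.

```python
import numpy as np

# Per-class source sample counts stated in the protocol above.
SOURCE_PER_CLASS = {"amazon": 20, "webcam": 8, "dslr": 8, "caltech": 8}

def split_indices(labels, n_per_class, rng):
    """Randomly pick n_per_class indices for every class; return (picked, rest)."""
    labels = np.asarray(labels)
    picked = []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        picked.extend(idx[:n_per_class])
    picked = np.array(sorted(picked))
    rest = np.setdiff1d(np.arange(len(labels)), picked)
    return picked, rest

rng = np.random.default_rng(0)
src_labels = np.repeat(np.arange(10), 30)   # toy "Amazon" labels, 10 classes
tgt_labels = np.repeat(np.arange(10), 25)   # toy "Webcam" labels

# 20 source samples/class; 3 labeled target samples/class, rest for testing.
src_train, _ = split_indices(src_labels, SOURCE_PER_CLASS["amazon"], rng)
tgt_train, tgt_test = split_indices(tgt_labels, 3, rng)
```

The same splitting helper applies to any of the four domains by swapping the per-class count.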

4DA Tasks NA HFA[7] ARC-t[23] MMDT[14] SGF[13] GFK[10] SA[8] LTSL-PCA[43] LTSL-LDA[43] LSDT[61] Ours

  • The asterisk (*) indicates that our method is run in an unsupervised manner and therefore the results are worse.

TABLE I: Recognition performance (%) of different transfer learning methods on the 4DA office datasets

Office 3DA (Office-31 dataset [40]):

This dataset contains three domains: Amazon (A), Webcam (W) and DSLR (D), with 4,652 images from 31 object classes. With each domain serving as source and target alternately, 6 cross-domain tasks are formed, e.g., A→W, W→A, etc. In the experiments, we follow the semi-supervised protocol of [12]. In our method, 3 images per class are selected as target training data, while the remaining target samples are used for testing. The recognition accuracy is reported in Table II; we achieve results competitive with [12]. Noteworthily, we do not use any auxiliary or discriminative techniques, only the simple MLP network.

Tasks AVC[40] AKT[23] SGF[13] GFK[10] GMDA[62] GIDR[12] Ours
A→W
D→W
W→D
TABLE II: Recognition accuracy (%) of different transfer learning methods on the Office-31 dataset

Columbia Object Image Library (COIL-20) dataset [39]:

The COIL-20 dataset3 contains 20 objects with 72 multi-pose images per class, 1,440 gray-scale images in total. We follow the experimental protocol in [52]: the dataset is divided into two subsets C1 and C2, each covering 2 quadrants. Specifically, C1 contains the poses of quadrants 1 and 3, and C2 contains the poses of quadrants 2 and 4. The two subsets have different distributions due to the pose rotation but are semantically relevant, describing the same objects, so a DA problem arises. C1 and C2 are chosen as source and target domain alternately, and the cross-domain recognition performance of different methods is shown in Table III. From Table III, we can see that the proposed CatDA shows a significantly superior recognition performance () in average over other state-of-the-art shallow DA methods. This demonstrates that the proposed CatDA can generate feature representations similar to the target domain, such that the heterogeneity is effectively reduced.
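The quadrant split above can be made concrete with a small sketch. We assume pose index i of the 72 views corresponds to the angle 5·i degrees (COIL-20 images are taken at 5-degree steps), so each 90-degree quadrant holds 18 poses; the function name is ours.

```python
# Map a COIL-20 pose index (0..71, 5-degree steps) to its quadrant (1..4).
def quadrant(pose_index):
    return (5 * pose_index) // 90 + 1

# C1 takes quadrants 1 and 3; C2 takes quadrants 2 and 4, as in the protocol.
C1 = [i for i in range(72) if quadrant(i) in (1, 3)]
C2 = [i for i in range(72) if quadrant(i) in (2, 4)]
```

Each subset thus holds 36 views per object, and the two subsets are disjoint.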

Fig. 6: Some samples from COIL-20 dataset
Fig. 7: Some samples from the MSRC and VOC 2007 datasets
Fig. 8: Some examples from handwritten digits datasets
Tasks NA TSL RDALR[2] LTSL[43] DTSL[52] LSDT[61] Ours
TABLE III: Recognition performance (%) of different transfer learning methods on the COIL-20 dataset

MSRC4 and VOC 20075 datasets [52]:

The MSRC dataset contains 18 classes with 4,323 images, and the VOC 2007 dataset contains 20 concepts with 5,011 images. The two datasets share 6 semantic categories: airplane, bicycle, bird, car, cow and sheep, so the two domains are constructed to share the same label set. The cross-domain experimental protocol follows [28]. We select 1,269 images from MSRC as the source domain and 1,530 images from VOC 2007 as the target domain to construct the cross-domain task MSRC vs. VOC 2007, and then switch the two datasets (VOC 2007 vs. MSRC) to construct the other task. For feature extraction, all images are uniformly rescaled to 256 pixels, and the VLFeat open-source package is used to extract 128-dimensional dense SIFT (DSIFT) features. Then k-means clustering is used to obtain a 240-dimensional codebook.
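The codebook step above is a standard bag-of-visual-words pipeline, which can be sketched as follows. The DSIFT extraction itself (done with VLFeat in the text) is mocked with random descriptors here; only the 128-dimensional descriptor size and the 240-word codebook follow the text, and all function names are ours.

```python
import numpy as np

def pairwise_sq_dist(A, B):
    """Squared Euclidean distances between rows of A and rows of B."""
    return (A ** 2).sum(1)[:, None] - 2.0 * A @ B.T + (B ** 2).sum(1)[None, :]

def kmeans(X, k, iters=10, seed=0):
    """Plain k-means (Lloyd's algorithm) returning the codebook centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = pairwise_sq_dist(X, centers).argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(0)
    return centers

def bovw_encode(descriptors, centers):
    """Quantize descriptors to the nearest codeword; return a normalized histogram."""
    words = pairwise_sq_dist(descriptors, centers).argmin(1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
all_descriptors = rng.normal(size=(2000, 128))       # mock 128-d DSIFT descriptors
codebook = kmeans(all_descriptors, k=240, iters=5)   # 240-word codebook, as in the text
image_feature = bovw_encode(rng.normal(size=(300, 128)), codebook)
```

Each image then becomes a 240-bin histogram, which is the 240-dimensional feature fed to the DA methods.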

By following [49], the source training set contains all labeled samples in the source domain, 4 labeled target samples per class randomly selected from the target domain form the labeled target training data, and the remaining unlabeled examples are used as the target testing data. The experimental results of different domain adaptation methods are shown in Table IV, from which we observe that the proposed CatDA outperforms the other DA methods.

Tasks NA DTMKT-f[5] MMDT[14] KMM[19] GFK[10] LSDT[61] Ours
MSRC→VOC2007
VOC2007→MSRC
TABLE IV: Recognition accuracy (%) of different transfer learning methods on MSRC and VOC 2007
Tasks NA A-SVM SGF[13] GFK[10] SA[8] LTSL[43] LSDT[61] Ours
MNIST→USPS 78.8 78.3 79.2 82.6 78.8 83.2 79.3 81.0
SEMEION→USPS 83.6 76.8 77.5 82.7 82.5 83.6 84.7 80.2
MNIST→SEMEION 51.9 70.5 51.6 70.5 74.4 72.8 69.1
USPS→SEMEION 65.3 74.5 70.9 74.6 65.3 67.4 78.0
USPS→MNIST 71.7 73.2 71.1 72.9 71.7 70.5 75.6
SEMEION→MNIST 67.6 69.3 66.9 72.9 67.6 70.0 75.8
Average 69.8 73.8 69.5 77.0 76.0 74.0 73.5
TABLE V: Recognition performance (%) of shallow transfer learning methods on handwritten digit recognition

Cross-domain handwritten digits datasets:

Three handwritten digit datasets, MNIST6, USPS7 and SEMEION8, covering the 10 digit classes, are used to evaluate the proposed CatDA method. The MNIST dataset consists of 70,000 instances of size 28×28, the USPS dataset consists of 9,298 samples of size 16×16, and the SEMEION dataset consists of 2,593 images of size 16×16. In the experiments, the MNIST images are cropped to 16×16. For the DA experiments, every handwritten digit dataset is used as source and target domain alternately, yielding 6 cross-domain tasks in total. In each task, 100 samples per class from the source domain and 10 samples per class from the target domain are randomly chosen for training. The average classification accuracies over 5 random splits are reported in Table V, from which we observe that our method shows competitive average recognition performance compared to other state-of-the-art methods, while being slightly lower than GFK.
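The repeated-split evaluation above can be sketched as follows. The classifier here is a simple nearest-class-mean stand-in rather than the CatDA pipeline, and the toy Gaussian "digit features" are ours; only the 100/10 per-class sampling and the averaging over 5 random splits follow the text.

```python
import numpy as np

def per_class_sample(y, n, rng):
    """Pick n random indices per class from label array y."""
    return np.concatenate([rng.permutation(np.flatnonzero(y == c))[:n]
                           for c in np.unique(y)])

def nearest_mean_acc(Xtr, ytr, Xte, yte):
    """Accuracy of a nearest-class-mean classifier (stand-in classifier)."""
    classes = np.unique(ytr)
    means = np.stack([Xtr[ytr == c].mean(0) for c in classes])
    pred = ((Xte[:, None] - means[None]) ** 2).sum(-1).argmin(1)
    return float((classes[pred] == yte).mean())

rng = np.random.default_rng(0)
protos = rng.normal(size=(10, 64))                    # 10 toy class prototypes
ys = np.repeat(np.arange(10), 120)
Xs = protos[ys] + 0.1 * rng.normal(size=(1200, 64))   # toy "source" features
yt = np.repeat(np.arange(10), 50)
Xt = protos[yt] + 0.1 * rng.normal(size=(500, 64))    # toy "target" features

accs = []
for split in range(5):                                # 5 random splits, as in the text
    s = per_class_sample(ys, 100, rng)                # 100 source samples per class
    t = per_class_sample(yt, 10, rng)                 # 10 target samples per class
    test = np.setdiff1d(np.arange(len(yt)), t)        # remaining target data is tested
    Xtr = np.vstack([Xs[s], Xt[t]])
    ytr = np.concatenate([ys[s], yt[t]])
    accs.append(nearest_mean_acc(Xtr, ytr, Xt[test], yt[test]))
mean_acc = float(np.mean(accs))
```

The reported numbers in Table V are such split-averaged accuracies, with CatDA in place of the stand-in classifier.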

IV-D Comparison with Some Deep Transfer Learning Methods

In this section, experiments are conducted on the Office-31 dataset and the handwritten digit datasets for comparison with state-of-the-art deep transfer learning approaches.

Deep features of Office-31 dataset [40]:

For fair comparison, we extract the deep features of the Office-31 dataset with the ResNet-50 architecture. From the results, we observe that our method outperforms state-of-the-art deep domain adaptation methods in average. In particular, compared with the generative ADGANet [42], our accuracy exceeds it by 4.8%, which demonstrates that our model can effectively alleviate the model bias problem. As on the 4DA datasets, we also report the unsupervised version of our method: the asterisk (*) in Table VI indicates that our method is run in an unsupervised manner, and the results are accordingly worse.

Tasks SourceOnly TCA[37] GFK[10] DDC[48] DAN[26] RTN[30] DANN[9] ADDA[47] JAN[31] ADGANet[42] Ours

  • The asterisk (*) indicates that our method is run in an unsupervised manner and therefore the results are worse.

TABLE VI: Recognition performance (%) of deep transfer learning methods on the Office-31 dataset

Deep features of handwritten digit datasets: In this section, a new handwritten digit dataset, SVHN9 (see Fig. 8), is introduced. During generation, content information (e.g., the digit identity) is easily changed by a GAN without supervision; in our method, however, the semi-supervised strategy helps avoid such incorrect generation.

We experimentally validate the proposed method on the MNIST, USPS and SVHN datasets, which share the 10 digit classes. In our method, the deep features of the three datasets are extracted using the LeNet model provided in the Caffe source code package. For the adaptation tasks between MNIST and USPS, the training protocol of [28] is followed, where 2,000 images from MNIST and 1,800 images from USPS are randomly sampled. For the adaptation task between SVHN and MNIST, we use the full training sets for comparison against [9]. All domain adaptation tasks follow the experimental protocol in [47]. The key difference between ADDA [47] and our method is that ADDA is a convolutional neural network based method while ours is based on multilayer perceptrons. In ADDA, an essential problem is that the class of a generated sample may change randomly; in our CatDA, this problem is handled by establishing multiple class-wise models. In our setting, 10 samples per class from the target domain are randomly selected as target training data, and 5 random splits are considered in total. The average classification accuracies are reported in Table VII, from which we observe that our CatDA model outperforms most state-of-the-art methods in average, being lower only than the very recent ADGANet [42] method.

Tasks Source only DANN[9] Domain confusion[46] CoGAN[25] ADDA[47] ADGANet[42] Ours
MNIST→USPS
USPS→MNIST
SVHN→MNIST
TABLE VII: Recognition performance (%) of deep transfer learning methods on handwritten digit recognition

V Discussion

V-A Computational Efficiency

As the proposed CatDA model is simple, a CPU suffices for model optimization and training instead of a GPU. The time cost is also low, as shown in Table VIII, from which we can observe that computation is quite fast for the three-layer MLP model. Considering that our three-layer shallow network achieves competitive performance with fast computation, we choose this three-layer model in our method. The experiments are run on a PC with an Intel i7-4790K CPU at 4.00GHz and 32GB RAM. Note that the time for data preprocessing and classification is excluded.
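The kind of CPU timing reported in Table VIII can be sketched as follows: a small three-layer MLP forward pass timed with a wall-clock timer, with data preparation outside the timed region as in the text. The layer sizes and batch size are illustrative, not the paper's.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
# Illustrative three-layer MLP weights (800-d input/output, 512-d hidden).
W1, W2, W3 = (rng.normal(size=s) * 0.01
              for s in [(800, 512), (512, 512), (512, 800)])

def forward(X):
    """Forward pass of a toy three-layer MLP with tanh activations."""
    h = np.tanh(X @ W1)
    h = np.tanh(h @ W2)
    return h @ W3

X = rng.normal(size=(1000, 800))      # data prepared before timing starts

t0 = time.perf_counter()              # time only the model computation
_ = forward(X)
elapsed = time.perf_counter() - t0
```

`time.perf_counter` gives a monotonic high-resolution clock, which is the usual choice for this sort of measurement.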

V-B Evaluation of Layer Number

For insight into the impact of depth, we show the results with different numbers of layers in Table IX, from which we observe that performance does not always increase with more layers, and the three-layer shallow network achieves competitive performance.

Tasks 3 layers 4 layers 5 layers 6 layers
MSRC→VOC
SVHN→MNIST
TABLE VIII: The time cost () with different numbers of layers on different datasets.
Tasks 3 layers 4 layers 5 layers 6 layers
MNIST→USPS
USPS→MNIST
SVHN→MNIST
Average
TABLE IX: Recognition performance (%) with different numbers of layers on different datasets.
Fig. 9: Feature visualization of data distribution. Both (a) and (c) show the feature distribution of source domain (MNIST) and target domain (USPS). (b) shows the feature distribution of the generated data (). Both (d) and (f) show the feature distribution of source domain (Amazon) and target domain (Webcam). (e) shows the distribution of the generated samples ().

V-C Model Visualization

In this section, for better insight into the CatDA model, the class distributions are visualized to further validate the effectiveness of our model. The t-SNE [4] visualization method is applied to the source and target domains of a handwritten digit task (shallow domain adaptation) and a 3DA office task (deep domain adaptation). From (b) and (e) in Fig. 9, it is obvious that the generated data exhibit better clustering and improved feature discriminability; as a result, the cross-domain visual recognition performance is improved. It is worth noting that, as shown in Fig. 9, the discrimination of the generated features is better than that of the raw features. The reason is that our semi-supervised domain adaptation method uses partial target label information for feature generation, which therefore improves the discrimination of the generated features.
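The visualization step can be sketched as follows: source, target, and generated features (mocked with Gaussians here) are stacked, tagged by domain, and embedded in 2-D with t-SNE for plots like Fig. 9. The perplexity value and the mock feature dimensions are assumptions, not settings from the paper.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
source    = rng.normal(0.0, 1.0, size=(60, 50))   # mock source-domain features
target    = rng.normal(1.5, 1.0, size=(60, 50))   # mock target-domain features
generated = rng.normal(1.4, 0.8, size=(60, 50))   # stand-in for generated features

X = np.vstack([source, target, generated])
domain = np.repeat([0, 1, 2], 60)                 # domain tag per point, for coloring

# 2-D t-SNE embedding; each row of emb is then scattered, colored by `domain`.
emb = TSNE(n_components=2, perplexity=15, init="random",
           random_state=0).fit_transform(X)
```

Better alignment between the target and generated clouds in such a plot is what panels (b) and (e) of Fig. 9 illustrate.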

V-D Remarks

The proposed method is an adversarial feature adaptation model, which can be used for semi-supervised and unsupervised domain adaptation. We must clarify that 1) the proposed CatDA is fundamentally different from GAN in that it cannot be used for image synthesis in its current form, but only for domain-adaptive feature representation; this is because the inputs fed into CatDA are features, such as low-level hand-crafted features or deep features from an off-the-shelf CNN model, rather than image pixels. 2) The proposed model is not CNN based, so it may not be directly compared with deep networks; however, deep features can be fed into our model for fair comparison, as presented in Table VI.

Vi Conclusion

In this paper, we propose a new transfer learning method from the perspective of feature generation for cross-domain visual recognition. Specifically, a coupled adversarial transfer domain adaptation (CatDA) framework comprising two generators, two discriminators, two domain specific loss terms and two content fidelity loss terms is proposed in a semi-supervised mode for reducing domain and intra-class discrepancy. This symmetric model achieves a bijective mapping, such that domain features can be generated in either direction thanks to the reversible characteristic of the model. Considering that our focus is generating domain features with the distribution disparity removed for cross-domain applications, rather than realistic image generation, a shallow yet effective MLP transfer network is adopted. Extensive experiments on several benchmark datasets demonstrate the superiority of the proposed method over other state-of-the-art DA methods.

In future work, benefiting from the strong feature learning capability of deep neural networks (e.g., convolutional neural networks), a deep adversarial domain adaptation framework similar to CatDA will be studied. Compared with CatDA, deep methods extract high-dimensional semantic features to achieve good performance, but they need large-scale samples to train the network. Hence, fine-tuning based parameter transfer from big data to small data can be leveraged to improve network adaptation.

Acknowledgment

The authors are grateful to the AE and anonymous reviewers for their valuable comments on our work.

Shanshan Wang received the B.E. and M.E. degrees from Chongqing University in 2010 and 2013, respectively. She is currently pursuing the Ph.D. degree at Chongqing University. Her current research interests include machine learning, pattern recognition and computer vision.

Lei Zhang (M’14-SM’18) received his Ph.D degree in Circuits and Systems from the College of Communication Engineering, Chongqing University, Chongqing, China, in 2013. He worked as a Post-Doctoral Fellow with The Hong Kong Polytechnic University, Hong Kong, from 2013 to 2015. He is currently a Professor/Distinguished Research Fellow with Chongqing University. He has authored more than 90 scientific papers in top journals, such as IEEE T-NNLS, IEEE T-IP, IEEE T-MM, IEEE T-IM, IEEE T-SMCA, and top conferences such as ICCV, AAAI, ACM MM, ACCV, etc. His current research interests include machine learning, pattern recognition, computer vision and intelligent systems. Dr. Zhang was a recipient of the Best Paper Award of CCBR2017, the Outstanding Reviewer Award of many journals such as Pattern Recognition, Neurocomputing, Information Sciences, etc., Outstanding Doctoral Dissertation Award of Chongqing, China, in 2015, Hong Kong Scholar Award in 2014, Academy Award for Youth Innovation in 2013 and the New Academic Researcher Award for Doctoral Candidates from the Ministry of Education, China, in 2012.

Jingru Fu received the B.S. degree from Fuzhou University, Fuzhou, China, in 2017. She is currently working towards the M.S. degree in Learning Intelligence and Vision Essential (LiVE) group at Chongqing University, Chongqing, China. Her current research interests include machine learning, transfer learning and computer vision.

Footnotes

  1. http://www.eecs.berkeley.edu/~mfritz/domainadaptation/
  2. http://www.vision.caltech.edu/Image_Datasets/Caltech256/
  3. http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php
  4. http://research.microsoft.com/en-us/projects/objectclassrecognition
  5. http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2007
  6. http://yann.lecun.com/exdb/mnist/
  7. http://www-i6.informatik.rwth-aachen.de/~keysers/usps.html
  8. http://archive.ics.uci.edu/ml/datasets/Semeion+Handwritten+Digit
  9. http://ufldl.stanford.edu/housenumbers/

References

  1. K. Bousmalis, N. Silberman, D. Dohan, D. Erhan and D. Krishnan (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, Cited by: §II-C.
  2. S. F. Chang, D. T. Lee, D. Liu and I. Jhuo (2013) Robust visual domain adaptation with low-rank reconstruction. In CVPR, pp. 2168–2175. Cited by: 4th item, 5th item, TABLE III.
  3. C.L.P. Chen and Z. Liu (2018) Broad learning system: an effective and efficient incremental learning system without the need for deep architecture. IEEE Trans. Neural Networks and Learning Systems 29 (1), pp. 10–24. Cited by: §II-B.
  4. J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng and T. Darrell (2014) Decaf: a deep convolutional activation feature for generic visual recognition. In ICML, pp. 647–655. Cited by: §II-B, §V-C.
  5. L. Duan, I. W. Tsang and D. Xu (2012) Domain transfer multiple kernel learning. IEEE Trans. Pattern Analysis and Machine Intelligence 34 (3), pp. 465–479. Cited by: §II-A, TABLE IV.
  6. L. Duan, D. Xu, I. W. Tsang and J. Luo (2010) Visual event recognition in videos by learning from web data. In CVPR, pp. 1959–1966. Cited by: §I, §II-A.
  7. L. Duan, D. Xu and I. Tsang (2012) Learning with augmented features for heterogeneous domain adaptation. arXiv. Cited by: TABLE I.
  8. B. Fernando, A. Habrard, M. Sebban and T. Tuytelaars (2014) Unsupervised visual domain adaptation using subspace alignment. In ICCV, pp. 2960–2967. Cited by: 3rd item, TABLE I, TABLE V.
  9. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand and V. Lempitsky (2015) Domain-adversarial training of neural networks. JMLR. Cited by: §I, §II-C, §IV-B, §IV-B, §IV-D, TABLE VI, TABLE VII.
  10. B. Gong, Y. Shi, F. Sha and K. Grauman (2012) Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pp. 2066–2073. Cited by: 2nd item, §IV-C, §IV-C, TABLE I, TABLE II, TABLE IV, TABLE V, TABLE VI.
  11. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In NIPS, pp. 2672–2680. Cited by: §I, §I, §II-C.
  12. R. Gopalan and R. Chellappa (2014) Unsupervised adaptation across domain shifts by generating intermediate data representations. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (11), pp. 2288–2302. Cited by: §IV-B, §IV-C, TABLE II.
  13. R. Gopalan, R. Li and R. Chellappa (2011) Domain adaptation for object recognition: an unsupervised approach. In ICCV, pp. 999–1006. Cited by: TABLE I, TABLE II, TABLE V.
  14. J. Hoffman, E. Rodner, J. Donahue, B. Kulis and K. Saenko (2014) Asymmetric and category invariant feature transformations for domain adaptation. IJCV 109 (1-2), pp. 28–41. Cited by: §II-A, TABLE I, TABLE IV.
  15. J. Hoffman, D. Wang, F. Yu and T. Darrell (2016) FCNs in the wild: pixel-level adversarial and constraint-based adaptation. arXiv. Cited by: §IV-B.
  16. J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros and T. Darrell (2017) CyCADA: cycle-consistent adversarial domain adaptation. arXiv. Cited by: §II-C.
  17. J. Hu, J. Lu and Y. Tan (2015) Deep transfer metric learning. In ICCV, pp. 325–333. Cited by: §II-B.
  18. G. B. Huang, H. Zhou, X. Ding and R. Zhang (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans. Systems, Man, and Cybernetics: Systems 42 (2), pp. 513–529. Cited by: §II-A.
  19. J. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt and B. Scholkopf (2006) Correcting sample selection bias by unlabeled data. In NIPS, pp. 601–608. Cited by: TABLE IV.
  20. A. Iyer, J. S. Nath and S. Sarawagi (2014) Maximum mean discrepancy for class ratio estimation: convergence bounds and kernel selection.. In ICML, pp. 530–538. Cited by: §II-A.
  21. I. Jhuo, D. Liu, D. Lee and S. Chang (2012) Robust visual domain adaptation with low-rank reconstruction. In CVPR, pp. 2168–2175. Cited by: §II-A.
  22. T. Kim, M. Cha, H. Kim, J. K. Lee and J. Kim (2017) Learning to discover cross-domain relations with generative adversarial networks. In ICML, Cited by: §I, §II-C.
  23. B. Kulis, K. Saenko and T. Darrell (2011) What you saw is not what you get: domain adaptation using asymmetric kernel transforms. In CVPR, pp. 1785–1792. Cited by: §I, TABLE I, TABLE II.
  24. S. Li, S. Song, G. Huang and C. Wu (2018) Cross-domain extreme learning machine for domain adaptation. IEEE Trans. Systems, Man, and Cybernetics: Systems. Cited by: §II-A.
  25. M. Y. Liu and O. Tuzel (2016) Coupled generative adversarial networks. In NIPS. Cited by: §IV-B, TABLE VII.
  26. M. Long, Y. Cao, J. Wang and M. Jordan (2015) Learning transferable features with deep adaptation networks. In ICML, pp. 97–105. Cited by: §II-B, §III-C, §IV-B, TABLE VI.
  27. M. Long, G. Ding, J. Wang, J. Sun, Y. Guo and P. S. Yu (2013) Transfer sparse coding for robust image representation. In ICCV, pp. 407–414. Cited by: §II-A.
  28. M. Long, J. Wang, G. Ding, J. Sun and P. S. Yu (2014) Transfer joint matching for unsupervised domain adaptation. In CVPR, pp. 1410–1417. Cited by: §II-A, §IV-C, §IV-D.
  29. M. Long, H. Zhu, J. Wang and M. I. Jordan (2016) Deep transfer learning with joint adaptation networks. In ICML, Cited by: §II-B, §III-C.
  30. M. Long, H. Zhu, J. Wang and M. I. Jordan (2016) Unsupervised domain adaptation with residual transfer networks. In NIPS, pp. 136–144. Cited by: §II-B, §III-C, §IV-B, TABLE VI.
  31. M. Long, H. Zhu, J. Wang and M. I. Jordan (2017) Deep transfer learning with joint adaptation networks. In ICML, pp. 2208–2217. Cited by: §IV-B, TABLE VI.
  32. H. Lu, L. Zhang, Z. Cao, W. Wei, K. Xian, C. Shen and A. V. D. Hengel (2017) When unsupervised domain adaptation meets tensor representations. In ICCV, Cited by: §I.
  33. W. Lu, Y. Li, Y. Cheng, D. Meng, B. Liang and P. Zhou (2018) Early fault detection approach with deep architectures. IEEE Trans. Instrumentation and Measurement 67 (7), pp. 1679–1689. Cited by: §II-B.
  34. C. Ma, Y. Guo, Y. Lei and W. An (2018) Binary volumetric convolutional neural networks for 3-d object recognition. IEEE Trans. Instrumentation and Measurement, pp. 1–11. Cited by: §II-B.
  35. M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. Computer Science, pp. 2672–2680. Cited by: §I, §II-C.
  36. M. Oquab, L. Bottou, I. Laptev and J. Sivic (2014) Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, pp. 1717–1724. Cited by: §II-B.
  37. S. J. Pan, I. W. Tsang, J. T. Kwok and Q. Yang (2011) Domain adaptation via transfer component analysis. IEEE Trans. Neural Networks 22 (2), pp. 199–210. Cited by: TABLE VI.
  38. S. J. Pan and Q. Yang (2010) A survey on transfer learning. IEEE Trans. Knowle. Data Engineering 22 (10), pp. 1345–1359. Cited by: §I.
  39. C. Rate and C. Retrieval (2011) Columbia object image library (coil-20). Computer. Cited by: §IV-C.
  40. K. Saenko, B. Kulis, M. Fritz and T. Darrell (2010) Adapting visual category models to new domains. In ECCV, pp. 213–226. Cited by: §IV-C, §IV-D, TABLE II.
  41. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen and X. Chen (2016) Improved techniques for training gans. In NIPS, pp. 2234–2242. Cited by: §I, §II-C.
  42. S. Sankaranarayanan, Y. Balaji, C. D. Castillo and R. Chellappa (2018) Generate to adapt: aligning domains using generative adversarial networks. In CVPR, Cited by: §IV-B, §IV-B, §IV-D, §IV-D, TABLE VI, TABLE VII.
  43. M. Shao, D. Kit and Y. Fu (2014) Generalized transfer subspace learning through low-rank constraint. IJCV 109 (1-2), pp. 74–93. Cited by: §II-A, 5th item, TABLE I, TABLE III, TABLE V.
  44. A. Sharif Razavian, H. Azizpour, J. Sullivan and S. Carlsson (2014) CNN features off-the-shelf: an astounding baseline for recognition. In CVPR, pp. 806–813. Cited by: §II-B.
  45. C. Turner, H. Sari-Sarraf and E. Hequet (2017) Training a new instrument to measure cotton fiber maturity using transfer learning. IEEE Trans. Instrumentation and Measurement 66 (7), pp. 1668–1678. Cited by: §I.
  46. E. Tzeng, J. Hoffman, T. Darrell and K. Saenko (2015) Simultaneous deep transfer across domains and tasks. In ICCV, pp. 4068–4076. Cited by: §II-B, §II-B, §IV-B, TABLE VII.
  47. E. Tzeng, J. Hoffman, K. Saenko and T. Darrell (2017) Adversarial discriminative domain adaptation. In CVPR. Cited by: §I, §II-C, §IV-B, §IV-B, §IV-D, TABLE VI, TABLE VII.
  48. E. Tzeng, J. Hoffman, N. Zhang, K. Saenko and T. Darrell (2014) Deep domain confusion: maximizing for domain invariance. Computer Science. Cited by: §IV-B, TABLE VI.
  49. W. Wang (2017) Fredholm multiple kernel learning for semi-supervised domain adaptation. In AAAI, Cited by: §IV-C.
  50. L. Wen, L. Gao and X. Li (2018) A new deep transfer learning based on sparse auto-encoder for fault diagnosis. IEEE Trans. Systems, Man, and Cybernetics: Systems 49 (1), pp. 136–144. Cited by: §II-B.
  51. M. Xie, N. Jean, M. Burke, D. Lobell and S. Ermon (2015) Transfer learning from deep features for remote sensing and poverty mapping. arXiv. Cited by: §II-B.
  52. Y. Xu, X. Fang, J. Wu, X. Li and D. Zhang (2015) Discriminative transfer subspace learning via low-rank and sparse representation.. IEEE Trans. Image Processing 25 (2), pp. 850–863. Cited by: 6th item, §IV-C, §IV-C, TABLE III.
  53. K. Yan, D. Zhang and Y. Xu (2017) Correcting instrumental variation and time-varying drift using parallel and serial multitask learning. IEEE Trans. Instrumentation and Measurement 66 (9), pp. 2306–2316. Cited by: §I.
  54. Y. Yang, Q. M. J. Wu and Y. Wang (2016) Autoencoder with invertible functions for dimension reduction and image reconstruction. IEEE Trans. Systems, Man, and Cybernetics: Systems 48 (7), pp. 1065–1079. Cited by: §II-B.
  55. Z. Yi, H. Zhang, P. Tan and M. Gong (2017) DualGAN: unsupervised dual learning for image-to-image translation. In ICCV, pp. 2868–2876. Cited by: §II-C.
  56. L. Zhang and P. Deng (2018) Abnormal odor detection in electronic nose via self-expression inspired extreme learning machine. IEEE Trans. Systems, Man, and Cybernetics: Systems. Cited by: §II-A.
  57. L. Zhang, Y. Liu and P. Deng (2017) Odor recognition in multiple e-nose systems with cross-domain discriminative subspace learning. IEEE Trans. Instrumentation and Measurement 66 (2), pp. 198–211. Cited by: §I.
  58. L. Zhang and D. Zhang (2015) Domain adaptation extreme learning machines for drift compensation in e-nose systems. IEEE Trans. Instrumentation and Measurement 64 (7), pp. 1790–1801. Cited by: §I.
  59. L. Zhang and D. Zhang (2016) Robust visual knowledge transfer via extreme learning machine-based domain adpatation. IEEE Trans. Image Processing 25 (3), pp. 4959–4973. Cited by: §II-A.
  60. L. Zhang and D. Zhang (2018) Efficient solutions for discreteness, drift, and disturbance (3d) in electronic olfaction. IEEE Trans. Systems, Man, and Cybernetics: Systems 48 (2), pp. 242–254. Cited by: §II-A.
  61. L. Zhang, W. Zuo and D. Zhang (2016) LSDT: latent sparse domain transfer learning for visual adaptation. IEEE Trans. Image Processing 25 (3), pp. 1177–1191. Cited by: §II-A, 7th item, TABLE I, TABLE III, TABLE IV, TABLE V.
  62. J. Zheng, M. Y. Liu, R. Chellappa and P. J. Phillips (2012) A grassmann manifold-based domain adaptation approach. In Pattern Recognition (ICPR), 2012 21st International Conference on, Cited by: TABLE II.
  63. J. Y. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV. Cited by: §II-C.