Selective Transfer with Reinforced Transfer Network for Partial Domain Adaptation


Paper ID: 2250
Primary Areas: Deep Learning/Neural Networks
Secondary Areas: Transfer/Adaptation/Multi-task Learning
Abstract

Partial domain adaptation (PDA) extends standard domain adaptation to a more realistic scenario where the target domain only has a subset of the classes in the source domain. The key challenge of PDA is how to select the relevant samples in the shared classes for knowledge transfer. Previous PDA methods tackle this problem by re-weighting the source samples based on the predictions of the classifier or the discriminator, thus discarding pixel-level information. In this paper, to utilize both high-level and pixel-level information, we propose a reinforced transfer network (RTNet), which is the first work to apply reinforcement learning to the PDA problem. RTNet simultaneously mitigates negative transfer by adopting a reinforced data selector to filter out outlier source classes, and promotes positive transfer by employing a domain adaptation model to minimize the distribution discrepancy in the shared label space. Extensive experiments indicate that RTNet achieves state-of-the-art performance on partial domain adaptation tasks across several benchmark datasets. Code and datasets will be made available online.

Introduction

Deep neural networks have achieved impressive performance in a variety of applications. However, when applied to related but different domains, the generalization ability of the learned model may be severely degraded by domain shift [2]. Re-collecting labeled data for each new domain is prohibitive because of the huge cost of data annotation. Domain adaptation techniques solve this problem by transferring knowledge from a source domain with rich labeled data to a target domain where labels are scarce or unavailable. These domain adaptation methods learn domain-invariant feature representations by moment matching [17, 24, 7] or adversarial training [25, 10].

Figure 1: The PDA technique can be applied in a more practical scenario where the label space of the target domain is only a subset of the source domain. (a) The negative transfer is triggered by mismatch. (b) The negative transfer is mitigated by filtering out outlier source classes.

Previous domain adaptation methods generally assume that the source and target domains share the same label space. However, in real applications, it is often infeasible to find a relevant source domain with a label space identical to that of the target domain of interest. Thus, a more realistic scenario is partial domain adaptation (PDA) [3], which relaxes the constraint that the source and target domains share the same label space and assumes that the unknown target label space is a subset of the source label space. In such a scenario, as shown in Figure 1a, existing standard domain adaptation methods force a match between the outlier source class (blue triangle) and an unrelated target class (red square) by aligning the whole source domain with the target domain. As a result, negative transfer may be triggered by this mismatch. Negative transfer refers to the situation where the transfer model performs even worse than the non-adaptation (NoA) model [20].

Several approaches have been proposed to solve the partial domain adaptation problem by re-weighting the source samples in a domain-adversarial network. These weights can be obtained from the distribution of the predicted target label probabilities [4] or from the predictions of the domain discriminator [3, 29]. However, these methods ignore pixel-level features when determining the weights, thereby losing global correlation information. Moreover, these adversarial PDA modules are difficult to integrate into domain adaptation methods based on moment matching, because such methods lack a discriminator to filter outlier classes. Therefore, most advanced standard domain adaptation methods based on moment matching are hard to extend to the PDA problem.

In this paper, to mitigate negative transfer, we present a reinforced transfer network (RTNet), as shown in Figure 1b, which exploits reinforcement learning to learn a reinforced data selector that automatically filters out outlier source samples. The motivation for considering pixel-level information is that source samples related to the target domain will have smaller reconstruction errors than outlier source classes on a generator trained with target samples. For example, an outlier triangle source sample will have a larger reconstruction error than a square source sample on the target generator, because the target generator has never seen training samples of the triangle category and outlier source samples are extremely dissimilar to the target classes. Hence, the reconstruction error can measure the similarity between each source sample and the target domain. To utilize both pixel-level and high-level information when selecting source samples related to the target domain, we design a reinforced data selector. Specifically, the reinforced data selector takes an action (keep or drop a sample) based on the state of the sample. Then, the reconstruction error of the selected source samples on the target generator is used as a reward to guide the learning of the selector via the actor-critic algorithm [13]. It is worth noting that the state contains high-level information, while the reward contains pixel-level information. The contribution of this work is a novel reinforced data selector based on reinforcement learning, which solves the PDA problem by taking both high-level and pixel-level information into account to select related samples for positive transfer. In addition, most deep domain adaptation methods can be extended to solve the PDA problem by integrating this module.

Related Work

Partial Domain Adaptation

Deep domain adaptation methods have been widely studied in recent years. These methods extend deep neural networks by embedding adaptation layers for moment matching [26, 15, 24, 6] or adding domain discriminators for adversarial training [10, 25]. However, these methods may be restricted by the assumption that the source and target domains share the same label space, which does not hold in the PDA scenario. Several methods have been proposed to solve the PDA problem. Selective adversarial network (SAN) [3] trains a separate domain discriminator for each class with a weighting mechanism to suppress the harmful influence of the outlier classes. Partial adversarial domain adaptation (PADA) [4] improves SAN by adopting only one domain discriminator and obtains the weight of each class from the predicted target probability distribution of the classifier. Example transfer network (ETN) [5] automatically quantifies the weights of source examples based on their similarities to the target domain. Unlike these previous PDA methods, which use only high-level information to select source samples, our proposed RTNet combines pixel-level and high-level information to achieve more accurate filtering of outlier source samples.

Reinforcement Learning

Reinforcement learning (RL) can be roughly divided into two categories [1]: value-based methods and policy-based methods. Value-based methods estimate the expected future total reward of a state, such as SARSA [22] and the deep Q-network [18]. Policy-based methods try to directly find the best next action in the current state, such as the REINFORCE algorithm [27]. To reduce variance, some methods combine value-based and policy-based approaches for more stable training, such as the actor-critic algorithm [13]. So far, data selection based on RL has been applied to active learning [9], co-training [28], text matching [21], etc. However, reinforced data selection methods for the PDA problem are still lacking.

Figure 2: Overview of the proposed RTNet. $F$ is a shared feature extractor, $C$ is a shared classifier, $G_s$ and $G_t$ are the source and target generators respectively, $V_\Phi$ is a value network, and $\pi_\Theta$ is a policy network. $G_s$ and $G_t$ are combined with $F$ to construct source and target auto-encoders that reconstruct source and target samples, respectively. The structure of each module will be described in detail in the Appendix. The green line indicates the flow used to obtain the reward based on the reconstruction errors of the selected source samples on $G_t$.

Our Approach

Problem Definition and Notations

In this work, following the PDA setting, we define the labeled source dataset as $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, drawn from the source domain and associated with the label space $\mathcal{C}_s$, and the unlabeled target dataset as $\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}$, drawn from the target domain and associated with the label space $\mathcal{C}_t$. Note that the target label space is contained in the source label space, i.e., $\mathcal{C}_t \subset \mathcal{C}_s$, and $\mathcal{C}_t$ is unknown. The two domains follow different marginal distributions $p$ and $q$, respectively, and we further have $p_{\mathcal{C}_t} \neq q$, where $p_{\mathcal{C}_t}$ denotes the distribution of the source samples belonging to the target label space. The goal is to improve the performance of the classifier in $\mathcal{D}_t$ with the help of the knowledge in $\mathcal{D}_s$ associated with $\mathcal{C}_s$, and to make the learned feature extractor and classifier robust to outlier source samples.

Overview of RTNet

As shown in Figure 2, RTNet consists of two components: a domain adaptation (DA) model ($F$ and $C$) and a reinforced data selector ($\pi_\Theta$, $V_\Phi$, $G_s$ and $G_t$). The DA model promotes positive transfer by reducing the distribution shift between the source and target domains in the shared label space. The reinforced data selector, based on RL, mitigates negative transfer by filtering out outlier source classes. Specifically, to filter out outlier source samples, the policy network $\pi_\Theta$ considers high-level information provided by the feature extractor $F$ and the classifier $C$ when making decisions, yielding a selected source batch. In the backbone of the DA model, $C$ takes the source transfer features as input to produce label predictions, and distribution alignment is performed between the selected source batch and the target batch. Meanwhile, the reconstruction errors of the selected source samples on $G_t$ are used as rewards to encourage $\pi_\Theta$ to select samples with small reconstruction errors. For training stability, a value network $V_\Phi$ is combined with the rewards to optimize $\pi_\Theta$ following the actor-critic algorithm. Besides, the domain-specific generators $G_s$ and $G_t$ are trained with the reconstruction errors of the reconstructed source images and target images, respectively.

Domain Adaptation Model

Almost all partial domain adaptation frameworks are based on adversarial networks [3, 4, 29, 5], which means that many advanced domain adaptation algorithms based on moment matching cannot be readily extended to solve the PDA problem. The proposed reinforced data selector is a general module that can be integrated into most UDA frameworks. Hence, we use deep CORAL [24] as the base domain adaptation model to demonstrate that the reinforced data selector can be embedded into a matching-based UDA framework and make it robust to the PDA scenario. In the following, we briefly introduce the main ideas of CORAL.

We define the last layer of the feature extractor $F$ as the adaptation layer and reduce the distribution shift between the source and target domains by aligning the covariances of the source and target features. Hence, the CORAL objective function is as follows:

$$\mathcal{L}_{coral} = \frac{1}{4d^2}\,\big\lVert C_S - C_T \big\rVert_F^2 \qquad (1)$$

where $\lVert\cdot\rVert_F^2$ denotes the squared matrix Frobenius norm, $d$ is the dimension of the adaptation-layer features, and $H_S^b, H_T^b \in \mathbb{R}^{n \times d}$ represent the source and target transferable features output by the adaptation layer for batch $b$, where $b$ is the batch ID and $n$ is the batch size. $C_S$ and $C_T$ represent the covariance matrices, which can be computed as $C_S = \frac{1}{n-1}(H_S^b)^\top J_n H_S^b$ and $C_T = \frac{1}{n-1}(H_T^b)^\top J_n H_T^b$, where $J_n = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$ is the centering matrix and $\mathbf{1}$ is an all-one column vector.
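As a concrete illustration, the following minimal TensorFlow sketch computes this CORAL term from two feature batches; the function name and the exact normalization are assumptions for illustration rather than the paper's released code.

```python
import tensorflow as tf

def coral_loss(h_s, h_t):
    """Eq. 1: squared Frobenius distance between source/target covariances.

    h_s, h_t: [n, d] feature batches from the adaptation layer.
    """
    d = tf.cast(tf.shape(h_s)[1], tf.float32)

    def covariance(h):
        n = tf.cast(tf.shape(h)[0], tf.float32)
        h_centered = h - tf.reduce_mean(h, axis=0, keepdims=True)  # center the features
        return tf.matmul(h_centered, h_centered, transpose_a=True) / (n - 1.0)

    c_s, c_t = covariance(h_s), covariance(h_t)
    return tf.reduce_sum(tf.square(c_s - c_t)) / (4.0 * d * d)
```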

To ensure that the shared feature extractor $F$ and classifier $C$ can be trained with supervision on labeled samples, we define a standard cross-entropy classification loss $\mathcal{L}_{cls}$ with respect to the labeled source samples. Moreover, to encourage the target domain to have a nice manifold structure and thus increase the contribution of the target data for better transfer, we extend CORAL by adopting the entropy minimization principle [11]. Let $\hat{y}_j^t = C(F(x_j^t))$ denote the predicted class distribution of target sample $x_j^t$; the entropy objective used to quantify the uncertainty of the predicted target labels can be computed as $\mathcal{L}_{ent} = -\frac{1}{n_t}\sum_{j=1}^{n_t}\sum_{c=1}^{|\mathcal{C}_s|} \hat{y}_{j,c}^t \log \hat{y}_{j,c}^t$. Note that we do not apply the entropy minimization to both the feature extractor and the classifier as in [16]; otherwise, due to the large domain gap, the target samples are easily stuck in wrong classes early in training and are difficult to correct afterward. Formally, the full objective function of the domain adaptation model is as follows:

$$\mathcal{L}_{DA} = \mathcal{L}_{cls} + \alpha\,\mathcal{L}_{coral} + \beta\,\mathcal{L}_{ent} \qquad (2)$$

where the hyperparameters $\alpha$ and $\beta$ control the impact of the corresponding objective terms. However, in the PDA scenario, most UDA methods (e.g., CORAL) may trigger negative transfer since they force alignment of the global distributions $p$ and $q$, even though the distribution of the outlier source classes and the target distribution are non-overlapping and cannot be aligned during transfer. Thus, the motivation of the reinforced data selector is to mitigate negative transfer by filtering out the outlier source classes before performing the distribution alignment.
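A hedged sketch of the full objective in Eq. 2, reusing the coral_loss helper above; the weights alpha and beta and the function signature are illustrative assumptions.

```python
import tensorflow as tf

def da_objective(logits_s, labels_s, logits_t, h_s, h_t, alpha=1.0, beta=0.1):
    """Eq. 2: cross-entropy + alpha * CORAL + beta * target entropy (a sketch)."""
    # Supervised cross-entropy on the (selected) labeled source batch.
    cls = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels_s,
                                                       logits=logits_s))
    # Shannon entropy of the predicted target class distributions.
    p_t = tf.nn.softmax(logits_t)
    ent = -tf.reduce_mean(tf.reduce_sum(p_t * tf.math.log(p_t + 1e-8), axis=1))
    return cls + alpha * coral_loss(h_s, h_t) + beta * ent
```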

Reinforced Data Selector

Overview

We consider the source sample selection process of RTNet as a Markov decision process, which can be addressed by RL. The reinforced data selector is an agent that interacts with the environment created by the domain adaptation model. The agent takes actions to keep or drop source samples based on the policy function. The domain adaptation model evaluates the actions taken by the agent and provides a reward to guide the agent's learning. The goal of the agent is to maximize the reward through the actions it takes.

As shown in Figure 2, we are given the $b$-th batch of source samples $X_b^s = \{x_i^s\}_{i=1}^{n}$, where $b$ is the batch ID and $n$ is the batch size. We obtain the corresponding states $S_b = \{s_i\}_{i=1}^{n}$ through the domain adaptation model. The reinforced data selector then utilizes the policy $\pi_\Theta$ to determine the actions $A_b = \{a_i\}_{i=1}^{n}$ taken on the source samples, where $a_i \in \{0, 1\}$. $a_i = 0$ means that the outlier source sample $x_i^s$ is filtered out of $X_b^s$. Thus, we get a new source batch $X_b^{s'}$ related to the target domain. Instead of $X_b^s$, we feed $X_b^{s'}$ into the domain adaptation model to solve the PDA problem. Finally, the domain adaptation model moves to the next state after being updated with $X_b^{s'}$ and the target batch $X_b^t$, and provides a reward computed from the reconstruction errors of the selected source samples on $G_t$ to update the policy and value networks. In the following sections, we describe the state, action, and reward in detail.

State

In RTNet, the state of a source sample $x_i^s$ is defined as a vector $s_i$. In order to simultaneously consider the unique information of each source sample and the label distribution information of the target domain when taking an action, $s_i$ concatenates the following features: (1) the high-level semantic feature $F(x_i^s)$, i.e., the output of the feature extractor given $x_i^s$; (2) the label of the source sample, represented by a one-hot vector; (3) the predicted probability distribution of the target batch $X_b^t$, which can be calculated as $\hat{q} = \frac{1}{n}\sum_{j=1}^{n} C(F(x_j^t))$. Feature (1) represents the high-level information of the source sample. Feature (3) is based on the intuition that the probability of assigning target data to an outlier source class should be small, since target samples are significantly dissimilar to outlier source samples; consequently, $\hat{q}$ quantifies the contribution of each source class to the target domain. Feature (2) is combined with feature (3) to measure the relation between each source sample and the target domain.
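The state construction can be sketched as follows in NumPy; the array shapes and the helper name are assumptions for illustration.

```python
import numpy as np

def build_states(feat_s, labels_s, probs_t, num_classes):
    """Concatenate features (1)-(3) into one state vector per source sample.

    feat_s:   (n, d)  high-level features F(x_s) of the source batch
    labels_s: (n,)    integer source labels
    probs_t:  (n, K)  softmax predictions C(F(x_t)) of the target batch
    """
    one_hot = np.eye(num_classes)[labels_s]        # feature (2): one-hot labels
    q_hat = probs_t.mean(axis=0, keepdims=True)    # feature (3): mean target prediction
    q_hat = np.repeat(q_hat, feat_s.shape[0], axis=0)
    return np.concatenate([feat_s, one_hot, q_hat], axis=1)
```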

Action

The action $a_i \in \{0, 1\}$ indicates whether the source sample $x_i^s$ is kept in or filtered out of the source batch. The selector utilizes the $\epsilon$-greedy strategy [18] to sample $a_i$ based on $\pi_\Theta(a_i \mid s_i)$, where $\pi_\Theta(a_i = 1 \mid s_i)$ represents the probability that the sample is kept. The exploration rate $\epsilon$ is decayed from 1 to 0 during training. $\pi_\Theta$ is defined as a policy network with two fully connected layers. Formally, $\pi_\Theta(a_i = 1 \mid s_i)$ is computed as follows:

$$\pi_\Theta(a_i = 1 \mid s_i) = \sigma\big(W_2\,\mathrm{ReLU}(W_1 s_i + b_1) + b_2\big) \qquad (3)$$

where $\mathrm{ReLU}(\cdot)$ is the ReLU activation, $\sigma(\cdot)$ is the sigmoid function, $W_l$ and $b_l$ are the weight matrix and bias of the $l$-th layer, and $s_i$ is the state of the source sample, which concatenates features (1), (2) and (3).
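A minimal sketch of the two-layer policy network and the epsilon-greedy sampling described above; the hidden width of 128 and the helper names are assumptions, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf

# Two fully connected layers ending in a sigmoid keep-probability (Eq. 3).
policy_net = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def sample_actions(states, epsilon):
    """Epsilon-greedy keep/drop decisions: random with prob. epsilon, greedy otherwise."""
    keep_prob = policy_net(states).numpy().squeeze(-1)
    greedy = (keep_prob >= 0.5).astype(np.int32)
    random_actions = np.random.randint(0, 2, size=greedy.shape)
    explore = np.random.rand(*greedy.shape) < epsilon
    return np.where(explore, random_actions, greedy)
```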

Reward

The selector takes the actions $A_b$ to select $X_b^{s'}$ from $X_b^s$. RTNet uses $X_b^{s'}$ to update the domain adaptation model and obtains a reward for evaluating the policy. In contrast to standard reinforcement learning, where one reward corresponds to one action, RTNet assigns one reward to a batch of actions to improve the efficiency of model training.

To take pixel-level information into account when selecting source samples, the reward is designed according to the reconstruction errors of the selected source samples on the target generator. The intuition is that the reconstruction error of an outlier source sample is large, since it is extremely dissimilar to the target classes. Consequently, the selector aims to select source samples with small reconstruction errors for distribution alignment and classifier training. Since the purpose of reinforcement learning is to maximize the reward, we design the following reward based on the reconstruction error:

$$r_b = -\frac{1}{n'}\sum_{i=1}^{n'} \big\lVert G_t\big(F(x_i^{s'})\big) - x_i^{s'} \big\rVert_2^2 \qquad (4)$$

where $x_i^{s'}$ is a sample selected by the reinforced data selector and $n'$ is the number of selected samples. Note that, to accurately evaluate the efficacy of the actions $A_b$, rewards are collected after the feature extractor and classifier are updated as in Eq. 5 and before the generators are updated as in Eq. 6. $F$, $C$, $G_s$ and $G_t$ can be trained as follows:

$$\min_{F,\,C}\ \mathcal{L}_{DA}\big(X_b^{s'}, X_b^t\big) \qquad (5)$$
$$\min_{G_s}\ \frac{1}{n}\sum_{i=1}^{n} \big\lVert G_s\big(F(x_i^{s})\big) - x_i^{s} \big\rVert_2^2, \qquad \min_{G_t}\ \frac{1}{n}\sum_{j=1}^{n} \big\lVert G_t\big(F(x_j^{t})\big) - x_j^{t} \big\rVert_2^2 \qquad (6)$$
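The reward of Eq. 4 can be sketched as follows, assuming image tensors and that the target auto-encoder is the composition of the feature extractor and the target generator (both passed in as callables).

```python
import tensorflow as tf

def reward(x_selected, feature_extractor, target_generator):
    """Negative mean pixel-level reconstruction error of the selected source
    samples on the target auto-encoder (Eq. 4). x_selected: [n', H, W, C]."""
    recon = target_generator(feature_extractor(x_selected))
    per_sample = tf.reduce_mean(tf.square(recon - x_selected), axis=[1, 2, 3])
    return -tf.reduce_mean(per_sample)
```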

In the selection process, not only the most recent actions but all previous actions contribute to the reward. Therefore, the future total reward for each batch can be formalized as:

$$R_b = \sum_{k=b}^{L} \gamma^{\,k-b}\, r_k \qquad (7)$$

where $\gamma \in [0, 1]$ is the reward discount factor and $L$ is the number of batches in the episode.
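Eq. 7 is the usual discounted return, which can be computed for a whole episode with a single backward pass over the per-batch rewards; a plain-Python sketch:

```python
def discounted_returns(rewards, gamma=0.8):
    """R_b = sum_{k >= b} gamma**(k - b) * r_k for each batch of an episode."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

# e.g. discounted_returns([1.0, 0.0, 2.0], gamma=0.5) -> [1.5, 1.0, 2.0]
```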

Optimization

The selector is optimized with the actor-critic algorithm [13]. In each episode, the selector aims to maximize the expected total reward. Formally, the objective function is defined as:

$$J(\Theta) = \mathbb{E}_{\pi_\Theta}\Big[\sum_{b=1}^{L} r_b\Big] \qquad (8)$$

where $\Theta$ represents the parameters of the policy network $\pi_\Theta$. $\Theta$ is updated by performing (typically approximate) gradient ascent on $J(\Theta)$. Formally, the update step of $\Theta$ is defined as:

$$\Theta \leftarrow \Theta + \eta\,\frac{1}{n}\sum_{i=1}^{n} \hat{A}_i\,\nabla_\Theta \log \pi_\Theta(a_i \mid s_i) \qquad (9)$$

where $\eta$ is the learning rate, $n$ is the batch size, and $\hat{A}_i$ is an estimate of the advantage function based on the future total reward, which guides the update of $\Theta$. Note that the future total reward $R_b$ is an unbiased estimate of the expected return [27]. The actor-critic framework combines the policy network $\pi_\Theta$ and the value network $V_\Phi$ for stable training. In this work, we utilize $V_\Phi(s_i)$ to estimate the expected future total reward. Hence, $\hat{A}_i$ can be considered as an estimate of the advantage of action $a_i$, which encourages $\pi_\Theta$ to adopt strategies that maximize the future total reward. $\hat{A}_i$ is defined as follows:

$$\hat{A}_i = R_b - V_\Phi(s_i) \qquad (10)$$

The architecture of the value network $V_\Phi$ is similar to that of the policy network, except that the final output layer is a regression function. The value network is designed to estimate the expected future total reward for each state and can be optimized in the following form:

$$\mathcal{L}_{V} = \frac{1}{n}\sum_{i=1}^{n} \big(R_b - V_\Phi(s_i)\big)^2 \qquad (11)$$
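Putting Eqs. 9-11 together, one actor-critic update over a stored batch can be sketched as below in TensorFlow; the optimizers, network objects and the use of a shared return per batch are assumptions consistent with the description above.

```python
import tensorflow as tf

def actor_critic_step(states, actions, returns, policy_net, value_net,
                      policy_opt, value_opt):
    """One update of the value network (Eq. 11) and the policy network (Eq. 9)."""
    returns = tf.reshape(tf.cast(returns, tf.float32), [-1, 1])
    actions = tf.reshape(tf.cast(actions, tf.float32), [-1, 1])

    # Critic: regress V(s) towards the observed future total reward R (Eq. 11).
    with tf.GradientTape() as tape:
        values = value_net(states)
        value_loss = tf.reduce_mean(tf.square(returns - values))
    grads = tape.gradient(value_loss, value_net.trainable_variables)
    value_opt.apply_gradients(zip(grads, value_net.trainable_variables))

    # Actor: advantage-weighted log-likelihood of the taken keep/drop actions (Eq. 9).
    advantage = tf.stop_gradient(returns - values)
    with tf.GradientTape() as tape:
        keep_prob = policy_net(states)
        log_prob = (actions * tf.math.log(keep_prob + 1e-8)
                    + (1.0 - actions) * tf.math.log(1.0 - keep_prob + 1e-8))
        policy_loss = -tf.reduce_mean(advantage * log_prob)
    grads = tape.gradient(policy_loss, policy_net.trainable_variables)
    policy_opt.apply_gradients(zip(grads, policy_net.trainable_variables))
```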

As the reinforced data selector and the domain adaptation model interact with each other during training, we train them jointly. To ensure that the domain adaptation model provides accurate states and rewards in the early stages of training, we first pre-train $F$, $C$, $G_s$ and $G_t$ with the classification loss on source samples and Eq. 6. We follow the previous work [21] to train RTNet; the detailed training procedure is shown in Algorithm 1. Note that RTNet also filters the outlier source classes for the classifier, thus focusing the classifier on the source samples belonging to the target label space, which enables it to provide a more accurate target label distribution for feature (3).

Require: episode number $E$, source data $\mathcal{D}_s$ and target data $\mathcal{D}_t$.
1: Initialize the pre-trained shared feature extractor $F$ and shared classifier $C$ in the domain adaptation model.
2: Initialize the policy network $\pi_\Theta$, the value network $V_\Phi$ and the pre-trained generators $G_s$, $G_t$ in the reinforced data selector.
3: for episode $= 1$ to $E$ do
4:     for each source batch $X_b^s$ and target batch $X_b^t$ do
5:         Obtain the states $S_b$ through the domain adaptation model.
6:         Utilize the $\epsilon$-greedy strategy to sample the actions $A_b$ based on $\pi_\Theta$.
7:         Select the source training batch $X_b^{s'}$ from $X_b^s$ according to $A_b$.
8:         Update the domain adaptation model ($F$ and $C$) with $X_b^{s'}$ and $X_b^t$ as in Eq. 5.
9:         Obtain the reward $r_b$ on $G_t$ with $X_b^{s'}$ as in Eq. 4.
10:        Update $G_s$ and $G_t$ with $X_b^s$ and $X_b^t$ as in Eq. 6.
11:        Store $(S_b, A_b, r_b)$ in the episode history $\mathcal{H}$.
12:     end for
13:     for each $(S_b, A_b, r_b) \in \mathcal{H}$ do
14:         Obtain the future total reward $R_b$ as in Eq. 7.
15:         Obtain the estimated future total reward $V_\Phi(s_i)$.
16:         Update the policy network $\pi_\Theta$ as in Eq. 9.
17:         Update the value network $V_\Phi$ as in Eq. 11.
18:     end for
19: end for
Algorithm 1: The optimization strategy of RTNet.

Experiments

Datasets

Office-31 [23] is a widely-used visual domain adaptation dataset containing 4,110 images of 31 categories from three distinct domains: Amazon website (A), Webcam (W) and DSLR camera (D). Following the settings in [3], we select the same 10 categories in each domain to build the new target domains and create 6 transfer scenarios, A31→W10, W31→A10, W31→D10, D31→W10, A31→D10, and D31→A10, to evaluate RTNet. The digital dataset includes five domain adaptation benchmarks: Street View House Numbers (SVHN) [19], MNIST [14], MNIST-M [10], USPS [12] and the synthetic digits dataset (SYN) [10], each consisting of ten categories. We select 5 categories (digit 0 to digit 4) as the target domain in each dataset and construct four partial domain adaptation tasks: SVHN10→MNIST5, MNIST10→MNIST-M5, USPS10→MNIST5 and SYN10→MNIST5.
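For reference, building such a partial target split only requires keeping the shared classes; a small NumPy sketch (the function name is hypothetical, and the 0-4 class choice follows the digit tasks above):

```python
import numpy as np

def make_partial_target(images, labels, shared_classes=(0, 1, 2, 3, 4)):
    """Keep only target samples whose labels fall in the shared label space,
    e.g. digits 0-4 for the SVHN10 -> MNIST5 task."""
    mask = np.isin(labels, shared_classes)
    return images[mask], labels[mask]
```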

Implementation Details

RTNet is implemented in TensorFlow and trained with the Adam optimizer. For the experiments on Office-31, we employ ResNet-50 pre-trained on ImageNet as the backbone of the domain adaptation model and fine-tune the parameters of the fully connected layers and the final block. For the experiments on the digital datasets, we adopt a modified LeNet as the backbone of the domain adaptation model and update all of the weights. All images are converted to grayscale and resized to 32 × 32.

In RTNet, we adopt two separate fully connected layers for the policy network $\pi_\Theta$ and the value network $V_\Phi$, and a series of transposed convolutional layers for the generators $G_s$ and $G_t$. To guarantee a fair comparison, the same backbones are used for $F$ and $C$ in all comparison methods, and each method is trained five times with the average taken as the final result. In our method, the discount factor $\gamma$, which determines how strongly later rewards are credited to earlier actions, is a critical hyperparameter. We first select it according to the accuracy on the SVHN10→MNIST5 task and then set $\gamma$ to 0.8 for all other transfer tasks, since our approach works stably across different tasks. As for the other hyperparameters, we set $\alpha$ and $\beta$ as in [6]. To ease model selection, the hyperparameters of the comparison methods are gradually changed from 0 to 1 as in [17].

Type  Method     A31→W10   D31→W10   W31→D10   A31→D10   D31→A10   W31→A10   Avg
NoA   ResNet-50  76.5±0.3  ±0.2      ±0.1      ±0.2      ±0.1      ±0.3      82.0
UDA   DAN        53.6±0.7  ±0.5      ±0.6      ±0.5      ±0.6      ±0.5      52.1
      DANN       62.8±0.6  71.6±0.4  ±0.5      ±0.7      ±0.3      ±0.4      63.9
      CORAL      52.1±0.5  65.2±0.2  ±0.7      ±0.5      ±0.4      ±0.3      58.4
      JDDA       73.5±0.6  ±0.3      ±0.2      ±0.4      ±0.1      62.8±0.2  75.5
PDA   ETN        89.9±0.3  ±0.1      ±0.2      ±0.4      83.9±0.1  ±0.2      91.3
      PADA       86.3±0.4  ±0.1      100±0.0   ±0.1      ±0.2      ±0.1      89.4
      RTNet      92.1±0.3  100±0.0   100±0.0   ±0.1      ±0.1      ±0.1      92.7
Table 1: Performance (prediction accuracy ± standard deviation, %) on the Office-31 dataset.
Type  Method          SVHN10→MNIST5  MNIST10→MNIST-M5  USPS10→MNIST5  SYN10→MNIST5  Avg
NoA   Modified LeNet  79.6±0.3       60.2±0.4          ±0.6           91.3±0.4      76.9
UDA   DAN             63.5±0.5       ±0.5              ±0.4           55.0±0.3      57.2
      DANN            68.9±0.7       50.6±0.7          83.3±0.5       77.6±0.4      70.1
      CORAL           60.8±0.6       43.4±0.5          61.7±0.5       74.4±0.4      60.1
      JDDA            72.1±0.4       54.3±0.2          71.7±0.4       85.2±0.2      70.8
PDA   ETN             93.6±0.2       92.5±0.1          ±0.1           ±0.2          95.1
      PADA            90.4±0.3       89.1±0.2          ±0.3           ±0.1          93.4
      RTNet           95.3±0.1       93.2±0.2          98.9±0.1       99.2±0.0      96.7
Table 2: Performance (prediction accuracy ± standard deviation, %) on the digital datasets.

Result and Discussion

Table 1 and Table 2 show the classification results on the two datasets. RTNet achieves the best accuracy on most transfer tasks. In particular, RTNet outperforms the other methods by a large margin both on tasks with small source and target domains, e.g., A31→W10, and on tasks with large source and target domains, e.g., SVHN10→MNIST5. These results confirm that our approach can learn more transferable features in PDA scenarios of various scales by selecting the related source samples for positive transfer.

Looking at Table 1 and Table 2, several observations can be made. First, the previous UDA methods, including those based on adversarial networks (DANN) and those based on moment matching (DAN, JDDA, and CORAL), perform even worse than the non-adaptation models (ResNet-50 or modified LeNet), indicating that they are affected by negative transfer. These methods reduce the shift of the marginal distribution between domains without considering the conditional distribution, thus matching the outlier source classes with the target domain, which weakens classifier performance. Second, the PDA methods (ETN and PADA) improve the classification accuracy by a large margin since their weighting mechanisms can mitigate the negative transfer caused by the outlier categories. Finally, RTNet achieves the best performance. Different from the previous PDA methods, which rely only on the predicted probability distribution to obtain the weights, RTNet combines the high-level semantic features and the predicted probability distribution to select source samples, and employs the pixel-level reconstruction error as the evaluation criterion to guide the learning of the policy network. Thus, this selection mechanism can detect outlier source classes more effectively and transfer relevant samples.

Figure 3: The t-SNE visualization on the A31→W10 task ((a) ResNet-50, (b) CORAL, (c) ETN, (d) RTNet) and the SVHN10→MNIST5 task ((e) LeNet, (f) CORAL, (g) ETN, (h) RTNet). Red points represent target samples and blue points represent source samples.
Figure 4: (a) Histograms of the source class-wise retention probabilities learned by the policy network on the SVHN10→MNIST5 task. (b) Sensitivity to the discount factor $\gamma$ on the SVHN10→MNIST5 task. (c) Convergence analysis on the SVHN10→MNIST5 task. (d) Learning curve on the SVHN10→MNIST5 task. (e) Accuracy curve when varying the number of target classes on the A→W task.

Analysis

State feature combination   Accuracy (%)
Modified LeNet              79.6
(1)                         92.7
(2) + (3)                   86.3
(1) + (2) + (3)             95.3
Table 3: Results (accuracy, %) with different combinations of state features on the SVHN10→MNIST5 task.

Ablation Study

We further perform an ablation test of the state features on the SVHN10→MNIST5 task. We have three state features, as described in the Our Approach section; two of them (features (2) and (3)) can be considered as a feature group because they are combined to evaluate the contribution of a source sample to the target domain. As shown in Table 3, feature (1) alone already achieves good performance, which indicates that the feature extracted by the feature extractor describes the state of the model well, while the result of the second feature group suggests that the probability distributions alone have limited capacity for state representation. Besides, the combination of the two feature groups yields the best performance, confirming that all features contribute to the final result.

Feature Visualization

We visualize the features of the adaptation layer using t-SNE [8]. As shown in Figure 3, several observations can be made. First, by comparing Figures 3(a), 3(e) with Figures 3(b), 3(f), we find that CORAL forces the target domain to be aligned with the whole source domain, including outlier classes that do not exist in the target label space, which triggers negative transfer and degrades the model. Second, as can be seen in Figures 3(d), 3(h), RTNet correctly matches the target samples to related source samples by integrating the selector into the CORAL architecture to filter out outlier classes, which confirms that a matching-based UDA framework can be extended to solve the PDA problem by embedding the reinforced data selector. Finally, compared with Figures 3(c), 3(g), RTNet matches the related source domain and the target domain more accurately, indicating that it is more effective than ETN in suppressing the impact of outlier source classes by considering both high-level and pixel-level information.

Statistics of Class-wise Retention Probabilities

To verify the ability of the data selector to filter samples, we average the retention probabilities $\pi_\Theta(a_i = 1 \mid s_i)$ over the source samples of each class, i.e., $\frac{1}{|\mathcal{X}_c|}\sum_{x_i^s \in \mathcal{X}_c} \pi_\Theta(a_i = 1 \mid s_i)$, where $\mathcal{X}_c$ denotes the set of source samples belonging to class $c$. As shown in Figure 4(a), RTNet assigns much larger retention probabilities to the shared classes than to the outlier classes. This result shows that RTNet can automatically select relevant source classes and filter out outlier classes.
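This per-class statistic is a simple average of the policy's keep-probabilities; a small NumPy sketch (the helper name is hypothetical):

```python
import numpy as np

def classwise_retention(keep_probs, labels, num_classes):
    """Average retention probability of the source samples of each class,
    as plotted in the histogram of Figure 4(a)."""
    return np.array([keep_probs[labels == c].mean() if np.any(labels == c) else 0.0
                     for c in range(num_classes)])
```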

Parameter Sensitivity

To investigate the effect of the reward discount factor $\gamma$, we vary its value between 0 and 1. $\gamma = 0$ indicates that future rewards are not considered when updating the policy network, while $\gamma = 1$ indicates that future rewards are considered without any discount. As shown in Figure 4(b), the trend implies that appropriately increasing the contribution of future rewards facilitates correct filtering by the reinforced data selector and thus mitigates negative transfer.

Convergence Performance

We analyze the convergence of RTNet. As shown in Figure 4(c), the test errors of DANN and CORAL are higher than that of ResNet due to negative transfer. Their test errors are also unstable, probably because the target domain is matched to different outlier classes during training. RTNet converges quickly and stably to the lowest test error, indicating that it can be trained efficiently to solve the PDA problem. As shown in Figure 4(d), the reward gradually increases as the episodes progress, meaning that the reinforced data selector learns a correct policy that maximizes the reward and filters out outlier source classes.

Target Classes

We conduct experiments to evaluate the performance of RTNet when the number of target classes varies. As shown in Figure 4(e), as the number of target classes decreases, the performance of CORAL degrades rapidly, indicating that negative transfer becomes more and more serious as the gap between the source and target label spaces grows. RTNet performs better than the other comparison methods, indicating that our approach can mitigate negative transfer to solve the PDA problem. Moreover, RTNet is superior to the UDA method (CORAL) even when the source and target label spaces are identical (A31→W31), which shows that our method does not filter erroneously when there are no outlier classes.

Conclusion

In this work, we propose an end-to-end reinforced transfer network, which utilizes both high-level and pixel-level information to address the partial domain adaptation problem. RTNet applies reinforcement learning to train a reinforced data selector based on the actor-critic framework to filter out outlier source classes, with the purpose of mitigating negative transfer. Unlike previous partial domain adaptation methods based on adversarial networks, the proposed reinforced data selector can be integrated into almost all standard domain adaptation frameworks, including those based on adversarial training and those based on moment matching. The results of RTNet built on an adversarial domain adaptation model are shown in the Appendix. The state-of-the-art experimental results confirm the efficacy of our method.

References

  • [1] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath (2017) Deep reinforcement learning: a brief survey. IEEE Signal Processing Magazine 34 (6), pp. 26–38.
  • [2] J. Q. Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence (2009) Dataset shift in machine learning. The MIT Press.
  • [3] Z. Cao, M. Long, J. Wang, and M. I. Jordan (2018) Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2724–2732.
  • [4] Z. Cao, L. Ma, M. Long, and J. Wang (2018) Partial adversarial domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 135–150.
  • [5] Z. Cao, K. You, M. Long, J. Wang, and Q. Yang (2019) Learning to transfer examples for partial domain adaptation. arXiv preprint arXiv:1903.12230.
  • [6] C. Chen, Z. Chen, B. Jiang, and X. Jin (2019) Joint domain alignment and discriminative feature learning for unsupervised deep domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence.
  • [7] Z. Chen, C. Chen, X. Jin, Y. Liu, and Z. Cheng (2019) Deep joint two-stream Wasserstein auto-encoder and selective attention alignment for unsupervised domain adaptation. Neural Computing and Applications, pp. 1–14.
  • [8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell (2014) DeCAF: a deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pp. 647–655.
  • [9] M. Fang, Y. Li, and T. Cohn (2017) Learning how to active learn: a deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 595–605.
  • [10] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky (2016) Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17 (1), pp. 2096–2030.
  • [11] Y. Grandvalet and Y. Bengio (2005) Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, pp. 529–536.
  • [12] J. J. Hull (1994) A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (5), pp. 550–554.
  • [13] V. R. Konda and J. N. Tsitsiklis (2000) Actor-critic algorithms. In Advances in Neural Information Processing Systems, pp. 1008–1014.
  • [14] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
  • [15] M. Long, Y. Cao, J. Wang, and M. Jordan (2015) Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pp. 97–105.
  • [16] M. Long, H. Zhu, J. Wang, and M. I. Jordan (2016) Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems, pp. 136–144.
  • [17] M. Long, H. Zhu, J. Wang, and M. I. Jordan (2017) Deep transfer learning with joint adaptation networks. In International Conference on Machine Learning, pp. 2208–2217.
  • [18] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533.
  • [19] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng (2011) Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
  • [20] S. J. Pan and Q. Yang (2010) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359.
  • [21] C. Qu, F. Ji, M. Qiu, L. Yang, Z. Min, H. Chen, J. Huang, and W. B. Croft (2019) Learning to selectively transfer: reinforced transfer learning for deep text matching. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 699–707.
  • [22] G. A. Rummery and M. Niranjan (1994) On-line Q-learning using connectionist systems. Vol. 37, University of Cambridge, Department of Engineering, Cambridge, England.
  • [23] K. Saenko, B. Kulis, M. Fritz, and T. Darrell (2010) Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV).
  • [24] B. Sun and K. Saenko (2016) Deep CORAL: correlation alignment for deep domain adaptation. In European Conference on Computer Vision, pp. 443–450.
  • [25] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2962–2971.
  • [26] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell (2014) Deep domain confusion: maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
  • [27] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3–4), pp. 229–256.
  • [28] J. Wu, L. Li, and W. Y. Wang (2018) Reinforced co-training. In NAACL HLT 2018: 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 1252–1262.
  • [29] J. Zhang, Z. Ding, W. Li, and P. Ogunbona (2018) Importance weighted adversarial nets for partial domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8156–8164.