Unsupervised Cross-Modality Domain Adaptation of ConvNets for Biomedical Image Segmentations with Adversarial Loss


Qi Dou, Cheng Ouyang, Cheng Chen, Hao Chen and Pheng-Ann Heng
Department of Computer Science and Engineering, The Chinese University of Hong Kong
Department of Electrical Engineering and Computer Science, University of Michigan
Imsight Medical Technology Inc., Shenzhen, China
qdou@cse.cuhk.edu.hk, couy@umich.edu, {cchen,hchen,pheng}@cse.cuhk.edu.hk
Authors contributed equally.

Convolutional networks (ConvNets) have achieved great success in various challenging vision tasks. However, the performance of ConvNets degrades when encountering domain shift. Domain adaptation is both more significant and more challenging in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating medical data is especially expensive, supervised transfer learning approaches are far from optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentation. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features which are aligned with the source domain feature space. A domain critic module (DCM) is set up to discriminate the feature space of the two domains. We optimize the DAM and DCM via an adversarial loss without using any target domain label. Our proposed method is validated by adapting a ConvNet trained with MRI images to unpaired CT data for cardiac structure segmentation, and achieves very promising results.


1 Introduction

Deep convolutional networks (ConvNets) have demonstrated great achievements in recent years, reaching state-of-the-art or even human-level performance on various challenging computer vision problems, such as image recognition, semantic segmentation, as well as biomedical image diagnosis [??]. Typically, deep networks are trained and tested on datasets where all the samples are drawn from the same probability distribution. However, it has been observed that established models under-perform when tested on samples from a related but not identical new target domain [?].

The existence of domain shift is common in real-life applications [??]. The semantic class labels are usually shared between domains, whereas the distributions of the data differ. In the field of biomedical image analysis, this issue is even more pronounced. Unlike natural images, which are generally taken by optical devices, medical radiological images are acquired by different imaging modalities, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). The data distributions of these modalities mismatch significantly, due to their different principles of imaging physics. The appearances of anatomical structures are distinct across radiology modalities, with obviously different intensity histograms. In Fig. 1, we illustrate the severe domain shift between MRI and CT data. In comparison with examples from natural image datasets, domain adaptation for cross-modality medical data is more challenging.

Figure 1: Illustration of the severe domain shift existing in cross-modality biomedical images. The appearances of the anatomical structures (AA: ascending aorta, LV-blood: left ventricle blood cavity, LV-myo: left ventricle myocardium) vary significantly between MRI and CT images. Compared with natural image datasets (see bottom examples), domain adaptation for cross-modality medical images encounters more challenges.

To tackle this issue, domain adaptation methods have been studied to generalize learned models [?]. The domain of the labeled training data is termed the source domain, and the test dataset is called the target domain. A straightforward solution is transfer learning, i.e., fine-tuning the models learned on the source domain with extra labeled data from the target domain [?]. However, the annotation is prohibitively time-consuming and expensive, especially for biomedical datasets. Alternatively, unsupervised domain adaptation methods are more feasible, given that they transfer knowledge across domains without using additional target domain labels. Advanced studies in this direction have taken advantage of adversarial training to implicitly learn the feature mapping between domains, and achieved remarkable success on natural image datasets [??].

Currently, for biomedical images, how to effectively generalize ConvNets across domains has not yet been fully studied. A representative work is [?], which conducted unsupervised domain adaptation for brain lesion segmentation and achieved promising results. However, their source and target domains are relatively close, given that both are MRI datasets, albeit acquired with different scanners. Adapting ConvNets between cross-modality radiology images with a huge domain shift is more compelling for clinical practice, but has not been explored yet.

In this paper, we propose a novel cross-modality domain adaptation framework for medical image segmentations with unsupervised adversarial learning. To transfer the established ConvNet from source domain (MRI) to target domain (CT) images, we design a plug-and-play domain adaptation module (DAM) which implicitly maps the target input data to the feature space of source domain. Furthermore, we construct a discriminator which is also a ConvNet termed as domain critic module (DCM) to differentiate the feature distributions of two domains. Adversarial loss is derived to train the entire domain adaptation framework in an unsupervised manner, by placing the DAM and DCM into a minimax two-player game. Our main contributions are:

  • We pioneer cross-modality domain adaptation for medical image segmentation using deep ConvNets. A flexible plug-and-play framework is designed to transfer an MRI segmenter to CT data via feature-level mapping.

  • We optimize our framework with unpaired MRI/CT images via adversarial learning in an unsupervised manner, eliminating the cost of labeling extra medical datasets.

  • Extensive experiments with promising results on cardiac segmentation application have validated the feasibility of radiology cross-modality domain adaptation, as well as the effectiveness of our approach towards this task.

2 Related Work

Domain adaptation aims to confront the performance degradation caused by a distribution change occurring after a classifier has been learned. This situation also applies to deep learning models, and a stream of studies has been conducted to map the target input to the original source domain or its feature space. In this section, we first present related works on unsupervised domain adaptation that achieved promising results on natural image datasets. Next, we review recent studies on domain adaptation for medical image segmentation using ConvNets.

Most prior studies on unsupervised domain adaptation focused on aligning the distributions between domains in feature space, by minimizing measures of distance between features extracted from the source and target domains. For example, the Maximum Mean Discrepancy (MMD) was minimized together with a task-specific loss to learn domain-invariant and semantically meaningful features in [?]. The correlations of layer activations between the domains were aligned in the study of [?]. Building on this, [?] further extended the work and minimized the domain difference based on both first- and second-order statistics of the source and target domains. Alternatively, with the emergence of the generative adversarial network (GAN) [?] and its powerful extensions [??], the mapping between domains was implicitly learned via an adversarial loss. [?] proposed to extract domain-invariant features by sharing weights between two ConvNet classifiers. Later, [?] introduced a more flexible adversarial learning method with untied weight sharing, which helps effective learning in the presence of larger domain shifts. Another GAN-based direction is to learn a transformation in pixel space [?], adapting source-domain images to appear as if drawn from the target domain.
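To make the distance-minimization idea concrete, the following is a minimal pure-Python sketch of the biased empirical MMD estimator with a Gaussian RBF kernel on scalar features; the kernel choice, bandwidth, and toy samples are illustrative assumptions, not details taken from the cited works.

```python
import math

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel between two scalar features."""
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, xt, sigma=1.0):
    """Biased empirical estimate of the squared Maximum Mean Discrepancy
    between a source sample xs and a target sample xt (both 1-D)."""
    k_ss = sum(rbf_kernel(a, b, sigma) for a in xs for b in xs) / (len(xs) ** 2)
    k_tt = sum(rbf_kernel(a, b, sigma) for a in xt for b in xt) / (len(xt) ** 2)
    k_st = sum(rbf_kernel(a, b, sigma) for a in xs for b in xt) / (len(xs) * len(xt))
    return k_ss + k_tt - 2 * k_st

# Identical samples give zero discrepancy; shifted samples do not.
print(mmd_squared([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0
```

Minimizing such a quantity over the feature extractor pulls the two feature distributions together; the adversarial methods discussed next replace the fixed kernel with a learned critic.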

In the field of medical image analysis with deep learning, domain adaptation is also an important topic for generalizing learned models across data acquired with different imaging protocols. Transfer learning with network fine-tuning strategies has been experimentally studied by [?] on the brain lesion segmentation application. Although the amount was small, annotations from the target domain were still required in their scenario. The latest study on medical data that is most closely related to our work is [?], which performed unsupervised domain adaptation for brain lesion segmentation. Their ConvNets learned domain-invariant features on images, with an adversarial loss serving as the supervision for feature extraction. The results were inspiring and demonstrated the efficacy of an adversarial loss for unsupervised domain adaptation on medical datasets. However, their source and target domains are relatively close, because both were MRI datasets: although acquired with different scanners and imaging protocols, the images were from the same modality and the domain shift was not dramatic. In contrast, our problem setting, i.e., adapting a ConvNet trained on MRI data to CT images, is novel but more adventurous and challenging, since the domain shift is far more severe.

3 Methods

Figure 2: Overview of our proposed plug-and-play framework for cross-modality domain adaptation. The DAM and DCM are optimized via adversarial learning. During inference, the domain router is used for routing feature maps of different domains.

Fig. 2 presents our proposed framework for unsupervised cross-modality domain adaptation in biomedical image segmentation. Based on a standard ConvNet segmenter, we construct a plug-and-play domain adaptation module (DAM) and a domain critic module (DCM) to form the adversarial learning. Details of the network architecture, adaptation method, adversarial loss and training strategies are elaborated in this section.

3.1 ConvNet Segmenter Architecture

With the labeled dataset of $N$ samples from the source domain, denoted by $X^s = \{(x^s_i, y^s_i)\}_{i=1}^{N}$, we conduct supervised learning to establish a mapping from the input image to the label space $Y^s$. In our setting, $x^s$ represents a sample (pixel or patch) of medical images and $y^s$ is the category of anatomical structures. For ease of notation, we omit the index $i$ in the following, and directly use $x^s$ and $y^s$ to represent the samples and labels from the source domain.

The mapping from the input to the label space is implicitly learned in the form of a segmentation ConvNet. The backbone of our segmenter is a residual network for pixel-wise prediction of biomedical images. We employ dilated residual blocks [?] to extract representative features from a large receptive field while preserving the spatial acuity of the feature maps. More specifically, the image is first input to a Conv layer, then forwarded to 3 residual modules (termed RM, each consisting of 2 stacked residual blocks) and downsampled by a factor of 8. Next, another three RMs and one dilated RM are stacked to form a deep network. To enlarge the receptive field for extracting global semantic features, 4 dilated convolutional layers are used in RM7 with a dilation factor of 2. For the dense predictions in our segmentation task, we conduct upsampling at layer Conv10, which is followed by convolutions to smooth out the feature maps. Finally, a softmax layer is used for probability predictions of the pixels.
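The benefit of dilation can be quantified with the standard receptive-field recursion for stacked convolutions; the layer configurations below are toy examples to show the trend, not the exact architecture of our segmenter.

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.
    Each layer is a tuple (kernel_size, stride, dilation)."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump  # each layer widens the field by (k-1)*d*jump
        jump *= s                 # stride multiplies the step between outputs
    return rf

# Stacking 3x3 convolutions: dilation 2 grows the receptive field
# twice as fast as dilation 1, without any downsampling.
plain   = receptive_field([(3, 1, 1)] * 4)   # -> 9
dilated = receptive_field([(3, 1, 2)] * 4)   # -> 17
```

This is why dilated layers enlarge the context seen by each output pixel while keeping the feature-map resolution unchanged.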

The segmentation ConvNet on labeled data from the source domain is optimized by minimizing a hybrid loss composed of the multi-class cross-entropy loss and the Dice coefficient loss [?]. Formally, we denote the binary label regarding class $c \in C$ in sample $x^s$ by $y^{s,c}$, its probability prediction by $\hat{p}^{s,c}$, and its label prediction by $\hat{y}^{s,c}$; the source domain segmenter loss function is as follows:

$$\mathcal{L}_{seg} = -\sum_{c \in C} w^c \cdot y^{s,c} \log(\hat{p}^{s,c}) \;+\; \sum_{c \in C} \left( 1 - \frac{2 \sum y^{s,c} \hat{y}^{s,c}}{\sum y^{s,c} y^{s,c} + \sum \hat{y}^{s,c} \hat{y}^{s,c}} \right),$$

where the first term is the cross-entropy loss for pixel-wise classification, with $w^c$ being a weighting factor to cope with the issue of class imbalance. The second term is the Dice loss for the multiple cardiac structures, which is commonly employed in biomedical image segmentation problems. We combine the two complementary loss functions to tackle the challenging heart segmentation task. In practice, we also tried using only one type of loss, but the performance was inferior.
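As a rough sketch of the hybrid objective (assuming a single foreground class and a soft Dice term computed on probabilities for differentiability; the exact class weighting and multi-class form of the paper may differ):

```python
import math

def hybrid_loss(probs, labels, weight=1.0, eps=1e-7):
    """Toy hybrid loss for one foreground class over flattened pixels:
    weighted cross-entropy plus a soft Dice loss.
    probs  -- predicted foreground probabilities in (0, 1)
    labels -- binary ground-truth labels (0 or 1)
    """
    ce = -sum(weight * y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
              for p, y in zip(probs, labels)) / len(labels)
    inter = sum(p * y for p, y in zip(probs, labels))
    dice = 1 - 2 * inter / (sum(p * p for p in probs)
                            + sum(y * y for y in labels) + eps)
    return ce + dice
```

A near-perfect prediction drives both terms toward zero, while the Dice term keeps the gradient informative even when the foreground class is rare.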

3.2 Plug-and-Play Domain Adaptation Module

When the ConvNet is learned on the source domain, our goal is to generalize it to a target domain. In transfer learning, the last several layers of the network are usually fine-tuned for a new task with a new label space. The supporting assumption is that the early layers of the network extract low-level features (such as edge filters and color blobs) which are common across vision tasks, whereas the upper layers are more task-specific and learn high-level features for the classifier [??]. In this case, labeled data from the target domain are required to supervise the learning process. Differently, we use unlabeled data from the target domain, given that labeling datasets is time-consuming and expensive. This is critical in clinical practice, where radiologists prefer to perform image computing on cross-modality data with as little extra annotation cost as possible. Hence, we propose to adapt the ConvNet with unsupervised learning.

In our segmenter, the source domain mapping $M^s$ is a layer-wise feature extractor composed of the stacked transformations $\{M^s_l\}_{l=1}^{L}$, with $l$ denoting the network layer index. Formally, the predictions of labels are obtained by:

$$\hat{y}^s = M^s_L \circ M^s_{L-1} \circ \cdots \circ M^s_1 (x^s).$$
For domain adaptation, the label spaces of the source and target domains are identical, i.e., we segment the same anatomical structures from medical MRI/CT data. Our hypothesis is that the distribution changes between the cross-modality domains lie primarily in low-level characteristics (e.g., gray-scale values) rather than high-level ones (e.g., geometric structures). The higher layers are closely correlated with the class labels, which can be shared across different domains. In this regard, we propose to reuse the feature extractors learned in the higher layers of the ConvNet, whereas the earlier layers are updated to conduct distribution mapping in feature space for our unsupervised domain adaptation.

For the input $x^t$ from the target domain $X^t$, we propose a domain adaptation module, denoted by $\mathcal{M}$, which maps $x^t$ to the feature space of the source domain. We denote the adaptation depth by $d$, i.e., the layers earlier than and including $d$ are replaced by the DAM when processing target domain images. Meanwhile, the source model's upper layers are frozen during domain adaptation learning and reused for target inference. Formally, the predictions for the target domain are:

$$\hat{y}^t = M^s_L \circ M^s_{L-1} \circ \cdots \circ M^s_{d+1} \circ \mathcal{M}(x^t),$$

where $\mathcal{M}$ represents the DAM, which is also a stacked ConvNet. Overall, we form a flexible plug-and-play domain adaptation framework. During inference, the DAM directly replaces the early layers of the model trained on the source domain. The images of the target domain are processed and mapped to the deep feature space of the source domain via the DAM. These adapted features are robust to the cross-modality domain shift, and can be mapped to the label space using the high-level layers established on the source domain. In practice, the ConvNet configuration of the DAM is identical to that of the replaced layers $\{M^s_l\}_{l=1}^{d}$. We initialize the DAM with the trained source domain model and fine-tune its parameters in an unsupervised manner with the adversarial loss.
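The plug-and-play routing can be illustrated with a toy composition of layer functions; the four "layers" and the DAM stand-in below are purely hypothetical placeholders for trained network layers.

```python
def compose(layers):
    """Chain a list of layer functions into a single network function."""
    def net(x):
        for f in layers:
            x = f(x)
        return x
    return net

# Hypothetical 4-layer source model; layers 1..d are replaced by the
# DAM for target inputs, while layers d+1..L stay frozen and shared.
source_layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * x]
d = 2                                   # adaptation depth
dam_layers = [lambda x: x * 2 + 1]      # stands in for the trained DAM

segment_source = compose(source_layers)
segment_target = compose(dam_layers + source_layers[d:])  # domain router: swap early layers

print(segment_source(2))  # 9
print(segment_target(2))  # 4
```

The "domain router" of Fig. 2 corresponds to choosing between these two compositions at inference time, depending on the input modality.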

3.3 Learning with Adversarial Loss

We propose to train our domain adaptation framework with an adversarial loss via unsupervised learning. The spirit of adversarial training is rooted in the GAN, where a generator model and a discriminator model form a minimax two-player game. The generator learns to capture the real data distribution, and the discriminator estimates the probability that a sample comes from the real training data rather than the generated data. These two models are alternately optimized and compete with each other, until the generator can produce real-like samples that the discriminator fails to differentiate. For our problem, we train the DAM so that the ConvNet can generate source-like feature maps from target input. Hence, the ConvNet is equivalent to a generator from the GAN's perspective.

Considering that accurate segmentations come from high-level semantic features, which in turn rely on the fine patterns extracted by early layers, we propose to align multiple levels of feature maps between the source and target domains (see Fig. 2). In practice, we select several layers from the frozen higher layers, and refer to their corresponding feature maps as the set $F_s(x^s) = \{M^s_l(x^s)\}_{l \in \mathcal{H}}$, with $\mathcal{H}$ being the set of selected layer indices. Similarly, we denote the selected feature maps of the DAM by $F_t(x^t) = \{\mathcal{M}_l(x^t)\}_{l \in \mathcal{H}'}$, with $\mathcal{H}'$ being the selected layer set of the DAM. In this way, $F_t(X^t)$ is the feature space of the target domain and $F_s(X^s)$ is its counterpart for the source domain. Given the distribution $\mathbb{P}_s$ of $F_s(X^s)$ and the distribution $\mathbb{P}_t$ of $F_t(X^t)$, the distance between these two domain distributions which needs to be minimized is represented as $W(\mathbb{P}_s, \mathbb{P}_t)$. For stabilized training, we employ the Wasserstein distance [?] between the two distributions:

$$W(\mathbb{P}_s, \mathbb{P}_t) = \inf_{\gamma \in \Pi(\mathbb{P}_s, \mathbb{P}_t)} \mathbb{E}_{(f_s, f_t) \sim \gamma}\big[ \| f_s - f_t \| \big],$$

where $\Pi(\mathbb{P}_s, \mathbb{P}_t)$ represents the set of all joint distributions $\gamma(f_s, f_t)$ whose marginals are respectively $\mathbb{P}_s$ and $\mathbb{P}_t$.
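The infimum over joint distributions is intractable in general, which is why a critic network is used to estimate it. In the special case of two equal-size 1-D empirical samples, however, the optimal coupling simply matches sorted values, which gives a quick intuition sketch:

```python
def wasserstein_1d(src, tgt):
    """First-order Wasserstein distance between two equal-size 1-D
    empirical distributions: the optimal coupling matches sorted samples."""
    assert len(src) == len(tgt)
    return sum(abs(a - b) for a, b in zip(sorted(src), sorted(tgt))) / len(src)

print(wasserstein_1d([0, 1, 2], [3, 4, 5]))  # 3.0 -- a pure shift by 3
```

Unlike divergences that saturate when supports are disjoint, this distance still reports "how far" one distribution must be moved, which is what stabilizes the adversarial training.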

In adversarial learning, the DAM is pitted against an adversary: a discriminative model that implicitly estimates $W(\mathbb{P}_s, \mathbb{P}_t)$. We refer to our discriminator as the domain critic module and denote it by $D$. Specifically, our constructed DCM consists of several stacked residual blocks, as illustrated in Fig. 2. In each block, the number of feature maps is doubled until it reaches 512, while their sizes are decreased. We concatenate the multiple levels of feature maps as input to the DCM. This discriminator differentiates the complicated feature space between the source and target domains. In this way, our domain adaptation approach not only removes source-specific patterns in the beginning but also disallows their recovery at higher layers [?]. In unsupervised learning, we jointly optimize the generator (DAM) and the discriminator (DCM) via the adversarial loss. Specifically, with $X^t$ being the target set, the loss for learning the DAM is:

$$\mathcal{L}_{\mathcal{M}} = -\,\mathbb{E}_{x^t \sim X^t}\big[ D(F_t(x^t)) \big].$$

Furthermore, with $X^s$ representing the set of source images, the DCM is optimized via:

$$\mathcal{L}_{D} = \mathbb{E}_{x^t \sim X^t}\big[ D(F_t(x^t)) \big] - \mathbb{E}_{x^s \sim X^s}\big[ D(F_s(x^s)) \big], \quad \text{s.t. } \|D\|_{L} \le K,$$

where $K$ is a constant that applies a Lipschitz constraint to $D$.

During the alternating updates of $\mathcal{M}$ and $D$, the DCM outputs an increasingly precise estimation of $W(\mathbb{P}_s, \mathbb{P}_t)$ between the feature-space distributions of the two domains, and the updated DAM becomes more effective at generating source-like feature maps for cross-modality domain adaptation.

3.4 Training Strategies

In our setting, the source domain is cardiac MRI images and the target domain is CT data. All volumetric MRI and CT images were re-sampled to a uniform voxel spacing and cropped to a fixed size centering at the heart region. In preprocessing, we conducted intensity standardization for each domain, respectively. Augmentations of rotation, zooming and affine transformations were employed to combat over-fitting. To leverage the spatial information in volumetric data, we sampled three consecutive slices along the coronal plane and input them to three channels; the label of the intermediate slice is used as the ground truth when training the 2D networks.

We first trained the segmenter on the source domain data in a supervised manner. The Adam optimizer was employed with a batch size of 5 and a stepped learning-rate decay of 0.95 every 1500 iterations. After that, we alternately optimized the DAM and DCM with the adversarial loss for unsupervised domain adaptation. Following the heuristic rules of training WGAN [?], we updated the DAM once for every 20 updates of the DCM. In adversarial learning, we utilized the RMSProp optimizer with a stepped learning-rate decay of 0.98 every 100 joint updates, and weight clipping of 0.03 for the discriminator.
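The alternating optimization can be sketched as the schedule below, assuming (as we read the WGAN heuristic) that the critic (DCM) is updated 20 times per DAM update and that the critic weights are clipped to [-0.03, 0.03]; the function names are illustrative, not the actual training code.

```python
def clip_weights(weights, c=0.03):
    """Weight clipping used to enforce the Lipschitz constraint on the critic."""
    return [max(-c, min(c, w)) for w in weights]

def adversarial_schedule(n_joint_updates, n_critic=20):
    """Sequence of updates: the critic (DCM) is trained n_critic times
    for every single generator (DAM) update, as in WGAN training."""
    steps = []
    for _ in range(n_joint_updates):
        steps.extend(["DCM"] * n_critic)  # critic steps estimate W(P_s, P_t)
        steps.append("DAM")               # one generator step against the critic
    return steps

steps = adversarial_schedule(2)
# 2 joint rounds -> 40 critic updates and 2 generator updates
```

Clipping would be applied to all DCM parameters after each of its updates; the ratio keeps the critic's Wasserstein estimate accurate before each DAM step.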

4 Experiments

4.1 Dataset and Evaluation Metrics

Figure 3: Results of different methods for CT image segmentation. Each row presents one typical example; from left to right: (a) raw CT slices, (b) ground truth labels, (c) supervised transfer learning, (d) ConvNets trained from scratch, (e) directly applying the MRI segmenter on CT data, (f) our unsupervised cross-modality domain adaptation results. The structures of AA, LA-blood, LV-blood and LV-myo are indicated by yellow, red, green and blue colors, respectively (best viewed in color).

We validated our proposed unsupervised cross-modality domain adaptation method for biomedical image segmentation on the public dataset of the MICCAI 2017 Multi-Modality Whole Heart Segmentation challenge [?]. This dataset consists of 20 unpaired MRI and 20 CT volumes from 40 patients; the MRI and CT data were acquired at different clinical centers. The cardiac structures were manually annotated by radiologists for both the MRI and CT images. Our ConvNet segmenter aims to automatically segment four cardiac structures: the ascending aorta (AA), the left atrium blood cavity (LA-blood), the left ventricle blood cavity (LV-blood), and the myocardium of the left ventricle (LV-myo). For each modality, we randomly split the dataset into training (16 subjects) and testing (4 subjects) sets, which were fixed throughout all experiments.

For the evaluation metrics, we followed the common practice of quantitatively evaluating segmentation performance for automatic methods [?]. The Dice coefficient was employed to assess the agreement between the predicted segmentation and the ground truth of the cardiac structures. We also calculated the average surface distance (ASD) to measure the segmentation performance from the perspective of the boundary. A higher Dice and a lower ASD indicate better segmentation performance. Both metrics are presented in the format of mean±std, which shows the average performance as well as the cross-subject variation of the results.
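For intuition, simplified pure-Python versions of the two metrics on 2-D point sets are sketched below; the actual evaluation operates on 3-D volumes with dedicated tooling, so these are toy definitions rather than the exact evaluation code.

```python
import math

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks given as sets of pixel coordinates."""
    pred, gt = set(pred), set(gt)
    return 2 * len(pred & gt) / (len(pred) + len(gt))

def average_surface_distance(surf_a, surf_b):
    """Symmetric average surface distance between two surface point sets,
    using Euclidean nearest-neighbour distances in both directions."""
    def one_way(a, b):
        return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    return (one_way(surf_a, surf_b) + one_way(surf_b, surf_a)) / 2
```

Dice rewards volumetric overlap, while ASD penalizes boundary deviations, which is why a method can score a reasonable Dice yet a high ASD when its boundaries are noisy.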

Methods             AA                     LA-blood               LV-blood               LV-myo
                    Dice / ASD             Dice / ASD             Dice / ASD             Dice / ASD

DL-MR [?]           76.6±13.8 / -          81.1±13.8 / -          87.7±7.7 / -           75.2±12.1 / -
DL-CT [?]           91.1±18.4 / -          92.4±3.6 / -           92.4±3.3 / -           87.2±3.9 / -
Seg-MRI             75.9±5.5 / 12.9±8.4    78.8±6.8 / 16.0±8.1    90.3±1.3 / 2.0±0.2     75.5±3.6 / 2.6±1.4
Seg-CT              81.3±24.4 / 2.1±1.1    89.1±3.0 / 10.6±6.9    88.8±3.7 / 21.3±8.8    73.3±5.9 / 42.8±16.4
Seg-CT-STL          78.3±2.8 / 2.9±2.0     89.7±3.6 / 7.6±6.7     91.6±2.2 / 4.9±3.2     85.2±3.3 / 5.9±3.8
Seg-CT-noDA         19.7±2.0 / 31.2±17.5   25.7±17.2 / 8.7±3.3    0.8±1.3 / N/A          11.1±14.4 / 31.0±37.6
Seg-CT-UDA (d=13)   63.9±15.4 / 13.9±5.6   54.7±13.2 / 16.6±6.8   35.1±26.1 / 18.4±5.1   35.4±18.4 / 14.2±5.3
Seg-CT-UDA (d=21)   74.8±6.2 / 27.5±7.6    51.1±11.2 / 20.1±4.5   57.2±12.4 / 29.5±11.7  47.8±5.8 / 31.2±10.1
Seg-CT-UDA (d=31)   71.9±0.5 / 25.8±12.5   55.2±22.9 / 15.2±8.2   39.2±21.8 / 21.2±3.9   34.3±19.1 / 24.7±10.5

Table 1: Quantitative comparison of segmentation performance on cardiac structures between different methods, reported as Dice / ASD (mean±std). (Note: "-" means that the result was not reported by that method.)

4.2 Experimental Settings

In our experiments, the source domain is the MRI images and the target domain is the CT dataset. We demonstrated the effectiveness of the proposed unsupervised cross-modality domain adaptation method with extensive experiments. We designed several experimental settings: 1) training and testing the ConvNet segmenter on the source domain (referred to as Seg-MRI); 2) training the segmenter from scratch on annotated target domain data (Seg-CT); 3) fine-tuning the source domain segmenter with annotated target domain data, i.e., supervised transfer learning (Seg-CT-STL); 4) directly testing the source domain segmenter on target domain data (Seg-CT-noDA); 5) our proposed unsupervised domain adaptation method (Seg-CT-UDA). We also compared with a previous state-of-the-art heart segmentation method using ConvNets [?]. Last but not least, we conducted ablation studies to observe how the adaptation depth affects the performance.

4.3 Results of Unsupervised Domain Adaptation

The results of different methods are listed in Table 1, which demonstrates that the proposed unsupervised domain adaptation method is effective by mapping the feature space of target CT domain to that of source MRI domain. Qualitative results of the segmentations for CT images are presented in Fig. 3.

We first evaluated the performance of the segmenter for Seg-MRI, which is the source domain model and serves as the basis for the subsequent domain adaptation procedures. Compared with [?], our ConvNet segmenter reached promising performance, with higher Dice on LV-blood and LV-myo, as well as comparable Dice on AA and LA-blood. With this standard segmenter network architecture, we conducted the following experiments to validate the effectiveness of our unsupervised domain adaptation framework.

To experimentally explore the potential upper-bounds of the segmentation accuracy of the cardiac structures from CT data, we implemented two different settings, i.e., the Seg-CT and Seg-CT-STL. Generally, the segmenter fine-tuned from Seg-MRI achieved higher Dice and lower ASD than the model trained from scratch, proving the effectiveness of supervised transfer learning for adapting an established network to a related target domain using additional annotations. Meanwhile, these results are comparable to [?] on most of the four cardiac structures.

To observe the severe domain shift problem inherent in cross-modality biomedical images, we directly applied the segmenter trained on the MRI domain to the CT data without any domain adaptation procedure. Unsurprisingly, the network of Seg-MRI completely failed on CT images, with an average Dice of merely 14.3% across the structures. As shown in Table 1, Seg-CT-noDA obtained a Dice of only 0.8% for the LV-blood. The model did not even output any correct predictions on the LV-blood structure for two of the four testing subjects (please refer to (e) in Fig. 3). This demonstrates that although cardiac MRI and CT images share similar high-level representations and an identical label space, the significant difference in their low-level characteristics makes it extremely difficult for the MRI segmenter to extract effective features from CT.

With our unsupervised domain adaptation method, we observe a great improvement of the segmentation performance on the target CT data compared with Seg-CT-noDA. More specifically, our Seg-CT-UDA (d=21) model increased the average Dice across the four cardiac structures by 43.4%. As presented in Fig. 3, the predicted segmentation masks from Seg-CT-UDA can successfully localize the cardiac structures and further capture their anatomical shapes. The performance on segmenting the AA is even close to that of Seg-CT-STL. This reflects that the distinct geometric pattern and the clear boundary of the AA have been successfully captured by the DCM, which in turn supervises the DAM to generate activation patterns similar to the source feature space via adversarial learning. For the other three cardiac structures (i.e., LA-blood, LV-blood and LV-myo), the Seg-CT-UDA performance is not as high as that on the AA. The reason is that these anatomical structures are more challenging, coming with either relatively irregular geometries or limited intensity contrast with surrounding tissues. The main deficiencies were unclear boundaries between neighboring structures and noisy predictions on relatively homogeneous tissues away from the region of interest. These noisy outputs corrupt the boundaries and are responsible for the high ASDs of Seg-CT-UDA. Nevertheless, by mapping the feature space of the target domain to that of the source domain, we obtained greatly improved and promising segmentations over Seg-CT-noDA with zero data annotation effort.

4.4 Ablation Study on Adaptation Depth

The adaptation depth d is an important hyper-parameter in our framework, which determines how many layers are replaced during the plug-and-play domain adaptation procedure. Intuitively, a shallower DAM (i.e., smaller d) might be less capable of learning an effective feature mapping across domains than a deeper DAM (i.e., larger d), due to the insufficient capacity of parameters in a shallow DAM as well as the huge domain shift in feature distributions. Conversely, as the adaptation depth d increases, the DAM becomes more powerful for feature mapping, but training a deeper DAM solely with adversarial gradients becomes more challenging. We therefore conducted ablation studies to demonstrate how the performance is affected by d.

To validate the above intuitions and search for an optimal d, we repeated the domain adaptation experiment from MRI to CT while varying d and keeping all other settings the same. Viewing the examples in Fig. 4, the Seg-CT-UDA (d=21) model obtained a segmentation mask for the ascending aorta that approaches the ground truth. The other two models also produced inspiring results, capturing the geometry and boundary characteristics of the AA and validating the effectiveness of our unsupervised domain adaptation method. From Table 1, we observe that the DAM with a middle-level adaptation depth (d=21) achieved the highest Dice on three of the four cardiac structures, exceeding the other two models by a significant margin. For the LA-blood, the three adaptation depths reached comparable segmentation Dice and ASD, with the d=31 model being the best. Notably, the Seg-CT-UDA (d=31) model overall demonstrated superiority over the model with adaptation depth d=13. This shows that making more layers learnable helps to improve domain adaptation performance on cross-modality segmentation.

Figure 4: Comparison of results using Seg-CT-UDA with different adaptation depths (colors are the same as in Fig. 3).

5 Conclusion

In this paper, we have proposed, to the best of our knowledge, the first unsupervised domain adaptation framework for generalizing ConvNets across different modalities of biomedical images. The flexible plug-and-play framework is obtained by optimizing a DAM and a DCM via adversarial learning. Extensive experiments with promising results on cardiac segmentation have validated the effectiveness of our approach.


Acknowledgments

The work described in this paper was supported by grants from the Hong Kong Research Grants Council under the General Research Fund Scheme (Project nos. 14202514 and 14203115).


References

  • [Arjovsky et al., 2017] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
  • [Bousmalis et al., 2017] Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, 2017.
  • [Dou et al., 2017] Qi Dou, Lequan Yu, Hao Chen, Yueming Jin, Xin Yang, Jing Qin, and Pheng-Ann Heng. 3d deeply supervised network for automated segmentation of volumetric medical images. Medical image analysis, 41:40–54, 2017.
  • [Esteva et al., 2017] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118, 2017.
  • [Ganin et al., 2016] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
  • [Ghafoorian et al., 2017] Mohsen Ghafoorian, Alireza Mehrtash, Tina Kapur, Nico Karssemeijer, Elena Marchiori, Mehran Pesteie, Charles RG Guttmann, Frank-Erik de Leeuw, Clare M Tempany, Bram van Ginneken, et al. Transfer learning for domain adaptation in mri: Application in brain lesion segmentation. In MICCAI, pages 516–524, 2017.
  • [Goodfellow et al., 2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
  • [Gretton et al., 2009] Arthur Gretton, Alexander J Smola, Jiayuan Huang, Marcel Schmittfull, Karsten M Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. 2009.
  • [He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
  • [Kamnitsas et al., 2017] Konstantinos Kamnitsas, Christian Baumgartner, Christian Ledig, Virginia Newcombe, Joanna Simpson, Andrew Kane, David Menon, Aditya Nori, Antonio Criminisi, Daniel Rueckert, et al. Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In International Conference on Information Processing in Medical Imaging, pages 597–609. Springer, 2017.
  • [Milletari et al., 2016] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 3D Vision (3DV), 2016 Fourth International Conference on, pages 565–571. IEEE, 2016.
  • [Pan and Yang, 2010] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359, 2010.
  • [Patel et al., 2015] Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE signal processing magazine, 32(3):53–69, 2015.
  • [Payer et al., 2017] Christian Payer, Darko Štern, Horst Bischof, and Martin Urschler. Multi-label whole heart segmentation using cnns and anatomical label configurations. pages 190–198, 2017.
  • [Radford et al., 2015] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • [Shimodaira, 2000] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
  • [Sun and Saenko, 2016] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In Proceedings of the ECCV Workshops, pages 443–450. Springer, 2016.
  • [Torralba and Efros, 2011] Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR, pages 1521–1528, 2011.
  • [Tzeng et al., 2014] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
  • [Tzeng et al., 2017] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In CVPR, pages 2962–2971, 2017.
  • [Wang et al., 2017] Yifei Wang, Wen Li, Dengxin Dai, and Luc Van Gool. Deep domain adaptation by geodesic distance minimization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017.
  • [Yosinski et al., 2014] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, pages 3320–3328, 2014.
  • [Yu et al., 2017] Fisher Yu, Vladlen Koltun, and Thomas Funkhouser. Dilated residual networks. In CVPR, pages 636–644, 2017.
  • [Zeiler and Fergus, 2014] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833. Springer, 2014.
  • [Zhuang and Shen, 2016] Xiahai Zhuang and Juan Shen. Multi-scale patch and multi-modality atlases for whole heart segmentation of mri. Medical image analysis, 31:77–87, 2016.