Revisiting CycleGAN for semi-supervised segmentation


Arnab Kumar Mondal
IIT Kharagpur
sanu.arnab@gmail.com
   Aniket Agarwal
IIT Roorkee
aagarwal@ma.iitr.ac.in
   Jose Dolz
ETS Montreal
jose.dolz@etsmtl.ca
   Christian Desrosiers
ETS Montreal
christian.desrosiers@etsmtl.ca
Abstract

In this work, we study the problem of training deep networks for semantic image segmentation using only a fraction of annotated images, which may significantly reduce human annotation efforts. In particular, we propose a strategy that exploits the unpaired image style transfer capabilities of CycleGAN for semi-supervised segmentation. Unlike recent works using adversarial learning for semi-supervised segmentation, we enforce cycle consistency to learn a bidirectional mapping between unpaired images and segmentation masks. This adds an unsupervised regularization effect that boosts the segmentation performance when annotated data is limited. Experiments on three different public segmentation benchmarks (PASCAL VOC 2012, Cityscapes and ACDC) demonstrate the effectiveness of the proposed method. The proposed model achieves a 2-4% improvement over the baseline and outperforms recent approaches for this task, particularly in the low labeled-data regime.


1 Introduction

Deep learning methods have recently emerged as an efficient solution for semantic image segmentation, achieving outstanding performance in a wide range of applications such as natural scene analysis, autonomous driving and medical imaging. Despite their success, a major limitation of these methods is the need for large training datasets of pixel-level annotated images. Acquiring such labeled images is a time-consuming process that may require user expertise in various scenarios. This impedes the applicability of deep models to applications where labeled images are scarce.

Semi-supervised learning (SSL) has been proposed to overcome the shortage of labeled data. In this scenario, we assume that a large set of unlabeled images is available during training, in addition to a small set of images with strong annotations. Consider an SSL segmentation setting with two distinct subsets: $\mathcal{D}_L$, which contains labeled images and their corresponding ground-truth masks, and $\mathcal{D}_U$, which contains unlabeled images (typically $|\mathcal{D}_U| \gg |\mathcal{D}_L|$). In this setting, the objective is often formulated as maximizing a log-likelihood with respect to the learning parameters of a deep network, through the supervision provided by the labeled set $\mathcal{D}_L$. On the other hand, unsupervised images in $\mathcal{D}_U$ can be leveraged in different ways, typically introducing a regularization effect in deep models and therefore improving their generalization capabilities.

Generative adversarial networks (GANs) [8] have been shown to be an efficient solution for unsupervised domain adaptation [10, 9, 28, 29], a problem related to semi-supervised learning. GAN-based methods for domain adaptation use adversarial learning to match the distributions of source and target data, commonly in input or feature space. Recently, the CycleGAN model [34] has become a popular choice for transferring image style between domains, as it eliminates the need for corresponding image pairs during training [12]. This model finds a mapping between source and target images which preserves key attributes between the input and the transformed image, using a cycle-consistency loss. While CycleGAN has been widely employed to learn a mapping between different domains, it has not yet been investigated in more traditional semi-supervised scenarios where there is no domain shift between labeled and unlabeled data.

In this work, we leverage the unpaired domain adaptation ability of CycleGAN to learn a bidirectional mapping from unlabeled real images to available ground truth masks. This mapping, learned in conjunction with the standard supervised mapping from labeled images to their corresponding labels, acts as an unsupervised regularization loss which helps train the network when labeled data is limited. The proposed method contrasts with recent work on domain adaptation for segmentation [9, 13], where the CycleGAN is employed to map images across two domains. It also differs significantly from recent work using GAN-generated images for semi-supervised segmentation [27], in which cycle consistency is not enforced. The main contributions of this paper can be summarized as follows:

  1. To our knowledge, this is the first semi-supervised segmentation method using CycleGAN to learn a cycle-consistent mapping between unlabeled real images and ground truth masks. The proposed technique acts as an unsupervised regularization prior which improves segmentation performance when labeled data is limited.

  2. We validate our approach on three challenging segmentation tasks from different applications (i.e., natural scenes, autonomous driving and medical imaging), and show that our method is dataset-independent and effective for a wide range of scenarios.

  3. Additionally, we present an ablation study which analyzes the effect of various components of the proposed unsupervised loss and demonstrates the usefulness of these components for improving performance. We believe this analysis is important for future investigations of CycleGANs applied to semi-supervised segmentation.

The rest of the paper is organized as follows. In Section 2, we give a brief overview of relevant work on semantic segmentation with a focus on semi-supervised learning and adversarial learning. Section 3 then presents our model which is evaluated on three challenging datasets in Section 4. Finally, we conclude with a summary of our main contributions and results.

Figure 1: Schematic of the proposed model, which contains four networks trained simultaneously.

2 Related work

Supervised methods based on convolutional neural networks (CNNs) are driving progress in semantic segmentation [18, 26, 4]. Despite their success, training these networks requires a large number of densely-annotated images which are expensive to obtain. A solution to address this limitation is weakly-supervised learning, where easier-to-obtain annotations like image-level tags [21, 15, 32], bounding boxes [6, 25] or scribbles [17] are instead used to train segmentation models. However, weakly-supervised methods still require some human interaction, which may be difficult to get in certain scenarios.

Semi-supervision is a special type of weakly-supervised learning where many unlabeled images are also available for training [1, 2, 20, 24, 23, 33]. Instead of relying on weak annotations, semi-supervised learning (SSL) typically uses domain- or task-agnostic properties of the data to regularize learning. Recently, several SSL methods have been proposed for semantic segmentation, for instance based on self-training [1], distillation [33], attention learning [20], manifold embedding [2], co-training [23], and temporal ensembling [24]. Like these methods, the proposed approach can leverage unlabeled images directly, without the need for weak annotations or task-specific priors.

Adversarial learning has also shown great promise for training deep segmentation models with few strongly-annotated images [27, 11, 31]. An interesting approach to include unlabeled images during training is to add a discriminator network to the model, which must determine whether the output of the segmentation network corresponds to a labeled or an unlabeled image [31]. This encourages the segmentation network to have a similar distribution of outputs for images with and without annotations, thereby helping generalization. A potential issue with this approach is that the adversarial network can have a reverse effect, where the output for annotated images becomes increasingly similar to the incorrect predictions obtained for unlabeled images. A related strategy uses the discriminator to predict a confidence map for the segmentation, enforcing this output to be maximal for annotated images [11]. For unlabeled images, areas of high confidence are used to update the segmentation network in a self-teaching manner. The main limitation of this approach is that a confidence threshold must be provided, the value of which can affect performance.

Until now, only a single work has applied Generative Adversarial Networks (GANs) for semi-supervised segmentation [27]. In this previous work, generated images are used for training in addition to both labeled and unlabeled data. The trained segmentation network must predict the correct labels for real images or a special fake label for generated images. For this method to work, fake images should be generated from outside the distribution of real images so that the segmentation network learns a better representation of the manifold (i.e., fake images constitute negative examples). In contrast, our method uses cycle-consistent GANs to better estimate the distribution of real images and their corresponding segmentation masks.

3 Methodology

3.1 CycleGAN for semi-supervised segmentation

The proposed architecture for semi-supervised segmentation, illustrated in Figure 1, is based on the cycle-consistent GAN (CycleGAN) model [34], which has shown outstanding performance for unpaired image-to-image translation. This architecture is composed of four inter-connected networks, two conditional generators and two discriminators, which are trained simultaneously. In the original CycleGAN model, the generators are employed to learn a bidirectional mapping from one image domain to the other, while the discriminators try to determine whether an image from the corresponding domain is real or generated. By fooling the discriminators through adversarial learning, the model thus learns to generate images from the true distribution without requiring paired images. A cycle-consistency loss is also added to ensure that the generators are consistent, i.e. that we recover the same image when going through both generators sequentially.

In our semi-supervised segmentation model, the CycleGAN is instead used to map images to their corresponding segmentation masks and vice versa. The first generator ($G_{IS}$), corresponding to the segmentation network that we want to obtain, learns a mapping from an image to its segmentation labels. The first discriminator ($D_S$) tries to differentiate these generated labels from real segmentation masks. Note that the combination of $G_{IS}$ and $D_S$ is similar to the semi-supervised segmentation approach presented in [31]. Conversely, the second generator ($G_{SI}$) learns to map a segmentation mask to its image. In our semi-supervised segmentation setting, this generator is only used to improve training. Likewise, the second discriminator ($D_I$) receives an image as input and predicts whether this image is real or generated. To enforce cycle consistency, the generators are trained so that feeding the labels generated by $G_{IS}$ for an image into $G_{SI}$ recovers that same image, and passing the image generated by $G_{SI}$ for a segmentation mask back through $G_{IS}$ recovers that same mask. Figure 2 shows examples of images, ground truth labels, generated images and generated labels obtained for the three datasets used in our experiments.

Figure 2: Examples of images, ground truth labels, generated images and generated labels (from left to right) obtained for three benchmark datasets: PASCAL VOC 2012 (top row), Cityscapes (middle row), and ACDC (bottom row).
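
To make the two mappings concrete, the following PyTorch-style sketch shows one forward pass through both cycles. The module names ($G_{IS}$, $G_{SI}$) follow the notation introduced above; the function and tensor shapes are illustrative assumptions rather than the released implementation.

```python
import torch

# Assumed to be defined elsewhere: G_IS (image -> soft label maps) and G_SI (label maps -> image).
# The discriminators D_S and D_I judge fake_labels vs. real masks and fake_image vs. real images.

def forward_cycles(G_IS, G_SI, x_unlabeled, y_mask):
    """One forward pass through both cycles of the semi-supervised CycleGAN.

    x_unlabeled: batch of unlabeled images,           shape (B, 3, H, W)
    y_mask:      batch of one-hot ground-truth masks, shape (B, C, H, W)
    """
    # Cycle 1: image -> labels -> image (uses unlabeled images only)
    fake_labels = G_IS(x_unlabeled)   # softmax probabilities over the C classes
    rec_image = G_SI(fake_labels)     # reconstruction, compared to x_unlabeled with an L1 loss

    # Cycle 2: labels -> image -> labels (uses ground-truth masks only)
    fake_image = G_SI(y_mask)         # generated image, judged by the image discriminator D_I
    rec_labels = G_IS(fake_image)     # reconstruction, compared to y_mask with cross-entropy

    return fake_labels, rec_image, fake_image, rec_labels
```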

3.2 Loss functions

In this section, we formally define the loss functions employed to train our model in a semi-supervised setting where the data comes from three distributions: labeled images ($x \sim \mathcal{X}_L$), ground-truth masks of labeled images ($y \sim \mathcal{Y}_L$), and unlabeled images ($x_u \sim \mathcal{X}_U$). The first loss function is a standard supervised segmentation loss that requires the segmentation network $G_{IS}$ to predict the ground-truth mask of each labeled image:

$\mathcal{L}_{seg}(G_{IS}) \,=\, \mathbb{E}_{(x,y)\sim(\mathcal{X}_L,\mathcal{Y}_L)}\big[\,\mathrm{CE}\big(y,\,G_{IS}(x)\big)\,\big]$   (1)

where $\mathrm{CE}(\cdot,\cdot)$ is the pixelwise cross-entropy, defined as

$\mathrm{CE}(y, \hat{y}) \,=\, -\sum_{i \in \Omega}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c}$   (2)

In this expression, $y_{i,c}$ and $\hat{y}_{i,c}$ are the ground-truth and predicted probabilities that pixel $i$ has label $c$, and $\Omega$ denotes the set of image pixels. Likewise, we employ a pixelwise L2 norm between a labeled image and the image generated from its corresponding ground truth as the supervised loss to train the image generator $G_{SI}$:

$\mathcal{L}_{gen}(G_{SI}) \,=\, \mathbb{E}_{(x,y)\sim(\mathcal{X}_L,\mathcal{Y}_L)}\big[\,\|x - G_{SI}(y)\|_2^2\,\big]$   (3)
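
As a concrete illustration, the two supervised terms of Eqs. (1)-(3) can be written in a few lines of PyTorch. This is a minimal sketch assuming one-hot ground-truth masks and softmax outputs from $G_{IS}$; the helper name and tensor shapes are ours.

```python
import torch
import torch.nn.functional as F

def supervised_losses(G_IS, G_SI, x_l, y_l_onehot):
    """Supervised losses on a labeled batch.

    x_l:        labeled images,             shape (B, 3, H, W)
    y_l_onehot: one-hot ground-truth masks, shape (B, C, H, W)
    """
    # Eqs. (1)-(2): pixelwise cross-entropy between predicted and true labels
    # (averaged over pixels and batch instead of summed)
    pred_probs = G_IS(x_l)                                   # softmax probabilities, (B, C, H, W)
    loss_seg = -(y_l_onehot * torch.log(pred_probs + 1e-8)).sum(dim=1).mean()

    # Eq. (3): pixelwise L2 loss between a real image and the image generated from its mask
    fake_image = G_SI(y_l_onehot)
    loss_gen = F.mse_loss(fake_image, x_l)

    return loss_seg, loss_gen
```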

To exploit unlabeled images, we incorporate two additional types of losses: adversarial losses and cycle-consistency losses. The adversarial losses are used to train the generators and discriminators in a competing fashion, and help the generators produce realistic images and segmentation masks. To better train the discriminators, we follow the approach presented in [19] and use a least-squares loss instead of the traditional cross-entropy. It was shown in that work that this loss function amounts to minimizing the Pearson $\chi^2$ divergence. Suppose that $D_S(\cdot)$ is the predicted probability that a segmentation map corresponds to a real ground-truth mask. We define the adversarial loss for $D_S$ as

$\mathcal{L}_{adv}^{S}(G_{IS}, D_S) \,=\, \mathbb{E}_{y\sim\mathcal{Y}_L}\big[(D_S(y)-1)^2\big] \,+\, \mathbb{E}_{x_u\sim\mathcal{X}_U}\big[D_S(G_{IS}(x_u))^2\big]$   (4)

Similarly, letting $D_I(\cdot)$ be the predicted probability that an image is real, the adversarial loss for the image discriminator $D_I$ is defined as

$\mathcal{L}_{adv}^{I}(G_{SI}, D_I) \,=\, \mathbb{E}_{x_u\sim\mathcal{X}_U}\big[(D_I(x_u)-1)^2\big] \,+\, \mathbb{E}_{y\sim\mathcal{Y}_L}\big[D_I(G_{SI}(y))^2\big]$   (5)
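
A minimal sketch of the least-squares adversarial terms of Eqs. (4)-(5) is given below, assuming discriminators that output one score per pixel (see Section 3.3); the helper names are ours, and the real/fake targets are 1 and 0 as in [19].

```python
def lsgan_discriminator_loss(D, real, fake):
    """Least-squares loss for a discriminator D: push D(real) towards 1 and D(fake) towards 0."""
    loss_real = ((D(real) - 1.0) ** 2).mean()
    loss_fake = (D(fake.detach()) ** 2).mean()   # detach so no gradients flow back to the generator
    return loss_real + loss_fake

def lsgan_generator_loss(D, fake):
    """Least-squares adversarial loss for a generator: push D(fake) towards 1."""
    return ((D(fake) - 1.0) ** 2).mean()

# Eq. (4): D_S compares real masks with labels generated from unlabeled images
#   loss_D_S = lsgan_discriminator_loss(D_S, y_l_onehot, G_IS(x_u))
# Eq. (5): D_I compares real images with images generated from ground-truth masks
#   loss_D_I = lsgan_discriminator_loss(D_I, x_u, G_SI(y_l_onehot))
```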

The first cycle-consistency loss measures the difference between an unlabeled image and the image regenerated after passing through generators $G_{IS}$ and $G_{SI}$ sequentially. Here, we use the L1 norm since it leads to sharper images than the L2 norm:

$\mathcal{L}_{cyc}^{I}(G_{IS}, G_{SI}) \,=\, \mathbb{E}_{x_u\sim\mathcal{X}_U}\big[\,\|x_u - G_{SI}(G_{IS}(x_u))\|_1\,\big]$   (6)

On the other hand, since segmentation labels are categorical variables, we use cross-entropy to evaluate the difference between a ground-truth segmentation mask and the labels regenerated after passing through generators $G_{SI}$ and $G_{IS}$ in sequence:

$\mathcal{L}_{cyc}^{S}(G_{IS}, G_{SI}) \,=\, \mathbb{E}_{y\sim\mathcal{Y}_L}\big[\,\mathrm{CE}\big(y,\,G_{IS}(G_{SI}(y))\big)\,\big]$   (7)
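
The two cycle-consistency terms of Eqs. (6)-(7) then reduce to an L1 penalty on reconstructed images and a pixelwise cross-entropy on reconstructed label maps; again, the sketch and its variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def cycle_losses(G_IS, G_SI, x_u, y_l_onehot):
    """Cycle-consistency losses of Eqs. (6)-(7)."""
    # Eq. (6): image -> labels -> image, penalized with the L1 norm
    rec_image = G_SI(G_IS(x_u))
    loss_cyc_img = F.l1_loss(rec_image, x_u)

    # Eq. (7): labels -> image -> labels, penalized with pixelwise cross-entropy
    rec_probs = G_IS(G_SI(y_l_onehot))
    loss_cyc_lab = -(y_l_onehot * torch.log(rec_probs + 1e-8)).sum(dim=1).mean()

    return loss_cyc_img, loss_cyc_lab
```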

Finally, the total loss is obtained by combining all six loss terms, and the model is trained by optimizing the resulting adversarial objective:

$\mathcal{L}_{total} \,=\, \mathcal{L}_{seg} \,+\, \mathcal{L}_{gen} \,+\, \lambda_1\,\mathcal{L}_{adv}^{S} \,+\, \lambda_2\,\mathcal{L}_{adv}^{I} \,+\, \lambda_3\,\mathcal{L}_{cyc}^{I} \,+\, \lambda_4\,\mathcal{L}_{cyc}^{S}$   (8)

$G_{IS}^{*},\, G_{SI}^{*} \,=\, \arg\min_{G_{IS},\,G_{SI}}\;\max_{D_S,\,D_I}\; \mathcal{L}_{total}$   (9)

In practice, learning is performed in an alternating fashion, where the parameters of the generators are optimized while considering those of the discriminators as fixed, and vice-versa.
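
Putting the pieces together, one alternating iteration can be sketched as below, reusing the helpers defined above. The weights passed through `lambdas` are placeholders for $\lambda_1, \ldots, \lambda_4$, whose actual values are not reproduced here, and the optimizers `opt_G` / `opt_D` are assumed to hold the generator and discriminator parameters, respectively.

```python
def train_step(G_IS, G_SI, D_S, D_I, opt_G, opt_D,
               x_l, y_l_onehot, x_u, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """One alternating optimization step: generators first, then discriminators."""
    l1, l2, l3, l4 = lambdas

    # ---- Generator update (discriminator parameters are treated as fixed) ----
    opt_G.zero_grad()
    loss_seg, loss_gen = supervised_losses(G_IS, G_SI, x_l, y_l_onehot)
    loss_adv_S = lsgan_generator_loss(D_S, G_IS(x_u))           # fool the label discriminator
    loss_adv_I = lsgan_generator_loss(D_I, G_SI(y_l_onehot))    # fool the image discriminator
    loss_cyc_img, loss_cyc_lab = cycle_losses(G_IS, G_SI, x_u, y_l_onehot)
    loss_G = (loss_seg + loss_gen + l1 * loss_adv_S + l2 * loss_adv_I
              + l3 * loss_cyc_img + l4 * loss_cyc_lab)          # Eq. (8)
    loss_G.backward()
    opt_G.step()

    # ---- Discriminator update (generator outputs are detached inside the helper) ----
    opt_D.zero_grad()
    loss_D = (lsgan_discriminator_loss(D_S, y_l_onehot, G_IS(x_u))
              + lsgan_discriminator_loss(D_I, x_u, G_SI(y_l_onehot)))
    loss_D.backward()
    opt_D.step()

    return loss_G.item(), loss_D.item()
```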

3.3 Implementation details

Following the original implementation of CycleGAN, we adopt the architecture proposed in [14] for our generators, since it has shown impressive results for image style transfer. This network is composed of two stride-2 convolutions, followed by 9 residual blocks and two fractionally-strided convolutions with stride 1/2. Instance normalization [30] was employed and no dropout was used. Furthermore, we used a softmax output function when generating segmentation labels from images, whereas tanh was selected when translating segmentation labels to images, in order to produce continuous values. In pre-processing, each channel of an image is normalized by subtracting its mean value and dividing by the difference between the maximum and minimum values.
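
For reference, a compact PyTorch version of such a generator could look as follows. The layer widths, kernel sizes and the initial/final 7×7 convolutions are assumptions borrowed from the public CycleGAN implementation of [34], not details stated here; only the overall structure (two stride-2 convolutions, nine residual blocks, two fractionally-strided convolutions, instance normalization, softmax or tanh head) follows the description above.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.block(x)

class Generator(nn.Module):
    """Johnson-style generator with a softmax head (image -> labels) or tanh head (labels -> image)."""
    def __init__(self, in_ch, out_ch, base=64, n_blocks=9, segmentation_head=True):
        super().__init__()
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(in_ch, base, 7),
                  nn.InstanceNorm2d(base), nn.ReLU(True)]
        # Two stride-2 downsampling convolutions
        layers += [nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(base * 2), nn.ReLU(True),
                   nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(base * 4), nn.ReLU(True)]
        # Nine residual blocks
        layers += [ResidualBlock(base * 4) for _ in range(n_blocks)]
        # Two fractionally-strided (transposed) convolutions
        layers += [nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2, padding=1, output_padding=1),
                   nn.InstanceNorm2d(base * 2), nn.ReLU(True),
                   nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
                   nn.InstanceNorm2d(base), nn.ReLU(True),
                   nn.ReflectionPad2d(3), nn.Conv2d(base, out_ch, 7)]
        # Output head: softmax over classes for G_IS, tanh for G_SI
        layers += [nn.Softmax(dim=1)] if segmentation_head else [nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```

For instance, `G_IS = Generator(3, num_classes, segmentation_head=True)` and `G_SI = Generator(num_classes, 3, segmentation_head=False)` would instantiate the two generators under these assumptions.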

Unlike the original CycleGAN model, we make use of pixel-wise discriminators [11], where the output has the same size as the input and the adversarial label (i.e., real / generated) is replicated at each output pixel. We found this model to perform better than having a single discriminator output. Each discriminator contains three convolutional blocks, each followed by a Leaky ReLU activation. In addition, batch normalization is used in the discriminators after the second convolutional block.
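
A possible form of such a pixel-wise discriminator is sketched below. The 1×1 convolutions, channel widths and negative slope are our assumptions; what the sketch preserves from the description is the three convolutional blocks with Leaky ReLU activations, batch normalization after the second block, and an output with the same spatial size as the input.

```python
import torch.nn as nn

class PixelDiscriminator(nn.Module):
    """Per-pixel real/generated scores with the same spatial resolution as the input."""
    def __init__(self, in_ch, base=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_ch, base, kernel_size=1),      # block 1
            nn.LeakyReLU(0.2, inplace=True),            # negative slope of 0.2 is an assumption
            nn.Conv2d(base, base * 2, kernel_size=1),   # block 2
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, kernel_size=1))      # block 3: one score per pixel
                                                        # (no sigmoid, since a least-squares loss is used)
    def forward(self, x):
        return self.model(x)
```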

Both generators and discriminators were trained using the Adam optimizer [16] with $\beta_1$ and $\beta_2$ parameters equal to 0.5 and 0.999. The learning rate was initially set to $2\times10^{-4}$, with a linear decay applied after every 100 epochs, over a total of 400 epochs. Furthermore, the batch size was set to 5 in all experiments. The weighting terms $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ in Eq. (8) were set to fixed values in all experiments. The code was implemented in PyTorch [22] and experiments were run on a server equipped with an NVIDIA Titan V GPU (12 GB). The code is made publicly available at https://github.com/arnab39/Semi-supervised-segmentation-cycleGAN.
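
The optimization setup can be reproduced roughly as follows; this is a sketch of one reading of the description above (in particular of the linear learning-rate decay schedule), not the released training script.

```python
import itertools
import torch

# G_IS, G_SI, D_S, D_I are assumed to be instantiated as in the sketches above
opt_G = torch.optim.Adam(itertools.chain(G_IS.parameters(), G_SI.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(itertools.chain(D_S.parameters(), D_I.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))

# One possible reading of "linear decay after every 100 epochs, during 400 epochs"
decay = lambda epoch: 1.0 - 0.25 * (epoch // 100)
sched_G = torch.optim.lr_scheduler.LambdaLR(opt_G, lr_lambda=decay)
sched_D = torch.optim.lr_scheduler.LambdaLR(opt_D, lr_lambda=decay)

for epoch in range(400):
    # for x_l, y_l_onehot, x_u in loader: train_step(...)   # see Section 3.2
    sched_G.step()
    sched_D.step()
```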

4 Experiments

4.1 Datasets

We conduct experiments on three different public semantic segmentation benchmarks: PASCAL VOC 2012 [7], Cityscapes [5], and the MICCAI 2017 Automated Cardiac Diagnosis Challenge (ACDC) [3].
PASCAL VOC 2012: This dataset contains 21 common object classes, including one background class. In our experiments, we employed the augmented set composed of 10,582 images, which we split into training (8,994 images) and validation (1,588 images) subsets. In addition, due to memory limitations, all images were resized to 200×200 pixels before being fed into the network.
Cityscapes: This second dataset contains driving-scene video sequences recorded in 50 different cities, where a total of 20 classes (including background) are manually annotated. In our experiments, we split the 3,475 provided images into training (2,953 images) and validation (522 images) subsets. As in the previous case, all images were resized to a 128×256 pixel resolution.
ACDC: This medical imaging dataset focuses on the segmentation of cardiac structures (the left ventricular endocardium and epicardium, and the right ventricular endocardium) and consists of 100 cine magnetic resonance (MR) exams covering normal cases and subjects with well-defined pathologies: dilated cardiomyopathy, hypertrophic cardiomyopathy, myocardial infarction with altered left ventricular ejection fraction, and abnormal right ventricle. Each exam contains acquisitions at the diastolic and systolic phases. For our experiments, we employed 75 exams for training and the remaining 25 for validation.

It is important to note that, since we aim at isolating the performance of each method and not achieving state-of-the-art results, no data augmentation was performed in any of the datasets for training.

4.2 Evaluation protocol

We use the mean intersection over union (mIoU) metric to evaluate the segmentation results of all models. This metric can be defined as $\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C}\frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FP}_c + \mathrm{FN}_c}$, where $\mathrm{TP}_c$, $\mathrm{FP}_c$, and $\mathrm{FN}_c$ are the true positive, false positive, and false negative pixels for class $c$, respectively, determined over the whole validation set.
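
As a reference point, the metric can be computed over a validation set as in the short sketch below, where TP, FP and FN are accumulated per class over all images before the per-class IoU values are averaged; the function and variable names are illustrative.

```python
import numpy as np

def mean_iou(pred_list, gt_list, num_classes):
    """mIoU over a whole validation set (pred_list and gt_list hold integer label maps)."""
    tp = np.zeros(num_classes)
    fp = np.zeros(num_classes)
    fn = np.zeros(num_classes)
    for pred, gt in zip(pred_list, gt_list):
        for c in range(num_classes):
            tp[c] += np.logical_and(pred == c, gt == c).sum()
            fp[c] += np.logical_and(pred == c, gt != c).sum()
            fn[c] += np.logical_and(pred != c, gt == c).sum()
    iou = tp / np.maximum(tp + fp + fn, 1)   # guard against classes absent from the set
    return iou.mean()
```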

To obtain an upper bound on performance, we train a network in a fully-supervised manner, employing all available training images. We also trained the same model from scratch using only 10%, 20%, 30% or 50% of labeled images, and refer to this baseline as Partial. Our semi-supervised method is trained with the same subsets as the Partial baseline, but also makes use of the unlabeled training images. Last, we compare our method to the approach presented in [11], which has shown state-of-the-art performance for semi-supervised segmentation.

4.3 Results

In the following sections, we report the experimental results of the proposed approach on the three datasets described in Section 4.1.

4.3.1 Comparison on benchmarks

Method            Labeled %   VOC      Cityscapes   ACDC
Full              100         0.5543   0.5551       0.8982
Hung et al. [11]  20          0.2032   0.3490       0.8063
Partial           50          0.4108   0.4856       0.8863
Partial           30          0.3237   0.4502       0.8785
Partial           20          0.2688   0.4112       0.8642
Partial           10          0.2158   0.3636       0.8418
Ours              50          0.4197   0.4997       0.8890
Ours              30          0.3514   0.4654       0.8804
Ours              20          0.2981   0.4321       0.8688
Ours              10          0.2543   0.3923       0.8463
Table 1: Semantic segmentation performance (mIoU) of the tested methods on the three benchmark datasets, for different levels of supervision. Full corresponds to the segmentation network trained with all training samples, and Partial to the same network trained with a subset of labeled images (ranging from 10% to 50%) without considering unlabeled images.

Table 1 reports the results obtained by the tested approaches on the three benchmark datasets. We first observe that, in all cases, the proposed model outperforms the partial supervision baseline when training with a reduced set of labeled images. This difference is particularly significant when pixel-level annotations are scarce (i.e., 10% and 20% of the whole training set), where the proposed model achieves a 2-4% improvement. As the number of labeled images increases, the gap between the baseline and the proposed model decreases, with a gain close to 1% when training with half of the whole training set. Furthermore, we found that the semi-supervised segmentation approach of Hung et al. [11] obtained poor results for all three datasets, with lower accuracy than the partial supervision baseline (Partial). In the original work [11], the authors used a generator pre-trained on ImageNet. In our experiments, to have an unbiased comparison, we tested all methods without such pre-training (i.e., all generators and discriminators were trained from scratch). This could potentially explain the lower results obtained for the method of Hung et al.

Figure 3: Visual comparisons on the PASCAL VOC 2012 dataset employing 20% of labeled images for training (from left to right: image, ground truth, Full, Partial, Hung et al. [11], and Ours).
Figure 4: Visual comparisons on the Cityscapes dataset employing 20% of labeled images for training (each example shows the image, ground truth, Full, Partial, Hung et al. [11], and Ours).
Figure 5: Visual comparisons on the ACDC dataset employing 20% of labeled images for training (from left to right: image, ground truth, Full, Partial, Hung et al. [11], and Ours).

A visual comparison of results is given in Figures 3, 4 and 5. It can be seen that the proposed method predicts a segmentation closer to that of the network trained with all images (Full) than the partial supervision baseline (Partial) and the model of Hung et al. [11]. While predicted region boundaries are sometimes inaccurate, the global semantic information of the image (i.e., actual class labels) appears to be better learned by our model than by the partial supervision baseline. In addition, our model seems to better capture details of thin structures (e.g., legs of persons) compared to both the baseline and the method in [11].

Method                    mIoU (VOC)
Proposed                  0.2981
w/o labels cycle loss     0.2627
w/o image cycle loss      0.2733
w/o labels discr. loss    0.2614
w/o image discr. loss     0.2543
Table 2: Ablation study on the PASCAL VOC 2012 dataset with 20% labeled data.

4.3.2 Ablation study

To further analyze the effect of the different components of the proposed model, we conduct an ablation study in which the model is trained while removing a single loss term of Eq. (8). Specifically, we train the model without the labels cycle-consistency loss $\mathcal{L}_{cyc}^{S}$, the image cycle-consistency loss $\mathcal{L}_{cyc}^{I}$, the labels discriminator loss $\mathcal{L}_{adv}^{S}$, or the image discriminator loss $\mathcal{L}_{adv}^{I}$. Note that these modifications correspond to setting $\lambda_4$, $\lambda_3$, $\lambda_1$ or $\lambda_2$ to 0, respectively. For this experiment, we investigate the performance of the model trained with 20% of labeled data on PASCAL VOC 2012.
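
In terms of the hypothetical `train_step` helper sketched in Section 3.2, each ablation simply amounts to zeroing the corresponding placeholder weight, for example:

```python
# Drop the image cycle-consistency term (lambda_3 = 0) while keeping the other losses
train_step(G_IS, G_SI, D_S, D_I, opt_G, opt_D, x_l, y_l_onehot, x_u,
           lambdas=(1.0, 1.0, 0.0, 1.0))
```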

The results of our ablation study are summarized in Table 2. The proposed model containing all loss terms reaches an mIoU value of 0.2981. Removing the cycle-consistency loss on the generation of segmentation labels reduces this value to 0.2627, whereas removing the cycle-consistency loss on image generation yields 0.2733, suggesting that the cycle-consistency loss on segmentation masks has a stronger impact on the model. Regarding the discriminator losses, we observe the reverse effect: the lowest performance (0.2543) is obtained when ignoring the loss of the image discriminator, which is responsible for differentiating between unlabeled and generated images.

5 Discussion and conclusion

We presented a semi-supervised method for image semantic segmentation, where the key idea is to leverage CycleGAN to learn a cycle-consistent mapping between unlabeled real images and available ground truth masks. Unlike recent work using adversarial learning for semi-supervised segmentation [27, 11, 31], the proposed strategy enforces consistency between unpaired images and segmentation masks, which acts as an unsupervised regularizer. From the reported results, we have shown that this strategy improves segmentation performance, particularly when annotated data is scarce.

Due to the high computational and memory requirements of generating large images, our experiments employed images of reduced size, in particular for the Cityscapes dataset, where the resolution was reduced from 1024×2048 pixels to 128×256. This is in large part responsible for the lower accuracy values obtained in our experiments compared to those reported in the literature. In a future investigation, we will evaluate the performance of our model on full-sized images. Moreover, in this work, we used the same network architecture for both generators ($G_{IS}$ and $G_{SI}$). This choice was made to achieve a better learning equilibrium during training (i.e., to avoid one generator learning much faster than the other). Employing different networks in future experiments could however improve performance.

References

  • [1] W. Bai, O. Oktay, M. Sinclair, H. Suzuki, M. Rajchl, G. Tarroni, B. Glocker, A. King, P. M. Matthews, and D. Rueckert (2017) Semi-supervised learning for network-based cardiac MR image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 253–260.
  • [2] C. Baur, S. Albarqouni, and N. Navab (2017) Semi-supervised deep learning for fully convolutional networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 311–319.
  • [3] O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang, P. Heng, I. Cetin, K. Lekadir, O. Camara, M. A. G. Ballester, et al. (2018) Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging.
  • [4] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2018) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (4), pp. 834–848.
  • [5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3213–3223.
  • [6] J. Dai, K. He, and J. Sun (2015) BoxSup: exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1635–1643.
  • [7] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision 88 (2), pp. 303–338.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680.
  • [9] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2017) CyCADA: cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213.
  • [10] J. Hoffman, D. Wang, F. Yu, and T. Darrell (2016) FCNs in the wild: pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649.
  • [11] W.-C. Hung, Y.-H. Tsai, Y.-T. Liou, Y.-Y. Lin, and M.-H. Yang (2018) Adversarial learning for semi-supervised semantic segmentation. In Proceedings of the British Machine Vision Conference (BMVC).
  • [12] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134.
  • [13] J. Jiang, Y. Hu, N. Tyagi, P. Zhang, A. Rimner, G. S. Mageras, J. O. Deasy, and H. Veeraraghavan (2018) Tumor-aware, adversarial domain adaptation from CT to MRI for lung cancer segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 777–785.
  • [14] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711.
  • [15] H. Kervadec, J. Dolz, M. Tang, E. Granger, Y. Boykov, and I. B. Ayed (2019) Constrained-CNN losses for weakly supervised segmentation. Medical Image Analysis.
  • [16] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [17] D. Lin, J. Dai, J. Jia, K. He, and J. Sun (2016) ScribbleSup: scribble-supervised convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3159–3167.
  • [18] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440.
  • [19] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley (2017) Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802.
  • [20] S. Min and X. Chen (2018) A robust deep attention network to noisy labels in semi-supervised biomedical segmentation. arXiv preprint arXiv:1807.11719.
  • [21] G. Papandreou, L. Chen, K. Murphy, and A. L. Yuille (2015) Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. In ICCV.
  • [22] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.
  • [23] J. Peng, G. Estrada, M. Pedersoli, and C. Desrosiers (2019) Deep co-training for semi-supervised image segmentation. arXiv preprint arXiv:1903.11233.
  • [24] C. S. Perone, P. Ballester, R. C. Barros, and J. Cohen-Adad (2018) Unsupervised domain adaptation for medical imaging segmentation with self-ensembling. arXiv preprint arXiv:1811.06042.
  • [25] M. Rajchl, M. C. Lee, O. Oktay, K. Kamnitsas, J. Passerat-Palmbach, W. Bai, M. Damodaram, M. A. Rutherford, J. V. Hajnal, B. Kainz, et al. (2017) DeepCut: object segmentation from bounding box annotations using convolutional neural networks. IEEE Transactions on Medical Imaging 36 (2), pp. 674–683.
  • [26] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
  • [27] N. Souly, C. Spampinato, and M. Shah (2017) Semi supervised semantic segmentation using generative adversarial network. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5689–5697.
  • [28] Y. Tsai, W. Hung, S. Schulter, K. Sohn, M. Yang, and M. Chandraker (2018) Learning to adapt structured output space for semantic segmentation. In Computer Vision and Pattern Recognition (CVPR).
  • [29] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In Computer Vision and Pattern Recognition (CVPR).
  • [30] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
  • [31] Y. Zhang, L. Yang, J. Chen, M. Fredericksen, D. P. Hughes, and D. Z. Chen (2017) Deep adversarial networks for biomedical image segmentation utilizing unannotated images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 408–416.
  • [32] Y. Zhou, X. He, L. Huang, L. Liu, F. Zhu, S. Cui, and L. Shao (2019) Collaborative learning of semi-supervised segmentation and classification for medical images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2079–2088.
  • [33] Y. Zhou, Y. Wang, P. Tang, W. Shen, E. K. Fishman, and A. L. Yuille (2018) Semi-supervised multi-organ segmentation via multi-planar co-training. arXiv preprint arXiv:1804.02586.
  • [34] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232.
Class   Partial 10%   Partial 20%   Partial 30%   Ours 10%   Ours 20%   Ours 30%
1 0.4236 0.5302 0.5798 0.4369 0.5808 0.5819
2 0.2046 0.2624 0.3328 0.1423 0.3574 0.4197
3 0.1060 0.1542 0.1544 0.0238 0.1511 0.2575
4 0.1633 0.2479 0.2349 0.1504 0.2048 0.3146
5 0.0119 0.0535 0.0744 0.0227 0.0188 0.0623
6 0.5192 0.5310 0.6315 0.6962 0.6867 0.6426
7 0.3263 0.3368 0.4746 0.4451 0.4877 0.5042
8 0.2523 0.4050 0.3841 0.3455 0.3123 0.4075
9 0.1029 0.1226 0.1379 0.2158 0.1429 0.1680
10 0.0123 0.1888 0.1211 0.1412 0.2021 0.1579
11 0.0725 0.1868 0.3487 0.0643 0.2097 0.2359
12 0.2048 0.2496 0.2509 0.2331 0.2302 0.2989
13 0.0781 0.1033 0.1769 0.2474 0.1056 0.2205
14 0.3811 0.4126 0.5343 0.1496 0.4601 0.5107
15 0.5886 0.6528 0.6721 0.5690 0.6992 0.6645
16 0.0889 0.1230 0.0775 0.1996 0.2179 0.1707
17 0.1294 0.1061 0.2078 0.1851 0.0488 0.2258
18 0.1089 0.0654 0.1820 0.2091 0.0350 0.2232
19 0.2990 0.3760 0.4768 0.1879 0.5115 0.4681
20 0.2407 0.2668 0.4210 0.4280 0.2988 0.5331
Table 3: Classwise mean IoU for the 20 valid classes of the PASCAL VOC 2012 dataset, obtained with 10%, 20% or 30% of labeled examples. Partial corresponds to training only the segmentation network of our semi-supervised CycleGAN method with the subset of labeled examples.
Class   Partial 10%   Partial 20%   Partial 30%   Ours 10%   Ours 20%   Ours 30%
1 0.9346 0.9453 0.9523 0.9369 0.9457 0.9534
2 0.5932 0.6518 0.6850 0.6323 0.6775 0.6998
3 0.7807 0.8149 0.8333 0.7977 0.8199 0.8489
4 0.1419 0.1463 0.1820 0.1401 0.1657 0.2008
5 0.0715 0.1201 0.1684 0.1128 0.1469 0.1726
6 0.2441 0.2975 0.3288 0.2732 0.3074 0.3426
7 0.1255 0.2067 0.2481 0.1869 0.2275 0.2644
8 0.1993 0.2799 0.3315 0.2528 0.3091 0.3512
9 0.7824 0.8177 0.8355 0.8012 0.8278 0.8565
10 0.4458 0.4661 0.4923 0.4433 0.4718 0.5137
11 0.8777 0.8953 0.9019 0.8872 0.8953 0.9072
12 0.3766 0.4534 0.5021 0.4387 0.4822 0.5229
13 0.0687 0.1173 0.1786 0.0867 0.1625 0.1961
14 0.7626 0.8094 0.8367 0.7822 0.8259 0.8528
15 0.0877 0.0857 0.1258 0.0825 0.1046 0.1473
16 0.0448 0.0715 0.1735 0.0576 0.1425 0.1965
17 0.0814 0.2454 0.2892 0.2067 0.2781 0.2985
18 0.0739 0.0935 0.1471 0.0701 0.1525 0.1529
19 0.216 0.2948 0.3420 0.2611 0.2601 0.3618
Table 4: Classwise mean IoU for the 19 valid classes of the Cityscapes dataset, obtained with 10%, 20% or 30% of labeled examples. Partial corresponds to training only the segmentation network of our semi-supervised CycleGAN method with the subset of labeled examples.
Figure 6: Visual comparisons on the PASCAL VOC 2012 dataset employing 30% of labeled images for training (from left to right: image, ground truth, Partial, and Ours).
Figure 7: Visual comparisons on the Cityscapes dataset employing 30% of labeled images for training (from left to right: image, ground truth, Partial, and Ours).