Adversarial Normalization for Multi Domain Image Segmentation

Abstract

Image normalization is a critical step in medical imaging. This step is often done on a per-dataset basis, preventing current segmentation algorithms from fully exploiting jointly normalized information across multiple datasets. To solve this problem, we propose an adversarial normalization approach for image segmentation which learns common normalizing functions across multiple datasets while retaining image realism. The adversarial training provides an optimal normalizer that improves both the segmentation accuracy and the discrimination of unrealistic normalizing functions. Our contribution therefore leverages common imaging information from multiple domains. The optimality of our common normalizer is evaluated by combining brain images from both infants and adults. Results on the challenging iSEG and MRBrainS datasets reveal the potential of our adversarial normalization approach for segmentation, with Dice improvements of up to 59.6% over the baseline.

Pierre-Luc Delisle, Benoit Anctil-Robitaille, Christian Desrosiers, Herve Lombaert
ETS Montreal, Canada

Keywords: Task-driven intensity normalization, brain segmentation.

1 Introduction

In medical imaging applications, datasets with annotated images are rare and often composed of few samples. This causes an accessibility problem for developing supervised learning algorithms such as those based on deep learning. Although these algorithms have helped in automating image segmentation, notably in the medical field [9, 4], they need a massive number of training samples to obtain accurate segmentation masks that generalize well across different sites. One possible approach to alleviate this problem would be to use data acquired from multiple sites to increase the generalization performance of the learning algorithm. However, medical images from different datasets or sites can be acquired using various protocols. This leads to a high variance in image intensities and resolution, increasing the sensitivity of segmentation algorithms to raw images and thus impairing their performance.

Figure 1: Mixed iSEG and MRBrainS inputs (left) and images generated with two pipelined FCNs without constraint on realism using only Dice loss (right). Images generated with only Dice loss preserve the structure required for segmentation but lack realism.
Figure 2: Proposed architecture. A first FCN generator network (G) takes a non-normalized patch and generates a normalized patch. The normalized patch is fed to a second FCN segmentation network (S) for segmentation. A discriminator network (D) applies the realism constraint on the normalized output. The algorithm learns the optimal normalizing function based on the observed differences between input datasets.

Recently, the problems of image normalization and learned pre-processing of medical images have generated growing interest. In [5], it was shown that two consecutive fully-convolutional deep neural networks (FCNs) can normalize an input prior to segmentation. However, the intermediary synthetic images produced by a forward pass on the first fully-convolutional network lack interpretability, as there is no constraint on the realism of the produced images (see Fig. 1). Another limitation of this previous work is the separate processing of 2-D slices of volumetric data, which does not take into account the valuable 3-D context of this data. The study in [11] analyzed multi-site training of deep learning algorithms and compared various traditional normalization techniques, such as histogram matching [12] or Gaussian standardization, across different datasets. However, these standard normalization techniques are not learned for a specific task, likely leading to suboptimal results compared to a task-driven normalization approach. Gradient-reversal-layer-based domain adaptation and layer-wise domain-adversarial networks are used in [3], but this method is limited to two domains.

This paper extends medical image normalization so that the normalized images remain realistic and interpretable by clinicians. Our method also leverages information from multiple datasets by learning a joint normalizing transformation accounting for large image variability. We propose a task-and-data-driven adversarial normalization technique that constrains the normalized image to be realistic and optimal for the image segmentation task. Our approach exploits two fully-convolutional 3-D deep neural networks [2]: the first acts as a normalized-image generator, while the second serves as the segmentation network. Our model also includes a 3-D discriminator network [8] that constrains the generator to produce interpretable images. Standard generative adversarial networks (GANs) [6] aim at classifying generated images as fake or real. Our discriminator rather acts as a domain classifier, distinguishing images between all input domains (i.e., datasets sampled with a specific protocol at a specific location) plus an additional “generated” one. Hence, produced images are both realistic and domain invariant. The parameters of all three networks are learned end-to-end.

Our contributions can be summarized as follows: 1) an adversarially-constrained 3-D pre-processing and segmentation technique using fully-convolutional neural networks which can train on more than one dataset; 2) a learned normalization network for medical images which produces images that are realistic and interpretable by clinicians. The proposed method yields a significant improvement in segmentation performance over a conventional segmentation model trained and tested on two different data distributions that have not been normalized by a learning approach. To the best of our knowledge, this is the first work using purely task-and-data-driven medical image normalization while keeping the intermediary image medically usable.

Exp. #  Method                           Train dataset            Test dataset             Dice (CSF / GM / WM)
1       No adaptation                    iSEG                     iSEG                     0.906 / 0.868 / 0.863
2       No adaptation                    MRBrainS                 MRBrainS                 0.813 / 0.789 / 0.839
3       No adaptation, cross-testing     iSEG                     MRBrainS                 0.401 / 0.354 / 0.519
4       No adaptation, cross-testing     MRBrainS                 iSEG                     0.293 / 0.082 / 0.563
5       Standardized                     iSEG + MRBrainS (std.)   iSEG + MRBrainS (std.)   0.849 / 0.808 / 0.809
6       Without constraint               iSEG + MRBrainS          iSEG + MRBrainS          0.834 / 0.859 / 0.885
7       Adversarially normalized (ours)  iSEG + MRBrainS          iSEG + MRBrainS          0.919 / 0.902 / 0.905
Table 1: Dice score as a function of the model architecture and data. The proposed method yields a significant performance improvement over training and testing on single-domain or standardized inputs.

2 Method

Let $x : \Omega \to \mathbb{R}$ be a 3-D image, where $\Omega$ is the set of voxels, and let $y$ be its segmentation ground truth with pixel labels in $\{1, \dots, C\}$. The training set contains $N$ examples, each composed of an image $x_i$, a manual expert segmentation $y_i$ and an image domain label $d_i \in \{1, \dots, K\}$. As shown in Fig. 2, the proposed model is composed of three networks. The first network $G$ is a fully-convolutional network; a 3-D U-Net architecture, without loss of generality, has been chosen for its simplicity. This network transforms an input image $x$ into a cross-domain normalized image $\widetilde{x} = G(x)$. The second network $S$, which is also a 3-D FCN, receives the normalized image as input and outputs the segmentation map $\hat{y} = S(\widetilde{x})$. The third network $D$ is the discriminator, which receives both raw images and normalized images as input and predicts their domain. Network $D$ learns a $(K{+}1)$-class classification problem, with one class for each raw image domain and a $(K{+}1)$-th class for generated images of any domain. As mentioned before, the discriminator is used to ensure that images produced by $G$ are both realistic and domain invariant.
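As a concrete illustration, the three-network layout can be sketched in PyTorch. The tiny stand-in networks and layer sizes below are hypothetical; the paper uses full 3-D U-Nets [2] for the generator and segmenter and a residual network [8] for the discriminator:

```python
# Minimal sketch of the G -> S pipeline with a (K+1)-way domain discriminator D.
import torch
import torch.nn as nn

K = 2  # number of input domains (here: iSEG and MRBrainS)

def small_fcn(in_ch, out_ch):
    # Hypothetical stand-in for a 3-D fully-convolutional network.
    return nn.Sequential(
        nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, out_ch, 3, padding=1))

G = small_fcn(1, 1)   # normalizer: raw T1 patch -> normalized patch
S = small_fcn(1, 4)   # segmenter: 4 classes (background, CSF, GM, WM)
D = nn.Sequential(    # discriminator: predicts domain, class K = "generated"
    small_fcn(1, 8), nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, K + 1))

x = torch.randn(2, 1, 16, 16, 16)  # batch of raw 3-D patches
x_norm = G(x)                      # cross-domain normalized patches
y_hat = S(x_norm)                  # segmentation logits
domain_logits = D(x_norm)          # (K+1)-way domain prediction
```

The discriminator sees both raw and generated patches during training, which is what pushes `G` toward realistic, domain-invariant outputs.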

The three networks of our model are trained together in an adversarial manner by optimizing the following loss function:

$$\min_{G,S}\,\max_{D}\;\; \mathcal{L}_{seg}(G, S) \;-\; \lambda\,\big[\mathcal{L}_{real}(D) \,+\, \mathcal{L}_{gen}(G, D)\big] \qquad (1)$$

where $\lambda$ balances the segmentation and adversarial objectives.

For training the segmentation network, we use the weighted Dice loss defined as

$$\mathcal{L}_{seg}(G, S) \;=\; 1 \,-\, 2\sum_{c=1}^{C} w_c\, \frac{\sum_{v \in \Omega} p_{v,c}\, y_{v,c} \,+\, \epsilon}{\sum_{v \in \Omega} p_{v,c} \,+\, \sum_{v \in \Omega} y_{v,c} \,+\, \epsilon} \qquad (2)$$

where $p_{v,c}$ is the softmax output of $S$ for voxel $v$ and class $c$, $w_c$ is the weight of class $c$, and $\epsilon$ is a small constant to avoid division by zero. For the discriminator classification loss, we use the standard log loss. Let $q(x)$ be the output class distribution of $D$ following the softmax. For raw (unnormalized) images, the loss is given by

$$\mathcal{L}_{real}(D) \;=\; -\,\frac{1}{N}\sum_{i=1}^{N} \log q_{d_i}(x_i) \qquad (3)$$

On the other hand, in the case of generated (normalized) images, the loss becomes

$$\mathcal{L}_{gen}(G, D) \;=\; -\,\frac{1}{N}\sum_{i=1}^{N} \log q_{K+1}\big(G(x_i)\big) \qquad (4)$$

As in standard adversarial learning methods, we train our model in two alternating steps, first updating the parameters of $G$ and $S$, and then updating the parameters of $D$.
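A minimal sketch of one alternating training iteration, assuming PyTorch modules `G`, `S` and `D` for the generator, segmentation network and discriminator of Fig. 2. The loss forms follow the definitions above, with uniform class weights for simplicity; the balancing weight `lam` and the adversarial target for `G` (the true source domain) are assumed choices, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target_onehot, eps=1e-6):
    # Soft Dice over classes, as in Eq. (2), with uniform class weights.
    p = torch.softmax(logits, dim=1)
    dims = (0, 2, 3, 4)  # sum over batch and spatial voxels, keep classes
    inter = (p * target_onehot).sum(dims)
    denom = p.sum(dims) + target_onehot.sum(dims)
    return 1 - (2 * inter + eps).div(denom + eps).mean()

def train_step(G, S, D, opt_gs, opt_d, x, y_onehot, domain, K, lam=0.1):
    # Step 1: update G and S. The segmenter minimizes the Dice loss, and G
    # additionally tries to make D predict a *real* domain for its output.
    x_norm = G(x)
    seg = dice_loss(S(x_norm), y_onehot)
    adv = F.cross_entropy(D(x_norm), domain)
    opt_gs.zero_grad(); (seg + lam * adv).backward(); opt_gs.step()

    # Step 2: update D on raw images (true domain, Eq. 3) and on generated
    # images labeled with the extra "generated" class K (Eq. 4).
    d_real = F.cross_entropy(D(x), domain)
    fake = torch.full_like(domain, K)
    d_fake = F.cross_entropy(D(G(x).detach()), fake)
    opt_d.zero_grad(); (d_real + d_fake).backward(); opt_d.step()
    return seg.item()
```

Detaching `G(x)` in the discriminator step keeps the two updates independent, mirroring the alternating scheme described above.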

3 Experiments and results

3.1 Data

To evaluate the performance of our method, two databases have been carefully retained for their drastic difference in intensity profile and nature. The first one, iSEG [13], is a set of 10 T1 MRI images of infants. The ground truth is the segmentation of the three main structures of the brain: white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF), all three being critical for detecting abnormalities in brain development. Images are sampled at an isotropic 1.0 mm resolution. The second dataset is MRBrainS [10], which contains 5 adult subjects with the T1 modality. The dataset contains the same ground-truth classes. Images were acquired with a voxel size of 0.958 mm × 0.958 mm × 3.0 mm.

While both datasets have been acquired with a 3 Tesla scanner, iSEG is a set of images acquired on 6-8 month-old infants while MRBrainS is a set of adult images. This implies strong structural differences. Moreover, iSEG is particularly challenging because the images have been acquired during the subjects' isointense phase, in which the white matter and gray matter voxel intensities greatly overlap, leading to a lower tissue contrast. This lower contrast is known to mislead common classifiers.

3.2 Pre-processing and implementation details

Only T1 images were used in our experiments. Since images in the MRBrainS dataset are full head scans, skull stripping was performed using the segmentation map. Resampling to isotropic 1.0 mm has been done to match the iSEG sampling. Overlapping 3-D patches centered on foreground voxels were extracted from the full volumes with a fixed stride, yielding 27,647 patches in total over all training images. Each training took 8 hours using distributed stochastic gradient descent on two NVIDIA RTX 2080 Ti GPUs.
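The foreground-centered patch extraction can be sketched as follows; the patch size (32³) and stride (8) are assumed values for illustration only, since the exact figures are not recoverable from the text:

```python
import numpy as np

def extract_patches(volume, labels, patch=32, stride=8, bg=0):
    """Collect overlapping 3-D patches whose center voxel is foreground."""
    patches = []
    for z in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for x in range(0, volume.shape[2] - patch + 1, stride):
                # Keep only patches centered on a non-background label.
                c = (z + patch // 2, y + patch // 2, x + patch // 2)
                if labels[c] != bg:
                    patches.append(volume[z:z + patch, y:y + patch, x:x + patch])
    return np.stack(patches) if patches else np.empty((0, patch, patch, patch))
```

Centering patches on foreground voxels avoids wasting training samples on empty background regions of the skull-stripped volumes.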

        Input data    Normalized images
JSD     3.509         3.504
Table 2: Jensen-Shannon divergence (JSD) of input and normalized images from the generator. A lower value corresponds to more similar distributions.
Figure 3: Mixed iSEG and MRBrainS inputs (left) and the images generated with adversarial normalization (right). Notice the improved homogeneity of intensities in the normalized images, making their analysis easier.
Figure 4: Histograms of generator outputs (blue) and unnormalized inputs (red) for grey matter (GM), white matter (WM) and CSF. The intensity range of the generated images is more compact, showing a reduced variance in voxel intensities.

3.3 Experiment setup

We split 60% of the patches for training, while the other 40% are split in half for validation and testing. The split follows a stratified shuffle strategy based on the center voxel class. Seven different experimental settings were tested. Baselines are first obtained by training and testing on each dataset separately (Exp. 1-2 in Table 1). Cross-domain testing is then done for both datasets (Exp. 3-4). For comparison purposes, we trained a segmentation network on both datasets after applying Gaussian standardization (Exp. 5). We also trained two pipelined FCN networks on both datasets at the same time (Exp. 6). We then trained with our method on both datasets (Exp. 7). For each experiment involving the generator network, a mean-square error (MSE) loss is used to initialize the weights and produce an output close to the real input; its optimum is reached after three epochs. The segmentation Dice loss and the discriminator's cross-entropy loss are then added at the beginning of the fourth epoch. Each experiment has been trained with a stochastic gradient descent optimizer with a momentum of 0.9 and a weight decay of 0.1. All networks have been initialized using Kaiming initialization [7]. The generator uses its own learning rate, distinct from that of the segmentation and discriminator networks. A learning rate scheduler with a patience of 3 epochs on the validation losses reduces the learning rate of all optimizers by a factor of 10. No data augmentation was applied.
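The optimizer, initialization and scheduler settings reported above can be sketched in PyTorch; the base learning rate of 1e-4 is a placeholder, since the actual values are not given here, and the single convolution stands in for any of the three networks:

```python
import torch
import torch.nn as nn

net = nn.Conv3d(1, 1, 3)             # stand-in for one of the three FCNs
nn.init.kaiming_normal_(net.weight)  # Kaiming initialization [7]

# SGD with momentum 0.9 and weight decay 0.1, as in the experiments above.
opt = torch.optim.SGD(net.parameters(), lr=1e-4, momentum=0.9, weight_decay=0.1)

# Divide the learning rate by 10 when the validation loss plateaus for 3 epochs.
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1, patience=3)
```

At the end of each epoch, `sched.step(val_loss)` would be called with the current validation loss to drive the reduction.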

3.4 Normalization performance

To evaluate the normalization performance, we used the Jensen-Shannon divergence (JSD) between intensity histograms of images in the validation set. JSD measures the mean KL divergence between each distribution (i.e., histogram) and the average of all distributions. Table 2 gives the JSD between input images and between images normalized by the generator. We see a decrease in JSD for normalized images, showing that the intensity profiles of generated images are more similar to each other. The normalization effect of our method can be better appreciated in Fig. 4. This figure shows a narrower distribution that is more centered around a single mode, therefore reducing the intra-class variance and increasing segmentation accuracy. Another benefit of our method is the contrast enhancement it provides to the generated images (Fig. 3). This is mainly brought by our task-driven approach, where minimizing the segmentation loss helps increase the contrast along region boundaries.
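The metric described above can be sketched as follows; `mean_jsd` is a hypothetical helper that computes the mean (base-e) KL divergence between each intensity histogram and the average histogram, so lower values mean more similar profiles:

```python
import numpy as np

def mean_jsd(histograms, eps=1e-12):
    # Normalize each histogram into a probability distribution.
    p = np.asarray(histograms, dtype=float)
    p = p / p.sum(axis=1, keepdims=True)
    m = p.mean(axis=0)  # average distribution over all images
    # KL(p_i || m) for each image, then average over images.
    kl = (p * np.log((p + eps) / (m + eps))).sum(axis=1)
    return kl.mean()
```

Applied to the intensity histograms of raw versus generator-normalized validation images, this is the quantity reported in Table 2.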

3.5 Segmentation performance

Our method relies on the segmentation task while performing online normalization. Since the Dice loss is used, structural elements are kept when generating the intermediate image, while the cross-entropy aims at keeping the global features of the image while reducing the differences across domains. The main advantage of our method is the ability to train on datasets that differ drastically in structures and intensity distributions while still maintaining a good segmentation performance. We effectively perform segmentation of structures on both adult and infant brains at the same time while still achieving a 90.8% mean Dice score across all classes. This demonstrates the relevance of adding adversarial normalization, which increases the mean Dice score over all classes by up to 59.6% compared with training on a single dataset and testing on the other. As seen in Fig. 3, our method is also able to normalize the images while maximizing the segmentation accuracy and keeping the image interpretable and realistic. This is achieved by the discriminator's loss, which is minimized up to the point where the discriminator can no longer tell from which domain (real iSEG input, real MRBrainS input or generated input) the generator's output comes.

4 Conclusion

We proposed a novel task-and-data-driven normalization technique to improve a segmentation task using two datasets. Our method drastically improves performance on the test sets by finding an optimal intermediate representation using the structural elements of the image brought by the Dice loss and the global features brought by the cross-entropy. We believe this work is an important contribution to biomedical imaging, as it unlocks the training of deep learning models with data from multiple sites, thus reducing the strain on data accessibility. This increase in data availability will help build models with higher generalization performance. We also demonstrated that our method can train on different datasets even when one of them is particularly difficult because of its lower image contrast. Future work will aim at using a better model as discriminator, or at comparing other training methods such as Wasserstein GAN [1] or CycleGAN [14] for image normalization. Our method is also theoretically capable of handling more than two datasets, automatically finding a common intermediate representation between all input domains and adapting voxel intensities to maximize the segmentation performance. Further work will also evaluate the architecture on new datasets and the gain this task-driven normalization can provide for segmentation tasks with greatly imbalanced data, such as brain lesion segmentation.

Acknowledgment – This work was supported financially by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de Recherche du Quebec (FQRNT), ETS Montreal, and NVIDIA with the donation of a GPU.

References

  1. M. Arjovsky, S. Chintala and L. Bottou (2017) Wasserstein Generative Adversarial Networks. In International Conference on Machine Learning (ICML).
  2. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox and O. Ronneberger (2016) 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI).
  3. O. Ciga, J. Chen and A. Martel (2019) Multi-layer Domain Adaptation for Deep Convolutional Networks.
  4. J. Dolz, K. Gopinath, J. Yuan, H. Lombaert, C. Desrosiers and I. Ben Ayed (2019) HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation. IEEE Transactions on Medical Imaging.
  5. M. Drozdzal, G. Chartrand, E. Vorontsov, M. Shakeri, L. Di Jorio, A. Tang, A. Romero, Y. Bengio, C. Pal and S. Kadoury (2018) Learning Normalized Inputs for Iterative Estimation in Medical Image Segmentation. Medical Image Analysis.
  6. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative Adversarial Networks. In Advances in Neural Information Processing Systems (NIPS).
  7. K. He, X. Zhang, S. Ren and J. Sun (2015) Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In IEEE International Conference on Computer Vision (ICCV).
  8. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  9. K. Kamnitsas, C. Ledig, V. Newcombe, J. Simpson, A. Kane, D. Menon, D. Rueckert and B. Glocker (2017) Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation. Medical Image Analysis.
  10. A. Mendrik, K. Vincken, H. Kuijf, M. Breeuwer, W. Bouvy, J. de Bresser, A. Alansary, M. de Bruijne, A. Carass, A. El-Baz, A. Jog, R. Katyal, A. Khan, F. Lijn, Q. Mahmood, R. Mukherjee, A. Opbroek, S. Paneri, S. Pereira and M. Viergever (2015) MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans. Computational Intelligence and Neuroscience.
  11. J. Onofrey, D. Casetti-Dinescu, A. Lauritzen, S. Sarkar, R. Venkataraman, R. Fan, G. Sonn, P. Sprenkle, L. Staib and X. Papademetris (2019) Generalizable Multi-Site Training and Testing of Deep Neural Networks Using Image Normalization. In IEEE International Symposium on Biomedical Imaging (ISBI).
  12. D. Shapira, S. Avidan and Y. Hel-Or (2013) Multiple Histogram Matching. In IEEE International Conference on Image Processing (ICIP).
  13. L. Wang, D. Nie, G. Li, É. Puybareau, J. Dolz, Q. Zhang, F. Wang, J. Xia, Z. Wu, J. Chen, K. Thung, T. D. Bui, J. Shin, G. Zeng, G. Zheng, V. S. Fonov, A. Doyle, Y. Xu, P. Moeskops, J. P. W. Pluim, C. Desrosiers, I. B. Ayed, G. Sanroma, O. M. Benkarim, A. Casamitjana, V. Vilaplana, W. Lin, G. Li and D. Shen (2019) Benchmark on Automatic Six-Month-Old Infant Brain Segmentation Algorithms: The iSeg-2017 Challenge. IEEE Transactions on Medical Imaging.
  14. J. Zhu, T. Park, P. Isola and A. Efros (2017) Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In IEEE International Conference on Computer Vision (ICCV).