Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT

In medical imaging, a general problem is that it is costly and time consuming to collect high quality data from healthy and diseased subjects. Generative adversarial networks (GANs) are a deep learning method that has been developed for synthesizing data. GANs can thereby be used to generate more realistic training data, to improve the classification performance of machine learning algorithms. Another application of GANs is image-to-image translation, e.g. generating magnetic resonance (MR) images from computed tomography (CT) images, which can be used to obtain multimodal datasets from a single modality. Here, we evaluate two unsupervised GAN models (CycleGAN and UNIT) for image-to-image translation of T1- and T2-weighted MR images, by comparing generated synthetic MR images to ground truth images. We also evaluate two supervised models: a modification of CycleGAN and a pure generator model. A small perceptual study was also performed to evaluate how visually realistic the synthesized images are. It is shown that the implemented GAN models can synthesize visually realistic MR images (incorrectly labeled as real by a human). It is also shown that models producing more visually realistic synthetic images do not necessarily have better quantitative error measurements, when compared to ground truth data. Code is available at


Per Welander, Simon Karlsson, Anders Eklund
Division of Medical Informatics, Department of Biomedical Engineering, Linköping University, Linköping, Sweden
Division of Statistics and Machine Learning, Department of Computer and Information Science, Linköping University, Linköping, Sweden
Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden

1 Introduction

Deep learning has been applied in many different research fields to solve complicated problems [1], made possible through parallel computing and big datasets. Acquiring a large annotated medical imaging dataset can be rather challenging for classification problems (e.g. discriminating healthy and diseased subjects), as one training example then corresponds to one subject [2]. Data augmentation, e.g. rotation, cropping and scaling, is normally used to increase the amount of training data, but can only provide limited alternative data. A more advanced data augmentation technique, generative adversarial networks (GANs) [3], uses two competing convolutional neural networks (CNNs): one that generates new samples from noise and one that discriminates samples as real or synthetic. The most obvious application of a GAN in medical imaging is to generate additional realistic training data, to improve classification performance (see e.g. [4] and [5]). Another application is to use GANs for image-to-image translation, e.g. to generate computed tomography (CT) data from magnetic resonance (MR) images or vice versa. This can for example be very useful for multimodal classification of healthy and diseased subjects, where several types of medical images (e.g. CT and MRI) are combined to improve sensitivity (see e.g. [6] and [7]).
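The two-network setup can be illustrated with the standard GAN value function from [3]; the following is a minimal numpy sketch, where the discriminator outputs are toy arrays rather than the CNN outputs used in the paper:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Standard GAN minimax value: E[log D(x)] + E[log(1 - D(G(z)))].
    d_real are discriminator outputs on real samples, d_fake on generated
    samples; both are expected to lie in (0, 1)."""
    eps = 1e-12  # numerical guard against log(0)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# A perfect discriminator (D(x) = 1 on real, D(G(z)) = 0 on fake) drives the
# value towards 0, while a fully fooled discriminator (both outputs 0.5)
# gives 2*log(0.5), the equilibrium value of the original GAN game.
print(gan_value(np.ones(100), np.zeros(100)))        # close to 0
print(gan_value(np.full(100, 0.5), np.full(100, 0.5)))  # close to 2*log(0.5)
```

The discriminator maximizes this value while the generator minimizes it, which is what makes the two networks "competing".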

To use GANs for image-to-image translation in medical imaging is not a new idea. Nie et al. [8] used a GAN to generate CT data from MRI. Yang et al. [9] recently used GANs to improve registration and segmentation of MR images, by generating new data and using multimodal algorithms. Similarly, Dar et al. [10] demonstrated how GANs can be used to generate a T2-weighted MR image from a T1-weighted image. However, since GANs have only recently been proposed for image-to-image translation, and new GAN models are still being developed, it is not clear what the best GAN model is and how GANs should be evaluated and compared. We therefore present a small comparison for image-to-image translation of T1- and T2-weighted MR images. Compared to previous work [9, 10], which used conditional GANs (cGAN) [11], we show results for our own Keras implementations of CycleGAN [12] and UNIT [13] (code is available online).

2 Method

2.1 GAN model selection and implementation

Several different GAN models were investigated in a literature study [12, 13, 14, 15, 16]. Two models stood out among the others in synthesizing realistic images in high resolution: CycleGAN [12] and UNIT [13]. Training of neural networks is commonly supervised, i.e. the training requires a corresponding ground truth for each input sample. In image-to-image translation this means that paired images from both the source and the target domain are needed. To alleviate this constraint, CycleGAN and UNIT can work with unpaired training data.
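What makes unpaired training possible in CycleGAN is the cycle-consistency loss: translating an image to the other domain and back should reproduce the original. A minimal numpy sketch (the toy linear "generators" and the weight lam=10 are illustrative, not the networks or hyperparameters used in the paper):

```python
import numpy as np

def cycle_loss(x, y, G, F, lam=10.0):
    """Cycle-consistency loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, weighted
    by lam. G maps domain X -> Y and F maps Y -> X; no paired (x, y) samples
    are required, only samples from each domain separately."""
    forward = np.mean(np.abs(F(G(x)) - x))   # X -> Y -> X reconstruction
    backward = np.mean(np.abs(G(F(y)) - y))  # Y -> X -> Y reconstruction
    return lam * (forward + backward)

# Toy generators that are exact inverses of each other give zero cyclic loss.
G = lambda a: 2.0 * a + 1.0
F = lambda b: (b - 1.0) / 2.0
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(1.0, 3.0, 5)
print(cycle_loss(x, y, G, F))              # 0.0, since F inverts G exactly
print(cycle_loss(x, y, G, lambda b: b) > 0)  # True, identity breaks the cycle
```

Together with the adversarial loss, this constrains the mapping enough that no paired ground truth is needed.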

Two different variants of the CycleGAN model were implemented: CycleGAN_s and CycleGAN. Including the ground truth image in the training should intuitively generate better results, since the model then has more information about how the generated image should appear. To investigate this, CycleGAN_s was implemented and trained in a supervised manner, i.e. by adding the mean absolute error (MAE) between output and ground truth data. To investigate how the adversarial and cyclic losses contribute to the model, Generators_s was also implemented. It consists of the generators in CycleGAN and is only trained in a supervised manner with an MAE loss using the ground truth images; it does not include the adversarial or the cyclic loss. A Simple baseline model was also implemented for comparison; it consists of only two convolutional layers.
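The generator objective of CycleGAN_s can be sketched as the unsupervised CycleGAN terms plus the supervised MAE term; the loss weights below are illustrative assumptions, not the exact values used in our implementation:

```python
import numpy as np

def generator_loss_supervised(d_fake, cycle_diff, fake, target,
                              w_adv=1.0, w_cyc=10.0, w_sup=1.0):
    """Sketch of the CycleGAN_s generator objective (weights illustrative):
    adversarial term + cyclic term + MAE against the paired ground truth.
    d_fake: discriminator outputs on generated images, in (0, 1).
    cycle_diff: difference between input and its full-cycle reconstruction.
    fake, target: generated image and its paired ground truth image."""
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))   # reward fooling the discriminator
    cyc = np.mean(np.abs(cycle_diff))      # cycle-consistency (MAE)
    sup = np.mean(np.abs(fake - target))   # supervised MAE term
    return w_adv * adv + w_cyc * cyc + w_sup * sup

# With a fully fooled discriminator, a perfect cycle and a perfect output,
# every term vanishes and the loss is (numerically) zero.
fake = np.ones((4, 4))
print(generator_loss_supervised(np.ones(4), np.zeros((4, 4)), fake, fake))
```

Dropping the `adv` and `cyc` terms leaves exactly the objective used for Generators_s and the Simple model.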

2.2 Evaluation

The dataset used in the evaluation is provided by the Human Connectome Project [17, 18]. We used (paired) T1- and T2-weighted images from 1113 subjects (but note that CycleGAN and UNIT can be trained using unpaired data). All the images have been registered to a common template brain, such that they are in the same position and of the same size. We used axial images (only slice 120) from the segmented brains. The data were split into a training set of 900 images in each domain; the remaining 213 images in each domain were used for testing. Using an Nvidia 12 GB Titan X GPU, training times for CycleGAN and UNIT were 419 and 877 seconds/epoch, respectively. The models were on average trained using 180 epochs. The generation of synthetic images took 0.0176 and 0.0478 ms per image for CycleGAN and UNIT, respectively.

The two GAN models were compared using quantitative and qualitative methods. All quantitative results; MAE, mutual information (MI), and peak signal to noise ratio (PSNR); are based on the test dataset. Since the MR images can naturally differ in intensity, each image is normalized before the calculations by subtracting the mean value and dividing by the standard deviation.
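The normalization and the three quantitative measures can be sketched in numpy as follows; the histogram bin count for MI and the choice of peak value for PSNR are assumptions, since several conventions exist:

```python
import numpy as np

def normalize(img):
    """Zero-mean, unit-variance normalization applied before all metrics."""
    return (img - img.mean()) / img.std()

def mae(a, b):
    """Mean absolute error between two (normalized) images."""
    return np.mean(np.abs(a - b))

def psnr(a, b):
    """Peak signal-to-noise ratio in dB; the peak is taken here as the
    ground-truth maximum (one common convention, assumed)."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(a.max() ** 2 / mse)

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information estimate (bin count illustrative)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
real = normalize(rng.random((64, 64)))
synth = normalize(real + 0.1 * rng.standard_normal((64, 64)))
print(mae(real, synth), psnr(real, synth), mutual_information(real, synth))
```

Note that MAE and PSNR compare intensities pixel by pixel, while MI only measures how predictable one image is from the other, which is why the Simple model can score well on MI despite its crude translation.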

Visually evaluating a synthetic image against a real image can be difficult if the differences are small. A solution is to instead visualize the relative error between the real image and the synthetic image. This is done by calculating the absolute difference between the images and dividing it by the real image. These calculations are done on images normalized in the same manner as for the quantitative evaluation, and the error is the relative absolute difference.
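The relative error image can be sketched as below; the small epsilon guarding against division by zero in near-zero background pixels is an assumption of this sketch:

```python
import numpy as np

def relative_abs_error(real, synthetic, eps=1e-8):
    """Relative absolute difference |synthetic - real| / |real|, computed on
    normalized images. eps guards against division by zero (an assumption
    here; the paper does not state how zero-valued pixels are handled)."""
    real_n = (real - real.mean()) / real.std()
    syn_n = (synthetic - synthetic.mean()) / synthetic.std()
    return np.abs(syn_n - real_n) / (np.abs(real_n) + eps)

rng = np.random.default_rng(1)
real = rng.random((8, 8)) + 1.0
synthetic = real + 0.05 * rng.standard_normal((8, 8))
error_map = relative_abs_error(real, synthetic)
print(error_map.shape)  # (8, 8), one relative error value per pixel
```

The resulting map is what is shown with a colorbar in the left and right columns of Figure 2.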

Determining whether the synthetic MR images are visually realistic or not was done via a perceptual study by one of the authors (Anders Eklund). The evaluator received T1- and T2-weighted images, of which 96 were real and 72 were synthetic, and had to determine whether each image was real or synthetic. The real and synthetic images were equally divided between the two domains. Images from Generators_s and Simple were not evaluated, since it is obvious that these images are synthetic due to their high smoothness. Evaluating anatomical images is a complicated task best performed by a radiologist. The results presented in this paper should therefore only be seen as an indicator of the visual quality.

3 Results

Quantitative results and results from the perceptual study are shown in Figure 1. The Generators_s model outperforms the other models in all quantitative measurements. The worst performing model on all quantitative measurements, besides MI on T2 images, is the Simple model (despite being trained in the same supervised manner as Generators_s). The performance of CycleGAN, CycleGAN_s and UNIT is similar. With just a few exceptions, the quantitative performance is better for T1 images. The opposite holds in the perceptual study, where more synthetic T1 images were labeled as synthetic compared to T2. CycleGAN and UNIT also show opposite results in the perceptual study: UNIT performs best for T1 images and CycleGAN performs best for T2 images.

The quantitative superiority of Generators_s does not correspond to the visual realism shown in Figure 2. The supervised training results in an unrealistically smooth appearance in the MR images from Generators_s and the Simple model, where the Simple model also fails in the color mapping of the cerebrospinal fluid. The GAN models trained using an adversarial loss generate more realistic synthetic MR images.

The relative absolute error images in Figure 2 show a greater error for the synthetic T2 images compared to the synthetic T1 images. Synthetic T1 images especially have problems at the edges, whereas errors in the T2 images appear all over the brain.

Figure 1: Quantitative error measurements for the compared GAN models: (a) MAE, (b) PSNR and (c) MI. The results in (d) are the total scores of all GAN models in the perceptual study, and the results in (e) are for each specific model. Labeling T2-weighted images as real or synthetic is harder because T2 images are darker by nature.
Figure 2: Synthetic images from the evaluated GAN models. The real images shown at the top are the inputs on which the synthetic images are based, as indicated by the white arrows. The real T2 image is the input that generated the synthetic T1 images, and vice versa. All images show the same slice from a single subject, which means that the top images are the ground truth for the images below them. The colorbar belongs to the images in the left and right columns, which are calculated as the relative absolute difference between the synthetic and the ground truth image. T1 results are shown in the far left column, and T2 results in the far right column.

4 Discussion

4.1 Quantitative comparison

During training the Generators_s model uses MAE as its only loss function, which creates a model whose goal is to minimize the MAE. The model does this well compared to the other models, as shown in Figure 1. The Simple model, which similarly to Generators_s is only trained using the MAE loss, has the highest error among the models. The Simple model only has two convolutional layers, while Generators_s has, similar to the CycleGAN generators, 24 convolutional layers. This indicates that the architecture of the Simple model is not sufficiently complex for the translation.

As expected, the CycleGAN_s model shows a slight improvement in MAE for T2 images compared to CycleGAN. However, the results are not significantly better than CycleGAN, and the MAE on T1 images is in fact better for CycleGAN. CycleGAN and UNIT show similar results, and it is difficult to argue why one performs slightly better than the other.

Figure 1c shows that T1 images have a higher MI value than T2 images. This is consistent with the MAE results, where a larger error was obtained for the T2 images. An explanation for why the Simple model has a higher MI score than the majority of models for T2 images is that T1 and T2 images from the same subject contain very similar information. Since the Simple model only changes the pixel intensity, the main information is preserved.

4.2 Qualitative comparison

The perceptual study showed that the synthetic images have a visually realistic appearance, since synthetic images were classified as real. T2 images were more difficult to classify than T1 images; the reason may be that the synthetic T2 images had a more realistic appearance, but also the darker nature of T2 images (which, for example, makes it more difficult to determine whether the noise is realistic or not).

The large error on the edges of the synthetic brain images in Figure 2 can be explained by the fact that each brain has a unique shape, and that T2 images are bright for CSF. Areas where there is an intensity change, e.g. between CSF and white matter, seem to be more difficult for the models to learn; this might also be due to differences between subjects.

Since CycleGAN_s uses the MAE loss during training, it penalizes appearances that differ from the ground truth, which pushes it toward the smooth appearance of the images from the Generators_s model. If the aim of the test were instead to evaluate how similar the synthetic images are to the ground truth, the translated images from CycleGAN_s might give better results.

From the results in Figure 2 it is obvious that supervised training using MAE pushes the generators into producing smooth synthetic brain images. Another loss function would probably alter the results, but since it is difficult to create mathematical expressions for assessing how realistic an image is, obtaining visually realistic results using supervised methods is a problematic task. The adversarial loss of the GAN framework lets the discriminator act as this complex expression, which results in visually realistic images from the GAN models.

If the aim were to create images that are as similar to the ground truth images as possible, the quantitative measurements would be more applicable. It is clear that even if a model such as Simple has a relatively good score in the quantitative measurements, it does not necessarily generate visually realistic images. This indicates that determining whether an image is visually realistic cannot be done solely with the metrics used.

4.3 Future work

It has been shown, via a perceptual study, that CycleGAN and UNIT can be used to generate visually realistic MR images. The models performed differently when generating images in the two domains, and training CycleGAN in an unsupervised manner is a better alternative if the aim is to generate images that are as visually realistic as possible.

A suggestion for future work is to investigate if GANs can be used for data augmentation (e.g. for discriminating healthy and diseased subjects). This would also provide information regarding whether the model that creates the most visually realistic images, or the model that performs best in the quantitative evaluations, is the most suitable to use. Here we have only used 2D GANs, but 3D GANs [8, 19] can potentially yield even better results, at the cost of a longer processing time and an increased memory usage.

5 Acknowledgments

This study was supported by Swedish research council grant 2017-04889. Funding was also provided by the Center for Industrial Information Technology (CENIIT) at Linköping University, and the Knut and Alice Wallenberg foundation project "Seeing organ function".


  1. Data collection and sharing for this project was provided by the Human Connectome Project (U01-MH93765) (HCP; Principal Investigators: Bruce Rosen, M.D., Ph.D., Arthur W. Toga, Ph.D., Van J. Wedeen, M.D.). HCP funding was provided by the National Institute of Dental and Craniofacial Research (NIDCR), the National Institute of Mental Health (NIMH), and the National Institute of Neurological Disorders and Stroke (NINDS). HCP data are disseminated by the Laboratory of Neuro Imaging at the University of Southern California.


  1. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep learning,” Nature, vol. 521, pp. 436–444, 2015.
  2. Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sanchez, “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60 – 88, 2017.
  3. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27, pp. 2672–2680. 2014.
  4. Antreas Antoniou, Amos Storkey, and Harrison Edwards, “Data augmentation generative adversarial networks,” arXiv, vol. 1711.04340, 2017.
  5. Francesco Calimeri, Aldo Marzullo, Claudio Stamile, and Giorgio Terracina, “Biomedical data augmentation using generative adversarial neural networks,” in Artificial Neural Networks and Machine Learning (ICANN), Alessandra Lintas, Stefano Rovetta, Paul F.M.J. Verschure, and Alessandro E.P. Villa, Eds., 2017, pp. 626–634.
  6. Dai Dai, Jieqiong Wang, Jing Hua, and Huiguang He, “Classification of ADHD children through multimodal magnetic resonance imaging,” Frontiers in Systems Neuroscience, vol. 6, pp. 63, 2012.
  7. Daoqiang Zhang, Yaping Wang, Luping Zhou, Hong Yuan, and Dinggang Shen, “Multimodal classification of Alzheimer’s disease and mild cognitive impairment,” NeuroImage, vol. 55, no. 3, pp. 856 – 867, 2011.
  8. Dong Nie, Roger Trullo, Caroline Petitjean, Su Ruan, and Dinggang Shen, “Medical Image Synthesis with Context-Aware Generative Adversarial Networks,” arXiv, vol. 1612.05362, 2016.
  9. Qianye Yang, Nannan Li, Zixu Zhao, Xingyu Fan, Eric I-Chao Chang, and Yan Xu, “MRI Image-to-Image Translation for Cross-Modality Image Registration and Segmentation,” arXiv, vol. 1801.06940, 2018.
  10. Salman Ul Hassan Dar, Mahmut Yurt, Levent Karacan, Aykut Erdem, Erkut Erdem, and Tolga Çukur, “Image Synthesis in Multi-Contrast MRI with Conditional Generative Adversarial Networks,” arXiv, vol. 1802.01221, 2018.
  11. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” arXiv, vol. 1611.07004, 2016.
  12. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” arXiv, vol. 1703.10593, 2017.
  13. M.-Y. Liu, T. Breuel, and J. Kautz, “Unsupervised Image-to-Image Translation Networks,” arXiv, vol. 1703.00848, 2017.
  14. Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation,” arXiv, vol. 1711.09020, 2017.
  15. S. Zhou, T. Xiao, Y. Yang, D. Feng, Q. He, and W. He, “GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data,” arXiv, vol. 1705.04932, 2017.
  16. Z. Yi, H. Zhang, P. Tan, and M. Gong, “DualGAN: Unsupervised Dual Learning for Image-to-Image Translation,” arXiv, vol. 1704.02510, 2017.
  17. David C Van Essen, Stephen M Smith, Deanna M Barch, Timothy EJ Behrens, Essa Yacoub, Kamil Ugurbil, Wu-Minn HCP Consortium, et al., “The wu-minn human connectome project: an overview,” Neuroimage, vol. 80, pp. 62–79, 2013.
  18. Matthew F Glasser, Stamatios N Sotiropoulos, J Anthony Wilson, Timothy S Coalson, Bruce Fischl, Jesper L Andersson, Junqian Xu, Saad Jbabdi, Matthew Webster, Jonathan R Polimeni, et al., “The minimal preprocessing pipelines for the Human Connectome Project,” Neuroimage, vol. 80, pp. 105–124, 2013.
  19. Biting Yu, Luping Zhou, Lei Wang, Jurgen Fripp, and Pierrick Bourgeat, “3D cGAN Based Cross-Modality MR Image Synthesis for Brain Tumor Segmentation,” in International Symposium on Biomedical Imaging (ISBI), 2018, pp. 626–630.