Retinal Optic Disc Segmentation using Conditional Generative Adversarial Network

Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili, 43003 Tarragona, Spain; Kayakalp Hospital, 110084 New Delhi, India; Sant Joan de Reus University Hospital, 43204 Reus, Spain; Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, 07-01 Matrix, 138671, Singapore.
Abstract

This paper proposes a retinal image segmentation method based on a conditional Generative Adversarial Network (cGAN) to segment the optic disc. The proposed model consists of two successive networks: a generator and a discriminator. The generator learns to map information from the observed input (i.e., a retinal fundus color image) to the output (i.e., a binary mask). In turn, the discriminator learns, as a loss function, to train this mapping by comparing the ground truth and the predicted output, observing the input image as a condition. Experiments were performed on two publicly available datasets: DRISHTI GS1 and RIM-ONE. The proposed model outperformed state-of-the-art methods in terms of Jaccard and Dice coefficients. Moreover, an image is segmented in less than a second on a recent GPU.

Vivek Kumar Singh (Corresponding Author, E-mail: vivekkumar.singh@urv.cat), Hatem A. Rashwan, Farhan Akram, Nidhi Pandey, Md. Mostafa Kamal Sarker, Adel Saleh, Saddam Abdulwahab, Najlaa Maaroof, Jordina Torrents Barrena, Santiago Romani and Domenec Puig

Keywords: Conditional Generative Adversarial Networks, deep learning, retinal image analysis, optic disc segmentation

1 Introduction

Retinal fundus image analysis is very important for doctors dealing with the medical diagnosis, screening and treatment of ophthalmologic diseases. The morphology of the optic disc (OD), the location where ganglion cell axons exit the eye to form the optic nerve, through which visual information from the photo-receptors is transmitted to the brain, is an important structural indicator for assessing the presence and severity of retinal diseases, such as diabetic retinopathy, hypertension, glaucoma, hemorrhages, vein occlusion, and neovascularization [1]. Retinal OD segmentation is therefore the first step in a meaningful investigation of retinal images that helps to diagnose eye diseases [2].

The OD appears as a bright yellowish oval region within color fundus images through which the blood vessels enter the eye. The macula is the center of the retina, which is responsible for our central vision. Figure 1 shows the color retinal fundus image with the key anatomical structures denoted. For ophthalmologists and eye care specialists, an automated segmentation and analysis of fundus optic disc plays an important role to diagnose and treat the retinal diseases.

Figure 1: Relevant structures in a fundus image.

Numerous methods have been proposed to detect and segment the optic disc. For the diagnosis of glaucoma, Chrastek et al. [3] proposed an automated algorithm to segment the optic nerve head. They first removed the blood vessels using a distance map algorithm and a morphological operation, and then used an anchored active contour model to segment the optic disc. Lowell et al. [4] proposed a deformable contour model to segment the optic nerve head boundary in retinal images, using template matching and a directionally sensitive gradient to discard the interference of vessels. In turn, Welfer et al. [5] proposed an automated optic disc segmentation method for fundus images based on adaptive morphological operations. They then used a watershed transform marker to define the optic disc boundary; in addition, vessel obstruction was minimized by morphological erosion.

With the increasing use of deep learning models for segmentation tasks, many methods based on convolutional neural networks (CNNs) have recently been proposed. An automatic optic disc and cup segmentation method based on a stack of deep U-Net models was proposed in [6]; each model in the cascade refines the result of the previous one.

In this paper, we propose a retinal OD segmentation model based on the conditional Generative Adversarial Network (cGAN) [7]. The cGAN is a deep learning network that can learn the statistically invariant features (texture, color, etc.) of an input image and segment the optic disc region. To the best of our knowledge, this paper introduces the first application of conditional generative adversarial training to retinal optic disc segmentation. The proposed cGAN consists of two combined networks: a generator and a discriminator. The generator network learns the mapping from the input, a fundus image, to the output, a segmented image. In turn, the discriminator (i.e., the adversarial term) learns a loss function to train this mapping by comparing the ground truth and the predicted output. Finally, the whole cGAN network optimizes a loss function that combines a conventional binary cross-entropy loss with an adversarial term. The adversarial term encourages the generator to produce outputs that cannot be distinguished from the ground truth.

The rest of the paper is organized as follows. Section 2 describes the methodology of the proposed cGAN model, Section 3 presents the experiments and discussion, and Section 4 concludes and outlines future lines of research.

2 Proposed Methodology

Figure 2: General framework for optic disc segmentation.

Figure 2 shows the proposed cGAN framework for optic disc segmentation. Optic disc detection is addressed in this paper as a segmentation problem, which is carried out by the generator network.

Figure 3: The generator network architecture, composed of encoder and decoder layers.

The generator network is based on encoding and decoding layers. The encoder extracts features from the input retinal fundus images with convolutional filters and down-sampling; in turn, the decoder uses deconvolutional filters with up-sampling of the feature maps to predict the final segmented image. Each (de)convolutional layer is followed by batch normalization, and a LeakyReLU activation function with slope 0.2 is applied at the end of each (de)convolutional layer. Each convolution and deconvolution uses strided 4x4 spatial filters to down- and up-sample the feature maps. At the last convolutional layer of the encoder, a Tanh activation function is used. In the last layer of the decoder, we used a sparse fully connected (FC) layer, which converts the feature maps into a one-dimensional vector, with a sigmoid activation function to obtain the binary optic disc segmentation.
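As an illustration, one encoder/decoder pair of the kind described above can be sketched in PyTorch. The stride of 2 and the 256x256 input size are assumptions for the sketch, not values stated in the text:

```python
import torch
import torch.nn as nn

def encoder_block(in_ch, out_ch):
    # 4x4 strided convolution halves the spatial resolution (stride 2 assumed),
    # followed by batch normalization and LeakyReLU with slope 0.2.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2),
    )

def decoder_block(in_ch, out_ch):
    # 4x4 strided transposed convolution doubles the spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2),
    )

x = torch.randn(1, 3, 256, 256)   # a fundus image batch (assumed input size)
feat = encoder_block(3, 64)(x)    # down-sampled feature maps
out = decoder_block(64, 3)(feat)  # up-sampled back to the input resolution
```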

To improve the segmentation performance of the proposed network, skip connections (shown as dotted lines in Figure 3) are used between the encoder and decoder by concatenating the feature maps of a convolutional layer with those of the corresponding deconvolutional layer. The main advantage of skip connections is that the encoder learns high-level features of the retinal optic disc image pixels, and the decoder learns to correlate with the encoder features to determine whether the corresponding receptive fields of the output image are likely to belong to the optic disc mask or not. The generator network architecture, consisting of encoder and decoder layers, is shown in Figure 3; a retinal image is observed as input. The original images from both public datasets are very large, so to reduce the network size we resized the input images to a smaller fixed resolution, and the value of each pixel is normalized to [0,1].
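The concatenation performed by a skip connection can be illustrated as follows; the channel and spatial sizes are hypothetical:

```python
import torch

# Hypothetical feature maps from a matching encoder/decoder pair.
enc_feat = torch.randn(1, 128, 64, 64)  # encoder output at some depth
dec_feat = torch.randn(1, 128, 64, 64)  # decoder output at the same depth

# The skip connection concatenates the two along the channel axis,
# so the next deconvolutional layer sees twice the channel count.
merged = torch.cat([enc_feat, dec_feat], dim=1)
```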

The discriminator, which observes the concatenation of the retinal image and the segmentation mask as the input to be evaluated as real or fake, is composed of five convolutional layers. Including the adversarial score in the loss computation of the generator thus fosters its ability to provide good segmentations. Each convolutional layer uses strided spatial filters, so the successive layers extract an increasing number of feature maps at progressively lower spatial resolution, and the final layer produces the adversarial score.
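A minimal sketch of such a five-layer discriminator follows, assuming stride-2 4x4 filters and illustrative channel widths; the paper's exact feature-map counts are not reproduced here:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Five convolutional layers; channel widths (64..512) are assumptions.
    def __init__(self, img_ch=3, mask_ch=1):
        super().__init__()
        ch = img_ch + mask_ch  # image and mask are concatenated channel-wise
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(512, 1, 4, stride=1, padding=1),  # real/fake score map
        )

    def forward(self, image, mask):
        # Concatenate the retinal image and the candidate mask, then score.
        return torch.sigmoid(self.net(torch.cat([image, mask], dim=1)))

score = Discriminator()(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
```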

Let x denote a retinal fundus image and y its corresponding ground-truth segmentation, with z a random variable. G(x, z) is the predicted binary mask of the optic disc. Besides, the L1 normalized distance between the ground-truth and predicted masks is ℓ_L1(G(x, z), y). In addition, λ is an empirical weighting factor and the discriminator output score is D(x, ·). If the discriminator output score is 1, the predicted mask looks like a true ground truth; otherwise, the output score is 0. The generator loss is:

\ell_{Gen}(G, D) = \mathbb{E}_{x,y,z}\left[-\log D(x, G(x, z))\right] + \lambda\, \ell_{L1}(G(x, z), y)   (1)
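Assuming a sigmoid discriminator output, the combined objective of Eq. (1) can be sketched as follows; the default weight lam=100.0 is an assumption borrowed from common pix2pix practice, not a value from this paper:

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_score, pred_mask, gt_mask, lam=100.0):
    # Adversarial term: push the discriminator's score on the generated
    # mask towards 1 (binary cross-entropy against an all-ones target).
    adv = F.binary_cross_entropy(d_fake_score, torch.ones_like(d_fake_score))
    # L1 term: pixel-wise distance between predicted and ground-truth masks.
    l1 = F.l1_loss(pred_mask, gt_mask)
    return adv + lam * l1  # lam plays the role of the weighting factor
```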

Here, we have used an L1 loss term to boost the learning process. According to [7], using only an L1 loss produces blurred segmentations. Therefore, to avoid this problem, we used the adversarial network to increase the quality of the segmented image: the adversarial term allows the generator to refine the output image at a fine level. The loss computation of the discriminator network is shown below:

\ell_{Dis}(G, D) = \mathbb{E}_{x,y}\left[-\log D(x, y)\right] + \mathbb{E}_{x,z}\left[-\log(1 - D(x, G(x, z)))\right]   (2)

Therefore, the optimizer helps the discriminator network to maximize the belief value for actual masks (by minimizing \mathbb{E}_{x,y}[-\log D(x, y)]) and to minimize the belief value for generated masks (by minimizing \mathbb{E}_{x,z}[-\log(1 - D(x, G(x, z)))]).
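The corresponding discriminator objective of Eq. (2) can be sketched as (again assuming sigmoid discriminator scores):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_score, d_fake_score):
    # Real masks should be scored as 1, generated masks as 0.
    real = F.binary_cross_entropy(d_real_score, torch.ones_like(d_real_score))
    fake = F.binary_cross_entropy(d_fake_score, torch.zeros_like(d_fake_score))
    return real + fake
```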

We used the Adam [8] optimizer with a learning rate of 0.0002. During the experiments, the batch size was set to 4 and the model was trained for 200 epochs. The proposed cGAN model permits accurate and robust learning even with only a few hundred training images. After segmentation, we applied erosion and dilation morphological operations to remove some white artifacts from the generated output binary mask.
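This post-processing step can be sketched with SciPy's binary morphology; the structuring element and iteration count below are assumptions, not values from the paper:

```python
import numpy as np
from scipy import ndimage

def clean_mask(binary_mask, iterations=2):
    # Erosion removes small white artifacts; the subsequent dilation
    # restores the size of the remaining (large) optic disc region.
    eroded = ndimage.binary_erosion(binary_mask, iterations=iterations)
    return ndimage.binary_dilation(eroded, iterations=iterations)

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True  # the "optic disc" region
mask[5, 5] = True          # a one-pixel white artifact
cleaned = clean_mask(mask)
```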

3 Experiments and Discussion

In the experiments, we used a 64-bit Intel Core i7-6700 3.40GHz CPU with 16GB of memory and an NVIDIA GTX 1070 GPU, running on the Ubuntu 16.04 Linux operating system. The model was implemented using the PyTorch [9] deep learning framework.

We conduct a comprehensive set of experiments to validate the potential of our proposed model on two datasets:

DRISHTI-GS1 [10]: This publicly available dataset comprises 101 images, divided into training and testing sets of 50 and 51 images, respectively. Each image has a corresponding binary mask as ground truth.

RIM-ONE [11]: This dataset is publicly available, particularly for optic nerve head segmentation; it contains a total of 169 high-resolution images with their corresponding ground truth. We used 100 images for training and the remaining 69 images for testing.

For a quantitative assessment of the OD segmentation performance, we computed Accuracy, Dice coefficient (Dice), Jaccard index (JACC), Sensitivity and Specificity, as detailed in Table 1. We performed the experiments on the two datasets with three common segmentation methods: FCN [12], U-Net [13] and SegNet [14]. In addition, we compared our results with three baseline state-of-the-art methods: Shankaranarayana et al. [15], Maninis et al. [16] and Zilly et al. [17].
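For reference, all five metrics can be derived from the pixel-wise confusion matrix; a minimal sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    # pred and gt are boolean masks of the same shape.
    tp = np.sum(pred & gt)    # optic disc pixels correctly detected
    tn = np.sum(~pred & ~gt)  # background pixels correctly rejected
    fp = np.sum(pred & ~gt)   # background pixels wrongly detected
    fn = np.sum(~pred & gt)   # optic disc pixels missed
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```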

Methods Dataset Accuracy Dice JACC Sensitivity Specificity
FCN DRISHTI GS1 0.89 0.92 0.96
RIM-ONE 0.94 0.92 0.87 0.88 0.95
SegNet DRISHTI GS1 0.94 0.88 0.83 0.89 0.95
RIM-ONE 0.93 0.85 0.78 0.86 0.94
U-Net DRISHTI GS1 0.97 0.95 0.90 0.96 0.98
RIM-ONE 0.94 0.92 0.89 0.93 0.97
Shankaranarayana et al. 2017 [15] DRISHTI GS1 - - - - -
RIM-ONE - 0.98 0.88 - -
Maninis et al. 2016 [16] DRISHTI GS1 - - - - -
RIM-ONE - 0.96 0.89 - -
Zilly et al. [17] DRISHTI GS1 - 0.97 0.91 - -
RIM-ONE - 0.94 0.89 - -
cGAN (our proposed) DRISHTI GS1 0.98 0.97 0.96 0.98 0.99
RIM-ONE 0.98 0.98 0.93 0.98 0.99
Table 1: Accuracy, Dice coefficient, Jaccard index, Sensitivity and Specificity of the cGAN, FCN, SegNet and U-Net models, in addition to three baseline methods, evaluated on DRISHTI GS1 and RIM-ONE. The best results are marked in bold. (-) indicates that the result is not reported.

In this section, the proposed method is evaluated on the DRISHTI GS1 and RIM-ONE datasets to show its robustness in comparison to the state-of-the-art methods.

Table 1 shows the quantitative performance of our proposed segmentation method on both publicly available datasets, DRISHTI GS1 and RIM-ONE. As shown, on DRISHTI GS1 the cGAN model segments the OD regions with around 0.98, 0.97, 0.96, 0.98 and 0.99 of Accuracy, Dice coefficient, Jaccard index, sensitivity and specificity, respectively, outperforming the six tested segmentation models. The U-Net model also provided acceptable results, comparable to ours, with around 0.97, 0.95, 0.90, 0.96 and 0.98 on the five evaluation metrics. The three tested baseline methods reported only the Dice coefficient and Jaccard index, as shown in Table 1. The work proposed in [17] yielded feasible scores of around 0.97 and 0.91 for the Dice coefficient and Jaccard index, respectively.

Furthermore, to support the aforementioned results, we evaluated our model on the RIM-ONE dataset. The resulting Accuracy, Dice coefficient, Jaccard index, sensitivity and specificity scores of our model were around 0.98, 0.98, 0.93, 0.98 and 0.99, respectively. On RIM-ONE, our proposed cGAN model also outperformed the compared approaches across the evaluation metrics. Among the baselines, [15] yielded the best Dice coefficient, around 0.98, equal to that of our proposed method; however, our cGAN model achieved a higher Jaccard index of 0.93 compared to 0.88. In turn, the model in [17] provided one of the best baseline Jaccard indices, around 0.89. In addition, the U-Net model still provided good results compared to the other semantic segmentation methods.

Figure 4: Examples of retinal optic disc segmentation: (col 1) retinal optic disc images, (col 2) ground-truth masks, (col 3) FCN, (col 4) U-Net, (col 5) SegNet and (col 6) generated masks with the cGAN.

A qualitative comparison of segmentation results with the state-of-the-art methods on both retinal optic disc datasets is shown in Figure 4. As shown, the OD segmentation of the proposed method is much closer to the ground truth, with accurate boundaries, compared to the results of the state-of-the-art methods. The visualization supports our numerical results, and the U-Net also provided acceptable segmentations. In turn, SegNet yielded the worst segmentation among the tested methods.

4 Conclusion

This work proposed a deep learning framework based on the conditional Generative Adversarial Network (cGAN) to segment the optic disc in retinal fundus images. The cGAN consists of two networks: a generator and a discriminator. The cGAN network does not require a large number of training images to train properly. In addition, it renders high segmentation performance without adding any complexity, since the final segmentation is obtained with the generator network alone. Experimental results show that the cGAN outperformed the state-of-the-art optic disc segmentation methods. Future work will aim at validating our approach on more and larger datasets, which would help translate it into clinical ophthalmology practice. In addition, we plan to integrate the proposed model into a comprehensive diagnosis system for analyzing fundus images in practice.

References

  • [1] T. MacGillivray, E. Trucco, J. Cameron, B. Dhillon, J. Houston and E. Van Beek, Retinal imaging as a source of biomarkers for diagnosis, characterization and prognosis of chronic illness or long-term conditions, The British journal of radiology 87(1040) (2014), 20130832.
  • [2] A. Almazroa, R. Burman, K. Raahemifar and V. Lakshminarayanan, Optic disc and optic cup segmentation methodologies for glaucoma image detection: a survey, Journal of ophthalmology 2015 (2015).
  • [3] R. Chrástek, M. Wolf, K. Donath, H. Niemann, D. Paulus, T. Hothorn, B. Lausen, R. Lämmer, C.Y. Mardin and G. Michelson, Automated segmentation of the optic nerve head for diagnosis of glaucoma, Medical Image Analysis 9(4) (2005), 297–314.
  • [4] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher and L. Kennedy, Optic nerve head segmentation, IEEE Transactions on medical Imaging 23(2) (2004), 256–264.
  • [5] D. Welfer, J. Scharcanski, C.M. Kitamura, M.M. Dal Pizzol, L.W. Ludwig and D.R. Marinho, Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach, Computers in Biology and Medicine 40(2) (2010), 124–137.
  • [6] B. Al-Bander, W. Al-Nuaimy, B.M. Williams and Y. Zheng, Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc, Biomedical Signal Processing and Control 40 (2018), 91–101.
  • [7] P. Isola, J.-Y. Zhu, T. Zhou and A.A. Efros, Image-to-image translation with conditional adversarial networks, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
  • [8] D.P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
  • [9] A. Paszke and S. Chintala, PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration, 2017.
  • [10] J. Sivaswamy, S. Krishnadas, A. Chakravarty, G. Joshi, A.S. Tabish et al., A comprehensive retinal image dataset for the assessment of glaucoma from the optic nerve head analysis, JSM Biomedical Imaging Data Papers 2(1) (2015), 1004.
  • [11] F. Fumero, S. Alayón, J. Sanchez, J. Sigut and M. Gonzalez-Hernandez, RIM-ONE: An open retinal image database for optic nerve evaluation, in: Computer-Based Medical Systems (CBMS), 2011 24th International Symposium on, IEEE, 2011, pp. 1–6.
  • [12] J. Long, E. Shelhamer and T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
  • [13] O. Ronneberger, P. Fischer and T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer, 2015, pp. 234–241.
  • [14] V. Badrinarayanan, A. Kendall and R. Cipolla, Segnet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE transactions on pattern analysis and machine intelligence 39(12) (2017), 2481–2495.
  • [15] S.M. Shankaranarayana, K. Ram, K. Mitra and M. Sivaprakasam, Joint Optic Disc and Cup Segmentation Using Fully Convolutional and Adversarial Networks, in: Fetal, Infant and Ophthalmic Medical Image Analysis, Springer, 2017, pp. 168–176.
  • [16] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez and L. Van Gool, Deep retinal image understanding, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 140–148.
  • [17] J.G. Zilly, J.M. Buhmann and D. Mahapatra, Boosting convolutional filters with entropy sampling for optic cup and disc image segmentation from fundus images, in: International Workshop on Machine Learning in Medical Imaging, Springer, 2015, pp. 136–143.