Retinal Vessel Segmentation in Fundoscopic Images with Generative Adversarial Networks

Model                       DRIVE ROC   DRIVE PR   STARE ROC   STARE PR
U-Net (no discriminator)      0.9700     0.8867      0.9739     0.9023
Pixel GAN (1×1)               0.9710     0.8892      0.9671     0.8978
Patch GAN-1 (10×10)           0.9706     0.8898      0.9760     0.9037
Patch GAN-2 (80×80)           0.9720     0.8933      0.9775     0.9086
Image GAN (640×640)           0.9803     0.9149      0.9838     0.9167
U-Net, which has no discriminator, performs worse than the patch GANs and the image GAN, suggesting that the GAN framework improves segmentation quality. The image GAN, which has the greatest discriminatory capacity, also outperforms the others. This observation is consistent with claims that a powerful discriminator is key to successful training with GANs [4, 12].
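The pixel, patch, and image GANs above differ only in the receptive field of the discriminator's output units. As a minimal sketch (the layer configurations below are hypothetical illustrations, not the paper's exact architectures), the receptive field of a stack of convolutions can be computed with the standard recurrence:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    layers: list of (kernel_size, stride) tuples, input to output.
    Uses the recurrence r += (k - 1) * j; j *= s, where j is the
    cumulative stride ("jump") between adjacent output units.
    """
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# A discriminator built only from 1x1 convs judges single pixels
# (a pixel GAN), ...
print(receptive_field([(1, 1), (1, 1)]))              # 1
# ... while, e.g., five 4x4 convs with strides 2, 2, 2, 1, 1 judge
# 70x70 patches (the classic pix2pix patch discriminator [6]).
print(receptive_field([(4, 2)] * 3 + [(4, 1)] * 2))   # 70
```

An image GAN is simply the limiting case where the receptive field covers the whole 640×640 input.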

Fig. 1 compares ROC and PR curves of the image GAN (V-GAN) with those of existing methods, and Table 1 summarizes the AUC for ROC and PR along with the dice coefficient. We retrieved the dice coefficients and output images of the other methods from [7], and the curves are computed from those images. Our method outperforms the other methods in all operating regimes except DRIU, and it still achieves superior AUC and dice coefficient to DRIU. Our method also surpasses the human annotator on the DRIVE dataset.

Figure 1: Receiver Operating Characteristic (ROC) curve and Precision and Recall (PR) curve for various methods on the DRIVE dataset (top) and the STARE dataset (bottom).
Method              DRIVE ROC   DRIVE PR   DRIVE Dice   STARE ROC   STARE PR   STARE Dice
Kernel Boost [1]      0.9306     0.8464      0.800         -           -          -
HED [16]              0.9696     0.8773      0.796       0.9764      0.8888     0.805
Wavelets [15]         0.9436     0.8149      0.762       0.9694      0.8433     0.774
N^4-Fields [3]        0.9686     0.8851      0.805         -           -          -
DRIU [7]              0.9793     0.9064      0.822       0.9772      0.9101     0.831
Human Expert            -          -         0.791         -           -        0.760
V-GAN (ours)          0.9803     0.9149      0.829       0.9838      0.9167     0.834
Table 1: Comparison of different methods on two datasets with respect to Area Under Curve (AUC) for Receiver Operating Characteristic (ROC), Precision and Recall (PR) and Dice Coefficient.
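The dice coefficient reported in Table 1 measures the overlap between a binarized prediction and the gold standard mask. A minimal NumPy sketch (variable names are illustrative, not from the paper's code):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, gt))  # 2*2 / (3+3) = 0.666...
```

A dice coefficient of 1 means perfect agreement; the human-expert rows in Table 1 show that even trained annotators land well below that on these datasets.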

Fig. 2 illustrates the qualitative differences between our method and the best existing method (DRIU). As shown in the figure, our method generates probability maps concordant with the gold standard, while DRIU assigns overconfident probabilities to fine vessels and to the boundary between vessels and the fundus background, which may result in over-segmentation.

Figure 2: (From left to right) fundoscopic images, gold standard, probability maps of the best existing technique (DRIU [7]), and probability maps of our method on the DRIVE (top) and STARE (bottom) datasets.

For further comparison, we converted the probability maps into binary vessel images with the Otsu threshold [11], as is done in [2]. We can see in Fig. 3 that DRIU generally yields more false positives than our method due to its overconfident probability maps. In contrast, our proposed method admits more false negatives around terminal vessels because of its tendency to assign low probability to uncertain regions, as human annotators would.
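Otsu's method [11] picks the threshold that maximizes the between-class variance of the intensity histogram, so no threshold needs to be tuned per image. A self-contained NumPy sketch of the idea (equivalent in spirit to the binarization step above, not the paper's exact code):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold that maximizes between-class variance [11]."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    # Cumulative class weights and means for every candidate split.
    w0 = np.cumsum(hist)                      # mass of the class below
    w1 = np.cumsum(hist[::-1])[::-1]          # mass of the class above
    mu0 = np.cumsum(hist * centers) / np.maximum(w0, 1e-12)
    mu1 = (np.cumsum((hist * centers)[::-1]) /
           np.maximum(np.cumsum(hist[::-1]), 1e-12))[::-1]
    # Between-class variance at each split; pick the maximizer.
    var_between = w0[:-1] * w1[1:] * (mu0[:-1] - mu1[1:]) ** 2
    return centers[np.argmax(var_between)]

# Bimodal probability map: background near 0.1, vessels near 0.9.
probs = np.concatenate([np.full(900, 0.1), np.full(100, 0.9)])
t = otsu_threshold(probs)
binary = probs > t  # binarized vessel mask
```

On a well-separated bimodal map like this, the resulting mask recovers exactly the high-probability pixels; on overconfident maps such as DRIU's, the same procedure keeps the spurious high-probability responses, which is the source of the extra false positives discussed above.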

Figure 3: Comparison of our method (2nd, 4th columns) with DRIU [7] (1st, 3rd columns) on the DRIVE (top) and STARE (bottom) datasets. Green marks correct segmentation, while blue and red indicate false positives and false negatives, respectively.

4 Conclusion and Discussion

We introduced the GAN framework to retinal vessel segmentation, and our experimental results suggest that the presence of a discriminator helps segment vessels more accurately and cleanly. Our method also outperformed existing methods in ROC AUC, PR AUC, and dice coefficient. Compared to the best existing method, our method produced fewer false positives at fine vessels and drew clearer lines with adequate detail, as a human annotator would. Still, our results fail to detect very thin vessels that span only one pixel. We expect that additional prior knowledge of vessel structure, such as connectivity, may improve performance further.


References

  1. Becker, C., Rigamonti, R., Lepetit, V., Fua, P.: Supervised feature learning for curvilinear structure segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 526–533. Springer (2013)
  2. Fu, H., Xu, Y., Lin, S., Wong, D.W.K., Liu, J.: Deepvessel: Retinal vessel segmentation via deep learning and conditional random field. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 132–139. Springer (2016)
  3. Ganin, Y., Lempitsky, V.: N^4-fields: Neural network nearest neighbor fields for image transforms. In: Asian Conference on Computer Vision. pp. 536–551. Springer (2014)
  4. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. pp. 2672–2680 (2014)
  5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)
  6. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004 (2016)
  7. Maninis, K.K., Pont-Tuset, J., Arbeláez, P., Van Gool, L.: Deep retinal image understanding. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 140–148. Springer (2016)
  8. Melinščak, M., Prentašić, P., Lončarić, S.: Retinal vessel segmentation using deep neural networks. In: VISAPP 2015 (10th International Conference on Computer Vision Theory and Applications) (2015)
  9. Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)
  10. Nguyen, U.T., Bhuiyan, A., Park, L.A., Ramamohanarao, K.: An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern recognition 46(3), 703–715 (2013)
  11. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on systems, man, and cybernetics 9(1), 62–66 (1979)
  12. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
  13. Ricci, E., Perfetti, R.: Retinal blood vessel segmentation using line operators and support vector classification. IEEE transactions on medical imaging 26(10), 1357–1365 (2007)
  14. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 234–241. Springer (2015)
  15. Soares, J.V., Leandro, J.J., Cesar, R.M., Jelinek, H.F., Cree, M.J.: Retinal vessel segmentation using the 2-d gabor wavelet and supervised classification. IEEE Transactions on medical Imaging 25(9), 1214–1222 (2006)
  16. Xie, S., Tu, Z.: Holistically-nested edge detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1395–1403 (2015)
  17. Zhang, B., Zhang, L., Zhang, L., Karray, F.: Retinal vessel extraction by matched filter with first-order derivative of gaussian. Computers in biology and medicine 40(4), 438–445 (2010)