Generative Adversarial Network for Medical Images (MI-GAN)

Talha Iqbal, Department of Electrical Engineering, COMSATS University Islamabad, Abbottabad Campus, Pakistan.
Hazrat Ali, Department of Electrical Engineering, COMSATS University Islamabad, Abbottabad Campus, Pakistan. hazratali@ciit.net.pk
Abstract

Deep learning algorithms produce state-of-the-art results for different machine learning and computer vision tasks. To perform well on a given task, these algorithms require large datasets for training. However, deep learning algorithms lack generalization and suffer from over-fitting whenever trained on a small dataset, especially when one is dealing with medical images. For supervised image analysis in medical imaging, having image data along with the corresponding annotated ground truths is costly as well as time consuming, since annotation of the data is done manually by medical experts. In this paper, we propose a new Generative Adversarial Network for Medical Imaging (MI-GAN). The MI-GAN generates synthetic medical images and their segmented masks, which can then be used for supervised analysis of medical images. In particular, we present MI-GAN for the synthesis of retinal images. The proposed method generates more precise segmented images than existing techniques. The proposed model achieves a dice coefficient of 0.837 on the STARE dataset and 0.832 on the DRIVE dataset, which is state-of-the-art performance on both datasets.

keywords:
GAN, medical imaging, style transfer, deep learning, retinal images
journal: Journal of Medical Systems

1 Introduction

In recent times, strong interest has emerged in the use of computer-aided medical diagnosis a1 () a2 () a3 () a4 () a5 () a6 () a7 () a8 () a9 () a10 (). Computer-aided diagnosis relies on advanced machine learning and computer vision techniques b1 (). Today, the majority of medical professionals use computer-aided analysis of medical images for diagnosis purposes. Retinal vessel network analysis provides information about the general systemic status and the condition of the eyes. Ophthalmologists can diagnose early signs of vascular burden due to hypertension and diabetes, as well as vision-threatening retinal diseases such as Retinal Artery Occlusion (RAO) and Retinal Vein Occlusion (RVO), from abnormalities in the vascular structure b36 (). To aid this kind of analysis, automatic vessel segmentation methods have been studied extensively. Recently, deep learning methods have shown the potential to produce promising results with high accuracy, occasionally better than medical specialists, in the field of medical imaging b3 (). Deep learning also improves the efficiency of data analysis due to its computational and automated nature, but most medical images are 3-dimensional (e.g. MRI and CT) and it is difficult as well as inefficient to produce manually annotated images. In general, medical images are scarce, expensive, and offer restricted use due to legal issues (patient privacy). Moreover, the publicly available medical image datasets often lack consistency in size and annotation. This makes them less useful for training neural networks, which are data-hungry, and directly limits the development of medical diagnosis systems. Therefore, the generation of synthetic images along with their segmented masks will help in medical image analysis and lead to better diagnosis systems.
Recent work in the domain of medical imaging has shown the possibility of improved performance even on small datasets. This has become possible through the provision of some prior knowledge to a deep neural network b2 (). The U-Net b3 () architecture is popular for segmentation of biomedical images and shows how effectively augmented data can be used to cope with the small amount of training data available for deep networks. Data augmentation is easy to implement and gives good results, but it can only produce fixed variations for any given dataset and requires the augmentation to fit the given dataset. Impressive results are achieved by Gatys et al. b6 () through the application of a deep learning algorithm. A similar approach, with modifications that reduce the computational complexity, has been used by b7 (), b8 (). More traditional approaches for the segmentation of filamentary structured images have been reported in b9 () and b10 ().
Generative Adversarial Networks (GANs) are useful for many applications like unsupervised representation learning b4 () or image-to-image translation b5 (). Typically, the vessel segmentation task is considered an image translation problem, where a segmented vessel map is produced at the output using a fundoscopic image as the input to the model. We can obtain clearer and sharper vessel segmentation masks if we constrain the output to resemble the annotations made by human experts. For image generation, Generative Adversarial Networks (GANs) b18 () provide a different approach. A GAN consists of two networks, a generator and a discriminator, which are trained to compete with each other in a min-max game. The goal of the discriminator is to classify an input image as real or synthetic, while the goal of the generator is to generate images close enough to real ones that the discriminator is fooled. To deal with over-fitting, the generator is never shown the training dataset and is only fed the gradient of the discriminator's decision. The training process is highly sensitive to the values of the hyper-parameters. The major problem in GANs is finding the Nash equilibrium at which the training of the generator and discriminator can stop; failing to reach it leads to training instability.
A number of GANs, such as b11 (), b12 (), b13 (), b14 (), have been developed. DCGAN b11 () introduced a set of constraints which stabilized the training of the model. CGAN b12 () trains the model and generates output conditioned on some auxiliary information. LAPGAN b13 () uses a cascade of convolutional neural networks within a Laplacian pyramid framework for the generation of new images. InfoGAN b14 () learns disentangled representations in an unsupervised manner. GANs have also performed well on small medical image datasets, as discussed in b19 (). The authors in b19 () used GANs for unsupervised adaptation of multi-modal medical images.
In this paper, we propose a new approach for the generation of retinal vessel images as well as their segmented masks using generative adversarial networks. The closest to our work is that of b15 (). The method proposed in b15 () is limited to generating a fixed output for a given input. On the contrary, our method can produce an unlimited number of synthetic images from the same input. Moreover, unlike b15 (), which uses hundreds to millions of training examples, our approach works with only tens of training images. Our method not only extracts sharp and clear vessels with fewer false positives compared to existing methods, but also achieves state-of-the-art performance on two publicly available datasets, STARE (http://cecas.clemson.edu/~ahoover/stare/) and DRIVE (https://www.isi.uu.nl/Research/Databases/DRIVE/). Our model, when trained on the generated datasets, gives results comparable to the network trained on real images. The major contributions of this work are:

  • We propose a GAN which is able to generate realistic-looking retinal images from only tens of examples, unlike b15 (), which requires hundreds of training examples.

  • We propose a variant of style transfer based on a particular style representation provided by an additional input.

  • Unlike the traditional training of GANs, we propose a new training schedule: we update the generator twice for every discriminator update to obtain quicker convergence. Thus, the overall training time is reduced significantly.

The rest of the paper is organized as follows: We explain the generative adversarial network and the proposed design of our model in Section 2. The experimental setup is described in Section 3 and results are discussed in Section 4. Finally, the paper is concluded in Section 5.

Figure 1: Flowchart of our method

2 Generative Adversarial Network for Medical Imaging (MI-GAN)

We generate segmented images using the ground truth segmented images of each dataset. To produce realistic filamentary structured output, we imitate the image formation process, i.e. an image generation function G. The inputs to this function are a segmented binary image y and normally distributed noise z. Our goals are:

  1. Learn the function G from a very small training set.

  2. Explore the conditional probability of the image formation distribution p(x|y). Here x is the random variable representing a feasible image realization conditioned on a particular segmentation realization y. In simple words, by varying the noise vector z, we should obtain plausible as well as distinct RGB images from the same segmented input y.

  3. Add these synthesized images to the training set and improve the overall performance of supervised segmentation.

  4. An interesting aspect of our method is that a specific image style learned from an additional input x_s is directly transferred to the output image \hat{x}. Note that the style of x_s can be different from that of the original image x. Similarly, their corresponding segmented images y_s and y are also unrelated.

Achieving these goals is challenging, as image generation is a complex process and G is a sophisticated function. Nonetheless, using a powerful deep learning methodology, namely GANs, an end-to-end machine learning algorithm is proposed in this work. Figure 1 shows the overall flow of our proposed approach.
Along with the generator G, we have a discriminator D which gives an output d in [0, 1] depending on the input. The function of the discriminator is to classify a synthetic image as 0 (synthetic) and a real image as 1 (real). Mathematically, if the input is a generated image \hat{x} = G(y, z), then d tends to 0, and if the input is a real image x from the dataset, then d tends to 1 (see Figure 1). Here d is the discriminator output.
The training mechanism of GANs can be considered as two players competing against each other in a min-max game. Each player wants to become better than the other and ultimately win. Based on this analogy, we define the optimization problem characterizing the interplay between G and D as:

\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] + \mathbb{E}_{y,z}\big[\log\big(1 - D(G(y,z))\big)\big] + \lambda \, L_1(G) \qquad (1)

Here \lambda is a trade-off constant and L_1(G) is the last term of Equation 1. This term is introduced to make sure that the synthetic image produced by the generator does not deviate too much from the real image. It can be considered as a simple L1 loss function, defined as:

L_1(G) = \mathbb{E}_{x,y,z}\big[\, \| x - G(y,z) \|_1 \,\big] \qquad (2)

During training, the generator tries to generate realistic-looking synthesized images so that it may fool the discriminator into classifying these generated images as real. The generator achieves this by minimizing Equation 1, which is our objective function. In practice, using the approximation scheme of b17 (), this can be done by minimizing -\log D(G(y,z)), which is a simpler form than the original \log(1 - D(G(y,z))). The overall generator loss can be defined as:

L_G = \mathbb{E}_{y,z}\big[-\log D(G(y,z))\big] + \lambda \, L_1(G) \qquad (3)

On the other side, the discriminator tries to correctly classify and separate synthesized images from real images by maximizing the objective function (see Equation 1). The discriminator loss is given by:

L_D = -\,\mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] - \mathbb{E}_{y,z}\big[\log\big(1 - D(G(y,z))\big)\big] \qquad (4)

The expectation values are approximated by empirical averages. Training is done by alternating the optimization between the generator and discriminator objective functions, as adopted by various GANs b17 (), b11 (), b20 (). Unfortunately, these GANs do not provide a formal guarantee that this optimization process converges and that a Nash equilibrium point is reached. Different tricks are available which help the GAN training process converge and produce reasonably realistic-looking synthesized images at the output b17 (), b11 (), b20 (). Figure 1 illustrates an overview of the workflow of the proposed GAN model, excluding the dashed box. Next, we discuss the specific neural network architectures of our generator G and discriminator D in detail.
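To make the alternating objectives concrete, the following is a minimal PyTorch-style sketch of the generator and discriminator losses in Equations 3 and 4; the tensor names and the default value of the trade-off constant are illustrative assumptions, not the exact implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake, fake_img, real_img, lam=10.0):
    # -log D(G(y,z)) pushes the generator to fool the discriminator,
    # while the L1 term keeps the synthetic image close to the real one (Eq. 2-3).
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    l1 = F.l1_loss(fake_img, real_img)
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    # -log D(x) - log(1 - D(G(y,z))): classify real as 1 and synthetic as 0 (Eq. 4).
    real_term = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    fake_term = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return real_term + fake_term
```

These helpers assume the discriminator ends in a sigmoid, so its outputs are probabilities in [0, 1], matching the discussion above.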

2.1 Generator and Discriminator Architecture in MI-GAN

As explained in b15 (), b22 (), b23 () and b24 (), the commonly used encoder-decoder technique is adopted here. This allows us to introduce the noise code in a natural manner. The encoder acts as a feature extractor: it is a multi-layered neural network which captures local data representations in its first few layers and increasingly global representations as we move deeper into the network. A 400-dimensional random noise code z is fully connected to the first layer of the network (see Figure 2) and is then reshaped. Note that for all layers of G and D we use kernels of a fixed size with a stride of two and no pooling layers. Meanwhile, in our case it is important for the generator to respect the morphology of the input segmented image while generating output images. To do so, the 'skip connections' of U-Net b16 () are taken into consideration. In the skip connection approach, a previous layer is mirrored and then duplicated by appending it to the current layer. Odd-numbered layers are skipped and the central coding is considered as the origin. Note that for small image sizes and a deep neural network, the encoder-decoder framework works well even without skip connections. However, we are working with relatively large images and our network is relatively deep.
Training such a model is challenging. The main challenge faced when using a deeper network is vanishing gradients over a long path during error back-propagation. Skip connections, used similarly as in residual nets b25 (), allow us to pass the error gradients directly from a decoder layer to its corresponding encoder layer. This facilitates the memorization of local and global shape representations, as well as the corresponding textures encountered in the training dataset, so we are able to generate better results. We use the basic architecture of the network proposed in b11 () to build the layers of our generator, with multiple convolution layers, batch normalization and Leaky ReLU components, as shown in Figure 2. The activation function used to squash the output of the final layer is tanh, which limits the output values to between -1 and 1.
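For concreteness, a minimal PyTorch sketch of such an encoder-decoder generator with an injected 400-dimensional noise code and U-Net style skip connections is shown below. The layer widths, the number of stages and the small working resolution are illustrative assumptions and do not reproduce the exact Figure 2 configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder with a 400-d noise code and skip connections (sketch)."""
    def __init__(self, z_dim=400, img_size=64):
        super().__init__()
        self.img_size = img_size
        # Noise code fully connected to the first layer, then reshaped to a feature map.
        self.fc = nn.Linear(z_dim, 32 * img_size * img_size)
        self.enc1 = self._down(33, 64)     # 32 noise channels + 1 mask channel
        self.enc2 = self._down(64, 128)
        self.dec1 = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1)  # 128 = 64 + 64 skip

    def _down(self, c_in, c_out):
        # Stride-2 convolution, batch normalization and Leaky ReLU, no pooling.
        return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                             nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

    def forward(self, y, z):
        # y: segmented binary mask (B,1,H,W); z: Gaussian noise code (B,400)
        h = self.fc(z).view(-1, 32, self.img_size, self.img_size)
        e1 = self.enc1(torch.cat([h, y], dim=1))
        e2 = self.enc2(e1)
        d1 = torch.relu(self.dec1(e2))
        out = self.dec2(torch.cat([d1, e1], dim=1))   # U-Net style skip connection
        return torch.tanh(out)                        # final activation limits output to [-1, 1]
```

Sampling several outputs for the same mask y with different noise vectors z illustrates goal 2 above: distinct but plausible images from a single segmented input.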

Figure 2: Generator Structure

Like our generator, the discriminator network is also built from convolution layers, batch normalization and Leaky ReLU, as shown in Figure 3. The activation function used at the output layer is a sigmoid instead of tanh. After every convolution the feature map size is halved; for example, one convolution layer reduces the spatial size of the input image by a factor of two. The number of feature maps (filters) is doubled, from 32 through 512, as we move from the first to the last layer.
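A correspondingly minimal PyTorch sketch of such a discriminator is given below; the depth and the 32-to-512 channel progression follow the description above, while the pooling of the final logit map into one score is an illustrative assumption.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Stack of stride-2 convolutions: channels double, spatial size halves (sketch)."""
    def __init__(self, in_channels=3):
        super().__init__()
        layers = []
        c_in = in_channels
        for c_out in [32, 64, 128, 256, 512]:           # feature maps doubled 32 -> 512
            layers += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.LeakyReLU(0.2)]
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Conv2d(512, 1, 1)          # single logit map

    def forward(self, x):
        h = self.features(x)
        h = self.classifier(h).mean(dim=[2, 3])         # pool to one score per image
        return torch.sigmoid(h)                          # output in [0, 1]
```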
Up to here, we have explained how our proposed approach learns a generic representation from a small training dataset and uses it to generate synthesized segmented images. Next, we discuss the segmentation process and a variant of the style transfer technique.

Figure 3: Discriminator Structure

2.2 Segmentation Technique

For segmentation, we utilize the gold standard segmented images. We add a loss function that penalizes the distance between the gold standard images and the output segmented images. This loss is defined as the binary cross-entropy between the gold standard mask y and the predicted segmentation map \hat{y}, summed over pixels i:

L_{seg} = -\sum_i \Big( y_i \log \hat{y}_i + (1 - y_i) \log\big(1 - \hat{y}_i\big) \Big) \qquad (5)

The overall objective function is formulated by summing the GAN objective function and the segmentation loss. The new objective function is as follows:

\min_G \max_D \; L_{GAN}(G, D) + \lambda_{seg} \, L_{seg} \qquad (6)

Here L_{GAN}(G, D) denotes the adversarial terms of Equation 1, and \lambda_{seg} is used to balance the two objective functions.
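As a rough sketch under the same assumptions as before, the generator side of the combined objective of Equations 5 and 6 can be written as follows; the value of the balancing weight and the tensor names are placeholders for illustration.

```python
import torch
import torch.nn.functional as F

def generator_loss_with_segmentation(d_fake, pred_mask, gold_mask, lam_seg=1.0):
    # Adversarial term (fool the discriminator) plus pixel-wise binary
    # cross-entropy against the gold standard segmentation (Eq. 5-6).
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    seg = F.binary_cross_entropy(pred_mask, gold_mask)
    return adv + lam_seg * seg
```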

2.3 Style Transfer Variant for MI-GAN

Recent advancements in image style transfer, such as b21 (), inspired us to use this technique in the field of medical imaging. Here, given an input segmentation image y which delineates the content of its filamentary structure, we expect the generated image \hat{x} to possess the unique texture (referred to as style) of the target style input x_s, while still adhering to the content of y presented during training. The difference between our style transfer approach and the original style transfer is that, instead of a generic representation, our synthesized image is based on a particular style representation provided by x_s. The procedure we follow is to introduce a style image x_s as an additional input alongside the training input, i.e. a new image having a different style and texture. Note that, in general, x_s has its own filamentary structure (segmentation) y_s, which is different from the other input y. Nonetheless, this does not affect the performance of generating synthesized images using our method. It is worth noting that the proposed methodology is practically implementable in the biomedical imaging field: on the one hand, very few annotated images are available, while on the other hand there are a lot of unannotated images available on the world wide web which can be used as potential style inputs.
The overall training and testing methodology of this new algorithm is the same as described above. The training is carried out in batches over all annotated examples in the training set. The generator and discriminator are the same as before; the only difference is that a new cost term L_{st} is introduced in the objective function, replacing L_1(G) in Equation 1. We follow the style transfer idea proposed in b7 (), b8 () and use the Convolutional Neural Network (CNN) of VGG-19 b26 () to extract features from this multi-layered network. The VGG-19 architecture is basically a series of five CNN blocks, each of which consists of two to four consecutive CNN layers of the same size. Let us define some notation for convenience. Let b be the index of a particular CNN block, with b in {1, ..., 5}, and let L_b denote the set of layers of block b, indexed by l. The segmented image is denoted y, irrespective of whether it corresponds to the real image x or the generated image \hat{x}. The VGG-19 network is obtained by training on the ImageNet classification problem, which is explained in detail in b26 (). The optimization problem for this style transfer algorithm explicitly incorporates two perceptual losses, namely the style loss and content loss of b6 (), as well as a total variational loss.
Style loss: This loss minimizes the total textural deviation between the target style x_s and the generated image \hat{x}. To calculate this loss, let b index the CNN blocks used for the style representation and let l in L_b index the layers of block b. The feature maps of the l-th layer of block b are denoted F^{b,l}, computed either for \hat{x} or for x_s. The total number of feature maps of interest in the current layer is denoted N_{b,l}. Let i and j be indices of feature maps and let k be the index of an element within a feature map. The feature correlations are characterized by the Gram matrix \mathcal{G}^{b,l} \in \mathbb{R}^{N_{b,l} \times N_{b,l}}, whose element \mathcal{G}^{b,l}_{ij} is the inner product of the i-th and j-th feature maps in layer l of block b. Mathematically,

\mathcal{G}^{b,l}_{ij} = \sum_k F^{b,l}_{ik} \, F^{b,l}_{jk} \qquad (7)

The style loss between x_s and \hat{x} during training is defined as:

L_{style} = \sum_b w_b \sum_{l \in L_b} \big\| \mathcal{G}^{b,l}(\hat{x}) - \mathcal{G}^{b,l}(x_s) \big\|_F^2 \qquad (8)

Here \| \cdot \|_F is the matrix Frobenius norm and w_b represents the weight of the b-th block's Gram matrix. Note that, by definition, \mathcal{G}^{b,l}_{ij} = \mathcal{G}^{b,l}_{ji}.
Content loss: The same notation is used for the content loss: b indexes the CNN blocks used for the content representation and l in L_b indexes the layers of block b. We expect the synthesized output \hat{x} to abide by the segmentation pattern of the real (input) image x. To make sure this happens, we encourage the output image to minimize the Frobenius norm of the difference between the input and output CNN features. Mathematically,

L_{content} = \sum_b \sum_{l \in L_b} \big\| F^{b,l}(\hat{x}) - F^{b,l}(x) \big\|_F^2 \qquad (9)

Total variational loss: A total variation loss is incorporated, using the following equation, to encourage spatial smoothness of the generated images.

L_{tv} = \sum_{i,j} \Big( \big(\hat{x}_{i,j+1} - \hat{x}_{i,j}\big)^2 + \big(\hat{x}_{i+1,j} - \hat{x}_{i,j}\big)^2 \Big) \qquad (10)

Here \hat{x}_{i,j} denotes the pixel value at location (i, j) of the generated image \hat{x}, where i and j index the image rows and columns respectively. Combining all three loss functions gives the total style transfer loss L_{st}:

L_{st} = \alpha \, L_{style} + \beta \, L_{content} + \gamma \, L_{tv} \qquad (11)

We now replace L_1(G) in Equation 1 with this style transfer loss L_{st}. The new objective function for the generator G becomes:

L_G = \mathbb{E}_{y,z}\big[-\log D(G(y,z))\big] + \lambda \, L_{st} \qquad (12)

The discriminator objective function remains unchanged (see Equation 4). Style transfer from the input style x_s is obtained through back-propagation optimization of the above objective function.
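A compact sketch of the three style transfer losses in Equations 7 to 11, written with torchvision's pretrained VGG-19 features, is shown below; the choice of layers and the default loss weights are illustrative assumptions rather than the exact configuration of the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Pretrained VGG-19 used as a fixed feature extractor (no gradient updates).
vgg = vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [1, 6, 11, 20, 29]     # assumed: one layer per VGG block
CONTENT_LAYERS = [22]                 # assumed content layer

def vgg_features(x, layer_ids):
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layer_ids:
            feats.append(h)
    return feats

def gram(f):
    # Eq. 7: inner products between feature maps of one layer.
    b, c, hh, ww = f.shape
    f = f.view(b, c, hh * ww)
    return torch.bmm(f, f.transpose(1, 2)) / (c * hh * ww)

def style_transfer_loss(x_hat, x_content, x_style, alpha=1.0, beta=1.0, gamma=1e-4):
    # Style loss (Eq. 8): match Gram matrices of the generated and style images.
    s_loss = sum(F.mse_loss(gram(a), gram(b))
                 for a, b in zip(vgg_features(x_hat, STYLE_LAYERS),
                                 vgg_features(x_style, STYLE_LAYERS)))
    # Content loss (Eq. 9): match raw VGG features of the generated and content images.
    c_loss = sum(F.mse_loss(a, b)
                 for a, b in zip(vgg_features(x_hat, CONTENT_LAYERS),
                                 vgg_features(x_content, CONTENT_LAYERS)))
    # Total variation loss (Eq. 10): penalize differences between neighbouring pixels.
    tv = ((x_hat[:, :, :, 1:] - x_hat[:, :, :, :-1]) ** 2).sum() + \
         ((x_hat[:, :, 1:, :] - x_hat[:, :, :-1, :]) ** 2).sum()
    return alpha * s_loss + beta * c_loss + gamma * tv      # Eq. 11
```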

3 Experimental Setup

3.1 Datasets Preparation

For evaluation of the proposed approach, we use two benchmark datasets: the DRIVE dataset and the STARE dataset. Both datasets cover a broad spectrum of vascular structures in retinal images. The image sizes and the numbers of training examples differ between the datasets: the DRIVE dataset has 20 training examples with an image size of 565x584 pixels, while the STARE dataset has 10 training images with an image size of 700x605 pixels. The images in the two datasets are roughly similar. In the pre-processing stage all images are re-sized to a fixed square resolution. Images in the DRIVE dataset contain a large background area, so they are first cropped to a square sub-image centered on the original one, making sure all the foreground pixels are still contained in the new image, and then re-sized using bi-cubic interpolation. Images in the STARE dataset have rather small background margins (area outside the foreground mask), so they are directly re-sized using bi-cubic interpolation. Pixel values of all input images are scaled to the range -1 to 1, so that the generator learns to generate synthetic images in the same range. In the last stage these images are up-sampled back to their original sizes. The final result is obtained by applying a circular mask to the segmented image so that only the pixels inside the mask are retained as foreground. Figure 4 shows a few fundoscopic images from DRIVE (upper row) and STARE (lower row) along with their ground truths.
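For illustration, a minimal preprocessing sketch along these lines is given below using OpenCV and NumPy; the 512-pixel target size is an assumed placeholder, since the exact working resolution depends on the network configuration.

```python
import cv2
import numpy as np

def preprocess(img, target_size=512):
    """Crop to a centered square, resize with bi-cubic interpolation,
    and scale pixel values to [-1, 1]. target_size is an assumed placeholder."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = img[top:top + side, left:left + side]
    resized = cv2.resize(square, (target_size, target_size),
                         interpolation=cv2.INTER_CUBIC)
    return resized.astype(np.float32) / 127.5 - 1.0     # map [0, 255] -> [-1, 1]

def apply_circular_mask(seg):
    """Keep only pixels inside a circular field-of-view mask as foreground."""
    h, w = seg.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx, r = h / 2.0, w / 2.0, min(h, w) / 2.0
    fov = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return np.where(fov, seg, 0)
```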

Figure 4: (From Top to Bottom) DRIVE Dataset Images with their ground truth and STARE Dataset Images with their ground truth.

3.2 Parameters of proposed model

The 3-D boxes in the generator and discriminator diagrams (Figures 2 and 3) show the CNN layers together with their feature map sizes. The edges of the boxes represent the convolutional or de-convolutional operations, each using a filter whose spatial size is fixed and whose depth is determined by the third dimension of the consecutive layer. The numbers in Figures 2 and 3 specify the intrinsic parameters of the networks; for example, the length of the noise vector is 400. In the generator G, the sign with two inward-pointing directed edges denotes a concatenation operation: at the first concatenation, the tensor from the mirrored encoder layer is concatenated with the tensor of the current decoder layer, and this concatenation is followed by a de-convolution operation which in turn produces the next 3-D box.
We update the generator G twice and then update the discriminator D once during each learning iteration, to balance the overall learning process of the generator and discriminator. During training, noise is sampled element-wise from a zero-mean Gaussian with a standard deviation of 0.001. When we evaluate our algorithm, the standard deviation is changed to 1 and sampling is done in the same manner. Based on our observations, this change in standard deviation helps maintain a proper level of diversity, as we have very small-size data. To improve the training of the generator and discriminator in our model, batch normalization b27 () is used right after every convolutional layer.
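A hedged sketch of this alternating schedule (two generator updates per discriminator update, with the noise standard deviation switched between training and evaluation) is shown below, re-using the generator_loss and discriminator_loss helpers sketched after Equation 4; the optimizer setup and batch handling are assumptions.

```python
import torch

def train_step(G, D, opt_G, opt_D, y, x, noise_std=0.001):
    """One learning iteration: update G twice, then update D once (sketch)."""
    for _ in range(2):                                   # two generator updates
        z = torch.randn(y.size(0), 400) * noise_std      # element-wise zero-mean Gaussian
        x_hat = G(y, z)
        loss_G = generator_loss(D(x_hat), x_hat, x)      # adversarial + L1 terms (Eq. 3)
        opt_G.zero_grad()
        loss_G.backward()
        opt_G.step()

    z = torch.randn(y.size(0), 400) * noise_std          # one discriminator update
    x_hat = G(y, z).detach()
    loss_D = discriminator_loss(D(x), D(x_hat))          # Eq. 4
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

def sample(G, y, noise_std=1.0):
    """At evaluation time the noise standard deviation is increased to 1."""
    z = torch.randn(y.size(0), 400) * noise_std
    return G(y, z)
```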
The VGG-19 network is used to produce the feature descriptors for the style transfer algorithm; its outputs are the style and content features. Different sets of VGG-19 blocks and layers are used for the style loss and for the content loss. The block weight w_b is kept fixed for all blocks and set to 0.2, and the three loss terms in Equation 11 are combined with fixed weights \alpha, \beta and \gamma.
Images are augmented by rotation and left-right flipping, and each image is then normalized to z-scores per channel. These augmented images are divided into training and validation sets with a ratio of 19 to 1. The generator with the lowest validation loss is selected from the trained models. The generator and discriminator are trained for n epochs until convergence. For optimization of the objective function we use the Adam optimizer, with a fixed learning rate and trade-off coefficient \lambda.
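A small sketch of this augmentation and normalization step is given below; the specific rotation angles and the split logic are illustrative assumptions.

```python
import numpy as np

def augment(images):
    """Rotations and left-right flips, followed by per-channel z-score normalization."""
    out = []
    for img in images:
        for k in range(4):                       # assumed: 90-degree rotations
            rot = np.rot90(img, k)
            for candidate in (rot, np.fliplr(rot)):
                mean = candidate.mean(axis=(0, 1), keepdims=True)
                std = candidate.std(axis=(0, 1), keepdims=True) + 1e-8
                out.append((candidate - mean) / std)
    return out

def split_train_val(samples, ratio=19):
    """Split augmented samples into train and validation sets with a 19:1 ratio."""
    n_val = max(1, len(samples) // (ratio + 1))
    return samples[n_val:], samples[:n_val]
```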
All the experimentation is carried out on a standard PC with an Intel Core i5 CPU and a GeForce GTX 1080 GPU with 8 GB memory. We evaluate our technique with the Area Under the Curve for Precision-Recall (AUC PR), the Dice coefficient (F1-score) and the Area Under the Curve for the Receiver Operating Characteristic (AUC ROC). For the calculation of the Dice coefficient, the probability map is thresholded using Otsu thresholding b28 (), which is commonly used to separate foreground from background. For a fair measurement, only pixels inside the Field Of View (FOV) are counted when computing the measures.
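The evaluation can be sketched as follows with scikit-learn and scikit-image; the variable names are assumptions, and only pixels inside the FOV mask are scored.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score
from skimage.filters import threshold_otsu

def evaluate(prob_map, gt_mask, fov_mask):
    """Dice (after Otsu thresholding), AUC ROC and AUC PR inside the FOV."""
    probs = prob_map[fov_mask > 0].ravel()
    truth = (gt_mask[fov_mask > 0].ravel() > 0).astype(np.uint8)

    auc_roc = roc_auc_score(truth, probs)
    auc_pr = average_precision_score(truth, probs)      # approximates area under PR curve

    binary = (probs >= threshold_otsu(probs)).astype(np.uint8)
    intersection = np.sum(binary * truth)
    dice = 2.0 * intersection / (binary.sum() + truth.sum() + 1e-8)
    return dice, auc_roc, auc_pr
```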

Model                        | DRIVE ROC | DRIVE PR | STARE ROC | STARE PR
U-Net b26 ()                 | 0.970     | 0.886    | 0.973     | 0.902
Pixel GAN b36 ()             | 0.971     | 0.889    | 0.967     | 0.897
Patch GAN-1 (10x10) b15 ()   | 0.970     | 0.889    | 0.976     | 0.903
Patch GAN-2 (80x80) b15 ()   | 0.972     | 0.893    | 0.977     | 0.908
Image GAN b35 ()             | 0.980     | 0.914    | 0.983     | 0.916
Table 1: Comparison of different models with different discriminators.
Method               | DRIVE Dice | DRIVE AUC ROC | DRIVE AUC PR | STARE Dice | STARE AUC ROC | STARE AUC PR
Our Method           | 0.832      | 0.984         | 0.916        | 0.838      | 0.985         | 0.922
Kernel Boost b1 ()   | 0.800      | 0.931         | 0.846        | -          | -             | -
N4-Fields b3 ()      | 0.805      | 0.968         | 0.885        | -          | -             | -
DRIU b31 ()          | 0.822      | 0.979         | 0.906        | 0.831      | 0.972         | 0.910
Wavelets b32 ()      | 0.762      | 0.943         | 0.814        | 0.774      | 0.969         | 0.843
HED b33 ()           | 0.796      | 0.969         | 0.877        | 0.805      | 0.976         | 0.888
Human Expert         | 0.791      | -             | -            | 0.76       | -             | -
Table 2: Comparison of the proposed method with other existing techniques on the basis of Dice score, AUC ROC and AUC PR.

4 Results and Discussions

In Table 1, we compare the performance of different models with different discriminators. There is no discriminator in U-Net, so it shows inferior performance compared to Patch GAN and Image GAN. Patch GAN and Image GAN show an improvement in overall segmentation quality, but Image GAN, which has the most powerful discriminator framework, outperforms all the other networks. This result supports the claim that a powerful discriminator framework is key to successful training of networks with GANs b29 (), b30 ().
Table 2 summarizes the Dice coefficients (F1-scores), AUC ROC and AUC PR of our proposed method in comparison with other methods. Our method outperforms all the existing methods and shows better Dice coefficient and AUC values. Our method also surpasses the human annotator on the DRIVE dataset.
A qualitative comparison of segmentation using our method and the best existing method, DRIU (Deep Retinal Image Understanding b31 ()), is illustrated in Figure 5. Our proposed method generates probability values concordant with the gold standard, while DRIU gives overconfident probabilities on the boundaries between vessels and background, as well as on fine vessels. This may cause over-segmentation of the retinal image, resulting in high false positive rates. In contrast, the proposed technique allows more false negatives near the edges and terminal ends of the vessels because it tends to assign low probability to pixels that fall in uncertain regions, much as human annotators do.

Figure 5: Fundoscopic images (first column), probability map of DRIU (second column) and probability map of our method (third column). Top row: DRIVE dataset; bottom row: STARE dataset.

In Figure 6, we show the generated masks (outer boundary), the filamentary structured images and the generated output images. The generated output images are visually close to real ones.

Figure 6: (From left to right) Masks, filamentary structures and Output retinal images

5 Conclusion

In this paper, we have introduced a new Generative Adversarial Network for Medical Imaging (MI-GAN) framework which focuses on retinal vessel image generation and segmentation. The synthesized images are realistic looking and, when used as an additional training dataset, help to enhance image segmentation performance. The proposed model is capable of learning useful features from a small training set; in our case the training set consisted of only 10 examples from each dataset, namely DRIVE and STARE. Our model outperformed other existing models in terms of AUC ROC, AUC PR and Dice coefficient. Our method had a lower false positive rate at fine vessels and drew clearer vessel lines compared to other methods. Future work involves investigating datasets of different biomedical images for the interplay of synthesized images, domain adaptation tasks and segmentation of medical images.

Compliance with Ethical Standards

Funding: No funding declared.
Conflict of Interest: Talha Iqbal declares that he has no conflict of interest. Hazrat Ali declares that he has no conflict of interest.
Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent: Not applicable.

References

  • (1) B. D. de Vos, J. M. Wolterink, P. A. de Jong, M. A. Viergever, I. Išgum, 2d image classification for 3d anatomy localization: employing deep convolutional neural networks, in: Medical Imaging 2016: Image Processing, Vol. 9784, International Society for Optics and Photonics, 2016, p. 97841Y.
  • (2) Y. Cai, M. Landis, D. T. Laidley, A. Kornecki, A. Lum, S. Li, Multi-modal vertebrae recognition using transformed deep convolution network, Computerized medical imaging and graphics 51 (2016) 11–19.
  • (3) H. Chen, D. Ni, J. Qin, S. Li, X. Yang, T. Wang, P. A. Heng, Standard plane localization in fetal ultrasound via domain transferred deep neural networks, IEEE journal of biomedical and health informatics 19 (5) (2015) 1627–1636.
  • (4) A. Kumar, P. Sridar, A. Quinton, R. K. Kumar, D. Feng, R. Nanan, J. Kim, Plane identification in fetal ultrasound images using saliency maps and convolutional neural networks, in: Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on, IEEE, 2016, pp. 791–794.
  • (5) F. C. Ghesu, B. Georgescu, T. Mansi, D. Neumann, J. Hornegger, D. Comaniciu, An artificial agent for anatomical landmark detection in medical images, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 229–237.
  • (6) C. F. Baumgartner, K. Kamnitsas, J. Matthew, S. Smith, B. Kainz, D. Rueckert, Real-time standard scan plane detection and localisation in fetal ultrasound using fully convolutional neural networks, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 203–211.
  • (7) B. Kong, Y. Zhan, M. Shin, T. Denny, S. Zhang, Recognizing end-diastole and end-systole frames via deep temporal regression network, in: International conference on medical image computing and computer-assisted intervention, Springer, 2016, pp. 264–272.
  • (8) A. Barbu, L. Lu, H. Roth, A. Seff, R. M. Summers, An analysis of robust cost functions for cnn in computer-aided diagnosis, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 6 (3) (2018) 253–258.
  • (9) H. R. Roth, L. Lu, J. Liu, J. Yao, A. Seff, K. Cherry, L. Kim, R. M. Summers, Improving computer-aided detection using convolutional neural networks and random view aggregation, IEEE transactions on medical imaging 35 (5) (2016) 1170–1181.
  • (10) A. Teramoto, H. Fujita, O. Yamamuro, T. Tamaki, Automated detection of pulmonary nodules in pet/ct images: Ensemble false-positive reduction using a convolutional neural network technique, Medical physics 43 (6Part1) (2016) 2821–2827.
  • (11) C. Becker, R. Rigamonti, V. Lepetit, P. Fua, Supervised feature learning for curvilinear structure segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2013, pp. 526–533.
  • (12) A. Makhzani, B. J. Frey, Pixelgan autoencoders, in: Advances in Neural Information Processing Systems, 2017, pp. 1972–1982.
  • (13) Y. Ganin, V. Lempitsky, N^4-fields: Neural network nearest neighbor fields for image transforms, in: Asian Conference on Computer Vision, Springer, 2014, pp. 536–551.
  • (14) G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, C. I. Sánchez, A survey on deep learning in medical image analysis, Medical image analysis 42 (2017) 60–88.
  • (15) L. A. Gatys, A. S. Ecker, M. Bethge, A neural algorithm of artistic style, arXiv preprint arXiv:1508.06576.
  • (16) D. Ulyanov, V. Lebedev, A. Vedaldi, V. S. Lempitsky, Texture networks: Feed-forward synthesis of textures and stylized images., in: ICML, 2016, pp. 1349–1357.
  • (17) J. Johnson, A. Alahi, L. Fei-Fei, Perceptual losses for real-time style transfer and super-resolution, in: European Conference on Computer Vision, Springer, 2016, pp. 694–711.
  • (18) C. Kirbas, F. Quek, A review of vessel extraction techniques and algorithms, ACM Computing Surveys (CSUR) 36 (2) (2004) 81–121.
  • (19) D. Lesage, E. D. Angelini, I. Bloch, G. Funka-Lea, A review of 3d vessel lumen segmentation techniques: Models, features and extraction schemes, Medical image analysis 13 (6) (2009) 819–845.
  • (20) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in neural information processing systems, 2014, pp. 2672–2680.
  • (21) C. Szegedy, S. Ioffe, V. Vanhoucke, A. A. Alemi, Inception-v4, inception-resnet and the impact of residual connections on learning., in: AAAI, Vol. 4, 2017, p. 12.
  • (22) C. Ding, Y. Xia, Y. Li, Supervised segmentation of vasculature in retinal images using neural networks, in: Orange Technologies (ICOT), 2014 IEEE International Conference on, IEEE, 2014, pp. 49–52.
  • (23) A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv preprint arXiv:1511.06434.
  • (24) M. Mirza, S. Osindero, Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784.
  • (25) E. L. Denton, S. Chintala, R. Fergus, et al., Deep generative image models using a laplacian pyramid of adversarial networks, in: Advances in neural information processing systems, 2015, pp. 1486–1494.
  • (26) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. C. Courville, Improved training of wasserstein gans, in: Advances in Neural Information Processing Systems, 2017, pp. 5769–5779.
  • (27) H. Peng, M. Hawrylycz, J. Roskams, S. Hill, N. Spruston, E. Meijering, G. A. Ascoli, Bigneuron: large-scale 3d neuron reconstruction from optical microscopy images, Neuron 87 (2) (2015) 252–256.
  • (28) P. Isola, J.-Y. Zhu, T. Zhou, A. A. Efros, Image-to-image translation with conditional adversarial networks, arXiv preprint.
  • (29) J. Zhang, L. Chen, L. Zhuo, X. Liang, J. Li, An efficient hyperspectral image retrieval method: Deep spectral-spatial feature extraction with dcgan and dimensionality reduction using t-sne-based nm hashing, Remote Sensing 10 (2) (2018) 271.
  • (30) L. Wan, M. Zeiler, S. Zhang, Y. Le Cun, R. Fergus, Regularization of neural networks using dropconnect, in: International Conference on Machine Learning, 2013, pp. 1058–1066.
  • (31) X. Wang, A. Gupta, Generative image modeling using style and structure adversarial networks, in: European Conference on Computer Vision, Springer, 2016, pp. 318–335.
  • (32) X. Mao, C. Shen, Y.-B. Yang, Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections, in: Advances in neural information processing systems, 2016, pp. 2802–2810.
  • (33) J.-Y. Zhu, T. Park, P. Isola, A. A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networks, arXiv preprint arXiv:1703.10593.
  • (34) O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer, 2015, pp. 234–241.
  • (35) C. Szegedy, S. Ioffe, V. Vanhoucke, A. A. Alemi, Inception-v4, inception-resnet and the impact of residual connections on learning., in: AAAI, Vol. 4, 2017, p. 12.
  • (36) H. Zhao, H. Li, L. Cheng, Synthesizing filamentary structured images with gans, arXiv preprint arXiv:1706.02185.
  • (37) L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs, IEEE transactions on pattern analysis and machine intelligence 40 (4) (2018) 834–848.
  • (38) Y. Li, N. Wang, J. Shi, J. Liu, X. Hou, Revisiting batch normalization for practical domain adaptation, arXiv preprint arXiv:1603.04779.
  • (39) W. Zhang, W. Li, J. Yan, L. Yu, C. Pan, Adaptive threshold selection for background removal in fringe projection profilometry, Optics and Lasers in Engineering 90 (2017) 209–216.
  • (40) J. Son, S. J. Park, K.-H. Jung, Retinal vessel segmentation in fundoscopic images with generative adversarial networks, arXiv preprint arXiv:1706.09318.
  • (41) K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, L. Van Gool, Deep retinal image understanding, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 140–148.
  • (42) F. Farokhian, C. Yang, H. Demirel, S. Wu, I. Beheshti, Automatic parameters selection of gabor filters with the imperialism competitive algorithm with application to retinal vessel segmentation, Biocybernetics and Biomedical Engineering 37 (1) (2017) 246–254.
  • (43) S. Xie, Z. Tu, Holistically-nested edge detection, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 1395–1403.
  • (44) J. Son, S. J. Park, K.-H. Jung, Retinal vessel segmentation in fundoscopic images with generative adversarial networks, arXiv preprint arXiv:1706.09318.
  • (45) A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv preprint arXiv:1511.06434.