Unsupervised Medical Image Segmentation with Adversarial Networks: From Edge Diagrams to Segmentation Maps

Umaseh Sivanesan
umaseh.sivanesan@medportal.ca
   Luis H. Braga
braga@mcmaster.ca
   Ranil R. Sonnadara
Vector Institute, Toronto
ranil@mcmaster.ca
   Kiret Dhindsa*
Vector Institute, Toronto
dhindsj@mcmaster.ca
*Corresponding Author
Department of Surgery, McMaster University
Abstract

We develop an approach to unsupervised semantic medical image segmentation that extends previous work with generative adversarial networks. We use existing edge detection methods to construct simple edge diagrams, train a generative model to convert them into synthetic medical images, and construct a dataset of synthetic images with known segmentations using variations on extracted edge diagrams. This synthetic dataset is then used to train a supervised image segmentation model. We test our approach on a clinical dataset of kidney ultrasound images and the benchmark ISIC 2018 skin lesion dataset. We show that our unsupervised approach is more accurate than previous unsupervised methods and performs reasonably well compared to supervised image segmentation models. All code and trained models are available at https://github.com/kiretd/Unsupervised-MIseg.

1 Introduction

In vivo medical imaging is one of the primary technologies available for clinical evaluation, diagnosis, and treatment planning. The physical challenge of imaging internal tissues is reflected in the low resolution, low signal-to-noise ratio, and high degree of occlusion seen with many common medical imaging technologies. Using medical images to make accurate and meaningful clinical decisions requires substantial training and experience combined with a large body of medical knowledge. As a result, current medical practice places a significant burden on highly trained clinicians specialized in interpreting medical images [46, 55].

A fundamental step in medical image analysis is to identify a region of interest, i.e., segmentation. This typically means identifying a bounding region that separates an organ or abnormality from other tissue in the image. For human readers, segmentation allows the extraction of clinically important metrics, such as volume, and the planning of radiation therapy or surgical removal. In computer-aided diagnosis (CAD), organ and tissue segmentation allows computer vision models to focus their feature extraction or feature learning computation on the clinically relevant tissue, allowing for more computationally efficient models that are better able to avoid extraneous information in the data [20]. Manually performing these segmentations is time-consuming, expensive, and subjective, leading to a major research effort in developing algorithms that can efficiently perform accurate and reliable semantic segmentation of medical images (i.e., segmentation by associating pixels or regions of the image with a classification label).

We present an approach to organ and tissue segmentation based on the use of Generative Adversarial Networks (GANs) to generate a labelled synthetic training set in the absence of ground truth labels for real medical images. We circumvent the need for labelled real images by generating medical images from simplistic and arbitrary edge diagrams. We then use the synthetic training set to train supervised segmentation models, which are then applied to real images. We evaluate our approach using two datasets: a dataset of ultrasound images for which the task is to segment the kidney, and the ISIC 2018 Skin Lesion Analysis competition dataset, for which the task is to segment skin lesions in dermoscopic images.

The main contributions of this work are as follows:

  • We demonstrate a novel form of data augmentation by using GANs to generate labelled training data from edge diagrams for applications in which we can exploit a common geometry that is inherent to the semantic segmentation task itself.

  • We show that GANs can generate reasonable synthetic medical images with corresponding organ segmentation maps from just edge diagrams.

  • By generating data using edge diagrams, we show that we can obtain accurate and reliable organ segmentation in a fully unsupervised way, with the option of semi-supervised training if labelled data are available.

2 Related Work

Traditional approaches to algorithmic image segmentation were largely unsupervised, i.e., they did not rely on ground truth (clinician-supplied) segmentations to train a model. A variety of such methods were developed in previous decades (for example, methods based on edge detection [7], region growing [2], contour modelling [28], and texture analysis [39]); however, these typically relied on built-in constraints about object appearance or differences in contrast or intensity between regions of interest and background pixels. Such constraints do not always work well for medical images, particularly for imaging modalities that produce lower quality images (e.g., ultrasound imaging), or for regions of the body where multiple organ and tissue types are imaged together.

To overcome the shortcomings of these earlier approaches, modern image segmentation techniques often rely on supervised learning with deep neural networks and large amounts of labelled training data. These models are capable of performing semantic segmentation, and thus can categorize regions of images based on meaningful labels provided by a clinician. The most common approach of the last few years has been based on convolutional neural networks (CNNs), which have been widely demonstrated to be successful for many kinds of computer vision tasks [59, 64, 17, 37].

Key developments in semantic segmentation have been based on variations of CNNs. The Fully Convolutional Network (FCN) omitted the fully connected layers used in standard CNNs, which are used to obtain a pixel-wise grouping label, and instead used deconvolution layers to obtain segmentation probability maps for images [35]. A similar idea based on encoder-decoder networks was developed by deconvolving VGG16 [50], a CNN pretrained on the ImageNet dataset [13] that is sometimes used as a starting point for specific medical imaging problems (e.g., [36, 29]). In order to take greater advantage of the spatial correlations between pixels that should be grouped together, CNNs have also been combined with Conditional Random Fields (CRFs) [33].

Different forms of these CNN approaches have dominated the field of medical imaging segmentation as well [40, 66, 32, 16, 6, 11]. In particular, a specific instance of FCNs, U-net [47], has performed well for a variety of medical imaging segmentation tasks (e.g., [9]). Due to its success, it has since been extended in many ways: for 3D images [10], with an attention mechanism [41], with a pretrained VGG11 encoder [25], and so on.

Two major limitations reduce the utility of the CNN approaches described above: 1) they are trained explicitly to minimize pixel-wise segmentation error and therefore typically require significant post-processing of their outputs in order to obtain solutions that are spatially contiguous, and 2) they require ground truth segmentations for training, which can be very difficult and expensive to obtain on the scale that is required for effective deep learning. While some recent methods have been able to address the first limitation by training for scalable spatial coherence using patch learning with multi-scale loss functions [21, 27, 44], they do not address the need for manually segmented training images. Here we propose the use of a GAN to create synthetic training data that can be used to train supervised image segmentation models when no labelled training data are available, thus allowing for unsupervised medical image segmentation.

GANs have been formulated as image-to-image translation architectures that take paired images as input [26], and thus have been successfully applied to semantic segmentation by training them on pairs of images with their corresponding ground truth segmentations. This has been done in a fully supervised manner [38, 62, 52, 63, 40] and in a semi-supervised or weakly supervised manner [23]. Most interestingly, researchers have taken advantage of the fact that GANs, by their nature, can be used to generate synthetic data as a form of data augmentation [49]. Using this approach, GANs can be trained with a relatively low number of image–segmentation pairs to generate additional training data for a DualGAN semantic segmentation model [53, 19], or a fully supervised model like U-net [49]. However, these approaches, like the previous CNN-based models, are limited by the fact that ground truth segmentations are required to train the GANs in the first place.

Different approaches have been taken to overcome the need for segmentation labels during training. W-net pairs two U-nets to form a deep auto-encoder that can be used in combination with a CRF algorithm for scene decomposition [61]. In contrast, co-segmentation approaches exploit feature similarity for multiple instances of same-class objects in an image, which is suitable for certain kinds of segmentation tasks with distinct ROIs [24]. Recent recomposition approaches based on generative modelling (e.g., SEIGAN [42]) segment foreground objects by moving them to similar background images. Perhaps most similar to ours is a very recent approach, ReDO [8], that performs scene decomposition following region-wise composition using a GAN, based on the assumption that the different objects composing a scene are statistically independent with respect to certain properties, such as colour and texture.

All of the above approaches assume that the target ROI for segmentation is easily distinguishable from the rest of the image along some feature dimensions, such as brightness or colour, and therefore try to define or learn the properties that distinguish regions of the image. In many medical imaging applications, this is extremely difficult to do, as there may not be a set of learnable properties that support the task. In the case of organ segmentation, as demonstrated with the kidney ultrasound dataset presented here, a clinical expert would typically rely heavily on prior anatomical knowledge and experience, which provides an expectation of the contours of the kidney in the absence of a clear boundary. For this reason, non-expert humans are likely to fail at this particular task (see Figure 5). We overcome this challenge using a generative process to learn an expectation of the shape of the ROI in the data generation phase, as described below.

3 Methods

3.1 Overview of our Approach

Here we propose a way of extending this previous work to generate synthetic training data using GANs in a fully unsupervised way for applications in which there is an expected segmentation geometry that can serve as a prior. It is based on the assumption that there exists a simple template structure that can be exploited to generate simple diagrams with known segmentations, what we call edge diagrams, from which a GAN can generate sufficiently realistic (and similarly challenging) training images. As long as reasonable edge diagrams can be extracted from the original images to train the GAN, and new edge diagrams can be constructed using variations on the template structure as the ground truth segmentations, then synthetic training data can be generated with known segmentations.

Our approach follows a simple recipe. First we generate simple edge diagrams from real unlabelled training images using available computer vision techniques. We use the corresponding image-diagram pairs to train a GAN to produce synthetic medical images from the edge diagrams. We then use a simple algorithm to generate variations of these edge diagrams with known ROIs, and use the trained GAN to synthesize new images from these new edge diagrams. Finally, we use these new purely synthetic image-segmentation pairs to train a supervised image segmentation model that can be used to identify ROIs in real medical images. The entire approach is illustrated in Figure 6.

3.2 Dataset 1: Renal Ultrasound Images

We use a dataset of renal ultrasound images developed for the study of prenatal hydronephrosis, a congenital kidney disorder marked by excessive and potentially dangerous fluid retention in the kidneys [14]. The dataset consists of 2492 2D sagittal kidney ultrasound images from 773 patients across multiple hospital visits. This is a difficult dataset for image segmentation due to poor image quality, unclear kidney contours, and the large variation introduced by different degrees of hydronephrosis (see Figure 5). In addition, a major challenge of this dataset is that the two most salient boundaries, the outer ultrasound cone inherent to imaging with an ultrasound probe and the dark inner region of the kidney caused by fluid retention in hydronephrosis, are both misleading with respect to segmenting the kidney.

(a) Grade 1
(b) Grade 2
(c) Grade 3
(d) Grade 4
Figure 5: Examples from the kidney ultrasound dataset with different hydronephrosis severity grades, from 1 (low severity) to 4 (severe hydronephrosis).

3.3 Dataset 2: Skin Lesion Segmentation

We use the ISIC 2018 Challenge dataset to evaluate our model with respect to Task 1 of the challenge: Lesion Boundary Segmentation [12, 58]. By showing that our approach is also successful on this benchmark dataset, we show that the method is not limited to only one domain and imaging modality.

3.4 Image Preprocessing

We follow a methodology similar to that described in [14] for preprocessing renal ultrasound images for deep learning. We crop the images to remove white borders, despeckle them to remove speckle noise caused by interference with the ultrasound probe during imaging [56], and re-scale them to 256×256 pixels for consistency. We remove text annotations made by clinicians using the pre-trained Efficient and Accurate Scene Text Detector (EAST) [67]. We then normalize the pixel intensity of each image to the range 0 to 1 after trimming pixel intensities to between the 2nd and 98th percentiles of the original intensities across the image. In addition, we enhance the contrast of each image using Contrast Limited Adaptive Histogram Equalization with a clip limit of 0.03 [45]. Finally, we normalize the images by the mean and standard deviation of the training set during cross-validation. The results of preprocessing can be seen in the example given in Figure 6.
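As a concrete illustration of the intensity normalization step, here is a minimal numpy sketch (the function name is ours and the percentile defaults mirror the description above; cropping, despeckling, EAST text removal, and CLAHE are separate library-backed steps not shown):

```python
import numpy as np

def normalize_intensity(img, lo_pct=2, hi_pct=98):
    """Trim pixel intensities to the [lo_pct, hi_pct] percentile range,
    then rescale to [0, 1], as described for the ultrasound images."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    clipped = np.clip(img.astype(np.float64), lo, hi)
    if hi <= lo:                      # flat image: nothing to rescale
        return np.zeros_like(clipped)
    return (clipped - lo) / (hi - lo)
```

Clipping before rescaling makes the normalization robust to the extreme bright and dark outliers common in ultrasound images.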

We perform no preprocessing for the ISIC skin lesion images other than to resize them to 256×256 pixels.

3.5 Creating Edge Diagrams for Training

3.5.1 Ultrasound Images

To obtain edge diagrams from real medical images, we start with a rough edge map given by a pre-trained edge detector [34] that uses richer convolutional features (RCF) with the VGG16 architecture [50], which we then refine using non-maximum suppression with Structured Forests [15] for edge thinning (as recommended by the authors of RCF). In order to simplify the edge map and remove non-zero pixels that do not belong to the ROI, we downscale the image to 32×32 pixels, remove any regions with an area smaller than 3 pixels, and skeletonize the image [65].
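The small-region removal step can be sketched in plain numpy/Python with an 8-connected flood fill (in practice a library routine such as skimage.morphology.remove_small_objects would do this; the sketch and function name below are ours, included only to make the step explicit):

```python
import numpy as np
from collections import deque

def remove_small_regions(mask, min_area=3):
    """Remove connected components (8-connectivity) smaller than
    min_area pixels from a binary edge map."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # flood-fill this component and record its pixels
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if len(comp) >= min_area:  # keep only large-enough regions
                    for y, x in comp:
                        out[y, x] = True
    return out
```

Skeletonization then reduces the surviving regions to one-pixel-wide curves before upscaling.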

Since the edge diagrams are simplistic, synthetic edge diagrams can be generated in a variety of ways (e.g., they can be drawn by hand if desired). For the results presented here, we train a Variational Autoencoder (VAE) [31] to learn a latent space representing edge diagrams obtained from real images. While this model can generate synthetic edge diagrams, it does not directly provide a known ground truth segmentation. We therefore use Otsu’s method [43, 48] to extract just the outer profile of the edge diagram, which corresponds to the ultrasound cone (the outer profile of ultrasound images produced by the ultrasound probe). We then generate a ground truth segmentation inside the cone of the synthetic edge diagram to ensure that we know every pixel belonging to the desired segmentation mask.

To generate the ground truth segmentation representing the kidney ROI, we compute a random ellipse with a random origin, rotation, and major and minor axes within the bounds of the 32×32 pixel edge diagram. We draw randomly selected arcs from the ellipse so as to leave gaps in the kidney outline, simulating occlusion of the kidney boundary. We then also draw an arc inside the ellipse roughly parallel to the major axis to represent the renal pelvis. Finally, we add some noise in the form of random pixels inside the ellipse. Both the extracted and synthetic edge diagrams are rescaled up to 256×256 pixels for training the GAN.
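A hedged sketch of this ground-truth generator is given below. The parameter ranges, the function name, and the omission of the renal-pelvis arc and pixel noise are illustrative simplifications, not the exact values used in our experiments; the key point is that the edge diagram and its filled segmentation mask come from the same known ellipse:

```python
import numpy as np

def synthetic_kidney_diagram(size=32, rng=None):
    """Draw a random ellipse outline with gaps (simulating an occluded
    kidney boundary) and return it with its exactly-known filled mask."""
    if rng is None:
        rng = np.random.default_rng()
    cy, cx = rng.uniform(size * 0.3, size * 0.7, 2)   # random origin
    a, b = rng.uniform(size * 0.15, size * 0.35, 2)   # semi-axes
    phi = rng.uniform(0, np.pi)                       # rotation
    t = np.linspace(0, 2 * np.pi, 400)
    # leave random gaps in the outline by masking out arc segments
    keep = np.ones_like(t, dtype=bool)
    for _ in range(rng.integers(1, 4)):
        start = rng.uniform(0, 2 * np.pi)
        keep &= ~((t > start) & (t < start + rng.uniform(0.2, 0.8)))
    ys = cy + a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi)
    xs = cx + a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi)
    edge = np.zeros((size, size), dtype=np.uint8)
    ok = keep & (ys >= 0) & (ys < size) & (xs >= 0) & (xs < size)
    edge[ys[ok].astype(int), xs[ok].astype(int)] = 1
    # filled ellipse = the known segmentation label for the synthetic image
    yy, xx = np.mgrid[0:size, 0:size]
    u = (yy - cy) * np.cos(phi) + (xx - cx) * np.sin(phi)
    v = -(yy - cy) * np.sin(phi) + (xx - cx) * np.cos(phi)
    mask = ((u / a) ** 2 + (v / b) ** 2 <= 1).astype(np.uint8)
    return edge, mask
```

Because the mask is derived analytically from the same ellipse parameters, every synthetic image inherits a pixel-perfect label for free.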

Note that in many medical imaging applications, the entire process involving the VAE may be skipped and only the ground truth segmentation is needed (as we do with the ISIC 2018 dataset). We specifically include the cone and the segmentation in the synthetic edge diagrams for ultrasound images to ensure the GAN generates synthetic ultrasound images with cone profiles, thus preventing the later segmentation model from learning to only segment the outer cone.

3.5.2 Other Medical Images

The same process was used to generate synthetic edge diagrams for the skin lesion images, with two notable exceptions: no cone was created, and random lines, arcs, and smaller ellipses were added to some synthetic edge diagrams to mimic the presence of rulers, pen marks, hairs, and other objects that sometimes appeared in the real images.

In principle, any method that produces edge diagrams with known segmentations and enough variation can be used. The methods described here are included for reproducibility rather than methodological necessity.

3.6 Generative Adversarial Networks

The conventional GAN [18] uses the loss function

$$\min_{\theta_G}\max_{\theta_D} L(\theta_G,\theta_D) = \mathbb{E}_{x\sim P_X}[\log(D(x))] + \mathbb{E}_{z\sim P_Z}[\log(1-D(G(z)))],$$

where $\theta_G$ and $\theta_D$ are the parameters for the generator $G$ and discriminator $D$, $x$ is a real image from our set of real ultrasound images with unknown distribution $P_X$, and $z$ is a random noise vector drawn from some defined probability distribution $P_Z$ (in this case, a Gaussian distribution). Training the GAN involves setting the generator and discriminator in competition with one another: the generator is trained to minimize the objective function by generating images that are indistinguishable from the real training images, and the discriminator is trained to maximize the objective function by learning to distinguish the images synthesized by the generator from real training images. For this work we use the pix2pixHD architecture [60].
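To make the objective concrete, the following sketch evaluates the minimax value for a batch of discriminator outputs (the function name and inputs are illustrative, not from the original implementation): the discriminator ascends this value, while the generator descends it through the second term.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Value of the minimax objective L for a batch, given the
    discriminator's outputs D(x) on real images and D(G(z)) on
    generated images (both in the open interval (0, 1))."""
    d_real = np.asarray(d_real, dtype=np.float64)
    d_fake = np.asarray(d_fake, dtype=np.float64)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

At the theoretical equilibrium the discriminator outputs 0.5 everywhere and the value is $2\log(1/2)$.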

3.6.1 pix2pixHD Architecture

This architecture uses two subnetworks to create a coarse-to-fine generator that can upscale image quality during image-to-image translation, and three multiscale discriminators to address the need to discriminate between high-resolution synthetic images and real images while keeping the network size and memory requirements relatively low. Training the entire network uses a loss function that extends the GAN objective above to multiple discriminators by summing over the discriminators to obtain

$$\min_{\theta_G}\left(\left(\max_{\theta_{D_1},\theta_{D_2},\theta_{D_3}} \sum_{k=1}^{3} L(\theta_G,\theta_{D_k})\right) + \lambda\sum_{k=1}^{3} L_{FM}(\theta_G,\theta_{D_k})\right),$$

where $\lambda$ is a parameter used to balance the influence of each term of the loss function. Here, $L_{FM}$ is the layer-wise feature matching loss that is incorporated to account for the fact that the generator must now model data distributions at multiple scales:

$$L_{FM}(\theta_G,\theta_{D_k}) = \mathbb{E}_{(s,x)}\sum_{i=1}^{T}\frac{1}{N_i}\left\|D_k^{(i)}(s,x) - D_k^{(i)}(s,G(s))\right\|_1, \qquad (1)$$

where $s$ is the input edge diagram, $T$ is the number of layers, and $N_i$ is the number of units in layer $i$. In this work, we are not upscaling the resolution of images, but we find pix2pixHD to also be valuable for translating from a simple image (our edge diagrams) to more complex images (medical images).
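As an illustration of the feature matching term, here is a minimal numpy sketch for a single discriminator, assuming the per-layer feature arrays have already been extracted (function and variable names are ours):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Sketch of L_FM for one discriminator D_k: sum over layers of the
    L1 distance between features of the (diagram, real image) pair and
    the (diagram, generated image) pair, each weighted by 1/N_i, where
    N_i is the number of units (elements) in layer i."""
    total = 0.0
    for fr, ff in zip(real_feats, fake_feats):
        fr = np.asarray(fr, dtype=np.float64)
        ff = np.asarray(ff, dtype=np.float64)
        total += np.abs(fr - ff).sum() / fr.size   # ||.||_1 / N_i
    return total
```

Matching intermediate discriminator features, rather than only the final real/fake score, gives the generator a denser training signal at every scale.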

Figure 6: The proposed unsupervised image segmentation pipeline.

3.7 Training

3.7.1 GAN

For our ultrasound images, a trained surgical urologist provided segmentations for 491 images (approximately evenly split by class; range: 96-100). We reserve those images for evaluation (i.e., they are not used to train any model). We additionally remove any training images taken from patients who are also represented in the evaluation set, to avoid overfitting due to subject-specific characteristics. In total, we use 918 images to train the GAN with 20% used for validation. From these, we create a synthetic training set of 2000 image-segmentation pairs.

For the ISIC 2018 dataset, 2075 images are used for training and 519 are used for evaluation. Using these data, we generate 3000 synthetic image-segmentation pairs for training and 750 for validation. For both datasets, we generated as many images as required until segmentation accuracy on the validation set no longer improved.

We train our implementation of pix2pixHD using the same settings given in [60] and choose the parameters corresponding to the epoch that minimizes the Fréchet Inception Distance (FID) with respect to the validation data [22]. This is the 90th epoch for the ultrasound dataset, and the 100th epoch for the skin lesion dataset.

3.7.2 VAE

The VAE we use to generate ultrasound cones for synthetic edge diagrams is shown in Figure 7. We train the VAE over 40 epochs and a batch size of 128 using the adaptive moment estimator (Adam) [30] and the Kullback-Leibler divergence loss.

Figure 7: VAE model architecture for generating ultrasound cones.

3.7.3 U-Net

We use the U-net architecture defined in [47] to train a segmentation model for the ultrasound dataset. However, we use the sum of the pixel-wise binary cross-entropy and the Dice coefficient loss as our loss function. We use Adam for optimization with a batch size of 1. Finally, we perform data augmentation with horizontal flips (50% probability) and horizontal and vertical translations of up to 26 pixels (10%).
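Reading the Dice term as (1 − Dice), so that minimizing the sum improves overlap, the per-image loss can be sketched as follows (a numpy illustration with a name of our choosing; the exact formulation in the implementation may differ):

```python
import numpy as np

def bce_dice_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy plus a Dice term (1 - Dice),
    for a predicted probability map and a binary target mask."""
    pred = np.clip(np.asarray(pred, dtype=np.float64), eps, 1 - eps)
    target = np.asarray(target, dtype=np.float64)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    dice = (2 * (pred * target).sum() + eps) / (pred.sum() + target.sum() + eps)
    return bce + (1 - dice)
```

The BCE term penalizes each pixel independently, while the Dice term rewards global overlap between the predicted and true masks, which helps with the class imbalance between foreground and background pixels.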

3.7.4 Mask-RCNN

We use the Mask-RCNN implementation provided in [1], adjusted for the ISIC 2018 dataset. We use anchor sizes of and 32 training ROIs per image. Other hyperparameters were kept at their default values. We perform data augmentation with both horizontal and vertical flips (50% probability), rotation of or , and a Gaussian blur of up to 5 standard deviations.

3.8 Evaluation

We evaluate our model using three standard metrics: the F1 score, mean intersection over union (mIoU), and pixel-wise classification accuracy (pACC).
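These metrics can be computed directly from a predicted and ground-truth binary mask, as in the following numpy sketch (function name is ours; F1 here is equivalent to the Dice score):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """F1 (Dice), IoU, and pixel accuracy for a pair of binary masks."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    tp = (pred & gt).sum()       # true positive pixels
    fp = (pred & ~gt).sum()      # false positives
    fn = (~pred & gt).sum()      # false negatives
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    pacc = (pred == gt).mean()
    return f1, iou, pacc
```

mIoU and pACC as reported in the tables are these per-image values averaged over the evaluation set.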

3.8.1 Comparison with W-net

We train W-net with the soft normalized cut term in the loss function [61]. In addition, we perform the recommended post-processing of the W-net generated segmentation maps using a fully-connected CRF for edge recovery, and hierarchical image segmentation for contour grouping [4].

3.8.2 Mask Extraction from Clinician-Provided Kidney Segmentations

The clinician-provided segmentations were drawn as imprecise outlines on the ultrasound images (see Figure 10), and therefore could not be used to generate masks in a simple and direct way. We therefore use OpenCV [5] to convert these segmentations to masks.

For each clinician-provided segmentation, we first compute its difference with the original unsegmented ultrasound image. Since some background noise is retained in most images, we use an adaptive threshold to convert the difference image $D$ to a binary image $B$ using the following formula:

$$B(i,j) = \begin{cases} 1 & \text{if } D(i,j) > \mu(i,j) \\ 0 & \text{otherwise,} \end{cases} \qquad (2)$$

where $\mu(i,j)$ is the mean in the pixel neighbourhood around each pixel $(i,j)$, computed from the difference image $D$.

We then use a border-following algorithm [54] with Teh-Chin chain approximation [57] to identify contours from the binary images. Contours with an area of less than 25 pixels are removed as noise. We compute and fill the convex hull of the remaining contours using the Sklansky algorithm [51]. We use these as the ground truth masks for evaluating segmentation performance.
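The adaptive thresholding step (Eq. 2) can be sketched with an integral-image local mean, as below; in practice OpenCV's adaptiveThreshold, findContours, and convexHull cover the full pipeline, so this numpy version is only illustrative (the block size is an assumed parameter and must be odd):

```python
import numpy as np

def local_mean(img, block=15):
    """Mean of the odd-sized block x block neighbourhood around each
    pixel, via an integral image with edge-replicated padding."""
    pad = block // 2
    p = np.pad(img.astype(np.float64), pad + 1, mode='edge')
    ii = p.cumsum(0).cumsum(1)      # integral image of the padded array
    h, w = img.shape
    s = (ii[block:block + h, block:block + w] - ii[:h, block:block + w]
         - ii[block:block + h, :w] + ii[:h, :w])
    return s / block ** 2

def adaptive_binarize(diff, block=15):
    """Eq. (2): 1 where the difference image exceeds the mean of its
    local neighbourhood, 0 elsewhere."""
    return (diff > local_mean(diff, block)).astype(np.uint8)
```

The local mean adapts the threshold to regional brightness, so the clinician's outline survives binarization even where the background intensity varies across the image.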

(a) Successful mask extraction
(b) Unsuccessful mask extraction
Figure 10: A successful and unsuccessful example of mask extraction from clinician-provided kidney segmentations. From left to right, panel 1 shows the original image with the kidney outlined by the clinician, panel 2 shows the difference between panel 1 and the original image from our database without the outline, panel 3 shows the difference image thresholded by pixel value, panel 4 shows the convex hull of the thresholded image in panel 3, and panel 5 shows the mask obtained by filling in the convex hull in panel 4.

Following this procedure, the masked images are visually inspected and compared to the clinician-provided segmentations, and those masks that deviate significantly from the clinician’s segmentations (e.g., because additional annotations were added to the segmented images, as seen in Figure 10) are omitted from further analysis. In total, 53 images are removed (438 are used for evaluation), and the class distribution remains relatively even (range: 83-93 per class).

4 Results

4.1 Synthetic Image Generation

To illustrate the similarity between real and synthetic images, we show a random sample of real and generated kidney ultrasound images in Figure 11, and a random sample of real and generated dermoscopic images in Figure 12. Since our goal is to generate images that are similar enough for generalizable training of a segmentation model, our approach does not produce state-of-the-art synthetic image generation; instead, it produces images that have similar segmentation properties.

Figure 11: Real (first four columns) and generated (last four columns) kidney ultrasound images.
Figure 12: Real (first four columns) and generated (last four columns) dermoscopic images.

4.2 Kidney Segmentation Performance

In Figure 13 we show the kidney segmentation masks learned through our fully unsupervised approach (with U-net as the segmentation model), compared with a purely supervised U-net and a purely unsupervised W-net. In Table 1 we show the corresponding segmentation performance metrics. For the semi-supervised extensions of our approach, we train a U-net using real and synthetic ultrasound images in a standard training protocol (U-net), and we also train a U-net using just the synthetic data followed by supervised fine-tuning with 45 of the real images with clinician segmentations, which were then removed from the evaluation set (U-net+).

Figure 13: Kidney segmentation masks comparing our unsupervised method (blue) to a supervised U-Net (red) and the clinician-provided ground truth (green). Top row: images with high agreement between models. Middle row: images with moderate agreement between models. Bottom row: images with poor agreement between models (specific cases where the unsupervised approach fails).
Model                F1           Specificity  Sensitivity  mIoU         pACC
Unsupervised
  Ours (U-net)       0.81 (0.09)  0.92 (0.05)  0.87 (0.14)  0.69 (0.12)  0.90 (0.05)
  W-net              0.46 (0.10)  0.20 (0.05)  0.98 (0.02)  0.41 (0.07)  0.41 (0.07)
Semi-supervised
  Ours (U-net)       0.87 (0.11)  0.97 (0.04)  0.86 (0.13)  0.78 (0.13)  0.93 (0.05)
  Ours (U-net+)      0.88 (0.08)  0.97 (0.03)  0.88 (0.09)  0.80 (0.11)  0.94 (0.04)
Supervised
  U-net              0.91 (0.09)  0.97 (0.04)  0.90 (0.10)  0.84 (0.10)  0.95 (0.03)
Table 1: Performance metrics for ultrasound kidney segmentation.

4.3 Skin Lesion Segmentation Performance

Performance metrics on the ISIC 2018 dataset using our unsupervised approach are shown in Table 2, along with results obtained by the competition winner and the current top submission. Here we use the metrics given by the online submission system, which include a thresholded mIoU (th-mIoU). This metric sets all per-image IoU scores that are less than 0.65 to 0 before computing the mean IoU. Semi-supervised results are not available because the ISIC 2018 test submission page was removed while preparing this manuscript, and the test set is not currently available. Examples of the output masks on validation images are shown in Figure 14.
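The thresholded mIoU can be reproduced from per-image IoU scores as follows (function name is ours):

```python
import numpy as np

def thresholded_miou(ious, thresh=0.65):
    """ISIC 2018 thresholded mIoU: per-image IoU scores below the
    threshold are set to 0 before averaging."""
    ious = np.asarray(ious, dtype=np.float64)
    return np.where(ious < thresh, 0.0, ious).mean()
```

Zeroing sub-threshold scores penalizes outright segmentation failures much more heavily than plain mIoU does.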

Model                  F1     Specificity  Sensitivity  mIoU   th-mIoU  pACC
Unsupervised
  Ours (Mask-RCNN)     0.830  0.947        0.835        0.753  0.683    0.904
  Ali et al. 2019 [3]  0.543  n/a          n/a          0.440  n/a      n/a
Supervised
  Mask-RCNN            0.882  0.950        0.922        0.811  0.763    0.936
  Winner               0.898  0.963        0.906        0.838  0.802    0.942
  Current Top          0.915  0.941        0.956        0.852  0.836    0.954
Table 2: Performance metrics for ISIC 2018 skin lesion boundary segmentation.
Figure 14: Skin lesion segmentation masks comparing our unsupervised method (blue) to a supervised Mask-RCNN (red). Top row: images with high agreement between models. Middle row: images with moderate agreement between models. Bottom row: images with poor agreement between models (specific cases where the unsupervised approach fails). Ground truth segmentation not available for ISIC 2018 test images.

5 Discussion

We present an unsupervised approach to semantic medical image segmentation that takes advantage of recent advances in image synthesis and generative modelling by making assumptions about the common geometry inherent to an object of interest. This method performs better than some previous unsupervised methods that fit the problem definition (e.g., W-net), or for which results are available (e.g., the CNN-based approach in [3]). For example, W-net performs poorly on the kidney segmentation task because it only identifies the ultrasound cone itself, rather than the kidney. We also show that our approach performs nearly as well as supervised methods for most images. Importantly, we show that with just a few training examples for supervised fine-tuning (here, only 10% of the data used for the supervised models), we approach the segmentation performance of purely supervised models.

Our method tends towards identifying larger ROIs that contain the desired ROI, which results in high specificity (0.92 and 0.947 for the kidney dataset and ISIC 2018 respectively) and only moderate sensitivity (0.87 and 0.835). For both datasets, the model fails for a small subset of the images. In the case of the kidney dataset, we find no clear pattern to explain the failed images. However, in the case of ISIC 2018 images, the unsupervised model does poorly with images that contain a lens or film placed on top of the skin lesion (in these cases, the model incorrectly segments the lens instead of the skin lesion underneath).

Interestingly, even though we construct edge diagrams based on smooth and convex shapes for image synthesis, the resulting segmentation models are able to fit non-smooth and non-convex boundaries. It is possible that alternative methods for generating edge diagrams with greater complexity may lead to a more flexible model that can adapt to more complex geometries. We are currently exploring the utility of this method in segmenting organs with more complex geometries, segmenting multiple objects per image, and performing 3D segmentation. We are also currently exploring adaptations of our approach that make it more end-to-end, e.g., by using multiple GANs.

References

  • [1] Waleed Abdulla. Mask r-cnn for object detection and instance segmentation on keras and tensorflow. https://github.com/matterport/Mask_RCNN, 2017.
  • [2] Rolf Adams and Leanne Bischof. Seeded region growing. IEEE Transactions on pattern analysis and machine intelligence, 16(6):641–647, 1994.
  • [3] Abder-Rahman Ali, Jingpeng Li, and Thomas Trappenberg. Supervised versus unsupervised deep learning based methods for skin lesion segmentation in dermoscopy images. In Canadian Conference on Artificial Intelligence, pages 373–379. Springer, 2019.
  • [4] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, May 2011.
  • [5] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.
  • [6] Tom Brosch, Lisa YW Tang, Youngjin Yoo, David KB Li, Anthony Traboulsee, and Roger Tam. Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation. IEEE transactions on medical imaging, 35(5):1229–1239, 2016.
  • [7] John Canny. A computational approach to edge detection. In Readings in computer vision, pages 184–203. Elsevier, 1987.
  • [8] Mickaël Chen, Thierry Artières, and Ludovic Denoyer. Unsupervised object segmentation by redrawing. arXiv preprint arXiv:1905.13539, 2019.
  • [9] Patrick Ferdinand Christ, Mohamed Ezzeldin A Elshaer, Florian Ettlinger, Sunil Tatavarty, Marc Bickel, Patrick Bilic, Markus Rempfler, Marco Armbruster, Felix Hofmann, Melvin D’Anastasi, et al. Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 415–423. Springer, 2016.
  • [10] Özgün Çiçek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pages 424–432. Springer, 2016.
  • [11] Dan C Cireşan, Alessandro Giusti, Luca M Gambardella, and Jürgen Schmidhuber. Mitosis detection in breast cancer histology images with deep neural networks. In International Conference on Medical Image Computing and Computer-assisted Intervention, pages 411–418. Springer, 2013.
  • [12] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv preprint arXiv:1902.03368, 2019.
  • [13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
  • [14] Kiret Dhindsa, Lauren C Smail, Melissa McGrath, Luis H Braga, Suzanna Becker, and Ranil R Sonnadara. Grading prenatal hydronephrosis from ultrasound imaging using deep convolutional neural networks. In 2018 15th Conference on Computer and Robot Vision (CRV), pages 80–87. IEEE, 2018.
  • [15] Piotr Dollár and C Lawrence Zitnick. Fast edge detection using structured forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(8):1558–1570, 2014.
  • [16] Qi Dou, Hao Chen, Lequan Yu, Lei Zhao, Jing Qin, Defeng Wang, Vincent CT Mok, Lin Shi, and Pheng-Ann Heng. Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Transactions on Medical Imaging, 35(5):1182–1195, 2016.
  • [17] Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor Villena-Martinez, and Jose Garcia-Rodriguez. A review on deep learning techniques applied to semantic segmentation. arXiv preprint arXiv:1704.06857, 2017.
  • [18] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [19] John T Guibas, Tejpal S Virdi, and Peter S Li. Synthetic medical images from dual generative adversarial networks. arXiv preprint arXiv:1709.01872, 2018.
  • [20] Yanming Guo, Yu Liu, Theodoros Georgiou, and Michael S Lew. A review of semantic segmentation using deep neural networks. International journal of multimedia information retrieval, 7(2):87–93, 2018.
  • [21] Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, and Hugo Larochelle. Brain tumor segmentation with deep neural networks. Medical image analysis, 35:18–31, 2017.
  • [22] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
  • [23] Seunghoon Hong, Hyeonwoo Noh, and Bohyung Han. Decoupled deep neural network for semi-supervised semantic segmentation. In Advances in neural information processing systems, pages 1495–1503, 2015.
  • [24] Kuang-Jui Hsu, Yen-Yu Lin, and Yung-Yu Chuang. Deepco3: Deep instance co-segmentation by co-peak search and co-saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8846–8855, 2019.
  • [25] Vladimir Iglovikov and Alexey Shvets. TernausNet: U-Net with VGG11 encoder pre-trained on ImageNet for image segmentation. arXiv preprint arXiv:1801.05746, 2018.
  • [26] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017.
  • [27] Konstantinos Kamnitsas, Christian Ledig, Virginia FJ Newcombe, Joanna P Simpson, Andrew D Kane, David K Menon, Daniel Rueckert, and Ben Glocker. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36:61–78, 2017.
  • [28] Michael Kass, Andrew Witkin, and Demetri Terzopoulos. Snakes: Active contour models. International journal of computer vision, 1(4):321–331, 1988.
  • [29] Brady Kieffer, Morteza Babaie, Shivam Kalra, and Hamid R Tizhoosh. Convolutional neural networks for histopathology image classification: Training vs. using pre-trained networks. In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pages 1–6. IEEE, 2017.
  • [30] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [31] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [32] Jens Kleesiek, Gregor Urban, Alexander Hubert, Daniel Schwarz, Klaus Maier-Hein, Martin Bendszus, and Armin Biller. Deep MRI brain extraction: a 3D convolutional neural network for skull stripping. NeuroImage, 129:460–469, 2016.
  • [33] Guosheng Lin, Chunhua Shen, Anton Van Den Hengel, and Ian Reid. Efficient piecewise training of deep structured models for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3194–3203, 2016.
  • [34] Yun Liu, Ming-Ming Cheng, Xiaowei Hu, Kai Wang, and Xiang Bai. Richer convolutional features for edge detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3000–3009, 2017.
  • [35] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
  • [36] Adria Romero Lopez, Xavier Giro-i Nieto, Jack Burdick, and Oge Marques. Skin lesion classification from dermoscopic images using deep learning techniques. In 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), pages 49–54. IEEE, 2017.
  • [37] Le Lu, Yefeng Zheng, Gustavo Carneiro, and Lin Yang. Deep learning and convolutional neural networks for medical image computing. Advances in Computer Vision and Pattern Recognition; Springer: New York, NY, USA, 2017.
  • [38] Pauline Luc, Camille Couprie, Soumith Chintala, and Jakob Verbeek. Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408, 2016.
  • [39] BS Manjunath and Rama Chellappa. Unsupervised texture segmentation using markov random field models. IEEE Transactions on Pattern Analysis & Machine Intelligence, 5:478–482, 1991.
  • [40] Pim Moeskops, Jelmer M Wolterink, Bas HM van der Velden, Kenneth GA Gilhuijs, Tim Leiner, Max A Viergever, and Ivana Išgum. Deep learning for multi-task medical image segmentation in multiple modalities. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 478–486. Springer, 2016.
  • [41] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al. Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999, 2018.
  • [42] Pavel Ostyakov, Roman Suvorov, Elizaveta Logacheva, Oleg Khomenko, and Sergey I Nikolenko. Seigan: Towards compositional image generation by simultaneously learning to segment, enhance, and inpaint. arXiv preprint arXiv:1811.07630, 2019.
  • [43] Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1):62–66, 1979.
  • [44] Sérgio Pereira, Adriano Pinto, Victor Alves, and Carlos A Silva. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Transactions on Medical Imaging, 35(5):1240–1251, 2016.
  • [45] Stephen M Pizer, E Philip Amburn, John D Austin, Robert Cromartie, Ari Geselowitz, Trey Greer, Bart ter Haar Romeny, John B Zimmerman, and Karel Zuiderveld. Adaptive histogram equalization and its variations. Computer vision, graphics, and image processing, 39(3):355–368, 1987.
  • [46] M Ravi and Ravindra S Hegadi. Pathological medical image segmentation: A quick review based on parametric techniques. Medical Imaging: Artificial Intelligence, Image Recognition, and Machine Learning Techniques, page 207, 2019.
  • [47] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
  • [48] Mehmet Sezgin and Bülent Sankur. Survey over image thresholding techniques and quantitative performance evaluation. Journal of Electronic imaging, 13(1):146–166, 2004.
  • [49] Hoo-Chang Shin, Neil A Tenenholtz, Jameson K Rogers, Christopher G Schwarz, Matthew L Senjem, Jeffrey L Gunter, Katherine P Andriole, and Mark Michalski. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In International Workshop on Simulation and Synthesis in Medical Imaging, pages 1–11. Springer, 2018.
  • [50] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [51] Jack Sklansky. Finding the convex hull of a simple polygon. Pattern Recognition Letters, 1(2):79–83, 1982.
  • [52] Jaemin Son, Sang Jun Park, and Kyu-Hwan Jung. Retinal vessel segmentation in fundoscopic images with generative adversarial networks. arXiv preprint arXiv:1706.09318, 2017.
  • [53] Nasim Souly, Concetto Spampinato, and Mubarak Shah. Semi supervised semantic segmentation using generative adversarial network. In Proceedings of the IEEE International Conference on Computer Vision, pages 5688–5696, 2017.
  • [54] Satoshi Suzuki et al. Topological structural analysis of digitized binary images by border following. Computer vision, graphics, and image processing, 30(1):32–46, 1985.
  • [55] Nima Tajbakhsh, Laura Jeyaseelan, Qian Li, Jeffrey Chiang, Zhihao Wu, and Xiaowei Ding. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. arXiv preprint arXiv:1908.10454, 2019.
  • [56] Peter C Tay, Christopher D Garson, Scott T Acton, and John A Hossack. Ultrasound despeckling for contrast enhancement. IEEE Transactions on Image Processing, 19(7):1847–1860, 2010.
  • [57] C-H Teh and Roland T. Chin. On the detection of dominant points on digital curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(8):859–872, 1989.
  • [58] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5:180161, 2018.
  • [59] Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopapadakis. Deep learning for computer vision: A brief review. Computational intelligence and neuroscience, 2018, 2018.
  • [60] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8798–8807, 2018.
  • [61] Xide Xia and Brian Kulis. W-net: A deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506, 2017.
  • [62] Yuan Xue, Tao Xu, Han Zhang, L Rodney Long, and Xiaolei Huang. SegAN: Adversarial network with multi-scale L1 loss for medical image segmentation. Neuroinformatics, 16(3-4):383–392, 2018.
  • [63] Dong Yang, Daguang Xu, S Kevin Zhou, Bogdan Georgescu, Mingqing Chen, Sasa Grbic, Dimitris Metaxas, and Dorin Comaniciu. Automatic liver segmentation using an adversarial image-to-image network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 507–515. Springer, 2017.
  • [64] Hyeon-Joong Yoo. Deep convolution neural networks in computer vision. IEEE Transactions on Smart Processing & Computing, 4(1):35–43, 2015.
  • [65] TY Zhang and Ching Y Suen. A fast parallel algorithm for thinning digital patterns. Communications of the ACM, 27(3):236–239, 1984.
  • [66] Wenlu Zhang, Rongjian Li, Houtao Deng, Li Wang, Weili Lin, Shuiwang Ji, and Dinggang Shen. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage, 108:214–224, 2015.
  • [67] Xinyu Zhou, Cong Yao, He Wen, Yuzhi Wang, Shuchang Zhou, Weiran He, and Jiajun Liang. EAST: An efficient and accurate scene text detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5551–5560, 2017.