DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
Automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits previous segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart or kidneys. In this paper, we present a probabilistic bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans, using multi-level deep convolutional networks (ConvNets). We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e. superpixels. We first present a dense labeling of local image patches via P-ConvNet and nearest neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet) that samples a set of bounding boxes around each image superpixel at different scales of contexts in a “zoom-out” fashion. Our ConvNets learn to assign class probabilities for each superpixel region of being pancreas. Last, we study a stacked R2-ConvNet leveraging the joint space of CT intensities and the P-ConvNet dense probability maps. Both 3D Gaussian smoothing and 2D conditional random fields are exploited as structured predictions for post-processing. We evaluate on CT images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing.
Segmentation of the pancreas can be a prerequisite for computer aided diagnosis (CADx) systems that provide quantitative organ volume analysis, e.g. for diabetic patients. Accurate segmentation could also be necessary for computer aided detection (CADe) methods to detect pancreatic cancer. Automatic segmentation of numerous organs in computed tomography (CT) scans is well studied, with good performance for organs such as liver, heart or kidneys, where Dice Similarity Coefficients (DSC) of 90% are typically achieved Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013), Ling et al. (2008). However, achieving high accuracies in automatic pancreas segmentation is still a challenging task. The pancreas’ shape, size and location in the abdomen can vary drastically between patients. Visceral fat around the pancreas can cause large variations in contrast along its boundaries in CT (see Fig. 3). Previous methods report only 46.6% to 69.1% DSCs Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013), Farag et al. (2014). Recently, the availability of large annotated datasets and the accessibility of affordable parallel computing resources via GPUs have made it feasible to train deep convolutional networks (ConvNets) for image classification. Great advances in natural image classification have been achieved Krizhevsky et al. (2012). However, deep ConvNets for semantic image segmentation have not been well studied Mostajabi et al. (2014). Studies that applied ConvNets to medical imaging applications also show good promise on detection tasks Cireşan et al. (2013), Roth et al. (2014). In this paper, we extend and exploit ConvNets for a challenging organ segmentation problem.
We present a coarse-to-fine classification scheme with progressive pruning for pancreas segmentation. Compared with previous top-down multi-atlas registration and label fusion methods, our models approach the problem in a bottom-up fashion: from dense labeling of image patches, to regions, and the entire organ. Given an input abdomen CT, an initial set of superpixel regions is generated by a coarse cascade process of random forests based pancreas segmentation as proposed by Farag et al. (2014). These pre-segmented superpixels serve as regional candidates with high sensitivity (97%) but low precision. The resulting initial DSC is 27% on average. Next, we propose and evaluate several variations of ConvNets for segmentation refinement (or pruning). A dense local image patch labeling using an axial-coronal-sagittal viewed patch ConvNet (P-ConvNet) is employed in a sliding window manner. This generates a per-location probability response map P0. A regional ConvNet (R1-ConvNet) samples a set of bounding boxes covering each image superpixel at multiple spatial scales in a “zoom-out” fashion Mostajabi et al. (2014), Girshick et al. (2014) and assigns probabilities of being pancreatic tissue. This means that we not only look at the close-up view of superpixels, but gradually add more contexts to each candidate region. The R1-ConvNet operates directly on the CT intensity. Finally, a stacked regional R2-ConvNet is learned to leverage the joint convolutional features of CT intensities and the probability maps P0. Both 3D Gaussian smoothing and 2D conditional random fields for structured prediction are exploited as post-processing. Our methods are evaluated on CT scans of 82 patients in 4-fold cross-validation (rather than “leave-one-out” evaluation Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013)). We propose several new ConvNet models and advance the current state-of-the-art performance to a DSC of 71.8% in testing. To the best of our knowledge, this is the highest DSC reported in the literature to date.
2.1 Candidate region generation
We describe a coarse-to-fine pancreas segmentation method employing multi-level deep ConvNet models. Our hierarchical segmentation method decomposes any input CT into a set of local image superpixels. After evaluation of several image region generation methods Achanta et al. (2012), we chose entropy rate superpixel segmentation Liu et al. (2011) to extract superpixels on axial slices. This process is based on the criterion of DSCs given optimal superpixel labels, in part inspired by the PASCAL semantic segmentation challenge Everingham et al. (2014). The optimal superpixel labels achieve a DSC upper-bound and are used for supervised learning below. Next, we use a two-level cascade of random forest (RF) classifiers as in Farag et al. (2014). We only operate the RF labeling at a low class-probability cut of 0.5, which is sufficient to reject the vast majority of non-pancreas superpixels. This retains a set of superpixels with high recall (97%) but low precision. After initial candidate generation, over-segmentation is expected and observed with low DSCs of 27%. The optimal superpixel labeling is limited by the ability of superpixels to capture the true pancreas boundaries at the per-pixel level, with a DSC upper bound of 80.5%, but is still much above previous state-of-the-art Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013), Farag et al. (2014). These superpixel labels are used for assigning ‘positive’ and ‘negative’ superpixel examples for training. Classifying image regions rather than individual pixels drastically reduces the number of ConvNet observations needed per CT volume compared to a purely patch-based approach and leads to more balanced training data sets. Our multi-level deep ConvNets will effectively prune the coarse pancreas over-segmentation to increase the final DSC measurements.
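The derivation of optimal superpixel labels and their DSC upper bound can be illustrated with a minimal numpy sketch. The majority-overlap labeling rule and the toy grid below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def dsc(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def optimal_superpixel_labels(sp_map, gt_mask, overlap_thresh=0.5):
    """Label a superpixel 'pancreas' if most of its pixels fall inside the
    ground-truth mask; returns per-superpixel labels and the reconstructed
    mask, whose DSC against the ground truth is the achievable upper bound."""
    labels = {}
    recon = np.zeros_like(gt_mask, dtype=bool)
    for sp in np.unique(sp_map):
        inside = sp_map == sp
        labels[sp] = gt_mask[inside].mean() >= overlap_thresh
        if labels[sp]:
            recon |= inside
    return labels, recon
```

Labels produced this way serve as the ‘positive’ / ‘negative’ supervision for the ConvNets, while `dsc(recon, gt_mask)` quantifies how much accuracy the superpixel grid itself can ever reach.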
2.2 Convolutional neural network (ConvNet) setup
We use ConvNets with an architecture for binary image classification. Five layers of convolutional filters compute and aggregate image features. Other layers of the ConvNets perform max-pooling operations or consist of fully-connected neural networks. Our ConvNet ends with a final two-way layer with softmax probability for ‘pancreas’ and ‘non-pancreas’ classification (see Fig. 1). The fully-connected layers are constrained using “DropOut” in order to avoid over-fitting by acting as a regularizer in training Srivastava et al. (2014). GPU acceleration allows efficient training (we use cuda-convnet2, https://code.google.com/p/cuda-convnet2).
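The two operations singled out above, the two-way softmax output and DropOut regularization, can be sketched in a few lines of numpy. This is a generic illustration of the operations themselves, not the cuda-convnet2 implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis; for a two-way output
    this yields the 'pancreas' vs. 'non-pancreas' class probabilities."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dropout(x, rate, rng, training=True):
    """Inverted dropout: randomly zero activations during training and
    rescale the rest so the expected activation is unchanged; the identity
    at test time."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

At test time `dropout` is a no-op, so the same forward pass serves both phases; the rescaling by 1/(1-rate) is what keeps train-time and test-time activation magnitudes comparable.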
2.3 P-ConvNet: Deep patch classification
We use a sliding window approach that extracts 2.5D image patches composed of axial, coronal and sagittal planes within all voxels of the initial set of superpixel regions (see Fig. 3). The resulting ConvNet probabilities are denoted as P0 hereafter. For efficiency reasons, we extract patches at a regular stride of several voxels and then apply nearest neighbor interpolation. This seems sufficient due to the already high quality of P0 and the use of overlapping patches to estimate the values at skipped voxels.
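A minimal numpy sketch of the 2.5D patch extraction and the nearest-neighbor fill-in of a strided probability map; the patch size, stride handling, and boundary assumptions are simplifications for illustration:

```python
import numpy as np

def patch_25d(vol, z, y, x, half=16):
    """Extract a 2.5D patch: the axial, coronal and sagittal planes through
    voxel (z, y, x) of a CT volume, stacked as three channels.  Assumes the
    crop lies fully inside the volume."""
    axial    = vol[z, y - half:y + half, x - half:x + half]
    coronal  = vol[z - half:z + half, y, x - half:x + half]
    sagittal = vol[z - half:z + half, y - half:y + half, x]
    return np.stack([axial, coronal, sagittal], axis=0)

def upsample_nn(prob_coarse, stride, shape):
    """Nearest-neighbour fill of a probability map that was predicted only
    every `stride` voxels, back to full volume resolution."""
    zz, yy, xx = np.indices(shape)
    return prob_coarse[zz // stride, yy // stride, xx // stride]
```

Because neighbouring patches overlap heavily, the nearest-neighbour fill at skipped voxels changes the dense map very little, which is the efficiency argument made above.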
2.4 R-ConvNet: Deep region classification
We employ the region candidates as inputs. Each superpixel will be observed at several scales with an increasing amount of surrounding contexts (see Fig. 4). Multi-scale contexts are important to disambiguate the complex anatomy in the abdomen. We explore two approaches: the R1-ConvNet only looks at the CT intensity images extracted from multi-scale superpixel regions, and a stacked R2-ConvNet integrates an additional channel of patch-level response maps P0 for each region as input. As a superpixel can have irregular shapes, we warp each region into a regular square (similar to R-CNN Girshick et al. (2014)) as is required by most ConvNet implementations to date. The ConvNets automatically train their convolutional filter kernels from the available training data. Examples of trained first-layer convolutional filters for the P-ConvNet, R1-ConvNet and R2-ConvNet are shown in Fig. 2. Deep ConvNets behave as effective image feature extractors that summarize multi-scale image regions for classification.
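The multi-scale “zoom-out” box sampling and the warping of an irregular superpixel region to a fixed square ConvNet input can be sketched as follows; the scale factors, the nearest-neighbour warp, and the 64-pixel input size are illustrative assumptions:

```python
import numpy as np

def zoom_out_boxes(mask, scales=(1.0, 1.5, 2.0, 3.0)):
    """Bounding boxes around one superpixel mask, enlarged by each scale
    factor about the box centre and clipped to the image; larger scales add
    more surrounding anatomical context."""
    ys, xs = np.nonzero(mask)
    cy, cx = (ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0
    h, w = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
    boxes = []
    for s in scales:
        hh, ww = h * s / 2.0, w * s / 2.0
        y0, y1 = int(max(0, cy - hh)), int(min(mask.shape[0], cy + hh))
        x0, x1 = int(max(0, cx - ww)), int(min(mask.shape[1], cx + ww))
        boxes.append((y0, y1, x0, x1))
    return boxes

def warp_to_square(img, box, size=64):
    """Nearest-neighbour warp of a (possibly non-square) crop to a fixed
    size x size input, as required by a fixed-input ConvNet."""
    y0, y1, x0, x1 = box
    ry = np.clip((np.arange(size) * (y1 - y0) // size) + y0, 0, img.shape[0] - 1)
    rx = np.clip((np.arange(size) * (x1 - x0) // size) + x0, 0, img.shape[1] - 1)
    return img[np.ix_(ry, rx)]
```

For the stacked R2-ConvNet, the same boxes would simply be cropped from both the CT slice and the P0 response map and stacked as a two-channel input.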
2.5 Data augmentation
Our ConvNet models (R1-ConvNet, R2-ConvNet) sample the bounding boxes of each superpixel at different scales. During training, we randomly apply non-rigid deformations to generate more data instances. The degree of deformation is chosen so that the resulting warped images resemble plausible physical variations of the medical images. This approach is commonly referred to as data augmentation and can help avoid over-fitting Krizhevsky et al. (2012), Cireşan et al. (2013). Each non-rigid training deformation is computed by fitting a thin-plate-spline (TPS) to a regular grid of 2D control points. These control points are randomly transformed within the sampling window, and a deformed image is generated using the TPS radial basis function φ(r) = r^2 log(r), where the mapping coefficients are fit so that each control point is carried to its transformed location.
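A small numpy sketch of the TPS machinery just described, assuming the standard thin-plate kernel φ(r) = r² log r; the control-point layout and direct linear solve are illustrative, not the paper's exact implementation:

```python
import numpy as np

def tps_phi(r):
    """Thin-plate-spline radial basis φ(r) = r^2 log r, with φ(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(src, dst):
    """Solve the standard TPS linear system for the mapping coefficients
    that carry the 2D control points `src` onto their (randomly perturbed)
    targets `dst`."""
    n = len(src)
    K = tps_phi(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                   # (n+3, 2) coefficients

def tps_map(coef, src, pts):
    """Apply a fitted TPS mapping to arbitrary 2D points (e.g. a pixel
    grid, to resample a deformed training image)."""
    n = len(src)
    U = tps_phi(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:n] + P @ coef[n:]
```

In data augmentation, `dst` would be `src` plus small random jitter, and `tps_map` applied to the full pixel grid gives the deformation field used to warp each training patch.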
2.6 Cross-scale and 3D probability aggregation
At testing, we evaluate each superpixel at different scales. The probability scores for each superpixel being pancreas are averaged across scales: p(x) = (1/Ns) * sum_s p_s(x), where Ns is the number of scales. Then the resulting per-superpixel ConvNet classification values p1(x) and p2(x) (according to R1-ConvNet and R2-ConvNet, respectively) are directly assigned to every pixel or voxel residing within any superpixel. This process forms two per-voxel probability maps P1 and P2. Subsequently, we perform 3D Gaussian filtering in order to average and smooth the ConvNet probability scores across CT slices and within-slice neighboring regions. 3D isotropic Gaussian filtering can be applied to any probability map P to form a smoothed map G(P). This is a simple way to propagate the 2D slice-based probabilities to 3D by taking local 3D neighborhoods into account. In this paper, we do not work on 3D supervoxels due to computational efficiency (supervoxel based regional ConvNets need at least one-order-of-magnitude wider input layers and thus have significantly more parameters to train) and generality issues. We also explore conditional random fields (CRF) using an additional ConvNet trained between pairs of neighboring superpixels in order to detect the pancreas edge (defined by pairs of superpixels having the same or different object labels). This acts as the boundary term together with the regional term given by the per-voxel ConvNet probability maps in order to perform a min-cut/max-flow segmentation Boykov and Funka-Lea (2006). Here, the CRF is implemented as a 2D graph with connections between directly neighboring superpixels. The CRF weighting coefficient between the boundary and the unary regional term is calibrated by grid-search.
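The cross-scale averaging, the per-superpixel assignment of scores to voxels, and the separable kernel behind the 3D isotropic Gaussian smoothing can be sketched as follows; the data layout (one score dict per scale) is an assumption for illustration:

```python
import numpy as np

def superpixel_probs_to_voxels(sp_map, probs_per_scale):
    """Average per-superpixel ConvNet scores across scales, then paint the
    averaged score onto every pixel/voxel of the corresponding superpixel.
    `probs_per_scale` is a list of {superpixel_id: score} dicts, one per
    sampling scale."""
    ids = sorted(probs_per_scale[0])
    mean = {i: np.mean([p[i] for p in probs_per_scale]) for i in ids}
    out = np.zeros(sp_map.shape, dtype=float)
    for i, m in mean.items():
        out[sp_map == i] = m
    return out

def gauss1d(sigma, radius=None):
    """Normalized 1D Gaussian kernel; convolving it along each of the three
    axes in turn realizes the isotropic 3D smoothing G(P)."""
    radius = radius if radius is not None else int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()
```

Separability is what makes the 3D smoothing cheap: three 1D convolutions instead of one full 3D convolution over the probability volume.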
3 Results & Discussion
Manual tracings of the pancreas for 82 contrast-enhanced abdominal CT volumes were provided by an experienced radiologist. Our experiments are conducted using 4-fold cross-validation with a random hard split of the 82 patients into testing folds of 21, 21, 20, and 20 patients; the remaining patients form the corresponding training folds. We report both training and testing segmentation accuracy results. Most previous work Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013) uses leave-one-patient-out cross-validation protocols which are computationally expensive (e.g., hours to process one case using a powerful workstation Wang et al. (2014)) and may not scale up efficiently towards larger patient populations. More patients (i.e., ~20) per testing fold make the results more representative of larger population groups.
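The random hard split into testing folds of 21, 21, 20, and 20 patients can be sketched as follows; the seed and shuffling scheme are illustrative assumptions:

```python
import random

def hard_split_folds(patient_ids, fold_sizes=(21, 21, 20, 20), seed=0):
    """Random hard split of patients into cross-validation testing folds:
    each patient appears in exactly one testing fold, and the remaining
    patients form that fold's training set."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    folds, start = [], 0
    for n in fold_sizes:
        folds.append(ids[start:start + n])
        start += n
    return folds
```

A hard split at the patient level (rather than the slice level) guarantees that no slices from a test patient ever leak into training.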
The ground truth superpixel labels are derived as described in Sec. 2.1. The optimally achievable DSC for superpixel classification (if classified perfectly) is 80.5%. Furthermore, the training data is artificially increased using the data augmentation approach with both scale and random TPS deformations at the R-ConvNet level (Sec. 2.5). In testing, we use multi-scale sampling without deformation based data augmentation, together with a fixed 3D Gaussian filtering kernel width, to compute the smoothed probability maps G(P). By tuning our implementation of Farag et al. (2014) at a low operating point, the initial superpixel candidate labeling achieves an average DSC of only 26.1% in testing, but has a 97% sensitivity covering nearly all pancreas voxels. Fig. 5 shows the plots of average DSCs using the proposed ConvNet approaches, as a function of the operating parameters, in both training and testing for one fold of cross-validation. Simple Gaussian 3D smoothing (Sec. 2.6) markedly improved the average DSCs in all cases. The operating points maximizing average DSC in our training evaluation after 3D Gaussian smoothing for this fold are then fixed and used in testing cross-validation to obtain the results in Table 1. Utilizing the R2-ConvNet (stacked on P-ConvNet) and Gaussian smoothing (G(P2)), we achieve a final average DSC of 71.8% in testing, an improvement of 45.7% compared to the candidate region generation stage at 26.1%. The smoothed patch-level map G(P0) also performs well with 69.5% mean DSC and is more efficient since only dense deep patch labeling is needed. Even though the absolute difference in DSC between these two variants is small, the surface-to-surface distance improves significantly from 1.46±1.5 mm to 0.94±0.6 mm (p<0.01). An example of pancreas segmentation at this operation point is shown in Fig. 6. Training a typical R-ConvNet on warped superpixel examples takes 55 hours for 100 epochs on a modern GPU (Nvidia GTX Titan-Z).
However, execution run-time in testing is in the order of only 1 to 3 minutes per CT volume, depending on the number of scales used. Candidate region generation in Sec. 2.1 consumes another 5 minutes per case.
To the best of our knowledge, this work reports the highest average DSC with 71.8% in testing. Note that a direct comparison to previous methods is not possible due to the lack of publicly available benchmark datasets. We will share our data and code implementation for future comparisons (http://www.cc.nih.gov/about/SeniorStaff/ronald_summers.html, http://www.holgerroth.com/). Previous state-of-the-art results are at 68% to 69% Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013), Farag et al. (2014). In particular, DSC drops from 68% (150 patients) to 58% (50 patients) under the leave-one-out protocol Wolz et al. (2013). Our results are based on a 4-fold cross-validation. The performance degrades gracefully from training (83.6±6.3%) to testing (71.8±10.7%), which demonstrates the good generality of learned deep ConvNets on unseen data. This difference is expected to diminish with more annotated datasets. Our methods also perform with better stability (i.e., comparing 10.7% versus 18.6% Wang et al. (2014), 15.3% Chu et al. (2013) in the standard deviation of DSCs). Our maximum test performance is 86.9% DSC, with 10%, 30%, 50%, 70%, 80%, and 90% of cases being above 81.4%, 77.6%, 74.2%, 69.4%, 65.2% and 58.9%, respectively. Only 2 outlier cases lie below 40% DSC (mainly caused by over-segmentation into other organs); the remaining 80 testing cases are all above 50%. The minimal DSC value of these outliers is 25.0%. However, Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013), Farag et al. (2014) all report gross segmentation failure cases with DSC even below 10%. Lastly, the variation enforcing the probability maps within a structured prediction CRF model achieves only 68.2±4.1%. This is probably due to the already high quality of the smoothed probability maps in comparison.
We present a bottom-up, coarse-to-fine approach for pancreas segmentation in abdominal CT scans. Multi-level deep ConvNets are employed on both image patches and regions. We achieve the highest reported DSCs of 71.8±10.7% in testing and 83.6±6.3% in training, at the computational cost of a few minutes, not hours as in Wang et al. (2014), Chu et al. (2013), Wolz et al. (2013). The proposed approach can be incorporated into multi-organ segmentation frameworks by specifying more tissue types, since ConvNets naturally support multi-class classification Krizhevsky et al. (2012). Our deep learning based organ segmentation approach could be generalizable to other segmentation problems with large variations and pathologies, e.g. tumors.
This work was supported by the Intramural Research Program of the NIH Clinical Center. The final publication will be available at Springer.
- Achanta et al. (2012) Achanta, R., A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk (2012). SLIC superpixels compared to state-of-the-art superpixel methods. Pattern Analysis and Machine Intelligence, IEEE Transactions on 34(11), 2274–2282.
- Boykov and Funka-Lea (2006) Boykov, Y. and G. Funka-Lea (2006). Graph cuts and efficient ND image segmentation. IJCV 70(2), 109–131.
- Chu et al. (2013) Chu, C., M. Oda, T. Kitasaka, K. Misawa, M. Fujiwara, Y. Hayashi, Y. Nimura, D. Rueckert, and K. Mori (2013). Multi-organ segmentation based on spatially-divided probabilistic atlas from 3D abdominal CT images. In MICCAI, pp. 165–172. Springer.
- Cireşan et al. (2013) Cireşan, D. C., A. Giusti, L. M. Gambardella, and J. Schmidhuber (2013). Mitosis detection in breast cancer histology images with deep neural networks. In MICCAI, pp. 411–418. Springer.
- Everingham et al. (2014) Everingham, M., S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2014). The PASCAL visual object classes challenge: A retrospective. IJCV 111(1), 98–136.
- Farag et al. (2014) Farag, A., L. Lu, E. Turkbey, J. Liu, and R. M. Summers (2014). A bottom-up approach for automatic pancreas segmentation in abdominal CT scans. MICCAI Abdominal Imaging workshop, arXiv preprint arXiv:1407.8497.
- Girshick et al. (2014) Girshick, R., J. Donahue, T. Darrell, and J. Malik (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pp. 580–587. IEEE.
- Krizhevsky et al. (2012) Krizhevsky, A., I. Sutskever, and G. E. Hinton (2012). Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105.
- Ling et al. (2008) Ling, H., S. K. Zhou, Y. Zheng, B. Georgescu, M. Suehling, and D. Comaniciu (2008). Hierarchical, learning-based automatic liver segmentation. In CVPR, pp. 1–8. IEEE.
- Liu et al. (2011) Liu, M.-Y., O. Tuzel, S. Ramalingam, and R. Chellappa (2011). Entropy rate superpixel segmentation. In CVPR, pp. 2097–2104. IEEE.
- Mostajabi et al. (2014) Mostajabi, M., P. Yadollahpour, and G. Shakhnarovich (2014). Feedforward semantic segmentation with zoom-out features. arXiv preprint arXiv:1412.0774.
- Roth et al. (2014) Roth, H. R., L. Lu, A. Seff, K. M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey, and R. M. Summers (2014). A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. In MICCAI, pp. 520–527. Springer.
- Srivastava et al. (2014) Srivastava, N., G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014). Dropout: A simple way to prevent neural networks from overfitting. JMLR 15(1), 1929–1958.
- Wang et al. (2014) Wang, Z., K. K. Bhatia, B. Glocker, A. Marvao, T. Dawes, K. Misawa, K. Mori, and D. Rueckert (2014). Geodesic patch-based segmentation. In MICCAI, pp. 666–673. Springer.
- Wolz et al. (2013) Wolz, R., C. Chu, K. Misawa, M. Fujiwara, K. Mori, and D. Rueckert (2013). Automated abdominal multi-organ segmentation with subject-specific atlas generation. TMI 32(9), 1723–1730.