Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields
Automatic segmentation of the liver and its lesions is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment the liver and its lesions in abdominal CT images using cascaded fully convolutional neural networks (CFCNs) and dense 3D conditional random fields (CRFs). We train and cascade two FCNs for a combined segmentation of the liver and its lesions. In the first step, we train an FCN to segment the liver as the ROI input for a second FCN. The second FCN solely segments lesions within the predicted liver ROIs of step 1. We refine the segmentations of the CFCN using a dense 3D CRF that accounts for both spatial coherence and appearance. CFCN models were trained with 2-fold cross-validation on the abdominal CT dataset 3DIRCADb, comprising 15 hepatic tumor volumes. Our results show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for the liver, with computation times below 100s per volume. We experimentally demonstrate the robustness of the proposed method as a decision support system with high accuracy and speed for use in daily clinical routine.
Keywords: Liver, Lesion, Segmentation, FCN, CRF, CFCN, Deep Learning
Anomalies in the shape and texture of the liver and visible lesions in CT are important biomarkers for disease progression in primary and secondary hepatic tumor disease. In clinical routine, manual or semi-manual segmentation techniques are applied; these, however, are subjective, operator-dependent and very time-consuming. To improve the productivity of radiologists, computer-aided methods have been developed in the past, but challenges in the automatic combined segmentation of liver and lesions remain, such as the low contrast between liver and lesions, different contrast levels (hyper-/hypo-intense tumors), tissue abnormalities (e.g. after metastasectomy), and the varying size and number of lesions.
Nevertheless, several interactive and automatic methods have been developed to segment the liver and liver lesions in CT volumes. In 2007 and 2008, two Grand Challenge benchmarks on liver and liver lesion segmentation were conducted [9, 4]. Methods presented at the challenges were mostly based on statistical shape models. Furthermore, grey-level- and texture-based methods have been developed. Recent work on liver and lesion segmentation employs graph cut and level set techniques [16, 15, 17], sigmoid edge modeling [5], or manifold and machine learning [11, 6]. However, these methods are not widely applied in clinics, due to their limited speed and robustness on heterogeneous, low-contrast real-life CT data. Hence, interactive methods were still developed [7, 1] to overcome these weaknesses, which, however, require user interaction.
Deep convolutional neural networks (CNNs) have gained new attention in the scientific community for solving computer vision tasks such as object recognition, classification and segmentation [14, 18], often out-competing state-of-the-art methods. Most importantly, CNN methods have proven to be highly robust to varying image appearance, which motivates us to apply them to fully automatic liver and lesion segmentation in CT volumes.
Semantic image segmentation methods based on fully convolutional neural networks (FCNs) were developed in [18], with impressive results in natural image segmentation competitions [3, 24]. Likewise, new segmentation methods based on CNNs and FCNs were developed for medical image analysis, with highly competitive results compared to the state of the art [20, 8, 23, 21, 19, 12].
In this work, we demonstrate the combined automatic segmentation of the liver and its lesions in low-contrast heterogeneous CT volumes. Our contributions are three-fold. First, we train and apply fully convolutional CNNs on CT volumes of the liver for the first time, demonstrating their adaptability to the challenging segmentation of hepatic lesions. Second, we propose a cascaded fully convolutional neural network (CFCN) on CT slices, which segments liver and lesions sequentially, leading to significantly higher segmentation quality. Third, we propose to combine the cascaded CNN in 2D with a dense 3D conditional random field (3D CRF) as a post-processing step, to achieve higher segmentation accuracy while preserving low computational cost and memory consumption. In the following sections, we describe our proposed pipeline (Section 2.2), including the CFCN (Section 2.3) and the 3D CRF (Section 2.4), present experiments on the 3DIRCADb dataset (Section 2.1) and discuss the results (Section 3).
In the following section, we denote the 3D image volume as I, the total number of voxels as N and the set of possible labels as L = {0, 1, 2}. For each voxel i, we define a variable x_i ∈ L that denotes the assigned label. The probability of voxel i belonging to label k given the image I is described by P(x_i = k | I) and will be modelled by the FCN. In our particular study, we use the labels 0, 1 and 2 for background, liver and lesion, respectively.
2.1 3DIRCADb Dataset
For clinical routine usage, methods and algorithms have to be developed, trained and evaluated on heterogeneous real-life data. Therefore, we evaluated our proposed method on the 3DIRCADb dataset, which is publicly available at http://ircad.fr/research/3d-ircadb-01. In comparison to the grand challenge datasets, the 3DIRCADb dataset offers a higher variety and complexity of livers and lesions. It includes 20 venous-phase contrast-enhanced CT volumes from various European hospitals, acquired with different CT scanners. For our study, we trained and evaluated our models using the 15 volumes containing hepatic tumors, with 2-fold cross-validation. The analyzed CT volumes differ substantially in the level of contrast enhancement and in the size and number of tumor lesions (1 to 42). We assessed the performance of our proposed method using the quality metrics introduced in the grand challenges for liver and lesion segmentation [9, 4].
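The grand-challenge overlap metrics used throughout this paper can be computed directly from binary masks. The following is a minimal NumPy sketch of three of them (Dice, volumetric overlap error and relative volume difference); function names are our own, not from the paper:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient in [0, 1] between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def voe(pred, gt):
    """Volumetric Overlap Error [%]: 100 * (1 - |A∩B| / |A∪B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * (1.0 - inter / union)

def rvd(pred, gt):
    """Relative Volume Difference [%] of the prediction w.r.t. ground truth."""
    gt_vol = gt.astype(bool).sum()
    return 100.0 * (pred.astype(bool).sum() - gt_vol) / gt_vol
```

A perfect prediction yields Dice 1, VOE 0 and RVD 0; the surface distance metrics (ASD, MSD) additionally require mesh or distance-transform computations and are omitted here.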
2.2 Data preparation, processing and pipeline
Pre-processing was carried out in a slice-wise fashion. First, the Hounsfield unit values were windowed to a fixed range to exclude irrelevant organs and objects; then we increased contrast through histogram equalization. As in [20], to teach the network the desired invariance properties, we augmented the data by applying translation, rotation and additive Gaussian noise. This resulted in an enlarged training dataset of 22,693 image slices, which were used to train two cascaded FCNs based on the UNet architecture [20]. The predicted segmentations are then refined using a dense 3D conditional random field. The entire pipeline is depicted in Figure 2.
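A minimal sketch of this pre-processing and augmentation step is shown below. The HU window boundaries, shift/rotation ranges and noise level are illustrative assumptions (the paper's exact values are not restated here):

```python
import numpy as np
from scipy import ndimage

HU_MIN, HU_MAX = -100, 400  # assumed abdominal HU window, not the paper's exact range

def preprocess_slice(hu_slice):
    """Window HU values, rescale to [0, 1] and apply histogram equalization."""
    clipped = np.clip(hu_slice, HU_MIN, HU_MAX)
    scaled = (clipped - HU_MIN) / float(HU_MAX - HU_MIN)
    # histogram equalization via the empirical CDF
    hist, bins = np.histogram(scaled.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(scaled.ravel(), bins[:-1], cdf).reshape(scaled.shape)

def augment_slice(img, rng):
    """Random translation, rotation and additive Gaussian noise."""
    shifted = ndimage.shift(img, shift=rng.uniform(-10, 10, size=2),
                            order=1, mode="nearest")
    rotated = ndimage.rotate(shifted, angle=rng.uniform(-10, 10),
                             reshape=False, order=1, mode="nearest")
    return rotated + rng.normal(0.0, 0.01, size=img.shape)
```

Applying several random augmentations per slice is what grows the original slice set to the 22,693 training images mentioned above.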
2.3 Cascaded Fully Convolutional Neural Networks (CFCN)
We used the UNet architecture [20] to compute the soft label probability maps P(x_i | I). The UNet architecture enables accurate pixel-wise prediction by combining spatial and contextual information in a network comprising 19 convolutional layers. In our method, we trained one network to segment the liver in abdominal slices (step 1), and a second network to segment the lesions, given an image of the liver (step 2). The liver segmented in step 1 is cropped and resampled to the required input size of the cascaded UNet in step 2, which then segments the lesions.
The motivation behind the cascaded approach is that UNets and other CNNs have been shown to learn a hierarchical representation of the provided data. The stacked layers of convolutional filters are tailored towards the desired classification in a data-driven manner, as opposed to hand-crafted features designed to separate different tissue types. By cascading two UNets, we ensure that the UNet in step 1 learns filters that are specific to the detection and segmentation of the liver in an overall abdominal CT scan, while the UNet in step 2 learns a set of filters for separating lesions from liver tissue. Furthermore, the liver ROI helps to reduce false-positive lesion detections.
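The crop-and-resample logic of the cascade can be sketched as follows. Here `liver_net` and `lesion_net` are placeholders for the two trained UNets (each mapping a 2D image to per-pixel foreground probabilities), and the input size and 0.5 threshold are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom as nd_zoom

def cascaded_segmentation(ct_slice, liver_net, lesion_net, input_size=(388, 388)):
    """Two-step cascade: segment the liver first, then lesions inside its ROI."""
    liver_mask = liver_net(ct_slice) > 0.5              # step 1: liver segmentation
    if not liver_mask.any():
        return liver_mask, np.zeros_like(liver_mask)

    # crop the bounding box of the predicted liver
    ys, xs = np.where(liver_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    roi = ct_slice[y0:y1, x0:x1]

    # resample the ROI to the fixed input size of the second UNet
    fy, fx = input_size[0] / roi.shape[0], input_size[1] / roi.shape[1]
    lesion_prob = lesion_net(nd_zoom(roi, (fy, fx), order=1))   # step 2: lesions

    # map lesion predictions back to slice coordinates
    back = nd_zoom(lesion_prob, (1 / fy, 1 / fx), order=1)[:y1 - y0, :x1 - x0]
    lesion_mask = np.zeros_like(liver_mask)
    lesion_mask[y0:y0 + back.shape[0], x0:x0 + back.shape[1]] = back > 0.5
    # restrict lesions to the liver, suppressing false positives outside it
    return liver_mask, np.logical_and(lesion_mask, liver_mask)
```

The final intersection with the liver mask implements the false-positive suppression described above.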
A crucial step in training FCNs is appropriate class balancing according to the pixel-wise frequency of each class in the data. In contrast to [20], we observed that training the network to segment small structures such as lesions is not possible without class balancing, due to the high class imbalance. Therefore, we introduced an additional weighting factor ω in the cross entropy loss function L of the FCN:

L = -(1/n) Σ_i [ ω P̂_i log P_i + (1 - P̂_i) log(1 - P_i) ]

Here, P_i denotes the probability of voxel i belonging to the foreground and P̂_i ∈ {0, 1} represents the ground truth. We chose ω according to the pixel-wise class frequencies.
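This class-balanced cross entropy can be sketched in a few lines of NumPy; the clipping constant is a numerical-stability assumption on our part:

```python
import numpy as np

def weighted_cross_entropy(p, y, omega):
    """Class-balanced binary cross entropy.

    p:     predicted foreground probabilities per voxel
    y:     binary ground truth (1 = foreground)
    omega: weight on the (rare) foreground term
    """
    eps = 1e-7                       # avoid log(0)
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(omega * y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```

With omega > 1, errors on foreground (lesion) voxels are penalized more heavily than errors on the abundant background voxels, which is what makes training on small structures feasible.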
The CFCNs were trained on an NVIDIA Titan X GPU using the deep learning framework Caffe [10], with a learning rate of 0.001, a momentum of 0.8 and a weight decay of 0.0005.
2.4 3D Conditional Random Field (3DCRF)
A volumetric FCN implementation with 3D convolutions is strongly limited by GPU hardware and available VRAM. In addition, the anisotropic resolution of medical volumes (e.g. 0.57-0.8mm in-plane and 1.25-4mm slice spacing in 3DIRCADb) complicates the training of discriminative 3D filters. Instead, to capitalise on the locality information across slices within the dataset, we utilize dense 3D conditional random fields (CRFs) as proposed by [13]. To account for 3D information, we consider all slice-wise predictions of the FCN together, applying the CRF to the entire volume at once.
We formulate the final label assignment given the soft predictions (probability maps) from the FCN as maximum a posteriori (MAP) inference in a dense CRF, allowing us to consider both spatial coherence and appearance.
We specify the dense CRF following [13] on the complete graph G = (V, E) with a vertex i ∈ V for each voxel in the image and edges between all vertices. The variable vector x ∈ L^N describes the label of each vertex i. The energy function that induces the corresponding Gibbs distribution is then given as:

E(x) = Σ_i φ_i(x_i) + Σ_{i<j} φ_ij(x_i, x_j)

where φ_i(x_i) = -log P(x_i | I) are the unary potentials that are derived from the FCN's probabilistic output. φ_ij(x_i, x_j) are the pairwise potentials, which we set to:

φ_ij(x_i, x_j) = μ(x_i, x_j) [ w_pos exp(-|p_i - p_j|² / (2σ_pos²)) + w_bil exp(-|p_i - p_j|² / (2σ_bil²) - |I_i - I_j|² / (2σ_int²)) ]

where μ(x_i, x_j) = 1[x_i ≠ x_j] is the Potts function, |p_i - p_j| is the spatial distance between voxels i and j, and |I_i - I_j| is their intensity difference in the original image. The influence of the pairwise terms can be adjusted via the weights w_pos and w_bil, and their effective range is tuned with the kernel widths σ_pos, σ_bil and σ_int.
We estimate the best labelling x* using the efficient mean field approximation algorithm of [13]. The weights and kernel widths of the CRF were chosen using a random search algorithm.
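The mean field update for this Potts-model dense CRF can be sketched as below. This is a naive O(N²) illustration with hand-picked default parameters, not the paper's tuned values; the filtering-based algorithm of Krähenbühl and Koltun [13] is required for realistic volume sizes:

```python
import numpy as np

def mean_field_crf(prob, pos, intensity, w_pos=1.0, w_bil=1.0,
                   s_pos=2.0, s_bil=2.0, s_int=0.1, n_iters=5):
    """Naive mean-field inference for a dense CRF with Potts compatibility.

    prob:      (N, L) soft FCN predictions per voxel
    pos:       (N, 3) voxel coordinates
    intensity: (N,)   voxel intensities
    """
    unary = -np.log(np.clip(prob, 1e-8, None))        # unary potentials from the FCN
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    di2 = (intensity[:, None] - intensity[None, :]) ** 2
    # pairwise kernel: smoothness term + appearance (bilateral) term
    k = (w_pos * np.exp(-d2 / (2 * s_pos ** 2))
         + w_bil * np.exp(-d2 / (2 * s_bil ** 2) - di2 / (2 * s_int ** 2)))
    np.fill_diagonal(k, 0.0)                          # no self-interaction

    q = prob.copy()
    for _ in range(n_iters):
        msg = k @ q                                   # message passing
        # Potts model: each label is penalized by the mass on all other labels
        energy = unary + msg.sum(axis=1, keepdims=True) - msg
        q = np.exp(-energy)
        q /= q.sum(axis=1, keepdims=True)
    return q.argmax(axis=1)
```

On a toy 1D volume, a voxel whose FCN prediction is noisy but whose neighbours agree in both position and intensity is flipped to the spatially coherent label, which is exactly the refinement effect exploited here.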
3 Results and Discussion
The qualitative results of the automatic segmentation are presented in Figure 1. The complex and heterogeneous structure of the liver and all lesions were detected in the shown images. As can be seen in Figure 1, the cascaded FCN approach yielded a clear improvement in lesion segmentation accuracy compared to a single FCN. In general, we observe statistically significant (two-sided paired t-test) improvements in the slice-wise Dice overlap of the liver segmentations after applying the dense 3D CRF.
Table 1. Quantitative segmentation results on the 3DIRCADb dataset (metrics as defined in the grand challenges [9, 4]; "–" = not reported).

| Approach | VOE [%] | RVD [%] | ASD [mm] | MSD [mm] | DICE [%] |
|---|---|---|---|---|---|
| UNET as in [20] | – | – | – | – | 72.9 |
| Cascaded UNET + 3D CRF | 10.7 | -1.4 | 1.5 | 24.0 | 94.3 |
| Li et al. (liver-only) | – | – | – | – | – |
| Chartrand et al. (semi-automatic) | – | – | – | – | – |
| Li et al. (liver-only) | – | – | – | – | 94.5 |
Quantitative results of the proposed method are reported in Table 1. The CFCN achieves higher scores than the single FCN architecture, and applying the 3D CRF further improves the calculated metrics. The runtime of the CFCN is 0.8s per slice including the CRF refinement.
Cascaded FCNs and dense 3D CRFs trained on CT volumes are suitable for the automatic localization and combined volumetric segmentation of the liver and its lesions. Our proposed method competes with the state of the art. We provide our trained models under an open-source license, allowing fine-tuning for other medical applications in CT data (https://github.com/IBBM/Cascaded-FCN). Additionally, we introduced and evaluated the dense 3D CRF as a post-processing step for deep learning-based medical image analysis. Furthermore, and in contrast to prior work such as [5, 15, 16], our proposed method could be generalized to segment multiple organs in medical data using multiple cascaded FCNs. All in all, heterogeneous CT volumes from different scanners and protocols, as present in the 3DIRCADb dataset and in clinical trials, can each be segmented in under 100s with the proposed approach. We conclude that CFCNs and dense 3D CRFs are promising tools for the automatic analysis of the liver and its lesions in clinical routine.
-  Ben-Cohen, A., et al.: Automated method for detection and segmentation of liver metastatic lesions in follow-up ct examinations. Journal of Medical Imaging (3) (2015)
-  Chartrand, G., et al.: Semi-automated liver ct segmentation using laplacian meshes. In: ISBI. pp. 641–644. IEEE (2014)
-  Chen, L.C., et al.: Semantic image segmentation with deep convolutional nets and fully connected crfs. ICLR (2015)
-  Deng, X., Du, G.: Editorial: 3d segmentation in the clinic: a grand challenge ii-liver tumor segmentation. In: MICCAI Workshop (2008)
-  Foruzan, A.H., Chen, Y.W.: Improved segmentation of low-contrast lesions using sigmoid edge model. Int J Comput Assist Radiol Surg pp. 1–17 (2015)
-  Freiman, M., Cooper, O., Lischinski, D., Joskowicz, L.: Liver tumors segmentation from cta images using voxels classification and affinity constraint propagation. Int J Comput Assist Radiol Surg 6(2), 247–255 (2011)
-  Häme, Y., Pollari, M.: Semi-automatic liver tumor segmentation with hidden markov measure field model and non-parametric distribution estimation. Med Image Anal 16(1), 140–149 (2012)
-  Havaei, M., et al.: Brain Tumor Segmentation with Deep Neural Networks. Med Image Anal (2016)
-  Heimann, T., et al.: Comparison and evaluation of methods for liver segmentation from ct datasets. IEEE Trans. Med. Imag. 28(8), 1251–1265 (Aug 2009)
-  Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: Proc ACM Int Conf Multimed. pp. 675–678. ACM (2014)
-  Kadoury, S., Vorontsov, E., Tang, A.: Metastatic liver tumour segmentation from discriminant grassmannian manifolds. Phys Med Biol 60(16), 6459 (2015)
-  Kamnitsas, K., et al.: Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. arXiv preprint arXiv:1603.05959 (2016)
-  Krähenbühl, P., Koltun, V.: Efficient inference in fully connected crfs with gaussian edge potentials. In: NIPS. pp. 109–117 (2011)
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS. pp. 1097–1105 (2012)
-  Li, C., Wang, X., Eberl, S., Fulham, M., Yin, Y., Chen, J., Feng, D.D.: A likelihood and local constraint level set model for liver tumor segmentation from ct volumes. IEEE Trans. Biomed. Eng. 60(10), 2967–2977 (2013)
-  Li, G., Chen, X., Shi, F., Zhu, W., Tian, J., Xiang, D.: Automatic liver segmentation based on shape constraints and deformable graph cut in ct images. IEEE Trans. Image Process. 24(12), 5315–5329 (2015)
-  Linguraru, M.G., Richbourg, W.J., Liu, J., Watt, J.M., Pamulapati, V., Wang, S., Summers, R.M.: Tumor burden analysis on computed tomography by automated liver and tumor segmentation. IEEE Trans. Med. Imag. 31(10), 1965–1976 (2012)
-  Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. CVPR (2015)
-  Prasoon, A., Petersen, K., Igel, C., Lauze, F., Dam, E., Nielsen, M.: Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In: MICCAI. vol. 16, pp. 246–253 (2013)
-  Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: MICCAI, vol. 9351, pp. 234–241 (2015)
-  Roth, H.R., Lu, L., Farag, A., Shin, H.C., Liu, J., Turkbey, E.B., Summers, R.M.: Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In: MICCAI, pp. 556–564 (2015)
-  Soler, L., et al.: 3d image reconstruction for comparison of algorithm database: a patient-specific anatomical and medical image database (2012)
-  Wang, J., MacKenzie, J.D., Ramachandran, R., Chen, D.Z.: Detection of glands and villi by collaboration of domain knowledge and deep learning. In: MICCAI, pp. 20–27 (2015)
-  Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.: Conditional random fields as recurrent neural networks. ICCV (2015)