Deep Learning for Brain Tumor Segmentation in Radiosurgery: Prospective Clinical Evaluation

Abstract

Stereotactic radiosurgery is a minimally invasive treatment option for a large number of patients with intracranial tumors. As part of treatment planning, accurate delineation of brain tumors is of great importance. However, slice-by-slice manual segmentation on T1c MRI can be time-consuming (especially for multiple metastases) and subjective. In our work, we compared several deep convolutional network architectures and training procedures and evaluated the best model in a radiation therapy department on three types of brain tumors: meningiomas, schwannomas and multiple brain metastases. The developed semiautomatic segmentation system accelerates the contouring process by times on average and increases inter-rater agreement from % to .

Keywords: stereotactic radiosurgery, segmentation, CNN, MRI

1 Introduction

Brain stereotactic radiosurgery involves the accurate delivery of radiation to the delineated tumor. The goal of the corresponding planning process is to achieve maximum conformity of the treatment plan. Hence, the outcome of the treatment is highly dependent on the clinician’s delineation of the target on the MRI. Several papers have shown that experts define different tumor volumes for the same clinical case [10]. As no margins are applied to a contoured target, differences in contouring can increase normal tissue toxicity or the risk of recurrence.

The process of contouring is the largest source of potential errors and inter-observer variation in target delineation [12]. Such variability creates challenges for evaluating treatment outcomes and assessing the dosimetric impact on the target. Routinely, targets are delineated through slice-by-slice manual segmentation on MRI, and an expert may spend up to one hour delineating an image. However, stereotactic radiosurgery is a one-day treatment, so fast segmentation is critical to avoid treatment delays.

Automatic segmentation is a promising tool for saving time and reducing the inter-observer variability of target contouring [11]. Recently, deep learning methods have become popular for a wide range of medical image segmentation tasks. In particular, glioma auto-segmentation methods are well developed [1] thanks to the BraTS datasets and challenges [8]. At the same time, the most common types of brain tumors treated by radiosurgery, namely meningiomas, schwannomas and multiple brain metastases, are less studied. Recently published studies [2, 6, 5] developed deep learning methods for automatic segmentation of these types of tumors. However, these studies do not investigate the above-mentioned clinical performance metrics: inter-rater variability and time savings.

Our work aimed to fill this gap and evaluate the performance of semi-automatic segmentation of brain tumors in clinical practice. We developed an algorithm based on a deep convolutional neural network (CNN) with a proposed adjustment to the cross-entropy loss, which allowed us to significantly boost the quality of small tumor segmentation. The model, achieving a state-of-the-art level of segmentation, was integrated into the radiosurgery planning workflow. Finally, we evaluated the quality of the automatically generated contours and reported the time reduction achieved by using these contours within treatment planning.

2 Related work

In recent years, various deep learning architectures have been developed. For medical imaging, the best results have been achieved by 3D convolutional networks: 3D U-Net [3] and V-Net [9]. However, the large size of brain MRI places additional restrictions on the CNN for some tasks. A network called DeepMedic [4] demonstrated solid performance on such problems, including glioma segmentation [1].

Some image processing methods have been proposed for other brain tumors as well. For example, the authors of [7] developed a multistep approach utilizing classical computer vision tools such as thresholding and super-pixel clustering. In common with other medical image processing tasks, such methods have two key drawbacks: slow processing speed and poor segmentation quality on small lesions [6]. Deep learning-based approaches may resolve these issues thanks to their high inference speed and great flexibility. Indeed, several recently published studies validated CNNs on the task of nonglial brain tumor segmentation and demonstrated promising results. In [6], the authors modified DeepMedic to improve segmentation quality. The authors of [2] compared various combinations of T1c, T2 and FLAIR modalities. New patch generation methods were proposed and evaluated on three types of brain tumors in [5]. In [9], the authors introduced a novel loss function based on the Dice coefficient to improve segmentation results in highly class-imbalanced tasks.

3 Data

For computational experiments, we used 548 contrast-enhanced T1-weighted MRI scans with mm image resolution. These cases were characterized by multiple brain tumors ( per patient) of different sizes: from mm up to cm in diameter. The images were naturally divided into two datasets. The first, the training dataset, consisted of 489 unique patients examined before 2017. It was used to train different models and tune their parameters via cross-validation. The second, the hold-out dataset, comprised another 59 patients treated in 2017. We performed the final comparison of the best methods on the hold-out dataset to avoid overfitting.

Finally, to evaluate the quality of the tumor delineation algorithm in clinical practice, we used a third, clinical, dataset consisting of four cases of meningioma, two cases of vestibular schwannoma and four cases of multiple brain metastases (ranging from 3 to 19 lesions per case) collected in 2018. Four experts (or users) with experience in brain radiosurgery ranging from 3 to 15 years delineated each of these cases in two setups: manually and using the output of our model as the starting point; see the details in Sec. 4.4.

4 Methods

4.1 CNN

We used vanilla 3D U-Net, V-Net and DeepMedic models as network architectures. We trained all models for epochs, starting with a learning rate of and reducing it to at epoch . Each epoch consisted of 200 stochastic gradient descent iterations. At every iteration, we generated training patches of size with batches of size for 3D U-Net and for V-Net. For DeepMedic, we generated patches of effective size in one batch. We used 5-fold cross-validation to split our training data patient-wise. After the train-predict process, we gathered the test predictions over the 5 splits to form the metric curve and compared the experimental results.
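The patient-wise 5-fold split mentioned above can be sketched as follows. This is a minimal illustration (function names are ours, not from the paper): the essential point is that folds are formed over patients, never over individual patches, so no patient leaks between the train and validation sides of a split.

```python
import random

def patient_wise_folds(patient_ids, n_folds=5, seed=0):
    """Split patients (not patches) into folds so that all data
    of one patient ends up in exactly one fold."""
    ids = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(ids)
    return [ids[i::n_folds] for i in range(n_folds)]

def cv_splits(patient_ids, n_folds=5, seed=0):
    """Yield (train_patients, val_patients) for each of the folds."""
    folds = patient_wise_folds(patient_ids, n_folds, seed)
    for i, val in enumerate(folds):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        yield train, val
```

Gathering the validation predictions over all 5 splits then yields exactly one prediction per training patient, from which the metric curves are formed.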

For a subset of experiments (see Sec. 5 for the details), we also used a modified loss function, described in the next subsection, and Tumor Sampling from [5]. For the Tumor Sampling as well as the original patch sampling procedure, we set the probability of choosing the central voxel of a patch inside the target mask to for all experiments. We reported the results on the hold-out dataset while using the training dataset to fit the models.

4.2 Inversely weighted cross-entropy

We observed that all methods missed many small tumors or segmented them inappropriately. We attributed this behavior to a property of the loss function: errors on small targets have the same impact on the loss as small inaccuracies on large lesions. To make all possible errors contribute equally to the BCE (binary cross-entropy) loss, we constructed a tensor of weights equal to the inverse relative volumes of the regions of interest.

Given the ground truth at the training stage, we generate a tensor of weights for every image in the training set. To form this tensor for a given image, we split the corresponding ground-truth mask into connected components $R_0, R_1, \dots, R_K$, where $R_0$ is the background and $K$ is the number of tumors. Weights of the background component are set to $1$. The weights of pixels in the connected component $R_j$, $j \geq 1$, are equal to

$$w_i = \beta \cdot \frac{N}{|R_j|}, \qquad (1)$$

where $N$ is the number of voxels in the image and $\beta$ is the fraction of the positive class in the current training set, so that $N / |R_j|$ is the inverse relative volume of the component. The final form of our loss coincides with BCE weighted over the voxels of the propagated sample:

$$\mathcal{L} = -\frac{1}{n} \sum_{i=1}^{n} w_i \left[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \right], \qquad (2)$$

where $n$ is the number of voxels in the sample, $w_i$ is the weight of the $i$-th pixel calculated using (1), $y_i$ is the ground-truth label and $p_i$ is the predicted probability.
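A sketch of the weight-tensor computation is below. The normalization (background weight 1, tumor voxels weighted by β times the inverse relative component volume) is our reading of the description above; the function name is ours.

```python
import numpy as np
from scipy.ndimage import label

def iw_weights(mask, beta):
    """Per-voxel weights for inversely weighted BCE (sketch).

    mask : binary ground-truth tumor mask (any-dimensional array)
    beta : fraction of the positive class in the training set

    Background voxels get weight 1; every voxel of a connected tumor
    component R_j gets a weight inversely proportional to |R_j|, so
    small tumors contribute to the loss as much as large ones.
    """
    mask = np.asarray(mask).astype(bool)
    weights = np.ones(mask.shape, dtype=np.float64)
    components, k = label(mask)  # connected components R_1 ... R_K
    n_voxels = mask.size
    for j in range(1, k + 1):
        region = components == j
        weights[region] = beta * n_voxels / region.sum()
    return weights
```

The resulting tensor is multiplied voxel-wise into the standard BCE loss during training.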

We compared the proposed loss function with the current state-of-the-art Dice loss [9] as well as with standard BCE.

4.3 Metric

We highlighted two essential characteristics of small tumor segmentation: delineation and detection quality. Since delineation can simply be measured by the local Dice score, and experts can always adjust the contours of detected tumors, we focused our attention on detection quality.

We suggested measuring it in terms of tumor-wise precision-recall curves. We adapted the FROC curve from [13] by changing its hit condition between predicted and ground-truth tumors. Predicted tumors were defined as connected components above a probability of , and we treated the maximum probability of a component as the model’s certainty level for it. Our hit condition is that the Dice score between a real and a predicted lesion is greater than zero. We found such a lesion-wise PRC (precision-recall curve) more interpretable and useful for model comparison than the traditional pixel-wise PRC.
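The hit condition can be sketched at a single threshold as follows (the full PRC is obtained by sweeping the certainty level); the function name is ours. Note that "Dice > 0" between two lesions is equivalent to their sharing at least one voxel.

```python
import numpy as np
from scipy.ndimage import label

def lesion_hits(pred_prob, gt_mask, threshold=0.5):
    """Lesion-wise detection counts at one probability threshold.

    A predicted connected component 'hits' a ground-truth lesion iff
    they share at least one voxel (i.e. Dice > 0).
    Returns (tp, fp, fn) counted over lesions, not voxels.
    """
    pred_cc, n_pred = label(pred_prob >= threshold)
    gt_cc, n_gt = label(np.asarray(gt_mask).astype(bool))

    hit_gt, fp = set(), 0
    for j in range(1, n_pred + 1):
        overlap = np.unique(gt_cc[pred_cc == j])
        overlap = overlap[overlap > 0]
        if overlap.size:
            hit_gt.update(int(v) for v in overlap)
        else:
            fp += 1  # predicted component touching no real lesion
    tp = len(hit_gt)
    return tp, fp, n_gt - tp
```

Precision and recall at this threshold are then tp / (tp + fp) and tp / (tp + fn), and repeating the count over the model's certainty levels traces the lesion-wise PRC.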

4.4 Contouring quality and time reduction

Within the clinical experiment, we implemented the final model as a service that processes DICOM images and generates contours as DICOM RT files. This output was uploaded to a standard planning system, where experts validated and, if needed, adjusted it; we call these contours CNN-initialized. In addition, the same cases were annotated manually in the same planning system by the same four experts.

To evaluate the quality of our algorithm, we introduced the following three types of comparisons.

  • 1 vs 3 – the manual contour of one user compared to a ground-truth estimate, namely the averaged contour of the other three users. This setting allows us to measure the current inter-rater variability for a specific user.

  • 1_CNN vs 3 – a CNN-initialized contour of one user compared to the same ground truth as above. In this setting we estimate the effect of the algorithm on the users.

  • 1_CNN vs 3_CNN – the same as the previous setting, but the averaged contour was obtained from the CNN-initialized contours of the three corresponding users. This last setting allows us to measure the level of additional standardization provided by the CNN.
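One simple way to form the averaged reference contour of several raters (the exact averaging scheme is not specified here, so this is an assumption for illustration) is a voxel-wise majority vote over their binary masks:

```python
import numpy as np

def average_contour(masks, threshold=0.5):
    """Voxel-wise consensus of several raters' binary masks (sketch).

    Keeps the voxels marked by more than `threshold` of the raters;
    with the default 0.5 and three raters this is a majority vote.
    """
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    return stack.mean(axis=0) > threshold
```

The Dice score between one user's contour and this consensus mask then gives the per-case agreement values compared in the three settings above.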

To investigate the differences in Dice scores, we performed the Sign test for the pairs of settings (1 vs 3, 1_CNN vs 3) and (1 vs 3, 1_CNN vs 3_CNN), see Sec. 5.
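The Sign test on paired Dice scores can be sketched with the standard two-sided formulation (pure stdlib; the function name is ours): zero differences are discarded, and under the null hypothesis of zero median difference the number of positive signs follows Binomial(n, 1/2).

```python
from math import comb

def sign_test(diffs):
    """Two-sided sign test for zero median of paired differences.

    diffs : per-case differences between the Dice scores of two
            settings (e.g. CNN-initialized minus manual agreement).
    Returns the two-sided p-value.
    """
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(d > 0 for d in nonzero)
    k = min(k, n - k)  # count of the rarer sign
    # P(X <= k) for X ~ Binomial(n, 1/2), doubled for a two-sided test
    p_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_tail)
```

With this formulation, consistently positive differences across cases yield small p-values, matching the way significance is reported in Tab. 1.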

To evaluate the speed-up provided by our algorithm in routine clinical practice, we compared the times needed for two contouring techniques: manual delineation of the tumors and user adjustment of the CNN-initialized contours of the same tumors. The time spent on each task was recorded for all users and cases.

We did not perform comparisons involving pure CNN-generated contours, because AI cannot be used in treatment planning without user control and verification.

5 Results

5.1 Methods comparison on the hold-out dataset

Figure 1: CNN model comparison. We zoomed all PRC plots from the standard scale to better show which model or method achieved higher recall. We treated recall as a more important metric than precision in our task: a radiologist spends a few seconds deleting a false prediction, but much more time finding and delineating a tumor that the CNN missed.
Figure 2: The best model with TS (Tumor Sampling) and then with iwBCE or DL (Dice Loss).

First, we compared three network architectures (Fig. 1). The results suggest the superiority of U-Net-like architectures over DeepMedic in our task. We made the architecture choice in favor of 3D U-Net and changed it in a minor way to fit our inference time and memory requirements. We used this model for the subsequent experiments and the final system.

We also observed that all models performed poorly on small tumors (Fig. 1, left). In the second set of experiments, we aimed to improve recall for small lesions by adding Tumor Sampling and iwBCE to 3D U-Net, the best model from the first experiments. The proposed loss re-weighting strategy (see Sec. 4.2) reduced the number of missed small tumors by a factor of two at the same level of precision (Fig. 2, left) and improved the network's performance over all tumors (Fig. 2, right), achieving almost recall on the hold-out dataset. It slightly outperformed the Dice loss, so we used iwBCE to train the model for the clinical installation.

The shaded areas on the PRC plots show 95% confidence intervals of the bootstrapped curves over 100 iterations, choosing 80% of the test patients every time. The median lesion-wise Dice score of 3D U-Net trained with Tumor Sampling and iwBCE is on the hold-out dataset.

5.2 Clinical evaluation

We observed better agreement between the contours created by an expert and the reference contour when the expert's contours were initialized by the CNN, even if the reference contour was generated completely manually. Tab. 1 shows the reduction of inter-rater variability. Improvements for 3 out of 4 experts are statistically significant according to the Sign test p-values. The total median agreement increased from to in terms of Dice score.

The automatic contours were generated and imported into the treatment planning system in less than one minute. The total median time needed to delineate a case manually was min.; details for all four experts can be seen in Tab. 2. On average, the automatic algorithm sped up delineation by times, with a median time reduction of min. We observed a speed-up for all users and for all cases they delineated. We note that the acceleration plays a more significant role for cases with multiple lesions. The total median time needed to delineate a case with multiple metastases manually was min. (ranging from 15:20 to 44:00 in mm:ss). Automatic tumor segmentation sped up the delineation of multiple lesions by times, with a median time reduction of min.

             Median Dice scores                      p-values
             1 vs 3   1_CNN vs 3   1_CNN vs 3_CNN    I         II
User 1       0.938    0.947        0.969             2.85e-1   7.00e-6
User 2       0.930    0.941        0.968             7.01e-3   7.00e-6
User 3       0.915    0.920        0.934             2.29e-3   2.26e-3
User 4       0.918    0.935        0.968             1.40e-2   3.55e-2
All data     0.924    0.941        0.965             6.57e-4   3.61e-5
Table 1: Quality evaluation in tumor contouring. Case I tests the hypothesis that the median difference between the settings (1 vs 3) and (1_CNN vs 3) is zero; case II tests the same hypothesis for the settings (1 vs 3) and (1_CNN vs 3_CNN). All data contains the results for the consolidated set of experiments.
             Median manual time   Range           Median time reduction   Range
User 1       13:15                07:00 – 35:06   06:54                   00:40 – 17:06
User 2       05:30                02:17 – 15:20   02:16                   00:48 – 08:20
User 3       12:00                03:00 – 44:00   09:00                   01:00 – 26:00
User 4       06:30                03:00 – 23:30   05:27                   03:00 – 17:35
All data     10:05                02:17 – 44:00   05:32                   00:40 – 26:00
All times are given in mm:ss.
Table 2: Time reduction in tumor delineation. Median times are given per case.
Figure 3: Inter-rater agreement vs. delineation time. Left: each point corresponds to a lesion-user pair; Dice scores for blue dots (manual segmentation) were calculated using the 1 vs 3 setting, and for red dots using the 1_CNN vs 3_CNN setting. Center, right: dashed lines connect the two points of the same lesion-user pair for manual and CNN-initialized delineation. Note that we restricted the time axis to a maximum of 1000 s and the Dice axis to a minimum of , so a few blue points fall outside the plot.
Figure 4: Segmentation results for two metastatic lesions, one schwannoma and one meningioma (top to bottom). Blue corresponds to the manual contour, red to the CNN-initialized contour with the user's adjustment, and dashed yellow to the pure CNN contour without the user's adjustment.

We also present a quality-time plot (Fig. 3) for both the manual and the CNN-initialized technique, separately for each user and each case. One can distinguish a global trend of simultaneous improvement in inter-rater agreement and speed-up of delineation. Examples of the different contouring techniques for all three types of lesions can be found in Fig. 4.

6 Discussion

For this study, we developed a deep learning algorithm for automatic brain tumor segmentation and successfully integrated it into the radiosurgery workflow. We demonstrated that our algorithm can achieve near expert-level performance, providing significant time savings in tumor contouring while reducing the variability in target delineation. We note that within the clinical evaluation, the users first delineated a case manually and were then asked to adjust the CNN-initialized contours of the same case; the adjustment was typically performed one day after the manual delineation. The fact that the experts had seen the tumors before may have had a small impact on the evaluation of time savings.

We proposed a new loss function, called iwBCE, which has not yet been discussed in full detail. However, it seems to be a promising approach to improving the segmentation quality of modern deep learning tools. We aim to continue research on the proposed method and to compare it with the state-of-the-art Dice loss in different setups and on different datasets.

Acknowledgements.

The Russian Science Foundation grant 17-11-01390 supported the development of the new loss function, computational experiments and article writing.

Footnotes

  1. Moscow Institute of Physics and Technology, Moscow, Russia

References

  1. S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, R. T. Shinohara, C. Berger, S. M. Ha and M. Rozycki (2018) Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge. arXiv preprint arXiv:1811.02629. Cited by: §1, §2.
  2. O. Charron, A. Lallement, D. Jarnet, V. Noblet, J. Clavier and P. Meyer (2018) Automatic detection and segmentation of brain metastases on multimodal mr images with a deep convolutional neural network. Computers in Biology and Medicine. Cited by: §1, §2.
  3. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox and O. Ronneberger (2016) 3D u-net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pp. 424–432. Cited by: §2.
  4. K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert and B. Glocker (2017) Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Medical Image Analysis 36, pp. 61–78. Cited by: §2.
  5. E. Krivov, V. Kostjuchenko, A. Dalechina, B. Shirokikh, G. Makarchuk, A. Denisenko, A. Golanov and M. Belyaev (2018) Tumor delineation for brain radiosurgery by a convnet and non-uniform patch generation. In International Workshop on Patch-based Techniques in Medical Imaging, pp. 122–129. Cited by: §1, §2, §4.1.
  6. Y. Liu, S. Stojadinovic, B. Hrycushko, Z. Wardak, S. Lau, W. Lu, Y. Yan, S. B. Jiang, X. Zhen and R. Timmerman (2017) A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. Plos One 12 (10), pp. e0185844. Cited by: §1, §2.
  7. Y. Liu, S. Stojadinovic, B. Hrycushko, Z. Wardak, W. Lu, Y. Yan, S. B. Jiang, R. Timmerman, R. Abdulrahman and L. Nedzi (2016) Automatic metastatic brain tumor segmentation for stereotactic radiosurgery applications. Physics in Medicine & Biology 61 (24), pp. 8440. Cited by: §2.
  8. B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom and R. Wiest (2015) The multimodal brain tumor image segmentation benchmark (brats). IEEE Transactions on Medical Imaging 34 (10), pp. 1993–2024. Cited by: §1.
  9. F. Milletari, N. Navab and S. Ahmadi (2016) V-net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. Cited by: §2, §2, §4.2.
  10. T. Roques (2014) Patient selection and radiotherapy volume definition—can we improve the weakest links in the treatment chain?. Clinical Oncology 26 (6), pp. 353–355. Cited by: §1.
  11. G. Sharp, K. D. Fritscher, V. Pekar, M. Peroni, N. Shusharina, H. Veeraraghavan and J. Yang (2014) Vision 20/20: perspectives on automated image segmentation for radiotherapy. Medical Physics 41 (5). Cited by: §1.
  12. M. Torrens, C. Chung, H. Chung, P. Hanssens, D. Jaffray, A. Kemeny, D. Larson, M. Levivier, C. Lindquist and B. Lippitz (2014) Standardization of terminology in stereotactic radiosurgery: report from the standardization committee of the international leksell gamma knife society: special topic. Journal of Neurosurgery 121 (Suppl_2), pp. 2–15. Cited by: §1.
  13. B. Van Ginneken, S. G. Armato III, B. de Hoop, S. van Amelsvoort-van de Vorst, T. Duindam, M. Niemeijer, K. Murphy, A. Schilham, A. Retico and M. E. Fantacci (2010) Comparing and combining algorithms for computer-aided detection of pulmonary nodules in computed tomography scans: the anode09 study. Medical Image Analysis 14 (6), pp. 707–722. Cited by: §4.3.