AttentionAnatomy: A unified framework for whole-body Organs at Risk Segmentation using multiple partially annotated datasets

Abstract

Organs-at-risk (OAR) delineation in computed tomography (CT) is an important step in radiation therapy (RT) planning. Recently, deep learning based methods for OAR delineation have been proposed and applied in clinical practice, but only for separate regions of the human body (head and neck, thorax, and abdomen). Little research addresses end-to-end whole-body OAR delineation, largely because existing datasets are only partially annotated for such a task. In this paper, we propose an end-to-end convolutional neural network model, called AttentionAnatomy, that can be jointly trained with three partially annotated datasets to segment OARs across the whole body. Our main contributions are: 1) an attention module, implicitly guided by the body-region label, that modulates the output of the segmentation branch; 2) a prediction re-calibration operation, exploiting prior information about the input images, to handle the partial-annotation (HPA) problem; 3) a new hybrid loss function combining batch Dice loss and spatially balanced focal loss to alleviate the organ-size imbalance problem. Experimental results show that our proposed framework achieves significant improvements in both Sørensen-Dice coefficient (DSC) and 95% Hausdorff distance compared to the baseline model.

Shanlin Sun, Yang Liu¹, Narisu Bai, Hao Tang, Xuming Chen, Qian Huang, Yong Liu, Xiaohui Xie

DeepVoxel Inc, Irvine, CA, USA
Department of Radiation Oncology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
Department of Computer Science, University of California, Irvine, CA, USA

Keywords: whole body, automated anatomy segmentation, partial annotations, deep learning

1 Introduction

Radiation therapy (RT) is an important curative treatment for multiple types of cancer. A key step in RT planning is to accurately delineate all OARs in CT images. Recently, deep convolutional neural network (DCNN) methods have been successfully applied to various medical image segmentation tasks [1, 2], including OAR delineation [3, 4]. However, these methods delineate OARs in only part of the human body, e.g. head and neck (HaN), thorax or abdomen. A unified model for whole-body OAR delineation would have great clinical value, but little research has focused on this topic.

The most important reason is data. For the whole-body OAR delineation task, existing datasets are mostly partially labelled, each covering one of three parts of the human body (head and neck, thorax, or abdomen). This poses great challenges for training an end-to-end deep learning model for whole-body delineation. For instance, a dataset annotated for HaN delineation may contain CT scans that include the thorax region but only have the HaN OARs annotated. If the unannotated thoracic anatomies are treated as background, the model is forced to learn contradictory representations.

This task also faces three main challenges. First, a naive approach that first classifies the CT scan into one of the three regions and then applies a region-specific segmentation model suffers from systematic errors: misclassification of the body region significantly degrades segmentation quality, which largely offsets the gain from automatic delineation in clinical practice. Second, current state-of-the-art OAR delineation methods [3, 4] use 3D convolutions and require the whole CT volume as input, which does not scale to the whole body because of memory constraints. Third, the volume sizes of different OARs are highly imbalanced.

In this paper, we propose an end-to-end 2.5D DCNN framework, named AttentionAnatomy, to address the aforementioned challenges. AttentionAnatomy preserves the encoder-decoder structure of U-Net and has two branches: a CT region classification branch and an OAR segmentation branch. The classification branch outputs a region prediction as well as an attention vector of 33 elements, representing an inference of the possible combination of OARs in the current image. The segmentation branch then uses this attention vector to modulate the final output mask. We further propose a re-calibration mechanism to tackle the partial-annotation problem, and a hybrid loss function consisting of batch Dice loss and spatially balanced focal loss to cope with extreme class imbalance. Experiments show that AttentionAnatomy achieves a significant increase in DSC and a drop in 95% Hausdorff distance compared to the baseline model.

2 Materials and Methods

2.1 Data

We used three in-house datasets (head and neck (HaN), thorax and abdomen) in our study, containing 41, 43 and 45 CT scans, respectively. A total of 33 OARs were delineated by a radiation oncologist with more than 10 years of experience. We randomly split the three datasets into 36, 37 and 39 scans for training and 5, 6 and 6 for testing, giving 112 CT images in the training set and 17 in the test set.

Each CT scan is manually assigned one of five classification labels: head, upper chest, chest, upper abdomen and abdomen. Each slice of the CT image is center-cropped to a 320×320 region for faster training. We stack 5 consecutive slices from the CT scan along the channel axis and feed this tensor into the proposed model, which then outputs 34 2D binary masks, corresponding to the segmentation of the input's center slice, one for each OAR plus background.
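The 2.5D input construction described above can be sketched as follows. The function name and the boundary-clamping policy at the ends of the volume are our assumptions, not from the paper:

```python
import numpy as np

def make_25d_input(volume, center, n_slices=5):
    """Stack n_slices consecutive axial slices centered on `center` along
    the channel axis; indices are clamped at the volume boundaries
    (an assumed edge policy)."""
    half = n_slices // 2
    idx = np.clip(np.arange(center - half, center + half + 1),
                  0, volume.shape[0] - 1)
    return volume[idx]  # shape: (n_slices, H, W)

# toy volume: 10 axial slices of 320 x 320
vol = np.zeros((10, 320, 320), dtype=np.float32)
x = make_25d_input(vol, center=4)
print(x.shape)  # (5, 320, 320)
```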

2.2 Network architecture

Fig. 1 shows the architecture of AttentionAnatomy. The choice of encoder and decoder branches is flexible and not limited to any particular implementation; we chose the standard residual U-Net in this work. $S$, $H$ and $W$ denote the number of slices, height and width of the input images, and $C$ is the number of channels of the predicted segmentation. In our work, $S$, $H$, $W$ and $C$ are 5, 320, 320 and 34, respectively.

Figure 1: Overview of the model.

classification branch

The details of the classification branch are shown in Fig. 1. Its output is a scan-wise prediction indicating which region the input most probably belongs to. The principal purpose of this branch is not to decide the scan type during inference, but to make its feature maps represent general spatial information about the input; more specifically, we expect these feature maps to help identify which OARs exist in the input scans.

attention module

The attention module, connecting the classification branch and segmentation branch as shown in Fig. 1, modulates the probability prediction of the segmentation branch. It is designed to suppress predictions of OARs that do not exist in the input scans and, at the same time, assign larger weights to OARs that do. For class $c$, let $a^c$ be the output of the 'fc4' layer shown in Fig. 1, $f_n^c$ the $n$-th voxel of the segmentation-branch feature map, and $\tilde{f}_n^c$ the $n$-th voxel of the modulated feature map (both shown in Fig. 1). The attention module works as

$$\tilde{f}_n^c = \sigma(a^c)\, f_n^c, \qquad (1)$$

where $\sigma$ denotes the sigmoid function. Finally, applying a softmax activation over classes, for each voxel $n$ we obtain the predicted probability $p_n^c$. We decoupled the attention module from the last layer of the classification branch, because the classification task only cares about distinctive features among different regions, while the OARs of interest in different regions are not mutually exclusive. For example, the spinal cord spans the upper body: it hardly matters for scan classification, yet the segmentation branch should attend to it in most scans. To keep the model from getting stuck at a sub-optimal point due to these mismatched goals, the attention module and classification branch share all features except their last layers.
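A minimal NumPy sketch of the attention modulation, assuming sigmoid gating of the per-class feature maps before the softmax (the concrete gating function and all names here are our assumptions):

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_modulate(logits, a):
    """logits: (C, H, W) per-class feature map f_n^c from the segmentation
    branch; a: (C,) attention scores from the 'fc4' layer. Channels whose
    attention score is very negative are suppressed toward zero."""
    gate = 1.0 / (1.0 + np.exp(-a))          # sigmoid weight per class
    gated = gate[:, None, None] * logits     # modulated feature map
    return softmax(gated, axis=0)            # per-voxel probabilities p_n^c

C, H, W = 4, 8, 8
p = attention_modulate(np.random.randn(C, H, W), np.array([5.0, -5.0, 0.0, 0.0]))
print(np.allclose(p.sum(axis=0), 1.0))  # True: valid distribution per voxel
```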

2.3 Loss function

For the 2.5D model, the volume-size imbalance problem arises from two perspectives: spatial size in the x-y plane and length along the z axis. For example, the spinal cord is a small structure in any single CT slice but a 'long' one, since it occupies many slices in a CT volume. In contrast, the sublingual gland is not especially small in-plane but is so 'short' that it occupies only two or three slices. We employ a hybrid segmentation loss combining batch Dice loss and spatially balanced focal loss to alleviate this imbalance. For the classification branch, we simply apply cross-entropy loss. The total loss can be expressed as

$$\mathcal{L} = \lambda_1 \mathcal{L}_{bd} + \lambda_2 \mathcal{L}_{sbf} + \lambda_3 \mathcal{L}_{ce}, \qquad (2)$$

where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are trade-offs among the batch Dice loss $\mathcal{L}_{bd}$, the spatially balanced focal loss $\mathcal{L}_{sbf}$ and the region-classification cross-entropy loss $\mathcal{L}_{ce}$.

In what follows, $p_{ij}^c$ is the predicted probability that voxel $i$ of the $j$-th sample in a batch (batch size $B$) belongs to class $c$, and $y_{ij}^c$ is the corresponding ground truth.

batch dice loss

The Dice loss turns the pixel-wise labeling problem into minimizing a class-level distribution distance [5]; it is therefore unaffected by the spatial-size imbalance problem. However, because the average number of slices occupied by each OAR varies greatly, the frequency with which each class contributes to the Dice loss computation also varies greatly. Batch Dice loss [6] significantly alleviates this length imbalance by treating a batch of segmentation prediction maps as a single one.

To illustrate the benefits of batch Dice loss, take the hypophysis and the right lung as examples. Our training set contains 19113 slices in total; 3094 of them have the right lung annotated but only 85 have the hypophysis annotated. With the original Dice loss, the expected frequency with which the right lung participates in the loss computation is 3094/19113 ≈ 16.2%, while that of the hypophysis is 85/19113 ≈ 0.44%. With batch Dice loss and a batch size of 16, the right lung is expected to appear in essentially every batch (16 × 16.2% > 1), while the expected frequency of the hypophysis rises to 16 × 0.44% ≈ 7.1%. The frequency ratio thus goes down from 36.4 to 14.05, so the length imbalance problem is greatly mitigated by batch Dice loss.
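The difference between per-sample Dice and batch Dice can be sketched as follows (a minimal NumPy version; the smoothing term `eps` and the array layout are our assumptions):

```python
import numpy as np

def dice_loss(p, y, eps=1e-6, batch=True):
    """Soft Dice loss over (B, C, N) probability and one-hot label arrays.
    With batch=True (batch Dice), the batch axis is pooled into a single
    sample, so a rarely annotated class contributes whenever it appears
    anywhere in the batch; with batch=False Dice is computed per sample."""
    axes = (0, 2) if batch else (2,)
    inter = (p * y).sum(axis=axes)
    denom = p.sum(axis=axes) + y.sum(axis=axes)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

# a perfect prediction gives (near) zero loss
y = np.zeros((2, 3, 10))
y[:, 0, :5] = 1
y[:, 1, 5:] = 1
print(round(dice_loss(y.copy(), y), 6))  # 0.0
```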

spatially balanced focal loss

Focal loss [7] forces the model to learn more from poorly classified voxels/pixels. More importantly, it helps the Dice loss cope with small-volume organs: the gradient of the Dice loss with respect to the prediction $p_i$ for voxel $i$, which can be written as

$$\frac{\partial \mathcal{L}_{dice}}{\partial p_i} = -\frac{2 y_i \left(\sum_j p_j + \sum_j y_j\right) - 2\sum_j p_j y_j}{\left(\sum_j p_j + \sum_j y_j\right)^2},$$

becomes very small when the sum of predicted probabilities is much larger than the number of foreground voxels/pixels. Focal loss therefore potentially accelerates the training of small organs. Our proposed spatially balanced focal loss focuses on the hard voxels/pixels from small-volume organs. It can be written as

$$\mathcal{L}_{sbf} = -\frac{1}{BN}\sum_{j=1}^{B}\sum_{i}\sum_{c} w^c \, y_{ij}^c \left(1-p_{ij}^c\right)^{\gamma} \log p_{ij}^c, \qquad (3)$$

where $N$ is the number of voxels per sample, $\gamma$ is the focusing parameter of focal loss [7], and $w^c$ is the inverse of the individual organ volume, designed to handle organ-size imbalance in the x-y plane.
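A sketch of the spatially balanced focal loss of Eq. (3) in NumPy. The normalization, the clamping of empty-class volumes, and the default γ = 2 are our assumptions:

```python
import numpy as np

def sb_focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Spatially balanced focal loss: focal loss with a per-class weight
    w^c equal to the inverse of organ c's volume, so small in-plane organs
    are not drowned out. p, y: (C, N) probabilities and one-hot labels."""
    vol = y.sum(axis=1)                     # voxels per organ
    w = 1.0 / np.maximum(vol, 1.0)          # inverse-volume weight w^c
    focal = -((1.0 - p) ** gamma) * y * np.log(p + eps)
    return float((w[:, None] * focal).sum())

# confident, correct predictions incur near-zero loss
y = np.array([[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
p = np.where(y == 1, 0.98, 0.01)
print(sb_focal_loss(p, y) < 1e-4)  # True
```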

2.4 Handling partial annotations

Our training dataset is composed of three sub-datasets, and this introduces an annotation problem: for example, the liver appears in both abdomen and thorax CT images, but is only annotated in the abdomen images. We define such a problem as partial annotations. To deal with this inconsistent annotation, based on prior information about the source and region of the input CT scans, our re-calibration operation can be formulated as

$$\hat{p}_n^c = \begin{cases} m_s^c \, p_n^c, & c \geq 1, \\ p_n^0 + \sum_{c' \geq 1} \left(1 - m_s^{c'}\right) p_n^{c'}, & c = 0, \end{cases} \qquad (4)$$

where $s \in \{\text{HaN}, \text{Thorax}, \text{Abdomen}\}$ indexes our three data sources and $m_s$ is a mask vector for the $s$-th source. That is, for voxel $n$, $m_s^c = 0$ if the annotation of organ $c$ is missing in data source $s$, and 1 otherwise. The background case ($c = 0$) implies that we transport the predicted probability of unannotated organs into the background probability.
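A minimal sketch of the re-calibration of Eq. (4), with channel 0 as background (the variable names are ours):

```python
import numpy as np

def recalibrate(p, mask):
    """Move the predicted probability of organs unannotated in the current
    data source (mask[c] == 0) into the background channel, so they are
    not penalized as false positives. p: (C, N) softmax output with
    channel 0 = background; mask: (C,) with mask[0] == 1."""
    q = p * mask[:, None]
    q[0] += (p[1:] * (1.0 - mask[1:, None])).sum(axis=0)
    return q

# 3 classes (background, liver, lung), 2 voxels; liver unannotated here
p = np.array([[0.2, 0.5], [0.7, 0.1], [0.1, 0.4]])
q = recalibrate(p, np.array([1.0, 0.0, 1.0]))
print(q[0])  # liver probability folded into background: [0.9 0.6]
```

Note that the re-calibrated probabilities still sum to 1 per voxel, since mass is only moved between channels.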

2.5 Implementation details and performance evaluation

In our experiments, a seven-fold cross validation was performed to demonstrate the performance of AttentionAnatomy. The batch size was 16 and Adam was used as the optimizer. The first 20 epochs pre-trained the classification branch, with $\lambda_3$ set to 1, $\lambda_1$ and $\lambda_2$ set to 0, and a learning rate of 0.001. From epoch 20 to 50, $\lambda_1$, $\lambda_2$ and $\lambda_3$ were all set to 1. From epoch 50 to 70, the spatially balanced focal loss was removed from the total loss ($\lambda_2 = 0$) and the learning rate decreased to 0.0005. During the fine-tuning phase, we introduced 2D elastic transforms for data augmentation, reduced the learning rate to 0.0001 and set $\lambda_1$, $\lambda_2$, $\lambda_3$ to 1, 0, 0, respectively. We use DSC and 95% Hausdorff distance [8] as the final evaluation metrics.
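The staged schedule above can be encoded as a simple lookup table. This is a hypothetical sketch: the epoch boundaries and λ values follow the description, and the (1, 0, 1) setting for epochs 50-70 is our reading of "only the focal loss is removed":

```python
# (epoch range, (lambda_1, lambda_2, lambda_3), learning rate)
SCHEDULE = [
    ((0, 20),    (0.0, 0.0, 1.0), 1e-3),  # pre-train classification branch
    ((20, 50),   (1.0, 1.0, 1.0), 1e-3),  # joint training, full hybrid loss
    ((50, 70),   (1.0, 0.0, 1.0), 5e-4),  # drop spatially balanced focal loss
    ((70, None), (1.0, 0.0, 0.0), 1e-4),  # fine-tune: batch Dice only
]

def lambdas_at(epoch):
    """Return the (lambda_1, lambda_2, lambda_3) weights and learning rate
    in effect at a given epoch (ranges are checked in order)."""
    for (lo, hi), lams, lr in SCHEDULE:
        if hi is None or epoch < hi:
            return lams, lr

print(lambdas_at(10))  # ((0.0, 0.0, 1.0), 0.001)
print(lambdas_at(80))  # ((1.0, 0.0, 0.0), 0.0001)
```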

3 Results

In our experiment, the baseline results were generated by a vanilla U-Net. The encoder and segmentation branch of our proposed model are exactly the same as in the baseline model, so the GPU computation of our model is only 1.02% larger than that of the baseline.

As Table 1 shows, AttentionAnatomy achieves a significant improvement over the baseline: DSC increases by an average of 2.84% and the 95% Hausdorff distance drops from an average of 11.42 mm to 9.33 mm. Table 1 also shows that handling the partial-annotation problem is more beneficial when applied within our proposed framework. This is because the attention module provides information about which OARs the input CT may contain; with this restriction, the probability re-calibration results are more reliable. Fig. 2 supports the numerical results in Table 1 by visualizing the predicted segmentations of a partially annotated region produced by the different models and settings.

Figure 2: Prediction visualization of Vanilla Unet, Vanilla Unet + HPA, AttentionAnatomy and AttentionAnatomy + HPA. HPA is short for handling the partial-annotation problem.
OARs | Vanilla Unet | Vanilla Unet + HPA | Att-Anatomy | Att-Anatomy + HPA
Brain Stem 85.87 ± 1.35
Constrictor Naris 77.14 ± 4.31
Ear L 78.78 ± 3.85
Ear R 82.13 ± 4.18
Eye L 90.10 ± 2.00
Eye R 90.41 ± 0.71
Hypophysis 69.07 ± 8.20
Larynx 92.20 ± 0.68
Mandible 94.27 ± 0.56
Oral Cavity 90.50 ± 2.58
Parotid L 83.81 ± 3.48
Parotid R 80.81 ± 4.10
Smg L 77.93 ± 5.37
Smg R 77.24 ± 7.10
Spinal Cord 89.13 ± 5.37
Sublingual Gland 46.73 ± 12.67
Temporal Lobe L 88.78 ± 3.45
Temporal Lobe R 88.58 ± 2.90
TMJ L 84.94 ± 3.27
TMJ R 88.65 ± 1.81
Trachea 91.53 ± 1.47
Heart 91.60 ± 2.52
Lung L 97.70 ± 0.60
Lung R 97.57 ± 0.49
Eso 79.38 ± 2.07
Gallbladder 68.81 ± 25.14
Kidney L 94.09 ± 1.46
Kidney R 94.87 ± 1.79
Bag Bowel 81.49 ± 2.93
Liver 93.89 ± 1.84
Pancreas 64.77 ± 13.54
Spleen 90.29 ± 5.92
Stomach 63.97 ± 17.58
Average 83.58
Table 1: DSC (%, mean ± std) comparison between the baseline model and our proposed model. L and R are short for left and right; SMG for submandibular gland; TMJ for temporomandibular joint; Att-Anatomy for AttentionAnatomy.

4 Conclusions

Deep learning based OAR delineation solutions have proven comparable to expert performance. However, most of these studies focus on one particular body region. In this work, we proposed a light, flexible and clinically applicable end-to-end framework for segmenting whole-body OARs. Moreover, we incorporated an attention module connecting the segmentation and classification branches to guide the segmentation prediction, and a re-calibration method to tackle the partial-annotation problem.

Footnotes

  1. These authors contributed equally.

References

  1. Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016, pp. 565–571.
  2. Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
  3. Hao Tang, Xuming Chen, Yang Liu, Zhipeng Lu, Junhua You, Mingzhou Yang, Shengyu Yao, Guoqi Zhao, Yi Xu, Tingfeng Chen, et al., “Clinically applicable deep learning framework for organs at risk delineation in ct images,” Nature Machine Intelligence, pp. 1–12, 2019.
  4. Wentao Zhu, Yufang Huang, Liang Zeng, Xuming Chen, Yong Liu, Zhen Qian, Nan Du, Wei Fan, and Xiaohui Xie, “Anatomynet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy,” Medical physics, vol. 46, no. 2, pp. 576–589, 2019.
  5. Seyed Sadegh Mohseni Salehi, Deniz Erdogmus, and Ali Gholipour, “Tversky loss function for image segmentation using 3d fully convolutional deep networks,” in International Workshop on Machine Learning in Medical Imaging. Springer, 2017, pp. 379–387.
  6. Oldřich Kodym, Michal Španěl, and Adam Herout, “Segmentation of head and neck organs at risk using cnn with batch dice loss,” in German Conference on Pattern Recognition. Springer, 2018, pp. 105–114.
  7. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.
  8. Daniel P Huttenlocher, Gregory A Klanderman, and William J Rucklidge, “Comparing images using the hausdorff distance,” IEEE Transactions on pattern analysis and machine intelligence, vol. 15, no. 9, pp. 850–863, 1993.