Vispi: Automatic Visual Perception and Interpretation of Chest X-rays

Xin Li, Rui Cao, Dongxiao Zhu
Department of Computer Science, Wayne State University, USA
{xinlee, caorui, dzhu}@wayne.edu
Abstract

Medical imaging contains essential information for rendering diagnostic and treatment decisions. Inspecting an image (visual perception) and interpreting it to generate a report are tedious clinical routines for a radiologist, and automation is expected to greatly reduce the workload. Despite the rapid development of natural image captioning, computer-aided medical image visual perception and interpretation remain challenging, largely due to the lack of high-quality annotated image-report pairs and of tailor-made generative models for sufficient extraction and exploitation of localized semantic features, particularly those associated with abnormalities. To tackle these challenges, we present Vispi, an automatic medical image interpretation system, which first annotates an image by classifying and localizing common thoracic diseases with visual support and then generates a report with an attentive LSTM model. Analyzing an open IU X-ray dataset, we demonstrate the superior performance of Vispi in disease classification, localization and report generation, using the automatic evaluation metrics ROUGE and CIDEr.

Keywords:
Medical Image Report Generation;  Disease Classification and Localization;  Visual Perception;  Attention;  Deep Learning.

1 Introduction

X-ray is a widely used medical imaging technique in clinics for the diagnosis and treatment of thoracic diseases. Medical image interpretation, including both disease annotation and report writing, is a laborious routine for radiologists. Moreover, the quality of interpretation often varies considerably with the experience, expertise and workload of the radiologist. To relieve radiologists of their excessive workload and to better control the quality of written reports, it is desirable to implement a medical image interpretation system that automates the visual perception and cognition process and generates draft reports for radiologists to review, revise and finalize.

Despite rapid and significant progress, existing natural image captioning models, e.g. [7, 18], fail to perform satisfactorily on medical report generation. The major challenge lies in the limited number of image-report pairs and the relative scarcity of abnormal pairs for model training, both of which are essential for quality radiology report generation. An additional challenge is the lack of appropriate performance evaluation metrics; the n-gram based BLEU scores widely used in natural language processing (NLP) are not well suited to assessing the quality of generated reports.

Figure 1: Illustration of an existing medical report generation system (e.g. [6, 19]) (a) and the proposed medical image interpretation system (b). The former uses a coarse grid of image regions as visual features to generate the report directly, whereas the latter first predicts and localizes disease as semantic features and then generates the report.

Nevertheless, several approaches have been developed to generate reports automatically for chest X-rays using the CNN-RNN architecture developed in natural image captioning research [6, 8, 17, 19] (Fig. 1a). Since a medical report typically consists of a sequence of sentences, Jing et al. [6] use a hierarchical LSTM [7] to generate paragraphs and achieve impressive results on the Indiana University (IU) X-ray dataset [2]. Instead of only using visual features extracted from the image, they first predict the Medical Text Indexer (MTI) annotated tags and then combine semantic features from the tags with visual features from the images for report generation. Similarly, Xue et al. [19] use both visual and semantic features but generate the 'impression' and 'findings' of the report separately: the former one-sentence summary is generated from a CNN encoder, whereas the latter paragraph is generated using both visual and semantic features. Different from [6], the semantic feature is extracted by embedding the last generated sentence rather than the annotated tags. Li et al. [8] use a hierarchical decision-making procedure to determine whether to retrieve a template sentence from an existing template corpus or to invoke a lower-level decision to generate a new sentence from scratch; the decision priority is updated via reinforcement learning based on sentence-level and word-level rewards or punishments. However, none of these methods demonstrates satisfactory performance in disease localization and classification, which is a central issue in medical image interpretation.

Wang et al. [17] address both disease classification and medical image report generation in the same model. They introduce a Text-Image Embedding network (TieNet), which integrates a self-attention LSTM over the textual report and a visual-attention CNN over the image. TieNet is capable of extracting an informative embedding that represents the paired medical image and report, which significantly improves disease classification performance compared to [16]. However, TieNet's performance on medical report generation improves only marginally over the baseline approach [18], trading report generation performance for disease classification performance. Moreover, TieNet does not provide visual support for radiologists to review and revise the automatically generated report.

We present an automatic medical image interpretation system with in situ visual support, striving for better performance in both image annotation and report generation (Fig. 1b). To our knowledge, this is among the first attempts to exploit disease localization for X-ray report generation with visual support. Our contributions are four-fold: (1) we describe an integrated image interpretation framework for disease annotation and medical report generation; (2) we transfer knowledge from large image datasets (ChestX-ray8 [16] and ImageNet) to enhance medical image interpretation using a small number of reports for training (IU X-ray [2]); (3) we evaluate the suitability of NLP evaluation metrics for medical report generation; and (4) we demonstrate the functionality of localizing the key finding in an X-ray with a heatmap.

2 Method

Our workflow (Fig. 2) first annotates an X-ray image by classifying and localizing thoracic diseases (Fig. 2a) and then generates the corresponding sentences to build up the entire report (Fig. 2b). Fig. 2c displays the structure of the attentive LSTM used to generate reports.

Figure 2: An automatic workflow of the X-ray interpretation system.

2.1 Disease Classification and Localization

Fig. 2a shows our classification module built on a 121-layer Dense Convolutional Network (DenseNet) [5]. Similar to [12], we replace the last fully-connected layer with a new layer of dimension $M$, where $M$ is the number of diseases. This is a multi-label binary classification problem whose input is a frontal-view X-ray image $X$ and whose output is a binary vector $y = [y_1, \dots, y_M]$, $y_m \in \{0, 1\}$, indicating the absence or presence of disease $m$. The binary cross-entropy loss function is defined as $L(X, y) = -\sum_{m=1}^{M}\big[y_m \log p_m + (1 - y_m)\log(1 - p_m)\big]$, where $p_m$ is the predicted probability for target disease $m$. If $p_m$ exceeds a chosen threshold, the X-ray is annotated with disease $m$ for the next-level modeling; otherwise, it is considered "Normal". It is worth mentioning that a vast majority of X-rays are "Normal"; therefore, other choices of threshold also work well with our system.
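For illustration, a minimal PyTorch sketch of such a classification head is given below; it is an assumption-laden sketch rather than our released code, and the DenseNet-121 backbone call, input size and the 0.5 threshold are illustrative choices.

```python
import torch
import torch.nn as nn
import torchvision.models as models

M = 8  # number of thoracic disease classes considered in the experiments

# DenseNet backbone with its last fully-connected layer replaced by an M-way head.
backbone = models.densenet121(pretrained=True)
backbone.classifier = nn.Linear(backbone.classifier.in_features, M)

criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy over the M sigmoid outputs

def annotate(x, threshold=0.5):
    """Return per-disease probabilities and binary annotations for a batch of X-rays."""
    probs = torch.sigmoid(backbone(x))      # shape: (batch, M)
    labels = (probs > threshold).long()     # y_m = 1 marks a predicted disease
    return probs, labels

# Example: a single 224x224 frontal-view X-ray (replicated to 3 channels) and its loss.
x = torch.randn(1, 3, 224, 224)
y = torch.tensor([[0., 0., 1., 0., 0., 0., 0., 0.]])   # e.g. "Effusion" present
loss = criterion(backbone(x), y)
probs, labels = annotate(x)
```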

We apply Grad-CAM [14] to localize disease with a heatmap. Grad-CAM uses the gradient information flowing back into the final convolutional layer to decipher the importance of each neuron in classifying an image as disease $m$. Formally, let $A^k$ be the $k$-th feature map and let the weight $\alpha_k^m$ represent the importance of feature map $k$ for disease $m$. We first calculate the gradient of the score $s^m$ for class $m$ (before the sigmoid) with respect to the feature map $A^k$, i.e., $\partial s^m / \partial A^k$. The weights $\alpha_k^m$ are then calculated by $\alpha_k^m = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial s^m}{\partial A_{ij}^k}$, where $(i, j)$ indexes a pixel and $Z$ is the total number of pixels. We then generate a heatmap for disease $m$ by taking a weighted average of the feature maps, followed by a ReLU activation: $H^m = \mathrm{ReLU}\big(\sum_k \alpha_k^m A^k\big)$. The localized semantic features used to predict disease $m$ are identified and visualized with the heatmap $H^m$. Similar to [16], we apply a thresholding-based bounding box (B-Box) generation method: the B-Box bounds the pixels whose heatmap intensity is above 90% of the maximum intensity. The resulting region of interest is then cropped for the next-level modeling.
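A hedged PyTorch sketch of this localization step is shown below; the hook-based helpers and the choice of target layer are our illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, x, target_class):
    """Compute a Grad-CAM heatmap H^m for the given class, as in the formulas above."""
    activations, gradients = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    score = model(x)[0, target_class]            # class score s^m before the sigmoid
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    A = activations[0][0]                        # (K, H, W) feature maps A^k
    alpha = gradients[0][0].mean(dim=(1, 2))     # alpha_k^m: global-average-pooled gradients
    cam = F.relu((alpha[:, None, None] * A).sum(dim=0))        # ReLU(sum_k alpha_k^m A^k)
    cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return cam

def bounding_box(cam, ratio=0.9):
    """Bound the pixels whose heatmap intensity exceeds 90% of the maximum."""
    ys, xs = torch.nonzero(cam >= ratio * cam.max(), as_tuple=True)
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()

# Usage with the DenseNet sketch above (illustrative):
# cam = grad_cam(backbone, backbone.features, x, target_class=m)
```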

2.2 Attention-based Report Generation

Fig. 2b illustrates the process of report generation. If no active thoracic disease is found in an X-ray, a report is generated directly by an attentive LSTM from the original X-ray, as shown in the green dashed box. Otherwise (red dashed box), the cropped subimage with the localized disease from the classification module (Fig. 2a) is used to generate the description of abnormalities, whereas the original X-ray is used to generate the description of normalities in the report.
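The branching logic can be sketched as follows; the callable parameters stand in for the modules described above and are hypothetical names, not part of the released system.

```python
def interpret(x, classifier, localize, generate_normal, generate_abnormal):
    """Route an X-ray through the two report-generation branches of Fig. 2b."""
    probs, labels = classifier(x)                     # e.g. the annotate() sketch above
    sentences = []
    if labels.sum() == 0:                             # annotated as "Normal"
        sentences.append(generate_normal(x))
    else:
        for m in labels.nonzero(as_tuple=True)[1].tolist():
            crop = localize(x, m)                     # subimage cropped from the Grad-CAM B-Box
            sentences.append(generate_abnormal(crop, m))
        sentences.append(generate_normal(x))          # normality description from the full X-ray
    return " ".join(sentences)
```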

As shown in Fig. 2c, the attentive LSTM is based on an encoder-decoder structure [18], which takes either the original X-ray image or the cropped subimage corresponding to the abnormal region as input and generates a sequence of sentences for the entire report. Our encoder is built on a pre-trained ResNet-101 [4], which extracts a visual feature matrix $F = [f_1, \dots, f_L]$ (reshaped from the feature maps of the last convolutional layer followed by an adaptive average pooling layer). Each vector $f_i$ of $F$ represents one regional feature vector, where $i = 1, \dots, L$.
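A possible encoder along these lines, sketched in PyTorch; the pooled spatial size, and hence the number of regions L, is an illustrative assumption rather than our reported setting.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class Encoder(nn.Module):
    def __init__(self, pooled_size=7):
        super().__init__()
        resnet = models.resnet101(pretrained=True)
        self.cnn = nn.Sequential(*list(resnet.children())[:-2])   # keep convolutional layers only
        self.pool = nn.AdaptiveAvgPool2d(pooled_size)

    def forward(self, images):
        f = self.pool(self.cnn(images))            # (batch, 2048, pooled, pooled)
        return f.flatten(2).permute(0, 2, 1)       # (batch, L, 2048): L regional feature vectors

# Example: F = Encoder()(torch.randn(1, 3, 224, 224))  -> shape (1, 49, 2048)
```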

The LSTM decoder takes $F$ as input and generates sentences by producing one word at each time step $t$. To utilize spatial visual attention, we define weights $\alpha_{t,i}$, which can be interpreted as the relative importance of regional feature $f_i$ at time $t$. The weights are computed by a multilayer perceptron $f_{att}$: $e_{t,i} = f_{att}(f_i, h_{t-1})$ and $\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j=1}^{L}\exp(e_{t,j})}$, and hence the attentive visual feature vector is computed by $z_t = \sum_{i=1}^{L} \alpha_{t,i} f_i$. In addition to the weighted visual feature $z_t$ and the last hidden state $h_{t-1}$, the LSTM also accepts the last output word $y_{t-1}$ at each time step as an input. We concatenate the embedding of the last output word and the visual feature as the context vector $x_t = [E y_{t-1}; z_t]$. Thus the transition to the current hidden state can be calculated as $h_t = \mathrm{LSTM}(x_t, h_{t-1})$. After model training, a report is generated by sampling words and updating the hidden state until hitting the stop token.
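A sketch of a single decoding step with this soft attention is shown below; all dimensions are illustrative assumptions, not our reported settings.

```python
import torch
import torch.nn as nn

class AttentiveDecoderStep(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=512, hidden_dim=512, vocab_size=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.att = nn.Sequential(nn.Linear(feat_dim + hidden_dim, hidden_dim),
                                 nn.Tanh(), nn.Linear(hidden_dim, 1))   # MLP f_att
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, prev_word, h, c):
        # feats: (batch, L, feat_dim); h, c: (batch, hidden_dim); prev_word: (batch,)
        e = self.att(torch.cat([feats, h.unsqueeze(1).expand(-1, feats.size(1), -1)], dim=-1))
        alpha = torch.softmax(e, dim=1)            # relative importance of each region at time t
        z = (alpha * feats).sum(dim=1)             # attentive visual feature z_t
        x = torch.cat([self.embed(prev_word), z], dim=-1)   # context vector [E y_{t-1}; z_t]
        h, c = self.lstm(x, (h, c))
        return self.out(h), h, c, alpha.squeeze(-1)
```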

3 Experiments and Results

Datasets. We use the IU Chest X-ray Collection [2], an open dataset of radiology reports paired with chest X-rays, for our experimental evaluation. Each report contains three sections: impression, findings and Medical Subject Headings (MeSH) terms. Similar to [6, 19], we generate the sentences in 'impression' and 'findings' together. The MeSH terms are used as labels for disease classification [17] as well as for the follow-up report generation with abnormality and normality descriptions. We convert all words to lower case, remove all non-alphanumeric tokens, replace single-occurrence tokens with a special token and use another special token to separate sentences. We filter out images and reports that are not relevant to the eight common thoracic diseases included in both the ChestX-ray8 [16] and IU X-ray [2] datasets, and split the resulting image-report pairs into training, validation and testing sets.
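A hedged sketch of this report preprocessing; the special token names are our placeholders.

```python
import re
from collections import Counter

def preprocess_reports(reports, unk="<unk>", sep="<sep>"):
    """Lower-case, keep alphanumeric tokens, insert a sentence separator,
    and replace single-occurrence tokens with a special unknown token."""
    tokenized = []
    for text in reports:
        tokens = []
        for sent in re.split(r"\.", text.lower()):
            words = re.findall(r"[a-z0-9]+", sent)
            if words:
                tokens += words + [sep]
        tokenized.append(tokens)
    counts = Counter(t for tokens in tokenized for t in tokens)
    return [[t if counts[t] > 1 else unk for t in tokens] for tokens in tokenized]

# Example
reports = ["No focal consolidation.", "Heart size is normal. No focal consolidation."]
print(preprocess_reports(reports))
```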

Implementation Details. We implement our model on a GeForce GTX 1080 Ti GPU platform using PyTorch. All hidden layers and word embeddings share the same dimension. The network is trained with the Adam optimizer using mini-batches, and training stops when the performance on the validation set does not improve for a fixed number of epochs. We do not fine-tune the DenseNet pretrained on ChestX-ray8 [16] or the ResNet pretrained on ImageNet, due to the small sample size of the IU X-ray dataset [2]. For each disease class, a specific pair of LSTMs is trained to ensure consistency between the predicted disease annotation(s) and the generated report. For disease classes with too few samples, we train a shared attentive LSTM across those classes to generate the normality description of the report.
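An illustrative training-loop skeleton for this setup is sketched below; the patience value, function signatures and optimizer arguments are assumptions rather than our exact configuration.

```python
import torch

def train(model, backbone, train_loader, val_loader, evaluate, loss_fn,
          patience=5, max_epochs=100):
    for p in backbone.parameters():           # do not fine-tune the pre-trained CNN
        p.requires_grad = False
    optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad])
    best, wait = float("-inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(backbone(images)), targets)
            loss.backward()
            optimizer.step()
        score = evaluate(model, val_loader)   # e.g. a validation metric or negative loss
        if score > best:
            best, wait = score, 0
        else:
            wait += 1
            if wait >= patience:              # early stopping on the validation set
                break
```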

Evaluation of Automatic Medical Image Reports. We use standard NLP metrics, namely BLEU [11], ROUGE [9] and CIDEr [1], for automatic performance evaluation. Our model outperforms all baseline models [3, 10, 13, 15] and achieves the best CIDEr and ROUGE scores among the advanced methods specifically designed for medical report generation [6, 8, 19], despite the fact that we only use a single frontal-view X-ray. While BLEU scores measure the consistency between the automatic report and the manual report in light of the automatic report (precision), they are not illuminative in assessing how much of the information in the manual report is captured by the automatic report (recall). In real-world clinical applications, both recall and precision are critical in evaluating the quality of an automatic report.

For example, automatic reports often miss descriptions of abnormalities that are contained in manual reports written by radiologists [8, 19], which decreases recall but does not affect precision. Thus, an automatic report missing the key disease information can still achieve high BLEU scores even though it provides limited insight for medical image interpretation. ROUGE is therefore more suitable than BLEU for evaluating the quality of automatic reports since it measures both precision and recall. Further, CIDEr is more suitable for our purpose than ROUGE and BLEU since it captures the notions of grammaticality, saliency, importance and accuracy [1]; additionally, CIDEr uses TF-IDF to down-weight unimportant common words and emphasize disease keywords. As a result, the higher ROUGE and CIDEr scores demonstrate the superior performance of our medical image interpretation system.
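For concreteness, these corpus-level metrics can be computed with the commonly used pycocoevalcap package; the snippet below is a sketch of such an evaluation, not necessarily our exact pipeline, and the example reports are made up.

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Both dicts map a report id to a list of strings (references vs. hypotheses).
refs = {"img1": ["no focal airspace consolidation pleural effusion or pneumothorax"]}
hyps = {"img1": ["no acute cardiopulmonary abnormality"]}

bleu, _ = Bleu(4).compute_score(refs, hyps)     # list of BLEU-1..BLEU-4 scores
rouge, _ = Rouge().compute_score(refs, hyps)    # ROUGE-L
cider, _ = Cider().compute_score(refs, hyps)    # TF-IDF weighted n-gram consensus
print(bleu, rouge, cider)
```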

Model          CIDEr  ROUGE  BLEU-1  BLEU-2  BLEU-3  BLEU-4
CNN-RNN [15]*  0.294  0.306  0.216   0.124   0.087   0.066
LRCN [3]*      0.284  0.305  0.223   0.128   0.089   0.067
AdaAtt [10]*   0.295  0.308  0.220   0.127   0.089   0.068
Att2in [13]*   0.297  0.308  0.224   0.129   0.089   0.068
CoAtt [6]*     0.277  0.369  0.455   0.288   0.205   0.154
HRGR [8]*      0.343  0.322  0.438   0.298   0.208   0.151
MRA [19]†      N/A    0.366  0.464   0.358   0.270   0.195
Vispi          0.553  0.371  0.419   0.280   0.201   0.150
Table 1: Automatic evaluation on the IU X-ray dataset. * Results from [8]. † Results from [19].

Evaluation of Disease Classification. Although ROUGE and CIDEr scores are effective in evaluating the consistency of an automatic report with a manual report, neither of them is designed to assess the correctness of medical report annotation in terms of common thoracic diseases, which is another key output of a useful image interpretation system. For example, the automatically generated sentence "no focal airspace consolidation, pleural effusion or pneumothorax" is considered similar to the manually written sentence "persistent pneumothorax with small amount of pleural effusion" by both ROUGE and CIDEr, despite the completely opposite annotations. Therefore, we assess the accuracy of medical report annotation by comparing with TieNet [17] on disease classification, using the Area Under the ROC curve (AUROC) as the metric. Our model outperforms TieNet's classification module in 7 out of 8 diseases (Table 2, Fig. 3). As discussed above, the inferior performance of TieNet may be due to the fact that it trades image classification performance for report generation performance; on the contrary, our model exploits the former to enhance the latter via a bi-level attention.
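The per-disease AUROCs reported below can be computed, for example, with scikit-learn; the random arrays here merely stand in for ground-truth labels and predicted probabilities.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

diseases = ["Atelectasis", "Cardiomegaly", "Effusion", "Infiltration",
            "Mass", "Nodule", "Pneumonia", "Pneumothorax"]

y_true = np.random.randint(0, 2, size=(100, len(diseases)))   # ground-truth labels
y_prob = np.random.rand(100, len(diseases))                   # predicted probabilities

aurocs = {d: roc_auc_score(y_true[:, i], y_prob[:, i]) for i, d in enumerate(diseases)}
print(aurocs, "average:", np.mean(list(aurocs.values())))
```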

Figure 3: Comparison of disease classification performance using ROC curves.

Disease        Vispi  TieNet*
Atelectasis    0.806  0.774
Cardiomegaly   0.856  0.847
Effusion       0.919  0.899
Infiltration   0.610  0.718
Mass           0.984  0.723
Nodule         0.758  0.658
Pneumonia      0.764  0.731
Pneumothorax   0.733  0.709
Average        0.804  0.757
Table 2: Comparison of per-disease AUROCs. * Results from [17].
Example System Outputs. Fig. 4 shows two example outputs, each with a generated report and an image annotation. The first row presents a case annotated "Normal", whereas the second row presents a case annotated "Cardiomegaly", with the disease localized in a red bounding box on the heatmap generated by our classification and localization module. The results show that our medical image interpretation system is capable of diagnosing thoracic diseases, highlighting the key findings in X-rays with heatmaps and generating well-structured reports.

Figure 4: Illustration of two example outputs of our system.

4 Conclusions

In summary, we propose a bi-level attention mechanism for automatic X-ray image interpretation. Using only a single frontal-view chest X-ray, our system is capable of accurately annotating X-ray images and generating quality reports. It also provides visual support to assist radiologists in rendering diagnostic decisions. As more quality training data become available, our medical image interpretation system can be improved by: (1) incorporating both frontal and lateral views of X-rays, (2) predicting more disease classes, and (3) using hand-labeled bounding boxes as the target of localization. We will also generalize our system by extracting informative features from Electronic Health Record (EHR) data and repeated longitudinal radiology reports to further enhance its performance.

References

  • [1] Agrawal, A., Lu, J., Antol, S., Mitchell, M., Zitnick, C.L., Parikh, D., Batra, D.: Vqa: Visual question answering. Int. J. Comput. Vision 123(1), 4–31 (May 2017)
  • [2] Demner-Fushman, D., Kohli, M.D., Rosenman, M.B., Shooshan, S.E., Rodriguez, L., Antani, S., Thoma, G.R., McDonald, C.J.: Preparing a collection of radiology examinations for distribution and retrieval. JAMIA 23(2), 304–310 (2015)
  • [3] Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. In: CVPR. pp. 2625–2634 (2015)
  • [4] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770–778 (2016)
  • [5] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: CVPR. pp. 4700–4708 (2017)
  • [6] Jing, B., Xie, P., Xing, E.: On the automatic generation of medical imaging reports. arXiv preprint arXiv:1711.08195 (2017)
  • [7] Krause, J., Johnson, J., Krishna, R., Fei-Fei, L.: A hierarchical approach for generating descriptive image paragraphs. In: CVPR. pp. 3337–3345. IEEE (2017)
  • [8] Li, C.Y., Liang, X., Hu, Z., Xing, E.P.: Hybrid retrieval-generation reinforced agent for medical image report generation. arXiv preprint arXiv:1805.08298 (2018)
  • [9] Lin, C.Y.: Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out (2004)
  • [10] Lu, J., Xiong, C., Parikh, D., Socher, R.: Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In: CVPR. vol. 6, p. 2 (2017)
  • [11] Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: Bleu: a method for automatic evaluation of machine translation. In: ACL. pp. 311–318. Association for Computational Linguistics (2002)
  • [12] Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C., Shpanskaya, K., et al.: Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225 (2017)
  • [13] Rennie, S.J., Marcheret, E., Mroueh, Y., Ross, J., Goel, V.: Self-critical sequence training for image captioning. In: CVPR. pp. 7008–7024 (2017)
  • [14] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: ICCV. pp. 618–626. IEEE (2017)
  • [15] Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. In: CVPR. pp. 3156–3164 (2015)
  • [16] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: CVPR. pp. 3462–3471. IEEE (2017)
  • [17] Wang, X., Peng, Y., Lu, L., Lu, Z., Summers, R.M.: Tienet: Text-image embedding network for common thorax disease classification and reporting in chest x-rays. In: CVPR. pp. 9049–9058 (2018)
  • [18] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: ICML. pp. 2048–2057 (2015)
  • [19] Xue, Y., Xu, T., Long, L.R., Xue, Z., Antani, S., Thoma, G.R., Huang, X.: Multimodal recurrent model with attention for automated radiology report generation. In: MICCAI. pp. 457–466. Springer (2018)