Hadrien Bertrand* hadrien.bertrand@mila.quebec
Mohammad Hashir* mohammad.hashir.khan@umontreal.ca
Joseph Paul Cohen joseph.paul.cohen@mila.quebec

Mila, Quebec Artificial Intelligence Institute
Université de Montréal

* Contributed equally
Abstract

Most convolutional neural networks in chest radiology use only the frontal posteroanterior (PA) view to make a prediction. However, the lateral view is known to help in the diagnosis of certain diseases and conditions. The recently released PadChest dataset contains paired PA and lateral views, allowing us to study for which diseases and conditions the performance of a neural network improves when provided a lateral view instead of a PA view. Using a simple DenseNet model, we find that the lateral view increases the AUC for 8 of the 56 labels in our data and achieves the same performance as the PA view for 21 of the labels. We find that using the PA and lateral views jointly does not trivially lead to an increase in performance, but suggest further investigation.

Do Lateral Views Help Automated Chest X-ray Predictions?

Editors: Under Review for MIDL 2019

Keywords: Chest X-ray, classification, multi-label, multi-view

Introduction

Most automated radiology prediction models use only the posteroanterior (PA) view to make a prediction (Wang et al., 2017; Rajpurkar et al., 2017; Lakhani & Sundaram, 2017; Cohen et al., 2019), as the PA view is often the only one available in public datasets. In many hospitals, the lateral view is infrequently used and usually replaced by a CT scan, as it is difficult to read without specific training (Feigin, 2010). But a CT scan uses a larger dose of radiation and is only ordered if the PA view is insufficient for diagnosis, adding latency to the diagnosis and risk for the patient.

However, there are specific cases in which the lateral view provides information for diagnosis that isn’t clear or visible on the PA view (Shiraishi et al., 2007; Feigin, 2010; Ittyachen et al., 2017). For example, up to 15% of the lung can be obscured by cardiovascular structures and the diaphragm (Raoof et al., 2012). The question we investigate in this work is whether a neural network can make a better prediction using the lateral view or the posteroanterior view, across a wide variety of diseases and conditions. If so, we can look further into how to best augment models to use both modalities.

The release of PadChest (Bustos et al., 2019), a large-scale public chest X-ray dataset that includes paired PA and lateral views, provides us with the opportunity to give a preliminary answer to this question.

Data and methods

We use the PadChest (Bustos et al., 2019) dataset, which comprises 160,000 chest X-rays and reports gathered from a Spanish hospital, spanning over 67,000 patients with multiple visits and views available. The images have been annotated with various types of radiological findings and differential diagnoses; 27% of the annotations were created manually by physicians and the rest were extracted from the reports by a recurrent neural network.

For our analysis, we extract a single visit from only those patients who have both PA and lateral views available, resulting in a total of 30,699 patients. We resize the images, using a center crop if the aspect ratio is uneven, and scale the pixel values for training. Each visit can have any number of labels from a total of 194. Since the PadChest dataset defines a hierarchy of labels, we mapped each label to its respective top-level one, in order to maximize the number of images for each label. From those top-level labels, we retain only those that occur in at least 100 patients and combine the rest into “other”, resulting in 56 total labels. Some of them are of low clinical interest, such as “electrical device” or “artificial heart valve”; however, they provide a sanity check on the results of the models.
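The label preprocessing above can be sketched as follows. This is a minimal illustration, not the released code: the `parent` mapping, label names, and threshold handling are stand-ins for the actual PadChest hierarchy.

```python
from collections import Counter

def preprocess_labels(samples, parent, min_count=100):
    """Map each label to its top-level parent, then merge rare labels into "other".

    samples: list of label lists, one per patient visit.
    parent: dict mapping a label to its top-level label in the hierarchy.
    """
    # Step 1: lift every label to its top-level parent (labels with no
    # entry in `parent` are assumed to already be top-level).
    mapped = [[parent.get(l, l) for l in labels] for labels in samples]
    # Step 2: keep only labels frequent enough across patients.
    counts = Counter(l for labels in mapped for l in labels)
    keep = {l for l, c in counts.items() if c >= min_count}
    return [[l if l in keep else "other" for l in labels] for labels in mapped]
```

With a toy hierarchy, `preprocess_labels([["apical nodule"], ["nodule"], ["rare finding"]], parent={"apical nodule": "nodule"}, min_count=2)` lifts both nodule variants to `"nodule"` and folds the infrequent label into `"other"`.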

The model we use is a DenseNet (Huang et al., 2017). This is a convolutional neural network defined in blocks. Each block contains a set of convolutional layers, where the input of a layer is the concatenation of the outputs of all preceding layers in the block, making the network densely connected. Pooling layers sit between blocks. At the end, there is a linear layer with as many units as there are labels, followed by a sigmoid.
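A consequence of this dense connectivity is that the input width of each layer grows linearly with depth inside a block, by one growth rate per preceding layer. The small sketch below traces that channel arithmetic; the values 64, 32, and 6 correspond to the first block of a standard DenseNet-121, used here purely as an example since the paper does not specify which DenseNet variant was trained.

```python
def dense_block_channels(in_channels, growth_rate, num_layers):
    """Trace the per-layer input widths inside one DenseNet block.

    Each layer sees the concatenation of the block input and the outputs
    of all preceding layers, so its input grows by `growth_rate` per layer.
    Returns (input width of each layer, output width of the block).
    """
    widths = []
    c = in_channels
    for _ in range(num_layers):
        widths.append(c)       # this layer's input width
        c += growth_rate       # its growth_rate output channels are concatenated
    return widths, c

# First block of DenseNet-121: 64 input channels, growth rate 32, 6 layers.
widths, block_out = dense_block_channels(64, 32, 6)
```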

Experiments

We trained two DenseNets, one on only PA images and the other on only lateral images, with a 60-20-20 split between the training, validation and test sets. We ran all models 5 times with different seeds for the random data splits and model initialization, for 40 epochs with a batch size of 8 and a learning rate of 0.0001. All models are trained with the Adam optimizer and a binary cross-entropy loss weighted for each label according to its frequency. The class weights are applied only to the positive examples and were computed by dividing the total number of samples in the particular split by the number of samples in the class. As this led to weights ranging from 1 to 250 for the rarest labels, we then multiplied them by 0.1 and clamped the resulting values. The code for extracting the data and training the models is publicly available on GitHub.
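The weighting scheme described above can be sketched as follows. The clamp bounds `lo` and `hi` are illustrative assumptions, as the exact clamping range is not stated here.

```python
def label_weights(pos_counts, n_total, scale=0.1, lo=1.0, hi=5.0):
    """Per-label weights for the positive class of a weighted BCE loss.

    Raw weight = total samples / positive count, then scaled by 0.1 and
    clamped. `lo` and `hi` are assumed values for illustration only.
    """
    weights = {}
    for label, count in pos_counts.items():
        w = scale * (n_total / count)            # inverse-frequency weight
        weights[label] = min(max(w, lo), hi)     # clamp to [lo, hi]
    return weights

# A rare label (100 positives in 10,000 samples) gets the upper bound,
# a ubiquitous one the lower bound.
w = label_weights({"rare": 100, "common": 10000}, n_total=10000)
```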

For testing, we load the model with the weights from the epoch where it achieved the highest area under the ROC curve (AUC) on the validation set. We visualize the results on the test set in Figure 1. For 26 labels, the PA view was more informative. For 8 labels, it was the lateral view, and for the 21 remaining labels both views were similarly informative. There is a high variance for some of the labels, as shown by the error bars, suggesting the need for further testing.
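The checkpoint-selection rule is simply "best validation AUC wins", which amounts to the following; the AUC values here are made up for illustration.

```python
# Illustrative mapping from epoch to validation AUC (fabricated numbers).
val_aucs = {1: 0.71, 2: 0.78, 3: 0.76}

# Restore the checkpoint from the epoch with the highest validation AUC,
# then evaluate that single model on the held-out test set.
best_epoch = max(val_aucs, key=val_aucs.get)
```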

Figure 1: AUC of the best model for each label. (Blue) PA was better than L. (Red) L was better than PA. (Purple) The performance difference between the two networks was smaller than the standard deviation across seeds.

The absolute performance, as measured by the average weighted AUC, is encouraging as to the quality of the dataset, since the model we used was simple and we used only a subset of the available images.

Conclusion

We trained a model on either PA or lateral images, and found that the lateral view performs better for 8 labels, namely pleural effusion, artificial heart valve, hemidiaphragm elevation, osteopenia, flattened diaphragm, costophrenic angle blunting, vertebral degenerative changes and surgery. This suggests that using the lateral images can help for certain prediction tasks, though a more extensive validation is required.

A natural question to ask is whether combining both views would further improve the results. There are different ways to do this combination, such as stacking the views on the input channels or using a model like DualNet (Rubin et al., 2018) or HeMIS (Havaei et al., 2016) that processes each view separately before combining them. Testing those methods, we found that they give an increased AUC for some labels for any given split of the data, but aggregating the results across splits shows a high variance for individual labels and an overall lower performance than the PA-only or L-only models. While this suggests there is value in using both views jointly, finding a robust way to do so is non-trivial and requires further investigation.
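The simplest of the combination strategies mentioned above, stacking the two views on the input channels, can be sketched in a few lines; the 224x224 image size is an assumption, as the resize target is not stated here.

```python
import numpy as np

# Hedged sketch of channel stacking: the paired PA and lateral images
# become the two input channels of a single CNN input tensor.
pa = np.zeros((224, 224), dtype=np.float32)   # assumed 224x224 resize
lat = np.zeros((224, 224), dtype=np.float32)

# Channels-first layout: shape (2, H, W), one channel per view.
x = np.stack([pa, lat], axis=0)
```

The appeal of this approach is that any off-the-shelf CNN accepting a 2-channel input can be used unchanged; the drawback is that the convolutions implicitly assume the two views are spatially aligned, which PA and lateral projections are not.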

There are also limitations of the PadChest dataset. Most labels were extracted from reports by an RNN, making them partly unreliable. There is a bias in the data, as the images come from a single hospital. Both points can be addressed by validating the results on other datasets such as MIMIC-CXR (Johnson et al., 2019) and CheXpert (Irvin et al., 2019).

Acknowledgments

This work is partially funded by a grant from the Fonds de Recherche en Sante du Quebec and the Institut de valorisation des donnees (IVADO). This work utilized the supercomputing facilities managed by Mila, NSERC, Compute Canada, and Calcul Quebec. We also thank NVIDIA for donating a DGX-1 computer used in this work.

  • Bustos et al. (2019) Aurelia Bustos, Antonio Pertusa, Jose-Maria Salinas, and Maria de la Iglesia-Vayá. PadChest: A large chest x-ray image dataset with multi-label annotated reports, 2019.
  • Cohen et al. (2019) Joseph Paul Cohen, Paul Bertin, and Vincent Frappier. Chester: A Web Delivered Locally Computed Chest X-Ray Disease Prediction System, 2019.
  • Feigin (2010) David S Feigin. Lateral Chest Radiograph: A Systematic Approach. Academic Radiology, 2010. doi: 10.1016/j.acra.2010.07.004.
  • Havaei et al. (2016) Mohammad Havaei, Nicolas Guizard, Nicolas Chapados, and Yoshua Bengio. HeMIS: Hetero-Modal Image Segmentation. In Medical Image Computing and Computer Assisted Intervention, volume 9901 LNCS, 2016. doi: 10.1007/978-3-319-46723-8_54.
  • Huang et al. (2017) Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely Connected Convolutional Networks. In Computer Vision and Pattern Recognition, 2017.
  • Irvin et al. (2019) Jeremy Irvin et al. CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison, 2019.
  • Ittyachen et al. (2017) Abraham M. Ittyachen, Anuroopa Vijayan, and Megha Isac. The Forgotten View: Chest X-ray - Lateral View. Respiratory Medicine Case Reports, 2017. doi: 10.1016/j.rmcr.2017.09.009.
  • Johnson et al. (2019) AEW Johnson, TJ Pollard, S Berkowitz, NR Greenbaum, MP Lungren, C-Y Deng, RG Mark, and S Horng. MIMIC-CXR: A Large Publicly Available Database of Labeled Chest Radiographs, 2019.
  • Lakhani & Sundaram (2017) Paras Lakhani and Baskaran Sundaram. Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks. Radiology, 2017. doi: 10.1148/radiol.2017162326.
  • Rajpurkar et al. (2017) Pranav Rajpurkar et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning, 2017. arXiv:1711.05225.
  • Raoof et al. (2012) Suhail Raoof, David Feigin, Arthur Sung, Sabiha Raoof, Lavanya Irugulpati, and Edward C Rosenow III. Interpretation of plain chest roentgenogram. Chest, 2012. doi: 10.1378/chest.10-1302.
  • Rubin et al. (2018) Jonathan Rubin, Deepan Sanghavi, Claire Zhao, Kathy Lee, Ashequl Qadir, and Minnan Xu-Wilson. Large Scale Automated Reading of Frontal and Lateral Chest X-Rays using Dual Convolutional Neural Networks, 2018.
  • Shiraishi et al. (2007) Junji Shiraishi, Feng Li, and Kunio Doi. Computer-aided diagnosis for improved detection of lung nodules by use of posterior-anterior and lateral chest radiographs. Academic radiology, 2007. doi: 10.1016/j.acra.2006.09.057.
  • Wang et al. (2017) Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, 2017. doi: 10.1109/CVPR.2017.369.
