2.75D Convolutional Neural Network for Pulmonary Nodule Classification in Chest CT

Early detection and classification of pulmonary nodules in chest computed tomography (CT) images is an essential step for effective treatment of lung cancer. However, due to the large volume of CT data, finding nodules in chest CT is a time-consuming and thus error-prone task for radiologists. Benefiting from recent advances in convolutional neural networks (ConvNets), many ConvNet-based algorithms for automatic nodule detection have been proposed. According to the data representation at their input, these algorithms can be categorized into 2D, 3D, and 2.5D, the last of which uses a combination of 2D images to approximate 3D information. By leveraging 3D spatial and contextual information, methods using 3D input generally outperform those based on 2D or 2.5D input, but their large memory footprint becomes a bottleneck for many applications. In this paper, we propose a novel 2D representation of a 3D CT volume, constructed by spiral scanning a set of radial lines originating from the volume center, which we refer to as 2.75D. Compared to 2.5D, the 2.75D representation captures omni-directional spatial information of a 3D volume. Based on the 2.75D representation of 3D nodule candidates in chest CT, we train a convolutional neural network to perform false positive reduction in the nodule detection pipeline. We evaluate the nodule false positive reduction system on the LUNA16 dataset, which contains 1186 nodules among 551,065 candidates. Comparing 2.75D with 2D, 2.5D, and 3D, we show that our system using 2.75D input outperforms 2D and 2.5D, while being only slightly inferior to systems using 3D input. The proposed strategy dramatically reduces memory consumption, allowing fast inference and training with larger batch sizes than methods using 3D input. Furthermore, the proposed method is generic and can easily be extended to many neural network architectures. Our source code will be made publicly available.

1 Introduction

Lung cancer is the leading cause of cancer deaths worldwide, with 1.8 million new cases diagnosed each year. Cancer as a whole is the second leading cause of death globally: about 1 in 6 deaths is due to cancer. The effectiveness of state-of-the-art cancer treatments still heavily depends on the stage at which the cancer is diagnosed. For example, the five-year survival rate for localized-stage non-small cell lung cancer is 60%, whereas that for distant-stage disease is only 6%. However, only 16% of lung cancer cases are diagnosed at an early stage [27]. Similarly, the five-year survival rates of bladder cancer diagnosed at localized stage and distant stage are 95% and 5%, respectively, and about half of patients are diagnosed after the localized stage [35]. Clearly, early and robust cancer diagnosis is of crucial importance for improving patient survival rates through effective cancer treatment, yet it remains full of challenges.

CT is the most common imaging modality for early lung cancer screening, in which a series of x-ray projections are taken from different angles around the patient's body. In current practice, CT images are read manually by radiologists. This manual step of identifying suspicious lesions among a large number of normal samples is experience-demanding and error-prone due to the diversity of nodules in shape and size. Moreover, it is very time-consuming, leading to low efficiency for hospitals and high diagnostic cost for patients.

To make early lung cancer screening using CT imaging feasible, many computer-aided diagnosis (CAD) systems have been proposed in the past decades. Their aims [34] were to improve one or more steps of a typical CAD system: organ segmentation [12, 10], nodule detection [4, 18, 8, 23, 20], nodule segmentation [41, 18, 19], and nodule diagnosis [40, 17].

In this paper, we propose a novel strategy that contributes to the false positive reduction step in nodule detection and diagnosis in CT images. False positive reduction is a major component of CAD systems, identifying and classifying the relatively small number of true nodules within a large set of candidates. In the past few years, many deep-learning-based methods have been explored for nodule classification in CT images, among which convolutional neural networks (CNNs) have been exceptionally successful due to their ability to hierarchically capture texture information.

Based on the dimensionality of the input image and convolution function, most CNN-based methods for nodule classification fall into three strategies: (1) single-view 2D CNNs (2D strategy); (2) multi-view 2D CNNs (2.5D strategy) [31, 22, 39, 25, 21]; (3) volume-based 3D CNNs (3D strategy) [33, 7, 15, 6, 1, 5, 28, 9, 2, 13].

For the volume-based 3D CNN strategy, 3D cubes containing a candidate nodule are directly fed into a 3D CNN model, resulting in a binary classification output. The 3D volume of interest (VOI) contains structural information not only in the x-y plane but also along the z direction. Thanks to this much richer input, 3D CNNs generally outperform 2D CNNs of similar architecture when a significantly larger training set is available, yet they cost more memory and computational power in both the training and testing phases. Moreover, when only limited training samples are given, transfer learning is not as feasible as in the 2D approach, where large collections of natural images are available for pre-training.

With the goal of achieving high classification performance without significantly increasing computational cost, many 2D CNN methods have been proposed for nodule classification in CT images. The 2D and 2.5D strategies typically yield fewer model parameters and are thus more computationally efficient. The 2.5D strategy usually obtains one or more 2D patches by cutting through the center of a volume of interest (VOI) at different angles. The most common cuts are the three orthogonal planes. This approach has been applied extensively, for example, to breast cancer detection in automated 3D breast ultrasound and to lung nodule detection in CT. An extension of the three orthogonal planes is to take patches at more than three angles around a fixed axis through the center of the VOI. Each 2D patch is then fed into a neural network, and the streams are fused either within the network or after the last fully connected layer of each CNN stream. For example, Setio et al. [31] proposed to extract 9 slices from various sections of each 3D nodule volume, train a separate 2D CNN stream on each, and fuse the output neurons into the final classification result. The same paper [31] also shows that the multi-view CNN outperforms the single-view CNN; that is, the 2.5D strategy captures more useful spatial information than the 2D strategy. To best exploit the texture of the nodule and its surroundings at various zoom levels, Shen et al. [33] proposed to train independent 2D CNNs on multi-scale nodule patches simultaneously and then concatenate the output neurons of each input scale before the final binary classification layer.

These 2.5D approaches have shown accuracy advantages over 2D approaches and require fewer training samples and fewer parameters than the 3D strategy. However, as only 2D information is extracted and favorable diagnostic information in 3D is lost, their classification results are typically inferior to those of 3D CNN-based methods.

Given comparable CNN models, more spatial texture information typically means higher classification accuracy. For 2.5D, taking more patches from the VOI at various positions or angles provides more information to the CNN model. However, taking too many 2D patches per VOI also increases the number of parallel CNN streams to be trained and deployed, and fusing their information in an optimal way is itself a challenge.

Aiming to achieve classification performance comparable to 3D CNN-based methods while remaining as computationally efficient as 2D CNN-based methods, we propose in this study a novel 2.75D training strategy. In essence, each 3D VOI is represented by a single 2D image constructed so that it retains as much stereo texture information as possible.

The contributions of this paper are as follows. (1) We propose a CNN-based strategy that uses a 2.75D representation of a VOI as network input for cancer classification in CT images. To the best of our knowledge, this is the first CNN-based classification work to represent a 3D volume as a single 2D image. (2) The proposed 2.75D representation can exploit the important advantage of transfer learning from 2D natural images. (3) We evaluate the proposed strategy on the LIDC-IDRI dataset and compare its performance with comparable 2D, 2.5D, and 3D strategies.

The remainder of this paper is organized as follows. First, the proposed algorithm is described in section 2. In section 3, we describe the dataset used for algorithm evaluation and comparison. Next, the experimental setup is detailed in section 4, where the exact neural network architectures of the various strategies are explained. In section 5, we discuss the comparison outcomes between strategies. Finally, section 6 summarizes the conclusions of this paper.

Figure 1: Workflow of the 3D-volume-to-2D-image transformation using spiral scanning. Each red dotted line represents a sampled radial line originating from the sphere center to a sampled surface point. The 32 intensity values on each sampled radial line form one column of the transformed 2D image. The 43 radial lines, ordered from the top to the bottom of the sphere, are arranged from left to right in the transformed 2D image.

2 2.75D strategy

Inspired by the idea of [38], we propose a novel spiral-view convolutional neural network strategy, named the 2.75D strategy. We extract a 2D spiral view from each 3D VOI and feed these images into 2D CNNs for nodule classification tasks.

2.1 Patch extraction using spiral scanning

The workflow of using the spiral scanning technique to transform a 3D volume into a 2D image is illustrated in Fig. 1. First, given a cubic 3D volume, a sphere inscribed in the volume is defined at its center. Next, we transform the spherical VOI into a 2D re-sampled image using a set of radial lines originating from the sphere center to the surface. To best preserve the spherical 3D spatial information while reducing dimensionality from 3D to 2D, a set of surface sample points is selected such that they are evenly distributed along a spiral line on the spherical surface, which starts at the north pole and ends at the south pole. By connecting the sphere center with each surface point, a set of radial lines is defined. Finally, the intensities along each radial line are sampled into a fixed-length array (32 samples in our experiments). All sampled radial lines are arranged as columns in sequential order, which results in a 2D re-sampled image. The original spiral scanning method was proposed in the context of dynamic programming for segmentation, where searching for a curve that delineates the lesion boundary makes perfect sense; to the human eye, however, performing classification on such a representation is very challenging. In our study, we innovatively leverage this representation in a data-driven deep learning scheme.
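The transform above can be sketched in a few lines of numpy. This is a minimal illustration rather than the authors' exact implementation: the number of spiral turns (`n_turns`) and the nearest-neighbour sampling are simplifying assumptions; the 32x43 output matches the patch size used in the experiments.

```python
import numpy as np

def spiral_scan(volume, n_lines=43, n_samples=32, n_turns=6):
    """Transform a cubic 3D volume into a 2D image by spiral scanning.

    Surface points lie on a spiral running from the north pole to the
    south pole of the sphere inscribed in the volume; the intensities
    along each center-to-surface radial line form one column of the
    output image (nearest-neighbour sampling for simplicity).
    """
    center = (np.array(volume.shape) - 1) / 2.0
    radius = min(volume.shape) / 2.0 - 1.0
    out = np.empty((n_samples, n_lines), dtype=volume.dtype)
    for j in range(n_lines):
        t = j / (n_lines - 1)                # 0 = north pole, 1 = south pole
        theta = np.pi * t                    # elevation sweeps 0..pi once
        phi = 2.0 * np.pi * n_turns * t      # azimuth winds n_turns times
        direction = np.array([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])
        radii = np.linspace(0.0, radius, n_samples)
        coords = center[:, None] + direction[:, None] * radii[None, :]
        idx = np.rint(coords).astype(int)
        out[:, j] = volume[idx[0], idx[1], idx[2]]
    return out
```

Applied to a 64x64x64 VOI, this yields the 32x43 spiral image fed to the 2D CNN.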

Using the spiral scanning technique [38], the radial lines and their corresponding surface sample points are determined by two angles: the azimuth (longitude) angle φ and the elevation (latitude) angle θ. Suppose that the spiral line winds densely around the sphere; its length is then approximately equal to the sum of the circumferences of a number of separate horizontal circles. For simplicity, the number of surface points is calculated based on this approximation. Suppose the angle step is δ; the azimuth and elevation angles are then evenly divided into 2π/δ and N = π/δ sections, respectively. The number of sample points on the horizontal circle at elevation θ can be expressed as its circumference divided by the arc step, i.e., 2πR sin(θ) / (Rδ) = 2N sin(θ), regardless of the sphere radius R. Therefore, the total number of sample points on the sphere surface is approximately 4N²/π when N is large, as calculated in equation 1:

    Σ_{i=1}^{N} 2N sin(iπ/N) ≈ (2N²/π) ∫_0^π sin(θ) dθ = 4N²/π.    (1)
After spiral scanning, we obtain a 2.75D representation for each 3D VOI of a nodule candidate: essentially a 2D image that encodes 3D texture information.

2.2 2.75D based neural network

In this way, the 3D volume classification problem is transformed into a regular 2D image classification problem, and various 2D deep learning models can be applied to the spiral images; the method is not bound to any specific deep learning model. In this work, we focus on a fair comparison of nodule classification strategies that take inputs and convolution functions of different dimensionality, rather than on finding the specific deep learning model that performs best on the nodule classification task.

The specific convolutional neural network configurations used for the experiments are described in section 4.3.

3 Materials

In this paper, we use the Lung Image Database Consortium (LIDC-IDRI) [3] dataset, a publicly available dataset for the development, training, and evaluation of computer-aided diagnosis (CAD) methods for lung cancer detection and diagnosis. The dataset contains 1018 cases from seven institutions together with their corresponding nodule annotations. CT scans with either a slice thickness larger than 2.5 mm or inconsistent slice thickness were removed from the dataset, resulting in 888 scans.

The dataset comes with a list of nodule annotations made by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently assigned suspicious lesions to one of three categories: nodule >= 3 mm, nodule < 3 mm, and non-nodule. In the subsequent unblinded-read phase, the annotations of each radiologist were reviewed alongside those of the other radiologists, and each radiologist could independently update their own annotations where considered necessary. This yields a list of suspicious nodules, each annotated up to 4 times. The nodules that are >= 3 mm and annotated by at least 3 of the 4 radiologists, 1186 nodules in total, are used as the reference standard for the false positive reduction task.

For each CT scan, a number of volumes of interest (VOIs) were extracted as candidate nodules using three existing CAD systems [24][14][30], targeting solid nodules, subsolid nodules, and large solid nodules, respectively. The three sets of detected candidates were merged, and candidates closer than 5 mm to each other were combined into one candidate with averaged location and probability; the resulting list of candidate locations is distributed with the LIDC-IDRI dataset. This candidate detection step finds 1120 of the 1186 annotated nodules. All detected candidate nodules are used in the false positive reduction step.
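The merging rule described above can be sketched as follows. The single-pass greedy grouping is our own simplification of the clustering used in the original pipeline, and the inputs `centers` (world coordinates in mm) and `probs` are hypothetical names.

```python
import numpy as np

def merge_candidates(centers, probs, distance_mm=5.0):
    """Greedily merge candidate detections closer than `distance_mm`,
    averaging the world coordinates and probabilities of each group."""
    centers = np.asarray(centers, dtype=float)
    probs = np.asarray(probs, dtype=float)
    used = np.zeros(len(centers), dtype=bool)
    merged = []
    for i in range(len(centers)):
        if used[i]:
            continue
        group = [i]
        used[i] = True
        for j in range(i + 1, len(centers)):
            if not used[j] and np.linalg.norm(centers[i] - centers[j]) < distance_mm:
                used[j] = True
                group.append(j)
        merged.append((centers[group].mean(axis=0), probs[group].mean()))
    return merged
```

For example, two candidates 3 mm apart collapse into one candidate at their midpoint with the mean probability, while a distant third candidate is kept as-is.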

(a) 2.5D: multi-view 2D CNN architecture
(b) 2D: single view 2D CNN architecture
(c) 3D: volume based 3D CNN architecture
(d) 2.75D: spiral view based CNN architecture
Figure 2: Architectures of the 2D, 2.5D, 3D and 2.75D strategies

4 Experiment

To compare the performance of the proposed strategy with existing strategies, four strategies (2D, 2.5D, 3D, and 2.75D) are evaluated on the pulmonary nodule false positive reduction problem using the LIDC-IDRI dataset.

4.1 Preprocessing

Before feeding images of candidate nodules into CNNs, some preprocessing steps are necessary. In this study, we followed the preprocessing procedure described by Setio et al. [31]. A 50x50x50 mm volume of interest (VOI) was extracted at each candidate nodule location. This VOI size was chosen to ensure full visibility of all nodules and sufficient context information. Each VOI was then resized into a 64x64x64 pixel 3D cube, resulting in a resolution of 0.78 mm. Next, we clipped the pixel intensities to the range (-1000, 400) Hounsfield units (HU) and rescaled this range to (0, 1).
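The intensity normalization step is straightforward to reproduce; the nearest-neighbour resize below is only a stand-in for whatever interpolation the original pipeline used, and both function names are our own.

```python
import numpy as np

HU_MIN, HU_MAX = -1000.0, 400.0

def normalize_hu(volume):
    """Clip intensities to [-1000, 400] HU and rescale linearly to [0, 1]."""
    v = np.clip(volume, HU_MIN, HU_MAX)
    return (v - HU_MIN) / (HU_MAX - HU_MIN)

def resize_nearest(volume, shape=(64, 64, 64)):
    """Nearest-neighbour resize of a 3D array to the target shape."""
    idx = [np.round(np.linspace(0, s - 1, t)).astype(int)
           for s, t in zip(volume.shape, shape)]
    return volume[np.ix_(idx[0], idx[1], idx[2])]
```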

4.2 Data Augmentation

Data augmentation is a widely used technique to increase the number of training and validation samples and to counter class imbalance. A balanced and sufficiently large dataset is important for model robustness and for preventing overfitting.

In this experiment, there are 1555 nodule samples versus 750,386 non-nodule samples, a factor of 482.5 more (see Table 1). To balance the dataset, we increased the number of nodule samples by randomly applying one or more of the following augmentation steps to the 64x64x64 pixel volume: (1) rotation by a random angle about one or two randomly chosen axes; (2) flipping along one random axis (x, y, or z); (3) zooming in along one or more random axes by at most 125%; (4) shifting by at most 25% along a random axis, with mirror padding. The volume size was kept the same during augmentation.
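Two of the steps above can be sketched in pure numpy; arbitrary-angle rotation and fractional zoom would additionally need an interpolation library (e.g. scipy.ndimage), so this sketch covers only the flip and the mirror-padded shift, with our own function names.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(vol):
    """Flip the volume along one randomly chosen axis (x, y or z)."""
    return np.flip(vol, axis=int(rng.integers(3)))

def random_shift(vol, max_frac=0.25):
    """Shift along one random axis by up to max_frac of its size,
    using mirror padding so the output keeps the input shape."""
    axis = int(rng.integers(3))
    size = vol.shape[axis]
    bound = int(size * max_frac)
    shift = int(rng.integers(-bound, bound + 1))
    pad = [(0, 0)] * 3
    pad[axis] = (max(shift, 0), max(-shift, 0))   # pad only the incoming side
    padded = np.pad(vol, pad, mode="reflect")
    start = max(-shift, 0)
    sl = [slice(None)] * 3
    sl[axis] = slice(start, start + size)          # crop back to input shape
    return padded[tuple(sl)]
```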

4.3 Comparison between 2D, 2.5D, 3D and 2.75D strategies

After preprocessing, a dataset of 64x64x64 candidate nodule volumes is obtained as input for training and evaluation in the false positive reduction task. Depending on the strategy, either the entire 3D volume or a portion of it is fed into the CNN for nodule classification. In this experiment, the four strategies described in sections 4.3.1 to 4.3.4 were compared. To compare all four strategies fairly, we kept the CNN structure and hyper-parameters identical, only switching 2D operations (e.g. convolution) to their 3D counterparts where required.

4.3.1 2.5D strategy

Setio et al. [31] proposed a multi-view CNN-based method for the false positive reduction task of the LUNA16 challenge [32]. The method uses a partial 3D volume in the form of multiple 2D images; we refer to such multi-view-based methods as the 2.5D strategy.

Data Preparation From each VOI of size 64x64x64 pixels, nine 2D slices are extracted. Each slice is fed into a 2D CNN stream, and the outputs are then fused into the final classification result.

Data subset 0 1 2 3 4 5 6 7 8 9 total
nodule 138 170 181 158 170 127 154 120 195 142 1555
augmented 78859 70672 74096 75634 76243 75437 76363 74823 74098 72606 748831
non-nodule 78997 70842 74277 75792 76413 75564 76517 74943 74293 72748 750386
Table 1: Statistics on the number of nodules and non-nodules in each of the 10 folds

2D CNN Configuration In this experiment, we re-implemented the multi-view CNN proposed by Setio et al. [31], which consists of 3 consecutive convolutional and max-pooling layers (see Fig. 2(a)). The input of the network consists of nine 64x64 pixel 2D views of the 3D volume, as shown in Fig. 2(a). For ease of comparison, we adopted their optimized hyper-parameters as well (i.e., number of layers, kernel size, learning rate, number of views, fusion method). All nine 2D views are fed into the 2D CNN in parallel streams, the outputs of which are then fused for a final binary decision. In this configuration, the parameters of the convolutional layers are shared between streams.

The first convolutional layer consists of 24 kernels of size 5x5x1, the second of 32 kernels of size 3x3x24, and the third of 48 kernels of size 3x3x32. Each kernel outputs a 2D feature map (e.g. 24 feature maps of 60x60 pixels after the first convolutional layer, denoted as 24@60x60 in Fig. 2(a)). Max-pooling is used in the pooling layers, which halves the patch size (e.g. from 24@60x60 to 24@30x30 after the first max-pooling layer) by taking maximum values in non-overlapping 2x2 windows (stride of 2). The last layer is a fully connected layer with 16 output units. The rectified linear unit (ReLU) activation function, f(x) = max(0, x), is applied in each convolutional layer and in the fully connected layer.
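The quoted feature-map sizes follow mechanically from unpadded ("valid") convolutions and non-overlapping 2x2 pooling; a quick sanity check of the 64x64 input path:

```python
def conv_valid(size, kernel):
    """Spatial output size of an unpadded ('valid') convolution."""
    return size - kernel + 1

def pool2(size):
    """Output size of non-overlapping 2x2 max pooling (stride 2)."""
    return size // 2

size, trace = 64, []
for kernel in (5, 3, 3):        # the three convolutional layers
    size = conv_valid(size, kernel)
    trace.append(size)          # after convolution
    size = pool2(size)
    trace.append(size)          # after max pooling

print(trace)  # [60, 30, 28, 14, 12, 6] — matches 24@60x60, 24@30x30, 32@28x28, ...
```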

Neural Network Fusion The multiple 2D CNN streams need to be fused to generate the final classification result. Setio et al. explored three fusion approaches: committee fusion, late fusion, and mixed fusion. In this experiment, late fusion was implemented for comparison with the other strategies.

The late-fusion method [29, 16] concatenates the 16 output units of all 2D CNN streams and fully connects the concatenated outputs directly to the classification layer (see Fig. 2(a)). By combining the information from multiple views, this strategy has the potential to learn 3D characteristics.

4.3.2 2D strategy

The 2D strategy is the single-view version of the 2.5D strategy explained in section 4.3.1. As shown in Fig. 2(b), in this experiment a single 2D slice of 64x64 pixels was extracted from each VOI on the x-y plane at the center of the z axis. A single 2D CNN stream, identical to the one described in section 4.3.1, was applied, followed by a fully connected layer feeding directly into the binary classification layer.

4.3.3 3D strategy

Besides the 2D and 2.5D strategies, directly using the full 3D volume as CNN input for nodule classification is also a widely adopted strategy (e.g. 3D-CNN [2]).

In this experiment, the input was a 64x64x64 pixel patch. The CNN architecture used for comparison (see Fig. 2(c)) had the same layer structure as the 2D CNN described in section 4.3.2, except that Conv2D and MaxPooling2D were replaced with Conv3D and MaxPooling3D, respectively, and the filters were changed from 2D to 3D correspondingly.

As illustrated in Fig. 2(c), the 3D CNN consists of the same number of kernels as the 2D CNN in each layer. The filter sizes of the three layers are 5x5x5x1, 3x3x3x24, and 3x3x3x32, respectively. The max-pooling layers use non-overlapping windows of size 2x2x2 (stride of 2), which halve the patch size.

4.3.4 2.75D strategy

We differentiate our 2.75D strategy from the traditional 2.5D and 3D strategies as follows. First, we use a single 2D image to represent the 3D VOI. Second, compared to the 2.5D approach, a 2.75D patch contains more 3D texture information.

As shown in Fig. 2(d), a 2D image of size 32x43 pixels was extracted from each 3D VOI of size 64x64x64 pixels by applying the spiral scanning technique described in section 2.1. These 2D patches were fed into the 2D CNN model, which shares the same layer architecture as described in section 4.3.1.

As shown in Fig. 2(d), the first convolutional layer produces 24 feature maps of 28x39 pixels, the second convolutional layer outputs 32 feature maps of 12x17 pixels, and the third convolutional layer results in 48 feature maps of 4x7 pixels.

4.4 Training

We performed the evaluation in 10-fold cross-validation across the selected 888 LIDC-IDRI cases, which were split into 10 subsets with similar numbers of candidates, as in the LUNA16 challenge [32]. For each fold, we used 7 subsets for training, 2 subsets for validation, and 1 subset for testing. The data size of each fold is shown in Table 1.

One of the challenges of using CNNs is to efficiently optimize the model weights on the given training dataset. To limit the differences between the CNNs in this experiment, the same optimizer (RMSProp [37]), cross-entropy loss function, batch size of 12, and dropout [36] with probability 0.5 were shared among all strategies. Early stopping with a maximum of 50 epochs was also applied to all CNN architectures, triggered when the validation loss did not decrease for 10 consecutive epochs. We adopted the normalized weight initialization proposed by Glorot and Bengio [11], and the biases were initialized to zero.

4.5 Evaluation

The performance of the four strategies was compared using two metrics: the area under the ROC curve (AUC) and the competition performance metric (CPM) [26]. AUC is a commonly used evaluation metric for machine learning tasks, representing classification performance as the area under the receiver operating characteristic (ROC) curve. However, in this specific false positive reduction task, where the AUC is close to 100%, the CPM turns out to be more discriminative. It is defined as the average sensitivity at seven operating points of the free-response ROC curve (FROC): 1/8, 1/4, 1/2, 1, 2, 4, and 8 FPs/scan.
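Given a measured FROC curve, the CPM definition above can be computed roughly as follows. Reading sensitivities off by linear interpolation on a log2 FP axis is our assumption here; the official LUNA16 evaluation computes the sensitivities directly from the detections.

```python
import numpy as np

def cpm(fps_per_scan, sensitivities):
    """Competition Performance Metric: mean sensitivity at 1/8, 1/4,
    1/2, 1, 2, 4 and 8 false positives per scan, interpolated from an
    FROC curve given as increasing FP/scan values with sensitivities."""
    operating_points = [1/8, 1/4, 1/2, 1, 2, 4, 8]
    sens = np.interp(np.log2(operating_points),
                     np.log2(fps_per_scan), sensitivities)
    return float(sens.mean())
```

For a flat FROC curve the CPM equals that constant sensitivity, which makes for an easy sanity check.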

5 Experimental Results

On the LIDC dataset, the nodule false positive reduction task was performed using the CNN models described in section 4.3. The strategies were evaluated using 10-fold cross validation based on AUC and CPM. The performance comparison is shown in Fig. 3 and Table 2.

The AUC values in Table 2 show that the 3D strategy slightly outperforms the other strategies, while the 2D, 2.5D, and our 2.75D strategies achieve comparable AUC. In terms of CPM (see Table 2) and FROC (see Fig. 3), the 2.75D strategy clearly shows its capability of capturing spatial 3D information of the volume with far fewer pixels: it outperforms the 2D and 2.5D strategies by 10% and 2%, respectively. The 3D method achieves the highest performance (0.79), which is 7% higher than the proposed 2.75D strategy. Since 2.75D uses 2D convolutions, its performance could be further improved with the benefit of transfer learning.

Figure 3: FROC curve of four different strategies.
Strategy Patch Size CPM AUC
Single View 2D CNN 64x64 0.62 98.02%
Multiple View 2D CNN 64x64x9 0.70 98.29%
Volume based 3D CNN 64x64x64 0.79 99.28%
Our strategy 32x43 0.72 98.05%
Table 2: Result statistics of four different strategies

6 Conclusion and Discussion

To the best of our knowledge, we are the first to propose 2.75D convolutional neural networks for analytical tasks in medical imaging. The proposed strategy is a general idea for transforming a 3D medical image analysis problem into a 2D task and is not limited to a specific CNN architecture. In terms of performance, our approach substantially outperforms the traditional 2D and 2.5D approaches in lung nodule classification. In terms of inference time, our approach is on par with the 2D approach, about 9 times faster than the 2.5D approach, and about 10 times faster than the 3D approach. From a green AI perspective, our approach is especially useful for 3D medical image analysis.

Regarding performance, a very plausible reason why 2.75D works is that the stereo information of the 3D volume is captured by the 2.75D representation, leading to classification performance approaching that of 3D CNN-based strategies and superior to the 2D and 2.5D strategies.

Another advantage of the 2.75D approach is that transfer learning from the ImageNet dataset is directly applicable to the 2.75D, 2.5D, and 2D data representations, whereas for the 3D approach no dataset comparable to ImageNet is publicly available for pre-training.


  1. H. Ahmed, Y. Hu, D. Barratt, D. Hawkes and M. Emberton (2009) Medical image computing and computer-assisted intervention–miccai 2009. Cited by: §1.
  2. W. Alakwaa, M. Nassef and A. Badr (2017) Lung cancer detection and classification with 3d convolutional neural network (3d-cnn). Lung Cancer 8 (8), pp. 409. Cited by: §1, §4.3.3.
  3. S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke and E. A. Hoffman (2011) The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans. Medical physics 38 (2), pp. 915–931. Cited by: §3.
  4. S. G. Armato, M. L. Giger, C. J. Moran, J. T. Blackburn, K. Doi and H. MacMahon (1999) Computerized detection of pulmonary nodules on ct scans. Radiographics 19 (5), pp. 1303–1311. Cited by: §1.
  5. R. Dey, Z. Lu and Y. Hong (2018) Diagnostic classification of lung nodules using 3d neural networks. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 774–778. Cited by: §1.
  6. A. Dobrenkii, R. Kuleev, A. Khan, A. R. Rivera and A. M. Khattak (2017) Large residual multiple view 3d cnn for false positive reduction in pulmonary nodule detection. In 2017 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), pp. 1–6. Cited by: §1.
  7. Q. Dou, H. Chen, L. Yu, J. Qin and P. Heng (2016) Multilevel contextual 3-d cnns for false positive reduction in pulmonary nodule detection. IEEE Transactions on Biomedical Engineering 64 (7), pp. 1558–1567. Cited by: §1.
  8. A. A. Enquobahrie, A. P. Reeves, D. F. Yankelevitz and C. I. Henschke (2004) Automated detection of pulmonary nodules from whole lung helical ct scans: performance comparison for isolated and attached nodules. In Medical Imaging 2004: Image Processing, Vol. 5370, pp. 791–800. Cited by: §1.
  9. L. Fu, J. Ma, Y. Chen, R. Larsson and J. Zhao Automatic detection of lung nodules using 3d deep convolutional neural networks. Journal of Shanghai Jiaotong University (Science), pp. 1–7. Cited by: §1.
  10. Q. Gao, S. Wang, D. Zhao and J. Liu (2007) Accurate lung segmentation for x-ray ct images. In Third International Conference on Natural Computation (ICNC 2007), Vol. 2, pp. 275–279. Cited by: §1.
  11. X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. Cited by: §4.4.
  12. S. Hu, E. A. Hoffman and J. M. Reinhardt (2001) Automatic lung segmentation for accurate quantitation of volumetric x-ray ct images. IEEE transactions on medical imaging 20 (6), pp. 490–498. Cited by: §1.
  13. X. Huang, J. Shan and V. Vaidya (2017) Lung nodule detection in ct using 3d convolutional neural networks. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 379–383. Cited by: §1.
  14. C. Jacobs, E. M. van Rikxoort, T. Twellmann, E. T. Scholten, P. A. de Jong, J. Kuhnigk, M. Oudkerk, H. J. de Koning, M. Prokop and C. Schaefer-Prokop (2014) Automatic detection of subsolid pulmonary nodules in thoracic computed tomography images. Medical image analysis 18 (2), pp. 374–384. Cited by: §3.
  15. G. Kang, K. Liu, B. Hou and N. Zhang (2017) 3D multi-view convolutional neural networks for lung nodule classification. PloS one 12 (11), pp. e0188290. Cited by: §1.
  16. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar and L. Fei-Fei (2014) Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1725–1732. Cited by: §4.3.1.
  17. Y. Kawata, N. Niki, H. Ohmatsu, R. Kakinuma, K. Eguchi, M. Kaneko and N. Moriyama (1997) Classification of pulmonary nodules in thin-section ct images based on shape characterization. In Proceedings of International Conference on Image Processing, Vol. 3, pp. 528–530. Cited by: §1.
  18. W. J. Kostis, A. P. Reeves, D. F. Yankelevitz and C. I. Henschke (2003) Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical ct images. IEEE Trans. Med. Imaging 22 (10), pp. 1259–1274. Cited by: §1.
  19. J. Kuhnigk, V. Dicken, L. Bornemann, D. Wormanns, S. Krass and H. Peitgen (2004) Fast automated segmentation and reproducible volumetry of pulmonary metastases in ct-scans for therapy monitoring. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 933–941. Cited by: §1.
  20. J. Lin, S. Lo, A. Hasegawa, M. T. Freedman and S. K. Mun (1996) Reduction of false positives in lung nodule detection using a two-level neural classification. IEEE Transactions on Medical Imaging 15 (2), pp. 206–217. Cited by: §1.
  21. K. Liu and G. Kang (2017) Multiview convolutional neural networks for lung nodule classification. International Journal of Imaging Systems and Technology 27 (1), pp. 12–22. Cited by: §1.
  22. X. Liu, F. Hou, H. Qin and A. Hao (2018) Multi-view multi-scale cnns for lung nodule type classification from ct images. Pattern Recognition 77, pp. 262–275. Cited by: §1.
  23. S. Lo, S. Lou, J. Lin, M. T. Freedman, M. V. Chien and S. K. Mun (1995) Artificial convolution neural network techniques and applications for lung nodule detection. IEEE Transactions on Medical Imaging 14 (4), pp. 711–718. Cited by: §1.
  24. K. Murphy, B. van Ginneken, A. M. Schilham, B. De Hoop, H. Gietema and M. Prokop (2009) A large-scale evaluation of automatic pulmonary nodule detection in chest CT using local image features and k-nearest-neighbour classification. Medical image analysis 13 (5), pp. 757–770. Cited by: §3.
  25. A. Nibali, Z. He and D. Wollersheim (2017) Pulmonary nodule classification with deep residual networks. International journal of computer assisted radiology and surgery 12 (10), pp. 1799–1808. Cited by: §1.
  26. M. Niemeijer, M. Loog, M. D. Abramoff, M. A. Viergever, M. Prokop and B. van Ginneken (2010) On combining computer-aided detection systems. IEEE Transactions on Medical Imaging 30 (2), pp. 215–223. Cited by: §4.5.
  27. A. Noone, N. Howlader, M. Krapcho, D. Miller, A. Brest, M. Yu, J. Ruhl, Z. Tatalovich, A. Mariotto and D. Lewis (2018) SEER Cancer Statistics Review, 1975–2015. National Cancer Institute, Bethesda, MD. Cited by: §1.
  28. H. Polat and H. Danaei Mehr (2019) Classification of pulmonary CT images by using hybrid 3D deep convolutional neural network architecture. Applied Sciences 9 (5), pp. 940. Cited by: §1.
  29. A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam and M. Nielsen (2013) Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In International conference on medical image computing and computer-assisted intervention, pp. 246–253. Cited by: §4.3.1.
  30. A. A. Setio, C. Jacobs, J. Gelderblom and B. van Ginneken (2015) Automatic detection of large pulmonary solid nodules in thoracic CT images. Medical physics 42 (10), pp. 5642–5653. Cited by: §3.
  31. A. A. A. Setio, F. Ciompi, G. Litjens, P. Gerke, C. Jacobs, S. J. Van Riel, M. M. W. Wille, M. Naqibullah, C. I. Sánchez and B. van Ginneken (2016) Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Transactions on Medical Imaging 35 (5), pp. 1160–1169. Cited by: §1, §1, §4.1, §4.3.1, §4.3.1.
  32. A. A. A. Setio, A. Traverso, T. De Bel, M. S. Berens, C. van den Bogaard, P. Cerello, H. Chen, Q. Dou, M. E. Fantacci and B. Geurts (2017) Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge. Medical image analysis 42, pp. 1–13. Cited by: §4.3.1, §4.4.
  33. W. Shen, M. Zhou, F. Yang, C. Yang and J. Tian (2015) Multi-scale convolutional neural networks for lung nodule classification. In International Conference on Information Processing in Medical Imaging, pp. 588–599. Cited by: §1, §1.
  34. J. Shiraishi, F. Li and K. Doi (2007) Computer-aided diagnosis for improved detection of lung nodules by use of posterior-anterior and lateral chest radiographs. Academic radiology 14 (1), pp. 28–37. Cited by: §1.
  35. American Cancer Society (2019) Cancer Facts & Figures 2019. American Cancer Society. Cited by: §1.
  36. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §4.4.
  37. T. Tieleman and G. Hinton (2012) Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning 4 (2), pp. 26–31. Cited by: §4.4.
  38. J. Wang, R. Engelmann and Q. Li (2007) Segmentation of pulmonary nodules in three-dimensional CT images by use of a spiral-scanning technique. Medical Physics 34 (12), pp. 4678–4689. Cited by: §2.1, §2.
  39. H. Xie, D. Yang, N. Sun, Z. Chen and Y. Zhang (2019) Automated pulmonary nodule detection in CT images using deep convolutional neural networks. Pattern Recognition 85, pp. 109–119. Cited by: §1.
  40. D. F. Yankelevitz, R. Gupta, B. Zhao and C. I. Henschke (1999) Small pulmonary nodules: evaluation with repeat CT—preliminary experience. Radiology 212 (2), pp. 561–566. Cited by: §1.
  41. D. F. Yankelevitz, A. P. Reeves, W. J. Kostis, B. Zhao and C. I. Henschke (2000) Small pulmonary nodules: volumetrically determined growth rates based on CT evaluation. Radiology 217 (1), pp. 251–256. Cited by: §1.