Considerations for a PAP Smear Image Analysis System with CNN Features

Srishti Gautam, Harinarayan K. K., Nirmal Jith, Anil K. Sao, Arnav Bhavsar, and Adarsh Natarajan Srishti Gautam (email: srishti_gautam@students.iitmandi.ac.in), Anil K. Sao and Arnav Bhavsar (email: anil/arnav@iitmandi.ac.in) are with Indian Institute of Technology, Mandi. Harinarayan K. K., Nirmal Jith and Adarsh Natarajan are with Aindra Systems Pvt. Ltd., Bangalore. (email: hari/nirmal/adarsh@iitmandi.ac.in)
Abstract

It has been shown that for automated PAP-smear image classification, nucleus features can be very informative. Therefore, the primary step for automated screening can be cell-nuclei detection followed by segmentation of nuclei in the resulting single-cell PAP-smear images. We propose a patch-based approach using CNN for segmentation of nuclei in single-cell images. We then pose the question of the necessity of accurate segmentation for classification using representation learning with CNN, and of whether low-level CNN features may be useful for classification. We suggest a CNN-based feature-level analysis and a transfer-learning based approach for classification using both segmented as well as full single-cell images. We also propose a decision-tree based approach for classification. Experimental results demonstrate the effectiveness of the proposed algorithms individually (with low-level CNN features), and simultaneously prove the sufficiency of cell-nuclei detection (rather than accurate segmentation) for classification. Thus, we propose a system for analysis of multi-cell PAP-smear images consisting of a simple nuclei detection algorithm followed by classification using transfer learning.

Index Terms: Cervical cancer screening, Nuclei segmentation, Nuclei detection, PAP-smear image classification, Transfer learning, Convolutional neural networks (CNN).

I Introduction

Cervical cancer continues to be one of the deadliest cancers among women worldwide, especially as it is the most common cause of death in developing countries [1, 2, 3]. Every year, approximately 500,000 new cases are reported, of which 85% occur in developing countries, along with approximately 270,000 deaths worldwide [1]. The pre-cancerous lesions of cervical cancer take almost a decade to turn cancerous. Hence, unlike many other cancers, it can be fully cured if detected early [2].

For screening, the traditional PAP-smear test continues to be prevalent, especially in developing countries [3]. Because the cancerous/pre-cancerous changes cause vast differences in cell morphology (in the size and regularity of the nucleus), manual screening is reasonably straightforward. However, it has many drawbacks: it is tedious, time-consuming and expensive [3]. There can also be large inter- and intra-observer variability [4]. Hence, automation is essential for developing a system with lower cost, adequate speed-up and higher accuracy.

An automatic screening system using PAP-smear image analysis traditionally comprises three steps: cell (cytoplasm and nuclei) segmentation, feature extraction and cell classification. Segmentation seems important because the morphological changes associated with the degree of malignancy can be represented by features calculated from the segmented nuclei and cytoplasm, for example, the nucleus-cytoplasm ratio or texture features associated with chromatin pattern irregularity [5]. However, segmentation of the nucleus is typically more reliable than that of the cytoplasm (possibly due to overlapping, occluded cytoplasm in multi-cell images). Moreover, nucleus-based features alone can also be extremely valuable and effective in cervical cancer screening [6]. Further, for some classification frameworks, a Region of Interest (ROI) / nuclei detection step in images containing multiple cells may be used as a substitute for an accurate segmentation step. These detected nuclei can then be used for classification.

While the segmentation process is more rigorous, and can help the classifier focus only on the features of the object in question, detection is relatively easier. Additionally, due to the presence of background in the detected sub-images, some background contextual information is available when using detected cells, as opposed to accurately segmented ones. Therefore, an interesting question to consider is whether accurate segmentation is necessary for classification with contemporary frameworks (e.g., CNN). Having said that, we acknowledge that segmentation can be useful on its own, in case one considers manual interventions, and for medical education and training applications. With regard to CNN-based classification, we note that unlike in standard computer vision applications, cell images arguably do not contain high-level semantics. Thus, we also explore the effect of high-level vs. low-level CNN features, where the latter can make the system more efficient.

Thus, noting the above aspects, for overall system development, we propose algorithms for 1) detection of nuclei in multi-cell images, resulting in single-cell sub-images, 2) segmentation of nuclei in single-cell images, and 3) classification strategies considering both accurate nuclei segmentation and nuclei detection (involving some cellular background pixels), and also considering CNN features from different layers.

I-A Related work: Segmentation

Numerous works have been reported for cervical cell nucleus segmentation, indicating its importance. Phoulady et al. [6, 7] use adaptive multilevel thresholding and ellipse fitting followed by iterative thresholding and subsequent binarizations, but on a different problem of segmenting overlapping cells. Cheng et al. use the HSV color space and color clustering. Gençtav et al. [4] use a multi-scale hierarchical segmentation algorithm. Ref. [8] uses a patch-based fuzzy C-means (FCM) clustering technique where, on the over-segmented image obtained from FCM, a threshold is applied to classify the FCM cluster centers as nucleus and background. A superpixel-based Markov random field (MRF) framework with a gap-search algorithm is proposed in [9]. Bora et al. [10] use Wavelet and Haar transforms along with MSER (maximally stable extremal regions). In recent years, deep learning techniques have also been explored in this area. Multiscale CNNs are used in [11] along with superpixels for multi-cellular images. Song et al. also use neural networks in [12] for nucleus and cytoplasm segmentation, and multiple-scale deep CNNs [13, 14] for overlapping cell segmentation. However, most of the above approaches use non-public datasets, or public datasets lacking variation between normal and abnormal cell images (which typically have very different characteristics). For example, the ISBI segmentation challenge dataset [7] for overlapping cells makes no distinction between normal and abnormal slides. Only the approaches in [8, 9] use a publicly available dataset suitable for full cervical cancer diagnosis, i.e., segmentation and classification of cells into normal vs. abnormal.

In this work, we report a CNN-based segmentation approach, which works on selectively pre-processed cell images depending on the homogeneity of the nucleus, after which the pre-processed and non-pre-processed cells are segmented with two different CNNs. Considering the limited variety in the data, such selective pre-processing helps each of the CNNs learn the data characteristics better.

I-B Related work: Classification

Recently, deep learning and CNNs have taken center stage for various classification problems [15, 16]. They have also gained popularity in medical imaging applications [17, 18, 19]. An important feature of CNNs is a reduced dependency on exact segmentation for classification; more specifically, an approximate segmentation (or ROI detection) can be considered sufficient for classification. This is especially important for classification in medical imaging. However, there are a few drawbacks associated with training a CNN from scratch for a particular problem, for example, the availability of very few annotated images, especially in the case of medical datasets [20], and training times of days or weeks. Transfer learning [21] has proved to be very effective in overcoming these limitations, both in medical [22, 23] and non-medical domains [21]. Since CNN features are more generic at early layers [21] and have already been learned on a million images [16], these can be used to train the subsequent layers of an application-specific CNN. This reduces the chances of overfitting as well as the overall training time of the CNN. Recently, both methodologies, i.e., training a CNN from scratch as well as transfer learning, have also been applied to PAP-smear images [24, 25].

In this work, we consider deep learning methods for classification, using transfer learning on Alexnet [16], with both segmented and non-segmented single-cell images. Alexnet is selected considering the need for a smaller architecture, enabling efficient processing in medical systems. We also propose a combination of decision-tree based classification with transfer learning. Finally, the transfer learning approach is applied to the cells detected in multi-cell images and is also shown to perform effectively.

Figure 1: Proposed system for automated analysis of PAP-smear images.

Thus, the overall contributions of this work are: 1) We propose a patch-based CNN approach for segmentation using selective pre-processing and show that the proposed selectiveness is effective for nuclei segmentation in single-cell images. 2) We explore the classification results with transfer learning from the features extracted from different CNN layers in Alexnet [16], and demonstrate that the low-level features can be more effective. 3) We demonstrate that the easier cell-nuclei detection can be more effective than an accurate segmentation for CNN-based classification. 4) We introduce a decision-tree based classification, which outperforms a simple multi-class classification with transfer learning. 5) We consider various classification scenarios and demonstrate state-of-the-art classification results.

A preliminary version of this work has been reported in [26].

II Dataset

In this work, we have used two datasets for our experimentation: one with single-cell images and another with multi-cell images. The latter is noisier and more artifact-prone.

II-A Herlev dataset

The first dataset on which we evaluate our algorithms is the Herlev PAP-smear dataset [20], a publicly available dataset collected at Herlev University Hospital using a digital camera and microscope. It consists of 917 cell images, each containing a single nucleus, whose description in increasing order of abnormality is given in Table I.

Normal classes: Superficial Squamous (nsup), Intermediate Squamous (nint), Columnar (ncol). Abnormal classes: Light dysplasia (ldys), Moderate dysplasia (mdys), Severe dysplasia (sdys), Carcinoma in situ (cis).

Classes      nsup  nint  ncol  ldys  mdys  sdys  cis
Total cells  74    70    98    182   146   197   150
Table I: Cell classes and counts in the Herlev dataset (sample cell images omitted).

II-B Aindra dataset

80 multi-cell images were collected by Aindra Systems Pvt. Ltd., Bangalore, India, from an oncology center, where the staining and preparation of slides was also done. The images are labeled into 4 classes: Normal, Low-grade squamous intraepithelial lesion (LSIL), High-grade squamous intraepithelial lesion (HSIL) and Squamous cell carcinoma (SCC). The nuclei in these images have been annotated by doctors. Sample images in increasing order of abnormality are shown in Table II.

Classes      Normal  LSIL  HSIL  SCC
Total cells  36      13    21    10
Table II: Cell classes and counts in the Aindra dataset (sample cell images omitted).

III Proposed methodology

We now describe the proposed system, which consists of three methods: detection of nuclei, segmentation of nuclei, and classification of segmented/detected nuclei via deep-learning approaches. A block diagram of the proposed system is given in Figure 1.

III-A Detection of nuclei in multi-cell images

For detection of nuclei in multi-cell images, we propose a straightforward algorithm applied on the V channel of the HSV (Hue, Saturation, Value) color space of the images. Here we neglect the color information because of the extreme color variation in the stains, as can be seen in Table II. The process is divided into three steps: 1) PAP-smear images are generally noisy, as can be seen in Figure 2; hence, we apply median filtering. 2) To improve the contrast and accentuate the differences between nucleus and background, we apply contrast-limited adaptive histogram equalization (CLAHE [27]). 3) Finally, a global threshold is applied, which localizes the nuclei. A minimal code sketch of this pipeline follows.
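
As an illustration, the three steps map onto a short OpenCV pipeline. This is a minimal sketch: the median window size, CLAHE parameters and threshold value are illustrative assumptions, since the text does not fix these numbers.

```python
import cv2

def detect_nuclei(bgr_image, clip_limit=2.0, tile_grid=(8, 8), thresh=70):
    """Nuclei detection sketch: median filter -> CLAHE -> global threshold,
    all on the V channel of HSV (parameter values are illustrative)."""
    v = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[:, :, 2]
    v = cv2.medianBlur(v, 5)                                   # 1) denoise
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    v = clahe.apply(v)                                         # 2) contrast enhancement
    # 3) global threshold; nuclei stain darker than background, so invert.
    _, mask = cv2.threshold(v, thresh, 255, cv2.THRESH_BINARY_INV)
    return mask
```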

Figure 2: Nuclei detection. (a) Multi-cell image, (b) V channel after contrast adjustment by CLAHE, (c) Detected nuclei by a global threshold.

We note that detection is required only for the Aindra dataset; the Herlev dataset [20] consists of such detected single-cell images.

III-B Segmentation of nuclei in single-cell PAP-smear images

We propose a segmentation method comprising broadly two steps, i.e., selective pre-processing followed by patch-based classification using a CNN, as described in the following subsections. We have reported this method in [26]; however, for self-sufficiency, we also briefly discuss it here. We note that the approach assumes that cell detection has already been carried out, and thus operates on single-cell images.

III-B1 Cell separation and selective pre-processing

Often, contrast-enhancement pre-processing aids the segmentation task. While this is useful for small and uniform nuclei, in most cells with larger nuclei the irregularity of the chromatin pattern means that pre-processing hinders good segmentation, as it also increases the intra-nuclear contrast (within the nucleus) (see Figure 3). Thus, we suggest that cells with small and compact nuclei need pre-processing, while those with bigger nuclei and irregular chromatin patterns do not.

Figure 3: Pre-processing. (a, c) Original images: Carcinoma in-situ, Normal intermediate. (b, d) Respective pre-processed images with ground truth overlapped. Note that pre-processing on (a) increases the contrast within the nucleus, while on (c) it increases the contrast between the nucleus and the background.

We propose a feature-based cell separation method where we compute the homogeneity [28] of the original images, and use a threshold on its value to separate the cell images into two categories. Thus, images with relatively homogeneous and compact nuclei are passed on for pre-processing before computing the final segmentation via CNN. For images with irregular nuclei, no pre-processing is done. A sketch of this separation step is given below.
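
A possible realization of this separation uses the GLCM homogeneity of [28], as sketched below; the GLCM distance/angle settings and the threshold value are assumptions of the sketch (function naming follows scikit-image 0.19+).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def needs_preprocessing(gray_cell, hom_thresh=0.5):
    """Return True if the cell image should be pre-processed, i.e. its
    nucleus is relatively homogeneous and compact (threshold assumed)."""
    glcm = graycomatrix(gray_cell.astype(np.uint8),
                        distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]
    return homogeneity > hom_thresh
```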

III-B2 Patch-based CNN

For segmentation, we train two independent CNNs from scratch, one operating on patches from the pre-processed images and the other on patches from the non-pre-processed images. During the testing phase, after cell separation, the images are passed on to the respective CNN. We convert our 2-class classification problem (nucleus vs. background) into a 3-class problem among nucleus (interior), edge (boundary) and background classes [19]. The details of the overall approach can be found in [26]. The sketch below illustrates one way of deriving such 3-class patch labels.
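
To make the 3-class setup concrete, the following sketch derives per-patch labels from a binary ground-truth nucleus mask; the center-pixel rule and the morphological-gradient definition of the edge class are assumptions, not details taken from [26].

```python
import numpy as np
from scipy import ndimage

def patch_labels(gt_mask, centers):
    """3-class labels for patches centered at the given (y, x) positions:
    0 = nucleus interior, 1 = edge (boundary), 2 = background."""
    gt = gt_mask.astype(bool)
    # Edge band: dilation XOR erosion of the nucleus mask (thin ring).
    edge = ndimage.binary_dilation(gt) ^ ndimage.binary_erosion(gt)
    labels = []
    for y, x in centers:
        if edge[y, x]:
            labels.append(1)
        elif gt[y, x]:
            labels.append(0)
        else:
            labels.append(2)
    return np.array(labels)
```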

III-C Classification with detected nuclei

In this section, we explore different strategies and scenarios for the classification of nuclei in cervical cell images.

III-C1 Multi-class classification using transfer learning

Considering the success of deep learning methodologies for classification in recent years, we too explore their application in our work. PAP-smear images generally have a large appearance variation in terms of both contrast and color across the normal and abnormal cells. Furthermore, in medical imaging applications, very few annotated images are available, in the range of hundreds, as opposed to the millions of natural images available for other applications [16]. To overcome these difficulties, we make use of transfer learning, where the filters learned by a CNN pre-trained on ImageNet, which consists of millions of images, are directly used for classification in another domain (medical images in our case). This strategy helps us in two ways: 1) it mitigates the dependency of deep CNNs on huge amounts of annotated training data; 2) it effectively reduces the training time compared to training a CNN from scratch.

It has been shown in the literature that the lower convolutional layers learn low-level primitive features such as gradients and texture, while the deeper layers learn high-level, data-specific semantic features [21]. Considering the hypothesis that semantic features may not be important for cell classification, we explore for classification the outputs of the filters learned by Alexnet [16] at the last (conv5), intermediate (conv3) and first (conv1) convolutional layers, followed by two fully connected layers which we retrain, one consisting of 256 neurons and the last consisting of as many neurons as classes. We refer to these transfer-learning based networks in the rest of the paper as conv5T (Figure 4), conv3T (Figure 5) and conv1T (Figure 6); a code sketch of the conv1T variant follows the figures.

Figure 4: conv5T: features from the fifth convolutional layer of Alexnet.
Figure 5: conv3T: features from the third convolutional layer of Alexnet.
Figure 6: conv1T: features from the first convolutional layer of Alexnet.
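
The following PyTorch sketch shows one way to realize conv1T under the description above: frozen AlexNet conv1 features followed by the two retrained fully connected layers. The pooling stage and the flattened feature size are assumptions for a 224x224 input.

```python
import torch.nn as nn
import torchvision.models as models

class Conv1T(nn.Module):
    """conv1T sketch: pretrained AlexNet conv1 (frozen) + two FC layers."""
    def __init__(self, num_classes=7):
        super().__init__()
        alexnet = models.alexnet(weights="DEFAULT")  # ImageNet weights
        # conv1 + ReLU + max-pool from AlexNet; keep the filters fixed.
        self.features = nn.Sequential(*list(alexnet.features.children())[:3])
        for p in self.features.parameters():
            p.requires_grad = False
        # For a 224x224 input the pooled conv1 output is 64 x 27 x 27.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 27 * 27, 256),   # 256-neuron hidden layer
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),    # one neuron per class
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

conv3T and conv5T follow the same pattern, truncating `alexnet.features` after the third and fifth convolutional layers instead.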

III-C2 Decision-tree based classification using transfer learning

Considering certain similarities and differences between some classes, we propose a decision-tree based approach for classifying the cell images hierarchically, as shown in Figure 7. At the first node, a two-class classification is done between the normal and abnormal classes. This is also important from the perspective of a screening system, where the distinction between normal and abnormal alone can be valuable. Additionally, once we have a good classification between the normal and abnormal classes, we can classify the abnormal cells into further gradations of abnormality. We achieve this in the daughter nodes: at the second level we discriminate the highest grade of abnormality from the other abnormal classes; next, we discriminate the lowest grade of abnormality from the remaining classes; finally, at the leaf node, we discriminate between the leftover abnormal classes. The number of levels in the tree is based on the number of gradations of abnormality in the dataset. At each node, we use a CNN with the conv1T architecture (Figure 6) for classification; a sketch of the cascade follows Figure 7.

Figure 7: Decision-tree based approach for classification
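
A sketch of the cascade logic with four binary conv1T classifiers for the Herlev abnormal grades; the model objects, their `predict` interface, and the mapping of severity order to class names are illustrative assumptions based on the description above.

```python
def classify_hierarchical(image, stage_models):
    """Hierarchical decision-tree classification (Figure 7 sketch).
    stage_models: four binary classifiers; predict() returns 0 or 1."""
    if stage_models[0].predict(image) == 0:
        return "normal"                       # stage 1: normal vs abnormal
    if stage_models[1].predict(image) == 1:
        return "cis"                          # stage 2: highest grade vs rest
    if stage_models[2].predict(image) == 1:
        return "ldys"                         # stage 3: lowest grade vs rest
    # stage 4: leftover abnormal pair
    return "mdys" if stage_models[3].predict(image) == 0 else "sdys"
```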

III-C3 Classification of detected nuclei in multi-cell images using transfer learning

Following the success of the CNN-based classification approach, we apply the transfer-learning based methodology to the real-world multi-cell PAP-smear images. After detecting the nuclei with the detection algorithm described above, we extract the detected nuclei regions as sub-images using bounding boxes around the connected components. These sub-images are then passed for classification to a CNN whose first convolutional layer weights are taken directly from the first convolutional layer of the pre-trained Alexnet. Because there can be large variations in multi-cell images, overfitting can occur. To reduce the chances of overfitting, we use two techniques: 1) by appending a few untrained convolutional layers before the fully connected layers, the number of nodes in the fully connected layers, and hence the number of parameters to be trained in the network, is reduced; 2) we use max-pooling dropout after every untrained convolutional layer [29]. A sketch of such an architecture is given below.
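
One way to instantiate this design in PyTorch is sketched here; the channel widths, dropout rate, and the placement of plain dropout after pooling as a stand-in for max-pooling dropout [29] are assumptions of the sketch, not values from the paper.

```python
import torch.nn as nn
import torchvision.models as models

class MultiCellNet(nn.Module):
    """Frozen AlexNet conv1 + a few untrained conv layers that shrink the
    feature map (and hence the FC layers), with dropout around pooling."""
    def __init__(self, num_classes=2, p_drop=0.3):
        super().__init__()
        alexnet = models.alexnet(weights="DEFAULT")
        self.conv1 = nn.Sequential(*list(alexnet.features.children())[:3])
        for p in self.conv1.parameters():
            p.requires_grad = False
        self.extra = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2), nn.Dropout(p_drop),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2), nn.Dropout(p_drop),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(inplace=True),  # size inferred at first call
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.head(self.extra(self.conv1(x)))
```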

III-C4 Classification with segmented nuclei

For classification on segmented nuclei images, we again use the conv1T architecture. From the detected cells, the nucleus is segmented and the background values are replaced by 255. These images are then fed into the conv1T architecture, assisting it by emphasizing the nucleus features (a masking sketch is given below). Here, we also pose the question of whether exact segmentation is needed for classification, and answer it through experimentation with and without segmentation of nuclei in single-cell images.
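
The background replacement itself is a one-liner in NumPy; the boolean mask convention is an assumption of the sketch.

```python
import numpy as np

def mask_background(cell_img, nucleus_mask, fill_value=255):
    """Keep nucleus pixels, set everything else to a constant (255)."""
    out = cell_img.copy()
    out[~nucleus_mask.astype(bool)] = fill_value
    return out
```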

IV Experiments and Results

We now discuss various experiments for detection, segmentation and different classification scenarios. In all the experiments involving supervised learning (segmentation and classification), 70% of the images from each class are used for training, 15% for validation and the remaining 15% for testing. The results are reported over 5 random training, validation and testing splits.

IV-A Evaluation metrics

For segmentation, we quantify the boundary of the segmented nuclei using the pixel-based F-score [30]. For comparisons with other segmentation techniques, we use the Zijdenbos similarity index (ZSI) [9].
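
For reference, these are the standard definitions, with P and R the pixel-wise precision and recall of a predicted nucleus region A against the ground truth B:

\[
F = \frac{2PR}{P+R}, \qquad \mathrm{ZSI} = \frac{2\,|A \cap B|}{|A| + |B|}.
\]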

For classification, based on the ground-truth label information in the datasets, we consider two problems: 2-class classification on the Herlev and Aindra datasets, and 7-class classification on the Herlev dataset. We use accuracy to quantify the classification approaches on both problems; the overall accuracy is computed as the fraction of correctly classified cells over all classes [30].

IV-B Cervical cell nuclei detection results on Aindra dataset

After obtaining the output of the nuclei detection algorithm on the multi-cell images, a bounding box with a padding of 20 pixels on all 4 sides around each connected component is used to capture a sub-image containing the nucleus (see the sketch below). These sub-images are labeled as normal/abnormal nuclei based on the ground-truth annotations. Visual results for nuclei detection on the Aindra dataset are shown in Table III. We observe that in most cases the detected sub-images indeed focus on a single cell nucleus, with some cell and background regions.
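
A sketch of the padded-box extraction using OpenCV connected components; function and variable names are illustrative.

```python
import cv2
import numpy as np

def extract_nuclei_subimages(image, binary_mask, pad=20):
    """Crop a bounding box, padded by 20 pixels on all sides, around
    each detected connected component (clipped at image borders)."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary_mask.astype(np.uint8))
    h, w = binary_mask.shape
    crops = []
    for i in range(1, n):                    # label 0 is the background
        x, y, bw, bh = stats[i, :4]          # left, top, width, height
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        x1, y1 = min(x + bw + pad, w), min(y + bh + pad, h)
        crops.append(image[y0:y1, x0:x1])
    return crops
```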

Table III: Nuclei detection on Aindra dataset (visual results; normal and abnormal detected nuclei are marked with green and red respectively).

IV-C Cervical cell nuclei segmentation results

We now quantify the effectiveness of the proposed segmentation algorithm on the Herlev dataset. We have used an architecture similar to that of VGGNet [15] for both our CNNs, i.e., the CNN trained on non-pre-processed images and the CNN trained on pre-processed images.

In Table IV, we compare the final results of our approach (row 1) with its counterparts without selective pre-processing, i.e., with 1) the same pre-processing on all images (row 3) and 2) no pre-processing on any image (row 2), which further stresses the importance of the cell separation step. We also compare against a CNN with 2 classes (nucleus and background, without the boundary class), with an architecture similar to our two CNNs and with the homogeneity-based cell separation technique (row 4). The results clearly show that a 2-class CNN does not perform well; the third class has a significant effect in disambiguating the boundary pixels. We note that our method performs better than the state-of-the-art FCM-based technique [8], which also reports F-scores for individual classes.

We provide ZSI comparisons [9] in Table V with various contemporary methods, which are also compared in [9]. We note that the performance of the proposed approach is better than some of the contemporary methods (FCM [8], Threshold [9] and Hierarchical tree [9]) and comparable to the best (MRF [9] and RGVF [9]). Some visual results of the algorithm are shown in Figure 8, where the segmentation map obtained is also shown with 3 classes, i.e., nucleus, edge and background.


Method                                                      nsup  nint  ncol  ldys  mdys  sdys  cis   Average
Proposed: feature-based CS with homogeneity                 0.86  0.91  0.89  0.92  0.94  0.87  0.89  0.90
Without cell separation: no pre-processing on any image     0.62  0.75  0.73  0.77  0.91  0.91  0.74  0.77
Without cell separation: same pre-processing on all images  0.86  0.91  0.89  0.92  0.82  0.45  0.89  0.82
2-class CNN with homogeneity-based CS                       0.60  0.29  0.62  0.42  0.51  0.84  0.61  0.63
FCM [8]                                                     0.84  0.89  0.83  0.83  0.84  0.83  0.78  0.84
Table IV: Class-wise F-scores for nucleus segmentation results on the Herlev dataset.
Method  Proposed  FCM [8]  Threshold [9]  Hierarchical tree [9]  MRF [9]  RGVF [9]
ZSI     0.90      0.80     0.78           0.89                   0.93     0.93
Table V: Pixel-based ZSI comparison.
Figure 8: Nucleus segmentation. (a),(e),(i) Original images of different sizes; (b),(f),(j) segmentation maps produced by the CNN, where blue, cyan and yellow represent nucleus, edge and background pixels respectively; (c),(g),(k) final segmentation maps; (d),(h),(l) ground truth.

IV-D Cervical cell classification results

IV-D1 With detected nuclei using transfer learning

For multi-class classification using transfer learning, we explore the architectures conv5T, conv3T and conv1T given in Figures 4, 5 and 6, where we use the outputs of the fifth, third and first convolutional layers of Alexnet, respectively. After obtaining the respective outputs from the pre-trained Alexnet, we train the fully connected layers: a 256-neuron hidden layer and a final output layer with as many neurons as classes. Because of the high-dimensional outputs of the convolutional layers of Alexnet, the number of weights to be trained is large (on the order of a million); hence we use data augmentation on the training data. We also use data augmentation on the validation data to reduce extreme fluctuations in validation accuracy during training. After augmentation, we end up with 12,000 examples for training and 3,000 for validation. We use 5 random sets of training, validation and testing data for the experimentation and report the average results in Figure 9. All three networks are trained for 200 epochs with mean squared error as the loss function; an illustrative training configuration is sketched below.
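
An illustrative training configuration under the description above; the specific transforms, optimizer and learning rate are assumptions (the paper fixes only the 200 epochs and the MSE loss).

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms

# Augmentation sketch (the exact transforms used are not specified).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(30),
    transforms.ToTensor(),
])

model = Conv1T(num_classes=7)        # conv1T sketch from Section III-C1
criterion = nn.MSELoss()             # MSE against one-hot targets
optimizer = optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
# Train for 200 epochs over the ~12,000 augmented training examples.
```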

Figure 9: 7-class CNN accuracies with transfer learning on Herlev dataset

Activation maps from different convolutional layers of Alexnet for an example image are shown in Figure 10. It can be seen that the activation map from the first convolutional layer captures the prominent texture features of the image, as opposed to those from the third and fifth convolutional layers. This observation supports the hypothesis that for cell images, as the depth of the network increases, the high-level features do not seem informative. It also supports our motivation to select Alexnet, which consists of a smaller number of layers. We provide the average training, validation and testing accuracies for the 7-class classification, over 5 random trials, for the different architectures in Figure 9. The consistent increase in accuracy from conv5T to conv1T shows that the cell classification problem is better served by low-level features than by those at deeper levels. We believe this is an interesting and important insight, as typical deep learning approaches consider only the last-layer features for classification.

Figure 10: Activation maps. Left to right: Conv1, Conv3 and Conv5 activation maps for a Normal superficial cell image.

IV-D2 With detected nuclei using decision-tree

Next, we explore the results of decision-tree based classification using transfer learning. Since transfer learning with conv1T gives the best results, we report the decision-tree based classification results with the architecture given in Figure 6. The overall accuracies at each stage are given in Table VI. We note that the decision-tree based method with transfer learning gives high accuracy at each stage.

Stage 1 Stage 2 Stage 3 Stage 4
conv1T 99.3% 95.6% 95.1% 94.1%
Table VI: Accuracies at different stages of the decision-tree based approach using conv1T.

IV-D3 Classification on Aindra dataset

After detection of nuclei in the multi-cell images of the Aindra dataset, we use transfer learning for classification. The connected components obtained from the nuclei detection step are extracted by taking a bounding box around each detected nucleus, with a padding of 20 pixels on all four sides of the actual seed-region bounding box. These sub-images are labeled as normal/abnormal based on their overlap with the ground truth. Next, 3 random training and validation sets are created based on the whole-slide images, and passed to the transfer-learning architecture for classification into normal/abnormal classes. The mean training and validation accuracies are reported in Table VII. These accuracies are quite good considering that the Aindra dataset is complex in terms of contrast variation and artifacts.

Training accuracy Validation accuracy
Accuracy 95.5% 85.7%
Table VII: Training and validation accuracies on Aindra dataset after nuclei detection

IV-D4 With segmented nuclei

Considering the best results were obtained with the conv1T architecture, we now explore the test accuracies using conv1T with segmented nuclei from the Herlev dataset for 7-class classification. For this, we keep the original nucleus intensity values and set the background values to 255. The results of classification with ground-truth and proposed segmentations, and using the full cell images (without segmentation), are shown in Table VIII. The results clearly demonstrate that performance with segmentation is rather limited, suggesting that the contextual information in images with detected nuclei may be important for good classification. Thus, as far as classification performance is concerned, an easier nuclei detection process may be sufficient, rather than a sophisticated segmentation approach.

Without segmentation With ground truth segmentation With segmentation using proposed method
93.75% 82.33% 78.25%
Table VIII: Accuracies for segmented and non-segmented cell images.

IV-D5 Comparisons

In Table IX, we compare the results of our classification approaches on the Herlev dataset for the 2-class classification problem with the following existing methods: 1) benchmark results in [31], 2) ensemble learning over three classifiers [10], 3) particle swarm feature selection with a 1-nearest-neighbor classifier [32], 4) genetic algorithm feature selection [33], 5) an artificial neural network with 9 cell-based features [30], and 6) transfer learning by training a new architecture (ConvNet-T) from scratch [25]. We note that our results with conv1T surpass all of the existing algorithms, and are very close to the ANN results of [30]; this might suggest that this is near the best accuracy that can be reached on this dataset. For the 7-class classification problem on the Herlev dataset, we provide a comparison in Table X. The results show that our approach surpasses the benchmark results and is extremely close to that of the ANN [30].

We stress that our approach is comparatively easier (segmentation-free) and faster than that of [30], wherein both nuclei and cytoplasm are segmented, whereas we pass the cell images directly to a CNN trained with transfer learning. Also, our approach is arguably better than the similar approach in [25] in terms of training time, since they train their CNN architecture from scratch while we only train the fully connected layers.

Method    Proposed: conv1T  Benchmark [31]  Ensemble [10]  PSO-1nn [32]  GEN-1nn [33]  ANN [30]  ConvNet-T [25]
Accuracy  99.3%             93.6%           96.5%          96.7%         96.8%         99.27%    98.3%
Table IX: 2-class classification (Normal vs. Abnormal) on the Herlev dataset.
Method    Proposed (conv1T)  Benchmark [31]  ANN [30]
Accuracy  93.75%             61.1%           93.78%
Table X: 7-class classification on the Herlev dataset.

V Conclusion

In this paper, we reported a PAP-smear image analysis system for cervical cancer screening for both single- and multi-cell images. The analysis consists of three steps: detection, segmentation and classification. We proposed a simple nuclei detection algorithm for multi-cell images, and a patch-based CNN approach with selective pre-processing for segmentation, which achieves an overall F-score of 0.90 on the Herlev dataset. For classification, we proposed a feature-level analysis using transfer learning on Alexnet for both single- and multi-cell images, and a decision-tree based classification as an alternative to plain multi-class classification. Further, we showed through experimentation that accurate segmentation is not necessary for classification with deep learning. We obtain state-of-the-art classification accuracy on Herlev for 2-class (99.3%) and 7-class (93.75%) classification.

Acknowledgment

We acknowledge the support of Aindra Systems Pvt. Ltd. for funding this research and regular discussions.

References

  • [1] C. Shakuntala, “Cervical cancer preventable, treatable, but continues to kill women,” vol. 1.   Medknow Publications, 2016.
  • [2] A. Sreedevi, R. Javed, and A. Dinesh, “Epidemiology of cervical cancer with special focus on india,” Int J Womens Health, vol. 7, pp. 405–14, 2015.
  • [3] E. Bengtsson and P. Malm, “Screening for cervical cancer using automated analysis of pap-smears,” Computational and mathematical methods in medicine, vol. 2014, 2014.
  • [4] A. Gençtav, S. Aksoy, and S. Önder, “Unsupervised segmentation and classification of cervical cell images,” Pattern Recognition, vol. 45, no. 12, pp. 4151 – 4168, 2012.
  • [5] N. Lassouaoui, L. Hamami, and N. Nouali, “Morphological description of cervical cell images for the pathological recognition.” in World Academy of Science(5), 2005, pp. 49–52.
  • [6] H. A. Phoulady, M. Zhou, D. B. Goldgof, L. O. Hall, and P. R. Mouton, “Automatic quantification and classification of cervical cancer via adaptive nucleus shape modeling,” in IEEE International Conference on Image Processing (ICIP), Sept 2016, pp. 2658–2662.
  • [7] H. A. Phoulady, D. B. Goldgof, L. O. Hall, and P. R. Mouton, “A new approach to detect and segment overlapping cells in multi-layer cervical cell volume images,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), April 2016, pp. 201–204.
  • [8] T. Chankong, N. Theera-Umpon, and S. Auephanwiriyakul, “Automatic cervical cell segmentation and classification in pap smears,” Computer Methods and Programs in Biomedicine, vol. 113, no. 2, pp. 539 – 556, 2014.
  • [9] L. Zhao, K. Li, M. Wang, J. Yin, E. Zhu, C. Wu, S. Wang, and C. Zhu, “Automatic cytoplasm and nuclei segmentation for color cervical smear image using an efficient gap-search MRF,” Computers in Biology and Medicine, vol. 71, pp. 46 – 56, 2016.
  • [10] K. Bora, M. Chowdhury, L. B. Mahanta, M. K. Kundu, and A. K. Das, “Automated classification of pap smear images to detect cervical dysplasia,” Computer Methods and Programs in Biomedicine, vol. 138, pp. 31 – 47, 2017.
  • [11] Y. Song, L. Zhang, S. Chen, D. Ni, B. Lei, and T. Wang, “Accurate segmentation of cervical cytoplasm and nuclei based on multiscale convolutional network and graph partitioning,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 10, pp. 2421–2433, Oct 2015.
  • [12] Y. Song, L. Zhang, S. Chen, D. Ni, B. Li, Y. Zhou, B. Lei, and T. Wang, “A deep learning based framework for accurate segmentation of cervical cytoplasm and nuclei,” in 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Aug 2014, pp. 2903–2906.
  • [13] Y. Song, J. Z. Cheng, D. Ni, S. Chen, B. Lei, and T. Wang, “Segmenting overlapping cervical cell in pap smear images,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), April 2016, pp. 1159–1162.
  • [14] Y. Song, E. L. Tan, X. Jiang, J. Z. Cheng, D. Ni, S. Chen, B. Lei, and T. Wang, “Accurate cervical cell segmentation from overlapping clumps in pap smear images,” IEEE Transactions on Medical Imaging, vol. 36, no. 1, pp. 288–300, Jan 2017.
  • [15] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds.   Curran Associates, Inc., 2012, pp. 1097–1105.
  • [17] X. W. Gao, R. Hui, and Z. Tian, “Classification of ct brain images based on deep learning networks,” Computer Methods and Programs in Biomedicine, vol. 138, no. Supplement C, pp. 49 – 56, 2017.
  • [18] M. Anthimopoulos, S. Christodoulidis, L. Ebner, A. Christe, and S. Mougiakakou, “Lung pattern classification for interstitial lung diseases using a deep convolutional neural network,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1207–1216, May 2016.
  • [19] D. A. Van Valen, T. Kudo, K. M. Lane, D. N. Macklin, N. T. Quach, M. M. DeFelice, I. Maayan, Y. Tanouchi, E. A. Ashley, and M. W. Covert, “Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments,” PLOS Computational Biology, vol. 12, no. 11, pp. 1–24, 11 2016.
  • [20] J. Jantzen, J. Norup, G. Dounias, and B. Bjerregaard, Pap-smear Benchmark Data For Pattern Classification.   NiSIS, 2005, pp. 1–9.
  • [21] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” in Proceedings of the 27th International Conference on Neural Information Processing Systems, ser. NIPS’14.   Cambridge, MA, USA: MIT Press, 2014, pp. 3320–3328.
  • [22] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, “Convolutional neural networks for medical image analysis: Full training or fine tuning?” vol. abs/1706.00712, 2017.
  • [23] H. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. J. Mollura, and R. M. Summers, “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” vol. abs/1602.03409, 2016.
  • [24] K. Bora, M. Chowdhury, L. B. Mahanta, M. K. Kundu, and A. K. Das, “Pap smear image classification using convolutional neural network,” in Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, ser. ICVGIP ’16.   New York, NY, USA: ACM, 2016, pp. 55:1–55:8.
  • [25] L. Zhang, L. Lu, I. Nogues, R. Summers, S. Liu, and J. Yao, “Deeppap: Deep convolutional networks for cervical cell classification,” IEEE Journal of Biomedical and Health Informatics, vol. PP, no. 99, pp. 1–1, 2017.
  • [26] S. Gautam, A. Bhavsar, A. K. Sao, and H. K.K., “Cnn based segmentation of nuclei in pap-smear images with selective pre-processing,” pp. 10 581 – 10 581 – 9, 2018.
  • [27] A. J. Vyavahare and R. C. Thool, “Segmentation using region growing algorithm based on clahe for medical images,” in Fourth International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom2012), Oct 2012, pp. 182–185.
  • [28] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, Nov 1973.
  • [29] H. Wu and X. Gu, “Max-pooling dropout for regularization of convolutional neural networks,” CoRR, vol. abs/1512.01400, 2015. [Online]. Available: http://arxiv.org/abs/1512.01400
  • [30] T. Chankong, N. Theera-Umpon, and S. Auephanwiriyakul, “Automatic cervical cell segmentation and classification in pap smears,” Computer Methods and Programs in Biomedicine, vol. 113, no. 2, pp. 539 – 556, 2014.
  • [31] J. Jantzen, J. Norup, G. Dounias, and B. Bjerregaard, Pap-smear Benchmark Data For Pattern Classification.   NiSIS, 2005, pp. 1–9.
  • [32] Y. Marinakis, M. Marinaki, and G. Dounias, “Particle swarm optimization for pap-smear diagnosis,” Expert Systems with Applications, vol. 35, no. 4, pp. 1645 – 1656, 2008.
  • [33] Y. Marinakis, G. Dounias, and J. Jantzen, “Pap smear diagnosis using a hybrid intelligent scheme focusing on genetic algorithm based feature selection and nearest neighbor classification,” Computers in Biology and Medicine, vol. 39, no. 1, pp. 69 – 78, 2009.