CeliacNet: Celiac Disease Severity Diagnosis on Duodenal Histopathological Images Using Deep Residual Networks

Rasoul Sali1, Lubaina Ehsan2, Kamran Kowsari1, Marium Khan2, Christopher A. Moskaluk2, Sana Syed2,3, and Donald E. Brown1,3
{rs8wa, lubaina, kk7nc, mk2ne, cam5p, sana.syed, deb}@virginia.edu
1 Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, USA
2 Department of Pediatrics, School of Medicine, University of Virginia, Charlottesville, VA, USA
3 School of Data Science, University of Virginia, Charlottesville, VA, USA
Abstract

Celiac Disease (CD) is a chronic autoimmune disease that affects the small intestine in genetically predisposed children and adults. Gluten exposure triggers an inflammatory cascade which leads to compromised intestinal barrier function. If this enteropathy goes unrecognized, it can lead to anemia, decreased bone density, and, in longstanding cases, intestinal cancer. The prevalence of the disorder in the United States is . An intestinal (duodenal) biopsy is considered the “gold standard” for diagnosis, yet mild CD may go unnoticed due to non-specific clinical symptoms or mild histologic features. In our current work, we trained a model based on deep residual networks to diagnose CD severity using a histological scoring system called the modified Marsh score. The proposed model was evaluated using an independent set of whole slide images from  CD patients and achieved an AUC greater than  in all classes. These results demonstrate the diagnostic power of the proposed model for CD severity classification using histological images.

Deep Learning, Residual Networks, Celiac Disease, Marsh Score, Medical Imaging, Duodenal Histopathological Images

I Introduction

Celiac disease (CD) is an inability to normally process dietary gluten (present in foods such as wheat, rye, and barley) and is present in % of the US population. Gluten consumption by people with CD can cause diarrhea, abdominal pain, bloating, and weight loss. If unrecognized, it can lead to anemia, decreased bone density, and, in longstanding cases, intestinal cancer [7, 24]. An intestinal (duodenal) biopsy, obtained via endoscopic evaluation, is considered the “gold standard” for diagnosis of CD. Due to unclear clinical symptoms and/or obscure histopathological features (based on biopsy images), CD often goes undiagnosed [6]. There has been major clinical interest in developing new and innovative methods to automate and enhance the detection of morphological features of CD on biopsy images.

Studies have shown the ease of training Convolutional Neural Networks (CNNs) for image recognition. These networks are a family of machine learning architectures with proven superior performance over a wide range of computer vision tasks such as classification and object detection. Due to the wide availability of robust open-source software and high-quality public datasets, these architectures are fast becoming the backbone of many modern computer vision technologies. Given large amounts of data, these models have been shown to be effective in solving many biomedical imaging challenges. CNNs have been successfully applied to medical images such as MRI and X-rays [12, 21], and have also shown promising performance on histopathological images [20, 2].

Among various architectures of CNNs, Residual Networks (ResNet) have received special attention due to their considerably superior performance in the analysis of histopathological images for disease detection, diagnosis and prognosis prediction to complement the opinion of a human pathologist. Multiple groups have published on the use of the ResNet architecture for classification of Hematoxylin and Eosin (H&E) stained biopsy images including breast and prostate cancer [9, 25, 22, 5, 26] and colorectal polyps [18]. Similarly impressive results for CD diagnosis based on whole slide biopsy images have been noted in published literature [29]. Herein we explore the performance of deep residual networks in severity diagnosis of CD on duodenal biopsy images.

This paper is organized as follows: In Section II, disease severity classes of CD are presented. In Section III, we describe the data used in this study. Section IV presents the data pre-processing steps. The methodology is explained in Section V. Empirical results are elaborated in Section VI. Finally, Section VII concludes the paper along with outlining future directions.

II Severity Classes of Celiac Disease

The Modified Marsh Score Classification was developed to classify the severity of CD based on microscopic histological morphological features (Figure 1). It takes into account the architecture of the duodenum as having finger-like projections (called “villi”) which are lined by epithelial cells. Between the villi are crevices called crypts that contain regenerating epithelial cells. The normal ratio of the length of a typical healthy villus to the depth of a representative healthy crypt should be between 3:1 and 5:1. In the normal, healthy duodenum (first part of the small intestine), there should be no more than 30 immune cells known as lymphocytes interspersed per 100 epithelial cells in the top layer of the villus. Marsh I histology comprises normal villus architecture with an increase in the number of intraepithelial lymphocytes. Marsh II includes increased intraepithelial lymphocytes along with a finding known as crypt hypertrophy, in which the crypts appear enlarged. This is rare in practice, since patients typically progress rapidly from Marsh I to IIIa. Marsh III is sub-divided into IIIa (partial villus atrophy), IIIb (subtotal villus atrophy), and IIIc (total villus atrophy) to describe the spectrum of villus atrophy along with crypt hypertrophy and increased intraepithelial lymphocytes. Finally, in Marsh IV, the villi are completely atrophied. This is called “hypoplastic” or complete villus atrophy and describes the microscopic histology of duodenal tissue from patients at the extreme end of gluten sensitivity.

Fig. 1: CD severity classification based on modified Marsh score [8]

III Data Source

H&E stained duodenal biopsy slides were obtained from the archival biopsies of CD patients at the University of Virginia (UVa) in Charlottesville, VA, United States. Each slide contained multiple biopsies per patient and was digitized into a whole slide image at 40x magnification using the Leica SCN 400 slide scanner (Meyer Instruments, Houston, TX) at the Biorepository and Tissue Research Facility at UVa. Characteristics of our patient population were as follows: the median age was  months, and we had a roughly equal distribution of males and females. Biopsy images for our study population were scored by two medical professionals and validated with reads from a pathologist specialized in gastroenterology. Our biopsy image dataset ranged from Marsh I to IIIc, with no biopsy images present for Marsh II.

IV Data Pre-processing

Since whole slide images (WSIs) were digitized at high resolutions, these were large files with notable color variability apparent on visual inspection. Therefore, we pre-processed these before any computational analyses were conducted. This section describes all pre-processing steps including image patching, patch clustering and color normalization.

IV-A Image patching

The effectiveness of CNNs in image classification has been shown in various studies across different domains [19, 16, 14]. However, training a CNN on high-resolution, gigapixel-level WSIs is often infeasible due to the high computational cost. Moreover, applying CNNs directly to WSIs loses a large part of the discriminatory information because of the extensive down-sampling such images require [15]. We hypothesized that, since there are cellular-level morphological differences between CD severity classes across the spectrum of pathology, a classifier trained on image patches would likely perform as well as or better than a classifier trained at the WSI level. A sliding window method was applied to each high-resolution WSI to generate patches of size  pixels with  overlapping area. After generating patches from each image, we labelled each patch based on its associated image.
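The sliding-window patching step can be sketched as follows. This is a minimal sketch: the paper does not restate its patch size or overlap fraction here, so the 500-pixel window and 50% overlap below are illustrative assumptions, not the actual settings.

```python
def patch_coordinates(width, height, patch=500, overlap=0.5):
    """Top-left corners of an overlapping sliding window over a WSI.

    `patch` and `overlap` are illustrative values, not the paper's settings.
    """
    stride = int(patch * (1 - overlap))  # step between consecutive windows
    xs = range(0, max(width - patch, 0) + 1, stride)
    ys = range(0, max(height - patch, 0) + 1, stride)
    # each (x, y) pair is one candidate patch; every patch later inherits
    # the Marsh score label of its parent slide
    return [(x, y) for y in ys for x in xs]
```

Each generated patch is then labeled with the Marsh score of the slide it came from, as described above.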

Fig. 2: Color normalization artifacts when using the method proposed by Vahadane et al. [27]. Images in the first row represent the target image and some source images. Their associated normalized images are in second row

IV-B Patch Clustering

Clustering organizes objects in such a way that objects within a group, or cluster, are more similar to each other than to objects in other groups. There is a wide variety of algorithms for data clustering, and k-means is one of the simplest [17]. However, finding the optimal solution to the k-means problem is NP-hard in general Euclidean space, even for two clusters; clustering n d-dimensional entities into k clusters can be solved exactly in O(n^{dk+1}) time [3]. Reducing the dimension therefore significantly improves the time complexity of the k-means algorithm. To this end, a convolutional autoencoder [10] was used to learn embedded features of each patch. Such autoencoders have been reported in the literature as a highly successful dimensionality reduction method thanks to the representational power of neural networks [28].

In our work, a two-step clustering process was applied to identify useless patches, which had mostly been created from the background of the WSIs: all or a large part of these patches were blank or did not contain any useful biopsy information. In the first step, a convolutional autoencoder was used to learn the embedded features of each patch, and in the second step the k-means clustering algorithm was applied to cluster the embedded features into two clusters: useful and not useful. Some results of patch clustering are shown in Figure 3.
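The second step, two-way k-means on the embedded features, can be sketched in miniature as below. This is a toy Lloyd's-iteration sketch over 2-D points standing in for autoencoder embeddings; the deterministic initialization is a simplification of the usual random or k-means++ seeding, and the real pipeline clusters much higher-dimensional embeddings.

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_means(points, iters=20):
    """Lloyd's k-means with k=2, as used to separate 'useful' patches
    from background-only patches. Deterministic init for reproducibility:
    the first point and the point farthest from it."""
    centers = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        labels = [0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
                  for p in points]
        # update step: recompute each center as the mean of its members
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return labels
```

One cluster then corresponds to patches with biopsy tissue and the other to blank background patches, which are discarded.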

Fig. 3: Some samples of clustering results - cluster 1 included patches with useful information and cluster 2 included patches without useful information (mostly created from the slide background and border areas of the WSIs)

IV-C Stain normalization

Histological images have substantial color variation that adds bias while training the model. This arises due to a wide variety of factors such as differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners [27]. To avoid any bias, unwanted color variations are neutralized by conducting color normalization as an essential pre-processing step prior to any analyses.

Various color normalization approaches have been proposed in the published literature. In this study, we used the approach proposed by Vahadane et al. [27]. This approach preserves biological structure information by basing color mixture modeling on sparse non-negative matrix factorization. Figure 2 shows an example of the result of applying this technique on representative biopsy patches.

V Methodology

V-A Model development

CNNs have demonstrated promising performance in image classification tasks. Many different CNN architectures exist in the literature, each with associated advantages and drawbacks. In the current study, we used the deep residual network (ResNet) [13], a model which has shown great performance in image classification problems, including medical image analysis [9, 30]. Although CNNs with more convolutional layers can achieve more accurate results, simply stacking more convolutional layers does not lead to better performance: when a deep network reaches a certain depth, its performance saturates and then begins to rapidly decline. Such models also involve a large number of parameters and are computationally expensive to train through all of them. This is called the degradation problem, and ResNet was originally proposed to tackle it. The core idea of ResNet is a skip connection that skips one or more layers and passes the input from an earlier layer to a later layer without any modification. Since these added shortcut connections perform identity mapping, no extra parameters are added to the model. This architecture enables the deployment of deeper networks without the degradation problem. The building block of ResNet is compared to the building block of a traditional network in Figure 4. In traditional networks, the mapping from input x to output can be represented by a nonlinear function H(x). In residual learning blocks, F(x) + x is used as the mapping function, where F(x) = H(x) - x [13]. In essence, a traditional CNN maps the input x to a completely new representation H(x) that does not keep any information about the original input, while a ResNet block computes a slight change F(x) to the original input to get a slightly altered representation. ResNet was the winner of the ILSVRC 2015 image classification, detection, and localization tasks, as well as the winner of the MS COCO 2015 detection and segmentation tasks.
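The residual mapping F(x) + x can be illustrated with a toy numeric sketch. The `layer` functions here are hypothetical stand-ins for the residual branch F; in a real ResNet block, F is a stack of convolutions, batch normalization, and ReLU.

```python
def residual_block(x, layer):
    """Compute y = F(x) + x on a list of activations.

    The identity skip adds the input back unchanged, so the block only has
    to learn the residual F, and the shortcut itself adds no parameters.
    """
    return [f + xi for f, xi in zip(layer(x), x)]
```

When F collapses toward zero, the block reduces to the identity mapping, which is why stacking more residual blocks does not degrade performance the way plain stacking can.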

Fig. 4: Building blocks of (left) a traditional CNN, (right) a ResNet

Different variants of ResNet models such as ResNet50, ResNet101, and ResNet152 have been trained on the ImageNet dataset [11]. We customized ResNet50 by removing its fully connected layers and keeping only the ResNet backbone as a feature extractor. We then added one fully connected layer with  neurons that received the flattened output of the feature extractor. Finally, an output layer was added to represent a prediction probability for each of the four Marsh score categories: I, IIIa, IIIb, and IIIc. We used dropout on the fully connected layers, with rate , as the regularizer. This model is summarized in Table I.

Layer Type Output Shape Number of Parameters
1 Model
2 Flatten
3 Dense
4 Dropout
5 Dense
TABLE I: Architecture of the model
       Class        Accuracy (%)        Precision (%)        Recall (%)        F1-measure (%)
I
IIIa
IIIb
IIIc
TABLE II: Patch-level performance of model for celiac disease severity diagnosis

We resized the pre-processed patches to  pixels and used them to train our model. Both horizontal and vertical random rotations were performed as part of our data augmentation. The model was trained on around  patches for each of the four classes. Optimization was performed using the RMSprop optimizer with no momentum, a base learning rate of , and a multiclass cross-entropy loss function.

Fig. 5: Overview of whole-slide inference process using aggregation of patch-level classifications

V-B Whole slide classification

Our goal was to classify WSIs based on severity assessed via the modified Marsh score, but the model was trained to classify small patches rather than WSIs. To achieve this goal, a heuristic method was developed that aggregates patch classifications and translates them into whole-slide inferences. Each WSI in the test set was first patched, patches that did not contain any information were filtered out, and stain normalization was performed. After these pre-processing steps, our trained model was applied. We denote the probability distribution over possible labels, given patch x_ij (the j-th patch of slide i) and training set D, by P(y | x_ij, D). In general, this is a vector of length K, where K is the number of classes; the probability is conditional on the test patch x_ij as well as the training set D. For each patch, the model outputs a vector of four components giving the probability of each of the four classes of CD severity. Given this probabilistic output, the patch x_ij in slide i is assigned to the most probable class label ŷ_ij, as shown in Equation 1.

ŷ_ij = argmax_k P(y = k | x_ij, D)    (1)

where ŷ_ij is the maximum a posteriori (MAP) estimate for patch x_ij. Summing these vectors over all patches of a slide and normalizing the result yields a vector whose components give the probability of each CD severity class for the associated WSI. Equation 2 shows how the class of the WSI was predicted.

Ŷ_i = argmax_k (1/n_i) Σ_{j=1}^{n_i} P(y = k | x_ij, D)    (2)

where n_i is the number of patches in slide i. Figure 5 depicts an overview of the whole-slide inference process.
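The patch-to-slide aggregation described in Equations 1 and 2 can be sketched as follows, assuming each patch output is a list of K class probabilities:

```python
def map_label(probs):
    """Equation 1: MAP class index for a single patch's probability vector."""
    return max(range(len(probs)), key=probs.__getitem__)

def slide_label(patch_probs):
    """Equation 2: sum the patch probability vectors, normalize by the
    number of patches, and take the argmax as the slide-level label."""
    n = len(patch_probs)
    k = len(patch_probs[0])
    totals = [sum(p[c] for p in patch_probs) / n for c in range(k)]
    return max(range(k), key=totals.__getitem__)
```

Because the normalization by n divides every component by the same constant, it does not change the argmax; it is kept so the aggregated vector remains a probability distribution over severity classes.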

Fig. 6: Patch-level ROC and AUC for different classes
Fig. 7: Class activation mapping heat maps highlighting the most informative regions of patches relevant to different categories including goblet cells, inflammatory cells in lamina propria, crypt epithelium, fibrotic inflammatory debris, surface enterocytes, Paneth cells, Brunner’s glands, neuroendocrine cells, villus edge and apposed enterocytes. Area of attention is shown in blue color.

VI Experimental results

VI-A Patch-level performance

To evaluate the effectiveness of our proposed model, we used an independent test set including  WSIs. After applying the sliding window to patch these whole slides and performing the aforementioned pre-processing steps,  crops remained for model evaluation. The performance of our model on this set is shown in Table II, which includes accuracy, precision, recall, and the F1 score with  confidence intervals. Patch-level ROC curves and the AUC for each class are shown in Figure 6. As shown, the AUC for all classes was greater than .
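For reference, a one-vs-rest AUC of the kind reported in Figure 6 can be computed directly from the rank (Mann–Whitney) formulation. This is a sketch of the metric itself, not the paper's evaluation code:

```python
def auc(labels, scores):
    """AUC = fraction of (positive, negative) pairs ranked correctly,
    counting tied scores as half a correct ranking.

    `labels` are 1 for the class of interest, 0 for the rest (one-vs-rest).
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For a multiclass model such as ours, this is applied once per class, treating that class's predicted probability as the score.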

VI-B Slide-level performance

After classification of the test patches, their results were aggregated using the method described in Section V-B to make an inference about each test slide. By applying this method, all slides in the test set were classified correctly, and the accuracy of the model in all classes was . In the four classes I, IIIa, IIIb, and IIIc there were , , and  slides, respectively. This means that CD severity was correctly diagnosed in every case.

VI-C Class Activation Mapping

We used the Grad-CAM approach to obtain visual-explanation heat-maps of the microscopic features in WSI patch areas predictive of CD severity. Grad-CAM visualizations were obtained for  images ( Marsh I,  Marsh IIIa,  Marsh IIIb,  Marsh IIIc). Qualitatively, the Grad-CAM images of our model localized microscopic morphological features, such as different cell types and tissue structures, that corresponded to the disease pathology. Quantitatively, our Grad-CAM heat-maps were reviewed by two medical professionals. These heat-maps were broadly categorized into 10 groups: goblet cells, inflammatory cells in the lamina propria, crypt epithelium, fibrotic inflammatory debris, surface enterocytes, Paneth cells, Brunner’s glands, neuroendocrine cells, villus edge, and apposed enterocytes. Visualizations of these different categories on individual patches are shown in Fig. 7. Most images depicted an overlap of the heat-map for enterocytes and goblet cells, or enterocytes and lymphocytes, which are known to be representative of CD [23] (Fig. 7).

VI-D Hardware and Framework

All of the results shown in this paper were produced on Central Processing Units (CPUs) and Graphical Processing Units (GPUs); the model can run on GPU only, CPU only, or both. The processor used in these experiments was an Intel Xeon E5-2640 (2.6 GHz) with 12 cores and 64 GB of memory (DDR3). The graphics cards on our machine were an Nvidia Quadro K620 and an Nvidia Tesla K20c. This work was implemented in Python using the Compute Unified Device Architecture (CUDA), a parallel computing platform and Application Programming Interface (API) model created by Nvidia. We used the TensorFlow and Keras libraries for creating the neural networks [1, 4].

VII Conclusion

In this paper, we investigated CD severity using CNNs applied to histopathological images. A state-of-the-art deep residual neural network architecture was used to categorize patients, based on H&E stained duodenal histopathological images, into four classes representing different CD severities under a histological classification called the modified Marsh score. Our model was trained to classify patches of WSIs, and we provided a heuristic to aggregate the patch classification results and make inferences about the WSIs. Our model was tested on  crops derived from an independent test set of WSIs from CD patients. It achieved an AUC greater than  in all classes, and at the WSI level it correctly classified all WSIs. Validation results were highly promising and showed that our model has great potential to support pathologists in their CD severity decisions based on histological assessment. We also used the Grad-CAM approach to obtain visual explanations of microscopic features predictive of CD severity. These heat-maps were broadly categorized into 10 groups: goblet cells, inflammatory cells in the lamina propria, crypt epithelium, fibrotic inflammatory debris, surface enterocytes, Paneth cells, Brunner’s glands, neuroendocrine cells, villus edge, and apposed enterocytes.

Despite achieving promising results, this study has a number of limitations. Firstly, healthy cases were not included in this study; this is an avenue for future work. In addition, all biopsy images used in this study were collected from a single medical center and scanned with the same equipment, thus our data may not be representative of the entire range of histopathologic patterns in patients worldwide. Furthermore, the target image for stain normalization was selected manually based on the opinion of a pathologist. Selecting a different image as the target could affect the appearance of the stain-normalized images; it is known that some variability exists in this selection, which is then propagated through the framework. Finally, in this study we applied a single method of stain normalization, and the use of other methods may lead to different results. Investigating the effect of different stain normalization techniques is therefore another potential area of future work.

Acknowledgements

This research was initially supported by an Engineering in Medicine SEED Grant from the University of Virginia  and the University of Virginia Translational Health Research Institute of Virginia () Mentored Career Development Award . Research reported in this publication was supported by [National Institute of Diabetes and Digestive and Kidney Diseases] of the National Institutes of Health under award number . The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. (2016) Tensorflow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Cited by: §VI-D.
  • [2] M. Al Boni, S. Syed, A. Ali, S. R. Moore, and D. E. Brown (2019) Duodenal biopsies classification and understanding using convolutionalneural networks. American Medical Informatics Association. Cited by: §I.
  • [3] D. Aloise, A. Deshpande, P. Hansen, and P. Popat (2009) NP-hardness of euclidean sum-of-squares clustering. Machine learning 75 (2), pp. 245–248. Cited by: §IV-B.
  • [4] F. Chollet et al. (2015) Keras: deep learning library for theano and tensorflow. URL: https://keras.io. Cited by: §VI-D.
  • [5] H. Chougrad, H. Zouaki, and O. Alheyane (2018) Deep convolutional neural networks for breast cancer screening. Computer methods and programs in biomedicine 157, pp. 19–30. Cited by: §I.
  • [6] G. R. Corazza, V. Villanacci, C. Zambelli, M. Milione, O. Luinetti, C. Vindigni, C. Chioda, L. Albarello, D. Bartolini, and F. Donato (2007) Comparison of the interobserver reproducibility with different histologic criteria used in celiac disease. Clinical Gastroenterology and Hepatology 5 (7), pp. 838–843. Cited by: §I.
  • [7] A. Fasano, I. Berti, T. Gerarduzzi, T. Not, R. B. Colletti, S. Drago, Y. Elitsur, P. H. Green, S. Guandalini, I. D. Hill, et al. (2003) Prevalence of celiac disease in at-risk and not-at-risk groups in the united states: a large multicenter study. Archives of internal medicine 163 (3), pp. 286–292. Cited by: §I.
  • [8] A. Fasano and C. Catassi (2001) Current approaches to diagnosis and treatment of celiac disease: an evolving spectrum. Gastroenterology 120 (3), pp. 636–651. Cited by: Fig. 1.
  • [9] Z. Gandomkar, P. C. Brennan, and C. Mello-Thoms (2018) MuDeRN: multi-category classification of breast histopathological image using deep residual networks. Artificial intelligence in medicine 88, pp. 14–24. Cited by: §I, §V-A.
  • [10] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio (2016) Deep learning. Vol. 1, MIT press Cambridge. Cited by: §IV-B.
  • [11] M. Guillaumin and V. Ferrari (2012) Large-scale knowledge transfer for object localization in imagenet. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3202–3209. Cited by: §V-A.
  • [12] V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, et al. (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama 316 (22), pp. 2402–2410. Cited by: §I.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §V-A.
  • [14] M. Heidarysafa, K. Kowsari, D. E. Brown, K. J. Meimandi, and L. E. Barnes (2018) An improvement of data classification using random multimodel deep learning (rmdl). arXiv preprint arXiv:1808.08121. Cited by: §IV-A.
  • [15] L. Hou, D. Samaras, T. M. Kurc, Y. Gao, J. E. Davis, and J. H. Saltz (2016) Patch-based convolutional neural network for whole slide tissue image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2424–2433. Cited by: §IV-A.
  • [16] Z. Hu, J. Tang, Z. Wang, K. Zhang, L. Zhang, and Q. Sun (2018) Deep learning for image-based cancer detection and diagnosis- a survey. Pattern Recognition 83, pp. 134–149. Cited by: §IV-A.
  • [17] A. K. Jain (2010) Data clustering: 50 years beyond k-means. Pattern recognition letters 31 (8), pp. 651–666. Cited by: §IV-B.
  • [18] B. Korbar, A. M. Olofson, A. P. Miraflor, C. M. Nicka, M. A. Suriawinata, L. Torresani, A. A. Suriawinata, and S. Hassanpour (2017) Deep learning for classification of colorectal polyps on whole-slide images. Journal of pathology informatics 8. Cited by: §I.
  • [19] K. Kowsari, M. Heidarysafa, D. E. Brown, K. J. Meimandi, and L. E. Barnes (2018) Rmdl: random multimodel deep learning for classification. In Proceedings of the 2nd International Conference on Information System and Data Mining, pp. 19–28. Cited by: §IV-A.
  • [20] K. Kowsari, R. Sali, M. N. Khan, W. Adorno, S. A. Ali, S. R. Moore, B. C. Amadi, P. Kelly, S. Syed, and D. E. Brown (2019) Diagnosis of celiac disease and environmental enteropathy on biopsy images using color balancing on convolutional neural networks. External Links: 1904.05773 Cited by: §I.
  • [21] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez (2017) A survey on deep learning in medical image analysis. Medical image analysis 42, pp. 60–88. Cited by: §I.
  • [22] N. H. Motlagh, M. Jannesary, H. Aboulkheyr, P. Khosravi, O. Elemento, M. Totonchi, and I. Hajirasouliha (2018) Breast cancer histopathological image classification: a deep learning approach. bioRxiv, pp. 242818. Cited by: §I.
  • [23] G. Oberhuber, G. Granditsch, and H. Vogelsang (1999) The histopathology of coeliac disease: time for a standardized report scheme for pathologists.. European journal of gastroenterology & hepatology 11 (10), pp. 1185–1194. Cited by: §VI-C.
  • [24] I. Parzanese, D. Qehajaj, F. Patrinicola, M. Aralica, M. Chiriva-Internati, S. Stifter, L. Elli, and F. Grizzi (2017) Celiac disease: from pathophysiology to treatment. World journal of gastrointestinal pathophysiology 8 (2), pp. 27. Cited by: §I.
  • [25] A. Rakhlin, A. Shvets, V. Iglovikov, and A. A. Kalinin (2018) Deep convolutional neural networks for breast cancer histology image analysis. In International Conference Image Analysis and Recognition, pp. 737–744. Cited by: §I.
  • [26] A. J. Schaumberg, M. A. Rubin, and T. J. Fuchs (2018) H&E-stained whole slide image deep learning predicts spop mutation state in prostate cancer. BioRxiv, pp. 064279. Cited by: §I.
  • [27] A. Vahadane, T. Peng, A. Sethi, S. Albarqouni, L. Wang, M. Baust, K. Steiger, A. M. Schlitter, I. Esposito, and N. Navab (2016) Structure-preserving color normalization and sparse stain separation for histological images. IEEE transactions on medical imaging 35 (8), pp. 1962–1971. Cited by: Fig. 2, §IV-C, §IV-C.
  • [28] W. Wang, Y. Huang, Y. Wang, and L. Wang (2014) Generalized autoencoder: a neural network framework for dimensionality reduction. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 490–497. Cited by: §IV-B.
  • [29] J. W. Wei, J. W. Wei, C. R. Jackson, B. Ren, A. A. Suriawinata, and S. Hassanpour (2019) Automated detection of celiac disease on duodenal biopsy slides: a deep learning approach. arXiv preprint arXiv:1901.11447. Cited by: §I.
  • [30] H. Wen, J. Shi, W. Chen, and Z. Liu (2018) Deep residual network predicts cortical representation and organization of visual features for rapid categorization. Scientific reports 8 (1), pp. 3752. Cited by: §V-A.