Projectron – A Shallow and Interpretable Network for Classifying Medical Images

Aditya Sriram, Shivam Kalra, H.R. Tizhoosh
Kimia Lab, University of Waterloo, Ontario, Canada; kimia.uwaterloo.ca
Vector Institute, Toronto, Canada
{asriram, s6kalra, tizhoosh}@uwaterloo.ca
Abstract

This paper introduces the “Projectron”, a new neural network architecture that uses Radon projections to both classify and represent medical images. The motivation is to build shallow networks that are more interpretable in the medical imaging domain. The Radon transform is an established technique that can reconstruct images from parallel projections. The Projectron first applies the global Radon transform to each image using equidistant angles, then feeds the resulting projections to a single layer of neurons for encoding, followed by a layer of suitable kernels to facilitate a linear separation of the projections. Finally, the Projectron provides the output of the encoding as input to two more layers for the final classification. We validate the Projectron on five publicly available datasets: a general dataset (namely MNIST) and four medical datasets (namely Emphysema, IDC, IRMA, and Pneumonia). The results are encouraging: we compared the Projectron’s performance against MLPs with raw images and with Radon projections as inputs, respectively, and the experiments clearly demonstrate the potential of the proposed Projectron for representing and classifying medical images.

Artificial neural networks, image classification, medical imaging, Radon projections, Projectron.

I Introduction

Computer vision is, among other things, a collection of techniques for extracting features (also called descriptors) from images using handcrafted algorithms. The problem with such approaches is their limited scope: these algorithms are designed to solve a specific task or application, with no inherent capability of adjusting to the characteristics of new (unseen) images. Despite the large number of image descriptors available in the literature, comparing features using some distance metric, or using them for classification, requires careful design and customized configuration. The leap forward is to develop techniques that learn or weigh these features automatically, without manual intervention. Over the years, the Radon transform has gained some traction as an image descriptor in the medical imaging domain. The features it delivers are projections at equi-distant angles that capture the shape of objects and organs. So far, the usage of Radon projections as image features has primarily consisted of handcrafted descriptors for retrieval [1][2][3] or for classification [4][5][6]. These techniques have difficulty generalizing across different image instances for the same application. To overcome this shortcoming, we propose a new network, called the “Projectron”, that learns to represent and classify medical images using Radon projections as input.

The proposed network is comprised of three phases: generating Radon projections, an encoding block, and a classification block (whose weights can also be used as a representation). For every image, multiple Radon projections are calculated at equi-distant angles ranging between 0° and 180°. To ensure that each Radon projection has the same length, the images are re-sized to the same width and height. In the encoding block, the projections are passed to a single layer of neurons. This layer, considered in isolation, is a binary classifier that decides whether the input, represented as a Radon projection vector, belongs to a specific class. Here, the first layer learns the Radon projections by combining a set of weights through the ReLU activation function to linearly classify all projections. A kernel layer follows the first layer in order to transform the neurons’ outputs into a more easily separable space. The last stage of Projectron classification is two fully connected layers, which intuitively carry the densest features prior to the final reduction to the number of classes through a traditional softmax classification scheme.

The motivation for this research is twofold: 1) we would like to design “shallow networks” that are more easily trainable than deep architectures, which require a lot of effort to design, implement, and train, and 2) we would like to work with networks in the medical domain whose results are more interpretable; when a decision is made by the network, it should be possible to “understand” that decision by observing/examining the input, a possibility that is not available when deep features are employed for classification.

In this work, we introduce a new Radon-based neural network, called the “Projectron”, which learns and classifies Radon projections. Five public datasets (MNIST, Emphysema, IDC, IRMA, and Pneumonia) were adopted to evaluate the proposed network. The Projectron is compared against two conventional MLP networks: one with raw images as input, and the other with the same Radon projections computed for the Projectron. The Projectron performs as well as, if not better than, the MLP on the majority of datasets.

II Related Works

Recently, Radon projections have gained some traction in computer vision as an image descriptor [7]. One of the first uses of Radon projections as an image descriptor was proposed by Tizhoosh [1] under the name Radon Barcodes (RBC). The idea was to binarize Radon projections to create a barcode annotation serving as a short feature representation of an image. Validated on the IRMA dataset, consisting of 14,410 radiograph images from 193 different classes, RBC achieved an IRMA error (defined in Eq. 6) of 476.62 using only 4 projections forming a vector of 512 digits. In 2016, Tizhoosh et al. [8] introduced an improved RBC descriptor, called MinMax RBC. The authors apply a smoothing function before capturing the shape of the projections. This enables the detection of all major minima and maxima in a projection profile and allows the profile to be divided into sections for a more meaningful binarization. Subsequently, the authors assign an encoded value of 1 or 0 based on the slope of the projection profile between the sections. With MinMax RBC, the IRMA error dropped to 415.75, and the method was also observed to be computationally much faster. A further improvement of RBC was put forward by Babaie et al. [9], who examined whether a single Radon projection can represent an image, calling the result the Single-Projection Radon Barcode (Sp-RBC). This study found that exploiting a single projection to form a short feature vector does provide acceptable results. To make Sp-RBC more robust, the authors use the outcome of each projection separately and deduce the best match using a local search. Tested on the IRMA dataset, Sp-RBC yields an IRMA error of 356.57. Later, Babaie et al. [2] explored the difference between capturing Radon features in global and local neighborhoods of images. This strategy is particularly useful when the dimensions of the image are too large to process, as in digital pathology (i.e., whole-slide imaging, or WSI) and satellite imagery. The authors developed a descriptor called the Local Radon Descriptor (LRD), which generates a histogram based on Radon projections computed in local neighborhoods across the entire image. This approach was observed to yield higher feature discrimination compared to global Radon projections. The descriptor was validated on the IRMA dataset as well as the INRIA Holidays dataset (consisting of 1,990 images). LRD obtained an IRMA error of 287.77 and an accuracy of 40.02% on the INRIA Holidays dataset, a competitive score against well-established descriptors such as LBP and HoG.

A quasi-learning approach using Radon projections was investigated by Zhu and Tizhoosh [4]. The authors used normalized Radon projections to extract features from raw images, and provided these descriptors as input to Support Vector Machines (SVMs) for classification. In particular, the Radon features are binarized to form Radon barcodes which are then used to tag all images. To retrieve similar images, Radon barcodes are extracted for the query image and a k-nearest-neighbor search is applied to find the best match using the minimum Hamming distance. This approach was observed to correctly identify image classes for instances mistakenly classified by the SVM. Experimental results yield an IRMA error of 294.83. More recently, Sriram et al. [5] performed a study to determine whether Radon projections are a better feature for learning algorithms to generalize from. The authors performed a comparative study between Radon projections, histograms of oriented gradients (HoG), and raw (unprocessed) pixels to determine the best image descriptor for an autoencoder to compress an image with minimal loss. The proposed framework extracted the aforementioned features from images, compressed them using a shallow autoencoder, and passed them on to a Multi-Layer Perceptron (MLP) for classification. The authors observed that Radon features as the input vector to an autoencoder provided the best result. Validated on the IRMA dataset, the proposed framework achieved an IRMA error of 313 (equivalent to 82% accuracy), outperforming all other autoencoding approaches.

Apart from classification, Radon-based descriptors have also been used to narrow the search space for better image retrieval. In 2017, Khatami et al. [10] used a deep CNN to classify radiograph images and obtain a set of “best predicted categories”. To further narrow the query, the Radon transform was adopted in a similarity-based search scheme after obtaining the k nearest neighbors. This approach was observed to be fast while providing improved performance. Later, in 2018, Khatami et al. [6] proposed a two-step approach to shrink the search space. The proposed method used Radon projections as feature vectors for similarity comparison after narrowing the search space with a convolutional neural network (CNN). To obtain a more meaningful Radon feature vector, the authors used the difference between two orthogonal projections for the similarity search. This approach was validated on the IRMA dataset, achieving an IRMA error of 168.05 (approximately 90.30% accuracy) and setting the benchmark on the dataset.

In early 2018, Tizhoosh and Babaie [3] introduced a new dense-sampling descriptor based on Radon projections, called “Encoded Local Projections” (ELP). The authors build this histogram-based descriptor around the highest-gradient angle within each local neighborhood. The angle of the highest gradient captures spatial projection patterns that are more descriptive and meaningful than equi-distant angles. ELP was validated on three public datasets (IRMA, KIMIA Path24, and CT Emphysema), yielding accuracy competitive with other established handcrafted descriptors in several experimental settings. Later in 2018, Sharma et al. [11] studied the ELP descriptor for facial recognition, where ELP was observed to perform better than LBP when used in the same configuration.

Presently, Deep Learning (DL) is the most active research sub-field of Machine Learning (ML). DL models are generally trained end-to-end, which greatly simplifies the training process. The most popular architectures for image classification are Convolutional Neural Networks (CNNs). The first trainable CNN architecture was proposed by LeCun et al. in 1998 [12]. In 2012, AlexNet was developed at the University of Toronto, establishing itself as the state-of-the-art model for image classification at that time [13]. AlexNet achieved a top-5 test error rate of 15.3% on the ImageNet classification challenge [14]. The two major reasons for the success of AlexNet were i) the availability of a large amount of labelled data and ii) accelerated computing using GPUs. Since 2012, CNN architectures have evolved significantly. Preference is given to deeper networks with smaller receptive fields, since the cumulative receptive field grows as the network becomes deeper. An example of such a deep network is the 19-layer model known as VGG-19 or OxfordNet, which won the ImageNet challenge in 2014 [15]. Furthermore, many improvements have been made to the building blocks of CNN architectures, including the inception module [16], which combines features from multi-resolution receptive fields at each layer; the residual block [17], which uses residual learning for feature extraction; and the dense block [18], which extends residual learning to dense connections within layers. The performance of a deep model is highly dependent on the availability of a large amount of quality labeled data, which is a limiting factor in medical image analysis due to the shortage of experts, the subjectivity of medical interpretations, and legal obligations to patients’ privacy [19]. DL models are also harder to adapt to non-conventional problems, including the medical domain, due to their black-box nature [20]. In healthcare, the interpretability, quantitative performance, and run-time performance of an ML technique are equally important.

The motivation of this work is to employ Radon projections within a shallow neural topology to classify medical images. This should not only increase the interpretability of the network but also make design, training, and inference more practical and efficient.

III Projectron

The Projectron is a shallow neural network that uses Radon projections as input. We call the proposed architecture a “Projectron” because, as a neural automaton, it uses projections (the name has appeared once before in the literature [21], but as it describes an algorithm rather than a neural network, re-purposing it for this work seems justified). The Projectron, much like an MLP, is a supervised learner whose classification accuracy depends on the input features: higher feature discrimination between classes usually results in higher classification accuracy [22]. In recent years, several descriptors have been introduced that have been shown to complement learning techniques [23][24]; one family of descriptors that has gained traction in the medical imaging domain is Radon projections [7].

The proposed network takes multiple global Radon projections and pushes them through an encoding stage and an MLP with two layers. Using global projections should contribute to increased interpretability compared to “local” deep features. Projectron learning is comprised of three phases: (i) applying the Radon transform to provide parallel projections as image descriptors, (ii) encoding the projections using one layer of neurons followed by a kernel layer of radial basis functions (RBFs), and (iii) a shallow MLP with two layers for representation and classification. The following subsections cover each of these phases in more detail. Fig. 1 provides a pictorial representation of the Projectron network.

Fig. 1: Projectron – Classifying global projections. A small number of projection angles are used to generate parallel projections. The encoding stage weights the projection values and pushes pairs of neuron outputs through RBF kernels. An MLP classifies the input image by using the output of the encoding stage.

III-A Projection Stage: Projecting the Image

The Radon transform, introduced by J. Radon in 1917 [25], provides a profile that is a set of 1-dimensional projections of an object (Fig. 1). The obtained projections may be used to reconstruct the object in image space, a technique called the inverse Radon transform (i.e., filtered backprojection) [26]. Over the years, the Radon transform has been adopted across various applications such as reconstructing images from computed axial tomography scans, barcode scanners, and computer vision [26]. Given an image function $f(x,y)$, one can project $f$ along a number of projection angles $\theta$. Each Radon projection is essentially an integral transform of $f$, a summation of its values along the lines constituted by each angle $\theta$ [27]. These projections are assembled into the sinogram $R(\rho,\theta)$. Hence, using the Dirac delta function $\delta(\cdot)$, the Radon transform of a two-dimensional image $f(x,y)$ can be defined as the integral along a line inclined at an angle $\theta$ and at a distance $\rho$ from the origin [28] (see Fig. 2):

$$R(\rho,\theta) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(\rho - x\cos\theta - y\sin\theta)\,dx\,dy, \qquad (1)$$

where $\rho = x\cos\theta + y\sin\theta$.

Fig. 2: Computing Radon projections within an image window.

For the Projectron, Radon projections are computed for every image from 0° to 180° with an empirically chosen angle step of 15°, providing a total of 12 projections per image. Each image is gray-scaled and re-sized to the same dimensions so that the projection length is the same across all angles (see Algorithm 1).

Input: Query image I, image dimension n, projection angles Θ
Output: Radon image descriptor R of shape (number of projection angles × projection length)
Procedure getFeatures(I, n, Θ)
      1. Re-size I to n × n
      2. Convert I to grayscale
      3. For each angle θ ∈ Θ (one equi-distant angle between two consecutive projections), compute the Radon projection of I
      4. Return R
Algorithm 1 Projecting images
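
A minimal Python sketch of this projection stage is given below, using scikit-image’s radon implementation; the 12-angle grid and the n × n re-size follow Algorithm 1, while the function name, the default n = 64, and the other identifiers are our own illustration:

    import numpy as np
    from skimage.color import rgb2gray
    from skimage.transform import radon, resize

    def get_features(image, n=64, step=15):
        """Sketch of Algorithm 1: compute the global Radon descriptor.

        Returns an array of shape (projection_length, num_angles),
        one parallel projection per equi-distant angle in [0, 180).
        """
        if image.ndim == 3:                # convert to grayscale if needed
            image = rgb2gray(image)
        image = resize(image, (n, n))      # equal size => equal projection length
        angles = np.arange(0, 180, step)   # 0, 15, ..., 165 -> 12 angles
        # circle=False pads the image so every projection spans the diagonal
        return radon(image, theta=angles, circle=False)

Flattening the returned sinogram yields the 1-D feature vector consumed by the network in Algorithm 2.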

III-B Encoding Stage: Kernelizing Weighted Projections

In simple terms, the encoding block is responsible for learning the relationship between the projections using a layer of neurons followed by a kernel layer. After extracting the projections for each equi-distant angle $\theta$, these projections are provided to a first layer of neurons which encodes the inputs by adjusting the synaptic weights $\mathbf{w}$ and bias $b$ under an activation function $\sigma$. Hence, given an input vector $\mathbf{x}$, each neuron's transformation is defined by $y = \sigma(\mathbf{w}^{T}\mathbf{x} + b)$. The weights and bias are adjusted using the ReLU activation function [29]. Introduced in 2000 by Hahnloser et al. [30][31], the Rectified Linear Unit (ReLU) is an established activation function found to accelerate the convergence of the stochastic gradient descent algorithm when compared to the sigmoid and tanh functions [13]. The activation is simply thresholded at zero, outputting either 0 or a positive value for every input to a neuron: $f(x) = \max(0, x)$.
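
For clarity, this per-layer transformation amounts to a single matrix operation applied to a batch of inputs; a tiny NumPy sketch (all names illustrative):

    import numpy as np

    def encode(X, W, b):
        """First encoding layer: y = ReLU(W x + b) for each row of the batch X."""
        return np.maximum(0.0, X @ W.T + b)   # ReLU thresholds the output at zero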

The outputs of the first layer in the encoding stage are passed to a layer of RBF kernels to increase linear separability, as we intend to keep the Projectron rather shallow. To better capture the variations of each projection, we use the Gaussian basis function centered on each vector and treat the kernel width $\gamma$ as a dynamic variable; $\gamma$ is thus adjusted to minimize the overall classification error. The RBF takes two inputs, one of which determines the center (mean) of the function, to produce the desired output value. An RBF is a real-valued function whose value depends only on the distance from the origin, so that $\varphi(\mathbf{x}) = \varphi(\lVert\mathbf{x}\rVert)$. For our purposes, the RBF value depends on the distance from some other point $\mathbf{c}$, so that

$$\varphi(\mathbf{x}, \mathbf{c}) = \varphi(\lVert\mathbf{x} - \mathbf{c}\rVert). \qquad (2)$$

Sums of RBFs are typically used to approximate given functions. This approximation process can also be interpreted as a simple type of neural network, which we place after the first layer. The RBF approximations are of the form

$$y(\mathbf{x}) = \sum_{i=1}^{N} w_i\, e^{-\gamma\,\lVert\mathbf{x} - \mathbf{c}_i\rVert^{2}}, \qquad (3)$$

where $\gamma$ is a dynamic variable that we learn. The approximating function $y(\mathbf{x})$ is the sum of $N$ Gaussian RBFs, each associated with a different center $\mathbf{c}_i$ and weighted by a coefficient $w_i$. To learn the variable $\gamma$, every other encoding neuron output is provided as an input pair to the RBF kernel. The RBF layer computes the distance between the paired inputs to enhance the discrimination available to the classification block that follows.
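
Under one plausible reading of this pairing scheme (consecutive neuron outputs of the encoding layer form the pairs; the exact pairing, like every identifier below, is our assumption), the kernel layer amounts to a few lines of NumPy:

    import numpy as np

    def rbf_pair_layer(h, gamma):
        """Gaussian RBF over pairs of encoding-layer outputs.

        h     : array of shape (batch, 2k), outputs of the first (ReLU) layer
        gamma : the learned width of the Gaussian kernel
        Returns an array of shape (batch, k) with one kernel response
        exp(-gamma * (a - b)^2) per consecutive pair of neuron outputs.
        """
        a, b = h[:, 0::2], h[:, 1::2]          # split the outputs into pairs
        return np.exp(-gamma * (a - b) ** 2)   # Gaussian of the pairwise distance

In training, gamma would be updated by backpropagation together with the other weights, which is what makes the kernel width a “dynamic variable”.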

III-C Classification Stage: Shallow MLP

The RBF layer is connected to a shallow Multi-Layer Perceptron (MLP) [32] to perform the classification. The MLP is among the most useful types of neural networks, with an ability to learn a representation of the data and relate it to the output, increasing the overall accuracy. In this case, the MLP’s input layer is the RBF-layer output from the encoding stage, followed by a single hidden layer with ReLU activation functions. The last layer is the classification layer, which uses the softmax function to calculate probability distributions. In particular, the softmax function derives a probability for each class and every image; the class with the highest probability (i.e., closest to 1) is taken as the predicted label. The weights of the shallow MLP may also be employed as a “representation” of the input image for other purposes (e.g., image search). All steps to train the Projectron are described in Algorithm 2.

Input: Radon descriptor R of shape (number of angles × projection length), epochs E, batch size B, true labels y per image
Output: Classification label ŷ
Procedure Projectron(R, E, B, y)
      1. Flatten the Radon descriptor into a 1-D vector
      2. Get the total number of classes
      /* Define placeholders for the perceptron */
      3. Define the input features as a placeholder
      4. Define the classes as a placeholder
      /* First learning layer, the perceptron */
      5. Perceptron layer with ReLU activation function
      /* Define gamma for the RBF distribution */
      6. Declare γ as a dynamic variable for every other projection
      7. Provide the perceptron output, along with γ, to the RBF layer as input
      /* Multi-layer perceptron */
      8. Provide the RBF output as input to the MLP
      9. Compress the input layer to half its dimension
      10. Compress the previous layer onto the number of classes
      /* Define error and predictions */
      11. Get the error by comparing the predicted labels to the ground truth using softmax classification
      12. Get the predicted outputs
      13. Return ŷ
Algorithm 2 Projectron: Learning projections
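
The placeholder-based wording of Algorithm 2 suggests a TensorFlow implementation; below is a compact Keras sketch of the same pipeline under our assumptions (the pairwise RBF reading from Section III-B, an encoding width equal to the input length, and purely illustrative dimensions in the usage line):

    import tensorflow as tf

    class RBFPairLayer(tf.keras.layers.Layer):
        """Gaussian RBF over consecutive input pairs with a trainable gamma."""
        def build(self, input_shape):
            self.gamma = self.add_weight(name="gamma", shape=(),
                                         initializer="ones", trainable=True)
        def call(self, h):
            a, b = h[:, 0::2], h[:, 1::2]
            return tf.exp(-self.gamma * tf.square(a - b))

    def build_projectron(input_dim, num_classes):
        x_in = tf.keras.Input(shape=(input_dim,))                      # flattened sinogram (step 1)
        h = tf.keras.layers.Dense(input_dim, activation="relu")(x_in)  # encoding layer (step 5)
        k = RBFPairLayer()(h)                                          # kernel layer (steps 6-7)
        m = tf.keras.layers.Dense(k.shape[-1] // 2,
                                  activation="relu")(k)                # halved hidden layer (step 9)
        out = tf.keras.layers.Dense(num_classes,
                                    activation="softmax")(m)           # class probabilities (step 10)
        return tf.keras.Model(x_in, out)

    # illustrative usage; input_dim is the flattened sinogram length (must be even for pairing)
    model = build_projectron(input_dim=1104, num_classes=10)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Calling model.fit on the flattened Radon descriptors then realizes steps 11-12, with the softmax cross-entropy handled by the compiled loss.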

IV Experiments

A total of five experiments were conducted on publicly available datasets to evaluate the Projectron against MLPs with raw images and Radon features as input. These datasets include a non-medical dataset (MNIST) and four medical datasets, namely CT Emphysema, Invasive Ductal Carcinoma (IDC), IRMA, and Pneumonia. The following subsections briefly describe these datasets; thereafter, the classification results of the Projectron against the MLPs are reported for each dataset.

IV-A Datasets

IV-A1 MNIST Dataset

The MNIST dataset [33] is among the most popular multi-class image processing datasets and is comprised of several thousand handwritten digits. In particular, there are a total of 70,000 images depicting the digits 0 to 9, which are first pre-processed using min-max normalization. The dataset is pre-distributed into 60,000 images for training and 10,000 images for testing. For training, each image is processed at its original 28×28 resolution.

IV-A2 Emphysema Dataset

For this study, we also used the “Computed Tomography Emphysema” dataset [34], which contains 168 CT patches from 115 high-resolution CT slices. These scans were gathered from 39 patients, and each patch is manually annotated into one of three categories: (i) 59 observations of normal tissue, (ii) 50 observations of centrilobular emphysema (CLE), and (iii) 59 observations of paraseptal emphysema (PSE). The images were re-sized prior to training, and testing used a leave-one-out approach, i.e., a total of 168 models were trained. The accuracy is measured based on the cardinality $|C|$ of correctly classified images:

$$A = \frac{|C|}{168}. \qquad (4)$$

IV-A3 IDC Dataset

Invasive Ductal Carcinoma (IDC) is the most common subtype of all breast cancers detected in histopathology slides. To grade a whole-slide image (WSI), pathologists typically focus on the IDC regions. The dataset, retrieved from the Kaggle website, consists of 162 WSIs in total [35][36]. The slides are from the Hospital of the University of Pennsylvania and The Cancer Institute of New Jersey, and were digitized at 40x magnification. Each slide is broken down into patches, giving 277,524 patches of size 50×50 in total, with 198,738 being IDC-negative and 78,786 diagnosed as IDC-positive. Training and testing were distributed into 114,235 and 50,963 instances, respectively. For training, each patch was re-sized and converted to grayscale. Similar to Emphysema, the number of correctly classified images $|C|$ is compared against the test set $S_{\text{test}}$ for the total accuracy measure:

$$A = \frac{|C|}{|S_{\text{test}}|}. \qquad (5)$$

IV-A4 IRMA Dataset

IRMA is a retrieval dataset of radiography images. This x-ray dataset is comprised of 12,677 training and 1,733 testing images created from clinical cases at the Department of Diagnostic Radiology at RWTH Aachen University. Each image is annotated with an IRMA code comprised of four mono-hierarchical axes: the technical code (T) for the imaging modality, the directional code (D) for the body orientation, the anatomical code (A) for the body region imaged, and the biological code (B) for the biological system examined. The IRMA code is 13 characters long, of the form TTTT-DDD-AAA-BBB, wherein each character can range over {0, 1, …, 9, a, b, …, z} [37]. Retrieval is evaluated by comparing the IRMA code of the retrieved image against that of the ground truth. The IRMA error is defined as [38]

$$E = \sum_{i=1}^{I} \frac{1}{b_i}\,\frac{1}{i}\,\delta\!\left(l_i^{q}, l_i^{r}\right), \qquad (6)$$

where $l^{q}$ is the IRMA code of the query image, $l^{r}$ the IRMA code of the retrieved image, $b_i$ the number of possible states at position $i$, $I$ the number of characters on the axis, and $\delta$ a function for correct/wrong matching. The total error is then defined as follows:

$$E_{\text{total}} = \sum_{m=1}^{N} E_m, \qquad (7)$$

i.e., the error of Eq. (6) accumulated over the four code axes of all $N = 1{,}733$ test images.
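
As a concrete illustration, the per-axis error of Eq. (6) can be transcribed directly into Python; this is a literal sketch of the formula as reconstructed above, and the benchmark’s additional hierarchical conventions (e.g., how positions after a first mismatch or wildcard characters are counted [38]) are deliberately omitted:

    def axis_error(query, retrieved, num_states):
        """Per-axis IRMA error of Eq. (6): sum over positions i (1-indexed)
        of 1/b_i * 1/i * delta(l_i^q, l_i^r), where num_states[i-1] is b_i."""
        error = 0.0
        for i, (q, r) in enumerate(zip(query, retrieved), start=1):
            delta = 0.0 if q == r else 1.0          # correct/wrong matching
            error += delta / (num_states[i - 1] * i)
        return error

    # one axis of a query vs. retrieved code, assuming 10 states per position:
    # the single mismatch at position 4 contributes 1/(10*4) = 0.025
    print(axis_error("1121", "1125", [10, 10, 10, 10]))

Note how later positions are penalized less (the 1/i factor), reflecting the hierarchy of the code.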

For training, each image is re-sized to a common resolution.

IV-A5 Pneumonia Dataset

Pneumonia is an infection of the lungs that inflames the air sacs, making it difficult for the patient to breathe. This dataset, retrieved from Kaggle, consists of 5,863 x-ray images classified as either “pneumonia” or “normal” [39]. The chest x-rays were selected from pediatric patients between 1 and 5 years of age at the Guangzhou Women and Children’s Medical Center. Since the images in this dataset vary in size, we re-sized each image to a common resolution. As for the distribution of the dataset, 70% was randomly selected for training and the remaining 30% for testing the Projectron and the MLPs.

IV-B Parameter Setting

The implementation of the Projectron is straightforward. The first step is to gray-scale (if necessary) and re-size all images. For each image, a projection gap of 15° is empirically chosen, generating a total of 12 projections per image: {0°, 15°, …, 165°}. The length of each projection generally equals the hypotenuse of the image, with zero padding where necessary. The Radon features of each image are provided to the Projectron, which classifies images using ReLU activations and the Adam optimizer. Since Radon features are a global representation of the image, the run time of the Projectron is much shorter than that of an MLP with raw images as input. To avoid overfitting, the accuracy and loss are calculated per epoch, and training is terminated once the loss per epoch stops improving. For the sake of comparison, we also train and test conventional MLPs with raw images as input; the re-sized images computed for the Projectron are used as input to these MLPs. In terms of architecture, the MLP is a shallow network with one or two hidden layers followed by softmax classification. Not only does the MLP with raw images take longer to train, it also provides comparable or even lower accuracy on most of the reported datasets. Finally, an MLP with Radon features as input was also tested and observed to be a competitor to the Projectron; this empirically confirms one of our assumptions, namely that using projections instead of raw data is useful when shallow architectures are preferred. In this case, the inputs to the MLP are exactly the same Radon features provided to the Projectron, which resulted in better accuracy than the MLP with raw images. However, the Projectron yields better results on the IDC, IRMA, and Pneumonia datasets when compared against both of these approaches; on the MNIST and CT Emphysema datasets, the Projectron yields a lower yet comparable accuracy.

IV-C Results

Table I shows the results for all five datasets, where the reported values are the testing accuracy and the total number of trainable parameters $|\theta|$. For all datasets, the images were gray-scaled and re-sized. In addition, early stopping with a patience of three epochs was adopted to avoid overfitting for each approach across all datasets.
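
This stopping rule maps directly onto a standard Keras callback; a minimal sketch, assuming validation loss is the monitored quantity:

    import tensorflow as tf

    # stop when val_loss has not improved for 3 consecutive epochs
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                                  patience=3,
                                                  restore_best_weights=True)
    # model.fit(x_train, y_train, validation_split=0.1,
    #           epochs=100, callbacks=[early_stop])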

It was observed that the MLP with raw images trains the slowest, while the Projectron trains as quickly as the MLP with Radon projections even though it has the highest number of trainable parameters. For instance, on the MNIST dataset each image is kept at its original 28×28 resolution, yielding a total of 1,196,100 trainable parameters for the Projectron, which took roughly 2.5 minutes to train. This is similar to the MLP with Radon projections, which had 117,850 parameters and took a little more than 2 minutes to train. In comparison, a one-hidden-layer MLP with raw images has 311,650 trainable parameters and took more than 4 minutes to train.

For a better comparison, we also constructed a deep MLP with 7 hidden layers to bring its total number of parameters up to a level comparable to that of the Projectron. Keeping the reduction of each layer the same (i.e., halving the length of the input), this deep MLP achieved a testing accuracy similar to that of its shallow counterpart (see Table I).

Overall, each strategy was able to generalize well on the datasets. This experiment shows that a shallow network such as the Projectron can generalize and learn the features as well as a deeper network. Moreover, the global Radon features extracted by the Projectron are more interpretable than deep local features, as observed when comparing classes in the IRMA dataset in Figs. 3 and 4.

Fig. 3: Comparing projections for classified digits can shed light on the reason for a classification. The global nature of projections, and their small number, enables us to understand the rationale behind a classification.
Fig. 4: Global projections (the inputs to the Projectron) are well understood in medical imaging. The mixture and typicality of these projections, their shape, and their relationship to the image content can be easily interpreted by a human expert. For instance, for the lung x-ray (2nd column from left), the two valleys of the blue projection represent the lungs, whereas certain pairs of projections are extremely similar because each of them traverses both lungs. Such relationships can be visualized to assist the human expert in interpreting the results.

The Projectron is observed to outperform both MLP approaches on the IDC, IRMA, and Pneumonia datasets. On these datasets, the MLPs struggle to learn the image features, forming a bias towards one of the classes during classification; this could be due to the variation among the images in each dataset as well as the noise present in the images. For the relatively small CT Emphysema dataset, a leave-one-out approach was adopted. On this dataset, the MLP with Radon projections achieves the best result at 60.12%, followed by the Projectron at 59.00% and the MLP with raw images at 57.73%; each network is able to distinguish the three classes well.

Generally, the Projectron achieves accuracy competitive with, if not better than, conventional MLPs across multiple datasets. Not only is the Projectron a shallow and more explainable neural network, it is also observed to train faster than an MLP with raw images as input and to learn the images better in most cases.

MNIST Dataset (60,000 images for training and 10,000 images for testing)
Method       Accuracy   |θ|
MLP+Raw      97.89%     311,650
MLP+Radon    96.81%     117,850
Projectron   94.75%     1,196,100

CT Emphysema Dataset (168 images for training/testing)
Method       Accuracy   |θ|
MLP+Radon    60.12%     560,212
Projectron   59.00%     6,884,958
MLP+Raw      57.73%     7,397,782

Invasive Ductal Carcinoma (114,235 images for training and 50,963 images for testing)
Method       Accuracy   |θ|
Projectron   78.00%     3,754,918
MLP+Raw      71.35%     3,128,752
MLP+Radon    71.35%     364,232

IRMA Dataset (12,677 images for training and 1,733 images for testing)
Method       Accuracy   |θ|
Projectron   69.00%     38,548,291
MLP+Radon    47.02%     3,187,449
MLP+Raw      41.23%     233,582,490

Pneumonia Dataset (5,216 images for training and 624 images for testing)
Method       Accuracy   |θ|
Projectron   70.03%     33,767,754
MLP+Raw      37.50%     253,158,752
MLP+Radon    37.50%     3,270,404

TABLE I: Results for the Projectron against two shallow MLP networks with raw images and Radon projections as input, including the number of trainable parameters |θ| for each approach.

V Summary and Conclusions

In this paper, a new artificial neural network called the “Projectron” was introduced. The Projectron learns from a small number of Radon projections captured from images. The proposed network is comprised of three phases, namely acquiring projections, encoding projections, and classifying the results with a shallow MLP. For each image, a small number of projections are obtained at an equi-distant angle of 15° between 0° and 180°. These projections are provided to a layer of neurons that weights them. The output is then forwarded to a layer of RBF kernels, which is intended to increase the linear separability between each pair of weighted projection values. Finally, a shallow MLP with only two layers is adopted to classify the encoded features, using ReLU activation in the hidden layer and softmax activation in the output layer. We validated the Projectron on five public datasets. The Projectron was observed to perform better than the MLP approaches on the IDC, IRMA, and Pneumonia datasets; on the MNIST and Emphysema datasets, it performs competitively with the MLP approaches. Across all datasets, the Projectron learns much more quickly than a traditional MLP with either raw images or Radon projections as input. Finally, the Projectron seems to generalize better, which is apparent when examining the loss-per-epoch graphs as well as the classification results. For future work, we would like to explore using Radon projections to extract features for a deeper, Convolutional-Neural-Network-inspired architecture. We would also like to further improve the Radon features by determining the optimal angles at which the projections contain the most relevant features.

The proposed Projectron demonstrates the potential to classify medical images on a par with, or even better than, established MLP networks. The shallowness of the Projectron is certainly beneficial for fast development. Moreover, incorporating global projections does in fact increase the interpretability of the classification of medical images.

References

  • [1] Tizhoosh, H.R.: Barcode annotations for medical image retrieval: A preliminary investigation. In: Image Processing (ICIP), 2015 IEEE International Conference on, IEEE (2015) 818–822
  • [2] Babaie, M., Tizhoosh, H.R., Khatami, A., Shiri, M.: Local radon descriptors for image search. In: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), IEEE (2017) 1–5
  • [3] Tizhoosh, H., Babaie, M.: Representing medical images with encoded local projections. IEEE Transactions on Biomedical Engineering (2018)
  • [4] Zhu, S., Tizhoosh, H.R.: Radon features and barcodes for medical image retrieval via svm. arXiv preprint arXiv:1604.04675 (2016)
  • [5] Sriram, A., Kalra, S., Tizhoosh, H.R., Rahnamayan, S.: Learning autoencoded radon projections. In: 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE (2017) 1–5
  • [6] Khatami, A., Babaie, M., Tizhoosh, H.R., Khosravi, A., Nguyen, T., Nahavandi, S.: A sequential search-space shrinking using cnn transfer learning and a radon projection pool for medical image retrieval. Expert Systems with Applications 100 (2018) 224–233
  • [7] Sanz, J.L., Hinkle, E.B., Jain, A.K.: Radon and projection transform-based computer vision: algorithms, a pipeline architecture, and industrial applications. Volume 16. Springer Science & Business Media (2013)
  • [8] Tizhoosh, H.R., Zhu, S., Lo, H., Chaudhari, V., Mehdi, T.: Minmax radon barcodes for medical image retrieval. In: International Symposium on Visual Computing, Springer (2016) 617–627
  • [9] Babaie, M., Tizhoosh, H.R., Zhu, S., Shiri, M.: Retrieving similar x-ray images from big image data using radon barcodes with single projections. arXiv preprint arXiv:1701.00449 (2017)
  • [10] Khatami, A., Babaie, M., Khosravi, A., Tizhoosh, H.R., Salaken, S.M., Nahavandi, S.: A deep-structural medical image classification for a radon-based image retrieval. In: 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE), IEEE (2017) 1–4
  • [11] Sharma, D., Zafar, S., Babaie, M., Tizhoosh, H.: Facial recognition with encoded local projections. arXiv preprint arXiv:1809.06218 (2018)
  • [12] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11) (1998) 2278–2324
  • [13] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105
  • [14] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, IEEE (2009) 248–255
  • [15] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [16] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2015) 1–9
  • [17] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2016) 770–778
  • [18] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: CVPR. Volume 1(2). (2017)  3
  • [19] Razzak, M.I., Naz, S., Zaib, A.: Deep learning for medical image processing: Overview, challenges and the future. In: Classification in BioApps. Springer (2018) 323–350
  • [20] Miotto, R., Wang, F., Wang, S., Jiang, X., Dudley, J.T.: Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics (2017)
  • [21] Orabona, F., Keshet, J., Caputo, B.: The projectron: a bounded kernel-based perceptron. In: Proceedings of the 25th international conference on Machine learning, ACM (2008) 720–727
  • [22] De Stefano, C., Fontanella, F., Marrocco, C., Schirinzi, G.: A feature selection algorithm for class discrimination improvement. In: Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International, IEEE (2007) 425–428
  • [23] Pawara, P., Okafor, E., Surinta, O., Schomaker, L., Wiering, M.: Comparing local descriptors and bags of visual words to deep convolutional neural networks for plant recognition. In: ICPRAM. (2017) 479–486
  • [24] Driss, S.B., Soua, M., Kachouri, R., Akil, M.: A comparison study between mlp and convolutional neural network models for character recognition. In: Real-Time Image and Video Processing 2017. Volume 10223., International Society for Optics and Photonics (2017) 1022306
  • [25] Radon, J.: On the determination of functions from their integral values along certain manifolds. IEEE transactions on medical imaging 5(4) (1986) 170–176
  • [26] Toft, P.: The Radon transform. Theory and implementation. PhD thesis, Danmarks Tekniske University, Lyngby (1996)
  • [27] Rey, M.T., Tunaley, J.K., Folinsbee, J., Jahans, P.A., Dixon, J., Vant, M.R.: Application of radon transform techniques to wake detection in seasat-a sar images. IEEE Transactions on Geoscience and Remote Sensing 28(4) (1990) 553–560
  • [28] Seo, J.S., Haitsma, J., Kalker, T., Yoo, C.D.: A robust image fingerprinting system using the radon transform. Signal Processing: Image Communication 19(4) (2004) 325–339
  • [29] Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th international conference on machine learning (ICML-10). (2010) 807–814
  • [30] Hahnloser, R.H., Sarpeshkar, R., Mahowald, M.A., Douglas, R.J., Seung, H.S.: Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405(6789) (2000) 947
  • [31] Hahnloser, R.H., Seung, H.S.: Permitted and forbidden sets in symmetric threshold-linear networks. In: Advances in Neural Information Processing Systems. (2001) 217–223
  • [32] El Kessab, B., Daoui, C., Bouikhalene, B., Fakir, M., Moro, K.: Extraction method of handwritten digit recognition tested on the mnist database. International Journal of Advanced Science & Technology 50 (2013) 99–110
  • [33] LeCun, Y.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/ (1998)
  • [34] Sorensen, L., Shaker, S.B., De Bruijne, M.: Quantitative analysis of pulmonary emphysema using local binary patterns. IEEE transactions on medical imaging 29(2) (2010) 559–569
  • [35] Cruz-Roa, A., Basavanhally, A., González, F., Gilmore, H., Feldman, M., Ganesan, S., Shih, N., Tomaszewski, J., Madabhushi, A.: Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In: Medical Imaging 2014: Digital Pathology. Volume 9041., International Society for Optics and Photonics (2014) 904103
  • [36] Janowczyk, A., Madabhushi, A.: Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. Journal of pathology informatics 7 (2016)
  • [37] Lehmann, T.M., Schubert, H., Keysers, D., Kohnen, M., Wein, B.B.: The irma code for unique classification of medical images. In: Medical Imaging 2003: PACS and Integrated Medical Information Systems: Design and Evaluation. Volume 5033., International Society for Optics and Photonics (2003) 440–452
  • [38] Muller, H., Clough, P., Deselaers, T., Caputo, B.: Image-clef: Experimental evaluation in visual information retrieval series. The information retrieval series, Springer (2010)
  • [39] Kermany, D.S., Goldbaum, M., Cai, W., Valentim, C.C., Liang, H., Baxter, S.L., McKeown, A., Yang, G., Wu, X., Yan, F., et al.: Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5) (2018) 1122–1131