A novel machine learning based framework for detection of Autism Spectrum Disorder (ASD)

Abstract

Computer vision and machine learning are linchpins of automation. The medical field has adopted numerous methods for uncovering the root causes of diseases in order to automate their detection. However, the biomarkers of Autism Spectrum Disorder (ASD) are still unknown, let alone the automation of its detection, owing to the dense connectivity of neurological patterns in the brain. Studies from the neuroscience domain have highlighted that the corpus callosum and the intracranial brain volume hold significant information for the detection of ASD. Such results and studies have not been tested and verified by scientists working in the domain of computer vision / machine learning. Thus, in this study we propose a machine learning based framework for automatic detection of ASD using features extracted from the corpus callosum and the intracranial brain volume in the ABIDE dataset. The corpus callosum and intracranial brain volume data are obtained from T1-weighted MRI scans. Our proposed framework first calculates weights for the features extracted from the corpus callosum and intracranial brain volume data. This step ensures that only those features with discriminative capability for robust recognition of ASD are utilized. A conventional machine learning algorithm (conventional refers to algorithms other than deep learning) is then applied to the features that are most significant in terms of their discriminative capability for recognition of ASD. Finally, for benchmarking and to verify the potential of deep learning for analyzing neuroimaging data, i.e. T1-weighted MRI scans, we experimented with a state-of-the-art deep learning architecture, VGG16, using a transfer learning approach to adapt the already trained VGG16 model for detection of ASD. This is done to help readers understand the benefits and bottlenecks of using a deep learning approach for analyzing neuroimaging data, which is difficult to record in quantities large enough for deep learning.

Keywords:
ASD; machine learning; corpus callosum; intracranial brain volume; T1-weighted structural brain imaging data; deep learning

1 Introduction

The emerging fields of computer vision and artificial intelligence have come to dominate research and industry in various domains and are now aiming to outstrip human intellect [Sebe et al., 2005]. With computer vision and machine learning techniques, unceasing advances have been made in areas such as imaging [Kak and Slaney, 1988], biometric systems [Munir and Khan, 2019], computational biology [Zhang, 2002], video processing [Van den Branden Lambrecht, 2013], affect analysis [Khan et al., 2013, 2019b], medical diagnostics [Akram et al., 2013] and much more. Despite all these advances, however, neuroscience is one of the areas in which machine learning is minimally applied, due to the complex nature of the data. This article proposes a framework for automatic identification of Autism Spectrum Disorder (ASD) [Jaliaawala and Khan, 2019] by applying machine learning algorithms to the neuroimaging dataset known as ABIDE (Autism Brain Imaging Data Exchange) [Di Martino et al., 2014].

Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by a lack of social interaction and emotional intelligence and by repetitive, abhorrent, stigmatized and fixated behavior [Choi, 2017, Jaliaawala and Khan, 2019]. This syndrome is not a rare condition, but a spectrum encompassing numerous disabilities. The ICD-10 of the WHO (World Health Organization, 1992) [Organization, 1993] and the DSM-IV of the APA (American Psychiatric Association) [Castillo et al., 2007] outline criteria for defining ASD in terms of social and behavioral characteristics. According to their nomenclature, an individual with ASD shows an abnormal pattern of social interaction, a lack of verbal and non-verbal communication skills and a limited range of interests in specific tasks and activities [Jaliaawala and Khan, 2019]. Based on these behavioral lineaments, ASD is further divided into groups, which are:

  1. High Functioning Autism (HFA) [Baron-Cohen et al., 2001]: HFA is a term applied to people with autistic disorder, who are deemed to be cognitively “higher functioning” (with an IQ of 70 or greater) than other people with autism.

  2. Asperger Syndrome (AS) [Klin et al., 2000]: individuals facing AS have qualitative impairment in social interaction, show restricted repetitive and stereotyped patterns of behavior, interests, and activities. Usually such individuals have no clinically significant general delay in language or cognitive development. Generally, individuals facing AS have higher IQ levels but lack in facial actions and social communication skills.

  3. Attention Deficit Hyperactivity Disorder (ADHD) [Barkley and Murphy, 1998]: individuals with ADHD show impairment in paying attention (inattention). They have overactive behavior (hyperactivity) and sometimes impulsive behavior (acting without thinking).

  4. Psychiatric symptoms [Simonoff et al., 2008], such as anxiety and depression.

Figure 1: MRI scan in different cross-sectional views. The letters A, P, S, I, R and L in the figure represent anterior, posterior, superior, inferior, right and left. The axial / horizontal view divides the MRI scan into superior and inferior (head and tail) portions, the sagittal view divides the scan into left and right portions, and the coronal / vertical view divides the MRI scan into anterior and posterior portions [Schnitzlein and Murtagh, 1985].

Recent population-based statistics show that autism is the fastest-growing neurodevelopmental disability in the United States and the UK [Rice, 2009]. More than 1% of children and adults are diagnosed with autism, and lifetime costs of $2.4 million and $2.2 million for supporting an individual with autism have been reported for the United States and the United Kingdom respectively, as reported by the Centers for Disease Control and Prevention (CDC), USA [Rice, 2009, Buescher et al., 2014]. It is also known that delay in the detection of ASD is associated with an increase in the cost of supporting an individual with ASD [Horlin et al., 2014]. It is therefore of utmost importance for the research community to propose novel solutions for early detection of ASD, and our proposed framework can be used for this purpose.

To date, the biomarkers of ASD remain unknown [Del Valle Rubido et al., 2018, Jaliaawala and Khan, 2019]. Physicians and clinicians practice standardized / conventional methods for ASD analysis and diagnosis: intellectual abilities and behavioral characteristics are assessed for the diagnosis of ASD. However, the synaptic affiliations of ASD are still unknown and present a challenging task for cognitive neuroscience and psychological researchers [Kushki et al., 2013]. A recent hypothesis in neurology suggests that an abnormal trend is associated with different neural regions of the brain in individuals facing ASD [Bourgeron, 2009]. This variational trend is due to irregularities in neural patterns, disassociation and anti-correlation of cognitive function between different regions, which affects the global brain network [Schipul et al., 2011].

Magnetic Resonance Imaging (MRI), a non-invasive technique, has been widely used to study brain regional networks. MRI data can thus be used to reveal subtle variations in neural patterns / networks, which can help in identifying biomarkers for ASD. MRI technology uses radio-frequency pulses and magnetic fields to generate a pictorial representation of particular brain tissue. An example of an MRI scan in different cross-sectional views is shown in Figure 1. MRI scans are further divided into structural MRI (s-MRI) and functional MRI (f-MRI), depending on the type of scanning technique used [Bullmore and Sporns, 2009]. The mapping of the brain network using structural and functional MRI is shown in Figure 2.

Figure 2: Brain network mapping using structural MRI (s-MRI) and functional MRI (f-MRI) techniques [Bullmore and Sporns, 2009].

Structural MRI (s-MRI) scans are used to examine the anatomy and neurology of the brain. s-MRI scans are also employed to measure brain volume, i.e. regional grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) [Giedd, 2004], the volume of its sub-regions, and to identify localized lesions. s-MRI is classified into two sequences, T1-weighted MRI and T2-weighted MRI, where a sequence means a number of radio-frequency pulses and gradients that result in a set of images with a particular appearance [Haacke et al., 2009]. These sequences depend on the values of the scanning parameters Repetition Time (TR) and Echo Time (TE). The TR and TE parameters are used to control the image contrast and the weighting of the MRI image [Rutherford and Bydder, 2002]. T1-weighted scans are produced with short TE and short TR; conversely, T2-weighted scans have long TE and long TR parameter values. The bright and dark regions in scans are primarily determined by the T1 and T2 properties of cerebrospinal fluid (CSF), a clear, colorless body fluid present in the brain. Therefore, CSF is dark in T1-weighted scans and appears bright in T2-weighted scans [Budman et al., 1992].

Functional MRI (f-MRI) scans are used to visualize the activated brain regions associated with brain function. f-MRI computes synchronized neural activity through the detection of blood flow variation across different cognitive regions. By using MRI scans, numerous researchers have reported that distinctive brain regions are associated with ASD [Huettel et al., 2004a].

In 2012, the Autism Brain Imaging Data Exchange (ABIDE) provided the scientific community with an “open source” repository for studying ASD from brain imaging data, i.e. MRI data [Di Martino et al., 2014]. The ABIDE dataset consists of 1112 participants (autism and healthy controls) with rs-fMRI (resting state functional magnetic resonance imaging) data. rs-fMRI is a type of f-MRI data captured in a resting or task-negative state [Plitt et al., 2015, Smith et al., 2009]. ABIDE also provides anatomical scans and phenotypic data, i.e. clinical information such as age, sex and ethnicity [Di Martino et al., 2014]. All the details (data collection and preprocessing) related to the ABIDE dataset are presented in Section 3.

In this study, we propose a machine learning based framework for automatic detection of ASD using T1-weighted MRI scans from the ABIDE dataset. T1-weighted MRI data is used because results from T1-weighted MRI data are reported to be highly reproducible [McGuire et al., 2017]. Initially, for automatic detection of ASD, we utilized different conventional machine learning methods (refer to Section 4.2 for details of the machine learning algorithms used in this study). Conventional machine learning methods refer to methods other than the recently popularized deep learning approach. We further improved the results achieved by the conventional machine learning methods by calculating the importance / weights of different features for the given task (Section 4.1 presents the feature selection methodology employed in this study). Features are measurable attributes of the data [Bishop, 2006]. Feature selection methods find the weights / importance of different features by calculating their discriminative ability, thereby improving the prediction performance, computational time and generalization capability of the machine learning algorithm [Chandrashekar and Sahin, 2014]. Results obtained by applying feature selection methods and conventional machine learning methods are discussed in Section 4.3. Finally, to verify the potential of deep learning [LeCun et al., 2015] for analyzing neuroimaging data, we experimented with a state-of-the-art deep learning architecture, VGG16 [Simonyan and Zisserman, 2014]. We used a transfer learning approach [Khan et al., 2019a] to adapt the already trained VGG16 model for detection of ASD. The result obtained using the transfer learning approach is presented in Section 5, which will help readers understand the benefits and bottlenecks of using a deep learning / CNN approach for analyzing neuroimaging data, which is difficult to record in quantities large enough for deep learning. A survey of the related literature is presented in the next section, i.e. Section 2.

In summary, our contributions in this study are:

  1. We showed the potential of machine learning algorithms applied to brain anatomical scans for automatic detection of ASD.

  2. This study demonstrated that feature selection / weighting methods help to achieve better recognition accuracy for detection of ASD.

  3. We also provided automatic ASD detection results using deep learning [LeCun et al., 2015] / Convolutional Neural Networks (CNN) via transfer learning approach. This will help readers to understand benefits and bottlenecks of using deep learning / CNN approach for analyzing neuroimaging data which is difficult to record in large enough quantity for deep learning.

  4. We also highlighted future directions to improve performance of such frameworks for automatic detection of ASD. Thus, such frameworks could perform well not only for published databases but also for real world applications and help clinicians in early detection of ASD.

2 State of the Art

This section discusses various methods that have been explored for the classification of neurodevelopmental disorders. The fusion of artificial intelligence techniques (machine learning and deep learning) with brain imaging data has made it possible to study the representation of semantic categories [Haxby et al., 2001], the meaning of nouns [Buchweitz et al., 2012], learning [Bauer and Just, 2015] and emotions [Kassam et al., 2013]. In general, however, the use of machine learning algorithms to detect psychological and neurodevelopmental ailments, i.e. schizophrenia [Bellak, 1994], autism [Just et al., 2014] and anxiety / depression [Craddock et al., 2009], remains restricted due to the complex nature of the problem. This literature review focuses on the state-of-the-art methods that operate on brain imaging data to discover neurodevelopmental disorders via machine learning approaches.

Craddock et al. [Craddock et al., 2009] used multi-voxel pattern analysis technique for detection of Major Depressive Disorder (MDD) [Greicius et al., 2007]. They have shown results on MRI data gathered from forty subjects i.e. twenty healthy controls and twenty individuals with MDD. Their proposed framework achieved accuracy of 95%.

Just et al. [Just et al., 2014] presented Gaussian Naïve Bayes (GNB) classifiers based approach to identify ASD and control participants using fMRI data. They achieved accuracy of 97% while detecting autism from a population of 34 individuals (17 control and 17 autistic individuals).

One promising study by Sabuncu et al. [Sabuncu et al., 2015] used the Multivariate Pattern Analysis (MVPA) algorithm and structural MRI (s-MRI) data to predict a range of neurodevelopmental disorders, i.e. Alzheimer’s disease, autism and schizophrenia. Sabuncu et al. analyzed structural neuroimaging data from six publicly available sources (https://www.nmr.mgh.harvard.edu/lab/mripredict), with 2800 subjects. The MVPA algorithm comprised three classes of classifiers: Support Vector Machine (SVM) [Vapnik, 2013], Neighborhood Approximation Forest (NAF) [Konukoglu et al., 2012] and Relevance Vector Machine (RVM) [Tipping, 2001]. Sabuncu et al. attained detection accuracies of 70%, 86% and 59% for schizophrenia, Alzheimer’s disease and autism respectively, using a 5-fold validation scheme (refer to Section 4.3 for a discussion of the k-fold cross-validation methodology).

Deep learning models, i.e. DNNs (Deep Neural Networks) [LeCun et al., 2015], hold great potential for clinical / neuroscience / neuroimaging research applications. Plis et al. [Plis et al., 2014] used a Deep Belief Network (DBN) for automatic detection of schizophrenia [Bellak, 1994]. Plis et al. trained a model with three hidden layers (50-50-100 hidden neurons in the first, second and top layer respectively) using T1-weighted structural MRI (s-MRI) imaging data (refer to Section 1 for a discussion of s-MRI data). They analyzed data from four different studies conducted by Johns Hopkins University (JHU), the Maryland Psychiatric Research Center (MPRC), the Institute of Psychiatry, London, UK (IOP), and the Western Psychiatric Institute and Clinic at the University of Pittsburgh (WPIC), with 198 schizophrenia patients and 191 controls, and achieved a classification accuracy of 90%.

Koyamada et al. [Koyamada et al., 2015] showed that DNNs outperform conventional supervised learning methods, i.e. Support Vector Machines (SVM) [Vapnik, 2013], in learning concepts from neuroimaging data. Koyamada et al. investigated brain states from brain activity, using a DNN to classify task-based fMRI data covering seven task categories: emotional response, wagering, language, motor, experiential, interpersonal and working memory. They trained a deep neural network with two hidden layers and achieved an average accuracy of 50.47%.

In another study, Heinsfeld et al. [Heinsfeld et al., 2018] trained a neural network (refer to Section 4.2.4 for a discussion of artificial neural networks and the multilayer perceptron) by transfer learning from two auto-encoders [Vincent et al., 2008]. The transfer learning methodology allows the distributions used in training and testing to be different, and it also allows a neural network to reuse learned neuron weights in different scenarios [Khan et al., 2019a]. The aim of the study by Heinsfeld et al. was to distinguish ASD from healthy controls. The main objective of auto-encoders is to learn from data in an unsupervised way to improve the generalization of a model [Vincent et al., 2010]. For unsupervised pre-training of these two auto-encoders, Heinsfeld et al. utilized rs-fMRI (resting state fMRI) image data from the ABIDE-I dataset. The knowledge, in the form of weights extracted from these two auto-encoders, was mapped to a multilayer perceptron (MLP). Heinsfeld et al. achieved a classification accuracy of up to 70%.

It is important to note that studies combining machine learning with brain imaging data collected from multiple sites, like ABIDE [Di Martino et al., 2014], to identify autism have demonstrated that classification accuracy tends to decrease [Arbabshirani et al., 2017]. We observed the same trend in this study. Nielsen et al. [Nielsen et al., 2013] also discovered the same pattern in the ABIDE dataset and concluded that sites with longer BOLD imaging time have significantly higher classification accuracy. Blood Oxygen Level Dependent (BOLD) imaging is a method used in fMRI to observe active regions through blood flow variation: regions where blood concentration is higher appear more active than other regions [Huettel et al., 2004b].

The studies described above in this section focused on analyzing neuroimaging data, i.e. MRI and fMRI scans, to detect different neurodevelopmental disorders; they did not focus on which brain regions are used to predict psychological disorders. It has been shown that different regions of the brain exhibit subtle variations that differentiate healthy individuals from individuals facing a neurodevelopmental disorder. A quantitative survey using the ABIDE dataset reported that an increase in brain volume and a reduction in corpus callosum [Zaidel and Iacoboni, 2003] area were found in participants with ASD. The corpus callosum has a central function in integrating information and mediating behaviors [Hinkley et al., 2012]. It consists of approximately 200 million fibers of varying diameters and is the largest inter-hemispheric connection of the human brain [Tomasch, 1954].

Hiess et al. [Hiess et al., 2015] concluded that, although there was no significant difference in the corpus callosum sub-regions between ASD and control participants, individuals facing ASD had increased intracranial volume. Intracranial volume (ICV) is used as an estimate of the size of the brain and of brain regions / volumetric analysis [Nordenskjöld et al., 2013]. Waiter et al. [Waiter et al., 2005] reported a reduction in the size of the splenium and isthmus, and Chung et al. [Chung et al., 2004] also found a diminution in the area of the splenium, genu and rostrum of the corpus callosum in ASD. The splenium, isthmus, genu and rostrum are regional subdivisions of the corpus callosum based on the studies of Witelson [Witelson, 1989] and Venkatasubramanian et al. [Venkatasubramanian et al., 2007]. Refer to Figure 3 for a pictorial representation of the segmented sub-regions of the corpus callosum. The motivation for using subdivisions of the corpus callosum and the intracranial brain volume as the feature vector (refer to Section 4.1 for a discussion of the feature vector) in this study comes from the fact that, in the reviewed literature, these regions are usually considered important for detection of ASD.

The next section presents all the details related to the ABIDE database and also explains the preprocessing procedure.

3 Database

This study is performed using structural MRI (s-MRI) scans from the Autism Brain Imaging Data Exchange (ABIDE-I) dataset (http://fcon_1000.projects.nitrc.org/indi/abide/abide_I.html). ABIDE is an online data sharing consortium that provides imaging data of ASD and control participants together with their phenotypic information [Di Martino et al., 2014]. The ABIDE-I dataset was collected at 17 international sites and comprises a total of 1112 subjects or samples (539 autism cases and 573 healthy control participants). In accordance with Health Insurance Portability and Accountability Act (HIPAA) [Act, 1996] guidelines, the identity of the individuals who participated in the ABIDE database recording was not disclosed. Table 1 shows the image acquisition parameters of the structural MRI (s-MRI) scans for each site in the ABIDE study.

We used the same features as used in the study of Hiess et al. [Hiess et al., 2015]. Next, we explain the preprocessing performed by Hiess et al. on the T1-weighted MRI scans from the ABIDE dataset to calculate the different parameters and regions of the corpus callosum and the brain volume.

| Site | Typical controls (m/f) | Autism Spectrum Disorder (m/f) | Image acquisition | Make / model | Voxel size | Flip angle (deg) | TR (ms) | TE (ms) | T1 (ms) | Bandwidth (Hz/Px) |
|---|---|---|---|---|---|---|---|---|---|---|
| CALTECH | 15/4 | 15/4 | 3D MPRAGE | Siemens Magnetom (Trio Trim) | 1 | 10 | 1590 | 2.73 | 800 | 200 |
| CMU | 10/3 | 11/3 | 3D MPRAGE | Siemens Magnetom (Verio) | 1 | 8 | 1870 | 2.48 | 1100 | 170 |
| KKI | 25/8 | 18/4 | 3D FFE | Philips (Achieva) | 1 | 8 | 8 | 3.7 | 843 | 191.5 |
| MAXMUN | 29/4 | 21/3 | 3D MPRAGE | Siemens Magnetom (Verio) | 1 | 9 | 1800 | 3.06 | 900 | 230 |
| NYU | 79/26 | 68/11 | 3D MPRAGE | Siemens Magnetom (Allegra) | | 7 | 2530 | 3.25 | 1100 | 200 |
| OLIN | 13/3 | 18/2 | 3D MPRAGE | Siemens Magnetom (Allegra) | 1 | 8 | 2500 | 2.74 | 900 | 190 |
| OHSU | 15/0 | 15/0 | 3D MPRAGE | Siemens Magnetom (Trio Trim) | 1 | 10 | 2300 | 3.58 | 900 | 180 |
| SDSU | 16/6 | 13/1 | 3D SPGR | GE (MR750) | 1 | 45 | 11.08 | 4.3 | NA | NA |
| SBL | 15/0 | 15/0 | 3D FFE | Philips (Intera) | 1 | 8 | 9 | 3.5 | 1000 | 191.5 |
| STANFORD | 16/4 | 16/4 | 3D SPGR | GE (Signa) | | 15 | 8.4 | 1.8 | NA | NA |
| TRINITY | 25/0 | 24/0 | 3D FFE | Philips (Achieva) | 1 | 8 | 8.5 | 3.9 | 1060.17 | 178.7 |
| UCLA_1 | 29/4 | 42/7 | 3D MPRAGE | Siemens Magnetom (Trio Trim) | | 9 | 2300 | 2.84 | 853 | 240 |
| UCLA_2 | 12/2 | 13/0 | 3D MPRAGE | Siemens Magnetom (Trio Trim) | | 9 | 2300 | 2.84 | 853 | 240 |
| LEUVEN_1 | 15/0 | 14/0 | 3D FFE | Philips (Intera) | | 8 | 9.6 | 4.6 | 885.145 | 135.4 |
| LEUVEN_2 | 15/5 | 12/3 | 3D FFE | Philips (Intera) | | 8 | 9.6 | 4.6 | 885.145 | 135.4 |
| UM_1 | 38/17 | 46/9 | 3D SPGR | GE (Signa) | | 15 | 250 | 1.8 | 500 | 15.63 |
| UM_2 | 21/1 | 12/1 | 3D SPGR | GE (Signa) | | 15 | 250 | 1.8 | 500 | 15.63 |
| PITT | 23/4 | 26/4 | 3D MPRAGE | Siemens Magnetom (Allegra) | | 7 | 2100 | 3.93 | 1000 | 130 |
| USM | 43/0 | 58/0 | 3D MPRAGE | Siemens Magnetom (Trio Trim) | | 9 | 2300 | 2.91 | 900 | 240 |
| YALE | 20/8 | 20/8 | 3D MPRAGE | Siemens Magnetom (Trio Trim) | 1 | 9 | 1230 | 1.73 | 624 | 320 |
CALTECH: California Institute of Technology
CMU: Carnegie Mellon University
KKI: Kennedy Krieger Institute, Baltimore
MAXMUN: Ludwig Maximilians University, Munich
NYU: NYU Langone Medical Center, New York
OLIN: Olin, Institute of Living, Hartford Hospital
OHSU: Oregon Health and Science University
SDSU: San Diego State University
SBL: Social Brain Lab BCN NIC UMC Groningen and Netherlands Institute for Neurosciences
STANFORD: Stanford University
TRINITY: Trinity Centre for Health Sciences
UCLA: University of California, Los Angeles
LEUVEN: University of Leuven
UM: University of Michigan
PITT: University of Pittsburgh School of Medicine
USM: University of Utah School of Medicine
YALE: Child Study Centre, Yale University
Table 1: Structural MRI acquisition parameters for each site in the ABIDE database [Hiess et al., 2015]

3.1 Preprocessing

Figure 3: An example of corpus callosum area segmentation. The figure shows example data for an individual facing ASD in the ABIDE study. Panel A shows the 3D volumetric T1-weighted MRI scan. Panel B shows the segmentation of the corpus callosum in red. Panel C shows the further division of the corpus callosum according to the Witelson scheme [Witelson, 1989]. The regions W1 (rostrum), W2 (genu), W3 (anterior body), W4 (mid-body), W5 (posterior body), W6 (isthmus) and W7 (splenium) are shown in red, orange, yellow, green, blue, purple and light purple [Hiess et al., 2015].

The corpus callosum area, its sub-regions and the intracranial volume were calculated using different software packages. These packages are:

  1. yuki [Ardekani, 2013]

  2. itksnap [Yushkevich et al., 2006]

The corpus callosum has a central function in integrating information and mediating behaviors [Hinkley et al., 2012]; it consists of approximately 200 million fibers of varying diameters and is the largest inter-hemispheric connection of the human brain [Tomasch, 1954]. Intracranial volume (ICV), on the other hand, is used as an estimate of the size of the brain and of brain regions / volumetric analysis [Nordenskjöld et al., 2013].

The corpus callosum area of each participant was segmented using the “yuki” software [Ardekani, 2013]. The corpus callosum was automatically divided into its sub-regions using the Witelson scheme [Witelson, 1989]. An example of corpus callosum segmentation is shown in Figure 3. Each segmentation was inspected visually and corrected where necessary using the “ITK-SNAP” software package [Yushkevich et al., 2006]. The inspection and correction procedure was performed by two readers. Because of the minor manual corrections to the corpus callosum segmentation for some MRI scans, statistical equivalence analysis and intra-class correlation were calculated for the corpus callosum area measured by both readers.

The total intracranial brain volume [Malone et al., 2015] of each participant was measured using the software tool “brainwash”. The “Automatic Registration Toolbox” (www.nitrc.org/projects/art), a feature of brainwash, was used to extract the intracranial brain volume. The brainwash method uses a non-linear transformation to estimate intracranial regions by mapping co-registered labels (pre-labeled intracranial regions) onto the participant’s MRI scan. A voxel-voting scheme [Manjón and Coupé, 2016] is used to classify each voxel in the participant’s MRI as intracranial or not. Each brain segmentation was visually inspected to ensure accurate segmentation. For cases where the segmentation was not performed accurately, the following additional steps were taken:

  1. In some cases where brain segmentation was not achieved correctly, the brainwash method was executed again using a preprocessed MRI scan from the same site that had an error-free brain segmentation.

  2. The brainwash software automatically identifies the coordinates of the anterior and posterior commissures. In some cases these points were not correctly identified; in such cases, they were identified manually and entered into the software.

  3. A “region-based snakes” feature implemented in the “ITK-SNAP” [Yushkevich et al., 2006] software package was used to manually correct minor intracranial volume segmentation errors.

Figure 4: Schematic overview of proposed framework

Figure 4 shows how the T1-weighted MRI scans are transformed into a feature matrix of M x N dimensions, where M denotes the total number of samples and N denotes the total number of features in the feature vector. Features are measurable attributes of the data [Bishop, 2006].

4 Experiments and results: conventional machine learning classification methods

In every machine learning problem, before the application of any machine learning method, the selection of a useful set of features, or feature vector, is an important task. Optimal features extracted from a dataset minimize within-class variations (ASD vs. control individuals) while maximizing between-class variations [Khan, 2013]. Feature selection techniques are used to find optimal features by removing redundant or irrelevant features for a given task. The next subsection, Section 4.1, presents the evaluated feature selection methods. Section 4.2 discusses the conventional machine learning methods used in this study, where conventional machine learning methods refer to methods other than the recently popularized deep learning approach. Results from the conventional machine learning methods are discussed in Section 4.3.

4.1 Feature Selection

As described above, we used the same features as used in the study of Hiess et al. [Hiess et al., 2015]. By using the same features, we can robustly verify the relative strengths and weaknesses of the proposed machine learning based framework, as the study by Hiess et al. does not employ machine learning. Hiess et al. have made the preprocessed T1-weighted MRI data from ABIDE available for research (https://sites.google.com/site/hpardoe/cc_abide). The preprocessed data consist of parametric features of the corpus callosum, its sub-regions and the intracranial brain volume, with labels. In total, the preprocessed data consist of 12 features for each of 1100 examples or samples (12 x 1100). A statistical summary of the preprocessed data is given in Table 2.

| Measure | Healthy Controls | Autism Spectrum Disorder |
|---|---|---|
| Number | 571 | 529 |
| Sex (m/f) | 479/99 | 465/64 |
| Age (years) | 17.102 ± 7.726 | 17.082 ± 8.428 |
| CC_area | 596.654 ± 102.93 | 596.908 ± 110.134 |
| CC_perimeter | 196.405 ± 6.353 | 198.102 ± 17.265 |
| CC_length | 70.583 ± 5.342 | 70.711 ± 5.671 |
| CC_circularity | 0.194 ± 0.020 | 0.191 ± 0.023 |
| W1 (rostrum) | 20.753 ± 14.264 | 25.899 ± 10.809 |
| W2 (genu) | 128.789 ± 32.134 | 128.855 ± 33.704 |
| W3 (anterior body) | 91.088 ± 19.212 | 91.734 ± 20.302 |
| W4 (mid-body) | 69.705 ± 13.351 | 69.345 ± 13.796 |
| W5 (posterior body) | 59.007 ± 11.698 | 59.454 ± 12.501 |
| W6 (isthmus) | 51.843 ± 12.519 | 52.137 ± 13.313 |
| W7 (splenium) | 175.471 ± 32.353 | 174.483 ± 34.562 |
| Brain Volume | 1482428.866 ± 150985.323 | 1504247.415 ± 170357.180 |
CC = corpus callosum
Witelson’s [Witelson, 1989] sub-regions of the corpus callosum
Table 2: Statistical summary of ABIDE preprocessed data
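As a concrete illustration of the data layout described above, the short sketch below loads such a preprocessed feature table into an M x N matrix. The file name and the label column name are hypothetical placeholders, not the actual names used in the released data.

```python
# Minimal sketch (not the authors' exact pipeline): load the preprocessed
# corpus callosum / brain volume feature table into an M x N matrix.
import pandas as pd

df = pd.read_csv("cc_abide_preprocessed.csv")   # hypothetical file name
X = df.drop(columns=["label"]).to_numpy()       # the 12 anatomical features
y = df["label"].to_numpy()                      # 1 = ASD, 0 = healthy control
print(X.shape)                                  # expected shape: (1100, 12)
```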

Selecting a useful subset of features, by eliminating redundant features, in order to extract meaningful results is a comprehensive and recursive task. To enhance computational simplicity, reduce complexity and improve the performance of the machine learning algorithms, different feature selection techniques are applied to the preprocessed ABIDE dataset. In the literature, entropy or correlation based methods are usually used for feature selection. We have therefore also employed state-of-the-art methods based on entropy and correlation to select features that minimize within-class variations (ASD vs. control individuals) while maximizing between-class variations. The methods evaluated in this study are explained below.

Information Gain

Information gain is a feature selection technique that measures how much information a feature provides about the corresponding class. It measures information in the form of entropy, where entropy is defined as a probabilistic measure of impurity, disorder or uncertainty in a feature [Quinlan, 1986]. Therefore, a feature with a reduced entropy value tends to give more information and is considered more relevant. For a given set of training examples $S$, a feature $A$, and the subset $S_v$ of examples for which feature $A$ takes the value $v$, information gain is defined as:

$IG(S, A) = H(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|} H(S_v)$   (1)

with entropy:

$H(S) = -p_{+} \log_2 p_{+} - p_{-} \log_2 p_{-}$   (2)

where $p_{+}$ and $p_{-}$ are the probabilities of a training sample in the dataset belonging to the positive and the negative class, respectively.
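To make Eqs. (1)-(2) concrete, the following sketch computes entropy and information gain for a single continuous feature after simple equal-width discretization; the binning strategy is an illustrative assumption, not the procedure used in the paper.

```python
import numpy as np

def entropy(y):
    # H(S) = -sum_c p_c * log2(p_c), Eq. (2), over the class labels in y
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, y, bins=4):
    # IG(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v), Eq. (1);
    # the continuous feature is discretized into equal-width bins first
    edges = np.histogram_bin_edges(feature, bins=bins)
    values = np.digitize(feature, edges[1:-1])
    gain = entropy(y)
    for v in np.unique(values):
        mask = values == v
        gain -= mask.mean() * entropy(y[mask])
    return gain
```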

Information Gain Ratio

Information gain is biased towards selecting features with larger numbers of values [Yu and Liu, 2003]. The information gain ratio is a modified version of information gain that reduces this bias. It is calculated as the ratio of the information gain to the intrinsic value [Kononenko and Hong, 1997], where the intrinsic value is an additional entropy calculation. For a feature $A$ and a set of training examples $S$, with $S_v$ the subset of examples for which $A$ takes the value $v$ and $\mathrm{Values}(A)$ the set of all possible values of $A$, the information gain ratio is defined as:

$IGR(S, A) = \frac{IG(S, A)}{IV(S, A)}$   (3)

with intrinsic value $IV(S, A)$:

$IV(S, A) = -\sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|} \log_2 \frac{|S_v|}{|S|}$   (4)

Chi-Square Method

The Chi-Square ($\chi^2$) method is a correlation based feature selection method (also known as the Pearson Chi-Square test) which tests the dependence of two variables. Two variables $A$ and $B$ are defined as independent if $P(AB) = P(A)P(B)$ or, equivalently, $P(A \mid B) = P(A)$ and $P(B \mid A) = P(B)$. In terms of machine learning, the two variables are the occurrence of a feature and the class label [Doshi, 2014]. The chi-square method calculates the correlation strength of each feature by computing the statistic:

$\chi^2 = \sum_{i} \frac{(O_i - E_i)^2}{E_i}$   (5)

where $\chi^2$ is the chi-square statistic, $O_i$ is the observed value of the feature, and $E_i$ is the expected value of the feature, respectively.
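A hedged sketch of chi-square feature scoring with scikit-learn is given below; since chi2 requires non-negative inputs, the features are min-max scaled first, and X and y are the feature matrix and labels introduced earlier. This mirrors Eq. (5) but is not the authors' exact implementation.

```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import chi2

# chi2 expects non-negative inputs, so the anatomical features are min-max
# scaled first; X and y are the feature matrix and labels loaded earlier.
X_scaled = MinMaxScaler().fit_transform(X)
chi2_scores, p_values = chi2(X_scaled, y)
weights = chi2_scores / chi2_scores.max()   # normalize the weights to [0, 1]
```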

Symmetrical Uncertainty

Symmetrical Uncertainty (SU) is a relevance indexing or scoring method [Brown et al., 2012] used to find the relationship between a feature and the class label. It normalizes the score of each feature to the range [0, 1], where 1 indicates that the feature and the target class are strongly correlated and 0 indicates no relationship between them [Peng et al., 2005]. For a class label $C$ and a feature $X$, the symmetrical uncertainty is defined as:

$SU(X, C) = \frac{2 \, IG(X, C)}{H(X) + H(C)}$   (6)

where $IG$ represents information gain and $H$ represents entropy, respectively.

All four methods (information gain, information gain ratio, chi-square and symmetrical uncertainty) calculate the value / importance / weight of each feature for a given task. The weight of each feature is calculated with respect to the class label and the feature value computed by each method; the higher the weight of a feature, the more relevant it is considered. The weight of each feature is normalized to the range [0, 1]. The results of each feature selection method are shown in Figure 5.

Figure 5: Result of entropy and correlation based feature selection methods. All features are represented with their corresponding weights. A: Represents the result of information gain. B: Represents the result of information gain ratio. C: Represents the result of chi-square method. D: Represents the result of symmetrical uncertainty.

Figure 5 presents the results of the feature selection study. The first two graphs show the weights of different features calculated by the entropy based methods, i.e. information gain and information gain ratio. The last two graphs present the feature weights obtained from the correlation based methods, i.e. chi-square and symmetrical uncertainty. The result of the information gain ratio differs from that of information gain, but the same features emerged as most important in both methods. The results from the correlation based methods, i.e. chi-square and symmetrical uncertainty, are almost similar with little differences, and the same small set of features emerged as the most discriminant.

It is important to highlight that the features found to be most discriminant in our study are comparable with the features identified in the study by Hiess et al. [Hiess et al., 2015]. Hiess et al. concluded that brain volume and corpus callosum area are two important features for discriminating ASD from controls in the ABIDE dataset. In our study we likewise concluded that brain volume and different sub-regions of the corpus callosum, i.e. the genu, mid-body and splenium, labeled W2, W4 and W7, are the most discriminant features. In fact, the results from the correlation based methods, i.e. chi-square and symmetrical uncertainty, are comparable with the results presented by Hiess et al. [Hiess et al., 2015].

In our proposed framework, we applied a threshold to the results obtained from the feature selection methods to select a subset of features, which reduces computational complexity and improves the performance of the machine learning algorithms. We performed experiments with different threshold values and empirically found that the highest average classification accuracy (detection of ASD) was obtained with the subset of features selected by the chi-square method at the chosen threshold value.

The final feature vector deduced in this study consists of the features whose chi-square weights exceed this threshold (where CC = corpus callosum in the feature labels). The average classification accuracy obtained by the conventional machine learning methods, with and without feature selection, is presented in Table 3. It can be observed from the table that training a classifier on the subset of discriminant features gives better results not only in terms of computational complexity but also in terms of average classification accuracy.
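The threshold step can be sketched as follows; the cut-off value 0.2 is a hypothetical placeholder, since the empirically chosen threshold is not restated here, and `weights` are the normalized chi-square weights from the previous sketch.

```python
import numpy as np

threshold = 0.2                               # hypothetical cut-off value
selected = np.where(weights >= threshold)[0]  # indices of retained features
X_selected = X[:, selected]
```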

The next subsection, Section 4.2, discusses the conventional machine learning methods evaluated in this study.

4.2 Conventional classification methods

Classification is the process of learning patterns / concepts from a given dataset or examples and predicting the class of new examples [Bishop, 2006]. For automatic detection of ASD from the preprocessed ABIDE dataset (features selected by the feature selection algorithm, refer to Section 4.1), we evaluated the state-of-the-art conventional classifiers listed below:

  1. Linear Discriminant Analysis (LDA)

  2. Support Vector Machine (SVM) with radial basis function (rbf) Kernel

  3. Random Forest (RF) of 10 trees

  4. Multi-Layer Perceptron (MLP)

  5. K-Nearest Neighbor (KNN) with k = 3

We chose classifiers from diverse categories. For example, K-Nearest Neighbor (KNN) is a non-parametric instance based learner; Support Vector Machine (SVM) is a large margin classifier that maps the data to a higher dimensional space for better classification; Random Forest (RF) is a tree based classifier that breaks the set of samples into a set of covering decision rules; and the Multilayer Perceptron (MLP) is motivated by human brain anatomy. These classifiers are briefly explained below.
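The following sketch instantiates the five classifiers with the hyperparameters stated above (rbf-kernel SVM, a 10-tree random forest, k = 3 for KNN); any parameter not mentioned in the text is left at its scikit-learn default, which is an assumption rather than the authors' exact configuration.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Five conventional classifiers evaluated on the selected feature subset.
classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM (rbf)": SVC(kernel="rbf"),
    "RF (10 trees)": RandomForestClassifier(n_estimators=10),
    "MLP": MLPClassifier(max_iter=1000),
    "KNN (k=3)": KNeighborsClassifier(n_neighbors=3),
}
```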

Linear Discriminant Analysis (LDA)

LDA is a statistical method that finds the linear combination of features which separates the dataset into its corresponding classes; the resulting combination is used as a linear classifier [Jain and Huang, 2004]. LDA maximizes linear separability by maximizing the ratio of the between-class variance to the within-class variance for the given dataset. Let $C_j$ and $N_j$ be the classes and the number of examples in each class, respectively, and let $\mu_j$ and $\mu$ be the means of the classes and the grand mean, respectively. Then the within-class and between-class scatter matrices $S_w$ and $S_b$ are defined as:

$S_w = \sum_{j} p_j \, \Sigma_j$   (7)

$S_b = \sum_{j} p_j \, (\mu_j - \mu)(\mu_j - \mu)^{T}$   (8)

where $p_j$ is the prior probability and $\Sigma_j$ represents the covariance matrix of class $C_j$, respectively.

Support Vector Machine (SVM)

The SVM classifier segregates samples into their corresponding classes by constructing decision boundaries known as hyperplanes [Vapnik, 2013]. It implicitly maps the dataset into a higher dimensional feature space and constructs a separating hyperplane with maximal margin in that space. For a training set of examples $\{x_i, y_i\}$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$, a new test example $x$ is classified by the following function:

$f(x) = \mathrm{sign}\left( \sum_{i} \alpha_i \, y_i \, K(x_i, x) + b \right)$   (9)

where $\alpha_i$ are the Lagrange multipliers of the dual optimization problem separating the two hyperplanes, $K(x_i, x)$ is a kernel function, and $b$ is the threshold parameter of the hyperplane, respectively.

Random Forest (RF)

Random Forest belongs to the family of decision tree methods and is capable of performing classification and regression tasks. A classification tree is composed of nodes and branches that break the set of samples into a set of covering decision rules [Mitchell, 1997]. RF is an ensemble classifier consisting of many decorrelated decision trees, and its output is the mode of the classes output by the individual decision trees.

Multilayer Perceptron (MLP)

MLP belongs to the family of neural networks, which consist of interconnected groups of artificial neurons called nodes and connections for passing information called edges [Jain et al., 1996]. A neural network consists of an input, a hidden and an output layer. The input layer transmits the inputs, in the form of a feature vector with weighted values, to the hidden layer. The hidden layer, composed of activation units or transfer functions [Gardner and Dorling, 1998], takes the weighted feature vector from the first layer and performs calculations on it. The output layer is made up of activation units that carry the weighted output of the hidden layer and predict the corresponding class. An example of an MLP with 2 hidden layers is shown in Figure 6. A multilayer perceptron is fully connected, with each node connected to every node in the next and previous layer. MLP uses back-propagation [Hecht-Nielsen, 1992] during training to reduce the error function; the error is reduced by updating the weight values in each layer. For a training set of examples $\{x_i, y_i\}$ with outputs $y_i \in \{0, 1\}$, a new test example $x$ is classified by the following function:

$f(x) = \sigma\left( \sum_{i} w_i^{(l)} x_i + b^{(l)} \right)$   (10)

where $\sigma$ is a non-linear activation function, $w_i^{(l)}$ are the weights multiplied by the inputs in each layer $l$, and $b^{(l)}$ is the bias term, respectively.

Figure 6: An architecture of Multilayer Perceptron (MLP)

K-Nearest Neighbor (KNN)

KNN is an instance based, non-parametric classifier which finds the $K$ training samples closest to a new example, based on a target function, and infers the output class from them [Khan et al., 2013, Acuna and Rodriguez, 2004]. The probability of an unknown sample $x$ belonging to class $y$ can be calculated as follows:

$p(y \mid x) = \frac{1}{K} \sum_{x_i \in N_K(x)} \mathbb{1}(y_i = y)$   (11)

$d(x, x_i) = \sqrt{\sum_{j} (x_j - x_{ij})^2}$   (12)

where $N_K(x)$ is the set of $K$ nearest neighbors, $y_i$ the class of $x_i$, and $d(x, x_i)$ the Euclidean distance of $x_i$ from $x$, respectively.

4.3 Results and Evaluation

We chose to evaluate the performance of our framework in the same way as the evaluation criteria proposed by Heinsfeld et al. [Heinsfeld et al., 2018], who evaluated the performance of their framework on the basis of k-fold cross-validation and leave-one-site-out classification schemes [Bishop, 2006]. We have evaluated the results of the above mentioned classifiers using these same schemes.

k-Fold cross validation scheme

Cross-validation is a statistical technique for evaluating and comparing learning algorithms by dividing the dataset into two segments: one used to learn or train the model and the other used to validate it [Kohavi et al., 1995]. In the k-fold cross-validation scheme, the dataset is segmented into k equally sized portions, segments or folds. Subsequently, k iterations of learning and validation are performed; within each iteration, k-1 folds are used for learning and a different, held-out fold is used for validation [Bishop, 2006]. Upon completion of the k folds, the performance of an algorithm is calculated by averaging the values of the evaluation metric, i.e. the accuracy, over the folds.

Figure 7: Schematic overview of the k-fold cross-validation scheme

All the studied classifiers were evaluated using the k-fold cross-validation scheme with k = 5. The dataset is divided into 5 segments of equal size; in each iteration, 4 segments of data are used for training and the remaining segment is used for testing. This process is illustrated in Figure 7.
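A minimal sketch of this 5-fold protocol, reusing the classifiers and the selected feature subset defined earlier, could look as follows.

```python
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation: accuracy averaged over the five held-out folds.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_selected, y, cv=5, scoring="accuracy")
    print(f"{name}: mean 5-fold accuracy = {scores.mean():.3f}")
```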

Figure 8: Results of 5-fold cross-validation scheme

Figure 8 presents the average ASD recognition accuracy achieved by the studied classifiers using the 5-fold cross-validation scheme on the preprocessed ABIDE data (features selected by the feature selection algorithm, refer to Section 4.1). The results show that the overall accuracy of all classifiers increases with the number of folds. Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Random Forest (RF), Multi-layer Perceptron (MLP) and K-Nearest Neighbor (KNN) achieved average accuracies of 55.93%, 52.20%, 54.79%, 54.98% and 51.00% respectively. These results are also reported in Table 3.

| Classifier | Without feature selection: average accuracy (leave-one-site-out) | With feature selection: average accuracy (leave-one-site-out) | With feature selection: average accuracy (5-fold cross-validation) |
|---|---|---|---|
| Linear Discriminant Analysis (LDA) | 55.45% | 56.21% | 55.93% |
| Support Vector Machine (SVM) | 51.34% | 51.34% | 52.2% |
| Random Forest (RF) | 53.9% | 54.61% | 54.79% |
| Multi-Layer Perceptron (MLP) | 52.8% | 56.26% | 54.98% |
| K-Nearest Neighbor (KNN) | 48.74% | 52.16% | 51% |
Table 3: Average classifier accuracy with and without feature selection

Leave-one-site-out classification scheme

In this validation scheme, the data from one site are used for testing, to evaluate the performance of the model, and the data from all other sites are used for training. This procedure is illustrated in Figure 9.

Figure 9: Schematic overview of Leave-one-site-out classification scheme
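A minimal sketch of the leave-one-site-out protocol is shown below; `sites` is assumed to be an array holding one acquisition-site identifier per sample, which is not part of the feature matrix itself.

```python
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Each fold holds out every sample from one acquisition site for testing.
logo = LeaveOneGroupOut()
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_selected, y, groups=sites, cv=logo,
                             scoring="accuracy")
    print(f"{name}: mean leave-one-site-out accuracy = {scores.mean():.3f}")
```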

The framework achieved average accuracies of 56.21%, 51.34%, 54.61%, 56.26% and 52.16% for Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Random Forest (RF), Multi-layer Perceptron (MLP) and K-Nearest Neighbor (KNN) respectively for ASD identification using the leave-one-site-out classification scheme. The results are tabulated in Table 3.

Figure 10 presents the recognition result for each site using the leave-one-site-out classification scheme. It is interesting to observe that, across all sites, the maximum ASD classification accuracy is achieved on the USM site data, with an accuracy of 79.21% by the K-NN classifier. The second highest accuracy is achieved by LDA, with an accuracy of 76.32% on the CALTECH site data. This result is consistent with the result obtained by Heinsfeld et al. [Heinsfeld et al., 2018].

Figure 10: Results of leave-one-site-out classification scheme

The leave-one-site-out classification results of all classifiers show variations across different sites. This suggests that the variation could be due to differences in the number of samples available for the training phase. Furthermore, there is variability in the data across different sites; refer to Table 1 for the structural MRI acquisition parameters used across sites in the ABIDE dataset [Hiess et al., 2015].

5 Autism detection, a transfer learning based approach

Results obtained with conventional machine learning algorithms, with and without feature selection, are presented in Section 4.3. It can be observed that the average recognition accuracy for autism detection on the ABIDE dataset remains in the range of 52%-55% for the different conventional machine learning algorithms (refer to Table 3). In order to achieve better recognition accuracy and to test the potential of the latest machine learning technique, i.e. deep learning [LeCun et al., 2015], we employed a transfer learning approach using the VGG16 model [Simonyan and Zisserman, 2014].

Generally, in machine learning algorithms, the training and test data are drawn from the same distribution. Transfer learning, on the contrary, allows the distributions used in training and testing to be different [Pan and Yang, 2010]. The motivation for employing a transfer learning approach comes from the fact that training a deep learning network from scratch requires a large amount of data [LeCun et al., 2015], whereas in our case the ABIDE dataset [Di Martino et al., 2014] contains labeled samples from only 1112 subjects (539 autism cases and 573 healthy control participants). Transfer learning allows partial re-training of an already trained model (usually re-training the last layer) [Pan and Yang, 2010] while keeping all the other layers (trained weights) of the model intact, these having been trained on millions of examples for a semantically similar task. We used the transfer learning approach in our study because we wanted to benefit from a deep learning model that has achieved high accuracy on visual recognition tasks, i.e. the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [Russakovsky et al., 2015], and is available for research purposes.

Figure 11: An illustration of VGG16 architecture [Simonyan and Zisserman, 2014]

A few of the well-known deep learning architectures that emerged from the ILSVRC are GoogleNet (a.k.a. Inception V1) from Google [Szegedy et al., 2015] and VGGNet by Simonyan and Zisserman [Simonyan and Zisserman, 2014]. Both of these architectures are from the family of Convolutional Neural Networks (CNNs), as they employ convolution operations to analyze visual input, i.e. images. We chose to work with the 16-layer variant of VGGNet (VGG16) [Simonyan and Zisserman, 2014]. It is one of the most appealing architectures because of its uniform structure and its robustness on visual recognition tasks (refer to Figure 11), and its pre-trained model is freely available for research purposes, making it a good choice for transfer learning.

The VGG16 architecture (refer to Figure 11) takes an input image of 224 x 224 pixels, with a receptive field size of 3 x 3, a convolution stride of 1 pixel and padding of 1 (for the 3 x 3 receptive field). It uses the rectified linear unit (ReLU) [Nair and Hinton, 2010] as the activation function. Classification is done using a softmax classification layer whose units represent the classes to be recognized. The other layers are convolution layers and feature pooling layers. Convolution layers use filters which are convolved with the input image to produce activation or feature maps. Feature pooling layers are used in the architecture to reduce the size of the image representation, to make the computation efficient and to control over-fitting.

5.1 Experiment and results

As mentioned earlier, this study is performed using structural MRI (s-MRI) scans from the Autism Brain Imaging Data Exchange (ABIDE-I) dataset (http://fcon_1000.projects.nitrc.org/indi/abide/abide_I.html) [Di Martino et al., 2014]. The ABIDE-I dataset was collected at 17 international sites and comprises a total of 1112 subjects or samples (539 autism cases and 573 healthy control participants).

The MRI scans in the ABIDE-I dataset are provided in the Neuroimaging Informatics Technology Initiative (NIfTI) file format [Cox et al., 2003], in which images represent the projection of an anatomical volume onto an image plane. Initially, all anatomical scans were converted from NIfTI to the Tagged Image File Format (TIFF or TIF), an uncompressed format [Guarneri et al., 2008], which created a dataset of 200k TIF images. We did not use all of the TIF images for transfer learning, however, because the leading and trailing portions of the images extracted from individual scans contain only clipped / cropped portions of the region of interest, i.e. the corpus callosum. We were thus left with 100k TIF images showing a visibly complete corpus callosum.
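A hedged sketch of this NIfTI-to-TIFF conversion step is given below using nibabel and Pillow; the slicing axis, intensity scaling and file naming are illustrative assumptions rather than the exact procedure used by the authors.

```python
import nibabel as nib
import numpy as np
from PIL import Image

# Load one anatomical volume and write each slice along one axis as a TIF image.
volume = nib.load("subject_0001_T1w.nii.gz").get_fdata()   # hypothetical file name
for i in range(volume.shape[2]):                            # assumed slicing axis
    sl = volume[:, :, i]
    rng = sl.max() - sl.min()
    sl8 = (255 * (sl - sl.min()) / (rng + 1e-8)).astype(np.uint8)  # scale to 8-bit
    Image.fromarray(sl8).save(f"subject_0001_slice_{i:03d}.tif")
```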

Figure 12: Transfer learning results using VGG16 architecture: (A) Training accuracy vs Validation accuracy (B) Training loss vs Validation loss.

For transfer learning, the VGG16 network [Simonyan and Zisserman, 2014] was used (refer to Section 5 for an explanation of the VGG16 architecture). The last fully connected dense layer of the pre-trained VGG16 model was replaced and re-trained with the images extracted from the ABIDE-I dataset. We trained this last dense layer using the softmax activation function and the ADAM optimizer [Kingma and Ba, 2014] with a learning rate of 0.01.
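A minimal Keras sketch of this transfer learning setup is shown below: the VGG16 convolutional base is kept frozen and only a new dense softmax layer is trained with the Adam optimizer at a learning rate of 0.01. The input pipeline is omitted and assumed to provide 224 x 224 RGB slices with integer labels; these details are assumptions, not the authors' exact configuration.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# Pre-trained convolutional base with ImageNet weights; its layers stay frozen.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Only the new dense softmax head is trained on the extracted MRI slices.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),  # ASD vs. healthy control
])
model.compile(optimizer=optimizers.Adam(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=...)
```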

80% of the TIF images extracted from the MRI scans were used for training, while the remaining 20% were used for validation. With the above mentioned parameters, the proposed transfer learning approach achieved an autism detection accuracy of 66%. The model accuracy and loss curves are shown in Figure 12. In comparison with the conventional machine learning methods (refer to Table 3 for the results obtained using different conventional machine learning methods), the transfer learning approach gained around 10 percentage points in ASD detection accuracy.

6 Conclusion and Future Work

Our research study shows the potential of machine learning (conventional and deep learning) algorithms for advancing the understanding of neuroimaging data. We showed how machine learning algorithms can be applied to structural MRI data for automatic detection of individuals facing Autism Spectrum Disorder (ASD).

Although the achieved recognition rate is in the range of 55%-65%, in the absence of biomarkers such algorithms can still assist clinicians in early detection of ASD. Secondly, it is known that studies combining machine learning with brain imaging data collected from multiple sites, like ABIDE [Di Martino et al., 2014], to identify autism have demonstrated that classification accuracy tends to decrease [Arbabshirani et al., 2017]; we observed the same trend in this study.

Main conclusions drawn from this study are:

  • Machine learning algorithms applied to brain anatomical scans can help in automatic detection of ASD. Features extracted from the corpus callosum and intracranial brain regions present significant discriminative information for classifying individuals facing ASD against the control sub-group.

  • Feature selection / weighting methods help build a robust classifier for automatic detection of ASD. These methods not only help the framework in terms of reducing computational complexity but also in terms of achieving better average classification accuracy.

  • We also provided automatic ASD detection results using Convolutional Neural Networks (CNN) via transfer learning approach. This will help readers to understand benefits and bottlenecks of using deep learning / CNN approach for analyzing neuroimaging data which is difficult to record in large enough quantity for deep learning.

  • To enhance the recognition results of the proposed framework, it is recommended to use a multimodal system: in addition to neuroimaging data, other modalities, i.e. EEG, speech or kinesthetic data, can be analyzed simultaneously to achieve better recognition of ASD.

The results obtained using Convolutional Neural Networks (CNN) / deep learning are promising. One of the challenges in fully utilizing the learning / data modeling capabilities of CNNs is the need for a large database to learn the concept [Zhou et al., 2018, LeCun et al., 2015], which makes them impractical for applications where labeled data is hard to record. For clinical applications, where obtaining data, especially neuroimaging data, is difficult, the training of deep learning algorithms poses a challenge. One solution to counter this problem is a hybrid approach, in which the data modeling capabilities of conventional machine learning algorithms (which can learn a concept from small data as well) are combined with deep learning.

In order to bridge the gap between neuroscience and computer science researchers, we emphasize and encourage the scientific community to share databases and results for automatic identification of psychological ailments.

Footnotes

  1. email: sharifmhamza@gmail.com
  2. email: rizwan.khan@bhu.edu.pk, rizwan17@gmail.com

References

  1. Accountability Act. Health insurance portability and accountability act of 1996. Public law, 104:191, 1996.
  2. Edgar Acuna and Caroline Rodriguez. The treatment of missing values and its effect on classifier accuracy. In Classification, clustering, and data mining applications, pages 639–647. Springer, 2004.
  3. M. Usman Akram, Shehzad Khalid, and Shoab A. Khan. Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recogn., 46(1):107–116, January 2013. ISSN 0031-3203. doi: 10.1016/j.patcog.2012.07.002. URL http://dx.doi.org/10.1016/j.patcog.2012.07.002.
  4. Mohammad R Arbabshirani, Sergey Plis, Jing Sui, and Vince D Calhoun. Single subject prediction of brain disorders in neuroimaging: promises and pitfalls. NeuroImage, 145:137–165, 2017.
  5. BA Ardekani. Yuki module of the automatic registration toolbox (art) for corpus callosum segmentation. Google Scholar, 2013.
  6. Russell A Barkley and Kevin R Murphy. Attention-deficit hyperactivity disorder: A clinical workbook. Guilford Press, 1998.
  7. Simon Baron-Cohen, Sally Wheelwright, Richard Skinner, Joanne Martin, and Emma Clubley. The autism-spectrum quotient (aq): Evidence from asperger syndrome/high-functioning autism, malesand females, scientists and mathematicians. Journal of autism and developmental disorders, 31(1):5–17, 2001.
  8. Andrew James Bauer and Marcel Adam Just. Monitoring the growth of the neural representations of new animal concepts. Human brain mapping, 36(8):3213–3226, 2015.
  9. L Bellak. The schizophrenic syndrome and attention deficit disorder. Thesis, antithesis, and synthesis? The American psychologist, 49:25–29, 1994. doi: 10.1037//0003-066x.49.1.25. URL https://doi.org/10.1037//0003-066X.49.1.25.
  10. Christopher Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, 2006.
  11. Thomas Bourgeron. A synaptic trek to autism. Current opinion in neurobiology, 19(2):231–234, 2009.
  12. Gavin Brown, Adam Pocock, Ming-Jie Zhao, and Mikel Luján. Conditional likelihood maximisation: a unifying framework for information theoretic feature selection. Journal of machine learning research, 13(Jan):27–66, 2012.
  13. Augusto Buchweitz, Svetlana V Shinkareva, Robert A Mason, Tom M Mitchell, and Marcel Adam Just. Identifying bilingual semantic neural representations across languages. Brain and language, 120(3):282–289, 2012.
  14. Simon H Budman, Michael F Hoyt, and Steven Friedman. The first session in brief therapy. Guilford Press, 1992.
  15. Ariane VS Buescher, Zuleyha Cidav, Martin Knapp, and David S Mandell. Costs of autism spectrum disorders in the united kingdom and the united states. JAMA pediatrics, 168(8):721–728, 2014.
  16. Ed Bullmore and Olaf Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3):186, 2009.
  17. RJ Castillo, DJ Carlat, T Millon, CM Millon, S Meagher, S Grossman, R Rowena, J Morrison, American Psychiatric Association, et al. Diagnostic and statistical manual of mental disorders. Washington, DC: American Psychiatric Association Press, 2007.
  18. Girish Chandrashekar and Ferat Sahin. A survey on feature selection methods. Computers & Electrical Engineering, 40(1):16 – 28, 2014.
  19. Hongyoon Choi. Functional connectivity patterns of autism spectrum disorder identified by deep feature learning. connections, 4:5, 2017.
  20. Moo K Chung, Kim M Dalton, Andrew L Alexander, and Richard J Davidson. Less white matter concentration in autism: 2d voxel-based morphometry. Neuroimage, 23(1):242–251, 2004.
  21. R.W Cox, J Ashburner, H Breman, K Fissell, C Haselgrove, C.J Holmes, J. L Lancaster, D.E Rex, S.M Smith, J.B Woodward, and S.C Strother. A (sort of) new image data format standard: NiFTI-1. In 10th Annual Meeting of Organisation of Human Brain Mapping, 2003.
  22. R Cameron Craddock, Paul E Holtzheimer III, Xiaoping P Hu, and Helen S Mayberg. Disease state prediction from resting state functional connectivity. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 62(6):1619–1628, 2009.
  23. M Del Valle Rubido, J. T McCracken, E Hollander, F Shic, J Noeldeke, L Boak, O Khwaja, S Sadikhov, P Fontoura, and D Umbricht. In search of biomarkers for autism spectrum disorder. Autism Research, 11:1567–1579, 2018.
  24. Adriana Di Martino, Chao-Gan Yan, Qingyang Li, Erin Denio, Francisco X Castellanos, Kaat Alaerts, Jeffrey S Anderson, Michal Assaf, Susan Y Bookheimer, Mirella Dapretto, et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular psychiatry, 19(6):659, 2014.
  25. Mital Doshi. Correlation based feature selection (CFS) technique to predict student performance. International Journal of Computer Networks & Communications, 6(3):197, 2014.
  26. Matt W Gardner and SR Dorling. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmospheric environment, 32(14-15):2627–2636, 1998.
  27. Jay N Giedd. Structural magnetic resonance imaging of the adolescent brain. Annals of the New York Academy of Sciences, 1021(1):77–85, 2004.
  28. MD Greicius, BH Flores, V Menon, GH Glover, HB Solvason, H Kenna, AL Reiss, and AF Schatzberg. Resting-state functional connectivity in major depression: abnormally increased contributions from subgenual cingulate cortex and thalamus. Biological Psychiatry, 62, 2007.
  29. F. Guarneri, M. Vaccaro, and C. Guarneri. Digital image compression in dermatology: Format comparison. Telemedicine and e-Health, 14(7):666–670, 2008. doi: 10.1089/tmj.2007.0119. URL https://doi.org/10.1089/tmj.2007.0119. PMID: 18817495.
  30. E.M. Haacke, S. Mittal, Z. Wu, J. Neelavalli, and Y.-C.N. Cheng. Susceptibility-weighted imaging: Technical aspects and clinical applications, part 1. American Journal of Neuroradiology, 30(1):19–30, 2009. ISSN 0195-6108.
  31. James V Haxby, M Ida Gobbini, Maura L Furey, Alumit Ishai, Jennifer L Schouten, and Pietro Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425–2430, 2001.
  32. Robert Hecht-Nielsen. Theory of the backpropagation neural network. In Neural networks for perception, pages 65–93. Elsevier, 1992.
  33. Anibal Sólon Heinsfeld, Alexandre Rosa Franco, R Cameron Craddock, Augusto Buchweitz, and Felipe Meneguzzi. Identification of autism spectrum disorder using deep learning and the abide dataset. NeuroImage: Clinical, 17:16–23, 2018.
  34. R Kucharsky Hiess, R Alter, S Sojoudi, BA Ardekani, R Kuzniecky, and HR Pardoe. Corpus callosum area and brain volume in autism spectrum disorder: quantitative analysis of structural mri from the abide database. Journal of autism and developmental disorders, 45(10):3107–3114, 2015.
  35. Leighton B. N. Hinkley, Elysa J. Marco, Anne M. Findlay, Susanne Honma, Rita J. Jeremy, Zoe Strominger, Polina Bukshpun, Mari Wakahiro, Warren S. Brown, Lynn K. Paul, A. James Barkovich, Pratik Mukherjee, Srikantan S. Nagarajan, and Elliott H. Sherr. The role of corpus callosum development in functional connectivity and cognitive processing. PLOS ONE, 7:1–17, 2012.
  36. C Horlin, M Falkmer, R Parsons, MA Albrecht, and T. Falkmer. The cost of autism spectrum disorders. PLoS One, 9, 2014.
  37. Scott A Huettel, Allen W Song, Gregory McCarthy, et al. Functional magnetic resonance imaging, volume 1. Sinauer Associates Sunderland, MA, 2004a.
  38. Scott A Huettel, Allen W Song, Gregory McCarthy, et al. Functional magnetic resonance imaging, volume 1. Sinauer Associates Sunderland, MA, 2004b.
  39. Amit Jain and Jeffrey Huang. Integrating independent components and linear discriminant analysis for gender classification. In Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on, pages 159–163. IEEE, 2004.
  40. Anil K Jain, Jianchang Mao, and K Moidin Mohiuddin. Artificial neural networks: A tutorial. Computer, 29(3):31–44, 1996.
  41. Muhammad Shoaib Jaliaawala and Rizwan Ahmed Khan. Can autism be catered with artificial intelligence-assisted intervention technology? a comprehensive survey. Artificial Intelligence Review, pages 1–32, 2019.
  42. Marcel Adam Just, Vladimir L Cherkassky, Augusto Buchweitz, Timothy A Keller, and Tom M Mitchell. Identifying autism from neural representations of social interactions: neurocognitive markers of autism. PloS one, 9(12):e113879, 2014.
  43. Avinash C. Kak and Malcolm Slaney. Principles of computerized tomographic imaging. IEEE Press, New York, 1988.
  44. Karim S Kassam, Amanda R Markey, Vladimir L Cherkassky, George Loewenstein, and Marcel Adam Just. Identifying emotions on the basis of neural activation. PloS one, 8(6):e66032, 2013.
  45. R. A. Khan, A. Meyer, H. Konik, and S. Bouakaz. Pain detection through shape and appearance features. In 2013 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6, July 2013. doi: 10.1109/ICME.2013.6607608.
  46. Rizwan Ahmed Khan. Detection of emotions from video in non-controlled environment. PhD thesis, LIRIS, Universite Claude Bernard Lyon1, France, 2013.
  47. Rizwan Ahmed Khan, Alexandre Meyer, Hubert Konik, and Saida Bouakaz. Framework for reliable, real-time facial expression recognition for low resolution images. Pattern Recognition Letters, 34(10):1159–1168, 2013.
  48. Rizwan Ahmed Khan, Arthur Crenn, Alexandre Meyer, and Saida Bouakaz. A novel database of children’s spontaneous facial expressions (LIRIS-CSE). Image Vision Comput., 2019a.
  49. Rizwan Ahmed Khan, Alexandre Meyer, Hubert Konik, and Saida Bouakaz. Saliency-based framework for facial expression recognition. Frontiers of Computer Science, 13(1):183–198, Feb 2019b. ISSN 2095-2236. doi: 10.1007/s11704-017-6114-9. URL https://doi.org/10.1007/s11704-017-6114-9.
  50. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, arXiv:1412.6980, 2014.
  51. Ami Klin, Fred R Volkmar, and Sara S Sparrow. Asperger syndrome. Guilford Press New York, 2000.
  52. Ron Kohavi et al. A study of cross-validation and bootstrap for accuracy estimation and model selection. In International joint conference on Artificial intelligence, volume 14, pages 1137–1145. Montreal, Canada, 1995.
  53. Igor Kononenko and Se June Hong. Attribute selection for modelling. Future Generation Computer Systems, 13(2-3):181–195, 1997.
  54. Ender Konukoglu, Ben Glocker, Darko Zikic, and Antonio Criminisi. Neighbourhood approximation forests. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 75–82. Springer, 2012.
  55. Sotetsu Koyamada, Yumi Shikauchi, Ken Nakae, Masanori Koyama, and Shin Ishii. Deep learning of fmri big data: a novel approach to subject-transfer decoding. arXiv preprint arXiv:1502.00093, 2015.
  56. Azadeh Kushki, Ellen Drumm, Michele Pla Mobarak, Nadia Tanel, Annie Dupuis, Tom Chau, and Evdokia Anagnostou. Investigating the autonomic nervous system response to anxiety in children with autism spectrum disorders. PLoS one, 8(4):e59730, 2013.
  57. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015.
  58. Ian B Malone, Kelvin K Leung, Shona Clegg, Josephine Barnes, Jennifer L Whitwell, John Ashburner, Nick C Fox, and Gerard R Ridgway. Accurate automatic estimation of total intracranial volume: a nuisance variable with less nuisance. Neuroimage, 104:366–372, 2015.
  59. José V Manjón and Pierrick Coupé. volbrain: an online mri brain volumetry system. Frontiers in neuroinformatics, 10:30, 2016.
  60. SA. McGuire, SA. Wijtenburg, PM. Sherman, LM. Rowland, M. Ryan, JH. Sladky, and PV. Kochunov. Reproducibility of quantitative structural and physiological MRI measurements. Brain and behavior, 2017.
  61. Tom M. Mitchell. Machine Learning. McGraw-Hill Series in Computer Science, 1997.
  62. Rumaisah Munir and Rizwan Ahmed Khan. An extensive review on spectral imaging in biometric systems: Challenges & advancements. Journal of Visual Communication and Image Representation, 65:102660, 2019. ISSN 1047-3203. doi: https://doi.org/10.1016/j.jvcir.2019.102660. URL http://www.sciencedirect.com/science/article/pii/S1047320319302810.
  63. Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, pages 807–814, USA, 2010. Omnipress. ISBN 978-1-60558-907-7. URL http://dl.acm.org/citation.cfm?id=3104322.3104425.
  64. Jared A Nielsen, Brandon A Zielinski, P Thomas Fletcher, Andrew L Alexander, Nicholas Lange, Erin D Bigler, Janet E Lainhart, and Jeffrey S Anderson. Multisite functional connectivity mri classification of autism: Abide results. Frontiers in human neuroscience, 7:599, 2013.
  65. Richard Nordenskjöld, Filip Malmberg, Elna-Marie Larsson, Andrew Simmons, Samantha J. Brooks, Lars Lind, Håkan Ahlström, Lars Johansson, and Joel Kullberg. Intracranial volume estimated with commonly used methods could introduce bias in studies including brain volume measurements. NeuroImage, 83:355 – 360, 2013. ISSN 1053-8119.
  66. World Health Organization. The ICD-10 classification of mental and behavioural disorders: diagnostic criteria for research, volume 2. World Health Organization, 1993.
  67. S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, Oct 2010. ISSN 1041-4347. doi: 10.1109/TKDE.2009.191.
  68. Hanchuan Peng, Fuhui Long, and Chris Ding. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis & Machine Intelligence, (8):1226–1238, 2005.
  69. Sergey M Plis, Devon R Hjelm, Ruslan Salakhutdinov, Elena A Allen, Henry J Bockholt, Jeffrey D Long, Hans J Johnson, Jane S Paulsen, Jessica A Turner, and Vince D Calhoun. Deep learning for neuroimaging: a validation study. Frontiers in neuroscience, 8:229, 2014.
  70. Mark Plitt, Kelly Anne Barnes, and Alex Martin. Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards. NeuroImage: Clinical, 7:359–366, 2015.
  71. J. Ross Quinlan. Induction of decision trees. Machine learning, 1(1):81–106, 1986.
  72. Catherine Rice. Prevalence of autism spectrum disorders-autism and developmental disabilities monitoring network, united states, 2006. Morbidity and Mortality Weekly Report (MMWR) - Surveillance Summary, 2009.
  73. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
  74. Mary A Rutherford and Graeme M Bydder. MRI of the Neonatal Brain. WB Saunders London, 2002.
  75. Mert R Sabuncu, Ender Konukoglu, Alzheimer’s Disease Neuroimaging Initiative, et al. Clinical prediction from structural brain mri scans: a large-scale empirical study. Neuroinformatics, 13(1):31–46, 2015.
  76. Sarah E Schipul, Timothy A Keller, and Marcel Adam Just. Inter-regional brain communication and its disturbance in autism. Frontiers in systems neuroscience, 5:10, 2011.
  77. Harold N Schnitzlein and F Reed Murtagh. Imaging anatomy of the head and spine: A photographic color atlas of MRI, CT, gross and microscopic anatomy in axial, coronal, and sagittal planes. Journal of Neurology, Neurosurgery and Psychiatry, 1985.
  78. Nicu Sebe, Ira Cohen, Ashutosh Garg, and Thomas S Huang. Machine learning in computer vision, volume 29. Springer Science & Business Media, 2005.
  79. Emily Simonoff, Andrew Pickles, Tony Charman, Susie Chandler, Tom Loucas, and Gillian Baird. Psychiatric disorders in children with autism spectrum disorders: prevalence, comorbidity, and associated factors in a population-derived sample. Journal of the American Academy of Child & Adolescent Psychiatry, 47(8):921–929, 2008.
  80. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  81. Stephen M Smith, Peter T Fox, Karla L Miller, David C Glahn, P Mickle Fox, Clare E Mackay, Nicola Filippini, Kate E Watkins, Roberto Toro, Angela R Laird, et al. Correspondence of the brain’s functional architecture during activation and rest. Proceedings of the National Academy of Sciences, 106(31):13040–13045, 2009.
  82. C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–9, June 2015. doi: 10.1109/CVPR.2015.7298594.
  83. Michael E Tipping. Sparse bayesian learning and the relevance vector machine. Journal of machine learning research, 1(Jun):211–244, 2001.
  84. Joseph Tomasch. Size, distribution, and number of fibres in the human Corpus Callosum. The Anatomical Record, 1954.
  85. Christian J Van den Branden Lambrecht. Vision models and applications to image and video processing. Springer Science & Business Media, 2013.
  86. Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013.
  87. Ganesan Venkatasubramanian, George Anthony, Umesh Srinivasa Reddy, Varun Venkatesh Reddy, Peruvumba N. Jayakumar, and Vivek Benegal. Corpus callosum abnormalities associated with greater externalizing behaviors in subjects at high risk for alcohol dependence. Psychiatry Research: Neuroimaging, 156(3):209 – 215, 2007. ISSN 0925-4927.
  88. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 1096–1103. ACM, 2008. ISBN 978-1-60558-205-4. doi: 10.1145/1390156.1390294. URL http://doi.acm.org/10.1145/1390156.1390294.
  89. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(Dec):3371–3408, 2010.
  90. Gordon D Waiter, Justin HG Williams, Alison D Murray, Anne Gilchrist, David I Perrett, and Andrew Whiten. Structural white matter deficits in high-functioning individuals with autistic spectrum disorder: a voxel-based investigation. Neuroimage, 24(2):455–461, 2005.
  91. Sandra F Witelson. Hand and sex differences in the isthmus and genu of the human corpus callosum: a postmortem morphological study. Brain, 112(3):799–835, 1989.
  92. Lei Yu and Huan Liu. Feature selection for high-dimensional data: A fast correlation-based filter solution. In Proceedings of the 20th international conference on machine learning (ICML-03), pages 856–863, 2003.
  93. Paul A Yushkevich, Joseph Piven, Heather Cody Hazlett, Rachel Gimpel Smith, Sean Ho, James C Gee, and Guido Gerig. User-guided 3d active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage, 31(3):1116–1128, 2006.
  94. Eran Zaidel and Marco Iacoboni. The parallel brain: the cognitive neuroscience of the corpus callosum. MIT press, 2003.
  95. Tao Jiang and Michael Q Zhang. Current topics in computational molecular biology. MIT Press, 2002.
  96. Fengwei Zhou, Bin Wu, and Zhenguo Li. Deep meta-learning: Learning to learn in the concept space. CoRR, arXiv:1802.03596, 2018.