WISDoM: characterizing neurological timeseries with the Wishart distribution

Abstract

WISDoM (Wishart Distributed Matrices) is a new framework for quantifying the deviation of symmetric positive-definite matrices associated with experimental samples, such as covariance or correlation matrices, from expected ones governed by the Wishart distribution. WISDoM can be applied to supervised learning tasks, such as classification, in particular when such matrices are generated by data of different dimensionality (e.g. time series with the same number of variables but different time sampling). We show the application of the method in two different scenarios. The first is the ranking of features associated with electroencephalogram (EEG) data with a time-series design, providing a theoretically sound approach for this type of study. The second is the classification of autistic subjects of the ABIDE study, using brain connectivity measurements.

1 Introduction

High-dimensional, time-structured data are extremely common in fields such as finance, biophysics and biomedicine. Very often, experimental limitations lead to uneven sampling (i.e. a different number of time points in terms of frequency or duration) [4], and this poses problems for many types of analysis (e.g. sample classification). As a consequence, clipping or padding techniques are applied, altering the underlying temporal structure. In recent years, studies on such data have seen increasing popularity in a wide range of fields, from functional magnetic resonance imaging (fMRI) [30, 33, 1] to time-series exploration for critical-transition prediction in clinical scenarios [10, 13]. The common goal of this type of research is to develop models and algorithms capable of reaching the highest possible classification and prediction performance, for diagnostic and real-time applications, while unveiling underlying information about a system. Reproducibility and generalization issues of commonly applied methods are partly caused by ad-hoc preprocessing of data, due to the lack of simple null models, often substituted by reshuffling-based null models. We introduce a method based on the statistical distribution of symmetric positive-definite matrices (i.e. covariance and correlation matrices) extracted from data, using the Wishart distribution as a null model, as a possible way to overcome some of the aforementioned issues. Properties of distributions of random symmetric positive-definite matrices have proven useful in fields such as condensed matter, especially in the study of disordered systems [34, 9]. The WISDoM method exploits the properties of the Wishart distribution to compute limit distributions for the classes of samples in a classification problem, and a log-likelihood-based score is defined for the single variables to quantify their relevance in the classification task.

2 Method

2.1 The Wishart Distribution

The Wishart distribution is a probability distribution of random nonnegative-definite matrices that is used to model random covariance matrices.

The parameter $n$ is the number of degrees of freedom (e.g. the number of points in the time series), and $\Sigma$ is a $p \times p$ nonnegative-definite symmetric matrix (with $p$ the number of variables, or features, of the time series) called the scale matrix.

Def. Let $X_1, \dots, X_n \sim \mathcal{N}_p(0, \Sigma)$ be independent, identically distributed vectors, forming an $n \times p$ data matrix $X = (X_1, \dots, X_n)^\top$. The distribution of the $p \times p$ random matrix $M = X^\top X$ is a Wishart distribution $W_p(\Sigma, n)$. [14]

We have then by definition:

$$W_p(\Sigma, n) = \mathcal{L}\left(\sum_{i=1}^{n} X_i X_i^\top\right) \quad (1)$$

so that $W_p(\Sigma, n)$ is the distribution of a sum of rank-one matrices defined by independent normal vectors $X_i$ with $\mathbb{E}[X_i] = 0$ and $\mathrm{Cov}(X_i) = \Sigma$.

In particular, it holds for the present case:

$$n\hat{\Sigma} = \sum_{i=1}^{n} X_i X_i^\top \sim W_p(\Sigma, n), \quad (2)$$

where $\hat{\Sigma}$ is the empirical covariance matrix of the (zero-mean) data.
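As a minimal numerical sketch (ours, not part of the original paper), the rank-one-sum construction in eq. (1) and the scatter-matrix property in eq. (2) can be checked directly; all values and variable names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 4, 200                               # features, degrees of freedom (illustrative)
Sigma = 0.3 * np.ones((p, p)) + np.eye(p)   # an arbitrary positive-definite scale matrix

# One W_p(Sigma, n) sample: sum of n rank-one outer products of N_p(0, Sigma) vectors
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # n x p data matrix
W = X.T @ X                                               # eq. (1)

# Monte Carlo check that the scatter matrix X^T X concentrates around n * Sigma
mean_W = np.zeros((p, p))
for _ in range(500):
    Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    mean_W += Z.T @ Z
mean_W /= 500
print(np.max(np.abs(mean_W / n - Sigma)))   # small deviation, consistent with eq. (2)
```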

2.2 PDF Computation for Invertible $\Sigma$

In general, any $\Sigma \in S_p^+$ can be represented as

$$\Sigma = A A^\top \quad (3)$$

so that

$$W_p(\Sigma, n) = \mathcal{L}\left(A M A^\top\right), \qquad M \sim W_p(I, n). \quad (4)$$

The easiest way to find $A$ in terms of $\Sigma$ is the Cholesky decomposition, which finds a unique lower triangular matrix $A$ with $A_{ii} > 0$ such that $A A^\top = \Sigma$.

Then by (1) and (4), with $Z_1, \dots, Z_n \sim \mathcal{N}_p(0, I)$ independent, we have:

$$W_p(\Sigma, n) = \mathcal{L}\left(\sum_{i=1}^{n} (A Z_i)(A Z_i)^\top\right) \quad (5)$$

where $A Z_i \sim \mathcal{N}_p(0, \Sigma)$ and $\Sigma = A A^\top$.
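A short sketch (ours, with illustrative values) of the construction in eqs. (3)-(5): the factor $A$ is obtained with np.linalg.cholesky, and a Wishart draw is built from standard normal vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 100
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])    # positive-definite

A = np.linalg.cholesky(Sigma)          # unique lower triangular with positive diagonal
print(np.allclose(A @ A.T, Sigma))     # True: eq. (3)

Z = rng.standard_normal((n, p))        # n independent N_p(0, I) vectors
AZ = A @ Z.T                           # columns are A z_i ~ N_p(0, Sigma)
W = AZ @ AZ.T                          # sum of (A z_i)(A z_i)^T: a W_p(Sigma, n) draw, eq. (5)
```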

Assuming that $n \ge p$ and $\Sigma$ is invertible, the density of the random matrix $M$ in (1) can be written

$$f(M) = \frac{|M|^{(n-p-1)/2} \exp\left(-\frac{1}{2}\operatorname{tr}(\Sigma^{-1} M)\right)}{2^{np/2}\, |\Sigma|^{n/2}\, \Gamma_p\left(\frac{n}{2}\right)} \quad (6)$$

so that $f(M) = 0$ unless $M$ is symmetric and positive-definite. [2]

Note that in (6) we define $\Gamma_p$ as the generalized (multivariate) gamma function:

$$\Gamma_p\left(\frac{n}{2}\right) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(\frac{n}{2} - \frac{j-1}{2}\right). \quad (7)$$
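For reference, a hedged sketch (ours) of eqs. (6)-(7) in code: the generalized gamma of eq. (7) is available as scipy.special.multigammaln, and the result can be cross-checked against SciPy's reference implementation; the function name wishart_logpdf is our own:

```python
import numpy as np
from scipy.special import multigammaln   # log of the generalized gamma, eq. (7)
from scipy.stats import wishart

def wishart_logpdf(M, n, Sigma):
    """Log of the density in eq. (6) for a symmetric positive-definite M."""
    p = Sigma.shape[0]
    _, logdet_M = np.linalg.slogdet(M)
    _, logdet_S = np.linalg.slogdet(Sigma)
    return (0.5 * (n - p - 1) * logdet_M
            - 0.5 * np.trace(np.linalg.solve(Sigma, M))   # -tr(Sigma^-1 M) / 2
            - 0.5 * n * p * np.log(2.0)
            - 0.5 * n * logdet_S
            - multigammaln(0.5 * n, p))

p, n = 3, 50
Sigma = np.eye(p)
M = wishart.rvs(df=n, scale=Sigma, random_state=1)
print(np.isclose(wishart_logpdf(M, n, Sigma),
                 wishart.logpdf(M, df=n, scale=Sigma)))   # True
```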

2.3 Estimation of the Wishart Parameters from Empirical Covariance

We justify the use of the Wishart distribution under the assumption of multivariate-Gaussian-distributed data. This assumption is generally good for a wide range of problems. Furthermore, the use of the average covariance matrix (obtained from all the elements of one class) to compute the scale matrix for the class's estimated distribution will be shown to be a good approximation of a complete Bayesian model.

This is done by considering that the Wishart distribution is the conjugate prior of a multivariate Gaussian distribution, just as the Gamma distribution is in the univariate Gaussian case. Consider a Gaussian model with known mean $\mu$, so that the free parameter is the variance $\sigma^2$, as in [22]; the likelihood function is defined as follows:

$$p(x_{1:n} \mid \sigma^2) = \prod_{i=1}^{n} \mathcal{N}(x_i \mid \mu, \sigma^2) \quad (8)$$
$$\propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\right). \quad (9)$$

The conjugate prior is an inverse Gamma distribution. Recall that $\sigma^2$ has an inverse Gamma distribution with parameters $(\alpha, \beta)$ when $1/\sigma^2 \sim \mathrm{Gamma}(\alpha, \beta)$.

The density then takes the form

$$p(\sigma^2) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, (\sigma^2)^{-(\alpha+1)} \exp\left(-\frac{\beta}{\sigma^2}\right). \quad (10)$$

Using this prior, the posterior distribution of $\sigma^2$ is given by

$$\sigma^2 \mid x_{1:n} \sim \mathrm{InvGamma}\left(\alpha + \frac{n}{2},\; \beta + \frac{1}{2}\sum_{i=1}^{n}(x_i - \mu)^2\right). \quad (11)$$
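As a small illustration (ours), the conjugate update of eq. (11) in code; names and hyperparameter values are arbitrary:

```python
import numpy as np

def invgamma_posterior(x, mu, alpha, beta):
    """Eq. (11): sigma^2 | x ~ InvGamma(alpha + n/2, beta + sum((x - mu)^2)/2)."""
    n = len(x)
    return alpha + 0.5 * n, beta + 0.5 * np.sum((x - mu) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=1000)                    # true sigma^2 = 4
a_post, b_post = invgamma_posterior(x, 0.0, 1.0, 1.0)
print(b_post / (a_post - 1))                           # posterior mean of sigma^2, close to 4
```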

In the multidimensional setting, the inverse Wishart takes the place of the inverse Gamma. It has already been stated that the Wishart distribution is a distribution over symmetric positive semi-definite $p \times p$ matrices $\Lambda$. A more compact form of the density is given by

$$W(\Lambda \mid \Sigma, n) = \frac{1}{Z}\, |\Lambda|^{(n-p-1)/2} \exp\left(-\frac{1}{2}\operatorname{tr}(\Sigma^{-1}\Lambda)\right) \quad (12)$$
$$Z = 2^{np/2}\, |\Sigma|^{n/2}\, \Gamma_p\left(\frac{n}{2}\right) \quad (13)$$

where the parameters are the degrees of freedom $n$ and the positive-definite scale matrix $\Sigma$.

If $\Lambda^{-1} \sim W(\Sigma, n)$ we can then state that $\Lambda$ has an Inverse Wishart distribution, whose density has the form

$$W^{-1}(\Lambda \mid \Psi, \nu) = \frac{|\Psi|^{\nu/2}}{2^{\nu p/2}\, \Gamma_p\left(\frac{\nu}{2}\right)}\, |\Lambda|^{-(\nu+p+1)/2} \exp\left(-\frac{1}{2}\operatorname{tr}(\Psi \Lambda^{-1})\right). \quad (14)$$

Let $x_1, \dots, x_n \sim \mathcal{N}_p(0, \Lambda)$ be the observed data. Then an inverse Wishart prior $W^{-1}(\Psi, \nu)$ multiplying the likelihood yields

$$p(\Lambda \mid x_{1:n}) \propto p(x_{1:n} \mid \Lambda)\, p(\Lambda) \quad (15)$$
$$\propto |\Lambda|^{-n/2} \exp\left(-\frac{n}{2}\operatorname{tr}(\Lambda^{-1} S)\right) |\Lambda|^{-(\nu+p+1)/2} \exp\left(-\frac{1}{2}\operatorname{tr}(\Psi \Lambda^{-1})\right) \quad (16)$$
$$= |\Lambda|^{-(\nu+n+p+1)/2} \exp\left(-\frac{1}{2}\operatorname{tr}\left((\Psi + nS)\Lambda^{-1}\right)\right) \quad (17)$$

where $S$ is the empirical covariance:

$$S = \frac{1}{n}\sum_{i=1}^{n} x_i x_i^\top. \quad (18)$$

Thus, an a posteriori distribution with the form

$$\Lambda \mid x_{1:n} \sim W^{-1}(\Psi + nS,\; \nu + n) \quad (19)$$

is obtained.
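A minimal sketch (ours) of the matrix-variate update of eqs. (18)-(19), assuming zero-mean data; the function and parameter names are illustrative:

```python
import numpy as np

def inv_wishart_posterior(X, Psi, nu):
    """Eq. (19): Lambda | X ~ InvWishart(Psi + n * S, nu + n),
    with S the empirical covariance of the zero-mean rows of X, eq. (18)."""
    n = X.shape[0]
    S = (X.T @ X) / n          # eq. (18)
    return Psi + n * S, nu + n

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), np.diag([1.0, 2.0, 3.0]), size=500)
Psi_post, nu_post = inv_wishart_posterior(X, Psi=np.eye(3), nu=5)
print(Psi_post / (nu_post - 3 - 1))   # posterior mean of Lambda, close to diag(1, 2, 3)
```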

Similarly, it can be stated that for the inverse covariance (precision) matrix the conjugate prior is a Wishart distribution.

2.4 Class-Wise Estimated Distribution

The core idea of the WISDoM method is to represent each element undergoing classification as a covariance matrix of its features. Nominally, each element can be characterized by the covariance matrix extracted from the repeated observations of the vector of its features, for example derived from a time series. The aim is to use the free parameters of the Wishart distribution (the scale matrix and the number of degrees of freedom, as shown in (6)) to compute an estimated distribution for a certain class of elements, and then assign a single element to a given class by computing a log-likelihood between the element being analyzed and each class. Furthermore, a score can be assigned to each feature by estimating the variation, in terms of log-likelihood, due to its removal from the feature set. If the removal of a feature causes a significant increase (or decrease) in the log-likelihood, it can be stated that such a feature is highly representative of the system analyzed. Thus, the WISDoM approach allows us not only to assign a given element to a class, but also to identify the features with the highest relevance in the classification process.

Covariance matrices are a good choice for a distance metric in a classification task, both for the way they represent a system and for the property that the average of a set of covariance matrices is a covariance matrix itself. If each element of a given class is represented by a covariance matrix of its features, this property allows us to estimate a distribution for the class by choosing

$$\Sigma_c = \frac{1}{N_c}\sum_{i=1}^{N_c} C_i \quad (20)$$

where the $C_i$ are the covariance matrices of the $N_c$ elements of class $c$.

The other necessary parameter for the estimation is the number of degrees of freedom $n$. Assume that a $p$-dimensional vector $x_j$ of features is associated with each element $j$ of a given class, and that $n$ observations of this vector are available. The covariance matrix $C_j$ computed over the $n$ observations will represent the interactions between the features of element $j$. The number of degrees of freedom of the Wishart distribution is then given by the number of times $x_j$ is observed.

Let us give an example tied to functional MR brain imaging. An image of patient $j$'s brain is acquired; as usual, these images are divided into a certain number of zones (voxels, pixels, etc.), each zone being sampled $n$ times over a given time interval in order to observe a certain type of brain activity and functionality. In this example, the features contained in the vector $x_j$ associated with patient $j$ are the zones chosen to divide the brain image, each zone having been sampled $n$ times during an acquisition interval. The correlation matrix $C_j$ is then representative of the functional correlation between the brain areas. Repeating this procedure for the patients of a known class (i.e. a diagnostic group) and computing the scale matrix for the class allows us to estimate a Wishart distribution for that class and draw samples from it.
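A sketch (ours) of the class-estimation step just described: the scale matrix is the average of the per-element covariance matrices (eq. 20) and the degrees of freedom are the number of samplings; all names and the fake data are illustrative:

```python
import numpy as np
from scipy.stats import wishart

def estimate_class(covariances, n_observations):
    """Class-wise Wishart estimate: scale matrix as the average of the
    elements' covariance matrices (eq. 20), df as the number of samplings."""
    Sigma_c = np.mean(np.asarray(covariances), axis=0)
    return Sigma_c, n_observations

# Fake example: 30 subjects, p = 5 features, n = 100 samplings each
rng = np.random.default_rng(0)
covs = [np.cov(rng.normal(size=(5, 100))) for _ in range(30)]
Sigma_c, n = estimate_class(covs, 100)
sample = wishart.rvs(df=n, scale=Sigma_c, random_state=1)   # draw from the class distribution
```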

2.5 Log-Likelihood Ratio Score

After defining how to represent class distributions, WISDoM allows us to compute the log-likelihood for each element of belonging to one of the classes. Moreover, WISDoM allows us to compute the variation of log-likelihood ratio scores due to the removal of features, singly or in groups, thus estimating how much the classification performance changes. Uninformative (or less informative) features can thus be pruned, allowing for a dimensionality reduction of the initial feature set. The whole process can be seen as a feature transformation, mapping the covariance matrix of subject $j$ to a score vector formed by the change in log-likelihood for each feature.

Complete Matrix Score

The WISDoM classifier relies upon computing the log-likelihood of a matrix $C_j$ with respect to the Wishart distribution estimated for a class $c$, using $\Sigma_c$ as the scale matrix. If a problem concerning two given classes $A$ and $B$ is taken into account, the score assigned to each $C_j$ can be defined as follows:

$$s_{AB}(C_j) = \log P\left(C_j \mid W_p(\Sigma_A, n)\right) - \log P\left(C_j \mid W_p(\Sigma_B, n)\right) \quad (21)$$

where $\Sigma_A$ and $\Sigma_B$ are the scale matrices computed for the two classes respectively, and $\log P\left(C_j \mid W_p(\Sigma_c, n)\right)$ is the logarithm of the probability of $C_j$ belonging to the Wishart distribution estimated for one of the two classes.
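A hedged sketch of eq. (21) using SciPy's Wishart density (the paper's own implementation may differ); `n` is the class degrees of freedom:

```python
from scipy.stats import wishart

def complete_matrix_score(C_j, Sigma_A, Sigma_B, n):
    """Eq. (21): log-likelihood ratio of the subject matrix C_j under the
    Wishart distributions estimated for classes A and B.
    Requires C_j symmetric positive-definite and n >= p (see Section 2.2)."""
    return (wishart.logpdf(C_j, df=n, scale=Sigma_A)
            - wishart.logpdf(C_j, df=n, scale=Sigma_B))
```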

Single Feature Score

WISDoM allows us to obtain information about the features used for classification by reducing the matrix to its principal submatrices (see Appendix). An important property of a symmetric positive-definite matrix is that any of its principal submatrices is also symmetric and positive-definite.

By removing one feature from the dataset, calculating the WISDoM scores, and iterating this process over all the features (i.e. analyzing all the principal submatrices of $C_j$, $\Sigma_A$ and $\Sigma_B$), the method can assign a score to each feature, representing its relevance in the decision to assign $C_j$ to one class or the other. Let $C_j^{(i)}$ be the principal submatrix of order $p-1$ of the matrix $C_j$ computed on the observations of element $j$, obtained by deleting the $i$-th row and column. Similarly, let $\Sigma_c^{(i)}$ be the principal submatrix of order $p-1$ of the scale matrix computed for class $c$. The score assigned to each feature $i$ of $C_j$ is then given by eq. (23).

$$s_{AB}^{(i)}(C_j) = \log P\left(C_j^{(i)} \mid W_{p-1}(\Sigma_A^{(i)}, n)\right) - \log P\left(C_j^{(i)} \mid W_{p-1}(\Sigma_B^{(i)}, n)\right) \quad (22)$$
$$\Delta s_i = s_{AB}(C_j) - s_{AB}^{(i)}(C_j) \quad (23)$$

In a 2-class example, we obtain a score vector as follows:

$$\vec{s}(C_j) = \left(\Delta s_1, \dots, \Delta s_p\right) \quad (24)$$
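A sketch (ours) of the per-feature scoring loop of eqs. (22)-(24): each feature is dropped by deleting the corresponding row and column (a principal submatrix, which stays SPD), and the change in the log-likelihood ratio is recorded:

```python
import numpy as np
from scipy.stats import wishart

def drop_feature(M, i):
    """Principal submatrix: remove row and column i (stays SPD)."""
    return np.delete(np.delete(M, i, axis=0), i, axis=1)

def single_feature_scores(C_j, Sigma_A, Sigma_B, n):
    """Eqs. (22)-(24): score vector of log-likelihood-ratio changes."""
    llr = lambda C, SA, SB: (wishart.logpdf(C, df=n, scale=SA)
                             - wishart.logpdf(C, df=n, scale=SB))
    full = llr(C_j, Sigma_A, Sigma_B)                    # eq. (21)
    return np.array([
        full - llr(drop_feature(C_j, i),                 # eq. (22) for feature i
                   drop_feature(Sigma_A, i),
                   drop_feature(Sigma_B, i))
        for i in range(C_j.shape[0])                     # eqs. (23)-(24)
    ])
```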

A generalization to dimensionality reduction can be found in the supplemental materials.

3 Results

3.1 Eye state detection via EEG

The dataset used was downloaded from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/). This dataset was chosen for several reasons: it is openly accessible; it contains records from 14 electrodes with standard headset placement (fig. 1), thus making the features of our problem directly linked to brain topology; and a published classification performance benchmark on the dataset exists [27]. The data consisted of a series of 14980 time points, sampled for each of the 14 electrodes and labelled with a 1 or a 0 to mark whether the eyes of the subject were open or closed at that time point. The time series has been split into batches of different lengths according to eye state changes. In this way, a correlation matrix can be extracted for each batch (the "elements" of this classification problem), while the length of each batch is used for computing the degrees of freedom of each class's Wishart distribution during training. A total of 140 batches of various lengths, 70 with eye state 1 and 70 with eye state 0, were obtained.

The representative matrix for each class is computed as the average (weighted by the length of each batch) of the matrices of the elements belonging to eye state 0 or eye state 1, excluding the element to be predicted in a leave-one-out fashion in order to avoid overfitting. By doing this, we verify that the method is independent of the sampling window chosen when applied to time-series data, with the only constraint that the length of such a window cannot be less than the number of features of the system.
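A sketch (ours, with illustrative names) of the leave-one-out, length-weighted class estimate described above:

```python
import numpy as np

def loo_class_scale(covs, lengths, labels, held_out, target_class):
    """Weighted (by batch length) average of the training batches of
    `target_class`, excluding the held-out batch."""
    idx = [i for i, y in enumerate(labels)
           if y == target_class and i != held_out]
    weights = np.array([lengths[i] for i in idx], dtype=float)
    stack = np.array([covs[i] for i in idx])
    return np.average(stack, axis=0, weights=weights)
```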

After feature score computation, a stochastic grid search over a set of classifiers was performed in order to obtain the best prediction performance with the transformed features. All the classification tasks are validated through 10-fold cross-validation. Versions and references for all Python packages used can be found in the supplemental material and at [20, 23, 17, 28, 24, 21, 25, 31].

We first tried to assess eye state using the complete matrix score, as in eq. (21). The classifiers reported in fig. (2) were trained and tuned with the aim of obtaining the best possible performance. However, in this scenario the resulting classification performances were poor, reaching an accuracy of in the best cases. We then proceeded to compute single-feature scoring, as in eq. (23), obtaining a feature transformation. As in fig. (2), different classifiers belonging to two main categories (decision trees and linear classifiers) were trained on the transformed features. The best performance was achieved with a C-Support Vector Machine (Python 3.6 SciKitLearn implementation), resulting in a ROC AUC score and an accuracy score of , comparable with the benchmark accuracy set by Rajesh Kanna et al. [27].

To assess which features contain the largest amount of useful information for prediction, a set of single-feature C-SVM classifications was performed (fig. 3): a performance of accuracy is obtained by using only the top three ranking electrodes (fig. 4). Training the classifier with the top three electrodes yields a local maximum in the performance landscape, highlighting the importance of the information recorded by these three electrodes about the state of the whole system.

3.2 Autism classification via fMRI

The Autism Brain Imaging Data Exchange [11] is a consortium effort, aggregating fMRI datasets from individuals with autism spectrum disorders (ASD) and age-matched typical controls (TC). Data from 17 acquisition sites were merged, using different preprocessing tools and pipelines [8]. Complete information about the dataset can be found at http://preprocessed-connectomes-project.org/abide/. For our classification task, we focused on male subjects of the "Autism" diagnostic group (AUT): we analyzed a total of 369 TC and 220 AUT subjects, with 200 time points each (the number of degrees of freedom of the Wishart distribution). The preprocessing pipeline chosen for the extraction of the average time series of the ROIs is the Connectome Computation System (CCS), with global signal correction and the application of a bandpass filter (0.01-0.1 Hz). The 116 ROIs (features), from which covariance matrices are extracted, are labelled according to the Automated Anatomical Labeling of the brain (AAL2) [29].

The representative matrix for each class is computed as the average of the matrices of the elements belonging to class AUT or TC, excluding the sets of elements to be predicted. For this task, a shuffled 10-fold splitting of the dataset was used for feature score computation to avoid overfitting. After feature score computation, the parameters of a C-Support Vector Machine classifier were fine-tuned in order to obtain the best prediction performance with the transformed features. All the classification tasks are validated through stratified 10-fold cross-validation, in order to minimize the effects of class imbalance in train and test sets. Versions and references for all Python packages used can be found in the supplemental material and at [20, 23, 17, 28, 24, 21, 25, 31].
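For concreteness, a hedged sketch of this validation setup on synthetic stand-ins (the actual tuned hyperparameters are not reported here; feature matrix and labels are placeholders):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
scores = rng.normal(size=(589, 116))       # stand-in for the WISDoM-transformed features
y = np.array([0] * 369 + [1] * 220)        # 369 TC and 220 AUT labels, as in the text

clf = SVC(kernel="rbf", C=1.0)             # illustrative hyperparameters
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, scores, y, cv=cv, scoring="accuracy")
print(acc.mean(), acc.std())
```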

The C-SVM classifier trained on the transformed features resulted in an accuracy score of and a ROC AUC score of . Furthermore, we obtained a ROC AUC score of and an accuracy of with a fine-tuned Random Forest classifier. We also compared the classifiers in fig. 2, training them with WISDoM-transformed features and non-transformed features (nominally, the elements of the lower triangle of each covariance matrix). Results in fig. 5 show an overall improvement in classification performance when using transformed features. As a comparison, the state of the art for classification on the entire ABIDE dataset is an accuracy of , obtained with a deep learning architecture built by Heinsfeld et al. [15]. This result on the whole autism spectrum required the use of several stacked denoising autoencoders and hidden layers, resulting in a long, time-consuming training routine ( hours), while WISDoM obtained satisfying classification performance in a much shorter time ( minutes, including the feature transformation, which is the most time-consuming step of the pipeline).

4 Discussion

The WISDoM framework has been introduced: a method for modelling symmetric positive-definite matrices, such as the covariance and correlation matrices used in a wide array of problems. It can provide a null model for classification purposes in which each sample is represented as a covariance/correlation matrix, even if the number of observations (e.g. the length of the time series) differs from sample to sample. This property makes the WISDoM method suitable for problems with non-homogeneous data size, for example time series with uneven lengths, missing points or irregularly sampled data. Moreover, we show that a feature transformation based on WISDoM scores can be used for dimensionality reduction, providing a ranking of the most important variables in the dataset. While showing good generalization capabilities with time-series data and non-homogeneous sampling issues, the method is not suitable when the number of features exceeds the number of observations ($p > n$). This is a theoretical limit tied to the invertibility of the scale matrix required to compute the Wishart probability density function. At present, WISDoM thus cannot be applied to problems involving so-called "long data", such as gene expression tables, unless corrections such as matrix regularization methods and hierarchical methods such as power priors [18] are considered.

The method has been tested on the EEG eye state prediction dataset of the open UCI Machine Learning Repository, slightly improving on the previous classification benchmark with little to no preprocessing, and giving useful insights into the minimum number and location of electrodes needed to record sufficient information for the task. Moreover, the method has been applied to the classification of a subset of the ABIDE dataset, using brain functional connectivity data. We obtained satisfying classification scores, comparable with the state-of-the-art classification results on the dataset, with very simple classifiers and without the use of additional time-consuming processing routines. Furthermore, the Bayesian-like framework of score computation through log-likelihood could allow for a form of online learning, by continuously updating the estimate of each class's Wishart distribution. This property makes the WISDoM method also suitable for real-time learning during data acquisition.

Author contribution

EG, CM, GC and DR designed the research. CM and EG analyzed the data and implemented the method. CM, DR, GC and EG wrote the paper.

Figure captions

Figure 1: Electrodes position in the headset used for EEG dataset acquisition.
Figure 2: Performance comparison of different classifiers on the WISDoM EEG transformed features. The classifiers are reported as follows: RFC: Random Forest Classifier [6], DTC: Decision Tree Classifier [5, 3], ADA: AdaBoost Tree Classifier [12, 34], LDA: Linear Discriminant Analysis Classification [19], LogReg: Logistic Regression Classifier [16], Perc: Perceptron Classifier [32], SVM: C-Support Vector Machine [7, 26]. All classifiers are SciKitLearn implementations.
Figure 3: EEG feature-ranking-based classification performance (ROC AUC score). Temporal (T8) and outer frontal (F8, F7) electrodes seem to convey the most important signals for eye state prediction.
Figure 4: EEG feature performance landscape for an increasing number of ranked features used for classification. Labels on the X-axis indicate which feature is added to the previous ones.
Figure 5: Performance comparison of different classifiers on the WISDoM-transformed features and non-transformed features in the ABIDE dataset. The classifiers are reported as follows: RFC: Random Forest Classifier [6], DTC: Decision Tree Classifier [5, 3], ADA: AdaBoost Tree Classifier [12, 34], LDA: Linear Discriminant Analysis Classification [19], LogReg: Logistic Regression Classifier [16], Perc: Perceptron Classifier [32], SVM: C-Support Vector Machine [7, 26]. All classifiers are SciKitLearn implementations.

References

  1. A. ,F, M. A. ,SN, S. ,A, B. ,H and M. Daliri (2019) Granger causality analysis in combination with directed network measures for classification of ms patients and healthy controls using task-related fmri. Comput Biol Med 115. Cited by: §1.
  2. T. W. Anderson (2012) An introduction to multivariate statistical analysis. John Wiley and Sons. Cited by: §2.2.
  3. C. Barros Rodrigo, M. P. Basgalupp, A. C. P. L. F. Carvalho and A. Freitas Alex (2012) A survey of evolutionary algorithms for decision-tree induction. IEEE Transactions on Systems, Man and Cybernetics. Part C: Applications and Reviews 42, pp. 291–312. Cited by: Figure 2, Figure 5.
  4. A. Bazzani, G. Castellani and L. Cooper (2010) Eigenvalue distributions for a class of covariance matrices with application to bienenstock-cooper-munro neurons under noisy conditions. Physical Review E 81. Cited by: §1.
  5. L. Breiman, J. Friedman, R. Olshen and C. Stone (1984) Classification and regression trees. Wadsworth. Cited by: Figure 2, Figure 5.
  6. L. Breiman (2001) Random forests. Machine Learning 45, pp. 5–32. Cited by: Figure 2, Figure 5.
  7. C. Chang and C. Lin (2013–) LIBSVM: a library for support vector machines. Note: Department of Computer Science National Taiwan University Cited by: Figure 2, Figure 5.
  8. C. Craddock, Y. Benhajali, C. Chu, F. Chouinard, A. Evans, A. Jakab, B. S. Khundrakpam, J. D. Lewis, Q. Li, M. Milham, C. Yan and P. Bellec () The neuro bureau preprocessing initiative: open sharing of preprocessed neuroimaging data and derivatives. Frontiers in Neuroinformatics (41). External Links: Link, Document, ISSN 1662-5196 Cited by: §3.2.
  9. A. Crisanti, G. Paladin, M. Serva and A. Vulpiani (1994) Products of random matrices for disordered systems. Physical Review E 49, pp. R953–R955. Cited by: §1.
  10. D. Cuesta-Frau, P. Miró-Martínez, S. Oltra-Crespo, A. Molina-Picó, P. Dakappa, C. Mahabala, B. Vargas and P. González (2019) Classification of fever patterns using a single extracted entropy feature: a feasibility study based on sample entropy. Math Biosci Eng. 17, pp. 235–249. Cited by: §1.
  11. A. Di Martino, C. Yan, Q. Li, E. Denio, F. Castellanos, K. Alaerts, J. Anderson, M. Assaf, S. Y. Bookheimer, M. Dapretto, B. Deen, S. Delmonte, I. Dinstein, B. Ertl-Wagner, D. Fair, L. Gallagher, D. Kennedy, C. Keown, C. Keysers, J. Lainhart, C. Lord, B. Luna, V. Menon, N. Minshew, C. Monk, S. Mueller, R. Muller, M. Nebel, J. Nigg, K. O'Hearn, K. Pelphrey, S. Peltier, J. Rudie, S. Sunaert, M. Thioux, J. Tyszka, L. Uddin, J. Verhoeven, N. Wenderoth, J. Wiggins, S. Mostofsky and M. Milham (2014) The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry 19, pp. 659–667. Cited by: §3.2.
  12. Y. Freund and R. Schapire (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55, pp. 119–139. Cited by: Figure 2, Figure 5.
  13. P. Ghalati, S. Samal, J. Bhat, R. Deisz, G. Marx and A. Schuppert (2019) Critical transitions in intensive care units: a sepsis case study. Sci Rep. 9. Cited by: §1.
  14. W. Hardle and L. Simar (2003) Applied multivariate statistical analysis. Springer Berlin. Cited by: §2.1.
  15. A. Heinsfeld, A. Franco, R. Craddock, A. Buchweitz and F. Meneguzzi (2017) Identification of autism spectrum disorder using deep learning and the abide dataset. Neuroimage Clin. 17, pp. 16–23. External Links: Document Cited by: §3.2.
  16. Y. Hsiang-Fu, H. Fang-Lan and L. Chih-Jen (2011) Dual coordinate descent methods for logistic regression and maximum entropy models. Machine Learning 85, pp. 41–75. Cited by: Figure 2, Figure 5.
  17. J. D. Hunter (2007) Matplotlib: a 2d graphics environment. Computing in Science & Engineering 9 (3), pp. 90–95. External Links: Document Cited by: §3.1, §3.2.
  18. J. Ibrahim, M. Chen, Y. Gwon and F. Chen (2015) The power prior: theory and applications. Wiley. External Links: Document Cited by: §4.
  19. G. J. McLachlan (2004) Discriminant analysis and statistical pattern recognition. Wiley Interscience. Cited by: Figure 2, Figure 5.
  20. E. Jones, T. Oliphant and P. Peterson (2001–) SciPy: open source scientific tools for Python. External Links: Link Cited by: §3.1, §3.2.
  21. T. Kluyver, B. Ragan-Kelley, F. Pérez, B. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. Hamrick, J. Grout, S. Corlay, P. Ivanov, D. Avila, S. Abdalla and C. Willing (2016) Jupyter notebooks – a publishing format for reproducible computational workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas, F. Loizides and B. Schmidt (Eds.), pp. 87 – 90. Cited by: §3.1, §3.2.
  22. H. Liu and L. Wasserman (2014) Statistical machine learning. CMU University. Cited by: §2.3.
  23. W. McKinney (2010) Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference, S. van der Walt and J. Millman (Eds.), pp. 51 – 56. Cited by: §3.1, §3.2.
  24. A. Meurer, C. P. Smith, M. Paprocki, O. Certík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, S. Singh, T. Rathnayake, S. Vig, B. E. Granger, R. P. Muller, F. Bonazzi, H. Gupta, S. Vats, F. Johansson, F. Pedregosa, M. J. Curry, A. R. Terrel, S. Roucka, A. Saboo, I. Fernando, S. Kulal, R. Cimrman and A. Scopatz (2017-01) SymPy: symbolic computing in python. PeerJ Computer Science 3, pp. e103. External Links: ISSN 2376-5992, Link, Document Cited by: §3.1, §3.2.
  25. F. Pérez and B. E. Granger (2007) IPython: a system for interactive scientific computing. Computing in Science & Engineering 9 (3), pp. 21–29. External Links: Document Cited by: §3.1, §3.2.
  26. J. Platt (1999) Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In Advances in Large Margin Classifiers, Cited by: Figure 2, Figure 5.
  27. K. Rajesh, V. Sabarinathan, V. Sarath Kumar and V. Sugumaran (2015) Eye state prediction using eeg signal and c4.5 decision tree algorithm. International Journal of Applied Engineering Research 10. Cited by: §3.1, §3.1.
  28. T. E. Oliphant (2006) A guide to NumPy. Trelgol Publishing. Cited by: §3.1, §3.2.
  29. N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer and M. Joliot (2002) Automated anatomical labeling of activations in spm using a macroscopic anatomical parcellation of the mni mri single-subject brain. NeuroImage 15 (1), pp. 273 – 289. External Links: ISSN 1053-8119, Document, Link Cited by: §3.2.
  30. M. Van Den Heuvel and H. E. H. Pol (2010) Exploring the brain network: a review on resting-state fmri functional connectivity. European Neuropsychopharmacology 20, pp. 519–534. Cited by: §1.
  31. M. Waskom, O. Botvinnik, D. O’Kane, P. Hobson, S. Lukauskas, D. C. Gemperline, T. Augspurger, Y. Halchenko, J. B. Cole, J. Warmenhoven, J. de Ruiter, C. Pye, S. Hoyer, J. Vanderplas, S. Villalba, G. Kunter, E. Quintero, P. Bachant, M. Martin, K. Meyer, A. Miles, Y. Ram, T. Yarkoni, M. L. Williams, C. Evans, C. Fitzgerald, Brian, C. Fonnesbeck, A. Lee and A. Qalieh (2017-09) Mwaskom/seaborn: v0.8.1 (september 2017). External Links: Document, Link Cited by: §3.1, §3.2.
  32. Y. Freund and R. E. Schapire (1999) Large margin classification using the perceptron algorithm. Machine Learning 37, pp. 277–296. Cited by: Figure 2, Figure 5.
  33. W. Yan, V. Calhoun, M. Song, Y. Cui, H. Yan, S. Liu, L. Fan, N. Zuo, Z. Yang, K. Xu, J. Yan, L. Lv, J. Chen, Y. Chen, H. Guo, L. Li, P. Wan, H. Wang, H. Wang, Y. Yang, H. Zhang, D. Zhang, T. Jiang and J. Sui (2019) Discriminating schizophrenia using recurrent neural network applied on time courses of multi-site fmri data. EBioMedicine 47, pp. 543–552. Cited by: §1.
  34. H. Zhu, S. Zou and T. Hastie (2009) Multi-class adaboost. Statistics and Its Interface 2, pp. 349–360. Cited by: §1, Figure 2, Figure 5.