Interpretability of Multivariate Brain Maps in Brain Decoding: Definition and Quantification


Seyed Mostafa Kia (University of Trento, Trento, Italy; Fondazione Bruno Kessler (FBK), Trento, Italy; Centro Interdipartimentale Mente e Cervello (CIMeC), Trento, Italy), seyedmostafa.kia@unitn.it
Abstract

Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study the spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed theoretical definition, we formalize a heuristic method for approximating the interpretability of multivariate brain maps in a binary magnetoencephalography (MEG) decoding scenario. Third, we propose to combine the approximated interpretability and the performance of the brain decoding model into a new multi-objective criterion for model selection. Our results for the MEG data show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.

keywords:
MVPA, brain decoding, brain mapping, interpretation, model selection
journal: arXiv


1 Introduction

Understanding the mechanisms of the brain has been a crucial topic throughout the history of science. Ancient Greek philosophers envisaged different functionalities for the brain ranging from cooling the body to acting as the seat of the rational soul and the center of sensation Crivellato and Ribatti (2007). Modern cognitive science, emerging in the 20th century, provides better insight into the brain’s functionality. In cognitive science, researchers usually analyze recorded brain activity and behavioral parameters to discover where, when, and how a brain region participates in a particular cognitive process.

To answer the key questions in cognitive science, scientists often employ mass-univariate hypothesis testing methods to test scientific hypotheses on a large set of independent variables Groppe et al. (2011); Maris and Oostenveld (2007). Mass-univariate hypothesis testing is based on performing multiple tests, e.g., t-tests, one for each unit of the neuroimaging data, i.e., for each independent variable. Although the high spatial and temporal granularity of the univariate tests provides good interpretability of results, the high dimensionality of neuroimaging data requires a large number of tests, which reduces the sensitivity of these methods after multiple comparison correction. Although some techniques such as the non-parametric cluster-based permutation test Maris and Oostenveld (2007) provide more sensitivity because of the cluster assumption, they still experience low sensitivity to brain activities that are narrowly distributed in time and space Groppe et al. (2011a,b). The multivariate counterparts of mass-univariate analysis, known generally as multivariate pattern analysis (MVPA), have the potential to overcome these deficits. Multivariate approaches are capable of identifying complex spatio-temporal interactions between different brain areas with higher sensitivity and specificity than univariate analysis van Gerven et al. (2009), especially in group analysis of neuroimaging data Davis et al. (2014).

Brain decoding Haynes and Rees (2006) is an MVPA technique that provides a model based on the recorded brain signal to predict the mental state of a human subject. There are two potential applications for brain decoding: 1) brain-computer interfaces (BCIs) Wolpaw et al. (2002); Waldert et al. (2008); van Gerven and Jensen (2009); Nicolas-Alonso and Gomez-Gil (2012), and 2) multivariate hypothesis testing Bzdok (2016). In the first case, a brain decoder with maximum prediction power is desired. In the second case, in addition to the prediction power, extra information on the spatio-temporal nature of a cognitive process is desired. In this study, we are interested in the second application of brain decoding, which can be considered a multivariate alternative for mass-univariate hypothesis testing.

In brain decoding, generally, linear classifiers are used to assess the relation between independent variables, i.e., features, and dependent variables, i.e., cognitive tasks Pereira et al. (2009); Lemm et al. (2011); Besserve et al. (2007). This assessment is performed by solving a linear optimization problem that assigns weights to each independent variable. Currently, brain decoding is the gold standard in multivariate analysis for functional magnetic resonance imaging (fMRI) Haxby et al. (2001); Cox and Savoy (2003); Mitchell et al. (2004); Norman et al. (2006) and magnetoencephalogram/electroencephalogram (MEEG) studies Parra et al. (2003); Rieger et al. (2008); Carroll et al. (2009); Chan et al. (2011); Huttunen et al. (2013); Vidaurre et al. (2013); Abadi et al. (2015). It has been shown that brain decoding can be used in combination with brain encoding Naselaris et al. (2011) to infer the causal relationship between stimuli and responses Weichwald et al. (2015).

Brain mapping Kriegeskorte et al. (2006) is a higher form of neuroimaging that assigns pre-computed quantities, e.g., univariate statistics or weights of a linear classifier, to the spatio-temporal representation of neuroimaging data. In MVPA, brain mapping uses the learned parameters from brain decoding to produce brain maps, in which the engagement of different brain areas in a cognitive task is visualized. The interpretability of a brain decoder generally refers to the level of information that can be reliably derived by an expert from the resulting maps. From the neuroscientific perspective, a brain map is considered interpretable if it enables the scientist to answer where, when, and how questions.

Typically, a trained classifier provides a black box that predicts the label of an unseen data point with some accuracy. Valverde-Albacete and Peláez-Moreno (2014) experimentally showed that, in a classification task, optimizing only the classification error rate is insufficient to capture the transfer of crucial information from the input to the output of a classifier. It is also shown by Ramdas et al. (2016) that, in the case of data with a small sample size, high dimensionality, and a low signal-to-noise ratio, using the classification accuracy as a test statistic for two-sample testing should be done with extra caution. Besides these limitations of classification accuracy in inference, and considering the fact that the best predictive model might not be the most informative one Turner (2015), brain decoding, taken alone, only answers the question of what the most likely label of a given unseen sample is Baehrens et al. (2010). This fact is generally known as the knowledge extraction gap Vellido et al. (2012) in the classification context. Therefore, despite the theoretical advantages of MVPA, its practical application to inferences regarding neuroimaging data is limited primarily by a lack of interpretability Sabuncu (2014); Haynes (2015); Naselaris and Kay (2015). Thus far, many efforts have been devoted to filling the knowledge extraction gap of linear and non-linear data modeling methods in different areas such as computer vision Bach et al. (2015), signal processing Montavon et al. (2013), chemometrics Yu et al. (2015), bioinformatics Hansen et al. (2011), and neuroinformatics Haufe et al. (2013).

Improving the interpretability of linear brain decoding and associated brain maps is a primary goal in the brain imaging literature Strother et al. (2014). The lack of interpretability of multivariate brain maps is a direct consequence of low signal-to-noise ratios (SNRs), high dimensionality of whole-scalp recordings, high correlations among different dimensions of data, and cross-subject variability Besserve et al. (2007); Anderson et al. (2011); Brodersen et al. (2011); Lemm et al. (2011); Langs et al. (2011); Varoquaux et al. (2012); Kauppi et al. (2013); Taulu et al. (2014); Varoquaux and Thirion (2014); Olivetti et al. (2014); Haufe et al. (2014); Haynes (2015). At present, two main approaches are proposed to enhance the interpretability of multivariate brain maps: 1) introducing new metrics into the model selection procedure and 2) introducing new penalty terms for regularization to enhance stability selection.

The first approach to improving the interpretability of brain decoding concentrates on the model selection procedure. Model selection is a procedure in which the best values for the hyper-parameters of a model are determined Lemm et al. (2011). This selection process is generally performed by considering the generalization performance, i.e., the accuracy, of a model as the decisive criterion. Rasmussen et al. (2012) showed that there is a trade-off between the spatial reproducibility and the prediction accuracy of a classifier; therefore, the reliability of maps cannot be assessed merely by focusing on their prediction accuracy. To utilize this finding, they incorporated the spatial reproducibility of brain maps in the model selection procedure. An analogous approach, using a different definition of spatial reproducibility, is proposed by Conroy et al. (2013). Besides spatial reproducibility, the stability of classifiers Bousquet and Elisseeff (2002) is another criterion that is used in combination with generalization performance to enhance the interpretability. For example, Yu (2013); Lim and Yu (2015) showed that incorporating the stability of models into cross-validation improves the interpretability of the parameters estimated by linear models.

The second approach to improving the interpretability of brain decoding focuses on the underlying mechanism of regularization. The main idea behind this approach is two-fold: 1) customizing the regularization terms to address the ill-posed nature of brain decoding problems (where the number of samples is much less than the number of features) Mørch et al. (1997); Varoquaux and Thirion (2014) and 2) combining the structural and functional prior knowledge with the decoding process so as to enhance stability selection. Group Lasso Yuan and Lin (2006) and total-variation penalty Tibshirani et al. (2005) are two effective methods using this technique Xing et al. (2014); Rish et al. (2014). Sparse penalized discriminant analysis Grosenick et al. (2008), group-wise regularization van Gerven et al. (2009), randomized Lasso Varoquaux et al. (2012), smoothed-sparse logistic regression de Brecht and Yamagishi (2012), total-variation L1 penalization Michel et al. (2011); Gramfort et al. (2013), the graph-constrained elastic-net Grosenick et al. (2009, 2013), and randomized structural sparsity Wang et al. (2015) are examples of brain decoding methods in which regularization techniques are employed to improve stability selection, and thus, the interpretability of brain decoding.

Recently, taking a new approach to the problem, Haufe et al. questioned the interpretability of weights of linear classifiers because of the contribution of noise in the decoding process Bießmann et al. (2012); Haufe et al. (2013, 2014). To address this problem, they proposed a procedure to convert the linear brain decoding models into their equivalent generative models. Their experiments on the simulated and fMRI/EEG data illustrate that, whereas the direct interpretation of classifier weights may cause severe misunderstanding regarding the actual underlying effect, their proposed transformation effectively provides interpretable maps. Despite the theoretical soundness of this method, the major challenge of estimating the empirical covariance matrix of the small sample size neuroimaging data Engemann and Gramfort (2015) limits the practical application of this method.

Despite the aforementioned efforts to improve the interpretability of brain decoding, there is still no formal definition for the interpretability of brain decoding in the literature. Therefore, the interpretability of different brain decoding methods is evaluated either qualitatively or indirectly (i.e., by means of an intermediate property). In qualitative evaluation, to show the superiority of one decoding method over another (or over a univariate map), the corresponding brain maps are compared visually in terms of smoothness, sparseness, and coherency using already known facts (see, for example, Varoquaux et al. (2012); Li et al. (2015)). In the second approach, important factors in interpretability such as spatio-temporal reproducibility are evaluated to indirectly assess the interpretability of results (see, for example, Langs et al. (2011); Rasmussen et al. (2012); Conroy et al. (2013); Kia et al. (in press)). Despite partial effectiveness, there is no general consensus regarding the quantification of these intermediate criteria. For example, in the case of spatial reproducibility, different methods such as correlation Rasmussen et al. (2012); Kia et al. (in press), Dice score Langs et al. (2011), or parameter variability Haufe et al. (2013); Conroy et al. (2013) are used for quantifying the stability of brain maps, each of which considers different aspects of local or global reproducibility.

With the aim of filling this gap, our contribution in this study is three-fold: 1) Assuming that the true solution of brain decoding is available, we present a theoretical definition of the interpretability. Furthermore, we show that the interpretability can be decomposed into the reproducibility and the representativeness of brain maps. 2) As a proof of the theoretical concepts, we propose a practical heuristic based on event-related fields for quantifying the interpretability of brain maps in MEG decoding scenarios. 3) Finally, we propose the combination of the interpretability and the performance of the brain decoding as a new Pareto optimal multi-objective criterion for model selection. We experimentally show that incorporating the interpretability of the models into the model selection procedure provides more reproducible, more neurophysiologically plausible, and (as a result) more interpretable maps.

2 Methods

2.1 Notation and Background

Let $\mathcal{X} \subseteq \mathbb{R}^p$ be a manifold in Euclidean space that represents the input space and $\mathcal{Y}$ be the output space. Then, let $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$ be a training set of $n$ independently and identically distributed (iid) samples drawn from the joint distribution of $\mathcal{X} \times \mathcal{Y}$ based on an unknown Borel probability measure $\rho$. In the neuroimaging context, $X$ indicates the trials of brain recording, e.g., fMRI, MEG, or EEG signals, and $Y$ represents the experimental conditions or dependent variables. The goal of brain decoding is to find a function $\hat{f}: \mathcal{X} \to \mathcal{Y}$ as an estimation of the ideal function $f^*$.

In this study, as is a common assumption in the neuroimaging context, we assume the true solution of a brain decoding problem is among the family of linear functions $\mathcal{F}$. Therefore, the aim of brain decoding reduces to finding an empirical approximation of $f^*$, indicated by $\hat{f}$, among all $f \in \mathcal{F}$. This approximation can be obtained by estimating the predictive conditional density $p(y \mid x, \theta)$ by training a parametric model (i.e., a likelihood function), where $\theta$ denotes the parameters of the model. Alternatively, $\hat{f}$ can be estimated by solving a risk minimization problem:

$\hat{f} = \underset{f \in \mathcal{F}}{\operatorname{arg\,min}}\ \frac{1}{n}\sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda\,\Omega(f)$    (1)

where $L$ is the loss function, $\Omega$ is the regularization term, and $\lambda$ is a hyper-parameter that controls the amount of regularization. There are various choices for $\Omega$, each of which reduces the hypothesis space $\mathcal{F}$ to $\mathcal{F}_{\Omega}$ by enforcing different prior functional or structural constraints on the parameters of the linear decoding model (see, for example, Tibshirani (1996); Zou and Hastie (2005); Tibshirani et al. (2005); Jenatton et al. (2009)). The amount of regularization is generally decided using cross-validation or other data perturbation methods in the model selection procedure.

In the neuroimaging context, the estimated parameters $\hat{\theta}$ of a linear decoding model can be used in the form of a brain map so as to visualize the discriminative neurophysiological effect. Although the magnitude of $\hat{\theta}$ is affected by the dynamic range of the data and the level of regularization, it has no effect on the predictive power or the interpretability of the maps. On the other hand, the direction of $\hat{\theta}$ affects the predictive power and contains information regarding the importance of, and the relations among, predictors. This type of relational information is very useful when interpreting brain maps, in which the relation between different spatio-temporal independent variables can be used to describe how different brain regions interact over time for a certain cognitive process. Therefore, we refer to the normalized parameter vector of a linear brain decoder in the unit hyper-sphere as a multivariate brain map (MBM); we denote it by $\hat{\Phi} = \frac{\hat{\theta}}{\|\hat{\theta}\|_2}$ (where $\|\cdot\|_2$ represents the 2-norm of a vector).

As shown in Eq. 1, learning occurs using the sampled data. In other words, in the learning paradigm, we attempt to minimize the loss function with respect to the training set $S$, and not with respect to the true joint distribution Poggio and Shelton (2002). Therefore, all of the implicit assumptions (such as linearity) regarding $f^*$ might not hold on $\hat{f}$, and vice versa (see the supplementary material for a simple illustrative example). The irreducible error $\epsilon$ is the direct consequence of this sampling; it provides a lower bound on the error of a model, where we have:

$Y = f^*(X) + \epsilon$    (2)

The distribution of $\epsilon$ dictates the type of loss function in Eq. 1. For example, assuming a Gaussian distribution with mean $0$ and variance $\sigma^2$ for $\epsilon$ implies the least squares loss function Wu et al. (2006).

2.2 Interpretability of Multivariate Brain Maps: Theoretical Definition

In this section, we introduce a theoretical definition for the interpretability of linear brain decoding models and their associated MBMs. The presented definition remains theoretical, as it is based on an assumption that is restrictive in practical applications. We assume that the brain decoding problem is linearly separable and that its unique, neurophysiologically plausible solution, i.e., $\theta^*$, is available (here, neurophysiological plausibility refers to the spatio-temporal chemo-physical constraints of the underlying neural activity, which are highly dependent on the acquisition device). In this theoretical setting, the goal is to assess the quality of the MBMs estimated by different brain decoding methods on a small-sample-size dataset $S$.

Consider a linearly separable brain decoding problem in an ideal scenario where $n \to \infty$ and the irreducible error is zero. In this case, $f^*$ is linear and its parameter vector $\theta^*$ is unique and plausible. The corresponding true MBM is obtained by mapping this unique parameter vector onto the unit hyper-sphere:

$\Phi^* = \frac{\theta^*}{\|\theta^*\|_2}$    (3)

Using $\Phi^*$ as the reference, we define the strong interpretability of an MBM as follows:

Definition 1. An MBM $\hat{\Phi}$ associated with a linear function $\hat{f}$ is “strongly interpretable” if and only if $\hat{\Phi} = \Phi^*$.

It can be shown that, in practice, the estimated solution of a linear brain decoding problem (using Eq. 1) is not strongly interpretable because of the inherent limitations of neuroimaging data, such as uncertainty Aggarwal and Yu (2009) in the input and output space, limitations in data acquisition, the high dimensionality of the data ($n \ll p$), and the high correlation between predictors. With these limitations in mind, even though linear brain decoders might not be absolutely interpretable, one can argue that some models are more interpretable than others. For example, a model whose MBM is closely aligned with $\Phi^*$ can be considered more interpretable than a model whose MBM points far away from $\Phi^*$. To address this issue, and having in mind the definition of strong interpretability, our goal is to answer the following question:

Problem 1.

Let $S_1, S_2, \dots, S_m$ be $m$ perturbed training sets drawn from $S$ via a certain perturbation scheme such as jackknife, bootstrapping Efron (1979), or cross-validation Kohavi et al. (1995). Assume $\hat{\Phi}_1, \hat{\Phi}_2, \dots, \hat{\Phi}_m$ are the MBMs of a certain $\hat{f}$ (estimated using Eq. 1 for certain $L$, $\Omega$, and $\lambda$) on the corresponding perturbed training sets. How can we quantify the closeness of $\hat{\Phi}_1, \hat{\Phi}_2, \dots, \hat{\Phi}_m$ to the strongly interpretable solution $\Phi^*$ of the brain decoding problem?

To answer this question, considering the uniqueness and the plausibility of $\Phi^*$ as the two main characteristics that convey its strong interpretability, we define the geometrical proximity between $\hat{\Phi}_1, \dots, \hat{\Phi}_m$ and $\Phi^*$ as a measure of the interpretability of $\hat{f}$.

Definition 2.

Let $\alpha_j$ ($j = 1, \dots, m$) be the angle between $\hat{\Phi}_j$ and $\Phi^*$. The “interpretability” ($\eta_{\hat{f}}$) of the MBMs derived from a linear function $\hat{f}$ is defined as follows:

$\eta_{\hat{f}} = \frac{1}{m}\sum_{j=1}^{m}\cos(\alpha_j)$    (4)

Empirically, the interpretability is the mean of cosine similarities between $\Phi^*$ and the MBMs derived from different samplings of the training set. In addition to the fact that employing cosine similarity is a common method for measuring the similarity between vectors, we have another strong motivation for this choice. It can be shown that, for large values of $p$, the distribution of the dot product of two independent random vectors in the unit hyper-sphere, i.e., the cosine similarity, converges to a normal distribution with mean $0$ and variance $1/p$, i.e., $\mathcal{N}(0, 1/p)$. Due to this small variance for large enough $p$, any similarity significantly larger than zero represents a meaningful similarity between two high dimensional vectors (see the supplementary material for more details about the distribution of cosine similarity).
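As an illustration, the following minimal Python/NumPy sketch computes Eq. 4 for a reference map and a set of weight vectors estimated on perturbed training sets. This is illustrative code only, not the MATLAB implementation used for the experiments; the reference map and the perturbed weight vectors are synthetic.

```python
import numpy as np

def unit(v):
    """Project a parameter vector onto the unit hyper-sphere."""
    return v / np.linalg.norm(v)

def interpretability(phi_star, thetas):
    """Eq. 4: mean cosine similarity between the reference map and the MBMs."""
    mbms = np.array([unit(t) for t in thetas])
    return float(np.mean(mbms @ unit(phi_star)))

# synthetic example: weight vectors scattered around the reference direction
rng = np.random.default_rng(0)
p, m = 5000, 20
phi_star = unit(rng.standard_normal(p))
thetas = phi_star + 0.5 * rng.standard_normal((m, p)) / np.sqrt(p)
print(interpretability(phi_star, thetas))   # ~0.9 here; near 0 for unrelated maps
```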

In what follows, we demonstrate how the definition of interpretability is geometrically related to the uniqueness and plausibility characteristics of the true solution to brain decoding.

2.3 Interpretability Decomposition into Reproducibility and Representativeness

An alternative approach toward quantifying the interpretability of an MBM is to assess its neurophysiological plausibility and uniqueness separately. The high dimensionality and the high correlation between variables are two inherent characteristics of neuroimaging data that negatively affect the uniqueness of the solution of a brain decoding problem. Therefore, a certain configuration of hyper-parameters may result in different estimated parameters on different portions of the data. Here, we are interested in assessing this variability. Let $\hat{\Phi}_j$ ($j = 1, \dots, m$) be the MBM estimated on the $j$th perturbed training set. We define the main multivariate brain map as follows:

Definition 3.

The “main multivariate brain map” $\bar{\Phi}_{\hat{f}}$ of a linear model $\hat{f}$ is defined as the sum of all estimated MBMs ($\hat{\Phi}_1, \dots, \hat{\Phi}_m$) on the perturbed training sets, mapped back onto the unit hyper-sphere:

$\bar{\Phi}_{\hat{f}} = \frac{\sum_{j=1}^{m}\hat{\Phi}_j}{\big\|\sum_{j=1}^{m}\hat{\Phi}_j\big\|_2}$    (5)

The definition of $\bar{\Phi}_{\hat{f}}$ is analogous to the main prediction of a learning algorithm Domingos (2000); it provides a reference for quantifying the reproducibility of an MBM as a measure of its uniqueness:

Definition 4.

Let $\bar{\Phi}_{\hat{f}}$ be the main multivariate brain map of $\hat{f}$. Then, let $\beta_j$ ($j = 1, \dots, m$) be the angle between $\hat{\Phi}_j$ and $\bar{\Phi}_{\hat{f}}$. The “reproducibility” ($\psi_{\hat{f}}$) of the MBMs derived from a linear function $\hat{f}$ is defined as follows:

$\psi_{\hat{f}} = \frac{1}{m}\sum_{j=1}^{m}\cos(\beta_j)$    (6)

In fact, reproducibility provides a measure for quantifying the dispersion of MBMs, computed over different perturbed training sets, from the main multivariate brain map.

In theory, the directional proximity between $\Phi^*$ and the estimated main MBM $\bar{\Phi}_{\hat{f}}$ of a linear model provides a measure of the plausibility of $\hat{f}$ that quantifies the coherency between the estimated parameters and the real underlying physiological activities. Here, we define this coherency as the representativeness of an MBM.

Definition 5.

Let $\bar{\Phi}_{\hat{f}}$ be the main multivariate brain map of $\hat{f}$. The “representativeness” ($\rho_{\hat{f}}$) of $\hat{f}$ is defined as the cosine similarity between $\bar{\Phi}_{\hat{f}}$ and $\Phi^*$:

$\rho_{\hat{f}} = \cos(\gamma) = \bar{\Phi}_{\hat{f}} \cdot \Phi^*$    (7)

where $\gamma$ is the angle between $\bar{\Phi}_{\hat{f}}$ and $\Phi^*$.

The relationship between the presented definitions for both reproducibility and representativeness and the interpretability can be expressed using the following proposition:

Proposition 1.

$\eta_{\hat{f}} = \rho_{\hat{f}} \times \psi_{\hat{f}}$.

See Appendix D and Figure 10 for a proof. Proposition 1 indicates that the interpretability can be decomposed into the representativeness and the reproducibility of a decoding model.
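The decomposition can also be checked numerically. The following minimal NumPy sketch (illustrative code, not from the paper) computes Eqs. 4–7 for synthetic maps and compares $\eta_{\hat{f}}$ with $\rho_{\hat{f}} \times \psi_{\hat{f}}$; the data generation is an assumption made only for the demonstration.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def main_map(mbms):
    """Eq. 5: normalized sum of the MBMs estimated on the perturbed training sets."""
    return unit(mbms.sum(axis=0))

def reproducibility(mbms):
    """Eq. 6: mean cosine similarity between each MBM and the main map."""
    return float(np.mean(mbms @ main_map(mbms)))

def representativeness(mbms, phi_star):
    """Eq. 7: cosine similarity between the main map and the true map."""
    return float(main_map(mbms) @ unit(phi_star))

def interpretability(mbms, phi_star):
    """Eq. 4: mean cosine similarity between each MBM and the true map."""
    return float(np.mean(mbms @ unit(phi_star)))

rng = np.random.default_rng(1)
p, m = 5000, 50
phi_star = unit(rng.standard_normal(p))
mbms = np.array([unit(phi_star + 0.5 * rng.standard_normal(p) / np.sqrt(p))
                 for _ in range(m)])
eta = interpretability(mbms, phi_star)
rho, psi = representativeness(mbms, phi_star), reproducibility(mbms)
print(eta, rho * psi)   # the two quantities should be close
```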

2.4 A Heuristic for Practical Quantification of Interpretability in Time-Domain MEG decoding

In practice, it is impossible to evaluate the interpretability, as $\Phi^*$ is unknown. In this study, to provide a practical proof of the mentioned theoretical concepts, we propose the use of the contrast event-related field (cERF) of MEG data as a neurophysiologically plausible heuristic for $\Phi^*$ in a binary MEG decoding scenario in the time domain.

The EEG/MEG data are a mixture of several simultaneous stimulus-related and stimulus-unrelated brain activities. In general, stimulus-unrelated brain activities are considered Gaussian noise with zero mean and variance $\sigma^2$. One popular approach to canceling the noise component is to compute the average of multiple trials. It is expected that the average of $n$ trials will converge to the true value of the signal with a variance of $\sigma^2/n$. The result of the averaging process is generally known as the ERF in the MEG context; separate interpretation of different ERF components can be performed Rugg and Coles (1995). (The application of the presented heuristic to MEG data can be extended to EEG because of the inherent similarity of the measured neural correlates in these two devices; in the EEG context, the ERF can be replaced by the event-related potential (ERP).)

Assume $S^+ = \{x_i \mid y_i = +1\}$ and $S^- = \{x_i \mid y_i = -1\}$ are the sets of trials in the two experimental conditions. Then, the cERF brain map is computed as follows:

$\Phi_{cERF} = \frac{1}{|S^+|}\sum_{x_i \in S^+} x_i - \frac{1}{|S^-|}\sum_{x_i \in S^-} x_i$    (8)

Using the core theory presented in Haufe et al. (2013), it can be shown that the cERF is the equivalent generative model for the least squares solution in a binary time-domain MEG decoding scenario (see Appendix A). Using $\Phi_{cERF}$ as a heuristic for $\Phi^*$, the representativeness can be approximated as follows:

$\hat{\rho}_{\hat{f}} = \cos(\hat{\gamma}) = \bar{\Phi}_{\hat{f}} \cdot \frac{\Phi_{cERF}}{\|\Phi_{cERF}\|_2}$    (9)

where $\hat{\gamma}$ is the angle between $\bar{\Phi}_{\hat{f}}$ and $\Phi_{cERF}$, and $\hat{\rho}_{\hat{f}}$ is an approximation of $\rho_{\hat{f}}$, for which we have:

$\rho_{\hat{f}} = \cos(\gamma) = \cos(\hat{\gamma} \mp \zeta)$    (10)

Here, $\zeta$ is the angle between $\Phi_{cERF}$ and $\Phi^*$, so that $\cos(\zeta)$ represents the cosine similarity between $\Phi_{cERF}$ and $\Phi^*$ (see Figure 8 and Appendix B). If $\zeta = 0$ then $\hat{\rho}_{\hat{f}} = \rho_{\hat{f}}$.

In a similar manner, $\Phi_{cERF}$ can be used to heuristically approximate the interpretability as follows:

$\hat{\eta}_{\hat{f}} = \frac{1}{m}\sum_{j=1}^{m}\cos(\hat{\alpha}_j)$    (11)

where $\hat{\alpha}_j$ ($j = 1, \dots, m$) are the angles between $\hat{\Phi}_j$ and $\Phi_{cERF}$. The following equality represents the relation between $\eta_{\hat{f}}$ and $\hat{\eta}_{\hat{f}}$ (see Figure 9 and Appendix C).

$\eta_{\hat{f}} = \frac{1}{m}\sum_{j=1}^{m}\cos(\hat{\alpha}_j \mp \zeta)$    (12)

Again, if $\zeta = 0$ then $\hat{\eta}_{\hat{f}} = \eta_{\hat{f}}$. Notice that $\zeta$ is independent of the decoding approach used; it only depends on the quality of the heuristic. It can be shown that $\hat{\eta}_{\hat{f}} = \hat{\rho}_{\hat{f}} \times \psi_{\hat{f}}$.

Eq. 12 shows that the choice of heuristic has a direct effect on the approximation of interpretability and that an inappropriate selection of the heuristic yields a very poor estimation of interpretability because of the destructive contribution of a large $\zeta$. Therefore, the choice of heuristic should be carefully justified based on accepted and well-defined facts regarding the nature of the collected data (see the supplementary material for the experimental investigation of the limitations of the proposed heuristic).
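A minimal NumPy sketch of the heuristic follows (illustrative only; the cERF computation follows Eq. 8 and the approximated interpretability Eq. 11, but the data and the maps are synthetic stand-ins for vectorized MEG trials and bootstrap MBMs).

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def cerf_map(X, y):
    """Eq. 8: contrast ERF, i.e., difference of the class-conditional trial means."""
    return X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0)

def approx_interpretability(mbms, X, y):
    """Eq. 11: mean cosine similarity between the MBMs and the normalized cERF."""
    return float(np.mean(mbms @ unit(cerf_map(X, y))))

# synthetic usage: random data standing in for vectorized MEG trials
rng = np.random.default_rng(2)
n, p = 200, 1000
effect = unit(rng.standard_normal(p))                 # stand-in for the face effect
y = rng.choice([-1, 1], size=n)
X = 2.0 * y[:, None] * effect[None, :] + rng.standard_normal((n, p))
mbms = np.array([unit(effect + 0.5 * rng.standard_normal(p) / np.sqrt(p))
                 for _ in range(10)])
print(approx_interpretability(mbms, X, y))
```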

2.5 Incorporating the Interpretability into Model Selection

The procedure for evaluating the performance of a model so as to choose the best values for hyper-parameters is known as model selection Hastie et al. (2009). This procedure generally involves numerical optimization of the model selection criterion. The most common model selection criterion is based on an estimator of generalization performance, i.e., the predictive power. In the context of brain decoding, especially when the interpretability of brain maps matters, employing only the predictive power of the decoding model in model selection is problematic in terms of interpretability Gramfort et al. (2012); Rasmussen et al. (2012); Conroy et al. (2013). Here, we propose a multi-objective criterion for model selection that takes into account both prediction accuracy and MBM interpretability.

Let $\hat{\eta}_{\hat{f}}$ and $\Gamma_{\hat{f}}$ be the approximated interpretability and the generalization performance of a linear function $\hat{f}$, respectively. We propose the use of the scalarization technique Caramia and Dell'Olmo (2008) for combining $\hat{\eta}_{\hat{f}}$ and $\Gamma_{\hat{f}}$ into one scalar criterion $\Lambda_{\hat{f}}$ as follows:

$\Lambda_{\hat{f}} = \begin{cases} \dfrac{w_1\hat{\eta}_{\hat{f}} + w_2\Gamma_{\hat{f}}}{w_1 + w_2} & \text{if } \Gamma_{\hat{f}} > \kappa \\ 0 & \text{otherwise} \end{cases}$    (13)

where $w_1$ and $w_2$ are weights that specify the level of importance of the interpretability and the performance of the model, respectively, and $\kappa$ is a threshold on the performance that filters out solutions with poor performance. In classification scenarios, $\kappa$ can be set by adding a small safety interval to the chance level of classification.

It can be shown that the hyper-parameters of a model optimized based on $\Lambda_{\hat{f}}$ are Pareto optimal Marler and Arora (2004). In other words, there exists no other hyper-parameter configuration for which we obtain both higher interpretability and higher performance. We expect that optimizing the hyper-parameters of the model based on $\Lambda_{\hat{f}}$, rather than only on $\Gamma_{\hat{f}}$, yields more informative MBMs.
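A minimal sketch of this model selection criterion is given below. The weights, the threshold, the hyper-parameter grid, and the per-$\lambda$ scores are illustrative placeholders, not the values used in the experiments.

```python
import numpy as np

def selection_criterion(interp, perf, w1=1.0, w2=1.0, kappa=0.6):
    """Weighted compromise between approximated interpretability and performance;
    models below the performance threshold kappa are discarded (score 0)."""
    if perf <= kappa:
        return 0.0
    return (w1 * interp + w2 * perf) / (w1 + w2)

# choose the hyper-parameter value that maximizes the combined criterion
lambdas = [0.0, 0.01, 0.1, 1.0, 10.0, 100.0]
interp_per_lambda = [0.44, 0.45, 0.49, 0.58, 0.80, 0.99]   # illustrative values
perf_per_lambda = [0.99, 0.99, 0.99, 0.98, 0.96, 0.93]     # illustrative values
scores = [selection_criterion(i, g) for i, g in zip(interp_per_lambda, perf_per_lambda)]
print(lambdas[int(np.argmax(scores))])                      # -> 100.0
```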

2.6 Experimental Materials

2.6.1 Toy Dataset

To illustrate the importance of integrating the interpretability of brain decoding with the model selection procedure, we use the simple 2-dimensional toy data presented in Haufe et al. (2013). Assume that the true underlying generative function determines the class label from the first dimension of the data alone, where $x_1$ and $x_2$ represent the first and the second dimension of the data, respectively. Furthermore, assume the data is contaminated by Gaussian noise with covariance $\Sigma_\epsilon$. Figure 1 shows the distribution of the noisy data.

2.6.2 MEG Data

In this study, we use the MEG dataset presented in Henson et al. (2011) (the full dataset is publicly available at ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/). This dataset was also used for the DecMeg2014 competition (the competition data are available at http://www.kaggle.com/c/decoding-the-human-brain). In this dataset, visual stimuli consisting of famous faces, unfamiliar faces, and scrambled faces are presented to subjects, and fMRI, EEG, and MEG signals are recorded. In this study, we are only interested in the MEG recordings. The MEG data were recorded using a VectorView system (Elekta Neuromag, Helsinki, Finland) with a magnetometer and two orthogonal planar gradiometers located at 102 positions in a hemispherical array in a light Elekta-Neuromag magnetically shielded room.

Three major reasons motivated the choice of this dataset: 1) It is publicly available. 2) The spatio-temporal dynamics of the MEG signal for face vs. scrambled-face stimuli have been well studied. The event-related potential analysis of EEG/MEG shows that the N170 component occurs approximately 170 ms after stimulus presentation and reflects the neural processing of faces Bentin et al. (1996); Henson et al. (2011). Therefore, the N170 component can be considered the ground truth for our analysis. 3) In the literature, non-parametric mass-univariate analyses such as cluster-based permutation tests are unable to identify narrowly distributed effects in space and time (e.g., an N170 component) Groppe et al. (2011a,b). These facts motivate us to employ multivariate approaches that are more sensitive to such effects.

As in Olivetti et al. (2014), we created a balanced face vs. scrambled-face MEG dataset by randomly drawing from the trials of unscrambled (famous or unfamiliar) faces and scrambled faces in equal number. The samples in the face and scrambled-face categories are labeled as $+1$ and $-1$, respectively. The raw data is high-pass filtered, down-sampled, and trimmed to a peristimulus window that starts before the stimulus onset and ends after the stimulus. Thus, each trial contains a fixed number of time-points for each of the 306 MEG sensors (102 magnetometers and 204 planar gradiometers). (The preprocessing scripts in Python and MATLAB are available at: https://github.com/FBK-NILab/DecMeg2014/.) To create the feature vector of each sample, we pooled all of the temporal data of the MEG sensors into one vector (i.e., one feature per sensor-time pair for each sample). Before training the classifier, all of the features are standardized to have a mean of $0$ and a standard deviation of $1$.
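The feature construction can be sketched as follows. This is illustrative NumPy code, not the project's preprocessing scripts; the number of trials and time-points used here is arbitrary.

```python
import numpy as np

def build_features(trials):
    """Pool all sensor time courses of each trial into one vector and standardize
    each feature to zero mean and unit standard deviation."""
    X = trials.reshape(trials.shape[0], -1)            # (n_trials, sensors * time)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / (sd + 1e-12)                     # guard against constant features

# illustrative shapes: trials x 306 sensors x time-points
trials = np.random.randn(580, 306, 125)
X = build_features(trials)
print(X.shape)                                         # (580, 306 * 125)
```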

2.7 Classification and Evaluation

In all experiments, a least squares classifier with L1-penalization, i.e., Lasso Tibshirani (1996), is used for decoding. Lasso is a very popular classification method in the context of brain decoding, mainly because of its sparsity assumption. The choice of Lasso helps us to better illustrate the importance of including the interpretability in the model selection. Lasso solves the following optimization problem:

$\hat{\theta} = \underset{\theta}{\operatorname{arg\,min}}\ \|Y - X\theta\|_2^2 + \lambda\|\theta\|_1$    (14)

where $\lambda$ is the hyper-parameter that specifies the level of regularization. Therefore, the aim of the model selection is to find the best value for $\lambda$. In this study, we try to find the best regularization parameter value among a predefined grid of candidate $\lambda$ values.

We use the out-of-bag (OOB) Tibshirani (1996); Wolpert and Macready (1999); Breiman (2001) method for computing the interpretability, reproducibility, representativeness, performance, and the combined criterion for different values of $\lambda$. In OOB, given a training set $S$, $B$ replications of the bootstrap Efron (1979) are used to create the perturbed training sets. (The MATLAB code used for the experiments is available at https://github.com/smkia/interpretability/.) In all of our experiments, we set $w_1 = 1$ and $w_2 = 1$ in the computation of $\Lambda_{\hat{f}}$. Furthermore, we set $\Gamma_{\hat{f}} = 1 - EPE$, where EPE indicates the expected prediction error; it is computed using the procedure explained in Appendix E. Employing OOB provides the possibility of computing the bias and variance of the model as contributing factors in the EPE.
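A minimal Python sketch of the OOB procedure is given below; the original implementation is in MATLAB, and scikit-learn's Lasso is used here only as a stand-in for the L1-penalized least squares classifier. The number of replications and all other settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def oob_quantities(X, y, lam, n_boot=50, seed=0):
    """Bootstrap the training set, fit an L1-penalized least squares classifier on
    each replication, and collect the normalized MBMs together with an out-of-bag
    estimate of the misclassification error."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    mbms, errors = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # in-bag (bootstrap) indices
        oob = np.setdiff1d(np.arange(n), idx)          # out-of-bag indices
        model = Lasso(alpha=lam, max_iter=5000).fit(X[idx], y[idx])
        w = model.coef_
        if np.linalg.norm(w) > 0:                      # skip all-zero solutions
            mbms.append(w / np.linalg.norm(w))
        errors.append(np.mean(np.sign(model.predict(X[oob])) != y[oob]))
    return np.array(mbms), float(np.mean(errors))      # MBMs and an EPE estimate

# usage: for each lam in the grid, compute the MBMs and the EPE, then combine the
# approximated interpretability and 1 - EPE with the criterion of Eq. 13.
```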

To investigate the behavior of the proposed model selection criterion, we benchmark it against the commonly used performance criterion in the single-subject decoding scenario. Assuming $(x_i, y_i)$ for $i = 1, \dots, n$ are the MEG trial/label pairs of subject $s$ ($s = 1, \dots, 16$), we separately train a Lasso model for each subject to estimate the parameters of the linear function $\hat{f}_s$. Let $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$ represent the solutions optimized based on $\Gamma_{\hat{f}}$ and $\Lambda_{\hat{f}}$, respectively. We denote the MBMs associated with $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$ by $\hat{\Phi}_{\Gamma}$ and $\hat{\Phi}_{\Lambda}$, respectively. Therefore, for each subject, we compare the resulting decoders and MBMs computed based on these two model selection criteria.

3 Results

3.1 Performance-Interpretability Dilemma: A Toy Example

In the definition of the generative function of the toy dataset discussed in Section 2.6.1, $x_1$ is the decisive variable and $x_2$ has no effect on the classification of the data into target classes. Therefore, excluding the effect of noise and based on the theory of the maximal margin classifier Vapnik and Kotz (1982); Vapnik (2013), $\theta^* \propto [1, 0]^T$ is the true solution to the decoding problem. By accounting for the effect of noise and solving the decoding problem on the noisy data, we obtain a parameter vector in which the weight on $x_2$ is twice the weight on $x_1$. Although the estimated parameters on the noisy data provide the best generalization performance for the noisy samples, any attempt to interpret this solution fails, as it yields the wrong conclusion with respect to the ground truth (it says $x_2$ has twice the influence of $x_1$ on the results, whereas $x_2$ has no effect at all). This simple experiment shows that the most accurate model is not always the most interpretable model, primarily because of the contribution of the noise in the decoding process Haufe et al. (2013). On the other hand, the true solution of the problem does not provide the best generalization performance for the noisy data.

To illustrate the effect of incorporating the interpretability in the model selection, a Lasso model with different $\lambda$ values is used for classifying the toy data. In this case, because $\theta^*$ is known, the exact value of interpretability can be computed using Eq. 4. Table 1 compares the resulting performance and interpretability of Lasso. Lasso achieves its highest performance ($\Gamma_{\hat{f}} = 0.9884$) at $\lambda = 10$ with $\eta_{\hat{f}} = 0.4484$ (the corresponding decision boundary is indicated by the magenta line in Figure 1). Despite having the highest performance, this solution suffers from a lack of interpretability. By increasing $\lambda$, the interpretability of the model increases. For $\lambda \geq 500$ the model reaches its highest interpretability ($\eta_{\hat{f}} = 1$) by compensating for about $6\%$ of its performance. This observation highlights two main points:

  1. In the case of noisy data, the interpretability of a decoding model is incoherent with its performance. Thus, optimizing the hyper-parameter of the model based on its performance does not necessarily improve its interpretability. This observation confirms the previous finding by Rasmussen et al. (2012) regarding the trade-off between the spatial reproducibility (as a measure for the interpretability of a model) and the prediction accuracy in brain decoding.

  2. If the right criterion is used in the model selection, employing a proper regularization technique (a sparsity prior, in this case) provides more interpretability for the decoding models.

Figure 1: Noisy samples of the toy data. The black line shows the true separator based on the generative model. The magenta line shows the most accurate classification solution. Because of the contribution of noise, any interpretation of the parameters of the most accurate classifier yields a misleading conclusion with respect to the true underlying phenomenon Haufe et al. (2013).
$\lambda$:            0      0.001  0.01   0.1    1      10     50     100    250    500    1000
$\Gamma_{\hat{f}}$:   0.9883 0.9883 0.9883 0.9883 0.9883 0.9884 0.9880 0.9840 0.9310 0.9292 0.9292
$\eta_{\hat{f}}$:     0.4391 0.4391 0.4391 0.4392 0.4400 0.4484 0.4921 0.5845 0.9968 1      1
$\Lambda_{\hat{f}}$:  0.7137 0.7137 0.7137 0.7137 0.7142 0.7184 0.7400 0.7842 0.9639 0.9646 0.9646
Table 1: Comparison between the performance $\Gamma_{\hat{f}}$, the interpretability $\eta_{\hat{f}}$, and the combined criterion $\Lambda_{\hat{f}}$ for different $\lambda$ values on the toy 2D example shows the performance-interpretability dilemma, in which the most accurate classifier is not the most interpretable one.

3.2 Mass-Univariate Hypothesis Testing on MEG Data

Results show that non-parametric mass-univariate analysis is unable to detect narrowly distributed effects in space and time (e.g., an N170 component) Groppe et al. (2011a,b). To illustrate the advantage of the proposed decoding framework for spotting these effects, we performed a non-parametric cluster-based permutation test Maris and Oostenveld (2007) on our MEG dataset using the FieldTrip toolbox Oostenveld et al. (2010). In a single-subject analysis scenario, we considered the trials of MEG recordings as the unit of observation in a between-trials experiment. Independent-samples t-statistics are used as the statistics for evaluating the effect at the sample level and to construct spatio-temporal clusters. The maximum of the cluster-level summed t-value is used for the cluster-level statistics; the significance probability is computed using a Monte Carlo method. A minimum number of neighboring channels is required when computing the clusters. Using a two-sided threshold for testing the significance level and repeating the procedure separately for magnetometers and combined gradiometers, no significant result is found for any of the subjects. This result motivates the search for more sensitive (and, at the same time, more interpretable) alternatives for hypothesis testing.

3.3 Single-Subject Decoding on MEG Data

In this experiment, we aim to compare the multivariate brain maps of brain decoding models when $\Gamma_{\hat{f}}$ and $\Lambda_{\hat{f}}$ are used as the criteria for model selection. Figure 2(a) represents the mean and standard-deviation of the performance and interpretability of Lasso across subjects for different $\lambda$ values. The performance and interpretability curves further illustrate the performance-interpretability dilemma in the single-subject decoding scenario, in which increasing the performance delivers less interpretability. The average performance across subjects improves as $\lambda$ approaches the value that minimizes the EPE, but, on the other side, the reproducibility and the representativeness of the models decline significantly [see Figure 2(b)].

Figure 2: (a) Mean and standard-deviation of the performance, interpretability, and plausibility of Lasso over 16 subjects. The performance and interpretability become incoherent as $\lambda$ increases. (b) Mean and standard-deviation of the reproducibility, representativeness, and interpretability of Lasso over 16 subjects. The interpretability declines because of the decrease in both reproducibility and representativeness.

One possible reason behind the performance-interpretability dilemma is illustrated in Figure 3. The figure shows the mean and standard deviation of the bias, variance, and EPE of Lasso across subjects. The plot suggests that the effect of variance is overwhelmed by bias in the computation of the EPE: the best-performing model (minimum EPE) has the lowest bias, while its variance is higher than for some other values of $\lambda$. Although this small increase in the variance is barely reflected in the EPE, Figure 2(b) shows that it has a significant effect on the reproducibility of the model.

Figure 3: Mean and standard-deviation of the bias, variance, and EPE of Lasso over 16 subjects. The effect of variance on the EPE is overwhelmed by bias.

Table 2 summarizes the performance, reproducibility, representativeness, and interpretability of $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$ for all 16 subjects. The average result over the 16 subjects shows that employing $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ in model selection provides substantially higher reproducibility, representativeness, and (as a result) interpretability at the cost of a small loss in performance (on average, $0.79$ vs. $0.83$).

Subj | Criterion $\Gamma_{\hat{f}}$: $\Gamma_{\hat{f}}$ $\Lambda_{\hat{f}}$ $\hat{\eta}_{\hat{f}}$ $\hat{\rho}_{\hat{f}}$ $\psi_{\hat{f}}$ | Criterion $\Lambda_{\hat{f}}$: $\Gamma_{\hat{f}}$ $\Lambda_{\hat{f}}$ $\hat{\eta}_{\hat{f}}$ $\hat{\rho}_{\hat{f}}$ $\psi_{\hat{f}}$
1 0.81 0.53 0.26 0.42 0.62 0.78 0.70 0.63 0.76 0.83
2 0.80 0.70 0.60 0.72 0.83 0.80 0.70 0.60 0.72 0.83
3 0.81 0.63 0.45 0.64 0.71 0.78 0.71 0.64 0.78 0.83
4 0.84 0.52 0.20 0.31 0.66 0.76 0.70 0.64 0.77 0.83
5 0.80 0.54 0.29 0.44 0.65 0.78 0.69 0.61 0.73 0.83
6 0.79 0.52 0.24 0.39 0.63 0.74 0.67 0.61 0.74 0.82
7 0.84 0.55 0.27 0.40 0.66 0.81 0.70 0.59 0.71 0.84
8 0.87 0.55 0.24 0.35 0.68 0.85 0.68 0.52 0.61 0.84
9 0.80 0.55 0.31 0.46 0.67 0.77 0.67 0.57 0.69 0.82
10 0.79 0.53 0.26 0.41 0.64 0.77 0.68 0.58 0.70 0.83
11 0.74 0.65 0.56 0.68 0.82 0.74 0.65 0.56 0.68 0.82
12 0.80 0.55 0.29 0.46 0.64 0.79 0.70 0.61 0.74 0.83
13 0.83 0.50 0.18 0.29 0.61 0.77 0.70 0.63 0.76 0.82
14 0.90 0.58 0.27 0.39 0.68 0.81 0.78 0.74 0.89 0.84
15 0.92 0.63 0.34 0.48 0.71 0.89 0.78 0.66 0.77 0.86
16 0.87 0.55 0.23 0.37 0.62 0.81 0.74 0.67 0.81 0.83
Mean 0.83±0.05 0.57 0.31 0.45 0.68 0.79 0.70±0.04 0.62±0.05 0.74±0.06 0.83±0.01
Table 2: The performance, reproducibility, representativeness, and interpretability of $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$ over 16 subjects.

These results are further analyzed in Figure 4, where $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$ are compared subject-wise in terms of their performance and interpretability. The comparison shows that adopting $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ as the criterion for model selection yields substantially more interpretable models by compensating a negligible degree of performance in 14 out of 16 subjects. Figure 4(a) shows that employing $\Gamma_{\hat{f}}$ provides, on average, slightly more accurate models (Wilcoxon rank sum test) across subjects ($0.83$) than using $\Lambda_{\hat{f}}$ ($0.79$). On the other side, Figure 4(b) shows that employing $\Lambda_{\hat{f}}$, and thus compensating a small amount of performance, provides on average substantially higher (Wilcoxon rank sum test) interpretability across subjects ($0.62$) compared to $\Gamma_{\hat{f}}$ ($0.31$). For example, in the case of subject 1 (see Table 2), using $\Gamma_{\hat{f}}$ in model selection to select the best $\lambda$ value for the Lasso model yields a model with a performance of $0.81$ and an interpretability of $0.26$. In contrast, using $\Lambda_{\hat{f}}$ provides a model with a performance of $0.78$ and an interpretability of $0.63$.

Figure 4: a) Comparison between the performance of $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$. Adopting $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ in model selection yields (on average) slightly less accurate classifiers over the 16 subjects. b) Comparison between the interpretability of $\hat{f}_{\Gamma}$ and $\hat{f}_{\Lambda}$. Adopting $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ in model selection yields on average more interpretable classifiers over the 16 subjects.

The advantage of this exchange between the performance and the interpretability can be seen in the quality of the MBMs. Figures 5(a) and 5(b) show $\hat{\Phi}_{\Gamma}$ and $\hat{\Phi}_{\Lambda}$ of subject 1, i.e., the spatio-temporal multivariate maps of the Lasso models with the maximum values of $\Gamma_{\hat{f}}$ and $\Lambda_{\hat{f}}$, respectively. The maps are plotted for the 102 magnetometer sensors. In each case, the time courses of the classifier weights associated with the MEG2041 and MEG1931 sensors are plotted. Furthermore, the topographic maps represent the spatial patterns of the weights averaged within a fixed post-stimulus time window (the bounds of the colorbars are symmetrized based on the maximum absolute value of the parameters). While $\hat{\Phi}_{\Gamma}$ is sparse in time and space, it fails to accurately represent the spatio-temporal dynamics of the N170 component. Furthermore, the multicollinearity problem arising from the correlation between the time courses of the MEG2041 and MEG1931 sensors causes extra attenuation of the N170 effect in the MEG1931 sensor. Therefore, the model is unable to capture the spatial pattern of the dipole in the posterior area. In contrast, $\hat{\Phi}_{\Lambda}$ represents the dynamics of the N170 component in time (see Figure 6). In addition, it also shows the spatial pattern of two dipoles in the posterior and temporal areas. In summary, $\hat{\Phi}_{\Lambda}$ suggests a more representative pattern of the underlying neurophysiological effect than $\hat{\Phi}_{\Gamma}$.

(a) Spatio-temporal pattern of $\hat{\Phi}_{\Gamma}$.
(b) Spatio-temporal pattern of $\hat{\Phi}_{\Lambda}$.
Figure 5: Comparison between the spatio-temporal multivariate maps of the most accurate ($\hat{\Phi}_{\Gamma}$, 5(a)) and the most interpretable ($\hat{\Phi}_{\Lambda}$, 5(b)) classifiers for subject 1. $\hat{\Phi}_{\Lambda}$ provides a more representative spatio-temporal pattern of the N170 effect than $\hat{\Phi}_{\Gamma}$.
Figure 6: Event related fields (ERFs) of face and scrambled face samples for MEG2041 and MEG1931 sensors.

In addition, optimizing the brain decoding model based on $\Lambda_{\hat{f}}$ provides more reproducible brain decoders. According to Table 2, using $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ provides (on average) substantially more reproducibility over the 16 subjects. To illustrate the advantage of higher reproducibility on the interpretability of maps, Figure 7 visualizes $\hat{\Phi}_{\Gamma}$ and $\hat{\Phi}_{\Lambda}$ over 4 perturbed training sets. The spatial maps [Figure 7(a) and Figure 7(c)] are plotted for the magnetometer sensors averaged within a fixed post-stimulus time window. The temporal maps [Figure 7(b) and Figure 7(d)] show the multivariate temporal maps of the MEG1931 and MEG2041 sensors. While $\hat{\Phi}_{\Gamma}$ is unstable in time and space across the 4 perturbed training sets, $\hat{\Phi}_{\Lambda}$ provides more reproducible maps.

Figure 7: Comparison of the reproducibility of Lasso when $\Gamma_{\hat{f}}$ and $\Lambda_{\hat{f}}$ are used in the model selection procedure. (a) and (b) show the spatio-temporal patterns represented by $\hat{\Phi}_{\Gamma}$ across the 4 perturbed training sets. (c) and (d) show the spatio-temporal patterns represented by $\hat{\Phi}_{\Lambda}$ across the 4 perturbed training sets. Employing $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ in the model selection yields more reproducible MBMs.

4 Discussions

4.1 Defining Interpretability: Theoretical Advantages

An overview of the brain decoding literature shows frequent co-occurrence of the terms interpretation, interpretable, and interpretability with the terms model, classification, parameter, decoding, method, feature, and pattern (see the quick meta-analysis on the literature in the supplementary material); however, a formal formulation of the interpretability is never presented. In this study, our primary interest is to present a theoretical definition of the interpretability of linear brain decoding models and their corresponding MBMs. Furthermore, we show the way in which interpretability is related to the reproducibility and neurophysiological representativeness of MBMs. Our definition and quantification of interpretability remains theoretical, as we assume that the true solution of the brain decoding problem is available. Despite this limitation, we argue that the presented theoretical definition provides a concrete framework for a previously abstract concept and that it establishes a theoretical background to explain an ambiguous phenomenon in the brain decoding context. We support this argument using an example in time-domain MEG decoding in which we show how the presented definition can be exploited to heuristically approximate the interpretability. This example shows how partial prior knowledge regarding underlying brain activity (this partial knowledge can be based on already known facts regarding the timing and location of neural activity) can be used to find more plausible multivariate patterns in data. Furthermore, the proposed decomposition of the interpretability of MBMs into their reproducibility and representativeness explains the relationship between the influential cooperative factors in the interpretability of brain decoding models and highlights the possibility of indirect and partial evaluation of interpretability by measuring these effective factors.

4.2 Application in Model Evaluation

Discriminative models in the framework of brain decoding provide higher sensitivity and specificity than univariate analysis in hypothesis testing of neuroimaging data. Although multivariate hypothesis testing is performed based solely on the generalization performance of classifiers, the emergent need for extracting reliable complementary information regarding the underlying neuronal activity motivated a considerable amount of research on improving and assessing the interpretability of classifiers and their associated MBMs. Despite ubiquitous use, the generalization performance of classifiers is not a reliable criterion for assessing the interpretability of brain decoding models Rasmussen et al. (2012). Therefore, considering extra criteria might be required. However, because of the lack of a formal definition for interpretability, different characteristics of brain decoding models are considered as the main objective in improving their interpretability. Reproducibility Rasmussen et al. (2012); Conroy et al. (2013), stability selection van Gerven et al. (2009); Varoquaux et al. (2012); Wang et al. (2015), and neurophysiological plausibility Afshin-Pour et al. (2011) are examples of related criteria.

Our definition of interpretability helped us to fill this gap by introducing a new multi-objective model selection criterion as a weighted compromise between interpretability and generalization performance of linear models. Our experimental results on single-subject decoding showed that adopting the new criterion for optimizing the hyper-parameters of brain decoding models is an important step toward reliable visualization of learned models from neuroimaging data. It is not the first time in the neuroimaging context that a new metric is proposed in combination with generalization performance for the model selection. Several recent studies proposed the combination of the reproducibility of the maps Rasmussen et al. (2012); Conroy et al. (2013); Strother et al. (2014) or the stability of the classifiers Yu (2013); Lim and Yu (2015) with the performance of discriminative models to enhance the interpretability of decoding models. Our definition of interpretability supports the claim that the reproducibility is not the only effective factor in interpretability. Therefore, our contribution can be considered a complementary effort with respect to the state of the art of improving the interpretability of brain decoding at the model selection level.

Furthermore, this work presents an effective approach for evaluating the quality of different regularization strategies for improving the interpretability of MBMs. As briefly reviewed in Section 1, there is a trend in research within the brain decoding context in which prior knowledge is injected into the penalization term as a technique to improve the interpretability of decoding models. Thus far, in the literature, there is no ad-hoc method to compare these different methods. Our findings provide a further step toward direct evaluation of interpretability of the currently proposed penalization strategies. This evaluation can highlight the advantages and disadvantages of applying different strategies on different data types and facilitates the choice of appropriate methods for a certain application.

4.3 Regularization and Interpretability

Haufe et al. (2013) demonstrated that the weights of linear discriminative models do not provide an accurate measure for evaluating the relationship between variables, primarily because of the contribution of noise in the decoding process. This disadvantage is primarily caused by the decoding process, which minimizes the classification error considering only the uncertainty in the output space Aggarwal and Yu (2009); Zhang (2005); Tzelepis et al. (2015) and not the uncertainty in the input space (or noise). The authors concluded that the interpretability of brain decoding cannot be improved using regularization. Our experimental results on the toy data (see Section 3.1) show that, if the right criterion is used for selecting the best values of the hyper-parameters, an appropriate choice of the regularization strategy can still play a significant role in improving the interpretability of the results. For example, in this case, the true generative function behind the sampled data is sparse (see Section 2.6.1), but because of the noise in the data, the sparse model is not the most accurate one. Using a more comprehensive criterion (in this case, $\Lambda_{\hat{f}}$) shows the advantage of selecting correct prior assumptions about the distribution of the data via regularization. This observation encourages the modification of the conclusion in Haufe et al. (2013) as follows: if the performance of the model is the only criterion in the model selection, then the interpretability cannot necessarily be improved by means of regularization.

4.4 Advantage over Mass-Univariate Analysis

Mass-univariate hypothesis testing methods are among the most popular tools in neuroscience research because they provide significance checks and a fair level of interpretability via univariate brain maps. Mass-univariate analyses consist of univariate statistical tests on single independent variables followed by multiple comparison correction. Generally, multiple comparison correction reduces the sensitivity of mass-univariate approaches because of the large number of univariate tests involved. Cluster-based permutation testing Maris and Oostenveld (2007) provides a more sensitive univariate analysis framework by making the cluster assumption in the multiple comparison correction. Unfortunately, this method is not able to detect narrow spatio-temporal effects in the data Groppe et al. (2011). As a remedy, brain decoding provides a very sensitive tool for hypothesis testing; it has the ability to detect multivariate patterns, but suffers from a low level of interpretability. Our study proposes a possible solution for the interpretability problem of classifiers, and therefore, it facilitates the application of brain decoding in the analysis of neuroimaging data. Our experimental results for the MEG data demonstrate that, although the non-parametric cluster-based permutation test is unable to detect the N170 effect in the MEG data, employing $\Lambda_{\hat{f}}$ instead of $\Gamma_{\hat{f}}$ in model selection not only detects the stimulus-relevant information in the data, but also provides a reproducible and representative spatio-temporal mapping of the timing and the location of the underlying neurophysiological effect.

4.5 Limitations and Future Directions

Despite theoretical and practical advantages, the proposed definition and quantification of interpretability suffer from some limitations. All of the theoretical and practical concepts are defined for linear models, with the main assumption that $f^* \in \mathcal{F}$ (where $\mathcal{F}$ is a class of linear functions). This fact highlights the importance of linearizing the experimental protocol in the data collection phase Naselaris et al. (2011). Extending the definition of interpretability to non-linear models demands future research into the visualization of non-linear models in the form of brain maps. Currently, our findings cannot be directly applied to non-linear models. Furthermore, the proposed heuristic for the time-domain MEG data applies only to binary classification. One possible solution in multiclass classification is to separate the decoding problem into several binary sub-problems. In addition, the quality of the proposed heuristic is limited for small-sample-size datasets (see the supplementary material). Finding physiologically relevant heuristics for other acquisition modalities, such as fMRI, can also be considered in future work.

5 Conclusions

In this paper, we presented a novel theoretical definition for the interpretability of brain decoding and the associated multivariate brain maps. We showed how the interpretability relates to the representativeness and reproducibility of brain decoding. The multiplicative nature of the relation between the reproducibility and the representativeness in the computation of the interpretability of MBMs is also demonstrated. Although it is theoretical, the presented definition is a first step toward practical solutions for revealing the knowledge learned by linear classifiers. As an example of this direction, and to provide a proof of concept, a heuristic approach based on the contrast event-related field is proposed for the practical evaluation of the interpretability in time-domain MEG decoding. We experimentally showed that adding the interpretability of brain decoding models as a criterion in the model selection procedure yields significantly more interpretable models while sacrificing only a negligible amount of performance in the single-subject decoding scenario. Our methodological and experimental achievements can be considered a complementary theoretical and practical effort that contributes to enhancing the interpretability of multivariate approaches.

Acknowledgments

The author wishes to thank Sandro Vega-Pons and Nathan Weisz for valuable discussions and comments.

Appendix A cERF and its Generative Nature

According to Haufe et al. (2013), for a linear discriminative model with parameters $\Theta$, the unique equivalent generative model (the activation pattern $A$) can be computed as follows:

$A = \Sigma_X \Theta \Sigma_{\hat{S}}^{-1}$,   (15)

where $\Sigma_{\hat{S}}$ denotes the covariance of the latent output $\hat{S} = X\Theta$. In a binary ($Y \in \{-1,+1\}$) least squares classification scenario, we have:

$A \propto \Sigma_X \Theta \propto X^{\top}Y \propto \mu^{+} - \mu^{-}$,   (16)

where $\Sigma_X$ represents the covariance of the input matrix $X$, and $\mu^{+}$ and $\mu^{-}$ are the means of the positive and negative samples, respectively. Therefore, the equivalent generative model for the above classification problem can be derived by computing the difference between the means of the samples in the two classes, which is equivalent to the definition of the cERF in time-domain MEG data.
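A minimal numerical sketch of this equivalence (assuming NumPy and synthetic, roughly balanced data; the variable names are illustrative rather than taken from the study) fits a least squares classifier, applies the covariance-based transformation of Haufe et al. (2013) to its weights, and compares the resulting pattern with the difference of class means, i.e., the cERF analogue:

\begin{verbatim}
import numpy as np

rng = np.random.RandomState(42)

# Synthetic binary data: 400 trials x 50 features, labels in {-1, +1}.
n, d = 400, 50
y = np.where(rng.rand(n) > 0.5, 1.0, -1.0)
pattern = rng.randn(d)
X = np.outer(y, pattern) + rng.randn(n, d) * 3.0   # class-dependent pattern plus noise
X = X - X.mean(axis=0)                             # center the features

# Least squares linear classifier weights.
theta = np.linalg.lstsq(X, y, rcond=None)[0]

# Haufe et al. (2013) transformation: A = Cov(X) @ theta / Var(X @ theta).
A = np.cov(X, rowvar=False) @ theta / (X @ theta).var()

# Difference of class means (the cERF analogue for time-domain MEG features).
mean_diff = X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0)

# For binary least squares with roughly balanced classes, A is proportional to
# the class-mean difference, so the correlation below is close to 1.
print(np.corrcoef(A, mean_diff)[0, 1])
\end{verbatim}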

Appendix B Relation between and (Eq.  10)

Let be the angle between and . Let be the angle between and . Furthermore, assume that is the angle between and and that . We consider both cases in which is underestimated/overestimated by (see Figure 8 as an example in 2-dimensional space). Then, we have:

(17)
Figure 8: Misrepresentation of with respect to.

Appendix C Relation between and (Eq.  12)

Let be the angles between and , and be the angles between and . Furthermore, assume that is the angle between and . We consider both cases in which is underestimated/overestimated by (see Figure 9 as an example in 2-dimensional space).

(18)
Figure 9: Relation between and.

Appendix D Proof of Proposition 1

Throughout this proof, we assume that all of the parameter vectors are normalized in the unit hypersphere (see Figure 10 as an illustrative example in 2 dimensions). Let be a set of MBMs, for perturbed training sets where . Now, consider any arbitrary -dimensional hyperplane that contains . Clearly, divides the -dimensional parameter space into two subspaces. Let and be binary operators where indicates that and are in the same subspace, and indicates that they are in different subspaces. Now, we define and . Let the cardinality of , denoted by , be (). Thus, . Now, assume that are the angles between and , and (similarly) for and . Based on Eq. 5, let and be the main maps of and , respectively. Therefore, we obtain and . Furthermore, assume . As a result, and . According to Eq. 4 and using a cosine similarity definition, we have:

(19)

A similar procedure can be used to prove by replacing with .

Figure 10: Relation between representativeness, reproducibility, and interpretability in 2 dimensions.
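Building on the cosine-similarity formulation used in the proof above, the sketch below (assuming NumPy; the function names, the normalized-mean construction of the main map, and the multiplicative combination are an illustrative reading rather than the exact published equations) shows how reproducibility, representativeness, and their product could be computed for a set of MBMs estimated from perturbed training sets:

\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def interpretability_sketch(maps, ground_truth_map):
    # maps: (m, d) array of MBMs from m perturbed training sets.
    # ground_truth_map: (d,) array standing in for the true (or cERF-based) map.

    # Normalize each map onto the unit hypersphere, as in the proof above.
    unit_maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)

    # Main map: normalized average of the unit-norm maps.
    main_map = unit_maps.mean(axis=0)
    main_map /= np.linalg.norm(main_map)

    # Reproducibility: average cosine similarity between each map and the main map.
    reproducibility = float(np.mean([cosine(m, main_map) for m in unit_maps]))

    # Representativeness: cosine similarity between the main map and the ground truth.
    representativeness = cosine(main_map, ground_truth_map)

    # Interpretability: multiplicative combination of the two quantities.
    return reproducibility * representativeness, reproducibility, representativeness
\end{verbatim}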

Appendix E Computing the Bias and Variance in Binary Classification

Here, using the out-of-bag (OOB) technique, and based on procedures proposed by Domingos (2000) and Valentini and Dietterich (2004), we compute the expected prediction error (EPE) for a linear binary classifier under bootstrap perturbation of the training set. Let be the number of perturbed training sets resulting from partitioning into and , i.e., training and test sets. If is the linear classifier estimated from the th perturbed training set, then the main prediction for each sample in the dataset can be computed as follows:

(20)

where is the number of times that the corresponding sample is present in the test sets. Under bootstrap perturbation, each sample is expected to appear, on average, in approximately $36.8\%$ ($\approx e^{-1}$) of the test sets.
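As a quick numerical check of this appearance rate (a standard property of the bootstrap rather than a value reported in the text), a sample is left out of a single bootstrap replicate of size $n$ with probability $(1 - 1/n)^n$, which approaches $e^{-1} \approx 0.368$ for large $n$:

\begin{verbatim}
import numpy as np

n, m = 1000, 200                      # samples and bootstrap replicates (illustrative)
rng = np.random.RandomState(0)

oob_counts = np.zeros(n)
for _ in range(m):
    in_bag = rng.randint(0, n, size=n)                   # bootstrap draw with replacement
    oob_counts[np.setdiff1d(np.arange(n), in_bag)] += 1  # samples left out (OOB)

print(oob_counts.mean() / m)          # empirical OOB rate, approximately 0.368
print((1 - 1 / n) ** n)               # analytical rate, approximately exp(-1)
\end{verbatim}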

The computation of bias is challenging because the optimal model is unknown. According to Tibshirani (1996), misclassification error is one of the loss measures that satisfies a Pythagorean-type equality, and:

(21)

Because all terms of the above equation are positive, the mean loss between the main prediction and the actual labels can be considered as an upper-bound for the bias:

(22)

Therefore, a pessimistic approximation of bias can be calculated as follows:

(23)

Then, the unbiased and biased variances (see Domingos (2000) for definitions) in each training set can be calculated by:

(24)
(25)

Finally, the expected prediction error of can be computed as follows (ignoring the irreducible error):

(26)
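To make the whole procedure concrete, the following sketch (assuming NumPy and scikit-learn's LinearSVC as a stand-in for the regularized linear classifier; the function and variable names are illustrative and not taken from the original implementation) estimates the main prediction, the pessimistic bias, the unbiased and biased variances, and the resulting EPE from out-of-bag predictions under bootstrap perturbation, following Domingos (2000) and Valentini and Dietterich (2004):

\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def oob_bias_variance(X, y, n_boot=100, seed=0):
    # Bias-variance estimate for a linear binary classifier under bootstrap
    # perturbation, using out-of-bag (OOB) predictions and zero-one loss.
    rng = np.random.RandomState(seed)
    n = X.shape[0]
    classes = np.unique(y)                       # expects exactly two classes
    votes = np.zeros((n, classes.size))          # OOB votes per sample and class
    counts = np.zeros(n)                         # how often each sample was OOB

    for _ in range(n_boot):
        in_bag = rng.randint(0, n, size=n)               # bootstrap training indices
        oob = np.setdiff1d(np.arange(n), in_bag)         # OOB test indices
        pred = LinearSVC(C=1.0).fit(X[in_bag], y[in_bag]).predict(X[oob])
        votes[oob, np.searchsorted(classes, pred)] += 1
        counts[oob] += 1

    seen = counts > 0
    main_pred = classes[votes[seen].argmax(axis=1)]      # majority vote = main prediction
    y_seen = y[seen]

    # Pessimistic bias: zero-one loss between the main prediction and the labels.
    bias = np.mean(main_pred != y_seen)

    # Per-sample variance: rate of disagreement with the main prediction.
    disagree = 1.0 - votes[seen].max(axis=1) / counts[seen]
    n_seen = seen.sum()
    unbiased_var = disagree[main_pred == y_seen].sum() / n_seen
    biased_var = disagree[main_pred != y_seen].sum() / n_seen

    # Expected prediction error (irreducible error ignored): bias plus net variance.
    epe = bias + unbiased_var - biased_var
    return bias, unbiased_var, biased_var, epe
\end{verbatim}

For a binary problem, the returned epe should closely track the average OOB misclassification rate of the perturbed classifiers.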

References

  • Crivellato and Ribatti (2007) E. Crivellato, D. Ribatti, Soul, mind, brain: Greek philosophy and the birth of neuroscience, Brain research bulletin 71 (2007) 327–336.
  • Groppe et al. (2011) D. M. Groppe, T. P. Urbach, M. Kutas, Mass univariate analysis of event-related brain potentials/fields i: A critical tutorial review, Psychophysiology 48 (2011) 1711–1725.
  • Maris and Oostenveld (2007) E. Maris, R. Oostenveld, Nonparametric statistical testing of eeg-and meg-data, Journal of neuroscience methods 164 (2007) 177–190.
  • Groppe et al. (2011) D. M. Groppe, T. P. Urbach, M. Kutas, Mass univariate analysis of event-related brain potentials/fields ii: Simulation studies, Psychophysiology 48 (2011) 1726–1737.
  • van Gerven et al. (2009) M. van Gerven, C. Hesse, O. Jensen, T. Heskes, Interpreting single trial data using groupwise regularisation, NeuroImage 46 (2009) 665–676.
  • Davis et al. (2014) T. Davis, K. F. LaRocque, J. A. Mumford, K. A. Norman, A. D. Wagner, R. A. Poldrack, What do differences between multi-voxel and univariate analysis mean? how subject-, voxel-, and trial-level variance impact fmri analysis, NeuroImage 97 (2014) 271–283.
  • Haynes and Rees (2006) J.-D. Haynes, G. Rees, Decoding mental states from brain activity in humans, Nature Reviews Neuroscience 7 (2006) 523–534.
  • Wolpaw et al. (2002) J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, T. M. Vaughan, Brain–computer interfaces for communication and control, Clinical neurophysiology 113 (2002) 767–791.
  • Waldert et al. (2008) S. Waldert, H. Preissl, E. Demandt, C. Braun, N. Birbaumer, A. Aertsen, C. Mehring, Hand movement direction decoded from meg and eeg, The Journal of neuroscience 28 (2008) 1000–1008.
  • van Gerven and Jensen (2009) M. van Gerven, O. Jensen, Attention modulations of posterior alpha as a control signal for two-dimensional brain–computer interfaces, Journal of neuroscience methods 179 (2009) 78–84.
  • Nicolas-Alonso and Gomez-Gil (2012) L. F. Nicolas-Alonso, J. Gomez-Gil, Brain computer interfaces, a review, Sensors 12 (2012) 1211–1279.
  • Bzdok (2016) D. Bzdok, Classical statistics and statistical learning in imaging neuroscience, arXiv preprint arXiv:1603.01857 (2016).
  • Pereira et al. (2009) F. Pereira, T. Mitchell, M. Botvinick, Machine learning classifiers and fMRI: a tutorial overview., NeuroImage 45 (2009) 199–209.
  • Lemm et al. (2011) S. Lemm, B. Blankertz, T. Dickhaus, K.-R. Müller, Introduction to machine learning for brain imaging, Neuroimage 56 (2011) 387–399.
  • Besserve et al. (2007) M. Besserve, K. Jerbi, F. Laurent, S. Baillet, J. Martinerie, L. Garnero, Classification methods for ongoing eeg and meg signals, Biological research 40 (2007) 415–437.
  • Haxby et al. (2001) J. V. Haxby, M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, P. Pietrini, Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex, Science 293 (2001) 2425–2430.
  • Cox and Savoy (2003) D. D. Cox, R. L. Savoy, Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex, Neuroimage 19 (2003) 261–270.
  • Mitchell et al. (2004) T. M. Mitchell, R. Hutchinson, R. S. Niculescu, F. Pereira, X. Wang, M. Just, S. Newman, Learning to decode cognitive states from brain images, Machine Learning 57 (2004) 145–175.
  • Norman et al. (2006) K. A. Norman, S. M. Polyn, G. J. Detre, J. V. Haxby, Beyond mind-reading: multi-voxel pattern analysis of fmri data, Trends in cognitive sciences 10 (2006) 424–430.
  • Parra et al. (2003) L. Parra, C. Alvino, A. Tang, B. Pearlmutter, N. Yeung, A. Osman, P. Sajda, Single-trial detection in EEG and MEG: Keeping it linear, Neurocomputing 52-54 (2003) 177–183.
  • Rieger et al. (2008) J. W. Rieger, C. Reichert, K. R. Gegenfurtner, T. Noesselt, C. Braun, H.-J. Heinze, R. Kruse, H. Hinrichs, Predicting the recognition of natural scenes from single trial meg recordings of brain activity, Neuroimage 42 (2008) 1056–1068.
  • Carroll et al. (2009) M. K. Carroll, G. A. Cecchi, I. Rish, R. Garg, A. R. Rao, Prediction and interpretation of distributed neural activity with sparse models, NeuroImage 44 (2009) 112–122.
  • Chan et al. (2011) A. M. Chan, E. Halgren, K. Marinkovic, S. S. Cash, Decoding word and category-specific spatiotemporal representations from meg and eeg, Neuroimage 54 (2011) 3028–3039.
  • Huttunen et al. (2013) H. Huttunen, T. Manninen, J.-P. Kauppi, J. Tohka, Mind reading with regularized multinomial logistic regression, Machine vision and applications 24 (2013) 1311–1325.
  • Vidaurre et al. (2013) D. Vidaurre, C. Bielza, P. Larrañaga, A survey of l1 regression, International Statistical Review 81 (2013) 361–387.
  • Abadi et al. (2015) M. Abadi, R. Subramanian, S. Kia, P. Avesani, I. Patras, N. Sebe, Decaf: Meg-based multimodal database for decoding affective physiological responses, IEEE Transactions on Affective Computing 6 (2015) 209–222.
  • Naselaris et al. (2011) T. Naselaris, K. N. Kay, S. Nishimoto, J. L. Gallant, Encoding and decoding in fmri, Neuroimage 56 (2011) 400–410.
  • Weichwald et al. (2015) S. Weichwald, T. Meyer, O. Özdenizci, B. Schölkopf, T. Ball, M. Grosse-Wentrup, Causal interpretation rules for encoding and decoding models in neuroimaging, NeuroImage 110 (2015) 48–59.
  • Kriegeskorte et al. (2006) N. Kriegeskorte, R. Goebel, P. Bandettini, Information-based functional brain mapping, Proceedings of the National Academy of Sciences of the United States of America 103 (2006) 3863–3868.
  • Valverde-Albacete and Peláez-Moreno (2014) F. J. Valverde-Albacete, C. Peláez-Moreno, 100% classification accuracy considered harmful: The normalized information transfer factor explains the accuracy paradox, PLOS ONE 9 (2014) e84217.
  • Ramdas et al. (2016) A. Ramdas, A. Singh, L. Wasserman, Classification accuracy as a proxy for two sample testing, arXiv preprint arXiv:1602.02210 (2016).
  • Turner (2015) R. Turner, A model explanation system, 2015.
  • Baehrens et al. (2010) D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, K.-R. Müller, How to explain individual classification decisions, The Journal of Machine Learning Research 11 (2010) 1803–1831.
  • Vellido et al. (2012) A. Vellido, J. Martin-Guerroro, P. Lisboa, Making machine learning models interpretable, in: Proceedings of the 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN). Bruges, Belgium, 2012, pp. 163–172.
  • Sabuncu (2014) M. R. Sabuncu, A universal and efficient method to compute maps from image-based prediction models, Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014 (2014) 353–360.
  • Haynes (2015) J.-D. Haynes, A primer on pattern-based approaches to fmri: Principles, pitfalls, and perspectives, Neuron 87 (2015) 257–270.
  • Naselaris and Kay (2015) T. Naselaris, K. N. Kay, Resolving ambiguities of mvpa using explicit models of representation, Trends in cognitive sciences 19 (2015) 551–554.
  • Bach et al. (2015) S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, W. Samek, On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation, PloS one 10 (2015).
  • Montavon et al. (2013) G. Montavon, M. Braun, T. Krueger, K.-R. Muller, Analyzing local structure in kernel-based learning: Explanation, complexity, and reliability assessment, Signal Processing Magazine, IEEE 30 (2013) 62–74.
  • Yu et al. (2015) D. Yu, S. J. Lee, W. J. Lee, S. C. Kim, J. Lim, S. W. Kwon, Classification of spectral data using fused lasso logistic regression, Chemometrics and Intelligent Laboratory Systems 142 (2015) 70–77.
  • Hansen et al. (2011) K. Hansen, D. Baehrens, T. Schroeter, M. Rupp, K.-R. Müller, Visual interpretation of kernel-based prediction models, Molecular Informatics 30 (2011) 817–826.
  • Haufe et al. (2013) S. Haufe, F. Meinecke, K. Görgen, S. Dähne, J.-D. Haynes, B. Blankertz, F. Bießmann, On the interpretation of weight vectors of linear models in multivariate neuroimaging, NeuroImage (2013).
  • Strother et al. (2014) S. C. Strother, P. M. Rasmussen, N. W. Churchill, K. Hansen, Stability and Reproducibility in fMRI Analysis, New York: Springer-Verlag, 2014.
  • Anderson et al. (2011) A. Anderson, J. S. Labus, E. P. Vianna, E. A. Mayer, M. S. Cohen, Common component classification: What can we learn from machine learning?, Neuroimage 56 (2011) 517–524.
  • Brodersen et al. (2011) K. H. Brodersen, F. Haiss, C. S. Ong, F. Jung, M. Tittgemeyer, J. M. Buhmann, B. Weber, K. E. Stephan, Model-based feature construction for multivariate decoding, NeuroImage 56 (2011) 601–615.
  • Langs et al. (2011) G. Langs, B. H. Menze, D. Lashkari, P. Golland, Detecting stable distributed patterns of brain activation using gini contrast, NeuroImage 56 (2011) 497–507.
  • Varoquaux et al. (2012) G. Varoquaux, A. Gramfort, B. Thirion, Small-sample brain mapping: sparse recovery on spatially correlated designs with randomization and clustering, in: Proceedings of the 29th International Conference on Machine Learning (ICML-12), 2012, pp. 1375–1382.
  • Kauppi et al. (2013) J.-P. Kauppi, L. Parkkonen, R. Hari, A. Hyvärinen, Decoding magnetoencephalographic rhythmic activity using spectrospatial information, NeuroImage 83 (2013) 921–936.
  • Taulu et al. (2014) S. Taulu, J. Simola, J. Nenonen, L. Parkkonen, Novel noise reduction methods, Magnetoencephalography (2014) 35–71.
  • Varoquaux and Thirion (2014) G. Varoquaux, B. Thirion, How machine learning is shaping cognitive neuroimaging, GigaScience 3 (2014) 28.
  • Olivetti et al. (2014) E. Olivetti, S. M. Kia, P. Avesani, Meg decoding across subjects, in: Pattern Recognition in Neuroimaging, 2014 International Workshop on, IEEE, 2014.
  • Haufe et al. (2014) S. Haufe, S. Dähne, V. V. Nikulin, Dimensionality reduction for the analysis of brain oscillations, NeuroImage (2014).
  • Rasmussen et al. (2012) P. M. Rasmussen, L. K. Hansen, K. H. Madsen, N. W. Churchill, S. C. Strother, Model sparsity and brain pattern interpretation of classification models in neuroimaging, Pattern Recognition 45 (2012) 2085–2100.
  • Conroy et al. (2013) B. R. Conroy, J. M. Walz, P. Sajda, Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fmri decoding models, PloS one 8 (2013) e79271.
  • Bousquet and Elisseeff (2002) O. Bousquet, A. Elisseeff, Stability and generalization, The Journal of Machine Learning Research 2 (2002) 499–526.
  • Yu (2013) B. Yu, Stability, Bernoulli 19 (2013) 1484–1500.
  • Lim and Yu (2015) C. Lim, B. Yu, Estimation stability with cross validation (escv), Journal of Computational and Graphical Statistics (2015).
  • Mørch et al. (1997) N. Mørch, L. K. Hansen, S. C. Strother, C. Svarer, D. A. Rottenberg, B. Lautrup, R. Savoy, O. B. Paulson, Nonlinear versus linear models in functional neuroimaging: Learning curves and generalization crossover, in: Information processing in medical imaging, Springer Berlin Heidelberg, 1997, pp. 259–270.
  • Yuan and Lin (2006) M. Yuan, Y. Lin, Model selection and estimation in regression with grouped variables, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (2006) 49–67.
  • Tibshirani et al. (2005) R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, K. Knight, Sparsity and smoothness via the fused lasso, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2005) 91–108.
  • Xing et al. (2014) E. P. Xing, M. Kolar, S. Kim, X. Chen, High-dimensional sparse structured input-output models, with applications to gwas, Practical Applications of Sparse Modeling (2014) 37.
  • Rish et al. (2014) I. Rish, G. A. Cecchi, A. Lozano, A. Niculescu-Mizil, Practical Applications of Sparse Modeling, MIT Press, 2014.
  • Grosenick et al. (2008) L. Grosenick, S. Greer, B. Knutson, Interpretable classifiers for fmri improve prediction of purchases, Neural Systems and Rehabilitation Engineering, IEEE Transactions on 16 (2008) 539–548.
  • de Brecht and Yamagishi (2012) M. de Brecht, N. Yamagishi, Combining sparseness and smoothness improves classification accuracy and interpretability, NeuroImage 60 (2012) 1550–1561.
  • Michel et al. (2011) V. Michel, A. Gramfort, G. Varoquaux, E. Eger, B. Thirion, Total variation regularization for fmri-based prediction of behavior, Medical Imaging, IEEE Transactions on 30 (2011) 1328–1340.
  • Gramfort et al. (2013) A. Gramfort, B. Thirion, G. Varoquaux, Identifying predictive regions from fmri with tv-l1 prior, in: Pattern Recognition in Neuroimaging (PRNI), 2013 International Workshop on, IEEE, 2013, pp. 17–20.
  • Grosenick et al. (2009) L. Grosenick, B. Klingenberg, S. Greer, J. Taylor, B. Knutson, Whole-brain sparse penalized discriminant analysis for predicting choice, NeuroImage 47 (2009) S58.
  • Grosenick et al. (2013) L. Grosenick, B. Klingenberg, K. Katovich, B. Knutson, J. E. Taylor, Interpretable whole-brain prediction analysis with graphnet, NeuroImage 72 (2013) 304–321.
  • Wang et al. (2015) Y. Wang, J. Zheng, S. Zhang, X. Duan, H. Chen, Randomized structural sparsity via constrained block subsampling for improved sensitivity of discriminative voxel identification, NeuroImage (2015).
  • Bießmann et al. (2012) F. Bießmann, S. Dähne, F. C. Meinecke, B. Blankertz, K. Görgen, K.-R. Müller, S. Haufe, On the interpretability of linear multivariate neuroimaging analyses: filters, patterns and their relationship, in: Proceedings of the 2nd NIPS Workshop on Machine Learning and Interpretation in Neuroimaging, 2012.
  • Haufe et al. (2014) S. Haufe, F. Meinecke, K. Gorgen, S. Dahne, J.-D. Haynes, B. Blankertz, F. Biessmann, Parameter interpretation, regularization and source localization in multivariate linear models, in: Pattern Recognition in Neuroimaging, 2014 International Workshop on, IEEE, 2014, pp. 1–4.
  • Engemann and Gramfort (2015) D. A. Engemann, A. Gramfort, Automated model selection in covariance estimation and spatial whitening of meg and eeg signals, NeuroImage 108 (2015) 328–342.
  • Li et al. (2015) Z. Li, Y. Wang, Y. Wang, X. Wang, J. Zheng, H. Chen, A novel feature selection approach for analyzing high dimensional functional mri data, arXiv preprint arXiv:1506.08301 (2015).
  • Kia et al. (in press) S. M. Kia, S. Vega-Pons, E. Olivetti, P. Avesani, Multi-task learning for interpretation of brain decoding models, in: NIPS Workshop on Machine Learning and Interpretation in Neuroimaging (MLINI), 2014, Springer Lecture Notes on Artificial Intelligence Series, In press.
  • Tibshirani (1996) R. Tibshirani, Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society. Series B (Methodological) (1996) 267–288.
  • Zou and Hastie (2005) H. Zou, T. Hastie, Regularization and variable selection via the elastic net, Journal of the Royal Statistical Society: Series B 67 (2005) 301–320.
  • Jenatton et al. (2009) R. Jenatton, J.-Y. Audibert, F. Bach, Structured variable selection with sparsity-inducing norms, arXiv preprint arXiv:0904.3523 (2009).
  • Poggio and Shelton (2002) T. Poggio, C. Shelton, On the mathematical foundations of learning, American Mathematical Society 39 (2002) 1–49.
  • Wu et al. (2006) M. C.-K. Wu, S. V. David, J. L. Gallant, Complete functional characterization of sensory neurons by system identification, Annu. Rev. Neurosci. 29 (2006) 477–505.
  • Aggarwal and Yu (2009) C. C. Aggarwal, P. S. Yu, A survey of uncertain data algorithms and applications, Knowledge and Data Engineering, IEEE Transactions on 21 (2009) 609–623.
  • Efron (1979) B. Efron, Bootstrap methods: another look at the jackknife, The annals of Statistics (1979) 1–26.
  • Kohavi et al. (1995) R. Kohavi, et al., A study of cross-validation and bootstrap for accuracy estimation and model selection, in: Ijcai, volume 14, 1995, pp. 1137–1145.
  • Domingos (2000) P. Domingos, A unified bias-variance decomposition for zero-one and squared loss, AAAI/IAAI 2000 (2000) 564–569.
  • Rugg and Coles (1995) M. D. Rugg, M. G. Coles, Electrophysiology of mind: Event-related brain potentials and cognition., Oxford University Press, 1995.
  • Hastie et al. (2009) T. Hastie, R. Tibshirani, J. Friedman, The elements of statistical learning, volume 2, Springer, 2009.
  • Gramfort et al. (2012) A. Gramfort, G. Varoquaux, B. Thirion, Beyond brain reading: randomized sparsity and clustering to simultaneously predict and identify, in: Machine Learning and Interpretation in Neuroimaging, Springer, 2012, pp. 9–16.
  • Caramia and Dell'Olmo (2008) M. Caramia, P. Dell'Olmo, Multi-objective optimization, Multi-objective Management in Freight Logistics: Increasing Capacity, Service Level and Safety with Optimization Algorithms (2008) 11–36.
  • Marler and Arora (2004) R. T. Marler, J. S. Arora, Survey of multi-objective optimization methods for engineering, Structural and multidisciplinary optimization 26 (2004) 369–395.
  • Henson et al. (2011) R. N. Henson, D. G. Wakeman, V. Litvak, K. J. Friston, A Parametric Empirical Bayesian framework for the EEG/MEG inverse problem: generative models for multisubject and multimodal integration, Frontiers in Human Neuroscience 5 (2011).
  • Bentin et al. (1996) S. Bentin, T. Allison, A. Puce, E. Perez, G. McCarthy, Electrophysiological studies of face perception in humans, Journal of cognitive neuroscience 8 (1996) 551–565.
  • Tibshirani (1996) R. Tibshirani, Bias, variance and prediction error for classification rules (1996).
  • Wolpert and Macready (1999) D. H. Wolpert, W. G. Macready, An efficient method to estimate bagging’s generalization error, Machine Learning 35 (1999) 41–55.
  • Breiman (2001) L. Breiman, Random forests, Machine learning 45 (2001) 5–32.
  • Vapnik and Kotz (1982) V. N. Vapnik, S. Kotz, Estimation of dependences based on empirical data, volume 40, Springer-verlag New York, 1982.
  • Vapnik (2013) V. Vapnik, The nature of statistical learning theory, Springer Science & Business Media, 2013.
  • Oostenveld et al. (2010) R. Oostenveld, P. Fries, E. Maris, J.-M. Schoffelen, Fieldtrip: open source software for advanced analysis of meg, eeg, and invasive electrophysiological data, Computational intelligence and neuroscience 2011 (2010).
  • Afshin-Pour et al. (2011) B. Afshin-Pour, H. Soltanian-Zadeh, G.-A. Hossein-Zadeh, C. L. Grady, S. C. Strother, A mutual information-based metric for evaluation of fmri data-processing approaches, Human brain mapping 32 (2011) 699–715.
  • Zhang (2005) J. B. T. Zhang, Support vector classification with input data uncertainty, Advances in neural information processing systems 17 (2005) 161.
  • Tzelepis et al. (2015) C. Tzelepis, V. Mezaris, I. Patras, Linear maximum margin classifier for learning from uncertain data, arXiv preprint arXiv:1504.03892 (2015).
  • Valentini and Dietterich (2004) G. Valentini, T. G. Dietterich, Bias-variance analysis of support vector machines for the development of svm-based ensemble methods, The Journal of Machine Learning Research 5 (2004) 725–775.