Brain-Network Clustering via Kernel-ARMA Modeling and the Grassmannian

Cong Ye,1 Konstantinos Slavakis,1 Pratik V. Patil,1 Sarah F. Muldoon,2 and John Medaglia3

1C. Ye, K. Slavakis, and P. V. Patil are with the Department of Electrical Engineering, University at Buffalo, The State University of New York (SUNY), NY 14260, USA. 2S. F. Muldoon is with the Department of Mathematics and the Computational and Data-Enabled Science and Engineering Program, University at Buffalo, SUNY, NY 14260, USA. 3J. Medaglia is with the Department of Psychology, Drexel University, PA 19104, USA, and the Perelman School of Medicine, University of Pennsylvania, PA 19104, USA.
Abstract

Recent advances in neuroscience and in the technology of functional magnetic resonance imaging (fMRI) and electro-encephalography (EEG) have propelled a growing interest in brain-network clustering via time-series analysis. Nevertheless, most brain-network clustering methods revolve around state clustering and/or node clustering (a.k.a. community detection or topology inference) within states. This work first addresses the need to capture non-linear nodal dependencies by bringing forth a novel feature-extraction mechanism via kernel autoregressive-moving-average modeling. The extracted features are mapped to the Grassmann manifold (Grassmannian), which consists of all linear subspaces of a fixed rank. By virtue of the Riemannian geometry of the Grassmannian, a unifying clustering framework is offered to tackle all possible clustering problems in a network: cluster multiple states, detect communities within states, and even identify/track subnetwork state sequences. The effectiveness of the proposed approach is underlined by extensive numerical tests on synthetic and real fMRI/EEG data, which demonstrate that the advocated learning method compares favorably versus several state-of-the-art clustering schemes.

Index Terms: Brain network, clustering, ARMA, kernel, Grassmannian.

I Introduction

Recent advances in neuroscience reveal the brain to be a complex network capable of integrating and generating information from external and internal sources in real time [1]. The rapidly growing field of Network Neuroscience uses network analytics to reveal, via graph theory and its concepts (e.g., nodes and edges), topological and functional dependencies of the brain [2]. At the microscopic scale, nodes of a brain network correspond to individual neurons, while edges might describe synaptic coupling between neurons or relationships between their firing patterns [3]. At the macroscopic scale, nodes might be brain regions, and edges might represent anatomical connections (structural connectivity) or statistical relationships between regional brain dynamics (functional connectivity) [4].

Popular noninvasive techniques used to acquire time-series data from brain networks include functional magnetic resonance imaging (fMRI) and electro-encephalography (EEG). In particular, fMRI monitors the blood-oxygen-level-dependent (BOLD) time series [5], while EEG tracks brain activity through time series collected via electrodes on the scalp. EEG possesses a high temporal resolution and is considered relatively convenient, inexpensive, and harmless compared with methods such as magneto-encephalography (MEG), which in turn is much less risky than positron emission tomography (PET) [6].

An important aspect of the majority of works in network analytics is that the time-series data describing the nodal signals tend to be considered stationary, and many learning algorithms make a temporal-smoothness assumption [7, 8]. However, stationarity of brain-network data should not be assumed, since the brain is known to act as a non-stationary network even during its resting state, e.g., [9, 10, 11, 12, 13]. Dynamic functional brain networks can be built using pairwise relationships derived from the time-series data described above, and this dynamic-network viewpoint has been widely exploited to identify diseases, cognitive states, and individual differences in performance [14, 15, 16, 17].

Learning algorithms are often employed to identify functional dependencies among nodes and topology in networks. As a prominent example, clustering algorithms have already been utilized to verify the dynamic nature of brain networks [9, 10], as well as to predict and detect brain disorders, applied to syndromes at large, such as depression [18], epilepsy, schizophrenia [19], Alzheimer disease, and autism [20]. In general, brain-network clustering methods aim at three major goals: node clustering (a.k.a. community detection or topology inference) within a given brain state, state clustering of similar brain states, and subnetwork-state-sequence identification. Loosely speaking, a “brain state” corresponds to a specific “global” network topology or nodal-connectivity pattern which stays fixed over a time interval. A “subnetwork state sequence” is defined as the latent (stochastic) process that drives a subnetwork/subgroup of nodal time series; it may span several network-wide/“global” states, and the collaborating nodes may even change as the brain transitions from one “global” state to another.

Most brain-clustering algorithms are used for nodal and state clustering, while only very few schemes try to identify/track subnetwork state sequences. For example, community detection in brain networks has been studied extensively to perform clustering in both static and dynamic brain networks [21, 22]. Modularity maximization [23, 24] is a popular method for performing community detection in functional brain networks, but it relies on the selection of additional parameters that determine the proper null model, the resolution of clustering, and, in the dynamic case, the value of interlayer coupling [25]. In [26, 27], the discrete wavelet transform decomposes EEG signals into frequency sub-bands, and K-means is used to cluster the acquired wavelet coefficients. In [28], non-parametric Bayesian models, coined Bayesian community detection and infinite relational modeling, are introduced and applied to resting-state fMRI data to define probabilities on network edges. Clustering in [28] is performed by running sophisticated comparisons on the values of those edge probabilities. Study [29] investigated network “motifs,” defined as recurrent and statistically significant sub-graphs or patterns. A spectral-clustering-based algorithm applied to motif features revealed a spatially coherent, yet frequency-dependent, sub-division between the posterior, occipital, and frontal brain regions [29]. Entropy maximization and frequency decomposition were utilized in [30] prior to applying vector quantization [31] to frequency-based features for clustering communities within EEG data. In [32], EEG-data topography via Rényi’s entropy was proposed as a feature-extraction mapping, before applying self-organizing maps as the off-the-shelf clustering algorithm. In the recently popular graph-signal-processing context [8, 33], topology inference is achieved by solving optimization problems formed via the observed time-series data and the eigen-decomposition of the Laplacian matrix of the network.

Other approaches have been used to perform state clustering. For example, [34] advocates hidden Markov models (HMMs) to characterize brain-state dynamics. HMM parameters are extracted from each state and used to form vectors in a Euclidean space, with their pairwise metric distances comprising the entries of an affinity matrix. Hierarchical clustering is then applied to the affinity matrix to cluster brain states. In [35], time-varying features are extracted from healthy controls and patients with schizophrenia using independent vector analysis (IVA). Mutual information among the IVA features, Markov modeling, and K-means are used to detect changes in the brain’s spatial connectivity patterns. A change-point detection approach for resting-state fMRI is introduced in [36]. Functional connectivity patterns of all fiber-connected cortical voxels are concatenated into a descriptive feature vector to represent the brain’s state, and the temporal change points of different brain states are decided by detecting abrupt changes of the vector patterns via a sliding-window approach. In [37], hierarchical clustering is applied to a time series of graph-distance measures to identify discrete network states. Moreover, motivated by the observation that changes in nodal communities suggest changes in network states, studies [38, 39] perform community detection on fMRI data, prior to state clustering, by capitalizing on K-means, multi-layer modeling, (Tucker) tensor decompositions, and the higher-order singular value decomposition.

There are only a few methods that can cluster subnetwork state sequences in fMRI and EEG modalities. In [40], features are extracted from the frequency content of the fMRI/EEG time series. A feature example is the ratio of the sum of amplitudes within a specific frequency interval over the sum of amplitudes over the whole frequency range of the time series. Features, and thus subnetwork state sequences, are then clustered via K-means [40]. A computer-vision approach is introduced in [41]. EEG data are transformed into dynamic topographic maps, able to display features such as voltage amplitude, power, and peak latency. The flow of activation within those topographic maps is estimated by using an optical-flow estimation method [42] which generates motion vectors. Motion vectors are clustered into groups, and these dynamic clusters are tracked along the time axis to depict the activation flow and track the subnetwork state sequences.

This paper capitalizes on the directions established by [43] to introduce a unifying feature-extraction and clustering framework, with a strong geometric flavor, that makes no stationarity assumptions and can carry through all possible brain-clustering duties, i.e., community detection, state clustering, and subnetwork-state-sequence clustering/tracking. A kernel autoregressive-moving-average (K-ARMA) model is proposed to capture latent non-linear and causal dependencies, not only within a single time series, but also among multiple nodal time series of the brain network. To accommodate the highly likely non-stationarity of the time series, the K-ARMA model is applied via a time-sliding window. Per application of the K-ARMA model, a system-identification problem is solved to extract a low-rank observability matrix. Such a low-rank representation enables dimensionality-reduction arguments which are beneficial to learning methods for the usually high-dimensional ambient spaces associated with brain-network analytics. Features are defined as the low-rank column spaces of the computed observability matrices. For a fixed rank, those features become points of the Grassmann manifold (Grassmannian), which enjoys a rich Riemannian geometry. This feature-extraction scheme permeates all clustering duties in this study. Having obtained the features and to identify clusters, this study builds on Riemannian multi-manifold modeling (RMMM) [44, 45, 43], which postulates that clusters take the form of sub-manifolds in the Grassmannian. To compute clusters, the underlying Riemannian geometry is exploited by the geodesic-clustering-with-tangent-spaces (GCT) algorithm [44, 45, 43]. Unlike the pipeline in [46], which used the covariance matrix of low-pass-filtered EEG data as a feature on a manifold and considered only the Riemannian distance between features, GCT considers both distances and angles as geometric information for clustering. In contrast to [44, 45, 43], where the number of clusters needs to be known a priori, this paper incorporates hierarchical clustering to render GCT free from any a-priori knowledge of the number of clusters. Extensive numerical tests on synthetic and real fMRI/EEG data demonstrate that the proposed framework, i.e., the feature-extraction mechanism and the GCT-based clustering algorithm, compares favorably versus state-of-the-art manifold-learning and brain-network clustering schemes.

The rest of the paper is organized as follows. The K-ARMA model and the feature-extraction mechanism are introduced in Section II. The new variant of the GCT clustering algorithm is presented in Section III, while synthetic and real fMRI/EEG data are used in Section IV to validate the theoretical and algorithmic developments. The manuscript is concluded in Section V, while mathematical notation, background material, and proofs are deferred to the Appendix.

II Kernel-ARMA Modeling

Fig. 1: Brain network with nodes, edges, and nodal time series (signals).

Consider a (brain) network/graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$, with a set of nodes $\mathcal{N}$, of cardinality $N$, and a set of edges $\mathcal{E}$. Each node $n \in \mathcal{N}$ is annotated by a discrete-time stochastic process (time series) $y_n(t)$, where $t \in \mathbb{Z}$ denotes discrete time and $\mathbb{Z}$ the set of all integer numbers; cf. Fig. 1. To avoid congestion in notation, $y_n(t)$ stands for both the random variable (RV) and its realization. The physical meaning of $n$ and $y_n(t)$ depends on the underlying data-collection modality. For example, in fMRI, nodes comprise regions of interest (ROIs) of the brain which are connected either anatomically or functionally, and $y_n(t)$ becomes a BOLD time series of the average signal in a given ROI [5], e.g., Fig. 3(e). In EEG, $\mathcal{N}$ consists of all electrodes placed on the scalp, and $y_n(t)$ gathers the signal samples collected by electrode $n$; cf. Fig. 4(i). For an index $i$ and an integer $D_i$, the vector $\mathbf{y}_\nu^{(i)} \in \mathbb{R}^{D_i}$ is used in this manuscript to collect signal samples from node(s) of the network at the time instance $\nu$, and to unify several scenarios of interest as the following discussion demonstrates.

II-A State Clustering

Since a “state” is a global attribute of the network across all nodes, the vector $\mathbf{y}_\nu := [y_1(\nu), \ldots, y_N(\nu)]^\top$, i.e., with a single index $i$ and $D := N$, stands as the “snapshot” of the network at the time instance $\nu$. Features will be learned from the time series in Sec. II-C to monitor the evolution of the network and to cluster states in Sec. III.

II-B Community Detection and Clustering of Subnetwork State Sequences

In the case of community detection and subnetwork-state-sequence clustering, nodes need to be partitioned via the (dis)similarities of their time series. For example, in subnetwork-state-sequence clustering, same-cluster nodes collaborate to carry through a common task. To be able to detect common features and to identify those nodes, it is desirable first to extract individual features from each nodal time series. To this end, the index $i$ is assigned the value $n$, one index per node, so that $\mathbf{y}_\nu^{(n)}$, for a given buffer length $L$ and with $D_n := L$, takes the form of a vector of $L$ consecutive samples of the time series of node $n$ (a sketch of both data-formation cases follows).
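To make the two data-formation conventions concrete, here is a minimal Python sketch; the non-overlapping-buffer convention in `nodal_buffers` is an assumption for illustration, since the exact windowing is not fixed by the text above.

```python
import numpy as np

def state_snapshots(Y):
    """Sec. II-A: one network 'snapshot' vector per time instance.
    Y: (N, T) array holding N nodal time series with T samples each."""
    return [Y[:, nu] for nu in range(Y.shape[1])]

def nodal_buffers(Y, L):
    """Sec. II-B: per-node vectors of L consecutive samples.
    Non-overlapping buffers are assumed here for concreteness."""
    N, T = Y.shape
    return {n: [Y[n, k * L:(k + 1) * L] for k in range(T // L)]
            for n in range(N)}
```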

II-C Extracting Grassmannian Features

Consider now a user-defined RKHS $\mathcal{H}$ with its kernel feature mapping $\varphi(\cdot)$; cf. App. A. Given $i$ and assuming that the sequence $(\mathbf{y}_\nu^{(i)})_\nu$ is available, define $\psi_\nu := \varphi(\mathbf{y}_\nu^{(i)})$. This work proposes the following kernel (K-)ARMA model, in state-space form, to fit the variations of the features within space $\mathcal{H}$: There exist a matrix $\mathbf{A} \in \mathbb{R}^{\rho \times \rho}$, a linear operator $C : \mathbb{R}^\rho \to \mathcal{H}$, the latent variable $\mathbf{x}_\nu \in \mathbb{R}^\rho$, and vectors $\mathbf{v}_\nu$, $w_\nu$ that capture noise and approximation errors, s.t. $\forall \nu$,

$$\psi_\nu = C \mathbf{x}_\nu + w_\nu \,, \tag{1a}$$
$$\mathbf{x}_{\nu+1} = \mathbf{A} \mathbf{x}_\nu + \mathbf{v}_\nu \,. \tag{1b}$$
Proposition 1.

Given parameters $m, q \in \mathbb{Z}_{>0}$, define the “forward” matrix-valued function

$$\Psi_f(\nu) := \begin{bmatrix} \psi_\nu & \psi_{\nu+1} & \cdots & \psi_{\nu+q-1} \\ \psi_{\nu+1} & \psi_{\nu+2} & \cdots & \psi_{\nu+q} \\ \vdots & \vdots & & \vdots \\ \psi_{\nu+m-1} & \psi_{\nu+m} & \cdots & \psi_{\nu+m+q-2} \end{bmatrix} \,, \tag{2a}$$

and the “backward” matrix-valued function

$$\Psi_b(\nu) := \begin{bmatrix} \psi_{\nu-1} & \psi_{\nu} & \cdots & \psi_{\nu+q-2} \\ \psi_{\nu-2} & \psi_{\nu-1} & \cdots & \psi_{\nu+q-3} \\ \vdots & \vdots & & \vdots \\ \psi_{\nu-m} & \psi_{\nu-m+1} & \cdots & \psi_{\nu+q-m-1} \end{bmatrix} \,. \tag{2b}$$

Then, there exist matrices $\mathbf{X}_\nu := [\mathbf{x}_\nu, \mathbf{x}_{\nu+1}, \ldots, \mathbf{x}_{\nu+q-1}]$ and $\mathbf{E}_\nu$ (gathering noise contributions) s.t. the following low-rank factorization holds true:

$$\Psi_f(\nu) = \mathcal{O}_m \odot \mathbf{X}_\nu + \mathbf{E}_\nu \,, \tag{3}$$

where the product $\odot$ is defined in App. A, and $\mathcal{O}_m$ is the so-called observability matrix:

$$\mathcal{O}_m := \begin{bmatrix} C \\ C\mathbf{A} \\ \vdots \\ C\mathbf{A}^{m-1} \end{bmatrix} \,.$$

With regards to a probability space, if $(\mathbf{v}_\nu)_\nu$ and $(w_\nu)_\nu$ in (1) are considered to be zero-mean and independent and identically distributed stochastic processes, independent of each other, if $\mathbf{x}_\nu$ is independent of $(\mathbf{v}_{\nu'})_{\nu'}$, and independence holds true also between $\mathbf{x}_\nu$ and $(w_{\nu'})_{\nu'}$ s.t. $\nu' \geq \nu$, then

$$\mathbb{E}\big[ \Psi_f(\nu) \mid \Psi_b(\nu) \big] = \mathcal{O}_m \odot \mathbb{E}\big[ \mathbf{X}_\nu \mid \Psi_b(\nu) \big] \,. \tag{4}$$

If, in addition, $(\mathbf{x}_\nu)_\nu$, $(\mathbf{v}_\nu)_\nu$, and $(w_\nu)_\nu$ are wide-sense stationary, then $\mathbb{E}[\mathbf{X}_\nu \mid \Psi_b(\nu)] \to \mathbf{X}_\nu$, as $m \to \infty$, in the mean-square ($\mathcal{L}_2$-) sense w.r.t. the probability space.

Proof:

See App. B. ∎

There can be many choices for the reproducing kernel function $\kappa$ (cf. App. A). If the linear kernel $\kappa(\mathbf{x}, \mathbf{y}) := \mathbf{x}^\top \mathbf{y}$ is chosen, then $\mathcal{H}$ becomes the input Euclidean space, $\varphi$ becomes the identity mapping, and $\odot$ boils down to the usual matrix product. This case was introduced in [43]. The most popular choice for $\kappa$ is the Gaussian kernel $\kappa(\mathbf{x}, \mathbf{y}) := \exp(-\|\mathbf{x} - \mathbf{y}\|_2^2 / (2\sigma^2))$, where parameter $\sigma$ stands for the standard deviation. However, pinpointing the appropriate $\sigma$ for a specific dataset is a difficult task which may entail cumbersome cross-validation procedures [47]. A popular approach to circumvent the judicious selection of $\sigma$ is to use a dictionary of parameters $\{\sigma_k\}_{k=1}^K$ to cover an interval where $\sigma$ is known to belong. A reproducing kernel function can then be defined as the convex combination $\kappa := \sum_{k=1}^K \gamma_k \kappa_{\sigma_k}$, where $\{\gamma_k\}_{k=1}^K$ are convex weights, i.e., non-negative real numbers s.t. $\sum_{k=1}^K \gamma_k = 1$ [47]. Such a strategy is followed in Section IV. Examples of non-Gaussian kernels can also be found in App. A.
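A minimal sketch of the dictionary-of-bandwidths strategy follows; the uniform default weights are an assumption, since the text does not fix how the convex weights are chosen.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian reproducing kernel with bandwidth (standard deviation) sigma."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def multi_kernel(x, y, sigmas, weights=None):
    """Convex combination of Gaussian kernels over a dictionary of bandwidths.
    Any non-negative weights summing to one yield a valid reproducing kernel;
    uniform weights are used by default (an assumption)."""
    sigmas = list(sigmas)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-12
    return sum(w * gaussian_kernel(x, y, s) for w, s in zip(weights, sigmas))
```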

Kernel-based ARMA models have already been studied in the context of support-vector regression [48, 49, 50]. However, those models are different from (1), since only the AR and MA vectors of coefficients are mapped to an RKHS feature space, while the observed data (of only a single time series) are kept in the input space. Here, (1) offers a way to map even the observed data to an RKHS, capturing non-linearities in the data by applying the ARMA idea to properly chosen feature spaces. In a different context [51], time series of graph-distance metrics are fitted by ARMA modeling to detect anomalies and thus identify states in networks. Neither Riemannian geometry nor kernel functions were investigated in [51].

Motivated by (3), (4), and the fact that the conditional expectation is the least-squares-best estimator [52, §9.4], the following task is proposed to obtain an estimate $\hat{\mathcal{O}}_m$ of the observability matrix:

$$\big( \hat{\mathcal{O}}_m, \hat{\mathbf{X}}_\nu \big) \in \arg\min_{(\mathcal{O}, \mathbf{X})} \big\| \Psi_f(\nu) - \mathcal{O} \odot \mathbf{X} \big\|^2 \,. \tag{5}$$

To solve (5), the singular value decomposition (SVD) is applied: by the kernel trick, the Gram matrix of the columns of $\Psi_f(\nu)$ is computable entry-wise via kernel evaluations, and its eigen-decomposition, with orthogonal eigenvector matrix $\mathbf{V}$, furnishes the singular values and right singular vectors of $\Psi_f(\nu)$. Assuming that $\rho \leq \operatorname{rank}(\Psi_f(\nu))$, the Schmidt-Mirsky-Eckart-Young theorem [53] provides the estimates $\hat{\mathcal{O}}_m$ and $\hat{\mathbf{X}}_\nu$ via the matrix $\mathbf{V}_\rho$ that collects those columns of $\mathbf{V}$ which correspond to the top-$\rho$ (principal) singular values.

Due to the factorization in (5), identifying the observability matrix becomes ambiguous, since for any non-singular matrix $\mathbf{T} \in \mathbb{R}^{\rho \times \rho}$, $\hat{\mathcal{O}}_m \odot \hat{\mathbf{X}}_\nu = (\hat{\mathcal{O}}_m \odot \mathbf{T})(\mathbf{T}^{-1}\hat{\mathbf{X}}_\nu)$, and $\hat{\mathcal{O}}_m \odot \mathbf{T}$ can also serve as an estimate. By virtue of the elementary observation that the column (range) spaces of $\hat{\mathcal{O}}_m$ and $\hat{\mathcal{O}}_m \odot \mathbf{T}$ coincide, it becomes preferable to identify the column space of $\hat{\mathcal{O}}_m$, denoted hereafter by $\operatorname{span}(\hat{\mathcal{O}}_m)$, rather than the matrix itself. If $\operatorname{rank}(\hat{\mathcal{O}}_m) = \rho$, then $\operatorname{span}(\hat{\mathcal{O}}_m)$ becomes a point of the Grassmann manifold $\operatorname{Gr}(\rho, D)$, or Grassmannian, which is defined as the collection of all rank-$\rho$ linear subspaces of a $D$-dimensional space [54, p. 73]. The Grassmannian is a Riemannian manifold with dimension equal to $\rho(D - \rho)$ [54, p. 74]. The algorithmic procedure of extracting the feature from the available data is summarized in Alg. 1. To keep notation as general as possible, instead of using all of the signal samples, a subset is considered, and the selected signal samples are gathered per index $i$. All generated features are gathered in the last step of Alg. 1 in the set $\mathcal{X}$, indexed by a set of cardinality $M$.
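The clustering stage of Sec. III repeatedly measures distances between such subspace features. As a point of reference, here is a minimal sketch of the standard geodesic distance on the Grassmannian, computed via principal angles; representing each subspace by a matrix with orthonormal columns is a convention assumed here.

```python
import numpy as np

def grassmann_distance(X, Y):
    """Geodesic distance on Gr(rho, D) between span(X) and span(Y).
    X, Y: (D, rho) matrices with orthonormal columns."""
    s = np.linalg.svd(X.T @ Y, compute_uv=False)  # cosines of principal angles
    theta = np.arccos(np.clip(s, -1.0, 1.0))      # principal angles
    return float(np.linalg.norm(theta))           # l2 norm of the angle vector
```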

Input: Time series $\{y_n(t)\}$.
Parameters: Positive integers $m$, $q$, $\rho$, and buffer length $L$.
Output: Grassmannian features $\mathcal{X}$.
1 Form data $\{\mathbf{y}_\nu^{(i)}\}$, where $\mathbf{y}_\nu^{(i)}$ becomes either the network snapshot [state clustering; Sec. II-A] or the per-node buffer [community detection or subnetwork-state-sequence clustering; Sec. II-B]. for all indices $i$ do
2       for all window positions $\nu$ do
3            Form the data matrix via (2). Apply the SVD. The feature is the linear subspace spanned by the $\rho$ “principal” columns of the singular-vector matrix.
4      
Gather all features in $\mathcal{X}$.
Algorithm 1 Extracting Grassmannian features
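The sketch below mirrors Alg. 1 under one plausible kernel-trick reading: since $\varphi(\cdot)$ may map to an infinite-dimensional space, the forward Hankel columns are handled through their Gram matrix, whose entries are sums of kernel evaluations, and the top-$\rho$ eigenvectors provide an orthonormal basis of the extracted subspace. The Hankel depths `m`, `q` and this Gram-based route are assumptions, not a verbatim transcription of the source; `state_snapshots` and `multi_kernel` refer to the earlier sketches.

```python
import numpy as np

def hankel_gram(Ys, nu, m, q, kern):
    """Gram matrix of the q 'forward' Hankel columns starting at time nu.
    Ys: sequence of data vectors y_t; kern(a, b): reproducing kernel.
    Column j implicitly stacks phi(y_{nu+j}), ..., phi(y_{nu+j+m-1}); inner
    products of such columns reduce to sums of kernel evaluations."""
    K = np.empty((q, q))
    for i in range(q):
        for j in range(q):
            K[i, j] = sum(kern(Ys[nu + i + r], Ys[nu + j + r]) for r in range(m))
    return K

def grassmann_feature(Ys, nu, m, q, rho, kern):
    """One Grassmannian feature: orthonormal basis of the top-rho eigenspace."""
    K = hankel_gram(Ys, nu, m, q, kern)
    evals, evecs = np.linalg.eigh(K)  # eigenvalues in ascending order
    return evecs[:, -rho:]            # (q, rho) basis of the principal subspace

# Usage sketch: slide the window over the time axis to gather all features.
# Ys = state_snapshots(Y)  # or nodal_buffers(...), cf. Secs. II-A/II-B
# feats = [grassmann_feature(Ys, nu, m=3, q=10, rho=2, kern=some_kernel)
#          for nu in range(0, len(Ys) - 11, 5)]
```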

III Clustering Grassmannian Features

Having features available in the Grassmannian via Alg. 1, the next task in the pipeline is to cluster them. This work follows the Riemannian multi-manifold modeling (RMMM) hypothesis [45, 44, 43], where clusters are considered to be submanifolds of the Grassmannian, with data located close to or onto those submanifolds (see Fig. 2(a)). RMMM allows clusters to intersect; a case where the classical K-means, for example, is known to face difficulties [55].

Clustering is performed by Alg. 2, coined geodesic clustering by tangent spaces (GCT). The GCT of Alg. 2 extends its initial form in [45, 44, 43], since Alg. 2 operates without the need to know the number of clusters a priori, as opposed to [45, 44, 43], where the number of clusters needs to be provided as input to the clustering algorithm. This desirable feature of Alg. 2 is also in line with usual practice, where it is unrealistic to know the number of clusters before employing a clustering algorithm.

In a nutshell, Alg. 2 computes the affinity matrix of the features, comprising information about sparse data approximations, via weights $\{w_{ij}\}$, as well as angles between linear subspaces. Although the incorporation of sparse weights originates from [56], one of the novelties of GCT is the usage of the angular information. The GCT version of [45, 44, 43] applies spectral clustering in its final step, where knowledge of the number of clusters is necessary. To surmount the obstacle of knowing the number of clusters beforehand, the Louvain clustering method [57] is adopted instead (a usage sketch follows). The Louvain method belongs to the family of hierarchical-clustering algorithms that attempt to maximize a modularity function, which monitors the intra- and inter-cluster density of links/edges. Needless to say, any other hierarchical-clustering scheme can be used instead of the Louvain method.
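A minimal usage sketch of the Louvain step follows, relying on the implementation shipped with networkx (version 2.8 or later is assumed); the affinity matrix is the one built by GCT.

```python
import networkx as nx
import numpy as np

def louvain_labels(W, seed=0):
    """Map a symmetric, non-negative affinity matrix W to cluster labels
    via the Louvain modularity-maximization method."""
    G = nx.from_numpy_array(np.asarray(W))
    comms = nx.community.louvain_communities(G, weight="weight", seed=seed)
    labels = np.empty(W.shape[0], dtype=int)
    for k, comm in enumerate(comms):
        labels[list(comm)] = k
    return labels
```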

(a) Clusters on the Grassmannian
(b) Angular information
Fig. 2: (a) The Riemannian multi-manifold modeling (RMMM) hypothesis. (b) Angular information computed in tangent spaces and used in Alg. 2.
Input: Grassmannian features $\mathcal{X} = \{X_i\}_{i=1}^M$.
Parameters: Number of nearest neighbors $K_{nn}$ and $\lambda > 0$.
Output: Clusters.
1 for all $X_i \in \mathcal{X}$ do
2      Define the $K_{nn}$-nearest-neighbors of $X_i$. Map the neighbors into the tangent space of the Grassmannian at $X_i$ via the logarithm map $\log_{X_i}(\cdot)$. Identify the weights $\{w_{ij}\}_j$ via (6). Set $w_{ij} := 0$, for all $j$ s.t. $X_j$ is not a neighbor of $X_i$. Compute the sample covariance matrix in (7). Perform principal component analysis (PCA) to extract the principal eigenspace. Compute the angle $\theta_{ij}$ between vector $\log_{X_i}(X_j)$ and the eigenspace, for every neighbor $X_j$.
Form the symmetric affinity (adjacency) matrix $\mathbf{W}$, whose $(i,j)$th entry is defined via the weights $\{w_{ij}\}$ and the angles $\{\theta_{ij}\}$.
Apply the Louvain method [57] to $\mathbf{W}$ to map the data to clusters.
Algorithm 2 Geodesic clustering by tangent spaces (GCT)

A short description of the steps in Alg. 2 follows, with Riemannian-geometry details deferred to [45, 44, 43]. Alg. 2 visits each feature $X_i \in \mathcal{X}$ sequentially. First, the $K_{nn}$-nearest-neighbors of $X_i$ are identified, i.e., those points, taken from $\mathcal{X} \setminus \{X_i\}$, which are placed the closest to $X_i$ with respect to the Grassmannian distance [58]. The neighbors are then mapped to Euclidean vectors in the tangent space of the Grassmannian at $X_i$ (the gray-colored plane in Fig. 2(b)) via the logarithm map $\log_{X_i}(\cdot)$, whose computation (non-closed form via SVD) is provided in [45, 43]. Next, the weights $\{w_{ij}\}_j$ are computed via the following sparse-coding task:

$$\min_{\{w_{ij}\}_j} \Big\| \sum\nolimits_j w_{ij} \log_{X_i}(X_j) \Big\|^2 + \lambda \sum\nolimits_j d_{ij} \lvert w_{ij} \rvert \quad \text{s.to} \quad \sum\nolimits_j w_{ij} = 1 \,, \tag{6}$$

where $d_{ij}$ denotes the Grassmannian distance between $X_i$ and its neighbor $X_j$.

The affine constraint in (6), imposed on the coefficients representing $X_i$ via its neighbors, is motivated by the affine nature of the tangent space (Fig. 2(b)). Moreover, the larger the distance $d_{ij}$ of neighbor $X_j$ from $X_i$, the larger the weight multiplying $\lvert w_{ij} \rvert$ in the regularizer, which in turn severely penalizes the coefficient $w_{ij}$ by pushing it to values close to zero. The next step computes the sample covariance matrix

$$\mathbf{C}_i := \frac{1}{K_{nn}} \sum\nolimits_j \big( \log_{X_i}(X_j) - \bar{v}_i \big) \big( \log_{X_i}(X_j) - \bar{v}_i \big)^\top \,, \tag{7}$$

where $\bar{v}_i$ denotes the sample average of the (mapped) neighbors of $X_i$. PCA is applied to $\mathbf{C}_i$ to compute the principal eigenspace, which may be viewed as an approximation of the image, via the logarithm map, of the cluster (submanifold) containing $X_i$ in the tangent space (see Fig. 2(b)). Once the eigenspace is computed, the angle $\theta_{ij}$ between each vector $\log_{X_i}(X_j)$ and the eigenspace is also computed to extract angular information. The larger the angle is, the less likely it is that $X_j$ belongs to the same cluster as $X_i$. The information carried by both the weights $\{w_{ij}\}$ and the angles $\{\theta_{ij}\}$ is used to define the adjacency matrix. The use of angular information here, as well as in [45, 44, 43], advances the boundary of state-of-the-art clustering methods on the Grassmannian, where usually the weights of the adjacency matrix are defined via the Grassmannian (geodesic) distance or sparse-coding schemes [56].
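To make the tangent-space machinery concrete, here is a sketch of the standard Grassmannian logarithm map together with a cvxpy solution of a task in the spirit of (6); the exact formulation of (6) and the use of cvxpy are assumptions, offered only to illustrate the affine-constrained, distance-weighted l1 structure described above.

```python
import numpy as np
import cvxpy as cp

def grassmann_log(X, Y):
    """Tangent vector at span(X) pointing to span(Y).
    X, Y: (D, rho) orthonormal bases; X.T @ Y must be invertible."""
    XtY = X.T @ Y
    M = (Y - X @ XtY) @ np.linalg.inv(XtY)            # horizontal lift
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def sparse_weights(V, d, lam):
    """Affine-constrained, distance-weighted l1 coding, as (6) is read here.
    V: list of tangent vectors (neighbors of X_i); d: their distances to X_i."""
    A = np.stack([v.ravel() for v in V], axis=1)      # columns: vectorized neighbors
    d = np.asarray(d, dtype=float)
    w = cp.Variable(A.shape[1])
    obj = cp.Minimize(cp.sum_squares(A @ w) + lam * cp.norm1(cp.multiply(d, w)))
    cp.Problem(obj, [cp.sum(w) == 1]).solve()
    return w.value
```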

Input: Time series $\{y_n(t)\}$.
Output: Clusters.
1 Function MainModule(data):
2       Apply Alg. 1 to obtain features $\mathcal{X}$. Apply Alg. 2 to map $\mathcal{X}$ to clusters. return clusters.
3 if “state clustering” then
4       Form data according to Sec. II-A. Call MainModule() to identify clusters/states.
5 else if “community detection” then
6       Form data according to Sec. II-A. Call MainModule() to identify states, i.e., cluster the time horizon into a partition s.t. same-state data share the same time interval. for each identified state do
7            Form data according to Sec. II-B from the samples of that state. Call MainModule() to identify the communities in that state.
8      
9 else if “subnetwork-state-sequence clustering” then
10       Run line 6 to identify states, i.e., a partition of the time horizon. for each identified state do
11             Form data according to Sec. II-B and apply Alg. 1 to obtain the Grassmannian features of that state.
12      Gather all per-state features into a single set. Apply Alg. 2 to that set to identify clusters/tasks.
Algorithm 3 Clustering framework

To summarize, the clustering framework is presented in pseudo-code form in Alg. 3. The main module of the framework, which is frequently utilized and contains Algs. 1 and 2, is presented first. While the “state-clustering” branch is quite straightforward, the “community detection” and “subnetwork-state-sequence clustering” branches comprise several steps. More specifically, in “community detection,” states are first identified, and then communities are identified within each state. In “subnetwork-state-sequence clustering,” states are again identified first, the Grassmannian features are extracted per state, all features are gathered into a single set, and finally Alg. 2 is applied to that set to identify clusters/tasks.

To achieve high clustering accuracy, it is necessary to cluster states first, before applying community detection and subnetwork-state-sequence clustering. Without knowing the starting and ending points of the different states, there will be time-series vectors in Alg. 1 which capture data from two consecutive states, since each buffer spans several consecutive samples. Features corresponding to those vectors will decrease the clustering accuracy, since they do not correspond to any actual state or task.

The main computational burden comes from the main module of Alg. 3. If $M$ denotes the number of points in the Grassmannian, the computational complexity of computing the features in Alg. 1 is $\mathcal{O}(M c_{\mathcal{O}})$, where $c_{\mathcal{O}}$ denotes the cost of estimating a single observability subspace, which includes SVD computations. In Alg. 2, the complexity of computing the nearest neighbors of a feature is $\mathcal{O}(M c_d + c_{nn})$, where $c_d$ denotes the cost of computing the Riemannian distance between any two points, and $c_{nn}$ refers to the cost of selecting the $K_{nn}$ nearest neighbors. The sparsity-promoting optimization task (6) is solved at a cost denoted by $c_{sc}$. Alg. 2 also involves the computation of the eigenvectors of the sample covariance matrix, with complexity $c_{pca}$, while the complexity of computing the empirical geodesic angles is driven by the cost $c_{\log}$ of computing the logarithm map [43]. For the last step of Alg. 2, the exact complexity of the Louvain method is not known, but the method appears to run in near-linear time in the number of edges, with most of the computational effort spent on modularity optimization at the first level, since modularity optimization is known to be NP-hard [59]. To summarize, the complexity of Alg. 2 is $\mathcal{O}\big(M (M c_d + c_{nn} + c_{sc} + c_{pca} + K_{nn} c_{\log})\big)$.

IV Numerical Tests

This section validates the proposed framework on synthetic and real data. First, the competing clustering algorithms are briefly described.

IV-A Competing Algorithms

IV-A1 Sparse Manifold Clustering and Embedding (SMCE) [56]

Each point on the Grassmannian is described by a sparse affine combination of its neighbors. The computed sparse weights define the entries of a similarity matrix, which is subsequently used to identify data-cluster associations. SMCE does not utilize any angular information, in contrast to Alg. 2.

IV-A2 Interaction K-means with PCA (IKM-PCA) [60]

IKM is a clustering algorithm based on the classical K-means and Euclidean distances within a properly chosen feature space. To promote time-efficient solutions, the classical PCA is employed as a dimensionality-reduction tool for feature-subset selection.

IV-A3 Graph-shift-operator estimation (GOE) [33]

The graph shift operator is a symmetric matrix capturing the network’s structure, i.e., topology. There are widely adopted choices of graph shift operators, including the adjacency and Laplacian matrices, or their various degree-normalized counterparts. An estimation algorithm in [33] computes the optimal graph shift operator via convex optimization. The computed graph shift operator is fed to a spectral-clustering module to identify communities within a single brain state, since [33] assumes stationary time-series data.

IV-A4 3D-Windowed Tensor Approach (3D-WTA) [38]

3D-WTA was originally introduced for community detection in dynamic networks by applying tensor decompositions onto a sequence of adjacency matrices indexed over the time axis. 3D-WTA was modified in [39] to accommodate multi-layer network structures. High-order SVD (HOSVD) and high-order orthogonal iteration (HOOI) are used within a pre-defined sliding window to extract subspace information from the adjacency matrices. The “asymptotic-surprise” metric is used as the criterion to determine the number of clusters. 3D-WTA is capable of performing both state clustering and community detection.

SMCE, 3D-WTA and the classical K-means will be compared against Alg. 3 on state clustering. SMCE, IKM-PCA, 3D-WTA, GOE and K-means will be used in community detection. Since none of IKM-PCA, GOE and 3D-WTA can perform subnetwork-state-sequence clustering across multiple states, only the results of Alg. 3 and SMCE are reported. To ensure fair comparisons, the parameters of all methods were carefully tuned to reach optimal performance for every scenario at hand.

In the following discussion, tags K-ARMA[S] and K-ARMA[M] denote the proposed framework whenever a single kernel function or multiple kernel functions are employed, respectively. In the case where the linear kernel is used, the K-ARMA method boils down to the ARMA method of [43].

The evaluation of all methods was based on the following three criteria: 1) Clustering accuracy, defined as the number of correctly clustered data points (ground-truth labels are known) over the total number of points; 2) normalized mutual information (NMI) [61]; and 3) the classical confusion matrix [62], with true-positive ratio (TPR), false-positive ratio (FPR), true-negative ratio (TNR), and false-negative ratio (FNR), in the case where the number of clusters to be identified is equal to two. In what follows, every numerical value of the previous criteria is the uniform average of independently performed tests for the particular scenario at hand.
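The first two criteria can be computed as follows; matching predicted to ground-truth labels via the Hungarian assignment before counting correctly clustered points is a common convention and is assumed here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix, normalized_mutual_info_score

def clustering_accuracy(true_labels, pred_labels):
    """Criterion 1: fraction of correctly clustered points, after the best
    one-to-one matching between predicted and ground-truth cluster labels."""
    C = confusion_matrix(true_labels, pred_labels)
    rows, cols = linear_sum_assignment(-C)  # maximize matched counts
    return C[rows, cols].sum() / C.sum()

# Criterion 2: NMI, directly from scikit-learn.
# nmi = normalized_mutual_info_score(true_labels, pred_labels)
```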

IV-B Synthetic Data

IV-B1 fMRI Data

(a) State 1
(b) State 2
(c) State 3
(d) State 4
(e) BOLD time series of node #2, dataset #5
Fig. 3: Synthetic data generated by the Matlab SimTB toolbox [12]. (a)-(d) Noiseless and outlier-free connectivity matrices corresponding to four network states. Nodes that share the same color cooperate to perform a common subnetwork task.

Data were generated by the open-source Matlab SimTB toolbox [12]. A network is considered that transitions successively between four distinct network states (cf. Fig. 3). Each state corresponds to a certain connectivity matrix, generated as follows. Each connectivity matrix, fed to the SimTB toolbox, is modeled as the superposition of three matrices: 1) the ground-truth (noiseless) connectivity matrix (cf. Fig. 3), where nodes sharing the same color belong to the same cluster and collaborate to perform a common subnetwork task; 2) a symmetric matrix whose entries are drawn independently from a zero-mean Gaussian distribution with standard deviation $\sigma$, to model noise; and 3) a symmetric outlier matrix, whose non-zero entries account for outlier neural activity.

Different states may have different outlier matrices, controlled by a second parameter (cf. Table IV). Aiming at extensive numerical tests, six datasets were generated (corresponding to the columns of Table I) by choosing six pairs of parameters in the modeling of the connectivity matrices and the SimTB toolbox. Datasets 1, 2 and 3 (D1, D2 and D3) were created without outliers, while datasets 4, 5 and 6 (D4, D5 and D6) include outlier matrices whose parameters differ across states. Table IV details the parameters of those six datasets. Driven by the previous connectivity matrices, the SimTB toolbox generates BOLD time series [5]. Each state contributes an equal number of signal samples to every nodal time series, e.g., Fig. 3(e).
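A sketch of the three-matrix superposition described above follows; the Bernoulli placement of outlier entries and the unit outlier magnitude are assumptions, since the corresponding values are not legible in the source.

```python
import numpy as np

def synthetic_connectivity(C_true, sigma, p_out, outlier_val=1.0, rng=None):
    """Ground-truth connectivity + symmetric Gaussian noise (std sigma)
    + a symmetric outlier matrix (entries fire with probability p_out)."""
    rng = np.random.default_rng(rng)
    n = C_true.shape[0]
    E = rng.normal(0.0, sigma, size=(n, n))
    E = (E + E.T) / 2.0                            # symmetric noise matrix
    mask = np.triu(rng.random((n, n)) < p_out, k=1)
    O = outlier_val * (mask + mask.T)              # symmetric outlier matrix
    return C_true + E + O
```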

(a) State 1
(b) State 2
(c) D1S1
(d) D1S2
(e) D2S1
(f) D2S2
(g) D3S1
(h) D3S2
(i) EEG time series of node #27, dataset #2
Fig. 4: Synthetic data generated by the Virtual Brain toolbox [63]. (a) and (b): Noiseless task connectivity matrices for each state; there are three communities/tasks in state 1 and four tasks in state 2. (c)-(h): Connectivity matrices for each state of the three datasets. (i): The time series of the 27th node of dataset 2.
Methods Without Outliers With Outliers
Clustering Accuracy NMI Clustering Accuracy NMI
D1 D2 D3 D1 D2 D3 D4 D5 D6 D4 D5 D6
ARMA 0.969 0.805 0.640 0.948 0.766 0.596 0.944 0.743 0.589 0.860 0.627 0.340
K-ARMA[S] 1 0.824 0.671 1 0.791 0.622 0.983 0.775 0.599 0.930 0.651 0.379
K-ARMA[M] 1 0.839 0.708 1 0.808 0.641 0.992 0.800 0.626 0.967 0.689 0.435
3DWTA 1 0.792 0.603 1 0.735 0.556 0.943 0.721 0.517 0.872 0.558 0.281
SMCE 0.920 0.784 0.583 0.887 0.673 0.480 0.883 0.712 0.508 0.713 0.562 0.246
Kmeans 0.866 0.670 0.402 0.800 0.590 0.307 0.768 0.621 0.337 0.476 0.403 0.168
TABLE I: Synthetic fMRI Data: State clustering

Table I demonstrates the results of state clustering. The parameters of ARMA and K-ARMA were tuned as described in Sec. IV-A. The Gaussian kernel (cf. App. A) is used in the single-kernel method K-ARMA[S], while a convex combination of Gaussian kernels is used in the K-ARMA[M] case, since it performed the best among other choices of kernel functions. Fig. 5 also depicts the standard deviations of the results of Table I, computed after performing independent repetitions of the same test. The standard deviations of all algorithms increase when the strength of the noise matrix increases. For dataset D1, K-ARMA[S], K-ARMA[M] and 3DWTA reach perfect accuracy; for the other datasets, K-ARMA[M] exhibits the highest accuracy and the smallest standard deviation.

3DWTA, K-ARMA[S] and K-ARMA[M] achieved a perfect score for both the clustering-accuracy and NMI metrics on dataset D1. Among all methods, K-ARMA[M] scores the highest clustering accuracy and NMI over all six datasets. It can be observed from Table I that the existence of outliers negatively affects the ability of all methods to cluster data. The main reason is that the algorithms tend to detect outliers and gather those in clusters different from the nominal ones. Ways to reject those outliers are outside the scope of this study and will be provided in a future publication.

Methods Without Outliers With Outliers
Clustering Accuracy NMI Clustering Accuracy NMI
D1 D2 D3 D1 D2 D3 D4 D5 D6 D4 D5 D6
ARMA 1 0.960 0.842 1 0.876 0.775 0.973 0.910 0.817 0.940 0.793 0.664
K-ARMA[S] 1 1 0.915 1 1 0.838 1 0.942 0.852 1 0.864 0.710
K-ARMA[M] 1 1 0.945 1 1 0.907 1 0.958 0.879 1 0.892 0.803
3DWTA 1 0.941 0.839 1 0.927 0.754 0.925 0.863 0.799 0.842 0.780 0.638
SMCE 0.975 0.929 0.827 0.902 0.865 0.691 0.909 0.773 0.745 0.769 0.647 0.563
GOE 1 0.933 0.809 1 0.915 0.655 0.918 0.740 0.684 0.833 0.652 0.409
IKM-PCA 0.948 0.907 0.791 0.890 0.814 0.629 0.892 0.756 0.712 0.738 0.551 0.486
Kmeans 0.908 0.876 0.725 0.810 0.729 0.547 0.843 0.672 0.605 0.620 0.391 0.314
TABLE II: Synthetic fMRI data: Community detection

Table II presents the results of community detection. The numerical values in Table II stand for the average values over the states of each dataset. The parameters of ARMA and K-ARMA were tuned as described in Sec. IV-A. In K-ARMA[S], a single Gaussian kernel is utilized, while in K-ARMA[M] the kernel is defined as a convex combination of Gaussian kernels (cf. App. A). Table II demonstrates that K-ARMA[M] consistently outperforms all other methods across all datasets, even when outliers contaminate the data. Fig. 6 also depicts the standard deviations of the results of Table II. ARMA, K-ARMA[S], K-ARMA[M] and 3DWTA score perfect accuracy on dataset D1, while K-ARMA[S] and K-ARMA[M] show perfect accuracy on dataset D4. K-ARMA[M] shows the highest accuracy on all other datasets.

Methods Without Outliers With Outliers
Clustering Accuracy NMI Clustering Accuracy NMI
D1 D2 D3 D1 D2 D3 D4 D5 D6 D4 D5 D6
ARMA 1 0.816 0.749 1 0.767 0.684 0.928 0.701 0.633 0.874 0.484 0.355
K-ARMA[S] 1 0.856 0.781 1 0.791 0.702 0.956 0.728 0.664 0.913 0.534 0.410
K-ARMA[M] 1 0.884 0.817 1 0.821 0.739 1 0.757 0.721 1 0.602 0.485
SMCE 0.936 0.792 0.691 0.804 0.617 0.495 0.851 0.665 0.580 0.785 0.416 0.318
TABLE III: Synthetic fMRI data: Subnetwork state sequences

Table III illustrates the results of task clustering on the synthetic fMRI datasets D1–D6. The parameters of ARMA and K-ARMA were tuned as described in Sec. IV-A. The kernel functions used in K-ARMA[S] and K-ARMA[M] are identical to those employed in Table II. Similarly to the previous cases, K-ARMA[M] outperforms all other methods across all datasets and scenarios on both clustering accuracy and NMI. Fig. 7 also depicts the standard deviations of the results of Table III. ARMA, K-ARMA[S] and K-ARMA[M] score perfect accuracy on dataset D1. K-ARMA[M] shows the highest accuracy with the smallest standard deviation on all other datasets.

Table IV provides the parameters used to generate the noise matrices and the symmetric matrices that simulate outlier neural activity. By choosing different combinations of those parameters, different synthetic fMRI datasets were created.

Dataset State 1 State 2 State 3 State 4
1
2
3
4
5
6
TABLE IV: Parameters used to generate synthetic BOLD time series
(a) Datasets 1, 2, 3
(b) Datasets 4, 5, 6
Fig. 5: State-clustering results of synthetic fMRI datasets. (a) Data without an independent event; (b) Data with an independent event.
(a) Datasets 1, 2, 3
(b) Datasets 4, 5, 6
Fig. 6: Community detection results of synthetic fMRI datasets. (a) Data without an independent event; (b) Data with an independent event.
(a) Datasets 1, 2, 3
(b) Datasets 4, 5, 6
Fig. 7: Subnetwork-state-sequence clustering results of synthetic fMRI datasets. (a) Data without an independent event; (b) Data with an independent event.

IV-B2 EEG Data

Synthetic EEG data were generated by the open-source Virtual Brain (VB) toolbox [63]. A network is considered that transitions between two states, with noiseless and outlier-free connectivity matrices depicted in Figs. 4(a) and 4(b). It is worth noticing that the number of communities in Fig. 4(a) is three, while in Fig. 4(b) it is four. Similarly to the previous fMRI case, every connectivity matrix, which is fed to the VB toolbox [63], is the superposition of three matrices: the ground-truth matrix (cf. Figs. 4(a) and 4(b)), the “noise” matrix, and the “outlier” matrix. Three scenarios/datasets are considered, with the noisy and outlier-contaminated connectivity matrices illustrated in Figs. 4(c)-(h). Each state contributes an equal number of signal samples to every nodal time series, e.g., Fig. 4(i).

TABLE V: Synthetic EEG data: State clustering
Methods Clustering Accuracy NMI
D1 D2 D3 D1 D2 D3
ARMA 1 1 0.944 1 1 0.906
K-ARMA[S] 1 1 0.963 1 1 0.926
K-ARMA[M] 1 1 0.992 1 1 0.984
3DWTA 1 0.976 0.939 1 0.942 0.896
SMCE 0.952 0.901 0.804 0.914 0.852 0.766
Kmeans 0.879 0.793 0.732 0.796 0.761 0.694

The results of state clustering are shown in Table V; Fig. 8 depicts the corresponding standard deviations. The parameters of ARMA and K-ARMA were tuned as described in Sec. IV-A. A single Gaussian kernel is used for K-ARMA[S], and a convex combination of Gaussian kernels for K-ARMA[M]. Furthermore, since clustering on dataset 3 is performed between two states, the entries of the confusion matrix (TPR, FPR, TNR, FNR) were also computed for 3DWTA and K-ARMA[S]. Due to noise and outliers, all algorithms show the highest accuracy on dataset 1 and the lowest accuracy on dataset 3. ARMA, K-ARMA[S], K-ARMA[M] and 3DWTA score perfect accuracy on dataset 1. ARMA, K-ARMA[S], and K-ARMA[M] exhibit perfect accuracy on dataset 2, while K-ARMA[M] scores the highest accuracy and the smallest standard deviation on dataset 3.

Fig. 8: State-clustering results of synthetic EEG data.
Methods Clustering accuracy NMI
D1 D2 D3 D1 D2 D3
ARMA 1 0.975 0.923 1 0.924 0.861
K-ARMA[S] 1 1 0.958 1 1 0.895
K-ARMA[M] 1 1 0.972 1 1 0.921
3DWTA 1 1 0.867 1 1 0.753
SMCE 1 0.951 0.833 1 0.887 0.706
GOE 1 0.946 0.785 1 0.869 0.684
IKM-PCA 0.961 0.913 0.811 0.935 0.847 0.652
Kmeans 0.894 0.808 0.696 0.836 0.713 0.530
TABLE VI: Synthetic EEG data: Community detection

Table VI illustrates the results of community detection on the synthetic EEG datasets (D1–D3), each spanning two states. The illustrated values are the average results over the two network states per dataset. The parameters of ARMA, K-ARMA[S] and K-ARMA[M] were tuned as described in Sec. IV-A. Moreover, a single Gaussian kernel was used as the reproducing kernel function in K-ARMA[S], while a convex combination of Gaussian kernels was used in K-ARMA[M]. Fig. 9 also depicts the standard deviations of the results of Table VI. Due to noise and outliers, all algorithms show their highest accuracies on dataset 1 and their lowest on dataset 3. ARMA, K-ARMA[S], K-ARMA[M] and 3DWTA show perfect accuracy on dataset 1. K-ARMA[S], K-ARMA[M] and 3DWTA score perfect accuracy on dataset 2. K-ARMA[M] shows the highest accuracy with the smallest standard deviation on dataset 3.

Fig. 9: Community-detection results of synthetic EEG datasets.
Methods Unknown state label Known state label
Clustering accuracy NMI Clustering accuracy NMI
D1 D2 D3 D1 D2 D3 D1 D2 D3 D1 D2 D3
ARMA 1 0.932 0.798 1 0.882 0.710 1 0.978 0.929 1 0.947 0.890
K-ARMA[S] 1 0.944 0.859 1 0.903 0.794 1 1 0.965 1 1 0.943
K-ARMA[M] 1 0.965 0.877 1 1 0.816 1 1 1 1 1 1
SMCE 0.975 0.902 0.755 0.931 0.857 0.629 1 0.940 0.886 1 0.899 0.828
TABLE VII: Synthetic EEG Data: Subnetwork state sequences

Results of subnetwork-state-sequence clustering for the synthetic EEG data are shown in Table VII. The utilized parameters and kernel functions were chosen to be the same as in the previous case of community detection. ARMA, K-ARMA[S] and K-ARMA[M] achieved 100% accuracy and NMI on dataset D1, since the standard deviations of the “noise” matrices are small and dataset D1 does not include outliers. Fig. 10 also depicts the standard deviations of the results of Table VII. Comparing the two panels of Fig. 10 clearly shows that, if the “true” state labels are known beforehand, so that the state-identification step of Alg. 3 is not needed, then all clustering-accuracy and NMI values increase while the standard deviations decrease. Similarly to the other tests, K-ARMA[M] shows the highest accuracy on all datasets.

(a) Estimated state label
(b) True state label
Fig. 10: Subnetwork-state-sequence clustering results of synthetic EEG datasets. (a) With estimated state label; (b) with true state label.

IV-C Real Data

The open-source real EEG data of [64] were used. The data comprise five sets (A–E), each containing 100 single-channel segments of 23.6-sec duration; the sampling rate of the data was 173.61 Hz. Only state clustering is examined, since the data [64] do not contain any connectivity-structure information. Sets D and E were chosen: set D contains only activity measured during seizure-free intervals, while set E contains only seizure activity. The length of the time series extracted from the data was kept the same for all methods. Methods ARMA, K-ARMA[S], K-ARMA[M], SMCE and K-means were validated. 3D-WTA did not perform well on those datasets, and thus its results are not shown here.

Toward a realistic scenario, the time series of sets D and E are concatenated to create a single time series. The network has 100 nodes. The parameters of ARMA, K-ARMA[S] and K-ARMA[M] were tuned as described in Sec. IV-A; a single Gaussian kernel was used in K-ARMA[S], and a convex combination of Gaussian kernels in K-ARMA[M]. Due to the sliding-window implementation in the proposed framework, there are cases where the sliding window captures samples from both the D and E time series; the features extracted in those cases are labeled as a separate cluster. Fig. 11 also depicts the standard deviations of the results of Table VIII. K-ARMA[M] scores the highest clustering-accuracy and NMI values with the smallest standard deviation. In the case where the time series D and E are not concatenated, sliding windows capturing data from both clusters (states) do not occur. In such a binary-decision case, the confusion-matrix entries (TPR, FPR, TNR, FNR) were also computed for ARMA, K-ARMA[S], K-ARMA[M], and SMCE.

Methods Clustering accuracy NMI
ARMA 0.907 0.844
K-ARMA[S] 0.925 0.886
K-ARMA[M] 0.939 0.921
SMCE 0.873 0.815
Kmeans 0.829 0.741
TABLE VIII: Real EEG data: State clustering

Real fMRI behavioral data, acquired from the Stellar Chance 3T scanner (SC3T) at the University of Pennsylvania, were used to cluster different states. The time series in the data are collected in two arms, before and after an inhibitory sequence of transcranial magnetic stimulation (TMS) known as continuous theta-burst stimulation [65]. Real and sham stimulation were applied for each of two different behavioral tasks. The two behavioral tasks are: 1) Navon task: A big shape made up of little shapes is shown on the screen. The big shape can be either green or white in color. If green, the participant identifies the big shape; if white, the participant identifies the little shape. The task was presented in three blocks: all white stimuli, all green stimuli, and switching between colors on 70% of trials to introduce switching demands. Responses are given via a button box in the order of circle, x, triangle, square. 2) Stroop task: Words are displayed in different color inks. There are two difficulty conditions: one where subjects respond to words that introduce low color-word conflict (far, deal, horse, plenty), and one with high conflict, where color words differ from the ink color they are printed in (e.g., “red” printed in blue, “green” printed in yellow) [66]. The participant has to report the color of the ink the word is printed in, using a button box in the order of red, green, yellow, blue.

Each BOLD time series was collected during a multi-minute scan at a fixed repetition time, which determines the length of the time series. The data cover cortical and subcortical regions. To test the state-clustering results on the fMRI time series, three states are concatenated to create a single time series. The three states are: 1) before real stimulation of the Navon task; 2) after real stimulation of the Navon task; and 3) after real stimulation of the Stroop task.

The parameters of ARMA, K-ARMA[S] and K-ARMA[M] were tuned as described in Sec. IV-A; a single Gaussian kernel was used in K-ARMA[S], and a convex combination of Gaussian kernels in K-ARMA[M]. Notice that, due to the sliding-window implementation in the proposed framework, there are cases where the sliding window captures samples from two consecutive states.

Methods Clustering accuracy NMI
ARMA 0.885 0.809
K-ARMA[S] 0.904 0.843
K-ARMA[M] 0.919 0.875
SMCE 0.893 0.816
Kmeans 0.801 0.720
TABLE IX: Real fMRI data: State clustering results

Results of state clustering on real fMRI data are reported in Table IX. Fig. 12 also depicts the standard deviations of the results of Table IX. Again, K-ARMA[M] exhibits the highest clustering-accuracy and NMI values, with the smallest standard deviation among all employed methods.

Fig. 11: Real EEG data: State clustering.
Fig. 12: Real fMRI data: State clustering.

V Conclusions

This paper introduced a novel clustering framework to perform all possible clustering tasks in dynamic (brain) networks: state clustering, community detection, and subnetwork-state-sequence tracking/identification. Features were extracted by a kernel-based ARMA model, with column spaces of observability matrices mapped to the Grassmann manifold (Grassmannian). A clustering algorithm, geodesic clustering with tangent spaces, was also provided to exploit the rich underlying Riemannian geometry of the Grassmannian. The framework was validated on multiple simulated and real datasets and compared against state-of-the-art clustering algorithms. Test results demonstrate that the proposed framework outperforms the competing methods in clustering states, identifying community structures, and tracking multiple subnetwork state sequences which may span several network states. Current research efforts include ways to reduce the computational footprint of the framework, as well as techniques to reject network-wide outlier data.

Appendix A Reproducing Kernel Hilbert Spaces

A reproducing kernel Hilbert space $\mathcal{H}$, equipped with inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$, is a functional space where each point is a function $f : \mathbb{R}^D \to \mathbb{R}$, for some $D \in \mathbb{Z}_{>0}$, s.t. the mapping $f \mapsto f(\mathbf{x})$ is continuous, for any choice of $\mathbf{x} \in \mathbb{R}^D$ [67, 47, 68]. There exists a kernel function $\kappa : \mathbb{R}^D \times \mathbb{R}^D \to \mathbb{R}$, unique to $\mathcal{H}$, such that (s.t.) $\kappa(\mathbf{x}, \cdot) \in \mathcal{H}$ and $f(\mathbf{x}) = \langle f, \kappa(\mathbf{x}, \cdot) \rangle_{\mathcal{H}}$, for any $\mathbf{x} \in \mathbb{R}^D$ and any $f \in \mathcal{H}$ [67, 68]. The latter property is the reason for calling the kernel reproducing, and yields the celebrated “kernel trick”: $\langle \kappa(\mathbf{x}, \cdot), \kappa(\mathbf{y}, \cdot) \rangle_{\mathcal{H}} = \kappa(\mathbf{x}, \mathbf{y})$, for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^D$.

Popular examples of reproducing kernels are:

  1. The linear $\kappa(\mathbf{x}, \mathbf{y}) := \mathbf{x}^\top \mathbf{y}$, where space $\mathcal{H}$ is nothing but $\mathbb{R}^D$;

  2. the Gaussian $\kappa(\mathbf{x}, \mathbf{y}) := \exp(-\|\mathbf{x} - \mathbf{y}\|_2^2 / (2\sigma^2))$, where $\sigma > 0$ and $\|\cdot\|_2$ is the standard Euclidean norm. In this case, $\mathcal{H}$ is infinite dimensional [68];

  3. the Laplacian $\kappa(\mathbf{x}, \mathbf{y}) := \exp(-t \|\mathbf{x} - \mathbf{y}\|_1)$, where $t > 0$ and $\|\cdot\|_1$ stands for the $\ell_1$-norm [69]; and

  4. the polynomial $\kappa(\mathbf{x}, \mathbf{y}) := (\mathbf{x}^\top \mathbf{y} + 1)^p$, for some parameter $p \in \mathbb{Z}_{>0}$.

There are several ways of generating reproducing kernels via certain operations on well-known kernel functions such as convex combinations, products, etc. [47].
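For reference, the four kernels listed above can be written as one-liners; the parameter conventions follow the standard definitions and are assumptions insofar as the source's exact parametrizations are not legible.

```python
import numpy as np

# x, y: 1-D numpy arrays; sigma, t > 0; integer p >= 1 (standard conventions assumed):
linear     = lambda x, y: float(np.dot(x, y))
gaussian   = lambda x, y, sigma=1.0: float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))
laplacian  = lambda x, y, t=1.0: float(np.exp(-t * np.sum(np.abs(x - y))))
polynomial = lambda x, y, p=2: float((np.dot(x, y) + 1.0) ** p)
```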

Define $\mathcal{H}^m$, for some $m \in \mathbb{Z}_{>0}$, as the space whose points take the following form: $\mathbf{f} := (f_1, \ldots, f_m)$ s.t. $f_j \in \mathcal{H}$, $\forall j \in \{1, \ldots, m\}$, where $(f_j)_{j=1}^m$ is a compact notation for such a tuple. For $\mathbf{f} \in \mathcal{H}^m$ and given a matrix $\mathbf{B}$ with $m$ rows, the product $\mathbf{f} \odot \mathbf{B}$ stands for the vector-valued function whose $j$th entry is $\sum_{k=1}^m B_{kj} f_k$. Similarly, define $\mathcal{H}^{m \times q}$, for some $q \in \mathbb{Z}_{>0}$, as the space comprising all $m \times q$ arrays with entries in $\mathcal{H}$.