Brain-Network Clustering via
Kernel-ARMA Modeling and the Grassmannian
Abstract
Recent advances in neuroscience and in the technology of functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have propelled a growing interest in brain-network clustering via time-series analysis. Notwithstanding, most of the brain-network clustering methods revolve around state clustering and/or node clustering (a.k.a. community detection or topology inference) within states. This work answers first the need of capturing nonlinear nodal dependencies by bringing forth a novel feature-extraction mechanism via kernel autoregressive-moving-average modeling. The extracted features are mapped to the Grassmann manifold (Grassmannian), which consists of all linear subspaces of a fixed rank. By virtue of the Riemannian geometry of the Grassmannian, a unifying clustering framework is offered to tackle all possible clustering problems in a network: Cluster multiple states, detect communities within states, and even identify/track subnetwork state sequences. The effectiveness of the proposed approach is underlined by extensive numerical tests on synthetic and real fMRI/EEG data which demonstrate that the advocated learning method compares favorably versus several state-of-the-art clustering schemes.
I Introduction
Recent advances in neuroscience reveal the brain to be a complex network capable of integrating and generating information from external and internal sources in real time [1]. The rapidly growing field of Network Neuroscience uses network analytics to reveal, via graph theory and its concepts (e.g., nodes and edges), topological and functional dependencies of the brain [2]. At the microscopic scale, nodes of a brain network correspond to individual neurons, while edges might describe synaptic coupling between neurons or relationships between their firing patterns [3]. At the macroscopic scale, nodes might be brain regions, and edges might represent anatomical connections (structural connectivity) or statistical relationships between regional brain dynamics (functional connectivity) [4].
Popular noninvasive techniques used to acquire time-series data from brain networks include functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). In particular, fMRI monitors the blood-oxygen-level-dependent (BOLD) time series [5], while EEG tracks brain activity through the time series collected via electrodes on the scalp. EEG possesses high temporal resolution and is considered relatively convenient, inexpensive, and harmless compared to other methods such as magnetoencephalography (MEG), which is in turn much less risky than positron emission tomography (PET) [6].
An important aspect of the majority of works in network analytics is that the time-series data describing the nodal signals tend to be considered stationary, and many learning algorithms make the temporal-smoothness assumption [7, 8]. However, stationarity of the brain-network data should not be assumed, as it is known that the brain acts as a nonstationary network even during its resting state, e.g., [9, 10, 11, 12, 13]. Dynamic functional brain networks can be built using pairwise relationships derived from the time-series data described above, and this dynamic-network viewpoint has been widely exploited to identify diseases, cognitive states, and individual differences in performance [14, 15, 16, 17].
Learning algorithms are often employed to identify functional dependencies among nodes and the topology of networks. As a prominent example, clustering algorithms have been already utilized to verify the dynamic nature of brain networks [9, 10], as well as to predict and detect brain disorders at large, such as depression [18], epilepsy, schizophrenia [19], Alzheimer's disease and autism [20]. In general, brain-network clustering methods aim at three major goals: Node clustering (a.k.a. community detection or topology inference) within a given brain state, state clustering of similar brain states, and subnetwork-state-sequence identification. Loosely speaking, a "brain state" corresponds to a specific "global" network topology or nodal-connectivity pattern which stays fixed over a time interval. A "subnetwork state sequence" is defined as the latent (stochastic) process that drives a subnetwork/subgroup of nodal time series; it may span several network-wide/"global" states, and the collaborating nodes may even change as the brain transitions from one "global" state to another.
Most brain-clustering algorithms are used for nodal and state clustering, while only very few schemes try to identify/track subnetwork state sequences. For example, community detection has been studied extensively to perform clustering in both static and dynamic brain networks [21, 22]. Modularity maximization [23, 24] is a popular method for performing community detection in functional brain networks, but it relies on the selection of additional parameters that determine the proper null model, the resolution of clustering and, in the dynamic case, the value of inter-layer coupling [25]. In [26, 27], the discrete wavelet transform decomposes EEG signals into frequency subbands, and K-means is used to cluster the acquired wavelet coefficients. In [28], nonparametric Bayesian models, coined Bayesian community detection and infinite relational modeling, are introduced and applied to resting-state fMRI data to define probabilities on network edges. Clustering in [28] is performed by running sophisticated comparisons on the values of those edge probabilities. Study [29] investigated network "motifs," defined as recurrent and statistically significant subgraphs or patterns. A spectral-clustering-based algorithm applied to motif features revealed a spatially coherent, yet frequency-dependent, subdivision between the posterior, occipital and frontal brain regions [29]. Entropy maximization and frequency decomposition were utilized in [30] prior to applying vector quantization [31] to frequency-based features for clustering communities within EEG data. In [32], EEG-data topography via Rényi's entropy was proposed as a feature-extraction mapping, before applying self-organizing maps as the off-the-shelf clustering algorithm. In the recently popular graph-signal-processing context [8, 33], topology inference is achieved by solving optimization problems formed via the observed time-series data and the eigendecomposition of the Laplacian matrix of the network.
Other approaches have been used to perform state clustering. For example, [34] advocates hidden Markov models (HMMs) to characterize brain-state dynamics. HMM parameters are extracted from each state and used to form vectors in a Euclidean space, with their pairwise metric distances comprising the entries of an affinity matrix. Hierarchical clustering is then applied to the affinity matrix to cluster brain states. In [35], time-varying features are extracted from healthy controls and patients with schizophrenia using independent vector analysis (IVA). Mutual information among the IVA features, Markov modeling and K-means are used to detect changes in the brain's spatial connectivity patterns. A change-point detection approach for resting-state fMRI is introduced in [36]. Functional-connectivity patterns of all fiber-connected cortical voxels are concatenated into a descriptive feature vector to represent the brain's state, and the temporal change points of different brain states are decided by detecting the abrupt changes of the vector patterns via a sliding-window approach. In [37], hierarchical clustering is applied to a time series of graph-distance measures to identify discrete states of networks. Moreover, motivated by the observation that changes in nodal communities suggest changes in network states, studies [38, 39] perform community detection on fMRI data, prior to state clustering, by capitalizing on K-means, multilayer modeling, (Tucker) tensor and higher-order singular value decomposition.
There are only a few methods that can cluster subnetwork state sequences in fMRI and EEG modalities. In [40], features are extracted from the frequency content of the fMRI/EEG time series. A feature example is the ratio of the sum of amplitudes within a specific frequency interval over the sum of amplitudes over the whole frequency range of the time series. Features, and thus subnetwork state sequences, are then clustered via K-means [40]. A computer-vision approach is introduced in [41]. EEG data are transformed into dynamic topographic maps, able to display features such as voltage amplitude, power and peak latency. The flow of activation within those topographic maps is estimated by an optical-flow estimation method [42] which generates motion vectors. Motion vectors are clustered into groups, and these dynamic clusters are tracked along the time axis to depict the activation flow and track the subnetwork state sequences.
This paper capitalizes on the directions established by [43] to introduce a unifying feature-extraction and clustering framework, with a strong geometric flavor, that makes no stationarity assumptions and can carry through all possible brain-clustering duties, i.e., community detection, state clustering, and subnetwork-state-sequence clustering/tracking. A kernel autoregressive-moving-average (KARMA) model is proposed to capture latent nonlinear and causal dependencies, not only within a single time series, but also among multiple nodal time series of the brain network. To accommodate the highly likely nonstationarity of the time series, the KARMA model is applied via a time-sliding window. Per application of the KARMA model, a system-identification problem is solved to extract a low-rank observability matrix. Such a low-rank representation enables dimensionality-reduction arguments which are beneficial to learning methods for the usually high-dimensional ambient spaces associated with brain-network analytics. Features are defined as the low-rank column spaces of the computed observability matrices. For a fixed rank, those features become points of the Grassmann manifold (Grassmannian), which enjoys a rich Riemannian geometry. This feature-extraction scheme permeates all clustering duties in this study. Having obtained the features, and to identify clusters, this study builds on Riemannian multi-manifold modeling (RMMM) [44, 45, 43], which postulates that clusters take the form of submanifolds in the Grassmannian. To compute clusters, the underlying Riemannian geometry is exploited by the geodesic-clustering-with-tangent-spaces (GCT) algorithm [44, 45, 43]. Unlike the pipeline in [46], which used the covariance matrix of EEG data, after low-pass filtering, as a feature on a manifold and considered only the Riemannian distance between features, GCT considers both distance and angle as geometric information for clustering. In contrast to [44, 45, 43], where the number of clusters needs to be known a priori, this paper incorporates hierarchical clustering to render GCT free from any a-priori knowledge of the number of clusters. Extensive numerical tests on synthetic and real fMRI/EEG data demonstrate that the proposed framework, i.e., the feature-extraction mechanism and the GCT-based clustering algorithm, compares favorably versus state-of-the-art manifold-learning and brain-network clustering schemes.
The rest of the paper is organized as follows. The KARMA model and the feature-extraction mechanism are introduced in Section II. The new variant of the GCT clustering algorithm is presented in Section III, while synthetic and real fMRI/EEG data are used in Section IV to validate the theoretical and algorithmic developments. The manuscript is concluded in Section V, while the mathematical notation, background material, as well as proofs are deferred to the Appendix.
II Kernel-ARMA Modeling
Consider a (brain) network/graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$, with a set of nodes $\mathcal{N}$, of cardinality $N := |\mathcal{N}|$, and a set of edges $\mathcal{E}$. Each node $n \in \mathcal{N}$ is annotated by a discrete-time stochastic process (time series) $(x_n(t))_{t \in \mathbb{Z}}$, where $t$ denotes discrete time and $\mathbb{Z}$ the set of all integer numbers; cf. Fig. 1. To avoid congestion in notation, $x_n(t)$ stands for both the random variable (RV) and its realization. The physical meaning of $\mathcal{N}$ and $(x_n(t))_{t \in \mathbb{Z}}$ depends on the underlying data-collection modalities. For example, in fMRI, nodes comprise regions of interest (ROIs) of the brain which are connected either anatomically or functionally, and $x_n(t)$ becomes a BOLD time series of the average signal in a given ROI [5]; e.g., Fig. (e). In EEG, $\mathcal{N}$ consists of all electrodes placed on the scalp, and $x_n(t)$ gathers the signal samples collected by electrode $n$; cf. Fig. (i). For an index $\nu$ and an integer $D$, the $D \times 1$ vector $\mathbf{y}_\nu(t)$ is used in this manuscript to collect signal samples from node(s) of the network at the time instance $t$, and to unify several scenarios of interest as the following discussion demonstrates.
II-A State Clustering
II-B Community Detection and Clustering of Subnetwork State Sequences
In the case of community detection and subnetwork-state-sequence clustering, nodes need to be partitioned via the (dis)similarities of their time series. For example, in subnetwork-state-sequence clustering, same-cluster nodes collaborate to carry through a common task. To be able to detect common features and to identify those nodes, it is desirable first to extract individual features from each nodal time series. To this end, $\nu$ is assigned the value $n$, so that, for a given buffer length $D$, $\mathbf{y}_\nu(t)$ takes the form of $\mathbf{y}_n(t) := [x_n(t), x_n(t+1), \ldots, x_n(t+D-1)]^\top$.
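To make the buffering step concrete, the following minimal sketch forms the vectors $\mathbf{y}_n(t)$ from a single nodal time series; the NumPy representation, the function name, and the toy signal are illustrative assumptions, not part of the original pipeline.

```python
import numpy as np

def buffer_vectors(x: np.ndarray, D: int) -> np.ndarray:
    """Stack D consecutive samples of a nodal time series x into the
    sliding-buffer vectors y_n(t) = [x(t), x(t+1), ..., x(t+D-1)]^T.
    Returns an array of shape (len(x) - D + 1, D); row t is y_n(t)."""
    T = len(x) - D + 1
    return np.stack([x[t:t + D] for t in range(T)])

# Toy usage: a 100-sample signal buffered with D = 8 yields 93 vectors.
x = np.sin(0.3 * np.arange(100))   # stand-in for a BOLD/EEG time series
Y = buffer_vectors(x, D=8)         # shape (93, 8)
```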
II-C Extracting Grassmannian Features
Consider now a user-defined RKHS $\mathcal{H}$ with its kernel (feature) mapping $\varphi(\cdot)$; cf. App. A. Given $\nu$ and assuming that the sequence $(\mathbf{y}_\nu(t))_t$ is available, define $\boldsymbol{\psi}_\nu(t) := \varphi(\mathbf{y}_\nu(t))$. This work proposes the following kernel (K)ARMA model to fit the variations of the features within the space $\mathcal{H}$: There exist matrices $\mathbf{C}$ and $\mathbf{A}$, the latent variable $\mathbf{h}_\nu(t)$, and vectors $\mathbf{v}_\nu(t)$, $\mathbf{w}_\nu(t)$ that capture noise and approximation errors, s.t. $\forall t$,

(1a)  $\boldsymbol{\psi}_\nu(t) = \mathbf{C}\,\mathbf{h}_\nu(t) + \mathbf{v}_\nu(t)$,
(1b)  $\mathbf{h}_\nu(t+1) = \mathbf{A}\,\mathbf{h}_\nu(t) + \mathbf{w}_\nu(t)$.
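For intuition, the sketch below simulates the special case of (1) under the linear kernel, where $\boldsymbol{\psi}_\nu(t) = \mathbf{y}_\nu(t)$ and (1) reduces to a classical linear state-space model; all dimensions, noise levels, and the random instances of $\mathbf{A}$ and $\mathbf{C}$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, D, T = 3, 8, 200   # latent dimension, output dimension, horizon (assumed)

# Random model instance: a stable transition matrix A and an observation matrix C.
A = 0.9 * np.linalg.qr(rng.standard_normal((rho, rho)))[0]  # spectral radius 0.9
C = rng.standard_normal((D, rho))

h = rng.standard_normal(rho)    # latent state h(0)
Psi = np.empty((T, D))          # observations; psi(t) = y(t) for the linear kernel
for t in range(T):
    Psi[t] = C @ h + 0.05 * rng.standard_normal(D)   # (1a): psi(t) = C h(t) + v(t)
    h = A @ h + 0.05 * rng.standard_normal(rho)      # (1b): h(t+1) = A h(t) + w(t)
```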
Proposition 1.
Given a parameter $m \in \mathbb{Z}_{>0}$, define the "forward" matrix-valued function

(2a)  $\boldsymbol{\Psi}_{\mathrm{f}}(t) := [\boldsymbol{\psi}_\nu(t), \boldsymbol{\psi}_\nu(t+1), \ldots, \boldsymbol{\psi}_\nu(t+m-1)]$,

and the "backward" matrix-valued function

(2b)  $\boldsymbol{\Psi}_{\mathrm{b}}(t) := [\boldsymbol{\psi}_\nu(t-1), \boldsymbol{\psi}_\nu(t-2), \ldots, \boldsymbol{\psi}_\nu(t-m)]$.

Then, there exist matrices $\mathbf{O}$ and $\mathbf{G}(t)$ s.t. the following low-rank factorization holds true:

(3)  $\boldsymbol{\Psi}_{\mathrm{f}}(t) * \boldsymbol{\Psi}_{\mathrm{b}}(t) = \mathbf{O}\,\mathbf{G}(t)$,

where the product $*$ is defined in App. A, and $\mathbf{O}$ is the so-called observability matrix:
$\mathbf{O} := [\mathbf{C}^\top, (\mathbf{C}\mathbf{A})^\top, \ldots, (\mathbf{C}\mathbf{A}^{m-1})^\top]^\top$.
With regards to a probability space, if $\mathbf{v}_\nu(t)$ and $\mathbf{w}_\nu(t)$ in (1) are considered to be zero-mean and independent and identically distributed stochastic processes, independent of each other, if the latent variable $\mathbf{h}_\nu(t)$ is independent of $\mathbf{w}_\nu(t)$, and if independence holds true also between the forward noise terms and the backward observations, then

(4)  $\mathbb{E}\big[\boldsymbol{\Psi}_{\mathrm{f}}(t) * \boldsymbol{\Psi}_{\mathrm{b}}(t)\big] = \mathbf{O}\,\mathbb{E}[\mathbf{G}(t)]$.

If, in addition, $(\mathbf{h}_\nu(t))_t$, $(\mathbf{v}_\nu(t))_t$, and $(\mathbf{w}_\nu(t))_t$ are wide-sense stationary, then the sample average $(1/T)\sum_{t=1}^{T} \boldsymbol{\Psi}_{\mathrm{f}}(t) * \boldsymbol{\Psi}_{\mathrm{b}}(t)$ converges, as $T \to \infty$, to (4) in the mean-square ($\mathcal{L}^2$) sense w.r.t. the probability space.
Proof:
See App. B. ∎
There can be many choices for the reproducing kernel function (cf. App. A). If the linear kernel $\kappa(\mathbf{y}, \mathbf{y}') := \mathbf{y}^\top \mathbf{y}'$ is chosen, then $\mathcal{H} = \mathbb{R}^D$, $\varphi$ becomes the identity mapping, $\boldsymbol{\psi}_\nu(t) = \mathbf{y}_\nu(t)$, and the product $*$ boils down to the usual matrix product. This case was introduced in [43]. The most popular choice for $\kappa$ is the Gaussian kernel $\kappa_\sigma(\mathbf{y}, \mathbf{y}') := \exp(-\|\mathbf{y} - \mathbf{y}'\|_2^2/(2\sigma^2))$, where the parameter $\sigma$ stands for the standard deviation. However, pinpointing the appropriate $\sigma$ for a specific dataset is a difficult task which may entail cumbersome cross-validation procedures [47]. A popular approach to circumvent the judicious selection of $\sigma$ is to use a dictionary of parameters $\{\sigma_k\}_{k=1}^{K}$, with $K \in \mathbb{Z}_{>0}$, to cover an interval where $\sigma$ is known to belong to. A reproducing kernel function can be then defined as the convex combination $\kappa := \sum_{k=1}^{K} \xi_k \kappa_{\sigma_k}$, where $\{\xi_k\}_{k=1}^{K}$ are convex weights, i.e., nonnegative real numbers s.t. $\sum_{k=1}^{K} \xi_k = 1$ [47]. Such a strategy is followed in Section IV. Examples of non-Gaussian kernels can be also found in App. A.
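The multi-kernel strategy is straightforward to implement, since a convex combination of Gram matrices is the Gram matrix of the combined kernel. A minimal sketch follows; the $\sigma$-dictionary and the uniform convex weights are illustrative assumptions.

```python
import numpy as np

def gaussian_gram(Y: np.ndarray, sigma: float) -> np.ndarray:
    """Gram matrix K[i, j] = exp(-||y_i - y_j||^2 / (2 sigma^2)) for rows of Y."""
    sq_dists = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def multikernel_gram(Y, sigmas, weights=None):
    """Gram matrix of the convex combination sum_k xi_k * kappa_{sigma_k}."""
    if weights is None:  # uniform convex weights (assumption)
        weights = np.full(len(sigmas), 1.0 / len(sigmas))
    assert np.isclose(np.sum(weights), 1.0) and np.min(weights) >= 0.0
    return sum(w * gaussian_gram(Y, s) for w, s in zip(weights, sigmas))

Y = np.random.default_rng(1).standard_normal((50, 8))  # 50 buffered vectors, D = 8
K = multikernel_gram(Y, sigmas=(0.5, 1.0, 2.0, 4.0))   # hypothetical dictionary
```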
Kernel-based ARMA models have been already studied in the context of support-vector regression [48, 49, 50]. However, those models differ from (1), since only the AR and MA vectors of coefficients are mapped to an RKHS feature space, while the observed data (of only a single time series) are kept in the input space. Here, (1) offers a way to map even the observed data to an RKHS, so as to capture nonlinearities in the data via applying the ARMA idea to properly chosen feature spaces. In a different context [51], time series of graph-distance metrics are fitted by ARMA modeling to detect anomalies and thus identify states in networks. Neither Riemannian geometry nor kernel functions were investigated in [51].
Motivated by (3), (4), the preceding mean-square convergence result, and the fact that the conditional expectation is the least-squares-best estimator [52, §9.4], the following task is proposed to obtain an estimate $\hat{\mathbf{O}}$ of the observability matrix: with $\hat{\mathbf{E}}$ denoting the sample average of $\boldsymbol{\Psi}_{\mathrm{f}}(t) * \boldsymbol{\Psi}_{\mathrm{b}}(t)$ over the available time instances,

(5)  $(\hat{\mathbf{O}}, \hat{\mathbf{G}}) \in \arg\min\nolimits_{(\mathbf{O}, \mathbf{G})} \|\hat{\mathbf{E}} - \mathbf{O}\mathbf{G}\|_{\mathrm{F}}^2$, s.to $\operatorname{rank}(\mathbf{O}\mathbf{G}) \leq \rho$, for a user-defined rank $\rho \in \mathbb{Z}_{>0}$.
To solve (5), the singular value decomposition (SVD) is applied to obtain $\hat{\mathbf{E}} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$, where $\mathbf{U}$ is orthogonal. Assuming that $\operatorname{rank}(\hat{\mathbf{E}}) \geq \rho$, the Schmidt-Mirsky-Eckart-Young theorem [53] provides the estimates $\hat{\mathbf{O}} := \mathbf{U}_\rho$ and $\hat{\mathbf{G}} := \boldsymbol{\Sigma}_\rho \mathbf{V}_\rho^\top$, where $\mathbf{U}_\rho$ is the orthogonal matrix that collects those columns of $\mathbf{U}$ that correspond to the top-$\rho$ (principal) singular values in $\boldsymbol{\Sigma}$.
Due to the factorization $\hat{\mathbf{E}} = \hat{\mathbf{O}}\hat{\mathbf{G}}$, identifying the observability matrix becomes ambiguous, since, for any nonsingular matrix $\mathbf{M}$, $\hat{\mathbf{E}} = (\hat{\mathbf{O}}\mathbf{M})(\mathbf{M}^{-1}\hat{\mathbf{G}})$, and $\hat{\mathbf{O}}\mathbf{M}$ can serve also as an estimate. By virtue of the elementary observation that the column (range) spaces of $\hat{\mathbf{O}}$ and $\hat{\mathbf{O}}\mathbf{M}$ coincide, it becomes preferable to identify the column space of $\hat{\mathbf{O}}$, denoted hereafter by $[\hat{\mathbf{O}}]$, rather than the matrix itself. If $\operatorname{rank}(\hat{\mathbf{O}}) = \rho$, then $[\hat{\mathbf{O}}]$ becomes a point in the Grassmann manifold, or Grassmannian, which is defined as the collection of all linear subspaces, of rank equal to $\rho$, of the ambient Euclidean space $\mathbb{R}^d$ [54, p. 73]. The Grassmannian is a Riemannian manifold with dimension equal to $\rho(d - \rho)$ [54, p. 74]. The algorithmic procedure of extracting the feature $[\hat{\mathbf{O}}]$ from the available data is summarized in Alg. 1. To keep notation as general as possible, instead of using all of the signal samples, a subset is considered, and the corresponding signal samples are gathered per node/index $\nu$. All generated features are gathered at the last step of Alg. 1 into a set, denoted hereafter by $\mathcal{X}$, indexed by a set of finite cardinality.
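The Eckart-Young step and the passage to the Grassmannian admit a compact implementation: only an orthonormal basis of the top-$\rho$ left singular subspace is retained. In the sketch below, the matrix standing in for the sample-averaged forward-backward product is synthetic, and the function name is hypothetical.

```python
import numpy as np

def grassmann_feature(E_hat: np.ndarray, rho: int) -> np.ndarray:
    """Return U_rho, an orthonormal basis of the best rank-rho column space of
    E_hat (Schmidt-Mirsky-Eckart-Young); [U_rho] is the Grassmannian feature."""
    U, _, _ = np.linalg.svd(E_hat, full_matrices=False)
    return U[:, :rho]   # columns: left singular vectors of the top-rho singular values

rng = np.random.default_rng(2)
E_hat = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))  # synthetic stand-in
U_rho = grassmann_feature(E_hat, rho=3)   # shape (40, 3); U_rho.T @ U_rho = I_3
```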
III Clustering Grassmannian Features
Having features available in the Grassmannian via Alg. 1, the next task in the pipeline is to cluster the feature set $\mathcal{X}$. This work follows the Riemannian multi-manifold modeling (RMMM) hypothesis [45, 44, 43], where clusters are considered to be submanifolds of the Grassmannian, with data located close to or on those submanifolds (see Fig. (a)). RMMM allows for clusters to intersect; a case where the classical K-means, for example, is known to face difficulties [55].
Clustering is performed by Alg. 2, coined geodesic clustering by tangent spaces (GCT). The GCT of Alg. 2 extends its initial form in [45, 44, 43], since Alg. 2 operates without the need to know the number of clusters $K$ a priori, as opposed to [45, 44, 43] where $K$ needs to be provided as input to the clustering algorithm. This desirable feature of Alg. 2 is also along the lines of usual practice, where it is unrealistic to know $K$ before employing a clustering algorithm.
In a nutshell, Alg. 2 computes the affinity matrix of the features, comprising information about sparse data approximations, via sparse-coding weights, as well as angles between linear subspaces. Although the incorporation of sparse weights originates from [56], one of the novelties of GCT is the usage of the angular information. GCT's version of [45, 44, 43] applies spectral clustering, where knowledge of the number of clusters $K$ is necessary. To surmount the obstacle of knowing $K$ beforehand, the Louvain clustering method [57] is adopted instead, as sketched in the snippet below. The Louvain method belongs to the family of hierarchical-clustering algorithms that attempt to maximize a modularity function, which monitors the intra- and inter-cluster density of links/edges. Needless to say, any other hierarchical-clustering scheme can be used instead of the Louvain method.
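A sketch of the Louvain step is given below, assuming a symmetric, nonnegative affinity matrix W has already been assembled by the earlier steps of Alg. 2; the networkx implementation and the toy block-diagonal affinity are assumptions made for illustration.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def louvain_labels(W: np.ndarray, seed: int = 0) -> np.ndarray:
    """Cluster features from an affinity matrix W via the Louvain method;
    the number of clusters is discovered, not provided a priori."""
    G = nx.from_numpy_array(W)  # weighted graph; edge weights are affinities
    labels = np.empty(W.shape[0], dtype=int)
    for k, community in enumerate(louvain_communities(G, weight="weight", seed=seed)):
        labels[list(community)] = k
    return labels

# Two noisy diagonal blocks; Louvain should recover the two clusters.
rng = np.random.default_rng(3)
W = np.kron(np.eye(2), np.ones((5, 5))) + 0.05 * rng.random((10, 10))
print(louvain_labels((W + W.T) / 2))
```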
A short description of the steps in Alg. 2 follows, with the Riemannian-geometry details deferred to [45, 44, 43]. Alg. 2 visits all features of $\mathcal{X}$ sequentially. For the currently visited feature, its nearest neighbors are identified, i.e., those points of $\mathcal{X}$ which are placed the closest to the current feature with respect to the Grassmannian distance [58]. The neighbors, indexed by a set $\mathcal{N}$, are then mapped to the Euclidean vectors $\{\mathbf{v}_j\}_{j \in \mathcal{N}}$ in the tangent space of the Grassmannian at the current feature (the gray-colored plane in Fig. (b)) via the logarithm map, whose computation (non-closed form via SVD) is provided in [45, 43]. Next, sparse-coding weights are computed via the following task:

(6)  $\min_{\mathbf{c}}\ \tfrac{1}{2}\,\big\| {\textstyle\sum_{j \in \mathcal{N}}} c_j \mathbf{v}_j \big\|_2^2 + \lambda\, \|\mathbf{q} \odot \mathbf{c}\|_1$, s.to $\mathbf{1}^\top \mathbf{c} = 1$,

where $\mathbf{c}$ gathers the coefficients $\{c_j\}_{j \in \mathcal{N}}$, $\odot$ stands for the entry-wise product, $\lambda$ is a regularization parameter, and the entries $\{q_j\}_{j \in \mathcal{N}}$ of $\mathbf{q}$ are proportional to the Grassmannian distances between the currently visited feature and its neighbors.
The affine constraint in (6), imposed on the coefficients that represent the current tangent vector via its neighbors, is motivated by the affine nature of the tangent space (Fig. (b)). Moreover, the larger the distance of neighbor $j$ from the current feature, the larger the weight $q_j$, which in turn penalizes severely the coefficient $c_j$ by pushing it to values close to zero. Next, the sample covariance matrix of the neighboring tangent vectors is computed:

(7)  $\hat{\mathbf{C}} := \tfrac{1}{|\mathcal{N}|} {\textstyle\sum_{j \in \mathcal{N}}} (\mathbf{v}_j - \bar{\mathbf{v}})(\mathbf{v}_j - \bar{\mathbf{v}})^\top$,

where $\bar{\mathbf{v}}$ denotes the sample average of the neighbors' tangent vectors. PCA is applied to $\hat{\mathbf{C}}$ to compute the principal eigenspace, which may be viewed as an approximation of the image of the cluster (submanifold) containing the current feature, via the logarithm map, into the tangent space (see Fig. (b)). Once this eigenspace is computed, the angle between each neighbor's tangent vector and the eigenspace is also computed to extract angular information. The larger the angle, the smaller the likelihood that the corresponding feature belongs to the cluster at hand. The information carried by both the sparse-coding weights and the angles is used to define the adjacency matrix fed to the Louvain step. The use of angular information here, as well as in [45, 44, 43], advances the boundary of state-of-the-art clustering methods in the Grassmannian, where usually the weights of the adjacency matrix are defined via the Grassmannian (geodesic) distance or sparse-coding schemes [56].
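The geometric primitives used throughout Alg. 2, distances and angles between subspaces, reduce to principal angles. A minimal sketch follows, using SciPy's subspace_angles and random subspaces as stand-ins for the extracted features; the logarithm-map computation is omitted (cf. [45, 43]).

```python
import numpy as np
from scipy.linalg import orth, subspace_angles

def grassmann_distance(U1: np.ndarray, U2: np.ndarray) -> float:
    """Arc-length Grassmannian distance: the l2 norm of the principal angles
    between the column spaces of the orthonormal bases U1 and U2."""
    return float(np.linalg.norm(subspace_angles(U1, U2)))

rng = np.random.default_rng(4)
U1 = orth(rng.standard_normal((40, 3)))   # random 3-dimensional subspace of R^40
U2 = orth(rng.standard_normal((40, 3)))
print(grassmann_distance(U1, U1))         # 0: identical subspaces
print(grassmann_distance(U1, U2))         # > 0: distinct subspaces
```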
To summarize, the clustering framework is presented in pseudocode form in Alg. 3. The main module of the framework, which is utilized by all three clustering duties, comprises Algs. 1 and 2. While the state-clustering part of Alg. 3 is quite straightforward, community detection and subnetwork-state-sequence clustering comprise several steps. More specifically, in community detection, states are identified first and communities are then identified within each state. In subnetwork-state-sequence clustering, states are again identified first, the Grassmannian features are then extracted per nodal time series, all features are gathered in a single set, and finally Alg. 2 is applied to that set to identify clusters/tasks.
To achieve a highly accurate clustering result, it is necessary to cluster states first, before applying community detection and subnetwork-state-sequence clustering. Without knowing the starting and ending points of different states, there will be time-series vectors in Alg. 1 which capture data from two consecutive states, since $\mathbf{y}_n(t)$ buffers $D$ consecutive signal samples. Features corresponding to those vectors will decrease the clustering accuracy, since the extracted features do not correspond to any actual state or task.
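The boundary effect described above can be made explicit: any buffer that straddles a state change point mixes two states and should be discarded or treated separately. The sketch below flags such buffers, assuming the change points are known (e.g., after the state-clustering step); the function and its arguments are hypothetical.

```python
import numpy as np

def window_labels(T: int, D: int, change_points: list) -> np.ndarray:
    """Label each buffer y(t) = [x(t), ..., x(t+D-1)] with the index of the
    state it lies in, or -1 if it straddles a state boundary (mixed buffer)."""
    bounds = np.asarray([0] + list(change_points) + [T])
    labels = np.empty(T - D + 1, dtype=int)
    for t in range(T - D + 1):
        s_start = np.searchsorted(bounds, t, side="right") - 1
        s_end = np.searchsorted(bounds, t + D - 1, side="right") - 1
        labels[t] = s_start if s_start == s_end else -1
    return labels

# 20 samples, one state change at t = 10: buffers starting at t = 6, ..., 9 mix states.
print(window_labels(T=20, D=5, change_points=[10]))
```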
The main computational burden comes from the main module of Alg. 3. If $P$ denotes the number of points generated in the Grassmannian, the computational complexity for computing the features in Alg. 1 is $\mathcal{O}(P\,C_{\mathrm{f}})$, where $C_{\mathrm{f}}$ denotes the cost of computing a single feature, which includes SVD computations. In Alg. 2, the complexity for computing the nearest neighbors of a feature is $\mathcal{O}(P\,C_{\mathrm{d}} + C_{\mathrm{nn}})$, where $C_{\mathrm{d}}$ denotes the cost of computing the Riemannian distance between any two points, and $C_{\mathrm{nn}}$ refers to the cost of finding the nearest neighbors among the computed distances. The sparsity-promoting optimization task (6) contributes a cost of $C_{\mathrm{sc}}$ per feature, denoting the complexity of its solver. The subsequent step involves the computation of the eigenvectors of the sample covariance matrix (7), with the usual cubic dependence of eigendecomposition on the tangent-space dimension. The complexity of computing the empirical geodesic angles is dominated by the cost of computing the logarithm map [43]. For the last step of Alg. 2, the exact complexity of the Louvain method is not known, but the method appears to run in $\mathcal{O}(P\log P)$ time, with most of the computational effort spent on modularity optimization at the first level, since modularity optimization is known to be NP-hard [59]. To summarize, the complexity of Alg. 2 is the sum of all the previous costs.
IV Numerical Tests
This section validates the proposed framework on synthetic and real data. First, the competing clustering algorithms are briefly described.
IV-A Competing Algorithms
IV-A1 Sparse Manifold Clustering and Embedding (SMCE) [56]
Each point on the Grassmannian is described by a sparse affine combination of its neighbors. The computed sparse weights define the entries of a similarity matrix, which is subsequently used to identify data-cluster associations. SMCE does not utilize any angular information, as Alg. 2 does.
IV-A2 Interaction K-Means with PCA (IKM-PCA) [60]
IKM is a clustering algorithm based on the classical K-means and Euclidean distances within a properly chosen feature space. To promote time-efficient solutions, the classical PCA is employed as a dimensionality-reduction tool for feature-subset selection.
IV-A3 Graph-Shift-Operator Estimation (GOE) [33]
The graph shift operator is a symmetric matrix capturing the network's structure, i.e., topology. There are widely adopted choices of graph shift operators, including the adjacency and Laplacian matrices, or their various degree-normalized counterparts. An estimation algorithm in [33] computes the optimal graph shift operator via convex optimization. The computed graph shift operator is fed to a spectral-clustering module to identify communities within a single brain state, since [33] assumes stationary time-series data.
IV-A4 3D-Windowed Tensor Approach (3D-WTA) [38]
3D-WTA was originally introduced for community detection in dynamic networks by applying tensor decompositions onto a sequence of adjacency matrices indexed over the time axis. 3D-WTA was modified in [39] to accommodate multilayer network structures. High-order SVD (HOSVD) and high-order orthogonal iteration (HOOI) are used within a predefined sliding window to extract subspace information from the adjacency matrices. The "asymptotic-surprise" metric is used as the criterion to determine the number of clusters. 3D-WTA is capable of performing both state clustering and community detection.
SMCE, 3D-WTA and the classical K-means will be compared against Alg. 3 on state clustering. SMCE, IKM-PCA, 3D-WTA, GOE and K-means will be used in community detection. Since none of IKM-PCA, GOE and 3D-WTA can perform subnetwork-state-sequence clustering across multiple states, only the results of Alg. 3 and SMCE are reported for that duty. To ensure fair comparisons, the parameters of all methods were carefully tuned to reach optimal performance for every scenario at hand.
In the following discussion, the tags KARMA[S] and KARMA[M] denote the proposed framework whenever a single kernel function or multiple kernel functions are employed, respectively. In the case where the linear kernel is used, the KARMA method boils down to the ARMA method of [43].
The evaluation of all methods was based on the following three criteria: 1) Clustering accuracy, defined as the number of correctly clustered data points (ground-truth labels are known) over the total number of points; 2) the normalized mutual information (NMI) [61]; and 3) the classical confusion matrix [62], with the true-positive ratio (TPR), false-positive ratio (FPR), true-negative ratio (TNR), and false-negative ratio (FNR), in the case where the number of clusters to be identified is equal to two. In what follows, every numerical value of the previous criteria is the uniform average over independently performed repetitions of the particular scenario at hand.
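Criteria 1) and 2) are standard and easy to reproduce. The sketch below computes the clustering accuracy by optimally matching predicted labels to ground-truth labels (Hungarian assignment, one common convention) together with the NMI, using SciPy and scikit-learn.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred) -> float:
    """Fraction of correctly clustered points after the best one-to-one
    matching of predicted cluster labels to ground-truth labels."""
    C = confusion_matrix(y_true, y_pred)
    rows, cols = linear_sum_assignment(-C)   # maximize the matched counts
    return C[rows, cols].sum() / C.sum()

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [1, 1, 0, 0, 0, 0]                  # permuted labels, one point misclustered
print(clustering_accuracy(y_true, y_pred))   # 5/6, approximately 0.833
print(normalized_mutual_info_score(y_true, y_pred))
```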
IV-B Synthetic Data
IV-B1 fMRI Data
Data were generated by the open-source Matlab SimTB toolbox [12]. A network of nodes is considered that transitions successively between distinct network states. Each state corresponds to a certain connectivity matrix, generated as follows. Each connectivity matrix, fed to the SimTB toolbox, is modeled as the superposition of three matrices: 1) The ground-truth (noiseless) connectivity matrix (cf. Fig. 3), where nodes sharing the same color belong to the same cluster and collaborate to perform a common subnetwork task; 2) a symmetric matrix whose entries are drawn independently from a zero-mean Gaussian distribution with standard deviation $\sigma$, to model noise; and 3) a symmetric outlier matrix whose entries are equal to $\epsilon$, to account for outlier neural activity.
Different states may be assigned different outlier matrices, controlled by $\epsilon$. Aiming at extensive numerical tests, six datasets were generated (corresponding to the columns of Table I) by choosing six pairs of parameters $(\sigma, \epsilon)$ in the modeling of the connectivity matrices and the SimTB toolbox. Datasets 1, 2 and 3 (D1, D2 and D3) were created without outliers, while datasets 4, 5 and 6 (D4, D5 and D6) include outlier matrices with different $\epsilon$'s in different states. Table IV details the parameters of those six datasets. Driven by the previous connectivity matrices, the SimTB toolbox generates BOLD time series [5]. Each state contributes an equal number of signal samples to every nodal time series; e.g., Fig. (e).
TABLE I: State clustering on synthetic fMRI data (D1-D3: without outliers; D4-D6: with outliers).

Methods     Clustering accuracy       NMI                       Clustering accuracy       NMI
            D1      D2      D3        D1      D2      D3        D4      D5      D6        D4      D5      D6
ARMA        0.969   0.805   0.640     0.948   0.766   0.596     0.944   0.743   0.589     0.860   0.627   0.340
KARMA[S]    1       0.824   0.671     1       0.791   0.622     0.983   0.775   0.599     0.930   0.651   0.379
KARMA[M]    1       0.839   0.708     1       0.808   0.641     0.992   0.800   0.626     0.967   0.689   0.435
3D-WTA      1       0.792   0.603     1       0.735   0.556     0.943   0.721   0.517     0.872   0.558   0.281
SMCE        0.920   0.784   0.583     0.887   0.673   0.480     0.883   0.712   0.508     0.713   0.562   0.246
K-means     0.866   0.670   0.402     0.800   0.590   0.307     0.768   0.621   0.337     0.476   0.403   0.168
Table I demonstrates the results of state clustering. The ARMA and KARMA parameters were kept fixed across all six datasets. The Gaussian kernel (cf. App. A) is used in the single-kernel method KARMA[S], while a convex combination of Gaussian kernels is used in the KARMA[M] case, since it performed the best among other choices of kernel functions. Fig. 5 depicts also the standard deviations of the results of Table I, computed after performing independent repetitions of the same test. The standard deviations of all algorithms increase as the strength of the noise matrix increases. For dataset D1, KARMA[S], KARMA[M] and 3D-WTA reach 100% accuracy; for the other datasets, KARMA[M] exhibits the highest accuracy and the smallest standard deviation.
3D-WTA, KARMA[S] and KARMA[M] achieved a perfect score for both the clustering-accuracy and NMI metrics on dataset D1. Among all methods, KARMA[M] scores the highest clustering accuracy and NMI over all six datasets. It can be observed from Table I that the existence of outliers affects negatively the ability of all methods to cluster data. The main reason is that the algorithms tend to detect outliers and gather them in clusters different from the nominal ones. Ways to reject those outliers are outside of the scope of this study and will be provided in a future publication.
TABLE II: Community detection on synthetic fMRI data (D1-D3: without outliers; D4-D6: with outliers).

Methods     Clustering accuracy       NMI                       Clustering accuracy       NMI
            D1      D2      D3        D1      D2      D3        D4      D5      D6        D4      D5      D6
ARMA        1       0.960   0.842     1       0.876   0.775     0.973   0.910   0.817     0.940   0.793   0.664
KARMA[S]    1       1       0.915     1       1       0.838     1       0.942   0.852     1       0.864   0.710
KARMA[M]    1       1       0.945     1       1       0.907     1       0.958   0.879     1       0.892   0.803
3D-WTA      1       0.941   0.839     1       0.927   0.754     0.925   0.863   0.799     0.842   0.780   0.638
SMCE        0.975   0.929   0.827     0.902   0.865   0.691     0.909   0.773   0.745     0.769   0.647   0.563
GOE         1       0.933   0.809     1       0.915   0.655     0.918   0.740   0.684     0.833   0.652   0.409
IKM-PCA     0.948   0.907   0.791     0.890   0.814   0.629     0.892   0.756   0.712     0.738   0.551   0.486
K-means     0.908   0.876   0.725     0.810   0.729   0.547     0.843   0.672   0.605     0.620   0.391   0.314
Table II presents the results of community detection. The numerical values in Table II stand for the average values over the states for each one of the datasets. The ARMA and KARMA parameters were again kept fixed across datasets. In KARMA[S], the utilized kernel function is the Gaussian one, while in KARMA[M] the kernel is defined as a convex combination of Gaussian kernels (cf. App. A). Table II demonstrates that KARMA[M] consistently outperforms all other methods across all datasets, even in the case where outliers contaminate the data. Fig. 6 depicts also the standard deviations of the results of Table II. ARMA, KARMA[S], KARMA[M] and 3D-WTA score 100% accuracy for dataset D1, while KARMA[S] and KARMA[M] show 100% accuracy for dataset D4. KARMA[M] shows the highest accuracy on all other datasets.
TABLE III: Subnetwork-state-sequence (task) clustering on synthetic fMRI data (D1-D3: without outliers; D4-D6: with outliers).

Methods     Clustering accuracy       NMI                       Clustering accuracy       NMI
            D1      D2      D3        D1      D2      D3        D4      D5      D6        D4      D5      D6
ARMA        1       0.816   0.749     1       0.767   0.684     0.928   0.701   0.633     0.874   0.484   0.355
KARMA[S]    1       0.856   0.781     1       0.791   0.702     0.956   0.728   0.664     0.913   0.534   0.410
KARMA[M]    1       0.884   0.817     1       0.821   0.739     1       0.757   0.721     1       0.602   0.485
SMCE        0.936   0.792   0.691     0.804   0.617   0.495     0.851   0.665   0.580     0.785   0.416   0.318
Table III illustrates the results of task clustering on the synthetic fMRI datasets D1-D6. The ARMA and kernel-ARMA parameters were again kept fixed across datasets. The kernel functions used in KARMA[S] and KARMA[M] are identical to those employed in Table II. Similarly to the previous cases, KARMA[M] outperforms all other methods across all datasets and scenarios on both clustering accuracy and NMI. Fig. 7 depicts also the standard deviations of the results of Table III. ARMA, KARMA[S] and KARMA[M] score 100% accuracy on dataset D1. KARMA[M] shows the highest accuracy with the smallest standard deviation on all other datasets.
Table IV provides the parameters $\sigma$ and $\epsilon$ used to generate the noise matrices and the symmetric matrices that simulate outlier neural activity. By choosing different combinations of $(\sigma, \epsilon)$, different synthetic fMRI datasets were created.
TABLE IV: The $(\sigma, \epsilon)$ pair used per state (States 1-4) for each of the six synthetic fMRI datasets 1-6.
IV-B2 EEG Data
Synthetic EEG data were generated by the open-source Virtual Brain (VB) toolbox [63]. A network of nodes is considered that transitions between two states, with noiseless and outlier-free connectivity matrices depicted in Figs. (a) and (b). It is worth noticing that the number of communities in Fig. (a) differs from that in Fig. (b). Similarly to the previous fMRI case, every connectivity matrix, which is fed to the VB toolbox [63], is the superposition of three matrices: The ground-truth matrix (cf. Figs. (a) and (b)), the "noise" and the "outlier" matrices. Three scenarios/datasets are considered, with the noisy and outlier-contaminated connectivity matrices illustrated in Figs. (c)-(h). Each state contributes an equal number of signal samples to every nodal time series; e.g., Fig. (i).
TABLE V: State clustering on synthetic EEG data.

Methods     Clustering accuracy       NMI
            D1      D2      D3        D1      D2      D3
ARMA        1       1       0.944     1       1       0.906
KARMA[S]    1       1       0.963     1       1       0.926
KARMA[M]    1       1       0.992     1       1       0.984
3D-WTA      1       0.976   0.939     1       0.942   0.896
SMCE        0.952   0.901   0.804     0.914   0.852   0.766
K-means     0.879   0.793   0.732     0.796   0.761   0.694
The results of state clustering are shown in Table V; Fig. 8 depicts the standard deviations of the results of Table V. The ARMA and KARMA parameters were kept fixed across datasets. The reproducing kernels used are the Gaussian one for KARMA[S] and a convex combination of Gaussian kernels for KARMA[M]. Furthermore, since clustering on dataset 3 is performed between two states, the confusion-matrix entries (TPR, FPR, TNR, FNR) were also computed for 3D-WTA and KARMA[S]. Due to noise and outliers, all algorithms show their highest accuracy on dataset 1 and their lowest on dataset 3. ARMA, KARMA[S], KARMA[M] and 3D-WTA score 100% accuracy on dataset 1. ARMA, KARMA[S], and KARMA[M] exhibit 100% accuracy on dataset 2, while KARMA[M] scores the highest accuracy and the smallest standard deviation on dataset 3.
TABLE VI: Community detection on synthetic EEG data (values averaged over the two network states).

Methods     Clustering accuracy       NMI
            D1      D2      D3        D1      D2      D3
ARMA        1       0.975   0.923     1       0.924   0.861
KARMA[S]    1       1       0.958     1       1       0.895
KARMA[M]    1       1       0.972     1       1       0.921
3D-WTA      1       1       0.867     1       1       0.753
SMCE        1       0.951   0.833     1       0.887   0.706
GOE         1       0.946   0.785     1       0.869   0.684
IKM-PCA     0.961   0.913   0.811     0.935   0.847   0.652
K-means     0.894   0.808   0.696     0.836   0.713   0.530
Table VI illustrates the results of community detection on the synthetic EEG datasets, which include three time series (D1-D3) across two states. The illustrated values are the average results over the two network states per time series. The ARMA, KARMA[S] and KARMA[M] parameters were kept fixed across datasets. Moreover, the Gaussian kernel was used as the reproducing kernel function in KARMA[S], while a convex combination of Gaussian kernels was used in KARMA[M]. Fig. 9 depicts also the standard deviations of the results of Table VI. Due to noise and outliers, all algorithms show their highest accuracies on dataset 1 and their lowest ones on dataset 3. ARMA, KARMA[S], KARMA[M] and 3D-WTA show 100% accuracy on dataset 1. KARMA[S], KARMA[M] and 3D-WTA score 100% accuracy on dataset 2. KARMA[M] still shows the highest accuracy with the smallest standard deviation on dataset 3.
TABLE VII: Subnetwork-state-sequence clustering on synthetic EEG data (left half: unknown state labels; right half: known state labels).

Methods     Clustering accuracy       NMI                       Clustering accuracy       NMI
            D1      D2      D3        D1      D2      D3        D1      D2      D3        D1      D2      D3
ARMA        1       0.932   0.798     1       0.882   0.710     1       0.978   0.929     1       0.947   0.890
KARMA[S]    1       0.944   0.859     1       0.903   0.794     1       1       0.965     1       1       0.943
KARMA[M]    1       0.965   0.877     1       1       0.816     1       1       1         1       1       1
SMCE        0.975   0.902   0.755     0.931   0.857   0.629     1       0.940   0.886     1       0.899   0.828
Results of subnetwork-state-sequence clustering for the synthetic EEG data are shown in Table VII. The utilized parameters and kernel functions were chosen to be the same as in the previous case of community detection. ARMA, KARMA[S] and KARMA[M] achieve 100% accuracy and NMI for dataset D1, since the standard deviations of the "noise" matrices are small and dataset D1 does not include outliers. Fig. 10 depicts also the standard deviations of the results of Table VII. It can be clearly verified by Table VII and Fig. 10 that, if the "true" state labels are known beforehand, so that the state-clustering step of Alg. 3 is not needed, then all values of clustering accuracy and NMI increase, while the standard deviations decrease. Similarly to the other tests, KARMA[M] shows the highest accuracy on all datasets.
IV-C Real Data
The open-source real EEG data of [64] were used. The data comprise five sets (A-E), each containing 100 single-channel segments of 23.6-sec duration. The sampling rate of the data was 173.61 Hz. Only state clustering is examined, since the data of [64] do not contain any connectivity-structure information. Datasets D and E were chosen, where set D contains only activity measured during seizure-free intervals, while set E contains only seizure activity. A fixed length was set for the time series extracted from the data. The methods ARMA, KARMA[S], KARMA[M], SMCE and K-means were validated. 3D-WTA did not perform well on those datasets, and thus its results are not shown here.
Toward a realistic scenario, the time series of sets D and E are concatenated to create a single long time series. The network has 100 nodes, so $N = 100$. The ARMA, KARMA[S] and KARMA[M] parameters were kept fixed, with the Gaussian kernel employed in KARMA[S] and a convex combination of Gaussian kernels in KARMA[M]. Due to the sliding-window implementation of the proposed framework, there are cases where the sliding window captures samples from both the D and E time series; the features extracted in those cases are labeled as a third cluster. Fig. 11 depicts also the standard deviations of the results of Table VIII. KARMA[M] scores the highest clustering-accuracy and NMI values with the smallest standard deviation. In the case where the time series D and E are not concatenated, the case of sliding windows capturing data from both clusters (states) is nonexistent. In such a binary-decision case, confusion-matrix results (TPR, FPR, TNR, FNR) were computed for ARMA, KARMA[S], KARMA[M] and SMCE.
TABLE VIII: State clustering on real EEG data [64].

Methods     Clustering accuracy   NMI
ARMA        0.907                 0.844
KARMA[S]    0.925                 0.886
KARMA[M]    0.939                 0.921
SMCE        0.873                 0.815
K-means     0.829                 0.741
Real fMRI behavioral data, acquired from the Stellar Chance 3T scanner (SC3T) at the University of Pennsylvania, were used to cluster different states. The time series were collected in two arms, before and after an inhibitory sequence of transcranial magnetic stimulation (TMS) known as continuous theta-burst stimulation [65]. Real and sham stimulation were applied for each of two behavioral tasks. The two behavioral tasks are: 1) Navon task: A big shape made up of little shapes is shown on the screen. The big shape can be either green or white in color. If green, the participant identifies the big shape, while if white, the participant identifies the little shape. The task was presented in three blocks: All white stimuli, all green stimuli, and switching between colors on 70% of trials to introduce switching demands. Responses are given via a button box in the order of circle, x, triangle, square. 2) Stroop task: Words are displayed in different color inks. There are two difficulty conditions: one where subjects respond to words that introduce low color-word conflict (far, deal, horse, plenty), and one with high conflict, where color words differ from the ink color they are printed in (e.g., "red" printed in blue, "green" printed in yellow, etc.) [66]. The participant has to report the color of the ink the word is printed in, using a button box in the order of red, green, yellow, blue.
Each BOLD time series was collected during a multi-minute scan with a fixed repetition time, and covers cortical and subcortical regions. To test the state-clustering performance on the fMRI time series, three states are concatenated to create a single time series. The three states are: 1) Before real stimulation of the Navon task; 2) after real stimulation of the Navon task; and 3) after real stimulation of the Stroop task.
The ARMA, KARMA[S] and KARMA[M] parameters were kept fixed, with the Gaussian kernel employed in KARMA[S] and a convex combination of Gaussian kernels in KARMA[M]. Notice here that, due to the sliding-window implementation of the proposed framework, there are cases where the sliding window captures samples from two consecutive states. Table IX reports the results.
TABLE IX: State clustering on real fMRI data.

Methods     Clustering accuracy   NMI
ARMA        0.885                 0.809
KARMA[S]    0.904                 0.843
KARMA[M]    0.919                 0.875
SMCE        0.893                 0.816
K-means     0.801                 0.720
V Conclusions
This paper introduced a novel clustering framework to perform all possible clustering duties in dynamic (brain) networks: state clustering, community detection and subnetwork-state-sequence tracking/identification. Features were extracted by a kernel-based ARMA model, with the column spaces of observability matrices mapped to the Grassmann manifold (Grassmannian). A clustering algorithm, the geodesic clustering with tangent spaces, was also provided to exploit the rich underlying Riemannian geometry of the Grassmannian. The framework was validated on multiple simulated and real datasets and compared against state-of-the-art clustering algorithms. The test results demonstrate that the proposed framework outperforms the competing methods in clustering states, identifying community structures, and tracking multiple subnetwork state sequences which may span several network states. Current research efforts include ways to reduce the computational footprint of the framework, and techniques to reject network-wide outlier data.
Appendix A Reproducing Kernel Hilbert Spaces
A reproducing kernel Hilbert space $\mathcal{H}$, equipped with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$, is a functional space where each point is a function $f: \mathbb{R}^D \to \mathbb{R}$, for some $D \in \mathbb{Z}_{>0}$, s.t. the mapping $f \mapsto f(\mathbf{y})$ is continuous, for any choice of $\mathbf{y} \in \mathbb{R}^D$ [67, 47, 68]. There exists a kernel function $\kappa(\cdot, \cdot): \mathbb{R}^D \times \mathbb{R}^D \to \mathbb{R}$, unique to $\mathcal{H}$, such that (s.t.) $\kappa(\mathbf{y}, \cdot) \in \mathcal{H}$ and $f(\mathbf{y}) = \langle f, \kappa(\mathbf{y}, \cdot) \rangle_{\mathcal{H}}$, for any $\mathbf{y} \in \mathbb{R}^D$ and any $f \in \mathcal{H}$ [67, 68]. The latter property is the reason for calling the kernel reproducing, and yields the celebrated "kernel trick": $\langle \kappa(\mathbf{y}, \cdot), \kappa(\mathbf{y}', \cdot) \rangle_{\mathcal{H}} = \kappa(\mathbf{y}, \mathbf{y}')$, for any $\mathbf{y}, \mathbf{y}' \in \mathbb{R}^D$, with the kernel (feature) mapping defined as $\varphi(\mathbf{y}) := \kappa(\mathbf{y}, \cdot)$.
Popular examples of reproducing kernels are:
- the linear $\kappa(\mathbf{y}, \mathbf{y}') := \mathbf{y}^\top \mathbf{y}'$, where the space $\mathcal{H}$ is nothing but $\mathbb{R}^D$;
- the Gaussian $\kappa_\sigma(\mathbf{y}, \mathbf{y}') := \exp(-\|\mathbf{y} - \mathbf{y}'\|_2^2/(2\sigma^2))$, where $\sigma > 0$ and $\|\cdot\|_2$ is the standard Euclidean norm. In this case, $\mathcal{H}$ is infinite dimensional [68];
- the Laplacian $\kappa_\sigma(\mathbf{y}, \mathbf{y}') := \exp(-\|\mathbf{y} - \mathbf{y}'\|_1/\sigma)$, where $\|\cdot\|_1$ stands for the $\ell_1$ norm [69]; and
- the polynomial $\kappa(\mathbf{y}, \mathbf{y}') := (\mathbf{y}^\top \mathbf{y}' + 1)^r$, for some parameter $r \in \mathbb{Z}_{>0}$.
There are several ways of generating reproducing kernels via certain operations on well-known kernel functions, such as convex combinations, products, etc. [47].
Define $\mathcal{H}^q$, for some $q \in \mathbb{Z}_{>0}$, as the space whose points take the following form: $\mathbf{f} := (f_1, \ldots, f_q)$ s.t. $f_i \in \mathcal{H}$, $\forall i \in \{1, \ldots, q\}$, where $(f_1, \ldots, f_q)$ is a compact notation for the $q$-tuple of functions. For $\mathbf{f} \in \mathcal{H}^q$ and given a matrix $\mathbf{M} \in \mathbb{R}^{p \times q}$, the product $\mathbf{M}\mathbf{f}$ stands for the vector-valued function whose $i$th entry is $\sum_{j=1}^{q} M_{ij} f_j$. Similarly, define $\mathcal{H}^{q_1 \times q_2}$, for some $q_1, q_2 \in \mathbb{Z}_{>0}$, as the space comprising all