Bayesian Topological Learning for Brain State Classification

Abstract

Investigation of human brain states through electroencephalography (EEG) signals is a crucial step in human-machine communications. However, classifying and analyzing EEG signals is challenging due to their noisy, nonlinear and nonstationary nature. Current methodologies for analyzing these signals often fall short because they have several regularity assumptions baked in. This work provides an effective, flexible and noise-resilient scheme to analyze EEG by extracting pertinent information while abiding by the 3N (noisy, nonlinear and nonstationary) nature of the data. We implement a topological tool, namely persistent homology, that tracks the evolution of topological features over time intervals, and we incorporate an individual's expectations as prior knowledge by means of a Bayesian framework to compute posterior distributions. Relying on these posterior distributions, we apply Bayes factor classification to noisy EEG measurements. The performance of this Bayesian classification scheme is then compared with other existing methods for EEG signals.

Bayesian classification, EEG signals, intensity, marked Poisson point processes, persistent homology, topological data analysis

I Introduction

The emergence of computational intelligence has led us to an era of excellent communication between users and systems. These human-computer communications do not require any external device or muscle intervention and enable computers to be deliberately controlled via the monitoring of brain state signals. In order to potentially improve human-machine interactions, it is crucial to analyze and interpret physiological measurements effectively to assess an individual's state [15, 33]. Brain signals can encode one's expectations as a form of prior beliefs, which have an influence on behavior in times of uncertainty [41]. A Bayesian approach that integrates prior knowledge of an individual's innate brain activity with newly measured data may improve brain state detection, which can aid in characterizing and controlling one's actions.

EEG signals are 3N: nonstationary, nonlinear and noisy [24]. In particular, they are obscured by various forms of noise, are nonlinear due to the complexity of the underlying interactions in the nervous system [42, 24, 23], and are nonstationary due to the involvement of different time scales in brain activity [17]. The 3N nature of EEG signals requires methods that can encode an individual's brain history and draw statistical inferences for these signals.

In this paper, we develop a Bayesian classification scheme relying on the posterior distributions of persistence diagrams, which are pertinent topological descriptors. Persistent homology is a widely used tool for topological data analysis (TDA) that captures topological features at multiple scales and produces summary representations, called persistence diagrams, that encode the lifespan of the topological features in the data. Persistent homology has proved to be promising in data science, yielding astounding results in a variety of applications [34, 12, 40, 3, 5, 28, 37, 30, 31, 27, 26, 25, 14, 21, 29, 13]. Indeed, physiological signals' features are defined by the topological changes of the signals across time. Engaging TDA in the study of physiological signals is recently emerging. The authors of [47] measure the topology of EEG data with persistence landscapes to detect differences in the EEG signals during epilepsy attacks versus those produced during healthy brain activity. However, this method does not investigate the distribution of the diagrams themselves and suffers from a loss of pertinent information. Several other studies implement traditional machine learning based on feature extraction [8, 46, 35]. As the selection of appropriate features is crucial, these methods rely on summaries of persistence diagrams, which already summarize the underlying data themselves. We develop a Bayesian learning approach that can be applied directly on the space of persistence diagrams. However, this learning scheme depends on the estimation of posterior probabilities, which is not straightforward due to the unusual multiset structure of persistence diagrams.

To establish a Bayesian framework, we need to define the prior uncertainty and likelihood through probability distributions of persistence diagrams. By viewing persistence diagrams as finite set samples, the authors of [28] propose a nonparametric estimation of the probability density function of random persistence diagrams. They also show that the probability density function can successfully detect the underlying dynamics of EEG signals and compare it with other pre-existing TDA methods. A prior distribution of persistence diagrams can be obtained through this density function. However, computing posteriors entirely through the random set analog of Bayes’ rule may have exponential computational complexity [11]. To address this, we model random persistence diagrams as point processes. In particular, we utilize Poisson point processes which can be entirely characterized by their intensity.

We commence the Bayesian framework by modeling random persistence diagrams generated from a Poisson point process with a given intensity, which captures the prior uncertainty. In the case of brain state detection, we can incorporate an individual's expectations about the statistical regularities in the environment as prior knowledge [41]. Alternatively, we can choose an uninformative prior intensity when no expert opinion or information about an individual's expectations is available. We construct the likelihood component of our framework by utilizing the theory of marked point processes. Indeed, we employ the topological summaries of signals in place of the actual signals. This proves to be useful for a range of physiological signal analyses [8, 46, 35, 47, 2, 7, 15]. The application considered in this paper is the classification of EEG signals, which allows us to predict an individual's brain states and advances human-computer communication techniques. Through these topological summaries, we adopt a substitution likelihood technique [19] rather than considering the full likelihood of the entire signal data.

Next, we develop a Bayesian learning method relying on the posterior obtained from the Bayesian framework. This method is remarkably flexible as it abides by the 3N nature of the signals, and it is extremely powerful as it incorporates an individual's expectations or domain experts' knowledge as prior beliefs. Furthermore, the Bayes factor provides a measure of confidence that in turn dictates whether further investigation is feasible. Our model enjoys a closed form of the posterior distribution through a conjugate family of priors, e.g., Gaussian mixtures, hence the prior-to-posterior updates yield posterior distributions of the same family. We present a detailed example of our closed form implementation on simulated EEG signals to demonstrate computational tractability and showcase applicability in classification through Bayes factor estimation. Furthermore, we present a detailed comparison with other TDA and non-TDA based learning methods.

This paper is organized as follows. Section II provides a brief overview of persistence diagrams and Poisson point processes. We establish the Bayesian framework for persistence diagrams in Section III. We then develop our Bayesian learning method in Section IV, which is used to quantify the classification outcome. Section V-A introduces a closed form for the posterior intensity utilizing Gaussian mixture models. To assess the capability of our algorithm, we investigate its performance in classifying EEG signals and provide comparisons with several other existing methods in Section V. Finally, we end with the conclusion in Section VI.

II Background

We commence by discussing the background essential for building our Bayesian model. In Subsection II-A, we start with the formation of persistence diagrams (PDs) by implementing sublevel set filtrations. In order to model the uncertainty present in these persistence diagrams, we consider them as point processes; pertinent definitions from point processes (PPs) are given in Subsection II-B.

II-A Persistent Homology for Noisy Signals

Persistent homology is a tool from TDA that provides a robust way to model the topology of real datasets by tracking the evolution of homological features and summarizing these in persistence diagrams. Several methods exist to generate persistence diagrams, such as Vietoris-Rips or Čech filtrations [9], but such techniques require the transformation of a signal to an appropriate point cloud using Takens's delay embedding theorem. To circumvent this transformation to point clouds, we employ the sublevel set filtration method, which summarizes the shape of signals directly in a PD by employing local critical points as tersely outlined next.

Consider a signal as a bounded and continuous function f of time (Fig. 1 (a)). The sublevel set filtration tracks the evolution of connected components in the sublevel sets f^{-1}((-∞, r]) as r increases. The central idea is that as r increases, the connectivity of the sublevel set remains unchanged except when r passes through a critical point. For a given connected component, we record the value of r at which it is born (when r reaches a local minimum), call it b, and the value at which it disappears (when r reaches a local maximum), call it d, by merging with a pre-existing connected component. That is to say, whenever two connected components merge, the one born later disappears while the one born earlier persists, by the elder rule [9]. Once r reaches the maximum value of f in the filtration, all the sublevel sets have merged into a single connected component, and we terminate the procedure. For every connected component that arises in the filtration, we plot the point (b, d) and call the resulting collection a persistence diagram (Fig. 1 (b)). To facilitate computation and preserve the geometric information, we apply the linear transformation (b, d) ↦ (b, d − b) to each point in our persistence diagrams. We refer to the resulting coordinates as birth and persistence, respectively, and call this transformed persistence diagram a tilted representation (Fig. 1 (c)). Hereafter, whenever we refer to persistence diagrams, we imply their tilted representation.
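The sublevel set filtration above can be sketched as a single union-find sweep over a sampled signal. The following minimal implementation (function and variable names are hypothetical, not from the paper) emits the tilted (birth, persistence) pairs described in the text, applying the elder rule at each merge:

```python
def sublevel_persistence(signal):
    """0-dimensional persistence of a sampled 1-D signal under the sublevel
    set filtration: components are born at local minima and die, by the
    elder rule, when they merge.  Returns the tilted representation,
    i.e. (birth, persistence) pairs with positive persistence."""
    n = len(signal)
    order = sorted(range(n), key=lambda i: signal[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in order:                        # sweep the filtration value upward
        parent[i], birth[i] = i, signal[i]
        for j in (i - 1, i + 1):
            if j in parent:                # neighbor already in the sublevel set
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the component born later (larger birth) dies now
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                pairs.append((birth[young], signal[i] - birth[young]))
                parent[young] = old
    # close the essential component at the global maximum (a modeling choice)
    pairs.append((min(signal), max(signal) - min(signal)))
    return [p for p in pairs if p[1] > 0]
```

For instance, the sampled signal `[0, 2, 1, 3]` yields one short-lived component born at value 1 and the essential component born at 0.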

II-B Poisson Point Processes

One samples from a finite point process on a Polish space by first generating a random number N according to a cardinality distribution and then spatially distributing N points according to a probability distribution. In other words, a finite point process is characterized by a probability mass function (pmf) of the cardinality and a joint probability density function (pdf) of the elements for a given cardinality. We model random persistence diagrams as Poisson point processes (PPPs). The defining feature of these point processes is that they are solely characterized by a single parameter known as the intensity. The intensity evaluated at a point x is the density of the expected number of points per unit volume at x. Indeed, the intensity serves as an analog of the first order moment of a random variable. The intensity in a Poisson point process accounts for the joint pdf of the elements, and the cardinality is Poisson with mean equal to the integral of the intensity.
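The two-stage sampling just described (Poisson cardinality, then spatial placement) can be illustrated for the simplest homogeneous case on a rectangle. This sketch is illustrative only and is not part of the paper's method; the helper name is hypothetical:

```python
import math
import random

def sample_ppp(rate, region=((0.0, 1.0), (0.0, 1.0)), rng=random):
    """One realization of a homogeneous Poisson point process with constant
    intensity `rate` on a rectangle: the cardinality is Poisson with mean
    rate * area, and given the cardinality the points fall uniformly."""
    (x0, x1), (y0, y1) = region
    mean = rate * (x1 - x0) * (y1 - y0)   # expected number of points
    # draw the Poisson cardinality by Knuth's multiplication method
    n, p, limit = 0, 1.0, math.exp(-mean)
    while True:
        p *= rng.random()
        if p <= limit:
            break
        n += 1
    return [(rng.uniform(x0, x1), rng.uniform(y0, y1)) for _ in range(n)]
```

Averaged over many realizations, the number of returned points concentrates around `rate * area`, matching the mean-measure interpretation of the intensity.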

Considering persistence diagrams as modeled by such processes, a link is needed between the prior and the data/likelihood to conduct Bayesian analysis. The marked point process provides this connection. Effectively, a marked point process is a special case of a bivariate point process in which one PP (containing the marks) is determined given knowledge of the other PP. A marked Poisson point process is a finite PP such that: (i) the underlying PP is a PPP, and (ii) for a realization of the underlying PP, the mark of each point is drawn independently from a given stochastic kernel.

III The Bayesian Model

According to Bayes' theorem, the posterior is proportional to the product of a likelihood function and a prior. To establish a Bayesian framework for persistence diagrams, we need to compute the posterior distribution through a Bayesian formula in which the likelihood and the prior are defined and computed for random persistence diagrams. We employ a likelihood model for the persistence diagrams generated from the signals, which is analogous to the idea of substitution likelihood [19]. Next, we develop the prior and likelihood on the space of persistence diagrams.

Prior: To model prior knowledge for the brain state classification problem, human expectations for statistical regularities in the environment and the uncertainty involved are summarized as a persistence diagram D_X. We assume that the underlying prior uncertainty of a persistence diagram is generated by a Poisson point process with intensity λ_{D_X}. An example of prior persistence diagrams is shown in Fig. 2 (a) as black rectangles. Any point x in a prior persistence diagram may not be observed in actual data due to the presence of noise, sparsity, and/or other unexpected scenarios. We address this by defining a probability function α(x). In particular, if x is not observed in the data, the probability of this event is 1 − α(x), and similarly α(x) is the probability of x being observed.

Data/Likelihood Model: EEG signals are encoded into the observed PDs, D_{Y_1}, …, D_{Y_m}, using the method discussed in Section II-A. Points y in the observed PDs are linked to points x in the PD D_X generated by the prior PPP. We investigate this linking by relying on the theory of marked Poisson point processes (MPPP) [22, 18]. The probability density of the MPPP is given by a stochastic kernel ℓ(y|x), from which the marks are drawn independently; in our case this kernel plays the exact role of the likelihood (see Section II-B for details). One needs to account for all possible marks, with the more likely marks realized as larger likelihood values ℓ(y|x). In order to accommodate the nature of persistence diagrams, we need to define one last point process that unveils the topological noise in the observed data. Intuitively, this point process consists of the points in the observed diagram that fail to associate with the prior. We model it as a Poisson point process with intensity λ_{D_{Y^U}}.

A sample observed persistence diagram is shown in Fig. 2 (a) as red hexagons. Fig. 2 (b) and (c) show different combinations of possible associations between prior and data in the green regions. However, it is evident that the associations in (b) would have higher likelihood values than those in (c) and, in turn, would have more impact on the posterior. Also, for every configuration, some of the prior points do not associate with any observed point, which is shown with blue regions. We denote the features in the blue regions as D_{X^V}, which stands for the features that vanished. If they have not vanished and make associations with observed features, we denote them as D_{X^O}. Samples from the unassociated observed features D_{Y^U} are shown in Fig. 2 (b) and (c) as yellow regions.

Posterior: With the above model characterization, the posterior intensity, which explicitly shows the update of the prior, has the following form [29]:

\lambda_{D_X|D_{Y_{1:m}}}(x) = (1-\alpha(x))\,\lambda_{D_X}(x) + \frac{\alpha(x)}{m}\sum_{i=1}^{m}\sum_{y\in D_{Y_i}} \frac{\ell(y|x)\,\lambda_{D_X}(x)}{\lambda_{D_{Y^U}}(y) + \int_{W}\ell(y|u)\,\alpha(u)\,\lambda_{D_X}(u)\,du}. \qquad (1)

In the posterior intensity density, the two terms reflect the decomposition of the prior point process. The first term accounts for the features of the prior which may not be observed, and hence the intensity is weighted by 1 − α(x). On the other hand, the second term corresponds to the features in the prior that may be observed and is similarly weighted by α(x). Here we observe an expression consistent with the traditional Bayes' theorem, specifically a product of prior intensity and likelihood divided by a normalizing constant. The normalizing constant consists of two terms illustrating the two instances of our data model. D_{Y^U} consists of the features that are not associated with the prior, and this is evident in the first term of the normalizing factor. Consequently, the second term provides the contribution of the observed data from D_{Y_i}, coupling with prior features to form the marked PPP.

IV Bayesian Classification

In this section, we develop a Bayesian learning approach that discriminates EEG signals from different cognitive states. In particular, we present a classification scheme based on Bayes factors of persistence diagrams generated from physiological signals. We start by extracting fundamental topological features from a collection of EEG signals and record the information in persistence diagrams using the sublevel set filtration discussed in Section II-A.

For a persistence diagram D that needs to be classified, we assume that D is sampled from a Poisson point process with a given prior intensity. Consequently, its probability density has the Poisson point process form, in which the expected number of points appears in the exponential factor, cf. (2). For training sets Q_{Y_k}, k = 1, …, K, from K classes of random diagrams, we obtain the posterior intensities λ_{D|Q_{Y_k}} by following the estimation process discussed in Section III. The posterior probability density of D given the training set Q_{Y_k} is defined as

p_{D|D_{Y_k}}(D\,|\,Q_{Y_k}) = \frac{e^{-\lambda}}{|D|!}\prod_{d\in D}\lambda_{D|Q_{Y_k}}(d). \qquad (2)

The posterior probability densities given the other training sets are obtained by analogous expressions. Consequently, the Bayes factor is defined as

BF_{i,j}(Q_{Y_i},Q_{Y_j}) = \frac{p_{D|D_{Y_i}}(D\,|\,Q_{Y_i})}{p_{D|D_{Y_j}}(D\,|\,Q_{Y_j})}. \qquad (3)

For every pair of classes i and j with i ≠ j, if the Bayes factor BF_{i,j} exceeds a fixed threshold, we assign one vote to class i, and otherwise to class j. The final assignment of D to a class is obtained by a majority voting scheme.
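The classification rule of this section can be sketched directly from (2) and (3). The code below is a minimal, hedged illustration: the posterior intensities are represented as plain callables supplied by the user, and all names are hypothetical. Working in log space avoids underflow in the product over diagram points:

```python
import math

def log_ppp_density(diagram, intensity, expected_count):
    """Log Poisson point process density of a diagram, cf. (2):
    log p(D) = -expected_count - log(|D|!) + sum over d of log intensity(d)."""
    out = -expected_count - math.lgamma(len(diagram) + 1)
    for d in diagram:
        out += math.log(intensity(d))
    return out

def classify(diagram, posteriors, threshold=1.0):
    """Majority vote over pairwise Bayes factors BF_{i,j}, cf. (3).
    `posteriors` maps a class label to (intensity function, expected count)."""
    labels = list(posteriors)
    logp = {k: log_ppp_density(diagram, *posteriors[k]) for k in labels}
    votes = {k: 0 for k in labels}
    for a in range(len(labels)):
        for b in range(a + 1, len(labels)):
            i, j = labels[a], labels[b]
            # log BF_{i,j} above log(threshold) votes for class i
            votes[i if logp[i] - logp[j] > math.log(threshold) else j] += 1
    return max(votes, key=votes.get)
```

With two classes this reduces to a single Bayes factor comparison against the threshold; with K classes each diagram receives K(K−1)/2 pairwise votes.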

V Application to EEG

V-a Conjugate family of priors for EEG signals

Here, we present a closed form of the posterior distribution through a conjugate family of priors, namely Gaussian mixtures, so that the prior-to-posterior updates yield posterior distributions of the same family. We specify the prior intensity density as a mixture of N Gaussian components, where N is the number of mixture components and N* denotes the Gaussian density restricted to the wedge W. In a similar fashion, we define the density of the Poisson point process D_{Y^U}. The likelihood density ℓ(y|x) is also Gaussian. With all of these, we obtain a Gaussian mixture posterior intensity density of the form

\lambda_{D_X|D_{Y_{1:m}}}(x) = (1-\alpha)\,\lambda_{D_X}(x) + \frac{\alpha}{m}\sum_{i=1}^{m}\sum_{y\in D_{Y_i}}\sum_{j=1}^{N} C^{\,j}_{x|y}\, N^{*}\!\big(x;\mu^{j}_{x|y},\sigma^{j}_{x|y}I\big), \qquad (4)

where C^j_{x|y}, μ^j_{x|y} and σ^j_{x|y} are the weights, means and variances of the posterior intensity, respectively, corresponding to the second part of (1); these are pertinent updates of the prior parameters [29].
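A sketch of evaluating such a Gaussian mixture posterior intensity is given below. The component updates follow standard Gaussian product identities under an isotropic likelihood; for brevity, this illustration omits the restriction to the wedge and the noise intensity term in the normalizer of (1), so it is an assumption-laden simplification of (4), not the paper's exact update:

```python
import math

def gauss2(x, mu, var):
    """Isotropic bivariate Gaussian density N(x; mu, var*I)."""
    d2 = (x[0] - mu[0]) ** 2 + (x[1] - mu[1]) ** 2
    return math.exp(-d2 / (2.0 * var)) / (2.0 * math.pi * var)

def posterior_intensity(x, prior, obs_diagrams, alpha, lik_var):
    """Evaluate a Gaussian-mixture posterior intensity in the spirit of (4).
    prior: list of (weight, mean, variance) components of lambda_DX.
    obs_diagrams: observed diagrams, each a list of 2-D points."""
    out = (1.0 - alpha) * sum(c * gauss2(x, mu, v) for c, mu, v in prior)
    m = len(obs_diagrams)
    for diagram in obs_diagrams:
        for y in diagram:
            comps = []
            for c, mu, v in prior:
                # evidence weight and conjugate Gaussian update for (x | y)
                w = c * gauss2(y, mu, v + lik_var)
                post_mu = ((lik_var * mu[0] + v * y[0]) / (v + lik_var),
                           (lik_var * mu[1] + v * y[1]) / (v + lik_var))
                post_v = v * lik_var / (v + lik_var)
                comps.append((w, post_mu, post_v))
            z = sum(w for w, _, _ in comps)          # simplified normalizer
            out += (alpha / m) * sum(w / z * gauss2(x, mu, v)
                                     for w, mu, v in comps)
    return out
```

The prior-to-posterior update stays within the Gaussian mixture family: each observed point contributes N reweighted, sharpened components, mirroring the triple sum in (4).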

V-B EEG Datasets

US Army Aberdeen Proving Ground (APG) researchers have simulated noisy EEG signals based on different mental activities. We used this dataset for our analysis, focusing mainly on two frequency bands: alpha and beta. Alpha (frequency from 8 to 13 Hz) corresponds to intense mental activity, stress, and tension, and beta (frequency from 13 to 30 Hz) correlates with active attention and focusing on concrete problems or solutions [38]. As the dataset contains several EEG signals based on predominant oscillations, a Gaussian conjugate prior produces promising results for estimating the posterior probabilities as well as for Bayes factor classification [32, 45, 16].

V-C Posterior estimation of EEG Datasets

We first converted the EEG signals to persistence diagrams via sublevel set filtrations. In Fig. 3, we present two samples from the EEG dataset of the alpha (a) and beta (d) bands, respectively, along with their persistence diagrams in (b) and (e). Typically, EEG signals encode various forms of noise, and the simulated EEG dataset accounts for this by corrupting the signals with additive noise. The signals in Fig. 3 have a signal-to-noise ratio (SNR) of 0, which implies equal contributions from signal and noise.

In Fig. 3 we illustrate posterior intensity estimations of a noisy alpha band and a noisy beta band utilizing the closed form of Section V-A. To demonstrate a data-driven posterior, we employed an uninformative prior. To present the intensity maps uniformly, we divide the intensities by their corresponding maxima and call them scaled intensities, ranging from 0 to 1.

V-D EEG signal classification with Bayesian learning

Detection and classification of specific patterns in brain activity are crucial steps in understanding functional behaviors for developing human-machine communications. We have taken the first step toward engaging Bayesian learning in EEG signal analysis by implementing Gaussian posterior intensities as explained in Section V-A and using these posteriors for a binary Bayes factor classification. From the dataset provided by APG researchers, we used two instances of additive noise in order to represent cases with two different SNRs. Our considered dataset comprises SNRs of 3 and 5, where SNR 5 has a larger contribution from the signal than SNR 3.

We followed the process discussed in Section V-A to estimate the posterior intensity of a persistence diagram D given a training set, with the goal of identifying the correct class of D. We used the R package BayesTDA to obtain posterior intensities. Consequently, the probability density was obtained from (2). After computing the intensities with respect to the training sets from both of the classes, the Bayes factor was computed by (3) as the ratio of the posterior probability densities of the unknown persistence diagram given each of the two competing training sets. For a threshold c, a Bayes factor exceeding c implies that D belongs to the first class, and a Bayes factor below c implies otherwise.

We implemented 10-fold cross validation for estimating the accuracy. For this, we partitioned each class into 10 different sets; for each class, 9 of them were used as training sets and 1 was used for testing. We repeated this 10 times so that every partition acts as the testing data exactly once, and then averaged over all partitions. Results from the Gaussian learning scheme are presented in Fig. 4. We compared the results of the Gaussian learning scheme with artificial neural networks (ANNs) and logistic regression (LR) with features (mean, standard deviation and entropy of the recorded coefficients) extracted from the wavelet transform (WT). We prefer the WT over the Fourier transform (FT) due to the latter's inability to analyze the nonstationary nature of EEG signals [1, 10]. Both ANN and LR have been widely applied for physiological signal classification [8, 39, 43, 44, 20, 36, 4]. We also compared our results with an existing TDA technique, namely the persistence landscape [6]. We extracted the first landscape functions of the persistence diagrams for all considered EEG signals and applied a support vector machine and logistic regression to the extracted landscape functions. Our results for classifying these two bands outperform the other existing TDA and non-TDA based classification methods over all levels of SNR considered here. Furthermore, the Gaussian learning scheme is able to classify almost perfectly at the higher signal-to-noise ratio.
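The evaluation protocol described above can be sketched generically; the `train_and_classify` callback below is a hypothetical stand-in for any of the compared classifiers (Bayes factor, ANN, LR):

```python
import random

def ten_fold_accuracy(samples, labels, train_and_classify, folds=10, seed=0):
    """Estimate accuracy by k-fold cross validation: each fold is held out
    as the test set exactly once, and the per-fold accuracies are averaged.
    `train_and_classify` fits on the training split and returns a predictor."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    parts = [idx[i::folds] for i in range(folds)]
    fold_acc = []
    for k in range(folds):
        held_out = set(parts[k])
        train = [i for i in idx if i not in held_out]
        predict = train_and_classify([samples[i] for i in train],
                                     [labels[i] for i in train])
        correct = sum(predict(samples[i]) == labels[i] for i in parts[k])
        fold_acc.append(correct / len(parts[k]))
    return sum(fold_acc) / folds
```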

VI Conclusion

In this work, we have proposed a Bayesian framework for persistence diagrams that incorporates prior beliefs about signals and does not rely on any regularity assumptions, such as stationarity or linearity, for the computation of posterior distributions. The topological descriptors, e.g., persistence diagrams of EEG signals, can decipher essential shape peculiarities by avoiding complex and unwanted geometric features. Our method perceives persistence diagrams as point processes (PPs). As required for a Bayesian paradigm, we incorporate prior uncertainty by viewing persistence diagrams as Poisson PPs with a given intensity. We model the connection between the prior PP and persistence diagrams of noisy observations through marked PPs. These model the data likelihood component of the Bayesian framework. Additionally, we define the likelihood through topological summaries of a signal rather than using the entire signal. This is analogous to the substitution likelihood discussed by Jeffreys [19].

Relying on the posterior distributions obtained from the Bayesian framework, we develop a Bayesian learning scheme. Furthermore, we present a closed form of the posterior estimation through a conjugate family of priors. In the case of synchronized brain activity, this implementation is useful for analyzing EEG signals. This exhibits the ability of our method to recover the underlying persistence diagram, analogously to the standard Bayesian paradigm for random variables.

We employ this Bayesian learning scheme for EEG signal classification. We provide a detailed comparison with some of the existing methods of signal classification and showcase that our method outperforms them. For comparison purposes, we pursue two directions. Firstly, we compare with the two most widely used signal classification algorithms: neural networks and logistic regression. Secondly, we show a comparison between our method and another topological tool, namely the persistence landscape, paired with traditional machine learning methods such as support vector machines and logistic regression. We exhibit higher accuracy for all considered cases. Thus, the Bayesian inference developed here opens up new avenues for machine learning algorithms for complex signal analysis directly on the space of persistence diagrams.

References

1. A. S. Al-Fahoum and A. A. Al-Fraihat (2014) Methods of EEG signal features extraction using linear analysis in frequency and time-frequency domains. ISRN Neuroscience. External Links: Link Cited by: §V-D.
2. F. Altindis, B. Yilmaz, S. Borisenok and K. Icoz (2018) Use of topological data analysis in motor intention based brain-computer interfaces. 26th European Signal Processing Conference. Cited by: §I.
3. A. Babichev and Y. Dabaghian (2017) Persistent memories in transient networks. Emergent Complexity from Nonlinearity, in Physics, Engineering and the Life Sciences 191, pp. 179–188. Cited by: §I.
4. M. M. E. Bahy, M. Hosny, A. Mohamed and S. Ibrahim (2016) EEG signal classification using neural network and support vector machine in brain computer interface. Advances in Intelligent Systems and Computing. Cited by: §V-D.
5. C. A.N. Biscio and J. Møller (2019) The accumulated persistence function, a new useful functional summary statistic for topological data analysis, with a view to brain artery trees and spatial point process applications. Journal of Computational and Graphical Statistics, pp. 1537–2715. External Links: Document Cited by: §I.
6. P. Bubenik (2015) Statistical topological data analysis using persistence landscapes. Journal of Machine Learning Research 16, pp. 77–102. Cited by: §V-D.
7. E. Campbell, A. Phinyomark and E. Scheme (2019) Feature extraction and selection for pain recognition using peripheral physiological signals. Frontiers in Neuroscience 13, pp. 437. Cited by: §I.
8. M. Dindin, Y. Umeda and F. Chazal (2019) Topological data analysis for arrhythmia detection through modular neural networks. arXiv:1906.05795. Cited by: §I, §I, §V-D.
9. H. Edelsbrunner (2010) Computational topology: an introduction. American Mathematical Society, Providence, R.I. Cited by: §II-A, §II-A.
10. G. Fiscon and et al. (2018) Combining EEG signal processing with supervised methods for Alzheimer's patients classification. BMC Medical Informatics and Decision Making 18 (1), pp. 35. Cited by: §V-D.
11. I.R. Goodman, R.P. Mahler and T. N. Hung (1997) Mathematics of data fusion. Kluwer Academic Publisher, Dordrecht/Boston/London. Cited by: §I.
12. W. Guo, K. Manohar, S. L. Brunton and A. G. Banerjee (2018) Sparse-TDA: sparse realization of topological data analysis for multi-way classification. IEEE Transactions on Knowledge and Data Engineering 30 (7), pp. 1403 – 1408. Cited by: §I.
13. D. P. Humphreys, M. R. McGuirl, M. Miyagi and A. J. Blumberg (2019) Fast estimation of recombination rates using topological data analysis. GENETICS. External Links: Document Cited by: §I.
14. T. Ichinomiya, I. Obayashi and Y. Hiraoka (2017) Persistent homology analysis of craze formation. Physical Review E 95 (1), pp. 012504. Cited by: §I.
15. M. Z. Ilyas, P. Saad, M. I. Ahmad and A. R. I. Ghani (2016-11) Classification of EEG signals for brain-computer interface applications: performance comparison. In 2016 International Conference on Robotics, Automation and Sciences (ICORAS), pp. 1–4. External Links: Document, ISSN Cited by: §I, §I.
16. R. A. A. Ince and et al. (2017) A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula.. Human Brain Mapping 38, pp. 1541–1573. Cited by: §V-B.
17. P. Indic, R. Pratap, V.P. Nampoori and N. Pradhan (1999) Significance of time scales in nonlinear dynamical analysis of electroencephalogram signals. Int Journal of Neuroscience 99 (1–4), pp. 181–194. Cited by: §I.
18. M. Jacobsen (2005) Point process theory and applications: marked point and piecewise deterministic processes. Birkhäuser. Cited by: §III.
19. H. Jeffreys (1961) Theory of probability. Clarendon Press. Cited by: §I, §III, §VI.
20. E. Kabir, Siuly and Y. Zhang (2016-06-01) Epileptic seizure detection from eeg signals using logistic model trees. Brain Informatics 3 (2), pp. 93–100. External Links: ISSN 2198-4026, Document Cited by: §V-D.
21. M. Kimura, I. Obayashi, Y. Takeichi, R. Murao and Y. Hiraoka (2018) Non-empirical identification of trigger sites in heterogeneous processes using persistent homology. Scientific reports 8 (1), pp. 3553. Cited by: §I.
22. J. F. C. Kingman (1993) Poisson processes. Clarendon Press, Oxford. Cited by: §III.
23. W. Klonowski, E. Olejarczyk and R. Stepien (2004) 'Epileptic seizures' in economic organism. Physica A 342, pp. 701–707. Cited by: §I.
24. W. Klonowski (2007) From conformons to human brains: an informal overview of nonlinear dynamics and its applications in biomedicine. Nonlinear Biomedical Physics, pp. 1–5. Cited by: §I.
25. Y. Lee and et al. (2017) Quantifying similarity of pore-geometry in nanoporous materials. Nature Communications 8, pp. 15396. Cited by: §I.
26. A. Marchese and V. Maroulas (2016) Topological learning for acoustic signal identification. In 2016 19th International Conference on Information Fusion (FUSION), pp. 1377–1381. Cited by: §I.
27. A. Marchese and V. Maroulas (2018) Signal classification with a point process distance on the space of persistence diagrams. Advances in Data Analysis and Classification 12 (3), pp. 657–682. Cited by: §I.
28. V. Maroulas, J. L. Mike and C. Oballe (2019) Nonparametric estimation of probability density functions of random persistence diagrams. Journal of Machine Learning Research. Note: in press, arXiv:1803.02739 Cited by: §I, §I.
29. V. Maroulas, F. Nasrin and C. Oballe (2019) A Bayesian framework for persistent homology. arXiv:1901.02034. Cited by: §I, §III, §V-A.
30. V. Maroulas and A. Nebenführ (2015) Tracking rapid intracellular movements: a Bayesian random set approach. The Annals of Applied Statistics 9 (2), pp. 926–949. External Links: Document Cited by: §I.
31. J. Mike, C. D. Sumrall, V. Maroulas and F. Schwartz (2016) Nonlandmark classification in paleobiology: computational geometry as a tool for species discrimination. Paleobiology 42 (4), pp. 696–706.. Cited by: §I.
32. K. H. Norwich (1993) Information, sensation, and perception. Academic Press, Inc, San Diego, CA. Cited by: §V-B.
33. N. Padfield, J. Zabalza, H. Zhao, V. Masero and J. Ren (2019) EEG-based brain-computer interfaces using motor-imagery: techniques and challenges. Sensors (Basel) 19 (6), pp. 1423. Cited by: §I.
34. V. Patrangenaru, P. Bubenik, R. L. Paige and D. Osborne (2018) Topological data analysis for object data. arXiv:1804.10255. Cited by: §I.
35. M. Piangerelli, M. Rucco, L. Tesei and E. Merelli (2018) Topological classifier for detecting the emergence of epileptic seizures. BMC Research Notes 11, pp. 392. Cited by: §I, §I.
36. P. D. Prasad, H. N. Halahalli, J. P. John and K. K. Majumdar (2014) Single-trial EEG classification using logistic regression based on ensemble synchronization.. IEEE Journal of Biomedical and Health Informatics 18 (3), pp. 1074–1080. Cited by: §V-D.
37. I. Sgouralis, A. Nebenführ and V. Maroulas (2017) A Bayesian topological framework for the identification and reconstruction of subcellular motion. SIAM Journal on Imaging Sciences 10 (2), pp. 871–899. External Links: Document Cited by: §I.
38. S. Siuly, Y. Li and Y. Zhang (2016) EEG signal analysis and classification: techniques and applications. Springer, Cham, Switzerland. Cited by: §V-B.
39. K. Sivasankari and K. Thanushkodi (2014) An improved EEG signal classification using neural network with the consequence of ICA and STFT. J Electr Eng Technol 9 (3), pp. 1060–1071. Cited by: §V-D.
40. A. E. Sizemore, J. E. Phillips-Cremins, R. Ghrist and D. S. Bassett (2018) The importance of the whole: topological data analysis for the network neuroscientist. Network Neuroscience. External Links: Document Cited by: §I.
41. H. Sohn, D. Narain, N. Meirhaeghe and M. Jazayeri (2019) Bayesian computation through cortical latent dynamics. Neuron. External Links: Document Cited by: §I, §I.
42. C. Stam (2005) Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clinical Neurophysiology 116, pp. 2266–2301. Cited by: §I.
43. A. Subasi and E. Ercelebi (2005) Classification of EEG signals using neural network and logistic regression. Computer Methods and Programs in Biomedicine 78, pp. 87–99. Cited by: §V-D.
44. R. Tomioka, K. Aihara and K.R. Muller (2007) Logistic regression for single trial EEG classification. Advances in Neural Information Processing Systems 19, pp. 1377–1384. Cited by: §V-D.
45. M. J. A. M. van Putten and C. J. Stam (2001) Application of a neural complexity measure to multichannel EEG. Physics Letters A 281, pp. 131–141. Cited by: §V-B.
46. Y. Wang, D. D. M. K. Chung, A. Lutz and R. Davidson (2017) Topological network analysis of electroencephalographic power maps. Connectomics Neuroimaging 10511, pp. 134–142. Cited by: §I, §I.
47. Y. Wang, H. Ombao and M. K. Chung (2018) Topological data analysis of single-trial electroencephalographic signals. The Annals of Applied Statistics 12 (3), pp. 1506–1534. Cited by: §I, §I.