Classification of EEG Signal based on non-Gaussian Neutral Vector

Abstract

In the design of brain-computer interface systems, classification of Electroencephalogram (EEG) signals is an essential part and a challenging task. Recently, as the marginalized discrete wavelet transform (mDWT) representations can reveal features related to the transient nature of the EEG signals, the mDWT coefficients have been frequently used in EEG signal classification. In our previous work, we proposed a super-Dirichlet distribution-based classifier, which utilized the nonnegative and sum-to-one properties of the mDWT coefficients. The proposed classifier performed better than the state-of-the-art support vector machine-based classifier. In this paper, we further study the neutrality of the mDWT coefficients. Assuming the mDWT coefficient vector to be a neutral vector, we transform it non-linearly into a set of independent scalar coefficients. A feature selection strategy is proposed on the transformed feature domain. Experimental results show that the feature selection strategy helps improve the classification accuracy.

keywords:
Neutral vector, neutrality, nonlinear decorrelation, Dirichlet variable, super-Dirichlet distribution, beta distribution, EEG classification

1 Introduction

Brain-computer interface (BCI) connects persons suffering from neuromuscular diseases with computers by analyzing the recorded brain signals. With a well-designed BCI system, persons with neuromuscular disease can communicate with computers, enabling them to get assistance from machines. As a non-invasively acquired signal, the Electroencephalogram (EEG) signal is the most studied and applied one in the design of a BCI system [Lotte2007, Chiang2012]. While a person imagines an action, the electrical activity along the scalp is recorded in the EEG signal. EEG signals show different patterns for different actions. Hence, the type of imagined action can be estimated by analyzing the EEG signals. Appropriate classification of EEG signals plays an essential role in a BCI system [Prasad2011].

Various types of features have been extracted from EEG signals for the purpose of classification, such as the auto-regressive (AR) parameters [Penny2000], the multi-variate AR parameters [Chiang2012], the Fourier transform based features [Veluvolu2012, Wang2013], and the marginalized discrete wavelet transform (mDWT) coefficients [Subasi2007, Farina2007, Ma2012]. The DWT coefficients represent the signal by projecting it onto a set of subspaces. The wavelet transform applied to the EEG signal can reveal features related to the transient nature of the signal, in which the time-scale regions are defined [Subasi2007]. In order to make the DWT coefficients insensitive to time alignment, the marginalized DWT (mDWT) coefficients are usually used as the features for EEG signal classification [Prasad2011, Subasi2007, Farina2007]. In this paper, we focus on studying EEG classification performance using only the mDWT features. A widely applied method, among others, is to design a classifier based on the support vector machine (SVM) [Subasi2007, Farina2007, Chang2011]. Generally speaking, the SVM-based classifier is not sensitive to the curse of dimensionality. It is also not sensitive to overtraining when proper parameters are chosen [Prasad2011]. Moreover, it can easily be implemented for binary classification and extended to the multi-class case. By involving a kernel function (e.g., the Gaussian kernel), the performance of the SVM-based classifier can be further improved.

In EEG signal classification, the SVM-based classifier has been demonstrated to be a successful tool [Subasi2010, Prasad2011]. Nevertheless, the SVM-based method does not exploit the nonnegativity and the sum-to-one nature of the mDWT coefficients [Ma2012]. In order to capture these properties, we applied the Dirichlet distribution to model the underlying distribution of the mDWT coefficients. For mDWT coefficients from several mutually independent channels, it is natural to apply the so-called super-Dirichlet distribution [Ma2011b]. In [Ma2012], we designed a super-Dirichlet distribution-based classifier to classify the EEG signals with the mDWT representation (see footnote 2). The performance of the proposed classifier is superior to that of the SVM-based classifier.

It is well known that the Dirichlet variable is a neutral vector [Connor1969, James1980]. For a vector $\mathbf{x} = [x_1, \ldots, x_K]^{\mathrm{T}}$ with nonnegative elements that sum to one, the element $x_1$ is neutral if $x_1$ is independent of the normalized remaining vector $\frac{1}{1-x_1}[x_2, \ldots, x_K]^{\mathrm{T}}$. If all the elements in $\mathbf{x}$ are neutral, then $\mathbf{x}$ is defined as a completely neutral vector [Connor1969, Hankin2010]. The idea of neutrality was introduced by Connor et al. [Connor1969] to describe constrained variables with the property mentioned above; it was originally developed for biological applications. The elements of a neutral vector are highly negatively correlated. As all the elements in a neutral vector have bounded support and are nonnegative, the neutral vector cannot be described efficiently by a Gaussian distribution [Ma2013]. Thus, the conventional principal component analysis (PCA) method [Bishop2006] cannot be applied for optimal decorrelation (see footnote 3). We use the parallel nonlinear transformation (PNT) to decorrelate the neutral vector in an optimal manner [Ma2013]. With this procedure, a neutral vector is decorrelated into a set of independent scalars. Moreover, if the neutral vector is treated as a vector variable and assumed to be Dirichlet distributed, the obtained scalar variables are all beta distributed [Ma2011a]. After decorrelation, we propose a feature selection strategy to keep the relevant features. Both the variance and the differential entropy of the decorrelated scalar variables are used as criteria to determine which dimensions should be kept.

The purpose of dimension reduction is to remove the redundant dimensions and thus improve the corresponding performance [Bishop2006, Saeys2007, Kwak2002, He2011, Zhu2010]. We apply the proposed feature selection method to EEG signal classification tasks. The mDWT coefficients from each recording channel are assumed to be Dirichlet distributed [Ma2012, Ma2014] and are decorrelated into a set of mutually independent scalars that are beta distributed. By retaining the most relevant features, we design a multi-variate beta distribution-based classifier for EEG signals. Experimental results demonstrate that the proposed method performs better than both the state-of-the-art SVM-based classifier [Prasad2011] and our previously proposed super-Dirichlet distribution-based classifier [Ma2012].

The rest of this paper is organized as follows: the EEG signals are introduced in Sec. 2. In Sec. 3, we design a classifier via feature selection. Experimental results are shown in Sec. 4, and conclusions are drawn in Sec. 5.

2 Electroencephalogram Signal Analysis

The EEG signal represents the brain's electrical activity over a short period of time and is recorded from multiple electrodes placed on the scalp. Therefore, the EEG signals are obtained from multiple channels. The EEG data we use in this paper are obtained from the BCI competition III [BCI]. The training data and the test data were recorded from the same subject performing the same task, but on two different days with about one week in between. When a classifier trained on the first day's data is used to classify data from the following days, it is very difficult and challenging to achieve good performance; this recording setup therefore tests the robustness of a classifier to time variation.

2.1 Data Description

During the EEG signal recording, a subject had to perform imagined movements of either the left small finger or the tongue [BCI]. Thus we have two classes of EEG signals and the task is binary classification. The electrical brain activity was picked up during these trials using an ECoG platinum electrode grid placed on the contralateral (right) motor cortex. In total, 64 channels of EEG signals were obtained. For each channel, several trials of the imagined brain activity were recorded. In total, 278 trials were recorded as the labeled training set and 100 trials were recorded as the labeled test set. In both the training set and test set, the data are evenly recorded for each imagined movement.

2.2 Feature Extraction

For each of the 278 trials in the training set, 64 channels of data, each of length $N$ samples, were provided. Each channel's data was band-pass filtered (see footnote 4) and then processed by a multilevel one-dimensional DWT. The scaling function $\phi(t)$ and the corresponding mother wavelet function $\psi(t)$ are presented in (1), with $h(k)$ and $g(k)$ as the low-pass and high-pass filters, respectively [Farina2007]:

$$\phi(t) = \sum_{k} h(k)\,\sqrt{2}\,\phi(2t - k), \qquad \psi(t) = \sum_{k} g(k)\,\sqrt{2}\,\phi(2t - k). \qquad (1)$$

After the DWT, we obtained a set of coefficients $d_{l,k}$, where $l$ is the index of the decomposition level, $k$ is the index of the coefficient within each level, and $N$ is the length of the data from each channel. In order to make the DWT representation insensitive to time alignment, the DWT coefficients were marginalized into the so-called mDWT coefficients, defined as [Farina2007] (see footnote 5)

$$m_l = \frac{\sum_{k} |d_{l,k}|}{\sum_{l'=1}^{L+1} \sum_{k} |d_{l',k}|}, \qquad l = 1, \ldots, L+1, \qquad (2)$$

where $d_{L,k}$ and $d_{L+1,k}$ denote the high-band and low-band coefficients in the last decomposition level, respectively. The normalized coefficients were cascaded into an mDWT vector as $\mathbf{m} = [m_1, \ldots, m_{L+1}]^{\mathrm{T}}$. In our case, the DWT was carried out at level $L = 4$ with the Daubechies wavelet. Comparative work applying different wavelets can be found in, e.g., [Gandhi2011]. With these settings, the total dimensionality of the mDWT vector is five. For each of the 278 trials in the training set, we have 64 mDWT vectors, one per channel. The same procedure was also applied to the 100 trials in the test set.
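For illustration, the following is a minimal Python sketch of the mDWT feature extraction, using the PyWavelets package in place of the Matlab wavedec function used in the paper; the wavelet name 'db4' and the synthetic trial length are assumptions made here for the example.

```python
# Hedged sketch of mDWT feature extraction; 'db4' and the trial length
# are assumptions, as the paper only states a Daubechies wavelet at level 4.
import numpy as np
import pywt

def mdwt_features(x, wavelet="db4", level=4):
    """Compute the mDWT vector of one single-channel trial.

    Returns a nonnegative vector summing to one, with level + 1 elements
    (the detail bands plus the final approximation band), as in (2).
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    band_sums = np.array([np.sum(np.abs(c)) for c in coeffs])
    return band_sums / band_sums.sum()              # normalize to sum to one

# Example on a synthetic trial (sample count is a placeholder):
trial = np.random.randn(3000)
m = mdwt_features(trial)
print(m, m.sum())  # five nonnegative coefficients summing to 1.0
```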

2.3 Channel Selection

As mentioned above, the EEG signals were recorded independently from 64 channels located at different positions over the scalp. However, it is unclear which channels (i.e., recording positions) are more relevant to the imagery task than the rest [Lal2004], and signals recorded from irrelevant channels act as noise for the classification task [Prasad2011]. Thus, selecting the most relevant channels should improve the classification accuracy. Since our study is a binary classification task, we use two criteria, the Fisher ratio (FR) [Malina1981, Chae2012] and the generalization error estimation (GEE) [Lal2004], to select the best channels, respectively.

Fisher Ratio

In binary classification, the FR indicates how strongly a channel correlates with the class labels. For a channel $c$, the Fisher ratio, with equal prior probability for each class, is defined as [Malina1981]

$$\mathrm{FR}_c = \frac{\left[\mathbf{w}^{\mathrm{T}} (\boldsymbol{\mu}_{c,1} - \boldsymbol{\mu}_{c,2})\right]^2}{\mathbf{w}^{\mathrm{T}} (\boldsymbol{\Sigma}_{c,1} + \boldsymbol{\Sigma}_{c,2})\, \mathbf{w}}, \qquad (3)$$

where $\boldsymbol{\mu}_{c,i}$ and $\boldsymbol{\Sigma}_{c,i}$ are the mean and the covariance matrix of class $i$ in channel $c$, respectively, and $\mathbf{w}$ is a vector of the same size as $\boldsymbol{\mu}_{c,i}$ that represents the feature space coordinate axes. Channels with larger FRs are preferable for classification. The FRs were calculated on the training set. Table 1 lists the FRs corresponding to the 64 recording channels.
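A sketch of this per-channel FR scoring is given below; restricting $\mathbf{w}$ to the coordinate axes and aggregating the per-dimension ratios by summation are assumptions of this illustration.

```python
# Hedged sketch of Fisher-ratio channel scoring. Aggregating the
# per-dimension ratios by summation is an assumption of this example.
import numpy as np

def fisher_ratio(feats, labels):
    """feats: (n_trials, n_dims) mDWT vectors of one channel; labels: 0/1."""
    x1, x2 = feats[labels == 0], feats[labels == 1]
    num = (x1.mean(axis=0) - x2.mean(axis=0)) ** 2  # squared mean difference
    den = x1.var(axis=0) + x2.var(axis=0)           # within-class scatter
    return np.sum(num / den)                        # aggregate over dimensions

# Rank channels by FR (features: list of per-channel (n_trials, n_dims) arrays):
# ranking = np.argsort([-fisher_ratio(features[c], labels) for c in range(64)])
```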

Generalization Error Estimation

[Table 1 body not recoverable from the extraction: it lists the FR and CR values for each of the 64 recording channels; only isolated fragments (e.g., FR = 0.01, CR = 49.28) survive.]
Table 1: Fisher ratios and classification rates (in %) for the different channels. The best scores are in green bold font and the worst ones in red italic font.

To select channels, the performance of a channel can also be estimated by the generalization error with k-fold cross-validation. In the BCI competition III database, the data have already been split into a training set and a test set, with no overlap between the two. Evaluating the classification rate (CR) on the training set is therefore sufficient for estimating channel performance. For each channel, we train an SVM-based classifier on the labeled training set. With the obtained classifier, we then test the performance on the labeled training set itself. The higher the CR, the more preferable the channel. The CRs are also listed in Table 1.
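A minimal sketch of this CR-based channel scoring with scikit-learn follows; the RBF kernel parameters are left at their defaults here, since the exact values are not stated in this section.

```python
# Hedged sketch of generalization-error-based channel scoring: train an SVM
# per channel on the training set and score it on that same set.
import numpy as np
from sklearn.svm import SVC

def channel_cr(feats, labels):
    """feats: (n_trials, n_dims) mDWT vectors of one channel; labels: 0/1."""
    clf = SVC(kernel="rbf")          # Gaussian kernel; default C and gamma
    clf.fit(feats, labels)
    return clf.score(feats, labels)  # CR on the training set itself

# Channels with higher CR are preferred:
# cr = [channel_cr(features[c], labels) for c in range(64)]
```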

3 EEG Classification via Feature Selection

The channel selection methods described in the previous section motivate us to combine different channels to obtain better classification results. As described in [Ma2012], for each imagined trial we cascade the EEG features from the top $C$ channels to create a super-vector. The classification task is carried out on such super-vectors.

3.1 Super-Dirichlet Modeling

  Input: Neutral vector x = [x_1, ..., x_K]^T with x_k >= 0 and sum_k x_k = 1
  Set l = 1, x^(1) = x, K_1 = K;
  repeat
      if K_l is even then
          for k = 1, ..., K_l / 2 do
              u_{l,k} = x^(l)_{2k-1} / (x^(l)_{2k-1} + x^(l)_{2k})
              x^(l+1)_k = x^(l)_{2k-1} + x^(l)_{2k}
          end for
          K_{l+1} = K_l / 2
      else
          for k = 1, ..., (K_l - 1) / 2 do
              u_{l,k} = x^(l)_{2k-1} / (x^(l)_{2k-1} + x^(l)_{2k})
              x^(l+1)_k = x^(l)_{2k-1} + x^(l)_{2k}
          end for
          x^(l+1)_{(K_l + 1)/2} = x^(l)_{K_l}    (carry the last element)
          K_{l+1} = (K_l + 1) / 2
      end if
      l = l + 1
  until K_l = 2
  Set u_{l,1} = x^(l)_1 (the final pair sums to one)
  Output: Transformed vector u containing all u_{l,k}, which is of size K - 1.
Algorithm 1: Parallel Nonlinear Transformation

According to (2), the mDWT vector extracted from each channel contains elements that are nonnegative and sum to one. Hence, it is natural to model the underlying distribution of the mDWT vector by a Dirichlet distribution. For more than one channel, we apply the super-Dirichlet distribution [Ma2011b] to describe the super-vector's distribution. For a super-vector $\mathbf{x} = [\mathbf{x}_1^{\mathrm{T}}, \ldots, \mathbf{x}_C^{\mathrm{T}}]^{\mathrm{T}}$ built from the top $C$ channels, the probability density function (PDF) of the super-Dirichlet distribution is defined as

$$p(\mathbf{x}; \boldsymbol{\alpha}) = \prod_{c=1}^{C} \frac{\Gamma\!\left(\sum_{k=1}^{K_c} \alpha_{c,k}\right)}{\prod_{k=1}^{K_c} \Gamma(\alpha_{c,k})} \prod_{k=1}^{K_c} x_{c,k}^{\alpha_{c,k} - 1}, \qquad (4)$$

where $\Gamma(\cdot)$ is the gamma function, $C$ is the number of subvectors (i.e., the number of selected channels) in the super-vector, and $K_c$ is the dimension of the $c$th subvector (in our case, $K_c = 5$). $\alpha_{c,k}$ is the parameter corresponding to $x_{c,k}$, the $k$th element of the $c$th subvector $\mathbf{x}_c$. The PDF of the super-Dirichlet distribution is thus a product of several Dirichlet PDFs. Parameter estimation methods for the super-Dirichlet distribution can be found in [Ma2012].
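For illustration, the log-PDF in (4) can be evaluated as a sum of per-channel Dirichlet log-PDFs; the parameter values in this sketch are placeholders, assuming the $\alpha$'s have been estimated beforehand (e.g., by maximum likelihood, as in [Ma2012]).

```python
# Hedged sketch: evaluate the super-Dirichlet log-PDF of (4) with scipy.
# The alpha values below are placeholders, not estimated parameters.
import numpy as np
from scipy.stats import dirichlet

def super_dirichlet_logpdf(subvectors, alphas):
    """subvectors: list of C mDWT vectors (each sums to one);
    alphas: list of C matching Dirichlet parameter vectors."""
    return sum(dirichlet.logpdf(x, a) for x, a in zip(subvectors, alphas))

# Example with two channels of 5-dim mDWT vectors:
x = [np.array([0.4, 0.3, 0.15, 0.1, 0.05]),
     np.array([0.5, 0.2, 0.15, 0.1, 0.05])]
a = [np.ones(5) * 2.0, np.ones(5) * 3.0]
print(super_dirichlet_logpdf(x, a))
```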

3.2 Non-linear Decorrelation of Neutral Vector

Neutral Vector

Assume we have a random vector variable $\mathbf{x} = [x_1, \ldots, x_K]^{\mathrm{T}}$, where $x_k \geq 0$ and $\sum_{k=1}^{K} x_k = 1$. The element $x_1$ is neutral if $x_1$ is independent of the normalized remaining vector $\frac{1}{1-x_1}[x_2, \ldots, x_K]^{\mathrm{T}}$. If all the elements in $\mathbf{x}$ are neutral, then $\mathbf{x}$ is defined as a completely neutral vector [Connor1969, Hankin2010]. A neutral vector with $K$ elements has $K - 1$ degrees of freedom. According to this definition, a neutral vector conveys a particular type of independence among its elements, even though the element variables themselves are mutually negatively correlated.
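The neutrality property can be checked numerically for a Dirichlet variable, as in the sketch below; the parameter values are arbitrary, and a near-zero sample correlation is, of course, a weaker statement than full independence.

```python
# Numerical check of neutrality for a Dirichlet variable: x_1 should be
# independent of the renormalized remainder. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.dirichlet([2.0, 3.0, 4.0, 5.0, 6.0], size=100000)
rest = x[:, 1] / (1.0 - x[:, 0])            # first renormalized remainder
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])  # strongly negative
print(np.corrcoef(x[:, 0], rest)[0, 1])     # approximately zero
```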

Decorrelation via Parallel Non-linear Transformation

In most signal processing applications, the transformations we use are either linear or non-linear via some kernel function. Even though we could apply PCA directly to the neutral random vector variable, this linear transformation can only decorrelate the data; it cannot guarantee independence if the data are not Gaussian. Furthermore, PCA does not exploit the neutrality [Ma2011]. Therefore, PCA is not optimal for decorrelating a neutral vector. By exploiting the neutrality, we apply a nonlinear invertible transformation that decorrelates the vector variable into a set of mutually independent variables. In contrast to PCA, this transformation does not require any statistical information (e.g., the covariance matrix) about the observed vector set. It thus avoids the eigenvalue analysis required by PCA, which reduces the computational cost.

As each element in $\mathbf{x}$ is neutral, the neutrality of $x_1$ implies that $x_1$ is independent of the remaining normalized elements, which in turn build a new neutral vector. Based on this fact, the parallel non-linear transformation (PNT) scheme described in Algorithm 1 can be applied to non-linearly decorrelate a $K$-dimensional neutral vector $\mathbf{x}$ into a vector $\mathbf{u}$ of $K - 1$ mutually independent variables. A discussion of the independence is presented in [Ma2013]. The nonlinear transformation scheme is invertible by iterative multiplications.
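Below is a compact Python sketch of the PNT, assuming the pairwise ratio/sum scheme of Algorithm 1: each round maps adjacent pairs to a ratio (an output) and a sum (passed to the next round), carrying a trailing odd element unchanged.

```python
# Hedged sketch of the PNT in Algorithm 1 (pairwise ratio/sum scheme).
import numpy as np

def pnt(x):
    """Transform a K-dim neutral vector (sums to one) into K-1 scalars."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 2:
        pairs = len(x) // 2
        for k in range(pairs):
            out.append(x[2 * k] / (x[2 * k] + x[2 * k + 1]))   # ratio output
        nxt = [x[2 * k] + x[2 * k + 1] for k in range(pairs)]  # pairwise sums
        if len(x) % 2 == 1:
            nxt.append(x[-1])                                  # carry odd tail
        x = np.array(nxt)
    out.append(x[0])  # final pair sums to one, so the ratio is x[0] itself
    return np.array(out)

u = pnt([0.4, 0.3, 0.15, 0.1, 0.05])
print(u)  # four transformed scalars for a five-dimensional input
```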

Distribution of the Decorrelated Elements

  Input: Original Dirichlet parameters alpha = [alpha_1, ..., alpha_K]^T
  Set l = 1, alpha^(1) = alpha, K_1 = K;
  repeat
      if K_l is even then
          for k = 1, ..., K_l / 2 do
              a_{l,k} = alpha^(l)_{2k-1}
              b_{l,k} = alpha^(l)_{2k},  alpha^(l+1)_k = alpha^(l)_{2k-1} + alpha^(l)_{2k}
          end for
          K_{l+1} = K_l / 2
      else
          for k = 1, ..., (K_l - 1) / 2 do
              a_{l,k} = alpha^(l)_{2k-1}
              b_{l,k} = alpha^(l)_{2k},  alpha^(l+1)_k = alpha^(l)_{2k-1} + alpha^(l)_{2k}
          end for
          alpha^(l+1)_{(K_l + 1)/2} = alpha^(l)_{K_l}    (carry the last element)
          K_{l+1} = (K_l + 1) / 2
      end if
      l = l + 1
  until K_l = 2
  Set a_{l,1} = alpha^(l)_1, b_{l,1} = alpha^(l)_2
  Output: Parameters for the transformed variables, a_{l,k} and b_{l,k}, so that u_{l,k} ~ Beta(a_{l,k}, b_{l,k}); each set is of size K - 1.
Algorithm 2: Calculation of Parameters in Beta Distributions

The Dirichlet variable is a completely neutral vector [Frigyik2010]. Assume $\mathbf{x}$ is a Dirichlet variable with PDF $\mathrm{Dir}(\mathbf{x}; \boldsymbol{\alpha})$ and parameters $\boldsymbol{\alpha} = [\alpha_1, \ldots, \alpha_K]^{\mathrm{T}}$; we apply the PNT algorithm proposed above to decorrelate $\mathbf{x}$ and obtain $\mathbf{u}$. Moreover, all the elements in $\mathbf{u}$ are not only decorrelated but also mutually independent. With the permutation property, the aggregation property, and the neutrality property [Ma2013], each element in the obtained vector $\mathbf{u}$ is beta distributed. The algorithm for calculating the parameters of the resulting beta distributions is described in Algorithm 2. For the five-dimensional case used in this paper, for example, we have

$$u_{1,1} = \frac{x_1}{x_1 + x_2}, \quad u_{1,2} = \frac{x_3}{x_3 + x_4}, \quad u_{2,1} = \frac{x_1 + x_2}{x_1 + x_2 + x_3 + x_4}, \quad u_{3,1} = x_1 + x_2 + x_3 + x_4, \qquad (5)$$

where

$$u_{1,1} \sim \mathrm{Beta}(\alpha_1, \alpha_2), \quad u_{1,2} \sim \mathrm{Beta}(\alpha_3, \alpha_4), \quad u_{2,1} \sim \mathrm{Beta}(\alpha_1 + \alpha_2,\, \alpha_3 + \alpha_4), \quad u_{3,1} \sim \mathrm{Beta}(\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4,\, \alpha_5). \qquad (6)$$
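A sketch of Algorithm 2 under the same pairing assumption as the PNT sketch above reads as follows; the parameter vector in the example is arbitrary.

```python
# Hedged sketch of Algorithm 2: the beta parameters follow the same pairing
# as the PNT, with (a, b) taken from the aggregated Dirichlet parameters.
import numpy as np

def pnt_beta_params(alpha):
    """Return (a, b) arrays such that u_i ~ Beta(a[i], b[i]) after the PNT."""
    alpha = np.asarray(alpha, dtype=float)
    a, b = [], []
    while len(alpha) > 2:
        pairs = len(alpha) // 2
        for k in range(pairs):
            a.append(alpha[2 * k]); b.append(alpha[2 * k + 1])
        nxt = [alpha[2 * k] + alpha[2 * k + 1] for k in range(pairs)]
        if len(alpha) % 2 == 1:
            nxt.append(alpha[-1])           # carry the odd tail, as in the PNT
        alpha = np.array(nxt)
    a.append(alpha[0]); b.append(alpha[1])  # final aggregated pair
    return np.array(a), np.array(b)

a, b = pnt_beta_params([2.0, 3.0, 4.0, 5.0, 6.0])
print(list(zip(a, b)))  # [(2,3), (4,5), (5,9), (14,6)], matching (6)
```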

To illustrate the decorrelation effect of the PNT scheme on the Dirichlet variable, we generated a large number of vectors from a Dirichlet distribution and evaluated the sample correlation coefficient for each element pair before and after the transformation with PNT. The coefficients are very small after transformation; hence, the correlation between each element pair essentially vanishes.
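This experiment can be reproduced along the following lines, reusing the pnt function from the sketch above (the Dirichlet parameters are arbitrary):

```python
# Correlation before/after the PNT; reuses pnt() from the earlier sketch.
import numpy as np

rng = np.random.default_rng(1)
X = rng.dirichlet([2.0, 3.0, 4.0, 5.0, 6.0], size=100000)
U = np.apply_along_axis(pnt, 1, X)            # transform every sample

print(np.corrcoef(X, rowvar=False).round(2))  # clearly negative off-diagonals
print(np.corrcoef(U, rowvar=False).round(2))  # off-diagonals close to zero
```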

3.3 Selection of Relevant Features

Feature selection is an important problem in EEG signal classification [Peng2005, Prasad2011, Lawhern2013]. In Section 2.3, the FR and GEE were applied to select the most relevant channels. However, within each channel, it is unknown which dimensions are more relevant to the class labels than others. Another difficulty for feature selection within each channel is that the features in different dimensions are highly negatively correlated. The decorrelation strategy introduced above transforms the negatively correlated Dirichlet vector variable into a set of mutually independent scalar variables. Thus, we can select the features directly, without considering the correlations among them.

Typically, two criteria can be used for feature selection: the variance of the data [Bishop2006, He2011] and the differential entropy of the data [Kwak2002, Zhu2010]. The variance reflects how far a set of data is spread out. The differential entropy measures the average uncertainty of a random variable under a continuous probability distribution. In general, dimensions with larger variance/differential entropy are preferred in classification, as they better describe the divergence among the data. Under the assumption that the source data are Dirichlet distributed, the transformed vector contains a set of scalar variables that are beta distributed. For a beta distribution $\mathrm{Beta}(u; a, b)$, the variance of $u$ is computed as

$$\mathrm{Var}[u] = \frac{ab}{(a+b)^2 (a+b+1)}, \qquad (7)$$

and the differential entropy of is calculated as

$$h(u) = \ln \mathrm{B}(a, b) - (a-1)\,\psi(a) - (b-1)\,\psi(b) + (a+b-2)\,\psi(a+b), \qquad (8)$$

where $\psi(\cdot)$ is the digamma function defined as $\psi(x) = \frac{d}{dx} \ln \Gamma(x)$, and $\mathrm{B}(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ is the beta function.

In the following, we use both of the above criteria to select the $D$ dimensions with the largest variances or differential entropies.
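Both criteria are available in closed form via scipy, whose beta.var and beta.entropy correspond to (7) and (8) (for continuous distributions, scipy's entropy is the differential entropy); the parameter values below are taken from the earlier worked example.

```python
# Hedged sketch of ranking the decorrelated dimensions by (7) or (8).
import numpy as np
from scipy.stats import beta

def select_dims(a, b, D, criterion="var"):
    """Keep the D dimensions with the largest variance or differential entropy."""
    score = beta.var(a, b) if criterion == "var" else beta.entropy(a, b)
    return np.argsort(-score)[:D]   # indices of the D best dimensions

a = np.array([2.0, 4.0, 5.0, 14.0])
b = np.array([3.0, 5.0, 9.0, 6.0])
print(select_dims(a, b, D=3, criterion="var"))
```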

3.4 Multi-variate Beta Distribution-based MAP Classifier

According to the above procedure, a set of $D$ selected dimensions is obtained. As the data in each dimension are assumed to be beta distributed and the dimensions are mutually independent, we can model the underlying distribution of the selected $D$-dimensional vector variable $\mathbf{u}$, selected from one recording channel, by a multi-variate beta distribution (mvBeta) as

$$p(\mathbf{u}; \mathbf{a}, \mathbf{b}) = \prod_{d=1}^{D} \frac{\Gamma(a_d + b_d)}{\Gamma(a_d)\, \Gamma(b_d)}\, u_d^{a_d - 1} (1 - u_d)^{b_d - 1}. \qquad (9)$$

Similarly, for the recordings from the top $C$ channels, there are $CD$ dimensions selected in total. These dimensions are modeled as

$$p(\mathbf{u}; \mathbf{a}, \mathbf{b}) = \prod_{c=1}^{C} \prod_{d=1}^{D} \frac{\Gamma(a_{c,d} + b_{c,d})}{\Gamma(a_{c,d})\, \Gamma(b_{c,d})}\, u_{c,d}^{a_{c,d} - 1} (1 - u_{c,d})^{b_{c,d} - 1}, \qquad (10)$$

where $u_{c,d}$ denotes the $d$th selected dimension of the $c$th channel.

The BCI competition III data contain two classes, with label index $i \in \{1, 2\}$. Since the parameters of the beta distributions are known according to Algorithm 2, a class-dependent mvBeta distribution can be obtained for each class. In the test procedure, we create a maximum a posteriori (MAP) classifier with the models obtained above. In each recording channel, for the mDWT vector from a test trial, we first transform it into $\mathbf{u}$ with Algorithm 1 and then select the $D$ dimensions via the dimensions' variance/entropy. Finally, a decision based on the selected features from the $C$ recording channels is made as

$$\hat{i} = \arg\max_{i \in \{1, 2\}} \prod_{c=1}^{C} \prod_{d=1}^{D} \mathrm{Beta}\!\left(u_{c,d};\, a_{c,d}^{(i)},\, b_{c,d}^{(i)}\right), \qquad (11)$$

where $a_{c,d}^{(i)}$ and $b_{c,d}^{(i)}$ are the beta parameters of class $i$; with equal class priors, the MAP decision reduces to maximum likelihood.
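A hedged sketch of this decision rule follows, assuming the class-conditional beta parameters have already been computed with Algorithm 2 and that the selected dimensions of all channels are stacked into one vector; all numbers are placeholders.

```python
# Hedged sketch of the mvBeta MAP decision in (11) with equal class priors.
# Parameter values are placeholders, not estimated from data.
import numpy as np
from scipy.stats import beta

def map_classify(u, a, b):
    """u: selected feature vector; a, b: dicts mapping class -> parameter arrays."""
    loglik = {i: np.sum(beta.logpdf(u, a[i], b[i])) for i in a}
    return max(loglik, key=loglik.get)  # equal priors: MAP reduces to ML

# Example with two classes and three selected dimensions:
a = {1: np.array([2.0, 4.0, 5.0]), 2: np.array([3.0, 3.0, 7.0])}
b = {1: np.array([3.0, 5.0, 9.0]), 2: np.array([2.0, 6.0, 8.0])}
print(map_classify(np.array([0.4, 0.45, 0.35]), a, b))
```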

4 Experimental Results and Discussions

We evaluated the performance of the proposed feature selection strategy with the mvBeta distribution-based classifier on the BCI competition III database and compared it with the SVM-based classifier, the recently proposed super-Dirichlet mixture model (sDMM)-based method, and a PCA-based classifier. The DWT was calculated using the Matlab wavedec function with decomposition level equal to 4, followed by the marginalization described in (2). According to Table 1, the best $C$ channels were selected based on their FR or CR ranks.

Classifier setting and implementations:

  • The mvBeta-based classifier was implemented according to the description in section 3.4. Feature selection was carried out within each channel.

  • LIBSVM [Chang2011] was used to implement the SVM-based classifier, with a Gaussian kernel function and a soft margin parameter. No feature selection was applied for the SVM-based classifier.

  • The sDMM-based classifier was implemented based on the method described in [Ma2012]. Neither decorrelation nor feature selection was applied for the sDMM.

  • The PCA-based classifier was implemented with the standard PCA method. Within each channel, PCA was applied to decorrelate the data, and features were selected according to their variances. A Gaussian mixture model was applied to model the distribution of the selected features.

All the classifiers mentioned above were trained and evaluated on mDWT coefficients collected from the best $C$ channels.

4.1 Classification Accuracy without Feature Selection

In order to validate the non-linear decorrelation strategy, we first evaluated the mvBeta distribution-based classifier without feature selection, i.e., with all $D = 4$ decorrelated dimensions retained. In this case, the proposed classifier should perform exactly as the one used in [Ma2012], since no information is added or lost during the non-linear transformation. As expected, the experimental results show performance identical to that reported in [Ma2012], where the sDMM-based classifier was employed, with the same highest classification accuracy in both cases.

4.2 Classification Accuracy with Feature Selection

The total dimension of the mDWT vector is 5 for each recording channel, corresponding to 4 degrees of freedom. Hence, after decorrelation (with either PNT or PCA), the obtained vectors are 4-dimensional. In order to evaluate the mvBeta distribution-based classifier with the feature selection strategy proposed in Sec. 3.3, we retained fewer than 4 dimensions per channel (see footnote 6). We made analogous feature selection choices for the PCA-based classifier. The classification accuracies are illustrated in Fig. 1.

It can be observed that, for the FR case (Fig. 1(a), 1(c), and 1(e)), the best performance of the mvBeta distribution-based classifier with feature selection equals the best classification rate obtained by the sDMM/mvBeta (without feature selection)-based classifiers; the only difference is that the latter classifiers require more selected channels to reach it. The best performance of the PCA-based classifier is lower. When investigating the CR case (Fig. 1(b), 1(d), and 1(f)), it can be observed that the mvBeta distribution-based classifier performs better than the sDMM/mvBeta (without feature selection)-based classifiers, and the best classification rate is reached at several numbers of selected channels. This fact supports our motivation that removing redundant features can improve the classification performance. Retaining too few dimensions, however, does not work well, because too many dimensions are removed and key information is lost. The best performance of the PCA-based classifier is again unchanged; in its case, feature selection does not help in improving the classification accuracy.

[Panels (a), (c), (e): channel selection with Fisher ratio; panels (b), (d), (f): channel selection with classification rates. Each row of panels corresponds to a different number of retained dimensions D; the exact values did not survive the extraction.]
Figure 1: Classification rate comparisons of the mvBeta-based, PCA-based, and SVM-based classifiers, with different channel selection strategies and numbers of selected channels.
[Table 2 body not recoverable from the extraction. Its rows compare, for both Fisher-ratio and classification-rate channel selection, the mvBeta/sDMM (without feature selection), mvBeta (with feature selection), PCA, and SVM classifiers; its columns list the best performance (with the number of selected channels), the mean accuracy, and the standard deviation.]

Table 2: Summary of classification rates ($D = 4$ is the case without feature selection).

4.3 Discussion

In general, the non-linear decorrelation strategy for the neutral vector works well in EEG signal classification, both with and without feature selection. This verifies the effectiveness of the non-linear decorrelation strategy.

Compared with the SVM-based classifier [Prasad2011], the recently proposed sDMM-based classifier [Ma2012], and the PCA-based classifier, the feature selection strategy proposed in this paper indeed improves the classification results. A summary of the comparisons is listed in Table 2.

For the FR case, the mvBeta distribution-based classifier with feature selection and the sDMM-based classifier reach the same highest accuracy. However, the latter needs to involve more channels, while the former obtains the same classification rate with fewer; this indicates that the latter method has higher complexity. Compared with the best PCA-based classifier, the mvBeta distribution-based classifier improves the classification rate, and the mean accuracy is improved as well. For the CR case, the mvBeta distribution-based classifier outperforms both the sDMM-based classifier and the PCA-based classifier. Similar to the FR case, the mvBeta distribution-based classifier requires fewer channels. Moreover, when comparing the mean classification rate and the standard deviation, the mvBeta distribution-based classifier with feature selection is more reliable and stable than all the other methods.

To further test the statistical significance of the classification accuracies, we also applied Student's t-test to analyze the results. The p-values for the null hypothesis that the two compared methods perform similarly are listed in Table 3. All the p-values are far smaller than the significance level and, therefore, the null hypotheses are rejected. This means that the proposed mvBeta distribution-based method indeed improves the classification accuracy.

[Table 3 body not recoverable from the extraction: it lists, for the Fisher-ratio and classification-rate cases, the p-values for the null hypotheses "mvBeta & SVM" and "mvBeta & PCA".]
Table 3: p-values of Student's t-test for the null hypothesis that the classification performances of two methods are similar. The best performance of each method is selected for the comparisons.

5 Conclusions and future work

In order to optimally remove the correlation among the feature dimensions and thus improve classification accuracy, a parallel non-linear transformation strategy was applied to decorrelate the negatively correlated neutral vector. Specifically, when the neutral vector is Dirichlet distributed, the obtained decorrelated scalar variables are mutually independent and each of them is beta distributed. After decorrelation, we applied the variance and the differential entropy as criteria for feature selection. The proposed feature selection strategy with non-linear transformation was employed in EEG signal classification. Experimental results demonstrate that the classifier based on the selected features performs better and is more stable than the SVM-based classifier, the recently proposed sDMM-based classifier, and the PCA-based classifier.

There are many possible ways to improve the classification accuracy in future work. In the current work, feature selection is conducted for each channel independently. If we apply a proper feature selection strategy jointly across the best channels, further improvement of the classification accuracy can be expected. Moreover, other features exist, e.g., Fourier features, that can be used for EEG classification. Although the Fourier features do not naturally fit the definition of the Dirichlet distribution, we can apply a proper normalization strategy to make the features neutral. Since Fourier features are more intuitive, classification accuracy improvement with normalized, neutral Fourier features can also be expected.

6 Acknowledgements

The authors would like to thank the reviewers for their fruitful suggestions. Also, the authors would like to thank Dr. Jing-Hao Xue for his kind discussions and suggestions.

This work was partly supported by the National Natural Science Foundation of China (NSFC), the Scientific Research Foundation for Returned Scholars, Ministry of Education of China, the Chinese program of Advanced Intelligence and Network Service, and the EU FP7 IRSES MobileCloud Project.

Footnotes

  2. A super-Dirichlet variable is obtained by cascading several Dirichlet variables.
  3. Even though we could apply PCA directly to the neutral random vector variable, this linear transformation can only decorrelate the data; it cannot guarantee independence if the data are not Gaussian distributed.
  4. It is also suggested in other literature that the frequency characteristics can be found in an even higher frequency band [Leuthardt2004]. We use this band pass, as suggested in [Prasad2011] and [Ma2012], purely to keep the feature extraction settings consistent with previous work.
  5. The definition in [Farina2007] was unclear about processing the low-band data obtained at the last decomposition level. We use a different expression here to make it clearer.
  6. We have tried both the variance and the differential entropy criteria. For the BCI competition III data set used in this paper, these two criteria yield exactly the same order of features.

References

  1. F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, “A review of classification algorithms for EEG-based brain-computer interfaces,” Journal of Neural Engineering, vol. 4, no. 2, p. R1, 2007.
  2. J. Chiang, Z. Wang, and M. McKeown, “A generalized multivariate autoregressive (gMAR)-based approach for EEG source connectivity analysis,” IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 453–465, Jan. 2012.
  3. K. C. Veluvolu, Y. Wang, and S. S. Kavuri, “Adaptive estimation of EEG-rhythms for optimal band identification in BCI,” Journal of Neuroscience Methods, vol. 203, pp. 163–173, 2012.
  4. Y. Wang, K. C. Veluvolu, and M. Lee, “Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications,” Journal of NeuroEngineering and Rehabilitation, vol. 10, 2013.
  5. S. Prasad, Z.-H. Tan, R. Prasad, A. F. Cabrera, Y. Gu, and K. Dremstrup, “Feature selection strategy for classification of single-trial EEG elicited by motor imagery,” in International Symposium on Wireless Personal Multimedia Communications (WPMC), Oct. 2011, pp. 1–4.
  6. W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, “EEG-based communication: A pattern recognition approach,” IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 214–215, Jun. 2000.
  7. A. Subasi, “EEG signal classification using wavelet feature extraction and a mixture of expert model,” Expert Systems with Applications, vol. 32, no. 4, pp. 1084–1093, 2007.
  8. D. Farina, O. F. Nascimento, M. F. Lucas, and C. Doncarli, “Optimization of wavelets for classification of movement-related cortical potentials generated by variation of force-related parameters,” Journal of Neuroscience Methods, vol. 162, pp. 357–363, 2007.
  9. Z. Ma, Z.-H. Tan, and S. Prasad, “EEG signal classification with super-Dirichlet mixture model,” in Proceedings of IEEE Statistical Signal Processing Workshop, Aug. 2012, pp. 440–443.
  10. Z. Ma, P. K. Rana, J. Taghia, M. Flierl, and A. Leijon, “Bayesian estimation of Dirichlet mixture model with variational inference,” Pattern Recognition, vol. 47, no. 9, pp. 3143–3157, 2014.
  11. C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1–27:27, May 2011.
  12. Z. Ma and A. Leijon, “Bayesian estimation of beta mixture models with variational inference,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2160–2173, 2011.
  13. A. Subasi and M. I. Gursoy, “EEG signal classification using PCA, ICA, LDA and support vector machines,” Expert Systems with Applications, vol. 37, no. 12, pp. 8659–8666, 2010.
  14. J. Taghia, Z. Ma, and A. Leijon, “Bayesian estimation of the von-Mises Fisher mixture model with variational inference,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 9, pp. 1701–1715, Sept. 2014.
  15. Z. Ma, A. Leijon, and W. B. Kleijn, “Vector quantization of LSF parameters with a mixture of Dirichlet distributions,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 9, pp. 1777–1790, Sept. 2013.
  16. Z. Ma and A. Leijon, “Super-Dirichlet mixture models using differential line spectral frequencies for text-independent speaker identification,” in Proceedings of INTERSPEECH, 2011, pp. 2349–2352.
  17. R. J. Connor and J. E. Mosimann, “Concepts of independence for proportions with a generalization of the Dirichlet distribution,” Journal of the American Statistical Association, vol. 64, no. 325, pp. 194–206, 1969.
  18. Z. Ma and A. E. Teschendorff, “A variational Bayes beta mixture model for feature selection in DNA methylation studies,” Journal of Bioinformatics and Computational Biology, vol. 11, no. 4, 2013.
  19. P. K. Rana, J. Taghia, Z. Ma, and M. Flierl, “Probabilistic multiview depth image enhancement using variational inference,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 3, pp. 435–448, Apr. 2015.
  20. I. R. James and J. E. Mosimann, “A new characterization of the Dirichlet distribution through neutrality,” The Annals of Statistics, vol. 8, no. 1, pp. 183–189, 1980.
  21. Z. Ma and A. Leijon, “PDF-optimized LSF vector quantization based on beta mixture models,” in Proceedings of INTERSPEECH, 2010.
  22. R. K. S. Hankin, “A generalization of the Dirichlet distribution,” Journal of Statistical Software, vol. 33, no. 11, pp. 1–18, 2010.
  23. Z. Ma, “Bayesian estimation of the Dirichlet distribution with expectation propagation,” in Proceedings of European Signal Processing Conference, 2012.
  24. C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
  25. Y. Saeys, I. Inza, and P. Larrañaga, “A review of feature selection techniques in bioinformatics,” Bioinformatics, vol. 23, pp. 2507–2517, 2007.
  26. Z. Ma, A. Leijon, Z.-H. Tan, and S. Gao, “Predictive distribution of the Dirichlet mixture model by local variational inference,” Journal of Signal Processing Systems, vol. 74, no. 3, pp. 359–374, Mar. 2014.
  27. Z. Ma, S. Chatterjee, W. B. Kleijn, and J. Guo, “Dirichlet mixture modeling to estimate an empirical lower bound for LSF quantization,” Signal Processing, vol. 104, no. 11, pp. 291–295, Nov. 2014.
  28. Z. Ma, H. Li, Q. Sun, C. Wang, A. Yan, and F. Starfelt, “Statistical analysis of energy consumption patterns on the heat demand of buildings in district heating systems,” Energy and Buildings, vol. 85, pp. 464–472, Dec. 2014.
  29. Z. Ma, J. Taghia, W. B. Kleijn, A. Leijon, and J. Guo, “Line spectral frequencies modeling by a mixture of von Mises-Fisher distributions,” Signal Processing, vol. 114, pp. 219–224, Sept. 2015.
  30. J. Taghia and A. Leijon, “Variational inference for Watson mixture model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1886–1900, 2015.
  31. Z. Ma and A. Leijon, “Human skin color detection in RGB space with Bayesian estimation of beta mixture models,” in Proceedings of European Signal Processing Conference, 2010.
  32. Z. Ma and A. Leijon, “Human audio-visual consonant recognition analyzed with three bimodal integration models,” in Proceedings of INTERSPEECH, 2009.
  33. H. Yu, Z. Ma, M. Li, and J. Guo, “Histogram transform model using MFCC features for text-independent speaker identification,” in Proceedings of IEEE Asilomar Conference on Signals, Systems, and Computers, 2014.
  34. P. K. Rana, Z. Ma, J. Taghia, and M. Flierl, “Multiview depth map enhancement by variational Bayes inference estimation of Dirichlet mixture models,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2013.
  35. Z. Ma, R. Martin, J. Guo, and H. Zhang, “Nonlinear estimation of missing ΔLSF parameters by a mixture of Dirichlet distributions,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2014.
  36. Z. Ma and A. Leijon, “A probabilistic principal component analysis based hidden Markov model for audio-visual speech recognition,” in Proceedings of IEEE Asilomar Conference on Signals, Systems, and Computers, 2008.
  37. Z. Ma and A. Leijon, “Expectation propagation for estimating the parameters of the beta distribution,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2010.
  38. N. Kwak and C.-H. Choi, “Input feature selection by mutual information based on Parzen window,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1667–1671, Dec. 2002.
  39. X. He, M. Ji, C. Zhang, and H. Bao, “A variance minimization criterion to feature selection using Laplacian regularization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 2013–2025, Oct. 2011.
  40. S. Zhu, D. Wang, K. Yu, T. Li, and Y. Gong, “Feature selection for gene expression using model-based entropy,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 7, no. 1, pp. 25–36, Jan. 2010.
  41. “BCI competition III,” http://www.bbci.de/competition/iii.
  42. E. C. Leuthardt, G. Schalk, J. R. Wolpaw, J. G. Ojemann, and D. W. Moran, “A brain-computer interface using electrocorticographic signals in humans,” Journal of Neural Engineering, no. 1, pp. 63–71, 2004.
  43. Z. Ma, A. E. Teschendorff, H. Yu, J. Taghia, and J. Guo, “Comparisons of non-Gaussian statistical models in DNA methylation analysis,” International Journal of Molecular Sciences, vol. 15, pp. 10835–10854, 2014.
  44. K. Laurila, B. Oster, C. Andersen, P. Lamy, T. Orntoft, O. Yli-Harja, and C. Wiuf, “A beta-mixture model for dimensionality reduction, sample classification and analysis,” BMC Bioinformatics, 2011.
  45. T. Gandhi, B. K. Panigrahi, and S. Anand, “A comparative study of wavelet families for EEG signal classification,” Neurocomputing, vol. 74, no. 17, pp. 3051–3057, 2011.
  46. T. N. Lal, M. Schroder, T. Hinterberger, J. Weston, M. Bogdan, N. Birbaumer, and B. Scholkopf, “Support vector channel selection in BCI,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1003–1010, Jun. 2004.
  47. W. Malina, “On an extended Fisher criterion for feature selection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-3, no. 5, pp. 611–614, Sep. 1981.
  48. Z. Ma, A. E. Teschendorff, A. Leijon, Y. Qiao, H. Zhang, and J. Guo, “Variational Bayesian matrix factorization for bounded support data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 4, pp. 876–889, 2015.
  49. Y. Chae, J. Jeong, and S. Jo, “Toward brain-actuated humanoid robots: Asynchronous direct control using an EEG-based BCI,” IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1131–1144, Oct. 2012.
  50. Z. Ma, “Non-Gaussian statistical models and their applications,” Ph.D. dissertation, KTH - Royal Institute of Technology, 2011.
  51. B. A. Frigyik, A. Kapila, and M. R. Gupta, “Introduction to the Dirichlet distribution and related processes,” Department of Electrical Engineering, University of Washington, Tech. Rep., 2010.
  52. H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226–1238, Aug. 2005.
  53. V. Lawhern, W. Hairston, and K. Robbins, “Optimal feature selection for artifact classification in EEG time series,” in Foundations of Augmented Cognition, ser. Lecture Notes in Computer Science, D. Schmorrow and C. Fidopiastis, Eds. Springer Berlin Heidelberg, 2013, vol. 8027, pp. 326–334.