
# DeepBrain: Towards Personalized EEG Interaction through Attentional and Embedded LSTM Learning

## Abstract

The “mind-control” capability has always been part of mankind’s fantasy. With recent advancements in electroencephalography (EEG) techniques, brain-computer interface (BCI) researchers have explored various solutions that allow individuals to perform tasks using their minds. However, the commercial off-the-shelf devices for accurate EEG signal collection are usually expensive, while the comparably cheap devices can only produce coarse results, which prevents their practical application in domestic services. To tackle this challenge, we propose and develop an end-to-end solution, namely DeepBrain, that enables fine brain-robot interaction (BRI) through embedded learning of coarse EEG signals from low-cost devices, so that people who have difficulty moving, such as the elderly, can use their minds to command and control a robot to perform basic household tasks. Our contributions are twofold: 1) We present a stacked long short-term memory (Stacked LSTM) structure with specific pre-processing techniques to handle the time dependency of EEG signals and their classification. 2) We propose a personalized design that captures multiple features and achieves accurate recognition of individual EEG signals by enhancing the signal interpretation of the Stacked LSTM with an attention mechanism. Our real-world experiments demonstrate that the proposed low-cost, end-to-end solution can achieve satisfactory run-time speed, accuracy and energy efficiency.

## Introduction

Brain-Computer Interface (BCI) design, as an emerging sub-field of Machine Learning (ML) and Human-Computer Interaction (HCI), has made significant progress in recent years. In general, BCI systems rely on a head-worn device to collect electroencephalography (EEG) signals and interpret them into various user intentions. Based on this technology, many experimental BCI systems have been proposed for different scenarios. For example, \citeauthorAkram2015 [1] studied how to extract a user’s EEG signal to simulate a mouse click on a PC. \citeauthorMauss2009 [9] took a step further: they designed a system that extracts such a signal from one participant and then transmits it into another participant’s mind, to study whether it can influence the participants’ video-game-playing behavior. \citeauthorPinheiro2016 [12] dived into a different domain, aiming to design a BCI system that allows patients to control a robot in healthcare.

The advancements of EEG-based BCI are also attributable to the powerful neural network architectures released in recent years. Nowadays, researchers can use advanced neural-network-based models, as opposed to the early-day regression models, to interpret EEG signals [5]. This approach works extremely well with the recurrent neural network (RNN) architecture and its derivative, the Long Short-Term Memory (LSTM) architecture, as an EEG signal is a chronological sequence of data.

However, these existing systems suffer from a common drawback: most of them are experimental prototypes, or they were developed for institutional users (e.g., hospitals and governments [16]). Thus, the hardware cost is rather high, which inhibits wide adoption in people’s daily-use scenarios. EEG collection equipment varies widely in price. As shown in Figure 1, the EMOTIV EPOC+ 14-channel mobile EEG headset costs $799.00, whereas Brainlink costs $99. The use of such EEG collection equipment by ordinary users is therefore often limited by price. Another drawback of BCI-controlled robots is that they only allow users to perform one action, e.g., using a specific pattern of signals to move the robot forward [11].

Our key motivation is to provide an accessible end-to-end solution for general public users. Many people have difficulty walking or performing other movements; for example, the elderly who live alone at home still need to get many “simple” housework tasks done, but those tasks (e.g., picking up a remote from the floor) are no longer simple for them. We aim to design a solution that lets these users control a robot to perform such actions using only their brains. This solution has to be low-cost and energy-efficient, perform more than one category of task, and operate at relatively high accuracy. To ensure we capture the users’ needs correctly, we adopt a participatory design approach [10] and invite targeted users to be part of the iterative design process. We spent days visiting elderly-care facilities and some houses where an elderly person lives alone. Through the analysis of observation notes, interviews, and participatory design sessions, we evolved our initial design (e.g., a NAO robot) into the final design (e.g., a TX2-based robot) over multiple iterations.

### Contributions

To tackle these challenges, we propose and develop an end-to-end solution, namely DeepBrain, that enables fine brain-robot interaction (BRI) through embedded learning of coarse EEG signals from low-cost devices, so that our targeted users can command and control a robot with their minds to perform basic household tasks. The main contributions of this work are as follows:

• On the technical side, we first present a stacked long short-term memory (Stacked LSTM) structure with specific pre-processing techniques to handle the time dependency of EEG signals and their classification. We then propose a personalized design in DeepBrain that captures multiple features and achieves accurate recognition of individual EEG signals by enhancing the signal interpretation of the Stacked LSTM with an attention mechanism. Thus, DeepBrain can process the time dependency and the personal features of EEG signals at the same time.

• On the experimental side, we collect two datasets (one in a quiet environment and the other in a practical but noisy environment). These datasets cover different gender and age groups (3 males and 3 females, aged between 40 and 70) to illustrate the performance of our method. We compare DeepBrain with the current state-of-the-art approaches. The experimental results demonstrate that our method outperforms the others in run-time speed, accuracy and energy efficiency.

## Related Work

This section presents current research on wearable EEG devices and on EEG processing using recurrent neural networks.

### Wearable EEG Devices

Since emotions leave many traces inside and outside our bodies, various methods have been adopted for building emotion-recognition models, based, for example, on facial expressions and voice [2]. Among these approaches, EEG-based methods are considered promising for emotion recognition. Many findings in neuroscience support that EEG allows direct assessment of the ”inner” states of users [3]. However, most of these studies rely on wet electrodes (some with dozens of them). Besides the time cost and high price of placing the electrodes, the unrelated channels may introduce noise into the system, which can badly affect its performance. The HCI community calls for user-friendly, effective brain-computer interactions. With the rapid development of wearable devices and dry-electrode techniques [6], it is now possible to develop wearable EEG applications. For instance, a person with a speech or hand disability wearing such a device could convey his or her emotions to a service robot when the device detects a certain emotional state. An easy-to-install emotion-recognition EEG device is therefore attractive for HCI. To realize this idea, in this paper we use a relatively low number of electrodes for EEG collection. The collection device, called Brainlink, has two non-invasive dry electrodes placed at the forehead. When the user is in a different state (focused or relaxed), the Brainlink displays different colors of breathing lights. Emotion recognition is then performed on the collected EEG data by the system described above [19].

### EEG Processing Using Recurrent Neural Networks

When a neural network is applied to EEG data, the connection between the data before and after a time point can be established by manually building a sliding window over the EEG time series. Deep neural networks [8] have been applied to classify EEG data. LSTMs [14] are recurrent neural networks (RNNs) equipped with a special gating mechanism that controls access to memory cells. Memory cells controlled by gates allow information to pass unmodified over many time steps. Since the gates can prevent the rest of the network from modifying the contents of the memory cells for multiple time steps, LSTM networks preserve signals and propagate errors for much longer than ordinary RNNs. By independently reading, writing and erasing the contents of the memory cells, the gates can also be trained to select relevant parts of the input signal and ignore the rest. A Stacked LSTM [4] consists of LSTM units connected along the depth dimension. It is better at storing and generating longer-range patterns and is more robust. Attention LSTM cells operate along the sequential computation of each LSTM layer, but not in the vertical computation from one layer to the next. Multidimensional LSTM [13] replaces a single recurrent connection with many recurrent connections, so that it can deal with multi-dimensional data such as images and videos.

Recently, Zhang et al. [18] presented a brain-typing system that converts a user’s thoughts into text via deep feature learning of EEG signals. The classifier used in their system achieves an accuracy of 95.53% on multivariate classification. Their latest work [17] builds a universal EEG-based identification model that achieves an accuracy of 99.9%, though without a deployed system. However, among the research approaches above, some collect only one-channel EEG from the frontal cortex, which limits the computation of the emotion index, while others collect multi-channel EEG from the frontal cortex with more advanced equipment. Although the advanced devices can provide more diverse EEG data, every additional electrode adds considerable cost, so we may end up spending thousands of dollars on data collection devices, which works against designing a complete and feasible BCI system. Few studies attempt to build a feasible, high-precision, civilian and easily deployable EEG-based emotion recognition system, and the sliding-window approach described above is only applicable to short-term-dependent time series. Our approach uses an enhanced LSTM prediction model. This model makes the internal LSTM unit structure explicit and frequently updates the internal state values while acquiring input data at each time point, which guarantees that the EEG data before and after a time point remain strongly connected.

## System Overview

Our solution consists of two subsystems: an EEG signal collection and pre-processing module, and a neural-network-based EEG signal interpreter. The main goal of our method is to design a deep learning model that classifies the user’s emotion status from the raw EEG signal generated by our low-cost equipment in real time.

In the EEG signal pre-processing module, we use an off-the-shelf headset to collect the raw EEG signal; then we feed the signal into an EEG signal pre-processing algorithm for multi-classification.

The neural-network-based EEG signal interpreter module reads in the processed EEG signals and translates them into one of four statuses with the help of an LSTM network architecture. First, we use a Stacked LSTM to process the long-term dependency. Second, we propose an attention-based enhanced Stacked LSTM to capture the user’s EEG signal status. It is worth mentioning that we also incorporate a personalization step in this module, so that we can achieve more accurate prediction results after a simple fine-tuning step.

## LSTM-Based Method Processing EEG Signal

We propose a hybrid deep learning model to interpret the raw EEG signals. In this part, we first summarize the pre-processing step for EEG signals. Then, we outline the proposed LSTM-based method and its various components. At the end, we introduce the technical details of the brain-robot interaction system in the subsequent subsections.

### EEG Signal Collection and Pre-processing Module

This paper uses a low-cost EEG data collection device. Although the device is resistant to noise across the users’ signal channels, it is occasionally affected by the external environment, such as weather and sound. Therefore, there are sometimes unusual data points in the dataset. To improve the accuracy and stability of the results, data pre-processing is necessary. The raw EEG data collected over 180 seconds is shown in Figure 3. It can be seen that the data is quite dense and confusing; without appropriate pre-processing, it poses big challenges to model training and forecasting. As shown in Figure 3, the low-cost EEG collection device provides a score value that reflects, in decimal form, the user’s brain emotional state, i.e., a numerical representation of the above-mentioned EEG patterns. When the user’s EEG signal value is high, the brain has a high probability of being in the Beta or Gamma pattern. Conversely, when the value is low, the brain has a high probability of being in the Delta or Theta pattern. As for the medium-conscious Alpha pattern, the device gives different numerical values for different users (male or female).

Furthermore, we use low-cost equipment to collect the EEG data. Compared to high-cost equipment such as the Emotiv Epoc+ headset, which detects five types of patterns (Delta, Theta, Alpha, Beta and Gamma), our EEG equipment can only distinguish two signal levels, high and low. A high value corresponds to the Beta or Gamma pattern and a low value to the Delta or Theta pattern, while the middle Alpha pattern, which lies between Delta and Gamma, is divided into focused or relaxed depending on the signal value of the user (male or female). Based on the changes between these two states, we divide the labels of the local datasets into four categories.

First of all, our EEG collection device gives decimal scores that reflect the user’s brain emotion status, as shown in Figure 4. Although our EEG equipment is resistant to noise across the user’s signal channels, it can occasionally be affected by the external environment, such as weather and sound. Thus, there are sometimes outlier data points in the dataset. We apply statistical methods to identify the outliers, discard them, and replace each one with the average of the data points immediately before and after it (as the data form a time-series sequence).
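As a concrete illustration, the outlier cleaning step can be sketched as follows. The z-score rule, the threshold, and the function name are our assumptions, since the paper only says “statistical methods” are used to identify outliers:

```python
import numpy as np

def clean_outliers(scores, z_thresh=2.0):
    """Replace outlier points in a 1-D EEG score series with the average
    of their immediate neighbours, as described in the text. The z-score
    criterion and threshold are assumptions (the paper does not specify
    the statistical method used)."""
    scores = np.asarray(scores, dtype=float).copy()
    mean, std = scores.mean(), scores.std()
    if std == 0:
        return scores
    # Flag points far from the mean as outliers.
    outliers = np.where(np.abs(scores - mean) > z_thresh * std)[0]
    for idx in outliers:
        # Average the neighbours before and after (edge points fall back
        # to their single existing neighbour).
        prev = scores[idx - 1] if idx > 0 else scores[idx + 1]
        nxt = scores[idx + 1] if idx < len(scores) - 1 else scores[idx - 1]
        scores[idx] = (prev + nxt) / 2.0
    return scores
```

For example, a spike of 300 in a series hovering around 50 would be replaced by the mean of its two neighbours.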

As described at the beginning of the article, we properly encode the two types of collected data to achieve the expected four-way classification. We use one of the typical encoding methods, one-hot encoding, for label processing, which effectively avoids the numerical instability caused by the model’s logarithmic calculations during training. In the initial stage of the multi-class encoding work, our experimental results were always in a low numerical range. We suspected two possible causes: either there was a problem with the multi-class model architecture, or the input data could not be classified correctly. Through follow-up experiments we excluded the first case and found that the problem lay in the second, since the multi-class model structure was sound. We list the possible situations as follows: 1) there is a specific correspondence between the classification data and the labels; 2) the extracted features of the classification data may be partly similar; 3) the classification data cannot be classified; 4) the categorical data can be a combination of features (A&B) but not a single feature (A1&A2). The experimental data we collected fell into case 3) above, which prevented the model from classifying the data normally. We give an additional explanation of case 3). If we want to identify the ear characteristics of an animal, such as cat ears and dog ears, we can get the correct classification result by training an appropriate model. However, if the characteristics of dog ears and cat ears are each taken in half and combined into a new feature, the model cannot correctly classify the combined feature. For the middle Alpha pattern, we divide it into relaxed or focused based on the EEG signal value of the male or female user.
As shown in Figure 5, the intermediate state of the EEG data is higher for males: even in the relaxed state, the value is still close to 60, while the value for females is below 50.
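The one-hot label encoding mentioned above can be sketched in a few lines of NumPy; the function name is ours, and the default of four classes follows the paper’s four EEG label categories:

```python
import numpy as np

def one_hot(labels, num_classes=4):
    """One-hot encode integer class labels (0..num_classes-1) for the
    four EEG states. The hard 0/1 targets pair cleanly with a softmax
    cross-entropy loss during training."""
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0  # set the class column to 1
    return out
```

For instance, labels `[0, 2]` become the rows `[1, 0, 0, 0]` and `[0, 0, 1, 0]`.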

Therefore, the model can only assign categories by random division, which leads to the low numerical range of the experimental results. When such mixed features occur, it is hard to find a suitable function for feature classification. Taking this into consideration, we perform a secondary separation on the mixed features; the designed function must keep the regions represented by the two classes isolated from each other. The formula used in the pre-processing maps the raw data A onto a separable part B of the raw data. We re-sampled the data several times to verify the feasibility of this formula. In addition, we considered other functions of the same form, where A is the raw data and B is part of the raw data, but they always introduced problems that reduced the accuracy of the model.

### Neural-Network-Based EEG Signal Interpreter

We focus on learning the meaning of the user’s intent signals, which are 1-D vectors (collected at one time point). We denote the single input EEG signal as E. We then feed E into the LSTM structure for temporal feature learning. Finally, according to the learned temporal features X, the classification result is produced [15].

The central idea of the DeepBrain workflow and its interaction operations is depicted in Figure 6. The input raw EEG data is a single sample vector. We first utilize two fully connected layers as hidden layers, and then feed their output values into the LSTM units. In addition, the arrow shows the internal structure of the LSTM layer, where $\mathrm{sigmoid}$ and $\tanh$ represent the activation functions, $Xr_{(i-1)j}$ is the input of the cell, $Xr_{i(j-1)}$ is the output of the LSTM cell at the $(j-1)$-th time step (derived in the previous sequence step), and $c_{ij}$ stands for the value of the LSTM memory cell at the $j$-th time step.

For the challenge of time-series data processing, we first use a down-sampling technique to obtain a characteristic subsequence of the original time series. Down-sampling reduces the complexity of the original series and makes its patterns easier to learn. At the same time, to speed up the convergence of the model, we normalize the time-series data using min-max normalization, a linear transformation of the original data that maps the values into the interval [0, 1].
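A minimal sketch of the two pre-processing steps above, window-averaged down-sampling and min-max normalization. The averaging-based down-sampling scheme is our assumption, as the paper does not specify how the characteristic subsequence is obtained:

```python
import numpy as np

def downsample(series, factor):
    """Down-sample by averaging non-overlapping windows of `factor`
    points, reducing the complexity of the raw time series. The
    averaging scheme is an assumption; the paper only names the step."""
    series = np.asarray(series, dtype=float)
    n = (len(series) // factor) * factor  # drop the ragged tail
    return series[:n].reshape(-1, factor).mean(axis=1)

def min_max_normalize(series):
    """Linearly map values into [0, 1] (min-max normalization)."""
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo) if hi > lo else np.zeros_like(series)
```

For example, `downsample([1, 2, 3, 4, 5, 6], 2)` yields `[1.5, 3.5, 5.5]`, and `min_max_normalize([0, 5, 10])` yields `[0.0, 0.5, 1.0]`.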

In the temporal feature processing part, the powerful time-feature extraction capability of the LSTM structure is well established. LSTM can explore feature dependencies over time through the inner state of the network, which permits it to capture temporal trends. LSTM cells control the input, storage and output of data through a set of gate mechanisms. As shown in Figure 6, the LSTM gate units receive the output of the LSTM internal units at the previous time step and the input sample at the current time step. If the layer below an LSTM cell layer is not the input layer, its gate units accept the output of the previous layer’s LSTM internal units at the current time step and the output of its own LSTM internal units at the previous time step. We utilize an LSTM model that contains three components: one input layer, two hidden layers, and one output layer. The LSTM cells (shown as the rectangles in Figure 6) are in the hidden layers. Assume that a batch of input EEG data contains $n_b$ (generally called the batch size) EEG samples, so that the total input data have a 3-D shape of $[n_b, n_t, d]$, where $n_t$ is the number of time steps and $d$ is the input dimension. Let the data in the $i$-th layer be denoted by $Xr_i = \{Xr_{ij}\}$, $j \in [1, d_i]$, where $Xr_{ij}$ denotes the $j$-th EEG sample and $d_i$ denotes the number of dimensions in the $i$-th layer.

Assume that the weights between layer $i$ and layer $i+1$ are denoted by $W_{i,i+1}$; e.g., $W_{2,3}$ means the weights between layer 2 and layer 3. $b_i$ denotes the biases of the $i$-th layer. The calculation between the $i$-th layer data and the $(i+1)$-th layer data can be written as

$$Xr_{i+1} = Xr_i \cdot W_{i,i+1} + b_i$$

The calculation of LSTM layers are shown as follows:

$$f_i = \mathrm{sigmoid}(H(Xr_{(i-1)j}, Xr_{i(j-1)}))$$
$$f_f = \mathrm{sigmoid}(H(Xr_{(i-1)j}, Xr_{i(j-1)}))$$
$$f_o = \mathrm{sigmoid}(H(Xr_{(i-1)j}, Xr_{i(j-1)}))$$
$$f_m = \tanh(H(Xr_{(i-1)j}, Xr_{i(j-1)}))$$
$$c_{ij} = f_f \odot c_{i(j-1)} + f_i \odot f_m$$
$$Xr_{ij} = f_o \odot \tanh(c_{ij})$$

where $f_i$, $f_f$, $f_o$ and $f_m$ represent the input gate, forget gate, output gate and input modulation gate respectively, and $\odot$ denotes element-wise multiplication. $c_{ij}$ denotes the state (memory) of the $j$-th LSTM cell in the $i$-th layer, which is the key part for capturing the time-series relevance among EEG data samples. $H$ denotes the following operation:

$$Xr_{(i-1)j} \cdot W + Xr_{i(j-1)} \cdot W' + b$$

where $W$, $W'$ and $b$ are the corresponding weights and biases. Subsequently, we use the Back-Propagation Through Time (BPTT) algorithm to train the designed model. Finally, we obtain the model’s prediction results and use the softmax cross-entropy as the loss function. The loss function is optimized by the Adam optimizer [7] with a fixed learning rate and a minibatch size of 64.
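To make the gate equations concrete, here is a minimal NumPy forward step for one LSTM cell following the formulas above. The weight shapes and the random initial values are illustrative placeholders; in the paper the weights are trained with BPTT and Adam:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_prev_layer, h_prev_step, c_prev, W, Wp, b):
    """One forward step of the LSTM equations in the text. Each gate
    applies H(a, h) = a*W + h*W' + b, then the memory update
    c_ij = f_f ⊙ c_{i(j-1)} + f_i ⊙ f_m and the output
    Xr_ij = f_o ⊙ tanh(c_ij). W, Wp, b hold one entry per gate:
    'i' (input), 'f' (forget), 'o' (output), 'm' (modulation)."""
    def H(gate):
        return x_prev_layer @ W[gate] + h_prev_step @ Wp[gate] + b[gate]
    f_i = sigmoid(H('i'))          # input gate
    f_f = sigmoid(H('f'))          # forget gate
    f_o = sigmoid(H('o'))          # output gate
    f_m = np.tanh(H('m'))          # input modulation gate
    c = f_f * c_prev + f_i * f_m   # memory cell update
    h = f_o * np.tanh(c)           # cell output Xr_ij
    return h, c

# Tiny usage sketch with random placeholder weights (input dim 3, hidden dim 4).
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((3, 4)) * 0.1 for g in 'ifom'}
Wp = {g: rng.standard_normal((4, 4)) * 0.1 for g in 'ifom'}
b = {g: np.zeros(4) for g in 'ifom'}
h, c = lstm_step(rng.standard_normal(3), np.zeros(4), np.zeros(4), W, Wp, b)
```

Since the output is gated through `tanh`, each component of `h` always stays within (-1, 1).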

### Attention-based Enhanced Stacked LSTM

In addition to the proposed Stacked LSTM structure that handles the time dependency of EEG signals, we are aware that different people might have different EEG signal patterns, and this aspect needs to be carefully handled with an extended design of the Stacked LSTM. To provide a personalized solution in DeepBrain, we enhance the representation of the Stacked LSTM with an attention mechanism. The attention-based enhanced Stacked LSTM enables DeepBrain to learn the specific features of different people and tunes our system to achieve accurate recognition of individual EEG signals.

Figure 7 depicts the overall architecture of the attention-based enhanced Stacked LSTM. We use the history timestamps to predict the current timestamp. The embedding layer is the first layer of the network. We then use a Stacked LSTM to capture the long-term dependency. After that, in order to customize the EEG recognition, we enhance the representation of the Stacked LSTM with an attention selector. The attention selector takes the final LSTM cell’s input values and produces the attention weights via the operation

$$W'_{att} = P'(c_{i(j-1)}, Xr_{i(j-1)}, Xr_{(i-1)j})$$

where $c_{i(j-1)}$ denotes the hidden state of the $(j-1)$-th LSTM cell. The operation $P'$ is similar to the calculation process of the LSTM structure. The normalized attention weights are then computed as

$$W_{att} = \mathrm{softmax}(W'_{att})$$

Then, a concatenation layer connects the output of the Stacked LSTM and the attention selector. A dropout layer serves as a regularization technique to improve the generalization of the EEG signal recognition model. The final softmax layer classifies the four EEG signal patterns.
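The attention-selection and concatenation steps can be sketched as follows. The raw attention scores here stand in for the paper’s $P'(\cdot)$ operation, whose exact form is only described as similar to the LSTM computation, so this is a simplified illustration rather than the authors’ implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_select(h_states, raw_scores):
    """Normalize raw attention scores (W_att = softmax(W'_att)), form a
    weighted summary of the stacked-LSTM hidden states, and concatenate
    it with the final LSTM output, mirroring the concatenation layer.
    `raw_scores` is a placeholder for the paper's P'(...) operation."""
    w = softmax(raw_scores)        # attention weights over time steps
    summary = w @ h_states         # weighted combination of hidden states
    return np.concatenate([h_states[-1], summary])

# Usage sketch: 3 time steps, hidden dimension 2, uniform raw scores.
h_states = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
feat = attention_select(h_states, np.zeros(3))
```

With uniform scores the summary is just the mean hidden state, so `feat` is `[0.5, 0.6, 0.3, 0.4]`; a downstream dropout layer and softmax classifier would then consume this concatenated feature.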

## Experiments and Results

In this section, we use the collected local dataset to evaluate the designed deep learning model.

### Evaluation Metrics

We use six metrics to comprehensively evaluate the performance of our model: Accuracy, Precision, Recall, F1 score, the ROC (Receiver Operating Characteristic) curve, and the AUC (Area Under the Curve); the ROC curve is built from the TPR (True Positive Rate) and the FPR (False Positive Rate). These metrics have been widely used to assess machine learning algorithms. They are defined as follows:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN} \qquad F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad \mathrm{TPR} = \frac{TP}{TP + FN}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad \mathrm{FPR} = \frac{FP}{FP + TN}$$

In general, accuracy judges a model whose goal is classification; since our goal is to identify the EEG data category, precision and recall are also important metrics for evaluating our model. The precision rate mainly judges whether the classifier gets its positive predictions right, i.e., it focuses on correctly identifying abnormal samples, while the recall rate mainly evaluates whether the classifier can find all abnormal samples. The $F_\beta$ score is a combination of the two: when $\beta$ is greater than 1, the recall rate is weighted as more important; on the contrary, when $\beta$ is less than 1, the precision rate has a greater impact on the quality assessment. The $F_1$ score serves as a general overview of the algorithm’s performance. The ROC is a curve composed of the false positive rate (horizontal axis) and the true positive rate (vertical axis). We can obtain different (FPR, TPR) pairs by adjusting the classifier’s classification threshold; these pairs are the ROC data points, and they intuitively reflect the strength of the classifier. The AUC is the area under the ROC curve and reflects the performance of the classification model: the closer the AUC value is to 1, the better the classification. The four outcomes of classification are True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN), as used above. We show the ROC curves on the EEG data and analyze them for the four models.
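The metric definitions above translate directly into code; this helper (the function name is ours) computes them from the four confusion counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the evaluation metrics from the confusion counts,
    following the formulas in the text. Note that Recall and TPR
    share the same definition TP / (TP + FN)."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # identical to TPR
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "tpr": recall, "fpr": fpr}
```

For instance, with TP=8, FP=2, TN=9, FN=1 the accuracy is 17/20 = 0.85, precision is 0.8, recall is 8/9, and F1 is 16/19 ≈ 0.8421.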

### Experimental Settings

We first collect data through Arduino serial communication, for which the corresponding driver module must be installed on the Jetson TX2. The subject is asked to wear the EEG device and control the robot by mind. At the beginning, we compile and configure the various toolkits needed for the experiment on the Jetson TX2. Specifically, we install the deep learning framework TensorFlow by compiling it from source; running TensorFlow lets us load our designed model smoothly. We carefully annotated the EEG data with the corresponding actions undertaken by the subject, as available from the context. In our experiments, we use 800 labeled EEG samples per subject, collected from 4 subjects. Each sample is a vector of 180 elements and corresponds to one channel of the EEG data. To evaluate the performance, we use several evaluation metrics such as accuracy, CPU and GPU utilization, RAM footprint, and so on.

### EEG Signals Analysis

Furthermore, we concisely analyze the similarities between EEG signals corresponding to different intents and quantify them using the Spearman correlation, as shown in Table 1. To help the machine understand human intentions better, we define two similarities for our experiment: inter-class similarity and extra-class similarity. The inter-class similarity is the similarity of EEG signals with the same meaning: we randomly choose several EEG data samples of the same intent and calculate their pairwise Spearman correlation coefficients; the inter-class similarity is the average of these coefficients over all samples. Likewise, the extra-class similarity indicates the correlation coefficient between different EEG categories. We estimate the correlation coefficient matrix for each subject and then calculate the average matrix. Table 1 shows the correlation coefficient matrix and the relevant extra-class and inter-class similarity statistics. These observations show that feature representation and classification can be performed effectively.
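A sketch of the inter-class similarity computation under the definitions above. This simplified Spearman implementation breaks rank ties by order rather than averaging them, which differs from the full Spearman coefficient on tied data:

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation: Pearson correlation of the rank vectors.
    Simplification: ties are ranked by order, not averaged."""
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of a
    rb = np.argsort(np.argsort(b)).astype(float)  # ranks of b
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def inter_class_similarity(samples):
    """Average pairwise Spearman coefficient over EEG samples that
    share the same intent, as defined in the text."""
    n = len(samples)
    coeffs = [spearman(samples[i], samples[j])
              for i in range(n) for j in range(i + 1, n)]
    return sum(coeffs) / len(coeffs)
```

Identically ordered samples give a coefficient of 1.0 and reversed samples give -1.0, so a high inter-class similarity indicates consistent signal shapes within one intent.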

### Overall Comparison with Other Methods

In this section, we present the performance study and illustrate the efficiency of our approach by comparing it with other methods and other deep learning algorithms. Recall that the designed approach is a hybrid model which uses the LSTM for feature learning and a softmax classifier for intent recognition. In our experiments, the EEG data are randomly divided into two parts: the training dataset and the testing dataset. It should be noted that Brainlink collects EEG data in both noisy and noise-free conditions. Our designed model achieves multi-class accuracies of 0.975 and 0.970 on the noise-free and noisy local datasets, respectively. For a clearer look at the results, we report the detailed classification figures in Table 3. We can observe that the Stacked LSTM with the attention-enhanced layer is generally better than the plain hidden layer on every metric. Table 2 shows the metrics on the noisy local dataset; the evaluation metrics on the noise-free local dataset are better than those with noise. The ROC curves for both the noisy and the noise-free data as model input are shown in Figure 8. The area under the DeepBrain curve is larger than that of the other three methods, which can also be seen from the AUC values; hence our method is better than the other three. By the definition of the ROC curve, the curve is traced by continuously decreasing the classification threshold and recording the resulting TPR and FPR values. Analyzing the ROC curves of DeepBrain in Figure 8, we find that the TPR rapidly reaches 0.9 as the threshold moves down. We also analyze the data under moderate noise in Figure 8 (b).
For our target group, we believe the reasonable noise level should be below 48 decibels, slightly lower than the level of normal conversation, e.g., a centrally air-conditioned room. This means that our approach is considerably more robust: although the other three methods perform well, the growth rate of their TPR values is slightly worse than ours. Accuracy comparisons between our method and the other three methods are also shown in Figure 9.

We compare DeepBrain with SVM, MLP, LSTM and Stacked LSTM. The key parameters are as follows: the multi-layer perceptron (MLP) has 30 hidden-layer nodes, and the LSTM has 32 unit cells. The results illustrate that our designed model achieves higher accuracy than the other methods, including other deep learning models such as MLP and LSTM. Furthermore, in contrast with existing EEG classification research that concentrates on binary classification, our designed model runs in a multi-class scenario and still achieves high accuracy. To illustrate our model’s advantage in learning robust features from raw EEG data, we also contrast DeepBrain with the single deep learning methods MLP and RNN. The experimental results are shown in Figure 9 (a), where our method outperforms MLP, LSTM, SVM and Stacked LSTM in classification accuracy by 35%, 21.5%, 18.5% and 9.5%, respectively. Figure 9 (b) shows how accuracy changes along the training iterations under the three categories of feature learning methods: the designed model converges to its high accuracy in fewer iterations than the independent MLP and RNN.

## Conclusions

In this paper, we propose DeepBrain as an application for people with disabilities. We demonstrate a viable technique in which an LSTM neural network models normal time-series behaviour and then uses its predictions to give real-time feedback to our domestic robot. DeepBrain produces good results on a real-world dataset that mixes long-term and weak time dependencies and is difficult to predict. Compared with MLP, SVM, LSTM, and Stacked LSTM, our model achieves better results, indicating the robustness of our method.

Future work may explore deeper network structures and more accurate EEG collection devices than the equipment used in this paper; with richer, more separable data and a higher-precision network model, the system could support more classification categories while maintaining a high level of accuracy. In general, the DeepBrain system and its associated methods present a viable way to apply state-of-the-art AI techniques to the field of HCI applications.

### References

1. F. Akram, S. M. Han and T.-S. Kim (2015) An efficient word typing P300 BCI system using a modified T9 interface and random forest classifier. Computers in Biology and Medicine, Vol. 56, pp. 30–36.
2. R. A. Calvo and S. D'Mello (2010) Affect detection: an interdisciplinary review of models, methods, and their applications. IEEE Trans. Affect. Comput. 1 (1).
3. R. A. Calvo and S. D'Mello (2014) Feature extraction and selection for emotion recognition from EEG. IEEE Trans. Affect. Comput. 5 (3).
4. A. Graves (2013) Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
5. S. Gudmundsson, T. P. Runarsson, S. Sigurdsson, G. Eiriksdottir and K. Johnsen (2007) Reliability of quantitative EEG features. Clin. Neurophysiol., Vol. 118, pp. 2162–2171.
6. Y.-J. Huang, C.-Y. Wu, A. M.-K. Wong and B.-S. Lin (2015) Novel active comb-shaped dry electrode for EEG measurement in hairy site. IEEE Trans. Biomed. Eng. 62 (1).
7. D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
8. Y. LeCun, Y. Bengio and G. Hinton (2015) Deep learning. Nature, Vol. 521, pp. 436–444.
9. I. B. Mauss and M. D. Robinson (2009) Measures of emotion: a review. Cogn. Emotion, Vol. 23, pp. 209–237.
10. M. J. Muller (2003) Participatory design: the third space in HCI. In Human-Computer Interaction: Development Process, pp. 165–185.
11. T. Nguyen, S. Nahavandi, A. Khosravi, D. Creighton and I. Hettiarachchi (2015) EEG signal analysis for BCI application using fuzzy system. In 2015 International Joint Conference on Neural Networks (IJCNN).
12. O. R. Pinheiro, J. R. de Souza, L. R. Alves and M. Romero (2016) Wheelchair simulator game for training people with severe disabilities. IEEE.
13. J. Schmidhuber, A. Graves and S. Fernandez (2007) Multi-dimensional recurrent neural networks. In International Conference on Artificial Neural Networks, pp. 549–558.
14. S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation, Vol. 9, pp. 1735–1780.
15. S. Stober, A. Sternin, A. M. Owen and J. A. Grahn (2015) Deep feature learning for EEG recordings. arXiv preprint arXiv:1511.04306.
16. M. A. Williams, A. Roseway, C. O'Dowd, M. Czerwinski and M. R. Morris (2015) SWARM: an actuated wearable for mediating affect. In Proc. 9th ACM Int. Conf. Tangible, Embedded and Embodied Interaction.
17. X. Zhang, L. Yao, S. S. Kanhere, Y. Liu, T. Gu and K. Chen (2018) MindID: person identification from brain waves through attention-based recurrent neural network. In ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2018).
18. X. Zhang, L. Yao, Q. Z. Sheng, S. S. Kanhere, T. Gu and D. Zhang (2017) Converting your thoughts to texts: enabling brain typing via deep feature learning of EEG signals. arXiv preprint arXiv:1709.08820.
19. W. Zheng, W. Liu, Y. Lu, B. Lu and A. Cichocki (2018) EmotionMeter: a multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics.