A Novel Independent RNN Approach to Classification of Seizures against Non-seizures

Xinghua Yao, Ph.D, Qiang Cheng, Ph.D, Guo-Qiang Zhang, Ph.D

Institute of Biomedical Informatics, University of Kentucky, Lexington, Kentucky, USA; The University of Texas Health Science Center at Houston, Houston, Texas, USA

Abstract

In current clinical practice, electroencephalograms (EEG) are reviewed and analyzed by trained neurologists to support therapeutic decisions. Manual review is laborious and error-prone, so automatic and accurate seizure/non-seizure classification methods are desirable. A critical challenge is that seizure morphologies exhibit considerable variability. To capture essential seizure features, this paper leverages an emerging deep learning model, the independently recurrent neural network (IndRNN), to construct a new approach to seizure/non-seizure classification. The approach gradually expands the time scale, extracting temporal and spatial features from local time durations up to the entire record. Evaluations are conducted with cross-validation experiments across subjects over the noisy CHB-MIT data. Experimental results demonstrate that the proposed approach outperforms current state-of-the-art methods. In addition, we explore how the segment length affects the classification performance. Thirteen segment lengths are assessed, showing that the classification performance varies with the segment length and that the maximal fluctuation margin is more than 4%. The segment length is thus an important factor influencing the classification performance.

1 Introduction

More than 50 million people in the world suffer from epilepsy[1]. Epilepsy is a central nervous system disorder in which brain activity becomes abnormal, leading to unusual sensations and sometimes loss of awareness. It can be life-threatening. Patients with epilepsy bear a high burden of disease in their daily lives, for example, facing stringent restrictions on acquiring and using a driving license[2]. An important technique for diagnosing epilepsy is electroencephalography (EEG). EEG records the electrical activity of the brain, and may reveal patterns of normal or abnormal brain electrical activity. In current clinical practice, EEG signals are collected from the brain using either non-intrusive or implanted devices. The collected EEG signals are then reviewed and analyzed by trained neurologists to identify characteristic patterns of the disease, such as seizures and pre-ictal spikes. Disease information, such as seizure frequency and seizure type, supports therapeutic decisions. However, this manual handling is tedious and error-prone, and it takes several hours for a trained professional to analyze one day of recordings from a single patient[3, 4, 5, 6, 7, 8]. These limitations have motivated researchers to develop automated approaches to recognize seizures. In this paper, we focus on developing an automatic approach to classify seizure segments in off-line EEG data in order to assist physicians in making diagnoses.

Inter-patient and intra-patient variation in seizure morphology is a key technical challenge for machine learning approaches to seizure/non-seizure classification of EEG signals acquired from epilepsy patients. Different machine learning methods and computational techniques have been applied to address this challenge. There are extensive studies on constructing patient-specific detectors capable of detecting seizure onsets [6, 7, 9, 10, 11, 12]. In these studies, the seizure detection problem is often converted into a seizure/non-seizure classification problem with a real-time flavor. With traditional machine learning methods, hand-crafted features are usually needed to capture the characteristics of seizure manifestations in EEG. More recent studies focus on designing deep learning approaches for seizure detection [4, 12, 13, 14, 15]. These studies share several common components: signal processing techniques are used to filter data; modules need to be pre-trained; multiple channels are utilized to extract spatial features; temporal features are extracted with sliding windows; and so on. Most deep learning-based approaches for seizure detection build on classical neural network models, such as the convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU). While widely used, these standard neural network models have limitations for handling EEG data. CNNs are well suited to data of two or more dimensions, while EEG signals are one-dimensional time series and thus not directly suited to CNNs; RNNs usually suffer from the gradient vanishing or exploding problem; and although LSTM and GRU improve upon the RNN, training a deep LSTM- or GRU-based network is in general difficult because of gradient decay over layers[16]. An emerging variant of the RNN, the independently recurrent neural network (IndRNN), addresses these limitations. By taking the Hadamard product over the recurrent inputs [16], it overcomes the gradient vanishing and exploding problems and supports computation over many layers efficiently. Additionally, it can process longer sequences than LSTM. To exploit these advantages, this paper leverages IndRNN to design a new approach for seizure/non-seizure classification.
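The independence property that distinguishes IndRNN from a vanilla RNN can be illustrated with a minimal NumPy sketch (illustrative only, not the paper's implementation): a vanilla RNN couples all hidden units through a full recurrent matrix, whereas in an IndRNN each unit keeps a single scalar recurrent weight, so the recurrence reduces to a Hadamard (element-wise) product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                     # number of hidden units
x_t = rng.normal(size=n)  # input at time t, already projected to hidden size
h_prev = rng.normal(size=n)

# Vanilla RNN: hidden units are coupled through a full recurrent matrix U.
U = rng.normal(size=(n, n))
h_rnn = np.tanh(x_t + U @ h_prev)

# IndRNN: each hidden unit i has its own scalar recurrent weight u[i],
# so the recurrence is the Hadamard product u * h_prev (ReLU activation).
u = rng.normal(size=n)
h_ind = np.maximum(0.0, x_t + u * h_prev)

# In the IndRNN, unit i depends only on its own past state h_prev[i]:
# perturbing h_prev[1] leaves every other unit's output unchanged.
h_prev2 = h_prev.copy()
h_prev2[1] += 10.0
h_ind2 = np.maximum(0.0, x_t + u * h_prev2)
print(h_ind[0] == h_ind2[0])  # True: unit 0 unaffected by unit 1's state
```

Because the units do not interact within a layer, each recurrent weight can be constrained individually, which is what prevents the gradients from vanishing or exploding over long sequences.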

EEG signals are highly dynamic and non-linear[17], and signals from different brain areas have different morphologies[7, 17]. Moreover, seizure patterns in EEG data may manifest different temporal and spatial characteristics with long-range correlations. Based on these observations, we propose an approach that extracts temporal features ranging from local time durations to the entire record, and spatial features from EEG signals over different brain areas. At the smallest time scale, IndRNN is utilized to extract temporal features at each time step. Through several repetitions of IndRNN and max-pooling, the time scale increases gradually, and the temporal features cover longer durations. The overall features are then extracted by an average pooling operation and passed into two fully connected layers for further integration and final classification. To reduce scale differences and speed up training, a batch normalization layer is inserted after each IndRNN layer. We perform extensive cross-validation experiments across subjects to evaluate the proposed approach, obtaining promising results that are comparable or superior to those of the current state-of-the-art approaches. In addition, we explore how the segment length affects the performance of seizure/non-seizure classification. Our experimental results with different segment lengths show that seizure-recognition performance fluctuates with the segment length, with a maximal fluctuation margin of more than 4%.

The main contributions of our paper include the following: (1) an emerging deep learning model, IndRNN, is applied to seizure/non-seizure classification for the first time, and multi-scale temporal features are extracted with a deep architecture; (2) the relationship between the segment length and seizure/non-seizure classification performance is investigated; (3) extensive cross-validation experiments on the noisy EEG data of CHB-MIT demonstrate that seizures can be recognized more accurately, and inter-patient seizure variability better overcome, than with current state-of-the-art deep learning approaches.

2 Related Work

Seizure/non-seizure classification distinguishes seizure segments from non-seizure segments, and can be used to recognize whether a data segment contains a seizure. Extensive studies have addressed this task. Because seizure detection, which often has a real-time flavor, is frequently treated as a seizure/non-seizure classification problem, many machine learning methods have been developed [7, 9, 10, 11, 12, 18, 19, 20, 21]. Recently, deep learning techniques have also been applied to the seizure detection problem[4, 13, 14, 15, 22]. These methods are evaluated with patient-specific or cross-patient experiments. Classification across patients is more challenging because it must overcome inter-patient variability.

Shoeb and Guttag propose a method to construct a patient-specific detector for seizure detection by using the support vector machine (SVM)[7]. The method leverages filters to extract spectral features over each channel, and then stacks feature vectors to catch time-evolution information. A sensitivity of 96% is achieved with the mean latency of 4.6s, and the median false positive rate is 2 false detections per 24 hours. The performance results are often used as a benchmark for patient-specific seizure detection on the data set CHB-MIT.

Zandi et al. propose a wavelet-based algorithm for real-time detection of epileptic seizures using scalp EEG[6]. In this algorithm, the EEG from each channel is decomposed by wavelet packet transform, and a patient-specific measure is deployed by using wavelet coefficients to separate the seizure and non-seizure states. Utilizing the measure, a combined seizure index is derived for each epoch of every EEG channel. Through inspecting the combined seizure index, proper channel alarms are generated.

Fergus et al. present a method for seizure/non-seizure classification based on traditional machine learning techniques, and obtain 88% sensitivity and 88% specificity over CHB-MIT[18]. The method consists of data filtering, feature extraction, feature selection, and classifier training. It is evaluated over data segments produced by the authors' proposed segmentation method: each seizure segment is truncated from the beginning of one seizure with a length of 60 seconds, and non-seizure segments are truncated from non-seizure records. On average, each seizure segment contains 40s of ictal data.

Vidyaratne et al. propose a deep recurrent architecture by combining cellular neural network and bidirectional RNN[12]. The bidirectional RNN is deployed into each cell of the cellular neural network to extract temporal features in the forward and the backward directions. Each cell interacts with its neighboring cells to extract local spatial-temporal features. The sensitivities are 100% in patient-specific experiments over five patients from CHB-MIT.

Thodoroff et al. design a recurrent convolutional neural network to capture spectral, spatial, and temporal patterns of seizures[4]. EEG signals are first transformed into images, which are fed into a CNN. The output vectors of the CNN are organized into sequences in chronological order, and the sequences are passed into a bidirectional RNN for classification. Both patient-specific and cross-patient experiments are conducted. In the cross-patient testing, the sensitivity is 85% on average and the false positive rate is 0.8 per hour. A transfer learning technique is utilized to overcome the small amount of data in the patient-specific experiments.

Golmohammadi et al. explore two kinds of neural networks over the TUH EEG Corpus[14]. Their experimental results show that a convolutional LSTM network outperforms a convolutional GRU network. Different initialization and regularization methods are considered. They note that LSTM and GRU are limited when stacked in multiple layers.

Hussein et al. design a deep neural network for seizure/non-seizure classification that extracts temporal features using LSTM[22]. Acharya et al. present a 13-layer deep neural network for seizure/non-seizure classification using CNN[15]. The two approaches are evaluated over the same EEG data set provided by the University of Bonn[17]. The LSTM approach achieves a performance of 100%, while the CNN approach obtains a sensitivity of 95% and a specificity of 90%. Each record in the Bonn EEG data set contains only one channel and has no artifacts.

Seizure-detection methods based on traditional machine learning techniques can work well with small numbers of samples, but they often need hand-crafted features and manual feature selection. Among the currently developed seizure/non-seizure classification methods based on deep learning, most rely on classical neural network models, such as CNN, RNN, and LSTM. These models have limitations for processing EEG signal data, such as suffering from the gradient vanishing or exploding problem.

3 Method

3.1 Model Design

EEG signals are dynamic[17], and seizure morphologies vary with brain areas[7, 17]. Capturing temporal-spatial features in EEG signals is therefore critical to seizure/non-seizure classification. To extract spatial features, EEG signals over multiple channels are taken as inputs. To capture overall temporal features, we leverage IndRNN at different time scales. Predominant features at local time scales are computed with a max-pooling operation. The sequence produced by the combination of IndRNN and max-pooling forms a new level of the time-scale hierarchy, and stacking such combinations provides temporal features at multiple scales. Local features at the last level are averaged to form the overall features. Finally, the overall features are passed into fully connected layers to integrate them and make the final classification. Batch normalization is utilized after each IndRNN layer to facilitate training.

3.2 Model Architecture

Our model architecture consists of IndRNN blocks, one average pooling layer, and two fully connected layers. Each IndRNN block comprises an IndRNN layer, a batch normalization layer, and a max-pooling layer. The architecture is depicted in Figure 1.

Figure 1: Architecture of IndRNN approach
  • IndRNN layer: IndRNN layer processes input sequences in forward order, and extracts time-dependent features. It executes computation as follows:

    h_t = σ(W x_t + u ⊙ h_{t-1} + b)    (1)
    y_t = φ(V h_t + c)    (2)

    Here, x_t, h_t, and y_t are the input vector, hidden-state vector, and output vector at time t, respectively. W is the input weight matrix, u is the recurrent weight vector, and V is the output weight matrix. b and c are bias weights. ⊙ represents the Hadamard product. σ and φ are activation functions such as ReLU.

  • BN layer: It is inserted after each IndRNN layer. It is used to speed up training and to reduce overfitting[23].

  • Max-pooling layer: In each IndRNN block, a max-pooling layer is applied to the batch normalized results. It extracts predominant features from the normalized sequences at a specific temporal scale.

  • Average pooling layer: Following IndRNN blocks, an average pooling layer is inserted to extract overall features across time scales for the final classification.

  • FC layers: Two fully connected layers are designed. The first aims to integrate features from the outputs of the average pooling layer over channels and make further extractions. The second is to perform final classification of seizure/non-seizure.
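One IndRNN block, i.e. the IndRNN layer of Equation (1) followed by batch normalization and max-pooling, can be sketched in NumPy as follows. This is a minimal illustrative forward pass, not the authors' code; the sizes, the simplified inference-style batch normalization, and the omission of the output projection of Equation (2) are all assumptions for the sketch.

```python
import numpy as np

def indrnn_layer(X, W, u, b):
    """IndRNN layer (Eq. 1): h_t = ReLU(W x_t + u ⊙ h_{t-1} + b).
    X: (T, d_in) input sequence; returns H: (T, d_hid)."""
    T, d = X.shape[0], u.shape[0]
    H = np.zeros((T, d))
    h = np.zeros(d)
    for t in range(T):
        h = np.maximum(0.0, X[t] @ W.T + u * h + b)
        H[t] = h
    return H

def batch_norm(H, eps=1e-5):
    """Simplified batch normalization over the time axis (inference-style,
    without learned scale/shift)."""
    return (H - H.mean(axis=0)) / np.sqrt(H.var(axis=0) + eps)

def max_pool(H, window=2, stride=2):
    """Max-pooling along time with window 2 and stride 2, as in the paper:
    halves the sequence length, keeping the predominant feature per window."""
    T = (H.shape[0] // stride) * stride
    return H[:T].reshape(-1, stride, H.shape[1]).max(axis=1)

# One IndRNN block on a toy sequence: 16 time steps, 17 channels -> 8 hidden units.
rng = np.random.default_rng(1)
X = rng.normal(size=(16, 17))
W = rng.normal(size=(8, 17)) * 0.1
u = rng.uniform(-1, 1, size=8)
b = np.zeros(8)

out = max_pool(batch_norm(indrnn_layer(X, W, u, b)))
print(out.shape)  # (8, 8): sequence length halved, so the time scale doubles
```

Stacking such blocks is exactly what expands the time scale in the architecture: each max-pooling halves the sequence length, so features in deeper blocks summarize progressively longer durations before the average pooling layer aggregates them.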

4 Evaluation

4.1 Data Set

The data set of CHB-MIT[8] contains 686 EEG recordings from 23 subjects of different ages ranging from 1.5 years to 22 years. The recordings include 198 seizures. The sampling frequency is 256 Hz. Most recordings are one hour long, and some are two hours long or four hours long. The EEG recordings are grouped into 24 cases. In each case, the data recordings are from a single subject. Case chb21 was obtained 1.5 years after Case chb01 from the same subject. Each data file contains data over 23 or more channels. In several data files, there are missing values; thus, we only consider those channels without missing values. Three data files, including chb12_27.edf, chb12_28.edf and chb12_29.edf, have different channel montages from other data files. In our experiments, we remove these three data files.

4.2 Data Segmentation

In order to extract effective seizure features, 17 common channels are chosen. Using a data segment length of 23 seconds, each data record in each case is split into data segments from the beginning to the end without overlap. If the duration of a data record is not evenly divisible by the segment length and a seizure occurs in the remaining part, we ensure that the last segment has the same length but overlaps its prior segment. If the remaining part contains no seizure, it is dropped. Using the annotation files of the data set, we determine whether a data segment contains a seizure. In our experiments, if a segment contains any seizure data, it is considered a seizure segment; otherwise, it is a non-seizure segment.
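The segmentation rule above can be sketched as follows. This is a hypothetical helper, not the authors' code; it assumes the record is longer than one segment and that seizure intervals are given in seconds, as in the CHB-MIT annotation files.

```python
import numpy as np

def segment_record(record, seizure_intervals, seg_len, fs=256):
    """Split one EEG record (channels x samples) into non-overlapping segments
    of seg_len seconds. Paper's rule: if the leftover tail contains seizure
    data, keep a final full-length segment that overlaps its predecessor;
    otherwise drop the tail. Returns segments and 0/1 labels."""
    n = seg_len * fs
    total = record.shape[1]
    starts = list(range(0, total - n + 1, n))

    def has_seizure(a, b):
        # does the sample range [a, b) overlap any annotated seizure interval?
        return any(s < b / fs and e > a / fs for s, e in seizure_intervals)

    tail_start = starts[-1] + n
    if tail_start < total and has_seizure(tail_start, total):
        starts.append(total - n)  # same length, overlapping the prior segment

    segments = [record[:, s:s + n] for s in starts]
    labels = [1 if has_seizure(s, s + n) else 0 for s in starts]
    return segments, labels

# Toy example: 17 channels, 60 s at 256 Hz, one seizure at 50-55 s.
rec = np.zeros((17, 60 * 256))
segs, labs = segment_record(rec, [(50.0, 55.0)], seg_len=23)
print(len(segs), labs)  # 3 [0, 0, 1]: the overlapping last segment is a seizure segment
```

In this toy record, 60 s is not evenly divisible by 23 s and the tail (46-60 s) contains seizure data, so a third, overlapping 23 s segment is kept and labeled as a seizure segment.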

Using the above segmentation method, 665 seizure segments are obtained. In these seizure segments, the lengths of seizures vary from 1s to 23s, with the average length being 16.9s. Among all seizure segments, segments containing seizure signals of less than 7s comprise 14.7%, those containing more than 17s comprise 59.8%, and those containing more than 10s comprise 76.1%. All the seizure data segments are taken as a part of our experiment data. We randomly choose 665 non-seizure segments in each experiment. The 1330 seizure/non-seizure segments are randomly split into training set, validation set and testing set with a ratio of 70:15:15 in each experiment, and we adopt the repeated random sub-sampling validation as a strategy for cross validation.
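One round of this repeated random sub-sampling validation can be sketched as below. The helper is hypothetical; only the 70:15:15 ratio and the 1330-segment total come from the text, and a fresh seed would be drawn for each of the repeated experiments.

```python
import numpy as np

def random_split(n_samples, ratios=(0.70, 0.15, 0.15), seed=None):
    """One round of repeated random sub-sampling: shuffle all sample indices
    and cut them into train/validation/test with the given ratios."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(ratios[0] * n_samples))
    n_val = int(round(ratios[1] * n_samples))
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 665 seizure + 665 randomly chosen non-seizure segments = 1330 samples.
train_idx, val_idx, test_idx = random_split(1330, seed=0)
print(len(train_idx), len(val_idx), len(test_idx))  # 931 200 199
```

Unlike k-fold cross-validation, each repetition draws an independent random split, so the ten experiments' test sets may overlap.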

4.3 Cross-Validation Results for the Proposed Approach

Based on the architecture in Figure 1, we build a model by stacking 15 IndRNN layers to classify seizure/non-seizure segments from CHB-MIT. The main parameters are set as follows: the number of hidden states is 128 in the first five IndRNN layers, 200 in the second five, and 250 in the third five; each max-pooling layer has a window size of 2 and a stride of 2; the number of hidden states in the first fully connected layer is 100; the optimizer is Adam; and the learning rate is 0.0004. We train the model with a batch size of 30 for 100 epochs in each experiment. Overall, ten cross-validation experiments are conducted, and their results are given in Table 1. The average (Ave.) sensitivity is 87.30%, the average specificity is 86.70%, the average precision is 87.08%, and the average F1 score is 87.07%.

Item Sensitivity Specificity F1 Score Precision Accuracy
1 0.9100 0.8300 0.8750 0.8426 0.8700
2 0.8900 0.9000 0.8945 0.8990 0.8950
3 0.9300 0.8600 0.8986 0.8692 0.8950
4 0.7900 0.8500 0.8144 0.8404 0.8200
5 0.8400 0.8900 0.8615 0.8842 0.8650
6 0.8500 0.8600 0.8543 0.8586 0.8550
7 0.8700 0.8500 0.8614 0.8529 0.8600
8 0.8700 0.9500 0.9062 0.9457 0.9100
9 0.9000 0.7300 0.8295 0.7692 0.8150
10 0.8800 0.9500 0.9119 0.9462 0.9150
Ave. 0.8730 0.8670 0.8707 0.8708 0.8700
Std. 0.0377 0.0602 0.0310 0.0498 0.0328
Table 1: Cross-validation results using the proposed approach

4.4 Comparison with the LSTM and CNN Approaches

LSTM has been used as the main module to detect seizures[22]. The LSTM approach is evaluated through cross-validation experiments over the EEG data set from the University of Bonn[17], showing state-of-the-art performance. A CNN-based approach has also been proposed for seizure/non-seizure classification[15], likewise demonstrating state-of-the-art performance over the Bonn data set. Because the Bonn data set is heavily processed, contains no artifacts, and is small, we compare the proposed approach with the LSTM approach and the CNN approach over the noisy data set CHB-MIT.

We implement the LSTM approach and the CNN approach according to their descriptions in the literature, and test the implementations; our testing results reach the reported performances. Based on the two implementations, cross-validation experiments are conducted for the LSTM approach and the CNN approach separately. The LSTM approach consists of one LSTM layer, one time-distributed computing layer, one average pooling layer, and one fully connected layer. In the experiments using the LSTM approach, the parameters are set as follows: the number of hidden states is 120 in the LSTM layer and 60 in the time-distributed computing layer; the optimizer is RMSprop; the learning rate is 0.0007; the batch size is 30; and the number of epochs is 30. The CNN approach contains five convolutional layers, five max-pooling layers, and three fully connected layers. Its parameters are set as follows: the number of hidden states is 100 in each of the first two convolutional layers, 200 in each of the next two, and 260 in the fifth; the first fully connected layer has 100 hidden states and the second has 50; the parameter alpha of the LeakyReLU activation function is 0.01; the optimizer is Adam; the learning rate is 0.001; the batch size is 30; and the number of epochs is 50. Using the LSTM approach, ten cross-validation results are obtained and shown in Table 2: the average sensitivity is 84.4%, the average specificity is 84.3%, and the average precision is 84.7%. For the CNN approach, ten cross-validation experiments are conducted, with results given in Table 3: the average sensitivity, specificity, and precision are 84.8%, 81.0%, and 82.56%, respectively.

Item Sensitivity Specificity F1 Score Precision Accuracy
1 0.8500 0.8800 0.8629 0.8763 0.8650
2 0.7700 0.8500 0.8021 0.8370 0.8100
3 0.7900 0.8700 0.8229 0.8587 0.8300
4 0.7100 0.9300 0.7978 0.9103 0.8200
5 0.8200 0.8900 0.8497 0.8817 0.8550
6 0.9100 0.7900 0.8585 0.8125 0.8500
7 0.8600 0.8300 0.8473 0.8350 0.8450
8 0.8600 0.8400 0.8515 0.8431 0.8500
9 0.9400 0.7200 0.8468 0.7705 0.8300
10 0.9300 0.8300 0.8857 0.8455 0.8800
Ave. 0.8440 0.8430 0.8425 0.8470 0.8435
Std. 0.0696 0.0550 0.0259 0.0368 0.0201
Table 2: Cross-validation results using the LSTM approach
Item Sensitivity Specificity F1 Score Precision Accuracy
1 0.8400 0.8500 0.8442 0.8485 0.8450
2 0.9200 0.7700 0.8558 0.8000 0.8450
3 0.8000 0.8400 0.8163 0.8333 0.8200
4 0.9000 0.6900 0.8145 0.7438 0.7950
5 0.9200 0.8000 0.8679 0.8214 0.8600
6 0.7900 0.8500 0.8144 0.8404 0.8200
7 0.6300 0.9700 0.7590 0.9545 0.8000
8 0.8500 0.8700 0.8586 0.8673 0.8600
9 0.8700 0.7700 0.8286 0.7909 0.8200
10 0.9600 0.6900 0.8458 0.7559 0.8250
Ave. 0.8480 0.8100 0.8305 0.8256 0.8290
Std. 0.0891 0.0809 0.0301 0.0571 0.0217
Table 3: Cross-validation results using the CNN approach

Comparing the cross-validation results in Tables 1-3, the average performance of the proposed approach, in each of the metrics of sensitivity, specificity, F1 score, precision, and accuracy, is at least 2% higher than that of the LSTM approach or the CNN approach. These comparisons show that the proposed approach outperforms the LSTM approach and the CNN approach in seizure/non-seizure classification.

4.5 Validation of IndRNN Layers

The performance of a deep learning model is typically affected by the number of layers. In this section, we investigate the performance of the proposed approach with different numbers of IndRNN layers. Four settings, namely 6, 9, 12, and 15 IndRNN layers, are tested separately. For each setting, ten cross-validation experiments are conducted with the tuned optimal parameters over 23s data segments from CHB-MIT. The cross-validation results of the four settings are summarized in Table 4.

IndRNN layers Sensitivity Specificity F1 Score Precision Accuracy
6 layers 0.8420±0.0387 0.8890±0.0416 0.8622±0.0282 0.8851±0.0393 0.8655±0.0278
9 layers 0.8440±0.0338 0.8770±0.0560 0.8584±0.0177 0.8768±0.0498 0.8605±0.0204
12 layers 0.8520±0.0424 0.8730±0.0377 0.8608±0.0175 0.8723±0.0314 0.8625±0.0157
15 layers 0.8730±0.0377 0.8670±0.0602 0.8707±0.0310 0.8708±0.0498 0.8700±0.0328
Table 4: Cross-validation results (mean±standard deviation) using different numbers of IndRNN layers

The results in Table 4 indicate that the sensitivities in the four cases increase as the number of the IndRNN layers increases, but the specificities decrease. The four accuracies are similar. The structure with 15 IndRNN layers has the best sensitivity, and the difference between the sensitivity and the specificity is small. In order to detect more seizures, we select the approach with 15 IndRNN layers and use it to compare with the LSTM approach and the CNN approach for the seizure/non-seizure classification.

4.6 Effects of Segment Lengths

In the following, we explore the relationship between the segment length and the performance of seizure/non-seizure classification.

Generally, seizures last less than two minutes. We select 13 temporal lengths below 2 min and use each to segment the EEG signals in CHB-MIT. Besides the length of 23s, the other 12 lengths are 30s, 35s, 40s, 45s, 50s, 55s, 60s, 70s, 80s, 90s, 100s, and 110s. The segmentation is similar to the 23s case. For each length, ten cross-validation experiments are conducted with a group of tuned optimal parameters, using the proposed approach with 12 IndRNN layers. The tuned parameters mainly include the learning rate and the number of epochs; the numbers of hidden states are the same as in the 23s case. The number of seizure segments for each segment length and the obtained cross-validation results are listed in Table 5 and visualized in Figure 2, where Len. stands for segment length and Num. Sei. for the number of seizure segments.

Len. Num. Sei. Sensitivity Specificity F1 Score Precision Accuracy
23s 665 0.8520±0.0424 0.8730±0.0377 0.8608±0.0175 0.8723±0.0314 0.8625±0.0157
30s 543 0.8598±0.0427 0.8768±0.0398 0.8669±0.0186 0.8768±0.0329 0.8683±0.0175
35s 496 0.8627±0.0348 0.8733±0.0354 0.8673±0.0298 0.8725±0.0329 0.8680±0.0298
40s 463 0.8700±0.0370 0.8600±0.0541 0.8659±0.0283 0.8640±0.0463 0.8650±0.0293
45s 429 0.8646±0.0236 0.8585±0.0429 0.8622±0.0220 0.8609±0.0374 0.8615±0.0236
50s 411 0.8629±0.0300 0.8726±0.0233 0.8670±0.0204 0.8717±0.0208 0.8677±0.0198
55s 396 0.8467±0.0452 0.8667±0.0483 0.8550±0.0235 0.8664±0.0376 0.8567±0.0226
60s 358 0.8574±0.0414 0.8407±0.0505 0.8503±0.0221 0.8457±0.0371 0.8491±0.0227
70s 340 0.8608±0.0310 0.8726±0.0450 0.8660±0.0135 0.8735±0.0365 0.8666±0.0153
80s 325 0.8306±0.0429 0.8551±0.0423 0.8407±0.0261 0.8532±0.0363 0.8428±0.0255
90s 289 0.8659±0.0570 0.8886±0.0448 0.8754±0.0392 0.8874±0.0417 0.8773±0.0380
100s 305 0.8239±0.0579 0.8804±0.0569 0.8475±0.0383 0.8766±0.0528 0.8522±0.0368
110s 293 0.8409±0.0249 0.8477±0.0204 0.8437±0.0154 0.8470±0.0172 0.8443±0.0144
Table 5: Cross-validation results over segments with different lengths and number of produced seizure segments
Figure 2: Performance over data segments with different lengths

Figure 2 shows that the relation between the data segment length and seizure/non-seizure classification performance is not linear. The classification performance neither monotonically increases nor decreases with the segment length; it fluctuates. For six lengths, namely 60s, 70s, 80s, 90s, 100s, and 110s, the performance shows wide fluctuation margins, while for the six lengths of 23s, 30s, 35s, 40s, 45s, and 50s, the differences in performance are relatively small. The best performance is obtained with the segment length of 90s. For the three metrics of sensitivity, specificity, and precision, the maximal gaps are all more than 4%; for the F1 score and accuracy, the maximal differences are more than 3%. The influence of the segment length therefore cannot be overlooked for seizure/non-seizure classification. Different segment lengths yield different distributions of seizure durations within the seizure segments, and the fluctuations likely stem from the differences between these distributions.

5 Discussion

To automatically identify seizure segments in off-line EEG data and assist neurologists in reviewing and analyzing them, a deep learning approach is proposed to classify seizure against non-seizure. Compared to the LSTM approach and the CNN approach, our proposed approach yields improvements typically of more than 2%, and of more than 4% in specificity and precision. These improvements show that the IndRNN approach is effective for seizure/non-seizure classification on EEG data. The strength of the proposed approach derives from the IndRNN model, which can handle more layers and longer sequences than LSTM and RNN[16]. The IndRNN approach extracts features in the forward direction. During experimentation, we also constructed a bidirectional IndRNN approach supporting both forward and backward computation. Our experiments showed that the bidirectional IndRNN approach is computationally expensive while its performance gain is marginal, so we adopted the unidirectional IndRNN approach.

When segmenting data, we attempt to ensure that the obtained data segments are close to a real-world scenario. A seizure segment could contain seizure data and non-seizure data. It is unrealistic in the real world that all the seizure segments only contain seizure data. As the lengths of seizure data in seizure segments vary significantly, the segment length of 23s is selected for evaluating the proposed approach against the LSTM and CNN approaches.

In exploring the relationship between segment lengths and seizure/non-seizure classification performance, we use the IndRNN approach with 12 IndRNN layers. This choice is based on the following three considerations: (1) the structure with 12 IndRNN layers has relatively good sensitivity over the 23s segments, as shown in Table 4; (2) the structure with 15 IndRNN layers has more parameters to train, and the number of seizure samples decreases as the segment length increases, which increases the risk of overfitting; (3) the results in Table 5 and Figure 2 indicate that the influence of the segment length on seizure/non-seizure classification is not small.

6 Limitations

The proposed IndRNN approach processes EEG signals from different brain areas and extracts spatial features, but it does not distinguish signals over different channels in a strict way. The neural network in the proposed approach is deep and long; its training needs many more samples than traditional machine learning methods, and its strength is limited in processing imbalanced data.

7 Conclusion

For seizure/non-seizure classification, a novel approach is proposed. The approach leverages an emerging neural network model, IndRNN, and achieves state-of-the-art performance in cross-validation experiments across patients. The obtained sensitivity, specificity, and precision are better than those of the LSTM and CNN approaches. The results demonstrate that our proposed approach is more resilient to inter-patient variability and recognizes seizure segments more accurately. The proposed approach can accurately identify most seizure segments, assisting neurologists in reviewing and analysis. Additionally, we explore how the segment length affects seizure/non-seizure classification performance. Our cross-validation experiments over 13 segment lengths show that the classification performance fluctuates with the segment length, with a maximal fluctuation margin of more than 4%; the segment length is thus an important factor influencing seizure/non-seizure classification performance. As future research, we will investigate how to use the IndRNN approach for real-time seizure detection.

References

  • 1. Megiddo I, Colson A, Chisholm D, Dua T, Nandi A, Laxminarayan R. Health and economic benefits of public financing of epilepsy treatment in India: an agent-based simulation model. Epilepsia. 2016 January 14;57(3):464-474.
  • 2. Elger CE, Hoppe C. Diagnostic challenges in epilepsy: seizure under-reporting and seizure detection. Lancet Neurol. 2018 Mar 01;17(3):279-288.
  • 3. Gotman J. Automatic recognition of epileptic seizures in the EEG. Electroencephalography and Clinical Neurophysiology. 1982 November;54(5):530-540.
  • 4. Thodoroff P, Pineau J, Lim A. Learning robust features using deep learning for automatic seizure detection. In: Finale DV, Jim F, David K, Byron W, Jenna W, editors. Proceedings of the 1st Machine Learning for Healthcare Conference; 2016 August 19-20; Los Angeles, CA, USA. Journal of Machine Learning Research. 2016;56:178-190.
  • 5. Fürbass F, Ossenblok P, Hartmann M, et al. Prospective multi-center study of an automatic online seizure detection system for epilepsy monitoring units. Clinical Neurophysiology. 2015 June;126(6):1124-1131.
  • 6. Zandi AS, Javidan M, Dumont GA, Tafreshi R. Automated real-time epileptic seizure detection in scalp EEG recordings using an algorithm based on wavelet packet transform. IEEE Transactions on Biomedical Engineering. 2010 June 14;57(7):1639-1651.
  • 7. Shoeb A, Guttag J. Application of machine learning to epileptic seizure detection. In: Fürnkranz J, Joachims T, editors. ICML 2010: Proceedings of the 27th International Conference on Machine Learning; 2010 June 21-24; Haifa, Israel. Omnipress; 2010. p. 975-982.
  • 8. Shoeb A. Application of machine learning to epileptic seizure onset detection and treatment[dissertation]. Cambridge: Massachusetts Institute of Technology; September 2009.
  • 9. Amin S, Kamboh AM. A robust approach towards epileptic seizure detection. In: Palmieri FAN, Uncini A, Diamantaras K, Larsen J, editors. MLSP2016: Proceedings of IEEE 26th International Workshop on Machine Learning for Signal Processing; 2016 September 13-16; Salerno, Italy. IEEE; 2016. p. 1-6.
  • 10. Hunyadi B, Signoretto M, Paesschen WV, Suykens JAK, Huffel SV, Vos MD. Incorporating structural information from the multichannel EEG improves patient-specific seizure detection. Clinical Neurophysiology. 2012 December;123(12):2352-2361.
  • 11. Esbroeck AV, Smith L, Syed Z, Singh S, Karam Z. Multitask seizure detection: addressing intra-patient variation in seizure morphologies. Machine Learning. 2016 March;102(3):309-321.
  • 12. Vidyaratne L, Glandon A, Alam M, Iftekharuddin KM. Deep recurrent neural network for seizure detection. In: IJCNN2016: Proceedings of 2016 International Joint Conference on Neural Networks; 2016 July 24-29; Vancouver, BC, Canada. IEEE; 2016. p. 1202-1207.
  • 13. Truong ND, Nguyen AD, Kuhlmann L, et al. Integer convolutional neural network for seizure detection. IEEE Journal on Emerging and Selected Topics in Circuits and Systems. 2018 December;8(4):849-857.
  • 14. Golmohammadi M, Ziyabari S, Shah V, et al. Gated recurrent networks for seizure detection. In: SPMB17: Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium; 2017 December 2; Philadelphia, Pennsylvania, USA. IEEE; 2017. p. 1-5.
  • 15. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Computers in Biology and Medicine. 2018 September 1; 100(1):270-278.
  • 16. Li S, Li W, Cook C, Zhu C, Gao Y. Independent recurrent neural network: building a longer and deeper RNN. In: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018 June 18-23; Salt Lake City, UT, USA. IEEE Computer Society; 2018. p. 5457-5466.
  • 17. Andrzejak RG, Lehnertz K, Mormann F, Rieke C, David P, Elger CE. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state. Physical Review E. 2001 November 20;64,061907.
  • 18. Fergus P, Hussain A, Hignett D, Jumeily DA, Aziz KA, Hamdan H. A machine learning system for automated whole-brain seizure detection. Applied Computing and Informatics. 2016 January;12(1):70-89.
  • 19. Nicalaou N, Georgiou J. Detection of epileptic electroencephalogram based on permutation entropy and support vector machines. Expert Systems with Applications. 2012 January;39(1):202-209.
  • 20. Kharbouch A, Shoeb A, Guttag J, Cash SS. An algorithm for seizure onset detection using intracranial EEG. Epilepsy & Behavior. 2011 December;22(1):S29-S35.
  • 21. Bolagh SNG, Clifford GD. Subject selection on a Riemannian manifold for unsupervised cross-subject seizure detection. arXiv:1712.00465v1. 2017[cited 2019 March 12]:[5 p.]. Available from: https://arxiv.org/abs/1712.00465
  • 22. Hussein R, Palangi H, Ward R, Wang ZJ. Epileptic seizure detection: a deep learning approach. arXiv:1803.09848v1 [Preprint]. 2018[cited 2019 March 12]: [12 p.]. Available from: https://arxiv.org/abs/1803.09848
  • 23. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Bach FR, Blei DM, editors. JMLR Workshop and Conference Proceedings. Proceedings of the 32nd International Conference on Machine Learning; 2015 July 6-11; Lille, France. JMLR.org; 2015. p. 448-456.