DASPS: A Database for Anxious States based on a Psychological Stimulation


Asma Baghdadi, Yassine Aribi, Rahma Fourati, Najla Halouani, Patrick Siarry, and Adel M. Alimi

A. Baghdadi, Y. Aribi, R. Fourati and A. M. Alimi are with the Research Groups in Intelligent Machines, National Engineering School of Sfax (ENIS), University of Sfax, BP 1173, 3038 Sfax, Tunisia. E-mail: {asma.baghdadi, yassine.aribi, rahma.fourati, adel.alimi}@ieee.org
N. Halouani is with the Hedi Chaker hospital, Sfax, Tunisia. E-mail: najla.halouani@yahoo.fr
P. Siarry is with the LERISS, Université de Paris 12, 94010 Créteil, France.

Anxiety affects human capabilities and behavior as much as it affects productivity and quality of life, and it can be considered a main cause of depression and suicide. Anxious states are easily detectable by humans thanks to their acquired cognition: humans interpret the interlocutor's tone of speech, gestures and facial expressions to recognize their mental state. There is a need for reliable non-invasive techniques that perform the complex task of anxiety detection. In this paper, we present the DASPS database, containing recorded Electroencephalogram (EEG) signals of 23 participants during anxiety elicitation by means of face-to-face psychological stimuli. EEG signals were captured with the Emotiv Epoc headset, a wireless, wearable, low-cost device. In our study, we investigate the impact of several parameters, notably trial duration, feature type, feature combination and the number of anxiety levels. Our findings show that anxiety is well elicited within 1 second: a stacked sparse autoencoder fed with different types of features achieves 83.50% and 74.60% accuracy for 2-level and 4-level anxiety detection, respectively. The presented results demonstrate the benefits of using a low-cost EEG headset instead of wired medical devices and create a starting point for new research in the field of anxiety detection.

Electroencephalogram, stress and anxiety detection, psychological stimulation, feature extraction, feature selection.

1 Introduction

Anxiety is a mental health issue that has physical consequences on our bodies. In particular, it can affect the immune system: there is evidence that too much anxiety can dramatically weaken it [1]. Anxiety is essentially long-term stress, during which the stress hormone is released by our bodies in large quantities, which correlates with a degradation of body performance. This invisible disability can also greatly affect academic performance. Anxiety impacts memory capacities, leading to difficulties in learning and retaining information.

The anxious student or employee works and thinks less efficiently, which significantly affects his capabilities. According to reports of the Anxiety Disorders Association of America, one in eight children suffers from anxiety disorders [2]. This presents a risk of poor performance, diminished learning and social/behavioral problems in school. Since anxiety disorders in children are difficult to identify, it is imperative to learn how to detect them at an early stage in order to help affected children. Anxiety may manifest through signs such as increased inflexibility, over-reactivity and emotional intensity.

People who experience panic attacks, trembling, or other effects of anxiety disorders have trouble doing tasks that require sustained attention. Jobs requiring manual labor can be harmful for those suffering from anxiety attacks. People with anxiety-related disorders also suffer from muscle pain, making it very difficult for them to do demanding physical work.

Anxiety-related disorders have various impacts on emotional and thinking capacity. A person suffering from Generalized Anxiety Disorder (GAD) [3] will find it really difficult to work properly. GAD may also make it challenging to coordinate with an employer, since it is hard for the sufferer to remain in one place for a long period of time. People suffering from GAD sometimes also have Obsessive-Compulsive Disorder (OCD), which causes even greater difficulty in the workplace. People who suffer from Post-Traumatic Stress Disorder (PTSD) cannot handle certain environments [3]. Most employees with these types of anxiety disorders find it difficult to stay employed in one place, since the working conditions may not suit them, and anxiety makes it difficult to concentrate for long hours. An anxiety disorder that has lasted more than a year is very likely to affect work performance.

Anxiety mainly arises from three factors, namely external, internal and interpersonal ones. Table I shows anxiety categories and their stimuli drawn from real-life situations. To select the situations with the highest anxiety levels, a survey was carried out and distributed to all volunteers who wanted to participate in our experiment. Based on this survey, we selected the 6 situations in which participants experienced the highest anxiety levels. The selected situations and the percentage for each level of anxiety are shown in Fig. 1.

Category Stimuli
External
    Witnessing a deadly accident
    Familial instability
    Maltreatment / Abuse
    Financial instability
Interpersonal
    Relationship with the supervisor
    Relationship with the manager
    Lack of confidence towards spouse
    Being in an embarrassing situation
Internal
    Fear of getting cheated on
    Fear of children's failure
    Fear of failure
    Fear of losing someone close
    Feeling guilty permanently
    Recalling a bad memory
    Health (Fear of getting sick and missing an important event)
    Health (Fear of being diagnosed with a serious illness)
TABLE I: Anxiety trigger categories and stimuli
Fig. 1: Survey results of the most common anxiety triggers

This paper is organized into 8 sections. In Section 2, we present an overview of prior work on anxiety detection from EEG signals. In Section 3, we detail the implemented experimental protocol and the analysis of the collected data in terms of variability and coherence. The steps followed for data recording and preprocessing, as well as the general architecture of the proposed system, are presented in Section 4. A variety of features are presented in Section 5. Three classifiers are described in Section 6. An analysis and discussion of the obtained results are carried out in Section 7. Finally, the last section summarizes our paper and outlines future work.

2 Literature overview

Reference Stimulus #Participants #Channels Method description Affective states Accuracy (%)
[4] Audio-visual 32 32 ESN with band power features Stress and Calm 76.15
[5] Audio-visual 32 32 SVM with entropy features Stress and Calm 81.31
[6] Audio-visual 23 14 PSD with SVM Valence, Arousal and Dominance 62.49
[7] Audio-visual 18 32 Asymmetry Index, Coherence, Brain Load Index and Spectral Centroid Frequency Stress and Relax —
[8] Audio-visual 32 32 Statistical characteristics, PSD and HOC with k-NN Calm and Stress 70.10
[9] Visual 15 10 FD, Correlation Dimension, Lyapunov exponent and wavelet coefficients with LDA and SVM Calm and negatively excited 87.30
[10] Mathematical tasks 6 14 Hilbert-Huang Transform with SVM Neutral, Stress-low, Stress-medium and Stress-high 89.07
[11] — 13 8 k-means clustering with stress index Stress and Relax —
[12] Stroop Color Word test 25 1 ANN, k-NN, LDA with DCT coefficients Non-stressed and Stressed 72.00
[13] Examination period 26 8 k-NN and SVM with Higuchi FD, GM and MSCE Stress and Stress-free 90.00
TABLE II: Previous works on EEG-based anxiety detection

Many studies based on biometrics have been conducted for the recognition of a person's identity and mental and physical health [14] [15] [16] [17] [18] [19].
Studies on anxiety/stress detection based on EEG signal analysis are few compared to those on emotion recognition, surveyed in [20] [21]. Most of the proposed works on EEG-based emotion recognition, as in [22] [4], were validated using the DEAP dataset [23], which was recorded using a Biosemi ActiveTwo headset with 32 channels while 32 participants were watching 40 one-minute videos. In their work, Giorgos et al. [7] extracted two subsets of trials from the DEAP dataset [23] according to predefined conditions for two emotional states, stress and calm. The idea is to define thresholds on valence and arousal and extract only the trials that respect these conditions. This step yielded a subset of 18 subjects conforming to the adequate norm. The authors extracted spectral, temporal and non-linear EEG features to represent the investigated states.

Bastos-Filho et al. [8] validated their proposed system, based on three EEG feature extraction techniques, on the classification of stress/calm emotional states. The classification was performed by a k-Nearest-Neighbor (k-NN) classifier. In the same way, the authors of [9] defined two specific regions of the valence-arousal space to represent two emotional states: calm and negatively excited. In order to improve the performance and efficiency of the stress recognition system, they performed a qualitative and quantitative analysis to choose relevant EEG segments.

Otherwise, some researchers opt to conduct their own experiments to collect EEG signals. In the work of Vanita et al. [10], the authors looked at students' stress levels and defined their own experimental protocol to record EEG signals during a stress elicitation session. The data were then preprocessed for noise and ocular artifact removal. Features were extracted by means of a time-frequency analysis, and classification was performed by hierarchical Support Vector Machines, which gave an accuracy of 89.07%. To investigate the real-time issue, Lahane et al. [24] proposed an EEG-based stress detection system that employs an Android application to gather EEG data. As a feature, the Relative Energy Ratio (RER) was calculated for each frequency band.

A single-channel EEG signal was recorded from 25 students of Sunway University for the stress detection system proposed in [12]. The data were collected with the NeuroSky Mindwave headset and stored for further analysis. Students' stress was elicited for 60 seconds by the Stroop color word test, preceded by 30 seconds of on-screen instruction reading. The high-frequency components of the EEG signal, comprising noise and artifacts, were discarded, and only the low-frequency components obtained after a Discrete Cosine Transform (DCT) were passed to the classification. Based on an interview in which the subjects reported that the instruction reading was the most stressful part of the experiment, only the first 30 seconds of the recorded data were preprocessed and used for stress classification. Results show that k-NN, which reaches 72%, outperforms LDA (60%) and ANN (44%) in classifying stress.

Khosrowabadi et al. [13] observed that the examination period is the most stressful for students. Based on this fact, they conducted their experiment during and after the examination period, collecting EEG signals from 26 students (15 during the examination period and 11 two weeks after). Data were preprocessed for noise removal with an elliptic band-pass filter (2-32 Hz). Three different features were investigated in this work: Higuchi's Fractal Dimension (HFD), Gaussian mixtures of the EEG spectrogram, and Magnitude Square Coherence Estimation (MSCE). The classification step was handled by k-NN and SVM classifiers. MSCE gave the best accuracy, up to 90%, in classifying chronic mental stress.

Recently, Katsigiannis et al. [6] published a new database for emotion recognition, in which EEG and ECG signals were collected from 23 participants during an affect elicitation experiment. The study does not aim to detect stress from the recorded data; it proposes a classification of valence, arousal and dominance levels. Moreover, the authors proved the effectiveness of the Emotiv Epoc headset [25] for recording EEG signals. To detect stress in healthy subjects, Norizam et al. [26] used a k-NN classifier with parameters such as Shannon Entropy (SE), Relative Spectral Centroid (RSC) and Energy Ratio (ER). Their study employed 185 EEG recordings from different experiments.

According to Table II, most previous works rely on audio-visual stimuli from the international IAPS and IADS databases [27], whereas others used arithmetic tasks, as practiced in [28], where the level of stress is supposed to increase with the hardness of the tasks. In our opinion, mathematical tasks cannot be carried out by everyone, so such experiments are limited to specific participants. Let us recall that anxiety is felt by everyone, and our aim is to detect it in all people without any restriction. The definition of anxious states differs from one study to another; in our work, we keep the term anxiety and present its levels as recommended by our therapist. For classification, k-NN and SVM are popular across all works, so we follow these works and use them for the detection of anxiety levels. In addition, we also use a Stacked Sparse AutoEncoder (SSAE) for the classification step.

In this context, we take up the challenge of proposing a new database of EEG signals for anxiety level detection (the database and Matlab scripts for data segmentation considered in this article can be downloaded from: http://www.regim.org/publications/databases/dasps/). The innovation of our work lies not only in making EEG data public for the affective computing community, but also in the design of a psychological stimulation protocol providing comfortable conditions for participants, thanks to the face-to-face interaction with the therapist and to the use of a wireless EEG cap with fewer channels, i.e. only 14 dry electrodes.

3 Experimental protocol

3.1 Protocol description

We defined an experimental protocol that meets our needs; after discussion with a psychotherapist, it was fixed as follows. Each participant is asked to sign a consent form before starting the experiment. The anxiety level is assessed before stimulation according to the Hamilton Anxiety Rating Scale (HAM-A) in order to measure the severity of the participant's anxiety. This tool provides 14 items, each covering a number of symptoms, and each item can be rated on a scale of zero to four.

Our psychotherapist asks the participant about the degree of severity of each symptom and rates it on the scale, with four being the most severe. These ratings are used to compute an overall score that indicates the person's anxiety severity [29]. The participant is then prepared to start the experiment, with closed eyes and minimal gesture and speech. The psychotherapist starts by reciting the first situation and helps the subject to imagine it. This phase is divided into two stages: recitation by the psychotherapist for the first 15 s and recall by the subject for the last 15 s.
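The HAM-A scoring described above can be sketched as follows. The summation of the 14 item ratings is as in the scale's definition; the severity cut-offs used to bin the total score into the four levels of Table IV are not given in the paper, so the thresholds below follow common clinical usage and should be treated as assumptions.

```python
# Sketch of HAM-A scoring: 14 items, each rated 0-4, summed into a
# total score. Severity cut-offs below are an ASSUMPTION (common
# clinical convention), not values stated in this paper.

def hama_score(item_ratings):
    """Sum the 14 HAM-A item ratings (each 0-4) into a total score."""
    if len(item_ratings) != 14:
        raise ValueError("HAM-A has exactly 14 items")
    if any(not 0 <= r <= 4 for r in item_ratings):
        raise ValueError("each item is rated on a 0-4 scale")
    return sum(item_ratings)

def hama_severity(score):
    """Map a total score to the four levels used in Table IV (assumed cut-offs)."""
    if score < 18:
        return "Normal"
    elif score < 25:
        return "Light"
    elif score < 30:
        return "Moderate"
    return "Severe"
```

In practice the therapist's per-item ratings would simply be collected into a list and passed through these two functions before and after the experiment.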

When the time is over, the subject is asked to rate how he felt during stimulation using the Self-Assessment Manikin (SAM). It has two rating rows: valence, ranging from negative to positive, and arousal, ranging from calm to excited. Each row contains nine items. To evaluate the current emotion, each volunteer has to tick the items that suit his emotion on these two dimensions (arousal, valence). This trial is repeated up to the sixth situation. At the end of the experiment, some items from HAM-A are re-evaluated by the psychotherapist to adjust the participant's anxiety level. All steps of the stimulation protocol are presented in Fig. 2.

Fig. 2: The experimental protocol of anxiety stimulation

3.2 Data analysis

Fig. 3: Presentation of participant rating in two-dimensional space

Before the preprocessing phase, the data were screened to eliminate trials with a large difference between the expected and the actual rating. In [30] [31], Russell defines anxiety as low valence and high arousal. Accordingly, trials satisfying this condition and belonging to the LVHA quadrant are the main focus of our work, as shown in Fig. 3. To analyze data across all participants, we measured the relative variability by computing the Coefficient of Variation (CV) of all participants' ratings for all stimulus situations.

The CV is the ratio of the standard deviation to the mean. A CV equal to zero indicates no variability, whereas higher CVs indicate more variability. The mean CV between the participants' assessments was 0.58 for valence and 0.42 for arousal, which can be considered low variability, although higher than expected. In most cases, this variability is due to a lack of comprehension of the SAM scales, leading to non-objective ratings. The mean rating across all study participants for each stimulus, in terms of valence and arousal, is shown in Table V.
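The CV computation above is straightforward; a minimal sketch (the ratings below are made-up examples, not values from the study):

```python
import numpy as np

# Coefficient of Variation: CV = standard deviation / mean, computed
# per stimulus over participants' SAM ratings.

def coefficient_of_variation(ratings):
    ratings = np.asarray(ratings, dtype=float)
    return ratings.std() / ratings.mean()

valence_ratings = [3, 2, 4, 2, 3]   # hypothetical SAM ratings on a 1-9 scale
cv = coefficient_of_variation(valence_ratings)
```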

For each situation, the participant's rating can be represented as a point in a 2D plane whose axes correspond to the valence and arousal values. This plane can be divided into four quadrants according to the possible combinations of the valence and arousal scales. The four quadrants, as shown in Fig. 3, are: Low Valence and Low Arousal (LVLA), High Valence and Low Arousal (HVLA), Low Valence and High Arousal (LVHA), and High Valence and High Arousal (HVHA). A summary of the subjective classification of participants' ratings into the four valence-arousal quadrants is presented in Table III.
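The quadrant assignment above can be sketched as a small helper. Splitting the 1-9 SAM scale at the midpoint 5 is an assumption for illustration; the paper does not state the exact threshold it used.

```python
# Map a (valence, arousal) SAM rating to one of the four quadrants.
# The midpoint-5 split of the 1-9 scale is an ASSUMPTION.

def va_quadrant(valence, arousal, midpoint=5):
    v = "H" if valence > midpoint else "L"
    a = "H" if arousal > midpoint else "L"
    return f"{v}V{a}A"
```

For example, a rating of valence 2 and arousal 8 falls in the LVHA quadrant, the region associated with anxiety.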

As shown in Fig. 3, samples are concentrated in the LVHA and LVLA quadrants, which proves that the employed situations successfully elicited anxiety in most participants. Table IV presents the number of participants classified by anxiety level based on the HAM-A score before and after the experiment. The number of participants with severe anxiety increased from 7 to 13, denoting the impact of the elicitation on participants' anxiety levels.

Situation LVLA HVLA LVHA HVHA
Situation 1 7 0 16 0
Situation 2 12 0 11 0
Situation 3 9 0 14 0
Situation 4 14 0 7 0
Situation 5 8 0 15 0
Situation 6 7 0 16 0
TABLE III: Number of participants in each quadrant according to SAM ratings
Anxiety level Normal Light Moderate Severe
Before Experiment 4 6 6 7
After Experiment 2 5 3 13
TABLE IV: Participants anxiety levels according to Hamilton scores
Stimulus Valence Arousal
Situation 1
Situation 2
Situation 3
Situation 4
Situation 5
Situation 6
Mean CV 0.58 0.42
TABLE V: Mean rating and Standard Deviation across all participants for each situation
Fig. 4: Architecture of the proposed system

4 Materials and Methods

This work covers all stages needed to create a robust EEG-based anxiety detection system, starting from the elaboration of an anxiety stimulation experimental protocol to the classification of anxiety levels. The general architecture of the proposed system is depicted in Fig. 4.

4.1 EEG recording

Fig. 5: Emotiv EPOC electrodes placement

Voltage changes in the human brain during elicitation are recorded as EEG signals. Physiological noise (such as heart activity, muscle or ocular movements) or extra-physiological noise (equipment settings or environment) may contaminate the recorded brain signals. To remove such artifacts, the signals must be treated with dedicated EOG/EMG/ECG artifact removal techniques. Once the data are cleaned, relevant features can be extracted using time, frequency or time-frequency analysis. A classifier is then trained on a subset of these features, and the remaining data are classified by applying the trained classifier. This step aims to provide a simplified interpretation of the original raw brain signals.

The experiment was performed on 23 healthy subjects (13 women and 10 men, with an average age of 30) not suffering from psychological diseases. The purpose was clearly explained to each participant before starting the experiment, and the items of the Hamilton test were explained to avoid misunderstanding of any question. The experiment was performed in an isolated environment to avoid distracting noises and to guarantee the subject's full concentration. The anxiety stimulation was accomplished by face-to-face psychological elicitation performed in a professional manner by our psychotherapist.

EEG signals were recorded using a wireless EEG headset, the Emotiv EPOC, whose 14 channels and 2 mastoid sensors [25] were placed according to the international 10-20 system. The electrodes were attached to the scalp at positions AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8 and AF4, as shown in Fig. 5. The M1 mastoid sensor acts as a ground reference point against which the voltage of all other sensors is compared, while the M2 mastoid sensor is an indirect reference that minimizes external electrical interference [6].

The Emotiv Epoc neuro-headset [25] was chosen for its ease of use. It offers comfort to users and, above all, it is wireless and does not require an intricate setup like clinical EEG equipment. In addition, it has proven effective in emotion recognition systems, as shown by [32] [33] [34] [35] [36] and more recently by [6] [37].

The recording was performed through the Emotiv Epoc software for raw EEG data recording, which allows us to view and save data for all channels or only the customized subset we need. The produced raw data have the ".edf" extension and are converted to ".mat" with a MATLAB script for further processing. The recording started before carrying out the first situation and ended after finishing the sixth one. Each recording took 6 min, as depicted in Fig. 2, divided into 1 min per trial. The acquired EEG signals were sampled at 128 Hz and the impedance was kept below 7 kΩ.

4.2 EEG preprocessing

In biomedical signal processing, identifying noise and artifacts in the acquired signal is necessary to reach the feature extraction phase with a clean signal and achieve good classification results. Physiological artifacts are generated by sources other than the brain, such as electrooculogram (EOG) artifacts below 4 Hz, muscle (EMG) artifacts with frequencies exceeding 30 Hz, and heart activity (electrocardiogram, ECG) at about 1.2 Hz. Artifacts can also be extra-physiological, unrelated to the human body and typically around 50 Hz; these may be caused by the environment or related to the EEG acquisition parameters [27] [29].

In [10], two primary noise sources (artifacts) were eliminated, namely power-line noise and ocular artifacts arising from body movement. A Finite Impulse Response (FIR) filter with a 0.75-45 Hz pass band was used to filter out the noise. To remove ocular artifacts occurring at 0.1-16 Hz while preserving the natural properties of the signal, a wavelet decomposition approach using the bi-orthogonal wavelet bior3.9 was employed. Preprocessing of the DEAP dataset was achieved by using a blind source separation technique to remove ocular artifacts and by applying a 4-45 Hz band-pass filter to the collected EEG signals [23].

To denoise our set of signals, we applied an EEGLAB script that extracts the relevant EEG sub-band, removes the baseline and removes ocular and muscular artifacts. A 4-45 Hz Finite Impulse Response (FIR) band-pass filter was applied to the raw data, and the Automatic Artifact Removal (AAR) toolbox for EEGLAB [38] was used to remove EOG and EMG artifacts.
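The 4-45 Hz FIR band-pass stage above can be sketched with SciPy in place of EEGLAB. The filter order and the use of zero-phase filtering are implementation choices of this sketch, not settings taken from the paper; the AAR artifact-removal stage is not reproduced here.

```python
import numpy as np
from scipy import signal

# Sketch of the 4-45 Hz FIR band-pass filter (filter length and
# zero-phase filtering are ASSUMPTIONS of this sketch).

FS = 128  # Emotiv Epoc sampling rate, Hz

def bandpass_fir(eeg, low=4.0, high=45.0, fs=FS, numtaps=129):
    taps = signal.firwin(numtaps, [low, high], pass_zero=False, fs=fs)
    return signal.filtfilt(taps, [1.0], eeg, axis=-1)

# 10 Hz component lies in the pass band; 60 Hz lies in the stop band.
t = np.arange(FS * 4) / FS
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
clean = bandpass_fir(raw)
```

After filtering, the 60 Hz component is strongly attenuated while the 10 Hz component passes through essentially unchanged.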

As mentioned, the experiment lasts almost 6 minutes, divided into 6 different situations. We are only interested in the first 30 seconds of each trial; the 15 seconds of SAM rating are removed during this step. As a result, we have 6 trials of 30 s each per participant. We recall that, as shown in the experimental protocol, after each 30 s of stimulation the participant is asked to fill in the SAM survey to express his emotions during the stimulation in terms of excitement (arousal) and feeling (valence). After the end of the whole experiment, the participant is asked to identify the most exciting situation. We used this information to label all trials.

This labeling step yields 156 'Normal' trials, 90 'Severe' trials, 10 'Moderate' trials and 20 'Light' trials. In order to increase the number of samples per participant, we follow previous work [39] by constructing two additional sub-datasets of 5-second and 1-second trials extracted as samples from the main EEG signal.
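The construction of the 5 s and 1 s sub-datasets above amounts to cutting each 30 s trial into non-overlapping windows; a minimal sketch (the use of non-overlapping windows is an assumption of this sketch):

```python
import numpy as np

# Cut a (channels, samples) trial into fixed-length windows to build
# the additional 5 s and 1 s sub-datasets.

FS = 128  # sampling rate, Hz

def segment_trial(trial, window_sec, fs=FS):
    """Split a (channels, samples) trial into (n_windows, channels, win) segments."""
    win = int(window_sec * fs)
    n = trial.shape[1] // win
    return trial[:, :n * win].reshape(trial.shape[0], n, win).swapaxes(0, 1)

trial = np.random.randn(14, 30 * FS)   # one 30 s, 14-channel trial
segs_5s = segment_trial(trial, 5)      # 6 windows of 5 s
segs_1s = segment_trial(trial, 1)      # 30 windows of 1 s
```

Each window inherits the label of its parent trial, multiplying the number of samples per participant by 6 (5 s windows) or 30 (1 s windows).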

The labeling process is depicted in the flow chart of Fig. 6.

Fig. 6: Flow Chart of the labeling process

5 Feature extraction

A wide range of EEG features for emotion recognition have been investigated in the literature [40]. Generally, these features fall into three main classes according to the domain, namely time-domain features, frequency-domain features and time-frequency-domain features. Other features can be extracted from a combination of electrodes; we mention one of them in this section.

5.1 Time Domain Features

Time-domain features result from exploring signal characteristics that differ between emotional states. Many approaches have been employed in the literature to extract this type of feature. In our work, we extracted Hjorth features and FD features:

5.1.1 Hjorth Features

Hjorth parameters [41] are Activity, Mobility and Complexity. The activity parameter is the variance of the time series. The mobility parameter represents the mean frequency, estimated from the proportion of the standard deviation of the power spectrum. Finally, the complexity parameter represents the variation in frequency and indicates the deviation of the slope.

Let y(t) be the EEG signal and y'(t) its first derivative. The expressions of the Hjorth parameters are:

Activity = var(y(t))    (1)
Mobility = sqrt( var(y'(t)) / var(y(t)) )    (2)
Complexity = Mobility(y'(t)) / Mobility(y(t))    (3)

Hjorth parameters have been used in many EEG studies, such as [42] [41] [43]. In our work, we calculated the Hjorth parameters for all 14 EEG channels, producing a 42x1 feature vector for each trial.
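The three parameters above can be computed directly with NumPy, using first differences to approximate the derivative:

```python
import numpy as np

# Hjorth parameters (Activity, Mobility, Complexity) of a 1-D signal,
# with np.diff as the discrete derivative.

def hjorth(y):
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    ddy = np.diff(dy)
    activity = np.var(y)
    mobility = np.sqrt(np.var(dy) / np.var(y))
    complexity = np.sqrt(np.var(ddy) / np.var(dy)) / mobility
    return activity, mobility, complexity

# Stacking the 3 parameters over 14 channels yields the 42x1 vector.
eeg = np.random.randn(14, 128)
features = np.concatenate([hjorth(ch) for ch in eeg])
```

For a pure sinusoid the complexity is close to 1, and it grows as the signal's frequency content becomes richer.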

5.1.2 Fractal Dimension

The Higuchi algorithm calculates the fractal dimension of time-series data. Let X(1), X(2), ..., X(N) be a finite set of time-series samples. New time series are constructed as follows:

X_k^m : X(m), X(m+k), X(m+2k), ..., X(m + floor((N-m)/k) * k),   m = 1, 2, ..., k

where m is the initial time and k is the interval time. For example, if k = 3 and N = 50, the newly constructed time series are:

X_3^1 : X(1), X(4), X(7), ..., X(49)
X_3^2 : X(2), X(5), X(8), ..., X(50)
X_3^3 : X(3), X(6), X(9), ..., X(48)

The k curve lengths L_m(k) are calculated as follows:

L_m(k) = (1/k) * [ ( sum_{i=1}^{floor((N-m)/k)} |X(m+ik) - X(m+(i-1)k)| ) * (N-1) / (floor((N-m)/k) * k) ]

Let L(k) denote the average value of L_m(k) over the k sets; the following relationship holds:

L(k) ∝ k^(-D)

The fractal dimension D can then be obtained from the slope of the logarithmic plot of L(k) against k.

This feature was used in [44] to recognize 4 emotions corresponding to the 4 quadrants of Russell's affective plane: negative high-aroused (fear), positive high-aroused (happy), negative low-aroused (sad), and positive low-aroused (pleasant) states. Using 3 electrodes, the proposed system achieved an accuracy of 70%.
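The Higuchi procedure above translates into a short NumPy routine. The choice kmax = 8 is an assumption of this sketch; the paper does not report the value it used.

```python
import numpy as np

# Higuchi fractal dimension: compute curve lengths L(k) for interval
# times k = 1..kmax and take the slope of log L(k) vs log(1/k).

def higuchi_fd(x, kmax=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                     # initial times m = 1..k (0-based here)
            idx = np.arange(m, N, k)           # X(m), X(m+k), X(m+2k), ...
            n_int = len(idx) - 1
            if n_int < 1:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            Lk.append(dist * (N - 1) / (n_int * k) / k)
        lengths.append(np.mean(Lk))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

As a sanity check, a straight line has dimension 1 while white noise approaches 2.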

5.2 Frequency Domain Features

Band Power. Band power features are the most popular features in the context of EEG-based emotion recognition. The definition of the EEG frequency bands differs slightly between studies. Commonly, they are defined as follows: delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-32 Hz) and gamma (32-64 Hz).

The decomposition of the overall EEG signal power into individual bands is commonly achieved through Fourier transforms and related spectral analysis methods [45], as stated in [40]. The short-time Fourier transform (STFT) [39] is the most commonly used alternative, along with the estimation of the power spectral density (PSD) using Welch's method [23].
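Band-power extraction via Welch's PSD estimate can be sketched as follows, using the band limits given above (the `nperseg` choice is an assumption of this sketch):

```python
import numpy as np
from scipy import signal

# Band power per EEG frequency band from a Welch PSD estimate
# (summing PSD bins inside each band; proportional to band power).

FS = 128
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 32), "gamma": (32, 64)}

def band_powers(x, fs=FS):
    freqs, psd = signal.welch(x, fs=fs, nperseg=min(len(x), 256))
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = psd[mask].sum()
    return out

t = np.arange(4 * FS) / FS
alpha_wave = np.sin(2 * np.pi * 10 * t)   # 10 Hz test tone, inside the alpha band
powers = band_powers(alpha_wave)
```

For the 10 Hz test tone, nearly all the power lands in the alpha band, as expected.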

5.3 Time-Frequency Domain Features

5.3.1 Hilbert-Huang Spectrum

The Empirical Mode Decomposition (EMD) along with the Hilbert-Huang Spectrum (HHS) is considered a new way to extract the necessary information from an EEG signal, since it defines an amplitude and an instantaneous frequency for each sample [46]. EMD decomposes the EEG signal into a set of Intrinsic Mode Functions (IMFs) through an automatic sifting process. Each IMF represents different frequency components of the original signal. EMD acts as an adaptive high-pass filter: it sifts out the fastest-changing component first, and as the IMF level increases, its oscillation becomes smoother. Each component is band-limited and can reflect the characteristics of the instantaneous frequency [47].

x(t) is then represented as the sum of the IMFs and the residual [46]:

x(t) = sum_{i=1}^{n} c_i(t) + r_n(t)

where the c_i(t) are the extracted IMFs and r_n(t) is the residual.

In this work, we computed the HHS for each signal using EMD to obtain a set of IMFs representing the original signal. The extracted features are the Hilbert Spectrum (HS) and the Instantaneous Energy Density (IED) level. The decomposition resulted in 10 IMFs per channel.
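The Hilbert stage of the pipeline above can be sketched with SciPy's analytic-signal transform. The EMD step itself is not reproduced here; a synthetic 10 Hz oscillation stands in for an IMF, which is an assumption of this sketch.

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous amplitude (envelope), frequency and energy density of
# a narrow-band component via the analytic signal.

FS = 128

def instantaneous_features(imf, fs=FS):
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)                   # envelope
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # Hz, per sample
    energy_density = amplitude ** 2                # instantaneous energy density
    return amplitude, inst_freq, energy_density

t = np.arange(2 * FS) / FS
imf = np.sin(2 * np.pi * 10 * t)   # stand-in for one IMF
amp, freq, ied = instantaneous_features(imf)
```

For the 10 Hz stand-in, the instantaneous frequency hovers around 10 Hz and the envelope around 1, apart from edge effects at the signal boundaries.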

5.3.2 Band Power and RMS using DWT

The Discrete Wavelet Transform (DWT) is a signal processing technique that decomposes a signal into different levels of approximation and detail corresponding to different frequency bands, while keeping the temporal information of the signal. The signal is downsampled at each level. The correspondence between frequency bands and wavelet decomposition levels depends on the sampling frequency; in our case, the corresponding decomposition for fs = 128 Hz is given in the last column of Table VI.

Bandwidth (Hz) Frequency Band Decomposition Level
1-4 Hz Delta A5
4-8 Hz Theta D5
8-13 Hz Alpha D4
13-32 Hz Beta D3
32-64 Hz Gamma D2
TABLE VI: EEG signal Frequency Bands and Decomposition Levels at fs=128 Hz

In our approach, in addition to the band power, the statistical Root Mean Square (RMS) feature, derived from a 5-level wavelet decomposition with the 'db5' function, is extracted for each frequency band:

RMS_j = sqrt( (1/N_j) * sum_{i=1}^{N_j} d_{j,i}^2 )

where the d_{j,i} are the detail coefficients, N_j is the number of coefficients at decomposition level j, and j denotes the level [48].
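The per-level RMS feature can be sketched as follows. The paper uses a 5-level 'db5' decomposition (which a wavelet library such as pywt would provide); to stay dependency-free, this sketch substitutes the Haar wavelet, whose filters are trivial, so the numbers differ from a db5 decomposition.

```python
import numpy as np

# RMS of the detail coefficients at each level of a 5-level Haar DWT
# (Haar stands in for the paper's db5; an ASSUMPTION of this sketch).

def haar_dwt_rms(x, levels=5):
    """Return the RMS of the detail coefficients at levels D1..D5."""
    x = np.asarray(x, dtype=float)
    rms = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass + downsample
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass + downsample
        rms.append(np.sqrt(np.mean(detail ** 2)))
        x = approx
    return rms

sig = np.random.randn(1280)   # 10 s at 128 Hz
rms_per_level = haar_dwt_rms(sig)
```

At fs = 128 Hz the five detail levels correspond to the bands of Table VI (D2 gamma through D5 theta, with A5 for delta).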

5.4 Other Features

5.4.1 Quantitative Features

In addition to the aforementioned features, we adopted the set of features used in [49], which includes a variety of commonly used EEG features. However, for some features we use all channels and bands, unlike [49], where the number of features was reduced by averaging the outputs. The feature set includes stationary features that capture amplitude and frequency characteristics, as well as inter-hemispheric connectivity features, as shown in Table VII.

Feature Description FB
Spectral power: absolute Yes
Spectral power: relative (normalised to total spectral power) Yes
Spectral entropy: Wiener (measure of spectral flatness) Yes
Spectral: difference between consecutive short-time spectral estimates Yes
Spectral: cut-off frequency fc (95% of spectral power contained between 0.5 and fc Hz) No
Amplitude: total power of the time-domain signal Yes
Amplitude: standard deviation of the time-domain signal Yes
Amplitude: skewness of the time-domain signal Yes
Amplitude: kurtosis of the time-domain signal Yes
Amplitude: envelope mean value Yes
Amplitude: envelope standard deviation (SD) Yes
Connectivity: Brain Symmetry Index Yes
Connectivity: correlation between envelopes of hemisphere-paired channels Yes
Connectivity: lag of maximum correlation coefficient between hemisphere-paired channels Yes
Connectivity: coherence: mean value Yes
Connectivity: coherence: maximum value Yes
Connectivity: coherence: frequency of maximum value Yes
Range EEG: mean Yes
Range EEG: median Yes
Range EEG: standard deviation Yes
Range EEG: coefficient of variation Yes
Range EEG: measure of skew about median Yes
Range EEG: lower margin (5th percentile) Yes
Range EEG: upper margin (95th percentile) Yes
Range EEG: width (upper margin minus lower margin) Yes
TABLE VII: EEG quantitative features (FB: computed for each frequency band)

The feature vector resulting from this step contains a fusion of all qEEG features and is constructed for subsequent classification.

5.4.2 Differential Asymmetry

Frontal asymmetry (the relative difference in power between two signals in different hemispheres) has been suggested as a biomarker for anxiety [50]. In different studies, FA was calculated from the beta band (13-25 Hz) or the gamma band (>30 Hz), but there are also studies using the alpha (8-12 Hz) frequency band. Due to the inverse relationship between alpha power and cortical activity, decreased alpha power reflects increased anxiety [50].

Frontal asymmetry within the alpha band can be inversely related to stress/anxiety [21] [7]. The feature was calculated using alpha band power: the natural logarithm of the alpha power of each right-hemisphere channel was subtracted from that of its left-hemisphere pair (L - R).


To calculate the asymmetry index, the continuous signal must first be broken into small parts. Studies recommend overlapping epochs, each limited to a duration of 1-2 seconds [39]. For each epoch, a DWT is calculated; it determines which frequencies underlie the data, allowing the power in a specific frequency band to be extracted.
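The procedure above can be sketched as follows. The 128 Hz sampling rate, the FFT-periodogram band-power estimate and the 50% overlap are illustrative assumptions made for this sketch (the paper itself derives band power from a DWT):

```python
import numpy as np

FS = 128  # assumed sampling rate (Hz), matching the Emotiv Epoc

def epochs(x, win_s=1.0, overlap=0.5):
    """Split a continuous signal into overlapping epochs of win_s seconds."""
    win = int(win_s * FS)
    step = int(win * (1 - overlap))
    return [x[i:i + win] for i in range(0, len(x) - win + 1, step)]

def alpha_power(x, lo=8.0, hi=12.0):
    """Alpha-band (8-12 Hz) power from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def asymmetry_index(left, right):
    """ln(alpha power, left) - ln(alpha power, right), averaged over epochs."""
    vals = [np.log(alpha_power(l)) - np.log(alpha_power(r))
            for l, r in zip(epochs(left), epochs(right))]
    return float(np.mean(vals))

# Toy example on two synthetic channels (e.g. a hemisphere-paired F3/F4)
rng = np.random.default_rng(0)
da = asymmetry_index(rng.standard_normal(4 * FS), rng.standard_normal(4 * FS))
```

The same function applies to each left/right channel pair, producing one differential asymmetry feature per pair.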

6 Classification

In our anxiety level detection methodology, the classification step is carried out with three classifiers: SVM, k-NN and SSAE.

6.1 Support Vector Machines

SVM [51] is known primarily as a binary classifier, although it can also perform multiclass classification. Known for its high generalization ability, it has been tested and proven as a good classifier [52]. Consider a training set {(x_i, y_i), i = 1, ..., N}, where x_i denotes the extracted EEG feature vectors, y_i denotes the corresponding labels, and N is the number of samples. The SVM decision function is expressed by the following equation:

f(x) = sum_{i=1}^{N_s} w_i k(x, x_i) + b

where x denotes the input vector extracted from the EEG signal, k is the kernel function, the x_i are the support vectors, the weights are denoted by w_i and the bias by b. The kernel of SVM used in this work is the Radial Basis Function (RBF).
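A minimal sketch of this decision function with an RBF kernel; the support vectors and weights below are hypothetical toy values (in practice they are produced by training, e.g. in Matlab as done in the paper):

```python
import numpy as np

def rbf_kernel(x, xi, gamma=0.5):
    """RBF kernel k(x, x_i) = exp(-gamma * ||x - x_i||^2)."""
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def svm_decision(x, support_vectors, weights, b, gamma=0.5):
    """f(x) = sum_i w_i * k(x, x_i) + b; the sign gives the predicted class."""
    return sum(w * rbf_kernel(x, xi, gamma)
               for w, xi in zip(weights, support_vectors)) + b

# Toy example with two hypothetical support vectors
sv = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
w = [1.0, -1.0]                       # each w_i plays the role of alpha_i * y_i
score = svm_decision(np.array([0.1, 0.0]), sv, w, b=0.0)
```

A point near the first support vector gets a positive score, i.e. its class.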

6.2 k-Nearest Neighbor

k-NN has shown efficiency in EEG signal classification [53]. The parameter k is an integer constant chosen by the user; a new case is assigned to the class most common among its k nearest neighbours, measured by a Euclidean, Manhattan, Minkowski or Hamming distance. k-NN has an issue with unbalanced sets, as the results can be biased towards the most dominant class.
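A minimal k-NN sketch with Euclidean distance and majority voting, on toy data labeled with the two anxiety classes used later in the paper:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Predict the majority class among the k Euclidean-nearest neighbours."""
    dists = np.linalg.norm(np.asarray(train_X) - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy feature vectors and labels
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = ['light', 'light', 'light', 'severe', 'severe', 'severe']
label = knn_predict(X, y, [0.5, 0.5], k=3)   # -> 'light'
```

With unbalanced classes, the vote can be dominated by the majority class, which is the weakness noted above.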

6.3 Stacked Sparse Autoencoder

An autoencoder is an unsupervised learning algorithm whose network consists of three layers: input, hidden and output [54]. By constraining the output layer to reproduce the input layer, it minimizes the reconstruction error and learns the best expression of the data in the hidden layer; the weights are adjusted through backpropagation. Autoencoders can therefore be used for dimensionality reduction and efficient data coding, combining aspects of feature selection and feature extraction. A sparse autoencoder is an autoencoder in which only a small number of neural nodes are simultaneously active. The data processing in an autoencoder network consists of two steps:

Encoding, where the original data x are encoded from the input layer to the hidden layer:

y = f(W(1) x + b(1))

where y denotes the feature expression of the hidden layer, n is the number of nodes in the input layer (equal to the number of nodes in the output layer), W(1) denotes the weight matrix and b(1) the bias vector. f is the sigmoid function, expressed as f(z) = 1 / (1 + e^(-z)).

Decoding, where the feature expression y is decoded from the hidden layer to the output layer:

x' = f(W(2) y + b(2))
The term Sparse AutoEncoder [55] is used when sparsity restrictions are added to the hidden nodes of an autoencoder in order to control the number of activated neurons. Sparse autoencoders have shown better results in feature learning thanks to this reduced number of activated neurons [56].
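The encode/decode steps can be sketched as a forward pass. The weights below are random (training by backpropagation with the sparsity penalty is omitted), and the layer sizes are borrowed from the "All Features" row of Table X purely for illustration:

```python
import numpy as np

def sigmoid(z):
    """f(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

class Autoencoder:
    """One autoencoder: encode to a smaller hidden layer, decode back."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # encoder weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.standard_normal((n_in, n_hidden)) * 0.1   # decoder weights
        self.b2 = np.zeros(n_in)

    def encode(self, x):
        """y = f(W(1) x + b(1)): hidden feature expression."""
        return sigmoid(self.W1 @ x + self.b1)

    def decode(self, y):
        """x' = f(W(2) y + b(2)): reconstruction of the input."""
        return sigmoid(self.W2 @ y + self.b2)

ae = Autoencoder(n_in=277, n_hidden=184)
x = np.random.default_rng(1).random(277)    # stand-in for one feature vector
code = ae.encode(x)
x_hat = ae.decode(code)
```

Stacking means the code produced by one autoencoder becomes the input of the next, with a softmax layer on top for the final classification.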

7 Anxiety detection results and discussion

Fig. 7: Overall class distribution across all participants for two and four anxiety levels.

The main aim of the current work is to provide a new database for EEG-based anxiety detection. To validate the proposed methodology, three experiments based on trial duration were conducted. As a preliminary step, the data were labeled by applying an algorithm based on arousal and valence values, giving two classification problems: two-level and four-level anxiety detection. The number of trials per class resulting from this step is presented in Fig. 7 as distribution percentages, showing unbalanced classes.

We believe that the unbalanced data in the case of 4 anxiety levels affect the classification results. Therefore, in order to obtain more balanced data, we propose regrouping the classes two-by-two: normal and light into the first class, and moderate and severe into the second. The dataset becomes only slightly unbalanced, with samples amounting on average to 36% for the first class (labeled light) and 64% for the second class (labeled severe). Fig. 7 shows the overall class distribution throughout the whole dataset for the two-class rating scale.

Classification was performed using an SVM classifier with a Radial Basis Function (RBF) kernel, trained and tested in Matlab. Furthermore, a 5-fold cross-validation technique was used to validate the classification performance. It must be noted that k-NN was also evaluated with the same procedure and in some cases produced more significant results than SVM. We highlight that the task here is subject-independent, meaning that the training and test samples do not belong to the same subject.
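The subject-independent constraint can be made explicit by splitting the folds at the subject level, so that no subject contributes trials to both the training and the test set. A sketch (the trial counts per subject are illustrative, and this is a generic grouped split, not the paper's exact Matlab procedure):

```python
import numpy as np

def subject_folds(subject_ids, n_folds=5, seed=0):
    """Group subjects into folds so train/test never share a subject."""
    subjects = np.unique(subject_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(subjects)
    for held_out in np.array_split(subjects, n_folds):
        test_mask = np.isin(subject_ids, held_out)
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# 23 participants with a few trials each (6 per subject here, for illustration)
ids = np.repeat(np.arange(23), 6)
for train_idx, test_idx in subject_folds(ids):
    shared = set(ids[train_idx]) & set(ids[test_idx])
    assert not shared        # no subject appears in both splits
```

Each iteration yields index arrays usable to slice the feature matrix and labels for one cross-validation fold.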

| Trial duration | Feature | #Features | Accuracy SVM (%) | Accuracy k-NN (%) |
|---|---|---|---|---|
| 15 s | Hjorth | 42 | 56.20 | 56.50 |
| 15 s | qEEG | 25 | 56.50 | 56.50 |
| 15 s | HHT | 10 | 57.00 | 56.80 |
| 15 s | Power | 56 | 58.30 | 57.60 |
| 15 s | RMS | 56 | 59.10 | 56.50 |
| 5 s | Hjorth | 42 | 57.40 | 58.80 |
| 5 s | qEEG | 25 | 56.80 | 56.40 |
| 5 s | HHT | 9 | 56.90 | 56.30 |
| 5 s | Power | 56 | 62.00 | 63.20 |
| 5 s | RMS | 56 | 65.30 | 64.30 |
| 1 s | Hjorth | 42 | 60.10 | 57.00 |
| 1 s | qEEG | 25 | 58.30 | 56.40 |
| 1 s | HHT | 7 | 56.60 | 56.30 |
| 1 s | Power | 56 | 64.40 | 68.00 |
| 1 s | RMS | 56 | 70.20 | 73.60 |
TABLE VIII: Anxiety detection results for 4 levels

Anxiety detection results for 4 levels are presented in Table VIII, where we report results obtained with different kinds of features. We note that DWT-based RMS features with SVM achieve the best results, 59.10% and 65.30%, for the 15 s and 5 s trial durations, respectively. When the trial length is 1 s, the accuracy increases further with the k-NN classifier, again with DWT-based RMS features, reaching 73.60%.

| Trial duration | Feature | #Features | Accuracy SVM (%) | Accuracy k-NN (%) |
|---|---|---|---|---|
| 15 s | Hjorth | 42 | 66.30 | 63.80 |
| 15 s | qEEG | 25 | 64.10 | 63.80 |
| 15 s | HHT | 10 | 64.10 | 64.10 |
| 15 s | Power | 56 | 66.30 | 66.30 |
| 15 s | RMS | 56 | 66.30 | 67.00 |
| 5 s | Hjorth | 42 | 72.90 | 64.90 |
| 5 s | qEEG | 25 | 64.00 | 63.60 |
| 5 s | HHT | 9 | 64.00 | 64.10 |
| 5 s | Power | 56 | 73.10 | 70.50 |
| 5 s | RMS | 56 | 72.90 | 73.40 |
| 1 s | Hjorth | 42 | 67.40 | 81.40 |
| 1 s | qEEG | 25 | 64.00 | 63.50 |
| 1 s | HHT | 7 | 64.00 | 63.60 |
| 1 s | Power | 56 | 76.00 | 74.90 |
| 1 s | RMS | 56 | 77.40 | 80.30 |
TABLE IX: Anxiety detection results for 2 levels

Table IX presents the results of two-level anxiety detection (light and severe). Power and RMS features obtained after a DWT on the 15 s EEG signals give significant rates of 66.30% and 67.00% with the SVM and k-NN classifiers, respectively. For the 5 s trial duration, 73.40% is achieved with DWT-based RMS features and the k-NN classifier, slightly outperforming the 73.10% obtained with Power features and the SVM classifier. For the 1 s trial duration, classification accuracy reached 81.40% using Hjorth parameters and the k-NN classifier, against 77.40% with DWT-based RMS features and SVM.

The aforementioned results make clear that detection from a one-second trial length is more accurate, which relates to the nature of anxiety as an emotion: it can be evoked within 1 s, whereas 5 s or 15 s trials are longer and may contain more than one emotion. In addition, the best rates are obtained with time-frequency features computed after a wavelet transform. Regardless of the trial duration, the features produced by the Hilbert-Huang Transform and the set of quantitative EEG features do not lead to high accuracy here, despite having achieved high rates in many other studies. Finally, although Hjorth parameters are the simplest features to extract from an EEG signal, they produce significant accuracy across all configurations.

| Features | #Features | L1-L2 sizes | 2 levels (%) | 4 levels (%) |
|---|---|---|---|---|
| Time | 67 | 44-33 | 67.90 | 57.80 |
| Time | 67 | 33-16 | 67.60 | 59.90 |
| Frequency | 154 | 102-77 | 82.00 | 71.20 |
| Frequency | 154 | 77-38 | 78.70 | 68.80 |
| Time-Frequency | 112 | 74-56 | 67.10 | 59.70 |
| Time-Frequency | 112 | 56-28 | 67.10 | 59.50 |
| All | 277 | 184-138 | 83.50 | 74.60 |
| All | 277 | 138-70 | 81.60 | 72.60 |
TABLE X: Anxiety detection results using SSAE

To further study the impact of feature type on the performance of the proposed system, feature vectors from the 1 s trials were grouped by type and then passed to the SSAE. Since the role of the SSAE is to select the most relevant features, we set the sizes of both hidden layers to be smaller than the input size. Note that a softmax layer is added to perform the classification task. Table X shows the results obtained for Time, Frequency, Time-Frequency and All features. Frequency features outperform the other types, with 82.00% and 71.20% for 2 and 4 anxiety levels, respectively. The highest accuracy is obtained with the combination of all features, reaching 83.50% for 2 anxiety levels and 74.60% for 4 anxiety levels.

While the combination of features provides richer information, the representations generated by the SSAE proved effective at capturing its more discriminative aspects. The 2-level result is higher than the 4-level one, mainly because of the increased complexity of the latter classification task.

8 Conclusion

A novel approach to anxiety elicitation based on face-to-face psychological stimulation has been presented in this paper. A dataset was constructed containing EEG data gathered during the experiment. We presented analyses of the acquired data showing the efficiency of the followed strategy. The approach succeeded in inducing anxiety levels, which was validated by HAM-A test scores calculated before and after the experiment.

Various sets of features for EEG-based emotion recognition, and anxiety/stress in particular, are reviewed and applied in this paper. We presented the most popular feature extraction techniques from the wide range used in the literature; some perform slightly better than others. We also investigated which trial durations are most promising and which features are most effective for them.

To the best of our knowledge, there are no other available databases containing EEG data recorded with a portable device. The Emotiv Epoc headset used in our work is available to everyone and is easy to install and use. Patients can use it at home and check their stress levels without consulting an expert. Many clinical applications can be derived from this work, improving quality of life and reducing cognitive disabilities.


Acknowledgments

The research leading to these results has received funding from the Ministry of Higher Education and Scientific Research of Tunisia under grant agreement number LR11ES48.


  • [1] A. Felman. (2018) What are anxiety disorders?. medical news today. [Online]. Available: https://www.medicalnewstoday.com/articles/323454.php
  • [2] ADAA. (2018) Anxiety and depression association of america. [Online]. Available: https://adaa.org/
  • [3] H. Y. Khdour, O. M. Abushalbaq, I. T. Mughrabi, A. F. Imam, M. A. Gluck, M. M. Herzallah, and A. A. Moustafa, “Generalized anxiety disorder and social anxiety disorder, but not panic anxiety disorder, are associated with higher sensitivity to learning from negative feedback: behavioral and computational investigation,” Frontiers in integrative neuroscience, vol. 10, p. 20, 2016.
  • [4] R. Fourati, B. Ammar, J. Sanchez-Medina, and A. M. Alimi, “Unsupervised learning in reservoir computing for eeg-based emotion recognition,” arXiv preprint arXiv:1811.07516, 2018.
  • [5] B. García-Martínez, A. Martínez-Rodrigo, R. Zangróniz, J. M. Pastor, and R. Alcaraz, “Symbolic analysis of brain dynamics detects negative stress,” Entropy, vol. 19, no. 5, p. 196, 2017.
  • [6] S. Katsigiannis and N. Ramzan, “Dreamer: a database for emotion recognition through eeg and ecg signals from wireless low-cost off-the-shelf devices,” IEEE journal of biomedical and health informatics, vol. 22, no. 1, pp. 98–107, 2018.
  • [7] G. Giannakakis, D. Grigoriadis, and M. Tsiknakis, “Detection of stress/anxiety state from eeg features during video watching,” in Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE.   IEEE, 2015, pp. 6034–6037.
  • [8] T. F. Bastos-Filho, A. Ferreira, A. C. Atencio, S. Arjunan, and D. Kumar, “Evaluation of feature extraction techniques in emotional state recognition,” in Intelligent human computer interaction (IHCI), 2012 4th international conference on.   IEEE, 2012, pp. 1–6.
  • [9] S. A. Hosseini, M. A. Khalilzadeh, and S. Changiz, “Emotional stress recognition system for affective computing based on bio-signals,” Journal of Biological Systems, vol. 18, no. spec01, pp. 101–114, 2010.
  • [10] V. Vanitha and P. Krishnan, “Real time stress detection system based on eeg signals,” Biomedical Research, pp. 271–275, 2016.
  • [11] M. N. B. Patil, M. R. P. Mirajkar, M. S. Patil, and M. P. Patil, “A method for detection and reduction of stress using eeg,” 2017.
  • [12] C.-K. A. Lim and W. C. Chia, “Analysis of single-electrode eeg rhythms using matlab to elicit correlation with cognitive stress,” International Journal of Computer Theory and Engineering, vol. 7, no. 2, p. 149, 2015.
  • [13] R. Khosrowabadi, C. Quek, K. K. Ang, S. W. Tung, and M. Heijnen, “A brain-computer interface for classifying eeg correlates of chronic mental stress.” in IJCNN, 2011, pp. 757–762.
  • [14] Y. Aribi, A. Wali, and A. M. Alimi, “Automated fast marching method for segmentation and tracking of region of interest in scintigraphic images sequences,” in International Conference on Computer Analysis of Images and Patterns.   Springer, 2015, pp. 725–736.
  • [15] ——, “An intelligent system for renal segmentation,” in e-Health Networking, Applications & Services (Healthcom), 2013 IEEE 15th International Conference on.   IEEE, 2013, pp. 11–15.
  • [16] Y. Aribi, A. Wali, M. Chakroun, and A. M. Alimi, “Automatic definition of regions of interest on renal scintigraphic images,” AASRI Procedia, vol. 4, pp. 37–42, 2013.
  • [17] Y. Aribi, A. Wali, and A. M. Alimi, “A system based on the fast marching method for analysis and processing dicom images: the case of renal scintigraphy dynamic,” in Computer Medical Applications (ICCMA), 2013 International Conference on.   IEEE, 2013, pp. 1–6.
  • [18] Y. Aribi, A. Wali, F. Hamza, A. M. Alimi, and F. Guermazi, “Analysis of scintigraphic renal dynamic studies: an image processing tool for the clinician and researcher,” in International Conference on Advanced Machine Learning Technologies and Applications.   Springer, 2012, pp. 267–275.
  • [19] Y. Aribi, F. Hamza, W. Ali, A. M. Alimi, and F. Guermazi, “An automated system for the segmentation of dynamic scintigraphic images,” Applied Medical Informatics, vol. 34, no. 2, pp. 1–12, 2014.
  • [20] A. Baghdadi, Y. Aribi, and A. M. Alimi, “A survey of methods and performances for eeg-based emotion recognition,” in International Conference on Hybrid Intelligent Systems.   Springer, 2016, pp. 164–174.
  • [21] S. M. Alarcao and M. J. Fonseca, “Emotions recognition using eeg signals: A survey,” IEEE Transactions on Affective Computing, 2017.
  • [22] R. Fourati, B. Ammar, C. Aouiti, J. Sanchez-Medina, and A. M. Alimi, “Optimized echo state network with intrinsic plasticity for eeg-based emotion recognition,” in International Conference on Neural Information Processing.   Springer, 2017, pp. 718–727.
  • [23] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, “Deap: A database for emotion analysis; using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
  • [24] C. U. S. S. A. R. Prashant Lahane, Amit Vaidya, “Real time system to detect human stress using eeg signals,” International Journal of Innovative Research in Computer and Communication Engineering, vol. 4, no. 4, 2016.
  • [25] H. Ekanayake, “P300 and emotiv epoc: Does emotiv epoc capture real eeg?” Web publication: http://neurofeedback.visaduma.info/emotivresearch.htm, 2010.
  • [26] N. Sulaiman, M. N. Taib, S. Lias, Z. H. Murat, S. A. Aris, and N. H. A. Hamid, “Novel methods for stress features identification using eeg signals,” International Journal of Simulation: Systems, Science and Technology, vol. 12, no. 1, pp. 27–33, 2011.
  • [27] D. Oude, “Eeg-based emotion recognition the influence of visual and auditory stimuli,” Emotion, vol. 57, pp. 1798–1806, 2007.
  • [28] G. Jun and K. G. Smitha, “Eeg based stress level identification,” in 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC).   IEEE, 2016, pp. 003 270–003 274.
  • [29] K. McEvoy, K. Hasenstab, D. Senturk, A. Sanders, and S. S. Jeste, “Physiologic artifacts in resting state oscillations in young children: methodological considerations for noisy data,” Brain imaging and behavior, vol. 9, no. 1, pp. 104–114, 2015.
  • [30] J. A. Russell and A. Mehrabian, “Evidence for a three-factor theory of emotions,” Journal of research in Personality, vol. 11, no. 3, pp. 273–294, 1977.
  • [31] J. A. Russell, “A circumplex model of affect.” Journal of personality and social psychology, vol. 39, no. 6, p. 1161, 1980.
  • [32] Y. Liu, O. Sourina, and M. K. Nguyen, “Real-time eeg-based emotion recognition and its applications,” in Transactions on computational science XII.   Springer, 2011, pp. 256–277.
  • [33] N. Jatupaiboon, S. Pan-ngum, and P. Israsena, “Real-time eeg-based happiness detection system,” The Scientific World Journal, vol. 2013, 2013.
  • [34] ——, “Emotion classification using minimal eeg channels and frequency bands,” in Computer Science and Software Engineering (JCSSE), 2013 10th International Joint Conference on.   IEEE, 2013, pp. 21–24.
  • [35] V. H. Anh, M. N. Van, B. B. Ha, and T. H. Quyet, “A real-time model based support vector machine for emotion recognition through eeg,” in Control, Automation and Information Sciences (ICCAIS), 2012 International Conference on.   IEEE, 2012, pp. 191–196.
  • [36] J. A. Coan and J. J. Allen, “Frontal eeg asymmetry and the behavioral activation and inhibition systems,” Psychophysiology, vol. 40, no. 1, pp. 106–114, 2003.
  • [37] D. S. Benitez, S. Toscano, and A. Silva, “On the use of the emotiv epoc neuroheadset as a low cost alternative for eeg signal acquisition,” in Communications and Computing (COLCOM), 2016 IEEE Colombian Conference on.   IEEE, 2016, pp. 1–6.
  • [38] A. Delorme and S. Makeig, “Eeglab: an open source toolbox for analysis of single-trial eeg dynamics including independent component analysis,” Journal of neuroscience methods, vol. 134, no. 1, pp. 9–21, 2004.
  • [39] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, “Identifying stable patterns over time for emotion recognition from eeg,” IEEE Transactions on Affective Computing, 2017.
  • [40] R. Jenke, A. Peer, and M. Buss, “Feature extraction and selection for emotion recognition from eeg,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327–339, 2014.
  • [41] B. Hjorth, “Eeg analysis based on time domain properties,” Electroencephalography and clinical neurophysiology, vol. 29, no. 3, pp. 306–310, 1970.
  • [42] K. Ansari Asl, G. Chanel, and T. Pun, “A channel selection method for eeg classification in emotion assessment based on synchronization likelihood,” 2007.
  • [43] R. Horlings, D. Datcu, and L. J. Rothkrantz, “Emotion recognition using brain activity,” in Proceedings of the 9th international conference on computer systems and technologies and workshop for PhD students in computing.   ACM, 2008, p. 6.
  • [44] O. Sourina and Y. Liu, “A fractal-based algorithm of emotion recognition from eeg using arousal-valence model.” in Biosignals, 2011, pp. 209–214.
  • [45] J. N. Saby and P. J. Marshall, “The utility of eeg band power analysis in the study of infancy and early childhood,” Developmental neuropsychology, vol. 37, no. 3, pp. 253–273, 2012.
  • [46] K. I. Panoulas, L. J. Hadjileontiadis, and S. M. Panas, “Hilbert-huang spectrum as a new field for the identification of eeg event related de-/synchronization for bci applications,” in Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE.   IEEE, 2008, pp. 3832–3835.
  • [47] N. Zhuang, Y. Zeng, L. Tong, C. Zhang, H. Zhang, and B. Yan, “Emotion recognition from eeg signals using multidimensional information in emd domain,” BioMed research international, vol. 2017, 2017.
  • [48] M. Murugappan, M. Rizon, R. Nagarajan, and S. Yaacob, “Inferring of human emotional states using multichannel eeg,” European Journal of Scientific Research, vol. 48, no. 2, pp. 281–299, 2010.
  • [49] J. M. Toole and G. B. Boylan, “Neural: quantitative features for newborn eeg using matlab,” arXiv preprint arXiv:1704.05694, 2017.
  • [50] A. Demerdzieva and N. Pop-Jordanova, “Relation between frontal alpha asymmetry and anxiety in young patients with generalized anxiety disorder,” prilozi, vol. 36, no. 2, pp. 157–177, 2015.
  • [51] V. N. Vapnik, “An overview of statistical learning theory,” IEEE transactions on neural networks, vol. 10, no. 5, pp. 988–999, 1999.
  • [52] R. Fourati, C. Aouiti, and A. M. Alimi, “Improved recurrent neural network architecture for svm learning,” in Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference on.   IEEE, 2015, pp. 178–182.
  • [53] M. Murugappan, N. Ramachandran, and Y. Sazali, “Classification of human emotion from eeg using discrete wavelet transform,” Journal of Biomedical Science and Engineering, vol. 3, no. 04, p. 390, 2010.
  • [54] H.-C. Shin, M. R. Orton, D. J. Collins, S. J. Doran, and M. O. Leach, “Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4d patient data,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 8, pp. 1930–1943, 2013.
  • [55] J. Deng, Z. Zhang, E. Marchi, and B. Schuller, “Sparse autoencoder-based feature transfer learning for speech emotion recognition,” in 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.   IEEE, 2013, pp. 511–516.
  • [56] W. Sun, S. Shao, R. Zhao, R. Yan, X. Zhang, and X. Chen, “A sparse auto-encoder-based deep neural network approach for induction motor faults classification,” Measurement, vol. 89, pp. 171–178, 2016.

Asma Baghdadi (IEEE Graduate Student Member) is a PhD student in Computer Systems Engineering at the National Engineering School of Sfax (ENIS) and at the University of Paris-Est. She graduated as a software engineer in 2014 from the Higher Institute of Computer Science of Tunis. She is currently a member of the REsearch Group in Intelligent Machines (REGIM) and of the Images, Signals and Intelligent Systems Laboratory (LiSSi, UPEC). Since 2016, her research interests include affective computing, intelligent systems and EEG signal analysis.

Yassine Aribi (IEEE Student Member’08, Senior Member’16) received the B.S. degree in Computer Science applied to management from the Faculty of Economic Sciences and Management of Sfax (FSEG) in 2002, the M.S. in Novel Technologies for Computer Systems from the National Engineering School of Sfax (ENIS) in 2011 and the PhD degree in Computer Science from ENIS, Tunisia, in 2015. In 2003, he joined Gabès University, where he was a common-trunk professor in the Department of Computer Science, and he has been a research member of the REsearch Groups on Intelligent Machines (REGIM) since 2010. In 2015, he joined the University of Monastir, where he is currently an assistant at the Higher Institute of Computer Science of Mahdia (Tunisia). His research interests include medical image processing using intelligent tools.

Rahma Fourati (IEEE Graduate Student Member’15) was born in Sfax. She has been a PhD student in Computer Systems Engineering at the National Engineering School of Sfax (ENIS) since October 2015. She received the M.S. degree in 2011 from the Faculty of Economic Sciences and Management of Sfax (FSEGS). She is currently a member of the REsearch Group in Intelligent Machines (REGIM). Her research interests include recurrent neural networks, affective computing, EEG signal analysis, support vector machines and Simulink modeling.

Najla Halouani was born in Sfax. She has been an assistant at the university hospital in psychiatry since October 2012. She received the Doctorate of State in Medicine in 2011 from the Faculty of Medicine of Sfax (FMS), and the Simulation Diploma in Medical Sciences in 2014. Her research interests include the diagnosis and therapy of stress and anxiety, the study of depression, and post-traumatic stress disorder.

Patrick Siarry (SM’95) is currently a Professor and a Doctoral Supervisor of Automatics and Informatics with the University of Paris-Est Créteil, Créteil, France. He is the Head of one team of the Laboratoire Images, Signaux et Systèmes Intelligents, University of Paris-Est Créteil. Since 2004, he has supervised META, a Research Working Group within the Centre National de la Recherche Scientifique, interested in theoretical and practical advances in metaheuristics for hard optimization. His current research interests include the improvement of existing metaheuristics for hard optimization, the adaptation of discrete metaheuristics to continuous optimization, the hybridization of metaheuristics with classical optimization techniques, dynamic optimization, and particle swarm optimization.

Adel M. Alimi (IEEE Student Member’91, Member’96, Senior Member’00) was born in Sfax (Tunisia) in 1966. He graduated in Electrical Engineering in 1990, and obtained a Ph.D. and then an HDR, both in Electrical and Computer Engineering, in 1995 and 2000, respectively. He is now a Professor in Electrical and Computer Engineering at the University of Sfax. His research interests include applications of intelligent methods (neural networks, fuzzy logic, evolutionary algorithms) to pattern recognition, robotic systems, vision systems and industrial processes. He focuses his research on intelligent pattern recognition, learning, analysis and intelligent control of large-scale complex systems. He is an Associate Editor and member of the editorial board of many international scientific journals (e.g. Pattern Recognition Letters, Neurocomputing, Neural Processing Letters, International Journal of Image and Graphics, Neural Computing and Applications, International Journal of Robotics and Automation, International Journal of Systems Science). He was a Guest Editor of several special issues of international journals (e.g. Fuzzy Sets and Systems, Soft Computing, Journal of Decision Systems, Integrated Computer Aided Engineering, Systems Analysis Modeling and Simulations). He is an IEEE Senior Member and a member of IAPR, INNS and PRS.
