Deep learning-based electroencephalography analysis: a systematic review


Yannick Roy
Faubert Lab, Université de Montréal, Montréal, Canada
yannick.roy@umontreal.ca

Hubert Banville
Inria, Université Paris-Saclay, Paris, France
InteraXon Inc., Toronto, Canada

Isabela Albuquerque
MuSAE Lab, INRS-EMT, Université du Québec, Montréal, Canada

Alexandre Gramfort
Inria, Université Paris-Saclay, Paris, France

Tiago H. Falk
MuSAE Lab, INRS-EMT, Université du Québec, Montréal, Canada

Jocelyn Faubert
Faubert Lab, Université de Montréal, Montréal, Canada

The first two authors contributed equally to this work.
Abstract

Context. Electroencephalography (EEG) is a complex signal whose correct interpretation can require several years of training as well as advanced signal processing and feature extraction methodologies. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question.

Objective. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations.

Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends.

Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. Convolutional neural networks (CNNs) were the most widely used architecture, followed by recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed that studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code.

Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.

\makeglossaries
\newacronym{dl}{DL}{deep learning}
\newacronym{eeg}{EEG}{electroencephalography}
\newacronym{meg}{MEG}{magnetoencephalography}
\newacronym{emg}{EMG}{electromyography}
\newacronym{eog}{EOG}{electrooculography}
\newacronym{ecog}{ECoG}{electrocorticography}
\newacronym{snr}{SNR}{signal-to-noise ratio}
\newacronym{bci}{BCI}{brain-computer interface}
\newacronym{erp}{ERP}{event-related potential}
\newacronym{rsvp}{RSVP}{rapid serial visual presentation}
\newacronym{dnn}{DNN}{deep neural network}
\newacronym{cnn}{CNN}{convolutional neural network}
\newacronym{rnn}{RNN}{recurrent neural network}
\newacronym{lstm}{LSTM}{long short-term memory}
\newacronym{dbn}{DBN}{deep belief network}
\newacronym{rbm}{RBM}{restricted Boltzmann machine}
\newacronym{gan}{GAN}{generative adversarial network}
\newacronym{ae}{AE}{autoencoder}
\newacronym{sdae}{SDAE}{stacked denoising autoencoder}
\newacronym{fc}{FC}{fully-connected}
\newacronym{nlp}{NLP}{natural language processing}
\newacronym{cv}{CV}{computer vision}
\newacronym{sgd}{SGD}{stochastic gradient descent}
\newacronym{rocauc}{ROC AUC}{area under the receiver operating curve}
\newacronym{svm}{SVM}{support vector machine}
\newacronym{ica}{ICA}{independent component analysis}
\newacronym{stft}{STFT}{short-time Fourier transform}
\newacronym{psd}{PSD}{power spectral density}
\newacronym{cca}{CCA}{canonical correlation analysis}

\keywords{EEG, electroencephalogram, deep learning, neural networks, review, survey}

1 Introduction

1.1 Measuring brain activity with EEG

\Glseeg, the measure of the electrical fields produced by the active brain, is a neuroimaging technique widely used inside and outside the clinical domain. Specifically, \glseeg picks up the electric potential differences, on the order of tens of microvolts, that reach the scalp when tiny excitatory post-synaptic potentials produced by pyramidal neurons in the cortical layers of the brain sum together. The measured potentials therefore reflect neuronal activity and can be used to study a wide array of brain processes.

Thanks to the great speed at which electric fields propagate, \glseeg has an excellent temporal resolution: events occurring at millisecond timescales can typically be captured. However, \glseeg suffers from low spatial resolution, as the electric fields generated by the brain are smeared by the tissues, such as the skull, situated between the sources and the sensors. As a result, \glseeg channels are often highly correlated spatially. The source localization problem, or inverse problem, is an active area of research in which algorithms are developed to reconstruct brain sources given \glseeg recordings [69].

There are many applications for \glseeg. For example, in clinical settings, \glseeg is often used to study sleep patterns [1] or epilepsy [3]. Various conditions have also been linked to changes in electrical brain activity and can therefore be monitored to various extents using \glseeg. These include attention deficit hyperactivity disorder (ADHD) [10], disorders of consciousness [46, 41], and depth of anaesthesia [60]. \glseeg is also widely used in neuroscience and psychology research, as it is an excellent tool for studying the brain and its functioning. Applications such as cognitive and affective monitoring are very promising, as they could allow unbiased measures of, for example, an individual’s level of fatigue, mental workload [19, 176], mood, or emotions [5]. Finally, \glseeg is widely used in \glsplbci - communication channels that bypass the natural output pathways of the brain - to allow brain activity to be directly translated into directives that affect the user’s environment [106].

1.2 Current challenges in EEG processing

Although \glseeg has proven to be a critical tool in many domains, it still suffers from a few limitations that hinder its effective analysis or processing. First, \glseeg has a low \glssnr [20, 77], as the brain activity measured is often buried under multiple sources of environmental, physiological and activity-specific noise of similar or greater amplitude, called “artifacts”. Various filtering and noise reduction techniques therefore have to be used to minimize the impact of these noise sources and to extract true brain activity from the recorded signals.

\Glseeg is also a non-stationary signal [30, 57], that is, its statistics vary across time. As a result, a classifier trained on a temporally limited amount of user data might generalize poorly to data recorded at a different time on the same individual. This is an important challenge for real-life applications of \glseeg, which often need to work with limited amounts of data.

Finally, high inter-subject variability also limits the usefulness of \glseeg applications. This phenomenon arises due to physiological differences between individuals, which vary in magnitude but can severely affect the performance of models that are meant to generalize across subjects [29]. Since the ability to generalize from a first set of individuals to a second, unseen set is key to many practical applications of \glseeg, a lot of effort is being put into developing methods that can handle inter-subject variability.

To solve some of the above-mentioned problems, processing pipelines with domain-specific approaches are often used. A significant amount of research has been put into developing processing pipelines to clean, extract relevant features, and classify \glseeg data. State-of-the-art techniques, such as Riemannian geometry-based classifiers and adaptive classifiers [105], can handle these problems with varying levels of success.

Additionally, a wide variety of tasks would benefit from a higher level of automated processing. For example, sleep scoring, the process of annotating sleep recordings by categorizing windows of a few seconds into sleep stages, currently requires a lot of time, being done manually by trained technicians. More sophisticated automated \glseeg processing could make this process much faster and more flexible. Similarly, real-time detection or prediction of the onset of an epileptic seizure would be very beneficial to epileptic individuals, but also requires automated \glseeg processing. For each of these applications, most common implementations require domain-specific processing pipelines, which further reduces the flexibility and generalization capability of current \glseeg-based technologies.

1.3 Improving EEG processing with deep learning

To overcome the challenges described above, new approaches are required to improve the processing of \glseeg towards better generalization capabilities and more flexible applications. In this context, \glsdl [88] could significantly simplify processing pipelines by allowing automatic end-to-end learning of preprocessing, feature extraction and classification modules, while also reaching competitive performance on the target task. Indeed, in the last few years, \glsdl architectures have been very successful in processing complex data such as images, text and audio signals [88], leading to state-of-the-art performance on multiple public benchmarks - such as the Large Scale Visual Recognition challenge [35] - and an ever-increasing role in industrial applications.

\Glsdl, a subfield of machine learning, studies computational models that learn hierarchical representations of input data through successive non-linear transformations [88]. \Glspldnn, inspired by earlier models such as the perceptron [142], are models in which 1) stacked layers of artificial “neurons” each apply a linear transformation to the data they receive and 2) the result of each layer’s linear transformation is fed through a non-linear activation function. Importantly, the parameters of these transformations are learned by directly minimizing a cost function. Although the term “deep” implies the inclusion of many layers, there is no consensus on how to measure depth in a neural network and therefore on what really constitutes a deep network and what does not [53].
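The stacked "linear transformation followed by a non-linearity" structure described above can be sketched in a few lines of NumPy. This is a minimal illustration with arbitrary, untrained weights and made-up layer sizes, not the architecture of any reviewed study:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation applied after each linear transformation
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Forward pass of a small fully-connected network: each layer
    applies a linear transformation, then a non-linear activation."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)
    return h

# Toy network: 8 input features -> 4 hidden units -> 2 outputs
weights = [rng.standard_normal((8, 4)), rng.standard_normal((4, 2))]
biases = [np.zeros(4), np.zeros(2)]
out = forward(rng.standard_normal((1, 8)), weights, biases)
print(out.shape)  # (1, 2)
```

In practice the parameters would be learned by minimizing a cost function with a gradient-based optimizer rather than drawn at random.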

Fig. 1 presents an overview of how \glseeg data (and similar multivariate time series) can be formatted to be fed into a \glsdl model, along with some important terminology (see Section 1.4), as well as an illustration of a generic neural network architecture. Usually, when c channels are available and a window has a length of l samples, the input of a neural network for \glseeg processing consists of a c × l array containing the samples corresponding to a window for all channels. This array can be used as an example for training a neural network, as shown in Fig. 1b. Variations of this end-to-end formulation can be imagined where the window is first passed through a preprocessing and feature extraction pipeline (e.g., a time-frequency transform), yielding an example which is then used as input to the neural network instead.
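The window extraction step above can be sketched as follows. This is an illustrative NumPy implementation only; the channel count, window length and overlap are arbitrary choices, not parameters from any reviewed study:

```python
import numpy as np

def extract_windows(recording, window_len, step):
    """Cut a (channels, time) EEG recording into overlapping
    (channels, window_len) examples, as sketched in Fig. 1a."""
    c, t = recording.shape
    starts = range(0, t - window_len + 1, step)
    return np.stack([recording[:, s:s + window_len] for s in starts])

# 4 channels, 1000 time points; 250-sample windows with 50% overlap
eeg = np.random.randn(4, 1000)
examples = extract_windows(eeg, window_len=250, step=125)
print(examples.shape)  # (7, 4, 250)
```

Each of the resulting c × l arrays is one training example; a feature extraction step (e.g., a time-frequency transform) could be applied to each window before it is fed to the network.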

Different types of layers are used as building blocks in neural networks. Most commonly, those are \glsfc, convolutional or recurrent layers. We refer to models using these types of layers as \glsfc networks, \glsplcnn [89] and \glsplrnn [145], respectively. Here, we provide a quick overview of the main architectures and types of models. The interested reader is referred to the relevant literature for more in-depth descriptions of \glsdl methodology [88, 53, 151].

\Glsfc layers are composed of fully-connected neurons, i.e., each neuron receives as input the activations of every single neuron of the preceding layer. Convolutional layers, on the other hand, impose a particular structure in which neurons in a given layer only see a subset of the activations of the preceding one. This structure, akin to convolutions in signal or image processing from which it gets its name, encourages the model to learn invariant representations of the data. This property stems from another fundamental characteristic of convolutional layers: their parameters are shared across different neurons, which can be interpreted as having filters that look for the same information across all patches of the input. In addition, pooling layers can be introduced so that the representations learned by the model become invariant to slight translations of the input. This is often a desirable property: for instance, in an object recognition task, translating the content of an image should not affect the prediction of the model. Imposing these kinds of priors thus works exceptionally well on data with spatial structure. In contrast to convolutional layers, recurrent layers impose a structure by which, in its most basic form, a layer receives as input both the preceding layer’s current activations and its own activations from a previous time step. Models composed of recurrent layers are thus encouraged to make use of the temporal structure of the data and have shown high performance in \glsnlp tasks [230, 210].
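To make the parameter-sharing and pooling ideas concrete, the sketch below implements a 1D convolution by hand: the same two-tap kernel is applied to every patch of a toy signal, and a non-overlapping max-pooling step then makes the output robust to small shifts. The signal and kernel are made up for illustration:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution: the same kernel (shared parameters)
    is slid across every patch of the input."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

def max_pool(x, size):
    """Non-overlapping max-pooling: keeps the strongest response in
    each region, yielding some invariance to slight translations."""
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

# A toy signal with two identical "events"; one edge-detecting kernel
signal = np.array([0., 0., 1., 0., 0., 0., 1., 0.])
kernel = np.array([1., -1.])
feat = conv1d(signal, kernel)
print(max_pool(feat, 2))
```

Because the kernel is reused at every position, the layer responds identically to both events regardless of where they occur, which is exactly the invariance property described above.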

Additionally, outside of purely supervised tasks, other architectures and learning strategies can be built to train models when no labels are available. For example, \glsplae learn a representation of the input data by trying to reproduce their input given some constraints, such as sparsity or the introduction of artificial noise [53]. \Glsplgan [54] are trained by opposing a generator (G), that tries to generate fake examples from an unknown distribution of interest, to a discriminator (D), that tries to identify whether the input it receives has been artificially generated by G or is an example from the unknown distribution of interest. This dynamic can be compared to the one between a thief (G) making fake money and the police (D) trying to distinguish fake money from real money. Both agents push one another to get better, up to a point where the fake money looks exactly like real money. The training of G and D can thus be interpreted as a two-player zero-sum minimax game. When equilibrium is reached, the probability distribution approximated by G converges to the real data distribution [54].
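As an illustration of the autoencoder idea, the sketch below trains a tiny linear denoising autoencoder with plain gradient descent: artificial noise is added to the input, and the network is trained to reconstruct the clean data through a bottleneck. All sizes, learning rates and the toy data are arbitrary demonstration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points lying on a 1-D line (the "clean" structure)
x = rng.standard_normal((256, 1)) @ np.array([[2.0, 1.0]])

# Linear autoencoder with a 1-unit bottleneck, trained to reconstruct
# the clean input from a noise-corrupted version of it.
W_enc = rng.standard_normal((2, 1)) * 0.1
W_dec = rng.standard_normal((1, 2)) * 0.1
lr = 0.05
for _ in range(1000):
    noisy = x + 0.1 * rng.standard_normal(x.shape)  # artificial noise
    z = noisy @ W_enc            # bottleneck code
    recon = z @ W_dec            # reconstruction
    err = recon - x              # denoising objective: recover clean x
    # Gradients of the mean squared reconstruction error
    W_dec -= lr * z.T @ err / len(x)
    W_enc -= lr * noisy.T @ (err @ W_dec.T) / len(x)

mse = np.mean((x @ W_enc @ W_dec - x) ** 2)
print(mse)
```

The bottleneck forces the model to summarize each point with a single number, so minimizing the reconstruction error amounts to learning the dominant direction of the data, a representation learned without any labels.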

Overall, there are multiple ways in which \glsdl could improve and extend existing \glseeg processing methods. First, the hierarchical nature of \glspldnn means features could potentially be learned on raw or minimally preprocessed data, reducing the need for domain-specific processing and feature extraction pipelines. Features learned through a \glsdnn might also be more effective or expressive than those engineered by humans. Second, as has been the case in the multiple domains where \glsdl has surpassed the previous state-of-the-art, it has the potential to produce higher levels of performance on different analysis tasks. Third, \glsdl facilitates tasks that are less often attempted on \glseeg data, such as generative modelling [52] and transfer learning [129]. Indeed, generative models can be leveraged to learn intermediate representations or for data augmentation [52]. In transfer learning, model parameters can also be transferred from one subject to another or from task A to task B. This might drastically widen or change the applicability of several \glseeg-based technologies.

On the other hand, there are various reasons why \glsdl might not be optimal for \glseeg processing and that may justify the skepticism of some of the \glseeg community. First and foremost, the datasets typically available in \glseeg research contain far fewer examples than what has led to the current state-of-the-art in \glsdl-heavy domains such as \glscv and \glsnlp. Data collection being relatively expensive and data accessibility often being hindered by privacy concerns - especially with clinical data - openly available datasets of similar sizes are not common, although some initiatives have started to tackle this problem [65]. Second, the peculiarities of \glseeg, such as its low \glssnr, make \glseeg data very different from other types of data (e.g., images, text and speech) for which \glsdl has been most successful. Therefore, the architectures and practices that are currently used in \glsdl might not be readily applicable to \glseeg processing.

1.4 Terminology used in this review

Some terms are used in the fields of machine learning, deep learning, statistics, EEG and signal processing with different meanings. For example, in machine learning, “sample” usually refers to one example of the input received by a model, whereas in statistics it can refer to a group of examples taken from a population. In signal processing and EEG, it can also refer to the measurement at a single time point. Similarly, in deep learning, the term “epoch” refers to one pass through the whole training set during training; in EEG, an epoch is instead a group of consecutive EEG time points extracted around a specific marker. To avoid confusion, we include in Table 1 definitions for a few terms as used in this review. Fig. 1 gives a visual example of what these terms refer to.

Definition used in this review
Point or sample A measure of the instantaneous electric potential picked up by the EEG sensors, typically in microvolts (µV).
Example An instantiation of the data received by a model as input, typically denoted by x in the machine learning literature.
Trial A realization of the task under study, e.g., the presentation of one image in a visual ERP paradigm.
Window or segment A group of consecutive EEG samples extracted for further analysis, typically between 0.5 and 30 seconds.
Epoch A window extracted around a specific trial.
Table 1: Disambiguation of common terms used in this review.
(a) Overlapping windows (which may correspond to trials or epochs in some cases) are extracted from multichannel EEG recordings.
(b) Illustration of a general neural network architecture: an input layer receives an example composed of points or samples (e.g., raw EEG or features), which is processed by hidden layers, and an output layer produces a prediction (e.g., a sleep stage or a BCI classification).
Figure 1: Deep learning-based EEG processing pipeline and related terminology.
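The distinction between windows and epochs in Table 1 can be made concrete with a small helper that extracts fixed-length epochs around event markers. This is an illustrative sketch (the function name, sampling rate and epoch bounds are arbitrary), not code from any reviewed study:

```python
import numpy as np

def extract_epochs(recording, events, sfreq, tmin, tmax):
    """Extract fixed-length epochs around event markers.

    recording: (channels, time) array of EEG samples
    events: sample indices of the trial markers
    tmin/tmax: epoch bounds in seconds relative to each marker
    """
    lo, hi = int(round(tmin * sfreq)), int(round(tmax * sfreq))
    epochs = [recording[:, e + lo:e + hi] for e in events
              if e + lo >= 0 and e + hi <= recording.shape[1]]
    return np.stack(epochs)

# 8 channels sampled at 128 Hz; trial markers at 2 s and 5 s
eeg = np.random.randn(8, 128 * 10)
epochs = extract_epochs(eeg, events=[256, 640], sfreq=128,
                        tmin=-0.2, tmax=0.8)
print(epochs.shape)
```

Unlike the sliding windows of Fig. 1a, each epoch here is time-locked to a trial marker, which is the typical setup for ERP paradigms.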

1.5 Objectives of the review

This systematic review covers the current state-of-the-art in \glsdl-based \glseeg processing by analyzing a large number of recent publications. It provides an overview of the field for researchers familiar with traditional \glseeg processing techniques who are interested in applying \glsdl to their data. At the same time, it aims to introduce the field of \glsdl applied to \glseeg to \glsdl researchers interested in expanding the types of data they benchmark their algorithms on, or who want to contribute to \glseeg research. For readers in any of these scenarios, this review also provides detailed methodological information on the various components of a DL-EEG pipeline to inform their own implementation (additional information with more fine-grained data can be found in our data items table, available at http://dl-eeg.com). In addition to reporting trends and highlighting interesting approaches, we distill our analysis into a few recommendations in the hope of fostering reproducible and efficient research in the field.

1.6 Organization of the review

The review is organized as follows: Section 1 briefly introduces key concepts in \glseeg and \glsdl, and details the aims of the review; Section 2 describes how the systematic review was conducted, and how the studies were selected, assessed and analyzed; Section 3 focuses on the most important characteristics of the selected studies and describes trends and promising approaches; Section 4 discusses critical topics and challenges in DL-EEG, and provides recommendations for future studies; and Section 5 concludes by suggesting future avenues of research in DL-EEG. Finally, supplementary material containing our full data collection table, as well as the code used to produce the graphs, tables and results reported in this review, is made available online.

2 Methods

English journal and conference papers, as well as electronic preprints, published between January 2010 and July 2018, were chosen as the target of this review. PubMed, Google Scholar and arXiv were queried to collect an initial list of papers to be reviewed (the queries used for each database are available at http://dl-eeg.com). Additional papers were identified by scanning the reference sections of these papers. The databases were queried for the last time on July 2, 2018.

The following title and abstract search terms were used to query the databases: 1) EEG, 2) electroencephalogra*, 3) deep learning, 4) representation learning, 5) neural network*, 6) convolutional neural network*, 7) ConvNet, 8) CNN, 9) recurrent neural network*, 10) RNN, 11) long short-term memory, 12) LSTM, 13) generative adversarial network*, 14) GAN, 15) autoencoder, 16) restricted boltzmann machine*, 17) deep belief network* and 18) DBN. The search terms were further combined with logical operators in the following way: (1 OR 2) AND (3 OR 4 OR 5 OR 6 OR 7 OR 8 OR 9 OR 10 OR 11 OR 12 OR 13 OR 14 OR 15 OR 16 OR 17 OR 18). The papers were then included or excluded based on the criteria listed in Table 2.
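For illustration, the boolean query above can be assembled programmatically as follows. This is a sketch only; the exact quoting and wildcard syntax accepted by each database differs:

```python
# Two term groups: EEG-related terms ANDed with DL-related terms,
# mirroring the (EEG group) AND (DL group) combination described above.
eeg_terms = ["EEG", "electroencephalogra*"]
dl_terms = [
    "deep learning", "representation learning", "neural network*",
    "convolutional neural network*", "ConvNet", "CNN",
    "recurrent neural network*", "RNN", "long short-term memory", "LSTM",
    "generative adversarial network*", "GAN", "autoencoder",
    "restricted boltzmann machine*", "deep belief network*", "DBN",
]

def or_group(terms):
    # Quote each term and join the group with OR
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_group(eeg_terms) + " AND " + or_group(dl_terms)
print(query)
```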

Inclusion criteria:
  • Training of one or multiple deep learning architecture(s) to process non-invasive EEG data.

Exclusion criteria:
  • Studies focusing solely on invasive EEG (e.g., \glsecog and intracortical EEG) or \glsmeg.
  • Papers focusing solely on software tools.
  • Review articles.

Table 2: Inclusion and exclusion criteria.

To assess the eligibility of the selected papers, the titles were read first. If the title did not clearly indicate whether the inclusion and exclusion criteria were met, the abstract was read as well. Finally, when reading the full text during the data collection process, papers that were found to be misaligned with the criteria were rejected.

Non-peer-reviewed papers, such as arXiv electronic preprints (https://arxiv.org/), are a valuable source of state-of-the-art information, as their release cycle is typically shorter than that of peer-reviewed publications. Moreover, unconventional research ideas are more likely to be shared in such repositories, which improves the diversity of the reviewed work and reduces the bias possibly introduced by the peer-review process [126]. Therefore, non-peer-reviewed preprints were also included in our review. However, whenever a peer-reviewed publication followed a preprint submission, the peer-reviewed version was used instead.

A data extraction table was designed containing different data items relevant to our research questions, based on previous reviews with similar scopes and the authors’ prior knowledge of the field. Following a first inspection of the papers with the data extraction sheet, data items were added, removed and refined. Each paper was initially reviewed by a single author, then reviewed by a second author if needed. For each selected article, around 70 data items were extracted covering seven categories: origin of the article, rationale, data used, EEG processing methodology, DL methodology, reported results and reproducibility. Table 3 lists and defines the different items included in each of these categories. We make this data extraction table openly available for interested readers to reproduce our results and dive deeper into the collected data. We also invite authors of published work in the field of DL and EEG to contribute to the table by verifying its content or by adding their articles to it.

The first category covers the origin of the article, that is, whether it comes from a journal, a conference publication or a preprint repository, as well as the country of the first author’s affiliation. This gives a quick overview of the types of publications included in this review and of the main actors in the field. Second, the rationale category focuses on the domains of application of the selected studies. This is valuable information for understanding the extent of the research in the field, and it also enables us to identify trends across and within domains in our analysis. Third, the data category includes all relevant information on the data used by the selected papers. This comprises both the origin of the data and the data collection parameters, in addition to the amount of data that was available in each study. Through this section, we aim to clarify the data requirements for applying DL to EEG. Fourth, the EEG processing parameters category highlights the typical transformations required to apply DL to EEG, and covers preprocessing steps, artifact handling methodology, as well as feature extraction. Fifth, details of the DL methodology, including architecture design, training procedures and inspection methods, are reported to guide the interested reader through state-of-the-art techniques. Sixth, the reported results category reviews the results of the selected articles, as well as how they were reported, and aims to clarify how DL fares against traditional processing pipelines performance-wise. Finally, the reproducibility of the selected articles is quantified by looking at the availability of their data and code. The results of this section support the critical component of our discussion.

Category Data item Description
Origin of article Type of publication Whether the study was published as a journal article, a conference paper or in an electronic preprint repository.
Venue Publishing venue, such as the name of a journal or conference.
Country of first author affiliation Location of the affiliated university, institute or research body of the first author.
Study rationale Domain of application Primary area of application of the selected study. In the case of multiple domains of application, the domain that was the focus of the study was retained.
Data Quantity of data Quantity of data used in the analysis, reported both in total number of samples and total minutes of recording.
Hardware Vendor and model of the EEG recording device used.
Number of channels Number of EEG channels used in the analysis. May differ from the number of recorded channels.
Sampling rate Sampling rate (reported in Hertz) used during the EEG acquisition.
Subjects Number of subjects used in the analysis. May differ from the number of recorded subjects.
Data split and cross-validation Percentage of data used for training, validation, and test, along with the cross-validation technique used, if any.
Data augmentation Data augmentation technique used, if any, to generate new examples.
EEG processing Preprocessing Set of manipulation steps applied to the raw data to prepare it for use by the architecture or for feature extraction.
Artifact handling Whether a method for cleaning artifacts was applied.
Features Output of the feature extraction procedure, which aims to better represent the information of interest contained in the preprocessed data.
Deep learning methodology Architecture Structure of the neural network in terms of types of layers (e.g. fully-connected, convolutional).
Number of layers Measure of architecture depth.
EEG-specific design choices Particular architecture choices made with the aim of processing EEG data specifically.
Training procedure Method applied to train the neural network (e.g., standard optimization, unsupervised pre-training followed by supervised fine-tuning, etc.).
Regularization Constraint on the hypothesis class intended to improve a learning algorithm generalization performance (e.g., weight decay, dropout).
Optimization Parameter update rule.
Hyperparameter search Whether a specific method was employed in order to tune the hyperparameter set.
Subject handling Intra- vs inter-subject analysis.
Inspection of trained models Method used to inspect a trained DL model.
Results Type of baseline Whether the study included baseline models that used traditional processing pipelines, DL baseline models, or a combination of the two.
Performance metrics Metrics used by the study to report performance (e.g., accuracy, f1-score, etc.).
Validation procedure Methodology used to validate the performance of the trained models, including cross-validation and data split.
Statistical testing Types of statistical tests used to assess the performance of the trained models.
Comparison of results Reported results of the study, both for the trained DL models and for the baseline models.
Reproducibility Dataset Whether the data used for the experiment comes from private recordings or from a publicly available dataset.
Code Whether the code used for the experiment is available online or not, and if so, where.
Table 3: Data items extracted for each article selected.
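As a concrete illustration of the intra- vs. inter-subject distinction in the "Subject handling" item above, an inter-subject evaluation often uses a leave-one-subject-out split, where each fold holds out every example of one subject. The helper below is an illustrative sketch, not the procedure of any particular reviewed study:

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train, test) index splits for an inter-subject evaluation:
    each fold holds out all examples belonging to one subject."""
    subject_ids = np.asarray(subject_ids)
    for held_out in np.unique(subject_ids):
        test = np.flatnonzero(subject_ids == held_out)
        train = np.flatnonzero(subject_ids != held_out)
        yield train, test

# 6 examples from 3 subjects (two examples each)
subjects = [1, 1, 2, 2, 3, 3]
for train, test in leave_one_subject_out(subjects):
    print(train.tolist(), test.tolist())
```

Models evaluated this way are scored only on subjects never seen during training, which directly measures robustness to the inter-subject variability discussed in Section 1.2.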

3 Results

The database queries yielded 553 different results that matched the search terms (see Fig. 2). 49 additional papers were then identified using the reference sections of the initial papers. Based on our inclusion and exclusion criteria, 446 papers were excluded. Therefore, 156 papers were selected for inclusion in the analysis.

Figure 2: Selection process for the papers.

3.1 Origin of the selected studies

Our search methodology returned journal papers, conference and workshop papers, preprints and one journal paper supplement ([201], included in the "Journal" category in our analysis) that met our criteria. A number of the journal and conference papers had initially been made available as preprints on arXiv or bioRxiv. Popular journals included Neurocomputing, Journal of Neural Engineering and Biomedical Signal Processing and Control, each with three publications among our selected studies. We also looked at the location of the first author’s affiliation to get a sense of the geographical distribution of research on DL-EEG. We found that most contributions came from the USA, China and Australia (see Fig. 3).

Figure 3: Countries of first author affiliations.

3.2 Domains

The selected studies applied DL to EEG in various ways (see Fig. 4 and Table 4). Most studies focused on using DL for the classification of EEG data, most notably for sleep staging, seizure detection and prediction, brain-computer interfaces (BCIs), and cognitive and affective monitoring. A second group of studies focused instead on the improvement of processing tools, such as learning features from EEG, handling artifacts, or visualizing trained models. The remaining papers explored ways of generating data from EEG, e.g., augmenting data or generating images conditioned on EEG.

Despite the absolute number of DL-EEG publications being relatively small compared to other DL applications such as computer vision [88], there is clearly a growing interest in the field. Fig. 5 shows the growth of the DL-EEG literature since 2010. The first seven months of 2018 alone saw more publications than 2010 to 2016 combined, hence the relevance of this review. It is, however, still too early to draw conclusions about trends in the application domains, given the relatively small number of publications to date.

Figure 4: Focus of the studies. The number of papers that fit in a category is shown in brackets for each category. Studies that covered more than one topic were categorized based on their main focus.
Figure 5: Number of publications per domain per year. To simplify the figure, some of the categories defined in Fig. 4 have been grouped together.
Domain 1 Domain 2 Domain 3 Domain 4 AE CNN CNN+RNN DBN FC GAN MLP N/M Other RBM RNN
Classification of EEG signals BCI Detection [39]
Active Grasp and lift [8]
Mental tasks [125] [68, 133]
Motor imagery [226, 94] [43, 146, 172, 150, 36, 104, 170, 147, 204] [216] [9] [27, 107, 59, 113] [121] [224] [222, 23]
RSVP [154]
Slow cortical potentials [37]
Speech decoding [162] [166]
Active & Reactive MI & ERP [87]
Reactive ERP [211, 191, 16, 25]
Heard speech decoding [114]
RSVP [131, 213, 63, 109, 24] [161]
SSVEP [135] [11, 194, 83]
Clinical Alzheimer’s disease [115]
Anomaly detection [198]
Dementia [116]
Epilepsy Detection [212, 49] [201, 64, 185, 156, 123, 2, 175] [187, 51, 153, 50] [184] [138, 124] [173] [76, 4, 171, 117]
Event annotation [205]
Prediction [181, 127] [180] [183]
Ischemic stroke [48]
Pathological EEG [149] [143]
Schizophrenia Detection [28]
Sleep Abnormality detection [144]
Staging [86, 179] [132, 137, 160, 26, 190, 199, 110, 182] [167, 22] [85] [38, 47]
Monitoring Affective Bullying incidents [13]
Emotion [17, 200, 103, 79] [101, 102] [228, 229, 96] [174, 42] [112, 84] [44] [100, 220, 6]
Cognitive Drowsiness [61]
Engagement [93]
Eyes closed/open [118]
Fatigue [62]
Mental workload [208, 207] [7, 218] [71, 82] [72]
Mental workload & fatigue [209]
Cognitive vs. Affective [14]
Music semantics [163]
Physical Exercise [45] [55]
Multi-purpose architecture [91] [225] [34]
Music semantics [140]
Personal trait/attribute Person identification [223, 221]
Sex [188]
Generation of data Data augmentation [192, 219, 152]
Generating EEG [66]
Spatial upsampling [33]
Generating images conditioned on EEG [92] [128]
Improvement of processing tools Feature learning [195, 164, 95] [15]
Hardware optimization Neuromorphic chips [122] [206]
Model interpretability Model visualization [67] [165]
Reduce effect of confounders [197]
Signal cleaning Artifact handling [202, 203] [193] [130]
Table 4: Categorization of the selected studies according to their application domain and DL architecture. Domains are divided into four levels, as described in Fig. 4.

3.3 Data

The availability of large datasets containing unprecedented numbers of examples is often mentioned as one of the main enablers of deep learning research in the early 2010s [53]. It is thus crucial to understand what the equivalent is in EEG research, given the relatively high cost of collecting EEG data. Given the high dimensionality of EEG signals [105], one would assume that a considerable amount of data is required. Although our analysis cannot answer that question fully, we seek to cover as many dimensions of the answer as possible to give the reader a complete view of what has been done so far.

3.3.1 Quantity of data

We make use of two different measures to report the amount of data used in the reviewed studies: 1) the number of examples available to the deep learning network and 2) the total duration of the EEG recordings used in the study, in minutes. Both measures include the EEG data used across training, validation and test phases. For an in-depth analysis of the amount of data, please see the data items table which contains more detailed information.

The left column of Fig. 6 shows the amount of EEG data, in minutes, used in the analysis of each study, including training, validation and/or testing. Therefore, the time reported here does not necessarily correspond to the total recording time of the experiment(s). For example, many studies recorded a baseline at the beginning and/or at the end but did not use it in their analysis. Moreover, some studies recorded more classes than they used in their analysis. Also, some studies used sub-windows of recorded epochs (e.g. in a motor imagery BCI, using 3 s of a 7 s epoch). The amount of data in minutes used across the studies ranges from 2 up to 4,800,000 (mean = 62,602; median = 360).

The center column of Fig. 6 shows the number of examples available to the models, either for training, validation or testing. This number varies considerably, as some studies used a sliding window with significant overlap, generating many examples (e.g., 250 ms windows with 234 ms overlap, generating 4,050,000 examples from 1080 minutes of EEG data [154]), while others used very long windows, generating very few examples (e.g., 15-min windows with no overlap, generating 62 examples from 930 minutes of EEG data [48]). The wide range of windowing approaches (see Section 3.3.4) indicates that a better understanding of their impact is still required. The number of examples used ranged from 62 up to 9,750,000 (mean = 251,532; median = 14,000).
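The relationship between recording length, window length and overlap can be made concrete with a short sketch (the function name, shapes and parameter values below are our own illustration, not taken from any reviewed study):

```python
import numpy as np

def extract_windows(eeg, window_len, stride):
    """Slice a (channels, samples) EEG array into (possibly overlapping) windows.

    Returns an array of shape (n_windows, channels, window_len).
    """
    n_samples = eeg.shape[-1]
    starts = range(0, n_samples - window_len + 1, stride)
    return np.stack([eeg[:, s:s + window_len] for s in starts])

# 10 s of 2-channel EEG at 100 Hz, cut into 2-s windows with 50% overlap:
eeg = np.random.randn(2, 1000)
windows = extract_windows(eeg, window_len=200, stride=100)
# (1000 - 200) // 100 + 1 = 9 examples from 10 s of signal
```

Shrinking the stride relative to the window length multiplies the number of examples extracted from the same recording, which is precisely why the examples-to-minutes ratio varies so widely across studies.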

The right column of Fig. 6 shows the ratio between the number of examples and the amount of data in minutes. Although this ratio was never explicitly mentioned in the reviewed papers, we examined it to see whether any trends or standards exist across domains. In sleep studies, for example, the ratio tends to be two, as most studies use non-overlapping 30-s windows. Brain-computer interfacing shows the widest spread, perhaps indicating a lack of best practices for sliding windows. It is important to note that the BCI field is also the one in which the exact relevant time measures were hardest to obtain, since most of the recorded data is not used (e.g., baseline, in-between epochs). Some of the spread in the graph may therefore stem from our own best-effort estimates of the amount of data actually used (i.e., seen by the model). For the categories generation of data, improvement of processing tools and others, this ratio has little to no value, as any trends would be difficult to interpret.

Figure 6: Amount of data used by the selected studies. Each dot represents one dataset. The left column shows the datasets according to the total length of the EEG recordings used, in minutes. The center column shows the number of examples that were extracted from the available EEG recordings. The right column presents the ratio of number of examples to minutes of EEG recording.

The amount of data across different domains varies significantly. In domains like sleep and epilepsy, EEG recordings last many hours (e.g., a full night), but in domains like affective and cognitive monitoring, the data usually comes from lab experiments on the scale of a few hours or even a few minutes.

3.3.2 Subjects

Often correlated with the amount of data, the number of subjects also varies significantly across studies (see Fig. 7). Half of the datasets used in the selected studies contained fewer than 13 subjects. Six studies, in particular, used datasets with a much greater number of subjects: [132, 160, 188, 149] all used datasets with at least 250 subjects, while [22] and [49] used datasets with 10,000 and 16,000 subjects, respectively. As explained in Section 3.7.4, the untapped potential of DL-EEG might reside in combining data coming from many different subjects and/or datasets to train a model that captures common underlying features and generalizes better. In [202], for example, the authors trained their model using an existing public dataset and also recorded their own EEG data to test the generalization on new subjects. In [191], an increase in performance was observed when using more subjects during training before testing on new subjects. The authors tested using from 1 to 30 subjects with a leave-one-subject-out cross-validation scheme, and reported an increase in performance with noticeable diminishing returns above 15 subjects.

Figure 7: Number of subjects per domain in datasets. Each point represents one dataset used by one of the selected studies.

3.3.3 Recording parameters

As shown later in Section 3.8, of reported results came from private recordings. We looked at the type of EEG device used by the selected studies to collect their data, additionally distinguishing low-cost, often called "consumer", EEG devices from traditional "research" or "medical" EEG devices (see Fig. 8(a)). We loosely defined low-cost EEG devices as devices under the USD 1,000 threshold (excluding software, licenses and accessories). Among these devices, the Emotiv EPOC was used the most, followed by the OpenBCI, Muse and Neurosky devices. As for research-grade EEG devices, the BioSemi ActiveTwo was used the most, followed by BrainVision products.

The EEG data used in the selected studies was recorded with 1 to 256 electrodes, with half of the studies using between 8 and 62 electrodes (see Fig. 8(b)). The number of electrodes required for a specific task or analysis is usually defined arbitrarily, as no fundamental rules have been established. In most cases, adding electrodes will improve possible analyses by increasing spatial resolution. However, an electrode placed close to existing ones might not provide significantly different information, while increasing the preparation time and the participant’s discomfort and requiring a more costly device. Higher-density EEG devices are popular in research but hardly practical outside the laboratory. In [153], the authors explored the impact of the number of channels on the specificity and sensitivity of seizure detection. They showed that increasing the number of channels from 4 up to 22 (including two referential channels) increased both sensitivity and specificity. They concluded, however, that the position of the referential channels is also very important, making it difficult to compare across datasets coming from different neurologists and recording sites that use different reference locations.

Similarly, in [26], the impact of different electrode configurations was assessed on a sleep staging task. The authors found that increasing the number of electrodes from two to six produced the highest increase in performance, while adding additional sensors, up to 22 in total, also improved the performance but not as much. The placement of the electrodes in a 2-channel montage also impacted the performance, with central and frontal montages leading to better performance than posterior ones on the sleep staging task.

Furthermore, EEG sampling rates varied mostly between 100 and 1000 Hz in the selected studies (the sampling rate reported here is the one used to record the EEG data and not after downsampling, as described in Section 3.4). Around of studies used sampling rates of 250 Hz or less and the highest sampling rate used was 5000 Hz ([67]).

(a) EEG hardware used in the studies. The device name is followed by the manufacturer’s name in parentheses. Low-cost devices (defined as devices below $1,000 excluding software, licenses and accessories) are indicated by a different color.
(b) Distribution of the number of EEG channels.
Figure 8: Hardware characteristics of the EEG devices used to collect the data in the selected studies.

3.3.4 Data augmentation

Data augmentation is a technique by which new examples are artificially generated from the existing training data. It has proven effective in other fields such as computer vision, where manipulations including rotations, translations, cropping and flipping can be applied to generate more training examples [134]. Adding more training examples allows the use of more complex models with more parameters while reducing overfitting. When done properly, data augmentation improves accuracy and stability, offering better generalization on new data [215].

Out of the 156 papers reviewed, three papers explicitly explored the impact of data augmentation on DL-EEG ([192, 219, 152]). Interestingly, each one looked at it from the perspective of a different domain: sleep, affective monitoring and BCI. Also, all three are from 2018, perhaps showing an emerging interest in data augmentation. First, in [192], Gaussian noise was added to the training data to obtain new examples. This approach was tested on two different public datasets for emotion classification (SEED [227] and MAHNOB-HCI [159]). They improved their accuracy on the SEED dataset using LeNet ([90]) from (without augmentation) to (with augmentation), from (without) to (with) using ResNet ([70]) and from (without) to (with) on MAHNOB-HCI dataset using ResNet. Their best accuracy was obtained with a standard deviation of 0.2 and by augmenting the data to 30 times its original size. Despite impressive results, it is important to note that they also compared LeNet and ResNet to an SVM which had an accuracy of (without) and (with) on the SEED dataset. This might indicate that the initial amount of data was insufficient for LeNet or ResNet but adding data clearly helped bring the performance up to par with the SVM. Second, in [219], a conditional deep convolutional generative adversarial network (cDCGAN) was used to generate artificial EEG signals on one of the BCI Competition motor imagery datasets. Using a CNN, it was shown that data augmentation helped improve accuracy from to around to classify motor imagery. In [152], the authors explicitly targeted the class imbalance problem of under-represented sleep stages by generating Fourier transform (FT) surrogates of raw EEG data on the CAPSLPDB dataset. They improved their accuracy up to on some classes.
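The Gaussian-noise approach described above can be sketched as follows (the function name and interface are our own; [192] reported their best results with a noise standard deviation of 0.2 and a 30-fold augmentation, which we use as illustrative defaults):

```python
import numpy as np

def augment_with_noise(X, y, n_copies=30, noise_std=0.2, seed=0):
    """Augment a dataset by adding zero-mean Gaussian noise to each example.

    X: array of shape (n_examples, ...); y: labels of shape (n_examples,).
    Returns the original examples plus n_copies noisy replicas of each.
    """
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(0.0, noise_std, X.shape) for _ in range(n_copies)]
    X_aug = np.concatenate([X] + noisy, axis=0)
    y_aug = np.tile(y, n_copies + 1)  # labels are unchanged by the noise
    return X_aug, y_aug
```

Note that the noise is added to the model inputs (raw signals or features), while the labels are simply replicated, since small perturbations are assumed not to change the class.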

An additional 30 papers explicitly used data augmentation in one form or another, but only a handful investigated its impact on performance. In [82, 15], noise was added to 2D feature images, although it did not improve results in [15]. In [76], artifacts such as eye blinks and muscle activity, as well as Gaussian white noise, were used to augment the data and improve robustness. In [209] and [208], Gaussian noise was added to the input feature vector. This approach increased the accuracy of the SDAE model from around (without augmentation) to (with).

Multiple studies also used overlapping windows as a way to augment their data, although many did not explicitly frame this as data augmentation. In [185, 123], overlapping windows were explicitly used as a data augmentation technique. In [83], different shift lengths between overlapping windows (from 10 ms to 60 ms out of a 2-s window) were compared, showing that by generating more training samples with smaller shifts, performance improved significantly. In [150], the concept of overlapping windows was pushed even further: 1) redundant computations due to EEG samples being in more than one window were simplified thanks to "cropped training", which ensured these computations were only done once, thereby speeding up training and 2) the fact that overlapping windows share information was used to design an additional term to the cost function, which further regularizes the models by penalizing decisions that are not the same while being close in time.

Other procedures exploited the inherent spatial and temporal characteristics of EEG to augment the data. In [34], the authors doubled their data by swapping the right- and left-side electrodes, arguing that since the task was symmetric, the side of the brain expressing the response should not affect classification. In [17], the authors augmented their multimodal (EEG and EMG) data by duplicating samples and keeping the values from one modality only, while setting the other modality's values to 0, and vice versa. In [42], the authors made use of the data that is usually thrown away when downsampling EEG in the preprocessing stage. It is common to downsample a signal acquired at a higher sampling rate to 256 Hz or less. In their case, they reused the samples discarded during that step as new examples: downsampling by a given factor thus yields as many copies of the signal as the factor itself.
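The downsampling-reuse idea of [42] can be sketched in a few lines (names are ours): decimating by a factor k produces k phase-shifted copies of the signal, only one of which is kept by standard downsampling.

```python
import numpy as np

def downsample_phases(eeg, factor):
    """Decimate a (channels, samples) signal by `factor`, keeping every phase.

    Standard downsampling keeps eeg[:, ::factor] and discards the rest;
    here each of the `factor` sample offsets becomes its own example.
    """
    return [eeg[:, offset::factor] for offset in range(factor)]

eeg = np.arange(12).reshape(1, 12)          # 1 channel, 12 samples
copies = downsample_phases(eeg, factor=4)   # 4 copies of 3 samples each
```

Note that this naive sketch omits the anti-aliasing low-pass filter that should precede any decimation in practice.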

Finally, the classification of rare events, where the number of available samples is orders of magnitude smaller than for the other classes [152], is another motivation for data augmentation. In EEG classification, epileptic seizures or transitional sleep stages (e.g. S1 and S3) often lead to such unbalanced classes. In [190], the class imbalance problem was addressed by randomly balancing all classes while sampling for each training epoch. Similarly, in [26], balanced accuracy was maximized by using a balanced sampling strategy. In [183], EEG segments from the interictal class were split into smaller subgroups of equal size to the preictal class. In [160], cost-sensitive learning and oversampling were used to address the class imbalance problem for sleep staging, but the overall performance did not improve with these approaches. In [144], the authors randomly replicated subjects from the minority class to balance classes. Similarly, in [167, 38, 39, 109], oversampling of the minority class was used to balance classes. Conversely, in [175, 154], the majority class was subsampled. In [181], an overlapping window with a subject-specific overlap was used to match classes. Similar work by the same group [180] showed that when training a GAN on individual subjects, augmenting data with an overlapping window increased accuracy from to . For more on imbalanced learning, we refer the interested reader to [155].
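Several of the oversampling strategies above amount to randomly replicating minority-class examples until the classes match; a minimal sketch (function name and interface are our own illustration):

```python
import numpy as np

def oversample_minority(X, y, seed=0):
    """Randomly replicate minority-class examples until every class
    matches the size of the largest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        # Draw (with replacement) enough extra indices to reach the target
        extra = rng.choice(c_idx, target - len(c_idx), replace=True)
        idx.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]
```

Subsampling the majority class, as in [175, 154], is the mirror-image operation: drawing only `counts.min()` indices per class without replacement.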

3.4 EEG processing

One of the oft-claimed motivations for using deep learning for \glseeg processing is automatic feature learning [132, 76, 45, 68, 114, 11, 213]. This can be explained by the fact that feature engineering is a time-consuming task [98]. Additionally, preprocessing and cleaning \glseeg signals from artifacts is a demanding step of the usual EEG processing pipeline. Hence, in this section, we look at aspects related to data preparation, such as preprocessing, artifact handling and feature extraction. This analysis is critical to clarify what level of preprocessing \glseeg data requires to be successfully used with deep neural networks.

3.4.1 Preprocessing

Preprocessing \glseeg data usually comprises a few general steps, such as downsampling, band-pass filtering, and windowing. Throughout the process of reviewing papers, we found that the number of preprocessing steps employed varied widely across studies. In [71], it is mentioned that “a substantial amount of preprocessing was required” for assessing cognitive workload using DL. More specifically, it was necessary to trim the \glseeg trials, downsample the data to 512 Hz and 64 electrodes, identify and interpolate bad channels, compute the average reference, remove line noise, and high-pass filter the data starting at 1 Hz. On the other hand, Stober et al. [164] applied a single preprocessing step by removing the bad channels for each subject. In studies focusing on emotion recognition using the DEAP dataset [81], the same preprocessing methodology proposed by the researchers that collected the dataset was typically used, i.e., re-referencing to the common average, downsampling to 256 Hz, and high-pass filtering at 2 Hz.
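A few of the common steps mentioned above (common average reference, high-pass filtering, downsampling) can be sketched with SciPy; the function, its parameters and the filter order are illustrative and do not reproduce the pipeline of any particular study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess(eeg, fs, hp_cutoff=1.0, downsample_factor=2):
    """Illustrative first steps of an EEG pipeline.

    eeg: (channels, samples) array sampled at fs Hz.
    Returns the processed signal and the new sampling rate.
    """
    # Common average reference: subtract the mean across channels
    eeg = eeg - eeg.mean(axis=0, keepdims=True)
    # Zero-phase high-pass Butterworth filter at hp_cutoff Hz
    b, a = butter(4, hp_cutoff / (fs / 2), btype="highpass")
    eeg = filtfilt(b, a, eeg, axis=-1)
    # Anti-aliased downsampling
    return decimate(eeg, downsample_factor, axis=-1), fs / downsample_factor
```

Line-noise removal, bad-channel interpolation and epoching would follow the same pattern; dedicated toolboxes such as MNE-Python implement all of these steps.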

We separated the papers into three categories based on whether or not they used preprocessing steps: “Yes”, in cases where preprocessing was employed; “No”, when the authors explicitly mentioned that no preprocessing was necessary; and not mentioned (“N/M”) when no information was provided. The results are shown in Fig. 9.

Figure 9: \glseeg processing choices. (a) Number of studies that used preprocessing steps, such as filtering, (b) number of studies that included, rejected or corrected artifacts in their data and (c) types of features that were used as input to the proposed models.

A considerable proportion of the reviewed articles () employed at least one preprocessing method such as downsampling or re-referencing. This result is not surprising, as applications of \glspldnn to other domains, such as computer vision, usually require some kind of preprocessing like cropping and normalization as well.

3.4.2 Artifact handling

Artifact handling techniques are used to remove specific types of noise, such as ocular and muscular artifacts [186]. As emphasized in [203], removal of artifacts may be crucial for achieving good \glseeg decoding performance. Given that cleaning \glseeg signals can be a time-consuming process, some studies applied only minimal preprocessing, such as removing bad channels, and left the burden of learning from a potentially noisy signal to the neural network [164]. With that in mind, we decided to look at artifact handling separately.

Artifact removal techniques usually require the intervention of a human expert [120]. Different techniques leverage human knowledge to different extents: they might fully rely on an expert, as in the case of visual inspection, or require prior knowledge simply to tune a hyperparameter, as in the case of wavelet-based \glsica [108]. Among the studies that handled artifacts, a myriad of techniques were applied. Some studies employed methods that rely on human knowledge, such as amplitude thresholding [114], manual identification of high-variance segments [71], and handling of \glseeg blink-related noise based on high-amplitude EOG segments [109]. On the other hand, many other articles favored techniques that rely less on human intervention, such as blind source separation. For instance, in [166, 207, 208, 45, 131, 133], \glsica was used to separate ocular components from \glseeg data.

In order to investigate the necessity of removing artifacts from \glseeg when using deep neural networks, we split the selected papers into three categories, in a similar way to the preprocessing analysis (see Fig. 9). Almost half the papers () did not use artifact handling methods, while did. Additionally, of the studies did not mention whether artifact handling was necessary to achieve their results. Given those results, we are encouraged to believe that using \glspldnn on \glseeg might be a way to avoid the explicit artifact removal step of the classical \glseeg processing pipeline without harming task performance.

3.4.3 Features

Feature engineering is one of the most demanding steps of the traditional \glseeg processing pipeline [98], and the main goal of many papers considered in this review [132, 76, 45, 68, 114, 11, 213] is to eliminate this step by employing deep neural networks for automatic feature learning. This aspect has been of interest to researchers in the field since its early stages, as indicated by the work of Wulsin et al. [198], which, in 2011, compared the performance of \glspldbn on classification and anomaly detection tasks using both raw \glseeg and features as inputs. More recently, studies such as [165, 66] achieved promising results without the need to extract features.

On the other hand, a considerable proportion of the reviewed papers used hand-engineered features as the input to their deep neural networks. In [174], for example, the authors used a time-frequency representation of \glseeg obtained via the \glsstft for detecting binary user preference (like versus dislike). Similarly, Truong et al. [181] used the \glsstft as a 2-dimensional \glseeg representation for seizure prediction with \glsplcnn. In [218], \glseeg frequency-domain information was also used. The \glspsd of the classical frequency bands, widely adopted by the \glseeg community and spanning roughly 1 Hz to 40 Hz, was used as features. Specifically, the authors selected the delta (1-4 Hz), theta (5-8 Hz), alpha (9-13 Hz), lower beta (14-16 Hz), higher beta (17-30 Hz), and gamma (31-40 Hz) bands for mental workload state recognition. Moreover, other studies employed a combination of features: for instance, [48] used PSD features, as well as entropy, kurtosis and fractal component, among others, as input to the proposed \glscnn for ischemic stroke detection.
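Band-wise PSD features such as those described above can be computed with Welch's method; the band boundaries below follow the text, while the function interface is our own sketch, not the implementation of [218]:

```python
import numpy as np
from scipy.signal import welch

# Frequency bands as listed in the text (boundaries vary across studies)
BANDS = {"delta": (1, 4), "theta": (5, 8), "alpha": (9, 13),
         "lower beta": (14, 16), "higher beta": (17, 30), "gamma": (31, 40)}

def band_power_features(eeg, fs):
    """Mean PSD per band and channel, estimated with Welch's method.

    eeg: (channels, samples) array sampled at fs Hz.
    Returns a (channels, n_bands) feature matrix.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(eeg.shape[-1], 256))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.stack(feats, axis=-1)
```

The resulting (channels × bands) matrix is the kind of frequency-domain input that, per Fig. 9, a large share of the reviewed models consume instead of raw EEG.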

Given that the majority of \glseeg features are obtained in the frequency domain, our analysis consisted of separating the reviewed articles into four categories according to their input type: “Raw \glseeg”, “Frequency-domain”, “Combination” (in case more than one type of feature was used), and “Other” (for papers using neither raw \glseeg nor frequency-domain features). Studies that did not specify the type of input were assigned to the category “N/M” (not mentioned). Notice that, here, we use “feature” and “input type” interchangeably.

Fig. 9 presents the result of our analysis. One can observe that of the papers used only raw \glseeg data as input, whereas used hand-engineered features, of which corresponded to frequency domain-derived features. Finally, did not specify the type of input of their model. Based on these results, we find indications that \glspldnn can in fact be applied to raw \glseeg data and achieve state-of-the-art results.

3.5 Deep learning methodology

3.5.1 Architecture

A crucial choice in the DL-based EEG processing pipeline is the neural network architecture to be used. In this section, we aim at answering a few questions on this topic, namely: 1) "What are the most frequently used architectures?", 2) "How has this changed across years?", 3) "Is the choice of architecture related to input characteristics?" and 4) "How deep are the networks used in DL-EEG?".

To answer the first three questions, we divided and assigned the architectures used in the 156 papers into the following groups: \glsplcnn, \glsplrnn, \glsplae, \glsplrbm, \glspldbn, \glsplgan, \glsfc networks, combinations of \glsplcnn and \glsplrnn (CNN+RNN), and “Others” for any other architecture or combination not included in the aforementioned categories. Fig. 10(a) shows the percentage of studies that used the different architectures. of the papers used \glsplcnn, whereas \glsplrnn and \glsplae were the architecture choice of about and of the works, respectively. Combinations of CNNs and RNNs, on the other hand, were used in of the studies. RBMs and DBNs together corresponded to of the architectures. FC neural networks were employed by of the papers. GANs and other architectures appeared in of the considered cases. Notice that of the analyzed papers did not report their choice of architecture.

(a) Architectures.
(b) Distribution of architectures across years.
(c) Distribution of input type according to the architecture category.
(d) Distribution of number of neural network layers.
Figure 10: Deep learning architectures used in the selected studies. “N/M” stands for “Not mentioned” and accounts for papers which have not reported the respective deep learning methodology aspect under analysis.

In Fig. 10(b), we provide a visualization of the distribution of architecture types across years. Until the end of 2014, DBNs and FC networks comprised the majority of the studies. However, since 2015, CNNs have been the architecture of choice in most studies. This can be attributed to their capabilities for end-to-end learning and for exploiting hierarchical structure in the data [177], as well as their success and subsequent popularity on computer vision tasks, such as the ILSVRC 2012 challenge [35]. Interestingly, we also observe that as the number of papers grows, the proportion of studies using CNNs and combinations of recurrent and convolutional layers has been growing steadily. The latter shows that RNNs are increasingly of interest for EEG analysis. On the other hand, the use of architectures such as RBMs, DBNs and AEs has been decreasing with time. Commonly, models employing these architectures use a two-step training procedure consisting of 1) unsupervised feature learning and 2) training a classifier on top of the learned features. However, we notice that recent studies leverage the hierarchical feature learning capabilities of CNNs to achieve end-to-end supervised feature learning, i.e., training both a feature extractor and a classifier simultaneously.

To complement the previous result, we cross-checked the architecture information against the input type information provided in Fig. 9. Results are presented in Fig. 10(c) and clearly show that CNNs are indeed used more often with raw EEG data as input. This corroborates the idea that researchers employ this architecture with the aim of leveraging the capabilities of deep neural networks to process EEG data in an end-to-end fashion, avoiding the time-consuming task of extracting features. From this figure, one can also notice that some architectures, such as deep belief networks, are typically used with frequency-domain features as inputs, while GANs, on the other hand, have only been applied to EEG processing using raw data.

Number of layers

Deep neural networks are usually composed of stacks of layers that provide hierarchical processing. Although one might think that the use of deep neural networks implies a large number of layers in the architecture, there is no consensus in the literature on this definition. Here, we investigate this aspect and show that in many of the considered studies the number of layers is not necessarily large, i.e., greater than three.

In Fig. 10(d), we show the distribution of the reviewed papers according to the number of layers in the respective architecture. For studies reporting results for different architectures and numbers of layers, we only considered the highest value. We observed that most of the selected studies (128) used architectures with at most 10 layers. A total of 16 articles did not report the architecture depth. When comparing this distribution with architectures commonly used for computer vision applications, such as VGG-16 (16 layers) [157] and ResNet-18 (18 layers) [70], we observe that the current literature on DL-EEG suggests shallower models achieve better performance.

Some studies specifically investigated the effect of increasing the model depth. Zhang et al. [218] evaluated the performance of models with depths ranging from two to 10 on a mental workload classification task. Architectures with seven layers outperformed both shallower (two and four layers) and deeper (10 layers) models in terms of accuracy, precision, F-measure and G-mean. Moreover, O’Shea et al. [123] compared the performance of a \glscnn with six and 11 layers on neonatal seizure detection. Their results show that, in this case, the deeper network achieved a better \glsrocauc than both the shallower model and an \glssvm. In [83], the effect of depth on \glscnn performance was also studied. The authors compared results obtained by a CNN with two and three convolutional layers on the task of classifying SSVEPs under ambulatory conditions. The shallower architecture outperformed the three-layer one in all scenarios, across different amounts of training data. \Glscca together with a KNN classifier was also evaluated as a baseline method. Interestingly, as the number of training samples increased, the shallower model outperformed the \glscca-based baseline.

EEG-specific design choices

Particular architectural choices can enable a model to mimic the process of extracting EEG features. An architecture can also be specifically designed to impose desired properties on the learned representations. This is for instance the case with max-pooling, which is used to produce feature maps that are invariant to slight translations of the input [53]. In the case of EEG signals, one might be interested in forcing the model to process temporal and spatial information separately in the earlier stages of the network. In [26, 83, 213, 16, 150, 109], one-dimensional convolutions were used in the input layer so that either temporal or spatial information would be processed independently at this point of the hierarchy. Other studies [224, 167] combined recurrent and convolutional neural networks as an alternative way of separating temporal and spatial content. Recurrent models were also applied in cases where it was necessary to capture long-term dependencies in the \glseeg data [100, 220].
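The factorized temporal-then-spatial processing described above can be illustrated with a minimal NumPy sketch: one temporal kernel shared across channels, followed by one spatial filter. Real architectures learn many such kernels jointly; the names and shapes here are ours.

```python
import numpy as np

def temporal_then_spatial(eeg, t_filter, s_weights):
    """Factorized processing: a 1-D temporal convolution applied identically
    to every channel, followed by a spatial filter (weighted sum across
    channels, i.e., a 1x1 convolution over the channel dimension).

    eeg: (channels, samples); t_filter: (filter_len,); s_weights: (channels,).
    """
    # Temporal stage: convolve each channel with the same temporal kernel
    temporal = np.stack([np.convolve(ch, t_filter, mode="valid") for ch in eeg])
    # Spatial stage: linear combination across the channel dimension
    return s_weights @ temporal

eeg = np.random.randn(8, 100)
out = temporal_then_spatial(eeg, t_filter=np.ones(5) / 5, s_weights=np.ones(8))
# out has shape (96,): 100 - 5 + 1 samples after the valid convolution
```

This mirrors the classical filter-bank-then-spatial-filter pipeline of EEG decoding, which is one reason this design is popular in the reviewed CNNs.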

3.5.2 Training

Details regarding the training of the models proposed in the literature are of great importance as different approaches and hyperparameter choices can greatly impact the performance of neural networks. The use of pre-trained models, regularization, and hyperparameter search strategies are examples of aspects we took into account during the review process. We report our main findings in this section.

Training Procedure

One of the advantages of applying deep neural networks to EEG processing is the possibility of simultaneously training a feature extractor and a model for a downstream task such as classification or regression. However, in some of the reviewed studies [86, 195, 116], these two steps were carried out separately. Usually, feature learning was done in an unsupervised fashion with RBMs, DBNs, or AEs. After training these models to provide an appropriate representation of the EEG input signal, the learned features were then used as the input for a target task, generally classification. In other cases, models pre-trained on a different task, such as object recognition, were fine-tuned on the specific EEG task with the aim of providing a better initialization or a regularization effect [97].

To investigate the training procedures of the reviewed papers, we classified each one according to the procedure adopted. Models whose parameters were learned without any kind of pre-training were assigned to the “Standard” group. Studies in which the parameters were learned in more than one step were included in the “Pre-training” class. Finally, papers employing other training methodologies, such as co-learning [34], were included in the “Other” group.

In Fig. 11a) we show how the reviewed papers are distributed according to the training procedure. “N/M” refers to studies which have not reported this aspect. Almost half the papers did not employ any pre-training strategy, while did. Even though the training strategy is crucial for achieving good performance with deep neural networks, of the selected studies did not explicitly describe it in their paper.

Figure 11: Deep learning methodology choices. (a) Training methodology used in the studies, (b) number of studies that reported the use of regularization methods such as dropout, weight decay, etc. and (c) type of optimizer used in the studies.
Regularization

In the context of our literature review, we define regularization as any constraint on the set of functions parametrized by the neural network that is intended to improve its performance on unseen data [53]. The main goal of regularizing a neural network is to control its complexity in order to obtain better generalization performance [21], which can be verified by a decrease in test error in the case of classification problems. There are several ways of regularizing neural networks; among the most common are weight decay (L2 and L1 regularization) [53], early stopping [139], dropout [168], and label smoothing [169]. Note that although the use of pre-trained models as initialization can also be interpreted as a regularizer [97], in this work we included it in the training procedure analysis instead.

As regularization can be fundamental to good performance on unseen data, we analyzed how many of the reviewed studies explicitly stated that they employed it in their models. Papers were separated into two groups: “Yes” if any kind of regularization was used, and “N/M” otherwise. In Fig. 11 we present the proportion of studies in each group.

From Fig. 11, one can notice that more than half the studies employed at least one regularization method. Furthermore, regularization methods were frequently combined. Hefron et al. [71] employed a combination of dropout, L1 and L2 regularization to learn temporal and frequency representations across different participants. The resulting model was trained to recognize mental workload states elicited by the MATB task [31]. Similarly, Längkvist and Loutfi [86] combined two types of regularization to develop a model tailored to automatic sleep stage classification. Besides L2 regularization, they added a penalty term to encourage sparse activations, defined as the KL-divergence between the mean activation of each hidden unit over all training examples in a batch and a hyperparameter setting the target activation level.
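The combined penalty described above can be sketched numerically. The loss value, weights, activations and λ coefficients below are invented for illustration; the KL term follows the standard sparse-autoencoder formulation:

```python
import numpy as np

def kl_sparsity(mean_activation, rho=0.05):
    """KL divergence between a target sparsity level rho and the mean
    activation of each hidden unit (the standard sparse-autoencoder
    penalty; rho is the sparsity target hyperparameter)."""
    p_hat = np.clip(mean_activation, 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / p_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - p_hat)))

# Toy quantities standing in for a real network's outputs and weights.
data_loss = 0.42                      # e.g., cross-entropy on a batch
W = np.array([[0.5, -1.0], [2.0, 0.1]])
hidden_mean = np.array([0.04, 0.30])  # mean activation per hidden unit

lam_l2, lam_kl = 1e-4, 0.1
total_loss = (data_loss
              + lam_l2 * np.sum(W ** 2)        # L2 weight decay
              + lam_kl * kl_sparsity(hidden_mean))
```

Units whose mean activation already matches the target contribute nothing to the penalty, while overly active units (here, the second one) are pushed toward sparser firing.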

Optimization

Learning the parameters of a deep neural network is, in practice, an optimization problem. How best to tackle it is still an open research question in the deep learning literature, as there is often a compromise between minimizing the cost function and the quality of the resulting local optimum as measured by the generalization gap, i.e., the difference between the training error and the true error estimated on the test set. In this scenario, the choice of parameter update rule, i.e., the learning algorithm or optimizer, can be key to achieving good results.

The most commonly used optimizers are reported in Fig. 11. One surprising finding is that even though the choice of optimizer is a fundamental aspect of the \glsdl-\glseeg pipeline, of the considered studies did not report which parameter update rule was applied. Moreover, used Adam [80] and Stochastic Gradient Descent [141] (notice that we also refer to the mini-batch case as SGD). of the papers utilized different optimizers, such as RMSprop [178], Adagrad [40], and Adadelta [214].

Another interesting finding from the optimizer analysis is the steady increase in the use of Adam. Indeed, from 2017 to 2018, the percentage of studies using Adam increased from to . Adam was proposed as a gradient-based method that adaptively tunes the learning rate based on estimates of the first- and second-order moments of the gradient. It has become very popular in deep neural network applications in general, accumulating approximately 15,000 citations since 2014 (Google Scholar query run on 30/11/2018). Interestingly, we notice a proportional decrease from 2017 to 2018 in the number of papers that did not report the optimizer used.
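The Adam update rule can be written in a few lines of NumPy. This is a textbook transcription of the algorithm from [80], applied to a toy quadratic rather than a network loss:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v) are updated, bias-corrected, and used to
    scale the step size individually for each parameter."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)        # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, m, v = np.array(1.0), 0.0, 0.0
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t)
```

Because the step is normalized by the second-moment estimate, the effective step magnitude stays close to the learning rate regardless of the gradient scale, which is part of what makes Adam robust to hyperparameter choices.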

Hyperparameter search

From a practical point of view, tuning the hyperparameters of a learning algorithm often takes up a great part of the time spent during training. GANs, for instance, are known to be sensitive to the choice of optimizer and architecture hyperparameters [58, 99]. To minimize the time spent finding an appropriate set of hyperparameters, several methods have been proposed in the literature; commonly applied examples are grid search [18] and Bayesian optimization [158]. Grid search consists of determining a range of values for each hyperparameter to be tuned, choosing values in this range, and evaluating the model, usually on a validation set, for all combinations. One advantage of grid search is that it is highly parallelizable, as each hyperparameter combination is independent of the others. Bayesian optimization, in turn, defines a posterior distribution over the hyperparameter space and iteratively updates it according to the performance obtained by the model with the hyperparameter set suggested by the current posterior.
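A minimal grid-search sketch over a hypothetical two-hyperparameter space; the `validation_score` stand-in replaces an actual train-and-evaluate step, and the grid values are invented:

```python
from itertools import product

# Hypothetical search space; the values are illustrative only.
grid = {"lr": [1e-2, 1e-3, 1e-4], "dropout": [0.25, 0.5]}

def validation_score(config):
    # Stand-in for "train the model with `config` and return its
    # validation accuracy"; here a made-up deterministic score.
    return (0.7
            + 0.1 * (config["lr"] == 1e-3)
            + 0.05 * (config["dropout"] == 0.5))

# Every combination is evaluated; each one is independent, so the six
# evaluations could run in parallel.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
best = max(configs, key=validation_score)
```

In a real pipeline, `validation_score` would train on the training set and score on the held-out validation set, never the test set.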

Given the importance of finding a good set of hyperparameters and the general difficulty of doing so, we calculated the percentage of papers that employed a search method for tuning their models and optimizers, as well as the number of articles that included no information on this aspect. Results indicate that almost of the reviewed papers did not mention the use of hyperparameter search strategies. Among those articles, it is not clear how many did no tuning at all and how many simply did not include this information in the paper. Of the that declared having searched for an appropriate set of hyperparameters, some did so manually by trial and error (e.g., [2, 38, 183, 132]), while others employed grid search (e.g., [207, 200, 39, 208, 101, 11, 86]), and a few used other strategies such as Bayesian methods (e.g., [163, 164, 152]).

3.6 Inspection of trained models

In this section, we review whether, and how, studies inspected their proposed models. Out of the selected studies, reported inspecting their models. Two studies focused specifically on the question of model inspection in the context of DL and EEG [67, 45]. See Table 5 for a list of the techniques that were used by more than one study. For a general review of DL model inspection techniques, see [75].

The most frequent model inspection techniques involved the analysis of the trained model’s weights [135, 211, 86, 34, 87, 200, 182, 122, 170, 228, 164, 109, 204]. This often means focusing on the weights of the first layer only, as their interpretation with respect to the input data is straightforward: the absolute value of a weight represents the strength with which the corresponding input dimension is used by the model, so a higher value can be interpreted as a rough measure of feature importance. For deeper layers, however, the hierarchical nature of neural networks makes it much harder to understand what a weight is applied to.
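A sketch of this first-layer inspection; the weight matrix is invented so that one input channel visibly dominates:

```python
import numpy as np

# Hypothetical first-layer weight matrix: 3 hidden units x 4 input channels,
# with values chosen so that channel 2 clearly dominates.
W = np.array([[ 0.2, -0.1,  1.5,  0.05],
              [-0.3,  0.2, -2.0,  0.10],
              [ 0.1, -0.4,  1.8, -0.20]])

# Rough per-channel importance: mean absolute weight across hidden units.
importance = np.abs(W).mean(axis=0)
most_used_channel = int(np.argmax(importance))
```

The sign of individual weights is ignored here: both strongly positive and strongly negative weights indicate that the channel is used by the model.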

The analysis of model activations was used in multiple studies [212, 194, 87, 83, 208, 167, 154, 109]. This kind of inspection method usually involves visualizing the activations of the trained model over multiple examples, and thus inferring how different parts of the network react to known inputs. The input-perturbation network-prediction correlation map technique, introduced in [149], pushes this idea further by trying to identify causal relationships between the inputs and the decisions of a model. The input is first perturbed, either in the time or frequency domain, to alter its amplitude or phase characteristics [67], and then fed into the network. The impact of the perturbation on the activations of the last layer’s units then sheds light on which characteristics of the input are important for the classifier to make a correct prediction. Occlusion sensitivity techniques [92, 26, 175] rely on a similar idea, analyzing the decisions of the network as different parts of the input are occluded.
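Occlusion sensitivity can be sketched with a toy one-dimensional example; the stand-in `model` and the location of the discriminative pattern are invented for illustration:

```python
import numpy as np

def model(x):
    # Stand-in "classifier": its output is driven only by the energy in
    # samples 40-60 of a single-channel window (purely illustrative).
    return 1.0 / (1.0 + np.exp(-np.mean(x[40:60] ** 2)))

x = np.zeros(100)
x[42:48] = 3.0                  # the discriminative pattern
baseline = model(x)

# Slide a zeroing mask over the input and record the drop in the output.
drops = []
for start in range(0, 100, 10):
    occluded = x.copy()
    occluded[start:start + 10] = 0.0
    drops.append(baseline - model(occluded))

# The segment whose occlusion hurts the prediction most is deemed important.
important_segment_start = int(np.argmax(drops)) * 10
```

On real EEG, the mask is typically slid over time windows or channels, and the resulting drop profile is visualized as a sensitivity map.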

Several studies used backpropagation-based techniques to generate input maps that maximize activations of specific units [188, 144, 160, 15]. These maps can then be used to infer the role of specific neurons, or the kind of input they are sensitive to.

Finally, some model inspection techniques were used in a single study. For instance, in [45], the class activation map (CAM) technique was extended to overcome its limitations on EEG data. To use CAMs in a CNN, the channel activations of the last convolutional layer must be averaged spatially before being fed into the model’s penultimate layer, a fully connected layer. For a specific input, a map can then be created that highlights the parts of the input that contributed the most to the decision, by computing a weighted average of the last convolutional layer’s channel activations. Other techniques include DeepLIFT [87], saliency maps [190], input-feature unit-output correlation maps [150], retrieval of closest examples [34], analysis of performance with transferred layers [63], analysis of most-activating input windows [67], analysis of generated outputs [66], and ablation of filters [87].
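The weighted-average step at the core of CAM can be sketched as follows; the activation values and class weights are invented for illustration:

```python
import numpy as np

# Activations of the last convolutional layer for one input:
# 3 channels x 10 time steps (invented values).
A = np.vstack([np.linspace(0.0, 1.0, 10),   # channel 0 ramps up over time
               np.ones(10),                 # channel 1 is constant
               np.zeros(10)])               # channel 2 is silent

# FC weights connecting the spatially averaged channels to class c.
w_c = np.array([0.8, -0.5, 0.1])

# Class activation map: weighted sum of channel activations at each time
# step, highlighting where in time the evidence for class c is located.
cam = np.tensordot(w_c, A, axes=1)
```

For EEG, the "spatial" axis of the original image-based CAM typically becomes time (or time and channel), so the map localizes discriminative segments of the recording.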

Model inspection technique Citation
Analysis of weights [135, 211, 86, 34, 87, 200, 182, 122, 170, 228, 164, 109, 204, 85, 25]
Analysis of activations [212, 194, 87, 83, 208, 167, 154, 109]
Input-perturbation network-prediction correlation maps [149, 191, 67, 16, 150]
Generating input to maximize activation [188, 144, 160, 15]
Occlusion of input [92, 26, 175]
Table 5: Model inspection techniques used by more than one study.

3.7 Reporting of results

The performance of DL methods on EEG is of great interest, as it is still not clear whether DL can outperform traditional EEG processing pipelines [105]. A major question we aim to answer in this review is therefore: “Does DL lead to better performance than traditional methods on EEG?” Answering it is not straightforward, however, as benchmark datasets, baseline models, performance metrics and reporting methodology all vary considerably between studies. In contrast, other application domains of DL, such as computer vision and NLP, benefit from standardized datasets and reporting methodologies [53].

Therefore, to provide as satisfying an answer as possible, we adopt a two-pronged approach. First, we review how the studies reported their results by focusing on directly quantifiable items: 1) the type of baseline used as a comparison in each study, 2) the performance metrics, 3) the validation procedure, and 4) the use of statistical testing. Second, based on these points and focusing on studies that reported accuracy comparisons with baseline models, we analyze the reported performance of a majority of the reviewed studies.

3.7.1 Type of baseline

When contributing a new model, architecture or methodology to solve an already existing problem, it is necessary to compare the performance of the new model to the performance of state-of-the-art models commonly used for the problem of interest. Indeed, without a baseline comparison, it is not possible to assess whether the proposed method provides any advantage over the current state-of-the-art.

Points of comparison are typically obtained in two different ways: 1) (re)implementing standard models or 2) referring to published models. In the first case, authors will implement their own baseline models, usually using simpler models, and evaluate their performance on the same task and in the same conditions. Such comparisons are informative, but often do not reflect the actual state of the art on a specific task. In the second case, authors will instead cite previous literature that reported results on the same task and/or dataset. This second option is not always possible, especially when working on private datasets or tasks that have not been explored much in the past.

In the case of typical EEG classification tasks, state-of-the-art approaches usually involve traditional processing pipelines that include feature extraction and shallow/classical machine learning models. With that in mind, of the studies selected included at least one traditional processing pipeline as a baseline model (see Fig. 15). Some studies instead (or also) compared their performance to DL-based approaches, to highlight incremental improvements obtained by using different architectures or training methodology: of the studies therefore included at least one DL-based model as a baseline model. Out of the studies that did not compare their models to a baseline, six did not focus on the classification of EEG. Therefore, in total, of the studies did not report baseline comparisons, making it impossible to assess the added value of their proposed methods in terms of performance.

(a) Type of performance metrics used in the selected studies. Only metrics that appeared in at least three different studies are included in this figure.
(b) Cross-validation approaches.
Figure 12: Performance metrics and cross-validation approaches.

3.7.2 Performance metrics

The types of performance metrics used by studies focusing on EEG classification are shown in Fig. 11(a). Unsurprisingly, most studies used metrics derived from confusion matrices, such as accuracy, sensitivity, f1-score, ROC AUC and precision. As highlighted in [26, 200], it is often preferable to use metrics that are robust to class imbalance, such as balanced accuracy, f1-score, and, for binary problems, the ROC AUC. Class imbalance is common in sleep or epilepsy recordings, where clinical events are rare.
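The difference between plain and balanced accuracy is easy to see on a toy imbalanced problem (the class ratio below is invented for illustration):

```python
# Toy imbalanced problem: 90 negatives, 10 positives, and a degenerate
# classifier that predicts "negative" for every example.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100

# Plain accuracy rewards the majority class.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Balanced accuracy: the mean of per-class recalls.
def recall(cls):
    idx = [i for i, t in enumerate(y_true) if t == cls]
    return sum(y_pred[i] == cls for i in idx) / len(idx)

balanced_accuracy = (recall(0) + recall(1)) / 2
```

The degenerate classifier scores 90% plain accuracy but only 50% balanced accuracy, i.e., chance level, which is exactly the failure mode that matters when detecting rare clinical events.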

Studies that did not focus on the classification of EEG signals also mainly used accuracy as a metric. Indeed, these studies generally used a classification task to evaluate model performance, although their main purpose was different (e.g., correcting artifacts). In other cases, performance metrics specific to the study’s purpose, such as generating data, were used, e.g., the inception score ([148]), the Fréchet inception distance ([74]), as well as custom metrics.

3.7.3 Validation procedure

When evaluating a machine learning model, it is important to measure its generalization performance, i.e., how well it performs on unseen data. To do this, it is common practice to divide the available data into training and test sets. When hyperparameters need to be tuned, however, the performance on the test set can no longer be used as an unbiased estimate of the generalization performance of the model. The training set is therefore further divided to obtain a third set, called the “validation set”, which is used to select the best hyperparameter configuration, leaving the test set to evaluate the performance of the best model in an unbiased way. When the amount of available data is small, though, dividing the data into different sets and using only a subset for training can seriously undermine the performance of data-hungry models. A procedure known as “cross-validation” is used in these cases, in which the data is broken down into different partitions that are successively used as either training or validation data.
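A minimal sketch of the k-fold partitioning described above (the fold count and dataset size are arbitrary illustrative choices):

```python
def k_fold_indices(n_examples, k):
    """Split example indices into k contiguous folds; each fold serves once
    as held-out data while the remaining folds form the training set."""
    fold_sizes = [n_examples // k + (i < n_examples % k) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        held_out = set(range(start, start + size))
        train = [i for i in range(n_examples) if i not in held_out]
        splits.append((train, sorted(held_out)))
        start += size
    return splits

splits = k_fold_indices(10, 5)  # 5 folds over 10 examples
```

Each example appears exactly once in a held-out fold, so the k evaluation scores can be averaged into a single estimate of generalization performance.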

The cross-validation techniques used in the selected studies are shown in Fig. 11(b). Some studies mentioned using cross-validation but did not provide any details. The ‘Train-Valid-Test’ category includes studies doing random permutations of train/valid, train/test or train/valid/test splits, as well as studies that mentioned splitting their data into training, validation and test sets but did not detail the validation method. The Leave-One-Out variations correspond to the special case of the Leave-N-Out versions where N = 1. of the studies did not use any form of cross-validation. Interestingly, in [104], the authors proposed a ’warm restart’ within the gradient descent steps to remove the need for a validation set.

3.7.4 Subject handling

Whether a study focuses on intra- or inter-subject classification has an impact on the performance. Intra-subject models, which are trained and used on the data of a single subject, often lead to higher performance since the model has less data variability to account for. However, this means the data the model is trained on is obtained from a single subject, and thus often comprises only a few recordings. In inter-subject studies, models generally see more data, as multiple subjects are included, but must contend with greater data variability, which introduces different challenges.

In the case of inter-subject classification, the choice of validation procedure can have a large impact on the reported performance of a model. The Leave-N-Subjects-Out procedure, which uses different subjects for training and testing, may lead to lower performance, but is applicable to real-life scenarios in which a model must be used on a subject for whom no training data is available. In contrast, using k-fold cross-validation on the combined data from all subjects often means that the same subjects are seen in both the training and test sets. In the selected studies, 22 of the 108 studies using an inter-subject approach used a Leave-N-Subjects-Out or Leave-One-Subject-Out procedure.
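The Leave-One-Subject-Out split can be sketched in a few lines; the subject labels below are hypothetical:

```python
# One subject tag per example, in recording order (hypothetical labels).
subjects = ["s1", "s1", "s2", "s2", "s2", "s3", "s3"]

def leave_one_subject_out(subjects):
    """Yield (held-out subject, train indices, test indices); the test set
    contains every example from exactly one subject, none of which appear
    in the training set."""
    for held_out in sorted(set(subjects)):
        test = [i for i, s in enumerate(subjects) if s == held_out]
        train = [i for i, s in enumerate(subjects) if s != held_out]
        yield held_out, train, test

splits = list(leave_one_subject_out(subjects))
```

Because no example from the test subject ever reaches the training set, the resulting score reflects the realistic scenario of deploying the model on a completely new user.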

In the selected studies, focused only on intra-subject classification, focused only on inter-subject classification, focused on both, and did not mention it. Obviously, ’N/M’ studies necessarily fall under one of the three previous categories. The ‘N/M’ might be due to certain domains using a specific type of experiment (i.e. intra or inter-subject) almost exclusively, thereby obviating the need to mention it explicitly.

Fig. 13 shows that there has been a clear trend over the last few years to leverage DL for inter-subject rather than intra-subject analysis. In [34], the authors used a large dataset and tested the performance of their model both on new (unseen) subjects and on known (seen) subjects. They obtained accuracy on unseen subjects and on seen subjects, showing that classifying EEG data from unseen subjects can be significantly more challenging than from seen ones.

In [184], the authors compared their model on both intra- and inter-subject tasks. Although the former provides the model with less training data than the latter, it led to better results. In [62], the authors compared different DL models and showed that cross-subject (37 subjects) models always performed worse than within-subject models. In [127], a hybrid system trained on multiple subjects and then fine-tuned on subject-specific data led to the best performance. Finally, in [175], the authors compared their DNN to a state-of-the-art traditional approach and showed that deep networks generalize better across subjects, although their intra-subject performance remained higher than their inter-subject performance.

Figure 13: Distribution of intra- vs. inter-subject studies per year.

3.7.5 Statistical testing

To assess whether a proposed model is actually better than a baseline model, statistical tests are useful. In total, of the selected studies used statistical tests to compare the performance of their models to baseline models. The tests most often used were Wilcoxon signed-rank tests, followed by ANOVAs.

3.7.6 Comparison of results

Although, as explained above, many factors make this kind of comparison imprecise, we show in this section how the proposed approaches and traditional baseline models compared, as reported by the selected studies.

We focus on a specific subset of the studies to make the comparison more meaningful. First, we focus on studies that report accuracy as a direct measure of task performance. As shown in Fig. 11(a), this includes the vast majority of the studies. Second, we only report studies which compared their models to a traditional baseline, as we are interested in whether DL leads to better results than non-DL approaches. This means studies which only compared their results to other DL approaches are not included in this comparison. Third, some studies evaluated their approach on more than one task or dataset. In this case, we report the results on the task that has the most associated baselines. If that is more than one, we either report all tasks, or aggregate them if they are very similar (e.g., binary classification of multiple mental tasks, where performance is reported for each possible pair of tasks). In the case of multimodal studies, we only report the performance on the EEG-only task, if it is available. Finally, when reporting accuracy differences, we focus on the difference between the best proposed model and the best baseline model, per task. Following these constraints, a total of studies/tasks were left for our analysis.

Figure 14 shows the difference in accuracy between each proposed model and corresponding baseline per domain type (as categorized in Fig. 4), as well as the corresponding distribution over all included studies and tasks.

The median gain in accuracy with DL is , with an interquartile range of . Only four values were negative, meaning the proposed DL approach led to lower performance than the baseline. The largest improvement in accuracy was obtained in [161], where the proposed approach led to a gain of in accuracy on an RSVP classification task.

Figure 14: Difference in accuracy between each proposed DL model and corresponding baseline model for studies reporting accuracy (see Section 3.7.6 for a description of the inclusion criteria). The difference in accuracy is defined as the difference between the best DL model and the best corresponding baseline. In the top figure, each study/task is represented by a single point, and studies are grouped according to their respective domains. The bottom figure is a box plot representing the overall distribution.

3.8 Reproducibility

Reproducibility is a cornerstone of science [111]: reproducible results are fundamental to moving a field forward, especially in a field like machine learning where new ideas spread very quickly. Here, we evaluate the ease with which the results of the selected papers could be reproduced by the community using two key criteria: the availability of their data and the availability of their code.

Figure 15: Reproducibility of the selected studies. (a) Availability of the datasets used in the studies, (b) availability of the code, shown by where the code is hosted, (c) type of baseline used to evaluate the performance of the trained models and (d) estimated reproducibility level of the studies (Easy: both the data and the code are available; Medium: the code is available but some data is not publicly available; Hard: either the code or the data is available, but not both; Impossible: neither the data nor the code is available).

Of the 156 studies reviewed, used public data, used private data (data that is not freely available online was considered private regardless of when and where it was recorded; three of the reviewed studies mentioned that their data was available upon request, but these were nonetheless included in the “private” category), and used both public and private data. Studies focusing on BCI, epilepsy, sleep and affective monitoring made the most use of openly available datasets (see Table 6). Interestingly, in cognitive monitoring, no publicly available datasets were used: papers in that field all relied on internal recordings.

Fittingly, a total of 33 papers (21%) explicitly mentioned that more publicly available data is required to support research on DL-EEG. In clinical settings, the lack of labeled data, rather than the quantity of data, was specifically pointed out as an obstacle.

As for the source code, only of the studies made it available online [82, 149, 160, 225, 197, 152, 87, 150, 224, 222, 167, 223, 221, 104, 161, 15, 164, 163, 85] and, as illustrated in Fig. 15, GitHub is by far the preferred code sharing platform. Needless to say, having access to the source code behind published results can drastically reduce the time required to reproduce them and increase the incentive to do so.

Therefore, taking both data and code availability into account, only 11 out of 156 studies could easily be reproduced using both the same data and code [149, 160, 152, 224, 222, 167, 221, 104, 161, 164, 85]. Four out of 156 studies shared their code but tested on both private and public data, making their results only partially reproducible [225, 87, 150, 223] (see Fig. 15). Finally, a significant number of studies (61) made neither their data nor their code publicly available, making them almost impossible to reproduce.

It is important to note, moreover, that for the results of a study to be perfectly reproduced, the authors would also need to share the weights (i.e. parameters) of the network. Sharing the code and the architecture of the network might not be sufficient since retraining the network could converge to a different minimum. On the other hand, retraining the network could also end up producing better results if a better performing model is obtained. For recommendations on how to best share the results, the code, the data and relevant information to make a study easy to reproduce, please see the discussion section and the checklist provided in Appendix B.

Main domain Dataset # articles References
Affective DEAP 9 [100, 6, 17, 102, 200, 103, 42, 79, 96]
SEED 3 [220, 103, 228]
BCI BCI Competition 13 [43, 87, 146, 150, 150, 170, 170, 109, 147, 204, 37, 37, 25]
Other 8 [68, 87, 150, 63, 63, 63, 166, 8]
eegmmidb 8 [216, 107, 224, 222, 226, 36, 121, 59]
Keirn & Aunon (1989) 2 [125, 133]
MAHNOB 1 [39]
Cognitive Other 4 [82, 61, 61, 61]
EEG Eye State 1 [118]
Epilepsy CHB-MIT 9 [201, 183, 212, 181, 180, 127, 175, 138, 184]
Bonn University 7 [76, 185, 4, 171, 2, 124, 117]
TUH 5 [51, 153, 50, 49, 205]
Other 3 [181, 50, 173]
Freiburg Hospital 2 [181, 180]
Generation of data BCI Competition 2 [33, 219]
MAHNOB 1 [192]
Other 1 [152]
SEED 1 [192]
Improvement of processing tools BCI Competition 3 [202, 165, 203]
Other 2 [206, 164]
Bonn University 1 [195]
CHB-MIT 1 [195]
Others TUH 3 [149, 143, 225]
eegmmidb 3 [225, 223, 221]
Other 2 [188, 225]
EEG Eye State 1 [91]
Sleep MASS 4 [137, 26, 167, 38]
Sleep EDF 4 [190, 167, 199, 182]
Other 3 [160, 179, 47]
UCDDB 3 [86, 110, 85]
Table 6: Most often used datasets by domain. Datasets that were only used by one study are grouped under "Other" for each category.

4 Discussion

In this section, we review the most important findings from our results section, and discuss the significance and impact of various trends highlighted above. We also provide recommendations for DL-EEG studies and present a checklist to ensure reproducibility in the field.

4.1 Rationale

It was expected that most papers selected for the review would focus on the classification of EEG data, as DL has historically led to important improvements on supervised classification problems [88]. Interestingly though, several papers also focused on new applications that were made possible or facilitated by DL: for instance, generating images conditioned on EEG, generating EEG, transfer learning between subjects, or feature learning. One of the main motivations for using DL cited by the papers reviewed was the ability to use raw EEG with no manual feature extraction steps. We expect these kinds of applications that go beyond using DL as a replacement for traditional processing pipelines to gain in popularity.

4.2 Data

A critical question concerning the use of DL with EEG data remains “How much data is enough data?”. In Section 3.3, we explored this question by looking at various descriptive dimensions: the number of subjects, the amount of EEG recorded, the number of training/test/validation examples, the sampling rate and data augmentation schemes used.

Although a definitive answer cannot be reached, the results of our meta-analysis show that the amount of data necessary to at least match the performance of traditional approaches is already available. Out of the 156 papers reviewed, only six reported lower performance for DL methods than for traditional benchmarks. To achieve these results with limited amounts of data, shallower architectures were often preferred. Data augmentation techniques were also used successfully to improve performance when only limited data was available. However, more work is required to clearly assess their advantages and disadvantages. Indeed, although many studies used overlapping sliding windows, there seems to be no consensus on the best overlap percentage to use; e.g., the impact of using a sliding window with 1% versus 95% overlap is still not clear. BCI studies had the highest variability for this hyperparameter, while clinical applications such as sleep staging already appear more standardized, with most studies using 30 s non-overlapping windows.
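The effect of the overlap percentage on the number of training examples can be sketched as follows (60 s of 128 Hz EEG and 2 s windows are illustrative choices, not taken from any reviewed study):

```python
def window_starts(n_samples, window_len, overlap):
    """Start indices of sliding windows over a recording; `overlap` is the
    fraction of each window shared with the next (0.0 = non-overlapping)."""
    step = max(1, int(window_len * (1.0 - overlap)))
    return list(range(0, n_samples - window_len + 1, step))

n = 60 * 128                                # 60 s of EEG sampled at 128 Hz
no_overlap = window_starts(n, 256, 0.0)     # 2 s windows, no overlap
half_overlap = window_starts(n, 256, 0.5)   # same windows, 50% overlap
```

Going from 0% to 50% overlap roughly doubles the number of examples, but adjacent windows share half their samples, so the examples are no longer independent; this trade-off is part of why the best overlap percentage remains unclear.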

Many authors concluded their paper suggesting that having access to more data would most likely improve the performance of their models. With large datasets becoming public, such as the TUH Dataset [65] and the National Sleep Research Resource [217], deeper architectures similar to the ones used in computer vision might become increasingly usable. However, it is important to note that the availability of data is quite different across domains. In clinical fields such as sleep and epilepsy, data usually comes from hospital databases containing years of recordings from several patients, while other fields usually rely on data coming from lab experiments with a limited number of subjects.

The potential of DL in EEG also lies in its ability, at least in theory, to generalize across subjects and to enable transfer learning across tasks and domains. When only limited data is available, intra-subject models still work best given the inherent subject variability of EEG data. However, transfer learning might be the key to moving past this limitation. Indeed, Page and colleagues [127] showed that, with hybrid models, one can train a neural network on a pool of subjects and then fine-tune it on a specific subject, achieving good performance without needing as much data from that subject.
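
The pool-then-fine-tune strategy can be sketched with a toy linear model (this is our own minimal numpy illustration of the general recipe, not the hybrid architecture of [127]): pretrain on pooled subjects, then continue training the same weights on a handful of subject-specific trials.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=300):
    """Gradient-descent logistic regression. Pass w=None to train from
    scratch, or existing weights to fine-tune them."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
# "Pool" of subjects: 500 trials of an 8-dimensional feature vector.
X_pool = rng.normal(size=(500, 8))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(float)
# Target subject: only 20 labelled trials, with a subject-specific shift.
shift = rng.normal(scale=0.3, size=8)
X_raw = rng.normal(size=(20, 8))
y_subj = (X_raw[:, 0] + X_raw[:, 1] > 0).astype(float)
X_subj = X_raw + shift

w_pool = train_logreg(X_pool, y_pool)             # pretrain on the pool
w_subj = train_logreg(X_subj, y_subj, w=w_pool)   # fine-tune on 20 trials
```

In practice the same recipe applies to deep networks: lower layers are trained on the pooled data and only briefly updated, or frozen, when adapting to the new subject.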

While we did report the sampling rate, we did not investigate its effect on performance, because no clear relationship stood out in any of the reviewed papers. The impact of the number of channels, however, was specifically studied. For example, in [26], the authors showed that they could achieve comparable results with a lower number of channels. As shown in Fig. 7(a), a few studies used low-cost EEG devices, which are typically limited to a lower number of channels. These more accessible devices might therefore benefit from DL methods, but could also enable faster data collection on a larger scale, thus facilitating DL in return.

As DL-EEG is highly data-driven, it is important when publishing results to clearly specify the amount of data used and to clarify terminology (see Table 1 for an example). We noticed that many of the reviewed studies did not clearly describe the EEG data they used (e.g., the number of subjects, the number of sessions, the window length used to segment the EEG data, etc.), making it hard or impossible for the reader to evaluate the work and compare it to others. Moreover, reporting learning curves (i.e., performance as a function of the number of training examples) would give the reader valuable insight into the bias and variance of the model.
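
A learning curve of the kind we suggest reporting can be computed in a few lines; the classifier and data below are toy stand-ins chosen purely for illustration.

```python
import numpy as np

def learning_curve(X, y, X_test, y_test, sizes, fit, score):
    """Accuracy on a fixed test set as a function of training-set size."""
    return [score(fit(X[:n], y[:n]), X_test, y_test) for n in sizes]

# Toy two-class data and a nearest-class-mean classifier.
rng = np.random.default_rng(1)
def make_data(n):
    y = np.arange(n) % 2                          # alternating labels
    X = rng.normal(size=(n, 16)) + y[:, None] * 1.5
    return X, y

X, y = make_data(1000)
X_te, y_te = make_data(500)
fit = lambda X, y: (X[y == 0].mean(axis=0), X[y == 1].mean(axis=0))
def score(model, X, y):
    m0, m1 = model
    pred = np.linalg.norm(X - m1, axis=1) < np.linalg.norm(X - m0, axis=1)
    return float((pred == (y == 1)).mean())

curve = learning_curve(X, y, X_te, y_te, [10, 50, 200, 1000], fit, score)
```

A curve that is still rising at the largest training size suggests the model is data-limited (high variance); a flat, low curve instead points to model bias.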

4.3 EEG processing

According to our findings, the great majority of the reviewed papers preprocessed the EEG data before feeding it to the deep neural network or extracting features. Despite this trend, we also noticed that recent studies outperformed their respective baseline(s) using completely raw EEG data. Almogbel et al. [7] used raw EEG data to classify cognitive workload in vehicle drivers, and their best model achieved a higher classification accuracy than their benchmarks, which employed preprocessing of the EEG data. Similarly, Aznan et al. [11] outperformed the baselines on SSVEP decoding using no preprocessing. Thus, the answer to whether it is necessary to preprocess EEG data when using DNNs remains elusive.

As most of the works considered did not use, or did not explicitly mention using, artifact removal methods, it appears that this step of the EEG processing pipeline is in general not required. However, in specific cases, such as tasks that inherently elicit quick eye movements (e.g., the MATB-II [31]), artifact handling might still be crucial to obtaining the desired performance.

One important aspect we focused on is whether it is necessary to use EEG features as inputs to DNNs. After analyzing the type of input used by each paper, we observed that there was no clear preference for using features or raw EEG time series as input. We noticed, though, that most of the papers using CNNs used raw EEG as input. With CNNs becoming increasingly popular, one can conclude that there is a trend towards using raw EEG instead of hand-engineered features. This is not surprising, as one of the main motivations mentioned for applying DNNs to EEG processing is to automatically learn features. Furthermore, frequency-based features, which are widely used as hand-crafted features in EEG [105], are very similar to the temporal filters learned by a CNN: these features are often extracted using Fourier filters, which apply a convolution operation, exactly like the temporal filters of a CNN, except that in the case of CNNs the filter coefficients are learned.
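
The parallel between hand-crafted frequency features and learned temporal filters can be made concrete: below, a fixed windowed-sinc band-pass filter is applied by convolution, which is the same operation a CNN's first temporal layer performs with learned instead of fixed coefficients (the signal and filter parameters are illustrative).

```python
import numpy as np

fs = 256                              # sampling rate (Hz)
t = np.arange(2 * fs) / fs            # 2 s of signal
# Synthetic "EEG": a 10 Hz alpha component plus a 40 Hz gamma component.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Windowed-sinc FIR band-pass for the alpha band (8-12 Hz).
n = np.arange(-64, 65)
def lowpass(fc):
    return 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.hamming(len(n))
h_alpha = lowpass(12) - lowpass(8)    # band-pass = difference of low-passes

# Filtering is a convolution -- the same form as a CNN temporal kernel.
alpha = np.convolve(x, h_alpha, mode="same")
```

A CNN replaces the fixed coefficients `h_alpha` with kernels fitted to the task, which is why learned first-layer filters often resemble band-pass filters after training.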

From our analysis, we also aimed to identify which input type should be used when trying to solve a problem from scratch. While the answer depends on many factors, such as the domain of application, we observed that in some cases raw EEG as input consistently outperformed baselines using classically extracted features. For example, for seizure classification, recently proposed models using raw EEG data as input [64, 185, 156] achieved better performance than classical baseline methods, such as SVMs with frequency-domain features. For this particular task, we believe following the current trend of using raw EEG data is the best way to start exploring a new approach.

4.4 Deep learning methodology

Another major topic this review aimed to cover is the DL methodology itself. Our analysis focused on architecture trends and training decisions, as well as on model selection techniques.

4.4.1 Architecture

Given the inherent temporal structure of EEG, we expected RNNs to be more widely employed than models that do not explicitly take time dependencies into account. However, almost half of the selected papers used CNNs. This observation is in line with recent discussions and findings regarding the effectiveness of CNNs for processing time series [12]. We also noticed that the use of energy-based models such as RBMs has been decreasing, whereas architectures popular in the computer vision community, such as GANs, have started to be applied to EEG data as well.

Moreover, regarding architecture depth, most of the papers used fewer than five layers. Comparing this with popular ImageNet object recognition models such as VGG and ResNet, which comprise 19 and 34 layers respectively, we conclude that for EEG data, shallower networks are currently necessary. Schirrmeister et al. [177] specifically focused on this aspect, comparing the performance of architectures with different depths and structures, such as fully convolutional layers and residual blocks, on different tasks. Their results showed that in most cases, shallower fully convolutional models outperformed their deeper counterparts and architectures with residual connections.

4.4.2 Training and optimization

Although hyperparameter search is crucial to achieving good results with neural networks, only a small fraction of the papers employed some hyperparameter search strategy, and even fewer provided detailed information about the method used. Amongst these, Stober et al. [164] described their hyperparameter selection method and cited its corresponding implementation; in addition, the available budget in number of iterations per search trial, as well as the cross-validation split, were mentioned in the paper.
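
For reference, a generic random-search loop in the spirit of Bergstra and Bengio [18] takes only a few lines; the search space below is an arbitrary example of ours, and `evaluate` stands in for training a model and returning its validation score.

```python
import random

# Example search space: each entry is a sampler for one hyperparameter.
SPACE = {
    "lr": lambda: 10 ** random.uniform(-4, -2),   # log-uniform learning rate
    "dropout": lambda: random.uniform(0.0, 0.5),
    "n_layers": lambda: random.randint(1, 5),
}

def random_search(evaluate, n_trials=20, seed=0):
    """Sample n_trials configurations; return (best_score, best_params)."""
    random.seed(seed)
    best_score, best_params = -float("inf"), None
    for _ in range(n_trials):                     # fixed evaluation budget
        params = {name: sample() for name, sample in SPACE.items()}
        s = evaluate(params)
        if s > best_score:
            best_score, best_params = s, params
    return best_score, best_params

# Toy objective standing in for validation accuracy.
score, params = random_search(
    lambda p: 1 - 100 * abs(p["lr"] - 1e-3) - abs(p["dropout"] - 0.25))
```

Reporting the space, the budget (`n_trials`) and the seed, as [164] did, is enough for others to reproduce such a search.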

4.4.3 Model inspection

Inspecting trained DL models is important, as DNNs are notoriously seen as black boxes compared to more traditional methods. This is problematic in clinical settings, for instance, where understanding and explaining the decisions made by a classification model might be critical to making informed clinical choices. Neuroscientists might also be interested in what drives a model’s decisions and use that information to shape hypotheses about brain function.

Only a minority of the reviewed papers looked at interpreting their models. Interesting work on the topic, specifically tailored to EEG, was reviewed in [150, 67, 45]. Sustained efforts aimed at inspecting models and understanding the patterns they rely on to reach decisions are necessary to broaden the use of DL for EEG processing.

4.5 Reported results

Our meta-analysis focused on how studies compared classification accuracy between their models and traditional EEG processing pipelines on the same data. Although a great majority of studies reported improvements over traditional pipelines, this result has to be taken with a grain of salt. First, the difference in accuracy alone does not tell the whole story: the same absolute improvement is typically much harder to achieve when the baseline accuracy is already high than when it is low. More importantly, very few articles reported worse performance than their baselines, which could be explained by a publication bias towards positive results.

The reported baseline comparisons were highly variable: some used simple models (e.g., combining straightforward spectral features and linear classifiers), others used more sophisticated pipelines (including multiple features and non-linear approaches), while a few reimplemented or cited state-of-the-art models that were published on the same dataset and/or task. Since the observed improvement will likely be higher when comparing to simple baselines than to state-of-the-art results, the values that we report might be biased positively. For instance, only two studies used Riemannian geometry-based processing pipelines as baseline models [11, 87], although these methods have set a new state-of-the-art in multiple EEG classification tasks [105].

Moreover, many different tasks, and thus datasets, were used. These datasets are often private, meaning there is very limited or no previous literature reporting results on them. On top of this, the lack of reproducibility standards leads to low accountability: study results are not expected to be replicated, and reported performance can be inflated by non-standard practices such as omitting cross-validation.

Different approaches have been taken to solve the problem of heterogeneity of result reporting and benchmarking in the field of machine learning. For instance, OpenML [189] is an online platform that facilitates the sharing and running of experiments, as well as the benchmarking of models. As of November 2018, the platform already contained one EEG dataset and multiple submissions. The MOABB [78], a solution tailored to the field of brain-computer interfacing, is a software framework for ensuring the reproducibility of BCI experiments and providing public benchmarks for many BCI datasets. In [73], a similar approach, but for DL specifically, is proposed.

Additionally, a few EEG/MEG/ECoG classification competitions have been organized online in recent years, for instance on the Kaggle platform (see Table 1 of [32]). These competitions informally act as benchmarks: they provide a standardized dataset with training and test splits, as well as a leaderboard listing the performance achieved by every competitor. They can therefore be used to evaluate the state of the art, as they provide a publicly available comparison point for newly proposed architectures. For instance, the IEEE NER 2015 conference competition on error potential decoding could have been used as a benchmark by the reviewed studies focusing on this topic.

Making use of these tools, or extending them to other EEG-specific tasks, appears to be one of the greatest challenges for the field of DL-EEG at the moment, and might be the key to more efficient and productive development of practical EEG applications. Whenever possible, authors should make sure to provide as much information as possible on the baseline models they have used, and explain how to replicate their results (see Section 4.6).

4.6 Reproducibility

The widespread use of public EEG datasets across the reviewed studies suggests that open data has greatly contributed to recent developments in DL-EEG. On the other hand, a substantial proportion of studies used data that is not publicly available, notably in domains such as cognitive monitoring. To move the field forward, it is thus important to create new benchmark datasets and share internal recordings. Moreover, the great majority of papers did not make their code available. Many of the papers reviewed are thus difficult to reproduce: the data is not available, the code has not been shared, and the baseline models used for performance comparisons are either non-existent or not available.

Recent initiatives to promote best practices in data and code sharing would benefit the field of DL-EEG. FAIR neuroscience [196] and the Brain Imaging Data Structure (BIDS) [56] both provide guidelines and standards on how to acquire, organize and share data and code. BIDS extensions specific to EEG [136] and MEG [119] were also recently proposed. Moreover, open source software toolboxes are available to perform DL experiments on EEG. For example, the recent toolbox developed by Schirrmeister and colleagues, called BrainDecode [150], enables faster and easier development cycles by providing the basic functionality required for DL-EEG analysis while offering high-level, easy-to-use functions to the user. The use of common software tools could facilitate reproducibility in the community. Beyond reproducibility, we believe that simplifying access to data, making domain knowledge accessible and sharing code will enable more people to enter the field of DL-EEG and contribute, transforming what has traditionally been a domain-specific problem into a more general one that can be tackled with machine learning and DL methods.

4.7 Recommendations

To improve the quality and reproducibility of the work in the field of DL-EEG, we propose six guidelines in Table 7. Moreover, Appendix B presents a checklist of items that are critical to ensuring reproducibility and should be included in future studies.

Recommendation Description
1 Clearly describe the architecture. Provide a table or figure clearly describing your model (e.g., see [26, 51, 150]).
2 Clearly describe the data used. Make sure the number of subjects, the number of examples, the data augmentation scheme, etc. are clearly described. Use unambiguous terminology or define the terms used (for an example, see Table 1).
3 Use existing datasets. Whenever possible, compare model performance on public datasets.
4 Include state-of-the-art baselines. If focusing on a research question that has already been studied with traditional machine learning, clarify the improvements brought by using DL.
5 Share internal recordings. Whenever possible.
6 Share reproducible code. Share code (including hyperparameter choices and model weights) that can easily be run on another computer, and potentially reused on new data.
Table 7: Recommendations for future DL-EEG studies. See Appendix B for a detailed list of items to include.

4.7.1 Supplementary material

Along with the current paper, we make our data items table and related code available online at http://dl-eeg.com. We encourage interested readers to consult it in order to dive deeper into the data items of specific interest to them; it should be straightforward to reproduce and extend the results and figures presented in this review using the code provided. The data items table is intended to be updated frequently with new articles, so results will be brought up to date periodically.

Authors of DL-EEG papers not included in the review are invited to submit a summary of their article following the format of our data items table to our online code repository. We also invite authors whose papers are already included in the review to verify the accuracy of our summary. Eventually, we would like to indicate which studies have been submitted or verified by the original authors.

By updating the data items table regularly and inviting researchers in the community to contribute, we hope to keep the supplementary material of the review relevant and up-to-date as long as possible.

4.8 Limitations

In this section, we briefly highlight some limitations of the present work. First, our decision to include arXiv preprints in the database search requires some justification. It is important to note that arXiv papers are not peer-reviewed; therefore, some of the studies we selected from arXiv might not match the quality and scientific rigor of those published in peer-reviewed journals or conferences. For this reason, whenever a preprint was followed by a publication in a peer-reviewed venue, we focused our analysis on the peer-reviewed version. ArXiv has been widely adopted by the DL community as a means to quickly disseminate results and encourage fast research iteration cycles. Since the field of DL-EEG is still young and a limited number of publications was available at the time of writing, we decided to include all the papers we could find, knowing that some of the newer trends would be mostly visible in repositories such as arXiv. Our goal with this review was to provide a transparent and objective analysis of the trends in DL-EEG. By including preprints, we feel we provide a better view of the current state of the art, and are in a better position to give recommendations on how to share results of DL-EEG studies moving forward.

Second, in order to keep this review reasonable in length, we decided to focus our analysis on the points we judged most interesting and valuable. As a result, various factors that impact the performance of DL-EEG were not covered in the review. For example, we did not cover weight initialization: in [51], the authors compared 10 different initialization methods and showed that the choice of method had a clear impact on the specificity metric. Similarly, multiple data items were collected during the review process but not included in the analysis. These items, which include data normalization procedures, software toolboxes, hyperparameter values, loss functions, training hardware, training time, etc., remain available online for the interested reader. We are confident other reviews or research articles will be able to focus on these more specific elements.

Third, as with any literature review of a quickly evolving field, the relevance of our analysis decays with time as new articles are published and new trends are established. Since our last database search, we have already identified other articles that should eventually be added to the analysis. Again, making this work a living review by providing the data and code online will hopefully ensure the review remains of value and relevant for years to come.

5 Conclusion

The usefulness of EEG as a functional neuroimaging tool is unequivocal: clinical diagnosis of sleep disorders and epilepsy, monitoring of cognitive and affective states, as well as brain-computer interfacing all rely heavily on the analysis of EEG. However, various challenges remain to be solved. For instance, time-consuming tasks currently carried out by human experts, such as sleep staging, could be automated to increase the availability and flexibility of EEG-based diagnosis. Additionally, better generalization performance between subjects will be necessary to truly make BCIs useful. DL has been proposed as a potential candidate to tackle these challenges. Consequently, the number of publications applying DL to EEG processing has seen an exponential increase over the last few years, clearly reflecting the community’s growing interest in these techniques.

In this review, we highlighted current trends in the field of DL-EEG by analyzing 156 studies published between January 2010 and July 2018 applying DL to EEG data. We focused on several key aspects of the studies, including their origin, rationale, the data they used, their EEG processing methodology, DL methodology, reported results and level of reproducibility.

Among the major trends that emerged from our analysis, we found that 1) DL was mainly used for classifying EEG in domains such as brain-computer interfacing, sleep, epilepsy, and cognitive and affective monitoring, 2) the quantity of data used varied widely, with datasets ranging from 1 to over 16,000 subjects (mean = 223; median = 13), producing from 62 up to 9,750,000 examples (mean = 251,532; median = 14,000) and from two to 4,800,000 minutes of EEG recording (mean = 62,602; median = 360), 3) various architectures have been used successfully on EEG data, with CNNs, followed by RNNs and autoencoders, being used most often, 4) there is a clear growing interest in using raw EEG as input as opposed to handcrafted features, 5) almost all studies reported a small improvement from using DL when compared to other baselines and benchmarks, and 6) while several studies used publicly available data, only a handful shared their code; the great majority of studies reviewed thus cannot easily be reproduced.

Moreover, given the high variability in how results were reported, we made six recommendations to ensure reproducibility and fair comparison of results: 1) clearly describe the architecture, 2) clearly describe the data used, 3) use existing datasets, whenever possible, 4) include state-of-the-art baselines, ideally using the original authors’ code, 5) share internal recordings, whenever possible, and 6) share code, as it is the best way to allow others to pick up where your work leaves off. We also provided a checklist (see Appendix B) to help authors of DL-EEG studies make sure all the relevant information is available in their publications to allow straightforward reproduction.

Finally, to help the DL-EEG community maintain an up-to-date list of published work, we made our data items table open and available online. The code to reproduce the statistics and figures of this review as well as the full summaries of the papers are also available at http://dl-eeg.com.

The current general interest in artificial intelligence and DL has greatly benefited various fields of science and technology. Advancements in other fields of application will most likely benefit the neuroscience and neuroimaging communities in the near future, and enable more pervasive and powerful applications based on EEG processing. We hope this review will constitute a good entry point for EEG researchers interested in applying DL to their data, as well as a good summary of the current state of the field for DL researchers looking to apply their knowledge to new types of data.

Acknowledgments

We thank Raymundo Cassani, Colleen Gillon, João Monteiro and William Thong for comments that greatly improved the manuscript.

Funding

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) for YR (reference number: RDPJ 514052-17), HB, IA and THF, the Fonds québécois de la recherche sur la nature et les technologies (FRQNT) for YR and InteraXon Inc. (graduate funding support) for HB.

References

  • [1] Aboalayon, K. A. I., Faezipour, M., Almuhammadi, W. S., and Moslehpour, S. Sleep stage classification using EEG signal analysis: A comprehensive survey and new investigation. Entropy 18, 9 (2016), 272.
  • [2] Acharya, U. R., Oh, S. L., Hagiwara, Y., Tan, J. H., and Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Computers in Biology and Medicine, August (2017), 1–9.
  • [3] Acharya, U. R., Sree, S. V., Swapna, G., Martis, R. J., and Suri, J. S. Automated EEG analysis of epilepsy: a review. Knowledge-Based Systems 45 (2013), 147–165.
  • [4] Ahmedt-Aristizabal, D., Fookes, C., Nguyen, K., and Sridharan, S. Deep Classification of Epileptic Signals. arXiv preprint (2018), 1–4.
  • [5] Al-Nafjan, A., Hosny, M., Al-Ohali, Y., and Al-Wabil, A. Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Applied Sciences 7, 12 (2017), 1239.
  • [6] Alhagry, S., Fahmy, A. A., and El-Khoribi, R. A. Emotion Recognition based on EEG using LSTM Recurrent Neural Network. IJACSA) International Journal of Advanced Computer Science and Applications 8, 10 (2017), 8–11.
  • [7] Almogbel, M. A., Dang, A. H., and Kameyama, W. EEG-Signals Based Cognitive Workload Detection of Vehicle Driver using Deep Learning. 20th International Conference on Advanced Communication Technology 7 (2018), 256–259.
  • [8] An, J., and Cho, S. Hand motion identification of grasp-and-lift task from electroencephalography recordings using recurrent neural networks. 2016 International Conference on Big Data and Smart Computing, BigComp 2016 (2016), 427–429.
  • [9] An, X., Kuang, D., Guo, X., Zhao, Y., and He, L. A deep learning method for classification of eeg data based on motor imagery. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8590 LNBI (2014), 203–210.
  • [10] Arns, M., Conners, C. K., and Kraemer, H. C. A Decade of EEG Theta/Beta Ratio Research in ADHD: A Meta-Analysis. Journal of Attention Disorders 17, 5 (2013), 374–383.
  • [11] Aznan, N. K. N., Bonner, S., Connolly, J. D., Moubayed, N. A., and Breckon, T. P. On the Classification of SSVEP-Based Dry-EEG Signals via Convolutional Neural Networks. arXiv preprint (2018).
  • [12] Bai, S., Kolter, J. Z., and Koltun, V. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv preprint (2018).
  • [13] Baltatzis, V., Bintsi, K.-M., Apostolidis, G. K., and Hadjileontiadis, L. J. Bullying incidences identification within an immersive environment using HD EEG-based analysis: A Swarm Decomposition and Deep Learning approach. Scientific Reports 7, 1 (2017), 17292.
  • [14] Bashivan, P., Rish, I., and Heisig, S. Mental State Recognition via Wearable EEG. arXiv preprint (2016).
  • [15] Bashivan, P., Rish, I., Yeasin, M., and Codella, N. Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks. arXiv preprint (2016), 1–15.
  • [16] Behncke, J., Schirrmeister, R. T., Burgard, W., and Ball, T. The signature of robot action success in EEG signals of a human observer: Decoding and visualization using deep convolutional neural networks. arXiv (2017).
  • [17] Ben Said, A., Mohamed, A., Elfouly, T., Harras, K., and Wang, Z. J. Multimodal deep learning approach for Joint EEG-EMG Data compression and classification. IEEE Wireless Communications and Networking Conference, WCNC (2017).
  • [18] Bergstra, J., and Bengio, Y. Random search for hyper-parameter optimization. The Journal of Machine Learning Research 13, 1 (2012), 281–305.
  • [19] Berka, C., Levendowski, D. J., Lumicao, M. N., Yau, A., Davis, G., Zivkovic, V. T., Olmstead, R. E., Tremoulet, P. D., and Craven, P. L. EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks. Aviation, Space, and Environmental Medicine 78, 5 (2007), B231–B244.
  • [20] Bigdely-Shamlo, N., Mullen, T., Kothe, C., Su, K.-M., and Robbins, K. A. The PREP pipeline: standardized preprocessing for large-scale EEG analysis. Frontiers in Neuroinformatics 9 (2015), 16.
  • [21] Bishop, C. M. Neural Networks for Pattern Recognition, vol. 92. Oxford university press, 1995.
  • [22] Biswal, S., Kulas, J., Sun, H., Goparaju, B., Westover, M. B., Bianchi, M. T., and Sun, J. SLEEPNET: Automated Sleep Staging System via Deep Learning. arXiv preprint (2017), 1–17.
  • [23] Bu, N., Shima, K., and Tsuji, T. EEG discrimination using wavelet packet transform and a reduced-dimensional recurrent neural network. Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine 0, 2 (2010), 1–4.
  • [24] Cecotti, H., Eckstein, M. P., and Giesbrecht, B. Single-Trial Classification of Event-Related Potentials in Rapid Serial Visual Presentation Tasks Using Supervised Spatial Filtering. IEEE Trans. Neural Netw. Learning Syst. 25, 11 (2014), 2030–2042.
  • [25] Cecotti, H., and Gräser, A. Convolutional neural networks for P300 detection with application to brain-computer interfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 3 (2011), 433–445.
  • [26] Chambon, S., Galtier, M. N., Arnal, P. J., Wainrib, G., Gramfort, A., Paristech, T., and Nov, M. L. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Transactions on Neural Systems and Rehabilitation Engineering (2017), 1–12.
  • [27] Chiarelli, A. M., Croce, P., Merla, A., and Zappasodi, F. Deep Learning for hybrid EEG-fNIRS Brain-Computer Interface: application to Motor Imagery Classification. Journal of Neural Engineering (2018), 0–17.
  • [28] Chu, L., Qiu, R., Liu, H., Ling, Z., Zhang, T., and Wang, J. Individual Recognition in Schizophrenia using Deep Learning Methods with Random Forest and Voting Classifiers: Insights from Resting State EEG Streams. arXiv preprint (2017), 1–7.
  • [29] Clerc, M., Bougrain, L., and Lotte, F. Brain-Computer Interfaces 1: Foundations and Methods. Wiley, 2016.
  • [30] Cole, S. R., and Voytek, B. Cycle-by-cycle analysis of neural oscillations. bioRxiv (2018).
  • [31] Comstock, J. R. Mat - Multi-Attribute Task Battery for Human Operator Workload and Strategic Behavior Research.
  • [32] Congedo, M., Barachant, A., and Bhatia, R. Riemannian geometry for EEG-based brain-computer interfaces; a primer and a review. Brain-Computer Interfaces 4, 3 (2017), 155–174.
  • [33] Corley, I. A., and Huang, Y. Deep EEG Super-resolution: Upsampling EEG Spatial Resolution with Generative Adversarial Networks. In IEEE EMBS International Conference on Biomedical & Health Informatics (BHI) (2018), no. March, pp. 4–7.
  • [34] Deiss, O., Biswal, S., Jin, J., Sun, H., Westover, M. B., and Sun, J. HAMLET: Interpretable Human And Machine co-LEarning Technique. arXiv preprint (2018).
  • [35] Deng, J., Berg, A., Satheesh, S., Su, H., Khosla, A., and Fei-Fei, L. ILSVRC-2012, 2012. URL: http://www.image-net.org/challenges/LSVRC (2012).
  • [36] Dharamsi, T., Das, P., Pedapati, T., Bramble, G., Muthusamy, V., Samulowitz, H., Varshney, K. R., Rajamanickam, Y., Thomas, J., and Dauwels, J. Neurology-as-a-Service for the Developing World. arXiv preprint, Nips (2017), 1–5.
  • [37] Ding, S., Zhang, N., Xu, X., Guo, L., and Zhang, J. Deep Extreme Learning Machine and Its Application in EEG Classification. Mathematical Problems in Engineering 2015 (2015).
  • [38] Dong, H., Supratak, A., Pan, W., Wu, C., Matthews, P. M., and Guo, Y. Mixed Neural Network Approach for Temporal Sleep Stage Classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, 2 (2018), 324–333.
  • [39] Drouin-Picaro, A., and Falk, T. H. Using deep neural networks for natural saccade classification from electroencephalograms. In 2016 IEEE EMBS International Student Conference: Expanding the Boundaries of Biomedical Engineering and Healthcare, ISC 2016 - Proceedings (2016), IEEE, pp. 1–4.
  • [40] Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research 12, Jul (2011), 2121–2159.
  • [41] Engemann, D. A., Raimondo, F., King, J.-R., Rohaut, B., Louppe, G., Faugeras, F., Annen, J., Cassol, H., Gosseries, O., Fernandez-Slezak, D., et al. Robust eeg-based cross-site and cross-protocol classification of states of consciousness. Brain 141, 11 (2018), 3179–3192.
  • [42] Frydenlund, A., and Rudzicz, F. Emotional Affect Estimation Using Video and EEG Data in Deep Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9091. 2015, pp. 273–280.
  • [43] Gao, G., Shang, L., Xiong, K., Fang, J., Zhang, C., and Gu, X. Eeg classification based on sparse representation and deep learning. NeuroQuantology 16, 6 (2018), 789–795.
  • [44] Gao, Y., Lee, H. J., and Mehmood, R. M. Deep learninig of EEG signals for emotion recognition. In 2015 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2015 (jun 2015), IEEE, pp. 1–5.
  • [45] Ghosh, A., Dal Maso, F., Roig, M., Mitsis, G. D., and Boudrias, M.-H. Deep Semantic Architecture with discriminative feature visualization for neuroimage analysis. arXiv preprint (2018).
  • [46] Giacino, J. T., Fins, J. J., Laureys, S., and Schiff, N. D. Disorders of consciousness after acquired brain injury: the state of the science. Nature Reviews Neurology 10, 2 (2014), 99.
  • [47] Giri, E. P., Fanany, M. I., and Arymurthy, A. M. Combining Generative and Discriminative Neural Networks for Sleep Stages Classification. arXiv preprint (2016), 1–13.
  • [48] Giri, E. P., Fanany, M. I., and Arymurthy, A. M. Ischemic Stroke Identification Based on EEG and EOG using 1D Convolutional Neural Network and Batch Normalization. arXiv preprint (2016), 484–491.
  • [49] Golmohammadi, M., Torbati, A. H. H. N., de Diego, S. L., Obeid, I., and Picone, J. Automatic Analysis of EEGs Using Big Data and Hybrid Deep Learning Architectures. arXiv preprint (2017).
  • [50] Golmohammadi, M., Ziyabari, S., Shah, V., de Diego, S. L., Obeid, I., and Picone, J. Deep Architectures for Automated Seizure Detection in Scalp EEGs. arXiv preprint (2017).
  • [51] Golmohammadi, M., Ziyabari, S., Shah, V., Von Weltin, E., Campbell, C., Obeid, I., and Picone, J. Gated recurrent networks for seizure detection. 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB) (2017), 1–5.
  • [52] Goodfellow, I. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160 (2016).
  • [53] Goodfellow, I., Bengio, Y., and Courville, A. Deep learning, vol. 1. MIT press Cambridge, 2016.
  • [54] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems (2014), pp. 2672–2680.
  • [55] Gordienko, Y., Stirenko, S., Kochura, Y., Alienin, O., Novotarskiy, M., and Gordienko, N. Deep Learning for Fatigue Estimation on the Basis of Multimodal Human-Machine Interactions. arXiv preprint (2017).
  • [56] Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., Flandin, G., Ghosh, S. S., Glatard, T., Halchenko, Y. O., Handwerker, D. A., Hanke, M., Keator, D., Li, X., Michael, Z., Maumet, C., Nichols, B. N., Nichols, T. E., Pellman, J., Poline, J. B., Rokem, A., Schaefer, G., Sochat, V., Triplett, W., Turner, J. A., Varoquaux, G., and Poldrack, R. A. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data 3 (2016), 160044.
  • [57] Gramfort, A., Strohmeier, D., Haueisen, J., Hämäläinen, M. S., and Kowalski, M. Time-frequency mixed-norm estimates: Sparse M/EEG imaging with non-stationary source activations. NeuroImage 70 (2013), 410–422.
  • [58] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. Improved Training of Wasserstein GANs. In Advances in Neural Information Processing Systems (2017), pp. 5767–5777.
  • [59] Alomari, M. H., Samaha, A., and AlKamha, K. Automated Classification of L/R Hand Movement EEG Signals using Advanced Feature Extraction and Machine Learning. International Journal of Advanced Computer Science and Applications 4, 6 (2013), 6.
  • [60] Hagihira, S. Changes in the electroencephalogram during anaesthesia and their physiological basis. British Journal of Anaesthesia 115, suppl_1 (2015), i27–i31.
  • [61] Hajinoroozi, M., Mao, Z., and Huang, Y. Prediction of driver’s drowsy and alert states from EEG signals with deep learning. 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2015 (2015), 493–496.
  • [62] Hajinoroozi, M., Mao, Z., Jung, T. P., Lin, C. T., and Huang, Y. EEG-based prediction of driver’s cognitive performance by deep convolutional neural network. Signal Processing: Image Communication 47 (2016), 549–555.
  • [63] Hajinoroozi, M., Mao, Z., and Lin, Y.-p. Deep Transfer Learning for Cross-subject and Cross-experiment Prediction of Image Rapid Serial Visual Presentation Events from EEG Data. 45–55.
  • [64] Hao, Y., Khoo, H. M., von Ellenrieder, N., Zazubovits, N., and Gotman, J. DeepIED: An epileptic discharge detector for EEG-fMRI based on deep learning. NeuroImage: Clinical 17 (2018), 962–975.
  • [65] Harati, A., López, S., Obeid, I., and Picone, J. The TUH EEG Corpus: A Big Data Resource for Automated EEG Interpretation. In Signal Processing in Medicine and Biology Symposium (SPMB), 2014 IEEE (2014), IEEE, pp. 1–5.
  • [66] Hartmann, K. G., Schirrmeister, R. T., and Ball, T. EEG-GAN: Generative adversarial networks for electroencephalograhic (EEG) brain signals. arXiv preprint (2018).
  • [67] Hartmann, K. G., Schirrmeister, R. T., and Ball, T. Hierarchical internal representation of spectral features in deep convolutional networks trained for EEG decoding. In 2018 6th International Conference on Brain-Computer Interface, BCI 2018 (2018), IEEE, pp. 1–6.
  • [68] Hasib, M. M., Nayak, T., and Huang, Y. A hierarchical LSTM model with attention for modeling EEG non-stationarity for human decision prediction. 2018 IEEE EMBS International Conference on Biomedical and Health Informatics, BHI 2018 (2018), 104–107.
  • [69] He, B., Sohrabpour, A., Brown, E., and Liu, Z. Electrophysiological source imaging: A noninvasive window to brain dynamics. Annual Review of Biomedical Engineering 20, 1 (2018), 171–196. PMID: 29494213.
  • [70] He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (2015), pp. 770–778.
  • [71] Hefron, R., Borghetti, B., Schubert Kabban, C., Christensen, J., and Estepp, J. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks. Sensors 18, 5 (apr 2018), 1339.
  • [72] Hefron, R. G., Borghetti, B. J., Christensen, J. C., and Kabban, C. M. S. Deep long short-term memory structures model temporal dependencies improving cognitive workload estimation. Pattern Recognition Letters 94 (2017), 96–104.
  • [73] Heilmeyer, F. A., Schirrmeister, R. T., Fiederer, L. D. J., Völker, M., Behncke, J., and Ball, T. A large-scale evaluation framework for EEG deep learning architectures. arXiv preprint (2018).
  • [74] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems (2017), pp. 6626–6637.
  • [75] Hohman, F. M., Kahng, M., Pienta, R., and Chau, D. H. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers. IEEE Transactions on Visualization and Computer Graphics (2018).
  • [76] Hussein, R., Palangi, H., Ward, R., and Wang, Z. J. Epileptic Seizure Detection: A Deep Learning Approach. arXiv preprint (2018), 1–12.
  • [77] Jas, M., Engemann, D. A., Bekhti, Y., Raimondo, F., and Gramfort, A. Autoreject: Automated artifact rejection for MEG and EEG data. NeuroImage 159 (2017), 417–429.
  • [78] Jayaram, V., and Barachant, A. MOABB: trustworthy algorithm benchmarking for BCIs. Journal of neural engineering 15, 6 (2018), 066011.
  • [79] Jirayucharoensak, S., Pan-Ngum, S., and Israsena, P. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation. Scientific World Journal 2014 (2014).
  • [80] Kingma, D. P., and Ba, J. Adam: A Method for Stochastic Optimization. arXiv preprint (2014).
  • [81] Koelstra, S., Mühl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., and Patras, I. DEAP: A database for emotion analysis; Using physiological signals. IEEE Transactions on Affective Computing 3, 1 (2012), 18–31.
  • [82] Kuanar, S., Athitsos, V., Pradhan, N., Mishra, A., and Rao, K. R. Cognitive Analysis of Working Memory Load from EEG, by a Deep Recurrent Neural Network. In IEEE Signal Processing Society (2018).
  • [83] Kwak, N. S., Müller, K. R., and Lee, S. W. A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PLoS ONE 12, 2 (2017), 1–20.
  • [84] Kwon, Y., Nan, Y., and Kim, S. D. Transformation of EEG signal for emotion analysis and dataset construction for DNN learning. In Lecture Notes in Electrical Engineering, vol. 474. Springer, 2017, pp. 96–101.
  • [85] Längkvist, M., Karlsson, L., and Loutfi, A. Sleep Stage Classification Using Unsupervised Feature Learning. Advances in Artificial Neural Systems 2012 (2012), 1–9.
  • [86] Längkvist, M., and Loutfi, A. A Deep Learning Approach with an Attention Mechanism for Automatic Sleep Stage Classification. arXiv preprint (2018), 1–18.
  • [87] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., and Lance, B. J. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. Journal of Neural Engineering 15, 5 (2018), 056013.
  • [88] LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature 521, 7553 (2015), 436.
  • [89] LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Backpropagation applied to handwritten zip code recognition. Neural computation 1, 4 (1989), 541–551.
  • [90] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (1998), 2278–2324.
  • [91] Lee, W.-H., Ortiz, J., Ko, B., and Lee, R. Time Series Segmentation through Automatic Feature Learning. arXiv preprint (2018).
  • [92] Lee, Y., and Huang, Y. Generating Target/Non-Target Images of an RSVP Experiment from Brain Signals by Conditional Generative Adversarial Network. arXiv preprint (2018), 4–7.
  • [93] Li, F., Zhang, G., Wang, W., Xu, R., Schnell, T., Wen, J., McKenzie, F., and Li, J. Deep Models for Engagement Assessment with Scarce Label Information. IEEE Transactions on Human-Machine Systems 47, 4 (2017), 598–605.
  • [94] Li, J., and Cichocki, A. Deep Learning of Multifractal Attributes from Motor Imagery Induced EEG. Neural Information Processing (2014), 503–510.
  • [95] Li, J., Struzik, Z., Zhang, L., and Cichocki, A. Feature learning from incomplete EEG with denoising autoencoder. Neurocomputing 165 (2015), 23–31.
  • [96] Li, K., Li, X., Zhang, Y., and Zhang, A. Affective state recognition from EEG with deep belief networks. Proceedings - 2013 IEEE International Conference on Bioinformatics and Biomedicine, IEEE BIBM 2013 (2013), 305–310.
  • [97] Li, X., Grandvalet, Y., and Davoine, F. Explicit Inductive Bias for Transfer Learning with Convolutional Networks. arXiv preprint (2018).
  • [98] Li, X., Zhang, P., Song, D., Yu, G., Hou, Y., and Hu, B. EEG Based Emotion Identification Using Unsupervised Deep Feature Learning. SIGIR 2015 Workshop on Neuro-Physiological Methods in IR Research (2015), 2–4.
  • [99] Li, Y., Schwing, A., Wang, K.-C., and Zemel, R. Dualing GANs. In Advances in Neural Information Processing Systems (2017), pp. 5606–5616.
  • [100] Li, Z., Tian, X., Shu, L., Xu, X., and Hu, B. Emotion Recognition from EEG Using RASM and LSTM. In Internet Multimedia Computing and Service, vol. 819. 2018, pp. 310–318.
  • [101] Liao, C.-Y., Chen, R.-C., and Tai, S.-K. Emotion stress detection using EEG signal and deep learning technologies. 2018 IEEE International Conference on Applied System Invention (ICASI) (2018), 90–93.
  • [102] Lin, W., Li, C., and Sun, S. Deep Convolutional Neural Network for Emotion Recognition Using EEG and Peripheral Physiological Signal. In International Conference on Image and Graphics (2017), pp. 385–394.
  • [103] Liu, W., Zheng, W.-L., and Lu, B.-L. Multimodal emotion recognition using multimodal deep learning. arXiv preprint (2016).
  • [104] Loshchilov, I., and Hutter, F. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv preprint (2016).
  • [105] Lotte, F., Bougrain, L., Cichocki, A., Clerc, M., Congedo, M., Rakotomamonjy, A., and Yger, F. A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces: A 10-year Update. Journal of Neural Engineering 15, 3 (2018), 0–20.
  • [106] Lotte, F., Bougrain, L., and Clerc, M. Electroencephalography (EEG)-Based Brain-Computer Interfaces. American Cancer Society, 2015, pp. 1–20.
  • [107] Major, T. C., and Conrad, J. M. The effects of pre-filtering and individualizing components for electroencephalography neural network classification. Conference Proceedings - IEEE SOUTHEASTCON (2017).
  • [108] Makeig, S., Bell, A. J., Jung, T.-P., and Sejnowski, T. J. Independent Component Analysis of Electroencephalographic Data. In Advances in Neural Information Processing Systems (1996), vol. 8, pp. 145–151.
  • [109] Manor, R., and Geva, A. B. Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI. Frontiers in Computational Neuroscience 9, December (2015), 1–12.
  • [110] Manzano, M., Guillén, A., Rojas, I., and Herrera, L. J. Deep learning using EEG data in time and frequency domains for sleep stage classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10305 LNCS. 2017, pp. 132–141.
  • [111] Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., and Ioannidis, J. P. A. A manifesto for reproducible science. Nature Human Behaviour 1, 1 (2017), 0021.
  • [112] Mehmood, R. M., Du, R., and Lee, H. J. Optimal feature selection and deep learning ensembles method for emotion recognition from human brain EEG sensors. IEEE Access 5 (2017), 14797–14806.
  • [113] Mohamed, A. K., Marwala, T., and John, L. R. Single-trial EEG discrimination between wrist and finger movement imagery and execution in a sensorimotor BCI. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS (2011), 6289–6293.
  • [114] Moinnereau, M.-A., Brienne, T., Brodeur, S., Rouat, J., Whittingstall, K., and Plourde, E. Classification of Auditory Stimuli from EEG Signals with a Regulated Recurrent Neural Network Reservoir. arXiv preprint (2018).
  • [115] Morabito, F. C., Campolo, M., Ieracitano, C., Ebadi, J. M., Bonanno, L., Bramanti, A., De Salvo, S., Mammone, N., and Bramanti, P. Deep convolutional neural networks for classification of mild cognitive impaired and Alzheimer’s disease patients from scalp EEG recordings. 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a Better Tomorrow, RTSI 2016 (2016).
  • [116] Morabito, F. C., Campolo, M., Mammone, N., Versaci, M., Franceschetti, S., Tagliavini, F., Sofia, V., Fatuzzo, D., Gambardella, A., Labate, A., Mumoli, L., Tripodi, G. G., Gasparini, S., Cianci, V., Sueri, C., Ferlazzo, E., and Aguglia, U. Deep Learning Representation from Electroencephalography of Early-Stage Creutzfeldt-Jakob Disease and Features for Differentiation from Rapidly Progressive Dementia. International Journal of Neural Systems 27, 02 (2017), 1650039.
  • [117] Naderi, M. A., and Mahdavi-Nasab, H. Analysis and classification of EEG signals using spectral analysis and recurrent neural networks. In 2010 17th Iranian Conference of Biomedical Engineering (ICBME) (nov 2010), IEEE, pp. 1–4.
  • [118] Narejo, S., Pasero, E., and Kulsoom, F. EEG based eye state classification using deep belief network and stacked autoencoder. International Journal of Electrical and Computer Engineering 6, 6 (2016), 3131–3141.
  • [119] Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., et al. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data 5 (2018), 180110.
  • [120] Nolan, H., Whelan, R., and Reilly, R. B. FASTER: fully automated statistical thresholding for EEG artifact rejection. Journal of neuroscience methods 192, 1 (2010), 152–162.
  • [121] Normand, R., and Ferreira, H. A. Superchords: the atoms of thought. arXiv preprint (may 2015), 1–5.
  • [122] Nurse, E., Mashford, B. S., Yepes, A. J., Kiral-Kornek, I., Harrer, S., and Freestone, D. R. Decoding EEG and LFP signals using deep learning: heading TrueNorth. Proceedings of the ACM International Conference on Computing Frontiers - CF ’16 (2016), 259–266.
  • [123] O’Shea, A., Lightbody, G., Boylan, G., and Temko, A. Neonatal Seizure Detection Using Convolutional Neural Networks. arXiv preprint (2017).
  • [124] Omerhodzic, I., Avdakovic, S., Nuhanovic, A., and Dizdarevic, K. Energy Distribution of EEG Signals: EEG Signal Wavelet-Neural Network Classifier. arXiv preprint 2 (2013).
  • [125] Padmanabh, L., Shastri, R., and Biradar, S. Mental Tasks Classification using EEG signal, Discrete Wavelet Transform and Neural Network. Discovery 48, December 2015 (2017), 38–41.
  • [126] Paez, A. Gray literature: An important resource in systematic reviews. Journal of Evidence-Based Medicine 10, 3 (2017), 233–240.
  • [127] Page, A., Shea, C., and Mohsenin, T. Wearable seizure detection using convolutional neural networks with transfer learning. 2016 IEEE International Symposium on Circuits and Systems (ISCAS) (2016), 1086–1089.
  • [128] Palazzo, S., Spampinato, C., Kavasidis, I., Giordano, D., and Shah, M. Generative Adversarial Networks Conditioned by Brain Signals. Proceedings of the IEEE International Conference on Computer Vision (2017), 3430–3438.
  • [129] Pan, S. J., Yang, Q., et al. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22, 10 (2010), 1345–1359.
  • [130] Pardede, J., Turnip, M., Manalu, D. R., and Turnip, A. Adaptive recurrent neural network for reduction of noise and estimation of source from recorded EEG signals. ARPN Journal of Engineering and Applied Sciences 10, 3 (2015), 993–997.
  • [131] Parekh, V., Subramanian, R., Roy, D., and Jawahar, C. V. An EEG-based image annotation system. Communications in Computer and Information Science 841 (2018), 303–313.
  • [132] Patanaik, A., Ong, J. L., Gooley, J. J., Ancoli-Israel, S., and Chee, M. W. L. An end-to-end framework for real-time automatic sleep stage classification. Sleep 41, 5 (2018), 1–11.
  • [133] Patnaik, S., Moharkar, L., and Chaudhari, A. Deep RNN Learning for EEG based Functional Brain State Inference. In International Conference on Advances in Computing, Communication and Control (ICAC3) (2017).
  • [134] Perez, L., and Wang, J. The Effectiveness of Data Augmentation in Image Classification using Deep Learning. arXiv preprint (2017).
  • [135] Perez-Benitez, J. L., Perez-Benitez, J. A., and Espina-Hernandez, J. H. Development of a Brain Computer Interface using multi-frequency visual stimulation and deep neural networks. 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP) (2018), 18–24.
  • [136] Pernet, C. R., Appelhoff, S., Flandin, G., Phillips, C., Delorme, A., and Oostenveld, R. BIDS-EEG: an extension to the Brain Imaging Data Structure (BIDS) specification for electroencephalography, Dec 2018.
  • [137] Phan, H., Andreotti, F., Cooray, N., Chen, O. Y., and De Vos, M. Joint Classification and Prediction CNN Framework for Automatic Sleep Stage Classification. IEEE Transactions on Biomedical Engineering (2018), 1–11.
  • [138] Pramod, S., Page, A., Mohsenin, T., and Oates, T. Detecting epileptic seizures from EEG data using neural networks. arXiv preprint (2015), 1–4.
  • [139] Prechelt, L. Automatic early stopping using cross validation: quantifying the criteria. Neural Networks 11, 4 (1998), 761–767.
  • [140] Raposo, F., de Matos, D. M., Ribeiro, R., Tang, S., and Yu, Y. Towards Deep Modeling of Music Semantics using EEG Regularizers. arXiv preprint (2017).
  • [141] Robbins, H., and Monro, S. A stochastic approximation method. In Statistics. Springer, 1951, pp. 102–109.
  • [142] Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65, 6 (1958), 386–408.
  • [143] Roy, S., Kiral-Kornek, I., and Harrer, S. ChronoNet: A Deep Recurrent Neural Network for Abnormal EEG Identification. arXiv preprint (2018), 1–10.
  • [144] Ruffini, G., Ibanez, D., Castellano, M., Dubreuil, L., Gagnon, J.-F., Montplaisir, J., and Soria-Frisch, A. Deep learning with EEG spectrograms in rapid eye movement behavior disorder. bioRxiv (2018), 1–14.
  • [145] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. Learning representations by back-propagating errors. Nature 323, 6088 (1986), 533.
  • [146] Sakhavi, S., and Guan, C. Convolutional neural network-based transfer learning and knowledge distillation using multi-subject data in motor imagery BCI. International IEEE/EMBS Conference on Neural Engineering, NER (2017), 588–591.
  • [147] Sakhavi, S., Guan, C., and Yan, S. Parallel convolutional-linear neural network for motor imagery classification. 2015 23rd European Signal Processing Conference, EUSIPCO 2015 (2015), 2736–2740.
  • [148] Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems (2016), pp. 2234–2242.
  • [149] Schirrmeister, R. T., Gemein, L., Eggensperger, K., Hutter, F., and Ball, T. Deep learning with convolutional neural networks for decoding and visualization of EEG pathology. arXiv preprint (2017).
  • [150] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F., Burgard, W., and Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping 38, 11 (2017), 5391–5420.
  • [151] Schmidhuber, J. Deep learning in neural networks: An overview. Neural Networks 61 (2015), 85–117.
  • [152] Schwabedal, J. T. C., Snyder, J. C., Cakmak, A., Nemati, S., and Clifford, G. D. Addressing Class Imbalance in Classification Problems of Noisy Signals by using Fourier Transform Surrogates. arXiv preprint (2018), 1–7.
  • [153] Shah, V., Golmohammadi, M., Ziyabari, S., Von Weltin, E., Obeid, I., and Picone, J. Optimizing channel selection for seizure detection. In Signal Processing in Medicine and Biology Symposium (SPMB), 2017 IEEE (2017), IEEE, pp. 1–5.
  • [154] Shamwell, J., Lee, H., Kwon, H., Marathe, A. R., Lawhern, V., and Nothwang, W. Single-trial EEG RSVP classification using convolutional neural networks. 983622.
  • [155] Shang, J., Yuanyue, H., Haixiang, G., Yijing, L., Mingyun, G., and Gong, B. Learning from class-imbalanced data: Review of methods and applications. Expert Systems With Applications 73 (2017), 220–239.
  • [156] O’Shea, A., Lightbody, G., Boylan, G., and Temko, A. Investigating the Impact of CNN Depth on Neonatal Seizure Detection Performance. arXiv preprint (2018), 15–18.
  • [157] Simonyan, K., and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint (2014).
  • [158] Snoek, J., Larochelle, H., and Adams, R. P. Practical Bayesian Optimization of Machine Learning Algorithms. In Advances in neural information processing systems (2012), pp. 2951–2959.
  • [159] Soleymani, M., Lichtenauer, J., Pun, T., and Pantic, M. A multimodal database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing 3, 1 (2012), 42–55.
  • [160] Sors, A., Bonnet, S., Mirek, S., Vercueil, L., and Payen, J. F. A convolutional neural network for sleep stage scoring from raw single-channel EEG. Biomedical Signal Processing and Control 42 (2018), 107–114.
  • [161] Spampinato, C., Palazzo, S., Kavasidis, I., Giordano, D., Souly, N., and Shah, M. Deep learning human mind for automated visual classification. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 (2017), 4503–4511.
  • [162] Sree, R. A., and Kavitha, A. Vowel classification from imagined speech using sub-band EEG frequencies and deep belief networks. In 2017 4th International Conference on Signal Processing, Communication and Networking, ICSCN 2017 (2017), IEEE, pp. 1–4.
  • [163] Stober, S., Cameron, D. J., and Grahn, J. A. Using Convolutional Neural Networks to Recognize Rhythm Stimuli from Electroencephalography Recordings. In Neural Information Processing Systems (NIPS) 2014 (2014), pp. 1–9.
  • [164] Stober, S., Sternin, A., Owen, A. M., and Grahn, J. A. Deep Feature Learning for EEG Recordings. arXiv preprint (2015).
  • [165] Sturm, I., Bach, S., Samek, W., and Müller, K.-R. Interpretable Deep Neural Networks for Single-Trial EEG Classification. arXiv preprint (2016), 1–5.
  • [166] Sun, P., and Qin, J. Neural networks based EEG-Speech Models. arXiv preprint (2016), 1–10.
  • [167] Supratak, A., Dong, H., Wu, C., and Guo, Y. DeepSleepNet: a Model for Automatic Sleep Stage Scoring based on Raw Single-Channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25, 11 (2017), 1998–2008.
  • [168] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
  • [169] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 2818–2826.
  • [170] Tabar, Y. R., and Halici, U. A novel deep learning approach for classification of EEG motor imagery signals. Journal of Neural Engineering 14, 1 (2016), 16003.
  • [171] Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. arXiv preprint (2017).
  • [172] Tang, Z., Li, C., and Sun, S. Single-trial EEG classification of motor imagery using deep convolutional neural networks. Optik 130 (2017), 11–18.
  • [173] Taqi, A. M., Al-Azzo, F., Mariofanna, M., and Al-Saadi, J. M. Classification and discrimination of focal and non-focal EEG signals based on deep neural network. 2017 International Conference on Current Research in Computer Science and Information Technology (ICCIT) (2017), 86–92.
  • [174] Teo, J., Hou, C. L., and Mountstephens, J. Preference Classification Using Electroencephalography (EEG) and Deep Learning. Journal of Telecommunication, Electronic and Computer Engineering (JTEC) 10, 1 (2018), 87–91.
  • [175] Thodoroff, P., Pineau, J., and Lim, A. Learning Robust Features using Deep Learning for Automatic Seizure Detection. arXiv preprint (2016), 1–12.
  • [176] Zander, T. O., and Kothe, C. Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general. Journal of Neural Engineering 8, 2 (2011), 025005.
  • [177] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F., Burgard, W., and Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping 38, 11 (2017), 5391–5420.
  • [178] Tieleman, T., Hinton, G. E., Srivastava, N., and Swersky, K. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4, 2 (2012), 26–31.
  • [179] Tripathy, R. K., and Rajendra Acharya, U. Use of features from RR-time series and EEG signals for automated classification of sleep stages in deep neural network framework. Biocybernetics and Biomedical Engineering (2018), 1–13.
  • [180] Truong, N. D., Kuhlmann, L., Bonyadi, M. R., and Kavehei, O. Semi-supervised Seizure Prediction with Generative Adversarial Networks. arXiv preprint (2018), 1–6.
  • [181] Truong, N. D., Nguyen, A. D., Kuhlmann, L., Bonyadi, M. R., Yang, J., Ippolito, S., and Kavehei, O. Convolutional neural networks for seizure prediction using intracranial and scalp electroencephalogram. Neural Networks 105 (2018), 104–111.
  • [182] Tsinalis, O., Matthews, P. M., Guo, Y., and Zafeiriou, S. Automatic Sleep Stage Scoring with Single-Channel EEG Using Convolutional Neural Networks. arXiv preprint (2016).
  • [183] Tsiouris, K. M., Pezoulas, V. C., Zervakis, M., Konitsiotis, S., Koutsouris, D. D., and Fotiadis, D. I. A Long Short-Term Memory deep learning network for the prediction of epileptic seizures using EEG signals. Computers in Biology and Medicine 99 (2018), 24–37.
  • [184] Turner, J. T., Page, A., Mohsenin, T., and Oates, T. Deep Belief Networks used on High Resolution Multichannel Electroencephalography Data for Seizure Detection. AAAI Spring Symposium Series (2014), 75–81.
  • [185] Ullah, I., Hussain, M., Qazi, E.-U.-H., and Aboalsamh, H. An Automated System for Epilepsy Detection using EEG Brain Signals based on Deep Learning Approach. arXiv preprint (2018).
  • [186] Urigüen, J. A., and Garcia-Zapirain, B. EEG artifact removal state-of-the-art and guidelines. Journal of Neural Engineering 12, 3 (2015), 031001.
  • [187] van Putten, M. J., de Carvalho, R., and Tjepkema-Cloostermans, M. C. Deep learning for detection of epileptiform discharges from scalp EEG recordings. Clinical Neurophysiology 129 (2018), e98–e99.
  • [188] Van Putten, M. J. A. M., Olbrich, S., and Arns, M. Predicting sex from brain rhythms with deep learning. Scientific Reports 8, 1 (2018), 1–7.
  • [189] Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. OpenML: networked science in machine learning. SIGKDD Explorations 15, 2 (2014), 49–60.
  • [190] Vilamala, A., Madsen, K. H., and Hansen, L. K. Deep Convolutional Neural Networks for Interpretable Analysis of EEG Sleep Stage Scoring. arXiv preprint, 659860 (2017).
  • [191] Völker, M., Schirrmeister, R. T., Fiederer, L. D. J., Burgard, W., and Ball, T. Deep Transfer Learning for Error Decoding from Non-Invasive EEG. In Brain-Computer Interface (BCI), 2018 6th International Conference on (2018), IEEE, pp. 1–6.
  • [192] Wang, F., Zhong, S. H., Peng, J., Jiang, J., and Liu, Y. Data augmentation for eeg-based emotion recognition with deep convolutional neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10705 LNCS. 2018, pp. 82–93.
  • [193] Wang, S., Guo, B., Zhang, C., Bai, X., and Wang, Z. EEG detection and de-noising based on convolution neural network and Hilbert-Huang transform. Proceedings - 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2017 (2018), 1–6.
  • [194] Waytowich, N. R., Lawhern, V., Garcia, J. O., Cummings, J., Faller, J., Sajda, P., and Vettel, J. M. Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials. arXiv preprint (2018), 1–21.
  • [195] Wen, T., and Zhang, Z. Deep Convolution Neural Network and Autoencoders-Based Unsupervised Feature Learning of EEG Signals. IEEE Access 6 (2018), 25399–25410.
  • [196] Wilkinson, M. D., et al. Comment: The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3 (2016), 1–9.
  • [197] Wu, Z., Wang, H., Cao, M., Chen, Y., and Xing, E. P. Fair Deep Learning Prediction for Healthcare Applications with Confounder Filtering. arXiv preprint (2018), 1–17.
  • [198] Wulsin, D. F., Gupta, J. R., Mani, R., Blanco, J. A., and Litt, B. Modeling electroencephalography waveforms with semi-supervised deep belief nets: Fast classification and anomaly measurement. Journal of Neural Engineering 8, 3 (2011).
  • [199] Xie, S., Li, Y., Xie, X., Wang, W., and Duan, X. The Analysis and Classify of Sleep Stage Using Deep Learning Network from Single-Channel EEG Signal. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10637 LNCS (2017), 752–758.
  • [200] Xu, H., and Plataniotis, K. N. Affective states classification using EEG and semi-supervised deep learning approaches. 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP) (2016), 1–6.
  • [201] Yan, P., Wang, F., and Grinspan, Z. : Spectrographic Seizure Detection Using Deep Learning With Convolutional Neural Networks (S19. 004), 2018.
  • [202] Yang, B., Duan, K., Fan, C., Hu, C., and Wang, J. Automatic ocular artifacts removal in EEG using deep learning. Biomedical Signal Processing and Control 43 (2018), 148–158.
  • [203] Yang, B., Duan, K., and Zhang, T. Removal of EOG artifacts from EEG using a cascade of sparse autoencoder and recursive least squares adaptive filter. Neurocomputing 214 (2016), 1053–1060.
  • [204] Yang, H., Sakhavi, S., Ang, K. K., and Guan, C. On the use of convolutional neural networks and augmented CSP features for multi-class motor imagery of EEG signals classification. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS (2015), 2620–2623.
  • [205] Yang, S., Golmohammadi, M., Obeid, I., and Picone, J. Semi-automated annotation of signal events in clinical EEG data. Signal Processing in Medicine and Biology Symposium (2016), 1–5.
  • [206] Yepes, A. J., Tang, J., and Mashford, B. S. Improving classification accuracy of feedforward neural networks for spiking neuromorphic chips. IJCAI International Joint Conference on Artificial Intelligence (2017), 1973–1979.
  • [207] Yin, Z., and Zhang, J. Recognition of Cognitive Task Load levels using single channel EEG and Stacked Denoising Autoencoder. In Chinese Control Conference, CCC (2016), IEEE, pp. 3907–3912.
  • [208] Yin, Z., and Zhang, J. Cross-session classification of mental workload levels using EEG and an adaptive deep learning model. Biomedical Signal Processing and Control 33 (2017), 30–47.
  • [209] Yin, Z., and Zhang, J. Cross-subject recognition of operator functional states via EEG and switching deep belief networks with adaptive weights. Neurocomputing 260 (2017), 349–366.
  • [210] Yogatama, D., Dyer, C., Ling, W., and Blunsom, P. Generative and discriminative text classification with recurrent neural networks. arXiv preprint arXiv:1703.01898 (2017).
  • [211] Yoon, J., Lee, J., and Whang, M. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network. Computational Intelligence and Neuroscience 2018 (2018).
  • [212] Yuan, Y., Xun, G., Ma, F., Suo, Q., Xue, H., Jia, K., and Zhang, A. A Novel Channel-aware Attention Framework for Multi-channel EEG Seizure Detection via Multi-view Deep Learning. IEEE EMBS International Conference on Biomedical & Health Informatics, March (2018), 4–7.
  • [213] Zafar, R., Dass, S. C., and Malik, A. S. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion. arXiv preprint (2017), 1–23.
  • [214] Zeiler, M. D. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint (2012).
  • [215] Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. arXiv preprint (2016).
  • [216] Zhang, D., Yao, L., Zhang, X., Wang, S., Chen, W., and Boots, R. Cascade and Parallel Convolutional Recurrent Neural Networks on EEG-Based Intention Recognition for Brain Computer Interface. In Thirty-Second AAAI Conference on Artificial Intelligence (2018), pp. 1703–1710.
  • [217] Zhang, G.-Q., Cui, L., Mueller, R., Tao, S., Kim, M., Rueschman, M., Mariani, S., Mobley, D., and Redline, S. The national sleep research resource: towards a sleep data commons. Journal of the American Medical Informatics Association 25, 10 (2018), 1351–1358.
  • [218] Zhang, J., Li, S., and Wang, R. Pattern recognition of momentary mental workload based on multi-channel electrophysiological data and ensemble convolutional neural networks. Frontiers in Neuroscience 11, MAY (2017), 1–16.
  • [219] Zhang, Q., and Liu, Y. Improving brain computer interface performance by data augmentation with conditional Deep Convolutional Generative Adversarial Networks. arXiv preprint (2018).
  • [220] Zhang, T., Zheng, W., Cui, Z., Zong, Y., and Li, Y. Spatial-Temporal Recurrent Neural Network for Emotion Recognition. IEEE Transactions on Cybernetics 1 (2018), 1–9.
  • [221] Zhang, X., Yao, L., Chen, K., Wang, X., Sheng, Q., and Gu, T. DeepKey: An EEG and Gait Based Dual-Authentication System. arXiv preprint 9, 4 (2017), 1–20.
  • [222] Zhang, X., Yao, L., Huang, C., Sheng, Q. Z., and Wang, X. Intent Recognition in Smart Living Through Deep Recurrent Neural Networks. arXiv preprint (2017), 1–11.
  • [223] Zhang, X., Yao, L., Kanhere, S. S., Liu, Y., Gu, T., and Chen, K. MindID: Person Identification from Brain Waves through Attention-based Recurrent Neural Network. arXiv preprint (2017), 1–20.
  • [224] Zhang, X., Yao, L., Sheng, Q. Z., Kanhere, S. S., Gu, T., and Zhang, D. Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals. arXiv preprint (2017).
  • [225] Zhang, X., Yao, L., Wang, X., Zhang, W., Zhang, S., and Liu, Y. Know Your Mind: Adaptive Brain Signal Classification with Reinforced Attentive Convolutional Neural Networks. arXiv preprint (2018).
  • [226] Zhang, X., Yao, L., Zhang, D., Wang, X., Sheng, Q. Z., and Gu, T. Multi-Person Brain Activity Recognition via Comprehensive EEG Signal Analysis. arXiv preprint (2017).
  • [227] Zheng, W.-L., Liu, W., Lu, Y., Lu, B.-L., and Cichocki, A. Emotionmeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics, 99 (2018), 1–13.
  • [228] Zheng, W. L., and Lu, B. L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Transactions on Autonomous Mental Development 7, 3 (2015), 162–175.
  • [229] Zheng, W. L., Zhu, J. Y., Peng, Y., and Lu, B. L. EEG-based emotion classification using deep belief networks. Proceedings - IEEE International Conference on Multimedia and Expo (2014), 1–6.
  • [230] Zhou, J., and Xu, W. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (2015), vol. 1, pp. 1127–1137.

Appendix A List of acronyms


Appendix B Checklist of items to include in a DL-EEG study

This section contains a checklist of items we believe DL-EEG papers should report to ensure their published results are readily reproducible. The following items should all be clearly stated in the text or supplementary materials of future DL-EEG studies:

Data
  • Number of subjects (and relevant demographic data)

  • Electrode montage including reference(s) (number of channels and their locations)

  • Shape of one example (e.g., “samples × channels”)

  • Data augmentation technique (e.g., percentage of overlap for sliding windows)

  • Number of examples in training, validation and test sets
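
To make the data items above concrete, here is a minimal sketch (not from any reviewed study; all values are hypothetical) showing how overlapping sliding windows both define the shape of one example and determine the number of examples available for the training, validation, and test sets:

```python
def sliding_windows(n_samples, win_len, overlap):
    """Return (start, end) indices of fixed-length windows cut from a
    recording of n_samples time points, with fractional overlap."""
    step = int(win_len * (1 - overlap))
    return [(s, s + win_len) for s in range(0, n_samples - win_len + 1, step)]

# Hypothetical recording: 60 s at 256 Hz, cut into 2 s windows with 50% overlap
windows = sliding_windows(n_samples=60 * 256, win_len=2 * 256, overlap=0.5)
print(len(windows))  # 59 examples instead of 30 non-overlapping ones
```

Reporting the overlap is essential: as the example shows, it nearly doubles the number of examples, and overlapping windows that end up on both sides of a train/test split leak information.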

EEG processing
  • Temporal filtering, if any

  • Spatial filtering, if any

  • Artifact handling techniques, if any

  • Resampling, if any
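
As a toy illustration of why these preprocessing choices must be stated, the sketch below implements a deliberately crude temporal filter and a decimation step in plain Python; an actual study would instead report the exact filter used (e.g., a zero-phase Butterworth band-pass with its cut-off frequencies) and the resampling method:

```python
def moving_average(signal, k):
    """Crude low-pass filter: symmetric k-point moving average.
    Stands in for whatever temporal filter a study actually applies."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def decimate(signal, factor):
    """Resample by keeping every `factor`-th sample; only valid after
    low-pass filtering, otherwise the result is aliased."""
    return signal[::factor]

smoothed = moving_average([0, 4, 0, 4, 0, 4, 0, 4], k=3)
downsampled = decimate(smoothed, factor=2)
```

Both operations change the effective bandwidth and sampling rate of the examples fed to the network, so omitting them makes results impossible to reproduce exactly.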

Neural network architecture
  • Architecture type

  • Number of layers (consider including a diagram or table to represent the architecture)

  • Number of learnable parameters
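
The number of learnable parameters can always be stated exactly, since it follows directly from the architecture. A back-of-the-envelope sketch for a hypothetical small CNN (layer sizes invented for illustration):

```python
def conv1d_params(in_ch, out_ch, kernel):
    """Learnable parameters of a 1D convolution: weights + one bias per filter."""
    return (in_ch * kernel + 1) * out_ch

def dense_params(n_in, n_out):
    """Learnable parameters of a fully connected layer (weights + biases)."""
    return (n_in + 1) * n_out

# Hypothetical 3-layer network on 32-channel EEG with 2 output classes
total = (conv1d_params(32, 16, 5)     # temporal convolution
         + conv1d_params(16, 16, 5)   # second convolution
         + dense_params(16 * 64, 2))  # classifier on flattened features
print(total)  # 5922 learnable parameters
```

Reporting this figure alongside the dataset size lets readers judge the risk of overfitting at a glance.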

Training hyperparameters
  • Parameter initialization

  • Loss function

  • Batch size

  • Number of epochs

  • Stopping criterion

  • Regularization (e.g., dropout, weight decay, etc.)

  • Optimization algorithm (e.g., stochastic gradient descent, Adam, RMSProp, etc.)

  • Learning rate schedule and optimizer parameters

  • Values of all hyperparameters (including random seed) for the results that are presented in the paper

  • Hyperparameter search method
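
One simple way to satisfy the last two items is to keep every hyperparameter, including the random seed, in a single machine-readable structure saved alongside the results. The sketch below uses entirely hypothetical values; the point is the practice, not the numbers:

```python
import json
import random

# Hypothetical configuration: every training choice, including the
# random seed, lives in one place that can be shipped with the paper.
hparams = {
    "seed": 42,
    "init": "Glorot uniform",
    "loss": "cross-entropy",
    "batch_size": 64,
    "n_epochs": 100,
    "stopping": "early stopping on validation loss, patience 10",
    "regularization": {"dropout": 0.5, "weight_decay": 1e-4},
    "optimizer": {"name": "Adam", "lr": 1e-3, "betas": [0.9, 0.999]},
    "search": "random search over 20 configurations",
}
random.seed(hparams["seed"])

with open("hparams.json", "w") as f:  # include as supplementary material
    json.dump(hparams, f, indent=2)
```

A dump like this, committed with the code, makes re-running the exact reported experiment trivial.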

Performance and model comparison
  • Performance metrics (e.g., f1-score, accuracy, etc.)

  • Type of validation scheme (intra- vs. inter-subject, leave-one-subject-out, k-fold cross-validation, etc.)

  • Description of baseline models (thorough description or reference to published work)
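
Because the intra- vs. inter-subject choice strongly affects reported performance, the validation scheme deserves an unambiguous description. As one example, a leave-one-subject-out split can be sketched in a few lines (data here is a made-up toy list of subject labels):

```python
def leave_one_subject_out(subject_of_example):
    """Yield (held_out_subject, train_idx, test_idx): each fold tests on
    all examples of one subject, none of which appear in training."""
    for held_out in sorted(set(subject_of_example)):
        train = [i for i, s in enumerate(subject_of_example) if s != held_out]
        test = [i for i, s in enumerate(subject_of_example) if s == held_out]
        yield held_out, train, test

# Hypothetical dataset: 5 examples from 3 subjects
folds = list(leave_one_subject_out(["s1", "s1", "s2", "s2", "s3"]))
print(len(folds))  # 3 folds, one per held-out subject
```

Splitting at the subject level, as above, prevents examples from the same subject from leaking across the train/test boundary, which inflates intra-subject results.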
