Unsupervised Domain Adversarial Self-Calibration for Electromyographic-based Gesture Recognition

Abstract

Surface electromyography (sEMG) provides an intuitive and non-invasive interface from which to control machines. However, preserving the myoelectric control system's performance over multiple days is challenging, due to the transient nature of this recording technique. In practice, if the system is to remain usable, a time-consuming and periodic re-calibration is necessary. In the case where the sEMG interface is employed every few days, the user might need to perform this re-calibration before every use, severely limiting the practicality of such a control method.

Consequently, this paper proposes tackling the especially challenging task of adapting to sEMG signals when multiple days have elapsed between each recording, by presenting SCADANN, a new, deep learning-based, self-calibrating algorithm. SCADANN is ranked against three state of the art domain adversarial algorithms and a multiple-vote self-calibrating algorithm on both offline and online datasets. Overall, SCADANN is shown to systematically improve classifiers’ performance over no adaptation and ranks first on almost all the cases tested.

EMG, Myoelectric Control, Domain Adaptation, Self-Calibration, Virtual Reality, Domain Adversarial, Gesture Recognition.

I Introduction

Robots have become increasingly prominent in humans' lives. As a result, the way in which people interact with machines is constantly evolving to reach a better synergy between human intention and machine action. The ease of transcribing intention into commands is highly dependent on the type of interface and its implementation [7]. Within this context, muscle activity offers an attractive and intuitive way to perform gesture recognition as a guidance method [3, 32]. Such activity can be recorded from surface electromyography (sEMG), a non-invasive technique widely adopted both for prosthetic control and in research as a way to seamlessly interact with machines [37, 42]. sEMG signals are non-stationary and represent the sum of subcutaneous motor action potentials generated through muscular contractions [32]. Artificial intelligence can then be leveraged as the bridge between these biological signals and robot input guidance.

Current state of the art algorithms in gesture recognition routinely achieve accuracies above 95% for the classification of offline, within-day datasets [11, 18]. However, many practical issues still need to be solved before implementing these types of algorithms in practical applications [37, 22]. Electrode shift and the transient nature of the sEMG signal are among the main obstacles to a robust and widespread implementation of real-time sEMG-based gesture recognition [37]. In practice, this means that users of current myoelectric systems need to perform periodic recalibration of their device so as to retain its usability. To address the issue of real-time myoelectric control, researchers have proposed rejection-based methods where a gesture is predicted only when a sufficient level of certainty is achieved [39, 2]. While these types of methods have been shown to increase online usability, they do not directly address the inherent decline of the classifier's performance over time. One way to address this issue is to leverage transfer learning algorithms to periodically re-calibrate the system with less data than normally required [36, 13]. While these types of methods reduce the burden on the user, they still require said user to periodically record labeled data.

This work focuses on the problem of across-day sEMG-based gesture recognition in both an offline and online setting. In particular, this work considers the setting where several days elapse between recording sessions. Such a setting naturally arises when sEMG-based gesture recognition is used for video games, artistic performances or, simply, to control a non-essential device [43, 42, 4]. In contrast to within-day or even day-to-day adaptation, this work's setting is especially challenging, as the change in signal between two sessions is expected to be substantially greater and no intermediary data exists to bridge this gap. The goal is then for the classifier to be able to adapt over time using the unlabeled data obtained from the myoelectric system. Such a problem can be framed within an unsupervised domain adaptation setting [1], where there exists an initial labeled dataset on which to train, but the classifier then has to adapt to data from a different, but similar, distribution. Huang et al. [24] propose to use this setting to update a support vector machine by replacing old examples forming the support vectors with new unlabeled examples which are close to the old ones (assigning to each new example the same label as the example it replaces). Other authors [21] propose instead to periodically retrain an LDA by updating the training dataset itself. The idea is to replace old examples with new, near (i.e. at a small distance within the feature space) ones. Such methods, however, are inherently restricted to single-day use, as they rely on a smooth and small signal drift to update the classifier. Additionally, these types of methods do not leverage the potentially large quantity of unlabeled data generated. Deep learning algorithms, however, are well suited to scale to large amounts of data and were shown to be more robust to between-day signal drift than LDA, especially as the amount of training data increases [47]. Within the field of image recognition, deep learning-based unsupervised domain adaptation has been extensively studied. A popular approach to this problem is domain adversarial training, popularized by DANN [1, 17]. The idea is to train a network on the labeled training dataset while also trying to learn a feature representation which makes the network unable to distinguish between the labeled and unlabeled data (see Section III for details). Building on this idea, VADA [40] also tries to minimize violations of the cluster assumption [46] (i.e. decision boundaries should avoid areas of high data density). Another state of the art algorithm is DIRT-T which, starting from the output of VADA, removes the labeled data and iteratively continues to minimize cluster assumption violations. Detailed explanations of VADA and DIRT-T are also given in Section III. DANN, VADA and DIRT-T are state of the art domain adversarial algorithms which achieve two-digit accuracy increases on several difficult image recognition benchmarks [40]. This work thus proposes to test these algorithms on the challenging problem of multiple-day sEMG-based gesture recognition in both an offline and online setting.

An additional difficulty of the setting considered in this work is that real-time myoelectric control imposes strict limitations on the amount of temporal data which can be accumulated before each new prediction. This window-length requirement has a direct negative impact on classifiers' performance [41, 2]. However, temporally neighboring segments most likely belong to the same class [5, 45]. In other words, provided that predictions can be deferred, it should be possible to generate a classification algorithm with improved accuracy (compared to the real-time classifier) by looking at a wider temporal context of the data [2]. Consequently, one potential way of coping with electrode shift and the non-stationary nature of EMG signals for gesture recognition is for the classifier to self-calibrate using pseudo-labels generated from this improved classification scheme. The most natural way of performing this re-labeling is using a majority vote around each of the classifier's predictions. [45] have shown that such a re-calibration strategy significantly improves intra-day accuracy on an offline dataset for both able-bodied and amputee participants (tested on the NinaPro DB2 and DB3 datasets [6]). However, for real-time control, such a majority vote strategy will increase latency, as transitions between gestures inevitably take longer to be detected. Additionally, trying to re-label every segment, even when no clear gesture is detected by the classifier, will necessarily introduce undesirable noise in the pseudo-labels. Finally, the domain divergence over multiple days is expected to be substantially greater than within a single day. Consequently, ignoring this gap before generating the pseudo-labels might negatively impact the self re-calibrated classifier. To address these issues, the main contribution of this paper is the introduction of SCADANN (Self-Calibrating Asynchronous Domain Adversarial Neural Network), a deep learning-based algorithm which leverages domain adversarial training and the unique properties of real-time myoelectric control for inter-day self-recalibration.

This paper is organized as follows. An overview of the datasets and the deep network architecture employed in this work is given in Section II. Section III presents the domain adaptation algorithms considered in this work, while Section IV thoroughly describes SCADANN. Finally, results and their associated discussion are given in Sections V and VI respectively.

II Datasets and Convolutional Network's Architecture

This work leverages the 3DC Dataset [12] for architecture building and hyperparameter optimization, and the Long-term 3DC Dataset [13] for training and testing the algorithms presented in this work. Both datasets were recorded using the 3DC Armband [12]; a wireless, 10-channel, dry-electrode, 3D-printed sEMG armband. The device samples data at 1000 Hz per channel, allowing it to take advantage of the full spectrum of sEMG signals [35].

As stated in [12], the data acquisition protocols of the 3DC Dataset and Long-term 3DC Dataset were approved by the Comités d'Éthique de la Recherche avec des êtres humains de l'Université Laval (approval numbers: 2017-0256 A-1/10-09-2018 and 2017-026 A2-R2/26-06-2019 respectively), and informed consent was obtained from all participants.

II-A Long-term 3DC Dataset

The Long-term 3DC Dataset features 20 able-bodied participants (5F/15M) aged between 18 and 34 years old (average 26 ± 4 years old) performing eleven gestures (shown in Figure 1). Each participant performed three recording sessions over a period of fourteen days (in seven-day increments). Each recording session is divided into a Training session and two Evaluation sessions.

Fig. 1: The eleven hand/wrist gestures recorded in the Long-term 3DC dataset (image re-used from [12])

Training Session

During the training session, each participant was standing and held their forearm, unsupported, parallel to the floor, with their hand relaxed (neutral position). Starting from this neutral position, each participant was asked to perform and hold each gesture for a period of five seconds; this was referred to as a cycle. Two more such cycles were recorded. In this work, the first two cycles are used for training, while the last one is used for testing (unless specified otherwise). Note that in the original dataset, four cycles were recorded for each participant, with the second one recording the participant performing each gesture at maximal intensity. This second cycle was removed for this work to reduce confounding factors. In other words, cycles two and three in this work correspond to cycles three and four in the original dataset.

In addition to the eleven gestures considered in the Long-term 3DC Dataset, a reduced dataset from the original Long-term dataset containing seven gestures is also employed. This Reduced Long-term 3DC Dataset is considered as it could more realistically be implemented on a real-world system given the current state of the art of EMG-based hand gesture recognition. The following gestures were selected to form the reduced dataset: neutral, open hand, power grip, radial/ulnar deviation and wrist flexion/extension. They were selected as they were shown to be sufficient, in conjunction with orientation data, to control a 6 degree-of-freedom robotic arm in real-time [4].

Evaluation Session

In addition to the offline datasets (i.e. the normal and the reduced datasets from the training sessions), the evaluation sessions represent a real-time dataset. Each evaluation session lasted three and a half minutes. During that time, the participants were asked to perform a specific gesture at a specific intensity and at a specific position. A new gesture, intensity and position were randomly requested every five seconds. These evaluations were also recorded over multiple days, and the participants were the ones placing the armband on their forearm at the beginning of each session. As such, the evaluation sessions provide a real-time dataset which includes the four main dynamic factors [38] in sEMG-based gesture recognition. Note that while the participants received visual feedback within the VR environment in relation to the performed gesture, gesture intensity and limb position, the performed gestures were classified using a Leap Motion camera [23] so as not to bias the dataset towards a particular EMG-based classifier. In this work, when specified, the first evaluation session of a given recording session was employed as the unlabeled training dataset for the algorithms presented in Sections III and IV, while the second evaluation session was used for testing.

Data Pre-processing

This work aims at studying unsupervised re-calibration of myoelectric control systems. Consequently, the input latency is a critical factor to consider. The optimal guidance latency was found to be between 150 and 250 ms [41]. Consequently, within this work, the data from each participant is segmented into 150 ms frames with an overlap of 100 ms. Each segment thus contains 150 data points per channel (10 × 150 = 1500 in total). The segmented data is then band-pass filtered between 20-495 Hz using a fourth-order Butterworth filter.

Given a segment, the spectrogram of each sEMG channel is then computed using a 48-point Hann window with an overlap of 14, yielding a matrix of 25 frequency bins × 4 time bins per channel. The first frequency band is then removed in an effort to reduce baseline drift and motion artifacts. Finally, following [10], the time and channel axes are swapped such that an example is of the shape 4 × 10 × 24 (time × channel × frequency).
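For illustration, a minimal Python sketch of this preprocessing is given below, assuming raw recordings shaped (10 channels × samples) at 1 kHz; the function names and the use of SciPy are assumptions rather than the authors' implementation.

```python
# Minimal preprocessing sketch, assuming raw data of shape (10, n_samples)
# sampled at 1 kHz. Names and the use of SciPy are illustrative assumptions.
import numpy as np
from scipy import signal

FS = 1000     # sampling rate (Hz)
WINDOW = 150  # 150 ms frames
STEP = 50     # 100 ms overlap -> new frame every 50 ms

def preprocess(raw):
    examples = []
    sos = signal.butter(4, [20, 495], btype="bandpass", fs=FS, output="sos")
    for start in range(0, raw.shape[-1] - WINDOW + 1, STEP):
        # Segment into 150 ms frames, then band-pass filter 20-495 Hz.
        segment = signal.sosfiltfilt(sos, raw[:, start:start + WINDOW], axis=-1)
        # 48-point Hann window, overlap of 14 -> (10, 25, 4) per segment.
        _, _, spec = signal.spectrogram(segment, fs=FS, window="hann",
                                        nperseg=48, noverlap=14)
        spec = spec[:, 1:, :]  # drop the first frequency band -> (10, 24, 4)
        # Swap the time and channel axes -> (4, 10, 24).
        examples.append(np.transpose(spec, (2, 0, 1)))
    return np.stack(examples)  # (n_examples, 4, 10, 24)
```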

II-B 3DC Dataset

The 3DC Dataset features 22 able-bodied participants and is used for architecture building and hyperparameter selection. This dataset, presented in [12], features the same eleven gestures as the Long-term 3DC Dataset. Its recording protocol closely matches the training session description (Section II-A), with the difference being that two such sessions were recorded for each participant (within the same day). This dataset was preprocessed as described in Section II-A3. Note that when recording the 3DC Dataset, participants were wearing both the Myo and the 3DC Armband; however, in this work, only the data from the 3DC Armband is employed.

II-C Convolutional Network's Architecture

Spectrograms were selected to be fed as input to the ConvNet as they were shown to be competitive with the state of the art [11, 45]. A simple ConvNet architecture, inspired by [9] and presented in Figure 2, was selected so as to reduce potential confounding factors. The ConvNet's architecture contains four blocks followed by a global average pooling layer and two heads. The first head is used to predict the gesture held by the participant. The second head is only activated when employing domain adversarial algorithms (see Sections III and IV for details). Each block encapsulates a convolutional layer [29], followed by batch normalization [25], leaky ReLU [44] and dropout [16].

Fig. 2: The ConvNet's architecture, employing 206 548 learnable parameters. In this figure, $B_i$ refers to the $i$th block ($i \in \{1, 2, 3, 4\}$). Conv refers to a convolutional layer. When working with the reduced dataset, the number of output neurons of the gesture head is reduced to seven.

ADAM [27] is employed for the ConvNet's optimization, with an initial learning rate of 0.0404709 and a batch size of 512 (as used in [9]). Early stopping, with a patience of 10 epochs, is also applied by using 10% of the training dataset as a validation set. Additionally, learning rate annealing, with a factor of five and a patience of five, is used. Dropout is set to 0.5 (following [9]). The architecture choices and hyperparameter selections were derived from the 3DC Dataset and previous literature using it (mainly [9, 12]).
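For illustration, the following is a compact PyTorch sketch of such a two-headed network; the channel counts, kernel sizes and accessor names (features, gesture_head, domain_head) are placeholders, not the exact configuration of Figure 2.

```python
# Illustrative two-headed ConvNet sketch; sizes are placeholders, not the
# exact configuration of Figure 2.
import torch.nn as nn

def block(in_ch, out_ch):
    # One block: convolution -> batch norm -> leaky ReLU -> dropout.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(negative_slope=0.1),
        nn.Dropout2d(p=0.5),
    )

class TwoHeadedConvNet(nn.Module):
    def __init__(self, n_gestures=11, ch=64):
        super().__init__()
        # Input treated as a (4, 10, 24) "image" with 4 time channels.
        self.features = nn.Sequential(
            block(4, ch), block(ch, ch), block(ch, ch), block(ch, ch),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gesture_head = nn.Linear(ch, n_gestures)  # gesture prediction
        self.domain_head = nn.Linear(ch, 2)  # only used for domain adaptation

    def forward(self, x):
        feats = self.features(x)
        return self.gesture_head(feats), self.domain_head(feats)
```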

Note that the ConvNet’s architecture implementation, written with PyTorch [34], is made readily available here (https://github.com/UlysseCoteAllard/LongTermEMG).

II-D Calibration Methods

This work considers three calibration methods for long-term classification of sEMG signals: No Calibration, Re-Calibration and Unsupervised Calibration. In the first case, the network is trained solely on the data of the first session. In the Re-Calibration case, the model is re-trained at each new session with the new labeled data. Unsupervised Calibration is similar to Re-Calibration, but the dataset used for re-calibration is unlabeled. Sections III and IV present the unsupervised calibration algorithms considered in this work.

III Unsupervised Domain Adaptation

Domain adaptation is a research area in machine learning which aims at learning a discriminative predictor from two datasets coming from two different, but related, distributions [17] (referred to as the source $\mathcal{D}_S$ and the target $\mathcal{D}_T$). In the unsupervised case, one of the datasets is labeled (and comes from $\mathcal{D}_S$), while the second is unlabeled (and comes from $\mathcal{D}_T$).

Within the context of myoelectric control systems, labeled data is obtained through a user’s conscious calibration session. However, due to the transient nature of sEMG signals [38, 30], classification performance tends to degrade over time. This naturally creates a burden for the user who needs to periodically recalibrate the system to maintain its usability [30, 15]. During normal usage however, unlabeled data is constantly generated. Consequently, the unsupervised domain adaptation setting naturally arises by defining the source dataset as the labeled data of the calibration session and the target dataset as the unlabeled data generated by the user during control.

The PyTorch implementation of the domain adaptation algorithms is mainly based on [33].

III-A Domain-Adversarial Training of Neural Networks

The Domain-Adversarial Neural Network (DANN) algorithm proposes to predict on the target dataset by learning a representation from the source dataset that makes it hard to distinguish examples from either distribution [1, 17]. To achieve this objective, DANN adds a second head to the network. This head, referred to as the domain classification head, receives the features from the last feature extraction layer of the network (in this work's case, from the global average pooling layer). The goal of this second head is to learn to discriminate between the two domains (source and target). However, during backpropagation, the gradient computed from the domain loss is multiplied by a negative constant (-1 in this work). This gradient reversal explicitly forces the feature distributions of the two domains to become similar. The backpropagation algorithm proceeds normally for the original head (classification head). The two losses are combined as follows: $\mathcal{L}(\theta) = \mathcal{L}_y(\theta) + \lambda \mathcal{L}_d(\theta)$, where $\theta$ is the classifier's parametrization, $\mathcal{L}_y$ and $\mathcal{L}_d$ are the prediction and domain loss respectively, and $\lambda$ is a scalar that weights the domain loss.
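A minimal PyTorch sketch of this gradient reversal mechanism and the combined loss is shown below; the helper names are assumptions, and the same GradReverse helper is reused in a later sketch.

```python
# Gradient reversal sketch for DANN: identity on the forward pass, gradient
# multiplied by a negative constant on the backward pass.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the features.
        return -ctx.lamb * grad_output, None

def dann_loss(feats_s, feats_t, labels_s, gesture_head, domain_head, lamb=1.0):
    # Gesture classification loss on the labeled source data.
    loss_y = F.cross_entropy(gesture_head(feats_s), labels_s)
    # Domain discrimination loss on both domains; the reversed gradient pushes
    # the feature extractor toward domain-invariant representations.
    feats = torch.cat([GradReverse.apply(feats_s, lamb),
                       GradReverse.apply(feats_t, lamb)])
    domains = torch.cat([torch.zeros(len(feats_s), dtype=torch.long),
                         torch.ones(len(feats_t), dtype=torch.long)])
    loss_d = F.cross_entropy(domain_head(feats), domains)
    return loss_y + loss_d
```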

III-B Decision-boundary Iterative Refinement Training with a Teacher

Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) is a two-step domain-adversarial training algorithm which achieves state of the art results on a variety of domain adaptation benchmarks [40].

First Step

During the first step, referred to as VADA (for Virtual Adversarial Domain Adaptation) [40], training is done using DANN as described previously (i.e. using a second head to discriminate between domains). However, with VADA, the network is also penalized when it violates the cluster assumption on the target. This assumption states that data belonging to the same cluster in the feature space share the same class. Consequently, decision boundaries should avoid crossing dense regions. As shown in [20], this behavior can be achieved by minimizing the conditional entropy with respect to the target distribution:

$$\mathcal{L}_c(\theta; \mathcal{D}_T) = -\mathbb{E}_{x \sim \mathcal{D}_T}\left[f_\theta(x)^\top \ln f_\theta(x)\right] \qquad (1)$$

where $\theta$ is the parametrization of a classifier $f_\theta$.

In practice, $\mathcal{L}_c$ must be estimated from the available data. However, as noted by [20], such an approximation breaks down if the classifier is not locally-Lipschitz (i.e. an arbitrarily small change in the classifier's input can produce an arbitrarily large change in the classifier's output). To remedy this, VADA [40] proposes to explicitly incorporate the locally-Lipschitz constraint during training via Virtual Adversarial Training (VAT) [31]. VAT generates new "virtual" examples at each training batch by applying a small perturbation to the original data. The average maximal Kullback-Leibler divergence [28] between the real and virtual examples is then minimized to enforce the locally-Lipschitz constraint. In other words, VAT adds the following function to minimize during training:

$$\mathcal{L}_v(\theta; \mathcal{D}) = \mathbb{E}_{x \sim \mathcal{D}}\left[\max_{\|r\| \leq \epsilon} D_{KL}\left(f_\theta(x) \,\|\, f_\theta(x + r)\right)\right] \qquad (2)$$

As VAT can be seen as a form of regularization, it is also applied to the source data. In summary, the combined loss function to minimize during VADA training is:

$$\min_{\theta} \; \mathcal{L}_y(\theta; \mathcal{D}_S) + \lambda_d \mathcal{L}_d(\theta; \mathcal{D}_S, \mathcal{D}_T) + \lambda_s \mathcal{L}_v(\theta; \mathcal{D}_S) + \lambda_t \left[\mathcal{L}_v(\theta; \mathcal{D}_T) + \mathcal{L}_c(\theta; \mathcal{D}_T)\right] \qquad (3)$$

where the importance of each loss function is weighted by the hyperparameters ($\lambda_d$, $\lambda_s$, $\lambda_t$). A diagram of VADA is given in Figure 3.
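To make the VAT terms of Eq. 2 and 3 concrete, here is a rough sketch of the penalty's computation, following [31]; it assumes the model maps inputs to gesture logits, and the xi, eps and power-iteration settings are illustrative.

```python
# Rough VAT penalty sketch (Eq. 2), following Miyato et al. [31]; xi, eps and
# the single power iteration are illustrative assumptions.
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)  # predictions on the clean examples
    d = torch.randn_like(x)  # random initial perturbation direction
    for _ in range(n_power):
        # Power iteration: refine d toward the most damaging direction.
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_()
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), pred,
                      reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0].detach()
    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    # Penalize divergence between clean and adversarially perturbed outputs.
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), pred,
                    reduction="batchmean")
```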

Fig. 3: The VADA algorithm, which simultaneously tries to reduce the divergence between the labeled source ($\mathcal{D}_S$) and unlabeled target ($\mathcal{D}_T$) datasets, while also penalizing violations of the cluster assumption on the target dataset.

Second Step

During the second step, the signal from the source is removed. The idea is then to find a new parametrization that further minimizes the target cluster assumption violation, while remaining close to the classifier found during the first step. This process can then be repeated by updating the original classifier with the classifier's parametrization found at each iteration. The combined loss function to minimize during the $n$th iteration thus becomes:

$$\min_{\theta_n} \; \lambda_t \left[\mathcal{L}_v(\theta_n; \mathcal{D}_T) + \mathcal{L}_c(\theta_n; \mathcal{D}_T)\right] + \beta \, \mathbb{E}_{x \sim \mathcal{D}_T}\left[D_{KL}\left(f_{\theta_{n-1}}(x) \,\|\, f_{\theta_n}(x)\right)\right] \qquad (4)$$

where $\beta$ is a hyperparameter which weighs the importance of remaining close to the previous parametrization $\theta_{n-1}$. In practice, the optimization problem of Eq. 4 can be approximately solved with a finite number of stochastic gradient descent steps [40].

Note that both DANN and VADA are conservative domain adaptation algorithms (i.e. the training algorithm tries to generate a classifier that is able to discriminate between classes from both the source and target simultaneously). In contrast, DIRT-T is non-conservative, as it ignores the source's signal during training. In cases where the gap between the source and the target is large, this type of non-conservative algorithm is expected to perform better than its conservative counterparts [40]. In principle, this second step could be applied as a refinement step to any other domain adaptation training algorithm.

Following [40], the hyperparameter values are set as recommended in the original work.
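The following is a rough sketch of one refinement iteration of Eq. 4 (with the VAT term omitted for brevity); the loader structure and the weighting values are assumptions, not the authors' settings.

```python
# Sketch of one DIRT-T refinement iteration (Eq. 4), VAT term omitted for
# brevity; beta and lambda_t values are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F

def dirt_t_iteration(model, optimizer, target_loader, lambda_t=1e-2, beta=1e-2):
    teacher = copy.deepcopy(model)  # theta_{n-1}, kept frozen
    teacher.eval()
    for x in target_loader:  # unlabeled target batches only
        optimizer.zero_grad()
        log_p = F.log_softmax(model(x), dim=1)
        p = log_p.exp()
        # Conditional entropy term: penalize cluster assumption violations.
        cond_entropy = -(p * log_p).sum(dim=1).mean()
        # KL term: stay close to the previous parametrization's predictions.
        with torch.no_grad():
            teacher_p = F.softmax(teacher(x), dim=1)
        kl = F.kl_div(log_p, teacher_p, reduction="batchmean")
        (lambda_t * cond_entropy + beta * kl).backward()
        optimizer.step()
```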

IV Unsupervised Self-Calibration

Within an unsupervised domain adaptation setting, the classifier's performance is limited by the unavailability of labeled data from the target domain. However, real-time EMG-based gesture recognition offers a particular context in which pseudo-labels can be generated from the recorded data by looking at the context of the classifier's predictions. These pseudo-labels can then be used as a way for the classifier to perform self-recalibration. [45] proposed to leverage this special context by re-labeling the network's predictions. Let $g_{i,j}$ be the softmax value of the network's output for the $i$th gesture (associated with the $i$th output neuron) of the $j$th example of a sequence. The heuristic considers an array composed of the $2N+1$ segments surrounding example $j$ (included). For each gesture $i$, the median softmax value over this array is computed,

$$\tilde{g}_{i,j} = \operatorname{median}\left(g_{i,j-N}, \ldots, g_{i,j+N}\right) \qquad (5)$$

The pseudo-label of example $j$ then becomes the gesture associated with the maximal $\tilde{g}_{i,j}$. The median of the softmax's outputs is used instead of the prediction's mean to reduce the impact of outliers [45]. This self-calibrating heuristic will be referred to as MV (for Multiple Votes) from now on. The hyperparameter $N$ was set so that the array spans 1 second, as recommended in [45].
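A minimal sketch of this MV rule is given below; the window size n is an assumption chosen so that, with segments produced every 50 ms, the array spans roughly 1 second.

```python
# Minimal numpy sketch of the MV rule (Eq. 5); `outputs` holds a sequence of
# softmax outputs, shape (n_examples, n_gestures). With 50 ms between
# segments, n = 10 spans roughly 1 second, as recommended in [45].
import numpy as np

def mv_pseudo_labels(outputs, n=10):
    labels = np.empty(len(outputs), dtype=int)
    for j in range(len(outputs)):
        lo, hi = max(0, j - n), min(len(outputs), j + n + 1)
        # Median softmax value per gesture over the surrounding segments.
        medians = np.median(outputs[lo:hi], axis=0)
        labels[j] = int(np.argmax(medians))
    return labels
```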

This work proposes to improve on MV with a new self-calibrating algorithm, named SCADANN. SCADANN is divided into three steps:

  1. Apply DANN to the network using the labeled and newly acquired unlabeled data.

  2. Using the adapted network, perform the re-labeling scheme described in Section IV-A.

  3. Starting from the adapted network, train the network with the pseudo-labeled data and labeled data while continuing to apply DANN to minimize domain divergence between the two datasets.

The first step aims at reducing the domain divergence between the labeled recording session and the unlabeled recording, so as to improve the classification performance of the network.

The second step uses the pseudo-labeling heuristic described in Section IV-A. In addition to using the prediction’s context to enhance the re-labeling process, the proposed heuristic introduces two improvements compared to [45]:

First, the heuristic tries to detect transitions from one gesture to another. Then, already re-labeled predictions falling within the transition period are vetted and possibly re-labeled to better reflect when the actual transition occurred. This improvement aims at addressing two problems. First, the added latency introduced by majority-voting pseudo-labeling is removed. Second, this re-labeling can provide the training algorithm with examples of gesture transitions. This is of particular interest, as labeled transition examples are simply too time-consuming to produce. In fact, given a dataset with $n$ gestures, the number of transitions which would need to be recorded is $n \times (n - 1)$, something that is simply not viable considering the current need for periodic re-calibration. Introducing pseudo-labeled transition examples within the target dataset could allow the network to detect transitions more rapidly and thus reduce the system's latency. In turn, due to this latency reduction, the window's length could be increased to improve the overall system's performance.

The second improvement introduces the notion of stability to the network's predictions. Using this notion, the heuristic removes from the pseudo-labeled dataset the examples that are more likely to be re-labeled falsely. This second improvement is essential for a realistic implementation of self-calibrating algorithms, as otherwise the pseudo-labeled dataset would rapidly be filled with a large quantity of noise. This would result in a rapidly degenerating network as self-calibration is performed iteratively.

The third step re-calibrates the network using the labeled and pseudo-labeled datasets in conjunction. DANN is again employed to try to obtain a similar feature representation between the source and target datasets. The source dataset contains the labeled dataset alongside all the pseudo-labeled data from prior sessions, while the target dataset contains the pseudo-labeled data from the current session. The difference compared to the first step is that the network's weights are also optimized with respect to the cross-entropy loss calculated from the pseudo-labels. Early stopping is performed using only examples from the pseudo-labeled dataset. If only the pseudo-labeled dataset were employed for re-calibration, the network's performance would rapidly degrade from being trained only with noisy labels and possibly without certain gestures (i.e. nothing ensures that the pseudo-labeled dataset is balanced or even contains all the gestures).
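A condensed, single-batch sketch of this third step is given below, reusing the GradReverse helper from the DANN sketch; the model accessors are assumed, not the authors' actual API.

```python
# Condensed sketch of SCADANN's third step for a single batch, reusing the
# GradReverse helper from the DANN sketch; `model.features`,
# `model.gesture_head` and `model.domain_head` are assumed accessors.
import torch
import torch.nn.functional as F

def scadann_third_step(model, optimizer, x_s, y_s, x_t, y_pseudo, lamb=1.0):
    optimizer.zero_grad()
    feats_s, feats_t = model.features(x_s), model.features(x_t)
    # Supervised terms: labeled source data and pseudo-labeled target data.
    loss = F.cross_entropy(model.gesture_head(feats_s), y_s)
    loss = loss + F.cross_entropy(model.gesture_head(feats_t), y_pseudo)
    # DANN term: reversed gradient pulls the two feature distributions together.
    feats = torch.cat([GradReverse.apply(feats_s, lamb),
                       GradReverse.apply(feats_t, lamb)])
    domains = torch.cat([torch.zeros(len(x_s), dtype=torch.long),
                         torch.ones(len(x_t), dtype=torch.long)])
    loss = loss + F.cross_entropy(model.domain_head(feats), domains)
    loss.backward()
    optimizer.step()
```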

IV-A Proposed Pseudo-label Generation Heuristic

For concision's sake, the pseudo-code for the proposed re-labeling heuristic is presented in Appendix A (Algorithm 1). Note also that a Python implementation of SCADANN (alongside the pseudo-labeling heuristic) is available in the previously mentioned repository.

The main idea behind the heuristic is to look at the network's predictions one after the other; when the next prediction is different from the previous one, the heuristic goes from the stable state to the unstable state. During the stable state, the prediction of the considered segment is added to the pseudo-label array. During the unstable state, all the network's outputs (after the softmax layer) are instead accumulated in a second array. When this second array contains enough segments (a hyperparameter set to 1.5 s in this work), the class associated with the output neuron with the highest median value is defined as the new possible stable class. The new possible stable class is confirmed if the median percentage of this class (compared with the other classes) is above a certain threshold (85% and 65% for the seven- and eleven-gesture datasets respectively, selected using the 3DC Dataset). If this threshold is not reached, the oldest element in the second array is removed and replaced with the next segment. Note that the computation of the new possible stable class using the median is identical to MV.
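As an illustration, the confirmation test described above can be sketched as follows; the array and threshold names mirror Algorithm 1 but are otherwise assumptions.

```python
# Small sketch of the stable-class confirmation test described above,
# assuming `unstable_outputs` holds the accumulated softmax outputs
# (shape (n_segments, n_gestures)); names are illustrative.
import numpy as np

def confirm_stable_class(unstable_outputs, threshold_stable=0.85):
    medians = np.median(unstable_outputs, axis=0)  # per-gesture medians
    percentages = medians / medians.sum()          # normalize to percentages
    gesture = int(np.argmax(percentages))
    confirmed = percentages[gesture] > threshold_stable
    return gesture, confirmed
```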

When the new possible class is confirmed, the heuristic first checks whether it was in the unstable state for too long (2 s in this work). If it was, all the predictions accumulated during the unstable state are removed. If the unstable state was not too long, the heuristic can then take two paths: 1) the new stable state class is the same as before, or 2) they are different. If they are different, it means that a gesture transition probably occurred. Consequently, the heuristic goes back in time before the instability began (a maximum of 500 ms in this work) and looks at the derivative of the entropy calculated from the network's softmax output to determine when the network started to be affected by the gesture transition. All the segments from this instability period (alongside the relevant segments from the look-back step) are then re-labeled as the new stable state class. If instead the new stable state class is identical to the previous one, only the segments from the instability period are re-labeled. The heuristic then returns to its stable state.

V Experiments and Results

As suggested in [14], a two-step statistical procedure is employed whenever multiple algorithms are compared against each other. First, Friedman's test ranks the algorithms amongst each other. Then, Holm's post-hoc test is applied, using the No Calibration setting as the comparison basis. Additionally, Cohen's d [8] is employed to determine the effect size of using one of the unsupervised calibration algorithms over the No Calibration setting.
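For reference, a sketch of this two-step procedure (post-hoc comparison against a control with Holm's step-down correction, following [14], plus Cohen's d) could look as follows; the data layout is an assumption.

```python
# Sketch of the two-step procedure: Friedman's test, then post-hoc comparison
# against a control with Holm's correction (following Demsar [14]), plus
# Cohen's d. `scores` maps algorithm name -> aligned per-participant accuracies.
import numpy as np
from scipy import stats

def cohen_d(a, b):
    pooled = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(a) - np.mean(b)) / pooled

def friedman_holm(scores, baseline="No Cal", alpha=0.05):
    names = list(scores)
    data = np.column_stack([scores[n] for n in names])  # (participants, k)
    _, p_friedman = stats.friedmanchisquare(*data.T)
    ranks = stats.rankdata(-data, axis=1).mean(axis=0)  # average Friedman ranks
    n, k = data.shape
    se = np.sqrt(k * (k + 1) / (6.0 * n))
    others = [nm for nm in names if nm != baseline]
    base_rank = ranks[names.index(baseline)]
    p_vals = [2 * stats.norm.sf(abs(ranks[names.index(nm)] - base_rank) / se)
              for nm in others]
    # Holm's step-down procedure: stop rejecting at the first failure.
    results, stop = {}, False
    for step, idx in enumerate(np.argsort(p_vals)):
        if p_vals[idx] >= alpha / (len(others) - step):
            stop = True
        results[others[idx]] = (p_vals[idx], not stop)
    return p_friedman, ranks, results
```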

V-A Training Sessions

In this subsection, all training was performed using the first and second cycles of the relevant training session, while the third cycle was employed for testing. Training sessions one through three contain data from 20 participants, while the fourth session contains data from six participants. The time gap between consecutive training sessions is around seven days (a 21-day gap between sessions one and four). Note that for the first session, all algorithms are equivalent to the no re-calibration scheme and consequently perform the same.

Offline Seven Gestures Reduced Dataset

The average test-set accuracy obtained from the first training session across all subjects is 92.71% ± 5.46%. This accuracy, for a ConvNet using spectrograms as input, is consistent with other works using the same seven gestures with similar datasets [10, 11].

Table I shows a comparison of the No Calibration setting alongside the three DA algorithms, MV (using the best performing All-Session recalibration setting [45]) and SCADANN. Figure 4 shows a point-plot of the No Calibration, SCADANN and the Re-Calibration classifiers.

| Session | Metric | No Cal | DANN | VADA | DIRT-T | MV | SCADANN |
|---|---|---|---|---|---|---|---|
| 0 | Accuracy (STD) | 92.76% (5.46%) | N/A | N/A | N/A | N/A | N/A |
| 1 | Accuracy (STD) | 69.57% (25.81%) | 73.83% (26.66%) | 72.60% (26.28%) | 73.94% (25.98%) | 72.38% (27.02%) | 78.50% (26.40%) |
| 1 | Friedman Rank | 4.55 | 3.58 | 4.25 | 3.38 | 3.60 | 1.65 |
| 1 | H0 (p-value) | N/A | 1 | 1 | 1 | 1 | 0 (0.00001) |
| 1 | Cohen's d | N/A | 0.20 | 0.14 | 0.21 | 0.13 | 0.43 |
| 2 | Accuracy (STD) | 69.16% (26.99%) | 74.53% (28.44%) | 74.65% (28.25%) | 74.70% (28.66%) | 72.23% (26.76%) | 78.14% (27.42%) |
| 2 | Friedman Rank | 4.78 | 3.30 | 3.53 | 3.10 | 4.13 | 2.18 |
| 2 | H0 (p-value) | N/A | 1 | 1 | 1 | 1 | 0 (0.00017) |
| 2 | Cohen's d | N/A | 0.23 | 0.24 | 0.24 | 0.14 | 0.40 |

TABLE I: Offline accuracy for seven gestures of the unsupervised re-calibration algorithms. H0 = 1 indicates that the null hypothesis could not be rejected; H0 = 0 (p-value) indicates rejection.
Fig. 4: Offline accuracy for seven gestures with respect to time. Note that the slight offset for methods from the same session is for visualisation purposes only, as all methods use the same data. The values on the x-axis represent the average number of days elapsed across participants since the first session.

Offline Eleven Gestures Dataset

The average test-set accuracy obtained from the first training session across all subjects is 82.79%, which is consistent with accuracies obtained on the 3DC Dataset [12, 9]. Table II compares the No Calibration setting with the three DA algorithms, MV and SCADANN.

| Session | Metric | No Cal | DANN | VADA | DIRT-T | MV | SCADANN |
|---|---|---|---|---|---|---|---|
| 0 | Accuracy (STD) | 82.79% (9.50%) | N/A | N/A | N/A | N/A | N/A |
| 1 | Accuracy (STD) | 58.13% (24.94%) | 61.63% (26.10%) | 61.52% (25.95%) | 61.41% (25.73%) | 59.31% (26.35%) | 62.25% (26.09%) |
| 1 | Friedman Rank | 4.50 | 3.38 | 3.10 | 3.60 | 3.33 | 3.10 |
| 1 | H0 (p-value) | N/A | 1 | 1 | 1 | 1 | 1 |
| 1 | Cohen's d | N/A | 0.16 | 0.15 | 0.15 | 0.05 | 0.18 |
| 2 | Accuracy (STD) | 55.52% (24.43%) | 60.69% (24.67%) | 61.64% (24.67%) | 60.64% (25.21%) | 57.59% (25.51%) | 62.13% (25.37%) |
| 2 | Friedman Rank | 4.90 | 3.20 | 2.85 | 3.38 | 3.95 | 2.73 |
| 2 | H0 (p-value) | N/A | 0 (0.01218) | 0 (0.00212) | 0 (0.01989) | 1 | 0 (0.00118) |
| 2 | Cohen's d | N/A | 0.24 | 0.28 | 0.23 | 0.09 | 0.30 |

TABLE II: Offline accuracy for eleven gestures of the unsupervised re-calibration algorithms using offline sessions as unlabeled data

V-B Evaluation Sessions

In this subsection, training using labeled data was conducted using the first, second and third cycles of the relevant training session.

Online Eleven Gestures Dataset

Table III compares the No Calibration setting with the three DA algorithms, MV and SCADANN on the second evaluation session of each experimental session, when the labeled and unlabeled data leveraged for training come from the offline dataset.

| Session | Metric | No Cal | DANN | VADA | DIRT-T | MV | SCADANN |
|---|---|---|---|---|---|---|---|
| 0 | Accuracy (STD) | 46.35% (10.48%) | N/A | N/A | N/A | N/A | N/A |
| 1 | Accuracy (STD) | 36.81% (18.21%) | 38.75% (18.72%) | 38.36% (19.06%) | 37.53% (18.32%) | 38.32% (19.55%) | 40.51% (19.05%) |
| 1 | Friedman Rank | 4.15 | 3.65 | 3.93 | 3.90 | 3.18 | 2.20 |
| 1 | H0 (p-value) | N/A | 1 | 1 | 1 | 1 | 0 (0.00490) |
| 1 | Cohen's d | N/A | 0.11 | 0.09 | 0.04 | 0.09 | 0.22 |
| 2 | Accuracy (STD) | 37.23% (16.50%) | 38.43% (16.80%) | 38.02% (16.82%) | 37.97% (16.80%) | 39.46% (17.44%) | 39.47% (16.91%) |
| 2 | Friedman Rank | 4.60 | 3.40 | 3.75 | 3.75 | 3.00 | 2.50 |
| 2 | H0 (p-value) | N/A | 1 | 1 | 1 | 0 (0.02736) | 0 (0.00193) |
| 2 | Cohen's d | N/A | 0.08 | 0.05 | 0.05 | 0.15 | 0.15 |

TABLE III: Online accuracy for eleven gestures of the unsupervised re-calibration algorithms using offline sessions as unlabeled data

The average accuracy obtained on the second evaluation session of each experimental session across all participants is 39.91% ± 14.67% and 48.89% ± 10.95% for the No Calibration and Re-Calibration settings respectively.

Table IV presents the comparison between the No Calibration setting and the three DA algorithms, MV and SCADANN, using the first evaluation session of each experimental session as the unlabeled dataset.

| Session | Metric | No Cal | DANN | VADA | DIRT-T | MV | SCADANN |
|---|---|---|---|---|---|---|---|
| 0 | Accuracy (STD) | 46.35% (10.48%) | 47.84% (10.83%) | 48.26% (11.04%) | 48.19% (11.01%) | 46.02% (10.84%) | 48.06% (11.34%) |
| 0 | Friedman Rank | 4.30 | 3.60 | 3.05 | 3.05 | 4.40 | 2.60 |
| 0 | H0 (p-value) | N/A | 1 | 1 | 1 | 1 | 0 (0.02030) |
| 0 | Cohen's d | N/A | 0.14 | 0.18 | 0.17 | -0.03 | 0.16 |
| 1 | Accuracy (STD) | 36.81% (18.21%) | 40.81% (18.92%) | 40.74% (18.94%) | 40.54% (18.96%) | 37.17% (18.73%) | 39.84% (19.40%) |
| 1 | Friedman Rank | 4.68 | 2.98 | 2.75 | 3.20 | 4.40 | 3.20 |
| 1 | H0 (p-value) | N/A | 0 (0.01624) | 0 (0.00569) | 0 (0.01624) | 1 | 0 (0.02532) |
| 1 | Cohen's d | N/A | 0.24 | 0.23 | 0.22 | 0.02 | 0.18 |
| 2 | Accuracy (STD) | 37.23% (16.50%) | 40.17% (17.31%) | 39.98% (17.27%) | 39.90% (17.02%) | 37.21% (17.00%) | 40.10% (16.70%) |
| 2 | Friedman Rank | 4.90 | 2.43 | 2.83 | 3.13 | 4.63 | 3.10 |
| 2 | H0 (p-value) | N/A | 0 (0.00014) | 0 (0.00181) | 0 (0.00704) | 1 | 0 (0.00704) |
| 2 | Cohen's d | N/A | 0.20 | 0.18 | 0.18 | 0.00 | 0.20 |

TABLE IV: Online accuracy for eleven gestures of the unsupervised re-calibration algorithms using the first evaluation session as unlabeled data

A point-plot of the online accuracy of the No Calibration, Re-Calibrated, SCADANN and Re-Calibrated SCADANN using the first evaluation session of each experimental session as unlabeled data is shown in Figure 5.

Fig. 5: Online accuracy for eleven gestures with respect to time. Note that the slight offset for methods from the same session is for visualisation purposes only, as all methods use the same data. The values on the x-axis represent the average number of days elapsed across participants since the first session.

VI Discussion

The task of performing adaptation when multiple days have elapsed is especially challenging. As a comparison, on the within-day adaptation task presented in [45], MV was able to enhance classification accuracy by 10% on average compared to the No Calibration scheme. Within this work, however, the greatest improvement achieved by MV was 3.07%, on the reduced offline dataset. Overall, the best improvement shown in this paper was achieved by SCADANN on the same task, with an improvement of 8.93%. All three tested domain adversarial algorithms were also able to consistently improve the network's accuracy compared to the No Calibration scheme. When used to adapt to online unsupervised data, they were even able to achieve higher overall accuracy than SCADANN. This decrease in performance of SCADANN and MV on harder datasets is most likely due to the reduction of the overall classifier's performance. This phenomenon is perhaps best shown by comparing Tables III and IV, where all algorithms were tested on the same data. Note how SCADANN was the best ranked adaptation method and MV the second best in Table III, whereas in Table IV they degenerated towards being the worst (with MV performing even worse than the No Calibration setting on sessions 0 and 2).

Even more so than the general performance of the classifier, however, the type of error that the classifier makes has the potential to affect the self-calibrating algorithms the most. In other words, if the classifier is confident in its errors and the errors span a large amount of time, the pseudo-labeling heuristic cannot hope to re-label the segments correctly. This can rapidly make the self-calibrating algorithm degenerate, as the adaptation might occur while a subset of gestures is completely misclassified in the pseudo-labeled dataset. To address this issue, future work will also leverage a hybrid IMU/EMG classifier, which was shown to also be able to achieve state of the art gesture recognition [19, 26]. The hope of this approach is that using two completely different modalities will result in a second classifier which makes mistakes at different moments, so that SCADANN's re-labeling heuristic is able to more accurately generate the pseudo-labels. Note that, overall, this re-labeling heuristic substantially enhanced the pseudo-labels' accuracy compared to the one used with MV. As an example, consider the supervised Re-Calibration classifier trained on all the training cycles of the relevant training session and tested on the evaluation sessions. This classifier achieves an average accuracy of 48.36% over 544 263 examples. In comparison, the MV re-labeling heuristic achieves 54.28% accuracy over the same amount of examples, while the SCADANN re-labeling heuristic obtains 61.89% and keeps 478 958 examples using the 65% threshold. When using a threshold of 85%, the accuracy reaches 68.21% and 372 567 examples are retained. SCADANN's improved re-labeling accuracy compared to MV is in part due to the look-back feature of the heuristic (when de-activated, SCADANN's re-labeling accuracy drops to 65.23% for the 85% threshold) and to its ability to remove highly uncertain sub-sequences of predictions. Within this study, the 65% threshold was chosen due to the limited availability of unlabeled data within each session, as removing too many examples might completely erase some gestures. It is suspected that in real application scenarios, where the amount of unlabeled data is not limited as it is on a finite dataset, higher thresholds should be preferred, as discarding more examples can be afforded and would most likely further enhance the performance of the self-calibrated classifier.

The main limitation of this work is that the self-calibration algorithms were tested without the participants being able to react in real-time to the classifier's updates. While the effect size was often small, the tested self-calibrating algorithms, and in particular SCADANN, consistently outperformed the No Calibration scheme. However, due to the type of data available, the self-calibration could only occur once every seven days, thus substantially increasing the difficulty of this already challenging problem. In comparison, MV [45] was shown to almost completely counteract the signal drift when used on a within-day offline dataset. Future works will thus focus on using SCADANN to self-calibrate the classifier in real-time, so as to measure its ability to adapt in conjunction with the participant over longer periods of real-time use.

VII Conclusion

This paper presents SCADANN, a self-calibrating domain adversarial algorithm for myoelectric control systems. Overall, SCADANN was shown to improve the network's performance compared to the No Calibration setting in all the tested cases, and the difference was significant in almost all of them. This work also tested three widely used, state of the art, unsupervised domain adversarial algorithms on the challenging task of EMG-based self-calibration. These three algorithms were also found to consistently improve the classifier's performance compared to the No Calibration setting. MV, a previously proposed self-calibrating algorithm designed specifically for EMG-based gesture recognition, was also compared to the three DA algorithms and SCADANN. Overall, SCADANN was shown to consistently rank amongst the best (and often was the best) of the tested algorithms on both offline and online test datasets.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers 401220434, 376091307, 114090] and the Institut de recherche Robert-Sauvé en santé et en sécurité du travail (IRSST).

Appendix A Pseudo-labeling Heuristic

procedure GeneratePseudoLabels(unstable_len, threshold_stable, max_len_unstable, max_look_back, threshold_derivative)
    pseudo_labels ← empty array
    arr_preds ← network's predictions
    arr_net_out ← network's softmax outputs
    begin_arr ← the first unstable_len elements of arr_net_out
    stable ← TRUE
    arr_unstable_output ← empty array
    current_class ← the label associated with the output neuron with the highest median value in begin_arr
    for i from 0 to length(arr_preds) − 1 do
        if current_class ≠ arr_preds[i] AND stable is TRUE then
            stable ← FALSE
            first_index_unstable ← i
            arr_unstable_output ← empty array
        if stable is FALSE then
            APPEND arr_net_out[i] to arr_unstable_output
            if length(arr_unstable_output) > unstable_len then
                REMOVE the oldest element of arr_unstable_output
            if length(arr_unstable_output) ≥ unstable_len then
                arr_median ← the median value in arr_unstable_output for each gesture
                arr_percentage_medians ← arr_median / the sum of arr_median
                gesture_found ← the label of the gesture with the highest median percentage in arr_percentage_medians
                if arr_percentage_medians[gesture_found] > threshold_stable then
                    stable ← TRUE
                    if current_class = gesture_found AND the time spent unstable is less than max_len_unstable then
                        Add the predictions which occurred during the unstable time to pseudo_labels with the gesture_found label
                    else if current_class ≠ gesture_found AND the time spent unstable is less than max_len_unstable then
                        index_start_change ← GetIndexStartChange(arr_net_out, first_index_unstable, max_look_back, threshold_derivative)
                        Add the predictions which occurred during the unstable time to pseudo_labels with the gesture_found label
                        Re-label the predictions of pseudo_labels starting at index_start_change with the gesture_found label
                    current_class ← gesture_found
                    arr_unstable_output ← empty array
        else
            Add the current prediction to pseudo_labels with the current_class label
    return pseudo_labels

Algorithm 1: Pseudo-labeling Heuristic
procedure GetIndexStartChange(arr_net_out, first_index_unstable, max_look_back, threshold_derivative)
    data_uncertain ← the elements of arr_net_out from index first_index_unstable − max_look_back to index first_index_unstable
    discrete_entropy_derivative ← the entropy of each element of data_uncertain, followed by the discrete derivative of these entropies
    index_transition_start ← 0
    for i from 0 to length(data_uncertain) − 1 do
        if discrete_entropy_derivative[i] > threshold_derivative then
            index_transition_start ← i
            break
    return first_index_unstable − max_look_back + index_transition_start

Algorithm 2: Heuristic to find the index at which a transition starts

References

  1. H. Ajakan, P. Germain, H. Larochelle, F. Laviolette and M. Marchand (2014) Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446.
  2. A. Al-Timemy, G. Bugmann and J. Escudero (2018) Adaptive windowing framework for surface electromyogram-based pattern recognition system for transradial amputees. Sensors 18 (8), pp. 2402.
  3. U. C. Allard, F. Nougarou, C. L. Fall, P. Giguère, C. Gosselin, F. Laviolette and B. Gosselin (2016) A convolutional neural network for robotic arm guidance using sEMG based frequency-features. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2464–2470.
  4. U. C. Allard, F. Nougarou, C. L. Fall, P. Giguère, C. Gosselin, F. Laviolette and B. Gosselin (2016) A convolutional neural network for robotic arm guidance using sEMG based frequency-features. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2464–2470.
  5. S. Amsüss, P. M. Goebel, N. Jiang, B. Graimann, L. Paredes and D. Farina (2013) Self-correcting pattern recognition system of surface EMG signals for upper limb prosthesis control. IEEE Transactions on Biomedical Engineering 61 (4), pp. 1167–1176.
  6. M. Atzori, A. Gijsberts, C. Castellini, B. Caputo, A. M. Hager, S. Elsig, G. Giatsidis, F. Bassetto and H. Müller (2014) Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Scientific Data 1, pp. 140053.
  7. A. Campeau-Lecours, U. Côté-Allard, D. Vu, F. Routhier, B. Gosselin and C. Gosselin (2018) Intuitive adaptive orientation control for enhanced human–robot interaction. IEEE Transactions on Robotics 35 (2), pp. 509–520.
  8. J. Cohen (2013) Statistical power analysis for the behavioral sciences. Routledge.
  9. U. Côté-Allard, E. Campbell, A. Phinyomark, F. Laviolette, B. Gosselin and E. Scheme (2019) Interpreting deep learning features for myoelectric control: a comparison with handcrafted features. arXiv preprint arXiv:1912.00283.
  10. U. Côté-Allard, C. L. Fall, A. Campeau-Lecours, C. Gosselin, F. Laviolette and B. Gosselin (2017) Transfer learning for sEMG hand gestures recognition using convolutional neural networks. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1663–1668.
  11. U. Côté-Allard, C. L. Fall, A. Drouin, A. Campeau-Lecours, C. Gosselin, K. Glette, F. Laviolette and B. Gosselin (2019) Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering 27 (4), pp. 760–771.
  12. U. Côté-Allard, G. Gagnon-Turcotte, F. Laviolette and B. Gosselin (2019) A low-cost, wireless, 3-D-printed custom armband for sEMG hand gesture recognition. Sensors 19 (12), pp. 2811.
  13. U. Côté-Allard, G. Gagnon-Turcotte, A. Phinyomark, E. Scheme, F. Laviolette and B. Gosselin (2019) Virtual reality to study the gap between offline and real-time EMG-based gesture recognition. arXiv preprint.
  14. J. Demšar (2006) Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7 (Jan), pp. 1–30.
  15. Y. Du, W. Jin, W. Wei, Y. Hu and W. Geng (2017) Surface EMG-based inter-session gesture recognition enhanced by deep domain adaptation. Sensors 17 (3), pp. 458.
  16. Y. Gal and Z. Ghahramani (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059.
  17. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand and V. Lempitsky (2016) Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17 (1), pp. 2096–2030.
  18. W. Geng, Y. Du, W. Jin, W. Wei, Y. Hu and J. Li (2016) Gesture recognition by instantaneous surface EMG images. Scientific Reports 6, pp. 36571.
  19. M. Georgi, C. Amma and T. Schultz (2015) Recognizing hand and finger gestures with IMU based motion and EMG based muscle activity sensing. In Biosignals, pp. 99–108.
  20. Y. Grandvalet and Y. Bengio (2005) Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, pp. 529–536.
  21. Y. Gu, D. Yang, Q. Huang, W. Yang and H. Liu (2018) Robust EMG pattern recognition in the presence of confounding factors: features, classifiers and adaptive learning. Expert Systems with Applications 96, pp. 208–217.
  22. M. Hakonen, H. Piitulainen and A. Visala (2015) Current state of digital signal processing in myoelectric interfaces and related applications. Biomedical Signal Processing and Control 18, pp. 334–359.
  23. D. Holz, K. Hay and M. Buckwald (2015) Electronic sensor. Google Patents. US Patent App. 29/428,763.
  24. Q. Huang, D. Yang, L. Jiang, H. Zhang, H. Liu and K. Kotani (2017) A novel unsupervised adaptive learning method for long-term electromyography (EMG) pattern recognition. Sensors 17 (6), pp. 1370.
  25. S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  26. R. N. Khushaba, A. Al-Timemy, S. Kodagoda and K. Nazarpour (2016) Combined influence of forearm orientation and muscular contraction on EMG pattern recognition. Expert Systems with Applications 61, pp. 154–161.
  27. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  28. S. Kullback (1997) Information theory and statistics. Courier Corporation.
  29. Y. LeCun, Y. Bengio and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444.
  30. J. Liu, X. Sheng, D. Zhang, J. He and X. Zhu (2014) Reduced daily recalibration of myoelectric prosthesis classifiers based on domain adaptation. IEEE Journal of Biomedical and Health Informatics 20 (1), pp. 166–176.
  31. T. Miyato, S. Maeda, M. Koyama and S. Ishii (2018) Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (8), pp. 1979–1993.
  32. M. A. Oskoei and H. Hu (2007) Myoelectric control systems—a survey. Biomedical Signal Processing and Control 2 (4), pp. 275–294.
  33. O. Ciga (2019) GitHub repository for a PyTorch implementation of "A DIRT-T approach to unsupervised domain adaptation". Zenodo.
  34. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga and A. Lerer (2017) Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.
  35. A. Phinyomark and E. Scheme (2018) A feature extraction issue for myoelectric control based on wearable EMG sensors. In 2018 IEEE Sensors Applications Symposium (SAS), pp. 1–6.
  36. C. Prahm, B. Paassen, A. Schulz, B. Hammer and O. Aszmann (2017) Transfer learning for rapid re-calibration of a myoelectric prosthesis after electrode shift. In Converging Clinical and Engineering Research on Neurorehabilitation II, pp. 153–157.
  37. E. Scheme and K. Englehart (2011) Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. Journal of Rehabilitation Research & Development 48 (6).
  38. E. Scheme and K. Englehart (2011) Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. Journal of Rehabilitation Research & Development 48 (6).
  39. E. J. Scheme, B. S. Hudgins and K. B. Englehart (2013) Confidence-based rejection for improved pattern recognition myoelectric control. IEEE Transactions on Biomedical Engineering 60 (6), pp. 1563–1570.
  40. R. Shu, H. H. Bui, H. Narui and S. Ermon (2018) A DIRT-T approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735.
  41. L. H. Smith, L. J. Hargrove, B. A. Lock and T. A. Kuiken (2010) Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19 (2), pp. 186–192.
  42. D. St-Onge, U. Côté-Allard, K. Glette, B. Gosselin and G. Beltrame (2019) Engaging with robotic swarms: commands from expressive motion. ACM Transactions on Human-Robot Interaction (THRI) 8 (2), pp. 11.
  43. A. Tabor, S. Bateman and E. Scheme (2016) Game-based myoelectric training. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts, pp. 299–306.
  44. B. Xu, N. Wang, T. Chen and M. Li (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853.
  45. X. Zhai, B. Jelfs, R. H. Chan and C. Tin (2017) Self-recalibrating surface EMG pattern recognition for neuroprosthesis control based on convolutional neural network. Frontiers in Neuroscience 11, pp. 379.
  46. X. Zhu and A. B. Goldberg (2009) Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 3 (1), pp. 1–130.
  47. M. Zia ur Rehman, A. Waris, S. Gilani, M. Jochumsen, I. Niazi, M. Jamil, D. Farina and E. Kamavuako (2018) Multiday EMG-based classification of hand motions with deep learning techniques. Sensors 18 (8), pp. 2497.