ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System


Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang
The Hong Kong University of Science and Technology
Huazhong University of Science and Technology
Co-primary Authors

Deep neural network (DNN)-powered Electrocardiogram (ECG) diagnosis systems have recently made promising progress toward taking over tedious examinations by cardiologists. However, their vulnerability to adversarial attacks still lacks comprehensive investigation. Existing attacks in the image domain are not directly applicable due to the distinct visualization and dynamic properties of ECGs. This paper therefore takes a step toward thoroughly exploring adversarial attacks on DNN-powered ECG diagnosis systems. We analyze the properties of ECGs to design effective attack schemes under two attack models respectively. Our results demonstrate the blind spots of DNN-powered diagnosis systems under adversarial attacks, which calls attention to adequate countermeasures.

1 Introduction

In common clinical practice, the ECG is an important tool for diagnosing a wide spectrum of cardiac disorders, which are by statistics a leading health problem and cause of death worldwide [32]. There are recent high-profile examples of Deep Neural Network (DNN)-powered approaches achieving parity with human cardiologists on ECG classification and diagnosis [5, 17, 19, 1]. Given the enormous costs of healthcare, it is tempting to replace the expensive manual ECG examination by cardiologists with a cheap and highly accurate deep learning system. Recently, the U.S. Food and Drug Administration has granted clearance to several deep learning-based ECG diagnostic systems from companies such as AliveCor, Biofourmis and Lepu.

With DNNs' increasing adoption in ECG diagnosis, their potential vulnerability to 'adversarial examples' also arouses great public concern. The state-of-the-art literature has shown that, to attack a DNN-based image classifier, an adversary can construct adversarial images by adding almost imperceptible perturbations to the input image, misleading the DNN into classifying them as an incorrect class [31, 14, 7].

Such adversarial attacks would pose devastating threats to DNN-powered ECG diagnosis systems. On one hand, adversarial examples fool the system into giving incorrect results, so that the system fails to serve its purpose of diagnosis assistance. On the other hand, adversarial examples would breed medical fraud. The DNNs' outputs are expected to be utilized in other decision-making in the medical system [12], including billing and reimbursement between hospitals/physicians and insurance companies. Large institutions or individual actors may exploit the system's blind spots on adversarial examples to inflate medical costs (e.g., exaggerate symptoms) for profit; for instance, a cardiologist was convicted in a "fountain of youth" billing fraud scam.

To our knowledge, previous literature on DNN model attacks mainly focuses on the image domain and has yet to thoroughly discuss adversarial attacks on ECG recordings. In this paper, we identify the distinct properties of ECGs and investigate two types of adversarial attacks against DNN-based ECG classification systems.

In Type I Attack, the adversary can access the ECG recordings and corrupt them by adding perturbations. Such an adversary could be a cardiologist who purposely manipulates a patient's ECGs to get more reimbursement from the insurance company. We found that simply applying existing image-targeted attacks to ECG recordings generates suspicious adversarial instances, because the $L_p$ norms commonly used in the image domain to encourage visual imperceptibility are unsuitable for ECGs (see Figure 4). In visualization, each value in an ECG represents the voltage of a sample point, which is visualized as a line curve, whereas each value in an image represents the grayscale or RGB value of a pixel, which is visualized as the corresponding color. Humans have different perceptual sensitivities to colors and line curves. As shown in Fig. 1, when two data arrays are visualized as line curves, their differences are more prominent than when visualized as gray-scale images. In this paper, we propose smoothness metrics to quantify the perceptual similarity of line curves, and leverage them to generate unsuspicious adversarial ECG instances.

Figure 1: Perception test. Two data arrays, where the second is obtained by adding a few small perturbations to the first. Both are visualized as line curves and as gray-scale images.

In Type II Attack, the adversary corrupts ECGs via a physical process such as EMI signal injection [20], which injects the desired perturbation into the on-the-fly signals. In this case, the adversary may not be able to access the ECGs directly, or may want to fool the system without leaving digital tampering footprints. Different from images, ECGs are periodic, and it is hard to determine the exact sampling points of the on-the-fly ECGs: the possible skewing may cause a perturbation intended for signal peaks to land on signal troughs. Moreover, filtering, a standard process in most ECG devices to combat noise, may impair the effect of the perturbation. In addition, ECGs are likely to be regarded as private data and stored only on a local device for privacy. In this paper, we explicitly consider the possible skewing and filtering in the attack scheme to generate filtering-resistant perturbations that are effective on the on-the-fly ECGs; the perturbations obtained from a few leaked instances are applicable to other unseen ones as well.

In summary, the contributions of this paper are as follows:

  • This paper thoroughly investigates adversarial attacks on DNN-based ECG classification systems. We identify the distinct properties of ECGs to facilitate the design of effective attack schemes under two attack models respectively.

  • We propose a smoothness metric that effectively quantifies human perceptual distance on line curves, capturing pattern similarity in a computationally-efficient way. Adversarial attacks using the smoothness metric achieve a 99.9% attack success rate. In addition, we conduct an extensive human perceptual study on both ordinary people and cardiologists to evaluate the imperceptibility of the adversarial ECG instances.

  • We model the sampling point uncertainty of the on-the-fly ECGs and the filtering effect within the adversarial generation scheme. The generated perturbations are skewing-resistant and filtering-resistant to tamper with on-the-fly signals (99.64% success rate), and generalize well in unseen examples.

2 Background

In this section, we first introduce the victim DNN-based ECG classification system used for attack scheme evaluation, and then describe its threat models.

2.1 Victim DNN-powered ECG Diagnosis Model

We apply our attack strategies to a DNN-based arrhythmia classification system [29, 2, 5]. An arrhythmia is defined as any rhythm other than a normal rhythm. Early and accurate detection of arrhythmia types is important for detecting heart disease and choosing the appropriate treatment for a patient. If the detection algorithm is misled to classify an arrhythmia as a normal rhythm, the patient may miss the optimal treatment period. Conversely, if a normal rhythm is misclassified as an arrhythmia, the patient may receive unnecessary consultation and treatment, resulting in wasted medical resources or medical fraud.

The original model [29] adopts a 34-layer Residual Network (ResNet) [16] to classify a 30 s single-lead ECG segment into 14 classes. However, its dataset and trained model are not public. In the PhysioNet/Computing in Cardiology Challenge 2017 [9], [2] reproduced the approach of [29] on the PhyDB dataset and achieved good performance. This model is representative of the current state of the art in arrhythmia classification, and both the algorithm and the trained model are open source. The model architecture is shown in Figure 2. The PhyDB dataset consists of 8,528 short single-lead ECG segments labeled with 4 classes: normal rhythm (N), atrial fibrillation (A), other rhythm (O) and noise (~). Both atrial fibrillation and other rhythm indicate arrhythmia. Atrial fibrillation is the most prevalent cardiac arrhythmia; "other rhythm" in the dataset refers to abnormal arrhythmias other than atrial fibrillation. Note that the accuracy of this model on the PhyDB dataset is not 100%. Thus, to prove the effectiveness of the proposed attacks, we only generate adversarial examples for ECGs originally correctly classified by the model without attacks (6,081 ECGs in total, shown in Table 2).

Figure 2: Architecture of Victim Model.

2.2 Threat Models

2.2.1 Type I Attack

The adversary has access to ECG recordings. One possible case is a cardiologist who can access patients' ECGs and has a monetary incentive to manipulate them to fool the checking system of insurance companies. Another is a hacker who intercepts and corrupts data to attack a cloud-deployed ECG diagnosis system for fun or profit; such data may be uploaded from portable patches like the Life Signal LP1100 or household medical instruments like the Heal Force ECG monitor to cloud-deployed algorithms for analysis. In both cases, the adversary aims to engineer ECGs so that the ECG classification system is misled to give the diagnosis that he/she desires; meanwhile, the perturbations should be sufficiently subtle that they are either imperceptible to humans or, if perceptible, seem natural and not representative of an attack.

It is worth mentioning the difference between adversarial attacks and simple substitution attacks. In a substitution attack, the adversary replaces the victim ECG with the ECG of another subject of the target class. However, ECGs, as a kind of biomedical signal, are often unique to their owners, like fingerprints [26]. Thus, simple substitution attacks can be effectively defended against if the system checks input ECGs against prior recordings from the same patient, whereas adversarial attacks only add subtle perturbations without substantially altering the personal identifier (Figure 3).

Figure 3: Adversarial attack vs. substitution attack

2.2.2 Type II Attack

The adversary corrupts ECGs via a physical process such as EMI signal injection [20], which injects the desired perturbation into the on-the-fly signals. One possible case is a cardiologist who wants to manipulate the system's diagnosis without leaving digital tampering footprints, to avoid getting caught. Another is an adversary who cannot access the ECGs directly because they are measured and analyzed by an ECG classification model stored on a local device. In both cases, the adversary aims to inject a perturbation into the on-the-fly ECGs that maximizes the probability of fooling the system.

3 Related Works

Here we review recent works on adversarial examples, and the existing arrhythmia classification systems.

3.1 Adversarial Examples

Recently, considerable attack strategies have been proposed to generate adversarial examples. Attacks can be classified into targeted and untargeted ones based on the adversarial goal: the former modifies an input to mislead the targeted model into classifying the perturbed input as a chosen class, while the latter makes the perturbed input misclassified as any class other than the ground truth. In this paper, we focus on the more powerful targeted attacks.

Based on the accessibility of the target model, existing attacks fall into white-box and black-box categories. In the former, an adversary has complete access to the classifier [31, 14, 25, 7, 21], while in the latter, an adversary has zero knowledge about it [28, 24, 22]. This paper studies white-box adversarial attacks to explore the upper bound of an adversary's capability and better motivate defense methods. Besides, prior works [28, 22] have shown the transferability of adversarial attacks, i.e., training a substitute model given black-box access to a target model, and transferring the attacks by attacking the substitute.

In the image domain, most works adopted $L_p$ norms as approximations of human perceptual distance to constrain the distortion. However, for ECGs in time-series format, people focus more on the overall pattern/shape, which cannot be fully described by $L_p$ norms [11, 13] (see Section 'Similarity Metrics' for details). Recent works [21, 4, 8] have explored the robustness of adversarial examples in the physical world, where the input images cannot be precisely controlled and may change under different viewpoints, lighting and camera noise. Our strategy for the Type II attack is inspired by [4, 6]. Different from images, we deal with the sampling-point uncertainty of periodic ECGs and the filtering function of ECG devices.

Recent works on GAN-based attacks [33, 30] focus on improving attack efficiency against image classification systems, which can be combined with the metric computation efficiency of ECGadv in future work. A workshop paper [15] convolves perturbations with Gaussian kernels for ECG adversarial attacks; our proposed smoothness metric and the Gaussian kernel method can be integrated to improve the system. Besides, our paper further addresses the issues in physical ECG attacks. As for emerging defense methods, [3] proposed a general framework to circumvent several published defenses based on randomly transforming the input. Thus, we do not discuss defense breaking in this paper.

3.2 Arrhythmia Classification System

Considerable efforts have been made on automated arrhythmia classification systems to take over tedious manual examinations. Deep learning methods show great potential due to their ability to automatically learn features through multiple levels of abstraction, which frees the system from the dependence on hand-engineered features. Recent works [19, 1, 5] started applying DNN models on ECG signals for arrhythmia classification and achieved good performance. For any system in the health-care field, it is crucial to defend against any possible attacks since people’s lives rely heavily on the system’s reliability. Prior work [20] has launched attacks to pollute the measurement of cardiac devices by a low-power emission of chosen electromagnetic waveforms. The adversarial attacks and the injection attacks in [20] complement each other. The injection attack can inject the carefully-crafted perturbation generated by adversarial attacks to perform targeted attacks to mislead the arrhythmia classification system.

4 Technical Approach

In this section, we illustrate our attack strategies for two threat models respectively.

4.1 Type I Attack Strategy

4.1.1 Problem Formulation

Given an $m$-class classifier $F$ that accepts an input $x \in \mathbb{R}^n$ and produces an output $y \in \mathbb{R}^m$. The output vector $y$, treated as a probability distribution, satisfies $0 \le y_i \le 1$ and $\sum_{i=1}^{m} y_i = 1$. The classifier assigns the label $C(x) = \arg\max_i y_i$ to the input $x$. Let $C^*(x)$ be the correct label of $x$. Given a valid input $x$ and a target class $t \ne C^*(x)$, an adversary aims to generate an adversarial example $x^{adv} = x + \delta$ so that the classifier predicts $C(x^{adv}) = t$ (i.e., a successful attack), and $x$ and $x^{adv}$ are close under the similarity metric (i.e., visual imperceptibility). It can be modeled as a constrained minimization problem as seen in prior works [31]:

    minimize $d(x, x + \delta)$  such that  $C(x + \delta) = t$

where $d(\cdot, \cdot)$ is some similarity metric. It is worth mentioning that there are no box constraints for time-series measurements. It is equivalent to solve [7]:

    minimize $d(x, x + \delta) + c \cdot f(x + \delta)$

where $f$ is an objective function mapping the input to a positive number, which satisfies $f(x + \delta) \le 0$ if and only if $C(x + \delta) = t$. One common objective function is cross-entropy. We adopt the one in [7]:

    $f(x') = \max\big(\max_{i \ne t} Z(x')_i - Z(x')_t,\ 0\big)$

where $Z(x')$ are the logits, i.e., the output of all layers except the softmax, and $x'$ is short-hand for $x + \delta$.
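As a minimal sketch of this targeted objective (the function name is ours, and we assume the logits are available as a plain vector):

```python
import numpy as np

def cw_objective(logits, target):
    """Targeted objective in the style of [7]: non-positive exactly
    when the target class holds the highest logit."""
    logits = np.asarray(logits, dtype=float)
    other = np.delete(logits, target)  # logits of all non-target classes
    return max(other.max() - logits[target], 0.0)
```

Minimizing this term pushes the target logit above all others; once it dominates, the objective clamps to zero and only the similarity term continues to shrink.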

4.1.2 Similarity Metrics

To generate adversarial examples, we require a distance metric that quantifies perceptual similarity to encourage visual imperceptibility. The widely-adopted distance metrics in the literature are $L_p$ norms, where the $p$-norm is defined as $\|v\|_p = (\sum_{i=1}^{n} |v_i|^p)^{1/p}$. $L_p$ norms focus on the change in each pixel value, whereas human perception of line curves focuses more on the overall pattern/shape. Studies in [11, 13] show that, given a group of line curves for similarity assessment, pattern-focused distance metrics like dynamic time warping (DTW)-based ones produce rankings closer to human-annotated rankings than value-focused metrics like Euclidean distance. Thus, we first considered using DTW to quantify the similarity of ECGs. However, the non-differentiability and non-parallelism of DTW make it ill-suited for adversarial attacks. Recent work [10] proposes a differentiable DTW variant, Soft-DTW; however, Soft-DTW does not change the essence of DTW as a standard dynamic programming problem. The value and gradient of Soft-DTW are computed in quadratic time, and it is hard to leverage the parallel computing of GPUs to speed it up.
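For context, the classic DTW recurrence that Soft-DTW inherits its quadratic cost from can be sketched as follows (a textbook dynamic program; the function name is ours):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW dynamic program: O(len(a) * len(b)) time and space,
    the quadratic cost discussed above."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```

For 30 s ECGs of length 9000, each evaluation fills a 9000x9000 table, which illustrates why a linear-time surrogate is attractive.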

To capture pattern similarity in a computation-efficient way, we adopt the following metric, denoted smoothness, as our similarity metric. Given the perturbation $\delta = x^{adv} - x$, where $\mathrm{Var}(\cdot)$ refers to variance calculation:

    $d_{smooth}(x, x^{adv}) = \mathrm{Var}\big(\{\delta_{k+1} - \delta_k\}_{k=1}^{n-1}\big)$
The smoothness metric quantifies the smoothness of the perturbation $\delta$ by measuring the variation of the differences between neighbouring points of the perturbation. The smaller the variation, the smoother the perturbation. A smoother perturbation means the adversarial instance $x^{adv}$ is more likely to preserve a pattern similar to the original instance $x$. In the extreme case where $d_{smooth} = 0$, the difference $\delta_{k+1} - \delta_k$ is a constant, i.e., the adversarial instance has the same shape as the original instance $x$. It is worth mentioning that in our attack scheme, we intentionally preserve the zero-mean and unit-variance property of the generated $x^{adv}$, so the perturbation cannot be easily filtered out by the normalization layer of the system. Besides, compared with the quadratic time complexity of Soft-DTW, the smoothness metric can be computed in linear time, which is efficient in principle. To further quantify the efficiency, we run the adversarial attacks with different metrics: Soft-DTW, the smoothness metric and the $L_2$ norm, using the same computing resources (AWS c5.2xlarge instances) and the same victim ECGs. The average CPU time per iteration of each metric is shown in Table 1. The smoothness metric can be further accelerated by GPU, as the $L_2$ norm can.

Metric               Soft-DTW   smoothness   $L_2$
CPU time/iteration   12.28s     0.05s        0.05s
Table 1: Computation Efficiency across Different Metrics
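As an illustration (a minimal sketch; the function name and use of NumPy are ours), the linear-time smoothness metric can be computed as:

```python
import numpy as np

def smoothness(x, x_adv):
    """Variance of the first differences of the perturbation delta = x_adv - x.
    Zero variance means the neighbouring differences are constant, so the
    perturbation preserves the original curve's shape."""
    delta = np.asarray(x_adv, dtype=float) - np.asarray(x, dtype=float)
    return float(np.var(np.diff(delta)))
```

A ramp-like perturbation scores 0, while a spiky one of comparable amplitude scores high, matching the perceptual intuition above.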

4.2 Type II Attack Strategy

4.2.1 Problem Formulation

Given the same $m$-class classifier as above, in the Type II attack we explicitly consider the filtering process in the attack scheme. Filtering is a standard process in ECG devices to combat noise before data analysis, including baseline wander (below 0.05 Hz) and power-line noise (50 or 60 Hz) [23]. To generate filtering-resistant perturbations, we constrain the power of the perturbation within those filtered frequency bands during the optimization procedure. We also consider possible skewing, to generate perturbations that are effective on the on-the-fly ECGs, since it is hard for the attacker to know the exact time at which the device begins measuring. Inspired by Expectation Over Transformation (EOT) [4], we regard such uncertainty as a shifting transformation of the original measurement and explicitly consider this transformation within the optimization procedure.

Formally, given a distance function $d(\cdot, \cdot)$ and a chosen distribution $T$ of transformation functions $\tau$, we have the following optimization problem:

    $\arg\max_{\delta}\ \mathbb{E}_{\tau \sim T}\big[\log P\big(t \mid x + \tau(h(\delta))\big)\big]$

where $\delta$ is the added perturbation and $h(\cdot)$ is a rectangular filter. Specifically, we transform $\delta$ from the time domain to the frequency domain via the Fast Fourier Transform, utilize a mask to zero the power of the frequency bins below 0.05 Hz and at 50/60 Hz, and finally transform it back to the time domain via the inverse Fast Fourier Transform. Besides, we add a constraint $d(x, x + \delta) \le \epsilon$, where $\epsilon$ is large enough that $\delta$ has a high probability of successful attack under most shifting transformations. Since ECG signals of the same class share common patterns, a sufficiently large $\epsilon$ can also implicitly enable the universality of an adversarial sample, i.e., a perturbation effective on other unseen samples of the same class, while still forcing the adversarial examples to be within a certain distance of the original.
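A sketch of the rectangular filter described above (we call it `rectangular_filter`; the 300 Hz sampling rate matches the dataset, while the notch bandwidth is our own choice):

```python
import numpy as np

def rectangular_filter(delta, fs=300.0, low=0.05, notches=(50.0, 60.0), notch_bw=1.0):
    """Zero the perturbation's power below `low` Hz and around the power-line
    frequencies, so a device-side filter cannot strip the perturbation."""
    spectrum = np.fft.rfft(delta)
    freqs = np.fft.rfftfreq(len(delta), d=1.0 / fs)
    mask = freqs >= low                            # drop the baseline-wander band
    for f in notches:
        mask &= np.abs(freqs - f) > notch_bw / 2   # drop power-line bins
    return np.fft.irfft(spectrum * mask, n=len(delta))
```

Because this operation is linear and differentiable, it can sit inside the optimization loop and gradients flow through it.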

4.2.2 Perturbation Window Size

For adversarial attacks, it is better that the perturbation attracts minimal attention from the victim. Thus, we introduce the length $w$ of the perturbation as a parameter, which can be set by the adversary and fixed during perturbation generation. $w$ gives the adversary flexibility to control the added perturbation. The intuition is that the smaller $w$ is, the shorter the attack duration, i.e., the time during which the attacker injects the signal; obviously, the less time the attacker stays active at the crime scene, the less chance of being noticed by the victim. Conversely, the larger $w$ is, the higher the probability that the generated perturbation also affects other unseen samples of the same class (i.e., universality).
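The windowed placement under sampling-point uncertainty can be mimicked as follows (a sketch; the names are ours):

```python
import numpy as np

def place_perturbation(x, delta_w, offset):
    """Embed a length-w perturbation into a zero vector and circularly
    shift it by `offset`, modeling the unknown sampling start point."""
    full = np.zeros(len(x))
    full[:len(delta_w)] = delta_w
    return x + np.roll(full, offset)
```

During evaluation, `offset` is drawn at random, so a robust perturbation must succeed for most placements.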

5 Experimental Results

In this section, we evaluate our attacks under the two threat models respectively.

5.1 Evaluation for Type I Attack

5.1.1 Experiment Setup

We implement our attack strategy for the Type I Attack under the CleverHans framework [27]. We adopt the Adam optimizer [18] to search for adversarial examples. We compare the performance of three similarity metrics for adversarial example generation: (i) $d_{L_2}$, (ii) $d_{smooth}$ (Equation 4), and (iii) a combination of both, $d_{smooth,L_2}$.

All metrics are evaluated under the same optimization scheme with the same hyper-parameters. As noted before, we only attack ECGs that are originally correctly classified by the model without attack. The profile of the attack dataset is shown in Table 2, where "A, N, O, ~" denote atrial fibrillation (AF), normal rhythm, other rhythm and noise respectively. The sampling rate of the ECGs is 300 Hz, i.e., a 30 s ECG has length 9000.

Type                      Number   Time length (s)
                                   mean     std
Normal rhythm (N)         3886     32.85    9.70
Atrial fibrillation (A)   447      32.25    11.98
Other rhythm (O)          1488     35.46    11.56
Noisy signal (~)          260      24.02    10.42
Table 2: Data profile for the attack dataset
Source\Target   $d_{L_2}$                      $d_{smooth}$                   $d_{smooth,L_2}$
         A       N       O      ~       A      N      O      ~       A      N      O      ~
A        /     97.22%  100%   100%      /    100%   100%   100%      /    100%   100%   100%
N      100%      /     100%   100%    100%     /    100%   100%    100%     /    100%   100%
O      99.44%  95.0%    /     100%    99.72% 100%    /     100%    100%   100%    /     100%
~      100%   99.55%  100%     /      100%   100%   100%    /      100%   100%   100%    /
Table 3: Success rates of targeted attacks (Type I Attack)

5.1.2 Success Rate of Targeted Attacks

We select the first 360 segments each of classes N, A and O, and the first 220 segments of class ~, from the attack dataset to evaluate the success rate of the targeted attacks. For each ECG segment, we conduct three targeted attacks, one to each of the other classes; thus, we have 12 source-target pairs given 4 classes. The attack results are shown in Table 3.

With all three similarity metrics, the generated adversarial instances achieve high attack success rates. $d_{L_2}$ fails in a few instances of some source-target pairs, such as "O→A", "A→N", "O→N" and "~→N"; $d_{smooth}$ achieves almost a 100% success rate; and $d_{smooth,L_2}$ achieves a 100% success rate. A sample of the generated adversarial ECG signals is shown in Fig. 4. Due to limited space, we only show a case where an original atrial fibrillation ECG (A) is misclassified as a normal rhythm (N). Note that the one generated with the $d_{L_2}$ metric looks more suspicious due to many small spikes, while the $d_{smooth}$ one preserves a pattern more similar to the original; the $d_{smooth,L_2}$ one falls in between, as expected.

5.1.3 Human Perceptual Study

We conduct an extensive human perceptual study on both ordinary people and cardiologists to evaluate the imperceptibility of adversarial ECGs.

Ordinary human participants without medical expertise were recruited from Amazon Mechanical Turk (AMT). Thus, they are only required to compare the adversarial examples generated using different similarity metrics and choose the one closer to the original ECG. For each similarity metric, we generate 600 adversarial examples (50 examples per source-target pair). In the study, a participant observes an original example and two adversarial versions generated using two different similarity metrics, and chooses the adversarial example closer to the original. The perceptual study comprises three parts: (i) $d_{L_2}$ versus $d_{smooth}$, (ii) $d_{L_2}$ versus $d_{smooth,L_2}$, and (iii) $d_{smooth}$ versus $d_{smooth,L_2}$. To avoid labeling bias, we allow each user to conduct at most 60 trials per part. For each tuple of an original example and its two adversarial examples, we collect 5 annotations from different participants. In total, we collected 9000 annotations from 57 AMT users. The study results are shown in Table 4, where "triumphs" denotes that the metric got 4 or 5 votes out of 5 annotations, and "wins" denotes that the metric got 3 votes out of 5, i.e., a narrow victory.

(i)     $d_{smooth}$ wins (%)           $d_{L_2}$ wins (%)
        triumphs  wins   total          triumphs  wins   total
        58.67     22.67  81.34          10        8.66   18.66
(ii)    $d_{smooth,L_2}$ wins (%)       $d_{L_2}$ wins (%)
        triumphs  wins   total          triumphs  wins   total
        65.5      18.5   84             7.83      8.17   16
(iii)   $d_{smooth}$ wins (%)           $d_{smooth,L_2}$ wins (%)
        triumphs  wins   total          triumphs  wins   total
        31.83     27.83  59.67          15.83     24.5   40.33
Table 4: Human perceptual study (AMT participants)

Compared with the $d_{L_2}$-generated examples, the $d_{smooth}$-generated ones are voted closer to the original in 81.34% of the trials. When comparing $d_{L_2}$ and $d_{smooth,L_2}$, the latter is voted the winner in 84% of the trials. This indicates that the smoothness metric encourages the generated adversarial examples to preserve patterns similar to the originals, so they are more likely to be imperceptible. When comparing $d_{smooth}$ and $d_{smooth,L_2}$, $d_{smooth}$ gets somewhat more votes (59.67%), which further validates that the smoothness metric better quantifies human similarity perception on line curves than the $L_p$ norm.

Besides the participants on AMT, we also invited three cardiologists to evaluate whether the added perturbations arouse their suspicion. The cardiologists were asked to classify given ECGs and their adversarial counterparts into the 4 classes (A, N, O, ~) based on their medical expertise. We focus on the cases "N→A", "N→O", "A→N" and "O→N", which misclassify a normal rhythm as an arrhythmia or vice versa. For these 4 source-target pairs, we randomly select 6 type-N, 3 type-A and 3 type-O segments, then generate adversarial examples with the different similarity metrics. Thus, we have 48 samples (original and adversarial) and shuffle them randomly. For every sample, we collect annotations from all three cardiologists. The results are shown in Table 5.

Idx   Original   $d_{L_2}$   $d_{smooth}$   $d_{smooth,L_2}$
1     100%       100%        100%           100%
2     91.7%      100%        100%           100%
3     100%       100%        100%           100%
Table 5: Human perceptual study (cardiologists)

Each row refers to one cardiologist. The first column denotes the percentage of the cardiologist's annotations that match the labels in the PhyDB dataset; only one cardiologist annotated a type-A instance as type O. The last three columns show the percentage of adversarial examples annotated as the same type as their original counterparts. The results show that, in all cases, the cardiologists gave adversarial examples the same annotations as their original counterparts. A likely reason is that most perturbations occur on the wave valleys, whereas the cardiologists annotate based on the peak-to-peak intervals; they regarded the subtle perturbations as possibly caused by instrument noise. That the adversarial signals are correctly classified by cardiologists but wrongly classified by the classifier shows that our attacks fool the classifier, disabling its diagnosis-assistance function without arousing human suspicion.

Figure 4: A sample of generated adversarial ECG signal.
Figure 5: Success attack rates with different sized windows.

5.2 Evaluation for Type II Attack

5.2.1 Success Rate of Targeted Attacks

We implement our attack strategy for the Type II attack under the CleverHans framework [27]. We maximize the objective function using the Adam optimizer [18], and approximate the gradient of the expected value by independently sampling transformations at each gradient descent step. For each of the 12 source-target pairs, we randomly choose 10 samples to generate adversarial perturbations by applying the attack strategy in Section 4.2; we generate one perturbation per sample, yielding 10 perturbations. We then apply these perturbations to 100 randomly-chosen samples of the corresponding source class to see whether the adversarial examples mislead the classifier universally. Before adding a perturbation to a target sample, we apply a filter to the perturbation to test filtering-resistance. The filter has two choices: Filter 1 is the rectangular filter that removes signal components below 0.05 Hz and at 50/60 Hz; Filter 2 is the combination of two common filters used in ECG signal processing, a high-pass Butterworth filter with a 0.05 Hz cutoff frequency and notch filters for 50/60 Hz power-line noise. To mimic the sampling-point uncertainty of the on-the-fly signals, we generate the perturbation at full length, i.e., $w = 9000$, then randomly shift the perturbations and add them to the original signals; each perturbation is randomly shifted 200 times for testing. The average success rates are shown in Table 6, where the row is the source class and the column is the target class; in each cell, the top success rate is for Filter 1 and the bottom for Filter 2. Our attack strategy achieves high success rates (99.64% on average), indicating that the generated perturbations are filtering-resistant, skewing-resistant and universal.

Table 6: Success rates of targeted attacks (Type II Attack)
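The shift-robustness evaluation described above can be sketched as follows (a minimal sketch; `classify` stands in for the victim model, and the names are ours):

```python
import numpy as np

def shifted_success_rate(classify, x, delta, target, trials=200, rng=None):
    """Empirical success rate of a perturbation under random circular shifts,
    mimicking the unknown sampling point of on-the-fly ECGs."""
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(trials):
        shift = int(rng.integers(len(delta)))
        if classify(x + np.roll(delta, shift)) == target:
            hits += 1
    return hits / trials
```

The same harness, with Filter 1 or Filter 2 applied to `delta` beforehand, reproduces the filtering-resistance comparison.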

5.2.2 Impact of Window Size

In this section, we evaluate the attack success rates with different window sizes $w$. As mentioned before, the smaller the window size, the lower the chance that the attacker is noticed. In this evaluation, we generate perturbations with window sizes 9000, 7500, 6000, 4500, 3000 and 1500. For each window size, we generate adversarial examples under the same conditions as in the previous section (10 random samples for each of the 12 source-target pairs), then apply the filters, shift each perturbation randomly and add it to other samples from the source class. The results are shown in Figure 5; the legend gives the target class under the different filters. In most cases, the success rate drops considerably as the window size decreases. However, it decreases slowly, or even remains almost unchanged, in the cases "A→O", "N→O" and "~→O", all of which target class O. This is mainly because class O (abnormal arrhythmias other than atrial fibrillation) covers an expansive input space, so it is easier to misclassify samples of other classes as class O. Besides, we find that, apart from class O, the success rate decreases more slowly when the target class is A. A possible reason is an inherent property of class A: if a certain part of an ECG signal is regarded as atrial fibrillation, the whole ECG segment is classified as class A. The success rates under the two filters are quite similar, which shows the filtering-resistance of our generated perturbations.

6 Conclusion

This paper proposes ECGadv to generate adversarial ECG examples that misguide arrhythmia classification systems. Existing attacks in the image domain are not directly applicable due to the distinct visualization and dynamic properties of ECGs. We analyze the properties of ECGs to design effective attack schemes under two attack models respectively. Our results demonstrate the blind spots of DNN-powered diagnosis systems under adversarial attacks and call attention to adequate countermeasures.


  • [1] M. M. Al Rahhal, Y. Bazi, H. AlHichri, N. Alajlan, F. Melgani, and R. R. Yager (2016) Deep learning approach for active classification of electrocardiogram signals. Information Sciences 345, pp. 340–354. Cited by: §1, §3.2.
  • [2] F. Andreotti, O. Carr, M. A. Pimentel, A. Mahdi, and M. De Vos (2017) Comparing feature-based classifiers and convolutional neural networks to detect arrhythmia from short segments of ECG. Computing 44, pp. 1. Cited by: §2.1, §2.1.
  • [3] A. Athalye, N. Carlini, and D. Wagner (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274–283. Cited by: §3.1.
  • [4] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok (2018) Synthesizing robust adversarial examples. In International Conference on Machine Learning, pp. 284–293. Cited by: §3.1, §4.2.1.
  • [5] A. Y. Hannun, P. Rajpurkar, M. Haghpanahi, G. H. Tison, C. Bourn, M. P. Turakhia, and A. Y. Ng (2019) Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature Medicine 25, pp. 65–69. Cited by: §1, §2.1, §3.2.
  • [6] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer (2017) Adversarial patch. arXiv preprint arXiv:1712.09665. Cited by: §3.1.
  • [7] N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. Cited by: §1, §3.1, §4.1.1.
  • [8] S. Chen, C. Cornelius, J. Martin, and D. H. Chau (2018) Robust physical adversarial attack on Faster R-CNN object detector. arXiv preprint arXiv:1804.05810. Cited by: §3.1.
  • [9] G. D. Clifford, C. Liu, B. Moody, L. H. Lehman, I. Silva, Q. Li, A. Johnson, and R. G. Mark (2017) AF classification from a short single lead ECG recording: the PhysioNet Computing in Cardiology Challenge 2017. Proceedings of Computing in Cardiology 44, pp. 1. Cited by: §2.1.
  • [10] M. Cuturi and M. Blondel (2017) Soft-DTW: a differentiable loss function for time-series. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 894–903. Cited by: §4.1.2.
  • [11] P. Eichmann and E. Zgraggen (2015) Evaluating subjective accuracy in time series pattern-matching using human-annotated rankings. In Proceedings of the 20th International Conference on Intelligent User Interfaces, pp. 28–37. Cited by: §3.1, §4.1.2.
  • [12] S. G. Finlayson, H. W. Chung, I. S. Kohane, and A. L. Beam (2018) Adversarial attacks against medical deep learning systems. arXiv preprint arXiv:1804.05296. Cited by: §1.
  • [13] A. Gogolou, T. Tsandilas, T. Palpanas, and A. Bezerianos (2018) Comparing similarity perception in time series visualizations. IEEE Transactions on Visualization and Computer Graphics. Cited by: §3.1, §4.1.2.
  • [14] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §1, §3.1.
  • [15] X. Han, Y. Hu, L. Foschini, L. Jankelson, and R. Ranganath (2019) Adversarial examples for electrocardiograms. Cited by: §3.1.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §2.1.
  • [17] IEEE-Spectrum (2018) Artificial intelligence is challenging doctors. Note: Cited by: §1.
  • [18] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.1.1, §5.2.1.
  • [19] S. Kiranyaz, T. Ince, and M. Gabbouj (2016) Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Transactions on Biomedical Engineering 63 (3), pp. 664–675. Cited by: §1, §3.2.
  • [20] D. F. Kune, J. Backes, S. S. Clark, D. Kramer, M. Reynolds, K. Fu, Y. Kim, and W. Xu (2013) Ghost talk: mitigating emi signal injection attacks against analog sensors. In Security and Privacy (SP), 2013 IEEE Symposium on, pp. 145–159. Cited by: §1, §2.2.2, §3.2.
  • [21] A. Kurakin, I. J. Goodfellow, and S. Bengio (2018) Adversarial examples in the physical world. In Artificial Intelligence Safety and Security, pp. 99–112. Cited by: §3.1, §3.1.
  • [22] Y. Liu, X. Chen, C. Liu, and D. Song (2016) Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770. Cited by: §3.1.
  • [23] S. Luo and P. Johnston (2010) A review of electrocardiogram filtering. Journal of electrocardiology 43 (6), pp. 486–496. Cited by: §4.2.1.
  • [24] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard (2017) Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765–1773. Cited by: §3.1.
  • [25] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582. Cited by: §3.1.
  • [26] I. Odinaka, P. Lai, A. D. Kaplan, J. A. O’Sullivan, E. J. Sirevaag, and J. W. Rohrbaugh (2012) ECG biometric recognition: a comparative analysis. IEEE Transactions on Information Forensics and Security 7 (6), pp. 1812–1824. Cited by: §2.2.1.
  • [27] N. Papernot, F. Faghri, N. Carlini, I. Goodfellow, R. Feinman, A. Kurakin, C. Xie, Y. Sharma, T. Brown, A. Roy, A. Matyasko, V. Behzadan, K. Hambardzumyan, Z. Zhang, Y. Juang, Z. Li, R. Sheatsley, A. Garg, J. Uesato, W. Gierke, Y. Dong, D. Berthelot, P. Hendricks, J. Rauber, and R. Long (2018) Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768. Cited by: §5.1.1, §5.2.1.
  • [28] N. Papernot, P. McDaniel, and I. Goodfellow (2016) Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277. Cited by: §3.1.
  • [29] P. Rajpurkar, A. Y. Hannun, M. Haghpanahi, C. Bourn, and A. Y. Ng (2017) Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv preprint arXiv:1707.01836. Cited by: §2.1, §2.1.
  • [30] Y. Song, R. Shu, N. Kushman, and S. Ermon (2018) Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems, pp. 8312–8323. Cited by: §3.1.
  • [31] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1, §3.1, §4.1.1.
  • [32] World Health Organization (2018) Cardiovascular disease is the leading global killer.. Note: Cited by: §1.
  • [33] C. Xiao, B. Li, J. Y. Zhu, W. He, M. Liu, and D. Song (2018) Generating adversarial examples with adversarial networks. In 27th International Joint Conference on Artificial Intelligence, IJCAI 2018, pp. 3905–3911. Cited by: §3.1.