ECGNET: Learning Where to Attend for Detection of Atrial Fibrillation with Deep Visual Attention



The complexity of the patterns associated with Atrial Fibrillation (AF) and the high level of noise affecting these patterns have significantly limited the ability of current signal processing and shallow machine learning approaches to achieve accurate AF detection. Deep neural networks have been shown to be very powerful in learning the non-linear patterns in the data. While deep learning approaches attempt to learn the complex patterns related to the presence of AF in the ECG, they can benefit from knowing which parts of the signal are more important to focus on during learning. In this paper, we introduce a two-channel deep neural network to more accurately detect AF present in the ECG signal. The first channel takes in a preprocessed ECG signal and automatically learns where to attend for detection of AF. The second channel simultaneously takes in the preprocessed ECG signal to consider all features of the entire signal. Through visualization, the model shows which parts of a given ECG signal are important to attend to while trying to detect atrial fibrillation. In addition, this combination significantly improves the performance of atrial fibrillation detection (achieving a sensitivity of 99.53%, a specificity of 99.26% and an accuracy of 99.40% on the MIT-BIH atrial fibrillation database with 5-s ECG segments).


Sajad Mousavi, Fatemeh Afghah
This material is based upon work supported by the National Science Foundation under Grant Number 1657260. Research reported in this publication was supported by the National Institute On Minority Health And Health Disparities of the National Institutes of Health under Award Number U54MD012388.
School of Informatics, Computing and Cyber Systems, Northern Arizona University, USA

Index Terms—  Atrial fibrillation (AF), deep learning, visual attention, ECG analysis.

1 Introduction

Atrial fibrillation (AF) is the most prevalent type of arrhythmia leading to hospital admissions [1], and is currently affecting the lives of more than 3 million people in the U.S. and over 33 million worldwide, while the number of AF patients in the US is expected to double by 2050 [2]. Its incidence is associated with an increased risk of stroke, congestive heart failure and overall mortality [3, 4]. Interpretation of ECG signals by cardiologists and medical practitioners is usually a time-consuming task and prone to errors. Moreover, the complexity of the patterns associated with AF and the high level of noise affecting these collected signals have significantly limited the accuracy and reliability of monitoring systems designed for AF detection. Also, while the majority of these methods have shown effective performance on one set of data, they may fail on other tests, discouraging the community from using any of these methods in clinical settings [5, 6, 7].

Therefore, it is desirable to develop algorithms for automatic detection of AF with high diagnostic accuracy and reliability. Several algorithms have been introduced to automatically detect the presence of AF based on ECG signal characteristics. Most of them depend on the detection of P-waves and R-peaks. As a result, the performance degrades significantly if an algorithm fails to detect the relevant peaks or waves because of the presence of noise in the ECG. Although some studies [8, 9] eliminate the detection of P-waves and R-peaks in their methodologies, they still need to extract hand-crafted features that might not be fully representative if the dataset changes in terms of size or the presence of other arrhythmias.

Deep learning (DL) can model high-level abstractions in data using deep networks of supervised and/or unsupervised learning algorithms, in order to learn from multiple levels of abstraction. It learns a very complex function that represents a map between inputs and targets. Over the past years, deep learning-based methods have been used in ECG analysis and classification. However, their performance has not been as significant as expected. Thus, developing new deep learning architectures that match specific medical problems and can capture the specific characteristics of the ECG signal is still a challenge.

Motivated by the aforementioned limitations, we propose an end-to-end deep visual network for automatic detection of AF called ECGNET. The model is a two-channel deep neural network to more accurately detect AF present in the ECG signal. The first channel takes in a preprocessed ECG signal and automatically learns where to attend for detection of AF. The second channel simultaneously takes in the preprocessed ECG signal to consider all features of the entire signal. In this way, the model gives more weight to the parts of the ECG signal with higher potential relevance to AF, and at the same time considers the whole cycle (i.e., the beat) to extract other consecutive dependencies between each wave (i.e., P-, QRS-, T-waves, etc.). Moreover, the proposed approach visualizes the parts of a given ECG signal that are more important to attend to while trying to detect atrial fibrillation. It is also worth mentioning that, unlike the majority of current AF detection techniques, our proposed method is capable of detecting AF in very short ECG recordings (e.g., around 5 s in duration).

To the best of our knowledge, this is the first study that uses the whole information provided by the input and visual attention at the same time for the purpose of AF detection. Recently, Shashikumar et al. [10] reported an attention mechanism to detect AF, where they considered a deep recurrent neural network on 30-second ECG windows as inputs and took advantage of some time-series covariates. The key contribution of our method is to develop an end-to-end two-channel deep network that automatically extracts features both from the focused parts of the signal, with the capability of attending to each part of a cycle (i.e., P-, QRS-, T-waves, etc.) rather than each windowed segment, and from the abstracted features of the whole segment, using just 5-s ECG segments. It is worth mentioning that our method does not feed any hand-crafted features to the network, as is done in [10]. We also visualize which regions of the signal are important when there is an underlying AF arrhythmia in the signal. Therefore, the proposed method can potentially assist physicians in AF detection and can also be utilized to recognize complex patterns in the signals related to other arrhythmias that cannot be easily seen in the signals.

The rest of this paper is organized as follows. Section 2 introduces the data preparation approach and the database used in this study. Section 3 gives a detailed description of the proposed method. Section 4 describes the experimental setup and presents the visualization and results. Section 5 discusses the obtained results and a performance comparison to the state-of-the-art algorithms, followed by the conclusions.

Fig. 1: The network architecture of the AF detection method. The top channel gets a sequence of split ECG signal windows (i.e., window = 128 and stride = 30) and the bottom channel gets the wavelet power spectrum of the sequence. Then, the average of the two sections is computed and fed into a softmax layer.
Fig. 2: The network architecture of attention model.

2 Dataset and Data Preparation

The proposed method has been evaluated using several PhysioNet databases, including the MIT-BIH Atrial Fibrillation Database (AFDB), the Long-Term Atrial Fibrillation Database (LTAFDB) and the Normal Sinus Rhythm Database (NSRDB) [11]. The AF database and the long-term AF database are comprised of 25 and 84 long-term ECG recordings of human subjects, mostly with atrial fibrillation, respectively. The NSR database consists of 18 long-term ECG recordings of subjects, both men and women (20 to 50 years old), who had no significant arrhythmias. The AF database includes two 10-hour-long ECG recordings for each individual. The signals are sampled at 250 Hz with 12-bit resolution over a range of millivolts. Individual recordings in the long-term AF database are typically 24 to 25 hours long, and sampled at 128 Hz with 12-bit resolution over a range of millivolts. We followed two strategies to extract the AF and non-AF excerpts of the available ECG signals. First, we extracted 30-s excerpts from the mentioned databases. To select the excerpts with an AF tag, we considered rhythm annotations of type (AFIB (atrial fibrillation) of each AF episode. Similarly, we used rhythm annotations of type (N (normal sinus rhythm) as a guide to select non-AF excerpts. Then, to prepare inputs for the proposed model, each ECG signal (i.e., each excerpt) of each patient was divided into a sequence of windows with lengths of 128 and 200 for the AFDB-NSRDB and LTAFDB databases, respectively, and an overlap of roughly 25%. Second, each ECG signal of the AFDB was divided into 5-s segments, and each segment was labeled based on a threshold parameter: when the percentage of annotated AF beats in a 5-s segment is greater than or equal to the threshold, we considered it AF; otherwise, non-AF. Similar to previously reported studies [12, 8], we selected the same threshold value. It is worth noting that no noise removal has been applied to the ECG signals.
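The two preparation strategies above can be sketched in Python as follows. The window length and ~25% overlap follow the text (giving a hop of 96 samples for AFDB-NSRDB); the AF-beat threshold value is an assumption, since the paper's parameter value is not reproduced in this copy.

```python
import numpy as np

def extract_windows(ecg, window=128, overlap=0.25):
    """Split a 1-D ECG excerpt into overlapping windows.

    window=128 matches the AFDB-NSRDB setting in the text; with ~25%
    overlap the hop between consecutive windows is 96 samples.
    """
    hop = int(window * (1 - overlap))
    n = (len(ecg) - window) // hop + 1
    return np.stack([ecg[i * hop : i * hop + window] for i in range(n)])

def label_segment(beat_is_af, threshold=0.5):
    """Label a 5-s segment AF if the fraction of AF-annotated beats
    meets the threshold (the 0.5 default is an assumption)."""
    return "AF" if np.mean(beat_is_af) >= threshold else "non-AF"
```

For example, a 1000-sample excerpt yields ten 128-sample windows, and a segment in which 3 of 4 beats are annotated AF is labeled "AF".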

3 Proposed approach

3.1 Model Description

An overview of the proposed model for AF detection is depicted in Fig. 1. The model architecture is a two-channel deep neural network. The top channel takes the raw windowed signal as input and includes an attention strategy to emphasize the important, task-relevant visual features of the given signal. This section of the architecture is called the Attention Network. We divided the given ECG signal into several windows of size 128 with an overlap of 25%. The bottom channel is a deep recurrent convolutional network that takes the wavelet power spectrum of the windowed ECG signal. The output of the network is a vector of decimal probabilities over the classes. A more detailed explanation of each section of the network is provided below.

Attention Network

Overall, there are two types of attention models: soft attention and hard attention. Soft attention models are end-to-end, differentiable deterministic mechanisms that can be learned by gradient-based methods. In contrast, hard attention models are stochastic processes and are not differentiable; they can be trained using the REINFORCE algorithm [13] in the reinforcement learning framework [14]. In this paper, a soft attention mechanism is used because backpropagation seems to be more effective [15, 16]. Figure 2 depicts a schematic diagram of the Attention Network. It includes three main parts, as follows:

Convolutional neural network (CNN): The CNN consists of two consecutive one-dimensional convolutional layers, each followed by a Rectified Linear Unit (ReLU) nonlinearity. They have 32 and 64 filters, respectively, each with stride 1. Figure 3 depicts the detailed architecture. Sequences of windowed ECG signals are fed into the CNN for feature extraction. At each time step t, a windowed frame is fed into the network, and the last convolutional layer of the 1-dimensional CNN part outputs feature maps, which are then converted to a set of L vectors, each of dimension D.
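The feature-extraction step can be sketched in NumPy as follows. The filter width is an assumption (the paper's exact filter sizes are not reproduced here), and the second layer is simplified to a per-position channel projection rather than a full multi-channel convolution; the point is only to show how a windowed frame becomes L vertical feature slices of dimension D.

```python
import numpy as np

def conv1d_relu(x, kernels, stride=1):
    """Naive valid-mode 1-D convolution over a single-channel signal.

    x: (length,) input frame; kernels: (n_filters, k).
    Returns ReLU-activated feature maps of shape (out_len, n_filters).
    """
    n_filters, k = kernels.shape
    out_len = (len(x) - k) // stride + 1
    out = np.empty((out_len, n_filters))
    for i in range(out_len):
        out[i] = kernels @ x[i * stride : i * stride + k]
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
frame = rng.standard_normal(128)                 # one windowed ECG frame
k1 = rng.standard_normal((32, 5)) * 0.1          # 32 filters (width assumed)
h = conv1d_relu(frame, k1)                       # (124, 32) feature maps
# Toy stand-in for the 64-filter second layer: a per-position projection.
slices = np.maximum(h @ (rng.standard_normal((32, 64)) * 0.1), 0.0)
# slices: L = 124 vertical feature slices, each of dimension D = 64
```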

Fig. 3: A diagram of the convolutional layers used in the proposed model. The CNN part of the model takes the windowed ECG signal as input (i.e., a sequence of frames) and computes L vertical feature slices, each with dimension D.

Attention layer (i.e., a soft attention mechanism): The extracted features of the CNN part are sequentially passed to the attention layer to compute probabilities corresponding to the importance of each part of the windowed frame (e.g., P-, QRS- and T-waves, etc.). In other words, the input window is divided into L regions and the attention mechanism attempts to attend to the regions most relevant to AF. Figure 4 shows the structure of the attention mechanism. The attention layer gets two separate inputs: 1) a set of vectors v_1, ..., v_L, where each v_i is a representation of a different region of the input window frame, and 2) a hidden state h_{t-1}, which is the internal state of the LSTM at the previous time step. Then, it computes a vector z_t, which is a linear weighted combination of the values of v_i. Therefore, the attention mechanism can be formulated as follows:

e_{t,i} = w^T tanh(W_v v_i + W_h h_{t-1}),
alpha_{t,i} = exp(e_{t,i}) / sum_{j=1}^{L} exp(e_{t,j}),
z_t = sum_{i=1}^{L} alpha_{t,i} v_i,

where alpha_{t,i} is the importance of the i-th region of the input window frame. At each time step t, the attention module calculates e_{t,i}, a composition of the values of v_i and h_{t-1} followed by a tanh layer. This is then passed to a softmax layer to compute alpha_{t,i} over the L regions. Indeed, each alpha_{t,i} is considered as the amount of importance of the corresponding vector v_i among the L vectors in the input window. Finally, the attention layer computes z_t, a weighted sum of all the v_i vectors based on the calculated alpha_{t,i}'s. Thus, the network can learn to put more emphasis on the interesting parts (e.g., P-, QRS- and T-waves, etc.) of the input window frame with higher probabilities of presence of AF in the input ECG.
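A minimal NumPy sketch of one soft-attention step in the style of Xu et al. [15] follows; the additive tanh scoring form and all weight shapes are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(V, h_prev, W_v, W_h, w):
    """One soft-attention step.

    V: (L, D) vertical feature slices; h_prev: previous LSTM hidden state.
    Returns the context vector z (weighted sum of slices) and the
    region-importance weights alpha.
    """
    e = np.tanh(V @ W_v + h_prev @ W_h) @ w   # (L,) unnormalized scores
    alpha = softmax(e)                        # importance of each region
    z = alpha @ V                             # (D,) expected context vector
    return z, alpha

rng = np.random.default_rng(1)
L, D, H, A = 124, 64, 64, 32                  # sizes chosen for illustration
V = rng.standard_normal((L, D))
h_prev = rng.standard_normal(H)
z, alpha = attend(V, h_prev,
                  rng.standard_normal((D, A)),
                  rng.standard_normal((H, A)),
                  rng.standard_normal(A))
```

The weights alpha sum to one, so z is a convex combination of the feature slices, with the most AF-relevant regions weighted most heavily.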

Fig. 4: The structure of the attention mechanism used in the proposed model. At each time step t, the attention module utilizes the feature slices v_i and the previous hidden state of the RNN part, h_{t-1}, to calculate an expected value z_t with respect to the vertical feature slices, together with the importance alpha_{t,i} of each region of the input window frame.

Recurrent neural network (i.e., Long Short-Term Memory (LSTM) units): The attention layer is followed by LSTM units (a stack of two LSTM layers, each of size 64) for long-term learning to capture temporal dependencies between the windows of each input signal. The RNN part of the network utilizes the previous hidden state h_{t-1} and the output of the attention module z_t to calculate the next hidden state h_t. The hidden state h_t is used as the input of the attention module to calculate the context value at the next time step. In addition, it is utilized as the input of a fully-connected linear layer with 256 neurons.

Deep Recurrent Convolutional Neural Network (RCNN)

The first layer consists of 1-D convolution filters with stride 1, followed by a Rectified Linear Unit (ReLU) nonlinearity. The second layer is comprised of 16 1-D convolution filters with stride 1, again followed by a rectifier nonlinearity. The third layer is an RNN layer with LSTM units, followed by a fully connected layer with 256 hidden units. Here, the spectrogram size is 300 x 270. It can be considered as a sequence of 300 column vectors, each consisting of 270 values. For the purpose of feature extraction, we feed these sequences to the first 1-D convolutional layer of the deep RCNN.

Similar to other deep learning-based AF detectors [12, 17], the deep neural network part of our model takes a 2-D representation, the wavelet power spectrum of the ECG segment. However, those methods employ 2-D convolution operators on the entire input, while our method applies 1-D convolution operators to each frequency vector (i.e., at each time step) of the spectrograms obtained from each segment, and feeds the output of the 1-D convolutional layers to long short-term memory units to capture dependencies between the frequency vectors. Indeed, we consider the temporal patterns that might be present in an AF arrhythmia. In other words, a CNN with two-dimensional filters shares weights across both dimensions and assumes the extracted features have the same meaning regardless of their locations. However, in spectrograms, the two dimensions represent frequency strength and time, and are completely different. Under a 2-D convolution operator, frequency shifts of a signal (in a spectrogram representation) can change its spatial extent. Hence, the ability of 2-D CNNs to learn spatially invariant features might not suit spectrograms well [18]. This is the main reason we included 1-D CNNs followed by LSTM units instead of 2-D CNNs. Moreover, using 1-D CNNs in the network results in a lower number of parameters and, consequently, reduced complexity.
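The design choice above, applying shared 1-D filters along the frequency axis of each spectrogram column and treating the result as a 300-step sequence for the LSTM, can be sketched as follows. The spectrogram dimensions follow the text; the kernel width and random filter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Spectrogram as a sequence of 300 frequency vectors of length 270
spec = np.abs(rng.standard_normal((300, 270)))   # (time, frequency)

def conv_freq(col, kernels):
    """Apply shared 1-D filters along the frequency axis of one column.

    col: (270,) frequency vector; kernels: (n_filters, k).
    Returns ReLU-activated maps of shape (270 - k + 1, n_filters).
    """
    out = np.stack([np.convolve(col, kern[::-1], mode="valid")
                    for kern in kernels], axis=1)
    return np.maximum(out, 0.0)

kernels = rng.standard_normal((16, 5)) * 0.1     # 16 filters (width assumed)
features = np.stack([conv_freq(col, kernels) for col in spec])
# features: (300, 266, 16) -> a 300-step sequence for the LSTM layer
```

Because the same kernels are reused at every time step, the parameter count depends only on the filter bank, not on the spectrogram length, which is the complexity advantage the text describes.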

Finally, the outputs of the attention and RCNN sections are averaged and fed into a softmax layer. Then, the softmax assigns decimal probabilities to each class of interest (i.e., AF and non-AF).
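The fusion step above can be sketched directly: average the two channels' class scores, then apply a softmax over {AF, non-AF}. A toy NumPy example with made-up scores:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(attention_scores, rcnn_scores):
    """Average the two channels' outputs and map the result to decimal
    class probabilities, as the architecture describes."""
    return softmax((attention_scores + rcnn_scores) / 2.0)

# Toy per-class scores for [AF, non-AF]; both channels favor AF here.
p = fuse(np.array([2.0, -1.0]), np.array([1.0, 0.0]))
```

With these inputs the averaged evidence favors the AF class, so p[0] > p[1].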

Fig. 5: Visualization of the attention network's result on an ECG sample with AF arrhythmia. The white circles depict the most important regions of the ECG signal to attend to. More brightness means more attention.

4 Experimental evaluation

4.1 Experimental setup

We evaluated the performance of our proposed method using different databases with various segmenting strategies. In the first scenario, we evaluated the proposed method with 30-s ECG segments which came from two sets of data: 1) the AF database was used to extract AF samples and the NSR database was utilized to extract non-AF samples. We considered a balanced set of extracted 30-s AF and non-AF segments, most of which were used to train the network while the remainder were used to test the model; and 2) the Long-Term AF database was used to generate 30-s AF and non-AF excerpts. Again, the 30-s segments were split into a training set and a test set. For the second scenario, the AF database was utilized. We randomly selected the same number of 5-s segments for each class (i.e., AF and non-AF) to remove the effect of imbalanced data samples on training the model. Similarly, the data samples were split into a training portion used to train the model and a held-out portion used to evaluate it.

The network was trained for 50 epochs, and the initial LSTM hidden and cell states were set to zero. All network weights were updated by the Momentum optimizer with mini-batches. An exponential decay function was applied to lower the learning rate as the training progressed. To evaluate the power of the proposed model for AF detection against the other algorithms, four measures (commonly used in the literature) were considered, which are defined as:


Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP), Accuracy = (TP + TN) / (TP + TN + FP + FN),

where TP (True Positive), TN (True Negative), FP (False Positive) and FN (False Negative) indicate the number of excerpts correctly labeled as AF, the number of excerpts correctly identified as non-AF, the number of excerpts incorrectly labeled as AF, and the number of excerpts that should have been identified as AF but were not, respectively. Also, the F1 measure is an average of the two values from each classification type (i.e., non-AF and AF), where each per-class value is defined as F1 = 2TP / (2TP + FP + FN) with the counts taken with respect to that class.
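These measures follow directly from the confusion counts; the Python sketch below uses toy counts for illustration (not the paper's results).

```python
def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from the confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Toy counts only: 950 AF hits, 5 missed AF excerpts, 10 false alarms.
se, sp, acc = metrics(tp=950, tn=940, fp=10, fn=5)
```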

Best Performance (%)
Method                           Database   Sensitivity   Specificity   Accuracy
Proposed method                  AFDB       99.53         99.26         99.40
Xia et al. (2018) [12]           AFDB       –             –             –
Asgari et al. (2015) [8]         AFDB       –             –             –
Lee et al. (2013) [19]           AFDB       –             –             –
Jiang et al. (2012) [20]         AFDB       –             –             –
Huang et al. (2011) [21]         AFDB       –             –             –
Babaeizadeh et al. (2009) [22]   AFDB       –             –             –
Dash et al. (2009) [23]          AFDB       –             –             –
Tateno et al. (2001) [24]        AFDB       –             –             –
Table 1: Comparison of performance of the proposed model against other algorithms on the MIT-BIH AFIB database with 5-s ECG segments (≈7 beats).

4.2 Visualization and Results

Table 1 presents a performance comparison between the proposed method and several state-of-the-art algorithms using the MIT-BIH AFIB database, where the 5-s data segments were considered. As shown in Table 1, overall, our proposed AF detector achieves better results in terms of the sensitivity, specificity and accuracy evaluation metrics compared to all methods presented in the table. To further evaluate the performance of this method, we also calculated the F1 score of the detection outcome for different databases. Table 2 summarizes the performance of the proposed model with different data segmentation approaches. We should note that the F1 scores of other algorithms in the literature were not available for comparison. In addition to considering the 5-s data segments as input to the network, we employed data segments of 30 seconds in our experiments. Table 3 reports the sensitivity, specificity and accuracy of the proposed detection models. As shown in the table, our models achieve the best performance in terms of sensitivity, specificity and accuracy on the MIT-BIH AFIB dataset, as well as strong outcomes on the Long-Term AFIB dataset.

A visualization example of the attended parts of an ECG signal with AF is illustrated in Figure 5. The white regions, shown with circles, indicate where the model learned to look while the patient had AF. We should note that the two main indicators of AF in ECG signals, as considered in the majority of previous works, are: 1) the absence of P-waves, which can be replaced by a series of low-amplitude oscillations called fibrillatory waves, and 2) the irregularly irregular rhythm (i.e., irregularity of R-R intervals) [20, 22, 21]. It is worth noting that the attention network focused on the regions where there are no P-waves, or on the fibrillatory waves in the ECG signal. Furthermore, there is some attention on the R-peaks that may reflect the irregularity of R-R intervals. As the R-R intervals of the signal were not computed here, we do not have a reference point to show whether the focused R-peaks are due to the irregularity. We also encourage interested readers to view the video clip through the link below, which visualizes the locations attended by the proposed model. Video:
As shown in Tables 1, 2 and 3, our attention-based DL model with the proposed combination architecture of the attention network and the deep recurrent convolutional network (i.e., the deep net part of the entire model) has superior performance in the detection of AF compared to the existing techniques.

Performance (%)
Method            Database     Length             F1 (non-AF)   F1 (AF)
Proposed Method   AFDB         5-s (≈7 beats)     –             –
Proposed Method   AFDB-NSRDB   30-s (≈45 beats)   –             –
Proposed Method   LTAFDB       30-s (≈37 beats)   –             –
Table 2: F1 scores of each classification type obtained by the proposed method on different PhysioNet databases.
Best Performance (%)
Method               Database     Length             Sensitivity   Specificity   Accuracy
The entire network   AFDB-NSRDB   30-s (≈45 beats)   –             –             –
Attention Network    AFDB-NSRDB   30-s (≈45 beats)   –             –             –
Deep CNN part        AFDB-NSRDB   30-s (≈45 beats)   –             –             –
The entire network   LTAFDB       30-s (≈37 beats)   –             –             –
Attention Network    LTAFDB       30-s (≈37 beats)   –             –             –
Deep RCNN part       LTAFDB       30-s (≈37 beats)   –             –             –
Table 3: Detection performances of the proposed method on different PhysioNet databases with 30-s ECG segments.

5 Discussion and Conclusion

Several algorithms have been reported in the literature to detect AF, ranging from RR-interval variability and P-wave detection to machine learning paradigms, including deep learning-based methods. In this paper, we propose an attention-based AF detection method that automatically identifies the potential AF regions in the ECG signal. Unlike the majority of the previously reported works in this area [21, 19, 22, 20], the performance of the proposed method does not rely on the accuracy of hand-crafted algorithms for detecting P-waves and R-R intervals. In contrast, the attention network in our proposed model automatically focuses on the regions of each heartbeat that are most likely to be part of an AF episode, and puts more weight on those regions in the network to distinguish between the AF and non-AF classes. The performance of our method is compared against the majority of existing algorithms for detection of AF using the same databases and the same evaluation metrics. The proposed method achieved an accuracy of 99.40%, a sensitivity of 99.53% and a specificity of 99.26%, validated on the MIT-BIH AFIB database with 5-s data segments, which significantly outperforms the results of other studies. Moreover, the detector obtained high values for the accuracy, sensitivity and specificity on the MIT-BIH AFIB database with 30-s (≈45 beats) data segments, and on the Long-Term AFIB database with 30-s (≈37 beats) data segments.

One of the other key issues in AF detection methods is their poor performance in detecting AF episodes in short signal recordings (i.e., less than 30 s). While the majority of the state-of-the-art algorithms require a 30-s episode or at least 127/128 beats to achieve an acceptable detection performance [25, 26, 21, 27], our proposed method achieves strong performance on very short ECG segments of 5 s, which here correspond to fewer than 7 beats.

Thanks to the proposed end-to-end deep learning approach, we do not need to tune any parameters that might affect the detection performance. For instance, the setting of parameters in [25, 28] has a significant influence on their final results.

In this study, we proposed a novel deep network architecture to classify a given signal as AF or non-AF. The proposed AF detector shows better performance compared to the other detectors in the literature. One key aspect of our AF detector is that it simultaneously gives more weight to the parts of the ECG signal with higher potential prevalence of AF, and also considers the whole cycle (i.e., the beat) to extract other consecutive dependencies between each wave (i.e., P-, QRS-, T-waves, etc.). Moreover, the proposed method obtains significant detection results using a short ECG segment, 5 s long, to detect AF without the need to tune any parameters in the model.


  • [1] Hisham ElMoaqet, Zakaria Almuwaqqat, and Mohammed Saeed, “A new algorithm for predicting the progression from paroxysmal to persistent atrial fibrillation,” in Proceedings of the 9th International Conference on Bioinformatics and Biomedical Technology. ACM, 2017, pp. 102–106.
  • [2] Véronique L. Roger et al., “Heart disease and stroke statistics—2012 update,” Circulation, vol. 125, no. 1, pp. e2–e220, 2012.
  • [3] Wolf PA, Abbott RD, and Kannel WB, “Temporal relations of atrial fibrillation and congestive heart failure and their joint influence on mortality: The Framingham heart study,” Circulation, vol. 107, pp. 920–2925, 2003.
  • [4] Benjamin EJ, Wolf PA, D’Agostino RB, Silbershatz H, Kannel WB, and Levy D, “Impact of atrial fibrillation on the risk of death: The Framingham heart study,” Circulation, vol. 98, pp. 946–952, 1998.
  • [5] Ross JS, Mulvey GK, Stauffer B, et al., “Statistical models and patient predictors of readmission for heart failure: A systematic review,” Archives of Internal Medicine, vol. 168, no. 13, pp. 1371–1386, 2008.
  • [6] Mohammad Zaeri-Amirani, Fatemeh Afghah, and Sajad Mousavi, “A feature selection method based on shapley value to false alarm reduction in icus, a genetic-algorithm approach,” arXiv preprint arXiv:1804.11196, 2018.
  • [7] Fatemeh Afghah, Abolfazl Razi, SM Reza Soroushmehr, Somayeh Molaei, Hamid Ghanbari, and Kayvan Najarian, “A game theoretic predictive modeling approach to reduction of false alarm,” in International Conference on Smart Health. Springer, 2015, pp. 118–130.
  • [8] Shadnaz Asgari, Alireza Mehrnia, and Maryam Moussavi, “Automatic detection of atrial fibrillation using stationary wavelet transform and support vector machine,” Computers in biology and medicine, vol. 60, pp. 132–142, 2015.
  • [9] Philip de Chazal and Richard B Reilly, “A patient-adapting heartbeat classifier using ecg morphology and heartbeat interval features,” IEEE transactions on biomedical engineering, vol. 53, no. 12, pp. 2535–2543, 2006.
  • [10] Supreeth P Shashikumar, Amit J Shah, Gari D Clifford, and Shamim Nemati, “Detection of paroxysmal atrial fibrillation using attention-based bidirectional recurrent neural networks,” arXiv preprint arXiv:1805.09133, 2018.
  • [11] PhysioNet, “MIT-BIH Atrial Fibrillation Database (AFDB); Long-Term AF Database (LTAFDB); Normal Sinus Rhythm Database (NSRDB),” 2000.
  • [12] Yong Xia, Naren Wulan, Kuanquan Wang, and Henggui Zhang, “Detecting atrial fibrillation by deep convolutional neural networks,” Computers in biology and medicine, vol. 93, pp. 84–92, 2018.
  • [13] Ronald J Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine learning, vol. 8, no. 3-4, pp. 229–256, 1992.
  • [14] Seyed Sajad Mousavi, Michael Schukat, and Enda Howley, “Deep reinforcement learning: an overview,” in Proceedings of SAI Intelligent Systems Conference. Springer, 2016, pp. 426–440.
  • [15] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in International conference on machine learning, 2015, pp. 2048–2057.
  • [16] Sajad Mousavi, Michael Schukat, Enda Howley, Ali Borji, and Nasser Mozayani, “Learning to predict where to look in interactive environments using deep recurrent q-learning,” arXiv preprint arXiv:1612.05753, 2016.
  • [17] Runnan He, Kuanquan Wang, Na Zhao, Yang Liu, Yongfeng Yuan, Qince Li, and Henggui Zhang, “Automatic detection of atrial fibrillation based on continuous wavelet transform and 2d convolutional neural networks,” Frontiers in physiology, vol. 9, pp. 1206, 2018.
  • [18] Lonce Wyse, “Audio spectrogram representations for processing with convolutional neural networks,” arXiv preprint arXiv:1706.09559, 2017.
  • [19] Jinseok Lee, Bersain A Reyes, David D McManus, Oscar Maitas, and Ki H Chon, “Atrial fibrillation detection using an iphone 4s,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 1, pp. 203–206, 2013.
  • [20] Kai Jiang, Chao Huang, Shu-ming Ye, and Hang Chen, “High accuracy in automatic detection of atrial fibrillation for holter monitoring,” Journal of Zhejiang University SCIENCE B, vol. 13, no. 9, pp. 751–756, 2012.
  • [21] Chao Huang, Shuming Ye, Hang Chen, Dingli Li, Fangtian He, and Yuewen Tu, “A novel method for detection of the transition between atrial fibrillation and sinus rhythm,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 4, pp. 1113–1119, 2011.
  • [22] Saeed Babaeizadeh, Richard E Gregg, Eric D Helfenbein, James M Lindauer, and Sophia H Zhou, “Improvements in atrial fibrillation detection for real-time monitoring,” Journal of electrocardiology, vol. 42, no. 6, pp. 522–526, 2009.
  • [23] S Dash, KH Chon, S Lu, and EA Raeder, “Automatic real time detection of atrial fibrillation,” Annals of biomedical engineering, vol. 37, no. 9, pp. 1701–1709, 2009.
  • [24] K Tateno and L Glass, “Automatic detection of atrial fibrillation using the coefficient of variation and density histograms of rr and rr intervals,” Medical and Biological Engineering and Computing, vol. 39, no. 6, pp. 664–671, 2001.
  • [25] Xingran Cui, Emily Chang, Wen-Hung Yang, Bernard C Jiang, Albert C Yang, and Chung-Kang Peng, “Automated detection of paroxysmal atrial fibrillation using an information-based similarity approach,” Entropy, vol. 19, no. 12, pp. 677, 2017.
  • [26] Xiaolin Zhou, Hongxia Ding, Benjamin Ung, Emma Pickwell-MacPherson, and Yuanting Zhang, “Automatic online detection of atrial fibrillation based on symbolic dynamics and shannon entropy,” Biomedical engineering online, vol. 13, no. 1, pp. 18, 2014.
  • [27] Jinseok Lee, Yunyoung Nam, David D McManus, and Ki H Chon, “Time-varying coherence function for atrial fibrillation detection.,” IEEE Trans. Biomed. Engineering, vol. 60, no. 10, pp. 2783–2793, 2013.
  • [28] Andrius Petrėnas, Vaidotas Marozas, and Leif Sörnmo, “Low-complexity detection of atrial fibrillation in continuous long-term monitoring,” Computers in biology and medicine, vol. 65, pp. 184–191, 2015.