
Probabilistic Model-Based Approach for Heart Beat Detection

Abstract

Nowadays, hospitals are ubiquitous and integral to modern society. Patients flow in and out of a veritable whirlwind of paperwork, consultations, and potential inpatient admissions, through an abstracted system that is not without flaws. One of the biggest flaws in the medical system is perhaps an unexpected one: the patient alarm system. One longitudinal study reported a false alarm rate of 88.8%, with other studies reporting numbers of similar magnitude. These false alarm rates lead to a number of deleterious effects that manifest in a significantly lower standard of care across clinics.

This paper discusses a model-based probabilistic inference approach to identifying physiological variables at the beat-detection level. We design a generative model that is consistent with a simplified view of human physiology and perform approximate Bayesian inference. One primary goal of this paper is to justify a Bayesian modeling approach to increasing robustness in a physiological domain.

We use three data sets provided by Physionet, a research resource for complex physiological signals, in the form of the Physionet 2014 Challenge set-p1 and set-p2, as well as the MGH/MF Waveform Database. On the extended training set (set-p2), our algorithm is on par with the top six submissions to the Physionet 2014 Challenge.

Keywords: Beat Detection, PhysioNet Challenge, Particle Filter, Dynamic Bayesian Network, ECG, Blood Pressure, Model Based Probabilistic Inference.

1 Introduction

Patient monitoring is a significant part of health care, not only to ensure that physicians can accurately diagnose and treat patients, but also to trigger biometric-based alarms. The intent of these alarms is to guarantee that a patient receives attention from clinicians whenever his or her condition takes a turn for the worse.

One of the important facets of such biometric data is heart beats. By monitoring heart beats, a number of cardiac issues, including a variety of life-threatening arrhythmias (asystole, bradycardia, tachycardia, etc.), can be detected [?]. In cases of high noise levels and poor signal quality, the detected heart beats themselves are unreliable. These unreliable beats can lead to false-negatives, where alarms fail to be triggered, as well as false-positives, where alarms are triggered for no substantive reason. These two cases respectively result in alarm failure and alarm fatigue [?].

False-negatives are direct alarm failures. Instances of alarm failure are dangerous because they mean that patients in life-threatening situations may be completely overlooked. Alarm fatigue, on the other hand, is an indirect consequence of an excess of false-positives (i.e., false alarms). An excess of false alarms has a number of pernicious effects, including the desensitization of nurses and doctors to true alarms, disturbance of ailing patients, and time wasted for both physicians and patients. One article cites alarm fatigue as one of the top patient safety concerns in hospitals [?]. In a 31-day study across 461 adults in intensive care units, 88.8% of the 12,671 arrhythmia alarms were falsely detected [?]. Other studies report similar numbers, indicating that alarm fatigue is a real phenomenon.

Typically, the methods used to identify heart beats are straightforward signal processing algorithms that take advantage of the regular waveforms of either electrocardiogram (ECG) or arterial blood pressure (ABP) signals. In ECG signals, there exists a regular QRS complex. In signals with well-defined QRS complexes, signal processing algorithms that annotate heart beats at the R peaks achieve high accuracy. Likewise, in the ABP signals there exists a spike, albeit not as sharply defined as the QRS peak, that indicates the location of the heart beat. In ABP, heart beat detection is further complicated by a delay between the heart beats and the pressure peaks.

Figure 1: Section of signal from example 1522 of Physionet 2014 Challenge’s set-p2. In the left subsection you can see that the electrocardiogram has flat-lined. On the right, we conversely see an example where blood pressure has flat-lined, all within the same signal.

Standard methods for heart beat detection typically include signal processing algorithms that rely on a particular lead for a particular signal. This naive approach is already problematic because of the potential for dropped signals. Figure 1 provides an example of dropped signals within a single piece of data. This shows that for a given patient, across a relatively short period of time, it is possible for either the ECG or the ABP signal to simply flat-line and yield no useful information. These dropped signals occur for a variety of reasons, including but not limited to technical malfunctions or the detachment of sensors due to patient movement. In these cases, any signal processing algorithm that naively depends on a single signal’s lead will fail to provide useful data for extended periods of time.

The problem of dropped signals almost naturally suggests a solution in the form of a signal switching algorithm: one that evaluates whether the current lead has flat-lined and, if so, switches to a lead with a notable stream of data. This approach certainly works to a degree, but a naive application will fail due to events that disturb the signal and generate noise, otherwise known as artifacts.

Figure 2: Section of signal from example 1715 from Physionet 2014 Challenge’s set-p2. In this subsection you can see that the electrocardiogram has an area with a significantly jittery signal.

In Figure 2, the middle of the ECG signal has a clear example of an artifact. The regular QRS complexes are disturbed, leaving a signal that bears no recognizable patterns. Artifacts can be the manifestation of a patient brushing their teeth, bumping into something, or simply rolling around in their sleep. They complicate the detection of heart beats, because the artifacts can corrupt the signals in a variety of ways, thereby rendering any naive switching algorithm insufficient. Ultimately, the goal is to robustly detect heart beats despite the occurrence of artifacts, noise, and dropped signals. In doing so, patient monitoring systems in hospitals can become much more efficient, reducing ICU false alarm rates.

Given all of this clinical data, we recognize the problem as one that can be solved through either a data-driven or a model-based approach. We could potentially treat the problem as a regression problem and allow the program to train on the Physionet data and develop its own interpretation and understanding of the problem. Alternatively, we can assume that the problem is an inherently biological problem, and take an approach that borrows from modern understanding of human physiology.

Because there exists a large corpus of research on human physiology, we choose the latter. At the same time, as noted earlier, the data we are dealing with is inherently uncertain. In order to capture both the physiological model and this inherent uncertainty, we approach the problem through model-based probabilistic inference.

We develop a Dynamic Bayesian Network (DBN) to describe the interactions of the patient's physiological characteristics over time. In order to incorporate our beliefs about how artifacts and noise manifest themselves in the data, we also specify an observation model. Finally, we use particle filtering, a Sequential Monte Carlo (SMC) method, to perform combined state and parameter estimation on our non-linear, non-Gaussian model.

2 Physionet

Before moving directly into the algorithm, it is worth introducing Physionet. Physionet is a research resource that offers access to a sizable supply of recorded physiological signals and related open-source software. In addition, they hold challenges on a yearly basis, with the 2014 challenge pertaining to heart beat detection. We use several of the datasets provided by Physionet to evaluate our algorithm's performance.

2.1 Material

The first is the Physionet Challenge 2014 training set (set-p1), which was the dataset released during the first phase of the challenge [?]. There are 100 records that are all relatively clean and artifact-free. The records are typically 10 minutes long or shorter, and each contains at least one ECG signal and at least one ABP signal. The sampling frequency is consistent across these records at 250 samples per second.

The next dataset is the Physionet Challenge 2014 extended training set (set-p2), consisting of 100 records [?]. These records contain signals that have more noise and artifacts than those of set-p1. The signals in set-p2 are mostly 10 minutes long, although there are occasionally shorter signals. The sampling frequency varies between 250 and 360 samples per second.

The final dataset we use is the Massachusetts General Hospital/Marquette Foundation (MGH/MF) Waveform Database provided by Physionet [?]. The database consists of 250 recordings and represents a broad spectrum of physiologic and pathophysiologic states. Individual recordings vary in length from 12 to 86 minutes, and in most cases are about an hour long. The effective sampling frequency is 360 samples per second.

For these three datasets, reference beat annotations are available. These reference beat annotations represent the consensus of several expert beat annotators, and are used for determining the accuracy of the algorithms.

Beyond the datasets, we also make use of the GQRS and WABP functions, which are the basic beat detectors for ECG and ABP, respectively, provided by Physionet's WFDB Toolbox [?]. In addition, we use other WFDB Toolbox functions for processing signals and annotations.
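As a point of reference only, the records and reference annotations can also be read programmatically. The following is a minimal sketch using the Python wfdb package (our own pipeline uses the MATLAB WFDB Toolbox); the record name and the "atr" annotation extension are assumptions for illustration.

# Minimal sketch: load one record and its reference beat annotations with the
# Python 'wfdb' package. Record name and annotation extension are illustrative.
import wfdb

rec_name = "1376"                       # hypothetical record id, assumed to be available locally
record = wfdb.rdrecord(rec_name)        # signals (record.p_signal), sampling rate, channel names
ref_ann = wfdb.rdann(rec_name, "atr")   # reference beat annotations ('atr' extension assumed)

print(record.sig_name, record.fs)       # e.g. ECG and ABP channel labels and sampling frequency
print(ref_ann.sample[:10])              # first reference beat locations, in sample indices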

2.2 Related Work

Since Physionet held a challenge and collected many submissions, there is also a substantial body of work that makes meaningful progress towards improving heart beat detection. The top six algorithms submitted to the Physionet 2014 Challenge, ordered by their performance in the challenge, are Pangerc [?], Johnson [?], Antink [?], De Cooman [?], Johannesen [?], and Vollmer [?].

In general, most of the submissions involved a subset of a few typical techniques: pre-processing, a combination algorithm, and post-processing. In addition, most algorithms used some form of delay incorporation, either in their pre-processing or in their combination algorithm, to account for the ABP delay. Finally, a few algorithms make use of signals outside of the ECG and ABP signals. It is worth noting that Pangerc, whose performance is significantly higher than that of the other submissions, re-implemented the ECG and pulsatile-signal detection algorithms. For a recent review of the submissions, refer to the Physionet 2014 Challenge summary paper [?].

3 Methods - Algorithm

3.1 Introduction

Dynamic Bayesian networks (DBNs) are widely used to model the processes underlying sequential data such as speech signals, financial time series, genetic sequences, and in our case, physiological signals. DBNs model a process using static parameters, hidden variables that evolve over time, and observations at each time step, as shown in Figure 3.

More specifically, for a partially observable Markov process with unobserved state variables $x_t$ and observations $y_t$, parametrized by a static parameter vector $\theta$, the probabilistic model is defined as follows.

$x_0 \sim p(x_0 \mid \theta)$  (1)
$x_t \mid x_{t-1} \sim p(x_t \mid x_{t-1}, \theta)$  (2)
$y_t \mid x_t \sim p(y_t \mid x_t, \theta)$  (3)

The first equation represents the initialization of state variables, which corresponds to the prior probability. The second equation represents the propagation model, which is based on transition probabilities from one state to another. For our propagation model, we implemented a drastically simplified model of human biometrics and then evolved variables in a physiological manner. The final equation represents the observation model, which corresponds to the probability of a particular observation given a certain state. In our case, our observations consisted of annotations and signal quality indices from signal detection algorithms.

Figure 3: A state-space model with static parameters $\theta$. The $x_t$ are latent states and the $y_t$ are observations.

3.2 Sequential Monte Carlo

Given a probabilistic model and a sequence of observations, one can attempt to estimate the latent states. That is the problem of state estimation, also known as filtering: the process of computing the posterior distribution of the hidden state variables given a sequence of observations. Exact filtering is often intractable except for specific cases, but approximate filtering using the particle filter (a sequential Monte Carlo method) is feasible in many applications [?] [?].

The specific method we use is Sequential Importance Sampling-Resampling (SIR), otherwise known as bootstrap filtering or, more generally, particle filtering. This representation relies on approximating the posterior density function at any given time using a set of random particles that we recursively evolve. As the cardinality of the set grows, the approximation improves in accuracy.

We initialize the states of our particles based on the prior probabilities. We then propagate the state of the particles using the transition probabilities, weight based on the observation probabilities, and resample at each time step. Through this propagate-weight-resample scheme, particle filtering generates simulations that explore the likely portions of the latent probability space.

Algorithm 1: Sequential importance sampling-resampling (SIR)
Input: $N$, the number of particles; $y_{1:T}$, the observation sequence
Output: a particle approximation of the posterior at each timestep
1. Initialize: for $i = 1, \ldots, N$, sample $x_0^i \sim p(x_0 \mid \theta)$.
2. For $t = 1, \ldots, T$:
   (a) Propagate: for each $i$, sample $x_t^i \sim p(x_t \mid x_{t-1}^i, \theta)$.
   (b) Weight: for each $i$, set $w_t^i \propto p(y_t \mid x_t^i, \theta)$.
   (c) Resample: draw $N$ particles from $\{x_t^i\}$ with probabilities proportional to $\{w_t^i\}$.
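To make the propagate-weight-resample recursion concrete, the following is a minimal, generic sketch of an SIR filter in Python (not our MATLAB implementation); propagate, likelihood, and observations are placeholders for the model-specific pieces defined in Section 4.

# Generic SIR (bootstrap) particle filter sketch; not the paper's code.
import numpy as np

def sir_filter(init_particles, propagate, likelihood, observations, seed=0):
    rng = np.random.default_rng(seed)
    particles = np.asarray(init_particles)          # (N, state_dim), drawn from the prior
    n = len(particles)
    posterior_means = []
    for y in observations:
        particles = propagate(particles, rng)       # sample from the transition model
        w = likelihood(y, particles)                # weight by the observation model
        w = w / w.sum()                             # normalize the importance weights
        idx = rng.choice(n, size=n, p=w)            # resample to avoid weight degeneracy
        particles = particles[idx]
        posterior_means.append(particles.mean(axis=0))  # average state, used later for annotation
    return np.asarray(posterior_means)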

4 Methods - Model

Our approach relies on the assumption that human physiology follows a pattern that can be modeled in a Bayesian manner. Specifically, we construct a DBN that corresponds to human physiology, with static variables (e.g. resting heart rate), dynamic state variables (e.g. true heart rate) which define a propagation model, and observations (e.g. ECG annotations and signal quality) which define an observation model. The propagation and observation models are described in the following sections and illustrated in Figure 5. We proceed to use the filtering techniques covered in the previous section to perform state estimation on the DBN we have constructed.

4.1 Propagation Model

Figure 4: This figure shows the physical representation of some of the propagation variables.

Our propagation model (transition model) encodes a DBN and indicates our beliefs about the interdependent evolution of the relevant variables over time. The model makes use of nine variables to represent a simplified model of human physiology. Refer to Figure 4 to see the physical representation of a few of the propagation variables.

The first two variables are static parameters: RestHR and Latency, which represent the resting heart rate of the patient and the delay between the ECG and ABP signals, respectively [?]. These static parameters converge quickly during particle filtering.

The next variables we cover are latent variables. The third variable is the TrueHR, which represents our belief about the patient's heart rate at a particular point in the signal. The fourth variable is the ECGPeak, a binary variable that represents whether there is a peak in the current window of the ECG signal. The ECGPeak variable evolves based on the TrueHR and the next variable, ECGLastPeak, which represents the last time we believed there was a peak in the ECG signal. The sixth and seventh variables are ABPPeak and ABPLastPeak, which are analogous to the corresponding ECG variables but incorporate the Latency variable. Note that in this model, the ABPPeak at time t coincides with the ECGPeak at time t - Latency. This means that the ABPLastPeak variable represents our final belief about heart beat annotations, because it incorporates both ECG and ABP information. The final two variables are ECGArtifact and ABPArtifact, which are binary variables that are used to label a signal as artifactual. For a more in-depth explanation, refer to the Propagation Model in Appendix A.
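For illustration only, the nine propagation variables can be thought of as the per-particle state sketched below; field names follow the appendix, and the types simply mirror the description above.

# Sketch of the per-particle state implied by the nine propagation variables.
from dataclasses import dataclass

@dataclass
class ParticleState:
    rest_hr: float        # static: resting heart rate (bpm)
    latency: int          # static: ECG-to-ABP delay, in windows
    true_hr: float        # latent heart rate at the current window
    ecg_peak: bool        # is there an ECG peak in this window?
    ecg_last_peak: int    # window index of the last believed ECG peak
    abp_peak: bool        # ECG peak shifted forward by 'latency'
    abp_last_peak: int    # last peak combining ECG and ABP information
    ecg_artifact: bool    # is the ECG signal currently artifactual?
    abp_artifact: bool    # is the ABP signal currently artifactual?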

4.2 Observation Model

The observation model (sensor model) encodes our beliefs about the functions we use to derive observations and the probability that the observations correspond to the current states. In the observation model there are six variables that we relate to the variables in our propagation model.

The first two variables are annotation observations, ECGAnn and ABPAnn. These binary variables represent whether the algorithms provided by Physionet, GQRS and WABP, found an annotation at the current time. The observation model describes a relationship between these variables and the corresponding Peak, LastPeak, and Artifact variables in the propagation model. The next two are heart rate observations, ECGHR and ABPHR, which once again are derived using WABP and GQRS to give an estimate of the heart rate. These observations are mainly associated with the TrueHR variable. Finally, we have two SQI observations, ECGSQI and ABPSQI, which represent how trustworthy a particular signal is. The ECGSQI is calculated by comparing the results from two of Physionet's ECG signal detectors, and the ABPSQI is calculated by checking that the detections are within certain physiological boundaries [?] [?] [?]. For a more in-depth explanation, refer to the Observation Model in Appendix B.

Figure 5: This figure shows the propagation and observation model that encodes our Dynamic Bayesian Network. The grayed out variables correspond to observed variables.

4.3 Performing State Estimation

Given our probabilistic heart beat model, we now use particle filtering to perform state estimation and to determine when heart beats occurred. Our particle filter follows SIR whereby we perform three steps recursively: propagation, weighting, and resampling.

First, we split our signals into 25 millisecond windows and calculate the values of the observation model variables for each of these windows. Then, the particle filter assigns a prior belief to our set of particles (in our algorithm we use 2000 particles). For the propagation step, we propagate each particle individually according to our transition model to acquire the state of the model one step into the future. Next, for each particle, we calculate a weight that represents the likelihood of that particle's realized state given the observations we have made at the corresponding time. The final recursive step is to resample the particles according to the weights we calculated, in order to avoid the degeneracy problem, where the particles all have negligible weight [?].

We repeat the recursive steps above for the duration of the signal in a sequential fashion. At each step in the recursion we also save the average state of the variables across the particles, which is used to compose our actual heart beat annotations. Based on the ABPPeak states, we backshift a distance of Latency and then annotate a beat only if enough particles are in the state that corresponds to a peak. This threshold is one of several hyperparameters within our algorithm that were tuned over the development of the filter. The final annotations we use therefore correspond to timesteps where enough particles were in an ABPPeak state, backshifted by the Latency between ECGPeaks and ABPPeaks.
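As an illustrative sketch of this backshift-and-threshold step (not the paper's code), assume abp_peak_fraction[t] is the fraction of particles with ABPPeak equal to 1 at window t, mean_latency is the averaged Latency in windows, and threshold is the tuned hyperparameter mentioned above.

# Sketch: turn per-window particle agreement into backshifted beat times.
import numpy as np

def extract_beats(abp_peak_fraction, mean_latency, threshold, window_sec=0.025):
    beat_windows = []
    for t, frac in enumerate(abp_peak_fraction):
        if frac >= threshold:                               # enough particles believe in an ABP peak
            beat_windows.append(t - int(round(mean_latency)))   # backshift to the ECG peak location
    return np.array(beat_windows) * window_sec               # window indices converted to seconds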

5 Results

Now that we have established the algorithm, we discuss its performance on several datasets. In this section we compare results in terms of sensitivity (recall) and positive predictivity (precision). The sensitivity represents the percentage of actual beats our algorithm annotated, and the positive predictivity represents the percentage of our detections that corresponded to actual beats.
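In terms of beat-matching counts, the two metrics reduce to the following, where TP, FN, and FP would come from a beat-by-beat comparison such as Physionet's bxb.

def sensitivity(tp, fn):
    return tp / (tp + fn)      # fraction of reference beats that we annotated

def positive_predictivity(tp, fp):
    return tp / (tp + fp)      # fraction of our annotations that match reference beats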

5.1 Results - Individual

The first example we discuss is record 1376 from set-p2. This record is an example where we make a large improvement over GQRS and WABP.

Figure 6: Example 1376 from set-p2. On the left we plot our annotations against the ECG signal, and on the right we plot them against the ABP signal. Both signals are plotted over a section of two seconds. The red dots signify annotations. On this example we achieved a sensitivity of 0.91835 and a predictivity of 0.98842, whereas GQRS had a sensitivity of 0.69051 and a predictivity of 0.99002. The slightly misaligned beats on the left are likely due to an overestimated latency.

In Figure 6 we note that the particle filter can recover from spurious GQRS annotations on the left and spurious WABP annotations on the right, all within the same signal. Our algorithm performs well overall on record 1376, and this example illustrates its capacity for artifact recovery. Most of the improvement our algorithm achieves comes from knowing when a particular signal can be trusted enough to be incorporated through our observation model.

The next example we discuss is record 2664 from set-p2. This record demonstrates our improvement over GQRS and provides an illustrative example of signal quality and artifacts.

Figure 7: Example 2664 from set-p2, shown in three panels: (a) ECGSQI, (b) ECGArtifact, (c) ECG heart rate. On the left we depict the ECG signal quality observation over time. In the middle we depict the mean of the particles' ECGArtifact variable over time. On the right we depict the heart rate over time.

Figure 8: Example 2664 from set-p2, shown in two panels: (a) GQRS performance, (b) DBN-PF performance. This visualizes an estimate of the scoring method from Physionet that we used. Green indicates true positives, red indicates false positives, and yellow indicates missed reference annotations. Note that this visualization is not the same as the method we used to collect the scoring.

In Figure 7, we see that our estimate of the ECGArtifact variable correlates strongly with the ECGSQI observation. Wherever the ECGSQI is low, our particle filter correspondingly believes there to be an artifact. The fact that the ECGArtifact variable is quite accurate serves as a proof of concept that our particle filter can track other biometric or sensor state in addition to heart beats. In addition, we see that the TrueHR estimate matches the actual heart rate quite closely. This means we can potentially incorporate other variables, such as those that could determine the presence of arrhythmia or other information relevant to the general status of a patient, to create a more refined model.

Then, in Figure 8, we see a strong improvement over GQRS. Our algorithm has a sensitivity of 0.941 and a predictivity of 0.988, whereas GQRS has a sensitivity of 0.343 and a predictivity of 0.975. These numbers indicate that for example 2664, GQRS missed many actual beats but made few false predictions, which is reflected in the figure as well. Overall, this example highlights the powerful recovery our algorithm achieved simply by combining GQRS and WABP annotations under a physiological model.

In our last record of interest, 1033 from set-p2, we observe a phenomenon we denote as "double annotations", pictured in Figure 9. These double annotations are mainly due to artificial pacemakers and low dicrotic notches, where the signals are shaped in such a way that GQRS, WABP, or both algorithms annotate an extra set of beats. Here we observe the case where both GQRS and WABP believe there to be a double annotation, resulting in our particle filter believing there to be a double annotation as well. These double annotation phenomena are the primary reason for our lower positive predictivity in the next results section. Recovering from double annotations is actually quite difficult within the generative capabilities of our probabilistic model; however, through the introduction of new features the double annotations can potentially be ameliorated. Without other data about the signal, determining the presence of double annotations is infeasible.

Figure 9: Example 1033 from set-p2. The red dots indicate annotations.

5.2 Results - Overall

In order to generate these results, we took the top submissions on Physionet, downloaded their entries, and re-ran them against the exact same datasets we used, for the sake of consistency in our comparisons. We then used Physionet's provided bxb function to compare the generated heart beat annotations with the reference annotations and to determine the sensitivity and predictivity for a given record. Finally, we averaged the sensitivity and predictivity across all records from a given dataset. While all the submissions ran without errors on set-p1 and set-p2, our algorithm, Pangerc's, and Johnson's did not run properly on some of the records in the MGH/MF Waveform Database. We omitted those records when calculating scores. The results for set-p1, set-p2, and the MGH/MF Waveform Database are shown below:

set-p1
Method Sensitivity Pos. Pred.
DBN-PF 0.99893 0.99380
GQRS 0.99942 0.99320
WABP 0.39006 0.38997
Pangerc 0.99995 0.99950
Johnson 0.99829 0.99805
Antink 0.99965 0.99972
De Cooman 0.99962 0.99985
Johannesen 0.99956 0.99809
Vollmer 0.99963 0.99992
set-p2
Method Sensitivity Pos. Pred.
DBN-PF 0.92808 0.89527
GQRS 0.88134 0.85048
WABP 0.48544 0.45173
Pangerc 0.95737 0.94473
Johnson 0.92734 0.89265
Antink 0.91394 0.91829
De Cooman 0.88364 0.88341
Johannesen 0.91535 0.86260
Vollmer 0.91092 0.91334
MGH/MF Waveform Database
Method Sensitivity Pos. Pred.
DBN-PF 0.92853 0.93716
GQRS 0.80400 0.85934
Pangerc 0.88567 0.91749
Johnson 0.93832 0.91808

Overall, the performance of GQRS is nearly impossible to beat on set-p1, precisely because the data is so clean and regular. It seems highly unlikely that any statistically significant differences can be derived from set-p1. All algorithms perform extremely well, except for WABP, which suffers primarily from the ECG-ABP delay.

With regard to set-p2, we start to observe some differentiation. Comparing against the other entries, we see that our particle filter outperforms all algorithms in sensitivity except for Pangerc. Our predictivity does suffer due to double annotations, but we still perform well overall. This is fairly strong evidence that our algorithm is capable of accurately combining the information from multiple channels of signals. On set-p2 we also see that the performance of the Pangerc submission is substantially better than the others. This is likely due to their use of custom ECG and BP pulse detectors. Their QRS detector (repdet) provided much improved performance over GQRS on set-p2 in particular, likely due to the inclusion of a step in the detector to identify double annotations (paced beats) [?] [?]. This step may account for Pangerc's improved performance over the algorithms that used GQRS.

Finally, for the MGH/MF Waveform Database, we note that the particle filter outperforms all other algorithms in predictivity and does very well in sensitivity as well, with approximately 0.929 sensitivity and 0.937 predictivity. In comparison to Pangerc and GQRS this is a great improvement, and in comparison to Johnson, we improve predictivity and only slightly lose out on sensitivity. Our algorithm performs well across multiple datasets, which suggests that the improvement is fairly robust.

6 Discussion

In this section, we discuss some of the interesting features of our algorithm. First, it is important to state that our algorithm is slower than other standard signal detection algorithms, mainly because it is a simulation-based filtering algorithm. In general, on set-p1 (10-minute signals), our algorithm takes 94.33 seconds on average to run in MATLAB R2015a (on a MacBook Pro with a 2.9 GHz Intel Core i5 processor and 8 GB of 1867 MHz DDR3 memory). It is worth noting, however, that it is quite feasible for it to be implemented as an online algorithm, simply because particle filters process data sequentially.

We showed an example of double annotations in the previous section. In order to mitigate double annotations we would have to augment our model. One way to do this would be to incorporate information about the amplitude of the beat as an observation, use it to determine whether our algorithm should expect a double annotation, and determine peaks based on the new information. The model augmentations would likely be a few variables that probabilistically indicate the presence of double annotations and represent the amplitude of the signal.

Because there were so many other meaningful algorithms, it is worth comparing our own algorithm against the other top submissions. First of all, our algorithm inherently differed from the other approaches: it was the only one that focused on performing probabilistic inference on a Dynamic Bayesian Network. In terms of similarities, our algorithm slightly resembles the Johnson algorithm, because both algorithms use signal quality indices. However, the Johnson algorithm relies on deterministic switching and, in its latter stages, also considers the regularity of signals beyond ECG and ABP. As a whole, our algorithm outperforms the others that relied on implementing a form of intelligent switching. Only Pangerc, who relied on their re-implemented beat detection algorithm (repdet), outperformed our own [?].

Another point of discussion, and a point of differentiation between our algorithm and the others, is our flexibility. With this flexibility of inputs, it becomes feasible to consider using alternate detectors to further strengthen our algorithm. One alternative would be to use the detectors from the Pangerc submission, which should in theory boost our performance considerably.

Apart from performance concerns, we can consider the ease of implementation. Model-based probabilistic inference approaches are becoming more and more appealing due to the emergence of probabilistic programming languages (PPLs). A probabilistic programming language is a high-level language that makes it easy to represent probabilistic models and perform inference over them. PPLs enable domain experts who do not have extensive experience in probability theory or machine learning to use state-of-the-art machine learning methodologies to perform meaningful inferences [?]. Because the problem we are analyzing is within the domain of physicians and clinicians, it would be best if we could empower them to create the physiological models they wish to represent on their own. As a proof of concept, we have also implemented our probabilistic model in a probabilistic programming language called BLOG [?]. We used BLOG's particle filtering engine to perform the state estimation, and its results agree with our MATLAB implementation. Physicians would simply need to implement their model and use the particle filter (or any other estimation technique) implemented in BLOG. For a glimpse at the BLOG implementation, refer to Appendix D.1.

7 Conclusion

Based on our results, we find that particle filtering on our DBN model performs quite well. It matches or outperforms a majority of the top submissions to the Physionet 2014 Challenge. Compared with the default Physionet beat detector GQRS, we improve on set-p2 by roughly 4 to 5 percentage points in both sensitivity and positive predictivity. As a whole, our particle filter serves as a strong proof of concept that a probabilistic model-based inference approach can robustly detect heart beats.

Our probabilistic model is not completely perfect, but some of our algorithm's advantages include:

1. It utilizes multi-channel information by incorporating the ECG-ABP peak delay (latency).
2. Flexible model: it is easy to incorporate new variables and relationships.
3. Flexible observations: it is easy to switch out GQRS and WABP for any other signal processing algorithms.
4. It is consistent with a physiological perspective.
5. It can be implemented using a PPL.
6. Unlike many machine learning techniques, our model does not require a training set.

Some of our disadvantages include:

1. Slower run time. One ten-minute signal takes approximately a minute and a half for the algorithm to run.
2. It depends heavily on the underlying signal processing methods.

In terms of future work on the particle filter, we can incorporate the preprocessing mentioned in the other submissions for the sake of reducing noise. In addition, our algorithm only looks at one ECG and one ABP signal, when there are a multitude of other signals that could be utilized. Incorporating these would likely improve performance, as would initializing the delay based on one of the methods described in the other submissions. In the far future, this algorithm could be tweaked to directly detect arrhythmia. By placing emphasis on the artifacts and introducing arrhythmia variables, it would be possible to develop beliefs regarding these cardiac abnormalities. Since we have improved the detection rate for heart beats, it stands to reason we can improve the detection rate for arrhythmia as well as for other problems that can be modelled physiologically.

8 Acknowledgements

We are thankful to Xiao Hu, Quan Ding, Yong Bai, Rebeca Salas-Boni and Daniel Schindler for helpful discussions.

Supplemental Material

Appendix A Propagation Model

The propagation model (transition model) encodes the interdependent evolution of our relevant physiological variables over time. The structure of the dependencies is shown in Figure 5.

A.1 RestHR

RestHR is a real-valued static variable that represents the resting heart rate of a particular patient, which is simply the patient's expected heart rate while at rest. The prior value is a Gaussian around the average heart rate as determined by patient demographics. It obeys the following static (i.e., identity) propagation function:

$\text{RestHR}_t = \text{RestHR}_{t-1}$  (4)

Latency

Figure 10: The blue lines represent heart beat annotations. The gray lines and circles are simply to highlight the annotated heart beat and the delay present in ART and PAP (the ABP signals).

In Figure 10, it is possible to see that the heart beats fall on, or very near to, the R peaks of the ECG's QRS complexes. Conversely, in the ABP signals the heart beat does not fall on the peaks pictured, but instead falls a reliable distance before the peaks. In our model, Latency is an integer-valued static variable that represents this relationship between heart beats and the blood pressure peaks. The prior value is a Gaussian around 200 milliseconds. It obeys the following propagation function:

$\text{Latency}_t = \text{Latency}_{t-1}$  (5)

A.2 TrueHR

TrueHR is a real-valued variable that represents the heart rate of the patient at the current time. The prior value of the TrueHR starts off as a Gaussian around the RestHR. Here $\mathcal{N}(\mu, \sigma)$ means sampling from a Gaussian distribution with the given mean $\mu$ and standard deviation $\sigma$.

$\text{TrueHR}_0 \sim \mathcal{N}(\text{RestHR}_0, 5)$  (6)

Since the true heart rate of a patient might vary over a few minutes, whereas the resting heart rate might vary over a few months, the true heart rate is not a static variable. It obeys the following propagation function:

$\text{TrueHR}_t \sim \mathcal{N}(0.2\,\text{RestHR}_{t-1} + 0.8\,\text{TrueHR}_{t-1},\ 1)$  (7)

A.3 ECGPeak

ECGPeak is a boolean-valued variable that is 1 if there should be a beat annotated at the current timestep and 0 otherwise. ECGPeak's prior starts as 1 with a small probability (0.01 in our implementation). It obeys the following propagation function:

$\text{ECGPeak}_t \sim \text{Bernoulli}(p_t)$  (8)

We calculate $p_t$ dynamically. As time progresses, the parameter will change depending on the current values of the TrueHR and the ECGLastPeak. First we define the difference between the current timestep and the last peak as $d_t = t - \text{ECGLastPeak}_{t-1}$. Then we define $n_t = \mathrm{round}\!\left(\frac{60}{w \cdot \text{TrueHR}_{t-1}}\right)$, which is the number of windows per beat based on the current heart rate, where $w$ is the window length in seconds.

$p_t = \binom{n_t}{d_t \bmod n_t} \left(\tfrac{2}{3}\right)^{d_t \bmod n_t} \left(\tfrac{1}{3}\right)^{n_t - (d_t \bmod n_t)}$  (9)

Figure 11: This graph represents the value of $p_t$ as a function of $d_t$ for a fixed TrueHR and window length. This is an example with a TrueHR of 60 bpm and a window of 0.025 s.

Thus, we represent the probability according to repeated binomial distributions, as in Figure 11. The reason for this calculation is to create a few important properties. The first is to create a memory that allows us to believe beats should occur based on the TrueHR, and to not preclude beats even if we don't believe there to be a beat earlier on. It also has the convenient property of generally not allowing us to double annotate beats, because it is unlikely the heart will beat twice in a short span of time. The other reason is that a binomially distributed random variable with parameters $n$ and $p$ has a mean of $np$ and a variance of $np(1-p)$. This means that we can control the expected value in our case to be $\tfrac{2}{3}n_t$ and the variance to be $\tfrac{2}{9}n_t$, using our values of $n_t$ and $p = \tfrac{2}{3}$.
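A small sketch of this calculation, assuming scipy is available and that TrueHR is expressed in beats per minute and the window length in seconds:

# Sketch of the dynamic peak probability p_t using scipy's binomial pmf.
from scipy.stats import binom

def peak_probability(t, last_peak, true_hr, window_sec=0.025, p=2.0 / 3.0):
    n = int(round(60.0 / (window_sec * true_hr)))   # windows per beat at the current heart rate
    d = t - last_peak                               # windows since the last believed peak
    return binom.pmf(d % n, n, p)                   # repeated binomial pmf, peaking near one beat period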

A.4 ECGLastPeak

ECGLastPeak is an integer-valued variable that represents the last time we believed there was a peak based on the ECG. The prior for ECGLastPeak is a uniform distribution over the integers in the range $[-n_0, -1]$, where $n_0$ is the initial number of windows per beat. The ECGLastPeak helps to shift the expected location of all of the heartbeats, because of the multimodal value of $p_t$ in the ECGPeak propagation. The uniform sampling means that the prior believes the expected heartbeats may be shifted to cover all possible initializations. It obeys the following propagation function:

$\text{ECGLastPeak}_t = \begin{cases} t & \text{if } \text{ECGPeak}_t = 1 \\ \text{ECGLastPeak}_{t-1} & \text{otherwise} \end{cases}$

In words, this equation states that the ECGLastPeak will change purely to record the last time ECGPeak was 1.

A.5 ABPPeak

ABPPeak is a boolean-valued variable that represents whether there is a peak, based on the ECGLastPeak and the Latency. Since theoretically the ABPPeak should align with the ECGPeak while accounting for the Latency, it obeys the following propagation function (even for the prior):

$\text{ABPPeak}_t = \mathbb{1}\!\left[\text{ECGLastPeak}_t + \overline{\text{Latency}} = t\right]$  (10)

Here, $\overline{\text{Latency}}$ means that we take the mean of the Latency across all the particles. This is admittedly unorthodox, but it ameliorates the issue of double and triple annotations in adjacent locations for the ABPPeak. If the particle filter had a while to learn the latency and let it converge, then this fix would be unnecessary, but, as is, this fix means that the particle filter's particles will not overly diverge at the ABPPeak.

A.6 ABPLastPeak

ABPLastPeak is an integer-valued variable that represents the last time we believed there was a peak based on both ECG and ABP. The prior starts off as $\text{ECGLastPeak}_0 + \text{Latency}$. It obeys the following propagation function:

$\text{ABPLastPeak}_t = \begin{cases} t & \text{if } \text{ABPPeak}_t = 1 \\ \text{ABPLastPeak}_{t-1} & \text{otherwise} \end{cases}$

A.7 ECGArtifact and ABPArtifact

ECGArtifact and ABPArtifact are boolean-valued variables that represent whether we believe there is currently an artifact associated with either the ECG or the ABP signal, respectively. Their prior and propagation behavior are exactly the same. A value of 1 represents an artifact and 0 represents no artifact. The prior belief for the artifacts starts off as 1 with a small probability (0.01 in our implementation). Each obeys the following propagation function:

$\text{Artifact}_t \sim \text{Bernoulli}(q_t)$  (11)

Here, $q_t$ represents $\Pr(\text{Artifact}_t = 1 \mid \text{Artifact}_{t-1})$, which is determined by the following conditional probability table:

Artifact_{t-1}   Artifact_t   Pr(Artifact_t | Artifact_{t-1})
1                1            0.99
1                0            0.01
0                1            0.01
0                0            0.99

This table represents a form of inertia. Given that there is currently an artifact, the probability that there continues to be an artifact is high. Likewise, if there is currently an absence of an artifact, the probability that there continues to be an absence is similarly high.
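A minimal sketch of sampling this artifact transition, using the 0.99/0.01 values from the table above:

import random

def propagate_artifact(prev_artifact):
    p_artifact = 0.99 if prev_artifact else 0.01   # Pr(artifact now | previous artifact state)
    return random.random() < p_artifact            # stays in its current state with high probability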

Appendix B Observation Model

The observation model (sensor model) encodes our beliefs about the functions we use to derive observations and the probability that the observations correspond to the current states. The structure of the dependencies is shown in Figure 5.

B.1 Annotation Observations

These are the ECGAnn and the ABPAnn variables in the model. The observations are derived by using the WABP and GQRS algorithms that are provided by Physionet. We simply use a specified window size and check whether the signal processing algorithms place any annotations within a given window. If so, then the observation is set to true (1), and otherwise it is set to false (0). Then, we define the probability of the annotations given the states according to the following probability table, where Peak, Artifact, and Ann refer to the corresponding ECG or ABP variables:

Peak   Artifact   Ann   Pr(Ann | Peak, Artifact)
0      1          1     (p_t + 0.5) / 2
0      1          0     1 - (p_t + 0.5) / 2
0      0          1     p_t
0      0          0     1 - p_t
1      1          1     0.7
1      1          0     0.3
1      0          1     0.99
1      0          0     0.01

In the case that there is a peak and there is no artifact, the belief that there should be an annotation is quite high (0.99), because there is a high likelihood of signal accuracy. Correspondingly, the belief that there should not be an annotation is quite low (0.01).

In the case that there is a peak and there is an artifact, the belief that there should be an annotation is lower than without an artifact (0.7), because the artifact means that we are not entirely able to trust the signal. Correspondingly, the belief that there should not be an annotation is higher than without an artifact (0.3). In general, when we fix the other states, the presence of an artifact makes our annotation beliefs less certain.

Next, in the case that there is no peak and no artifact, the probability should represent the likelihood that the observation can be trusted. Since the probability of the annotation occurring in this case truly depends on the TrueHR and the LastPeak, we end up calculating it in the same way as the $p_t$ used in the Peak propagation. First we define the difference between the current timestep and the last peak as $d_t = t - \text{LastPeak}_t$. Then we define $n_t = \mathrm{round}\!\left(\frac{60}{w \cdot \text{TrueHR}_t}\right)$, which is the number of windows per beat based on the current heart rate.

$\Pr(\text{Ann}_t = 1 \mid \text{Peak}_t = 0, \text{Artifact}_t = 0) = p_t = \binom{n_t}{d_t \bmod n_t} \left(\tfrac{2}{3}\right)^{d_t \bmod n_t} \left(\tfrac{1}{3}\right)^{n_t - (d_t \bmod n_t)}$  (12)

Then, we know that the probability of the annotation not occurring will simply be $1 - p_t$.

Finally, in the case that there is no peak and an artifact, we can apply what we used earlier and say that the presence of an artifact makes our belief about the annotation less certain. So if we strongly believe there would be an annotation, in the case of an artifact, we only moderately believe there would be an annotation. Likewise, if we don't believe there would be an annotation, then an artifact would make us only moderately disbelieve in an annotation. This means that we can simply represent the probability according to the following definition:

$\Pr(\text{Ann}_t = 1 \mid \text{Peak}_t = 0, \text{Artifact}_t = 1) = \frac{p_t + 0.5}{2}$  (13)

Appendix C Heart Rate Observations

These are the ECGHR and the ABPHR variables in the model (collectively, the HR observation variables). The observations are derived by using the GQRS and WABP algorithms provided by Physionet, with a sliding window of a specified size that computes the local heart rate. The observations are set accordingly. Then, we define the probability of the observations given the states according to the following:

$\Pr(\text{HR}^{\text{obs}}_t \mid \text{TrueHR}_t) = \mathcal{N}\!\left(\text{HR}^{\text{obs}}_t;\ \text{TrueHR}_t,\ \tfrac{|\text{TrueHR}_t|}{4}\right)$  (14)

Here, $\mathcal{N}(\cdot;\ \mu, \sigma)$ corresponds to the probability density function of a Gaussian random variable with the given parameters.
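A sketch of this heart rate observation likelihood, with the standard deviation |TrueHR|/4 taken from the BLOG listing in Appendix D.2:

# Sketch: likelihood of an observed heart rate given a particle's TrueHR.
from scipy.stats import norm

def hr_likelihood(observed_hr, true_hr):
    return norm.pdf(observed_hr, loc=true_hr, scale=abs(true_hr) / 4.0)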

Appendix D SQI Observations

These correspond to the ECGSQI and ABPSQI variables. The ECGSQI is calculated by taking two signal processing algorithms and comparing them beat by beat. It was derived using the ecgsqi function that was developed in another Physionet submission [?]. In particular, we use gqrs (an unpublished algorithm optimized for sensitivity) and wqrs (an open-source algorithm optimized for adult human ECGs). The fraction of beats that match between the two algorithms is reported as a real number between 0 and 1. The ABPSQI is calculated by checking whether the pressure, the mean arterial pressure, the heart rate, the pulse pressure, and a variety of other physiologic details are within normal ranges. If everything is in a normal range, the ABPSQI is 1; otherwise it is 0. This was also derived from the abpsqi function in the same Physionet submission, which was in turn borrowing from other publications [?] [?].

The SQI observations are used to choose which annotation and heart rate observations to depend on. If the ECGSQI is low and the ABPSQI indicates a valid ABP signal, then we weight based on the ABP annotations and heart rates; otherwise we weight based on the ECG annotations and heart rates.
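A sketch of this gating logic, under the assumption of a simple threshold on the ECGSQI; the 0.8 value below is purely illustrative and not taken from the paper.

# Sketch: decide whether to weight by ABP or ECG observations (threshold is hypothetical).
def use_abp_observations(ecg_sqi, abp_sqi, ecg_sqi_threshold=0.8):
    return ecg_sqi < ecg_sqi_threshold and abp_sqi == 1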

D.1 BLOG Code - Propagation Functions

The following code is the propagation model represented in the BLOG language:

// Propagation functions: each random function defines the prior (at timestep @0)
// and the transition distribution for one latent variable.
random Real Rest_HR(Timestep t) ~
    if t == @0 then
        Gaussian(avg_hr, 10)
    else
        Rest_HR(prev(t));

random Real True_HR(Timestep t) ~
    if t == @0 then
        Gaussian(Rest_HR(t), 5)
    else
        Gaussian(.2*Rest_HR(prev(t)) + .8*True_HR(prev(t)), 1);

random Integer ECG_Art(Timestep t) ~
    if t == @0 then
        Bernoulli(.01)
    else case ECG_Art(prev(t)) in {
        0 -> Bernoulli(.01),
        1 -> Bernoulli(.99)
    };

random Integer ECG_Peak(Timestep t) ~
    if t == @0 then
        Bernoulli(.01)
    else
        Bernoulli(binompdf(round(60/(w_off*True_HR(prev(t)))), 2.0/3.0,
            (toInt(t)-ECG_Last_Peak(prev(t)))%round(60/(w_off*True_HR(prev(t))))));

random Integer ECG_Last_Peak(Timestep t) ~
    if t == @0 then
        UniformInt(-round(60/(w_off*True_HR(t))), -1)
    else
        if ECG_Peak(t) == 1 then
            toInt(t)
    else
        ECG_Last_Peak(prev(t));

random Integer ABP_Art(Timestep t) ~
    if t == @0 then
        Bernoulli(.01)
    else case ABP_Art(prev(t)) in {
        0 -> Bernoulli(.01),
        1 -> Bernoulli(.99)
    };

random Integer ABP_Peak(Timestep t) ~
    if (ECG_Last_Peak(t) + Latency) == toInt(t) then
        1
    else
        0;

random Integer ABP_Last_Peak(Timestep t) ~
    if t == @0 then
        ECG_Last_Peak(t) + Latency
    else
        if ABP_Peak(t) == 1 then
            toInt(t)
    else
        ABP_Last_Peak(prev(t));

D.2 BLOG Code - Observation Functions

The following code is the observation model represented in the BLOG language:

// Observation functions: distributions of the observed variables given the
// latent state at each timestep.
random Integer ECG_Ann(Timestep t) ~
    if ECG_Peak(t) == 1 then
        if ECG_Art(t) == 1 then
            Bernoulli(.7)
        else
            Bernoulli(.9)
    else
        if ECG_Art(t) == 1 then
            Bernoulli((binompdf(round(60/(w_off*True_HR(t))), 2.0/3.0,
                (toInt(t)-ECG_Last_Peak(t))%round(60/(w_off*True_HR(t)))) + .5)/2)
        else
            Bernoulli(binompdf(round(60/(w_off*True_HR(t))), 2.0/3.0,
                (toInt(t)-ECG_Last_Peak(t))%round(60/(w_off*True_HR(t)))));

random Real ECG_HR(Timestep t) ~
    Gaussian(True_HR(t), abs(True_HR(t))/4);

random Integer ABP_Ann(Timestep t) ~
    if ABP_Peak(t) == 1 then
        if ABP_Art(t) == 1 then
            Bernoulli(.7)
        else
            Bernoulli(.9)
    else
        if ABP_Art(t) == 1 then
            Bernoulli((binompdf(round(60/(w_off*True_HR(t))), 2.0/3.0,
                (toInt(t)-ABP_Last_Peak(t))%round(60/(w_off*True_HR(t)))) + .5)/2)
        else
            Bernoulli(binompdf(round(60/(w_off*True_HR(t))), 2.0/3.0,
                (toInt(t)-ABP_Last_Peak(t))%round(60/(w_off*True_HR(t)))));

random Real ABP_HR(Timestep t) ~
    Gaussian(True_HR(t), abs(True_HR(t))/4);

D.3 Running BLOG

The following is the shell script we ran to execute the BLOG particle filter:

#!/bin/sh

time ./../../blog/dblog \
  prop_fn.dblog \
  query.dblog \
  obs_fn.dblog \
  obs.dblog \
  -n 1000 -o out.json

The query.dblog and obs.dblog files contained the data we wanted to load. This command ran the particle filter with 1000 particles and wrote the output to the out.json file. Note: we will be making the MATLAB and BLOG code we wrote available at a later date.

References

  1. Arulampalam, M. S., Maskell, S., Gordon, N. and Clapp, T. (2002). A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking, IEEE Transactions on Signal Processing 50: 174-188.
  2. Chopra, V. and McMahon, L. F. (2014). Redesigning hospital alarms for patient safety, JAMA.
  3. De Cooman, T., Goovaerts, G., Varon, C., Widjaja, D. and Huffel, S. V. (2014). Heart beat detection in multimodal data using signal recognition and beat location estimation, Computing in Cardiology.
  4. Doucet, A. and Johansen, A. M. (2011). A tutorial on particle filtering and smoothing: fifteen years later, The Oxford Handbook of Nonlinear Filtering, pp. 4-6.
  5. Drew, B. J., Harris, P., Zègre-Hemsey, J. K., Mammone, T., Schindler, D., Salas-Boni, R., Bai, Y., Tinoco, A., Ding, Q. and Hu, X. (2014). Insights into the problem of alarm fatigue with physiologic monitor devices: A comprehensive observational study of consecutive intensive care unit patients, PLoS ONE.
  6. Goldberger, A. L., Amaral, L. A. N., Glass, L., Hausdorff, J. M., Ivanov, P. C., Mark, R. G., Mietus, J. E., Moody, G. B., Peng, C. K. and Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals, Circulation 101(23): e215-e220. doi: 10.1161/01.CIR.101.23.e215.
  7. Gordon, A. D., Henzinger, T. A., Nori, A. V. and Rajamani, S. K. (2014). Probabilistic programming, Proceedings of the Future of Software Engineering (FOSE), ACM, pp. 167-181.
  8. Hoog Antink, C., Bruser, C. and Leonhardt, S. (2014). Multimodal sensor fusion of cardiac signals via blind deconvolution: A source-filter approach, Computing in Cardiology.
  9. Johannesen, L., Vicente, J., Scully, C. G., Galeotti, L. and Strauss, D. G. (2014). Robust algorithm to locate heart beats from multiple physiological waveforms, Computing in Cardiology.
  10. Johnson, A. E., Behar, J., Andreotti, F., Clifford, G. D. and Oster, J. (2014). R-peak estimation using multimodal lead switching, Computing in Cardiology 41: 281-284.
  11. MacDonald, I. (2014). Hospitals rank alarm fatigue as top patient safety concern, FierceHealthcare. http://www.fiercehealthcare.com/story/hospitals-rank-alarm-fatigue-top-patient-safety-concern/2014-01-22
  12. Milch, B., Marthi, B., Russell, S., Sontag, D., Ong, D. L. and Kolobov, A. (2007). BLOG: Probabilistic models with unknown objects, Introduction to Statistical Relational Learning.
  13. Moody, G., Moody, B. and Silva, I. (2014). Robust detection of heart beats in multimodal data: The PhysioNet/Computing in Cardiology Challenge 2014, Computing in Cardiology.
  14. Pangerc, U. and Jager, F. (2014). Robust detection of heart beats in multimodal data using integer multiplier digital filters and morphological algorithms, Computing in Cardiology.
  15. Silva, I., Moody, B., Behar, J., Johnson, A., Oster, J., Clifford, G. D. and Moody, G. (2015). Robust detection of heart beats in multimodal data, Physiological Measurement.
  16. Silva, I. and Moody, G. (2014). An open-source toolbox for analysing and processing PhysioNet databases in MATLAB and Octave, Journal of Open Research Software 2(1): 27. http://dx.doi.org/10.5334/jors.bi
  17. Sun, J. X. (2006). Cardiac output estimation using arterial blood pressure waveforms. Master's thesis.
  18. Sun, J. X., Reisner, A. T., Saeed, M. and Mark, R. G. (2005). Estimating cardiac output from arterial blood pressure waveforms: a critical evaluation using the MIMIC II database, Computing in Cardiology, pp. 295-298.
  19. Vollmer, M. (2014). Robust detection of heart beats using dynamic thresholds and moving windows, Computing in Cardiology.
  20. Welch, J. P., Ford, P. J., Teplick, R. S. and Rubsamen, R. M. (1991). The Massachusetts General Hospital-Marquette Foundation hemodynamic and electrocardiographic database - comprehensive collection of critical care waveforms, Journal of Clinical Monitoring 7(1): 96-97.
  21. Zong, W., Moody, G. and Mark, R. (1998). Effects of vasoactive drugs on the relationship between ECG-pulse wave delay time and arterial blood pressure in ICU patients, Computing in Cardiology, pp. 673-676.