Predictive Multi-level Patient Representations from Electronic Health Records

Zichang Wang, Haoran Li, Luchen Liu, Haoxian Wu and Ming Zhang (the first two authors contributed equally)
Department of Computer Science, Peking University, Beijing, China
{dywzc123, lhrshitc, liuluchen292}@163.com, {MOVIEGEORGE, mzhang_cs}@pku.edu.cn
Abstract

The advent of the Internet era has led to explosive growth in Electronic Health Records (EHR) over the past decades. EHR data can be regarded as a collection of clinical events, including laboratory results, medication records, physiological indicators, etc., which can be used for clinical outcome prediction tasks to support the construction of intelligent health systems. Learning patient representations from these clinical events for clinical outcome prediction is an important but challenging step. Most related studies transform the EHR data of a patient into a sequence of clinical events in temporal order and then use sequential models to learn patient representations for outcome prediction. However, a clinical event sequence contains thousands of event types and complex temporal dependencies. We further observe that clinical events occurring within a short period are not constrained by any temporal order, whereas events over the long term are governed by temporal dependencies. This multi-scale temporal property makes it difficult for traditional sequential models to capture both the short-term co-occurrence and the long-term temporal dependencies in clinical event sequences. In response to these challenges, this paper proposes a Multi-level Representation Model (MRM). MRM first uses a sparse attention mechanism to model short-term co-occurrence, then uses interval-based event pooling to remove redundant information and reduce sequence length, and finally predicts clinical outcomes through Long Short-Term Memory (LSTM). Experiments on real-world datasets indicate that our proposed model substantially improves the performance of clinical outcome prediction tasks on EHR data.

Electronic Health Records, Deep Learning, Machine Learning

I Introduction

In the past decades, the scale of Electronic Health Records (EHR) has grown explosively, as the advent of the Internet era has made the construction of electronic medical record systems possible.

We focus on clinical outcome prediction based on learned patient representations of event sequences [8, 10]. Electronic medical record data can be considered a collection of clinical events spanning thousands of event types, such as diagnoses, laboratory tests, medication records, activity records, and physical signs. Clinical outcome prediction based on patient representations learns a low-dimensional representation of the patient from the electronic medical record data and predicts the results of specified clinical events, which can assist medical experts in making correct clinical decisions.

Some related works sort the clinical events in the electronic medical record data by their time of occurrence, converting the data into a sequence of clinical events. On this basis, an embedding layer is used to represent the clinical events, and a sequence model is then used to capture the temporal dependencies between events and predict the results of specified clinical events. However, clinical outcome prediction models under this framework face several challenges:

  • Clinical events in electronic medical records carry rich, complex and high-dimensional information, spanning thousands of event types.

  • Events within a small temporal neighborhood are unordered, yet the interactions among these events are still predictive.

  • The sequences are too long for sequential models such as Long Short-Term Memory networks (LSTM) to capture long-term dependencies.

To address the above challenges, we propose a Multi-level Representation Model (MRM). MRM uses an attention mechanism to capture the short-term co-occurrence of events and obtain a low-level neighborhood representation of each event. A pooling mechanism is then used to reduce the length of the clinical event sequence, exploiting the short-term disorder of clinical events. Finally, MRM uses an LSTM to capture long-term temporal dependencies between events, obtain the final patient representation, and predict the outcome of a given clinical event.

The main contributions of this paper are as follows:

  • Compared to studies that use only medical codes or dozens of event types, this paper makes use of nearly a thousand event types together with event features to make predictions.

  • This paper proposes a multi-level representation model of patient medical records to capture the short-term co-occurrence and long-term temporal dependencies between clinical events. Its effectiveness is verified in experiments on real-world data.

  • The interval-based event pooling mechanism proposed in this paper preserves the integrity of information while removing redundant information and reducing sequence length.

II Related Works

II-A Patient Representation from EHR

One general patient representation method that makes direct use of high-dimensional EHR data is to represent a patient with a vector recording the count of each type of clinical event [4, 5, 6]. However, this representation ignores the relative order of clinical events and lacks a detailed description of their features.

Another method uses a matrix in which the rows represent time intervals and the columns represent event types [12, 14]. Wang et al. use convolutional non-negative matrix factorization to decompose such matrices [12]. Zhou et al. decompose the matrix into the product of a latent medical concept matrix and a concept value evolution matrix [14]. These methods depend on a time span that must be set in advance, still use only event occurrence information, and lack more detailed features.

The temporal phenotyping approach proposed by Liu et al. converts the EHR data into a graph whose nodes represent clinical events and whose edge weights represent the correlation between connected nodes [7]. Such methods focus on capturing the short-term co-occurrence of events but ignore the long-term temporal dependencies between important events.

II-B Deep Sequential Models for EHR

Some related works introduce the time or interval information of events into the model to address inconsistent sampling frequencies [1, 2, 13, 9]. For example, Che et al. multiply the hidden state by a time-decay factor before computing the next hidden state in a Gated Recurrent Unit (GRU) [2]. Zheng et al. balance the inheritance and update of hidden states with a time-decay function when updating the GRU hidden state [13]. Bai et al. propose the Timeline model to capture how quickly the effects of different events on a patient decay [1]. These efforts use time-decay factors to handle irregular sampling in clinical event sequences but do not consider the short-term disorder within the sequences.

Choi et al. propose RETAIN [3], a model that combines RNNs with attention mechanisms. RETAIN divides the sequence into several visits and then uses an attention mechanism to generate the patient representation from the visit representations. However, RETAIN uses only the medical codes of clinical events.

III Methodology

Fig. 1: The architecture of MRM. Each event $e_i$ is encoded as a representation $x_i$. The short-term co-occurrence mechanism gathers neighborhood information and generates the event representation $h_i$. The interval-based event pooling mechanism then divides the event sequence into groups and generates the group representations $g_k$. Finally, the sequence of group representations is fed to an LSTM, which generates the output $\hat{y}$.

This section presents the proposed Multi-level Representation Model (MRM) in detail. We first formalize the problem studied in this paper and then introduce the mechanisms of the model, shown in Fig. 1.

III-A Notations

A clinical event can be formalized as a quadruple $e_i = (c_i, t_i, f_i, v_i)$, where $c_i$, $t_i$, $f_i$ and $v_i$ stand for the encoding of the event, its occurrence time, its categorical feature and its numerical feature, respectively.
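For concreteness, here is a minimal sketch of this event structure in Python; the field names are our own illustration, not the paper's implementation.

```python
from typing import NamedTuple, Optional

class ClinicalEvent(NamedTuple):
    """One clinical event from an EHR sequence (illustrative field names)."""
    code: int                     # c_i: encoding (type) of the event
    time: float                   # t_i: occurrence time, e.g. hours since admission
    cat_feature: int              # f_i: categorical feature
    num_feature: Optional[float]  # v_i: numerical feature, if any

# Example: a potassium lab result measured 3.5 hours after admission
event = ClinicalEvent(code=1021, time=3.5, cat_feature=2, num_feature=4.1)
```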

III-B Multi-level Representation Model

III-B1 Short-term Co-occurrence Modeling

Our method uses an attention mechanism to model the short-term co-occurrence of events. For an event representation $x_i$, we generate a neighborhood representation $h_i$ based on the attention mechanism over the representations of its neighboring events.

We assume that events occurring within a short period are unordered, so the short-term co-occurrence between events can be captured by an attention mechanism, which does not consider order.

We now introduce the short-term co-occurrence modeling mechanism in detail. For the event $e_i$, we consider that the events occurring within the time interval $\Delta t$ around $t_i$ have a short-term co-occurrence with $e_i$. We use $N(i)$ to denote the index set of these events, defined as follows:

$N(i) = \{\, j : |t_j - t_i| \le \Delta t \,\}$   (1)

Referring to the attention mechanism in related works [11], our method calculates $h_i$ as follows:

$q_i = W_Q x_i, \quad k_j = W_K x_j, \quad v_j = W_V x_j$   (2)

$\alpha_{ij} = \frac{\exp(q_i^{\top} k_j / \sqrt{d_a})}{\sum_{j' \in N(i)} \exp(q_i^{\top} k_{j'} / \sqrt{d_a})}$   (3)

$h_i = \sum_{j \in N(i)} \alpha_{ij} v_j$   (4)

where $W_Q, W_K, W_V \in \mathbb{R}^{d_a \times d}$ are learnable projection matrices, $d$ is the model dimension and $d_a$ is the dimension of the attention mechanism.

As $|N(i)|$ can be quite large in real data, it is difficult to capture all of the co-occurrences. Thus, for an event $e_i$, we only attend to the $K$ events in $N(i)$ that are closest to it in time.

Our method also uses a multi-head attention mechanism. The final representation is as follows:

$h_i = \big[\, h_i^{(1)}; h_i^{(2)}; \dots; h_i^{(H)} \,\big]$   (5)

where each head $h_i^{(m)}$ is computed as defined above; the heads share the same structure but have separate parameters $W_Q^{(m)}, W_K^{(m)}, W_V^{(m)}$. We guarantee that $H \cdot d_a = d$.
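To make Eqs. (1)-(5) concrete, the following numpy sketch implements the sparse multi-head neighborhood attention under the definitions above; the function names, argument defaults and calling convention are our own assumptions, not the paper's code.

```python
import numpy as np

def neighborhood_attention(X, times, W_Q, W_K, W_V, delta_t=0.5, top_k=4):
    """One attention head over each event's temporal neighborhood.

    X: (n, d) event representations x_i; times: (n,) occurrence times t_i.
    W_Q, W_K, W_V: (d, d_a) projection matrices. Returns an (n, d_a) array.
    """
    n = X.shape[0]
    d_a = W_Q.shape[1]
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    H = np.zeros((n, d_a))
    for i in range(n):
        # Eq. (1): events within the time interval delta_t around t_i
        nbr = np.where(np.abs(times - times[i]) <= delta_t)[0]
        # Sparse attention: keep only the top_k temporally closest events
        if len(nbr) > top_k:
            order = np.argsort(np.abs(times[nbr] - times[i]))
            nbr = nbr[order[:top_k]]
        # Eqs. (2)-(4): scaled dot-product attention over the neighborhood
        scores = K[nbr] @ Q[i] / np.sqrt(d_a)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        H[i] = weights @ V[nbr]
    return H

def multi_head_attention(X, times, heads):
    """Eq. (5): concatenate all heads (H * d_a = d).
    `heads` is a list of (W_Q, W_K, W_V) parameter tuples."""
    return np.concatenate(
        [neighborhood_attention(X, times, *head) for head in heads], axis=1)
```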

III-B2 Interval-based Event Pooling

$h_i$ contains the neighborhood information around $e_i$. If an event $e_j$ is very close to $e_i$, the information contained in $h_i$ and $h_j$ will be quite similar. Due to this similarity between neighboring elements, it is difficult for an RNN to process the sequence directly.

We therefore propose a pooling mechanism based on event intervals to solve the above problem. The clinical event sequence is first divided into several non-overlapping event groups according to the distribution density of events, and each group then passes through a max-pooling layer separately. The division should satisfy two conditions: 1) the period covered by an event group must be as small as possible; 2) the number of event groups should not be too large.

Let $G_k$ be the index set of the events contained in the $k$-th event group, and let $\mathcal{G} = \{G_1, \dots, G_m\}$ be the set of all event groups.

$M$ is the limit on the number of groups and $L$ is the limit on the number of events in a group.

To make each group's time span as small as possible, we define the time span function as follows:

$s(G_k) = \max_{j \in G_k} t_j - \min_{j \in G_k} t_j$   (6)

The optimal partition of the sequence is obtained by minimizing the maximum time span over the partition, subject to $|\mathcal{G}| \le M$ and $|G_k| \le L$:

$\mathcal{G}^{*} = \arg\min_{\mathcal{G}} \max_{G_k \in \mathcal{G}} s(G_k)$   (7)

We can obtain the optimal partition with a combination of binary search and a greedy algorithm: binary-search the maximal allowed span, and greedily check whether the sequence can be cut into at most $M$ groups of at most $L$ events under that span.
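The following Python sketch shows one way to realize this procedure under our reading of the text; the function names are our own, and the number of binary-search iterations is an arbitrary precision choice.

```python
def greedy_partition(times, span, max_events):
    """Greedily cut a sorted time sequence into consecutive groups whose
    time span is at most `span` and whose size is at most `max_events`.
    Returns the groups as lists of event indices."""
    groups, start = [], 0
    for i in range(1, len(times) + 1):
        if (i == len(times) or times[i] - times[start] > span
                or i - start >= max_events):
            groups.append(list(range(start, i)))
            start = i
    return groups

def optimal_partition(times, max_groups=64, max_events=32):
    """Binary search over the allowed span (Eq. (7)): find the smallest
    maximal group span for which the greedy partition needs at most
    `max_groups` groups. Assumes len(times) <= max_groups * max_events."""
    lo, hi = 0.0, float(times[-1] - times[0])
    for _ in range(50):  # enough iterations for floating-point precision
        mid = (lo + hi) / 2
        if len(greedy_partition(times, mid, max_events)) <= max_groups:
            hi = mid
        else:
            lo = mid
    return greedy_partition(times, hi, max_events)
```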

After max-pooling within each event group, the representation of the event group is obtained. The representation $g_k$ of the $k$-th event group is calculated as follows:

$g_k = \max_{j \in G_k} h_j$   (8)

where the maximum is taken element-wise.
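Eq. (8) amounts to an element-wise max over the event representations in each group; a short numpy sketch, reusing the groups produced by the partition above:

```python
import numpy as np

def group_max_pool(H, groups):
    """Eq. (8): element-wise max-pooling of the event representations h_j
    within each event group, giving one vector g_k per group.
    H: (n, d) array; groups: list of index lists from optimal_partition."""
    return np.stack([H[g].max(axis=0) for g in groups])
```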

III-B3 Long-term Temporal Dependency Modeling

We use an LSTM to process the event group representation sequence. In the $t$-th iteration, the LSTM cell takes the former output $o_{t-1}$, the former cell state $s_{t-1}$ and the event group representation $g_t$ as input and generates $o_t$ as output:

$o_t, s_t = \mathrm{LSTM}(o_{t-1}, s_{t-1}, g_t)$   (9)

Each application of Eq. (9) represents one iteration.

The last output $o_m$ is the representation $r$ of the clinical event sequence, as well as of the patient.

We use a sigmoid function to obtain the prediction $\hat{y}$ from the patient representation $r$:

$\hat{y} = \sigma(W_p r + b_p)$   (10)

where $W_p$ and $b_p$ are parameters to learn.

Then we use the cross-entropy loss function to compute the classification loss from the true label $y$ and the prediction $\hat{y}$:

$\mathcal{L} = -\big(\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\big)$   (11)
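Since the paper reports a Keras implementation (Section IV-A), here is a minimal sketch of this final stage in modern tf.keras: an LSTM over the group representation sequence, a sigmoid output for Eq. (10), and the cross-entropy loss of Eq. (11). The zero-padding and masking of unused group slots are our assumption, not a detail given in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

MAX_GROUPS, D = 64, 64  # M group slots, model dimension d (Section IV-A)

# Input: the sequence of group representations g_k, zero-padded to MAX_GROUPS
inputs = keras.Input(shape=(MAX_GROUPS, D))
masked = layers.Masking(mask_value=0.0)(inputs)   # skip padded group slots
r = layers.LSTM(D)(masked)                        # Eq. (9); last output o_m = r
y_hat = layers.Dense(1, activation="sigmoid")(r)  # Eq. (10)

model = keras.Model(inputs, y_hat)
model.compile(optimizer="adam",                   # Adam, as in Section IV-A
              loss="binary_crossentropy",         # Eq. (11)
              metrics=[keras.metrics.AUC(name="auc")])
```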

IV Experiments

IV-A Experiment Settings

This subsection describes the parameter settings and training method of the proposed MRM.

The parameter settings for MRM are as follows:

The model dimension $d$ is 64, the number of refined event types is 3418, the number of features is 649, and the maximum number of features per event is 3. The time interval $\Delta t$ is 0.5 hours, the number of attention heads $H$ is 8, the dimension of the attention mechanism $d_a$ is 8, and the number of reserved neighbor events $K$ is 4. The maximum number of event groups $M$ is 64 and the maximum number of events in a group $L$ is 32.
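For reference, these settings can be collected into a single configuration; the key names below are our own, as the paper does not specify a configuration format.

```python
# Hyperparameters of MRM from Section IV-A, gathered in one place.
MRM_CONFIG = {
    "model_dim": 64,             # d
    "num_event_types": 3418,     # refined event vocabulary
    "num_features": 649,         # feature vocabulary
    "max_features_per_event": 3,
    "time_interval_hours": 0.5,  # Delta t, co-occurrence window
    "num_heads": 8,              # H
    "attention_dim": 8,          # d_a (H * d_a = 64 = d)
    "reserved_events": 4,        # K closest neighbors kept per event
    "max_groups": 64,            # M
    "max_group_size": 32,        # L
}
```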

This work divides the dataset into 3 parts: training set (70%), validation set (10%), and test set (20%).

All of the network structures mentioned in this part are implemented in Keras and Theano and optimized with the Adam method.

IV-B Experiment Analysis

We compare the proposed MRM with two types of models: traditional statistical models and sequential neural network models. We use two datasets, death and labtest, described in previous work [8].

The sequential models take the event representations described in Section III-B as input. The statistical models take as input a vector $u$ that records the number of occurrences of each event type, defined by $u = \sum_i \mathrm{onehot}(c_i)$, where $\mathrm{onehot}(c_i)$ is the one-hot encoding vector of the event $e_i$.
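A short sketch of how such a count vector can be built, assuming the ClinicalEvent structure sketched in Section III-A:

```python
import numpy as np

def count_vector(events, num_event_types=3418):
    """Bag-of-events input u for the statistical baselines: the sum of the
    one-hot encodings of the event codes, i.e. per-type occurrence counts."""
    u = np.zeros(num_event_types)
    for e in events:
        u[e.code] += 1  # e is a ClinicalEvent from the sketch in Section III-A
    return u
```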

The baseline models compared with MRM are as follows:

  • SVM takes the count vector $u$ as its input.

  • Logistic Regression takes the count vector $u$ as its input, with L2 regularization. It is denoted LR.

  • LSTM uses the LSTM model to process the sequential event data and adds a sigmoid layer for prediction at the end.

  • RETAIN is described in the related works. This method uses a fixed partition of length 32 to divide the sequence.

  • Timeline is described in the related works. Its input configuration is the same as RETAIN's.

  • TCN is described in related work.

Methods AUC(death) AP(death) AUC(labtest) AP(labtest)
SVM 0.7523 0.5154 0.6587 0.2987
LR 0.8843 0.5213 0.6839 0.3014
RETAIN 0.8967 0.6244 0.7325 0.3196
Timeline 0.9349 0.7119 0.7455 0.3456
LSTM 0.9455 0.7414 0.7495 0.3513
TCN 0.8752 0.5752 0.7234 0.3131
MRM 0.9512 0.7695 0.7688 0.3714
TABLE I: Performance compared with baselines

Table I shows the experimental results of each model on the two datasets, from which we can draw the following conclusions:

  • All sequential models perform better on both tasks than SVM and LR, which are based on event frequency. This is because SVM and LR ignore not only the temporal information of the events but also their feature information.

  • RETAIN and TCN perform relatively poorly on both tasks. Although both use a multi-level representation to model clinical event sequences, their division of events depends either on visit information that must exist in the data or on a fixed step size.

  • The MRM model proposed in this paper outperforms the other models on both tasks. On the death prediction dataset, MRM improves the AUC by at least 0.6% and the AP by 3.7% relative to the other models. On the potassium ion concentration abnormality detection dataset, MRM improves the AUC by 2.5% and the AP by 5.7%. This is because MRM models the short-term co-occurrence of events with the attention mechanism and reduces the sequence length with the pooling mechanism, which lowers the difficulty of capturing long-term temporal dependencies.

V Conclusion

We propose MRM, a multi-level representation model for the long clinical event sequences generated from EHR data, with their complex event types and multi-scale temporal information. MRM uses a sparse attention mechanism to capture the short-term co-occurrence of events and an interval-based event pooling mechanism to reduce the sequence length while preserving as much of the temporal information between events as possible. Experiments on the death prediction and potassium ion concentration abnormality detection datasets, constructed from the open dataset MIMIC-III, demonstrate the effectiveness of MRM.

VI Acknowledgement

This paper is partially supported by National Key Research and Development Program of China with Grant No. 2018AAA0101900, Beijing Municipal Commission of Science and Technology under Grant No. Z181100008918005, and the National Natural Science Foundation of China (NSFC Grant No. 61772039 and No. 91646202).

References

  • [1] T. Bai, S. Zhang, B. L. Egleston, and S. Vucetic (2018) Interpretable representation learning for healthcare via capturing disease progression through time. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 43–51. Cited by: §II-B.
  • [2] Z. Che, D. Kale, W. Li, M. T. Bahadori, and Y. Liu (2015) Deep computational phenotyping. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, pp. 507–516. Cited by: §II-B.
  • [3] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, and W. Stewart (2016) Retain: an interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems, pp. 3504–3512. Cited by: §II-B.
  • [4] P. Dai, F. Gwadry-Sridhar, M. Bauer, and M. Borrie (2016) Bagging ensembles for the diagnosis and prognostication of alzheimer’s disease. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 3944–3950. Cited by: §II-A.
  • [5] M. Ghassemi, T. Naumann, F. Doshi-Velez, N. Brimmer, R. Joshi, A. Rumshisky, and P. Szolovits (2014) Unfolding physiological state: mortality modelling in intensive care units. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 75–84. Cited by: §II-A.
  • [6] V. Huddar, B. K. Desiraju, V. Rajan, S. Bhattacharya, S. Roy, and C. K. Reddy (2016) Predicting complications in critical care using heterogeneous clinical data. IEEE Access 4, pp. 7988–8001. Cited by: §II-A.
  • [7] C. Liu, F. Wang, J. Hu, and H. Xiong (2015) Temporal phenotyping from longitudinal electronic health records: A graph based framework. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 705–714. Cited by: §II-A.
  • [8] L. Liu, H. Li, Z. Hu, H. Shi, Z. Wang, J. Tang, and M. Zhang (2019) Learning hierarchical representations of electronic health records for clinical outcome prediction. In AMIA Annual Symposium, Cited by: §I, §IV-B.
  • [9] L. Liu, J. Shen, M. Zhang, Z. Wang, and J. Tang (2018) Learning the joint representation of heterogeneous temporal events for clinical endpoint prediction. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §II-B.
  • [10] L. Liu, H. Wu, Z. Wang, Z. Liu, and M. Zhang (2019) Early prediction of sepsis from clinical datavia heterogeneous event aggregation. In Computing in Cardiology, Cited by: §I.
  • [11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §III-B1.
  • [12] F. Wang, N. Lee, J. Hu, J. Sun, S. Ebadollahi, and A. F. Laine (2013) A framework for mining signatures from event sequences and its applications in healthcare data. IEEE Trans. Pattern Anal. Mach. Intell. 35 (2), pp. 272–285. Cited by: §II-A.
  • [13] K. Zheng, W. Wang, J. Gao, K. Y. Ngiam, B. C. Ooi, and J. W. L. Yip (2017) Capturing feature-level irregularity in disease progression modeling. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1579–1588. Cited by: §II-B.
  • [14] J. Zhou, F. Wang, J. Hu, and J. Ye (2014) From micro to macro: data driven phenotyping by densification of longitudinal electronic medical records. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 135–144. Cited by: §II-A.