Learning to Write Notes in Electronic Health Records


Peter J. Liu (peterjliu@google.com)
Google Brain
Mountain View, CA
Abstract

Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems. Much of this documentation work is seen as a burden, reducing time spent with patients and contributing to clinician burnout. With the aspiration of AI-assisted note-writing, we propose a new language modeling task: predicting the content of notes conditioned on past data from a patient’s medical record, including patient demographics, labs, medications, and past notes. We train generative models using the public, de-identified MIMIC-III dataset and compare generated notes with those in the dataset on multiple measures. We find that much of the content can be predicted, and that many common templates found in notes can be learned. We discuss how such models can be useful in supporting assistive note-writing features such as error-detection and auto-complete.


1 Introduction

According to a study (Sinsky et al., 2016), physicians spend nearly 2 hours doing administrative work for every hour of face-time with patients. The most time-consuming aspect of the administrative work is inputting clinical notes into the electronic health record (EHR) software, documenting information about the patient including health history, assessment (e.g. diagnoses), and treatment plan. Many of the documentation requirements are viewed as drudgery and are a major contributor to career dissatisfaction and burnout among clinicians. Furthermore, patient satisfaction is affected by the intrusion of this work into time spent with patients.

The severity of the problem is such that documentation-support industries have arisen as work-arounds, ranging from dictation services, where clinicians speak notes to be transcribed by a human or machine backend, to scribes, human assistants whose primary job is to write the notes. We share the view of Gellert et al. (2015) that this is a sign that EHR software usability should be improved.

Assistive-writing features for notes, such as auto-completion or error-checking, benefit from language models: the stronger the model, the more effective such features are likely to be. Thus the focus of this paper is building language models for clinical notes.

Our main contributions are:

  1. introducing a new medical language modeling task based on the MIMIC-III ('Medical Information Mart for Intensive Care') EHR dataset (Johnson et al., 2016);

  2. demonstrating how to represent the multi-modal mix of structured and unstructured (text) data found in EHRs as context for conditional language modeling;

  3. proposing multiple quantitative metrics for evaluating such models;

  4. showing that recently developed language models can predict much of the content in notes while capturing their global templates.

2 Related Work

The MIMIC-III dataset comprises de-identified electronic health records of 39,597 patients from the intensive-care unit (ICU) of a large, tertiary care hospital. It is the most comprehensive publicly and freely accessible dataset of its kind, including patient demographic data, medications ordered, laboratory measurements, and, of particular importance for this work, notes documented by care providers. The release of the data and its predecessor MIMIC-II (Saeed et al., 2011) has spurred many studies, predominantly focused on predicting clinical events such as acute kidney injury (Mandelbaum et al., 2011), mortality (Johnson et al., 2017), or diagnoses and medication orders (Choi et al., 2016). Many other EHR datasets have also been used to predict clinical events, although they are often nonpublic or lack clinical notes.

There exists substantial prior work on utilizing clinical notes for many purposes. Friedman et al. (2004) extract structured output from notes in the form of Unified Medical Language System (UMLS) codes. A common use of notes is to incorporate them as input to machine learning models that predict future clinically relevant events from past EHR data (Miotto et al., 2017; Rajkomar et al., 2018). Portet et al. (2009) automatically summarize EHR ICU data into text form using expert rules and templates. Jing et al. (2017) train models to generate medical imaging reports from x-rays, a type of image captioning task, although they do not use data from the EHR.

We believe we are the first to focus on the task of building conditional language models for notes based on EHR data. Outside of the medical domain, language modeling has been extensively studied in the natural language processing (NLP) community, including well-established benchmarks based on large corpora of English sentences (Marcus et al., 1993; Chelba et al., 2013).

Conditional language modeling has also been extensively studied, including for machine translation (Wu et al., 2016) and speech recognition (Chiu et al., 2017), using a class of techniques based on sequence-to-sequence learning (Sutskever et al., 2014). In that setting we model an output sequence (e.g. English words) conditioned on an input sequence (e.g. French words). However, most prior work maps a single modality to text, e.g. text-to-text or audio-to-text. In our work, the conditioning data is a diverse mix of static and time-dependent data, both structured and unstructured (text).

3 Methods

3.1 Conditional language models for clinical notes

Language models specify a probability distribution, $p(y)$, over pieces of language (e.g. sentences or documents), which can be seen as a sequence of tokens (e.g. words), $y = (y_1, \dots, y_n)$. Using the chain-rule of probability we can write:

$$p(y) = \prod_{i=1}^{n} p(y_i \mid y_1, \dots, y_{i-1}),$$

which decomposes the joint distribution into a product of conditional distributions of the current token given previous tokens. This also defines a generative model which allows us to sample likely tokens one at a time to generate full sequences.

Conditional language models are similar except they are provided with additional context data, $x$:

$$p(y \mid x) = \prod_{i=1}^{n} p(y_i \mid y_1, \dots, y_{i-1}, x).$$

The context could be of any modality: image, text, audio, etc. For example, in the case of machine translation, this context is the source-language sentence to be translated. In image captioning, the context may be an image (Vinyals et al., 2015).

In this work, the sequence to be predicted, $y$, is the text of the current clinical note, and the context is the past data extracted from the Electronic Health Record (EHR) for the same patient, $x_{ehr}$. We also augment the context with the intended note type, $x_{type}$, and a hint of the first 10 tokens from the current note, $x_{hint}$. This note-context, $x = (x_{hint}, x_{type}, x_{ehr})$, is intended to simulate what is known at the time a clinician begins writing a note.

Formally, we develop the model:

$$p(y \mid x) = \prod_{i=1}^{n} p(y_i \mid y_1, \dots, y_{i-1}, x),$$

where $y = (y_1, \dots, y_n)$ and $n \le N$, a maximum length smaller than that of the longest notes. We restrict the number of note tokens to predict to $N = 500$ (see Appendix A.5). As is typical in machine learning, we can view this as a supervised problem, learning the mapping $x \mapsto y$.

3.2 Extracting and representing context

Figure 1: Schematic showing which context data is extracted from the patient record.

In our experiments we only consider context data in the 24 hours leading up to the time of the note being written as shown in Figure 1. We experiment with the following context data classes:

  1. Demographic data ($x_{demo}$): In MIMIC-III, this is static data per patient, found in the Patients table. We extract gender and compute the age at the time of the note using the date-of-birth field.

  2. Medications ($x_{meds}$): From the Prescriptions table we extract the names of drugs prescribed to the patient within the context window.

  3. Labs ($x_{labs}$): From the LabEvents table, we extract the lab name, value, unit of measurement, and, if available, the flag saying whether the result is abnormal. These are lab tests ordered during the context window.

  4. Past notes ($x_{notes}$): We use the text of past notes in the context window.

The exact MIMIC-III table and column names can be viewed in Appendix A.2.

All the above data elements are converted to a string representation with special delimiting tokens between data classes. Using notation similar to Backus-Naur form, the context data is represented as:

<Context> ::= <Hint><NoteType><Demographic><MedList><LabList><NoteList>
<Demographic> ::= <Gender><Age>
<Hint> ::= first-10-tokens-of-note "<H>"
<NoteType> ::= note-type "<T>"
<Gender> ::= ("M" | "F") "<G>"
<Age> ::= age-in-years "<A>"
<MedList> ::= <Medication> "<M>" | <Medication> <Delim> <MedList>
<Medication> ::= drug-name
<Delim> ::= "|"
<LabList> ::= <Lab> "<L>" | <Lab> <Delim> <LabList>
<Lab> ::= lab-name "," lab-value "," unit-of-measurement <LabFlag>
<LabFlag> ::= "abnormal" | ""
<NoteList> ::= <Note> | <Note> "<N>" <NoteList>
<Note> ::= raw-note-text

An example instantiation of the input is the following:

Start of note <H>Nursing/other<T>F<G>46<A>Phenylephrine|Heparin<M>
Potassium,4.1,mEq/L,|Nitrogen,4,mg/dL,abnormal<L>Progress note<N>
Another progress note

In our experiments we truncate the number of context tokens to 500.
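
To make this representation concrete, the following is a minimal Python sketch of the serialization above; the function and argument names are hypothetical, and the delimiter tokens follow the grammar given earlier in this section:

def serialize_context(hint, note_type, gender, age, meds, labs, past_notes):
    """Renders the note-context as a single delimited string."""
    med_str = "|".join(meds) + "<M>"
    # Each lab is (name, value, unit, flag), where flag is "abnormal" or "".
    lab_str = "|".join(",".join(str(f) for f in lab) for lab in labs) + "<L>"
    note_str = "<N>".join(past_notes)
    return (hint + "<H>" + note_type + "<T>" + gender + "<G>"
            + str(age) + "<A>" + med_str + lab_str + note_str)

context = serialize_context(
    hint="Start of note ", note_type="Nursing/other", gender="F", age=46,
    meds=["Phenylephrine", "Heparin"],
    labs=[("Potassium", 4.1, "mEq/L", ""), ("Nitrogen", 4, "mg/dL", "abnormal")],
    past_notes=["Progress note", "Another progress note"])

The resulting string matches the example instantiation above and would then be truncated to 500 tokens.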

3.3 Task dataset construction

We construct our supervised dataset in two stages. In the first stage we randomly assign each patient to the train, validation, or test set. In the second stage, for every note in MIMIC-III, $y$, we create a supervised learning example, $(x, y)$.

We did not perform any patient cohort filtering as we want to use the model for any note of any patient. However, we partitioned the train, validation, and test sets so that each patient appears in exactly one set. This is to prevent the model from memorizing a particular patient’s history in training and using it to advantage in predicting test notes. The train/validation/test sizes were 1,682,869, 201,181, and 198,808 examples, respectively.
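
The patient-level split can be implemented with a deterministic hash, as in the following sketch; here notes is a hypothetical iterable of (subject_id, context, note_text) tuples, and the 80/10/10 fractions are an assumption roughly consistent with the reported set sizes:

import hashlib

def assign_split(subject_id, train_frac=0.8, valid_frac=0.1):
    """Deterministically maps a patient to exactly one split."""
    h = int(hashlib.md5(str(subject_id).encode()).hexdigest(), 16)
    u = (h % 10**6) / 10**6  # pseudo-uniform in [0, 1)
    if u < train_frac:
        return "train"
    if u < train_frac + valid_frac:
        return "validation"
    return "test"

splits = {"train": [], "validation": [], "test": []}
for subject_id, x, y in notes:
    # All examples for a given patient land in the same split.
    splits[assign_split(subject_id)].append((x, y))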

3.4 Input and Target encoding

We have reduced our conditional language modeling task to that of training a supervised model to map note-context, $x$, a sequence of strings, to note-text, $y$, another sequence of strings. Figure 2 illustrates how the data is transformed into input and output sequences.

Figure 2: Schematic showing how raw data is transformed to model training data.

Both input and target are tokenized using a sub-word tokenizer with a vocabulary of about 32,000, derived in a data-driven approach as described in Wu et al. (2016). This allows us to transform both input and output into sequences of integers representing sub-word tokens from the same vocabulary. In particular, we do not pre-process note text and retain all white-space included in the original raw notes. We found that the white-space in clinical notes is quite important in delineating sections and removing it would reduce readability.
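
As an illustration, such an encoder can be built with tensor2tensor's SubwordTextEncoder, which implements the data-driven vocabulary construction of Wu et al. (2016); the corpus generator below is a hypothetical stand-in, and API details may vary across tensor2tensor versions:

from tensor2tensor.data_generators import text_encoder

def corpus_texts():
    # Iterate over training inputs and targets (see the split sketch above).
    for x, y in splits["train"]:
        yield x
        yield y

# Build a ~32k sub-word vocabulary; raw white-space in notes is preserved.
encoder = text_encoder.SubwordTextEncoder.build_from_generator(
    corpus_texts(), 2**15)

ids = encoder.encode("IMPRESSION:  No acute intracranial process.")
assert encoder.decode(ids).startswith("IMPRESSION")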

3.5 Model architectures

We begin with a very strong baseline for sequence-to-sequence learning, the Transformer architecture introduced in Vaswani et al. (2017), which achieved state-of-the-art machine translation results. It is an encoder-decoder architecture without any recurrent sub-modules, in contrast to the recurrent neural network approaches that had dominated natural language processing until its publication.

In our task the input and output sequences may be much longer than in typical machine translation tasks, where the input and output sequences are sentences. As discussed in Section 5, we found that while the Transformer encoder-decoder (T-ED) architecture performs well with shorter contexts, it is unable to take advantage of greater context length.

Thus we also experiment with a recently introduced Transformer-based model that has been used effectively for large-scale abstractive summarization (i.e. with much longer sequences) in Liu et al. (2018): the Transformer with memory-compressed attention, or T-DMCA.

Both models are implemented as part of the open-source tensor2tensor package (Vaswani et al., 2018). The hyper-parameters used are described in Appendix A.5.

4 Evaluation

4.1 Evaluating the language models

We evaluate our conditional language models on several metrics, some standard and some specific for our task:

  1. Perplexity per token (PPL): A standard intrinsic metric used in evaluating language models is the perplexity per token, which measures how well the current token is predicted given previous tokens, averaged over the whole sequence. This measure is highly local in that predictive power beyond the next token is not taken into account. Note that perplexities from models with different vocabularies are not directly comparable; in our case all models share the same vocabulary of size 32,000. We report log-perplexity.

  2. Accuracy of next token (ACC): The accuracy of predicting the next token given previous tokens. Like perplexity, it is a highly local metric. A sketch of how both local metrics can be computed follows this list.

  3. ROUGE-1, ROUGE-2 (R1, R2): For a model-independent evaluation and a more global metric, we look at n-gram recall and precision statistics comparing the candidate generated note and the ground truth note. We use the commonly-used ROUGE package (Lin, 2004) to compute ROUGE-1 (unigram) and ROUGE-2 (bigram) scores. We report the F1 variant which is the harmonic mean of ROUGE-P (precision) and ROUGE-R (recall). ROUGE package parameters can be found in Appendix A.4.

  4. ROUGE-1 after boiler-plate removal (B-R1): We found that a significant amount of text in notes could be described as low-information boiler-plate. We attempt to remove boiler-plate by removing text lines that are also predicted by a model trained to generate a note conditioned only on the note type, $x_{type}$, and the hint, $x_{hint}$ (details provided in Appendix A.6). In particular, as discussed in Section 5.1, this model reliably produces the hint and many canonical section headings. After boiler-plate removal, we compute the ROUGE metrics as usual, and we report the proportion of text removed as boiler-plate alongside this number. This allows the metric to compare sections of greater information content, for example, sections written as a narrative by a human rather than auto-populated by an EHR template.

  5. Sex and Age accuracy: We use regular expressions to identify the implied age and sex in generated notes. We then report the overall accuracy of the model compared to the ground truth found in the MIMIC-III Patients table. We consider the age correct if it is within 1 year of the computed age at the time of the note. The regular expressions used can be found in Appendix A.3.

Note that all numbers except log-perplexity are expressed as percentages bounded between 0 and 100, where higher is better. For perplexity, lower is better.
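
The following is a minimal sketch of the two local metrics (PPL and ACC); it assumes steps is a list of (distribution, gold_token) pairs gathered by teacher-forcing the ground-truth note through a trained model, where each distribution maps candidate tokens to log-probabilities (a hypothetical interface):

def local_metrics(steps):
    """Returns (log-perplexity per token, next-token accuracy in %)."""
    nll, correct = 0.0, 0
    for dist, gold in steps:
        nll -= dist[gold]  # accumulate negative log-likelihood
        correct += int(max(dist, key=dist.get) == gold)  # top-1 match?
    n = len(steps)
    return nll / n, 100.0 * correct / n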

5 Results and Discussion

5.1 Template learning

In our examination of model samples we noticed that commonly-used templates in the dataset were learned and consistently employed in note generation. This was seen even in the models conditioned on very little information.

Figure 3 shows a generated note (from the validation set) from a model conditioned only on the note type ($x_{type}$) and the short hint ($x_{hint}$), along with the associated ground truth. Although much of the content is different (e.g. the age is wrong according to the Patients table), the template was well inferred, with appropriate section titles and formatting, and filled in with plausible values for a radiology report.

[**2101-7-12**] 5:44 PM
 CT HEAD W/O CONTRAST                                            Clip # [**Clip Number (Radiology) 105293**]
 Reason: eval for ICH
 ______________________________________________________________________________
 [**Hospital 2**] MEDICAL CONDITION:
  History: 79M with fall
 REASON FOR THIS EXAMINATION:
  eval for ICH
 No contraindications for IV contrast
 ______________________________________________________________________________
 WET READ: [**First Name9 (NamePattern2) 1333**] [**Doctor First Name 141**] [**2101-7-12**] 6:02 PM
  no acute intracranial process
 WET READ VERSION #1
 ______________________________________________________________________________
                                 FINAL REPORT
 HISTORY:  79-year-old male with fall.
 COMPARISON:  None available.
 TECHNIQUE:  Non-contrast head CT was obtained.
 FINDINGS:  There is no evidence of acute intracranial hemorrhage, mass effect,
 shift of normally midline structures, hydrocephalus, or acute major vascular
 territorial infarction.  The [**Doctor Last Name 107**]-white matter differentiation is preserved.
 The visualized paranasal sinuses and mastoid air cells are well aerated.
 IMPRESSION:  No acute intracranial process.
[**2101-7-12**] 5:44 AM
 CHEST (PORTABLE AP)                                             Clip # [**Clip Number (Radiology) 44638**]
 Reason: ETT,collapsed lung
 Admitting Diagnosis: CHEST PAIN
 ______________________________________________________________________________
 [**Hospital 2**] MEDICAL CONDITION:
  55 year old man s/p mainstem stent placement intubated
 REASON FOR THIS EXAMINATION:
  ETT,collapsed lung
 ______________________________________________________________________________
                                 FINAL REPORT
 INDICATION:  55-year-old man status post mainstem stent placement, ET tube,
 collapsed lung.
 COMPARISON:  Chest radiograph from [**2101-7-11**].
 FINDINGS:  ET tube is 8.2 cm above the carina.  A left mainstem bronchus stent
 is in place.  Since the prior radiograph, there is no significant change.
 Small left pleural effusion is unchanged The right lung is clear.  There is no
 focal consolidation, or pneumothorax.  The bony structures are intact.
 IMPRESSION:  ET tube 8.2 cm above the carina.  Otherwise, no significant
 change since prior radiograph.
Figure 3: Top: A radiology note generated from Model 2 (from Table 1), conditioned on note-type and hint ([**2101-7-12**] 5:44). The model is able to capture the correct global structure of such notes. Bottom: Ground-truth.

We found that note-type-specific templates and styles were learned as well. Figure 4 shows a nursing note whose style and structure is well emulated by the model.

Although all models were able to learn globally-coherent templates, which can be assessed qualitatively by looking at samples (more are provided in the Appendix), what separated the baseline and better models was how well the content beyond the templates was predicted, which we attempt to show through the quantitative results below.

5.2 Quantitative Results

Model Architecture PPL ACC R1 R2 B-R1 Sex Age
1 T-ED 1.89 60.4 19.8 9.1 N/A 58.5 4.6
2 T-ED 1.79 62.6 41.2 24.3 N/A 63.3 21.0
3 T-ED 1.77 62.9 41.4 26.4 31.6 99.9 97.8
4 T-ED 1.79 63.0 41.1 26.2 32.3 99.8 97.5
5 T-ED 1.81 62.7 39.8 25.1 31.4 99.8 97.7
6 T-ED 1.86 62.2 40.5 25.5 32.3 99.5 97.4
7 T-DMCA 1.76 62.8 43.1 27.2 34.3 99.7 96.3
8 T-DMCA 1.76 63.2 43.1 27.3 34.5 99.8 95.8
9 T-DMCA 1.76 63.2 44.6 28.5 36.8 99.9 95.8
Table 1: Quantitative results for model architectures and EHR contexts used in experiments.

Table 1 shows all metrics for all models trained in our experiments. We analyze varying two primary dimensions:

  1. the context data extracted from the patient’s record and used as input: in addition to the note type ($x_{type}$), we study the effect of adding the hint ($x_{hint}$), demographic data ($x_{demo}$), medications ($x_{meds}$), lab results ($x_{labs}$), and previous notes ($x_{notes}$);

  2. the model architecture used: Transformer encoder-decoder (T-ED), or with memory-compressed attention (T-DMCA).

5.2.1 T-ED experiments

In the first set of experiments we look at the effect of including more context data on performance using the Transformer encoder-decoder architecture. We observe that overall performance without the hint is relatively poor; such a model is effectively an unconditional language model. Simply providing the first 10 sub-word tokens reveals a lot about the note, perhaps giving a strong hint of the template to be used in the note.

As expected, the accuracy of demographics (sex/age) is roughly equivalent to random guessing without providing the demographic context ($x_{demo}$) to the models. On this metric, Model 2 has no $x_{demo}$ but does better than the model without the hint (Model 1) because occasionally the age or gender is revealed in the first 10 tokens. In all models with $x_{demo}$ provided, sex accuracy is virtually 100%. The age accuracy is very high, at about 95%, for all models.

Interestingly, applying the same sex/age metrics to the existing notes in MIMIC-III shows that while the sex is almost always correct, the age in true notes is significantly less accurate (88.5%) than in our models when compared with the computed age from the Patients table. We discuss using the models for finding such errors in Section 5.3.

Overall, we observed that the T-ED Models 3-6 were not able to take full advantage of the additional context provided beyond the note type, hint, and demographic data ($x_{type}$, $x_{hint}$, $x_{demo}$). The models' self-reported perplexities are slightly worse, suggesting optimization issues.

We confirm a result similar to Liu et al. (2018): the T-ED model has difficulty with longer sequence modeling tasks. The T-DMCA architecture employs localized attention, which results in fewer unnecessary attention weights and in practice is easier to optimize. The model also has more capacity due to the addition of mixture-of-experts layers (Shazeer et al., 2017). This motivated the T-DMCA experiments in the next section.

5.2.2 T-DMCA experiments

Models 7-9 show results for the T-DMCA model with the longest note-contexts. The results improve over T-ED, and the best model is the one with the most context provided, showing that this architecture can appropriately take advantage of more data, in contrast to the T-ED model.

Overall, the T-DMCA model with the most context performs best on all metrics except Age accuracy, which we attribute to notes sometimes having an age contradicting the Patients-table-derived age. The perplexity and next-token accuracy metrics show a smaller relative improvement due to their local nature; predicting the very next token often does not require taking into account long-range dependencies. The ROUGE-based scores, in contrast, show a larger relative improvement because errors compound when predicting the entire note.

Comparing Models 3 and 9 on the B-R1 and R1 metrics, we see a much greater relative improvement on B-R1 than on R1 (16.5% vs. 7.7%), suggesting the best model more accurately predicts non-template words, i.e. meaningful content.

Figure 4 shows a full sample nursing progress note about a newborn patient generated from the best Model 9. It has inferred the proper global structure of such notes with identical section headings as the ground truth note. Within each section there is a high overlap in vocabulary used and the style is well emulated.

NPN 0700-1900
RESP:  Infant remains in RA, maintaining O2 sats 98-100%.
RR 30s-50s.  LS cl/=.  Mild SC retractions.  No As or Bs.
P: Cont to support & monitor resp status.
ID:  Infant continues on 48hr r/o of ampi & gent.  CBC
benign & blood cx neg to date.  P: Cont to monitor for s/s
sepsis.
FEN:  NPO.  TF at 80cc/kg/day of D10W infusing well via
PIV.  Abd exam benign; min asp & no spits.  Voiding & no
stool.  P: Cont to support growth & nutrition.
G&D:  Temps stable, nested on sheepskin on open warmer.
Infant is a&a with cares.  Settles well in between cares.
Appropriately brings hands to face & sucks pacifier to
comfort self.  AFSF.  AGA.  P: Cont to support dev needs.
SOCIAL:  No contact with family thus far.  P: Update &
support family when possible.
NPN 0700-1900
RESP:  Infant remains in RA, maintaining O2 sats 90-100%.
RR 40s-70s.  LS cl/=.  Mild SC retractions.  No As or Bs
thus far.  P: Cont to monitor resp status.
FEN:  TF increased to 80cc/kg/day of D10W infusing at
60cc/kg/day via PIV without incidence.  Infant po fed at
1300 & only took 2cc.  PG feeds will be started at 1700 at
20cc/kg/day of SC20.  DS 93.  Abd exam soft, round, +BS, no
loops.  Voiding, u/o= 1.8cc/kg/hr.  Med mec stool x1.  Lytes
at 24hrs of age = 131/5.3/100/21.  Please see flowsheet for
bili results.  P: Cont to support nutritional needs.
G&D:  Temps stable, now swaddled on off warmer.  Infant is
a&a with cares.  Irritable at times in between cares.
Appropriately brings hands to face to comfort self.  AFSF.
AGA.  P: Cont to support dev needs.
SOCIAL:  Mom called x1.  Updated on infants condition and
plan of care.  States she hopes to come up to NICU later
today to visit infant.  P: Cont to support & update parents.
ID:  Infant cont on 48hr r/o of ampi & gent.  Initial CBC
was neutropenic.  CBC at 24hrs of age = WBC 12.8, 51 neuts,
0 bands, 45 lymphs, Hct 55.1, plt 273.  [** 322**] cx neg to
date.  P: Cont to monitor infant for s/sx of sepsis.
Figure 4: Top: Nursing note generated from Model 9 (from Table 1). The hint provided was NPN 0700 ... Infant remains in. Bottom: ground-truth note. The section headings are identical, though in slightly different order. Within each section there is a high overlap in words and style.

5.3 Detection of errors in notes

As discussed in Section 5.2, our models conditioned on demographic information are able to generate the appropriate age based on the structured data given as input. For example, Figure 5 shows the same segment from a generated and an actual note, where only the generated note is correct. This could be considered an error in the original note, and we consider how the model could be used to detect and prevent such errors.

To demonstrate error detection in existing notes, we corrupted selected notes by replacing certain drug names with random ones to simulate errors. We then took one of our conditional language models trained with medications as context and computed the likelihood of each token given previous tokens, iteratively, for every word in the note. Words with very low likelihood compared to the surrounding context were labeled as possible errors.
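
The following sketch shows this scoring pass, assuming a hypothetical model wrapper around a trained checkpoint that exposes log_prob and top_k; the threshold is an illustrative assumption rather than a value used in our experiments:

def flag_possible_errors(model, context_ids, note_ids, threshold=-8.0, k=5):
    """Yields (position, token, suggestions) for low-likelihood tokens."""
    for i, token in enumerate(note_ids):
        prefix = note_ids[:i]
        if model.log_prob(token, prefix, context_ids) < threshold:
            # Offer the k most likely tokens at this position as corrections.
            yield i, token, model.top_k(prefix, context_ids, k=k)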

Figure 6 shows a sample of note segments from the test set with low likelihood words highlighted in red. For the words we corrupted we display the top suggestions for replacement. In most cases, only the corrupted word is highlighted and suggestions either include the original word, or related terms.

[**Hospital 4**] MEDICAL CONDITION:
  45 year old man s/p 9 days of trauma
 REASON FOR THIS EXAMINATION:
  eval for fx
 No contraindications for IV contrast
 [**Hospital 2**] MEDICAL CONDITION:
  19 year old man with trauma
 REASON FOR THIS EXAMINATION:
  trauma
 No contraindications for IV contrast
Figure 5: (Left) Segment of a note describing age and gender of a patient generated from Model 9 from Table 1. The correct age according to the Patient table is 45. (Right) Corresponding note segment found in MIMIC-III.
(a) coumarin was replaced with celebrex, and the model suggests coumadin, which is an alternate name
(b) amiodarone is among the correct suggestions for trizivir, while the suggestions for nitrofurantoin are heart medications related to the original drug, lasix
(c) the original drug was heparin, an anticoagulation drug
Figure 6: Three corrupted snippets each from a different test note from MIMIC-III. Words highlighted in red are deemed unlikely under the model conditioned on medications. Underlined words were the result of artificial corruption and the suggested model corrections are shown.

5.4 Note auto-complete

Given that our models are trained to predict the next token given prior context, they naturally lend themselves to an auto-complete feature, where the next word, sentence, or more is predicted and presented to the user as a suggestion, as commonly found in software code editors (Raychev et al., 2014).

Table 2 shows predictions from a model for each token of a note segment, conditioned on previous tokens and the note-context. As can be seen, age and gender are correctly predicted, as are many medical and non-medical terms. One would expect that the more accurate such predictions are, especially when predicting long sequences of future tokens, the greater the time saved in typing notes, given a well-designed auto-complete feature. A sketch of such a feature follows Table 2.

Token Top-5 predictions (probability, token)
History (0.94, History), (0.00095, Past), (0.00044, Allergies), (0.00014, Date), (0.00013, HISTORY)
of (0.92, of), (0.00044, OF), (0.00017, \n), (7.2e-05, or), (6.6e-05, from)
Present (0.94, Present), (0.0005, present), (0.00029, Birth), (0.00016, Invasive), (0.00016, Illness)
Illness (0.94, Illness), (0.00025, Complaint), (0.00024, ILLNESS), (0.00015, Present), (0.00012, HPI)
(0.91, :\n), (0.0072, :\n[**), (0.0055, :\n ), (0.0022, :\n), (0.00052, :\n\n)
67 (0.27, 67), (0.15, 67), (0.071, This), (0.064, 66), (0.041, The)
F (0.47, F), (0.3, yo), (0.057, y), (0.034, yoF), (0.0092, year)
bic (0.32, s), (0.069, with), (0.068, who), (0.057, pedestrian), (0.04, struck)
ycli (0.96, ycli), (0.028, ycl), (0.00051, restrained), (0.00034, ped), (0.00032, lo)
st (0.98, st), (0.0023, ng), (0.001, nger), (0.001, ne), (0.00048, vs)
hit (0.4, struck), (0.39, vs), (0.017, driver), (0.016, hit), (0.011, versus)
by (0.86, by), (0.016, in), (0.0062, a), (0.0048, head), (0.0047, at)
car (0.68, car), (0.13, a), (0.02, truck), (0.018, vehicle), (0.014, auto)
, (0.13, , ), (0.087, . ), (0.081, while), (0.063, at), (0.052, , +)
unknown (0.083, thrown), (0.054, GCS), (0.046, struck), (0.041, no), (0.03, unrestrained)
LOC (0.46, LOC), (0.077, loss), (0.048, speed), (0.028, details), (0.021, loc)
, (0.47, , ), (0.11, . ), (0.055, at), (0.054, . ), (0.015, , +)
was (0.23, GCS), (0.042, intubated), (0.031, found), (0.026, no), (0.02, transferred)
wearing (0.13, taken), (0.055, found), (0.054, brought), (0.04, intubated), (0.029, struck)
helmet (0.38, a), (0.24, helmet), (0.065, car), (0.034, an), (0.015, [**)
but (0.12, and), (0.053, at), (0.041, upon), (0.037, while), (0.033, but)
this (0.18, was), (0.033, found), (0.029, hit), (0.029, had), (0.022, did)
was (0.42, was), (0.056, time), (0.042, is), (0.019, AM), (0.018, morning)
found (0.1, not), (0.081, unwitnessed), (0.054, witnessed), (0.042, a), (0.021, an)
separated (0.22, to), (0.12, by), (0.11, on), (0.061, down), (0.037, at)
from (0.35, from), (0.23, by), (0.058, . ), (0.039, and), (0.035, on)
patient (0.41, her), (0.11, the), (0.085, car), (0.04, vehicle), (0.029, a)
. ( 0.14, . ), (0.12, and), (0.11, ), (0.099, . ), (0.061, , )
GCS (0.25, She), (0.09, Pt), (0.06, Patient), (0.03, Per), (0.028, No)
of (0.23, was), (0.19, 15), (0.074, at), (0.065, of), (0.043, on)
15 (0.35, 15), (0.14, 3), (0.11, 14), (0.042, 13), (0.04, 8)
when (0.31, at), (0.19, on), (0.053, . ), (0.048, , ), (0.039, in)
EMS (0.33, \n), (0.23, EMS), (0.18, she), (0.031, the), (0.013, he)
arrived (0.78, arrived), (0.085, was), (0.033, called), (0.0068, arrival), (0.0056, found)
. ( 0.24, . ), (0.15, , ), (0.081, and), (0.079, at), (0.068, . )
Taken (0.14, She), (0.1, Pt), (0.076, Patient), (0.036, Upon), (0.035, Intubated)
to (0.85, to), (0.013, emergently), (0.0093, by), (0.0041, for), (0.0036, directly)
OSH (0.55, [**), (0.22, OSH), (0.063, an), (0.018, the), (0.011, outside)
where (0.57, where), (0.13, , ), (0.077, and), (0.021, ED), (0.019, . )
she (0.46, she), (0.067, CT), (0.058, head), (0.05, a), (0.049, GCS)
was (0.67, was), (0.13, had), (0.039, received), (0.0062, became), (0.0056, underwent)
found (0.3, intubated), (0.16, found), (0.052, noted), (0.032, awake), (0.019, alert)
to (0.87, to), (0.0042, on), (0.0027, with), (0.0021, by), (0.0017, in)
have (0.65, have), (0.24, be), (0.00084, develop), (0.00068, \n), (0.00064, to)
a ( 0.27, a), (0.077, multiple), (0.034, \n), (0.027, GCS), (0.025, SAH)
splenic (0.11, right), (0.089, large), (0.075, GCS), (0.068, small), (0.066, \n)
laceration (0.65, laceration), (0.077, lac), (0.043, hematoma), (0.039, bleed), (0.015, contusion)
requiring (0.24, , ), (0.21, and), (0.16, . ), (0.11, with), (0.029, . )
Table 2: Sub-word tokens, one per line, and top-5 predictions with associated probabilities from one of our language models.
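
A greedy auto-complete loop can be sketched on top of the same hypothetical model wrapper used in Section 5.3, where top_k returns (token, probability) pairs; the confidence cutoff is an illustrative assumption:

def autocomplete(model, context_ids, prefix_ids, max_tokens=10, min_prob=0.5):
    """Proposes a continuation of the note prefix, one token at a time."""
    suggestion = []
    for _ in range(max_tokens):
        (token, prob), = model.top_k(prefix_ids + suggestion, context_ids, k=1)
        if prob < min_prob:
            break  # stop once the model is no longer confident
        suggestion.append(token)
    return suggestion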

6 Limitations and Future Work

In our experiments we restricted the context data to the 24 hours before the note to limit input sequence length for performance reasons, although conceivably additional past context would be informative. Due to the intensive nature of the ICU, many events may occur within a 24-hour period; for non-ICU datasets, a greater window of context should be used.

Furthermore, more columns from more tables in MIMIC-III could be added as note-context using the same procedure described above to improve the language model, although this was not explored.

In many cases, the maximum context provided by the EHR is insufficient to fully predict the note. The most obvious case is the lack of imaging data in MIMIC-III for radiology reports. For non-imaging notes we also lack information about the latest patient-provider interactions. Future work could attempt to augment the note-context with data beyond the EHR, e.g. imaging data, or transcripts of patient-doctor interactions.

Although we discussed error-correction and auto-complete features in EHR software, their effects on user productivity were not measured in the clinical context, which we leave as future work.

7 Conclusion

We have introduced a new language modeling task for clinical notes based on EHR data and showed how to represent the multi-modal data context to the model. We proposed evaluation metrics for the task and presented encouraging results showing the predictive power of such models. We discussed how such models could be useful in sophisticated spell-checking and auto-complete features, potentially assisting with the burden of writing accurate clinical documentation.

Acknowledgments

We thank Kun Zhang for assistance in data preparation; Claire Cui for much feedback and for suggesting evaluation metrics; and Kai Chen and Jeff Dean for reviewing the manuscript. We also thank Tom Pollard, Roger Mark, and Alistair Johnson from the MIT Lab for Computational Physiology for approving the inclusion of selected de-identified notes from MIMIC-III in this publication.

References

  • Chelba et al. (2013) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
  • Chiu et al. (2017) Chung-Cheng Chiu, Tara N Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J Weiss, Kanishka Rao, Katya Gonina, et al. State-of-the-art speech recognition with sequence-to-sequence models. arXiv preprint arXiv:1712.01769, 2017.
  • Choi et al. (2016) Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F Stewart, and Jimeng Sun. Doctor AI: predicting clinical events via recurrent neural networks. In Machine Learning for Healthcare Conference, pages 301–318, 2016.
  • Friedman et al. (2004) Carol Friedman, Lyudmila Shagina, Yves Lussier, and George Hripcsak. Automated encoding of clinical documents based on natural language processing. Journal of the American Medical Informatics Association, 11(5):392–402, 2004.
  • Gellert et al. (2015) George A Gellert, Ricardo Ramirez, and S Luke Webster. The rise of the medical scribe industry: implications for the advancement of electronic health records. JAMA, 313(13):1315–1316, 2015.
  • Jing et al. (2017) Baoyu Jing, Pengtao Xie, and Eric Xing. On the automatic generation of medical imaging reports. arXiv preprint arXiv:1711.08195, 2017.
  • Johnson et al. (2016) Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. MIMIC-III, a freely accessible critical care database. Scientific data, 3:160035, 2016.
  • Johnson et al. (2017) Alistair EW Johnson, Tom J Pollard, and Roger G Mark. Reproducibility in critical care: a mortality prediction case study. In Machine Learning for Healthcare Conference, pages 361–376, 2017.
  • Lin (2004) Chin-Yew Lin. ROUGE: a package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
  • Liu et al. (2018) Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating Wikipedia by summarizing long sequences. In International Conference on Learning Representations, 2018.
  • Mandelbaum et al. (2011) Tal Mandelbaum, Daniel J Scott, Joon Lee, Roger G Mark, Atul Malhotra, Sushrut S Waikar, Michael D Howell, and Daniel Talmor. Outcome of critically ill patients with acute kidney injury using the AKIN criteria. Critical care medicine, 39(12):2659, 2011.
  • Marcus et al. (1993) Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: the Penn Treebank. Computational linguistics, 19(2):313–330, 1993.
  • Miotto et al. (2017) Riccardo Miotto, Fei Wang, Shuang Wang, Xiaoqian Jiang, and Joel T Dudley. Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics, 2017.
  • Portet et al. (2009) François Portet, Ehud Reiter, Albert Gatt, Jim Hunter, Somayajulu Sripada, Yvonne Freer, and Cindy Sykes. Automatic generation of textual summaries from neonatal intensive care data. Artificial Intelligence, 173(7-8):789–816, 2009.
  • Rajkomar et al. (2018) Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. Scalable and accurate deep learning with electronic health records. npj Digital Medicine, 1(1):18, 2018.
  • Raychev et al. (2014) Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. In Acm Sigplan Notices, volume 49, pages 419–428. ACM, 2014.
  • Saeed et al. (2011) Mohammed Saeed, Mauricio Villarroel, Andrew T Reisner, Gari Clifford, Li-Wei Lehman, George Moody, Thomas Heldt, Tin H Kyaw, Benjamin Moody, and Roger G Mark. Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II): a public-access intensive care unit database. Critical care medicine, 39(5):952, 2011.
  • Shazeer et al. (2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
  • Sinsky et al. (2016) Christine Sinsky, Lacey Colligan, Ling Li, Mirela Prgomet, Sam Reynolds, Lindsey Goeders, Johanna Westbrook, Michael Tutty, and George Blike. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of internal medicine, 165(11):753–760, 2016.
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010, 2017.
  • Vaswani et al. (2018) Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416, 2018. URL http://arxiv.org/abs/1803.07416.
  • Vinyals et al. (2015) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156–3164. IEEE, 2015.
  • Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Appendix A Appendix

A.1 Permission to publish MIMIC-III notes in this paper

The authors received explicit permission from the MIMIC-III team regarding publishing the notes presented in this paper.

A.2 MIMIC-III Tables and columns used

  • Patients: GENDER (sex), DOB (date of birth)

  • Prescriptions: DRUG (drug name)

  • NoteEvents: CATEGORY, TEXT (raw text of note)

  • LabEvents: ITEMID (join key to D_LABITEMS), VALUE, VALUEUOM, FLAG

  • D_LABITEMS: LABEL (lab name)

A.3 Matching sex and age in text

We use the following regular expressions to identify implied age and sex in text:

  • Age:

     (\d+)\s*(year\s*old|y.\s*o.|yo|year-old|-year-old|-year old) 
    
  • Male Sex:

      (male|man|m|M)
      Sex:\s*(M) 
    
  • Female Sex:

      (woman|female|f|F)
      Sex:\s*(F) 
    
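
A sketch of applying these expressions follows; taking the first match and the order in which the sex patterns are tried are assumptions of this sketch (the word boundaries are added here to avoid matching inside words):

import re

AGE_RE = re.compile(
    r"(\d+)\s*(year\s*old|y.\s*o.|yo|year-old|-year-old|-year old)")
MALE_RE = re.compile(r"\b(male|man|m|M)\b|Sex:\s*(M)")
FEMALE_RE = re.compile(r"\b(woman|female|f|F)\b|Sex:\s*(F)")

def implied_age_sex(note_text):
    """Returns (age or None, "M"/"F"/None) implied by the note text."""
    age = AGE_RE.search(note_text)
    sex = ("F" if FEMALE_RE.search(note_text)
           else "M" if MALE_RE.search(note_text) else None)
    return (int(age.group(1)) if age else None), sex

print(implied_age_sex("79-year-old male with fall"))  # (79, 'M')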

A.4 ROUGE computation

We use ROUGE version 1.5.5 with the following command-line options: -m -n 2.

To compute ROUGE scores in Table 1 we sampled 4096 examples from the test set.

A.5 Model hyper-parameters

The tensor2tensor package supports named model-type and hyper-parameter settings. We used the following common hyper-parameters:

  max_length=10000,max_target_seq_length=500,max_input_seq_length=500
  

T-ED was trained for 500,000 steps; T-DMCA was trained for 150,000 steps. We also used the model-specific hyper-parameters found in Table 3.

Model name tensor2tensor model hparam
T-ED transformer transformer_base
T-DMCA transformer_moe transformer_moe_prepend_8k, moe_num_experts=64
Table 3: Model-specific hyper-parameters.

For decoding notes, we used a beam search of size 2.
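
For reference, the task can be wired into tensor2tensor as a Text2TextProblem along the following lines; the class name and load_examples loader are hypothetical, and details may vary across tensor2tensor versions:

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class NoteContextToNote(text_problems.Text2TextProblem):
  """Serialized note-context (inputs) to note text (targets)."""

  @property
  def approx_vocab_size(self):
    return 2**15  # ~32k sub-words, as in Section 3.4

  @property
  def is_generate_per_split(self):
    return True  # patients are already partitioned (Section 3.3)

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    for context, note in load_examples(dataset_split):  # hypothetical loader
      yield {"inputs": context, "targets": note}

Training then selects the model and hyper-parameter set by name, as in Table 3.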

A.6 Removing boiler-plate from notes

To compute the B-R1 metric we attempt to remove boiler-plate from generated and ground-truth notes in a pre-processing step before computing the ROUGE scores. To identify boiler-plate we rely on Model 2 from Table 1, making the assumption that this model is incapable of predicting non-boiler-plate content since it does not have access to sufficient context. Thus any text that is identically generated by Model 2 is considered boiler-plate. We use the Python library difflib to remove lines that are present in both the note being preprocessed and Model 2's generated note.
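
A sketch of this filter, assuming both notes are plain strings:

import difflib

def remove_boilerplate(note_text, model2_text):
    """Returns note_text with lines shared with the Model 2 note removed."""
    sm = difflib.SequenceMatcher(
        a=model2_text.splitlines(), b=note_text.splitlines())
    kept = []
    for tag, _, _, b0, b1 in sm.get_opcodes():
        if tag in ("insert", "replace"):  # lines unique to the real note
            kept.extend(sm.b[b0:b1])
    return "\n".join(kept)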

PATIENT/TEST INFORMATION:
Indication: Intraop CABG evaluate valves, ventricular function, aortic contours
Height: (in) 70
Weight (lb): 220
BSA (m2): 2.18 m2
BP (mm Hg): 110/70
HR (bpm): 60
Status: Inpatient
Date/Time: [**2132-5-22**] at 11:58
Test: TEE (Complete)
Doppler: Full Doppler and color Doppler
Contrast: None
Technical Quality: Adequate
INTERPRETATION:
Findings:
LEFT ATRIUM: Mild LA enlargement. No spontaneous echo contrast in the body of
the [**Name Prefix (Prefixes) 4**] [**Last Name (Prefixes) 5**] LAA. Good (>20 cm/s) LAA ejection velocity.
RIGHT ATRIUM/INTERATRIAL SEPTUM: Mildly dilated RA. Normal interatrial septum.
No ASD by 2D or color Doppler.
LEFT VENTRICLE: Normal LV wall thickness. Normal LV cavity size. Normal
regional LV systolic function. Overall normal LVEF (>55%).
RIGHT VENTRICLE: Normal RV chamber size and free wall motion.
AORTA: Normal aortic diameter at the sinus level. Normal ascending aorta
diameter. Normal aortic arch diameter. Complex (>4mm) atheroma in the aortic
arch. Normal descending aorta diameter. Complex (>4mm) atheroma in the
descending thoracic aorta.
AORTIC VALVE: Normal aortic valve leaflets (3). No AS. No AR.
MITRAL VALVE: Normal mitral valve leaflets with trivial MR.
TRICUSPID VALVE: Normal tricuspid valve leaflets with trivial TR.
PULMONIC VALVE/PULMONARY ARTERY: Normal pulmonic valve leaflets. Physiologic
(normal) PR.
PERICARDIUM: No pericardial effusion.
GENERAL COMMENTS: A TEE was performed in the location listed above. I certify
I was present in compliance with HCFA regulations. The patient was under
general anesthesia throughout the procedure. No TEE related complications. The
patient appears to be in sinus rhythm. Results were personally reviewed with
the MD caring for the patient.
Conclusions:
Pre Bypass: The left atrium is mildly dilated. No spontaneous echo contrast is
seen in the body of the left atrium or left atrial appendage. No atrial septal
defect is seen by 2D or color Doppler. Left ventricular wall thicknesses are
normal. The left ventricular cavity size is normal. Regional left ventricular
wall motion is normal. Overall left ventricular systolic function is normal
(LVEF>55%). Right ventricular chamber size and free wall motion are normal.
(TRUNCATED FOR SPACE)
Figure 7: A sample echocardiogram note from Model 9 generated from the validation set.
Admission Date:  [**2123-10-8**]              Discharge Date:   [**2123-10-16**]
Date of Birth:  [**2043-3-16**]             Sex:   F
Service: MEDICINE
Allergies:
Patient recorded as having No Known Allergies to Drugs
Attending:[**First Name3 (LF) 2297**]
Chief Complaint:
Shortness of breath
Major Surgical or Invasive Procedure:
None
History of Present Illness:
71 yo F with h/o COPD, HTN, and recent admission for COPD
exacerbation who presents with worsening shortness of breath.
She reports that she has been feeling more short of breath
recently. She has been having worsening cough with green
sputum production over the past few days. She denies fevers,
chills, chest pain, nausea, vomiting, abdominal pain, diarrhea,
constipation, dysuria, hematuria, leg pain or swelling. She
reports that she has been taking her medications regularly. She
has been taking her medications regularly. She also reports
that she has been taking her medications regularly.
.
In the ED, initial vs were: T: 98.6 P: 116 BP: 110/65 R: 16 O2
sat: 100% on NRB. Patient was given albuterol and ipratropium
nebs, methylprednisolone 125mg IV x1, magnesium 2gm IV x1, and
levofloxacin 750mg IV x1. She was admitted to the MICU for
further management.
On the floor, she reports feeling short of breath but otherwise
feels well.
Review of systems:
(+) Per HPI
(-) Denies fever, chills, night sweats, recent weight loss or
gain. Denies headache, sinus tenderness, rhinorrhea or
congestion. Denies cough, shortness of breath, or wheezing.
Denies chest pain, chest pressure, palpitations, or weakness.
Denies nausea, vomiting, diarrhea, constipation, abdominal pain,
or changes in bowel habits. Denies dysuria, frequency, or
urgency. Denies arthralgias or myalgias. Denies rashes or skin
changes.
Past Medical History:
COPD
Hypertension
Hyperlipidemia
Asthma
GERD
Hypothyroidism
Depression
(TRUNCATED FOR SPACE)
Figure 8: A sample discharge summary note from Model 9 generated from the validation set.