CLARA: Clinical Report Auto-completion
Generating clinical reports from raw recordings such as X-rays and electroencephalograms (EEG) is an essential and routine task for doctors. However, writing accurate and detailed reports is time-consuming. Most existing methods try to generate the whole report from the raw input, with limited success because 1) generated reports often contain errors that need manual review and correction, 2) report generation does not save time when doctors want to write additional information into the report, and 3) the generated reports are not customized to individual doctors' preferences. We propose CLinicAl Report Auto-completion (CLARA), an interactive method that generates reports sentence by sentence based on doctors' anchor words and partially completed sentences. CLARA searches for the most relevant sentences from existing reports as templates for the current report. The retrieved sentences are sequentially modified by combining them with the input feature representations to create the final report. In our experimental evaluation, CLARA achieved 0.393 CIDEr and 0.248 BLEU-4 on X-ray reports and 0.482 CIDEr and 0.491 BLEU-4 on EEG reports for sentence-level generation, which is up to a 35% improvement over the best baseline. In our qualitative evaluation, CLARA produced reports with a significantly higher level of approval by doctors in a user study (3.74 out of 5 for CLARA vs. 2.52 out of 5 for the baseline).
Medical imaging and neural recordings (e.g., X-ray images or EEG) are widely used in clinical practice for diagnosis and treatment. Typically, clinical experts visually inspect the images and signals, identify key disease phenotypes, and compose text reports to narrate the abnormal patterns and provide detailed explanations of those findings. Currently, clinical report writing is cumbersome and labor-intensive. Moreover, it requires thorough knowledge and extensive experience in understanding image or signal patterns and their correlations with target diseases. In the age of telemedicine, more diagnostic practices can be done on the web, which requires a more efficient diagnostic process. Improving the quality and efficiency of medical report writing can have a direct impact on telemedicine and healthcare on the web.
To alleviate the limitations of manual report writing, several medical report generation methods have been proposed. However, none of the existing works simultaneously provides the following desired properties for medical report generation.
Align with disease phenotypes. Medical reports describe clinical findings and diagnosis from medical images or neural recordings, which need to align with disease phenotypes and ensure the correctness of medical terminology usage.
Adaptive report generation. The generated reports need to be adapted to the preference of end-users (e.g., clinicians) for improved adoption.
To fill the gap, we propose an interactive method named CLARA that fills in medical reports sentence by sentence based on anchor words (disease phenotypes) and partially completed sentences (prefix text) provided by doctors. CLARA adopts an adaptive retrieve-and-edit framework to progressively complete report writing with doctors' guidance. CLARA constructs a prototype sentence database from all previous reports. It extracts the most relevant sentence templates based on user queries and then edits those sentences with the feature representation extracted from the data. The retrieval step uses an information retrieval system such as Lucene to enable fast, flexible, and accurate search. The edit step uses a modified version of the seq2seq method to generate sentences for the current report. The latent representation of the previous sentences is adaptively used as context to generate the next sentence. In summary, CLARA offers the following contributions compared with other medical report generation approaches.
Phenotype oriented. Since the report generated by CLARA is created using anchor words for relevant disease phenotypes, it helps ensure that the report is clinically accurate. We also evaluate our method on clinical accuracy via disease phenotype classification.
Interactive report generation. Users (e.g., doctors) have more control over the generated reports via interactive guidance on a sentence by sentence level.
We evaluate CLARA on two types of clinical report writing tasks: (1) X-ray report generation, which takes fixed-length imaging data as input, and (2) EEG report generation, which considers varying-length EEG time series as input. For EEG data, we evaluated our model on two datasets to test the generalizability of CLARA. We show that CLARA achieves 0.393 CIDEr and 0.248 BLEU-4 on X-ray reports and 0.482 CIDEr and 0.491 BLEU-4 on EEG reports for sentence-level generation, up to a 35% improvement over the best baseline. Compared to other methods, CLARA generates more clinically meaningful reports. We show via a user study that CLARA produces more clinically acceptable reports as measured by a quality score metric (3.74 out of 5 for CLARA vs. 2.52 out of 5 for the best baseline).
Image captioning generates short descriptions of image input. There were a few attempts at solving the image captioning task before the deep learning era [48]. Several deep learning models have since been proposed for this task [44, 17]. Many of the proposed image captioning frameworks can be categorized into template-based, retrieval-based, and novel caption generation [8, 50, 22, 28, 26, 6, 41, 34, 5, 47]. However, they do not perform well when generating longer paragraphs. There is limited research on generating longer captions, notably hierarchical RNNs.
Medical report generation adapts similar ideas from image captioning to generate full medical text reports based on X-ray images [16, 23, 53, 24, 52, 9, 20]. To improve report accuracy, researchers have utilized curated report templates to simplify the generation task [20, 10]. However, the generated full reports often contain errors that require significant time to correct. CLARA focuses on interactive report generation that follows the natural workflow of clinicians and leads to more accurate results. CLARA does not require any predefined templates but instead retrieves and adapts existing reports to generate the new one interactively. More recently, a template-based approach was developed to generate EEG reports using a hybrid model of CNN and LSTM.
Query auto-completion expands prefix text with related text to generate more informative search queries. This is a well-established topic. Traditional query auto-completion suggests the most popular queries relevant to the prefix text. Recently, neural network models such as LSTMs and hierarchical encoder-decoders have been used for query auto-completion and can potentially generate new and unseen queries. CLARA differs in terms of model input, as our models accept multimodal input, not just short prefix text.
Data: We denote data samples as $\{(x_i, R_i)\}_{i=1}^{N}$, where $x_i$ is the input recording and $R_i$ the paired report. In the case of EEG data, we denote $x_i \in \mathbb{R}^{C \times T}$ as the EEG record for subject $i$, where $C$ is the number of electrodes and $T$ is the number of discretized time steps per recording. In the case of X-ray, the input $x_i$ is an image. $a_i$ and $p_{i,j}$ are the guidance provided by users, namely, the anchor words and prefix text for subject $i$. These anchor words include general descriptions such as “normal” as well as diagnostic phenotypes such as “seizure”. The prefix text is the first few words of each sentence in the report.
Task: In this work, we focus on generating the findings (impression) section of medical reports due to its clinical importance. Given an input sample $x_i$ (X-ray or EEG), CLARA generates a text report $R_i = (s_{i,1}, \dots, s_{i,m})$ consisting of a sequence of sentences that narrate the patterns and findings in $x_i$. $p_{i,j}$ are optional prefix texts provided by users for each sentence; note that $p_{i,j}$ can be empty. CLARA generates a sentence $s_{i,j}$ using the data embedding $h_i$ of input $x_i$ and the context generated by the previous sentence $s_{i,j-1}$, anchor words $a_i$, and optional prefix text $p_{i,j}$. The notations are summarized in Table 1. We illustrate the overall CLARA framework in Figure 2.
| Symbol | Definition |
| --- | --- |
| $(x_i, R_i)$ | $i$-th data sample |
| $x_i$, $h_i$ | $i$-th input sample (X-ray or EEG) and its embedding |
| $s_{i,j}$ | $j$-th sentence in the $i$-th report |
| $a_i$ | anchor words provided by users for the $i$-th report |
| $p_{i,j}$ | optional prefix text provided by users for each sentence |
| $\mathcal{P}$ | prototype sentences extracted from all reports |
The CLARA Framework
The CLARA framework comprises the following modules.
M1. Input encoder module transforms medical data such as image or EEG time series into compressed feature representations.
M2. Prototype construction builds a sentence-level repository that includes distinct sentences, their representations, writer information, and frequency statistics derived from a large medical report database. This repository is searched dynamically to provide a starting point for generating sentences in a new report.
M3. Query module provides more control for clinicians to interactively produce a customized medical report. It accepts queries from clinicians in the form of anchor words (global context) and prefix text (local context). Anchor words are phenotype keywords associated with the entire report, and optional prefix texts are partial sentences entered by users through interactive editing.
M4. Retrieve and edit module interactively produces the report guided by users, using the data representation, anchor words, and prefix text. This module performs report generation sequentially. First, the retrieve module extracts the most relevant sentences from the prototype repository. Then the edit module uses a sequence-to-sequence model to modify the retrieved sentences based on the data representation, anchor words, and prefix text.
M1. Input Encoder Module This module extracts the data embedding from the input to guide report completion. The input can be raw measurements of X-ray or EEG. For both images and EEG time series in the form of a sequence of EEG epochs $x = (x^{(1)}, x^{(2)}, \dots, x^{(K)})$, we encode them using a convolutional neural network (CNN) to obtain the image embedding $h$, or the EEG embedding $h^{(k)}$ for epoch $k$.
For X-ray imaging, the DenseNet architecture is used as the CNN. For EEG, the final embedding for all epochs is the average embedding $h = \frac{1}{K}\sum_{k=1}^{K} h^{(k)}$. We use a CNN with convolutional-max-pooling blocks to process the EEG data into the feature space. We use the rectified linear unit (ReLU) activation function for these convolutional networks, together with batch normalization.
More detailed model configuration is provided in the experiment section. Finally, we average over these feature vectors to produce the embedding for the EEG recording associated with the sample. More sophisticated aggregations such as an LSTM or an attention model were considered as well but yielded very limited improvement, so we use this simple but effective average embedding. The output data embedding is fed into the retrieval step, where it is associated with anchor words and used to generate reports jointly. The anchor words are provided as labels.
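As a concrete illustration, the averaging step above can be sketched as follows. This is a minimal sketch, not the paper's implementation: `encode_epoch` is a hypothetical stub standing in for the CNN encoder, and only the 512-dimensional output size is taken from the experiment section.

```python
import math

# Sketch of M1's aggregation step, assuming a stub encoder in place of the
# CNN: each EEG epoch maps to a 512-d feature vector, and the recording
# embedding h is the element-wise mean over epochs.
DIM = 512

def encode_epoch(epoch):
    """Stub for the CNN encoder: deterministically maps an epoch to 512-d."""
    s = sum(epoch)
    return [math.sin(s + i) for i in range(DIM)]

def encode_recording(epochs):
    """Element-wise average of per-epoch embeddings (the paper's choice)."""
    vecs = [encode_epoch(e) for e in epochs]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

# A 30-minute recording as 30 one-minute epochs (dummy per-epoch summaries).
epochs = [(i, i + 1, i + 2) for i in range(30)]
h = encode_recording(epochs)
print(len(h))  # 512
```

The average keeps the embedding size independent of the recording length, which is why it handles varying-length EEG input.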
M2. Prototype Construction
The idea here is to organize all the existing sentences from medical reports into a retrieval system as prototype sentences. We take a hybrid approach between information retrieval and deep learning to structure prototype sentences.
Motivation: Prototype learning [35, 21, 20] and memory networks [46, 37] are different ways to incorporate data instances directly into neural networks. The common idea is to construct a set of prototypes $\mathcal{P} = \{p_1, \dots, p_M\}$ and their representations.
Then, given a new data instance $x$, prototype learning tries to learn a representation of $x$ in terms of its relation to each prototype under a distance or similarity function $d$. Similarly, a memory network puts all the prototype representations in a memory bank and learns a similarity function between $x$ and every instance in the memory bank.
However, there are several significant limitations to these approaches:
1) storage and computation cost can be large when there is a large number of prototypes. For example, if we treat all unique sentences from a medical report database as prototypes, every pass of the network involves a large number of distance/similarity computations. 2) static prototypes: often the prototypes and their representations have to be fixed before the prototype learning model can be trained, and once the model is trained, no new prototypes can be added easily. In medical report applications, new reports are continuously being created and should be incorporated into the model without retraining from scratch. 3) computational waste: it is wasteful to conduct all the similarity computations knowing that only a small fraction of prototypes are relevant for a given query.
Approach: We take a scalable approach to structure prototypes in CLARA. We extract all sentences from a large collection of medical reports, then index these sentences to be used by a retrieval system, e.g., inverted index over the unique sentences. We also weigh those sentences based on their popularity so that frequent sentences will have higher weights to be retrieved. There are several immediate benefits of this approach: 1) we can support a large number of sentences as a typical retrieval system such as Lucene can support a web-scale corpus; 2) We are able to update the index with new documents easily so new reports can be integrated; 3) The query response is much faster than a typical prototype learning model thanks to the fast retrieval system.
Formally, given a corpus of $N$ reports, we map it into a set of $M$ unique sentences with their frequency statistics, where $N$ is the number of reports and $M$ the number of unique sentences. Then we index the set with a retrieval engine such as Lucene to support similarity queries.
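A toy illustration of this idea, assuming a hypothetical three-report corpus: the paper uses Lucene, so this stdlib inverted index only sketches indexing unique sentences with frequency weights, not the actual scoring Lucene performs.

```python
from collections import Counter, defaultdict

# Toy corpus: three one-line reports, sentences separated by periods.
reports = [
    "no focal consolidation . heart size is normal .",
    "heart size is normal . no pleural effusion .",
    "no focal consolidation . heart size is normal .",
]

# Split reports into sentences and count how often each unique sentence occurs.
sentences = [s.strip() for r in reports for s in r.split(".") if s.strip()]
freq = Counter(sentences)

# Inverted index: token -> set of unique sentences containing it.
index = defaultdict(set)
for sent in freq:
    for tok in sent.split():
        index[tok].add(sent)

def retrieve(query, k=2):
    """Rank candidate sentences by token overlap, breaking ties by popularity."""
    tokens = query.split()
    candidates = set().union(*(index.get(t, set()) for t in tokens))
    scored = sorted(
        candidates,
        key=lambda s: (sum(t in s.split() for t in tokens), freq[s]),
        reverse=True,
    )
    return scored[:k]

print(retrieve("heart size"))
```

Because the index stores only unique sentences with counts, adding a new report just updates counts and posting lists, mirroring the incremental-update benefit described above.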
M3. Query Module provides interactive report auto-completion for users to efficiently produce reports sentence by sentence. It supports two forms of interaction.
Anchor words are a set of keywords that provide high-level context for the report. For EEG reports, anchor words include Normal, Sleep, Seizure, Focal Slowing, and Epileptiform. Similarly, for X-ray reports, anchor words include Pneumonia, Cardiomegaly, Lung Lesion, Airspace Opacity, Edema, Pleural Effusion, and Fracture.
Prefix text specifies a partial version of a sentence in the report. This prefix text enables customization and control by users. Note that prefix texts are completely optional in CLARA.
Anchor words and prefix text are used in the Retrieve module to find relevant sentences from the prototype repository.
M4. Interactive Retrieve and Edit
This module aims to find the most relevant sentences from the prototype repository (retrieve phase) and then edit them to fit the current report (edit phase). Usually, clinicians use a predefined template to draft the report in the clinical workflow; for example, standard clinical documentation often follows a SOAP note (an acronym for subjective, objective, assessment, and plan). In this case, we seek sentence-level templates that users prefer. Below we describe the two-phase approach that CLARA uses to generate sentences for medical reports.
In the retrieve phase, we use an information retrieval system to find the most relevant sentences in the prototype repository. This step simulates a doctor looking up their previously written reports to identify relevant sentences to modify. Given an anchor word and optional prefix text, this module extracts a template sentence from the prototype repository. Here we use the widely adopted information retrieval system Lucene to index and search for the relevant sentences [27, 54, 33]. More details of the indexing and scoring operations performed by the Lucene engine are in Appendix A.
If anchor words are not available, CLARA first predicts them by learning a classifier from the data embedding to anchor words. Compared to other retrieval approaches, ours is more flexible and scalable thanks to the power of retrieval systems.
In the edit phase, the retrieved sentence is modified to produce the final sentence for the current report. We adopt a sequence-to-sequence model consisting of an encoder and a decoder, where the encoder projects the input to compressed representations and the decoder reconstructs the output. Here we use both the sentence template and the data embedding as input for the encoder, and the revised sentence is the output sequence. The encoder is implemented as a two-layer bidirectional long short-term memory (LSTM) network; the decoder is a three-layer LSTM. The decoder takes the resulting context vector as input for the generation process. The context vector is then concatenated with the decoder's hidden states and used to compute a softmax distribution over output words to produce the final sentence.
Our CLARA framework uses a sequential generation process to produce the final report. We iteratively feed the previous hidden states to the encoder so that the context generated at each sentence guides the generation of the next. The anchor words and prefix texts are often included in the final generated report, as these words are part of the reports.
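Schematically, the sentence-by-sentence loop can be written as below. `retrieve` and `edit` here are hypothetical stand-ins for the Lucene search and the seq2seq edit model, and the anchor words and templates are purely illustrative.

```python
# Schematic sketch of CLARA's retrieve-and-edit generation loop with stubs
# in place of the real components (Lucene retrieval, seq2seq editing).

def retrieve(anchor, prefix):
    # Stand-in for the Lucene search: look up a template for the anchor word.
    templates = {
        "normal": "the heart size is normal",
        "effusion": "there is a pleural effusion",
    }
    return templates.get(anchor, "")

def edit(template, embedding, prefix, context):
    # Stand-in for the seq2seq model: force the user's prefix into the output.
    return (prefix + " " + template).strip() if prefix else template

def generate_report(embedding, anchors, prefixes):
    report, context = [], None
    for anchor, prefix in zip(anchors, prefixes):
        template = retrieve(anchor, prefix)                    # retrieve phase
        sentence = edit(template, embedding, prefix, context)  # edit phase
        context = sentence                    # previous sentence guides the next
        report.append(sentence)
    return ". ".join(report) + "."

print(generate_report(embedding=None,
                      anchors=["normal", "effusion"],
                      prefixes=["", "on the left,"]))
```

The loop makes the interaction model explicit: each iteration consumes one anchor word and one optional prefix, and the emitted sentence becomes the context for the next.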
We evaluate CLARA framework to answer the following questions:
Can CLARA generate higher quality clinical reports?
Can the generated reports capture disease phenotypes?
Does CLARA generate better reports from clinicians’ view?
Data We conduct experiments using the following datasets.
(1) Indiana University X-ray (IU X-ray) dataset contains 7,470 images and paired reports. Each patient has 2 images (a frontal view and a lateral view). Each paired report contains impression, finding, and indication sections. We apply data preprocessing to remove duplicates from this dataset. For X-ray reports, we focus only on the findings section. After extracting the findings section, we apply tokenization and keep tokens with at least 3 occurrences in the corpus, resulting in 1,235 tokens in total. We use the labels used by the CheXpert labeler as the anchor words; these labels are representative of the different phenotypes present in X-ray reports.
(2) TUH EEG Data is an EEG dataset collected at Temple University Hospital that provides variable-length EEG recordings and corresponding EEG reports. This dataset contains 16,950 sessions from 10,865 unique subjects. We preprocess the reports to extract the impression section and apply similar tokenization, keeping only tokens with 3 or more occurrences.
(3) Massachusetts General Hospital (MGH) EEG Data is another EEG dataset used to evaluate our methods, collected at a large hospital in the United States. It contains 12,980 deidentified EEG recordings paired with text reports written by clinicians. We apply similar preprocessing steps to clean the reports from this dataset.
The data statistics are summarized in Table 2.
| | IU X-ray | TUH EEG | MGH EEG |
| --- | --- | --- | --- |
| Number of patients | 3,996 | 10,865 | 10,890 |
| Number of reports | 7,470 | 16,950 | 12,980 |
| Total EEG length | - | 3,452 hrs | 4,523 hrs |
| Total number of final tokens | 1,235 | 2,675 | 2,987 |
Baselines: For IU X-ray image data, we compare CLARA with the following baselines. We use DenseNet as the CNN model for extracting features in all variants of CLARA for a fair comparison.
CNN-RNN  passes the image through a CNN to obtain visual features and then passes to an LSTM to generate text reports.
Adaptive Attention  uses adaptive attention to produce context vectors and then generates text reports via an LSTM.
HRGR  uses reinforcement learning to either generate a text report or retrieve a report from a template database.
KERP  uses a graph transformer-based neural network to generate reports with a template database based approach.
AG  first generates the tags associated with X-ray reports then generates reports based on those tags and visual features.
Likewise, for EEG datasets, we consider the following baselines.
Mean-pooling(MP)  uses CNN to extract features for different EEG segments and then combine them using mean pooling. The output feature vectors are then passed to a 2-layer LSTM to generate text reports.
S2VT  applies a seq-to-seq model that reads CNN outputs using an LSTM and then produces text with another LSTM.
Temporal Attention Network(TAM)  uses CNN to learn EEG features and then passes them to a decoder equipped with temporal attention which allows focusing on different EEG segments to produce the text report.
Soft Attention(SA)  uses a soft attention mechanism to allow the decoder for focusing on EEG feature representations.
EEG2text develops a template-based approach to generate EEG reports using a hybrid model of CNN and LSTM.
Metrics: To evaluate report generation quality, we use BLEU and CIDEr  which are commonly used to evaluate language generation tasks. In addition, we also qualitatively evaluate the generated texts via a user study with doctors.
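For intuition, a stripped-down version of BLEU-style modified n-gram precision (without the brevity penalty or corpus-level aggregation; the reported numbers should come from standard toolkits) looks like this:

```python
from collections import Counter
import math

# Minimal BLEU-n sketch: geometric mean of modified n-gram precisions.
# Brevity penalty and smoothing are omitted for clarity; this is only an
# illustration of the metric, not the evaluation code used in the paper.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c & r).values())      # clipped n-gram matches
        total = max(sum(c.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    return math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu("no focal consolidation is seen",
                 "no focal consolidation is seen"), 3))  # identical -> 1.0
```

CIDEr follows a similar n-gram idea but weights n-grams by TF-IDF across a reference corpus, rewarding content words over boilerplate.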
Training Details For all models, we split the data into train, validation, and test sets with a 70%, 10%, 20% ratio. There is no patient overlap between the train, validation, and test sets. The word embeddings used in the editing module were pre-trained specifically for each dataset.
We implemented CLARA in PyTorch 1.2 on a machine equipped with an Intel Xeon E5-2640, 256GB RAM, eight Nvidia Titan-X GPUs, and CUDA 10.0. We use ADAM to optimize all models with a batch size of 128 samples; the learning rate is selected from [2e-3, 1e-3, 7.5e-4] and the momentum parameter from [0.5, 0.9]. We train all models for 1000 epochs and start halving the learning rate every 2 epochs after epoch 50. We used 10% of the dataset as a validation set for tuning the hyper-parameters of each model, searching over model parameters with random search. While preprocessing the text reports, excluded words are represented by a special "UNKNOWN" token. Word embeddings were used with the seq2seq model in the editing module of CLARA; such embeddings provide a fixed-length vector to the LSTM model, and we used pretrained word embeddings in our training procedure.
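The stated schedule (constant learning rate until epoch 50, then halved every 2 epochs) can be written as a small helper; 1-based epoch numbering is our assumption, since the text does not specify it.

```python
# Learning-rate schedule described in the training details: hold the base
# rate through epoch 50, then halve it every 2 epochs thereafter.

def learning_rate(epoch, base_lr=1e-3):
    if epoch <= 50:
        return base_lr
    halvings = (epoch - 50) // 2
    return base_lr * (0.5 ** halvings)

print(learning_rate(50), learning_rate(52), learning_rate(54))
```

With `base_lr=1e-3`, the rate stays at 1e-3 through epoch 51, drops to 5e-4 at epoch 52, 2.5e-4 at epoch 54, and so on.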
Pretraining CNN for X-ray data.
It has been shown that pretraining neural networks leads to better classification performance in various tasks. In other image captioning tasks, ResNets pretrained on the ImageNet dataset are often used instead of training the entire network from scratch. We therefore also pretrained a DenseNet model on multi-label classification with the publicly available ChestX-ray8 dataset.
The ChestX-ray8 dataset consists of 108,948 frontal-view X-ray images of 32,717 unique patients, with each image labeled with the occurrence of 14 common thorax diseases, where labels were text-mined from the associated radiological reports using natural language processing.
Encoder CNN Details for EEG data. The input EEG is usually 25-30 minutes long, so we divide it into 1-minute segments. For a 30-minute EEG, this chunking yields an input of dimension [30 x 19 x 6000], where there are 19 channels and 6000 data points per minute (100 Hz, 60 seconds). Each 19 x 6000 segment is passed through a CNN architecture that accepts multi-channel input. This CNN is composed of multiple convolution, batch normalization, and max-pooling blocks. Its output is a 1 x 512 feature vector obtained at the last layer of the network, a fully connected layer that produces the final representation.
These are the steps of the operations for the CNN with EEG input. In the following notation, Conv2D refers to a 2D convolution; DepthwiseConv2D refers to a depthwise spatial convolution; SeparableConv2D refers to a separable convolution consisting of a depthwise spatial convolution followed by a pointwise convolution. We denote C = number of channels, T = number of time points, F1 = number of filters. (1) Input EEG -> (C, T); (2) Reshape -> (1, C, T); (3) Conv2D -> (F1, C, T), kernel size = 64, filters = 8; (4) Batch normalization; (5) DepthwiseConv2D, number of spatial filters = 2; (6) Batch normalization; (7) Activation: ReLU; (8) AveragePool2D, pool size = (1, 4); (9) Dropout, rate = 0.5; (10) SeparableConv2D, filters = 16; (11) Batch normalization; (12) Activation: ReLU; (13) AveragePool2D, pool size = (1, 8); (14) Dropout, rate = 0.5; (15) Dense.
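Based on the listed operations, the feature-map shapes can be traced as below. "Same" padding for the convolutions is an assumption on our part, since the padding scheme is not stated.

```python
# Back-of-the-envelope trace of feature-map shapes through the listed CNN
# operations, assuming "same" padding. C=19 channels, T=6000 time points,
# F1=8 temporal filters, depth multiplier 2, F2=16 separable filters.

def trace_shapes(C=19, T=6000, F1=8, depth_mult=2, F2=16):
    shapes = [("input", (1, C, T))]
    shapes.append(("conv2d (temporal, same padding)", (F1, C, T)))
    shapes.append(("depthwise conv (collapses channels)", (F1 * depth_mult, 1, T)))
    T = T // 4
    shapes.append(("avgpool (1,4)", (F1 * depth_mult, 1, T)))
    shapes.append(("separable conv", (F2, 1, T)))
    T = T // 8
    shapes.append(("avgpool (1,8)", (F2, 1, T)))
    shapes.append(("flatten -> dense", (F2 * T,)))
    return shapes

for name, shape in trace_shapes():
    print(f"{name:40s} {shape}")
```

Under these assumptions, the two pooling stages reduce 6000 time points to 187, so the final dense layer maps the flattened 16 x 187 = 2,992 features to the 512-dimensional embedding.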
Anchor words used as classification labels
Anchor words are the words used by CLARA to trigger auto-completion by retrieving and editing sentences to produce the final report. These words are critical because they are used to extract the candidate sentences. We list the anchor words used in our experiments below. For images, we used the labels in CheXpert as the anchor words for X-ray report completion. For EEG, we use a list of terms obtained from the American Clinical Neurophysiology Society (ACNS) guidelines. The two sets of keywords are listed below.
X-ray anchor words include the following: No Finding; Enlarged Cardiomediastinum; Cardiomegaly; Lung Lesion; Airspace Opacity; Edema; Consolidation; Pneumonia; Atelectasis; Pneumothorax; Pleural Effusion; Pleural Other; Fracture.
EEG anchor words include Normality, Sleep, Generalized Slowing, Focal Slowing, Epileptiform Discharges, Drowsiness, Spindles, Vertex Waves, Seizure.
(A). CLARA can generate higher quality clinical reports
We compare CLARA with state-of-the-art baselines using the following experiments:
Report level auto-completion with predefined anchor words.
Report level auto-completion without predefined anchor words (i.e., anchor words are predicted). This experiment evaluates the scenario of fully automated report generation.
Sentence level auto-completion. This experiment simulates the real-world report auto-completion behavior where the recommendation is provided sentence by sentence.
Table 5 summarizes the report-level performance on both the X-ray image and EEG datasets. CLARA (predicted anchor words) outperforms the best baselines with a 17-30% improvement in CIDEr, which confirms the effectiveness of retrieval from the prototype repository. We can also see that with interactive guidance in the form of anchor words from clinicians, CLARA (defined anchor words) provides even better performance, which shows the importance of human input for report generation. To further understand the behavior of individual modules, we evaluate CLARA without the edit module (only sentence retrieval from existing reports), which still achieves better CIDEr than the baselines but is much lower than CLARA (predicted anchor words) utilizing both the retrieval and edit modules.
With sentence-by-sentence interactive report auto-completion using anchor words and prefix text, the performance of CLARA can be further improved. We evaluated CLARA with varying numbers of anchor words (1-5) and prefix sentences of variable length to understand the effect of increasing the number of anchor words. We present these sentence-level auto-completion results in Table 3. As the results show, increasing the number of anchor words yields higher scores, with performance improving by 1-2% per additional anchor word. In a real-world deployment of CLARA, clinicians can provide more input (anchor words) to obtain more accurate results, which is an advantage over current baselines where clinicians have no control over report generation.
Table 3: Sentence-level auto-completion results (CIDEr and BLEU-1 through BLEU-4).

| Dataset | Model | CIDEr | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
| --- | --- | --- | --- | --- | --- | --- |
| IU X-ray (image) | CNN-RNN | 0.294 | 0.216 | 0.124 | 0.087 | 0.066 |
| IU X-ray (image) | CLARA (1 anchor word) | 0.356 | 0.471 | 0.318 | 0.209 | 0.199 |
| IU X-ray (image) | CLARA (2 anchor words) | 0.367 | 0.484 | 0.334 | 0.212 | 0.218 |
| IU X-ray (image) | CLARA (3 anchor words) | 0.374 | 0.488 | 0.355 | 0.235 | 0.227 |
| IU X-ray (image) | CLARA (4 anchor words) | 0.379 | 0.495 | 0.358 | 0.243 | 0.234 |
| IU X-ray (image) | CLARA (5 anchor words) | 0.393 | 0.498 | 0.375 | 0.259 | 0.248 |
| MGH (EEG) | MP | 0.371 | 0.715 | 0.634 | 0.561 | 0.448 |
| MGH (EEG) | CLARA (1 anchor word) | 0.443 | 0.763 | 0.684 | 0.603 | 0.463 |
| MGH (EEG) | CLARA (2 anchor words) | 0.458 | 0.765 | 0.687 | 0.609 | 0.468 |
| MGH (EEG) | CLARA (3 anchor words) | 0.462 | 0.773 | 0.688 | 0.614 | 0.485 |
| MGH (EEG) | CLARA (4 anchor words) | 0.477 | 0.781 | 0.693 | 0.627 | 0.489 |
| MGH (EEG) | CLARA (5 anchor words) | 0.482 | 0.785 | 0.716 | 0.645 | 0.491 |
| TUH (EEG) | CLARA (1 anchor word) | 0.449 | 0.763 | 0.688 | 0.614 | 0.464 |
| TUH (EEG) | CLARA (2 anchor words) | 0.452 | 0.771 | 0.691 | 0.621 | 0.469 |
| TUH (EEG) | CLARA (3 anchor words) | 0.454 | 0.773 | 0.694 | 0.624 | 0.470 |
| TUH (EEG) | CLARA (4 anchor words) | 0.467 | 0.782 | 0.701 | 0.637 | 0.475 |
| TUH (EEG) | CLARA (5 anchor words) | 0.479 | 0.789 | 0.705 | 0.645 | 0.483 |
Table 4: Disease phenotype classification performance of the generated reports.

| Dataset | Model | Metric 1 | Metric 2 |
| --- | --- | --- | --- |
| IU X-ray (image) | CNN-RNN | 0.804 | 0.709 |
| IU X-ray (image) | CLARA (predicted anchor words) | 0.871 | 0.796 |
| IU X-ray (image) | CLARA (defined anchor words) | 0.894 | 0.804 |
| MGH (EEG) | CLARA (predicted anchor words) | 0.835 | 0.803 |
| MGH (EEG) | CLARA (defined anchor words) | 0.861 | 0.814 |
| TUH (EEG) | CLARA (predicted anchor words) | 0.827 | 0.786 |
| TUH (EEG) | CLARA (defined anchor words) | 0.834 | 0.804 |
Table 5: Report-level auto-completion results (CIDEr and BLEU-1 through BLEU-4).

| Dataset | Model | CIDEr | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
| --- | --- | --- | --- | --- | --- | --- |
| IU X-ray (image) | CNN-RNN | 0.294 | 0.216 | 0.124 | 0.087 | 0.066 |
| IU X-ray (image) | CLARA (without edit module) | 0.317 | 0.421 | 0.288 | 0.201 | 0.142 |
| IU X-ray (image) | CLARA (predicted anchor words) | 0.359 | 0.471 | 0.324 | 0.214 | 0.199 |
| IU X-ray (image) | CLARA (defined anchor words) | 0.374 | 0.489 | 0.356 | 0.225 | 0.234 |
| MGH (EEG) | MP | 0.367 | 0.714 | 0.644 | 0.563 | 0.443 |
| MGH (EEG) | CLARA (without edit module) | 0.382 | 0.691 | 0.651 | 0.564 | 0.405 |
| MGH (EEG) | CLARA (predicted anchor words) | 0.419 | 0.742 | 0.674 | 0.594 | 0.452 |
| MGH (EEG) | CLARA (defined anchor words) | 0.443 | 0.762 | 0.684 | 0.614 | 0.464 |
| TUH (EEG) | MP | 0.363 | 0.645 | 0.578 | 0.459 | 0.361 |
| TUH (EEG) | CLARA (without edit module) | 0.368 | 0.725 | 0.634 | 0.573 | 0.423 |
| TUH (EEG) | CLARA (predicted anchor words) | 0.399 | 0.769 | 0.635 | 0.601 | 0.455 |
| TUH (EEG) | CLARA (defined anchor words) | 0.425 | 0.784 | 0.659 | 0.624 | 0.483 |
(B). CLARA provides accurate disease phenotyping
We also evaluate the effectiveness of CLARA in disease phenotype prediction. In particular, we train a character-CNN classifier on the original reports written by doctors to predict disease phenotypes. This classifier is used to score the generated reports produced by the different baselines and CLARA, and we measure the accuracy of predicting different disease phenotypes. CLARA consistently outperforms the baselines. The results for X-ray are shown in Table 4.
(C). Clinical Expert Evaluation of CLARA The results of our models were evaluated by an expert neurologist in terms of their usefulness for clinical practice. In this setup, we measured the quality score metric for the generated reports; we evaluated only the EEG report generation task in this setting. We provided the expert with samples containing doctor-written reports, reports generated by the best-performing baselines, and reports generated by CLARA, presented side by side. Clinicians were asked to provide a quality score in the range of 0-5. As shown in Figure 6, CLARA obtained an average quality score of 3.74, compared to 2.52 for TAM (the best-performing baseline). These results indicate that the reports produced by CLARA are of higher clinical quality.
(D). Qualitative Analysis
We show sample results of clinical report generation using CLARA in Figure 4. Reports generated by CLARA show significant clinical accuracy and a granular understanding of the input image. As clinicians use CLARA with different anchor words to generate the report, it ensures the inclusion of important clinical findings such as “pleural effusion” and “pneumothorax”. Since anchor words are based on clinically significant findings, they push the report generation module toward clinical accuracy. The third and fourth columns of the figure show the differences and changes introduced by the edit module, which is able to modify the retrieved sentences with more details. For example, “focal consolidation” in row 1, “cholecystectomy” in row 2, and “granuloma on the right side” are important edits performed by the edit module of CLARA.
Medical report writing is important but labor-intensive for human doctors. Most existing work on medical report generation focuses on generating full reports without close human guidance, which is error-prone and does not follow the clinical workflow. In this work, we propose CLARA, a computational method supporting the clinical report auto-completion task, which interactively assists doctors in writing clinical reports in a sentence-by-sentence fashion. At its core, CLARA combines an information retrieval engine with neural networks: the most relevant sentences are retrieved from existing reports and then modified by neural networks. Our experiments show that CLARA produces higher-quality, clinically accurate reports. CLARA outperforms a variety of compelling baseline methods across tasks and datasets, with up to 35% improvement in CIDEr and BLEU-4 over the best baseline.
This work was in part supported by the National Science Foundation awards IIS-1418511, CCF-1533768 and IIS-1838042, and the National Institutes of Health awards R01 1R01NS107291-01 and R56HL138415.
Appendix A Supplementary
Lucene implements a variant of the Tf-Idf scoring model with the following factors:
- tf: term frequency in the document, a measure of how often a term appears in the document
- idf: inverse document frequency, a measure of how often the term appears across the index
- coord: number of terms in the query that were found in the document
- lengthNorm: measure of the importance of a term according to the total number of terms in the field
- queryNorm: normalization factor so that queries can be compared
- boost (index): boost of the field at index time
- boost (query): boost of the field at query time
In Lucene's scoring function these factors appear as:
- tf(t in d): term frequency factor for the term (t) in the document (d)
- idf(t): inverse document frequency of the term
- coord(q,d): score factor based on how many of the query terms are found in the specified document
- queryNorm(q): normalizing factor used to make scores between queries comparable
- t.getBoost(): field boost
- norm(t,d): encapsulates a few (indexing-time) boost and length factors
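Putting these factors together, Lucene's classic similarity computes, roughly, score(q,d) = coord(q,d) · queryNorm(q) · Σ over query terms t of tf(t in d) · idf(t)² · t.getBoost() · norm(t,d). The sketch below assumes the classic defaults (square-root term frequency, the classic idf formula, norm(t,d) reduced to length normalization, and boosts defaulting to 1); it is illustrative, not Lucene's implementation.

```python
import math

def lucene_score(query_terms, doc, doc_freq, num_docs, boosts=None):
    """Sketch of Lucene's classic Tf-Idf scoring:
    score(q,d) = coord(q,d) * queryNorm(q)
                 * sum over matched t of tf(t in d) * idf(t)^2
                   * t.getBoost() * norm(t,d).
    doc_freq maps each term to the number of documents containing it."""
    boosts = boosts or {}
    terms = doc.split()
    counts = {t: terms.count(t) for t in set(terms)}
    length_norm = 1.0 / math.sqrt(len(terms))        # norm(t,d) ~ lengthNorm

    def idf(t):
        return 1.0 + math.log(num_docs / (doc_freq.get(t, 0) + 1.0))

    matched = [t for t in query_terms if t in counts]
    coord = len(matched) / len(query_terms)          # coord(q,d)
    query_norm = 1.0 / math.sqrt(                    # queryNorm(q)
        sum((idf(t) * boosts.get(t, 1.0)) ** 2 for t in query_terms))

    total = sum(math.sqrt(counts[t])                 # tf(t in d)
                * idf(t) ** 2
                * boosts.get(t, 1.0)                 # t.getBoost()
                * length_norm
                for t in matched)
    return coord * query_norm * total
```

A document matching more query terms, or matching rarer terms, receives a higher score, which is what CLARA relies on when retrieving template sentences.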
Lucene steps from query to output
In this section, we describe some details of the search engine behind Lucene. The query is passed to the Searcher of the Lucene engine, beginning the scoring process. The Searcher then uses a Collector for the scoring and sorting of the search results. The following objects are involved in a search: (1) the Weight object of the Query, an internal representation of the Query that allows the Query to be reused by the Searcher; (2) the Searcher that initiated the call; (3) a Filter for limiting the result set; (4) a Sort object for specifying the sorting criteria for the results when the standard score-based sort method is not desired.
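The flow above can be sketched with a toy inverted index: the query reaches a searcher, candidate documents are gathered from posting lists, each candidate is scored, and a collector keeps the top results sorted by score. This is a simplified analogue of Lucene's Searcher/Collector pipeline, with a deliberately naive term-overlap score standing in for the full similarity.

```python
import heapq

def build_index(docs):
    """Inverted index: term -> set of doc ids containing it."""
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def search(query, docs, index, top_k=2):
    """Toy analogue of Lucene's search flow."""
    terms = query.split()
    # (1) gather candidates: union of the posting lists for the query terms
    candidates = set().union(*(index.get(t, set()) for t in terms))
    # (2) score each candidate: here, the number of matching query terms
    scored = ((sum(t in docs[d].split() for t in terms), d)
              for d in candidates)
    # (3) collect: keep the top-k results, highest score first
    return heapq.nlargest(top_k, scored)
```

Restricting scoring to documents found in the posting lists is what makes the retrieval step fast enough for interactive auto-completion.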
Simulated auto-completion: The auto-completion system requires a trigger from the user to initiate the process. These triggers are initialized with prefix or anchor words provided by the user. In a real-world deployment of our model, doctors will provide anchor words or a prefix to CLARA to trigger completion of the sentences. While developing the method, however, we cannot expect to train the model with input from clinicians, so we created a simulated environment where the anchor words are predefined for each report. The anchor-word creation steps are detailed in the section above.
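One way to build such a simulated environment is sketched below: each predefined anchor word found in a report sentence becomes a trigger, optionally paired with the sentence's first few words as a simulated prefix. The function and field names are illustrative, not CLARA's actual preprocessing code.

```python
def simulated_triggers(report_sentences, anchor_vocab, prefix_len=2):
    """Create (trigger -> target sentence) training pairs without
    clinician input, using predefined anchor words per report."""
    pairs = []
    for sent in report_sentences:
        words = sent.lower().rstrip(".").split()
        prefix = " ".join(words[:prefix_len])  # simulated partial sentence
        for anchor in anchor_vocab:
            if anchor in words:
                pairs.append({"anchor": anchor,
                              "prefix": prefix,
                              "target": sent})
    return pairs
```

During development, the model is trained and evaluated on these simulated (anchor, prefix) triggers in place of live clinician input.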
- (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: item 4, Table 3, Table 4, Table 5.
- (2011) Context-sensitive query auto-completion. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11, New York, NY, USA, pp. 107–116. Cited by: Related Work.
- (2019) EEGtoText: learning to write medical reports from eeg recordings. In Machine Learning for Healthcare 2019, Cited by: Related Work, item 5, Table 3, Table 4, Table 5.
- (2016) A survey of query auto completion in information retrieval. Foundations and Trends® in Information Retrieval 10 (4), pp. 273–363. Cited by: Related Work.
- (2017) Show-and-fool: crafting adversarial examples for neural image captioning. CoRR abs/1712.02051. External Links: Cited by: Related Work.
- (2018) A neural compositional paradigm for image captioning. CoRR abs/1810.09630. External Links: Cited by: Related Work.
- (2015) Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association 23 (2), pp. 304–310. Cited by: Experimental Setup.
- (2010) Every picture tells a story: generating sentences from images. In European conference on computer vision, pp. 15–29. Cited by: Related Work.
- (2018) Producing radiologist-quality reports for interpretable artificial intelligence. CoRR abs/1806.00340. External Links: Cited by: Related Work.
- (2018) Towards automatic report generation in spine radiology using weakly supervised framework. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part IV, pp. 185–193. External Links: Cited by: Related Work.
- (2013) American clinical neurophysiology society's standardized critical care eeg terminology: 2012 version. Journal of clinical neurophysiology 30 (1), pp. 1–27. Cited by: Anchor words used as classification labels.
- (1997) Long short-term memory. Neural computation 9. Cited by: The CLARA Framework.
- (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: The CLARA Framework, Experimental Setup, Experimental Setup.
- (2019) Chexpert: a large chest radiograph dataset with uncertainty labels and expert comparison. In Thirty-Third AAAI Conference on Artificial Intelligence, Cited by: item 1, Experimental Setup, Anchor words used as classification labels.
- (2018) Personalized language model for query auto-completion. arXiv preprint arXiv:1804.09661. Cited by: Related Work.
- (2017) On the automatic generation of medical imaging reports. arXiv preprint arXiv:1711.08195. Cited by: Introduction, Related Work, item 5, Table 3, Table 5.
- (2015) Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3128–3137. Cited by: Related Work.
- (2014) Adam: a method for stochastic optimization. ICLR. Cited by: Experimental Setup.
- (2017) A hierarchical approach for generating descriptive image paragraphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 317–325. Cited by: Related Work.
- (2019) Knowledge-driven encode, retrieve, paraphrase for medical image report generation. arXiv preprint arXiv:1903.10122. Cited by: Related Work, The CLARA Framework, The CLARA Framework, item 4, Table 5.
- (2017) Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. CoRR abs/1710.04806. External Links: Cited by: The CLARA Framework.
- (2011) Composing simple image descriptions using web-scale n-grams. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pp. 220–228. Cited by: Related Work.
- (2018) Hybrid retrieval-generation reinforced agent for medical image report generation. In Advances in Neural Information Processing Systems, pp. 1537–1547. Cited by: Related Work, item 3, Table 3, Table 5.
- (2019) Clinically accurate chest x-ray report generation. CoRR abs/1904.02633. External Links: Cited by: Related Work.
- (2017) Knowing when to look: adaptive attention via a visual sentinel for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 375–383. Cited by: item 2.
- (2018) Neural baby talk. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7219–7228. Cited by: Related Work.
- (2019) Lucene. Note: https://lucene.apache.org/ [Online; accessed 14-May-2019]. Cited by: Introduction, The CLARA Framework.
- (2014) Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632. Cited by: Related Work.
- (2016) The temple university hospital eeg data corpus. Frontiers in neuroscience 10, pp. 196. Cited by: Experimental Setup.
- (2004) Neurology atlas 2004. URL www.who.int/mentalhealth/neurology/neurology_atlas_review_references.pdf. Cited by: Introduction.
- (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Cited by: Experimental Setup.
- (2017) Automatic differentiation in pytorch. Cited by: Experimental Setup.
- (2009) Integrating the probabilistic models bm25/bm25f into lucene. arXiv preprint arXiv:0911.5046. Cited by: The CLARA Framework.
- (2016) Self-critical sequence training for image captioning. CoRR abs/1612.00563. External Links: Cited by: Related Work.
- (2017) Prototypical networks for few-shot learning. CoRR abs/1703.05175. External Links: Cited by: The CLARA Framework.
- (2015) A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 553–562. Cited by: Related Work.
- (2015) End-to-end memory networks. In Advances in neural information processing systems, pp. 2440–2448. Cited by: The CLARA Framework.
- (2014) Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence and K. Q. Weinberger (Eds.), pp. 3104–3112. Cited by: Introduction, 4th item, The CLARA Framework.
- (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: The CLARA Framework.
- (2015) Cider: consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4566–4575. Cited by: Experimental Setup.
- (2016) Captioning images with diverse objects. CoRR abs/1606.07770. External Links: Cited by: Related Work.
- (2015) Sequence to sequence-video to text. In Proceedings of the IEEE international conference on computer vision, pp. 4534–4542. Cited by: item 2, Table 3, Table 4, Table 5.
- (2014) Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729. Cited by: item 1, Table 3, Table 4, Table 5.
- (2015) Show and tell: a neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3156–3164. Cited by: Related Work, item 1.
- (2017) Chestx-ray8: hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2097–2106. Cited by: Experimental Setup.
- (2015) Memory networks. In ICLR, Cited by: The CLARA Framework.
- (2015) Show, attend and tell: neural image caption generation with visual attention. In International conference on machine learning, pp. 2048–2057. Cited by: Related Work.
- (2010) I2t: image parsing to text description. Proceedings of the IEEE 98 (8), pp. 1485–1508. Cited by: Related Work.
- (2015) Describing videos by exploiting temporal structure. In Proceedings of the IEEE international conference on computer vision, pp. 4507–4515. Cited by: Figure 5, item 3, Table 3, Table 4, Table 5.
- (2016) Image captioning with semantic attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4651–4659. Cited by: Related Work.
- (2015) Character-level convolutional networks for text classification. In Advances in neural information processing systems, pp. 649–657. Cited by: Results.
- (2018) Learning to summarize radiology findings. CoRR abs/1809.04698. External Links: Cited by: Related Work.
- (2017) MDNet: A semantically and visually interpretable medical image diagnosis network. CoRR abs/1707.02485. External Links: Cited by: Related Work.
- (2006) Inverted files for text search engines. ACM computing surveys (CSUR) 38 (2), pp. 6. Cited by: The CLARA Framework.