Examining Deep Learning Models with Multiple Data Sources for COVID-19 Forecasting
The COVID-19 pandemic represents the most significant public health disaster since the 1918 influenza pandemic. During pandemics such as COVID-19, timely and reliable spatio-temporal forecasting of epidemic dynamics is crucial. Deep learning-based time series models for forecasting have recently gained popularity and have been successfully used for epidemic forecasting. Here we focus on the design and analysis of deep learning-based models for COVID-19 forecasting. We implement multiple recurrent neural network-based deep learning models and combine them using the stacking ensemble technique. In order to incorporate the effects of multiple factors in COVID-19 spread, we consider multiple sources such as COVID-19 testing data and human mobility data for better predictions. To overcome the sparsity of training data and to address the dynamic correlation of the disease, we propose clustering-based training for high-resolution forecasting. These methods help us identify similar trends among certain groups of regions arising from various spatio-temporal effects. We examine the proposed method for forecasting weekly COVID-19 new confirmed cases at the county, state, and country level. A comprehensive comparison between different time series models in the COVID-19 context is conducted and analyzed. The results show that simple deep learning models can achieve comparable or better performance when compared with more complicated models. We are currently integrating our methods as part of the weekly forecasts that we provide to state and federal authorities.
The COVID-19 pandemic is the worst outbreak we have seen since 1918; it has caused over 22 million confirmed cases globally and over 791,000 deaths in more than 200 countries.
Our contributions. Our work focuses on exploring deep learning-based methods that incorporate multiple data sources for weekly, 4-weeks-ahead forecasting of COVID-19 new confirmed cases at multiple geographical resolutions, including the country, state, and county level. In the context of COVID-19, the problem is more complicated than seasonal influenza forecasting for the following reasons: (i) very sparse training data for each region; (ii) noisy surveillance data due to heterogeneity in epidemiological context, e.g., disease spreading timelines and testing prevalence in different regions; (iii) a system constantly in churn: individual behavioral adaptation, policies, and disease dynamics are constantly co-evolving. Given these challenges, we examine different types of time series models and propose an ensemble framework that combines simple deep learning models using multiple sources such as COVID-19 testing data and human mobility data. The multi-source data allows us to capture the above-mentioned factors more effectively. To overcome the data sparsity problem, we propose clustering-based training methods to augment the training data for each region. We group spatial regions based on trend similarity and infer a model per cluster. Among other things, this avoids overfitting due to sparse training data. As an additional benefit, it helps explicitly uncover the spatial correlation across regions by training models on similar time series. Our main contributions are summarized below:
First, we systematically examine time series-based deep learning models for COVID-19 forecasting and propose clustering-based training methods to augment sparse and noisy training data for high-resolution regions, which avoids overfitting and explicitly uncovers the similar spreading trends of certain groups of regions.
Second, we implement a stacking ensemble framework to combine multiple deep learning models and multiple sources for better performance. Stacking is a natural way to combine multiple methods and data sources.
Third, we analyze the ability of our method and other published approaches to forecast weekly new confirmed cases at the country, state, and county level. The results show that our ensemble model outperforms the individual models as well as several classic machine learning and state-of-the-art deep learning models.
Finally, we conduct a comprehensive comparison among mechanistic models, statistical models and deep learning models. The analysis shows that for COVID-19 forecasting deep learning-based models can capture the dynamics and have better generalization capability as opposed to the mechanistic and statistical baselines. Simple deep learning models such as simple recurrent neural networks can achieve better performance than complex deep learning models like graph neural networks for high resolution forecasting.
II Related Work
COVID-19 is a very active area of research, making it impossible to cover all recent manuscripts; we therefore cover only the most relevant papers here.
II-A COVID-19 forecasting by mechanistic methods
Mechanistic methods have been a mainstay for COVID-19 forecasting due to their ability to represent the underlying disease transmission dynamics and to incorporate diverse interventions. They enable counterfactual forecasting, which is important for planning future government interventions to control the spread. Forecasting performance depends on the assumed underlying disease model. Yang et al. [yang2020modified] use a modified susceptible(S)-exposed(E)-infected(I)-recovered(R) (SEIR) model for predicting the COVID-19 epidemic peaks and sizes in China. Anastassopoulou et al. [anastassopoulou2020data] provide estimations of the basic reproduction number and the per-day infection mortality and recovery rates using a susceptible(S)-infected(I)-dead(D)-recovered(R) (SIDR) model. Giordano et al. [giordano2020modelling] propose a new susceptible(S)-infected(I)-diagnosed(D)-ailing(A)-recognized(R)-threatened(T)-healed(H)-extinct(E) (SIDARTHE) model to help plan an effective control strategy. Yamana et al. [yamana2020projection] use a metapopulation SEIR model for US county-resolution forecasting. Chang et al. [chang2020modelling] develop an agent-based model for a fine-grained computational simulation of the ongoing COVID-19 pandemic in Australia. Kai et al. [kai2020universal] present a stochastic dynamic network-based compartmental SEIR model and an individual agent-based model to investigate the impact of universal face mask wearing on the spread of COVID-19.
II-B COVID-19 forecasting by time series models
Time series models, such as statistical models and deep learning models, are popular in the epidemic domain for their simplicity and forecasting accuracy. One big challenge is the lack of sufficient training data in the context of COVID-19 dynamics. Another challenge is that the surveillance data is extremely noisy (with hard-to-model noise) due to rapidly evolving epidemics. However, as additional data becomes available and surveillance systems mature, these models become more promising. Harvey et al. [harvey2020time] propose a new class of time series models based on generalized logistic growth curves that reflect COVID-19 trajectories. Petropoulos et al. [petropoulos2020forecasting] produce forecasts using models from the exponential smoothing family. Ribeiro et al. [ribeiro2020short] evaluate multiple regression models and stacking-ensemble learning for forecasting COVID-19 cumulative confirmed cases one, three, and six days ahead in ten Brazilian states. Hu et al. [hu2020artificial] propose a modified auto-encoder model for real-time forecasting of the size, length, and ending time of the epidemic in China. Chimmula et al. [chimmula2020time] use LSTM networks to predict COVID-19 transmission. Arora et al. [arora2020prediction] use LSTM-based models to predict positive reported cases for 32 states and union territories of India. Magri et al. [magri2020first] propose a data-driven model trained with both data and first principles. Dandekar et al. [dandekar2020neural] use neural network-aided quarantine control models to estimate the global COVID-19 spread.
II-C Deep learning-based epidemic forecasting
Recurrent neural networks (RNNs) have been demonstrated to capture the dynamic temporal behavior of a time sequence and have thus become popular in recent years for seasonal influenza-like-illness (ILI) forecasting. Volkova et al. [volkova2017forecasting] build an LSTM model for short-term ILI forecasting using CDC ILI and Twitter data. Venna et al. [venna2019novel] propose an LSTM-based method that integrates the impacts of climatic factors and geographical proximity. Wu et al. [wu2018deep] construct CNNRNN-Res, combining RNNs and convolutional neural networks to fuse information from different sources. Wang et al. [wang2019defsi, wang2020tdefsi] propose TDEFSI, combining deep learning models with causal SEIR models to enable high-resolution ILI forecasting with little or no high-resolution training data. Adhikari et al. [adhikari2019epideep] propose EpiDeep for seasonal ILI forecasting by learning meaningful representations of incidence curves in a continuous feature space. Deng et al. [deng2019graph] design cola-GNN, a cross-location attention-based graph neural network for forecasting ILI. Regarding COVID-19 forecasting, Kapoor et al. [kapoor2020examining] examine a forecasting approach for COVID-19 daily case prediction that uses graph neural networks and mobility data. Gao et al. [gao2020stan] propose STAN, a spatio-temporal attention network. Ramchandani et al. [ramchandani2020deepcovidnet] present DeepCOVIDNet to compute equidimensional representations of multivariate time series. These works examine their models on daily forecasting at the US state or county level.
Our work focuses on time series deep learning models for COVID-19 forecasting that yield weekly forecasts at multiple resolution scales and provide 4-weeks-ahead forecasts (equal to 28 days ahead in the context of daily forecasting). We use an ensemble model to combine multiple simple deep learning models. We show that, compared with state-of-the-art time series models, simple recurrent neural network-based models can achieve better performance. More importantly, we show that the ensemble method is an effective way to mitigate model overfitting caused by the very small and noisy training data.
III-A Problem Formulation
We formulate the COVID-19 new confirmed case forecasting problem as a regression task with multi-source time series as the input. There are N regions in total. Each region is associated with a multi-source input time series over a time window of size w. For a region i, at time step t, the multi-source input is denoted as x_i^t ∈ R^D, where D is the number of features; the training data consists of such windows paired with their targets. The objective is to predict the COVID-19 new confirmed case count at a future time point t'+h, where h refers to the horizon of the prediction. We are interested in a predictor that forecasts the new confirmed case count at time t'+h, denoted as y_i^{t'+h}, by taking (x_i^{t'-w+1}, ..., x_i^{t'}) as the input, where t' is the most recent time of data availability:
ŷ_i^{t'+h} = f(x_i^{t'-w+1}, ..., x_i^{t'}; θ),
where θ denotes the parameters of the predictor and ŷ_i^{t'+h} denotes the prediction of y_i^{t'+h}.
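As a concrete illustration of this setup, the sketch below builds sliding-window training samples from one region's weekly multivariate series. The helper name and the convention that new confirmed cases occupy feature column 0 are assumptions for illustration, not the paper's code.

```python
import numpy as np

def make_samples(series, w, h):
    """Build (input window, target) pairs from one region's weekly
    multivariate series of shape [T, D]; the target is the new-case
    count (assumed to be column 0 here) h weeks after the window ends."""
    X, y = [], []
    T = len(series)
    for t in range(T - w - h + 1):
        X.append(series[t:t + w])           # window of w weeks, D features
        y.append(series[t + w + h - 1, 0])  # h-weeks-ahead case count
    return np.array(X), np.array(y)

# 10 weeks of toy data with D = 2 features
data = np.arange(20, dtype=float).reshape(10, 2)
X, y = make_samples(data, w=4, h=2)
```

With T = 10, w = 4, and h = 2 this yields 5 training samples, each a 4-week window of 2 features.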
III-B Recurrent Neural Networks (RNNs)
For brevity, we assume a region is given, and thus omit the region subscript in this subsection. An RNN model consists of k stacked RNN layers. Each RNN layer l consists of w cells, denoted as c_1^(l), ..., c_w^(l). The input is (x^1, ..., x^w), and the output from the last layer is denoted as (h_1^(k), ..., h_w^(k)). Let d_h be the dimension of the hidden state. For the first layer (l = 1), cell c_t^(1) works as:
h_t^(1) = g(W^(1) x^t + U^(1) h_{t-1}^(1) + b^(1)),
where g is the activation function; W^(1), U^(1), and b^(1) are learned weights and biases; h_t^(1) is the output of c_t^(1) and h_{t-1}^(1) comes from c_{t-1}^(1). The cell computation is similar in layer l > 1, but with x^t replaced by h_t^(l-1). The first RNN layer takes (x^1, ..., x^w) as the input, the second layer takes (h_1^(1), ..., h_w^(1)) as the input, and the remaining layers behave in the same manner. The RNN module can be replaced by a Gated Recurrent Unit (GRU) [cho2014learning] or Long Short-Term Memory (LSTM) [hochreiter1997long], which mitigate the short-term memory and vanishing-gradient problems of vanilla RNNs.
The output of the k-stacked RNN layers is fed into a fully connected layer:
ŷ = g_o(V h_w^(k) + c),
where d_o is the output dimension, V ∈ R^{d_o × d_h}, c ∈ R^{d_o}, and g_o is a linear function.
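The stacked-RNN computation described above can be sketched as a plain forward pass. This is a minimal NumPy illustration of the recurrence and the final linear layer, with randomly initialized (untrained) weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(X, W, U, b):
    """One vanilla RNN layer: h_t = tanh(W x_t + U h_{t-1} + b).
    X: [T, d_in] -> hidden states [T, d_h]."""
    T, d_h = X.shape[0], U.shape[0]
    h = np.zeros(d_h)
    H = np.zeros((T, d_h))
    for t in range(T):
        h = np.tanh(W @ X[t] + U @ h + b)
        H[t] = h
    return H

d_in, d_h, T = 3, 8, 5
X = rng.normal(size=(T, d_in))
# two stacked layers, then a linear output layer on the last hidden state
H1 = rnn_layer(X,  rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h))
H2 = rnn_layer(H1, rng.normal(size=(d_h, d_h)),  rng.normal(size=(d_h, d_h)), np.zeros(d_h))
y_hat = rng.normal(size=(1, d_h)) @ H2[-1]   # dense linear output
```

In practice the recurrence would be provided by a deep learning framework's RNN/GRU/LSTM layers; the loop above only makes the per-cell computation explicit.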
III-C Multi-source Attention RNNs
The multi-source attention RNN model consists of multiple parallel k-stacked RNN branches, one per feature, each of which encodes the time series of that feature. Assume the output of branch j is h_j, where we omit the region subscript for brevity. An attention layer is used to measure the impact of the multiple sources on new confirmed cases. We assume the time series of new confirmed cases is encoded in branch p, and we define the attention coefficient a_j as the effect of feature j on the target feature:
a_j = ReLU(w_a^T [h_p; h_j] + b_a),
where w_a and b_a are learned parameters and ReLU is the rectified linear unit. The output of the attention layer is then:
o = tanh(W_o [h_p; Σ_j a_j h_j] + b_o),
where W_o and b_o are learned parameters and tanh is the hyperbolic tangent function. The output layer is a dense layer that outputs ŷ:
ŷ = V o + c,
where V and c are learned parameters and the activation is linear. In our paper, all features have time series of the same length. However, the multi-source attention RNN model also supports inputs whose features have time series of different lengths, which is advantageous when the availability of the various factors is heterogeneous.
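The branch-attention idea can be illustrated as follows. This is a hedged sketch: the ReLU scorer, the normalization, and the weighted context vector are plausible stand-ins, and the paper's exact parameterization may differ.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def feature_attention(h_target, branch_states, w_a):
    """Score each feature branch's encoding against the target branch
    (new confirmed cases) with a ReLU scorer, normalize the scores,
    and form a weighted context vector over the branches."""
    scores = np.array([relu(w_a @ np.concatenate([h_target, h_j]))
                       for h_j in branch_states])
    alpha = scores / (scores.sum() + 1e-8)           # attention coefficients
    context = (alpha[:, None] * branch_states).sum(axis=0)
    return alpha, context

rng = np.random.default_rng(1)
d = 4
branches = rng.normal(size=(3, d))   # encodings of 3 feature branches
h_t = rng.normal(size=d)             # encoding of the target (new cases)
alpha, ctx = feature_attention(h_t, branches, rng.normal(size=2 * d))
```

Because each branch encodes one feature independently, branches with shorter histories can still be encoded and attended over, which matches the heterogeneous-availability point above.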
III-D Clustering-based Training
Deep learning models usually require a large amount of training data, which is not available in the context of COVID-19. In particular, for regions where the pandemic starts late, there are only a few valid data points for weekly forecasting. Thus training a single model for each such region, which we call vanilla training, is highly susceptible to overfitting. One modeling strategy is to train a model for a group of selected regions, which to some extent overcomes the data sparsity problem. It is likely that groups of regions exhibit strong correlations due to various spatio-temporal effects and geographical or demographic similarity. We explore a clustering-based approach that simultaneously learns COVID-19 dynamics from multiple regions within a cluster and infers a model per cluster. Various types of similarity metrics can be used to uncover the trend similarity, allowing for an explainable time series forecasting framework.
Generalizing the earlier problem formulation, the historically available time series for a region spans the period for which surveillance data is available; this span grows as new data arrives and varies across regions. The clustering process aims to partition the set of regional time series into disjoint clusters.
In our work, the trend is represented as the time series of new confirmed cases, and we cluster the time series in two ways: geography-based clustering (geo-clustering) and algorithm-based clustering (alg-clustering). Geo-clustering: regions are clustered based on their geographical proximity, e.g., counties are partitioned based on their state codes for the US. We propose this method due to differences across regions with respect to size, population density, epidemiological context, and how policies are being implemented; we thus assume that regions belonging to the same jurisdiction have strongly related COVID-19 time series. Alg-clustering: clustering using (i) k-means [hartigan1979algorithm], which partitions observations into clusters such that each observation belongs to the cluster with the nearest mean; (ii) time series k-means (tskmeans) [huang2016time], which clusters time series data using smooth subspace information; and (iii) kshape [paparrizos2015k], which uses a normalized version of the cross-correlation measure in order to consider the shapes of time series while comparing them. Note that k-means requires that the time series to be clustered have the same length, while geo-clustering, tskmeans, and kshape allow clustering on time series of different lengths. Alg-clustering discovers implicit correlations of epidemic trends and does not assume any geographical knowledge. We refer to the above methods collectively as the clustering-based training methods.
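A minimal version of alg-clustering can be sketched with plain k-means on min-max-normalized case curves; tskmeans and kshape would substitute different distance measures, and the deterministic initialization here is purely for reproducibility.

```python
import numpy as np

def minmax(curves):
    """Min-max normalize each curve to [0, 1] before clustering."""
    lo = curves.min(axis=1, keepdims=True)
    hi = curves.max(axis=1, keepdims=True)
    return (curves - lo) / (hi - lo + 1e-8)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means on equal-length normalized case curves; a
    stand-in for the k-means / tskmeans / kshape options discussed above
    (deterministic spread-out initialization for reproducibility)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
# two obviously different trend shapes: rising vs. falling curves
rising = np.cumsum(np.ones((5, 12)), axis=1) + rng.normal(0, 0.1, (5, 12))
falling = rising[:, ::-1]
labels = kmeans(minmax(np.vstack([rising, falling])), k=2)
```

Each cluster's member curves would then be pooled into one training set, and one model trained per cluster.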
III-E Stacking Ensemble
Ensemble learning is primarily used to improve model performance; Ren et al. [ren2016ensemble] present a comprehensive review. In this paper, we implement a stacking ensemble: a separate dense neural network is trained using the predictions of the individual models as its inputs. We use leave-one-out cross validation to train and predict for each region: for each target value, we train the ensemble model using the training samples from the same region at other time points.
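The stacking idea can be sketched as follows. The paper trains a small dense network as the second-stage learner; a linear least-squares stacker is shown here as the minimal version, with synthetic base-model predictions standing in for the individual models.

```python
import numpy as np

# Stacking sketch: base-model predictions become the features of a
# second-stage learner that outputs the final prediction.
rng = np.random.default_rng(3)
y_true = rng.uniform(100, 200, size=40)                 # 40 weekly targets
base_preds = np.stack([y_true + rng.normal(0, s, 40)    # 3 noisy base models
                       for s in (5.0, 10.0, 20.0)], axis=1)

A = np.hstack([base_preds, np.ones((40, 1))])           # predictions + intercept
w, *_ = np.linalg.lstsq(A, y_true, rcond=None)          # fit the stacker
ens = A @ w                                             # ensemble predictions

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
```

Because selecting any single base model is one feasible linear combination, the fitted stacker's training RMSE can never exceed the best base model's.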
III-F Probabilistic Forecasting
In the epidemic forecasting domain, probabilistic forecasting is important for capturing the uncertainty of the disease dynamics and for better supporting public health decision making. We implement MCDropout [gal2016dropout] for each individual predictor to estimate prediction uncertainty. The ensemble predictions, however, are point estimates by the definition of stacking.
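A minimal MCDropout sketch: dropout is kept active at prediction time and the network is run many times, so the spread of the outputs estimates predictive uncertainty. The one-hidden-layer network and random weights below are illustrative assumptions.

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.2, n_samples=50, seed=4):
    """Run a small network n_samples times with a fresh dropout mask
    each pass; return the mean and spread of the stochastic outputs
    (Gal & Ghahramani, 2016)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(W1 @ x, 0.0)          # hidden layer, ReLU
        mask = rng.random(h.shape) >= p      # random dropout mask
        h = h * mask / (1.0 - p)             # inverted dropout scaling
        preds.append(float(W2 @ h))
    preds = np.array(preds)
    return preds.mean(), preds.std()

rng = np.random.default_rng(5)
mean, std = mc_dropout_predict(rng.normal(size=8),
                               rng.normal(size=(16, 8)),
                               rng.normal(size=16))
```

The 50 Monte Carlo predictions used in the experiments correspond to `n_samples=50` here.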
III-G Proposed Framework
Fig. 1 shows the framework of the proposed method. It works as follows: (1) we choose a geographical scale and resolution, e.g. counties in the US; (2) we collect and process multi-source training data; (3) we cluster regions into certain groups based on their similarities between time series of new confirmed cases; (4) we train multiple predictors per cluster and ensemble individual predictors to make final predictions.
Multiple data sources
In order to model the co-evolution of multiple factors in COVID-19 spread, we incorporate the following data sources in our models. COVID-19 Surveillance Data [uva2020uva] and the Case Count Growth Rate (CGR) quantify case counts and case count changes in the COVID-19 time series. COVID-19 Testing Data [jhu2020covid], the Testing Rate (TR), and the Test Positive Rate (TPR) quantify COVID-19 testing coverage in each region. The Google COVID-19 Aggregated Mobility Research Dataset [kraemer2020mapping], the Flow Reduction Rate (FRR), and the Social Distancing Index (SDI) quantify the anonymized weekly mobility flow (MF) and flow changes between and within regions. This set of sources can be expanded by adding new data sources, and the derived features are generated by preprocessing the raw sources. Details of the data description and generation are given in Section IV-A.
Multiple RNN-based models
By combining different data sources (single feature, multiple features, attention features), RNN modules (RNN, GRU, LSTM), and training methods (vanilla, geo, kmeans, tskmeans, kshape), we implement multiple individual models. At the country, US state, and US county levels, the models include: RNN, GRU, and LSTM, which use vanilla training with a single feature; RNN-m, GRU-m, and LSTM-m, which use vanilla training with multiple features; and RNN-att, GRU-att, and LSTM-att, which are attention-based models using vanilla training with multiple features. At the US county level, to investigate the effect of clustering-based training, we implement additional models using the RNN module and a single feature: RNN-geo, RNN-kmeans, RNN-tskmeans, and RNN-kshape. We analyze the effects by varying the clustering method while fixing the other factors; other combinations of modules, features, and training methods are therefore omitted in this work. Note that the set of individual models is not limited to those implemented in this paper; it can be expanded by adding or improving upon any of the individual components.
Training and forecasting
Algorithm 1 presents how the proposed framework works. We first preprocess the collected data sources based on data availability at the different resolutions. Each feature takes the form of a time series of weekly data points at a given geographical resolution. We design various models for the different resolutions. Next, each model is trained using its corresponding cluster of training data. For a region, given an input window, each individual model produces a prediction; the outputs of the individual models are then combined using the stacking ensemble, which produces the final prediction for that region and week.
For the single-feature setting, we use a recursive forecasting approach to make multi-step forecasts: the most recent prediction is appended to the input window for the next-step forecast. For the multi-feature setting, which includes exogenous time series as input, we train a separate model for each step-ahead horizon.
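The recursive scheme can be sketched in a few lines; `predict_one` below is a stand-in for a trained one-step model (here a toy rule that continues the last weekly change).

```python
def recursive_forecast(window, predict_one, steps):
    """Recursive multi-step forecasting: feed each prediction back into
    the input window for the next step."""
    window = list(window)
    out = []
    for _ in range(steps):
        nxt = predict_one(window)
        out.append(nxt)
        window = window[1:] + [nxt]   # slide the window forward
    return out

# toy one-step "model": continue the last observed weekly change
preds = recursive_forecast([10, 12, 14],
                           lambda w: w[-1] + (w[-1] - w[-2]),
                           steps=4)
```

Note that recursion compounds one-step errors, which is one reason the multi-feature setting instead trains one model per horizon.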
IV Experiment Setup
COVID-19 surveillance data is obtained via the UVA COVID-19 surveillance dashboard [uva2020uva]. It contains daily confirmed case (CF) and death (DT) counts at the county/state resolution in the US and national-level data for other countries. Daily case and death counts are further aggregated to weekly counts.
Case count growth rate (CGR): Denoting the new confirmed/death case count at week t as C_t, the CGR of week t is computed as CGR_t = (C_t + 1)/(C_{t-1} + 1), where we add 1 to smooth zero counts. We compute the confirmed CGR (CCGR) and death CGR (DCGR).
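Assuming the +1 smoothing enters as a simple ratio of successive smoothed counts (one plausible form), CGR can be computed as:

```python
def cgr(counts):
    """Weekly case-count growth rates with +1 smoothing for zero counts;
    the exact smoothing form is an assumption of this sketch."""
    return [(counts[t] + 1) / (counts[t - 1] + 1)
            for t in range(1, len(counts))]

rates = cgr([0, 9, 19, 19])
```

The +1 term keeps the ratio finite for weeks with zero reported cases.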
COVID-19 testing data is obtained via the JHU COVID-19 tracking project [jhu2020covid]. It includes data such as positive and negative test counts at the state and national level for the US. We compute the testing rate per 100K population (TR) and the test positive rate (TPR), i.e., positive/(positive+negative).
The Google COVID-19 Aggregated Mobility Research (MF) Dataset [kraemer2020mapping] contains anonymized relative weekly mobility flows aggregated over users within 5 km cells. For a given region and week, the outgoing flow is the total flow from that region to all regions, and the incoming flow is defined analogously. The flow data can be aggregated to the county/state/country level and covers most countries in the world. In our experiments, we work with outgoing flows, since MF is mostly symmetric.
Flow Reduction Rate (FRR) [adiga2020interplay] measures the impact of social distancing by comparing levels of connectivity before and after the spread of the pandemic. Given a region, we compute the average outgoing MF f_0 during the pre-pandemic period (the first 6 weeks of 2020) and then compute the weekly FRR as FRR_t = (f_0 - f_t)/f_0, where f_t is the outgoing MF during week t.
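One plausible form, consistent with the description above, computes FRR as the relative drop from the pre-pandemic average flow; the six-week baseline follows the text, while the exact normalization is an assumption of this sketch.

```python
def frr(weekly_flows, n_baseline_weeks=6):
    """Flow Reduction Rate sketch: the fraction by which each week's
    outgoing mobility flow falls below the pre-pandemic average
    (here the first n_baseline_weeks of the series)."""
    f0 = sum(weekly_flows[:n_baseline_weeks]) / n_baseline_weeks
    return [(f0 - f) / f0 for f in weekly_flows[n_baseline_weeks:]]

# six baseline weeks of flow 100, then two reduced weeks
vals = frr([100, 100, 100, 100, 100, 100, 80, 50])
```

A value of 0 means no reduction relative to the baseline; values approaching 1 mean near-total reduction in outgoing flow.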
Social Distancing Index (SDI) [adiga2020interplay] quantifies the mixing or movement within a county; here we consider the MF between the 5 km cells inside it. Let M_t denote the normalized flow matrix of the county at week t; the SDI compares M_t to the uniform matrix (complete mixing) and the identity matrix (no mixing). A value close to one indicates less mixing within a county, while a value close to zero indicates more mixing. For more details please refer to [adiga2020interplay].
All data sources are weekly and end on Saturday. The data starts from the week ending March 7th, 2020 and ends at the week ending August 22nd, 2020 (25 weeks), at the Global, US-State, and US-County resolutions. The global dataset includes Austria, Brazil, India, Italy, Nigeria, Singapore, the United Kingdom, and the United States. A summary of each dataset is shown in Table I. We chose 2020/03/07 as the start week since commercial laboratories began testing for SARS-CoV-2 in the US on March 1st, 2020; the COVID-19 surveillance data before that date is substantially noisy. The forecasting weeks start from 2020/05/23, and we make 4-weeks-ahead forecasts each week until 2020/08/22. For example, if we use the time series from 2020/03/07 to 2020/05/16 to train models, then the forecasting weeks are 2020/05/23, 2020/05/30, 2020/06/06, and 2020/06/13. We then move one week ahead and repeat the training and forecasting.
(Table I columns: Data set, # regions, # weeks, # features.)
The metrics used to evaluate the forecasting performance are: root mean squared error (RMSE), mean absolute percentage error (MAPE), Pearson correlation (PCORR).
Root mean squared error: RMSE = sqrt( (1/n) Σ_i (y_i - ŷ_i)^2 ).
Mean absolute percentage error: MAPE = (100/n) Σ_i |y_i - ŷ_i| / |y_i|.
Pearson correlation: PCORR is the sample Pearson correlation coefficient between the ground truths y_i and the predictions ŷ_i.
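The three metrics can be sketched directly in NumPy:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

def pcorr(y, yhat):
    """Pearson correlation between ground truths and predictions."""
    return float(np.corrcoef(y, yhat)[0, 1])

y, p = [100, 200, 300], [110, 190, 330]
```

RMSE is scale-dependent while MAPE and PCORR are scale-free, which is why the analysis below reports all three.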
To serve as baselines for comparison with the individual models, we also implemented an SEIR compartmental model, several statistical time series models, and state-of-the-art deep learning models. A few deep learning models proposed recently for COVID-19 forecasting have not been peer reviewed; we therefore do not consider models published within two months of the completion of this paper.
Naive uses the observed value of the most recent week as the future prediction.
SEIR [venkatramanan2017spatio] is an SEIR compartmental model for simulating epidemic spread. We calibrate the model parameters based on surveillance data for each region. Predictions are made by persisting the current parameter values to future time points and running simulations.
Autoregressive (AR) uses observations from previous time steps as input to a regression equation to predict the value at the next time step. We train one model per region using AR order 3.
Global Autoregression (GAR) trains one global AR model using the data available from each region. This is similar to the clustering-based methods that we proposed in this paper. We train one model per resolution using AR order 3.
Vector Autoregression (VAR) is a stochastic process model used to capture the linear interdependencies among multiple time series. We train one model per resolution using AR order 3.
Autoregressive Moving Average (ARMA) [contreras2003arima] is used to describe weakly stationary stochastic time series in terms of two polynomials for the autoregression (AR) and the moving average (MA). We set AR order to 3 and MA order to 2.
CNNRNN-Res [wu2018deep] uses RNNs to capture the long-term correlation in the data and convolutional neural networks to fuse time series of other regions. A residual structure is also applied during training. We train one model per region, set the residual window size to 3, and keep all other parameters the same as in the original paper.
Cola-GNN [deng2019graph] applies attention-based graph neural networks within a graph message passing framework to combine graph structures and time series features in a dynamic propagation process. We train one model per resolution, set the RNN window size to 3, and keep all other parameters the same as in the original paper.
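As an illustration of the AR baselines, an AR(3) model can be fit by least squares; GAR would simply pool the lagged rows from every region into one design matrix. The helper names are illustrative.

```python
import numpy as np

def fit_ar(series, p=3):
    """Fit an AR(p) model with intercept by least squares: each row of
    the design matrix holds the p most recent lags of the target."""
    X = np.array([series[t - p:t] for t in range(p, len(series))])
    y = np.array(series[p:])
    A = np.hstack([X, np.ones((len(X), 1))])       # lags + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def ar_predict(coef, last_p):
    """One-step prediction from the p most recent observations."""
    return float(np.dot(coef[:-1], last_p) + coef[-1])

series = [float(2 * t) for t in range(12)]          # linear trend: AR fits exactly
coef = fit_ar(series, p=3)
pred = ar_predict(coef, series[-3:])
```

On a pure linear trend the least-squares fit is exact, so the one-step prediction continues the trend.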
IV-D Settings and Implementation Details
We use a short training window for all RNN-based models due to the limited length of the available CF and DT series. We examine weekly CF forecasting at the county and state level for the US and at the country level for 8 countries, at least one from each continent. Forecasts are made 1, 2, 3, and 4 weeks ahead at each time point. All RNN-based models consist of 2 recurrent neural network layers with 32 hidden units, 1 dense layer with 16 hidden units, and 1 dropout layer with 0.2 drop probability. We set the batch size to 32 and the number of epochs to 500. The stacking ensemble model consists of 1 dense layer with 32 hidden units and a ReLU activation function; we train it with batch size 8 and 200 epochs. The Adam optimizer with default settings and early stopping with a patience of 50 epochs are used for all model training. Geo-clustering and alg-clustering are applied when training county-level models; we fix the number of clusters for the alg-clustering methods, and the clustering is conducted on the training curves normalized with MinMaxScaler. Single feature means the time series of CF. For country-level forecasting, the features are CF, DT, CCGR, DCGR, MF, and FRR. For US state-level forecasting, the features are CF, DT, CCGR, DCGR, MF, FRR, TR, and TPR. CF, DT, CCGR, DCGR, MF, FRR, and SDI are used for US county-level forecasting. AR-based and CNNRNN-based models are trained with single-feature time series. For all models, we run 50 Monte Carlo predictions. For the SEIR method, we calibrate a weekly effective reproductive number using simulation optimization to match the new confirmed cases per 100k, with the following disease parameters: mean incubation period 5.5 days, mean infectious period 5 days, delay from onset to confirmation 7 days, and a case ascertainment rate of 15% [lauer2020incubation].
V-A Forecasting Performance
We evaluate model performance at horizons 1, 2, 3, and 4 at the county, state, and national level using RMSE, MAPE, and PCORR. To mitigate performance bias caused by our settings, we divide the individual models into several categories based on module, training method, and features, and compute the average performance per category. Note that an individual model may belong to multiple categories. RNNs, GRUs, and LSTMs include the models mainly consisting of RNN, GRU, and LSTM modules, respectively. GNNRNNs includes the models mixing CNN, RNN, and GNN modules. ARs includes the autoregression-based models. Vanillas includes the models in RNNs that use a single feature and vanilla training. Clusters includes the models in RNNs that use a single feature and geo, kmeans, tskmeans, or kshape clustering training. SglFtrs includes RNN, GRU, and LSTM. MulFtrs includes RNN-m, GRU-m, LSTM-m, RNN-att, GRU-att, and LSTM-att. SEIRs includes SEIR, and Naive includes Naive. ENS is the stacking ensemble of RNNs, GRUs, and LSTMs. GNNRNNs excludes cola-GNN and ARs excludes VAR for US-county forecasting due to their failure to make reasonable forecasts. For more details please refer to the Table II note.
Table II presents the numerical results. In general, we observe that (i) at the US state and county level, ENS performs the best on 2-, 3-, and 4-weeks-ahead forecasting, while Naive performs the best on 1-week-ahead forecasting; (ii) SEIR outperforms the others at the global level on horizons 1, 2, and 3; (iii) models with a single type of DNN module outperform those with mixed types of modules; (iv) models trained with the vanilla method outperform models trained with clustering-based methods (we investigate and explain this observation in the next two paragraphs); (v) models trained with multiple features outperform models trained with a single feature at the US state and county level.
To better understand the distribution of model performance over all regions, we select one individual method from each category, without overlap, and count the frequency of best performance (FRQBP) per method. Fig. 2 presents the aggregate counts over horizons 1, 2, 3, and 4. Note that methods with larger counts do not necessarily have better MAPE, RMSE, and PCORR performance. The observations are generally consistent with those from Table II, with additional findings regarding FRQBP: (vi) the best 1-week-ahead predictions are mostly achieved by the Naive method; (vii) for the US state and county level, the best 2-, 3-, and 4-weeks-ahead predictions are achieved by ENS, and the count increases as the horizon increases; (viii) alg-clustering-based models and models with multiple features achieve the best performance more often than vanilla models; (ix) GAR and AR have larger FRQBP than the DNN models at the US county level.
Furthermore, in Fig. 3 we show the US county-level curves of weekly new confirmed cases grouped by the individual method that achieves the best RMSE performance. Interestingly, different methods achieve their best performance on regions with different patterns: when the curves of weekly new confirmed cases fluctuate strongly between subsequent weeks, the deep learning-based methods capture the dynamics well, as opposed to the SEIR and Naive methods. The Naive and SEIR models assume a certain level of regularity in the time series, which tends to be violated in the regions where the deep learning methods perform best. LSTM, RNN-kmeans, RNN-kshape, and RNN-tskmeans stand out in capturing dynamics with various patterns, which demonstrates their generalization capability for time series forecasting. However, as mentioned above, good FRQBP performance does not imply better average performance on RMSE, MAPE, and PCORR, since the latter also depends on the scales of the ground truths. AR and GAR perform well at capturing the dynamics of small case counts. The CNNRNN-based methods do not perform well on county-level forecasting. The likely reason is that the complexity of these models is much higher than that of simple RNN-based models and grows with the number of regions; they therefore overfit the small county-level training data.
We want to highlight that, in order to investigate deep learning models for COVID-19 forecasting, the ensemble framework in this paper combines only DNN models. However, it could also include baselines such as SEIR and Naive, which perform very well on this task. We encourage researchers to ensemble models of various types to average out the forecasting errors made by any particular poor model.
RNNs: RNN, RNN-geo, RNN-m, RNN-att, RNN-kmeans, RNN-tskmeans, RNN-kshape. GRUs: GRU, GRU-m, GRU-att. LSTMs: LSTM, LSTM-m, LSTM-att. CNNRNNs: cola-GNN, GCNRNN-Res, CNNRNN-Res. ARs: AR, ARMA, VAR, GAR. Vanillas: RNN. Clusters: RNN-geo, RNN-kmeans, RNN-tskmeans, RNN-kshape. SglFtrs: RNN, GRU, LSTM. MulFtrs: RNN-m, GRU-m, LSTM-m, RNN-att, GRU-att, LSTM-att. Naive: naive. SEIRs: SEIR. ENS is the stacking ensemble of the union of RNNs, GRUs, and LSTMs. For US-county forecasting, CNNRNNs excludes cola-GNN and ARs excludes VAR due to their failure to produce reasonable forecasts.
V-B Sensitivity Analysis and Discussion
In this section, we present a sensitivity analysis of model type, number of features, and clustering method for the individual models.
Model type

We compare the RMSE performance of models built purely from RNN, GRU, or LSTM modules. Fig. 4 shows the comparison between the RNN, GRU, and LSTM methods on the datasets at the three geographical resolutions. We observe that RNN performs best on 1-week-ahead forecasting, while GRU and LSTM outperform RNN on 3- and 4-weeks-ahead forecasting at the state and county level. The results indicate that RNN tends to perform better than GRU and LSTM for short-term forecasting but loses its advantage for long-term forecasting.
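The behavioral difference between the cell types can be traced to their update rules. Below is a minimal NumPy sketch of the vanilla (Elman) RNN update next to the gated GRU update; the weights are random placeholders rather than trained parameters, and GRU biases are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_cell(x, h, Wx, Wh, b):
    """Vanilla (Elman) RNN update: h' = tanh(Wx x + Wh h + b)."""
    return np.tanh(Wx @ x + Wh @ h + b)

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """GRU update with update gate z and reset gate r (biases omitted)."""
    z = sigmoid(Wz @ x + Uz @ h)            # how much of the old state to keep
    r = sigmoid(Wr @ x + Ur @ h)            # how much history feeds the candidate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 3, 4                            # e.g. cases plus auxiliary signals
x, h = rng.normal(size=d_in), np.zeros(d_h)
Ws = [rng.normal(scale=0.1, size=s)
      for s in [(d_h, d_in), (d_h, d_h)] * 3]
h_rnn = rnn_cell(x, h, Ws[0], Ws[1], np.zeros(d_h))
h_gru = gru_cell(x, h, *Ws)
```

The gates let the GRU (and, with an additional cell state, the LSTM) carry information across more time steps, which is consistent with their advantage at the longer horizons.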
Number of features
In our framework, we involve multiple data sources to model the co-evolution of multiple factors in epidemic spreading. We implement individual models either with a single feature or with multiple features. In addition, we use an attention layer to model the effect of the other features on the target feature. Fig. 5 presents the performance of GRU, GRU-m, and GRU-att on the three datasets. In general, GRU-m and GRU-att, which use multiple features, outperform the single-feature GRU in most cases, except for 1- and 2-week-ahead forecasting at the global level. Note that for global forecasting there is no testing information, which is a critical factor for revealing COVID-19 dynamics.
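The idea of attending over features can be sketched as a dot-product weighting of the auxiliary signals against the target signal; this is a generic sketch under that assumption, and the exact attention layer used in our models may differ:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def feature_attention(target, aux):
    """Weight auxiliary feature windows by their dot-product similarity
    to the target window, then blend them into one context vector.

    target: (T,) window of the target signal (e.g. new confirmed cases)
    aux:    (K, T) windows of K auxiliary signals (e.g. testing, mobility)
    """
    scores = aux @ target             # similarity of each feature to the target
    weights = softmax(scores)         # attention weights, sum to 1
    context = weights @ aux           # (T,) weighted combination of features
    return np.concatenate([target, context]), weights

T = 8
rng = np.random.default_rng(1)
cases = rng.normal(size=T)
aux = rng.normal(size=(3, T))         # hypothetical testing/mobility windows
fused, w = feature_attention(cases, aux)
```

The fused vector then replaces the raw multi-feature input to the recurrent cell, so the model can emphasize whichever auxiliary signal is most informative in a given window.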
Clustering method

Clustering-based training is applied in our framework to mitigate the likely overfitting due to the small training data size. We compare the US county level performance of RNN, RNN-geo, RNN-kmeans, RNN-tskmeans, and RNN-kshape; the comparison is shown in Fig. 6. In general, we observe that RNN, RNN-geo, and RNN-kshape outperform RNN-kmeans and RNN-tskmeans. RNN-geo performs best for 1- and 2-week-ahead forecasting, while RNN-kshape performs best for 3- and 4-weeks-ahead forecasting. This indicates that geo-clustering can capture near-future co-evolution dynamics within a state, informed by similar local epidemiological environments, while k-shape clustering can further capture far-future dynamics, informed by other counties with similar trends.
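The clustering-based training idea can be sketched as: z-normalize each region's series so that trend shape rather than scale drives the grouping, cluster the regions, then train one model per cluster on the pooled series. Plain Lloyd's k-means with a simple deterministic initialization stands in here for the k-means/ts-kmeans/k-shape variants; the toy regions are hypothetical:

```python
import numpy as np

def znorm(X):
    """Z-normalize each series so clustering compares shape, not scale."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True) + 1e-8
    return (X - mu) / sd

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means on z-normalized series (Euclidean distance),
    with the first k series as a simple deterministic initialization."""
    Z = znorm(X)
    centers = Z[:k].copy()
    labels = np.zeros(len(Z), dtype=int)
    for _ in range(iters):
        d = ((Z[:, None, :] - centers[None]) ** 2).sum(-1)  # (n, k) distances
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(0)
    return labels

# Toy regions: two rising trends and two falling trends should land in two
# clusters, after which one forecasting model is trained per cluster.
X = np.array([[1., 2., 4., 8.],      # rising trend
              [8., 4., 2., 1.],      # falling trend
              [2., 4., 9., 15.],     # rising trend, larger scale
              [15., 9., 4., 2.]])    # falling trend, larger scale
labels = kmeans(X, k=2)
```

Pooling the series within a cluster multiplies the effective training set for each model, which is the mechanism behind the reduced overfitting reported above.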
In this work, we developed an ensemble framework that combines multiple RNN-based deep learning models using multiple data sources for COVID-19 forecasting. The multiple data sources enable better forecasting performance. To mitigate the likely overfitting to noisy and small training datasets, we proposed a clustering-based training method that further improves DNN model performance. We trained stacking ensembles to combine individual deep learning models with simple architectures, and we show that the ensemble generally performs best among the individual baseline models for high-resolution and long-term forecasting, such as at the US state and county level. Ensembles thus play a very important role in improving model performance for COVID-19 forecasting. A comprehensive comparison between SEIR methods, DNN-based methods, and AR-based methods is conducted. In the context of COVID-19, our experimental results show that different models are likely to perform best on different patterns of time series. Despite the lack of sufficient training data, DNN-based methods capture the dynamics well and show strong generalization ability for high-resolution forecasting, as opposed to SEIR and Naive methods. Among the DNN-based models, spatio-temporal models are more prone to overfitting for high-resolution forecasting due to their high model complexity.
- Source: https://covid19.who.int/ as of August 26, 2020.
- Source: https://www.cdc.gov/coronavirus/2019-ncov/covid-data/forecasting-us.html as of August 10, 2020.