A Framework for Evaluating Epidemic Forecasts
Abstract
\parttitleBackground Over the past few decades, numerous forecasting methods have been proposed in the field of epidemic forecasting. Such methods can be classified into different categories, such as deterministic vs. probabilistic or comparative vs. generative methods. In some of the more popular comparative methods, researchers compare observed epidemiological data from the early stages of an outbreak with the output of proposed models to forecast the future trend and prevalence of the epidemic. A significant problem in this area is the lack of standard, well-defined evaluation measures for selecting the best algorithm among several candidates, as well as for selecting the best possible configuration of a particular algorithm.
\parttitleResults In this paper we present an evaluation framework that allows different features, error measures, and ranking schemes to be combined to evaluate forecasts. We describe the various epidemic features (Epifeatures) included to characterize the output of forecasting methods and provide suitable error measures that can be used to evaluate the accuracy of the methods with respect to these Epifeatures. We focus on long-term predictions rather than short-term forecasting and demonstrate the utility of the framework by evaluating six forecasting methods for predicting influenza in the United States. Our results demonstrate that different error measures lead to different rankings even for a single Epifeature. Further, our experimental analyses show that no single method dominates the rest in predicting all Epifeatures when evaluated across error measures. As an alternative, we provide various consensus ranking schemes that summarize individual rankings, thus accounting for different error measures. Since each Epifeature presents a different aspect of the epidemic, multiple methods may need to be combined to provide a comprehensive forecast. We therefore call for a more nuanced approach to evaluating epidemic forecasts, and we believe that a comprehensive evaluation framework, as presented in this paper, will add value to the computational epidemiology community.
Technical advance article
addressref=aff1,aff3, corref=aff1, email=fstaba2@vt.edu ]\initsFST\fnmFarzaneh Sadat \snmTabataba addressref=aff2, email=prithwi@vt.edu ]\initsPC\fnmPrithwish \snmChakraborty addressref=aff1,aff3, email=naren@cs.vt.edu ]\initsNR\fnmNaren \snmRamakrishnan addressref=aff3, email=vsriniv@bi.vt.edu ]\initsSV\fnmSrinivasan \snmVenkatramanan addressref=aff3, email=chenj@bi.vt.edu ]\initsJC\fnmJiangzhuo \snmChen addressref=aff3, email=blewis@bi.vt.edu ]\initsBL\fnmBryan \snmLewis addressref=aff1,aff3, email=mmarathe@vt.edu ]\initsMM\fnmMadhav \snmMarathe
Epidemic forecasting \kwdError measure \kwdPerformance evaluation \kwdEpidemic features \kwdRanking
Background
There is considerable interest in forecasting future trends in diverse fields such as weather, economics, and epidemiology [1, 2, 3, 4, 5, 6]. Epidemic forecasting, specifically, is of prime importance to epidemiologists and healthcare providers, and many forecasting methods have been proposed in this area [7]. Typically, predictive models receive input in the form of a time-series of epidemiological data from the early stages of an outbreak and are used to predict a few data points into the future and/or the remainder of the season. However, assessing the performance of a forecasting algorithm is a significant challenge. Recently, several epidemic forecasting challenges have been organized by the Centers for Disease Control and Prevention (CDC), National Institutes of Health (NIH), Department of Health and Human Services (HHS), National Oceanic and Atmospheric Administration (NOAA), and Defense Advanced Research Projects Agency (DARPA) to encourage research groups to provide forecasting methods for disease outbreaks such as flu [8], Ebola [9], dengue [10, 11], and chikungunya [12]. Fairly evaluating and comparing the output of different forecasting methods has remained an open question. Three competitions, the Makridakis Competitions (M-Competitions), were held in 1982, 1993, and 2000 to evaluate and compare the performance and accuracy of different time-series forecasting methods [13, 14]. In their analysis, the accuracy of different methods is evaluated by calculating different error measures on business and economic time-series, which may be applicable to other disciplines. However, the target for prediction was economic time-series, whose behavior is characteristically different from that of time-series arising in epidemiology. Though their analysis is generic, it does not consider properties of the time-series that are epidemiologically relevant.
Armstrong [15] provides a thorough summary of the key principles that must be considered while evaluating such forecast methods. Our work expands upon their philosophy of objective evaluation, with specific focus on the domain of epidemiology. To the best of our knowledge, at the time of writing this paper, there have been no formal studies on comparing the standard epidemiologically relevant features across appropriate error measures for evaluating and comparing epidemic forecasting algorithms.
Nsoesie \etal[16] reviewed different studies in the field of forecasting influenza outbreaks and presented the features used to evaluate the performance of the proposed methods. Eleven of the sixteen forecasting methods studied by the authors predicted daily/weekly case counts [16]. Some of the studies used various distance functions or errors as a measure of closeness between the predicted and observed time-series. For example, Viboud \etal[17], Aguirre and Gonzalez [18], and Jiang \etal[19] used correlation coefficients to calculate the accuracy of daily or weekly forecasts of influenza case counts. Other studies evaluated the precision and "closeness" of predicted activities to observed values using different statistical error measures such as root-mean-square error (RMSE) and percentage error [19, 20]. However, defining a good distance function that captures the closeness between the surveillance and predicted epidemic curves is still a challenge. Moreover, a distance function provides only a general comparison between the two time-series and ignores their epidemiological relevance, i.e., specific features of the epidemic curves that are more significant and meaningful from an epidemiologist's perspective; these features could be better criteria for comparing epidemic curves than a simple distance-based error. Cha [21] provided a survey of different distance/similarity functions for calculating the closeness between two time-series or discrete probability density functions. Other studies have analyzed the overlap or difference between the predicted and observed weekly activities by graphical inspection [22]. The epidemic peak is one of the most important quantities of interest in an outbreak, and its magnitude and timing are important from the perspective of health service providers. Consequently, accurately predicting the peak has been the goal of several forecasting studies [23, 24, 25, 26, 18, 22, 27, 28, 29, 30].
Hall \etal[24], Aguirre and Gonzalez [18], and Hyder \etal[30] predicted the pandemic duration and computed the error between the predicted and actual value. A few studies also consider the attack rate for the epidemic season as the feature of interest for their method [20, 26].
Study Objective & Summary of Results
In this paper, an epidemic forecast generated by a model- or data-driven approach is quantified based on epidemiologically relevant features, which we hereon refer to as Epifeatures. Further, the accuracy of a model's estimate of a particular Epifeature is quantified by evaluating its error with respect to the Epifeature extracted from the ground truth. This is enabled by using functions that capture their dissimilarity, which we hereon refer to as error measures.
We present a simple end-to-end framework for evaluating epidemic forecasts, keeping in mind the variety of epidemic features and error measures that can be used to quantify their performance. The software framework, EpiEvaluator (shown in Figure 1), is built by taking into account several possible use cases and is expected to grow into a lightweight library of loosely coupled scripts. To demonstrate its potential and flexibility, we apply the framework to a collection of six different methods used to predict influenza in the United States. In addition to quantifying the performance of each method, we also show how the framework allows comparison among the methods by ranking them.
We used influenza surveillance data, as reported by the United States Centers for Disease Control and Prevention (CDC) [31], as the gold-standard epidemiological data. The output of six forecasting methods was used as the predicted data. We calculated 8 Epifeatures on the 2013-2014 season data across the 10 HHS regions of the United States (defined by the U.S. Department of Health & Human Services) [32] and applied 6 error measures to assess the Epifeatures. We applied the proposed Epifeatures and error measures to both the real and predicted data to compare them with each other.
As expected, the performance of a particular method depends on the Epifeatures and error measures of choice. Our experimental results demonstrate that some algorithms perform well with regard to one Epifeature but poorly with respect to others. Indeed, none of the forecasting algorithms dominated all the others across every Epifeature and error measure.
As one Epifeature cannot describe all attributes of a forecasting algorithm's output, all of them should be considered in the ranking process for a comprehensive comparison. We suggest aggregating different error measures in the ranking procedure. To this end, we show how Consensus Ranking can be used to provide a comprehensive evaluation. In addition, depending on the purpose of the forecasting algorithm, some Epifeatures may be considered more significant than others and weighted accordingly while evaluating forecasts. We recommend a second level of Consensus Ranking to aggregate the analysis across various Epifeatures and provide a total summary of the forecasting methods' capabilities.
We also propose another ranking method, named Horizon Ranking, to provide a comparative evaluation of the methods' performance across time. If the Horizon Ranking fluctuates considerably over the time-steps, that lowers the credibility of the average Consensus Ranking as a selection criterion for the best method. Based on the experimental results of Horizon Ranking, we observe that for a single Epifeature one method may show the best performance in the early stages of the prediction, whereas another algorithm dominates in other time intervals. Finding a pattern in Horizon Ranking plots helps determine which methods should be selected for different periods of forecasting.
Note that many of the proposed Epifeatures or error measures have been studied earlier in the literature. The aim of our study is to perform an objective comparison across Epifeatures and error measures and ascertain their impact on evaluating and ranking competing models. Further, the focus is not on the performance of the methods being compared, but on the features provided by the software framework for evaluating them. The software package is scheduled to be released in an open-source environment. We envision it as a growing ecosystem, where end-users, domain experts, and statisticians alike can contribute Epifeatures and error measures for the performance analysis of forecasting methods.
Methods
The goal of this paper is to demonstrate how to apply Epifeatures and error measures to the output of a forecasting algorithm to evaluate its performance and compare it with other methods. We implemented a stochastic compartmental SEIR algorithm [33] with six different configurations to forecast influenza outbreaks (described in the Supplementary material). These six configurations result in different forecasts, which are then used for evaluation. In the following sections, we expand upon the different possibilities we consider for each module (Epifeatures, error measures, and ranking schemes) and demonstrate their effect on evaluating and ranking the forecasting methods.
Forecasting Process
Epidemic data are in the form of a time-series $\{y_1, y_2, \ldots, y_n\}$, where $y_t$ denotes the number of new infected cases observed in week $t$, and $n$ is the duration of the epidemic season. Weekly time-steps are usually preferred to average out the noise in daily case counts. Let $t_0$ denote the prediction time and $h$ the prediction horizon. Given the early time-series up to time $t_0$ ($y_1, \ldots, y_{t_0}$) as observed data, the forecasting algorithm predicts the time-series up to the prediction horizon as $\hat{y}_{t_0+1}, \ldots, \hat{y}_{t_0+h}$. The forecasts could be short-term (small $h$) or long-term ($h = n - t_0$). As most of the proposed Epifeatures are only defined on the complete epidemic curve rather than on a few predicted data points, we generate long-term forecasts for each prediction time. The remainder of the observed time-series ($y_{t_0+1}, \ldots, y_n$) is used as a test set for comparison with the predicted time-series (Figure 2). We increment the prediction time $t_0$ and update the predictions as we observe newer data points. For each prediction time $t_0$, we generate an epidemic curve for the remainder of the season.
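The rolling-origin setup described above can be sketched as follows. This is a minimal illustration, not the framework's code: the helper name `rolling_origin_splits` and the weekly counts are ours.

```python
# Sketch of the rolling-origin setup: at each prediction time t0 the model
# observes y[0:t0] and must forecast the remainder of the season, which
# then serves as the test set.

def rolling_origin_splits(y, t0_start, step=1):
    """Yield (observed, test) pairs for successive prediction times t0."""
    for t0 in range(t0_start, len(y), step):
        yield y[:t0], y[t0:]

season = [10, 12, 30, 80, 150, 120, 60, 25, 12]  # illustrative weekly counts
splits = list(rolling_origin_splits(season, t0_start=3, step=2))
# the first split observes the first 3 weeks and tests on the remaining 6
```

Each test segment plays the role of the held-out remainder of the season against which the long-term forecast is scored.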
Epidemiologically Relevant Features
In this section, we list the Epifeatures we use to characterize an epidemic time-series. While some of these quantities are generic and applicable to any time-series, others are specific to epidemiology. Table 1 summarizes the notation needed to define these Epifeatures.
Symbol  Definition

$y_t$  number of new cases of disease in week $t$ observed in surveillance data
$\hat{y}_t$  number of new cases of disease in week $t$ predicted by forecasting methods
$\hat{y}_{start}$  number of new cases of disease predicted at the start of the epidemic season
$\hat{y}_{peak}$  number of new cases of disease predicted at the peak of the epidemic season
$e_t = y_t - \hat{y}_t$  the prediction error in week $t$
$n$  duration of the epidemic season
$\bar{y} = \frac{1}{n}\sum_{t=1}^{n} y_t$  the mean of the $y$ values over $n$ weeks
$\sigma_y^2 = \frac{1}{n}\sum_{t=1}^{n}(y_t - \bar{y})^2$  the variance of the $y$ values over $n$ weeks
$I$  total number of infected persons during the specified period
$N$  the population size at the start of the specified period
$I_a$  total number of infected persons of a specific age during the specified period
$N_a$  the population size of that specific age at the start of the specified period
$N_c$  the number of contacts of primary infected persons
$I_c$  the number of new infected persons among the contacts of primary infected individuals during a specified period
GME  geometric mean of a set of errors
AME  arithmetic mean of a set of errors
MdE  median of a set of errors
RMSE  root mean square of a set of errors
Peak Value & Time
Peak value is the highest value in a time-series. In the epidemic context, it refers to the largest number of newly infected individuals in any given week during an epidemic season. Closely associated with the peak value is the peak time, the week in which the peak value is attained. Predicting these values accurately helps healthcare providers estimate the resource burden and the preparation time.
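As a minimal sketch (the function name is ours, not part of the framework), the peak value and the 1-indexed peak week of a weekly series can be extracted as:

```python
def peak_value_time(y):
    """Return (peak value, 1-indexed week at which it occurs)."""
    peak = max(y)
    return peak, y.index(peak) + 1

curve = [5, 20, 90, 140, 70, 30]  # illustrative weekly case counts
```

For this illustrative curve, the peak value is 140 and the peak time is week 4.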
First-take-off (Value & Time)
Seasonal outbreaks, like the flu, usually remain dormant and exhibit a sharp rise in the number of cases just as the season commences. A similar phenomenon of sharp increase is exhibited by emerging infectious diseases. Referred to as the "first-take-off" time, it is useful to detect this early, as it helps the authorities alert the public and raise awareness. Mathematically, it is the time at which the first derivative of the epidemic curve exceeds a specific threshold. Since the epidemic curve is discretized in weekly increments, the approximate slope of the curve over $\Delta$ time-steps is defined as follows:
$S(t) = \frac{y_{t+\Delta} - y_t}{\Delta}$  (1)
where $y_t$ is the number of new infected case counts in week $t$ and $\Delta$ is the number of weeks over which the slope is computed; in our experiment, $\Delta$ is a small fixed number of weeks. The value of $S(t)$ is the slope of the curve and gives the take-off value, while the start time of the take-off indicates the take-off time. The threshold used in calculating the first-take-off depends on the type of the disease and how aggressive and dangerous the outbreak could be; the threshold value is determined by epidemiologists. In this case, we set the threshold to 150.
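A sketch of this definition using a forward-difference slope (the helper name, example series, and default arguments are ours; the threshold of 150 matches the value used in the text):

```python
def first_takeoff(y, delta=1, threshold=150):
    """Return (1-indexed take-off week, take-off value): the first week
    whose slope over `delta` weeks exceeds `threshold`, and that slope."""
    for t in range(len(y) - delta):
        slope = (y[t + delta] - y[t]) / delta
        if slope > threshold:
            return t + 1, slope
    return None, None  # no take-off detected
```

On the illustrative series `[10, 20, 40, 250, 600]` with `delta=1`, the first weekly slope exceeding 150 occurs at week 3.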
Epifeature name  Definition

Peak value  Highest number of weekly new infected cases in the epidemic time-series
Peak time  The week when the peak value is attained
Total attack rate  Fraction of individuals ever infected in the whole population
Age-specific attack rate  Fraction of individuals ever infected belonging to a specific age window
First-take-off (value)  Sharp increase in the number of new infected case counts over a few consecutive weeks
First-take-off (time)  The start time of the sudden increase in the number of new infected case counts
Intensity duration  The number of weeks (usually consecutive) where the number of new infected case counts exceeds a specific threshold
Speed of epidemic  Rate at which the case counts approach the peak value
Start-time of disease season  Time at which the fraction of infected individuals exceeds a specific threshold
Intensity Duration
Intensity Duration (ID) is the number of weeks, usually consecutive, during which the number of new infected case counts exceeds a specific threshold. This feature can be used by hospitals to estimate the number of weeks for which the epidemic will stress their resources (Figure 3).
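A minimal sketch of ID (the helper name, example series, and threshold are illustrative):

```python
def intensity_duration(y, threshold):
    """Return (number of weeks above `threshold`,
    1-indexed start week of the first such week)."""
    above = [week for week, v in enumerate(y, start=1) if v > threshold]
    return len(above), (above[0] if above else None)
```

For the series `[10, 60, 120, 90, 40]` with a threshold of 50, three weeks exceed the threshold, starting at week 2.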
Speed of Epidemic
The Speed of Epidemic (SpE) indicates how fast the infected case counts reach the peak value. This feature incorporates both the peak value and the peak time. The following equation defines the speed of epidemic:
$SpE = \frac{y_{peak} - y_{start}}{t_{peak} - t_{start}}$  (2)
where $y_{peak}$ and $y_{start}$ are the numbers of new case counts at the peak time $t_{peak}$ and the start time $t_{start}$ of the season, respectively. In other words, the speed of epidemic is the steepness of the line that connects the start data-point of the time-series to the peak data-point (Figure 4).
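Assuming the season starts at the first data point of the series, SpE can be sketched as (the helper name is ours):

```python
def speed_of_epidemic(y):
    """Slope of the line from the season's first data point to its peak."""
    peak = max(y)
    t_peak = y.index(peak)  # weeks elapsed since the start
    return (peak - y[0]) / t_peak
```

For the illustrative curve `[10, 40, 100, 160, 80]`, the peak of 160 is reached 3 weeks after the start, giving a speed of 50 new cases per week.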
Total Attack Rate (TAR)
The total attack rate (TAR) is the ratio of the total number of infected cases during a specified period, usually one season, to the size of the whole population at the start of the period.
$TAR = \frac{I}{N}$  (3)
where $I$ is the total number of infected people during the specified period and $N$ is the population size at the start of the period.
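Eq. (3) is a plain ratio; a one-line sketch with made-up counts (the same computation, restricted to a subpopulation, serves subpopulation-specific rates):

```python
def attack_rate(infected, population):
    """Fraction of a (sub)population ever infected during the period.
    Restricting both counts to one subgroup gives the group-specific rate."""
    return infected / population
```

For example, 12,000 infections in a population of 1,000,000 gives an attack rate of 0.012, i.e., 1.2%.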
Age-specific Attack Rate (AgeAR)
This is similar to the total attack rate but focuses on a specific subpopulation. The specific attack rate is not limited to age; the subpopulation could be restricted by any attribute such as age, gender, or membership in any special group.
$AgeAR = \frac{I_a}{N_a}$  (4)
Secondary Attack Rate (SAR)
The secondary attack rate (SAR) is the ratio of new infected cases of a disease, during a particular period, among the contacts of the primary cases who were infected first; in other words, it is a measure of the spread of disease through the contact network.
$SAR = \frac{I_c}{N_c}$  (5)
where $N_c$ is the number of contacts of primary infected persons and $I_c$ is the number of infected persons among those contacts during a specified period [34]. In order to calculate the secondary attack rate, individual information about households and their contact networks is needed. Epidemiologists have estimated the secondary attack rate in household contacts in several states of the United States for both acute respiratory illness (ARI) and influenza-like illness (ILI) [35].
Start-time of a Disease Season
We define the "start-time of a flu season" as the week when the flu percentage exceeds a specified threshold. The flu percentage is defined as follows:
$\mathrm{Flu\%}_t = \frac{\mathrm{ILI}_t}{\mathrm{ILI}_t + \mathrm{nonILI}_t} \times 100$  (6)
where $\mathrm{ILI}_t$ is the number of weekly influenza-related illnesses in week $t$ and $\mathrm{nonILI}_t$ is the weekly number of non-ILI patients seen by health providers for any reason and/or specimens recognized as negative cases by clinical laboratories. Usually, predicting the denominator of the above equation is a difficult task, but if the observed data are available, one can use them to calculate the denominator, predict only the numerator, and then calculate the flu percentage. The threshold value used as the criterion is determined by epidemiologists and can be calculated in different ways. We define the threshold through analysis of past flu seasons, inspired by the CDC's flu baseline definition [36]. The CDC defines the baseline as the mean percentage of visits for influenza during non-influenza weeks for the previous three seasons plus two standard deviations [36]. The non-influenza weeks are defined as two or more consecutive weeks in which the number of counted ILI diagnoses for each week is less than 2% of the total seasonal ILI case counts. The definition of start-of-season can be generalized to any disease, such as Ebola or Zika.
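The start-of-season rule above can be sketched as follows; this is an illustration, not the framework's code. The historical non-influenza-week percentages are hypothetical, and the CDC-style baseline (mean plus two standard deviations) is folded directly into the threshold:

```python
from statistics import mean, stdev

def start_of_season(flu_pct, noninflu_pcts):
    """Return the 1-indexed first week whose flu percentage exceeds a
    CDC-style baseline: mean of historical non-influenza-week
    percentages plus two standard deviations."""
    threshold = mean(noninflu_pcts) + 2 * stdev(noninflu_pcts)
    for week, pct in enumerate(flu_pct, start=1):
        if pct > threshold:
            return week
    return None  # threshold never exceeded

history = [1.0, 1.2, 0.8, 1.0]  # hypothetical non-influenza-week percentages
```

With this hypothetical history, the baseline is about 1.33%, so a season whose weekly percentages are `[0.9, 1.1, 1.5, 2.0]` starts in week 3.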
Error Measures
The second step in evaluating epidemic forecasting algorithms is to measure the error for each predicted Epifeature. There is a variety of measures that can be used to assess the error between the predicted time-series and the observed one. The error measures we consider in this study are listed in Table 3 along with their properties. The notation used in the error-measure equations is described in Table 1. Note that all the error measures considered handle only the absolute value of the error; they do not distinguish between under- and overestimation of the time-series. The signed versions of some of these absolute error measures are listed in the supporting information. These signed measures include the direction of the error, i.e., a positive sign indicates underestimation while a negative sign indicates overestimation. Moreover, all the measures referred to in Table 3 use the arithmetic mean to obtain an average value of the error. Variants that use the geometric mean, median, etc. are listed in the supporting information section.
Measure name  Formula  Description  Scaled  Outlier Protection  Other forms  Penalize extreme deviation  Other Specification

Mean Absolute Error (MAE)  $\frac{1}{n}\sum_{t=1}^{n}|y_t - \hat{y}_t|$  Demonstrates the magnitude of the overall error  No  Not good  GMAE  No  -
Root Mean Squared Error (RMSE)  $\sqrt{\frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2}$  Square root of the average squared error  No  Not good  MSE  Yes  -
Mean Absolute Percentage Error (MAPE)  $\frac{100}{n}\sum_{t=1}^{n}\frac{|y_t - \hat{y}_t|}{y_t}$  Measures the average absolute percentage error  Yes  Not good  MdAPE (Md = median), RMSPE (RMS = root mean square)  No  -
symmetric Mean Absolute Percentage Error (sMAPE)  $\frac{100}{n}\sum_{t=1}^{n}\frac{|y_t - \hat{y}_t|}{(y_t + \hat{y}_t)/2}$  Scales the error by dividing by the average of $y_t$ and $\hat{y}_t$  Yes  Good  sMdAPE  No  Lower possibility of division by zero than MAPE.
Mean Absolute Relative Error (MARE)  $\frac{1}{n}\sum_{t=1}^{n}\frac{|y_t - \hat{y}_t|}{|y_t - y_{t-1}|}$  Measures the average ratio of the absolute error to the random-walk error  Yes  Fair  MdRAE, GMRAE  No  -
Relative Measures: e.g. RelMAE (RMAE)  $\frac{\sum_{t}|y_t - \hat{y}_t|}{\sum_{t}|y_t - y_{t-1}|}$  Ratio of the accumulated error to the cumulative error of the random-walk method  Yes  Not good  RelRMSE, LMR [37], RGRMSE [38]  No  -
Mean Absolute Scaled Error (MASE)  $\frac{1}{n}\sum_{t=1}^{n}\frac{|y_t - \hat{y}_t|}{\frac{1}{n-1}\sum_{i=2}^{n}|y_i - y_{i-1}|}$  Measures the average ratio of the error to the average error of the one-step random-walk method  Yes  Fair  RMSSE  No  -
Percent Better (PB)  $\frac{100}{n}\sum_{t=1}^{n}\mathbf{1}\{|y_t - \hat{y}_t| < |y_t - y_{t-1}|\}$  Percentage of times the method outperforms the random-walk method  Yes  Good  -  No  Not good for calibration and closely competitive methods.
Mean Arctangent Absolute Percentage Error (MAAPE)  $\frac{1}{n}\sum_{t=1}^{n}\arctan\left(\frac{|y_t - \hat{y}_t|}{y_t}\right)$  Calculates the average arctangent of the absolute percentage error  Yes  Good  MdAAPE  No  Smooths large errors; solves the division-by-zero problem.
Normalized Mean Squared Error (NMSE)  $\frac{1}{n\sigma_y^2}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2$  Normalized version of MSE  No  Not good  NA  No  Error balanced by dividing by the variance of the real data.
After careful consideration, we selected MAE, RMSE, MAPE, sMAPE, MdAPE, and MdsAPE as the error measures for evaluating the Epifeatures and excluded the others for various reasons. We list some of these reasons and observations on the eliminated error measures in part B of the Supplementary Information. Also, instead of using MAPE, we suggest a corrected MAPE (cMAPE) to solve the problem of division by zero:
$cMAPE = \frac{100}{n}\sum_{t=1}^{n}\frac{|y_t - \hat{y}_t|}{y_t + \epsilon}$  (7)
where $\epsilon$ is a small value; it could be set to the lowest non-zero value of the observed data. We have also added two error measures based on the median, namely the Median Absolute Percentage Error (MdAPE) and the Median symmetric Absolute Percentage Error (MdsAPE). However, as median errors have low sensitivity to changes in the methods, we do not recommend them for isolated use as selection or calibration criteria.
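A sketch of a few of the selected error measures, including the cMAPE guard of Eq. (7). The sMAPE form shown divides by the average of the observed and predicted values; the function and dictionary-key names are ours:

```python
from statistics import median

def error_measures(obs, pred, eps=1.0):
    """Selected error measures; `eps` is the cMAPE guard against
    division by zero (e.g. the lowest non-zero observed value)."""
    n = len(obs)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    rmse = (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5
    cmape = 100 / n * sum(abs(o - p) / (o + eps) for o, p in zip(obs, pred))
    smape = 100 / n * sum(abs(o - p) / ((o + p) / 2) for o, p in zip(obs, pred))
    mdape = 100 * median(abs(o - p) / (o + eps) for o, p in zip(obs, pred))
    return {"MAE": mae, "RMSE": rmse, "cMAPE": cmape,
            "sMAPE": smape, "MdAPE": mdape}
```

Applying the measures to the peak values predicted at successive prediction times against the observed peak yields rows of Table 4.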
Ranking Methods
The third step of the evaluation process is ranking the different methods based on the different Epifeatures and the results of the different error measures. For this purpose, we use two kinds of ranking methods: Consensus Ranking and Horizon Ranking.

Consensus Ranking: The Consensus Ranking (CR) of a method is defined as the average ranking of the method among the others. This Consensus Ranking can be defined in different scenarios. For example, the average ranking used in Table 5 in the Results section is the Consensus Ranking of a method for one specific Epifeature across different error measures.
$CR_m = \frac{1}{|E|}\sum_{e \in E} r_{m,e}$  (8) where $r_{m,e}$ is the individual ranking assigned to method $m$ among the other methods for predicting one Epifeature based on error measure $e$, and $CR_m$ is the overall ranking of method $m$ across the set of error measures $E$.
Consensus Ranking can also be defined across different Epifeatures. In this case, the CR over error measures is treated as the individual ranking of a method, and the average is calculated over the different Epifeatures. It is important to consider the variance of the ranking and the spread of the quartiles in addition to the mean value of the CR. In the Results section we demonstrate how to process and analyze these rankings in a meaningful way.
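Eq. (8) is a plain average of per-measure ranks; a minimal sketch, where the rank lists below are those of Methods 1 and 4 from Table 5:

```python
def consensus_ranking(rank_table):
    """Average each method's rank across error measures (Eq. 8).
    `rank_table` maps a method name to its list of per-measure ranks."""
    return {m: sum(ranks) / len(ranks) for m, ranks in rank_table.items()}

cr = consensus_ranking({"M4": [1, 1, 1, 2, 1, 1],
                        "M1": [6, 6, 6, 6, 5, 6]})
# M4 averages to about 1.17 and M1 to about 5.83, as in Table 5
```

The second and third levels of Consensus Ranking apply the same averaging to per-Epifeature and per-region CRs, respectively.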

Horizon Ranking: While Consensus Ranking considers the average performance of the methods over prediction times, Horizon Ranking shows the trend of the forecasting methods' performance in predicting a single Epifeature across different prediction times. First, for each Epifeature, we compute an error measure such as the Absolute Percentage Error (APE) or its symmetric variant (sAPE) per prediction time. For each prediction time, the APE values of the different forecasting methods are sorted from smallest to largest to determine the ranking of the methods. The average of this ranking over the different error measures determines the overall Horizon Ranking of the methods at each time-step.
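The per-time ranking step can be sketched as follows (averaging the resulting ranks across error measures is the same operation as in consensus ranking); the method names and error values are illustrative:

```python
def horizon_ranking(errors_by_method):
    """For each prediction time, rank the methods by error (1 = smallest).
    `errors_by_method` maps a method name to its per-time errors (e.g. APE)."""
    methods = list(errors_by_method)
    n_times = len(next(iter(errors_by_method.values())))
    ranks = {m: [] for m in methods}
    for t in range(n_times):
        order = sorted(methods, key=lambda m: errors_by_method[m][t])
        for position, m in enumerate(order, start=1):
            ranks[m].append(position)
    return ranks
```

A method whose rank list fluctuates strongly over prediction times is exactly the case where the average Consensus Ranking should be treated with caution.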
Data
The ILI surveillance data used in this paper were obtained from the website of the United States Centers for Disease Control and Prevention (CDC). Information on patient visits to health care providers and hospitals for ILI has been collected through the US Outpatient Influenza-like Illness Surveillance Network (ILINet) since 1997, with a reporting lag of two weeks [39, 31]. This network covers all 50 states, Puerto Rico, the District of Columbia, and the U.S. Virgin Islands. The weekly data are provided separately for the 10 HHS regions [32] that cover all of the US. The forecasting algorithms were applied to the CDC data for each HHS region. We applied our forecasting algorithm to the 2013-2014 flu season data, where every season is at most one year long and contains one major epidemic. Figure 5 shows the HHS region map that assigns US states to regions.
Results and Analysis
Past literature in the area of forecasting typically evaluates the performance of a predictive algorithm by defining a statistical distance/similarity function to measure the closeness of the predicted epidemic curve to the observed epidemic curve. However, it rarely evaluates the robustness of a method's performance across epidemic features of interest and error measures. Although the focus of this paper is not on which method should be chosen, it is instructive to observe the software framework in action as we apply different evaluation criteria to the methods.
Rankings based on Error Measures applied to peak value
In Table 4, we have calculated six error measures, MAE, RMSE, MAPE, sMAPE, MdAPE, and MdsAPE, for the peak value predicted by six different forecasting methods. The corresponding ranks are provided in the ranking table (Table 5). The most successful method is assigned rank 1 (R1). As can be seen, even similar measures like MAPE and sMAPE do not behave the same in the ranking process. The fourth algorithm ranks first for five of the six error measures, which indicates nearly uniformly best performance. However, it is hard to come to a similar conclusion for the other methods. The last column in the table is the Consensus Ranking, which shows the average ranking of the method over the different error measures. According to the Consensus Ranking, some methods, like methods 2 and 5, can have close or identical average rankings, which makes the comparison harder. Figure 6 shows the box-whisker diagram of the methods' rankings. The second and third quartiles around the median of the ranking are much tighter for the fifth method than for the second one, which indicates more certainty about the median value (4) being the correct ranking. Also, the two data points that pull the mean away from the median are recognized as outliers. However, for the second method, the quartile boxes around the median are wider and the whiskers are longer, which implies less certainty about the mean and median values of the ranking. Based on such analysis, the fourth method (M4) is superior for predicting the peak value. The remaining methods rank in the order: Method 6 (M6), Method 3, Method 5, Method 2, and Method 1. Note, however, that this analysis is specific to using the peak value as the Epifeature of interest.
MAE  RMSE  MAPE  sMAPE  MdAPE  MdsAPE  

Method 1  4992.0  9838.6  4.9  1.04  1.7  1.03 
Method 2  4825.2  9770.4  4.7  0.99  1.4  0.95 
Method 3  3263.0  5146.5  3.2  0.96  1.5  1.01 
Method 4  2990.7  4651.3  2.9  0.899  1.1  0.85 
Method 5  3523.2  5334.8  3.4  0.95  2.1  1.01 
Method 6  3310.9  4948.5  3.2  0.896  1.5  0.85 
MAE  RMSE  MAPE  sMAPE  MdAPE  MdsAPE  Consensus Ranking  Median  

Method 1  6  6  6  6  5  6  5.83  6 
Method 2  5  5  5  5  2  3  4.17  5 
Method 3  2  3  2  4  3  4  3.00  3 
Method 4  1  1  1  2  1  1  1.17  1 
Method 5  4  4  4  3  6  4  4.17  4 
Method 6  3  2  3  1  3  1  2.17  2.5 
Consensus Ranking across all Epifeatures
In order to make a comprehensive comparison, we have calculated the error measures on the following Epifeatures: peak value and time, take-off value and take-off time, Intensity Duration's length and start time, speed of epidemic, and start of flu season. We do not include demographic-specific Epifeatures such as the age-specific attack rate or the secondary attack rate, since such information is not available for our methods.
Figure 7 shows the Consensus Ranking of the methods in predicting different Epifeatures for Region 1. Note that Method 4, which is superior in predicting some Epifeatures such as peak value and start of flu season, is worse than the other methods in predicting other Epifeatures such as take-off time and Intensity Duration. The tables corresponding to the box plots are included in the supporting information.
Figure 8 shows the second level of Consensus Ranking over the various Epifeatures for Region 1. This figure summarizes the performance of the different methods based on the average consensus rankings listed in Table 6. It is evident that the first, second, and fifth methods have similar performance, while the third method performs moderately well across Epifeatures. The fourth method, which performs best for five out of eight Epifeatures, is however not among the top three methods for predicting take-off time and Intensity Duration. Method 6 comes in as the second-best method when considering the consensus ranking.
The first level of Consensus Ranking over error measures for the other HHS regions is included in Figures S2Fig to S10Fig of the Supporting Information. Figures 9 and 10 present the second level of Consensus Rankings of the six approaches over all Epifeatures for regions 1 to 10. If each region can be assigned a different method as its predictor, we suggest assessing the performance of the methods on each region separately and selecting the best one. Otherwise, if the experts need to select one method as the best predictor for all regions, we propose a third level of Consensus Ranking to aggregate the results across the different regions. Figure 11 presents the Consensus Ranking over all 10 HHS regions, based on the average of the consensus rankings across all Epifeatures for each region listed in Table 7. As can be seen in Figure 11, the performance of the first and second methods falls behind the other approaches, and we can exclude them from the pool of selected algorithms. However, the other four methods show very competitive performance and are effectively equivalent according to the total rankings. The sequential aggregations provide a general conclusion, which eliminates the nuances among similar methods.
Method  Peak value  Peak time  Takeoff value  Takeoff time  ID length  ID start time  Start of flu season  Speed of Epidemic  Average  Median

M1  5.83  3.83  6  1  3.33  5.67  6  5.83  4.69  5.67 
M2  4.17  4.5  5  2  1  4.33  5.0  4.5  3.81  4.33 
M3  3  2.83  3.83  3  3.33  3.17  3  3.17  3.17  3.17 
M4  1.17  3.33  1.17  5  4.00  1.0  1  1.17  2.23  1.17 
M5  4.17  1.17  3  4  4.33  4.67  3  4.17  3.56  4 
M6  2.17  2.33  1.50  6  4.67  2.00  1.00  1.67  2.67  2.17 
Method  Region 1  Region 2  Region 3  Region 4  Region 5  Region 6  Region 7  Region 8  Region 9  Region 10  Average

M1  4.69  3.31  4.6  3.94  3.65  2.21  4.3  3.94  3.46  4.29  3.84 
M2  3.81  2.77  4.23  4.0  3.71  1.29  3.73  3.69  3.79  3.96  3.50 
M3  3.17  3.46  1.96  2.68  2.67  2.21  3.03  2.73  2.17  2.33  2.64 
M4  2.23  3.19  2.04  2.7  3.08  1.29  2.93  2.60  2.44  3.71  2.62 
M5  3.56  1.79  1.79  2.41  2.77  2.21  2.67  3.06  2.88  2.67  2.58 
M6  2.67  3.23  2.13  2.48  2.83  1.29  2.60  3.27  3.13  3.58  2.72 
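As an illustration of how aggregated rankings like those in Tables 6 and 7 can be produced, the sketch below averages per-error-measure ranks for an Epifeature and then re-ranks the averages. The data shapes and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of multi-level Consensus Ranking (hypothetical data shapes).
# Level 1 averages each method's ranks across error measures; the same
# averaging applies across Epifeatures (level 2) and regions (level 3).
def consensus_rank(scores):
    """Rank methods by ascending score (1 = best)."""
    order = sorted(scores, key=scores.get)
    return {m: i + 1 for i, m in enumerate(order)}

def average_ranks(rank_tables):
    """Average each method's rank over a list of ranking dicts."""
    methods = rank_tables[0].keys()
    return {m: sum(t[m] for t in rank_tables) / len(rank_tables) for m in methods}

# Example: ranks of three methods under two error measures for one Epifeature.
ranks_mape = {"M1": 3, "M2": 1, "M3": 2}
ranks_rmse = {"M1": 2, "M2": 1, "M3": 3}
avg = average_ranks([ranks_mape, ranks_rmse])  # {'M1': 2.5, 'M2': 1.0, 'M3': 2.5}
print(consensus_rank(avg))                     # M2 ranked first
```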
Horizon Rankings for each Epifeature
Horizon Ranking tracks the change in the accuracy and ranking of the methods over prediction time. If the Horizon Ranking fluctuates considerably over the time steps, this hints at the unsuitability of Consensus Ranking as a selection criterion for the best method: the method that performs best during the early stages of prediction need not perform best at later time points. Figure 12 shows the evolution of the Horizon Ranking of the six methods for predicting the peak value, calculated based on APE and sAPE. As shown in Figure 7, Methods 4 and 6 have the best average consensus ranking for predicting the peak value, which is consistent with the Horizon Ranking observations. In Figure 12 the rankings of Methods 4 and 6 fluctuate slightly during the first time steps. However, as the prediction time moves forward they provide more accurate forecasts, causing them to rank higher.
The most interesting case for Horizon Rankings concerns the prediction of peak time. The Consensus Ranking in Figure 7 selects Method 5 as superior in predicting peak time, with Methods 6 and 4 as the second and third best approaches. However, observing the trend of the ranks over prediction times (Figure 13), Methods 4 and 6 are dominant for the first eight weeks of prediction, after which Method 1 takes first place for seven weeks. In the following eight weeks, Methods 1, 3, and 5 share the top ranks.
Figures 14, 15, and 16 show the Horizon Ranking graphs for the forecasting methods on the remaining Epifeatures. These Horizon Rankings are largely consistent with the corresponding Consensus Rankings, which confirms that the methods ranked highest by Consensus Ranking can be used for any prediction time.
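A minimal sketch of how a Horizon Ranking can be computed, assuming per-method error values are available at every prediction time step (the names and data shapes are hypothetical):

```python
# Rank the methods independently at each prediction time step and track how
# the ranking evolves across the horizon (rank 1 = lowest error).
def horizon_ranking(errors_over_time):
    """errors_over_time: {method: [error at t=0, t=1, ...]}.
    Returns {method: [rank at t=0, t=1, ...]}."""
    methods = list(errors_over_time)
    horizon = len(next(iter(errors_over_time.values())))
    ranks = {m: [] for m in methods}
    for t in range(horizon):
        order = sorted(methods, key=lambda m: errors_over_time[m][t])
        for pos, m in enumerate(order, start=1):
            ranks[m].append(pos)
    return ranks

errors = {"M1": [0.9, 0.4, 0.2], "M2": [0.5, 0.5, 0.5], "M3": [0.3, 0.6, 0.8]}
print(horizon_ranking(errors))  # M3 leads early, M1 leads later
```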
Visual Comparison of forecasting methods
In order to visualize the output of the forecasting methods, we generate the one-step-ahead epidemic curve: given the early time series up to time $t$ as observed data, the forecasting algorithm predicts the next data point of the time series, and this process is repeated for all values of the prediction time $t$ within the prediction window. By putting the short-term predictions together, we construct a time series spanning the prediction window as the one-step-ahead predicted epidemic curve. We have used fixed beginning and end weeks for the prediction time. Figure 17 depicts the one-step-ahead predicted epidemic curves for HHS Region 1 generated by the six forecasting methods (refer to Figures S11 to S19 in the Supplementary Information for the other regions). As can be seen in Figure 17, the first and second methods show larger deviations from the observed curve, especially in the first half of the season. As the six methods are different configurations of one algorithm, their outputs are highly competitive and sometimes similar to each other. Methods 3 and 5, and Methods 4 and 6, show some similarity in their one-step-ahead epidemic curves, consistent with the Horizon Ranking charts for the various Epifeatures. However, the Horizon Ranking graphs contain more information regarding the long-term predictions. Therefore the ranking methods, especially Horizon Ranking, can help experts distinguish the better methods when the outputs of the forecasting methods are competitive and judging from the visual graph alone is not straightforward.
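The construction of the one-step-ahead curve described above can be sketched as follows; `forecast_next` is a placeholder for any forecasting method, and the naive last-value forecaster in the example is only for illustration:

```python
# Build the one-step-ahead predicted epidemic curve: at each prediction time t,
# feed the forecaster the observed series up to t and record its prediction
# for week t + 1.
def one_step_ahead_curve(observed, forecast_next, t_start, t_end):
    curve = []
    for t in range(t_start, t_end):
        prediction = forecast_next(observed[: t + 1])  # history through week t
        curve.append(prediction)                       # predicted value, week t+1
    return curve

# Toy example with a naive "repeat the last observation" forecaster.
observed = [10, 20, 40, 35, 25]
curve = one_step_ahead_curve(observed, lambda hist: hist[-1], t_start=1, t_end=4)
print(curve)  # [20, 40, 35] -- predictions for weeks 2..4
```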
Epidemic Forecast Evaluation Framework
We have proposed a set of Epifeatures and error measures and shown how to evaluate different forecasting methods together. These are incorporated into a software framework (Figure 1). The framework, named EpiEvaluator, receives the observed epidemic curve and the predicted curves as input and can generate various rankings based on the choice of Epifeature and error measure. The system is designed as a collection of scripts that are loosely coupled through the data they exchange. This is motivated by two requirements: (a) individuals must be able to use each module in isolation, and (b) users must not be restricted to the possibilities described in this paper, and should be able to contribute features and measures of their own interest.
We also include a standardized visualization module capable of producing a variety of plots and charts summarizing the intermediate outputs of each module. This gives the package a plug-and-play advantage for end users. We envision end users ranging from (a) epidemiologists who wish to quickly extract or plot key Epifeatures from a given surveillance curve, (b) computational modelers who wish to quantify their predictions and possibly choose between different modeling approaches, (c) forecasting challenge organizers who wish to compare and rank the competing models, and (d) policymakers who wish to decide on models based on their Epifeature of interest.
Evaluating Stochastic forecasts
The aforementioned measures deal primarily with deterministic forecasts, whereas many stochastic forecasting algorithms carry some level of uncertainty, which makes the evaluation much harder. Moreover, the observed data may itself be stochastic because of possible errors in measurement and data sources. We therefore extend our measures and provide new methods to handle stochastic forecasts and observations. Non-deterministic forecasts can come in one of the following formats:

Multiple replicates of the time series

A time series of the mean and variance of the predicted values
Stochastic forecasts as multiple replicates
Most stochastic algorithms generate multiple replicates of the time series and state vectors to simulate the posterior density function by aggregating discrete values. A state vector contains the parameters used by the epidemic model to generate the epidemic curve (the time series of newly infected cases). Therefore, the best state vectors (models) are those that generate an epidemic curve closer to the observed one (i.e., models with higher likelihood). When a forecasting method's output is a collection of replicates of state vectors and time series, we have the option of calculating the Epifeatures on each series, for each prediction time, and assessing the error measures on each series. The error measures should then be accumulated across the series, e.g., by taking the arithmetic mean, median, or geometric mean, to provide a single comparable value per method. Table 8 provides advanced error measures for aggregating the simple EM values over the series.
Measure name  Formula  Description

Absolute Percentage Error (APE)  $APE_{t,i} = |y_t - \hat{y}_{t,i}| / y_t$  where $t$ is the time horizon and $i$ is the series index
Mean Absolute Percentage Error (MAPE)  $MAPE_t = \frac{1}{S}\sum_{i=1}^{S} APE_{t,i}$  where $t$ is the time horizon, $i$ is the series index, and $S$ is the number of series for the method
Median Absolute Percentage Error (MdAPE)  Median observation of the sorted $APE_{t,i}$  where the $APE_{t,i}$ observations are sorted, $t$ is the time horizon, and $i$ is the series index
Relative Absolute Error (RAE)  $RAE_{t,i} = |y_t - \hat{y}_{t,i}| \, / \, |y_t - \hat{y}^{RW}_t|$  Measures the ratio of the absolute error to the random walk error at time horizon $t$
Geometric Mean Relative Absolute Error (GMRAE)  $GMRAE_t = \big(\prod_{i=1}^{S} RAE_{t,i}\big)^{1/S}$  Measures the geometric mean of the ratios of absolute error to random walk error
Median Relative Absolute Error (MdRAE)  Median observation of $RAE_{t,i}$  Measures the median observation of the sorted $RAE_{t,i}$ for time horizon $t$
Cumulative Relative Error (CumRAE)  $CumRAE_i = \sum_t |y_t - \hat{y}_{t,i}| \, / \, \sum_t |y_t - \hat{y}^{RW}_t|$  Ratio of the accumulated errors to the cumulative error of the random walk method
Geometric Mean Cumulative Relative Error (GMCumRAE)  $\big(\prod_{i=1}^{S} CumRAE_i\big)^{1/S}$  Geometric mean of the Cumulative Relative Error across all series
Median Cumulative Relative Error (MdCumRAE)  Median of $CumRAE_i$  Median of the Cumulative Relative Error across all series
Root Mean Squared Error (RMSE)  $RMSE_t = \sqrt{\frac{1}{S}\sum_{i=1}^{S}(y_t - \hat{y}_{t,i})^2}$  Square root of the average squared error across series at time horizon $t$
Percent Better (PB)  $PB_t = \frac{100}{S}\,\#\{i : RAE_{t,i} < 1\}$  Shows the average fraction of series for which the method beats the random walk method at time horizon $t$
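A few of the across-series aggregations above can be sketched as follows, using the standard definitions of APE, MdAPE, and GMRAE (the symbols and data shapes are assumptions consistent with the table):

```python
# Aggregate per-series errors at one time horizon t. Each replicate i gives a
# prediction in `preds`; y_t is observed and rw_t is the random-walk forecast.
import math, statistics

def ape(y_t, yhat_t):
    return abs(y_t - yhat_t) / y_t

def mape_t(y_t, preds):                 # arithmetic mean of APE across series
    return sum(ape(y_t, p) for p in preds) / len(preds)

def mdape_t(y_t, preds):                # median APE across series
    return statistics.median(ape(y_t, p) for p in preds)

def gmrae_t(y_t, preds, rw_t):          # geometric mean of relative absolute errors
    raes = [abs(y_t - p) / abs(y_t - rw_t) for p in preds]
    return math.exp(sum(math.log(r) for r in raes) / len(raes))

preds = [90.0, 110.0, 120.0]            # three replicates for one horizon t
print(mape_t(100.0, preds))             # 0.1333...
print(mdape_t(100.0, preds))            # 0.1
print(gmrae_t(100.0, preds, rw_t=80.0)) # < 1 means better than random walk
```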
Armstrong [40] performed an evaluation over some of these measures and suggested the best ones for different conditions. In calibration problems, a sensitive error measure is needed so that changes in the parameters are reflected in the error measure values. The EMs with good sensitivity are RMSE, MAPE, and GMRAE. He suggests GMRAE because of the poor reliability of RMSE, and he claimed that MAPE is biased toward low forecasts [40]. As we argue in the Discussion section, we believe that MAPE is not biased in favor of low forecasts and can also be a good metric for calibration. Also, GMRAE can drop to zero when even a single zero appears among the errors, which lowers its sensitivity.
For selecting among forecasting methods, Armstrong recommended MdRAE when the output has a small set of series and MdAPE for a moderate number of series. He argues that reliability, protection against outliers, construct validity, and the relationship to decision making are more important criteria than sensitivity: MdRAE is reliable and offers better protection against outliers, while MdAPE has a closer relationship to decision making and is also protected against outliers [40].
For stochastic algorithms that generate multiple series with uneven weights, it is important to incorporate the weight of each series when calculating the arithmetic means. For instance, instead of calculating MAPE, sMAPE, RMSE, and MdAPE across the series, we suggest measuring weighted MAPE, weighted sMAPE, weighted RMSE, and weighted MdAPE, respectively.
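A weighted variant of MAPE along these lines might look as follows (normalizing the weights so they sum to one is our assumption):

```python
# Weighted MAPE across replicates with uneven weights at one horizon t.
def weighted_mape_t(y_t, preds, weights):
    total = sum(weights)
    return sum((w / total) * abs(y_t - p) / y_t for p, w in zip(preds, weights))

# Two replicates; the first carries three times the weight of the second.
print(weighted_mape_t(100.0, [90.0, 140.0], [3.0, 1.0]))  # 0.75*0.1 + 0.25*0.4 = 0.175
```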
Stochastic forecasts with uncertainty estimates
Sometimes the output of a stochastic forecasting method is in the form of a mean value and a variance or uncertainty interval for each predicted value. In statistics, the sum of squared Euclidean distances between a set of data points and a fixed point in n-dimensional space is minimized at the mean point; the mean value is therefore a good representative of the other data points. As a result, we can simply calculate the Epifeatures on the predicted mean epidemic curve and compare the methods through error metrics. However, this comparison is not comprehensive enough because the deviation from the average value is not taken into account. To handle this kind of evaluation, we divide the problem into two subproblems:

A) Deterministic observation and stochastic forecasts with uncertainty estimates

B) Stochastic observation and stochastic forecasts with uncertainty estimates
A) Deterministic observation and stochastic forecasts with uncertainty estimates
In this case, we assume that each forecasting method’s output is a time series of uncertain estimates of the predicted case counts, reported as a mean value $\mu$, a variance $\sigma^2$ for the data point at each week, and the number of samples $n$. For simplicity, we eliminate the week subscript. Table 9 lists the notation used in the following sections. The sample size $n$ refers to the number of predicted samples from which the mean and variance are obtained. In the best situation, the forecasting algorithm provides the probability density function (pdf) of each predicted data point, denoted by $f_X(x)$; otherwise we assume the pdf is a Normal distribution for a large enough sample size, or a t-distribution if the sample size is low. The t-distribution has heavier tails, which means it is more likely to produce values far from its mean. A sample size of $n \ge 30$ is considered large. The sample size is also used to estimate the standard deviation of the random variable $X$ from the standard deviation of its samples. When the sample size is low, the degree of freedom of the t-distribution is calculated as $\nu = n - 1$.
Symbol  Definition

$X$ (or $X_w$)  Random variable that is the predicted estimate of a data point at one week (the $w$th week)
$f_X(x)$  Probability density function (pdf) of the random variable $X$
$\mu_X$  Mean value of the random variable $X$
$\sigma_X$  Standard deviation of the random variable $X$
$\bar{x}$  Mean value of the samples belonging to the random variable $X$
$s_X$  Standard deviation of the samples belonging to the random variable $X$
$\nu$  Degree of freedom of the t-distribution
$\bar{y}$  $\bar{y} = \frac{1}{n}\sum_{w=1}^{n} y_w$: the mean of the $y$ values over $n$ weeks
$x_j$  $x_j \in S_X$, where $x_j$ is the $j$th sample drawn from the distribution $f_X$
$N$  Number of samples in the Bootstrap sample set
$Y$ (or $Y_w$)  Random variable that is the estimate of the observed value at one week (the $w$th week)
$f_Y(y)$  Probability density function (pdf) of the random variable $Y$
$y_j$  $y_j \in S_Y$, where $y_j$ is the $j$th sample drawn from the distribution $f_Y$
In order to evaluate the performance of stochastic methods, we suggest performing Bootstrap sampling from the distribution $f_X$ and generating a sample set $S_X = \{x_1, \dots, x_N\}$ for each data point of the time series. Note that we do not have access to the instances behind the originally reported sample size, so we generate a large enough sample set from the pdf $f_X$. Then the six selected error measures, MAE, RMSE, MAPE, sMAPE, MdAPE, and MdsAPE, are calculated across the sample set for each week. Table S8 in the supplementary information contains the extended formulations of the error measures used for stochastic forecasts. Using the equations in Table S8 we can estimate the different expected/median errors per week for a stochastic forecasting method. The weekly errors can be aggregated by taking the mean or median across time to calculate the total error measure for each method. The aggregated error measures can then be used to calculate the Consensus Ranking for the existing forecasting approaches. Moreover, having the errors for each week, we can plot the Horizon Ranking and evaluate the trend of the rankings over time, similar to the graphs for the deterministic approaches.
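The bootstrap procedure of part (A) can be sketched for a single week as follows, assuming the forecast reports a (mean, standard deviation) pair and that a Normal pdf applies, as described above. The function name and sample count are illustrative.

```python
# Bootstrap-sample from the assumed Normal predicted pdf for one week and
# estimate the expected absolute percentage error against the deterministic
# observation (illustrative sketch, not the authors' implementation).
import random

def expected_ape(observed, mu, sigma, n_samples=10000, seed=0):
    rng = random.Random(seed)
    samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    return sum(abs(observed - x) / observed for x in samples) / n_samples

# Observed 100 cases; forecast reports mean 110 with standard deviation 5.
print(expected_ape(100.0, mu=110.0, sigma=5.0))  # close to 0.10
```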
B) Stochastic observation and stochastic forecasts with uncertainty estimates
There are many sources of error in measurement and data collection that introduce uncertainty into the observed data and make the evaluation more challenging. We suggest two categories of solutions to deal with this problem:

a) Calculating the Distance between probability density functions

b) Calculating the error measures between two probability density functions
Ba) Calculating the Distance between probability density functions
Assuming that both the predicted and observed data are stochastic, they are represented as time series of probability density functions (pdfs). Many distance functions can measure the distance between two pdfs [21]. Three of the most common distance functions for this application are listed in Table 10.
Distance Function  Formula (Continuous)  Formula (Discrete)

Bhattacharyya  $D_B(P,Q) = -\ln \int \sqrt{p(x)\,q(x)}\, dx$  $D_B(P,Q) = -\ln \sum_{x} \sqrt{P(x)\,Q(x)}$
Hellinger  $D_H(P,Q) = \sqrt{1 - \int \sqrt{p(x)\,q(x)}\, dx}$  $D_H(P,Q) = \sqrt{1 - \sum_{x} \sqrt{P(x)\,Q(x)}}$
Jaccard  (discrete form only)  $D_{Jac}(P,Q) = \frac{\sum_{x}(P(x) - Q(x))^2}{\sum_{x} P(x)^2 + \sum_{x} Q(x)^2 - \sum_{x} P(x)\,Q(x)}$
The Bhattacharyya distance function [21] and the Hellinger distance [41] both belong to the squared-chord family, and their continuous forms are available for comparing continuous probability density functions. In special cases, such as when both classes follow normal distributions, these two distance functions can be calculated from the means and variances of the pdfs as follows [42, 43]:
$D_B(P,Q) = \frac{1}{4}\,\frac{(\mu_P - \mu_Q)^2}{\sigma_P^2 + \sigma_Q^2} + \frac{1}{2}\ln\!\left(\frac{\sigma_P^2 + \sigma_Q^2}{2\,\sigma_P\,\sigma_Q}\right)$  (9)
$D_H^2(P,Q) = 1 - \sqrt{\frac{2\,\sigma_P\,\sigma_Q}{\sigma_P^2 + \sigma_Q^2}}\;\exp\!\left(-\frac{(\mu_P - \mu_Q)^2}{4\,(\sigma_P^2 + \sigma_Q^2)}\right)$  (10)
However, calculating the integral may not be straightforward for every kind of pdf, and the Jaccard distance function is only available in discrete form. To solve this problem, we suggest Bootstrap sampling from both the predicted pdf $P$ and the observed pdf $Q$, generating a pooled sample set $S$ containing $N$ samples from each. Then the summation in the distance function is taken over all items belonging to the sample set $S$. As an example, for the Jaccard distance function:
$D_{Jac}(P,Q) = \frac{\sum_{x \in S}\big(P(x) - Q(x)\big)^2}{\sum_{x \in S} P(x)^2 + \sum_{x \in S} Q(x)^2 - \sum_{x \in S} P(x)\,Q(x)}$  (11)
The Jaccard distance function belongs to the inner product family and incorporates both the similarity and the dissimilarity of the two pdfs. Using one of the aforementioned distance functions between the stochastic forecasts and the stochastic observations, we can plot the Horizon Ranking across time, and also aggregate the distance values by taking the mean over the weeks to calculate the Consensus Ranking.
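The sampling-based distance estimation suggested above can be sketched as follows. The Normal pdfs, the pooled sample set, and the pointwise normalization of the densities are our illustrative assumptions; this is a rough estimator, not a calibrated one.

```python
# Estimate Bhattacharyya and Hellinger distances between a predicted pdf P and
# an observed pdf Q by summing over a shared bootstrap sample set.
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def sampled_distances(p, q, n=20000, seed=0):
    """p, q: (mu, sigma) pairs for the predicted and observed Normal pdfs."""
    rng = random.Random(seed)
    # Pool bootstrap samples drawn from both pdfs into one sample set S.
    xs = [rng.gauss(*p) for _ in range(n // 2)] + [rng.gauss(*q) for _ in range(n // 2)]
    # Evaluate both densities on S and normalize them to act like discrete masses.
    P = [normal_pdf(x, *p) for x in xs]
    Q = [normal_pdf(x, *q) for x in xs]
    sP, sQ = sum(P), sum(Q)
    bc = sum(math.sqrt((pi / sP) * (qi / sQ)) for pi, qi in zip(P, Q))  # Bhattacharyya coeff.
    return -math.log(bc), math.sqrt(max(0.0, 1.0 - bc))  # (Bhattacharyya, Hellinger)

d_b, d_h = sampled_distances((100.0, 10.0), (110.0, 10.0))
print(d_b, d_h)  # positive distances; identical pdfs give ~(0, 0)
```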
Although these distance functions between two pdfs seem to be a reasonable metric for comparing forecast outputs, they ignore some information about the magnitude of the error and its ratio to the real value. In other words, any pair of distributions $(P_1, Q_1)$ and $(P_2, Q_2)$ can have the same distance value, e.g., when $P_2$ and $Q_2$ are copies of $P_1$ and $Q_1$ shifted by the same constant. The distance functions thus do not consider the domain values and lose the information about the relative magnitude of the error with respect to the observed value.
In the ranking process of different forecasting approaches, as the observed data is assumed to be fixed, this issue will not be a concern.
Another problem with using distance functions between pdfs arises when some forecasting methods are stochastic and others are deterministic: the proposed error measures are not commensurable with distance functions, so the two kinds of methods cannot be compared directly.
Bb) Calculating the error measures between two probability density functions
In order to compare stochastic and deterministic forecasting approaches together, we suggest estimating the same error measures used for the deterministic methods. We perform Bootstrap sampling from both the predicted and observed pdfs for each data point of the time series and generate two separate sample sets $S_X = \{x_1, \dots, x_N\}$ and $S_Y = \{y_1, \dots, y_N\}$. The six selected error measures, MAE, RMSE, MAPE, sMAPE, MdAPE, and MdsAPE, can be estimated through the equations listed in Table S9 in the supporting information. These measures incorporate the variance of the pdfs through the sampling and represent the difference between the predicted and observed densities by a weighted expected value of the error across the samples.
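A sketch of this estimation for MAPE on a single week, under the assumptions that both pdfs are Normal and that samples are paired by index:

```python
# Bootstrap one sample set from the predicted pdf and one from the observed
# pdf, then estimate MAPE as an expectation over the paired samples
# (illustrative; the pairing scheme and Normal pdfs are assumptions).
import random

def stochastic_mape(pred, obs, n=10000, seed=0):
    """pred, obs: (mu, sigma) of the predicted and observed pdfs for one week."""
    rng = random.Random(seed)
    xs = [rng.gauss(*pred) for _ in range(n)]  # samples from predicted pdf
    ys = [rng.gauss(*obs) for _ in range(n)]   # samples from observed pdf
    return sum(abs(y - x) / y for x, y in zip(xs, ys)) / n

print(stochastic_mape((110.0, 5.0), (100.0, 5.0)))  # roughly 0.1
```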
Discussion
As shown in the previous sections, no single forecasting algorithm outperforms the others in predicting all Epifeatures. For a given Epifeature, we recommend using the consensus ranking across different error measures. Further, even for a single Epifeature, the rankings of the methods vary as the prediction time varies.
Horizon Ranking vs Consensus Ranking
How do we decide on the best method when Horizon Ranking and Consensus Ranking lead to different conclusions? The significant difference between the two comes from the fact that Consensus Ranking averages (or takes the median of) the errors over time steps and then sorts the methods to determine the ranking. This aggregation of errors is not always a disadvantage: a slight difference in errors can change the Horizon Ranking at a single time step, whereas Consensus Ranking accumulates the errors over the whole time series and gives an overall perspective on the methods’ performance. If the purpose of the evaluation is to select one method as the best predictor for all weeks, Consensus Rankings can guide the method selection. However, if different prediction methods may be used in different periods, we suggest identifying a few time intervals within which the Horizon Rankings of the best methods are consistent. Then, in each time interval, the best method according to the Horizon Ranking can be selected; alternatively, a Consensus Ranking can be calculated for each interval by averaging the error measures over its time steps, and the superior method for that interval is the one ranked first. One advantage of Horizon Ranking is that it detects and reduces the effect of outliers across time horizons, whereas Consensus Ranking aggregates the errors across time steps, so outliers can noticeably change the total value of the error measures.
MAPE vs sMAPE
MAPE and sMAPE have been two important measures of forecast error since 1993. MAPE was used as the primary measure in the M2-Competition and was replaced by sMAPE in the M3-Competition to overcome the disadvantages of MAPE. One drawback is that MAPE can become very large or undefined when the observed data point is close to zero; this is one reason sMAPE uses the average of the observed and predicted values in the denominator. The other issue claimed for MAPE in some of the literature is a bias in favor of small forecasts: the critics argue that MAPE imposes a higher penalty on large overestimates than on any underestimate. sMAPE, the symmetric version of MAPE, normalizes the error by the mean of the predicted and observed values, which limits the range of the sMAPE error to between 0 and 2 for both over- and underestimated predictions. However, we believe that although the range of sMAPE is symmetric, it does not score errors uniformly; in fact, sMAPE is significantly biased toward large forecasts. Figure 18 and Table S8 in the supporting information show the corresponding domains that generate MAPE or sMAPE errors of equal magnitude. The figures in the left column belong to MAPE and those on the right to sMAPE. In Figure 18, the black line represents the observed epidemic curve ($y$) and the horizontal axis is the weekly time step ($t$). The yellow borders show the overestimated and underestimated predicted curves that both result in MAPE = 0.5 or sMAPE = 0.5. The green spectrum shows the predicted curves with low values of MAPE or sMAPE; equal colors in these figures correspond to equal values of the discussed error measure. The red borders in the left graph belong to the predicted curves $\hat{y} = 0$ and $\hat{y} = 2y$ with MAPE = 1, and the red borders in the right chart correspond to $\hat{y} = y/3$ and $\hat{y} = 3y$, which generate sMAPE = 1.
As can be seen, MAPE grows faster than sMAPE, meaning MAPE reaches 1 at smaller deviations in the domain. Moreover, MAPE grows symmetrically around the observed curve, which results in fair scoring of over- and underestimation.
The black borders in the lower charts correspond to the predicted epidemic curves that generate MAPE = 2 (left) and sMAPE = 2 (right), respectively. The color spectrum of sMAPE in the right chart shows the non-symmetric behavior of this error measure, which is in favor of large predictions. As we cannot show the infinite domain for sMAPE, we limited the chart to a finite predicted curve. Figure 19 shows the blue spectrum of MAPE corresponding to large predictions, for which MAPE approaches infinity as $\hat{y}$ grows. This error measure provides more sensible scoring for both calibration and selection problems.
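The asymmetry can be verified numerically: equal-magnitude over- and underestimates produce the same APE but different sAPE values (sAPE here uses the mean of observed and predicted values in the denominator, giving a range of [0, 2]).

```python
# Compare APE and sAPE for an over- and an under-forecast of equal magnitude.
def apes(y, yhat):
    ape = abs(y - yhat) / y
    sape = abs(y - yhat) / ((y + yhat) / 2)
    return ape, sape

print(apes(100, 150))  # (0.5, 0.4)     over-forecast: lower sAPE
print(apes(100, 50))   # (0.5, 0.666..) under-forecast: higher sAPE
```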
Relative evaluation vs Absolute evaluation
In this paper, we covered how to evaluate the performance of forecasting algorithms relative to each other and rank them based on various error measures. The ranking methods, like the Horizon Ranking, can reveal differences in performance even when the algorithms are highly competitive. However, another question arises: how can we recognize whether the ranked methods perform similarly or are far apart? And what if we only have one forecasting method and simply need to know how well it performs?
Absolute measurement is a bigger challenge because most of the available error measures are neither scaled nor normalized and do not provide a meaningful range. However, for a single forecasting method we suggest taking advantage of the MAPE measure, as it is scaled by the observed value and its magnitude describes how large the error is, on average, compared with the observed value.
For multiple algorithms, we suggest calculating the MAPE measure on the one-step-ahead epidemic curve of each algorithm and clustering the algorithms by their MAPE values. As discussed in the previous section and in Table S10, four meaningful intervals of the MAPE value can be defined as criteria to cluster the forecasting approaches into four corresponding groups: methods with $MAPE \le 0.5$, methods with $0.5 < MAPE \le 1$, methods with $1 < MAPE \le 2$, and methods with $MAPE > 2$. This kind of clustering draws borders between methods whose performance differs substantially. The algorithms of each group can then be passed through the three steps of the evaluation framework and ranked based on the various Epifeatures and error measures. As an illustration, Table 11 provides the average values of the different error measures over all 10 HHS regions for the six aforementioned methods and an autoregressive forecasting method, ARIMA [44]. As can be seen, the MAPE values of the six methods are under 0.5, which places all of them in the same category, while the MAPE of the ARIMA method is 0.77, which assigns it to the second group: the performance of ARIMA is clearly behind all the other methods. Figure 20 depicts the one-step-ahead predicted curve of the ARIMA method compared with the observed data; the ARIMA output shows large deviations from the observed curve, confirming the correctness of the clustering approach.
MAE  RMSE  MAPE  sMAPE  MdAPE  MdsAPE  

Method 1  316.18  378.63  0.39  0.33  0.34  0.29 
Method 2  293.76  357.34  0.35  0.31  0.30  0.26 
Method 3  224.53  293.52  0.25  0.22  0.22  0.20 
Method 4  204.5  274.41  0.21  0.21  0.18  0.18 
Method 5  224.57  293.90  0.25  0.22  0.22  0.20 
Method 6  204.25  274.97  0.21  0.20  0.18  0.18 
ARIMA  1015.60  1187.62  0.77  0.74  0.78  0.75 
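The clustering step can be sketched as follows, using four MAPE bands with edges at 0.5, 1, and 2 (the band edges are our reading of the text, not a prescribed standard):

```python
# Assign each method to a MAPE band; methods in different bands are treated as
# clearly separated in performance, and ranking proceeds within a band.
def mape_band(mape):
    for i, edge in enumerate((0.5, 1.0, 2.0)):
        if mape <= edge:
            return i + 1  # bands 1..3
    return 4              # MAPE > 2

methods = {"M3": 0.25, "M4": 0.21, "ARIMA": 0.77}
bands = {name: mape_band(v) for name, v in methods.items()}
print(bands)  # ARIMA falls into the second band, the rest into the first
```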
Prediction Error vs Calibration Error
In this paper, prediction error is used to calculate the predicted error measures, i.e., only the errors after the prediction time are taken into account, and the deviation between the model curve and the data before the prediction time is ignored. However, we suggest running the evaluation framework in two different modes: forecasting mode and calibration mode. In forecasting mode, only the prediction error is measured. Moreover, if the observed Epifeature has already occurred by a given week, the forecasts corresponding to prediction times after that week are not included when accumulating the errors, because they are no longer of interest. In calibration mode, by contrast, the aim is to measure the error between the model curves and the observed data regardless of when the observed Epifeature occurred; the error measures for an Epifeature are therefore accumulated over all prediction weeks. Likewise, when calculating error measures on the epidemic curve, the fitting errors before the prediction time are accumulated together with the prediction errors to measure the calibration error.
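The two accumulation modes can be sketched as follows (illustrative shapes; MAE stands in for the accumulated error measure):

```python
# Forecasting mode counts only errors at or after the prediction time;
# calibration mode also includes the fitting errors before it.
def accumulated_error(observed, model, prediction_time, mode="forecast"):
    start = prediction_time if mode == "forecast" else 0
    pairs = list(zip(observed, model))[start:]
    return sum(abs(y - yhat) for y, yhat in pairs) / len(pairs)  # MAE over window

observed = [10, 20, 30, 25, 15]
model    = [12, 18, 33, 20, 14]
print(accumulated_error(observed, model, prediction_time=2, mode="forecast"))     # 3.0
print(accumulated_error(observed, model, prediction_time=2, mode="calibration"))  # 2.6
```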
Conclusion
Evaluating epidemic forecasts arising from varied models is inherently challenging due to the wide variety of epidemic features and error measures to choose from. We proposed different Epifeatures for quantifying the prediction accuracy of forecasting methods and demonstrated how suitable error measures can be applied to those Epifeatures to evaluate the accuracy of the predictions. We applied the proposed Epifeatures and error measures to the output of six forecasting methods to assess their performance. As the experimental results showed, different error measures provide different measurements of the error even for a single Epifeature. Therefore, we provided the Consensus Ranking method to aggregate the rankings across error measures and summarize the performance of the forecasting algorithms in predicting a single Epifeature. Based on the first round of rankings, none of the forecasting algorithms outperformed the others in predicting all Epifeatures. Therefore, we recommended a second level of rankings to accumulate the analysis over the various Epifeatures and provide a total summary of the forecasting methods' capabilities. We also proposed the Horizon Ranking to trace the performance of the algorithms across time steps and provide a better perspective over time. Finally, we hinted at how these methods can be adapted to the stochastic setting. Choosing the best forecasting method enables policy planners to make more reliable recommendations. Understanding the practical relevance of the various Epifeatures of interest and the properties offered by the different error measures will help guide the method selection. We hope that our work allows for a more informed conversation and decision process when using and evaluating epidemic forecasts.
Declarations
{backmatter}List of abbreviations
List of abbreviations are included in Table 12.
AgeAR  Agespecific Attack Rate 
APE  Absolute Percentage Error 
ARI  Acute Respiratory Illness 
CDC  Centers for Disease Control and Prevention 
cMAPE  corrected MAPE 
CR  Consensus Ranking 
CumRAE  Cumulative Relative Error 
DARPA  Defense Advanced Research Projects Agency 
Epifeatures  epidemic features 
GM  Geometric Mean 
GMRAE  Geometric Mean Relative Absolute Error 
HHS  Department of Health and Human Services 
ID  Intensity Duration 
ILINet  Influenza-like Illness Surveillance Network 
MAAPE  Mean Arctangent Absolute Percentage Error 
MAE  Mean Absolute Error 
MAPE  Mean Absolute Percentage Error 
MARE  Mean Absolute Relative Error 
MASE  Mean Absolute Scaled Error 
M-Competitions  Makridakis Competitions 
Md  Median 
MdRAE  Median Relative Absolute Error 
NIH  National Institutes of Health 
NMSE  Normalized Mean Squared Error 
NOAA  National Oceanic and Atmospheric Administration 
PB  Percent Better 
pdf  probability density function 
RMAE  Relative Mean Absolute Error 
RMSE  Root Mean Squared Error 
sAPE  symmetric Absolute Percentage Error 
SAR  Secondary Attack Rate 
sMAPE  symmetric Mean Absolute Percentage Error 
SpE  Speed of Epidemic 
TAR  Total Attack Rate 
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Availability of data and material
The data used during this study are available upon request.
Competing interests
The authors declare that they have no competing interests.
Funding
This work has been supported by the Intelligence Advanced Research Projects Activity (IARPA) via the Department of Interior National Business Center (DoI/NBC), contract number D12PC00337.
Author’s contributions
FST, MM, and NR contributed to the main idea and the framework of the project. FST proposed and implemented the Epifeatures and error measures, generated the outputs and diagrams, and interpreted them. JC and BL analyzed and prepared the data for the Digital Library for the simulation. FST wrote the paper. SV and PC were major contributors in writing the manuscript. All authors read and approved the final manuscript.
Acknowledgements
We thank DTRA CNIMS, IARPA, NetSE, NIH MIDAS and DTRA V&V for their support and also thank our external collaborators and members of the Network Dynamics and Simulation Science Laboratory (NDSSL) for their suggestions and comments.
We thank Fereshteh Kazemi and Milad Hosseinipour for their invaluable support.
References
 [1] Paul, M.J., Dredze, M., Broniatowski, D.: Twitter improves influenza forecasting. PLoS currents 6 (2014). doi:10.1371/currents.outbreaks.90b9ed0f59bae4ccaa683a39865d9117
 [2] Scott, S.L., Varian, H.R.: Bayesian variable selection for nowcasting economic time series. forthcoming in Economics of Digitization (July 2012), 1–22 (2014)
 [3] Stock, J.H.: Forecasting Economic Time Series (1999)
 [4] Radzuan, N.F.M., Othman, Z., Bakar, A.A.: Uncertain Time Series in Weather Prediction. Procedia Technology 11(Iceei), 557–564 (2013). doi:10.1016/j.protcy.2013.12.228
 [5] Voyant, C., Paoli, C., Muselli, M., Nivet, M.L.: Multihorizon solar radiation forecasting for Mediterranean locations using time series models. Renewable and Sustainable Energy Reviews 28, 44–52 (2013). doi:10.1016/j.rser.2013.07.058
 [6] Kumar, N., Jha, G.K.: A Time Series ANN Approach for Weather Forecasting 3(1), 19–25 (2013)
 [7] Chretien, J.P., George, D., Shaman, J., Chitale, R.A., McKenzie, F.E.: Influenza Forecasting in Human Populations: A Scoping Review. PLoS ONE 9(4), 94130 (2014). doi:10.1371/journal.pone.0094130
 [8] Announcement of Requirements and Registration for the Predict the Influenza Season Challenge. http://www.gpo.gov/fdsys/pkg/FR-2013-11-25/pdf/2013-28198.pdf Accessed 2016-07-07
 [9] RAPIDD Ebola Challenge: Comparison of Disease Forecasting Models. http://www.ebola-challenge.org/ Accessed 2016-07-07
 [10] Forecasting the Next Dengue Outbreak. https://www.ncdc.noaa.gov/news/forecasting-next-dengue-outbreak Accessed 2016-07-07
 [11] Dengue Forecasting Project. http://dengueforecasting.noaa.gov/docs/project_description.pdf Accessed 2016-07-07
 [12] DARPA CHIKV Challenge to Address Threat of Chikungunya. http://globalbiodefense.com/2014/08/18/darpa-chikv-challenge-chikungunya/ Accessed 2016-07-07
 [13] Makridakis, S., Chatfield, C., Hibon, M., Lawrence, M., Mills, T., Ord, K., Simmons, L.F.: The M2-competition: A real-time judgmentally based forecasting study. International Journal of Forecasting 9(1), 5–22 (1993). doi:10.1016/0169-2070(93)90044-N
 [14] Makridakis, S.: The M3-Competition: results, conclusions and implications. International Journal of Forecasting 16, 451–476 (2000)
 [15] Armstrong, J.S.: Evaluating Forecasting Methods, pp. 443–472. Springer, Boston, MA (2001). http://dx.doi.org/10.1007/978-0-306-47630-3_20
 [16] Nsoesie, E.O., Brownstein, J.S., Ramakrishnan, N., Marathe, M.V.: A systematic review of studies on forecasting the dynamics of influenza outbreaks. Influenza and other respiratory viruses 8, 309–16 (2014). doi:10.1111/irv.12226
 [17] Viboud, C., Boëlle, P.Y., Carrat, F., Valleron, A.J., Flahault, A.: Prediction of the Spread of Influenza Epidemics by the Method of Analogues. American Journal of Epidemiology 158(10), 996–1006 (2003). doi:10.1093/aje/kwg239
 [18] Aguirre, A., Gonzalez, E.: The feasibility of forecasting influenza epidemics in Cuba. Memorias do Instituto Oswaldo Cruz 87(3), 429–32
 [19] Jiang, X., Wallstrom, G., Cooper, G.F., Wagner, M.M.: Bayesian prediction of an epidemic curve. Journal of Biomedical Informatics 42(1), 90–99 (2009). doi:10.1016/j.jbi.2008.05.013
 [20] Soebiyanto, R.P., Adimi, F., Kiang, R.K.: Modeling and predicting seasonal influenza transmission in warm regions using climatological parameters. PLoS ONE 5(3), 1–10 (2010). doi:10.1371/journal.pone.0009450
 [21] Cha, S.-H.: Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions. International Journal of Mathematical Models and Methods in Applied Sciences 1(4), 300–307 (2007). doi:10.1007/s001670090884z
 [22] Longini, I.M., Fine, P.E., Thacker, S.B.: Predicting the global spread of new infectious agents. American journal of epidemiology 123(3), 383–91 (1986)
 [23] Chao, D.L., Matrajt, L., Basta, N.E., Sugimoto, J.D., Dean, B., Bagwell, D.A., Oiulfstad, B., Halloran, M.E., Longini, I.M.: Planning for the control of pandemic influenza A (H1N1) in Los Angeles County and the United States. American journal of epidemiology 173(10), 1121–30 (2011). doi:10.1093/aje/kwq497
 [24] Hall, I.M., Gani, R., Hughes, H.E., Leach, S.: Real-time epidemic forecasting for pandemic influenza. Epidemiology and Infection 135, 372–85 (2007). doi:10.1017/S0950268806007084
 [25] Ong, J.B.S., Chen, M.I.C., Cook, A.R., Lee, H.C., Lee, V.J., Lin, R.T.P., Tambyah, P.A., Goh, L.G.: Real-time epidemic monitoring and forecasting of H1N1-2009 using influenza-like illness from general practice and family doctor clinics in Singapore. PLoS ONE 5(4), 10036 (2010). doi:10.1371/journal.pone.0010036
 [26] Tizzoni, M., Bajardi, P., Poletto, C., Ramasco, J.J., Balcan, D., Gonçalves, B., Perra, N., Colizza, V., Vespignani, A.: Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm. BMC Medicine 10, 165 (2012). doi:10.1186/1741-7015-10-165
 [27] Towers, S., Feng, Z.: Pandemic H1N1 influenza: predicting the course of a pandemic and assessing the efficacy of the planned vaccination programme in the United States. Euro surveillance: bulletin Européen sur les maladies transmissibles = European communicable disease bulletin 14(41), 19358 (2009)
 [28] Shaman, J., Karspeck, A.: Forecasting seasonal outbreaks of influenza. Proceedings of the National Academy of Sciences of the United States of America 109(3), 20425–30 (2012). doi:10.1073/pnas.1208772109
 [29] Andersson, E., KühlmannBerenzon, S., Linde, A., Schiöler, L., Rubinova, S., Frisén, M.: Predictions by early indicators of the time and height of the peaks of yearly influenza outbreaks in Sweden. Scandinavian journal of public health 36(5), 475–82 (2008). doi:10.1177/1403494808089566
 [30] Hyder, A., Buckeridge, D.L., Leung, B.: Predictive Validation of an Influenza Spread Model. PLoS ONE 8(6) (2013). doi:10.1371/journal.pone.0065459
 [31] Overview of Influenza Surveillance in the United States. http://www.cdc.gov/flu/pdf/weekly/overview.pdf Accessed 2016-07-07
 [32] HHS Region Map. http://www.hhs.gov/about/agencies/iea/regional-offices/index.html Accessed 2016-07-07
 [33] Lekone, P.E., Finkenstädt, B.F.: Statistical inference in a stochastic epidemic SEIR model with control intervention: Ebola as a case study. Biometrics 62(December), 1170–1177 (2006). doi:10.1111/j.1541-0420.2006.00609.x
 [34] Principles of Epidemiology in Public Health Practice, Third Edition: An Introduction to Applied Epidemiology and Biostatistics. http://www.cdc.gov/ophss/csels/dsepd/SS1978/Lesson3/Section2.html Accessed 2016-07-07
 [35] 2009 H1N1 Early Outbreak and Disease Characteristics. http://www.cdc.gov/h1n1flu/surveillanceqa.htm#4 Accessed 2016-07-07
 [36] Overview of Influenza Surveillance in the United States. http://www.cdc.gov/flu/weekly/overview.htm Accessed 2016-07-07
 [37] Shcherbakov, M.V., Brebels, A., Shcherbakova, N.L., Tyukov, A.P., Janovsky, T.A., Kamaev, V.A.: A survey of forecast error measures. World Applied Sciences Journal 24(24), 171–176 (2013). doi:10.5829/idosi.wasj.2013.24.itmies.80032
 [38] Syntetos, A.A., Boylan, J.E.: On the variance of intermittent demand estimates. International Journal of Production Economics 128(2), 546–555 (2010). doi:10.1016/j.ijpe.2010.07.005
 [39] U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). https://public.health.oregon.gov/DiseasesConditions/CommunicableDisease/DiseaseSurveillanceData/Influenza/Documents/recruitment_cdc_system.pdf Accessed 2016-07-07
 [40] Armstrong, J.S., Collopy, F.: Error measures for generalizing about forecasting methods: Empirical comparisons. International Journal of Forecasting 8(1), 69–80 (1992). doi:10.1016/0169-2070(92)90008-W
 [41] Deza, M.M., Deza, E.: Encyclopedia of Distances, pp. 1–590 (2009). doi:10.1007/978-3-642-00234-2
 [42] Abou-Moustafa, K.T., Ferrie, F.P.: A note on metric properties for some divergence measures: The Gaussian case. Journal of Machine Learning Research 25, 1–15 (2012)
 [43] Pardo, L.: Statistical Inference Based on Divergence Measures, vol. 170, p. 497 (2006). doi:10.1111/j.1467-985X.2007.00485_7.x
 [44] ARIMA Models for Time Series Forecasting. https://people.duke.edu/%7Ernau/411arim.htm Accessed 2016-12-31
Additional Files
Additional file 1 — Supporting Information.pdf
A PDF file describing our forecasting algorithm and the six configurations used.
Additional file 2 — Supporting_Tables.pdf
Additional file 3 — S1 Fig.png
Summary of Methodology: this figure is referenced in Supporting Information.pdf and describes the forecasting pipeline.
Additional file 4 — S2 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 2.
Additional file 5 — S3 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 3.
Additional file 6 — S4 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 4.
Additional file 7 — S5 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 5.
Additional file 8 — S6 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 6.
Additional file 9 — S7 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 7.
Additional file 10 — S8 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 8.
Additional file 11 — S9 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 9.
Additional file 12 — S10 Fig.pdf
Consensus ranking of forecasting methods over all error measures for predicting different Epifeatures for Region 10.
Additional file 13 — S11 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 2.
Additional file 14 — S12 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 3.
Additional file 15 — S13 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 4.
Additional file 16 — S14 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 5.
Additional file 17 — S15 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 6.
Additional file 18 — S16 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 7.
Additional file 19 — S17 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 8.
Additional file 20 — S18 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 9.
Additional file 21 — S19 Fig.pdf
Visual comparison of 1-step-ahead predicted curves generated by six methods vs. the observed curve, Region 10.