
Predictive Likelihood for Coherent Forecasting of Count Time Series

Siuli Mukhopadhyay (corresponding author; email: siuli@math.iitb.ac.in), V. Satish

Department of Mathematics, Department of Electrical Engineering, Indian Institute of Technology Bombay,

Mumbai 400 076, India

 

Abstract

A new forecasting method based on the concept of the profile predictive likelihood function is proposed for discrete-valued processes. In particular, generalized autoregressive moving average (GARMA) models for Poisson distributed data are explored in detail. Highest density regions are used to construct forecasting regions. The proposed forecast estimates and regions are coherent. Large sample results are derived for the forecasting distribution. Numerical studies using simulations and a real data set are used to establish the performance of the proposed forecasting method. Robustness of the proposed method to possible misspecifications in the model is also studied.

Keywords: GARMA models, Highest Density Regions, Observation Driven Models, Partial Likelihood Function, Profile Likelihood

1 Introduction

In contrast to time series of Gaussian responses, for which numerous forecasting methods are available, the literature on forecasting count-type time series is still very sparse. However, time series data in the form of counts are frequently measured in fields such as finance, insurance, biomedicine and public health. As an example, consider a disease surveillance study, where health officials record the number of disease cases over a certain time period to understand the disease trajectory. The main interest in such surveillance studies is to forecast the disease counts in the future, so that public health-care providers are able to respond to disease outbreaks on time, thereby reducing the disease impact and saving economic resources (Myers et al. (2000)). Forecasting disease counts in these situations is complicated, however, by the fact that the forecasts have to be consistent with the non-negative, integer-valued sample space of such count time series. Using the estimated mean at a future time point, which is generally a non-integer, as a point estimate for the future count (Davis et al. (2003), Andrade et al. (2015), Jalalpour et al. (2015)), as is practiced in usual ARIMA forecasting techniques, gives rise to an incoherent forecast (Freeland and McCabe (2004)).

In this article, we present a forecasting method based on the profile predictive likelihood function for count time series data. The proposed forecast estimates and regions are coherent. The class of models we consider is the observation driven models (Zeger and Qaqish (1988), Li (1994), Benjamin et al. (2003), Fokianos and Kedem (2004) and Kedem and Fokianos (2005)), where the conditional distribution of counts given past information belongs to the exponential family. As done in the available statistical literature (Agresti (1990), Bishop et al. (1975), Haberman (1974), Cameron and Trivedi (1998), Winkelmann (2000)), we model the counts with a Poisson type distribution. These Poisson time series models allow inclusion of both autoregressive and moving average terms along with trend, seasonality and dependence on several covariates. To the best of our knowledge this is the first coherent forecasting method proposed for count time series modeled by such observation driven models.

Mainly two approaches have been used for modelling non-Gaussian time series: parameter driven and observation driven models (Cox (1981)). The two types of models differ in the way they account for the autocorrelation in the data. In observation driven models, the correlation between observations measured at successive time points is modeled through a function of past responses. Parameter driven models, in contrast, use an unobserved latent process to account for the correlation; conditional on the latent process, the observations are assumed to evolve independently of the past history of the observation process. Parameter driven models for non-Gaussian time series were first considered by West et al. (1985), and were later investigated by Fahrmeir and Tutz (2001), Fahrmeir and Wagenpfeil (1997) and Durbin and Koopman (1997, 2000). Markov chain Monte Carlo (MCMC) methods are frequently used to compute the posterior distributions of the parameters of these models (some references are Durbin and Koopman (1997), Shephard and Pitt (1997) and Gamerman (1998)). However, these MCMC algorithms often fail to converge, resulting in poor inference and predictions. More recently, the particle filter algorithm has been used as an alternative to MCMC in parameter driven models (Durbin and Koopman (2012)). Compared to parameter driven models, the computational burden of parameter estimation is much lower in observation driven models (Chan and Ledolter (1995), Durbin and Koopman (2000), Davis et al. (2003)).

In this article our focus is on observation driven models, in which the conditional distribution of each observation, given past responses and past and possibly present covariates, is described by a generalized linear model (GLM) distribution. Partial likelihood theory combined with GLMs is used for estimation. This flexible modeling framework includes the regression models for count time series proposed by Zeger and Qaqish (1988), the GLM time series models of Li (1994) and Fokianos and Kedem (2004), the generalized linear autoregression models of Shephard (1995), the GLARMA models of Davis et al. (2003) and the GARMA models of Benjamin et al. (2003). In all these papers, however, the main focus is on parameter estimation and in-sample prediction, with almost no attention given to forecasting. Yet out-of-sample prediction based on past behaviour is an important and necessary component of time series studies. Consider financial data on the number of stock transactions: the main interest in such studies is to use the history of stock transactions, together with information on external covariates influencing the stocks, to predict the number of future transactions and reap large profits (Granger (1992)). In the worldwide development of early warning systems for vector borne diseases like dengue, the main aim is to forecast future disease counts by considering the effect of past disease outcomes as well as covariates such as climatic and social-ecological conditions (Shi et al. (2016) and Lee et al. (2017)). In both of the count time series mentioned above, numbers of stock transactions and disease cases, non-negative, integer-valued estimates and intervals for future counts are required.

We use the profile predictive likelihood function of Mathiasen (1979) to develop coherent forecasts for count time series. See Bjornstad (1990) and the references therein for a detailed review of some of the prediction functions used in the statistical literature. Forecasting using predictive likelihood has been used before for Gaussian models with an autoregressive structure of order one (Ytterstad (1991)). However, to the best of our knowledge, it has not been used for out-of-sample forecasts in any non-Gaussian time series models.

Other forecasting techniques for count time series data, modeled using the Poisson autoregressive (PAR) and the integer autoregressive (INAR) classes of models, have been proposed by, among others, Freeland and McCabe (2004), McCabe and Martin (2005), McCabe et al. (2011) and Maiti et al. (2016). However, the PAR and INAR models are structurally different from the observation driven GLM time series models discussed above and usually do not accommodate covariates (McKenzie (1988)).

The original contributions of this article include (i) a coherent and consistent point forecasting method for count time series, (ii) coherent forecasting regions based on highest density regions, and (iii) the use of the predictive likelihood concept for predicting dependent non-Gaussian time series data. The performance of the proposed method is illustrated using several simulated examples and a real data set from a polio surveillance study. The effects of model misspecification on the proposed method and large sample results for the forecasting estimator are also discussed.

The rest of the paper is organized as follows. GLM for time series and particularly Poisson time series models are discussed in Section 2. This is followed by details on predictive likelihood (PL) function and PL for Poisson time series models in Section 3. The steps of the forecasting algorithm, forecasting intervals based on highest density regions and asymptotic properties of the forecast estimator are also discussed in this section. Section 4 gives the details of the simulation results, real data prediction results, and the study of the robustness of the proposed method. Concluding remarks are given in Section 5.

2 GLMs for time series

The count at time t, denoted by y_t, is assumed to depend on the past counts and on past and possibly present covariate values. We assume that the conditional density of each y_t, for t = 1, …, n, given the past information F_{t−1}, is given by,

f(y_t | F_{t−1}) = exp{ [y_t θ_t − b(θ_t)] / φ + d(y_t, φ) }   (1)

where θ_t is the canonical parameter, φ is the additional scale or dispersion parameter, while b(·) and d(·) are specific functions corresponding to the type of exponential family considered. The canonical parameter is a function of the conditional mean of y_t, i.e., μ_t = E(y_t | F_{t−1}) = b′(θ_t). The conditional variance is of the form Var(y_t | F_{t−1}) = φ b″(θ_t). The mean μ_t is related to the linear predictor η_t by μ_t = h(η_t), where h is a one-to-one, sufficiently smooth function. The inverse of h, denoted by g = h^{−1}, is called the link function. To model the dependence of the mean at time t, μ_t, on past information, the linear predictor is assumed to be,

η_t = g(μ_t) = x_t′β + A_t + M_t   (2)

where x_t represents the covariate vector, and A_t and M_t represent the autoregressive (AR) and moving average (MA) components, respectively. Some forms for the linear predictor under the general model setup proposed in the statistical literature are:
Model form 1: Zeger and Qaqish (1988)

(3)

Model form 2: Li (1994)

(4)

Model form 3: Fokianos and Kedem (2004)

(5)

Model form 4: GARMA Benjamin et al. (2003)

η_t = x_t′β + Σ_{j=1}^{p} φ_j { g(y_{t−j}) − x_{t−j}′β } + Σ_{j=1}^{q} θ_j { g(y_{t−j}) − η_{t−j} }   (6)

where x_t is the vector of covariates at time t, β is the vector of regression parameters, the φ_j are the autoregressive parameters, the θ_j correspond to the moving average components, and g(·) and any transformations applied to past counts are known functions.

Looking at the above model forms in equations (3-6) we see that they are very similar to each other. We choose the GARMA model proposed by Benjamin et al. (2003), given in equation (6), to represent the linear predictor. The GARMA model allows both autoregressive and moving average terms to be included in the linear predictor along with covariates. In this article we model the conditional distribution of the count data using Poisson GARMA models, which we define next.

2.0.1 Poisson GARMA models

The conditional density of y_t given the past information F_{t−1} is a Poisson distribution with mean parameter μ_t,

f(y_t | F_{t−1}) = e^{−μ_t} μ_t^{y_t} / y_t!,   y_t = 0, 1, 2, …   (7)

Comparing with equation (1), we get θ_t = log μ_t, b(θ_t) = exp(θ_t), φ = 1 and d(y_t, φ) = −log(y_t!). The conditional mean and variance of y_t given F_{t−1} are both equal to μ_t. Using the GARMA model from equation (6) and the canonical link function, which is the log for the Poisson family, we write,

log μ_t = x_t′β + Σ_{j=1}^{p} φ_j { log y*_{t−j} − x_{t−j}′β } + Σ_{j=1}^{q} θ_j log( y*_{t−j} / μ_{t−j} )   (8)

where, since the function g is taken to be the log function, for zero counts we define y*_{t−j} = max(y_{t−j}, c), with the threshold c satisfying 0 < c < 1.
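The Poisson GARMA recursion in equation (8) can be sketched in code. The following is a minimal simulation of a GARMA(1, 1) path under the log link; the parameter values (beta0, phi, theta) are illustrative assumptions, not values from the paper, and zero counts are thresholded at c before taking logs, as in the definition above.

```python
import math
import random

def simulate_poisson_garma(n, beta0, phi, theta, c=0.1, seed=1):
    """Sketch: simulate a Poisson GARMA(1, 1) path under the log link.

    Zero counts are replaced by y* = max(y, c), 0 < c < 1, before
    taking logs, as in equation (8).
    """
    rng = random.Random(seed)
    y = []
    prev_y_star, prev_eta = math.exp(beta0), beta0  # initialise at the mean
    for _ in range(n):
        # log mu_t = beta0 + phi*(log y*_{t-1} - beta0)
        #                  + theta*(log y*_{t-1} - eta_{t-1})
        eta_t = (beta0
                 + phi * (math.log(prev_y_star) - beta0)
                 + theta * (math.log(prev_y_star) - prev_eta))
        mu_t = math.exp(eta_t)
        # Poisson draw by inversion of the cdf (adequate for moderate mu_t)
        u, k, p = rng.random(), 0, math.exp(-mu_t)
        cdf = p
        while u > cdf:
            k += 1
            p *= mu_t / k
            cdf += p
        y.append(k)
        prev_y_star, prev_eta = max(k, c), eta_t
    return y

ys = simulate_poisson_garma(200, beta0=1.0, phi=0.4, theta=0.2)
```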

2.0.2 Estimation of Model Parameters

As in a standard GLM, the maximum likelihood estimates (MLEs) of the parameters δ = (β, φ_1, …, φ_p, θ_1, …, θ_q) are obtained using an iteratively reweighted least squares algorithm. Since the covariates here are stochastic in nature, the partial likelihood function, rather than the entire likelihood function, is maximized. An estimate of the conditional mean is given by μ̂_t = g^{−1}(η̂_t), where η̂_t is the linear predictor evaluated at the MLE δ̂ of δ. For the Poisson GARMA model, the conditional information matrix of the parameter estimates is G(δ) = Σ_t E[ (∂η_t/∂δ) μ_t (∂η_t/∂δ)′ | F_{t−1} ].

3 Predictive Likelihood

Prediction of the value of an observation at a future time point is a fundamental problem in statistics, especially in the time series context. To be more precise, suppose we have observations at time points 1, …, n, denoted by y = (y_1, …, y_n), and we would like to predict or forecast (out-of-sample) the observation y_{n+1} at time point n + 1. We assume that (y, y_{n+1}) has a joint probability density f(y, y_{n+1}; δ) indexed by the unknown parameter δ. Note that in this prediction problem we have two unknown quantities: y_{n+1}, the unobserved future value, and the parameter δ. However, our primary aim is to find y_{n+1}, so we treat δ as a nuisance parameter. Mathiasen (1979) proposed several predictive functions for finding the future value given the observed sample y. In this article we choose the prediction function, say L_p, from the likelihood viewpoint,

L_p(y_{n+1} | y) = f(y, y_{n+1}; δ̂(y_{n+1})),

for inferring about the unknown y_{n+1}. In the above equation, the maximum likelihood estimate (MLE) of δ, denoted by δ̂(y_{n+1}), is computed using the available observations y and the unobserved future count y_{n+1}.

Different ways of estimating the nuisance parameters lead to different predictive likelihoods. For our problem we use the MLE method to estimate δ and eliminate the dependence on the nuisance parameter. This particular form of the prediction function is known as the profile predictive likelihood (PL) function. Dunsmore (1976), Lauritzen (1974), Hinkley (1979) and Lejeune and Faulkenberry (1982) suggested an alternative method for eliminating the unknown parameter using the principle of sufficiency. However, finding a closed form sufficient statistic for the parameters is impossible given the complicated framework of the GARMA models and the correlated nature of the responses.
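The profiling idea can be made concrete with a toy example. The sketch below computes the profile predictive likelihood for i.i.d. Poisson data, where the MLE of the nuisance mean given (y, z) has the closed form (Σ y_i + z)/(n + 1); this i.i.d. model is a deliberately simplified stand-in for the GARMA setting, where the refit has no closed form.

```python
import math

def profile_predictive_likelihood(y, z):
    """Profile predictive likelihood of a future count z for i.i.d.
    Poisson data y: the nuisance mean is replaced by its MLE computed
    jointly from (y, z).  A toy stand-in for the GARMA refit."""
    lam = (sum(y) + z) / (len(y) + 1)  # closed-form joint MLE of the mean
    loglik = sum(v * math.log(lam) - lam - math.lgamma(v + 1) for v in y)
    loglik += z * math.log(lam) - lam - math.lgamma(z + 1)
    return math.exp(loglik)

y = [3, 5, 4, 6, 2, 4, 5, 3]
vals = {z: profile_predictive_likelihood(y, z) for z in range(20)}
z_hat = max(vals, key=vals.get)  # the PL point forecast
```

Note that for every candidate z the nuisance parameter is re-estimated, so the resulting function of z is not simply the fitted conditional density evaluated over the sample space.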

3.1 PL for the Poisson Time Series

A sample of size n is observed from the Poisson GARMA model and denoted by y = (y_1, …, y_n). We start with one step at a time prediction (out-of-sample forecasts) of counts. In the context of the Poisson GARMA time series model, the predictive likelihood function is

L_p(y_{n+1} | y) = ∏_{t=1}^{n+1} f(y_t | F_{t−1}; δ̂(y_{n+1}))   (9)

where δ̂(y_{n+1}) is the MLE of δ computed using the observed sample y and the future count y_{n+1}. Here F_n is the available past information at time n + 1. The estimated density of y_{n+1} given the observed time series is then,

f̂(y_{n+1} | F_n) = L_p(y_{n+1} | y) / C   (10)

where C = Σ_{z ≥ 0} L_p(z | y) is the normalizing constant.

The predictor of y_{n+1} is chosen to be the value which maximizes f̂(y_{n+1} | F_n), and is denoted by ŷ_{n+1}. The steps leading to the choice of ŷ_{n+1} are detailed below and later illustrated with an example:

  1. At the start of the process we have the past information F_n. Our interest is to compute f(y_{n+1} | F_n), the actual density of y_{n+1} given the past information. However, the density depends on the parameters δ, which are unknown. So we first compute the MLEs of the parameters and then use equation (10) to compute an estimate of f(y_{n+1} | F_n).

  2. To compute the MLEs of δ we need y and y_{n+1}. We have not yet observed y_{n+1}. So we start with the first possible value, i.e., y_{n+1} = 0, estimate the parameters and calculate f̂(0 | F_n).

  3. We next set y_{n+1} = 1, re-estimate the parameters and find the corresponding normalized predictive likelihood value f̂(1 | F_n).

  4. The above step is repeated for y_{n+1} = 2, 3, …, each time re-estimating the parameters and computing f̂(y_{n+1} | F_n). If the value of f̂(k | F_n) becomes negligible for some k, then we ignore values of y_{n+1} beyond k.

  5. At the end we have a set of K (say) normalized predictive likelihoods of y_{n+1}, denoted by f̂(0 | F_n), …, f̂(K − 1 | F_n).

    If f̂(k | F_n) = max_j f̂(j | F_n), we set ŷ_{n+1} = k.
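The five steps above can be sketched as a single loop. The fit_and_density helper below is a hypothetical stand-in for the GARMA re-estimation step (it uses an i.i.d. Poisson MLE for illustration); the enumeration, truncation and normalization logic mirrors the algorithm.

```python
import math

def fit_and_density(y, z):
    # Hypothetical stand-in for the GARMA refit: MLE of an i.i.d.
    # Poisson mean from (y, z), then the density of z under that fit.
    lam = (sum(y) + z) / (len(y) + 1)
    return math.exp(z * math.log(lam) - lam - math.lgamma(z + 1))

def pl_forecast(y, tol=1e-8, z_max=500):
    """One-step PL forecast: enumerate candidate counts z = 0, 1, ...,
    re-estimating the parameters for each (step 4 truncates once the
    contribution is negligible), then normalise and take the mode."""
    weights, z = {}, 0
    while z <= z_max:
        w = fit_and_density(y, z)
        weights[z] = w
        if z > max(y) and w < tol:  # past the bulk of the mass
            break
        z += 1
    total = sum(weights.values())
    pmf = {k: v / total for k, v in weights.items()}  # normalised PL
    return max(pmf, key=pmf.get), pmf

y = [2, 4, 3, 5, 4, 3, 6, 2, 4, 3]
z_hat, pmf = pl_forecast(y)
```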

Example: We explain the above steps of the algorithm with a simple example. A sample of size n is drawn from a Poisson GARMA model, denoted by y = (y_1, …, y_n), and the model equation is

for t = 1, …, n. We apply the PL procedure as given above for predicting y_{n+1}. Table 1 shows the possible values of y_{n+1} (0 through 19) and the corresponding density values. Note that f̂(y_{n+1} | F_n) is negligible for y_{n+1} > 19.

y_{n+1}  f̂ | y_{n+1}  f̂ | y_{n+1}  f̂ | y_{n+1}  f̂
0  0.715 | 5  0.1754 | 10  0.170 | 15  0.139
1  0.353 | 6  0.1443 | 11  0.765 | 16  0.429
2  0.873 | 7  0.1018 | 12  0.315 | 17  0.124
3  0.14378 | 8  0.629 | 13  0.119 | 18  0.342
4  0.1775 | 9  0.345 | 14  0.422 | 19  0.89
Table 1: Candidate values of y_{n+1} and the corresponding normalized predictive likelihood values f̂(y_{n+1} | F_n)

The maximum value of f̂ is 0.1775, corresponding to y_{n+1} = 4. Thus, the predicted value of y_{n+1} is ŷ_{n+1} = 4.

The PL function can also be extended to m-step predictions for Poisson GARMA models. The predictive likelihood function for m-step-ahead forecasts is,

L_p(y_{n+1}, …, y_{n+m} | y) = ∏_{t=1}^{n+m} f(y_t | F_{t−1}; δ̂(y_{n+1}, …, y_{n+m}))   (11)

where the MLEs of δ are found using the observed sample y, the candidate future counts y_{n+1}, …, y_{n+m} and the past information F_n. The value of y_{n+m} which maximizes the normalized L_p is chosen as the predictor of y_{n+m} and denoted by ŷ_{n+m}.

We note from the above forecasting equations that to predict a future count at time n + 1, we need information on the counts y_1, …, y_n and also on the covariates at time points 1, …, n + 1. The covariates may be stochastic in nature; say, in a disease data set we study the effect of time and also covariates like temperature and humidity on the disease counts. In such situations, separate time series models have to be fitted to the covariate data and used for obtaining forecasts of the covariates. These forecasted values of the covariates can then be used in the PL function to forecast y_{n+1}.

3.2 Forecast Region

In the last section we used the predictive likelihood to get a point forecast of y_{n+1} using the data y. Along with the point forecast, it is also of interest to identify a region in the sample space of y_{n+1} which in some sense summarizes the actual density f(y_{n+1} | F_n). Such a forecasting region can be constructed in several ways.

One such region is the predictive interval (q_{α/2}, q_{1−α/2}), where q_α is the αth quantile of f̂(y_{n+1} | F_n). However, since we are dealing with an asymmetric distribution here, it is more prudent to use highest density regions (HDRs) instead of a forecasting region based on quantiles. HDRs are more flexible than quantile-based regions and are able to address asymmetry, skewness as well as multimodality in the forecast distributions. The HDR, denoted by R(f_α), is the region in the sample space of y_{n+1} such that (Hyndman (1996))

R(f_α) = { y_{n+1} : f̂(y_{n+1} | F_n) ≥ f_α }, where f_α is the largest constant such that the probability of R(f_α) is at least the required coverage. In our computations of HDRs, we used the αth sample quantile of the f̂ values to estimate f_α and then obtained R(f_α) by choosing the values of y_{n+1} for which f̂(y_{n+1} | F_n) ≥ f_α.

Note that the sample space of y_{n+1} is non-negative and integer-valued. Thus, y_{n+1} can take only integer values in the computed HDRs, giving rise to coherent forecasting regions.
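A minimal sketch of the HDR computation for a discrete forecast distribution, assuming the normalized predictive likelihood is available as a dictionary mapping each candidate count to its probability: values are added in order of decreasing probability until the required mass is covered, which is equivalent (up to ties at the cutoff) to thresholding the pmf at f_α.

```python
def hdr(pmf, coverage):
    """Highest density region for a discrete forecast pmf given as
    {count: probability}: keep the highest-probability counts until
    the accumulated mass reaches the required coverage."""
    items = sorted(pmf.items(), key=lambda kv: kv[1], reverse=True)
    region, mass = [], 0.0
    for value, prob in items:
        region.append(value)
        mass += prob
        if mass >= coverage:
            break
    return sorted(region)

# Hypothetical normalized predictive likelihood values
pmf = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.25, 4: 0.15, 5: 0.10}
region = hdr(pmf, 0.75)  # 75% HDR
```

The returned region is a set of integers, so it is coherent with the count sample space; for a unimodal pmf it is a run of consecutive integers.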

3.3 Some large sample results

In this section we discuss the consistency properties and the asymptotic distribution of the forecast estimator.

Proposition 1: As n → ∞, f̂(y_{n+1} | F_n) → f(y_{n+1} | F_n) in probability.

Proof: We use Theorem 2.3.5 from Sen and Singer (1994), which states:

If X_n →p c and g is uniformly continuous, then g(X_n) →p g(c).

For our problem, X_n = δ̂, the partial likelihood MLE based on the first n observations, and c = δ, since δ̂ →p δ. Taking g to be the normalized predictive likelihood viewed as a function of the parameters, we have g(δ̂) = f̂(y_{n+1} | F_n) and g(δ) = f(y_{n+1} | F_n).

The normalized predictive likelihood function is uniformly continuous in the parameters since its derivative with respect to δ

exists and is bounded.

Thus proved.

Proposition 2: As n → ∞, the forecast estimate, suitably centred and scaled, converges in distribution to a Gaussian limit.

Proof: We know that the ML estimator δ̂ is asymptotically normal (Benjamin et al. (2003)),

Using the first order approximation of a Taylor series,

Rearranging the terms and multiplying both sides by √n we obtain,

From Slutsky’s theorem it follows,

This concludes the proof.
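The argument above can be written out as a short derivation. The following sketch uses assumed notation (δ̂ for the partial likelihood MLE, G(δ) for the conditional information matrix, and μ̂_{n+1} = h(η_{n+1}(δ̂)) for the forecast estimate), so the symbols are illustrative rather than taken verbatim from the original text.

```latex
% Hedged sketch of the Proposition 2 argument (notation assumed):
% asymptotic normality of the partial-likelihood MLE,
\sqrt{n}\,\bigl(\hat{\delta}-\delta\bigr)
  \xrightarrow{d} N\!\bigl(0,\; G(\delta)^{-1}\bigr),

% a first-order Taylor expansion of the forecast estimate around \delta,
\hat{\mu}_{n+1} = h\bigl(\eta_{n+1}(\hat{\delta})\bigr)
  \approx h\bigl(\eta_{n+1}(\delta)\bigr)
  + h'\bigl(\eta_{n+1}(\delta)\bigr)\,
    \frac{\partial \eta_{n+1}}{\partial \delta^{\top}}\,
    \bigl(\hat{\delta}-\delta\bigr),

% and, after rearranging, scaling by \sqrt{n} and applying Slutsky's theorem,
\sqrt{n}\,\bigl(\hat{\mu}_{n+1}-\mu_{n+1}\bigr)
  \xrightarrow{d}
  N\!\Bigl(0,\;
    h'(\eta_{n+1})^{2}\,
    \frac{\partial \eta_{n+1}}{\partial \delta^{\top}}\,
    G(\delta)^{-1}\,
    \frac{\partial \eta_{n+1}}{\partial \delta}\Bigr).
```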

Note: Though in Proposition 2 we show that the large sample distribution of the forecast estimate is a Gaussian distribution, this fact has not been used anywhere in the computations. We have used the Poisson mass function for all the computations.

4 Results

In this section we illustrate the proposed forecasting technique using simulation studies and a real data set based on polio counts. Robustness properties of the PL forecasts are also studied.

4.1 Simulations

The performance of the predictive likelihood in forecasting GARMA models is evaluated using simulation studies for various model orders and sample sizes. Along with point prediction, we also compute HDRs for the future counts.

Our simulation results mainly focus on one-step-ahead forecasts instead of m-step-ahead forecasts. Note, for computing m-step-ahead forecasts using equation (11), complete enumeration of all the sums and products over all possible projected paths from y_{n+1} to y_{n+m} is necessary, which is impossible unless we truncate the sample space. We discuss one simulated example of 2-step-ahead forecasting based on a truncated sample space.

Two different GARMA models with the following means were used in the simulations:

  • Model 1: GARMA(0, 2)

  • Model 2: GARMA(5, 0)

For both models, we chose three different values of the sample size n: 50, 100 and 240 (see Table 2). The different values of n enabled us to study the effect of increasing n on the profile likelihood predictions. Ten future counts (a time horizon of 10) were predicted one step at a time in each case. It is possible to increase the time horizon beyond 10; however, those computations are not shown here.

We explain the simulation steps for a Poisson GARMA model in detail below, for a fixed value of n:

  1. Using the chosen GARMA model and the fixed value of n we simulated N data sets. For all computations N is 1000; increasing N further did not change the results.

  2. The PL predictor of y_{n+1} in the ith simulation is denoted by ŷ_{n+1}^{(i)}, for i = 1, …, N. We also get HDRs for each simulation. The minimum and maximum values of y_{n+1} in the ith HDR are denoted by l_i and u_i, respectively.

  3. The median of ŷ_{n+1}^{(1)}, …, ŷ_{n+1}^{(N)} is selected as the predicted value of y_{n+1}. Selecting the median instead of the mean ensures an integer future count.

  4. The median of l_1, …, l_N is selected as the lowest value of y_{n+1} in the HDR and denoted by l, while the median of u_1, …, u_N is chosen as the highest value and denoted by u. The HDR is then taken to be the set of integers in [l, u]; remember y_{n+1} is discrete valued. For clarity, if, say, l = 2 and u = 7, then the HDR is {2, 3, 4, 5, 6, 7}. Note, all the HDRs computed displayed unimodal behavior.
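Steps 3 and 4 above amount to taking medians across the N simulation runs. A small sketch, with hypothetical inputs and assuming an odd number of runs so that the medians are integers:

```python
from statistics import median

def aggregate_forecasts(point_preds, hdr_lows, hdr_highs):
    """Steps 3-4: the median of the N point forecasts (an integer when
    N is odd, since each forecast is an integer count) and the medians
    of the per-run HDR endpoints; the aggregated HDR is the set of
    integers between the two median endpoints."""
    pred = int(median(point_preds))
    lo, hi = int(median(hdr_lows)), int(median(hdr_highs))
    return pred, list(range(lo, hi + 1))

# Hypothetical results from N = 5 simulation runs
pred, region = aggregate_forecasts([3, 4, 4, 5, 4],
                                   [1, 2, 1, 2, 2],
                                   [6, 7, 6, 8, 7])
```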

The true and predicted values of the ten future counts and the 50% and 75% HDRs are reported in Figures 1-2 for simulation models 1 and 2, respectively. Table 2 reports the RMSE values for both simulation models. From the figures and the table it is noted that point prediction using the profile likelihood method improves as n increases. This happens because the MLE δ̂ converges to the true δ as n increases, which in turn causes the normalized predictive likelihood to converge to the actual pdf of y_{n+1} (see Proposition 1). We also note that the HDRs successfully capture all the true counts in both simulation models at the larger sample sizes. At the smallest sample size, the 50% and 75% HDRs are unable to capture all the true counts; however, wider HDRs (not shown) are able to capture them.

(a) Simulated data
(b) Forecasting results
(c) Simulated data
(d) Forecasting results
(e) Simulated data
(f) Forecasting results
Figure 1: Simulation and forecasting results using the PL function for the GARMA(0, 2) model (Model 1). Note the HDRs of y_{n+1}, though shown as continuous intervals, contain only integer values.
(a) Simulated data
(b) Forecasting results
(c) Simulated data
(d) Forecasting results
(e) Simulated data
(f) Forecasting results
Figure 2: Simulation and forecasting results using the PL function for the GARMA(5, 0) model (Model 2). Note the HDRs of y_{n+1}, though shown as continuous intervals, contain only integer values.
Model | GARMA(0, 2) | GARMA(5, 0)
n | 50 | 100 | 240 | 50 | 100 | 240
RMSE | 3.24037 | 3.04959 | 0.89443 | 2.28035 | 2.19089 | 1.89737
Table 2: RMSE values for the two simulation scenarios, GARMA(0, 2) and GARMA(5, 0), for different values of n

We also assessed the effect of using an empirical probability mass function (pmf) instead of the actual pmf in the HDR computations, using a GARMA model. Comparisons of the empirical and exact pmfs for different values of n are shown in Figure 3. We note from Figure 3 that as n increases the empirical pmf approaches the exact pmf, which in turn causes the estimated HDR to approach the true HDR.

(a) 75% HDR: Exact pmf
(b) 75% HDR: Empirical pmf
(c) 75% HDR: Exact pmf
(d) 75% HDR: Empirical pmf
(e) 75% HDR: Exact pmf
(f) 75% HDR: Empirical pmf
Figure 3: Shapes of exact and empirical pmfs of y_{n+1} for different values of n for the GARMA model

4.2 Forecasting of Polio Data

To illustrate the new forecasting method proposed, we use a data set of monthly poliomyelitis cases reported to the U.S. Centers for Disease Control for the years 1970 to 1982. This data has been analyzed before by Zeger and Qaqish (1988) and Benjamin et al. (2003), but no forecasting results have been discussed.

Parameter | Estimate | Standard Error
Intercept | 0.409 | 0.122
cos(2πt/12) | 0.143 | 0.157
sin(2πt/12) | -0.530 | 0.146
cos(2πt/6) | 0.462 | 0.121
sin(2πt/6) | -0.021 | 0.123
MA(1) | 0.273 | 0.052
MA(2) | 0.242 | 0.052
Deviance | 490.714 |
Table 3: Model estimates for the polio data based on the first 158 observations

We used the first 158 observations to fit a Poisson GARMA model and then forecasted the next ten monthly polio counts. The model fitting statistics along with the estimates and their standard errors are shown in Table 3. Figure 4 shows the true counts, the predicted counts using profile likelihood estimation, and the 50% and 75% HDRs. The RMSE value for the 10 forecasts is 1.1186.

(a) Polio data: first 158 observations
(b) Forecasting results
Figure 4: Forecasting results using the PL function for the polio data. Note the lower bounds of both HDR regions in Figure 4(b) are zero. Also, the HDRs of y_{n+1}, though shown as continuous intervals, contain only integer values.

4.3 Two Step Forecasting using the PL function

The predictive likelihood function for the two step prediction of the Poisson GARMA model is

For predicting y_{n+2} we computed MLEs of the model parameters based on the observed data y and the possible values of the tuple (y_{n+1}, y_{n+2}). If the predictive likelihood of a tuple was negligible, that tuple was discarded. The MLEs were then used to evaluate the normalized predictive likelihood for each y_{n+2} by summing over all values of y_{n+1}. The true and forecasted values of y_{n+1} and y_{n+2} and the 50% and 75% HDRs are reported in Table 4.

50 HDR 75 HDR
0 0
Table 4: The two step out-of-sample prediction results
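The two-step enumeration can be sketched as follows, again with a hypothetical i.i.d.-Poisson stand-in for the GARMA refit: every tuple (z1, z2) on a truncated grid is weighted by its profile predictive likelihood, the weights are normalized, and z2 is forecast from its marginal.

```python
import math

def fit_and_joint_density(y, z1, z2):
    # Hypothetical stand-in for the GARMA refit over (y, z1, z2):
    # i.i.d. Poisson MLE of the mean, then the joint density of the
    # two future counts under that fitted mean.
    lam = (sum(y) + z1 + z2) / (len(y) + 2)
    logf = ((z1 + z2) * math.log(lam) - 2 * lam
            - math.lgamma(z1 + 1) - math.lgamma(z2 + 1))
    return math.exp(logf)

def two_step_pl_forecast(y, z_max=30):
    """Two-step PL forecast on a truncated grid: weight every tuple
    (z1, z2), normalise, then forecast z2 from its marginal."""
    grid = {(z1, z2): fit_and_joint_density(y, z1, z2)
            for z1 in range(z_max + 1) for z2 in range(z_max + 1)}
    total = sum(grid.values())
    marg2 = {}
    for (z1, z2), w in grid.items():
        marg2[z2] = marg2.get(z2, 0.0) + w / total  # sum over z1
    return max(marg2, key=marg2.get), marg2

y = [2, 4, 3, 5, 4, 3, 6, 2, 4, 3]
z2_hat, marg2 = two_step_pl_forecast(y)
```

The quadratic growth of the grid with the truncation point is what makes m-step enumeration expensive, as noted in Section 4.1.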

4.4 Robustness Study of the Profile Likelihood Based Prediction

The PL prediction depends on the model being fitted to the data. In this section, we study the effect of model misspecification on the forecasting. Suppose a data set of size n is drawn from a Poisson GARMA(0, 5) model given by,

Note the MA coefficients are non-zero at lags 1, 2 and 5. However, the coefficient at lag 5 is quite small, so it is easy to misspecify the model as a GARMA(0, 2). The estimated GARMA(0, 2) model fitted to the data is,

We performed forecasting of future counts on the basis of both the actual and the fitted models and recorded the respective RMSEs. From Figure 5 we note that the forecasts based on the estimated model are quite close to the true values, and the HDRs successfully capture all the true values except one. Thus, we may conclude, on the basis of the example considered, that the forecasting performance of the PL method is not very sensitive to model misspecification.

(a) Forecasting results based on the correct model GARMA(0, 5)
(b) Forecasting results based on the incorrect model GARMA(0, 2)
Figure 5: Robustness results for the PL forecasting method. Note the HDRs shown contain only integer values.

4.5 Computations

All computations have been done in Matlab. The GARMA toolbox of Jalalpour (2014) has been used to estimate the model parameters. The time needed to obtain one simulation result is about 49 seconds for the one-step forecast and about 404 seconds for the two-step forecast. The Matlab program for the one-step forecast using the PL function for the GARMA model is given in the supplementary material. Other programs are available from the authors on request.

5 Concluding Remarks

In this article we considered several Poisson GARMA time series models of different autoregressive and moving average orders, and used PL functions to find a point forecast and a forecast region. The large sample properties of the estimators based on PL functions were studied. The new forecasting method proposed gives non-negative, discrete-valued forecasts and forecasting regions coherent with the sample space of the count time series under consideration.

Very often in practical situations the count data collected suffer from overdispersion. In such cases the negative binomial distribution is used instead of the Poisson distribution. One possible future extension of our work is to use the PL technique to forecast such overdispersed count time series data.

References

  • Agresti [1990] Alan Agresti. Categorical data analysis. John Wiley and Sons, Inc., New York, 1990.
  • Andrade et al. [2015] B. S. Andrade, M. G. Andrade, and R. S. Ehlers. Bayesian GARMA models for count data. Communications in Statistics: Case Studies, Data Analysis and Applications, 1:192–205, 2015.
  • Benjamin et al. [2003] Michael A. Benjamin, Robert A. Rigby, and D. Mikis Stasinopoulos. Generalized autoregressive moving average models. Journal of the American Statistical Association, 98(461):214–223, 2003. ISSN 01621459.
  • Bishop et al. [1975] Yvonne M. M. Bishop, Stephen E. Fienberg, and Paul W. Holland. Discrete multivariate analysis: theory and practice. MIT Press, Cambridge, Mass.-London, 1975.
  • Bjornstad [1990] Jan F. Bjornstad. Predictive likelihood: A review. Statistical Science, 5(2):242–254, 05 1990.
  • Cameron and Trivedi [1998] A. Colin Cameron and Pravin K. Trivedi. Regression analysis of count data, volume 30 of Econometric Society Monographs. Cambridge University Press, Cambridge, 1998. ISBN 0-521-63567-5.
  • Chan and Ledolter [1995] K. S. Chan and J. Ledolter. Monte Carlo EM estimation for time series models involving counts. Journal of the American Statistical Association, 90:242–252, 1995.
  • Cox [1981] D. R. Cox. Statistical analysis of time series: Some recent developments. Scandinavian Journal of Statistics, 8:93–115, 1981.
  • Davis et al. [2003] Richard A. Davis, William T. M. Dunsmuir, and Sarah B. Streett. Observation-driven models for Poisson counts. Biometrika, 90(4):777–790, 2003. ISSN 00063444.
  • Dunsmore [1976] I. R. Dunsmore. Asymptotic prediction analysis. Biometrika, 63(3):627–630, 1976. ISSN 00063444.
  • Durbin and Koopman [1997] J. Durbin and S. J. Koopman. Monte Carlo maximum likelihood estimation for non-Gaussian state space models. Biometrika, 84:669–684, 1997.
  • Durbin and Koopman [2000] J. Durbin and S. J. Koopman. Time series analysis of non-Gaussian observations based on state-space models from both classical and Bayesian perspectives. Journal of the Royal Statistical Society, Ser. B, 62:3–56, 2000.
  • Durbin and Koopman [2012] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2012.
  • Fahrmeir and Tutz [2001] L. Fahrmeir and G. Tutz. Multivariate Statistical Modeling Based on Generalized Linear Models. Springer-Verlag, New York, 2001.
  • Fahrmeir and Wagenpfeil [1997] L. Fahrmeir and S. Wagenpfeil. Penalized likelihood estimation and iterative Kalman filtering for non-Gaussian dynamic regression models. Computational Statistics and Data Analysis, 24:295–320, 1997.
  • Fokianos and Kedem [2004] Konstantinos Fokianos and Benjamin Kedem. Partial likelihood inference for time series following generalized linear models. Journal of Time Series Analysis, 25(2):173–197, 2004.
  • Freeland and McCabe [2004] R.K. Freeland and B.P.M. McCabe. Forecasting discrete valued low count time series. International Journal of Forecasting, 20(3):427 – 434, 2004. ISSN 0169-2070.
  • Gamerman [1998] D. Gamerman. Markov chain Monte Carlo for dynamic generalised linear models. Biometrika, 1:215–227, 1998.
  • Granger [1992] C. W. J. Granger. Forecasting stock market prices: Lessons for forecasters. International Journal of Forecasting, 8:3–13, 1992.
  • Haberman [1974] Shelby J. Haberman. The analysis of frequency data. The University of Chicago Press, Chicago, Ill.-London, 1974. Statistical Research Monographs, Vol. IV.
  • Hinkley [1979] David Hinkley. Predictive likelihood. The Annals of Statistics, 7(4):718–728, 07 1979.
  • Hyndman [1996] Rob J. Hyndman. Computing and graphing highest density regions. The American Statistician, 50(2):120–126, 1996. ISSN 00031305.
  • Jalalpour [2014] M. Jalalpour. GARMA toolbox for MATLAB, available at: http://works.bepress.com/mehdi-jalalpour/1/. 2014.
  • Jalalpour et al. [2015] M. Jalalpour, Y. Gel, and S. Levin. Forecasting demand for health services: Development of a publicly available toolbox. Operations Research for Health Care, 5:1–9, 2015.
  • Kedem and Fokianos [2005] B. Kedem and K. Fokianos. Regression Models for Time Series Analysis. John Wiley & Sons, Inc., 2005.
  • Lauritzen [1974] Steffen L. Lauritzen. Sufficiency, prediction and extreme models. Scandinavian Journal of Statistics, 1(3):128–134, 1974. ISSN 03036898, 14679469.
  • Lee et al. [2017] J. S. Lee, C. Mabel, J. K. Lim, V. M. Herrera, I. Y. Park, L. Villar, and A. Farlow. Early warning signal for dengue outbreaks and identification of high risk areas for dengue fever in colombia using climate and non-climate datasets. BMC Infectious Diseases, 17:1471–2334, 2017.
  • Lejeune and Faulkenberry [1982] Michel Lejeune and G. David Faulkenberry. A simple predictive density function. Journal of the American Statistical Association, 77(379):654–657, 1982. ISSN 01621459.
  • Li [1994] W. K. Li. Time series models based on generalized linear models: Some further results. Biometrics, 50:506–511, 1994.
  • Maiti et al. [2016] R. Maiti, A. Biswas, and S. Das. Coherent forecasting for count time series using Box–Jenkins AR(p) model. Statistica Neerlandica, 70:123–145, 2016.
  • Mathiasen [1979] P. E. Mathiasen. Prediction functions. Scandinavian Journal of Statistics, 6(1):1–21, 1979. ISSN 03036898, 14679469.
  • McCabe and Martin [2005] B.P.M. McCabe and G.M. Martin. Bayesian predictions of low count time series. International Journal of Forecasting, 21(2):315–330, 2005. ISSN 0169-2070.
  • McCabe et al. [2011] Brendan P. M. McCabe, Gael M. Martin, and David Harris. Efficient probabilistic forecasts for counts. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 73(2):253–272, 2011. ISSN 13697412, 14679868.
  • McKenzie [1988] E. McKenzie. Some ARMA models for dependent sequences of Poisson counts. Advances in Applied Probability, 20:822–835, 1988.
  • Myers et al. [2000] M. F. Myers, D. J. Rogers, J. Cox, A. Flahault, and S. I. Hay. Forecasting disease risk for increased epidemic preparedness in public health. Advances in parasitology, 47:309–330, 2000.
  • Sen and Singer [1994] P.K. Sen and J.M. Singer. Large Sample Methods in Statistics: An Introduction with Applications. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis, 1994. ISBN 9780412042218.
  • Shephard [1995] N. Shephard. Generalized linear autoregression. Technical report, Nuffield College, Oxford University, 1995. Available at http://www.nu.ox.ac.uk/economics/papers/1996/w8/glar.ps.
  • Shephard and Pitt [1997] N. Shephard and M. K. Pitt. Likelihood analysis of non-Gaussian measurement time series. Biometrika, 84:653–667, 1997.
  • Shi et al. [2016] Y. Shi, X. Liu, S. Y. Kok, J. Rajarethinam, S. Liang, G. Yap, C. S. Chong, K.S. Lee, S. S. Tan, C. K. Chin, A. Lo, W. Kong, L. C. Ng, and A. R. Cook. Three-month real-time dengue forecast models: an early warning system for outbreak alerts and policy decision support in Singapore. Environmental Health Perspectives, 124:1369–1375, 2016.
  • West et al. [1985] M. West, P. J. Harrison, and H. S. Migon. Dynamic generalized linear models and Bayesian forecasting. Journal of the American Statistical Association, 80:73–96, 1985.
  • Winkelmann [2000] Rainer Winkelmann. Econometric analysis of count data. Springer-Verlag, Berlin, 2000. ISBN 3-540-67340-7.
  • Ytterstad [1991] E. Ytterstad. Predictive likelihood predictors for the AR(1) model: A small sample simulation. Scandinavian Journal of Statistics, 18(2):97–110, 1991. ISSN 03036898, 14679469.
  • Zeger and Qaqish [1988] Scott L. Zeger and Bahjat Qaqish. Markov regression models for time series: a quasi-likelihood approach. Biometrics, 44(4):1019–1031, 1988.