Penalised complexity priors for stationary autoregressive processes

The autoregressive process of order p (AR(p)) is a central model in time series analysis. A Bayesian approach requires the user to define a prior distribution for the coefficients of the AR(p) model. Although it is easy to write down some prior, it is not at all obvious how to understand and interpret the prior, and to ensure that it behaves according to the user's prior knowledge. In this paper, we approach this problem using the recently developed ideas of penalised complexity (PC) priors. These priors have important properties like robustness and invariance to reparameterisations, as well as a clear interpretation. A PC prior is computed based on specific principles, where model component complexity is penalised in terms of deviation from simple base model formulations. In the AR(1) case, we discuss two natural base model choices, corresponding to either independence in time or no change in time. The latter case is illustrated in a survival model with a possibly time-dependent frailty. For higher-order processes, we propose a sequential approach, in which the base model for the AR(p) process is the corresponding AR(p−1) model, expressed using the partial autocorrelations. The properties of the new prior are compared with the reference prior in a simulation study.

Keywords: AR, R-INLA, prior selection, robustness.

1 Introduction

Autoregressive (AR) processes are widely applied to model time-varying stochastic processes, for example within finance, biostatistics and the natural sciences (Brockwell and Davis, 2002; Chatfield, 2003; Prado and West, 2010). Applications also include Bayesian model formulations, often combined with Markov chain Monte Carlo computations to perform posterior and predictive inference (Albert and Chib, 1993; Chib, 1993; Barnett et al., 1996). In particular, AR processes are useful for modelling underlying latent dependency structure, and they make up important building blocks in complex hierarchical models, for example in analysing spatial data (Lesage, 1997; Sahu et al., 2007; Sahu and Bakar, 2012).

In fitting an AR(p) process using a Bayesian approach, it is necessary to select priors for all model parameters. A simple choice is to assign uniform priors to the regression coefficients (Zellner, 1971; DeJong and Whiteman, 1991), but this is optimal neither for first-order nor for higher-order processes (Berger and Yang, 1994). A more reasonable approach is given by Liseo and Macaro (2013), who provide a general framework to compute both Jeffreys and reference priors using the well-known partial autocorrelation function (PACF) parameterisation (Barndorff-Nielsen and Schou, 1973). Stationarity of the AR(p) process is equivalent to choosing the partial autocorrelations within the p-dimensional unit hypercube (−1, 1)^p. In general, Jeffreys priors are invariant to reparameterisations, while reference priors are not. Liseo and Macaro (2013) recommend reference priors, at least when the order of the AR process is less than or equal to 4. For higher-order processes, calculation of the reference prior is numerically cumbersome and requires extensions of their suggested numerical approximation.

This paper derives and investigates penalised complexity (PC) priors (Simpson et al., 2016) for the partial autocorrelations of stationary AR processes of any finite order. In general, a PC prior is computed based on specific underlying principles, in which a model component is seen as a flexible parameterisation of a simple base model structure. The main idea is to assign a prior to a measure of divergence from the flexible version of the component to its base model, and the PC prior for the relevant parameter is derived by transformation. In the AR(1) case, this implies that the PC prior for the first-lag coefficient ϕ can be derived using white noise (ϕ = 0) as a base model. Alternatively, we can view the limiting random walk case (ϕ = 1), representing no change in time, as a base model. Which of these base models represents a natural choice depends on the relevant application.

In the higher-order AR(p) case, we introduce a sequential approach to construct a PC prior for the pth partial autocorrelation, using the corresponding AR(p−1) process as a base model. The resulting joint prior for the partial autocorrelations is consistent under marginalisation, and each of the marginals can be adjusted according to a user-defined scaling criterion. The scaling is important and prescribes the degree of informativeness of the prior. Here, we suggest incorporating a scaling criterion using the variance of the one-step-ahead forecast error, also allowing for different rates of shrinkage for each of the partial autocorrelations. The resulting priors have good robustness properties and are also seen to have frequentistic properties comparable with those of reference priors.

The plan of this paper is as follows. PC priors and their properties are reviewed in Section 2. We derive PC priors for the coefficient of an AR(1) process in Section 3, using the two mentioned base models. PC priors are designed to prevent overfitting and this property is demonstrated for a real data example in Section 4, where an AR(1) process is used to model time-dependent frailty in a Cox proportional hazard model. Contrary to previous results (Fleming and Harrington, 2005; Yau and McGilchrist, 1998), the given data on chronic granulomatous disease do not seem to support the additional introduction of a time-varying frailty. Extension of the PC priors to higher-order AR processes is given in Section 5, including incorporation of interpretable scaling parameters to adjust the rate of shrinkage. Section 6 contains simulation results, comparing the performance of the PC and reference priors, while concluding remarks are given in Section 7.

2 Penalised complexity priors and their properties

The framework of PC priors (Simpson et al., 2016) represents a systematic and unified approach to computing priors for parameters of model components with an inherent nested structure. A simple version of the model component is referred to as a base model, typically characterised by a fixed value of the relevant parameter, while the flexible version is seen as a function of the random parameter. The PC prior is computed to penalise the deviation of the flexible model from the fixed base model. This section gives a brief review of PC priors and their properties in the context of AR processes.

2.1 A brief review on the principles underlying PC priors

The informativeness of PC priors is specified in terms of four main principles, stated in Simpson et al. (2016). These principles are useful both to compute priors in a unified way and to understand their properties. The principles, summarised below, express support for Occam's razor, penalisation of model complexity using the Kullback-Leibler divergence, constant rate penalisation and user-defined scaling; see Simpson et al. (2016) for a thorough description of PC priors and their applications.

  1. Let f(x | ξ) denote the density of a model component in which we aim to find a prior for the parameter ξ. A simpler structure of this model component is characterised by the density f(x | ξ = ξ₀), where ξ₀ is a fixed value. In accordance with the principle of parsimony expressed by Occam's razor, the prior for ξ should be designed to give proper shrinkage to the base model ξ₀ and to decay with increasing complexity of f(x | ξ).

  2. In order to characterise the complexity of f(x | ξ) compared with f(x | ξ₀), we calculate a measure of complexity between these two densities. PC priors are derived using the Kullback-Leibler divergence (Kullback and Leibler, 1951),

    \mathrm{KLD}(f \,\|\, f_0) = \int f(x \mid \xi) \log \frac{f(x \mid \xi)}{f(x \mid \xi_0)} \, dx,

    which measures the information lost when the flexible model f(x | ξ) is approximated by the simpler model f(x | ξ₀). For zero-mean multi-normal densities, calculation of the Kullback-Leibler divergence simplifies to performing simple matrix computations on the covariance matrices, as

    \mathrm{KLD} = \frac{1}{2}\left\{ \mathrm{tr}\left(\Sigma_0^{-1}\Sigma_1\right) - n - \ln\frac{|\Sigma_1|}{|\Sigma_0|} \right\},

    where Σ₁ and Σ₀ denote the covariance matrices of the flexible model and the base model, respectively, while n is the dimension. To facilitate interpretation, the Kullback-Leibler divergence is transformed to a unidirectional distance measure

    d(\xi) = \sqrt{2\,\mathrm{KLD}(f \,\|\, f_0)}.   (1)

    This is not a distance metric in the ordinary sense, but a quantity which is interpretable as a measure of distance from the flexible model f(x | ξ) to the base model f(x | ξ₀).

  3. In choosing a prior for the distance measure d(ξ), it is natural to assume that the mode should be located at the base model while the density decays as the distance from the base model increases. The PC prior is derived based on a principle of constant rate penalisation,

    \frac{\pi_d(d + \delta)}{\pi_d(d)} = r^{\delta}, \qquad d, \delta \ge 0,   (2)

    where 0 < r < 1. This implies that the relative change in the prior for d(ξ) is independent of the actual distance. This is a reasonable choice, as it is complicated to properly characterise different decay rates for different distances. The resulting prior is exponentially distributed, π_d(d) = λ exp(−λd), d ≥ 0, where λ = −ln r, and the corresponding PC prior for ξ follows by a standard change of variable transformation.

  4. The rate λ characterises the shrinkage properties of the prior and it is important that this parameter can be chosen (implicitly) in an intuitive and interpretable way, for example by a user-defined probability statement for the parameter of interest. Simpson et al. (2016) suggest to determine λ by incorporating a probability statement on tail events, e.g.

    P\left(Q(\xi) > U\right) = \alpha,   (3)

    where U represents an assumed upper limit for an interpretable transformation Q(ξ), while α is a small probability. However, other scaling suggestions might be just as reasonable, depending on the specific application.
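Taken together, the second and fourth principles are easy to make concrete in code. The sketch below (in Python; the helper names are ours, and the matrix routine is written out for the 2×2 case only, for brevity) computes the zero-mean Gaussian Kullback-Leibler divergence KLD = ½{tr(Σ₀⁻¹Σ₁) − n − ln(|Σ₁|/|Σ₀|)} and the distance d = √(2 KLD); for a 2×2 AR(1)-type correlation matrix against the identity, d collapses to √(−ln(1 − ϕ²)).

```python
import math

def kld_zero_mean_gauss_2x2(S1, S0):
    """KLD(N(0, S1) || N(0, S0)) = 0.5 * {tr(S0^{-1} S1) - n - ln(|S1|/|S0|)},
    written out explicitly for 2x2 covariance matrices (n = 2)."""
    det0 = S0[0][0] * S0[1][1] - S0[0][1] * S0[1][0]
    det1 = S1[0][0] * S1[1][1] - S1[0][1] * S1[1][0]
    # explicit 2x2 inverse of the base-model covariance
    inv0 = [[ S0[1][1] / det0, -S0[0][1] / det0],
            [-S0[1][0] / det0,  S0[0][0] / det0]]
    # tr(A B) = sum_i sum_j A[i][j] * B[j][i]
    trace = sum(inv0[i][j] * S1[j][i] for i in range(2) for j in range(2))
    return 0.5 * (trace - 2 - math.log(det1 / det0))

def distance(S1, S0):
    """Unidirectional distance d = sqrt(2 KLD) from the flexible to the base model."""
    return math.sqrt(2.0 * kld_zero_mean_gauss_2x2(S1, S0))

# AR(1)-type correlation with n = 2 against the white-noise base model
phi = 0.6
R_phi = [[1.0, phi], [phi, 1.0]]
identity = [[1.0, 0.0], [0.0, 1.0]]
d = distance(R_phi, identity)  # equals sqrt(-ln(1 - phi^2)) here
```

With an exponential prior on d, the change of variable in the third principle then yields the PC prior for the parameter of interest directly.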

2.2 Important properties of PC priors in the context of AR processes

The given four principles provide a strategy to calculate priors for model parameters in a systematic way, rather than turning to ad-hoc prior choices still often made in Bayesian literature. Also, the principles can be helpful to interpret the assumed prior information and how this influences posterior results.

A first important property of PC priors is invariance to reparameterisations. This follows automatically as the prior is derived based on a measure of divergence between models, which does not depend on the specific model parameterisation. We consider the invariance property to be particularly useful in the case of autoregressive processes, as these are typically parameterised either in terms of the regression coefficients, or by using the partial autocorrelations. The great benefit of using the partial autocorrelations is that these vary freely and independently within (−1, 1), ensuring a positive definite correlation matrix. In contrast, the valid parameter space for the regression coefficients is rather complicated, especially for higher-order processes (p > 1).

Second, the PC priors are designed to shrink towards well-defined base models. In the setting of autoregressive processes, this implies that the priors will prevent overfitting, for example in terms of selecting an unnecessarily high order of the process. In addition, the base model can be chosen to reflect different simple structures of a model component, depending on the given application. For an AR(1) process, it is relevant to assume either no dependency, or no change in time, as simple base model formulations. For higher-order processes, we could also choose no correlation as a base model but this might cause too much shrinkage in many applications. As an alternative, we introduce a new sequential approach which defines a sequence of base models, reflecting the additional complexity in increasing the order of the fitted AR process.

Third, PC priors are computationally simple and are already implemented within the R-INLA framework (Rue et al., 2009; Martins et al., 2013), for different latent Gaussian model components. The priors are designed to have a clear interpretation, as the informativeness of the priors is adjusted by user-defined scaling. Here, we will take advantage of this to allow for different rates of shrinkage for priors assigned to partial autocorrelations of different lags. In contrast, objective priors simply aim to incorporate as little information into the inference as possible.

3 PC priors for AR(1) using two different base models

A first-order autoregressive process can be defined by

    x_t = \phi x_{t-1} + \varepsilon_t, \qquad |\phi| < 1, \quad t = 2, \ldots, n,

where the innovations ε_t are independent and normally distributed with mean 0, and the process is scaled to have marginal precision τ, i.e. ε_t ∼ N(0, (1 − ϕ²)/τ). This process represents an important special case of general autoregressive processes, in which the dependency structure is governed by the coefficient ϕ. Using the framework of penalised complexity priors, ϕ is viewed as a flexibility parameter reflecting deviation from simple fixed base model formulations. In this section, we derive PC priors for ϕ using both no autocorrelation (ϕ = 0) and no change in time (ϕ = 1) as base models, and we suggest how these priors can be scaled. A real-data application using the latter base model is included in Section 4.

Note that we also use a penalised complexity prior for the precision parameter τ. Following Simpson et al. (2016), this prior is derived using infinite precision as a base model, which gives the type-2 Gumbel distribution

    \pi(\tau) = \frac{\lambda}{2}\, \tau^{-3/2} \exp\left(-\lambda \tau^{-1/2}\right), \qquad \tau > 0, \; \lambda > 0.   (4)

The rate λ is inferred using the probability statement P(σ > U) = α, where σ = 1/√τ denotes the marginal standard deviation and α is a small probability. The prior is thus scaled by specifying an upper limit U for the marginal standard deviation, in which the corresponding rate is λ = −ln(α)/U. To make an intuitive choice for U, one can consider the marginal standard deviation after the precision is integrated out. For example, if α = 0.01, this standard deviation is 0.31U (Simpson et al., 2016).
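As a numerical check of this calibration (function names are ours), the sketch below evaluates the type-2 Gumbel density π(τ) = (λ/2)τ^{−3/2}exp(−λτ^{−1/2}) with λ = −ln(α)/U, and verifies P(σ > U) = α by integrating the density over {τ < 1/U²}.

```python
import math

def type2_gumbel_pdf(tau, lam):
    """PC prior (4) for a precision: pi(tau) = (lam/2) tau^{-3/2} exp(-lam / sqrt(tau))."""
    return 0.5 * lam * tau ** (-1.5) * math.exp(-lam / math.sqrt(tau))

U, alpha = 1.0, 0.01
lam = -math.log(alpha) / U          # rate implied by P(sigma > U) = alpha

# P(sigma > U) = P(tau < 1/U^2): midpoint rule over (0, 1/U^2)
m = 100_000
upper = 1.0 / U ** 2
h = upper / m
prob = sum(type2_gumbel_pdf((i + 0.5) * h, lam) for i in range(m)) * h
```

The transformation σ = τ^{−1/2} turns this prior into a plain exponential for σ, so P(σ > U) = exp(−λU) = α holds exactly.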

3.1 Base model: No dependency in time

In general, the correlation matrix of the first-order autoregressive process is R(ϕ), with elements r_ij = ϕ^{|i−j|}. Choosing no autocorrelation (ϕ = 0) as a base model, the resulting process is white noise with correlation matrix equal to the identity matrix, R(0) = I. By simple matrix calculations, the distance function (1) is seen to equal

    d(\phi) = \sqrt{-(n-1)\ln\left(1 - \phi^2\right)}.

Using the principle of constant rate penalisation (2), an exponential prior is assigned to d(ϕ), with a rate proportional to √(n − 1). The resulting prior is invariant to n and, by the ordinary transformation of variable formula, the PC prior for the one-lag autocorrelation ϕ is

    \pi(\phi) = \frac{\lambda}{2}\, \frac{|\phi|}{(1-\phi^2)\sqrt{-\ln(1-\phi^2)}}\, \exp\left(-\lambda\sqrt{-\ln(1-\phi^2)}\right), \qquad |\phi| < 1.   (5)
The rate parameter λ is important as it influences how fast the prior shrinks towards the white noise base model. To infer λ, we need a sensible criterion which facilitates the interpretation of this parameter. Simpson et al. (2016) suggest to use a probability statement for an interpretable transformation of the parameter of interest, for example in terms of tail events as defined by (3). When the base model is ϕ = 0, a reasonable alternative is to define such a tail event as large absolute correlations being less likely, i.e.

    P(|\phi| > U) = \alpha.

This implies that λ = −ln(α)/√(−ln(1 − U²)). The interpretation of this criterion is intuitive in the first-order case, but we find it difficult to use in practice for higher-order processes. An alternative scaling idea is presented in Section 5.2, where we consider the variance of the one-step forecast error as the order of the AR process is increased. We recommend the latter approach, as it is more easily applied to general AR processes.
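A minimal sketch of this prior and its tail-based calibration (function names are ours): the density is (5), and since d(ϕ) = √(−ln(1 − ϕ²)) is monotone in |ϕ|, the tail probability is the exponential survival function evaluated at d(U), which is what rate_from_tail inverts.

```python
import math

def dist(phi):
    """Distance to the white-noise base model, d(phi) = sqrt(-ln(1 - phi^2))."""
    return math.sqrt(-math.log(1.0 - phi * phi))

def pc_prior_base0(phi, lam):
    """PC prior (5) for the lag-one correlation, base model phi = 0."""
    d = dist(phi)
    return 0.5 * lam * math.exp(-lam * d) * abs(phi) / ((1.0 - phi * phi) * d)

def rate_from_tail(U, alpha):
    """Rate lambda such that P(|phi| > U) = exp(-lam * dist(U)) = alpha."""
    return -math.log(alpha) / dist(U)

lam = rate_from_tail(0.5, 0.05)  # illustrative choice U = 0.5, alpha = 0.05
```

The density is symmetric in ϕ, with half of the exponential mass pushed onto each sign of the correlation.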

3.2 Base model: No change in time

An alternative base model for the AR(1) process is to assume that the process does not change in time (ϕ = 1). This represents the limiting random walk case, which is a non-stationary and singular process. Consequently, a limiting argument is needed to derive the PC prior for ϕ.

Let the flexible model have correlation matrix Σ₁ = R(ϕ) and define the base model by Σ₀ = R(ϕ₀), where ϕ₀ is close to 1 and ϕ < ϕ₀. In this case, the Kullback-Leibler divergence is

    \mathrm{KLD} = \frac{1}{2}\left\{\mathrm{tr}\left(R(\phi_0)^{-1} R(\phi)\right) - n - \ln\frac{|R(\phi)|}{|R(\phi_0)|}\right\}.

Considering the limiting value as ϕ₀ → 1, the distance

    d(\phi) = c\,\sqrt{1 - \phi},

for a constant c that does not depend on ϕ and which can be absorbed into the rate parameter. Since 1 − ϕ ∈ (0, 2), the distance is bounded, and we assign a truncated exponential distribution to d(ϕ) ∈ (0, √2) with rate λ. The resulting PC prior for ϕ is

    \pi(\phi) = \frac{\lambda \exp\left(-\lambda\sqrt{1-\phi}\right)}{2\sqrt{1-\phi}\left(1 - \exp(-\sqrt{2}\,\lambda)\right)}, \qquad -1 < \phi < 1.   (6)
Again, we need to suggest an intuitive criterion to scale the prior in terms of λ. This case requires separate consideration, as it cannot be seen as a special case of the approach in Section 5. One option is to make use of (3), and determine λ in terms of the probability statement P(ϕ > U) = α. The solution to this equation is given implicitly by

    \frac{1 - \exp\left(-\lambda\sqrt{1-U}\right)}{1 - \exp\left(-\sqrt{2}\,\lambda\right)} = \alpha,

provided that α is larger than the lower limit √((1 − U)/2).
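The implicit equation above is monotone increasing in λ, from the lower limit √((1 − U)/2) towards 1, so a simple bisection recovers the rate. A sketch (function names and the illustrative values U = 0.9, α = 0.75 are ours):

```python
import math

def tail_prob(lam, U):
    """P(phi > U) under (6): P(d < sqrt(1 - U)) for a truncated exponential
    on d in (0, sqrt(2)) with rate lam."""
    return ((1.0 - math.exp(-lam * math.sqrt(1.0 - U)))
            / (1.0 - math.exp(-math.sqrt(2.0) * lam)))

def rate_from_tail(U, alpha, lo=1e-9, hi=200.0):
    """Solve tail_prob(lam, U) = alpha by bisection; needs alpha > sqrt((1-U)/2)."""
    assert alpha > math.sqrt((1.0 - U) / 2.0), "no solution below the lower limit"
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail_prob(mid, U) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = rate_from_tail(0.9, 0.75)
```

For a base model at ϕ = 1, α is naturally chosen large, placing most of the prior mass on high correlations.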

3.3 The PC priors versus the reference prior

The two alternative PC priors for the first-lag coefficient ϕ of an AR(1) process are illustrated in Figure 1, using rate parameter λ = 2 in (5) and (6), respectively. For comparison, we also illustrate the reference prior defined by π(ϕ) ∝ (1 − ϕ²)^{−1/2}, |ϕ| < 1 (Barndorff-Nielsen and Schou, 1973; Berger and Yang, 1994; Liseo and Macaro, 2013).

Figure 1: The PC priors for the coefficient ϕ of AR(1), using ϕ = 0 (solid thick line) and ϕ = 1 (dashed line) as base models. The rate parameters λ in (5) and (6) are set equal to 2 in both cases. For comparison we also include the reference prior for ϕ (dotted line).

In general, reference priors are designed to give objective Bayesian inference, being least informative in a certain information-theoretic sense (Berger et al., 2009). This implies that the data are given a maximum effect on the posterior estimates. In general, the reference prior is calculated to maximise a measure of divergence from the posterior to the prior. In the given AR(1) case, the reference prior for ϕ is calculated to maximise an asymptotic version of the expected Kullback-Leibler divergence, in practice using an asymptotic version of the Fisher information matrix (Barndorff-Nielsen and Schou, 1973; Liseo and Macaro, 2013). The resulting reference prior is seen to be similar to the Jeffreys prior, which is defined (up to a constant) by the square root of the determinant of the Fisher information matrix (Liseo and Macaro, 2013). Using a small rate parameter, the PC prior with base ϕ = 0 will be quite similar to the reference/Jeffreys prior, but for increasing rate parameters, the effect of shrinkage towards 0 is increased. Note that a PC prior using ϕ = −1 as the base model can be derived similarly to the ϕ = 1 case.

4 Application: Modeling time-varying frailty with AR(1)

To demonstrate the use of the PC prior for the lag-one autocorrelation, we consider an example of a Cox proportional hazard model with time-varying frailty. The Cox proportional hazard model is a popular type of survival model that can be fitted to recurrent event data. It assumes that the time-varying hazard for the ith subject can be expressed as h_i(t) = h₀(t) exp(η_i), where the combined risk variable η_i in most cases depends on subject-specific covariates and contributions from random effects/frailty. The function h₀(t) is the baseline hazard; see Fleming and Harrington (2005) for further details and applications of the model. In the given example, our main focus is on the inclusion of a subject-specific and possibly time-dependent frailty term in η_i.

4.1 Dependent Gaussian random effects

A full Bayesian analysis of the Cox proportional hazard model requires a model for the baseline hazard. A natural choice is to consider the log baseline hazard as a piecewise constant function on small time intervals, and impose smoothness to penalise deviations from a constant, see for example Fahrmeir and Tutz (2001, Sec 8.1.1) and Rue and Held (2005, Sec. 3.3.1). Let (0, T] be the time interval of interest, and divide that interval into K equidistant (for simplicity) intervals. Let b_j, j = 1, …, K, denote the log baseline hazard in the jth interval. The first order random walk (RW1) model imposes smoothing among neighbouring b_j's,

    b_{j+1} - b_j \sim \mathcal{N}\left(0, (\kappa\tau)^{-1}\right), \qquad j = 1, \ldots, K-1.

This is a first-order intrinsic Gaussian Markov random field with a covariance matrix of the form τ^{−1}Σ, where Σ is singular and of rank K − 1. The parameter κ is a positive scaling constant which is added such that the generalised variance (the geometric mean of the diagonal elements of Σ) is 1. This is needed to make the model invariant to the size of K and to unify the interpretation of τ, which then represents the precision of the (marginal) deviation from the null space of Σ, see Sørbye and Rue (2014) and Simpson et al. (2016) for further details. To separate the baseline hazard from the intercept, we impose the constraint Σ_j b_j = 0. The base model is a constant (in time) baseline hazard, which corresponds to infinite smoothing, τ → ∞. The resulting penalised complexity prior for τ is given by (4).

An interesting extension to the commonly used subject-specific frailty model is to allow the frailty term to depend on time (Yau and McGilchrist, 1998), leading to a time-dependent combined risk variable η_i(t). Anticipating a positive correlation in time, it is natural to model this time-dependent risk using a continuous-time Ornstein-Uhlenbeck process or its discrete-time version given by AR(1). The stationary AR(1) model for subject i's specific frailty u_{i,t} is given by

    u_{i,t} = \phi u_{i,t-1} + \varepsilon_{i,t}, \qquad \varepsilon_{i,t} \sim \mathcal{N}\left(0, (1-\phi^2)/\tau\right),

parameterised so that τ is the marginal precision and ϕ is the lag-one correlation. For this model component, the natural base model (keeping the marginal precision constant) is a time-constant frailty (ϕ = 1), for which we use the PC prior for ϕ in (6). For a fixed correlation ϕ, the base model for the precision is a frailty constant at zero, which gives the type-2 Gumbel prior in (4).

4.2 Analysis of chronic granulomatous disease data

We end this section by analysing data on chronic granulomatous disease (CGD) (Fleming and Harrington, 2005), available in R as the cgd dataset in the survival package. This data set consists of 128 patients from 13 hospitals with CGD. These patients participated in a double-blinded placebo controlled randomised trial, in which a treatment using gamma interferon (γ-IFN) was used to avoid or reduce the number of infections suffered by the patients. The recorded number of CGD infections for each patient ranged from zero to a maximum of seven, and the survival times are given as the times between recurrent infections on each patient. We follow Yau and McGilchrist (1998) and introduce a deterministic time-dependent covariate for each patient, given as the time since the first infection (if any). Additionally, we include the covariates treatment (placebo or γ-IFN), inherit (pattern of inheritance), age (in years), height (in cm), weight (in kg), propylac (use of prophylactic antibiotics at study entry), sex, region (US or Europe), and steroids (presence of corticosteroids) (Manda and Meyer, 2005; Yau and McGilchrist, 1998). The covariates age, height and weight were scaled before the analysis.

The computations were performed using the R-INLA package, by rewriting the model as a larger Poisson regression, see Fahrmeir and Tutz (2001) for a more general discussion and Martino et al. (2010) for R-INLA specific details. The prior specifications are as follows. We used a constant prior for the intercept and independent zero-mean Gaussian priors with low precision for all the fixed effects. For the log baseline hazard with K segments, we used the type-2 Gumbel prior with parameters giving a small marginal standard deviation for the log baseline hazard. This seems adequate as we do not expect the log baseline hazard to be highly variable. The time-dependent frailty was assigned a type-2 Gumbel prior for the precision with parameters giving a moderate marginal standard deviation, hence allowing for moderate subject-specific variation. For the derived prior (6) for ϕ, we used parameters which put most of the prior mass on high values of ϕ, corresponding to a large rate parameter λ in (6).

Figure 2 (a) shows the prior (dashed) and posterior (solid) densities for the autocorrelation coefficient ϕ of the AR(1) model for the frailty. The data hardly alter the prior at all, showing that there is not much information in the data available for this parameter, and we cannot conclude anything about the time-varying frailty. This is contrary to the findings in Manda and Meyer (2005) and Yau and McGilchrist (1998). Figure 2 (b) displays the log baseline hazard, showing an increasing trend (additional to the deterministic time-dependent covariate), but the wide point-wise credible bands give no clear evidence for a time-dependent baseline hazard. With the new prior we are more confident that we do not overfit the data using the more flexible model for the log baseline hazard, as we control the deviation from the base model and the shrinkage towards it. The given conclusions are robust to changes in the parameter choices for the different model components.


Figure 2: Panel (a) displays the posterior density (solid) and prior density (dashed) for the lag-one autocorrelation ϕ in the AR(1) model for the time-dependent frailty. Panel (b) displays the log baseline hazard: mean (solid), median (dashed-dotted), lower (2.5%, dashed) and upper (97.5%, dotted) quantiles.

5 Deriving PC priors for higher-order AR processes

Define an autoregressive process of order p by

    x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}\left(0, \tau_\varepsilon^{-1}\right),   (7)

where ϕ = (ϕ₁, …, ϕ_p) is a p-dimensional vector of regression coefficients and τ_ε is the precision of the innovations. The corresponding correlation matrix is Toeplitz (Gray, 2002), with elements that can be expressed as r_ij = ρ(|i − j|), where ρ(·) denotes the autocorrelation function and ρ(0) = 1. Although (7) is a natural parameterisation for known parameter values ϕ, it is an awkward parameterisation when these are unknown, as the positive definiteness requirement of the correlation matrix makes the space of valid ϕ complicated for p > 1. This implies that it is necessary to impose a number of non-linear constraints on these coefficients to define a stationary process.

A good alternative is to make use of the invariance property of the PC prior and define the prior for ϕ implicitly. The basic idea, which is commonly used when estimating AR parameters, is to assign the prior to the partial autocorrelations ψ = (ψ₁, …, ψ_p), where |ψ_k| < 1 for k = 1, …, p. This gives a useful unconstrained set of parameters for this problem. Furthermore, there is a smooth bijective mapping between the partial autocorrelations ψ and the coefficients ϕ in (7), given by the Levinson-Durbin recursions (Monahan, 1984; Golub and van Loan, 1996).
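The Levinson-Durbin recursion referred to here maps any point of the open hypercube to a stationary coefficient vector. A minimal sketch (the function name is ours):

```python
def pacf_to_ar(psi):
    """Map partial autocorrelations (psi_1, ..., psi_p), each in (-1, 1), to the
    AR regression coefficients (phi_1, ..., phi_p) via the Levinson-Durbin
    recursion: phi_j^(k) = phi_j^(k-1) - psi_k * phi_{k-j}^(k-1), phi_k^(k) = psi_k."""
    phi = []
    for k, psi_k in enumerate(psi, start=1):
        # update the first k-1 coefficients, then append psi_k as the kth one
        phi = [phi[j] - psi_k * phi[k - 2 - j] for j in range(k - 1)] + [psi_k]
    return phi

# AR(2) example: phi_1 = psi_1 (1 - psi_2), phi_2 = psi_2
coef = pacf_to_ar([0.5, 0.3])
```

Running the recursion lag by lag also makes the sequential construction of the next subsection natural: adding ψ_p leaves the implied autocorrelations up to lag p − 1 unchanged.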

5.1 A sequential approach to construct PC priors

In deriving PC priors for the partial autocorrelations of an AR process, we suggest to use a sequential approach, augmenting the partial autocorrelations one by one. Define the flexible model as the AR(p) process and the base model as the corresponding AR(p−1) process, i.e. ψ_p = 0. We calculate the Kullback-Leibler divergence, conditional on the terms already included in the model,

    \mathrm{KLD}_p = \int f_p(x) \log\frac{f_p(x)}{f_{p-1}(x)}\, dx = \frac{1}{2}\left\{\mathrm{tr}\left(Q_{p-1}\Sigma_p\right) - n - \ln\frac{|\Sigma_p|}{|\Sigma_{p-1}|}\right\},

where x = (x₁, …, x_n), f_p and f_{p−1} represent the densities of the AR(p) and AR(p−1) processes, respectively, Σ_p and Σ_{p−1} denote the corresponding covariance matrices, and Q_{p−1} = Σ_{p−1}^{−1}. Notice that by augmenting the partial autocorrelations with one (or several) terms, the correlation structure between the first p elements of the corresponding AR process remains unchanged. As the inverse correlation matrix of the AR(p−1) process is a band matrix of order p − 1, we immediately notice that

    \mathrm{tr}\left(Q_{p-1}\Sigma_p\right) = \mathrm{tr}\left(Q_{p-1}\Sigma_{p-1}\right) = n.

Also,

    \frac{|\Sigma_p|}{|\Sigma_{p-1}|} = \left(1 - \psi_p^2\right)^{n-p}.

The resulting measure of distance from the AR(p) model to its base AR(p−1) is only a function of the pth order partial autocorrelation, i.e.,

    d(\psi_p) = \sqrt{-(n-p)\ln\left(1 - \psi_p^2\right)}.

Applying the principle of constant rate penalisation (2), an exponential density with a rate proportional to √(n − p) is assigned to d(ψ_p). The resulting prior for the pth partial autocorrelation is

    \pi(\psi_p) = \frac{\lambda_p}{2}\, \frac{|\psi_p|}{(1-\psi_p^2)\sqrt{-\ln(1-\psi_p^2)}}\, \exp\left(-\lambda_p\sqrt{-\ln(1-\psi_p^2)}\right), \qquad |\psi_p| < 1,   (8)

where the parameter λ_p influences how fast the prior shrinks towards the base model.
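Because (8) is simply the push-forward of an exponential through d(ψ) = √(−ln(1 − ψ²)), sampling from it is immediate: draw d ∼ Exp(λ_p), set ψ_p = ±√(1 − exp(−d²)) with a random sign. A sketch (the function name is ours):

```python
import math
import random

def sample_psi(lam, rng=random):
    """Draw one partial autocorrelation from the PC prior (8) with rate lam:
    the distance d ~ Exp(lam) is pushed through psi = +/- sqrt(1 - exp(-d^2))."""
    d = rng.expovariate(lam)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * math.sqrt(1.0 - math.exp(-d * d))

random.seed(1)
draws = [sample_psi(2.0) for _ in range(1000)]
```

Combined with the Levinson-Durbin recursions, such draws give prior samples of stationary AR(p) coefficient vectors, which is a convenient way to visualise the implied prior on ϕ.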

The given formulation allows us to derive interpretable conditional priors for each of the partial autocorrelations ψ_k, given the previous parameters ψ₁, …, ψ_{k−1}. As the resulting priors are conditionally independent, the partial autocorrelations are seen to be consistent under marginalisation (as discussed in West (1991) in the context of kernel density estimation). Also, the marginal model for an AR(p) process is not influenced by higher-order partial autocorrelations when these are 0, i.e. for k > p:

    \pi(\psi_1, \ldots, \psi_p) = \int \pi(\psi_1, \ldots, \psi_p, \psi_{p+1}, \ldots, \psi_k)\, d\psi_{p+1} \cdots d\psi_k.

5.2 Controlling shrinkage properties

The given sequential approach implies that the priors for partial autocorrelations of different lags have the same functional form, but potentially different rate parameters. The next step is to determine a reasonable criterion to choose the rates λ_k in (8). Our suggestion is motivated by the conditional variance of the one-step-ahead forecast error for an AR(p) process with fixed marginal precision τ,

    \sigma_p^2 = \tau^{-1} \prod_{k=1}^{p} \left(1 - \psi_k^2\right),

and the observation that 1 − ψ_k² often is a non-decreasing function of k. We assume that

    \mathrm{E}\left(\psi_k^2\right) = (1-\gamma)\, b^{\,k-1}, \qquad 0 < \gamma < 1, \quad 0 < b \le 1,

so the expected relative reduction in one-step-ahead prediction error variance, a priori, is non-decreasing with k. This reduces the prior specification into two parameters, γ and b, which have to be specified by the user. The parameter γ represents the initial expectation γ = E(1 − ψ₁²). The choice b = 1 induces the same shrinkage for all k, while b < 1 gives increasing shrinkage for increasing k. For given values of γ and b, the corresponding value for the rate parameter λ_k in (8) is found by solving

    1 - \frac{\sqrt{\pi}\,\lambda_k}{2} \exp\left(\frac{\lambda_k^2}{4}\right) \mathrm{erfc}\left(\frac{\lambda_k}{2}\right) = (1-\gamma)\, b^{\,k-1},   (9)

for each k = 1, …, p, where erfc denotes the complementary error function,

    \mathrm{erfc}(z) = \frac{2}{\sqrt{\pi}} \int_z^{\infty} e^{-t^2}\, dt.
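One concrete reading of (9), consistent with the exponential prior on the distance, is that its left-hand side equals E(ψ_k²) = 1 − (√π λ_k/2) exp(λ_k²/4) erfc(λ_k/2) when d = √(−ln(1 − ψ_k²)) ∼ Exp(λ_k). This expectation is monotone decreasing in λ_k, so the rates can be found by bisection. A sketch (function names and the illustrative target (1 − γ)b^{k−1} with γ = 0.5, b = 0.75 are ours):

```python
import math

def expected_psi_sq(lam):
    """E(psi^2) when d = sqrt(-ln(1 - psi^2)) is exponential with rate lam:
    E(psi^2) = 1 - (sqrt(pi) lam / 2) exp(lam^2 / 4) erfc(lam / 2)."""
    return 1.0 - 0.5 * math.sqrt(math.pi) * lam \
        * math.exp(lam * lam / 4.0) * math.erfc(lam / 2.0)

def rate_for_lag(k, gamma, b, lo=1e-8, hi=40.0):
    """Solve (9): expected_psi_sq(lam_k) = (1 - gamma) * b**(k - 1), by bisection."""
    target = (1.0 - gamma) * b ** (k - 1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if expected_psi_sq(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# rates for the first three lags under gamma = 0.5, b = 0.75 (illustrative values)
rates = [rate_for_lag(k, 0.5, 0.75) for k in (1, 2, 3)]
```

Smaller targets (higher lags when b < 1) require larger rates, i.e. stronger shrinkage towards ψ_k = 0.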

6 Simulation results

To illustrate the properties of PC priors for the partial autocorrelations of autoregressive processes, we conduct a simulation study in which an AR(3) process is fitted to six different test cases. Except for the first two cases, the test examples are similar to the ones used in Liseo and Macaro (2013). In each case, we fit an AR(3) model to generated time series.

                      Root mean squared error      Coverage
Test cases            ψ1      ψ2      ψ3           ψ1      ψ2      ψ3
PC prior
1.                    0.133   0.123   0.111        0.928   0.919   0.956
2.                    0.106   0.118   0.103        0.912   0.961   0.968
3.                    0.174   0.150   0.106        0.888   0.882   0.968
4.                    0.070   0.123   0.108        0.953   0.921   0.959
5.                    0.093   0.136   0.107        0.938   0.918   0.965
6.                    0.092   0.146   0.118        0.937   0.892   0.950
Reference prior
1.                    0.146   0.151   0.135        0.911   0.901   0.931
2.                    0.101   0.143   0.126        0.911   0.932   0.944
3.                    0.185   0.143   0.130        0.879   0.920   0.929
4.                    0.070   0.111   0.133        0.949   0.933   0.934
5.                    0.092   0.133   0.130        0.939   0.923   0.938
6.                    0.088   0.143   0.133        0.938   0.916   0.928
Table 1: The root mean squared error and the frequentistic coverage of 95% highest posterior density intervals for each of the estimated partial autocorrelations ψ1, ψ2 and ψ3 of AR(3) processes, using the PC prior and the reference prior, respectively. The given results are averaged over 1000 simulations.

The results, using 1000 simulations, are displayed in Table 1, where the average root mean squared error for the kth partial autocorrelation is

    \mathrm{RMSE}(\psi_k) = \sqrt{\frac{1}{1000}\sum_{j=1}^{1000}\left(\hat\psi_k^{(j)} - \psi_k\right)^2},

with ψ̂_k^{(j)} denoting the posterior estimate of ψ_k in the jth simulation. We also report the frequentistic coverage of the estimated 95% highest posterior density intervals. In all test examples, the PC prior was implemented with the scaling criterion described in Section 5.2; solving (9), this gives the rate parameters λ₁, λ₂, λ₃ used in estimating the three partial autocorrelations.

As expected, the results illustrate that the use of PC priors avoids overfitting. In the first test case of simulating white noise, the PC prior is seen to give both smaller root mean squared error and better frequentistic coverage, compared with the reference prior. We also notice that using PC priors gives better results in estimating ψ₃ for all the test cases. For the other parameters, the PC and reference priors are seen to have quite comparable performance. This implies that the PC prior seems like a promising alternative to reference priors in estimating the partial autocorrelations of AR(p) processes. The main advantage of PC priors is that these are easy to compute, also for higher-order processes, and more flexible than the reference prior, allowing for individual scaling. In comparing the two priors, we also considered the forecast error and the coverage of highest posterior density intervals for one-step-ahead predictions. The results were very similar for the PC and reference priors and we do not report these here.

The given results are not surprising. In particular, the approach of scaling the PC priors is designed to reflect decreasing partial autocorrelations as the order of the process is increased. If we have reasons to believe that the partial autocorrelations do not decrease with higher order, we suggest to scale the priors for the partial autocorrelations of all lags similarly, using b = 1. We have chosen to report results for only one combination of the scaling parameters, but have also tried several other combinations of γ and b. The main impression is that the PC priors are quite robust to different choices of γ and b. Also, it is easy to understand how changes in these parameters will induce changes in the estimates. A larger value of γ and/or a smaller value of b gives more shrinkage towards 0. In general, we recommend that γ is chosen to be less than or equal to 0.5, as higher values of γ might impose too much shrinkage for the first-lag partial autocorrelation. Also, values of b less than 0.5 might impose too much shrinkage for the partial autocorrelations of higher lags.

7 Discussion

An important aspect of statistical model fitting is to select models that are flexible enough to capture true underlying structure but do not overfit. Among competing models we would prefer the more parsimonious one, for example in terms of having fewer assumptions, fewer model components or a simpler structure of model components. Hawkins (2004) describes overfitting in terms of violating the principle of parsimony given by Occam's razor: the models and procedures used should contain all that is necessary for the modelling but nothing more. The given PC priors obey this principle, ensuring shrinkage towards specific base models chosen to reflect the given application.

The PC priors represent a weakly informative alternative to existing prior choices for autoregressive processes, allowing for user-defined scaling to adjust the informativeness of the priors. The PC priors are computationally simple and are easily implemented for any finite order of the autoregressive process. The priors are available within the R-INLA framework, in which AR processes can be used as building blocks within the general class of latent Gaussian models (Rue et al., 2009). This class of models has many applications, among others the analysis of temporal and spatial data. A natural extension in time series applications is to derive PC priors also for autoregressive (integrated) moving average processes. Other useful model extensions would include vector autoregressive models (Sims, 1980), frequently used to analyse multivariate time series, for example within the field of econometrics.

In this paper, we have only considered the stationary case. Previous controversy (Phillips, 1991) in assigning a prior to the lag-one autocorrelation of an AR(1) process relates to whether the stationarity condition is included, or not. Phillips (1991) argued that objective ignorance priors, like the Jeffreys prior, should be used for AR(1) processes when no stationarity assumptions are made, while uniform priors would give inference biased towards stationarity. One of the problems seen with the Jeffreys prior is that it puts most of its probability mass on regions of the parameter space that give a non-stationary process (Liseo and Macaro, 2013). The reference prior was originally only defined for stationary processes but has been extended in a symmetric way for (Berger and Yang, 1994), in which it is seen to have a more reasonable shape than the Jeffreys prior (Robert, 2007). A relevant future project is to study the use of PC priors also for non-stationary AR processes.


  • Albert and Chib (1993) Albert, J. H. and Chib, S. (1993). Bayesian inference via Gibbs sampling of autoregressive time series subject to Markov mean and variance shifts. Journal of Business and Economic Statistics, 11:1–15.
  • Barndorff-Nielsen and Schou (1973) Barndorff-Nielsen, O. and Schou, G. (1973). On the parametrization of autoregressive models by partial autocorrelations. Journal of Multivariate Analysis, 3:408–419.
  • Barnett et al. (1996) Barnett, G., Kohn, R., and Sheather, S. (1996). Bayesian estimation of an autoregressive model using Markov chain Monte Carlo. Journal of Econometrics, 74:237–254.
  • Berger et al. (2009) Berger, J. O., Bernardo, J. M., and Sun, D. (2009). The formal definition of reference priors. The Annals of Statistics, 37:905–938.
  • Berger and Yang (1994) Berger, J. O. and Yang, R. (1994). Noninformative priors and Bayesian testing for the AR(1) model. Econometric Theory, 10:461–482.
  • Brockwell and Davis (2002) Brockwell, P. J. and Davis, R. A. (2002). Introduction to Time Series and Forecasting. Springer-Verlag, New York, 2nd edition.
  • Chatfield (2003) Chatfield, C. (2003). The Analysis of Time Series: An Introduction. Chapman & Hall/CRC, 6th edition.
  • Chib (1993) Chib, S. (1993). Bayes regression with autoregressive errors: A Gibbs sampling approach. Journal of Econometrics, 58:275–294.
  • DeJong and Whiteman (1991) DeJong, D. N. and Whiteman, C. H. (1991). Reconsidering ‘Trends and random walks in macroeconomic time series’. Journal of Monetary Economics, 28:221–254.
  • Fahrmeir and Tutz (2001) Fahrmeir, L. and Tutz, G. (2001). Multivariate Statistical Modelling Based on Generalized Linear Models. Springer Science + Business Media, New York, 2nd edition.
  • Fleming and Harrington (2005) Fleming, T. R. and Harrington, D. P. (2005). Counting Processes and Survival Analysis. Wiley Series in Probability and Statistics (Book 625). John Wiley & Sons, Inc., New Jersey, 2nd edition.
  • Golub and van Loan (1996) Golub, G. H. and van Loan, C. F. (1996). Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition.
  • Gray (2002) Gray, R. M. (2002). Toeplitz and circulant matrices: A review. Free book available from, Department of Electrical Engineering, Stanford University.
  • Hawkins (2004) Hawkins, D. M. (2004). The problem of overfitting. Journal of Chemical Information and Computer Sciences, 44:1–12.
  • Kullback and Leibler (1951) Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22:79–86.
  • Lesage (1997) Lesage, J. P. (1997). Bayesian estimation of spatial autoregressive models. International Regional Science Review, 20:113–129.
  • Liseo and Macaro (2013) Liseo, B. and Macaro, C. (2013). Objective priors for causal AR(p) with partial autocorrelations. Journal of Statistical Computation and Simulation, 83:1613–1628.
  • Manda and Meyer (2005) Manda, S. O. M. and Meyer, R. (2005). Bayesian inference for recurrent events data using time-dependent frailty. Statistics in Medicine, 24:1263–1274.
  • Martino et al. (2010) Martino, S., Akerkar, R., and Rue, H. (2010). Approximate Bayesian inference for survival models. Scandinavian Journal of Statistics, 38:514–528.
  • Martins et al. (2013) Martins, T. G., Simpson, D., Lindgren, F., and Rue, H. (2013). Bayesian computing with INLA: New features. Computational Statistics and Data Analysis, 67:68–83.
  • Monahan (1984) Monahan, J. F. (1984). A note on enforcing stationarity in autoregressive-moving average models. Biometrika, 71:403–404.
  • Phillips (1991) Phillips, P. C. B. (1991). To criticize the critics: An objective Bayesian analysis of stochastic trends. Journal of Applied Econometrics, 6:333–364.
  • Prado and West (2010) Prado, R. and West, M. (2010). Time Series - Modeling, Computation and Inference. Chapman & Hall/CRC, Boca Raton.
  • Robert (2007) Robert, C. P. (2007). The Bayesian Choice. Springer Science+Business Media, LLC, New York.
  • Rue and Held (2005) Rue, H. and Held, L. (2005). Gaussian Markov Random Fields: Theory and Applications, volume 104 of Monographs on Statistics and Applied Probability. Chapman & Hall, London.
  • Rue et al. (2009) Rue, H., Martino, S., and Chopin, N. (2009). Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations (with discussion). Journal of the Royal Statistical Society, Series B, 71:319–392.
  • Sahu and Bakar (2012) Sahu, S. K. and Bakar, K. S. (2012). Hierarchical Bayesian autoregressive models for large space-time data with applications to ozone concentration modelling. Applied Stochastic Models in Business and Industry, 28:395–415.
  • Sahu et al. (2007) Sahu, S. K., Gelfand, A. E., and Holland, D. M. (2007). High resolution space-time ozone modeling for assessing trends. Journal of the American Statistical Association, 102:1221–1234.
  • Simpson et al. (2016) Simpson, D., Rue, H., Riebler, A., Martins, T. G., and Sørbye, S. H. (2016). Penalising model component complexity: A principled, practical approach to constructing priors (with discussion). To appear in Statistical Science.
  • Sims (1980) Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48:1–48.
  • Sørbye and Rue (2014) Sørbye, S. H. and Rue, H. (2014). Scaling intrinsic Gaussian Markov random field priors in spatial modelling. Spatial Statistics, 8:39–51.
  • West (1991) West, M. (1991). Kernel density estimation and marginalization consistency. Biometrika, 78:421–425.
  • Yau and McGilchrist (1998) Yau, K. K. W. and McGilchrist, C. A. (1998). ML and REML estimation in survival analysis with time dependent correlated frailty. Statistics in Medicine, 17:1201–1213.
  • Zellner (1971) Zellner, A. (1971). Introduction to Bayesian inference in econometrics. John Wiley & Sons, Inc., New York.