Probabilistic Forecasting and Comparative Model Assessment Based on Markov Chain Monte Carlo Output

Fabian Krüger    Sebastian Lerch    Thordis Thorarinsdottir    Tilmann Gneiting

Heidelberg University
Heidelberg Institute for Theoretical Studies
Karlsruhe Institute of Technology
Norwegian Computing Center
Abstract

In Bayesian inference, predictive distributions are typically available only through a sample generated via Markov chain Monte Carlo (MCMC) or related algorithms. In this paper, we conduct a systematic analysis of how to make and evaluate probabilistic forecasts from such simulation output. Based on proper scoring rules, we develop a notion of consistency that makes it possible to assess the adequacy of methods for estimating the stationary distribution underlying the simulation output. We then provide asymptotic results that account for the salient features of Bayesian posterior simulators, and derive conditions under which choices from the literature satisfy our notion of consistency. Importantly, these conditions depend on the scoring rule being used, such that the choices of approximation method and scoring rule are intertwined. While the logarithmic rule requires fairly stringent conditions, the continuous ranked probability score (CRPS) yields consistent approximations under minimal assumptions. These results are illustrated in a simulation study and an economic data example. Overall, we find that mixture-of-parameters approximations which exploit the parametric structure of Bayesian models perform particularly well.

1 Introduction

Probabilistic forecasts take the form of predictive probability distributions over quantities or events of interest (Gneiting and Katzfuss, 2014). In this context, a rapidly growing transdisciplinary literature uses Bayesian inference to produce posterior predictive distributions in a wide range of applications, including economic, financial, ecological, and meteorological problems, among many others. Prompted by these developments, as recently noted by Broms et al. (2016, p. 1759),

“a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence.”

In Bayesian statistics, posterior predictive distributions arise as mixture distributions with respect to the posterior distribution of the parameter vector. For a real-valued continuous quantity of interest, the posterior predictive distribution, $F_0$, can be represented by its cumulative distribution function (CDF) or the respective density. The posterior predictive CDF is then of the generic form

$$F_0(y) = \int_\Theta F_c(y \mid \theta) \, \mathrm{d}P_{\text{post}}(\theta) \qquad (1)$$

for $y \in \mathbb{R}$, where $P_{\text{post}}$ is the posterior distribution of the parameter, $\theta$, over some parameter space, $\Theta$, and $F_c(\cdot \mid \theta)$ is the conditional predictive CDF when $\theta$ is the true parameter. Harris (1989) argues that predictive distributions of this form have appeal in frequentist settings as well. Frequently, the integral in (1) does not admit a solution in closed form, and so the posterior predictive CDF must be approximated or estimated in some way, typically using some form of Markov chain Monte Carlo (MCMC); see, e.g., Gelfand and Smith (1990) and Gilks et al. (1996).

Given a simulated sequence $\theta_1, \ldots, \theta_m$ of parameter values from $P_{\text{post}}$, one approach, which we call the mixture-of-parameters (MP) technique, is to approximate $F_0$ by

$$\hat{F}^{\,\text{MP}}_m(y) = \frac{1}{m} \sum_{i=1}^m F_c(y \mid \theta_i). \qquad (2)$$

However, this method can be used only when the conditional distributions $F_c(\cdot \mid \theta)$ are available in closed form. An alternative route is to simulate a sequence $X_1, \ldots, X_m$ where $X_i \sim F_c(\cdot \mid \theta_i)$, and to approximate $F_0$ based on this sample, using either nonparametric or parametric techniques. The most straightforward option is to estimate $F_0$ by the empirical CDF (ECDF),

$$\hat{F}^{\,\text{ECDF}}_m(y) = \frac{1}{m} \sum_{i=1}^m \mathbb{1}\{X_i \le y\}. \qquad (3)$$

Alternatively, one might employ a kernel density (KD) estimate,

$$\hat{f}^{\,\text{KD}}_m(y) = \frac{1}{m h_m} \sum_{i=1}^m K\!\left(\frac{y - X_i}{h_m}\right), \qquad (4)$$

of the posterior predictive density, where $K$ is a kernel function, i.e., a symmetric, bounded, and square-integrable probability density, such as the Gaussian or the Epanechnikov kernel, and $h_m > 0$ is a suitable bandwidth (Rosenblatt, 1956; Silverman, 1986). Finally, much extant work employs a Gaussian approximation (GA) to $F_0$, namely

$$\hat{F}^{\,\text{GA}}_m(y) = \Phi\!\left(\frac{y - \hat{\mu}_m}{\hat{\sigma}_m}\right), \qquad (5)$$

where $\Phi$ is the CDF of the standard normal distribution, and $\hat{\mu}_m$ and $\hat{\sigma}_m$ are the empirical mean and standard deviation of the sample $X_1, \ldots, X_m$.
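The sample-based approximations in (3) and (5) are simple to implement. The following is a minimal Python sketch (function names are ours, for illustration only):

```python
import math
import statistics

def ecdf(sample, y):
    """Empirical CDF, eq. (3): fraction of draws X_i with X_i <= y."""
    return sum(x <= y for x in sample) / len(sample)

def gaussian_approx_cdf(sample, y):
    """Gaussian approximation, eq. (5): Phi((y - mu_hat) / sigma_hat)."""
    mu = statistics.fmean(sample)
    sd = statistics.stdev(sample)  # empirical standard deviation
    return 0.5 * (1.0 + math.erf((y - mu) / (sd * math.sqrt(2.0))))
```

For instance, for the sample (1, 2, 3, 4), `ecdf` evaluated at 2.5 returns 0.5, and by symmetry the Gaussian approximation also assigns CDF value 0.5 at the sample mean.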

Following Rubin (1984) and Little (2006), it is now widely accepted that posterior predictive inference should be evaluated using frequentist principles, without prior information entering the model evaluation stage. For the comparison and ranking of probabilistic forecasting methods one typically uses a proper scoring rule (Gneiting and Raftery, 2007) that assigns a numerical score or penalty based on the predictive CDF, $F$, or its density, $f$, and the corresponding realization, $y$, such as the logarithmic score (LogS; Good, 1952),

$$\text{LogS}(F, y) = -\log f(y), \qquad (6)$$

or the continuous ranked probability score (CRPS; Matheson and Winkler, 1976),

$$\text{CRPS}(F, y) = \int_{-\infty}^{\infty} \left( F(z) - \mathbb{1}\{y \le z\} \right)^2 \mathrm{d}z. \qquad (7)$$

In practice, one finds and compares the mean score over an out-of-sample test set, and the forecasting method with the smaller mean score is preferred. Formal tests of the null hypothesis of equal predictive performance can be employed as well (Diebold and Mariano, 1995; Giacomini and White, 2006; Clark and McCracken, 2013; DelSole and Tippett, 2014).
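For a Gaussian predictive distribution, both scores admit closed forms, which is convenient for checking implementations; the closed-form CRPS expression for the normal distribution is standard (Gneiting and Raftery, 2007). A Python sketch with illustrative function names of our own:

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logs_normal(mu, sigma, y):
    """LogS, eq. (6), for F = N(mu, sigma^2): -log f(y)."""
    return -math.log(norm_pdf((y - mu) / sigma) / sigma)

def crps_normal(mu, sigma, y):
    """CRPS, eq. (7), for F = N(mu, sigma^2), in closed form."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm_cdf(z) - 1.0)
                    + 2.0 * norm_pdf(z) - 1.0 / math.sqrt(math.pi))
```

For example, `crps_normal(0, 1, 0)` is approximately 0.2337 and `logs_normal(0, 1, 0)` equals 0.5 log(2π), approximately 0.9189.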

Based on parameter draws
  Mixture-of-parameters
    Logarithmic score: Amisano and Giacomini (2007); Gschlößl and Czado (2007); Lopes et al. (2008); Maheu and Gordon (2008); Liu and Maheu (2009); Maheu and McCurdy (2009); Geweke and Amisano (2010, 2011); Jensen and Maheu (2010, 2013, 2014); Jochmann et al. (2010); Kallache et al. (2010); Li et al. (2010); Delatola and Griffin (2011); Maheu et al. (2012); Maneesoonthorn et al. (2012); Jin and Maheu (2013, 2016); Baştürk et al. (2014); Maheu and Song (2014); Risser and Calder (2015); Zhou et al. (2015); Kastner (2016); Warne et al. (2016)
    CRPS: Kallache et al. (2010); Trombe et al. (2012); Risser and Calder (2015)

Based on a sample
  Empirical CDF
    CRPS: Gschlößl and Czado (2007); Lopes et al. (2008); Panagiotelis and Smith (2008); De la Cruz and Branco (2009); Salazar et al. (2011); Friederichs and Thorarinsdottir (2012); Sigrist et al. (2012); Groen et al. (2013); Leininger et al. (2013); Berrocal et al. (2014); Clark and Ravazzolo (2015); Krüger et al. (2017); Sahu et al. (2015); Sigrist et al. (2015); Smith and Vahey (2016); Berg (2017)
  Kernel density estimation
    Logarithmic score: Belmonte et al. (2014); Bauwens et al. (2015); Berg and Henzel (2015); Carriero et al. (2015b, c); Berg (2017)
    CRPS: Krüger and Nolte (2016)
  Gaussian approximation
    Logarithmic score: Adolfson et al. (2007); Clark (2011); Carriero et al. (2015a, 2016); Clark and Ravazzolo (2015); Giannone et al. (2015); Warne et al. (2016)
    CRPS: Brandt et al. (2014); Rodrigues et al. (2014)

Table 1: Approximation methods and scoring rules in recent studies using probabilistic forecasts based on MCMC output. For details see Appendix A.

Table 1 summarizes the use of evaluation techniques in recently published comparative studies of probabilistic forecasting methods that use Bayesian inference via MCMC. The listing highlights that the choices of approximation method and scoring rule are heavily intertwined. For example, the mixture-of-parameters technique has mainly been applied in concert with the logarithmic score, whereas the empirical CDF method can be used in conjunction with the CRPS only. However, to date, there are few, if any, guidelines to support choices in the table, and it is not clear how they affect practical model comparisons. The present paper provides a systematic analysis of this topic. We focus on the following questions. First, what defines reasonable choices of the approximation method and scoring rule? Second, under what conditions do extant choices from the literature satisfy this definition? Third, for a given scoring rule, how accurate are alternative approximation methods in practically relevant scenarios?

To answer the first question, we propose the notion of a consistent approximation method. This formalizes the idea that, as the size of the simulated sample becomes larger and larger, and in terms of a given scoring rule, the approximation ought to perform as well as the unknown true forecast distribution. To tackle the second question, we evaluate methodological choices such as those in Table 1. The results account for the salient features of Bayesian posterior simulators, in particular time series type dependence among the MCMC draws. Regarding the third question, we provide a simulation study that is motivated by practical applications of MCMC. Furthermore, we consider the behavior of various approximation methods and scoring rules in an economic data example. Overall, our findings support the use of the mixture-of-parameters estimator in order to approximate the posterior predictive distribution of interest.

We emphasize that the present study, and the use of scoring rules in general, concerns the comparative assessment of two or more predictive models: the model with the smallest mean score is considered the most appropriate. Comparative assessment is essential in order to choose among the large number of specifications typically available in practice. This task is different from absolute assessment, which amounts to diagnosing possible misspecification, using the probability integral transform (e.g. Diebold et al., 1998; Gneiting et al., 2007; Held et al., 2010), posterior predictive checks (Gelman et al., 1996; Gelman et al., 2014a, Chapter 6) and related methods.

The remainder of this paper is organized as follows. Section 2 introduces the notion of a consistent approximation method. In Section 3 we provide theoretical justifications of approximation methods encountered in the literature. Sections 4 and 5 present simulation and empirical evidence on the performance of these methods, and Section 6 concludes with a discussion. Technical material is deferred to appendices, and software is available within the scoringRules package for R (Jordan et al., 2017).

2 Formal Setting

In this section, we discuss the posterior predictive distribution in Bayesian forecasting, give a brief review of proper scoring rules and score divergences, and introduce the concept of a consistent approximation method based on MCMC output.

As discussed earlier, the posterior predictive cumulative distribution function (CDF) of a Bayesian forecasting model is given by

$$F_0(y) = \int_\Theta F_c(y \mid \theta) \, \mathrm{d}P_{\text{post}}(\theta),$$

where $\theta \in \Theta$ is the parameter, $P_{\text{post}}$ is the posterior distribution of the parameter, and $F_c(\cdot \mid \theta)$ is the predictive distribution conditional on a parameter value $\theta$; see, e.g., Greenberg (2013, p. 33) or Gelman et al. (2014a, p. 7). A generic Markov chain Monte Carlo (MCMC) algorithm designed to sample from $F_0$ can be sketched as follows.

  • Fix $\theta_0$ at some arbitrary value.

  • For $i = 1, 2, \ldots$ iterate as follows:

    • Draw $\theta_i \sim K(\cdot \mid \theta_{i-1})$, where $K$ is a transition kernel that specifies the conditional distribution of $\theta_i$ given $\theta_{i-1}$.

    • Draw $X_i \sim F_c(\cdot \mid \theta_i)$.

We assume throughout that the transition kernel is such that the sequence $(\theta_i)_{i \ge 1}$ is stationary and ergodic in the sense of Geweke (2005, Definition 4.5.5) with invariant distribution $P_{\text{post}}$, as holds widely in practice (Craiu and Rosenthal, 2014). Importantly, stationarity and ergodicity of $(\theta_i)_{i \ge 1}$ with invariant distribution $P_{\text{post}}$ imply that $(X_i)_{i \ge 1}$ is stationary and ergodic with invariant distribution $F_0$ (Genon-Catalot et al., 2000, Proposition 3.1).
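The two-step sketch above translates directly into code. The following Python toy example (our own illustrative choices, not the paper's data generating process) uses a Gaussian AR(1) transition kernel whose invariant law is N(0, 1), and draws each observation from F_c(· | θ_i) = N(θ_i, 1):

```python
import math
import random

def simulate_mcmc_output(m, rho=0.8, seed=42):
    """Generic MCMC sketch: theta_i ~ K(. | theta_{i-1}), then X_i ~ F_c(. | theta_i).

    The AR(1) kernel theta_i = rho * theta_{i-1} + sqrt(1 - rho^2) * eps_i
    leaves N(0, 1) invariant, mimicking the autocorrelation of real samplers.
    """
    rng = random.Random(seed)
    theta = rng.gauss(0.0, 1.0)            # theta_0 drawn from the invariant law
    innov_sd = math.sqrt(1.0 - rho * rho)
    thetas, xs = [], []
    for _ in range(m):
        theta = rho * theta + innov_sd * rng.gauss(0.0, 1.0)
        thetas.append(theta)               # Option A output (parameter draws)
        xs.append(rng.gauss(theta, 1.0))   # Option B output (predictive sample)
    return thetas, xs
```

Since N(0, 1) is invariant for the parameter chain and X_i given θ_i is N(θ_i, 1), the invariant distribution of the X-sequence in this toy setup is N(0, 2).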

This generic MCMC algorithm allows for two general options for estimating the posterior predictive distribution in (1), namely,

  • Option A: Based on parameter draws $\theta_1, \ldots, \theta_m$,

  • Option B: Based on a sample $X_1, \ldots, X_m$,

where typically $m$ is on the order of a few thousand or ten thousand. Alternatively, some authors, such as Krüger et al. (2017), generate, for each $\theta_i$, a number of independent draws from $F_c(\cdot \mid \theta_i)$; see also Waggoner and Zha (1999, Section III.B). The considerations below apply in this more general setting as well.

2.1 Approximation methods

In the case of Option A, the sequence $\theta_1, \ldots, \theta_m$ of parameter draws is used to approximate the posterior predictive distribution, $F_0$, by the mixture-of-parameters estimator $\hat{F}^{\,\text{MP}}_m$ in (2). Under the assumption of ergodicity,

$$\hat{F}^{\,\text{MP}}_m(y) \longrightarrow F_0(y) \quad \text{almost surely}$$

for every $y \in \mathbb{R}$. This estimator was popularized by Gelfand and Smith (1990, Section 2.2), based on earlier work by Tanner and Wong (1987), and is often called a conditional or Rao-Blackwellized estimator. For independent samples, the Rao-Blackwell theorem implies that exploiting the conditional distributions reduces the variance in the estimation; see, e.g., Geweke (2005, Section 4.4.1). However, this theoretical motivation does not readily extend to the more realistic case of dependent samples. Therefore, we follow suggestions of Geyer (1995, p. 152) and avoid the term Rao-Blackwellization. Instead, we refer to $\hat{F}^{\,\text{MP}}_m$ as the mixture-of-parameters (MP) estimator.

In the case of Option B, the sample $X_1, \ldots, X_m$ is employed to approximate the posterior predictive distribution $F_0$. Various methods for doing this have been proposed and used, including the empirical CDF of the sample, denoted $\hat{F}^{\,\text{ECDF}}_m$ in (3), the kernel density estimator $\hat{f}^{\,\text{KD}}_m$ in (4), and the Gaussian approximation $\hat{F}^{\,\text{GA}}_m$ in (5). Approaches of this type imply ‘more randomness than necessary’, in that the simulation step to draw $X_i \sim F_c(\cdot \mid \theta_i)$ can be avoided if Option A is used. That said, Option A requires full knowledge of the model specification, as the conditional distributions $F_c(\cdot \mid \theta)$ must be known in closed form in order to compute $\hat{F}^{\,\text{MP}}_m$. There are situations where this is restrictive, e.g., when the task is to predict a complicated transformation of the original, possibly vector-valued predictand. The implementation of the approximation methods is straightforward, except for kernel density estimation, for which we discuss implementation choices in Section 3.3.

2.2 Proper scoring rules and score divergences

Let $\Omega \subseteq \mathbb{R}$ denote the set of possible values of the quantity of interest, and let $\mathcal{F}$ denote a convex class of probability distributions on $\Omega$. A scoring rule is a function

$$S : \mathcal{F} \times \Omega \to \mathbb{R} \cup \{\infty\}$$

that assigns numerical values $S(F, y)$ to pairs of forecasts $F \in \mathcal{F}$ and observations $y \in \Omega$. We typically set $\Omega = \mathbb{R}$, but will occasionally restrict attention to compact subsets.

Throughout this paper, we define scoring rules to be negatively oriented, i.e., a lower score indicates a better forecast. A scoring rule $S$ is proper relative to $\mathcal{F}$ if the expected score

$$S(F, G) = \mathbb{E}_{Y \sim G}\, S(F, Y)$$

is minimized for $F = G$, i.e.,

$$S(G, G) \le S(F, G)$$

for all probability distributions $F, G \in \mathcal{F}$. It is strictly proper relative to the class $\mathcal{F}$ if, furthermore, equality implies that $F = G$. The score divergence associated with the scoring rule $S$ is given by

$$d(F, G) = S(F, G) - S(G, G).$$

Clearly, $d(F, G) \ge 0$ for all $F, G \in \mathcal{F}$ is equivalent to propriety of the scoring rule $S$, which is a critically important property in practice (Gneiting and Raftery, 2007).

Scoring rule | $S(F, y)$ | $d(F, G)$ | Natural domain
Logarithmic score | $-\log f(y)$ | $\int g(y) \log \frac{g(y)}{f(y)} \, \mathrm{d}y$ | distributions with densities
CRPS | $\int \left(F(z) - \mathbb{1}\{y \le z\}\right)^2 \mathrm{d}z$ | $\int \left(F(z) - G(z)\right)^2 \mathrm{d}z$ | finite mean
Dawid–Sebastiani score | $2 \log \sigma_F + \left(\frac{y - \mu_F}{\sigma_F}\right)^2$ | $2 \log \frac{\sigma_F}{\sigma_G} + \frac{\sigma_G^2 + (\mu_F - \mu_G)^2}{\sigma_F^2} - 1$ | strictly positive, finite variance

Table 2: Examples of proper scoring rules, along with the associated score divergence and natural domain, $\mathcal{F}$. For a probability distribution with CDF $F$, we write $\mu_F$ for its mean, $\sigma_F$ for its standard deviation, and $f$ for its density.

Table 2 shows frequently used proper scoring rules, along with the associated score divergences and the natural domain. For any given scoring rule $S$, the associated natural domain is the largest convex class $\mathcal{F}$ of probability distributions such that the score is well-defined and finite almost surely under any distribution in $\mathcal{F}$. Specifically, the natural domain for the popular logarithmic score (LogS; eq. (6)) is the class of probability distributions with densities, and the respective score divergence is the Kullback-Leibler divergence; see, e.g., Gneiting and Raftery (2007) and Thorarinsdottir et al. (2013). For the continuous ranked probability score (CRPS; eq. (7)), the natural domain is the class of probability distributions with finite mean. These two scores are strictly proper relative to their respective natural domains. Finally, the natural domain for the Dawid–Sebastiani score (DSS; Dawid and Sebastiani, 1999) is the class of probability distributions with strictly positive, finite variance. This score is proper, but not strictly proper, relative to that class. In practice, the DSS is a natural choice of scoring rule if one is interested in predictions of the mean and variance, but not in higher moments.

2.3 Consistent approximations

To study the combined effects of the choices of approximation method and scoring rule in the evaluation of Bayesian predictive distributions, we introduce the notion of a consistent approximation procedure.

Specifically, let $(\theta_i)_{i=1}^\infty$ or $(X_i)_{i=1}^\infty$, where $X_i \sim F_c(\cdot \mid \theta_i)$, be output from a generic MCMC algorithm with the following property.

  • (A) The process $(\theta_i)_{i=1}^\infty$ is stationary and ergodic with invariant distribution $P_{\text{post}}$.

As noted, assumption (A) implies that $(X_i)_{i=1}^\infty$ is stationary and ergodic with invariant distribution $F_0$. Consider an approximation method that produces, for all sufficiently large positive integers $m$, an estimate $\hat{F}_m$ that is based on $\theta_1, \ldots, \theta_m$ or $X_1, \ldots, X_m$, respectively. Let $S$ be a proper scoring rule, and let $\mathcal{F}$ be the associated natural domain. Then the approximation method is consistent relative to the scoring rule $S$ at the distribution $F_0 \in \mathcal{F}$ if $\hat{F}_m \in \mathcal{F}$ for all sufficiently large $m$, and

$$\lim_{m \to \infty} S(\hat{F}_m, F_0) = S(F_0, F_0) \quad \text{almost surely},$$

or, equivalently, $d(\hat{F}_m, F_0) \to 0$ almost surely as $m \to \infty$. This formalizes the idea that under continued MCMC sampling, the approximation ought to perform as well as the unknown true posterior predictive distribution. We contend that this is a highly desirable property in practical work.

Note that $d(\hat{F}_m, F_0)$ is a random quantity that depends on the sample $\theta_1, \ldots, \theta_m$ or $X_1, \ldots, X_m$. The specific form of the divergence stems from the scoring rule, which mandates convergence of a certain functional of the estimator or approximation, $\hat{F}_m$, and the theoretical posterior predictive distribution, $F_0$. As we will argue, this aspect has important implications for the choice of scoring rule and approximation method.

Our concept of a consistent approximation procedure is independent of the question of how well a forecast model represents the true uncertainty. The definition thus allows us to separate the problem of interest, namely, to find a good approximation to $F_0$, from the distinct task of finding a good probabilistic forecast $F_0$.¹ We further emphasize that we study convergence in the sample size, $m$, of MCMC output, given a fixed number of observations used to fit the model. Our analysis is thus distinct from traditional Bayesian asymptotic analyses that study convergence of the posterior distribution as the number of observations becomes larger and larger (see, e.g., Gelman et al., 2014a, Section 4), thereby calling for markedly different technical tools.

¹ In fact, it is possible for an inconsistent approximation to a misspecified $F_0$ to approach the data generating model, whereas any consistent approximation will approach the misguided $F_0$. However, the misspecification can be detected by diagnostic tools such as probability integral transform (PIT) histograms; see, e.g., Diebold et al. (1998) and Gneiting et al. (2007). The appropriate remedy is to improve the model specification, rather than attempting to save a flawed model by using an inconsistent approximation.

3 Consistency results and computational complexity

We now investigate sufficient conditions for consistency of the aforementioned approximation methods, namely, the mixture-of-parameters (MP) estimator in (2), the empirical CDF (ECDF) method in (3), the kernel density (KD) estimate in (4), and the Gaussian approximation (GA) in (5). Table 3 summarizes upper bounds on the computational cost of pre-processing and of computing the CRPS, Dawid–Sebastiani score (DSS) and logarithmic score (LogS) under these methods, in terms of the size $m$ of the MCMC sample $\theta_1, \ldots, \theta_m$ or $X_1, \ldots, X_m$, respectively.

Approximation method | Pre-processing | CRPS | DSS | LogS
MP | — | $O(m^2)$ | $O(m)$ | $O(m)$
ECDF | $O(m \log m)$ | $O(m)$ | $O(m)$ | not applicable
KD | $O(m)$ | $O(m^2)$ | not applicable | $O(m)$
Gaussian | $O(m)$ | $O(1)$ | $O(1)$ | $O(1)$

Table 3: Upper bounds on the computational complexity of approximation methods in terms of the size $m$ of the MCMC sample $\theta_1, \ldots, \theta_m$ or $X_1, \ldots, X_m$, respectively, for pre-processing, and for the exact computation of the CRPS, Dawid–Sebastiani score (DSS) and logarithmic score (LogS).

Consistency requires the convergence of some functional of the approximation, $\hat{F}_m$, and the true posterior predictive distribution, $F_0$. The conditions to be placed on the Bayesian model (1), the estimator $\hat{F}_m$, and the dependence structure of the MCMC output depend on the scoring rule at hand.

3.1 Mixture-of-parameters estimator

We now establish consistency of the mixture-of-parameters estimator in (2) relative to the CRPS, DSS and logarithmic score. The proofs are deferred to Appendix C.

Theorem 1 (consistency of mixture-of-parameters approximations relative to the CRPS and DSS). Under assumption (A), the mixture-of-parameters approximation is consistent relative to the CRPS at every distribution with finite mean, and consistent relative to the DSS at every distribution with strictly positive, finite variance.

Theorem 1 is the best possible result of its kind: It applies to every distribution in the natural domain and does not invoke any assumptions on the Bayesian model. In contrast, Theorem 2 hinges on rather stringent further conditions on the distribution $F_0$ and the Bayesian model (1), as follows.

  • (B) The distribution $F_0$ is supported on some bounded interval. It admits a density, $f_0$, that is continuous and strictly positive on that interval. Furthermore, the conditional density $f_c(\cdot \mid \theta)$ is continuous for every $\theta \in \Theta$.

Theorem 2 (consistency of mixture-of-parameters approximations relative to the logarithmic score). Under assumptions (A) and (B), the mixture-of-parameters approximation is consistent relative to the logarithmic score at the distribution .

While we believe that the mixture-of-parameters technique is consistent under weaker assumptions, this is the strongest result that we have been able to prove. In particular, condition (B) does not allow for the case of unbounded support, such as $\Omega = \mathbb{R}$. However, practical applications often involve a truncation of the support for numerical reasons, as exemplified in Section 4, and in this sense the assumption may not be overly restrictive.

As regards the computation of the CRPS, the logarithmic score and the DSS for a predictive distribution of the form (2), the latter two are straightforward. To compute the CRPS, we note from eq. (21) of Gneiting and Raftery (2007) that

$$\text{CRPS}(F, y) = \mathbb{E}\,|X - y| - \frac{1}{2}\,\mathbb{E}\,|X - X'|, \qquad (8)$$

where $X$ and $X'$ are independent random variables with distribution $F$. The expectations on the right-hand side of (8) often admit closed form expressions that can be derived with techniques described by Jordan (2016) and Taillardat et al. (2016), including but not limited to the ubiquitous case of Gaussian variables. The evaluation then requires $O(m^2)$ operations, as reported in Table 3. In Appendix B, we provide details and investigate the use of numerical integration in (7), which provides an attractive, computationally efficient alternative.
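When the components $F_c(\cdot \mid \theta_i)$ are Gaussian, the expectations in (8) are available in closed form, since differences of independent Gaussians are again Gaussian. The following Python sketch (helper names are ours) implements (8) for an equally weighted Gaussian mixture, an $O(m^2)$ computation:

```python
import math

def _norm_abs_mean(mu, sigma):
    """E|Z| for Z ~ N(mu, sigma^2), via the folded-normal mean."""
    if sigma == 0.0:
        return abs(mu)
    z = mu / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf)

def crps_gaussian_mixture(mus, sigmas, y):
    """CRPS via eq. (8) for F = (1/m) sum_i N(mu_i, sigma_i^2)."""
    m = len(mus)
    # E|X - y|: average over the mixture components
    term1 = sum(_norm_abs_mean(mu - y, s) for mu, s in zip(mus, sigmas)) / m
    # E|X - X'|: double sum over component pairs, O(m^2)
    term2 = sum(
        _norm_abs_mean(mus[i] - mus[j], math.hypot(sigmas[i], sigmas[j]))
        for i in range(m) for j in range(m)
    ) / (m * m)
    return term1 - 0.5 * term2
```

With a single component, the formula reduces to the closed-form Gaussian CRPS, e.g. `crps_gaussian_mixture([0.0], [1.0], 0.0)` is approximately 0.2337.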

3.2 Empirical CDF-based approximation

The empirical CDF-based approximation $\hat{F}^{\,\text{ECDF}}_m$ in (3), which builds on a simulated sample $X_1, \ldots, X_m$, is consistent relative to the CRPS and DSS under minimal assumptions. We prove the following result in Appendix D; it is the best possible of its kind, as it applies to every distribution in the natural domain and does not invoke any assumptions on the Bayesian model.

Theorem 3 (consistency of empirical CDF-based approximations relative to the CRPS and DSS). Under assumption (A), the empirical CDF technique is consistent relative to the CRPS at every distribution with finite mean, and consistent relative to the DSS at every distribution with strictly positive, finite variance.

The computation of the CRPS under $\hat{F}^{\,\text{ECDF}}_m$ requires $O(m \log m)$ operations only. Specifically, let $X_{(1)} \le \cdots \le X_{(m)}$ denote the order statistics of $X_1, \ldots, X_m$, which can be obtained in $O(m \log m)$ operations. Then

$$\text{CRPS}\left(\hat{F}^{\,\text{ECDF}}_m, y\right) = \frac{2}{m^2} \sum_{i=1}^m \left(X_{(i)} - y\right) \left( m\, \mathbb{1}\{y < X_{(i)}\} - i + \frac{1}{2} \right); \qquad (9)$$

see Jordan (2016) and Jordan et al. (2017) for details.

A special case of eq. (8) suggests another way of computing the CRPS, in that

$$\text{CRPS}\left(\hat{F}^{\,\text{ECDF}}_m, y\right) = \frac{1}{m} \sum_{i=1}^m |X_i - y| - \frac{1}{2m^2} \sum_{i=1}^m \sum_{j=1}^m |X_i - X_j|. \qquad (10)$$

While the representations in (9) and (10) are algebraically equivalent, the latter requires $O(m^2)$ operations and thus is inefficient.
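The equivalence of the two representations is easy to verify numerically. A Python sketch (function names ours):

```python
def crps_ecdf_sorted(sample, y):
    """O(m log m) representation, eq. (9), based on order statistics."""
    m = len(sample)
    xs = sorted(sample)
    total = 0.0
    for i, x in enumerate(xs, start=1):
        total += (x - y) * (m * (1 if y < x else 0) - i + 0.5)
    return 2.0 * total / (m * m)

def crps_ecdf_pairwise(sample, y):
    """O(m^2) representation, eq. (10), via pairwise absolute differences."""
    m = len(sample)
    term1 = sum(abs(x - y) for x in sample) / m
    term2 = sum(abs(a - b) for a in sample for b in sample) / (2.0 * m * m)
    return term1 - term2
```

For the sample (0, 1) and y = 0.5, both representations give 0.25, the exact value of the integral in (7) for the two-point empirical CDF.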

3.3 Kernel density estimator

We now discuss conditions for the consistency of the kernel density estimator $\hat{f}^{\,\text{KD}}_m$. In the present case of dependent samples $X_1, \ldots, X_m$, judicious choices of the bandwidth $h_m$ in (4) require knowledge of dependence properties of the sample, and the respective conditions are difficult to verify in practice. In our simulation and data examples, we use a simple implementation based on a Gaussian kernel and the Silverman (1986, Section 3.4) plug-in rule for bandwidth selection. This leads to the specific form

$$\hat{F}^{\,\text{KD}}_m(y) = \frac{1}{m} \sum_{i=1}^m \Phi\!\left(\frac{y - X_i}{h_m}\right), \qquad (11)$$

where $\Phi$ denotes the CDF of the standard normal distribution, and

$$h_m = 1.06\, \hat{\sigma}_m\, m^{-1/5}, \qquad (12)$$

with $\hat{\sigma}_m$ being the standard deviation of $X_1, \ldots, X_m$, for pre-processing costs of $O(m)$, as shown in Table 3. This choice is motivated by simulation evidence in Hall et al. (1995). Using the Sheather and Jones (1991) rule or cross-validation based methods yields slightly inferior results in our experience.²

² Furthermore, Sköld and Roberts (2003) and Kim et al. (2016) discuss bandwidth selection rules that are motivated by density estimation in MCMC samples. However, both studies rely on mean integrated squared error criteria that are different from the performance measures we consider here.
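The Gaussian-kernel CDF estimator (11) with a rule-of-thumb bandwidth can be sketched as follows; we use the normal reference rule h = 1.06 σ̂ m^(−1/5) as one common variant of Silverman's plug-in rule (the exact constant is an implementation choice, assumed here for illustration):

```python
import math
import statistics

def kd_cdf(sample, y):
    """Kernel density CDF, eq. (11): mixture of Gaussian CDFs at the draws."""
    m = len(sample)
    sd = statistics.stdev(sample)
    h = 1.06 * sd * m ** (-0.2)   # normal reference rule-of-thumb bandwidth
    return sum(
        0.5 * (1.0 + math.erf((y - x) / (h * math.sqrt(2.0))))
        for x in sample
    ) / m
```

By symmetry, the estimator assigns CDF value 0.5 at the midpoint of a symmetric sample, and it smooths the jumps of the empirical CDF.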

The score divergence associated with the logarithmic score is the Kullback-Leibler divergence, which is highly sensitive to tail behavior. Therefore, consistency of $\hat{f}^{\,\text{KD}}_m$ requires that the tail properties of the kernel $K$ in (4) and the true posterior predictive density be carefully matched, and any results tend to be technical (cf. Hall, 1987). Roussas (1988) and Györfi et al. (1989) establish almost sure strong uniform consistency of $\hat{f}^{\,\text{KD}}_m$ under mixing and other conditions. As noted in Appendix C, almost sure strong uniform consistency then implies consistency relative to the logarithmic score under assumption (B). Yu (1993) provides generalizations and optimal minimax convergence rates under mixing conditions. However, the respective regularity conditions are stringent and difficult to check in practice. Indeed, Wasserman (2006, p. 57) opines that “Despite the natural role of Kullback-Leibler distance in parametric statistics, it is usually not an appropriate loss function in smoothing problems [...]”

Consistency of $\hat{F}^{\,\text{KD}}_m$ relative to the CRPS can be established under somewhat weaker conditions. For example, the CRPS divergence equals the integrated squared difference $\int (\hat{F}^{\,\text{KD}}_m(z) - F_0(z))^2 \, \mathrm{d}z$, and Tran (1989) studies convergence of the latter quantity under mixing conditions. Kernel density estimation approximations are generally not consistent relative to the DSS due to the variance inflation induced by typical choices of the bandwidth. However, adaptations based on rescaling or weighting allow for kernel density estimation under moment constraints; see, e.g., Jones (1991) and Hall and Presnell (1999).

As this brief review suggests, the theoretical properties of kernel density estimators are complex, and depend on the specifics of both the MCMC sample and the estimator. However, under the CRPS and DSS, a natural alternative is readily available: The empirical CDF approach is simpler and computationally cheaper than kernel density estimation, and is consistent under weak assumptions (see Theorem 3).

3.4 Gaussian approximation

A natural approximation method fits a member of a fixed parametric family, say $\mathcal{M}$, of probability distributions to the MCMC sample $X_1, \ldots, X_m$. The problem of estimating the unknown distribution $F_0$ is then reduced to a finite-dimensional parameter estimation problem. However, approximation methods of this type generally fail to be consistent relative to a strictly proper scoring rule at the distribution $F_0$, unless $F_0$ belongs to $\mathcal{M}$.

The most important case is the quadratic or Gaussian approximation, which takes $\mathcal{M}$ to be the Gaussian family, so that

$$\hat{F}^{\,\text{GA}}_m(y) = \Phi\!\left(\frac{y - \hat{\mu}_m}{\hat{\sigma}_m}\right),$$

where $\hat{\mu}_m$ and $\hat{\sigma}_m$ are the empirical mean and standard deviation of $X_1, \ldots, X_m$. If $F_0$ has a density that is unimodal and symmetric, the approximation can be motivated by a Taylor series expansion of the log predictive density at the mode, similar to Gaussian approximations of posterior distributions in large-sample Bayesian inference (e.g. Kass and Raftery, 1995; Gelman et al., 2014a, Chapter 4).

The Ergodic Theorem implies that the Gaussian approximation is consistent relative to the Dawid–Sebastiani score under minimal conditions.

Theorem 4 (consistency of Gaussian approximations relative to the DSS). Under assumption (A), the Gaussian approximation technique is consistent relative to the DSS at every distribution with strictly positive, finite variance.

As noted, the Gaussian approximation fails to be consistent relative to the CRPS and the logarithmic score. However, the logarithmic score for the Gaussian approximation corresponds to the Dawid–Sebastiani score for the empirical CDF-based approximation $\hat{F}^{\,\text{ECDF}}_m$, in that

$$\text{LogS}\left(\hat{F}^{\,\text{GA}}_m, y\right) = \frac{1}{2}\, \text{DSS}\left(\hat{F}^{\,\text{ECDF}}_m, y\right) + \frac{1}{2} \log(2\pi)$$

for $y \in \mathbb{R}$. Therefore, the Gaussian approximation under the logarithmic score yields model rankings that are identical to those for the empirical CDF technique under the Dawid–Sebastiani score. This suggests that the Gaussian approximation may not be overly restrictive when used in concert with the logarithmic score, which is well in line with the findings in recent empirical work by Warne et al. (2016). However, any such use is subject to the caveat that a proper, but not strictly proper, scoring rule (namely, the Dawid–Sebastiani score) is employed.
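The correspondence is easy to check numerically: when both scores are computed from the same empirical mean and standard deviation, they differ only by an affine transformation, so model rankings coincide. A small Python check (function names ours; DSS as in Table 2):

```python
import math
import statistics

def logs_gaussian_approx(sample, y):
    """LogS of the Gaussian approximation fitted to the sample."""
    mu = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    return (math.log(sd) + 0.5 * math.log(2.0 * math.pi)
            + (y - mu) ** 2 / (2.0 * sd * sd))

def dss_ecdf(sample, y):
    """DSS based on the empirical mean and standard deviation:
    2 log sigma + ((y - mu) / sigma)^2."""
    mu = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    return 2.0 * math.log(sd) + ((y - mu) / sd) ** 2
```

For any sample and realization y, `logs_gaussian_approx` equals `0.5 * dss_ecdf + 0.5 * log(2 pi)`, an increasing affine transformation, so the two scores always order competing forecasts identically.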

4 Simulation study

We now investigate the various approximation methods in a simulation study that is designed to emulate realistic MCMC behavior with dependent samples. Here, the posterior predictive distribution is known by construction, and so we can compare the different approximations to the true forecast distribution.

In order to judge the quality of an approximation $\hat{F}_m$ of $F_0$, we consider the score divergence $d(\hat{F}_m, F_0)$. Note that $d(\hat{F}_m, F_0)$ is a random variable, since $\hat{F}_m$ depends on the particular MCMC sample $\theta_1, \ldots, \theta_m$ or $X_1, \ldots, X_m$. In our results below, we therefore consider the distribution of $d(\hat{F}_m, F_0)$ across repeated simulation runs. For a generic approximation method producing an estimate $\hat{F}^{(j)}_m$ in simulation run $j$, we proceed as follows:

  • For simulation run $j = 1, \ldots, J$:

    • Draw MCMC samples $\theta^{(j)}_1, \ldots, \theta^{(j)}_m$ and $X^{(j)}_1, \ldots, X^{(j)}_m$.

    • Compute the approximation $\hat{F}^{(j)}_m$ and the divergence $d(\hat{F}^{(j)}_m, F_0)$ for the approximation methods and scoring rules under consideration.

  • For each approximation method and scoring rule, summarize the distribution of $d(\hat{F}^{(j)}_m, F_0)$, $j = 1, \ldots, J$.

In order to simplify notation, we typically suppress the superscript $j$ that identifies the Monte Carlo iteration. The results below are based on $J = 1\,000$ replicates.

4.1 Data generating process


Parameter Main role Value(s) considered
Persistence of {0.1, 0.5, 0.9}
Unconditional mean of 2
Unconditional variance of {12, 20}

Table 4: Hyper-parameters for the data generating process in the simulation setting of equations (14) to (17).

We generate sequences of parameter draws and observations in such a way that the invariant distribution is a compound Gaussian distribution or normal scale mixture, i.e., a mixture of Gaussian CDFs $\Phi$ with respect to a mixing measure over the variance. Depending on the mixing measure, which assumes the role of the posterior distribution in the general Bayesian model (1), the invariant distribution can be modeled flexibly, including many well-known parametric distributions (Gneiting, 1997). As detailed below, our choice of the mixing measure implies that

(13)

i.e., the invariant distribution is, up to location and scale, a standard Student-t distribution with a given number of degrees of freedom. To mimic a realistic MCMC scenario with dependent draws, we proceed as proposed by Fox and West (2011). Given the hyper-parameter values in Table 4, let

(14)
(15)
(16)
(17)

where IG$(a, b)$ denotes the Inverse Gamma distribution, parametrized such that $Z \sim \text{IG}(a, b)$ when $1/Z \sim \text{G}(a, b)$, with $\text{G}(a, b)$ being the Gamma distribution with shape $a$ and rate $b$.

Table 4 summarizes our choices for the parameter configurations of the data generating process. The first parameter determines the persistence of the chain, in that its unconditional mean can be viewed as an average autoregressive coefficient (Fox and West, 2011, Section 2.3). We consider three values, aiming to mimic MCMC chains with different persistence properties. The second parameter represents a scale effect, and the third governs the tail thickness of the unconditional Student-t distribution in (13). We consider degrees-of-freedom values of 12 and 20, which seem realistic for macroeconomic variables, such as the growth rate of the gross domestic product (GDP), that feature prominently in the empirical literature.

4.2 Approximation methods

We consider the following approximation methods, which have been discussed in detail in Section 3. The first approximation uses a sequence of parameter draws, and the other three employ an MCMC sample .

  • Mixture-of-parameters estimator in (2), which here is of the form

    where is the predictive standard deviation drawn in MCMC iteration .

  • Empirical CDF-based approximation in (3).

  • The nonparametric kernel density estimator using a Gaussian kernel and the Silverman rule for bandwidth selection, with predictive CDF of the form (11).

  • Gaussian approximation in (5).

It is interesting to observe that here the mixture-of-parameters estimator is a scale mixture of centered Gaussian distributions and the kernel density estimator is a location mixture of normals, whereas the Gaussian approximation is a single Gaussian.
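For reference, the CRPS of the empirical CDF-based approximation in (3) has a well-known closed form, CRPS(F_m, y) = (1/m) Σ_i |x_i − y| − (1/(2m²)) Σ_{i,j} |x_i − x_j| (Gneiting and Raftery, 2007), so no numerical integration is needed. A minimal Python sketch of this formula follows; it is quadratic in the sample size, whereas efficient implementations use a sorted-sample representation.

```python
def crps_ecdf(sample, y):
    """CRPS of the empirical CDF of `sample` at observation y,
    via the kernel representation E|X - y| - 0.5 E|X - X'|."""
    m = len(sample)
    term1 = sum(abs(x - y) for x in sample) / m
    term2 = sum(abs(xi - xj) for xi in sample for xj in sample) / (2 * m * m)
    return term1 - term2

# A degenerate sample reduces the CRPS to the absolute error.
print(crps_ecdf([0.0, 0.0], 1.0))  # 1.0
```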

The conditions for consistency of the mixture-of-parameters and empirical CDF approximations relative to the CRPS in Theorems 1 and 3 are satisfied. Furthermore, one might argue that numerically the support of and is bounded (cf. below), and then the assumptions of Theorem 2 also are satisfied. Clearly, the Gaussian approximation fails to be consistent relative to the CRPS or the logarithmic score, as is not Gaussian.

For each approximation method, scoring rule , sample size and replicate , we evaluate the score divergence . The divergence takes the form of a univariate integral (cf. Table 2) that is not available in closed form. Therefore, we compute by numerical integration as implemented in the R function integrate. This is unproblematic if the scoring rule is the CRPS. For the logarithmic score, the integration is numerically challenging, as the logarithm of the densities needs to be evaluated in their tails. We therefore truncate the support of the integral to the minimal and maximal values that yield numerically finite values of the integrand.
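To illustrate the computation, the CRPS divergence between two CDFs F and G is the Cramér distance, the integral of (F(z) − G(z))² over the real line (cf. Table 2). The following stdlib-only Python sketch approximates it with a trapezoidal rule on a truncated grid, in the spirit of the integrate-based computation described above; the grid limits and the choice of two normal CDFs are illustrative assumptions.

```python
import math

def norm_cdf(z, mu=0.0, sd=1.0):
    """Standard-library normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((z - mu) / (sd * math.sqrt(2.0))))

def crps_divergence(F, G, lo=-20.0, hi=20.0, n=20_000):
    """Cramer distance: integral of (F(z) - G(z))^2 dz, approximated
    by the trapezoidal rule on the truncated interval [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        z = lo + k * h
        w = 0.5 if k in (0, n) else 1.0  # trapezoidal endpoint weights
        total += w * (F(z) - G(z)) ** 2
    return total * h

d0 = crps_divergence(norm_cdf, norm_cdf)                  # zero for F = G
d1 = crps_divergence(norm_cdf, lambda z: norm_cdf(z, mu=1.0))  # positive
```

The divergence vanishes when the two distributions coincide and is strictly positive for a location-shifted normal, as expected for a proper scoring rule.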

4.3 Main results

Figure 1: Score divergences in the simulation study with . Panels show the logarithmic score and the CRPS. For a given method and MCMC sample size, the bars range from the 10th to the 90th percentile of the score divergences across 1 000 replicates. The squares mark the respective medians.
Figure 2: Performance of the mixture-of-parameters estimator under the logarithmic score and the CRPS. The design is as in Figure 1, but for smaller sample sizes. For comparison, the blue horizontal line marks the median divergence of the kernel density estimator based on 20 000 draws.

In the interest of brevity, we restrict attention to results for a single set of parameters of the data generating process, namely . This implies an unconditional Student distribution with 14 degrees of freedom, and intermediate autocorrelation of the MCMC draws. The results for the other parameter constellations in Table 4 are similar and available in the Online Supplement.

Figure 1 illustrates the performance of the approximation methods under the logarithmic score and the CRPS, by showing the distribution of the score divergence as the sample size grows. The mixture-of-parameters estimator dominates the other methods by a wide margin: Its divergences are very close to zero, and show little variation across replicates. Under the logarithmic score, the performance of the kernel density estimator is highly variable across the replicates, even for large sample sizes. The variability is smaller under the CRPS, where the kernel density approach using the Silverman (1986) rule of thumb for bandwidth selection performs similarly to the empirical CDF-based approximation. Other bandwidth selection rules we have experimented with tend to be inferior, as indicated by slower convergence and higher variability across replicates. Finally, we observe the lack of consistency of the Gaussian approximation.

Figure 2 provides insight into the performance of the mixture-of-parameters approximation for small MCMC samples. Using as few as 150 draws, the method attains a lower median CRPS divergence than the kernel density estimator based on 20 000 draws. The superiority of the mixture-of-parameters estimator is even more pronounced under the logarithmic score, where only 50 draws are required to outperform the kernel density estimator based on 20 000 draws.

4.4 Thinning the MCMC sample

Figure 3: Performance of three sampling strategies: S1: draws, without thinning; S2: MCMC draws, retaining every th draw from a sequence of draws; S3: draws, without thinning. Panels, by row: logarithmic score and CRPS for the mixture-of-parameters estimator (top); logarithmic score for kernel density estimation and CRPS for the empirical CDF-based approximation (bottom). Bars range from the 10th to the 90th percentile of the score divergences across 1 000 replicates. The squares mark the respective medians.

We further investigate the effect of thinning the Markov chains. Thinning a chain by a factor of means that only every th simulated value is retained, and the rest is discarded. Thinning is often applied routinely with the goal of reducing autocorrelation in the draws. Of the articles listed in Table 1, about one in four explicitly reports thinning of the simulation output, with thinning factors ranging from 2 to 100. Here we compare three sampling approaches:

  • S1: MCMC draws, without thinning

  • S2: MCMC draws, retaining every th draw from a sequence of draws

  • S3: MCMC draws, without thinning

Note that the samples in S1 and S3 have the same dynamic properties, whereas S2 will typically produce a chain with less autocorrelation. Furthermore, S2 and S3 require the same computing time, which exceeds that of S1 by a factor of ten. Figure 3 summarizes the corresponding simulation results, using parameter values and , and varying values of the persistence parameter . We report results for four popular combinations of scoring rules and approximation methods.

As expected, S2 tends to outperform S1: When the sample size is held fixed, less autocorrelation entails more precise estimators. While the difference in performance is modest in most cases, S2 attains large (relative) gains over S1 when the mixture-of-parameters estimator is applied to a very persistent sample with . This can be explained by the direct effect of the persistence parameter on the parameter draws , whereas the influence is less immediate for the KDE and ECDF approximation methods, which are based on the sequence obtained in an additional sampling step. Furthermore, S3 outperforms S2 in all cases covered. While the effects of thinning have not been studied in the context of predictive distributions before, this observation is in line with extant reports of the greater precision of unthinned chains (Geyer, 1992; MacEachern and Berliner, 1994; Link and Eaton, 2012). The performance gap between S3 and S2 is modest for the mixture-of-parameters estimator (top row of Figure 3), but very pronounced for the other estimators.
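The trade-off between S1, S2, and S3 can be mimicked with a toy AR(1) chain standing in for persistent MCMC output: thinning by a factor of 10 reduces the lag-1 autocorrelation from roughly phi to roughly phi to the tenth power, but discards 90% of the draws. A stdlib-only Python sketch, with phi = 0.9 and the chain length chosen for illustration:

```python
import random

def ar1_chain(n, phi, rng):
    """Simulate a stationary AR(1) chain, a stand-in for persistent MCMC output."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

rng = random.Random(7)
full = ar1_chain(100_000, phi=0.9, rng=rng)
thinned = full[::10]  # retain every 10th draw

r_full = lag1_autocorr(full)     # close to phi = 0.9
r_thin = lag1_autocorr(thinned)  # close to phi ** 10, about 0.35
```

The thinned chain is far less autocorrelated, which is precisely why S2 can beat S1 at a fixed sample size, while S3 dominates by simply using all of the (cheaply available) draws.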

From a practical perspective, thinning thus seems justified if, and only if, a small MCMC sample is desired and the mixture-of-parameters estimator is applied. Two arguments in favor of a small sample appear particularly relevant even today. First, storing large amounts of data on public servers (as is often done for replication purposes) may be costly or inconvenient. Second, post-processing procedures such as score computations applied to the MCMC sample may be computationally demanding (cf. Table 3), and therefore may encourage and justify thinning.

5 Economic data example

In real-world uses of Bayesian forecasting methods, the posterior predictive distribution is typically not available in closed form. Therefore, computing or estimating the object of interest for assessing consistency, i.e., the score divergence , is not feasible. In the subsequent data example, we thus compare the approximation methods via their out-of-sample predictive performance, and examine the variation of the mean scores across chains obtained from replicates with distinct random seeds. While studying the predictive performance does not allow us to assess the consistency of the approximation methods, it does allow us to assess the variability and applicability of the approximations in a practical setting.

5.1 Data

We consider quarterly growth rates of U.S. real gross domestic product (GDP), as illustrated in Figure 4. The training sample used for model estimation is recursively expanded as forecasting moves forward in time. We use the real-time data set provided by the Federal Reserve Bank of Philadelphia (http://www.phil.frb.org/research-and-data/real-time-center/real-time-data/), which provides historical snapshots of the data vintages available at any given date in the past, and consider forecasts for the period from the second quarter of 1996 to the third quarter of 2014, for a total of forecast cases. For brevity, we present results for a prediction horizon of one quarter only. The Online Supplement contains results for longer horizons, which are qualitatively similar to the ones presented here.

Figure 4: Quarterly growth rate of U.S. real GDP from the second quarter of 1947 to the third quarter of 2014, using the data vintage from the first quarter of 2015.

5.2 Probabilistic forecasts

To construct density forecasts, we consider an autoregressive (AR) model with a single lag and state-dependent error term variance, in that

(18)

where and is a discrete state variable that switches according to a first-order Markov chain. The model, which is a variant of the Markov switching model proposed by Hamilton (1989), provides a simple description of time-varying heteroscedasticity. The latter is an important stylized feature of macroeconomic time series (see, e.g., Clark and Ravazzolo, 2015).

We conduct Bayesian inference via a Gibbs sampler, for which we give details in Appendix E. Let denote the complete set of latent states and model parameters at iteration of the Gibbs sampler. Conditional on , the predictive distribution under the model in (18) is Gaussian with mean and standard deviation , where we suppress time and forecast horizon indices for simplicity. As an illustration, Figure 5 presents histograms of the MCMC draws for and for forecasts made in the third quarter of 2014. The bimodal histogram for clearly reflects the two states; estimation uncertainty about the state-specific quantities creates additional variation. The predictive distributions obtained by the mixture-of-parameters and Gaussian approximations are also shown. Alternatively, one could construct a mixture-of-parameters estimator by using a modified parameter vector that comprises predicted probabilities (rather than simulated realizations) of the future latent states. Given MCMC iterations, this alternative estimator yields a mixture of Gaussian component distributions. For reasons of computational and conceptual simplicity, we prefer the -component estimator described in the text.

Figure 5: Forecasts of the growth rate of U.S. real GDP in the third quarter of 2014. Top: Histograms based on 40 000 posterior draws, without thinning. Bottom: Posterior predictive densities, with the vertical line indicating the realized rate. The kernel density estimator is visually indistinguishable from the mixture-of-parameters estimator, and is therefore omitted.
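Given the conditionally Gaussian structure described above, the mixture-of-parameters estimator is the equal-weight average of the component normal CDFs across MCMC draws, and its density (needed for the logarithmic score) is the corresponding Gaussian mixture. A stdlib-only Python sketch follows; the (mu, sd) pairs are hypothetical draws loosely mimicking the two-state pattern visible in Figure 5, not actual sampler output.

```python
import math

def mop_cdf(z, params):
    """Mixture-of-parameters predictive CDF: the equal-weight average of
    the conditionally Gaussian CDFs Phi((z - mu) / sd) over MCMC draws."""
    return sum(0.5 * (1.0 + math.erf((z - mu) / (sd * math.sqrt(2.0))))
               for mu, sd in params) / len(params)

def mop_density(z, params):
    """Corresponding predictive density: an equal-weight Gaussian mixture."""
    return sum(math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
               for mu, sd in params) / len(params)

# Hypothetical (mean, standard deviation) draws from a two-state model.
draws = [(0.5, 0.4), (0.6, 0.5), (3.0, 1.2), (2.8, 1.0)]
lo, hi = mop_cdf(-50.0, draws), mop_cdf(50.0, draws)  # essentially 0 and 1
```

Evaluating the logarithmic score then amounts to computing the negative log of mop_density at the realized value, while the CRPS of such a Gaussian mixture also admits a closed form (Grimit et al., 2006).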

At each forecast origin date , we produce 10 000 burn-in draws and use 40 000 draws post burn-in. We construct 16 parallel chains in this way. The (time-averaged) mean score of a given approximation method, based on MCMC draws within chain , is

where is the probabilistic forecast at time . The variation of across chains is due to differences in random seeds. From a pragmatic perspective, a good approximation method should be such that the values are small and display little variation.

5.3 Results

In Figure 6, the mean score is plotted against the size of the MCMC sample. The mixture-of-parameters approximation outperforms its competitors: Its scores display the smallest variation across chains, for both the CRPS and the logarithmic score, and for all sample sizes. The scores of the mixture-of-parameters estimator also tend to be lower (i.e., better) than the scores for the other methods. The kernel density estimator performs poorly for small sample sizes, with the scores varying substantially across chains. Under the CRPS, the kernel density estimator is dominated by the empirical CDF technique, which can be interpreted as kernel density estimation with a bandwidth of zero.

Figure 6: Mean score in the data example against sample size, under the logarithmic score and the CRPS. The dots represent 16 parallel MCMC chains, and the lines connect averages across chains.

6 Discussion

We have investigated how to make and evaluate probabilistic forecasts based on MCMC output. The formal notion of consistency allows us to assess the appropriateness of approximation methods within the framework of proper scoring rules. Despite their popularity in the literature, Gaussian approximations generally fail to be consistent. Conditions for consistency depend on the scoring rule of interest, and we have demonstrated that the mixture-of-parameters and empirical CDF-based approximations are consistent relative to the CRPS under minimal conditions. Proofs of consistency relative to the logarithmic score generally rely on stringent assumptions.

In view of these theoretical considerations, as well as the practical perspective taken in our simulation and data examples, we generally recommend the mixture-of-parameters estimator, which provides an efficient approximation that outperformed all alternatives in our experiments. This can be explained by the fact that it exploits the parametric structure of the Bayesian model. The empirical CDF-based approximation provides a good alternative if the conditional distributions fail to be available in closed form, or if for some reason the draws are to be made directly from the posterior predictive distribution, as opposed to using parameter draws. The empirical CDF-based approximation is available under the CRPS and DSS, but not under the logarithmic score, where a density is required. Under the logarithmic score, the case for the mixture-of-parameters estimator is thus particularly strong. In particular, the score’s sensitivity to the tails of the distribution renders kernel density estimators unattractive from both theoretical and applied perspectives.

Our recommendations have been implemented in the scoringRules package for R (R Core Team, 2017); see Jordan et al. (2017) for details. The functions and default choices aim to provide readily applicable and efficient approximations. The mixture-of-parameters estimator depends on the specific structure of the Bayesian model and can therefore not be covered in full generality. However, the implemented analytical solutions of the CRPS and logarithmic score allow for straightforward and efficient computation. The scoringRules package further contains functions and data for replicating the simulation and case study, with details provided at https://github.com/FK83/scoringRules/blob/master/KLTG2016_replication.pdf.

Ferro (2014) studies the notion of a fair scoring rule in the context of ensemble weather forecasts. A scoring rule is called fair if the expected score is optimal for samples with members that behave as though they and the verifying observation were sampled from the same distribution. While certainly relevant in the context of meteorological forecast ensembles, where the sample size is typically between 10 and 50, these considerations seem less helpful in the context of MCMC output, where is on the order of thousands and can be increased at low cost. Furthermore, the proposed small sample adjustments and the characterization of fair scores hold for independent samples only, an assumption that is thoroughly violated in the case of MCMC.

We are interested in evaluating probabilistic forecasts produced via MCMC, so that the predictive performance of a model during an out-of-sample, test or evaluation period can be used to estimate its forecast performance on future occasions. In contrast, information criteria suggest a different route towards estimating forecast performance (Spiegelhalter et al., 2002; Watanabe, 2010; Hooten and Hobbs, 2015). They consider a method’s in-sample performance, and account for model complexity via penalty terms. Preferred ways of doing so have been the subject of methodological debate, and a consensus has not been reached; see, e.g., the comments in Gelman et al. (2014b) and Spiegelhalter et al. (2014). The present analysis does not concern in-sample comparisons, and does not address the question of whether these are more or less effective than out-of-sample comparisons. However, our results and observations indicate that out-of-sample comparisons of the type considered here yield robust results across a range of implementation choices.

Necessarily, the scope of this paper is restricted along several dimensions. First, we have focused on univariate continuous forecast distributions. The corresponding applied literature is large and features a rich variety of implementation variants (cf. Table 1). Nevertheless, there are other empirically relevant setups, notably univariate discrete distributions and multivariate continuous distributions. Discrete distributions can be handled using tools similar to those presented here (e.g. Held et al., 2017, Section 4.1). The multivariate case features some challenges, including the choice of the scoring rule (Gneiting et al., 2008; Scheuerer and Hamill, 2015) and the notorious ‘curse of dimensionality’ when applying nonparametric techniques. To retain focus, we need to leave these challenges for future work. Second, our theoretical results focus on consistency but do not cover rates of convergence. Results on the latter tend to rely on theoretical conditions that are hard to verify empirically, and the plausibility of which is likely to depend on the specifics of the MCMC algorithm. By contrast, many of our consistency results require only minimal conditions that hold across a wide range of sampling algorithms in the interdisciplinary applied literature.

References

  • Adolfson et al. (2007) Adolfson, M., Linde, J. and Villani, M. (2007). Forecasting performance of an open economy DSGE model. Econometric Rev., 26, 289–328.
  • Amisano and Giacomini (2007) Amisano, G. and Giacomini, R. (2007). Comparing density forecasts via weighted likelihood ratio tests. J. Bus. Econom. Statist., 25, 177–190.
  • Baştürk et al. (2014) Baştürk, N., Cakmakli, C., Ceyhan, S. P. and Van Dijk, H. K. (2014). Posterior-predictive evidence on US inflation using extended new Keynesian Phillips curve models with non-filtered data. J. Appl. Econometrics, 29, 1164–1182.
  • Bauwens et al. (2015) Bauwens, L., Koop, G., Korobilis, D. and Rombouts, J. V. K. (2015). The contribution of structural break models to forecasting macroeconomic series. J. Appl. Econometrics, 30, 596–620.
  • Belmonte et al. (2014) Belmonte, M. A. G., Koop, G. and Korobilis, D. (2014). Hierarchical shrinkage in time-varying parameter models. J. Forecast., 33, 80–94.
  • Berg (2017) Berg, T. O. (2017). Forecast accuracy of a BVAR under alternative specifications of the zero lower bound. Stud. Nonlinear Dyn. Econom., 21.
  • Berg and Henzel (2015) Berg, T. O. and Henzel, S. R. (2015). Point and density forecasts for the Euro area using Bayesian VARs. Int. J. Forecast., 31, 1067–1095.
  • Berrocal et al. (2014) Berrocal, V. J., Gelfand, A. E. and Holland, D. M. (2014). Assessing exceedance of ozone standards: A space-time downscaler for fourth highest ozone concentrations. Environmetrics, 25, 279–291.
  • Brandt et al. (2014) Brandt, P. T., Freeman, J. R. and Schrodt, P. A. (2014). Evaluating forecasts of political conflict dynamics. Int. J. Forecast., 30, 944–962.
  • Broms et al. (2016) Broms, K. M., Hooten, M. B. and Fitzpatrick, R. M. (2016). Model selection and assessment for multi-species occupancy models. Ecology, 97, 1759–1770.
  • Carriero et al. (2015a) Carriero, A., Clark, T. E. and Marcellino, M. (2015a). Bayesian VARs: Specification choices and forecast accuracy. J. Appl. Econometrics, 30, 46–73.
  • Carriero et al. (2015b) Carriero, A., Clark, T. E. and Marcellino, M. (2015b). Realtime nowcasting with a Bayesian mixed frequency model with stochastic volatility. J. R. Stat. Soc. Ser. A. Stat. Soc., 178, 837–862.
  • Carriero et al. (2016) Carriero, A., Clark, T. E. and Marcellino, M. (2016). Common drifting volatility in large Bayesian VARs. J. Bus. Econom. Statist., 34, 375–390.
  • Carriero et al. (2015c) Carriero, A., Mumtaz, H. and Theophilopoulou, A. (2015c). Macroeconomic information, structural change, and the prediction of fiscal aggregates. Int. J. Forecast., 31, 325–348.
  • Clark and McCracken (2013) Clark, T. and McCracken, M. (2013). Advances in forecast evaluation. In Handbook of Economic Forecasting (G. Elliott and A. Timmermann, eds.), vol. 2. Elsevier, 1107–1201.
  • Clark (2011) Clark, T. E. (2011). Real-time density forecasts from BVARs with stochastic volatility. J. Bus. Econom. Statist., 29, 327–341.
  • Clark and Ravazzolo (2015) Clark, T. E. and Ravazzolo, F. (2015). Macroeconomic forecasting performance under alternative specifications of time-varying volatility. J. Appl. Econometrics, 30, 551–575.
  • Craiu and Rosenthal (2014) Craiu, R. V. and Rosenthal, J. S. (2014). Bayesian computation via Markov chain Monte Carlo. Annu. Rev. Stat. Appl., 1, 179–201.
  • Dawid and Sebastiani (1999) Dawid, A. P. and Sebastiani, P. (1999). Coherent dispersion criteria for optimal experimental design. Ann. Statist., 27, 65–81.
  • De la Cruz and Branco (2009) De la Cruz, R. and Branco, M. D. (2009). Bayesian analysis for nonlinear regression model under skewed errors, with application in growth curves. Biom. J., 51, 588–609.
  • Dehling and Philipp (2002) Dehling, H. and Philipp, W. (2002). Empirical process techniques for dependent data. In Empirical Process Techniques for Dependent Data (H. Dehling, T. Mikosch and M. Sørensen, eds.). Birkhäuser, Boston, 3–113.
  • Delatola and Griffin (2011) Delatola, E.-I. and Griffin, J. E. (2011). Bayesian nonparametric modelling of the return distribution with stochastic volatility. Bayesian Anal., 6, 901–926.
  • DelSole and Tippett (2014) DelSole, T. and Tippett, M. K. (2014). Comparing forecast skill. Mon. Weather Rev., 142, 4658–4678.
  • Diebold et al. (1998) Diebold, F. X., Gunther, T. A. and Tay, A. S. (1998). Evaluating density forecasts with applications to financial risk management. Internat. Econom. Rev., 39, 863–883.
  • Diebold and Mariano (1995) Diebold, F. X. and Mariano, R. S. (1995). Comparing predictive accuracy. J. Bus. Econom. Statist., 13, 253–263.
  • Diks et al. (2011) Diks, C., Panchenko, V. and van Dijk, D. (2011). Likelihood-based scoring rules for comparing density forecasts in tails. J. Econometrics, 163, 215–230.
  • Ferro (2014) Ferro, C. A. T. (2014). Fair scores for ensemble forecasts. Q. J. Royal Meteorol. Soc., 140, 1917–1923.
  • Fox and West (2011) Fox, E. B. and West, M. (2011). Autoregressive models for variance matrices: Stationary inverse Wishart processes. Preprint, available at http://arxiv.org/abs/1107.5239.
  • Friederichs and Thorarinsdottir (2012) Friederichs, P. and Thorarinsdottir, T. L. (2012). Forecast verification for extreme value distributions with an application to probabilistic peak wind prediction. Environmetrics, 23, 579–594.
  • Gelfand and Smith (1990) Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc., 85, 398–409.
  • Gelman et al. (2014a) Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A. and Rubin, D. B. (2014a). Bayesian Data Analysis. 3rd ed. Chapman & Hall/CRC, Boca Raton.
  • Gelman et al. (2014b) Gelman, A., Hwang, J. and Vehtari, A. (2014b). Understanding predictive information criteria for Bayesian models. Stat. Comput., 24, 997–1016.
  • Gelman et al. (1996) Gelman, A., Meng, X.-L. and Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies. Statist. Sinica, 6, 733–760.
  • Genon-Catalot et al. (2000) Genon-Catalot, V., Jeantheau, T. and Larédo, C. (2000). Stochastic volatility models as hidden Markov models and statistical applications. Bernoulli, 6, 1051–1079.
  • Geweke (2005) Geweke, J. (2005). Contemporary Bayesian Econometrics and Statistics. John Wiley & Sons, Hoboken.
  • Geweke and Amisano (2010) Geweke, J. and Amisano, G. (2010). Comparing and evaluating Bayesian predictive distributions of asset returns. Int. J. Forecast., 26, 216–230.
  • Geweke and Amisano (2011) Geweke, J. and Amisano, G. (2011). Hierarchical Markov normal mixture models with applications to financial asset returns. J. Appl. Econometrics, 26, 1–29.
  • Geyer (1992) Geyer, C. J. (1992). Practical Markov chain Monte Carlo. Statist. Sci., 7, 473–483.
  • Geyer (1995) Geyer, C. J. (1995). Conditioning in Markov chain Monte Carlo. J. Comput. Graph. Statist., 4, 148–154.
  • Giacomini and White (2006) Giacomini, R. and White, H. (2006). Tests of conditional predictive ability. Econometrica, 74, 1545–1578.
  • Giannone et al. (2015) Giannone, D., Lenza, M. and Primiceri, G. E. (2015). Prior selection for vector autoregressions. Rev. Econ. Stat., 97, 436–451.
  • Gilks et al. (1996) Gilks, W. R., Richardson, S. and Spiegelhalter, D. J. (eds.) (1996). Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, Boca Raton.
  • Gneiting (1997) Gneiting, T. (1997). Normal scale mixtures and dual probability densities. J. Stat. Comput. Simul., 59, 375–384.
  • Gneiting et al. (2007) Gneiting, T., Balabdaoui, F. and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. J. R. Stat. Soc. Ser. B. Stat. Methodol., 69, 243–268.
  • Gneiting and Katzfuss (2014) Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting. Annu. Rev. Stat. Appl., 1, 125–151.
  • Gneiting and Raftery (2007) Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. J. Amer. Statist. Assoc., 102, 359–378.
  • Gneiting and Ranjan (2011) Gneiting, T. and Ranjan, R. (2011). Comparing density forecasts using threshold- and quantile-weighted scoring rules. J. Bus. Econom. Statist., 29, 411–422.
  • Gneiting et al. (2008) Gneiting, T., Stanberry, L. I., Grimit, E. P., Held, L. and Johnson, N. A. (2008). Assessing probabilistic forecasts of multivariate quantities, with an application to ensemble predictions of surface winds (with discussion and rejoinder). Test, 17, 211–264.
  • Good (1952) Good, I. J. (1952). Rational decisions. J. R. Stat. Soc. Ser. B. Stat. Methodol., 14, 107–114.
  • Greenberg (2013) Greenberg, E. (2013). Introduction to Bayesian Econometrics. 2nd ed. Cambridge University Press, New York.
  • Grimit et al. (2006) Grimit, E. P., Gneiting, T., Berrocal, V. J. and Johnson, N. A. (2006). The continuous ranked probability score for circular variables and its application to mesoscale forecast ensemble verification. Q. J. Royal Meteorol. Soc., 132, 2925–2942.
  • Groen et al. (2013) Groen, J. J. J., Paap, R. and Ravazzolo, F. (2013). Real-time inflation forecasting in a changing world. J. Bus. Econom. Statist., 31, 29–44.
  • Gschlößl and Czado (2007) Gschlößl, S. and Czado, C. (2007). Spatial modelling of claim frequency and claim size in non-life insurance. Scand. Actuar. J., 2007, 202–225.
  • Györfi et al. (1989) Györfi, L., Härdle, W., Sarda, P. and Vieu, P. (1989). Nonparametric Curve Estimation from Time Series. Springer, Berlin.
  • Hall (1987) Hall, P. (1987). On Kullback-Leibler loss and density estimation. Ann. Statist., 15, 1491–1519.
  • Hall et al. (1995) Hall, P., Lahiri, S. N. and Truong, Y. K. (1995). On bandwidth choice for density estimation with dependent data. Ann. Statist., 23, 2241–2263.
  • Hall and Presnell (1999) Hall, P. and Presnell, B. (1999). Density estimation under constraints. J. Comput. Graph. Statist., 8, 259–277.
  • Hamilton (1989) Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57, 357–384.
  • Harris (1989) Harris, I. R. (1989). Predictive fit for natural exponential families. Biometrika, 76, 675–684.
  • Held et al. (2017) Held, L., Meyer, S. and Bracher, J. (2017). Probabilistic forecasting in infectious disease epidemiology: The 13th Armitage lecture. Stat. Med., 36, 3443–3460.
  • Held et al. (2010) Held, L., Schrödle, B. and Rue, H. (2010). Posterior and cross-validatory predictive checks: A comparison of MCMC and INLA. In Statistical Modelling and Regression Structures: Festschrift in Honour of Ludwig Fahrmeir (T. Kneib and G. Tutz, eds.). Physica-Verlag HD, Heidelberg, 91–110.
  • Hooten and Hobbs (2015) Hooten, M. B. and Hobbs, N. T. (2015). A guide to Bayesian model selection for ecologists. Ecol. Monogr., 85, 3–28.
  • Jensen and Maheu (2010) Jensen, M. J. and Maheu, J. M. (2010). Bayesian semiparametric stochastic volatility modeling. J. Econometrics, 157, 306–316.
  • Jensen and Maheu (2013) Jensen, M. J. and Maheu, J. M. (2013). Bayesian semiparametric multivariate GARCH modeling. J. Econometrics, 176, 3–17.
  • Jensen and Maheu (2014) Jensen, M. J. and Maheu, J. M. (2014). Estimating a semiparametric asymmetric stochastic volatility model with a Dirichlet process mixture. J. Econometrics, 178, 523–538.
  • Jin and Maheu (2013) Jin, X. and Maheu, J. M. (2013). Modeling realized covariances and returns. J. Financ. Economet., 11, 335–369.
  • Jin and Maheu (2016) Jin, X. and Maheu, J. M. (2016). Bayesian semiparametric modeling of realized covariance matrices. J. Econometrics, 192, 19–39.
  • Jochmann et al. (2010) Jochmann, M., Koop, G. and Strachan, R. W. (2010). Bayesian forecasting using stochastic search variable selection in a VAR subject to breaks. Int. J. Forecast., 26, 326–347.
  • Jones (1991) Jones, M. C. (1991). On correcting for variance inflation in kernel density estimation. Comput. Stat. Data Anal., 11, 3–15.
  • Jordan (2016) Jordan, A. (2016). Facets of forecast evaluation. Ph.D. thesis, Karlsruhe Institute of Technology, available at https://publikationen.bibliothek.kit.edu/1000063629.
  • Jordan et al. (2017) Jordan, A., Krüger, F. and Lerch, S. (2017). Evaluating probabilistic forecasts with the R package scoringRules. Preprint, available at https://arxiv.org/abs/1709.04743.
  • Kallache, M., Maksimovich, E., Michelangeli, P.-A. and Naveau, P. (2010). Multimodel combination by a Bayesian hierarchical model: Assessment of ice accumulation over the oceanic Arctic region. J. Clim., 23, 5421–5436.
  • Kass, R. E. and Raftery, A. E. (1995). Bayes factors. J. Amer. Statist. Assoc., 90, 773–795.
  • Kastner, G. (2016). Dealing with stochastic volatility in time series using the R package stochvol. J. Stat. Softw., 69, 1–30.
  • Kim, H. J., MacEachern, S. N. and Jung, Y. (2016). Bandwidth selection for kernel density estimation with a Markov chain Monte Carlo sample. Preprint, available at http://arxiv.org/abs/1607.08274.
  • Krüger, F., Clark, T. E. and Ravazzolo, F. (2017). Using entropic tilting to combine BVAR forecasts with external nowcasts. J. Bus. Econom. Statist., 35, 470–485.
  • Krüger, F. and Nolte, I. (2016). Disagreement versus uncertainty: Evidence from distribution forecasts. J. Bank. Finance, 72, 172–186.
  • Kullback, S. (1959). Information Theory and Statistics. John Wiley & Sons.
  • Leininger, T. J., Gelfand, A. E., Allen, J. M. and Silander Jr, J. A. (2013). Spatial regression modeling for compositional data with many zeros. J. Agric. Biol. Environ. Stat., 18, 314–334.
  • Li, F., Villani, M. and Kohn, R. (2010). Flexible modeling of conditional distributions using smooth mixtures of asymmetric Student densities. J. Statist. Plann. Inference, 140, 3638–3654.
  • Link, W. A. and Eaton, M. J. (2012). On thinning of chains in MCMC. Methods Ecol. Evol., 3, 112–115.
  • Little, R. J. (2006). Calibrated Bayes: A Bayes/frequentist roadmap. Amer. Statist., 60, 213–223.
  • Liu, C. and Maheu, J. M. (2009). Forecasting realized volatility: a Bayesian model-averaging approach. J. Appl. Econometrics, 24, 709–733.
  • Lopes, H. F., Salazar, E. and Gamerman, D. (2008). Spatial dynamic factor analysis. Bayesian Anal., 3, 759–792.
  • MacEachern, S. N. and Berliner, L. M. (1994). Subsampling the Gibbs sampler. Amer. Statist., 48, 188–190.
  • Maheu, J. M. and Gordon, S. (2008). Learning, forecasting and structural breaks. J. Appl. Econometrics, 23, 553–583.
  • Maheu, J. M. and McCurdy, T. H. (2009). How useful are historical data for forecasting the long-run equity return distribution? J. Bus. Econom. Statist., 27, 95–112.
  • Maheu, J. M., McCurdy, T. H. and Song, Y. (2012). Components of bull and bear markets: Bull corrections and bear rallies. J. Bus. Econom. Statist., 30, 391–403.
  • Maheu, J. M. and Song, Y. (2014). A new structural break model, with an application to Canadian inflation forecasting. Int. J. Forecast., 30, 144–160.
  • Maneesoonthorn, W., Martin, G. M., Forbes, C. S. and Grose, S. D. (2012). Probabilistic forecasts of volatility and its risk premia. J. Econometrics, 171, 217–236.
  • Matheson, J. E. and Winkler, R. L. (1976). Scoring rules for continuous probability distributions. Manag. Sci., 22, 1087–1096.
  • Panagiotelis, A. and Smith, M. (2008). Bayesian density forecasting of intraday electricity prices using multivariate skew distributions. Int. J. Forecast., 24, 710–727.
  • R Core Team (2017). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.
  • Riebler, A., Held, L. and Rue, H. (2012). Estimation and extrapolation of time trends in registry data — Borrowing strength from related populations. Ann. Appl. Stat., 6, 304–333.
  • Risser, M. D. and Calder, C. A. (2015). Regression-based covariance functions for nonstationary spatial modeling. Environmetrics, 26, 284–297.
  • Rodrigues, L. R. L., García-Serrano, J. and Doblas-Reyes, F. (2014). Seasonal forecast quality of the West African monsoon rainfall regimes by multiple forecast systems. J. Geophys. Res. Atmos., 119, 7908–7930.
  • Rosenblatt, M. (1956). Remarks on some nonparametric estimates of a density function. Ann. Math. Stat., 27, 832–837.
  • Roussas, G. G. (1988). Nonparametric estimation in mixing sequences of random variables. J. Statist. Plann. Inference, 18, 135–149.
  • Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. Ann. Statist., 12, 1151–1172.
  • Sahu, S. K., Bakar, K. S. and Awang, N. (2015). Bayesian forecasting using spatio-temporal models with applications to ozone levels in the eastern United States. In Geometry Driven Statistics (I. L. Dryden and J. T. Kent, eds.), chap. 13. John Wiley & Sons, 260–281.
  • Salazar, E., Sansó, B., Finley, A. O., Hammerling, D., Steinsland, I., Wang, X. and Delamater, P. (2011). Comparing and blending regional climate model predictions for the American Southwest. J. Agric. Biol. Environ. Stat., 16, 586–605.
  • Scheuerer, M. and Hamill, T. M. (2015). Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Weather Rev., 143, 1321–1334.
  • Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc. Ser. B. Stat. Methodol., 53, 683–690.
  • Sigrist, F., Künsch, H. R. and Stahel, W. A. (2012). A dynamic nonstationary spatio-temporal model for short term prediction of precipitation. Ann. Appl. Stat., 6, 1452–1477.
  • Sigrist, F., Künsch, H. R. and Stahel, W. A. (2015). Stochastic partial differential equation based modelling of large space-time data sets. J. R. Stat. Soc. Ser. B. Stat. Methodol., 77, 3–33.
  • Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
  • Sköld, M. and Roberts, G. O. (2003). Density estimation for the Metropolis-Hastings algorithm. Scand. J. Stat., 30, 699–718.
  • Smith, M. S. and Vahey, S. (2016). Asymmetric density forecasting of U.S. macroeconomic variables. J. Bus. Econom. Statist., 34, 416–434.
  • Spiegelhalter, D. J., Best, N. G., Carlin, B. P. and van der Linde, A. (2002). Bayesian measures of model complexity and fit (with discussion and rejoinder). J. R. Stat. Soc. Ser. B. Stat. Methodol., 64, 583–639.
  • Spiegelhalter, D. J., Best, N. G., Carlin, B. P. and van der Linde, A. (2014). The deviance information criterion: 12 years on. J. R. Stat. Soc. Ser. B. Stat. Methodol., 76, 485–493.
  • Taillardat, M., Mestre, O., Zamo, M. and Naveau, P. (2016). Calibrated ensemble forecasts using quantile regression forests and ensemble model output statistics. Mon. Weather Rev., 144, 2375–2393.
  • Tanner, M. A. and Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. J. Amer. Statist. Assoc., 82, 528–540.
  • Thorarinsdottir, T. L., Gneiting, T. and Gissibl, N. (2013). Using proper divergence functions to evaluate climate models. SIAM/ASA J. Uncertain. Quantif., 1, 522–534.
  • Tran, L. T. (1989). The convergence of kernel density estimates under dependence. Canad. J. Statist., 17, 197–208.
  • Tran, M.-N., Pitt, M. K. and Kohn, R. (2016). Adaptive Metropolis–Hastings sampling using reversible dependent mixture proposals. Stat. Comput., 26, 361–381.
  • Trombe, P.-J., Pinson, P. and Madsen, H. (2012). A general probabilistic forecasting framework for offshore wind power fluctuations. Energies, 5, 621–657.
  • van der Vaart, A. W. (2000). Asymptotic Statistics. Cambridge University Press, Cambridge.
  • Waggoner, D. F. and Zha, T. (1999). Conditional forecasts in dynamic multivariate models. Rev. Econ. Stat., 81, 639–651.
  • Warne, A., Coenen, G. and Christoffel, K. (2016). Marginalized predictive likelihood comparisons of linear Gaussian State-Space models with applications to DSGE, DSGE-VAR and VAR models. J. Appl. Econometrics, 32, 103–119.
  • Wasserman, L. (2006). All of Nonparametric Statistics. Springer, New York.
  • Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. J. Mach. Learn. Res., 11, 3571–3594.
  • Yu, B. (1993). Density estimation in the L∞ norm for dependent data with applications to the Gibbs sampler. Ann. Statist., 21, 711–735.
  • Zhou, Z., Matteson, D. S., Woodard, D. B., Henderson, S. G. and Micheas, A. C. (2015). A spatio-temporal point process model for ambulance demand. J. Amer. Statist. Assoc., 110, 6–15.

Acknowledgements

The work of Tilmann Gneiting and Fabian Krüger was funded by the European Union Seventh Framework Programme under grant agreement 290976. Sebastian Lerch and Thordis L. Thorarinsdottir acknowledge support by the Volkswagen Foundation through the program “Mesoscale Weather Extremes — Theory, Spatial Modelling and Prediction (WEX-MOP)”. Lerch further acknowledges support by Deutsche Forschungsgemeinschaft (DFG) through RTG 1953 “Statistical Modeling of Complex Systems and Processes” as well as project C7 (“Statistical postprocessing and stochastic physics for ensemble predictions”) within SFB/TRR 165 “Waves to Weather”. Gneiting, Krüger and Lerch thank the Klaus Tschira Foundation for infrastructural support at the Heidelberg Institute for Theoretical Studies (HITS). Helpful comments by Werner Ehm, Sylvia Frühwirth-Schnatter and Alexander Jordan, as well as by seminar and conference participants at HITS, Karlsruhe Institute of Technology, the University of Bern, the Extremes 2014 symposium (Hannover), the Conference on Computational and Financial Econometrics (CFE, Pisa, 2014), the German Probability and Statistics Days (GPSD, Bochum, 2016), the International Society for Bayesian Analysis meeting (ISBA, Sardinia, 2016) and the Deutsche Bundesbank Workshop on Forecasting (2017), are gratefully acknowledged. Furthermore, we thank Gianni Amisano for sharing his program code for Bayesian Markov switching models.

Appendix A Literature survey

To examine how probabilistic forecasts based on MCMC output are evaluated in the literature, we conducted a systematic survey. To obtain a broad set of candidate articles, we performed Web of Science (http://webofscience.com) and Google Scholar (https://scholar.google.com) searches in May 2016 for the terms “forecast”, “probabilistic forecast”, “proper scoring rule”, “CRPS” or “logarithmic score” along with either “Bayesian” or “MCMC”. This approach left us with about 200 papers. Furthermore, we added papers that we knew from other sources (such as the websites of individual researchers).

Among these candidate articles, we considered only published or pre-published articles in scientific journals or proceedings. In particular, working papers and preprints were excluded from further consideration. We retained only studies in which forecasts based on Bayesian MCMC methods are produced and evaluated via proper scoring rules, and we restricted our attention to studies of real-valued linear variables, ignoring articles that deal with binary or categorical observations only. Furthermore, as we are interested in full probabilistic forecasts based on MCMC output, we disregarded articles in which only functionals of the forecast distributions, such as means or medians, are evaluated, and we retained only those studies in which the computation of the scores is documented in sufficient detail. Finally, Table 1 lists only papers that employ the CRPS or the logarithmic score. Very few studies have used scoring rules other than these: Riebler et al. (2012) use the Dawid–Sebastiani score, and Smith and Vahey (2016) and Tran et al. (2016) use weighted versions of the logarithmic score and the CRPS, as described by Diks et al. (2011) and Gneiting and Ranjan (2011), respectively.

Retaining only articles that meet the above selection criteria leaves us with the studies listed in Table 1, with multiple listings indicating that a study employs several choices.

Appendix B Computing the CRPS for mixtures of Gaussians

Here we discuss the computation of the CRPS in (7) when the predictive distribution is an equally weighted mixture of normal distributions, say $F(x) = \frac{1}{m} \sum_{i=1}^{m} \Phi\left( \frac{x - \mu_i}{\sigma_i} \right)$, where the $i$-th component is Gaussian with mean $\mu_i$ and variance $\sigma_i^2$. Grimit et al. (2006) note that in this case (8) can be written as

$$\mathrm{CRPS}(F, y) = \frac{1}{m} \sum_{i=1}^{m} A(y - \mu_i, \sigma_i^2) - \frac{1}{2 m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} A(\mu_i - \mu_j, \sigma_i^2 + \sigma_j^2), \qquad (19)$$

where $A(\mu, \sigma^2) = \mu \left( 2 \Phi(\mu / \sigma) - 1 \right) + 2 \sigma \varphi(\mu / \sigma)$, with $\varphi$ and $\Phi$ denoting the standard normal density and CDF, respectively. The scoringRules software package (Jordan et al., 2017) contains R/C++ code for the evaluation of (19), which requires $O(m^2)$ operations.
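For concreteness, formula (19) can be sketched in a few lines of code. The paper's own implementation is the R/C++ routine in scoringRules; the following Python translation (function names are ours, chosen for illustration) mirrors the Grimit et al. (2006) expression directly.

```python
import numpy as np
from scipy.stats import norm

def A(mu, sigma2):
    """A(mu, sigma^2) = mu * (2 * Phi(mu/sigma) - 1) + 2 * sigma * phi(mu/sigma)."""
    sigma = np.sqrt(sigma2)
    z = mu / sigma
    return mu * (2.0 * norm.cdf(z) - 1.0) + 2.0 * sigma * norm.pdf(z)

def crps_mixnorm_exact(y, mu, sigma2):
    """Exact CRPS of the equally weighted normal mixture with component means
    mu[i] and variances sigma2[i], evaluated at the observation y.
    The double sum makes the cost quadratic in the number of components m."""
    mu = np.asarray(mu, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    term1 = A(y - mu, sigma2).mean()
    # Pairwise term: (1 / m^2) * sum_{i,j} A(mu_i - mu_j, sigma_i^2 + sigma_j^2).
    term2 = A(mu[:, None] - mu[None, :], sigma2[:, None] + sigma2[None, :]).mean()
    return term1 - 0.5 * term2
```

For $m = 1$ the expression collapses to the familiar closed-form CRPS of a single Gaussian, which provides a convenient sanity check.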

A potentially much faster, but not exact, alternative is to evaluate the integral in (7) numerically. (Numerical integration could also be based on another representation of the CRPS that has recently been derived by Taillardat et al., 2016, p. 2390, bottom right.) Here we provide some evidence on the viability of this strategy, which we implement via the R function integrate, with suitably small values of its rel.tol and abs.tol arguments. As a first experiment, we use numerical integration to re-compute the CRPS scores of the mixture-of-parameters estimator in our data example for the first quarter of 2011. Figure 7 summarizes the results for 16 parallel chains. The left panel shows that the approximate scores are visually identical to the exact ones across all sample sizes and chains; indeed, the maximal absolute error incurred by numerical integration is negligibly small, and the approximation errors are dwarfed by the natural variation of the scores across MCMC chains. The right panel compares the computation time for exact evaluation versus numerical integration. The latter is much faster, especially for large samples: for a sample of size 40,000, numerical integration requires less than 1.5 seconds, whereas exact evaluation requires about two minutes on an Intel i7 processor.

Figure 7: CRPS scores for the first quarter of 2011 in the data example, for 16 parallel chains and various MCMC sample sizes. Left: The segments connect the CRPS value obtained using numerical integration (left node) to the score obtained using the exact formula (right node). Right: Computation times in seconds, for numerical integration (dots; solid line) and exact formula (crosses; dashed line).

To obtain broad-based evidence, we next compare exact evaluation and numerical integration for all 74 forecast dates, from the second quarter of 1996 to the third quarter of 2014, employing 16 parallel chains for each date. We focus on the two largest MCMC sample sizes, 20,000 and 40,000, and find that across all 2,368 instances (74 dates times 2 sample sizes times 16 chains), the absolute difference of the two CRPS values remains negligibly small. Therefore, we feel that numerical integration allows for the efficient evaluation of the CRPS for mixtures of normal distributions. The differences to the exact values are practically irrelevant and well in line with the error bounds in R’s integrate function.
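The numerical strategy can be mimicked outside R as well. The sketch below, assuming SciPy's quad as a stand-in for R's integrate, approximates the integral representation (7) of the CRPS; the function name and the default tolerance are illustrative choices of ours, not the settings used in our experiments.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def crps_mixnorm_numeric(y, mu, sigma2, tol=1e-6):
    """CRPS of an equally weighted normal mixture via numerical integration of
    int (F(z) - 1{y <= z})^2 dz. Splitting the range at the observation y
    avoids integrating across the jump of the indicator function."""
    mu = np.asarray(mu, dtype=float)
    sd = np.sqrt(np.asarray(sigma2, dtype=float))
    F = lambda z: norm.cdf(z, loc=mu, scale=sd).mean()  # mixture CDF at scalar z
    left = quad(lambda z: F(z) ** 2, -np.inf, y, epsabs=tol, epsrel=tol)[0]
    right = quad(lambda z: (1.0 - F(z)) ** 2, y, np.inf, epsabs=tol, epsrel=tol)[0]
    return left + right
```

Because the integrand is evaluated only at the quadrature nodes, the cost grows linearly rather than quadratically in the number of mixture components, which is the source of the speed advantage reported above.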

Appendix C Consistency of mixture-of-parameters approximations

Proof of Theorem 1

In the case of the CRPS, we prove the stronger result that $\mathrm{CRPS}(\hat{F}_m, y) \to \mathrm{CRPS}(F, y)$ almost surely as $m \to \infty$. Putting $u_m(z) = (\hat{F}_m(z) - \mathbb{1}\{y \le z\})^2$ and $u(z) = (F(z) - \mathbb{1}\{y \le z\})^2$ for $z \in \mathbb{R}$, we find that, for every fixed $K > 0$,

$$\bigl| \mathrm{CRPS}(\hat{F}_m, y) - \mathrm{CRPS}(F, y) \bigr| \le \int_{-K}^{K} \bigl| u_m(z) - u(z) \bigr| \, \mathrm{d}z \qquad (20)$$

$$\qquad \qquad + \int_{\{|z| > K\}} u_m(z) \, \mathrm{d}z + \int_{\{|z| > K\}} u(z) \, \mathrm{d}z. \qquad (21)$$

The Ergodic Theorem implies that the right-hand side of (20) tends to zero, and that

$$\int_{\{|z| > K\}} u_m(z) \, \mathrm{d}z \longrightarrow \int_{\{|z| > K\}} u(z) \, \mathrm{d}z$$

almost surely as $m \to \infty$. In view of (20) and (21) we conclude that

$$\limsup_{m \to \infty} \, \bigl| \mathrm{CRPS}(\hat{F}_m, y) - \mathrm{CRPS}(F, y) \bigr| \le 2 \int_{\{|z| > K\}} u(z) \, \mathrm{d}z \qquad (22)$$

almost surely. As the right-hand side of (22) decreases to zero as $K$ grows without bounds, the proof of the claim is complete.

In the case of the DSS, let $\mu_i$ and $s_i$ denote the first and second moments of $F_c(\cdot \mid \theta_i)$ for $i = 1, \ldots, m$, so that the first and second moments of $\hat{F}_m$ are the averages $\frac{1}{m} \sum_{i=1}^{m} \mu_i$ and $\frac{1}{m} \sum_{i=1}^{m} s_i$, respectively; due to the finiteness of the first and second moments of $F_c(\cdot \mid \theta)$ and $F$, these quantities are well defined. Proceeding as before, the Ergodic Theorem implies almost sure convergence of the first and second moments of $\hat{F}_m$ to those of $F$, and thereby consistency relative to the DSS.
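The consistency statement for the CRPS can be illustrated with a toy simulation (not part of the proof): if the draws $\theta_i$ are standard Gaussian and $F_c(\cdot \mid \theta)$ is $\mathcal{N}(\theta, 1)$, the true predictive distribution is $F = \mathcal{N}(0, 2)$, and the CRPS of the mixture-of-parameters estimator approaches that of $F$ as $m$ grows. The conjugate setup and function names below are ours, chosen for illustration only.

```python
import numpy as np
from scipy.stats import norm

def crps_normal(y, mu, sigma):
    """Closed-form CRPS of a single Gaussian N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def crps_mixture(y, mus, sigma2s):
    """CRPS of an equally weighted Gaussian mixture (Grimit et al., 2006)."""
    mus = np.asarray(mus, dtype=float)
    sigma2s = np.asarray(sigma2s, dtype=float)
    def A(mu, s2):
        s = np.sqrt(s2)
        return mu * (2.0 * norm.cdf(mu / s) - 1.0) + 2.0 * s * norm.pdf(mu / s)
    t1 = A(y - mus, sigma2s).mean()
    t2 = A(mus[:, None] - mus[None, :], sigma2s[:, None] + sigma2s[None, :]).mean()
    return t1 - 0.5 * t2

rng = np.random.default_rng(1)
y = 0.3
target = crps_normal(y, 0.0, np.sqrt(2.0))   # CRPS of the true predictive N(0, 2)
for m in (10, 100, 2000):
    theta = rng.standard_normal(m)           # stand-in for ergodic posterior draws
    approx = crps_mixture(y, theta, np.ones(m))
    print(m, abs(approx - target))
```

The printed absolute errors shrink as $m$ grows, in line with the almost sure convergence established above.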

Proof of Theorem 2

By Lemma 2.1 in Chapter 4 of Kullback (1959), convergence in total variation,

$$\int \bigl| \hat{f}_m(z) - f(z) \bigr| \, \mathrm{d}z \longrightarrow 0$$

almost surely as $m \to \infty$, implies the desired convergence of the Kullback–Leibler divergence. Let $\hat{P}_m$ denote the empirical CDF of the parameter draws $\theta_1, \ldots, \theta_m$. Under assumption (B), almost sure strong uniform consistency,

$$\sup_{z \in \mathbb{R}} \bigl| \hat{f}_m(z) - f(z) \bigr| \longrightarrow 0$$

almost surely as $m \to \infty$, yields Kullback's condition. Finally, we establish almost sure strong uniform convergence under assumptions (A) and (B) by applying Theorem 19.4 and Example 19.8 of van der Vaart (2000).
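The convergence of the Kullback–Leibler divergence can likewise be visualized numerically. In a toy conjugate setup of ours ($\theta_i$ standard Gaussian, $f_c(z \mid \theta)$ the $\mathcal{N}(\theta, 1)$ density, so that $f$ is the $\mathcal{N}(0, 2)$ density), the divergence from the mixture-of-parameters density estimate shrinks as $m$ grows; this sketch merely illustrates, and does not prove, the statement.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def kl_to_truth(theta):
    """Kullback-Leibler divergence KL(f || f_hat_m), where f is the N(0, 2)
    density and f_hat_m(z) = (1/m) * sum_i phi(z - theta_i) is the
    mixture-of-parameters density estimate based on the draws theta."""
    theta = np.asarray(theta, dtype=float)
    f = lambda z: norm.pdf(z, scale=np.sqrt(2.0))
    f_hat = lambda z: norm.pdf(z - theta).mean()
    integrand = lambda z: f(z) * np.log(f(z) / f_hat(z))
    # Truncate the integration range; both densities are negligible beyond +-10.
    return quad(integrand, -10.0, 10.0, limit=200)[0]

rng = np.random.default_rng(42)
for m in (20, 200, 2000):
    print(m, kl_to_truth(rng.standard_normal(m)))
```

Since the component densities share the smooth Gaussian shape of $f$, the divergence decays quickly here; the stringent conditions of Theorem 2 matter precisely when the conditional densities are less well behaved.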

Appendix D Consistency of empirical CDF-based approximations

Proof of Theorem 3

In the case of the CRPS, we proceed in analogy to the proof of Theorem 1 and demonstrate the stronger result that $\mathrm{CRPS}(\hat{F}_m, y) \to \mathrm{CRPS}(F, y)$ almost surely as $m \to \infty$. Putting $v_m(z) = (\hat{F}_m(z) - \mathbb{1}\{y \le z\})^2$ and $v(z) = (F(z) - \mathbb{1}\{y \le z\})^2$ for $z \in \mathbb{R}$, we see that, for every fixed $K > 0$,

$$\bigl| \mathrm{CRPS}(\hat{F}_m, y) - \mathrm{CRPS}(F, y) \bigr| \le \int_{-K}^{K} \bigl| v_m(z) - v(z) \bigr| \, \mathrm{d}z \qquad (23)$$

$$\qquad \qquad + \int_{\{|z| > K\}} v_m(z) \, \mathrm{d}z + \int_{\{|z| > K\}} v(z) \, \mathrm{d}z. \qquad (24)$$

The Generalized Glivenko-Cantelli Theorem (Dehling and Philipp, 2002, Theorem 1.1) implies that the right-hand side of (23) tends to zero almost surely as $m \to \infty$. If $X$ has distribution $F$, then $\int_{K}^{\infty} (1 - F(z)) \, \mathrm{d}z = \mathbb{E} (X - K)^{+}$, where $x^{+} = \max(x, 0)$ denotes the positive part of $x$. Furthermore, by the Ergodic Theorem

$$\frac{1}{m} \sum_{i=1}^{m} (X_i - K)^{+} \longrightarrow \mathbb{E} (X - K)^{+}$$

almost surely as $m \to \infty$, which along with (23) and (24) implies that