General Bayesian Updating and the Loss-Likelihood Bootstrap

S. P. Lyddon, C. C. Holmes, S. G. Walker
Abstract

In this paper we revisit the weighted likelihood bootstrap, a method that generates samples from an approximate Bayesian posterior of a parametric model. We show that the same method can be derived, without approximation, under a Bayesian nonparametric model with the parameter of interest defined as minimizing an expected negative log-likelihood under an unknown sampling distribution. This interpretation enables us to extend the weighted likelihood bootstrap to posterior sampling for parameters minimizing an expected loss. We call this method the loss-likelihood bootstrap. We make a connection between this and general Bayesian updating, a way of updating prior belief distributions without needing to construct a global probability model, which nevertheless requires the calibration of two forms of loss function. The loss-likelihood bootstrap is used to calibrate the general Bayesian posterior by matching asymptotic Fisher information. We demonstrate the methodology on a number of examples.

1 Bayesian inference and model misspecification

Bayesian theory provides a comprehensive framework for quantitative reasoning about uncertainty which is built on axiomatic foundations and is well suited to many modern scientific applications. For the statistician, much of the validity of the Bayesian approach hinges on the condition that the statistical model for the data is well specified, in that it contains the underlying data-generating mechanism. This cannot be expected to hold in general, if at all, and the extent and impact of the misspecification can be hard to quantify.

In this article we investigate statistical methods that remain honest about the inability to perfectly model the data. Suppose we observe a sample $x_{1:n} = (x_1, \dots, x_n)$, with the $x_i$ being independent draws from an unknown distribution $F_0$, which we assume admits a density, $f_0$. In a parametric Bayesian modelling paradigm, a family of densities $\{f_\theta : \theta \in \Theta\}$ is specified along with a prior belief distribution $\pi(\theta)$ for the unknown parameter $\theta$. Assuming that $f_0 = f_{\theta_0}$ for some $\theta_0 \in \Theta$, beliefs about the true parameter are updated by conditioning on the observed data, using Bayes' rule; $\pi(\theta \mid x_{1:n}) \propto \pi(\theta) \prod_{i=1}^{n} f_\theta(x_i)$.

We say that such a model is well specified if there exists a $\theta_0 \in \Theta$ supported by the prior such that $f_{\theta_0} = f_0$. If this is the case, then under mild regularity conditions the posterior will concentrate at $\theta_0$ as the number of observations increases. Conversely, if $f_0 \notin \{f_\theta : \theta \in \Theta\}$, we say that the model is misspecified. In this case the posterior will, under mild regularity conditions, concentrate at the pseudo-true parameter value which minimizes the Kullback–Leibler divergence (Kullback and Leibler, 1951) to the true sampling distribution (Berk, 1966); i.e.

$$\theta_0 = \arg\min_{\theta \in \Theta} \int \log\left\{\frac{f_0(x)}{f_\theta(x)}\right\} f_0(x)\, dx.$$

Müller (2013) showed that, under regularity conditions, the asymptotic frequentist risk associated with misspecified Bayesian estimators is inferior to that of an artificial posterior which is normally distributed, centred at the maximum likelihood estimator and with the sandwich covariance matrix. Additionally, the rate at which the Bayesian learns about the parameter of a misspecified model is not necessarily optimal, as it is in the well-specified case; see Zellner (1988).

It is therefore important that the statistician accounts for the inadequacy of the model when making inference about $\theta_0$, and we shall study ways to do this. We show that the weighted likelihood bootstrap (Newton and Raftery, 1994), a method originally designed for approximate sampling from the Bayesian posterior of a well specified parametric model, can also be viewed as generating exact posterior samples under a Bayesian nonparametric model which assumes much less about the structure of the data-generating mechanism than the parametric model does. This approach can then be extended to generate posterior samples for a much larger family of parameters than just those indexing parametric models.

More often than not, when a statistical model is used, it is with a specific action in mind, such as a prediction. For the Bayesian, the optimal action is chosen by maximising an expected utility or, equivalently, minimising an expected loss. To do this it is necessary to construct a full probability model for the data. This can be a prohibitively expensive exercise, if it is indeed at all possible. It is natural, then, to question whether this global modelling approach is necessary, or whether we could instead focus solely and directly on the functional that is of interest.

Bissiri et al. (2016) presented a general framework for updating targeted belief distributions of this kind. Instead of restricting themselves to parameters that index a family of distribution functions, the authors considered general parameters whose true value minimizes an expected loss for some loss function $\ell(\theta, x)$;

$$\theta_0 = \arg\min_{\theta \in \Theta} \int \ell(\theta, x)\, dF_0(x). \qquad (1)$$

Treating $\theta_0$ as unknown, Bissiri et al. (2016) use a decision-theoretic argument, relying on a coherency property, that leads to a unique functional form for the update of prior beliefs $\pi(\theta)$, given observations $x_{1:n}$. The resulting posterior distribution is fully determined up to a loss scale $w$, and given by

$$\pi_w(\theta \mid x_{1:n}) \propto \exp\{-w\, \ell(\theta, x_{1:n})\}\, \pi(\theta), \qquad (2)$$

where the loss for multiple observations is defined additively, i.e. $\ell(\theta, x_{1:n}) = \sum_{i=1}^{n} \ell(\theta, x_i)$. We assume throughout this paper that this posterior distribution is proper; i.e. the loss and prior are provided such that the right-hand side of (2) is integrable.

We refer to this posterior distribution as the general Bayesian posterior, because it can be computed for a more general family of parameters than those simply indexing a family of distributions. Bayes' rule is recovered as a special case by choosing $w = 1$ and using the self-information loss, $\ell(\theta, x) = -\log f_\theta(x)$. We call the data-dependent component $\exp\{-w\,\ell(\theta, x_{1:n})\}$ the loss likelihood, as it provides a prior-to-posterior belief update for the parameter $\theta$, in analogy with the likelihood in the Bayesian setting. The loss scale $w$ is a non-negative scalar controlling the learning rate about $\theta$ attributable to the observed data. If the learning rate is too large, the posterior will be too concentrated, exaggerating the extent of the information about $\theta$ coming from the data. Conversely, if $w$ is too small, the posterior will underplay the information in the data relative to the prior.
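To fix ideas, the following minimal sketch evaluates (2) on a grid for a one-dimensional parameter, in Python; the quadratic loss, the $N(0, 10^2)$ prior, and the fixed loss scale $w = 1$ are our own illustrative choices, with the calibration of $w$ deferred to Section 3.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=100)            # observed data
theta = np.linspace(0.0, 4.0, 801)            # parameter grid
w = 1.0                                       # loss scale; calibrated in Section 3

# additive loss ell(theta, x_{1:n}); here an illustrative quadratic loss
loss = 0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1)
log_post = -w * loss - 0.5 * (theta / 10.0) ** 2   # add log N(0, 10^2) prior
post = np.exp(log_post - log_post.max())           # unnormalized posterior
post /= post.sum() * (theta[1] - theta[0])         # normalize on the grid
print(theta[np.argmax(post)])                      # mode, near the sample mean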

The theory presented in Bissiri et al. (2016) has a number of attractions compared to the Bayesian approach. It is built upon the assumption that the true underlying data-generating mechanism is unknown. It provides a principled means for performing targeted prior belief updates without the burden of having to construct a global probabilistic model, and the prior specification is local to the parameter of interest. However, although a number of suggestions have appeared in the literature, the setting of the loss scale $w$ remains an open problem. We propose that a Bayesian bootstrap should be used for this purpose, and we extend the weighted likelihood bootstrap's interpretation to cover the wider class of models built from loss functions, of the form given in (1). We refer to this method as the loss-likelihood bootstrap. The asymptotic structure of this bootstrap is studied, alongside that of the general Bayesian posterior. These theoretical results are used to determine a loss scale for the general Bayesian posterior by matching asymptotic posterior information. This provides us with a calibrated general Bayesian posterior with a number of desirable properties, such as the Bayes posterior being recovered if the model is well specified. The general Bayesian approach admits a subjective prior and provides a prior-to-posterior belief update, which in some settings will be preferable over the prior-free loss-likelihood bootstrap; moreover, as noted in Newton and Raftery (1994), the weighted likelihood bootstrap can provide a better asymptotic approximation to the Bayesian posterior than a normal approximation if the prior used is the square of the Jeffreys prior.

2 Revisiting the weighted likelihood bootstrap

The weighted likelihood bootstrap (Newton and Raftery, 1994) is a method for approximately sampling from a posterior distribution of a well specified parametric statistical model. Samples are generated by computing randomly-weighted maximum likelihood estimates. The weights are drawn from a Dirichlet distribution, scaled by the number of observations. The weights perturb the contribution of each observation to the likelihood; see Algorithm 1 for details.

 For $j = 1$ to $B$:
  Draw random weights $(w_1, \dots, w_n) \sim n \times \text{Dirichlet}(1, \dots, 1)$
  Compute $\tilde\theta_j = \arg\max_{\theta} \sum_{i=1}^{n} w_i \log f_\theta(x_i)$
 Output $(\tilde\theta_1, \dots, \tilde\theta_B)$
Algorithm 1 The Weighted Likelihood Bootstrap
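The following Python sketch implements Algorithm 1 for a univariate normal model, where the weighted maximum likelihood estimates have closed form; the model choice and all names are our own illustrative assumptions, not part of the original presentation.

import numpy as np

rng = np.random.default_rng(1)

def weighted_likelihood_bootstrap(x, B=1000):
    # Algorithm 1 for a N(mu, sigma^2) model: the weighted MLE is the
    # weighted mean and weighted variance (the weights sum to 1, so
    # scaling them by n would not change the maximizer).
    n = len(x)
    samples = np.empty((B, 2))
    for j in range(B):
        w = rng.dirichlet(np.ones(n))
        mu = np.sum(w * x)
        samples[j] = mu, np.sum(w * (x - mu) ** 2)
    return samples

x = rng.normal(1.0, 2.0, size=200)
post = weighted_likelihood_bootstrap(x)
print(post.mean(axis=0))   # approximate posterior mean of (mu, sigma^2)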

The method produces independent samples and is trivially parallelizable over $j$, which is advantageous compared with Markov chain Monte Carlo methods. However, the weighted likelihood bootstrap is not an exact method for sampling from the posterior of a parametric model and does not accommodate a prior. It is, nevertheless, asymptotically first-order equivalent to a Bayesian posterior if the parametric model is well specified, with a higher order of asymptotic equivalence, under certain conditions, if the prior used is the square of the Jeffreys prior.

If interest is in the $\theta$ minimizing the expected negative log-likelihood, then it is possible to arrive at a method functionally equivalent to the weighted likelihood bootstrap, but without assuming the parametric model is well specified. Under misspecification, the data-generating distribution function is unknown and so it is appropriate to construct a prior on the sampling distribution function $F$, whose unknown true value is $F_0$. Uncertainty about $\theta$ is inherited from uncertainty about $F$. More precisely, a prior on $F$, say $\pi(F)$, induces a prior probability on $\theta$, via

$$\theta(F) = \arg\max_{\theta \in \Theta} \int \log f_\theta(x)\, dF(x), \qquad (3)$$

the induced prior being $\pi(\theta \in A) = \pi\{F : \theta(F) \in A\}$ for measurable sets $A$.

The Bayesian nonparametric literature provides a number of methods for constructing priors over the space of distribution functions. The Dirichlet process prior (Ferguson, 1973) is a natural choice, as it is simple to use. The hyperparameter that determines the Dirichlet process prior is a finite measure $\alpha$. Upon observing $x_{1:n}$ the posterior is also a Dirichlet process, with unit mass added to the base measure at each observation; i.e.

$$F \mid x_{1:n} \sim \mathrm{DP}\Big(\alpha + \sum_{i=1}^{n} \delta_{x_i}\Big).$$

For a detailed account of the Dirichlet process, see Ghosal and van der Vaart (2017). Under regularity conditions the posterior for $F$ will concentrate at $F_0$. In a noninformative setting, a small value of the total mass $\alpha(\mathcal{X})$ is chosen. In the limit $\alpha(\mathcal{X}) \to 0$ the posterior distribution is supported only by the observations $x_{1:n}$, with Dirichlet-distributed probabilities for each state. This sampling procedure is commonly referred to as the Bayesian bootstrap (Rubin, 1981), as it has a limiting Bayesian interpretation and is closely related to Efron's bootstrap (Efron, 1979). Posterior sampling under the Bayesian bootstrap is direct, fast and trivially parallelizable; see Algorithm 2 for further details.

 For $j = 1$ to $B$:
  Draw a random distribution $F_j$ with $F_j(x) = \sum_{i=1}^{n} w_i\, \mathbb{1}(x_i \le x)$, $(w_1, \dots, w_n) \sim \text{Dirichlet}(1, \dots, 1)$
  Compute $\tilde\theta_j = \theta(F_j)$ as in (3)
 Output $(\tilde\theta_1, \dots, \tilde\theta_B)$
Algorithm 2 The Bayesian Bootstrap
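A Python sketch of Algorithm 2 for a generic functional of $F$; the exponential data and the mean functional are our own choices for illustration.

import numpy as np

rng = np.random.default_rng(2)

def bayesian_bootstrap(x, functional, B=1000):
    # Algorithm 2: each posterior draw re-weights the observations with
    # Dirichlet(1, ..., 1) probabilities and evaluates the functional.
    n = len(x)
    return np.array([functional(x, rng.dirichlet(np.ones(n)))
                     for _ in range(B)])

x = rng.exponential(2.0, size=500)
mean_post = bayesian_bootstrap(x, lambda x, w: np.sum(w * x))
print(mean_post.mean(), mean_post.std())   # posterior for the mean of F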

If we apply the Bayesian bootstrap strategy to $\theta(F)$ as defined in (3), we recover the weighted likelihood bootstrap. Conceptually, however, the parametric and nonparametric constructions are quite different. This can perhaps most easily be seen through the role that the random weights play. For the weighted likelihood bootstrap, the weights perturb the contribution of each observation to the likelihood, whereas for the Bayesian bootstrap the weights represent a posterior sample of the unknown distribution function.

The Bayesian bootstrap can produce posterior samples for a much larger family of parameters than just those that maximize expected log-likelihoods; see for example Chamberlain and Imbens (2003). Of interest to us is the family of parameters that minimize an expected loss, as in (1). We call the method of posterior sampling for this family of parameters the loss-likelihood bootstrap; see Algorithm 3 for details. The name reflects the fact that the method will prove important when we consider the problem of calibrating general Bayesian posterior distributions in Section 3.

 For $j = 1$ to $B$:
  Draw a random distribution $F_j$ with $F_j(x) = \sum_{i=1}^{n} w_i\, \mathbb{1}(x_i \le x)$, $(w_1, \dots, w_n) \sim \text{Dirichlet}(1, \dots, 1)$
  Compute $\tilde\theta_j = \arg\min_{\theta} \sum_{i=1}^{n} w_i\, \ell(\theta, x_i)$
 Output $(\tilde\theta_1, \dots, \tilde\theta_B)$
Algorithm 3 The Loss-Likelihood Bootstrap
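As a concrete instance of Algorithm 3, the sketch below targets a univariate location parameter under the Huber loss; the loss, the heavy-tailed data and the optimizer are illustrative assumptions on our part.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

def huber(theta, x, delta=1.0):
    # Huber loss ell(theta, x), evaluated elementwise over the data
    r = np.abs(x - theta)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

def loss_likelihood_bootstrap(x, B=1000):
    # Algorithm 3: each draw minimizes a Dirichlet-weighted empirical loss
    n = len(x)
    out = np.empty(B)
    for j in range(B):
        w = rng.dirichlet(np.ones(n))
        out[j] = minimize_scalar(lambda t: np.sum(w * huber(t, x))).x
    return out

x = rng.standard_t(df=3, size=300)      # heavy-tailed data; no model assumed
theta_post = loss_likelihood_bootstrap(x)
print(theta_post.mean(), theta_post.std())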

Asymptotic properties of the weighted likelihood bootstrap have been studied in the 1991 University of Washington PhD thesis by M. A. Newton. It was shown that if the parametric model is well specified then the distribution of samples under the weighted likelihood bootstrap is asymptotically first-order correct for a Bayesian posterior distribution. By this we mean that the probability laws of $\sqrt{n}(\tilde\theta - \hat\theta_n)$ and $\sqrt{n}(\theta - \hat\theta_n)$ converge to the same limit as $n \to \infty$, almost surely with respect to $F_0^{\infty}$, where $\hat\theta_n$ is the maximum likelihood estimator, $\tilde\theta$ is a random sample from the weighted likelihood bootstrap, and $\theta$ represents the parameter under a Bayesian posterior.

We proceed by determining the relevant asymptotic properties for both the weighted likelihood bootstrap under misspecification, and for the loss-likelihood bootstrap. The relevant generalized asymptotic theory is contained in the following theorem:

Theorem 1.

Let $\tilde\theta$ be a loss-likelihood bootstrap sample of a parameter defined in (1) with loss function $\ell$, given observations $x_{1:n}$, and let $\tilde\Pi_n$ denote its probability measure. Under regularity conditions, for any Borel set $A \subseteq \Theta$, as $n \to \infty$ we have

$$\tilde\Pi_n\{\sqrt{n}(\tilde\theta - \hat\theta_n) \in A\} \to \int_A N(u;\, 0,\, J_0^{-1} I_0 J_0^{-1})\, du$$

a.s. $F_0^{\infty}$, where $\hat\theta_n = \arg\min_{\theta} \sum_{i=1}^{n} \ell(\theta, x_i)$, with

$$I_0 = \int \nabla \ell(\theta_0, x)\, \nabla \ell(\theta_0, x)^{T}\, dF_0(x), \qquad J_0 = \int \nabla^2 \ell(\theta_0, x)\, dF_0(x),$$

where $\nabla$ is the gradient operator with respect to $\theta$, and $\theta_0$ is as defined in (1).

Proof.

The proof follows along the lines of the weighted likelihood bootstrap asymptotic normality proof in the 1991 University of Washington PhD thesis by M.A. Newton. Details can be found in the Supplementary Material. ∎

The asymptotic covariance matrix $J_0^{-1} I_0 J_0^{-1}$ in Theorem 1 is a well-known quantity in the robust statistics literature, sometimes called the sandwich covariance matrix. It was shown in Huber (1967) to be the asymptotic covariance matrix for general, potentially misspecified, maximum likelihood estimators. In general, this asymptotic distribution and that of the Bayesian posterior do not coincide if the model is misspecified. Müller (2013) showed that the sandwich covariance matrix can lead to an improvement in frequentist risk over a misspecified Bayesian posterior. Others, such as Royall and Tsou (2003), have argued that the sandwich covariance matrix can be used to make misspecified likelihood functions robust.

General Bayesian models admit a prior distribution and provide a belief update for the same type of parameters as the loss-likelihood bootstrap, which does not admit a prior. However, the general Bayesian loss scale $w$ in (2) must be calibrated. Under regularity conditions, the impact of a prior diminishes as the number of observations grows large. This indicates a potential strategy for calibrating general Bayesian posterior distributions to the loss-likelihood bootstrap, by ensuring that asymptotically these posteriors contain the same amount of information. We develop this idea in the next section.

3 Calibrating general Bayesian posteriors by asymptotic covariance matching

The loss-likelihood bootstrap posterior and the general Bayesian posterior have much in common: they target the same parameter, i.e. the $\theta_0$ in (1), and neither relies on a parametric model, though to differing degrees. Thus, for large samples, as the data dominates the posterior, we would expect these distributions to be comparable with respect to their asymptotic normal distributions. Hence, we seek to match the two methods via these distributions, which will then provide a means by which to specify $w$ to match the information in the data.

The general Bayesian posterior of (2) has an asymptotic normal distribution, under regularity conditions. A second-order Taylor expansion of the loss function about the empirical risk minimizer $\hat\theta_n = \arg\min_\theta \ell(\theta, x_{1:n})$ provides some intuition about the nature of this distribution. If the minimizing parameter value is in the interior of the parameter space then the first derivative of the sample loss evaluated at the empirical risk minimizer is zero; i.e.

$$\nabla \ell(\hat\theta_n, x_{1:n}) = \sum_{i=1}^{n} \nabla \ell(\hat\theta_n, x_i) = 0.$$

The second-order Taylor approximation about $\hat\theta_n$ is as follows,

$$\ell(\theta, x_{1:n}) \approx \ell(\hat\theta_n, x_{1:n}) + \tfrac{1}{2}(\theta - \hat\theta_n)^{T}\, \nabla^2 \ell(\hat\theta_n, x_{1:n})\, (\theta - \hat\theta_n).$$

Using this approximation in place of $\ell(\theta, x_{1:n})$ in (2), we would expect for regular models that

$$\Pi_{n,w}\{\sqrt{n}(\theta - \hat\theta_n) \in A\} \to \int_A N(u;\, 0,\, w^{-1} J_0^{-1})\, du, \qquad (4)$$

where $\Pi_{n,w}$ is the probability measure of the general Bayesian posterior and $J_0$ is as defined in Theorem 1. Regularity conditions and a proof of this result can be found in Chernozhukov and Hong (2003); details can also be found in the Supplementary Material.

Thus, under regularity conditions, both the loss-likelihood bootstrap and general Bayesian posterior distributions are asymptotically normal, with the same centering and scaling but different covariance matrices. The Fisher information matrix is a well-understood measure of the information in a sample relating to a parameter. Asymptotically, the posteriors can be considered as normal location models centred at the empirical risk minimizer. However, the Fisher information is a matrix, whereas we have only the scalar $w$ with which to calibrate the general Bayesian distribution.

Ferentinos and Papaioannou (1981) considered the problem of constructing one-dimensional information metrics from the Fisher information matrix, and argued that such metrics should be non-negative and strictly increasing functions of its eigenvalues, to ensure the resulting metric satisfies a number of properties. Two natural choices are the trace and determinant of the Fisher information matrix, which equate to the sum and product of its eigenvalues, respectively. These coincide with quantities known in information theory: the differential entropy of a normal distribution is a function of the determinant of the precision matrix, and the less well-known Fisher information number, sometimes referred to simply as the Fisher information for a density, is equal to the trace of the Fisher information matrix.

In this work we choose the Fisher information number, i.e. the trace of the Fisher information matrix, as our information metric, primarily due to its simplicity of computation. It takes the value zero for a flat posterior, and is positive otherwise, which is not true of differential entropy. It is the sum of the marginal Fisher information for each dimension, which is a well-understood quantity in statistics that summarizes the amount of information in a sample about a parameter. We shall denote the Fisher information number of a density $g$ as $\mathcal{I}(g)$, which is defined as follows,

$$\mathcal{I}(g) = \int \nabla \log g(\theta)^{T}\, \nabla \log g(\theta)\, g(\theta)\, d\theta.$$

Walker (2016) showed that the Fisher information number can be used to measure the information in a Bayesian experiment, in analogy with the work of Lindley (1956) who used differential entropy. Further, Holmes and Walker (2017) used the Fisher information number to calibrate a power likelihood temperature; see the discussion section for more details on this.

The following lemma determines the loss scale required to match the Fisher information number of the general Bayesian posterior to that of the loss-likelihood bootstrap.

Lemma 1.

The value of the loss scale $w$ which equates the Fisher information numbers of the asymptotic distributions $N(\hat\theta_n,\, n^{-1} w^{-1} J_0^{-1})$ and $N(\hat\theta_n,\, n^{-1} J_0^{-1} I_0 J_0^{-1})$ is

$$w = \frac{\mathrm{tr}(J_0 I_0^{-1} J_0)}{\mathrm{tr}(J_0)}. \qquad (5)$$
Proof.

For $\theta \sim N(\mu, \Sigma)$, if $g$ is the density of $\theta$, then the Fisher information number is $\mathcal{I}(g) = \mathrm{tr}(\Sigma^{-1})$. Applying this result to the general Bayesian asymptotic posterior, (4), we have $\mathcal{I}_{GB} = n\, w\, \mathrm{tr}(J_0)$. Similarly, for the loss-likelihood bootstrap asymptotic posterior we have $\mathcal{I}_{LLB} = n\, \mathrm{tr}(J_0 I_0^{-1} J_0)$. Equating these expressions gives the required result. ∎

In practice $\theta_0$ is unknown, so the empirical risk minimizer $\hat\theta_n$ can be used as a strongly consistent estimator of $\theta_0$. Also, as $F_0$ is unknown, the matrices $I_0$ and $J_0$ can be estimated empirically, giving the plug-in estimator

$$\hat w_n = \frac{\mathrm{tr}(\hat J_n \hat I_n^{-1} \hat J_n)}{\mathrm{tr}(\hat J_n)}, \qquad (6)$$

where

$$\hat I_n = \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(\hat\theta_n, x_i)\, \nabla \ell(\hat\theta_n, x_i)^{T}, \qquad \hat J_n = \frac{1}{n} \sum_{i=1}^{n} \nabla^2 \ell(\hat\theta_n, x_i).$$
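A direct numerical translation of (6), in Python: given per-observation gradient and Hessian functions for the loss, we estimate $I_0$ and $J_0$ by empirical averages at the empirical risk minimizer. The function names and the quadratic-loss check are our own illustrations.

import numpy as np

def plug_in_loss_scale(grad, hess, theta_hat, x):
    # hat-w_n = tr(J I^{-1} J) / tr(J), with I, J estimated as in (6)
    n = len(x)
    g = np.stack([grad(theta_hat, xi) for xi in x])      # n x p gradient matrix
    I_hat = g.T @ g / n
    J_hat = sum(hess(theta_hat, xi) for xi in x) / n
    return np.trace(J_hat @ np.linalg.solve(I_hat, J_hat)) / np.trace(J_hat)

# Check with the quadratic loss 0.5 * (x - theta)^2: hat-w_n should be close
# to the reciprocal of the data variance (see Section 4.1).
rng = np.random.default_rng(4)
x = rng.normal(0.0, 2.0, size=5000)
w_hat = plug_in_loss_scale(lambda t, xi: np.array([t - xi]),
                           lambda t, xi: np.array([[1.0]]),
                           x.mean(), x)
print(w_hat)   # approximately 1 / 4 = 0.25
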
This methodology can be applied to a parameter defined by an arbitrary loss function. For the case where a self-information loss is used, and the data are generated from the associated sampling distribution for some value of $\theta$, we still recover Bayes' theorem. This is shown in the following lemma, for which regularity conditions can be found in the Supplementary Material.

Lemma 2.

If $\ell(\theta, x) = -k \log f_\theta(x)$ for some $k > 0$, and $f_0 = f_{\theta_0}$ for some $\theta_0 \in \Theta$, then $w = 1/k$.

Proof.

We recall the result that, for a regular density $f_\theta$,

$$\int \nabla^2 \{-\log f_\theta(x)\}\, f_\theta(x)\, dx = \int \nabla \log f_\theta(x)\, \nabla \log f_\theta(x)^{T}\, f_\theta(x)\, dx,$$

where $x$ has density $f_\theta$. This immediately gives us $I_0 = k\, J_0$. Plugging this into (5) gives us $w = \mathrm{tr}\{J_0 (k J_0)^{-1} J_0\}/\mathrm{tr}(J_0) = 1/k$. ∎
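Written out in our reconstructed notation, the calculation behind Lemma 2 runs as follows, where $\mathcal{F}(\theta_0)$ denotes the Fisher information matrix of the parametric model at $\theta_0$ (a sketch under these notational assumptions):

$$\begin{aligned}
I_0 &= k^2 \int \nabla \log f_{\theta_0}(x)\, \nabla \log f_{\theta_0}(x)^{T}\, f_{\theta_0}(x)\, dx = k^2\, \mathcal{F}(\theta_0),\\
J_0 &= k \int \nabla^2 \{-\log f_{\theta_0}(x)\}\, f_{\theta_0}(x)\, dx = k\, \mathcal{F}(\theta_0),\\
w   &= \frac{\operatorname{tr}(J_0 I_0^{-1} J_0)}{\operatorname{tr}(J_0)}
     = \frac{\operatorname{tr}\{\mathcal{F}(\theta_0)\}}{k\, \operatorname{tr}\{\mathcal{F}(\theta_0)\}} = \frac{1}{k}.
\end{aligned}$$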

4 Illustrations

4.1 Normal model with quadratic loss

Suppose we observe independent data from a multivariate normal distribution, $x_i \sim N_d(\mu_0, \Sigma_0)$ for $i = 1, \dots, n$, and we have a loss function that is of quadratic form,

$$\ell(\theta, x) = \tfrac{1}{2}(x - \theta)^{T} \Lambda^{-1} (x - \theta).$$

Such a quadratic loss is non-trivial, as it could be used to estimate multiple integrals of the type $\int g(x)\, dF_0(x)$, where in the loss function above we would replace $x$ by $g(x)$.

We shall assume throughout that $\Sigma_0$ and $\Lambda$ are strictly positive-definite matrices. Differentiating under the integral sign shows that the parameter of interest is the population mean, $\theta_0 = \mu_0$, regardless of $\Sigma_0$ and $\Lambda$. The loss covariance $\Lambda$ is free to be set by the practitioner and does not impact the parameter of interest or the loss-likelihood bootstrap, though it does change the general Bayesian posterior.

The loss-likelihood bootstrap amounts to repeatedly drawing Dirichlet weights $(w_1, \dots, w_n)$ and computing the weighted mean $\tilde\theta = \sum_{i=1}^{n} w_i x_i$. Using standard properties of the Dirichlet distribution it can be shown that this posterior is centred at the sample mean $\bar x$, with a covariance matrix given by

$$\Sigma_{LLB} = \frac{1}{n(n+1)}\, X^{T}\Big(I_n - \frac{1}{n} J_n\Big) X,$$

where $X$ is the $n \times d$ matrix of observations, $I_n$ is the $n$-dimensional identity matrix and $J_n$ is the matrix with every element equal to 1. Given we know the distribution of $x$, we can compute the expectation of these quantities,

$$E(\Sigma_{LLB}) = \frac{n-1}{n(n+1)}\, \Sigma_0.$$

Now let us consider the general Bayesian approach. If $\Lambda = \Sigma_0$ then the loss function is equal, up to a constant, to the negative log-likelihood associated with the data-generating mechanism. We say that this loss is well specified, and find from Lemma 1 that we should set $w = 1$. For this special case, general Bayesian updating coincides with Bayesian updating under a normal location model.

Given our knowledge of the distribution of the data, we can determine the matrices $I_0$ and $J_0$ using standard properties of the normal distribution. In particular, we have $I_0 = \Lambda^{-1} \Sigma_0 \Lambda^{-1}$ and $J_0 = \Lambda^{-1}$. Using Lemma 1 we get the following expression for the loss scale that calibrates the asymptotic posterior Fisher information number,

$$w = \frac{\mathrm{tr}(\Sigma_0^{-1})}{\mathrm{tr}(\Lambda^{-1})}. \qquad (7)$$

Suppose that the loss function is the standard quadratic loss ($\Lambda = I_d$), and each dimension of $x$ is independent, with $\Sigma_0 = \mathrm{diag}(\sigma_1^2, \dots, \sigma_d^2)$. Plugging these expressions into (7) we obtain $w = d^{-1} \sum_{i=1}^{d} \sigma_i^{-2}$; that is, the loss scale is the average precision. If $\sigma_i^2 < 1$ for each $i$, then the data is under-dispersed relative to the loss (considering the loss as a negative log-likelihood). The loss scale calibration acts to correct the loss likelihood to better match the likelihood of the well-specified model: specifically, we will obtain a $w > 1$, thus reducing the variance associated with our loss likelihood. Similarly, if $\sigma_i^2 > 1$ for each $i$ then we obtain $w < 1$, which again calibrates the loss to better match the data. We do not expect in all cases for our loss likelihood to match the likelihood under the well-specified model, as the covariance structure of the data has many more degrees of freedom than the scalar $w$ available to calibrate the general Bayesian posterior. Reassuringly, Lemma 2 tells us that if $\Sigma_0 = \sigma^2 I_d$ we do obtain correct calibration; i.e. $w = \sigma^{-2}$.
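A quick Monte Carlo check of (7) in Python, comparing the population loss scale with the plug-in estimate (6) for the quadratic loss; the particular $\Sigma_0$, $\Lambda$ and sample size are arbitrary choices of ours.

import numpy as np

rng = np.random.default_rng(5)
d, n = 3, 5000
Sigma0 = np.diag([0.25, 1.0, 4.0])             # true data covariance
Lam = np.eye(d)                                # loss covariance (standard quadratic)
x = rng.multivariate_normal(np.zeros(d), Sigma0, size=n)

# population value from (7): w = tr(Sigma0^{-1}) / tr(Lam^{-1})
w_pop = np.trace(np.linalg.inv(Sigma0)) / np.trace(np.linalg.inv(Lam))

# plug-in estimate (6): for this loss, grad = Lam^{-1}(theta - x), hess = Lam^{-1}
theta_hat = x.mean(axis=0)
g = (theta_hat - x) @ np.linalg.inv(Lam)       # n x d matrix of gradients
I_hat = g.T @ g / n
J_hat = np.linalg.inv(Lam)
w_hat = np.trace(J_hat @ np.linalg.solve(I_hat, J_hat)) / np.trace(J_hat)
print(w_pop, w_hat)                            # both near 1.75, the average precision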

More generally, given matrices $\Sigma_0$ and $\Lambda$ we obtain, asymptotically, the following general Bayesian posterior:

$$\theta \mid x_{1:n} \;\sim\; N\Big(\bar x,\; \frac{1}{n w}\, \Lambda\Big),$$

where $w$ is given in (7). The scale of the general Bayes loss is set to match the trace of the precision matrix of the loss likelihood with that of the data-generating mechanism. For univariate data this setting of $w$ ensures that the loss likelihood is equal to the likelihood, and thus the general Bayesian update will coincide with the Bayesian update.

In practice the distribution of the data will be unknown. In this case we still have $\nabla^2 \ell(\theta, x) = \Lambda^{-1}$ for all $\theta$ and $x$, so that $J_0 = \Lambda^{-1}$. As $\theta_0$ is the population mean, we can see that $I_0 = \Lambda^{-1} \Sigma_0 \Lambda^{-1}$, where $\Sigma_0$ is the population covariance matrix for $x$. So we can rewrite (5) as $w = \mathrm{tr}(\Sigma_0^{-1})/\mathrm{tr}(\Lambda^{-1})$. If $d = 1$, an unbiased estimator of $w^{-1}$ is the sample variance divided by the loss variance; its reciprocal may be a biased but consistent estimator of $w$. The plug-in estimator (6) is available for $w$ more generally.

4.2 Bayesian support vector machine for binary classification

Consider the problem of binary classification where we observe $z_{1:n} = \{(x_i, y_i)\}_{i=1}^{n}$, with each $y_i \in \{-1, 1\}$, where $x_i$ denotes the covariates of an observation belonging to class $y_i$. We would like to predict future $y$s given their covariates $x$. Specifically, the objective is to learn about the optimal linear classification rule $\mathrm{sign}(a^{T} x + b)$, with parameter $\theta = (a, b)$, that minimizes the expected loss under $F_0$ of a margin-based loss function,

$$\theta_0 = \arg\min_{\theta} \int \phi\{y\,(a^{T} x + b)\}\, dF_0(x, y).$$

In a general Bayesian framework, we can compute a posterior distribution for the parameter of interest given just the loss function, prior beliefs for and a loss scale. Our work in this paper provides a means for determining the loss scale. The loss-likelihood bootstrap also provides a posterior sample without requiring a loss scale, however it does not admit a prior.

Often the loss function of interest is the 0–1 loss, $\phi_{01}(m) = \mathbb{1}(m \le 0)$, whose non-convexity leads to a number of computational problems relating to optimization under this loss. A popular approach is to use a convex surrogate loss in place of $\phi_{01}$. Bartlett et al. (2006) explore this idea formally, and provide some simple conditions for a surrogate loss to be Bayes-risk consistent for the 0–1 loss.

A popular classification method in the machine learning literature is the support vector machine (Cortes and Vapnik, 1995). Applied to linear classifiers as considered above, this method amounts to penalized optimization of a convex surrogate loss:

$$\min_{a, b}\; \lambda \|a\|^2 + \sum_{i=1}^{n} \phi_h\{y_i (a^{T} x_i + b)\}, \qquad \phi_h(m) = \max(0,\, 1 - m).$$

Here $\lambda$ is a hyperparameter and $\phi_h$ is often referred to as the hinge loss. The optimization can be solved efficiently as a convex quadratic programming problem. The output is a linear classification rule that performs well empirically on a wide range of classification problems. The method does not, however, provide any uncertainty quantification about the optimal linear boundary. We consider how we can use the ideas developed in this paper to provide uncertainty quantification about the optimal linear discriminant.

The problem with using the hinge loss in our framework is that we use first and second derivatives of the loss in the calculation of $\hat w_n$. The hinge loss can easily be smoothed, however; see for example Zhang (2004). We construct a smoothed hinge loss $\phi_s$ that coincides with the first three derivatives of $\phi_h$ outside of a small interval around the hinge point $m = 1$, and is a monotonic polynomial in between.

A comparison of this loss function to the standard hinge loss can be found in Fig. 1.

We can sample from the loss-likelihood bootstrap or the calibrated general Bayes posterior of the parameter that minimizes the expectation of $\phi_s$. We propose the following prediction routine for a new observation, given its covariates $x$ and a posterior sample. For each element $(a, b)$ of the posterior sample we compute the class prediction $\mathrm{sign}(a^{T} x + b)$. We then predict the modal class across all of these predictions.
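The following Python sketch carries out the loss-likelihood bootstrap for the linear classifier together with the modal-class prediction rule just described. The quadratically smoothed hinge used here is a standard stand-in (see e.g. Zhang, 2004) and does not reproduce the paper's exact polynomial construction; the data-generating choice at the bottom is likewise ours.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def smooth_hinge(m):
    # quadratically smoothed hinge: a stand-in for the paper's smoothing
    return np.where(m >= 1, 0.0,
                    np.where(m <= 0, 0.5 - m, 0.5 * (1.0 - m) ** 2))

def llb_classifier(X, y, B=200):
    # loss-likelihood bootstrap for theta = (a, b): each draw minimizes
    # the Dirichlet-weighted smoothed hinge loss over the margins
    n, p = X.shape
    draws = np.empty((B, p + 1))
    for j in range(B):
        w = rng.dirichlet(np.ones(n))
        obj = lambda t: np.sum(w * smooth_hinge(y * (X @ t[:p] + t[p])))
        draws[j] = minimize(obj, np.zeros(p + 1), method="BFGS").x
    return draws

def predict_modal(draws, X, p):
    # modal class over the posterior sample, as described in the text
    votes = np.sign(X @ draws[:, :p].T + draws[:, p])   # n x B predictions
    return np.where(votes.sum(axis=1) >= 0, 1.0, -1.0)

X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.5 * rng.normal(size=200) >= 0, 1.0, -1.0)
draws = llb_classifier(X, y)
print((predict_modal(draws, X, 2) == y).mean())         # training accuracy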

To test out this classification method, we constructed a synthetic dataset from a known two-class sampling distribution; a class conditional density plot can be found in Fig. 1. A linear classification boundary attains the Bayes risk under the 0–1 loss for this distribution, which pins down the optimal values of $a$ and $b$. Our classification posterior should concentrate there for large sample sizes.

Figure 1: Left: Margin loss plot for hinge (solid) and smooth hinge loss (dashed). Right: Class conditional probability density plot for the synthetic example.

We generated a synthetic dataset and assigned independent priors to $a$ and $b$. For the general Bayes posterior we computed a loss scale using the plug-in estimator $\hat w_n$ specified in (6), under the smooth hinge loss $\phi_s$. We generated a posterior sample using the Hamiltonian Monte Carlo routine implemented in the probabilistic programming language Stan (Carpenter et al., 2016). We also generated a sample from the loss-likelihood bootstrap. Figure 2 shows the joint general Bayesian posterior density of $(a, b)$, as well as marginal density plots for both methods. The general Bayesian posterior distribution matches the loss-likelihood bootstrap well, though the general posterior is calibrated to match asymptotic posterior information, as opposed to coverage.

Figure 2: Synthetic example plots. Left: Joint general Bayesian posterior density plot for $(a, b)$. Middle and Right: Marginal density plots for the two components of the parameter, for the general Bayes posterior (black) and the loss-likelihood bootstrap (grey).

Figure 3 shows the general posterior probability that the optimal rule predicts class 1 given covariate value $x$, as a function of $x$, for various sample sizes. The curves are averages over repeated runs. The loss scale estimate $\hat w_n$, and the misclassification error on a held-out test dataset relative to the misclassification error of a linear support vector machine, are also displayed in Fig. 3. We used the e1071 support vector machine implementation in R, with five-fold cross-validation to set the regularization parameter.

Posterior samples were used for prediction, and for the general Bayesian posterior an initial portion of each chain was discarded as burn-in. In the left pane of Fig. 3 we see the posterior predictive probability curves steepening around the optimal boundary as the number of observations increases, showing that the posterior mass does concentrate at the optimal classification boundary. The plug-in loss scale exhibits higher variance for small datasets. Additionally, our plug-in estimator exhibits some bias, which diminishes with the number of observations. The performance of our classification routines is very similar to that of a linear support vector machine.

Figure 3: Further synthetic example plots. Left: General Bayesian predictive probability of the optimal decision rule predicting class 1, as a function of the covariate $x$, for increasing sample sizes (from dark to light). Middle: Box plot for $\hat w_n$ as a function of sample size, over 100 repeated runs. Right: Box plot, for each method, of its misclassification rate minus the misclassification rate of a linear support vector machine, over repeated runs; GB and LLB refer to general Bayesian and loss-likelihood bootstrap respectively.

As a more challenging example, we took the Statlog German Credit dataset from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml), preprocessed following Fernández-Delgado et al. (2014). It contains 24 covariates across 1000 customers, where each customer belongs to one of two classes pertaining to the lending experience of the customer.

The following test was repeated a number of times: we randomly split the dataset 75:25 to obtain a training and a test dataset. For the general Bayesian method, we used an independent prior on each covariate dimension and the intercept. We computed $\hat w_n$ and generated a posterior sample in the same way as in the synthetic dataset example. The misclassification rate was recorded in comparison to that of a linear support vector machine.

The classification performance of our general Bayesian procedure is very similar to that of the support vector machine, which does not provide uncertainty quantification. Posterior marginals for the intercept and the first component of the coefficient vector are displayed in Fig. 4 for a single run; again the two methods appear well aligned.

Figure 4: Statlog German Credit plots. Left and Middle: Marginal density plots for two of the parameters, for the general Bayes posterior (black) and the loss-likelihood bootstrap (grey). Right: Box plot of misclassification rate minus the support vector machine misclassification rate; GB and LLB refer to general Bayesian and loss-likelihood bootstrap respectively.

Both of our methods can easily be adapted to cost-sensitive classification problems, by use of an asymmetric hinge loss (Scott, 2012).

5 Discussion

In this paper we have presented two methods for generating samples from posterior belief distributions about a parameter defined by a loss function. Both methods make minimal assumptions about the data-generating mechanism.

The loss-likelihood bootstrap is a prior-free method built on a Bayesian nonparametric model. It does not require any calibration. Computationally, the method centres around optimization. Independent samples are generated and the method is trivially parallelizable.

The loss-likelihood bootstrap does not admit a prior specification, which may be unappealing to the subjective Bayesian. One idea would be to importance-weight the Bayesian bootstrap samples with respect to a chosen prior. This is also discussed in Newton and Raftery (1994) as a means of aligning weighted likelihood bootstrap samples to a Bayesian posterior, though it requires density estimation under the Bayesian bootstrap, which may well suffer from a curse of dimensionality in even moderate dimensions. Furthermore, these weights are prone to degeneracy if the posteriors under the two priors are meaningfully different. Kessler et al. (2015) suggest constructing a prior on the state probabilities that matches the marginal prior for the parameter of interest, and is conditionally noninformative given the parameter. Again, computing this conditionally noninformative prior requires some density estimation.

General Bayesian updating provides a decision-theoretic posterior under a coherency condition and a proper prior. It does, however, require the calibration of data information relative to prior information, for which we have provided a new method using the loss-likelihood bootstrap.

Our calibration argument relies on the matching of asymptotic posterior information, which is well motivated as both methods make few distributional assumptions and target the same parameter. In Bissiri et al. (2016) a number of possible alternative means of calibration are discussed, including subjective calibration, unit information loss matching and hierarchical treatment via a loss function on $w$. Although these ideas may suit particular applications, they are less well motivated in general compared to the proposal in this paper. Syring and Martin (2017) provide an iterative method for setting $w$ by ensuring calibration of a single user-specified posterior credible region. However, this method is computationally demanding and shifts the issue of setting the loss scale to one of choosing a single credible region to calibrate.

For the special case of a potentially misspecified self-information loss, Holmes and Walker (2017) set the loss scale by matching the expected information gain from a single observation across two experiments. In the first experiment the self-information loss is well specified in relation to the distribution of the observations; in this case the Bayesian update is optimal, so $w = 1$. In the second experiment the data come from an unknown distribution, so a general Bayesian update with self-information loss is used. For self-information losses, $w$ acts as a tempering parameter on the likelihood. The Fisher information distance is used to measure the prior-to-posterior information gain, and the data are used to estimate these quantities.

The approach of Holmes and Walker (2017) focuses on prior information, whereas our method calibrates asymptotic posterior information to a bootstrap. One important quality of our method is that it recovers the parametric Bayesian learning rate when the model is correct up to an arbitrary tempering (any $k$ in Lemma 2), whereas Holmes and Walker (2017) will only recover the correct rate if the correct log-likelihood is used; i.e. $k = 1$.

However, the approach of Holmes and Walker (2017) is not an option in the general loss case, as we do not have a benchmark experiment for which $w$ is known. The criterion of Fisher information matching with the loss-likelihood bootstrap is applicable to calibrating posterior distributions based on loss functions, under minimal assumptions about the underlying data-generating mechanism. This includes the self-information loss, $\ell(\theta, x) = -\log f_\theta(x)$, as a particular case.

References

  • Bartlett et al. (2006) Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, Classification, and Risk Bounds. J Am Stat Assoc, 101(473):138–156, 2006.
  • Berk (1966) Robert H. Berk. Limiting Behavior of Posterior Distributions when the Model is Incorrect. Ann Math Stat, 37(1):51–58, 1966.
  • Bissiri et al. (2016) P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. J R Stat Soc Series B Stat Methodol, 78(5):1103–1130, 2016.
  • Carpenter et al. (2016) Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Michael A Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. J Stat Softw, 20, 2016.
  • Chamberlain and Imbens (2003) Gary Chamberlain and Guido W Imbens. Nonparametric Applications of Bayesian Inference. J Bus Econ Stat, 21(1):12–18, 2003.
  • Chernozhukov and Hong (2003) Victor Chernozhukov and Han Hong. An MCMC approach to classical estimation. J Econom, 115(2):293–346, 2003.
  • Cortes and Vapnik (1995) Corinna Cortes and Vladimir Vapnik. Support-Vector Networks. Mach Learn, 20(3):273–297, 1995.
  • Efron (1979) Bradley Efron. Bootstrap Methods: Another look at the Jackknife. Ann Stat, 7(1):1–26, 1979.
  • Ferentinos and Papaioannou (1981) K. Ferentinos and T. Papaioannou. New parametric measures of information. Inf Control, 51(3):193–208, 1981.
  • Ferguson (1973) Thomas S. Ferguson. A Bayesian Analysis of Some Nonparametric Problems. Ann Stat, 1(2):209–230, 1973.
  • Fernández-Delgado et al. (2014) Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we Need Hundreds of Classifiers to Solve Real World Classification Problems? J Mach Learn Res, 15:3133–3181, 2014.
  • Ghosal and van der Vaart (2017) S. Ghosal and A. van der Vaart. Fundamentals of Nonparametric Bayesian Inference. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2017. ISBN 9780521878265.
  • Holmes and Walker (2017) C. C. Holmes and S. G. Walker. Assigning a value to a power likelihood in a general Bayesian model. Biometrika, 104(2):497, 2017.
  • Huber (1967) Peter J Huber. The behavior of maximum likelihood estimates under nonstandard conditions. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 221–233, 1967.
  • Kessler et al. (2015) David C. Kessler, Peter D. Hoff, and David B. Dunson. Marginally specified priors for non-parametric Bayesian estimation. J R Stat Soc Series B Stat Methodol, 77(1):35–58, 2015.
  • Kullback and Leibler (1951) S. Kullback and R. A. Leibler. On Information and Sufficiency. Ann Math Stat, 22(1):79–86, 1951.
  • Lindley (1956) D. V. Lindley. On a Measure of the Information Provided by an Experiment. Ann Math Stat, 27(4):986–1005, 1956.
  • Müller (2013) Ulrich K Müller. Risk of Bayesian inference in misspecified models, and the sandwich covariance matrix. Econometrica, 81(5):1805–1849, 2013.
  • Newton and Raftery (1994) Michael A Newton and Adrian E Raftery. Approximate Bayesian Inference with the Weighted Likelihood Bootstrap. J R Stat Soc Series B Stat Methodol, 56(1):3–48, 1994.
  • Royall and Tsou (2003) Richard Royall and Tsung-Shan Tsou. Interpreting statistical evidence by using imperfect models: robust adjusted likelihood functions. J R Stat Soc Series B Stat Methodol, 65(2):391–404, 2003.
  • Rubin (1981) Donald B. Rubin. The Bayesian bootstrap. Ann Stat, 9(1):130–134, 1981.
  • Scott (2012) Clayton Scott. Calibrated asymmetric surrogate losses. Electron J Stat, 6:958–992, 2012. ISSN 19357524.
  • Syring and Martin (2017) Nick Syring and Ryan Martin. Calibrating general posterior credible regions. arXiv 1509.00922, 2017. URL http://arxiv.org/abs/1509.00922.
  • Walker (2016) Stephen G. Walker. Bayesian information in an experiment and the Fisher information distance. Stat Probab Lett, 112:5–9, 2016.
  • Zellner (1988) Arnold Zellner. Optimal Information Processing and Bayes’s Theorem. Am Stat, 42(4):278–280, 1988.
  • Zhang (2004) Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the twenty-first international conference on Machine learning, page 116. ACM, 2004.