Quantifying Uncertainty in Transdimensional Markov Chain Monte Carlo Using Discrete Markov Models

Daniel W. Heck (Statistical Modeling in Psychology, University of Mannheim, Germany; heck@uni-mannheim.de), Antony M. Overstall (Mathematical Sciences, University of Southampton), Quentin F. Gronau (Department of Psychology, University of Amsterdam), and Eric-Jan Wagenmakers (Department of Psychology, University of Amsterdam)

R code for all simulations is available at the Open Science Framework at https://osf.io/kjrkz, and the R package MCMCprecision is available at https://CRAN.R-project.org/package=MCMCprecision.
Abstract

Bayesian analysis often concerns an evaluation of models with different dimensionality, as is necessary in, for example, model selection or mixture modeling. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indicator variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only a few switches occur between the models in the transdimensional MCMC output, precision may be low and assessments based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indicator variable. Assuming a first-order Markov model, the method samples from the posterior of the stationary distribution. This allows assessing the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate of the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.

Keywords: reversible jump MCMC, product space MCMC, Bayesian model selection, posterior model probabilities, Bayes factor.


1 Introduction

Transdimensional Markov chain Monte Carlo (MCMC) methods provide an indispensable tool for the Bayesian analysis of models with varying dimensionality (Sisson, 2005). An important application is Bayesian model selection, where the aim is to estimate posterior model probabilities for a set of models $\mathcal{M}_1, \dots, \mathcal{M}_I$, given the data $\boldsymbol{y}$ (Kass and Raftery, 1995). In order to ensure that the Markov chain converges to the correct stationary distribution, transdimensional MCMC methods such as reversible jump MCMC (Green, 1995) or the product space approach (Carlin and Chib, 1995) match the dimensionality of the parameter spaces across different models (e.g., by adding parameters and link functions). Transdimensional MCMC methods have proven to be very useful for the analysis of many statistical models, including capture-recapture models (Arnold et al., 2010), generalized linear models (Forster et al., 2012), factor models (Lopes and West, 2004), and mixture models (Frühwirth-Schnatter, 2001), and are widely used in substantive applications such as the selection of phylogenetic trees (Opgen-Rhein et al., 2005), gravitational wave detection in physics (Karnesis, 2014), or cognitive models in psychology (Lodewyckx et al., 2011).

Crucially, transdimensional MCMC methods always include a discrete parameter $z$ with values in $\{1, \dots, I\}$ indexing the competing models. At iteration $t$, posterior samples $z^{(t)}$ are obtained for the indicator variable along with samples of the model parameters, which are usually continuous and differ in dimensionality across models (for a review, see Sisson, 2005). For instance, a Gibbs sampling scheme can be adopted (Barker and Link, 2013) in which the indicator variable and the continuous model parameters are updated in alternating order. Such a sampler switches between models depending on the current values of the continuous parameters, and then updates these parameters in light of the current model, conditionally on the value of $z^{(t)}$ (Barker and Link, 2013). Given convergence of the MCMC chain, the sequence $z^{(1)}, z^{(2)}, \dots$ follows a discrete stationary distribution with probabilities $\boldsymbol\pi = (\pi_1, \dots, \pi_I)$. Due to the design of the sampler, these probabilities are identical to the posterior model probabilities of interest, $\pi_i = p(\mathcal{M}_i \mid \boldsymbol{y})$, and, given uniform model priors $p(\mathcal{M}_i) = 1/I$, also proportional to the marginal likelihoods $p(\boldsymbol{y} \mid \mathcal{M}_i)$. Hence, transdimensional MCMC samplers can be used to estimate these posterior probabilities directly by the relative frequencies of samples falling into the $I$ categories, $\hat\pi_i = \frac{1}{T}\sum_{t=1}^{T} \mathbb{1}(z^{(t)} = i)$, where $\mathbb{1}$ is the indicator function. Due to the ergodicity of the Markov chain, this estimator is guaranteed to be asymptotically unbiased (Green, 1995; Carlin and Chib, 1995).

For continuous parameters, dependencies due to MCMC sampling are usually taken into account when assessing estimation uncertainty (Jones et al., 2006; Flegal and Gong, 2015; Doss et al., 2014). In contrast, the estimate $\hat{\boldsymbol\pi}$ based on the sequence of discrete samples $z^{(t)}$ is usually reported without any quantification of estimation uncertainty. Often, these samples are correlated to a substantial, but unknown, degree because of infrequent switching between models. This is illustrated in Figure 1, which shows a sequence of independent and of correlated samples in Panels A and B, respectively. Inference about the stationary distribution should be more reliable in the first case than in the second, in which the autocorrelation reduces the amount of information available about $\boldsymbol\pi$. Moreover, a standard error that assumes independent sampling will obviously underestimate the true variability of the estimate $\hat{\boldsymbol\pi}$ if the samples are correlated (Green, 1995; Sisson, 2005). To obtain a measure of precision, Green (1995) proposed running several independent MCMC chains and computing the standard error of the estimate across these independent replications. For complex models, however, this approach may require a substantial amount of additional computing time for burn-in and adaptation and can thus be infeasible in practice.

Figure 1: Illustration of MCMC iterations of a discrete model-index parameter $z$ that were sampled from (A) independent categorical distributions and (B) a Markov model with positive autocorrelation.

Assessing the precision of the estimate $\hat{\boldsymbol\pi}$, which depends on the autocorrelation of the sequence of discrete samples $z^{(t)}$, is of major importance. In the case of model selection, it must be ensured that the estimated posterior probabilities are sufficiently precise for drawing substantive conclusions. This issue is especially important when estimating a ratio of marginal likelihoods, that is, the Bayes factor (Jeffreys, 1961). Moreover, it is often of interest to compute the effective sample size, defined as the number of independent samples that would provide the same amount of information about $\boldsymbol\pi$ as the given, autocorrelated MCMC output. Besides providing an intuitive measure of precision, a minimum effective sample size can serve as a principled and theoretically justified stopping rule for MCMC sampling (Gong and Flegal, 2016). However, standard methods for estimating the effective sample size (e.g., computing the spectral density at zero; Plummer et al., 2006; Heidelberger and Welch, 1981) are tailored to continuous parameters. When applied to the model-indicator variable of a transdimensional MCMC method, these methods neglect the discreteness of $z$. Depending on the specific numerical labels assigned to the models, spectral decomposition can then lead to widely varying and arbitrary estimates of the effective sample size (see Section 4).

In summary, transdimensional MCMC is a very important and popular method for Bayesian inference (Sisson, 2005). However, little attention has been paid to the analysis of the resulting MCMC output, which requires taking into account both the autocorrelation and the discrete nature of the model-indicator variable. As a solution, we propose to fit a discrete, first-order Markov model to the MCMC output in order to assess the precision of the estimated stationary distribution $\hat{\boldsymbol\pi}$. Whereas several diagnostics have previously been proposed to assess the convergence of transdimensional MCMC samplers (e.g., Brooks and Giudici, 2000; Castelloe and Zimmerman, 2002; Brooks et al., 2003a; Sisson and Fan, 2007), we are unaware of any method that quantifies the precision of the point estimate $\hat{\boldsymbol\pi}$.

2 Method

2.1 A Discrete Markov Model for Transdimensional MCMC Output

The proposed method approximates the output of a transdimensional MCMC sampler (i.e., the sampled iterations $z^{(1)}, \dots, z^{(T)}$) by a discrete Markov model with transition matrix $P$. This model explicitly accounts for autocorrelation, which in turn allows quantifying estimation uncertainty for the discrete stationary distribution $\boldsymbol\pi$. The entries of $P$ are defined as the transition probabilities $p_{ij} = \Pr(z^{(t+1)} = j \mid z^{(t)} = i)$ for all $i, j \in \{1, \dots, I\}$, with rows summing to one, $\sum_{j=1}^{I} p_{ij} = 1$. According to the discrete Markov model, the probability distribution of the indicator variable at iteration $t$ is obtained by multiplying the transposed initial distribution by the transition matrix $t$ times, $\boldsymbol\pi_0^\top P^{t}$. The proposed method estimates the transition matrix $P$ as a free parameter based on the sufficient statistic $N$, the $I \times I$ matrix of frequencies $n_{ij}$ counting the observed transitions from $z^{(t)} = i$ to $z^{(t+1)} = j$ (Anderson and Goodman, 1957).
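As a minimal sketch in R, the transition-frequency matrix $N$ can be tabulated from a sampled sequence of model indices as follows; the sequence z below is a toy placeholder rather than output of an actual transdimensional sampler.

```r
# Toy sequence of sampled model indices standing in for transdimensional
# MCMC output; N counts the observed transitions from model i to model j.
z <- c(1, 1, 2, 2, 2, 1, 3, 3, 1, 1)
models <- sort(unique(z))
I <- length(models)
N <- matrix(0L, I, I, dimnames = list(models, models))
for (t in seq_len(length(z) - 1)) {
  i <- match(z[t],     models)
  j <- match(z[t + 1], models)
  N[i, j] <- N[i, j] + 1L
}
N
# equivalently: table(factor(z[-length(z)], models), factor(z[-1], models))
```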

Due to the construction of the transdimensional MCMC sampler, the discrete indicator variable $z$ follows a stationary distribution with a constant probability vector $\boldsymbol\pi$ (i.e., the posterior model probabilities of interest). Hence, when modeling the sequence $z^{(1)}, \dots, z^{(T)}$ with the discrete Markov model, the transition matrix $P$ must satisfy the condition for stationarity

$$\boldsymbol\pi^\top P = \boldsymbol\pi^\top, \qquad (1)$$

meaning that the probability vector $\boldsymbol\pi$ is a left eigenvector of the matrix $P$ with eigenvalue one (normalized to sum to one; Anderson and Goodman, 1957). Based on the Markov model, an estimator of $\boldsymbol\pi$ is thus obtained by computing the eigenvector with eigenvalue one of an estimated transition matrix $\hat P$ (Barker and Link, 2013).
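For illustration, the following sketch computes this point estimate with base R's eigen(); the 3 x 3 count matrix is made up for the example.

```r
# Point estimate of the stationary distribution for a toy 3 x 3 matrix of
# transition counts N (e.g., tabulated as in the previous sketch).
N <- matrix(c(8, 1, 1,
              2, 6, 2,
              1, 1, 8), nrow = 3, byrow = TRUE)
P <- N / rowSums(N)                      # estimated transition matrix
e <- eigen(t(P))                         # left eigenvectors of P
v <- Re(e$vectors[, which.max(Re(e$values))])
pi_hat <- v / sum(v)                     # normalized eigenvector with eigenvalue 1
round(pi_hat, 3)
```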

However, we are less interested in a new point estimator of the stationary distribution than in the precision of this estimate. To quantify estimation uncertainty, we thus fit the Markov model with $P$ as a free parameter in a Bayesian framework by drawing $R$ posterior samples $P^{(r)}$ ($r = 1, \dots, R$). This sampling approach has the advantage that we can easily quantify estimation uncertainty (i.e., the dispersion of the posterior distribution of $\boldsymbol\pi$) by computing descriptive statistics of the samples $\boldsymbol\pi^{(1)}, \dots, \boldsymbol\pi^{(R)}$ (e.g., the standard deviation or credibility intervals). Moreover, we can directly quantify the estimation uncertainty of derived quantities such as the posterior model probabilities, model rankings, or Bayes factors (see Section 2.2). In the following, it is important to distinguish between the posterior distribution of $\boldsymbol\pi$ given the sufficient statistic $N$, which quantifies the uncertainty of $\hat{\boldsymbol\pi}$ due to estimation error of the transdimensional MCMC method, and the posterior distribution of the model probabilities given the empirical data (given the data, the latter is simply the constant vector $\boldsymbol\pi$).

Next, we define a prior distribution for the parameter $P$ of the Markov model. Given that the transition matrix $P$ contains one probability vector $P_i = (p_{i1}, \dots, p_{iI})$ in each row $i$, we assume independent Dirichlet distributions with parameter $\epsilon$ for each row,

$$P_i \sim \mathcal{D}(\epsilon, \dots, \epsilon), \quad i = 1, \dots, I. \qquad (2)$$

Conditional on the MCMC output (summarized by the transition frequencies $N$), the estimation uncertainty of $P$ is approximated by drawing $R$ posterior samples $P^{(1)}, \dots, P^{(R)}$. Since the Dirichlet prior is conjugate to the multinomial distribution, the rows can efficiently be drawn as independent samples from Dirichlet distributions with parameters

$$P_i \mid N \sim \mathcal{D}(\epsilon + n_{i1}, \dots, \epsilon + n_{iI}), \quad i = 1, \dots, I. \qquad (3)$$

Based on these samples, the estimation uncertainty of the stationary probabilities is assessed by computing the (normalized) eigenvector with eigenvalue one for each sample (Eq. 1).
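A compact sketch of the complete sampling scheme is given below. It assumes that N holds the observed transition frequencies (e.g., tabulated as in the earlier sketch) and uses eps = 1/I as a stand-in for the default prior discussed in the following paragraphs; it is meant as an illustration of Eqs. 1-3, not as the reference implementation of the MCMCprecision package.

```r
# Draw the rows of the transition matrix from their Dirichlet posteriors
# (Eq. 3) and map each draw to its stationary distribution (Eq. 1).
rdirichlet_row <- function(alpha) {          # one Dirichlet draw via gamma variates
  g <- rgamma(length(alpha), shape = alpha, rate = 1)
  g / sum(g)
}

sample_stationary <- function(N, eps = 1 / nrow(N), R = 5000) {
  I   <- nrow(N)
  out <- matrix(NA_real_, R, I, dimnames = list(NULL, rownames(N)))
  for (r in seq_len(R)) {
    P <- t(apply(N, 1, function(n_i) rdirichlet_row(eps + n_i)))  # sample rows
    e <- eigen(t(P))                          # left eigenvectors of P
    v <- Re(e$vectors[, which.max(Re(e$values))])
    out[r, ] <- v / sum(v)                    # normalized eigenvector with eigenvalue 1
  }
  out
}

# Example usage with a matrix N of transition counts:
# post <- sample_stationary(N)
# colMeans(post)        # point estimates of the posterior model probabilities
# apply(post, 2, sd)    # estimation uncertainty (posterior SD)
```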

With regard to the prior parameter $\epsilon$, small values should be chosen to reduce its influence on the estimation of $\boldsymbol\pi$. In principle, the improper prior $\epsilon = 0$ can be used, which minimizes the impact of the prior on the estimated stationary distribution. This improper prior also ensures that the results do not hinge on the set of models that could possibly be sampled but were never actually observed in the sequence $\boldsymbol z$. For such unsampled models, the corresponding rows and columns of the observed transition-frequency matrix $N$ are filled with zeros. With $\epsilon = 0$, the relevant eigenvector of the transition matrix is thus identical to that of a reduced matrix that includes only the transitions between the models actually sampled in $\boldsymbol z$. However, in our simulations, this improper Dirichlet prior proved to be numerically unstable and resulted in more variable point estimates than the standard i.i.d. estimate or the proper prior discussed next.

Here, we use a weakly informative Dirichlet prior as a default, with $\epsilon$ chosen such that the prior has an impact equivalent to one additional observation for each row of the observed transition matrix $N$. By putting a small weight on all entries of the transition matrix $P$, this prior serves as a regularization of the posterior. However, in scenarios where the number of models $I$ exceeds the number of iterations $T$ of the transdimensional MCMC method (i.e., $I > T$), such a regularization assigns substantial probability weight to models that are never observed in $\boldsymbol z$. To limit the effect of the prior, we therefore assign the positive prior weight $\epsilon$ only to those models that were observed in $\boldsymbol z$ and set the parameter to zero for the remaining models. Besides reducing the impact of the prior, this choice has the computational advantage that one can draw posterior samples and compute eigenvectors for the reduced matrix that includes only the sampled models. In the two examples in Sections 4 and 5, this prior proved to be numerically robust and resulted in point estimates close to the standard i.i.d. estimates.

As a third alternative, the prior can be adapted to the structure of specific transdimensional MCMC implementations that only allow switches to a small subset of the competing models. For instance, in variable selection, regression parameters are often added or removed one at a time, resulting in a birth-death process (Stephens, 2000). For these kinds of samplers, the Dirichlet parameters of transitions that cannot occur can be set to zero selectively. However, such adjustments depend on the chosen MCMC sampling scheme. Therefore, we propose the weakly informative prior as a default for the sampled models and an improper prior with parameter zero for the nonsampled models. This choice provides a good compromise: it is very general and numerically robust while having only a small effect on the posterior. In general, the choice of $\epsilon$ becomes less influential as the number of MCMC samples increases (especially if the row sums of $N$ are large).

2.2 Precision

Based on the posterior samples $\boldsymbol\pi^{(1)}, \dots, \boldsymbol\pi^{(R)}$, it is straightforward to estimate the stationary distribution by the posterior mean (alternatively, the median or mode may be used). More importantly, however, estimation uncertainty due to the transdimensional MCMC method can directly be assessed by plotting the estimated posterior densities of the components $\pi_i$. To quantify the precision of the estimate $\hat{\boldsymbol\pi}$, one can report posterior standard deviations or credibility intervals for the components $\pi_i$. These component-wise summary statistics are most useful if the number of models is relatively small.

An important advantage of drawing posterior samples in a Bayesian framework (instead of relying on asymptotic approximations to the standard error of $\hat{\boldsymbol\pi}$) is that one can directly quantify estimation uncertainty for other quantities of interest. For very large numbers of sampled models, the assessment of estimation uncertainty can focus on the subset of models with the highest posterior model probabilities. Within the sampling approach, estimation uncertainty for the best-performing models can easily be assessed by computing the model ranks within each of the posterior samples $\boldsymbol\pi^{(r)}$. The variability of these model rankings across samples can then be summarized, for instance, by the percentage of identical rank orders for the $k$ best-performing models, or by the percentage of samples in which each model is included in the subset of the $k$ best-performing models (i.e., has a rank smaller than or equal to $k$).

In the case of model selection, dispersion statistics such as the posterior standard deviation are also of interest with respect to the Bayes factor $B_{ij} = p(\boldsymbol y \mid \mathcal{M}_i) / p(\boldsymbol y \mid \mathcal{M}_j)$ (Kass and Raftery, 1995). To judge the estimation uncertainty of the Bayes factor, one can evaluate the corresponding posterior distribution by computing the derived quantities $B_{ij}^{(r)} = \pi_i^{(r)} / \pi_j^{(r)}$ (given uniform prior model probabilities). Precision can also be assessed in model-averaging contexts when comparing subsets of models against each other (e.g., regression models including a specific effect vs. those not including it). Given such disjoint sets of model indices, the posterior probability of each subset of models is directly obtained by summing the posterior samples $\pi_i^{(r)}$ over all models $i$ in the subset. Note that it is invalid to aggregate across model subsets before applying the proposed Markov approach, because functions of discrete Markov chains (e.g., collapsing the original states into subsets) are not Markovian in general (Burke and Rosenblatt, 1958).
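The following sketch illustrates such derived quantities for a matrix of posterior samples of the stationary distribution; here the samples are simulated from a Dirichlet distribution purely for illustration, whereas in practice the samples from the Markov model would be used.

```r
# Derived quantities from posterior samples of the stationary distribution:
# Bayes factors, subset probabilities, and model ranks. 'post' is simulated
# here; rows are posterior samples, columns are models M1-M3.
set.seed(1)
post <- matrix(rgamma(5000 * 3, shape = c(40, 30, 5)), ncol = 3, byrow = TRUE)
post <- post / rowSums(post)
colnames(post) <- c("M1", "M2", "M3")

bf_12 <- post[, "M1"] / post[, "M2"]          # Bayes factor M1 vs. M2 (uniform model priors)
quantile(bf_12, c(0.05, 0.5, 0.95))

in_subset <- rowSums(post[, c("M1", "M3")])   # posterior probability of the subset {M1, M3}
c(mean = mean(in_subset), sd = sd(in_subset))

ranks <- t(apply(-post, 1, rank))             # model ranks within each posterior sample
colMeans(ranks == 1)                          # how often each model ranks best
```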

2.3 Effective Sample Size

Besides quantifying estimation uncertainty, the posterior samples $\boldsymbol\pi^{(r)}$ can be used to estimate the effective sample size of the transdimensional MCMC output. For this purpose, we consider a benchmark model under the ideal scenario of drawing independent samples $z^{(t)}$ from a categorical distribution with probabilities $\boldsymbol\pi$. For this i.i.d. model, we assume an improper Dirichlet prior on the stationary distribution, $\boldsymbol\pi \sim \mathcal{D}(0, \dots, 0)$ (whereas the Markov model assumed a Dirichlet prior on the transition probabilities). Since this prior is conjugate to the multinomial distribution, the posterior for the stationary distribution is given by

$$\boldsymbol\pi \mid \boldsymbol z \sim \mathcal{D}(m_1, \dots, m_I), \qquad (4)$$

conditional on the observed model frequencies $m_i = \sum_{t=1}^{T} \mathbb{1}(z^{(t)} = i)$. Note that the transition frequencies $n_{ij}$ are irrelevant in this i.i.d. model, implying that possible dependencies in the sampled iterations $z^{(t)}$ are ignored.

Given the output of a transdimensional MCMC chain, we can now compare the empirical posterior distribution of $\boldsymbol\pi$ estimated with the Markov model against the theoretically expected posterior distribution under the hypothetical i.i.d. model. Essentially, we match the latter distribution to the former to estimate the effective sample size as the total number of independent samples that would result in a dispersion similar to that estimated by the Markov model. To fit the posterior distribution in Eq. 4 to the samples $\boldsymbol\pi^{(r)}$ of the Markov model, we estimate the parameters $\alpha_1, \dots, \alpha_I$ of a Dirichlet distribution using an efficient fixed-point iteration scheme (Minka, 2000). Next, a comparison of the estimated Dirichlet parameters with the conjugate posterior in Eq. 4 yields $m_i \approx \alpha_i$. It follows that the effective sample size under the assumption of independent sampling from a multinomial distribution can be estimated as

$$\hat n_{\mathrm{eff}} = \sum_{i=1}^{I} \alpha_i - I^2 \epsilon. \qquad (5)$$

Note that it is necessary to subtract the prior sample size $I^2\epsilon$ to account for the choice of $\epsilon$ in the Markov model ($\epsilon$ occurs $I^2$ times in the transition matrix $P$; cf. Eq. 2). Importantly, the estimate $\hat n_{\mathrm{eff}}$ takes the discreteness of the indicator variable $z$ into account and does not change under permutations of the arbitrary numerical model indices.
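As a rough sketch of this step, the Dirichlet parameters can be approximated by moment matching, a simpler stand-in for the fixed-point scheme of Minka (2000); post denotes the matrix of posterior samples of the stationary distribution and eps the prior parameter of the Markov model.

```r
# Approximate effective sample size (Eq. 5) by matching a Dirichlet
# distribution to the posterior samples of the stationary distribution.
# Moment matching is used here for brevity instead of Minka's (2000)
# fixed-point iterations.
effective_size <- function(post, eps) {
  I  <- ncol(post)
  m  <- colMeans(post)                 # component means E[pi_i]
  v  <- apply(post, 2, var)            # component variances Var[pi_i]
  a0 <- mean(m * (1 - m) / v - 1)      # Dirichlet precision, i.e., sum of alpha_i
  a0 - I^2 * eps                       # subtract the prior sample size
}
# e.g.: effective_size(sample_stationary(N), eps = 1 / nrow(N))
```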

2.4 Remarks

The proposed method quantifies estimation uncertainty by fitting a discrete Markov model to the transdimensional MCMC output. For this purpose, a simplifying assumption is made that is not guaranteed to hold: whereas the joint samples in the full model and parameter space necessarily follow a Markov process by construction, this does not imply that the samples $z^{(t)}$ follow a first-order Markov chain marginally (Brooks et al., 2003b; Lodewyckx et al., 2011). In practice, the iterations of the model-indicator variable might have higher-order dependencies, since the transition probabilities depend on the exact state of the MCMC sampler within each model's parameter space. However, in Sections 4 and 5 we show in two empirical examples that the proposed simplification (i.e., fitting a Markov chain of order one) is sufficient to account for the autocorrelation of the samples in practice. Moreover, the approximation by a first-order Markov chain provides a trade-off between ignoring dependencies completely (i.e., assuming i.i.d. samples) and accounting for higher-order dependencies (which would likely increase the computational burden, especially for large numbers of models). Note that it is common practice to rely on simplifying assumptions for the analysis of simulation output; for instance, a standard approach for estimating the effective sample size of continuous parameters assumes that the output sequence can be modeled as a covariance-stationary process with a smooth log spectrum (Heidelberger and Welch, 1981).

The proposed method of fitting a discrete Markov model is very general and can be applied irrespective of the specific transdimensional MCMC implementation. Moreover, it requires only the sampled sequence of the discrete parameter $z$ or the matrix $N$ of observed transition frequencies. If output from multiple independent chains is available, the transition-frequency matrices can simply be summed before applying the method. This follows directly from Bayesian updating: each chain provides independent evidence about the transition matrix $P$, which is reflected by adding the chains' transition frequencies in the conjugate Dirichlet posterior in Eq. 3. Note that this feature can also be used to compare the efficiency of many short versus few long MCMC chains.
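A two-chain toy example of this pooling step follows; the counts are made up purely for illustration.

```r
# Transition counts from two independent chains are pooled by summation
# before the Dirichlet posteriors in Eq. 3 are formed.
N1 <- matrix(c(40, 5,
                4, 50), nrow = 2, byrow = TRUE)
N2 <- matrix(c(38, 7,
                6, 48), nrow = 2, byrow = TRUE)
N_total <- N1 + N2
```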

In the R package MCMCprecision (Heck et al., 2017), we provide an implementation that relies on the efficient computation of eigenvectors in the C++ library Armadillo (Sanderson and Curtin, 2016), accessible in R via the package RcppArmadillo (Eddelbuettel and Sanderson, 2014). On a notebook with an Intel i7-7700HQ processor, drawing posterior samples for 10 (100) sampled models requires approximately 30 milliseconds (7 seconds).
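A hedged usage sketch follows; the function name stationary() applied to the vector of sampled model indices and the form of its output are assumptions based on the package description and should be checked against the current package documentation.

```r
# Assumed interface of the MCMCprecision package (see ?stationary for the
# exact arguments and output); z is a placeholder vector of sampled model
# indices rather than real transdimensional MCMC output.
# install.packages("MCMCprecision")
library(MCMCprecision)

set.seed(1)
z   <- sample(1:5, size = 2000, replace = TRUE,
              prob = c(0.4, 0.3, 0.2, 0.05, 0.05))
fit <- stationary(z)    # posterior summary of the stationary distribution
fit
```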

3 Illustration: Effect of Autocorrelation

Before applying the proposed method to actual MCMC output, we first illustrate its use in an idealized setting. To investigate the effect of autocorrelation for discrete parameters, we generate sequences $z^{(1)}, \dots, z^{(T)}$ from the Markov model for a given stationary distribution $\boldsymbol\pi$. To induce autocorrelation, we define a mixture process for each iteration $t$: with probability $\beta$, the indicator variable stays at the current model, $z^{(t+1)} = z^{(t)}$; with probability $1 - \beta$, the value $z^{(t+1)}$ is sampled from the stationary distribution $\boldsymbol\pi$. Thereby, increasing values of $\beta$ result in larger autocorrelation of the sequence $z^{(t)}$.

For varying levels of $\beta$, we generated 500 replications, applied the proposed method, and computed the precision of the estimate $\hat{\boldsymbol\pi}$. Besides the posterior SD, we were interested in the coverage probability of the data-generating value $\pi_i$, that is, how often it falls within the 90% credibility interval defined by the 5% and 95% quantiles. As a benchmark, we also computed the corresponding posterior SD under the (false) assumption that the samples were drawn independently, by fitting the i.i.d. model of Section 2.3. Note that the latter uncertainty estimate is equivalent to the standard Monte Carlo error that assumes independent sampling.
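A compact sketch of this data-generating process is shown below; the stationary distribution, sequence length, and mixture weight are illustrative values rather than the settings of the reported simulation.

```r
# Mixture process used to induce autocorrelation: with probability beta the
# chain repeats the current model, otherwise a fresh value is drawn from the
# target stationary distribution.
simulate_chain <- function(T_iter, probs, beta) {
  z <- numeric(T_iter)
  z[1] <- sample.int(length(probs), 1, prob = probs)
  for (t in 2:T_iter) {
    z[t] <- if (runif(1) < beta) z[t - 1] else sample.int(length(probs), 1, prob = probs)
  }
  z
}

set.seed(123)
z <- simulate_chain(T_iter = 1000, probs = c(0.7, 0.2, 0.1), beta = 0.8)
mean(z[-1] == z[-length(z)])   # share of repeated states grows with beta
```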

Figure 2: Estimation uncertainty for the stationary distribution $\boldsymbol\pi$. (A) The Markov method (black dots) correctly indicates that the estimation error of the posterior model probabilities increases as autocorrelation increases. When assuming i.i.d. samples (gray triangles), the estimated precision does not depend on the autocorrelation. (B) Proportion of 500 replications for which the 90% credibility intervals include the data-generating stationary distribution $\boldsymbol\pi$.

Figure 2 shows the results of this simulation. In Panel A, the estimation uncertainty obtained with the Markov model increases with $\beta$, thereby taking the increasing autocorrelation into account. In contrast, the i.i.d. model assumes independence a priori, and its posterior uncertainty is therefore unaffected by $\beta$. As a result, the corresponding 90% credibility intervals become less likely to include the data-generating value as autocorrelation increases, as shown in Panel B, whereas the Markov model provides an accurate description of the estimation uncertainty.

4 Variable Selection in Logistic Regression

In the following, we apply the proposed method to the problem of selecting variables in a logistic regression, an example introduced by Dellaportas et al. (2000) to highlight the implementation of transdimensional MCMC in BUGS (see also Dellaportas et al., 2002; Ntzoufras, 2002). Table 1 shows the frequencies of deaths and survivals conditional on severity and whether patients received treatment (i.e., antitoxin medication; Healy, 1988). To emphasize the importance of considering estimation uncertainty for the posterior model probabilities, we compare the efficiency of two transdimensional MCMC approaches, which can both be implemented in JAGS (Plummer, 2003).

Condition (A)   Antitoxin (B)   Death   Survival
More Severe     Yes             15      6
More Severe     No              22      4
Less Severe     Yes             5       15
Less Severe     No              7       5

Table 1: Logistic regression data set by Healy (1988).

The full logistic regression model assumes a binomial distribution for the survival frequencies $y_j$ in the four conditions $j = 1, \dots, 4$ and a linear model on the logit-transformed survival probabilities $\theta_j$,

$$y_j \sim \mathrm{Binomial}(n_j, \theta_j), \qquad (6)$$
$$\mathrm{logit}(\theta_j) = \beta_0 + \beta_1 x_A(j) + \beta_2 x_B(j) + \beta_3\, x_A(j)\, x_B(j), \qquad (7)$$

where $n_j$ is the total number of patients in condition $j$ and $\beta_1, \beta_2, \beta_3$ are the regression coefficients for the effect-coded variables $x_A$, $x_B$, and their product $x_A x_B$. Variable selection is required to choose among five models: the intercept-only model, the main-effect models A, B, and A+B, and the model AB that also includes the interaction. For comparability, we use the same priors as Dellaportas et al. (2000) and assume a centered Gaussian prior with fixed variance for each regression parameter. Moreover, the prior model probabilities were set to be uniform, $p(\mathcal{M}_i) = 1/5$. Note that efficiency can be increased by selecting prior probabilities that result in approximately uniform posterior probabilities (Lodewyckx et al., 2011). Moreover, nonuniform prior probabilities might be used to protect against multiple-testing issues (i.e., Bayes multiplicity; Scott and Berger, 2010).

The first of the two implemented transdimensional MCMC approaches uses unconditional priors (Kuo and Mallick, 1998; KM98) and includes a binary indicator variable for each regression coefficient of the full model. These indicators determine which regression coefficients are included by switching the corresponding additive terms of the linear model in Eq. 7 on or off. Details about the full conditional posterior distributions are provided by Dellaportas et al. (2000, p. 7).
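A hedged sketch of such a sampler in JAGS, called from R via rjags, is shown below. The Gaussian prior precision, the effect coding, and the data layout are illustrative choices, and for brevity the sketch allows all eight indicator combinations rather than enforcing the restriction to the five models considered here; it is not the implementation used for the reported results.

```r
# Kuo-Mallick style variable selection for the logistic regression example:
# binary inclusion indicators g1-g3 multiply the regression terms.
library(rjags)

model_string <- "
model {
  for (j in 1:4) {
    y[j] ~ dbin(theta[j], n[j])
    logit(theta[j]) <- beta0 + g1 * beta1 * xA[j] + g2 * beta2 * xB[j] +
                       g3 * beta3 * xA[j] * xB[j]
  }
  beta0 ~ dnorm(0, 0.125)   # illustrative Gaussian priors (precision 0.125)
  beta1 ~ dnorm(0, 0.125)
  beta2 ~ dnorm(0, 0.125)
  beta3 ~ dnorm(0, 0.125)
  g1 ~ dbern(0.5)           # inclusion indicators (Kuo & Mallick, 1998)
  g2 ~ dbern(0.5)
  g3 ~ dbern(0.5)
}"

dat <- list(y  = c(6, 4, 15, 5),         # survival counts from Table 1
            n  = c(21, 26, 20, 12),      # patients per condition
            xA = c(1, 1, -1, -1),        # effect-coded severity
            xB = c(1, -1, 1, -1))        # effect-coded antitoxin

jm <- jags.model(textConnection(model_string), data = dat, n.chains = 1)
update(jm, 1000)                                        # burn-in
post <- coda.samples(jm, c("g1", "g2", "g3"), n.iter = 10000)

g <- as.matrix(post)
z <- 1 + g[, "g1"] + 2 * g[, "g2"] + 4 * g[, "g3"]      # discrete model index
table(z) / length(z)                                    # naive i.i.d. estimates
```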

As a second transdimensional MCMC approach, we implemented the method of Carlin and Chib (1995; CC95), which stacks all model-specific parameters into one joint parameter vector whose components are the regression-parameter vectors of the five models. Thereby, this approach samples a total of 12 regression parameters along with the indicator variable $z$. Note that the method of Carlin and Chib (1995) uses pseudo-priors for the parameters of the currently unused models, which do not influence the statistical inference about the models and their parameters. However, these pseudo-priors determine the conditional proposal probabilities of switching between the models and thereby affect the efficiency of the MCMC chain. In substantive applications, the pseudo-priors should be chosen to match the corresponding posteriors in order to ensure high probabilities of switching between the models (cf. Carlin and Chib, 1995; Barker and Link, 2013). Here, however, we intentionally did not optimize the pseudo-priors, in order to illustrate that our method can correctly detect the lower precision resulting from this suboptimal choice.

Figure 3: Posterior distribution of the posterior model probabilities in the logistic regression example based on the Markov and the i.i.d. model. Vertical black lines show the target values (CC95 = Carlin and Chib, 1995; KM98 = Kuo and Mallick, 1998).

Figure 3 shows the estimated posterior distributions of the posterior model probabilities based on a single Markov chain with 11,000 iterations (including 1,000 burn-in samples). The vertical black lines show the correct reference values, approximated by eight independent chains with one million samples each. As expected, the assumption that the $z^{(t)}$ are sampled independently results in overconfidence in the point estimates of the CC95 approach: for all models, the corresponding posterior distributions miss the correct value and fail to reflect this estimation uncertainty. In contrast, the proposed Markov approach results in posterior distributions that cover the target values with high probability. Moreover, the new estimation method reveals that the KM98 implementation has a higher precision than the (intentionally not optimized) CC95 approach.

                 Kuo and Mallick (1998)                  Carlin and Chib (1995)
Model     mean (%)  SD_iid  SD_Markov  SD_emp      mean (%)  SD_iid  SD_Markov  SD_emp
1             0.50    0.07       0.16    0.21          0.60    0.07       0.41    0.37
A            49.38    0.50       1.22    1.15         49.02    0.50       6.93    6.02
B             1.18    0.10       0.27    0.46          1.29    0.10       0.75    0.62
A+B          43.74    0.50       1.10    1.09         43.26    0.49       7.18    6.67
AB            5.19    0.22       0.34    0.34          5.84    0.20       3.65    3.39

Note: Posterior model probability estimates (mean) are shown in percent; means and empirical SDs (SD_emp) were computed across 100 replications. As a measure of the estimated precision, means of the posterior SD are shown (SD_iid assumes independent sampling; SD_Markov assumes a Markov chain model).

Table 2: Estimated posterior model probabilities in percent.

To test the validity of the proposed method more rigorously, we replicated the previous analysis 500 times, so that the estimated precision can be compared against the actual sampling variability of the estimated model probabilities. For both transdimensional MCMC methods, Table 2 shows the mean estimated model probabilities in percent. Across replications, the point estimates (posterior means) from the Markov and the i.i.d. approach were very similar, with negligible median absolute differences for the KM98 and CC95 implementations. To judge whether the estimated precision (i.e., the mean posterior standard deviations SD_iid and SD_Markov) is valid, Table 2 also shows the empirical SD of the estimates across replications (SD_emp). The results show that the assumption of independent samples leads to an overconfident assessment of the precision of the estimated model probabilities (SD_iid < SD_emp), which is especially severe for the less efficient CC95 implementation. In contrast, the Markov approach provided good estimates of the actual estimation uncertainty (SD_Markov approximately equal to SD_emp). Moreover, for the MCMC method by Carlin and Chib (1995), the larger SDs indicate a lower efficiency compared to the unconditional-priors approach by Kuo and Mallick (1998). This theoretically expected result is likely due to the suboptimal choice of pseudo-priors. Note, however, that this difference in efficiency may be overlooked when merely computing relative proportions based on the sampled indicator variable (i.e., when implicitly assuming independent samples).

The higher efficiency of the KM98 approach becomes even clearer when considering the estimated effective sample size, whose median was substantially larger for the KM98 approach than for the CC95 method. As discussed above, commonly used estimators of the effective sample size for continuous parameters (e.g., Plummer et al., 2006) should not be applied to the discrete model-indicator variable because they depend on the arbitrary numerical labels assigned to the models. To illustrate this, Figure 4 shows the effective sample size for all 120 permutations of the model indices of a fixed sequence $\boldsymbol z$, computed via the spectral decomposition available in the R package coda (Plummer et al., 2006). The estimate varies considerably depending on the arbitrary labeling of the models. In contrast, the proposed Markov approach yields a well-defined, invariant estimate by explicitly accounting for the discreteness of $z$.
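To make the problem concrete, the following sketch relabels a simulated, autocorrelated index sequence in all 120 possible ways and recomputes coda's spectral-density-based effective sample size; the combinat package is assumed only for enumerating the permutations, and the sequence is a toy stand-in for actual sampler output.

```r
library(coda)
library(combinat)   # permn() enumerates the 120 relabelings

set.seed(1)
pi_true <- c(0.49, 0.44, 0.05, 0.01, 0.01)
z <- numeric(10000)
z[1] <- 1
for (t in 2:10000)                    # sticky chain as a stand-in for MCMC output
  z[t] <- if (runif(1) < 0.7) z[t - 1] else sample.int(5, 1, prob = pi_true)

ess <- sapply(permn(1:5), function(p) effectiveSize(as.mcmc(p[z])))
range(ess)                            # varies although the sequence is unchanged
```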

Figure 4: Effective sample size as estimated by the spectral density at zero (Plummer et al., 2006) for all permutations of the model indicator labels of the MCMC output (based on 10,000 samples of the method by Kuo and Mallick, 1998).

Finally, we show that the posterior samples $\boldsymbol\pi^{(r)}$ of the Markov model can directly be used to assess the uncertainty of Bayes factor estimates. For instance, a substantive application might test whether to include the interaction of condition (A) and treatment (B) in the logistic regression model. Given the output of a single MCMC run with 10,000 samples, Figure 5 shows the resulting posterior distribution of the Bayes factor in favor of the absence of an interaction (i.e., Model A+B vs. Model AB). Similar to the posterior model probabilities, the i.i.d. approach results in overconfidence: most of the probability mass excludes the correct value of 8.50. In contrast, the Markov approach corrects for dependencies in the samples and covers the correct value. The same pattern emerged across the 500 replications; that is, the mean estimated SD of the Bayes factor approximated the corresponding empirical SD of the Bayes factor estimates (KM98: 0.56 vs. 0.56; CC95: 114.8 vs. 320.2). When using transdimensional MCMC, Bayes factors cannot be expected to be estimated reliably if some of the models involved are never or only very infrequently sampled (e.g., Model 1 in Table 2). For instance, the corresponding Bayes factor was estimated very imprecisely even with the KM98 approach (mean SD = 12.5; empirical SD = 21.1). To obtain more precise Bayes factor estimates in the presence of infrequently sampled models, it is recommended to rerun the transdimensional MCMC chain including only the two relevant models of interest (Lodewyckx et al., 2011).

Figure 5: Posterior distribution for the Bayes factor in favor of Model A+B vs. AB. The vertical black line shows the target value (CC95 = Carlin and Chib, 1995; KM98 = Kuo and Mallick, 1998).

5 Log-Linear Models for a Contingency Table

The application of the proposed method is also feasible in realistic scenarios with hundreds of sampled models. To illustrate this, we reanalyzed the complete $2^6$ contingency table of Edwards and Havránek (1985), which includes six binary risk factors for coronary heart disease (smoking, strenuous mental work, strenuous physical work, systolic blood pressure, ratio of alpha and beta lipoproteins, and family anamnesis of coronary heart disease). We are interested in finding the most parsimonious log-linear model that accounts for the frequency $y_i$ of cell $i$ ($i = 1, \dots, 64$) by assuming a Poisson distribution with mean $\mu_i$ and

$$\log \mu_i = \beta_0 + \boldsymbol{x}_i^\top \boldsymbol\beta, \qquad (8)$$

where $\beta_0$ is the intercept, $\boldsymbol\beta$ the vector of regression parameters, and $\boldsymbol{x}_i^\top$ the (transposed) design vector that selects the elements of $\boldsymbol\beta$ included for modeling cell $i$. Here, we consider the class of hierarchical log-linear models, which allow the inclusion of an interaction only if the corresponding marginal effects and lower-order interaction terms are included in the model as well (e.g., Overstall and King, 2014b).

To select among all 7.8 million possible hierarchical log-linear models (Dellaportas and Forster, 1999), we use the reversible jump algorithm proposed by Forster et al. (2012), which is implemented in the R package conting (Overstall and King, 2014a). Assuming a unit-information prior (Ntzoufras et al., 2003), we sampled 100,000 iterations, discarded 10,000 as burn-in, and applied the proposed Markov chain method by drawing posterior samples of the model probabilities. To assess whether the estimated uncertainty accurately quantifies sampling variability, we ran 100 replications initialized with randomly chosen models.

Across replications, 4,484 distinct models were sampled (on average, 567.6 per replication). Table 3 shows the 10 models with the highest posterior probabilities. The relatively large posterior standard deviations of the estimated posterior model probabilities indicate that the samples are autocorrelated to a substantial degree, despite the large number of iterations. This is also reflected by the effective sample size, which was estimated to be only a small fraction of the 90,000 iterations retained after burn-in.

Table 3 also shows means and standard deviations of the sampled ranks for the 10 models with the highest posterior probabilities, indicating that estimation uncertainty (i.e., the posterior SD of the rank) increased for models with smaller posterior probabilities. Moreover, the table reports the proportion of posterior samples for which the sampled rank was identical to the overall rank across all replications, and the proportion for which it was smaller than or equal to 10. According to these proportions, exact ranks were estimated precisely only for the two best models, whereas the set of the 10 models with the highest posterior probabilities was relatively stable across posterior samples (with the exception of model 10). Importantly, the Markov approach provided mean estimated proportions that matched the corresponding empirical proportions across replications.

Note that these results regarding estimation uncertainty are in line with our expectations: models with small posterior probabilities are sampled infrequently, which in turn results in larger estimation uncertainty for their probabilities. To quantify this variability, the proposed Markov chain approach provides an estimate of the achieved precision, which can be used to assess the quality of the results and to define an appropriate stopping rule for MCMC sampling.

             Posterior model probabilities                 Rank                      Proportions (Markov / empirical)
#  Model     mean (%)  SD_iid  SD_Markov  SD_emp       mean  SD_Markov  SD_emp       same rank         rank ≤ 10
1 CE 18.88 0.13 1.02 1.22 1.00 0.07 0.00 1.00 1.00 1.00 1.00
2 BE 11.76 0.11 0.83 1.07 2.01 0.14 0.10 .99 .99 1.00 1.00
3 BE + CE 7.12 0.09 0.43 1.14 3.43 0.49 0.67 .76 .64 1.00 1.00
4 BF + CE 6.65 0.08 0.52 1.25 3.90 0.50 1.08 .74 .65 .99 1.00
5 BE + BF 4.11 0.07 0.40 0.79 5.42 0.37 1.69 .92 .92 1.00 .96
6 CE + EF 2.86 0.06 0.34 0.51 6.63 0.59 1.52 .68 .66 1.00 .95
7 BE + BF + CE 2.55 0.05 0.24 0.62 8.58 0.65 7.37 .70 .65 1.00 .93
8 CE + ADE 1.86 0.05 0.25 0.30 8.71 0.83 1.54 .57 .55 .95 .96
9 BE + EF 1.73 0.04 0.25 0.37 9.51 0.94 3.37 .53 .52 .93 .93
10 BE + ADE 1.14 0.04 0.18 0.22 12.26 1.40 3.19 .39 .24 .55 .29
Note: Posterior model probabilities are shown in percent; means and empirical SDs were computed across 100 replications. All models include the six main effects (A: smoking, B: strenuous mental work, C: strenuous physical work, D: systolic blood pressure, E: ratio of alpha and beta lipoproteins, F: family anamnesis of coronary heart disease) and the first-order interactions AC, AD, AE, BC, and DE.

Table 3: Models with the highest posterior probabilities for the $2^6$ contingency table.

6 Conclusion

We proposed a novel approach for estimating the precision of transdimensional MCMC output. Essentially, a first-order Markov model is fitted to the sampled model-indicator variable to quantify the estimation uncertainty of the corresponding stationary distribution. We showed that the method accounts for autocorrelation in a given sequence and provides a good assessment of estimation uncertainty. Importantly, the method does not require output from multiple independent MCMC chains and thus reduces the computational costs of adaptation and burn-in. Beyond transdimensional MCMC, the method provides an estimate of the precision and effective sample size for discrete parameters in MCMC samplers in general. Thereby, researchers can easily decide whether the obtained precision is sufficiently high for the substantive application of interest.

Acknowledgments

Daniel W. Heck was supported by the grant Statistical Modeling in Psychology (GRK 2277) of the German Research Foundation (DFG).

References

  • Anderson and Goodman (1957) Anderson, T. W. and Goodman, L. A. (1957). Statistical inference about Markov chains. The Annals of Mathematical Statistics, 28:89–110.
  • Arnold et al. (2010) Arnold, R., Hayakawa, Y., and Yip, P. (2010). Capture–recapture estimation using finite mixtures of arbitrary dimension. Biometrics, 66:644–655.
  • Barker and Link (2013) Barker, R. J. and Link, W. A. (2013). Bayesian multimodel inference by RJMCMC: A Gibbs sampling approach. The American Statistician, 67:150–156.
  • Brooks et al. (2003a) Brooks, S., Giudici, P., and Philippe, A. (2003a). Nonparametric convergence assessment for MCMC model selection. Journal of Computational and Graphical Statistics, 12:1–22.
  • Brooks and Giudici (2000) Brooks, S. P. and Giudici, P. (2000). Markov chain Monte Carlo convergence assessment via two-way analysis of variance. Journal of Computational and Graphical Statistics, 9:266–285.
  • Brooks et al. (2003b) Brooks, S. P., Giudici, P., and Roberts, G. O. (2003b). Efficient construction of reversible jump Markov chain Monte Carlo proposal distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65:3–39.
  • Burke and Rosenblatt (1958) Burke, C. J. and Rosenblatt, M. (1958). A Markovian Function of a Markov Chain. The Annals of Mathematical Statistics, 29:1112–1122.
  • Carlin and Chib (1995) Carlin, B. P. and Chib, S. (1995). Bayesian model choice via Markov chain Monte Carlo methods. Journal of the Royal Statistical Society. Series B (Methodological), 57:473–484.
  • Castelloe and Zimmerman (2002) Castelloe, J. M. and Zimmerman, D. L. (2002). Convergence assessment for reversible jump MCMC samplers.
  • Dellaportas and Forster (1999) Dellaportas, P. and Forster, J. J. (1999). Markov chain Monte Carlo model determination for hierarchical and graphical log-linear models. Biometrika, 86:615–633.
  • Dellaportas et al. (2000) Dellaportas, P., Forster, J. J., and Ntzoufras, I. (2000). Bayesian variable selection using the Gibbs sampler. In Dey, D. K., Ghosh, S. K., and Mallick, B. K., editors, Generalized Linear Models: A Bayesian Perspective, pages 273–286. Marcel Dekker, Inc., New York.
  • Dellaportas et al. (2002) Dellaportas, P., Forster, J. J., and Ntzoufras, I. (2002). On Bayesian model and variable selection using MCMC. Statistics and Computing, 12:27–36.
  • Doss et al. (2014) Doss, C. R., Flegal, J. M., Jones, G. L., and Neath, R. C. (2014). Markov chain Monte Carlo estimation of quantiles. Electronic Journal of Statistics, 8:2448–2478.
  • Eddelbuettel and Sanderson (2014) Eddelbuettel, D. and Sanderson, C. (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis, 71:1054–1063.
  • Edwards and Havránek (1985) Edwards, D. and Havránek, T. (1985). A fast procedure for model search in multidimensional contingency tables. Biometrika, 72:339–351.
  • Flegal and Gong (2015) Flegal, J. M. and Gong, L. (2015). Relative fixed-width stopping rules for Markov chain Monte Carlo simulations. Statistica Sinica, 25:655–675.
  • Forster et al. (2012) Forster, J. J., Gill, R. C., and Overstall, A. M. (2012). Reversible jump methods for generalised linear models and generalised linear mixed models. Statistics and Computing, 22:107–120.
  • Frühwirth-Schnatter (2001) Frühwirth-Schnatter, S. (2001). Markov chain Monte Carlo estimation of classical and dynamic switching and mixture models. Journal of the American Statistical Association, 96:194–209.
  • Gong and Flegal (2016) Gong, L. and Flegal, J. M. (2016). A Practical Sequential Stopping Rule for High-Dimensional Markov Chain Monte Carlo. Journal of Computational and Graphical Statistics, 25:684–700.
  • Green (1995) Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711–732.
  • Healy (1988) Healy, M. J. R. (1988). GLIM: An Introduction. Clarendon Press, UK.
  • Heck et al. (2017) Heck, D. W., Gronau, Q. F., Overstall, A. M., and Wagenmakers, E.-J. (2017). MCMCprecision: Precision of Discrete Variables in Transdimensional MCMC.
  • Heidelberger and Welch (1981) Heidelberger, P. and Welch, P. D. (1981). A spectral method for confidence interval generation and run length control in simulations. Communications of the ACM, 24:233–245.
  • Jeffreys (1961) Jeffreys, H. (1961). Theory of Probability. Oxford University Press, New York.
  • Jones et al. (2006) Jones, G. L., Haran, M., Caffo, B. S., and Neath, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association, 101:1537–1547.
  • Karnesis (2014) Karnesis, N. (2014). Bayesian model selection for LISA pathfinder. Physical Review D, 89.
  • Kass and Raftery (1995) Kass, R. E. and Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90:773–795.
  • Kuo and Mallick (1998) Kuo, L. and Mallick, B. (1998). Variable selection for regression models. Sankhyā: The Indian Journal of Statistics, Series B, 60:65–81.
  • Lodewyckx et al. (2011) Lodewyckx, T., Kim, W., Lee, M. D., Tuerlinckx, F., Kuppens, P., and Wagenmakers, E.-J. (2011). A tutorial on Bayes factor estimation with the product space method. Journal of Mathematical Psychology, 55:331–347.
  • Lopes and West (2004) Lopes, H. F. and West, M. (2004). Bayesian model assessment in factor analysis. Statistica Sinica, 14:41–67.
  • Minka (2000) Minka, T. (2000). Estimating a Dirichlet distribution. Technical report.
  • Ntzoufras (2002) Ntzoufras, I. (2002). Gibbs Variable Selection using BUGS. Journal of Statistical Software, 7:1–19.
  • Ntzoufras et al. (2003) Ntzoufras, I., Dellaportas, P., and Forster, J. J. (2003). Bayesian variable and link determination for generalised linear models. Journal of Statistical Planning and Inference, 111:165–180.
  • Opgen-Rhein et al. (2005) Opgen-Rhein, R., Fahrmeir, L., and Strimmer, K. (2005). Inference of demographic history from genealogical trees using reversible jump Markov chain Monte Carlo. BMC Evolutionary Biology, 5:6.
  • Overstall and King (2014a) Overstall, A. and King, R. (2014a). Conting: An R package for Bayesian analysis of complete and incomplete contingency tables. Journal of Statistical Software, 58:1–27.
  • Overstall and King (2014b) Overstall, A. M. and King, R. (2014b). A default prior distribution for contingency tables with dependent factor levels. Statistical Methodology, 16:90–99.
  • Plummer (2003) Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing, volume 124, page 125. Vienna, Austria.
  • Plummer et al. (2006) Plummer, M., Best, N., Cowles, K., and Vines, K. (2006). CODA: Convergence diagnosis and output analysis for MCMC. R News, 6:7–11.
  • Sanderson and Curtin (2016) Sanderson, C. and Curtin, R. (2016). Armadillo: A template-based C++ library for linear algebra. Journal of Open Source Software, 1:26.
  • Scott and Berger (2010) Scott, J. G. and Berger, J. O. (2010). Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem. The Annals of Statistics, 38:2587–2619.
  • Sisson (2005) Sisson, S. A. (2005). Transdimensional Markov Chains. Journal of the American Statistical Association, 100:1077–1089.
  • Sisson and Fan (2007) Sisson, S. A. and Fan, Y. (2007). A distance-based diagnostic for trans-dimensional Markov chains. Statistics and Computing, 17:357–367.
  • Stephens (2000) Stephens, M. (2000). Bayesian analysis of mixture models with an unknown number of components- an alternative to reversible jump methods. The Annals of Statistics, 28:40–74.