Wasserstein-based methods for convergence complexity analysis of MCMC with application to Albert and Chib's algorithm
Abstract
Over the last 25 years, techniques based on drift and minorization (d&m) have been mainstays in the convergence analysis of MCMC algorithms. However, results presented herein suggest that d&m may be less useful in the emerging area of convergence complexity analysis, which is the study of how Monte Carlo Markov chain convergence behavior scales with the sample size, n, and/or the number of covariates, p. The problem appears to be that minorization becomes a serious liability as dimension increases. Alternative methods of constructing convergence rate bounds (with respect to total variation distance) that do not require minorization are investigated. These methods incorporate both old and new theory on Wasserstein distance and random mappings, and produce bounds that are apparently more robust to increasing dimension than those based on d&m. Indeed, the Wasserstein-based bounds are used to develop strong convergence complexity results for Albert and Chib's (1993) algorithm in the challenging asymptotic regime where both n and p diverge. We note that Qin and Hobert's (2019+) d&m-based analysis of the same algorithm led to useful results in the cases where n → ∞ with p fixed, and p → ∞ with n fixed, but these authors provided no results for the case where n and p are both large.
Qian Qin and James P. Hobert 
Department of Statistics 
University of Florida 
2018
Key words and phrases: Coupling, drift condition, geometric ergodicity, high-dimensional inference, minorization condition, random mapping.
1 Introduction
Markov chain Monte Carlo (MCMC) has become an indispensable tool in Bayesian statistics, and it is now well-known that the performance of an MCMC algorithm depends heavily on the convergence properties of the underlying Markov chain. In the era of big data, it is no longer enough to understand the convergence behavior of a given algorithm for each fixed data set. Indeed, there is now growing interest in the so-called convergence complexity of MCMC algorithms, which describes how the convergence rate of the Markov chain scales with the sample size, n, and/or the number of covariates, p, in the associated data set (see, e.g., Rajaratnam and Sparks, 2015; Durmus and Moulines, 2016; Yang et al., 2016; Yang and Rosenthal, 2017; Johndrow et al., 2018; Qin and Hobert, 2019+). While techniques based on drift and minorization (d&m) conditions have been mainstays in the analysis of MCMC algorithms for decades, we show that these techniques are likely to fail in convergence complexity analysis. We consider alternative methods based on Wasserstein distance and coupling, and we demonstrate the potential advantages of such methods over those based on d&m through a convergence complexity analysis of Albert and Chib's (1993) algorithm for Bayesian probit regression.
Let Π denote an intractable posterior distribution on some state space X, and consider a Monte Carlo Markov chain (with stationary probability measure Π) that is driven by the Markov transition function (Mtf) P. For a nonnegative integer k and x ∈ X, let P^k be the k-step Mtf, and let P^k(x, ·) be the probability measure defined by A ↦ P^k(x, A). (See Section 2 for formal definitions.) Convergence of this chain can be assessed by the rate at which D(P^k(x, ·), Π) decreases as k grows, where D is some divergence function between probability measures. (Traditionally, the most commonly used D is the total variation distance.) If there exist ρ ∈ [0, 1) and M : X → [0, ∞) such that

D(P^k(x, ·), Π) ≤ M(x) ρ^k,   (1)

then we say that the chain converges geometrically in D. The rate of convergence, ρ_*, is defined to be the infimum of the ρ's that satisfy (1) for some M(·). Given a sequence of (growing) data sets, and corresponding sequences of posterior distributions and Monte Carlo Markov chains, we are interested in the asymptotic behavior of ρ_*. Our program entails constructing an upper bound on ρ_*, which we call ρ̂, and then considering the asymptotic behavior of this bound. Of course, the bound should be sufficiently sharp that it correctly reflects the asymptotic dynamics of ρ_*.
When D is the total variation distance, ρ̂ is often constructed via drift and minorization (d&m) arguments (see, e.g., Rosenthal, 1995). It is well-known that the resulting bounds are often overly conservative, especially in high dimensional settings (see, e.g., Rajaratnam and Sparks, 2015). While there have been a few successful d&m-based convergence complexity analyses of MCMC algorithms (Yang and Rosenthal, 2017; Qin and Hobert, 2019+), our results strongly suggest that techniques based on d&m are unlikely to be at the forefront of convergence complexity analysis going forward. Indeed, we consider a family of simple autoregressive processes, and show that, in this context, the usual d&m-based methods cannot possibly produce sharp bounds on ρ_* as dimension increases. The problems exhibited by these methods in this example are so fundamental that they cannot be avoided no matter how hard one tries to find good d&m conditions. The culprit appears to be the minorization condition, which becomes a serious liability as dimension increases.
Recent results suggest that convergence complexity analysis becomes more tractable when total variation distance is replaced with an appropriate Wasserstein distance (see, e.g., Durmus and Moulines, 2015; Mangoubi and Smith, 2017; Trillos et al., 2017; Hairer et al., 2011). This could be (at least partly) due to the fact that minorization conditions are typically not used to bound Wasserstein distance. In this paper, we explore the possibility of performing convergence complexity analysis of MCMC algorithms under total variation distance by using a technique developed in Madras and Sezer (2010) to convert Wasserstein bounds into total variation bounds. While there is a substantial literature on bounding the Wasserstein distance to stationarity for Markov chains (see, e.g., Steinsaltz, 1999; Ollivier, 2009; Hairer et al., 2011; Durmus and Moulines, 2015; Butkovsky, 2014), the available methods are not directly applicable to typical Monte Carlo Markov chains that arise in Bayesian statistics. We present several new results that extend the range of applicability of the existing methods to such Markov chains. In particular, we provide a method for constructing geometric convergence bounds in a (generalized) Wasserstein distance through a drift condition and associated contraction conditions. This extends results of Durmus and Moulines (2015) and Butkovsky (2014), which deal with Wasserstein distances defined via bounded metrics, to the unbounded case, at the cost of imposing a stronger contraction condition. We also establish a new result that facilitates the application of techniques developed in Steinsaltz (1999) and Ollivier (2009).
Our application of the Wasserstein-to-TV method to Albert and Chib's (1993) algorithm shows that this approach can produce bounds that are more robust to increasing dimension than those based on d&m. Indeed, Qin and Hobert (2019+) (hereafter, Q&H) recently performed a convergence complexity analysis of the Albert and Chib (A&C) algorithm using d&m methods. Roughly, the two main results (which concern convergence in total variation) are as follows: When p is fixed, under mild conditions on the data structure, the convergence rate is bounded above by some ρ̄ < 1 for all large n. When n is fixed, the convergence rate is bounded above by some ρ̄ < 1 for all large p, provided that the prior distribution provides sufficiently strong shrinkage. Unfortunately, one can show that the first bound converges to 1 as p → ∞, and that the second converges to 1 as n → ∞. Thus, Q&H were not able to provide useful asymptotic results on the convergence rate when both n and p are large. Here we are able to utilize the Wasserstein-to-TV method to get results in this more complex asymptotic regime. Our results for the A&C chain can loosely be described as follows. When p is fixed, under certain sparsity assumptions on the true regression coefficients (and some regularity conditions), the rate is bounded above by some ρ̄ < 1 independent of n. When the prior provides enough shrinkage, the rate is bounded above by some ρ̄ < 1 independent of n and p. When rows of the design matrix are duplicated, under sparsity assumptions on the true regression coefficients (and some regularity conditions), the rate is bounded above by some ρ̄ < 1 with high probability as n and p grow simultaneously at some appropriate joint rate. Along the way, we also establish some nonasymptotic results, one of them being that the A&C chain converges geometrically in the Wasserstein distance induced by the Euclidean norm for any finite data set.
Our results for the A&C chain are among the first of their kind, providing strong asymptotic statements on convergence rates for a practically relevant Monte Carlo Markov chain in the case where both n and p are large. While we consider only a single serious example in this paper, we believe that many of our ideas and techniques are potentially applicable to other similar high dimensional problems.
The remainder of this article is structured as follows. In Section 2, we illustrate how d&m-based methods fail in typical high dimensional settings by studying a simple autoregressive process. Alternative methods based on coupling, random mappings and Wasserstein bounds are the topic of Section 3. Our analysis of the A&C chain is presented in Section 4. Some technical details are relegated to an Appendix.
2 Minorization Becomes a Liability as Dimension Increases
Let (X, B) be a countably generated measurable space. We consider a discrete-time, time-homogeneous Markov chain on X with Mtf P. For an integer k ≥ 1, let P^k be the corresponding k-step Mtf, so that for any x ∈ X and A ∈ B,

P^k(x, A) = ∫_X P^{k−1}(y, A) P(x, dy),

where P⁰(x, ·) is the point mass at x. Let P^k(x, ·) denote the probability measure defined by A ↦ P^k(x, A). We assume that the Markov chain is Harris ergodic, i.e., irreducible, aperiodic, and positive Harris recurrent. Thus, the chain has a unique stationary distribution Π to which it converges. The difference between P^k(x, ·) and Π is most commonly measured by the total variation distance, d_TV(P^k(x, ·), Π), which is the supremum of their discrepancy over measurable sets. The associated convergence rate, denoted by ρ_TV, is defined as the infimum of the ρ's that satisfy (1) when D is the TV distance. A standard technique for constructing upper bounds on ρ_TV is based on d&m conditions. One of the earliest examples of this method is due to Rosenthal (1995), whose result is now stated.
Proposition 1.
(Rosenthal, 1995) Suppose that

(i) there exist λ ∈ (0, 1), b < ∞, and a function V : X → [0, ∞) such that

∫_X V(y) P(x, dy) ≤ λV(x) + b for all x ∈ X;

(ii) there exist ε > 0, T > 2b/(1 − λ), and a probability measure ν on B such that for every x with V(x) ≤ T, P(x, ·) ≥ εν(·).

Then for all x ∈ X and k ≥ 1,

d_TV(P^k(x, ·), Π) ≤ (1 − ε)^{rk} + (α^{−(1−r)} A^r)^k (1 + b/(1 − λ) + V(x)),

where r ∈ (0, 1) is arbitrary, α = (1 + T)/(1 + 2b + λT), and A = 1 + 2(λT + b).
Remark 2.
The function V is called a drift function, and the set C on which the minorization inequality is imposed is called a small set. The first and second conditions of Proposition 1 are referred to as the drift and minorization conditions, respectively.
Proposition 1 gives the following upper bound on ρ_TV:

inf_{r ∈ (0,1)} max{(1 − ε)^r, α^{−(1−r)} A^r}.

Note that α > 1, so there always exists an r ∈ (0, 1) such that α^{−(1−r)} A^r < 1. In the remainder of this section, we argue that this method is likely to fail when the Markov chain is high dimensional. The problem appears to be the nonexistence of a minorization condition whose ε is not extremely small.
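The degradation of this bound as ε shrinks can be explored numerically. The sketch below assumes Rosenthal's constants in the common form α = (1 + T)/(1 + 2b + λT) and A = 1 + 2(λT + b), and minimizes max{(1 − ε)^r, α^{−(1−r)}A^r} over r on a grid; the function name and the parameter values are illustrative.

```python
def dm_rate_bound(eps, lam, b, T, grid=10000):
    """Numerically minimize max{(1 - eps)^r, alpha^{-(1-r)} * A^r} over
    r in (0, 1); the result is a d&m-based upper bound on the TV rate."""
    alpha = (1.0 + T) / (1.0 + 2.0 * b + lam * T)
    A = 1.0 + 2.0 * (lam * T + b)
    best = 1.0
    for i in range(1, grid):
        r = i / grid
        cand = max((1.0 - eps) ** r, alpha ** (-(1.0 - r)) * A ** r)
        best = min(best, cand)
    return best

# The bound degrades toward 1 as the minorization constant eps shrinks,
# even though the drift parameters are held completely fixed.
rates = [dm_rate_bound(eps, lam=0.5, b=1.0, T=10.0)
         for eps in (0.5, 0.1, 1e-3, 1e-6)]
```

Even a moderate drift condition cannot rescue the bound once ε is tiny, which is exactly the regime forced by high dimension in the example below.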
Lemma 3.
Let the drift function V and the associated small set C be as in Proposition 1. Then Π(C) > 1/2.
Proof.
Let λ, b, T, V, and C = {x ∈ X : V(x) ≤ T} be as in the said proposition. A cutoff argument (see, e.g., Hairer, 2006, Proposition 4.24) shows that the drift inequality implies ∫_X V dΠ ≤ b/(1 − λ). (This inequality is trivial to verify if it is known that ∫_X V dΠ < ∞.) On the other hand, since T > 2b/(1 − λ), Markov's inequality gives

Π(X ∖ C) ≤ (1/T) ∫_X V dΠ ≤ b/(T(1 − λ)) < 1/2.
The result is then immediate. ∎
As dimension increases, Π tends to "spread out," and the requirement that Π(C) > 1/2 forces C to be a large subset of X. As a result, typically, the minorization condition can only hold with an ε that is very close to 0, and this in turn leads to an upper bound on ρ_TV that is very close to 1. We now illustrate this phenomenon using a family of simple autoregressive processes.
Fix ρ ∈ (0, 1), and for x ∈ R^d, let P(x, ·) be the probability measure associated with the N_d(ρx, (1 − ρ²)I_d) distribution. This Mtf defines a simple, well-behaved autoregressive process. It is Harris ergodic, its invariant distribution is N_d(0, I_d), and it is known that the convergence rate is ρ_TV = ρ for all d. Now consider using Proposition 1 to construct an upper bound on ρ_TV. In particular, for each d, we choose a drift function and an associated small set, which together yield an upper bound, ρ̂_d. The following result shows that the sequence of bounds constructed using Proposition 1 is necessarily quite badly behaved.
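For this toy chain everything is explicit. The following sketch (assuming the parameterization P(x, ·) = N_d(ρx, (1 − ρ²)I_d) used above) iterates the one-step mean/variance update and checks it against the closed-form k-step law P^k(x, ·) = N_d(ρ^k x, (1 − ρ^{2k})I_d), which tends to the stationary N_d(0, I_d).

```python
# One step maps N(m, v I) to N(rho * m, (rho^2 * v + 1 - rho^2) I)
# coordinatewise; iterating from a point mass (v = 0) gives the k-step law.
def k_step_law(rho, k):
    """Return (mean_coefficient, variance) per coordinate of P^k(x, .):
    X_k | X_0 = x  ~  N(mean_coefficient * x, variance)."""
    m_coef, v = 1.0, 0.0  # point mass at x
    for _ in range(k):
        m_coef *= rho
        v = rho ** 2 * v + (1.0 - rho ** 2)
    return m_coef, v

rho = 0.9
m10, v10 = k_step_law(rho, 10)
```

The mean coefficient decays like ρ^k and the variance fills in as 1 − ρ^{2k}, so the chain's law approaches N_d(0, I_d) geometrically at rate ρ in every dimension.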
Proposition 4.
Let P(x, ·) be the probability measure associated with the N_d(ρx, (1 − ρ²)I_d) distribution, where ρ ∈ (0, 1) is fixed. For any sequence of d&m conditions, ρ̂_d converges to 1 at an exponential rate as d → ∞.
Before proving Proposition 4, we state a general result concerning the minorization condition. The proof is left to the reader.
Lemma 5.
Suppose that P(x, ·) admits a density function z ↦ p(x, z) with respect to some reference measure μ. If the minorization condition holds with small set C and constant ε, then for all x, y ∈ C,

ε ≤ ∫_X min{p(x, z), p(y, z)} μ(dz).
Proof of Proposition 4.
Let V, ε, and C be as in Proposition 1 (all possibly depending on d). It is easy to show that, in order for Π(C) > 1/2, the diameter of C must be at least 2m_d, where m_d is the median of a χ_d distribution. Hence there exist x, y ∈ C with ‖x − y‖ ≥ m_d, and letting z ↦ p_d(x, z) be the density associated with P(x, ·), a standard Gaussian overlap calculation gives

∫_{R^d} min{p_d(x, z), p_d(y, z)} dz ≤ 2Φ(−ρ m_d / (2√(1 − ρ²))),

where Φ is the standard normal distribution function.
Now, m_d is of order √d, and 2Φ(−ρ m_d / (2√(1 − ρ²))) goes to 0 exponentially fast as d → ∞. Hence, it follows from Lemma 5 that ε → 0 at an exponential rate as d → ∞, which in turn implies (see, e.g., Q&H, Proposition 2) that ρ̂_d converges to 1 at an exponential rate. ∎
We conclude that, no matter how hard we work to find good d&m conditions, Proposition 1 cannot possibly yield a reasonable asymptotic bound on the convergence rate for our simple autoregressive process as d → ∞. Moreover, it seems unlikely that the situation would be any better for more complex Markov chains, like those used in MCMC. The reader may wonder whether the problems described above extend to other d&m-based bounds. The answer is "yes." Indeed, Proposition 4 continues to hold if we replace ρ̂_d with the corresponding bound from Hairer and Mattingly (2011) (and the proof is essentially the same). Also, in Section A of the Appendix, we show that the bounds developed by Roberts and Tweedie (1999) and Baxendale (2005) behave similarly in our toy example. Note that the d&m conditions used in Baxendale (2005) do not impose a constant positive lower bound on Π(C), as in Lemma 3, yet the resultant convergence bound still suffers from high-dimensionality. Intuitively, a good d&m-based bound requires a minorization inequality with a small set C that is large enough to be visited frequently by the chain, but simultaneously small enough that 1 − ε is not too close to 1. However, at least for our toy example, ε is highly susceptible to the growing size of C. As a result, in high dimensional settings where Π tends to spread out, the "Goldilocks" small set doesn't exist.
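The exponential decay of the minorization constant can be seen numerically. The sketch below bounds ε by the Gaussian overlap of two transition densities whose centers are separated in proportion to √d (a stand-in for the χ_d-median small-set diameter forced by Lemma 3); all numeric choices are illustrative.

```python
import math

def std_normal_cdf(t):
    # Phi(t) computed via the complementary error function
    return 0.5 * math.erfc(-t / math.sqrt(2.0))

def eps_upper_bound(rho, dim):
    """Upper bound on the minorization constant for the chain
    P(x, .) = N_d(rho x, (1 - rho^2) I_d): the overlap of two Gaussian
    transition densities whose small-set base points are 2 * sqrt(dim)
    apart (roughly the chi_d median, i.e. the minimal diameter)."""
    sigma = math.sqrt(1.0 - rho ** 2)
    separation = 2.0 * math.sqrt(dim)  # stand-in for the small-set diameter
    # overlap of N(rho x, sigma^2 I) and N(rho y, sigma^2 I) at this distance
    return 2.0 * std_normal_cdf(-rho * separation / (2.0 * sigma))

bounds = [eps_upper_bound(0.5, d) for d in (1, 10, 100, 1000)]
```

Already at d = 100 the admissible ε is below 10⁻⁵, so by the earlier discussion the resulting d&m rate bound is within 10⁻⁵-ish of 1.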
Finally, it should be mentioned that, if one is able to establish d&m for the multi-step Mtf P^m rather than P itself, where m is a sufficiently large integer, then it's possible to avoid the problems described above. However, in practical examples, it is rarely possible to establish sharp minorization inequalities for multi-step Mtfs.
Q&H did make effective use of d&m in a convergence complexity analysis of A&C's Markov chain in two asymptotic regimes: n → ∞ with p fixed, and p → ∞ with n fixed. This fact seems at odds with the arguments given above. However, Q&H were able to circumvent the problems associated with high-dimensionality by showing that the Gibbs chain underlying the A&C algorithm, which has support R^p × R^n, is equivalent, in terms of convergence rate, to two other chains, one on R^p, and the second on R^n. In each of the two asymptotic regimes considered, one of the two dimensions (n or p) is fixed, which allowed the authors to (effectively) avoid increasing dimension. On the other hand, not surprisingly, these authors were unable to use d&m to establish any positive results in the more interesting asymptotic regime in which both n and p diverge.
In the next section, we provide a different type of general bound on ρ_TV that does not require a minorization condition. This bound, which is developed using Wasserstein distance and coupling, is apparently more robust to increasing dimension than d&m-based bounds. Indeed, we employ the new bound in Section 4 to show that the rate of convergence of A&C's Markov chain is bounded below 1 in three different asymptotic regimes, including settings where n and p both diverge.
3 Total Variation Bounds via Coupling-based Wasserstein Bounds
Let d : X × X → [0, ∞) be a measurable metric, i.e., distance function. For two probability measures on (X, B), μ and ν, their Wasserstein divergence induced by d is defined as

W_d(μ, ν) = inf_{ξ ∈ Γ(μ, ν)} ∫_{X×X} d(x, y) ξ(dx, dy),

where Γ(μ, ν) is the set of all couplings of μ and ν, i.e., the set of all probability measures on the product space (X × X, B ⊗ B) whose marginals are respectively μ and ν. (At this point, we do not require W_d to be a formal distance between probability measures, so we are not making the usual moment and Polish space assumptions, but we will do so in Section 4.) Our goal is to bound W_d(P^k(x, ·), Π) for x ∈ X and k ≥ 1. The associated convergence rate, ρ_W, is the infimum of the ρ's that satisfy (1) when D is the Wasserstein divergence W_d.
A natural way of bounding the Wasserstein divergence between P^k(x, ·) and P^k(y, ·) is to construct a pair of coupled Markov chains {(X_k, Y_k)}_{k≥0} governed by an Mtf P̃ on (X × X, B ⊗ B) such that P̃((x, y), A × X) = P(x, A) and P̃((x, y), X × A) = P(y, A) for all x, y ∈ X and A ∈ B. Then, for any k, the joint distribution of (X_k, Y_k) given (X_0, Y_0) = (x, y) belongs to Γ(P^k(x, ·), P^k(y, ·)), and it follows that

W_d(P^k(x, ·), P^k(y, ·)) ≤ E[d(X_k, Y_k) | (X_0, Y_0) = (x, y)].

We call P̃ a coupling kernel of P. Based on this construction, one can arrive at the following well-known result (see, e.g., Ollivier, 2009, Corollary 21).
Proposition 6.
Suppose that ∫_X d(x, y) Π(dy) < ∞ for all x ∈ X. Suppose further that there exists a coupling kernel of P, denoted by P̃, and γ ∈ (0, 1) such that for every (x, y) ∈ X × X,

∫_{X×X} d(x′, y′) P̃((x, y), d(x′, y′)) ≤ γ d(x, y).   (2)

Then for each x ∈ X and k ≥ 1,

W_d(P^k(x, ·), Π) ≤ γ^k ∫_X d(x, y) Π(dy).
We provide a proof (based on our minimal set of assumptions) for completeness.
Proof.
Iterating (2) shows that E[d(X_k, Y_k) | (X_0, Y_0) = (x, y)] ≤ γ^k d(x, y). Since Π is stationary, Π = ΠP^k, and by convexity of the Wasserstein divergence in its arguments,

W_d(P^k(x, ·), Π) ≤ ∫_X W_d(P^k(x, ·), P^k(y, ·)) Π(dy) ≤ γ^k ∫_X d(x, y) Π(dy). ∎
In practice, it is often impossible to find a coupling that yields (2) for all (x, y) ∈ X × X. However, if the underlying Markov chain satisfies additional conditions, then (2) need not hold on all of X × X. Indeed, Durmus and Moulines (2015) show that, if the chain satisfies a drift condition, then it is enough that (2) holds on the subset of X × X where the (joint) drift function takes small values (see also, Jarner and Tweedie, 2001; Butkovsky, 2014). Unfortunately, Durmus and Moulines's (2015) result assumes that the metric d is bounded. Of course, in most cases, the natural metric is unbounded. Below we establish a new version of Durmus and Moulines's (2015) result in which the bounded metric assumption is removed at the cost of replacing their average contraction condition with a stronger pointwise contraction condition. While strong, our pointwise contraction condition on the coupling does hold in some MCMC applications, e.g., it holds for the coupling that we use to analyze A&C's algorithm. To state our result, we must first introduce the notion of a random mapping, which is a common tool for constructing couplings and studying Wasserstein divergences.
Let (Ω, F, P₀) be a probability space. Let θ be a random element on this space that assumes values in some measurable space Θ, and let f : X × Θ → X be a measurable function. Set F_θ(x) = f(x, θ) for x ∈ X. Then F_θ is called a random mapping on X. If P₀(F_θ(x) ∈ A) = P(x, A) for all x ∈ X and A ∈ B, then F_θ is said to induce P. Assume that this is the case and let θ₁, θ₂, … be iid copies of θ. Denote F_{θ_k} ∘ ⋯ ∘ F_{θ₁} by M_k, and define M₀ to be the identity function. Then, for every (x, y) ∈ X × X, {(M_k(x), M_k(y))}_{k≥0} is a time-homogeneous Markov chain such that the joint distribution of M_k(x) and M_k(y) lies in Γ(P^k(x, ·), P^k(y, ·)) for k ≥ 0. Obviously, the conditional distribution of (M_{k+1}(x), M_{k+1}(y)) given (M_k(x), M_k(y)) defines a coupling kernel P̃.
As an example, consider again the autoregressive process from Section 2. For x ∈ R^d, let F_θ(x) = ρx + θ, where θ ∼ N_d(0, (1 − ρ²)I_d). Then F_θ(x) ∼ N_d(ρx, (1 − ρ²)I_d), so F_θ induces P. Furthermore, letting ‖·‖ be the Euclidean norm, we have, for any x, y ∈ R^d,

‖F_θ(x) − F_θ(y)‖ = ρ‖x − y‖.

Thus, taking d(·, ·) to be the Euclidean distance in Proposition 6, we have ρ_W ≤ ρ for every dimension d.
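This common-noise coupling is easy to simulate. In the sketch below, both chains share the same Gaussian innovation at each step, so the Euclidean distance between them contracts by exactly ρ per step, matching (2) with the best possible constant; the dimension, starting points, and ρ are illustrative.

```python
import math
import random

def common_noise_step(x, y, rho, rng):
    """One step of the coupled AR chains: both coordinates use the SAME
    Gaussian innovation, so ||x' - y'|| = rho * ||x - y|| exactly."""
    z = [rng.gauss(0.0, math.sqrt(1.0 - rho ** 2)) for _ in x]
    x_new = [rho * xi + zi for xi, zi in zip(x, z)]
    y_new = [rho * yi + zi for yi, zi in zip(y, z)]
    return x_new, y_new

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

rng = random.Random(0)
rho, k = 0.8, 15
x = [5.0] * 20   # one chain started far in the positive orthant
y = [-5.0] * 20  # the other started far in the negative orthant
d0 = euclid(x, y)
for _ in range(k):
    x, y = common_noise_step(x, y, rho, rng)
dk = euclid(x, y)
```

Note that the contraction factor here is independent of the dimension, in sharp contrast to the minorization constant of Section 2.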
Again, establishing (2) for all (x, y) ∈ X × X is typically not feasible when dealing with practical Monte Carlo Markov chains. The following result provides some relief. Its proof, which is similar to that of a theorem of Durmus and Moulines (2015), is relegated to the appendix.
Proposition 7.
Suppose that P is induced by a random mapping F_θ. Suppose further that

(i) there exist λ ∈ (0, 1), b < ∞, and a function V : X → [0, ∞) such that

E[V(F_θ(x))] ≤ λV(x) + b for all x ∈ X;

(ii) there exist γ ∈ (0, 1) and T > 2b/(1 − λ) such that for all (x, y) with V(x) + V(y) ≤ T,

d(F_θ(x), F_θ(y)) ≤ γ d(x, y) almost surely;

(iii) for every (x, y) ∈ X × X,

d(F_θ(x), F_θ(y)) ≤ d(x, y) almost surely.

Then for all x ∈ X and k ≥ 1,

W_d(P^k(x, ·), Π) ≤ γ^{rk} C₁(x) + (α^{−(1−r)} A^r)^k C₂(x),   (3)

where α and A are explicit constants determined by λ, b, and T, the functions C₁ and C₂ involve integration of d and V with respect to Π,
and r ∈ (0, 1) is arbitrary.
Note that when Π is intractable, the convergence bound (3) is not fully computable, because C₁ and C₂ involve integration with respect to Π. However, when C₁(x) and C₂(x) are finite, it does give an explicit bound on the convergence rate, ρ_W, namely,

inf_{r ∈ (0,1)} max{γ^r, α^{−(1−r)} A^r}.

While this formula is exactly the same as the one produced by Proposition 1 (with γ playing the role of 1 − ε), the two bounds are not the same because γ is defined differently here. Moreover, this γ is typically much more robust to the size of the small set. For example, the coupling we constructed for the autoregressive process satisfies condition (ii) with γ = ρ for any T (and any dimension d).
Remark 8.
Contraction conditions like those in Propositions 6 and 7 can be difficult to verify in practice. We now use ideas from Steinsaltz (1999) to show that, if X is a nice Euclidean space, then we can establish such conditions by regulating the local behavior of F_θ. Assume that X is a convex subset of a Euclidean space, and that d(x, y) = ‖x − y‖, where ‖·‖ is a norm (not necessarily the Euclidean norm). For a random mapping F_θ on X, define the local Lipschitz constant of F_θ at x ∈ X to be

L_θ(x) = lim sup_{y → x} ‖F_θ(y) − F_θ(x)‖ / ‖y − x‖.

Assume that there is a measurable function L̄ : Θ → [0, ∞] such that L_θ(x) ≤ L̄(θ) for each x ∈ X. The following two lemmas can be useful for establishing the contraction conditions. A proof of Lemma 10 is provided in the Appendix.
Lemma 9.
Suppose that X is a convex subset of a Euclidean space, and that d is induced by a norm. Let F_θ be a random mapping on X. If L̄(θ) < ∞ for every θ ∈ Θ, then for each x, y ∈ X, ‖F_θ(x) − F_θ(y)‖ ≤ L̄(θ)‖x − y‖.
Lemma 10.
Suppose that X is a convex subset of a Euclidean space, and that d is induced by a norm. Let F_θ be a random mapping on X, and let C ⊆ X be convex. If for each x, y ∈ C, the map t ↦ L_θ(tx + (1 − t)y), as a function of t ∈ [0, 1], is Riemann integrable, almost surely in θ, then ‖F_θ(x) − F_θ(y)‖ ≤ s_θ(C)‖x − y‖ for every x, y ∈ C, where s_θ(C) = sup_{z ∈ C} L_θ(z).
Corollary 11.
Suppose that X is a convex subset of a Euclidean space, and that d is induced by a norm. Suppose further that ∫_X d(x, y) Π(dy) < ∞ for all x ∈ X, and that P is induced by a random mapping F_θ. Assume that for each x, y ∈ X, the map t ↦ L_θ(tx + (1 − t)y), as a function of t ∈ [0, 1], is Riemann integrable, almost surely in θ, and that

γ := E[ sup_{z ∈ X} L_θ(z) ] < 1.

Then for each x ∈ X and k ≥ 1,

W_d(P^k(x, ·), Π) ≤ γ^k ∫_X d(x, y) Π(dy).
Remark 12.
In the case where F_θ is Lipschitz and ‖·‖ is the Euclidean norm (neither of which is assumed here), Corollary 11 is a direct consequence of Steinsaltz's (1999) Theorem 1.
We now turn our attention to the conversion of Wasserstein divergence bounds into total variation bounds. Here is our main tool.
Proposition 13.
(Madras and Sezer, 2010) Suppose that P(x, ·) admits a density function z ↦ p(x, z) with respect to some reference measure μ. Suppose further that there exists C < ∞ such that for all x, y ∈ X,

∫_X |p(x, z) − p(y, z)| μ(dz) ≤ C d(x, y).   (4)

Then for all x ∈ X and k ≥ 1,

d_TV(P^k(x, ·), Π) ≤ (C/2) W_d(P^{k−1}(x, ·), Π).
Madras and Sezer (2010) assume that (X, d) is a Polish metric space and that B is the associated Borel σ-algebra, and they use these assumptions in their proof. Again, we have not made any Polish space assumptions, so we provide an alternative proof here.
Proof.
An argument from Madras and Sezer's (2010) proof of their Lemma 13 shows that for two probability measures μ₁ and μ₂,

d_TV(μ₁P, μ₂P) ≤ ∫_{X×X} d_TV(P(x, ·), P(y, ·)) ξ(dx, dy),

where μ₁P is the measure defined by μ₁P(A) = ∫_X P(x, A) μ₁(dx), μ₂P is defined analogously, and ξ is an arbitrary element of Γ(μ₁, μ₂). Now, fix x₀ ∈ X and k ≥ 1, let μ₁ = P^{k−1}(x₀, ·), and let μ₂ = Π. Then by (4), d_TV(P(x, ·), P(y, ·)) ≤ (C/2)d(x, y) for all x, y ∈ X, and hence

d_TV(P^k(x₀, ·), Π) ≤ (C/2) ∫_{X×X} d(x, y) ξ(dx, dy).

Since ξ ∈ Γ(μ₁, μ₂) is arbitrary,

d_TV(P^k(x₀, ·), Π) ≤ (C/2) W_d(P^{k−1}(x₀, ·), Π).
∎
Clearly, if the conditions of Proposition 13 are satisfied, then an upper bound on ρ_W also serves as an upper bound on ρ_TV. For example, it is straightforward to show that (4) holds for the aforementioned autoregressive process when d is the Euclidean distance. Since we know from previous calculations that ρ_W ≤ ρ for all d, it follows immediately that ρ_TV ≤ ρ for all d. Hence, in this case, the total variation bound converted from a Wasserstein bound is sharp. This is, of course, just a toy example, but recall how poor the d&m bounds are for this toy. In the next section, we use the results developed in this section to analyze the convergence complexity of the A&C algorithm.
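Condition (4) can be checked directly for the one-dimensional version of the autoregressive example, using the closed-form total variation distance between two normal distributions with equal variances. The constant C below is one valid (not necessarily optimal) Lipschitz constant; the numeric grid is illustrative.

```python
import math

def std_normal_cdf(t):
    return 0.5 * math.erfc(-t / math.sqrt(2.0))

def tv_same_var_normals(mu1, mu2, sigma):
    """Exact TV distance (sup-over-sets convention) between N(mu1, sigma^2)
    and N(mu2, sigma^2): 2 * Phi(|mu1 - mu2| / (2 sigma)) - 1."""
    t = abs(mu1 - mu2) / (2.0 * sigma)
    return 2.0 * std_normal_cdf(t) - 1.0

rho = 0.7
sigma = math.sqrt(1.0 - rho ** 2)
C = rho / (sigma * math.sqrt(2.0 * math.pi))  # candidate Lipschitz constant

# Check TV(P(x, .), P(0, .)) <= C * |x - 0| on a grid of separations;
# both transition laws are normals with means rho*x and 0, variance sigma^2.
ok = all(
    tv_same_var_normals(rho * x, 0.0, sigma) <= C * abs(x) + 1e-12
    for x in [i / 10.0 for i in range(-50, 51)]
)
```

The inequality holds because 2Φ(t) − 1 ≤ 2tφ(0) for t ≥ 0, i.e., the Gaussian CDF is steepest at the origin.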
4 Convergence Analysis of the Albert and Chib Chain
4.1 Preliminaries
Let y = (y₁, …, y_n)^T, and let y₁, …, y_n be independent binary responses such that, for each i, the success probability of y_i is Φ(x_i^T β), where x_i ∈ R^p is a vector of known covariates and β ∈ R^p is an unknown vector of regression coefficients. The design matrix X is defined to be the n × p matrix whose ith row is x_i^T. Denote the observed data by (y, X). Consider a Bayesian analysis based on the prior density

π₀(β) ∝ exp{−(1/2)(β − v)^T Q(β − v)},

where v ∈ R^p, and Q is a p × p symmetric matrix that is either positive definite (Gaussian prior) or vanishing (flat prior). While the posterior distribution is automatically proper if Q is positive definite, additional conditions (on y and X) are required to ensure propriety when Q = 0 (Chen and Shao, 2000). For the time being, we assume that the posterior is proper. The posterior density is, of course, given by

π(β | y) ∝ π₀(β) ∏_{i=1}^n Φ(x_i^T β)^{y_i} [1 − Φ(x_i^T β)]^{1−y_i}.
A standard method of exploring this intractable posterior density is Albert and Chib's (1993) data augmentation algorithm, which is one of the most well-known MCMC algorithms in Bayesian statistics. We now state the algorithm.
Let Σ = (X^T X + Q)^{−1}. Posterior propriety implies that X^T X + Q is nonsingular. For m ∈ R and c ∈ {0, 1}, let TN(m, 1, c) denote the N(m, 1) distribution truncated to (0, ∞) if c = 1, and to (−∞, 0] if c = 0. If the current state of the A&C Markov chain is β, then the next state, β′, is drawn according to the following procedure.

Iteration of the data augmentation algorithm:

1. Draw z₁, …, z_n independently with z_i ∼ TN(x_i^T β, 1, y_i), and let z = (z₁, …, z_n)^T.

2. Draw β′ ∼ N_p(Σ(X^T z + Qv), Σ).
The Markov transition density of the chain that is simulated by this algorithm is given by

p(β, β′) = ∫_{R^n} π(β′ | z, y) π(z | β, y) dz,   (5)

where the exact forms of the conditional densities π(β′ | z, y) and π(z | β, y) can be gleaned from the algorithm. It's clear that this chain is reversible with respect to the posterior density π(β | y), and that the chain is Harris ergodic. Throughout this section, we will use Π to denote the stationary measure defined by π(β | y), and P to denote the Mtf defined through (5).
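The two conditional draws above are straightforward to implement. The following sketch codes one iteration for p = 2 with hand-rolled 2 × 2 linear algebra; the data, the identity prior precision Q, and the zero prior mean v are illustrative assumptions, and the truncated normal draws use the inverse-CDF method.

```python
import math
import random
from statistics import NormalDist

PHI = NormalDist()  # standard normal CDF / inverse CDF

def truncnorm_draw(mean, positive, u):
    """Inverse-CDF draw from N(mean, 1) truncated to (0, inf) if positive,
    else to (-inf, 0], using a uniform u in (0, 1)."""
    p0 = PHI.cdf(-mean)  # mass of N(mean, 1) on (-inf, 0]
    if positive:
        return mean + PHI.inv_cdf(p0 + u * (1.0 - p0))
    return mean + PHI.inv_cdf(u * p0)

def ac_step(beta, X, y, Q, v, rng):
    """One A&C data augmentation iteration for p = 2 (sketch): draw
    z_i ~ TN(x_i' beta, 1, y_i), then beta' ~ N(Sigma(X'z + Qv), Sigma)
    with Sigma = (X'X + Q)^{-1}."""
    z = [truncnorm_draw(x[0] * beta[0] + x[1] * beta[1], yi == 1,
                        rng.uniform(1e-12, 1.0))
         for x, yi in zip(X, y)]
    # A = X'X + Q (2x2 symmetric), h = X'z + Qv
    a = sum(x[0] * x[0] for x in X) + Q[0][0]
    b = sum(x[0] * x[1] for x in X) + Q[0][1]
    c = sum(x[1] * x[1] for x in X) + Q[1][1]
    h = [sum(x[0] * zi for x, zi in zip(X, z)) + Q[0][0] * v[0] + Q[0][1] * v[1],
         sum(x[1] * zi for x, zi in zip(X, z)) + Q[1][0] * v[0] + Q[1][1] * v[1]]
    det = a * c - b * b
    mean = [(c * h[0] - b * h[1]) / det, (-b * h[0] + a * h[1]) / det]
    # Sample N(mean, Sigma) via the Cholesky factor of Sigma = A^{-1}
    s11, s12, s22 = c / det, -b / det, a / det
    l11 = math.sqrt(s11)
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21 * l21)
    g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return z, [mean[0] + l11 * g1, mean[1] + l21 * g1 + l22 * g2]

rng = random.Random(7)
X = [[1.0, 0.5], [-0.4, 1.1], [0.9, -0.2], [0.3, 0.8], [-1.2, 0.4]]
y = [1, 0, 1, 1, 0]
Q = [[1.0, 0.0], [0.0, 1.0]]  # identity prior precision (assumed)
v = [0.0, 0.0]
z, beta = ac_step([0.0, 0.0], X, y, Q, v, rng)
```

Each latent z_i is forced into the half-line matching its response y_i, which is what makes the β-update a simple Gaussian draw.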
Let us now construct a random mapping that induces the A&C Markov chain. For any m ∈ R and c ∈ {0, 1}, let F^{−1}_{m,c} : (0, 1) → R be the inverse cumulative distribution function of TN(m, 1, c). Routine calculation shows that

F^{−1}_{m,1}(u) = m + Φ^{−1}(Φ(−m) + u[1 − Φ(−m)])  and  F^{−1}_{m,0}(u) = m + Φ^{−1}(uΦ(−m)).
Let (Ω, F, P₀) be a probability space. Let ξ be a p-dimensional standard normal vector, and independently, let U = (U₁, …, U_n)^T be a vector of iid Uniform(0, 1) variables. For i ∈ {1, …, n}, denote the ith element of U by U_i. Consider the random mapping

F_{(ξ,U)}(β) = ΣX^T z(β, U) + ΣQv + Σ^{1/2}ξ,  where z_i(β, U) = F^{−1}_{x_i^T β, y_i}(U_i), i = 1, …, n.

Clearly, for fixed β, F_{(ξ,U)}(β) follows the distribution defined by (5). Hence, F_{(ξ,U)} induces P.
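The inverse CDFs displayed above can be checked numerically: applying the truncated normal CDF to F^{−1}_{m,c}(u) should return u, and the value should land in the correct half-line. The sketch below assumes the two formulas as written.

```python
from statistics import NormalDist

PHI = NormalDist()

def tn_inv_cdf(m, u, c):
    """Inverse CDF of N(m, 1) truncated to (0, inf) when c = 1 and to
    (-inf, 0] when c = 0 (the two cases used by the random mapping)."""
    p0 = PHI.cdf(-m)  # mass of N(m, 1) on (-inf, 0]
    if c == 1:
        return m + PHI.inv_cdf(p0 + u * (1.0 - p0))
    return m + PHI.inv_cdf(u * p0)

def tn_cdf(m, t, c):
    """CDF of the same truncated normal, for round-trip checking."""
    p0 = PHI.cdf(-m)
    if c == 1:
        return (PHI.cdf(t - m) - p0) / (1.0 - p0)
    return PHI.cdf(t - m) / p0

checks = []
for m in (-2.0, 0.0, 1.5):
    for u in (0.1, 0.5, 0.9):
        for c in (0, 1):
            t = tn_inv_cdf(m, u, c)
            sign_ok = (t > 0) if c == 1 else (t <= 0)
            checks.append(sign_ok and abs(tn_cdf(m, t, c) - u) < 1e-7)
```

The round trip CDF(F^{−1}(u)) = u holds up to floating-point precision for every tested (m, u, c).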
Let ‖·‖₂ and d₂(·, ·) denote the Euclidean norm and the corresponding distance, respectively. We will find it convenient to work with a normalized version of ‖·‖₂. Throughout this section, let ‖β‖ = ‖Σ^{−1/2}β‖₂, and denote by d(·, ·) the distance function that this norm induces. As described in the previous section, we can use d to define a Wasserstein divergence W_d between probability measures on R^p. Furthermore, if we restrict attention to a certain subset of these probability measures, then W_d is an actual distance. Indeed, let B(R^p) denote the Borel σ-algebra on R^p, and define 𝒫 to be the set of probability measures, μ, on (R^p, B(R^p)) such that, for some (and hence all) β₀ ∈ R^p, we have

∫_{R^p} d(β, β₀) μ(dβ) < ∞.
Since (R^p, d) forms a Polish metric space, and B(R^p) is the associated Borel σ-algebra, it follows that W_d restricted to 𝒫 is itself a metric, known as the Wasserstein distance. We note that Chen and Shao's (2000) Theorem 2.3 implies that Π ∈ 𝒫. Later in this section, we will establish that P^k(β, ·) ∈ 𝒫 for all β ∈ R^p and all k ≥ 1. In the context of this section, ρ_W represents the convergence rate of the A&C Markov chain with respect to the distance W_d. Note that for any pair of probability measures μ and ν in 𝒫, we have
λ_min(Σ^{−1})^{1/2} W_{d₂}(μ, ν) ≤ W_d(μ, ν) ≤ λ_max(Σ^{−1})^{1/2} W_{d₂}(μ, ν),

where λ_max(·) and λ_min(·) give the largest and smallest eigenvalues of a Hermitian matrix (here defined through Σ), respectively. Hence, the convergence rates associated with W_d and W_{d₂} coincide.
We end this subsection with a result relating Wasserstein and TV bounds for the A&C chain. The proof of this result, which is given in the Appendix, is based on Proposition 13.
Proposition 14.
For the A&C chain, there exists a constant C < ∞, depending only on the data (y, X) and the prior, such that

d_TV(P^k(β, ·), Π) ≤ (C/2) W_d(P^{k−1}(β, ·), Π)

for all β ∈ R^p and all k ≥ 1.
An immediate consequence of Proposition 14 is that an upper bound on ρ_W is also an upper bound on ρ_TV. This fact will be crucial in our convergence complexity analysis.
4.2 Analysis of the random mapping
We begin our analysis by bounding

‖F_{(ξ,U)}(β₁) − F_{(ξ,U)}(β₂)‖.

Let β₁, β₂ ∈ R^p, and assume that β₁ ≠ β₂. Then by the mean-value theorem for vector-valued functions,

F_{(ξ,U)}(β₁) − F_{(ξ,U)}(β₂) = ΣX^T [z(β₁, U) − z(β₂, U)] = ΣX^T ( ∫₀¹ D(tβ₁ + (1 − t)β₂, U) dt ) X(β₁ − β₂).   (6)

Now, for i ∈ {1, …, n}, we have

(∂/∂m) F^{−1}_{m, y_i}(U_i) = 1 − e_{y_i}(m, U_i),

where e_c(m, u) is defined as follows:

e₁(m, u) = (1 − u)φ(m)/φ(F^{−1}_{m,1}(u) − m),  e₀(m, u) = uφ(m)/φ(F^{−1}_{m,0}(u) − m),

with φ denoting the standard normal density. Let D(β, U) be a diagonal matrix whose ith diagonal component is (∂/∂m)F^{−1}_{m, y_i}(U_i) evaluated at m = x_i^T β. Then D(β, U) is continuous in β. It follows from (6) that

‖F_{(ξ,U)}(β₁) − F_{(ξ,U)}(β₂)‖ ≤ ( sup_{t ∈ [0,1]} λ_max(Σ^{1/2} X^T D(tβ₁ + (1 − t)β₂, U) X Σ^{1/2}) ) ‖β₁ − β₂‖.
Remark 15.
Note that D(β, U) is continuous in β for each value of U. (In fact, D(β, U) does not depend on ξ.) Therefore, t ↦ L_{(ξ,U)}(tβ₁ + (1 − t)β₂), as a function of t ∈ [0, 1], is Riemann integrable for all β₁, β₂ ∈ R^p, almost surely.
The following lemma, which is proved in the Appendix, will help us develop a bound on the diagonal elements of D(β, U).
Lemma 16.
For m ∈ R and u ∈ (0, 1), 0 < (∂/∂m)F^{−1}_{m,1}(u) < 1. Moreover, the same bounds hold for (∂/∂m)F^{−1}_{m,0}(u).
The next lemma shows that λ_max(Σ^{1/2}X^T D(β, U)XΣ^{1/2}) and, consequently, the local Lipschitz constants of F_{(ξ,U)} are bounded by 1.
Lemma 17.
For any β ∈ R^p and U ∈ (0, 1)^n,

X^T D(β, U) X ≺ X^T X + Q = Σ^{−1}.

Thus, λ_max(Σ^{1/2} X^T D(β, U) X Σ^{1/2}) < 1 for all values of β and U.
Proof.
By Lemma 16, all the diagonal components of D(β, U) are strictly less than 1. For two Hermitian matrices of the same size, A₁ and A₂, write A₁ ⪰ A₂, or equivalently, A₂ ⪯ A₁, if A₁ − A₂ is nonnegative definite. Similarly, write A₁ ≻ A₂ if A₁ − A₂ is positive definite. We consider two cases: (i) Q = 0, and (ii) Q is positive definite. If Q = 0, then I_n − D(β, U) ≻ 0 and X has full column rank, so X^T D(β, U) X ≺ X^T X = Σ^{−1}, which is positive definite (because of propriety). If Q is positive definite, then

X^T D(β, U) X ⪯ X^T X ≺ X^T X + Q = Σ^{−1}.

Thus, in either case, X^T D(β, U) X ≺ Σ^{−1}, and the result follows from Weyl's inequality. ∎
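Lemmas 16 and 17 can be probed numerically on a tiny data set. The sketch below approximates the diagonal entries of D(β, U) by finite differences of the truncated normal inverse CDF and checks that Σ^{−1} − X^T D X is positive definite via Sylvester's criterion; the data and the identity prior precision Q are illustrative assumptions.

```python
from statistics import NormalDist

PHI = NormalDist()

def tn_inv_cdf(m, u, c):
    """Inverse CDF of N(m, 1) truncated to (0, inf) if c = 1, (-inf, 0] if c = 0."""
    p0 = PHI.cdf(-m)
    if c == 1:
        return m + PHI.inv_cdf(p0 + u * (1.0 - p0))
    return m + PHI.inv_cdf(u * p0)

def d_entry(m, u, c, h=1e-5):
    """Central finite-difference approximation to (d/dm) of the truncated
    normal inverse CDF -- a diagonal entry of D."""
    return (tn_inv_cdf(m + h, u, c) - tn_inv_cdf(m - h, u, c)) / (2.0 * h)

# Tiny n = 3, p = 2 example with identity prior precision Q (assumed).
X = [[1.0, 0.5], [-0.3, 1.2], [0.8, -0.7]]
y = [1, 0, 1]
U = [0.3, 0.6, 0.9]
beta = [0.4, -1.1]

m = [sum(X[i][j] * beta[j] for j in range(2)) for i in range(3)]
diag = [d_entry(m[i], U[i], y[i]) for i in range(3)]

def gram(weights):
    """Return X^T diag(weights) X as a 2x2 matrix."""
    return [[sum(weights[i] * X[i][a] * X[i][b] for i in range(3))
             for b in range(2)] for a in range(2)]

XtDX = gram(diag)
Sinv = gram([1.0] * 3)  # X^T X
Sinv = [[Sinv[a][b] + (1.0 if a == b else 0.0) for b in range(2)]
        for a in range(2)]  # add Q = I, giving Sigma^{-1}

# M = Sigma^{-1} - X^T D X should be positive definite (Sylvester's criterion).
M = [[Sinv[a][b] - XtDX[a][b] for b in range(2)] for a in range(2)]
pos_def = M[0][0] > 0 and (M[0][0] * M[1][1] - M[0][1] * M[1][0]) > 0
```

The diagonal entries fall strictly inside (0, 1), as Lemma 16 asserts, and the positive-definiteness check confirms the eigenvalue bound of Lemma 17 on this example.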
We now develop an approximation of using