Asymptotically exact data augmentation:
models, properties and algorithms
Abstract
Data augmentation, by the introduction of auxiliary variables, has become a ubiquitous technique to improve mixing/convergence properties, simplify the implementation or reduce the computational time of inference methods such as Markov chain Monte Carlo. Nonetheless, introducing appropriate auxiliary variables while preserving the initial target probability distribution cannot be conducted in a systematic way but highly depends on the considered problem. To deal with such issues, this paper draws a unified framework, namely asymptotically exact data augmentation (AXDA), which encompasses several well-established but also more recent approximate augmented models. Benefiting from a much more general perspective, it delivers some additional qualitative and quantitative insights concerning these schemes. In particular, general properties of AXDA along with non-asymptotic theoretical results on the approximation that is made are stated. Close connections to existing Bayesian methods (e.g., mixture modeling, robust Bayesian models and approximate Bayesian computation) are also drawn. All the results are illustrated with examples and applied to standard statistical learning problems.
Keywords: Approximation, auxiliary variables, divide-and-conquer, Bayesian inference, robustness.
1 Introduction
Starting at least from the late 1950s with the seminal paper of Hartley (1958) prefiguring the expectation-maximization (EM) algorithm, introducing auxiliary variables has been a widely adopted strategy to derive iterative algorithms able to deal with possibly complicated inference problems. Indeed, whether arising from statistical physics (Swendsen and Wang 1987) or from the broader statistical community (Dempster et al. 1977), auxiliary (also called latent) variables have been used to improve (Duane et al. 1987; Edwards and Sokal 1988; Marnissi et al. 2018) and/or simplify (Tanner and Wong 1987; Doucet et al. 2002) inference methods, such as maximum likelihood (ML) estimation or simulation-based ones. Insightful reviews of these methods were conducted by Besag and Green (1993); van Dyk and Meng (2001); Tanner and Wong (2010). Among many others, slice sampling and half-quadratic (HQ) methods are archetypal instances of such auxiliary-variable-based methods. Indeed, slice sampling (Swendsen and Wang 1987; Edwards and Sokal 1988), which can be related to the fundamental theorem of simulation (Robert and Casella 2004, Theorem 2.15), exploits the simple rewriting $\pi(\theta) = \int_0^{\infty} \mathbf{1}\{0 < u < \pi(\theta)\}\,\mathrm{d}u$, where $u$ is an auxiliary variable and $\pi$ stands for a target density. This rewriting can circumvent some difficulties related to the direct inference from $\pi$ by considering an augmented density whose marginal coincides with $\pi$. Following a similar data augmentation (DA) idea, HQ approaches were proposed concurrently to slice sampling-based methods by Geman and Reynolds (1992) and Geman and Yang (1995), who extended the work of Blake and Zisserman (1987) on visual reconstruction problems. The main idea defended by the aforementioned authors was to introduce auxiliary variables in order to divide an initial non-quadratic term into the sum of a quadratic and a non-quadratic term. Then, Gibbs sampling can be implemented to tackle the resulting sub-problems, which are simpler than the initial one.
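As a concrete illustration of this rewriting, a minimal slice sampler can be sketched for a one-dimensional standard Gaussian target (an illustrative choice; for this target the slice is available in closed form, so no stepping-out procedure is needed):

```python
import math
import random

def slice_sampler_gauss(n_samples, theta0=0.0, seed=0):
    """Slice sampler for pi(theta) ∝ exp(-theta^2 / 2), exploiting the
    rewriting pi(theta) = ∫ 1{0 < u < pi(theta)} du: alternately draw
    the auxiliary u uniformly under the curve, then theta uniformly on
    the slice {theta : pi(theta) > u}, here the interval [-a, a]."""
    rng = random.Random(seed)
    theta, out = theta0, []
    for _ in range(n_samples):
        u = max(rng.uniform(0.0, math.exp(-theta**2 / 2)), 1e-300)
        a = math.sqrt(-2.0 * math.log(u))  # slice half-width: pi(a) = u
        theta = rng.uniform(-a, a)         # uniform draw on the slice
        out.append(theta)
    return out

samples = slice_sampler_gauss(50_000)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean**2
```

The empirical mean and variance should approach 0 and 1, the moments of the target, confirming that the augmented (theta, u) chain has the correct marginal.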
HQ and slice sampling methods, by introducing auxiliary variables, appear to be an interesting alternative when sampling cannot be performed directly from $\pi$. Nonetheless, they have important limitations: i) sampling constrained to a possibly high-dimensional set for slice sampling and ii) finding the form of the augmented density that ensures that the initial density is retrieved by marginalization for HQ methods. More generally, the superiority of DA-based techniques over classical Markov chain Monte Carlo (MCMC) methods such as Metropolis-Hastings (MH) algorithms is not obvious, as pointed out by Polson (1996); Damien et al. (1999). Auxiliary variable methods have been found to be slower than single-site update approaches in some cases (Hurn 1997) and some improvements have been derived to cope with these problems, such as partial decoupling (Higdon 1998) or the introduction of a working parameter (Meng and van Dyk 1997). Moreover, DA techniques are often used on a case-by-case basis (Geman and Reynolds 1992; Albert and Chib 1993; Geman and Yang 1995; Polson et al. 2013) and cannot be applied in general scenarios due to the absence of exact DA schemes or to high computation costs.
Similarly to approximate Bayesian computation (ABC) methods that circumvent intractable likelihoods (Beaumont et al. 2002; Sisson et al. 2018b), these limitations can be tackled by considering approximate DA schemes that become exact asymptotically. Thus, inspired by variable splitting-based optimization methods, Vono et al. (2019) and Rendell et al. (2018) recently and independently proposed a novel and broad Bayesian inference framework that can circumvent the limitations of exact DA approaches. Indeed, by introducing a collection of instrumental (also called splitting) variables, the aforementioned authors considered the inference from an approximate probability distribution which can be simpler, more efficient and distributed over multiple computational workers (e.g., machines or cores).
This paper aims to investigate in depth this newly proposed asymptotically exact data augmentation (AXDA) framework, to point out its strong similarity with existing inference approaches and to deliver a set of theoretical results. In particular, the question of having theoretical guarantees on the underlying approximation will arise. Note that an asymptotic data augmentation scheme had already been proposed in the early 1990s by Wei and Tanner (1991) when the number of observations is large. In this paper, we consider AXDA models that are asymptotically exact either when the number of observations is large or when a positive parameter $\rho$ is small.
To this end, Section 2 presents how variable splitting, an efficient optimization tool, can be used to derive an AXDA scheme. The motivations and properties of the latter are also detailed. Section 3 reviews some existing state-of-the-art Bayesian methods, namely robust Bayesian modeling and ABC, from the AXDA perspective. From this point of view, it appears that AXDA models can inherit some useful properties of existing approaches such as robustness or improved inference schemes. Section 4 states non-asymptotic theoretical properties of AXDA models. More precisely, a non-asymptotic upper bound on the total variation distance between the true and the considered approximate distribution is derived under some mild assumptions. Then, this bound is also used to quantify coverage properties of associated credibility intervals. Section 5 presents how AXDA schemes can be used to perform efficient inference through optimization, expectation-maximization (EM), simulation-based or variational Bayes (VB) methods. Finally, Section 6 provides some discussion and concluding remarks. The properties and results of each section are illustrated with examples.
2 Asymptotically exact data augmentation
This section introduces AXDA schemes that aim to circumvent DA's main issues, namely the art (van Dyk and Meng 2001) of finding the exact DA associated with a statistical model and its inference limitations. Throughout this paper, for simplicity, we shall abusively use the same notation for a probability distribution and its associated probability density function (pdf).
2.1 Motivations
In this paper, we are interested in performing the inference of summary statistics associated with a variable of interest $\theta \in \mathbb{R}^d$ based on a probability distribution $\pi$ with density

$$\pi(\theta) \propto \exp\big(-f(\theta)\big), \qquad (1)$$

where the potential $f$ is such that (1) defines a proper probability distribution. For the sake of generality, note that (1) shall describe various quantities. First, $\pi$ may simply refer to a pdf associated with the random variable $\theta$, e.g., through its prior distribution. Referring to a set of observations denoted by $y$, with a slight abuse of notation, $\pi$ shall correspond to a posterior distribution $\pi(\theta \mid y)$ or a likelihood $\pi(y \mid \theta)$: in any case, the notation $\pi(\theta)$ aims at putting the emphasis on the dependence on $\theta$ only. Throughout this paper, we will work under this convention and write explicitly the form of $\pi$ when needed.
If the direct inference from (1) is computationally prohibitive or not possible, one can try to rely on DA approaches which introduce some auxiliary variables stacked into a vector $z \in \mathbb{R}^k$ and define a new target density $\pi(\theta, z)$, simpler to handle, such that

$$\int_{\mathbb{R}^k} \pi(\theta, z)\,\mathrm{d}z = \pi(\theta). \qquad (2)$$
Much research has been devoted to these models in order to simplify an inference task or to improve the convergence properties of direct inference approaches (e.g., the slice sampling or HQ methods discussed in Section 1). Nonetheless, these approaches have several limitations. Indeed, finding the right form for $\pi(\theta, z)$ in order to satisfy (2) generally requires some expertise and can even be impossible in some cases (Geman and Yang 1995). In addition, even if an exact DA scheme has been found, the inference can be computationally intensive in high-dimensional settings. For instance, the mixture representation of a binomial likelihood function based on the Pólya-Gamma distribution has been used to derive a promising Gibbs sampler for logistic regression problems (Polson et al. 2013). Nonetheless, even if this algorithm has been proved to be uniformly ergodic by Choi and Hobert (2013), the corresponding ergodicity constant depends exponentially on the number of observations and on the dimension of the regression coefficients vector. In addition, the cost of the matrix inversions involved at each iteration of the Gibbs sampler can be prohibitive when this dimension is large (e.g., in applications in medicine or biology).
To cope with these issues, one can think about relaxing the assumption (2) and considering an approximate DA scheme. This relaxation enables the augmented density to be chosen with more flexibility and the inference to be improved. Nonetheless, some guarantees are required on the approximate augmented density in order to quantify the order of the approximation which is made. To this end, Section 2.2 presents an approximate DA scheme controlled by a unique positive scalar parameter $\rho$. This scheme becomes asymptotically exact when this parameter becomes small. Considering such an approximate DA model can bring several important benefits. Firstly, the inference can be simplified by choosing the approximate augmented density with more freedom. Indeed, this choice can lead to simpler inference steps embedding efficient existing methods, see Section 5. Secondly, when the augmented density is wisely chosen, the inference can be distributed over multiple machines, see Section 5 and Rendell et al. (2018). Last but not least, considering an AXDA approach can circumvent the difficult art of finding an exact DA scheme by proposing a systematic way to augment the initial model, as presented in Section 2.2.
2.2 Model
Instead of searching for an exact data augmentation scheme (2), some auxiliary variables can be introduced in order to define an approximate but asymptotically exact probability distribution. One possibility is to introduce an augmented distribution with density $\pi_\rho(\theta, z)$ depending on a parameter $\rho > 0$ and such that the associated marginal density defined by

$$\pi_\rho(\theta) = \int_{\mathbb{R}^k} \pi_\rho(\theta, z)\,\mathrm{d}z \qquad (3)$$
satisfies Assumption 1. Again, with a slight abuse of notation, unless explicitly specified, $\pi_\rho$ will indistinctly stand for the pdf of the couple $(\theta, z)$, i.e., the joint prior $\pi_\rho(\theta, z)$ or the joint posterior $\pi_\rho(\theta, z \mid y)$, or for the pdf of $y$ conditioned upon $(\theta, z)$, i.e., the augmented likelihood $\pi_\rho(y \mid \theta, z)$.
Assumption 1.
For all $\theta \in \mathbb{R}^d$, $\pi_\rho(\theta) \to \pi(\theta)$ as $\rho \to 0$.
Under this assumption, the following result holds.
Theorem 1.
Under Assumption 1, $\pi_\rho$ coincides with $\pi$ when $\rho \to 0$, that is

$$\lim_{\rho \to 0} \big\| \pi_\rho - \pi \big\|_{\mathrm{TV}} = 0. \qquad (4)$$
Proof. Since Assumption 1 ensures the pointwise convergence of the densities, the convergence in total variation follows from Scheffé's lemma.
Under Assumption 1, the bias introduced by considering the approximate augmented density $\pi_\rho$ is only controlled by the positive parameter $\rho$ and can be made arbitrarily small by decreasing the value of the latter. Indeed, Theorem 1 shows that in the limiting case $\rho \to 0$, the considered approximate data augmentation model becomes exact. In addition, this approximation can also be asymptotically exact by considering a fixed parameter $\rho$ and by letting the number of observations $n$ tend towards infinity. Indeed, assume that $f$ stands for a potential associated with a twice differentiable likelihood function of $n$ observations $y_1, \ldots, y_n$. Suppose also that $\varphi_\rho$ is a potential related to a prior distribution. In this case, $\pi_\rho(\theta, z)$ can be interpreted as the joint prior of $(\theta, z)$. Then, under this statistical model, it follows from Walker (1969) that for all $\rho > 0$, the approximate posterior and the initial posterior tend towards the same normal distribution centered at $\hat{\theta}_{\mathrm{ML}}$, where $\hat{\theta}_{\mathrm{ML}}$ stands for the maximum-likelihood estimator (MLE). In the sequel, we will refer to Assumption 1 in order to define AXDA models, see Definition 1.
Definition 1.
An asymptotically exact data augmentation (AXDA) scheme with respect to (w.r.t.) some distribution $\pi$ and controlled by a positive parameter $\rho$ refers to any approximate model $\pi_\rho$ such that Assumption 1 is satisfied.
A natural question is: how to choose the augmented density $\pi_\rho$? With the particular objective of solving Bayesian inverse problems, Vono et al. (2019) recently proposed a simple and systematic way to define $\pi_\rho$ from an initial target density $\pi$. This strategy takes inspiration from variable splitting in optimization, a trick which exploits the equivalence between the problems $\min_{\theta} f(\theta) + g(\theta)$ and $\min_{\theta, z} f(z) + g(\theta)$ s.t. $\theta = z$. Combined with the alternating direction method of multipliers (ADMM) (Boyd et al. 2011), this rewriting permits dividing an initial difficult problem into a set of simpler ones, see Section 5.1.
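The splitting trick can be illustrated on a toy one-dimensional problem, $\min_\theta \frac{1}{2}(\theta - y)^2 + \tau|\theta|$, whose solution is the soft-thresholding of $y$; the quadratic and non-smooth terms are decoupled through the constraint $\theta = z$ and handled alternately by ADMM (a minimal sketch, with an illustrative penalty parameter):

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def admm_scalar(y, tau, rho=1.0, n_iter=200):
    """ADMM for min 0.5*(theta - y)^2 + tau*|theta|, rewritten via variable
    splitting as min over (theta, z) of 0.5*(theta - y)^2 + tau*|z|
    subject to theta = z, with scaled dual variable u."""
    theta = z = u = 0.0
    for _ in range(n_iter):
        theta = (y + rho * (z - u)) / (1.0 + rho)  # quadratic sub-problem
        z = soft(theta + u, tau / rho)             # non-smooth sub-problem
        u += theta - z                             # dual update on the constraint
    return theta

theta_hat = admm_scalar(y=2.0, tau=0.5)  # closed form: soft(2.0, 0.5) = 1.5
```

Each sub-problem only sees one of the two terms, which is exactly the decoupling that AXDA transposes to the sampling setting.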
Extending these preliminary results, this paper investigates AXDA schemes associated with a target density $\pi$ and defined by the approximate augmented density

$$\pi_\rho(\theta, z) \propto \exp\big(-f(z) - \varphi_\rho(\theta, z)\big), \qquad (5)$$

where $\varphi_\rho$ stands for a divergence measuring the discrepancy between $\theta$ and $z$. As will be discussed in Section 5, targeting $\pi_\rho$ instead of $\pi$ can simplify and even improve the inference. The basic unit for deriving $\pi_\rho$ from $\pi$ consists in replacing the initial potential $f(\theta)$ by the augmented potential $f(z) + \varphi_\rho(\theta, z)$. In the sequel, we will focus on this basic unit and study its main properties. Note that the potential $\varphi_\rho$ has to be chosen such that (5) defines a proper probability distribution and such that Assumption 1 is fulfilled. In addition, this potential can be interpreted as a coupling function between the two variables $\theta$ and $z$ and is strongly related to penalty, augmented Lagrangian or Bregman divergence terms in optimization, see Section 5.1. Indeed, $\varphi_\rho$ can be expressed as a divergence up to an additive term if its associated distribution belongs to the exponential family (Banerjee et al. 2005, Theorem 4). As an illustration, Examples 1 and 2 show two types of potentials expressed as divergence functions between $\theta$ and $z$ and such that Assumption 1 is verified.
Example 1.
Gaussian penalization – Assume that the density associated with $\varphi_\rho$ is Gaussian with scale parameter $\rho$ and location parameter $z$, i.e., for all $\theta, z \in \mathbb{R}^d$, $\varphi_\rho(\theta, z) = \|\theta - z\|^2 / (2\rho^2)$. Since $\mathcal{N}(\theta; z, \rho^2 \mathbf{I}_d) \to \delta_z(\theta)$ when $\rho \to 0$, where $\delta_z$ is the Dirac distribution centered at $z$, Assumption 1 is satisfied. The potential $\varphi_\rho$ clearly defines a (scaled) squared $\ell_2$ divergence between $\theta$ and $z$. In addition, the smaller $\rho$, the more heavily violations of the equality constraint between these two variables are penalized.
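Under this Gaussian coupling, both conditionals of the augmented density are Gaussian, so a two-step Gibbs sampler is immediate. A minimal sketch for a standard Gaussian target (an illustrative choice for which the $\theta$-marginal is exactly $\mathcal{N}(0, 1 + \rho^2)$, so the empirical moments can be checked against it):

```python
import random

def axda_gibbs_gauss(rho, n_samples, seed=1):
    """Gibbs sampler for pi_rho(theta, z) ∝ exp(-z^2/2 - (theta - z)^2 / (2 rho^2)),
    the AXDA model of Example 1 with a standard Gaussian target pi.
    The theta-marginal is the convolution N(0, 1 + rho^2)."""
    rng = random.Random(seed)
    prec = 1.0 + 1.0 / rho**2          # precision of z | theta
    theta, out = 0.0, []
    for _ in range(n_samples):
        # z | theta is Gaussian with mean theta / (1 + rho^2)
        z = rng.gauss(theta / (1.0 + rho**2), (1.0 / prec) ** 0.5)
        theta = rng.gauss(z, rho)      # theta | z ~ N(z, rho^2)
        out.append(theta)
    return out

thetas = axda_gibbs_gauss(rho=0.5, n_samples=100_000)
m = sum(thetas) / len(thetas)
v = sum(t * t for t in thetas) / len(thetas) - m**2  # ≈ 1 + rho^2 = 1.25
```

Both full conditionals are one-dimensional Gaussians, which is precisely the simplification the splitting is designed to buy.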
Example 2.
Gamma-Poisson mixture – Assume that $f$ is a potential associated with a Poisson likelihood function with intensity $z$ and observation $y$. Suppose that $\varphi_\rho$ is associated with a gamma distribution with shape $\theta/\rho$ and scale $\rho$. Then the joint distribution of $(y, z)$ parametrized by $\theta$ writes

$$p_\rho(y, z \mid \theta) = \frac{z^{y} e^{-z}}{y!} \cdot \frac{z^{\theta/\rho - 1} e^{-z/\rho}}{\Gamma(\theta/\rho)\, \rho^{\theta/\rho}}. \qquad (6)$$
Under this statistical model, the marginal likelihood of $y$ under (6) is a negative binomial distribution which tends towards the Poisson distribution with intensity $\theta$ in the limiting case $\rho \to 0$. Therefore, Assumption 1 is satisfied. Moreover, the gamma distribution belonging to the exponential family, its associated potential can be related to a Bregman divergence, leading to the rewriting
(7) 
where the Bregman divergence is associated with a convex function determined by the gamma family. In this case, the potential $\varphi_\rho$, through this Bregman divergence, evaluates the distance between the auxiliary variable $z$ and a statistic related to $\theta$.
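This limiting behavior is easy to check by Monte Carlo. The sketch below assumes, as one convenient parametrization consistent with the text, a gamma coupling with shape $\theta/\rho$ and scale $\rho$ (so that $\mathbb{E}[z] = \theta$ and $\mathrm{var}[z] = \rho\theta$); the marginal variance of $y$ is then $\theta(1 + \rho)$ and shrinks towards the Poisson variance $\theta$ as $\rho$ decreases:

```python
import math
import random

def poisson_knuth(lam, rng):
    """Poisson draw via Knuth's multiplication method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def gamma_poisson_samples(theta, rho, n, seed=2):
    """Marginal draws of y in the Gamma-Poisson model of Example 2:
    z | theta ~ Gamma(shape=theta/rho, scale=rho), then y | z ~ Poisson(z).
    Marginally y is negative binomial, with mean theta and variance
    theta * (1 + rho), tending to Poisson(theta) as rho -> 0."""
    rng = random.Random(seed)
    return [poisson_knuth(rng.gammavariate(theta / rho, rho), rng)
            for _ in range(n)]

def mean_var(ys):
    m = sum(ys) / len(ys)
    return m, sum(y * y for y in ys) / len(ys) - m * m

m_big, v_big = mean_var(gamma_poisson_samples(3.0, 0.5, 50_000))
m_small, v_small = mean_var(gamma_poisson_samples(3.0, 0.01, 50_000, seed=5))
```

For theta = 3, the empirical variance drops from about 4.5 (rho = 0.5) towards 3 (rho = 0.01) while the mean stays at theta, as predicted.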
For a specific application, the choice of $\varphi_\rho$ can be motivated by some expertise regarding the desired penalization, by conjugacy properties (Rendell et al. 2018) or by convexity, differentiability and gradient Lipschitz continuity properties paving the way for advanced inference algorithms (Vono et al. 2018, 2019) such as proximal MCMC methods (Pereyra 2016; Durmus et al. 2018). Furthermore, a careful choice of $\varphi_\rho$ can lead to interesting properties for the marginal of $\theta$ under $\pi_\rho$, see Proposition 1.
Proposition 1.
Assume that $\pi$ is log-concave. Let $\kappa_\rho \propto \exp(-\varphi_\rho)$ and assume that $\kappa_\rho$ is log-concave and, for all $\theta$, bounded. Assume also that $\kappa_\rho$ has zero mean and finite covariance matrix $\rho^2 \Sigma_K$. Then, the marginal of $\theta$ under $\pi_\rho$, with density $p_\rho$, has the following properties.

i) Assumption 1 is satisfied;

ii) $p_\rho$ is log-concave;

iii) $p_\rho$ is infinitely differentiable on $\mathbb{R}^d$;

iv) In the case where $\pi$ stands for a pdf associated with the random variable $\theta$, the expectation and variance under $p_\rho$ are given by

$$\mathbb{E}_{p_\rho}[\theta] = \mathbb{E}_{\pi}[\theta], \qquad (8) \qquad \mathrm{var}_{p_\rho}[\theta] = \mathrm{var}_{\pi}[\theta] + \rho^2 \Sigma_K. \qquad (9)$$
Remark 1.
Note that an example of such a density is the standard kernel density $\kappa_\rho(\theta, z) = \rho^{-d} K\big((\theta - z)/\rho\big)$, where $d$ is the dimension of $\theta$ and $K$ is a smooth function with compact support satisfying the assumptions of Proposition 1. This type of density has for instance been considered in ABC (Sisson et al. 2018b). Another example is the Gaussian kernel.
Proposition 1 permits drawing several conclusions about the inference based on $p_\rho$. Firstly, the infinite differentiability of $p_\rho$ (Property iii)) implies that it stands for a smooth approximation of $\pi$, see Figure 3 in Section 4. Secondly, Property iv) of Proposition 1 is reassuring regarding the inference task. Indeed, if $\pi$ stands for a prior distribution, then considering the approximation $p_\rho$ simply corresponds to a more diffuse prior knowledge around the same expected value, see Section 4.3.2. Thus, more weight will be given to the likelihood if a posterior distribution is derived with this prior. On the other hand, if $\pi$ stands for a likelihood, then considering the approximation yields the opposite behavior: the likelihood becomes less informative w.r.t. the prior. This idea is directly related to the robust hierarchical Bayesian models discussed in Section 3.1.
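These smoothing properties can be checked numerically by direct convolution. Below, a Laplace(0, 1) density (a non-smooth illustrative target with mean 0 and variance 2) is smoothed by a Gaussian kernel of width rho; the resulting marginal keeps the mean and inflates the variance by rho^2, as in Property iv) for the Gaussian kernel (a sketch with simple rectangle-rule quadrature):

```python
import math

def smoothed_density(rho, xs):
    """p_rho(x) = ∫ pi(z) N(x; z, rho^2) dz for a Laplace(0, 1) target pi,
    approximated by a rectangle rule on the grid xs."""
    h = xs[1] - xs[0]
    def laplace(z):
        return 0.5 * math.exp(-abs(z))
    def gauss(d):
        return math.exp(-d * d / (2 * rho * rho)) / (rho * math.sqrt(2 * math.pi))
    return [sum(laplace(z) * gauss(x - z) * h for z in xs) for x in xs]

h = 0.04
xs = [i * h - 10.0 for i in range(501)]         # grid on [-10, 10]
p = smoothed_density(0.5, xs)
mass = sum(p) * h                               # ≈ 1 (still a proper density)
mean = sum(x * v for x, v in zip(xs, p)) * h    # ≈ 0 (mean preserved)
var = sum(x * x * v for x, v in zip(xs, p)) * h - mean**2  # ≈ 2 + 0.5**2
```

The smoothed density is also infinitely differentiable even though the Laplace target has a kink at the origin, illustrating Property iii).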
In the initial formulation of the augmented density (5), the potential $\varphi_\rho$ is complemented by a unique potential $f$. More generally, when the potential is written as a sum of several terms, i.e., for all $\theta$, $f(\theta) = \sum_{i=1}^{b} f_i(\theta)$, a natural generalization consists in choosing the potential associated with (5) as $\sum_{i=1}^{b} \big( f_i(z_i) + \varphi_{\rho}(\theta, z_i) \big)$ with $z = (z_1, \ldots, z_b)$. Under this formulation, the initial parameter has been split $b$ times. The number of splittings can be chosen w.r.t. the considered inference problem and algorithm, see Section 5. Indeed, considering this AXDA scheme permits decoupling the initial potential function into $b$ potentials, each associated with a different auxiliary variable, paving the way for alternating algorithmic schemes. This model has been used for instance by Rendell et al. (2018) and Vono et al. (2018) for machine learning problems.
In summary, instead of relying directly on $\pi$ or on an exact DA scheme, we consider the AXDA model defined in (5), which has theoretical guarantees and several interesting properties, see Theorem 1 and Proposition 1. Section 4 will focus on non-asymptotic properties of AXDA models, that is, for a fixed $\rho > 0$. Indeed, relying only on asymptotic ($\rho \to 0$ or $n \to \infty$) convergence results is not satisfactory. Moreover, it is worth noting that considering such a general approximate model permits drawing strong parallels with state-of-the-art approximate approaches such as robust hierarchical Bayesian modeling or ABC. Indeed, Section 3 shows that the formulation of these models can be cast into the AXDA framework described in this section.
3 Revisiting existing models from the AXDA perspective
This section proposes to review some important state-of-the-art works, namely robust hierarchical Bayesian models and ABC, from the AXDA perspective described in Section 2. Indeed, at the core of these existing approaches, auxiliary variables are introduced in order to simplify and/or robustify an inference task (e.g., posterior sampling). Similarly to AXDA schemes, the resulting augmented density involves elementary components of the form (5). Based on this strong similarity, important properties of these two types of approaches can be inherited by AXDA methods. This section reviews some of them.
3.1 Robust hierarchical Bayesian models
Considering a well-chosen demarginalization procedure can lead to robustness properties (Robert and Casella 2004). Some approaches have taken advantage of this idea in order to build robust hierarchical Bayesian models. Interestingly, a wide range of these models can be viewed as particular instances of AXDA models. Many definitions exist to qualify a Bayesian model as robust (Berger et al. 2000). In the sequel, we will adopt the same robustness definition as the one proposed by Wang and Blei (2018): a Bayesian model is robust if it is less sensitive to model mismatch.
3.1.1 Localized models
In their paper, Wang and Blei (2018) already highlighted a strong bond that relates a large number of existing robust models, namely localization. The latter refers to models where each observation or set of observations is independently drawn from a local statistical model, i.e., a model depending on individual and independent parameters, see Figure 1. This property is for instance present in mixture-based models (e.g., Student's t (Peel and McLachlan 2000) or negative binomial (Lawless 1987) distributions) where local parameters are marginalized out. In addition, allowing each observation to be a random draw from a local statistical model can lead to robust generalized linear models (e.g., robust Poisson or logistic regressions) (Wang and Blei 2018). Note that this localization idea can be traced back at least to the fundamental paper by Lindley and Smith (1972). Indeed, the latter gives an analysis of Bayesian models by using the concept of exchangeability related to the works by de Finetti (1931) and later by Hewitt and Savage (1955).
A large part of these models can be viewed as particular instances of AXDA approaches. Indeed, assume that the data points $y_1, \ldots, y_n$ are independently and identically distributed (i.i.d.) random variables drawn from a likelihood function with density

$$p(y_{1:n} \mid \theta) = \prod_{i=1}^{n} p(y_i \mid \theta), \qquad (10)$$

where $\theta$ is a common parameter. Applying the AXDA methodology described in Section 2 by introducing auxiliary variables $z_1, \ldots, z_n$ leads to an augmented likelihood with density

$$p_\rho(y_{1:n}, z_{1:n} \mid \theta) = \prod_{i=1}^{n} p(y_i \mid z_i)\, \kappa_\rho(z_i; \theta), \qquad (11)$$

where $\kappa_\rho \propto \exp(-\varphi_\rho)$ stands for the coupling density.
The statistical model defined by (11) implies a hierarchical Bayesian model similar to the localized one depicted in Figure 1(b), with the $z_i$ playing the role of the local parameters, and corresponds in general to an approximation of the initial one, see Examples 3 and 4.
Example 3.
Student's t regression – Consider an initial model that assumes the observations $y_i$ to be i.i.d. draws from a normal distribution with mean $\theta$ and variance $\sigma^2$. By introducing auxiliary variables $z_i$ standing for local variances, such that $\varphi_\rho$ is a potential associated with a scaled inverse chi-squared distribution with $\nu$ degrees of freedom, we obtain the Student's t regression model by marginalizing out the $z_i$. This model, associated with the Student's t distribution, is known to be more robust to possible outliers (Peel and McLachlan 2000) since it leads to larger tails than a Gaussian model. Note that in this case, Assumption 1 is naturally satisfied when $\nu \to \infty$.
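This scale-mixture construction is easy to check by simulation: drawing a variance from a scaled inverse chi-squared distribution and then a centered normal with that variance yields Student's t draws, with variance $\nu/(\nu - 2)$ and heavier-than-Gaussian tails (a sketch with unit scale and $\nu = 10$, both illustrative choices):

```python
import random

def student_t_via_mixture(nu, n, seed=3):
    """Example 3's mixture: z ~ scaled inverse chi-squared(nu, tau=1),
    then y | z ~ N(0, z).  Marginally y follows a Student's t with nu
    degrees of freedom."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        chi2 = 2.0 * rng.gammavariate(nu / 2.0, 1.0)  # chi-squared(nu) draw
        z = nu / chi2                                  # local variance
        out.append(rng.gauss(0.0, z ** 0.5))
    return out

ys = student_t_via_mixture(10, 200_000)
m = sum(ys) / len(ys)
v = sum(y * y for y in ys) / len(ys) - m * m       # ≈ nu/(nu-2) = 1.25
m4 = sum((y - m) ** 4 for y in ys) / len(ys)
kurt = m4 / v**2                                   # ≈ 3 + 6/(nu-4) = 4 > 3 (Gaussian)
```

The excess kurtosis above the Gaussian value 3 is the quantitative face of the "larger tails" robustness mentioned in the text.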
Example 4.
Robust logistic regression – Assume that for all $i \in \{1, \ldots, n\}$, $y_i \sim \mathcal{B}\big(\sigma(x_i^{\top} \theta)\big)$, where $\mathcal{B}$ stands for the Bernoulli distribution, $\sigma$ for the sigmoid function, $x_i^{\top}$ for the $i$-th row of the predictor matrix and $\theta$ for the regression coefficients vector to infer. Then, one can robustify the inference by assuming that each observation $y_i$ is drawn from a local and independent model associated with an auxiliary parameter $z_i$. This model has for example been considered by Rendell et al. (2018), while another approximate logistic regression model has been derived in Vono et al. (2018).
AXDA models, by combining elementary approximate densities of the form (5), can then be cast into localized models and inherit their robustness property w.r.t. possible outliers in the dataset.
3.1.2 Uncertainty modeling
As described in the previous section, localization stands for an alternative way to cope with likelihood misspecification. Another well-known source of misspecification in Bayesian statistics is the choice of the prior (Berger et al. 2000; Watson and Holmes 2016). Indeed, the latter is often chosen for tractability or regularization reasons and does not take into account all uncertainties related to the parameter to infer. Among a rich literature on robustness w.r.t. prior misspecification, Mohammad-Djafari et al. (2018) recently proposed a hierarchical Bayesian model that takes into account both likelihood and prior uncertainties in standard linear inverse problems. A part of this model is actually a special instance of AXDA modeling and illustrates that AXDA can also bring robustness properties w.r.t. the prior choice, see Example 5.
Example 5.
Let us consider a linear inverse problem

$$y = \mathbf{H}\theta + \varepsilon, \qquad (12)$$

where $y$ stands for the observations, $\mathbf{H}$ is a known linear operator, $\theta$ is the parameter of interest and $\varepsilon$ stands for noise. Under a common Bayesian paradigm, the target distribution is the posterior distribution

$$\pi(\theta \mid y) \propto p(y \mid \theta)\, p(\theta), \qquad (13)$$

where $p(y \mid \theta)$ and $p(\theta)$ denote the likelihood function and prior distribution, respectively. In addition to representing some uncertainties on $\theta$ through the prior distribution $p(\theta)$, one can model uncertainties w.r.t. this prior by introducing an auxiliary variable $z$ such that
$$\theta = z + \xi, \quad z \sim p(z), \qquad (14)$$

where $\xi$ stands for a perturbation modeling the prior uncertainty. With (14), the observation model (12) can be rewritten $y = \mathbf{H}z + \mathbf{H}\xi + \varepsilon$, illustrating an additional uncertainty on the parameter of interest. Furthermore, the marginal posterior distribution of $\theta$ under this augmented model can be written

$$\pi_\rho(\theta \mid y) \propto p(y \mid \theta) \int \kappa_\rho(\theta; z)\, p(z)\, \mathrm{d}z, \qquad (15)$$

where $\kappa_\rho$ denotes the conditional density of $\theta$ given $z$ implied by (14). Instead of $p(\theta)$ in (13), the target (15) actually involves a smoothed prior distribution which has a similar form as the marginal under the augmented density defined in (5). In addition, if $\kappa_\rho$ is such that Assumption 1 is satisfied, then (15) can be viewed as a special instance of AXDA modeling of $\pi(\theta \mid y)$.
By revisiting some existing robust Bayesian approaches, Section 3.1 showed that they can be viewed as special instances of AXDA models. This implies that Bayesian approaches based on AXDA can inherit robustness properties w.r.t. model (likelihood and prior) misspecification and measurement errors. This strong relationship between robust Bayesian models and AXDA is not isolated. Indeed, the next section also shows that AXDA and ABC share a similar formulation, paving the way for AXDA to inherit some ABC properties and improved schemes (e.g., parallel tempering MCMC-ABC or sequential Monte Carlo ABC).
3.2 Approximate Bayesian computation
ABC stands for a family of methods that permit coping with intractable likelihoods by sampling from the latter instead of evaluating them pointwise. In a nutshell, if one's goal is to infer a parameter $\theta$ based on a posterior of interest, the ideal ABC rejection sampler is as follows. At each iteration, simulate a candidate $\theta'$ from the prior, generate pseudo-observations $y'$ from the likelihood given this candidate and accept $\theta'$ if $y' = y$, where $y$ is the observations vector. In this basic algorithm, no evaluation of the likelihood is required since the acceptance ratio does not involve the likelihood. Many more sophisticated ABC samplers have been derived. We refer the interested reader to the recent review by Sisson et al. (2018a) for more information about ABC methods.
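The ideal sampler above requires an exact match and is impractical; in practice the acceptance test is relaxed to a tolerance on summary statistics. A minimal sketch on a toy model where the exact posterior is known (Gaussian mean with known unit variance and a N(0, 2^2) prior, both illustrative choices), using sample means as summaries:

```python
import random

def abc_rejection(y_obs, eps, n_accept, prior_sd=2.0, seed=4):
    """ABC rejection sampler for theta in the model y_i ~ N(theta, 1):
    draw theta from the N(0, prior_sd^2) prior, simulate pseudo-observations
    from the likelihood, and accept theta when the distance between the
    summary statistics (here sample means) is below eps.  The likelihood
    is never evaluated, only simulated from."""
    rng = random.Random(seed)
    n = len(y_obs)
    ybar = sum(y_obs) / n
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.gauss(0.0, prior_sd)
        pseudo_mean = sum(rng.gauss(theta, 1.0) for _ in range(n)) / n
        if abs(pseudo_mean - ybar) < eps:
            accepted.append(theta)
    return accepted

data_rng = random.Random(0)
y_obs = [data_rng.gauss(1.0, 1.0) for _ in range(20)]
post = abc_rejection(y_obs, eps=0.05, n_accept=500)
abc_mean = sum(post) / len(post)
# exact conjugate posterior mean: n * ybar / (n + 1/prior_sd^2)
exact_mean = 20 * (sum(y_obs) / 20) / (20 + 0.25)
```

Since the sample mean is a sufficient statistic here, the accepted draws closely track the exact conjugate posterior for small eps.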
Among a huge literature on ABC (also called likelihood-free) methods, the noisy ABC approaches proposed and motivated by Fearnhead and Prangle (2012) and Wilkinson (2013) are strongly related to AXDA. Indeed, AXDA with observation splitting is equivalent to noisy ABC. To see this, let $\pi$ of the form (1) stand for an intractable likelihood. Noisy ABC replaces the exact inference based on $\pi$ by considering the pseudo-likelihood with density

$$p_\rho(y \mid \theta) = \int p(z \mid \theta)\, \kappa_\rho(y; z)\, \mathrm{d}z, \qquad (16)$$

where $z$ stands for pseudo-observations. This density has exactly the same formulation as the one defined in (5), except that noisy ABC splits the observations instead of the parameter of interest $\theta$. The term $\kappa_\rho$, proportional to $\exp(-\varphi_\rho)$, is viewed as a kernel which integrates to 1 (e.g., a Gaussian kernel, see Example 1) parametrized by a bandwidth $\rho$. Capitalizing on this equivalence property, also pointed out by Rendell et al. (2018), one can derive interesting properties for AXDA from the ABC framework.
Firstly, note that the robustness properties of noisy ABC pointed out by Wilkinson (2013) can be related to the AXDA robustness detailed in Section 3.1. Secondly, ABC methods and algorithms which circumvent the hand-tuning of the parameter $\rho$ can also be applied to AXDA-based Bayesian inference methods. For instance, Rendell et al. (2018) recently proposed a sequential Monte Carlo (SMC) algorithm very similar to the one derived for ABC by Del Moral et al. (2012). The main idea is to construct particle approximations of a sequence of distributions with pdfs $\pi_{\rho_1}, \ldots, \pi_{\rho_T}$, where $\rho_1 > \rho_2 > \cdots > \rho_T$. Then, these approximations can be used to estimate an expectation under $\pi_{\rho_T}$. This approach presents the benefit of defining a sequence of values which can be adaptively chosen instead of a single value $\rho$. Using one more time the analogy between AXDA and ABC, another approach to avoid the tuning of $\rho$ could be to adapt the parallel tempering approach of Baragatti et al. (2013), used for ABC, to AXDA MCMC schemes. The main idea of the aforementioned authors is to consider parallel Markov chains targeting $\pi_{\rho_1}, \ldots, \pi_{\rho_K}$ with $\rho_1 > \cdots > \rho_K$. Using such different chains permits exploring the parameter space at different speeds: quickly but with high asymptotic bias for chains associated with a large $\rho$, and slowly but with a precise approximation of $\pi$ for chains associated with a small $\rho$. Then, by using the different exploration behavior of each chain, one can exchange samples between two chains in order to improve their convergence properties (e.g., escaping from local modes). The final approximation of $\pi$ will be given by the chain targeting the smallest parameter $\rho_K$. Finally, it could be interesting to see if post-processing methods used to reduce the bias of ABC outputs (e.g., the local linear regression model of Beaumont et al. (2002)) yield improved AXDA-based inference schemes.
Up to now, we have presented AXDA models and related existing approaches, their main properties and some asymptotic theoretical guarantees on the approximation which is made. As pointed out by Marin et al. (2012) for ABC methods, relying only on asymptotic ($\rho \to 0$ or $n \to \infty$) convergence results is not satisfactory. Indeed, in real applications we are working in a non-asymptotic regime and therefore these asymptotic results do not hold. Thus, deriving non-asymptotic approximation results for AXDA methods is essential. This may help practitioners to understand the meaning and the consequences of choosing a specific parameter $\rho$ for the inference task. In addition, obtaining exact approximation bounds can help in understanding the approximation which is made and its behavior as $\rho$ varies. These questions will be answered in Section 4, where exact error bounds for a fixed positive parameter $\rho$ and a finite number of samples are derived.
4 Nonasymptotic properties
This section presents some non-asymptotic theoretical properties of the approximation involved in AXDA. More precisely, non-asymptotic results for a fixed $\rho > 0$ on the error associated with densities, potentials and credibility intervals are derived. We will assume throughout this section that $\dim(z) = \dim(\theta) = d$.
4.1 Assumptions
In order to derive non-asymptotic bounds between quantities related to $\pi_\rho$ defined in (5) and $\pi$ in (1), we shall make some assumptions. Firstly, we shall assume that the variations of the potential $f$ associated with $\pi$ are not highly irregular. Otherwise, it seems difficult to bound these variations uniformly on the whole space $\mathbb{R}^d$. A classical regularity assumption is the Lipschitz continuity of the potential, recalled in Definition 2. For instance, such a property has been used by Durmus et al. (2018) to derive non-asymptotic bounds on the total variation distance between probability distributions.
Definition 2.
Let $g : \mathbb{R}^d \to \mathbb{R}$. If there exists a constant $L \geq 0$ such that for all $\theta_1, \theta_2 \in \mathbb{R}^d$, $|g(\theta_1) - g(\theta_2)| \leq L \|\theta_1 - \theta_2\|$, then $g$ is said to be $L$-Lipschitz.
In order to exploit the form of the divergence function $\varphi_\rho$, it is preferable to make additional assumptions on this coupling potential. Vono et al. (2019) pointed out a strong similarity between their AXDA-based Gibbs sampler and ADMM (Boyd et al. 2011) when $\varphi_\rho$ is assumed to be quadratic. In addition, such a quadratic form meets the requirements necessary to use proximal MCMC methods (Durmus et al. 2018) since $\varphi_\rho$ is then a smooth, gradient Lipschitz and proper function. This quadratic form used in (5) stands for Gaussian smoothing, which is also commonly used in statistics (Dümbgen and Rufibach 2009) and has interesting properties, see Proposition 1. Under these motivations, we shall make the following assumption in the sequel.
Assumption 2.
The potential function is Lipschitz and stands for a quadratic potential, i.e. for all , .
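Under this quadratic coupling, the approximate marginal is the convolution of the target density with a Gaussian kernel whose standard deviation is the tolerance parameter. A minimal numerical sketch (one-dimensional Laplace target, grid-based integration; all numerical choices are illustrative, not taken from the paper) showing that the total variation error shrinks as the parameter decreases:

```python
import numpy as np

grid = np.linspace(-15.0, 15.0, 2001)
dx = grid[1] - grid[0]
target = 0.5 * np.exp(-np.abs(grid))  # Laplace(0, 1) density on the grid

def smoothed(rho):
    # pi_rho = pi convolved with a Gaussian kernel of std rho (Riemann sum)
    kern = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2.0 * rho ** 2))
    dens = kern @ target * dx / np.sqrt(2.0 * np.pi * rho ** 2)
    return dens / (dens.sum() * dx)  # renormalize to absorb truncation error

def tv(p, q):
    # Total variation distance between two densities tabulated on the grid
    return 0.5 * np.sum(np.abs(p - q)) * dx

tv_big, tv_small = tv(smoothed(1.0), target), tv(smoothed(0.1), target)
print(tv_big, tv_small)  # the error shrinks with rho
```

The discrepancy is concentrated around the kink of the Laplace density, where smoothing has the strongest effect.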
Under Assumption 2, a nonasymptotic upper bound on the total variation distance between and is derived in Section 4.2. Then, Section 4.3 takes advantage of this bound to derive theoretical properties between potential functions and credibility intervals. Finally, the results of this section are illustrated with two examples.
4.2 Nonasymptotic bound on the total variation distance
As pointed out in Theorem 1, the marginal coincides with in the limiting case . In the case where Assumption 2 is satisfied for a fixed , it is possible to control the total variation distance between and , as shown in Theorem 2 below.
Theorem 2.
Let the potentials and be such that Assumption 2 is verified. Then,
(17) 
where
(18) 
The function is a parabolic cylinder function defined for all and by
(19) 
Note that this bound is asymptotically tight since it tends towards zero when , see Theorem 1. Additionally, it depends on only a few quantities that can be computed, bounded or approximated in real applications: the dimension of the problem , the Lipschitz constant associated to the regularized potential and the parameter controlling the tradeoff between the asymptotic bias and the computational cost. In the limiting case , the upper bound derived in (17) admits the following asymptotic equivalent.
Proposition 2.
In the limiting case , the following equivalence relation holds:
(20) 
Under some regularity conditions (here Lipschitz continuity) on the potential function , Proposition 2 states that grows at most linearly w.r.t. the parameter and w.r.t. when is sufficiently small. Moreover, using Stirling-like approximations for large in the equivalence relation (20) suggests a mild dependence of on the dimensionality of the problem. Potential functions verifying the hypotheses of Theorem 2 are common in machine learning and signal/image processing problems, see Section 4.3.3. As an archetypal example, the sparsity-promoting potential function defined for all by with is Lipschitz continuous with Lipschitz constant and satisfies the assumptions of Theorem 2 and Proposition 2. In this case, the dependence in (20) is linear w.r.t. when is large and is small. Note also that continuously differentiable functions on a compact set are Lipschitz continuous.
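The Lipschitz constant of the l1-norm potential mentioned above can be verified numerically: random pairs of points never exceed the constant sqrt(d) (w.r.t. the Euclidean norm), and the constant is attained along a sign-aligned direction. A small sketch (dimension and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50

def ratio(x, y):
    # |  ||x||_1 - ||y||_1  |  /  ||x - y||_2
    return abs(np.abs(x).sum() - np.abs(y).sum()) / np.linalg.norm(x - y)

# Random pairs stay below the Lipschitz constant sqrt(d) ...
rand_max = max(ratio(rng.normal(size=d), rng.normal(size=d))
               for _ in range(10_000))
# ... and the constant is attained when x - y is a sign-aligned multiple
# of the all-ones vector
ones = np.ones(d)
attained = ratio(2.0 * ones, ones)  # equals sqrt(d) exactly
print(rand_max, attained, np.sqrt(d))
```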
Remark 2.
Proposition 2 is verified when the potential is Lipschitz and for a Gaussian smoothing kernel . If these two assumptions are not verified in practice, it remains possible in some cases to estimate the pointwise bias between and . Indeed, if is supposed infinitely differentiable and is a standard smoothing kernel, then, in the limiting case , the asymptotic bias between and can be estimated w.r.t. from a classical Taylor expansion of the target density. For instance, the latter approach has been used to characterize the approximation made in ABC (Sisson et al. 2018b).
Figure 2 shows the behavior of the upper bound in (17) w.r.t. the dimensionality of the problem, ranging from to , and as a function of the product in log-log scale. The linear relation between this upper bound and shown in (20) is clearly observed for small values of . Nonetheless, the upper bound derived in (17) is not a silver bullet. As expected, for a fixed value of the parameter , the approximation error increases as the dimension grows. Thus, this bound suffers from the curse of dimensionality and becomes noninformative in high dimension if is not sufficiently small. A possible way to cope with this issue is to reduce the dimensionality of the problem before starting the inference task.
In the case where the initial target distribution can be expressed as a product of several terms, i.e.
(21) 
where for all , and when the augmented density writes
(22) 
we obtain a generalization of Theorem 2, stated in Corollary 1.
Corollary 1.
For all , let be Lipschitz. Then,
(23) 
where .
4.3 Bounds on potentials and credibility intervals
4.3.1 Theoretical results
Similarly to the definition of the potential function in (1), we define the potential function associated to the approximate marginal in (3), for all , by
(24) 
where stands for the normalization constant associated to the density proportional to . Note that this density can be associated either to (e.g. if defines a prior distribution) or (e.g. if is a likelihood). Under Assumption 2, the potential becomes
(25) 
Under this assumption, Proposition 3 and Proposition 4 below show that it is possible to control the difference between the potentials and credibility intervals associated to and , respectively.
Proposition 3.
The bounds and are asymptotically tight in the limiting case . In addition, when stands for the density associated to a posterior distribution, one advantage of Bayesian analysis is its ability to derive the underlying probability distribution of the variable of interest and thereby to provide credibility intervals under this distribution. This uncertainty information is particularly relevant for real-world applications. As a result, since the marginal stands for an approximation of the original target distribution , it is important to control the credibility intervals under w.r.t. those drawn under . Such a control is partially provided by the bound in total variation distance given by Theorem 2. However, it is possible to quantify more precisely the difference between the credible regions (Robert 2001) with confidence level under and , as stated below.
Proposition 4.
Proposition 4 states that the coverage of under can be determined for a fixed value of . Thus, it is possible to obtain a comprehensive description of w.r.t. the initial target density before running an AXDA-based algorithm. The bounds in (30) then allow one to choose a parameter ensuring a prescribed coverage property. The tightness properties and the behavior of these bounds w.r.t. are the same as before: asymptotic tightness when (since ), and linear behavior w.r.t. when this parameter is sufficiently small. The results of Propositions 3 and 4 are illustrated in Sections 4.3.2 and 4.3.3 on standard statistical learning problems including linear regression, support vector machine (SVM) and logistic regression.
4.3.2 Application to sparse linear regression
We consider a least absolute shrinkage and selection operator (lasso) regression problem under the Bayesian paradigm (Park and Casella 2008). Assume that centered observations are generated from the forward statistical model , where stands for a known standardized predictor matrix, and . By considering a Laplace prior distribution for , the target posterior distribution has density for all ,
(31) 
where with . By only splitting the term associated to the prior with a quadratic penalization function , the joint density writes
(32) 
In this specific case, the potential associated to the smoothed prior distribution (see (25)) has a closedform expression given for all , by
(33)  
(34) 
with , and . Note that in cases where has no closed form, one can estimate it by a Monte Carlo approximation.
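Such a Monte Carlo approximation can be sketched as follows for the (univariate) Laplace potential U(x) = alpha |x|, taking the smoothed potential, up to an additive normalization constant, as -log E[exp(-U(x - rho * eps))] with eps standard Gaussian (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, rho, n_mc = 1.0, 0.1, 200_000

def U(x):
    # Laplace (sparsity-promoting) potential
    return alpha * np.abs(x)

def U_rho(x):
    # Monte Carlo estimate of -log E[exp(-U(x - rho * eps))], eps ~ N(0, 1);
    # this is the Gaussian-smoothed potential up to an additive constant
    eps = rng.normal(size=n_mc)
    return -np.log(np.mean(np.exp(-U(x - rho * eps))))

print(U(3.0), U_rho(3.0))  # close for small rho
```

Averaging in the density domain (rather than the potential domain) keeps the estimator consistent, since smoothing acts on the density by convolution.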
Figure 3 shows the behavior of the regularized potential defined in (34) for several values of the parameter , along with the associated smoothed prior and posterior distributions. For simplicity, the univariate case corresponding to has been considered. This particular case is of limited practical interest but allows a simpler visualization of the approximation at stake. The regularization parameter has been set to . The contours of the shaded area correspond to and . As expected, the potential is a smooth approximation of the potential associated to the initial prior, see Property iii) in Proposition 1. Note that the inequalities derived in (26) are verified. Although this approximation seems similar to the Moreau-Yosida (MY) regularization of a nonsmooth potential function (Combettes and Pesquet 2011), the rationale behind it is different. Indeed, the MY envelope stands for a particular instance of the infimal convolution between two convex functions (an initial potential and a quadratic one), whereas is the potential associated to a smoothed density obtained by convolution with a Gaussian kernel. In addition, the third row of Figure 3 shows the posterior of defined in (32) for , and , derived from the smoothed prior distributions shown in Figure 3. For sufficiently small values of , the marginal stands for a quite accurate approximation of the original target .
interval under target  interval under approximation  real coverage  bounds (35)
[0.47,1.24]  [0.47,1.24]  0.95  [0.949,0.951]
idem  [0.47,1.24]  0.95  [0.948,0.952]
idem  [0.47,1.24]  0.95  [0.88,1]
idem  [0.47,1.37]  0.96  [0.34,1]
Table 1 illustrates the bounds derived in (30) for . For each case, the values of the bounds are summarized in the interval
(35) 
and the real coverage is also reported. The (1)credibility intervals and have been chosen to be the highest posterior density regions associated to each density with . Note that the coverage interval becomes informative only if is sufficiently small which is not surprising since the assumptions on the potential of are weak. Indeed, the form of the density (e.g. symmetry or unimodality) is not taken into account in the derived bounds. The degree of approximation depicted in the first row of Figure 3 is consistent with Figure 2 which shows a total variation distance of the order for in the univariate case. In addition, regarding the coverage , note that the marginal stands for a conservative approximation of in this example. Indeed, in each case, the (1)credibility interval under denoted covers at least % of the probability mass under .
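A coverage comparison of this kind is easy to reproduce numerically, because sampling from the smoothed distribution amounts to adding Gaussian noise to samples from the target. A sketch (univariate Laplace target, central 95% interval rather than an HPD region; for this symmetric unimodal density the two coincide, and all numerical choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.2, 1_000_000

# Exact target: Laplace(0, 1); smoothed samples = Laplace + Gaussian noise
laplace = rng.laplace(size=n)
smoothed = laplace + rho * rng.normal(size=n)

# Central 95% credible interval under the smoothed distribution
lo, hi = np.quantile(smoothed, [0.025, 0.975])

def laplace_cdf(x):
    # CDF of the Laplace(0, 1) target
    return np.where(x < 0, 0.5 * np.exp(x), 1.0 - 0.5 * np.exp(-x))

# Coverage, under the exact target, of the interval built under the
# smoothed distribution
coverage = laplace_cdf(hi) - laplace_cdf(lo)
print(lo, hi, coverage)  # coverage stays close to 0.95 for small rho
```

The smoothed interval is slightly wider than the exact one, so its coverage under the target is mildly conservative, consistent with the behavior reported for Table 1.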
4.3.3 Application to statistical learning with Lipschitz loss functions
The results of Section 4 assume that the potential function associated to is Lipschitz. Interestingly, such Lipschitz functions are used in standard statistical learning problems to evaluate the discrepancy between observations and model outputs (van de Geer 2016).
name  problem
hinge  SVM
Huber  robust reg.
logistic  logistic reg.
pinball  quantile reg.
Table 2 lists some of them along with their definition and associated statistical problems. Figure 4 illustrates the form of these losses and associated regularized potentials with obtained via a Monte Carlo approximation. Note that the absolute loss stands for a particular instance of the pinball loss with . Without loss of generality, these problems consider a likelihood function that can be written as in (21) with
(36) 
where for , is the feature vector associated with observation ; is one of the loss functions in Table 2 and is the parameter to infer. Since all the loss functions listed in Table 2 are Lipschitz continuous w.r.t. their second argument with Lipschitz constant equal to 1, the potential in (36) is also Lipschitz with constant . Motivated by the robustness properties inherited by AXDA, see Section 3.1.1, we consider the splitting of the likelihood contribution associated to each observation with a quadratic coupling function . The results of Corollary 1 can then be applied to defined in (21).
In practice, to illustrate the behavior of the upper bound in Corollary 1 w.r.t. the number of observations, we fixed the dimension and considered several values of ranging from 1 to . For each , we randomly generated sets of features and normalized the columns of the matrix so that each entry lies between and . This operation is classical in machine learning and is also called feature scaling.
Figure 5 shows the behavior of this bound for two different dimension values . As expected, for a fixed value of , the bound becomes less informative as the number of likelihood approximations increases with the size of the dataset . Nonetheless, the effect of on the bound is not prohibitive. Indeed, in each case ( and ), and appear to be complementary variables: increasing the value of one while decreasing the value of the other by the same order roughly gives the same bound value. Actually, if for all , , the dependence of the bound when is small is of the order for a fixed dimension . One can limit this dependence on by splitting blocks of observations in minibatches instead of splitting each observation; this strategy has for instance been considered by Rendell et al. (2018).
As a conclusion, the principle of AXDA models defined in (5) has been presented along with exact nonasymptotic results for a fixed and a finite number of observations , see Theorem 2, Corollary 1 and Propositions 2, 3 & 4. These results have been illustrated on several standard statistical learning problems, showing the generality of the derived bounds. The next section studies how the AXDA approach defined in (5) can be used in practice to simplify, improve, accelerate and/or distribute an inference task.
5 Inference algorithms
Motivated by the theoretical results shown in Section 4, this section presents how to derive efficient inference algorithms, ranging from optimization-based to simulation-based methods, from AXDA. The potential benefits w.r.t. direct inference from are presented and discussed. More precisely, optimization-based approaches are derived to get maximum likelihood (ML) or maximum a posteriori (MAP) estimates based on the approximate density . MCMC and VB methods based on AXDA models are also detailed to explore the distribution of the unknown parameters to infer.
From now on, we assume that and we consider a target density with the very general form
(37) 
Based on this target density, the augmented density is assumed to take the form
(38) 
This formulation highlights the benefits of using the augmented density instead of for each of the different inference approaches detailed in the sequel. Again, note that the pdf can stand either for a likelihood, where is associated to the total number of observations, or for a posterior distribution. In the latter case, some potentials () are associated with the prior while the others are related to the likelihood contribution.
5.1 Optimizing AXDA meets ADMM
Computing the MAP or ML estimate under (38) boils down to solving the optimization problem
(39) 
If is chosen such that it satisfies Assumption 2, problem (39) can be viewed as a quadratically penalized formulation of the initial problem , see Nocedal and Wright (2006, Section 17.1). As expected, the solution of (39) is an approximate solution of the initial optimization problem. Instead of solving this approximate problem, one can replace the quadratic penalty by an augmented Lagrangian term and solve the associated problem with the ADMM, see Algorithm 1. If the functions are closed, proper and convex, then Algorithm 1 converges towards the optimal solution of the original minimization problem (Eckstein and Bertsekas 1992). Regarding Algorithm 1, the benefit of using a variable splitting approach as in AXDA is clear: the initial potential is split into individual potentials. The corresponding minimization problems are therefore simpler (e.g., associated proximity operators may become available) and can be handled in parallel (Boyd et al. 2011). The derivation of this optimization-based inference algorithm is illustrated on a classical machine learning problem, namely penalized logistic regression, see Example 6.
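As a toy illustration of this splitting, the following consensus ADMM sketch minimizes a sum of absolute-value potentials, whose minimizer is the sample median. The data, penalty parameter and iteration count are arbitrary choices, and the z-updates could run in parallel across terms:

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0, 5.0, 9.0])  # minimize sum_i |x - a_i|
rho = 1.0                                 # ADMM penalty parameter

def soft(v, t):
    # Soft-thresholding: proximity operator of t * |.|
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, u = 0.0, np.zeros_like(a)
for _ in range(1000):
    # z_i-updates: prox of |. - a_i| (soft-thresholding), one per split term
    z = a + soft(x - u - a, 1.0 / rho)
    # x-update: consensus average; u-update: scaled dual ascent
    x = np.mean(z + u)
    u += z - x
print(x)  # approaches the median, 2.0
```

Each split term only touches its own datum through a simple proximity operator, which is exactly the benefit the AXDA splitting is meant to deliver.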
Example 6.
We consider in this example the penalized logistic regression problem. We assume that binary responses are observed and correspond to conditionally independent Bernoulli random variables with probability of success . The function is the sigmoid function, stands for the feature vector associated to observation and are the unknown regression coefficients to infer. We consider an arbitrary prior distribution on with density proportional to with standing for a convex and potentially nonsmooth potential. The target then stands for the posterior distribution of the unknown regression coefficients
(40) 
By denoting and for all with , , the posterior distribution in (40) has the form (37). Computing the MAP estimate directly with classical forward-backward algorithms (e.g., the fast iterative shrinkage-thresholding algorithm (FISTA) (Beck and Teboulle 2009)) can be computationally prohibitive if the number of observations is very large. Indeed, the gradient of the smooth term involves a large sum to compute at each iteration. In addition, if the dataset is distributed over multiple machines, one cannot resort to methods such as FISTA to perform the inference task. In contrast, Algorithm 1, which stands for the deterministic counterpart of an AXDA-based Gibbs sampler (Vono et al. 2019), avoids this large sum and enables the minimization steps to be handled in a distributed manner by splitting the initial objective function.
Algorithm 1 assumes that the minimization steps involving the potentials can be processed efficiently. However, some of these steps may not admit an analytical solution. In this case, if the expected value under each corresponding density can be evaluated and maximized w.r.t. the variable of interest , the MAP or ML estimate under can be approximated using an EM algorithm.
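A sketch of such an EM scheme in a one-dimensional toy case (Laplace potential with quadratic coupling; the E-step expectation is computed by numerical integration on a grid, and all tuning values are illustrative, not from the paper):

```python
import numpy as np

rho = 0.5
z = np.linspace(-20.0, 20.0, 20_001)  # integration grid for the latent z

def e_step_mean(x):
    # E-step: posterior mean of z given x under
    # p(z | x) proportional to exp(-|z| - (x - z)^2 / (2 rho^2))
    logw = -np.abs(z) - (x - z) ** 2 / (2.0 * rho ** 2)
    w = np.exp(logw - logw.max())
    return np.sum(w * z) / np.sum(w)

x = 2.0  # initial MAP iterate
for _ in range(200):
    # With a quadratic coupling, the M-step reduces to x <- E[z | x]
    x = e_step_mean(x)
print(x)  # converges to the mode of the smoothed target (0 here)
```

With the quadratic coupling, the M-step is in closed form and all the nonsmoothness is absorbed by the E-step, which is the practical appeal of this construction.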
5.2 Expectationmaximization for AXDA
An EM algorithm under the augmented density will target the MAP or ML estimator, see Algorithm 2. If the expectations in the E-step cannot be evaluated, one can approximate them via Monte Carlo (Wei and Tanner 1990). The benefits of using the augmented density instead of are threefold. Firstly, as pointed out in Section 2, exact DA schemes based on cannot be derived in general and the corresponding EM algorithms cannot be implemented; considering instead gives a quite systematic way of introducing latent variables in the original statistical model. Secondly, the expectations involved in the E-step of Algorithm 2 can be far simpler to derive than the expectation under . Indeed, the latter involves the whole potential while the former involves separately regularized parts of this potential, namely . Finally, conditionally on , the random variables