## Abstract.

Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm. We illustrate the performance of our method with examples. Our empirical findings indicate that the algorithm can cope well with different kinds of scenarios without prior tuning.


## 1. Introduction

Markov chain Monte Carlo (MCMC) is a generic method to approximate an integral of the form

 $I \stackrel{\mathrm{def}}{=} \int_{\mathbb{R}^d} f(x)\,\pi(x)\,\mathrm{d}x,$

where $\pi$ is a probability density function which can be evaluated point-wise up to a normalising constant. Such an integral occurs frequently when computing Bayesian posterior expectations [e.g., 24, 11].

The random-walk Metropolis algorithm [21] often works well, provided the target density is, roughly speaking, sufficiently close to unimodal. The efficiency of the Metropolis algorithm can be optimised by a suitable choice of the proposal distribution. The proposal distribution can be chosen automatically by several adaptive MCMC algorithms; see [12, 4, 26, 2] and references therein.

When $\pi$ has multiple well-separated modes, random-walk based methods tend to get stuck in a single mode for long periods of time, which can lead to false convergence and severely erroneous results. Using a tailored Metropolis-Hastings algorithm can help, but in many cases finding a good proposal distribution is difficult. Tempering of $\pi$, that is, considering auxiliary distributions with densities proportional to $\pi^\beta$ with $\beta \in (0,1)$, often provides better mixing between the modes [30, 20, 13, 34]. We focus here particularly on the parallel tempering algorithm, which is also known as replica exchange Monte Carlo and Metropolis coupled Markov chain Monte Carlo.

The tempering approach is particularly tempting in settings where $\pi$ admits a physical interpretation, and there is good intuition on how to choose the temperature schedule for the algorithm. In general, choosing the temperature schedule is a non-trivial task, but there are generic guidelines for temperature selection, based on both empirical findings and theoretical analysis. A first rule of thumb suggests that the temperature progression should be (approximately) geometric; see, e.g., [16]. Kone and Kofke [17] also linked the temperature selection to the mean acceptance rate of the swaps; this has been further analysed by Atchadé, Roberts and Rosenthal [5]; see also [27].

Our temperature adaptation is based on the latter; we try to optimise the mean acceptance rate of the swaps between the chains at adjacent temperatures. Our scheme has similarities with that proposed by Atchadé, Roberts and Rosenthal [5]. The key difference in our method is that we propose to adapt continuously during the simulation. We show that the temperature adaptation converges, and that the point of convergence is unique under mild and natural conditions on the target distribution.

The local exploration in our approach relies on the random-walk Metropolis algorithm. The proposal distribution, or more precisely the scale/shape parameter of the proposal distribution, can be adapted using several existing techniques, such as the covariance estimator [12] augmented with an adaptive scaling pursuing a given mean acceptance rate [2, 3, 26, 4], which is motivated by certain asymptotic results [25, 28]. It is also possible to use a robust shape estimate which enforces a given mean acceptance rate [33].

We start by describing the proposed algorithm in Section 2. Theoretical results on the convergence of the adaptation and the ergodic averages are given next in Section 3. In Section 4, we illustrate the performance of the algorithm with examples. The proofs of the theoretical results are postponed to Section 5.

## 2. Algorithm

### 2.1. Parallel tempering algorithm

The parallel tempering algorithm defines a Markov chain $\{X_k\}_{k\ge0}$ over the product space $\mathsf{X}^L$, where

 (1) $X_k = \big(X_k^{(1)}, \dots, X_k^{(L)}\big) = X_k^{(1:L)}.$

Each of the $L$ chains targets a ‘tempered’ version $\pi^{\beta^{(\ell)}}$ of the target distribution $\pi$. Denote by $\beta^{(1:L)}$ the inverse temperatures, which are such that $1 = \beta^{(1)} > \beta^{(2)} > \cdots > \beta^{(L)} > 0$, and by $Z(\beta)$ the normalising constant

 (2) $Z(\beta) \stackrel{\mathrm{def}}{=} \int_{\mathsf{X}} \pi^{\beta}(x)\,\mathrm{d}x,$

which is assumed to be finite. The parallel tempering algorithm is constructed so that the Markov chain is reversible with respect to the product density

 (3) $\pi_\beta(x^{(1)},\dots,x^{(L)}) \stackrel{\mathrm{def}}{=} \frac{\pi^{\beta^{(1)}}(x^{(1)})}{Z(\beta^{(1)})} \times \cdots \times \frac{\pi^{\beta^{(L)}}(x^{(L)})}{Z(\beta^{(L)})},$

over the product space $\mathsf{X}^L$.

Each time-step may be decomposed into two successive moves: the swap move and the propagation (or update) move; for the latter, we consider only random-walk Metropolis moves.

We use the following notation to distinguish the state of the algorithm after the swap step (denoted $\bar X_n$) and after the random-walk step, or equivalently after a complete step (denoted $X_n$). The state is then updated according to

 (4) $X_{n-1} \stackrel{S_\beta}{\longrightarrow} \bar X_{n-1} \stackrel{M(\Sigma,\beta)}{\longrightarrow} X_n,$

where the two kernels $S_\beta$ and $M(\Sigma,\beta)$ are respectively defined as follows.

• $M(\Sigma,\beta)$ denotes the tensor product kernel on the product space $\mathsf{X}^L$,

 (5) $M(\Sigma,\beta)\big(x^{(1:L)}; A_1 \times \cdots \times A_L\big) = \prod_{\ell=1}^{L} M\big(\Sigma^{(\ell)},\beta^{(\ell)}\big)\big(x^{(\ell)}, A_\ell\big),$

where each $M(\Sigma^{(\ell)},\beta^{(\ell)})$ is a random-walk Metropolis kernel targeting $\pi^{\beta^{(\ell)}}$ with increment distribution $q_{\Sigma^{(\ell)}}$, where $q_\Sigma$ is the density of a multivariate Gaussian with zero mean and covariance $\Sigma$,

 (6) $M(\Sigma,\beta)(x,A) \stackrel{\mathrm{def}}{=} \int_A \alpha_\beta(x,y)\, q_\Sigma(y-x)\,\mathrm{d}y + \delta_x(A)\int \big(1-\alpha_\beta(x,y)\big)\, q_\Sigma(y-x)\,\mathrm{d}y,$

where

 (7) $\alpha_\beta(x,y) \stackrel{\mathrm{def}}{=} 1 \wedge \frac{\pi^\beta(y)}{\pi^\beta(x)}, \quad \text{for all } (x,y)\in\mathsf{X}\times\mathsf{X}.$

In practical terms, applying $M(\Sigma,\beta)$ means that one applies a random-walk Metropolis step separately to each of the $L$ chains.

• $S_\beta$ denotes the Markov kernel of the swap steps, targeting the product distribution $\pi_\beta$,

 (8) $S_\beta\big(x^{(1:L)}; A\big) = \frac{1}{L-1}\sum_{j=1}^{L-1} \varpi_\beta^{(j)}\big(x^{(j)},x^{(j+1)}\big)\, J^{(j)}\big(x^{(j)},x^{(j+1)}; A\big) + \frac{1}{L-1}\sum_{j=1}^{L-1} \Big(1-\varpi_\beta^{(j)}\big(x^{(j)},x^{(j+1)}\big)\Big)\, \delta_{x^{(1:L)}}(A),$

where $\varpi_\beta^{(j)}(x^{(j)},x^{(j+1)})$ is the probability of accepting a swap between levels $j$ and $j+1$, which is given by

 (9) $\varpi_\beta^{(j)}\big(x^{(j)},x^{(j+1)}\big) \stackrel{\mathrm{def}}{=} 1 \wedge \left(\frac{\pi(x^{(j+1)})}{\pi(x^{(j)})}\right)^{\beta^{(j)}-\beta^{(j+1)}},$

and

 (10) $J^{(j)}\big(x^{(j)},x^{(j+1)}; A\big) \stackrel{\mathrm{def}}{=} \int\cdots\int_A \delta_{x^{(j)}}\big(\mathrm{d}y^{(j+1)}\big)\, \delta_{x^{(j+1)}}\big(\mathrm{d}y^{(j)}\big) \prod_{i\in\{1,\dots,L\}\setminus\{j,j+1\}} \delta_{x^{(i)}}\big(\mathrm{d}y^{(i)}\big).$

The swap step defined above means choosing a random index $j \in \{1,\dots,L-1\}$ uniformly, proposing to swap the adjacent states $x^{(j)}$ and $x^{(j+1)}$, and accepting this swap with the probability given in (9).
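The two moves can be sketched in code. The following is a minimal illustration, assuming only a function `log_pi` returning $\log\pi$ up to an additive constant; the names are ours and the bookkeeping of the full algorithm is omitted.

```python
import numpy as np

def rwm_step(x, log_pi, beta, chol_sigma, rng):
    """One random-walk Metropolis step targeting pi^beta, cf. (6)-(7)."""
    y = x + chol_sigma @ rng.standard_normal(x.shape)
    # alpha_beta(x, y) = 1 ∧ (pi(y)/pi(x))^beta, computed on the log scale
    if np.log(rng.uniform()) < beta * (log_pi(y) - log_pi(x)):
        return y
    return x

def swap_step(states, log_pi, betas, rng):
    """One swap move, cf. (8)-(9): choose a level j uniformly and propose
    to exchange the states of the chains at levels j and j+1."""
    j = int(rng.integers(len(states) - 1))
    log_acc = (betas[j] - betas[j + 1]) * (log_pi(states[j + 1]) - log_pi(states[j]))
    if np.log(rng.uniform()) < log_acc:
        states[j], states[j + 1] = states[j + 1], states[j]
    return states
```

A complete parallel tempering sweep applies `swap_step` once and then `rwm_step` at every level, matching the decomposition (4).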

### 2.2. Adaptive parallel tempering algorithm

In the adaptive version of the parallel tempering algorithm, the temperature parameters are continuously updated along the run of the algorithm. We denote the sequence of inverse temperatures by

 (11) $\{\beta_n\}_{n\ge0} \stackrel{\mathrm{def}}{=} \{\beta^{(1:L)}(\rho_n)\}_{n\ge0},$

which are parameterised by the vector-valued process

 (12) $\{\rho_n\}_{n\ge0} \stackrel{\mathrm{def}}{=} \{\rho_n^{(1:L-1)}\}_{n\ge0},$

through $\beta^{(1)} \equiv 1$ and, for $1 \le \ell \le L-1$, with

 (13) $\beta^{(\ell+1)}\big(\rho^{(1:\ell)}\big) \stackrel{\mathrm{def}}{=} \beta^{(\ell)}\big(\rho^{(1:\ell-1)}\big) \exp\big(-\exp(\rho^{(\ell)})\big).$
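Under the parameterisation (13), any real-valued vector $\rho$ yields a valid strictly decreasing ladder with $\beta^{(1)} = 1$. A minimal sketch (the function name is ours):

```python
import math

def betas_from_rho(rho):
    """Inverse temperatures from unconstrained parameters, cf. (13):
    beta^(1) = 1 and beta^(l+1) = beta^(l) * exp(-exp(rho^(l)))."""
    betas = [1.0]
    for r in rho:
        betas.append(betas[-1] * math.exp(-math.exp(r)))
    return betas
```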

Because the inverse temperatures are adapted at each iteration, the target distribution of the chain changes from step to step as well. Our adaptation of the temperatures is performed using the following stochastic approximation procedure

 (14) $\rho_n^{(\ell)} = \Pi_\rho\Big(\rho_{n-1}^{(\ell)} + \gamma_{n,1} H^{(\ell)}\big(\rho_{n-1}^{(1:\ell)}, X_n\big)\Big), \quad 1 \le \ell \le L-1,$

where $\beta^{(\ell)}$ is defined in (13) and $\Pi_\rho$ is the projection onto the constraint set, which will be discussed further in Section 2.4. Moreover,

 (15) $H^{(\ell)}\big(\rho^{(1:\ell)}, x\big) = 1 \wedge \left(\frac{\pi(x^{(\ell+1)})}{\pi(x^{(\ell)})}\right)^{\Delta\beta^{(\ell)}(\rho^{(1:\ell)})} - \alpha^\star,$

 (16) $\Delta\beta^{(\ell)}\big(\rho^{(1:\ell)}\big) = \beta^{(\ell)}\big(\rho^{(1:\ell-1)}\big) - \beta^{(\ell+1)}\big(\rho^{(1:\ell)}\big).$

We will show in Section 3 that the algorithm is designed in such a way that the inverse temperatures converge to a value for which the mean probability of accepting a swap move between any two adjacent-temperature chains is constant and equal to $\alpha^\star$.
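A sketch of the update (14)-(16) without the projection $\Pi_\rho$; the function name and the use of log-densities are ours.

```python
import math

def adapt_rho(rho, betas, log_pi_states, gamma, alpha_star=0.234):
    """One stochastic-approximation update of rho, cf. (14)-(16).

    rho           : current parameters rho^(1:L-1)
    betas         : current inverse temperatures beta^(1:L)
    log_pi_states : log pi(X^(l)) at the current states, l = 1..L
    gamma         : step size gamma_{n,1}
    """
    new_rho = list(rho)
    for l in range(len(rho)):
        delta_beta = betas[l] - betas[l + 1]
        # realised swap acceptance probability, eq. (15), minus alpha*
        acc = min(1.0, math.exp(delta_beta * (log_pi_states[l + 1] - log_pi_states[l])))
        new_rho[l] = rho[l] + gamma * (acc - alpha_star)
    return new_rho
```

When swaps are accepted more often than $\alpha^\star$, $\rho^{(\ell)}$ grows and the gap $\Delta\beta^{(\ell)}$ widens; too few acceptances shrink it.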

We will also adapt the random-walk proposal distribution at each level. We describe below more than one possible algorithm for performing this task. In the theoretical part, for simplicity, we will consider only the seminal adaptive Metropolis algorithm [12] augmented with scaling adaptation [e.g. 26, 3, 2]. In this algorithm, we estimate the covariance matrix of the target distribution at each temperature and rescale it to control the acceptance ratio at each level in stationarity.

Denote by $\mathcal{M}_+(d)$ the set of $d\times d$ positive definite matrices. For $\Sigma \in \mathcal{M}_+(d)$, we denote by $\varrho_{\min}(\Sigma)$ and $\varrho_{\max}(\Sigma)$ the smallest and the largest eigenvalues of $\Sigma$, respectively. For $\varepsilon \in (0,1)$, define the convex subset $\mathcal{M}_+(d,\varepsilon)$ by

 (17) $\mathcal{M}_+(d,\varepsilon) \stackrel{\mathrm{def}}{=} \big\{\Sigma \in \mathcal{M}_+(d) : \varepsilon \le \varrho_{\min}(\Sigma) \le \varrho_{\max}(\Sigma) \le \varepsilon^{-1}\big\}.$

The set $\mathcal{M}_+(d,\varepsilon)$ is a compact subset of the open cone of positive definite matrices.

We denote by $\Gamma_n^{(\ell)}$ the current estimate of the covariance at level $\ell$, which is updated as follows:

 (18) $\Gamma_n^{(\ell)} = \Pi_\Gamma\Big[(1-\gamma_{n,2})\Gamma_{n-1}^{(\ell)} + \gamma_{n,2}\big(X_n^{(\ell)}-\mu_{n-1}^{(\ell)}\big)\,{}^t\big(X_n^{(\ell)}-\mu_{n-1}^{(\ell)}\big)\Big],$

 (19) $\mu_n^{(\ell)} = (1-\gamma_{n,2})\mu_{n-1}^{(\ell)} + \gamma_{n,2} X_n^{(\ell)},$

where ${}^t(\cdot)$ denotes the matrix transpose and $\Pi_\Gamma$ is the projection onto the set $\mathcal{M}_+(d,\varepsilon)$; see Section 2.4. The scaling parameter $T_n^{(\ell)}$ is updated so that the acceptance rate in stationarity converges to the target $\alpha^\star$,

 (20) $T_n^{(\ell)} = \Pi_T\left(T_{n-1}^{(\ell)} + \gamma_{n,3}\left[\left(1 \wedge \frac{\pi^{\beta_{n-1}^{(\ell)}}(Y_n^{(\ell)})}{\pi^{\beta_{n-1}^{(\ell)}}(\bar X_{n-1}^{(\ell)})}\right) - \alpha^\star\right]\right),$

where $\Pi_T$ is the projection onto a compact interval (see Section 2.4) and $Y_n^{(\ell)}$ is the proposal at level $\ell$, assumed to be conditionally independent of the past draws and distributed according to a Gaussian with mean $\bar X_{n-1}^{(\ell)}$ and covariance matrix $\Sigma_{n-1}^{(\ell)}$, which is given by

 (21) $\Sigma_n^{(\ell)} = \exp\big(T_n^{(\ell)}\big)\,\Gamma_n^{(\ell)}.$
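The recursions (18)-(21) for one level can be sketched as follows (names are ours; the projections $\Pi_\Gamma$ and $\Pi_T$ are omitted):

```python
import numpy as np

def adapt_proposal(mu, Gamma, T, x, acc_prob, g2, g3, alpha_star=0.234):
    """One update of the mean/covariance estimates (18)-(19) and the
    log-scaling (20); returns the new state and Sigma = exp(T) * Gamma, cf. (21)."""
    d = (x - mu)[:, None]                     # deviation from the previous mean
    Gamma_new = (1.0 - g2) * Gamma + g2 * (d @ d.T)
    mu_new = (1.0 - g2) * mu + g2 * x
    T_new = T + g3 * (acc_prob - alpha_star)  # push acceptance rate towards alpha*
    return mu_new, Gamma_new, T_new, np.exp(T_new) * Gamma_new
```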

In the sequel we denote by $\{Y_n\}_{n\ge0}$ the vectors of proposed moves,

 (22) $\{Y_n\}_{n\ge0} = \{Y_n^{(1:L)}\}_{n\ge0}.$

In order to reduce the number of adapted parameters, especially in higher dimensions, we propose to use a common covariance for all the temperatures, but still employ separate scalings. More specifically,

 (23) $\Gamma_n = (1-\gamma_{n,2})\Gamma_{n-1} + \frac{\gamma_{n,2}}{L}\sum_{\ell=1}^{L}\big(X_n^{(\ell)}-\mu_{n-1}\big)\,{}^t\big(X_n^{(\ell)}-\mu_{n-1}\big),$

 (24) $\mu_n = (1-\gamma_{n,2})\mu_{n-1} + \frac{\gamma_{n,2}}{L}\sum_{\ell=1}^{L} X_n^{(\ell)},$

and set $\Gamma_n^{(\ell)} \equiv \Gamma_n$ for all levels $\ell$.

### 2.3. Robust adaptive Metropolis

Another possible implementation of the random-walk adaptation, the robust adaptive Metropolis (RAM) [33], is defined by a single recursion adjusting the covariance parameter and attaining a given acceptance rate. Specifically, one recursively finds a lower-triangular matrix $\Gamma_n^{(\ell)}$ with positive diagonal satisfying

 (25) $\Gamma_n^{(\ell)}\,{}^t\big(\Gamma_n^{(\ell)}\big) = \Gamma_{n-1}^{(\ell)}\Big[I + \gamma_{n,2}(\alpha_n-\alpha^\star)\, u\big(Z_n^{(\ell)}\big)\,{}^t\big(u(Z_n^{(\ell)})\big)\Big]\,{}^t\big(\Gamma_{n-1}^{(\ell)}\big),$

where $u(z) \stackrel{\mathrm{def}}{=} z/|z|$, $Z_n^{(\ell)}$ is the proposal increment and $\alpha_n$ is the acceptance probability of the proposed move; one then sets $\Sigma_n^{(\ell)} = \Gamma_n^{(\ell)}\,{}^t(\Gamma_n^{(\ell)})$.

The potential benefit of using this estimate instead of (18)–(20) is that RAM finds, loosely speaking, a ‘local’ shape of the target distribution, which is often in practice close to a convex combination of the shapes of individual modes. In some situations, this proposal shape might allow better local exploration than the global covariance shape.
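A sketch of the RAM recursion (25), solving for the new lower-triangular factor via a Cholesky decomposition (our implementation choice; [33] uses an equivalent rank-one Cholesky update):

```python
import numpy as np

def ram_update(Gamma, z, acc_prob, g2, alpha_star=0.234):
    """Find lower-triangular Gamma_n with positive diagonal such that
    Gamma_n Gamma_n^T = Gamma [I + g2 (acc - alpha*) u u^T] Gamma^T,
    where u = z / |z| is the normalised proposal increment, cf. (25)."""
    u = z / np.linalg.norm(z)
    middle = np.eye(len(z)) + g2 * (acc_prob - alpha_star) * np.outer(u, u)
    return np.linalg.cholesky(Gamma @ middle @ Gamma.T)
```

Since $\gamma_{n,2} < 1$ and $|\alpha_n - \alpha^\star| \le 1$, the bracketed matrix stays positive definite, so the Cholesky factor always exists.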

### 2.4. Implementation details

In the experiments, we use the desired acceptance rate $\alpha^\star = 0.234$ suggested by theoretical results both for the swap kernel [17, 5] and for the random-walk Metropolis kernel [25, 28]. We employ step size sequences of the form $\gamma_{n,i} = C_i n^{-\kappa_i}$ with constants $C_i > 0$ and exponents $\kappa_i \in (1/2,1]$. This is a common choice in the stochastic approximation literature.

The projections $\Pi_\rho$, $\Pi_\Gamma$ and $\Pi_T$ in (14), (18) and (20), respectively, are used to enforce the stability of the adaptation process in order to simplify the theoretical analysis of the algorithm. We have not observed instability empirically, and believe that the algorithm would be stable without projections; in fact, for the random-walk adaptation, there exist some stability results [29, 31, 32]. Therefore, we recommend setting the limits in the constraint sets as large as possible, within the limits of numerical accuracy.

It is possible to employ other strategies for proposing swaps of the tempered states. Specifically, it is possible to try more than one swap at each iteration, or even to sweep through all the temperatures, without changing the invariant distribution of the chain. We made some preliminary tests with other strategies, but the results were not promising, so we kept the common approach of a single randomly chosen swap.

In the temperature adaptation, it is also possible to enforce a geometric progression and adapt only one parameter. More specifically, one can use $\rho^{(\ell)} \equiv \rho^{(1)}$ for all $\ell$ and perform the adaptation (14) only to update $\rho^{(1)}$. This strategy might induce more stable behaviour of the temperature parameter, especially when the number of levels is high. On the other hand, it can be dangerous, because the asymptotic acceptance probability across certain temperature levels can get low, inducing poor mixing.
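A sketch of this geometric variant (function name ours): a single parameter $\rho$ fixes the common ratio between consecutive inverse temperatures.

```python
import math

def geometric_betas(rho, L):
    """Geometric ladder beta^(l) = r^(l-1) with common ratio
    r = exp(-exp(rho)); only rho is adapted via (14)."""
    r = math.exp(-math.exp(rho))
    return [r ** l for l in range(L)]
```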

We consider only Gaussian proposal distributions in the random-walk Metropolis kernels. It is possible to employ other proposals as well; in fact, our theoretical results extend directly, for example, to multivariate Student proposal distributions.

We note that the adaptive parallel tempering algorithm can also be used in a block-wise manner, or in a Metropolis-within-Gibbs framework. More precisely, the adaptive random-walk chains can be run as Metropolis-within-Gibbs, and the state swapping can be done at the global level. This approach scales better with respect to the dimension in many situations. In particular, when the model is hierarchical, the structure of the model can allow significant computational savings. Finally, it is straightforward to extend the adaptive parallel tempering algorithm described above to general measure spaces. For the sake of exposition, we present the algorithm only in $\mathbb{R}^d$.

## 3. Theoretical results

### 3.1. Formal definitions and assumptions

Denote by $Y_n$ the proposals of the random-walk Metropolis step. We define the following filtration

 (26) $\mathcal{F}_n = \sigma\big\{X_0, (X_k, \bar X_{k-1}, Y_{k-1}),\ k = 1,\dots,n\big\}.$

By construction, the covariance matrices $\Sigma_n$ and the inverse temperatures $\beta_n$ are adapted to the filtration $\{\mathcal{F}_n\}$. With these notations, for any time step $n \ge 0$,

 $\mathbb{P}\big[X_{n+1} \in \cdot \mid \mathcal{F}_n\big] = \int S_{\beta_n}(X_n, \mathrm{d}z)\, M(\Sigma_n,\beta_n)(z, \cdot) = S_{\beta_n} M(\Sigma_n,\beta_n)(X_n, \cdot).$

Therefore, denoting $P(\Sigma,\beta) \stackrel{\mathrm{def}}{=} S_\beta M(\Sigma,\beta)$, we get

 (27) $\mathbb{E}\big[f(X_{n+1}) \mid \mathcal{F}_n\big] = P(\Sigma_n,\beta_n) f(X_n),$

for all $n \ge 0$ and all bounded measurable functions $f$.

We will consider the following assumption on the target distribution, which ensures geometric ergodicity of a random-walk Metropolis chain [1, 15]. Below, $|\cdot|$ applied to a vector (or a matrix) stands for the Euclidean norm.

• The density $\pi$ is bounded, bounded away from zero on compact sets, and differentiable, such that

 (28) $\lim_{r\to\infty} \sup_{|x|\ge r} \frac{x}{|x|}\cdot\nabla\log\pi(x) = -\infty,$

 (29) $\lim_{r\to\infty} \sup_{|x|\ge r} \frac{x}{|x|}\cdot\frac{\nabla\pi(x)}{|\nabla\pi(x)|} < 0.$

In words, (A3.1) only requires that the target distribution is sufficiently regular and that its tails decay at a rate faster than exponential. We remark that the tempering approach is only well-defined when $\pi^\beta$ is integrable for the exponents $\beta$ of interest; this is always the case under (A3.1).

### 3.2. Geometric ergodicity and continuity of parallel tempering kernels

We first state and prove that the parallel tempering algorithm is geometrically ergodic under (A3.1) . This result might be of independent interest, because geometric ergodicity is well known to imply central limit theorems.

We show that, under mild conditions, this kernel is phi-irreducible, strongly aperiodic, and $V$-uniformly ergodic, where the function $V$ is a sum of appropriately chosen negative powers of the target density. Specifically, for $\beta \in (0,1]$, consider the drift function

 (30) $V_\beta\big(x^{(1:L)}\big) \stackrel{\mathrm{def}}{=} \sum_{\ell=1}^{L} V_\beta\big(x^{(\ell)}\big),$

where, for $x \in \mathsf{X}$,

 (31) $V_\beta(x) = \big(\pi(x)/\|\pi\|_\infty\big)^{-\beta/2}.$

For $\beta_0 \in (0,1]$, define the set

 (32) $\mathcal{K}_{\beta_0} \stackrel{\mathrm{def}}{=} \big\{\beta^{(1:L)} \in (0,1]^L : \beta_0 \le \beta^{(L)} \le \cdots \le \beta^{(1)}\big\}.$

We denote the $V$-variation of a signed measure $\mu$ as $\|\mu\|_V \stackrel{\mathrm{def}}{=} \sup_{|g|\le V} |\mu(g)|$, where the supremum is taken over all measurable functions $g$ with $|g| \le V$. The $V$-norm of a function $f$ is defined as $\|f\|_V \stackrel{\mathrm{def}}{=} \sup_x |f(x)|/V(x)$.

###### Theorem 1.

Assume (A3.1). Let $\epsilon > 0$ and $\beta_0 \in (0,1]$. Then there exist $C_{\epsilon,\beta_0} < \infty$ and $\varrho_{\epsilon,\beta_0} \in (0,1)$ such that, for all $\Sigma \in \mathcal{M}_+(d,\epsilon)^L$, $\beta \in \mathcal{K}_{\beta_0}$ and $x \in \mathsf{X}^L$,

 (33) $\big\|P^n_{(\Sigma,\beta)}(x,\cdot) - \pi_\beta\big\|_{V_{\beta_0}} \le C_{\epsilon,\beta_0}\, \varrho^n_{\epsilon,\beta_0}\, V_{\beta_0}(x).$

Geometric ergodicity in turn implies the existence of a solution of the Poisson equation, and also provides bounds on the growth of this solution [22, Chapter 17].

###### Corollary \thecorollary.

Assume (A3.1). Let $\epsilon > 0$ and $\beta_0 \in (0,1]$. For any measurable function $f$ with $\|f\|_{V^\alpha_{\beta_0}} < \infty$ for some $\alpha \in (0,1]$, there exists a unique (up to an additive constant) solution of the Poisson equation

 (34) g−P(Σ,β)g=f−πβ(f).

This solution is denoted $\hat f_{(\Sigma,\beta)}$. In addition, there exists a constant $D_{\epsilon,\beta_0} < \infty$ such that

 (35) $\big\|\hat f_{(\Sigma,\beta)}\big\|_{V^\alpha_{\beta_0}} \le D_{\epsilon,\beta_0}\, \|f\|_{V^\alpha_{\beta_0}}.$

We will next establish that the parallel tempering kernel is locally Lipschitz continuous. For any drift function $V_\beta$, denote by $D_{V_\beta}$ the $V_\beta$-variation of the kernels $P_{(\Sigma,\beta)}$ and $P_{(\Sigma',\beta')}$,

 (36) $D_{V_\beta}\big[(\Sigma,\beta),(\Sigma',\beta')\big] \stackrel{\mathrm{def}}{=} \sup_{x\in\mathsf{X}^L} \frac{\big\|P_{(\Sigma,\beta)}(x,\cdot) - P_{(\Sigma',\beta')}(x,\cdot)\big\|_{V_\beta}}{V_\beta(x)}.$

For and , define the set

 (37)
###### Theorem 2.

Assume (A3.1). Let $\epsilon > 0$, $\alpha \in (0,1]$ and $\beta_0 \in (0,1]$. For any $\eta > 0$, there exists $K_{\epsilon,\alpha,\beta_0,\eta} < \infty$ such that, for any $\Sigma, \Sigma' \in \mathcal{M}_+(d,\epsilon)^L$ and any $\beta, \beta' \in \mathcal{K}_{\beta_0}$, it holds that

 $D_{V^\alpha_{\beta_0}}\big[(\Sigma,\beta),(\Sigma',\beta')\big] \le K_{\epsilon,\alpha,\beta_0,\eta}\,\big\{|\beta-\beta'| + |\Sigma-\Sigma'|\big\}.$

### 3.3. Strong law of large numbers

We can state an ergodicity result for the adaptive parallel tempering algorithm, given that the step size sequences satisfy the following natural condition.

• Assume that the step sizes $\{\gamma_{n,i}\}$, defined in (14), (18) and (20), are non-negative and satisfy the following conditions:

1. For , and

###### Remark \theremark.

It is easy to see that the sequences $\gamma_{n,i} = C_i n^{-\kappa_i}$ with some constants $C_i > 0$ and exponents $\kappa_i \in (1/2,1]$ satisfy (A3.3).

###### Theorem 3.

Assume (A3.1)-(A3.3) and let $\beta_0 \in (0,1]$. Then, for any function $f$ such that $\|f\|_{V^\alpha_{\beta_0}} < \infty$ for some $\alpha \in (0,1)$, and given that $\lim_{n\to\infty} \pi_{\beta_n}(f)$ exists, we have

 $\frac{1}{n}\sum_{i=1}^{n} f(X_i) \longrightarrow \lim_{n\to\infty} \pi_{\beta_n}(f) \quad \text{a.s.}$
###### Remark \theremark.

In practice, one is usually only interested in integrating with respect to $\pi$, which means functions depending only on the first coordinate, that is, $f(x^{(1:L)}) = f(x^{(1)})$. In this case, the limit condition is trivial, because $\pi_{\beta_n}(f) = \pi(f)$ for all $n$.

### 3.4. Convergence of temperature adaptation

The strong law of large numbers (Theorem 3) does not require the convergence of the inverse temperatures if only the coolest chain is involved (subsection 3.3). It is, however, important to work out the convergence of the adaptation, because it tells us what to expect of the asymptotic behaviour of the algorithm. Given the convergence, it is also possible to establish central limit theorems [1]; however, we do not pursue this here.

We denote the associated mean field of the stochastic approximation procedure (14) by

 $h(\rho) \stackrel{\mathrm{def}}{=} \big[h^{(1)}(\rho^{(1)}), \dots, h^{(L-1)}(\rho^{(1)},\dots,\rho^{(L-1)})\big],$

where

 $h^{(\ell)}(\rho^{(1)},\dots,\rho^{(\ell)}) \stackrel{\mathrm{def}}{=} \int H^{(\ell)}(\rho^{(1)},\dots,\rho^{(\ell)},x)\, \pi_\beta(\mathrm{d}x).$

We may write

 $h^{(\ell)}(\rho) = \iint \varpi_\beta^{(\ell)}\big(x^{(\ell)},x^{(\ell+1)}\big)\, \frac{\pi^{\beta^{(\ell)}}(\mathrm{d}x^{(\ell)})}{Z(\beta^{(\ell)})}\, \frac{\pi^{\beta^{(\ell+1)}}(\mathrm{d}x^{(\ell+1)})}{Z(\beta^{(\ell+1)})} - \alpha^\star,$

where $Z(\beta)$ is the normalising constant defined in (2).

The following result establishes the existence and uniqueness of the stable point of the adaptation. In words, it implies that there exist unique temperatures such that the mean rate of accepting proposed swaps is $\alpha^\star$ at every level.

###### Proposition \theproposition.

Assume (A3.1). Then, there exists a unique solution $\hat\rho$ of the system of equations $h^{(\ell)}(\rho^{(1:\ell)}) = 0$, $\ell = 1,\dots,L-1$.

###### Remark \theremark.

In the Proposition of subsection 3.4, it is sufficient to assume that the support of $\pi$ has infinite Lebesgue measure and that $Z(\beta)$ is finite for all $\beta \in (0,1]$; see subsection 5.4.

###### Remark \theremark.

In case the support of $\pi$ has a finite Lebesgue measure, it is not difficult to show that for a sufficiently large number of levels there is no solution $\hat\rho$. Instead, informally, some of the parameters satisfy $\rho^{(\ell)} \to \infty$, so that the corresponding inverse temperatures satisfy $\beta^{(\ell)} \to 0$. For our algorithm, this would imply that it asymptotically simulates the uniform distribution on the support of $\pi$ at these levels.

For the convergence of the temperature adaptation, we require more stringent conditions on the step size sequence.

• Assume that the step sizes defined in (14), (18), (19) and (20) are non-negative and satisfy the following conditions:

1. , , and , .

It is easy to check that the sequences introduced in subsection 3.3 satisfy (A3.4) if we assume in addition that .

###### Theorem 4.

Assume (A3.1)-(A3.4). In addition, we assume for all $\ell$ that $\hat\rho^{(\ell)}$ lies in the interior of the constraint set, where $\hat\rho$ is given by the Proposition of subsection 3.4. Then

 $\lim_{n\to\infty} \rho_n = \hat\rho \quad \text{a.s.}$

## 4. Experiments

We consider two different types of examples: a mixture of Gaussians in Section 4.1 and a challenging spatial imaging example in Section 4.2. In all the experiments, we use the step size sequences of Section 2.4, except for the RAM adaptation, where a slightly different step size is used (see [33] for a discussion). We did not observe numerical instability issues, so the adaptations were not enforced to be stable by projections. We used the same initial values for the adapted parameters (temperature differences, covariances and scalings) in all experiments.

### 4.1. Mixture of Gaussians

We first consider a well-known two-dimensional mixture of Gaussians example [e.g. 19, 7]. The example consists of 20 mixture components, each having a diagonal covariance. Figure 1 shows an example of the points simulated by our parallel tempering algorithm in this example, when we use the default (covariance) adaptation to adjust the random-walk proposals. Figure 2 shows the convergence of the temperature parameters in the same example.

We computed estimates of the means and the squares of the coordinates (discarding an initial burn-in), and show the mean and standard deviation (in parentheses) over 100 runs of our parallel tempering algorithm in Table 1. We considered three different random-walk adaptations: the default adaptation in (18)-(20) (Cov), the common mean and covariance estimators defined in (23)-(24) (Cov(g)), and the RAM update defined in (25). Table 1 shows the results in the same form as [7, Table 3] to allow easy comparison. Compared with [7], our results show smaller deviations than the unadapted parallel tempering, but larger deviations than their samplers that also include equi-energy moves. We remark that we did not adjust our algorithm at all for this example, but let the adaptation take care of it. There are no significant differences between the random-walk adaptation algorithms.

When looking at the simulated points in Figure 1, it is clear that three temperature levels are enough to allow good mixing in the above example. We repeated the example with three energy levels, and increased the number of iterations in order to have a comparable computational cost. The summary of the results in Table 2 indicates increased accuracy, coming close to the results reported in [7] for samplers with equi-energy moves.

We also considered a more difficult modification of the mixture example above. We decreased the variance of the mixture components and increased the dimension. The mixture means of the added coordinates were all zero. We ran our adaptive parallel tempering algorithm in this case with several temperature levels; Table 3 summarises the results with different numbers of iterations. In all cases, the first half of the iterations were burn-in. In this scenario, the different random-walk adaptation algorithms behave slightly differently. In particular, the common mean and covariance estimates (Cov(g)) seem to improve over the separate covariances (Cov). The RAM approach seems to provide the most accurate results. However, we believe that this is probably due to the special properties of the example, namely the fact that all the mixture components have a common covariance, close to which RAM converges at the first level; see the discussion in [33].

### 4.2. Spatial imaging

As another example, we consider identifying ice floes from polar satellite images, as described by Banfield and Raftery [6]. The image under consideration is a 200 by 200 gray-scale satellite image, and we focus on the same 40 by 40 subregion as in [8]. The goal is to identify the presence and position of polar ice floes. Towards this goal, Higdon [14] employs a Bayesian model with an Ising model prior and the following posterior distribution on $x$,

 $\log\pi(x \mid y) \propto \sum_{1\le i,j\le 40} \alpha\,\mathbb{1}\{x_{i,j} = y_{i,j}\} + \sum_{(i,j)\sim(i',j')} \beta\,\mathbb{1}\{x_{i,j} = x_{i',j'}\},$

where the neighbourhood relation $(i,j)\sim(i',j')$ is defined by the vertical, horizontal and diagonal adjacencies of each pixel. The posterior distribution favours configurations $x$ which are similar to the original image $y$ (first term) and for which the neighbouring pixels are equal (second term).

In [14] and [8], the authors observed that standard MCMC algorithms which propose to flip one pixel at a time fail to explore the modes of the posterior. There are, however, some advantages in using such an algorithm, provided we can overcome the difficulty in mixing between the modes. Specifically, in order to compute the (log-)difference of the unnormalised density values, we only need to examine the neighbourhoods of the pixels that have changed. Therefore, the proposal with one pixel flip at a time has a low computational cost. Moreover, such an algorithm is easy to implement.
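The locality argument can be sketched as follows; the function and array layout are ours, with the 8-pixel neighbourhood of the text and binary images coded as 0/1 arrays.

```python
import numpy as np

def delta_log_post(x, y, i, j, alpha, beta):
    """Change in log pi(x|y) caused by flipping pixel (i, j) of x.
    Only the data term at (i, j) and its neighbourhood are examined,
    so the cost is O(1) regardless of the image size."""
    n, m = x.shape
    new = 1 - x[i, j]
    # data term: alpha * 1{x_ij = y_ij}
    delta = alpha * (float(new == y[i, j]) - float(x[i, j] == y[i, j]))
    # interaction terms over the 8 adjacent pixels
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                delta += beta * (float(new == x[a, b]) - float(x[i, j] == x[a, b]))
    return delta
```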

We used our parallel tempering algorithm with the above-mentioned single-pixel proposal to simulate the posterior of this model. We ran 100 replications of the algorithm. The obtained results, shown in Figure 3, are similar to those of [14] and [8]. We emphasise again that our algorithm provided good results without any prior tuning.

## 5. Proofs

### 5.1. Proof of Theorem 1

The proof follows from arguments in the literature that guarantee geometric ergodicity of the individual random-walk Metropolis kernels, and from observing that the swap kernel preserves permutation-invariant drift functions.

We start with an easy lemma showing that a drift condition for a hotter chain implies a drift condition for the cooler (lower-temperature) chains.

###### Lemma \thelemma.

Consider the drift function $W(x) \stackrel{\mathrm{def}}{=} c\,\pi^{-\kappa}(x)$ for some positive constants $c$ and $\kappa$. Then, for any $\Sigma \in \mathcal{M}_+(d)$,

 $\beta \le \beta' \implies M(\Sigma,\beta')W(x) \le M(\Sigma,\beta)W(x), \quad \text{for all } x\in\mathsf{X}.$
###### Proof.

We write

 $\frac{M(\Sigma,\beta)W(x)}{W(x)} = \int_{\{y:\pi(y)\ge\pi(x)\}} \left(\frac{\pi(x)}{\pi(y)}\right)^{\kappa} q_\Sigma(x-y)\,\mathrm{d}y + \int_{\{y:\pi(y)<\pi(x)\}} \left[1 - \left(\frac{\pi(y)}{\pi(x)}\right)^{\beta} + \left(\frac{\pi(y)}{\pi(x)}\right)^{\beta-\kappa}\right] q_\Sigma(x-y)\,\mathrm{d}y.$

The first term is independent of $\beta$; since for $\pi(y) < \pi(x)$ the map $\beta \mapsto 1 - t^\beta + t^{\beta-\kappa}$ with $t = \pi(y)/\pi(x) \in (0,1)$ is non-increasing, the second term is also non-increasing with respect to $\beta$. ∎

To control the ergodicity of each individual random-walk Metropolis sampler, it is required to have control of the minorisation and drift constants for the kernels $M(\Sigma,\beta)$. The following lemma provides such explicit control.

###### Lemma \thelemma.

Assume (A3.1). Let $\epsilon > 0$ and $\beta \in (0,1]$. There exist $\lambda_{\epsilon,\beta} \in (0,1)$ and $b_{\epsilon,\beta} < \infty$ such that for any $\Sigma \in \mathcal{M}_+(d,\epsilon)$, we get

 (38) $M(\Sigma,\beta)V_\beta \le \lambda_{\epsilon,\beta} V_\beta + b_{\epsilon,\beta},$

where $V_\beta$ is the drift function defined in (31).

###### Proof.

It is easily seen that if the target distribution $\pi$ is super-exponential in the tails (A3.1), then all the tempered versions $\pi^\beta/Z(\beta)$, where the normalising constant $Z(\beta)$ is defined in (2), satisfy (A3.1) as well.

The result then follows from Andrieu and Moulines [1, Proposition 12]. ∎

###### Proposition \theproposition.

Assume (A3.1) and let $\epsilon > 0$ and $\beta_0 \in (0,1]$. Then, there exist $\lambda_{\epsilon,\beta_0} \in (0,1)$ and $b_{\epsilon,\beta_0} < \infty$ such that, for all $\Sigma \in \mathcal{M}_+(d,\epsilon)^L$ and $\beta \in \mathcal{K}_{\beta_0}$,

 (39) $M(\Sigma,\beta)V_{\beta_0} \le \lambda_{\epsilon,\beta_0} V_{\beta_0} + b_{\epsilon,\beta_0},$

 (40) $S_\beta V_{\beta_0} = V_{\beta_0},$

 (41) $P(\Sigma,\beta)V_{\beta_0} \le \lambda_{\epsilon,\beta_0} V_{\beta_0} + b_{\epsilon,\beta_0}.$
###### Proof.

By the first lemma of subsection 5.1, since $\beta_0 \le \beta^{(\ell)}$ for every $\ell$, we get

 (42) $M(\Sigma,\beta)V_{\beta_0}\big(x^{(1:L)}\big) = \sum_{\ell=1}^{L} M\big(\Sigma^{(\ell)},\beta^{(\ell)}\big)V_{\beta_0}\big(x^{(\ell)}\big) \le \sum_{\ell=1}^{L} M\big(\Sigma^{(\ell)},\beta_0\big)V_{\beta_0}\big(x^{(\ell)}\big).$

Then, by the second lemma of subsection 5.1, since $\Sigma^{(\ell)} \in \mathcal{M}_+(d,\epsilon)$, it holds

 $\sum_{\ell=1}^{L} M\big(\Sigma^{(\ell)},\beta_0\big)V_{\beta_0}\big(x^{(\ell)}\big) \le \lambda_{\epsilon,\beta_0} \sum_{\ell=1}^{L} V_{\beta_0}\big(x^{(\ell)}\big) +$