Correlated pseudo-marginal Metropolis-Hastings
using quasi-Newton proposals
Abstract
Pseudo-marginal Metropolis-Hastings (pmMH) is a versatile algorithm for sampling from target distributions which cannot easily be evaluated pointwise.
However, pmMH requires a good proposal distribution to sample efficiently from the target, and such proposals can be difficult to construct in practice.
This is especially a problem for high-dimensional targets, where the standard random-walk proposal is inefficient.
We extend pmMH to allow for constructing the proposal based on information from multiple past iterations.
As a consequence, quasi-Newton (qN) methods can be employed to form proposals which utilize gradient information to guide the Markov chain to areas of high probability and to construct approximations of the local curvature to scale step sizes.
The proposed method is demonstrated on several problems, and the results indicate that qN proposals can perform better than other common Hessian-based proposals.
Keywords: Markov chain Monte Carlo, Bayesian inference, Riemann manifolds, Hessian estimation, quasi-Newton methods, machine learning.
1 Introduction
Sampling from some target distribution with density \pi(\theta) is a common problem in statistics and machine learning. This problem arises when exploring the posterior in the Bayesian statistical paradigm (Robert, 2007; Gelman et al., 2013) or the cost function in a supervised learning problem (Murphy, 2012; Ghahramani, 2015; Wills and Schön, 2018).
A standard choice is to make use of Metropolis-Hastings (MH; Robert and Casella, 2004) to sample from the target. This is done by constructing a Markov chain which has the sought target distribution as its stationary distribution. Samples from the target can therefore be generated by simulating this Markov chain.
However, this requires pointwise evaluation of the target density, which can be intractable or computationally prohibitive. The former can be the result of the model containing missing data or latent variables. The latter is common in intractable likelihood models (Marin et al., 2012) and data-rich scenarios (Bardenet et al., 2017).
In some situations, it is possible to circumvent this problem by changing the target into something that can be (efficiently) evaluated pointwise. MH can then be used to sample from this so-called extended target,
\bar{\pi}(\theta, u) = \widehat{\pi}(\theta \mid u) \, m(u),   (1)
where \widehat{\pi}(\theta \mid u) denotes some unbiased estimator of the original target \pi(\theta). Here, we introduce some auxiliary variables denoted u with density m(u) to facilitate pointwise evaluation of the intractable target. This is the core idea of the pseudo-marginal MH (pmMH; Andrieu and Roberts, 2009) algorithm, which can be seen as an exact approximation of the intractable ideal MH algorithm targeting \pi(\theta). That is, pmMH makes use of noisy evaluations of the target but still generates samples from the original target in the same manner as the exact ideal algorithm would.
A concrete example of a pmMH algorithm arises when sequential Monte Carlo (SMC; Del Moral et al., 2006) is employed to construct the estimator, as direct pointwise evaluation of the target is not possible. This leads to the particle MH (pMH; Andrieu et al., 2010) algorithm, which allows for Bayesian inference in state-space models (SSMs). Another example, considered by Wills and Schön (2018), is to learn the weights in, e.g., deep neural networks (LeCun et al., 2015), where direct evaluation of the target is computationally prohibitive due to the size of the data.
The performance of pmMH depends on: (i) good parameter proposals and (ii) computationally efficient estimators of the target. Researching these choices is important to promote a wider adoption of pmMH.
The main contribution of this paper is an extension of pmMH which allows the proposal to make use of information from multiple previous states. Usually, the proposal can only depend on the last state of the Markov chain, but this requirement is relaxed by extending earlier work for MH (Roberts and Gilks, 1994; Zhang and Sutton, 2011). This enables making use of quasi-Newton (qN; Nocedal and Wright, 2006) methods from optimisation to construct efficient proposals. Specifically, our contribution entails:

(i) two novel proposals based on the symmetric rank-one (SR1) method with a novel trust-region approach and a tailored regularised least squares (LS) method,
(ii) a proof of the validity of pmMH with memory,
(iii) extensive numerical benchmarks showing the benefits of the novel proposals over standard choices.
The numerical benchmarks indicate that the qN proposals can be used to compute accurate estimates of the Hessian with only a minor computational overhead. As a result, the novel proposals can outperform direct computations of the Hessian or the use of SMC methods in terms of computational speed and/or quality of the posterior estimates. Hence, qN proposals can be a useful alternative to other Hessian-based proposals.
Most popular proposals in pmMH are currently based on discretisations of Langevin diffusions (Roberts and Rosenthal, 1998; Girolami and Calderhead, 2011), which can be extended to pmMH as discussed by Dahlin et al. (2015b) and Nemeth et al. (2016). These proposals require accurate estimates of the gradient and Hessian of the log-target to perform well. In practice, this can be a problem, as the Hessian can be difficult to estimate accurately, which results in a computational bottleneck.
One approach to circumvent this problem is to employ qN methods to construct a local Hessian approximation from gradient information, which is typically faster and easier to estimate accurately. This idea has been employed for MH (Zhang and Sutton, 2011), pMH (Dahlin et al., 2015c, 2018) and Hamiltonian Monte Carlo (HMC; Simsekli et al., 2016). This work extends the use of qN methods to pmMH, proves the validity of this idea and addresses some practical implementation issues.
The idea of adapting the proposal on-the-fly within MH and pmMH is common and has received a lot of attention by, e.g., Andrieu and Thoms (2008). A similar idea to the one presented in this paper is discussed for MH by, e.g., Gilks et al. (1994), Haario et al. (2001) and Cai et al. (2008). However, a major difference is that no gradient or local Hessian information is used in those works when constructing the proposal.
2 An overview of the proposed algorithm
This section aims to introduce the pmMH algorithm with memory. We begin by briefly discussing standard pmMH to set the notation and then continue by presenting the extension, which allows for making use of information from multiple previous iterations to construct good proposals. General introductions to pmMH and pMH are given by Andrieu and Roberts (2009), Andrieu et al. (2010) and Dahlin and Schön (2017).
2.1 Standard pseudo-marginal MH
Consider the extended target density in (1), defined on the product of the parameter space and the space of the auxiliary variables. As discussed in the introduction, it is sometimes not possible to implement MH directly to sample from the target, as pointwise evaluation of its density is not possible, e.g., when the model contains latent, missing or unknown variables.
Instead, we introduce the auxiliary variables and form an estimator of the target. If this estimator is unbiased, then an MH algorithm targeting the extended density can generate samples from the original target by marginalisation. To see why this scheme is valid, note that the unbiasedness of the estimator can be expressed as
\int \widehat{\pi}(\theta \mid u) \, m(u) \, \mathrm{d}u = \pi(\theta).   (2)
This means that we can recover the target density by marginalisation of the extended target density. Moreover, consider the MH algorithm targeting the extended target, which consists of two steps: (i) sampling a candidate from the proposal and then (ii) accepting/rejecting the candidate. In the first step, the candidate is sampled from a proposal distribution,
q(\theta', u' \mid \theta, u) = q_{\theta}(\theta' \mid \theta, u) \, q_u(u' \mid u),   (3)
where the proposals for the parameters and for the auxiliary variables are specified by the user; these choices are discussed in more detail in Section 3. In the second step, the proposed parameters are accepted or rejected with probability,
\alpha(\theta', u'; \theta, u) = \min\left\{1, \frac{\bar{\pi}(\theta', u') \, q(\theta, u \mid \theta', u')}{\bar{\pi}(\theta, u) \, q(\theta', u' \mid \theta, u)}\right\} = \min\left\{1, \frac{\widehat{\pi}(\theta' \mid u') \, m(u') \, q_{\theta}(\theta \mid \theta', u') \, q_u(u \mid u')}{\widehat{\pi}(\theta \mid u) \, m(u) \, q_{\theta}(\theta' \mid \theta, u) \, q_u(u' \mid u)}\right\},   (4)
where the expanded expression is given to simplify the exposition in the subsequent sections. Hence, we can run a standard MH algorithm targeting the extended distribution to obtain samples from the original target.
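As an illustration, the scheme above can be sketched in a few lines of Python. This is a hedged toy example, not the paper's implementation: the function names are hypothetical, a symmetric random walk plays the role of the parameter proposal, the auxiliary variables are redrawn independently inside the estimator, and the noisy log-target value (assumed unbiased on the natural scale) is stored together with the state of the chain.

```python
import numpy as np

def pmmh(log_target_est, rw_scale, theta0, n_iters, rng):
    """Minimal pseudo-marginal MH with a symmetric random-walk proposal:
    the log-target is only available as a noisy estimate, which is kept
    together with the current state of the chain."""
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_z = log_target_est(theta, rng)      # noisy log-target at current state
    chain = np.empty((n_iters, theta.size))
    for i in range(n_iters):
        theta_prop = theta + rw_scale * rng.standard_normal(theta.shape)
        log_z_prop = log_target_est(theta_prop, rng)
        # Accept/reject using the *estimated* targets; the proposal density
        # ratio cancels because the random walk is symmetric.
        if np.log(rng.uniform()) < log_z_prop - log_z:
            theta, log_z = theta_prop, log_z_prop
        chain[i] = theta
    return chain

# Toy target: a standard Gaussian whose density estimate carries
# multiplicative log-normal noise, constructed to be unbiased.
sigma_noise = 0.2

def log_target_est(theta, rng):
    noise = rng.normal(-0.5 * sigma_noise**2, sigma_noise)
    return -0.5 * float(theta @ theta) + noise

rng = np.random.default_rng(0)
chain = pmmh(log_target_est, 1.0, 0.0, 20000, rng)
```

Despite the noise in the estimator, the chain still targets the exact Gaussian, which is the defining property of the pseudo-marginal construction.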
We obtain pMH as a special case of pmMH when the target is the parameter posterior distribution,
\pi(\theta) = p(\theta \mid y) = \frac{p(\theta) \, p(y \mid \theta)}{p(y)},   (5)
where p(\theta), p(y \mid \theta) and p(y) denote the parameter prior, the likelihood and the marginal likelihood, respectively. An SMC algorithm with N particles is also used to construct an estimator \widehat{p}(y \mid \theta) of the likelihood such that
\mathbb{E}\big[\widehat{p}(y \mid \theta)\big] = p(y \mid \theta).   (6)
The auxiliary variables are in this case all the random variables generated during a run of SMC or, equivalently, the particles and their ancestral lineage. That is, the auxiliary variables contain all the information needed to reconstruct the estimator, and the SMC algorithm is deterministic given them. We return to pMH in Section 6.3.
2.2 Allowing pmMH to use a memory
We can introduce memory into pmMH by forming the K-fold product of the extended target (1) for some memory length K. This gives the extended product target with the density given by
\bar{\pi}_K(\bar{\theta}, \bar{u}) = \prod_{k=1}^{K} \widehat{\pi}(\bar{\theta}_k \mid \bar{u}_k) \, m(\bar{u}_k),   (7)
with \bar{\theta} = (\bar{\theta}_1, \ldots, \bar{\theta}_K) and \bar{u} = (\bar{u}_1, \ldots, \bar{u}_K).
The product target (7) consists of copies of the original target (1), referred to as subtargets. Hence, each component of the product target corresponds to one complete set of parameters for (1). This is the reason for introducing the bar notation: to distinguish between a single element of the parameter vector in (1) and a component of the product target, which is in itself a parameter vector. Finally, note that one sample from the product target therefore consists of several realisations of possible parameter vectors for (1).
As usual, we can recover the target distribution by marginalisation if the estimator is unbiased. This follows since each subtarget is independent of the others and integrates to the original target by the unbiasedness property (2).
We can now sample from (7) by using a Metropolis-within-Gibbs (Robert and Casella, 2004; Roberts and Rosenthal, 2006) scheme, as discussed in Zhang and Sutton (2011). Hence, a sample of one component of the product target is generated at each iteration while keeping the remaining components fixed. This amounts to sampling from a Markov kernel,
(8) 
where part of the dependence is suppressed for brevity. The Markov kernel corresponds to one iteration of a pmMH algorithm and its exact form is given later in Section 5. To simplify the notation, we introduce the memory of the Markov chain,
(9) 
which is the information available within the memory for the current component during the iteration. The dependence on the iteration number is suppressed for brevity. The memory consists of the elements which have already been updated during the current iteration together with the remaining elements from the previous iteration.
Repeated sampling from (8) will ultimately produce samples from the extended product target. The proposal for each component during the iteration can make use of the information in the memory. Hence, we can write the proposal as
(10) 
The resulting acceptance probability is given by
(11) 
which follows from the subtargets corresponding to the fixed components cancelling. The complete procedure is given in Algorithm 1.
A systematic scan is used in this paper, where each Markov kernel is applied in sequence to update the chain. It is, however, also possible to make use of a random scan, which applies the kernels in a random order. This might further increase the performance of the algorithm but is left as future work.
3 Designing good proposals
The design of proposal distributions is instrumental for obtaining good performance in pmMH. Recall that two different proposal distributions are required for Algorithm 1, i.e., one for the parameters and another for the auxiliary variables. We discuss these in turn in the context of standard pmMH without a memory to ease the notation. We consider the general case with memory in Section 4.3.
3.1 Proposal for auxiliary variables
The standard choice for the proposal for the auxiliary variables is to sample from the prior, i.e.,
q_u(u' \mid u) = m(u'),
which is known as the independent proposal, as the candidate u' does not depend on the current u. The expression for m depends on the form of the estimator, but a common choice is a standard Gaussian distribution with mean zero and an identity matrix as its covariance. This results in the acceptance probability (4) simplifying to
\alpha(\theta', u'; \theta, u) = \min\left\{1, \frac{\widehat{\pi}(\theta' \mid u') \, q_{\theta}(\theta \mid \theta', u')}{\widehat{\pi}(\theta \mid u) \, q_{\theta}(\theta' \mid \theta, u)}\right\},   (12)
by using the structure of the proposal in (3).
Another alternative is the Crank-Nicolson (CN; Dahlin et al., 2015a; Deligiannidis et al., 2017) proposal, corresponding to a first-order autoregressive process,
u' = \sqrt{1 - \sigma_u^2} \, u + \sigma_u w, \qquad w \sim \mathcal{N}(0, I_{N_u}),   (13)
for some step length \sigma_u \in [0, 1], where N_u denotes the number of elements in u. Moreover, we select a standard Gaussian distribution as m, which results in the acceptance probability again being given by (12).
This construction is only possible if u can be transformed into any random variable required by the estimator. Often this is not a problem, as inverse CDF transformations can be used to convert the Gaussian random numbers into random numbers from most standard distributions. The independent proposal for u is recovered when \sigma_u = 1. Selecting \sigma_u = 0 corresponds to fixed auxiliary variables, which would result in biased estimates of the target.
The correlation introduced by (13) can have a beneficial impact on the performance of pmMH, as it reduces the probability of the Markov chain getting stuck in a certain state for many iterations. Getting stuck can occur if the log-target is substantially overestimated, leading to the candidate parameter being accepted. After this, it is difficult to get any candidate accepted, as the estimate of the log-target at the current state is much larger than its true value. The correlation in the auxiliary variables means that the estimates of the log-target for subsequent candidates are also larger than their true values, which increases the probability of accepting a candidate and lets the Markov chain get loose.
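A minimal sketch of the CN move and the CDF transform discussed above, assuming the auxiliary variables are standard Gaussian; the function names are illustrative and not from the paper's implementation:

```python
import numpy as np
from math import erf

def crank_nicolson_step(u, sigma_u, rng):
    """One Crank-Nicolson move for standard Gaussian auxiliary variables;
    it leaves the N(0, I) prior invariant for any step length in [0, 1]."""
    w = rng.standard_normal(u.shape)
    return np.sqrt(1.0 - sigma_u**2) * u + sigma_u * w

def to_uniform(u):
    # Gaussian CDF transform: maps each element of u into (0, 1); the
    # uniforms can then be pushed through inverse CDFs of whatever
    # distributions the estimator actually requires.
    return np.array([0.5 * (1.0 + erf(v / np.sqrt(2.0))) for v in np.ravel(u)])
```

Setting the step length to one recovers the independent proposal, while zero leaves the auxiliary variables unchanged, matching the two limiting cases described above.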
3.2 Proposal for parameters
A popular choice for the parameter proposal is a Gaussian distribution given by
q_{\theta}(\theta' \mid \theta, u) = \mathcal{N}\big(\theta'; \mu(\theta, u), \Sigma(\theta, u)\big),   (14)
where the mean \mu and covariance \Sigma are allowed to depend on the current parameters \theta and auxiliary variables u.
The simplest Gaussian proposal is given by the random walk, corresponding to a mean equal to the current parameters and some fixed covariance matrix. However, it is known that this proposal scales poorly with the dimension of the parameter space. For larger problems, it is better to make use of gradient and Hessian information when constructing the proposal. This information can be used to introduce a mode-seeking behaviour by following the drift induced by the gradient, as well as to scale the proposal to the local curvature by using Hessian information.
We can introduce this additional information into the proposal by considering a Langevin diffusion,
\mathrm{d}\theta(t) = \frac{1}{2} P \, \nabla \log \pi\big(\theta(t)\big) \, \mathrm{d}t + P^{1/2} \, \mathrm{d}B(t),   (15)
where B(t) denotes a Brownian motion and P a preconditioning matrix. This choice is motivated by the diffusion having the target as its stationary distribution under some mild assumptions. Hence, samples from the target can be obtained by simply simulating the diffusion for a certain amount of time. Moreover, we follow Girolami and Calderhead (2011) and make use of the negative inverse Hessian of the log-target as the preconditioning matrix to facilitate efficient sampling.
The first-order Euler discretisation of the Langevin diffusion (15) is given by the Gaussian proposal (14) with the statistics,
\mu(\theta, u) = \theta + \frac{\epsilon^2}{2} \, \mathsf{H}(\theta) \, \mathsf{G}(\theta), \qquad \Sigma(\theta, u) = \epsilon^2 \, \mathsf{H}(\theta),   (16)
where \epsilon denotes a step size and with
\mathsf{G}(\theta) = \nabla \log \pi(\theta), \qquad \mathsf{H}(\theta) = -\big(\nabla^2 \log \pi(\theta)\big)^{-1},
denoting the gradient and negative inverse Hessian of the log-target, respectively.
Note that it is not possible to compute the gradient and Hessian of the log-target in many situations when the target itself cannot be evaluated. However, these quantities can be estimated using, e.g., SMC, which makes use of the auxiliary variables entering the proposal. For example, in SSMs, the gradient can be estimated by employing particle smoothing together with the Fisher identity (Cappé et al., 2005, p. 352).
The Hessian can be approximated in a similar manner by the Louis identity (Cappé et al., 2005, p. 352). The problem is that the accuracy of the Hessian estimates is often worse than that of the gradient estimates. Empirically, the estimates of the smallest diagonal and off-diagonal elements of the Hessian are often very noisy, even with many particles in the SMC algorithm. This introduces a computational bottleneck, as each iteration of pmMH becomes prohibitively expensive.
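The proposal statistics of the discretised Langevin diffusion can be sketched as below; this is a hedged illustration which assumes the gradient and the negative inverse Hessian are supplied from elsewhere (e.g., the estimators discussed above), and the function name is hypothetical:

```python
import numpy as np

def langevin_proposal(theta, grad, neg_inv_hessian, eps, rng):
    """Sample from the Gaussian proposal given by the Euler discretisation
    of a preconditioned Langevin diffusion: drift half a (scaled) Newton
    step along the gradient, with the negative inverse Hessian as the
    local covariance."""
    mean = theta + 0.5 * eps**2 * neg_inv_hessian @ grad
    cov = eps**2 * neg_inv_hessian
    return rng.multivariate_normal(mean, cov), mean, cov
```

For a Gaussian target, the gradient of the log-target at a point is the precision matrix times the distance to the mode, so the drift pulls the proposal halfway towards the mode when the step size is one, illustrating the mode-seeking behaviour.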
4 Quasi-Newton proposals
We propose to instead compute a local approximation of the Hessian using gradient information by leveraging qN methods. This is useful as the gradient estimates are usually more accurate than the Hessian estimates and applying qN only amounts to a negligible computational overhead. The use of qN can therefore serve as an alternative to the use of the Louis identity for SSMs as well as direct computation of the Hessian for some models when large amounts of data are available.
Hessian estimates obtained by qN methods (Nocedal and Wright, 2006) have a long history in the optimisation literature for finding the maximum or minimum of nonlinear functions. However, a major difference in this paper compared with most of the optimisation literature is that only noisy estimates of the gradients are available. Most qN methods have not been developed for this situation, which can result in numerical instability in the Hessian estimates. Such instability could result in a candidate parameter far away from the posterior mode being accepted.
In this section, we address these issues by introducing a novel so-called trust-region method to encode the region of the target space in which the Hessian approximation is valid. Furthermore, we investigate a quite recent alternative based on regularised LS, which can deal with the noisy gradient estimates in a natural manner. Finally, we consider limited-memory implementations of qN methods, which make use of the memory introduced into pmMH in Section 2.2.
4.1 SR1 with a trust-region
SR1 (Nocedal and Wright, 2006, Chapter 6.2) is a common qN method which has been successfully applied to many different optimisation problems. Compared with other qN methods, SR1 can give substantially better estimates of the Hessian and can enjoy more rapid convergence (Conn et al., 1991). Hence, it is a natural candidate for building proposals within pmMH, but it has previously not been considered in this context in the statistics or machine learning literature.
The SR1 algorithm is implemented as an iterative procedure which updates the current estimate of the inverse Hessian by
H_{k+1} = H_k + \frac{(s_k - H_k y_k)(s_k - H_k y_k)^{\top}}{(s_k - H_k y_k)^{\top} y_k},   (17)
where s_k and y_k denote the differences in the parameters and gradients between two iterations, respectively. The name SR1 comes from the fact that (17) is a so-called rank-one update, as the estimate is updated by a term which is an outer product of two vectors.
A limited-memory version of SR1 is obtained by iterating (17) over the parameter and gradient differences corresponding to the memory (9). The initial value of the Hessian estimate is computed as a scaled version of the identity matrix, where the positive scaling factor is selected by the user and can be based on the gradient at the candidate parameter.
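The limited-memory recursion can be sketched as follows. This is a hedged illustration under the assumptions that the memory is supplied as lists of parameters and gradients and that a simple scaled identity is used as the initial estimate; the skip rule for small denominators is a standard safeguard from the optimisation literature, not necessarily the paper's exact implementation:

```python
import numpy as np

def sr1_inverse_hessian(thetas, grads, init_scale=1.0, tol=1e-8):
    """Limited-memory SR1: build an inverse-Hessian estimate of the
    log-target from a window of (parameter, gradient) pairs, starting
    from a scaled identity matrix. Updates whose denominator is too
    small relative to the factors are skipped."""
    dim = np.asarray(thetas[0]).size
    H = init_scale * np.eye(dim)
    for k in range(1, len(thetas)):
        s = thetas[k] - thetas[k - 1]     # parameter difference
        y = grads[k] - grads[k - 1]       # gradient difference
        v = s - H @ y
        denom = float(v @ y)
        if abs(denom) > tol * np.linalg.norm(v) * np.linalg.norm(y):
            H += np.outer(v, v) / denom   # symmetric rank-one update
    return H
```

For a quadratic log-target with exact gradients, each executed update enforces the secant condition on the most recent pair, and the estimate stays symmetric by construction.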
As discussed above, an SR1 method can become numerically unstable due to, e.g., the noise entering the gradient estimates, rounding errors and loss of rank. Therefore, an SR1 method is usually accompanied by a trust region to encode the part of the target space in which the Laplace approximation implied by the proposal (16) is believed to be accurate.
To incorporate this into pmMH, we propose to augment the proposal to be a product of the original proposal and a trust-region distribution,
\widetilde{q}_{\theta}(\theta' \mid \theta, u) \propto q_{\theta}(\theta' \mid \theta, u) \, t(\theta'),   (18)
where q_{\theta} is given by (16). A natural choice for the trust-region distribution is a Gaussian,
t(\theta') = \mathcal{N}(\theta'; \theta, \Lambda),
where \Lambda is some covariance matrix at the current iteration. In this paper, we make use of a similar approach to the hybrid method (Dahlin et al., 2015b) and set \Lambda to the sample covariance matrix of the Markov chain computed using a part of the burn-in iterations.
The choice of a Gaussian trust-region distribution is natural, since the product of two Gaussian densities is proportional to a Gaussian density. Hence, for this choice, we can rewrite (18) as a new Gaussian distribution with updated statistics, which makes it easy to sample from the proposal and to evaluate it pointwise.
Furthermore, this mechanism mimics the behaviour of a trust region in standard qN methods, which adapts its size depending on the length of the previous steps. A simple analysis of (18) reveals that the trust-region covariance determines the maximum scale of the region in which the local model is trusted.
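The product-of-Gaussians computation behind (18) is standard and can be written compactly via precision matrices; this is a generic sketch with illustrative names, not the paper's code:

```python
import numpy as np

def combine_with_trust_region(mu_q, cov_q, mu_t, cov_t):
    """Product of the qN proposal density N(mu_q, cov_q) and a Gaussian
    trust-region density N(mu_t, cov_t): the normalised product is again
    Gaussian, with precisions adding and means combined precision-weighted."""
    prec_q = np.linalg.inv(cov_q)    # precision of the qN proposal
    prec_t = np.linalg.inv(cov_t)    # precision of the trust region
    cov = np.linalg.inv(prec_q + prec_t)
    mu = cov @ (prec_q @ mu_q + prec_t @ mu_t)
    return mu, cov
```

Since precisions add, the combined covariance is never larger than the trust-region covariance, which is exactly the capping effect described above.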
4.2 Least squares
Another method for estimating the Hessian is to make use of LS, as proposed by Wills and Schön (2018) and Haelterman et al. (2009). This can be accomplished by leveraging the so-called secant condition,
H y_k = s_k,   (19)
for some pair of parameter and gradient differences s_k and y_k. Thus, if a number of such pairs are collected, the inverse Hessian of the log-target can be estimated by LS using
\widehat{H} = \arg\min_{H} \sum_{k} \big\| s_k - H y_k \big\|_2^2.   (20)
This LS method has some interesting advantages over traditional qN methods such as: (i) it handles the noise in the gradient estimates in a natural manner, (ii) it requires no initialisation of the Hessian estimate and (iii) it is straightforward to implement using existing commands in most computing environments.
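The LS fit can indeed be written in a few lines using standard solver routines; this is a hedged sketch with illustrative names, assuming the secant pairs are supplied as lists of difference vectors:

```python
import numpy as np

def ls_inverse_hessian(s_list, y_list):
    """Least-squares fit of the inverse Hessian from secant pairs: find
    the matrix H minimising sum_k ||H y_k - s_k||^2, where s_k and y_k
    are parameter and gradient differences, respectively."""
    Y = np.column_stack(y_list)     # d x K matrix of gradient differences
    S = np.column_stack(s_list)     # d x K matrix of parameter differences
    # H Y ~= S  is equivalent to  Y^T H^T ~= S^T, solved by lstsq.
    Ht, *_ = np.linalg.lstsq(Y.T, S.T, rcond=None)
    return Ht.T
```

Note that the estimate is not guaranteed to be symmetric; in practice one can symmetrise it, e.g., by averaging it with its transpose, before the correction step discussed later.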
Some care needs to be taken when applying (20) with gradient estimates computed using the auxiliary variables. That is, the noise in the estimates will influence future parameter values and therefore introduce problems with multicollinearity. This can be mitigated by using two different sets of auxiliary variables: one set to compute the gradient estimate entering (16) and another set to compute the estimates used in (20). Note that this adds no extra computational cost, as these two estimates can be obtained in parallel. We make this idea explicit later when discussing the implementation of qN proposals within pmMH in Algorithm 3.
Finally, it is possible to add regularisation to the Hessian estimator and to make use of updates based on matrix factorisations to improve computational efficiency. This is especially important for obtaining stable estimates of the Hessian when the parameter dimension is large (Wills and Schön, 2018) or when the gradients are noisy. A regularised least-squares estimate can be computed by
\widehat{H} = \arg\min_{H} \left\{ \sum_{k} \big\| s_k - H y_k \big\|_2^2 + \lambda \big\| H - \Lambda \big\|_F^2 \right\},   (21)
where \lambda denotes the coefficient determining the strength of the regularisation. Here, \Lambda denotes the regularisation matrix at the current iteration, which is computed in the same manner as the trust-region covariance for SR1.
In preliminary studies, we have seen that the use of regularisation improves the performance significantly by improving the numerical stability of the Hessian estimates. Moreover, the same choice of the regularisation parameter results in good performance for many models.
4.3 Implementation
In this section, we describe how to implement qN as a proposal within pmMH with memory as described in Algorithm 1. We make use of a sliding window of previous states of the Markov chain, which contains the information about the parameters and gradients from the previous steps. This is equivalent to sequentially applying the Markov kernel (8) but is a more convenient formulation for implementation.
The qN proposal can then be expressed as
(22)  
where the covariance is given by the qN approximation of the negative inverse Hessian of the log-target. The mean of the proposal is centred at a parameter several steps back in the memory, which is motivated by good performance in preliminary studies. However, we are free to select any other parameter in the memory using a fixed or random scheme. In Zhang and Sutton (2011), the authors make use of the previous parameter.
The procedure to sample from (22) is given in Algorithm 2. The complete pmMH method based on using a window of previous states is given in Algorithm 3.
The Hessian estimate obtained using SR1 or LS can be negative semi-definite and therefore needs to be corrected. In this paper, this is done by shifting the negative eigenvalues to be positive using a spectral method (Nocedal and Wright, 2006, Chapter 3.4). This correction is computed by
\widetilde{H} = Q \widetilde{\Lambda} Q^{\top},   (23)
where Q and \widetilde{\Lambda} denote the matrix of eigenvectors and the diagonal matrix of corrected eigenvalues of the Hessian estimate, respectively. The latter is computed as \widetilde{\lambda}_j = \max(\delta, |\lambda_j|), where \lambda_j denotes the j-th eigenvalue of the estimate. Here, we introduce \delta > 0 as some minimum size of the eigenvalues to ensure a non-singular matrix. There are plenty of other methods for correcting the Hessian, e.g., the hybrid method (Dahlin et al., 2015b) or modified Cholesky factorisations (Nocedal and Wright, 2006, Chapter 3.4).
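The spectral correction is easily implemented with a symmetric eigendecomposition; this is a generic sketch (the symmetrisation step and the default eigenvalue floor are assumptions for the illustration):

```python
import numpy as np

def correct_hessian(H, eig_floor=1e-4):
    """Spectral correction of a possibly indefinite inverse-Hessian
    estimate: symmetrise, mirror negative eigenvalues and floor tiny
    ones, so the result is symmetric positive definite."""
    H_sym = 0.5 * (H + H.T)                  # enforce symmetry first
    eigvals, Q = np.linalg.eigh(H_sym)
    corrected = np.maximum(np.abs(eigvals), eig_floor)
    return Q @ np.diag(corrected) @ Q.T
```

The mirrored eigenvalues keep the curvature scale of the estimate, while the floor guarantees a non-singular covariance for the proposal.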
5 Validity of the algorithm
We discussed the pmMH with memory framework in Section 2.2 and gave a concrete example in Algorithm 3 using the qN proposal. In this section, we discuss the validity of Algorithms 1 and 3.
The validity depends on two different issues: (i) the conditions under which the Metropolis-within-Gibbs construction generates samples from the product target and (ii) the conditions under which the Markov kernel R generates samples from the corresponding subtarget. These issues are naturally connected, as the requirements for (i) depend on (ii). Hence, we start by considering the conditions under which R leaves the subtarget invariant and then address the validity of the Metropolis-within-Gibbs scheme.
The Markov kernel corresponding to pmMH targeting a subtarget can be written as
R(\theta', u' \mid \theta, u) = \alpha(\theta', u'; \theta, u) \, q(\theta', u' \mid \theta, u) + \varrho(\theta, u) \, \delta_{(\theta, u)}(\theta', u'),   (24)
where \alpha is given by (11) and \varrho denotes the rejection probability. This Markov kernel generates a reversible Markov chain by construction if detailed balance is fulfilled by the proposal and prior for the auxiliary variables, i.e.,
m(u) \, q_u(u' \mid u) = m(u') \, q_u(u \mid u'),   (25)
which, e.g., holds for the proposal in (13).
Lemma 1
Under Assumption 1 in Andrieu and Roberts (2009), and given that the (ideal) MH algorithm targeting the original distribution defines an irreducible and reversible Markov chain, the Markov chain defined by the kernel in (24) is also irreducible and reversible. Furthermore, the marginal distribution of the chain converges to the target in the total variation (TV) norm
as the number of iterations tends to infinity, for any suitable starting point.
Proof 1
The idea is to make use of the Metropolis-within-Gibbs construction in (8), which either systematically updates one component after another or randomly selects the component to update. The conditions under which such a scheme is valid have been investigated by numerous authors, but the results in Roberts and Rosenthal (2006) are of particular interest for this paper. Especially, their Corollary 19 gives sufficient conditions for the Markov chain targeting the extended product target and the subchains to be Harris recurrent, which together with irreducibility gives an ergodic chain.
Lemma 2
Given that the target is integrable over any combination of its arguments, the Markov kernel given by the combination of (8) and (24) defines a Harris recurrent Markov chain. Hence, if the Markov chain is also irreducible, we have
(26)
as the number of iterations tends to infinity, for any suitable starting point. Furthermore, the same holds for all the subchains.
Proof 2
Follows directly from Theorem 6 and Corollary 18 in Roberts and Rosenthal (2006).
To summarise, we combine Lemmas 1 and 2 to show that the Markov chains generated using Algorithms 1 and 3 converge to the sought target distribution.
Proposition 1
As a consequence, the samples generated using this procedure are distributed according to the extended product target. If all the subchains target the same distribution, then the random samples are distributed according to the extended target, which is the original pmMH target in (1).
These random samples can be used to estimate expectations with respect to the extended target and the parameter posterior. Focusing on the latter, we let \varphi denote a well-behaved integrable test function with respect to the parameters. The expectation of \varphi with respect to the target can be obtained by marginalisation, resulting in
\mathbb{E}_{\pi}[\varphi] = \int \varphi(\theta) \, \pi(\theta) \, \mathrm{d}\theta,   (27)
which unfortunately is intractable for most interesting problems. However, it can be approximated by the ergodic average
\widehat{\mathbb{E}}_{\pi}[\varphi] = \frac{1}{N} \sum_{i=1}^{N} \varphi\big(\theta^{(i)}\big),   (28)
which by the ergodic theorem is a consistent estimator of (27) for any finite memory length. Note that we could thin the chain and retain only every few samples. However, we opt for retaining all samples (even if they are correlated) by the standard argument against thinning (Geyer, 1992; MacEachern and Berliner, 1994). This is also supported by comparisons made using pilot runs.
6 Numerical illustrations
In this section, we investigate the performance and properties of Algorithm 3 using the SR1 and LS methods when carrying out Bayesian parameter inference in three different models. Furthermore, these two novel proposals are compared with four alternatives: pmMH0/1/2 (Dahlin et al., 2015b; Nemeth et al., 2016) and Algorithm 3 using damped BFGS (Dahlin et al., 2018).
The pmMH0/1 methods correspond to using (14) without/with gradient information and replacing the Hessian with a constant matrix, which is usually an estimate of the posterior covariance obtained by pilot runs. The pmMH2 method is essentially the same but with the Hessian computed, without the use of memory, by direct computation, importance sampling or SMC.
We compare the mixing of the Markov chains generated using the different MCMC methods using the inefficiency factor (IF), also known as the integrated autocorrelation time. The IF is computed by
\textrm{IF} = 1 + 2 \sum_{\tau=1}^{\infty} \rho_\tau,   (29)
where \rho_\tau denotes the autocorrelation of the chain at lag \tau. A value of one corresponds to perfect (independent) sampling from the target. However, this computation cannot be carried out exactly, as only finite realisations of the Markov chains are available.
Instead, the IF is approximated using the sum of the first empirical autocorrelation function (ACF) coefficients. The time per effective sample (TES) is also computed by multiplying the IF by the time required per pmMH iteration. The TES value can be interpreted as the time required to obtain one uncorrelated sample from the posterior, which takes into account the varying computational cost of the benchmarked methods.
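The IF and TES computations described above can be sketched as follows; the truncation lag and the function names are illustrative choices, not taken from the paper:

```python
import numpy as np

def inefficiency_factor(x, max_lag=100):
    """Approximate the integrated autocorrelation time by one plus twice
    the sum of the first `max_lag` empirical ACF coefficients."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    var = xc @ xc / x.size
    acf = [(xc[:-k] @ xc[k:]) / (x.size * var) for k in range(1, max_lag + 1)]
    return 1.0 + 2.0 * float(np.sum(acf))

def time_per_effective_sample(x, time_per_iteration, max_lag=100):
    # TES: approximate wall-clock cost of one uncorrelated posterior draw.
    return inefficiency_factor(x, max_lag) * time_per_iteration
```

For independent draws the IF is close to one, while for an AR(1) chain with coefficient a it approaches (1 + a) / (1 - a), which is a convenient sanity check.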
All the implementation details are summarised in Appendix A and the source code and supplementary material are available via GitHub https://www.github.com/compops/pmmhqn and Docker Hub (see README.md in GitHub repository).
6.1 Selecting correlation and memory length
We begin by investigating the impact of the two main user choices, the correlation in the auxiliary variables and the memory length, on the performance of Algorithm 3 using the BFGS, LS and SR1 methods. The aim is to provide some guidelines for the user to tune the acceptance probability to obtain a small TES, comparable to existing rules of thumb (Nemeth et al., 2016).
To this end, we consider estimating the parameter posterior (5) for a random effects model, with the parameters inferred from the observations. The parameter posterior is intractable for this model, as the likelihood depends on the latent states. However, it is possible to obtain pointwise estimates of the log-likelihood and its gradients using importance sampling; see Appendix A.2 for details.
We generate a data set of observations from the model and run an importance sampler with correlated samples together with Algorithm 3 to estimate the posterior.
Figure 1 summarises the median TES for the three different qN methods. First, we note that SR1 and LS seem to provide better maximum performance compared with BFGS. Second, LS seems to be somewhat more robust to the user choices than BFGS and SR1. Third, the optimal memory length depends on the method, and a small non-zero correlation is preferable. Finally, smaller values of the correlation parameter are preferable when the memory length is large, which is natural as this would otherwise result in less accurate gradient estimates.
Figure 2 presents the TES as a function of the acceptance rate, based on the data in Figure 1. The three different methods behave quite differently over the interval. For example, LS is less sensitive to the acceptance probability than the SR1 update.
The acceptance probabilities minimising the TES differ between BFGS, LS and SR1. Overall, the LS update seems to be the preferable choice to obtain good performance with the simplest tuning.
6.2 Logistic regression with subsampling
We continue by investigating the accuracy of the Hessian estimates obtained by qN methods, as well as benchmarking the resulting performance of pmMH, in a high-dimensional setting. To this end, we consider a problem from physics, namely the detection of Higgs bosons, as discussed in Baldi et al. (2014). A simple approach to constructing such a classifier is to make use of a logistic regression model,
where the observations are Bernoulli distributed random variables whose success probabilities are given by the logistic function of a linear predictor formed from the covariates for each observation and the regression coefficients (including an intercept).
Again, the problem amounts to the computation of the posterior (5) of the parameters given the data. For this model, we can compute the log-likelihood and its gradients in closed form, but this is computationally prohibitive as the data set is large.
Instead, we make use of a subsampling approach, where the log-likelihood and its gradients are computed on a random subset of the data at each iteration. This results in a proposal based on stochastic gradients, a similar idea to the one applied to MH by Welling and Teh (2011). However, a naive approach to subsampling results in a large variance in the estimates of the log-likelihood and a large IF.
To mitigate this problem, we propose a possibly novel method which makes use of stratified resampling (Doucet and Johansen, 2011) together with correlated random numbers. The Gaussian auxiliary variables are transformed into uniform variables by a CDF transform, i.e., by evaluating the Gaussian CDF at each auxiliary variable. These transformed auxiliary variables are then used within the stratified resampling algorithm to determine the subset. More advanced methods are available in the literature and should be straightforward to adopt within Algorithm 3; see Bardenet et al. (2017) and Quiroz et al. (2018).
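A minimal sketch of this subset selection follows. It is our own illustrative implementation (assuming equal data weights), not necessarily the paper's exact code:

```python
import math
import numpy as np

def stratified_subsample(z, n_data):
    """Pick a subset of data indices via stratified resampling.

    z is a vector of (correlated) standard Gaussian auxiliary
    variables, one per subsample element; n_data is the full data size.
    """
    m = len(z)
    # CDF transform: Gaussian auxiliary variables -> uniforms on (0, 1)
    u = np.array([0.5 * (1.0 + math.erf(zi / math.sqrt(2.0))) for zi in z])
    # one uniform in each stratum [j/m, (j+1)/m)
    points = (np.arange(m) + u) / m
    # with equal data weights the inverse-CDF lookup is just a scaling
    idx = np.floor(n_data * points).astype(int)
    return np.minimum(idx, n_data - 1)
```

Because the auxiliary variables are only perturbed slightly between iterations, consecutive subsets overlap substantially, which induces the desired correlation between successive log-likelihood estimates.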
We begin by investigating the accuracy of the Hessian estimates by comparing the estimates obtained by BFGS, LS and SR1 with the true Hessian over a number of independent Markov chains. Each chain is run using the pmMH2 algorithm and the Hessian estimates from the three qN methods are computed at each iteration for a range of different memory lengths. The error is computed as the mean of the Frobenius norm of the difference between the Hessian estimate and the true Hessian obtained by direct computation.
Figure 3 presents the resulting median errors for the three different methods with varying memory lengths. Note that the LS update attains the lowest error when the memory length increases past the number of parameters, which is natural as a memory at least as long as the parameter dimension is required to obtain a unique solution to the LS problem. Moreover, the error in the Hessian estimate obtained by SR1 seems to be constant for large memory lengths, which could be due to the fact that this update is only valid for exact gradients.
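To make the LS idea concrete, one simple regularised least-squares secant fit is sketched below: it solves min over H of ||HS − Y||² + λ||H||² and symmetrises the result. The paper's regularisation may differ in detail, so treat this as an illustrative variant:

```python
import numpy as np

def ls_hessian(S, Y, lam=1e-8):
    """Fit a symmetric Hessian estimate from secant pairs.

    S: p x M matrix of parameter differences, Y: p x M matrix of the
    corresponding gradient differences, lam: regularisation parameter.
    """
    p = S.shape[0]
    # closed-form minimiser of ||H S - Y||_F^2 + lam * ||H||_F^2
    H = Y @ S.T @ np.linalg.inv(S @ S.T + lam * np.eye(p))
    return 0.5 * (H + H.T)   # symmetrise
```

With a memory of at least p well-spread secant pairs, S Sᵀ has full rank and the fit recovers the Hessian of a quadratic target essentially exactly, which matches the behaviour seen in Figure 3.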
We continue by examining the performance of complete implementations of pmMH using qN proposals for estimating the parameter posterior. Table 1 presents some performance statistics computed as the median over independent runs. We have tried to match the acceptance probabilities with the recommendations above, but this proved difficult.
Alg.        Acc.   Cor.   mean IF   Time/Iter.   Time/Samp.
pmMH0       0.06          6         2.1
pmMH2       0.12   0.00   29        4.7
pmMH-BFGS   0.19   0.00   15        0.5
pmMH-LS     0.14   0.93   13
pmMH-SR1    0.26   1.00   14
The smallest IF and TES are attained by LS and SR1, which corresponds well with the results in the previous illustration. In fact, all the IF values for the qN proposals are much smaller than for pmMH0/2. This is because the behaviour of the ACF is quite different for qN proposals compared with standard proposals, due to the introduction of the memory.
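The IF values reported in Table 1 are functions of the empirical ACF. A minimal estimator, assuming a simple truncated-sum definition that may differ from the paper's exact estimator, is:

```python
import numpy as np

def integrated_autocorr(chain, max_lag=50):
    """Integrated autocorrelation factor IF = 1 + 2 * sum_k rho_k."""
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    n = len(x)
    denom = float(np.dot(x, x))
    # empirical autocorrelations up to max_lag
    rho = [float(np.dot(x[: n - k], x[k:])) / denom
           for k in range(1, max_lag + 1)]
    return 1.0 + 2.0 * sum(rho)
```

For the qN proposals, the periodic ACF means the truncation point should cover at least a few memory lengths.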
Figure 4 presents the corresponding posterior estimates and empirical ACFs of the Markov chains. The posteriors are all located around the parameter estimate obtained by SGD, which indicates that all the chains have converged. The empirical ACFs for the qN proposals exhibit a periodic behaviour, as the proposal is centred on a state from a number of iterations back given by the memory length.
An alternative to IF is to compare the variance of the posterior estimates, which is connected to the Monte Carlo error. We compute the mean over the relative posterior variances with respect to pmMH2 and then take the median over the Monte Carlo runs for each of BFGS, LS and SR1.
We conclude that all four proposals seem to give similar posterior variances. However, the computational time for pmMH2 is about twice as large as for the qN proposals, and it also grows faster with the number of parameters. This is because sums of outer products appear in the expressions for the Hessian.
Finally, recall that the Monte Carlo error typically scales as the inverse square root of the number of samples. We therefore obtain a further decrease of the relative variance for a fixed computational budget when taking the computational time into account. Hence, qN proposals are possibly a better choice in terms of performance for this model.
6.3 Stochastic volatility model for Bitcoin data
We now investigate the performance of Algorithm 3 on a model where it is quite easy to accurately estimate the gradient using SMC but difficult to estimate the Hessian in the same manner. Here, the use of qN proposals could serve as a good alternative to pmMH2.
To illustrate this, we consider a stochastic volatility model with leverage to capture the behaviour of the Bitcoin log-returns presented in Figure 5. In a standard parameterisation, this model can be expressed by
$$ x_{t+1} = \mu + \phi \left( x_t - \mu \right) + \sigma_v v_t, \qquad y_t = \exp(x_t / 2) \, e_t, $$
where the innovations $(v_t, e_t)$ are jointly standard Gaussian with correlation $\rho$ capturing the leverage, and the parameters of the model are $\theta = (\mu, \phi, \sigma_v, \rho)$. The aim is to estimate the posteriors of $\theta$ and the log-volatility $x_{1:T}$ given observed log-returns $y_{1:T}$.
Alg.        Acc.   Cor.   max IF   Time/Iter.   Time/Samp.
pmMH2       0.64   1.00   101      44.0
pmMH-BFGS   0.08   0.00   18
pmMH-LS     0.17   0.40   17
pmMH-SR1    0.21   0.72   18
We make use of an SMC algorithm known as the fixed-lag particle smoother (Olsson et al., 2008) to estimate the log-target and its gradient using the Fisher identity. There are many other alternatives, but this smoother is fast and reasonably accurate; see Dahlin et al. (2015b) and Lindsten and Schön (2013). It is also possible to employ the Louis identity to compute an estimate of the Hessian, as discussed in Section 3.2, to use within the pmMH2 proposal; this is used as a comparison with the qN proposals.
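For reference, the Fisher identity expresses the score as a smoothed expectation of the complete-data score:

```latex
\nabla_\theta \log p_\theta(y_{1:T})
  = \mathbb{E}\left[ \nabla_\theta \log p_\theta(x_{1:T}, y_{1:T}) \,\middle|\, y_{1:T} \right],
```

which the fixed-lag smoother approximates by conditioning each term on only a window of future observations rather than the full record.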
Table 2 presents the median performance using the different proposals. As before, a direct comparison between pmMH2 and the qN proposals is not possible but the three qN proposals achieve similar performance.
Figure 5 presents the posterior estimates obtained by pmMH2 using the Louis identity and by Algorithm 3 using LS. Here, we have matched the computational budget for the two methods, and therefore the estimates obtained by pmMH2 use fewer samples than the corresponding estimates from pmMH-LS. This is because the computations required by the Louis identity are slow even when implemented in C, compared with the Python implementations of the qN proposals.
We clearly see that pmMH2 struggles to explore the posterior for most parameters compared with the other proposals. The qN proposals based on BFGS and SR1 give essentially the same estimates as LS and are therefore not presented in the figure. The correlation parameter seems to be difficult to estimate, and therefore the three posteriors differ for it. Based on these results, we conclude that the qN proposals enjoy superior performance to pmMH2 and are at the same time simpler to implement than the Louis identity.
7 Conclusions
The pmMH framework is a general methodology for exploring target distributions using Markov chains. In this paper, we propose, and prove the validity of, an extension to pmMH that allows the proposal to use information from multiple previous iterations of the Markov chain. We make use of this together with the qN methods known as SR1 and regularised LS to obtain efficient proposals within pmMH.
The numerical illustrations indicate that SR1 and LS can obtain accurate estimates of the Hessian with a small computational overhead. Hence, they can outperform direct computation of the Hessian in data-rich scenarios and when SMC methods are used to estimate the Hessian. The LS method seems to be the best choice, with good overall performance while being quite robust to user choices. Additionally, it enjoys well-understood statistical properties when the gradients are noisy, which is currently not the case for BFGS and SR1.
These are interesting results, as Hessian-based proposals can efficiently sample from high-dimensional targets (Girolami and Calderhead, 2011). Hence, pmMH with qN proposals can be an alternative when pseudo-marginal HMC (Lindsten and Doucet, 2016) cannot be used, as for the models in Sections 6.2-6.3, or when the Hessian is difficult or time-consuming to estimate or compute directly.
Future work includes the exploration of other types of proposals which make use of the memory introduced in pmMH; see, e.g., Cai et al. (2008) and Gilks et al. (1994). It is also possible to extend pseudo-marginal HMC (Lindsten and Doucet, 2016) with memory, which has already been proposed for HMC (Zhang and Sutton, 2011; Simsekli et al., 2016). Finally, a more detailed analysis of the statistical properties of qN proposals is required to derive rules of thumb that simplify calibration, similar to Doucet et al. (2015).
Acknowledgements
The authors would like to thank Prof. Thomas Schön and Dr. Fredrik Lindsten for valuable comments on an earlier draft of the paper.
References
 Andrieu and Roberts [2009] C. Andrieu and G. O. Roberts. The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 37(2):697–725, 2009.
 Andrieu and Thoms [2008] C. Andrieu and J. Thoms. A tutorial on adaptive MCMC. Statistics and Computing, 18(4):343–373, 2008.
 Andrieu et al. [2010] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
 Baldi et al. [2014] P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5:4308, 2014.
 Bardenet et al. [2017] R. Bardenet, A. Doucet, and C. Holmes. On Markov chain Monte Carlo methods for tall data. The Journal of Machine Learning Research, 18(1):1515–1557, 2017.
 Cai et al. [2008] B. Cai, R. Meyer, and F. Perron. Metropolis–Hastings algorithms with adaptive proposals. Statistics and Computing, 18(4):421–433, 2008.
 Cappé et al. [2005] O. Cappé, E. Moulines, and T. Rydén. Inference in hidden Markov models. Springer Verlag, 2005.
 Conn et al. [1991] A. R. Conn, N. I. M. Gould, and Ph. L. Toint. Convergence of quasi-Newton matrices generated by the symmetric rank one update. Mathematical Programming, 50(1-3):177–195, 1991.
 Dahlin and Schön [2017] J. Dahlin and T. B. Schön. Getting started with particle Metropolis-Hastings for inference in nonlinear dynamical models. Journal of Statistical Software (accepted for publication), 2017.
 Dahlin et al. [2015a] J. Dahlin, F. Lindsten, J. Kronander, and T. B. Schön. Accelerating pseudo-marginal Metropolis-Hastings by correlating auxiliary variables. Preprint, 2015a. arXiv:1512.05483v1.
 Dahlin et al. [2015b] J. Dahlin, F. Lindsten, and T. B. Schön. Particle Metropolis-Hastings using gradient and Hessian information. Statistics and Computing, 25(1):81–92, 2015b.
 Dahlin et al. [2015c] J. Dahlin, F. Lindsten, and T. B. Schön. Quasi-Newton particle Metropolis-Hastings. In Proceedings of the 17th IFAC Symposium on System Identification (SYSID), pages 981–986, Beijing, China, October 2015c.
 Dahlin et al. [2018] J. Dahlin, A. Wills, and B. Ninness. Constructing Metropolis-Hastings proposals using damped BFGS updates. In Proceedings of the 18th IFAC Symposium on System Identification (SYSID) (accepted for publication), Stockholm, Sweden, 2018.
 Del Moral et al. [2006] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436, 2006.
 Deligiannidis et al. [2017] G. Deligiannidis, A. Doucet, and M. K. Pitt. The correlated pseudo-marginal method. Preprint, 2017. arXiv:1511.04992v4.
 Doucet and Johansen [2011] A. Doucet and A. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. In D. Crisan and B. Rozovsky, editors, The Oxford Handbook of Nonlinear Filtering. Oxford University Press, 2011.
 Doucet et al. [2015] A. Doucet, M. K. Pitt, G. Deligiannidis, and R. Kohn. Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. Biometrika, 102(2):295–313, 2015.
 Gelman et al. [2013] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. Bayesian data analysis. Chapman & Hall/CRC, 3 edition, 2013.
 Geyer [1992] C. J. Geyer. Practical Markov chain Monte Carlo. Statistical Science, pages 473–483, 1992.
 Ghahramani [2015] Z. Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, 2015.
 Gilks et al. [1994] W. R. Gilks, G. O. Roberts, and E. I. George. Adaptive direction sampling. The Statistician, pages 179–189, 1994.
 Girolami and Calderhead [2011] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):1–37, 2011.
 Haario et al. [2001] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223–242, 2001.
 Haelterman et al. [2009] R. Haelterman, J. Degroote, D. Van Heule, and J. Vierendeels. The quasi-Newton least squares method: a new and fast secant method analyzed for linear systems. SIAM Journal on Numerical Analysis, 47(3):2347–2368, 2009.
 LeCun et al. [2015] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, 2015.
 Lindsten and Doucet [2016] F. Lindsten and A. Doucet. Pseudo-marginal Hamiltonian Monte Carlo. Preprint, 2016. arXiv:1607.02516v1.
 Lindsten and Schön [2013] F. Lindsten and T. B. Schön. Backward simulation methods for Monte Carlo statistical inference. In Foundations and Trends in Machine Learning, volume 6, pages 1–143, August 2013.
 MacEachern and Berliner [1994] S. N. MacEachern and M. L. Berliner. Subsampling the Gibbs sampler. The American Statistician, 48(3):188–190, 1994.
 Malik and Pitt [2011] S. Malik and M. K. Pitt. Particle filters for continuous likelihood evaluation and maximisation. Journal of Econometrics, 165(2):190–209, 2011.
 Marin et al. [2012] J.-M. Marin, P. Pudlo, C. P. Robert, and R. J. Ryder. Approximate Bayesian computational methods. Statistics and Computing, 22(6):1167–1180, 2012.
 Murphy [2012] K. P. Murphy. Machine learning: a probabilistic perspective. The MIT Press, 2012.
 Nemeth et al. [2016] C. Nemeth, C. Sherlock, and P. Fearnhead. Particle Metropolis-adjusted Langevin algorithms. Biometrika, 103(3):701–717, 2016.
 Nocedal and Wright [2006] J. Nocedal and S. Wright. Numerical optimization. Springer Verlag, 2 edition, 2006.
 Olsson et al. [2008] J. Olsson, O. Cappé, R. Douc, and E. Moulines. Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models. Bernoulli, 14(1):155–179, 2008.
 Quiroz et al. [2018] M. Quiroz, R. Kohn, M. Villani, and M.-N. Tran. Speeding up MCMC by efficient data subsampling. Journal of the American Statistical Association (accepted for publication), 2018.
 Robert [2007] C. P. Robert. The Bayesian choice. Springer Verlag, 2007.
 Robert and Casella [2004] C. P. Robert and G. Casella. Monte Carlo statistical methods. Springer Verlag, 2 edition, 2004.
 Roberts and Gilks [1994] G. O. Roberts and W. R. Gilks. Convergence of adaptive direction sampling. Journal of Multivariate Analysis, 49(2):287–298, 1994.
 Roberts and Rosenthal [1998] G. O. Roberts and J. S. Rosenthal. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255–268, 1998.
 Roberts and Rosenthal [2006] G. O. Roberts and J. S. Rosenthal. Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains. The Annals of Applied Probability, pages 2123–2139, 2006.
 Simsekli et al. [2016] U. Simsekli, R. Badeau, A. T. Cemgil, and G. Richard. Stochastic quasi-Newton Langevin Monte Carlo. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York City, NY, USA, June 2016.
 Welling and Teh [2011] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML), pages 681–688, Bellevue, USA, July 2011.
 Wills and Schön [2018] A. Wills and T. B. Schön. Stochastic quasi-Newton with adaptive step lengths for large-scale problems. Preprint, 2018. arXiv:1802.04310v1.
 Zhang and Sutton [2011] Y. Zhang and C. A. Sutton. Quasi-Newton methods for Markov chain Monte Carlo. In Proceedings of the 2011 Conference on Neural Information Processing Systems (NIPS), Granada, Spain, December 2011.
Appendix A Implementation details
This section discusses the details of the implementations used to generate the illustrations in Section 6. These details are also found in the source code and supplementary material available at https://www.github.com/compops/pmmhqn.
A.1 Selecting step sizes
We make use of an adaptive method to select the step size $\epsilon$ in the qN proposal (22). We follow Andrieu and Thoms [2008] and adapt $\epsilon$ such that the acceptance rate is around some target acceptance rate $\alpha^{\star}$ using the update
$$ \log \epsilon_{k+1} = \log \epsilon_k + \gamma_k \left( \alpha_k - \alpha^{\star} \right), \qquad (30) $$
where the sequence $\gamma_k$ is selected so that the Robbins-Monro conditions are fulfilled, e.g., by $\gamma_k = k^{-a}$ for some $a \in (0.5, 1]$. Note that this choice fulfils the requirement of diminishing adaptation needed to obtain a valid MCMC scheme.
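A minimal sketch of one adaptation step (variable names and the default target rate are our own placeholders, not the paper's settings):

```python
import math

def adapt_step_size(eps, accept_prob, k, target=0.234, a=0.7):
    """One Robbins-Monro update of the proposal step size, cf. (30).

    eps: current step size, accept_prob: acceptance probability at
    iteration k, target: desired acceptance rate, a in (0.5, 1].
    """
    gamma = (k + 1) ** (-a)   # diminishing adaptation
    return eps * math.exp(gamma * (accept_prob - target))
```

The step size grows when the chain accepts too often and shrinks when it accepts too rarely, settling near the target rate.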
A.2 Selecting correlation and memory length
We generate data from the random effects model. The priors for the parameters include a half-Cauchy distribution (over the positive real axis) with a given location and scale. A reparameterisation of the model is done to make all the parameters in the Markov chain unconstrained (able to assume any real value). This change of variables introduces a Jacobian term into the acceptance probability.
We initialise the Markov chain in the true parameters for simplicity. The Hessian is used to scale the random walk during the initialisation phase and as the proposal scaling during the burn-in. The pmMH algorithm is run with an initial portion of the iterations discarded as burn-in. The entire burn-in is used to estimate the posterior covariance, which is used to scale the proposal for the remaining iterations.
The step sizes are adapted using (30), with initial step sizes for BFGS, LS and SR1 calibrated using pilot runs to obtain reasonable mixing. A fixed value is used for the regularisation parameter of LS.
The log-target and its gradients and Hessian are computed using a correlated importance sampler with the prior of the latent process as the instrumental distribution and a fixed number of particles. See the supplementary material or Deligiannidis et al. [2017] for the details.
A.3 Logistic regression with subsampling
We make use of an initial subset of the observations and covariates from the data set downloaded from: https://archive.ics.uci.edu/ml/datasets/HIGGS. An intercept is included in the model by adding an appropriate column/row in the design matrix. All parameters are assigned a common prior and initialised at zero. Each subsample consists of a small fraction of the data set. The Hessian is used to scale the random walk during the initialisation phase and as the proposal scaling during the burn-in.
The pmMH algorithm uses the same settings as for the random effects model but with different choices of the memory length and correlation. The pmMH0 proposal makes use of a fixed step size together with a covariance matrix estimated using Algorithm 3 with LS during an earlier run. The pmMH2 proposal makes use of its own fixed step size. The step sizes for the qN proposals are adapted as before, with initial step sizes and target acceptance rates chosen separately for BFGS, LS and SR1. The SGD estimates are obtained using Lightning for Python: http://contrib.scikitlearn.org/lightning/index.html.
A.4 Stochastic volatility model for Bitcoin data
The log-returns for Bitcoin are computed by $y_t = 100 \left[ \log s_t - \log s_{t-1} \right]$, where $s_t$ denotes the daily exchange rate versus the US Dollar obtained from https://www.quandl.com/BITSTAMP/USD. The priors for the parameters in the model are
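Computing the log-returns from the raw exchange-rate series can be sketched as follows, assuming the usual percentage log-return convention:

```python
import numpy as np

def log_returns(prices):
    """Daily log-returns in percent from a positive price series."""
    s = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(np.log(s))
```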
where the priors are truncated Gaussian and Gamma distributions. Again, we reparameterise the model to make all the parameters in the Markov chain unconstrained (able to assume any real value). This change of variables introduces a Jacobian term into the acceptance probability.
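As an illustration, a common choice is to map persistence/correlation parameters through tanh and the scale through exp; the exact mapping used for this model is not shown here, so treat the following as an assumed example:

```python
import numpy as np

def to_constrained(theta_u):
    """Map unconstrained states (mu', phi', sigma', rho') to model
    parameters with phi, rho in (-1, 1) and sigma_v > 0 (assumed mapping)."""
    mu_u, phi_u, sigma_u, rho_u = theta_u
    return np.array([mu_u, np.tanh(phi_u), np.exp(sigma_u), np.tanh(rho_u)])

def log_jacobian(theta_u):
    """Log-determinant of the transform, added to the acceptance ratio."""
    _, phi_u, sigma_u, rho_u = theta_u
    # d tanh(x)/dx = 1 - tanh(x)^2 and d exp(x)/dx = exp(x)
    return (np.log1p(-np.tanh(phi_u) ** 2)
            + sigma_u
            + np.log1p(-np.tanh(rho_u) ** 2))
```

Working in the unconstrained space lets the random-walk and qN proposals move freely without rejecting states that violate the parameter constraints.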
We initialise the Markov chain at fixed parameter values. The Hessian is used to scale the random walk during the initialisation phase and as the proposal scaling during the burn-in. The pmMH algorithm is run using the same settings as before but with a different memory length. The pmMH2 proposal makes use of a fixed step size and the step sizes for the qN proposals are adapted as before.
Appendix B Implementing correlated log-target estimators
This section describes the implementations of the correlated importance sampling, subsampling and particle filtering algorithms. These implementations are deterministic given the auxiliary variables generated by the proposals in Section 3.1 in the main paper.