
# Auxiliary particle filtering within adaptive Metropolis-Hastings sampling

Michael K. Pitt
Economics Department
University of Warwick
m.pitt@warwick.ac.uk
Ralph S. Silva
School of Economics
University of New South Wales
r.silva@unsw.edu.au
Paolo Giordani
Research Department
Sveriges Riksbank
paolo.giordani@riksbank.se
Robert Kohn (corresponding author)
School of Economics
University of New South Wales
r.kohn@unsw.edu.au
May 15, 2010
###### Abstract

Our article deals with Bayesian inference for a general state space model with the simulated likelihood computed by the particle filter. We show empirically that the partially or fully adapted particle filters can be much more efficient than the standard particle filter, especially when the signal to noise ratio is high. This is especially important because using the particle filter within Markov chain Monte Carlo sampling is O(T²), where T is the sample size. We also show that an adaptive independent Metropolis-Hastings proposal for the unknown parameters based on a mixture of normals can be much more efficient than the usual optimal random walk methods, because the simulated likelihood is not continuous in the parameters and the cost of constructing a good adaptive proposal is negligible compared to the cost of evaluating the simulated likelihood. Independent Metropolis-Hastings proposals are also attractive because they are easy to run in parallel on multiple processors. The article also shows that the proposed adaptive independent Metropolis-Hastings sampler converges to the posterior distribution. We also show that the marginal likelihood of any state space model can be obtained in an efficient and unbiased manner by using the particle filter, making model comparison straightforward. Obtaining the marginal likelihood is often difficult using other methods. Finally, we prove that the simulated likelihood obtained by the auxiliary particle filter is unbiased. This result is fundamental to using the particle filter for Markov chain Monte Carlo sampling and was first obtained in a more abstract and difficult setting by Del Moral (2004). However, our proof is direct and will make the result accessible to readers.

Keywords: Auxiliary variables; Bayesian inference; Bridge sampling; Full and partial adaptation; Marginal likelihood.


## 1 Introduction

Our article builds on the work of Andrieu et al. (2010) and develops general simulation methods that make Bayesian inference for time series state space models feasible and efficient. Our first contribution is to show empirically that partially or fully adapted auxiliary particle filters using the simulated likelihood defined in Pitt (2002) can be much more efficient statistically than the standard particle filter of Gordon et al. (1993), especially when the signal to noise ratio is high, because they reduce the noise in the simulated likelihood. It is very important to carry out the particle filter as efficiently as possible, because particle filtering within Markov chain Monte Carlo sampling is O(T²), where T is the sample size, as explained in Section 2.1.

Adaptive sampling methods are simulation methods for carrying out Bayesian inference that use previous iterates of the simulation to form proposal distributions, that is, the adaptive samplers learn about some aspects of the posterior distribution from previous iterates. See for example Haario et al. (2001), Atchadé and Rosenthal (2005) and Roberts and Rosenthal (2009) who consider adaptive random walk Metropolis proposals and Giordani and Kohn (2010) who base their proposal on a mixture of normals.

The second contribution of the article is to show that when working with a simulated likelihood it is worthwhile constructing adaptive independent Metropolis-Hastings proposals that provide good approximations to the posterior density. The first reason for this claim is that the simulated likelihood is not continuous in the unknown parameters. This means that standard methods for constructing proposals, such as Laplace approximations based on analytic or numerical derivatives, are usually infeasible. It also means that the usual optimal random walk methods do not perform as well as expected, because the probability of acceptance does not tend to 1 as a proposed move becomes more local, or even if the parameter does not change at all. The second reason is that the cost of constructing a good adaptive proposal is often negligible compared to the cost of running the particle filter to obtain the simulated likelihood. Third, an adaptive sampling scheme that consists entirely or mostly of independent Metropolis-Hastings steps is attractive because a large part of the computation can be run in parallel, thus substantially reducing computing time.

Our article uses the adaptive independent Metropolis Hastings sampler of Giordani and Kohn (2010) which approximates the posterior density by a mixture of normals. We show that this proposal density can be much more efficient than the adaptive random walk Metropolis proposal of Roberts and Rosenthal (2009) for the reasons just outlined. We also show that this adaptive sampler converges to the correct posterior distribution. We note, however, that in our experience, the adaptive random walk Metropolis algorithm of Roberts and Rosenthal (2009) is important because it converges reliably for a diverse set of problems and provides a good way to initialize other more efficient adaptive sampling schemes.

The third contribution of our article is to show that the marginal likelihood of any state space model can be estimated in an efficient and unbiased manner by combining particle filtering with bridge or importance sampling. This makes it straightforward to compare the marginal likelihoods of two or more models each of which can be expressed in state space form.

The final contribution of the article is to show that the simulated likelihood obtained by the auxiliary particle filter is unbiased. This result is obtained in an abstract setting in Proposition 7.4.1 in Section 7.4.2 of Del Moral (2004). Andrieu et al. (2010) show that the unbiasedness of the simulated likelihood allows Bayesian inference using Markov chain Monte Carlo simulation. This is because the simulated likelihood can be viewed as the density of the observations conditional on the unknown parameters and a set of auxiliary latent variables. We believe that our derivation of unbiasedness is more direct and accessible than that of Del Moral (2004).

Computational algorithms for state space models such as the Kalman filter and particle filter are useful because many time series models can be expressed in state space form. Computational methods for Bayesian inference for Gaussian state space models are well developed (see Cappé et al., 2005) and there is now a literature on Bayesian computational methods for non-Gaussian state space models. Markov chain Monte Carlo computational methods based on the particle filter have the potential to greatly increase the number and complexity of time series models amenable to Bayesian analysis. An early use of the particle filter within a Markov chain Monte Carlo framework is by Fernández-Villaverde and Rubio-Ramírez (2007), who applied it to macroeconomic models as an approximate approach for obtaining the posterior distribution of the parameters.

Particle filtering (also known as sequential Monte Carlo) was proposed by Gordon et al. (1993) for online filtering and prediction of nonlinear or non-Gaussian state space models. The auxiliary particle filter method was introduced by Pitt and Shephard (1999) to improve the performance of the standard particle filter when the observation equation is informative relative to the state equations, that is when the signal to noise ratio is moderate to high. There is an extensive literature on online filtering using the particle filter, see for example Kitagawa (1996), Liu and Chen (1998), Doucet et al. (2000), Doucet et al. (2001), Andrieu and Doucet (2002), Fearnhead and Clifford (2003) and Del Moral et al. (2006). Our article considers only the standard particle filter of Gordon et al. (1993) and the fully and partially adapted particle filters proposed by Pitt and Shephard (1999).

The literature on using the particle filter to learn about model parameters is more limited. Pitt (2002) proposes the smooth particle filter to estimate the parameters of a state space model by maximum likelihood. Storvik (2002) and Polson et al. (2008) consider online parameter learning when sufficient statistics are available. Andrieu et al. (2010) provide a framework for off-line parameter learning using the particle filter. Flury and Shephard (2008) give an insightful discussion of the results of Andrieu et al. (2010) and use single parameter random walk proposals for off-line Bayesian inference.

Our article is an updated version of Silva et al. (2009), which contains some extra examples.

## 2 State space models

Consider a state space model with observation equation p(y_t | x_t; θ) and state transition equation p(x_{t+1} | x_t; θ), where y_t and x_t are the observation and the state at time t and θ is a vector of unknown parameters. The distribution of the initial state x_0 is p(x_0; θ). See Cappé et al. (2005) for a modern treatment of general state space models. The filtering equations for the state space model (for t ≥ 1) are (West and Harrison, 1997, pp. 506-507)

$$p(x_t \mid y_{1:t-1};\theta) = \int p(x_t \mid x_{t-1};\theta)\, p(x_{t-1} \mid y_{1:t-1};\theta)\, dx_{t-1}, \qquad (1a)$$

$$p(x_t \mid y_{1:t};\theta) = \frac{p(y_t \mid x_t;\theta)\, p(x_t \mid y_{1:t-1};\theta)}{p(y_t \mid y_{1:t-1};\theta)}, \qquad (1b)$$

$$p(y_t \mid y_{1:t-1};\theta) = \int p(y_t \mid x_t;\theta)\, p(x_t \mid y_{1:t-1};\theta)\, dx_t, \qquad (1c)$$

where y_{1:t} = (y_1, …, y_t). Equations (1a)–(1c) allow (in principle) for filtering for a given θ and for evaluating the likelihood of the observations y = y_{1:T},

$$p(y \mid \theta) = p(y_1 \mid \theta) \prod_{t=1}^{T-1} p(y_{t+1} \mid y_{1:t};\theta). \qquad (2)$$

If the likelihood can be computed, maximum likelihood and MCMC methods can be used to carry out inference on the parameters θ, with the states integrated out. When both the observation and state transition equations are linear and Gaussian the likelihood can be evaluated analytically using the Kalman filter (Cappé et al., 2005, pp. 141-143). More general state space models can also be estimated by MCMC methods if auxiliary latent variables are introduced, e.g. Kim et al. (1998) and Frühwirth-Schnatter and Wagner (2006), and/or the states are sampled in blocks as in Shephard and Pitt (1997). See Section 6.3 of Cappé et al. (2005) for a review of Markov chain Monte Carlo methods applied to general state space models.
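To fix ideas, the exact likelihood evaluation via the prediction error decomposition can be sketched for the linear Gaussian AR(1)-plus-noise model used later in Section 4.1. This is our own illustrative sketch, not the authors' code; the function name and parameterization are ours.

```python
import math

def kalman_loglik(y, mu, phi, tau2, sigma2):
    """Exact log-likelihood of the AR(1)-plus-noise model via the Kalman filter.

    State:       x_{t+1} = mu + phi*(x_t - mu) + N(0, tau2)
    Observation: y_t     = x_t + N(0, sigma2)
    """
    # Stationary initial distribution x_0 ~ N(mu, tau2 / (1 - phi^2)).
    m, P = mu, tau2 / (1.0 - phi * phi)
    loglik = 0.0
    for yt in y:
        # Prediction error decomposition: y_t | y_{1:t-1} ~ N(m, P + sigma2).
        S = P + sigma2
        loglik += -0.5 * (math.log(2.0 * math.pi * S) + (yt - m) ** 2 / S)
        # Measurement update, then time update.
        K = P / S
        m_filt = m + K * (yt - m)
        P_filt = P * (1.0 - K)
        m = mu + phi * (m_filt - mu)
        P = phi * phi * P_filt + tau2
    return loglik
```

Each factor in (2) is the Gaussian predictive density of y_t given the past, so the loop accumulates the exact log-likelihood that the particle filter only estimates.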

In general, however, the integrals in equations (1a)–(1c) are computationally intractable, and the standard particle filter algorithm (SIR) was proposed by Gordon et al. (1993) as a method for approximating them, with the approximation becoming exact as the number of particles tends to infinity. Pitt and Shephard (1999) propose the auxiliary particle filter method (ASIR), which is more efficient than the standard particle filter when the observation density is informative relative to the transition density. The general auxiliary particle filter is described in Section 2.1.

### 2.1 General ASIR method

The general auxiliary SIR (ASIR) filter of Pitt and Shephard (1999) may be thought of as a generalisation of the SIR method of Gordon et al. (1993). We therefore focus on this, more general, approach. To simplify notation in this section, we omit dependence on the unknown parameter vector θ. The following algorithm describes the one time step ASIR update and is initialized with samples x_t^k with mass π_t^k for k = 1, …, M.

###### Algorithm 1.

Given samples x_t^k with mass π_t^k for k = 1, …, M.

For t = 0, …, T − 1:

1. For k = 1, …, M compute the first-stage weights

$$\omega_{t|t+1}^k = \pi_t^k\, g(y_{t+1} \mid x_t^k), \qquad \pi_{t|t+1}^k = \frac{\omega_{t|t+1}^k}{\sum_{i=1}^M \omega_{t|t+1}^i}.$$

2. For k = 1, …, M sample \(\tilde x_t^k \sim \sum_{i=1}^M \pi_{t|t+1}^i\, \delta(x_t - x_t^i)\).

3. For k = 1, …, M sample \(x_{t+1}^k \sim g(x_{t+1} \mid \tilde x_t^k; y_{t+1})\).

4. For k = 1, …, M compute the second-stage weights

$$\omega_{t+1}^k = \frac{p(y_{t+1} \mid x_{t+1}^k)\, p(x_{t+1}^k \mid \tilde x_t^k)}{g(y_{t+1} \mid \tilde x_t^k)\, g(x_{t+1}^k \mid \tilde x_t^k; y_{t+1})}, \qquad \pi_{t+1}^k = \frac{\omega_{t+1}^k}{\sum_{i=1}^M \omega_{t+1}^i}.$$

Note that in Step 2, δ(x_t − x_t^i) is the delta function with unit mass at x_t^i. In addition, in Step 2, multinomial sampling may be employed, but stratified sampling is generally to be preferred and is employed throughout; see Kitagawa (1996), Carpenter et al. (1999) and Pitt and Shephard (2001).

Note that the true joint density may be written as,

$$p(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid x_t) = p(y_{t+1} \mid x_t)\, p(x_{t+1} \mid x_t; y_{t+1}),$$

where

$$p(y_{t+1} \mid x_t) = \int p(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid x_t)\, dx_{t+1}, \qquad p(x_{t+1} \mid x_t; y_{t+1}) = \frac{p(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid x_t)}{p(y_{t+1} \mid x_t)}.$$

Typically this fully adapted form is unavailable, but when it is available the approximating joint density may be chosen to be the true joint density. That is,

$$g(y_{t+1} \mid x_t)\, g(x_{t+1} \mid x_t; y_{t+1}) = p(y_{t+1} \mid x_t)\, p(x_{t+1} \mid x_t; y_{t+1}).$$

In this case Step 4 becomes redundant, as ω_{t+1}^k = 1 (so π_{t+1}^k = 1/M for k = 1, …, M), and the method reduces to what Pitt and Shephard (2001) call the fully adapted algorithm. The fully adapted method is the most efficient in estimating the likelihood and is generally the optimal filter a single time step ahead.

The SIR method of Gordon et al. (1993) arises when the joint proposal is chosen as,

$$g(y_{t+1} \mid x_t) \times g(x_{t+1} \mid x_t) = 1 \times p(x_{t+1} \mid x_t),$$

in which case g(y_{t+1} | x_t) is constant and ω_{t+1}^k = p(y_{t+1} | x_{t+1}^k). In this case, Step 1 above leaves the weights unchanged (as π_{t|t+1}^k = π_t^k).
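The SIR special case is easy to sketch in code. The following minimal bootstrap filter is our own illustration (not the authors' implementation): it targets the AR(1)-plus-noise model of Section 4.1, proposes from the transition density so the unnormalised weight is just the measurement density, and estimates each likelihood increment by the average weight. Multinomial resampling is used for brevity, although the paper recommends stratified sampling.

```python
import math
import random

def sir_loglik(y, mu, phi, tau2, sigma2, M=500, rng=random):
    """Bootstrap (SIR) estimate of the log-likelihood of the AR(1)-plus-noise
    model.  Proposing from the transition density makes the unnormalised
    weight p(y_t | x_t); the increment p(y_t | y_{1:t-1}) is estimated by the
    average weight.
    """
    sd0 = math.sqrt(tau2 / (1.0 - phi * phi))
    x = [mu + sd0 * rng.gauss(0.0, 1.0) for _ in range(M)]
    sd_state, loglik = math.sqrt(tau2), 0.0
    for yt in y:
        # Propagate every particle through the state transition (blind proposal).
        x = [mu + phi * (xi - mu) + sd_state * rng.gauss(0.0, 1.0) for xi in x]
        w = [math.exp(-0.5 * (yt - xi) ** 2 / sigma2) /
             math.sqrt(2.0 * math.pi * sigma2) for xi in x]
        loglik += math.log(sum(w) / M)
        # Multinomial resampling (stratified sampling is preferable in practice).
        x = rng.choices(x, weights=w, k=M)
    return loglik
```

The weights collapse quickly when σ² is small relative to τ², which is exactly the high signal to noise regime where adaptation pays off.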

The goal of the auxiliary particle filter is to get as close to full adaption as possible when full adaption is not analytically possible. This is achieved by making g(y_{t+1} | x_t) as close to p(y_{t+1} | x_t), as a function of x_t, as possible (up to a constant of proportionality) and the density g(x_{t+1} | x_t; y_{t+1}) as close to p(x_{t+1} | x_t; y_{t+1}) as possible. Various procedures are available for doing this; see, for example, Pitt and Shephard (2001) and Smith and Santos (2006).

The general ASIR estimator of p(y_t | y_{1:t−1}), introduced and used by Pitt (2002), is

$$\hat p_A(y_t \mid y_{1:t-1}) = \left\{ \frac{1}{M} \sum_{k=1}^M \omega_t^k \right\} \left\{ \sum_{k=1}^M \omega_{t-1|t}^k \right\}. \qquad (3)$$

The two sets of weights ω_t^k and ω_{t−1|t}^k are defined above and are calculated as part of the ASIR algorithm, so these two quantities are a simple by-product of the algorithm. We define the information in the swarm of particles at time t as the set {x_t^k, π_t^k; k = 1, …, M}. For full adaption, ω_t^k = 1 for all k, and the first summation in (3) disappears. For the SIR method, ω_{t−1|t}^k = π_{t−1}^k, which sums to one, and the second summation in (3) disappears.

The ASIR Algorithm 1 is a flexible particle filter approach when combined with stratification. Theorem 1 establishes that this algorithm together with the estimator of (3) is unbiased. This is important as it enables very efficient likelihood estimators from the ASIR method to be used within an MCMC algorithm.

###### Theorem 1.

The ASIR likelihood

$$\hat p_A(y_{1:T}) = \hat p_A(y_1) \prod_{t=2}^T \hat p_A(y_t \mid y_{1:t-1}) \qquad (4)$$

is unbiased in the sense that

$$E\{\hat p_A(y_{1:T})\} = p(y_{1:T}).$$

The theorem is proved in Section 7.4.2, Proposition 7.4.1 of Del Moral (2004). We give a more direct and accessible proof in Appendix A.

Our examples use the standard particle filter and the fully adapted particle filter, and the partially adapted particle filter described in Appendix B.1.

#### Simulated Likelihood

The ASIR likelihood estimate (4) is called the simulated likelihood, and Theorem 1 shows that the general ASIR particle filter provides a simulated likelihood that is an unbiased estimate of the true likelihood function. Andrieu et al. (2010) show that we can view the simulated likelihood as the density of y conditional on θ and a set of auxiliary variables u that are not a function of θ and such that u ∼ p(u), where the subscript S denotes a simulated likelihood, and

$$\int p_S(y \mid \theta, u)\, p(u)\, du = p(y \mid \theta). \qquad (5)$$

The variables u represent the uniform variates used for the multinomial/stratified draws and the random variates (e.g. standard Gaussian) used in simulating from g(x_{t+1} | x_t; y_{t+1}). It follows that the posterior p(θ, u | y) ∝ p_S(y | θ, u) p(θ) p(u), so that a method that simulates from p(θ, u | y) yields iterates of θ from the correct posterior p(θ | y). We note that the idea of using a simulated likelihood for Bayesian inference has also been explored outside the area of particle filters in the work of Beaumont (2003) and Andrieu and Roberts (2009).

We note that the variance of the log of the simulated likelihood is O(T/M), so it will be necessary to take the number of particles M proportional to T to keep a constant standard deviation as the sample size T increases. This implies that the particle filter MCMC algorithm is of order O(T²) for large T, and means that it is important to make the particle filter as efficient as possible.

## 3 Adaptive sampling for the simulated likelihood

The target density for posterior inference is p(θ, u | y) ∝ p_S(y | θ, u) p(θ) p(u), where p(θ) is the prior for θ. It may therefore be possible to use a Metropolis-Hastings simulation method to generate samples from the target density as follows. Suppose that, given some initial θ_0, the iterates θ_1, …, θ_{j−1} have been generated. The jth iterate, θ_j, is generated from the proposal density q_j(θ; θ_{j−1}), which may also depend on the previous iterate θ_{j−1}. Let θ_j^p be the proposed value of θ generated from q_j(θ; θ_{j−1}). Then we take θ_j = θ_j^p with probability

$$\alpha(\theta_{j-1}, u_{j-1}; \theta_j^p, u_j^p) = \min\left\{1,\ \frac{p_S(y \mid \theta_j^p, u_j^p)\, p(\theta_j^p)}{p_S(y \mid \theta_{j-1}, u_{j-1})\, p(\theta_{j-1})}\, \frac{q_j(\theta_{j-1}; \theta_j^p)}{q_j(\theta_j^p; \theta_{j-1})}\right\}, \qquad (6)$$

with p(u_j^p) and p(u_{j−1}) cancelling out, and take θ_j = θ_{j−1} otherwise. We say that the proposal is independent if q_j(θ; θ_{j−1}) does not depend on θ_{j−1}.
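A single step of such a sampler can be sketched as follows. This is an illustrative helper of our own, with the proposal, prior and simulated likelihood passed in as functions; the key point is that the particle-filter log-likelihood of the current draw is stored and reused rather than recomputed, which is how the auxiliary variables u_{j−1} enter the acceptance ratio (6).

```python
import math
import random

def imh_step(theta, log_like_theta, propose, log_q, log_prior, sim_loglik, rng=random):
    """One independent Metropolis-Hastings step on theta using a particle-filter
    (simulated) log-likelihood.  Reusing the stored log-likelihood of the
    current draw is what makes the chain target the correct posterior when the
    simulated likelihood is unbiased.
    """
    theta_p = propose(rng)                    # draw from the proposal q(.)
    log_like_p = sim_loglik(theta_p)          # fresh particle filter run
    log_alpha = (log_like_p + log_prior(theta_p) + log_q(theta)
                 - log_like_theta - log_prior(theta) - log_q(theta_p))
    if math.log(rng.random()) < min(0.0, log_alpha):
        return theta_p, log_like_p, True      # accept
    return theta, log_like_theta, False       # reject
```

When q is a good approximation to the posterior, as the adaptive scheme of Giordani and Kohn (2010) aims for, log_alpha is close to zero and the acceptance rate is high.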

In adaptive sampling the parameters of the proposal q_j are estimated from the iterates θ_1, …, θ_{j−1}. When the likelihood can be evaluated exactly, i.e., in the non-particle filter case, it can be shown that under appropriate regularity conditions the sequence of iterates θ_j converges to draws from the target distribution. See Roberts and Rosenthal (2007), Roberts and Rosenthal (2009) and Giordani and Kohn (2010).

Our article uses the adaptive independent Metropolis Hastings scheme of Giordani and Kohn (2010) and the adaptive random walk Metropolis scheme of Roberts and Rosenthal (2009). They are discussed in Appendix C. Appendix C.3 proves that the adaptive independent Metropolis Hastings sampler converges to the target distribution under the given conditions. Both the standard particle filter and the fully adapted particle filter satisfy these conditions almost automatically. The appendix also shows that in the partially adapted case a simple mixture of a partially adapted particle filter and the standard particle filter also satisfies the conditions for convergence.

### 3.1 Adaptive sampling and parallel computation

Carrying out Bayesian inference using the particle filter and Markov chain Monte Carlo simulation is computationally expensive. However, parallel processing can greatly increase the speed and therefore the areas of application of our methods. For adaptive independent Metropolis-Hastings proposals we can use the following three step approach. Let q(θ) be the current proposal density for θ. (a) On each of the available processors generate a block of proposed values of θ, which we write as θ^p, and compute the corresponding logs of the ratios p_S(y | θ^p, u^p) p(θ^p)/q(θ^p). (b) After each block of proposed values is generated for each processor, carry out Metropolis-Hastings selection of the proposed parameters using a single processor to obtain draws from the chain. This last step is fast because it is only necessary to draw uniform variates. (c) Use the previous iterates and the new draws to update the proposal density q(θ). In our applications of this approach, the block size is chosen so that the time to generate a block is approximately the time between updates of the adaptive independent Metropolis-Hastings sampling scheme.
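The three step approach can be sketched as follows (our own illustrative code; the names are ours). Step (a), the loop evaluating the log ratios, is the expensive, embarrassingly parallel part: each proposal is drawn from the fixed density q and its particle filter likelihood does not depend on the chain's current state, so it can be farmed out, e.g. with multiprocessing.Pool.map. Step (b) is the cheap serial Metropolis-Hastings selection.

```python
import math
import random

def block_imh(theta0, log_r0, propose, log_ratio, K, rng=random):
    """One block of the parallel independent MH scheme.

    log_ratio(theta) should return log p_S(y|theta) + log p(theta) - log q(theta).
    (a) Draw K proposals from q and evaluate their log ratios; nothing here
        depends on the current state, so the loop is embarrassingly parallel.
    (b) Run the Metropolis-Hastings selection serially; this only needs K
        uniform draws, so it is fast on a single processor.
    """
    proposals = [propose(rng) for _ in range(K)]
    log_r = [log_ratio(t) for t in proposals]      # step (a): parallelisable
    chain, theta, lr = [], theta0, log_r0
    for t_p, lr_p in zip(proposals, log_r):        # step (b): serial selection
        if math.log(rng.random()) < min(0.0, lr_p - lr):
            theta, lr = t_p, lr_p
        chain.append(theta)
    return chain, theta, lr
```

After each block, the returned chain segment would feed step (c), the update of the proposal density.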

A second approach applies to all Metropolis-Hastings sampling schemes, and in particular to the adaptive random walk Metropolis proposal. Suppose that P processors are available. The likelihood is estimated for a given θ on each of the P processors using the particle filter with M particles, and these estimates are then averaged to obtain an estimate of the likelihood based on PM particles. This approach is similar to, but faster than, using a single processor, and makes it possible to estimate the likelihood using a large number of particles. However, for a given number of generated particles, the first approach can be shown to be statistically more efficient than the second.

### 3.2 Estimating the marginal likelihood

Marginal likelihoods are often used to compare two or more models. For a given model, let θ be the vector of model parameters, p(y | θ) the likelihood of the observations and p(θ) the prior for θ. The marginal likelihood is defined as

$$p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta, \qquad (7)$$

which in our case can also be written as

$$p(y) = \int p_S(y \mid \theta, u)\, p(\theta)\, p(u)\, d\theta\, du. \qquad (8)$$

The marginal likelihood is often difficult to evaluate or estimate in non-Gaussian state space models, although auxiliary variable methods can be used in some problems; see Frühwirth-Schnatter and Wagner (2008). Appendix D briefly outlines how the marginal likelihood can be estimated using bridge or importance sampling, with the computation carried out within the adaptive sampling framework so that a separate simulation run is unnecessary.

## 4 Comparing the standard SIR particle filter with adapted ASIR particle filters

It is instructive to compare the performance of the standard particle filter with the fully and partially adapted particle filters for different signal to noise ratios and using different numbers of particles. We use two simulated examples. The first example compares the standard particle filter and the fully adapted particle filter for a Gaussian autoregressive signal observed with Gaussian noise. This example is also of interest because we can compute the exact likelihood using the Kalman filter, which is equivalent to using an infinite number of particles. The second example compares the standard SIR to a partially adapted particle filter using a binomial model where we vary the signal to noise ratio by varying the number of binomial trials.

In both examples, we make the comparison in terms of three criteria. The first is the acceptance rate of the adaptive independent Metropolis Hastings sampler, which we define as the percentage of accepted draws. The second is the inefficiencies of the iterates of the parameters obtained using the adaptive independent Metropolis Hastings method of Giordani and Kohn (2010). The third is the standard deviation of the simulated log-likelihood evaluated at the true value of θ, which is a good measure of how close the particle filter likelihood is to the true likelihood.

We define the inefficiency of the sampling scheme for a given parameter as the variance of the parameter estimate divided by its variance if the sampling scheme generated independent iterates. We estimate the inefficiency factor for a given parameter as IF = 1 + 2 Σ_{j=1}^{L} ρ̂_j, where ρ̂_j is the estimated autocorrelation of the parameter iterates at lag j. As a rule of thumb, the maximum number of lags L that we use is the lowest index j such that |ρ̂_j| < 2/√K, where K is the sample size used to compute ρ̂_j.
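The rule of thumb above can be sketched as follows (an illustrative helper of our own; the function name is ours).

```python
def inefficiency(draws):
    """Estimate the inefficiency factor IF = 1 + 2 * sum of autocorrelations,
    truncating at the first lag whose autocorrelation falls inside the
    +/- 2/sqrt(K) noise band, where K is the number of draws.
    """
    K = len(draws)
    mean = sum(draws) / K
    c0 = sum((d - mean) ** 2 for d in draws) / K
    threshold = 2.0 / K ** 0.5
    inef = 1.0
    for lag in range(1, K):
        rho = sum((draws[i] - mean) * (draws[i + lag] - mean)
                  for i in range(K - lag)) / (K * c0)
        if abs(rho) < threshold:
            break
        inef += 2.0 * rho
    return inef
```

For independent draws IF is close to 1; positively autocorrelated MCMC output gives IF > 1, and an antithetic sequence can even give IF < 1.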

### 4.1 Example 1: Autoregressive model observed with noise

Consider the following first order autoregression (AR(1)) plus noise model,

$$y_t \mid x_t \sim N(x_t, \sigma^2), \qquad x_{t+1} \mid x_t \sim N(\mu + \phi (x_t - \mu), \tau^2), \qquad (9)$$

$$x_0 \sim N(\mu, \tau^2/(1 - \phi^2)). \qquad (10)$$

The prior distributions for μ, φ and τ² use the following notation: N(a, b²) means a normal distribution with mean a and variance b², U(a, b) means a uniform distribution on (a, b), and IG(a, b) means an inverse gamma distribution with shape parameter a and scale b.

Our simulation study uses 50 replicated data sets with 500 observations each, generated by fixing μ, φ and τ and using two values for σ, corresponding to high and low signal to noise ratios. We ran 30 000 iterations of the adaptive independent Metropolis-Hastings sampler for the posterior distribution using the standard particle filter and the fully adapted particle filter with differing numbers of particles. For completeness, we also ran the adaptive sampling scheme with the Kalman filter using the exact likelihood. The update times for the adaptive independent Metropolis Hastings were at iterations 100, 200, 500, 1000, 1500, 2000, 3000, 4000, 5000, 10000, 15000 and 20000. We initialized the adaptive independent Metropolis Hastings sampler with a normal proposal formed from 5000 draws of a previous run of the adaptive random walk Metropolis sampler and the Kalman filter. For each signal to noise ratio we report the median MCMC parameter inefficiencies over the 50 replications as well as the interquartile range of the inefficiencies for differing numbers of particles. We also report results on the standard deviation of the simulated log likelihood for the standard particle filter and the fully adapted particle filter. Specifically, for each of the 50 replicated data sets we computed the log likelihood at the true parameter values 1000 times for each of the two particle filters and obtained the median and the interquartile range of the medians and standard deviations of the log likelihoods across the 50 data replicates.
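For this model full adaption is available analytically: with Gaussian measurement and transition densities, both p(y_{t+1} | x_t) and p(x_{t+1} | x_t; y_{t+1}) are Gaussian in closed form. The following is a minimal sketch of the fully adapted filter (our own illustration, using multinomial rather than stratified resampling for brevity).

```python
import math
import random

def norm_pdf(y, m, v):
    return math.exp(-0.5 * (y - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)

def fapf_loglik(y, mu, phi, tau2, sigma2, M=200, rng=random):
    """Fully adapted particle filter for the AR(1)-plus-noise model.  The
    second-stage weights are constant, so the likelihood increment is just the
    average first-stage weight p(y_{t+1} | x_t^k)."""
    sd0 = math.sqrt(tau2 / (1.0 - phi * phi))
    x = [mu + sd0 * rng.gauss(0.0, 1.0) for _ in range(M)]
    v_sum = tau2 + sigma2
    post_sd = math.sqrt(tau2 * sigma2 / v_sum)
    loglik = 0.0
    for yt in y:
        m = [mu + phi * (xi - mu) for xi in x]       # transition means
        lam = [norm_pdf(yt, mi, v_sum) for mi in m]  # p(y_t | x_{t-1}^k)
        loglik += math.log(sum(lam) / M)
        # Resample by the first-stage weights, then draw from the exact
        # conditional p(x_t | x_{t-1}, y_t); the conditional depends on the
        # ancestor only through its transition mean, so we resample the means.
        anc = rng.choices(m, weights=lam, k=M)
        x = [(mi * sigma2 + yt * tau2) / v_sum + post_sd * rng.gauss(0.0, 1.0)
             for mi in anc]
    return loglik
```

Comparing the variability of fapf_loglik against a bootstrap filter at the same M illustrates the variance reductions reported in Tables 1 and 3.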

#### Results for the high signal to noise case

Tables 1 and 2 report the results for the high signal to noise case. Table 1 shows that the median variance of the simulated log likelihood at the true parameter values for the standard particle filter with 2000 particles is over 400 times higher than the median variance for the fully adapted particle filter using 100 particles. This suggests that to get the same standard deviation for the simulated likelihood we would need approximately 8000 times as many particles for the standard particle filter as for the fully adapted particle filter, as we know that the variance decreases approximately in inverse proportion to the number of particles.

Table 2 shows that the median acceptance rate of the adaptive independent Metropolis Hastings sampler using the fully adapted particle filter with 100 particles is about 1.75 times higher than the median acceptance rate of the standard particle filter using 4000 particles. The table also shows that the median parameter inefficiencies are about 3 times higher for the standard particle filter using 4000 particles than for the fully adapted particle filter using 100 particles. Finally, the table shows that the fully adapted particle filter using 500 particles performs almost as well as using the exact likelihood.

#### Results for the low signal to noise case

Tables 3 and 4 report the results for the low signal to noise case. Table 3 shows that the median variance of the log of the simulated likelihood at the true parameter values for the standard particle filter using 1000 particles is about the same as the median variance of the simulated likelihood for the fully adapted particle filter using 100 particles, i.e. the variance of the simulated log likelihood of the standard particle filter is about 10 times that of the fully adapted particle filter for the same number of particles.

Table 4 shows that the median acceptance rates and parameter inefficiencies of the adaptive independent Metropolis Hastings sampler using the fully adapted particle filter with 100 particles are about the same as those of the standard particle filter using 4000 particles.

### 4.2 Example 2: Binomial model with an autoregressive state equation

Consider observations generated from the following dynamic binomial model

$$y_t \sim \mathrm{Bin}(m, \pi_t), \qquad \pi_t = \exp(x_t)/(1 + \exp(x_t)),$$

where m is the number of trials and π_t is the probability of success of each trial. The states follow the first order AR(1) model (9), whose initial distribution is (10). We use the notation HN(b²) to mean a half-normal distribution with scale b.

Our simulation study is organized similarly to that in Section 4.1. The data generating process fixes μ, φ and τ, and the number of trials m takes two values. We initialized the adaptive independent Metropolis Hastings sampler with a normal proposal formed from 5000 draws of a previous run of the adaptive random walk Metropolis sampler and the partially adapted particle filter using 500 particles.

We note the following about the binomial density.

1. The standard particle filter will do worse as the number of trials increases because the measurement density becomes more informative and peaked, so the variance of the weights increases.

2. The opposite is true for the partially adapted particle filter method. The measurement density tends to normality as m increases by the central limit theorem, so that the partially adapted particle filter tends to a fully adapted particle filter. Hence the partially adapted particle filter method actually improves as m becomes larger, and this is seen in Tables 5 and 7 below.
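One standard way to build an approximating Gaussian g for this model (our illustration of a generic technique, not necessarily the exact construction of Appendix B.1) is a second-order Taylor expansion of the binomial log measurement density in the state x, which yields a Gaussian pseudo-measurement that can be combined with the Gaussian transition in closed form. The quality of this approximation improves as m grows, which is the point made above.

```python
import math

def binomial_gaussian_approx(y, m, x0):
    """Gaussian approximation to the binomial measurement density as a function
    of the state x, from a second-order Taylor expansion of
    l(x) = y*x - m*log(1 + exp(x)) around x0.  Returns the mean and variance of
    the approximating Gaussian pseudo-measurement in x.
    """
    p0 = 1.0 / (1.0 + math.exp(-x0))      # logistic(x0)
    grad = y - m * p0                      # l'(x0)
    prec = m * p0 * (1.0 - p0)             # -l''(x0), positive for m >= 1
    return x0 + grad / prec, 1.0 / prec    # Newton-style mean, variance
```

Expanding around the transition mean of each particle gives a tractable g(y_{t+1} | x_t) and g(x_{t+1} | x_t; y_{t+1}) for the partially adapted filter.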

#### High signal to noise case

Tables 5 and 6 report the results for the high signal to noise case, with the larger number of trials. Table 5 shows that the variance of the simulated log likelihood at the true parameter values for the standard particle filter with 4000 particles is about 2.5 times higher than that of the partially adapted particle filter when 100 particles are used. This means that it is necessary to have 100 times as many particles using the standard particle filter to get the same noise level for the simulated log likelihood as for the partially adapted particle filter.

Table 6 shows that the median acceptance rate of the adaptive independent Metropolis Hastings sampler using the partially adapted particle filter with 100 particles is about 1.3 times higher than the median acceptance rate of the standard particle filter using 4000 particles. The table also shows that the median parameter inefficiencies are 1.5 times higher for the standard particle filter using 4000 particles than they are for the partially adapted particle filter using 100 particles.

#### Low signal to noise case

Tables 7 and 8 report the results for the low signal to noise case, with the smaller number of trials. Table 7 shows that the median variance of the simulated log likelihood at the true parameter values for the standard particle filter with 4000 particles is about the same as that obtained by the partially adapted particle filter using 250 particles, i.e. the standard particle filter requires about 16 times as many particles to obtain the same standard deviation as the partially adapted particle filter.

Table 8 shows that the median acceptance rate of the adaptive independent Metropolis Hastings sampler using the partially adapted particle filter with 200 particles is higher than the median acceptance rate of the standard particle filter using 2000 particles. The table also shows that the median parameter inefficiencies for the standard particle filter using 2000 particles are higher than the median inefficiencies for the partially adapted particle filter using 200 particles.

## 5 Performance of the adaptive sampling schemes on real examples

This section uses real data to illustrate the flexibility and wide applicability of the approach that combines particle filtering with adaptive sampling. All that is necessary for model estimation and model comparison by marginal likelihood is to code up a particle filter to evaluate the simulated likelihood and to code up the prior on the parameters. We also illustrate the difference in performance between the adaptive random walk Metropolis sampling scheme of Roberts and Rosenthal (2009) and the adaptive independent Metropolis Hastings scheme of Giordani and Kohn (2010). This comparison is interesting for two reasons. First, the adaptive independent Metropolis Hastings scheme tries to obtain a good approximation to the posterior density, whereas the adaptive random walk Metropolis aims for some target acceptance rate. Second, we claim that any independent Metropolis Hastings scheme (of which the adaptive independent Metropolis Hastings scheme of Giordani and Kohn (2010) is an example) is more suitable for parallel implementation than a Metropolis-Hastings scheme with a proposal that depends on the previous iterate (of which the adaptive random walk Metropolis scheme of Roberts and Rosenthal (2009) is an example).

The comparison between the two schemes is in terms of three criteria. The first two are the acceptance rate of the Metropolis-Hastings method and the inefficiency factors (IF) of the parameters, which are defined in Section 4; they are independent of the way the algorithms are implemented. However, these two criteria do not take into account the times taken by the samplers. To obtain an overall measure of the effectiveness of a sampler, we define its equivalent computing time as ECT = 1000 × IF × τ, where τ is the time per iteration of the sampler. We interpret ECT as the time taken by the sampler to attain the same accuracy as that attained by 1000 independent draws of the same sampler. For two samplers A and B, ECT_A/ECT_B is the ratio of times taken by them to achieve the same accuracy. We note that the time per iteration for a given sampling algorithm depends on how it is implemented, i.e. the language used, whether operations are vectorized, etc.

The results presented are for a single processor and the two parallel methods discussed in Section 3.1. To simplify the presentation, we mainly present results for the standard particle filter.

### 5.1 Example 1: Stochastic volatility model with leverage and outliers

The first example considers the univariate stochastic volatility (SV) model

$$y_t = K_t \exp(x_t/2)\,\varepsilon_t, \qquad \varepsilon_t \sim N(0,1),$$
$$x_{t+1} = \mu + \phi(x_t - \mu) + \sigma_\eta \eta_t, \qquad \eta_t \sim N(0,1), \tag{11}$$

where $\mathrm{corr}(\varepsilon_t, \eta_t) = \rho$, and $K_t$ takes the value 1 with probability $1-p$ and 2.5 with probability $p$. This is a state space model with a non-Gaussian observation equation and a Gaussian state transition equation for the latent volatility, which follows a first order autoregressive model. The SV model allows for leverage because the errors in the observation and state transition equations can be correlated. The model also allows for outliers in the observation equation because the standard deviation of $y_t$ given $x_t$ is 2.5 times its usual size when $K_t = 2.5$. To complete the model specification, we assume that all the parameters are independent a priori, with truncated normal priors on the location and persistence parameters and an inverse gamma prior on $\sigma_\eta^2$. We use the notation $TN_{(a,b)}(c, d^2)$ to mean a normal with location $c$ and scale $d$ truncated to the interval $(a,b)$, and $IG(a, b)$ to mean an inverse gamma distribution with shape parameter $a$, scale parameter $b$ and mode $b/(a+1)$. We set the prior on $p$ in the general model so that outliers are rare a priori.
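For concreteness, the SV model (11) with leverage and outliers can be simulated as follows. This is a minimal sketch: the parameter values, the leverage correlation `rho` and the outlier probability `p` are illustrative choices of ours, not the paper's estimates.

```python
import math
import random

def simulate_sv(T, mu, phi, sigma_eta, rho=0.0, p=0.0, seed=0):
    """Simulate T returns from the SV model (11) with leverage and outliers."""
    rng = random.Random(seed)
    y = []
    x = mu  # start the log-volatility at its unconditional mean
    for _ in range(T):
        eps = rng.gauss(0.0, 1.0)
        # correlate the state error with eps to produce the leverage effect
        eta = rho * eps + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        K = 2.5 if rng.random() < p else 1.0  # occasional outlier inflation
        y.append(K * math.exp(x / 2.0) * eps)
        x = mu + phi * (x - mu) + sigma_eta * eta
    return y

y = simulate_sv(T=1000, mu=-0.5, phi=0.95, sigma_eta=0.2, rho=-0.3, p=0.01)
print(len(y))  # 1000
```

Setting `rho = 0` and `p = 0` recovers the basic SV model without leverage or outliers used in the first comparison below.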

Shephard (2005) reviews SV models, and Malik and Pitt (2008) estimate a model of the form (11) by maximum likelihood using the smooth particle filter.

#### S&P 500 index

We apply the SV model (11) to the Standard and Poors (S&P) 500 index data from 02/Jan/1970 to 14/Dec/1973, obtained from the Yahoo Finance web site. The data consist of the daily observations over this period.

Table 9 shows the acceptance rates, the inefficiencies and the equivalent computing times over 10 replications of the stochastic volatility model using the standard particle filter and the two adaptive Metropolis-Hastings schemes. The analysis uses the SV model without leverage or outliers. In the table, SP stands for a single processor, MP1 for multiprocessor method 1 and MP2 for multiprocessor method 2 (where the simulated likelihood is obtained as an average), both described in Section 3.1. We use eight processors for both the MP1 and MP2 schemes. The basic number of particles in this example is 4000, which means that SP uses 4000 particles in a single processor, MP1 uses 4000 particles in each processor and MP2 uses 500 particles in each processor. We ran all the algorithms for 10000 iterations and used the last 5000 to compute the results. The equivalent computing time is obtained as the overall time divided by the number of iterations, times the inefficiency, times 1000. The update times for the adaptive independent Metropolis-Hastings using SP or MP2 were at iterations 100, 200, 500, 1000, 2000, 3000, 4000, 5000, 6000 and 7500. The block sizes (also the update times) for the adaptive independent Metropolis-Hastings using MP1 were 15, 25, 60, 125, 250, 375, 500, 625, 750 and 940.

The table shows that the acceptance rates of the adaptive independent Metropolis-Hastings sampler are about twice those of the adaptive random walk Metropolis, and that its inefficiencies are about 1/6 of those of the adaptive random walk Metropolis. In terms of ECT, the best approach for the adaptive independent Metropolis-Hastings sampler is MP1, which is between 11 and 18 times better than the best available approach (MP2) for the adaptive random walk Metropolis. Qualitatively similar results were obtained with other numbers of particles.

#### Model selection

We now use importance sampling and bridge sampling to compute the marginal likelihoods of the four SV models: the model with neither leverage nor outliers, the model that allows for leverage but not outliers, the model that allows for outliers but not leverage, and the general model that allows for both. Table 10 shows the logarithms of the marginal likelihoods of the four models for a single run of each algorithm. The differences between the two approaches are very small. In this example, and based on our prior distributions, the SV model with leverage effects but no outliers has the highest marginal likelihood.
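The idea of estimating a marginal likelihood by importance sampling can be illustrated on a toy conjugate model where the exact answer is available in closed form; this sketch is ours and does not use the particle filter or the SV models above, but the estimator has the same structure: average the weights prior × likelihood / proposal over draws from the proposal.

```python
import math
import random

def norm_pdf(x, m, s):
    """Normal density with mean m and standard deviation s."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def marginal_likelihood_is(y, n_draws=200_000, seed=1):
    """Importance sampling estimate of p(y) for the toy model
    theta ~ N(0,1), y|theta ~ N(theta,1); the proposal q = N(0, 2^2)
    is an arbitrary choice for this sketch."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        theta = rng.gauss(0.0, 2.0)
        w = (norm_pdf(y, theta, 1.0) * norm_pdf(theta, 0.0, 1.0)
             / norm_pdf(theta, 0.0, 2.0))
        total += w
    return total / n_draws

y_obs = 1.3
exact = norm_pdf(y_obs, 0.0, math.sqrt(2.0))  # here p(y) = N(y; 0, 2) exactly
print(marginal_likelihood_is(y_obs), exact)   # the two values should be close
```

In the state space setting the likelihood term is replaced by the simulated likelihood from the particle filter, which is why the unbiasedness result of Appendix A matters for model comparison.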

#### Posterior estimates of model parameters

The estimated posterior means and standard deviations of all four models are given in Table 11.

### 5.2 Example 2: GARCH model observed with noise

The GARCH(1,1) model is used extensively to model financial returns; see, for example, Bollerslev et al. (1994). In this section we consider the GARCH(1,1) model observed with Gaussian noise, which is a more flexible version of the basic model. The model is

$$y_t \mid x_t \sim N(x_t, \tau^2), \qquad x_{t+1} \mid \sigma^2_{t+1} \sim N(0, \sigma^2_{t+1}),$$
$$\sigma^2_{t+1} = \alpha + \beta x_t^2 + \gamma \sigma_t^2, \qquad x_0 \sim N(0, \alpha/(1-\beta-\gamma)).$$

The priors on $\tau^2$ and $\alpha$ are proper. The joint prior for $\beta$ and $\gamma$ is uniform on the region $\beta \ge 0$, $\gamma \ge 0$, $\beta + \gamma < 1$, which keeps the stationary variance $\alpha/(1-\beta-\gamma)$ positive and finite.

It is straightforward to show that this model is fully adapted; see Appendix B.2. Instead of the GARCH(1,1) model with noise we can use other members of the GARCH family, e.g. an EGARCH process observed with noise. All such models are fully adapted.
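To see why full adaptation is available here, note that $p(y_t \mid \sigma_t^2) = N(y_t;\, 0,\, \sigma_t^2 + \tau^2)$ and $p(x_t \mid y_t, \sigma_t^2)$ are both Gaussian in closed form, since $\sigma_t^2$ is a deterministic function of the previous state. The following minimal sketch of a fully adapted particle filter for this model uses those two densities; the parameter values and particle number are illustrative only.

```python
import math
import random

def norm_pdf(y, m, v):
    # normal density with mean m and VARIANCE v
    return math.exp(-0.5 * (y - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)

def fully_adapted_pf(y, tau2, alpha, beta, gamma, M=200, seed=0):
    """Fully adapted particle filter log-likelihood for the
    GARCH(1,1)-with-noise model above."""
    rng = random.Random(seed)
    v0 = alpha / (1.0 - beta - gamma)  # stationary variance for x_0
    particles = [(rng.gauss(0.0, math.sqrt(v0)), v0) for _ in range(M)]
    loglik = 0.0
    for yt in y:
        # sigma_t^2 is a deterministic function of the previous (x, sigma^2)
        sig2 = [alpha + beta * x * x + gamma * s2 for x, s2 in particles]
        # exact one-step predictive: y_t | sigma_t^2 ~ N(0, sigma_t^2 + tau^2)
        w = [norm_pdf(yt, 0.0, s2 + tau2) for s2 in sig2]
        loglik += math.log(sum(w) / M)
        # resample on the exact predictive weights, then draw x_t from its
        # exact Gaussian conditional given y_t: this is full adaptation
        idx = rng.choices(range(M), weights=w, k=M)
        new = []
        for k in idx:
            s2 = sig2[k]
            v = s2 * tau2 / (s2 + tau2)
            m = yt * s2 / (s2 + tau2)
            new.append((rng.gauss(m, math.sqrt(v)), s2))
        particles = new
    return loglik

ll = fully_adapted_pf([0.1, -0.2, 0.05, 0.3],
                      tau2=0.05, alpha=0.1, beta=0.15, gamma=0.8)
print(ll)
```

Because both the predictive weight and the state draw are exact, the only Monte Carlo error comes from the finite particle set, which is why far fewer particles are needed than with the standard filter.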

#### MSCI UK index returns

We model the weekly MSCI UK index returns from 6 January 2000 to 28 January 2010 corresponding to 526 weekly observations shown in Figure 1.

Table 12 compares different adaptive sampling algorithms and particle filters in terms of acceptance rates, inefficiencies and equivalent computing times. Medians and interquartile ranges are computed using 50 replications of each combination of adaptive sampler, particle filter and number of particles. The adaptive sampling algorithms were run for 30 000 iterations. The first 20 000 draws were discarded and the remainder used to compute the statistics. The update times for the adaptive independent Metropolis-Hastings were at iterations 100, 200, 500, 1000, 1500, 2000, 3000, 4000, 5000, 10000, 15000 and 20000. The initial value and proposal distribution for all algorithms were based on a single short run of the adaptive random walk, and all results are for a single processor.

The table shows that the fully adapted particle filter is much more efficient than the standard particle filter for both the adaptive random walk Metropolis and the adaptive independent Metropolis-Hastings samplers, and that the adaptive independent Metropolis-Hastings sampler is much more efficient than the adaptive random walk Metropolis sampler for both the standard and the fully adapted particle filters. In particular, the table shows that adaptive independent Metropolis-Hastings combined with full adaptation using 200 particles is about four times as efficient as the adaptive random walk Metropolis sampler using the standard particle filter and 10,000 particles.

Table 13 shows the standard deviations of the simulated log-likelihood function for the particle filters using differing numbers of particles. The statistics are based on 1000 replications of the particle filter with the parameters fixed at their posterior means. The table shows that the standard deviation of the simulated log likelihood using the fully adapted particle filter with 500 particles is smaller than that using the standard particle filter with 10,000 particles.

Table 14 is a summary of the posterior distributions of the four parameters.

## Acknowledgment

The research of Robert Kohn and Ralph S. Silva was partially supported by ARC Discovery Grant DP0667069.

## References

• Andrieu and Doucet (2002) Andrieu, C. and Doucet, A. (2002), “Particle filtering for partially observed Gaussian state space models,” Journal of the Royal Statistical Society, Series B, 64, 827–836.
• Andrieu et al. (2010) Andrieu, C., Doucet, A., and Holenstein, R. (2010), “Particle Markov chain Monte Carlo methods,” Journal of the Royal Statistical Society, Series B, 72, 1–33.
• Andrieu and Roberts (2009) Andrieu, C. and Roberts, G. (2009), “The pseudo-marginal approach for efficient Monte Carlo computations,” The Annals of Statistics, 37, 697–725.
• Atchadé and Rosenthal (2005) Atchadé, Y. and Rosenthal, J. (2005), “On adaptive Markov chain Monte Carlo algorithms.” Bernoulli, 11, 815–828.
• Beaumont (2003) Beaumont, M. (2003), “Estimation of population growth or decline in genetically monitored populations,” Genetics, 164, 1139–1160.
• Bollerslev et al. (1994) Bollerslev, T., Engle, R. F., and Nelson, D. (1994), “ARCH models,” in HANDBOOK OF ECONOMETRICS, eds. Engle, R. and McFadden, D., Amsterdam: Elsevier, vol. 4, chap. 49, pp. 2959–3038.
• Cappé et al. (2005) Cappé, O., Moulines, E., and Rydén, T. (2005), Inference in Hidden Markov Models, New York: Springer.
• Carpenter et al. (1999) Carpenter, J. R., Clifford, P., and Fearnhead, P. (1999), “An improved particle filter for non-linear problems,” IEE Proceedings on Radar, Sonar and Navigation, 146, 2–7.
• Chen and Shao (1997) Chen, M. H. and Shao, Q. M. (1997), “On Monte Carlo methods for estimating ratios of normalizing constants,” The Annals of Statistics, 25, 1563–1594.
• Del Moral (2004) Del Moral, P. (2004), Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, New York: Springer.
• Del Moral et al. (2006) Del Moral, P., Doucet, A., and Jasra, A. (2006), “Sequential Monte Carlo samplers,” Journal of the Royal Statistical Society Series B, 68, 411–436.
• Doucet et al. (2001) Doucet, A., de Freitas, N., and Gordon, N. (2001), Sequential Monte Carlo Methods in Practice, New York: Springer.
• Doucet et al. (2000) Doucet, A., Godsill, S., and Andrieu, C. (2000), “On sequential Monte Carlo sampling methods for Bayesian filtering,” Statistics and Computing, 10, 197–208.
• Fearnhead and Clifford (2003) Fearnhead, P. and Clifford, P. (2003), “On-line inference for hidden Markov models via particle filters,” Journal of the Royal Statistical Society Series B, 65, 887–899.
• Fernández-Villaverde and Rubio-Ramírez (2007) Fernández-Villaverde, J. and Rubio-Ramírez, J. (2007), “Estimating macroeconomic models: a likelihood approach,” Review of Economic Studies, 74, 1059–1087.
• Flury and Shephard (2008) Flury, T. and Shephard, N. (2008), “Bayesian inference based only on simulated likelihood: particle filter analysis of dynamic economic models,” http://www.economics.ox.ac.uk/Research/wp/pdf/paper413.pdf.
• Frühwirth-Schnatter and Wagner (2006) Frühwirth-Schnatter, S. and Wagner, H. (2006), “Auxiliary mixture sampling for parameter-driven models of time series of counts with applications to state space modelling,” Biometrika, 93, 827–841.
• Frühwirth-Schnatter and Wagner (2008) Frühwirth-Schnatter, S. and Wagner, H. (2008), “Marginal likelihoods for non-Gaussian models using auxiliary mixture sampling,” Computational Statistics and Data Analysis, 52, 4608–4624.
• Geweke (1989) Geweke, J. (1989), “Bayesian inference in econometric models using Monte Carlo integration,” Econometrica, 57, 1317–1339.
• Giordani and Kohn (2010) Giordani, P. and Kohn, R. (2010), “Adaptive independent Metropolis-Hastings by fast estimation of mixtures of normals,” Journal of Computational and Graphical Statistics, 19, 243–259.
• Gordon et al. (1993) Gordon, N. J., Salmond, D. J., and Smith, A. F. M. (1993), “A novel approach to non-linear and non-Gaussian Bayesian state estimation,” Radar and Signal Processing, IEE Proceedings F, 140, 107–113.
• Haario et al. (2001) Haario, H., Saksman, E., and Tamminen, J. (2001), “An adaptive Metropolis algorithm,” Bernoulli, 7, 223–242.
• Kim et al. (1998) Kim, S., Shephard, N., and Chib, S. (1998), “Stochastic volatility: Likelihood inference and comparison with ARCH models,” Review of Economic Studies, 65, 361–393.
• Kitagawa (1996) Kitagawa, G. (1996), “Monte Carlo filter and smoother for non-Gaussian non-linear state space models,” Journal of Computational and Graphical Statistics, 5, 1–25.
• Liu and Chen (1998) Liu, J. S. and Chen, R. (1998), “Sequential Monte Carlo methods for dynamic systems,” Journal of the American Statistical Association, 93, 1032–1044.
• Malik and Pitt (2008) Malik, S. and Pitt, M. K. (2008), “Modeling Stochastic Volatility with Leverage and Jumps: A ‘Smooth’ Particle Filtering Approach,” Available at http://www.riksbank.com/upload/Research/Conferences/StateSpace2008/Pitt.pdf.
• Meng and Wong (1996) Meng, X. L. and Wong, W. H. (1996), “Simulating ratios of normalizing constants via a simple identity: A theoretical exploration,” Statistica Sinica, 6, 831–860.
• Pitt and Shephard (1999) Pitt, M. and Shephard, N. (1999), “Filtering via simulation: auxiliary particle filters,” Journal of the American Statistical Association, 94, 590–599.
• Pitt and Shephard (2001) — (2001), “Auxiliary variable based particle filters,” in Sequential Monte Carlo Methods in Practice, eds. de Freitas, N., Doucet, A., and Gordon, N. J., New York: Springer-Verlag, pp. 273–293.
• Pitt (2002) Pitt, M. K. (2002), “Smooth particle filters for likelihood evaluation and maximization,” Working paper, University of Warwick.
• Polson et al. (2008) Polson, N. G., Stroud, J. R., and Müller, P. (2008), “Practical filtering with sequential parameter learning.” Journal of the Royal Statistical Society, Series B, 70, 413–428.
• Roberts et al. (1997) Roberts, G. O., Gelman, A., and Gilks, W. R. (1997), “Weak convergence and optimal scaling of random walk Metropolis algorithms.” Annals of Applied Probability, 7, 110–120.
• Roberts and Rosenthal (2007) Roberts, G. O. and Rosenthal, J. S. (2007), “Coupling and ergodicity of adaptive MCMC,” Journal of Applied Probability, 44, 458–475.
• Roberts and Rosenthal (2009) — (2009), “Examples of adaptive MCMC,” Journal of Computational and Graphical Statistics, 18, 349–367.
• Shephard (2005) Shephard, N. (2005), Stochastic Volatility: Selected Readings, Oxford: Oxford University Press.
• Shephard and Pitt (1997) Shephard, N. and Pitt, M. (1997), “Likelihood analysis of non-Gaussian measurement time series,” Biometrika, 84, 653–667.
• Silva et al. (2009) Silva, R., Giordani, P., Kohn, R., and Pitt, M. (2009), “Particle filtering within adaptive Metropolis Hastings sampling,” http://arxiv.org/abs/0911.0230.
• Smith and Santos (2006) Smith, J. and Santos, A. (2006), “Second-Order Filter Distribution Approximations for Financial Time Series With Extreme Outliers,” Journal of Business and Economic Statistics, 24, 329–337.
• Storvik (2002) Storvik, G. (2002), “Particle filters for state-space models with the presence of unknown static parameters,” IEEE Transactions on Signal Processing, 50, 281–290.
• West and Harrison (1997) West, M. and Harrison, J. (1997), Bayesian Forecasting and Dynamic Models, New York: Springer-Verlag, 2nd ed.

## Appendix A Proof that the AISR likelihood is unbiased

This appendix proves Theorem 1 using an iterated expectations argument on the simulated likelihood. A similar result is obtained in Proposition 7.4.1 of Section 7.4.2 of Del Moral (2004) by showing that the difference between the measure on the states induced by the particle filter and the limiting Feynman-Kac measure is a martingale. We believe that our proof, which deals specifically with the unbiasedness of the simulated likelihood, is simpler and more direct, and is accessible to a much wider range of readers.
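The unbiasedness result can also be checked numerically: averaging the simulated likelihood over many independent particle filter runs should recover the exact likelihood. The sketch below does this for a linear Gaussian model, where the Kalman filter gives the exact likelihood, using the SIR filter, i.e. the special case $g = p$ of the auxiliary filter; the model and parameter settings are illustrative choices of ours, not from the paper.

```python
import math
import random

def norm_pdf(x, m, v):
    # normal density with mean m and VARIANCE v
    return math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)

def kalman_loglik(y, phi, q, r):
    """Exact log-likelihood of the model x_1 ~ N(0, q/(1-phi^2)),
    x_{t+1} = phi x_t + N(0,q), y_t = x_t + N(0,r)."""
    m, P = 0.0, q / (1.0 - phi * phi)
    ll = 0.0
    for yt in y:
        S = P + r
        ll += math.log(norm_pdf(yt, m, S))
        K = P / S
        m, P = m + K * (yt - m), (1.0 - K) * P  # measurement update
        m, P = phi * m, phi * phi * P + q       # time update
    return ll

def pf_lik(y, phi, q, r, M, rng):
    """SIR particle filter estimate of the likelihood p(y_{1:T})."""
    xs = [rng.gauss(0.0, math.sqrt(q / (1.0 - phi * phi))) for _ in range(M)]
    lik = 1.0
    for yt in y:
        w = [norm_pdf(yt, x, r) for x in xs]
        lik *= sum(w) / M                        # likelihood increment
        xs = rng.choices(xs, weights=w, k=M)     # multinomial resampling
        xs = [phi * x + rng.gauss(0.0, math.sqrt(q)) for x in xs]
    return lik

# simulate a short series, then compare the average of many particle
# filter likelihood estimates with the exact Kalman filter likelihood
rng = random.Random(7)
phi, q, r = 0.8, 0.5, 1.0
x = rng.gauss(0.0, math.sqrt(q / (1.0 - phi * phi)))
y = []
for _ in range(5):
    y.append(rng.gauss(x, math.sqrt(r)))
    x = phi * x + rng.gauss(0.0, math.sqrt(q))
exact = math.exp(kalman_loglik(y, phi, q, r))
est = sum(pf_lik(y, phi, q, r, 100, rng) for _ in range(500)) / 500
print(exact, est)  # the two values should be close
```

Note that unbiasedness holds for the likelihood itself, not the log likelihood; the average of the raw estimates converges to the exact value even though each individual estimate is noisy.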

Before giving the proof, we define some terms that are used in Algorithm 1:

$$\hat{p}^A_M(x_t \mid y_{1:t}) = \sum_{k=1}^M \pi^k_t\,\delta(x_t - x^k_t), \quad\text{where } \pi^k_t \text{ is given in Step (4)},$$
$$\hat{g}^A_M(x_t \mid y_{1:t+1}) = \sum_{k=1}^M \pi^k_{t|t+1}\,\delta(x_t - x^k_t), \quad\text{where } x^k_t \sim \hat{p}^A_M(x_t \mid y_{1:t}),$$
$$\hat{g}^A_M(x_t \mid y_{1:t}) = \int g(x_t \mid \tilde{x}_{t-1}; y_t)\,\hat{g}^A_M(\tilde{x}_{t-1} \mid y_{1:t})\,d\tilde{x}_{t-1},$$
$$\omega_{t|t+1}(x_t) = g(y_{t+1} \mid x_t)\,\pi_t, \qquad \omega_{t+1}(x_{t+1}; x_t) = \frac{p(y_{t+1} \mid x_{t+1})\,p(x_{t+1} \mid x_t)}{g(y_{t+1} \mid x_t)\,g(x_{t+1} \mid x_t, y_{t+1})}.$$

The term $\hat{p}^A_M(x_t \mid y_{1:t})$ is the empirical filtering density arising from step 4 of Algorithm 1. The second term, $\hat{g}^A_M(x_t \mid y_{1:t+1})$, is the empirical “look ahead” approximation which is drawn from in step 2. The expression $\hat{g}^A_M(x_t \mid y_{1:t})$ is the filtering approximation which we draw from in step 3 (integrating over step 2). Furthermore, in Algorithm 1 we have $\omega^k_{t|t+1} = \omega_{t|t+1}(x^k_t)$ and $\pi^k_{t|t+1} = \omega^k_{t|t+1} \big/ \sum_{j=1}^M \omega^j_{t|t+1}$.

###### Lemma 1.
$$E\big[\hat{p}^A(y_t \mid y_{1:t-1}) \mid A_{t-1}\big] = \sum_{k=1}^M p(y_t \mid x^k_{t-1})\,\pi^k_{t-1},$$

where $A_{t-1} = \{x^k_{t-1}, \pi^k_{t-1};\ k = 1,\dots,M\}$ is the swarm of particles at time $t-1$.

###### Proof.
$$\begin{aligned}
E\big[\hat{p}^A(y_t \mid y_{1:t-1}) \,\big|\, A_{t-1}\big]
&= E\bigg[\frac{1}{M}\sum_{k=1}^M \omega_t(x^k_t;\tilde{x}^k_{t-1}) \,\bigg|\, A_{t-1}\bigg]\bigg\{\sum_{j=1}^M \omega^j_{t-1|t}\bigg\}\\
&= \int \omega_t(x_t;\tilde{x}_{t-1})\,g(x_t \mid \tilde{x}_{t-1}; y_t)\,\hat{g}^A_M(\tilde{x}_{t-1} \mid y_{1:t})\,dx_t\,d\tilde{x}_{t-1}\,\bigg\{\sum_{j=1}^M \omega^j_{t-1|t}\bigg\}\\
&= \int \sum_{k=1}^M \omega_t(x_t;x^k_{t-1})\,g(x_t \mid x^k_{t-1}; y_t)\,\frac{\omega_{t-1|t}(x^k_{t-1})}{\sum_{j=1}^M \omega_{t-1|t}(x^j_{t-1})}\,dx_t\,\bigg\{\sum_{j=1}^M \omega^j_{t-1|t}\bigg\}\\
&= \int \sum_{k=1}^M \omega_t(x_t;x^k_{t-1})\,g(x_t \mid x^k_{t-1}; y_t)\,\omega_{t-1|t}(x^k_{t-1})\,dx_t\\
&= \sum_{k=1}^M \int \frac{p(y_t \mid x_t)\,p(x_t \mid x^k_{t-1})}{g(y_t \mid x^k_{t-1})\,g(x_t \mid x^k_{t-1}, y_t)}\,g(x_t \mid x^k_{t-1}, y_t)\,g(y_t \mid x^k_{t-1})\,\pi^k_{t-1}\,dx_t,
\end{aligned}$$

so

$$E\big[\hat{p}^A(y_t \mid y_{1:t-1}) \,\big|\, A_{t-1}\big] = \sum_{k=1}^M \pi^k_{t-1}\int p(y_t \mid x_t)\,p(x_t \mid x^k_{t-1})\,dx_t = \sum_{k=1}^M p(y_t \mid x^k_{t-1})\,\pi^k_{t-1}.$$

###### Lemma 2.
$$E\big[\hat{p}^A(y_{t-h:t} \mid y_{1:t-h-1}) \mid A_{t-h-1}\big] = \sum_{k=1}^M p(y_{t-h:t} \mid x^k_{t-h-1})\,\pi^k_{t-h-1}.$$
###### Proof.

The proof is by induction on $h$.

A. We first show that the result holds for the case $h = 1$:

$$\begin{aligned}
E\big[\hat{p}^A(y_{t-1:t} \mid y_{1:t-2}) \mid A_{t-2}\big] &= E\big[\hat{p}^A(y_t \mid y_{1:t-1})\,\hat{p}^A(y_{t-1} \mid y_{1:t-2}) \mid A_{t-2}\big]\\
&= E\big[E\big[\hat{p}^A(y_t \mid y_{1:t-1}) \mid A_{t-1}\big]\,\hat{p}^A(y_{t-1} \mid y_{1:t-2}) \mid A_{t-2}\big].
\end{aligned}$$

The inner expectation is

$$E\big[\hat{p}^A(y_t \mid y_{1:t-1}) \,\big|\, A_{t-1}\big] = \sum_{k=1}^M p(y_t \mid x^k_{t-1})\,\pi^k_{t-1},$$

from Lemma 1. Hence,

$$E\big[\hat{p}^A(y_{t-1:t} \mid y_{1:t-2}) \mid A_{t-2}\big] = E\bigg[\bigg\{\sum_{k=1}^M p(y_t \mid x^k_{t-1})\,\pi^k_{t-1}\bigg\}\bigg\{\frac{1}{M}\sum_{i=1}^M \omega^i_{t-1}\bigg\} \,\bigg|\, A_{t-2}\bigg]\bigg\{\sum_{j=1}^M \omega^j_{t-2|t-1}\bigg\}$$