MCMC for nonlinear state space models
using ensembles of latent sequences
Nonlinear state space models are a widely used class of models for biological, economic, and physical processes. Fitting these models to observed data is a difficult inference problem that has no straightforward solution. We take a Bayesian approach to the inference of unknown parameters of a nonlinear state space model; this, in turn, requires the availability of efficient Markov chain Monte Carlo (MCMC) sampling methods for the latent (hidden) variables and model parameters. Using the ensemble technique of Neal (2010) and the embedded HMM technique of Neal (2003), we introduce a new MCMC method for nonlinear state space models. The key idea is to perform parameter updates conditional on an enormously large ensemble of latent sequences, as opposed to a single sequence, as with existing methods. We look at the performance of this ensemble method when doing Bayesian inference in the Ricker model of population dynamics. We show that for this problem, the ensemble method is vastly more efficient than a simple Metropolis method, and also substantially more efficient than a single-sequence embedded HMM method, when all methods are tuned appropriately. We also introduce a way of speeding up the ensemble method by performing partial backward passes to discard poor proposals at low computational cost, resulting in a further efficiency gain over the single-sequence method.
1 Introduction
Consider an observed sequence $y_1, \ldots, y_n$. In a state space model for $y_1, \ldots, y_n$, the distribution of the $y_i$'s is defined using a latent (hidden) Markov process $x_1, \ldots, x_n$. We can describe such a model in terms of a distribution for the first hidden state, $p(x_1)$, transition probabilities between hidden states, $p(x_i \mid x_{i-1})$, and emission probabilities, $p(y_i \mid x_i)$, with these distributions dependent on some unknown parameters $\theta$.
While the state space model framework is very general, only two classes of state space models, hidden Markov models (HMM's) with a finite state space and linear Gaussian models, have efficient, exact inference algorithms. The forward-backward algorithm for HMM's and the Kalman filter for linear Gaussian models allow us to perform efficient inference of the latent process, which in turn allows us to perform efficient parameter inference, using an algorithm such as Expectation-Maximization for maximum likelihood inference, or various MCMC methods for Bayesian inference. No such exact and efficient algorithms exist for models with a continuous state space with nonlinear state dynamics, non-Gaussian transition distributions, or non-Gaussian emission distributions, such as the Ricker model we consider later in this paper.
In cases where we can write down a likelihood function for the model parameters conditional on latent and observed variables, it is possible to perform Bayesian inference for the parameters and the latent variables by making use of sampling methods such as MCMC. For example, one can perform inference for the latent variables and the parameters by alternately updating them according to their joint posterior.
Sampling of the latent state sequence is difficult for state space models when the state space process has strong dependencies — for example, when the transitions between states are nearly deterministic. To see why, suppose we sample from $p(x_1, \ldots, x_n \mid y_1, \ldots, y_n, \theta)$ using Gibbs sampling, which samples the latent variables one at a time, conditional on the values of the other latent variables, the observed data, and the model parameters. In the presence of strong dependencies within the state sequence, the conditional distribution of a latent variable will be highly concentrated, and we will only be able to change it slightly at each variable update, even when the marginal distribution of this latent variable, given the data, is relatively diffuse. Consequently, exploration of the space of latent sequences will be slow.
The embedded HMM method of Neal (2003) and Neal et al. (2004) addresses this problem by updating the entire latent sequence at once. The idea is to temporarily reduce the state space of the model, which may be countably infinite or continuous, to a finite collection of randomly generated “pool” states at each time point. If the transitions between states are Markov, this reduced model is an HMM, for which we can use the forward-backward algorithm to efficiently sample a sequence with values in the pools at each time point. Pool states are chosen from a distribution that assigns positive probability to all possible state values, allowing us to explore the entire space of latent sequences in accordance with their exact distribution. Neal (2003) showed that when there are strong dependencies in the state sequence, the embedded HMM method performs better than conventional Metropolis methods at sampling latent state sequences.
In our paper, we first look at an MCMC method which combines embedded HMM updates of the hidden state sequence with random-walk Metropolis updates of the parameters. We call this method the ‘single-sequence’ method. We next reformulate the embedded HMM method as an ensemble MCMC method. Ensemble MCMC allows multiple candidate points to be considered simultaneously when a proposal is made. This allows us to consider an extension of the embedded HMM method for inference of the model parameters when they are unknown. We refer to this extension as the ensemble embedded HMM method. We then introduce and describe a “staged” method, which makes ensemble MCMC more efficient by rejecting poor proposals at low computational cost after looking at only a part of the observed sequence. We use the single-sequence, ensemble, and staged ensemble methods to perform Bayesian inference in the Ricker model of population dynamics, comparing the performance of these new methods to each other, and to a simple Metropolis sampler.
2 Ensemble MCMC
We first describe Ensemble MCMC, introduced by Neal (2010) as a general MCMC method, before describing its application to inference in state space models.
Ensemble MCMC is based on the framework of MCMC using a temporary mapping. Suppose we want to sample from a distribution with density $\pi(x)$ on $\mathcal{X}$. This can be done using a Markov chain with transition probability from $x$ to $x'$ given by $T(x' \mid x)$, for which $\pi$ is an invariant distribution — that is, $T$ must satisfy $\int \pi(x)\, T(x' \mid x)\, dx = \pi(x')$. The temporary mapping strategy defines $T$ as a composition of three stochastic mappings. The current state $x$ is stochastically mapped to a state $y \in \mathcal{Y}$ using the mapping $\hat{T}(y \mid x)$. Here, the space $\mathcal{Y}$ need not be the same as the space $\mathcal{X}$. The state $y$ is then updated to $y'$ using the mapping $\bar{T}(y' \mid y)$, and finally a state $x'$ is obtained using the mapping $\check{T}(x' \mid y')$. In this approach, we may choose whatever mappings we want, so long as the overall transition $T$ leaves $\pi$ invariant. In particular, if $\rho$ is a density for $y$, $T$ will leave $\pi$ invariant if the following conditions hold.
(1)  $\int \pi(x)\, \hat{T}(y \mid x)\, dx = \rho(y)$

(2)  $\int \rho(y)\, \bar{T}(y' \mid y)\, dy = \rho(y')$

(3)  $\int \rho(y')\, \check{T}(x' \mid y')\, dy' = \pi(x')$
In the ensemble method, we take $\mathcal{Y} = \mathcal{X}^K$, with $y = (x^{(1)}, \ldots, x^{(K)})$ referred to as an “ensemble”, and $K$ being the number of ensemble elements. The three mappings are then constructed as follows. Consider an “ensemble base measure” over ensembles with density $\zeta(x^{(1)}, \ldots, x^{(K)})$, and with marginal densities $\zeta_k(x^{(k)})$ for each of the ensemble elements. We define $\hat{T}$ as
(4)  $\hat{T}(y \mid x) = \dfrac{1}{K} \sum_{k=1}^{K} \delta_x\!\left(x^{(k)}\right) \zeta_{-k \mid k}\!\left(x^{(-k)} \mid x\right)$
Here, $\delta_x$ is a distribution that places a point mass at $x$, $x^{(-k)}$ is all of $y$ except $x^{(k)}$, and $\zeta_{-k \mid k}$ is the conditional density of all ensemble elements except the $k$-th, given the value $x$ for the $k$-th.
This mapping can be interpreted as follows. First, we select an integer $k$ from a uniform distribution on $\{1, \ldots, K\}$. Then, we set the ensemble element $x^{(k)}$ to $x$, the current state. Finally, we generate the remaining elements of the ensemble using the conditional density $\zeta_{-k \mid k}(\cdot \mid x)$.
The ensemble density $\rho$ is determined by $\hat{T}$ and $\pi$, and is given explicitly as

(5)  $\rho(y) = \zeta\!\left(x^{(1)}, \ldots, x^{(K)}\right) \dfrac{1}{K} \sum_{k=1}^{K} \dfrac{\pi\!\left(x^{(k)}\right)}{\zeta_k\!\left(x^{(k)}\right)}$
$\bar{T}$ can be any update (or sequence of updates) that leaves $\rho$ invariant. For example, $\bar{T}$ could be a Metropolis update for $y$, with a proposal drawn from some symmetrical proposal density. Finally, $\check{T}$ maps from $y'$ to $x'$ by randomly setting $x'$ to $x^{(k)}$, with $k$ chosen from $\{1, \ldots, K\}$ with probabilities proportional to $\pi\!\left(x^{(k)}\right)/\zeta_k\!\left(x^{(k)}\right)$.
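As a consistency check, one can verify directly that this construction satisfies condition (1): substituting the definition of $\hat{T}$ and integrating out $x$ gives

```latex
\begin{aligned}
\int \pi(x)\,\hat{T}(y \mid x)\,dx
  &= \int \pi(x)\,\frac{1}{K}\sum_{k=1}^{K}
     \delta_x\!\left(x^{(k)}\right)
     \zeta_{-k \mid k}\!\left(x^{(-k)} \mid x\right)dx \\
  &= \frac{1}{K}\sum_{k=1}^{K}
     \pi\!\left(x^{(k)}\right)
     \frac{\zeta\!\left(x^{(1)},\ldots,x^{(K)}\right)}{\zeta_k\!\left(x^{(k)}\right)} \\
  &= \zeta\!\left(x^{(1)},\ldots,x^{(K)}\right)
     \frac{1}{K}\sum_{k=1}^{K}
     \frac{\pi\!\left(x^{(k)}\right)}{\zeta_k\!\left(x^{(k)}\right)}
\end{aligned}
```

which is the ensemble density $\rho(y)$, as required.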
The mappings described above satisfy the necessary properties to make them a valid update, in the sense of preserving the stationary distribution $\pi$. The proof can be found in Neal (2010).
3 Embedded HMM MCMC as an ensemble MCMC method
The embedded HMM method briefly described in the introduction was not initially introduced as an ensemble MCMC method, but it can be reformulated as one. We assume here that we are interested in sampling from the posterior distribution of the state sequences, $p(x_1, \ldots, x_n \mid y_1, \ldots, y_n)$, when the parameters of the model are known. Suppose the current state sequence is $x = (x_1, \ldots, x_n)$. We want to update this state sequence in a way that leaves this posterior distribution invariant.
The first step of the embedded HMM method is to temporarily reduce the state space to a finite number of possible states at each time, turning our model into an HMM. This is done by, for each time $i$, generating a set of $L$ “pool” states, $\mathcal{P}_i = \{x_i^{[1]}, \ldots, x_i^{[L]}\}$, as follows. We first set the pool state $x_i^{[1]}$ to $x_i$, the value of the current state sequence at time $i$. The remaining pool states $x_i^{[\ell]}$, for $\ell = 2, \ldots, L$, are generated by sampling independently from some pool distribution with density $\kappa_i$. The collections of pool states at different times are selected independently of each other. The total number of sequences we can then construct using these pool states, by choosing one state from the pool at each time, is $L^n$.
The second step of the embedded HMM method is to choose a state sequence composed of pool states, with the probability of such a state sequence, $\pi(x)$, being proportional to

(6)  $p(x_1) \left[\prod_{i=2}^{n} p(x_i \mid x_{i-1})\right] \left[\prod_{i=1}^{n} \dfrac{p(y_i \mid x_i)}{\kappa_i(x_i)}\right]$

We can define $\lambda_i(x_i) = p(y_i \mid x_i)/\kappa_i(x_i)$, and rewrite $\pi(x)$ as proportional to

(7)  $p(x_1) \left[\prod_{i=2}^{n} p(x_i \mid x_{i-1})\right] \prod_{i=1}^{n} \lambda_i(x_i)$
We now note that the distribution $\pi$ takes the same form as the distribution over hidden state sequences for an HMM in which each $x_i$ takes values in the finite pool $\mathcal{P}_i$ — the initial state distribution is proportional to $p(x_1)$, the transition probabilities are proportional to $p(x_i \mid x_{i-1})$, and the $\lambda_i$ have the same role as emission probabilities. This allows us to use the well-known forward-backward algorithms for HMM’s (reviewed by Scott (2002)) to efficiently sample hidden state sequences composed of pool states. To sample a sequence with the embedded HMM method, we first compute the “forward” probabilities. Then, using a stochastic “backwards” pass, we select a state sequence composed of pool states. (We can alternatively compute backward probabilities and then do a stochastic forward pass.) We emphasize that having an efficient algorithm to sample state sequences is crucial for the embedded HMM method. The number of possible sequences we can compose from the pool states, $L^n$, can be very large, and so naive sampling methods would be impractical.
In detail, for $x \in \mathcal{P}_i$, the forward probabilities $\alpha_i(x)$ are computed using a recursion that goes forward in time, starting from $i = 1$. We start by computing $\alpha_1(x) = p(x)\, \lambda_1(x)$. Then, for $i = 2, \ldots, n$, the forward probabilities are given by the recursion

(8)  $\alpha_i(x) = \lambda_i(x) \sum_{x' \in \mathcal{P}_{i-1}} p(x \mid x')\, \alpha_{i-1}(x')$
The stochastic backwards recursion samples a state sequence, one state at a time, beginning with the state at time $n$. First, we sample $x_n$ from the pool $\mathcal{P}_n$ with probabilities proportional to $\alpha_n(x_n)$. Then, going backwards in time for $i$ from $n-1$ to $1$, we sample $x_i$ from the pool $\mathcal{P}_i$ with probabilities proportional to $\alpha_i(x_i)\, p(x_{i+1} \mid x_i)$, where $x_{i+1}$ is the value just sampled at time $i+1$. Both of these recursions are commonly implemented using logarithms of probabilities to avoid numerical underflow.
Let us now reformulate the embedded HMM method as an ensemble MCMC method. The step of choosing the pool states can be thought of as performing a mapping $\hat{T}$ which takes a single hidden state sequence $x$ and maps it to an ensemble of state sequences $(x^{(1)}, \ldots, x^{(K)})$, with $x^{(k)} = x$ for some $k$ chosen uniformly at random. (However, we note that in this context, the order of the elements in the ensemble does not matter.)
Since the randomly chosen pool states are independent under $\kappa_i$ at each time, and across time as well, the density of an ensemble of hidden state sequences in the ensemble base measure, $\zeta$, is defined through a product of $\kappa_i$'s over the pool states and over time, and is non-zero for ensembles consisting of all $L^n$ sequences composed from the chosen set of pool states. The corresponding marginal density of a hidden state sequence $x^{(k)}$ in the ensemble base measure is

(9)  $\zeta_k\!\left(x^{(k)}\right) = \prod_{i=1}^{n} \kappa_i\!\left(x^{(k)}_i\right)$
Together, $\zeta_k$ and $\zeta$ define the conditional density $\zeta_{-k \mid k}$, which is used to define $\hat{T}$. The mapping $\bar{T}$ is taken to be a null mapping that keeps the ensemble fixed at its current value, and the mapping $\check{T}$ to a single state sequence is performed by selecting a sequence $x^{(k)}$ from the ensemble with probabilities proportional to $\pi\!\left(x^{(k)}\right)/\zeta_k\!\left(x^{(k)}\right)$, in the same way as in the embedded HMM method.
4 The single-sequence embedded HMM MCMC method
Let us assume that the parameters $\theta$ of our model are unknown, and that we want to sample from the joint posterior distribution of state sequences $x$ and parameters $\theta$, with density $p(x, \theta \mid y)$. One way of doing this is by alternating embedded HMM updates of the state sequence with Metropolis updates of the parameters. Doing updates in this manner makes use of an ensemble to sample state sequences more efficiently in the presence of strong dependencies. However, this method only takes into account a single hidden state sequence when updating the parameters.
The update for the sequence is identical to that in the embedded HMM method, with initial, transition, and emission densities dependent on the current value of $\theta$. In our case, we only consider simple random-walk Metropolis updates for $\theta$, updating all components of $\theta$ simultaneously.
Evaluating the likelihood conditional on $x$ and $y$, as needed for Metropolis parameter updates, is computationally inexpensive relative to updates of the state sequence, which take time proportional to $nL^2$. It may be beneficial to perform several Metropolis parameter updates for every update of the state sequence, since this will not greatly increase the overall computational cost, and allows us to obtain samples with lower autocorrelation time.
5 An ensemble extension of the embedded HMM method
When performing parameter updates, we can look at all possible state sequences composed of pool states by using an ensemble in which each element includes a parameter value $\theta$, the same for each element of the ensemble. The $\bar{T}$ update could change both $\theta$ and the state sequences, but in the method we will consider here, we only change $\theta$.
To see why updating $\theta$ with an ensemble of sequences might be more efficient than updating it given a single sequence, consider a Metropolis proposal in ensemble space, which proposes to change $\theta$ for all of the ensemble elements, from $\theta$ to $\theta^*$. Such an update can be accepted whenever there are some elements in the proposed ensemble that make the ensemble density relatively large. That is, it is possible to accept a move in ensemble space with high probability as long as some elements of the proposed ensemble, with the new $\theta^*$, lie in a region of high posterior density. This is at least as likely to happen as having the proposed $\theta^*$ together with a single state sequence lie in a region of high posterior density.
Using ensembles makes sense when the ensemble density can be computed in an efficient manner, in less than $K$ times as long as it takes to compute $\pi(x, \theta)$ for a single hidden state sequence. Otherwise, one could make $K$ independent proposals to change $\theta$, which would have approximately the same computational cost as a single ensemble update, and likely be a more efficient way to sample $\theta$. In the application here, $K = L^n$ is enormous for typical values of $L$ and $n$, while computation of the ensemble density takes time proportional only to $nL^2$.
In detail, to compute the ensemble density, we need to sum $\pi$ over all ensemble elements, that is, over all hidden sequences which are composed of the pool states at each time. The forward algorithm described above makes it possible to compute the ensemble density efficiently by summing the probabilities at the end of the forward recursion over all pool states at time $n$. That is,

(10)  $\rho(\theta) \propto p(\theta) \sum_{x \in \mathcal{P}_n} \alpha_n(x \mid \theta)$

where $p(\theta)$ is the prior density of $\theta$.
The ensemble extension of the embedded HMM method can be thought of as using an approximation to the marginal posterior of $\theta$ when updating $\theta$, since summing the posterior density over a large collection of hidden sequences approximates integrating over all such sequences. Larger updates of $\theta$ may be possible with an ensemble because the marginal posterior of $\theta$, given the data, is more diffuse than the conditional distribution of $\theta$ given a fixed state sequence and the data. Note that since the ensemble of sequences is periodically changed, when new pool states are chosen, the ensemble method is a proper MCMC method that converges to the exact joint posterior, even though the set of sequences using pool states is restricted at each MCMC iteration.
The ensemble method is more computationally expensive than the single-sequence method — it requires two forward passes to evaluate the ensemble density for two values of $\theta$ and one backward pass to sample a hidden state sequence, whereas the single-sequence method requires only a single forward pass to compute the probabilities for every sequence in our ensemble, and a single backward pass to select a sequence.
Some of this additional computational cost can be offset by reusing the same pool states to do multiple updates of $\theta$. Once we have chosen a collection of pool states, and performed a forward pass to compute the ensemble density at the current value of $\theta$, we can remember it. Proposing an update of $\theta$ to $\theta^*$ requires us to compute the ensemble density at $\theta^*$ using a forward pass. Now, if this proposal is rejected, we can reuse the stored value of the ensemble density when we make another proposal using the same collection of pool states. If the proposal is accepted, we can instead remember the ensemble density at the accepted value. Keeping the pool fixed, and saving the current value of the ensemble density, therefore allows us to perform each subsequent ensemble update with a single forward pass, as opposed to two if we used a new pool for each update.
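The bookkeeping described here can be sketched as follows, with `log_rho` standing in for the expensive forward-pass computation of the log ensemble density and `propose` for a symmetric proposal; both names are hypothetical, and this is a sketch rather than the implementation used in the experiments.

```python
import math
import random

def ensemble_metropolis(theta, log_rho, propose, n_updates, cached=None):
    """Metropolis updates of theta given a fixed pool, caching the ensemble density.

    log_rho(theta) stands in for the expensive forward-pass computation of the
    log ensemble density; propose(theta) draws from a symmetric proposal.
    Caching means each update costs one forward pass (for the proposal)
    rather than two.
    """
    if cached is None:
        cached = log_rho(theta)              # one pass for the initial theta
    for _ in range(n_updates):
        theta_star = propose(theta)
        log_rho_star = log_rho(theta_star)   # one pass per proposal
        if math.log(random.random()) < log_rho_star - cached:
            theta, cached = theta_star, log_rho_star  # accept: remember new density
    return theta, cached
```

Returning the cached log density lets the caller carry it over to the next update with the same pool, so only the proposals cost forward passes.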
With a large number of pool states, reusing the pool states for two or more updates has only a small impact on performance, since with any large collection of pool states we essentially integrate over the state space. However, pool states must still be updated occasionally, to ensure that the method samples from the exact joint posterior.
6 Staged ensemble MCMC sampling
Having to compute the ensemble density given the entire observed sequence for every proposal, even those that are obviously poor, is a source of inefficiency in the ensemble method. If poor proposals can be eliminated at a low computational cost, the ensemble method could be made more efficient. We could then afford to make our proposals larger, accepting occasional large proposals while rejecting others at little cost.
We propose to do this by performing “staged” updates. First, we choose a part of the observed sequence that we believe is representative of the whole sequence. Then, we propose to update $\theta$ to a $\theta^*$ found using an ensemble update that only uses the part of the sequence we have chosen. If the proposal found by this “first stage” update is accepted, we perform a “second stage” ensemble update given the entire sequence, with $\theta^*$ as the proposal. If the proposal at the first stage is rejected, we do not perform a second stage update, and add the current value of $\theta$ to our sequence of sampled values. This can be viewed as a second stage update where the proposal is the current state — to do such an update, no computations need be performed.
Suppose that $\rho_1(\theta)$ is the ensemble density given the chosen part of the observed sequence, and $q(\theta^* \mid \theta)$ is the proposal density for constructing the first stage update. Then the acceptance probability for the first stage update is given by

(11)  $a_1(\theta \to \theta^*) = \min\left(1, \dfrac{\rho_1(\theta^*)\, q(\theta \mid \theta^*)}{\rho_1(\theta)\, q(\theta^* \mid \theta)}\right)$
If $\rho(\theta)$ is the ensemble density given the entire sequence, the acceptance probability for the second stage update is given by

(12)  $a_2(\theta \to \theta^*) = \min\left(1, \dfrac{\rho(\theta^*)\, q(\theta \mid \theta^*)\, a_1(\theta^* \to \theta)}{\rho(\theta)\, q(\theta^* \mid \theta)\, a_1(\theta \to \theta^*)}\right)$

Regardless of whether $\rho_1(\theta^*)\, q(\theta \mid \theta^*) \geq \rho_1(\theta)\, q(\theta^* \mid \theta)$ or vice versa, the above ratio simplifies to

(13)  $a_2(\theta \to \theta^*) = \min\left(1, \dfrac{\rho(\theta^*)\, \rho_1(\theta)}{\rho(\theta)\, \rho_1(\theta^*)}\right)$
Choosing a part of the observed sequence for the first stage update can be aided by looking at the acceptance rates at the first and second stages. We need the moves accepted at the first stage to also be accepted at a sufficiently high rate at the second stage, but we want the acceptance rate at the first stage to be low so the method will have an advantage over the ensemble method in terms of computational efficiency. We can also look at the ‘false negatives’ for diagnostic purposes, that is, how many proposals rejected at the first step would have been accepted had we looked at the entire sequence when deciding whether to accept.
We are free to select any portion of the observed sequence to use for the first stage. We will look here at using a partial sequence for the first stage consisting of the observations starting at some time $j$ and going until the end, at time $n$. This is appropriate for our example later in the paper, where we only have observations past a certain point in time.
For this scenario, a first stage update requires a partial backward pass, over times $n$ down to $j$. As we perform this partial backward pass to construct the first stage proposal, we save the vectors of “backward” probabilities. Then, if the first stage update is accepted, we continue the recursion for the full sequence starting from these saved backward probabilities, and compute the ensemble density given the entire sequence, avoiding recomputation of backward probabilities for the portion of the sequence used for the first stage.
To compute the backward probabilities $\beta_i(x)$, for $x \in \mathcal{P}_i$, we perform a backward recursion, starting at time $n$. We first set $\beta_n(x)$ to $1$ for all $x \in \mathcal{P}_n$. We then compute, for $i = n-1, \ldots, j$,

(14)  $\beta_i(x) = \sum_{x' \in \mathcal{P}_{i+1}} \lambda_{i+1}(x')\, p(x' \mid x)\, \beta_{i+1}(x')$
We compute the first stage ensemble density as follows:

(15)  $\rho_1(\theta) \propto p(\theta) \sum_{x \in \mathcal{P}_j} \nu_j(x)\, \lambda_j(x)\, \beta_j(x)$

Here, $\nu_j$ stands in for the marginal density $p(x_j)$, which we do not know, but for which we can choose a substitute (this affects only performance, not correctness). One possibility is a uniform distribution over the pool states at time $j$, which is what we will use in our example below.
The ensemble density for the full sequence can be obtained by performing the backward recursion up to the beginning of the sequence, and then computing

(16)  $\rho(\theta) \propto p(\theta) \sum_{x \in \mathcal{P}_1} p(x)\, \lambda_1(x)\, \beta_1(x)$
To see how much computation time is saved with staged updates, we measure computation time in terms of the time it takes to do a backward (or equivalently, forward) pass — generally, the most computationally expensive operation in ensemble MCMC — counting a backward pass that goes back only to time $j$ as $(n-j+1)/n$ of a pass.
Let us suppose that the acceptance rate for first stage updates is $a_1$. An ensemble update that uses the full sequence requires us to perform two backward passes if we update the pool at every iteration, whereas a staged ensemble update will require us to perform

(17)  $1 + \dfrac{n-j+1}{n} + a_1 \dfrac{j-1}{n}$

backward passes on average. The first term in the above formula represents the full backward pass for the initial value of $\theta$ that is always needed — either for a second stage update, or for mapping to a new set of pool states. The second term represents the partial backward pass we need to perform to construct the first stage proposal. The third term accounts for having to compute the remainder of a backward pass if we accept the first stage proposal — hence it is weighted by the first stage acceptance rate $a_1$.
We can again save computation time by updating the pool less frequently. Suppose we decide to update the pool every $m$ iterations. Without staging, the ensemble method would require a total of $m+1$ forward (or equivalently, backward) passes. With staged updates, the expected number of backward passes we would need to perform is

(18)  $1 + m\left(\dfrac{n-j+1}{n} + a_1 \dfrac{j-1}{n}\right)$

as can be seen by generalizing the expression above to $m > 1$.
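As a small worked example of this accounting (a sketch that simply evaluates the formula above, under the stated convention that a pass back to time $j$ counts as $(n-j+1)/n$ of a full pass):

```python
def expected_passes(n, j, a1, m=1):
    """Expected full-pass equivalents for m staged updates sharing one pool;
    m = 1 recovers the single-update formula."""
    first_stage = (n - j + 1) / n      # partial pass for the first stage proposal
    remainder = (j - 1) / n            # completed only if the first stage accepts
    return 1 + m * (first_stage + a1 * remainder)
```

For example, with $n = 100$, $j = 81$, and a first stage acceptance rate of $0.25$, a staged update costs $1.4$ pass-equivalents instead of $2$, and ten updates sharing one pool cost $5$ instead of $11$.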
7 Constructing pools of states
An important aspect of these embedded HMM methods is the selection of pool states at each time $i$, which determines the mapping from a single state sequence to an ensemble. In this paper, we will only consider sampling pool states independently from some distribution with density $\kappa_i$ (though dependent pool states are considered in Neal (2003)).
One choice of $\kappa_i$ is the conditional density of $x_i$ given $y_i$, based on some pseudo-prior distribution $\eta_i$ for $x_i$ — that is, $\kappa_i(x_i) \propto \eta_i(x_i)\, p(y_i \mid x_i)$. This distribution approximates, to some extent, the marginal posterior of $x_i$ given all observations. We hope that such a pool distribution will produce sequences in our ensemble which have a high posterior density, and as a result make sampling more efficient.
To motivate how we might go about choosing $\kappa_i$, consider the following. Suppose we sample values of $\theta$ from the model prior, and then sample hidden state sequences given the sampled values of $\theta$. We can then think of $\kappa_i$ as a distribution that is consistent with this distribution of hidden state sequences, which is in turn consistent with the model prior. In this paper, we choose $\kappa_i$ heuristically, setting it to what we believe to be reasonable, and not in violation of the model prior.
Note that the choice of $\kappa_i$ affects only sampling efficiency, not correctness (as long as it is non-zero for all possible $x_i$). However, when we choose pool states in the ensemble method, we cannot use current values of $\theta$, since this would make an ensemble update non-reversible. This restriction does not apply to the single-sequence method, since in that case $\theta$ is fixed during the sequence update.
8 Performance comparisons on a population dynamics model
We test the performance of the proposed methods on the Ricker population dynamics model, described by Wood (2010). This model assumes that a population of size $N_i$ (modeled as a real number) evolves as $N_i = r N_{i-1} \exp(-N_{i-1} + e_i)$, with the $e_i$ independent, each having a $N(0, \sigma^2)$ distribution. We don't observe this process directly, but rather we observe $Y_i$'s whose distribution is Poisson$(\phi N_i)$. The goal is to infer $\theta = (r, \sigma, \phi)$. This is considered to be a fairly complex inference scenario, as evidenced by the application of recently developed inference methods such as Approximate Bayesian Computation (ABC) to this model. (See Fearnhead and Prangle (2012) for more on the ABC approach.) This model can be viewed as a nonlinear state space model, with $N_i$ as our state variable.
MCMC inference in this model can be inefficient for two reasons. First, when the value of $\sigma$ in the current MCMC iteration is small, consecutive states are highly dependent, so the distribution of each state, conditional on the remaining states and the data, is highly concentrated, making it hard to efficiently sample state sequences one state at a time. An MCMC method based on embedding an HMM into the state space, either the single-sequence method or the ensemble method, can potentially make state sequence sampling more efficient, by sampling whole sequences at once. The second reason is that the distribution of $\theta$, given a single sequence of states and the data, can be concentrated as well, so we may not be able to efficiently explore the posterior distribution of $\theta$ by alternately sampling $\theta$ and the states. By considering an ensemble of sequences, we may be able to propose and accept larger changes to $\theta$. This is because the posterior density of $\theta$ summed over an ensemble of state sequences is less concentrated than it is given a single sequence.
To test our MCMC methods, we consider a scenario similar to those considered by Wood (2010) and Fearnhead and Prangle (2012), using comparable parameter values. We generated a sequence of points from the Ricker model, with the $y_i$ observed only from a certain time onward, mimicking a situation where we do not observe a population right from its inception.
We put a uniform prior on each of the three parameters. Instead of $N_i$, we use $M_i = \log(N_i)$ as our state variables, since we found that doing this makes MCMC sampling more efficient. With these state variables, our model can be written as
(19)  $M_1 \sim N\!\left(\log(r) + \log(N_0) - N_0,\ \sigma^2\right)$, where $N_0$ is the initial population size

(20)  $M_i \mid M_{i-1} = m_{i-1} \sim N\!\left(\log(r) + m_{i-1} - \exp(m_{i-1}),\ \sigma^2\right)$, for $i = 2, \ldots, n$

(21)  $Y_i \mid M_i = m_i \sim \text{Poisson}\!\left(\phi \exp(m_i)\right)$
Furthermore, the MCMC state uses the logarithms of the parameters, $(\log(r), \log(\sigma), \log(\phi))$.
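For concreteness, data from this model can be generated as follows. The parameter values in the example call are illustrative of the scale used in this literature, not the exact setting of the experiments reported here, and the function name is our own.

```python
import numpy as np

def simulate_ricker(n, r, sigma, phi, n0=1.0, seed=None):
    """Simulate the Ricker model N_i = r * N_{i-1} * exp(-N_{i-1} + e_i),
    e_i ~ N(0, sigma^2), with observations Y_i ~ Poisson(phi * N_i)."""
    rng = np.random.default_rng(seed)
    N = np.empty(n)
    prev = n0
    for i in range(n):
        prev = r * prev * np.exp(-prev + sigma * rng.standard_normal())
        N[i] = prev
    y = rng.poisson(phi * N)
    return np.log(N), y   # return the log-scale states M_i used for sampling

# illustrative parameter values (assumed for this example only)
M, y = simulate_ricker(100, r=np.exp(3.8), sigma=0.3, phi=10.0, seed=1)
```

The boom-and-bust dynamics produced by a large $r$ are what make the latent states strongly dependent when $\sigma$ is small.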
For parameter updates in the MCMC methods compared below, we used independent normal proposals for each parameter, centered at the current parameter values, proposing updates to all parameters at once. To choose appropriate proposal standard deviations, we did a number of runs of the single-sequence and the ensemble methods, and used these trial runs to estimate the marginal posterior standard deviation of the logarithm of each parameter. We then scaled each estimated marginal standard deviation by the same factor, and used the scaled estimates as the proposal standard deviations for the corresponding parameters. We capped the scaling factor, since beyond a certain point our proposals would often lie outside the high probability region of the marginal posterior density.
We first tried a simple Metropolis sampler on this problem. This sampler updates the latent states one at a time, using Metropolis updates to sample from the full conditional density of each state, which for $1 < i < n$ is given by

(22)  $p(m_i \mid m_{-i}, y, \theta) \propto p(m_i \mid m_{i-1})\, p(m_{i+1} \mid m_i)\, p(y_i \mid m_i)$

(23)  $p(m_i \mid m_{i-1}) \propto \exp\!\left(-\left(m_i - \log(r) - m_{i-1} + \exp(m_{i-1})\right)^2 / 2\sigma^2\right)$

(24)  $p(m_{i+1} \mid m_i) \propto \exp\!\left(-\left(m_{i+1} - \log(r) - m_i + \exp(m_i)\right)^2 / 2\sigma^2\right)$

(25)  $p(y_i \mid m_i) \propto \left(\phi \exp(m_i)\right)^{y_i} \exp\!\left(-\phi \exp(m_i)\right)$

For $i = 1$, the factor $p(m_i \mid m_{i-1})$ is replaced by $p(m_1)$; for $i = n$, the factor $p(m_{i+1} \mid m_i)$ is omitted; and when $y_i$ is not observed, the factor $p(y_i \mid m_i)$ is omitted.
We started the Metropolis sampler with the parameters set to their prior means, and the hidden states to values randomly chosen from the pool distributions we used for the embedded HMM methods below. After we perform a pass over the latent variables, updating each one in turn, we perform a Metropolis update of the parameters, using an appropriately scaled version of the proposal density described above.
The latent states are updated sequentially, starting from $i = 1$ and going up to $n$. When updating each latent variable $M_i$, we use a Normal proposal distribution centered at its current value, with the following proposal standard deviations. When $y_i$ is not observed, we use the current value of $\sigma$ from the state, times a fixed scaling factor. When $y_i$ is observed, we use a proposal standard deviation of $(1/\sigma^2 + y_i)^{-1/2}$ (with $\sigma$ from the state). This choice can be motivated as follows. An estimate of the precision of $M_i$ given $M_{i-1}$ is $1/\sigma^2$. Furthermore, $Y_i$ given $M_i$ is Poisson, so that $\mathrm{Var}(Y_i) = \phi \exp(M_i)$ and $\mathrm{Var}(\log(Y_i)) \approx 1/(\phi \exp(M_i))$, which we can estimate by $1/y_i$. So an estimate for the precision of $M_i$ given $y_i$ is $y_i$. We combine these two estimates of precision to get the proposal standard deviation $(1/\sigma^2 + y_i)^{-1/2}$ for the latent variables in the case when $y_i$ is observed.
We ran the Metropolis sampler from five different starting points. The acceptance rate for parameter updates depended on the initial starting value, and the acceptance rate for latent variable updates varied with the run and the particular latent variable being sampled.
We found that the simple Metropolis sampler does not perform well on this problem. This is most evident for sampling $\sigma$. The Metropolis sampler appears to explore different regions of the posterior when it is started from different initial hidden state sequences. That is, the Metropolis sampler can get stuck in various regions of the posterior for extended periods of time. An example of the behaviour of the Metropolis sampler can be seen in Figure 1. The autocorrelations for the parameters are so high that accurately estimating them would require a much longer run.
This suggests that more sophisticated MCMC methods are necessary for this problem. We next looked at the singlesequence method, the ensemble method, and the staged ensemble method.
All embedded HMM MCMC-based samplers require us to choose pool states for each time $i$. For the $i$'s where no $y_i$ is observed, we choose our pool states by sampling values from a pseudo-prior for the Poisson mean — a Gamma distribution with shape $\alpha$ and rate $\beta$ — and then taking logarithms of the sampled values. For the $i$'s where we observe $y_i$, we choose our pool states by sampling values from the conditional density of the Poisson mean given $y_i$, with the same Gamma distribution as the pseudo-prior. Since $Y_i$ is Poisson, this conditional density is again a Gamma distribution.
It should be noted that since we choose our pool states by sampling values of the Poisson mean, while our model is written in terms of log values, the pool density must take this change of variables into account. In particular, when $y_i$ is unobserved, a pool state $x$ obtained as the logarithm of a draw from a Gamma distribution with shape $\alpha$ and rate $\beta$ has density

(26)  $\kappa_i(x) = \dfrac{\beta^{\alpha}}{\Gamma(\alpha)} \exp\!\left(\alpha x - \beta e^{x}\right)$

and when $y_i$ is observed we replace $\alpha$ with $\alpha + y_i$ and $\beta$ with $\beta + 1$. We use the same way of choosing the pool states for all three methods we consider.
The staged ensemble MCMC method also requires us to choose a portion of the sequence to use for the first stage update. On the basis of several trial runs, we chose to use the last portion of the observed sequence, from time $j$ onward, for the first stage.
As we mentioned earlier, when likelihood evaluations are computationally inexpensive relative to sequence updates, we can do multiple parameter updates for each update of the hidden state sequence, without incurring a large increase in computation time. For the singlesequence method, we do ten Metropolis updates of the parameters for each update of the hidden state sequence. For the ensemble method, we do five updates of the parameters for each update of the ensemble. For staged ensemble MCMC, we do ten parameter updates for each update of the ensemble. These choices were based on numerous trial runs.
We thin each of the singlesequence runs by a factor of ten when computing autocorrelation times — hence the time per iteration for the singlesequence method is the time it takes to do a single update of the hidden sequence and ten Metropolis updates. We do not thin the ensemble and staged ensemble runs.
To compare the performance of the embedded HMM methods, and to tune each method, we looked at the autocorrelation time, τ, of the sequence of parameter samples, for each parameter. The autocorrelation time can be thought of as the number of samples we need to draw using our Markov chain to get the equivalent of one independent point (Neal (1993)). It is defined as
τ = 1 + 2 Σ_{k=1}^{∞} ρ_k    (27)

where ρ_k is the autocorrelation at lag k for a function of state that is of interest. Here, ρ_k is estimated as γ_k / γ_0, where γ_k is the estimated autocovariance at lag k. We estimate each γ_k by estimating autocovariances using each of the five runs, using the overall mean from the five runs, and then averaging the resulting autocovariance estimates. We then estimate τ by

τ̂ = 1 + 2 Σ_{k=1}^{K} ρ_k    (28)

Here, the truncation point K is chosen to be the lag beyond which the remaining ρ_k's are nearly zero.
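The truncated-sum estimator of the autocorrelation time can be sketched in Python as below. The truncation rule (stop at the first lag where the estimated autocorrelation falls below a small threshold) and the threshold value are our own illustrative choices; the optional `mean` argument allows passing an overall mean computed from several runs, as done in the text.

```python
import numpy as np

def act(chain, mean=None, max_lag=1000, threshold=0.05):
    """Estimate the autocorrelation time tau = 1 + 2 * sum_k rho_k,
    truncating the sum once the estimated autocorrelations are
    nearly zero (here: below `threshold`)."""
    x = np.asarray(chain, dtype=float)
    mu = x.mean() if mean is None else mean
    d = x - mu
    n = len(d)
    lags = min(n // 2, max_lag)
    # Estimated autocovariances gamma_k, then autocorrelations rho_k
    acov = np.array([np.dot(d[:n - k], d[k:]) / n for k in range(lags)])
    rho = acov / acov[0]
    below = np.where(rho < threshold)[0]
    K = int(below[0]) if below.size else lags
    return 1.0 + 2.0 * float(rho[1:K].sum())
```

With five runs, one would estimate the autocovariances from each run using the overall mean and average them before forming the ρ_k; the single-chain version above shows only the truncated sum.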
For our comparisons to have practical validity, we need to multiply each estimate of autocorrelation time by the time it takes to perform a single iteration. A method that produces samples with a lower autocorrelation time is often more computationally expensive than a method that produces samples with a higher autocorrelation time, and if the difference in computation time is sufficiently large, the computationally cheaper method can be more efficient overall. Computation times per iteration were obtained with a program written in MATLAB on a Linux system with an Intel Xeon X5680 3.33 GHz CPU.
For each number of pool states considered, we started the samplers from five different initial states. As with the Metropolis method, the parameters were initialized to the prior means, and the hidden sequence was initialized to states randomly chosen from the pool distribution at each time. When estimating autocorrelations, we discarded the initial portion of each run as burn-in.
An example ensemble run is shown in Figure 2. Comparing with Figure 1, one can see that the ensemble run has an enormously lower autocorrelation time. Autocorrelation time estimates for the various methods, along with computation times, proposal scalings, and acceptance rates, are presented in Table 1. For the staged ensemble method, the acceptance rates are shown for the first and second stages. We also estimated the parameters of the model by averaging estimates of the posterior means from each of the five runs for the single-sequence method, the ensemble method, and the staged ensemble method. The results are presented in Table 2. The estimates using the three different methods do not differ significantly.
Table 1. Sampler settings, autocorrelation times (ACT), and ACT multiplied by time per iteration. The three values in each ACT cell correspond to the parameters r, σ, and φ.

Method | Pool states | Scaling | Acc. rate | Iterations | Time/iter. (s) | ACT (r / σ / φ) | ACT × time (r / σ / φ)
Single-sequence | 10 | | | 400,000 | 0.09 | 4345 / 2362 / 7272 | 391 / 213 / 654
Single-sequence | 20 | | | 400,000 | 0.17 | 779 / 1875 / 1849 | 132 / 319 / 314
Single-sequence | 40 | 0.25 | 12% | 200,000 | 0.31 | 427 / 187 / 1317 | 132 / 58 / 408
Single-sequence | 60 | | | 200,000 | 0.47 | 329 / 155 / 879 | 155 / 73 / 413
Single-sequence | 80 | | | 200,000 | 0.61 | 294 / 134 / 869 | 179 / 82 / 530
Ensemble | 40 | 0.6 | 13% | 100,000 | 0.23 | 496 / 335 / 897 | 114 / 77 / 206
Ensemble | 60 | 1 | 11% | 100,000 | 0.34 | 187 / 115 / 167 | 64 / 39 / 57
Ensemble | 80 | 1 | 16% | 100,000 | 0.47 | 107 / 55 / 90 | 50 / 26 / 42
Ensemble | 120 | 1.4 | 14% | 100,000 | 0.76 | 52 / 41 / 45 | 40 / 31 / 34
Ensemble | 180 | 1.8 | 12% | 100,000 | 1.26 | 35 / 27 / 29 | 44 / 34 / 37
Staged ensemble | 40 | 1 | 30%, 15% | 100,000 | 0.10 | 692 / 689 / 1201 | 69 / 69 / 120
Staged ensemble | 60 | 1.4 | 27%, 18% | 100,000 | 0.14 | 291 / 373 / 303 | 41 / 52 / 42
Staged ensemble | 80 | 1.4 | 30%, 25% | 100,000 | 0.21 | 187 / 104 / 195 | 39 / 22 / 41
Staged ensemble | 120 | 1.8 | 25%, 29% | 100,000 | 0.29 | 75 / 59 / 70 | 22 / 17 / 20
Staged ensemble | 180 | 2 | 23%, 32% | 100,000 | 0.45 | 48 / 44 / 52 | 22 / 20 / 23
Table 2. Estimates of the posterior means of the parameters, averaged over the five runs, with standard errors.

Method | r | σ | φ
Single-sequence | 44.46 (±0.09) | 0.2074 (±0.0012) | 1.9921 (±0.0032)
Ensemble | 44.65 (±0.09) | 0.2089 (±0.0009) | 1.9853 (±0.0013)
Staged ensemble | 44.57 (±0.04) | 0.2089 (±0.0010) | 1.9878 (±0.0015)
From these results, one can see that for our Ricker model example, the single-sequence method is less efficient than the ensemble method, when both methods are well-tuned and computation time is taken into account. We also found that the staged ensemble method allows us to further improve the performance of the ensemble method. In detail, depending on the parameter one looks at, the best-tuned ensemble method without staging (with 120 pool states) is between 1.9 and 12.0 times more efficient than the best-tuned single-sequence method (with 40 pool states). The best-tuned ensemble method with staging (120 pool states) is between 3.4 and 20.4 times more efficient than the best single-sequence method.
The large drop in autocorrelation time for σ for the single-sequence method between 20 and 40 pool states is due to poor mixing in one of the five runs. To check whether this is systematic, we did five more runs of the single-sequence method, from another set of starting values, and found that the same problem was again present in one of the five runs. This is indicative of a risk of poor mixing when using the single-sequence sampler with a small number of pool states. We did not observe similar problems for larger numbers of pool states.
The results in Table 1 show that the performance improvement is greatest for the parameter φ. One reason for this may be that the posterior distribution of φ given a single sequence is significantly more concentrated than its marginal posterior distribution. Since, for a sufficiently large number of pool states, the posterior distribution given an ensemble approximates the marginal posterior distribution, the posterior distribution of φ given an ensemble will be relatively more diffuse than the posterior distributions of r and σ. This leads to a larger relative performance improvement when sampling values of φ.
Evidence of this can be seen in Figure 3. To produce it, we took the hidden state sequence and parameter values from the end of one of our ensemble runs, and performed Metropolis updates for the parameters while keeping the hidden state sequence fixed. We also took the same hidden state sequence and parameter values, mapped the hidden sequence to an ensemble of sequences by generating a collection of pool states, and performed Metropolis updates of the parameter part of the ensemble, keeping the pool states fixed. We then drew samples of the parameters given the single fixed sequence and given the fixed ensemble.
Visually, we can see that the posterior of φ given a single sequence is significantly more concentrated than both the marginal posterior and the posterior given a fixed ensemble of sequences. Comparing standard deviations, the marginal posterior of φ is substantially more diffuse than the posterior of φ given a single sequence, while for r and σ the corresponding increase is much smaller. The same pattern holds for the posterior given our fixed ensemble of sequences: its standard deviation is closest to the marginal for φ. This is consistent with our explanation above.
We note that the actual timings of the runs differ from what one might expect. In theory, the computation time should scale as L²n, where L is the number of pool states and n is the number of observations. However, the implementation we use, in MATLAB, implements the forward pass as a nested loop over times and over pool states, with another inner loop over pool states vectorized. MATLAB is an interpreted language with slow loops, and vectorizing loops generally leads to vast performance improvements. For the numbers of pool states we use, the computational cost of the vector operation corresponding to the inner loop is very low compared to the interpreter overhead of the outer loops. As a result, the total computational cost scales approximately linearly with L, in the range of values of L we considered. An implementation in a different language might lead to different optimal pool state settings.
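To make this loop structure concrete, here is a generic sketch in Python (the experiments used MATLAB): the interpreted outer loop runs once per time step, while the O(L²) sum over previous pool states is a single vectorized matrix product. The function returns the log of the sum of the final forward probabilities.

```python
import numpy as np

def forward_pass_loglik(log_init, log_trans, log_emit):
    """Generic HMM forward pass over a fixed pool of L states.

    log_init:  (L,)        log initial probabilities of the pool states
    log_trans: (n-1, L, L) log transition probabilities between pool states
    log_emit:  (n, L)      log emission probabilities

    The outer loop over the n time steps is interpreted; the O(L^2) sum
    over previous states is one vectorized matrix product.  Returns the
    log of the sum of the final forward probabilities."""
    alpha = log_init + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        m = alpha.max()  # stabilize the log-sum-exp
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_trans[t - 1])) + log_emit[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

In an interpreted language, nearly all of the wall-clock time for moderate L goes into the per-step loop overhead rather than the matrix product, which is why the observed cost grows roughly linearly in L.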
The original examples of Wood (2010) and Fearnhead and Prangle (2012) used a larger value of φ than the φ = 2 used here. We found that for the larger value, the ensemble method still performs better than the single-sequence method when computation time is taken into account, though the difference in performance is not as large. When φ is larger, the observed counts are larger on average as well. As a result, the data are more informative about the values of the hidden state variables, and the marginal posteriors of the model parameters are more concentrated. Consequently, though we would still expect the ensemble method to improve performance, we would not expect the improvement to be as large as for φ = 2.
Finally, we would like to note that if the single-sequence method is already implemented, implementing the ensemble method is very simple, since all that is needed is an additional call of the forward pass function and a summation of the final forward probabilities. So, the performance gains from using an ensemble for parameter updates can be obtained with little additional programming effort.
9 Conclusion
We found that both the embedded HMM MCMC method and its ensemble extension perform significantly better than the ordinary Metropolis method for doing Bayesian inference in the Ricker model. This suggests that it would be promising to investigate other state space models with nonlinear state dynamics, and see if it is possible to use the embedded HMM methods we described to perform inference in these models.
Our results also show that using staged proposals further improves the performance of the ensemble method. It would be worthwhile to look at other scenarios where this technique might be applied.
Most importantly, however, our results suggest that looking at multiple hidden state sequences at once can make parameter sampling in state space models noticeably more efficient, and so indicate a direction for further research in the development of MCMC methods for nonlinear, nonGaussian state space models.
Acknowledgements
This research was supported by the Natural Sciences and Engineering Research Council of Canada. A. S. is in part funded by an NSERC Postgraduate Scholarship. R. N. holds a Canada Research Chair in Statistics and Machine Learning.
References

Fearnhead, P. and Prangle, D. (2012) “Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation”, Journal of the Royal Statistical Society, Series B, vol. 74, pp. 1–28.

Neal, R. M. (2010) “MCMC Using Ensembles of States for Problems with Fast and Slow Variables such as Gaussian Process Regression”, Technical Report No. 10-11, Department of Statistics, University of Toronto.

Neal, R. M. (2003) “Markov Chain Sampling for Non-linear State Space Models using Embedded Hidden Markov Models”, Technical Report No. 03-04, Department of Statistics, University of Toronto.

Neal, R. M., Beal, M. J., and Roweis, S. T. (2004) “Inferring state sequences for non-linear systems with embedded hidden Markov models”, in S. Thrun, et al. (editors), Advances in Neural Information Processing Systems 16, MIT Press.

Neal, R. M. (1993) “Probabilistic Inference Using Markov Chain Monte Carlo Methods”, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto.

Scott, S. L. (2002) “Bayesian Methods for Hidden Markov Models: Recursive Computing in the 21st Century”, Journal of the American Statistical Association, vol. 97, no. 457, pp. 337–351.

Wood, S. (2010) “Statistical inference for noisy nonlinear ecological dynamic systems”, Nature, vol. 466, pp. 1102–1104.