Stability of Feynman-Kac formulae with path-dependent potentials

Abstract

Several particle algorithms admit a Feynman-Kac representation in which the potential function may be expressed as a recursive function that depends on the complete state trajectory. An important example is the mixture Kalman filter, but other models and algorithms of practical interest fall into this category. We study the asymptotic stability of such particle algorithms as time goes to infinity. As corollaries, we derive practical conditions for the stability of the mixture Kalman filter and of a mixture GARCH filter. Finally, we show that our results can also lead to weaker conditions for the stability of standard particle algorithms, in which the potential function depends on the last state only.

1 Introduction

The most common application of the theory of Feynman-Kac formulae (see e.g. Del Moral, 2004) is the nonlinear filtering of a hidden Markov chain based on an observed process. In such settings, the potential function at a given time typically depends only on the current state. The uniform stability of the corresponding particle approximations can be obtained under appropriate conditions; see Section 7.4.3 of the aforementioned book and references therein. For a good overview of the theoretical and methodological aspects of particle approximation algorithms, also known as particle filtering algorithms, see also Doucet et al. (2001), Künsch (2001), and Cappé et al. (2005).

There are, however, several applications of practical interest where the potential function depends on the complete state trajectory. The corresponding particle filtering algorithms still have a fixed computational cost per iteration, because the potential can be computed using recursive formulae. An important example is the class of conditionally linear Gaussian dynamic models, where the conditioning is on some unobserved Markov chain. The corresponding particle algorithm is known as the mixture Kalman filter (Chen and Liu, 2000; see also Example 7 in Doucet et al., 2000, and Andrieu and Doucet, 2002, for a related algorithm): the potential function at each time is then a Gaussian density, the parameters of which are computed recursively using the Kalman-Bucy filter (Kalman and Bucy, 1961). Another example is the mixture GARCH model considered in Chopin (2007).
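To fix ideas, here is a minimal sketch of one step of a mixture Kalman filter, written for an illustrative scalar conditionally linear Gaussian model; the transition matrix P, the coefficients a, c, the variances q, s, and the function name mkf_step are assumptions introduced for this example only, not objects taken from the paper. The point is that the incremental weight of each particle is the Gaussian predictive density of the new observation, whose mean and variance are produced by the Kalman recursion attached to that particle's regime trajectory, so the potential is a recursive function of the whole path.

import numpy as np

# Illustrative scalar conditionally linear Gaussian model (an assumption for this
# sketch, not the paper's model):
#   r_t : Markov chain on {0, 1} with transition matrix P (the regime)
#   z_t = a[r_t] * z_{t-1} + N(0, q[r_t])   (linear state, marginalised analytically)
#   y_t = c[r_t] * z_t     + N(0, s[r_t])   (observation)
P = np.array([[0.95, 0.05], [0.10, 0.90]])
a = np.array([0.9, 0.5]); c = np.array([1.0, 1.0])
q = np.array([0.1, 1.0]); s = np.array([0.5, 0.5])

def mkf_step(rng, r, m, v, w, y):
    """One mixture Kalman filter step: propagate the regimes, run one Kalman
    update per particle, weight by the Gaussian predictive density of y,
    then resample."""
    n = r.shape[0]
    # propagate the discrete regime of each particle
    r = np.array([rng.choice(2, p=P[ri]) for ri in r])
    # Kalman prediction for the marginalised linear state
    m_pred = a[r] * m
    v_pred = a[r] ** 2 * v + q[r]
    # predictive density of y given the regime path: this Gaussian density is
    # precisely the path-dependent potential of the Feynman-Kac representation
    y_mean = c[r] * m_pred
    y_var = c[r] ** 2 * v_pred + s[r]
    G = np.exp(-0.5 * (y - y_mean) ** 2 / y_var) / np.sqrt(2.0 * np.pi * y_var)
    # Kalman correction
    gain = c[r] * v_pred / y_var
    m = m_pred + gain * (y - y_mean)
    v = (1.0 - gain * c[r]) * v_pred
    # reweight and resample (multinomial, for simplicity)
    w = w * G
    w = w / w.sum()
    idx = rng.choice(n, size=n, p=w)
    return r[idx], m[idx], v[idx], np.full(n, 1.0 / n)

A full run would simply iterate mkf_step over the observed sequence; note that the weight G depends on the entire regime path only through the recursively updated pair (m, v).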

It is worth noting that models in which the potential functions are path-dependent can often be reformulated as standard hidden Markov models, with a potential function depending on the last state only, by adding components to the hidden Markov chain. For instance, the mixture Kalman filter may be interpreted as a standard particle filtering algorithm, provided the hidden Markov process is augmented with the associated Kalman filter parameters (the filtering expectation and error covariance matrix) that are computed iteratively in the algorithm. However, this representation is unwieldy, and the augmented Markov process does not fulfil the usual mixing conditions found in the literature on the stability of particle approximations. This is the main reason why our study is based on path-dependent potential functions. Quite interestingly, we shall see that the opposite perspective is more fruitful: our stability results, obtained for path-dependent potential functions, can also be applied to standard state-space models, leading to stability results under conditions different from those previously given in the literature.

In this paper, we study the asymptotic stability of particle algorithms based on path-dependent potential functions. We work under the assumption that the dependence of the potential on past states vanishes exponentially fast with the time lag. This assumption is met in practical settings because of the recursive nature of the potential functions. Our proofs are based on the following construction: the true filter is compared with an approximate filter associated with ‘truncated’ potentials, that is, potentials that depend only on the vector of the most recent states, for some well-chosen window length. Then, we compare the truncated filter with its particle approximation, using the fact that the ‘truncated’ filter corresponds to a standard Feynman-Kac model with a Markov chain of fixed dimension. Finally, we use a coupling construction to compare the particle approximations of the true filter and of the truncated filter. In this way, we obtain estimates of the stability of the particle algorithm of interest. We apply our results to the two aforementioned classes of models, and obtain practical conditions under which the corresponding particle algorithms are stable uniformly in time.
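Schematically, writing (with notation introduced here purely for illustration) \(\pi_t\) for the true filter, \(\pi^k_t\) for the filter built from potentials truncated to a window of length \(k\), and \(\pi^N_t\), \(\pi^{k,N}_t\) for their respective \(N\)-particle approximations, the strategy amounts to the decomposition
\[
\bigl\| \pi_t - \pi^N_t \bigr\|
\;\le\;
\bigl\| \pi_t - \pi^k_t \bigr\|
\;+\;
\bigl\| \pi^k_t - \pi^{k,N}_t \bigr\|
\;+\;
\bigl\| \pi^{k,N}_t - \pi^N_t \bigr\|,
\]
where the first term is controlled by the truncation error and the mixing of the truncated filter (Sections 3 to 5), the second by standard results for Feynman-Kac models on a state space of fixed dimension, and the third by the coupling construction of Section 6.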

The paper is organised as follows. Section 2 introduces the model and the notations. Section 3 evaluates the local error induced by the truncation. Section 4 studies the mixing properties of the truncated filter. Section 5 studies the propagation of the truncation error. Section 6 develops a coupling argument for the two particle systems. Section 7 states the main theorem of the paper, which bounds the particle error and provides time-uniform estimates for the long-term propagation of the error in the particle approximation of the true model. Section 8 applies these results to two particle algorithms of practical interest, namely the mixture Kalman filter and the mixture GARCH filter, and shows how these results can be adapted to standard state-space models, where the potential function depends only on the last state.

2 Model and notations

We consider a hidden Markov model, with a latent (non-observed) state process and an observed process, taking values respectively in a complete separable metric space and in an observation space. The state process is an inhomogeneous Markov chain, with a given initial probability distribution and transition kernels. Each observation admits a conditional probability density (with respect to an appropriate dominating measure) given the past and current states and the past observations, where, for any symbol, the short-hand with two time indices stands for the vector of the corresponding consecutive components. As explained in the Introduction, this density depends on the entire state path, rather than on the last state only. Following common practice, we drop the dependence on the observations from the notation, as the observed sequence may be considered as fixed, and use a short-hand for the resulting potential function. The model admits a Feynman-Kac representation, which we describe fully in (2.1). We consider the following assumptions.
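In generic notation (used here only for illustration, since the paper's own symbols are fixed in its displays), such a model takes the form
\[
X_0 \sim \mu_0,
\qquad
X_t \mid (X_{0:t-1} = x_{0:t-1}) \sim K_t(x_{t-1}, \mathrm{d}x_t),
\qquad
G_t(x_{0:t}) = p\bigl(y_t \mid x_{0:t},\, y_{0:t-1}\bigr),
\]
where the observations are treated as fixed, so that the potential \(G_t\) is a deterministic function of the whole state path \(x_{0:t}\).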

Hypothesis 1.

For all , the kernel is mixing, i.e. there exists such that

for some , and for any Borel set , any .

Hypothesis 2.

For large enough, and all , there exists a ‘truncated’ potential function that depends on the last states only, and that approximates in the sense that

for some constants and , , , and all . For convenience, we abuse notation and set for .

Hypothesis 3.

There exist constants , , , , such that

for all , using the short-hand for any integer .

The constants and depend implicitly on the realisation of the observed process. Hypotheses 1 and 3 are standard in the filtering literature; see e.g. Del Moral (2004). Hypothesis 2 formalises the fact that the potential functions are computed using iterative formulae, and should therefore forget past states at an exponential rate. One may take, for instance, a truncation of the form sketched below, in which the out-of-window states are replaced by an arbitrary element of the state space. In Section 8, we work out practical conditions under which Hypothesis 2 is fulfilled in several models of interest.
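In the illustrative notation above, and writing \(x^\ast\) for an arbitrary fixed element of the state space, this truncation can be sketched as
\[
G^k_t(x_{t-k+1:t}) \;=\; G_t\bigl(x^\ast, \ldots, x^\ast,\, x_{t-k+1:t}\bigr),
\]
i.e. the states falling outside the window of length \(k\) are frozen at \(x^\ast\); Hypothesis 2 then quantifies how quickly the influence of the frozen coordinates vanishes as \(k\) grows.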

We introduce the following notations for the forward kernels, for :

where is the Dirac measure centred at . The above kernels implicitly define operators on measures and on test functions, i.e.,

for any , any test function , where denotes the set of nonnegative measures w.r.t. , and the set of probability measures w.r.t. .

We associate to a “normalised” operator , such that, for any , is defined as:

for any . Both the ’s and the ’s may be iterated using the following short-hands, for :

We have the following Feynman-Kac representation:

(2.1)

where, as mentioned above, the initial distribution is the law of the initial state.
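For orientation, in the illustrative notation introduced above, the generic form of such a representation reads
\[
\pi_t(f)
\;=\;
\frac{\mathbb{E}\Bigl[\, f(X_{0:t}) \,\prod_{s=1}^{t} G_s(X_{0:s}) \Bigr]}
     {\mathbb{E}\Bigl[\, \prod_{s=1}^{t} G_s(X_{0:s}) \Bigr]}
\]
(up to the convention adopted for the potential at time zero), with the expectations taken under the law of the Markov chain; display (2.1) states a representation of this type in terms of the operators just defined.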

Finally, we denote the total variation norm on nonnegative measures by , the supremum norm on bounded functions by , and the Hilbert metric by for any pair , ; see e.g. Atar and Zeitouni (1997) or Le Gland and Oudjane (2004), Definition 3.3. We recall that the Hilbert metric is scale invariant, and is related to the total variation norm in the following way; see e.g. Lemma 3.4 in Le Gland and Oudjane (2004):

(2.2)
(2.3)

provided is a -mixing kernel. We can also derive the following properties from the definition of (, ):

(2.4)
(2.5)

with an equality in the latter equation if is positive.
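For reference, the standard facts typically summarised by relations such as (2.2)-(2.3) are the following; they are stated here with generic symbols and the constants found in the cited references, and need not coincide verbatim with the paper's own displays. For probability measures \(\mu, \nu\), positive constants \(c, c'\), and a kernel \(K\) that is mixing with coefficient \(\varepsilon\),
\[
h(c\,\mu, c'\,\nu) = h(\mu,\nu),
\qquad
\|\mu - \nu\|_{\mathrm{TV}} \;\le\; \frac{2}{\log 3}\, h(\mu,\nu),
\qquad
h(\mu K, \nu K) \;\le\; \tau(K)\, h(\mu,\nu),
\quad
\tau(K) \;\le\; \frac{1-\varepsilon^2}{1+\varepsilon^2},
\]
where \(\tau(K)\) denotes the Birkhoff contraction coefficient of \(K\).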

3 Local error induced by truncation

Until further notice, is a fixed integer such that and such that Hypothesis 2 holds. Since our proofs involve a comparison between the true filter and a ‘truncated’ filter, we introduce the projection operator which, for , associates to any measure its marginal w.r.t. its last components, i.e. :

for any ; for , let . We also define the following ‘truncated’ forward kernels, for :

and the associated normalised operators, for , :

and set , for . From now on, we will refer to the filter associated with these ‘truncated’ operators as the truncated filter.
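One plausible way to write these definitions, in the illustrative notation used so far (and therefore only a sketch, not the paper's own displays), is
\[
\pi^{(k)}\mu(f) \;=\; \int f(x_{t-k+1:t})\, \mu(\mathrm{d}x_{0:t}),
\qquad
Q^k_t\nu(f) \;=\; \int f(x_{t-k+1:t})\, G^k_t(x_{t-k+1:t})\, K_t(x_{t-1}, \mathrm{d}x_t)\, \nu(\mathrm{d}x_{t-k:t-1}),
\]
so that the truncated kernel extends a measure on a window of length \(k\) by one step of the Markov transition, weights it by the truncated potential, and drops the oldest coordinate; the normalised operators are then obtained exactly as before, by dividing by the total mass.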

We now evaluate the local error induced by the truncation.

Lemma 1.

For all , and for all ,

Proof.

Let . One has

where

and

hence

according to Hypothesis 2. Moreover, since, for all , , , such that and ,

(3.6)

one may conclude directly by taking , , , and . ∎

Lemma 2.

For , if there exists a (possibly random) probability kernel such that, for all ,

for some , then, for all and ,

where the expectation is with respect to the distribution of .

Proof.

Using the same ideas as above, one has, for ,

In order to use inequality (3.6), compute

where is defined as

and conclude by noting that

since is a probability measure. ∎

4 Mixing and contraction properties of the truncated filter

The truncated filter may be interpreted as a standard filter based on a Markov chain of fixed dimension, namely the vector of the most recent states. This insight allows us to establish the contraction properties of the truncated filter.
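Concretely, in the illustrative notation, setting \(Z_t = X_{t-k+1:t}\) (a block of \(k\) consecutive states) defines a Markov chain on the \(k\)-fold product of the state space, and the truncated potential is a function of \(Z_t\) alone:
\[
Z_t \;=\; X_{t-k+1:t},
\qquad
\widetilde G_t(Z_t) \;=\; G^k_t(X_{t-k+1:t}).
\]
Since \(Z_t\) evolves on a space of fixed dimension, the usual mixing and Birkhoff contraction arguments for standard Feynman-Kac models apply to the truncated filter, which is what Lemma 3 establishes.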

Lemma 3.

One has:

and

where

for all , and all .

Note that the first quantity must be interpreted as a mixing coefficient, and the second as a Birkhoff contraction coefficient.

Proof.

Using Hypothesis 3, one has:

where stands for the following reference measure:

One shows similarly that

Hence kernel is mixing, with mixing coefficient .

Following Lemma 3.4 in Le Gland and Oudjane (2004),

using the scale invariance property of the Hilbert metric. Similarly, according to Lemma 3.9 in the same paper:

5 Propagation of truncation error

We first establish the following two lemmas.

Lemma 4.

Let be a sequence of (possibly random) probability kernels such that for all and ,

where the expectation is w.r.t. the randomness of . Then, for all and all , one has

where , and with the convention that empty products equal one.

Proof.

The following difference can be decomposed as a telescoping sum:

We fix the integers , , and consider some arbitrary test function . For , one may apply Lemma 2:

since , and for all .

For , let ; then, using Lemma 3 and Equations (2.2) to (2.5), one has

where , . Applying (7) p. 160 of Le Gland and Oudjane (2004), one gets

where, using the same calculations as in Lemma 3,

and

which ends the proof. ∎

Lemma 5.

For all and all , one has

with the convention that empty sums equal zero, and empty products equal one.

Proof.

One has:

For , let ; then, according to Lemma 3: