A Gaussian mixture ensemble transform filter

Sebastian Reich
Universität Potsdam, Institut für Mathematik, Am Neuen Palais 10, D-14469 Potsdam, Germany
Abstract

We generalize the popular ensemble Kalman filter to an ensemble transform filter where the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions, and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian unimodal and multimodal ensemble distributions.

1 Introduction

We consider dynamical models given in the form of ordinary differential equations (ODEs)

\dot{x} = f(x, t)   (1)

with state variable $x \in \mathbb{R}^n$. The initial condition at time $t_0$ is not precisely known and is treated as a random variable instead, i.e., we assume that

x(t_0) \sim \rho_0,

where $\rho_0(x)$ denotes a given probability density function (PDF). The solution of (1) at time $t$ with initial condition $x_0$ at time $t_0$ is denoted by $x(t; x_0, t_0)$.

The evolution of the initial PDF under the ODE (1) up to a time $t$ is governed by the continuity equation

\partial_t \rho + \nabla_x \cdot (\rho f) = 0,   (2)

which is also called Liouville's equation in the statistical mechanics literature (Gardiner, 2004). Let us denote the solution of Liouville's equation at an observation time $t_1 > t_0$ by $\rho(x, t_1)$. In other words, the solutions $x(t_1; x_0, t_0)$ with $x_0 \sim \rho_0$ constitute a random variable with PDF $\rho(x, t_1)$.

For a chaotic ODE (1), i.e. for an ODE with positive Lyapunov exponents, the PDF $\rho(x, t)$ will spread out over the whole chaotic attractor as time progresses. This in turn implies a limited solution predictability in the sense that the time-evolved PDF becomes increasingly independent of the initial PDF $\rho_0$. Furthermore, even if the initial PDF is nearly Gaussian with mean $\bar{x}_0$ and small covariance matrix $P_0$, the individual solution $x(t; \bar{x}_0, t_0)$ will become increasingly unrepresentative of the expectation value of the underlying random variable it is supposed to represent.

To counteract the divergence of nearby trajectories under chaotic dynamics, we assume that we have uncorrelated measurements $y_{obs}(t_k)$ at times $t_k$, $k = 1, 2, \ldots$, with measurement error covariance matrix $R$, i.e.

y_{obs}(t_k) \sim N(Hx(t_k), R),   (3)

where the notation $N(\bar{y}, R)$ is used to denote a normal distribution with mean $\bar{y}$ and covariance matrix $R$. The matrix $H$ is called the forward operator. The task of combining solutions of (1) with intermittent measurements (3) is called data assimilation in the geophysical literature (Evensen, 2006) and filtering in the statistical literature (Bain and Crisan, 2009).

A first step towards data assimilation for nonlinear ODEs (1) is to approximate solutions of the associated Liouville equation (2). In this paper, we rely exclusively on particle methods (Bain and Crisan, 2009), for which Liouville's equation is naturally approximated by the evolving empirical measure. More precisely, particle or ensemble filters rely on the simultaneous propagation of $M$ independent solutions $x_i(t)$, $i = 1, \ldots, M$, of (1) (Evensen, 2006). We associate with them the empirical measure

\rho_{em}(x, t) = \sum_{i=1}^{M} w_i \, \delta(x - x_i(t))   (4)

with weights $w_i \geq 0$ satisfying

\sum_{i=1}^{M} w_i = 1.

Here $\delta(\cdot)$ denotes the Dirac delta function. Hence our statistical model is given by the empirical measure (4) and is parametrized by the particle weights $w_i$ and the particle locations $x_i$. In the absence of measurements, the empirical measure with constant weights is an exact (weak) solution of Liouville's equation (2) provided the $x_i$'s are solutions of the ODE (1). Optimal statistical efficiency is achieved with equal particle weights $w_i = 1/M$.

The assimilation of a measurement at time $t_k$ leads via Bayes' theorem to a discontinuous change in the statistical model (4). Sequential Monte Carlo methods (Bain and Crisan, 2009) are primarily based on a discontinuous change in the weight factors $w_i$. To avoid a subsequent degeneracy in the particle weights, one re-samples or uses other techniques which essentially lead to a redistribution of the particle positions $x_i$. See, for example, Bain and Crisan (2009) for more details. The ensemble Kalman filter (EnKF) relies on the alternative idea of replacing the empirical measure (4) by a Gaussian PDF prior to an assimilation step (Evensen, 2006). This approach allows for the application of the Kalman analysis formulas to the ensemble mean and covariance matrix. The final step of an EnKF is the re-interpretation of the Kalman analysis step in terms of modified particle positions while the weights are held constant at $w_i = 1/M$. We call filter algorithms that rely on modified particle/ensemble positions and fixed particle weights ensemble transform filters. A new ensemble transform filter has recently been proposed by Anderson (2010). The filter is based on an appropriate transformation step in observation space and a subsequent linear regression of the transformation onto the full state space. The approach developed in this paper relies instead on a general methodology for deriving ensemble transform filters as proposed by Reich (2011); see Section 2 below for a summary. The same methodology has been developed for continuous-in-time observations by Crisan and Xiong (2010). In this paper, we demonstrate how our ensemble transform filter framework can be used to generalize EnKFs to Gaussian mixture models and Gaussian kernel density estimators. The essential steps are summarized in Section 3, while an algorithmic summary of the proposed ensemble Gaussian mixture filter (EGMF) is provided in Section 4. The EGMF can also be viewed as a generalization of the continuous formulation of ensemble square root filters (Tippett et al., 2003) as provided by Bergemann and Reich (2010a, b) and of the EnKF with perturbed observations, as demonstrated by Reich (2011). The paper concludes with three numerical examples in Section 5. We first demonstrate the properties of the newly proposed EGMF for one-dimensional Brownian dynamics under a double-well potential. This simulation is extended to the associated two-dimensional Langevin dynamics model with only velocities being observed. Finally, we consider the three-variable model of Lorenz (1963).

We mention that alternative extensions of the EnKF to Gaussian mixtures have recently been proposed, for example, by Smith (2007), Stordal et al. (2011), and Frei and Künsch (2011). However, while the cluster EnKF of Smith (2007) is an example of an ensemble transform filter, it fits the posterior (analyzed) ensemble distribution back to a single Gaussian PDF and, hence, only partially works with a Gaussian mixture. Both the mixture ensemble Kalman filter of Frei and Künsch (2011) and the adaptive Gaussian mixture filter of Stordal et al. (2011) approximate the model uncertainty by a sum of Gaussian kernels and utilize the ensemble Kalman filter as a particle update step under a single Gaussian kernel. Resampling or a re-weighting of particles is required to avoid a degeneracy of particle weights due to changing kernel weights. A related filter algorithm based on Gaussian kernel density estimators has previously been considered by Anderson and Anderson (1999).

2 A general framework for ensemble transform filters

Bayes' formula can be interpreted as a discontinuous change of a forecast PDF $\rho_f(x)$ into an analyzed PDF $\rho_a(x)$ at each observation time $t_k$. On the other hand, one can find a continuous embedding $\rho(x, s)$ with respect to a fictitious time $s \in [0, 1]$ such that $\rho(x, 0) = \rho_f(x)$ and $\rho(x, 1) = \rho_a(x)$. As proposed by Reich (2011), this embedding can be viewed as being induced by a continuity (Liouville) equation

\partial_s \rho + \nabla_x \cdot (\rho g) = 0   (5)

for an appropriate vector field $g(x, s)$. The vector field is not uniquely determined for a given continuous embedding unless we also require that it is the minimizer of the kinetic energy

T(g) = \frac{1}{2} \int \rho \, g^T \mathcal{M} g \, dx

over all admissible vector fields $g$, where $\mathcal{M}$ is a positive definite mass matrix (Villani, 2003). Admissibility means that $g$ satisfies (5) for given $\rho$ and $\partial_s \rho$.

Under these assumptions, a constrained variational principle (Villani, 2003) implies that the desired vector field is given by $g = \mathcal{M}^{-1} \nabla_x \psi$, where the potential $\psi(x, s)$ is the solution of the elliptic partial differential equation (PDE)

\nabla_x \cdot \left( \rho \, \mathcal{M}^{-1} \nabla_x \psi \right) = \rho \left( \Phi - \bar{\Phi} \right)   (6)

for given PDF $\rho$, mass matrix $\mathcal{M}$, and negative log-likelihood function

\Phi(x) = \frac{1}{2} (Hx - y_{obs})^T R^{-1} (Hx - y_{obs}).   (7)

Here $\bar{\Phi} = E_\rho[\Phi]$ denotes the expectation value of the function $\Phi$ with respect to the PDF $\rho$. We finally replace (5) by

\partial_s \rho + \nabla_x \cdot \left( \rho \, \mathcal{M}^{-1} \nabla_x \psi \right) = 0   (8)

with an underlying ODE formulation

\frac{dx}{ds} = \mathcal{M}^{-1} \nabla_x \psi(x, s)   (9)

in fictitious time $s \in [0, 1]$. As for the ODE (1) and its associated Liouville equation (2), we may approximate (9) and its associated Liouville equation (8) by an empirical measure of type (4). Furthermore, one and the same empirical measure approximation can now be used both for the ensemble propagation step under the model dynamics (1) and for the data assimilation step (8), using constant and equal weights $w_i = 1/M$. The particle filter approximation is closed by finding an appropriate numerical solution of the elliptic PDE (6). This is the crucial step, and different choices lead to different ensemble transform filter algorithms.

The basic numerical approach to the data assimilation step within an ensemble transform filter formulation then consists of the following sequence of steps. (i) Given a current ensemble of solutions $x_i$, $i = 1, \ldots, M$, fit a statistical model $\rho$. (ii) Solve the elliptic PDE

\nabla_x \cdot (\rho \, g) = \rho \left( \Phi - \bar{\Phi} \right)   (10)

for a vector field $g$. The solution is not uniquely determined and an appropriate choice needs to be made; see the discussion above. (iii) Propagate the ensemble members under the ODE

\frac{dx_i}{ds} = g(x_i, s).   (11)

We assume that a forecast ensemble of $M$ members $x_i(0)$, $i = 1, \ldots, M$, is available at an observation time $t_k$, which provides the initial conditions for the ODE (11). The solutions at $s = 1$ yield the analyzed ensemble members, which are then used as the new initial conditions for (1) at time $t_k$, and (1) is solved up to the next observation time $t_{k+1}$.

If the statistical model $\rho$ is a Gaussian with mean $\bar{x}$ and covariance matrix $P$, then the outlined approach leads to a continuous formulation of the ensemble square root filter analysis step at time $t_k$ (Bergemann and Reich, 2010a, b), i.e.

\frac{dx_i}{ds} = -\frac{1}{2} P H^T R^{-1} \left( Hx_i + H\bar{x} - 2y_{obs} \right)   (12)

for $i = 1, \ldots, M$. It follows that the ensemble mean and covariance matrix evolve according to

\frac{d\bar{x}}{ds} = -P H^T R^{-1} (H\bar{x} - y_{obs}), \qquad \frac{dP}{ds} = -P H^T R^{-1} H P.   (13)
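For illustration, the analysis equation (12) can be integrated with the forward Euler method as in the following minimal Python sketch; the array layout, function name, and number of steps are illustrative choices and not part of the filter specification.

    import numpy as np

    def esrf_continuous_update(X, H, R, y_obs, n_steps=50):
        # X: (n, M) ensemble members as columns; H: (d, n) forward operator;
        # R: (d, d) error covariance; y_obs: (d,) observation.
        ds = 1.0 / n_steps
        for _ in range(n_steps):
            x_bar = X.mean(axis=1, keepdims=True)      # ensemble mean
            A = X - x_bar                              # ensemble anomalies
            P = A @ A.T / (X.shape[1] - 1)             # empirical covariance
            # right-hand side of (12), evaluated for all members at once
            innov = H @ X + H @ x_bar - 2.0 * y_obs[:, None]
            X = X + ds * (-0.5) * (P @ H.T @ np.linalg.solve(R, innov))
        return X                                       # analyzed ensemble at s = 1

The columns of the returned array provide the analyzed ensemble, consistent with the mean and covariance evolution (13).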

3 An ensemble transform filter based on Gaussian mixture statistical models

We now develop an ensemble transform filter algorithm based on an $L$-component Gaussian mixture model, i.e.

\rho(x) = \sum_{l=1}^{L} \alpha_l \, n(x; \mu_l, \Sigma_l),   (14)

where $n(x; \mu, \Sigma)$ denotes the PDF of the normal distribution in $\mathbb{R}^n$ with mean $\mu$ and covariance matrix $\Sigma$. The Gaussian mixture parameters, i.e. the weights $\alpha_l$, means $\mu_l$, and covariance matrices $\Sigma_l$, need to be determined from the ensemble $x_i$, $i = 1, \ldots, M$, in an appropriate manner using, for example, the expectation-maximization (EM) algorithm (Dempster et al., 1977; Smith, 2007). See Section 3.3 for more details. Note that $\alpha_l \geq 0$ and $\sum_{l=1}^{L} \alpha_l = 1$. To simplify notation, we write $n_l(x)$ instead of $n(x; \mu_l, \Sigma_l)$ from now on.

An implementation of the associated continuous formulation of the Bayesian analysis step proceeds as follows. To simplify the discussion we derive our filter formulation for a scalar observation variable, i.e. $y_{obs} \in \mathbb{R}$, $H \in \mathbb{R}^{1 \times n}$, $R > 0$, and

\Phi(x) = \frac{1}{2R} \left( Hx - y_{obs} \right)^2.   (15)

The vector-valued case can be treated accordingly provided the error covariance matrix $R$ is diagonal. We first decompose the vector field in (11) into two contributions, i.e.

g = g_1 + g_2.   (16)

To simplify notation we drop the explicit $s$-dependence in the following calculations. We next decompose the right hand side of the elliptic PDE (10) into two corresponding contributions:

\rho \left( \Phi - \bar{\Phi} \right) = \sum_{l=1}^{L} \alpha_l n_l \left( \Phi - \bar{\Phi}_l \right) + \sum_{l=1}^{L} \alpha_l \left( \bar{\Phi}_l - \bar{\Phi} \right) n_l,   (17)

where $\bar{\Phi}_l$ denotes the expectation value of $\Phi$ with respect to the $l$-th mixture component $n_l$. We now derive explicit expressions for $g_1$ and $g_2$.

3.1 Gaussian mixture Kalman filter contributions

We define the vector field $g_1$ through the equation

\rho \, g_1 = \sum_{l=1}^{L} \alpha_l n_l \, v_l   (18)

together with

\nabla_x \cdot (n_l v_l) = n_l \left( \Phi - \bar{\Phi}_l \right), \qquad v_l = \Sigma_l \nabla_x \psi_l,   (19)

which implies that the potentials $\psi_l$, $l = 1, \ldots, L$, are uniquely determined by

\Sigma_l \nabla_x \psi_l(x) = -\frac{1}{2} \Sigma_l H^T R^{-1} \left( Hx + H\mu_l - 2y_{obs} \right)   (20)

for all $x \in \mathbb{R}^n$. It follows that the potentials $\psi_l$ are equivalent to the ensemble Kalman filter potentials for the $l$-th Gaussian component. Hence, using (12) and (18), we obtain

g_1(x) = -\frac{1}{2\rho(x)} \sum_{l=1}^{L} \alpha_l n_l(x) \, \Sigma_l H^T R^{-1} \left( Hx + H\mu_l - 2y_{obs} \right).   (21)
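A direct evaluation of (21) can be sketched as follows, assuming a scalar observation with $H$ stored as a one-dimensional array; the routine name and the use of scipy.stats.multivariate_normal are illustrative choices.

    import numpy as np
    from scipy.stats import multivariate_normal

    def g1_mixture_kalman(x, alpha, mu, Sigma, H, R, y_obs):
        # Evaluate (21) at a single state x for a scalar observation;
        # H is a 1-D array (row vector), R > 0 is a scalar.
        L = len(alpha)
        dens = np.array([alpha[l] * multivariate_normal.pdf(x, mu[l], Sigma[l])
                         for l in range(L)])         # alpha_l * n_l(x)
        rho = dens.sum()                             # mixture density rho(x)
        g = np.zeros_like(x, dtype=float)
        for l in range(L):
            innov = H @ x + H @ mu[l] - 2.0 * y_obs  # scalar innovation term
            g += dens[l] * (Sigma[l] @ H) * innov
        return -0.5 * g / (rho * R)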

3.2 Gaussian mixture exchange contributions

The remaining contributions for solving (5) are collected in the vector field

g_2 = g - g_1,   (22)

which therefore needs to satisfy

\nabla_x \cdot (\rho \, g_2) = \sum_{l=1}^{L} \alpha_l \left( \bar{\Phi}_l - \bar{\Phi} \right) n_l   (23)

and, hence, we may set

(24)

for all $x \in \mathbb{R}^n$.

for all . To find a solution of (24) we introduce functions such that

(25)

with and . Now the right hand side of (24) gives rise to

(26)

and (24) simplifies further to the scalar PDE

(27)

The PDE (27) can be solved for the scalar field

\tilde{g}_2 = \tilde{g}_2(\tilde{x}), \qquad \tilde{x} = Hx,   (28)

under the condition $\lim_{\tilde{x} \to -\infty} \tilde{\rho}(\tilde{x}) \tilde{g}_2(\tilde{x}) = 0$ by explicit quadrature and one obtains

\tilde{g}_2(\tilde{x}) = \frac{1}{2 \tilde{\rho}(\tilde{x})} \sum_{l=1}^{L} \alpha_l \left( \bar{\Phi}_l - \bar{\Phi} \right) \left[ 1 + \mathrm{erf}\!\left( \frac{\tilde{x} - \tilde{\mu}_l}{\sqrt{2}\, \tilde{\sigma}_l} \right) \right]   (29)

with marginalized PDF

\tilde{\rho}(\tilde{x}) = \sum_{l=1}^{L} \alpha_l \, n(\tilde{x}; \tilde{\mu}_l, \tilde{\sigma}_l^2),   (30)

and the standard error function

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \, dt.   (31)

Note that

\sum_{l=1}^{L} \alpha_l \left( \bar{\Phi}_l - \bar{\Phi} \right) = 0,   (32)

which guarantees that $\tilde{\rho}(\tilde{x}) \tilde{g}_2(\tilde{x}) \to 0$ also as $\tilde{x} \to +\infty$.

We finally obtain the expression

(33)

for the vector field $g_2$.

The expectation values $\bar{\Phi}_l$, $l = 1, \ldots, L$, can be computed analytically, i.e.

\bar{\Phi}_l = \frac{1}{2R} \left[ (H\mu_l - y_{obs})^2 + H\Sigma_l H^T \right],   (34)

or estimated numerically. Recall that $\bar{\Phi} = E_\rho[\Phi]$ and, therefore,

\bar{\Phi} = \sum_{l=1}^{L} \alpha_l \bar{\Phi}_l.   (35)
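The exchange field can thus be evaluated entirely in the observed variable. A minimal sketch, assuming the quadrature form (29) together with the analytic expectations (34)-(35), reads:

    import numpy as np
    from math import erf, sqrt

    def g2_observation_space(x_tilde, alpha, mu_t, sig2_t, R, y_obs):
        # Evaluate (29) at x_tilde = H x; mu_t and sig2_t hold the projected
        # means and variances entering the marginalized PDF (30).
        alpha = np.asarray(alpha); mu_t = np.asarray(mu_t); sig2_t = np.asarray(sig2_t)
        Phi_bar_l = ((mu_t - y_obs) ** 2 + sig2_t) / (2.0 * R)       # (34)
        Phi_bar = np.dot(alpha, Phi_bar_l)                           # (35)
        dens = alpha * np.exp(-0.5 * (x_tilde - mu_t) ** 2 / sig2_t) \
               / np.sqrt(2.0 * np.pi * sig2_t)                       # marginal (30)
        cdf = np.array([0.5 * (1.0 + erf((x_tilde - m) / sqrt(2.0 * s2)))
                        for m, s2 in zip(mu_t, sig2_t)])
        return np.dot(alpha * (Phi_bar_l - Phi_bar), cdf) / dens.sum()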

It should be kept in mind that the Gaussian mixture parameters can be updated directly using the differential equations

\frac{d\mu_l}{ds} = -\Sigma_l H^T R^{-1} (H\mu_l - y_{obs}),   (36)
\frac{d\Sigma_l}{ds} = -\Sigma_l H^T R^{-1} H \Sigma_l,   (37)
\frac{d\alpha_l}{ds} = -\alpha_l \left( \bar{\Phi}_l - \bar{\Phi} \right)   (38)

for $l = 1, \ldots, L$. Here $\bar{\Phi}$ is chosen such that the weights remain normalized, i.e.

\sum_{l=1}^{L} \alpha_l(s) = 1.   (39)

Furthermore, $g_1$ exactly mirrors the update of the Gaussian mixture means $\mu_l$ and covariance matrices $\Sigma_l$, while $g_2$ mimics the update of the weight factors $\alpha_l$ by rearranging the particle positions accordingly.

As already alluded to, we can treat each uncorrelated observation separately and sum the individual contributions to $g_1$ and $g_2$, respectively, to obtain the desired total vector field (16).
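For reference, one forward Euler step of the parameter equations (36)-(38) can be sketched as follows; the function name and argument layout are illustrative.

    import numpy as np

    def mixture_parameter_step(alpha, mu, Sigma, Phi_bar_l, H, R, y_obs, ds):
        # One forward Euler step of (36)-(38); Phi_bar_l are the expectations (34).
        alpha = np.asarray(alpha, dtype=float)
        Phi_bar = np.dot(alpha, Phi_bar_l)                          # (35)
        alpha_new = alpha - ds * alpha * (Phi_bar_l - Phi_bar)      # (38)
        alpha_new /= alpha_new.sum()                                # enforce (39)
        mu_new = [m - ds * (S @ H) * (H @ m - y_obs) / R
                  for m, S in zip(mu, Sigma)]                       # (36)
        Sigma_new = [S - ds * np.outer(S @ H, S @ H) / R
                     for S in Sigma]                                # (37)
        return alpha_new, mu_new, Sigma_new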

3.3 Implementation aspects

Given a set of ensemble members $x_i$, $i = 1, \ldots, M$, there are several options for implementing a Gaussian mixture filter. The trivial case $L = 1$ leads back to the continuous formulations of Bergemann and Reich (2010a, b). More interestingly, one can choose $L > 1$ and estimate the means $\mu_l$ and the covariance matrices $\Sigma_l$ of the Gaussian mixture model using the EM algorithm (Dempster et al., 1977; Smith, 2007). The EM algorithm self-consistently computes the mixture means and covariance matrices via

\mu_l = \frac{\sum_{i=1}^{M} \beta_{l,i} \, x_i}{\sum_{i=1}^{M} \beta_{l,i}}, \qquad \Sigma_l = \frac{\sum_{i=1}^{M} \beta_{l,i} \, (x_i - \mu_l)(x_i - \mu_l)^T}{\sum_{i=1}^{M} \beta_{l,i}}   (40)

for $l = 1, \ldots, L$ using weights $\beta_{l,i}$ defined by

\beta_{l,i} = \frac{\alpha_l n_l(x_i)}{\sum_{l'=1}^{L} \alpha_{l'} n_{l'}(x_i)}.   (41)

The EM algorithm can fail to converge, and a possible remedy is to add a constant contribution $\gamma I$ to the empirical covariance matrices $\Sigma_l$ in (40) with the parameter $\gamma > 0$ appropriately chosen. We mention that more refined implementations of the EM algorithm, such as those discussed by Fraley and Raftery (2007), could also be considered. It is also possible to select the number of mixture components adaptively. See, for example, Smith (2007). The resulting vector fields for the $i$-th ensemble member are given by

g_1(x_i) = -\frac{1}{2} \sum_{l=1}^{L} \beta_{l,i} \, \Sigma_l H^T R^{-1} \left( Hx_i + H\mu_l - 2y_{obs} \right)   (42)

and, using (33),

(43)

with weights given by (41).
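A basic EM iteration implementing (40)-(41), including the covariance regularization mentioned above, can be sketched as follows; the initialization and the fixed iteration count are illustrative choices.

    import numpy as np
    from scipy.stats import multivariate_normal

    def em_gaussian_mixture(X, L, n_iter=100, gamma=1e-3, seed=0):
        # Fit an L-component Gaussian mixture to the ensemble X (n, M) by the
        # EM iteration (40)-(41); gamma*I regularizes the covariance matrices.
        n, M = X.shape
        rng = np.random.default_rng(seed)
        mu = X[:, rng.choice(M, L, replace=False)].T.copy()  # (L, n) initial means
        Sigma = np.array([np.cov(X) + gamma * np.eye(n)] * L)
        alpha = np.full(L, 1.0 / L)
        for _ in range(n_iter):
            dens = np.array([alpha[l] * multivariate_normal.pdf(X.T, mu[l], Sigma[l])
                             for l in range(L)])             # (L, M)
            beta = dens / dens.sum(axis=0, keepdims=True)    # E-step, eq. (41)
            Nl = beta.sum(axis=1)                            # effective counts
            alpha = Nl / M
            mu = (beta @ X.T) / Nl[:, None]                  # M-step, eq. (40)
            for l in range(L):
                D = X - mu[l][:, None]
                Sigma[l] = (beta[l] * D) @ D.T / Nl[l] + gamma * np.eye(n)
        return alpha, mu, Sigma, beta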

Another option to implement an EGMF is to set the number of mixture components equal to the number of ensemble members, i.e. $L = M$, and to use a prescribed covariance matrix $B$ for all mixture components, i.e. $\mu_i = x_i$ and $\Sigma_i = B$, $i = 1, \ldots, M$. We also give all mixture components equal weights $\alpha_i = 1/M$. In this setting, it is more appropriate to call (14) a kernel density estimator (Wand and Jones, 1995). Then

g_1(x_i) = -\frac{1}{2} \sum_{j=1}^{M} \beta_{j,i} \, B H^T R^{-1} \left( Hx_i + Hx_j - 2y_{obs} \right)   (44)

and

(45)

with weights given by (41), $\mu_j = x_j$, and $\Sigma_j = B$. The Kalman filter like contributions (44) can be replaced by a formulation with perturbed observations (Evensen, 2006; Reich, 2011), which yields

(46)

where $\eta_i$, $i = 1, \ldots, M$, are independent, identically distributed Gaussian random numbers with mean zero and variance $R$. A particular choice for the kernel covariance matrix is $B = c\hat{P}$, where $\hat{P}$ is the empirical covariance matrix of the ensemble and $c > 0$ is an appropriate constant. Assuming that the underlying probability density is Gaussian with covariance $\hat{P}$, the choice

c = \left( \frac{4}{M(n+2)} \right)^{2/(n+4)}   (47)

is optimal for large ensemble sizes in the sense of kernel density estimation (see, e.g., Wand and Jones (1995)). Recall that $n$ denotes the dimension of phase space. The resulting filter is then similar in spirit to the rank histogram filter (RHF) suggested by Anderson (2010), with the RHF increments in observation space being replaced by those generated from a Gaussian kernel density estimator. Another choice is $c = 1$, in which case (45) can be viewed as a correction term to the standard ensemble Kalman filter (46). We will explore the kernel estimator in the numerical experiment of Section 5.4.
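A sketch of the corresponding kernel covariance computation, with (47) implemented as a Silverman-type scaling rule, reads:

    import numpy as np

    def silverman_scaling(M, n):
        # Scaling constant c in B = c * P_hat, eq. (47).
        return (4.0 / (M * (n + 2.0))) ** (2.0 / (n + 4.0))

    def kernel_covariance(X, c=None):
        # Prescribed kernel covariance B = c * P_hat from the ensemble X (n, M).
        n, M = X.shape
        if c is None:
            c = silverman_scaling(M, n)
        return c * np.cov(X)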

Note that localization, as introduced by Houtekamer and Mitchell (2001) and Hamill et al. (2001), can be combined with (42)-(43) and (44)-(45), respectively, as outlined in Bergemann and Reich (2010a). For example, one could set the covariance matrix in (44)-(45) equal to the localized ensemble covariance matrix. Localization will be essential for a successful application of the proposed filter formulations to high-dimensional systems (1). The same applies to ensemble inflation (Anderson and Anderson, 1999).

We also note that the computation of the particle-mixture weight factors (41) can become expensive since it requires the inversion of the covariance matrices $\Sigma_l$. This can be avoided either by using only the diagonal part of $\Sigma_l$ (Smith, 2007) or by using a marginalized density such as (30), i.e. the mixture PDF in the observed variable $\tilde{x} = Hx$, instead of the full Gaussian PDF values $n_l(x_i)$. Some other suitable marginalization could also be performed.

The vector field $g_2$ is responsible for a transfer of particles between different mixture components according to the observation-adjusted relative weight of each mixture component. These transitions can be rather rapid, implying that $g_2$ can become large in magnitude. This can pose numerical difficulties, and we eliminated those by limiting the $l_\infty$-norm of $g_2$ through a cut-off value $g_{max}$. Alternatively, we might want to replace (30) by a PDF which leads to less stiff contributions to the vector field, such as the Student's t-distribution (Schaefer, 1997). Hence a natural approximative PDF is provided by the scaled t-distribution with three degrees of freedom, i.e.

\rho_t(\tilde{x}) = \frac{2}{\pi \tilde{\sigma}} \left( 1 + \frac{(\tilde{x} - \tilde{\mu})^2}{\tilde{\sigma}^2} \right)^{-2}.   (48)

We also introduce the shorthand $u = (\tilde{x} - \tilde{\mu})/\tilde{\sigma}$ with $\tilde{\mu} = \tilde{\mu}_l$ and $\tilde{\sigma} = \tilde{\sigma}_l$. A first observation is that

\int \tilde{x} \, \rho_t(\tilde{x}) \, d\tilde{x} = \tilde{\mu}, \qquad \int (\tilde{x} - \tilde{\mu})^2 \, \rho_t(\tilde{x}) \, d\tilde{x} = \tilde{\sigma}^2,   (49)

i.e., the first two moments of $\rho_t$ match those of (30). The second observation is that $\rho_t$ can be integrated explicitly, i.e.

\int_{-\infty}^{\tilde{x}} \rho_t(\tilde{x}') \, d\tilde{x}' = \frac{1}{2} + \frac{1}{\pi} \left[ \arctan(u) + \frac{u}{1 + u^2} \right].   (50)

Hence the relation (32) suggests the alternative formulation

\tilde{g}_2(\tilde{x}) = \frac{1}{\tilde{\rho}(\tilde{x})} \sum_{l=1}^{L} \alpha_l \left( \bar{\Phi}_l - \bar{\Phi} \right) \left( \frac{1}{2} + \frac{1}{\pi} \left[ \arctan(u_l) + \frac{u_l}{1 + u_l^2} \right] \right),   (51)

where $u_l = (\tilde{x} - \tilde{\mu}_l)/\tilde{\sigma}_l$.
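Both the density (48) and its antiderivative (50) are cheap to evaluate, as the following sketch illustrates:

    import numpy as np

    def t3_pdf(x, mu, sigma):
        # Scaled Student's t PDF with three degrees of freedom, eq. (48);
        # by construction its mean is mu and its variance is sigma**2.
        u = (x - mu) / sigma
        return 2.0 / (np.pi * sigma) / (1.0 + u ** 2) ** 2

    def t3_cdf(x, mu, sigma):
        # Explicit antiderivative (50), a smoother replacement for the
        # error-function terms appearing in (29).
        u = (x - mu) / sigma
        return 0.5 + (np.arctan(u) + u / (1.0 + u ** 2)) / np.pi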

The differential equation (16) needs to be approximated by a numerical time-stepping procedure. In this paper, we use the forward Euler method for simplicity. However, the limited stability region of an explicit method such as forward Euler implies that the step-size $\Delta s$ needs to be made sufficiently small. This issue has been investigated by Amezcua et al. (2011) for the formulation with $L = 1$ (the standard continuous ensemble Kalman filter formulation), and a diagonally implicit scheme has been proposed which overcomes the stability restrictions of the forward Euler method while introducing negligible computational overhead. The computational cost of a single evaluation of (16) for given mixture components should be slightly lower than for a single EnKF step since no matrix inversion is required. Additional expenses arise from fitting the mixture components (e.g. using the EM algorithm) and from having to take several time-steps in fictitious time.

4 Algorithmic summary of the ensemble Gaussian mixture filter (EGMF)

Since the proposed methodology for treating nonlinear filter problems is based on an extension of the EnKF approach, we call the new filter the ensemble Gaussian mixture filter (EGMF). We now provide an algorithmic summary.

First, a set of $M$ ensemble members $x_i(t_0)$, $i = 1, \ldots, M$, is generated at time $t_0$ according to the initial PDF $\rho_0$.

In between observations, the ensemble members are propagated under the ODE model (1), i.e.

\frac{dx_i}{dt} = f(x_i, t)   (52)

for $i = 1, \ldots, M$ and $t \in [t_{k-1}, t_k]$.

At an observation time $t_k$, a Gaussian mixture model (14) is fitted to the ensemble members $x_i(t_k)$, $i = 1, \ldots, M$. One can, for example, use the classic EM algorithm (Dempster et al., 1977; Smith, 2007) for this purpose. In this paper we use a simple heuristic to determine the number of components $L$. An adaptive selection of $L$ is, however, feasible (see, e.g., Smith (2007)). Alternatively, one can set $L = M$ and implement the EGMF with a Gaussian kernel density estimator with $\mu_i = x_i$ and $\alpha_i = 1/M$, $i = 1, \ldots, M$. The covariance matrix can be based on the empirical covariance matrix $\hat{P}$ of the whole ensemble via $B = c\hat{P}$ with the constant $c > 0$ appropriately chosen. At this stage covariance localization can also be applied.

The vector fields $g_1$ and $g_2$ are computed according to (42) and (43), respectively (or, alternatively, using (45)-(46)), for each independent observation, and the resulting contributions are summed up to a total vector field (16). Next the ensemble members are updated according to (16) for $s \in [0, 1]$ and $i = 1, \ldots, M$. Here we use a simple forward Euler discretization with step-size $\Delta s$ chosen sufficiently small. After each time-step the Gaussian mixture components are updated, if necessary, using the EM algorithm. The analyzed ensemble members are obtained after $1/\Delta s$ time-steps as the numerical solutions at $s = 1$ and provide the new initial conditions for (52) with time $t$ now in the interval $[t_k, t_{k+1}]$.

Ensemble-induced estimates for the expectation value of a function $h(x)$ can be computed via

E[h] \approx \frac{1}{M} \sum_{i=1}^{M} h(x_i)   (53)

without reference to a Gaussian mixture model.
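Combining the sketches above, a single EGMF analysis step may be organized as follows; the lift of the exchange field from observation space into state space along $H^T/(HH^T)$ and the cut-off value are illustrative choices of this sketch.

    import numpy as np

    def egmf_analysis(X, L, H, R, y_obs, n_steps=20, g_max=10.0):
        # One EGMF analysis step: refit the mixture by EM and integrate
        # g = g_1 + g_2 with forward Euler over s in [0, 1]. Reuses
        # em_gaussian_mixture, g1_mixture_kalman, and g2_observation_space
        # from the sketches above; H is a 1-D array (scalar observation).
        ds = 1.0 / n_steps
        for _ in range(n_steps):
            alpha, mu, Sigma, _ = em_gaussian_mixture(X, L)
            mu_t = np.array([H @ m for m in mu])            # projected means
            sig2_t = np.array([H @ S @ H for S in Sigma])   # projected variances
            for i in range(X.shape[1]):
                g1 = g1_mixture_kalman(X[:, i], alpha, mu, Sigma, H, R, y_obs)
                u = g2_observation_space(H @ X[:, i], alpha, mu_t, sig2_t, R, y_obs)
                g2 = H / (H @ H) * np.clip(u, -g_max, g_max)  # lift with cut-off
                X[:, i] = X[:, i] + ds * (g1 + g2)
        return X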

5 Numerical experiments

In this section we provide results from several numerical simulations and demonstrate the performance of the proposed EGMF in comparison with standard implementations of the EnKF and an implementation of the RHF (Anderson, 2010). We first investigate the Bayesian assimilation step without model equations.

5.1 Single Bayesian assimilation step

Figure 1: Displayed are the prior distribution, the likelihood from a measurement, and the resulting posterior distribution. The prior as well as the posterior are bimodal Gaussian.
Figure 2: Numerically obtained posterior for two different ensemble sizes (left and right panels). Shown are results from the EGMF, the RHF, and an EnKF analysis step. While the EGMF and the RHF converge to the correct posterior distribution, the EnKF leads to qualitatively incorrect results for both ensemble sizes.
Figure 3: Displayed is the rearrangement of the particles under the dynamics of the EGMF analysis step. Rapid transitions between the Gaussian mixture components are induced by the vector field $g_2$.

We test our formulation first for a single assimilation step where the prior is a bimodal Gaussian mixture

(54)

and the likelihood is

(55)

The posterior distribution is again bimodal Gaussian and can be computed analytically. See Fig. 1. Here we demonstrate how an EnKF, the RHF, and the proposed EGMF approximate the posterior for two different ensemble sizes. See Fig. 2. Both the RHF and the EGMF are capable of reproducing the Bayesian assimilation step correctly for sufficiently large ensemble sizes, while the EnKF leads to a qualitatively incorrect result. The transformation of the ensemble members (particles) under the dynamics (16) is displayed in Fig. 3.
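Since each mixture component is conjugate to the Gaussian likelihood, the analytic posterior follows from component-wise Kalman updates with evidence-weighted mixture weights. A one-dimensional sketch with hypothetical parameters (the actual values in (54)-(55) differ) reads:

    import numpy as np

    def mixture_posterior(alpha, mu, var, H, R, y_obs):
        # Analytic posterior of a 1-D Gaussian mixture prior under a Gaussian
        # likelihood: component-wise Kalman updates plus evidence-weighted weights.
        alpha = np.asarray(alpha, float); mu = np.asarray(mu, float)
        var = np.asarray(var, float)
        S = H * var * H + R                       # innovation variances
        K = var * H / S                           # Kalman gains
        mu_a = mu + K * (y_obs - H * mu)          # posterior means
        var_a = (1.0 - K * H) * var               # posterior variances
        ev = np.exp(-0.5 * (y_obs - H * mu) ** 2 / S) / np.sqrt(2.0 * np.pi * S)
        w = alpha * ev                            # evidence-weighted prior weights
        return w / w.sum(), mu_a, var_a

    # hypothetical parameters for illustration only:
    w, m, v = mixture_posterior([0.5, 0.5], [-1.0, 1.0], [0.2, 0.2], 1.0, 0.5, 0.8)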

Figure 4: Shown is the potential energy used in both the Brownian and Langevin dynamics models.
Figure 5: Shown is the reference solution from which observations are generated by adding Gaussian noise with mean zero and variance $R$.

Figure 6: Displayed are two posterior PDFs for the larger measurement error variance as obtained from the Fokker-Planck approach. A distinct bimodal behavior can be observed, which motivates the use of a binary Gaussian mixture model for the EGMF.
Figure 7: Estimated ensemble mean computed from a direct simulation of the assimilation problem using a discretized Fokker-Planck equation for the two measurement error variances (left and right panels).
Figure 8: Ensemble mean from the EGMF for the two measurement error variances (left and right panels). It can be observed that the EGMF leads to results similar to those from the discretized Fokker-Planck approach (Fig. 7).
Figure 9: Ensemble mean from a RHF for the two measurement error variances (left and right panels). It can be observed that the RHF leads to results similar to those from the discretized Fokker-Planck approach (Fig. 7).
Figure 10: Ensemble mean from an EnKF with perturbed observations for the two measurement error variances (left and right panels). The EnKF does not behave as well as the RHF, the EGMF, and the discretized Fokker-Planck approach. Similar results are obtained for an ensemble square root filter.

We now present results from three increasingly complex filtering problems.

5.2 A one-dimensional model problem

As a first numerical example we consider Brownian dynamics in a one-dimensional potential $V(x)$, i.e.

(56)

where $W(t)$ denotes standard Brownian motion and the potential $V$ is given by

(57)

See Fig. 4. The true reference trajectory is started from a fixed initial condition. See Fig. 5. Measurements of $x$ are collected every 10 time units for two different values of the measurement error variance $R$.

The initial PDF is given by the bimodal distribution

(58)

Depending on the distribution of ensemble members we use either a single Gaussian ($L = 1$) or a bi-Gaussian mixture ($L = 2$) in the EGMF assimilation step. A single Gaussian is used whenever more than 90% of the particles are located near either the right or the left potential well. The computed variances are modified such that they remain bounded away from zero, in order to avoid singularities in the EM algorithm. The model equation (56) is discretized with the forward Euler method and a fixed small time-step. The assimilation equations with (42) and (43) are discretized with the forward Euler method and a sufficiently small step-size in fictitious time. The $l_\infty$-norm of $g_2$ is limited by the cut-off value $g_{max}$.
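A minimal Euler-Maruyama sketch of (56), with an assumed double-well potential $V(x) = (x^2 - 1)^2/4$ standing in for (57) and unit noise intensity (all parameter values are illustrative), reads:

    import numpy as np

    def simulate_double_well(x0=1.0, dt=1e-2, n_steps=10000, seed=1):
        # Euler-Maruyama discretization of the Brownian dynamics (56) with
        # the assumed potential V(x) = (x**2 - 1)**2 / 4.
        rng = np.random.default_rng(seed)
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            drift = -x[k] * (x[k] ** 2 - 1.0)          # -V'(x)
            x[k + 1] = x[k] + dt * drift + np.sqrt(dt) * rng.standard_normal()
        return x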

The performance of the EGMF is compared to an EnKF with perturbed observations, an ensemble square root filter, and a RHF. The particle positions are adjusted during each data assimilation step of a RHF such that the particles maintain equal weights $w_i = 1/M$. The adjustment is done similarly to what has been proposed by Anderson (2010), except that the posterior is approximated by piecewise constant functions.

For this simple, one-dimensional problem the densities can be propagated directly through a discretization of the associated Fokker-Planck equation. Bayes' theorem then reduces to a multiplication of the prior PDF approximation from the Fokker-Planck discretization with the likelihood at each grid point. We have used a fine spatial grid over the computational domain to provide an independent and accurate filtering result. Periodic boundary conditions are used for the diffusion operator such that the spatial discretization leads to a stochastic matrix. It is found from the numerical simulations that the larger measurement error variance leads to a pronounced bimodal behavior of the solution PDF. See Figure 6 for two posterior PDF approximations from the Fokker-Planck approach.
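One possible realization of such a discretization, using upwind advection and periodic boundary conditions so that the transition matrix is stochastic for sufficiently small time-steps (the unit diffusion constant is an illustrative choice), reads:

    import numpy as np

    def fokker_planck_matrix(xgrid, V_prime, dt):
        # Explicit transition matrix for rho_t = (V'(x) rho)_x + rho_xx with
        # periodic boundary conditions; columns sum to one by construction.
        N = len(xgrid); dx = xgrid[1] - xgrid[0]
        A = np.zeros((N, N))
        for j in range(N):
            jm, jp = (j - 1) % N, (j + 1) % N
            A[jp, j] += dt * max(-V_prime(xgrid[j]), 0.0) / dx   # drift to the right
            A[jm, j] += dt * max(V_prime(xgrid[j]), 0.0) / dx    # drift to the left
            A[jp, j] += dt / dx ** 2                             # diffusion
            A[jm, j] += dt / dx ** 2
            A[j, j] = 1.0 - A[:, j].sum()                        # conserve probability
        return A

The analysis step on the grid then reduces to a pointwise multiplication of the propagated density with the likelihood, followed by renormalization.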

Typical filter behaviors over the simulation interval with regard to the reference trajectory can be found in Figures 7, 8, 9, and 10, respectively. The EGMF and the RHF display a behavior similar to that of the discretized Fokker-Planck approach, while significantly different results are obtained from the EnKF implementation with perturbed observations. Similar results are obtained for an ensemble square root filter (not displayed). The EGMF uses a bi-Gaussian approximation in 97% of the assimilation steps for one choice of the measurement error variance and in 47% of the assimilation steps for the other.

We also provide the root mean square (RMS) error between the computed mean from the three different filters and the mean computed from the Fokker-Planck approach in Table 1 for a fixed measurement error variance and different ensemble sizes $M$. The RHF converges as $M \to \infty$ to the solution of the Fokker-Planck approach for this one-dimensional model problem. The EGMF provides better results than the EnKF but does not converge, since the limiting distributions are not exactly bi-Gaussian. Note that the EGMF should converge for $M \to \infty$ provided the number of mixture components $L$ is chosen sufficiently large. Note also that the RHF does not converge to the analytic filter solution as $M \to \infty$ in the case of more than one dynamic variable (i.e. $n > 1$). See also the following example.

RHF      EGMF     EnKF
0.6551   0.7683   1.0283
0.3717   0.5127   0.8798
0.2691   0.4033   0.8412
Table 1: RMS errors for the ensemble means obtained from the RHF, EGMF, and EnKF (columns) compared to the expected value computed by a Fokker-Planck discretization, for increasing ensemble sizes M (rows, top to bottom).

5.3 A two-dimensional model problem

We discuss another example from classical mechanics. The evolution of a particle with position $q$ and velocity $v$ is described by Langevin dynamics (Gardiner, 2004) with equations of motion

(59)
(60)

where the potential is given by

(61)

the friction coefficient $\gamma > 0$ is fixed, and $W(t)$ denotes standard Brownian motion. A reference solution, denoted by $(q_{ref}(t), v_{ref}(t))$, is obtained for a fixed initial condition and a particular realization of $W(t)$.
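A minimal Euler-Maruyama sketch of (59)-(60), reusing the assumed double-well potential from the previous sketch and with illustrative friction and noise parameters, reads:

    import numpy as np

    def simulate_langevin(q0=1.0, v0=0.0, gamma=0.5, dt=1e-2, n_steps=10000, seed=2):
        # Euler-Maruyama discretization of the Langevin dynamics (59)-(60)
        # with assumed potential V(q) = (q**2 - 1)**2 / 4.
        rng = np.random.default_rng(seed)
        q = np.empty(n_steps + 1); v = np.empty(n_steps + 1)
        q[0], v[0] = q0, v0
        for k in range(n_steps):
            q[k + 1] = q[k] + dt * v[k]
            v[k + 1] = (v[k] - dt * (q[k] * (q[k] ** 2 - 1.0) + gamma * v[k])
                        + np.sqrt(dt) * rng.standard_normal())
        return q, v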

Let us address the situation where the reference solution is not directly accessible to us and where we are instead only able to observe the velocity $v$ subject to measurement noise, i.e.

(62)

where $B(t)$ denotes again standard Brownian motion. In other words, we are effectively only able to observe particle velocities.

We now combine the model equations and the observations within the ensemble Kalman-Bucy framework. The ensemble filter relies on the simultaneous propagation of an ensemble of solutions $(q_i(t), v_i(t))$, $i = 1, \ldots, M$. The filter equations for an EGMF with a single Gaussian, i.e. an ensemble Kalman-Bucy filter (Bergemann and Reich, 2011), become

(63)
(64)

with mean

(65)

and variance/covariance

(66)

The equations are solved for each ensemble member with different realizations of standard Brownian motion and a fixed step-size. The extension of the EGMF to a Gaussian mixture with $L > 1$ is straightforward. One substitutes , , and