
# A Gaussian mixture ensemble transform filter

Universität Potsdam, Institut für Mathematik, Am Neuen Palais 10, D-14469 Potsdam, Germany

Sebastian Reich
###### Abstract

We generalize the popular ensemble Kalman filter to an ensemble transform filter where the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions, and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.

## 1 Introduction

We consider dynamical models given in the form of ordinary differential equations (ODEs)

$$\dot{x} = f(x,t) \qquad (1)$$

with state variable $x \in \mathbb{R}^N$. Initial conditions at time $t_0$ are not precisely known and are treated as a random variable instead, i.e., we assume that

$$x(t_0) \sim \pi_0,$$

where $\pi_0$ denotes a given probability density function (PDF). The solution of (1) at time $t$ with initial condition $x_0$ at $t_0$ is denoted by $x(t;x_0,t_0)$.

The evolution of the initial PDF $\pi_0$ under the ODE (1) up to a time $t$ is provided by the continuity equation

$$\frac{\partial \pi}{\partial t} = -\nabla_x \cdot (\pi f), \qquad (2)$$

which is also called Liouville's equation in the statistical mechanics literature (Gardiner, 2004). Let us denote the solution of Liouville's equation at an observation time $t_1 > t_0$ by $\pi(x,t_1)$. In other words, solutions $x(t_1;x_0,t_0)$ with $x_0 \sim \pi_0$ constitute a random variable with PDF $\pi(x,t_1)$.

For a chaotic ODE (1), i.e. for an ODE with positive Lyapunov exponents, the PDF will be spread out over the whole chaotic attractor as $t \to \infty$. This in turn implies a limited solution predictability in the sense that the time-evolved PDF $\pi(x,t)$ will become increasingly independent of the initial PDF $\pi_0$. Furthermore, even if the initial PDF is nearly Gaussian with mean $\bar{x}_0$ and small covariance matrix $P_0$, the solution $x(t;\bar{x}_0,t_0)$ will become increasingly unrepresentative of the expectation value of the underlying random variable it is supposed to represent.

To counteract the divergence of nearby trajectories under chaotic dynamics, we assume that we have uncorrelated measurements $y_{\rm obs}(t_j)$ at times $t_j$, $j = 1, 2, \ldots$, with measurement error covariance matrix $R$, i.e.

$$y_{\rm obs}(t_j) - Hx(t_j) \sim {\rm N}(0,R), \qquad (3)$$

where the notation ${\rm N}(m,R)$ is used to denote a normal distribution with mean $m$ and covariance matrix $R$. The matrix $H$ is called the forward operator. The task of combining solutions to (1) with intermittent measurements (3) is called data assimilation in the geophysical literature (Evensen, 2006) and filtering in the statistical literature (Bain and Crisan, 2009).

A first step to perform data assimilation for nonlinear ODEs (1) is to approximate solutions to the associated Liouville equation (2). In this paper, we rely exclusively on particle methods (Bain and Crisan, 2009) for which Liouville's equation is naturally approximated by the evolving empirical measure. More precisely, particle or ensemble filters rely on the simultaneous propagation of $M$ independent solutions $x_i(t)$, $i = 1, \ldots, M$, of (1) (Evensen, 2006). We associate with this ensemble the empirical measure

$$\pi_{\rm em}(x,t) = \sum_{i=1}^M \gamma_i \, \delta(x - x_i(t)) \qquad (4)$$

with weights $\gamma_i \ge 0$ satisfying

$$\sum_{i=1}^M \gamma_i = 1.$$

Here $\delta(\cdot)$ denotes the Dirac delta function. Hence our statistical model is given by the empirical measure (4) and is parametrized by the particle weights $\gamma_i$ and the particle locations $x_i(t)$. In the absence of measurements, the empirical measure with constant weights is an exact (weak) solution to Liouville's equation (2) provided the $x_i$'s are solutions to the ODE (1). Optimal statistical efficiency is achieved with equal particle weights $\gamma_i = 1/M$.
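As a concrete illustration of the particle propagation underlying the empirical measure (4), the following Python sketch advances an equal-weight ensemble under a generic ODE with the forward Euler method. The dynamics, step-size, and ensemble size are illustrative choices, not taken from the paper.

```python
import numpy as np

def propagate_ensemble(f, X, t0, t1, dt):
    """Propagate each ensemble member x_i under dx/dt = f(x, t) with
    forward Euler; the weights stay fixed at gamma_i = 1/M throughout."""
    t = t0
    X = X.copy()
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)                     # final partial step
        X = X + h * np.array([f(x, t) for x in X])
        t += h
    return X

# Example: linear contraction f(x) = -x, so x(1) ~ exp(-1) * x(0).
rng = np.random.default_rng(0)
X0 = rng.normal(size=(100, 2))                  # M = 100 particles in R^2
X1 = propagate_ensemble(lambda x, t: -x, X0, 0.0, 1.0, 1e-3)
```

The empirical mean of the propagated ensemble then estimates the expectation of the time-evolved random variable.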

The assimilation of a measurement at $t_j$ leads via Bayes' theorem to a discontinuous change in the statistical model (4). Sequential Monte Carlo methods (Bain and Crisan, 2009) are primarily based on a discontinuous change in the weight factors $\gamma_i$. To avoid a subsequent degeneracy in the particle weights, one re-samples or uses other techniques which essentially lead to a redistribution of the particle positions $x_i$. See, for example, Bain and Crisan (2009) for more details. The ensemble Kalman filter (EnKF) relies on the alternative idea of replacing the empirical measure (4) by a Gaussian PDF prior to an assimilation step (Evensen, 2006). This approach allows for the application of the Kalman analysis formulas to the ensemble mean and covariance matrix. The final step of an EnKF is the re-interpretation of the Kalman analysis step in terms of modified particle positions while the weights are held constant at $\gamma_i = 1/M$. We call filter algorithms that rely on modified particle/ensemble positions and fixed particle weights ensemble transform filters. A new ensemble transform filter has recently been proposed by Anderson (2010). The filter is based on an appropriate transformation step in observation space and subsequent linear regression of the transformation onto the full state space. The approach developed in this paper relies instead on a general methodology for deriving ensemble transform filters as proposed by Reich (2011). See Section 2 below for a summary. The same methodology has been developed for continuous-in-time observations by Crisan and Xiong (2010). In this paper, we demonstrate how our ensemble transform filter framework can be used to generalize EnKFs to Gaussian mixture models and Gaussian kernel density estimators. The essential steps are summarized in Section 3 while an algorithmic summary of the proposed ensemble Gaussian mixture filter (EGMF) is provided in Section 4.
The EGMF can also be viewed as a generalization of the continuous formulation of ensemble square root filters (Tippett et al., 2003) as provided by Bergemann and Reich (2010a, b) and the EnKF with perturbed observations, as demonstrated by Reich (2011). The paper concludes with three numerical examples in Section 5. We first demonstrate the properties of the newly proposed EGMF for one-dimensional Brownian dynamics under a double-well potential. This simulation is extended to the associated two-dimensional Langevin dynamics model with only velocities being observed. Finally we consider the three variable model of Lorenz (1963).

We mention that alternative extensions of the EnKF to Gaussian mixtures have recently been proposed, for example, by Smith (2007), Stordal et al. (2011), and Frei and Künsch (2011). However, while the cluster EnKF of Smith (2007) is an example of an ensemble transform filter, it fits the posterior (analysed) ensemble distribution back to a single Gaussian PDF and, hence, only partially works with a Gaussian mixture. Both the mixture ensemble Kalman filter of Frei and Künsch (2011) and the adaptive Gaussian mixture filter of Stordal et al. (2011) approximate the model uncertainty by a sum of Gaussian kernels and utilize the ensemble Kalman filter as a particle update step under a single Gaussian kernel. Resampling or a re-weighting of particles is required to avoid a degeneracy of particle weights due to changing kernel weights. A related filter algorithm based on Gaussian kernel density estimators has previously been considered by Anderson and Anderson (1999).

## 2 A general framework for ensemble transform filters

Bayes' formula can be interpreted as a discontinuous change of a forecast PDF $\pi_f$ into an analyzed PDF $\pi_a$ at each observation time $t_j$. On the other hand, one can find a continuous embedding $\pi(x,s)$ with respect to a fictitious time $s \in [0,1]$ such that $\pi(x,0) = \pi_f$ and $\pi(x,1) = \pi_a$. As proposed by Reich (2011), this embedding can be viewed as being induced by a continuity (Liouville) equation

$$\frac{\partial \pi}{\partial s} = -\nabla_x \cdot (\pi g) \qquad (5)$$

for an appropriate vector field $g(x,s)$. The vector field is not uniquely determined for a given continuous embedding unless we also require that it is the minimizer of the kinetic energy

$$T(v) = \frac{1}{2} \int \pi \, v^T M v \, dx$$

over all admissible vector fields $v$, where $M$ is a positive definite mass matrix (Villani, 2003). Admissibility means that $v$ satisfies (5) for given $\pi$ and $\partial \pi / \partial s$.

Under these assumptions, a constrained variational principle (Villani, 2003) implies that the desired vector field is given by $g = M^{-1} \nabla_x \psi$, where the potential $\psi$ is the solution of the elliptic partial differential equation (PDE)

$$\nabla_x \cdot \left( \pi M^{-1} \nabla_x \psi \right) = \pi \left( S - \mathbb{E}_\pi[S] \right) \qquad (6)$$

for given PDF $\pi$, mass matrix $M$, and negative log-likelihood function

$$S(x; y_{\rm obs}(t_j)) = \frac{1}{2} \left( Hx - y_{\rm obs}(t_j) \right)^T R^{-1} \left( Hx - y_{\rm obs}(t_j) \right). \qquad (7)$$

Here $\mathbb{E}_\pi[f]$ denotes the expectation value of a function $f$ with respect to a PDF $\pi$. We finally replace (5) by

$$\frac{\partial \pi}{\partial s} = -\nabla_x \cdot \left( \pi M^{-1} \nabla_x \psi \right) \qquad (8)$$

with an underlying ODE formulation

$$\frac{dx}{ds} = M^{-1} \nabla_x \psi(x,s) \qquad (9)$$

in fictitious time $s$. As for the ODE (1) and its associated Liouville equation (2), we may approximate (9) and its associated Liouville equation (8) by an empirical measure of type (4). Furthermore, one and the same empirical measure approximation can now be used for both the ensemble propagation step under the model dynamics (1) and the data assimilation step (8) using constant and equal weights $\gamma_i = 1/M$. The particle filter approximation is closed by finding an appropriate numerical solution to the elliptic PDE (6). This is the crucial step which will lead to different ensemble transform filter algorithms.

The basic numerical approach to the data assimilation step within an ensemble transform filter formulation consists then of the following sequence of steps. (i) Given a current ensemble of solutions $x_i(s)$, $i = 1, \ldots, M$, one fits a statistical model $\hat{\pi}(x,s)$. (ii) Solve the elliptic PDE

$$\nabla_x \cdot (\hat{\pi} \hat{g}) = \hat{\pi} \left( S - \mathbb{E}_{\hat{\pi}}[S] \right) \qquad (10)$$

for a vector field $\hat{g}(x,s)$. The solution is not uniquely determined and an appropriate choice needs to be made. See the discussion above. (iii) Propagate the ensemble members under the ODE

$$\frac{dx_i}{ds} = \hat{g}(x_i, s). \qquad (11)$$

We assume that a forecast ensemble of members $x_i(0)$, $i = 1, \ldots, M$, is available at an observation time $t_j$, which provides the initial conditions for the ODE (11). Solutions at $s = 1$ yield the analyzed ensemble members, which are then used as the new initial conditions for (1) at time $t_j$, and (1) is solved up to the next observation time.

If the statistical model $\hat{\pi}$ is a Gaussian with mean $\bar{x}$ and covariance matrix $P$, then the outlined approach leads to a continuous formulation of the ensemble square root filter analysis step at time $t_j$ (Bergemann and Reich, 2010a, b), i.e.

$$\frac{dx_i}{ds} = -\frac{1}{2} P H^T R^{-1} \left( H x_i + H \bar{x} - 2 y_{\rm obs}(t_j) \right) \qquad (12)$$

for $i = 1, \ldots, M$. It follows that $M^{-1} = P$ and

$$\psi(x) = -\frac{1}{4} \left( Hx + H\bar{x} - 2 y_{\rm obs}(t_j) \right)^T R^{-1} \left( Hx + H\bar{x} - 2 y_{\rm obs}(t_j) \right). \qquad (13)$$
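A minimal numerical sketch of the continuous analysis step (12): the ensemble is integrated in fictitious time $s \in [0,1]$ with forward Euler, recomputing the ensemble covariance at every step. For a linear Gaussian example the $s = 1$ ensemble should approximately reproduce the Kalman analysis moments. Ensemble size and number of steps below are illustrative assumptions.

```python
import numpy as np

def enkf_continuous_step(X, H, R, y_obs, n_steps=200):
    """Integrate dx_i/ds = -0.5 P H^T R^{-1} (H x_i + H xbar - 2 y_obs),
    eq. (12), over s in [0, 1] with forward Euler.
    X: (M, N) ensemble; H: (K, N); R: (K, K); y_obs: (K,)."""
    X = X.copy()
    ds = 1.0 / n_steps
    Rinv = np.linalg.inv(R)
    for _ in range(n_steps):
        xbar = X.mean(axis=0)
        A = X - xbar
        P = A.T @ A / (len(X) - 1)              # ensemble covariance
        innov = X @ H.T + H @ xbar - 2.0 * y_obs
        G = P @ H.T @ Rinv                      # (N, K) gain-like matrix
        X = X - 0.5 * ds * innov @ G.T
    return X

# Scalar example: prior mean 0, variance 2; H = 1, R = 1, y_obs = 3.
# The exact Kalman analysis gives mean 2 and variance 2/3.
rng = np.random.default_rng(2)
z = rng.normal(size=1000)
z = (z - z.mean()) / z.std(ddof=1)              # exact sample moments
X = (np.sqrt(2.0) * z).reshape(-1, 1)
Xa = enkf_continuous_step(X, H=np.array([[1.0]]), R=np.array([[1.0]]),
                          y_obs=np.array([3.0]), n_steps=500)
```

The forward Euler error decreases with the number of fictitious time-steps, consistent with the stability discussion in Section 3.3.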

## 3 An ensemble transform filter based on Gaussian mixture statistical models

We now develop an ensemble transform filter algorithm based on an $L$-component Gaussian mixture model, i.e.

$$\hat{\pi}(x) = \sum_{l=1}^L \frac{\alpha_l}{(2\pi)^{N/2} |P_l|^{1/2}} \exp\left( -\frac{1}{2} (x - \bar{x}_l)^T P_l^{-1} (x - \bar{x}_l) \right) = \sum_{l=1}^L \alpha_l \, \pi_{{\rm Gauss},l}(x), \qquad (14)$$

where $\pi_{{\rm Gauss},l}$ denotes the normal distribution in $\mathbb{R}^N$ with mean $\bar{x}_l$ and covariance matrix $P_l$. The Gaussian mixture parameters, i.e. $\alpha_l$, $\bar{x}_l$, $P_l$, $l = 1, \ldots, L$, need to be determined from the ensemble $x_i$, $i = 1, \ldots, M$, in an appropriate manner using, for example, the expectation-maximization (EM) algorithm (Dempster et al., 1977; Smith, 2007). See Section 3.3 for more details. Note that $\alpha_l \ge 0$ and $\sum_{l=1}^L \alpha_l = 1$. To simplify notation, we write $\pi_l$ instead of $\pi_{{\rm Gauss},l}$ from now on.

An implementation of the associated continuous formulation of the Bayesian analysis step proceeds as follows. To simplify the discussion we derive our filter formulation for a scalar observation variable, i.e. $y \in \mathbb{R}$, and

$$S(x; y_{\rm obs}(t_j)) = \frac{1}{2R} \left( y_{\rm obs}(t_j) - Hx \right)^2. \qquad (15)$$

The vector-valued case can be treated accordingly provided the error covariance matrix $R$ is diagonal. We first decompose the vector field $\hat{g}$ in (11) into two contributions, i.e.

$$\frac{dx}{ds} = \hat{g}(x,s) = u_A(x,s) + u_B(x,s). \qquad (16)$$

To simplify notation we drop the explicit $s$ dependence in the following calculations. We next decompose the right hand side of the elliptic PDE (10) also into two contributions

$$\hat{\pi} \left( S(x) - \mathbb{E}_{\hat{\pi}}[S] \right) = \left\{ \sum_{l=1}^L \alpha_l \pi_l \left( S(x) - \mathbb{E}_{\pi_l}[S] \right) \right\} + \left\{ \sum_{l=1}^L \alpha_l \pi_l \left( \mathbb{E}_{\pi_l}[S] - \mathbb{E}_{\hat{\pi}}[S] \right) \right\}. \qquad (17)$$

We now derive explicit expressions for the associated vector fields $u_A$ and $u_B$.

### 3.1 Gaussian mixture Kalman filter contributions

We define the vector field $u_A$ through the equation

$$u_A(x) = \sum_{l=1}^L \alpha_l \frac{\pi_l(x)}{\hat{\pi}(x)} P_l \nabla_x \psi_{A,l}(x), \qquad (18)$$

together with

$$\nabla_x \cdot \left\{ \hat{\pi}(x) u_A(x) \right\} = \nabla_x \cdot \left\{ \sum_{l=1}^L \alpha_l \pi_l(x) P_l \nabla_x \psi_{A,l}(x) \right\} = \sum_{l=1}^L \alpha_l \pi_l(x) \left( S(x) - \mathbb{E}_{\pi_l}[S] \right), \qquad (19)$$

which implies that the potentials $\psi_{A,l}$, $l = 1, \ldots, L$, are uniquely determined by

$$\nabla_x \cdot \left\{ \pi_l(x) P_l \nabla_x \psi_{A,l}(x) \right\} = \pi_l(x) \left( S(x) - \mathbb{E}_{\pi_l}[S] \right) \qquad (20)$$

for all $l = 1, \ldots, L$. It follows that the potentials $\psi_{A,l}$ are equivalent to the ensemble Kalman filter potential (13) for the $l$-th Gaussian component. Hence, using (12) and (18), we obtain

$$u_A(x,s) = -\frac{1}{2} \sum_{l=1}^L \alpha_l(s) \frac{\pi_l(x,s)}{\hat{\pi}(x,s)} P_l(s) H^T R^{-1} \left[ Hx + H\bar{x}_l(s) - 2 y_{\rm obs}(t_j) \right]. \qquad (21)$$

### 3.2 Gaussian mixture exchange contributions

The remaining contributions for solving (5) are collected in the vector field

$$u_B(x) = \sum_{l=1}^L \alpha_l \frac{\pi_l(x)}{\hat{\pi}(x)} P_l \nabla_x \psi_{B,l}(x), \qquad (22)$$

which therefore needs to satisfy

$$\nabla_x \cdot \left\{ \hat{\pi}(x) u_B(x) \right\} = \nabla_x \cdot \left\{ \sum_{l=1}^L \alpha_l \pi_l(x) P_l \nabla_x \psi_{B,l}(x) \right\} = \sum_{l=1}^L \alpha_l \pi_l(x) \left( \mathbb{E}_{\pi_l}[S] - \mathbb{E}_{\hat{\pi}}[S] \right) \qquad (23)$$

and, hence, we may set

$$\nabla_x \cdot \left\{ \pi_l(x) P_l \nabla_x \psi_{B,l}(x) \right\} = \pi_l(x) \left( \mathbb{E}_{\pi_l}[S] - \mathbb{E}_{\hat{\pi}}[S] \right) \qquad (24)$$

for all $l = 1, \ldots, L$. To find a solution of (24) we introduce functions $\hat{\psi}_{B,l}$ such that

$$\psi_{B,l}(x) = \hat{\psi}_{B,l}(Hx - H\bar{x}_l) = \hat{\psi}_{B,l}(y - \bar{y}_l) \qquad (25)$$

with $y = Hx$ and $\bar{y}_l = H\bar{x}_l$. Now the left hand side of (24) gives rise to

$$\nabla_x \cdot \left\{ \pi_l(x) P_l \nabla_x \psi_{B,l}(x) \right\} = \pi_l(x) \left( -(y - \bar{y}_l) \frac{d\hat{\psi}_{B,l}}{dy}(y - \bar{y}_l) + H P_l H^T \frac{d^2 \hat{\psi}_{B,l}}{dy^2}(y - \bar{y}_l) \right) \qquad (26)$$

and (24) simplifies further to the scalar PDE

$$-(y - \bar{y}_l) \frac{d\hat{\psi}_{B,l}}{dy}(y - \bar{y}_l) + H P_l H^T \frac{d^2 \hat{\psi}_{B,l}}{dy^2}(y - \bar{y}_l) = \mathbb{E}_{\pi_l}[S] - \mathbb{E}_{\hat{\pi}}[S]. \qquad (27)$$

The PDE (27) can be solved for

$$f(z) = \frac{d\hat{\psi}_{B,l}}{dy}(z), \qquad z = y - \bar{y}_l, \qquad (28)$$

under the condition that $\pi_l(y) f(y - \bar{y}_l)$ vanishes as $|y| \to \infty$ by explicit quadrature and one obtains

$$f(y - \bar{y}_l) = \frac{1}{2} \, \frac{\mathbb{E}_{\pi_l}[S] - \mathbb{E}_{\hat{\pi}}[S]}{H P_l H^T} \, \frac{{\rm erf}\left( (y - \bar{y}_l)/\sqrt{2\sigma_l^2} \right)}{\pi_l(y)} \qquad (29)$$

with marginalized PDF

$$\pi_l(y) := \frac{1}{\sqrt{2\pi\sigma_l^2}} \exp\left( -\frac{(y - \bar{y}_l)^2}{2\sigma_l^2} \right), \qquad (30)$$

$\sigma_l^2 := H P_l H^T$, and the standard error function

$${\rm erf}(y) = \frac{2}{\sqrt{\pi}} \int_0^y e^{-s^2} ds. \qquad (31)$$

Note that

$$\frac{1}{2} \frac{d}{dy} \, {\rm erf}\left( (y - \bar{y}_l)/\sqrt{2\sigma_l^2} \right) = \pi_l(y). \qquad (32)$$

We finally obtain the expression

$$u_B(x,s) = \frac{1}{2} \sum_{l=1}^L \alpha_l(s) \frac{\pi_l(x,s)}{\hat{\pi}(x,s)} P_l(s) H^T \, \frac{\mathbb{E}_{\pi_l}[S](s) - \mathbb{E}_{\hat{\pi}}[S](s)}{\sigma_l^2} \, \frac{{\rm erf}\left( (y - \bar{y}_l)/\sqrt{2\sigma_l^2} \right)}{\pi_l(y)} \qquad (33)$$

for the vector field $u_B$.

The expectation values $\mathbb{E}_{\pi_l}[S]$, $l = 1, \ldots, L$, can be computed analytically, i.e.

$$\mathbb{E}_{\pi_l}[S] = \frac{1}{2R} \left( (y_{\rm obs}(t_j) - \bar{y}_l)^2 + \sigma_l^2 \right) \qquad (34)$$

or estimated numerically. Recall that $\hat{\pi} = \sum_l \alpha_l \pi_l$ and, therefore,

$$\mathbb{E}_{\hat{\pi}}[S] = \sum_{l=1}^L \alpha_l \, \mathbb{E}_{\pi_l}[S]. \qquad (35)$$
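The closed form (34) (and hence (35)) can be checked by a quick Monte Carlo experiment; all numerical values below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of eq. (34): for S(y) = (y_obs - y)^2 / (2R) and
# y ~ N(ybar_l, sigma_l^2), E[S] = ((y_obs - ybar_l)^2 + sigma_l^2) / (2R).
rng = np.random.default_rng(1)
y_obs, ybar_l, sig2_l, R = 1.5, 0.2, 0.7, 2.0
y = rng.normal(ybar_l, np.sqrt(sig2_l), size=200_000)
mc = np.mean((y_obs - y) ** 2) / (2.0 * R)        # sample estimate
exact = ((y_obs - ybar_l) ** 2 + sig2_l) / (2.0 * R)
```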

It should be kept in mind that the Gaussian mixture parameters can be updated directly using the differential equations

$$\frac{d\bar{x}_l}{ds} = -P_l H^T R^{-1} \left( H\bar{x}_l - y_{\rm obs}(t_j) \right), \qquad (36)$$

$$\frac{dP_l}{ds} = -P_l H^T R^{-1} H P_l, \qquad (37)$$

$$\frac{d\alpha_l}{ds} = -\frac{1}{2} \alpha_l \left\{ \left( H\bar{x}_l - y_{\rm obs}(t_j) \right)^T R^{-1} \left( H\bar{x}_l - y_{\rm obs}(t_j) \right) + \lambda \right\}, \qquad (38)$$

for $l = 1, \ldots, L$. Here $\lambda$ is chosen such that

$$\sum_{l=1}^L \frac{d\alpha_l}{ds} = 0. \qquad (39)$$

Furthermore, $u_A$ exactly mirrors the update of the Gaussian mixture means $\bar{x}_l$ and covariance matrices $P_l$, while $u_B$ mimics the update of the weight factors $\alpha_l$ by rearranging the particle positions accordingly.

As already alluded to, we can treat each uncorrelated observation separately and sum the individual contributions in $u_A$ and $u_B$, respectively, to obtain the desired total vector field (16).

### 3.3 Implementation aspects

Given a set of ensemble members $x_i$, $i = 1, \ldots, M$, there are several options for implementing a Gaussian mixture filter. The trivial case $L = 1$ leads back to the continuous formulations of Bergemann and Reich (2010a, b). More interestingly, one can choose $L > 1$ and estimate the means and the covariance matrices of the Gaussian mixture model using the EM algorithm (Dempster et al., 1977; Smith, 2007). The EM algorithm self-consistently computes the mixture means and covariance matrices via

$$\bar{x}_l = \frac{1}{\sum_{i=1}^M \beta_{i,l}} \sum_{i=1}^M \beta_{i,l} \, x_i, \qquad P_l = \frac{1}{\sum_{i=1}^M \beta_{i,l}} \sum_{i=1}^M \beta_{i,l} \, (x_i - \bar{x}_l)(x_i - \bar{x}_l)^T \qquad (40)$$

for $l = 1, \ldots, L$ using weights $\beta_{i,l}$ defined by

$$\beta_{i,l} = \frac{\alpha_l \pi_l(x_i)}{\sum_{k=1}^L \alpha_k \pi_k(x_i)}, \qquad \alpha_l = \frac{1}{M} \sum_{i=1}^M \beta_{i,l}. \qquad (41)$$
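A self-contained sketch of one EM iteration implementing (40)-(41); the Gaussian density evaluation is written out in plain NumPy, and the parameters in the usage example are illustrative assumptions.

```python
import numpy as np

def gauss_pdf(X, mean, cov):
    """Multivariate normal density evaluated at the rows of X."""
    N = X.shape[1]
    diff = X - mean
    Pinv = np.linalg.inv(cov)
    expo = -0.5 * np.einsum('ij,jk,ik->i', diff, Pinv, diff)
    norm = np.sqrt((2.0 * np.pi) ** N * np.linalg.det(cov))
    return np.exp(expo) / norm

def em_step(X, alphas, means, covs):
    """One EM iteration, eqs. (40)-(41): the E-step computes the weights
    beta_{i,l}; the M-step updates alpha_l, xbar_l, and P_l."""
    M = X.shape[0]
    L = len(alphas)
    dens = np.column_stack([gauss_pdf(X, means[l], covs[l]) for l in range(L)])
    beta = alphas * dens                       # alpha_l * pi_l(x_i)
    beta /= beta.sum(axis=1, keepdims=True)    # eq. (41), first relation
    w = beta.sum(axis=0)                       # sum_i beta_{i,l}
    new_alphas = w / M                         # eq. (41), second relation
    new_means = (beta.T @ X) / w[:, None]      # eq. (40), first relation
    new_covs = []
    for l in range(L):
        D = X - new_means[l]
        new_covs.append((beta[:, l, None] * D).T @ D / w[l])  # eq. (40)
    return new_alphas, new_means, np.array(new_covs)

# Usage: fit a two-component mixture to a bimodal sample in one dimension.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3.0, 1.0, size=(200, 1)),
                    rng.normal(3.0, 1.0, size=(200, 1))])
alphas = np.array([0.5, 0.5])
means = np.array([[-1.0], [1.0]])
covs = np.array([[[1.0]], [[1.0]]])
for _ in range(100):
    alphas, means, covs = em_step(X, alphas, means, covs)
```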

The EM algorithm can fail to converge and a possible remedy is to add a small constant multiple of the identity to the empirical covariance matrices in (40), with the parameter appropriately chosen. We mention that more refined implementations of the EM algorithm, such as those discussed by Fraley and Raftery (2007), could also be considered. It is also possible to select the number of mixture components adaptively. See, for example, Smith (2007). The resulting vector fields for the $i$-th ensemble member are given by

$$u_A(x_i,s) = -\frac{1}{2} \sum_{l=1}^L \beta_{i,l}(s) \, P_l(s) H^T R^{-1} \left[ H x_i(s) + H \bar{x}_l(s) - 2 y_{\rm obs}(t_j) \right] \qquad (42)$$

and, using (33),

$$u_B(x_i,s) = \frac{1}{2} \sum_{l=1}^L \beta_{i,l}(s) \, P_l(s) H^T \, \frac{\mathbb{E}_{\pi_l}[S](s) - \mathbb{E}_{\hat{\pi}}[S](s)}{\sigma_l^2} \, \frac{{\rm erf}\left( (y - \bar{y}_l)/\sqrt{2\sigma_l^2} \right)}{\pi_l(y)} \qquad (43)$$

with weights $\beta_{i,l}$ given by (41) and $y = Hx_i$.
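Putting (34), (35), (42), and (43) together, the following sketch evaluates both vector fields for a scalar observation. The interface (array shapes, scalar $R$, $H$ passed as a row vector) is an assumption made for illustration, not the paper's specification.

```python
import numpy as np
from math import erf

def egmf_vector_fields(X, beta, alphas, means, covs, H, R, y_obs):
    """Evaluate the EGMF vector fields (42)-(43) for a scalar observation.
    X: (M, N) ensemble; beta: (M, L) weights from (41); H: (N,) forward
    operator row; R: scalar observation error variance (assumed layout)."""
    M, N = X.shape
    L = len(alphas)
    h = np.asarray(H, dtype=float).reshape(1, N)
    ybar = np.array([float(h @ m) for m in means])       # ybar_l = H xbar_l
    sig2 = np.array([float(h @ P @ h.T) for P in covs])  # sigma_l^2 = H P_l H^T
    ES = ((y_obs - ybar) ** 2 + sig2) / (2.0 * R)        # eq. (34)
    ES_hat = float(np.dot(alphas, ES))                   # eq. (35)
    uA = np.zeros_like(X)
    uB = np.zeros_like(X)
    y = (X @ h.T).ravel()                                # y_i = H x_i
    for l in range(L):
        Ph = (covs[l] @ h.T).ravel()                     # P_l H^T
        innov = (y + ybar[l] - 2.0 * y_obs) / R
        uA -= 0.5 * (beta[:, l] * innov)[:, None] * Ph   # eq. (42)
        z = y - ybar[l]
        pil = np.exp(-z ** 2 / (2.0 * sig2[l])) / np.sqrt(2.0 * np.pi * sig2[l])
        fac = np.array([erf(zi / np.sqrt(2.0 * sig2[l])) for zi in z]) / pil
        uB += 0.5 * (ES[l] - ES_hat) / sig2[l] * (beta[:, l] * fac)[:, None] * Ph
    return uA, uB
```

For a single component ($L = 1$) the exchange field $u_B$ vanishes and $u_A$ reduces to the ensemble Kalman filter field (12), which gives a simple consistency check.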

Another option to implement an EGMF is to set the number of mixture components equal to the number of ensemble members, i.e. $L = M$, and to use a prescribed covariance matrix $B$ for all mixture components, i.e. $P_l = B$ and $\bar{x}_l = x_l$, $l = 1, \ldots, M$. We also give all mixture components equal weights $\alpha_l = 1/M$. In this setting, it is more appropriate to call (14) a kernel density estimator (Wand and Jones, 1995). Then

$$u_A(x_i,s) = -\frac{1}{2} \sum_{l=1}^L \beta_{i,l}(s) \, B H^T R^{-1} \left[ H x_i(s) + H x_l(s) - 2 y_{\rm obs}(t_j) \right] \qquad (44)$$

and

$$u_B(x_i,s) = \frac{1}{2} \sum_{l=1}^L \beta_{i,l}(s) \, B H^T \, \frac{\mathbb{E}_{\pi_l}[S](s) - \mathbb{E}_{\hat{\pi}}[S](s)}{H B H^T} \, \frac{{\rm erf}\left( (y - \bar{y}_l)/\sqrt{2\sigma_l^2} \right)}{\pi_l(y)} \qquad (45)$$

with weights $\beta_{i,l}$ given by (41), $\bar{y}_l = H x_l$, and $\sigma_l^2 = H B H^T$. The Kalman filter like contributions (44) can be replaced by a formulation with perturbed observations (Evensen, 2006; Reich, 2011), which yields

$$u_A(x_i,s) = -B H^T R^{-1} \left[ H x_i(s) - y_{\rm obs}(t_j) + d_i \right], \qquad (46)$$

where the $d_i$, $i = 1, \ldots, M$, are independent, identically distributed Gaussian random numbers with mean zero and variance $R$. A particular choice is $B = cP$, where $P$ is the empirical covariance matrix of the ensemble and $c > 0$ is an appropriate constant. Assuming that the underlying probability density is Gaussian with covariance $P$, the choice

$$c = \left( 2/(N+2) \right)^{4/(N+4)} M^{-2/(N+4)} \qquad (47)$$

is optimal for large ensemble sizes $M$ in the sense of kernel density estimation (see, e.g., Wand and Jones (1995)). Recall that $N$ denotes the dimension of phase space. The resulting filter is then similar in spirit to the rank histogram filter (RHF) suggested by Anderson (2010), with the RHF increments in observation space being replaced by those generated from a Gaussian kernel density estimator. For another choice of the constant $c$, (45) can be viewed as a correction term to the standard ensemble Kalman filter (46). We will explore the kernel estimator in the numerical experiment of Section 5.4.
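The constant (47) is straightforward to evaluate; a small helper, with $N$ the phase-space dimension and $M$ the ensemble size as in the text:

```python
def kde_constant(N, M):
    """Bandwidth scaling constant of eq. (47):
    c = (2/(N+2))^{4/(N+4)} * M^{-2/(N+4)}."""
    return (2.0 / (N + 2)) ** (4.0 / (N + 4)) * float(M) ** (-2.0 / (N + 4))
```

As expected of a kernel bandwidth, $c$ shrinks as the ensemble size grows, so the kernel estimator concentrates around the particles.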

Note that localization, as introduced by Houtekamer and Mitchell (2001) and Hamill et al. (2001), can be combined with (42)-(43) and (44)-(45), respectively, as outlined in Bergemann and Reich (2010a). For example, one could set the covariance matrix in (44)-(45) equal to the localized ensemble covariance matrix. Localization will be essential for a successful application of the proposed filter formulations to high-dimensional systems (1). The same applies to ensemble inflation (Anderson and Anderson, 1999).

We also note that the computation of the particle-mixture weight factors (41) can become expensive since it requires the evaluation of the full Gaussian PDF values $\pi_l(x_i)$. This can be avoided by either using only the diagonal part of $P_l$ (Smith, 2007) or by using a marginalized density such as (30) instead of the full Gaussian PDF values. Some other suitable marginalization could also be performed.

The vector field $u_B$ is responsible for a transfer of particles between different mixture components according to the observation-adjusted relative weight of each mixture component. These transitions can be rather rapid, implying that $u_B$ can become large in magnitude. This can pose numerical difficulties, and we eliminated those by limiting the norm of $u_B$ through a cut-off value. Alternatively, we might want to replace (30) by a PDF which leads to less stiff contributions to the vector field, such as Student's t-distributions (Schaefer, 1997). Hence a natural approximative PDF is provided by the scaled t-distribution with three degrees of freedom, i.e.

$$\phi(y; \bar{y}, \sigma) = \frac{2\sigma^3}{\pi} \, \frac{1}{\left( \sigma^2 + (y - \bar{y})^2 \right)^2}. \qquad (48)$$

We also introduce the shorthand $\phi_l(y) = \phi(y; \bar{y}_l, \sigma_l)$ with $\bar{y}_l = H\bar{x}_l$ and $\sigma_l^2 = H P_l H^T$. A first observation is that

$$\mathbb{E}_{\phi_l}[y] = \bar{y}_l, \qquad \mathbb{E}_{\phi_l}[(y - \bar{y}_l)^2] = \sigma_l^2, \qquad (49)$$

i.e., the first two moments of $\phi_l$ match those of (30). The second observation is that $\phi_l$ can be integrated explicitly, i.e.

$$\Phi_l(y) = \int_{\bar{y}_l}^y \phi_l(u) \, du = \frac{1}{\pi} \arctan\left( \frac{y - \bar{y}_l}{\sigma_l} \right) + \frac{\sigma_l}{\pi} \, \frac{y - \bar{y}_l}{\sigma_l^2 + (y - \bar{y}_l)^2}. \qquad (50)$$

Hence the relation (32) suggests the alternative formulation

$$u_B(x,s) = \sum_{l=1}^L \alpha_l(s) \frac{\pi_l(x,s)}{\hat{\pi}(x,s)} P_l(s) H^T \, \frac{\mathbb{E}_{\pi_l}[S](s) - \mathbb{E}_{\hat{\pi}}[S](s)}{\sigma_l^2} \, \frac{\Phi_l(y,s)}{\phi_l(y,s)}, \qquad (51)$$

where $y = Hx$.
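The properties (49) and (50) of the scaled t-distribution (48) can be verified numerically; the grid width and parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def phi(y, ybar, sig):
    """Scaled t-distribution with three degrees of freedom, eq. (48)."""
    return 2.0 * sig**3 / np.pi / (sig**2 + (y - ybar) ** 2) ** 2

def Phi(y, ybar, sig):
    """Closed-form antiderivative of phi from ybar to y, eq. (50)."""
    z = y - ybar
    return np.arctan(z / sig) / np.pi + sig * z / (np.pi * (sig**2 + z**2))

# Check normalization and the second moment (49) on a wide grid, and
# check Phi' = phi at one point by a central finite difference.
ybar, sig = 0.5, 1.3
y = np.linspace(ybar - 500.0, ybar + 500.0, 2_000_001)
dy = y[1] - y[0]
dens = phi(y, ybar, sig)
mass = dens.sum() * dy                           # should be close to 1
second = ((y - ybar) ** 2 * dens).sum() * dy     # close to sig^2 (slow tails)
h = 1e-5
fd = (Phi(2.0 + h, ybar, sig) - Phi(2.0 - h, ybar, sig)) / (2.0 * h)
```

The heavy $O(|y|^{-4})$ tails make the second-moment quadrature converge slowly, which is exactly why this density yields less stiff contributions to the vector field than the Gaussian (30).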

The differential equation (16) needs to be approximated by a numerical time-stepping procedure. In this paper, we use the forward Euler method for simplicity. However, the limited stability region of an explicit method such as forward Euler implies that the step-size $\Delta s$ needs to be made sufficiently small. This issue has been investigated by Amezcua et al. (2011) for the formulation with $L = 1$ (standard continuous ensemble Kalman filter formulation), and a diagonally implicit scheme has been proposed which overcomes the stability restrictions of the forward Euler method while introducing negligible computational overhead. The computational cost of a single evaluation of (16) for given mixture components should be slightly lower than for a single EnKF step since no matrix inversion is required. Additional expenses arise from fitting the mixture components (e.g. using the EM algorithm) and from having to take a number of time-steps in the fictitious time $s$.

## 4 Algorithmic summary of the ensemble Gaussian mixture filter (EGMF)

Since the proposed methodology for treating nonlinear filter problems is based on an extension of the EnKF approach, we call the new filter the ensemble Gaussian mixture filter (EGMF). We now provide an algorithmic summary.

First a set of $M$ ensemble members $x_i(t_0)$, $i = 1, \ldots, M$, is generated at time $t_0$ according to the initial PDF $\pi_0$.

In between observations, the ensemble members are propagated under the ODE model (1), i.e.

$$\dot{x}_i = f(x_i, t), \qquad (52)$$

for $i = 1, \ldots, M$.

At an observation time $t_j$, a Gaussian mixture model (14) is fitted to the ensemble members $x_i(t_j)$, $i = 1, \ldots, M$. One can, for example, use the classic EM algorithm (Dempster et al., 1977; Smith, 2007) for this purpose. In this paper we use a simple heuristic to determine the number of components $L$. An adaptive selection of $L$ is, however, feasible (see, e.g., Smith (2007)). Alternatively, one can set $L = M$ and implement the EGMF with a Gaussian kernel density estimator with $P_l = B$ and $\bar{x}_l = x_l$, $l = 1, \ldots, M$. The covariance matrix $B$ can be based on the empirical covariance matrix $P$ of the whole ensemble via $B = cP$ with the constant $c > 0$ appropriately chosen. At this stage covariance localization can also be applied.

The vector fields $u_A$ and $u_B$ are computed according to (42) and (43), respectively (or, alternatively, using (45)-(46)), for each independent observation, and the resulting contributions are summed up to a total vector field (16). Next the ensemble members are updated according to (16) for $s \in [0,1]$, $i = 1, \ldots, M$. Here we use a simple forward Euler discretization with step-size $\Delta s$ chosen sufficiently small. After each time-step the Gaussian mixture components are updated, if necessary, using the EM algorithm. The analyzed ensemble members are obtained as the numerical solutions at $s = 1$ and provide the new initial conditions for (52) with time now in the interval $[t_j, t_{j+1}]$.

Ensemble induced estimates for the expectation value of a function $f(x)$ can be computed via

$$\bar{f} = \frac{1}{M} \sum_{i=1}^M f(x_i) \qquad (53)$$

without reference to a Gaussian mixture model.

## 5 Numerical experiments

In this section we provide results from several numerical simulations and demonstrate the performance of the proposed EGMF in comparison with standard implementations of the EnKF and an implementation of the RHF (Anderson, 2010). We first investigate the Bayesian assimilation step without model equations.

### 5.1 Single Bayesian assimilation step

We test our formulation first for a single assimilation step where the prior is a bimodal Gaussian mixture

$$\pi_{\rm prior}(x) = \frac{1}{2} \frac{1}{\sqrt{2\pi}} e^{-(x-\pi)^2/2} + \frac{1}{2} \frac{1}{\sqrt{2\pi}} e^{-(x+\pi)^2/2} \qquad (54)$$

and the likelihood is

$$\pi(y_{\rm obs}|x) = \frac{1}{4\sqrt{2\pi}} e^{-(x-\pi)^2/32}. \qquad (55)$$

The posterior distribution is again a bimodal Gaussian mixture and can be computed analytically. See Fig. 1. Here we demonstrate how an EnKF, the RHF, and the proposed EGMF approximate the posterior for a range of ensemble sizes $M$. See Fig. 2. Both the RHF and the EGMF are capable of reproducing the Bayesian assimilation step correctly for sufficiently large $M$, while the EnKF leads to a qualitatively incorrect result. The transformation of the ensemble members (particles) under the dynamics (16) is displayed in Fig. 3.
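A grid-based reference for this single Bayesian step, in the spirit of the Fokker-Planck reference solution used in Section 5.2: multiply the prior (54) by the likelihood (55) pointwise and normalize. The grid resolution is an arbitrary choice.

```python
import numpy as np

# Bayes' theorem on a grid: posterior ~ prior (54) * likelihood (55).
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
prior = 0.5 / np.sqrt(2.0 * np.pi) * (np.exp(-(x - np.pi) ** 2 / 2.0)
                                      + np.exp(-(x + np.pi) ** 2 / 2.0))
lik = np.exp(-(x - np.pi) ** 2 / 32.0) / np.sqrt(2.0 * np.pi * 16.0)
post = prior * lik
post /= post.sum() * dx                   # normalize on the grid
post_mean = np.sum(x * post) * dx
```

Since both mixture components are Gaussian, the posterior component weights and means can also be written down analytically, which makes this a convenient benchmark for the particle filters.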

We now present results from three increasingly complex filtering problems.

### 5.2 A one-dimensional model problem

As a first numerical example we consider Brownian dynamics under a one-dimensional potential $V(x)$, i.e.

$$dx = -V'(x)\, dt + dw(t), \qquad (56)$$

where $w(t)$ denotes standard Brownian motion and the potential is given by

$$V(x) = \cos(x) + \frac{3}{4}(x/6)^4. \qquad (57)$$

See Fig. 4. The true reference trajectory is started from a fixed initial condition. See Fig. 5. Measurements of the state are collected every 10 time units for two different values of the measurement error variance $R$.
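The Brownian dynamics (56) with potential (57) can be simulated with the Euler-Maruyama scheme; the step-size, horizon, and initial value below are illustrative rather than the paper's settings.

```python
import numpy as np

def simulate_sde(x0, dt, n_steps, rng):
    """Euler-Maruyama discretization of dx = -V'(x) dt + dw(t), eq. (56),
    with V(x) = cos(x) + (3/4)(x/6)^4 from eq. (57)."""
    Vprime = lambda x: -np.sin(x) + 0.5 * (x / 6.0) ** 3  # V'(x)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        x[n + 1] = x[n] - Vprime(x[n]) * dt + np.sqrt(dt) * rng.normal()
    return x

# Illustrative run started near one of the potential wells.
rng = np.random.default_rng(4)
traj = simulate_sde(3.14, dt=0.01, n_steps=2000, rng=rng)
```

The confining quartic term keeps trajectories bounded, while the cosine term creates the wells between which rare noise-driven transitions occur.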

The initial PDF is given by the bimodal distribution

$$\pi_0(x) \propto e^{-(x-3.14)^2/2} + e^{-(x+3.14)^2/2}. \qquad (58)$$

Depending on the distribution of ensemble members we either use a single Gaussian ($L = 1$) or a bi-Gaussian mixture ($L = 2$) in the EGMF assimilation step. A single Gaussian is used whenever more than 90% of the particles are located near either the right or the left potential well. The computed variances are bounded from below to avoid singularities in the EM algorithm. The model equation (56) is discretized with the forward Euler method with a fixed time-step over the total simulation interval. The assimilation equations with (42) and (43) are also discretized with the forward Euler method with step-size $\Delta s$ chosen sufficiently small, and the norm of $u_B$ is limited by a cut-off value.

The performance of the EGMF is compared to an EnKF with perturbed observations, an ensemble square root filter, and an RHF. The particle positions are adjusted during each data assimilation step of the RHF such that the particles maintain equal weights $\gamma_i = 1/M$. The adjustment is done similarly to what has been proposed by Anderson (2010) except that the posterior is approximated by piecewise constant functions.

For this simple, one-dimensional problem the densities can be directly propagated through a discretization of the associated Fokker-Planck equation. Bayes' theorem then reduces to a multiplication of the prior PDF approximation from the Fokker-Planck discretization with the likelihood at each grid point. We have used a fine grid to provide an independent and accurate filtering result. Periodic boundary conditions are used for the diffusion operator such that the spatial discretization leads to a stochastic matrix. It is found from the numerical simulations that the larger of the two measurement error variances leads to a pronounced bimodal behavior of the solution PDF. See Figure 6 for two posterior PDF approximations from the Fokker-Planck approach.

Typical filter behaviors relative to the reference trajectory can be found in Figures 7, 8, 9, and 10, respectively. The EGMF and the RHF display a behavior similar to that from the discretized Fokker-Planck approach, while significantly different results are obtained from the EnKF implementation with perturbed observations. Similar results are obtained for an ensemble square root filter (not displayed). For the two values of the measurement error variance, the EGMF uses a bi-Gaussian approximation in 97% and 47% of the assimilation steps, respectively.

We also provide the root mean square (RMS) error between the computed mean from the three different filters and the mean computed from the Fokker-Planck approach in Table 1 for different ensemble sizes $M$. The RHF converges as $M \to \infty$ to the solution of the Fokker-Planck approach for this one-dimensional model problem. The EGMF provides better results than the EnKF but does not converge since the limiting distributions are not exactly bi-Gaussian. Note that the EGMF should converge for $M \to \infty$ and a sufficiently large number $L$ of mixture components. Note also that the RHF does not converge to the analytic filter solution as $M \to \infty$ in case of more than one dynamic variable. See also the following example.

### 5.3 A two-dimensional model problem

We discuss another example from classical mechanics. The evolution of a particle with position $q$ and velocity $v$ is described by Langevin dynamics (Gardiner, 2004) with equations of motion

$$dq = v \, dt, \qquad (59)$$

$$dv = -V'(q)\, dt - \gamma v \, dt + \sigma \, dw(t), \qquad (60)$$

where the potential is given by

$$V(q) = \cos(q) + \frac{3}{4}(q/6)^4, \qquad (61)$$

$\gamma$ is the friction coefficient, $w(t)$ denotes standard Brownian motion, and $\sigma$ is the noise amplitude. A reference solution, denoted by $(q_r(t), v_r(t))$, is obtained for a fixed initial condition and a particular realization of $w(t)$.

Let us address the situation that the reference solution is not directly accessible to us and that instead we are only able to observe the process $Q(t)$ subject to

$$dQ(t) = v_r(t)\, dt + c^{1/2}\, d\xi(t), \qquad (62)$$

where $\xi(t)$ denotes again standard Brownian motion and $c > 0$ is the measurement error variance parameter. In other words, we are effectively only able to observe particle velocities.

We now combine the model equations and the observations within the ensemble Kalman-Bucy framework. The ensemble filter relies on the simultaneous propagation of an ensemble of solutions $(q_i(t), v_i(t))$, $i = 1, \ldots, M$. The filter equations for an EGMF with a single Gaussian, i.e. an ensemble Kalman-Bucy filter (Bergemann and Reich, 2011), become

$$dq_i = v_i \, dt - \frac{P_{qv}}{2c} \left( v_i \, dt + \bar{v}\, dt - 2\, dQ(t) \right), \qquad (63)$$

$$dv_i = -V'(q_i)\, dt - \gamma v_i \, dt + \sigma \, dw_i(t) - \frac{P_{vv}}{2c} \left( v_i \, dt + \bar{v}\, dt - 2\, dQ(t) \right) \qquad (64)$$

with mean

$$\bar{v} = \frac{1}{M} \sum_{i=1}^M v_i, \qquad \bar{q} = \frac{1}{M} \sum_{i=1}^M q_i \qquad (65)$$

and variance/covariance

$$P_{vv} = \frac{1}{M-1} \sum_{i=1}^M (v_i - \bar{v})^2, \qquad P_{qv} = \frac{1}{M-1} \sum_{i=1}^M (q_i - \bar{q})(v_i - \bar{v}). \qquad (66)$$

The equations are solved for each ensemble member with different realizations $w_i(t)$ of standard Brownian motion and a fixed step-size, equal to the observation interval in (62). The extension of the EGMF to a Gaussian mixture with $L > 1$ is straightforward. One substitutes
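A single Euler-Maruyama step of the ensemble Kalman-Bucy update (63)-(66) might be sketched as follows; all parameter values in the usage example are illustrative, not the paper's settings.

```python
import numpy as np

def kalman_bucy_step(q, v, dQ, dt, c, gamma, sigma, Vprime, rng):
    """One Euler-Maruyama step of eqs. (63)-(64), with the ensemble
    statistics (65)-(66) recomputed from the current ensemble."""
    M = len(v)
    vbar, qbar = v.mean(), q.mean()
    Pvv = np.sum((v - vbar) ** 2) / (M - 1)           # eq. (66)
    Pqv = np.sum((q - qbar) * (v - vbar)) / (M - 1)   # eq. (66)
    innov = v * dt + vbar * dt - 2.0 * dQ             # (v_i + vbar) dt - 2 dQ
    dw = np.sqrt(dt) * rng.normal(size=M)
    q_new = q + v * dt - Pqv / (2.0 * c) * innov            # eq. (63)
    v_new = (v - Vprime(q) * dt - gamma * v * dt + sigma * dw
             - Pvv / (2.0 * c) * innov)                     # eq. (64)
    return q_new, v_new

# Illustrative usage with the potential (61) and a synthetic increment dQ.
rng = np.random.default_rng(5)
M = 50
q = rng.normal(3.14, 0.5, M)
v = rng.normal(0.0, 0.5, M)
Vp = lambda x: -np.sin(x) + 0.5 * (x / 6.0) ** 3          # V'(q) for (61)
dt = 0.01
qn, vn = kalman_bucy_step(q, v, dQ=0.2 * dt, dt=dt, c=0.01,
                          gamma=1.0, sigma=0.5, Vprime=Vp, rng=rng)
```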