# An Optimal Transport Formulation of the Ensemble Kalman Filter

Amirhossein Taghvaei, Prashant G. Mehta. A. Taghvaei is with the Department of Mechanical and Aerospace Engineering at the University of California, Irvine; the research reported in this paper was performed while he was a graduate student at the University of Illinois at Urbana-Champaign (ataghvae@uci.edu). P. G. Mehta is with the Coordinated Science Laboratory and the Department of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign (UIUC) (mehtapg@illinois.edu). Financial support from the NSF CMMI grants 1462773 and 1761622 is gratefully acknowledged. The conference version of this paper appears in [1] and [2].
###### Abstract

Controlled interacting particle systems such as the ensemble Kalman filter (EnKF) and the feedback particle filter (FPF) are numerical algorithms to approximate the solution of the nonlinear filtering problem in continuous time. The distinguishing feature of these algorithms is that the Bayesian update step is implemented using a feedback control law. It has been noted in the literature that the control law is not unique. This is the main problem addressed in this paper. To obtain a unique control law, the filtering problem is formulated here as an optimal transportation problem. An explicit formula for the (mean-field type) optimal control law is derived in the linear Gaussian setting. Comparisons are made with the control laws for different types of EnKF algorithms described in the literature. Via empirical approximation of the mean-field control law, a finite-$N$ controlled interacting particle algorithm is obtained. For this algorithm, the equations for the empirical mean and covariance are derived and shown to be identical to the Kalman filter. This allows strong conclusions on convergence and error properties based on the classical filter stability theory for the Kalman filter. It is shown that, under certain technical conditions, the mean squared error (m.s.e.) converges to zero even with a finite number of particles. A detailed propagation of chaos analysis is carried out for the finite-$N$ algorithm. The analysis is used to prove weak convergence of the empirical distribution as $N \to \infty$. For a certain simplified filtering problem, an analytical comparison of the m.s.e. with the importance sampling-based algorithms is described. The analysis helps explain the favorable scaling properties of the control-based algorithms reported in several numerical studies in the recent literature.

## I Introduction

The subject of this paper concerns Monte-Carlo methods for simulating a nonlinear filter (conditional distribution) in continuous-time settings. The mathematical abstraction of any filtering problem involves two processes: a hidden Markov process $X = \{X_t\}$ and the observation process $Z = \{Z_t\}$. The numerical problem is to compute the posterior distribution $P(X_t \in \cdot \mid \mathcal{Z}_t)$, where $\mathcal{Z}_t$ is the filtration generated by the observations. A standard solution approach is the particle filter which relies on importance sampling to implement the effect of conditioning [3, 4]. In numerical implementations, this often leads to the particle degeneracy issue (or weight collapse) whereby only a few particles have large weights. To combat this issue, various types of resampling schemes have been proposed in the literature [5, 6].

In the past decade, an alternate class of algorithms has attracted growing attention. These algorithms can be regarded as a controlled interacting particle system where the central idea is to implement the effect of conditioning using feedback control. Mathematically, this involves construction of a controlled stochastic process, denoted by $\bar{X}_t$. In continuous-time settings, the model for $\bar{X}_t$ is a stochastic differential equation (sde):

where the [additional terms] are pre-specified (these terms may be zero). The control problem is to design mean-field type control laws $u_t$ and $K_t$ such that the conditional distribution of $\bar{X}_t$ (given $\mathcal{Z}_t$) is equal to the posterior distribution of $X_t$. If this property holds, the filter is said to be exact. In a numerical implementation, the mean-field terms in the control law are approximated empirically by simulating $N$ copies of (1). The resulting system is a controlled interacting particle system with a finite number $N$ of interacting particles. The particles have uniform importance weights by construction. Therefore, the particle degeneracy issue does not arise. Resampling is no longer necessary and steps such as rules for reproduction, death or birth of particles are altogether avoided.

The focus of this paper is on (i) formal methods for design of control laws ($u_t$ and $K_t$) for (1); (ii) algorithms for empirical approximation of the control laws using particles; and (iii) error analysis of the finite-$N$ interacting particle models as $N \to \infty$. The main problem highlighted and addressed in this paper is the issue of uniqueness: one can interpret the controlled system (1) as transporting the initial distribution at time $0$ (prior) to the conditional distribution at time $t$ (posterior). Clearly, there are infinitely many maps that transport one distribution into another. This suggests that there are infinitely many choices of control laws that all lead to exact filters. This is not surprising: The exactness condition specifies only the marginal distribution of the stochastic process at times $t \ge 0$, which is not enough to uniquely identify a stochastic process, e.g., the joint distributions at two time instants are not specified.

Although these issues are relevant more generally, the scope of this paper is limited to the linear Gaussian problem. A motivation comes from the widespread use of the ensemble Kalman filter (EnKF) algorithm in applications. It is noted that the mean-field limit of the EnKF algorithm is exact only in linear Gaussian settings. The issue of non-uniqueness is manifested in the different types of EnKF algorithms reported in literature. Some of these EnKF types are discussed as part of the literature survey (in Sec. I-B) and in the main body of the paper (in Sec. II-B).

### I-A Contributions of this paper

The following contributions are made in this paper:

1. Non-uniqueness issue: For the linear Gaussian problem, an error process is introduced to help explain the non-uniqueness issue in the selection of the control law in (1). The error process helps clarify the relationship between the different types of control laws leading to the different types of EnKF algorithms that have appeared over the years in the literature.

2. Optimal transport FPF: To select a unique control law, an optimization problem is proposed in the form of a time-stepping procedure. The optimality concept is motivated by the optimal transportation theory [7, 8]. The solution of the time-stepping procedure yields a unique optimal control law. The resulting filter is referred to as the optimal transport FPF. The procedure is suitably adapted to handle the case, important in Monte-Carlo implementations with finitely many particles, where the covariance is singular. In this case, the optimal (deterministic) transport maps are replaced by optimal (stochastic) couplings. The general form of the optimal FPF includes stochastic terms which are zero when the covariance is non-singular.

3. Error analysis: A detailed error analysis is carried out for the deterministic form of the optimal FPF in the finite but large $N$ limit. For the purposes of the error analysis, it is assumed that the linear system is controllable and observable, and that the initial empirical covariance matrix is non-singular. The main results are as follows:

• The empirical mean and covariance of the particles are shown to converge almost surely to the exact mean and covariance as $t \to \infty$, even for finite $N$ (Prop. 2-(i));

• The mean-squared error is shown to be bounded by $\frac{(\text{const.})}{N}$, where the constant has polynomial dependence on the problem dimension $d$ (Prop. 2-(ii));

• A propagation of chaos analysis is carried out to show that the empirical distribution of the particles converges in a weak sense to the exact filter posterior distribution (Cor. 1).

4. Comparison to importance sampling: For a certain simplified filtering problem, a comparison of the m.s.e. between the importance sampling and control-based filters is described. The main result is to show that, using an importance sampling approach, the number of particles $N$ must grow exponentially with the dimension $d$. In contrast, with a control-based approach, $N$ scales at most polynomially with $d$ in order to maintain the same error (Prop. 4). The conclusions are also verified numerically (Fig. 1).

This paper extends and completes the preliminary results reported in our prior conference papers [1, 2]. The optimal transport formulation of the FPF and the time-stepping procedure was originally introduced in [1]. However, its extension to the singular covariance matrix case, in Sec. III-D, is original and has not appeared before. Preliminary error analysis of the deterministic form of the optimal FPF appeared in our prior work [2]. The current paper extends the results in [2] in two key aspects: (i) The error bounds for the convergence of the empirical mean and covariance, in Prop. 2-(ii), reveal the scaling with the problem dimension (see Remark 3-(ii)), whereas the previous results did not; (ii) The propagation of chaos analysis is carried out for the vector case (Cor. 1), whereas the previous result was only valid for the scalar case. These improvements became possible with a proof approach that is entirely different from the one used in [2]. Finally, the analytical comparison with the importance sampling particle filter, in Sec. V, is new.

### I-B Literature survey

Two examples of the controlled interacting particle systems are the classical ensemble Kalman filter (EnKF) [9, 10, 11, 12] and the more recently developed feedback particle filter (FPF) [13, 14]. The EnKF algorithm is the workhorse in applications (such as weather prediction) where the state dimension is very high; cf., [12, 15]. The high dimension of the state space provides a significant computational challenge even in linear Gaussian settings. For such problems, an EnKF implementation may require less computational resources (memory and FLOPS) than a Kalman filter [15, 16]. This is because the particle-based algorithm avoids the need to store and propagate the error covariance matrix (whose size scales as $d^2$ with the state dimension $d$).

An expository review of the continuous-time filters, including the progression from the Kalman filter (1960s) to the ensemble Kalman filter (1990s) to the feedback particle filter (2010s), appears in the literature. In continuous-time settings, the first interacting particle representation of the nonlinear filter appears in the work of Crisan and Xiong. Also in continuous-time settings, Reich and collaborators have derived deterministic forms of the EnKF [19, 12]. In discrete-time settings, Daum and collaborators have pioneered the development of closely related particle flow algorithms [20, 21].

The technical approach of this paper has its roots in the optimal transportation theory. These methods have been widely applied for uncertainty propagation and Bayesian inference: The ensemble transform particle filter is based upon computing an optimal coupling by solving a linear optimization problem; polynomial approximations of the Rosenblatt transport maps for Bayesian inference appear in [23, 24]; the solution of such problems using the Gibbs flow has also been studied. The time-stepping procedure of this paper is inspired by the J-K-O construction. Its extension to the filtering problem appears in [27, 28, 29, 30].

Closely related to the error analysis of this paper is the recent literature on stability and convergence of the EnKF algorithm. For the discrete-time EnKF algorithm, these results appear in [31, 32, 33, 34, 35]. The analysis for the continuous-time EnKF is more recent. For the continuous-time EnKF with perturbed observation, under additional assumptions (stable and fully observable), it has been shown that the empirical distribution of the ensemble converges to the mean-field distribution uniformly for all time with the rate $O(1/\sqrt{N})$. This result has been extended to the nonlinear setting for the case of Langevin-type dynamics with a strongly convex potential and full linear observation. The stability assumption has recently been relaxed. Under certain conditions, convergence and long-term stability results have also appeared.

In independent numerical evaluations and comparisons, it has been observed that EnKF and FPF exhibit smaller simulation variance and better scaling properties – with respect to the problem dimension – when compared to the traditional methods [40, 41, 42, 43, 44]. The error analysis (in Sec. IV) together with the analytical bounds on comparison with the importance sampling (in Sec. V) provide the first such rigorous justification for the performance improvement reported in literature. The analysis of this paper is likely to spur wider adoption of the control-based algorithms for the purposes of sampling and simulation.

### I-C Paper outline

Sec. II includes the preliminaries along with a discussion of the non-uniqueness issue. Its resolution is provided in Sec. III where the optimal FPF is derived. The error analysis appears in Sec. IV and comparison with importance sampling particle filter is given in Sec. V. The proofs appear in the Appendix.

Notation: For a vector $x \in \mathbb{R}^d$, $|x|$ denotes the Euclidean norm. For a square matrix $A$, $\|A\|_F$ denotes the Frobenius norm, $\|A\|_2$ is the spectral norm, $\text{Tr}(A)$ is the matrix-trace, $\mathcal{N}(A)$ denotes the null-space, $\mathcal{R}(A)$ denotes the range space, and $\text{spec}(A)$ denotes the spectrum. For a symmetric matrix $A$, $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, respectively. The partial order of positive definite matrices is denoted by $\succ$, such that $A \succ B$ means $A - B$ is positive definite. $N(m, \Sigma)$ is a Gaussian probability distribution with mean $m$ and covariance $\Sigma$.

## II The Non-Uniqueness Issue

### II-A Preliminaries

The linear Gaussian filtering problem is described by the linear stochastic differential equations (sde-s):

 $dX_t = AX_t\,dt + \sigma_B\,dB_t,$  (2a)
 $dZ_t = HX_t\,dt + dW_t,$  (2b)

where $X_t \in \mathbb{R}^d$ and $Z_t \in \mathbb{R}^m$ are the state and observation at time $t$, $\{B_t\}$ and $\{W_t\}$ are mutually independent standard Wiener processes taking values in $\mathbb{R}^p$ and $\mathbb{R}^m$, respectively, and $A$, $H$, $\sigma_B$ are matrices of appropriate dimension. The initial condition $X_0$ is assumed to have a Gaussian distribution $N(m_0, \Sigma_0)$. The filtering problem is to compute the posterior distribution,

 $\pi_t(\cdot) := P(X_t \in \cdot \mid \mathcal{Z}_t),$  (3)

where $\mathcal{Z}_t := \sigma(Z_s : 0 \le s \le t)$ is the filtration generated by the observations.

Kalman-Bucy filter: In this linear Gaussian case, the posterior distribution is Gaussian, $\pi_t = N(m_t, \Sigma_t)$, whose mean and variance evolve according to the Kalman-Bucy filter:

 $dm_t = Am_t\,dt + K_t(dZ_t - Hm_t\,dt),$  (4a)
 $\frac{d}{dt}\Sigma_t = \text{Ricc}(\Sigma_t) := A\Sigma_t + \Sigma_t A^\top + \Sigma_B - \Sigma_t H^\top H\Sigma_t,$  (4b)

where $K_t := \Sigma_t H^\top$ is the Kalman gain and $\Sigma_B := \sigma_B\sigma_B^\top$.
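As a concrete illustration, the Kalman-Bucy equations (4a)-(4b) can be integrated with a simple Euler scheme. The sketch below (Python with NumPy; the scalar model parameters are illustrative choices of our own) propagates the mean and covariance given an observation increment:

```python
import numpy as np

def kalman_bucy_step(m, S, dz, A, H, Sigma_B, dt):
    """One Euler step of the Kalman-Bucy filter (4a)-(4b).

    m : mean vector, S : covariance matrix, dz : observation increment dZ_t.
    """
    K = S @ H.T  # Kalman gain K_t = Sigma_t H^T
    m_next = m + (A @ m) * dt + K @ (dz - (H @ m) * dt)
    Ricc = A @ S + S @ A.T + Sigma_B - S @ H.T @ H @ S
    S_next = S + Ricc * dt
    return m_next, S_next

# Scalar example: dX = 0.5 X dt + dB, dZ = X dt + dW.
A = np.array([[0.5]]); H = np.array([[1.0]]); Sigma_B = np.array([[1.0]])
m, S = np.zeros(1), np.array([[1.0]])
dt = 1e-3
for _ in range(10_000):  # integrate to t = 10 with a zero observation path
    m, S = kalman_bucy_step(m, S, np.zeros(1), A, H, Sigma_B, dt)
```

The covariance iteration is deterministic (it does not depend on the observation path); for this scalar example it converges to the positive root of $\text{Ricc}(\Sigma) = 0$, namely $\Sigma_\infty = 0.5 + \sqrt{1.25} \approx 1.618$.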

Feedback particle filter: The stochastic linear FPF [14, Eq. (26)] (and also the square-root form of the EnKBF [12, Eq (3.3)]) is described by the McKean-Vlasov sde:

 $d\bar{X}_t = A\bar{X}_t\,dt + \sigma_B\,d\bar{B}_t + \underbrace{\bar{K}_t\Big(dZ_t - \frac{H\bar{X}_t + H\bar{m}_t}{2}\,dt\Big)}_{\text{FPF control law}},$  (5)

where $\bar{K}_t := \bar{\Sigma}_t H^\top$ is the Kalman gain, $\bar{B}_t$ is a standard Wiener process, $\bar{m}_t := E[\bar{X}_t \mid \mathcal{Z}_t]$ and $\bar{\Sigma}_t := E[(\bar{X}_t - \bar{m}_t)(\bar{X}_t - \bar{m}_t)^\top \mid \mathcal{Z}_t]$ are the mean-field terms, and $\bar{X}_0 \sim N(m_0, \Sigma_0)$. We use

 $\bar{\pi}_t(\cdot) := P(\bar{X}_t \in \cdot \mid \mathcal{Z}_t)$  (6)

to denote the conditional distribution of the mean-field process $\bar{X}_t$.

The FPF control law is exact. The exactness result appears in the following theorem, which is a special case of [14, Thm. 1] that describes the exactness result for the general nonlinear non-Gaussian case. A proof is included in Appendix A. The proof is useful for studying the non-uniqueness issue described in Sec. II-B.

###### Theorem 1

(Exactness of linear FPF) Consider the linear Gaussian filtering problem (2a)-(2b) and the linear FPF (5). If $\bar{\pi}_0 = \pi_0$ then

 $\pi_t = \bar{\pi}_t, \quad \forall\, t \ge 0.$

The nomenclature is tabulated in Table I.

### II-B The non-uniqueness issue

In the proof of Thm. 1 (given in Appendix A), it is shown that (i) the conditional mean process $\bar{m}_t$ evolves according to (4a); and (ii) the conditional variance process $\bar{\Sigma}_t$ evolves according to (4b).

Define an error process $\xi_t := \bar{X}_t - \bar{m}_t$ for $t \ge 0$. The equation for $\xi_t$ is obtained by subtracting the equation for the mean, (38) in Appendix A, from (5):

 $d\xi_t = \big(A - \tfrac{1}{2}\bar{\Sigma}_t H^\top H\big)\xi_t\,dt + \sigma_B\,d\bar{B}_t.$

This is a linear system and therefore the variance of $\xi_t$, easily seen to be given by $\bar{\Sigma}_t$, evolves according to the Lyapunov equation

 $\frac{d}{dt}\bar{\Sigma}_t = \big(A - \tfrac{1}{2}\bar{\Sigma}_t H^\top H\big)\bar{\Sigma}_t + \bar{\Sigma}_t\big(A - \tfrac{1}{2}\bar{\Sigma}_t H^\top H\big)^\top + \Sigma_B = \text{Ricc}(\bar{\Sigma}_t),$

which is identical to (4b).

These arguments suggest the following general procedure to construct an exact process: Express $\bar{X}_t$ as a sum of two terms:

 $\bar{X}_t = \bar{m}_t + \xi_t,$

where $\bar{m}_t$ evolves according to the Kalman-Bucy equation (4a) and the evolution of $\xi_t$ is defined by the sde:

 $d\xi_t = G_t\xi_t\,dt + \sigma_t\,d\bar{B}_t + \sigma'_t\,d\bar{W}_t,$

where $\bar{B}_t$ and $\bar{W}_t$ are independent Brownian motions, and $G_t$, $\sigma_t$, and $\sigma'_t$ are solutions to the matrix equation

 $G_t\bar{\Sigma}_t + \bar{\Sigma}_t G_t^\top + \sigma_t\sigma_t^\top + \sigma'_t(\sigma'_t)^\top = \text{Ricc}(\bar{\Sigma}_t).$  (7)

By construction, the equation for the variance is given by the Riccati equation (4b).

In general, there are infinitely many solutions for (7). Below, we describe three solutions that lead to three established forms of the EnKF and the linear FPF:

1. EnKF with perturbed observation [19, Eq. (27)]:

 $G_t = A - \bar{\Sigma}_t H^\top H, \quad \sigma_t = \bar{\Sigma}_t H^\top, \quad \sigma'_t = \sigma_B$
2. Stochastic linear FPF [14, Eq. (26)] or square-root form of the EnKF [12, Eq (3.3)]:

 $G_t = A - \tfrac{1}{2}\bar{\Sigma}_t H^\top H, \quad \sigma_t = \sigma_B, \quad \sigma'_t = 0$
3. Deterministic linear FPF:

 $G_t = A - \tfrac{1}{2}\bar{\Sigma}_t H^\top H + \tfrac{1}{2}\Sigma_B\bar{\Sigma}_t^{-1}, \quad \sigma_t = 0, \quad \sigma'_t = 0$

Given a particular solution $(G_t, \sigma_t, \sigma'_t)$ of (7), one can construct a family of solutions $(G_t + \Omega_t\bar{\Sigma}_t^{-1}, \sigma_t, \sigma'_t)$, where $\Omega_t$ is any skew-symmetric matrix. For the linear Gaussian problem, the non-uniqueness issue has been discussed in the literature: At least two forms of the EnKF, the perturbed observation form and the square-root form, are well known. A homotopy of exact deterministic and stochastic filters, and an explanation for the non-uniqueness in terms of the Gauge transformation, have also appeared in the literature.
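The three choices above can be checked numerically against (7). The sketch below (Python with NumPy; the matrices are randomly chosen illustrative examples) verifies that each triple $(G_t, \sigma_t, \sigma'_t)$ makes the left side of (7) equal $\text{Ricc}(\bar{\Sigma}_t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
H = rng.standard_normal((2, d))
sigma_B = rng.standard_normal((d, d))
Sigma_B = sigma_B @ sigma_B.T
M = rng.standard_normal((d, d))
S = M @ M.T + np.eye(d)          # a non-singular covariance Sigma_bar

Ricc = A @ S + S @ A.T + Sigma_B - S @ H.T @ H @ S

def residual(G, sig, sig_p):
    """Left side of (7) minus Ricc(Sigma_bar); should be zero."""
    return G @ S + S @ G.T + sig @ sig.T + sig_p @ sig_p.T - Ricc

# 1. EnKF with perturbed observation
r1 = residual(A - S @ H.T @ H, S @ H.T, sigma_B)
# 2. Stochastic linear FPF / square-root EnKF
r2 = residual(A - 0.5 * S @ H.T @ H, sigma_B, np.zeros((d, d)))
# 3. Deterministic linear FPF
r3 = residual(A - 0.5 * S @ H.T @ H + 0.5 * Sigma_B @ np.linalg.inv(S),
              np.zeros((d, d)), np.zeros((d, d)))
```

All three residuals vanish up to floating-point error, illustrating that (7) alone does not pin down the filter.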

### II-C Finite-N implementation

In a numerical implementation, one simulates $N$ stochastic processes (particles) $\{X_t^i\}_{i=1}^N$, where $X_t^i \in \mathbb{R}^d$ is the state of the $i$-th particle at time $t$. The evolution of $X_t^i$ is obtained upon empirically approximating the mean-field terms. The finite-$N$ filter for the linear FPF (5) is an interacting particle system:

 $dX_t^i = AX_t^i\,dt + \sigma_B\,dB_t^i + K_t^{(N)}\Big(dZ_t - \frac{HX_t^i + Hm_t^{(N)}}{2}\,dt\Big),$  (8)

where $K_t^{(N)} := \Sigma_t^{(N)}H^\top$; $\{B_t^i\}_{i=1}^N$ are independent copies of $B_t$; $X_0^i \sim N(m_0, \Sigma_0)$ for $i = 1, \ldots, N$; and the empirical approximations of the two mean-field terms are as follows:

 $m_t^{(N)} := \frac{1}{N}\sum_{i=1}^N X_t^i,$  (9)
 $\Sigma_t^{(N)} := \frac{1}{N-1}\sum_{i=1}^N (X_t^i - m_t^{(N)})(X_t^i - m_t^{(N)})^\top.$

We use the notation

 $\pi_t^{(N)} := \frac{1}{N}\sum_{i=1}^N \delta_{X_t^i}$  (10)

to denote the empirical distribution of the particles. Here, $\delta_x$ denotes the Dirac delta distribution at $x$.
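The empirical approximations (9)-(10) are straightforward to compute from an $N \times d$ array of particles. A minimal sketch (Python with NumPy; the function name is our own):

```python
import numpy as np

def empirical_stats(X):
    """Empirical mean m^(N) and covariance Sigma^(N) as in (9).

    X : array of shape (N, d), one particle per row.
    """
    N = X.shape[0]
    m = X.mean(axis=0)
    dev = X - m
    Sigma = dev.T @ dev / (N - 1)      # unbiased normalization, as in (9)
    return m, Sigma

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 3))
m, Sigma = empirical_stats(X)
```

`Sigma` agrees with NumPy's `np.cov(X, rowvar=False)`, which uses the same $1/(N-1)$ normalization.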

## III Optimal Transport FPF

The problem is to uniquely identify an exact stochastic process $\bar{X}_t$. The solution is based on an optimality concept motivated by the optimal transportation theory [7, 8]. The background on this theory is briefly reviewed next.

### III-A Background on optimal transportation

Let $\mu_X$ and $\mu_Y$ be two probability measures on $\mathbb{R}^d$ with finite second moments. The (Monge) optimal transportation problem with quadratic cost is to minimize

 $\min_T\; E\big[|T(X) - X|^2\big]$  (11)

over all measurable maps $T : \mathbb{R}^d \to \mathbb{R}^d$ such that

 $X \sim \mu_X, \quad T(X) \sim \mu_Y.$  (12)

If it exists, the minimizer is called the optimal transport map between $\mu_X$ and $\mu_Y$. The optimal cost is referred to as the $2$-Wasserstein distance between $\mu_X$ and $\mu_Y$.

###### Theorem 2

(Optimal map between Gaussians [48, Prop. 7]) Consider the optimization problem (11)-(12). Suppose $\mu_X$ and $\mu_Y$ are Gaussian distributions, $N(m_X, \Sigma_X)$ and $N(m_Y, \Sigma_Y)$, with $\Sigma_X \succ 0$. Then the optimal transport map between $\mu_X$ and $\mu_Y$ is given by

 $T(x) = m_Y + F(x - m_X),$  (13)

where $F := \Sigma_X^{-1/2}\big(\Sigma_X^{1/2}\Sigma_Y\Sigma_X^{1/2}\big)^{1/2}\Sigma_X^{-1/2}$.
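The map in Theorem 2 is easy to compute numerically. The sketch below (Python, using `scipy.linalg.sqrtm`; the covariance matrices are illustrative) constructs the standard Gaussian transport matrix $F = \Sigma_X^{-1/2}(\Sigma_X^{1/2}\Sigma_Y\Sigma_X^{1/2})^{1/2}\Sigma_X^{-1/2}$ and checks the push-forward property $F\Sigma_X F^\top = \Sigma_Y$:

```python
import numpy as np
from scipy.linalg import sqrtm

def ot_map_gaussian(Sigma_X, Sigma_Y):
    """Matrix F of the optimal transport map (13) between N(m_X, Sigma_X)
    and N(m_Y, Sigma_Y), assuming Sigma_X is positive definite."""
    rX = np.real(sqrtm(Sigma_X))            # Sigma_X^{1/2}
    rX_inv = np.linalg.inv(rX)
    mid = np.real(sqrtm(rX @ Sigma_Y @ rX))
    return rX_inv @ mid @ rX_inv

rng = np.random.default_rng(1)
M1, M2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
Sigma_X = M1 @ M1.T + np.eye(3)
Sigma_Y = M2 @ M2.T + np.eye(3)
F = ot_map_gaussian(Sigma_X, Sigma_Y)
# F is symmetric, and x -> F x pushes N(0, Sigma_X) to N(0, Sigma_Y):
# F @ Sigma_X @ F.T equals Sigma_Y up to numerical error.
```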

### Iii-B The time-stepping optimization procedure

To uniquely identify an exact stochastic process $\bar{X}_t$, the following time-stepping optimization procedure is proposed:

1. Divide the time interval $[0, T]$ into $n$ equal time steps with the time instants $t_k = k\Delta t$ for $k = 0, 1, \ldots, n$, where $\Delta t = T/n$.

2. Initialize a discrete-time random process $\{\bar{X}_{t_k}\}$ according to the initial distribution (prior) of $X_0$,

 $\bar{X}_{t_0} \sim \pi_0.$
3. For each time step $k$, evolve the process according to

 $\bar{X}_{t_{k+1}} = T_k(\bar{X}_{t_k}), \quad \text{for } k = 0, \ldots, n-1,$  (14)

where the map $T_k$ is the optimal transport map between the probability measures $\bar{\pi}_{t_k}$ and $\pi_{t_{k+1}}$.

4. Take the limit as $\Delta t \to 0$ to obtain the continuous-time process and the sde:

 $d\bar{X}_t = u_t(\bar{X}_t)\,dt + K_t(\bar{X}_t)\,dZ_t.$  (15)

The procedure leads to control laws $u_t$ and $K_t$ that depend upon $\pi_t$. Since $\pi_t$ is unknown, one simply replaces it with $\bar{\pi}_t$; the two are meant to be identical by construction. The resulting sde (15) is referred to as the optimal transport FPF or simply the optimal FPF. A definition is needed to state the main result.

###### Definition 1

For a given positive-definite matrix $Q$, define $\sqrt{\text{Ricc}}(Q)$ as the (unique such) symmetric solution to the matrix equation:

 $\sqrt{\text{Ricc}}(Q)\,Q + Q\,\sqrt{\text{Ricc}}(Q) = \text{Ricc}(Q).$  (16)
###### Remark 1

The symmetric solution to the matrix equation (16) is explicitly given by:

 $\sqrt{\text{Ricc}}(Q) = \int_0^\infty e^{-sQ}\,\text{Ricc}(Q)\,e^{-sQ}\,ds.$

The solution can also be expressed as:

 $\sqrt{\text{Ricc}}(Q) = A - \tfrac{1}{2}QH^\top H + \tfrac{1}{2}\Sigma_B Q^{-1} + \Omega Q^{-1},$  (17)

where $\Omega$ is the (unique such) skew-symmetric matrix that solves the matrix equation

 $\Omega Q^{-1} + Q^{-1}\Omega = (A^\top - A) + \tfrac{1}{2}\big(QH^\top H - H^\top HQ\big) + \tfrac{1}{2}\big(Q^{-1}\Sigma_B - \Sigma_B Q^{-1}\big).$  (18)
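Since (16) is a Lyapunov equation in the unknown $\sqrt{\text{Ricc}}(Q)$ (with $Q \succ 0$), it can be solved numerically without evaluating the integral in Remark 1. A sketch (Python, using `scipy.linalg.solve_continuous_lyapunov`; the matrices are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def sqrt_ricc(Q, A, H, Sigma_B):
    """Symmetric solution of (16): X Q + Q X = Ricc(Q), for Q > 0."""
    Ricc = A @ Q + Q @ A.T + Sigma_B - Q @ H.T @ H @ Q
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q
    return solve_continuous_lyapunov(Q, Ricc)

rng = np.random.default_rng(2)
d = 3
A = rng.standard_normal((d, d))
H = rng.standard_normal((1, d))
sB = rng.standard_normal((d, d))
Sigma_B = sB @ sB.T
M = rng.standard_normal((d, d))
Q = M @ M.T + np.eye(d)        # positive-definite Q
X = sqrt_ricc(Q, A, H, Sigma_B)
```

Because $Q \succ 0$ and $\text{Ricc}(Q)$ is symmetric, the Lyapunov solution is the symmetric $\sqrt{\text{Ricc}}(Q)$ of Definition 1.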

The main result is as follows. Its proof appears in Appendix B.

###### Proposition 1

Consider the linear Gaussian filtering problem (2a)-(2b). Assume the initial covariance $\Sigma_0 \succ 0$. Then the optimal transport FPF is given by

 $d\bar{X}_t = A\bar{m}_t\,dt + \bar{K}_t(dZ_t - H\bar{m}_t\,dt) + G_t(\bar{X}_t - \bar{m}_t)\,dt,$  (19)

where $\bar{m}_t$ and $\bar{\Sigma}_t$ are the conditional mean and covariance of $\bar{X}_t$, $\bar{K}_t := \bar{\Sigma}_t H^\top$, $G_t := \sqrt{\text{Ricc}}(\bar{\Sigma}_t)$, and $\bar{X}_0 \sim N(m_0, \Sigma_0)$.

The filter is exact: That is, the conditional distribution of $\bar{X}_t$ is Gaussian with $\bar{m}_t = m_t$ and $\bar{\Sigma}_t = \Sigma_t$.

Using the form of the solution (17) for $\sqrt{\text{Ricc}}(\bar{\Sigma}_t)$, the optimal transport sde (19) is expressed as

 $d\bar{X}_t = A\bar{X}_t\,dt + \tfrac{1}{2}\Sigma_B\bar{\Sigma}_t^{-1}(\bar{X}_t - \bar{m}_t)\,dt + \bar{K}_t\Big(dZ_t - \frac{H\bar{X}_t + H\bar{m}_t}{2}\,dt\Big) + \Omega_t\bar{\Sigma}_t^{-1}(\bar{X}_t - \bar{m}_t)\,dt.$  (20)

Compared to the original (linear Gaussian) FPF (5), the optimal transport FPF (20) has two differences:

1. The stochastic term $\sigma_B\,d\bar{B}_t$ is replaced with the deterministic term $\tfrac{1}{2}\Sigma_B\bar{\Sigma}_t^{-1}(\bar{X}_t - \bar{m}_t)\,dt$. Given a Gaussian prior, the two terms yield the same posterior. However, in a finite-$N$ implementation, the difference becomes significant. The stochastic term serves to introduce an additional error of order $O(1/\sqrt{N})$.

2. The sde (20) has an extra term involving the skew-symmetric matrix $\Omega_t$. The extra term does not affect the posterior distribution. This term is viewed as a correction term that serves to make the dynamics symmetric and hence optimal in the optimal transportation sense. It is noted that for the scalar ($d = 1$) case, the skew-symmetric term is zero. Therefore, in the scalar case, the update formula in the original FPF (5) is optimal. In the vector case, it is optimal iff $\Omega_t = 0$.

### III-C Finite-N implementation in the non-singular covariance case

The finite-$N$ implementation of the optimal transport sde (19) is given by the following system of equations:

 $dX_t^i = Am_t^{(N)}\,dt + K_t^{(N)}(dZ_t - Hm_t^{(N)}\,dt) + \sqrt{\text{Ricc}}(\Sigma_t^{(N)})(X_t^i - m_t^{(N)})\,dt,$  (21)

for $i = 1, \ldots, N$, where $K_t^{(N)} := \Sigma_t^{(N)}H^\top$; $X_0^i \sim N(m_0, \Sigma_0)$; and the empirical approximations of the mean and variance are defined in (9).
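A finite-$N$ simulation of (21) only requires the empirical statistics (9) and a Lyapunov solve for $\sqrt{\text{Ricc}}(\Sigma_t^{(N)})$. The sketch below (Python, using `scipy.linalg.solve_continuous_lyapunov`; the scalar parameters and function name are illustrative) performs Euler steps; a notable property of the deterministic update is that the empirical covariance tracks the Riccati equation (4b) up to $O(\Delta t)$ discretization error:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def optimal_fpf_step(X, dz, A, H, Sigma_B, dt):
    """One Euler step of the deterministic optimal FPF (21).

    X : (N, d) particle array; dz : observation increment dZ_t.
    """
    N = X.shape[0]
    m = X.mean(axis=0)
    dev = X - m
    S = dev.T @ dev / (N - 1)                       # Sigma_t^(N), must be > 0
    Ricc = A @ S + S @ A.T + Sigma_B - S @ H.T @ H @ S
    G = solve_continuous_lyapunov(S, Ricc)          # sqrt-Ricc gain
    K = S @ H.T                                     # K_t^(N)
    drift = (A @ m) * dt + K @ (dz - (H @ m) * dt)  # common (mean) part
    return X + drift + dev @ G.T * dt               # per-particle correction

# Scalar usage: the empirical covariance of the deterministic flow tracks
# the Euler-discretized Riccati equation (4b).
A = np.array([[0.2]]); H = np.array([[1.0]]); Sigma_B = np.array([[0.5]])
rng = np.random.default_rng(5)
X = rng.standard_normal((5, 1))
S_ref = np.cov(X, rowvar=False).reshape(1, 1)
dt = 1e-3
for _ in range(1000):
    X = optimal_fpf_step(X, np.zeros(1), A, H, Sigma_B, dt)
    S_ref = S_ref + (A @ S_ref + S_ref @ A.T + Sigma_B
                     - S_ref @ H.T @ H @ S_ref) * dt
S_emp = np.cov(X, rowvar=False).reshape(1, 1)
```

Even with as few as $N = 5$ particles, `S_emp` stays close to the reference Riccati solution, consistent with Prop. 2.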

The matrix $\sqrt{\text{Ricc}}(\Sigma_t^{(N)})$ is not well-defined if $\Sigma_t^{(N)}$ is a singular matrix. This is a problem because in a finite-$N$ implementation, it is possible that $\Sigma_t^{(N)}$ is singular even though $\Sigma_t \succ 0$. In particular, when $N \le d$, the empirical covariance matrix is of rank at most $N - 1$ and hence singular. Note that this problem only arises for the optimal and deterministic forms of the FPF. In particular, the stochastic FPF (8) does not suffer from this issue. It can be simulated for any choice of $N$. In Sec. III-D, we extend the optimal transportation formulation to handle also the singular forms of the covariance matrix.

### III-D The singular covariance case

The derivation of the optimal FPF (19) crucially relies on the assumption that $\Sigma_0 \succ 0$, which in turn implies that, in the time-stepping procedure, $\bar{\Sigma}_{t_k} \succ 0$ for $k = 0, 1, \ldots, n$. In the proof of Prop. 1, the assumption is used to derive the optimal transport map $T_k$. In general, when the covariance of the Gaussian random variables $X$ or $Y$ is singular, the optimal transport map may not exist.

In the singular case, a relaxed form of the optimal transportation problem, first introduced by Kantorovich, is used to search for optimal (stochastic) couplings instead of (deterministic) transport maps. The following example helps illuminate the issue:

###### Example 1

Consider Gaussian random variables $X$ and $Y$ with distributions $N(m_X, \Sigma_X)$ and $N(m_Y, \Sigma_Y)$, respectively. Suppose

 $m_X = m_Y = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \Sigma_X = \begin{bmatrix} 1 & 0 \\ 0 & \epsilon \end{bmatrix}, \quad \Sigma_Y = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},$

where $\epsilon \ge 0$ is small. If $\epsilon > 0$, the optimal transportation map exists, and is given by

 $Y = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{\sqrt{\epsilon}} \end{bmatrix} X.$

If $\epsilon = 0$ then there is no transport map that satisfies the constraints of the optimal transportation problem.

The Kantorovich relaxation of the optimal transportation problem (11) is the following optimization problem:

 $\min_\mu\; E_{(X,Y)\sim\mu}\big[|X - Y|^2\big]$  (22)

where $\mu$ is a joint distribution on $\mathbb{R}^d \times \mathbb{R}^d$ with marginals fixed to $\mu_X$ and $\mu_Y$.

Although a deterministic map does not exist for the $\epsilon = 0$ problem, a (stochastic) coupling that solves the Kantorovich problem (22) exists and is given by

 $Y = X + B,$

where $B \sim N\Big(0, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\Big)$ is independent of $X$.
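The coupling can be verified directly: with $\Sigma_X = \text{diag}(1, 0)$ and $B$ independent of $X$ with covariance $\text{diag}(0, 1)$, the sum $Y = X + B$ has covariance $\Sigma_X + \text{diag}(0, 1) = I$, matching $\Sigma_Y$. A quick numerical check (Python with NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
Sigma_X = np.diag([1.0, 0.0])   # singular covariance (epsilon = 0)
Sigma_B = np.diag([0.0, 1.0])   # noise only on the degenerate direction

X = rng.standard_normal(N)[:, None] * np.array([1.0, 0.0])  # X ~ N(0, Sigma_X)
B = rng.standard_normal(N)[:, None] * np.array([0.0, 1.0])  # B ~ N(0, Sigma_B)
Y = X + B                        # the stochastic coupling

# Exactly: Cov(Y) = Sigma_X + Sigma_B = I, since X and B are independent.
emp_cov = np.cov(Y, rowvar=False)
```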

In Appendix C, the Kantorovich relaxation is used to motivate an optimization problem whose solution yields the following extension of the optimal FPF:

 $d\bar{X}_t = A\bar{m}_t\,dt + \bar{K}_t(dZ_t - H\bar{m}_t\,dt) + G_t(\bar{X}_t - \bar{m}_t)\,dt + \sigma_t\,d\bar{B}_t,$  (23)

with $\sigma_t := P_K\sigma_B$, where $P_K$ is the projection matrix onto the kernel of the matrix $\bar{\Sigma}_t$, and $G_t$ is any symmetric solution of the matrix equation

 $G_t\bar{\Sigma}_t + \bar{\Sigma}_t G_t = \text{Ricc}(\bar{\Sigma}_t) - \sigma_t\sigma_t^\top.$  (24)
###### Remark 2

When $\bar{\Sigma}_t$ is singular, the solution $G_t$ to the matrix equation (24) is not unique. It is shown in Appendix C that the solution is of the following general form:

 $G_t = A - \tfrac{1}{2}\bar{\Sigma}_t H^\top H + \tfrac{1}{2}P_R\Sigma_B\bar{\Sigma}_t^+ + P_K\Sigma_B\bar{\Sigma}_t^+ + \bar{\Sigma}_t^+\Sigma_B P_K + P_R\Omega^{(1)}\bar{\Sigma}_t^+ + P_K(\Omega^{(0)} - A)P_K,$  (25)

where $\bar{\Sigma}_t^+$ is the pseudo-inverse of $\bar{\Sigma}_t$ (the pseudo-inverse of a matrix $A$ is the unique matrix $A^+$ that satisfies $AA^+A = A$, $A^+AA^+ = A^+$, $AA^+$ is symmetric, and $A^+A$ is symmetric), $P_R$ is the projection matrix onto the range of $\bar{\Sigma}_t$, $\Omega^{(1)}$ is a skew-symmetric matrix that solves a certain matrix equation (46), and $\Omega^{(0)}$ is an arbitrary symmetric matrix.

Using the formula (25), the optimal FPF (23) is expressed in an expanded form, (26), which allows one to clearly see the relationship between the deterministic and stochastic forms of the optimal FPF. In particular, when the covariance matrix is non-singular, $P_K = 0$, $P_R = I$, and $\sigma_t = 0$. This results in the deterministic form of the optimal transport FPF (20). When the covariance matrix is singular, the effect of the linear term $\tfrac{1}{2}\Sigma_B\bar{\Sigma}_t^{-1}(\bar{X}_t - \bar{m}_t)$ in (20) is instead simulated with the two terms in (26). This is conceptually similar to Example 1, where the deterministic optimal transport map is replaced with a stochastic coupling. The [additional terms] in (26) do not affect the distribution.

### III-E Finite-N implementation in the singular covariance case

The finite-$N$ implementation of the optimal transport sde (23) is given by the following system of equations:

 $dX_t^i = Am_t^{(N)}\,dt + K_t^{(N)}(dZ_t - Hm_t^{(N)}\,dt) + G_t^{(N)}(X_t^i - m_t^{(N)})\,dt + \sigma_t^{(N)}\,dB_t^i, \quad X_0^i \sim N(m_0, \Sigma_0),$  (27)

where $K_t^{(N)} := \Sigma_t^{(N)}H^\top$; $\{B_t^i\}_{i=1}^N$ are independent copies of $B_t$; $\sigma_t^{(N)} := P_K^{(N)}\sigma_B$, where $P_K^{(N)}$ is the projection matrix onto the kernel of $\Sigma_t^{(N)}$; and $G_t^{(N)}$ is a symmetric matrix solution to the matrix equation

 $G_t^{(N)}\Sigma_t^{(N)} + \Sigma_t^{(N)}G_t^{(N)} = \text{Ricc}(\Sigma_t^{(N)}) - \sigma_t^{(N)}(\sigma_t^{(N)})^\top,$

and $m_t^{(N)}$ and $\Sigma_t^{(N)}$ are the empirical mean and covariance defined in (9). Note that the stochastic term is zero when $\Sigma_t^{(N)}$ is non-singular, which is true, e.g., when $N > d$.

## IV Error analysis

This section is concerned with error bounds in the large but finite $N$ regime. Given that $N$ is large, we restrict ourselves to the non-singular case. For the purposes of the error analysis, the following assumptions are made:

Assumption (I): The pair $(A, H)$ is detectable and the pair $(A, \sigma_B)$ is stabilizable.

Assumption (II): Assume $\Sigma_0 \succ 0$ and that the initial empirical covariance matrix $\Sigma_0^{(N)} \succ 0$ almost surely.

The main result for the finite-$N$ deterministic FPF (21) is as follows, with the proof given in Appendix D.

###### Proposition 2

Consider the Kalman filter (4a)-(4b) initialized with the prior $N(m_0, \Sigma_0)$ and the finite-$N$ deterministic form of the optimal FPF (21) initialized with $X_0^i \sim N(m_0, \Sigma_0)$ for $i = 1, \ldots, N$. Under Assumptions (I) and (II), the following characterizes the convergence and error properties of the empirical mean and covariance obtained from the finite-$N$ filter relative to the mean and covariance obtained from the Kalman filter:

• Convergence: For any finite $N$, as $t \to \infty$:

 $\lim_{t\to\infty} e^{\lambda t}\,\|m_t^{(N)} - m_t\|_2 = 0 \quad \text{a.s.}$
 $\lim_{t\to\infty} e^{2\lambda t}\,\|\Sigma_t^{(N)} - \Sigma_t\|_F = 0 \quad \text{a.s.}$

for all $\lambda < \lambda_0$, where $\lambda_0$ is a fixed positive constant (see (48) in Appendix D).

• Mean-squared error: For any finite $N$ and $t \ge 0$:

 $E\big[\|m_t^{(N)} - m_t\|_2^2\big] \le (\text{const.})\,e^{-2\lambda t}\,\frac{\text{Tr}(\Sigma_0) + \text{Tr}(\Sigma_0)^2}{N},$  (28a)
 $E\big[\|\Sigma_t^{(N)} - \Sigma_t\|_F^2\big] \le (\text{const.})\,e^{-4\lambda t}\,\frac{\text{Tr}(\Sigma_0)^2}{N},$  (28b)

for all $t \ge 0$. The constant depends on $\lambda$ and the spectral norms $\|A\|_2$, $\|H\|_2$, $\|\Sigma_B\|_2$, and $\|\Sigma_\infty\|_2$, where $\Sigma_\infty$ is the solution to the algebraic Riccati equation (see Lemma 2).

###### Remark 3

Two remarks are in order:

1. Asymptotically (as $t \to \infty$) the empirical mean and variance of the finite-$N$ filter become exact. This is because of the stability of the Kalman filter whereby the filter forgets the initial condition. In fact, for any (not necessarily i.i.d. Gaussian) initial conditions $\{X_0^i\}_{i=1}^N$ that satisfy the assumption $\Sigma_0^{(N)} \succ 0$, the result holds.

2. (Scaling with dimension) If the parameters of the linear Gaussian filtering problem (2a)-(2b) scale with the dimension $d$ in a way that the spectral norms $\|A\|_2$, $\|H\|_2$, $\|\Sigma_B\|_2$, and $\|\Sigma_\infty\|_2$ do not change, then the constant in the error bounds (28a)-(28b) does not change. The only term that scales with the dimension is $\text{Tr}(\Sigma_0)$. For example, with $\Sigma_0 = I$, $\text{Tr}(\Sigma_0) = d$. Therefore, the bound on the error typically scales polynomially in the problem dimension $d$.

### IV-A Propagation of chaos

In this section, we study the convergence of the empirical distribution of the particles for the finite-$N$ system (21) to the exact posterior distribution $\pi_t$. Derivation of the error estimates involves construction of $N$ independent copies of the exact process (19). Consistent with the convention to denote mean-field variables with a bar, the stochastic processes are denoted as $\{\bar{X}_t^i\}_{i=1}^N$, where $\bar{X}_t^i$ denotes the state of the $i$-th particle at time $t$. The $i$-th particle evolves according to the mean-field equation (19) as

 $d\bar{X}_t^i = A\bar{m}_t\,dt + \bar{K}_t(dZ_t - H\bar{m}_t\,dt) + \sqrt{\text{Ricc}}(\bar{\Sigma}_t)(\bar{X}_t^i - \bar{m}_t)\,dt,$  (29)

where $\bar{K}_t := \bar{\Sigma}_t H^\top$ is the Kalman gain and the initial condition $\bar{X}_0^i = X_0^i$, the right-hand side being the initial condition of the $i$-th particle in the finite-$N$ FPF (21). The mean-field process $\bar{X}_t^i$ is thus coupled to $X_t^i$ through the initial condition.

In order to carry out the error analysis, the following event is defined for an arbitrary choice of a fixed positive-definite matrix $\Lambda_0$:

 $S_{\Lambda_0} := \{\Sigma_0^{(N)} \succ \Lambda_0\}.$  (30)

The following proposition characterizes the error between $X_t^i$ and $\bar{X}_t^i$ when the event $S_{\Lambda_0}$ is true (the estimate is key to the propagation of chaos analysis). The proof appears in Appendix E.

###### Proposition 3

Consider the stochastic processes $X_t^i$ and $\bar{X}_t^i$ whose evolution is defined according to the optimal transport FPF (21) and its mean-field model (29), respectively. The initial condition $\bar{X}_0^i = X_0^i$ for $i = 1, \ldots, N$. Then, under Assumptions (I) and (II):

 $E\big[\|X_t^i - \bar{X}_t^i\|_2^2\,1_{S_{\Lambda_0}}\big]^{1/2} \le \frac{(\text{const.})}{\sqrt{N}}.$  (31)

The estimate (31) is used to prove the following important result that the empirical distribution of the particles in the linear FPF converges weakly to the true posterior distribution. Its proof appears in Appendix E.

###### Corollary 1

Consider the linear filtering problem (2a)-(2b) and the finite-$N$ deterministic FPF (21). The initial condition $X_0^i \sim N(m_0, \Sigma_0)$ for $i = 1, \ldots, N$. Under Assumptions (I) and (II), for any bounded and Lipschitz function $f : \mathbb{R}^d \to \mathbb{R}$,

 $E\bigg[\Big|\frac{1}{N}\sum_{i=1}^N f(X_t^i) - E[f(X_t) \mid \mathcal{Z}_t]\Big|^2\bigg]^{1/2} \le \frac{(\text{const.})}{\sqrt{N}}.$

## V Comparison to Importance Sampling

For the purposes of comparing the optimal FPF with the importance sampling-based particle filter, we restrict to a static filtering example with a fully observed observation model:

 $dX_t = 0, \quad X_0 \sim N(0, \sigma_0^2 I_d),$  (32)
 $dZ_t = X_t\,dt + \sigma_w\,dW_t,$

for $t \in [0, 1]$, where $X_t \in \mathbb{R}^d$. The posterior distribution at time $t = 1$, denoted as $\pi_{\text{EXACT}}$, is a Gaussian $N(m_1, \Sigma_1)$ with $m_1 = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_w^2}Z_1$ and $\Sigma_1 = \frac{\sigma_0^2\sigma_w^2}{\sigma_0^2 + \sigma_w^2}I_d$.

Let $\{X_0^i\}_{i=1}^N$ be i.i.d. samples of $X_0$. The importance sampling-based particle filter yields an empirical approximation of the posterior distribution as follows:

 $\pi_{\text{PF}}^{(N)} = \sum_{i=1}^N w^i\,\delta_{X_0^i}, \quad w^i = \frac{e^{-\frac{\|Z_1 - X_0^i\|_2^2}{2\sigma_w^2}}}{\sum_{j=1}^N e^{-\frac{\|Z_1 - X_0^j\|_2^2}{2\sigma_w^2}}}.$  (33)

In contrast, given the initial samples $\{X_0^i\}_{i=1}^N$, the FPF approximates the posterior by implementing a feedback control law as follows:

 $\pi_{\text{FPF}}^{(N)} = \frac{1}{N}\sum_{i=1}^N \delta_{X_1^i}, \quad dX_t^i = \frac{1}{\sigma_w^2}\Sigma_t^{(N)}\Big(dZ_t - \frac{X_t^i + m_t^{(N)}}{2}\,dt\Big),$  (34)

where the empirical mean and covariance are approximated as in (9).

The m.s.e. in estimating the conditional expectation of a given function $f$ is defined as follows:

 $\text{m.s.e.}_*(f) = E\big[\|\pi_*^{(N)}(f) - \pi_{\text{EXACT}}(f)\|_2^2\big],$

where the subscript $*$ is either PF or FPF.
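A sketch of the importance sampling approximation (33) in Python (with illustrative scalar parameters of our own); the weighted estimate of the posterior mean is compared against the analytic value $\frac{\sigma_0^2}{\sigma_0^2 + \sigma_w^2}Z_1$:

```python
import numpy as np

rng = np.random.default_rng(6)
d, N = 1, 50_000
sigma0, sigmaw = 1.0, 1.0

X_true = sigma0 * rng.standard_normal(d)           # hidden state X_0
Z1 = X_true + sigmaw * rng.standard_normal(d)      # Z_1 = X_0 + sigma_w W_1

X0 = sigma0 * rng.standard_normal((N, d))          # i.i.d. prior samples
logw = -np.sum((Z1 - X0) ** 2, axis=1) / (2 * sigmaw ** 2)
w = np.exp(logw - logw.max())                      # stabilized weights (33)
w /= w.sum()

pf_mean = w @ X0                                   # PF estimate of posterior mean
exact_mean = sigma0 ** 2 / (sigma0 ** 2 + sigmaw ** 2) * Z1
```

Subtracting `logw.max()` before exponentiating is a standard numerical safeguard; in high dimensions $d$ the weights concentrate on a few samples, which is the degeneracy phenomenon quantified in Prop. 4.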

For the estimation of the conditional mean, a numerically computed plot of the level-sets of the m.s.e., as a function of $N$ and $d$, is depicted in Figure 1-(a)-(b). The expectation is approximated by averaging over independent simulations. It is observed that, in order to have the same error, the importance sampling-based approach requires the number of samples