
# Universal Nonlinear Filtering Using Feynman Path Integrals II: The Continuous-Continuous Model with Additive Noise

Bhashyam Balaji
Defence Research and Development Canada, Ottawa,
3701 Carling Avenue,
Email: Bhashyam.Balaji@drdc-rddc.gc.ca
###### Abstract:

In this paper, the Feynman path integral formulation of the continuous-continuous filtering problem, a fundamental problem of applied science, is investigated for the case when the noise in the signal and measurement model is additive. It is shown that this formulation leads to an independent and self-contained analysis and solution of the problem. A consequence of this analysis is a Feynman path integral formula for the conditional probability density that manifests the underlying physics of the problem. A corollary of the path integral formula is the Yau algorithm, which has been shown to be superior to all other known algorithms. The Feynman path integral formulation is shown to lead to practical and implementable algorithms. In particular, the solution of the Yau PDE is reduced to one of function computation and integration.

Keywords: Fokker-Planck Equation, Kolmogorov Equation, Optimal Nonlinear Filtering, Duncan-Mortensen-Zakai equation, Yau filters

## 1 Introduction

### 1.1 Motivation

The fundamental dynamical laws of physics, both classical and quantum mechanical, are described in terms of variables continuous in time. The continuous nature of the dynamical variables has been verified at all length scales probed so far, even though the relevant dynamical variables, and the fundamental laws of physics, are very different in the microscopic and macroscopic realms. In practical situations, one often deals with macroscopic objects whose state variables are phenomenologically well-described by classical deterministic laws modified by external disturbances that can be modelled as random noise. It is therefore natural to consider the problem of the evolution of a state of a signal of interest described by a continuous-time stochastic dynamical model. Roughly speaking, the state of the system is described by a noisy version of a deterministic dynamical system termed the state model, i.e., the dynamics is governed by a system of first-order differential equations in the state variable $x(t)$ with an additional contribution due to noise that is random. The noise in the state model is referred to as the signal noise. If the noise is Gaussian (or more generally multiplicative Gaussian), the state process is a Markov process. Since the process is stochastic, the state process is completely characterized by a probability density function. The Fokker-Planck-Kolmogorov forward equation (FPKfe) describes the evolution of this probability density function (or equivalently, the transition probability density function) and is the complete solution of the state evolution problem. For an excellent discussion of stochastic processes and filtering theory, see [1].

However, in many applications the signal, or state variables, cannot be directly observed. Instead, what is measured is a nonlinearly related stochastic process $y(t)$ called the measurement process. The measurement process can often be modelled as yet another continuous stochastic dynamical system called the measurement model. Loosely speaking, the observations, or measurements, are discrete-time samples drawn from a different system of noisy first-order differential equations. The noise in the measurement dynamical system is referred to as measurement noise. For simplicity, the signal and measurement noise are often assumed to be independent.

The problem of optimal nonlinear filtering is to estimate the state of a stochastic dynamical system, optimally under a certain criterion, given the observations that are samples of a related stochastic measurement process. The conditional probability density function of the state variables given the observations is the complete solution of the filtering problem, since it contains all the probabilistic information about the state process that is in the measurements and in the initial condition. This is the Bayesian approach, i.e., the a priori initial data about the signal process contained in the initial probability distribution of the state is incorporated into the solution. Given the conditional probability density, optimality may be defined under various criteria. Usually, the conditional mean, which is the least mean-squares estimate, is studied due to its richness in results and mathematical elegance. The solution of the optimal nonlinear filtering problem is termed universal if the initial distribution can be arbitrary.

### 1.2 Fundamental Stochastic Filtering Results

When the state and measurement processes are linear, the linear filtering problem was considered by Kalman and Bucy[2], [3]. The Kalman filter has been successfully applied to a large number of problems.

Nevertheless, the Kalman filter suffers from some major limitations. The Kalman filter is not optimal even for the linear filtering case if the initial distribution is not Gaussian (it may still be optimal for a linear system under certain criteria, such as minimum mean square error, but not a general criterion). In other words, the Kalman filter is not a universal optimal filter, even when the filtering problem is linear. In the general nonlinear case, the filter state is infinite dimensional, namely the whole conditional probability distribution function, and the Kalman filter cannot be optimal, although it may still work quite well if the nonlinearity is mild enough.

The continuous-continuous nonlinear filtering problem (i.e., continuous-time state and measurement stochastic processes) was studied in [4, 5], [6] and [7]. This led to a stochastic differential equation, called the Kushner equation, for the conditional probability density in the continuous-continuous filtering problem. It was noted in [8], [9], and [10] that the stochastic differential equation satisfied by the unnormalized conditional probability density, called the Duncan-Mortensen-Zakai (DMZ) equation, is linear and hence considerably simpler than the Kushner equation. The robust DMZ equation, a partial differential equation (PDE) that follows from the DMZ equation via a gauge transformation, was derived in [11] and [12].

A disadvantage of the robust DMZ equation is that its coefficients depend on the measurements. Thus, one does not know the PDE to solve prior to the measurements. As a result, real-time solution is impossible. In [13], it was proved that the robust DMZ equation is equivalent to a partial differential equation that is independent of the measurements, which is referred to as the Yau Equation (YYe) in this paper (see the discussion at the end of Section 2.6 for why it is distinguished from the FPKfe). Specifically, the measurements only enter as the initial condition at each measurement step. Thus, no on-line solution of a PDE is needed; all PDE computations can be done off-line. As discussed later, other real-time solutions are possible; however, the approximation error in those algorithms is not as well controlled.

However, numerical solution of partial differential equations presents several challenges. A naïve discretization may not be convergent, i.e., the approximation error may not vanish as the grid size is reduced. Alternatively, when the discretization spacing is decreased, it may tend to a different equation, i.e., be inconsistent. Furthermore, the numerical method may be unstable. Finally, since the solution of the YYe is a probability density, it must be positive which may not be guaranteed by the discretization.
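The stability and positivity concerns above can be illustrated with a minimal sketch (my own example, not from the paper): the explicit forward-time centered-space (FTCS) scheme for the 1-D heat equation $u_t = \tfrac{1}{2}u_{xx}$ is stable and positivity-preserving only when $\tfrac{1}{2}\Delta t/\Delta x^2 \le \tfrac{1}{2}$; beyond that, oscillations grow and the computed "density" goes negative.

```python
import numpy as np

def ftcs_heat_step(u, dt, dx):
    """One explicit (FTCS) step of u_t = 0.5 * u_xx with zero boundary values."""
    r = 0.5 * dt / dx**2
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new

def evolve(n_steps, dt, dx=0.1, n_pts=51):
    u = np.zeros(n_pts)
    u[n_pts // 2] = 1.0 / dx          # crude delta-function initial condition
    for _ in range(n_steps):
        u = ftcs_heat_step(u, dt, dx)
    return u

# Stable ratio (r = 0.25): the solution stays nonnegative and spreads.
stable = evolve(200, dt=0.005)
# Unstable ratio (r = 0.75): oscillations grow and positivity is lost.
unstable = evolve(200, dt=0.015)
print(stable.min(), unstable.min())
```

Both runs discretize the same equation; only the time step differs, which is precisely the consistency/stability trade-off discussed above.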

A different approach to solving the PDE was taken in [14] and [15]. An explicit expression for the fundamental solution of the YYe as an ordinary integral was derived. It was shown that the formal solution to the YYe may be written down as an ordinary, but somewhat complicated, multi-dimensional integral, with the integrand an infinite series. In addition, an estimate of the time needed for the solution to converge to the true solution was presented.

### 1.3 Outline of the Paper

In this paper, the Feynman path integral (FPI) formulation is employed to tackle the continuous-continuous nonlinear filtering problem. Specifically, phrasing the stochastic filtering problem in a language common in physics, the solution of the stochastic filtering problem is presented. In particular, no other result in filtering theory (such as the DMZ equation, robust DMZ equation, etc.) is used. The path integral formulation leads to a path integral formula for the transition probability density for the general additive noise case. A corollary of the FPI formalism is the path integral formula for the fundamental solution of the YYe and the Yau algorithm—the central result of nonlinear filtering theory. It is noted that this paper provides detailed derivations of results that were used in [16]. In view of the wide applicability of the results derived here, this paper provides a brief summary of the central results of filtering theory and a pedagogical discussion of Feynman path integral methods that may be found useful by a wider audience, i.e., people specializing in different areas of applied science and engineering.

The Feynman path integral was introduced by Feynman in quantum mechanics [17, 18]. It has had a long and impressive history of results in theoretical physics. Inspired by an observation of Dirac [19, 20], Feynman discovered the path integral representation of quantum physics [17, 18]. The importance of path integrals grew (in the mid 1960s) when Faddeev and Popov used path integrals to derive Feynman rules for the non-Abelian gauge theories. Its status was further enhanced when 't Hooft and Veltman used path integral methods to prove that the Yang-Mills theories were consistent (i.e., renormalizable). Since then, the path integral formulation of quantum field theory has led to extraordinary insights into quantum field theory, such as renormalization and the renormalization group, anomalies, skyrmions, monopoles, instantons, and supersymmetry (see, for instance, the freely available text [21]). Much of the advance in the past 40 years in particle theory would not have been possible without the Feynman path integral. The path integral has also led to numerous insights into modern mathematics, especially in the work of Edward Witten.

A path integral formulation is especially attractive from a computational point of view. An excellent example is lattice quantum chromodynamics (QCD) (see, for instance, [22]). Specifically, lattice QCD has enabled high-precision calculations of observables in the non-Abelian gauge theory of strong interactions—an infinite dimensional, highly nonlinear problem. The numerical results derived using the Feynman path integral based lattice QCD cannot be achieved by any other known technique.

The following point needs to be emphasized to readers familiar with the discussion of standard filtering theory—the Feynman path integral is different from the Feynman-Kac path integral. In the filtering theory literature, it is the Feynman-Kac formalism that is used. The Feynman-Kac formulation is a rigorous formulation and has led to several rigorous results in filtering theory. However, in spite of considerable effort, it has not been proven to be useful in the development of reliable practical algorithms with desirable numerical properties. It also obscures the physics of the problem.

In contrast, it is shown that the Feynman path integral leads to formulas that are eminently suitable for numerical implementation. It also provides a simple and clear physical picture. Finally, the theoretical insights provided by the Feynman path integral are highly valuable, as evidenced by numerous examples in modern theoretical physics (see, for instance, [21]), and shall be employed in subsequent papers.

The outline of this paper is as follows. In Section 2, aspects of continuous-continuous filtering are reviewed and some of the recent work on continuous-continuous filtering is also discussed. In the following section, some topics in path integrals are reviewed, along with the interpretation of the transition probability density as an ensemble average. For more details on path integral methods, see any modern text on quantum field theory, such as [21], and especially [23], which discusses the application of the Feynman path integral to the study of stochastic processes. In Sections 4 and 5, the path integral formulas for the transition probability density are derived for the time-independent and time-dependent cases. In Section 6, the partial differential equation satisfied by the path integral formula (derived for sampled measurements) in the previous sections is derived. It is shown that in the time-dependent case the partial differential equation is simply the YYe of continuous-continuous filtering. The conclusions are presented in Section 10.

## 2 Continuous-Continuous Filtering and the Yau Equation

In this section, the main results of (continuous-continuous) nonlinear filtering theory are summarized. The most important results are the YYe and the Yau algorithm. In subsequent sections, it will be seen that they follow directly (and simply) from the Feynman path integral method.

### 2.1 The Continuous-Continuous Model

The signal and observation model in continuous-continuous filtering is the following:

$$
\begin{cases}
dx(t) = f(x(t),t)\,dt + e(x(t),t)\,dv(t), & x(0) = x_0, \\
dy(t) = h(x(t),t)\,dt + n(t)\,dw(t), & y(0) = 0.
\end{cases} \tag{1}
$$

Here $x(t)$, $v(t)$, $y(t)$, and $w(t)$ are $\mathbb{R}^n$-, $\mathbb{R}^p$-, $\mathbb{R}^m$-, and $\mathbb{R}^q$-valued stochastic processes, respectively, with $f(x(t),t) \in \mathbb{R}^n$, $e(x(t),t) \in \mathbb{R}^{n\times p}$, $h(x(t),t) \in \mathbb{R}^m$, and $n(t) \in \mathbb{R}^{m\times q}$. These are defined in the Itô sense. The Brownian processes $v(t)$ and $w(t)$ are assumed to be independent (hence, the large body of work on the correlated noise case is not considered in this paper), with covariance matrices $Q(t)$ and $W(t)$, respectively. We denote by $g$ the $n\times n$ matrix $eQe^T$. Also, $f$ is referred to as the drift, $e$ as the diffusion vielbein, and $g$ as the diffusion matrix.

In this paper, the additive noise case is considered, i.e., the diffusion vielbein is independent of the state, $e(x(t),t) = e(t)$. Then, the Itô and Stratonovich forms are the same. In a future paper, the more general multiplicative and correlated noise case shall be tackled.

In this section, some of the relevant work on continuous-continuous filtering is summarized. Hence, it is assumed that $f$ and $h$ are vector-valued smooth functions, $e$ is an orthogonal matrix-valued smooth function, $n(t)$ is the $m\times m$ identity matrix, and $Q$ and $W$ are identity matrices. No explicit time dependence is assumed in the model.

### 2.2 The DMZ Stochastic Differential Equation

The unnormalized conditional probability density, $\sigma(t,x)$, of the state given the observations satisfies the DMZ equation:

$$
d\sigma(t,x) = L_0\,\sigma(t,x)\,dt + \sum_{i=1}^{m} L_i\,\sigma(t,x)\,dy_i(t), \qquad \sigma(0,x) = \sigma_0(x). \tag{2}
$$

Here

$$
\begin{aligned}
L_0 &= \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2}{\partial x_i^2} - \sum_{i=1}^{n} f_i(x)\frac{\partial}{\partial x_i} - \sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) - \frac{1}{2}\sum_{i=1}^{m} h_i^2(x) \\
&= -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\bigl(f_i(x)\,\cdot\bigr) + \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2}{\partial x_i^2} - \frac{1}{2}\sum_{i=1}^{m} h_i^2(x) \\
&= \frac{1}{2}\Bigl(\sum_{i=1}^{n} D_i^2 - \eta(x)\Bigr),
\end{aligned} \tag{3}
$$

where $L_i$ is the zero-degree differential operator of multiplication by $h_i(x)$, $\sigma_0(x)$ is the probability density at the initial time, and

$$
\begin{aligned}
D_i &\equiv \frac{\partial}{\partial x_i} - f_i(x), \\
\eta(x) &\equiv \sum_{i=1}^{n} f_i^2(x) + \sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) + \sum_{i=1}^{m} h_i^2(x).
\end{aligned} \tag{4}
$$

The DMZ equation is to be interpreted in the Stratonovich sense. Note that

$$
D_i = \frac{\partial}{\partial x_i} - f_i(x) = e^{F_i}\frac{\partial}{\partial x_i}e^{-F_i}, \qquad \text{where } F_i = F_i(x) = \int_0^{x_i} f_i(t)\,dt. \tag{5}
$$

Hence,

$$
D_i^2 = e^{F_i}\frac{\partial^2}{\partial x_i^2}e^{-F_i}, \tag{6}
$$

and

$$
L_0 = \frac{1}{2}\Bigl(\sum_{i=1}^{n} e^{F_i}\frac{\partial^2}{\partial x_i^2}e^{-F_i} - \eta(x)\Bigr). \tag{7}
$$

### 2.3 The Robust DMZ Partial Differential Equation

Following [11] and [12], introduce a new unnormalized density

$$
u(t,x) = \exp\Bigl(-\sum_{i=1}^{m} h_i(x)\,y_i(t)\Bigr)\sigma(t,x). \tag{8}
$$

Under this transformation, the DMZ SDE is transformed into the following time-varying PDE

$$
\begin{aligned}
\frac{\partial u}{\partial t}(t,x) ={}& \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2 u}{\partial x_i^2}(t,x) + \sum_{i=1}^{n}\Bigl(-f_i(x) + \sum_{j=1}^{m} y_j(t)\frac{\partial h_j}{\partial x_i}(x)\Bigr)\frac{\partial u}{\partial x_i}(t,x) \\
&- \Bigl(\sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) + \frac{1}{2}\sum_{i=1}^{m} h_i^2(x) - \frac{1}{2}\sum_{i=1}^{m} y_i(t)\,\Delta h_i(x) \\
&\quad + \sum_{i=1}^{m}\sum_{j=1}^{n} y_i(t)f_j(x)\frac{\partial h_i}{\partial x_j}(x) - \frac{1}{2}\sum_{i,j=1}^{m}\sum_{k=1}^{n} y_i(t)y_j(t)\frac{\partial h_i}{\partial x_k}(x)\frac{\partial h_j}{\partial x_k}(x)\Bigr)u(t,x), \\
u(0,x) ={}& \sigma_0(x).
\end{aligned} \tag{9}
$$

This is called the robust DMZ equation. Here $\Delta$ is the Laplacian. The solution of a PDE when the initial condition is a delta function is called the fundamental solution.

### 2.4 Universal Linear and Finite Dimensional Filtering I: Lie Algebra Method and the Maximal Rank Case

The Kalman filter is not a universal optimal linear filter, since it assumes that the initial distribution is Gaussian, although it may still be optimal under certain criteria such as minimum variance. In the general case, the DMZ equation needs to be considered. The use of Lie algebraic methods to solve the DMZ equation was proposed in [24] and [25]. The Lie algebraic viewpoint has been useful in giving a deeper understanding of the DMZ equation.

The estimation algebra $E$ associated with the continuous-continuous model is the Lie algebra generated by $\{L_0, L_1, \ldots, L_m\}$. It is said to be of maximal rank if, for any $1 \le i \le n$, there exist constants $c_i$ such that $x_i + c_i$ are in $E$. In particular, for a linear measurement model of maximal rank, the estimation algebra is of maximal rank. Note that the explicit recursive solution for the Beneš filtering system (see [26]) was originally derived only for the maximal rank case. In some cases, the maximal rank condition is assumed in deriving the explicit solutions.

If the linear filtering system is completely controllable and observable, the estimation algebra is $(2n+2)$-dimensional with basis

$$
L_0,\ \frac{\partial}{\partial x_1},\ \ldots,\ \frac{\partial}{\partial x_n},\ x_1,\ \ldots,\ x_n,\ 1. \tag{10}
$$

The Wei-Norman approach to solving the robust DMZ equation, a time-varying PDE, is to note that the solution is of the form [27]

$$
u(t,x) = e^{T(t)}\,e^{r_n(t)x_n}\cdots e^{r_1(t)x_1}\,e^{s_n(t)D_n}\cdots e^{s_1(t)D_1}\,e^{tL_0}\,u(0,x), \tag{11}
$$

where $T(t)$, $r_i(t)$, and $s_i(t)$ satisfy a system of ODEs. Therefore, the Wei-Norman approach requires the solution of a system of ODEs, linear PDEs, and an FPKfe type of equation. However, if the linear system is not completely controllable or completely observable, the basis of the estimation algebra is not known beforehand and therefore needs to be calculated. Hence, one cannot even write down the finite system of ODEs explicitly.

### 2.5 Universal Linear and Finite Dimensional Filtering II: Direct Solution

A direct method for solving the universal linear filtering problem was first presented in [28]. It was shown that by a combination of a gauge transformation and a time-dependent spatial translation of the unnormalized conditional probability density, the robust DMZ equation for the linear filtering problem reduces to an on-line system of ODEs and an off-line FPKfe. This solution does not require controllability or observability assumptions, and is also universal as the initial distribution could be arbitrary. This solution was further simplified and extended in [29] and [30].

The direct solution for the Beneš case was also derived by noting that if the drift $f$ is the gradient of a potential function $\phi$, i.e., $f_i = \partial\phi/\partial x_i$, then

$$
L_0 = \frac{1}{2}\Bigl(\sum_{i=1}^{n} e^{\phi}\frac{\partial^2}{\partial x_i^2}e^{-\phi} - \eta(x)\Bigr). \tag{12}
$$

If $\xi(t,x) = e^{-\phi}\sigma(t,x)$, then the DMZ equation is

$$
\begin{aligned}
\frac{\partial\xi}{\partial t}(t,x) &= \frac{1}{2}\bigl(\Delta - \eta(x)\bigr)\xi(t,x) + \sum_{i=1}^{m} L_i\,\xi(t,x)\,\frac{dy_i}{dt}(t), \\
\xi(0,x) &= e^{-\phi}\sigma_0,
\end{aligned} \tag{13}
$$

where $\Delta$ is the Laplacian operator. If $\eta(x)$ is a quadratic polynomial in $x$, the semigroup generated by the differential operator $\frac{1}{2}(\Delta - \eta)$ is known and can be used to explicitly derive solutions to the equation when the $h_i$ are linear in $x$. This technique is also related to the concept of equivalence of parabolic equations (see [28] for details). The solution was further generalized and simplified to the case of a more general finite-dimensional filter: a Yau filter with quadratic $\eta$. (A Yau filter is one where the drift in the signal model is the sum of a linear term and the gradient of a function, and the measurement model is linear. The function is assumed to have no quadratic-form piece; else it can be absorbed into the linear term. When the gradient term vanishes, the Kalman filter results. When the linear term vanishes and the gradient term is such that $\eta$ is quadratic, the Beneš filter [26] is obtained.) In summary, they showed that if one considers $\tilde{u}(t,x)$ defined as

$$
\tilde{u}(t,x) = \exp\Bigl[c(t) + \sum_{i=1}^{n} a_i(t)\,x_i - F(x + b(t))\Bigr]\,u(t, x + b(t)), \tag{14}
$$

then the robust DMZ equation for $u$ is reduced to a system of ODEs for $a_i(t)$, $b_i(t)$, and $c(t)$, and an FPKfe-type of equation for $\tilde{u}$. It has since been shown that even the off-line FPKfe-type of equation can be reduced to a system of linear ODEs [31, 32]. Note that, like the Lie algebra solution, the state and measurement models are assumed to be explicitly time-independent.

### 2.6 The Yau Equation

The finite-dimensional filtering problem is a very restrictive class of the general nonlinear filtering problem. Recently, it was proved that the real-time solution of the general nonlinear filtering problem can be obtained reliably [13], [33]. Let $P_k = \{0 = \tau_0 < \tau_1 < \cdots < \tau_k = T\}$ be a partition of the time interval $[0, T]$, and let the norm of the partition be defined as $|P_k| = \max_{1\le l\le k}(\tau_l - \tau_{l-1})$. If $u_l(t,x)$ satisfies the equation

$$
\begin{aligned}
\frac{\partial u_l}{\partial t}(t,x) ={}& \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2 u_l}{\partial x_i^2}(t,x) + \sum_{i=1}^{n}\Bigl(-f_i(x) + \sum_{j=1}^{m} y_j(\tau_l)\frac{\partial h_j}{\partial x_i}(x)\Bigr)\frac{\partial u_l}{\partial x_i}(t,x) \\
&- \Bigl(\sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) + \frac{1}{2}\sum_{i=1}^{m} h_i^2(x) - \frac{1}{2}\sum_{i=1}^{m} y_i(\tau_l)\,\Delta h_i(x) \\
&\quad + \sum_{i=1}^{m}\sum_{j=1}^{n} y_i(\tau_l)f_j(x)\frac{\partial h_i}{\partial x_j}(x) - \frac{1}{2}\sum_{i,j=1}^{m}\sum_{k=1}^{n} y_i(\tau_l)y_j(\tau_l)\frac{\partial h_i}{\partial x_k}(x)\frac{\partial h_j}{\partial x_k}(x)\Bigr)u_l(t,x)
\end{aligned} \tag{15}
$$

in the time interval $\tau_{l-1} \le t \le \tau_l$, then the function $\tilde{u}_l(t,x)$ defined as

$$
\tilde{u}_l(t,x) = \exp\Bigl(\sum_{i=1}^{m} y_i(\tau_l)\,h_i(x)\Bigr)u_l(t,x) \tag{16}
$$

satisfies the parabolic partial differential equation

$$
\begin{aligned}
\frac{\partial\tilde{u}_l}{\partial t}(t,x) &= \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2\tilde{u}_l}{\partial x_i^2}(t,x) - \sum_{i=1}^{n} f_i(x)\frac{\partial\tilde{u}_l}{\partial x_i}(t,x) - \Bigl(\sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) + \frac{1}{2}\sum_{i=1}^{m} h_i^2(x)\Bigr)\tilde{u}_l(t,x) \\
&= -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\bigl[f_i(x)\,\tilde{u}_l(t,x)\bigr] + \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2\tilde{u}_l}{\partial x_i^2}(t,x) - \frac{1}{2}\sum_{i=1}^{m} h_i^2(x)\,\tilde{u}_l(t,x),
\end{aligned} \tag{17}
$$

in the same time interval. The converse of the statement is also true. In [34], it was also noted that it is sufficient to use the previous observation, i.e., $u_l$ satisfies Equation 15 if and only if $\tilde{u}_l$ defined as

$$
\tilde{u}_l(t,x) = \exp\Bigl(\sum_{i=1}^{m} y_i(\tau_{l-1})\,h_i(x)\Bigr)u_l(t,x) \tag{18}
$$

satisfies Equation 17 in the time interval $\tau_{l-1} \le t \le \tau_l$. We refer to Equations 16 (18) and Equation 17 as the post-measurement (pre-measurement) forms of the YYe.

Observe that Equation 15 is obtained by setting $y(t)$ to $y(\tau_l)$ in Equation 9. (In contrast, note that the $\tilde{u}$ defined for solving the finite-dimensional Yau filter in Equation 14 in the aforementioned papers is equivalent to solving the robust DMZ equation for $u$ at each time instant.) It was proved that the solution of Equation 15 approximates very well the solution of the robust DMZ equation (Equation 9), i.e., it converges to it in both the pointwise sense and the $L^2$ sense. Thus, solving Equation 9 is equivalent to solving Equation 17. Finally,

$$
\sigma(\tau,x) = \lim_{|P_k|\to 0}\tilde{u}_k(\tau_k,x). \tag{19}
$$

Thus, the solution of the YYe (as $|P_k| \to 0$) is the desired unnormalized conditional probability density.

Observe that when $h = 0$, it is simply the FPKfe. However, unlike the FPKfe, the YYe does not satisfy the current conservation condition, i.e., the right-hand side is not a total divergence. This means that it does not conserve probability. This fundamental difference is traced to the fact that the FPKfe evolves the normalized probability density (and preserves the normalization), while the YYe evolves the unnormalized conditional probability density. Therefore, this distinction is made between the two equations in this paper.

### 2.7 The Yau Algorithm

We may summarize Yau's real-time algorithm, based on both the pre- and post-measurement forms of the YYe, as follows. Suppose measurements are available at times

$$
\cdots < \tau_0 < \tau_1 < \tau_2 < \cdots < \tau_k = \tau. \tag{20}
$$

We seek the solution $u(\tau,x)$ of the robust DMZ equation. Let the initial distribution be $\sigma_0(x)$. Then, evolve the initial distribution to the first measurement instant, $\tau_1$, using the YYe:

$$
\begin{aligned}
\frac{\partial\tilde{u}_1}{\partial t}(t,x) ={}& \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2\tilde{u}_1}{\partial x_i^2}(t,x) - \sum_{i=1}^{n} f_i(x)\frac{\partial\tilde{u}_1}{\partial x_i}(t,x) - \Bigl(\sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) + \frac{1}{2}\sum_{i=1}^{m} h_i^2(x)\Bigr)\tilde{u}_1(t,x), \\
\tilde{u}_1(\tau_0,x) ={}& \begin{cases}
\exp\Bigl(\sum_{j=1}^{m}[y_j(\tau_0) - y_j(\tau_{-1})]\,h_j(x)\Bigr)\sigma_0(x) & \text{(Pre-Measurement)} \\[4pt]
\exp\Bigl(\sum_{j=1}^{m}[y_j(\tau_1) - y_j(\tau_0)]\,h_j(x)\Bigr)\sigma_0(x) & \text{(Post-Measurement)}.
\end{cases}
\end{aligned} \tag{21}
$$

The solution of Equation 21 at time $\tau_1$ is $\tilde{u}_1(\tau_1,x)$. Note that $u_1(\tau_1,x)$ is given by

$$
u_1(\tau_1,x) = \begin{cases}
\exp\bigl(-\sum_{i=1}^{m} y_i(\tau_0)\,h_i(x)\bigr)\tilde{u}_1(\tau_1,x) & \text{(Pre-Measurement)} \\[4pt]
\exp\bigl(-\sum_{i=1}^{m} y_i(\tau_1)\,h_i(x)\bigr)\tilde{u}_1(\tau_1,x) & \text{(Post-Measurement)}.
\end{cases} \tag{22}
$$

Next, solve the YYe to the next measurement instant $\tau_2$ with initial condition $\tilde{u}_2(\tau_1,x)$, i.e.,

$$
\begin{aligned}
\frac{\partial\tilde{u}_2}{\partial t}(t,x) ={}& \frac{1}{2}\sum_{i=1}^{n}\frac{\partial^2\tilde{u}_2}{\partial x_i^2}(t,x) - \sum_{i=1}^{n} f_i(x)\frac{\partial\tilde{u}_2}{\partial x_i}(t,x) - \Bigl(\sum_{i=1}^{n}\frac{\partial f_i}{\partial x_i}(x) + \frac{1}{2}\sum_{i=1}^{m} h_i^2(x)\Bigr)\tilde{u}_2(t,x), \quad \tau_1 \le t \le \tau_2, \\
\tilde{u}_2(\tau_1,x) ={}& \text{as in Equations 25 and 26 with } i = 2,
\end{aligned} \tag{23}
$$

to obtain $\tilde{u}_2(\tau_2,x)$. In fact, for $2 \le i \le k$, $\tilde{u}_i(\tau_i,x)$ can be computed from $\tilde{u}_{i-1}(\tau_{i-1},x)$, where $\tilde{u}_i(t,x)$ satisfies the equation

$$
\begin{aligned}
\frac{\partial\tilde{u}_i}{\partial t}(t,x) ={}& \frac{1}{2}\sum_{j=1}^{n}\frac{\partial^2\tilde{u}_i}{\partial x_j^2}(t,x) - \sum_{j=1}^{n} f_j(x)\frac{\partial\tilde{u}_i}{\partial x_j}(t,x) - \Bigl(\sum_{j=1}^{n}\frac{\partial f_j}{\partial x_j}(x) + \frac{1}{2}\sum_{j=1}^{m} h_j^2(x)\Bigr)\tilde{u}_i(t,x), \quad \tau_{i-1} \le t \le \tau_i, \\
\tilde{u}_i(\tau_{i-1},x) ={}& \text{as in Equations 25 and 26.}
\end{aligned} \tag{24}
$$

The initial condition in Equation 24 follows from noting that (since $u_i(\tau_{i-1},x) = u_{i-1}(\tau_{i-1},x)$)

$$
\begin{aligned}
\tilde{u}_i(\tau_{i-1},x) &= u_i(\tau_{i-1},x)\exp\Bigl(\sum_{j=1}^{m} y_j(\tau_{i-1})\,h_j(x)\Bigr) \\
&= \exp\Bigl(\sum_{j=1}^{m}[y_j(\tau_{i-1}) - y_j(\tau_{i-2})]\,h_j(x)\Bigr)\tilde{u}_{i-1}(\tau_{i-1},x) \quad \text{(Pre-Measurement)},
\end{aligned} \tag{25}
$$

$$
\begin{aligned}
\tilde{u}_i(\tau_{i-1},x) &= u_i(\tau_{i-1},x)\exp\Bigl(\sum_{j=1}^{m} y_j(\tau_i)\,h_j(x)\Bigr) \\
&= \exp\Bigl(\sum_{j=1}^{m}[y_j(\tau_i) - y_j(\tau_{i-1})]\,h_j(x)\Bigr)\tilde{u}_{i-1}(\tau_{i-1},x) \quad \text{(Post-Measurement)}.
\end{aligned} \tag{26}
$$

Thus, the solution of the robust DMZ equation at the $i$-th step is given by

$$
u_i(\tau_i,x) = \begin{cases}
\exp\bigl(-\sum_{j=1}^{m} y_j(\tau_{i-1})\,h_j(x)\bigr)\tilde{u}_i(\tau_i,x) & \text{(Pre-Measurement)} \\[4pt]
\exp\bigl(-\sum_{j=1}^{m} y_j(\tau_i)\,h_j(x)\bigr)\tilde{u}_i(\tau_i,x) & \text{(Post-Measurement)}.
\end{cases} \tag{27}
$$

Note that the Yau algorithm is a recursive algorithm, as it does not need any previous observation data. Furthermore, the YYe is independent of the data and so can be solved off-line, and the YYe is much simpler than the robust DMZ equation. Finally, note that the output of the Yau algorithm is the desired unnormalized conditional probability density.
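The recursion above can be sketched in code. This is a minimal illustration of the post-measurement form for a scalar model of my own choosing ($f(x) = -x$, $h(x) = x$, synthetic observations); the explicit finite-difference step stands in for whatever off-line YYe solver one actually uses.

```python
import numpy as np

# Illustrative scalar model (my choice, not from the paper): f(x) = -x, h(x) = x.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
f = -x                       # signal-model drift
df_dx = -np.ones_like(x)     # f'(x)
h = x                        # measurement drift

def yye_step(u, dt):
    """One explicit step of the observation-free Yau equation (17):
    u_t = 0.5*u_xx - f*u_x - (f' + 0.5*h^2)*u, with zero boundary values."""
    u_xx = np.zeros_like(u)
    u_x = np.zeros_like(u)
    u_xx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_x[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)
    return u + dt * (0.5 * u_xx - f * u_x - (df_dx + 0.5 * h**2) * u)

def yau_update(u_prev, y_prev, y_curr, dt_meas, n_sub=200):
    """One recursion step: absorb the observation increment into the initial
    condition (Equations 21/26, post-measurement form), then evolve the YYe."""
    u = np.exp((y_curr - y_prev) * h) * u_prev
    for _ in range(n_sub):
        u = yye_step(u, dt_meas / n_sub)
    return u

u = np.exp(-0.5 * x**2)                  # initial unnormalized density
y_path = [0.0, 0.05, 0.12, 0.08]         # synthetic observation samples
for y_prev, y_curr in zip(y_path[:-1], y_path[1:]):
    u = yau_update(u, y_prev, y_curr, dt_meas=0.1)

posterior = u / (u.sum() * dx)           # normalize only at the output stage
```

Note that normalization is deferred to the output stage, consistent with the YYe evolving an unnormalized density.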

### 2.8 Other Real-Time Algorithms

It is noted that there have been several other numerical methods proposed for solving the DMZ equation, such as [35, 36, 37, 38]. Some numerical examples were presented in [39]. As noted in [39], most papers on the approximate solution of the DMZ equation deal with algorithms that are not (or, at least, not easily) implementable. The most well-known of these "implementable" methods for the general filtering problem (rather than those applicable only to problems like finite-dimensional filters) are now briefly reviewed.

Recall that the DMZ SDE for the unnormalized conditional probability density is given by

$$
d\sigma(t,x) = L_I\,\sigma(t,x)\,dt + \sum_{i=1}^{m} L_i\,\sigma(t,x)\,dy_i(t), \tag{28}
$$

where $L_I$ is the adjoint of the infinitesimal generator of the state Markov process, i.e., the forward Fokker-Planck-Kolmogorov (FPK) operator.

The simplest scheme is the explicit one-step Euler scheme:

$$
\sigma(t+\Delta,x) = (1 + \Delta L_I)\,\sigma(t,x) + \sum_{i=1}^{m} h_i(x)\bigl(y_i(t+\Delta) - y_i(t)\bigr)\sigma(t,x). \tag{29}
$$

Some other algorithms follow from observing that Equation 28 can be considered as the combination of a linear stochastic equation and the FPKfe:

$$
\begin{cases}
d\sigma_1(t,x) = \sigma_1(t,x)\displaystyle\sum_{i=1}^{m} h_i(x)\,dy_i(t), \\
d\sigma_2(t,x) = L_I\,\sigma_2(t,x)\,dt.
\end{cases} \tag{30}
$$

The standard idea of operator splitting for solving deterministic higher dimensional PDEs can then be applied to this case. Since the linear stochastic equation can be solved explicitly, the following two splitting-up approximations result:

$$
\sigma(t+\Delta,x) = \begin{cases}
T_\Delta\exp\Bigl(\displaystyle\sum_{i=1}^{m}[y_i(t+\Delta) - y_i(t)]\,h_i(x) - \frac{\Delta}{2}\sum_{i=1}^{m} h_i^2(x)\Bigr)\sigma(t,x), \\[8pt]
\exp\Bigl(\displaystyle\sum_{i=1}^{m}[y_i(t+\Delta) - y_i(t)]\,h_i(x) - \frac{\Delta}{2}\sum_{i=1}^{m} h_i^2(x)\Bigr)T_\Delta\,\sigma(t,x).
\end{cases} \tag{31}
$$

Here $T_\Delta$ is the operator corresponding to evolution for a time $\Delta$ via the FPK forward equation (FPKfe).
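A minimal sketch of one splitting-up step (the first variant of Equation 31) for a scalar model of my own choosing; the crude explicit finite-difference loop stands in for the FPKfe propagator $T_\Delta$.

```python
import numpy as np

# Illustrative scalar model (my choice): f(x) = -x, h(x) = x.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
f = -x
h = x

def fpk_step(sigma, delta, n_sub=100):
    """Crude explicit solver standing in for the FPKfe propagator T_Delta:
    sigma_t = -(f*sigma)_x + 0.5*sigma_xx, with zero boundary values."""
    dt = delta / n_sub
    for _ in range(n_sub):
        flux = f * sigma
        div = np.zeros_like(sigma)
        lap = np.zeros_like(sigma)
        div[1:-1] = (flux[2:] - flux[:-2]) / (2.0 * dx)
        lap[1:-1] = (sigma[2:] - 2.0 * sigma[1:-1] + sigma[:-2]) / dx**2
        sigma = sigma + dt * (-div + 0.5 * lap)
    return sigma

def splitting_step(sigma, dy, delta):
    """First variant of Eq. 31: multiply by the explicit solution of the
    linear stochastic factor, then apply the FPKfe propagator T_Delta."""
    sigma = np.exp(dy * h - 0.5 * delta * h**2) * sigma
    return fpk_step(sigma, delta)

sigma = np.exp(-0.5 * (x - 1.0) ** 2)     # initial unnormalized density
sigma = splitting_step(sigma, dy=0.06, delta=0.1)
```

The second variant of Equation 31 simply swaps the order of the two factors; both agree to first order in $\Delta$.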

The spectral separation scheme utilizes the fact that the conditional probability density can be expanded in a basis in which one factor obeys an observation-independent FPKfe type of equation (deterministic Hermite-Fourier coefficients) and the other is a Wick polynomial completely defined by the observation process (see [38] for details). This separation of parameters and observations also enables real-time implementation. It can be shown that it includes the explicit Euler one-step and splitting-up approximations.

Finally, it will be seen that the path integral formulas derived here can be used to solve the FPKfe arising in these techniques. In fact, this formula already arises in the investigation of the continuous-discrete filtering problem (see [40]). Note that in the Yau algorithm, the “prediction” part is accomplished via the YYe. In contrast, in the Splitting-up method, it is done using the FPKfe, as in continuous-discrete filtering theory.

### 2.9 Why the Yau Algorithm is the Most Suitable Algorithm

The following two reasons explain why the Yau algorithm is superior to all other known algorithms:

Firstly, recall that the coefficients of the robust DMZ equation (Equation 9) contain the measurements. This means that the partial differential equation to solve for continuous-continuous filtering is not even known prior to the measurements. In other words, the robust DMZ equation has to be solved on-line; its solution cannot be pre-computed. In contrast, in the YYe (Equation 17), measurements are absent from the partial differential equation. The measurements only enter the initial condition at each measurement step. This implies that the YYe can be solved off-line (assuming that the measurement steps are equidistant or known); there is no need for on-line solution of PDEs. This makes real-time solution feasible even if the state dimension is large. Specifically, suppose the measurements are performed at equidistant time intervals. The set of all possible initial conditions may be expressed as a linear combination of basis functions. Then, one may precompute and store the solution of the YYe for the basis functions. Upon arrival of measurements, the initial condition incorporating the measurements is computed. This is projected onto the basis functions to ascertain the components. The solution (due to linearity of the PDE), till the next measurement, is then simply the sum of the precomputed off-line solutions for the basis functions.
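The precompute-and-superpose strategy just described rests only on linearity of the PDE, so it can be sketched with any linear propagator. In this illustration (all choices mine: the Gaussian basis, the heat-flow stand-in for the YYe solver), the off-line phase propagates each basis function once, and the on-line phase merely projects and superposes.

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 161)
dx = x[1] - x[0]

def yye_propagator(u, dt=0.05, n_sub=100):
    """Stand-in for the off-line YYe solver (here: pure heat flow u_t = 0.5*u_xx)."""
    r = 0.5 * (dt / n_sub) / dx**2
    for _ in range(n_sub):
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u = u_new
    return u

# Off-line: propagate each basis function once and store the results.
basis = [np.exp(-0.5 * (x - c) ** 2) for c in (-1.0, 0.0, 1.0)]
propagated = [yye_propagator(b) for b in basis]

# On-line: project the measurement-updated initial condition onto the basis
# (least squares), then superpose the stored solutions by linearity.
init = 0.3 * basis[0] + 0.5 * basis[1] + 0.2 * basis[2]
coeffs, *_ = np.linalg.lstsq(np.stack(basis, axis=1), init, rcond=None)
online = sum(c * p for c, p in zip(coeffs, propagated))

direct = yye_propagator(init)     # what an on-line PDE solve would have given
```

Because the propagator is linear, `online` and `direct` agree to rounding error; the on-line cost is only a projection and a weighted sum.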

Secondly, all other known algorithms assume that the signal and measurement model drifts are bounded. In fact, the error bounds derived for these methods depend (often exponentially) on the bounds, so that there are severe time-step restrictions. Note that these results cannot even cover the Kalman filter (i.e., linear drift) case, which is a major limitation in solving practical problems. In contrast, the results of [13] and [41] assume only that the drifts are nonlinear with linear growth. The errors are independent of the bounds on the signal or measurement model drift. In [33], this is further weakened: it is sufficient that the growth of the observation drift be greater than the growth of the signal model drift.

It will be seen that the Feynman path integral approach naturally leads to the Yau algorithm. In that sense, the Feynman path integral formulation gives a physical explanation why the Yau algorithm is the natural choice. In addition, the path integral formula explicitly solves the PDE.

### 2.10 Integral Formula for the Solution of the YYe

A solution of the YYe in terms of ordinary integrals was presented in [14], [15]. The formal solution derived is an infinite series (for details, see [15]). In addition, an estimate of the time interval on which this solution converges was presented. For the purposes of this paper, it is sufficient to note that it has better numerical properties than solving PDEs, since integrals can be computed reliably, as opposed to partial differential equations. However, the integrand is rather complicated (e.g., convolutions), and it does not appear to be easily implementable for a general model. We shall see that the path integral formula is considerably simpler to evaluate.

## 3 Preliminaries

In this section, a brief review of relevant aspects of functional methods common in theoretical physics is presented. For more details on these topics, the reader is referred to [23].

### 3.1 A Brief Review of Functional Calculus

The infinite-dimensional generalization of a function is called a functional, $F[x(t)]$, which gives a number for each function $x(t)$. Loosely speaking, the functional derivative may be defined analogously to the ordinary derivative as follows:

$$
\frac{\delta F[x(t)]}{\delta x(t')} = \lim_{\epsilon\to 0}\frac{F[x(t) + \epsilon\,\delta(t - t')] - F[x(t)]}{\epsilon}. \tag{32}
$$

Alternatively, the functional derivative, $\delta F/\delta x(s')$, of the functional $F[x(s)]$ with respect to a variation of the function $x(s)$ at $s'$ is defined by the following equation:

$$
F[x(s) + \eta(s)] = F[x(s)] + \int \frac{\delta F}{\delta x(s')}[x(s)]\,\eta(s')\,ds' + \cdots. \tag{33}
$$

Note that the functional derivative of a functional is also a functional. Functional differentiation obeys the standard algebraic rules (linearity and Leibniz's rule):

$$
\begin{aligned}
\frac{\delta}{\delta x(t')}\Bigl[\sum_{i=1}^{n} F_i[x(t)]\Bigr] &= \sum_{i=1}^{n}\frac{\delta F_i}{\delta x(t')}[x(t)], & (34)\\
\frac{\delta}{\delta x(t')}\bigl[F_1[x(t)]\,F_2[x(t)]\bigr] &= F_1[x(t)]\frac{\delta}{\delta x(t')}F_2[x(t)] + F_2[x(t)]\frac{\delta}{\delta x(t')}F_1[x(t)]. & (35)
\end{aligned}
$$

Just as the derivative of $x$ with respect to $x$ is 1, the functional derivative of $x(t)$ with respect to $x(t')$ is the unit matrix in infinite dimensions, namely the delta function:

$$
\frac{\delta x(t)}{\delta x(t')} = \delta(t - t'). \tag{36}
$$

The functional derivatives of a formal power series in a function $x(t)$ can be seen to be analogous to the ordinary derivatives of a power series in a variable $x$. Specifically, it is straightforward to verify that if

$$
F[x(t)] = \sum_{n=0}^{\infty}\frac{1}{n!}\int dt_1\cdots dt_n\,F^{(n)}(t_1,t_2,\ldots,t_n)\,x(t_1)\cdots x(t_n), \tag{37}
$$

then

$$
F^{(r)}(t_1,t_2,\ldots,t_r) = \Bigl\{\Bigl(\prod_{i=1}^{r}\frac{\delta}{\delta x(t_i)}\Bigr)F\Bigr\}\Bigr|_{x=0}. \tag{38}
$$

In particular, a functional representable as a power series in functions, like the exponential functional, can be functionally differentiated using this result. Thus

$$
\frac{\delta}{\delta J(x)}\exp\Bigl(\int dx'\,J(x')A(x')\Bigr) = A(x)\exp\Bigl(\int dx'\,J(x')A(x')\Bigr). \tag{39}
$$

The functional delta function may be written as

$$\delta\big(f(x(t))\big)=\int[d\lambda(t)]\exp\left(i\int dt\,\lambda(t)f(x(t))\right). \tag{40}$$

In the application at hand, determinants of operators (matrices) arise from Gaussian integration in the infinite-dimensional (finite-dimensional) case. In particular, the quantity $\det(1+K)$ needs to be evaluated for the problem at hand, for some operator $K$. It is assumed that the traces of all powers of $K$ exist. From the identity

$$\ln\det M=\operatorname{tr}\ln M, \tag{41}$$

it follows that

$$\ln\det[1+K]=\int dx\,K(x,x)-\frac{1}{2}\int dx_1\,dx_2\,K(x_1,x_2)K(x_2,x_1)+\cdots+\frac{(-1)^{n+1}}{n}\int dx_1\cdots dx_n\,K(x_1,x_2)K(x_2,x_3)\cdots K(x_n,x_1)+\cdots. \tag{42}$$

Note that if $K(x_1,x_2)\propto\theta(x_1-x_2)$, with $\theta$ the step function, the cyclic products $K(x_1,x_2)K(x_2,x_3)\cdots K(x_n,x_1)$ vanish for $n\geq 2$, and only the first term is nonzero.
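The finite-dimensional version of the trace expansion is easy to verify; a minimal sketch for an arbitrary small matrix $K$ (an illustrative random choice) with spectral radius well below one, where the series converges:

```python
import numpy as np

# Check ln det(1 + K) = tr K - (1/2) tr K^2 + (1/3) tr K^3 - ...
# for a matrix K with small spectral radius.

rng = np.random.default_rng(0)
K = 0.1 * rng.standard_normal((4, 4))

lhs = np.log(np.linalg.det(np.eye(4) + K))

rhs, Kp = 0.0, np.eye(4)
for n in range(1, 31):
    Kp = Kp @ K                                  # Kp = K^n
    rhs += (-1) ** (n + 1) / n * np.trace(Kp)

print(lhs, rhs)
```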

Finally, recall the following sequence of identities from multivariable calculus:

$$\begin{aligned}
1&=\int\{d^ny\}\,\delta^n(y)\\
&=\int\{d^nf(x)\}\,\delta^n(f(x))\\
&=\int\{d^nx\}\,J\,\delta^n(f(x)),\qquad J=\det\left(\frac{\partial f_i}{\partial x_j}(x)\right),\\
&=\int\{d^nx\}\,\delta^n(x-x_c).
\end{aligned} \tag{43}$$

Then, it follows that

$$\delta^n(f(x))=\frac{\delta^n(x-x_c)}{J(x)},\qquad \delta^n(x-x_c)=\delta^n(f(x))\,J(x), \tag{44}$$

and

$$\sigma(x)\big|_{f(x)=0}=\int\{d^nx\}\,\delta^n(f(x))\,J(x)\,\sigma(x). \tag{45}$$

Using the Fourier integral representation of the delta function, it is clear that

$$\sigma(x)\big|_{f(x)=0}=\frac{1}{(2\pi)^n}\int\{d^nx\}\{d^n\lambda\}\,e^{i\lambda\cdot f(x)}\,J(x)\,\sigma(x). \tag{46}$$
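The delta-function change-of-variables identity can be checked numerically in one dimension with a nascent (narrow-Gaussian) delta; the test functions $f(x)=e^x-2$ (root $x_c=\ln 2$, Jacobian $J=f'(x)=e^x$) and $\sigma(x)=\cos x$ are illustrative choices.

```python
import numpy as np

# 1D check of sigma(x)|_{f(x)=0} = \int dx delta(f(x)) J(x) sigma(x),
# with delta approximated by a narrow Gaussian.

def delta_eps(u, eps=1e-3):
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(0.0, 1.5, 300_001)
dx = x[1] - x[0]
f = np.exp(x) - 2.0
J = np.exp(x)                # f'(x) > 0, so J = |det df/dx|
sigma = np.cos(x)

integral = np.sum(delta_eps(f) * J * sigma) * dx
exact = np.cos(np.log(2.0))  # sigma evaluated at the root of f
print(integral, exact)
```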

The functional analogue of Equation 43 is

$$1=\int[\mathcal{D}x(t')]\,\det\left[\frac{\delta f(x(t))}{\delta x(t')}\right]\delta\big(f(x(t))\big). \tag{47}$$

Finally, the Gaussian integral identity

$$\int[d^nx]\exp\left(-\frac{1}{2}\sum_{i,j=1}^{n}x_i(A^{-1})_{ij}x_j-\sum_{i=1}^{n}b_ix_i\right)=\exp\left(\frac{1}{2}\sum_{i,j=1}^{n}b_iA_{ij}b_j\right),\qquad [d^nx]\equiv\frac{d^nx}{\sqrt{(2\pi)^n\det A}}, \tag{48}$$

becomes

$$\int[d\lambda(t)]\exp\left(-\frac{1}{2}\sum_{i,j=1}^{n}\int dt\,\lambda_i(t)(A^{-1}(t))_{ij}\lambda_j(t)+\sum_{i=1}^{n}\int dt\,b_i(t)\lambda_i(t)\right)=\exp\left(\frac{1}{2}\sum_{i,j=1}^{n}\int dt\,b_i(t)A_{ij}(t)b_j(t)\right). \tag{49}$$
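The finite-dimensional Gaussian identity can be confirmed by brute-force quadrature; a minimal sketch for $n=2$, with an arbitrary illustrative positive-definite $A$ and source vector $b$:

```python
import numpy as np

# Check the normalized Gaussian integral identity in n = 2 dimensions:
#   \int d^2x / sqrt((2 pi)^2 det A) exp(-x^T A^{-1} x / 2 - b^T x)
#     = exp(b^T A b / 2).

A = np.array([[1.0, 0.3], [0.3, 0.5]])
b = np.array([0.2, -0.1])
Ainv = np.linalg.inv(A)

s = np.linspace(-8.0, 8.0, 801)          # grid wide enough to capture the tails
ds = s[1] - s[0]
X, Y = np.meshgrid(s, s, indexing="ij")
integrand = np.exp(-0.5 * (Ainv[0, 0] * X**2 + 2 * Ainv[0, 1] * X * Y
                           + Ainv[1, 1] * Y**2)
                   - (b[0] * X + b[1] * Y))
lhs = integrand.sum() * ds**2 / np.sqrt((2 * np.pi)**2 * np.linalg.det(A))
rhs = np.exp(0.5 * b @ A @ b)
print(lhs, rhs)
```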

### 3.2 Langevin Equation and Gaussian Measure

In the path integral formulation, as is common in the physics literature, the continuous-continuous model is interpreted as

$$\begin{cases}\dot{x}(t)=f(x(t),t)+e(x(t),t)\,\nu(t),\\ \dot{y}(t)=h(x(t),t)+n(t)\,\mu(t).\end{cases} \tag{50}$$

Such equations are called Langevin equations. Heuristically, the white noises are the formal time derivatives of the Brownian motions of the Itô model:

$$\nu(t)=\frac{dv(t)}{dt},\qquad \mu(t)=\frac{dw(t)}{dt}. \tag{51}$$

The Gaussian noise process is represented by the path integral measure

$$[d\rho(\nu(t))]=[\mathcal{D}\nu(t)]\exp\left[-\frac{1}{2}\int dt\sum_{a,b=1}^{p}\nu_a(t)(Q^{-1}(t))_{ab}\nu_b(t)\right],\qquad \nu\in\mathbb{R}^{p\times 1}, \tag{52}$$

where $\nu(t)$ is a real $p$-dimensional vector for each $t$.

It is straightforward to verify that the mean is zero

$$\begin{aligned}
\langle\nu_c(t)\rangle&=\int[d\rho(\nu)]\,\nu_c(t)\\
&=\int[\mathcal{D}\nu(t)]\,\nu_c(t)\exp\left(-\frac{1}{2}\int dt\sum_{a,b=1}^{p}\nu_a(t)(Q^{-1}(t))_{ab}\nu_b(t)\right)\\
&=0,
\end{aligned} \tag{53}$$

and the variance is (this is also consistent with the previous discussion in terms of Itô calculus, i.e., $\langle\nu_a(t)\nu_b(t')\rangle=Q_{ab}(t)\,\delta(t-t')$ implies that $\langle dv_a(t)\,dv_b(t)\rangle=Q_{ab}(t)\,dt$ (54)):

$$\begin{aligned}
\langle\nu_c(t)\nu_d(t')\rangle&=\int[d\rho(\nu)]\,\nu_c(t)\nu_d(t')\\
&=\left.\left[\int[\mathcal{D}\nu(s)]\,\nu_c(t)\nu_d(t')\exp\left(-\frac{1}{2}\int ds\sum_{a,b=1}^{p}\nu_a(s)(Q^{-1}(s))_{ab}\nu_b(s)+\sum_{a=1}^{p}\int ds\,b_a(s)\nu_a(s)\right)\right]\right|_{b_a=0}\\
&=\left.\frac{\delta}{\delta b_c(t)}\frac{\delta}{\delta b_d(t')}\exp\left(\frac{1}{2}\int ds\sum_{a,b=1}^{p}b_a(s)Q_{ab}(s)b_b(s)\right)\right|_{b_a=0}\\
&=Q_{cd}(t)\,\delta(t-t'),
\end{aligned} \tag{55}$$

and higher-order moments can easily be written down using Wick’s theorem for bosonic (commuting) variables. These results are in accord with the expectation that the measure represents a Gaussian process with the first two moments given above. Finally, implicit in the use of these formal techniques is the Feynman convention, or symmetric discretization, for the drift.
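The first two moments can be checked by Monte Carlo on a time grid, where $\delta(t-t')$ becomes $\delta_{kk'}/dt$; the covariance matrix $Q$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Monte Carlo check of <nu_a(t) nu_b(t')> = Q_ab delta(t - t'):
# discretized white noise is nu_k = dv_k / dt, with equal-time covariance
# Q / dt (since delta(0) -> 1/dt) and zero covariance at distinct times.

rng = np.random.default_rng(2)
Q = np.array([[1.0, 0.2], [0.2, 0.5]])
L = np.linalg.cholesky(Q)
dt, M = 0.01, 200_000

dv = (L @ rng.standard_normal((2, M))) * np.sqrt(dt)  # increments, cov Q dt
nu = dv / dt                                          # discretized white noise

equal_time = (nu @ nu.T) / M                 # estimates Q / dt
distinct = np.mean(nu[0, :-1] * nu[1, 1:])   # distinct times: estimates 0
print(equal_time * dt, distinct)
```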

### 3.3 Transition Probability Density and Signal and Measurement Ensemble Averages

Consider an ensemble of systems with state variables described by the Langevin equation. Due to the random noise, each system leads to a different vector $x(t)$ that depends on time. Although only one realization of the stochastic process is ever observed, it is meaningful to speak of an ensemble average. For fixed times $t_1<t_2<\cdots<t_r$, the probability density of finding the random vector $x(t)$ in the ($n$-dimensional) interval $x_i\le x(t_i)\le x_i+dx_i$ ($i=1,\ldots,r$) is given by

$$W_r(t_r,x_r;\cdots;t_1,x_1)=\left\langle\prod_{i=1}^{r}\delta^n(x(t_i)-x_i)\right\rangle, \tag{56}$$

where $x_i$ is an $n$-dimensional column vector and $\langle\cdots\rangle$ denotes averaging with respect to the signal model noise $\nu(t)$. The complete information on the random vector $x(t)$ is contained in the infinite hierarchy of such probability densities. The quantity of interest here is the conditional probability density

$$\begin{aligned}
p(t_r,x_r|t_{r-1},x_{r-1};\ldots;t_1,x_1)&=\big\langle\delta^n(x(t_r)-x_r)\big\rangle\big|_{x(t_{r-1})=x_{r-1},\ldots,x(t_1)=x_1}\\
&=\frac{W_r(t_r,x_r;\ldots;t_1,x_1)}{\int W_r(t_r,x_r;\ldots;t_1,x_1)\,d^nx_r}.
\end{aligned} \tag{57}$$
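The ensemble-average definition can be illustrated for a model where the conditional density is known in closed form; the linear drift below (the Ornstein-Uhlenbeck process $\dot{x}=-x+\nu$) is an illustrative choice, not the paper's general model.

```python
import numpy as np

# Ensemble estimate of the transition density: simulate many realizations of
# dx = -x dt + dv with x(0) = x0 and compare the sample mean and variance of
# x(t1) with the exact Ornstein-Uhlenbeck values
#   mean = x0 exp(-t1),  var = (1 - exp(-2 t1)) / 2.

rng = np.random.default_rng(3)
dt, t1, M = 1e-3, 1.0, 100_000
x0 = 1.0

x = np.full(M, x0)                       # M independent ensemble members
for _ in range(int(t1 / dt)):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal(M)

mean_mc, var_mc = x.mean(), x.var()
mean_ex = x0 * np.exp(-t1)
var_ex = 0.5 * (1.0 - np.exp(-2.0 * t1))
print(mean_mc, mean_ex, var_mc, var_ex)
```

A histogram of `x` would likewise approximate the (Gaussian) transition density $p(t_1,x|0,x_0)$ itself.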

Now, the process described by the Langevin equation with $\delta$-correlated Langevin force is a Markov process, i.e., the conditional probability density depends only on the value at the immediately preceding time:

$$p(t_r,x_r|t_{r-1},x_{r-1};\ldots;t_1,x_1)=p(t_r,x_r|t_{r-1},x_{r-1}). \tag{58}$$

Hence the complete information for a Markov process is contained in the transition probabilities

$$p(t_2,x_2|t_1,x_1)=\frac{W_2(t_2,x_2;t_1,x_1)}{W_1(t_1,x_1)}$$