
# Universal, Continuous-Discrete Nonlinear Yau Filtering I: Affine, Linear State Model with State-Independent Diffusion Matrix

Bhashyam Balaji
Defence Research and Development Canada, Ottawa,
3701 Carling Avenue,
Email: Bhashyam.Balaji@drdc-rddc.gc.ca
###### Abstract:

The continuous-discrete filtering problem requires the solution of a partial differential equation known as the Fokker-Planck-Kolmogorov forward equation (FPKfe). In this paper, it is pointed out that for a state model with an affine, linear drift and a state-independent diffusion matrix, the fundamental solution can be obtained using only linear algebra techniques. In particular, no differential equations need to be solved. Furthermore, there are no restrictions on the time step size or on the measurement model. Also discussed are computational aspects that are crucial for potential real-time implementation of higher-dimensional problems. The solution is universal in the sense that the initial distribution may be arbitrary.

Fokker-Planck Equation, Kolmogorov Equation, Continuous-Discrete Filtering, Nonlinear Filtering, Diffusion process.

## 1 Introduction

The problem of continuous-discrete (continuous-continuous) filtering is to estimate the state described by a continuous-time stochastic process from observations of a related discrete-time (continuous-time) stochastic process called the measurement process. The complete solution of the filtering problem, in the Bayesian sense, is given by the conditional probability density function. Given the conditional probability density, optimality can be defined under any criterion. Usually the conditional mean, which is the minimum mean-square error estimate, is studied. A solution is said to be universal if the initial distribution is arbitrary (see, for example, [1]).

Continuous-discrete filtering requires the solution of a partial differential equation known as the Fokker-Planck-Kolmogorov forward equation (FPKfe) [2]. In general, such problems are solved via discretization of the PDE, such as the method of finite differences [3, 4], spectral methods [5], and finite element methods [6]. The numerical solution of a PDE is non-trivial. In particular, there are severe time step size and grid spacing restrictions (especially for high accuracy solutions) for stability, accuracy, and consistency of the numerical method. In addition, a plain implementation imposes very large computational and memory requirements, especially for higher-dimensional problems. It is therefore desirable to be able to exactly solve for the fundamental solution of the FPKfe, in terms of which the FPKfe can be solved.

In this paper it is pointed out that the fundamental solution of the FPKfe for the affine, linear state model with state-independent diffusion matrix, a special case of the Yau filter (discussed below), can be computed using only linear algebra methods. In particular, the fundamental solution for such a model can be computed exactly for an arbitrary time step size without having to solve a differential equation. It is also noted that the computational requirements are considerably less than that suggested by the size of the grid space. In particular, the conditional probability density can be computed efficiently (especially, concerning memory requirements) and rapidly with the use of sparse multilinear data structures. In many applications, such as target tracking, the state model is linear and the measurement model is nonlinear. Therefore, the results in the paper are applicable to a large number of practical problems.

The Yau filter is defined to be one where the state model drift is linear plus a gradient term satisfying the Beneš condition, and the diffusion matrix is time- and state-independent [7]. In continuous-continuous filtering theory, the Yau filtering system is defined with the additional restriction that the measurement process is a continuous-time stochastic process with linear drift. The Yau filtering system plays a fundamental role in continuous-continuous filtering because it has been shown that a finite-dimensional filter of maximal rank has to be a Yau filter [7]. In light of its importance, some real-time implementable solutions for the Yau filtering system have been investigated. In [8, 9], the solution of the Yau equation was shown to be equivalent to the solution of a system of linear ODEs. The case of nonlinear observations has also been investigated; in particular, it is shown in [10] that some nonlinear observation models can also be solved via ODEs. Finally, in [11], a solution is presented in terms of a power series.

In this paper, the special case of the continuous-discrete Yau filter, namely one with an affine, linear state model drift and state-independent (but possibly time-dependent) diffusion matrix, is studied. Note that the measurement model in the continuous-discrete Yau case can be arbitrary, and so a more general filtering problem is solved than in the continuous-continuous Yau case. In a following paper, it is shown that the fundamental solution of the FPKfe for another special case of the Yau filter, namely one with a gradient drift satisfying a certain condition (also known as the Beneš model drift), can also be solved exactly using such elementary methods.

It is noted that the methods developed for solving the continuous-continuous filtering problem quoted in the papers above can be used to solve the prediction part of the continuous-discrete filtering problem. However, they require the solution of ODEs, or a power series expression, rather than the closed-form expressions derived here.

The relevant aspects of filtering theory are reviewed in Section 2. The fundamental solutions are derived for the additive noise case in Sections 3 and 4. Some implementational aspects are discussed in Section 5, and related comments on computational aspects of discrete-discrete filtering are presented in Section 6. An example is presented in Section 7. In Appendix A, the formula for the state transition matrix for the general time-dependent case is reviewed. In Appendix B, the linear manoeuvring target tracking models reviewed in [12] are explicitly solved. Some of the results are well-known and have been derived elsewhere, sometimes in a different form. Here use is made of the elegant method of [13], which enables a simple and systematic derivation of explicit formulas for arbitrary affine, linear models up to four-dimensional state space problems.

## 2 Review of Continuous-Discrete Filtering Theory

### 2.1 Langevin Equation, the FPKfe and its Fundamental Solution

The general continuous-time state model is described by the following stochastic differential equation (SDE):

$$dx(t) = f(x(t),t)\,dt + e(x(t),t)\,dv(t),\qquad x(t_0) = x_0. \tag{1}$$

Here $x(t)$ and $f(x(t),t)$ are $n$-dimensional column vectors, the diffusion vielbein $e(x(t),t)$ is an $n\times p$ matrix, and $v(t)$ is a $p$-dimensional column vector. The noise process $v(t)$ is assumed to be Brownian with covariance matrix $Q(t)$, and the quantity $g = eQe^T$ is termed the diffusion matrix. All functions are assumed to be sufficiently smooth. Equation 1 is also referred to as the Langevin equation. It is interpreted in the Itô sense.

Let $\sigma_0(x)$ be the initial probability distribution of the state process. Then the evolution of the probability distribution $p(t,x)$ of the state process described by the Langevin equation is given by the FPKfe, i.e.,

$$\begin{cases}\displaystyle \frac{\partial p}{\partial t}(t,x) = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[f_i(x,t)\,p(t,x)\right] + \frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^2}{\partial x_i\partial x_j}\left[g_{ij}(t,x)\,p(t,x)\right],\\ p(t_0,x) = \sigma_0(x). \end{cases}\tag{2}$$

Let $t > t' \geq t_0$, and consider the following PDE:

$$\begin{cases}\displaystyle \frac{\partial P}{\partial t}(t,x|t',x') = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[f_i(x,t)\,P(t,x|t',x')\right] + \frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^2}{\partial x_i\partial x_j}\left[g_{ij}(t,x)\,P(t,x|t',x')\right],\\ P(t',x|t',x') = \delta^n(x - x'). \end{cases}\tag{3}$$

The quantity $P(t,x|t',x')$ is known as the fundamental solution of the FPKfe. In this instance, the physical interpretation is that it is the transition probability density.

From the fundamental solution one can compute the probability density at a later time $t''$ for an arbitrary initial condition as follows (in this paper, all integrals are assumed to be from $-\infty$ to $+\infty$, unless otherwise specified):

$$p(t'',x'') = \int P(t'',x''|t',x')\,p(t',x')\,d^n x'. \tag{4}$$

Therefore, in order to solve the FPKfe it is sufficient to solve for the transition probability density $P(t'',x''|t',x')$. Note that this solution is universal in the sense that the initial distribution can be arbitrary.
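The propagation formula (Equation 4) can be sketched numerically. The following Python sketch assumes a scalar pure-diffusion state model (an assumption made for illustration; the paper's models are more general), whose transition kernel over a step is Gaussian, and propagates an arbitrary initial density by Riemann-sum quadrature:

```python
import math

# Hypothetical scalar example: pure-diffusion state model dx = dv,
# whose transition density over a step dt is Gaussian with variance dt.
def kernel(x2, x1, dt):
    return math.exp(-(x2 - x1) ** 2 / (2.0 * dt)) / math.sqrt(2.0 * math.pi * dt)

def propagate(p, grid, dt, dx):
    # Equation (4) as a Riemann sum: p(t'', x'') = sum_x' P(x''|x') p(x') dx'
    return [sum(kernel(x2, x1, dt) * p1 for x1, p1 in zip(grid, p)) * dx
            for x2 in grid]

dx = 0.05
grid = [i * dx for i in range(-200, 201)]   # grid on [-10, 10]
p0 = [kernel(x, 0.0, 0.01) for x in grid]   # narrow initial density
p1 = propagate(p0, grid, 0.5, dx)

# universality: the same kernel propagates any initial density, and the
# total probability mass is preserved
mass = sum(p1) * dx
```

Since the kernel is a probability density in its first argument, the propagated density remains normalized to machine accuracy on a sufficiently wide grid.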

### 2.2 Continuous-Discrete Filtering

In this paper, it is assumed that the measurement model is described by the following discrete-time stochastic process:

$$y(t_k) = h(x(t_k), t_k, w(t_k)),\qquad k = 1,2,\ldots,\quad t_k > t_0, \tag{5}$$

where $y(t_k)$ is an $m$-dimensional vector, and the noise process $w(t_k)$ is assumed to be a white noise process. Note that there is no measurement at the initial time $t_0$. It is assumed that $p(y(t_k)|x(t_k))$ is known.

Then, the universal continuous-discrete filtering problem can be solved as follows. Let the initial distribution be $\sigma_0(x)$ and let the measurements be collected at time instants $t_1, t_2, \ldots$. We use the notation $Y(t_k) = \{y(t_1), \ldots, y(t_k)\}$. Prior to incorporating the measurements, the state evolves according to the FPKfe, i.e.,

$$p(t_1,x|Y(t_0)) = \int P(t_1,x|t_0,x_0)\,p(t_0,x_0)\,d^n x_0. \tag{6}$$

This is the prediction step.

According to Bayes’ rule

$$p(t_1,x|Y(t_1)) = \frac{p(y(t_1)|x, Y(t_0))\,p(t_1,x|Y(t_0))}{p(y(t_1)|Y(t_0))}. \tag{7}$$

Now $Y(t_1) = \{Y(t_0), y(t_1)\}$, and since the measurement noise is white it follows that $p(y(t_1)|x, Y(t_0)) = p(y(t_1)|x)$. Hence, the 'corrected' conditional density at time $t_1$ is

$$p(t_1,x|Y(t_1)) = \frac{p(y(t_1)|x)\,p(t_1,x|Y(t_0))}{\int p(y(t_1)|\xi)\,p(t_1,\xi|Y(t_0))\,d^n\xi}. \tag{8}$$

This is then the initial condition of the FPKfe for the next prediction step which results in

$$\begin{cases}\displaystyle p(t_2,x|Y(t_1)) = \int P(t_2,x|t_1,x_1)\,p(t_1,x_1|Y(t_1))\,d^n x_1, & \text{(Prediction Step)},\\[2ex] \displaystyle p(t_2,x|Y(t_2)) = \frac{p(y(t_2)|x)\,p(t_2,x|Y(t_1))}{\int p(y(t_2)|\xi)\,p(t_2,\xi|Y(t_1))\,d^n\xi}, & \text{(Correction Step)}, \end{cases}\tag{9}$$

and so on.
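The correction step of this recursion can be illustrated with a minimal grid computation. The Gaussian prior, the linear measurement model $y = x + w$, and the observed value below are assumptions chosen for illustration only:

```python
import math

# One correction step (Equation 8): pointwise Bayes update of a predicted
# density on a grid, followed by normalization.
def correct(p_pred, grid, likelihood, dx):
    unnorm = [likelihood(x) * p for x, p in zip(grid, p_pred)]
    z = sum(unnorm) * dx              # normalization, p(y(t1)|Y(t0))
    return [u / z for u in unnorm]

dx = 0.1
grid = [i * dx for i in range(-50, 51)]
# predicted density: standard normal (an assumed prior)
p_pred = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in grid]

# assumed measurement model y = x + w, w ~ N(0, 0.25); observed y = 1.0
lik = lambda x: math.exp(-(1.0 - x) ** 2 / (2 * 0.25))
p_corr = correct(p_pred, grid, lik, dx)
mean = sum(x * p for x, p in zip(grid, p_corr)) * dx
```

For this conjugate Gaussian pair the grid-based conditional mean can be checked against the closed-form posterior mean, $(0\cdot 1 + 1\cdot 4)/(1+4) = 0.8$.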

## 3 Fundamental Solution I: Additive Noise

In this section, the following general affine, linear state model with additive noise is considered:

$$dx(t) = \left(F(t)x(t) + l(t)\right)dt + e(t)\,dv(t). \tag{10}$$

It is assumed that $F(t)$ commutes with itself at different times, i.e.,

$$[F(t), F(t')] \equiv F(t)F(t') - F(t')F(t) = 0. \tag{11}$$

The case when $F(t)$ does not commute with itself at different times is discussed in Appendix A. Also, matrix notation is used here.

The state model can be written as the following Langevin equation

$$\frac{dx}{dt}(t) = \left(F(t)x(t) + l(t)\right) + e(t)\frac{dv}{dt}(t) = \left(F(t)x(t) + l(t)\right) + e(t)\nu(t), \tag{12}$$

where $\nu(t)$ is $\delta$-correlated and Gaussian distributed as follows:

$$\langle \nu_i(t)\rangle = 0,\qquad \langle \nu_i(t)\nu_j(t')\rangle = Q_{ij}(t)\,\delta(t-t'),\qquad Q_{ij}(t) = Q_{ji}(t). \tag{13}$$

The formal solution of Equation 12 is

$$x(t) = U(t,t_0)\,x(t_0) + \int_{t_0}^{t} U(t,\tau)\left[l(\tau) + e(\tau)\nu(\tau)\right]d\tau, \tag{14}$$

where the state transition matrix is the solution of the following equation:

$$\begin{cases}\dfrac{dU}{dt}(t,t_0) = F(t)\,U(t,t_0),\\ U(t_0,t_0) = I. \end{cases}\tag{15}$$

This can be verified as follows. The Leibniz rule is

$$\frac{\partial}{\partial z}\int_{a(z)}^{b(z)} \Phi(\xi,z)\,d\xi = \int_{a(z)}^{b(z)} \frac{\partial \Phi}{\partial z}(\xi,z)\,d\xi + \Phi(b(z),z)\frac{\partial b}{\partial z}(z) - \Phi(a(z),z)\frac{\partial a}{\partial z}(z), \tag{16}$$

where $\Phi$, $a$, and $b$ are assumed to be sufficiently smooth functions of their arguments. From the Leibniz rule it therefore follows that

$$\begin{aligned}\frac{dx}{dt}(t) &= F(t)U(t,t_0)x(t_0) + \int_{t_0}^{t} F(t)U(t,\tau)\left[l(\tau)+e(\tau)\nu(\tau)\right]d\tau + U(t,t)\left(l(t)+e(t)\nu(t)\right)\\ &= F(t)\left[U(t,t_0)x(t_0) + \int_{t_0}^{t} U(t,\tau)\left[l(\tau)+e(\tau)\nu(\tau)\right]d\tau\right] + l(t) + e(t)\nu(t)\\ &= \left(F(t)x(t)+l(t)\right) + e(t)\nu(t). \end{aligned}\tag{17}$$

Also, $U(t,t_0)$ satisfies the semi-group property:

$$U(t_2,t_0) = U(t_2,t_1)\,U(t_1,t_0), \tag{18}$$

which immediately implies that

$$U(t,t_0) = U^{-1}(t_0,t). \tag{19}$$

Since the matrix $F(t)$ commutes with itself at all times,

$$U(t,t_0) = e^{\int_{t_0}^{t} F(\tau)\,d\tau}. \tag{20}$$

If the matrix $F$ is time-independent,

$$U(t,t_0) = e^{F(t-t_0)}. \tag{21}$$
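For constant $F$, the state transition matrix is thus a matrix exponential. A small pure-Python sketch (the constant-velocity kinematic $F$ below is an assumed example) evaluates the power series and checks the semigroup property (Equation 18); a production implementation would use a scaling-and-squaring algorithm instead of a plain Taylor series:

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(f, terms=30):
    # exp(f) via truncated Taylor series (illustrative only)
    n = len(f)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for m in range(1, terms):
        term = [[v / m for v in row] for row in mat_mul(term, f)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

F = [[0.0, 1.0], [0.0, 0.0]]     # assumed constant-velocity kinematic matrix
def U(t, t0):
    dt = t - t0
    return mat_exp([[f * dt for f in row] for row in F])

u20 = U(2.0, 0.0)
u_chain = mat_mul(U(2.0, 1.0), U(1.0, 0.0))   # U(t2,t1) U(t1,t0), Eq. (18)
```

For this nilpotent $F$ the series terminates, giving $U = I + F\,(t-t_0)$ exactly, and the chained product equals the one-step transition matrix.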

Since a linear transformation of Gaussian random variables is also Gaussian, $x(t)$ is a Gaussian random variable. Therefore, it is completely characterized by its mean vector and covariance matrix.

The mean vector is

$$\mu(t,t_0) = \langle x(t)\rangle = U(t,t_0)\,x(t_0) + \tilde{l}(t),\qquad \tilde{l}(t) \equiv \int_{t_0}^{t} U(t,\tau)\,l(\tau)\,d\tau, \tag{22}$$

and the covariance matrix is

$$\begin{aligned}\Sigma(t,t_0) &= \left\langle \left[x(t)-\mu(t)\right]\left[x(t)-\mu(t)\right]^T\right\rangle\\ &= U(t,t_0)\Sigma(t_0)U^T(t,t_0) + \int_{t_0}^{t}\!\!\int_{t_0}^{t} U(t,\tau_1)\,e(\tau_1)Q(\tau_1)\,\delta(\tau_1-\tau_2)\,e^T(\tau_2)\,U^T(t,\tau_2)\,d\tau_1\,d\tau_2\\ &= U(t,t_0)\Sigma(t_0)U^T(t,t_0) + \int_{t_0}^{t} U(t,\tau)\,g(\tau)\,U^T(t,\tau)\,d\tau. \end{aligned}\tag{23}$$

Combining the results for $\mu(t,t_0)$ and $\Sigma(t,t_0)$, the fundamental solution is

$$P(t,x|t_0,x_0) = \frac{1}{\sqrt{(2\pi)^n \det \Sigma(t,t_0)}}\exp\left(-\frac{1}{2}\left[x-\mu(t,t_0)\right]^T \Sigma^{-1}(t,t_0)\left[x-\mu(t,t_0)\right]\right). \tag{24}$$

Finally, observe that

$$\begin{aligned}\frac{d\Sigma}{dt}(t) &= F(t)\left[U(t,t_0)\Sigma(t_0)U^T(t,t_0)\right] + \left[U(t,t_0)\Sigma(t_0)U^T(t,t_0)\right]F^T(t) + \frac{d}{dt}\int_{t_0}^{t} U(t,\tau)\,g(\tau)\,U^T(t,\tau)\,d\tau\\ &= F(t)\left[U(t,t_0)\Sigma(t_0)U^T(t,t_0)\right] + \left[U(t,t_0)\Sigma(t_0)U^T(t,t_0)\right]F^T(t)\\ &\qquad + g(t) + F(t)\int_{t_0}^{t} U(t,\tau)\,g(\tau)\,U^T(t,\tau)\,d\tau + \left[\int_{t_0}^{t} U(t,\tau)\,g(\tau)\,U^T(t,\tau)\,d\tau\right]F^T(t)\\ &= F(t)\,\Sigma(t) + \Sigma(t)\,F^T(t) + g(t), \end{aligned}\tag{25}$$

where the Leibniz rule (Equation 16) has been used in the second step. This result will be used in the following section.
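The consistency of Equations 23 and 25 can be checked numerically in the scalar case. The sketch below assumes a scalar model $dx = a\,x\,dt + dv$ with constant $a$ and $g$ (illustrative values only), and compares Euler integration of the covariance ODE with the closed form obtained from Equation 23 with $\Sigma(t_0)=0$:

```python
import math

# Scalar case: dSigma/dt = 2 a Sigma + g (Equation 25) versus the
# closed form Sigma(t) = g (exp(2 a t) - 1) / (2 a) (Equation 23).
a, g, t_end, h = -0.5, 1.0, 2.0, 1e-4   # assumed illustrative parameters

sigma = 0.0                              # Sigma(t0) = 0 (delta initial condition)
for _ in range(int(t_end / h)):
    sigma += h * (2 * a * sigma + g)     # forward Euler step of the ODE

closed = g * (math.exp(2 * a * t_end) - 1) / (2 * a)
```

With a step of $10^{-4}$, the Euler iterate agrees with the closed form to roughly the first-order truncation error of the scheme.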

## 4 Fundamental Solution II: State-Independent Diffusion Matrix

In Section 3, the fundamental solution was derived for the affine, linear state model with additive noise. In this section, it is pointed out that a similar result follows if the noise is multiplicative, provided the diffusion matrix is state-independent. This is a straightforward generalization of the result derived in [14], which also assumes that the diffusion matrix is time-independent.

The state model considered here is

$$dx(t) = \left(F(t)x(t) + l(t)\right)dt + e(x(t),t)\,dv(t),\qquad \left(e Q e^T\right)_{ij} = g_{ij}(t). \tag{26}$$

The state model is assumed to be affine and linear and possibly explicitly time dependent. The diffusion matrix is assumed to be independent of state, but possibly (explicitly) time dependent.

Observe that this includes the case studied in the previous section. Since the diffusion vielbein is state dependent (or the state model noise is multiplicative), the methods used in the previous section cannot be applied to this case. This model is also more general than the Yau filter case where no explicit time dependence is assumed.

The FPKfe for the transition probability density is

$$\begin{cases}\displaystyle \frac{\partial P}{\partial t}(t,x|t_0,x_0) = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[\left(\sum_{j=1}^{n} F_{ij}(t)x_j + l_i(t)\right)P(t,x|t_0,x_0)\right] + \frac{1}{2}\sum_{i,j=1}^{n} g_{ij}(t)\frac{\partial^2 P}{\partial x_i \partial x_j}(t,x|t_0,x_0),\\ P(t_0,x|t_0,x_0) = \delta^n(x - x_0). \end{cases}\tag{27}$$

Let $\tilde{P}(t,k|t_0,x_0)$ be the Fourier transform of $P(t,x|t_0,x_0)$ with respect to $x$, i.e.,

$$P(t,x|t_0,x_0) = \frac{1}{(2\pi)^n}\int e^{i\sum_{i=1}^{n} k_i x_i}\,\tilde{P}(t,k|t_0,x_0)\,d^n k. \tag{28}$$

Since

$$\begin{aligned}\frac{\partial P}{\partial t}(t,x|t_0,x_0) &= \frac{1}{(2\pi)^n}\int e^{i\sum_{i=1}^{n} k_i x_i}\,\frac{\partial \tilde{P}}{\partial t}(t,k|t_0,x_0)\,d^n k,\\ x_j\,P(t,x|t_0,x_0) &= \frac{1}{(2\pi)^n}\int \left[(-i)\frac{\partial}{\partial k_j}e^{i\sum_{i=1}^{n} k_i x_i}\right]\tilde{P}(t,k|t_0,x_0)\,d^n k = \frac{i}{(2\pi)^n}\int e^{i\sum_{i=1}^{n} k_i x_i}\,\frac{\partial \tilde{P}}{\partial k_j}(t,k|t_0,x_0)\,d^n k,\\ \frac{\partial P}{\partial x_i}(t,x|t_0,x_0) &= \frac{i}{(2\pi)^n}\int e^{i\sum_{i'=1}^{n} k_{i'} x_{i'}}\,k_i\,\tilde{P}(t,k|t_0,x_0)\,d^n k,\\ \frac{1}{2}\sum_{i,j=1}^{n} g_{ij}(t)\frac{\partial^2 P}{\partial x_i\partial x_j}(t,x|t_0,x_0) &= -\frac{1}{(2\pi)^n}\int e^{i\sum_{i'=1}^{n} k_{i'} x_{i'}}\,\frac{1}{2}\sum_{i,j=1}^{n} g_{ij}(t)\,k_i k_j\,\tilde{P}(t,k|t_0,x_0)\,d^n k, \end{aligned}\tag{29}$$

it follows that

$$\begin{cases}\displaystyle \frac{\partial \tilde{P}}{\partial t}(t,k|t_0,x_0) = \left(\sum_{i,j=1}^{n} F_{ij}(t)\,k_i\frac{\partial}{\partial k_j} - i\sum_{i=1}^{n} l_i(t)\,k_i\right)\tilde{P}(t,k|t_0,x_0) - \frac{1}{2}\sum_{i,j=1}^{n} g_{ij}(t)\,k_i k_j\,\tilde{P}(t,k|t_0,x_0),\\ \tilde{P}(t_0,k|t_0,x_0) = e^{-i\sum_{j=1}^{n} k_j x_{0j}}. \end{cases}\tag{30}$$

Since $P$ is Gaussian, so is $\tilde{P}$, and we can choose the following ansatz for $\tilde{P}$:

$$\tilde{P}(t,k|t_0,x_0) = e^{-i\sum_{i=1}^{n} k_i \mu_i(t,t_0) - \frac{1}{2}\sum_{i,j=1}^{n} k_i k_j \Sigma_{ij}(t,t_0)}. \tag{31}$$

Without loss of generality, $\Sigma_{ij} = \Sigma_{ji}$. Note that if there is no explicit time dependence, the argument of $\mu$ and $\Sigma$ is the difference between the times, i.e., $t - t_0$.

Therefore

$$\left(-i\sum_{i=1}^{n} k_i \dot{\mu}_i - \frac{1}{2}\sum_{i,j=1}^{n} k_i k_j \dot{\Sigma}_{ij} + i\sum_{i,j=1}^{n} F_{ij}(t)\,k_i \mu_j + i\sum_{i=1}^{n} l_i(t)\,k_i + \sum_{i,j,l=1}^{n} F_{ij}(t)\,k_i \Sigma_{jl} k_l + \frac{1}{2}\sum_{i,j=1}^{n} g_{ij}(t)\,k_i k_j\right) = 0. \tag{32}$$

This implies that $\mu_i$ and $\Sigma_{ij}$ obey the following ODEs:

$$\dot{\mu}_i(t,t_0) = \sum_{j=1}^{n} F_{ij}(t)\,\mu_j(t,t_0) + l_i(t),\qquad \mu_i(t_0,t_0) = x_i(t_0), \tag{33}$$

and

$$\dot{\Sigma}_{ij}(t,t_0) = \sum_{l=1}^{n} F_{il}(t)\,\Sigma_{lj}(t,t_0) + \sum_{l=1}^{n} \Sigma_{il}(t,t_0)\,F_{jl}(t) + g_{ij}(t),\qquad \Sigma_{ij}(t_0,t_0) = 0. \tag{34}$$

From the discussion in the previous section, the solutions of these first-order ODEs are

$$\mu(t,t_0) = U(t,t_0)\,x(t_0) + \int_{t_0}^{t} U(t,\tau)\,l(\tau)\,d\tau, \tag{35}$$

and

$$\Sigma(t,t_0) = U(t,t_0)\,\Sigma(t_0)\,U^T(t,t_0) + \int_{t_0}^{t} U(t,\tau)\,g(\tau)\,U^T(t,\tau)\,d\tau, \tag{36}$$

so that

$$P(t,x|t_0,x_0) = \frac{1}{\sqrt{(2\pi)^n \det \Sigma(t,t_0)}}\exp\left(-\frac{1}{2}\left[x-\mu(t,t_0)\right]^T \Sigma^{-1}(t,t_0)\left[x-\mu(t,t_0)\right]\right). \tag{37}$$

Thus, the expression is the same as that derived earlier, even for this more general case: although a constant diffusion vielbein implies a constant diffusion matrix (for constant $Q$), a constant diffusion matrix does not imply a constant diffusion vielbein.

## 5 Practical Implementation

In this section, some practical implementational aspects are discussed. Specifically, a computationally efficient filtering algorithm based on the results derived in the previous sections is presented in Section 5.1. Some additional aspects are discussed in Section 5.2.

It is important to note that the transition probability density is given in terms of an exponential function. This implies that it is non-negligible only in a very small region; in other words, the transition probability density tensor is sparse. The sparsity property is crucial for storage and computation speed.

### 5.1 Sparse Kernel Grid Filtering Algorithm

The implementation of the continuous-discrete filtering solution exploiting the sparsity property of the transition probability density, or kernel, is thus straightforward.

1. Precompute the transition probability density, given by

$$P(t'',x''|t',x') = \frac{1}{\sqrt{(2\pi)^n \det\Sigma(t'',t')}}\exp\left(-\frac{1}{2}\sum_{i,j=1}^{n}\left[x''_i - \mu_i(t'',t')\right]\left[\Sigma^{-1}(t'',t')\right]_{ij}\left[x''_j - \mu_j(t'',t')\right]\right). \tag{38}$$
2. Threshold the transition probability density and save it as a sparse tensor. That is, zero the entries whose exponent is larger than a certain (user-specified) threshold.

3. At time $t_k$:

1. The prediction step is accomplished by

$$p(t_k,x|Y(t_{k-1})) = \int P(t_k,x|t_{k-1},x')\,p(t_{k-1},x'|Y(t_{k-1}))\,d^n x'. \tag{39}$$

Note that $p(t_0,x|Y(t_0))$ is simply the initial condition $\sigma_0(x)$.

2. The measurement at time $t_k$ is incorporated in the correction step via

$$p(t_k,x|Y(t_k)) = \frac{p(y(t_k)|x)\,p(t_k,x|Y(t_{k-1}))}{\int p(y(t_k)|\xi)\,p(t_k,\xi|Y(t_{k-1}))\,d^n\xi}. \tag{40}$$

This algorithm is referred to as the sparse grid filtering (SGF) algorithm. The thresholding step is crucial for real-time implementation.

Note that the precomputation step assumes that the grid spacing and time intervals between measurements are known.
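The steps above can be sketched end-to-end in Python. The scalar random-walk state, direct noisy measurement model, and made-up measurement record below are all assumptions for illustration; only the structure (precompute, threshold, predict, correct) mirrors the algorithm:

```python
import math

dx, dt, q, r, thresh = 0.1, 1.0, 0.2, 0.5, 10.0   # assumed parameters
grid = [i * dx for i in range(-60, 61)]
n = len(grid)

# steps 1 and 2: precompute the Gaussian kernel once, keep only entries
# whose exponent is below the threshold (sparse row of (index, weight))
kernel = []
for x2 in grid:
    row = []
    for j, x1 in enumerate(grid):
        expo = (x2 - x1) ** 2 / (2 * q * dt)
        if expo <= thresh:
            row.append((j, math.exp(-expo) / math.sqrt(2 * math.pi * q * dt)))
    kernel.append(row)

def predict(p):                        # Equation (39) on the sparse kernel
    return [sum(w * p[j] for j, w in kernel[i]) * dx for i in range(n)]

def correct(p, y):                     # Equation (40), assumed model y = x + w
    un = [math.exp(-(y - x) ** 2 / (2 * r)) * pi for x, pi in zip(grid, p)]
    z = sum(un) * dx
    return [u / z for u in un]

p = [1.0 / (n * dx)] * n               # arbitrary (uniform) initial density
for y in [0.5, 0.7, 0.6]:              # made-up measurement record
    p = correct(predict(p), y)

est = sum(x * pi for x, pi in zip(grid, p)) * dx   # conditional mean
```

Each kernel row stores only the entries within the thresholded exponent, so both memory and the per-step flop count scale with the number of significant entries rather than the full grid size squared.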

In many applications, the measurement model is described by an additive Gaussian noise model, i.e.,

$$y(t_k) = h(x(t_k), t_k) + w(t_k),\qquad k = 1,2,\ldots,\quad t_k > t_0, \tag{41}$$

with $w(t_k) \sim N(0, R(t_k))$. Then, at observation time $t_k$, the conditional density is given by

$$p(t_k,x|Y(t_k)) = \frac{p(y(t_k)|x)\,p(t_k,x|Y(t_{k-1}))}{\int p(y(t_k)|\xi)\,p(t_k,\xi|Y(t_{k-1}))\,d^n\xi}, \tag{42}$$

where $p(y(t_k)|x)$ is given by

$$p(y(t_k)|x) = \frac{1}{\left[(2\pi)^m \det R(t_k)\right]^{1/2}}\exp\left\{-\frac{1}{2}\left(y(t_k) - h(x,t_k)\right)^T R^{-1}(t_k)\left(y(t_k) - h(x,t_k)\right)\right\}, \tag{43}$$

and $p(t_k,x|Y(t_{k-1}))$ is given by

$$p(t_k,x|Y(t_{k-1})) = \int P(t_k,x|t_{k-1},x_{k-1})\,p(t_{k-1},x_{k-1}|Y(t_{k-1}))\,d^n x_{k-1}. \tag{44}$$
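Evaluating Equation 43 for a nonlinear measurement function is straightforward; the bearing function below is an assumed example of such an $h$ (as in angle-only tracking), chosen to show that the correction step itself is unchanged by the nonlinearity:

```python
import math

def gauss_likelihood(y, x, h, r):
    # Equation (43) for a scalar measurement with noise variance r
    innov = y - h(x)
    return math.exp(-innov * innov / (2 * r)) / math.sqrt(2 * math.pi * r)

h = lambda pos: math.atan2(pos[1], pos[0])   # bearing of an assumed 2D position
y = math.pi / 4                              # measured angle, 45 degrees
lik_on = gauss_likelihood(y, (1.0, 1.0), h, 0.01)    # state on the bearing line
lik_off = gauss_likelihood(y, (1.0, 0.0), h, 0.01)   # state well off it
```

States consistent with the measured bearing receive a large likelihood and dominate the correction step, while inconsistent states are suppressed exponentially.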

Observe also that the fundamental solution has a simple and clear physical interpretation. Specifically, when the signal model noise is small the transition probability is significant only near trajectories satisfying the noiseless equation. The noise variance quantifies the extent to which the state may deviate from the noiseless trajectory.

1. It is noted that some of the results derived in this paper are well-known and used in the extended Kalman filter (EKF). In particular, the result of Section 3 is used to derive the discrete-time version of the continuous-time state model (see, for instance, [15]).

However, the use of these results is very different from that in the EKF. Specifically, the EKF linearizes the nonlinear measurement model and propagates the conditional mean and variance, assuming the prior conditional probability density is Gaussian. Thus, the EKF fails if the nonlinearity is not benign, or if the Gaussian approximation for the prior is not valid (e.g., multi-modal). In grid-based filtering, the standard deviation computed from the conditional probability density is a reliable measure of the filter performance. This is not the case for suboptimal methods like the EKF.

In other words, the SGF uses the exact expression for the fundamental solution, which is given by a Gaussian function, while the EKF makes the unjustified approximation that the conditional probability density at any time is Gaussian.

2. The conditional probability density calculated at grid points is exact (ignoring roundoff errors) at those grid points. So, an interpolated version of the fundamental solution on a coarser grid is close to the actual value. This suggests a practical way of verifying the validity of the approximation: check whether the variation with grid spacing of statistics such as the conditional mean is minimal.

Observe that the time step size may be arbitrary. This means that the optimal solution (in the sense of being the correct solution) can be computed using a grid of sufficiently small spacing. However, the conditional state estimate will not be as good for large time steps, a fundamental limitation due to the large measurement sampling interval.

3. In this paper, the correction step was carried out using a uniform grid. It is clear that a much faster approach for higher dimensional problems would be to use Monte Carlo integration methods.

4. The prediction step computation can be sped up considerably by restricting the calculation to areas with significant conditional probability mass (or, more accurately, to the union of the regions of significant probability mass before and after the prediction step).

5. Although it is possible to compute the transition probability density off-line, it may be more appropriate to compute it on-line, computing only those terms whose initial points lie in the region where the density is significant. It is then also possible to use an 'adaptive grid', both in resolution and in location/domain. Observe that the one-step approximation of the path integral can be stored more compactly, as it is Gaussian. Note that when the noise is small, the grid spacing needs to be finer in order to adequately sample the conditional probability density.

6. Although the exact analytical expression of the conditional probability density is unknown, the accuracy of the grid-based method can be made very high and robust with small enough grid spacing. This situation is similar to the case of numerical solution of partial differential equations, where the exact solution is not known, but high accuracy numerical solutions (even machine precision) can still be obtained. Thus, for practical purposes “numerically exact” solutions obtained with a fine grid are optimal. Note that in the case of the PDEs, even when the exact solution is known, it often does not yield accurate numerical solutions.

## 6 Comments on Discrete-Discrete Sparse Kernel Grid Filtering and Particle Filtering

### 6.1 Discrete-Discrete Filtering

In Section 3, the continuous-time state process has been converted to a discrete-time state sequence. This result is simple and exact because the model is linear. This is not possible for a general nonlinear state model, i.e., there is no such general formula relating the continuous-time process with its equivalent discrete-time sequence. In fact, a simple, nonlinear state process is not equivalent to a simple nonlinear state sequence.

Observe that a discrete-time sequence description is more general than a continuous-time process. This is because less is assumed in the specification of the state sequence (for example, no assumptions on continuity or differentiability are needed). However, the fundamental dynamical laws of classical physics are expressed in terms of continuous-time ODEs. Note that a simple, nonlinear dynamical model may not result in a "simple" discrete-time dynamical model.

Often, a nonlinear discrete-time state process, or sequence, is studied. This can be either as a simple (e.g., Euler) approximation of a continuous-time state model, or directly via other means. In this section, the state sequence model is assumed to be given. Thus, one is led to consider the following general discrete-discrete filtering problem where the signal and measurement sequences are as follows:

$$\begin{cases}x_k = f(x_{k-1}, v_{k-1}, k),\\ y_k = h(x_k, n_k, k). \end{cases}\tag{45}$$

Here, $f$ is a nonlinear function of the state $x_{k-1}$, $v_{k-1}$ is an independent, identically distributed (i.i.d.) process noise sequence, and $n_x$ and $n_v$ are the dimensions of the state and process noise vectors. Similarly, $h$ is a nonlinear function, $n_k$ is an i.i.d. measurement noise sequence, and $n_y$ and $n_n$ are the dimensions of the measurement and measurement noise vectors.

The universal discrete-discrete filtering problem is to evolve the conditional probability distribution of the state $x_k$ at time $k$, based on the set of all available measurements $y_{1:k} = \{y_i,\ i=1,\ldots,k\}$, i.e., $p(x_k|y_{1:k})$. The recursive solution is well-known and similar to that discussed in Section 2. Thus, if $p(x_{k-1}|y_{1:k-1})$ is known (note that $p(x_0|y_{1:0}) \equiv p(x_0)$ is the prior, or initial, pdf), then

$$\begin{cases}\displaystyle p(x_k|y_{1:k-1}) = \int p(x_k|x_{k-1})\,p(x_{k-1}|y_{1:k-1})\,dx_{k-1}, & \text{(Prediction)},\\[2ex] \displaystyle p(x_k|y_{1:k}) = \frac{p(y_k|x_k)\,p(x_k|y_{1:k-1})}{\int p(y_k|x_k)\,p(x_k|y_{1:k-1})\,dx_k}, & \text{(Correction)}. \end{cases}\tag{46}$$

Note that $p(x_k|x_{k-1}, y_{1:k-1}) = p(x_k|x_{k-1})$ due to the first-order Markov property of the state process.

Observe that "discrete" here refers to the time, not the state; the states take values in the continuum ($x_k \in \mathbb{R}^{n_x}$).

This solution assumes that $p(x_k|x_{k-1})$ and $p(y_k|x_k)$ are known. Of course, they are defined by the state and measurement processes. In a practical situation, a closed-form expression for these quantities is desired. This is indeed possible for several cases of practical interest, such as additive Gaussian noise.

Of course, no closed form expression for the conditional probability density is known for an arbitrary initial distribution. Also, such a closed form solution, if it exists, may be unsuitable for an accurate numerical evaluation.

### 6.2 Sparse Kernel Grid Filtering

It is usually stated that grid-based methods are computationally prohibitive in high-dimensional spaces. However, the transition probability density tensor is sparse, with the sparsity determined by the grid spacing, grid size, and signal model noise. Likewise, the correction due to measurements is sparse. Then the conditional probability density is significant only in a small fraction of the grid space, and the prediction step needs to be carried out only in a very small region of the grid space. This implies that the memory requirements and the number of flops are considerably less than suggested by naive counting.

The arguments in Section 5 carry over to this case as well. In particular, naïve flop and memory estimates are unduly pessimistic; excellent performance is possible using the SGF.

### 6.3 Some Remarks on Particle Filtering

An alternative to grid-based techniques is particle filtering (see, for instance, [16]): a recursive Bayesian filter obtained via Monte Carlo simulation. The idea is to represent the required conditional probability density by a set of random samples with associated weights, and to compute estimates based on these samples and weights. As the number of samples becomes very large, this approaches the optimal Bayesian solution. The performance of a PF has been shown to be very good in many cases. An important aspect of particle filtering is a good choice of the proposal density. For a discrete-time state process with a linear measurement process, the optimal proposal density for the PF is known. Note that the PF solution is not robust when the measurement model is nonlinear.

A common problem with the particle filter is the degeneracy phenomenon, where after a few iterations all but one particle have negligible weight. In fact, the variance of the importance weights can only increase over time, and thus it is impossible to avoid the degeneracy phenomenon. This degeneracy implies that a large computational effort is devoted to updating particles whose contribution to the approximation of $p(x_k|y_{1:k})$ is almost zero. Methods of alleviating this problem include choosing a good importance density and resampling.

From our discussion, it follows that the sample requirements in PF are analogous to the number of grid points in SGF (which is set by the threshold). The key point is that the SGF method does not require evaluation at all grid points due to the sparsity of the conditional probability density and the transition probability density.

It is also noted that as long as the grid spacing is small enough to adequately sample the conditional probability density, there are no problems in propagating the conditional probability density.

The SGF approach is likely to be useful in providing an estimate of the number of particles needed for the particle filtering. It is an interesting exercise to compare the computational load of the SGF with that of a robust PF solution (which is problem dependent).

## 7 Example

In the examples, use is made of the Tensor Toolbox in MATLAB developed by Bader and Kolda [17]. It has the multilinear sparse tensor class, essential for the SGF.

Consider the 2D coordinated turn model with a time step size of 0.5 s. The measurement model drift is the bearing angle, with additive Gaussian noise; thus, the measurement model is a manifestly nonlinear function of the state variables. The transition probability density was computed on a uniform grid using the Tensor Toolbox in MATLAB [18]. Note that the Courant number is large, so that a one-step explicit scheme is unstable (an implicit scheme, such as the Crank-Nicolson scheme, is stable but not as accurate [19]).

Finally, the sparsity of the grid is about 99% when the threshold is set to 10 (i.e., if the exponent is greater than 10, the transition probability multilinear array element is set to 0). The sparsity of the tensor is crucial in enabling real-time computation of the conditional probability density on a current PC. In addition, it places considerably smaller memory requirements.

Figures 1 and 2 plot the results obtained using the formula for the fundamental solution and the SGF. Also plotted are the conditional means and bounds computed from the conditional probability density. It is seen that the tracking performance is very good, in the sense that the conditional mean is found to be within the bounds most of the time. The root mean square errors for the two states over 100 Monte Carlo runs are found to be around 3. Observe that the transition probability density tensor needed to be computed only once.

In summary, the proposed SGF method is seen to be stable and accurate even for such large time steps; the errors introduced in sparsifying the transition probability density are seen to be minimal, while the computational savings (in terms of speed and memory) are immense.

## 8 Conclusion

In this paper it has been shown that the continuous-discrete filtering problem with an affine, linear state model and with state independent diffusion matrix can be solved accurately using the exact fundamental solution of the corresponding FPKfe valid for an arbitrary time step size. Unlike the continuous-continuous case studied by Stephen Yau and collaborators, there are no restrictions on the measurement model in the continuous-discrete case. The sparsity of the transition probability density is then exploited to demonstrate that sparse grid filtering techniques can be used to solve higher dimensional problems with considerably less computational effort. Similar comments apply to the discrete-discrete filtering problem as well.

It is also interesting to note that the path integral approach can be used to derive the expression for the fundamental solution of the FPKfe for the nonlinear state model case. Results for the additive and multiplicative noise cases are derived in [20] and [21]. In fact, it has been shown that the crudest approximation of the path integral formulas leads to very accurate results (see [22]). Thus, the sparse grid filtering approach yields a unified approach to tackling the general nonlinear filtering problem. Since it enables one to focus computation in regions of significant probability mass, it offers the possibility of solving higher-dimensional filtering problems in real-time.

## Appendix A General Time Dependent Case

The state transition matrix is the solution of the following equation:

$$\begin{cases}\dfrac{dU}{dt}(t,t_0) = F(t)\,U(t,t_0),\\ U(t_0,t_0) = I. \end{cases}\tag{47}$$

The results derived in Sections 3 and 4 are not valid when the time-dependent matrix $F(t)$ does not commute with itself at different times (note that $[F(t),F(t)] = 0$, i.e., it commutes at equal times).

A formal solution for the general time-dependent case can be written down using the concept of the time-ordered product. Let $F_{i_1}(t_{i_1})F_{i_2}(t_{i_2})\cdots F_{i_n}(t_{i_n})$ be a string of time-dependent matrices and let $t_1 > t_2 > \cdots > t_n$. Then, the time-ordered product is

$$T\left(F_{i_1}(t_{i_1})F_{i_2}(t_{i_2})\cdots F_{i_n}(t_{i_n})\right) = F_1(t_1)F_2(t_2)\cdots F_n(t_n). \tag{48}$$

In other words, the time-ordered product of a string of operators (with time as argument) is defined to be the string rearranged so that the earlier-time operators are to the right of the later ones. In general, there is an ambiguity when the operators do not commute at equal times. In the case of interest, $[F(t),F(t)]=0$, so they commute at equal times.

Observe that under the time-ordering symbol, everything commutes. Hence, a time derivative can be taken in the usual manner:

$$\frac{d}{dt}T\left(e^{\int_{t_0}^{t} F(\tau)\,d\tau}\right) = T\left(F(t)\,e^{\int_{t_0}^{t} F(\tau)\,d\tau}\right) = F(t)\,T\left(e^{\int_{t_0}^{t} F(\tau)\,d\tau}\right). \tag{49}$$

Here the exponential of the matrix is defined via the usual power series expansion of the exponential function. In the second step, use is made of the fact that $t$ is the latest time, so that time-ordering puts $F(t)$ at the left; since $t$ is the latest time, $F(t)$ can be taken out of the time-ordering:

$$\frac{d}{dt}T\left(e^{\int_{t_0}^{t} F(\tau)\,d\tau}\right) = F(t)\,T\left(e^{\int_{t_0}^{t} F(\tau)\,d\tau}\right). \tag{50}$$

Thus, the solution of Equation 47 is

$$U(t,t_0) = T\left(e^{\int_{t_0}^{t} F(\tau)\,d\tau}\right),\qquad t > t_0. \tag{51}$$

The satisfaction of the boundary condition is obvious. Also, uniqueness of the solution follows because it is the solution of a first-order differential equation with a given initial value.
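The time-ordered exponential can be approximated numerically as an ordered product of short-step exponentials, with later-time factors to the left; the non-commuting $F(t)$ below is an assumed example chosen for illustration:

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(m, terms=25):
    # exp(m) via truncated Taylor series (adequate for the tiny steps below)
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, m)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def F(t):
    # F(t1) F(t2) = diag(t1, t2), so F does not commute at different times
    return [[0.0, t], [1.0, 0.0]]

h, n = 1e-3, 1000               # integrate from t0 = 0 to t = 1
U = [[1.0, 0.0], [0.0, 1.0]]
for k in range(n):
    t = (k + 0.5) * h           # midpoint of the sub-interval
    step = mat_exp([[f * h for f in row] for row in F(t)])
    U = mat_mul(step, U)        # later-time factor multiplies on the LEFT
```

The ordered product converges to the solution of Equation 47 as the step size shrinks, whereas the naive exponential of the integral would not, precisely because of the non-commutativity.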

An alternative form is the following:

$$U(t,t_0) = 1 + \int_{t_0}^{t} dt_1\,F(t_1) + \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2\,F(t_1)F(t_2) + \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \int_{t_0}^{t_2} dt_3\,F(t_1)F(t_2)F(t_3) + \cdots. \tag{52}$$

The correctness of the formula follows from observing that when the right-hand side is differentiated with respect to time, each term gives $F(t)$ times the previous one. Observe that the various factors of $F$ are time-ordered, i.e., matrices at later times are to the left. In fact, equality of this expression to Equation 51 follows from the following identity:

 ∫t<