Kalman Filter Tuning with Bayesian Optimization

Abstract

Many state estimation algorithms must be tuned: given the state space process and observation models, the process and observation noise parameters must be chosen. Conventional tuning approaches rely on heuristic hand-tuning or gradient-based optimization techniques to minimize a performance cost function. However, the relationship between tuned noise values and estimator performance is highly nonlinear and stochastic. Therefore, the tuning solutions can easily get trapped in local minima, which can lead to poor choices of noise parameters and suboptimal estimator performance. This paper describes how Bayesian Optimization (BO) can overcome these issues. BO poses optimization as a Bayesian search problem for a stochastic “black box” cost function, where the goal is to search the solution space to maximize the probability of improving the current best solution. As such, BO offers a principled approach to optimization-based estimator tuning in the presence of local minima and performance stochasticity. While extended Kalman filters (EKFs) are the main focus of this work, BO can be similarly used to tune other related state space filters. The method presented here uses performance metrics derived from normalized innovation squared (NIS) filter residuals obtained via sensor data, which renders knowledge of ground-truth states unnecessary. The robustness, accuracy, and reliability of BO-based tuning is illustrated on a practical nonlinear state estimation problem: closed-loop aero-robotic control.

Kalman filtering, filter tuning, Bayesian optimization, nonparametric regression, machine learning.

I Introduction

Many state estimation algorithms, including Kalman filters and particle filters, are recursive and model-based [1], [2]. They decompose the estimation problem into a cycle with two main steps: state prediction followed by measurement update. The state prediction step uses a process model to predict how the state evolves over time. The measurement update step uses an observation model to relate a measured quantity to the state estimate. Since both the process and observation models are imperfect, errors in these models are treated as random noise terms that are injected into the system. Most designs assume the noises are white, zero mean and uncorrelated. As a result, filter tuning consists of choosing the values of the process and observation noise covariances.

Many tuning procedures adopt a divide-and-conquer strategy. The first stage is to choose the observation covariance. This is normally carried out using laboratory or bench testing. The sensor is placed in a condition in which the noise-free sensor values can be predicted. The observation noises are determined by statistically characterizing the difference between the predicted and actual values. In the second stage, the process noises are chosen. Since the process noises contain information about the state disturbances and dynamic model uncertainties, which often cannot be reproduced in laboratory settings, the covariance is often chosen by collecting data from an operational domain and quantifying the quality of the estimates. Typically a performance cost is assigned, and the process noise covariance is adjusted to minimize the value of that cost.

However, there are several problems with this two-stage approach. First, with laboratory testing, it is not always possible to model how sensors will react in operational environments. Changes in temperature, for example, can cause the biases in IMUs to change. As a result, the observation noises might not be properly characterized. Second, the interaction between noise levels and filter performance is not straightforward. Theoretical analysis has shown that even if the process and observation models are linear, the presence of modeling errors leads to noises which are state-dependent and correlated over time [3, 4]. As a result, many non-unique tuning solutions can appear. Finally, such approaches tend to rely on statistics which require knowledge of the ground truth of the system to compute. Although this is possible to obtain in simulations or laboratory settings with precise reference measurement systems, it is of limited use for many practical real-world applications where ground truth data is not available.

This paper makes three contributions, which significantly extend the preliminary work presented in [5]. The first is a general framework for Kalman filter noise parameter tuning based on Bayesian Optimization (BO). BO provides an attractive way to overcome the limitations of gradient-based filter auto-tuning strategies, which can easily get trapped in poor local minima. The BO framework developed here uses nonparametric surrogate models based on Student-t process regression, which offers better robustness and performance compared to the Gaussian process surrogate models that are more typically used for Bayesian optimization and which were considered in [5].

The second contribution is the development and validation of novel stochastic cost functions for optimization-based auto-tuning. Most auto-tuning algorithms, including our previous work [5], use the Normalized Estimation Error Squared (NEES), which requires ground truth values of the state; these are often difficult to obtain in real-world settings. To make the tuning process easier to implement in practice, this paper extends the previous work by using the Normalized Innovation Squared (NIS), which is computed from the difference between predicted and actual sensor measurements and does not require ground truth. This makes our approach practical for many real-world applications, where only the sensor observations are available for filter validation.

Finally, the performance of the BO auto-tuning framework is demonstrated and evaluated in simulation for a challenging application: longitudinal state estimation for the Mars Science Laboratory (MSL) Skycrane platform. The results for this application show that BO not only provides reliable and computationally efficient estimates of unknown filter parameters, but can also provide useful probabilistic information about each parameter through the whole domain space, which existing state-of-the-art auto-tuning methods cannot do. While this work focuses mainly on auto-tuning of extended Kalman filters (EKFs), the BO framework can be readily extended to other related state space filtering algorithms.

The rest of this paper is structured as follows. Sections II and III give an overview and formal problem statement for filter auto-tuning. Section IV describes our Bayesian optimization framework using nonparametric Student-t process regression and how it is applied to Kalman filter parameter auto-tuning. Section V describes the setup and analysis for the numerical simulation studies using the Bayesian optimization framework to auto-tune EKFs for the Mars Skycrane longitudinal state estimation problem. Conclusions are given in Section VI.

II Preliminaries

II-A System Overview

The state of the system at time step $k$ is $\mathbf{x}_k \in \mathbb{R}^{n_x}$, where $n_x$ is the dimension of the state vector. The process model that propagates the state from time step $k-1$ to $k$ is

$\mathbf{x}_k = \mathbf{f}(\mathbf{x}_{k-1}, \mathbf{u}_k) + \mathbf{v}_k \quad (1)$

where $\mathbf{u}_k$ is the control input vector and $\mathbf{v}_k$ is the process noise.

The observation model is

$\mathbf{z}_k = \mathbf{h}(\mathbf{x}_k) + \mathbf{w}_k \quad (2)$

where $\mathbf{z}_k \in \mathbb{R}^{n_z}$ is the observation vector and $\mathbf{w}_k$ is the measurement noise.

It is typically assumed that the process and observation models are sufficiently accurate that the process and observation noises are zero-mean, independent, Gaussian distributed random variables.

Given the structure of the system, the goal is to develop an estimation algorithm which takes in a sequence of observations and control inputs, and computes an estimate of the state. The errors in initial conditions, together with the process and observation noises, mean that the state is not known perfectly. Therefore, some means of quantifying the uncertainty must be used. A common approach is to use the mean and covariance of the state estimate.

II-B Mean and Covariance Representation

Our goal is to estimate the state $\mathbf{x}_k$ of the system at a discrete time $k$ and quantify the uncertainty in that estimate. Let $\hat{\mathbf{x}}_{k|k}$ be the estimate of $\mathbf{x}_k$ using all observations up to time step $k$, and let the covariance of this estimate be $\mathbf{P}_{k|k}$. Ideally, the estimate would satisfy:

$\mathbb{E}\left[\mathbf{x}_k - \hat{\mathbf{x}}_{k|k}\right] = \mathbf{0} \quad (3)$
$\mathbf{P}_{k|k} = \mathbb{E}\left[(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k})(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k})^\top\right] \quad (4)$

However, computing an estimate which obeys this property in practice is difficult to achieve. Modelling errors, for example, can lead to biased estimates. Therefore, most practical systems use a weaker condition called covariance consistency [6]. In this case, a valid estimate has the properties:

$\mathbb{E}\left[\mathbf{x}_k - \hat{\mathbf{x}}_{k|k}\right] \approx \mathbf{0} \quad (5)$
$\mathbf{P}_{k|k} \succeq \mathbb{E}\left[(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k})(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k})^\top\right] \quad (6)$

where the quality of the approximation is application-specific, and $\mathbf{A} \succeq \mathbf{B}$ means that $\mathbf{A} - \mathbf{B}$ is positive semidefinite. In other words, the estimate should be approximately unbiased, and the estimator should not overestimate its level of confidence. At the same time, we would like the difference between the predicted covariance and the actual mean squared error to be as small as possible.

II-C Kalman Filters

The Kalman filter is one of the best known and most widely used algorithms for state estimation. It is derived from the fact that the correction applied to the estimate is a linear rule of the form

$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k \boldsymbol{\nu}_k$

where $\mathbf{K}_k$ is a gain matrix, and

$\boldsymbol{\nu}_k = \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1},$

which is the difference between the actual and predicted sensor measurement, is the innovation vector. It acts as an error signal in the filter, and provides a correction term for the state estimate. The Kalman filter chooses the value of $\mathbf{K}_k$ to minimize the mean squared error in the updated state estimate.

The algorithm proceeds as follows [7]. The state is predicted according to the equations

$\hat{\mathbf{x}}_{k|k-1} = \mathbf{f}(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_k) \quad (7)$
$\mathbf{P}_{k|k-1} = \mathbf{F}_k \mathbf{P}_{k-1|k-1} \mathbf{F}_k^\top + \mathbf{Q}_k \quad (8)$

The update is calculated from

$\hat{\mathbf{z}}_{k|k-1} = \mathbf{h}(\hat{\mathbf{x}}_{k|k-1}), \quad \boldsymbol{\nu}_k = \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \quad (9)$
$\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^\top + \mathbf{R}_k \quad (10)$
$\mathbf{K}_k = \mathbf{P}_{k|k-1} \mathbf{H}_k^\top \mathbf{S}_k^{-1} \quad (11)$
$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k \boldsymbol{\nu}_k, \quad \mathbf{P}_{k|k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}_{k|k-1} \quad (12)$

where $\mathbf{F}_k$ and $\mathbf{H}_k$ are the Jacobian matrices of the process and observation models,

$\mathbf{F}_k = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k-1|k-1},\, \mathbf{u}_k} \quad (13)$
$\mathbf{H}_k = \left.\frac{\partial \mathbf{h}}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k|k-1}} \quad (14)$

Note that once Eq. (1) and (2) have been chosen, the only degrees of freedom left are the choices of $\mathbf{Q}_k$ and $\mathbf{R}_k$. This process is known as tuning.
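To make the cycle concrete, here is a minimal Python sketch of one EKF prediction/update step following Eqs. (7)–(12); the function and argument names are illustrative (not from this paper), and the model functions and their Jacobians are assumed to be supplied by the user.

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF predict/update cycle (a sketch of Eqs. (7)-(12))."""
    # Prediction, Eqs. (7)-(8)
    x_pred = f(x_est, u)
    F = F_jac(x_est, u)
    P_pred = F @ P @ F.T + Q

    # Update, Eqs. (9)-(12)
    nu = z - h(x_pred)                       # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new, nu, S
```

Tuning then amounts to choosing the Q and R passed into this function.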

III Tuning

As explained in the introduction, tuning involves choosing $\mathbf{Q}$ and $\mathbf{R}$ to minimize some performance cost. Two widely used measures are the normalized estimation error squared (NEES) and the normalized innovation squared (NIS). These are computed from

$\epsilon_{x,k} = \left(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k}\right)^\top \mathbf{P}_{k|k}^{-1} \left(\mathbf{x}_k - \hat{\mathbf{x}}_{k|k}\right) \quad (15)$
$\epsilon_{z,k} = \boldsymbol{\nu}_k^\top \mathbf{S}_k^{-1} \boldsymbol{\nu}_k \quad (16)$

where $\mathbf{x}_k$ is the ground truth state and $\boldsymbol{\nu}_k$ is the innovation vector. If the dynamical consistency conditions are met, then

$\mathbb{E}\left[\epsilon_{x,k}\right] = n_x \quad (17)$
$\mathbb{E}\left[\epsilon_{z,k}\right] = n_z \quad (18)$

It is often assumed that the prediction and observation errors are Gaussian. In this case, $\epsilon_{x,k}$ and $\epsilon_{z,k}$ will be $\chi^2$-distributed random variables with $n_x$ and $n_z$ degrees of freedom, respectively [7]. Therefore, hypothesis tests can be performed on calculated values of $\epsilon_{x,k}$ (when ground truth data is available) and $\epsilon_{z,k}$ to see if the consistency conditions hold at each time $k$.

III-A Approaches to Tuning

In this paper we focus on process noise tuning, because the process noise covariance is the hardest part of the Kalman filter to tune. In practice, NEES tests are conducted using multiple offline Monte Carlo “truth model” simulations to obtain ground truth values. The truth model simulator represents a high-fidelity model of the “actual” system dynamics and sensor observations, which may contain nonlinearities and other non-ideal characteristics that must be compensated for via Kalman filter tuning. NIS tests can be conducted offline using multiple Monte Carlo simulations (e.g. in parallel with NEES tests), but can also be conducted online using real-time sensor data.

Online/offline NIS tests are conducted as follows: suppose $N$ independent instances of the true state are randomly initialized according to $\hat{\mathbf{x}}_{0|0}$ and $\mathbf{P}_{0|0}$ (the initial state of the filter), and then propagated through the true stochastic dynamics (1) and measurement model (2) for $T$ time steps, yielding sample ground truth sequences and measurement sequences for $i = 1, \dots, N$. If the resulting measurement sequences are then fed into a Kalman filter with tuning parameters $(\mathbf{Q}, \mathbf{R})$, the resulting NEES and NIS statistics for each simulation run at each time $k$ can be averaged across problem instances to give the test statistics:

$\bar{\epsilon}_{x,k} = \frac{1}{N} \sum_{i=1}^{N} \epsilon_{x,k}^{(i)} \quad (19)$
$\bar{\epsilon}_{z,k} = \frac{1}{N} \sum_{i=1}^{N} \epsilon_{z,k}^{(i)} \quad (20)$

Then, given some desired Type I error rate $\alpha$, the NEES and NIS tests provide lower and upper tail bounds $[l_x, u_x]$ and $[l_z, u_z]$ (obtained from the $\chi^2$ distributions of $N\bar{\epsilon}_{x,k}$ and $N\bar{\epsilon}_{z,k}$, which have $N n_x$ and $N n_z$ degrees of freedom), such that the Kalman filter tuning is declared to be consistent if, with probability $1 - \alpha$ at each time $k$,

$\bar{\epsilon}_{x,k} \in [l_x, u_x] \quad \text{and} \quad \bar{\epsilon}_{z,k} \in [l_z, u_z].$

Otherwise, the filter is declared to be inconsistent. Specifically, if $\bar{\epsilon}_{x,k} < l_x$ or $\bar{\epsilon}_{z,k} < l_z$, then the filter tuning is “pessimistic” (underconfident), since the filter-estimated state error/innovation covariances are too large relative to the true values. On the other hand, if $\bar{\epsilon}_{x,k} > u_x$ or $\bar{\epsilon}_{z,k} > u_z$, then the filter tuning is “optimistic” (overconfident), since the filter-estimated state error/innovation covariances are too small relative to the true values.
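As an illustration, the bounds can be computed with SciPy's chi-square quantile function; the helper name `nis_bounds` is ours, not from the paper.

```python
from scipy.stats import chi2

def nis_bounds(N, n_z, alpha=0.05):
    """Two-sided bounds for the N-run averaged NIS statistic.

    For consistent tuning, N * (averaged NIS) is chi-square distributed
    with N * n_z degrees of freedom, so the averaged NIS should fall in
    [r1, r2] with probability 1 - alpha at each time step.
    """
    dof = N * n_z
    r1 = chi2.ppf(alpha / 2.0, dof) / N        # lower tail bound
    r2 = chi2.ppf(1.0 - alpha / 2.0, dof) / N  # upper tail bound
    return r1, r2

# Example: 200 Monte Carlo runs, a 2-dimensional measurement, 5% error rate
r1, r2 = nis_bounds(N=200, n_z=2, alpha=0.05)
```

The same construction applies to the NEES bounds, with n_x in place of n_z.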

The consistency tests provide a very principled basis for validating Kalman filter performance in a domain-agnostic way, and also provide a well-established means for guiding the tuning of the noise parameters $\mathbf{Q}$ and $\mathbf{R}$ in practical applications. Tuning via the tests is most often done manually, and thus requires repeated “guessing and checking” over multiple Monte Carlo simulation runs. However, this quickly becomes cumbersome and non-trivial for systems with several tunable noise terms. Heuristics for manual filter tuning have been developed in the linear-quadratic optimal control literature [8], e.g. to coarsely tune the diagonal entries of $\mathbf{Q}$ first, before fine-tuning its remaining elements. Such heuristics are useful for bounding the shape and magnitude of $\mathbf{Q}$ in linear-Gaussian problems, but are of little help for tuning ‘fudge factor’ process noise parameters that are used to cope with model errors from state truncation, approximations of non-linearities, poorly modeled dynamics, etc. Given this, alternative optimization techniques are needed which are robust to stochastic variations in the cost function and which can explore nonlinear spaces while also satisfying the filter consistency requirements.

III-B Previous Tuning Work

Much of the previous Kalman filter auto-tuning work is based on consistency checking ([9] and [10]).
Reference [9] uses a genetic algorithm to tune the Kalman filter. This algorithm simulates the Darwinian concept of “survival of the fittest” to choose a good parameter set. It treats each parameter set in the parameter space as an “individual.” The specific parameter values corresponding to that individual are coded into a binary string and treated as a “chromosome,” i.e. the genetic information. The fittest individual is selected according to the numerical value of a cost function based on the NEES and the error covariance norm. The genetic algorithm is implemented after some modifications. First, random parameter sets in the parameter space are selected as the initial “population.” They spawn the next generation by pairing two individuals and randomly exchanging parts of their chromosomes. The population is considered to have converged once it attains a low cost. In this approach, a large number of Monte Carlo runs is not used because of computational limits, so some poorly tuned individuals may still be able to pass the consistency test. To solve that problem, one more option is added to the cost function besides the consistency test: when the consistency value is smaller than a threshold, the cost function switches to a value based on the norm of the error covariance.
In their simulation experiment, they use a simple oscillator as an example, aiming to tune the speed and position noise. The optimal value is not achieved because of the structure of their cost function: the minimization routine tends to favor a smaller state error covariance norm over a smaller consistency value.
Another previous work [10] uses the downhill simplex numerical optimization algorithm to minimize a NEES-based cost function. A simplex is a collection of points in an $n$-dimensional space together with all of their interconnecting line segments. The simplex algorithm attempts to locate a minimum of the function by a series of movements in the $n$-dimensional space, including reflection, expansion, and contraction; details of these movements can be found in [10].
However, the simplex algorithm can easily get stuck in a local minimum, so there may be cases where this method fails. Although the algorithm’s cost function is based on NEES, no plots are shown of the consistency check after a tuning result is obtained.
Our previous work [5] also focuses on Kalman filter tuning using NEES tests. Thanks to hardware improvements in recent years, it is no longer prohibitively time-consuming to run large batteries of Monte Carlo tests consisting of, say, several hundred runs. We simulated a car moving along a straight line and optimized a two-dimensional process noise and a one-dimensional measurement noise. We showed that using Bayesian optimization to tune the Kalman filter process noise covariance, as well as the measurement noise covariance, can yield good results. However, in our previous work we did not perform formal post hoc consistency validation checks to confirm that the error at each time step is small enough. Our previous work also limited its analysis and application to a linear dynamical system, and did not consider extensions to linearization-based approximations for nonlinear filtering. Furthermore, the above-mentioned references and our prior work [5] use NEES-based consistency checks, which cannot be used when ground truth is not available. In this paper, we therefore also propose a NIS-based consistency check, which makes it possible to tune the Kalman filter with just sensor measurements. We apply our methods to more complicated and practical nonlinear cases and also validate the statistical consistency of the optimized result. Finally, in this paper we use an improved Bayesian optimization procedure based on nonparametric Student-t process regression models, which leads to significantly more robust surrogate models and tuning solutions than the Gaussian process (GP) regression models used in our prior work.

III-C Summary

Problems with existing approaches are that (a) they fall into local minima; (b) they often have to use NEES; and (c) they run into issues with noise and stochastic variation from a small, finite number of Monte Carlo runs. We use Bayesian optimization to avoid falling into local minima, and we use NIS to avoid requiring ground truth.

IV Bayesian Optimization for Auto-tuning

Many approaches for solving nonlinear optimization problems use gradient descent. However, the risk with these approaches is that they can fall into local minima. This issue is exacerbated for filter tuning problems defined by noisy nonlinear dynamical systems. Stochastic variations and nonlinear model characteristics can introduce many local minima into objective functions for tuning that can trap gradient descent methods. One principled way to handle such cases is to use Bayesian optimization [11], which poses optimization as a probabilistic search problem.

Bayesian optimization is first described for dealing with generic “black box” stochastic objective functions. Its novel application is then described for simulation-based Kalman filter auto-tuning.

Iv-a Bayesian Optimization Theory

Consider the minimization of an objective function $J(\mathbf{q})$, where $\mathbf{q} \in \mathcal{Q}$, $\mathcal{Q}$ is the search or solution space, and $\mathbf{q}^*$ is the minimizer, such that $\mathbf{q}^* = \arg\min_{\mathbf{q} \in \mathcal{Q}} J(\mathbf{q})$. Furthermore, we assume that each element of $\mathbf{q}$ lies in a known bounded interval.

The intuition behind BO arises from the following. First suppose that the entire solution space $\mathcal{Q}$ were densely sampled. Carrying out this process, the map $\mathbf{q} \mapsto J(\mathbf{q})$ would be entirely known and the minimizer could be read off directly. However, this dense sampling scheme is not possible in practice. Rather, the search algorithm samples a subset of the parameter space. Since the sampling is incomplete, the shape of the cost surface is not known but, rather, must be estimated from sparse and incomplete data. Therefore, the goal of Bayesian optimization is to find the minimizer of the noisy objective function while at the same time learning about the mapping from $\mathbf{q}$ to $J(\mathbf{q})$ via Bayesian inference. Bayesian optimization uses “black box” point evaluations of $J$ to efficiently find $\mathbf{q}^*$. This is accomplished by maintaining beliefs about how $J$ behaves over all of $\mathcal{Q}$ in the form of a surrogate model $S(\mathbf{q})$, which statistically approximates $J$ and is easier to evaluate (e.g. since evaluations of $J$ might require expensive high-fidelity simulation). During optimization, $S$ is used to determine where the next design point sample evaluation of $J$ should occur, in order to update beliefs over $J$ and thus simultaneously improve $S$, while finding the (expected) minimum of $J$ as quickly as possible. The key idea is that, as more observations are sampled at different locations, the samples themselves eventually converge to the expected minimizer of $J$. Since $S$ contains statistical information about the level of uncertainty in $J$ (i.e. related to the posterior belief over $J$), Bayesian optimization effectively leverages probabilistic “explore-exploit” behavior to learn a probabilistic model of $J$ while also minimizing it.

We next describe the two main components of the Bayesian optimization process: (1) the surrogate model $S(\mathbf{q})$, which encodes statistical beliefs about $J$; and (2) the acquisition function, which is used to intelligently guide the search for $\mathbf{q}^*$ via $S$.

Surrogate Model

The surrogate model is the model used to approximate the objective function. In the BO literature, nonparametric regression models based on stochastic processes are widely used [12] because they naturally model probability distributions over uncertain functions and can be evaluated at arbitrary query points given some finite set of sample observation points. Although Gaussian processes (GPs) are frequently used, in this work we use Student-t processes (TPs) instead, following the recommendation of Shah et al. [13].

A Student-t process is a stochastic process such that every finite collection of samples from the process has a multivariate Student-t joint distribution. The mean function $\mu(\cdot)$, the kernel function $k(\cdot, \cdot)$, and the degrees-of-freedom parameter $\nu$ are the main characteristics of a TP. It can be written as

$J(\mathbf{q}) \sim \mathcal{TP}\left(\mu(\mathbf{q}),\, k(\mathbf{q}, \mathbf{q}'),\, \nu\right) \quad (21)$

The real-valued parameter $\nu > 2$ controls how “heavy-tailed” the process is. The heavier the tail (i.e. the smaller the $\nu$), the more likely it is that the TP will produce a value that is far from the mean value. The TP tends to a GP as $\nu \to \infty$. The TP is attractive because it provides some extra benefits over the GP, without incurring more computational cost. For example, the predictive covariance for the TP explicitly depends on observed data values; this is a useful property which the Gaussian process lacks. Furthermore, distributions over the cost function may in general be heavy-tailed, so it is better to use a TP to “safely” model their behaviors [14]. As noted above, every finite collection of $n$ TP samples $\mathbf{y}$ has a multivariate Student-t distribution, which can be written as

$p(\mathbf{y}) = \frac{\Gamma\left(\frac{\nu + n}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)(\nu \pi)^{n/2} |\mathbf{K}|^{1/2}} \left(1 + \frac{(\mathbf{y} - \boldsymbol{\mu})^\top \mathbf{K}^{-1} (\mathbf{y} - \boldsymbol{\mu})}{\nu}\right)^{-\frac{\nu + n}{2}} \quad (22)$

where $\Gamma(\cdot)$ is the gamma function, $\boldsymbol{\mu}$ is the vector of mean function evaluations, and $\mathbf{K}$ is the covariance matrix consisting of kernel function evaluations,

$\mathbf{K}_{ij} = k(\mathbf{q}_i, \mathbf{q}_j) \quad (23)$

Equation (22) is written as the following for simplicity,

$\mathbf{y} \sim \mathrm{MVT}_n(\nu, \boldsymbol{\mu}, \mathbf{K}) \quad (24)$

For Bayesian optimization, newly sampled $\mathbf{q}$ and $y = J(\mathbf{q})$ values are added to the data vectors to construct a surrogate model of the underlying objective function according to Eq. (24). In most implementations of Bayesian optimization, as the new sample values are added, the hyperparameters for the kernel function are also re-estimated from the available data and updated accordingly. The updated surrogate model is then used to compute the acquisition function, which is used to select the next sample for evaluation of $J$.
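For illustration, the following sketch draws one finite-dimensional sample from a TP prior, using an assumed squared-exponential kernel; all names and hyperparameter values here are illustrative. Per Eq. (24), any finite collection of TP values is multivariate Student-t.

```python
import numpy as np
from scipy.stats import multivariate_t

def rbf_kernel(X1, X2, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel (hyperparameters are illustrative)."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

# Any finite collection of TP samples is multivariate Student-t (Eq. (24))
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)      # query points
K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))      # jitter for stability
nu = 5.0                                          # heavier tails than a GP
prior_sample = multivariate_t(loc=np.zeros(len(X)), shape=K, df=nu).rvs()
```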

Acquisition Function

The expected improvement (EI) function is one of many well-known acquisition functions; other popular choices include the Lower Confidence Bound (LCB). We use the expected improvement function in our implementation and discuss how it is computed next. Suppose that points $\mathbf{q}_{1:t}$ and values $y_{1:t}$ have been sampled and incorporated into the surrogate model. The current best (lowest) value is $y_{\min} = \min\{y_1, \dots, y_t\}$, which is attained at one of these sampled points, since observations of $J$ are not available for other points in $\mathcal{Q}$. The algorithm needs to choose the next point $\mathbf{q}_{t+1}$ to be sampled. We seek a new location which will yield a new, lower minimum; in other words, $y(\mathbf{q}_{t+1}) < y_{\min}$. One way to evaluate the new sample point is to evaluate its improvement with respect to $y_{\min}$:

$I(\mathbf{q}) = \max\left\{0,\; y_{\min} - y(\mathbf{q})\right\} \quad (25)$

This only assigns a non-zero value if $y(\mathbf{q}) < y_{\min}$. Therefore, the idea is to choose the $\mathbf{q}$ which maximizes the improvement. Since we only have access to the surrogate function, the improvement is stochastic. Therefore, we choose the next sample point based on the expected improvement

$\mathrm{EI}(\mathbf{q}) = \mathbb{E}\left[I(\mathbf{q})\right] \quad (26)$

where the expectation is taken with respect to the current posterior distribution given by the surrogate model. We then maximize Eq. (26) to find the next point to sample:

$\mathbf{q}_{t+1} = \arg\max_{\mathbf{q} \in \mathcal{Q}} \mathrm{EI}(\mathbf{q}) \quad (27)$

The closed form solution of the acquisition function using the TP surrogate is [14, 15],

$\mathrm{EI}(\mathbf{q}) = (y_{\min} - \mu)\,\Lambda_{\nu'}(z) + \sigma\,\frac{\nu'}{\nu' - 1}\left(1 + \frac{z^2}{\nu'}\right)\lambda_{\nu'}(z), \qquad z = \frac{y_{\min} - \mu}{\sigma} \quad (28)$

where $\mu$ and $\sigma^2$ are the mean and variance of the conditional Student-t distribution of $y(\mathbf{q})$, which is presented below in Eq. (31), and $\Lambda_{\nu'}$ and $\lambda_{\nu'}$ are the CDF and PDF of the standard Student-t distribution with $\nu'$ degrees of freedom. The conditional distribution is similar to the conditional multivariate Gaussian distribution: if we have observed values $\mathbf{y}_1$ and query values $\mathbf{y}_2$ described by a joint multivariate pdf, then

$\begin{bmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \end{bmatrix} \sim \mathrm{MVT}_{n_1 + n_2}\!\left(\nu,\; \begin{bmatrix} \boldsymbol{\mu}_1 \\ \boldsymbol{\mu}_2 \end{bmatrix},\; \begin{bmatrix} \mathbf{K}_{11} & \mathbf{K}_{12} \\ \mathbf{K}_{21} & \mathbf{K}_{22} \end{bmatrix}\right) \quad (29)$

where the blocks of the kernel matrix are computed as in Eq. (23), $\mathbf{y}_1$ collects the $n_1$ observed values, and $\mathbf{y}_2$ collects the $n_2$ query values. Eq. (29) can be written more simply as

$p(\mathbf{y}_1, \mathbf{y}_2) = \mathrm{MVT}_{n_1 + n_2}(\nu, \boldsymbol{\mu}, \mathbf{K}) \quad (30)$

The conditional Student-t distribution of $\mathbf{y}_2$ is then given by [16]

$p(\mathbf{y}_2 \mid \mathbf{y}_1) = \mathrm{MVT}_{n_2}\!\left(\nu + n_1,\; \tilde{\boldsymbol{\mu}}_2,\; \frac{\nu + \beta_1 - 2}{\nu + n_1 - 2}\,\tilde{\mathbf{K}}_{22}\right) \quad (31)$

where $\tilde{\boldsymbol{\mu}}_2 = \mathbf{K}_{21}\mathbf{K}_{11}^{-1}(\mathbf{y}_1 - \boldsymbol{\mu}_1) + \boldsymbol{\mu}_2$, $\beta_1 = (\mathbf{y}_1 - \boldsymbol{\mu}_1)^\top \mathbf{K}_{11}^{-1}(\mathbf{y}_1 - \boldsymbol{\mu}_1)$, and $\tilde{\mathbf{K}}_{22} = \mathbf{K}_{22} - \mathbf{K}_{21}\mathbf{K}_{11}^{-1}\mathbf{K}_{12}$.

The prior mean function for the surrogate model in Bayesian optimization can be set as a constant without changing the final result [17], so let $\mu(\mathbf{q}) = 0$. Substituting Eq. (31) into Eq. (28), the only unknown variable is $\mathbf{q}$. To find the $\mathbf{q}$ that maximizes Eq. (28) for the largest improvement, another inner optimization problem must be solved within Bayesian optimization. Fortunately, since Eq. (28) is known in closed form, there are several ways to maximize it, such as DIRECT [18], a derivative-free, deterministic nonlinear global optimization algorithm that is widely used for Bayesian optimization via nonparametric surrogate model regression. Once the next point to sample is selected by the inner-loop optimization, the Bayesian optimization loop can continue until the termination criterion (maximum iterations or minimum observation change between two iterations) is met.
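A compact sketch of Eqs. (28)–(31) follows, assuming a zero prior mean and reusing the `rbf_kernel` sketch above; the function names are ours, and the predictive scale is treated directly as sigma.

```python
import numpy as np
from scipy.stats import t as student_t

def tp_posterior(X_obs, y_obs, X_query, kernel, nu):
    """Conditional Student-t distribution (Eq. (31)), zero prior mean."""
    n1 = len(y_obs)
    K11 = kernel(X_obs, X_obs) + 1e-8 * np.eye(n1)
    K12 = kernel(X_obs, X_query)
    K22 = kernel(X_query, X_query)
    K11_inv = np.linalg.inv(K11)
    mu = K12.T @ K11_inv @ y_obs                    # conditional mean
    beta = float(y_obs @ K11_inv @ y_obs)           # data-dependent term
    cov = (nu + beta - 2.0) / (nu + n1 - 2.0) * (
        K22 - K12.T @ K11_inv @ K12)                # conditional covariance
    return mu, np.diag(cov), nu + n1                # mean, variance, new dof

def expected_improvement(mu, var, dof, y_min):
    """Closed-form EI under a Student-t predictive (Eq. (28))."""
    sigma = np.sqrt(np.maximum(var, 1e-12))
    z = (y_min - mu) / sigma
    return (y_min - mu) * student_t.cdf(z, dof) + \
        sigma * (dof + z ** 2) / (dof - 1.0) * student_t.pdf(z, dof)
```

Note how the data-dependent term beta inflates or shrinks the conditional covariance; this is the extra benefit over a GP mentioned above.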

The resulting algorithm for Bayesian optimization is referred to here as TPBO (Student-t process Bayesian optimization). EI is used in this work because it has been shown to yield better or equal performance to other acquisition functions for a wide variety of applications [19], [20], [21].

IV-B Stochastic Costs for Consistency-based Filter Auto-tuning

Now consider how $J(\mathbf{q})$ can be defined via NEES and NIS consistency test statistics for Kalman filter tuning. As such, let $\mathcal{Q}$ be some space of configurable Kalman filter parameters (e.g. the set of all parameters defining some symmetric positive definite process noise covariance $\mathbf{Q}$) and let $\mathbf{q} \in \mathcal{Q}$ be a design point.

Consider first the case of tuning based on assessment of NEES statistics obtained via Monte Carlo ground truth simulation models. If $N$ Monte Carlo simulations are performed for $T$ time steps at any given design point $\mathbf{q}$, starting from the initial conditions $\hat{\mathbf{x}}_{0|0}$ and $\mathbf{P}_{0|0}$, then the average NEES statistic $\bar{\epsilon}_{x,k}$ can be computed using Eq. (19) for each time $k$. To summarize how “well-behaved” $\bar{\epsilon}_{x,k}$ is across all time steps, we can leverage the fact that the expected value of the NEES for a consistent Kalman filter should be $n_x$. Therefore, we use the following cost function to evaluate how much $\bar{\epsilon}_{x,k}$ deviates from this ideal expected value

$J_{\mathrm{NEES}}(\mathbf{q}) = \left| \log\!\left( \frac{1}{T n_x} \sum_{k=1}^{T} \bar{\epsilon}_{x,k} \right) \right| \quad (32)$

The reason why we use the log inside the cost is that the NEES value itself is bounded from below (by 0) but is not bounded above. Taking the log maps this ratio to a quantity that spans from negative to positive infinity, so deviations above and below the ideal value are penalized symmetrically.

By a similar reasoning,

$J_{\mathrm{NIS}}(\mathbf{q}) = \left| \log\!\left( \frac{1}{T n_z} \sum_{k=1}^{T} \bar{\epsilon}_{z,k} \right) \right| \quad (33)$

where $\bar{\epsilon}_{z,k}$ is the averaged NIS outcome, which can either be obtained from a ground truth simulation or from a set of real data logs. One can also use a negative log-likelihood cost function, which in our tests yields similar optimization results.
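Under the time-aggregation written in Eq. (33), the NIS cost reduces to a few lines; the function name is ours.

```python
import numpy as np

def j_nis(avg_nis, n_z):
    """Eq. (33): absolute log of the time-averaged NIS over its ideal
    expected value n_z; avg_nis is the length-T array of NIS statistics
    averaged over the N Monte Carlo runs (Eq. (20))."""
    return float(np.abs(np.log(np.mean(avg_nis) / n_z)))
```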

Algorithm 1 summarizes the TPBO procedure for Kalman filter tuning. An attractive feature of TPBO is that it naturally provides uncertainty quantification on the shape of the objective function at both sampled and unsampled locations. This allows TPBO to cope with multiple local minima in the parameter space $\mathcal{Q}$.

1: Initialize TP with seed data $\{(\mathbf{q}_i, y_i)\}$ and hyperparameters
2: while termination criteria not met do
3:      $\mathbf{q}_{t+1} = \arg\max_{\mathbf{q} \in \mathcal{Q}} \mathrm{EI}(\mathbf{q})$
4:      Evaluate $y_{t+1} = J(\mathbf{q}_{t+1})$, e.g. using $J_{\mathrm{NEES}}$ or $J_{\mathrm{NIS}}$.
5:      Add $\mathbf{q}_{t+1}$ to $\mathbf{q}_{1:t}$, $y_{t+1}$ to $y_{1:t}$, and update the TP
6: end while
7: return $\mathbf{q}^* = \arg\min_{\mathbf{q}_i} y_i$
Algorithm 1 TPBO for Kalman filter tuning

Based on samples over the system input and corresponding outputs from the objective function, Bayesian optimization fits a surrogate model of the “true” objective function. The optimization method then repeats this process to find a minimum of the current surrogate model, update this surrogate model, and find the minimum of the new surrogate model until pre-set termination conditions are met. The key steps for the EKF-Bayesian optimization tuning procedure are shown in Figure 1.

Fig. 1: System Flowchart: Bayesian optimization will fit a surrogate model based on some sampled input parameters and associated costs according to the “black box” objective function, and then starts iterating until a minimum cost is found.

Once the iteration starts, we can see in the flow chart that the acquisition function is maximized. The point that maximizes the acquisition function is chosen as the next set of process noise parameters with which to run the EKF system. Running the EKF system with this new set of process noise yields a new cost, which is then used to update the surrogate model.
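Putting the pieces together, a skeleton of the loop in Algorithm 1 and Figure 1 might look as follows; it reuses the `rbf_kernel`, `tp_posterior`, and `expected_improvement` sketches above and substitutes a random candidate search for DIRECT in the inner loop, so it is an illustration rather than the paper's exact implementation.

```python
import numpy as np

def tpbo_tune(evaluate_cost, bounds, n_seed=10, n_iter=50, nu=5.0):
    """Sketch of Algorithm 1. evaluate_cost(q) runs the EKF Monte Carlo
    simulations and returns the J_NIS (or J_NEES) cost for design point q;
    bounds is a (d, 2) array of box constraints on the noise parameters."""
    dim = bounds.shape[0]
    # Seed the surrogate with random design points
    designs = np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_seed, dim))
    costs = np.array([evaluate_cost(q) for q in designs])
    for _ in range(n_iter):
        # Inner loop: maximize EI over random candidates (DIRECT in the text)
        cand = np.random.uniform(bounds[:, 0], bounds[:, 1], size=(2048, dim))
        mu, var, dof = tp_posterior(designs, costs, cand, rbf_kernel, nu)
        ei = expected_improvement(mu, var, dof, costs.min())
        q_next = cand[np.argmax(ei)]
        # Evaluate the true cost at the new point and update the surrogate data
        designs = np.vstack([designs, q_next])
        costs = np.append(costs, evaluate_cost(q_next))
    return designs[np.argmin(costs)], costs.min()
```

Kernel hyperparameter re-estimation, noted in Section IV-A, is omitted here for brevity.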

V Application to Extended Kalman Filters

We evaluated our Bayesian optimization auto-tuning algorithm on a nonlinear state estimation application that uses the Extended Kalman filter (EKF). This application is a closed-loop control system for the aero-robotic Mars Science Lab (MSL) Skycrane landing system. In this case, TPBO is used to automatically tune the assumed process noise covariance matrix, which is one of the main “tuning knobs” for the EKF.

The “Skycrane maneuver” was used as a deployment method for the Mars Science Laboratory (MSL) Curiosity rover upon its arrival and descent near the surface of Mars, as an alternative to the air bag method used in previous missions. Thrusters were used to stabilize the MSL Descent Stage System (a robotic aircraft) to zero horizontal velocity and to slowly guide the system to 20m above the ground to deploy the rover. A simplified model of this latter stage’s longitudinal dynamics will be used to simulate vehicle state estimation just prior to the rover deployment phase. Figure 2 depicts a simplified 2D longitudinal dynamics model of the MSL Descent Stage aircraft. More detailed descriptions of the MSL platform are given in [22], [23].

V-A System Description

The system is modeled as a rectangular box with two thrusters, one on each bottom corner of the aircraft, mounted at a fixed angle to the vehicle axis. The simplified vehicle states consist of the inertial translation (m), altitude above the surface (m), and pitch angle (rad), together with their rates (m/s, m/s, rad/s). The control inputs are defined in terms of the thrusts (Newtons) produced by the two thrusters. The state and control input are therefore

(34)

The equations of motion are derived here by considering only gravity, thrust, and drag forces (the vehicle is assumed not to generate significant lift in this phase). Drag will be modeled as $F_D = \frac{1}{2} C_D \rho v^2 A$, where $C_D$ is the drag coefficient, $\rho$ is the atmosphere density, $v$ is the magnitude of the velocity, and $A$ is the approximate cross-sectional area of the vehicle in the direction of motion. Let the mass of the Skycrane aircraft and payload, the mass of the fuel, the width and height dimensions of the Skycrane body as shown in Fig. 2, the dimensions of the propellant housing, and the dimensions locating the vehicle center of mass be given constants. The equations of motion are written as

(35)

To simplify the model further, changes in the fuel mass will be ignored. Values for these constants are provided in the appendix.

Sensors for state estimation consist of a simplified ideal single-axis IMU, i.e. an accelerometer and rate gyro pair which provide noisy measurements of inertial accelerations and pitch rotations about the inertial axis.

The sensor data also include on-board barometer readings to gauge altitude. Image-based tracking measurements from an overhead passing satellite are also converted into noisy platform position reports. The measurement vector can be written as

(36)

where $\mathbf{w}_k$ is the sensor error vector. The process disturbance and measurement noise vectors are

(37)
(38)

all of which are modeled as additive white Gaussian noise. To obtain the appropriate matrices for the EKF, the discrete time state transition matrix is approximated by taking the Jacobian of the Euler-integrated continuous time motion model (with sample period $\Delta t$ s). The Jacobian of the measurement model is obtained from Eq. (36). One important thing to notice is that the system model is in continuous time; to implement the filter, it must be converted into discrete time, which requires some extra work. The details for obtaining the corresponding Jacobian matrices and discretization are provided in the Appendix. Note that the vehicle must maintain a desired nominal trim state (steady hover 20 m above the surface). Linearization about the trim state reveals that the continuous time perturbation dynamics are unstable but controllable and observable. Hence, to maintain the platform at the desired state using estimated full-state feedback, a Linear Quadratic Regulator (LQR) controller is also used to define the control inputs at each discrete time step according to the control law,

(39)

where the gain matrix is a pre-calculated LQR gain, which can be obtained offline using the separation principle, assuming ideal full-state feedback for the linearized dynamics about trim (values given in the Appendix). The same closed-loop control law is used throughout the Bayesian optimization auto-tuning procedure, and the thrust values are made available to the EKF. The nominal thrusts correspond to the case when the aircraft stabilizes to the desired state without process noise, and are given by

(40)

An example of running the EKF for the Skycrane system can be seen in Figure 3, which shows the EKF’s estimated state values over time with the help of the LQR controller. The process noise and measurement noise elements are all zero mean, with fixed variances for each element (values given in the Appendix).

Fig. 2: The Skycrane aircraft has two thrusters angled at 45 degrees, which nominally try to keep the platform 20 meters above the ground with zero translational and rotational motion.
Fig. 3: Sample Skycrane simulation showing the LQR controller maintains the desired reference state using EKF-estimated full-state feedback.

V-B Discrete EKF From Continuous Time Model

When the underlying model is in continuous time, the EKF prediction stage differs from the general discrete time EKF.
The prediction stage is

$\hat{\mathbf{x}}_{k|k-1} = \hat{\mathbf{x}}_{k-1|k-1} + \int_{t_{k-1}}^{t_k} f\left(\hat{\mathbf{x}}(t), \mathbf{u}_k\right) dt \quad (41)$
$\mathbf{P}_{k|k-1} = \mathbf{F}_k \mathbf{P}_{k-1|k-1} \mathbf{F}_k^\top + \mathbf{Q}_d \quad (42)$

The predicted state in Eq. (41) cannot be obtained directly in closed form from the continuous time model, so other methods are needed. The first method is to use the first-order (Euler) linearized form to estimate it, which means

$\hat{\mathbf{x}}_{k|k-1} \approx \hat{\mathbf{x}}_{k-1|k-1} + \Delta t \, f\left(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_k\right) \quad (43)$

The second method is to numerically solve the ordinary differential equation (ODE), which can yield a more precise result. In our implementation we use the ODE integration library [24] to evaluate Eq. (41). In Eq. (42), the Jacobian $\mathbf{F}_k$ can be computed from Eq. (13), and a mapping matrix (given in the Appendix) maps the 3-dimensional process noise into the 6-dimensional state space. The discretized process noise covariance $\mathbf{Q}_d$ is

(44)

The update stage remains the same as Eqs. (9)–(12). The measurement noise and its covariance are fixed and given in the Appendix.
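For concreteness, the following sketch (assuming SciPy; all function names are illustrative, not from the paper) shows both prediction options described above, plus a finite-difference approximation of the Euler-discretized state transition Jacobian.

```python
import numpy as np
from scipy.integrate import solve_ivp

def predict_euler(f_cont, x_est, u, dt):
    """First method, Eq. (43): one Euler step through dx/dt = f_cont(x, u)."""
    return x_est + dt * f_cont(x_est, u)

def predict_ode(f_cont, x_est, u, dt):
    """Second method, Eq. (41): numerically integrate the ODE over one
    sample period (more precise than a single Euler step)."""
    sol = solve_ivp(lambda t, x: f_cont(x, u), (0.0, dt), x_est,
                    method="RK45", rtol=1e-8, atol=1e-8)
    return sol.y[:, -1]

def discrete_F(f_cont, x_est, u, dt, eps=1e-6):
    """Euler-discretized state transition Jacobian F_k ~ I + dt * A, with
    the continuous-time Jacobian A obtained by finite differences."""
    n = len(x_est)
    A = np.zeros((n, n))
    fx = f_cont(x_est, u)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f_cont(x_est + dx, u) - fx) / eps
    return np.eye(n) + dt * A
```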

V-C Optimization Results

As a first simple experiment, TPBO is used to perform a 1D parameter search, with the process noise covariance defined as a constrained diagonal matrix whose entries are all tied to a single scalar parameter.

For the 1D parameter optimization, TPBO was applied over the given search range, using 10 initial surrogate model seed samples, 50 total iterations, and 200 Monte Carlo runs. The surrogate model and sample points for different iterations are shown in Figure 8. Table I also shows the numerical values for the final best minimizer found.

For the 2D parameter optimization, one parameter is held fixed while the other two are optimized. The upper bound is set to [1, 1], with the lower bound set correspondingly. We use 20 initial samples, 80 iterations, and 200 Monte Carlo runs. Again, the mean value of the surrogate model and the sample points TPBO found at different iterations are shown in Figure 12. From the 1D optimization results we can clearly see the samples converge to points around 0.1 with high confidence, and from the 2D optimization results we can see the samples converge to points around [0.1, 0.01]. However, from the 2D results in Table I we can see that after 80 iterations the cost does not change significantly with one of the parameters when the other is around 0.1. This phenomenon may lead to a non-optimal result from TPBO. In Bayesian optimization, this happens when certain dimensions do not have a great impact on the cost, which effectively places more weight on the other dimensions.

For the 3D optimization, the boundaries for the three parameters are set per dimension. We use 30 initial samples, 100 iterations, and 200 Monte Carlo runs for each sample. The value of the second parameter is far from the optimum, for the same reason as in the 2D optimization, which yields a relatively larger error when we check the RMSE in Figure 16.

Fig. 4: An example time series error between Skycrane estimates using the BO-tuned optimal parameter values and ground truth states (orange lines: bounds; blue line: error). Intuitively, the error should lie between the bounds about 95% of the time if the filter is consistent.
(a) iteration1
(b) iteration10
(c) iteration50
Fig. 8: 1D plots of surrogate functions and uncertainty bounds for the Skycrane problem, as well as sample points chosen by Bayesian optimization. The black dashed line represents the lower and upper uncertainty bounds for a 95% confidence interval. The green dots are the initial sample points. The red dots are the sample points after the iteration starts. The blue line is the mean of the surrogate model
(a) iteration1
(b) iteration50
(c) iteration80
Fig. 12: 2D surface plots of surrogate functions for the Skycrane problem, as well as sample points. The green circles are the initial sample points and the red crosses are the sample points after the iterations start. Upper and lower bound surfaces are not plotted here, as the surfaces would otherwise cover each other.
(a) RMSE of
(b) RMSE of
(c) RMSE of
Fig. 16: “Nd_opt” means we use the optimization result from the N-dimensional search and run the Skycrane system. The left plots show the RMSE of translation and the right one shows the RMSE of rotation.

To validate the results of TPBO, the RMS errors of three states between the EKF’s state estimates and the ground truth (states from the simulator) are evaluated. We apply the 1D, 2D, and 3D optimized parameters with a random set of process noise for 50 simulation runs of the Skycrane EKF, obtaining the results in Figure 16. These results show that the 2D optimization results yield the best state estimates, since they have the minimum median error and the smallest lower and upper error bounds. The 1D optimization result is similar to the 2D optimization result. It is worth noting that all the errors are in fact small; for example, even though the RMSE of the pitch angle is visually larger than the others, its median value is small (in radians). This is because the measurement noise is set to fit the model precisely, so that its behavior will be robust most of the time. Under this condition, TPBO’s optimization ability is assessed over a relatively small range of NIS cost values. Figure 4 shows a typical trace of the state estimation error using the 3D optimized parameter estimates, indicating that consistent estimates are in fact obtained.
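The validation metric here is the per-state RMS error between the estimated and ground-truth trajectories over each run; a minimal sketch (our naming, not the paper's) is:

```python
import numpy as np

def rmse_per_state(x_est, x_true):
    """Per-state RMSE over one simulation run.

    x_est, x_true: (T, n_x) arrays of estimated and ground-truth state
    trajectories; returns a length-n_x vector of RMS errors."""
    return np.sqrt(np.mean((x_est - x_true) ** 2, axis=0))
```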

Type     Cost     Optimal
1D opt   0.0206   (0.098, 0.098, 0.0098)
2D opt   0.0251   (0.0446, 0.1, 0.0119)
3D opt   0.0193   (0.0349838, 0.999984, 0.0152815)
TABLE I: “Optimal” gives the optimal process noise values for the EKF covariance; the three entries stand for the three process noise parameters, respectively.

VI Conclusion

As a black box optimization method, TPBO reduces what we need to know about a system in order to find the minimum cost. We used the Skycrane state estimation example to show that this algorithm can be applied to complex nonlinear systems. This novel approach can also help practitioners find the optimal process noise covariance much faster than tuning the EKF manually. In this paper we have shown results using an NIS-based cost function only. Although a NEES-based cost function can also work just as well, as shown in [5], it requires the availability of ground truth state information, so NIS-based cost functions may often be more practical and are applicable with real sensor data. In this paper, we have also focused only on optimizing filter process noise parameters. However, the same auto-tuning process can be applied if the measurement noise also needs to be adjusted for a particular application [5]. In fact, if we lack confidence in both the process noise and measurement noise covariances, it is possible to use TPBO to optimize these parameters simultaneously. The flexibility of TPBO allows us to do more.

In the future, given the robustness of this algorithm, we aim to apply it to more realistic hardware-based system tuning problems. Another interesting direction for future research involves optimization of higher dimensional parameter spaces, where some dimensions may have little or no noticeable effect on the NIS tuning cost. Possible strategies for handling this might include using TPBO to optimize those parameters which have a significant impact on the tuning cost, leaving the remaining parameters to be hand-tuned. Since the TPBO algorithm is able to support arbitrary “black box” cost function evaluations, modifications or alternatives to the NIS cost function could also be explored. Finally, the flexibility of our TPBO approach means that it can be applied to other optimal estimator tuning problems. Most notably, we have already applied this approach to VI-SLAM (Visual-Inertial Simultaneous Localization and Mapping) extrinsic parameter calibration [25], where the “extrinsic parameters” are the relative pose between the camera and the other sensors.

Appendix A Additional Information on the Skycrane Model

A-A Jacobian and Parameters

The process model for the Skycrane has the continuous-time form $\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u}, \mathbf{v})$, where

(45)

Substituting for Eq. (35) and taking derivatives, the Jacobian of the process model is

(46)

where

(47)

Taking derivatives of (36),

(48)

where

(49)

The symbols in (47) and (49) are defined as follows

(50)

All of the basic constant values are given here:

(51)

The mapping between the 3-dimensional process noise and the 6-dimensional state space is

(52)

The fixed measurement noise is

(53)

The measurement noise covariance is set as

(54)

A-B Feedback Law of the LQR Controller

We need to linearize the motion model in order to use the LQR controller, i.e. calculate the Jacobians of the motion model. We have already calculated the Jacobian of the motion model with respect to the state. We also need the Jacobians with respect to the control input and the noise. The Jacobian with respect to the control input is

(55)

The Jacobian with respect to the noise is