Multi-Sparse Gaussian Process: Learning based Semi-Parametric Control


Abstract

A key challenge in controlling complex dynamical systems is to accurately model them. However, this requirement is very hard to satisfy in practice. Data-driven approaches such as Gaussian processes (GPs) have proved quite effective by employing regression-based methods to capture the unmodeled dynamical effects. However, GPs scale cubically with data, and performing real-time regression is often a challenge. In this paper, we propose a semi-parametric framework exploiting sparsity for learning-based control. We combine the parametric model of the system with multiple sparse GP models to capture any unmodeled dynamics. Multi-Sparse Gaussian Process (MSGP) divides the original dataset into multiple sparse models with unique hyperparameters for each model, thereby preserving the richness and uniqueness of each sparse model. For a query point, a weighted sparse posterior prediction is performed based on neighboring sparse models. Hence, the prediction complexity is significantly reduced from $\mathcal{O}(n_l)$ to $\mathcal{O}(m_l)$, where $n_l$ and $m_l$ are the data points and pseudo-inputs respectively for each sparse model. We validate MSGP's learning performance for a quadrotor using a geometric controller in simulation. Comparison with GP, sparse GP, and local GP shows that MSGP has higher prediction accuracy than sparse and local GP, and significantly lower time complexity than all three. We also validate MSGP on a hardware quadrotor for unmodeled mass, inertia, and disturbances. The experiment video can be seen at: https://youtu.be/zUk1ISux6ao

I Introduction

Precise knowledge of the model for controlling complex nonlinear systems is crucial for achieving state estimation [8] and robot tracking control [13]. However, it is often difficult, if not impossible, to accurately model such systems with high fidelity. Data-driven methods have been successful in learning the unmodeled components of system dynamics to a high degree in a supervised learning paradigm. One popular approach is using Gaussian processes (GPs) for non-parametric regression [16, 11]. Performing real-time regression with GPs is challenging due to their $\mathcal{O}(n^3)$ complexity, since matrix inverse operations are performed on the $n$ observed data points. This limits the use of GPs in applications requiring large amounts of data or fast computation times, e.g., safety-critical autonomous vehicles and aerial drones. Approximations to GPs, both sparse and local, have been proposed to deal with these challenges while balancing the inherent trade-off between accuracy and complexity. In this paper, we propose Multi-Sparse Gaussian Process (MSGP), a novel framework that leverages benefits from both sparse and local GP approximations to perform low-complexity, efficient, and accurate learning for robot control.

Various non-parametric regression frameworks for real-time learning have been proposed. Locally weighted projection regression (LWPR) learns the true function locally, spanned by a number of univariate regressions with weighted kernels [23]. LWPR, however, requires manual tuning of many metaparameters and a large number of linear models for achieving satisfactory approximation of the original function. GPs offer an appealing alternative to LWPR for model learning. GPs are flexible since they learn the model structure and estimate any hyperparameters from the data itself [16]. Thus, GPs are very powerful in capturing higher order nonlinearities with high prediction accuracy, e.g., in robot tracking and control [11].

Due to GPs' poor scaling with dataset size, many extensions have been developed. Local GP (LGP) is a hybrid between GP and LWPR that divides the entire dataset into local models and predicts using a weighted average of these local models [12]. LGP is shown to outperform LWPR while retaining accuracy close to standard GP. However, LGP does not retain the uniqueness of each model, and as the size per model increases, so does the complexity, since it assumes a full GP per local model. Unlike LGP, sparse techniques approximate GPs by selecting a set of pseudo-inputs to mimic the original likelihood. There are many sparse approximations, and we refer the reader to [2, 14] for details and their unifying framework. An online mixture of experts using sparse GPs has been used for learning shared control policies [21]. That study applies learning-based control to a smart wheelchair, which is not a dynamically safety-critical platform. Learning-based control using GPs has been demonstrated for quadrotors in [1], [24], [25], [3], [19]. However, these works use standard GPs in their frameworks and are not scalable to large datasets.

Our key contributions are the following. First, we present a semi-parametric framework using sparsity that estimates model nonlinearities. To this end, multiple sparse GPs are equipped with basis functions obtained from physics first principles. Semi-parametric methods have been applied for inverse dynamics [11], [18], system identification [26], and forward dynamics [17], all using standard GPs. To the best of our knowledge, no prior work has merged semi-parametric modeling with sparse GP approximations. Second, we create multiple sparse approximations of the original GP, clustered into regionally sparse models without making any global assumptions. Local models trade prediction accuracy for reduced complexity. To overcome this, each sparse model is optimized for its own hyperparameters, and a weighted sparse posterior prediction is performed. Third, we validate the learning performance of MSGP on a hardware quadrotor platform. Learning-based control, especially using sparsity, for a safety-critical system such as a quadrotor has not been demonstrated before to the best of our knowledge. We address sparse-based learning on a quadrotor, whose dynamics evolve on the tangent bundle to $SE(3)$. Additionally, we also compare the learning performance of MSGP against other GP methods on a geometric quadrotor controller [9] in simulation.

The rest of the paper is organized as follows. Section II presents the problem formulation. Section III covers background regarding GPs and sparse GPs. Section IV describes the proposed MSGP framework. Semi-parametric based control with MSGP is shown in Section V. Simulation results showing MSGP’s learning performance are discussed in Section VI. Hardware experiments using MSGP is discussed in Section VII followed by the conclusion in Section VIII.

II Problem Statement

We consider a nonlinear, continuous-time system,

$\dot{x}(t) = f(x, u) + g(x, u)$ (1)

where $x(t) \in \mathbb{R}^n$ is the state and $u(t) \in \mathbb{R}^m$ is the control input at time $t$. The system dynamics are divided into a known parametric model $f(x, u)$ and an unknown non-parametric model $g(x, u)$. The latter contains the unmodeled dynamics.

The goal is to estimate the model nonlinearities $g(x, u)$ for (1) in a semi-parametric manner using sparsity by placing multiple sparse GP priors on the non-parametric component. The basis functions for these sparse GPs take into account the physical knowledge of the system, i.e., the parametric component. This is equivalent to a semi-parametric model given by,

$g(x, u) \sim \mathcal{GP}\big(0,\; k\big((x, u), (x', u')\big)\big)$ (2)
$\dot{x} \sim \mathcal{GP}\big(f(x, u),\; k\big((x, u), (x', u')\big)\big)$ (3)

Comparing the resulting dynamics in (3) with (1) and (2) makes it clear that the objective of MSGP is to model the nonlinearities using multiple sparse GPs. This problem has been addressed before using standard GPs. We differ in our motivation to achieve the same using multiple sparse approximations to GPs, without compromising speed and accuracy for growing datasets. We assume that we can measure the states $x(t)$, which are corrupted by independent, zero-mean, and bounded noise $\epsilon$. We also assume a nominal controller exists that drives the parametric model to the zero equilibrium point.

III Background Preliminaries

Here, we present the preliminaries for GPs and one of their sparse variants, the sparse pseudo-input GP (SPGP) [20].

III-A Standard GP Regression

GPs are a popular choice for nonparametric regression in machine learning. We are interested in learning an underlying latent function $f(x)$, for which we assume to have noisy observations $y = f(x) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma_n^2)$. Given a set of $n$ training data points with input vectors $x_i \in \mathbb{R}^D$ and scalar noisy observations $y_i$, we compose the dataset $\mathcal{D} = \{X, y\}$, where $X = \{x_i\}_{i=1}^{n}$ and $y = \{y_i\}_{i=1}^{n}$. A GP places a distribution on the unknown function $f$, treating it as random variables associated with different values of $x$, any finite number of which produces a consistent joint Gaussian distribution [16]. For instance, $x$ here could represent a robot's states, and $f(x)$ could represent the unmodeled system dynamics (see Section V-C).

A GP can be fully specified by its mean $m(x)$ and covariance $k(x, x')$. The latter is also called the kernel, measuring similarity between any two inputs $x, x'$. GPs can be used to predict the function value $f(x_*)$ for an arbitrary query point $x_*$ by conditioning on previous observations. The posterior predictive mean and variance are then given by [16]:

$\mu(x_*) = k_*^T \left(K + \sigma_n^2 I\right)^{-1} y$ (4)
$\sigma^2(x_*) = k(x_*, x_*) - k_*^T \left(K + \sigma_n^2 I\right)^{-1} k_*$ (5)

where $k_* = k(X, x_*)$ is the covariance between the input points in $X$ and the query point $x_*$, $K \in \mathbb{R}^{n \times n}$ is the covariance matrix between pairs of input points with entries $K_{ij} = k(x_i, x_j)$, and $I$ is the identity matrix. The hyperparameters of a GP depend on the kernel choice and can be problem-dependent. We refer the reader to [16] for a review of different kernels. For a Gaussian kernel, the hyperparameters that best suit the particular dataset can be derived by maximizing the log marginal likelihood using quasi-Newton methods [16].
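As a concrete reference, the sketch below computes the posterior mean and variance (4), (5) with a squared-exponential kernel via a Cholesky factorization. The function names and hyperparameter defaults are illustrative, not from the paper.

```python
# Minimal sketch of standard GP posterior prediction (Eqs. 4-5), assuming a
# squared-exponential kernel; names and defaults are illustrative.
import numpy as np

def sq_exp_kernel(A, B, ell=1.0, sigma_f=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, x_star, ell=1.0, sigma_f=1.0, sigma_n=0.1):
    """Posterior mean and variance at a single query point x_star."""
    K = sq_exp_kernel(X, X, ell, sigma_f)                      # n x n
    k_star = sq_exp_kernel(X, x_star[None, :], ell, sigma_f)   # n x 1
    L = np.linalg.cholesky(K + sigma_n**2 * np.eye(len(X)))    # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))        # (K + s_n^2 I)^{-1} y
    v = np.linalg.solve(L, k_star)
    mean = k_star.T @ alpha                                    # Eq. (4)
    var = (sq_exp_kernel(x_star[None, :], x_star[None, :], ell, sigma_f)
           - v.T @ v + sigma_n**2)                             # Eq. (5) + noise
    return mean.item(), var.item()
```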

III-B Sparse Pseudo-Input GP Regression

Despite GPs being very powerful regressors, as the dataset grows larger they become computationally intractable. Hence, many sparse approximations of GPs have been developed to bring down the complexity while retaining accuracy [20], [5], [7], [10], [22]. Broadly stated, sparse approximations of GPs fall into two major categories: approximate generative models with exact inference, and exact generative models with approximate inference. Unifying theories for these various frameworks are discussed in [2], [14]. We focus on a variant of the former category: the Sparse Pseudo-Input Gaussian Process (SPGP) [20].

The starting point for any GP approximation method is a set of so-called inducing or pseudo points giving rise to sparsity. Consider a pseudo-dataset $\bar{\mathcal{D}} = \{\bar{X}, \bar{f}\}$ of size $M \ll N$: the pseudo-inputs are $\bar{X} = \{\bar{x}_m\}_{m=1}^{M}$ and the pseudo-targets are $\bar{f} = \{\bar{f}_m\}_{m=1}^{M}$. The objective is to find the posterior distribution over the pseudo-targets, followed by the predictive distribution obtained by integrating the likelihood against this posterior. A complete mathematical treatment of the derivation of the SPGP predictive distribution can be found in [20]. Here, we simply present the predictive mean and variance of SPGP:

$\mu_*(x_*) = k_*^T Q_M^{-1} K_{MN} \left(\Lambda + \sigma_n^2 I\right)^{-1} y$ (6)
$\sigma_*^2(x_*) = k(x_*, x_*) - k_*^T \left(K_M^{-1} - Q_M^{-1}\right) k_* + \sigma_n^2$ (7)

where $K_{NM}$ is the covariance matrix between the input points $X$ and pseudo-inputs $\bar{X}$, $K_M$ is the covariance between pairs of pseudo-inputs, $\Lambda = \mathrm{diag}(\lambda)$ is a diagonal matrix with entries $\lambda_n = k(x_n, x_n) - k_n^T K_M^{-1} k_n$, and $Q_M = K_M + K_{MN}\left(\Lambda + \sigma_n^2 I\right)^{-1} K_{NM}$. The inversion cost for the covariance matrix is reduced to $\mathcal{O}(NM^2)$ [20]. The cost per test case is $\mathcal{O}(M)$ and $\mathcal{O}(M^2)$ for the predictive mean and variance respectively.
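The following sketch implements the SPGP predictive equations (6), (7) from the matrices defined above; it reuses the hypothetical `sq_exp_kernel` from the earlier sketch, and all names are illustrative.

```python
# Hedged sketch of the SPGP predictive equations (6)-(7) [20]; `kern` is any
# kernel function such as sq_exp_kernel above.
import numpy as np

def spgp_predict(X, y, Xbar, x_star, kern, sigma_n=0.1):
    N, M = len(X), len(Xbar)
    K_M = kern(Xbar, Xbar) + 1e-8 * np.eye(M)        # M x M, jitter for stability
    K_NM = kern(X, Xbar)                             # N x M
    k_star = kern(Xbar, x_star[None, :])             # M x 1
    # Lambda: diagonal correction lambda_n = k_nn - k_n^T K_M^{-1} k_n
    V = np.linalg.solve(np.linalg.cholesky(K_M), K_NM.T)   # M x N, O(NM^2)
    lam = np.diag(kern(X, X)) - (V**2).sum(0)        # full matrix for brevity
    D = lam + sigma_n**2                             # diagonal of (Lambda + s_n^2 I)
    Q_M = K_M + (K_NM.T / D) @ K_NM                  # M x M
    beta = np.linalg.solve(Q_M, K_NM.T @ (y / D))    # Q_M^{-1} K_MN (Lam+s^2 I)^{-1} y
    mean = k_star.T @ beta                           # Eq. (6), O(M) per query
    q = np.linalg.solve(Q_M, k_star)
    p = np.linalg.solve(K_M, k_star)
    var = (kern(x_star[None, :], x_star[None, :])
           - k_star.T @ (p - q) + sigma_n**2)        # Eq. (7), O(M^2) per query
    return mean.item(), var.item()
```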

IV Multi-Sparse Gaussian Process

We discuss our proposed methodology, inspired by the theoretical developments of SPGP and the architecture of LGP. At first glance, MSGP can be seen simply as the combination of SPGP and LGP; however, it outperforms both SPGP and LGP, which is counterintuitive. MSGP at its core differs from LGP by using multiple sparse models (instead of full GP models) with unique hyperparameters for each model. Note that, although we choose SPGP as the sparse GP representative, MSGP is agnostic to the underlying sparse approximation.

IV-A Multi-Sparse Model Clustering

The entire dataset $\mathcal{D}$ of $n$ training points is divided into $K$ local models (randomly/deterministically), each with approximately $n_l = n/K$ data points. Every local model is denoted by $\mathcal{M}_k$, comprising its dataset $\{X_k, y_k\}$ and corresponding center $c_k$. Next, sparsity is introduced in each local model by selecting a set of pseudo-inputs $\bar{X}_k$ of size $m_l$, where $m_l \ll n_l$. These pseudo-inputs are selected arbitrarily at first for each sparse model, to be optimized later in the hyperparameter tuning phase. Thus, each sparse model in MSGP (see Figure 1) is specified as $\mathcal{S}_k = \{X_k, y_k, \bar{X}_k, c_k\}$, parameterized by localized hyperparameters $\theta_k = (\sigma_{n,k}, \sigma_{f,k}, \ell_k)$, where $\sigma_{n,k}^2$ and $\sigma_{f,k}^2$ are the noise and signal variance, and $\ell_k$ is the characteristic length-scale.

Fig. 1: The original dataset (purple) is divided into $K$ local models (yellow) with approximately $n_l$ data points each and a corresponding center $c_k$. Each local model is further approximated by its sparse representation (blue), with $m_l$ local pseudo-inputs.
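A minimal sketch of this clustering step follows, assuming k-means centers and random initial pseudo-inputs; the paper leaves the partitioning scheme open (random/deterministic), so this is one possible realization, and all names are assumptions.

```python
# Illustrative clustering step (Section IV-A): split the dataset into K local
# models and pick m_l initial pseudo-inputs per model.
import numpy as np
from sklearn.cluster import KMeans

def build_sparse_models(X, y, K=10, m_l=20, seed=0):
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=K, random_state=seed).fit_predict(X)
    models = []
    for k in range(K):
        Xk, yk = X[labels == k], y[labels == k]
        idx = rng.choice(len(Xk), size=min(m_l, len(Xk)), replace=False)
        models.append({
            "X": Xk, "y": yk,
            "Xbar": Xk[idx].copy(),     # initial pseudo-inputs, optimized later
            "center": Xk.mean(axis=0),  # c_k, used for the distance weighting
            "theta": None,              # localized hyperparameters, tuned per model
        })
    return models
```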

IV-B Localized Hyperparameter Tuning

Each sparse model is parameterized by $\theta_k$, which best fits its own dataset since we perform a separate optimization procedure on each model. The marginal likelihood for each model is,

$p(y_k \mid X_k, \bar{X}_k, \theta_k) = \mathcal{N}\!\left(y_k \,\middle|\, 0,\; K_{NM}^{k} (K_M^{k})^{-1} K_{MN}^{k} + \Lambda_k + \sigma_{n,k}^2 I\right)$ (8)

where $K_{NM}^{k}$ is the covariance between the local inputs $X_k$ and pseudo-inputs $\bar{X}_k$, $K_M^{k}$ is the covariance between pairs of local pseudo-inputs, $\Lambda_k$ is a diagonal matrix with entries $\lambda_n = k(x_n, x_n) - k_n^T (K_M^{k})^{-1} k_n$, and $\sigma_{n,k}^2$ is the local noise variance.

By maximizing the log marginal likelihood of (8), we can jointly optimize the hyperparameters $\theta_k$ and pseudo-inputs $\bar{X}_k$ for each sparse model as given by:

$(\theta_k^*, \bar{X}_k^*) = \underset{\theta_k, \bar{X}_k}{\arg\max}\; \log p(y_k \mid X_k, \bar{X}_k, \theta_k)$ (9)

The approximate multiple sparse generative model has attractive properties. Firstly, the training complexity in MSGP is reduced to $\mathcal{O}(n_l m_l^2)$ per sparse model from GP's $\mathcal{O}(n^3)$, which is a significant reduction. Moreover, by optimizing the hyperparameters along with the pseudo-inputs for each sparse model, we preserve the richness and uniqueness of each model, unlike LGP. Next, we look at sparse posterior predictions for a query point $x_*$.
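To make the tuning step concrete, the sketch below evaluates the negative log of (8) directly and minimizes it jointly over log-hyperparameters and pseudo-inputs with a quasi-Newton method, as in (9). The direct $\mathcal{O}(n_l^3)$ evaluation is for clarity only; [20] gives the $\mathcal{O}(n_l m_l^2)$ form. It reuses the hypothetical `sq_exp_kernel` from the Section III-A sketch.

```python
import numpy as np
from scipy.optimize import minimize

def spgp_nlml(params, X, y):
    """Negative log of the SPGP marginal likelihood (8), evaluated directly."""
    log_ell, log_sf, log_sn = params[:3]
    Xbar = params[3:].reshape(-1, X.shape[1])        # pseudo-inputs, optimized jointly
    ell, sf, sn = np.exp(log_ell), np.exp(log_sf), np.exp(log_sn)
    kern = lambda A, B: sq_exp_kernel(A, B, ell, sf)
    K_M = kern(Xbar, Xbar) + 1e-8 * np.eye(len(Xbar))
    K_NM = kern(X, Xbar)
    Qff = K_NM @ np.linalg.solve(K_M, K_NM.T)        # low-rank Nystrom term
    lam = np.maximum(np.diag(kern(X, X)) - np.diag(Qff), 0.0)
    C = Qff + np.diag(lam) + sn**2 * np.eye(len(X))  # covariance in Eq. (8)
    L = np.linalg.cholesky(C)
    a = np.linalg.solve(L, y)
    return 0.5 * (a @ a) + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

def tune_model(model):
    """Eq. (9): jointly optimize log-hyperparameters and pseudo-inputs."""
    x0 = np.concatenate([[0.0, 0.0, -2.0], model["Xbar"].ravel()])
    res = minimize(spgp_nlml, x0, args=(model["X"], model["y"]), method="L-BFGS-B")
    model["theta"] = res.x[:3]                       # (log ell, log sf, log sn)
    model["Xbar"] = res.x[3:].reshape(-1, model["X"].shape[1])
    return model
```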

IV-C Multi-Sparse Posterior Prediction

Optimizing hyperparameters for each sparse model may give rise to overfitting during the prediction phase. To remedy this, the posterior prediction in MSGP uses a weighted average over neighboring sparse predictions for a query point $x_*$. The idea of weighted averaging for predictors was first introduced in LWPR and is also used by LGP for its predictions. Akin to LWPR, we also perform weighted averaging, but using weighted sparse posterior predictions instead. The $M$ nearest sparse models can be determined quickly using the Gaussian kernel:

$w_k(x_*) = \exp\!\left(-\tfrac{1}{2}\,\|x_* - c_k\|^2 / \ell_k^2\right)$ (10)

where $c_k$ is the center of model $k$ and $\ell_k$ is the respective characteristic length-scale of model $k$. Finally, the posterior prediction of MSGP's predictive mean is as follows:

$\bar{y}(x_*) = \dfrac{\sum_{k=1}^{M} w_k(x_*)\, \bar{y}_k(x_*)}{\sum_{k=1}^{M} w_k(x_*)}$ (11)
$\bar{y}_k(x_*) = k_{*k}^T (Q_M^{k})^{-1} K_{MN}^{k} \left(\Lambda_k + \sigma_{n,k}^2 I\right)^{-1} y_k$ (12)

where $k_{*k}$ gives the covariance between the local pseudo-inputs in $\bar{X}_k$ and the query point $x_*$, and $\sigma_{n,k}^2$ is the local noise variance. Hence, the predictive mean complexity in MSGP is $\mathcal{O}(M m_l)$ compared to LGP's complexity of $\mathcal{O}(M n_l)$.
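Putting the pieces together, here is a sketch of the weighted prediction (10)-(12), reusing the hypothetical helpers defined in the earlier sketches.

```python
# Sketch of the weighted sparse posterior prediction (Eqs. 10-12): the M
# nearest sparse models vote on the query, weighted by a Gaussian kernel on
# the distance to each model's center.
import numpy as np

def msgp_predict(models, x_star, M_neighbors=3):
    centers = np.stack([m["center"] for m in models])
    d2 = ((centers - x_star) ** 2).sum(axis=1)
    nearest = np.argsort(d2)[:M_neighbors]           # M closest sparse models
    w, mu = [], []
    for k in nearest:
        mdl = models[k]
        ell, sf, sn = np.exp(mdl["theta"])           # localized hyperparameters
        kern = lambda A, B: sq_exp_kernel(A, B, ell, sf)
        mean_k, _ = spgp_predict(mdl["X"], mdl["y"], mdl["Xbar"], x_star, kern, sn)
        w.append(np.exp(-0.5 * d2[k] / ell**2))      # Eq. (10)
        mu.append(mean_k)
    w = np.asarray(w)
    return (w @ np.asarray(mu)) / w.sum()            # Eqs. (11)-(12)
```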

V Quadrotor Learning and Control

Most modern controllers require accurate knowledge of the model for improved trajectory tracking. By learning the unmodeled component using MSGP, we demonstrate improved trajectory tracking for a quadrotor in $SE(3)$. First, we briefly outline the dynamics and the geometric controller of the quadrotor. We then discuss the augmentation of MSGP with the geometric controller for improved tracking in the presence of unmodeled dynamics and uncertainties.

V-A Geometric Dynamics Model

We consider the complete dynamics of a quadrotor model evolving in a coordinate-free framework. This framework uses a geometric representation for its attitude given by a rotation matrix $R \in SO(3)$, which represents the rotation from the body-frame to the inertial-frame. The origin of the body-frame is given by the quadrotor's center of mass, denoted by $x \in \mathbb{R}^3$. A quadrotor is underactuated since it has six DOF, due to its configuration space $SE(3)$, but only four control inputs: thrust $f \in \mathbb{R}$ and moment $M \in \mathbb{R}^3$. The equations of motion are:

$\dot{x} = v$ (13)
$m\dot{v} = mge_3 - fRe_3$ (14)
$\dot{R} = R\hat{\Omega}$ (15)
$J\dot{\Omega} + \Omega \times J\Omega = M$ (16)

where $v \in \mathbb{R}^3$ is the velocity in the inertial frame, $\hat{\cdot}$ denotes the skew-symmetric operator, $e_3 = [0, 0, 1]^T$, $m$ is the mass, $g$ is the gravitational acceleration, $J \in \mathbb{R}^{3 \times 3}$ is the inertia matrix, and $\Omega \in \mathbb{R}^3$ is the body-frame angular velocity.
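For reference, a minimal forward-Euler step of (13)-(16) is sketched below; it is a numerical convenience only, not the authors' simulator, and the mass and inertia defaults are illustrative placeholders.

```python
# Hedged sketch of the rigid-body dynamics (13)-(16) as a forward-Euler step.
import numpy as np

def hat(a):
    """Skew-symmetric matrix such that hat(a) @ b == np.cross(a, b)."""
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def quad_step(x, v, R, Om, f, M, m=0.75, J=np.diag([0.0053, 0.0053, 0.0086]),
              g=9.81, dt=1e-3):
    e3 = np.array([0.0, 0.0, 1.0])
    x_new = x + dt * v                                    # Eq. (13)
    v_new = v + dt * (g * e3 - (f / m) * (R @ e3))        # Eq. (14)
    R_new = R @ (np.eye(3) + dt * hat(Om))                # Eq. (15), first order
    Om_new = Om + dt * np.linalg.solve(J, M - np.cross(Om, J @ Om))  # Eq. (16)
    return x_new, v_new, R_new, Om_new
```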

V-B Geometric Controller Tracking in SE(3)

The geometric controller used for trajectory tracking, presented in [9], has almost global exponential stability. This implies the quadrotor can reach any desired state in the state-space from almost any initial configuration. For a complete mathematical treatment of the nominal controller, see [9]. Here, we simply present the equations for the nominal thrust $f$ and moment $M$:

$f = \left(k_x e_x + k_v e_v + mge_3 - m\ddot{x}_d\right) \cdot Re_3, \qquad M = -k_R e_R - k_\Omega e_\Omega + \Omega \times J\Omega - J\left(\hat{\Omega} R^T R_d \Omega_d - R^T R_d \dot{\Omega}_d\right)$ (17)

where $k_x, k_v, k_R, k_\Omega$ are positive constants, $e_x = x - x_d$, $e_v = v - v_d$, $e_R = \frac{1}{2}\left(R_d^T R - R^T R_d\right)^\vee$, and $e_\Omega = \Omega - R^T R_d \Omega_d$. The desired position, velocity, attitude, and angular velocity are $x_d$, $v_d$, $R_d$, and $\Omega_d$ respectively. $(\cdot)^\vee$ is the inverse of $\hat{\cdot}$, i.e. $(\hat{a})^\vee = a$.
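A sketch of (17) in code follows, assuming the `hat` helper from the dynamics sketch; the gains and function signature are illustrative, not the authors' implementation.

```python
# Sketch of the nominal geometric controller (17) following [9].
import numpy as np

def vee(A):
    """Inverse of hat: vee(hat(a)) == a."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def geometric_control(x, v, R, Om, xd, vd, ad, Rd, Omd, dOmd,
                      m, J, kx, kv, kR, kOm, g=9.81):
    e3 = np.array([0.0, 0.0, 1.0])
    ex, ev = x - xd, v - vd                               # position/velocity errors
    f = (kx * ex + kv * ev + m * g * e3 - m * ad) @ (R @ e3)   # nominal thrust
    eR = 0.5 * vee(Rd.T @ R - R.T @ Rd)                   # attitude error
    eOm = Om - R.T @ Rd @ Omd                             # angular-rate error
    M = (-kR * eR - kOm * eOm + np.cross(Om, J @ Om)
         - J @ (hat(Om) @ R.T @ Rd @ Omd - R.T @ Rd @ dOmd))   # nominal moment
    return f, M
```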

V-C Learning based Control using MSGP

The dynamical model in (13)-(16) and the controller presented in (17) assume a precise model of the quadrotor. However, it is often difficult to accurately parameterize a dynamical system using physics first principles. Moreover, the model (13)-(16) does not consider aerodynamic drag, damping, wind effects, or time-varying changes to mass and inertia. Here, we use MSGP to capture and learn any unmodeled effects on the system. Since the unmodeled nonlinearities appear in the dynamics (14), (16), we use a total of six MSGPs, placing a prior on each dimension of the unmodeled state-space as shown below:

$m\dot{v} = mge_3 - fRe_3 + d_v, \quad d_{v,i} \sim \mathcal{GP}\big(0, k_i(\cdot,\cdot)\big),\; i \in \{1,2,3\}$ (18)
$J\dot{\Omega} = M - \Omega \times J\Omega + d_\Omega, \quad d_{\Omega,i} \sim \mathcal{GP}\big(0, k_i(\cdot,\cdot)\big),\; i \in \{1,2,3\}$ (19)

The inputs to the MSGPs are the measured states and controls, and the target observations are given by the difference between (14), (16) and (18), (19), i.e., the residuals $d_v$ and $d_\Omega$. Given the input samples and noisy observations, this constitutes a proper regression problem. After learning, we can therefore perform prediction at a new query point, where the sparse predictive mean of the unmodeled dynamics is calculated using (11). This predictive mean is then used to modify the controller (17) with the learned dynamics as shown below,

$f = \left(k_x e_x + k_v e_v + mge_3 - m\ddot{x}_d + \bar{d}_v\right) \cdot Re_3, \qquad M = -k_R e_R - k_\Omega e_\Omega + \Omega \times J\Omega - J\left(\hat{\Omega} R^T R_d \Omega_d - R^T R_d \dot{\Omega}_d\right) - \bar{d}_\Omega$ (20)

where $\bar{d}_v$ and $\bar{d}_\Omega$ are the MSGP predictive means of the translational and rotational residuals at the current query point.
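The sketch below illustrates both halves of this learning loop under the notation above: forming the residual targets from measured accelerations, and folding the MSGP means into the nominal commands as in (20). The function names are assumptions, and `msgp_predict` is the sketch from Section IV-C.

```python
# Illustrative residual targets and learned controller augmentation (Eq. 20).
import numpy as np

def residual_targets(v_dot_meas, Om_dot_meas, f, M, R, Om, m, J, g=9.81):
    """Targets = measured accelerations minus the nominal model (14), (16)."""
    e3 = np.array([0.0, 0.0, 1.0])
    d_v = m * v_dot_meas - (m * g * e3 - f * (R @ e3))    # translational residual
    d_Om = J @ Om_dot_meas - (M - np.cross(Om, J @ Om))   # rotational residual
    return d_v, d_Om

def learned_control(nominal_f, nominal_M, R, z, models_v, models_Om):
    """Fold the six MSGP predictive means into the nominal thrust and moment."""
    e3 = np.array([0.0, 0.0, 1.0])
    d_v_hat = np.array([msgp_predict(models_v[i], z) for i in range(3)])
    d_Om_hat = np.array([msgp_predict(models_Om[i], z) for i in range(3)])
    f = nominal_f + d_v_hat @ (R @ e3)    # compensate translational residual
    M = nominal_M - d_Om_hat              # compensate rotational residual
    return f, M
```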

VI Simulation Results

We validate the MSGP semi-parametric learning framework by modifying the nominal controller's feedforward component on several test cases. We empirically compare MSGP's learning performance against the nominal, standard GP, SPGP, and LGP based controllers. The GPML library in MATLAB is used for hyperparameter tuning and covariance calculations [15].

Desired trajectories are sinusoids where position reference is and desired yaw is , for . Nominal parameters are , , gains , , , . In simulation, the quadrotor is subjected to these trajectories under model uncertainties using a low-gain nominal controller (17). As a result, the learned controller has a stronger effect to compensate for unmodeled dynamics by adjusting controller’s feedforward component.

The unmodeled dynamics are broadly classified into three categories for investigation: parametric, non-parametric, and combined parametric and non-parametric. For each category, we collect over samples for training and over samples for testing. One-tenth of the samples are chosen as pseudo-inputs for SPGP. Localization of data samples into local models for LGP and MSGP is done in the same state space as the inputs to the respective GPs. Each local model consists of a maximum of samples, resulting in over regional models. MSGP is further sparsified using one-fifth of the local samples as local pseudo-inputs.

VI-A Parametric Unmodeled Dynamics

Parametric unmodeled dynamics deal with changes or perturbations made to the quadrotor parameters, such as mass or inertia. Parameters are changed to the extent that the nominal controller can still achieve stable flight, although with performance degradation in trajectory tracking. Tracking error is determined in the position space of the quadrotor. The trajectory tracking error of the quadrotor when subjected to changes in the parameters is shown in Figure 2. In the training phase, the quadrotor is trained for each GP by introducing step changes to the mass and inertia at different time instances, as given below:

where the nominal mass and inertia are stepped to perturbed values at different time instances (in seconds). In the testing phase, the controllers are compared by changing the magnitudes of the perturbations and the time intervals. From the normalized mean squared error (NMSE) plot in Figure 2, it is clear that the nominal controller has a higher NMSE along the altitude axis. This is expected since changing the mass has a more pronounced effect on the altitude. The GP controller performs better than the nominal and SPGP controllers, while LGP outperforms all three. MSGP, on the other hand, demonstrates superior tracking performance with the lowest NMSE among all the controllers. MSGP achieves better tracking than the other learning-based controllers due to its per-model hyperparameters and weighted sparse posterior prediction. Moreover, dynamical effects changing at different time instances are better captured with different hyperparameters, as opposed to the single global set of hyperparameters used by GP, SPGP, and LGP.

Fig. 2: Parametric effects: Tracking error between nominal and learning-based controllers (GP, SPGP, LGP, MSGP) for varying mass and inertia.
Fig. 3: Non-parametric effects: Tracking error between nominal and learning-based controllers (GP, SPGP, LGP, MSGP) for varying wind.
Fig. 4: Parametric & Non-parametric: Tracking error between nominal and learning-based controllers (GP, SPGP, LGP, MSGP) for mass, inertia, and wind.

VI-B Non-Parametric Unmodeled Dynamics

Here we look at non-parametric effects introduced in the dynamics, such as unmodeled aerodynamics. The quadrotor is subjected to the same unknown wind effects for training each GP. During testing, a similar wind is introduced with different magnitudes along each dimension to compare tracking performance. Figure 3 shows the tracking performance of the quadrotor in the presence of non-parametric aerodynamic disturbances.

The nominal controller incurs the highest NMSE along each dimension. This is expected since the unmodeled dynamics cannot be handled by the nominal controller that relies on model knowledge for feedforward compensation. GP performs significantly better than the nominal controller in presence of such effects. SPGP performs better than the nominal case, but it does not compensate as effectively as GP. LGP on the other hand outperforms both the nominal and SPGP controllers, but underperforms compared to GP. Finally, MSGP incurs the lowest NMSE, outperforming all the controllers including GP.

From the error-versus-time plots in Figure 3, it can be seen that each learning-based controller eventually fails to compensate for the wind effects, with the exception of MSGP, which holds out the longest among all the controllers. GP is able to compensate for the wind longer than both SPGP and LGP. SPGP gives in first to the unmodeled dynamical effects since it is only a sparse approximation of GP, while LGP, being a locally clustered approximation of GP, holds out longer than SPGP. Despite MSGP being built from multiple sparse approximations of GP, it is consistently able to compensate since each sparse model's uniqueness is preserved, as described in Section IV-B.

VI-C Parametric & Non-Parametric Effects

Next, we study the combined effects of unmodeled dynamics in both parametric and non-parametric form. Note that the mass, inertia, and wind effects introduced here are different from the previous experiments to show performance against varied conditions. The parametric and non-parametric changes for the training phase are,

where the affected mass and inertia parameters and the unmodeled wind are applied over given time intervals (in seconds). Since it is a combination of multiple unmodeled effects, the simulation setup is very challenging: the learning-based quadrotor controller needs to deal with a highly inaccurate model. During training, each GP algorithm is trained on the above combinations. During testing, the magnitudes and respective time intervals are altered to test the generalizability of the learning-based controllers.

The tracking error performance is shown in Figure 4. Among all the controllers, SPGP performs the worst in terms of tracking error, followed by the nominal controller. The sparse approximation tends to overcompensate for the introduced nonlinearities in the dynamics, thus exaggerating their effects in the feedforward controller. Both GP and LGP demonstrate comparable performance and perform better than the nominal controller. MSGP has the lowest NMSE among all the controllers, with performance comparable to GP and LGP along the $x$ and $y$ dimensions. In the altitude domain, however, there is a significant reduction in NMSE for MSGP compared to any other controller.

VI-D Training and Prediction Time Comparison

We now analyze the average time taken by the different GP algorithms for posterior predictions. For different training sizes (), we train each GP individually, subjected to non-parametric wind disturbances. In the case of SPGP, we take one-tenth of each training dataset as pseudo-inputs. For LGP and MSGP, we assume the number of local data points to be linearly proportional to each training dataset, taking one-fifth of each training set to form the local models. In practice, the regional models need only hold data points each; however, we let each regional model hold a fairly high number of local data points for comparison. Subsequently, for MSGP, we further take one-fifth of each local model's dataset as local pseudo-inputs. For LGP and MSGP, all the neighboring models are considered for computing the weighted posterior prediction.

LGP and MSGP take the least time to train due to their reduced-order models compared to GP. LGP takes over to train each cluster in CPU time (i7-9800HQ), while MSGP takes under for each cluster in CPU time. We also benchmark GPU training time using the GPyTorch library on an RTX Ti [6]. MSGP takes roughly per cluster for the given input and output dimensions.

Fig. 5: Posterior prediction time in milliseconds against different training sizes. The prediction time is computed on a query point for GP, SPGP, LGP, and MSGP. The dashed black line marks milliseconds.

The CPU prediction time comparison is shown in Figure 5. GP's prediction time drastically increases with the number of training points, as expected, since it scales cubically with them. SPGP scales very well compared to GP: for over points, SPGP computes under . LGP has the least computational cost for fewer training points (under ) and grows only marginally with increasing size; it is faster than SPGP and takes under with over points. MSGP behaves similarly to LGP but performs better as the number of training inputs increases, due to the sparsity in each model. MSGP takes approximately for predictions with over training points. Note that precomputations can be made to improve the speed of all the methods. In practice, one can save $\alpha = (K + \sigma_n^2 I)^{-1} y$, where $K$ denotes the covariance matrix for the observed inputs and $y$ is the set of target observations. Low-rank approximations are then used for computing the inverses. This results in a tremendous boost in computational speed, thus achieving faster predictions: doing so results in a prediction time of only for MSGP in CPU time. The space and time complexity for the various GPs are tabulated in Table I.
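A small sketch of this saving trick for the standard GP case follows; the sparse variants cache the analogous vector $\beta = Q_M^{-1} K_{MN} (\Lambda + \sigma_n^2 I)^{-1} y$ instead. The helper names are illustrative.

```python
# Sketch of the precomputation trick: cache everything that does not depend on
# the query so each prediction reduces to a short dot product.
import numpy as np

def precompute(X, y, kern, sigma_n):
    K = kern(X, X)
    L = np.linalg.cholesky(K + sigma_n**2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # saved once, O(n^3)
    return alpha

def fast_mean(X, x_star, alpha, kern):
    return kern(X, x_star[None, :]).T @ alpha             # O(n) per query
```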

Method | Storage | Training | Mean | Mean (w/ saving)
GP | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^3)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(n)$
SPGP | $\mathcal{O}(nm)$ | $\mathcal{O}(nm^2)$ | $\mathcal{O}(nm)$ | $\mathcal{O}(m)$
LGP | $\mathcal{O}(Kn_l^2)$ | $\mathcal{O}(Kn_l^3)$ | $\mathcal{O}(Mn_l^2)$ | $\mathcal{O}(Mn_l)$
MSGP | $\mathcal{O}(Kn_lm_l)$ | $\mathcal{O}(Kn_lm_l^2)$ | $\mathcal{O}(Mn_lm_l)$ | $\mathcal{O}(Mm_l)$
TABLE I: Space and time complexity comparison for GP, SPGP, LGP, and MSGP, ignoring the time taken to create clusters for the local methods ($n$: training points; $m$: pseudo-inputs; $K$: local models; $n_l$, $m_l$: points and pseudo-inputs per local model; $M$: neighboring models per prediction). The last column assumes saving the necessary matrices for each method.

VII Hardware Experiments

VII-A Experimental Setup

We now discuss the implementation of MSGP on a hardware quadrotor platform. We use the Crazyflie 2.1, where state estimation is performed onboard with the help of an external low-cost Lighthouse positioning system [4]. Control commands are executed from a ground station (Intel i7-7700HQ, 16 GB RAM) through the Crazyradio PA USB dongle. The nominal controller comprises a position and an attitude controller. The position controller generates the commanded thrust using a feedforward hovering thrust and a feedback PD controller. The desired attitude is maintained through an attitude controller generating commanded roll, pitch, and yaw-rates. The selected gains maintain stable flight within nominal conditions. Control inputs are sent and states recorded at fixed rates. The experiment video can be seen at: https://youtu.be/zUk1ISux6ao

The objective is to maintain stable flight, particularly outside nominal conditions. Training data is collected by adding a payload of approximately to the Crazyflie. The MSGP inputs comprise positions, velocities, accelerations, roll, pitch, yaw, and commanded thrust. The collected training points are randomly assigned to the sparse models, and each model is further sparsified using one-fourth of its points as pseudo-inputs. Training each cluster takes less than in CPU time. For the weighted sparse prediction, the neighboring models are used.

VII-B Experiment 1

We first test the nominal and MSGP-based controllers on a simple task of stable hovering in the presence of an additional payload. The reference altitude is set at . Figure 6 shows the tracking comparison for the MSGP-based and nominal controllers. Since these are two separate experiments, the transition point for adding the payload is synchronized for better visualization. When there is no added mass, both the MSGP and nominal controllers exert similar hovering thrust, demonstrating that they operate well within nominal conditions. Once the payload is added, MSGP immediately exerts a compensating feedforward thrust, as seen in Figure 7. The nominal controller, however, begins exerting maximal feedback thrust due to the altitude drop experienced. For the nominal controller to eliminate the steady-state error, the gains would need to be adapted or retuned. This issue is alleviated in the case of the MSGP-based controller. We also note that a better design of the PD gains would reduce the oscillations experienced by MSGP; however, the design of optimal gains is outside the scope of this work.

Fig. 6: Quadrotor altitude hold with a payload of
Fig. 7: Commanded thrust for altitude hold with a payload of

VII-C Experiment 2

In this experiment, we aggressively disturb the system to test the robustness and generalizability of the MSGP learning algorithm. The system is disturbed in three ways: 1) hitting the payload, inducing unmodeled inertial moments; 2) hitting the quadrotor physically off the reference; 3) pulling the quadrotor down by the mass. The disturbances are meant to induce thrust and attitude compensation by MSGP.

The altitude tracking performance in the presence of the disturbances is shown in Figure 8. Note that MSGP has not been trained a priori on these disturbances. MSGP tracks well when there is no payload and also compensates well when the mass is added. The transient behavior of the feedforward thrust can be seen in Figure 9. Thereafter, the system is severely disturbed, first by hitting the mass, which induces an off-axis inertial moment that MSGP needs to address. The Crazyflie is then hit twice to push it off its current trajectory. Finally, the Crazyflie is pulled down along with the mass. For all these unmodeled disturbances, MSGP generates the necessary thrust and commanded attitudes to maintain stable flight and a steady altitude.

Fig. 8: Altitude of the system in presence of aggressive unmodeled disturbances and a payload of using MSGP controller
Fig. 9: Commanded thrust of the system in presence of aggressive unmodeled disturbances and a payload of using MSGP controller

VIII Conclusion

In summary, we proposed semi-parametric control with sparsity by exploiting multiple non-parametric sparse GP models. The sparse models are separately optimized for their hyperparameters, thereby retaining their own uniqueness. A weighted sparse posterior prediction is adopted for a query point to avoid overfitting and discontinuities. The proposed framework is tested on a geometric quadrotor controller in simulation, with dynamics evolving on the tangent bundle to $SE(3)$. Simulations are performed extensively for unmodeled parametric, non-parametric, and combined dynamical effects with step changes, demonstrating the proposed approach's ability to generalize to step changes despite being a locally sparse approximator. We also rigorously tested our framework against standard GP, sparse GP, and local GP on both prediction quality and time complexity. MSGP demonstrated better prediction accuracy, in the form of improved trajectory tracking, with reduced prediction time compared to the other GPs. Lastly, we experimentally performed sparsity-based control on a quadrotor platform, validating MSGP under unknown mass, inertia, and disturbances.

References

  1. F. Berkenkamp and A. P. Schoellig (2015) Safe and robust learning control with Gaussian processes. In 2015 European Control Conference (ECC), pp. 2496–2501.
  2. T. D. Bui, J. Yan and R. E. Turner (2017) A unifying framework for Gaussian process pseudo-point approximations using power expectation propagation. The Journal of Machine Learning Research 18 (1), pp. 3649–3720.
  3. G. Cao, E. M. Lai and F. Alam (2017) Gaussian process model predictive control of an unmanned quadrotor. Journal of Intelligent & Robotic Systems 88 (1), pp. 147–162.
  4. Crazyflie 2.1: Bitcraze. https://www.bitcraze.io/crazyflie-2-1/
  5. L. Csató and M. Opper (2002) Sparse on-line Gaussian processes. Neural Computation 14 (3), pp. 641–668.
  6. J. R. Gardner, G. Pleiss, D. Bindel, K. Q. Weinberger and A. G. Wilson (2018) GPyTorch: blackbox matrix-matrix Gaussian process inference with GPU acceleration. In Advances in Neural Information Processing Systems.
  7. R. Herbrich, N. D. Lawrence and M. Seeger (2003) Fast sparse Gaussian process methods: the informative vector machine. In Advances in Neural Information Processing Systems, pp. 625–632.
  8. J. Ko and D. Fox (2009) GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots 27 (1), pp. 75–90.
  9. T. Lee, M. Leok and N. H. McClamroch (2010) Geometric tracking control of a quadrotor UAV on SE(3). In 49th IEEE Conference on Decision and Control (CDC), pp. 5420–5425.
  10. T. P. Minka (2001) Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pp. 362–369.
  11. D. Nguyen-Tuong and J. Peters (2010) Using model knowledge for learning inverse dynamics. In 2010 IEEE International Conference on Robotics and Automation.
  12. D. Nguyen-Tuong, J. R. Peters and M. Seeger (2009) Local Gaussian process regression for real time online model learning. In Advances in Neural Information Processing Systems, pp. 1193–1200.
  13. D. Nguyen-Tuong, M. Seeger and J. Peters (2008) Computed torque control with nonparametric regression models. pp. 212–217.
  14. J. Quiñonero-Candela and C. E. Rasmussen (2005) A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research 6, pp. 1939–1959.
  15. C. E. Rasmussen and H. Nickisch (2010) Gaussian processes for machine learning (GPML) toolbox. Journal of Machine Learning Research 11.
  16. C. E. Rasmussen (2003) Gaussian processes in machine learning. In Summer School on Machine Learning, pp. 63–71.
  17. D. Romeres, D. K. Jha, A. D. Libera, W. Yerazunis and D. Nikovski (2018) Semiparametrical Gaussian processes learning of forward dynamical models for navigating in a circular maze. arXiv:1809.04993.
  18. D. Romeres, M. Zorzi, R. Camoriano and A. Chiuso (2016) Online semi-parametric learning for inverse dynamics modeling. In 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 2945–2950.
  19. A. J. Smith, M. AlAbsi and T. Fields (2018) Heteroscedastic Gaussian process-based system identification and predictive control of a quadcopter. In 2018 AIAA Atmospheric Flight Mechanics Conference, pp. 0298.
  20. E. Snelson and Z. Ghahramani (2006) Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18, pp. 1257–1264.
  21. H. Soh and Y. Demiris (2015) Learning assistance by demonstration: smart mobility with shared control and paired haptic controllers. Journal of Human-Robot Interaction 4 (3), pp. 76–100.
  22. M. Titsias (2009) Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, pp. 567–574.
  23. S. Vijayakumar and S. Schaal (2000) Locally weighted projection regression: an O(n) algorithm for incremental real time learning in high dimensional space. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), Vol. 1, pp. 288–293.
  24. L. Wang, E. A. Theodorou and M. Egerstedt (2018) Safe learning of quadrotor dynamics using barrier certificates. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2460–2465.
  25. S. C. L. Wiratunga (2014) Training Gaussian process regression models using optimized trajectories. UWSpace.
  26. T. Wu and J. Movellan (2012) Semi-parametric Gaussian process for robot system identification. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 725–731.