Uniform Error Bounds for Gaussian Process Regression with Application to Safe Control

Abstract

Data-driven models are subject to model errors due to limited and noisy training data. Key to the application of such models in safety-critical domains is the quantification of their model error. Gaussian processes provide such a measure and uniform error bounds have been derived, which allow safe control based on these models. However, existing error bounds require restrictive assumptions. In this paper, we employ the Gaussian process distribution and continuity arguments to derive a novel uniform error bound under weaker assumptions. Furthermore, we demonstrate how this distribution can be used to derive probabilistic Lipschitz constants and analyze the asymptotic behavior of our bound. Finally, we derive safety conditions for the control of unknown dynamical systems based on Gaussian process models and evaluate them in simulations of a robotic manipulator.

1 Introduction

The application of machine learning techniques in control tasks holds significant promise. The identification of highly nonlinear systems through supervised learning techniques Nørgård et al. (2000) and automated policy search in reinforcement learning Deisenroth (2013) enable the control of complex unknown systems. Nevertheless, applications in safety-critical domains, like autonomous driving, robotics or aviation, are rare. Even though the data-efficiency and performance of self-learning controllers are impressive, engineers still hesitate to rely on learning approaches if the physical integrity of systems is at risk, in particular if humans are involved. Empirical evaluations, e.g. for autonomous driving Huval et al. (2015), are available; however, this might not be sufficient to reach the desired level of reliability and autonomy.

Limited and noisy training data lead to imperfections in data-driven models Umlauft et al. (2017b). This makes the quantification of model uncertainty and the knowledge of a model's own ignorance key for the utilization of learning approaches in safety-critical applications. Gaussian process (GP) models provide this measure of their own imprecision and have therefore gained attention in the control community Beckers et al. (2019); Berkenkamp et al. (2016); Fanger et al. (2016). These approaches heavily rely on error bounds for Gaussian process regression and are therefore limited by the strict assumptions made in previous works on GP uniform error bounds Srinivas et al. (2012); Chowdhury and Gopalan (2017); Umlauft et al. (2018b, a).

The main contribution of this paper is therefore the derivation of a novel GP uniform error bound, which requires less prior knowledge and weaker assumptions than previous approaches and is therefore applicable to a wider range of problems. Furthermore, we derive a Lipschitz constant for samples of GPs and investigate the asymptotic behavior in order to demonstrate that arbitrarily small error bounds can be guaranteed with sufficient computational resources and data. The proposed GP bounds are employed to derive safety guarantees for unknown dynamical systems which are controlled based on a GP model. By employing Lyapunov theory Khalil (2002), we prove that the closed-loop system (here, a robotic manipulator serves as an example) converges to a small fraction of the state space and can therefore be considered safe.

The remainder of this paper is structured as follows: We briefly introduce Gaussian process regression and discuss related error bounds in Section 2. The proposed GP uniform error bound, the probabilistic Lipschitz constant and the asymptotic analysis are presented in Section 3. In Section 4, we show safety of a GP model-based controller, which we evaluate on a robotic manipulator in Section 5.

2 Background

2.1 Gaussian Process Regression and Uniform Error Bounds

Gaussian process regression is a Bayesian machine learning method based on the assumption that any finite collection of random variables¹ $f(x^{(1)}), \ldots, f(x^{(N)})$ follows a joint Gaussian distribution with a prior mean, which we assume to be zero, and a covariance defined through the kernel $k(x, x')$ Rasmussen and Williams (2006). The training targets $y^{(i)}$ are observations of a sample function $f$ of the GP perturbed by zero mean Gaussian noise with variance $\sigma_n^2$, i.e., $y^{(i)} = f(x^{(i)}) + \epsilon^{(i)}$ with $\epsilon^{(i)} \sim \mathcal{N}(0, \sigma_n^2)$. By concatenating the $N$ input data points in a matrix $X = [x^{(1)} \cdots x^{(N)}]$, the elements of the GP kernel matrix $K$ are defined as $K_{ij} = k(x^{(i)}, x^{(j)})$, and $k(x) = [k(x^{(1)}, x) \cdots k(x^{(N)}, x)]^{\top}$ denotes the kernel vector, which is defined accordingly. The probability distribution of the GP at a point $x$ conditioned on the training data concatenated in $X$ and $\boldsymbol{y} = [y^{(1)} \cdots y^{(N)}]^{\top}$ is then given as a normal distribution with mean $\nu(x) = k^{\top}(x) (K + \sigma_n^2 I_N)^{-1} \boldsymbol{y}$ and variance $\sigma^2(x) = k(x, x) - k^{\top}(x) (K + \sigma_n^2 I_N)^{-1} k(x)$.
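To make the notation concrete, the following minimal NumPy sketch computes the posterior mean $\nu(x)$ and standard deviation $\sigma(x)$ for a squared exponential kernel; the kernel choice, hyperparameter values and function names are illustrative assumptions and not the implementation used by the authors (who provide Matlab code, see the footnotes).

```python
import numpy as np

def se_kernel(A, B, sf2=1.0, ell=1.0):
    """Squared exponential kernel matrix between row-wise inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xs, sn2=0.01):
    """Posterior mean and standard deviation of a zero-mean GP at test points Xs."""
    K = se_kernel(X, X) + sn2 * np.eye(len(X))      # K + sigma_n^2 I
    L = np.linalg.cholesky(K)                       # Cholesky factor for stable solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = se_kernel(X, Xs)                           # kernel vectors k(x)
    mean = Ks.T @ alpha                             # nu(x) = k(x)^T (K + sn2 I)^-1 y
    v = np.linalg.solve(L, Ks)
    var = se_kernel(Xs, Xs).diagonal() - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# toy usage: noisy observations of an unknown scalar function
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 100)[:, None]
mu, sigma = gp_posterior(X, y, Xs)
```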

A major reason for the popularity of GPs and related approaches in safety critical applications is the existence of uniform error bounds for the regression error, which is defined as follows.

Definition 2.1.

Gaussian process regression exhibits a uniformly bounded error on a compact set $\mathbb{X} \subset \mathbb{R}^d$ if there exists a function $\eta: \mathbb{X} \to \mathbb{R}_{0,+}$ such that

$|f(x) - \nu(x)| \le \eta(x) \quad \forall x \in \mathbb{X}.$   (1)

If this bound holds with probability of at least $1 - \delta$ for some $\delta \in (0, 1)$, it is called a probabilistic uniform error bound.

2.2 Related Work

For many methods closely related to Gaussian process regression, uniform error bounds are well established. When dealing with noise-free data, i.e. in the interpolation of multivariate functions, results from the field of scattered data approximation with radial basis functions can be applied Wendland (2004). In fact, many results from interpolation with radial basis functions can be directly applied to noise-free GP regression with stationary kernels. The classical result in Wu and Schaback (1993) employs Fourier transform methods to derive an error bound for functions in the reproducing kernel Hilbert space (RKHS) attached to the interpolation kernel. By additionally exploiting properties of the RKHS, a uniform error bound with increased convergence rate is derived in Schaback (2002). Typically, this form of bound crucially depends on the so-called power function, which corresponds to the posterior standard deviation of Gaussian process regression under certain conditions Kanagawa et al. (2018). In Hubbert and Morton (2004), an error bound for data distributed on a sphere is developed, while the bound in Narcowich et al. (2006) extends existing approaches to functions from Sobolev spaces. Bounds for anisotropic kernels and the derivatives of the interpolant are developed in Beatson et al. (2010). A Sobolev-type error bound for interpolation with Matérn kernels is derived in Stuart and Teckentrup (2018). Moreover, it is shown there that convergence of the interpolation error implies convergence of the GP posterior variance.

Regularized kernel regression extends many ideas from scattered data interpolation to noisy observations and is closely related to Gaussian process regression, as pointed out in Kanagawa et al. (2018). In fact, the GP posterior mean function is identical to kernel ridge regression with a squared cost function Rasmussen and Williams (2006). Many error bounds, such as Mendelson (2002), depend on the empirical covering number and the norm of the unknown function in the RKHS attached to the regression kernel. In Zhang (2005), the effective dimension of the feature space in which regression is performed is employed to derive a probabilistic uniform error bound. The effect of approximations of the kernel, e.g. with the Nyström method, on the regression error is analyzed in Cortes et al. (2010). Tight error bounds using empirical covering numbers are derived under mild assumptions in Shi (2013). Finally, error bounds for general regularization schemes are developed in Dicker et al. (2017), which depend on the regularization and the RKHS norm of the function.

Using similar RKHS-based methods for Gaussian process regression, probabilistic uniform error bounds depending on the maximal information gain and the RKHS norm of the unknown function have been developed in Srinivas et al. (2012). These constants pose a high hurdle which has prevented the rigorous application of this work in control, and typically heuristic constants without theoretical foundation are applied instead, see e.g. Berkenkamp et al. (2017). While regularized kernel regression allows a wide range of observation noise distributions, the bound in Srinivas et al. (2012) only holds for bounded sub-Gaussian noise. Based on this work, an improved bound is derived in Chowdhury and Gopalan (2017) in order to analyze the regret of an upper confidence bound algorithm in multi-armed bandit problems. Although these bounds are frequently used in safe reinforcement learning and control, they suffer from several issues. On the one hand, they depend on constants which are very difficult to calculate. While this is no problem for theoretical analysis, it prohibits the integration of these bounds into algorithms, and often estimates of the constants must be used. On the other hand, they suffer from a general problem of RKHS approaches: the space of functions for which the bounds hold becomes smaller the smoother the kernel is Narcowich et al. (2006). In fact, the RKHS attached to a covariance kernel is usually small compared to the support of the prior distribution of a Gaussian process van der Vaart and van Zanten (2011).

The latter issue has been addressed by considering the support of the prior distribution of the Gaussian process as the belief space. Based on bounds for the suprema of GPs Adler and Taylor (2007) and existing error bounds for interpolation with radial basis functions, a probabilistic uniform error bound for Kriging (an alternative term for GP regression with noise-free training data) is derived in Wang et al. (2019). However, to the best of our knowledge, the uniform error of Gaussian process regression with noisy observations has not been analyzed with the help of the prior GP distribution.

3 Probabilistic Uniform Error Bound

While probabilistic uniform error bounds for the case of noise-free observations and for the restriction to subspaces of an RKHS are widely used, they often rely on constants which are hard to determine and are typically limited to unnecessarily small function spaces. The inherent probability distribution of GPs, whose support forms the largest possible function space for regression with a given GP, has not been exploited to derive uniform error bounds for Gaussian process regression with noisy observations. Under the weak assumption of Lipschitz continuity of the covariance kernel and the unknown function, a directly computable probabilistic uniform error bound is derived in Section 3.1. We demonstrate how Lipschitz constants for unknown functions directly follow from the assumed distribution over the function space in Section 3.2. Finally, we show in Section 3.3 that an arbitrarily small error bound can be reached with sufficiently many and well-distributed training data.

3.1 Exploiting Lipschitz Continuity of the Unknown Function

In contrast to the RKHS based approaches in Srinivas et al. (2012); Chowdhury and Gopalan (2017), we make use of the inherent probability distribution over the function space defined by Gaussian processes. We achieve this through the following assumption.

Assumption 3.1.

The unknown function $f$ is a sample from a zero mean Gaussian process with covariance kernel $k(x, x')$, and observations $y = f(x) + \epsilon$ are perturbed by zero mean i.i.d. Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma_n^2)$ with variance $\sigma_n^2$.

This assumption includes abundant information about the regression problem. The space of sample functions is limited through the choice of the kernel $k$ of the Gaussian process. Using Mercer's decomposition of the kernel $k$ Mercer (1909), this space is defined through

(2)

which contains all functions that can be represented in terms of the kernel $k$. By choosing a suitable class of covariance functions, this space can be designed in order to incorporate prior knowledge about the unknown function $f$. For example, for covariance kernels which are universal in the sense of Steinwart (2001), continuous functions can be learned with arbitrary precision. Moreover, for the squared exponential kernel, the space of sample functions corresponds to the space of continuous functions on $\mathbb{X}$, while its RKHS is limited to analytic functions van der Vaart and van Zanten (2011). Furthermore, Assumption 3.1 defines a prior GP distribution over the sample space which is the basis for the calculation of the posterior probability. The prior distribution is typically shaped by the hyperparameters of the covariance kernel $k$, e.g. slowly varying functions can be assigned a higher probability than functions with high derivatives. Finally, Assumption 3.1 allows Gaussian observation noise, which is in contrast to the bounded noise required e.g. in Srinivas et al. (2012); Chowdhury and Gopalan (2017).
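For reference, Mercer's theorem provides the expansion of the kernel in terms of its eigenfunctions $\phi_i(\cdot)$ and non-negative eigenvalues $\lambda_i$,

$k(x, x') = \sum_{i=1}^{\infty} \lambda_i \, \phi_i(x) \, \phi_i(x'),$

and the omitted definition (2) characterizes the space of sample functions through expansions in these eigenfunctions; the precise conditions on the expansion coefficients are not reproduced here.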

In addition to Assumption 3.1, we need Lipschitz continuity of the kernel $k$ and the unknown function $f$. We define the Lipschitz constant of a differentiable covariance kernel $k$ as

(3)

Since most of the practically used covariance kernels, such as the squared exponential and Matérn kernels, are Lipschitz continuous Rasmussen and Williams (2006), this is a weak restriction on covariance kernels. However, it allows us to prove continuity of the posterior mean function $\nu(\cdot)$ and the posterior standard deviation $\sigma(\cdot)$, which is exploited to derive a probabilistic uniform error bound in the following theorem. The proofs of all following theorems can be found in the supplementary material.
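As a worked example (not part of the original text), assume (3) denotes the maximal norm of the gradient of $k$ with respect to its first argument. For an isotropic squared exponential kernel $k(x, x') = \sigma_f^2 \exp(-\|x - x'\|^2/(2 l^2))$, this gradient has norm $\sigma_f^2 \frac{\rho}{l^2} e^{-\rho^2/(2l^2)}$ with $\rho = \|x - x'\|$, which is maximized at $\rho = l$, so that

$L_k \le \frac{\sigma_f^2}{l \sqrt{e}},$

with equality whenever the diameter of $\mathbb{X}$ is at least $l$.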

Theorem 3.1.

Consider a zero mean Gaussian process defined through the continuous covariance kernel $k$ with Lipschitz constant $L_k$ on the compact set $\mathbb{X}$. Furthermore, consider a continuous unknown function $f$ with Lipschitz constant $L_f$ and $N$ observations $y^{(i)}$ satisfying Assumption 3.1. Then, the posterior mean function $\nu(\cdot)$ and the standard deviation $\sigma(\cdot)$ of a Gaussian process conditioned on the training data are continuous with Lipschitz constant $L_{\nu}$ and modulus of continuity $\omega_{\sigma}(\cdot)$ on $\mathbb{X}$ such that

$L_{\nu} \le L_k \sqrt{N} \left\| \left( K + \sigma_n^2 I_N \right)^{-1} \boldsymbol{y} \right\|$   (4)
$\omega_{\sigma}(\tau) \le \sqrt{ 2 \tau L_k \left( 1 + N \left\| \left( K + \sigma_n^2 I_N \right)^{-1} \right\| \max_{x, x' \in \mathbb{X}} k(x, x') \right) }$   (5)

Moreover, pick $\delta \in (0, 1)$, $\tau \in \mathbb{R}_+$ and set

$\beta(\tau) = 2 \log \left( \frac{M(\tau, \mathbb{X})}{\delta} \right)$   (6)
$\gamma(\tau) = \left( L_{\nu} + L_f \right) \tau + \sqrt{\beta(\tau)}\, \omega_{\sigma}(\tau)$   (7)

Then, it holds that

$P\left( \left| f(x) - \nu(x) \right| \le \sqrt{\beta(\tau)}\, \sigma(x) + \gamma(\tau), \ \forall x \in \mathbb{X} \right) \ge 1 - \delta.$   (8)

The parameter $\tau$ is in fact the grid constant of a grid used in the derivation of the theorem. The error on the grid can be bounded by exploiting properties of the Gaussian distribution Srinivas et al. (2012), resulting in a dependency on the number of grid points. Eventually, this leads to the constant $\beta(\tau)$ defined in (6), since the covering number $M(\tau, \mathbb{X})$ is the minimum number of points in a grid over $\mathbb{X}$ with grid constant $\tau$. By employing the Lipschitz constant $L_{\nu}$ and the modulus of continuity $\omega_{\sigma}(\cdot)$, which are trivially obtained due to the Lipschitz continuity of the covariance kernel $k$, as well as the Lipschitz constant $L_f$ of the unknown function, the error bound is extended to the complete set $\mathbb{X}$, which results in (8).

Note that most of the expressions in Theorem 3.1 can be directly evaluated. Although $\beta(\tau)$ depends on the covering number $M(\tau, \mathbb{X})$ of $\mathbb{X}$, which is in general difficult to calculate, upper bounds can be computed trivially. For example, for a hypercubic set the covering number can be bounded by

(9)

where $r$ is the edge length of the hypercube. Furthermore, (4) and (5) depend only on the training data and kernel expressions, which can in general be calculated analytically. Therefore, (8) can be computed for fixed $\tau$ and $\delta$ if an upper bound for the Lipschitz constant $L_f$ of the unknown function is known. Prior bounds on the Lipschitz constant are often available for control systems, e.g. based on simplified first order physical models. However, we demonstrate a method to obtain probabilistic Lipschitz constants from Assumption 3.1 in Section 3.2. Hence, all expressions in Theorem 3.1, or upper bounds thereof, can be computed with little effort, which emphasizes the high applicability of Theorem 3.1 in safe control of unknown systems.
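To illustrate how easily such quantities are evaluated, the sketch below uses the standard construction that covers a hypercube of edge length $r$ with a uniform grid of spacing $2\tau/\sqrt{d}$; this is one valid upper bound on the covering number, not necessarily the specific expression (9), and the numerical values are only examples.

```python
import math

def covering_number_hypercube(r: float, d: int, tau: float) -> int:
    """Upper bound on the tau-covering number of a hypercube [0, r]^d.

    A uniform grid with spacing 2*tau/sqrt(d) covers the cube with Euclidean
    balls of radius tau, so ceil(r*sqrt(d)/(2*tau)) points per axis suffice.
    """
    points_per_axis = math.ceil(r * math.sqrt(d) / (2.0 * tau))
    return max(points_per_axis, 1) ** d

# example: 2-D state space with edge length 6 and grid constant tau = 0.01
M = covering_number_hypercube(r=6.0, d=2, tau=0.01)
print(M)                       # number of grid points needed
print(2 * math.log(M / 0.01))  # a log(M/delta)-type constant grows only mildly in M
```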

Moreover, it should be noted that $\tau$ can be chosen arbitrarily small, such that the effect of the constant $\gamma(\tau)$ can always be reduced to an amount which is negligible compared to the term $\sqrt{\beta(\tau)}\,\sigma(x)$. Even conservative approximations of the Lipschitz constants $L_f$ and $L_{\nu}$ and a loose modulus of continuity $\omega_{\sigma}(\cdot)$ do not affect the error bound (8) much, since (6) grows merely logarithmically with diminishing $\tau$. In fact, even the bounds (4) and (5), which grow with the number of training points $N$ as shown in the proof of Theorem 3.3 and thus are unbounded, can be compensated, such that a vanishing uniform error bound can be proven under weak assumptions in Section 3.3.

3.2 Probabilistic Lipschitz Constants for Gaussian Processes

If little prior knowledge of the unknown function $f$ is given, it might not be possible to directly derive a Lipschitz constant $L_f$ on $\mathbb{X}$. However, we indirectly assume a certain distribution of the derivatives of $f$ through Assumption 3.1. Therefore, it is possible to derive a probabilistic Lipschitz constant from this assumption, as described in the following theorem.

Theorem 3.2.

Consider a zero mean Gaussian process defined through the covariance kernel $k$ with continuous partial derivatives up to the fourth order and partial derivative kernels

$k^{\partial i}(x, x') = \frac{\partial^2}{\partial x_i \, \partial x_i'} k(x, x'), \quad i = 1, \ldots, d.$   (10)

Let $L_k^{\partial i}$ denote the Lipschitz constants of the partial derivative kernels $k^{\partial i}$ on the set $\mathbb{X}$ with maximal extension $r$. Then, a sample function $f$ of the Gaussian process is almost surely continuous on $\mathbb{X}$, and with probability of at least $1 - \delta_L$, $\delta_L \in (0,1)$, it holds that

(11)

is a Lipschitz constant of $f$ on $\mathbb{X}$.

Note that a higher differentiability of the covariance kernel is required compared to Theorem 3.1. The reason for this is that the proof of Theorem 3.2 exploits the fact that the partial derivative of a differentiable kernel is again a covariance function, which defines a derivative Gaussian process Ghosal and Roy (2006). In order to obtain continuity of the samples of these derivative processes, the derivative kernels must be continuously differentiable Dudley (1967). Using the metric entropy criterion Dudley (1967) and the Borell-TIS inequality Talagrand (1994), we exploit the continuity of sample functions and bound their maximum value, which directly translates into the probabilistic Lipschitz constant (11).

Note that all the values required in (11) can be directly computed. The maxima of the derivative kernels as well as their Lipschitz constants can be calculated analytically for many common kernels. Therefore, the Lipschitz constant obtained with Theorem 3.2 can be directly used in Theorem 3.1 through application of the union bound. Since the Lipschitz constant depends only logarithmically on the probability $\delta_L$, small error probabilities for the Lipschitz constant can easily be achieved.
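As an example of such analytic computations (an illustration, not the expression (11) itself): for a squared exponential kernel with automatic relevance determination, the partial derivative kernels evaluated on the diagonal are constant, $k^{\partial i}(x, x) = \sigma_f^2 / l_i^2$, so their maxima are available in closed form; the snippet also shows the union-bound combination of the two failure probabilities.

```python
import numpy as np

# ARD squared exponential kernel hyperparameters (illustrative values)
sf2 = 1.5                         # signal variance sigma_f^2
ell = np.array([0.8, 1.2])        # lengthscales l_i per input dimension

# max_x k^{di}(x, x) for the SE-ARD kernel is sigma_f^2 / l_i^2 (constant in x)
deriv_kernel_max = sf2 / ell**2
print(deriv_kernel_max)

# union bound: if the Lipschitz estimate holds with prob. 1 - delta_L and the
# error bound of Theorem 3.1 with prob. 1 - delta, both hold jointly with
# probability at least 1 - (delta_L + delta)
delta_L, delta = 0.005, 0.005
print(1.0 - (delta_L + delta))
```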

Remark 3.1.

The work in González et al. (2016) also derives estimates of Lipschitz constants. However, it only considers the Lipschitz constant of the posterior mean function, which neglects the probabilistic nature of the GP and thereby underestimates the Lipschitz constants of sample functions of the GP.

3.3 Analysis of Asymptotic Behavior

In safe reinforcement learning and control of unknown systems, an important question concerns the existence of lower bounds on the learning error, since such bounds limit the achievable control performance. It is clear that the available data and constraints on the computational resources pose such lower bounds in practice. However, it is not clear under which conditions, e.g. requirements on computational power, an arbitrarily low uniform error can be guaranteed. The asymptotic analysis of the error bound, i.e. the investigation of the bound (8) in the limit of infinitely many training data, can clarify this question. The following theorem is the result of this analysis.

Theorem 3.3.

Consider a zero mean Gaussian process defined through the continuous covariance kernel $k$ with Lipschitz constant $L_k$ on the set $\mathbb{X}$. Furthermore, consider an infinite data stream of observations of an unknown function $f$ with Lipschitz constant $L_f$ and maximum absolute value $\bar{f}$ on $\mathbb{X}$, which satisfies Assumption 3.1. Let $\nu_N(\cdot)$ and $\sigma_N(\cdot)$ denote the mean and standard deviation of the Gaussian process conditioned on the first $N$ observations. If the posterior standard deviation $\sigma_N(\cdot)$ vanishes uniformly on $\mathbb{X}$ at a sufficiently fast rate as $N \to \infty$, then it holds for every $\epsilon \in \mathbb{R}_+$ that

(12)

In addition to the conditions of Theorem 3.1, the absolute value of the unknown function $f$ is required to be bounded by a value $\bar{f}$. This is necessary to bound the Lipschitz constant $L_{\nu_N}$ of the posterior mean function in the limit of infinite training data. Even if no such constant is known, it can be derived from properties of the GP under weak conditions, similarly to Theorem 3.2. Based on this restriction, it can be shown that the bound on the Lipschitz constant grows at most polynomially in $N$, using the triangle inequality and the fact that the squared norm of the observation noise follows a $\chi^2$ distribution with probabilistically bounded maximum value Laurent and Massart (2000). Therefore, we pick $\tau$ as a sufficiently fast decaying function of $N$ such that both terms of the error bound vanish, which implies (12).

The condition on the convergence rate of the posterior standard deviation in Theorem 3.3 can be seen as a condition on the distribution of the training data, which depends on the structure of the covariance kernel. In (Lederer et al., 2019, Corollary 3.2), the condition is formulated in terms of the training points contained in a ball of suitably chosen radius around each $x \in \mathbb{X}$: the posterior variance converges to zero if the number of such points grows sufficiently fast. This is achieved, e.g., if a constant fraction of all samples lies on the point $x$ itself. In fact, it is straightforward to derive a similar condition for the uniform error bounds in Srinivas et al. (2012); Chowdhury and Gopalan (2017). However, due to their dependence on the maximal information gain, the required decrease rates depend on the covariance kernel and are typically higher. For example, the posterior standard deviation of a Gaussian process with a squared exponential kernel must decay considerably faster for the bounds in Srinivas et al. (2012) and Chowdhury and Gopalan (2017) to vanish.
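The role of the data distribution can be checked with a small closed-form experiment: if all $N$ samples are taken at a single point $x^*$ with prior variance $k(x^*, x^*) = k_0$ and noise variance $\sigma_n^2$, the posterior variance reduces to $k_0 \sigma_n^2 / (\sigma_n^2 + N k_0)$, so the posterior standard deviation decays like $\sigma_n / \sqrt{N}$; the numbers below are illustrative.

```python
import numpy as np

k0, sn2 = 1.0, 0.01   # prior variance k(x*, x*) and noise variance sigma_n^2

for N in [10, 100, 1000, 10000]:
    # closed-form posterior variance when all N samples lie on the same point x*
    post_var = k0 * sn2 / (sn2 + N * k0)
    print(N, np.sqrt(post_var), np.sqrt(sn2 / N))   # ~ sigma_n / sqrt(N)
```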

4 Safety Guarantees for Control of Unknown Dynamical Systems

Safety guarantees for dynamical systems, in terms of upper bounds for the tracking error, are becoming more and more relevant as learning controllers are applied in safety-critical applications like autonomous driving or robots working in close proximity to humans Umlauft et al. (2017a, c, b). We therefore show how the results of Theorem 3.1 can be applied to safely control unknown dynamical systems. In Section 4.1, we propose a tracking control law for systems which are learned with GPs. The stability of the resulting controller is analyzed in Section 4.2.

4.1 Tracking Control Design

Consider the nonlinear control affine dynamical system

(13)

with state $x$ and control input $u$. While the structure of the dynamics (13) is known, the function $f$ is not. However, we assume that it is a sample from a GP with kernel $k$. Systems of the form (13) cover a large range of applications including Lagrangian dynamics and many physical systems.

The task is to define a policy for which the output tracks the desired trajectory $x_d(t)$ such that the tracking error $e$ vanishes over time, i.e., $\lim_{t \to \infty} e(t) = 0$. For notational simplicity, we introduce the filtered state $r$, which combines the tracking error and its derivatives.

A well-known method for tracking control of control affine systems is feedback linearization Khalil (2002), which aims for a model-based compensation of the nonlinearity $f$ using an estimate $\hat{f}$ and then applies linear control principles for the tracking. The feedback linearizing policy reads as

(14)

where the linear control law is the PD controller

(15)

with control gain $k_c$. This results in the following dynamics of the filtered state

(16)

Assuming training data of the real system $\{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$ are available, we utilize the posterior mean function $\nu(\cdot)$ for the model estimate $\hat{f}$. This implies that observations of $f$ are corrupted by noise, while the state $x$ is measured free of noise. This is of course debatable, but in practice measuring the time derivative $\dot{x}$ is usually realized with finite difference approximations, which inject significantly more noise than a direct measurement of the state.
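The sketch below illustrates the control law for one common instantiation of (13), a second-order system $\ddot{x} = f(x, \dot{x}) + u$ with unit input gain; the filtered error $r = \dot{e} + \lambda e$, the gain names and the GP interface gp_mean are assumptions for illustration and not necessarily the definitions used in the paper.

```python
import numpy as np

def feedback_linearizing_control(x, dx, xd, dxd, ddxd, gp_mean,
                                 kc=2.0, lam=1.0):
    """Tracking controller for an assumed system x'' = f(x, x') + u.

    gp_mean: callable returning the GP posterior mean estimate of f at (x, x').
    Returns u = -f_hat + feedforward + PD action on the filtered error r.
    """
    e, de = x - xd, dx - dxd              # tracking error and its derivative
    r = de + lam * e                      # filtered state
    f_hat = gp_mean(np.array([x, dx]))    # model-based compensation of f
    u = -f_hat + ddxd - lam * de - kc * r
    return u
```

With this choice the filtered error obeys $\dot{r} = -k_c r + (f - \hat{f})$, so the feedback term has to dominate the model error, which is exactly the quantity bounded by Theorem 3.1.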

4.2 Stability Analysis

Due to safety constraints, e.g. for robots interacting with humans, it is usually necessary to verify that the model $\hat{f}$ is sufficiently precise and that the parameters of the controller are chosen properly. These safety certificates can be obtained if there exists an upper bound for the tracking error as defined in the following.

Definition 4.1 (Ultimate Boundedness).

The trajectory $x(t)$ of a dynamical system is globally ultimately bounded if there exists a positive constant $b$ such that for every $a \in \mathbb{R}_+$, there is a $T = T(a, b) \ge 0$ such that $\|x(0)\| \le a$ implies $\|x(t)\| \le b$ for all $t \ge T$.

Since the solutions $x(t)$ cannot be computed analytically, a stability analysis is necessary, which allows conclusions regarding the closed-loop behavior without running the policy on the real system Khalil (2002).

Theorem 4.1.

Consider a control affine system (13), where $f$ admits a Lipschitz constant $L_f$ on $\mathbb{X}$. Assume that $f$ and the observations $\{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$ satisfy the conditions of Assumption 3.1. Then, the feedback linearizing controller (14) with $\hat{f} = \nu$ guarantees with probability $1 - \delta$ that the tracking error converges to

(17)

with $\beta(\tau)$ and $\gamma(\tau)$ defined in Theorem 3.1.

Based on Lyapunov theory, it can be shown that the tracking error converges if the feedback term dominates the model error $f(x) - \nu(x)$. As Theorem 3.1 bounds the model error, the set for which this holds can be computed. It can directly be seen that the ultimate bound can be made arbitrarily small by increasing the gain $k_c$ or by adding training points to decrease $\sigma(x)$. Computing the set in (17) allows one to check whether the controller (14) adheres to the safety requirements.
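To make the reasoning explicit, consider the scalar filtered-error dynamics $\dot{r} = -k_c r + \Delta(t)$ with $|\Delta(t)| \le \bar{\Delta}$, a simplified stand-in for the closed-loop dynamics in which $\bar{\Delta}$ plays the role of the uniform model-error bound from Theorem 3.1. The comparison lemma yields

$|r(t)| \le e^{-k_c t} |r(0)| + \frac{\bar{\Delta}}{k_c} \left( 1 - e^{-k_c t} \right), \qquad \limsup_{t \to \infty} |r(t)| \le \frac{\bar{\Delta}}{k_c},$

which makes explicit that larger gains $k_c$ or a smaller model error shrink the set to which the tracking error converges.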

5 Numerical Evaluation

We evaluate our theoretical results in two simulations.² In Section 5.1, we investigate the effect of applying Theorem 3.2 to determine a probabilistic Lipschitz constant for an unknown synthetic system. Furthermore, we analyze the effect of unevenly distributed training samples on the tracking error bound from Theorem 4.1. In Section 5.2, we apply the feedback linearizing controller (14) to a tracking problem with a robotic manipulator.

5.1 Synthetic System with Unknown Lipschitz Constant 

As an example of a system of the form (13), we consider a nonlinear system with unknown $f$. The training set is formed from noisy observations of $f$ on a uniform grid over the considered region of the state space. The reference trajectory is a circle and the controller gains are set to fixed values. We choose a probability of failure $\delta$ and a grid constant $\tau$. The state space under consideration is a rectangle. A squared exponential kernel with automatic relevance determination is utilized, for which $L_k$ and the constants of the derivative kernels are derived analytically from the optimized hyperparameters. We make use of Theorem 3.2 to estimate the Lipschitz constant $L_f$, which turns out to be a conservative bound. However, this is not crucial, because $\tau$ can be chosen arbitrarily small, so that the corresponding term is dominated by $\sqrt{\beta(\tau)}\,\sigma(x)$. As Theorems 3.2 and 3.1 are utilized sequentially in this example, a union bound is applied to combine $\delta_L$ and $\delta$.
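A minimal sketch of how the pieces from Section 3 are combined in this experiment, assuming the forms of $\beta(\tau)$ and $\gamma(\tau)$ stated in (6) and (7); all arguments are placeholders for the quantities obtained from Theorems 3.1 and 3.2 and the covering-number bound.

```python
import math

def error_bound(sigma_x, tau, delta, M, L_nu, L_f, omega_sigma_tau):
    """Pointwise error bound sqrt(beta(tau))*sigma(x) + gamma(tau), cf. (6)-(8)."""
    beta = 2.0 * math.log(M / delta)                                  # (6)
    gamma = (L_nu + L_f) * tau + math.sqrt(beta) * omega_sigma_tau    # (7)
    return math.sqrt(beta) * sigma_x + gamma                          # bound in (8)
```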

The results are shown in Figs. 1 and 2. Both plots show that the safety bound is rather conservative here, which also results from the small value chosen for the violation probability.

Figure 1: Snapshots of the state trajectory (blue) as it approaches the desired trajectory (green). In areas of low uncertainty (yellow background), the set guaranteed by Theorem 4.1 (red) is significantly smaller than in areas of high uncertainty (blue background).
Figure 2: When the ultimate bound (red) is large, the tracking error (blue) increases due to the less precise model.

5.2 Robotic Manipulator with 2 Degrees of Freedom

We consider a planar robotic manipulator with 2 degrees of freedom (DoFs), with unit lengths and unit masses / inertias for all links. For this example, we consider the input matrix of (13) to be known and extend Theorem 3.1 to the multidimensional case using the union bound. The state space, consisting of the two joint angles and velocities, is four dimensional, and we consider a compact subset of it. The training points are distributed over this subset and the control gain is adapted, while the other constants remain the same as in Section 5.1. The desired trajectories for both joints are again sinusoidal, as shown in Fig. 3 on the right side. The robot dynamics are derived according to (Murray et al., 1994, Chapter 4).

Theorem 3.1 allows us to derive an error bound in the joint space of the robot according to Theorem 4.1, which can be transformed into the task space as shown in Fig. 3 on the left. Thus, based on the learned (initially unknown) dynamics, it can be guaranteed that the robot will not leave the depicted area and can thereby be considered safe.
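The transformation from joint space to task space can be sketched as follows: sample joint configurations from the guaranteed joint-space set around the desired trajectory and map them through the forward kinematics of the planar 2-DoF arm with unit link lengths; the sampling-based mapping is an illustrative approximation and not the exact procedure behind Fig. 3.

```python
import numpy as np

def forward_kinematics(q, l1=1.0, l2=1.0):
    """End-effector position of a planar 2-DoF arm with link lengths l1, l2."""
    x = l1 * np.cos(q[..., 0]) + l2 * np.cos(q[..., 0] + q[..., 1])
    y = l1 * np.sin(q[..., 0]) + l2 * np.sin(q[..., 0] + q[..., 1])
    return np.stack([x, y], axis=-1)

def task_space_envelope(q_desired, joint_error_bound, n_samples=1000, seed=0):
    """Map a ball of radius joint_error_bound around a desired joint
    configuration into task space by sampling (Monte Carlo approximation)."""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal((n_samples, 2))
    d = joint_error_bound * d / np.linalg.norm(d, axis=1, keepdims=True)
    return forward_kinematics(q_desired[None, :] + d)

# example: envelope around one desired configuration with a 0.1 rad bound
pts = task_space_envelope(np.array([0.3, 0.8]), joint_error_bound=0.1)
```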

Previous error bounds for GPs are not applicable in this practical setting because they i) do not allow the observation noise on the training data to be Gaussian Srinivas et al. (2012), which is a common assumption in robotics, ii) utilize constants which cannot be computed efficiently (e.g. the maximal information gain in Srinivas et al. (2010)), or iii) make assumptions which are difficult to verify in practice (e.g. a bound on the RKHS norm of the unknown dynamical system Berkenkamp et al. (2016)).

Figure 3: The task space of the robot (left) shows that the robot is guaranteed to remain in the red set after a transient phase. Hence, the remaining state space (green) can be considered safe. The joint angles and velocities (right) converge to the desired trajectories (dashed lines) over time.

6 Conclusion

This paper presents a novel uniform error bound for Gaussian process regression. By exploiting the inherent probability distribution of Gaussian processes instead of the reproducing kernel Hilbert space attached to the covariance kernel, a wider class of functions can be considered. Furthermore, we demonstrate how probabilistic Lipschitz constants can be estimated from the GP distribution and derive sufficient conditions to reach arbitrarily small uniform error bounds. We employ the derived results to show safety bounds for a tracking control algorithm and evaluate them in simulation for a robotic manipulator.

Acknowledgments

Armin Lederer gratefully acknowledges financial support from the German Academic Scholarship Foundation.

Appendix A Proof of Theorem 3.1

Proof of Theorem 3.1.

We first derive the Lipschitz constant of the posterior mean $\nu(\cdot)$ and the modulus of continuity of the standard deviation $\sigma(\cdot)$, before we prove the bound on the regression error. The norm of the difference between the posterior mean evaluated at two different points $x, x' \in \mathbb{X}$ is given by

with

(18)

Due to the Cauchy-Schwarz inequality and the Lipschitz continuity of the kernel we obtain

which proves the Lipschitz continuity of the mean $\nu(\cdot)$. In order to calculate a modulus of continuity for the posterior standard deviation $\sigma(\cdot)$, observe that the difference of the variance evaluated at two points can be expressed as

(19)

Since the standard deviation is non-negative, we have

(20)

and hence, we obtain

(21)

Therefore, it is sufficient to bound the difference of the variance at two points and take the square root of the resulting expression. Due to the Cauchy-Schwarz inequality and Lipschitz continuity of the absolute value of the difference of the variance can be bounded by

(22)

On the one hand, we have

(23)

due to the Lipschitz continuity of $k$. On the other hand, we have

(24)

The modulus of continuity follows from substituting (23) and (24) in (22) and taking the square root of the resulting expression. Finally, we prove the probabilistic uniform error bound by exploiting the fact that for every grid with $M$ grid points and

(25)

it holds with probability of at least $1 - \delta$ that Srinivas et al. (2012)

(26)

Choose $\beta(\tau)$ as in (6); then

(27)

holds with probability of at least $1 - \delta$. Due to the continuity of $f$, $\nu$ and $\sigma$, we obtain

(28)
(29)
(30)

Moreover, the minimum number of grid points satisfying (25) is given by the covering number $M(\tau, \mathbb{X})$. Hence, we obtain

(31)

where

(32)
(33)

Appendix B Proof of Theorem 3.2

In order to prove Theorem 3.2, several auxiliary results are necessary, which are derived in the following. The first lemma concerns the expected supremum of a Gaussian process.

Lemma B.1.

Consider a Gaussian process with a continuously differentiable covariance function $k$ and let $L_k$ denote its Lipschitz constant on the set $\mathbb{X}$ with maximum extension $r$. Then, the expected supremum of a sample function $f$ of this Gaussian process satisfies

(34)
Proof.

We prove this lemma by making use of the metric entropy criterion for the sample continuity of some version of a Gaussian process Dudley (1967). This criterion allows us to bound the expected supremum of a sample function by

(35)

where the $\varepsilon$-packing number of $\mathbb{X}$ is taken with respect to the covariance pseudo-metric

(36)

Instead of bounding the packing number, we bound the corresponding covering number, which is known to be an upper bound. The covering number can be easily bounded by transforming the problem of covering $\mathbb{X}$ with respect to the pseudo-metric into a covering problem in the original metric of $\mathbb{X}$. For this reason, define

Footnotes

  1. Notation: Lower/upper case bold symbols denote vectors/matrices, $\mathbb{R}_+$/$\mathbb{R}_{0,+}$ all real positive/non-negative numbers, $\mathbb{N}$ all natural numbers, $I_N$ the $N \times N$ identity matrix, the dot in $\dot{x}$ the derivative of $x$ with respect to time, and $\|\cdot\|$ the Euclidean norm. A function $f$ is said to admit a modulus of continuity $\omega(\cdot)$ if and only if $|f(x) - f(x')| \le \omega(\|x - x'\|)$. The $\tau$-covering number $M(\tau, \mathbb{X})$ of a set $\mathbb{X}$ (with respect to the Euclidean metric) is defined as the minimum number of spherical balls with radius $\tau$ which is required to completely cover $\mathbb{X}$. Big $\mathcal{O}$ notation is used to describe the asymptotic behavior of functions.
  2. Matlab code is online available: https://gitlab.lrz.de/ga68car/GPerrorbounds4safecontrol

References

  1. Random Fields and Geometry. Springer Science & Business Media.
  2. Error Bounds for Anisotropic RBF Interpolation. Journal of Approximation Theory 162 (3), pp. 512–527.
  3. Stable Gaussian Process based Tracking Control of Euler–Lagrange Systems. Automatica 103 (23), pp. 390–397.
  4. Safe Learning of Regions of Attraction for Uncertain, Nonlinear Systems with Gaussian Processes. In Proceedings of the IEEE Conference on Decision and Control, pp. 4661–4666.
  5. Safe Model-based Reinforcement Learning with Stability Guarantees. In Advances in Neural Information Processing Systems.
  6. On Kernelized Multi-armed Bandits. In Proceedings of the International Conference on Machine Learning, pp. 844–853.
  7. On the Impact of Kernel Approximation on Learning Accuracy. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 113–120.
  8. A Survey on Policy Search for Robotics. Foundations and Trends in Robotics 2 (1-2), pp. 1–142.
  9. Kernel Ridge vs. Principal Component Regression: Minimax Bounds and the Qualification of Regularization Operators. Electronic Journal of Statistics 11 (1), pp. 1022–1047.
  10. The Sizes of Compact Subsets of Hilbert Space and Continuity of Gaussian Processes. Journal of Functional Analysis 1 (3), pp. 290–330.
  11. Gaussian Processes for Dynamic Movement Primitives with Application in Knowledge-based Cooperation. In Proceedings of the IEEE Conference on Intelligent Robots and Systems, pp. 3913–3919.
  12. Posterior Consistency of Gaussian Process Prior for Nonparametric Binary Regression. The Annals of Statistics 34 (5), pp. 2413–2429.
  13. Batch Bayesian Optimization via Local Penalization. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 648–657.
  14. Lp-Error Estimates for Radial Basis Function Interpolation on the Sphere. Journal of Approximation Theory 129 (1), pp. 58–77.
  15. An Empirical Evaluation of Deep Learning on Highway Driving. arXiv:1504.01716, pp. 1–7.
  16. Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences. arXiv:1807.02582, pp. 1–64.
  17. Nonlinear Systems, 3rd ed. Prentice-Hall, Upper Saddle River, NJ.
  18. Adaptive Estimation of a Quadratic Functional by Model Selection. The Annals of Statistics 28 (5), pp. 1302–1338.
  19. Posterior Variance Analysis of Gaussian Processes with Application to Average Learning Curves. arXiv:1906.01404.
  20. Improving the Sample Complexity using Global Data. IEEE Transactions on Information Theory 48 (7), pp. 1977–1991.
  21. Functions of Positive and Negative Type, and their Connection with the Theory of Integral Equations. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 209 (441-458), pp. 415–446.
  22. A Mathematical Introduction to Robotic Manipulation. CRC Press.
  23. Sobolev Error Estimates and a Bernstein Inequality for Scattered Data Interpolation via Radial Basis Functions. Constructive Approximation 24 (2), pp. 175–186.
  24. Neural Networks for Modelling and Control of Dynamical Systems - A Practitioner's Handbook. Springer, London.
  25. Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA.
  26. Improved Error Bounds for Scattered Data Interpolation by Radial Basis Functions. Mathematics of Computation 68 (225), pp. 201–217.
  27. Learning Theory Estimates for Coefficient-based Regularized Regression. Applied and Computational Harmonic Analysis 34 (2), pp. 252–265.
  28. Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting. IEEE Transactions on Information Theory 58 (5), pp. 3250–3265.
  29. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In Proceedings of the International Conference on Machine Learning, pp. 1015–1022.
  30. On the Influence of the Kernel on the Consistency of Support Vector Machines. Journal of Machine Learning Research 2, pp. 67–93.
  31. Posterior Consistency for Gaussian Process Approximations of Bayesian Posterior Distributions. Mathematics of Computation 87 (310), pp. 721–753.
  32. Sharper Bounds for Gaussian and Empirical Processes. The Annals of Probability 22 (1), pp. 28–76.
  33. Scenario-based Optimal Control for Gaussian Process State Space Models. In Proceedings of the European Control Conference.
  34. Feedback Linearization using Gaussian Processes. In Proceedings of the IEEE Conference on Decision and Control, pp. 5249–5255.
  35. Bayesian Uncertainty Modeling for Programming by Demonstration. In Proceedings of the IEEE Conference on Robotics and Automation, pp. 6428–6434.
  36. Learning Stable Gaussian Process State Space Models. In Proceedings of the American Control Conference, pp. 1499–1504.
  37. An Uncertainty-Based Control Lyapunov Approach for Control-Affine Systems Modeled by Gaussian Process. IEEE Control Systems Letters 2 (3), pp. 483–488.
  38. Information Rates of Nonparametric Gaussian Process Methods. Journal of Machine Learning Research 12, pp. 2095–2119.
  39. On Prediction Properties of Kriging: Uniform Error Bounds and Robustness. Journal of the American Statistical Association, pp. 1–38.
  40. Scattered Data Approximation. Cambridge University Press.
  41. Local Error Estimates for Radial Basis Function Interpolation of Scattered Data. IMA Journal of Numerical Analysis 13 (1), pp. 13–27.
  42. Learning Bounds for Kernel Regression using Effective Data Dimensionality. Neural Computation 17 (9), pp. 2077–2098.