Bernstein approximation of optimal control problems††thanks: This work was supported by AFOSR, ONR, NSF and NASA.
The Bernstein polynomial approximation of a continuous function converges more slowly than most other approximation methods. “This fact seems to have precluded any numerical application of Bernstein polynomials from having been made. Perhaps they will find application when the properties of the approximant in the large are of more importance than the closeness of the approximation.” – remarked P.J. Davis in his 1963 book Interpolation and Approximation.
This paper presents a direct approximation method for nonlinear optimal control problems with mixed input and state constraints based on Bernstein polynomial approximation. We provide a rigorous analysis showing that the proposed method yields consistent approximations of time continuous optimal control problems. Furthermore, we demonstrate that the proposed method can also be used for costate estimation of the optimal control problems. This latter result leads to the formulation of the Covector Mapping Theorem for Bernstein polynomial approximation. Finally, we explore the numerical and geometric properties of Bernstein polynomials, and illustrate the advantages of the proposed approximation method through several numerical examples.
Optimal control problems that arise in engineering applications are in general very complex. Finding a closed-form solution to these problems can be difficult or even impossible, and therefore they must be solved numerically. Numerical methods are divided into indirect and direct methods. Indirect methods solve the problems by converting them into boundary value problems; the solutions are then found by solving systems of differential equations. Direct methods, on the other hand, transcribe optimal control problems into nonlinear programming problems (NLPs) using some discretization scheme [1, 2, 3, 4]. These NLPs can be solved using off-the-shelf NLP solvers (e.g. MATLAB, SNOPT, etc.) and, unlike indirect methods, do not require the calculation of costate and adjoint variables.
A pioneering work in the literature on direct methods is that of Polak on the theory of consistent approximations reported in his book (see [5, Section 3.3]). Borrowing tools from variational analysis, Polak provides a theoretical framework to assess the convergence properties of direct methods. Motivated by this theory, a wide range of direct methods using different discretization schemes have been developed, including Euler, Runge-Kutta, and pseudospectral methods, as well as the method presented in this paper.
Pseudospectral methods are currently the most popular direct methods, and they have been applied successfully to a wide range of optimization problems, e.g. [8, 9, 10, 11, 12, 13, 7]. They offer several advantages over many other discretization methods, mainly owing to their spectral (exponential) rate of convergence. However, as pointed out in [14, 15, 16, 17], there is one salient disadvantage associated with these methods: when the state and/or the input are discretized, the constraints are enforced only at the discretization nodes, and satisfaction of the constraints cannot be guaranteed in between the nodes. To avoid violating the constraints in between the nodes, the order of approximation (number of nodes) can be increased; however, this leads to larger NLPs, which may become computationally expensive and inefficient to solve. This problem is not limited to pseudospectral methods; it is common to all discretization-based methods.
This undesirable behaviour becomes obvious, for example, in the optimal trajectory generation problem for multi-vehicle missions, where a large number of vehicles must reach their final destinations by following trajectories that guarantee intervehicle separation for safety at all times. Clearly, with a small order of approximation, separation between the trajectories can hardly be guaranteed. Increasing the number of nodes will eventually produce spatially separated trajectories, but it will also drastically increase the number of collision avoidance constraints, and thus the complexity of the problem.
Pseudospectral methods also suffer from a drawback when dealing with non-smooth optimal control problems. This drawback is mainly related to the well-known Gibbs phenomenon, common to all approximation methods based on orthogonal polynomials. The Gibbs phenomenon, visible in the form of oscillations, reduces the accuracy of the approximation to first order away from the discontinuities and prevents uniform convergence in the neighborhoods of jumps. Several extensions of pseudospectral methods have been developed to lessen the effect of the Gibbs phenomenon (e.g. [20, 21, 22]). The most accurate of these methods require the locations of the discontinuities to be known a priori, which is often impractical. Other methods are based on estimating these locations, which can result in inefficiency or ill-conditioning of the discretized problem, especially when the number of discontinuities is large and unknown.
The present article proposes a direct method based on Bernstein approximation. Bernstein approximants have several desirable properties. First of all, the approximants converge uniformly to the functions they approximate – and so do their derivatives [23, Chapter 3]. This, as we discuss later, is useful for deriving the convergence properties of the proposed computational method. Moreover, Bernstein polynomials behave well even when the functions being approximated are non-smooth. In fact, as demonstrated in , the Gibbs phenomenon does not occur when Bernstein polynomials are used to approximate piecewise smooth, monotone functions with left and right derivatives at every point. As a result, the proposed method lends itself to problems with discontinuous states and/or controls, e.g. bang-bang optimal control problems (see also ). Finally, owing to their favorable geometric properties (see [23, Chapter 5]), Bernstein polynomials afford computationally efficient algorithms for the evaluation of state and input constraints over the whole time interval on which optimization takes place, and not only at the discretization points (see [26, 16]). Hence, with the proposed approach the solutions can be guaranteed to be feasible and to satisfy the constraints for all times, while retaining the computational efficiency of discretization-based methods.
“There is a price that must be paid for these beautiful approximation properties: the convergence of the Bernstein polynomials is very slow.” – wrote P.J. Davis in his 1963 book Interpolation and Approximation. He continues: “This fact seems to have precluded any numerical application of Bernstein polynomials from having been made. Perhaps they will find application when the properties of the approximant in the large are of more importance than the closeness of the approximation.” In fact, the slow convergence of the Bernstein approximation implies that the approach proposed in this paper is outperformed in convergence rate by, for example, pseudospectral methods. This is not surprising: the choice of nodes and interpolating polynomials in pseudospectral methods is dictated by approximation accuracy and convergence speed, at the cost of sacrificing constraint satisfaction in between the nodes. Our approach, on the other hand, prioritizes constraint satisfaction at the expense of a slower convergence rate.
The paper is structured as follows: Section 2 presents the notation and the mathematical results used later in the paper. Section 3 introduces the optimal control problem of interest and some related assumptions. Section 4 presents the NLP based on Bernstein approximation that approximates the optimal control problem. Section 5 demonstrates that the proposed method yields approximations that converge uniformly to the optimal solution. In Section 6 we derive the Karush-Kuhn-Tucker (KKT) conditions associated with the NLP. Section 7 compares these conditions to the first order optimality conditions of the original optimal control problem and states the Covector Mapping Theorem for Bernstein approximation. Numerical examples are discussed in Section 8. The paper ends with conclusions in Section 9.
2 Notation and mathematical background
Vector-valued functions are denoted by bold letters, , while vectors are denoted by bold letters with an upper bar, . The symbol denotes the space of functions with continuous derivatives. denotes the space of -vector valued functions in . denotes the Euclidean norm, .
2.1 Bernstein polynomials
The Bernstein basis polynomials of degree are defined as
for , with
They were originally introduced by the mathematician Sergei Natanovich Bernstein in 1912 to facilitate a constructive proof of the Weierstrass approximation theorem. An th order Bernstein polynomial is a linear combination of Bernstein basis polynomials of order , i.e.
where , , are referred to as Bernstein coefficients (also known as control points). For the sake of generality, and with a slight abuse of terminology, in this paper we extend the definition of a Bernstein polynomial given above to a vector of th order polynomials expressed in the following form
where are -dimensional Bernstein coefficients.
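Assuming the standard definition of the Bernstein basis on the normalized interval [0, 1], the quantities above can be evaluated directly with a few lines of code (a sketch; the function names are illustrative, not from the paper):

```python
from math import comb

def bernstein_basis(k, n, t):
    """k-th Bernstein basis polynomial of degree n, evaluated at t in [0, 1]."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

def bernstein_poly(coeffs, t):
    """Bernstein polynomial: linear combination of the basis polynomials
    weighted by the Bernstein coefficients (control points)."""
    n = len(coeffs) - 1
    return sum(c * bernstein_basis(k, n, t) for k, c in enumerate(coeffs))
```

Note that the basis polynomials sum to one at every point (partition of unity), and the polynomial takes the values of its first and last coefficients at the endpoints; these facts underlie the end point and convex hull properties stated below.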
Bernstein polynomials were popularized by Pierre Bézier in the early 1960s as tools for geometric design (Bézier used them to design car body shapes at the Renault company in France), and they are now widely used in computer graphics, animation, and font design, e.g. PostScript and TrueType fonts. For this reason, the Bernstein polynomial introduced in Equation (1) is often referred to as a Bézier curve, especially when it describes a spatial curve.
Bernstein polynomials possess favorable geometric and numerical properties that can be exploited in many application domains. For an extensive review of Bernstein polynomials and their properties the reader is referred to . The derivative and integral of a Bernstein polynomial are easily computed as
respectively. Bernstein polynomials can be used to approximate smooth functions. Consider a -vector valued function . The th order Bernstein approximation of is a vector of Bernstein polynomials computed as in (1) with and for all . Namely,
The following results hold for Bernstein approximations.
Lemma 1 (Uniform convergence of Bernstein approximation)
Let on , and be computed as in Equation (3). Then, for arbitrary order of approximation , the Bernstein approximation satisfies
where is a positive constant satisfying and is the modulus of continuity of in .
where are independent of .
If on , then we have
with , where is independent of . Moreover, if , then
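A quick numerical illustration of Lemma 1 (a sketch on the normalized interval [0, 1], with the equidistant coefficients from Equation (3); the function names are illustrative): the uniform error of the Bernstein approximation decreases as the order grows, but slowly, consistent with Davis's remark quoted in the introduction.

```python
from math import comb, sin, pi

def bernstein_approx(f, n):
    """n-th order Bernstein approximation of f on [0, 1]: the Bernstein
    coefficients are the samples f(k/n) at the equidistant nodes."""
    coeffs = [f(k / n) for k in range(n + 1)]
    return lambda t: sum(c * comb(n, k) * t**k * (1 - t)**(n - k)
                         for k, c in enumerate(coeffs))

def max_error(f, n, samples=200):
    """Sampled estimate of the uniform error between f and its approximant."""
    p = bernstein_approx(f, n)
    return max(abs(f(i / samples) - p(i / samples)) for i in range(samples + 1))

f = lambda t: sin(pi * t)
# for this smooth f the error shrinks roughly like 1/n:
# slow compared to spectral methods, but uniform on the whole interval
```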
The following properties of Bernstein polynomials are relevant to this paper.
Property 1 (End point values)
Property 2 (Convex hull)
A Bernstein polynomial is completely contained in the convex hull of its Bernstein coefficients (see Figure 0(b)).
Property 3 (de Casteljau Algorithm)
The de Casteljau algorithm is an efficient and numerically stable recursive method to evaluate a Bernstein polynomial at any given point. The de Casteljau algorithm is also used to split a Bernstein polynomial into two independent ones. Given an th order Bernstein polynomial , and a scalar , the Bernstein polynomial at can be computed using the following recursive relation
Then, the Bernstein polynomial evaluated at is given by
Moreover, the Bernstein polynomial can be subdivided at into two th order Bernstein polynomials with Bernstein coefficients
Figure 0(c) depicts a 2D curve defined by an th order Bernstein polynomial (with Bernstein coefficients described by blue circles). The curve is subdivided into two th order Bernstein polynomials, each with Bernstein coefficients described by black and red circles.
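The recursion above can be sketched in a few lines (a minimal scalar-coefficient version on the normalized interval; the function name is illustrative). A single pass yields both the value at the chosen point and the coefficients of the two subdivided polynomials:

```python
def de_casteljau(coeffs, tau):
    """Evaluate a Bernstein polynomial at tau in [0, 1] by repeated linear
    interpolation of neighboring coefficients, and return the coefficients
    of the two polynomials obtained by subdividing at tau."""
    layers = [list(coeffs)]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([(1 - tau) * a + tau * b for a, b in zip(prev, prev[1:])])
    value = layers[-1][0]
    left = [layer[0] for layer in layers]              # coefficients on [0, tau]
    right = [layer[-1] for layer in reversed(layers)]  # coefficients on [tau, 1]
    return value, left, right
```

Each subdivided half is again a Bernstein polynomial of the same order, so the convex hull property can be re-applied to the two tighter hulls; this is what makes the recursive distance computations described next efficient.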
Property 4 (Minimum distance)
The minimum distance between two Bernstein polynomials and , with , namely
can be efficiently computed by exploiting the convex hull property and the de Casteljau algorithm, in combination with the Gilbert-Johnson-Keerthi (GJK) distance algorithm. The latter is widely used in computer graphics and video games to compute the minimum distance between convex shapes. In  the authors propose an iterative procedure that uses the above tools to compute (4) within a desired tolerance. This procedure is extremely useful in motion planning applications for efficiently computing the spatial clearance between two paths, or between a path and an obstacle. For example, the minimum distance between the 2D Bernstein polynomial and the point depicted in Figure 0(d) is computed in less than ms using a MATLAB implementation, while the minimum distance between the 3D Bernstein polynomials depicted in Figure 0(e) is computed in less than 30 ms. The same procedure can also be employed to compute the extrema (maximum and minimum) of a Bernstein polynomial.
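A simplified version of this procedure, for the distance between a 2D Bernstein polynomial and a point, can be sketched as follows. This is an illustrative branch-and-bound sketch, not the cited implementation: it replaces the GJK/convex-hull lower bound with the cheaper axis-aligned bounding box of the control points (valid by the convex hull property) and refines it by de Casteljau subdivision; all names are illustrative.

```python
from math import hypot

def split(ctrl, tau=0.5):
    """de Casteljau subdivision of a list of 2D control points at tau."""
    layers = [list(ctrl)]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([tuple((1 - tau) * a + tau * b for a, b in zip(u, v))
                       for u, v in zip(prev, prev[1:])])
    left = [layer[0] for layer in layers]
    right = [layer[-1] for layer in reversed(layers)]
    return left, right

def min_dist_to_point(ctrl, p, tol=1e-4):
    """Branch-and-bound minimum distance from a Bezier curve to a point p."""
    def bbox_dist(c):
        # lower bound: distance from p to the control points' bounding box,
        # which contains the curve segment by the convex hull property
        dx = max(min(x for x, _ in c) - p[0], p[0] - max(x for x, _ in c), 0.0)
        dy = max(min(y for _, y in c) - p[1], p[1] - max(y for _, y in c), 0.0)
        return hypot(dx, dy)
    # upper bound: distance to the curve's endpoints (which lie on the curve)
    best = min(hypot(q[0] - p[0], q[1] - p[1]) for q in (ctrl[0], ctrl[-1]))
    stack = [list(ctrl)]
    while stack:
        c = stack.pop()
        if bbox_dist(c) >= best - tol:
            continue  # this segment cannot improve the current best; prune
        best = min(best,
                   hypot(c[0][0] - p[0], c[0][1] - p[1]),
                   hypot(c[-1][0] - p[0], c[-1][1] - p[1]))
        left, right = split(c)
        stack += [left, right]
    return best
```

The same prune-or-subdivide pattern extends to curve-to-curve distances by replacing the bounding-box test with a GJK query between the two convex hulls.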
3 Problem formulation
This paper considers the following optimal control problem:
Problem 1 (Problem )
Determine and that minimize
where and are the terminal and running costs, respectively, describes the system dynamics, is the vector of boundary conditions, and is the vector of state and input constraints.
The following assumptions hold:
, , , , and are Lipschitz continuous with respect to their arguments.
Problem admits optimal solutions and that satisfy and .
4 Bernstein approximation of Problem P
The purpose of this section is to formulate a discretized version of Problem , here referred to as Problem , where denotes the order of approximation. This requires that we approximate the input and state functions, the cost function, the system dynamics, and the equality and inequality constraints in Problem .
First, consider the following th order vectors of Bernstein polynomials:
with , , and . Let and be defined as
Let be a set of equidistant time nodes, i.e. . Then Problem can be stated as follows:
Problem 2 (Problem )
Determine and that minimize
where , and is a small positive number that depends on the order of approximation and converges to zero as the order increases, i.e. .
Compared to the constraints of Problem , the dynamic and inequality constraints given by Equations (11) and (13) are relaxed. Motivated by previous work on the theory of consistent approximations , the bound , referred to as the relaxation bound, is introduced to guarantee that Problem has a feasible solution. As will become clear later, the relaxation bound can be made arbitrarily small by choosing a sufficiently large order of approximation . Furthermore, note that in the limit the right hand sides of Equations (11) and (13) equal zero, i.e. the difference between the constraints imposed by Problems and vanishes.
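To make the construction concrete, the following sketch sets up the discretized problem for a toy instance: minimize the integral of u(t)^2 subject to the dynamics x' = u with boundary conditions x(0) = 0, x(1) = 1, on the normalized interval. The Bernstein coefficients of x and u are the decision variables, the dynamics are enforced at the equidistant nodes, and the resulting NLP is handed to an off-the-shelf solver. This is an illustrative sketch, not the paper's implementation; it assumes SciPy's SLSQP solver, and the relaxation bound is left implicit in the solver's constraint tolerance.

```python
import numpy as np
from math import comb
from scipy.optimize import minimize

N = 8                                   # order of approximation
nodes = np.linspace(0.0, 1.0, N + 1)    # equidistant time nodes

def bern(coeffs, t):
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * t**k * (1 - t)**(n - k)
               for k, c in enumerate(coeffs))

def deriv_coeffs(coeffs):
    # the derivative of an N-th order Bernstein polynomial is an (N-1)-th
    # order one with coefficients N * (c_{k+1} - c_k)
    n = len(coeffs) - 1
    return [n * (b - a) for a, b in zip(coeffs, coeffs[1:])]

# decision vector z = [x_0, ..., x_N, u_0, ..., u_N] (Bernstein coefficients)
def cost(z):                            # running cost: integral of u(t)^2
    u = z[N + 1:]
    ts = (np.arange(100) + 0.5) / 100   # midpoint quadrature (sketch only)
    return float(np.mean([bern(u, t) ** 2 for t in ts]))

def dynamics(z):                        # x'(t_j) - u(t_j) = 0 at the nodes
    x, u = z[:N + 1], z[N + 1:]
    dx = deriv_coeffs(x)
    return [bern(dx, t) - bern(u, t) for t in nodes]

cons = [{'type': 'eq', 'fun': dynamics},
        {'type': 'eq', 'fun': lambda z: [z[0], z[N] - 1.0]}]  # x(0)=0, x(1)=1
res = minimize(cost, np.zeros(2 * (N + 1)), constraints=cons, method='SLSQP')
```

For this instance the optimal control is constant (u = 1, with cost 1) and the recovered Bernstein coefficients of x approach the equidistant samples of the optimal state x(t) = t.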
5 Feasibility and consistency of Problem
The outcome of Problem is a set of optimal Bernstein coefficients and that determine the vectors of Bernstein polynomials and , i.e.
Now we address the following theoretical issues:
existence of a feasible solution to Problem ,
convergence of the pair to the optimal solution of Problem , given by .
Theorem 1 (Feasibility)
where is a positive constant independent of , and , and are the moduli of continuity of , and , respectively. Then Problem is feasible for arbitrary order of approximation .
Proof: Let and be a feasible solution for Problem , which exists by Assumption 2. Let us define the Bernstein coefficients
and the resulting vectors of Bernstein polynomials as
If the optimal state and control solutions to Problem exist and satisfy and in , then Theorem 1 holds with where is a positive constant independent of .
From the definition of in Theorem 1 (and Corollary 1), it follows that for any arbitrarily small scalar there exists such that for all , we have . In other words, the relaxation bound in Problem can be made arbitrarily small by choosing sufficiently large , while retaining the feasibility result (Theorem 1 and Corollary 1).
Theorem 2 (Consistency)
Let be a sequence of optimal solutions to Problem , and be a sequence of Bernstein polynomials, given by (14). Assume has a uniform accumulation point, i.e.
and assume that and are continuous on . Then is an optimal solution for Problem .
Proof: The proof is divided into three steps: in Step 1 we prove that is a feasible solution to Problem ; in Step 2 we show that
and in Step 3 we prove that is an optimal solution of Problem , i.e.
Step 1. First, we show that satisfies the dynamic constraint of Problem :
We show this by contradiction. Assume that the above equality does not hold. Then there exists , such that
Since the nodes , are dense in , there exists a sequence of indices such that
Then, from continuity of , and , the left hand side of Equation (21) satisfies
However, the dynamic constraint in Problem is
which implies that
The above result contradicts Equation (21), thus proving that satisfies the dynamic constraint in Equation (6). The equality and inequality constraints in (12) and (13) follow easily by an identical argument.
Step 2. To prove that Equation (20) is satisfied we need to show the following:
Finally, applying the convergence assumption given by Equation (19), the result in Equation (22) follows. Similarly, using the Lipschitz assumption on , one can show that Equation (23) holds, thus completing the proof of Step (2).
Step 3. Finally, it remains to show that
First, recall that and are optimal solutions of Problem , while and are optimal solutions of Problem . Let , , and
Following an argument similar to the one in the proof of Theorem 1, one can show that there exists such that for any the pair is a feasible solution of Problem . Moreover, an argument similar to the one in the proof of Step (2) yields
Then we have
The last inequality, combined with (24), gives
which completes the proof of Theorem 2.
6 Costate estimation for Problem P using Bernstein approximation
6.1 First order optimality conditions of Problem P
We start by deriving the first order necessary conditions for Problem . Let be the costate trajectory, and let and be the multipliers. By defining the Lagrangian of the Hamiltonian (also known as the D-form ) as
where the Hamiltonian is given by
the dual of Problem P can be formulated as follows .
Problem 3 (Problem )
In the above problem, subscripts are used to denote partial derivatives, e.g. .
The following assumptions are imposed onto Problem .
and are continuously differentiable with respect to their arguments, and their gradients are Lipschitz continuous over the domain.
Solutions , , , and of Problem