Polynomial approximation of high-dimensional Hamilton-Jacobi-Bellman equations and applications to feedback control of semilinear parabolic PDEs
A procedure for the numerical approximation of high-dimensional Hamilton-Jacobi-Bellman (HJB) equations associated to optimal feedback control problems for semilinear parabolic equations is proposed. Its main ingredients are a pseudospectral collocation approximation of the PDE dynamics, and an iterative method for the nonlinear HJB equation associated to the feedback synthesis. The latter is known as the Successive Galerkin Approximation. It can also be interpreted as a Newton iteration for the HJB equation. At every step, the associated linear Generalized HJB equation is approximated via a separable polynomial approximation ansatz. Stabilizing feedback controls are obtained from solutions to the HJB equations for systems of dimension up to fourteen.
Optimal feedback controls for evolutionary control systems are of significant practical importance. Unlike open-loop optimal controls, they do not rely on knowledge of the initial condition, and they can achieve design objectives, such as stabilization, even in the presence of perturbations. Furthermore, the online synthesis of feedback control can be implemented in a real-time setting. It is well-known that their construction relies on special Hamilton-Jacobi-Bellman (HJB) equations, see for instance [4, 17]. The solution of the HJB equation is the value function associated to the optimal control problem, and its gradient is used to construct the optimal feedback control. In the very special, but important, case of a linear control system with quadratic cost and without constraints on the control or the state variables, the HJB equation reduces to a Riccati equation, which has received a tremendous amount of attention, both for control systems governed by ordinary and by partial differential equations. Otherwise one has to deal with the HJB equation, a partial differential equation whose spatial dimension is that of the control system. Thus optimal feedback control for partial differential equations leads to HJB equations in infinite dimensions. After semi-discretization in space of the controlled partial differential equation (PDE), the HJB equation is posed in a space of dimension corresponding to the spatial discretization of the PDE. For standard finite element or finite difference discretizations this leads to high-dimensional HJB equations. This is one instance of what is referred to as the curse of dimensionality.
Many attempts to tackle the difficulties of numerically solving the HJB equations arising in optimal control have been made in the past or are currently being investigated. We refer, for instance, to , which mainly focuses on semi-Lagrangian schemes, and to further references given there. A related approach to numerical optimal feedback control of PDEs is to semi-discretize the dynamics and to add a model order reduction step, either with Balanced Truncation or Proper Orthogonal Decomposition, in order to reduce the dimension of the dynamics to a number that is tractable for grid-based, semi-Lagrangian schemes. This approach has been successfully explored, for instance, in [1, 26, 29] and references therein. It relies strongly on a trustworthy representation of the dynamics via low-dimensional manifolds. Such a low-dimensional representation may deteriorate when nonlinear and/or advection effects are relevant. It is therefore important to strive for techniques, or combinations of techniques, which make it possible to solve higher-dimensional problems.
Another direction of research revolves around generalizing the Riccati-based approach to allow for nonlinearities in the state equation. One such technique is termed the state-dependent Riccati equation . Here the coefficients in the 'ordinary' Riccati equation are functions of the state rather than constants, as in the case of linear state equations. Another approach exploits the fact that the Riccati equation is the equation satisfied by the first term of the power series expansion of the value function, and improves the approximation by including higher-order terms of the expansion. These methods are succinctly explained in .
Yet another technique which has received a considerable amount of attention is termed Successive Galerkin Approximation. Roughly speaking, the nonlinear HJB equation associated to the continuous-time optimal control problem is solved by means of a Newton method. At each iteration, the control law is fixed. This leads to a Generalized Hamilton-Jacobi-Bellman equation (GHJB) which is linear. The iteration is closed by an update of the control law based on the gradient of the value function. This method was intensively investigated in [5, 6], see also , and the references given in these citations. It is worth mentioning that the discrete-time counterpart of this method corresponds to the well-known policy iteration, or Howard's algorithm [25, 11, 2].
The numerical examples in [5, 6, 7] do not go beyond dimension five, and most, if not all, of the published numerical results for nonlinear HJB equations do not exceed dimension eight [10, 20, 22]. An alternative sparse grid approach for high-dimensional approximation of HJB equations based on open-loop optimal control has been presented in , with tests up to dimension six. Numerical methods relying on tensor calculus have been shown to perform well in high-dimensional settings where the associated HJB equation is a linear PDE .
In the present paper, to solve optimal control problems for certain classes of semilinear parabolic equations we proceed as follows. To mitigate the curse of dimensionality, the discretization of the PDE is based on a pseudospectral collocation method, allowing a higher degree of accuracy with relatively few collocation points. To solve the resulting HJB equation we utilize a Newton method based on the GHJB equation as described above. Next, the discretization of the GHJB equation is addressed through a Galerkin approximation with polynomial, globally supported, ansatz functions. While this mitigates the curse of dimensionality by removing the mesh structure, it leads to high-dimensional integrals. We therefore resort to separable representations for the system dynamics and for the basis set of the polynomial approximation. The separability assumption reduces the computation of the Galerkin residual equation to products of one-dimensional integrals. The combination of these procedures allowed us to solve HJB equations related to nonlinear control systems up to dimension fourteen by means of basic parallelization tools. The successful use of the Newton procedure requires a feasible initialization, i.e., a suboptimal, stabilizing control. Since we do not consider constraints, this is not restrictive for finite horizon problems, but it can be challenging for infinite horizon problems, and specifically for the stabilization problems considered in the present paper. In this respect we developed a continuation procedure based on the use of a discount factor. Specifically, we consider a nested iterative procedure: in the outer loop the value of a positive discount factor is driven to zero, while in the inner loop the HJB equation is solved approximately for a fixed discount factor.
With this approach, which is summarized in Algorithms 1 and 2 below, we managed to solve optimal feedback stabilization problems for semilinear parabolic equations with different stability behavior of the desired steady state.
Let us give a brief outline of the paper. Section 2 sets the stage and provides the discussion of a special case to facilitate the understanding of the following material. In Section 3 the solution process of the HJB equation is detailed. In Section 4 we provide the formulas which are needed to numerically realize the discretized HJB equation after a separable basis has been chosen. Numerical experiments are documented in Section 5. There we can also find comparisons to suboptimal feedback strategies based on Riccati and asymptotic expansion techniques.
2. Infinite horizon optimal feedback control
We consider the following undiscounted infinite horizon optimal control problem:
subject to the nonlinear dynamical constraint
where we denote the state , the control , with , the state running cost , and the control penalization . Furthermore, we assume the running cost and the system dynamics and to be . Throughout it is assumed that and . Our focus is therefore asymptotic stabilization to the origin.
It is well-known that the optimal value function
characterizing the solution of this infinite horizon control problem is the unique viscosity solution of the Hamilton-Jacobi-Bellman equation
with . Here we follow the convention of dropping the subscript of . We study this equation in the unconstrained case, i.e., , where the explicit minimizer of (1) is given by
note that by inserting this expression for the optimal control in (1), we obtain the equivalent HJB equation
which under further assumptions can be simplified to the Riccati equation associated to linear-quadratic infinite horizon optimal feedback control.
The methodology we present in this work is applicable to systems fitting the setting described above, although for the sake of simplicity we restrict the presentation to the following choices:
the control is a scalar variable, i.e. .
the running cost is quadratic, i.e. , with positive-definite,
the control term is a constant vector in .
At this point, our setting differs from the linear-quadratic case as it allows nonlinear dynamics, and nonquadratic state costs. For the numerical scheme that we develop, the following assumption is crucial:
The free dynamics are separable in every coordinate
where is a tensor-valued function. In the case , we shall also assume a similar separable structure for .
2.1. Towards optimal feedback control of semilinear parabolic equations
In the following, we illustrate how the presented framework sets the grounds for a computational approach for approximate optimal feedback controllers for nonlinear PDEs. We consider the following optimal stabilization problem:
subject to the semilinear parabolic equation
In this case, the scalar control acts through the indicator function , with . At the abstract level, this corresponds to an infinite-dimensional optimal control problem. A first step towards the application of the proposed framework is the space discretization of the system dynamics, leading to a finite-dimensional state space representation. The use of pseudospectral collocation methods for parabolic equations has been studied in [31, 33], and leads to a state space representation of the form
where the discrete state corresponds to the approximation of at collocation points , and is the coordinatewise power. The matrices and are finite-dimensional approximations of the Laplacian and control operators, respectively. Such a discretization of the dynamics directly fulfills the separability required in Assumption 1, as the i-th equation of the dynamics reads
with a separability degree . It is very important to note that semidiscretization in space of a wide class of time-dependent PDEs leads to finite-dimensional state space representations of this type; thus the applicability of the presented framework is only limited by the dimensionality of the associated HJB equation. This motivates the choice of a pseudospectral collocation method for the discretization, as it is possible to obtain a meaningful representation of the dynamics with considerably fewer degrees of freedom than with classical low-order schemes. However, if pseudospectral collocation is not a suitable discretization method for the dynamics, model reduction procedures such as balanced truncation, proper orthogonal decomposition, or reduced basis techniques can also lead to separable state-space representations. Once the finite-dimensional state space representation is obtained, we proceed to approximate the solution of the associated HJB equation (1), leading to the optimal feedback controller (2).
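To make the collocation step concrete, the following sketch builds the standard Chebyshev differentiation matrix (the classical construction found, e.g., in Trefethen's cheb routine) and squares it to obtain a collocation approximation of the Laplacian. The cubic source term used in the right-hand side is a hypothetical instance of the coordinatewise power mentioned above, chosen purely for illustration.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1] (N + 1 points)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev-Gauss-Lobatto nodes
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x

D, x = cheb(12)
A = D @ D                        # collocation approximation of the Laplacian
rhs = lambda y: A @ y + y**3     # semilinear right-hand side (cubic term as example)
```

Since differentiation of polynomials up to degree N is exact at the collocation nodes, a dozen points already reproduce low-degree derivatives to rounding accuracy, which is the accuracy advantage over low-order schemes mentioned above.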
We now present a preview of the numerical results of the proposed approach. Further details of the numerical scheme will be developed in the forthcoming sections. The system dynamics in (5) are approximated at 12 collocation points (14 including the boundary points), and therefore our approximation scheme seeks the solution of a 12-dimensional HJB equation, which allows the computation of online optimal feedback controllers. We compare our HJB-based controller (HJB) to the linear-quadratic controller (LQR) obtained by linearization of the system dynamics, and to an approximation method for the HJB equation based on power series expansion (PSE) [21, 35]. In Figure 1 we observe the basic features of the dynamics and the control schemes. The uncontrolled system dynamics (diffusion + dissipative source term) are stable, but stabilization is extremely slow. The control algorithms considerably reduce the transient phase. However, the control signals are different, and the HJB-based controller generates a feedback control with reduced overall cost (4). Observe that at the beginning of the time horizon even the signs of the LQR-, PSE-, and HJB-based controls differ.
3. Approximate iterative solution of HJB equations
In this section, we construct a numerical scheme for the approximation of the HJB equation
where . We recall two additional features of this equation which render the application of classical approximation techniques difficult: the absence of a variational formulation, and the minimization with respect to the control variable , which makes the HJB equation fully nonlinear. The simplest numerical approach to these problems is the use of monotone, grid-based discretizations (finite differences, semi-Lagrangian schemes), in conjunction with a fixed point iteration for the value function which typically depends on the use of a discount factor. The so-called “value iteration” procedure was first presented by Bellman in , and although it has become a standard solution method for low-dimensional HJB equations, it suffers from three major drawbacks. First, the grid-based character of the scheme makes it inapplicable to high-dimensional dynamics, as the total number of degrees of freedom scales exponentially with the dimension of the dynamical system. This corresponds to the most classical statement of the so-called curse of dimensionality. Second, the contractive mapping includes a minimization which needs to be solved at every grid point in every iteration. Third, the Lipschitz constant of the contractive mapping tends to 1 as the discretization parameter goes to 0, so that the iteration becomes extremely slow for fine-mesh solutions. In order to circumvent these limitations, we develop a numerical scheme combining an iteration on the control variable rather than on the value function, together with a polynomial expansion of the value function to mitigate the computational burden associated to mesh-based schemes.
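The drawbacks of value iteration can be seen on a toy problem of our own (not the scheme used in this paper): a semi-Lagrangian value iteration for one-dimensional dynamics x' = u with u restricted to {-1, 0, 1}, running cost x², and discount rate 1. Even in one dimension, thousands of sweeps over the grid are needed because the contraction factor e^{-Δt} is close to 1.

```python
import numpy as np

# Semi-Lagrangian value iteration on a 1D grid (illustration only).
xs = np.linspace(-2.0, 2.0, 401)        # grid: N points per dimension -> N^d in d dims
dt = 0.01
beta = np.exp(-dt)                      # contraction factor, close to 1 for small dt
V = np.zeros_like(xs)
for _ in range(3000):                   # many sweeps needed: slow contraction
    # minimization over the control set at every grid point, every iteration
    candidates = [dt * xs**2 + beta * np.interp(xs + dt * u, xs, V)
                  for u in (-1.0, 0.0, 1.0)]
    V = np.minimum.reduce(candidates)
# exact value for x >= 0: (x**2 - 2*x + 2) - 2*exp(-x)
```

The grid already has 401 points in one dimension; a tensor grid of the same resolution in dimension twelve would have 401^12 points, which is the curse of dimensionality referred to above.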
3.1. Successive approximation of HJB equations
Definition 1 (Admissible control).
We say that a feedback mapping is admissible on , denoted as , if , , and for all .
Algorithm 1 corresponds to a Newton method for solving equation (6), and in the linear-quadratic setting it is equivalent to the Newton-Kleinman iteration for solving the Riccati equation. It can also be identified directly with the policy iteration algorithm for HJB equations (see  and references therein), although in that context the usual setting includes a discount factor, which relaxes the admissibility assumption, as well as discrete-time dynamics; consequently, it is there applied to a Bellman equation in which no gradient of the value function appears. In both cases, the core ingredient of the algorithm is to generate a decreasing sequence of values by solving an associated sequence of linear problems. In our case this translates into solving, for a given at each iteration, the Generalized Hamilton-Jacobi-Bellman (GHJB) equation
The following result from  summarizes relevant properties of the GHJB equation.
If is a compact subset of , is Lipschitz continuous on and , is strictly increasing in , , and , then:
There exists a unique satisfying (8).
is a Lyapunov function of the controlled system.
, for all .
The update satisfies .
If satisfies , then for all .
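In the linear-quadratic special case, the successive approximation above reduces to the Newton-Kleinman iteration: each step solves a linear Lyapunov equation (the GHJB analogue) and then updates the feedback gain. The following sketch, with hypothetical matrices and an open-loop stable A so that the zero control is an admissible initial guess, illustrates the loop.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def newton_kleinman(A, B, Q, R, K0, iters=20):
    """Newton-Kleinman iteration: Lyapunov solve, then policy update."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # Lyapunov equation: Acl' P + P Acl = -(Q + K' R K)
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)      # control-law update
    return P, K

A = np.array([[-1.0, 2.0], [0.0, -3.0]])     # open-loop stable: K0 = 0 is admissible
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = newton_kleinman(A, B, Q, R, np.zeros((1, 2)))
P_are = solve_continuous_are(A, B, Q, R)     # reference Riccati solution
```

Starting from an admissible gain, the iterates decrease monotonically and converge quadratically to the stabilizing Riccati solution, mirroring the properties of the GHJB sequence listed above.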
3.2. A continuation procedure
A critical aspect of the Successive Approximation Algorithm 1 is its initialization, which requires the existence of an admissible control, which in view of (4) means that it asymptotically stabilizes all initial conditions in . For asymptotically stable dynamics, this is trivially satisfied by . For more general cases, the computation of stabilizing feedback controllers is a challenging task. A partial answer is to consider the stabilizing feedback associated to the linearized system dynamics. However, this feedback is only locally stabilizing, and therefore the identification of a suitable domain where this control law is admissible becomes relevant. For low-dimensional dynamics, this has been studied in the context of Zubov's method in . An alternative solution that we propose is to consider a discounted infinite horizon control problem
where the inclusion of the discount factor relaxes the admissibility condition. Recently, in [19, 32], the link between discounted optimal control and asymptotic stabilization has been discussed, and under certain conditions, the discounted control problem can generate optimal controls that are also admissible for the undiscounted problem. We recall that the associated HJB equation for the infinite horizon optimal control problem is given by
and the associated GHJB reads
We consequently modify the Successive Approximation Algorithm in order to embed it within a path-following iteration with respect to the discount factor:
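The continuation mechanism can be illustrated on a scalar toy problem of our own: for dynamics x' = a x + b u with cost the integral of q x² + r u², the quadratic ansatz V(x) = P x² reduces the discounted HJB equation to the scalar relation F(P; γ) = (2a − γ)P − (b²/r)P² + q = 0. Driving γ to zero with warm-started Newton steps recovers the undiscounted solution even though the open-loop dynamics are unstable.

```python
# Continuation in the discount factor gamma (schematic scalar illustration):
# F(P; gamma) = (2a - gamma) P - (b**2/r) P**2 + q = 0.
def solve_discounted(P, gamma, a, b, q, r, tol=1e-13):
    """Inner loop: Newton iteration on F, warm-started from the previous gamma."""
    for _ in range(100):
        F = (2*a - gamma)*P - (b**2/r)*P**2 + q
        dF = (2*a - gamma) - 2*(b**2/r)*P
        step = F / dF
        P -= step
        if abs(step) < tol:
            break
    return P

a, b, q, r = 1.0, 1.0, 1.0, 1.0          # unstable open loop (a > 0)
P, gamma = 0.0, 8.0
while gamma > 1e-10:                     # outer loop: drive gamma to zero
    P = solve_discounted(P, gamma, a, b, q, r)
    gamma *= 0.5
P = solve_discounted(P, 0.0, a, b, q, r)
# undiscounted stabilizing solution of P**2 - 2P - 1 = 0: P = 1 + sqrt(2)
```

For a large discount factor the Newton iteration converges from the trivial guess P = 0, and each halving of γ supplies an admissible warm start for the next inner solve, which is precisely the role of the discount factor in the path-following iteration.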
3.3. Spectral element approximation of the GHJB equation
So far we have discussed the iterative aspects of a computational method for solving HJB equations. We now address the numerical approximation of the GHJB equation.
For this purpose, we consider an expansion of the form
where , with belonging to a complete set of basis functions in , and . In particular, we shall often generate from a multidimensional monomial basis as illustrated in Figure 2, which directly satisfies the boundary condition . The coefficients are obtained by imposing the Galerkin residual equation
The convergence of has been studied thoroughly in . It follows a power series argument, and requires conditions for uniform convergence of pointwise convergent series in order to guarantee that for sufficiently large. In our particular case, we further assume that the dynamics are polynomial (as illustrated in Section 2.1). Therefore, under the assumptions of Theorem 26 in , by choosing a multidimensional monomial basis (of degree ) and an admissible control , it can be established that there exists such that for , , and .
We now focus on the different terms involved in the approximation of the GHJB equation. Since this equation is meant to be solved within the iterative loop described in the previous section, we assume that can be expressed in the form
where corresponds to the value function of the previous iteration, approximated with the expansion
Below we shall write for . We proceed by expanding case by case the different terms of the Galerkin residual equation
: it is directly verifiable that
: by inserting the expansion we obtain
: the relation (14) leads to
: we further assume that
: note that
is given by
After discretization, the GHJB (13) reduces to a parameter-dependent linear system for
where is given by the expansion of (terms 4) and 5) in the list above).
4. Computation of integrals via separable expansions
Under Assumption 1 concerning the separability of the free dynamics , and with the construction of a separable set of basis functions by taking tensor products of one-dimensional basis functions as shown in Figure 2, the calculation of the -dimensional inner products in the Galerkin residual equation of the previous section is reduced to products of one-dimensional integrals. In the following, we provide further details of this procedure.
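The mechanism can be checked on a two-variable monomial (a toy example of our own): over a hyperrectangle the integral of a separable function factorizes, so a Gauss-Legendre rule applied coordinate by coordinate suffices, and for polynomial integrands 8 points per coordinate are already exact.

```python
import numpy as np

def gauss_1d(f, a, b, npts=8):
    """Integrate f over [a, b] with an npts-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(npts)     # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)           # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(xm))

# Separable integrand phi(x1, x2) = x1**2 * x2**3 on [0, 1] x [0, 2]:
# the 2D integral factorizes into a product of two 1D integrals.
val = gauss_1d(lambda t: t**2, 0.0, 1.0) * gauss_1d(lambda t: t**3, 0.0, 2.0)
# exact value: (1/3) * (2**4 / 4) = 4/3
```

The same factorization applied to each term of the Galerkin residual is what replaces every -dimensional inner product by a product of inexpensive one-dimensional quadratures.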
4.1. Generation of a multi-dimensional basis
The multi-dimensional basis functions
for the expansion of are generated as follows. We start by choosing a polynomial degree , and a one-dimensional polynomial basis . For the sake of simplicity, we consider the monomial basis , but the same ideas apply to other bases, such as orthogonal polynomials. The multidimensional basis is generated as a subset of the -dimensional tensor product of the one-dimensional basis, such that
i.e., we construct a full multidimensional tensorial basis and then remove elements according to the approximation degree . The elimination step is fundamental, and its motivation is twofold. If no elimination were performed, the cardinality of would be , and one would again face the curse of dimensionality that also affects grid-based schemes. By reducing the set to multidimensional monomials of degree at most , the cardinality of the set is given by
which replaces the exponential dependence on by a combinatorial one. This formula is evaluated in Table 1 for different values of interest for and . By considering globally defined polynomial basis functions, the dependence on the dimension is replaced by the combinatorial expression (16). The dimensional reduction of the basis is particularly significant for low-order polynomial approximation (up to degree 6). A second justification for the way in which we generate the basis set is control-theoretic. A well-known result in optimal feedback control is that if the dynamics are linear and the running cost is quadratic, the value function associated to the infinite horizon control problem (in the unconstrained case, under additional technical assumptions) is a quadratic form, i.e., is of the form , which fits precisely the elements generated for with a monomial basis when and linear elements are eliminated. Therefore, our basis can be interpreted as an enrichment, accounting for the nonlinear dynamics, of the basis required to recover the solution of the control problem associated to the linearized dynamics around the equilibrium point.
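The basis construction can be sketched by enumerating multi-indices. We assume here, consistently with the boundary condition V(0) = 0 and the quadratic leading term discussed above, that the eliminated elements are the constant and linear monomials; the resulting count follows the binomial formula rather than the (n+1)^d cardinality of the full tensor product.

```python
from itertools import product
from math import comb

def monomial_indices(d, n):
    """Multi-indices alpha with 2 <= |alpha| <= n: constant and linear
    monomials are dropped (assumed elimination rule, see lead-in)."""
    return [a for a in product(range(n + 1), repeat=d) if 2 <= sum(a) <= n]

d, n = 4, 4
basis = monomial_indices(d, n)
full = (n + 1) ** d                      # full tensor-product cardinality
kept = comb(d + n, d) - 1 - d            # degree <= n, minus constant and linear
```

For d = 4 and n = 4 the full tensor product has 625 elements while the reduced basis has only 65, which is the kind of saving reported in Table 1.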
Table 1. Cardinality of the full monomial basis compared with the even-degree monomial basis.
Theorem 7.1 in  states parity conditions to reduce the polynomial basis . Under the assumptions , and , if
is a symmetric rectangle around the origin, i.e.,
the free dynamics are odd-symmetric on , i.e. , for all
then is an even-symmetric function, i.e., , and therefore odd-degree monomials are excluded from the basis. A direct corollary is that in the linear quadratic case, where the linear dynamics are trivially odd-symmetric, is a quadratic form.
Finally, for the calculation presented in the following, it is important to note that due to the construction procedure, the basis elements directly admit a separable representation
where each component .
4.2. High-dimensional integration
We begin by recalling that
where is a tensor-valued function, and that .
As in the previous section, we proceed term by term, to obtain the summands in (15). The integration is carried over the hyperrectangle .
: this term is directly assembled from the calculation of
: This term involves the calculation of
which is expanded by using the separable structure of the free dynamics
: In this case, we need to work on the expression
which is obtained directly from the computations for in 5) below.
where, by a similar argument as for the previous term, we expand
: This term requires the computation of the inner product
with given by
By using the separable representation of the basis functions
we expand the inner product
The first iteration, with a stabilizing initial guess , requires special attention. If the guess is obtained via a Riccati-type argument, then the initialization follows directly from (14). Otherwise, we relax this requirement and only assume that the initial stabilizing controller is given in separable form
In this case, we must recompute the term:
As for the term , which needs to be computed differently in the first iteration, we can proceed in the same way as for : since both and have the same separable structure, it suffices to assign .
4.3. Computational complexity and implementation
Among the expressions developed in the previous subsection, the overall computational burden is governed by the approximation of
which requires the assembly of the 5-dimensional tensor . Each entry of this tensor is a -dimensional inner product, which under the aforementioned separability assumptions is computed as the product of , one-dimensional integrals. Thus, the total number of one-dimensional integrals required for the proposed implementation is . A positive aspect of our approach is that the assembly of tensors like falls within the category of embarrassingly parallelizable computations, so the CPU time scales down almost linearly with the number of available cores. Furthermore, can be entirely computed in an offline phase, before entering the iterative loops of Algorithms 1 and 2. However, for values of interest of and , such as and , Table 1 indicates that is indeed a very large number. A rough estimate of the CPU time required for the assembly of is given by
where corresponds to the time required for the computation of a one-dimensional integral. Therefore, it is fundamental for an efficient implementation to reduce to a bare minimum. From closer inspection of the expression (19), we observe that all the terms can be identified as elements of the tensors
Both and can be computed exactly with a Computer Algebra System, or approximated by suitable quadrature rules. We follow the latter approach, implementing an 8-point Gauss-Legendre quadrature rule. After having computed and , the assembly of (19) reduces to calls to properly indexed elements of these tensors. This approach requires careful bookkeeping of the separable components of each multidimensional basis function . In this way, an entry of takes on the order of 1E-7 seconds to compute, and the overall CPU time is kept within hours for problems of dimension up to 12.
5. Computational implementation and numerical tests
5.1. Convergence of the polynomial approximation
We assess the convergence of the polynomial approximation in a 1D test, with
such that the exact solution of equation (3) is given by
We implement the path-following version (Algorithm 2), starting with , a threshold value , a parameter , and an internal tolerance . The relative error for Table 2 is defined as
Table 2. Relative errors for the monomial basis and the Legendre basis.
5.2. Optimal feedback control of semilinear parabolic equations
Similarly as in Section 2.1, we consider the following optimal control problem
subject to the semilinear dynamics
where the linear operator is of the form with , and is a nonlinear operator such that . The scalar control acts through the indicator function , with . The system is closed with suitable boundary conditions. We choose , , , and . The nonlinearity covers both advective, Burgers-type terms and polynomial source terms. In order to generate a low-dimensional state space representation of the dynamics, we resort to a pseudospectral collocation method with Chebyshev polynomials as in  (for further details we also refer to [33, p. 107]). By considering collocation points , the continuous state is discretized into , where . The semilinear PDE dynamics are thus approximated by the -dimensional nonlinear ODE system
where the operators correspond to the finite-dimensional realization of