Nonlinear Uncertainty Control with Iterative Covariance Steering

Jack Ridderhof, Kazuhide Okamoto, and Panagiotis Tsiotras

The work of the first author was supported by NASA Space Technology Research Fellowship 80NSSC17K0093. The work of the second and third authors was supported by NSF award CPS-1544814. J. Ridderhof is a PhD student with the School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA. Email: jridderhof3@gatech.edu. K. Okamoto is a PhD student with the School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA. Email: kazuhide@gatech.edu. P. Tsiotras is the Andrew and Lewis Chair Professor with the D. Guggenheim School of Aerospace Engineering and the Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA. Email: tsiotras@gatech.edu.
Abstract

This paper considers the problem of steering the state distribution of a nonlinear stochastic system from an initial Gaussian to a terminal distribution with a specified mean and covariance, subject to probabilistic path constraints. An algorithm is developed to solve this problem by iteratively solving an approximate linearized problem, which we call the iterative covariance steering (iCS) method.

I Introduction

Guidance and control design has generally followed the standard approach in which an open-loop optimal reference control is first solved with respect to the nonlinear dynamics, and a feedback controller is subsequently designed with respect to the dynamics linearized about the resulting reference trajectory. Hence, there is an implicit unidirectional dependence of the feedback controller on the reference trajectory, but there is no direct mechanism by which the reference trajectory is affected by the closed-loop system behavior. Intuitively, if we could explicitly couple the design of the reference trajectory with the design of the feedback controller, then, since we would be optimizing over a larger set, we could improve the closed-loop system performance. The situation becomes more complex with the introduction of state constraints and uncertainty. If the closed-loop statistics of the system are not considered, then the reference trajectory design must be conservative in order to satisfy the constraints. For systems that are significantly influenced by uncertain external forces, the conservatism of this approach may lead to greatly increased control cost or even infeasibility.

In this paper we consider the system state to be a random vector which evolves according to a nonlinear stochastic differential equation with additive Brownian noise. By letting the state be a random vector, the control problem can be formulated as one of simultaneously steering every sample trajectory, and as a consequence, we can analytically study the difference between open- and closed-loop control [1]. This machinery will serve as our mechanism to understand the coupling between the reference trajectory and the feedback controller. We assume that the state is normally distributed at the initial time, and we will design a control that steers the mean and the covariance of the initial state distribution to some given terminal values at the final time. This problem is referred to as the nonlinear covariance steering (CS) problem. Since the state is assumed to be normally distributed, and thus unbounded, we must treat state constraints probabilistically. That is, the probability that the constraints are satisfied must be greater than some prespecified value. Since, by construction, these constraints may not be met for every sample path, they are often referred to as chance constraints [2, 3].

The special case of linear time-varying stochastic systems with additive Brownian noise has been extensively studied in the literature. It has been shown that if the system is controllable, then the state covariance is also controllable [1]. That is, for any initial covariance at time $t_0$, there exists a state feedback gain defined on the interval $[t_0, t_1]$ that steers the covariance to any final value at any time $t_1 > t_0$. The solution to the optimal linear CS problem with expected quadratic cost was given by Chen et al. [4, 5, 6], and the solution was found to be closely related to classical linear quadratic feedback control. The discrete linear CS problem with quadratic cost has also been studied, and a similar close connection to linear quadratic control has been shown [7].

It is well known that, for linear systems, the mean and the covariance have independent dynamics, and that the mean state is controlled by the mean open-loop control and the covariance is controlled by the state feedback gain. Hence, we can consider the mean steering and the covariance steering as separate problems. However, the mean and covariance are coupled through the constraints. In other words, for linear systems, the reference trajectory explicitly depends on the closed-loop behavior of the system when the state or control is constrained. For the discrete time linear case with convex chance constraints, the chance constrained CS problem has been shown to be equivalent to a deterministic convex optimization problem [8], and this work has been extended to include non-convex chance constraints using mixed-integer programming [9].

Despite these recent advances in linear system chance constrained CS, these results alone are insufficient to answer the question of how to make explicit the effect of closed-loop system behavior on the optimal reference trajectory for constrained, nonlinear stochastic systems. In this paper we propose an algorithmic approach to solve the nonlinear CS problem by iteratively solving the linear CS problem with respect to the reference trajectory of the previous step. We will refer to this algorithm as iterative CS (iCS). To the best of our knowledge, there are currently no known methods to solve the nonlinear CS problem. Furthermore, in contrast to the existing literature on chance constrained linear CS, we begin our analysis with a continuous stochastic system and describe an exact discretization procedure.

The organization of this paper is as follows. In Section II, we formulate the nonlinear covariance steering problem, and then derive a discrete linear approximation to the original system. Then, in Section III, we follow the solution approach developed in [9] to solve the linear covariance steering problem for the approximate discrete system. In Section IV, we augment the linear CS formulation and use it to construct the iCS algorithm. The proposed method is demonstrated with a numerical example in Section V.

I-A Notation and Preliminaries

For a random vector $x$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, we denote the expectation of a function $g$ of $x$ as $\mathbb{E}[g(x)]$. The mean value of $x$ is denoted by $\bar{x} = \mathbb{E}[x]$, and the difference from the mean as $\tilde{x} = x - \bar{x}$. The covariance of $x$ is denoted by $\mathrm{Cov}[x] = \mathbb{E}[\tilde{x}\tilde{x}^\top]$. The complement of an event $E$ is denoted $E^c$, and we use the shorthand $\{x \in \mathcal{X}\}$ to denote events of the form $\{\omega \in \Omega : x(\omega) \in \mathcal{X}\}$. Dependence of a quantity on time $t$ is denoted by $(\cdot)(t)$. For a square matrix $A$, we write $A \succ 0$ ($A \succeq 0$) if $A$ is positive (semi-)definite, i.e., $x^\top A x > 0$ ($x^\top A x \ge 0$) for all nonzero real vectors $x$. The set of natural numbers, including zero, is written as $\mathbb{N}$, and $\mathbb{N}_+ = \mathbb{N} \setminus \{0\}$. We will denote by $\mathbb{N}_{\le N}$ the set of natural numbers up to, and including, a positive integer $N$ (similarly for $\mathbb{N}_{+,\le N}$).

II Problem Formulation

Consider the nonlinear stochastic differential equation

$\mathrm{d}x(t) = f(t, x(t), u(t))\,\mathrm{d}t + G(t)\,\mathrm{d}w(t),$   (1)

where $x(t) \in \mathbb{R}^{n_x}$ is the state, $u(t) \in \mathbb{R}^{n_u}$ is the control input, and $\mathrm{d}w(t)$ is an increment of standard Brownian motion. At the initial time $t_0$, the state is assumed to be normally distributed with fixed mean and covariance

$\mathbb{E}[x(t_0)] = \mu_0, \qquad \mathrm{Cov}[x(t_0)] = \Sigma_0.$   (2)

At each time, the state and control are constrained in probability to given convex sets $\mathcal{X}$ and $\mathcal{U}$:

$\mathbb{P}(x(t) \notin \mathcal{X}) \le P_{\mathrm{fail},x}, \qquad \mathbb{P}(u(t) \notin \mathcal{U}) \le P_{\mathrm{fail},u},$   (3)

where $P_{\mathrm{fail},x}$ and $P_{\mathrm{fail},u}$ are prescribed maximum probabilities of failure. The constraints (3) are referred to as chance constraints. We wish to find a control that brings the state to a final distribution at time $t_f$ with given mean and covariance

$\mathbb{E}[x(t_f)] = \mu_f, \qquad \mathrm{Cov}[x(t_f)] = \Sigma_f,$   (4)

where $\Sigma_f$ is a given positive-definite matrix, while minimizing the cost functional

$J(u) = \mathbb{E}\left[\int_{t_0}^{t_f} \big(x^\top Q\,x + u^\top R\,u + c(t, x, u)\big)\,\mathrm{d}t\right].$   (5)

Here $Q \succeq 0$ and $R \succ 0$ are weight matrices, and $c$ is an integrable function that is convex in $x$ and $u$, and the optimization is performed over the control $u$. In summary, we are interested in solving the following problem.

Problem 1

Nonlinear Covariance Steering. Find a control to minimize the cost (5) subject to the dynamics (1), terminal state constraints (4), and chance constraints (3).

In the remainder of this section, we will develop a linear approximation of (1) in the neighborhood of a given reference. Then, after discretizing the linearized system, we will focus our analysis on the discrete linear system.

II-A Time Normalization and Linearization

We begin by normalizing the time domain to the unit interval using the dilation coefficient [10]

$\sigma = t_f - t_0.$   (6)

Let $\tau = (t - t_0)/\sigma \in [0, 1]$ be the normalized time, from which it follows that $\mathrm{d}t = \sigma\,\mathrm{d}\tau$. Since the Brownian motion increment $\mathrm{d}w(t)$ has variance $\mathrm{d}t$, we scale the diffusion term in (1) by $\sqrt{\sigma}$ to obtain $\sqrt{\sigma}\,\mathrm{d}w(\tau)$ with variance $\sigma\,\mathrm{d}\tau$ (i.e., $\sqrt{\sigma}\,\mathrm{d}w(\tau)$ is identically distributed with $\mathrm{d}w(t)$). The time normalized system is then given by

$\mathrm{d}x(\tau) = \sigma f(\tau, x(\tau), u(\tau))\,\mathrm{d}\tau + \sqrt{\sigma}\,G(\tau)\,\mathrm{d}w(\tau),$   (7)

and the time normalized cost is given by

$J(u) = \mathbb{E}\left[\int_{0}^{1} \sigma\big(x^\top Q\,x + u^\top R\,u + c(\tau, x, u)\big)\,\mathrm{d}\tau\right].$   (8)

Next, we linearize (7) about a given reference trajectory $(\bar{x}^{(i)}(\tau), \bar{u}^{(i)}(\tau))$, where $i$ is an index to count successive linearizations. This procedure results in the linear stochastic system

$\mathrm{d}x(\tau) = \big(A(\tau)\,x(\tau) + B(\tau)\,u(\tau) + d(\tau)\big)\,\mathrm{d}\tau + \sqrt{\sigma}\,G(\tau)\,\mathrm{d}w(\tau),$   (9)

where

$A(\tau) = \sigma\,\frac{\partial f}{\partial x}\bigg|_{(\tau,\,\bar{x}^{(i)},\,\bar{u}^{(i)})}, \qquad B(\tau) = \sigma\,\frac{\partial f}{\partial u}\bigg|_{(\tau,\,\bar{x}^{(i)},\,\bar{u}^{(i)})},$   (10)
$d(\tau) = \sigma f(\tau, \bar{x}^{(i)}, \bar{u}^{(i)}) - A(\tau)\,\bar{x}^{(i)} - B(\tau)\,\bar{u}^{(i)}.$   (11)

II-B Discrete Approximation

Let $\{\tau_k\}_{k=0}^{N}$ be a partition of the interval $[0, 1]$, where

$0 = \tau_0 < \tau_1 < \cdots < \tau_N = 1.$   (12)

Henceforth, we will write $x_k = x(\tau_k)$ and $u_k = u(\tau_k)$. We use a zero-order-hold (ZOH) discretization of the control given by

$u(\tau) = u_k, \qquad \tau \in [\tau_k, \tau_{k+1}).$   (13)

Substituting in (9), we obtain the solution [11]

$x_{k+1} = \Phi(\tau_{k+1}, \tau_k)\,x_k + \int_{\tau_k}^{\tau_{k+1}} \Phi(\tau_{k+1}, s)\big(B(s)\,u_k + d(s)\big)\,\mathrm{d}s + \sqrt{\sigma}\int_{\tau_k}^{\tau_{k+1}} \Phi(\tau_{k+1}, s)\,G(s)\,\mathrm{d}w(s),$   (14)

where $\Phi(\tau, s)$ is the state transition matrix for system (9), which satisfies

$\frac{\partial}{\partial \tau}\Phi(\tau, s) = A(\tau)\,\Phi(\tau, s), \qquad \Phi(s, s) = I.$   (15)

Let $G_k$ be the matrix that satisfies the stochastic integral equation

$G_k\,w_k = \sqrt{\sigma}\int_{\tau_k}^{\tau_{k+1}} \Phi(\tau_{k+1}, s)\,G(s)\,\mathrm{d}w(s),$   (16)

where, for each $k$, $w_k$ are i.i.d. normal random vectors with unit variance. For $k = 0, 1, \ldots, N-1$, we rewrite (14) as

$x_{k+1} = A_k\,x_k + B_k\,u_k + d_k + G_k\,w_k,$   (17)

where

$A_k = \Phi(\tau_{k+1}, \tau_k),$   (18a)
$B_k = \int_{\tau_k}^{\tau_{k+1}} \Phi(\tau_{k+1}, s)\,B(s)\,\mathrm{d}s,$   (18b)
$d_k = \int_{\tau_k}^{\tau_{k+1}} \Phi(\tau_{k+1}, s)\,d(s)\,\mathrm{d}s.$   (18c)

The discrete formulation (14) is an exact representation of the linear system (9) with ZOH control. However, since the above integrals may be difficult to compute, the following first-order approximation is commonly used:

$A_k \approx I + \Delta\tau_k\,A(\tau_k), \quad B_k \approx \Delta\tau_k\,B(\tau_k), \quad d_k \approx \Delta\tau_k\,d(\tau_k), \quad G_k \approx \sqrt{\sigma\,\Delta\tau_k}\,G(\tau_k),$   (19)

where $\Delta\tau_k = \tau_{k+1} - \tau_k$. We remark that the chosen method of discretization does not affect any of the following discussion.
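For illustration, a minimal Python sketch of the first-order scheme (19); the function and variable names are ours, and A, B, d, G are assumed to be callables returning the matrices of (9)-(11) at a given normalized time:

import numpy as np

def discretize_first_order(A, B, d, G, taus, sigma):
    # A, B, d, G: callables tau -> matrix/vector; taus: grid on [0, 1].
    Ad, Bd, dd, Gd = [], [], [], []
    for k in range(len(taus) - 1):
        dtau = taus[k + 1] - taus[k]
        n = A(taus[k]).shape[0]
        Ad.append(np.eye(n) + dtau * A(taus[k]))       # A_k ~ I + dtau * A(tau_k)
        Bd.append(dtau * B(taus[k]))                   # B_k ~ dtau * B(tau_k)
        dd.append(dtau * d(taus[k]))                   # d_k ~ dtau * d(tau_k)
        Gd.append(np.sqrt(sigma * dtau) * G(taus[k]))  # noise variance matches (16)
    return Ad, Bd, dd, Gd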

Following [9, 8], we will rewrite the discrete system (17) as a single linear equation. Define the state transition matrix from step $j$ to step $k$ as

(20)

and the corresponding transitions for the control, noise, and affine terms as

(21)
(22)

Define concatenated control and disturbance vectors at step $k$ as

(23)
(24)

Then the state at step $k$ can be written as

(25)

where $\mathbf{1}$ is a column vector of ones, and

(26)
(27)
(28)

In terms of the concatenated state vector $X$, control vector $U$, and disturbance vector $W$, the system dynamics are written as the matrix equation

(29)

These matrices are formed by appropriately stacking the terms from (25).
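For concreteness, the following Python sketch shows one way such stacked matrices can be assembled from the step matrices of (17); the names calA, calB, calG, calD are our placeholders, not the paper's notation:

import numpy as np

def stack_system(Ad, Bd, dd, Gd):
    # Returns calA, calB, calG, calD such that, with X = [x_0; ...; x_N],
    # U = [u_0; ...; u_{N-1}], and W = [w_0; ...; w_{N-1}]:
    #   X = calA x_0 + calB U + calG W + calD
    N = len(Ad)
    nx, nu, nw = Ad[0].shape[0], Bd[0].shape[1], Gd[0].shape[1]
    calA = np.zeros(((N + 1) * nx, nx))
    calB = np.zeros(((N + 1) * nx, N * nu))
    calG = np.zeros(((N + 1) * nx, N * nw))
    calD = np.zeros((N + 1) * nx)
    calA[:nx] = np.eye(nx)
    for k in range(N):
        r = slice((k + 1) * nx, (k + 2) * nx)
        p = slice(k * nx, (k + 1) * nx)
        calA[r] = Ad[k] @ calA[p]                  # propagate influence of x_0
        calB[r] = Ad[k] @ calB[p]
        calG[r] = Ad[k] @ calG[p]
        calD[r] = Ad[k] @ calD[p] + dd[k]
        calB[r, k * nu:(k + 1) * nu] = Bd[k]       # direct effect of u_k
        calG[r, k * nw:(k + 1) * nw] = Gd[k]       # direct effect of w_k
    return calA, calB, calG, calD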

Let $\bar{Q}$ and $\bar{R}$ be block-diagonal state and control cost weight matrices with entries corresponding to the continuous weights $Q$ and $R$ from (5):

(30)
(31)

The quadratic state cost at the final step $N$ is neglected, since the terminal state is fixed. In terms of $\bar{Q}$ and $\bar{R}$, the continuous-time cost functional (8) for the linearized system can be rewritten as

(32)

and the boundary conditions (2) and (4) as

(33a)
(33b)

The chance constraints (3), enforced at each discrete time step $\tau_k$, are written as

(34a)
(34b)

In summary, we have approximated the continuous time, nonlinear stochastic system (1) by the discrete, linear stochastic system (29). Problem 1 can be accordingly restated in terms of this approximate system as follows.

Problem 2

Find the control sequence that minimizes (32) subject to the dynamics (29), boundary conditions (33), and chance constraints (34).

Remark 1

In the discrete formulation, the chance constraints are only enforced at the discrete times $\tau_k$, and therefore the original constraints (3) may be violated for some $\tau \in (\tau_k, \tau_{k+1})$. Constraint violation in these intervals is likely when the discretization is too coarse.

III Covariance Steering

For the remainder of this paper, we will restrict the control law to be of the form [9]

$u_k = v_k + K_k\,y_k,$   (35)

where $v_k$ is a feedforward control, $K_k$ is a feedback gain matrix, and $y_k$ is a zero-mean random process given by

$y_{k+1} = A_k\,y_k + G_k\,w_k, \qquad y_0 = x_0 - \mu_0.$   (36)

In vector notation, we have

(37)

and thus,

(38)

where the block feedback matrix $K$ is given by

(39)
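Since, by (35), each $u_k$ feeds back only on the corresponding $y_k$, the block feedback matrix acts block-diagonally on the stacked process; a sketch with our own names:

import numpy as np

def block_gain(K_steps, nx):
    # Assemble K so that U = V + K Y recovers u_k = v_k + K_k y_k; each
    # gain K_k acts only on the k-th block of Y = [y_0; ...; y_N].
    N = len(K_steps)
    nu = K_steps[0].shape[0]
    K = np.zeros((N * nu, (N + 1) * nx))
    for k, Kk in enumerate(K_steps):
        K[k * nu:(k + 1) * nu, k * nx:(k + 1) * nx] = Kk
    return K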

Substituting the control into the state equation, we obtain the expressions for the mean and deviation states as

(40)
(41)

Similarly for the control, we obtain

(42)
(43)

It follows that the state covariance and the control covariance, in terms of the covariance of the process $y$, are given as

(44)
(45)
(46)
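A sketch of how these moment expressions can be evaluated numerically under the feedback parameterization above (names follow our earlier sketches; the deviation of X from its mean equals (I + calB K) Y because y is zero-mean):

import numpy as np

def stacked_moments(calA, calB, calG, calD, mu0, Sigma0, V, K):
    # Mean of the stacked state, cf. (40): feedback does not move the mean.
    Xbar = calA @ mu0 + calB @ V + calD
    # Covariance of Y = calA y_0 + calG W, with y_0 = x_0 - mu_0:
    Sigma_Y = calA @ Sigma0 @ calA.T + calG @ calG.T
    M = np.eye(calA.shape[0]) + calB @ K        # deviation transfer, cf. (41)
    Sigma_X = M @ Sigma_Y @ M.T                 # state covariance, cf. (44)
    Sigma_U = K @ Sigma_Y @ K.T                 # control covariance, cf. (45)
    return Xbar, Sigma_X, Sigma_U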

Substituting (45) and (46) into the cost function (32) and simplifying, we obtain

(47)

where

(48)

III-A Endpoint Constraints

Substituting (40) into (33b), we obtain the equality constraint on the final mean state

(49)

Since the corresponding equality constraint on the terminal covariance is not convex in the feedback gain $K$, we instead enforce the relaxed inequality constraint [12]

$\mathrm{Cov}[x_N] \preceq \Sigma_f,$   (50)

which is convex in $K$. This constraint may be equivalently stated in the more standard form [8]

(51)
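One way such a relaxed covariance bound can be imposed in an off-the-shelf conic modeling tool such as cvxpy (our choice, not the paper's); E_N denotes a selector matrix for the final state block, and all names are illustrative:

import numpy as np
import scipy.linalg
import cvxpy as cp

def terminal_cov_constraint(E_N, calB, Sigma_Y, Sigma_f, K):
    # Cov[x_N] <= Sigma_f is equivalent to
    #   || Sigma_f^{-1/2} E_N (I + calB K) Sigma_Y^{1/2} ||_2 <= 1,
    # a convex constraint since the argument is affine in the variable K.
    n = calB.shape[0]
    Sf_inv_half = np.linalg.inv(scipy.linalg.sqrtm(Sigma_f).real)
    Sy_half = scipy.linalg.sqrtm(Sigma_Y).real
    return cp.sigma_max(Sf_inv_half @ (E_N @ (np.eye(n) + calB @ K)) @ Sy_half) <= 1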

III-B Chance Constraints

Assume that at each time step the convex regions $\mathcal{X}$ and $\mathcal{U}$ can be represented by the finite intersection of half spaces

$\mathcal{X} = \bigcap_{j} \{x : a_j^\top x \le b_j\}, \qquad \mathcal{U} = \bigcap_{j} \{u : c_j^\top u \le d_j\},$   (52)

where $\mathcal{X}$ and $\mathcal{U}$ are given in terms of the vectors $a_j$, $c_j$ and scalars $b_j$, $d_j$. By subadditivity of probability, we have

$\mathbb{P}\Big(\bigcup_j E_j\Big) \le \sum_j \mathbb{P}(E_j).$   (53)

It follows that if $\mathbb{P}(a_j^\top x_k > b_j) \le \epsilon_j$ for a set of positive numbers $\epsilon_j$ that sum over the index $j$ to less than $P_{\mathrm{fail},x}$, then $\mathbb{P}(x_k \notin \mathcal{X}) \le P_{\mathrm{fail},x}$ [13, 3]. In terms of the concatenated state and control vectors, the events written on $x_k$ and $u_k$ have the same probability as the corresponding events written on the appropriate sub-blocks of $X$ and $U$. Therefore, when relabeling indices of the inequality constraints according to

(54)
(55)

if $\{\epsilon_{x,j}\}$ and $\{\epsilon_{u,j}\}$ are given sets of positive numbers that satisfy, for each $k$,

(56)

then, from (53),

(57)

implies that

(58)

The same construction applies to the control sets $\mathcal{U}$.

Next, we formulate the individual half-space chance constraints as deterministic expressions of the control variables. From (29) it follows that $x_k$ is normally distributed, and hence $a^\top x_k$ is a scalar normal random variable with mean $a^\top \bar{x}_k$ and variance $a^\top \Sigma_{x_k} a$. It follows that

$\mathbb{P}(a^\top x_k > b) = 1 - \Phi\!\left(\frac{b - a^\top \bar{x}_k}{\sqrt{a^\top \Sigma_{x_k} a}}\right),$   (59)

where $\Phi$ is the standard normal cumulative distribution function. Therefore, the chance constraint $\mathbb{P}(a^\top x_k > b) \le \epsilon$ can be equivalently stated as

$a^\top \bar{x}_k + \Phi^{-1}(1 - \epsilon)\sqrt{a^\top \Sigma_{x_k} a} \le b.$   (60)

Putting it all together, if (56) holds, and if

(61a)
and
(61b)

then we ensure the chance constraints (34) will be satisfied.
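A deterministic check of a single half-space chance constraint following (59)-(60); scipy's norm.ppf plays the role of the inverse normal CDF, and the names are ours:

import numpy as np
from scipy.stats import norm

def halfspace_ok(a, b, xbar, Sigma, eps):
    # Enforces a^T xbar + Phi^{-1}(1 - eps) * sqrt(a^T Sigma a) <= b,
    # which guarantees P(a^T x > b) <= eps for Gaussian x.
    backoff = norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
    return a @ xbar + backoff <= b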

Remark 2

This work assumes that $\{\epsilon_{x,j}\}$ and $\{\epsilon_{u,j}\}$ are given sets of positive numbers that satisfy (56). This assumption allows (61) to be convex. Otherwise, (61) becomes non-convex, and we need to consider an optimal risk allocation problem. Several approaches have been proposed, such as [14, 15], to handle the risk allocation problem. In addition, the authors of [16] used a primal-dual interior point method to find an optimal risk allocation.
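For instance, a fixed uniform allocation satisfying (56) can be constructed as follows (a trivial Python sketch with our names):

import numpy as np

def uniform_allocation(p_fail, num_halfspaces):
    # Split the step-wise failure probability evenly across the half-space
    # constraints, so the allocations sum to p_fail as required by (56).
    return np.full(num_halfspaces, p_fail / num_halfspaces)

eps_x = uniform_allocation(0.10, 4)  # e.g., 10% failure split over 4 half spaces
assert eps_x.sum() <= 0.10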

We are now ready to restate the covariance steering problem as a deterministic, finite dimensional optimization problem.

Problem 3

Linear Covariance Steering. Find and that minimize the cost (47) subject to the terminal state constraints (49) and (51) and the chance constraints (61).

IV Iterative Covariance Steering

In the previous sections, we have locally approximated the continuous time, nonlinear system (1) with the discrete linear system (29), and we have restated the cost function and constraints in terms of the discrete linear system as functions of a feedforward control $V$ and feedback gain $K$. We will search for solutions to the original nonlinear system by successively solving this approximate convex problem, where the optimal controls from each successive problem are used to propagate trajectories of the nonlinear system to obtain references for the next linearization step. This method is referred to in the literature as successive convexification [17, 18].

IV-A Stochastic Trust Region

The linear approximation of the system dynamics is only valid in a neighborhood of the reference trajectory, so care must be taken to ensure that the optimal controls for the linear problem remain relevant to the nonlinear problem. For this reason, variations of the state and the control from the previous solution are bounded inside a trust region [17]. In this paper, we are successively approximating a stochastic system, and since the state of a system with Brownian noise is unbounded, we must instead define a stochastic trust region, as follows:

(62a)
(62b)

where the bounds and maximum violation probabilities are user-defined limits. Constraints in the 1-norm can be represented by finitely many half-space inequality constraints on the state or control, respectively, and handled using (61). As a consequence of these trust region constraints, if the reference trajectory is sufficiently far away from the terminal constraint, then the problem may become infeasible. For these situations, which are most likely encountered when initializing the algorithm, we relax the hard constraint (49) on the terminal state mean to the soft constraint

(63)

with a corresponding term added to the cost, where $\lambda$ is a user-defined weight. This constraint may be replaced with the hard constraint (49) when the reference trajectory is sufficiently close to the terminal constraint. In the case that (63) is active, we use the augmented cost function given by

(64)
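In a cvxpy-style implementation, the soft terminal-mean relaxation might be sketched as follows (the helper and its arguments are our assumptions, consistent with the earlier sketches):

import cvxpy as cp

def soft_terminal_terms(E_N, calA, calB, calD, mu0, mu_f, V, lam):
    # A slack vector replaces the hard equality (49), in the spirit of
    # (63)-(64); the slack's norm is penalized in the cost with weight lam.
    slack = cp.Variable(mu_f.shape[0])
    constraint = [E_N @ (calA @ mu0 + calB @ V + calD) - mu_f == slack]
    penalty = lam * cp.norm(slack, 1)
    return constraint, penalty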

In summary, we have modified Problem 3 to the following convex optimization problem.

Problem 4

iCS Convex Subproblem. Find $V$ and $K$ that minimize the cost (64) subject to the terminal state constraints (51) and (49) (or (63) if the reference trajectory is sufficiently far from the target), the chance constraints (61), and the trust region constraints (62).

This problem is solved successively in order to find a solution to Problem 1 using the iCS algorithm presented in Procedure 1.

1: Input: an initial reference trajectory and control guess
2: Output: optimal feedforward controls $v_k$ and feedback gains $K_k$
3: for $i = 0, 1, 2, \ldots$ up to the maximum number of iterations do
4:     Propagate the nonlinear mean dynamics with the current control
5:     Set the reference trajectory to the propagated mean
6:     Linearize about the reference trajectory
7:     Discretize the linearized system
8:     Solve Problem 4 to obtain $V$ and $K$
9:     Reshape $V$ and $K$ into the per-step controls $v_k$ and gains $K_k$
10:    if the convergence tolerance is met then
11:        return $v_k$, $K_k$
12:    else
13:        update the reference trajectory and control for the next iteration
14: return Convergence not met
Procedure 1 Iterative Covariance Steering (iCS)
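A Python skeleton of Procedure 1; the four callables stand in for the steps described in the text and must be supplied by the user, so this is a sketch of the loop structure rather than a complete implementation:

import numpy as np

def ics(propagate_mean, linearize, discretize, solve_subproblem,
        mu0, Sigma0, u_ref, max_iters=20, tol=1e-3):
    for _ in range(max_iters):
        x_ref = propagate_mean(mu0, u_ref)          # nonlinear mean propagation
        A, B, d = linearize(x_ref, u_ref)           # cf. (9)-(11)
        Ad, Bd, dd, Gd = discretize(A, B, d)        # cf. (17) or (19)
        V, K = solve_subproblem(Ad, Bd, dd, Gd, mu0, Sigma0)  # Problem 4
        if np.max(np.abs(V - u_ref)) < tol:         # illustrative convergence test
            return V, K
        u_ref = V                                   # reference control for next pass
    raise RuntimeError("Convergence not met")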
Remark 3

The nonlinear mean dynamics can be propagated by Monte Carlo simulation, which can be parallelized. In the case that computational resources are limited, we can instead approximate $\mathbb{E}[f(\tau, x, u)] \approx f(\tau, \bar{x}, \bar{u})$, so that the mean state evolves according to

$\dot{\bar{x}}(\tau) = \sigma f(\tau, \bar{x}(\tau), \bar{u}(\tau)).$   (65)

In this case, the mean state can be estimated by integrating a single trajectory.
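An Euler-Maruyama sketch of the Monte Carlo mean propagation mentioned above (signatures are illustrative; taking a single sample with the noise term removed recovers the approximation (65)):

import numpy as np

def propagate_mean_mc(f, mu0, Sigma0, u, taus, sigma, G, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mu0, Sigma0, size=n_samples)
    means = [mu0]
    for k in range(len(taus) - 1):
        dtau = taus[k + 1] - taus[k]
        drift = np.array([f(taus[k], xi, u[k]) for xi in x])
        noise = rng.standard_normal((n_samples, G.shape[1])) @ G.T
        x = x + sigma * dtau * drift + np.sqrt(sigma * dtau) * noise
        means.append(x.mean(axis=0))
    return np.array(means)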

V Numerical Example

In this section we apply the iCS algorithm to control a double integrator subject to a quadratic drag force. Let the position and velocity be described by the stochastic system

(66)
(67)

where $c_d$ is the drag coefficient and $\alpha$ is a noise scale parameter. In terms of the state $x = (p^\top, v^\top)^\top$, the dynamics can be written as

(68)

where $f$ represents the nonlinear drag dynamics and where

(69)
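Since the exact dynamics and parameter values are not reproduced in this extraction, the following Python sketch assumes a standard planar quadratic-drag model, dp = v dt and dv = (u - c_d ||v|| v) dt + alpha dw, purely for illustration:

import numpy as np

def drag_double_integrator(t, x, u, c_d=0.2):
    # State x = (p, v) stacked; quadratic drag opposes the velocity.
    p, v = x[:2], x[2:]
    return np.concatenate([v, u - c_d * np.linalg.norm(v) * v])

def noise_matrix(alpha=0.1):
    # Additive noise enters the velocity channels only (an assumption).
    return alpha * np.vstack([np.zeros((2, 2)), np.eye(2)])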

Linearizing about a reference velocity $\bar{v}$, we obtain

(70)

where $A$ and $B$ are the corresponding Jacobians and $d$ is given as in (11). In addition, we enforce the chance constraint

(71)

where the maximum probability of violation is set to 10%. The initial state is normally distributed with mean and covariance

(72)

and the terminal distribution is constrained by the mean and covariance

(73)

We set the drag coefficient and the noise scale . For the solution, we let , , and . The mean cost function was

(74)

the weight matrices were and , and the algorithm was seeded with the initial guess

(75)

Since the initial guess violates the chance constraint (71), we relax the chance constraint on the first iteration and tighten it to the final constraint over the first several iterations. The algorithm converged in eight iterations, and the solutions for each iteration are shown in Figure 1. Samples from a 5,000-trial Monte Carlo simulation are shown in Figure 2. The maximum probability of constraint violation occurred at a single time step, where 9.78% of sampled states violated the constraint, which is below the 10% limit set in (71). Also from the Monte Carlo simulation, the final state mean was

(76)

which is very close to the specified value in (73), and the covariance

(77)

is less than the upper bound in (73).

Fig. 1: State and control during successive solutions. The dashed lines are the reference state and control, and the solid lines are the mean state and control after the $i$th step. The first iteration is shown in blue and the final iteration is shown in bold.
Fig. 2: Successive iterations and Monte Carlo results. Iterations are shown by colored lines, with the initial iteration in red. The chance constraint is shown by the dotted line, and 90% confidence ellipses are shown in black and gray. The black confidence ellipses are computed from the linear analysis, the gray ellipses are computed from the Monte Carlo simulations, and the gray trajectories are a subset of the Monte Carlo trials.

VI Conclusion

In this paper we presented an algorithmic solution to the chance constrained nonlinear CS problem. We began by approximating the original nonlinear stochastic system by a linear discrete stochastic system, and then we formulated the linear CS problem as a deterministic optimization problem. Next, in order for the linearized problem formulation to be a reasonable approximation of the original nonlinear problem, we constrained the trajectory at each iteration within a probabilistic trust region about the trajectory from the previous iteration. The size of the trust region depends on the nonlinearity of the dynamics, and therefore convergence properties are problem-specific.

Since the proposed iCS algorithm linearizes the dynamics at each iteration, an initial trajectory must be given for the first iteration of the algorithm. At the same time, the difference in the trajectories between iterations is constrained within a trust region, and so a poor initialization may cause the first step to be infeasible. In the numerical example we addressed this problem by relaxing the chance constraints in the first iterations of the algorithm. Since the chance constrained region is assumed to be a convex polytope, the region can be easily expanded by scaling the inequality constraints. Another solution would be to first solve a deterministic optimization problem with tightened inequality constraints representing a worst-case chance constrained region. In this case, the iCS algorithm would be used to improve the solution from the deterministic problem by adding the closed-loop system statistics to the optimization. The latter approach could be applied to problems such as planetary entry and powered descent by iterating on a given reference trajectory that is to be tracked in the presence of uncertainty [19, 20].

In future work we plan to apply iCS to problems in entry, descent, and landing (EDL) with nonlinear dynamics, such as entry and powered descent. Another extension to this work would be the addition of time-varying chance constraints that are satisfied for all time, rather than only at the discrete time steps, while not being overly conservative.

References

  • [1] R. Brockett, “Notes on the control of the Liouville equation,” in Control of Partial Differential Equations, Lecture Notes in Mathematics 2048, (Berlin ; New York), Springer, 2010.
  • [2] A. Charnes and W. W. Cooper, “Deterministic equivalents for optimizing and satisficing under chance constraints,” Operations Research, vol. 11, no. 1, pp. 18–39, 1963.
  • [3] A. Nemirovski and A. Shapiro, “Convex approximations of chance constrained programs,” SIAM Journal on Optimization, vol. 17, no. 4, pp. 969–996, 2006.
  • [4] Y. Chen, T. T. Georgiou, and M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part I,” IEEE Transactions on Automatic Control, vol. 61, pp. 1158–1169, May 2016.
  • [5] Y. Chen, T. T. Georgiou, and M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part II,” IEEE Transactions on Automatic Control, vol. 61, pp. 1170–1180, May 2016.
  • [6] Y. Chen, T. T. Georgiou, and M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part III,” IEEE Transactions on Automatic Control, vol. 63, pp. 3112–3118, September 2018.
  • [7] M. Goldshtein and P. Tsiotras, “Finite-horizon covariance control of linear time-varying systems,” in IEEE 56th Annual Conference on Decision and Control, (Melbourne, Australia), pp. 3606–3611, December 2017.
  • [8] K. Okamoto, M. Goldshtein, and P. Tsiotras, “Optimal covariance control for stochastic systems under chance constraints,” IEEE Control Systems Letters, vol. 2, pp. 266–271, April 2018.
  • [9] K. Okamoto and P. Tsiotras, “Optimal stochastic vehicle path planning using covariance steering,” IEEE Robotics and Automation Letters, 2019.
  • [10] M. Szmuk and B. Açıkmeşe, “Successive convexification for 6-dof Mars rocket powered landing with free-final-time,” in 2018 AIAA Guidance, Navigation, and Control Conference, no. AIAA 2018-0617, 2018.
  • [11] W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control. Applications of Mathematics 1, Springer-Verlag, 1975.
  • [12] E. Bakolas, “Optimal covariance control for discrete-time stochastic linear systems subject to constraints,” in IEEE 55th Annual Conference on Decision and Control, (Las Vegas, NV), pp. 1153–1158, December 2016.
  • [13] L. Blackmore and M. Ono, “Convex chance constrained predictive control without sampling,” in AIAA Guidance, Navigation, and Control Conference, no. AIAA 2009-5876, (Chicago, Illinois), August 2009.
  • [14] M. Ono and B. C. Williams, “Iterative risk allocation: A new approach to robust model predictive control with a joint chance constraint,” in IEEE 47th Conference on Decision and Control, (Cancún, Mexico), pp. 3427–3432, December 2008.
  • [15] M. P. Vitus and C. J. Tomlin, “Closed-loop belief space planning for linear, Gaussian systems,” in IEEE International Conference on Robotics and Automation, (Shanghai, China), pp. 2152–2159, May 9 – 13, 2011.
  • [16] Y. Ma, S. Vichik, and F. Borrelli, “Fast stochastic MPC with optimal risk allocation applied to building control systems,” in IEEE Conference on Decision and Control, (Maui, HI), pp. 7559–7564, December 2012.
  • [17] M. Szmuk, B. Açıkmeşe, and A. W. Berning, “Successive convexification for fuel-optimal powered landing with aerodynamic drag and non-convex constraints,” in AIAA Guidance, Navigation, and Control Conference, no. AIAA 2016-0378, 2016.
  • [18] Y. Mao, M. Szmuk, and B. Açıkmeşe, “Successive convexification of non-convex optimal control problems and its convergence properties,” in IEEE 55th Conference on Decision and Control, (Las Vegas, NV), pp. 3636–3641, December 2016.
  • [19] A. D. Cianciolo and R. W. Powell, “Entry, descent, and landing guidance and control approaches to satisfy Mars human mission landing criteria,” in 27th AAS/AIAA Space Flight Mechanics Meeting, no. AAS 17-254, (San Antonio, TX), February 2017.
  • [20] J. Ridderhof and P. Tsiotras, “Minimum-fuel powered descent in the presence of random disturbances,” in 2019 AIAA Guidance, Navigation, and Control Conference, no. AIAA 2019-0646, (San Diego, California), 2019.
  • [21] M. Ono and B. C. Williams, “An efficient motion planning algorithm for stochastic dynamic systems with constraints on probability of failure,” in AAAI, (Chicago, Illinois), pp. 1376–1382, July 2008.
  • [22] M. Szmuk, T. Reynolds, B. Açıkmeşe, M. Mesbahi, and J. M. Carson, “Successive convexification for 6-dof powered descent guidance with compound state-triggered constraints,” in AIAA Scitech 2019 Forum, no. AIAA 2019-0926, 2019.
  • [23] J. Ridderhof and P. Tsiotras, “Uncertainty quantification and control during Mars powered descent and landing using covariance steering,” in 2018 AIAA Guidance, Navigation, and Control Conference, no. AIAA 2018-0611, (Kissimmee, Florida), 2018.