Adaptive Horizon Model Predictive Control


Arthur J. Krener

A. J. Krener is with the Department of Applied Mathematics, Naval Postgraduate School, Monterey, CA 93943, ajkrener@nps.edu. Research supported in part by AFOSR.
Abstract

Adaptive Horizon Model Predictive Control (AHMPC) is a scheme for varying as needed the horizon length of Model Predictive Control (MPC). Its goal is to achieve stabilization with horizons as small as possible so that MPC can be used on faster or more complicated dynamic processes. Besides the standard requirements of MPC, including a terminal cost that is a control Lyapunov function, AHMPC requires a terminal feedback that turns the control Lyapunov function into a standard Lyapunov function in some domain around the operating point. But this domain need not be known explicitly. MPC does not compute off-line the optimal cost and the optimal feedback over a large domain; instead it computes these quantities on-line when and where they are needed. AHMPC does not compute off-line the domain on which the terminal cost is a control Lyapunov function; instead it computes on-line whether a state is in this domain.





I Introduction

Model Predictive Control (MPC) is a way to optimally steer a discrete time control system to a desired operating point. We briefly describe it following the definitive treatise of Rawlings and Mayne [1], whose notation we closely follow.

We are given a controlled, nonlinear dynamics in discrete time

$x^+ = f(x,u)$   (1)

where the state $x \in \mathbb{R}^{n}$, the control $u \in \mathbb{R}^{m}$ and $f: \mathbb{R}^{n} \times \mathbb{R}^{m} \to \mathbb{R}^{n}$. This could be the discretization of a controlled, nonlinear dynamics in continuous time. The goal is to find a feedback law $u = \kappa(x)$ that drives the state of the system to some desired operating point. A pair $(x^e, u^e)$ is an operating point if $f(x^e, u^e) = x^e$. We conveniently assume that the operating point has been translated to $(x^e, u^e) = (0,0)$.
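As a concrete illustration (not the double pendulum example treated later in this paper), the following Python fragment sets up a toy discrete time system and verifies that the origin is an operating point; the dynamics are hypothetical.

```python
import numpy as np

# A toy discrete time control system x+ = f(x, u) with f(0, 0) = 0,
# so (x, u) = (0, 0) is an operating point.  The dynamics are
# illustrative stand-ins only.

def f(x, u):
    return 0.9 * x + np.sin(u)

x_e, u_e = 0.0, 0.0
assert np.isclose(f(x_e, u_e), x_e)   # fixed point of the dynamics
```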

The controlled dynamics may be subject to constraints such as

$x \in \mathbb{X} \subset \mathbb{R}^{n}$   (2)
$u \in \mathbb{U} \subset \mathbb{R}^{m}$   (3)

and possibly a constraint involving both the state and control

$h(x,u) \in \mathbb{Y}$   (4)

A control sequence $\mathbf{u} = (u_0, u_1, \ldots)$ is said to be feasible at $x$ if $u_k \in \mathbb{U}$ and the corresponding state sequence $x_0 = x$, $x_{k+1} = f(x_k, u_k)$ satisfies $x_k \in \mathbb{X}$ and $h(x_k, u_k) \in \mathbb{Y}$ for all $k$.

Of course the stabilizing feedback $\kappa(x)$ that we seek needs to be feasible: for every $x \in \mathbb{X}$, $\kappa(x) \in \mathbb{U}$ and $h(x, \kappa(x)) \in \mathbb{Y}$.

An ideal way to solve this problem is to choose a Lagrangian $\ell(x,u)$ that is nonnegative definite in $x$ and positive definite in $u$ and then to solve the infinite time optimal control problem of minimizing over choice of feasible control sequence $\mathbf{u} = (u_0, u_1, \ldots)$ the quantity

$\sum_{k=0}^{\infty} \ell(x_k, u_k)$   (5)

subject to the dynamics (1), the constraints (2-4) and $x_0 = x^0$. Let $V(x^0)$ denote the minimum value and $\mathbf{u}^0, \mathbf{x}^0$ be a minimizing control sequence with corresponding state sequence. Minimizing control and state sequences need not be unique but we shall generally ignore this.

If a pair of functions $V(x), \kappa(x)$ satisfy the infinite horizon Dynamic Program Equations (DPE)

$V(x) = \min_u \{ \ell(x,u) + V(f(x,u)) \}$   (6)
$\kappa(x) = \operatorname{argmin}_u \{ \ell(x,u) + V(f(x,u)) \}$   (7)
$V(0) = 0$   (8)

and the feasibility constraints

$\kappa(x) \in \mathbb{U}$   (9)
$h(x, \kappa(x)) \in \mathbb{Y}$   (10)

for $x \in \mathbb{X}$ then it is not hard to show that $V(x)$ is the optimal cost and $\kappa(x)$ is an optimal feedback law, $u_k = \kappa(x_k)$. Then under suitable conditions a Lyapunov argument can be used to show that the feedback is stabilizing.
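When the state dimension is very small the DPE (6-8) can be solved numerically, for instance by value iteration on a grid. The following Python sketch does this for a hypothetical scalar example; the dynamics, Lagrangian and grids are illustrative stand-ins, and states that leave the grid are clamped by the interpolation.

```python
import numpy as np

# Value iteration sketch for the infinite horizon DPE
#   V(x) = min_u { l(x,u) + V(f(x,u)) },  V(0) = 0,
# on a one dimensional grid.  All model functions are illustrative.

def f(x, u):            # hypothetical scalar dynamics, f(0,0) = 0
    return 0.9 * x + u

def lagrangian(x, u):   # nonnegative definite in x, positive definite in u
    return x**2 + u**2

xs = np.linspace(-1.0, 1.0, 201)     # state grid
us = np.linspace(-0.5, 0.5, 101)     # control grid
V = np.zeros_like(xs)                # initial guess V = 0

for _ in range(500):
    # Q[i, j] = l(x_i, u_j) + V(f(x_i, u_j)), V evaluated by interpolation
    Q = lagrangian(xs[:, None], us[None, :]) \
        + np.interp(f(xs[:, None], us[None, :]), xs, V)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

kappa = us[Q.argmin(axis=1)]         # feedback kappa(x) on the grid
```

The tabulated approach scales exponentially with the state dimension, which is exactly the difficulty noted next.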

The difficulty with this approach is that it is generally impossible to solve the DPE on a large domain if the state dimension is greater than 2 or 3. So both theorists and practitioners have turned to Model Predictive Control (MPC). They choose a Lagrangian $\ell(x,u)$, a horizon length $N$, a terminal domain $\mathbb{X}_f$ containing the operating point $x = 0$ and a terminal cost $V_f(x)$ defined and positive definite on $\mathbb{X}_f$. Consider the problem of minimizing by choice of feasible $\mathbf{u} = (u_0, \ldots, u_{N-1})$

$V_N(x^0, \mathbf{u}) = \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N)$   (11)

subject to the dynamics (1), the constraints (2-4), the terminal condition $x_N \in \mathbb{X}_f$ and the initial condition $x_0 = x^0$. Assuming this problem is solvable, let $V_N^0(x^0)$ denote the optimal cost,

$V_N^0(x^0) = \min_{\mathbf{u}} V_N(x^0, \mathbf{u})$   (12)

where the minimum is taken over all feasible $\mathbf{u}$. Let $\mathbf{u}^0 = (u_0^0, \ldots, u_{N-1}^0)$ and $\mathbf{x}^0 = (x_0^0, \ldots, x_N^0)$ denote optimal control and state sequences and define the MPC feedback $\kappa_N(x^0) = u_0^0$.

Let the sets $\mathcal{X}_j$ be defined inductively: $\mathcal{X}_0 = \mathbb{X}_f$ and $\mathcal{X}_j$ is the set of $x \in \mathbb{X}$ for which there exists a feasible $u$ with $f(x,u) \in \mathcal{X}_{j-1}$.

The terminal set $\mathbb{X}_f$ is controlled invariant (aka viable) if for each $x \in \mathbb{X}_f$ there exists a $u \in \mathbb{U}$ such that $f(x,u) \in \mathbb{X}_f$ and the constraint (4) is satisfied. If this holds then it is not hard to see inductively that the sets are nested,

$\mathbb{X}_f = \mathcal{X}_0 \subset \mathcal{X}_1 \subset \mathcal{X}_2 \subset \cdots$

If a pair $V_j(x), \kappa_j(x)$ defined on $\mathcal{X}_j$ satisfy the horizon $j$ Dynamic Program Equations (DPE)

$V_j(x) = \min_u \{ \ell(x,u) + V_{j-1}(f(x,u)) \}$   (13)
$\kappa_j(x) = \operatorname{argmin}_u \{ \ell(x,u) + V_{j-1}(f(x,u)) \}$   (14)
$V_0(x) = V_f(x)$   (15)

where the minimum is over all $u$ that are feasible at $x$, then it is not hard to show that $V_N(x) = V_N^0(x)$ is the optimal cost and $\kappa_N(x)$ is an optimal feedback law. If $V_f$ is a control Lyapunov function on $\mathbb{X}_f$ then under suitable conditions a Lyapunov argument can be used to show that the feedback $\kappa_N$ is stabilizing on $\mathcal{X}_N$. See [1] for more details.

As we noted above, solving off-line the infinite horizon optimal control problem for all possible states is generally intractable. The advantage of solving the horizon $N$ optimal control problem for the current state is that it possibly can be done on-line as the process evolves. If the current value of the state is known to be $x$ then the finite horizon optimal control problem is a nonlinear program with finite dimensional decision variable $\mathbf{u} = (u_0, \ldots, u_{N-1})$. If the time step is long enough, if $f, \ell, V_f$ are reasonably simple and if $N$ is small enough then this nonlinear program can be solved in a fraction of one time step for $\mathbf{u}^0$. Then the first element $u_0^0$ of this sequence is used as the control at the current time. The system evolves one time step and the process is repeated at the next time. Conceptually MPC computes an optimal feedback law but only at values of $x$ when and where it is needed.
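For example, the horizon $N$ problem can be transcribed as a nonlinear program over the control sequence alone (single shooting) and handed to a general purpose solver. The sketch below uses scipy.optimize.minimize; the model functions, the scalar control and the box bound standing in for $\mathbb{U}$ are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Single-shooting transcription of the horizon-N problem (11): the
# decision variable is the control sequence (u_0, ..., u_{N-1}) and
# the states are recovered by rolling the dynamics forward.

def mpc_cost(u_seq, x0, N, f, lagrangian, V_f):
    x, J = x0, 0.0
    for k in range(N):
        J += lagrangian(x, u_seq[k])
        x = f(x, u_seq[k])
    return J + V_f(x)                    # terminal cost at x_N

def solve_horizon_N(x0, N, f, lagrangian, V_f, u_max=0.5):
    res = minimize(mpc_cost, np.zeros(N),
                   args=(x0, N, f, lagrangian, V_f),
                   bounds=[(-u_max, u_max)] * N)   # control constraint
    return res.x                         # approximately optimal sequence

# MPC applies only the first element of solve_horizon_N(x, N, ...) and
# then re-solves the problem at the next state.
```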

Some authors do away with the terminal cost but there is a theoretical and a practical reason to use one. The theoretical reason is that a control Lyapunov terminal cost facilitates a proof of asymptotic stability via a Lyapunov argument [1]. The practical reason is that one can usually use a shorter horizon when there is a terminal cost. A shorter horizon reduces the dimension of the decision variables in the nonlinear programs that need to be solved on-line. Therefore MPC with a suitable terminal cost can be used for faster and more complicated systems.

The ideal terminal cost is the optimal cost of the corresponding infinite horizon optimal control problem, provided that the latter can be accurately computed off-line on a reasonably large terminal set $\mathbb{X}_f$. This may be tractable because the terminal set may be much smaller than $\mathbb{X}$ and only an approximate solution on $\mathbb{X}_f$ may suffice. For example it can be locally approximated by the solution of the infinite horizon LQR problem involving the linear part of the dynamics and quadratic part of the Lagrangian at the operating point.
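For instance, such a local approximation can be computed from the discrete time algebraic Riccati equation. A minimal sketch, assuming a linearization $x^+ \approx Ax + Bu$ and quadratic Lagrangian weights $Q, R$; the matrices below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# LQR terminal pair: V_f(x) = x' P x and kappa_f(x) = -K x, from the
# linear part of the dynamics and the quadratic part of the Lagrangian.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # illustrative linearization
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                   # quadratic Lagrangian weights
R = np.eye(1)

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

V_f = lambda x: float(x @ P @ x)        # quadratic terminal cost
kappa_f = lambda x: -(K @ x)            # linear terminal feedback
```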

One would expect that when the current state is far from the operating point a relatively long horizon is needed to ensure that the end state $x_N$ is in $\mathbb{X}_f$, but as the state approaches the operating point shorter and shorter horizons can be used. Adaptive Horizon Model Predictive Control (AHMPC) adjusts the horizon of MPC on-line as it is needed. In the next section we present an ideal version of AHMPC and in the following section we present a practical implementation of AHMPC. Finally we close with an example.

II Ideal Adaptive Model Predictive Control

We shall make some standing assumptions. The first few are drawn from Rawlings and Mayne.

Assumption 1: (Assumption 2.2 of [1])
The functions $f, \ell, V_f$ are continuous on some open set containing $\mathbb{X} \times \mathbb{U}$, $\ell(x,u)$ is nonnegative definite in $x$ and positive definite in $u$ on this open set, $V_f(x)$ is positive definite on $\mathbb{X}_f$ and $f(0,0) = 0$, $\ell(0,0) = 0$, $V_f(0) = 0$.

Assumption 2: (Assumption 2.3 of [1])
The sets $\mathbb{X}$ and $\mathbb{X}_f$ are closed, $\mathbb{X}_f \subset \mathbb{X}$, $\mathbb{U}$ is compact and $\mathbb{X}$, $\mathbb{U}$ and $\mathbb{X}_f$ contain neighborhoods of their respective origins.

Assumption 3: (Assumptions 2.12 and 2.13 of [1])
For all $x \in \mathbb{X}_f$ there exists a feasible $u \in \mathbb{U}$ such that

$f(x,u) \in \mathbb{X}_f, \qquad V_f(f(x,u)) \le V_f(x) - \ell(x,u).$

This assumption implies that $\mathbb{X}_f$ is controlled invariant and that $V_f$ is a control Lyapunov function on $\mathbb{X}_f$.

We make some additional assumptions.

Assumption 4: For each $x \in \mathbb{X}$ there is a nonnegative integer $N$ and a feasible control sequence $u_0, \ldots, u_{N-1}$ such that the corresponding state sequence starting from $x_0 = x$ satisfies $x_N \in \mathbb{X}_f$.

This assumption allows us to define a function $N(x)$ on $\mathbb{X}$ as the minimum of all $N$ such that there exists such a control sequence and corresponding state sequence starting from $x$ that satisfies $x_N \in \mathbb{X}_f$.

Then the nested sets $\mathcal{X}_j$ defined above are given by

$\mathcal{X}_j = \{ x \in \mathbb{X} : N(x) \le j \}.$

Assumption 5: There exists a nonnegative integer $\bar{N}$ such that $N(x) \le \bar{N}$ for all $x \in \mathbb{X}$. In other words $\mathbb{X} = \mathcal{X}_{\bar{N}}$.

These assumptions imply that the usual MPC with horizon length $\bar{N}$ is stabilizing on $\mathbb{X}$ by standard arguments [1]. But it is a waste of time to use horizon length $\bar{N}$ when $N(x)$ is substantially smaller than $\bar{N}$. If the current state is $x$, ideal AHMPC uses horizon length $N(x)$. Then as the current state approaches the terminal set, ideal AHMPC uses shorter and shorter horizons. When $x$ is in the terminal set, ideal AHMPC uses a horizon length of zero.

In a moment we shall show that the function

$\bar{V}(x) = V_{N(x)}(x)$   (16)

is a valid Lyapunov function for the closed loop system, which confirms the stabilizing property of the ideal AHMPC feedback

$\bar{\kappa}(x) = \kappa_{N(x)}(x).$   (17)

But the reason why this scheme is not practical is that, in general, it is impossible to compute the function $N(x)$. In the next section we shall offer a work around but for now we study the stabilizing properties of ideal AHMPC.

Lemma 1: Assume Assumptions 1-5 hold. If $N(x) = j$ and if $u_0, \ldots, u_{j-1}$ and $x_0 = x, \ldots, x_j$ are a control and state trajectory from $x$ such that $x_j \in \mathbb{X}_f$ then $N(x_i) = j - i$ for $i = 0, \ldots, j$.

Proof: By assumption $x_j \in \mathbb{X}_f$ so $N(x_{j-1}) \le 1$, $N(x_{j-2}) \le 2$, etc. So $N(x_i) \le j - i$.

Suppose $N(x_i) < j - i$ for some $i$. Then there is a feasible control sequence of length $N(x_i)$ steering $x_i$ to $\mathbb{X}_f$ and hence, prefixing $u_0, \ldots, u_{i-1}$, a feasible control sequence of length $i + N(x_i) < j$ steering $x$ to $\mathbb{X}_f$, which contradicts the assumption that $N(x) = j$.

Lemma 2: (Compare with Lemma 2.14 of [1]) Under Assumptions 1-5,

$\bar{V}(x^+) \le \bar{V}(x) - \ell(x, \bar{\kappa}(x))$

where $x^+ = f(x, \bar{\kappa}(x))$ and $\bar{V}$ is defined by (16).

Proof: By definition

$\bar{V}(x) = V_{N(x)}(x) = \sum_{k=0}^{j-1} \ell(x_k^0, u_k^0) + V_f(x_j^0)$

where $j = N(x)$ and $\mathbf{u}^0$ and $\mathbf{x}^0$ are optimizing control and state sequences for the horizon $j$ optimal control problem, so $u_0^0 = \bar{\kappa}(x)$ where $\bar{\kappa}$ is defined by (17).

Let $x^+ = x_1^0$. By Lemma 1, $N(x^+) = j - 1$, so

$\bar{V}(x^+) = V_{j-1}(x^+).$

So for any feasible control sequence $u_1, \ldots, u_{j-1}$ and corresponding state sequence $x_1 = x^+, \ldots, x_j$,

$\bar{V}(x^+) \le \sum_{k=1}^{j-1} \ell(x_k, u_k) + V_f(x_j).$

In particular if we take $u_k = u_k^0$ then $x_k = x_k^0$ and

$\bar{V}(x^+) \le \sum_{k=1}^{j-1} \ell(x_k^0, u_k^0) + V_f(x_j^0) = \bar{V}(x) - \ell(x, \bar{\kappa}(x)).$

Following Rawlings and Mayne we make the following assumption.

Assumption 6: (Assumption 2.16(a) of [1])
The stage cost $\ell$ and the terminal cost $V_f$ satisfy

(a) $\ell(x,u) \ge \alpha_1(|x|)$ for all $x \in \mathbb{X}$, $u \in \mathbb{U}$,
(b) $V_f(x) \le \alpha_f(|x|)$ for all $x \in \mathbb{X}_f$,

where $\alpha_1$ and $\alpha_f$ are class $\mathcal{K}_\infty$ functions.

Assumptions 3 and 6 imply that for each $x \in \mathbb{X}_f$ there exists a feasible $u$ such that

$V_f(f(x,u)) + \ell(x,u) \le V_f(x) \le \alpha_f(|x|).$
Proposition 1: (Compare with Proposition 2.17 of [1])
Suppose that Assumptions 1, 2, 3, 4, 5 and 6 are satisfied. Then there exist class $\mathcal{K}_\infty$ functions $\alpha_1$ and $\alpha_f$ such that $\bar{V}$ has the following properties:

$\bar{V}(x) \ge \alpha_1(|x|)$ for all $x \in \mathbb{X}$,
$\bar{V}(x) \le \alpha_f(|x|)$ for all $x \in \mathbb{X}_f$.

Proof: If $x \notin \mathbb{X}_f$ then the first inequality follows from Assumption 6(a) and the fact that $\bar{V}(x) \ge \ell(x, \bar{\kappa}(x))$. If $x \in \mathbb{X}_f$ then $\bar{V}(x) = V_f(x)$ and the first inequality follows from the positive definiteness of $V_f$, shrinking $\alpha_1$ if necessary. The second inequality follows from Assumption 6(b) and the fact that if $x \in \mathbb{X}_f$ then $N(x) = 0$ and so

$\bar{V}(x) = V_f(x) \le \alpha_f(|x|).$

If the second property held for all $x \in \mathbb{X}$,

$\bar{V}(x) \le \alpha_f(|x|),$

then $\bar{V}$ would be a valid Lyapunov function on $\mathbb{X}$. The following is a paraphrase of a proposition of Rawlings and Mayne.

Proposition 2: (Proposition 2.1 of [1])
Suppose that Assumptions 1, 2, 3 hold, that $\mathbb{X}_f$ contains an open neighborhood of the origin and that $\mathbb{X}$ is compact. If there exists a class $\mathcal{K}_\infty$ function $\alpha$ such that $V_f(x) \le \alpha(|x|)$ for $x \in \mathbb{X}_f$ then there exists another class $\mathcal{K}_\infty$ function $\beta$ such that $V_N^0(x) \le \beta(|x|)$ for $x \in \mathcal{X}_N$.

This allows us to prove the following proposition.

Proposition 3:
Suppose that Assumptions 1, 2, 3, 4, 5 and 6 hold, that $\mathbb{X}_f$ contains an open neighborhood of the origin and that $\mathbb{X}$ is compact. If there exists a class $\mathcal{K}_\infty$ function $\alpha$ such that $V_f(x) \le \alpha(|x|)$ for $x \in \mathbb{X}_f$ then there exists another class $\mathcal{K}_\infty$ function $\beta$ such that $\bar{V}(x) \le \beta(|x|)$ for $x \in \mathbb{X}$.

Proof: By Assumptions 4 and 5, $\mathbb{X} = \mathcal{X}_{\bar{N}}$. Let $\beta_j$ be the class $\mathcal{K}_\infty$ function given by Proposition 2 for the horizon $j$ problem and define

$\beta(s) = \max_{0 \le j \le \bar{N}} \beta_j(s).$

The maximum of a finite family of class $\mathcal{K}_\infty$ functions is also a class $\mathcal{K}_\infty$ function. Clearly $\bar{V}(x) = V_{N(x)}(x) \le \beta_{N(x)}(|x|) \le \beta(|x|)$ if $x \in \mathbb{X}$.

Proposition 4: Suppose that Assumptions 1, 2, 3, 4, 5 and 6 hold, that $\mathbb{X}_f$ contains an open neighborhood of the origin and that $\mathbb{X}$ is compact. Then $\bar{V}$ is a valid Lyapunov function which confirms the asymptotic stability of the closed loop dynamics

$x^+ = f(x, \bar{\kappa}(x))$

on $\mathbb{X}$.

So ideal AHMPC solves our stabilization problem in theory. But it generally can't be implemented because we can't compute the key ingredient, the function $N(x)$, or its domain of definition.

There is a slightly less ideal version of AHMPC. Suppose we have a function $\tilde{N}(x)$ with the following properties.
a) For each $x \in \mathbb{X}$ there is a feasible control sequence of length $\tilde{N}(x)$ and corresponding state sequence starting at $x_0 = x$ such that $x_{\tilde{N}(x)} \in \mathbb{X}_f$.
b) There exists an $\bar{N}$ such that $\tilde{N}(x) \le \bar{N}$ for all $x \in \mathbb{X}$.
c) If $\mathbf{u}^0$ and $\mathbf{x}^0$ are optimal control and state sequences for the horizon $\tilde{N}(x)$ optimal control problem starting at $x$ then $\tilde{N}(x_1^0) \le \tilde{N}(x)$. In other words along optimal trajectories $\tilde{N}$ either stays the same or decreases at each time step.

Then the above results hold: $\bar{V}$ as defined by (16) is a valid Lyapunov function for the closed loop system using the feedback defined by (17). The only additional thing that needs to be shown is that if $\tilde{N}(x_1^0) = \tilde{N}(x)$ then the Lyapunov decrease still holds. But this follows from standard MPC arguments, see Lemma 2.14 of [1].

III Adaptive Horizon Model Predictive Control

Here is a variation on the above that is practical, which we call Adaptive Horizon Model Predictive Control (AHMPC). We assume that we have the following.

  1. Sets $\mathbb{X}$, $\mathbb{U}$, $\mathbb{Y}$ and $\mathbb{X}_f$ satisfying Assumption 2. We do not require that $\mathbb{X}_f$ be known explicitly.

  2. A discrete time controlled dynamics $f(x,u)$, a Lagrangian $\ell(x,u)$, a constraint pair $h(x,u)$, $\mathbb{Y}$ and a terminal cost $V_f(x)$ satisfying Assumption 1.

  3. A terminal feedback $u = \kappa_f(x)$ and a class $\mathcal{K}$ function $\alpha(s)$ defined for all $x$ in some neighborhood of the origin and satisfying

(1) $\kappa_f(x) \in \mathbb{U}$ and $h(x, \kappa_f(x)) \in \mathbb{Y}$,
(2) $V_f(f(x, \kappa_f(x))) \le V_f(x) - \ell(x, \kappa_f(x))$,
(3) $V_f(x) \ge \alpha(|x|)$.

We don't need to know the terminal set $\mathbb{X}_f$ on which these conditions are satisfied; all we need is that there is such a terminal set and that it contains a neighborhood of $x = 0$.

One way of obtaining such a terminal pair is to approximately solve the infinite horizon dynamic program equations (DPE) on some neighborhood of the origin. For example if the linear part of the dynamics and the quadratic part of the Lagrangian constitute a nice LQR problem then one can let $V_f$ be the quadratic optimal cost and $\kappa_f$ be the linear optimal feedback of the LQR. Alternatively one can take higher degree Al'brekht approximations to the solution of the DPE [2]. Of course the problem with such terminal pairs is that generally there is no way to estimate the terminal set $\mathbb{X}_f$ on which (1), (2) and (3) are satisfied. It is reasonable to expect that they are satisfied on some terminal set but the extent of the terminal set is very difficult to estimate.

AHMPC mitigates this difficulty. MPC does not try to compute the optimal cost and optimal feedback everywhere; instead it computes them just when and where they are needed. AHMPC does not try to compute the extent of $\mathbb{X}_f$; it just tries to determine whether the end state of the currently computed optimal trajectory is in a terminal set where (1), (2) and (3) are satisfied.

Suppose the current state is $x$ and we have solved the horizon $N$ optimal control problem for $\mathbf{u}^0 = (u_0^0, \ldots, u_{N-1}^0)$, $\mathbf{x}^0 = (x_0^0, \ldots, x_N^0)$. AHMPC does not explicitly impose the terminal constraint $x_N^0 \in \mathbb{X}_f$ because $\mathbb{X}_f$ is not explicitly known, but it does require that the terminal cost $V_f$ is defined at $x_N^0$.

The terminal feedback is used to extend the state trajectory $M$ additional steps,

$x_{k+1}^0 = f(x_k^0, \kappa_f(x_k^0))$

for $k = N, \ldots, N+M-1$. This assumes that the terminal feedback is defined at $x_k^0$ for $k = N, \ldots, N+M-1$. If the terminal feedback is not defined at any of these points then we presume that $x_N^0$ is not in $\mathbb{X}_f$, so we increase $N$ by one and we solve the optimal control problem over the new horizon.

If the feedback is defined on the extended trajectory then one checks that the Lyapunov conditions hold for the extended part of the state sequence,

$V_f(x_{k+1}^0) \le V_f(x_k^0) - \ell(x_k^0, \kappa_f(x_k^0))$   (18)
$V_f(x_k^0) \ge \alpha(|x_k^0|)$   (19)

for $k = N, \ldots, N+M-1$. Again if the terminal cost is not defined at any of these points then we presume that $x_N^0$ is not in $\mathbb{X}_f$, so we increase $N$ by one and we solve the optimal control problem over the new horizon.

If (18, 19) hold for $k = N, \ldots, N+M-1$ then we presume that $x_N^0 \in \mathbb{X}_f$ and we use the control $u_0^0$ to move one time step forward to $x^+ = f(x, u_0^0)$. At this next state we solve the horizon $N-1$ optimal control problem and check that the extension of the new optimal trajectory satisfies (18, 19).

If (18, 19) do not hold for some $k = N, \ldots, N+M-1$ then we presume that $x_N^0 \notin \mathbb{X}_f$. If time permits we solve the horizon $N+1$ optimal control problem at the current state and then check the Lyapunov conditions (18, 19) again. We keep increasing the horizon by one until these conditions are satisfied. If we run out of time before (18, 19) are satisfied then we use the last computed $u_0^0$ and move one time step forward to $x^+ = f(x, u_0^0)$. At $x^+$ we again solve the horizon optimal control problem.

The number $M$ of additional time steps is a design parameter. Two obvious choices are to take a fixed $M$ which is a fraction of the initial horizon or to take a varying $M$ which is a fraction of the current horizon $N$.
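Putting the pieces of this section together, here is a hedged Python sketch of one AHMPC step. It reuses the single shooting solver solve_horizon_N sketched in the Introduction, extends the optimal trajectory $M$ steps with the terminal feedback, tests the Lyapunov conditions (18, 19) and adapts the horizon; all names are illustrative and the "time permits" logic is reduced to a simple cap N_max.

```python
import numpy as np

def lyapunov_ok(x, f, lagrangian, V_f, kappa_f, alpha, M):
    # Extend the trajectory M steps with the terminal feedback and
    # check the Lyapunov conditions (18, 19) at each step.
    for _ in range(M):
        u = kappa_f(x)
        x_next = f(x, u)
        if V_f(x_next) - V_f(x) > -lagrangian(x, u):   # (18) fails
            return False
        if V_f(x) < alpha(np.linalg.norm(x)):          # (19) fails
            return False
        x = x_next
    return True

def ahmpc_step(x, N, M, f, lagrangian, V_f, kappa_f, alpha, N_max=50):
    # One AHMPC step; alpha is a class K function, e.g.
    # alpha = lambda s: 0.5 * s**2.
    while N <= N_max:                    # "time permits" loop
        u_seq = solve_horizon_N(x, N, f, lagrangian, V_f)
        xN = x
        for k in range(N):               # roll out to the end state x_N
            xN = f(xN, u_seq[k])
        if lyapunov_ok(xN, f, lagrangian, V_f, kappa_f, alpha, M):
            # presume x_N is in the terminal set: apply u_0 and
            # shorten the horizon for the next problem
            return f(x, u_seq[0]), max(N - 1, 1)
        N += 1                           # presume x_N is not in X_f
    return f(x, u_seq[0]), N_max         # out of time: use last u_0
```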

IV Example

The example to which we apply AHMPC is stabilizing a double pendulum to the upright position using torques at each of the pivots. The states are $\theta_1$, the angle of the first leg measured in radians counter-clockwise from straight up, $\theta_2$, the angle of the second leg measured in radians counter-clockwise from straight up, and their angular velocities $\dot{\theta}_1, \dot{\theta}_2$. The controls are $u_1$, the torque applied at the base of the first leg, and $u_2$, the torque applied at the joint between the legs. The length of the first leg is $l_1$ m and the length of the second leg is $l_2$ m. The legs are assumed to be massless but there is a mass of $m_1$ kg at the joint between the legs and a mass of $m_2$ kg at the tip of the second leg. The continuous time controlled dynamics is discretized using Euler's method with time step $\Delta t$ s, assuming the control is constant throughout the time step.
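A minimal sketch of the Euler discretization with the control held constant over the step; the continuous time double pendulum dynamics are represented by an assumed function f_c since their explicit form is not reproduced here.

```python
# Euler discretization with a zero order hold on the control:
#   x+ = x + dt * f_c(x, u).
# f_c(x, u) stands in for the continuous time double pendulum dynamics.

def discretize(f_c, dt):
    # returns the discrete time map f(x, u) used by the MPC solver
    def f(x, u):
        return x + dt * f_c(x, u)
    return f

# e.g. f = discretize(f_c, dt) for the chosen time step dt
```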

The continuous time Lagrangian is chosen to be quadratic in the state and control and its Euler discretization $\ell(x,u)$ is used. We choose an initial state away from the upright equilibrium and an initial horizon length $N$. We simulated practical AHMPC with $(V_f, \kappa_f)$ given by the solution of the LQR problem using the linear part of the dynamics at the origin and the quadratic Lagrangian, and fixed $M$. We did not move one time step forward if (18, 19) did not hold over the extended state trajectory but instead increased $N$ by one and recomputed. The AHMPC trajectories of the two angles, $\theta_1$ in blue and $\theta_2$ in red, are shown in Figure 1.

Fig. 1: Angles Converging to the Vertical

The adaptively changing horizon length is shown in Figure 2. This graph includes cases where the horizon was increased by one but the state of the pendulum was not advanced. Notice that the horizon goes down and up several times before settling at its final value.

Fig. 2: Adaptively Changing Horizon

V Conclusion

Adaptive Horizon Model Predictive Control is a scheme for varying the horizon length in Model Predictive Control as the stabilization process evolves. We have presented an ideal version of AHMPC and shown that it guarantees stabilization. Practical AHMPC proceeds without knowing the minimum horizon length function $N(x)$ and without knowing the domain of Lyapunov stability of the terminal cost $V_f$ and terminal feedback $\kappa_f$.

We have only proven the convergence of AHMPC under ideal conditions, but the convergence of standard MPC is also proven under similar ideal conditions, e.g., exact model, exact knowledge of the current state, exact solution of the finite horizon optimal control problems, etc.

The principal advantage of AHMPC over standard MPC is that the AHMPC horizon length decreases as the process is stabilized, thereby lessening the on-line computational burden. Hence AHMPC may be able to stabilize systems with faster or more complicated dynamics.

The author would like to acknowledge helpful communications with Sergio Lucia, Philipp Rumschinski and Rolf Findeisen.

  • [1] J. B. Rawlings and D. Q. Mayne, Model Predictive Control: Theory and Design, Nob Hill Publishing, 2009.
  • [2] E. G. Al'brekht, On the Optimal Stabilization of Nonlinear Systems, PMM J. Appl. Math. Mech., 25:1254-1266, 1961.