Constraint-Tightening and Stability in Stochastic Model Predictive Control

Matthias Lorenzen, Fabrizio Dabbene, Roberto Tempo, and Frank Allgöwer Institute for Systems Theory and Automatic Control, University of Stuttgart, Germany {matthias.lorenzen,frank.allgower}@ist.uni-stuttgart.de. The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/2) at the University of Stuttgart.
CNR-IEIIT, Politecnico di Torino, Italy {roberto.tempo,fabrizio.dabbene}@polito.it. This research was partially supported by the joint CNR-JST international lab COOPS.
©2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract

Constraint tightening to non-conservatively guarantee recursive feasibility and stability in Stochastic Model Predictive Control is addressed. Stability and feasibility requirements are considered separately, highlighting the difference between existence of a solution and feasibility of a suitable, a priori known candidate solution. Subsequently, a Stochastic Model Predictive Control algorithm which unifies previous results is derived, leaving the designer the option to balance an increased feasible region against guaranteed bounds on the asymptotic average performance and convergence time. Besides typical performance bounds, under mild assumptions, we prove asymptotic stability in probability of the minimal robust positively invariant set obtained by the unconstrained LQ-optimal controller. A numerical example, demonstrating the efficacy of the proposed approach in comparison with classical, recursively feasible Stochastic MPC and Robust MPC, is provided.

Index Terms—Stochastic model predictive control, constrained control, predictive control, chance constraints, discrete-time stochastic systems, receding horizon control, linear systems

I Introduction

It is well known that a moving horizon scheme like Model Predictive Control (MPC) might incur significant performance degradation in the presence of uncertainty and disturbances. This fact was already recognized in early publications on dynamic programming, see for instance [1, Chapter 9.5]. To cope with this disadvantage, in recent years Robust MPC has received a great deal of attention for linear systems [2, 3] as well as for nonlinear systems [4, 5, 6]. In many cases, a stochastic model can be formulated to represent the uncertainty and disturbance, as for instance in the case of inflow material quality and purity in a chemical process, or wind speed and turbulence in aircraft or wind turbine control. This fact, and the inherent conservativeness of robust approaches, has led to an increasing interest in Stochastic Model Predictive Control (SMPC). A probabilistic description of the disturbance or uncertainty allows one to optimize the average performance or appropriate risk measures. Furthermore, allowing a (small) probability of constraint violation, by introducing so-called chance constraints, seems more appropriate in some applications, e.g. for meeting the demand in a warehouse or respecting bounds on the temperature or concentrations in a chemical reactor. Besides, chance constraints lead to an increased region of attraction without changing the prediction horizon. Still, hard constraints, e.g. due to physical limitations, can be considered in the same setup.

The first problem in Stochastic MPC is the derivation of computationally tractable methods to propagate the uncertainty for evaluating the cost function and the chance constraints. Both are multivariate integrals, whose evaluation requires the development of suitable techniques. A second problem in SMPC is related to the difficulty of establishing recursive feasibility. In order to have a well-defined control law, it is necessary to guarantee that the optimal control program, which is solved online, remains feasible at future sampling times if it is initially feasible. Indeed, in classical MPC, recursive feasibility is usually guaranteed by showing that the planned input trajectory remains feasible in the next optimization step. This idea is extended in Robust MPC by requiring that the input trajectory remain feasible for all possible disturbances.

In Stochastic MPC, a certain probability of future constraint violation is in general allowed, which leads to significantly less conservative constraint tightening for the predicted input and state, because worst-case scenarios become very unlikely. However, in this setup, the probability distribution of the state prediction at some future time depends on both the current state and the time to go. Hence, even under the same control law, the violation probability changes from time $k$ to time $k+1$, which can render the optimization problem infeasible.

The first problem, uncertainty propagation and tractable reformulation of chance constraints, has gained significant attention, and different methods to exactly evaluate, approximate, or bound the desired quantities have been proposed in the Stochastic MPC literature. An exact evaluation is in general only possible in a linear setup with Gaussian noise or finitely supported uncertainties, as in [7]. Approximate solutions include the particle approach [8] and polynomial chaos expansions [9].

Bounding methods with guaranteed probabilistic confidence include [10, 11], where the authors use the so-called scenario approach to cope with the chance constraint and determine at each iteration an optimal feedback gain ([10]) or feed-forward input ([11]), respectively. While this approach allows for nearly arbitrary uncertainty in the system, the online optimization effort increases dramatically and recursive feasibility cannot be guaranteed. In [12, 13] the authors use an online sampling approach as well, but show how the number of samples can be significantly reduced. For linear systems with parametric uncertainty, [14] proposes to decompose the uncertainty tube into a stochastic part computed offline and a robust part which is computed online. The paper [15] computes online a stochastic tube of fixed complexity using a sampling technique, but a mixed integer problem needs to be solved online. In [16] layered sets for the predicted states are defined and a Markov chain models the transition from one layer to another.

For linear systems with additive stochastic disturbance, the system is usually decomposed into a deterministic, nominal part and an autonomous system involving only the uncertain part. The approaches can then be divided into (i) computing a confidence region for the uncertain part and using this for constraint tightening, see [17] for an ellipsoidal confidence region, and (ii) directly tightening the constraints, given the evolution of the uncertain part, e.g. [18] and [19]. A slightly different approach is taken in [20], where the authors also first determine a confidence region for the disturbance sequence, but then employ robust optimization techniques. Using the same setup, in [21] the focus is to guarantee bounded variance of the state under hard input constraints.

The second problem, recursive feasibility, has seemingly attracted far less attention. The issue has been highlighted in [22] and a rigorous solution has been provided in [18, 17], where “recursively feasible probabilistic tubes” for constraint tightening are proposed. Instead of considering the probability distribution $l$ steps ahead given the current state, the probability distribution one step ahead given any disturbance realization in the first $l-1$ steps is considered. This essentially leads to a constraint tightening with $l-1$ worst-case predictions and one stochastic prediction for each prediction time $l$. In [19] the authors propose to compute a control invariant region and to restrict the successor state to be inside this region. This procedure leads to a feasible region which is less restrictive, but stability issues are not discussed.

The main contribution of this paper is to propose a nonconservative Stochastic MPC scheme that is computationally tractable and guarantees recursive feasibility. This is achieved by introducing a novel approach which unifies the previous results, combining the asymptotic performance bound of [18] with the advantages of the least restrictive approach in [19]. Unlike previous works, we explicitly study the case when the optimized input sequence does not remain feasible at the next sampling time and present a constraint tightening that bounds the probability of this event by a design parameter $\gamma$. Recursive feasibility is guaranteed through an additional constraint on the first step. With a first step constraint similar to [19] and with $\gamma = 0$, SMPC with recursively feasible probabilistic tubes is recovered. We introduce a constraint tightening which allows $\gamma$ to be used as a tuning parameter to balance convergence speed and performance against the size of the feasible region. Under mild assumptions, we prove stability in probability of the minimal robust positively invariant region obtained with the unconstrained LQ-optimal controller. As suggested in [23], the online algorithm is kept simple and the main computational effort is performed offline. The resulting offline chance constrained programs are briefly discussed and an efficient solution strategy using a sampling approach is provided.

The remainder of this paper is organized as follows. Section II introduces the receding horizon problem to be solved. In Section III the proposed finite horizon optimal control problem is derived, starting with a suitable constraint reformulation, followed by recursive feasibility considerations of the optimization problem and a candidate solution. The section concludes with a summary of the algorithm. The theoretical properties are summarized in Section IV, where a performance bound and a stability result are derived. A discussion on constraint tightening concludes the section and demonstrates the advantages of the approach. The computation of the offline constraint tightening is discussed in Section V, followed by numerical examples that underline the advantages of the proposed scheme. Finally, Section VI provides some conclusions and directions for future work.

Preliminary results have been presented in [24]. Building on these results, methods to bound the probability that a suitable candidate solution remains feasible are introduced, and the implications on the system-theoretic properties, stability and performance, are analyzed thoroughly. A discussion on how to deal with joint chance constraints is presented and the numerical example has been updated to support the theory. Related results for systems with parametric uncertainty have been presented in [25], where constraint tightening via offline uncertainty sampling is addressed.

Notation

The notation employed is standard. Uppercase letters are used for matrices and lowercase for vectors. $[A]_j$ and $[a]_j$ denote the $j$-th row and $j$-th entry of the matrix $A$ and vector $a$, respectively. Positive (semi)definite matrices $A$ are denoted $A \succ 0$ ($A \succeq 0$). The set $\mathbb{N}$ denotes the positive integers and $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$; similarly $\mathbb{R}$, $\mathbb{R}_+$. The notation $\Pr\{X \mid y\}$ denotes the conditional probability of an event $X$ given the realization $y$ of a random variable; similarly $\mathbb{E}\{X \mid y\}$ denotes the conditional expectation. We use $x_k$ for the (measured) state at time $k$ and $x_{l|k}$ for the state predicted $l$ steps ahead at time $k$. The sequence of cardinality $N$ of vectors $v_{0|k}, \ldots, v_{N-1|k}$ will be denoted by $\mathbf{v}_k$. $\oplus$ and $\ominus$ denote the Minkowski sum and the Pontryagin set difference, respectively. To simplify the notation, we use the convention $\sum_{i=a}^{b} (\cdot) = 0$ for $b < a$.

II Problem Setup

In this section, we first describe the system to be controlled and introduce the basic Stochastic Model Predictive Control algorithm.

II-A System Dynamics, Constraints, and Objective

Consider the following linear, time-invariant system with state $x_k \in \mathbb{R}^n$, control input $u_k \in \mathbb{R}^m$ and additive disturbance $w_k \in \mathbb{R}^n$

$$x_{k+1} = A x_k + B u_k + w_k. \qquad (1)$$

The disturbance sequence $(w_k)_{k \in \mathbb{N}_0}$ is assumed to be a realization of a stochastic process satisfying the following assumption.

Assumption 1 (Bounded Random Disturbance).

The disturbances $w_k$ for $k \in \mathbb{N}_0$ are independent and identically distributed, zero-mean random variables with distribution $Q_w$ and support $\mathbb{W}$. The set $\mathbb{W}$ is bounded and convex.
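To make the setting concrete, the following minimal sketch generates disturbances consistent with Assumption 1; the uniform distribution on a box is our illustrative choice, not one made in the paper.

```python
import numpy as np

def sample_disturbance(rng, n, w_max, size):
    """Draw i.i.d. zero-mean disturbances with bounded, convex support.

    Here W = [-w_max, w_max]^n is a box (bounded and convex) and the
    uniform distribution on it has zero mean, as required by
    Assumption 1; any other distribution with these properties works.
    """
    return rng.uniform(-w_max, w_max, size=(size, n))

rng = np.random.default_rng(0)
w_samples = sample_disturbance(rng, n=2, w_max=0.1, size=1000)
```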

The system is subject to probabilistic constraints on the state and hard constraints on the input

$$\Pr\left\{ [H]_j x_{k+l} \le [h]_j \mid x_k \right\} \ge 1 - \varepsilon, \quad j = 1, \ldots, p, \;\; l \in \mathbb{N}, \qquad (2a)$$

$$u_k \in \mathbb{U} := \{ u \in \mathbb{R}^m : G u \le g \}, \quad k \in \mathbb{N}_0, \qquad (2b)$$

with $H \in \mathbb{R}^{p \times n}$, $h \in \mathbb{R}^p$, $G \in \mathbb{R}^{q \times m}$, $g \in \mathbb{R}^q$, $\varepsilon \in (0,1)$, and the assumption that $u_k$ is a measurable function in $x_k$. Equation (2a) restricts to $\varepsilon$ the probability of violating the linear state constraint at the future time $k+l$, given the realization of the current state $x_k$. In the following, the notation $\Pr_k\{\cdot\} := \Pr\{\cdot \mid x_k\}$, denoting the conditional probability of an event given the realization of $x_k$, will be used.

The control objective is to (approximately) minimize $J_\infty(x_0)$, the expected value of an infinite horizon (average) quadratic cost

$$J_\infty(x_0) = \limsup_{T \to \infty} \frac{1}{T} \sum_{k=0}^{T-1} \mathbb{E}\left\{ \|x_k\|_Q^2 + \|u_k\|_R^2 \right\} \qquad (3)$$

with $\|x\|_Q^2 := x^\top Q x$, $Q \in \mathbb{R}^{n \times n}$, $Q \succeq 0$, $R \in \mathbb{R}^{m \times m}$, $R \succ 0$.

II-B Receding Horizon Optimization

To solve the control problem, a Stochastic Model Predictive Control algorithm is considered. The approach consists of repeatedly solving an optimal control problem with finite horizon $N$, but implementing only the first control action.

As is common in linear Robust and Stochastic MPC, e.g. [18], the state of the system, predicted $l$ steps ahead from time $k$,

$$x_{l|k} = z_{l|k} + e_{l|k},$$

is split into a deterministic, nominal part $z_{l|k}$ and a zero-mean stochastic error part $e_{l|k}$. Let $K$ be a stabilizing feedback gain such that $A_{cl} := A + BK$ is Schur. A prestabilizing error feedback is employed, which leads to the predicted input

$$u_{l|k} = K e_{l|k} + v_{l|k} \qquad (4)$$

with $v_{l|k}$ being the free SMPC optimization variables. Hence, the dynamics of the nominal system and error are given by

$$z_{l+1|k} = A z_{l|k} + B v_{l|k}, \qquad z_{0|k} = x_k, \qquad (5a)$$

$$e_{l+1|k} = A_{cl} e_{l|k} + w_{k+l}, \qquad e_{0|k} = 0, \qquad (5b)$$

where the $e_{l|k}$ are zero-mean random variables and the $z_{l|k}$, $v_{l|k}$ are deterministic.
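The decomposition can be implemented directly; the following sketch (with matrix names A, B, K chosen here for illustration) propagates the nominal trajectory via (5a) and the error covariance induced by (5b).

```python
import numpy as np

def predict_nominal(A, B, z0, v_seq):
    """Propagate the nominal dynamics (5a): z_{l+1|k} = A z_{l|k} + B v_{l|k}."""
    z = [np.asarray(z0)]
    for v in v_seq:
        z.append(A @ z[-1] + B @ v)
    return np.array(z)

def error_covariances(A, B, K, Sigma_w, N):
    """Covariances of the error dynamics (5b): e_{l+1|k} = A_cl e_{l|k} + w.

    Since e_{0|k} = 0 and the disturbances are i.i.d. with covariance
    Sigma_w, Cov(e_{l|k}) follows S_{l+1} = A_cl S_l A_cl^T + Sigma_w.
    """
    A_cl = A + B @ K
    S = [np.zeros_like(Sigma_w)]
    for _ in range(N):
        S.append(A_cl @ S[-1] @ A_cl.T + Sigma_w)
    return S
```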

The finite horizon cost to be minimized at time $k$ is defined as

$$J_N(x_k, \mathbf{v}_k) = \mathbb{E}\left\{ \sum_{l=0}^{N-1} \left( \|x_{l|k}\|_Q^2 + \|u_{l|k}\|_R^2 \right) + \|x_{N|k}\|_P^2 \;\middle|\; x_k \right\} \qquad (6)$$

where $P$ is the solution to the discrete-time Lyapunov equation $A_{cl}^\top P A_{cl} + Q + K^\top R K = P$. The expected value can be computed explicitly, which gives a quadratic, finite horizon cost function in the deterministic variables $z_{l|k}$ and $v_{l|k}$

$$J_N(x_k, \mathbf{v}_k) = \sum_{l=0}^{N-1} \left( \|z_{l|k}\|_Q^2 + \|v_{l|k}\|_R^2 \right) + \|z_{N|k}\|_P^2 + c \qquad (7)$$

where $c$ is a constant term which can be neglected in the optimization.
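Since the terminal weight and the deterministic part of (7) are all that is needed online, a possible implementation is sketched below (our naming; SciPy's discrete Lyapunov solver is used for P).

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def terminal_weight(A, B, K, Q, R):
    """Solve A_cl^T P A_cl + Q + K^T R K = P for the terminal cost matrix P."""
    A_cl = A + B @ K
    # solve_discrete_lyapunov solves X = M X M^T + Q_, so pass M = A_cl^T
    return solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)

def nominal_cost(z_seq, v_seq, Q, R, P):
    """Evaluate the deterministic part of (7); the constant c is dropped."""
    stage = sum(z @ Q @ z + v @ R @ v for z, v in zip(z_seq[:-1], v_seq))
    return stage + z_seq[-1] @ P @ z_seq[-1]
```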

The prototype finite horizon optimal control problem to be solved online is given in the following definition, where the constraint sets $\mathbb{Z}_l$ and $\mathbb{V}_l$ are derived from the chance constraints (2) and some suitable terminal constraint $\mathbb{Z}_f$ as described in the next section.

Definition 1 (Finite Horizon Optimal Control Problem).

Given the system dynamics (5), cost (7) and nominal constraint sets $\mathbb{Z}_l$, $\mathbb{V}_l$ and $\mathbb{Z}_f$, the SMPC finite horizon optimization problem is

$$\min_{\mathbf{v}_k} \; J_N(x_k, \mathbf{v}_k) \qquad (8a)$$

$$\text{s.t.} \quad z_{l+1|k} = A z_{l|k} + B v_{l|k}, \quad z_{l|k} \in \mathbb{Z}_l, \quad v_{l|k} \in \mathbb{V}_l, \quad l = 0, \ldots, N-1, \quad z_{N|k} \in \mathbb{Z}_f, \quad z_{0|k} = x_k. \qquad (8b)$$

The minimizer of (8), which depends on the state $x_k$, is denoted $\mathbf{v}_k^* = (v_{0|k}^*, \ldots, v_{N-1|k}^*)$ and the SMPC control law is $u_k = \kappa(x_k) := v_{0|k}^*$. The set of feasible decision variables for a given state $x_k$ is defined as $\mathcal{D}(x_k) = \{ \mathbf{v}_k : \text{(8b) holds} \}$.

In order to have a well-defined control law, it is necessary to ensure that, if initially feasible, the optimal control problem remains feasible at future sampling times, a property known as recursive feasibility.

Definition 2 (Recursive Feasibility).

The finite horizon optimal control problem (8) is recursively feasible for system (1) under the SMPC control law $\kappa$ if

$$\mathcal{D}(x_k) \neq \emptyset \implies \mathcal{D}(x_{k+1}) \neq \emptyset, \qquad x_{k+1} = A x_k + B \kappa(x_k) + w_k,$$

for every realization $w_k \in \mathbb{W}$.

The main goal is to suitably design the cost and the constraint sets $\mathbb{Z}_l$, $\mathbb{V}_l$ and $\mathbb{Z}_f$ of the finite horizon optimal control problem (8), such that in closed loop the constraints (2) are satisfied, recursive feasibility is ensured, and the system is stabilized.

III Constraint Tightening and Stochastic MPC Algorithm

This section addresses the Stochastic MPC synthesis. First, deterministic, nonconservative constraint sets $\mathbb{Z}_l$ and $\mathbb{V}_l$ for the nominal system are derived, such that the constraints (2) for system (1) hold in closed loop under the SMPC control law. These constraint sets are further modified to provide stochastic stability guarantees and recursive feasibility under all admissible disturbance sequences. We discuss the difference between existence of an a priori unknown feasible solution and feasibility of an a priori known candidate solution, which is unique to Stochastic MPC and plays a crucial role in proving stability. A second constraint tightening is presented, in which the probability of a given candidate solution being infeasible is a design parameter. The section concludes with the resulting SMPC algorithm.

III-A Constraint Tightening

Given the evolution of the disturbance (5b), similar to [26, 18], we directly compute tightened constraints offline. However, we neither aim at the computation of recursively feasible probabilistic tubes nor at robust constraint tightening for the input.

State Constraints

The probabilistic state constraints (2a) can non-conservatively be rewritten in terms of convex, linear constraint sets on the predicted nominal state $z_{l|k}$, as stated in the following proposition.

Proposition 1.

The system (1) under the prestabilizing feedback (4) satisfies the chance constraints (2a) for all $l = 1, \ldots, N$ and $k \in \mathbb{N}_0$ if and only if the nominal system (5a) satisfies the constraints $z_{l|k} \in \mathbb{Z}_l$ with

$$\mathbb{Z}_l = \left\{ z \in \mathbb{R}^n : H z \le h - \eta_l \right\} \qquad (9)$$

where $[\eta_l]_j$ is given by

$$[\eta_l]_j = \min_{\eta \in \mathbb{R}} \; \eta \quad \text{s.t.} \quad \Pr\left\{ [H]_j e_l \le \eta \right\} \ge 1 - \varepsilon, \qquad (10)$$

with $e_l$ the solution to the error dynamics (5b).
Proof.

The constraint (2a) can be rewritten in terms of $z_{l|k}$ and $e_{l|k}$ as

$$\Pr\left\{ [H]_j \left( z_{l|k} + e_{l|k} \right) \le [h]_j \right\} \ge 1 - \varepsilon \qquad (11)$$

with $e_{l|k}$ being the solution to (5b). Equation (11) is equal to: there exists $\eta \in \mathbb{R}$ s.t. $[H]_j z_{l|k} \le [h]_j - \eta$ and $\Pr\{ [H]_j e_{l|k} \le \eta \} \ge 1 - \varepsilon$. This is equal to $[H]_j z_{l|k} \le [h]_j - [\eta_l]_j$, with $[\eta_l]_j$ the smallest $\eta$ s.t. $\Pr\{ [H]_j e_{l|k} \le \eta \} \ge 1 - \varepsilon$. The minimum value exists as (10) can equivalently be written as

$$[\eta_l]_j = \min_{\eta} \; \eta \quad \text{s.t.} \quad F_{l,j}(\eta) \ge 1 - \varepsilon.$$

By Assumption 1 on the disturbance, the cumulative distribution function $F_{l,j}$ of the random variable $[H]_j e_{l|k}$ exists and is right-continuous. Using right-continuity, the constraint can be written as $[H]_j z_{l|k} \le [h]_j - [\eta_l]_j$, which concludes the proof. ∎

Proposition 1 leads to independent, one-dimensional, linear chance constrained optimization problems (10) that can be solved offline. Computational issues will be addressed in Section V-A; in the following, the program (10) is assumed to be solved. Note that the random variable $[H]_j e_{l|k}$ depends neither on the realization of the state at time $k$ nor on the optimization variables $\mathbf{v}_k$.
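Because each program (10) asks for the smallest $\eta$ that exceeds the $(1-\varepsilon)$-quantile of the scalar random variable $[H]_j e_l$, a simple Monte-Carlo approximation can serve as a stand-in for the exact offline solution. The sketch below uses our naming; rigorous sample-size choices are the subject of Section V-A.

```python
import numpy as np

def tightening_offline(A_cl, H, eps, N, sample_w, n_samples=10_000, rng=None):
    """Approximate (10): for each step l and each row j of H, estimate the
    (1 - eps)-quantile of [H]_j e_l, where e_{l+1} = A_cl e_l + w_l, e_0 = 0."""
    rng = rng if rng is not None else np.random.default_rng()
    e = np.zeros((n_samples, A_cl.shape[0]))
    eta = [np.zeros(H.shape[0])]                    # no tightening at l = 0
    for _ in range(N):
        e = e @ A_cl.T + sample_w(rng, n_samples)   # propagate error samples
        eta.append(np.quantile(e @ H.T, 1 - eps, axis=0))
    return eta   # eta[l][j] tightens row j of the state constraint at step l
```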

Input Constraints

To decrease conservativeness, instead of a robust constraint tightening for the hard constraints on the input , we propose a stochastic constraint tightening in the predictions, which are restricted to optimal feed-forward instead of feedback control. In other words, we take advantage of the probabilistic nature of the disturbance and require that the (suboptimal) combination of SMPC feed-forward input sequence and static error feedback remains feasible for most, but not necessarily for all possible disturbance sequences. This is in line with the fact that at each sampling time the optimal input is recomputed and adapted to the actual disturbance realization, ensuring that the hard constraints (2b) are satisfied.

Let $\varepsilon_u \in (0,1)$ be a probabilistic level. Similarly to the state constraint tightening, we replace the original constraint (2b) with $v_{l|k} \in \mathbb{V}_l$, where

$$\mathbb{V}_l = \left\{ v \in \mathbb{R}^m : G v \le g - \mu_l \right\} \qquad (12)$$

and $[\mu_l]_j$ is given by the solutions to the one-dimensional, linear chance constrained optimization problems

$$[\mu_l]_j = \min_{\mu \in \mathbb{R}} \; \mu \quad \text{s.t.} \quad \Pr\left\{ [G]_j K e_l \le \mu \right\} \ge 1 - \varepsilon_u. \qquad (13)$$

We remark that in closed loop the hard input constraints (2b) will be satisfied, as $e_{0|k} = 0$ and hence $u_k = v_{0|k}^* \in \mathbb{V}_0 = \mathbb{U}$.

Terminal Constraint

We first construct a recursively feasible admissible set under a local control law and then employ a suitable tightening to determine the terminal constraint for the nominal system.

Proposition 2 (Terminal Constraint).

For the system (1) with input $u_k = K x_k$, let $\mathbb{X}_f$ be a (maximal) robust positively invariant polytope¹ inside the set

$$\left\{ x \in \mathbb{R}^n : H A_{cl} x \le h - \eta_1, \;\; G K x \le g \right\}$$

with $\eta_1$ according to (10). For any initial condition in $\mathbb{X}_f$, the constraints (2) are satisfied in closed-loop operation with the control law $u_k = K x_k$ for all $k \in \mathbb{N}_0$.

¹ For an in-depth theoretical discussion, practical computation and polytopic approximations of robust positively invariant sets, see [27] for an overview or [28] for details.

Proof.

By definition, the set $\mathbb{X}_f$ is forward invariant for all disturbances $w_k \in \mathbb{W}$ and constraint (2b) holds for all $x \in \mathbb{X}_f$ since $G K x \le g$. Furthermore,

$$[H]_j A_{cl} x \le [h]_j - [\eta_1]_j, \quad j = 1, \ldots, p,$$

is satisfied for all states $x \in \mathbb{X}_f$, which is sufficient for (2a). ∎

To define the terminal constraint for the nominal system, a constraint tightening approach similar to (10) is needed. Let $\varepsilon_f \in (0,1)$ be a probabilistic level and let $\mathbb{X}_f = \{x : H_f x \le h_f\}$; we define the terminal region

$$\mathbb{Z}_f = \left\{ z \in \mathbb{R}^n : H_f z \le h_f - \eta_f \right\} \qquad (14)$$

with

$$[\eta_f]_j = \min_{\eta \in \mathbb{R}} \; \eta \quad \text{s.t.} \quad \Pr\left\{ [H_f]_j e_N \le \eta \right\} \ge 1 - \varepsilon_f. \qquad (15)$$

III-B Recursive Feasibility

As has been pointed out in previous works, e.g. [22, 18], the probability of constraint violation $l+1$ steps ahead at time $k$ is not the same as the probability of constraint violation $l$ steps ahead at time $k+1$, given the realization of the state $x_{k+1}$; in particular,

$$\Pr\left\{ [H]_j x_{l+1|k} \le [h]_j \mid x_k \right\} \neq \Pr\left\{ [H]_j x_{l|k+1} \le [h]_j \mid x_{k+1} \right\}.$$

Hence, the tightened constraint sets (9), (12) and (14) do not guarantee recursive feasibility.

A commonly used approach to recover recursive feasibility and prove stability is to use a mixed worst-case/stochastic prediction for constraint tightening. In [18, 19] the constraint (9) is replaced by a tightening in which the first $l-1$ disturbance steps are treated robustly and only the last step stochastically, i.e., $[\eta_l]_j$ in (10) is replaced by

$$[\tilde\eta_l]_j = \max_{w_0, \ldots, w_{l-2} \in \mathbb{W}} \; [H]_j \sum_{i=1}^{l-1} A_{cl}^{i} w_{l-1-i} \; + \; [\eta_1]_j.$$

In [19] the authors point out that this approach is rather restrictive and leads to higher average costs if the optimal solution is “near” a chance constraint. Alternatively, if only recursive feasibility is of interest, the authors propose to use a constraint only on the first input, to obtain a recursively feasible optimization program which is shown to be least restrictive.

In the following, we propose a hybrid strategy: we impose a first step constraint to guarantee recursive feasibility, and use the previously introduced stochastic tube tightening with terminal constraint and cost to prove stability. At the cost of further offline reachability and controllability set computations, the proposed approach has the advantage of being less conservative than recursively feasible stochastic tubes, while still being guaranteed to stabilize the system at the minimal robust positively invariant region.

Let

$$\mathcal{C}_N = \left\{ (x, v_0) : \exists\, v_1, \ldots, v_{N-1} \text{ s.t. } \mathbf{v} = (v_0, \ldots, v_{N-1}) \text{ satisfies (8b) with } z_{0} = x \right\}$$

be the $N$-step set of feasible states and first inputs for the nominal system (5a) under the tightened constraints $\mathbb{Z}_l$, $\mathbb{V}_l$ and $\mathbb{Z}_f$. The set $\mathcal{C}_N$ can be computed via projection or backward recursion [29]; it defines the feasible states and first inputs of the finite horizon optimal control problem.

Since the projection of $\mathcal{C}_N$ onto the first $n$ coordinates is not necessarily robust positively invariant with respect to the disturbance set $\mathbb{W}$, it is important to further compute a (maximal) robust control invariant polytope $\Omega$ with the constraint $(x, v_0) \in \mathcal{C}_N$. Let $\Omega_0 = \{ x : \exists v_0 \text{ s.t. } (x, v_0) \in \mathcal{C}_N \}$ and

$$\Omega_{i+1} = \left\{ x \in \Omega_i : \exists v_0 \text{ s.t. } (x, v_0) \in \mathcal{C}_N, \;\; A x + B v_0 \oplus \mathbb{W} \subseteq \Omega_i \right\};$$

the set $\Omega$ is defined through $\Omega = \lim_{i \to \infty} \Omega_i$. The basis of a standard algorithm to compute $\Omega$ is given by recursively computing $\Omega_{i+1}$ until $\Omega_{i+1} = \Omega_i$ for some $i \in \mathbb{N}$, which implies $\Omega = \Omega_i$. The basic idea and analysis of the sequence $(\Omega_i)_{i \in \mathbb{N}_0}$ have been presented in [30], [31, Section 5.3].
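The fixed-point iteration can be sketched as follows; the polytope operations project, pre_set, intersect and eq are hypothetical placeholders standing in for a computational-geometry toolbox such as those referenced in Remark 1, not a concrete API.

```python
def robust_control_invariant(C_N, W, project, pre_set, intersect, eq, max_iter=100):
    """Fixed-point iteration Omega_{i+1} = Omega_i ∩ Pre(Omega_i), terminating
    when Omega_{i+1} = Omega_i; the result is robust control invariant under
    the constraint that (x, v_0) lies in the N-step set C_N."""
    omega = project(C_N)   # Omega_0: states admitting a feasible first input
    for _ in range(max_iter):
        # Pre(Omega): states x with some admissible v_0 such that
        # A x + B v_0 + w stays in Omega for all w in W
        omega_next = intersect(omega, pre_set(C_N, omega, W))
        if eq(omega_next, omega):
            return omega_next          # fixed point reached: Omega = Omega_i
        omega = omega_next
    raise RuntimeError("no fixed point within max_iter iterations")
```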

Remark 1.

The computation of the sets $\mathcal{C}_N$ and $\Omega$ is a long-standing problem in (linear) controller design, which has gained renewed attention in the context of Robust MPC. Efficient algorithms to exactly calculate or approximate those sets exist, e.g. [28, 31]. Matlab implementations of those algorithms are available as part of a toolbox, e.g. [32, 33].

III-C Recursive Feasibility of the Candidate Solution

Given a feasible input trajectory at time $k$, a candidate solution for time $k+1$ is given by a “shifted solution”, as is common in Robust and Stochastic MPC [26, 18].

Definition 3 (Candidate Solution).

Given a solution $\mathbf{v}_k^* = (v_{0|k}^*, \ldots, v_{N-1|k}^*)$ of (8) at time $k$, the candidate solution $\tilde{\mathbf{v}}_{k+1} = (\tilde v_{0|k+1}, \ldots, \tilde v_{N-1|k+1})$ to the SMPC optimization (8) at time $k+1$ for $x_{k+1} = A x_k + B v_{0|k}^* + w_k$ is defined by

$$\tilde v_{l|k+1} = v_{l+1|k}^* + K A_{cl}^{l} w_k, \quad l = 0, \ldots, N-2, \qquad \tilde v_{N-1|k+1} = K \tilde z_{N-1|k+1}, \qquad (16)$$

where $\tilde z_{l|k+1} = z_{l+1|k} + A_{cl}^{l} w_k$ denotes the shifted nominal trajectory.
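A sketch of the shift in Definition 3, under our reconstruction of (16) (K and A_cl as before; z_shifted denotes the shifted nominal trajectory):

```python
import numpy as np

def candidate_solution(v_opt, w, K, A_cl, z_shifted):
    """Shift the optimal input sequence one step and correct for the realized
    disturbance w via the error feedback, cf. our reconstruction of (16)."""
    N = len(v_opt)
    v_tilde = []
    Aw = np.asarray(w)
    for l in range(N - 1):
        v_tilde.append(v_opt[l + 1] + K @ Aw)   # v*_{l+1|k} + K A_cl^l w_k
        Aw = A_cl @ Aw
    v_tilde.append(K @ z_shifted[N - 1])        # terminal controller, last step
    return v_tilde
```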

To prove asymptotic stability, not only the existence of a feasible solution at each time $k$ is of interest, but also the feasibility of an explicitly given candidate solution at time $k+1$. In this subsection, based on Section III-A, a refined constraint tightening is defined, which makes it possible to explicitly bound the probability of the candidate solution being infeasible at the next time step.

Let $\gamma \in [0,1]$ and let $\mathcal{W} \subseteq \mathbb{W}$ be a convex confidence region for $w_k$, i.e. $\Pr\{ w_k \in \mathcal{W} \} \ge 1 - \gamma$. For $l = 1, \ldots, N-1$, define

$$\mathbb{Z}_l^\gamma = \left\{ z \in \mathbb{R}^n : H z \le h - \eta_l^\gamma \right\}$$

with

$$[\eta_l^\gamma]_j = \max\left\{ [\eta_l]_j, \;\; [\eta_{l-1}]_j + \max_{w \in \mathcal{W}} [H]_j A_{cl}^{l-1} w \right\},$$

so that $z_{l|k} \in \mathbb{Z}_l^\gamma$ enforces the original tightened constraint and, in addition, guarantees $\tilde z_{l-1|k+1} = z_{l|k} + A_{cl}^{l-1} w_k \in \mathbb{Z}_{l-1}$ for all $w_k \in \mathcal{W}$. Similarly, define $\mathbb{V}_l^\gamma$ by replacing $H$, $h$, $\eta$ with $G$, $g$, $\mu$, respectively, with correction term $\max_{w \in \mathcal{W}} [G]_j K A_{cl}^{l-1} w$. For the terminal constraint let

$$\mathbb{Z}_f^\gamma = \left\{ z \in \mathbb{Z}_f : \forall w \in \mathcal{W}: \; z + A_{cl}^{N-1} w \in \mathbb{Z}_{N-1}, \;\; K\left(z + A_{cl}^{N-1} w\right) \in \mathbb{V}_{N-1}, \;\; A_{cl}\left(z + A_{cl}^{N-1} w\right) \in \mathbb{Z}_f \right\}$$

to define the tightened terminal set.

Tightened constraints, where the probability of infeasibility of the candidate solution can be specified a priori, are obtained by replacing $\mathbb{Z}_l$, $\mathbb{V}_l$ and $\mathbb{Z}_f$ with $\mathbb{Z}_l^\gamma$, $\mathbb{V}_l^\gamma$ and $\mathbb{Z}_f^\gamma$.

Proposition 3 (Recursive Feasibility of the Candidate Solution).

Let the state, input and terminal constraints in (8) be given by

$$\mathbb{Z}_l = \mathbb{Z}_l^\gamma, \quad \mathbb{V}_l = \mathbb{V}_l^\gamma, \quad \mathbb{Z}_f = \mathbb{Z}_f^\gamma \qquad (17)$$

with $\gamma \in [0,1]$.

If there exists $\mathbf{v}_k \in \mathcal{D}(x_k)$, then, with probability no smaller than $1 - \gamma$, $\tilde{\mathbf{v}}_{k+1} \in \mathcal{D}(x_{k+1})$, with $x_{k+1} = A x_k + B v_{0|k} + w_k$ and $\tilde{\mathbf{v}}_{k+1}$ according to (16).

Proof.

With probability $1 - \gamma$ it holds that $w_k \in \mathcal{W}$; hence it suffices to prove the claim for $w_k \in \mathcal{W}$.

Assume $w_k \in \mathcal{W}$. Recursive feasibility of the terminal constraint follows from the definition of $\mathbb{Z}_f^\gamma$, which guarantees $\tilde z_{N-1|k+1} \in \mathbb{Z}_{N-1}$, $\tilde v_{N-1|k+1} \in \mathbb{V}_{N-1}$ and $\tilde z_{N|k+1} \in \mathbb{Z}_f$ for all $w_k \in \mathcal{W}$. Furthermore, $\tilde z_{0|k+1} = x_{k+1}$ is implied by the definition of the candidate solution.

Constraint satisfaction for the state constraints follows inductively from

$$[H]_j \tilde z_{l-1|k+1} = [H]_j \left( z_{l|k} + A_{cl}^{l-1} w_k \right) \le [h]_j - [\eta_l^\gamma]_j + \max_{w \in \mathcal{W}} [H]_j A_{cl}^{l-1} w \le [h]_j - [\eta_{l-1}]_j$$

for all $w_k \in \mathcal{W}$, $j = 1, \ldots, p$, and $l = 1, \ldots, N-1$. Similarly for the input, replacing $H$ and $\eta$ by $G$ and $\mu$. ∎

While the constraints introduced in Section III-A only allow for an analysis of the probability of infeasibility of the candidate solution, the maximal probability $\gamma$ is a design parameter when the constraints (17) are employed. This alternative constraint tightening essentially closes the gap between the “recursively feasible probabilistic tubes” of [18], which are recovered with $\gamma = 0$, and the “least restrictive” scheme presented in [19], where only existence of a solution is considered. The impact of $\gamma$ on the convergence and provable average closed-loop cost will be highlighted in the next section. The influence on the size of the feasible region is demonstrated in the example in Section V.

III-D Resulting Stochastic MPC Algorithm

The final Stochastic MPC algorithm can be divided into two parts: (i) an offline computation of the involved sets and (ii) the repeated online optimization.

Offline: Determine the tightened constraint sets $\mathbb{Z}_l$, $\mathbb{V}_l$ and $\mathbb{Z}_f$ according to either (10), (13), and (15), or (17). Determine the first step constraint set $\Omega$ according to Section III-B.

Online: For each time step

  1. Measure the current state $x_k$,

  2. Solve the linearly constrained quadratic program (8) with the additional first step constraint, i.e.

$$\min_{\mathbf{v}_k} \; J_N(x_k, \mathbf{v}_k) \qquad (18a)$$

$$\text{s.t.} \quad \text{(8b)} \quad \text{and} \quad A x_k + B v_{0|k} \oplus \mathbb{W} \subseteq \Omega, \qquad (18b)$$

  3. Apply $u_k = v_{0|k}^*$.
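For illustration, one online iteration can be written as a small quadratic program, e.g. with cvxpy. The halfspace representations of the tightened sets (H, h, eta, G, g, mu, Hf, hf) and of the robustified first step set $\Omega \ominus \mathbb{W} = \{x : F x \le f\}$ are assumptions of this sketch, not quantities defined in the paper in this form.

```python
import cvxpy as cp
import numpy as np

def smpc_step(x, A, B, Q, R, P, N, H, h, eta, G, g, mu, Hf, hf, F, f):
    """One online SMPC iteration: solve (18) and return the first input."""
    n, m = B.shape
    z = cp.Variable((N + 1, n))
    v = cp.Variable((N, m))
    cost = cp.quad_form(z[N], P)
    cons = [z[0] == x]
    for l in range(N):
        cost += cp.quad_form(z[l], Q) + cp.quad_form(v[l], R)
        cons += [z[l + 1] == A @ z[l] + B @ v[l],
                 H @ z[l] <= h - eta[l],        # tightened state constraints (9)
                 G @ v[l] <= g - mu[l]]         # tightened input constraints (12)
    cons += [Hf @ z[N] <= hf,                   # terminal constraint (14)
             F @ (A @ x + B @ v[0]) <= f]       # first step constraint (18b)
    cp.Problem(cp.Minimize(cost), cons).solve()
    return v.value[0]                           # apply u_k = v*_{0|k}
```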

IV Properties of the Proposed SMPC Scheme

In this section, we formally derive the control theoretic properties of the proposed SMPC scheme, in particular the influence of $\gamma$, the bound on the probability of the candidate solution being infeasible. We first derive a bound on the asymptotic average stage cost, which highlights the connection to [18] and proves bounded variance of the state. This is followed by a proof of asymptotic stability in probability of a robust invariant set, which is novel in Stochastic MPC and shows the connection to tube-based Robust MPC approaches like [26, 3]. This asymptotic behavior has previously been claimed, but only shown in simulations, in [34]. The section concludes with a discussion on offline relaxation of chance constraints in Stochastic MPC.

IV-A Asymptotic Average Performance

Prior to a stability analysis, we prove recursive feasibility of the SMPC algorithm, which is provided by the following proposition.

Proposition 4 (Recursive Feasibility).

Let

$$\mathcal{D}_\Omega(x_k) = \left\{ \mathbf{v}_k \in \mathcal{D}(x_k) : A x_k + B v_{0|k} \oplus \mathbb{W} \subseteq \Omega \right\}$$

denote the feasible set of (18). If $\mathcal{D}_\Omega(x_0) \neq \emptyset$, then $\mathcal{D}_\Omega(x_k) \neq \emptyset$ for every realization $w_k \in \mathbb{W}$ and every $k \in \mathbb{N}$.

Proof.

From $\mathcal{D}_\Omega(x_k) \neq \emptyset$ it follows that $x_{k+1} = A x_k + B v_{0|k}^* + w_k \in \Omega$ for every $w_k \in \mathbb{W}$, and by construction of the robust control invariant set $\Omega$, $x_{k+1} \in \Omega$ implies $\mathcal{D}_\Omega(x_{k+1}) \neq \emptyset$. ∎

Due to the persistent excitation through the additive disturbance, it is clear that the system does not converge asymptotically to the origin, but “oscillates” with bounded variance around it. The following theorem summarizes the constraint satisfaction and provides a bound on the asymptotic average stage cost.

Theorem 1 (Main Properties).

If $\mathcal{D}_\Omega(x_0) \neq \emptyset$, then the closed-loop system under the proposed SMPC control law satisfies the hard and probabilistic constraints (2) for all future times and

$$\limsup_{T \to \infty} \frac{1}{T} \sum_{k=0}^{T-1} \mathbb{E}\left\{ \|x_k\|_Q^2 + \|u_k\|_R^2 \right\} \le \operatorname{tr}(P \Sigma_w) + \gamma L_V \max_{w \in \mathbb{W}} \|w\|,$$

with $\Sigma_w$ the covariance of $w_k$, $\gamma$ the maximum probability that the previously planned trajectory is not feasible, and $L_V$ the Lipschitz constant of the optimal value function of (18).

Proof.

Since, by Proposition 4, the SMPC algorithm is recursively feasible, chance constraint satisfaction follows from Proposition 1, and hard input constraint satisfaction from $e_{0|k} = 0$ and hence $u_k = v_{0|k}^* \in \mathbb{V}_0 = \mathbb{U}$.

To prove the second part, we use the optimal value of (18) as a stochastic Lyapunov function. Let $V(x_k)$ be the optimal value function of (18), which is known to be continuous, convex and piecewise quadratic in $x_k$ [35]. Hence, a Lipschitz constant $L_V$ of $V$ on the compact set $\Omega$ exists. The old input trajectory does not remain feasible with at most probability $\gamma$, in which case we can bound the increase of $V$ by $L_V \max_{w \in \mathbb{W}} \|w\|$.

Let $\mathbb{E}\{ V(x_{k+1}) \mid x_k, \mathcal{F}_{k+1} \}$ be the expected optimal value at time $k+1$, conditioning on the state at time $k$ and feasibility $\mathcal{F}_{k+1}$ of the candidate solution $\tilde{\mathbf{v}}_{k+1}$. Evaluating the cost of the candidate solution gives

$$\mathbb{E}\{ V(x_{k+1}) \mid x_k, \mathcal{F}_{k+1} \} \le V(x_k) - \|x_k\|_Q^2 - \|u_k\|_R^2 + \operatorname{tr}(P \Sigma_w),$$

where the expected value of all cross-terms between the nominal prediction and the zero-mean error vanishes because of the zero-mean and independence assumption. Furthermore, since we defined the terminal cost matrix $P$ as the solution to the discrete-time Lyapunov equation, it holds that $\|z\|_P^2 - \|A_{cl} z\|_P^2 = \|z\|_Q^2 + \|K z\|_R^2$.

Combining both cases, we obtain by the law of total expectation

$$\mathbb{E}\{ V(x_{k+1}) \mid x_k \} \le V(x_k) - \|x_k\|_Q^2 - \|u_k\|_R^2 + \operatorname{tr}(P \Sigma_w) + \gamma L_V \max_{w \in \mathbb{W}} \|w\|.$$

The final statement follows by taking iterated expectations and averaging over $k$. ∎
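To relate the bound to simulations, the left-hand side can be estimated by a Monte-Carlo run of the closed loop. In this sketch, controller and sample_w are placeholders, e.g. the smpc_step and disturbance sketches above.

```python
import numpy as np

def average_stage_cost(T, x0, A, B, Q, R, controller, sample_w, rng):
    """Monte-Carlo estimate of the asymptotic average stage cost of Theorem 1
    along one closed-loop realization: (1/T) sum_k x_k'Q x_k + u_k'R u_k."""
    x, total = np.asarray(x0), 0.0
    for _ in range(T):
        u = controller(x)                  # e.g. the smpc_step sketch above
        total += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + sample_w(rng, 1)[0]
    return total / T
```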

Remark 2.

A terminal region which is forward invariant with a specified probability can be used instead of a robust forward invariant terminal region. In this case, Theorem 1 still holds.

IV-B Asymptotic Stability

In this section, we prove, under mild assumptions, the existence of a set which is asymptotically stable in probability for the closed-loop system under the proposed Stochastic MPC algorithm. In particular, the proposed SMPC control law stabilizes the same set as the Robust MPC proposed in [26] or the Stochastic MPC proposed in [18]; the different constraint tightening leads to a possibly different transient phase. The price for a larger feasible region can be a longer convergence time before the terminal set is reached.

Definition 4 (Asymptotic Stability in Probability).

A compact set $\mathcal{A} \subset \mathbb{R}^n$ is said to be asymptotically stable in probability for system (1) with a control law $\kappa$ if, for each $\varepsilon > 0$ and $\rho > 0$, there exists $\delta > 0$ such that

$$|x_0|_{\mathcal{A}} < \delta \implies \Pr\left\{ |x_k|_{\mathcal{A}} < \varepsilon \;\; \forall k \in \mathbb{N}_0 \right\} \ge 1 - \rho,$$

and, for a neighborhood $\mathcal{R}$ of $\mathcal{A}$ and for all $x_0 \in \mathcal{R}$,

$$\lim_{k \to \infty} \Pr\left\{ |x_k|_{\mathcal{A}} \ge \varepsilon \right\} = 0,$$

where $|x|_{\mathcal{A}} = \min_{y \in \mathcal{A}} \|x - y\|$ denotes the distance of $x$ to the set $\mathcal{A}$ and $\mathcal{R}$ is called region of attraction.

To streamline the presentation, we make the following assumption on the control gain $K$, as well as two non-restrictive technical assumptions.

Assumption 2.
  • The feedback gain $K$ for the prestabilizing and terminal controller is chosen to be the unconstrained LQ-optimal solution.

  • Let $\mathcal{A}$ be the minimal robust positively invariant set for the system (1) with input $u_k = K x_k$ and let $\mathcal{B}$ be an open unit ball in $\mathbb{R}^n$. There exists $\epsilon > 0$ such that $\mathcal{A} \oplus \epsilon \mathcal{B} \subseteq \mathbb{X}_f$.

  • The set $\Omega$ is compact.

Under this assumption, the main result of this section, asymptotic stability of the minimal robust positively invariant set $\mathcal{A}$, can be formally stated.

Theorem 2 (Asymptotic Stability).

Under Assumption 2, the set $\mathcal{A}$ is asymptotically stable in probability with region of attraction $\Omega$ for the system (1) with the proposed SMPC controller.

We prove the theorem by first proving it under the assumption that the candidate solution remains feasible at each time step. Then, we prove that there exists a set where this feasibility assumption is verified and that, for every probability $\rho > 0$ and state $x_0 \in \Omega$, there exists a time $k$ such that