On Stochastic Model Predictive Control with Bounded Control Inputs

This research was partially supported by the Swiss National Science Foundation under grant 200021-122072.
This paper is concerned with Model Predictive Control and Rolling Horizon Control of discrete-time systems subject to possibly unbounded random noise inputs, while satisfying hard bounds on the control inputs. We employ a nonlinear feedback policy with respect to noise measurements and show that the resulting mathematical program has a tractable convex solution in both cases. Moreover, under the assumption that the zero-input and zero-noise system is asymptotically stable, we show that the variance of the state under the resulting Model Predictive Control and Rolling Horizon Control policies is bounded. Finally, we provide numerical examples showing how certain matrices in the underlying mathematical program can be computed off-line.
Model Predictive Control (MPC) for deterministic systems has received a considerable amount of attention over the last few decades, and significant advances have been made in terms of both theoretical analysis and industrial applications. The motivation for this research thrust comes primarily from the tractability of computing optimal control laws for constrained systems. In contrast, the counterpart of this development for stochastic systems is still in its infancy.
The deterministic setting is dominated by worst-case analysis relying on robust control methods. The central idea is to synthesize a controller based on the bounds of the noise such that a certain target set becomes invariant with respect to the closed-loop dynamics. However, such an approach usually leads to rather conservative controllers and to large infeasibility regions, and although disturbances are not likely to be unbounded in practice, assigning an a priori bound to them seems to demand considerable insight. A stochastic model of the disturbance is a natural alternative approach to this problem: the conservatism of the worst-case analysis may be circumvented, and one need not impose any a priori bounds on the maximum magnitude of the noise. However, since in practice control inputs are almost always bounded, it is of great importance to consider hard bounds on the control inputs as essential ingredients of the controller synthesis; probabilistic constraints on the controllers naturally raise difficult questions on what actions to take when such constraints are violated (see however  for one possible approach to answer these questions).
In this paper we aim to provide answers to the following questions: Given a linear system that is affected by (possibly unbounded) stochastic noise, to be controlled by applying predictive-type bounded control inputs, (i) is the associated optimization problem tractable? (ii) under what conditions is stability (in a suitable stochastic sense) of the closed-loop system guaranteed? (iii) is stability retained both in the case of MPC implementation and the case of Rolling Horizon Control (RHC) implementation?
In the deterministic setting, there exists a plethora of literature that settles tractability and stability of model-based predictive control, see, for example, [2, 3, 4, 5] and the references therein. However, there are fewer results in the stochastic case, some of which we outline next. In , the authors reformulate the stochastic programming problem as a deterministic one with bounded noise and solve a robust optimization problem over a finite horizon, followed by estimating the performance when the noise can take unbounded values, i.e., when the noise is unbounded, but takes high values with low probability (as in the Gaussian case). In [7, 8] a slightly different problem is addressed in which the noise enters in a multiplicative manner into the system, and hard constraints on the state and control input are relaxed to probabilistic ones. Similar relaxations of hard constraints to soft probabilistic ones have also appeared in  for both multiplicative and additive noise inputs, as well as in . There are also other approaches, for example those employing randomized algorithms as in [11, 12]. Finally, a related line of research can be found in , and a novel convex analysis dealing with chance and integrated chance constraints can be found in .
In this paper we restrict attention to linear time-invariant controlled systems with affine stochastic disturbance inputs. Our approach has three main features. Firstly, for the finite-horizon optimal control subproblem we adopt a feedback control strategy that is affine in certain bounded nonlinear functions of the past noise inputs. Secondly, instead of following the usual trend of adding element-wise constraints on the control input to the optimization, we propose a new approach that entails saturating the utilized noise measurements first and then optimizing over the feedback gains, ensuring that the hard constraints on the input are satisfied by construction. This novel approach does not require artificially relaxing the hard constraints on the control input to soft probabilistic ones to ensure large feasible sets, and still provides a solution to the problem for a wide class of noise input distributions. In fact, we demonstrate that our strategy (without state constraints) leads to global feasibility. The effect of the noise appears in the finite-horizon optimal control problem through certain covariance matrices, which may be computed off-line and stored. Thirdly, the measurement saturation functions are only required to be elementwise bounded in order to ensure tractability of the optimization problem while maintaining hard constraints on the control input; therefore, they may be picked from a wide class that includes standard saturation functions, sigmoidal functions, and their piecewise affine approximations, among others.
Once tractability of the underlying finite-horizon optimization problem is ensured, it is possible to implement the resulting optimal solution using an MPC approach or an RHC approach. In the former case, the optimization problem is re-solved at each step and only the first control input is applied. In the latter case, the optimization problem is re-solved every N steps (with N being the horizon length) and the entire sequence of input vectors is applied. Both of these approaches are shown to provide stability under the assumption that the zero-input and zero-noise system is asymptotically stable, which translates into the condition that the state matrix is Schur stable. At first glance, this assumption might seem restrictive. However, the problem of ensuring bounded variance of linear Gaussian systems with bounded control inputs is, to our knowledge, still open, and here we are considering the problem of controlling a linear system with bounded control input and possibly unbounded noise. It is known that for discrete-time systems without any noise acting on the system it is possible to achieve global stability with bounded inputs if and only if the state matrix is neutrally stable .
This paper unfolds as follows. In §II we state the main problem to be tackled along with the underlying assumptions. In §III we provide a tractable approach to the finite-horizon optimization problem with hard constraints on the control input, together with some examples in §III-A. Stability of the MPC and RHC implementations is shown in §IV, and remarks on the input-to-state stability properties of this result are provided in §IV-C. Finally, we provide a numerical example in §V and conclude in §VI.
Hereafter, ℕ denotes the set of natural numbers, ℕ₀ := ℕ ∪ {0}, and ℝ≥0 the set of nonnegative real numbers. We let 1_A denote the indicator function of a set A, and I_n and 0_n denote the n-dimensional identity and zero matrices, respectively. Also, E_x[·] denotes the expected value given x, and tr(·) denotes the trace of a matrix. For a given symmetric n-dimensional matrix M with real entries, σ(M) denotes the set of eigenvalues of M, λmax(M) := max σ(M), and λmin(M) := min σ(M). ‖·‖ denotes the standard Euclidean norm. Finally, the mean and covariance matrix of a random vector v are denoted by v̄ and Σ_v, respectively.
II Problem Statement
Consider the following general affine discrete-time stochastic dynamical model:
where x_t is the state, u_t is the control input, w_t is a stochastic noise input vector, A and B are known matrices, and c is a known constant vector. We assume that the initial condition x_0 is given and that, at any time t, the state x_t is observed exactly. We shall assume further that the noise vectors are i.i.d. and that the control input vector is bounded at each instant of time t, i.e.,
Any system with input constraints that can be transformed to the form (2) by an affine transformation is amenable to our approach after a corresponding substitution in (1). Note that the input-constraint set need not be a hypercube, or even contain the origin. Note also that we can assume without loss of generality that the noise in (1) is zero mean: given a system of the form (3) in which the noise is not zero mean, we can replace it by a system of the form (1) with zero-mean noise by absorbing the noise mean into the constant affine term.
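As a small numerical sanity check of this zero-mean reformulation (the matrices and noise mean below are illustrative, not taken from the paper), one can verify that absorbing the noise mean into the affine term leaves the state update unchanged:

```python
import numpy as np

# Hypothetical 2-state example: A, B, c, and the noise mean mu are
# illustrative choices, not from the paper.
rng = np.random.default_rng(0)
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
c = np.array([0.2, -0.1])
mu = np.array([0.3, 0.4])          # nonzero noise mean

x = np.array([1.0, -1.0])
u = np.array([0.5])
w = mu + rng.standard_normal(2)    # noise with mean mu

# Original system: x+ = A x + B u + c + w
x_next_orig = A @ x + B @ u + c + w

# Reformulated system with zero-mean noise w' = w - mu and shifted
# affine term c' = c + mu:  x+ = A x + B u + c' + w'
c_shift = c + mu
w_zero = w - mu
x_next_shift = A @ x + B @ u + c_shift + w_zero

assert np.allclose(x_next_orig, x_next_shift)
```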
Fix a horizon N and set t = 0. The MPC procedure can be described as follows:
(a) determine an admissible optimal feedback control policy for an N-stage cost function starting from time t, given the (measured) initial condition x_t;
(b) apply the first control input of this policy, increase t to t+1, and go back to step (a).
On the other hand, the RHC procedure simply replaces (b) above by
apply the entire sequence of N control inputs, update the state at the N-th step, increase t to t+N, and go back to step (a).
Accordingly, the k-th step of this procedure consists of minimizing the stopped N-period cost function starting at time kN; namely, the objective is to find a feedback control policy that attains
Since both the system (1) and the cost (4) are time-invariant, it is enough to consider the problem of minimizing the cost for k = 0, i.e., the problem of minimizing the N-stage cost over the first horizon.
In view of the above we consider the problem
The evolution of the system (1) over a single optimization horizon can be described in compact form as follows:
where the input
Using the compact notation above, the optimization Problem (5) can be rewritten as follows:
The solution to Problem (8) is difficult to obtain in general. In order to obtain an optimal solution to Problem (8) over the class of feedback policies, we would need to solve the Dynamic Programming equations. This generally requires some gridding technique, which makes the problem extremely difficult to solve computationally. Another approach is to restrict attention to a specific class of state feedback policies. This results in a suboptimal solution to our problem, but may yield a tractable optimization problem. This is the approach we pursue in the next section.
III Tractable Solution under Bounded Control Inputs
By the hypothesis that the state is observed without error, one may reconstruct the noise sequence from the sequence of observed states and inputs by the formula
where the feedback gains and the affine terms must be chosen based on the control objective, while observing the constraints (2). With this definition, the value of the control input at time t depends on the noise values up to time t−1. Using (9) we see that the input is a function of the observed states up to time t. It was shown in  that there exists a one-to-one (nonlinear) mapping between control policies of the form (10) and the class of affine state feedback policies. That is, provided one is interested in affine state feedback policies, the parametrization (10) constitutes no loss of generality. Of course, this choice is generally suboptimal, but it ensures the tractability of a large class of optimal control problems. In compact notation, the control sequence up to time t is given by
where , and
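The causality structure of this stacked parametrization can be sketched as follows: collecting the feedback gains in a strictly block-lower-triangular matrix guarantees that each input uses only past noise measurements. The dimensions and the tanh measurement function below are illustrative, not from the paper:

```python
import numpy as np

# Strictly block-lower-triangular gain structure (causality):
# u_t may depend only on the noise measurements w_0, ..., w_{t-1}.
m, n, N = 1, 2, 4                           # inputs, states, horizon
rng = np.random.default_rng(1)

# One m x n gain block for each pair t > i; zero blocks elsewhere.
Theta = np.zeros((N * m, N * n))
for t in range(N):
    for i in range(t):                      # i < t only: strictly causal
        Theta[t*m:(t+1)*m, i*n:(i+1)*n] = rng.standard_normal((m, n))

eta = rng.standard_normal(N * m)            # affine (open-loop) terms
phi_w = np.tanh(rng.standard_normal(N * n)) # saturated noise measurements

u = eta + Theta @ phi_w                     # stacked input sequence

# The first input cannot use any noise measurement, so u_0 = eta_0.
assert np.allclose(u[:m], eta[:m])
```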
Since the elements of the noise vector are not assumed to be bounded, there can be no guarantee that the control input (11) will meet the constraint (7). This is a problem in practical applications, and has traditionally been circumvented by assuming that the noise input lies within a compact set , and designing a worst-case controller. In this article we propose to use the controller
instead of (11), where φ applied to the stacked noise vector is shorthand for its elementwise application, the gains act row by row on these measurements, and φ is any function bounded elementwise by a given constant φmax. In other words, we have chosen to saturate the measurements that we obtain from the noise input vector before inserting them into our control vector. This way we do not assume that the noise distribution is defined over a compact domain, which is an advantage over other approaches [6, 18]. Moreover, the choice of elementwise saturation functions is left open. As such, we can accommodate standard saturation, piecewise linear, and sigmoidal functions, to name a few.
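To illustrate the latitude in choosing the elementwise measurement functions, here is a minimal sketch; the bound φmax = 1 and the particular functions are illustrative assumptions:

```python
import numpy as np

# Three admissible elementwise measurement functions, each bounded by
# phi_max = 1: standard saturation, a piecewise-linear ramp, a sigmoid.
sat = lambda x: np.clip(x, -1.0, 1.0)       # standard saturation
ramp = lambda x: np.clip(0.5 * x, -1.0, 1.0)  # piecewise linear, slope 1/2
sigm = lambda x: np.tanh(x)                 # smooth sigmoidal choice

z = np.linspace(-10.0, 10.0, 1001)
for phi in (sat, ramp, sigm):
    # Elementwise boundedness is the only requirement for tractability.
    assert np.all(np.abs(phi(z)) <= 1.0)
```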
Our choice of saturating the measurements of the noise vectors renders the optimization problem tractable, as opposed to computing the whole input vector and then saturating it afterwards, which leads to an intractable optimization problem.
Let us first consider the cost function in Problem (8). After substituting the system equations, we obtain
Note that since , we have that . Accordingly, using the definitions of , , , and ,
where is a constant that we omit as it does not change the optimization problem, and we have used the following intermediate step
Using again the assumption that , we have that
Let us now consider the constraints in Problem (8). The proposed control input (12) satisfies the hard constraints (7) as long as the following condition is satisfied: , such that . This is equivalent to the following conditions: , , such that . As these conditions should hold for any permissible value of the function , we can eliminate the dependence of the constraints on the saturation function through the following optimization problems . It is now straightforward to show, using Hölder’s inequality [19, p. 29], that , and the result follows.
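The constraint-elimination step can be checked numerically: for a single input row, the worst case of the affine-in-saturated-noise input over all measurement vectors bounded elementwise by φmax is, by Hölder's inequality, φmax times the 1-norm of the gain row plus the magnitude of the affine term. All numbers below are illustrative:

```python
import numpy as np

# Worst case of |theta_i . phi + eta_i| over all |phi|_inf <= phi_max is
# phi_max * ||theta_i||_1 + |eta_i| (Hoelder); values are illustrative.
rng = np.random.default_rng(2)
theta_i = rng.standard_normal(6)   # one row of the stacked gain matrix
eta_i = 0.3                        # affine term of that input component
phi_max = 1.0

bound = phi_max * np.linalg.norm(theta_i, 1) + abs(eta_i)

# The supremum is attained at phi = phi_max * sign(theta_i):
phi_star = phi_max * np.sign(theta_i)
attained = abs(theta_i @ phi_star + eta_i)
assert np.isclose(attained, bound)

# No feasible measurement vector can exceed the bound:
for _ in range(100):
    phi = rng.uniform(-phi_max, phi_max, size=6)
    assert abs(theta_i @ phi + eta_i) <= bound + 1e-12
```

Enforcing this bound on every input row for each admissible sign pattern is what turns the hard input constraint into finitely many convex constraints on the gains.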
An important step in the solvability of Problem (13) is being able to calculate the matrices and . In general, these matrices can be calculated off-line by numerical integration. However, in some instances these matrices can be given in terms of explicit formulas; two of these instances are given in the following examples.
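Before turning to closed-form instances, the generic off-line route is plain numerical integration. The Monte Carlo sketch below, with an assumed standard Gaussian noise and tanh measurement function (both illustrative), approximates the mean and covariance of the saturated noise:

```python
import numpy as np

# Off-line estimation (Monte Carlo, one numerical-integration option) of
# the mean and covariance of phi(w) for i.i.d. standard Gaussian w.
rng = np.random.default_rng(3)
n, samples = 2, 200_000
W = rng.standard_normal((samples, n))     # i.i.d. N(0, I) noise vectors
PhiW = np.tanh(W)                         # elementwise sigmoid measurement

mean_phi = PhiW.mean(axis=0)              # approximates E[phi(w)]
cov_phi = np.cov(PhiW, rowvar=False)      # approximates Cov[phi(w)]

# By symmetry of the Gaussian and oddness of tanh, E[phi(w)] = 0, and
# independent components give a diagonal covariance.
assert np.all(np.abs(mean_phi) < 0.01)
assert abs(cov_phi[0, 1]) < 0.01
assert 0.0 < cov_phi[0, 0] < 1.0          # variance of a bounded variable
```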
Recall the following standard special functions: the error function erf(x) = (2/√π) ∫₀ˣ e^{−s²} ds and the complementary error function erfc(x) = 1 − erf(x) [22, p. 297]; the incomplete Gamma function γ(a, x) = ∫₀ˣ s^{a−1} e^{−s} ds, defined for a, x > 0 [22, p. 260]; and the confluent hypergeometric function ₁F₁(a; b; x) = Σₙ₌₀^∞ ((a)ₙ/(b)ₙ) xⁿ/n! [22, p. 505], where (a)ₙ := Γ(a+n)/Γ(a) and Γ is the standard Gamma function.
We collect a few facts in the following proposition.
For we have
Let us consider (1) with Gaussian noise and sigmoidal bounds on the control input. More precisely, suppose that the noise process is an independent and identically distributed (i.i.d.) sequence of Gaussian random vectors of mean and covariance . Let the components of the noise be mutually independent, which implies that the covariance is a diagonal matrix. Suppose further that the matrix and that the function is a standard sigmoid, i.e., . Then from Proposition 5 we have for and ,
This shows that the matrix in Proposition 3 is equal to , where . Similarly, since
the matrix in Proposition 3 is , where . Therefore, given the system (1), the control policy (10), and the description of the noise input as above, the matrices derived above complete the set of hypotheses of Proposition 3. Problem (5) can now be solved as the quadratic program (13).
Note that we have chosen the standard sigmoidal functions in Example 6. However, the result still holds for more general sigmoidal functions of the form , where is a given magnitude and is a given slope. This slight change is reflected in the entries of the matrices and , i.e., for and ,
Therefore, in this case the matrix in Proposition 3 is with , and the matrix is with . This information completes the set of hypotheses of Proposition 3, and problem (5) can again be solved as the quadratic program (13).
IV Stability Analysis
In this section we assume that the matrix A is Schur stable, i.e., all its eigenvalues lie strictly inside the unit disc. Accordingly, and since the control is bounded, it is intuitively evident that the closed-loop system is stable in some sense. Indeed, we shall show that the variance of the state is uniformly bounded in both the MPC and the RHC case, the only difference being a choice of implementation based on available memory.
First we need the following lemma. It is a standard variant of the Foster-Lyapunov drift condition; we include a proof here for completeness. The hypotheses of this lemma are stronger than usual, but are sufficient for our purposes; see, e.g.,  for more general conditions.
Let be an -valued Markov process. Let be a continuous positive definite and radially unbounded function, integrable with respect to the probability distribution function of . Suppose that there exists a compact set and a number such that
From the conditions it follows immediately that
where . We then have
which shows that as claimed.
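The effect of the drift condition can be illustrated numerically: a Schur-stable scalar system with a bounded (saturated) input and unbounded Gaussian noise keeps its empirical second moment uniformly bounded, even when started far from the origin. All constants below are illustrative:

```python
import numpy as np

# Scalar Schur-stable system x+ = a x + u + w with |a| < 1, a saturated
# feedback |u| <= u_max, and unbounded Gaussian noise w; constants are
# illustrative, not from the paper.
rng = np.random.default_rng(4)
a, u_max, T, runs = 0.8, 1.0, 400, 2000

x = np.full(runs, 5.0)                    # start far from the origin
second_moments = []
for _ in range(T):
    u = np.clip(-x, -u_max, u_max)        # bounded feedback
    x = a * x + u + rng.standard_normal(runs)
    second_moments.append(np.mean(x**2))

# The empirical second moment settles to a bounded value, as in the lemma.
assert second_moments[-1] < 10.0
```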
IV-A MPC Case
The MPC implementation corresponding to our input (12) and optimization program (13) consists of the following steps: given a fixed optimization horizon N, set the initial time t = 0, calculate the optimal control gains using the program (13), apply the first optimal control input, increase t to t+1, and iterate. Of course, the optimal gain depends implicitly on the current initial state, which in turn gives rise to a stationary infinite-horizon optimal policy. The closed-loop system is thus given by
Since by assumption the matrix is Schur stable, there exists a positive definite and symmetric matrix with real entries, say , such that . Using the system (19), at each time instant we have
Using the fact that (from (13)), we obtain the following bound
where and . Since , we have that
For we know that
where . From (20) it now follows that whence
We see that the hypotheses of Lemma 8 are satisfied with , , and . Since , it follows that
which completes the proof.
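The Lyapunov matrix used in the proof can be computed off-line. A minimal sketch, with an illustrative Schur-stable matrix A, obtains it from the truncated series solution of the discrete Lyapunov equation:

```python
import numpy as np

# For a Schur-stable A, M = sum_{k>=0} (A^k)^T Q (A^k) solves the discrete
# Lyapunov equation A^T M A - M = -Q; truncating the (geometrically
# convergent) series gives a simple off-line computation. A and Q below
# are illustrative choices, not from the paper.
A = np.array([[0.5, 0.2], [0.0, 0.6]])
Q = np.eye(2)

M = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(200):
    M += Ak.T @ Q @ Ak
    Ak = A @ Ak

assert np.allclose(A.T @ M @ A - M, -Q)   # Lyapunov equation holds
assert np.all(np.linalg.eigvalsh(M) > 0)  # M is positive definite
```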
IV-B RHC Case
The RHC implementation is also iterative in nature; however, instead of recalculating the gains at each time instant, the optimization problem is solved every N steps, where N is the horizon length. The resulting optimal control policy (applied over a horizon N) is given by , where again the control gains depend implicitly on the initial condition of the current horizon, i.e., and . Therefore, the optimal policy is given by . For , the resulting closed-loop system over the horizon is given by
where , and and are suitably defined matrices that are extracted from and , respectively.
Using (21) and the fact that , , we have that