Shrinking Horizon Model Predictive Control with Signal Temporal Logic Constraints under Stochastic Disturbances

Samira S. Farahani, Rupak Majumdar, Vinayak Prabhu, Sadegh Esmaeil Zadeh Soudjani

The authors are with the Max Planck Institute for Software Systems, Germany ({farahani, vinayak, rupak, sadegh}@mpi-sws.org). A limited subset of the results of this paper has been accepted for presentation at the American Control Conference 2017 [11].
Abstract

We present Shrinking Horizon Model Predictive Control (SHMPC) for discrete-time linear systems with Signal Temporal Logic (STL) specification constraints under stochastic disturbances. The control objective is to maximize an optimization function under the restriction that a given STL specification is satisfied with high probability against stochastic uncertainties. We formulate a general solution, which does not require precise knowledge of the probability distributions of the (possibly dependent) stochastic disturbances; only the bounded support intervals of the density functions and the first moment intervals are used. For the specific case of disturbances that are independent and normally distributed, we optimize the controllers further by utilizing knowledge of the disturbance probability distributions. We show that in both cases, the control law can be obtained by solving optimization problems with linear constraints at each step. We experimentally demonstrate the effectiveness of this approach by synthesizing a controller for an HVAC system.

I Introduction

We consider the control synthesis problem for stochastic discrete-time linear systems under path constraints expressed as temporal logic specifications, written in signal temporal logic (STL) [23]. Our aim is to obtain a controller that robustly satisfies desired temporal properties with high probability despite stochastic disturbances, while optimizing additional control objectives. Focusing on temporal properties defined over a finite path segment, we use the model predictive control (MPC) scheme [3, 8, 20, 22] with a shrinking horizon: the horizon window is fixed and is not shifted at each time step of the controller synthesis problem. We start with an initial prediction horizon determined by the temporal logic constraints, compute the optimal control sequence for the horizon, apply the first step, observe the system evolution under the stochastic disturbance, and repeat the process (decreasing the prediction horizon by 1) until the end of the original time horizon.

Our proposed setting requires solving three technical challenges in the MPC framework.

First, in addition to optimizing control and state cost, the derived controller must ensure that the system evolution satisfies chance constraints arising from the STL specifications. Previous choices of control actions can impose temporal constraints on the rest of the path. We describe an algorithm that updates the temporal constraints based on previous actions.

Second, for some temporal constraints, we may require that the system satisfies the constraints robustly: small changes to the inputs should not invalidate the temporal constraint. To ensure robust satisfaction, we use a quantitative notion of robustness for STL [10]. We augment the control objective to maximize the expected robustness of an STL specification, in addition to minimizing control and state costs, under chance constraints. Unfortunately, the resulting optimization problem is not convex.

As a third contribution, we propose a tractable approximation method for the solution of the optimization problem. First, we conservatively approximate the chance constraints by linear inequalities. Second, we provide a tractable procedure to compute an upper bound for the expected value of the robustness function under these linear constraints.

Recently, receding horizon control with STL constraints has been studied for a variety of domains [12, 24]. In these works, the disturbance is assumed to be deterministic but drawn from a bounded polytope, and the worst-case MPC optimization problem is solved. Control synthesis for deterministic systems with probabilistic STL specifications is studied in [25], but only a fragment of STL is considered in order to obtain a convex optimization problem. Receding horizon control has also been applied to deterministic systems in the presence of perception uncertainty [17]. Additionally, chance-constrained MPC has been addressed in [26] for deterministic systems, in which the underlying probability space comes only from the measurement noise. The application of chance-constrained MPC to optimal control of drinking water networks is studied in [14].

In this paper, we assume that the disturbance signal has an arbitrary probability distribution with bounded domain and that we only know the support and the first moment interval for each component of the disturbance signal. In order to solve the optimization problem more efficiently, we transform the chance constraints into equivalent (or approximate) linear constraints. To this end, we apply the technique presented in [4] to approximate the chance constraints with an upper bound. Moreover, the expected value of the robustness function can be approximated using the moment intervals of the disturbance signal, and can be computed without resorting to numerical integration.

Furthermore, as an additional case in this study, we show that if the disturbance signal is normally distributed, and hence has unbounded support, we need not truncate the distribution to obtain a finite support interval; instead, we can use a different approach based on the quantiles of the normally distributed random variables to replace the chance constraints by linear constraints. In this case, we also show that the expected value of the robustness function can be replaced by an upper bound using the methods presented in [13].

We empirically demonstrate the effectiveness of our approach by synthesizing a controller for a Heating, Ventilation and Air Conditioning (HVAC) system. We compare our approach with open-loop optimal controller synthesis and with robust MPC [24], and show that our approach can lead to significant energy savings.

I-A Notation

We use $\mathbb{R}$ for the set of reals and $\mathbb{N} := \{0, 1, 2, \ldots\}$ for the set of non-negative integers. The set $\mathbb{B} := \{\top, \bot\}$ indicates logical true and false. For a vector $v \in \mathbb{R}^n$, its components are denoted by $v_i$, $i \in \{1, \ldots, n\}$. For a sequence $\{v(k) \in \mathbb{R}^n,\ k \in \mathbb{N}\}$ and $N \in \mathbb{N}$, we define $v_N := [v(0), v(1), \ldots, v(N-1)]$. In this paper, all random variables are denoted by capital letters and the deterministic variables are denoted by small letters. We also use the small letter $w$ to indicate an observation of a random vector $W$. For a sequence of random vectors $\{W(k),\ k \in \mathbb{N}\}$ and $k \le N$, we define $\bar{W}_N^k := [w(0), \ldots, w(k-1), W(k), \ldots, W(N-1)]$, which is a matrix containing observations of the random vectors up to time $k$ augmented with their unobserved values after $k$. For a random variable $W$, denote the support interval by $S(W)$ and the first moment(^1) by $M(W)$.

(^1) The expected value of a random variable $W$ with support $S(W)$ and cumulative distribution function $F_W(\cdot)$ is defined as $\mathbb{E}[W] = \int_{S(W)} w\, dF_W(w)$. The expectation exists if the integral is well-defined and yields a finite value.

We consider operations on intervals according to interval arithmetic: for two arbitrary intervals $[a_1, b_1]$ and $[a_2, b_2]$, and a constant $\lambda \in \mathbb{R}$, we have $[a_1, b_1] + [a_2, b_2] = [a_1 + a_2,\ b_1 + b_2]$ and $\lambda [a_1, b_1] = [\lambda a_1, \lambda b_1]$ for $\lambda \ge 0$ (and $[\lambda b_1, \lambda a_1]$ for $\lambda < 0$).
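As a small illustration (ours, not from the paper), the following Python sketch implements exactly these two interval operations; the same operations are what we later use conceptually to propagate support and moment intervals through affine maps.

```python
# Minimal interval-arithmetic helpers (illustrative sketch, not from the paper).

def interval_add(i1, i2):
    """[a1, b1] + [a2, b2] = [a1 + a2, b1 + b2]."""
    (a1, b1), (a2, b2) = i1, i2
    return (a1 + a2, b1 + b2)

def interval_scale(lam, i):
    """lam * [a, b]; the endpoints swap when lam < 0."""
    a, b = i
    return (lam * a, lam * b) if lam >= 0 else (lam * b, lam * a)

# Example: 2*[-1, 1] + (-3)*[0, 2] = [-2, 2] + [-6, 0] = [-8, 2]
assert interval_add(interval_scale(2, (-1, 1)), interval_scale(-3, (0, 2))) == (-8, 2)
```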

II Discrete-Time Stochastic Linear Systems

In this paper, we consider systems in discrete time that can be modeled by linear difference equations perturbed by stochastic disturbances. Depending on the probability distribution of the disturbance signal, we conduct our study for two cases: a) the disturbance signal has an arbitrary probability distribution with a bounded domain, for which we only know the supports and the first moment intervals; and b) the disturbance signal has a normal distribution. The first case can be extended to random variables with unbounded support, such as normal or exponential random variables, by truncating their distributions. The specific form of the distribution in the second case enables us to perform a more precise analysis using properties of the normal distribution. Note that the support of a random variable $W$ with values in $\mathbb{R}^n$ is defined as the set $\{w \in \mathbb{R}^n : \mathbb{P}(W \in B_\epsilon(w)) > 0 \text{ for all } \epsilon > 0\}$, where $B_\epsilon(w)$ denotes the ball with center $w$ and radius $\epsilon$; alternatively, the support can be defined as the smallest closed set $C$ such that $\mathbb{P}(W \in C) = 1$.

Consider a (time-variant) discrete-time stochastic system modeled by the difference equation

$x(k+1) = A(k)\, x(k) + B(k)\, u(k) + F(k)\, W(k),$   (1)

where $x(k) \in \mathbb{R}^n$ denotes the state of the system at time instant $k \in \mathbb{N}$, $u(k) \in \mathbb{R}^m$ denotes the control input at time instant $k$, and $W(k)$ is a vector of random variables, the components of which have either of the above mentioned probability distributions. The random vector $W(k)$ can be interpreted as the process noise or an adversarial disturbance. Matrices $A(k)$, $B(k)$ and $F(k)$ are possibly time-dependent, appropriately defined system matrices, and the initial state $x(0)$ is assumed to be known. We assume that $W(k)$ are mutually independent random vectors for all time instants $k \in \mathbb{N}$. Note that, for any $\ell$, the state-space model (1) provides the following explicit form for the state $X(k)$, $k > \ell$, as a function of $x(\ell)$, the inputs $u(\ell), \ldots, u(k-1)$, and the process noise $W(\ell), \ldots, W(k-1)$:

$X(k) = \Phi(k, \ell)\, x(\ell) + \sum_{j=\ell}^{k-1} \Phi(k, j+1)\, \big[B(j)\, u(j) + F(j)\, W(j)\big],$   (2)

where $\Phi(k, \ell)$ is the state transition matrix of model (1) defined as

$\Phi(k, \ell) := A(k-1)\, A(k-2) \cdots A(\ell), \qquad \Phi(k, k) := I,$

with $I$ being the identity matrix.
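The sketch below (ours; the helper names and container shapes are assumptions made for illustration) evaluates the transition matrix and the explicit form (2) numerically.

```python
import numpy as np

# Illustrative sketch (assumed shapes/names, not from the paper): evaluate the
# state transition matrix and the explicit form (2) of the state.

def transition_matrix(A_seq, k, ell):
    """Phi(k, ell) = A(k-1) A(k-2) ... A(ell), with Phi(k, k) = I."""
    n = A_seq[0].shape[0]
    Phi = np.eye(n)
    for j in range(ell, k):
        Phi = A_seq[j] @ Phi
    return Phi

def state_explicit(A_seq, B_seq, F_seq, x_ell, u_seq, w_seq, k, ell):
    """Explicit form (2): x(k) from x(ell), the inputs and the disturbances."""
    x = transition_matrix(A_seq, k, ell) @ x_ell
    for j in range(ell, k):
        x = x + transition_matrix(A_seq, k, j + 1) @ (B_seq[j] @ u_seq[j] + F_seq[j] @ w_seq[j])
    return x
```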

For a fixed positive integer $N$, let $u_N := [u(0), u(1), \ldots, u(N-1)]$ (the matrix $W_N$ is defined similarly). A finite stochastic run of system (1) for the time interval $[0, N]$ is defined as $X_N := [x(0), X(1), \ldots, X(N)]$, which is a finite sequence of states satisfying (2). Since each state $X(k)$ depends on $x(0)$, $u_N$ and $W_N$, we can rewrite $X_N$ in the more elaborate notation $X_N(x(0), u_N, W_N)$. Analogously, we define an infinite stochastic run as an infinite sequence of states. Stochastic runs will be used in Section III to define the system's specifications.

III Signal Temporal Logic

An infinite run of system (1) can be considered as a signal $\xi = x(0)\, x(1)\, x(2) \ldots$, i.e., a sequence of observed states. We consider Signal Temporal Logic (STL) formulas with bounded-time temporal operators, defined recursively according to the grammar [23]

$\varphi ::= \top \mid \pi \mid \neg\varphi \mid \varphi_1 \wedge \varphi_2 \mid \varphi_1\, \mathcal{U}_{[a,b]}\, \varphi_2,$

where $\top$ is the true predicate; $\pi$ is a predicate whose truth value is determined by the sign of a function, i.e., $\pi = \{\alpha(x) \ge 0\}$ with $\alpha$ being an affine function of the state variables; $\varphi$, $\varphi_1$ and $\varphi_2$ are STL formulas; $\neg$ and $\wedge$ indicate negation and conjunction of formulas; and $\mathcal{U}_{[a,b]}$ is the until operator with $a, b \in \mathbb{N}$, $a \le b$. A run $\xi$ satisfies $\varphi$ at time $k$, denoted by $(\xi, k) \models \varphi$, if the sequence $x(k)\, x(k+1) \ldots$ satisfies $\varphi$. Accordingly, $\xi$ satisfies $\varphi$, denoted $\xi \models \varphi$, if $(\xi, 0) \models \varphi$.

Semantics of STL formulas are defined as follows. Every run satisfies $\top$. The run $\xi$ satisfies $\neg\varphi$ at time $k$ if it does not satisfy $\varphi$ at time $k$; it satisfies $\varphi_1 \wedge \varphi_2$ at time $k$ if both $\varphi_1$ and $\varphi_2$ hold at time $k$. For a run $\xi$ and a predicate $\pi = \{\alpha(x) \ge 0\}$, we have $(\xi, k) \models \pi$ if $\alpha(x(k)) \ge 0$. Finally, $(\xi, k) \models \varphi_1\, \mathcal{U}_{[a,b]}\, \varphi_2$ if $\varphi_1$ holds at every time step starting from time $k$ before $\varphi_2$ holds, and additionally $\varphi_2$ holds at some time instant between $k+a$ and $k+b$. Additionally, we derive the other standard operators as follows: disjunction $\varphi_1 \vee \varphi_2 := \neg(\neg\varphi_1 \wedge \neg\varphi_2)$, the eventually operator $F_{[a,b]}\varphi := \top\, \mathcal{U}_{[a,b]}\, \varphi$, and the always operator $G_{[a,b]}\varphi := \neg F_{[a,b]} \neg\varphi$.

Thus $(\xi, k) \models F_{[a,b]}\varphi$ if $\varphi$ holds at some time instant between $k+a$ and $k+b$, and $(\xi, k) \models G_{[a,b]}\varphi$ if $\varphi$ holds at every time instant between $k+a$ and $k+b$.

Formula Horizon. The horizon of an STL formula $\varphi$ is the smallest $n \in \mathbb{N}$ such that the following holds for all signals $\xi = x(0)\, x(1) \ldots$ and $\xi' = x'(0)\, x'(1) \ldots$:

if $x(k) = x'(k)$ for all $k \in \{0, 1, \ldots, n\}$, then $(\xi, 0) \models \varphi$ if and only if $(\xi', 0) \models \varphi$.

Thus, in order to determine whether a signal satisfies an STL formula $\varphi$, we can restrict our attention to the signal prefix of length equal to the horizon of $\varphi$. This horizon can be upper-approximated by a bound, denoted by $\text{len}(\varphi)$, defined to be the maximum over the sums of all nested upper bounds on the temporal operators. Formally, $\text{len}(\varphi)$ is defined recursively as

$\text{len}(\top) = \text{len}(\pi) = 0, \quad \text{len}(\neg\varphi) = \text{len}(\varphi), \quad \text{len}(\varphi_1 \wedge \varphi_2) = \max\{\text{len}(\varphi_1), \text{len}(\varphi_2)\}, \quad \text{len}(\varphi_1\, \mathcal{U}_{[a,b]}\, \varphi_2) = b + \max\{\text{len}(\varphi_1), \text{len}(\varphi_2)\},$

where $\varphi$, $\varphi_1$ and $\varphi_2$ are STL formulas. For example, for $\varphi = G_{[2,4]} F_{[3,5]}\, \pi$, we have $\text{len}(\varphi) = 4 + 5 = 9$. For a given STL formula $\varphi$, it is possible to verify that $\xi \models \varphi$ using only the finite run $x(0), x(1), \ldots, x(N)$, where $N$ is equal to $\text{len}(\varphi)$.
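For concreteness, a minimal recursive implementation of $\text{len}(\cdot)$ might look as follows (our sketch; the tuple-based formula encoding is an assumption, not the paper's).

```python
# Illustrative sketch (assumed AST encoding, not from the paper): computing the
# horizon bound len(phi) recursively over the formula structure.

def stl_len(phi):
    """phi is a nested tuple: ('true',), ('pred',), ('not', p), ('and', p, q),
    ('until', a, b, p, q); F and G are derived from 'until'."""
    op = phi[0]
    if op in ('true', 'pred'):
        return 0
    if op == 'not':
        return stl_len(phi[1])
    if op == 'and':
        return max(stl_len(phi[1]), stl_len(phi[2]))
    if op == 'until':
        _, a, b, p, q = phi
        return b + max(stl_len(p), stl_len(q))
    raise ValueError(f"unknown operator {op}")

# G_[2,4] F_[3,5] pi  ==  not F_[2,4] not (true U_[3,5] pi); len = 4 + 5 = 9
G_F = ('not', ('until', 2, 4, ('true',),
               ('not', ('until', 3, 5, ('true',), ('pred',)))))
assert stl_len(G_F) == 9
```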

STL Robustness. In contrast to the above Boolean semantics, the quantitative semantics of STL [18] assigns to each formula $\varphi$ a real-valued function $\rho^\varphi$ of the signal $\xi$ and time $k$ such that $\rho^\varphi(\xi, k) > 0$ implies $(\xi, k) \models \varphi$. The robustness of a formula $\varphi$ with respect to a run $\xi$ at time $k$ is defined recursively as

$\rho^{\top}(\xi, k) = +\infty,$
$\rho^{\pi}(\xi, k) = \alpha(x(k)) \quad \text{for } \pi = \{\alpha(x) \ge 0\},$
$\rho^{\neg\varphi}(\xi, k) = -\rho^{\varphi}(\xi, k),$
$\rho^{\varphi_1 \wedge \varphi_2}(\xi, k) = \min\big\{\rho^{\varphi_1}(\xi, k),\ \rho^{\varphi_2}(\xi, k)\big\},$
$\rho^{\varphi_1 \mathcal{U}_{[a,b]} \varphi_2}(\xi, k) = \max_{k' \in [k+a,\, k+b]} \min\Big\{\rho^{\varphi_2}(\xi, k'),\ \min_{k'' \in [k,\, k')} \rho^{\varphi_1}(\xi, k'')\Big\},$

where $x(k)$ refers to the signal $\xi$ at time $k$. The robustness of the derived formula $F_{[a,b]}\varphi$ can be worked out to be $\rho^{F_{[a,b]}\varphi}(\xi, k) = \max_{k' \in [k+a,\, k+b]} \rho^{\varphi}(\xi, k')$, and similarly that of $G_{[a,b]}\varphi$ as $\rho^{G_{[a,b]}\varphi}(\xi, k) = \min_{k' \in [k+a,\, k+b]} \rho^{\varphi}(\xi, k')$. The robustness of an arbitrary STL formula is computed recursively on the structure of the formula according to the above definition, by propagating the values of the functions associated with each operand using min and max operators.
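This recursion can be implemented directly on finite signals; the sketch below (ours, with an assumed formula encoding restricted to predicates, Boolean operators, and the derived operators $F$ and $G$) computes $\rho^\varphi(\xi, k)$.

```python
import numpy as np

# Illustrative sketch (assumed encoding, not from the paper): recursive
# computation of the STL robustness rho^phi(xi, k) over a finite signal.
# Predicates are pairs (alpha, beta) meaning alpha @ x + beta >= 0.

def robustness(phi, xi, k):
    op = phi[0]
    if op == 'pred':                      # rho = alpha @ x(k) + beta
        _, alpha, beta = phi
        return float(np.dot(alpha, xi[k]) + beta)
    if op == 'not':
        return -robustness(phi[1], xi, k)
    if op == 'and':
        return min(robustness(phi[1], xi, k), robustness(phi[2], xi, k))
    if op == 'always':                    # rho^G = min over the time window
        _, a, b, p = phi
        return min(robustness(p, xi, kk) for kk in range(k + a, k + b + 1))
    if op == 'eventually':                # rho^F = max over the time window
        _, a, b, p = phi
        return max(robustness(p, xi, kk) for kk in range(k + a, k + b + 1))
    raise ValueError(f"unknown operator {op}")

# Example: does xi stay above 0.5 on [0, 3]? rho = min_k (x(k) - 0.5)
xi = [np.array([1.0]), np.array([0.8]), np.array([0.6]), np.array([0.9])]
print(robustness(('always', 0, 3, ('pred', np.array([1.0]), -0.5)), xi, 0))  # about 0.1
```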

STL Robustness for Stochastic Runs. With focus on stochastic runs and using the bound $N = \text{len}(\varphi)$ of a formula $\varphi$, the finite stochastic run $X_N$ is sufficient to study probabilistic properties of $\varphi$. Analogous to the definition of robustness for deterministic runs, we can define the stochastic robustness $\rho^\varphi(X_N, k)$ of a formula $\varphi$ with respect to the run $X_N$ for times $k \le N$, with the stochastic robustness being dependent on $u_N$ and $W_N$.

Note that a general STL formula consists of several Boolean and/or temporal operators. Due to the system dynamics (1), the stochastic run $X_N$ and the robustness $\rho^\varphi(X_N, k)$ are both functions of $W_N$. Therefore, $\rho^\varphi(X_N, k)$ is a random variable, since affine operators, maximization and minimization are measurable functions.

The above definition of robustness implies that, for any formula $\varphi$ and constant $\delta \in (0,1)$, a stochastic run satisfies $\varphi$ with probability greater than or equal to $1 - \delta$ if the finite stochastic run $X_N$ with $N = \text{len}(\varphi)$ satisfies $\mathbb{P}\big(\rho^\varphi(X_N, 0) > 0\big) \ge 1 - \delta$.

IV Control Problem Statement

For system (1) with a given initial state $x(0)$, the stochastic disturbance vector $W_N$ with a given probability distribution, STL formulas $\varphi$ and $\psi$, and some constant $\delta \in (0,1)$, the control problem can be defined as finding an optimal input sequence $u_N^* = [u^*(0), \ldots, u^*(N-1)]$ that minimizes the expected value of a given objective function $J(X_N, u_N)$ subject to constraints on state and input variables, where $N := \max\{\text{len}(\varphi), \text{len}(\psi)\}$. This optimization problem for the time interval $[0, N]$ can be defined as

$\min_{u_N}\ \mathbb{E}\big[J(X_N, u_N)\big]$   (3a)
subject to $X_N = X_N(x(0), u_N, W_N)$ satisfying (2),   (3b)
$\mathbb{P}\big(X_N \models \psi\big) \ge 1 - \delta,$   (3c)
$u(k) \in \mathbb{U}$ for all $k \in \{0, 1, \ldots, N-1\},$   (3d)

where $\mathbb{E}$ denotes the expected value operator and the closed set $\mathbb{U} \subseteq \mathbb{R}^m$ specifies the constraint set for the input variables. The chance constraint (3c) states that, for a given $\delta \in (0,1)$, stochastic runs of the system should satisfy $\psi$ with a probability greater than or equal to $1 - \delta$. We consider the following objective function

$J(X_N, u_N) = -\rho^{\varphi}(X_N, 0) + \sum_{k=0}^{N-1} c(u(k)),$   (4)

where the first term is the negative value of the robustness function of the STL formula $\varphi$ at time 0, which needs to be minimized, and the second term reflects the cost on the input variables and can be defined as a linear or a quadratic function.

Note that optimization problem (3) is an open-loop optimization problem: it does not incorporate any information about the process noise or the states of the system observed during the system's evolution.

Remark 1

The above problem formulation enables us to distinguish the following two cases: we put the robustness of a formula in the objective function if the system is required to satisfy the formula robustly; we encode the formula in the probabilistic constraint if only satisfaction of the formula is important.

IV-A Model Predictive Control

To obtain a more well-behaved control input and to include information about the disturbances, instead of solving the open-loop optimization problem (3), we apply shrinking horizon model predictive control (SHMPC), which can be summarized as follows. At the first time step, we obtain a sequence of control inputs of length $N$ (the prediction horizon) that optimizes the cost function; we then apply only the first component of the obtained control sequence to update the state of the system (or to observe the state, in the case of a stochastic disturbance). At the next time step, we fix the first component of the control sequence to the first component of the previously calculated optimal control sequence, and hence we only optimize over a control sequence of length $N - 1$. As such, at each time step, the length of the control sequence to be optimized decreases by 1. Note that in this approach, unlike the receding horizon approach, we do not shift the horizon at each time step: the end point of the prediction window is fixed. MPC allows us to incorporate the new information obtained about the state variables and the disturbance signal at each time step, and hence to improve the control performance compared with that of solving the open-loop optimization problem (3).

A natural choice for the prediction horizon in this setting with STL constraints is to set it greater than or equal to the bound of the formulas, i.e., $N \ge \max\{\text{len}(\varphi), \text{len}(\psi)\}$, which was defined in the previous section. This choice provides a conservative maximum trajectory length required to make a decision about the satisfiability of the formulas.

Let $\bar{X}_N^k := [x(0), \ldots, x(k), X(k+1), \ldots, X(N)]$, where $x(0), \ldots, x(k)$ are the observed states up to time $k$ and $X(k+1), \ldots, X(N)$ are the random state variables after time $k$; also let $\bar{W}_N^k := [w(0), \ldots, w(k-1), W(k), \ldots, W(N-1)]$, such that $w(0), \ldots, w(k-1)$ are the noise realizations up to time $k$ and $W(k), \ldots, W(N-1)$ are random vectors with given probability distributions. Define $\bar{u}_N^k := [u(0), \ldots, u(k-1), u(k), \ldots, u(N-1)]$ to be the vector of input variables, such that $u(0), \ldots, u(k-1)$ are the optimal control inputs obtained and applied up to time $k$ and $u(k), \ldots, u(N-1)$ are the input variables that need to be determined at time $k$.

Given the STL formula $\psi$, observations of the state variables $x(0), \ldots, x(k)$, and the already designed control inputs $u(0), \ldots, u(k-1)$ of system (1), the stochastic SHMPC optimization problem minimizes the expected value of the cost function

$J(\bar{X}_N^k, \bar{u}_N^k) = -\rho^{\varphi}(\bar{X}_N^k, 0) + \sum_{j=k}^{N-1} c(u(j))$

at each time instant $k$, as follows:

$\min_{u(k), \ldots, u(N-1)}\ \mathbb{E}\big[J(\bar{X}_N^k, \bar{u}_N^k)\big]$   (5a)
subject to $\bar{X}_N^k$ satisfying the dynamics (2),   (5b)
$\mathbb{P}\big(\bar{X}_N^k \models \psi\big) \ge 1 - \delta_k,$   (5c)
$u(j) \in \mathbb{U}$ for all $j \in \{k, \ldots, N-1\},$   (5d)

where the expected value in (5a) is conditioned on the observations $x(j)$ and $w(j-1)$ for all $j \le k$. The optimization variables in (5) are the control inputs $u(k), \ldots, u(N-1)$. We indicate the argument of the minimum by $[u^*(k), \ldots, u^*(N-1)]$.

The complete procedure for obtaining an optimal control sequence using SHMPC is presented in Algorithm 1. Lines 3 to 8 of this algorithm specify the inputs and the parameters used in the algorithm, and line 20 specifies the output. In line 10, the SHMPC optimization procedure starts for each time step $k$. In line 11, we solve the optimization problem (5) to obtain an optimal control sequence for time instant $k$. In lines 12 to 16, we check whether the obtained solution satisfies the STL specifications or not; if it does, we assign the first component of the obtained input sequence to $u(k)$, and if not, the optimization procedure is terminated. Finally, in line 17, we apply $u(k)$ to system (1) and observe the state at time instant $k+1$.

  • procedure Chance-Constrained Stochastic SHMPC

  • input:

  • STL formulas $\varphi$ and $\psi$, and a fixed $\delta \in (0,1)$

  • parameters:

  • probability distributions of the process noise $W(0), W(1), \ldots, W(N-1)$

  • Initial state $x(0)$

  • $\delta_k := \delta / N$ for $k \in \{0, 1, \ldots, N-1\}$

  • closed-loop optimization problem:

  • for $k = 0$; $k \le N-1$; $k \leftarrow k+1$ do

  • Compute $[u^*(k), \ldots, u^*(N-1)]$ by solving the optimization problem (5)

  • if  the solution of optimization problem (5) exists then

  • Let $u(k) := u^*(k)$;

  • else

  • Return Infeasible Solution and terminate the optimization procedure;

  • end if

  • Apply $u(k)$ to the system and observe the value of $X(k+1)$ as $x(k+1)$

  • end for

  • output: optimal control sequence $u_N^* := [u(0), u(1), \ldots, u(N-1)]$

  • end procedure

Algorithm 1
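A compact rendering of this loop in Python is sketched below; `solve_problem_5` is a hypothetical stand-in for the MILP solution of (5) developed in Sections V and VI, and `sample_noise` stands in for nature drawing the disturbance, so this is a simulation harness rather than the paper's implementation.

```python
# Minimal sketch of the shrinking horizon loop of Algorithm 1 (assumptions:
# `solve_problem_5` and `sample_noise` are hypothetical stand-ins, and the
# dynamics matrices follow the form of (1); none of this is the paper's code).

def shmpc(x0, N, A_seq, B_seq, F_seq, solve_problem_5, sample_noise):
    x_obs, u_applied = [x0], []
    for k in range(N):
        # Optimize only over u(k), ..., u(N-1); u(0), ..., u(k-1) stay fixed.
        u_tail = solve_problem_5(k, x_obs, u_applied)
        if u_tail is None:
            raise RuntimeError("Infeasible Solution")   # terminate as in Algorithm 1
        u_applied.append(u_tail[0])                     # keep only the first component
        w = sample_noise(k)                             # nature draws W(k)
        x_obs.append(A_seq[k] @ x_obs[k] + B_seq[k] @ u_applied[k] + F_seq[k] @ w)
    return u_applied                                    # u*(0), ..., u*(N-1)
```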

We show in the following theorem that in Algorithm 1, by using the shrinking horizon technique, the specific choice of $\delta_k = \delta/N$, and keeping track of the control inputs and observed states, the closed-loop system satisfies the STL specification $\psi$ with probability greater than or equal to $1 - \delta$.

Theorem 2

Given a constant $\delta \in (0,1)$ and an STL formula $\psi$, if the optimization problems in Algorithm 1 are all feasible, the computed optimal control sequence ensures that the closed-loop trajectories satisfy $\psi$ with probability greater than or equal to $1 - \delta$.

Proof: Considering the chance constraint (5c), the probability that a trajectory of the system violates $\psi$ at time step $k$ is at most $\delta_k = \delta/N$. By the union bound, the probability of violating $\psi$ somewhere in the time interval $\{0, 1, \ldots, N-1\}$ is at most $\sum_{k=0}^{N-1} \delta_k = \delta$, which proves that the optimal control sequence obtained using Algorithm 1 results in trajectories that satisfy $\psi$ with probability greater than or equal to $1 - \delta$. ∎

Note that, in practice, if a feasible solution is not found at some time step, we can use the previous control value, i.e., set $u(k)$ to the corresponding component of the control sequence computed at the previous time step; this gives the controller a chance to retry at the next time step after observing the next state.

Remark 3

The choice $\delta_k = \delta/N$ is completely arbitrary. In general, the positive constants $\delta_k$ can be picked freely subject to the condition that $\sum_{k=0}^{N-1} \delta_k \le \delta$.

Computing the solution of the optimization problem (5) requires addressing two main challenges: a) the objective function (5a) depends on the optimization variables $u(k), \ldots, u(N-1)$ and on the random variables $W(k), \ldots, W(N-1)$, thus we have to compute the expected value as a function of these variables; and b) the feasible set of the optimization problem, restricted by the chance constraint (5c), is in general difficult to characterize. We propose approximation methods in Sections V and VI to address these two challenges, respectively.

V Approximating the objective function

To solve the optimization problem (5), one needs to calculate the expected value of the objective function. One way to do this is via numerical integration methods [7]. However, numerical integration is in general both cumbersome and time-consuming. For example, the method of approximating the density function of the disturbance with piecewise polynomial functions defined on polyhedral sets [5, 19] suffers from scalability issues on top of the induced approximation error. Therefore, in this section, we discuss an efficient method that computes an upper bound for the expected value of the objective function, and we then minimize this upper bound instead.

We discuss the computation of such upper bounds for both the case of process noise with an arbitrary probability distribution and the case of normally distributed noise, in Sections V-A and V-B, respectively. For this purpose, we first provide a canonical form for the robustness function of an STL formula $\varphi$, which is a max-min or min-max of affine functions of random variables. This result is inspired by [9], in which the authors provide such canonical forms for max-min-plus-scaling functions.

Theorem 4

For a given STL formula $\varphi$, the robustness function $\rho^{\varphi}(X_N, 0)$, and hence the objective function $J(X_N, u_N)$, can be written in a max-min canonical form

$\max_{i = 1, \ldots, m}\ \min_{j = 1, \ldots, m_i} \big( a_{ij}^T W_N + b_{ij} \big)$   (6)

and in a min-max canonical form

$\min_{i = 1, \ldots, \tilde{m}}\ \max_{j = 1, \ldots, \tilde{m}_i} \big( \tilde{a}_{ij}^T W_N + \tilde{b}_{ij} \big)$   (7)

for some integers $m, m_i, \tilde{m}, \tilde{m}_i$, where $a_{ij}$ and $\tilde{a}_{ij}$ are weighting vectors and $b_{ij}$ and $\tilde{b}_{ij}$ are affine functions of $x(0)$ and $u_N$.

Proof: The proof is inductive on the structure of $\varphi$ and uses the explicit form of the states in (2), utilizing the identities $\max\{f_1, f_2\} = -\min\{-f_1, -f_2\}$ and

$\min\big\{f_1, \max\{f_2, f_3\}\big\} = \max\big\{\min\{f_1, f_2\},\ \min\{f_1, f_3\}\big\}$

for functions $f_1$, $f_2$ and $f_3$. ∎
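As a small worked example (ours, under the three-matrix form of (1) used above), take $\varphi = F_{[0,1]}\pi$ with $\pi = \{\alpha^T x \ge 0\}$ and $N = 1$. Expanding $X(1)$ with (2) gives a max-min form (6) whose inner minima are singletons:

$\rho^{\varphi}(X_1, 0) = \max\big\{\alpha^T x(0),\ \alpha^T\big(A(0)x(0) + B(0)u(0) + F(0)W(0)\big)\big\},$

so the weighting vectors are $a_{1} = 0$ and $a_{2} = F(0)^T \alpha$, while $b_{1} = \alpha^T x(0)$ and $b_{2} = \alpha^T\big(A(0)x(0) + B(0)u(0)\big)$ are affine in $x(0)$ and $u_N$, exactly as Theorem 4 states.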

V-A Arbitrary probability distributions with bounded support

Suppose the elements of the stochastic vector $W(k)$, i.e., $W_i(k)$, have arbitrary probability distributions with known bounded supports $S(W_i(k))$, and that the first moment of each $W_i(k)$ belongs to a known interval $M(W_i(k))$. Under this assumption, the explicit form of $X(k)$ in (2) implies that, for the observed value $x(\ell)$ of the state at time $\ell$, each component $X_i(k)$ is a random variable with the following interval of support and first moment interval:

$S(X_i(k)) = [\underline{\sigma}_i(k),\, \bar{\sigma}_i(k)], \qquad M(X_i(k)) \subseteq [\underline{\mu}_i(k),\, \bar{\mu}_i(k)],$   (8)

where the endpoints $\underline{\sigma}_i(k), \bar{\sigma}_i(k)$ and $\underline{\mu}_i(k), \bar{\mu}_i(k)$ are weighted sums of the support and moment bounds of the components of $W(\ell), \ldots, W(k-1)$, obtained by using the interval arithmetic mentioned in Section I-A.

The objective function in (5) is $\mathbb{E}\big[J(\bar{X}_N^k, \bar{u}_N^k)\big]$ with $J(\bar{X}_N^k, \bar{u}_N^k) = -\rho^{\varphi}(\bar{X}_N^k, 0) + \sum_{j=k}^{N-1} c(u(j))$, and by Theorem 4 it admits the max-min canonical form (6). Recall that $\bar{X}_N^k$ consists of the observed states of system (1) up to time $k$ and the random states afterwards. The following theorem shows how we can compute an upper bound for $\mathbb{E}\big[J(\bar{X}_N^k, \bar{u}_N^k)\big]$ based on the canonical form of $J$.

Theorem 5

For a given STL formula $\varphi$, $\mathbb{E}\big[J(\bar{X}_N^k, \bar{u}_N^k)\big]$ can be upper bounded by

$\mathbb{E}\Big[\max_{i}\min_{j}\big(a_{ij}^T \bar{W}_N^k + b_{ij}\big)\Big] \;\le\; \max_{i}\min_{j}\, \bar{\sigma}_{ij},$

where the constants $b_{ij}$, $i \in \{1, \ldots, m\}$, $j \in \{1, \ldots, m_i\}$, are affine functions of $x(0)$ and $\bar{u}_N^k$, and the constants $\bar{\sigma}_{ij}$ are weighted sums of the observed disturbances $w(0), \ldots, w(k-1)$ and of the support and moment bounds of $W(k), \ldots, W(N-1)$.

Proof: With focus on the canonical form (6), let $Z_{ij} := a_{ij}^T \bar{W}_N^k + b_{ij}$. Considering the support and moment intervals of the components of $\bar{W}_N^k$, each random variable $Z_{ij}$ has a support and moment interval of the form (similar to (8))

$S(Z_{ij}) = [\underline{\sigma}_{ij},\, \bar{\sigma}_{ij}], \qquad M(Z_{ij}) \subseteq [\underline{\mu}_{ij},\, \bar{\mu}_{ij}],$   (9)

where the constants $\underline{\sigma}_{ij}, \bar{\sigma}_{ij}, \underline{\mu}_{ij}, \bar{\mu}_{ij}$ are weighted sums of the observed disturbances and of the support and moment bounds of the unobserved components of $\bar{W}_N^k$, using interval arithmetic (cf. Section I-A). Accordingly, since $\underline{\sigma}_{ij} \le Z_{ij} \le \bar{\sigma}_{ij}$ holds surely, $\max_i \min_j Z_{ij}$ is a random variable with the following support and moment intervals:

$S\Big(\max_i \min_j Z_{ij}\Big) \subseteq \Big[\max_i \min_j \underline{\sigma}_{ij},\ \max_i \min_j \bar{\sigma}_{ij}\Big], \qquad \mathbb{E}\Big[\max_i \min_j Z_{ij}\Big] \le \max_i \min_j \bar{\sigma}_{ij}.$   (10)

Hence, as we are minimizing the cost function in (5), we can utilize this upper bound in place of $\mathbb{E}\big[J(\bar{X}_N^k, \bar{u}_N^k)\big]$. ∎

Note that the approximation methodology of Theorem 5 is also applicable to the min-max canonical form (7).
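The following sketch (ours, not the paper's code) shows how the scalar upper bound of Theorem 5 can be evaluated by propagating support bounds through a max-min canonical form with the interval operations of Section I-A.

```python
# Illustrative sketch (our construction, not from the paper): propagating
# support upper bounds through a max-min canonical form (6) to obtain the
# scalar upper bound used in Theorem 5.

def affine_upper_bound(a, supports):
    """Upper bound of sum_i a_i * W_i when each W_i lies in [lo_i, hi_i] surely."""
    return sum(ai * (hi if ai >= 0 else lo) for ai, (lo, hi) in zip(a, supports))

def maxmin_upper_bound(terms, supports):
    """terms: outer list over i of inner lists of (a_ij, b_ij) pairs,
    encoding max_i min_j (a_ij . W + b_ij)."""
    return max(min(affine_upper_bound(a, supports) + b for a, b in inner)
               for inner in terms)

# Example: max( min(W1 - W2, 0.5), W2 ) with W1, W2 in [-1, 1]
terms = [[([1, -1], 0.0), ([0, 0], 0.5)], [([0, 1], 0.0)]]
print(maxmin_upper_bound(terms, [(-1, 1), (-1, 1)]))  # max(min(2, 0.5), 1) = 1
```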

By replacing the expected objective function with its upper bound given in Theorem 5, and by replacing the probabilistic constraints with their equivalent (or conservative) linear approximations (as discussed in Section VI), the optimization problem (5) can be recast as a mixed integer linear programming (MILP) problem, which can be solved using available MILP solvers [2, 21].
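For intuition, the standard big-M encoding that turns a max of affine expressions into linear constraints with binary variables is sketched below using the PuLP modeling library (a generic illustration with an assumed bound `M`, not the paper's exact encoding).

```python
# Generic big-M encoding of r = max(e_1, ..., e_n) in a MILP (sketch with an
# assumed constant M bounding the spread of the expressions; requires PuLP).
import pulp

def encode_max(model, exprs, M, name):
    """Add variables/constraints so that r equals the max of the affine exprs."""
    r = pulp.LpVariable(f"r_{name}")
    z = [pulp.LpVariable(f"z_{name}_{i}", cat="Binary") for i in range(len(exprs))]
    for i, e in enumerate(exprs):
        model += r >= e                      # r is an upper bound on every e_i
        model += r <= e + M * (1 - z[i])     # r attains e_i for the selected i
    model += pulp.lpSum(z) == 1              # exactly one expression is active
    return r

model = pulp.LpProblem("demo", pulp.LpMinimize)
u = pulp.LpVariable("u", -1, 1)
r = encode_max(model, [2 * u + 1, -u], M=10, name="rob")
model += r                                   # objective: minimize the max
model.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(u), pulp.value(r))          # u = -1/3, r = 1/3
```

Min terms are handled symmetrically via the identity $\min\{f_1, f_2\} = -\max\{-f_1, -f_2\}$ used in the proof of Theorem 4.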

V-B Normal distribution

The upper bound on the objective function provided in the previous section does not apply to process noise with unbounded support, but knowing the distribution of the process noise provides more information about the statistics of the runs of the system. In this section, we address process noise with normal distribution separately, due to its wide use in engineering applications.

Suppose that for any $k$, $W(k)$ is normally distributed with mean $\mu_w(k)$ and covariance matrix $\Sigma_w(k)$, i.e., $W(k) \sim \mathcal{N}(\mu_w(k), \Sigma_w(k))$. The explicit form of $X(k)$ in (2) and the fact that the normal distribution is closed under affine transformations result in a normal distribution for $X(k)$, $k > \ell$. Its expected value and covariance matrix, with an observed value $x(\ell)$ of the state at time $\ell$, are

$\mathbb{E}[X(k)] = \Phi(k, \ell)\, x(\ell) + \sum_{j=\ell}^{k-1} \Phi(k, j+1)\big[B(j)\, u(j) + F(j)\, \mu_w(j)\big]$

and

$\mathrm{Cov}[X(k)] = \sum_{j=\ell}^{k-1} \Phi(k, j+1)\, F(j)\, \Sigma_w(j)\, F(j)^T\, \Phi(k, j+1)^T,$

respectively, for $k > \ell$.
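These two formulas can also be evaluated recursively along the horizon; the following sketch (ours, with assumed container shapes) propagates the mean and covariance of $X(k)$ step by step, which is equivalent to the closed forms above.

```python
import numpy as np

# Illustrative sketch (assumed shapes, not from the paper): propagating the
# mean and covariance of X(k) through the dynamics (1) for normal disturbances.

def propagate_normal(A_seq, B_seq, F_seq, x_ell, u_seq, mu_w, Sigma_w, k, ell):
    n = len(x_ell)
    mean, cov = np.array(x_ell, dtype=float), np.zeros((n, n))
    for j in range(ell, k):
        # mean(j+1) = A mean + B u + F mu_w;  cov(j+1) = A cov A' + F Sigma F'
        mean = A_seq[j] @ mean + B_seq[j] @ u_seq[j] + F_seq[j] @ mu_w[j]
        cov = A_seq[j] @ cov @ A_seq[j].T + F_seq[j] @ Sigma_w[j] @ F_seq[j].T
    return mean, cov
```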

In this section we use the canonical representation of $J(\bar{X}_N^k, \bar{u}_N^k)$ in Theorem 4, which states that $J$ (for fixed $x(0)$ and $\bar{u}_N^k$) can be written in either of the forms

$\max_{i}\min_{j}\, Z_{ij} \qquad \text{or} \qquad \min_{i}\max_{j}\, \tilde{Z}_{ij},$   (11)

with $Z_{ij}$ and $\tilde{Z}_{ij}$ being affine functions of the process noise, thus normally distributed random variables (similar to $X(k)$ explained above). With focus on these canonical representations for $J$, we employ Proposition 6 to show how to approximate $\mathbb{E}[J]$ using higher order moments of $Z_{ij}$ and $\tilde{Z}_{ij}$. This proposition, also used in [13], follows from the relation between the infinity norm and the $p$-norm of a vector, together with Jensen's inequality.

Proposition 6

Consider random variables $Z_i$, $i \in \{1, \ldots, n\}$, and let $p$ be an even positive integer. Then

$\mathbb{E}\Big[\max_{i = 1, \ldots, n} Z_i\Big] \;\le\; \Big(\sum_{i=1}^{n} \mathbb{E}\big[Z_i^{p}\big]\Big)^{1/p}.$
Building on Proposition 6, the next theorem shows how we can upper bound $\mathbb{E}[J]$ using the higher order moments of $Z_{ij}$ and $\tilde{Z}_{ij}$.

Theorem 7

Considering the canonical forms in (11) for $J$ as a function of the random variables $Z_{ij}$ and $\tilde{Z}_{ij}$, $\mathbb{E}[J]$ can be upper bounded by

$\mathbb{E}\Big[\max_{i}\min_{j}\, Z_{ij}\Big] \;\le\; \Big(\sum_{i}\min_{j}\, \mathbb{E}\big[Z_{ij}^{p}\big]\Big)^{1/p}$   (12)

$\mathbb{E}\Big[\min_{i}\max_{j}\, \tilde{Z}_{ij}\Big] \;\le\; \min_{i}\Big(\sum_{j} \mathbb{E}\big[\tilde{Z}_{ij}^{p}\big]\Big)^{1/p}$   (13)

for any even positive integer $p$.

Proof: For random variables $Z_{ij}$ and for a positive even integer $p$, the following chain of inequalities holds for every selection $j(\cdot)$ of one index $j(i)$ per $i$:

$\mathbb{E}\Big[\max_{i}\min_{j}\, Z_{ij}\Big] \;\overset{(a)}{\le}\; \mathbb{E}\Big[\max_{i}\, Z_{i, j(i)}\Big] \;\overset{(b)}{\le}\; \Big(\sum_{i} \mathbb{E}\big[Z_{i, j(i)}^{p}\big]\Big)^{1/p},$

where in (a) we used the pointwise bound $\min_j Z_{ij} \le Z_{i, j(i)}$, and in (b) we used the inequality in Proposition 6; minimizing the right-hand side over all selections $j(\cdot)$ yields (12). Moreover, for the min-max form, the following inequality holds:

$\mathbb{E}\Big[\min_{i}\max_{j}\, \tilde{Z}_{ij}\Big] \;\le\; \min_{i}\, \mathbb{E}\Big[\max_{j}\, \tilde{Z}_{ij}\Big] \;\le\; \min_{i}\Big(\sum_{j} \mathbb{E}\big[\tilde{Z}_{ij}^{p}\big]\Big)^{1/p},$

where we apply Jensen's inequality to the concave function $\min$ to get the first step, and the inequality of Proposition 6 gives the second. ∎

Note that the random variables $Z_{ij}$ and $\tilde{Z}_{ij}$ are normally distributed in both (12) and (13). Higher order moments of normally distributed random variables can be computed analytically in closed form as a function of the first two moments, i.e., the mean and the variance. More specifically, for a normally distributed random variable $Z \sim \mathcal{N}(\mu, \sigma^2)$, the $p$-th moment has the closed form

$\mathbb{E}\big[Z^{p}\big] = \sum_{j=0}^{\lfloor p/2 \rfloor} \binom{p}{2j}\, (2j-1)!!\; \sigma^{2j}\, \mu^{p-2j},$

where $(2j-1)!! = (2j-1)(2j-3)\cdots 3 \cdot 1$ denotes the double factorial, with the convention $(-1)!! = 1$.
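This identity is standard; the sketch below (ours) evaluates it and checks it against two known special cases.

```python
from math import comb

# Closed-form p-th moment of Z ~ N(mu, sigma^2), used to evaluate the bounds
# (12)-(13) without numerical integration (standard identity, our sketch).

def double_factorial(n):
    """(2j-1)!! with the convention (-1)!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def normal_moment(mu, sigma, p):
    return sum(comb(p, 2 * j) * double_factorial(2 * j - 1)
               * sigma ** (2 * j) * mu ** (p - 2 * j)
               for j in range(p // 2 + 1))

# E[Z^4] = 3 for Z ~ N(0, 1);  E[Z^2] = mu^2 + sigma^2 in general.
assert normal_moment(0.0, 1.0, 4) == 3.0
assert normal_moment(2.0, 1.0, 2) == 5.0
```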