
# Embedding Constrained Model Predictive Control in a Continuous-Time Dynamic Feedback

The authors are with the University of Michigan, Ann Arbor. Email: {mnicotr,dliaomcp,ilya}@umich.edu. This research is supported by the National Science Foundation Award Number CMMI 1562209.

## Abstract

This paper introduces a continuous-time constrained nonlinear control scheme which implements a model predictive control strategy as a continuous-time dynamic system. The approach is based on the idea that the solution of the optimal control problem can be embedded into the internal states of a dynamic control law which runs in parallel to the system. Using input-to-state stability arguments, it is shown that if the controller dynamics are sufficiently fast with respect to the plant dynamics, the interconnection between the two systems is asymptotically stable. Additionally, it is shown that, by augmenting the proposed scheme with an add-on unit known as an Explicit Reference Governor, it is possible to drastically increase the set of initial conditions that can be steered to the desired reference without violating the constraints. Numerical examples demonstrate the effectiveness of the proposed scheme.

## 1 Introduction

One of the major challenges in the control of real world systems is the presence of constraints. Indeed, achieving high performance typically requires a control law that is able to operate on the constraint boundaries. Popular continuous-time constrained control methodologies include anti-windup schemes, which are mostly used to address input saturation [1], and barrier-type methods where the control action becomes more aggressive as the system approaches the constraint boundary [3]. Nevertheless, the most widespread and systematic approach for incorporating constraints into control design is Model Predictive Control (MPC), which is typically developed as a discrete-time control scheme [6].

Traditional MPC schemes rely on solving a finite horizon discrete Optimal Control Problem (OCP) to a pre-specified level of accuracy during each sampling period. In recent years, however, “Fast MPC” approaches have become increasingly popular. These algorithms are designed to track the solution of the OCP with a bounded error rather than seeking to accurately solve the OCP at each time-step. This is achieved by making extensive use of warm-start and sensitivity-based strategies that exploit similarities between subsequent OCPs to perform a fixed number of computations [9], rather than solving the OCP to a fixed tolerance. The stability of unconstrained sub-optimal MPC was studied in [11], whereas convex control constraints were considered in [12]. An example of a “fast” algorithm is the real-time iteration (RTI) scheme [13] for nonlinear MPC. In an RTI scheme, a single quadratic program (QP) is solved at every time-step, noting that, over time, the fast contraction rate of Newton-type methods may allow convergence to the solution of the original nonlinear OCP [14]. Two more path-tracking algorithms are C/GMRES [16], which tracks the solution to discretized necessary conditions of an unconstrained continuous-time OCP, and IPA-SQP [17], which uses insights from neighboring extremal optimal control theory to define a predictor-corrector type scheme. For constrained problems, parametric generalized equations [18] have been used to provide insight to aid analysis and algorithm design. Finally, first order methods, which only rely on gradient information to solve the OCP, e.g. [20], have become increasingly popular for “fast” MPC, due to the fact that their relatively low computational cost per iteration can sometimes allow the controller to achieve improved performance by increasing the sampling frequency [23].

Drawing inspiration from “fast” MPC schemes and based on the observation that MPC can be implemented by making marginal improvements to the OCP solution at an increasingly high frequency, this paper introduces a novel continuous-time dynamic feedback controller that performs MPC without an iterative optimization solver. The idea behind the proposed controller is to embed the solution to a discrete finite horizon state and control constrained OCP into the state vector of a dynamic system that runs in parallel to the controlled system. The closed-loop behavior of the proposed controller is analyzed from a systems theory perspective and sufficient conditions under which the interconnection is asymptotically stable are derived using the small-gain theorem.

Continuous-time MPC strategies that are not based on manipulating the solution dynamics have been presented in e.g. [24]. Dynamic control laws for performing continuous-time MPC have also been proposed in the literature. Reference [28] describes an NMPC algorithm where the control action is obtained as the output of a hybrid dynamic system which ensures a non-increasing cost function. In [29], the authors present a backstepping approach for performing NMPC using output feedback. A dynamic system for solving quadratic programs is presented in [30]. Unlike existing solutions, the approach presented in this paper does not require a monotonically decreasing cost function to demonstrate closed-loop stability. Instead, it limits itself to ensuring that the interconnection between the control law and the controlled system is contractive. Furthermore, this paper considers very general convex control and state constraints.

To address the issue that a sudden change in the desired reference can drastically change the solution to the OCP, the proposed controller is also augmented with an Explicit Reference Governor (ERG). The ERG is a closed form add-on scheme that filters the applied reference in a way that ensures constraint satisfaction [31]. In the context of this paper, the ERG is tasked with maintaining the feasibility of the OCP by manipulating the reference of the primary control loop so that the terminal set is always reachable within the given prediction horizon. Similar approaches that extend the set of admissible initial conditions by using the reference as an auxiliary optimization variable can be found in [33]. The validity of the proposed control scheme, both with and without the ERG add-on, will be demonstrated in this paper with the aid of numerical experiments.

The remainder of the paper is organized as follows. Section 2 describes the class of systems considered in this paper and formulates the problem statement. Section 3 introduces an ideal continuous-time MPC feedback law that meets the control requirements under the assumption that the proposed OCP can be solved instantaneously. Section 4 then illustrates how that assumption can be dropped by embedding the optimization problem in a continuous-time dynamic system and deriving conditions under which the closed-loop system is asymptotically stable. Section 5 proposes the addition of an explicit reference governor to address the shortcomings of the embedded MPC controller. Section 6 illustrates the step-by-step implementation of the proposed methodology to the particular case of linear-quadratic constrained control problems. Finally, Section 7 showcases the good behavior of the proposed control scheme using both a simple double integrator example and a more advanced case study featuring a satellite docking scenario.

## 2 Problem Statement

Consider a continuous-time linear time-invariant system

$$\dot{x} = Ax + Bu, \qquad y = Cx,$$

where $x$ is the state vector, $u$ is the input vector, $y$ is the output vector, and $A$, $B$, $C$ are suitably dimensioned state-space matrices.

The system is subject to the following state and input constraints

$$g_x(x) \le 0, \qquad g_u(u) \le 0,$$

where $g_x$ and $g_u$ are vectors of convex functions; their feasible sets will be denoted by $\mathcal{X} = \{x : g_x(x) \le 0\}$ and $\mathcal{U} = \{u : g_u(u) \le 0\}$.

Given the constraint sets $\mathcal{X}$ and $\mathcal{U}$, and Assumption ?, it is possible to define the set of strictly steady-state admissible references as the set of output values $r$ such that the equilibrium point $(x_r, u_r)$ defined by

$$A x_r + B u_r = 0, \qquad C x_r = r$$

satisfies $g_x(x_r) < 0$ and $g_u(u_r) < 0$. This allows the formulation of the following control problem.
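The equilibrium pair associated with a given reference can be computed by solving a small linear system. The following sketch shows this in Python with NumPy; the double-integrator matrices are hypothetical illustrations, and the square solve assumes as many inputs as outputs.

```python
import numpy as np

# hypothetical double-integrator plant: x_dot = A x + B u, y = C x
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def steady_state_target(A, B, C, r):
    """Solve [A B; C 0] [x_r; u_r] = [0; r] for the equilibrium pair."""
    n, m = B.shape
    p = C.shape[0]
    M = np.block([[A, B], [C, np.zeros((p, m))]])
    rhs = np.concatenate([np.zeros(n), np.atleast_1d(r)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]

x_r, u_r = steady_state_target(A, B, C, 2.0)  # equilibrium for the reference r = 2
```

When the number of inputs differs from the number of outputs the matrix is not square and a least-squares solve (`np.linalg.lstsq`) can be used instead.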

## 3 Control Strategy

To design a continuous-time constrained control law, we draw inspiration from the discrete-time MPC framework. Given a reference , a typical MPC approach for addressing Problem ?, see e.g. [6], consists of choosing a suitable discretization step and solving the following optimal control problem online

where

is the stage cost, is the terminal cost, is a terminal constraint, and the optimization variables are . This is done under the following assumptions.

Due to Assumption ?, the OCP is a strongly convex program and therefore admits a unique primal optimum , , provided that it is feasible [36]. Since this paper will implement a primal-dual algorithm to solve , the following assumption, which guarantees uniqueness of the dual variables, is added. Recall that given a set where and are continuously differentiable, the linear independence constraint qualification (LICQ) is said to hold at a point if

where is the index set of constraints active at [37].

As proven in [8], Assumptions ? and ? ensure that the discrete-time approximation of system subject to the control law

is recursively feasible and admits as an exponentially stable equilibrium point. In typical MPC schemes, the control law is implemented using a zero order hold strategy. As a result, rigorous proofs of stability and constraint satisfaction would require techniques from sampled data systems, see e.g. [9]. However, as shown in the following proposition, implementing as a continuous-time signal greatly simplifies the stability proof.

Interestingly enough, it will be shown in Section 6 that, if the cost functions and are quadratic, the results stated in Proposition ? and Corollary ? hold globally rather than semi-globally.

The main drawback of the continuous-time MPC approach proposed above is that it assumes that can be computed instantaneously. Considering the fact that this requires the solution of an optimization problem (or that complex computations of a pre-stored solution are involved), this assumption may be unrealistic in practice. Moreover, given , the OCP admits a solution only if , meaning that it must be possible to steer system into the terminal set within the prediction horizon . Depending on the application, however, this requirement may be too restrictive.

In what follows, Section 4 illustrates one method by which the first issue can be overcome by embedding the solution to the OCP into the internal states of a dynamic control law. This will be done under the assumption that the system is subject to a generic constant reference . Section 5 will then illustrate how this auxiliary reference can be steered to the desired reference in a way that ensures recursive feasibility and significantly extends the set of admissible initial conditions. The proposed control scheme is depicted in Figure 1.

## 4 Primary Control Loop

The objective of this section is to illustrate how, given a suitable constant reference , it is possible to embed the solution to the optimal control problem into the internal states of a dynamic control law. In particular, given the vector of primal optimization variables , with , for , and , the optimal control problem can be expressed in compact form as

where , with , is a convex function, is a vector of size , is a full-rank matrix, and , is a vector of convex functions which collects the inequality constraints.
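As an illustration of how the discretized dynamics enter the compact form, the equality constraints can be stacked into a single full-rank matrix equation. The sketch below is an assumption about one common variable ordering, z = (x_1, ..., x_N, u_0, ..., u_{N-1}), with illustrative matrices, not necessarily the exact convention of the paper.

```python
import numpy as np

def build_equality_constraints(Ad, Bd, x0, N):
    """Stack the dynamics x_{k+1} = Ad x_k + Bd u_k, k = 0..N-1,
    into G z = g with z = (x_1, ..., x_N, u_0, ..., u_{N-1})."""
    n, m = Bd.shape
    G = np.zeros((N * n, N * (n + m)))
    g = np.zeros(N * n)
    for k in range(N):
        rows = slice(k * n, (k + 1) * n)
        G[rows, k * n:(k + 1) * n] = np.eye(n)            # +x_{k+1}
        if k > 0:
            G[rows, (k - 1) * n:k * n] = -Ad              # -Ad x_k
        G[rows, N * n + k * m:N * n + (k + 1) * m] = -Bd  # -Bd u_k
    g[:n] = Ad @ x0                                       # x_1 = Ad x_0 + Bd u_0
    return G, g

Ad = np.array([[1.0, 0.1], [0.0, 1.0]])
Bd = np.array([[0.005], [0.1]])
G, g = build_equality_constraints(Ad, Bd, np.array([1.0, 0.0]), N=3)
```

Each block row enforces one step of the prediction model, so any z satisfying G z = g corresponds to a dynamically consistent trajectory starting from x0.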

The Lagrangian for the problem ( ?) has the following form,

where and are vectors of Lagrangian multipliers, and is shorthand for the primal-dual tuple. The solution to must satisfy the necessary and sufficient Karush-Kuhn-Tucker (KKT) conditions.

where is the normal cone mapping defined as

A possible way to solve the generalized equation is, along the lines of the work presented in [38], to use primal-dual gradient flow:

where is a tunable scalar that controls the rate of change and is the projection operator onto the normal cone of defined as

The primal-dual projected gradient flow , coupled with the output equation , can be reinterpreted as a dynamic control law in the form

This is a nonlinear state space system where the internal states are , , and and the output is . Since the internal states asymptotically tend to the solution of , the intuition behind the proposed scheme is that the control action issued by will mimic the behavior of a standard MPC.
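To make the primal-dual projected gradient flow concrete, the sketch below integrates it with forward Euler on a toy one-dimensional QP (minimize 0.5 z^2 subject to z >= 1, whose optimum is z* = 1 with multiplier lam* = 1). The problem data and the component-wise projection rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pd_flow_step(z, lam, Q, c, H, b, eta, dt):
    """One forward-Euler step of the primal-dual projected gradient flow
    for min 0.5 z'Qz + c'z subject to Hz <= b (equality constraints omitted)."""
    grad_L = Q @ z + c + H.T @ lam     # gradient of the Lagrangian in z
    slack = H @ z - b                  # constraint values h(z)
    # projected dual rate: a multiplier at zero may only increase
    dlam = np.where(lam > 0.0, slack, np.maximum(slack, 0.0))
    z_next = z - dt * eta * grad_L
    lam_next = np.maximum(lam + dt * eta * dlam, 0.0)  # guard against Euler overshoot
    return z_next, lam_next

# toy problem: min 0.5 z^2 subject to z >= 1, i.e. -z <= -1; optimum z* = 1, lam* = 1
Q = np.array([[1.0]]); c = np.array([0.0])
H = np.array([[-1.0]]); b = np.array([-1.0])
z, lam = np.array([0.0]), np.array([0.0])
for _ in range(20000):
    z, lam = pd_flow_step(z, lam, Q, c, H, b, eta=1.0, dt=1e-3)
```

The scalar eta plays the role of the tunable rate of change: increasing it (or, equivalently, integrating the flow for longer between plant updates) tightens the tracking of the optimizer.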

The following subsections will establish the convergence properties of the proposed feedback control scheme using a two-step approach: First, the stability of the dynamic control law will be proven under the assumption that remains constant. Then, the stability of the closed-loop system will be proven by showing that the interconnection between system and the dynamic control law is contractive.

### 4.1 Stability of the Dynamic Controller

The following proposition concerns the asymptotic convergence of the dynamic control law to a point that satisfies the KKT conditions .

Clearly, the main limitation of Proposition ? is that it unrealistically assumes that , i.e. the state of system , does not evolve over time. By taking advantage of the properties of exponentially stable equilibrium points, however, the following corollary states that given a bounded , the dynamic control law will track the solution of with a bounded error. Moreover, the tracking error can be tuned by modifying the rate of change in equation .

Corollary ? bounds the asymptotic tracking error between the trajectory of the dynamic control law and the solution of the optimal control problem for a generic signal . The following subsection specializes this result by taking into account the fact that is the state of system subject to the control law .

### 4.2 Stability of the Interconnection

The objective of this subsection is to show that, if the controller dynamics are sufficiently fast with respect to the plant dynamics, the closed-loop system asymptotically tends to .

Theorem ? basically states that the dynamic control law will successfully stabilize the system as long as:

1. The discretization step used for the OCP is suitably small with respect to the time constants of system ;

2. The internal dynamics of the control law are sufficiently fast with respect to the characteristic times of the controlled system;

3. The state belongs to a suitable set of initial conditions such that the solution to the optimal control problem exists.

The first two requirements pertain to the actual design of the control law and can be satisfied by a correct tuning of the discretization step and of the rate of change. The third requirement poses a restriction which may or may not be an issue depending on the application: it states that the only admissible initial states are the ones that can reach the terminal set within a finite horizon and without violating the constraints. In many applications, this can be considered too restrictive since the set of initial conditions that could eventually be steered to the desired equilibrium without violating the constraints is arguably much larger. In addition, Theorem ? also has the drawback of addressing the asymptotic behavior of the closed-loop system without taking into account the transient dynamics. This can be problematic in terms of constraint satisfaction since there is no guarantee that the tracking error between the dynamic control law and the solution of the optimal control problem will not cause a violation of the constraints.

In spite of these limitations, the primary control loop successfully mimics the behavior of a typical MPC strategy by embedding the solution to the optimal control problem into the internal states of the dynamic control system . The following section illustrates how the shortcomings of the primary control loop can be overcome by augmenting it with an add-on component.

## 5 Auxiliary Control Loop

The objective of this section is to illustrate how, given a constant desired reference , it is possible to manipulate the dynamics of the auxiliary reference so that the requirements of the primary control loop are always met. This will be done in two steps: The first step will be to recursively ensure that the solution of the OCP exists, under the ideal assumption that the control input is . The second step will consist of dropping this assumption by showing that the error between the internal states of the dynamic control system and the solution of the OCP can be maintained within an arbitrarily small bound.

### 5.1 Recursive Feasibility

To ensure that the optimal control problem remains feasible at all times, it is possible to take advantage of the fact that, due to Assumption ?, the terminal control law and the terminal constraint set are such that implies and . Since the terminal constraint set depends on the auxiliary reference , it is possible to enforce recursive feasibility by manipulating so that . This can be done using an add-on scheme known as the Explicit Reference Governor (ERG). For the general theory of the ERG, the reader is referred to [31]. In this paper, the ERG is used to generate the signal based on the auxiliary system

where is a Lipschitz continuous function such that

and is a piece-wise continuous function such that the system satisfies

By implementing the ERG strategy to manipulate the dynamics of the applied reference, the following can be proven.

The main interest in Proposition ? is that it greatly extends the set of initial conditions that can be steered to the desired reference without violating constraints. Indeed, classical MPC formulations impose the restriction . With the aid of the ERG, it is instead possible to relax this requirement to , where

which is arguably much larger than .

The main limitation with Proposition ? is that it assumes that the solution of the OCP is available and can be used to compute . The following subsection justifies this assumption by showing that it is possible to use the ERG to ensure that the error between the available state and the actual value of can be made arbitrarily small.

### 5.2 Bounded Tracking Error

The objective of this subsection is to address the presence of a transient error between the internal states of the dynamic control law and the solution of the optimal control problem . Indeed, although Theorem ? guarantees asymptotic convergence, the transient discrepancy is nevertheless problematic because it can lead to a violation of constraints. As detailed in the following proposition, however, the ERG can be used to limit the transient error between the internal states of the dynamic control law and the solution of the optimal control problem .

The main interest in Proposition ? is that it ensures that the error between the actual solution to the OCP and the approximate solution embedded in the dynamic control law can be tuned to satisfy a certain tolerance margin. As a result, given a such that the dynamics of the primary control loop are reasonably fast, and given a suitable bound on , the proposed control scheme will enforce constraint satisfaction within an arbitrarily small tolerance margin.

Based on these considerations, the ERG strategy presented in the previous subsection should be modified to

where is a Lipschitz continuous function such that

and is a piece-wise continuous function such that and the system satisfies

with given by . Given a dynamically embedded MPC augmented with an explicit reference governor, the following result is achieved.

The following section will focus on the specific, but highly relevant, case of linear systems subject to linear constraints and quadratic cost functions.

## 6 Linear-Quadratic Control Problems

The objective of this section is to provide a step-by-step control design strategy that is applicable whenever the constraint sets are convex polytopes

and the stage cost is quadratic

where , , and are suitably sized matrices such that , , and the pair is detectable. Given the polytopic constraints , it is convenient to define the set of strictly steady-state admissible references as

where each represents a static safety margin between the steady-state solution and the -th constraint.

### 6.1 Terminal Conditions

Given the quadratic stage cost , it is possible to formulate a suitable optimal control problem by solving the algebraic Riccati equation

to obtain the terminal control gain

and the associated terminal cost
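For a discretized plant, the terminal gain and cost can be obtained from the discrete-time algebraic Riccati equation. A minimal sketch using SciPy follows; the matrices are illustrative placeholders, not the paper's example values.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# discretized double-integrator-style plant (illustrative matrices)
Ad = np.array([[1.0, 0.1], [0.0, 1.0]])
Bd = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(Ad, Bd, Q, R)                   # terminal cost matrix
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)  # terminal gain, u = -K x
Acl = Ad - Bd @ K                                      # terminal closed loop
```

The closed-loop matrix Acl is Schur stable by construction, which is what makes the terminal pair a valid ingredient for the OCP.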

To compute the terminal constraint set, it is worth noting that, given the terminal control law , any quadratic function

with satisfying , is a Lyapunov function for the closed-loop system with the terminal controller. By taking advantage of set invariance properties, see e.g. [40], it has been proven in [41] that any state constraint in the form

can be mapped into a constraint on the Lyapunov function , where the threshold

corresponds to the largest Lyapunov level-set that does not violate the constraint . As also proven in [41], the size of this set can be maximized by assigning the matrix on the basis of the following linear matrix inequality

which can be solved offline for each constraint. Clearly, the state constraints are already in the form . By taking into account the terminal control law, the set of input constraints can also be written in the form by defining

Therefore, the terminal set constraint can be defined as

where is the total number of constraints.
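Consistent with the description above, for a linear constraint c'x <= h and the Lyapunov function V(x) = (x - x_r)' P (x - x_r), a standard support-function argument gives the largest admissible level set as Gamma = (h - c'x_r)^2 / (c' P^{-1} c). A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def level_set_threshold(P, c, h, x_r):
    """Largest Gamma such that every x with (x - x_r)' P (x - x_r) <= Gamma
    also satisfies c' x <= h (support-function argument; sketch)."""
    margin = h - c @ x_r
    if margin < 0:
        raise ValueError("the equilibrium x_r violates the constraint")
    return margin ** 2 / (c @ np.linalg.solve(P, c))
```

One threshold is computed per constraint, and the terminal set is the intersection of the corresponding level sets.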

### 6.2 Primary Control Loop

Having defined all the elements in the optimal control problem , the dynamic control law follows directly from . In particular, it follows from that, given linear constraints and a quadratic cost, is a linear function that can be computed using

Note that, by virtue of Remark , the terminal constraint can be neglected in the MPC formulation since the ERG will enforce it.

### 6.3 Auxiliary Control Loop

The final design step consists of constructing suitable components for the ERG in equation . In particular, a simple way to satisfy requirements is

with . As for the requirements , it follows from the convexity of the set that it is possible to employ an attraction/repulsion strategy

where

is an attraction term that points towards the desired reference , and

is a repulsion term that points away from the constraint boundary. As discussed in [42], is an arbitrarily small radius which ensures that gradually goes to zero in . The scalars are the static safety margins used to define the set , whereas the scalars are influence margins that ensure that the contribution of the -th constraint is non-zero if and only if . Finally, is any positive definite matrix that can be used to modify the direction from which converges to . A typical choice is the identity matrix. However, following from the intuition that each matrix is aligned as much as possible to the -th constraint [41], a possible choice is , where and, due to Assumption ?, is positive definite.
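A minimal sketch of the attraction/repulsion navigation field described above is given below. The distance computation and margin handling are simplified assumptions (in particular, the repulsion weight is not saturated at one here), and all names are illustrative.

```python
import numpy as np

def navigation_field(v, r, constraints, delta, zeta, eps):
    """Attraction/repulsion field for the auxiliary reference v (sketch).
    constraints: list of (c, h) pairs describing rows c'v <= h of a polytope.
    delta: static safety margin, zeta: influence margin, eps: smoothing radius."""
    diff = r - v
    rho = diff / max(np.linalg.norm(diff), eps)            # attraction toward r
    for c, h in constraints:
        dist = (h - c @ v) / np.linalg.norm(c)             # distance to the boundary
        weight = max((zeta - dist) / (zeta - delta), 0.0)  # active inside the margin
        rho = rho - weight * c / np.linalg.norm(c)         # repulsion off the boundary
    return rho

rho = navigation_field(np.array([0.0, 0.0]), np.array([0.5, 0.0]),
                       [(np.array([1.0, 0.0]), 1.0)], delta=0.05, zeta=0.2, eps=1e-3)
```

Far from every constraint the field is the unit vector toward the desired reference; inside an influence margin the repulsion progressively cancels the component pushing toward the boundary.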

## 7 Numerical Case Studies

The objective of this section is to validate and characterize the behavior of the proposed control strategy. To provide a clear and intuitive understanding, the first example will focus on the constrained control of a standard double integrator. The second example will then showcase the implementation of the dynamically embedded MPC on a more complex system.

### 7.1 Double Integrator

Consider a double integrator described by the continuous-time LTI model , with

The system is subject to box state and input constraints

where the lower bound will assume two different values. Given the initial conditions , the control objective is to reach the desired reference . The system is controlled using the quadratic stage cost , with , , and , and is discretized using the sampling time and prediction steps. The terminal cost and terminal constraints are obtained as detailed in Section 6. The rates of change for the primary control loop and auxiliary control loop , - are assigned as and , respectively. The auxiliary reference is initialized using the starting output .
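The double integrator used here can be discretized exactly under a zero-order hold via the augmented matrix exponential. A short sketch follows; the step tau = 0.1 is illustrative, since the paper's exact sampling time is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# double integrator: x1 = position, x2 = velocity, u = acceleration
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
tau = 0.1  # illustrative discretization step

# exact zero-order-hold discretization via the augmented matrix exponential
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * tau)
Ad, Bd = M[:2, :2], M[:2, 2:]
```

For the double integrator this reproduces the familiar closed forms Ad = [[1, tau], [0, 1]] and Bd = [tau^2/2, tau].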

Figures ?- ? illustrate the closed-loop response for . The figures compare the results obtained by directly feeding as a reference for the primary control loop, or by filtering it via the ERG. In both cases, the desired reference is reached without violating the constraints, thus implying that the optimal control problem is feasible. Interestingly enough, the introduction of the auxiliary control loop does not penalize the output response. This behavior, although not true in general, is clearly desirable since it means that the ERG does not degrade the performance if it is not necessary.

Figures ?- ? instead illustrate the behavior for . In this case, the system constraints are violated in the absence of the ERG. This is due to the fact that the lower bound on the control input does not provide a sufficient deceleration for the given time horizon . As expected, the auxiliary control loop is able to overcome this issue by manipulating the dynamics of so that the OCP is always feasible.

### 7.2 Spacecraft Relative Motion

Consider the Hill-Clohessy-Wiltshire (HCW) equations, which describe the relative motion of a chaser spacecraft with respect to a target spacecraft moving on a circular orbit [43]. The relative coordinates of the chaser spacecraft are defined as displacements in the radial direction , the along track direction and the across track direction . The state vector consists of these positions and the respective velocities, , and . The system dynamics are captured by the continuous-time LTI model , with

where (rad/sec) is the orbital rate of the target. The chaser spacecraft is required to change its relative position from to without violating the box constraints. The full set of state and control constraints is given by

This is achieved using the dynamically embedded MPC with quadratic costs , , and , prediction horizon , discretization step , rate of change and ERG gain .
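For reference, the HCW model described above can be assembled as follows. The orbital rate value is an illustrative placeholder (the paper's exact rate is not reproduced here), and full three-axis thrust authority is assumed.

```python
import numpy as np

n = 0.0011  # illustrative orbital rate (rad/s)

# Hill-Clohessy-Wiltshire linearized relative dynamics,
# state x = (dx, dy, dz, vx, vy, vz): radial, along-track, across-track
A = np.array([
    [0.0,      0.0, 0.0,     1.0,    0.0,   0.0],
    [0.0,      0.0, 0.0,     0.0,    1.0,   0.0],
    [0.0,      0.0, 0.0,     0.0,    0.0,   1.0],
    [3 * n**2, 0.0, 0.0,     0.0,    2 * n, 0.0],
    [0.0,      0.0, 0.0,    -2 * n,  0.0,   0.0],
    [0.0,      0.0, -n**2,   0.0,    0.0,   0.0],
])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])
```

The in-plane (radial/along-track) motion is coupled through the Coriolis terms, while the across-track channel is a decoupled harmonic oscillator.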

The closed-loop behavior obtained by using the dynamically embedded MPC proposed in this paper is reported in Figures ?- ?. As expected, the system is successfully steered to the desired setpoint without violating the constraints. As with the previous example, the initial conditions are such that the system cannot reach the terminal set within the given prediction horizon. This issue is resolved by the explicit reference governor which provides an auxiliary reference (dashed lines in Figure ?) such that the OCP is feasible at all times.

## 8 Conclusions

This paper proposes a continuous-time MPC scheme for linear systems implemented using a dynamic control law. The stability of the resulting closed-loop system was proven with the aid of the small gain theorem under the condition that the internal dynamics of the control law are faster than the characteristic timescales of the system. The dynamically embedded MPC was then augmented with an explicit reference governor to extend the set of admissible initial conditions and, at the same time, limit the tracking error of the OCP solution. Simulation results demonstrated the feasibility of the proposed approach on both a simple example and a more relevant test case. Future research will pursue the extension of the proposed strategy to the constrained control of nonlinear systems.

### Proof of Proposition

Consider a candidate Lyapunov function defined as

where , , , and is the solution to the ordinary differential equation

Following [44], its time derivative satisfies

where, to simplify the notation, we denote and . The derivative of the terminal cost, , can be linked to the one-step variation using the first order Taylor expansion

with such that, for any bounded ,

Equation can thus be rewritten as

Following from Assumption ?, the following bound applies

As a result, given an arbitrarily large , there exists a sufficiently small discretization step such that . This ensures exponential stability due to Assumption ?.

### Proof of Proposition

The objective of this section is to demonstrate that, given a constant measured input such that is feasible, the internal states of the controller exponentially tend to the optimal solution of . Recall that, given Assumption ? (strong convexity) and Assumption ? (LICQ), admits a unique primal-dual optimum; we will denote it by .

We wish to show that is an exponentially stable (ES) equilibrium point of the primal-dual gradient flow update law which will be expressed compactly as . We will prove ES by showing that the update law is chosen from a negative scaling of the so-called KKT operator [45]

and proving that any update law which chooses its elements from and has an equilibrium point at is exponentially stable about .
The update law can be rewritten as

the first two lines are clearly elements of the KKT operator. The third line is also chosen from the KKT operator since is explicitly defined as a projection onto in . Finally, by explicit computation

it becomes apparent that if and only if the pair satisfies the KKT complementarity conditions,

and thus is an equilibrium point of the update law.

Now consider the Lyapunov function candidate,

it is straightforward to see that , and as . Its derivative is given by

substituting in the control law and recalling that yields,

Here we will use the fact that , and invoke the strong monotonicity property of the KKT operator [45], to obtain the bound

where is the strong monotonicity constant of , which proves exponential stability. Note that the region of attraction of this law is restricted to non-negative multipliers since, if any multiplier is negative, the projection onto the empty set is undefined. This is not an issue as (i) the multipliers can simply be projected onto the non-negative orthant before initialization and (ii) in explicit form, the update equation for does not allow a multiplier to become negative once it is non-negative.

### Proof of Corollary

The objective of this section is to show that the computational system is ISS with respect to with a disturbance gain . The same Lyapunov function can be used as in the proof of Proposition ?, where the optimal solution was considered fixed with respect to time. However, if the optimal solution is allowed to vary in time, then the time derivative of the Lyapunov function candidate may not exist since is not necessarily differentiable or even a function. Nevertheless, by considering results regarding the sensitivity of parameterized nonlinear programming problems, it will be shown that and thus are Lipschitz continuous functions, allowing the application of Clarke’s generalized Jacobian.
First we will show, under strong convexity and the LICQ (assumptions ? and ?) that is a Lipschitz continuous function of . The KKT conditions can be rewritten as the following generalized equation (GE),

where,

is the base mapping, and is the normal cone of , and collects the exogenous inputs of the problem. Denote the solution mapping of by .

To show that is single valued, and thus a function, recall that is a convex optimization problem in the sense of Boyd with a strongly convex objective function; thus it must have a unique primal minimum [36]. In addition, the LICQ is then sufficient for uniqueness of the dual variables, see e.g. [46], establishing the uniqueness of the primal-dual solution. Since is necessary and sufficient for optimality, the solution mapping must then be single valued, i.e., , and thus is a function.
Next, to show Lipschitz continuity, let be a reference solution of . Then, invoking Robinson’s theorem [47], strong regularity of in is sufficient for to be locally Lipschitz in a neighbourhood of , see e.g. [48], provided is Lipschitz in ; which is true for . Thus local Lipschitz continuity of with respect to is implied by strong regularity. It is known that the LICQ and the strong second order sufficient conditions (SSOSC) are sufficient to establish the strong regularity of a minimum of a nonlinear programming problem, see e.g. [46, Proposition 1.28]. Strong convexity of the objective (Assumption ?) is sufficient for the SSOSC to hold and the LICQ holds by Assumption ?. Thus the solution mapping is single valued and locally Lipschitz continuous in the neighbourhood of any .
It has thus been established that the optimal primal-dual solution is a function of the parameters of the optimal control problem, namely the reference  and the measured state ,

and that, for any point , the solution mapping  is locally Lipschitz continuous. However, since the solution mapping cannot be assumed to be continuously differentiable, we turn to generalized differentiation. Suppose  is a function which is locally Lipschitz at ; then  denotes Clarke's generalized Jacobian of  evaluated at . The generalized Jacobian retains many of the useful properties of the Jacobian, reduces to the Jacobian when  is continuously differentiable, and is well defined and guaranteed to be non-empty for locally Lipschitz functions [49].
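A minimal toy example (not a quantity from the paper) makes the generalized Jacobian concrete: for f(x) = |x|, the function is differentiable everywhere except at the kink, and Clarke's generalized Jacobian at the kink is the convex hull of the limiting gradients from either side:

```python
import numpy as np

# Clarke's generalized Jacobian, sketched for the toy function f(x) = |x|.
f = np.abs
grad = np.sign  # the gradient of |x| wherever x != 0

# Away from the kink the generalized Jacobian is the ordinary derivative:
assert grad(0.3) == 1.0 and grad(-0.3) == -1.0

# At the kink, limits of gradients from both sides give {-1, +1}; their
# convex hull [-1, 1] is Clarke's generalized Jacobian, which is set valued.
limits = {grad(eps) for eps in (1e-8, -1e-8)}
assert limits == {1.0, -1.0}

# Every element of [-1, 1] is bounded by the Lipschitz constant of f (here 1),
# mirroring the fact that the generalized Jacobian of a Lipschitz function is
# bounded in norm by its Lipschitz constant.
for g in np.linspace(-1.0, 1.0, 5):
    assert abs(g) <= 1.0
print("ok")
```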
Armed with the generalized Jacobian, consider the same Lyapunov function candidate

considered in the proof of Proposition ?. Taking the generalized Jacobian with respect to time, we obtain

Since  is continuously differentiable, the first term can be bounded using , thus

Using the chain rule for the generalized Jacobian [49]

and considering the case where  is constant and  exists1, we obtain

and thus 2

which completes the proof.
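For reference, the chain-rule step invoked above can be stated in generic notation (the symbols below are ours, not necessarily the paper's): if $V$ is locally Lipschitz and $x(\cdot)$ is differentiable, Clarke's chain rule [49] gives the inclusion

```latex
\frac{d}{dt} V(x(t)) \in \left\{ \xi^{\top} \dot{x}(t) \;:\; \xi \in \partial V(x(t)) \right\}
\qquad \text{whenever the left-hand derivative exists,}
```

so the time derivative of the Lyapunov candidate can be bounded by $\max_{\xi \in \partial V(x)} \|\xi\| \, \|\dot{x}\|$, which is finite because the generalized Jacobian of a Lipschitz function is bounded in norm by its Lipschitz constant.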

### Footnotes

1. Since system (1) is Lipschitz continuous,  is a class  function as long as  is bounded.
2. Note that, since  is set valued, , where  refers to the induced matrix norm.

### References

1. S. Tarbouriech and M. Turner, “Anti-windup design: an overview of some recent advances and open problems,” IET Control Theory and Applications, vol. 3, no. 1, pp. 1–19, 2009.
2. L. Zaccarian and A. R. Teel, Modern Anti-windup Synthesis: Control Augmentation for Actuator Saturation. Princeton University Press, 2011.
3. A. Ilchmann, E. P. Ryan, and C. J. Sangwin, “Tracking with prescribed transient behaviour,” ESAIM: Control, Optimisation and Calculus of Variations, vol. 7, pp. 471–493, 2002.
4. K. B. Ngo, R. Mahony, and Z.-P. Jiang, “Integrator backstepping using barrier functions for systems with multiple state constraints,” in Proc. of the IEEE Conference on Decision and Control (CDC), pp. 8306–8312, 2005.
5. K. P. Tee, S. S. Ge, and E. H. Tay, “Barrier Lyapunov functions for the control of output-constrained nonlinear systems,” Automatica, vol. 45, no. 4, pp. 918–927, 2009.
6. J. B. Rawlings and D. Q. Mayne, Model Predictive Control: Theory and Design. Nob Hill Pub., 2009.
7. E. F. Camacho and C. B. Alba, Model Predictive Control. Springer Science & Business Media, 2013.
8. G. C. Goodwin, M. M. Seron, and J. A. De Doná, Constrained Control and Estimation: An Optimisation Approach. Springer Science & Business Media, 2006.
9. L. Grüne and J. Pannek, “Nonlinear model predictive control,” in Nonlinear Model Predictive Control, pp. 43–66, Springer, 2011.
10. P. O. Scokaert, D. Q. Mayne, and J. B. Rawlings, “Suboptimal model predictive control (feasibility implies stability),” IEEE Transactions on Automatic Control, vol. 44, no. 3, pp. 648–654, 1999.
11. L. Grüne and J. Pannek, “Analysis of unconstrained NMPC schemes with incomplete optimization,” IFAC Proceedings Volumes, vol. 43, no. 14, pp. 238–243, 2010.
12. K. Graichen and A. Kugi, “Stability and incremental improvement of suboptimal MPC without terminal constraints,” IEEE Transactions on Automatic Control, vol. 55, no. 11, pp. 2576–2580, 2010.
13. M. Diehl, H. G. Bock, and J. P. Schlöder, “A real-time iteration scheme for nonlinear optimization in optimal feedback control,” SIAM Journal on Control and Optimization, vol. 43, no. 5, pp. 1714–1736, 2005.
14. S. Gros, M. Zanon, R. Quirynen, A. Bemporad, and M. Diehl, “From linear to nonlinear MPC: bridging the gap via the real-time iteration,” International Journal of Control, pp. 1–19, 2016.
15. M. Diehl, R. Findeisen, F. Allgöwer, H. G. Bock, and J. P. Schlöder, “Nominal stability of real-time iteration scheme for nonlinear model predictive control,” IEE Proceedings-Control Theory and Applications, vol. 152, no. 3, pp. 296–308, 2005.
16. T. Ohtsuka, “A continuation/GMRES method for fast computation of nonlinear receding horizon control,” Automatica, vol. 40, no. 4, pp. 563–574, 2004.
17. R. Ghaemi, J. Sun, and I. V. Kolmanovsky, “An integrated perturbation analysis and sequential quadratic programming approach for model predictive control,” Automatica, vol. 45, no. 10, pp. 2412–2418, 2009.
18. V. M. Zavala and M. Anitescu, “Real-time nonlinear optimization as a generalized equation,” SIAM Journal on Control and Optimization, vol. 48, no. 8, pp. 5444–5467, 2010.
19. J.-H. Hours and C. N. Jones, “A parametric nonconvex decomposition algorithm for real-time and distributed NMPC,” IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 287–302, 2016.
20. D. Kouzoupis, H. Ferreau, H. Peyrl, and M. Diehl, “First-order methods in embedded nonlinear model predictive control,” in Proc. of the European Control Conference (ECC), pp. 2617–2622, 2015.
21. S. Richter, C. N. Jones, and M. Morari, “Real-time input-constrained MPC using fast gradient methods,” in Proc. of the IEEE Conference on Decision and Control (CDC), pp. 7387–7393, 2009.
22. P. Patrinos and A. Bemporad, “An accelerated dual gradient-projection algorithm for embedded linear model predictive control,” IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 18–33, 2014.
23. M. Alamir, “Fast NMPC: A reality-steered paradigm: Key properties of fast NMPC algorithms,” in Proc. of the European Control Conference (ECC), pp. 2472–2477, 2014.
24. M. Reble and F. Allgöwer, “Unconstrained model predictive control and suboptimality estimates for nonlinear continuous-time systems,” Automatica, vol. 48, no. 8, pp. 1812–1817, 2012.
25. L. Magni and R. Scattolini, “Stabilizing model predictive control of nonlinear continuous time systems,” Annual Reviews in Control, vol. 28, no. 1, pp. 1–11, 2004.
26. L. Wang, “Continuous time model predictive control design using orthonormal functions,” International Journal of Control, vol. 74, no. 16, pp. 1588–1600, 2001.
27. M. Cannon and B. Kouvaritakis, “Infinite horizon predictive control of constrained continuous-time linear systems,” Automatica, vol. 36, no. 7, pp. 943–955, 2000.
28. D. DeHaan and M. Guay, “A real-time framework for model-predictive control of continuous-time nonlinear systems,” IEEE Transactions on Automatic Control, vol. 52, no. 11, pp. 2047–2057, 2007.
29. F. D. Brunner, H.-B. Dürr, and C. Ebenbauer, “Feedback design for multi-agent systems: A saddle point approach,” in Proc. of the IEEE Conference on Decision and Control (CDC), pp. 3783–3789, 2012.
30. H.-B. Dürr, E. Saka, and C. Ebenbauer, “A smooth vector field for quadratic programming,” in Proc. of the IEEE Conference on Decision and Control (CDC), pp. 2515–2520, 2012.
31. M. M. Nicotra and E. Garone, “Explicit reference governor for continuous time nonlinear systems subject to convex constraints,” in Proc. of the American Control Conference (ACC), pp. 4561–4566, 2015.
32. E. Garone and M. M. Nicotra, “Explicit reference governor for constrained nonlinear systems,” IEEE Transactions on Automatic Control, vol. 61, no. 5, pp. 1379–1384, 2016.
33. D. Limón, I. Alvarado, T. Alamo, and E. F. Camacho, “MPC for tracking piecewise constant references for constrained linear systems,” Automatica, vol. 44, no. 9, pp. 2382–2387, 2008.
34. F. A. De Almeida, “Reference management for fault-tolerant model predictive control,” Journal of Guidance, Control, and Dynamics, vol. 34, no. 1, pp. 44–56, 2011.
35. S. Di Cairano, A. Goldsmith, and S. Bortoff, “Reference management for fault-tolerant model predictive control,” IFAC-PapersOnLine, vol. 48, no. 23, pp. 398–403, 2015.
36. S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
37. J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. Springer, 2006.
38. K. J. Arrow, L. Hurwicz, H. Uzawa, and H. B. Chenery, Studies in Linear and Non-Linear Programming. Stanford University Press, 1958.
39. Z.-P. Jiang, A. R. Teel, and L. Praly, “Small-gain theorem for ISS systems and applications,” Mathematics of Control, Signals and Systems, vol. 7, no. 2, pp. 95–120, 1994.
40. F. Blanchini, “Set invariance in control,” Automatica, vol. 35, no. 11, pp. 1747–1767, 1999.
41. E. Garone, L. Ntogramatzidis, and M. M. Nicotra, “Explicit reference governor for linear systems,” International Journal of Control, pp. 1–16, 2017.
42. M. M. Nicotra and E. Garone, “An explicit reference governor for the robust constrained control of nonlinear systems,” in Proc. of the IEEE Conference on Decision and Control (CDC), pp. 1502–1507, 2016.
43. K. T. Alfriend, S. R. Vadali, P. Gurfil, J. P. How, and L. S. Breger, Spacecraft Formation Flying: Dynamics, Control and Navigation. Elsevier, 2010.
44. D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained model predictive control: Stability and optimality,” Automatica, vol. 36, pp. 789–814, 2000.
45. E. K. Ryu and S. Boyd, “Primer on monotone operator methods,” Appl. Comput. Math, vol. 15, no. 1, pp. 3–43, 2016.
46. A. F. Izmailov and M. V. Solodov, Newton-Type Methods for Optimization and Variational Problems. Springer, 2014.
47. S. M. Robinson, “Strongly regular generalized equations,” Mathematics of Operations Research, vol. 5, no. 1, pp. 43–62, 1980.
48. A. L. Dontchev and R. T. Rockafellar, Implicit Functions and Solution Mappings. Springer, 2014.
49. F. H. Clarke, Optimization and Nonsmooth Analysis. SIAM, 1990.