
# Control of nonlinear systems: a model inversion approach

## Abstract

A novel control design approach for general nonlinear systems is presented in this paper. The approach is based on the identification of a polynomial model of the system to control and on the on-line inversion of this model. An efficient technique is developed to perform the inversion, which allows an effective control implementation on real-time processors.

## 1 Introduction

Consider a nonlinear discrete-time SISO system in regression form:

$$y_{t+1}=g(\mathbf{y}_t,\mathbf{u}_t,\boldsymbol{\xi}_t)\tag{1}$$
$$\mathbf{y}_t=(y_t,\ldots,y_{t-n+1}),\quad \mathbf{u}_t=(u_t,\ldots,u_{t-n+1}),\quad \boldsymbol{\xi}_t=(\xi_t,\ldots,\xi_{t-n+1})$$

where $u_t\in U$ is the input, $y_t$ is the output, $\xi_t\in\Xi$ is a disturbance including both process and measurement noises, and $n$ is the system order. $U$ and $\Xi$ are compact sets. In particular, $U$ accounts for input saturation.

Suppose that the system (1) is unknown, but a set of measurements is available:

$$\mathcal{D}\doteq\{\tilde{u}_t,\tilde{y}_t\}_{t=1-L}^{0}\tag{2}$$

where $\tilde{u}_t$ and $\tilde{y}_t$ are bounded for all $t$. The tilde is used to indicate the input and output samples of the data set (2).
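As a simple illustration (not from the paper) of how a data set of the form (2) can be generated in simulation, the sketch below excites a user-supplied regression map with a bounded random input and records the input-output samples; the map `g`, the order `n`, and the input bounds are placeholder assumptions.

```python
import random

def collect_data(g, n, L, u_min=-1.0, u_max=1.0, seed=0):
    # Drive the system y_{t+1} = g(y_t, u_t) with a bounded random input
    # and record L input-output samples, mimicking the data set (2).
    rng = random.Random(seed)
    ys = [0.0] * n          # output regressor (y_t, ..., y_{t-n+1})
    us = [0.0] * n          # input regressor (u_t, ..., u_{t-n+1})
    data = []
    for _ in range(L):
        u = rng.uniform(u_min, u_max)
        us = [u] + us[:-1]              # shift in the new input
        y_next = g(ys, us)
        data.append((u, y_next))
        ys = [y_next] + ys[:-1]         # shift in the new output
    return data
```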

Consider a set of initial conditions of interest for the system (1) and, for each such initial condition, a set of output reference sequences of interest.

The aim is to control the system (1) in such a way that, starting from any initial condition in this set, the system output sequence tracks any reference sequence of interest. The set of all solutions of interest and the set of all possible disturbance sequences are defined accordingly.

After a brief section regarding the notation used in the paper, an approach called NIC (Nonlinear Inversion Control) is proposed for the design of a controller $K^{\mathrm{nl}}$ allowing the accomplishment of the above task.

## 2 Notation

A column vector $x\in\mathbb{R}^{n_x}$ is denoted as $x=(x_1,\ldots,x_{n_x})$. A row vector is denoted as $x^\top$, where $^\top$ indicates the transpose.

A discrete-time signal (i.e. a sequence of vectors) is denoted in bold style: $\mathbf{x}=(x_1,x_2,\ldots)$, where $x_t\in\mathbb{R}^{n_x}$ and $t=1,2,\ldots$ indicates the discrete time; $x_{i,t}$ is the $i$th component of the signal $\mathbf{x}$ at time $t$.

A regressor, i.e. a vector that, at time $t$, contains present and past values of a variable, is indicated in bold style with the time index: $\mathbf{x}_t=(x_t,\ldots,x_{t-n+1})$.

The norms of a vector are defined as

$$\|x\|_p\doteq\begin{cases}\left(\sum_{i=1}^{n_x}|x_i|^p\right)^{\frac{1}{p}},&p<\infty,\\[2pt]\max_i|x_i|,&p=\infty.\end{cases}$$

The norm $\|\cdot\|_p$ is also used to denote the absolute value of a scalar: $\|x\|_p=|x|$ for $n_x=1$.

The norms of a signal are defined as

$$\|\mathbf{x}\|_p\doteq\begin{cases}\left(\sum_{t=1}^{\infty}\sum_{i=1}^{n_x}|x_{i,t}|^p\right)^{\frac{1}{p}},&p<\infty,\\[2pt]\max_{i,t}|x_{i,t}|,&p=\infty,\end{cases}$$

where $x_{i,t}$ is the $i$th component of the signal at time $t$. These norms give rise to the well-known $\ell_p$ Banach spaces.
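For concreteness, the two definitions above can be written in code; this is a plain-Python sketch for finite signal segments only.

```python
import math

def vector_norm(x, p):
    # p-norm of a vector: (sum_i |x_i|^p)^(1/p) for p < inf, max_i |x_i| for p = inf
    if math.isinf(p):
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def signal_norm(x, p):
    # norm of a signal segment x = [x_1, x_2, ...] (each x_t a vector):
    # the same expression, taken over all components i and times t
    return vector_norm([v for xt in x for v in xt], p)
```

For a scalar ($n_x=1$), `vector_norm` reduces to the absolute value, matching the convention above.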

## 3 NIC control design

The proposed approach relies on identifying from the data (2) a model of the form

$$\hat{y}_{t+1}=f(q_t,u_t),\qquad q_t\doteq(y_t,\ldots,y_{t-n+1},u_{t-1},\ldots,u_{t-n+1})\tag{3}$$

where $u_t$ and $y_t$ are the system input and output, and $\hat{y}_{t+1}$ is the model output. For simplicity, the model is assumed to be of the same order $n$ as the system, but this choice is not necessary: all the results presented in the paper also hold when the model and system orders are different. Indications on the choice of the model order are given in Section 4.

A parametric structure is taken for the function $f$:

$$f(q_t,u_t)=\sum_{i=1}^{N}\alpha_i\phi_i(q_t,u_t)\tag{4}$$

where $\phi_i$ are basis functions and $\alpha_i$ are parameters to be identified. The choice of the basis functions is in general a crucial step [1, 2, 3]. In the present NIC approach, polynomial functions are used, for two main reasons: (1) polynomials have proved to be effective approximators in a huge number of problems; (2) as we will see later, they allow a “fast” controller evaluation. The identification of the parameter vector $\alpha=(\alpha_1,\ldots,\alpha_N)$ can be performed by means of convex optimization, as shown in Section 4.
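To make the expansion (4) concrete, here is a sketch (with hypothetical helper names, not from the paper) of a monomial basis and of the evaluation of $f$; representing each monomial by its tuple of exponents is one common choice.

```python
from itertools import product

def monomial_basis(n_vars, max_deg):
    # All monomials in n_vars variables with total degree <= max_deg,
    # each represented by a tuple of per-variable exponents.
    return [e for e in product(range(max_deg + 1), repeat=n_vars)
            if sum(e) <= max_deg]

def evaluate_model(alpha, exponents, q, u):
    # f(q, u) = sum_i alpha_i * phi_i(q, u), with phi_i a monomial in (q, u)
    z = list(q) + [u]
    total = 0.0
    for a, e in zip(alpha, exponents):
        phi = 1.0
        for zj, ej in zip(z, e):
            phi *= zj ** ej
        total += a * phi
    return total
```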

Once a model of the form (3) has been identified, the controller is obtained by its inversion:

Suppose that, at a time $t$, the reference value for the time $t+1$ is $r_{t+1}$ and the current regressor is $q_t$. Inversion consists in finding a command input $u^{\mathrm{nl}}_t$ such that the model output at time $t+1$ is “close” to $r_{t+1}$:

$$\hat{y}_{t+1}=f(q_t,u^{\mathrm{nl}}_t)\cong r_{t+1}.\tag{5}$$

The latter equality may not be exact for two reasons: (1) no $u^{\mathrm{nl}}_t$ may exist for which $f(q_t,u^{\mathrm{nl}}_t)$ is exactly equal to $r_{t+1}$; (2) values of $u^{\mathrm{nl}}_t$ with a limited norm may be of interest, in order to avoid an excessively high command activity. This kind of inversion is called (approximate) right-inversion and can be performed even when $f$ is not bijective with respect to $u$ (e.g., for some $q_t$ and $r_{t+1}$, more than one value of $u$ may exist such that (5) holds).

The command input yielding (5) can be computed according to the following optimality criterion:

$$u^{\mathrm{nl}}_t=\arg\min_{u\in U}J(u).\tag{6}$$

The objective function is given by

$$J(u)=\frac{1}{\rho_y}\left(r_{t+1}-f(q_t,u)\right)^2+\frac{\mu}{\rho_u}\,u^2\tag{7}$$

where $\rho_y$ and $\rho_u$ are normalization constants computed from the data set (2), and $\mu\geq 0$ is a design parameter, allowing us to determine the trade-off between tracking precision and command activity. Indications on the choice of $\mu$ are given in Section 4.

Note that the objective function (7) is in general non-convex. Moreover, the optimization problem (6) has to be solved on-line, and this may require a long time compared to the sampling time used in the application of interest. To overcome these two relevant problems, a technique is now proposed that allows a very efficient computation of the optimal command input $u^{\mathrm{nl}}_t$.

Since a polynomial basis function expansion has been considered for $f$, the objective function $J(u)$ is a polynomial in $u$. The minima of $J$ can thus be found by considering the roots of its derivative. Define the set

$$U_s\doteq\left(\mathrm{roots}_{\mathbb{R}}\!\left(\frac{dJ(u)}{du}\right)\cap U\right)\cup\{\underline{u},\overline{u}\}$$

where $\mathrm{roots}_{\mathbb{R}}(\cdot)$ denotes the set of all real roots of its argument, and $\underline{u}$ and $\overline{u}$ are the boundaries of $U$. The optimal command input is given by

$$u^{\mathrm{nl}}_t=K^{\mathrm{nl}}(r_{t+1},q_t)\doteq\arg\min_{u\in U_s}J(u)\tag{8}$$

where it has been made explicit that $u^{\mathrm{nl}}_t$ depends on the reference $r_{t+1}$ and on the regressor $q_t$.

The nonlinear controller $K^{\mathrm{nl}}$ is fully defined by the control law (8).

###### Remark 1

The derivative $dJ(u)/du$ can be computed analytically. Moreover, $U_s$ is composed of a “small” number of elements:

$$\mathrm{card}(U_s)\leq 2\deg(f)+1$$

where $\mathrm{card}(\cdot)$ is the set cardinality and $\deg(f)$ indicates the degree of the polynomial $f$ with respect to $u$. The evaluation of $u^{\mathrm{nl}}_t$ through (8) is thus extremely fast, since it just requires finding the real roots of a polynomial whose analytical expression is known and computing the objective function for a “small” number of values. This fact allows a very efficient controller implementation on real-time devices.
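The root-based evaluation of (8) can be sketched as follows, assuming for illustration that the polynomial $u\mapsto f(q_t,u)$ has already been reduced to a coefficient vector in NumPy's highest-degree-first convention; the normalization constants and input bounds are placeholders.

```python
import numpy as np

def nic_control(f_coeffs, r_next, u_min, u_max, mu=0.0, rho_y=1.0, rho_u=1.0):
    # f_coeffs: coefficients of u -> f(q_t, u), highest power first.
    # Objective (7): J(u) = (r_{t+1} - f(u))^2 / rho_y + (mu / rho_u) * u^2.
    e = np.polysub(np.array([r_next]), np.asarray(f_coeffs, dtype=float))
    J = np.polyadd(np.polymul(e, e) / rho_y,
                   np.array([mu / rho_u, 0.0, 0.0]))
    dJ = np.polyder(J)
    # Candidate set U_s: real stationary points inside U plus the boundaries of U.
    cands = [u_min, u_max]
    cands += [float(r.real) for r in np.roots(dJ)
              if abs(r.imag) < 1e-9 and u_min <= r.real <= u_max]
    # Control law (8): pick the candidate minimizing J.
    return min(cands, key=lambda u: float(np.polyval(J, u)))
```

Only a root-finding call and a handful of polynomial evaluations are needed per sampling period, which is what makes the real-time implementation cheap.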

###### Remark 2

The system (1) is not required to be stable and in general no preliminary stabilizing controllers are needed. The only guideline is to generate the data using input signals for which the system output does not diverge. This can be easily done for many nonlinear systems like the single-corner model considered below. Indeed, many systems are characterized by trajectories that are unstable but bounded (a typical feature of chaotic systems). In the presence of unbounded trajectories, for which a suitable input signal can hardly be found, a preliminary stabilizing controller may be required. The preliminary controller can also be a human operator, who is able to drive the system within a bounded domain, see [4, 5].

## 4 Model identification algorithm

In this section, the algorithm for identifying the model (3), required by the nonlinear controller $K^{\mathrm{nl}}$, is proposed. Systematic criteria for the choice of the involved identification/design parameters are also provided.

Choose a set of polynomial basis functions $\phi_1,\ldots,\phi_N$. For example, these functions can be generated as products of univariate polynomials, where the independent variables are scaled to range in a normalized interval. In most cases, no large polynomial degrees are required: we observed in several simulated and real-world applications that a low degree is sufficient to guarantee satisfactory model accuracy and control performance.

Define

$$\tilde{y}\doteq(\tilde{y}_{t_1+1},\ldots,\tilde{y}_{t_2+1}),\qquad \Phi\doteq\begin{bmatrix}\phi_1(\tilde{\mathbf{y}}_{t_1},\tilde{\mathbf{u}}_{t_1})&\cdots&\phi_N(\tilde{\mathbf{y}}_{t_1},\tilde{\mathbf{u}}_{t_1})\\\vdots&\ddots&\vdots\\\phi_1(\tilde{\mathbf{y}}_{t_2},\tilde{\mathbf{u}}_{t_2})&\cdots&\phi_N(\tilde{\mathbf{y}}_{t_2},\tilde{\mathbf{u}}_{t_2})\end{bmatrix}$$

where $t_1$ and $t_2$ are the first and last time indexes for which the regressors can be formed, and $\tilde{u}_t$ and $\tilde{y}_t$ are the input-output measurements of the data set (2). Consider the set $S_C(\gamma_y,\eta,\rho)$, defined as

$$S_C(\gamma_y,\eta,\rho)\doteq\left\{\beta:\left|\tilde{y}_{l+1}-\tilde{y}_{k+1}+(\Phi_k-\Phi_l)\beta\right|\leq\frac{\gamma_y}{\rho}\left\|\tilde{\mathbf{y}}_l-\tilde{\mathbf{y}}_k\right\|_\infty+\frac{2\eta}{\rho},\ k\in T,\ l\in\Upsilon_k\right\}$$

where $T\doteq\{t_1,\ldots,t_2\}$ and $\Upsilon_k$ is the set of indexes given by

$$\Upsilon_k\doteq\{i:\|\tilde{\mathbf{u}}_k-\tilde{\mathbf{u}}_i\|_\infty\leq\zeta\}$$

and $\zeta$ is the minimum value for which every set $\Upsilon_k$ contains at least two elements. $S_C(\gamma_y,\eta,\rho)$ is defined by a set of linear inequalities in $\beta$ and is thus convex in $\beta$.

The parameter vector $\alpha$ of the model defined by (3) and (4) can be identified by means of the following algorithm, which is completely based on convex optimization. Note that the algorithm is “self-tuning”, in the sense that all the required parameters are chosen by the algorithm itself, without requiring extensive trial and error procedures.

Algorithm 1

Initialization: choose a “low” model order $n$.

1. Construct the vector $\tilde{y}$ and the matrix $\Phi$ as indicated above.

2. Compute the constant $\gamma_y$ from the data:

$$\gamma_y=\max_{k\in T,\ l\in\Upsilon_k}\frac{\left|\tilde{y}_{l+1}-\tilde{y}_{k+1}\right|}{\left\|\tilde{\mathbf{y}}_l-\tilde{\mathbf{y}}_k\right\|_\infty}.$$
3. Solve the optimization problem

$$\alpha=\arg\min_{\beta\in\mathbb{R}^N}\|\beta\|_1\quad\text{subject to}\quad\text{(a) }\beta\in S_C(\gamma_y,\eta,\rho),\quad\text{(b) }\|\tilde{y}-\Phi\beta\|_\infty\leq\frac{\eta}{\rho}\tag{9}$$

where $\Phi_k$ denotes the $k$th row of the matrix $\Phi$, $\eta$ is the minimum value for which the constraints (a) and (b) are feasible, and $\rho$ is a real number slightly larger than one.

4. Repeat steps 1-3 for increasing model order $n$. Stop when no significant reductions of $\eta$ are observed.

5. Repeat steps 3-4 for increasing $\rho$. Stop when $\gamma_y/\rho<1$.
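To illustrate the convex step of the algorithm, the sketch below solves the $\ell_1$-minimization of (9) under the $\ell_\infty$ residual constraint (b) alone, recast as a standard linear program; the linear inequalities defining $S_C$ could be appended as additional rows in the same way. The solver choice (`scipy.optimize.linprog`) is illustrative, not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_identify(Phi, y, eps):
    # min ||beta||_1  s.t.  ||y - Phi beta||_inf <= eps,
    # via the standard LP lift: variables x = [beta, s], minimize sum(s),
    # with -s <= beta <= s and -eps <= y - Phi beta <= eps.
    m, N = Phi.shape
    I = np.eye(N)
    Z = np.zeros((m, N))
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_ub = np.block([[I, -I], [-I, -I], [Phi, Z], [-Phi, Z]])
    b_ub = np.concatenate([np.zeros(2 * N), y + eps, eps - y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
    return res.x[:N]
```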

The algorithm allows the achievement of three important features:

1. Closed-loop stability. As proven in [4], under reasonable conditions, constraint (a) ensures that the function $f$ has a Lipschitz constant wrt $\mathbf{y}$ not larger than $\gamma_y/\rho$, as the number of data $L\to\infty$. On the other hand, a theoretical analysis shows that having this constant smaller than one is a key condition for closed-loop stability. Another required condition is that $\Gamma<1$, where $\Gamma$ is the input-output gain of the system formed by the cascade connection of the controller and the model (this latter working in prediction). Since the condition $\Gamma<1$ can be imposed by the designer (see the discussion below), it can be concluded that Algorithm 1 is able to ensure closed-loop stability when the number of data becomes large.

2. “Small” tracking error. Constraint (b) is aimed at providing a model with a “small” prediction error (this error, evaluated on the design data set, is given by $\eta/\rho$). According to the mentioned theoretical analysis, reducing this error allows us to obtain a “small” tracking error. Note that there is a trade-off between stability and tracking performance: in step 5, $\rho$ is increased until the stability condition is met. However, increasing $\rho$ causes an increase of the prediction error and, consequently, of the tracking error.

3. Model sparsity. In step 3, the $\ell_1$ norm of the coefficient vector is minimized, leading to a sparse coefficient vector $\alpha$, i.e. a vector with a “small” number of non-zero elements [6, 7, 8, 9]. Sparsity is important to ensure a low complexity and a high regularity of the model, limiting at the same time well-known issues such as over-fitting and the curse of dimensionality. Sparsity also allows an efficient implementation of the model/controller on real-time processors, which may have limited memory and computation capacities.

Once a model has been identified, the nonlinear controller $K^{\mathrm{nl}}$ is obtained by its inversion, as explained in Section 3. Only one design parameter needs to be chosen for this inversion: the weight $\mu$ in (7). If no particular requirements on the activity of the command input have to be satisfied, the simplest choice is $\mu=0$. Otherwise, if the input activity has to be reduced, a larger value of $\mu$ can be chosen, not exceeding the maximum value for which the stability condition $\Gamma<1$ holds. $\Gamma$ is the input-output gain of the system formed by the cascade connection of the controller and the model. This condition can be checked (approximately) by deriving an estimate of $\Gamma$ from the data (2). Let

$$\mathcal{D}_\Gamma\doteq\{\tilde{\mathbf{w}}_t,\hat{y}_{t+1}\}_{t=1-L+m}^{-1}\tag{10}$$

where

$$\begin{aligned}\hat{y}_t&=f(\tilde{\mathbf{q}}_{t-1},\tilde{u}^{\mathrm{nl}}_{t-1})\\\tilde{u}^{\mathrm{nl}}_{t-1}&=K^{\mathrm{nl}}(\tilde{y}_t,\tilde{\mathbf{q}}_{t-1})\\\tilde{\mathbf{q}}_{t-1}&=(\tilde{y}_{t-1},\ldots,\tilde{y}_{t-n},\tilde{u}^{\mathrm{nl}}_{t-2},\ldots,\tilde{u}^{\mathrm{nl}}_{t-n})\\\tilde{\mathbf{w}}_t&=(\tilde{y}_t,\ldots,\tilde{y}_{t-m+1}),\end{aligned}\tag{11}$$

where $\tilde{y}_t$ are the output measurements of the data set (2) and $m$ is the chosen regressor length. The estimate of $\Gamma$ can be obtained by applying the validation method of [10] to the data set (10) (the method is summarized in the Appendix); $\mu$ must then be chosen in such a way that the estimated gain is smaller than one.

###### Remark 3

The stability conditions discussed above can give indications on the choice of the control system sampling time $T_s$: as discussed in [11], a too small $T_s$ leads to models with a strong dependence on past outputs and a weak dependence on the input, resulting in large values of the Lipschitz constant and of the gain $\Gamma$. It is thus expected that these quantities can be reduced by increasing $T_s$. Clearly, to capture the relevant dynamics of the system and allow a prompt control action, $T_s$ must not be “too large”.

## 5 Appendix: Nonlinear Set Membership validation procedure

In this appendix, the validation method of [10] is summarized, suitably adapted to the present setting. This method is useful within the NIC approach for estimating the gain constant appearing in Section 4.

Suppose that $\Gamma_y$ is the Lipschitz constant of an unknown function $f$. Let a set of data $\{\tilde{w}_t,\hat{y}_{t+1}\}$, $t\in T$, be available, described by

$$\hat{y}_{t+1}=f(\tilde{w}_t)+d_t,\quad t\in T$$

where $T$ is a suitable set of indexes and $d_t$ is a noise. This noise may also include errors due to the fact that $f$ is not Lipschitz continuous (e.g. in the case where $f$ is the sum of a Lipschitz continuous function plus a discontinuous but bounded function). Assume that $d_t\in B_\varepsilon$, where $B_\varepsilon$ is the ball with radius $\varepsilon$, and that $f\in\mathcal{F}(\Gamma_y)$, where $\mathcal{F}(\Gamma_y)$ is the set of Lipschitz continuous functions on the domain of $f$ with constant $\Gamma_y$. Under these assumptions, we have that $f\in FFS$, where $FFS$ is the Feasible Function Set.

###### Definition 1

Feasible Function Set:

$$FFS\doteq\{f\in\mathcal{F}(\Gamma_y):\hat{y}_{t+1}-f(\tilde{w}_t)\in B_\varepsilon,\ t\in T\}.\quad\blacksquare$$

According to this definition, $FFS$ is the set of all functions consistent with prior assumptions and data. As is typical in any identification/estimation theory, the problem of checking the validity of the prior assumptions arises. The only thing that can actually be done is to check whether the prior assumptions are invalidated by the data, evaluating if no function exists consistent with data and assumptions, i.e. if $FFS$ is empty.

###### Definition 2

Prior assumptions are validated if $FFS\neq\emptyset$.

The following result provides a sufficient condition for prior assumption validation. Define the function

$$\overline{f}(\Gamma_y,\tilde{w})\doteq\min_{k\in T}\left(\hat{y}_{k+1}+\varepsilon+\Gamma_y\left\|\tilde{w}-\tilde{w}_k\right\|\right).$$

###### Theorem 1

A sufficient condition for the prior assumptions to be validated is:

$$\overline{f}(\Gamma_y,\tilde{w}_t)>\hat{y}_{t+1}-\varepsilon,\quad t\in T.$$

Proof. See Theorem 1 in [10].

The validation Theorem 1 can be used for assessing a value of the Lipschitz constant $\Gamma_y$ for which the sufficient condition holds. Suppose that $\varepsilon$ has been chosen by means of some criterion (e.g. based on some prior knowledge of the noises, by means of standard filtering/smoothing techniques, or using the dispersion function defined in [12]). The constant

$$\Gamma_y^{\min}\doteq\inf\left\{\Gamma:\overline{f}(\Gamma,\tilde{w}_t)>\hat{y}_{t+1}-\varepsilon,\ t\in T\right\}\tag{12}$$

represents the minimum Lipschitz constant for which the prior assumptions are validated. A reasonable estimate of $\Gamma_y$ is thus a value slightly larger than $\Gamma_y^{\min}$. Note that the evaluation of $\Gamma_y^{\min}$ is quite simple, as shown by the examples in [10] and [13].
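As a rough numerical sketch of (12) (not the exact procedure of [10]; the Euclidean norm and the grid of candidate constants are arbitrary assumptions), one can scan increasing values of $\Gamma$ and return the first for which the validation condition of Theorem 1 holds at every data point:

```python
import numpy as np

def f_upper(gamma, W, y_next, eps, w):
    # Optimal Set Membership upper bound at the point w:
    # min_k ( y_{k+1} + eps + gamma * ||w - w_k|| )
    return np.min(y_next + eps + gamma * np.linalg.norm(W - w, axis=1))

def gamma_min_estimate(W, y_next, eps, grid):
    # Smallest gamma on the grid validating the prior assumptions:
    # f_upper(gamma, w_t) > y_{t+1} - eps for every data point t.
    for gamma in sorted(grid):
        if all(f_upper(gamma, W, y_next, eps, W[t]) > y_next[t] - eps
               for t in range(len(y_next))):
            return gamma
    return None
```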

### References

1. J. Sjöberg, Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P. Glorennec, H. Hjalmarsson, and A. Juditsky, “Nonlinear black-box modeling in system identification: a unified overview,” Automatica, vol. 31, pp. 1691–1723, 1995.
2. K. Hsu, C. Novara, T. Vincent, M. Milanese, and K. Poolla, “Parametric and nonparametric curve fitting,” Automatica, vol. 42, no. 11, pp. 1869–1873, 2006.
3. C. Novara, T. Vincent, K. Hsu, M. Milanese, and K. Poolla, “Parametric identification of structured nonlinear systems,” Automatica, vol. 47, no. 4, pp. 711–721, 2011.
4. C. Novara, L. Fagiano, and M. Milanese, “Direct feedback control design for nonlinear systems,” Automatica, vol. 49, no. 4, pp. 849–860, 2013.
5. L. Fagiano and C. Novara, “Learning controllers from data: control of a power kite used for wind energy conversion,” http://lorenzofagiano.altervista.org/movies/LC_kite_LD.mp4, 2012.
6. J. Fuchs, “Recovery of exact sparse representations in the presence of bounded noise,” IEEE Transactions on Information Theory, vol. 51, no. 10, pp. 3601–3608, Oct. 2005.
7. D. Donoho, M. Elad, and V. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Transactions on Information Theory, vol. 52, no. 1, pp. 6–18, Jan. 2006.
8. E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
9. J. Tropp, “Just relax: convex programming methods for identifying sparse signals in noise,” IEEE Transactions on Information Theory, vol. 52, no. 3, pp. 1030–1051, Mar. 2006.
10. M. Milanese and C. Novara, “Set membership identification of nonlinear systems,” Automatica, vol. 40, no. 6, pp. 957–975, 2004.
11. G. Goodwin, J. Yuz, J. Aguero, and M. Cea, “Sampling and sampled-data models,” in American Control Conference, plenary lecture, Baltimore, MD, USA, 2010.
12. K. Hsu, T. Vincent, C. Novara, M. Milanese, and K. Poolla, “Identification of nonlinear maps in interconnected systems,” in 44th IEEE Conference on Decision and Control and European Control Conference, Seville, Spain, 2005.
13. M. Milanese and C. Novara, “Set membership prediction of nonlinear time series,” IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1655–1669, 2005.