# Linear identification of nonlinear systems: A lifting technique based on the Koopman operator

###### Abstract

We exploit the key idea that nonlinear system identification is equivalent to linear identification of the so-called Koopman operator. Instead of considering nonlinear system identification in the state space, we obtain a novel linear identification technique by recasting the problem in the infinite-dimensional space of observables. This technique can be described in two main steps. In the first step, similar to a component of the Extended Dynamic Mode Decomposition algorithm, the data are lifted to the infinite-dimensional space and used for the linear identification of the Koopman operator. In the second step, the obtained Koopman operator is “projected back” to the finite-dimensional state space and related to the nonlinear vector field through a linear least squares problem. The proposed technique efficiently recovers the (polynomial) vector fields of different classes of systems, including unstable, chaotic, and open systems. In addition, it is robust to noise, well-suited to model datasets with low sampling rates, and able to infer network topology and dynamics.

## I Introduction

Operator-theoretic techniques rely on the idea of lifting nonlinear dynamical systems to an infinite dimensional space, where those systems have a linear representation. For instance, the so-called Koopman operator is a linear operator that describes the evolution of observable-functions along the trajectories of the system. Its spectral properties have been investigated over the past years and related to geometric properties of the system (see e.g. [5]). On the one hand, this body of work yielded novel methods for the analysis of nonlinear systems described by a known vector field (e.g. global stability analysis [4], global linearization [2], analysis of monotone systems [10]). On the other hand, the Koopman operator approach was shown to be conducive to data analysis [8] and related to numerical methods such as Dynamic Mode Decomposition (DMD) [9], yielding new techniques for the analysis of nonlinear systems described by trajectory data points (snapshots). In this context, the present paper aims at bridging these two directions of analysis, connecting data to vector field.

While Koopman operator techniques have been previously considered in the framework of system analysis, we present a first attempt, to the authors' knowledge, to develop them for system identification. Exploiting the lifting obtained through the Koopman operator, our key idea is to identify a linear operator in the space of observables, instead of identifying a nonlinear system in the state space. Since the Koopman operator and state space representations are equivalent, identifying the operator is indeed sufficient to recover the nonlinear dynamical system. Because the Koopman operator is defined in an infinite-dimensional space, we consider its projection onto a finite basis of functions. The proposed method can somehow be interpreted, in the space of observables, as an analog of the methods developed in [1, 6], which use dictionary functions in the state space. Our method proceeds in two main steps: from the state space to the space of observables, and back to the state space. In the first step, a finite-dimensional projection of the Koopman operator is identified from data through classical linear identification, a method that is related to a component of the so-called Extended Dynamic Mode Decomposition (EDMD) algorithm [12]. In the second step, the projected operator is connected to the vector field of the nonlinear system through an analytic derivation of the infinitesimal generator (following similar steps as in [4]). The expansion of the vector field in the finite basis is then obtained by solving a linear least squares problem.

Thanks to the lifting technique, the proposed method has the advantage of relying only on linear techniques (i.e. least squares regression), in spite of the fact that the dynamical system is nonlinear. In contrast to recent (direct) methods (e.g. [1]), it is an (indirect) method that does not rely on the estimation of time derivatives, which cannot be performed when the sampling rate is too low. Therefore, the method is well-suited to low sampling rates, and also robust to both measurement noise and process noise. In addition, it works efficiently with different kinds of behaviors, including unstable and chaotic systems (although it is limited so far to polynomial vector fields), and can be applied to several small datasets obtained from multiple trajectories. This flexibility is well-suited to identification problems such as those encountered in the context of systems biology.

The rest of the paper is organized as follows. In Section II, we introduce the system identification problem and present the lifting technique obtained through the Koopman operator. Our numerical method for nonlinear system identification based on the lifting technique is described in Section III. Extensions of the method to open systems (i.e. with outputs or process noise) are discussed in Section IV. In Section V, the method is illustrated with several numerical examples, and applied to network reconstruction. Concluding remarks and perspectives are given in Section VI.

## II Koopman operator technique for system identification

### II-A Problem statement

Consider an (unknown) dynamical system

$$\dot{x} = F(x), \qquad x \in \mathbb{R}^n, \quad (1)$$

with a vector field $F$ that is assumed to be polynomial. We denote by $\varphi^t$ the flow induced by the system, i.e. $t \mapsto \varphi^t(x_0)$ is a solution of (1) associated with the initial condition $x_0$.

We address the problem of identifying the vector field $F$ given $K$ snapshot pairs $(x_k, y_k)$, $k = 1, \dots, K$. The data points are such that $y_k = \varphi^{T_s}(x_k) + e_k$, where each component of the measurement noise $e_k$ is a Gaussian random variable with zero mean and standard deviation $\sigma_{\mathrm{meas}}$. The sampling period $T_s$ is assumed to be identical for all pairs $(x_k, y_k)$. Note that the different pairs $(x_k, y_k)$ can belong to different trajectories. Stochastic systems with process noise and systems with inputs will also be considered in Section IV.
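As an illustration of this setup, snapshot pairs can be generated numerically by integrating a known vector field over one sampling period and corrupting both endpoints with Gaussian noise. A minimal Python sketch (the Van der Pol field and the values of the sampling period and noise level are arbitrary choices for illustration, not the paper's settings):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
Ts, sigma = 0.1, 0.01          # sampling period and noise level (illustrative values)

def vdp(t, x):                 # example vector field (Van der Pol oscillator)
    return [x[1], (1 - x[0]**2) * x[1] - x[0]]

# Each snapshot pair (x_k, y_k): y_k is the state one sampling period
# after x_k; the pairs may come from many distinct short trajectories.
pairs = []
for _ in range(100):
    x0 = rng.uniform(-1, 1, size=2)
    y = solve_ivp(vdp, [0, Ts], x0, rtol=1e-9, atol=1e-9).y[:, -1]
    pairs.append((x0 + sigma * rng.normal(size=2),   # noisy measurement of x_k
                  y + sigma * rng.normal(size=2)))   # noisy measurement of y_k
```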

### II-B Lifting technique using the Koopman operator

The system (1) is described by its flow $\varphi^t$ in the state space $\mathbb{R}^n$. Alternatively, the system can be lifted to an infinite-dimensional space $\mathcal{F}$ of observable-functions $f : \mathbb{R}^n \to \mathbb{R}$. In this space, it is described by the semigroup of Koopman operators $U^t : \mathcal{F} \to \mathcal{F}$, $t \geq 0$, which governs the evolution of the observables along the trajectories:

$$U^t f = f \circ \varphi^t. \quad (2)$$

This lifting technique is of interest since it leads to a linear (but infinite-dimensional) representation of the system. Even if the system (1) is nonlinear, the operator $U^t$ is always linear since we have

$$U^t (a_1 f_1 + a_2 f_2) = a_1\, U^t f_1 + a_2\, U^t f_2.$$

In addition, there is a bijective relationship between the Koopman operator and its associated system. We denote by $L$ the infinitesimal generator of the Koopman semigroup, which satisfies

$$L f = \lim_{t \to 0^+} \frac{U^t f - f}{t}, \qquad U^t = e^{L t}. \quad (3)$$

Provided that the vector field $F$ is continuously differentiable, it can be shown that

$$L f = F \cdot \nabla f, \quad (4)$$

where $\nabla$ denotes the gradient (see e.g. [3]). This implies that there is a one-to-one correspondence between the Koopman operator and the vector field.
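As a concrete sanity check of this correspondence (our own example, not taken from the paper), consider the scalar system $\dot{x} = -x$ and the observable $f(x) = x^2$. The flow is $\varphi^t(x) = e^{-t} x$, and differentiating the lifted dynamics at $t = 0$ recovers exactly the generator formula (4):

```latex
U^t f(x) = f(\varphi^t(x)) = e^{-2t} x^2
\quad\Longrightarrow\quad
Lf(x) = \frac{d}{dt}\Big|_{t=0} e^{-2t} x^2 = -2 x^2,
```

and indeed $F(x)\, \nabla f(x) = (-x)(2x) = -2x^2$.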

This paper exploits the equivalence between the two descriptions of the system, proposing to identify the linear Koopman operator in the space of observables. Figure 1 illustrates the two main steps of the method. In the first step, data are lifted to the space of observables, providing the evolution of the observables (at some points in the state space). Then classical linear identification is used to obtain an approximation of the Koopman operator (in a finite-dimensional subspace of $\mathcal{F}$). This corresponds to a component of the so-called Extended Dynamic Mode Decomposition (EDMD) algorithm [12]. In the second step, the vector field is obtained through the equalities (3) and (4).

## III Numerical method

This section describes the numerical scheme derived from the Koopman operator identification method. Lifting of the data and identification of the operator (step 1) are explained in the first subsection. Identification of the vector field (step 2) is described in the second subsection.

Conceptually, the method works in the infinite-dimensional space $\mathcal{F}$ of observables. But to be numerically tractable, it is developed on a finite-dimensional subspace $\mathcal{F}_N \subset \mathcal{F}$ spanned by a basis of $N$ linearly independent functions $\{p_1, \dots, p_N\}$. In our case, we choose a basis of monomials with total degree less than or equal to $m_N$:

$$\left\{ x_1^{s_1} \cdots x_n^{s_n} \,:\, s_i \in \mathbb{N},\ s_1 + \cdots + s_n \leq m_N \right\}.$$

This choice is motivated by the fact that the vector field is assumed to be polynomial. Other bases of functions (e.g. Fourier basis, radial basis functions) could be considered to generalize the method. For practical purposes, the sequence of monomials should be characterized by some order (e.g. lexicographic order, weight order). The number of functions in the basis is equal to $N = \binom{n + m_N}{n}$. We denote by $p(x)$ the vector of basis monomials.
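For illustration, the monomial basis and the associated lifting map $x \mapsto p(x)$ can be enumerated with a short routine (a sketch with hypothetical helper names; the graded ordering below is one arbitrary admissible choice):

```python
from itertools import combinations_with_replacement

import numpy as np

def monomial_basis(n_vars, max_degree):
    """Exponent tuples of all monomials in n_vars variables of total
    degree <= max_degree, listed in a graded order (degree 0 first)."""
    exponents = []
    for deg in range(max_degree + 1):
        for combo in combinations_with_replacement(range(n_vars), deg):
            e = [0] * n_vars
            for i in combo:
                e[i] += 1
            exponents.append(tuple(e))
    return exponents

def lift(x, exponents):
    """Evaluate the vector of basis monomials p(x) at a point x."""
    x = np.asarray(x, dtype=float)
    return np.array([np.prod(x ** np.array(e)) for e in exponents])
```

For $n = 2$ variables and degree at most 2, this yields the $\binom{2+2}{2} = 6$ monomials $1, x_1, x_2, x_1^2, x_1 x_2, x_2^2$.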

The Koopman operator can be projected onto the subspace $\mathcal{F}_N$ and represented by an $N \times N$ matrix $\bar{U}$. Considering

$$f = \sum_{j=1}^{N} a_j\, p_j \in \mathcal{F}_N \quad (5)$$

and

$$P\, U^{T_s} f = \sum_{j=1}^{N} b_j\, p_j, \quad (6)$$

where $P : \mathcal{F} \to \mathcal{F}_N$ is a projection operator, we have

$$b = \bar{U} a \quad (7)$$

with $a = (a_1, \dots, a_N)$ and $b = (b_1, \dots, b_N)$. The matrix $\bar{U}$ provides an (approximate) finite-dimensional linear representation of the nonlinear system. This representation is not obtained through local linearization techniques and is valid globally in the state space. In the numerical scheme described below, we compute $\bar{U}$ from data (step 1), and then connect the obtained matrix to the vector field of the nonlinear system (step 2).

### III-A Step 1: Lifting of the data and Koopman operator identification

We now describe the method used to compute the matrix $\bar{U}$ given the snapshot pairs $(x_k, y_k)$, which corresponds to a component of the EDMD algorithm (see [12] for more details).

For each snapshot pair $(x_k, y_k)$, we construct the new pair $(p(x_k), p(y_k))$. The data are somehow lifted to (a subspace of) the space of observables and provide snapshots of trajectories of particular observables (i.e. the basis functions). Provided that the measurement noise is small, we have $p(y_k) \approx p(\varphi^{T_s}(x_k))$. It follows from (2) and (5) that

$$p_j(y_k) \approx \left( U^{T_s} p_j \right)(x_k) \approx \left( P\, U^{T_s} p_j \right)(x_k),$$

so that

$$p(y_k)^\top \approx p(x_k)^\top\, \bar{U}, \quad (8)$$

where $\bar{U}$ denotes here the matrix representation of $P\, U^{T_s}$. The last equality implies that the $j$th column of $\bar{U}$ contains the coefficients of the expansion of $P\, U^{T_s} p_j$ in the basis $\{p_1, \dots, p_N\}$. If the projection $P$ is chosen so that it yields the least squares fit at the points $x_k$ (i.e. finite-dimensional Galerkin projection), the matrix $\bar{U}$ is given by the least squares solution

$$\bar{U} = \mathbf{P}_x^\dagger\, \mathbf{P}_y \quad (9)$$

with the matrices

$$\mathbf{P}_x = \begin{bmatrix} p(x_1)^\top \\ \vdots \\ p(x_K)^\top \end{bmatrix}, \qquad \mathbf{P}_y = \begin{bmatrix} p(y_1)^\top \\ \vdots \\ p(y_K)^\top \end{bmatrix}, \quad (10)$$

and where $\mathbf{P}_x^\dagger$ denotes the pseudoinverse of $\mathbf{P}_x$.
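The least squares step (9) is a one-line computation once the snapshots are lifted. The sketch below (our own toy example) uses the scalar system $\dot{x} = -x$, whose flow is known in closed form, so that the identified matrix can be checked against the exact action of $U^{T_s}$ on the monomials $1, x, x^2$; rows of the data matrices are the lifted snapshots, as in (10):

```python
import numpy as np

rng = np.random.default_rng(0)
Ts = 0.1

# Noise-free snapshot pairs from dx/dt = -x, whose exact flow is y = exp(-Ts)*x.
x = rng.uniform(-1, 1, size=50)
y = np.exp(-Ts) * x

def lift(pts):
    # Monomial basis (1, x, x^2); one lifted snapshot per row.
    return np.column_stack([np.ones_like(pts), pts, pts**2])

Px, Py = lift(x), lift(y)
U_bar = np.linalg.pinv(Px) @ Py   # least squares solution (9)

# For this system U^Ts x^j = exp(-j*Ts) x^j, so U_bar should be
# (up to numerical error) diag(1, e^{-Ts}, e^{-2 Ts}).
print(np.round(U_bar, 4))
```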

### III-B Step 2: Identification of the vector field

We assume that the polynomial vector field is of total degree less than or equal to $m_F$, so that we can write its components as

$$F_k(x) = \sum_{j=1}^{N_F} w_j^k\, p_j(x), \qquad k = 1, \dots, n, \quad (11)$$

where $N_F = \binom{n + m_F}{n}$ is the number of monomials of total degree less than or equal to $m_F$. The identification problem is therefore reduced to identifying the $n N_F$ coefficients $w_j^k$.

#### III-B1 Derivation of the Koopman infinitesimal generator

Our goal is to derive the matrix $\bar{L}$ which represents the projection of the infinitesimal generator (4) on the basis of monomials, and to show that this matrix is a function of the unknown coefficients $w_j^k$ of the vector field. Consider the linear operators

$$L_k^{(j)} f = p_j\, \frac{\partial f}{\partial x_k}, \qquad k = 1, \dots, n, \quad j = 1, \dots, N_F.$$

It is clear that the operators $L_k^{(j)}$ map polynomials of total degree less than or equal to $m_N$ to polynomials of total degree less than or equal to $m_N + m_F - 1$. Denote by $q(x)$ the vector of monomials of total degree less than or equal to $m_N + m_F - 1$. The projection of $L_k^{(j)}$ on the basis of monomials is represented by the matrix $\bar{L}_k^{(j)}$, which is so that

$$L_k^{(j)} f = \left( \bar{L}_k^{(j)} a \right)^{\!\top} q \quad (12)$$

for all $f = a^\top p \in \mathcal{F}_N$. We also have $L_k^{(j)} p_i = p_j\, \partial p_i / \partial x_k$, which implies that the $i$th column of $\bar{L}_k^{(j)}$ corresponds to the expansion of $p_j\, \partial p_i / \partial x_k$ in the basis of monomials $q$. Defining an index function $\iota$ that encodes the order of the monomials in the vector $q$, i.e. $q_{\iota(s_1, \dots, s_n)} = x_1^{s_1} \cdots x_n^{s_n}$, we have

$$L_k^{(j)}\, x_1^{s_1} \cdots x_n^{s_n} = s_k\, q_{\iota(s + s^{(j)} - e_k)},$$

where $s^{(j)}$ is the multi-index of the monomial $p_j$ and $e_k$ is the $k$th unit vector. It follows that the entries of $\bar{L}_k^{(j)}$ are given by

$$\left[ \bar{L}_k^{(j)} \right]_{\iota(s + s^{(j)} - e_k),\, \iota(s)} = s_k, \quad (13)$$

all other entries being zero.

Note that the matrices $\bar{L}_k^{(j)}$ can also be obtained by multiplying a multiplication matrix and a differentiation matrix (see [4] for more details). Finally, it follows from (4) and (11) that

$$L = \sum_{k=1}^{n} \sum_{j=1}^{N_F} w_j^k\, L_k^{(j)}, \quad (14)$$

and the matrix given by

$$\bar{L} = \sum_{k=1}^{n} \sum_{j=1}^{N_F} w_j^k\, \bar{L}_k^{(j)} \quad (15)$$

represents the projection of the Koopman infinitesimal generator on the basis of monomials.
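The exponent bookkeeping of (13) translates directly into code. A small sketch (hypothetical helper names, our own convention; source monomials whose image leaves the basis are simply dropped, anticipating the truncation used later in (17)):

```python
from itertools import combinations_with_replacement

import numpy as np

def monomials(n, deg):
    # Exponent tuples of all monomials in n variables of total degree <= deg.
    out = []
    for d in range(deg + 1):
        for c in combinations_with_replacement(range(n), d):
            e = [0] * n
            for i in c:
                e[i] += 1
            out.append(tuple(e))
    return out

def generator_matrix(n, deg, exp_j, k):
    """Matrix of the operator f -> x^exp_j * df/dx_k on the monomial basis
    of total degree <= deg (images outside the basis are truncated)."""
    basis = monomials(n, deg)
    index = {e: i for i, e in enumerate(basis)}
    L = np.zeros((len(basis), len(basis)))
    for col, a in enumerate(basis):       # source monomial x^a
        if a[k] == 0:
            continue                      # d/dx_k annihilates it
        img = list(a)
        img[k] -= 1                       # differentiate with respect to x_k ...
        img = tuple(s + t for s, t in zip(img, exp_j))  # ... then multiply by p_j
        if img in index:
            L[index[img], col] = a[k]     # entry rule of (13)
    return L

# 1D check: x d/dx acts as diag(0, 1, 2) on the basis (1, x, x^2).
print(generator_matrix(1, 2, (1,), 0))
```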

#### III-B2 Computation of the vector field

The coefficients $w_j^k$ of the vector field are finally computed by using the relationship between $\bar{U}$ (obtained from data in (9)) and $\bar{L}$ (which depends on the $w_j^k$ in (15)).

It can be easily shown (using (3)) that, if $\bar{U}$ and $\bar{L}$ represent the projections of $U^{T_s}$ and $L$, respectively, then $\bar{U} = e^{\bar{L} T_s}$. This property and (9) imply that we can obtain an estimation of $\bar{L}$ from data:

$$\bar{L} = \frac{1}{T_s} \log\!\left( \mathbf{P}_x^\dagger\, \mathbf{P}_y \right). \quad (16)$$

The function $\log(\cdot)$ denotes the (principal) matrix logarithm.

###### Remark 1 (Matrix logarithm)

The real eigenvalues of $\bar{U}$ must not be negative, so that the principal logarithm exists and is real. However, numerical simulations show that $\bar{U}$ can have negative real eigenvalues when the number of data points is too small. In this case, there exists no real matrix $\bar{L}$ and the method might fail.

When the Koopman infinitesimal generator admits complex eigenvalues, the sampling period $T_s$ must be sufficiently small so that the eigenvalues of $\bar{L} T_s$ lie in the strip $\{ z \in \mathbb{C} : |\Im(z)| < \pi \}$. If this condition is not satisfied, $\log(e^{\bar{L} T_s}) \neq \bar{L} T_s$ and the accuracy of the method is altered. This issue is discussed in more detail in [13].
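In practice the generator estimate (16) is one call to a matrix logarithm, guarded by the eigenvalue condition of Remark 1. A sketch using SciPy (the matrix below is the exact projected Koopman matrix of $\dot{x} = -x$ on the basis $(1, x, x^2)$, standing in for a data-driven estimate):

```python
import numpy as np
from scipy.linalg import logm

Ts = 0.1
U_bar = np.diag([1.0, np.exp(-Ts), np.exp(-2 * Ts)])

# Guard: the principal real logarithm requires no nonpositive real eigenvalues.
eig = np.linalg.eigvals(U_bar)
if np.any(np.isclose(eig.imag, 0) & (eig.real <= 0)):
    raise ValueError("U_bar has nonpositive real eigenvalues; "
                     "a real generator estimate does not exist.")

L_bar = logm(U_bar).real / Ts   # generator estimate, eq. (16)

# The generator of dx/dt = -x acts on x^j as -j x^j,
# so L_bar should be close to diag(0, -1, -2).
print(np.round(L_bar, 4))
```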

Using (15) and (16), we obtain the equality

$$\sum_{k=1}^{n} \sum_{j=1}^{N_F} w_j^k\, \widehat{L}_k^{(j)} = \frac{1}{T_s}\, \widehat{\log\!\left( \mathbf{P}_x^\dagger\, \mathbf{P}_y \right)}, \quad (17)$$

where $\widehat{A}$ denotes the matrix constructed with the columns and rows of $A$ associated with monomials of total degree less than or equal to $m_N - m_F + 1$ and $m_N$, respectively (note that we will typically choose $m_N = m_F$). By disregarding the remaining columns of $\bar{L}_k^{(j)}$, we only consider monomials that are mapped by $L_k^{(j)}$ onto the span of the basis monomials (of total degree less than or equal to $m_N$), for which the finite Galerkin projection (i.e. the identity in this case) is exact.

The matrix equality (17) corresponds to a linear set of equations (with unknowns $w_j^k$) that is overdetermined. Its least squares solution yields an approximation of the coefficients $w_j^k$:

$$\hat{w} = \begin{bmatrix} \operatorname{vec}\!\big(\widehat{L}_1^{(1)}\big) & \cdots & \operatorname{vec}\!\big(\widehat{L}_n^{(N_F)}\big) \end{bmatrix}^{\dagger} \operatorname{vec}\!\left( \frac{1}{T_s}\, \widehat{\log\!\left( \mathbf{P}_x^\dagger\, \mathbf{P}_y \right)} \right), \quad (18)$$

where $\operatorname{vec}(\cdot)$ stands for the vectorization of the matrix. Note that more advanced techniques could be used to solve (17), for instance promoting sparsity of the vector of unknowns by adding an $\ell_1$-norm penalty term to (17); see e.g. the LASSO method [11]. We leave this for future research.
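Putting (16)-(18) together for the scalar example $\dot{x} = -x$, the coefficients $w_j$ of $F(x) = w_1 + w_2 x + w_3 x^2$ can be recovered by stacking the vectorized generator matrices and solving one linear least squares problem (a sketch with our own conventions; the truncation of $x^2\, d/dx$ follows the idea of (17)):

```python
import numpy as np
from scipy.linalg import logm

Ts = 0.1
# Exact projected Koopman matrix of dx/dt = -x on the basis (1, x, x^2),
# standing in for the data-driven estimate of step 1.
U_bar = np.diag([1.0, np.exp(-Ts), np.exp(-2 * Ts)])
L_bar = logm(U_bar).real / Ts                 # eq. (16)

def op_matrix(j):
    """Matrix of f -> x^j * df/dx on the basis (1, x, x^2),
    truncated when the image leaves the basis."""
    L = np.zeros((3, 3))
    for a in range(1, 3):                     # source monomial x^a, a >= 1
        img = a - 1 + j                       # x^j d/dx x^a = a x^(a-1+j)
        if img <= 2:
            L[img, a] = a
    return L

# Stack vec(L^(j)) as columns and solve the linear least squares problem (18).
A = np.column_stack([op_matrix(j).ravel() for j in range(3)])
w, *_ = np.linalg.lstsq(A, L_bar.ravel(), rcond=None)
print(np.round(w, 6))   # the true field F(x) = -x has coefficients (0, -1, 0)
```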

###### Remark 2

Each entry of the matrix equality (17) yields a scalar equation in the least squares problem (18). However, if a given entry of $\widehat{L}_k^{(j)}$ is zero for all $j$ and $k$, then the corresponding entry of the left-hand side of (17) does not depend on the coefficients $w_j^k$, and the related equation in (18) can be disregarded. In particular, when $m_N = m_F$, the number of effective equations is equal to the number $n N_F$ of unknowns and the problem is not overdetermined. Moreover, (17) can be solved directly in this case, since each coefficient $w_j^k$ is equal to an entry of the right-hand side of (17).

###### Remark 3

The identification problem could also be performed at the level of the Koopman semigroup. However, solving the equality $\bar{U} = e^{\bar{L} T_s}$ (with the square matrix $\bar{L}$ depending linearly on the unknowns $w_j^k$) amounts to solving a (nonconvex) nonlinear least squares problem. Numerical simulations suggest that better results are obtained by solving (17).

## IV Extensions to open systems

In this section, we show that the proposed method is well-suited to identify open systems that are driven by a known input or by white noise (i.e. process noise).

### IV-A Systems with inputs

Consider an open dynamical system of the form

$$\dot{x} = F(x, u) \quad (19)$$

with $x \in \mathbb{R}^n$ and with the input $u \in \mathbb{R}^p$. We define the associated flow $\varphi^t(x, u)$ so that $t \mapsto \varphi^t(x_0, u)$ is a solution of (19) with the initial condition $x_0$ and the (constant) input $u$. Following the generalization proposed in [7] for discrete-time dynamical systems, we consider observables $f : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}$ and define the semigroup of Koopman operators

$$U^t f(x, u) = f(\varphi^t(x, u), u),$$

where $u$ is a constant input. In this case, the components of $u$ can be considered as additional state variables and the above operator is the classical Koopman operator for the augmented system $\dot{x} = F(x, u)$, $\dot{u} = 0$. In particular, the infinitesimal generator is still given by (4).

It follows that the method proposed in Sections III-A and III-B can be used if the input can be considered as constant between two snapshots (zero-order hold assumption), or equivalently if the sampling rate is high enough. The matrix $\bar{U}$ is obtained with the snapshot pairs $((x_k, u_k), (y_k, u_k))$, and the rest of the procedure follows along similar lines in the augmented state space $\mathbb{R}^n \times \mathbb{R}^p$. The efficiency of the identification method in the case of systems with inputs is shown in Section V-B.

### IV-B Process noise

We have considered so far only measurement noise. We now show that the proposed method is also robust to process noise. Consider a system described by the stochastic differential equation

$$\dot{x} = F(x) + \eta \quad (20)$$

where $\eta$ is a white noise that satisfies $E\{ \eta(t)\, \eta(\tau)^\top \} = \sigma^2\, \delta(t - \tau)\, I$ (where $E$ denotes the expectation). We define the flow $\varphi^t : \mathbb{R}^n \times \Omega \to \mathbb{R}^n$, where $\Omega$ is the probability space, such that $t \mapsto \varphi^t(x_0, \omega)$ is a solution of (20). In this case, the semigroup of Koopman operators is defined by (see e.g. [5])

$$U^t f = E\{ f \circ \varphi^t \},$$

and its infinitesimal generator is given by

$$L f = F \cdot \nabla f + \frac{\sigma^2}{2}\, \Delta f,$$

where $\Delta$ denotes the Laplacian operator that accounts for diffusion. Note that the infinitesimal generator is related to the so-called Kolmogorov backward equation.

Now we can show that the numerical scheme of the proposed identification method does not need to be adapted to take account of process noise. As explained in [12], the first step of the method (Section III-A) is still valid for identifying the matrix $\bar{U}$. In the second step (Section III-B), the procedure is the same, except that one has to consider the Laplacian operator, whose projection on the basis of monomials is represented by a matrix $\bar{\Delta}$. The equality (17) is then replaced by

$$\sum_{k=1}^{n} \sum_{j=1}^{N_F} w_j^k\, \widehat{L}_k^{(j)} + \frac{\sigma^2}{2}\, \widehat{\Delta} = \frac{1}{T_s}\, \widehat{\log\!\left( \mathbf{P}_x^\dagger\, \mathbf{P}_y \right)},$$

where $\sigma^2$ is an additional unknown. While the operators $L_k^{(j)}$ map monomials of total degree $s$ to monomials of total degree greater than or equal to $s - 1$, the Laplacian operator maps monomials of total degree $s$ to monomials of total degree $s - 2$. Therefore, all nonzero entries of $\widehat{\Delta}$ correspond to zero entries of the matrices $\widehat{L}_k^{(j)}$, so that the addition of the diffusion term only modifies entries of the left-hand side which are zero (and do not depend on $w_j^k$) when there is no process noise. In other words, the diffusion term does not affect the equations on the coefficients $w_j^k$, whose solution is still given by (18). In Section V-B, an example illustrates the robustness of the method against process noise.
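A one-dimensional check of this degree argument (our own example): for the Ornstein-Uhlenbeck-type system $\dot{x} = -x + \eta$ and the monomial $f(x) = x^2$,

```latex
Lf = \underbrace{(-x)\,\frac{d}{dx}\,x^2}_{\text{drift: } -2x^2}
\;+\;
\underbrace{\frac{\sigma^2}{2}\,\frac{d^2}{dx^2}\,x^2}_{\text{diffusion: } \sigma^2},
```

so the drift term keeps the degree of $x^2$ while the diffusion term drops it by two; the two contributions populate disjoint entries of the projected generator.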

## V Numerical examples

In this section, we illustrate our numerical method with several examples. We show that the method is efficient to recover the vector field of different kinds of systems, including unstable, chaotic, and open systems. The method is also applied to network reconstruction.

For all the simulations, we consider measurement noise with standard deviation . We also choose the parameters and , so that .

### V-A Periodic, unstable, and chaotic systems

We first consider three systems that exhibit different kinds of behavior.

##### Van der Pol oscillator

The dynamics are given by

and possess a stable limit cycle. The data are obtained by taking snapshots at (sampling period ) for trajectories with initial conditions on .

##### Unstable equilibrium

The dynamics are given by

and are characterized by an unstable equilibrium at the origin. The data are obtained by taking snapshots at (sampling period ) for trajectories with initial conditions on .

##### Chaotic Lorenz system

The dynamics are given by

and exhibit a chaotic behavior. The data are obtained by taking snapshots over (sampling period ) for trajectories with initial conditions on .

For each model, we identify the coefficients $w_j^k$ of the vector field and compute the root mean square error

$$\mathrm{RMSE} = \sqrt{ \frac{1}{n N_F} \sum_{k=1}^{n} \sum_{j=1}^{N_F} \left( \hat{w}_j^k - w_j^k \right)^2 },$$

where $\hat{w}_j^k$ and $w_j^k$ are the estimated and exact values of the coefficients, respectively. In order to compare the results obtained with different systems, we also compute the normalized RMSE, $\mathrm{NRMSE} = \mathrm{RMSE}/\bar{w}$, where $\bar{w}$ is the average value of the nonzero coefficients $w_j^k$. The RMSE and NRMSE averaged over the experiments are given in Table I. The results show that the method performs well, even in the case of unstable and chaotic systems (for which the NRMSE is slightly larger). Note that the accuracy increases with the number of data points (which is low here), but a comprehensive study of this effect is beyond the scope of this paper.

| | RMSE | NRMSE |
|---|---|---|
| a) Van der Pol | | |
| b) Unstable equilibrium | | |
| c) Chaotic Lorenz | | |
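For completeness, the two error measures can be computed as follows (a sketch with a hypothetical helper name; we assume the normalization uses the mean magnitude of the nonzero exact coefficients):

```python
import numpy as np

def rmse_nrmse(w_est, w_true):
    """RMSE over all coefficients, and the RMSE normalized by the
    average magnitude of the nonzero exact coefficients."""
    w_est = np.asarray(w_est, dtype=float)
    w_true = np.asarray(w_true, dtype=float)
    rmse = np.sqrt(np.mean((w_est - w_true) ** 2))
    scale = np.mean(np.abs(w_true[w_true != 0]))
    return rmse, rmse / scale
```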

### V-B Input and process noise

We consider the forced Duffing system

(21) | |||||

(22) |

where is the input. With , we obtain snapshots over (sampling period ) for trajectories with initial conditions on . Our identification method provides a good estimation of the vector field (including the forcing term ). The RMSE computed over all coefficients (including those related to the forcing term) and averaged over simulations is equal to .

We now consider the effect of process noise and replace the forcing term in (21) by a strong white noise with (note that we still add measurement noise with ). Data points are obtained as previously, but for trajectories computed with the Euler-Maruyama scheme. The method is robust against process noise and recovers the vector field with an RMSE (averaged over simulations) equal to .

### V-C Network reconstruction

We consider a -dimensional system of the form

(23) |

where the coefficients are chosen according to a uniform distribution on and are distributed according to a Gaussian distribution of zero mean and standard deviation equal to one. The subscripts and are integers randomly chosen on and the exponents and are also integers randomly chosen on so that . The first term in (23) is a linear term that ensures local stability of the origin. The other terms are quadratic and cubic nonlinearities that define an interaction network between the different states. We say that there is a directed link from to if and for some and .

For different trajectories with initial conditions on , we consider three snapshots obtained at time (sampling period ). Figure 2 shows that, with this dataset, the method provides an accurate estimation of all the coefficients of the polynomial vector field (). Note that we consider a total of data points to identify coefficients . As we decrease the number of data points, results are less accurate but still acceptable ( with trajectories and with trajectories). More advanced methods promoting sparsity (e.g. LASSO method [11]) could further improve the accuracy of the results.

Network reconstruction aims at predicting links in the network. Very small estimated coefficients are mainly due to measurement noise and correspond to exact values equal to zero. We therefore consider that there is a link from to if at least one coefficient related to a monomial containing is above a given threshold value. With a threshold value equal to , we obtain a true positive rate (i.e. the number of correctly identified links divided by the actual number of links) equal to and a false positive rate (i.e. the number of incorrectly identified links divided by the actual number of missing links) equal to . Notably, in addition to this good performance, the method also provides the nature of the links (e.g. quadratic, cubic) and their weights.

## VI Conclusion

We have proposed a novel method for the identification of nonlinear systems. This method is based on a lifting technique which recasts nonlinear system identification as the identification of the linear Koopman operator in an infinite-dimensional space of observables. A key advantage of the method is that it relies only on linear techniques and, being an indirect method, does not require the estimation of time derivatives. It follows that the method is well-suited to low sampling rate datasets. It is also robust to noise and efficiently recovers the vector fields of different classes of systems.

This paper presents preliminary results that open the door to further developments of Koopman operator lifting techniques for nonlinear system identification (e.g. using other bases to identify non-polynomial vector fields). Since the method is based on a (finite-dimensional) truncation of the (infinite-dimensional) Koopman operator, a theoretical study of its convergence should also be provided in future work.

## VII Acknowledgments

The authors acknowledge support from the Luxembourg National Research Fund (FNR 14/BM/8231540).

## References

- [1] S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.
- [2] Y. Lan and I. Mezić. Linearization in the large of nonlinear systems and Koopman operator spectrum. Physica D, 242:42–53, 2013.
- [3] A. Lasota and M. C. Mackey. Chaos, Fractals, and Noise: stochastic aspects of dynamics. Springer-Verlag, 1994.
- [4] A. Mauroy and I. Mezić. Global stability analysis using the eigenfunctions of the Koopman operator. IEEE Transactions On Automatic Control, 2016. In press.
- [5] I. Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 41(1-3):309–325, 2005.
- [6] W. Pan, Y. Yuan, J. Goncalves, and G.-B. Stan. A sparse Bayesian approach to the identification of nonlinear state-space systems. IEEE Transactions On Automatic Control, 61(1):182–187, 2016.
- [7] J. L. Proctor, S. L. Brunton, and J. N. Kutz. Generalizing Koopman operator theory to allow for inputs and control. http://arxiv.org/abs/1602.07647.
- [8] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641:115–127, 2009.
- [9] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
- [10] A. Sootla and A. Mauroy. Operator-theoretic characterization of eventually monotone systems. http://arxiv.org/abs/1510.01149.
- [11] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
- [12] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A Data-Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition. Journal of Nonlinear Science, pages 1–40, June 2015.
- [13] Z. Yue, J. Thunberg, L. Ljung, and J. Goncalves. Identification of Sparse Continuous-Time Linear Systems with Low Sampling Rate: Exploring Matrix Logarithms. https://arxiv.org/abs/1605.08590.