A Class of Logistic Functions for Approximating State-Inclusive Koopman Operators

Charles A. Johnson and Enoch Yeung

Charles Johnson is with the Information and Decision Algorithms Laboratories, Brigham Young University, Provo, UT 84602, USA (charles.addisonj@gmail.com). Enoch Yeung is a research scientist in the Computational Analytics Division at Pacific Northwest National Laboratories (enoch.yeung@pnnl.gov).
Abstract

An outstanding challenge in nonlinear systems theory is identification or learning of a given nonlinear system's Koopman operator directly from data or models. Advances in extended dynamic mode decomposition approaches and machine learning methods have enabled data-driven discovery of Koopman operators, for both continuous and discrete-time systems. Since Koopman operators are often infinite-dimensional, they are approximated in practice using finite-dimensional systems. The fidelity and convergence of a given finite-dimensional Koopman approximation is a subject of ongoing research. In this paper we introduce a class of Koopman observable functions that confer an approximate closure property on their corresponding finite-dimensional approximations of the Koopman operator. We derive error bounds for the fidelity of this class of observable functions, as well as identify two key learning parameters which can be used to tune performance. We illustrate our approach on two classical nonlinear system models: the Van der Pol oscillator and the bistable toggle switch.

I Introduction

Koopman operators are a class of models used for understanding important dynamical properties of nonlinear systems [1, 2, 3, 4, 5]. The Koopman operator captures the behavior of nonlinear systems by representing them as higher-order linear systems in a lifted function space. Though posed originally by Bernard Koopman in the early 1930s [6], Koopman operator theory has gained traction and popularity quite recently for its applicability in nonlinear system analysis [5] and as a data-driven method for system identification [7], [8].

In specific instances, a nonlinear system may admit a finite dimensional Koopman operator. When a Koopman operator is finite dimensional, the evolution of all the system's states is completely characterized as a linear combination of a finite number of lifting or observable functions. In general, however, Koopman operators are typically infinite dimensional and have continuous or countably infinite spectra [2]. Infinite dimensional operators are computationally unwieldy, which motivates the development of learning methods for high fidelity finite dimensional approximations of the Koopman operator.

Methods for learning Koopman operators have existed since Torsten Carleman developed a technique for the linearization of nonlinear systems in finite dimensional representations [9]. However, Carleman linearization only results in a finite dimensional representation for systems possessing specific internal structure, e.g. feedforward dynamics. More recent work on finding approximations to Koopman operators includes state-of-the-art techniques such as dynamic mode decomposition (DMD) [10] and extended dynamic mode decomposition (EDMD) [11, 12, 5]. EDMD, in particular, is effective, since it learns the space of Koopman observables using a dictionary of generic basis functions [12].

In particular, recent work has shown the effectiveness of deep [13] and shallow neural networks [14] for learning approximate Koopman operators. However, these methods essentially learn black box dictionaries, where the relationship between the mathematical form of the neural network outputs and the actual system are unclear. [13] showed that neural networks can learn smooth dictionaries consisting primarily of observables with a sigmoidal response profile. One contribution of this paper is to clarify the value of sigmoidal basis functions in learning approximate Koopman operators.

Most generally, we consider the problem of learning a finite dimensional approximation to the Koopman generator for a known continuous nonlinear system. We first identify a class of Koopman basis functions that produces an exact Koopman realization of the same dimension as the original system. We show, however, that these basis functions provide virtually no insight into the stability of the underlying system, since they exclude the system's actual state.

Motivated by these findings, we consider Koopman observables that include the system state. However, most state-inclusive Koopman observable spaces suffer from exploding dimensionality. We introduce the property of finite approximate closure, namely the ability of a state-inclusive Koopman basis with finite cardinality to simultaneously approximate a nonlinear system's dynamics as well as its own flow. We show that Koopman observables obtained from a special class of multi-variate logistic functions satisfy finite approximate closure. We derive explicit error bounds and show their relationship with learning parameters describing the dictionary resolution and ultra-sensitivity. We show this class of dictionary functions learns the dynamics of the Van der Pol oscillator and the bistable toggle switch.

The rest of this paper is organized as follows. Section II introduces the problem of learning a finite dimensional approximation of the so-called Koopman operator. Section III motivates the use of Koopman bases that include states of the original system. Section IV introduces the concept of finite approximate closure and a class of state-inclusive Koopman basis functions that satisfy finite approximate closure. Section V derives error bounds for this special Koopman basis and Section VI illustrates the accuracy of these basis functions for learning the dynamics of two nonlinear systems: the Van der Pol oscillator and the toggle switch.

II The Koopman Generator Learning Problem

Consider a nonlinear system with dynamics

 \dot{x} = f(x) \qquad (1)

where x ∈ ℝ^n and f : ℝ^n → ℝ^n is continuously differentiable, time-invariant, and nonlinear in x. Let x_0 denote the initial condition for the system and M ⊆ ℝ^n denote the state-space of the dynamical system. We introduce the concepts of a Koopman generator and its associated multi-variate Koopman semigroup, following the exposition of [2].

II-A The Koopman Generator

For continuous nonlinear systems, the Koopman semigroup is a semigroup of linear but infinite dimensional operators K_t that acts on a space of functions F, often referred to as observables. Each observable function ψ ∈ F, where F is a finite or infinite dimensional function space. We thus say K_t : F → F is an operator for each t ≥ 0. The Koopman semigroup provides an alternative view for evolving the state of a dynamical system:

 \mathcal{K}_t \circ \psi(x_0) = \psi \circ \Phi_t(x_0), \qquad (2)

where Φ_t(x_0) is the flow map of the dynamical system (1), evolved forward up to time t, given the initial condition x_0.

Instead of examining the evolution of the state of a dynamical system, the Koopman semigroup allows us to study the forward evolution of observables defined on the state of a dynamical system [12].
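The composition property (2) can be made concrete with a hypothetical worked example (ours, not from the paper): for the scalar linear system ẋ = −x with observable ψ(x) = x², the flow map is Φ_t(x₀) = x₀e^{−t}, and the Koopman semigroup acts on the span of ψ by multiplication with e^{−2t}.

```python
import math

# Worked example of the semigroup property (2) for the scalar linear
# system xdot = -x (our illustration, not a system from the paper).
# The flow map is Phi_t(x0) = x0 * exp(-t), and on the observable
# psi(x) = x^2 the Koopman semigroup acts by multiplication with exp(-2t).

def flow(x0, t):
    # closed-form flow map Phi_t of xdot = -x
    return x0 * math.exp(-t)

def psi(x):
    # observable psi(x) = x^2
    return x ** 2

def K_t(psi_val, t):
    # Koopman semigroup restricted to span{x^2}: multiplication by exp(-2t)
    return math.exp(-2.0 * t) * psi_val

x0, t = 1.7, 0.8
lhs = K_t(psi(x0), t)    # (K_t psi)(x0)
rhs = psi(flow(x0, t))   # (psi o Phi_t)(x0)
print(lhs, rhs)          # the two sides of (2) agree
```

Here the observable space is one-dimensional, so the Koopman semigroup reduces to a scalar multiplier; for general nonlinear systems no such finite restriction is exact.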

The generator for the Koopman semigroup is defined as

 \mathcal{K}_G \psi \equiv \lim_{t \to 0} \frac{\mathcal{K}_t \psi - \psi}{t} \qquad (3)
Lemma 1

The Koopman generator is a linear operator.

Proof: Notice that the transformation (K_t ψ − ψ)/t is an operator, since each K_t is an operator on F. Moreover, the algebraic limit theorem and the linearity of each K_t guarantee linearity of the limit K_G, which implies it is a linear operator.

In general, K_G may not have a finite or countably infinite-dimensional matrix representation, since the limit of the spectrum of K_t as t → 0 may be continuous and therefore uncountably infinite; see [5] for a thorough study of several examples.

II-B Problem Statement

We restrict our attention in this paper to systems with finite or countably infinite dimensional Koopman generators K_G. Given such a continuous nonlinear dynamical system, specifically f and x from (1), we aim to learn an observable function ψ and Koopman generator K_G that solve the optimization problem

 \min_{\mathcal{K}_G,\ \psi \in \mathcal{F}} \left\| \frac{d\psi(x(t))}{dt} - \mathcal{K}_G\, \psi(x(t)) \right\| \qquad (4)

This optimization problem is often non-convex, since the form of ψ is unknown or parametrically undefined. Both the Koopman generator K_G and the basis functions ψ must be discovered simultaneously to minimize the above objective function. This is true in data-driven formulations of the problem where f is completely unknown. Additionally, it is true in learning problems where f is known but ψ has yet to be discovered.

A solution pair (K_G, ψ) that achieves exactly zero error is an exact realization of a Koopman generator and its associated observable function. In general, there may be multiple solutions that achieve exactly zero error. To see this, note that if

 \frac{d\psi(x(t))}{dt} = \mathcal{K}_G\, \psi(x(t)) \qquad (5)

then for any invertible linear transformation T, the transformed pair (T K_G T^{-1}, Tψ) also defines an exact solution pair.

Solving for an exact solution pair in practice may be difficult for at least two reasons. First, evaluating dψ(x(t))/dt requires numerical differentiation, which incurs a certain degree of numerical error. Second, ψ may be infinite dimensional and the collection of observable functions is unknown a priori.

We refer to any solution pair that results in a non-zero error as an approximate solution. Note that, given a vector norm ‖·‖, the error for any approximate solution may be specific to a particular x, of the form

 \epsilon(x) = \left\| \frac{d\psi(x(t))}{dt} - \mathcal{K}_G\, \psi(x(t)) \right\| \qquad (6)

and thus may vary as a function of x. We seek the best approximation that minimizes ε(x) over all x ∈ M.

The goal of Koopman generator learning is thus to obtain a "lifted" linear representation of a nonlinear system, defined on a set of observable functions, that enables direct application of the rich body of techniques for analyzing linear systems. Even if it is only possible to identify an approximate solution that minimizes ε(x) for all x ∈ M, spectral analysis of the system can provide insight into the stability of the underlying nonlinear system within the region M of the phase space.

III Selection of Koopman Basis Functions

The standard approach for learning K_G and ψ is to first postulate a set of dictionary functions that approximate and span F (or a subspace of F) and second, estimate K_G given fixed ψ. This approach is known as extended dynamic mode decomposition. The technique involves constructing a set of dictionary functions and evaluating the dictionary over a time-shifted stack of state trajectories

 X_p = [x(t_n)\ \dots\ x(t_0)], \qquad X_f = [x(t_{n+1})\ \dots\ x(t_1)]

to obtain

 \Psi(X_f) = \begin{bmatrix} \psi_1(x(t_{n+1})) & \dots & \psi_1(x(t_1)) \\ \vdots & \ddots & \vdots \\ \psi_{N_D}(x(t_{n+1})) & \dots & \psi_{N_D}(x(t_1)) \end{bmatrix}, \quad \Psi(X_p) = \begin{bmatrix} \psi_1(x(t_n)) & \dots & \psi_1(x(t_0)) \\ \vdots & \ddots & \vdots \\ \psi_{N_D}(x(t_n)) & \dots & \psi_{N_D}(x(t_0)) \end{bmatrix}. \qquad (7)

and approximating the Koopman operator by minimizing the (regularized) objective function

 \|\Psi(X_f) - K\,\Psi(X_p)\|_2 + \zeta\,\|K\|_{2,1} \qquad (8)

where K is the finite approximation to the true Koopman operator K_t for a discrete time system, and ‖K‖_{2,1} is the 1-norm of the vector of 2-norms of each column of K.
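As a minimal sketch of the regression in (8), with the regularizer dropped (ζ = 0) and a toy monomial dictionary of our own choosing rather than the paper's setup, K can be fit by least squares from snapshot pairs:

```python
import numpy as np

# Hedged sketch of the EDMD regression (8) with zeta = 0: fit K minimizing
# ||Psi(Xf) - K Psi(Xp)||_2 from snapshot pairs. The scalar system and the
# dictionary below are illustrative choices, not taken from the paper.

def dictionary(x):
    # state-inclusive monomial dictionary [1, x, x^2] for a scalar state
    return np.array([1.0, x, x ** 2])

# snapshots of the linear map x+ = 0.9 x; on this dictionary the exact
# Koopman operator is diag(1, 0.9, 0.81)
xs = np.linspace(-2.0, 2.0, 41)
Psi_p = np.column_stack([dictionary(x) for x in xs])
Psi_f = np.column_stack([dictionary(0.9 * x) for x in xs])

# least-squares solve K Psi_p ~ Psi_f via the pseudoinverse
K = Psi_f @ np.linalg.pinv(Psi_p)
print(np.round(K, 3))
```

For this linear toy system the fit recovers the diagonal operator exactly; for a genuinely nonlinear system, the residual of this regression is precisely the closure error discussed below.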

Note that (8) provides the classical formulation for extended dynamic mode decomposition. As suggested by the notation, this approach is most commonly applied in the study of open-loop nonlinear discrete-time dynamical systems; see [12, 14, 7] for several examples. More recently, [15, 8, 16] illustrated the ability to learn control Koopman operators for discrete time dynamical systems, which introduces a new class of control-Koopman learning problems.

The Koopman generator learning problem is the continuous time analogue of minimizing (8). However, when performing Koopman learning on a continuous dynamical system, the matrix Ψ(X_f) must be replaced with a finite-difference approximation of the derivative matrix dΨ/dt. So, in general, the accuracy of this estimate is sensitive to the finite-difference approximation used.

The purpose of this paper is to study the quality of a class of observable functions for estimating a Koopman generator in finite dimensions. To this end, we restrict our attention to Koopman generator learning problems where the underlying function f is known, but difficult to analyze using local linearization methods.

Assumption 1

Given a nonlinear system (1), we suppose that the underlying vector field f is known.

This allows us to evaluate the quality of a class of observable functions, independent of the error imposed by any finite-difference scheme for estimating the derivative

 \frac{d}{dt}\Psi(X_f).

This leaves us with two challenges. First, identifying a suitable dictionary of observables or lifting functions from which to construct the observable functions ψ. Second, identifying or estimating K_G, given ψ.

III-A Understanding Stability with Koopman Observables

Not all Koopman observables (or what we will refer to as Koopman liftings) yield insight into the stability of the underlying system. For example, suppose that we are given a nonlinear system of the form (1). Further suppose that f is invertible on M. Then the Koopman generator K_G must satisfy:

 \frac{d\psi(x)}{dt} = \mathcal{K}_G\, \psi(x) \qquad (9)

Let c be an arbitrary constant. First, we choose a set of functions L_i so that:

 L_i(x) = \int_0^{x_i} c\, f_i^{-1}(\tau)\, d\tau \qquad (10)

Notice that the Koopman observable function defined as

 \psi(x) \equiv \begin{bmatrix} \psi_1(x) \\ \psi_2(x) \\ \vdots \\ \psi_n(x) \end{bmatrix} \equiv \begin{bmatrix} e^{L_1(x)} \\ e^{L_2(x)} \\ \vdots \\ e^{L_n(x)} \end{bmatrix} \qquad (11)

has time-derivative that can be expressed in state-space form as

 \frac{d}{dt}\psi(x(t)) = \frac{d}{dt}\begin{bmatrix} e^{L_1(x)} \\ e^{L_2(x)} \\ \vdots \\ e^{L_n(x)} \end{bmatrix} = \begin{bmatrix} e^{L_1(x)} \\ e^{L_2(x)} \\ \vdots \\ e^{L_n(x)} \end{bmatrix} c\, f^{-1}(x) f(x) = cI \begin{bmatrix} e^{L_1(x)} \\ e^{L_2(x)} \\ \vdots \\ e^{L_n(x)} \end{bmatrix} = \mathcal{K}_G\, \psi(x) \qquad (12)

where the Koopman generator is K_G = cI. Since c is arbitrary, the system can be either stable or unstable, depending on the sign of the choice of c.

Our choice of observable functions provides an exact solution to the Koopman generator learning problem. The spectral properties of K_G are easy to explore, as each eigenvalue is simply equal to c. However, this result is uninformative, since the stability properties of the Koopman generator are totally dependent on an arbitrary constant and are therefore completely divorced from the vector field f.

The key property that is lacking in the above example is the inclusion of the underlying state x in the observable function ψ. Whenever x is contained within ψ, this guarantees that any Koopman generator K_G and its associated Koopman semigroup K_t not only describe the time evolution of ψ but also the underlying system. Specifically, if ψ contains a ψ_j(x) = x, that is, the so-called full state observable function, then by definition,

 d\psi_j(x)/dt = f(x).

Thus, the spectrum of K_G will characterize the stability of ψ(x), including x. When ε(x) = 0 and ψ contains the full-state observable, we say the system has finite exact closure. That is, derivatives of the full-state observable and the rest of ψ can be described entirely in terms of the lifted state vector ψ(x). This property does not hold for many nonlinear candidate observable functions. We give an example:

Example 1

Consider a scalar nonlinear system of the form

 \dot{x} = f(x) = -x^2 \qquad (13)

First, consider a candidate observable function ψ(x) = [ψ_1(x)  ψ_2(x)  ψ_3(x)]^T = [1  x  x²]^T. We want to see if

 \dot{\psi}(x) = \mathcal{K}_G\, \psi(x) \qquad (14)

for some K_G. Calculating ψ̇(x) explicitly, we get

 \dot{\psi}(x) = \begin{bmatrix} 0 \\ -\psi_3(x) \\ -2\,\psi_2(x)\,\psi_3(x) \end{bmatrix} \qquad (15)

The issue is that including x² in the lifted state requires including its derivative −2x³ as part of the model; each time f(x) = −x² multiplies a monomial observable through the chain rule, you obtain a cubic term which is not included in ψ. Similarly, including cubic terms in ψ results in quartic terms, and so on. This is an example of a system where ψ, defined above, does not satisfy finite exact closure. This is not to say that the system cannot be expressed with finite closure, but that our proposed observable function does not satisfy the finite exact closure property.
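The degree cascade in Example 1 can be checked mechanically. The sketch below is our own bookkeeping (not from the paper): an observable is stored as a map from monomial degree to coefficient, and differentiation along ẋ = −x² sends x^k to −k x^{k+1}, raising the degree every time.

```python
# Mechanical check of the degree cascade in Example 1 (our own bookkeeping):
# an observable is stored as {degree: coefficient}, and d/dt along xdot = -x^2
# maps x^k to -k x^(k+1), so every differentiation raises the degree by one.

def d_dt(poly):
    # time-derivative of sum_k c_k x^k along trajectories of xdot = -x^2
    out = {}
    for k, c in poly.items():
        if k > 0:
            out[k + 1] = out.get(k + 1, 0) - k * c
    return out

print(d_dt({1: 1}))          # psi_2 = x    ->  {2: -1}, i.e. -x^2 = -psi_3
print(d_dt({2: 1}))          # psi_3 = x^2  ->  {3: -2}, the cubic term -2x^3
print(d_dt(d_dt({2: 1})))    # quartic next: {4: 6}
```

No finite monomial dictionary is invariant under this map, which is exactly the failure of finite exact closure described above.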

III-B Finite Approximate Closure

In general, systems that are not globally topologically conjugate to a finite dimensional linear system, e.g. systems with multiple fixed points, cannot be represented exactly by a finite-dimensional linear Koopman operator that includes the state as part of the set of observables [17].

However, it may be possible to learn a Koopman observable function that approximately satisfies finite closure, defined as follows:

Definition 1

Let ψ : M → ℝ^{N_D}, where N_D < ∞. We say ψ achieves finite ε-closure, or finite approximate closure with error ε(x), if and only if there exists an N_D < ∞ and a K_G ∈ ℝ^{N_D×N_D} such that

 \frac{d}{dt}\psi(x) = \mathcal{K}_G\, \psi(x) + \epsilon(x) \qquad (16)

We say that ψ achieves uniform finite approximate closure for some set M if and only if it achieves finite approximate closure with ‖ε(x)‖ ≤ ε̄, for some constant ε̄, for all x ∈ M.

Finite approximate closure is a desirable property since, as ε(x) → 0, we may use K_G to perform high fidelity stability, observability, and spectral analysis. For example, if ε(x) is small enough over all x(t) in M, one could study the target trajectory of x(t) by studying the evolution of a state-inclusive lifting of observable functions, ψ(x(t)). Projecting from ψ(x) to x is trivial and its trajectory, an approximation to x(t), may yield stability insights.

By a similar token, we also may consider observability analysis and state prediction problems [18, 19]. Given a series of measurements with corruption in the model and noise in the measurements, can one predict the state of the system? Under the condition of finite approximate closure and a sufficiently small ε(x), the error of state estimation on the state inclusive lifting of the system (evolving according to the linear relation given by K_G) should also be small. For a more extensive treatment of the use of Koopman operators in the state prediction problem (in discrete time) see [18].

Finally, we note that given a matrix A with spectrum σ(A), if one adds a perturbation matrix E, where ‖E‖ ≤ ε, there are established limits on how the spectrum of A + E will vary from σ(A). For example, there are the bounds established in the Hoffman-Wielandt theorem. So the spectrum of a weakly perturbed matrix is weakly altered. Therefore, if K_G is a close approximation to the true Koopman generator of a system, we can estimate the spectral distribution of the true Koopman generator, including its principal modes and eigenvalues [5]. Finite approximate closure of ψ of order ε guarantees bounded error between K_G and an ideal Koopman generator. Moreover, certain learning parameters can be tuned to arbitrarily reduce the size of ε.

IV State Inclusive Logistic Lifting (SILL) Functions and Finite Approximate Closure

To develop an approximation to f, we introduce a new class of conjunctive logistic functions. We do so for several reasons. Firstly, logistic functions have well established function approximation properties [20]. Secondly, we now show that sets of logistic functions in this class, satisfying a total order, satisfy finite approximate closure.

We define a multivariate conjunctive logistic function as follows:

 \Lambda_{v_l}(x) \equiv \prod_{i=1}^{n} \lambda_{\mu_{li}}(x_i) \qquad (17)

where v_l ≡ (μ_{l1}, …, μ_{ln}), and the logistic function λ_μ(x) is defined as

 \lambda_{\mu}(x) \equiv \frac{1}{1 + e^{-\alpha(x - \mu)}}. \qquad (18)

The parameters μ_{li} define the centers, or the points of activation, along dimension i. The parameter α is the steepness parameter, or sensitivity parameter, and determines the steepness of the logistic curve. Given N_L multivariate logistic functions, we then define a state inclusive logistic lifting function ψ(x) so that:

 \psi(x) \equiv \begin{bmatrix} 1 \\ x \\ \Lambda \end{bmatrix} \qquad (19)

where Λ ≡ [Λ_{v_1}(x) ⋯ Λ_{v_{N_L}}(x)]^T. We then have that ψ : ℝ^n → ℝ^{1+n+N_L}. We first suppose there exist vectors w_l ∈ ℝ^n such that f can be well approximated by logistic functions [20], as follows:

 f(x) \approx \sum_{l=1}^{N_L} w_l\, \Lambda_{v_l}(x) \qquad (20)

This is a fair assumption, since the number of logistic functions can be increased until the accuracy of (20) is satisfactory. This accuracy depends on a mesh resolution parameter, discussed below. This is also generally true of any candidate dictionary for generating Koopman observable functions, e.g. Hermite polynomials, Legendre polynomials, radial basis functions, etc.
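A direct implementation of (17)-(19) is straightforward. The sketch below is our own minimal rendering (function names and the 2-D mesh of centers are illustrative choices, not from the paper):

```python
import math

# Minimal sketch of the SILL dictionary (17)-(19); names and the small
# 2-D mesh of centers below are our own illustrative choices.

def logistic(x, mu, alpha):
    # scalar logistic function (18), centered at mu with steepness alpha
    return 1.0 / (1.0 + math.exp(-alpha * (x - mu)))

def conjunctive_logistic(x, v, alpha):
    # multivariate conjunctive logistic function (17): product over dimensions
    prod = 1.0
    for xi, mui in zip(x, v):
        prod *= logistic(xi, mui, alpha)
    return prod

def sill_lift(x, centers, alpha):
    # state-inclusive lifting (19): [1, x, Lambda_1(x), ..., Lambda_NL(x)]
    return [1.0] + list(x) + [conjunctive_logistic(x, v, alpha) for v in centers]

centers = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # 2x2 mesh of centers
psi = sill_lift((0.5, 0.5), centers, alpha=5.0)
print(len(psi))  # 1 + n + N_L = 1 + 2 + 4 = 7
```

The lifted dimension grows as 1 + n + N_L, so the cost of the lifting is governed by the number of mesh centers N_L rather than the state dimension alone.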

The critical property that enables a high fidelity finite approximate Koopman operator is finite approximate closure. We must show that the time-derivative of these functions can be expressed (approximately) recursively. The derivative of this multivariate logistic function is given as

 \dot{\Lambda}_{v_l}(x) = \left(\nabla_x \Lambda_{v_l}(x)\right)^T \frac{\partial x}{\partial t} = \left(\nabla_x \Lambda_{v_l}(x)\right)^T f(x) \qquad (21)

where the i-th term of the gradient of Λ_{v_l}(x) is expressed as

 [\nabla_x \Lambda_{v_l}(x)]_i = \alpha\left(\lambda_{\mu_{li}}(x_i) - \lambda_{\mu_{li}}(x_i)^2\right)\frac{\Lambda_{v_l}(x)}{\lambda_{\mu_{li}}(x_i)} = \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x) \qquad (22)

Notice that the time-derivative of Λ_{v_l}(x) can be expressed as

 \dot{\Lambda}_{v_l}(x) = \sum_{i=1}^{n} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x)\, f_i(x) = \sum_{i=1}^{n} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x) \sum_{k=1}^{N_L} w_{ik}\,\Lambda_{v_k}(x) = \sum_{i=1}^{n}\sum_{k=1}^{N_L} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right) w_{ik}\, \Lambda_{v_l}(x)\,\Lambda_{v_k}(x) \qquad (23)

Thus, we have that the derivative of our multivariate logistic function is a sum of products of logistic functions with a number of predetermined centers. There is one critical property that must be satisfied in order to achieve finite approximate closure:

Assumption 2

There exists a total order on the set of conjunctive logistic functions {Λ_{v_l}}, induced by the positive orthant ℝ^n_+, where Λ_{v_l} ⪯ Λ_{v_k} whenever v_k − v_l ∈ ℝ^n_+.

This assumption is satisfied whenever the conjunctive logistic functions are constructed from an evenly spaced mesh grid of centers v_l. For the purposes of this paper, we will consider evenly spaced mesh grids. We leave the study of algorithms for learning sparse conjunctive logistic bases for future work.

Since we have imposed a total order on our logistic basis functions, the product Λ_{v_l}(x)Λ_{v_k}(x) is approximately Λ_{v_max(l,k)}(x), so the derivative of the product is approximately the derivative of Λ_{v_max(l,k)}(x). Thus we can write

 \frac{d\Lambda_{v_l}}{dt} = \sum_{i=1}^{n}\sum_{k=1}^{N_L} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right) w_{ik}\, \Lambda_{v_l}(x)\,\Lambda_{v_k}(x) \approx \sum_{i=1}^{n}\sum_{k=1}^{N_L} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right) w_{ik}\, \Lambda_{v_{\max(l,k)}}(x) \qquad (24)

where

 v_{\max(l,k)} = \left(\max\{\mu_{l1}, \mu_{k1}\}, \dots, \max\{\mu_{ln}, \mu_{kn}\}\right), \qquad (25)

which shows that conjunctive logistic functions satisfy finite approximate closure.
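The approximation Λ_{v_l}Λ_{v_k} ≈ Λ_{v_max(l,k)} behind (24) can be probed numerically. The check below is our own (the centers, steepness, and grid are illustrative choices): for a steep α, the worst-case gap between the product and the "greatest" element is small over a grid of test points.

```python
import math

# Numerical probe (ours, not from the paper) of the approximation in (24):
# for componentwise-ordered centers and steep alpha, Lambda_l * Lambda_k is
# close to Lambda evaluated at the componentwise-max center vmax(l, k).

def Lam(x, v, alpha):
    # conjunctive logistic function (17)
    p = 1.0
    for xi, mui in zip(x, v):
        p *= 1.0 / (1.0 + math.exp(-alpha * (xi - mui)))
    return p

vl, vk = (1.0, 1.0), (0.0, 0.5)
vmax = tuple(max(a, b) for a, b in zip(vl, vk))  # equation (25); here vmax = vl

alpha = 25.0
worst = 0.0
for i in range(21):
    for j in range(21):
        x = (-1.0 + 0.2 * i, -1.0 + 0.2 * j)
        err = abs(Lam(x, vl, alpha) * Lam(x, vk, alpha) - Lam(x, vmax, alpha))
        worst = max(worst, err)
print(worst)  # small for steep alpha; the gap grows as alpha shrinks
```

Rerunning with a smaller α (say 2) visibly inflates the worst-case gap, which is the tradeoff formalized in the next section.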

V Convergence and Error Bounds

Even if SILL functions satisfy finite approximate closure, it is necessary to evaluate the fidelity of their approximation. We first show that the fidelity of the approximation increases with the steepness parameter α and then derive a global error bound.

V-A Convergence in α

Without loss of generality, we let v_{max(l,k)} = v_l. Then the difference between each of the terms in the summation is a scaling of the difference Λ_{v_l}(x)Λ_{v_k}(x) − Λ_{v_l}(x):

 \alpha\Lambda_{v_l}(x)\Lambda_{v_k}(x) - \alpha\Lambda_{v_l}(x) = \alpha\Lambda_{v_l}(x)\left(\Lambda_{v_k}(x) - 1\right) = \alpha\,\frac{1 - \Lambda_{v_k}(x)^{-1}}{\left(\Lambda_{v_l}(x)\Lambda_{v_k}(x)\right)^{-1}} = \alpha\,\frac{1 - \prod_{i=1}^{n}\left(1 + e^{-\alpha(x_i - \mu_{ki})}\right)}{\prod_{i=1}^{n}\left(1 + e^{-\alpha(x_i - \mu_{li})}\right)\left(1 + e^{-\alpha(x_i - \mu_{ki})}\right)} = \frac{\alpha}{\prod_{i=1}^{n} s_{il}(x)\, s_{ik}(x)} - \frac{\alpha}{\prod_{i=1}^{n} s_{il}(x)} \qquad (26)

where s_{il}(x) ≡ 1 + e^{−α(x_i − μ_{li})}. Based on this error term we have the following theorem:

Theorem 2

Given that v_{max(l,k)} = v_l. As α → ∞, the error between Λ̇_{v_l}(x) and its approximation

 \sum_{i=1}^{n}\sum_{k=1}^{N_L} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right) w_{ik}\, \Lambda_{v_{\max(l,k)}}(x)

converges to 0 for almost all x.

Proof: The error term between Λ̇_{v_l}(x) and

 \sum_{i=1}^{n}\sum_{k=1}^{N_L} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right) w_{ik}\, \Lambda_{v_{\max(l,k)}}(x)

is comprised of terms described by equation (26), namely

 \frac{\alpha}{\prod_{i=1}^{n} s_{il}(x)\, s_{ik}(x)} - \frac{\alpha}{\prod_{i=1}^{n} s_{il}(x)}, \qquad (27)

where s_{ik}(x) = 1 + e^{−α(x_i − μ_{ki})}. We observe three cases. In each of these cases we hold x constant and allow α to vary.

Case 1, x_i = μ_{ki}: s_{ik}(x) = 2, for all α.

Case 2, x_i > μ_{ki}: As α → ∞ we have that e^{−α(x_i − μ_{ki})} → 0 and so s_{ik}(x) → 1.

Case 3, x_i < μ_{ki}: As α → ∞ we have that e^{−α(x_i − μ_{ki})} → ∞ and so s_{ik}(x) → ∞. We further note that as α → ∞, α/s_{ik}(x) → 0.

Defining y_i ≡ x_i − μ_{ki} for i ∈ {1, …, n}, the cases above imply that if y_i ≠ 0 for every i, then the error term (27) goes to 0 as α → ∞.

Furthermore, if y_i < 0 for some i, then we have that α/∏_{i=1}^{n} s_{il}(x)s_{ik}(x) → 0; this also implies that (26) goes to 0 as α → ∞.

Since, by assumption, v_{max(l,k)} = v_l, we have that the error of each term goes to zero. Thus the sum of these terms goes to zero as well, as they are each only multiplied by a constant with respect to α. Almost everywhere (for x_i ≠ μ_{ki}) the error converges to 0. At x_i = μ_{ki}, there is a small error incurred due to the approximation of a product of two totally-ordered conjunctive logistic functions with the "greatest" element of the pair. This error never goes to zero without introducing additional SILL functions to aid in the approximation.
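The convergence in α asserted by Theorem 2 can be observed empirically. The scalar check below is ours (the centers, grid, and the small exclusion radius around the centers are illustrative choices): the worst-case gap from (26), evaluated away from the centers, decays as α grows.

```python
import math

# Empirical check (ours) of Theorem 2: the gap alpha*(Lam_l*Lam_k - Lam_l)
# from (26), with vmax(l,k) = vl, shrinks as alpha grows, away from the centers.

def lam(z):
    # numerically safe logistic in the exponent z = alpha * (x - mu)
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def Lam(x, v, alpha):
    p = 1.0
    for xi, mui in zip(x, v):
        p *= lam(alpha * (xi - mui))
    return p

vl, vk = (1.0,), (0.0,)   # scalar case with vl >= vk componentwise

def max_gap(alpha):
    worst = 0.0
    for i in range(401):
        x = (-2.0 + 0.01 * i,)
        if min(abs(x[0] - 1.0), abs(x[0] - 0.0)) < 0.05:
            continue  # exclude small neighborhoods of the centers
        gap = abs(alpha * Lam(x, vl, alpha) * Lam(x, vk, alpha)
                  - alpha * Lam(x, vl, alpha))
        worst = max(worst, gap)
    return worst

print(max_gap(10.0), max_gap(40.0), max_gap(160.0))  # decreasing in alpha
```

Points at the centers themselves are excluded, mirroring the "almost everywhere" caveat in the proof: at x_i = μ_{ki} a residual error persists for every α.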

V-B Global Error Bounds

Denote the error term in (26) as E_{kl}(x). Given a fixed α, v_l, and v_k, the error is bounded above by M_{kl} ≡ max_x E_{kl}(x). We calculate the maximizer by taking the derivative of (26), a gradient, and setting each of its terms to zero:

 \nabla E_{kl}(x)_j = \frac{\eta(x)}{s_{jl}(x)\, s_{jk}(x) \prod_{i=1}^{n} s_{il}(x)\, s_{ik}(x)} - \frac{\alpha^2 e^{-\alpha(x_j - \mu_{lj})}}{s_{jl}(x) \prod_{i=1}^{n} s_{il}(x)} = 0 \qquad (28)

where

 \eta(x) \equiv \alpha^2\left(e^{-\alpha(x_j - \mu_{lj})} + e^{-\alpha(x_j - \mu_{kj})} + 2e^{-\alpha(2x_j - \mu_{lj} - \mu_{kj})}\right). \qquad (29)

We find a common denominator, multiply both sides by it, and then divide out the common factors to obtain:

 0 = 1 + e^{-\alpha(\mu_{lj} - \mu_{kj})} + 2e^{-\alpha(x_j - \mu_{kj})} - \left(1 + e^{-\alpha(x_j - \mu_{kj})}\right)\prod_{i=1}^{n}\left(1 + e^{-\alpha(x_i - \mu_{ki})}\right) \qquad (30)

and we set y_i ≡ e^{−αx_i}, resulting in a multivariate polynomial:

 0 = p_j(y), \quad \forall j \in \{1, 2, \dots, n\} \qquad (31)

We then have n equations with n unknown variables. We define the solutions to be the points y*, and then consider the corresponding set of points x* recovered from y*. We set M_{kl} = max_{x*} E_{kl}(x*).

We call the sum of the error terms, ∑_k E_{kl}(x(t)), E_{Λ_l}(x(t)). We call the sum of the maximal error terms, ∑_k M_{kl}, M_{Λ_l}. Since, by assumption, our error in approximating f is zero, and the derivative of 1 is zero everywhere, we have that the total error in our Koopman approximation of the derivative of the state vector at time t will be:

 \sum_{l=1}^{N_L} E_{\Lambda_l}(x(t)) \qquad (32)

Thus our error in estimating the state at time t will be:

 \int_0^t \sum_{l=1}^{N_L} E_{\Lambda_l}(x(\tau))\, d\tau \le \int_0^t \sum_{l=1}^{N_L} M_{\Lambda_l}\, d\tau = t \sum_{l=1}^{N_L} M_{\Lambda_l} \qquad (33)

This holds true under the assumption that the approximation of the function f by logistic functions is a perfect approximation. In the case where there is error in the approximation of f, we have the following:

 \dot{x}_k = f_k(x) = \delta_k(x) + \sum_{l=1}^{N_L} w_{kl}\, \Lambda_{v_l}(x) \qquad (34)

where δ_k(x) is the error when approximating f_k at x. The value of δ_k is tied to our mesh resolution parameter. Thus, we have

 \dot{\Lambda}_{v_l}(x) = \sum_{i=1}^{n} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x)\, f_i(x) = \sum_{i=1}^{n} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x) \sum_{k=1}^{N_L} w_{ik}\,\Lambda_{v_k}(x) + \sum_{i=1}^{n} \delta_i(x)\, \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x) = \sum_{i=1}^{n}\sum_{k=1}^{N_L} \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right) w_{ik}\, \Lambda_{v_l}(x)\,\Lambda_{v_k}(x) + \sum_{i=1}^{n} \delta_i(x)\, \alpha\left(1 - \lambda_{\mu_{li}}(x_i)\right)\Lambda_{v_l}(x) \qquad (35)

So, we add another n error terms to our derivative approximation. Each of these terms will be approximated by the multivariate logistic function Λ_{v_l}(x). Thus the error of approximation of these terms for any given x will be bounded by:

 \left|\delta_k(x)\left(1 - \lambda(x_k - \mu_{lk})\right) - 1\right| \le \left|\delta_k(x) - 1\right|. \qquad (36)

Ultimately, the error when approximating the behavior of any element in the SILL basis is bounded. We thus have that our choice of lifting has finite approximate closure, which means that it may be used to extract stability properties. We demonstrate this in two examples below.

VI Numerical Examples

VI-A The Van der Pol Oscillator

We consider the Van der Pol system. This system features a stable limit cycle in the saddle region of its phase space. The system thus presents a challenge, since it contains oscillatory and unstable dynamics all within the same phase space. Arbabi and Mezić showed it was possible to use Koopman representations to learn the asymptotic phase of the oscillator [1]. The equations for the system are below:

 \dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 + \alpha_1\left(1 - x_1^2\right) x_2 \qquad (37)

where the damping parameter α_1 is held fixed in all simulations.

Our results show that we were able to learn the oscillatory dynamics in two regions of phase space (see Figure 3A and 3C). However, the SILL functions were not able to predict the unstable dynamics of the Van der Pol oscillator. This was because the SILL functions had to be defined on a finite lattice. The boundaries of the lattice incur the most error, since the vector field is not evaluated beyond the boundary region. Specifically, for the Van der Pol oscillator, these boundaries coincided with unstable dynamics of the system. Moreover, there is a numerical conditioning challenge with identifying a model with unstable modes.
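A self-contained integration of (37) confirms the stable limit cycle the SILL model is asked to capture. This sketch is ours; the value α₁ = 1, the step size, and the initial condition are illustrative choices (the paper's simulation settings are not reproduced here). For α₁ = 1 the limit cycle amplitude is close to 2.

```python
# Integration sketch of the Van der Pol system (37) with classical RK4.
# alpha_1 = 1, the step size, and the initial condition are our own
# illustrative choices; trajectories settle onto the stable limit cycle.

ALPHA1 = 1.0

def vdp(s):
    x1, x2 = s
    return (x2, -x1 + ALPHA1 * (1.0 - x1 ** 2) * x2)

def rk4_step(s, h):
    k1 = vdp(s)
    k2 = vdp((s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]))
    k3 = vdp((s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]))
    k4 = vdp((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

s, h = (0.1, 0.0), 0.01
traj = []
for step in range(5000):          # 50 time units; the transient dies out
    s = rk4_step(s, h)
    if step >= 4000:              # record the last 10 time units
        traj.append(s[0])
amplitude = max(abs(v) for v in traj)
print(round(amplitude, 2))        # close to 2.0 for alpha_1 = 1
```

Any SILL lattice for this example must cover at least the box containing this cycle, which is why error concentrates at the lattice boundaries as discussed above.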

VI-B The Bistable Toggle Switch

We now consider a bistable toggle switch system as proposed in [21]. This system models the interaction between two proteins that repress each other, resulting in one of two equilibrium points ultimately being reached, depending on the initial concentrations of each of the two proteins. For simplicity, we refer to these proteins as protein 1 and protein 2, respectively. Given constants α_1, α_2, n_1, n_2, and δ, the simplest model is a two state repression model [21] of the form

 \dot{x}_1 = \frac{\alpha_1}{1 + x_2^{n_1}} - \delta x_1, \qquad \dot{x}_2 = \frac{\alpha_2}{1 + x_1^{n_2}} - \delta x_2 \qquad (38)

where x_1, x_2 are the concentrations of the respective proteins 1 and 2. We note that given the proper parameters, and under a wide range of initial conditions, our SILL functions and their associated approximate Koopman generator correctly indicate the tendency of nearly every set of initial protein concentrations. Specifically, the error in approximation remained small for both x_1 and x_2.
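The bistability that the approximate Koopman generator must capture can be seen directly from (38). The simulation sketch below uses illustrative parameters of our own choosing (α₁ = α₂ = 4, n₁ = n₂ = 2, δ = 1; the paper's simulation values are not reproduced here): two initial conditions on opposite sides of the separatrix converge to different equilibria.

```python
# Bistability check for the toggle switch (38) under illustrative parameters
# of our own choosing (alpha1 = alpha2 = 4, n1 = n2 = 2, delta = 1).
# Two initial conditions converge to the two different stable equilibria.

A1 = A2 = 4.0
N1 = N2 = 2.0
DELTA = 1.0

def toggle(x1, x2):
    # right-hand side of (38)
    return (A1 / (1.0 + x2 ** N1) - DELTA * x1,
            A2 / (1.0 + x1 ** N2) - DELTA * x2)

def settle(x1, x2, h=0.01, steps=5000):
    # forward Euler is sufficient for this well-damped system
    for _ in range(steps):
        d1, d2 = toggle(x1, x2)
        x1, x2 = x1 + h * d1, x2 + h * d2
    return x1, x2

hi_lo = settle(3.0, 0.1)   # protein 1 starts high -> x1-dominant equilibrium
lo_hi = settle(0.1, 3.0)   # protein 2 starts high -> x2-dominant equilibrium
print(hi_lo, lo_hi)
```

Because the vector field is smooth and bounded on the positive orthant, a SILL mesh over a box containing both equilibria is enough for the lifted linear model to separate the two basins of attraction.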

VII Conclusions

We set out to find finite dimensional approximations to Koopman generators for nonlinear systems. We introduced a class of state-inclusive observable functions, comprised of products of logistic functions, that confer an approximate finite closure property. We derived error bounds for their approximation, in terms of a steepness parameter α and a mesh resolution parameter. In particular, we showed that introduction of SILL observable functions does not introduce unbounded error in the Koopman generator approximation, since a mesh dictionary of SILL functions satisfies a total order property. Further, the error bound can be reduced by modifying the learning parameters.

In future work, we will study whether structured regularization or structured sparse compressive sensing may result in a more efficient and concise set of Koopman dictionary functions.

There are many scenarios where snapshots of the underlying system from different observers may each yield a scalable Koopman generator. The process of synthesizing or integrating these Koopman operators to obtain a global Koopman operator (coinciding with global measurements), is a subject of ongoing research.

References

• [1] Hassan Arbabi and Igor Mezić. Ergodic theory, dynamic mode decomposition and computation of spectral properties of the koopman operator. arXiv preprint arXiv:1611.06664, 2016.
• [2] Marko Budišić, Ryan Mohr, and Igor Mezić. Applied koopmanism. Chaos: An Interdisciplinary Journal of Nonlinear Science, 22(4):047510, 2012.
• [3] Jonathan H Tu, Clarence W Rowley, Dirk M Luchtenburg, Steven L Brunton, and J Nathan Kutz. On dynamic mode decomposition: theory and applications. arXiv preprint arXiv:1312.0041, 2013.
• [4] Igor Mezić. Analysis of fluid flows via spectral properties of the koopman operator. Annual Review of Fluid Mechanics, 45:357–378, 2013.
• [5] Igor Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 41(1):309–325, 2005.
• [6] Bernard O Koopman. Hamiltonian systems and transformation in hilbert space. Proceedings of the National Academy of Sciences, 17(5):315–318, 1931.
• [7] J Nathan Kutz, Steven L Brunton, Bingni W Brunton, and Joshua L Proctor. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM, 2016.
• [8] Matthew O Williams, Maziar S Hemati, Scott TM Dawson, Ioannis G Kevrekidis, and Clarence W Rowley. Extending data-driven koopman analysis to actuated systems. IFAC-PapersOnLine, 49(18):704–709, 2016.
• [9] Torsten Carleman. Application de la théorie des équations intégrales linéaires aux systèmes d’équations différentielles non linéaires. Acta Mathematica, 59(1):63–87, 1932.
• [10] Peter J Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
• [11] Igor Mezić and Andrzej Banaszuk. Comparison of systems with complex behavior. Physica D: Nonlinear Phenomena, 197(1):101–133, 2004.
• [12] Matthew O Williams, Ioannis G Kevrekidis, and Clarence W Rowley. A data–driven approximation of the koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25(6):1307–1346, 2015.
• [13] E. Yeung, S. Kundu, and N. Hodas. Learning Deep Neural Network Representations for Koopman Operators of Nonlinear Dynamical Systems. ArXiv e-prints, August 2017.
• [14] Qianxiao Li, Felix Dietrich, Erik M Bollt, and Ioannis G Kevrekidis. Extended dynamic mode decomposition with dictionary learning: a data-driven adaptive spectral decomposition of the koopman operator. arXiv preprint arXiv:1707.00225, 2017.
• [15] Joshua L Proctor, Steven L Brunton, and J Nathan Kutz. Dynamic mode decomposition with control. SIAM Journal on Applied Dynamical Systems, 15(1):142–161, 2016.
• [16] Joshua L Proctor, Steven L Brunton, and J Nathan Kutz. Generalizing koopman theory to allow for inputs and control. arXiv preprint arXiv:1602.07647, 2016.
• [17] Steven L Brunton, Bingni W Brunton, Joshua L Proctor, and J Nathan Kutz. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PloS one, 11(2):e0150171, 2016.
• [18] Amit Surana and Andrzej Banaszuk. Linear observer synthesis for nonlinear systems using koopman operator framework. IFAC-PapersOnLine, 49(18):716–723, 2016.
• [19] Umesh Vaidya. Observability gramian for nonlinear systems. In Decision and Control, 2007 46th IEEE Conference on, pages 3357–3362. IEEE, 2007.
• [20] Yoshifusa Ito. Approximation of continuous functions on R^d by linear combinations of shifted rotations of a sigmoid function with and without scaling. Neural Networks, 5(1):105–115, 1992.
• [21] Timothy S Gardner, Charles R Cantor, and James J Collins. Construction of a genetic toggle switch in escherichia coli. Nature, 403(6767):339, 2000.