Explicit Reduced-Order Integral Formulations of State and Parameter Estimation Problems for a Class of Nonlinear Systems


I.Yu. Tyukin, A.N. Gorban

I.Yu. Tyukin and A.N. Gorban are with the University of Leicester, Department of Mathematics, University Road, LE1 7RH, Leicester, United Kingdom; e-mail: I.Tyukin@le.ac.uk, ag153@le.ac.uk
Abstract

We propose a technique for reformulating state and parameter estimation problems as problems of matching explicitly computable definite integrals with known kernels to data. The technique applies to a class of systems of nonlinear ordinary differential equations and is aimed at exploiting parallel computational streams in order to increase the speed of calculations. The idea is based on the classical adaptive observer design. We show that when the data are periodic it may be possible to reduce the dimensionality of the inference problem to the dimension of the vector of parameters entering the right-hand side of the model nonlinearly. Performance and practical implications of the method are illustrated on a benchmark model governing the dynamics of voltage generated in the barnacle giant muscle fiber.

Notation

The symbol $\|\cdot\|$ stands for the Euclidean norm. By $\mathcal{K}$ we denote the set of all strictly increasing continuous functions $\kappa:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ such that $\kappa(0)=0$. Consider a non-autonomous system $\dot{x}=f(t,x,\theta)$, where $f:\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}$ is continuous, $\theta\in\mathbb{R}^{d}$ is the vector of parameters, and $f$ is locally Lipschitz in $x$; $x(t;t_0,x_0,\theta)$ stands for the unique maximal solution of the initial value problem $\dot{x}=f(t,x,\theta)$, $x(t_0)=x_0$. In cases when no confusion arises, we will refer to these solutions as $x(t;x_0,\theta)$, $x(t;\theta)$, or simply $x(t)$; values of these solutions at a given $t$ are denoted accordingly. Let $x:\mathbb{R}\rightarrow\mathbb{R}^{n}$; then $\|x\|_{\infty,[t_0,t_1]}$ denotes the uniform norm of $x$ on $[t_0,t_1]$: $\|x\|_{\infty,[t_0,t_1]}=\sup_{t\in[t_0,t_1]}\|x(t)\|$.

I Introduction

Consider a system governed by nonlinear ordinary differential equations

$\dot{x} = f(x, \theta), \quad x(t_0) = x_0 \in \mathbb{R}^{n},$   (1)

where $f: \mathbb{R}^{n} \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{n}$ is a continuous function, locally Lipschitz with respect to the variable $x$, and $\theta \in \mathbb{R}^{d}$ is the vector of unknown parameters. Let $[t_0, t_1]$ be an interval on which the solution of (1) is defined. Let us further suppose that the system's state, $x(t)$, is not accessible for direct observation at any $t \in [t_0, t_1]$. One can, however, observe the values

$y(t) = h(x(t)), \quad h: \mathbb{R}^{n} \rightarrow \mathbb{R},$

for every $t \in [t_0, t_1]$. Let the problem be to find $\hat{x}_0$, $\hat{\theta}$ such that

$y(t) = h(x(t; t_0, \hat{x}_0, \hat{\theta})) \ \text{ for all } t \in [t_0, t_1].$   (2)

This is a standard inverse problem, and many methods for finding its solutions have been developed to date: sensitivity functions [11], splines [3], interval analysis [7], adaptive observers [10], [2], [5], [6], [14], [15], and particle filters and Bayesian inference methods [1]. Although these methods are based on different mathematical frameworks, they share a common feature: one is generally required to repeatedly find numerical solutions of nonlinear ordinary differential equations (ODEs) over given intervals of time, i.e., to solve the direct problem.

Notwithstanding the plausibility of numerical integration of systems of ODEs in algorithms for state and parameter estimation, this operation is an inherently sequential process. This constrains the computational scalability of the problem and, as a result, imposes limitations on the time required to derive a solution. In order to overcome this limitation we propose to cast the above inverse problem in an alternative, integral form. In particular, instead of finding numerical solutions of the initial value problem (1) and matching the results to observed data as in (2), we search for a representation of the problem as

$y(t) = \varphi(t; \hat{x}_0, \hat{\theta}) + \int_{t_0}^{t} \Phi(t, \tau; \hat{x}_0, \hat{\theta})\, d\tau \ \text{ for all } t \in [t_0, t_1],$   (3)

where $\Phi$ and $\varphi$ are functions that are explicitly computable from measurement data. Furthermore, we additionally require that if a pair $(\hat{x}_0, \hat{\theta})$ is a solution of (3) then it is also a solution of (2), and vice versa.
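To make the computational appeal of (3) concrete, here is a minimal numerical sketch, in Python, of how a representation of this kind can be matched to data. The kernel `Phi`, the offset `phi` and the data below are hypothetical placeholders, not the constructions of Section III; the point is only that each definite integral is independent of the others, so all sample instants can be processed at once (or across parallel streams), with no sequential ODE stepping.

```python
import numpy as np
from scipy.integrate import trapezoid

def residuals(p, t_grid, y_data, tau_grid, Phi, phi):
    # kernel evaluated on the full (t_k, tau_j) grid in one vectorized call
    K = Phi(t_grid[:, None], tau_grid[None, :], p)
    integrals = trapezoid(K, tau_grid, axis=1)       # one definite integral per t_k
    return y_data - (phi(t_grid, p) + integrals)     # mismatch in (3), all t_k at once

# toy illustration only: a made-up exponential kernel and zero data
Phi = lambda t, tau, p: p[0] * np.exp(-p[1] * np.abs(t - tau))
phi = lambda t, p: p[2] * np.ones_like(t)
t_grid = np.linspace(0.0, 10.0, 200)
tau_grid = np.linspace(0.0, 10.0, 400)
y_data = np.zeros_like(t_grid)
print(np.linalg.norm(residuals(np.array([1.0, 0.5, 0.1]), t_grid, y_data, tau_grid, Phi, phi)))
```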

In the next sections we specify a class of systems for which such a representation is possible. This class of systems is not as general as (1), but it is relevant in modelling applications. In Section II we define this class of systems and present general technical assumptions. This is followed by the presentation of the main results in Section III. The results are based on the periodicity assumption we impose on the data and on known facts from the theory of adaptive observers [10], [9]. In Section IV we illustrate the approach with an example of state and parameter estimation for action potential models of neural membranes.

II Problem Formulation

Consider the following class of systems

$\dot{x} = A(\lambda)\,x + \Xi(y, t, \lambda)\,x + \Psi(y, t)\,\theta + \psi(y, t), \quad y = C\,x,$   (4)

where $x \in \mathbb{R}^{n}$ is the state vector, $\theta \in \mathbb{R}^{d}$, $\lambda \in \mathbb{R}^{m}$ are parameters, $A(\lambda)$ is an $n \times n$ real matrix, possibly dependent on $\lambda$, and $\Xi$, $\Psi$, $\psi$ are matrix- and vector-valued functions of the measured output $y$ and $t$. We assume that the following hold for (4):

Assumption II.1 (General assumptions on (4))

  • the solution of (4) is defined on the interval $[t_0, t_1]$ (for some $t_1 > t_0$, possibly dependent on $x_0$, $\theta$, $\lambda$);

  • the pair $C$, $A(\lambda)$ is observable, that is, $\operatorname{rank}\big[\,C^{\top} \;\; (C A(\lambda))^{\top} \;\; \cdots \;\; (C A(\lambda)^{n-1})^{\top}\big] = n$ for all admissible $\lambda$;

  • $\Xi$ and $\Psi$ are $n \times n$ and $n \times d$ real matrices of which the entries are continuous and differentiable functions; $\Xi$ is diagonal;

  • the entries of $\psi$ are continuous and differentiable functions;

  • exact values of the parameters $\theta$, $\lambda$ are unknown.

Since the pair $C$, $A(\lambda)$ is observable, there always exists a coordinate transform $x \mapsto T(\lambda)\,x$ [10] rendering (4) into the following form

(5)

where $b$ is such that the corresponding characteristic polynomial is Hurwitz. The functions in the right-hand side of (5) are continuous and differentiable, the matrix acting on the unmeasured components of the state is diagonal, and $\theta$, $\lambda$ are parameters.

Furthermore, noticing that the variable $y$ is defined and known for all $t \in [t_0, t_1]$, and that the diagonal matrix in (5) is continuous, one can express the solution of the subsystem governing the unmeasured components on $[t_0, t_1]$ in closed form by variation of constants. Denoting the resulting explicitly computable terms accordingly, we therefore arrive at the transformed equations of (5):

(6)
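The closed-form step above relies on the diagonality of the matrix acting on the unmeasured components: each such component satisfies a scalar equation $\dot{z} = \gamma(t)\,z + u(t)$ with $\gamma$ and $u$ computable from the measured output, whose solution is $z(t) = e^{G(t)}\big(z(t_0) + \int_{t_0}^{t} e^{-G(s)}\,u(s)\,ds\big)$, $G(t) = \int_{t_0}^{t} \gamma(s)\,ds$. Below is a minimal Python sketch of this variation-of-constants computation; the names `gamma_vals` and `u_vals` are generic placeholders for the corresponding entries.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def closed_form_diagonal(t, gamma_vals, u_vals, z0):
    """Variation-of-constants solution of z' = gamma(t) z + u(t), z(t[0]) = z0.

    gamma_vals and u_vals are gamma and u sampled on the grid t; both are
    computable from the measured output, so no ODE solver is needed.
    """
    G = cumulative_trapezoid(gamma_vals, t, initial=0.0)          # G(t) = int gamma
    conv = cumulative_trapezoid(np.exp(-G) * u_vals, t, initial=0.0)
    return np.exp(G) * (z0 + conv)
```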

The core problem we are interested in for (6) is as follows:

Problem II.1

Let (6) be given, and let its solutions be defined on $[t_0, t_1]$. Suppose that all functions in the right-hand side of (5) are known, but the true values of the parameters $\theta$, $\lambda$ are unknown. Infer the values of these parameters from the measurements of $y$ over $[t_0, t_1]$.

The question is whether there is an equivalent integral formulation, such as e.g. (3), of this problem for (6). If such an integral formulation exists, then can a reduced-complexity version of this formulation be stated so that the dimension of the parameter vector in the reduced formulation is smaller than that in the original problem? Answers to these questions are provided in the next section.

III Main Result

III-A Indistinguishable parameterizations of (6)

We begin with the following property of linear systems regarding input detectability (cf. [15]).

Lemma 1

Consider

(7)

where

and the remaining quantities are matrices and functions of appropriate dimensions. Let $u$ be bounded: $\|u(t)\| \leq M$ for some $M > 0$ and all $t$. Then the following hold:

  • if the solution of (7) is globally bounded for all $t$ then, for sufficiently large $t$, there exist parameters such that:

    where ,

    (8)

    and the real parts of the roots of the corresponding polynomial are negative.

  • if the output of (7) is zero for all $t$, then this implies the existence of a representation

    (9)

    valid for all $t$.

Proof of Lemma 1 is provided in the Appendix.

According to Lemma 1, the following two sets of parameters, associated with every admissible parameterization of (6), need special consideration. The first set is defined as follows:

This set contains all parameterizations of (6) which are indistinguishable from one another provided that the values of the full state are known for all $t$. That is, if the corresponding solutions coincide for all $t$ then the parameterizations are elements of this set. Denote

where the quantities involved are defined as in (8), with the original parameters replaced by their counterparts. The second set is defined as follows:

In accordance with Lemma 1, the second set contains all parameterizations of (6) that are indistinguishable on the interval $[t_0, t_1]$ on the basis of accessing only the values of the output $y$. In other words, if the outputs coincide for all $t \in [t_0, t_1]$ then the parameterizations are elements of this set. If this set contains more than one element then (6) is not uniquely identifiable on $[t_0, t_1]$ [4]. Here, for simplicity, we will focus on systems (6) that are uniquely identifiable on $[t_0, t_1]$:

Assumption III.1

The two sets defined above coincide and contain no more than one element.

III-B Integral reduced-order formulation of the inverse problem for (6)

Before we proceed with presenting an equivalent integral formulation of Problem II.1, let us first introduce several additional components and corresponding technical assumptions. Let $\ell$ be a vector satisfying the following condition:

where the matrices involved are symmetric and positive definite. According to the Meyer-Kalman-Yakubovich-Popov lemma, such a vector $\ell$ will always exist since the corresponding polynomial is Hurwitz.
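As a numerical illustration of the property this construction rests on, the sketch below verifies, for a small Hurwitz matrix chosen purely for illustration (not one from the paper), that the Lyapunov equation $A^{\top}P + PA = -Q$ with $Q \succ 0$ has a symmetric positive definite solution $P$ — the basic ingredient behind the existence of the vector $\ell$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-3.0, 1.0],
              [-2.0, 0.0]])             # companion matrix of s^2 + 3s + 2 (Hurwitz)
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q
assert np.all(np.linalg.eigvalsh(P) > 0)  # P is symmetric positive definite
```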

Consider

(10)

and let $\Phi(t, t_0)$ be its corresponding normalized fundamental solution matrix: $\Phi(t_0, t_0) = I$.

Theorem III.1

Consider (6) and suppose that Assumption III.1 holds. Let the functions in the right-hand side of (6) be $T$-periodic in $t$ for all admissible values of the parameters, and let the function involved in the statement below satisfy:

Then the following statements are equivalent:

  • the function defined in (11) matches the measured output for all $t \in [t_0, t_1]$, where:

    (11)
  • the corresponding parameterization of (6) reproduces the measured output for all $t \in [t_0, t_1]$.

Furthermore, the values of the remaining parameters satisfy

(12)
Proof.

Let us first show that 1) implies 2). Recall (see e.g. [9]) that the assumptions of the theorem imply the existence of positive numbers such that:

Hence the matrix in question has no zero eigenvalues, and its inverse exists.

Consider the following auxiliary system:

(13)

It is clear that solutions of (13) are defined for all $t \geq t_0$ provided that the definitions of the functions involved are extended (periodically) beyond the interval $[t_0, t_1]$. Introduce the function whose domain of definition is extended accordingly, and consider the difference

Dynamics of this difference satisfy (10) with the corresponding substitutions. Moreover, the required bound holds for all $t \in [t_0, t_1]$ (or for all $t$ if the data is periodically extended beyond $[t_0, t_1]$).

Suppose statement 1) holds. This implies that the corresponding outputs coincide for all $t$. Hence, according to Lemma 1, the parameterizations belong to the set of indistinguishable parameterizations. Given that the two sets coincide and contain just one element, we conclude that the parameters coincide with the true ones.

Notice that the required identity holds for all $t$, and that

(14)

is the unique exponentially stable periodic solution of (13). This implies that (12) holds.

Let us show that 2) implies 1). Let the parameters be such that the corresponding identity holds for all $t$. Consider the function defined earlier. Given that (14) is the unique exponentially stable periodic solution of (13), that this holds for an arbitrary choice of initial conditions, and that solutions converge to (14), one concludes that the identity in statement 1) holds for all $t$.

Remark III.1

One may argue that it is, in principle, possible to obtain integral formulations of the corresponding inverse problem without using adaptive observer-inspired structures. Note, however, that since the original matrix $A(\lambda)$ is allowed to depend on the unknown parameters $\lambda$, explicit expressions for the solutions of (5) will involve extra nonlinearly parameterized terms. If closed-form expressions are applied directly to (6) then the drawback is that the overall vector of unknown parameters includes these extra unknowns, and its dimension is correspondingly larger. In the proposed solution the dimension of the unknown parameter vector is reduced to that of the vector of parameters entering the right-hand side nonlinearly, which is advantageous for systems with a large number of unknowns.

Remark III.2

The uncertainty reduction achieved by the proposed method is due to the assumption that all functions in the right-hand side of (6) are $T$-periodic. Whereas such periodicity assumptions may not always hold, they are not particularly difficult to satisfy (at least approximately) under laboratory conditions.
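In practice the period $T$ itself has to be recovered from the recording before the method is applied. One simple way to do this, sketched below under the assumption that the record spans several periods and is genuinely oscillatory (the function and variable names are illustrative), is to locate the dominant peak of the sample autocorrelation.

```python
import numpy as np

def estimate_period(y, dt):
    """Estimate the period T of an oscillatory record sampled with step dt."""
    y = y - np.mean(y)
    ac = np.correlate(y, y, mode='full')[len(y) - 1:]  # autocorrelation at lags 0, 1, ...
    k0 = int(np.argmax(ac < 0))        # first lag where the correlation dips below zero
    k = k0 + int(np.argmax(ac[k0:]))   # dominant positive peak after that dip
    return k * dt
```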

Remark III.3

Instead of dealing with continuous-time signals, one may re-formulate the above results for data sampled at discrete points in $[t_0, t_1]$. In this case the sets of indistinguishable parameterizations will need to be re-defined so that the corresponding identities hold at a finite number of points rather than for all $t$. The discrete extension of the theorem allows a straightforward formulation of the inference problem as

$\min_{\hat{\lambda}}\; \sum_{k=1}^{N} \big(\hat{y}(t_k; \hat{\lambda}) - y(t_k)\big)^{2},$   (15)

which bears some similarity with [8, 13]. Here, however, no discretization of the original continuous-time dynamical model is required, and the quantities $\hat{y}(t_k; \hat{\lambda})$ are computable as definite integrals.
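A minimal sketch of solving (15) numerically, assuming a user-supplied function `y_hat(t, lam)` that evaluates the explicitly computable predictor of Theorem III.1 at the sample points (the name and signature are placeholders, not part of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def fit(y_hat, t_samples, y_samples, lam0):
    """Least-squares matching of the explicitly computable predictor to data.

    y_hat(t_samples, lam) must return the model output at the sample points;
    in the setting of Theorem III.1 it is built from definite integrals and
    requires no forward integration of the model.
    """
    def cost(lam):
        r = y_hat(t_samples, lam) - y_samples
        return 0.5 * float(np.dot(r, r))
    # BFGS quasi-Newton search, as used in Section IV
    return minimize(cost, lam0, method='BFGS')
```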

IV Example

Consider the following system:

(16)

where

Equations (16) model the dynamics of voltage oscillations generated in the barnacle giant muscle fiber [12]. The variable $v$ is the measured voltage, and $w$ is the recovery variable. The values of several of the parameters (such as the reversal potentials) are normally known; other parameters may vary from one cell to another.

It is clear that equations (16) are of the form (5). Moreover, if the model operates in an oscillatory regime then the right-hand side is periodic along the orbit, including the variable $w$. In addition, the integral

where $T$ is the period of oscillations, satisfies the corresponding bound for practically relevant values of the parameters. Assuming that observations take place when the system's solutions are on (or sufficiently near) the stable periodic orbit, we can express the variable $w$ as follows:

This brings equations (16) into the form (6) with the corresponding parameters $\theta$ and $\lambda$.
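For reference, here is a Python sketch of the Morris-Lecar equations (16) that can be used to generate the periodic "measurements". The parameter values below are a common illustrative set from the literature on [12], not necessarily those of Table I; scipy's default integrator 'RK45' is a Dormand-Prince pair.

```python
import numpy as np
from scipy.integrate import solve_ivp

def morris_lecar(t, x, I=90.0, C=20.0, gL=2.0, gCa=4.0, gK=8.0,
                 VL=-60.0, VCa=120.0, VK=-84.0,
                 V1=-1.2, V2=18.0, V3=12.0, V4=17.4, phi=0.0667):
    v, w = x
    m_inf = 0.5 * (1.0 + np.tanh((v - V1) / V2))   # fast Ca activation
    w_inf = 0.5 * (1.0 + np.tanh((v - V3) / V4))   # K activation steady state
    tau_w = 1.0 / np.cosh((v - V3) / (2.0 * V4))   # K activation time scale
    dv = (I - gL * (v - VL) - gCa * m_inf * (v - VCa) - gK * w * (v - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dv, dw]

# simulate long enough to settle onto the stable periodic orbit
sol = solve_ivp(morris_lecar, (0.0, 500.0), [-20.0, 0.2], max_step=0.1)
v_data = sol.y[0]   # "measured" voltage
```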

For the purpose of illustration we set the values of the parameters as specified in Table I.

Table I: True (first row) and estimated (second row) parameter values of (16)

For the data generated at these parameter values the system is uniquely identifiable, and hence Assumption III.1 holds. According to Theorem III.1, the problem of finding the values of the parameters can now be formulated as that of matching the function defined in (11) to the measured output over $[t_0, t_1]$. In view of Remark III.3, it reduces to solving the unconstrained program (15).

In order to evaluate the function in (11), as a function of the parameters at a given $t$, one needs to know the fundamental solution matrix for all $t \in [t_0, t_1]$. In this example this matrix was constructed numerically (using the Dormand-Prince method with a fixed step size) from linearly independent solutions of

(17)

starting from linearly independent unit initial vectors.
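A sketch of this column-by-column construction, for a generic user-supplied matrix function `A_of_t` (a placeholder; in the example it is assembled from the measured data). Note that scipy's default 'RK45' integrator is a Dormand-Prince pair, although, unlike the fixed-step scheme described above, it is adaptive.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fundamental_matrix(A_of_t, t_grid, n):
    """Normalized fundamental matrix Phi(t, t0) of x' = A(t) x, Phi(t0, t0) = I."""
    Phi = np.empty((len(t_grid), n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0                               # j-th unit initial vector
        sol = solve_ivp(lambda t, x: A_of_t(t) @ x,
                        (t_grid[0], t_grid[-1]), e,
                        t_eval=t_grid, rtol=1e-9, atol=1e-12)
        Phi[:, :, j] = sol.y.T                   # j-th column of Phi at every t
    return Phi
```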

Points in (15) were evenly spaced, and the BFGS quasi-Newton method was used to find a numerical estimate of the solution of (15). For computational convenience, instead of looking for some of the parameters directly, we estimated the corresponding ratios. Similarly, as follows from (17), the estimate of one of the parameters is obtained not as its value alone but as a sum involving it. We ran the method for a fixed number of iterations, and the results of the estimation are shown in Table I and Fig. 1.

Figure 1: Estimates and true values of the model parameters

In order to verify the quality of parameter estimation we ran (16) with both the estimated and the true values of the parameters. The results of this simulation are shown in Fig. 2, upper panel. Note that the frequency of the estimated trajectory is higher than that of the measured data. This explains the noticeable difference between trajectories at the end of the interval. In order to compensate for this difference we slightly adjusted the parameter regulating the frequency of oscillations in the original model. The simulated trajectory of (16) after this adjustment is shown in Fig. 2, lower panel.

Figure 2: Trajectories of (16) with the true values of the parameters (red curves) and the estimated values from Table I (blue curves). The upper panel shows the case when no adjustments to the estimated parameters were made. The lower panel illustrates how the reconstructed trajectory changes when the parameter regulating the frequency of oscillations is slightly adjusted.

It is worth noticing that even though the estimated and measured outputs match reasonably well, there are still errors. The origin of these errors is likely to be 1) numerical errors in estimating the fundamental solution matrix, and 2) ill-conditioning of the original problem. Indeed, as Fig. 3 suggests, there is a long shallow valley in the vicinity of the optimum.

Figure 3: Estimation error plotted as a function of two of the parameters. The red star marks the estimated values, and the green star corresponds to the true values.

The estimation took approximately one hour on a standard PC in MATLAB. We observed that most of the time was spent in the calculation of the integrals, which is not surprising given that the integration in (11) was performed over a relatively dense and uniform grid of points. On the other hand, this indicates that in this and similar cases the computational cost of the procedure is expected to grow nearly linearly with the dimension of the parameter vector. This will be tested in experiments in the future.

V Conclusion

We presented a technique for explicit reduced-order integral reformulation of inverse problems for a class of nonlinear systems. The technique is aimed at using parallel computational streams and is based on the ideas of adaptive observers. It has been shown that the method allows one to reduce the dimensionality of the problem to the dimension of the vector of parameters entering the right-hand side of the model nonlinearly. In order to test the viability of the method, a benchmark model governing the dynamics of voltage generated in the barnacle giant muscle fiber was chosen. The method performed well in this problem which, coupled with the inherent scalability of the procedure, gives hope that the same inference technology can be used successfully for efficient fitting of other models to data.

References

  • [1] H.D.I. Abarbanel, D. Creveling, R. Farsian, and M. Kostuk. Dynamical state and parameter estimation. SIAM J. Applied Dynamical Systems, 8(4):1341–1381, 2009.
  • [2] G. Besançon. Remarks on nonlinear adaptive observer design. Systems and Control Letters, 41:271–280, 2000.
  • [3] D. Brewer, M. Barenco, R. Callard, M. Hubank, and J. Stark. Fitting ordinary differential equations to short time course data. Phil. Trans. R. Soc. A, 366:519–544, 2008.
  • [4] J. DiStefano and C. Cobelli. On parameter and structural identifiability: Nonunique observability/reconstructibility for identifiable systems, other ambiguities, and new definitions. IEEE Trans. on Automatic Control, AC-25(4):830–833, 1980.
  • [5] M. Farza, M. M'Saad, T. Maatoug, and M. Kamoun. Adaptive observers for nonlinearly parameterized class of nonlinear systems. Automatica, 45:2292–2299, 2009.
  • [6] H.F. Grip, T.A. Johansen, L. Imsland, and G.O. Kaasa. Parameter estimation and compensation in systems with nonlinearly parameterized perturbations. Automatica, 46(1):19–28, 2010.
  • [7] T. Johnson and W. Tucker. Rigorous parameter reconstruction for differential equations with noisy data. Automatica, 44:2422–2426, 2008.
  • [8] P. Kühl, M. Diehl, T. Kraus, J. P. Schlöder, and H. G. Bock. A real-time algorithm for moving horizon state and parameter estimation. Computers and Chemical Engineering, 35(1):71–83, 2011.
  • [9] A. Loria and E. Panteley. Uniform exponential stability of linear time-varying systems: revisited. Systems and Control Letters, 47(1):13–24, 2003.
  • [10] R. Marino and P. Tomei. Global adaptive observers for nonlinear systems via filtered transformations. IEEE Trans. Automatic Control, 37(8):1239–1245, 1992.
  • [11] H. Miao, X. Xia, A. Perelson, and H. Wu. On identifiability of nonlinear ode models and applications in viral dynamics. SIAM Rev., 53(1):3–39, 2011.
  • [12] C. Morris and H. Lecar. Voltage oscillations in the barnacle giant muscle fiber. Biophysical J., 35:193–213, 1981.
  • [13] A. Pavlov, B.G.B. Hunnekens, N. van de Wouw, and H. Nijmeijer. Steady-state performance optimization for nonlinear control systems. Automatica, 49(7):2087–2097, 2013.
  • [14] I. Tyukin. Adaptation in Dynamical Systems. Cambridge Univ. Press, 2011.
  • [15] I.Yu. Tyukin, E. Steur, H. Nijmeijer, and C. van Leeuwen. Adaptive observers and parameter estimation for a class of systems nonlinear in the parameters. Automatica, 49(8):2409–2423, 2013.

Appendix

Lemma 2

Consider , , , , , and let ,