# Forecasting the Evolution of Dynamical Systems from Noisy Observations

M. Anghel and I. Steinwart, Los Alamos National Laboratory, Los Alamos, NM 87545
July 27, 2019
###### Abstract

We consider the problem of designing almost optimal predictors for dynamical systems from a finite sequence of noisy observations and incomplete knowledge of the dynamics and the noise. We first discuss the properties of the optimal (Bayes) predictor and present the limitations of memory-free forecasting methods, and of any finite memory methods in general. We then show that a nonparametric support vector machine approach to forecasting can consistently learn the optimal predictor for all pairs of dynamical systems and bounded observational noise processes that possess summable correlation sequences. Numerical experiments show that this approach adapts the memory length of the forecaster to the complexity of the learning task and the size of the observation sequence.

Our goal is to design almost optimal predictors for dynamical systems conditioned only on a finite sequence of observed noisy measurements of their evolution and imperfect knowledge of the dynamics and noise. Suppose that $F$ is a map on a compact subset $X \subset \mathbb{R}^d$ such that the associated dynamical system $(F^i)_{i \ge 0}$, where $F^i := F \circ F^{i-1}$, has an ergodic measure $\mu$. For example, if $X$ is compact and $F$ is continuous such a measure always exists. Moreover, let $(\varepsilon_i)_{i \ge 1}$ be an $\mathbb{R}^d$-valued i.i.d. process with distribution $\nu$ that is independent of the dynamical process. We assume that all observations of the dynamical system are perturbed by this noise process, i.e. all observations are of the form

$$z_i := F^i(x_0) + \varepsilon_i, \qquad (1)$$

where $x_0 \in X$ is the unknown initial point of the trajectory. Let us now assume that we have a sequence of observations $z_1,\dots,z_m$ and we wish to predict the observation $z_{m+l}$, or the true state $F^{m+l}(x_0)$, after some time $l \ge 1$ as well as possible. Then one possible formalization of this problem is to look for a minimizer $f^*_{m,l}$, called the Bayes forecaster, of the risk functional

$$R_{m,l}(f) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^{(m+1)d}} \big\| F^{m+l}(x) + \varepsilon_{m+l} - f\big(F(x)+\varepsilon_1, \dots, F^m(x)+\varepsilon_m\big) \big\|_p^p \, \nu^{m+1}(d\varepsilon)\, \mu(dx), \qquad (2)$$

which describes the average discrepancy (measured in the $p$-norm) between the predictions of $f$ and the observations (we have made a time-shift by one step in order to handle non-invertible maps $F$). To attain this goal we want to design a learning method which assigns to each observed trajectory $T_n := (z_1,\dots,z_n)$, $n \ge 1$, a forecaster $f_{T_n}$ and which is consistent, i.e.

$$\lim_{n\to\infty} R_{m,l}(f_{T_n}) = R^*_{m,l} \qquad (3)$$

holds in probability, where $R^*_{m,l} := \inf_f R_{m,l}(f)$ is the minimum risk (the Bayes risk).

Most classical forecasting methods use a Markov state-space approach to modeling dynamic systems and require a model of both the flow $F$ and the observational noise. These methods usually attempt to estimate the probability density function (PDF) of the state based on all the available information, i.e. the posterior PDF of the state given the observations made so far. Then an optimal (with respect to any criterion) forecaster may be obtained from this PDF, including the conditional mean. Known as nonlinear filters, these methods estimate this distribution recursively in time and consist of essentially two steps: a prediction step, which uses the system model to propagate the state PDF to the next observation time, and an update step, which uses the latest observation to modify the predicted PDF via Bayes' rule. Except in a few special cases, including linear Gaussian state-space models (the Kalman filter) and hidden finite-state-space Markov chains, the recursive propagation of the posterior PDF cannot be determined analytically Maybeck (1979). Consequently, various approximation strategies to the optimal solution have been developed. The most popular algorithms, the extended and the unscented Kalman filter, rely on analytical approximations of the flow and/or finite moment approximations of the posterior PDF. Alternatively, sequential Monte Carlo methods have conceptually the advantage of not being subject to assumptions of linearity or Gaussianity in the model, but they are computationally very expensive and suffer from the degeneracy of the algorithm Doucet et al. (2000).

Notwithstanding their significant merits, there are many difficulties in these approaches to forecasting. First, due to their reliance on system and noise models they are sensitive to model errors. Second, instead of solving the function estimation problem directly, for which the available information might suffice, they estimate densities as a first step, which is a harder and more general problem that requires a large number of observations to be solved well. Third, by relying on a Markov state-space approach to forecasting they heavily restrict the class of functions available for forecasting. In order to further appreciate this aspect, and to better understand our proposed approach, let us now describe the structure of the optimal forecasters in more detail for the special case $p = 2$. If the noise has no systematic bias, i.e. $\mathbb{E}_\nu \varepsilon = 0$, simple algebra shows that the Bayes forecaster $f^*_{m,l}$ is not only the optimal forecaster for the observable state $z_{m+l}$ but also for the true state $F^{m+l}(x_0)$. Moreover, it is well known that $f^*_{m,l}(z_1,\dots,z_m) = \mathbb{E}(z_{m+l} \mid z_1,\dots,z_m)$, but this closed form does not in general solve the issue of actually computing $f^*_{m,l}$, since in many cases this computation is intractable even with perfect knowledge of $F$, $\mu$, and $\nu$. However, if the noise has a density $\vartheta$ with respect to the Lebesgue measure, $f^*_{m,l}$ can be expressed by a more tractable formula,

$$f^*_{m,l}(z_1,\dots,z_m) = \frac{\int_{\mathbb{R}^d} \vartheta(z_1 - F(x)) \cdots \vartheta(z_m - F^m(x)) \, F^{m+l}(x) \, d\mu(x)}{\int_{\mathbb{R}^d} \vartheta(z_1 - F(x)) \cdots \vartheta(z_m - F^m(x)) \, d\mu(x)}. \qquad (4)$$

When perfect knowledge of the system is available, the above equation describes how to build the optimal predictor. Even though these assumptions are rarely met in practice, the above result points to a number of interesting and very general conclusions regarding forecasting the future of a dynamical system from noisy observations:

1) In general the flow is not the optimal predictor. Indeed, Eq. (4) clearly shows that the flow $F$ is in general not the optimal one-step predictor based only on a noisy estimate of the present state of the system, i.e. $f^*_{1,1} \neq F$. A good example is provided by the logistic map, defined as $F(x) = 4x(1-x)$ on $[0,1]$, which is known to be ergodic and has an invariant measure with density $1/(\pi\sqrt{x(1-x)})$ Beck and Schlögl (1993). As we can clearly see from Fig. (1), the true dynamical behavior (continuous line) and the best forecaster (dashed line) disagree. However, in the absence of observational noise we obviously obtain $f^*_{m,1} = F$ from Eq. (2), i.e. the flow is the optimal, memoryless, one-step ahead predictor.
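As a concrete illustration, the one-step Bayes predictor of Eq. (4) can be evaluated numerically for the logistic map with Gaussian observational noise. The sketch below is a minimal Python illustration, not code from the paper; the grid size and noise levels are arbitrary choices. It exploits the substitution $x = \sin^2(\pi u/2)$, which maps the uniform measure on $(0,1)$ to the invariant measure of the logistic map, so the ratio of integrals in Eq. (4) with $m = l = 1$ reduces to plain averages over a uniform grid.

```python
import numpy as np

def logistic(x):
    """Full-chaos logistic map F(x) = 4x(1-x)."""
    return 4.0 * x * (1.0 - x)

def bayes_one_step(z1, sigma, grid=200_000):
    """Numerically evaluate the Bayes one-step predictor f*_{1,1}(z1) of
    Eq. (4) for the logistic map with N(0, sigma^2) observational noise.

    The substitution x = sin^2(pi*u/2) turns integrals against the
    invariant measure dmu = dx / (pi*sqrt(x(1-x))) into plain integrals
    over u in (0, 1), so a midpoint rule on a uniform grid suffices.
    The Gaussian normalization cancels in the ratio."""
    u = (np.arange(grid) + 0.5) / grid            # midpoint rule on (0, 1)
    x = np.sin(0.5 * np.pi * u) ** 2              # samples distributed as mu
    w = np.exp(-0.5 * ((z1 - logistic(x)) / sigma) ** 2)  # theta(z1 - F(x))
    F2 = logistic(logistic(x))                    # F^{m+l}(x) with m = l = 1
    return np.sum(w * F2) / np.sum(w)

if __name__ == "__main__":
    # With noise the Bayes prediction pulls away from the flow F(z1);
    # as sigma -> 0 it approaches F(z1) = 1 at z1 = 0.5.
    for sigma in (0.05, 0.3):
        print(sigma, bayes_one_step(0.5, sigma), logistic(0.5))
```

As the noise level grows, the predicted value deviates more strongly from the flow $F(z_1)$, which is exactly the content of point 1).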

2) Recursive forecasting is worse than direct forecasting. Since in general the $l$-step ahead Bayes predictor $f^*_{m,l}$ does not coincide with the $l$-fold iteration of the one-step ahead Bayes predictor, iterating one-step-ahead forecasts is in general worse than directly forecasting $l$ steps ahead. Fig. (2), which plots the direct two-step ahead forecaster as well as the recursive forecaster obtained by iterating the optimal one-step ahead Bayes predictor, illustrates this effect for the logistic map. In general, as the forecasting time increases the iterated forecasters become more and more inadequate. Of course, for noiseless observations direct and recursive predictors have the same forecasting performance.

3) Forecasts based on denoising the present state are not optimal. By choosing $l = 0$ in Eq. (4) we obtain an optimal estimate (optimal denoiser) of the present state of the dynamical system based on a history of length $m$; see note (4). Now, a common forecasting strategy is to estimate the present system state first and then apply the dynamics to this estimate. It should be clear from our discussion so far that even if the estimate of the present state is optimal it still only approximates the true state; therefore, this forecaster is not the optimal $l$-step ahead forecaster based on histories of length $m$. The dotted line in Fig. (1), which composes the dynamics with the best memory-free denoiser, and the corresponding line in Fig. (2), illustrate the principal limit of memory-free, and of any finite-memory, denoising followed by propagating the dynamics.

4) Memory improves forecasting performance. Indeed, building one-step ahead predictors using the past $m$ states amounts to minimizing the risk functional over the space of all measurable functions $f : \mathbb{R}^{md} \to \mathbb{R}^d$. This function space can be viewed as a subset of the space of measurable functions $\mathbb{R}^{(m+1)d} \to \mathbb{R}^d$, which is the hypothesis space for the best one-step ahead predictor using the past $m+1$ states. Therefore, we necessarily have $R^*_{m+1,1} \le R^*_{m,1}$, i.e. by increasing the memory we can decrease the best possible prediction error: building forecasters that use information from the recent past of the dynamical system can substantially improve the forecasting performance. Indeed, as Table 1 shows, using histories of increased length reduces the Bayes risk for predictors of the logistic map.
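The memory-$m$ forecasters above act on histories $(z_i,\dots,z_{i+m-1})$. A small helper that turns a scalar observation sequence into such input/output pairs for $l$-step ahead prediction might look as follows; this is a hypothetical utility for illustration, not code from the paper.

```python
import numpy as np

def history_pairs(z, m, l):
    """Build input/output pairs for an m-memory, l-step ahead forecaster.

    Inputs are the histories (z_i, ..., z_{i+m-1}); the target is the
    observation l steps after the end of each history, z_{i+m-1+l}."""
    z = np.asarray(z, dtype=float)
    n_pairs = len(z) - m - l + 1
    X = np.stack([z[i:i + m] for i in range(n_pairs)])
    y = z[m + l - 1: m + l - 1 + n_pairs]
    return X, y
```

For example, with $m = 2$ and $l = 1$ the sequence $(z_1,\dots,z_6)$ yields four histories, each paired with the observation immediately following it.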

The last two features clearly show the limitations of memory-free forecasting methods, and of any finite memory methods in general, because they demonstrate that optimal estimators are non-Markovian in the original state space even though the underlying dynamical system is deterministic. Unfortunately, it is not clear how prior knowledge about the system can be used to build non-Markovian forecasters. For this reason, we propose to build nonparametric predictors without making any assumptions about the form of the dynamics or the noise, while only assuming that the process described by (1) is bounded, ergodic, and has sufficiently fast decay of correlations. While nonparametric methods that are proven to work in a certain statistical sense different from (2) for arbitrary, unknown stationary ergodic processes exist Györfi et al. (2002), these methods require very large data segments for acceptable precision. Approaches for forecasting goals closer to (2) were considered by, e.g., Modha and Masry (1998); Meir (2000) but unfortunately these methods require certain mixing conditions that cannot be satisfied by dynamical systems.

The approach we propose uses support vector machine (SVM) forecasters Schölkopf and Smola (2002). For simplicity we only describe the least squares loss, $p = 2$, and memoryless, $m = 1$, one-step ahead, $l = 1$, forecasters, and consider $d = 1$, but generalizations are straightforward. An SVM forecaster assigns to each finite sequence of observations $z_0,\dots,z_n$ a function $f_{T_n,\lambda}$ from a reproducing kernel Hilbert space (RKHS) $H_\gamma$ Cucker and Smale (2002) that solves

$$\min_{f \in H_\gamma} \Big( \lambda \|f\|^2_{H_\gamma} + \frac{1}{n} \sum_{i=0}^{n-1} \big(z_{i+1} - f(z_i)\big)^2 \Big), \qquad (5)$$

where $\lambda > 0$ is a free regularization parameter. Here we choose the RKHS $H_\gamma$ of a Gaussian kernel $k_\gamma(z, z') := \exp(-|z - z'|^2/\gamma^2)$, where $\gamma > 0$ is a free parameter called the width, but other choices of kernels are possible as well. It can be shown that the function minimizing the regularized empirical error (5) exists, is unique, and has the form $f_{T_n,\lambda} = \sum_{i=0}^{n-1} \alpha^*_i \, k_\gamma(z_i, \cdot)$, where $\alpha^*$ is the unique solution of the well-posed linear system

$$(\lambda n I + K)\, \alpha^* = Z. \qquad (6)$$

Here $I$ is the $n \times n$ identity matrix, $K$ is the $n \times n$ matrix whose $(i,j)$ entry is $k_\gamma(z_i, z_j)$, $i,j = 0,\dots,n-1$, and $Z$ is the vector $(z_1,\dots,z_n)^T$ Cucker and Smale (2002).
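For concreteness, the least squares SVM of Eqs. (5)-(6) with a Gaussian kernel can be sketched in a few lines of numpy. This is a minimal illustration under the memoryless, one-step setting above; the toy data and the $\lambda$, $\gamma$ values are arbitrary choices, not the ones used in the paper's experiments.

```python
import numpy as np

def gaussian_kernel(a, b, gamma):
    """Gaussian kernel k_gamma(a, b) = exp(-|a - b|^2 / gamma^2) for 1-d inputs."""
    return np.exp(-np.subtract.outer(a, b) ** 2 / gamma ** 2)

def fit_svm_forecaster(z, lam, gamma):
    """Solve (lam * n * I + K) alpha = Z, Eq. (6), for the memoryless
    one-step ahead least squares SVM forecaster."""
    inputs, targets = z[:-1], z[1:]            # input/output pairs (z_i, z_{i+1})
    n = len(inputs)
    K = gaussian_kernel(inputs, inputs, gamma)
    alpha = np.linalg.solve(lam * n * np.eye(n) + K, targets)
    def forecaster(znew):
        # f(z) = sum_i alpha_i * k_gamma(z_i, z)
        return gaussian_kernel(np.atleast_1d(znew), inputs, gamma) @ alpha
    return forecaster

if __name__ == "__main__":
    # Noise-free logistic-map data: the learned forecaster should then be
    # close to the flow F(z) = 4z(1-z) on the attractor.
    z = np.empty(400)
    z[0] = 0.3
    for i in range(399):
        z[i + 1] = 4.0 * z[i] * (1.0 - z[i])
    f = fit_svm_forecaster(z, lam=1e-6, gamma=0.2)
    print(float(f(0.4)), 4 * 0.4 * 0.6)        # should be close
```

With noisy observations the same solver applies unchanged; only the selection of $\lambda$ and $\gamma$ becomes critical, which is the role of the regularization sequences discussed next.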

Unfortunately, standard learning theory cannot draw conclusions on the behavior of $f_{T_n,\lambda}$, since the input/output pairs $(z_i, z_{i+1})$ are clearly not i.i.d. samples. On the other hand, because the observational noise process is weakly mixing and the dynamical system is ergodic, the stochastic process defined by these pairs is ergodic. Thus, according to Birkhoff's ergodic theorem, it satisfies the following law of large numbers (LLN)

$$\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} h \circ \bar{Z}_i(\omega) = \mathbb{E}_{\mu\otimes\nu}\, h, \qquad (7)$$

almost everywhere, for all integrable functions $h$ of the input/output pairs $\bar{Z}_i$. In particular, this holds for the loss incurred by any fixed forecaster. Therefore, the results in Steinwart et al. (2006) show that there exists a null-sequence $(\lambda_n)$, depending on the dynamics and the noise, for which the SVM forecaster is consistent. We keep the kernel width $\gamma$ constant, but we could as well find a joint sequence $(\lambda_n, \gamma_n)$ of regularization parameters and kernel widths ensuring consistency. This result even holds for unbounded, but integrable, i.i.d. noise processes.

In order to determine a priori, for a given sample size, the regularization sequence $(\lambda_n)$, the convergence speed of the LLN satisfied by the observation-generating process is needed. However, the general negative results of Nobel (1999) strongly suggest that there exists neither a universal, i.e. system- and noise-independent, sequence nor any other universal forecaster. The situation changes dramatically, however, if one restricts considerations to dynamical systems whose stochasticity can be described by, e.g., convergence rates of the correlations

$$\mathrm{cor}(\psi, \varphi, n) := \int \psi \cdot (\varphi \circ F^n) \, d\mu - \int \psi \, d\mu \int \varphi \, d\mu \qquad (8)$$

for suitable observables $\psi$ and $\varphi$. Indeed, we recently showed the existence of a universal sequence that yields an SVM which is consistent for all pairs of ergodic dynamical systems and bounded i.i.d. observational noise processes satisfying $\sum_n |\mathrm{cor}(\psi, \varphi, n)| < \infty$ for all Lipschitz continuous $\psi$ and $\varphi$ Steinwart and Anghel (2007). To be more specific, the corresponding SVM is consistent if, e.g., we use the least squares loss together with suitably chosen polynomially decaying sequences of regularization parameters and kernel widths. Moreover, if the noise process is also centered then this SVM actually learns to forecast the next true state.
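For simple systems the decay of the correlation sequence (8) can be probed empirically from a long trajectory. The sketch below is a rough Monte Carlo estimate, not the rigorous procedure of Liverani (2001): it estimates $\mathrm{cor}(\psi, \varphi, n)$ for the logistic map with $\psi = \varphi = \mathrm{id}$, an observable whose autocorrelation is known to vanish at all positive lags.

```python
import numpy as np

def logistic_trajectory(x0, n):
    """Iterate the logistic map F(x) = 4x(1-x) for n steps."""
    z = np.empty(n)
    z[0] = x0
    for i in range(n - 1):
        z[i + 1] = 4.0 * z[i] * (1.0 - z[i])
    return z

def empirical_cor(traj, lag):
    """Ergodic-average estimate of cor(psi, phi, lag) in Eq. (8)
    with psi = phi = identity."""
    a, b = traj[:len(traj) - lag], traj[lag:]
    return np.mean(a * b) - np.mean(a) * np.mean(b)

if __name__ == "__main__":
    traj = logistic_trajectory(0.123, 200_000)
    # Lag 0 gives the variance (1/8 under the arcsine invariant measure);
    # positive lags should be statistically indistinguishable from zero.
    print([round(empirical_cor(traj, k), 3) for k in range(4)])
```

Such an estimate is only a sanity check: summability of the correlation sequence is an asymptotic property, and finite-trajectory estimates of the tail are dominated by statistical fluctuations.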

Let us now apply this nonparametric approach to predicting the evolution of the $x$-coordinate of the Lorenz system described by the following set of differential equations, $\dot{x} = \sigma(y - x)$, $\dot{y} = x(\rho - z) - y$, $\dot{z} = xy - \beta z$, where the parameters are set at the standard values $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$. The evolution is sampled at fixed time steps and the observational error is i.i.d. Gaussian noise. We compute SVM forecasters using memories of increasing length $m$ and training data of two different sizes $n$. Since in estimating the convergence speed of (7) we are using a loose concentration result, a suitable regularization parameter and kernel width sequence cannot be chosen a priori. Hence, we have adopted a grid search in $(\lambda, \gamma)$ space and a 4-fold cross-validation technique Györfi et al. (2002) to choose these parameters for a given sample size $n$. Finally, we use the estimate constructed from (6) on the whole sample (to simplify notation, we henceforth omit the dependence of the forecaster on the regularization parameters). This approach adapts the regularization parameters and the complexity of the SVM forecasters to the amount of available empirical data. The risk of each forecaster is estimated by the prediction error over a large test set of input/output pairs chosen independently of the training set. Note that the risk is bounded from below by the variance of the observational noise.
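The parameter selection just described can be sketched as a cross-validated grid search. The sketch below is schematic: the toy data, the fold count, and the candidate grids are arbitrary choices for illustration, not the grids used in the experiments.

```python
import numpy as np

def gaussian_kernel(a, b, gamma):
    """Gaussian kernel for 1-d inputs."""
    return np.exp(-np.subtract.outer(a, b) ** 2 / gamma ** 2)

def cv_risk(inputs, targets, lam, gamma, folds=4):
    """Estimate the risk of the SVM forecaster of Eqs. (5)-(6) for one
    (lambda, gamma) pair by k-fold cross-validation."""
    idx = np.array_split(np.arange(len(inputs)), folds)
    errs = []
    for k in range(folds):
        test = idx[k]
        train = np.concatenate([idx[j] for j in range(folds) if j != k])
        n = len(train)
        K = gaussian_kernel(inputs[train], inputs[train], gamma)
        alpha = np.linalg.solve(lam * n * np.eye(n) + K, targets[train])
        pred = gaussian_kernel(inputs[test], inputs[train], gamma) @ alpha
        errs.append(np.mean((pred - targets[test]) ** 2))
    return float(np.mean(errs))

def grid_search(inputs, targets, lams, gammas):
    """Return the (lambda, gamma) pair with the smallest CV risk."""
    scores = {(l, g): cv_risk(inputs, targets, l, g) for l in lams for g in gammas}
    return min(scores, key=scores.get)
```

Once the best $(\lambda, \gamma)$ is found, the final forecaster is refit on the whole training sample with those values, mirroring the procedure in the text.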

As Fig. (3) shows, by increasing the memory of the predictor we obtain forecasters with improved performance. However, for a given sample size $n$ there is an optimal memory length, and increasing the memory beyond this value produces poorer forecasters due to their increased complexity relative to the available data. Moreover, the memory of the best forecaster increases with the sample size, for both the shorter and the longer forecasting horizons shown. This reflects the fact that with increased information we can build more complex and, therefore, better predictors. Since forecasting further into the future is a more difficult problem, the performance of the predictor decreases when we keep the same sample size but attempt to predict further ahead. Furthermore, as the right plot in Fig. 3 reveals, for the same sample size the memory of the best forecaster decreases in order to accommodate the increased complexity of the learning task.

In the limit of infinite sample size direct forecasts are better than recursive forecasts because the SVM forecaster approaches the Bayes forecaster. However, for finite sample sizes this is not always the case. Indeed, as Fig. (4) shows, there is a memory size at which the risk of the direct forecaster becomes larger than that of the recursive forecaster obtained by iterating one-step predictions. Moreover, this crossover depends on the amount of noise and moves to larger memories as the noise variance increases for the same sample size $n$. By increasing the sample size, though, the crossover increases such that in the limit of infinite sample size direct forecasters always perform better than recursive forecasters, as expected.

To conclude, we have described the assumptions under which nonparametric SVM forecasters can consistently learn the optimal predictors. For example, for bounded noise, SVM forecasters are consistent for all pairs of dynamical systems and observational noise processes that possess summable correlation sequences. Hence, the SVM forecasters possess a weak form of universality for a large class of stochastic processes. This includes systems with smooth uniformly expanding dynamics or smooth hyperbolic dynamics, systems perturbed by dynamic noise, as well as "parabolic" or "intermittent" systems which have a polynomial decay of correlations Baladi (2000). Remarkably, for some dynamical systems it seems possible to decide on the summability of the correlation sequence from observations. Indeed, for piecewise expanding maps a rigorous estimate of the asymptotic rate of decay of correlations for a given observable is numerically feasible Liverani (2001).

We also note that in the presence of observational noise the task of forecasting is different from the task of modelling the underlying nonlinear dynamics from data. Indeed, finding the simplest model consistent with the observations, which is the ultimate goal of modelling Kantz and Schreiber (2004), is highly unsuitable for forecasting. This is clearly illustrated by the significant increase of forecasting skill in Fig. (3) when using a memory length much larger than the minimal time-delay embedding necessary to reconstruct a modelling phase space equivalent to the original state space of the Lorenz system.

Finally, we note that our analysis remains valid for stationary nonergodic processes. Since a nonergodic stationary process has an ergodic decomposition, a realization of the time series falls with probability one into an invariant event on which the process is ergodic and stationary. For this reason, the proposed learning algorithm can be applied to a time series generated by that event as though that event were the entire process.

## References

• Maybeck (1979) P. S. Maybeck, Stochastic models, Estimation, and Control (Academic Press, Inc., 1979).
• Doucet et al. (2000) A. Doucet et al., Statistics and Computing 10, 197 (2000).
• Beck and Schlögl (1993) C. Beck and F. Schlögl, Thermodynamics of Chaotic Systems (Cambridge University Press, 1993).
• (4) In order to obtain the optimal denoiser we have to minimize a modified risk functional which assumes, unrealistically, exact knowledge of the state that we want to estimate by denoising. Nevertheless, by carrying out the minimization we obtain Eq. (4) with $l = 0$, which is computationally feasible when perfect knowledge about the dynamics and the noise is assumed.
• Györfi et al. (2002) L. Györfi et al., A Distribution-free Theory of Nonparametric Regression (Springer, New York, 2002).
• Modha and Masry (1998) D. Modha and E. Masry, IEEE Transactions on Information Theory 44, 117 (1998).
• Meir (2000) R. Meir, Machine learning 39, 5 (2000).
• Schölkopf and Smola (2002) B. Schölkopf and A. Smola, Learning with Kernels (MIT Press, Cambridge, MA, 2002).
• Cucker and Smale (2002) F. Cucker and S. Smale, Bull. Amer. Math. Soc. 39, 1 (2002).
• Steinwart et al. (2006) I. Steinwart et al. (2006), arXiv:0707.0303, submitted to the J. Multivariate Anal.
• Nobel (1999) A. Nobel, Ann. Statist. 27, 262 (1999).
• Steinwart and Anghel (2007) I. Steinwart and M. Anghel (2007), arXiv:0707.0322, submitted to the Annals of Statistics.
• Baladi (2000) V. Baladi, Positive Transfer Operators and Decay of Correlations (World Scientific, 2000).
• Liverani (2001) C. Liverani, Nonlinearity 14, 463 (2001).
• Kantz and Schreiber (2004) H. Kantz and T. Schreiber, Nonlinear Time Series Analysis (Cambridge University Press, 2004), 2nd ed.