Prediction of flow dynamics using point processes

Abstract

Describing a time series parsimoniously is the first step towards studying the underlying dynamics. For a time-discrete system, a generating partition provides a compact description such that a time series and its symbolic sequence are one-to-one. For a time-continuous system, however, such a compact description does not have a solid basis. Here, we propose to describe a time-continuous time series using a local cross section and the times when the orbit crosses it. We show that if such a series of crossing times and some past observations are given, we can predict the system's dynamics with good accuracy. This reconstructability depends only weakly on the size and placement of the local cross section, provided we have a sufficiently long database. We demonstrate the proposed method using the Lorenz model as well as actual measurements of wind speed.

Recent developments in measurement techniques and hardware enable us to record time-continuous data at very high sampling rates and over long times. To understand such data intuitively, we need to describe them parsimoniously, in such a way that the description can reproduce the original time series. Here we propose to represent such time-continuous data solely by the series of times at which the orbit passes a local cross section. We show that such a time series, together with some measurements of the system's past dynamics, is sufficient to predict its future dynamics. There are two applications: an immediate one is to save or send time-continuous data using little memory or low channel capacity, thereby making big data more manageable; a more abstract one is to approximate details in a time series when only few observations are possible, based on detailed measurements taken at a different time.

Recent advances in measurement technology enable us to record data of time-continuous systems with very high sampling rates. While these advances are welcome, there are several problems in accessing, analysing and even storing such data sets. These problems are nowadays summarised under the umbrella of big data and arise in diverse areas ranging from bioinformatics to systems health engineering (1); (2). Here we present a method based on the recurrence of dynamical systems to categorise and rank such data sets. Ranking the recurrence times of the current state against the recurrence times of the big data set allows us to predict the future dynamics of the system.

Our research is motivated by recent achievements in the time series analysis of non-uniformly sampled data (3); (4); (5); (6); (7); (8); (9). We assume that we have a time-continuous dynamical system that we are able to measure for a certain amount of time. This observational series of the high-dimensional state space is our database. After recording the full dynamics in phase space for the database, we no longer measure the full dynamics but only record the crossing times, that is, the times when the trajectory of the system passes through a local cross section on the attractor. This set of crossing times is our non-uniformly sampled data set, which we use to estimate the current state of the system. The estimation is done by finding similarities between the current crossing time sequence and the crossing times calculated from the database of the full dynamics.

To detect this similarity we use the Victor and Purpura distance (10). The information contained in such crossing times has been studied since Poincaré (11) and is the foundation of recurrence-based time series analysis (12). The Victor and Purpura distance (10) offers a natural metric to rank and detect similarities between consecutive time windows of the crossing times (10); (5); (6); (7); (8); (13); (9). Using cross prediction (14), we exploit these similarities to estimate the current state and predict the system's future dynamics.

Our paper is organised as follows. After giving a short summary of our method, we first give details on the information contained in the crossing time series and introduce the Victor and Purpura distance. Then we introduce the cross prediction method and illustrate our approach using the low-dimensional Lorenz system (15). In addition, we apply our method to predict actual wind speed data.

Predicting the dynamics of the system is done by exploiting the information in the database. This record of the past dynamics is used in two ways: first, to compare the crossing time series of the database with the current crossing time series; second, to predict the future dynamics by approximating the future state of the system from the past dynamics in the database. The length of the database is critical and determines the quality of the prediction. The recording time has to be chosen long enough to allow first a good state estimation and then a realistic approximation of the future dynamics. Since both the state estimation and the cross prediction rely on similarity between past and future dynamics, the longer the record, the more reliable the prediction becomes. One of the main results of this paper is that even for realistically short recording times our method yields quite reliable predictions.

Formally, we assume a dynamical system (M, φ^t), where M is an m-dimensional manifold and φ^t is the diffeomorphism describing how a point of M moves to another point of M after some time t. Our database contains the full dynamics x(t) for 0 ≤ t ≤ T, sampled every Δt, where T denotes the recording time and Δt is the sampling time. If the full dynamics cannot be directly measured, one has to reconstruct the dynamics using delay embedding (16); (17). We will discuss this further when we introduce our environmental example of predicting wind speeds.

Figure 1: A schematic overview of our prediction method. (a) Local cross section Λ with radius r, oriented perpendicular to the flow; (b) spike train having a non-zero amplitude every time the trajectory passes through Λ; (c) one component of the Lorenz system (blue) and our prediction (green). The blue part of the trajectory represents the database used for the cross prediction.

From time T onwards we no longer have access to the full dynamics, but measure the times τ_1 < τ_2 < … at which the trajectory crosses a local cross section Λ. We choose Λ to be perpendicular to the flow at a point of the trajectory, and the size of Λ is given by its radius r (c.f. Fig. 1(a)). (In practice, we choose a point in the time series and calculate the direction in which the point is moving. This direction gives the normal vector of our local cross section Λ.) A lot is known about such crossing time series. For example, it can be shown that a vector of consecutive crossing times provides a one-to-one embedding of the dynamics under mild conditions relating to periodic points in the phase space (18); (19); (20), as long as its dimension is at least 2m+1. We exploit this result, but instead of measuring 2m+1 consecutive crossings, our measuring time w is chosen so that our crossing time series always satisfies the inequality:

\tau_{i+2m} - \tau_i \le w \quad \text{for all } i. \qquad (1)

We therefore record the crossing time series {τ_i} of our system for w time units. To determine w we use the time series in the database and measure its crossing time series. This set of times is then used to set w as the maximum time span covered by 2m+1 consecutive crossings in the recorded history (the database), so that the condition of eq. (1) is satisfied. Note that, especially if the recording time T is short, the choice of w, as the upper bound in eq. (1), should be made conservatively. As we will see, in practice one can even use a small value for w, e.g. an upper bound that holds only for a section of the time series, and still obtain a good prediction. We discuss techniques used to improve the prediction for such choices of w later.
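To make these two steps concrete, the following Python sketch detects the crossing times of a sampled trajectory through a small disc perpendicular to the flow and then chooses w as the maximal time span covered by 2m+1 consecutive crossings in the database, as required by eq. (1). The function and variable names, the linear interpolation of the crossing instant and the one-sided crossing convention are our own illustrative assumptions, not part of the original description.

```python
import numpy as np

def crossing_times(traj, times, center, normal, radius):
    """Times at which a sampled trajectory passes through a small disc
    (the local cross section) centred at `center`, perpendicular to `normal`
    and of the given radius.  A crossing is detected as a sign change of the
    signed distance to the plane and refined by linear interpolation."""
    normal = normal / np.linalg.norm(normal)
    signed = (traj - center) @ normal            # signed distance to the plane
    tau = []
    for i in range(len(times) - 1):
        if signed[i] < 0.0 <= signed[i + 1]:     # crossing in the direction of the flow
            frac = -signed[i] / (signed[i + 1] - signed[i])
            point = traj[i] + frac * (traj[i + 1] - traj[i])
            if np.linalg.norm(point - center) <= radius:    # inside the disc
                tau.append(times[i] + frac * (times[i + 1] - times[i]))
    return np.array(tau)

def window_length(tau, m):
    """Conservative upper bound w of eq. (1): the largest time span covered by
    2m + 1 consecutive crossing times of the database."""
    k = 2 * m
    if len(tau) <= k:
        raise ValueError("the database contains too few crossings")
    return float(np.max(tau[k:] - tau[:-k]))
```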

Given the current crossing time series of length w, we use the Victor and Purpura distance (10) to find similar sequences of length w in the database. The closest sequences in the database provide the basis for our state estimation and the prediction of the dynamics at future times. To apply the Victor and Purpura distance we represent the current crossing time series and those of the database as their corresponding spike trains. The value of such a spike train is 0 everywhere and equal to 1 only when the trajectory passes through Λ (c.f. Fig. 1(b)). The Victor and Purpura metric determines the minimum cost to convert one spike train into another. This metric has been used extensively in the context of neuroscience, e.g. (5), where spike trains naturally arise as the output of idealised neurons.

Let the current spike train be X and let Y represent one of the possible spike trains of the database. Let t_i^X denote the time of the i-th event within X and t_j^Y the time of the j-th event within Y. Thus, measuring times from the beginnings of the two windows, we have 0 ≤ t_i^X ≤ w for each i and 0 ≤ t_j^Y ≤ w for each j. When we apply the shift of events, we pair up one event from X with another from Y; each t_i^X and t_j^Y may not belong to more than one pair. In addition, we define C as the set of such pairs taken from X and Y. With this we can define the Victor–Purpura distance as:

D_q(X, Y) = \min_{C} \left[ q \sum_{(i,j) \in C} \left| t_i^X - t_j^Y \right| + \left( n_X + n_Y - 2|C| \right) \right], \qquad (2)

where n_X and n_Y denote the total numbers of events in X and Y, respectively, and the minimum is taken over all admissible sets of pairs C.

This edit distance satisfies the three conditions necessary for a metric: (i) it is non-negative, and zero if and only if the two time windows are identical; (ii) it is symmetric; and (iii) it satisfies the triangle inequality.

Intuitively the metric can be understood by considering the possible edits that transform X into Y: either we align an event t_i^X with an event t_j^Y, or we have to delete or create events in the spike trains. Both types of edit appear on the RHS of eq. (2). The cost of aligning two events is proportional to the time difference between the two events, with q controlling this cost. Since the cost of creating or deleting an event is chosen to be equal to 1, the remaining term depends on the difference between the total number of events, n_X + n_Y, and twice the number of paired events, 2|C|. This Victor–Purpura distance has been used to evaluate synchronisation among neurons (21); (22). Further details and a thorough mathematical description of the Victor–Purpura distance can be found in the literature (23); (8); (24).
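For reference, the Victor–Purpura distance of eq. (2) can be evaluated with the standard dynamic programme over the two event sequences. The sketch below is a minimal implementation with the shift-cost parameter named q as in the text; it assumes the event times of each window are sorted.

```python
import numpy as np

def victor_purpura(tx, ty, q):
    """Victor-Purpura distance of eq. (2) between two windows of event times.
    Shifting an event costs q per unit time; deleting or creating an event
    costs 1, so events further apart than 2/q are never paired."""
    nx, ny = len(tx), len(ty)
    # G[i, j]: distance between the first i events of tx and the first j events of ty
    G = np.zeros((nx + 1, ny + 1))
    G[:, 0] = np.arange(nx + 1)                  # delete all remaining events of tx
    G[0, :] = np.arange(ny + 1)                  # create all remaining events of ty
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            G[i, j] = min(G[i - 1, j] + 1,                                   # delete
                          G[i, j - 1] + 1,                                   # create
                          G[i - 1, j - 1] + q * abs(tx[i - 1] - ty[j - 1]))  # pair
    return G[nx, ny]

# two windows that differ only by a small shift of the second event
print(victor_purpura([0.1, 0.5, 0.9], [0.1, 0.55, 0.9], q=1.0))   # 0.05
```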

Comparing the current crossing time sequence in this way with all the crossing time sequences in the database, we identify the 10 sequences that are closest in terms of their distance D_q. The average of their end points is our state estimate at the current time. For extremely long database recording times and very high data precision one might use just the end point of the sequence with minimal D_q as the current state estimate. Under realistic conditions, however, using the average and standard deviation of about 10 sequences with similar recurrence properties – in the Victor–Purpura distance sense – improves the stability of the prediction and helps us to evaluate its reliability. One can understand the Victor–Purpura distance for a spike train as being analogous to the Euclidean distance for usual vector spaces.

Using cross prediction has the advantage that we do not require access to the equations governing the dynamics of the system. Instead, the prediction of the future dynamics is provided as the average of the recorded evolution of the 10 states defining our state estimate. We denote the current point in time by t_0 and use k = 1, …, 10 as the index of the 10 states. Consequently s_k(t_0) denotes the end points of the 10 most similar sequences in the database. Then the estimation of the current state is:

\hat{x}(t_0) = \frac{1}{10} \sum_{k=1}^{10} s_k(t_0). \qquad (3)

Similarly, the prediction of the state p sampling steps in the future is:

\hat{x}(t_0 + p\,\Delta t) = \frac{1}{10} \sum_{k=1}^{10} s_k(t_0 + p\,\Delta t), \qquad (4)

where s_k(t_0 + pΔt) denotes the recorded evolution of the k-th matched sequence p sampling steps after its end point in the database. The corresponding prediction uncertainty can be measured by the standard deviation σ(t_0 + pΔt) of these 10 values.

We use the uncertainty of our prediction to accept or reject the prediction of eq. (4). The main issue we face is that the ensemble of the 10 sequences with the lowest D_q might lose coherence: they may be close in the Victor–Purpura sense, but not all of them have to be close to the system's state on Λ. When this happens the uncertainty of the prediction grows and σ(t_0 + pΔt) exceeds a preset threshold. In such a case we reject the prediction, let the current state estimate of eq. (3) be our prediction instead, and replace t_0 by a later time to reinitialise the procedure for further prediction.
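The complete estimation-and-prediction step described above can be sketched as follows, reusing the victor_purpura function from the earlier sketch. The database trajectory db_traj is assumed to be sampled every Δt with crossing times db_tau; the rejection threshold theta, the helper window_events and all variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def window_events(tau, t_end, w):
    """Crossing times in the window (t_end - w, t_end], shifted to start at 0."""
    sel = (tau > t_end - w) & (tau <= t_end)
    return tau[sel] - (t_end - w)

def estimate_and_predict(db_traj, db_tau, dt, cur_tau, t0, w, p, q,
                         n_neighbours=10, theta=np.inf):
    """State estimate at the current time t0 and prediction p sampling steps
    ahead, following eqs. (3) and (4); returns (estimate, prediction, accepted)."""
    current = window_events(cur_tau, t0, w)
    n = len(db_traj)
    start = int(np.ceil(w / dt))        # first index whose window fits into the database
    ranked = sorted((victor_purpura(window_events(db_tau, i * dt, w), current, q), i)
                    for i in range(start, n - p))
    idx = [i for _, i in ranked[:n_neighbours]]
    estimate = np.mean(db_traj[idx], axis=0)                       # eq. (3)
    future = db_traj[[i + p for i in idx]]
    prediction = np.mean(future, axis=0)                           # eq. (4)
    accepted = bool(np.all(np.std(future, axis=0) <= theta))       # rejection criterion
    if not accepted:
        prediction = estimate       # hold the state estimate when the spread is too large
    return estimate, prediction, accepted
```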

Especially when the recording time T is short, it can be necessary to work with a small value of w. For such w the inequality (1) will not be satisfied at all times, but only for some part of the time series in the database. To still achieve a good state estimation, we take into account whether, just before or just after the current time window, there is or is not a spike. For our current spike train, we therefore have the option that a spike does (not) appear just before (after) the beginning (end) of the window. We combine these 4 options with the corresponding 4 options for the database window and determine the distance D_q for each of the 16 possibilities. As we see below, including these 16 possibilities in our minimisation greatly improves the accuracy of the prediction.
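One possible implementation of this 16-fold comparison is sketched below: each window's spike train is expanded into four variants, with or without a virtual spike at the start and/or the end of the window, and the minimum Victor–Purpura distance over the 4 × 4 combinations is used. Placing the virtual spikes exactly at the window boundaries is our own assumption; the description above only states that a spike may or may not occur just before or just after the window.

```python
def boundary_variants(t, w):
    """The four variants of a spike train on [0, w]: unchanged, and with a
    virtual spike added at the start and/or at the end of the window."""
    t = list(t)
    return [t, [0.0] + t, t + [w], [0.0] + t + [w]]

def robust_distance(tx, ty, w, q):
    """Minimum Victor-Purpura distance over the 16 boundary combinations,
    reusing victor_purpura from the earlier sketch."""
    return min(victor_purpura(a, b, q)
               for a in boundary_variants(tx, w)
               for b in boundary_variants(ty, w))
```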

Figure 2: Comparison between the predicted (dashed line) and the original (solid line) time series of the Lorenz system: (a) a short recording time T of the database, a large radius r of the local section and a small upper bound w of eq. (1); (b) a longer recording time, a smaller radius and a larger w. At the bottom of both panels the corresponding spike trains are shown; the prediction is given by eqs. (3) and (4).

Such a prediction is shown in Fig. 1(c). Clearly visible are the time periods during which we rejected the prediction (the instances of constant amplitude). These coincide with periods of low or no recurrence, that is, large gaps in the spike train in panel (b). For practical applications we would use a larger value of w to eliminate these areas of prediction rejection. To quantitatively assess the prediction we evaluate the cross-correlation coefficient γ between the prediction and the true dynamics of the system. For each parameter setting we predict the dynamics using eq. (4) and evaluate γ.
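The cross-correlation coefficient used here and in Fig. 3 can be computed directly from the predicted and true segments; the minimal sketch below assumes both are given as one-dimensional arrays covering the same stretch of time.

```python
import numpy as np

def cross_correlation(predicted, truth):
    """Cross-correlation coefficient between a predicted and a true segment."""
    return float(np.corrcoef(predicted, truth)[0, 1])
```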

As a first application of this method, we apply our algorithm to the Lorenz 1963 system (15), using the standard parameters σ = 10, ρ = 28 and β = 8/3. We systematically vary the radius r of the local section and the database recording time T. These parameter changes directly impact the upper bound w of eq. (1). We also calculated the statistics Cv and Lv of Ref. (25) from these spike trains; they evaluate the global and local variability of a spike train and allow us to judge whether it follows a Poisson process or not. If a spike train follows a Poisson process, its Cv and Lv fluctuate around 1 (25). Over 10 samples, one of the two estimates was not significantly different from 1, while the other was, meaning that the spike trains are likely to differ from a Poisson process, or standard point process (25).
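For completeness, the sketch below integrates the Lorenz 1963 system with the standard parameters to build a database trajectory and computes the two interspike-interval statistics of Ref. (25), the coefficient of variation Cv and the local variation Lv, from a crossing time series (for instance one produced by the crossing_times sketch above). The integration tolerances, initial condition and trajectory length are illustrative choices on our part.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt, T = 0.01, 500.0                                   # sampling time and recording time
t_eval = np.arange(0.0, T, dt)
sol = solve_ivp(lorenz, (0.0, T), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
traj, times = sol.y.T, sol.t                          # database of the full dynamics

def cv_lv(tau):
    """Cv and Lv of the interspike intervals of a crossing time series tau,
    following Shinomoto et al. (25); both fluctuate around 1 for a Poisson process."""
    isi = np.diff(tau)
    cv = np.std(isi) / np.mean(isi)
    lv = 3.0 * np.mean(((isi[:-1] - isi[1:]) / (isi[:-1] + isi[1:])) ** 2)
    return cv, lv
```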

Figure 3: The cross-correlation coefficient γ between the prediction and the true dynamics in dependence of the size of the local section and the recording length. The cross-correlation was calculated for a prediction of 50 time units. Shown is the average correlation for 10 randomly placed local sections Λ as well as the 50% confidence interval. (a) Fixed upper bound w and recording time T while varying the radius r of the local section; (b) fixed w and fixed radius r while varying the recording length T. N.B. we tend to obtain positive correlation coefficients because we predicted the values generated from the true dynamics based on spike trains.

In Fig. 2 we see two outputs of our algorithm in comparison with the corresponding true component of the Lorenz system. Since it is hard to visualise the state estimates together with their predictions, we only show the state estimates together with the prediction up to the next estimate. As reinitialisation time we used a very small value in both experiments; consequently the prediction is never rejected and the data shown highlight the accuracy of our state estimate more than the quality of our prediction. The main difference between the two data sets shown in panels (a) and (b) is the recording time T and the radius r of the local section Λ. We use a larger radius in panel (a) and a smaller one in (b). Consequently the recurrence rate with which the trajectory returns to Λ is much higher in (a) than in (b), as we can see from the spike trains below the trajectories. Given the high recurrence rate, a short recording time suffices in (a), while in (b) we need a longer recording time. Similarly, the recurrence rate helps us to choose the upper bound w of eq. (1), which is small in (a) and larger in (b). The comparison of the true and predicted dynamics clearly shows that our algorithm can be tuned to work for both choices of r and approximates the dynamics of the system well.

The large deviations of the prediction in Fig. 2(a) are caused by our choice of a small w and a very short recording time T. With such a small w it is not always possible to satisfy condition (1), and for that reason our prediction diverges from the true dynamics. In addition, the short record of the past dynamics, in combination with the cross-prediction technique, does not allow us to get the amplitude of some of the oscillations right. As we can see, for the longer recording time and larger w these problems disappear and we obtain better estimates and predictions (Fig. 2(b)). While the recurrence rate is lower, we have more knowledge about the past dynamics, and the higher value of w makes it easier to satisfy condition (1).

We want to understand the influence of r and T on our prediction algorithm in more detail. For this we determine the correlation between the original Lorenz time series and predictions of the next 50 time units, while varying r or T. Moreover, we use these experiments to demonstrate that the location of the local section is of minor importance for the predictions. For each experiment we therefore use 10 random positions for Λ and report in Fig. 3 the mean value as well as the 50% confidence interval. For Fig. 3(a) we fixed w and T and vary the radius of the local section, while in (b) we fixed w and r and vary T.

A sufficiently large radius r enables us to cross-predict the original time series almost perfectly (Fig. 3(a)), because for large r we have a high recurrence rate and the condition of eq. (1) is likely to be met. When we reduce the size of the local cross section, the correlation coefficient between the original time series and the cross-predicted time series decreases. The lower recurrence rate makes it impossible to satisfy the condition of eq. (1), namely τ_{i+2m} − τ_i ≤ w, for all times, leading to a worse prediction. But even in such a case, a sufficiently long recording time as well as a larger w leads to a high correlation coefficient (Fig. 3(b)). Thus, we expect that with a database of sufficiently long recording time we can still reconstruct the original time series faithfully. Moreover, the medians and 50% confidence intervals in Fig. 3(a) and (b) show that the position of the local cross section is of minor importance for our prediction algorithm. For example, reconstructions from local sections at the same position but with different radii r are not so different from each other, as shown in Fig. 4.

Figure 4: A spike train generated by a local cross section and the reconstruction of the original signal. The original signal is shown as the blue solid line and its reconstruction as the red dashed line. At the bottom we show the spike train and reconstruction for a local cross section at the same position but with a smaller radius r. The correlation coefficient between the original and reconstructed signals is 0.3926. (Because there are fewer events, we needed a larger w and a longer recording time T to obtain good results.)

To further understand the results in Figs. 2 and 3, and to gain some insight into the importance of the different measurements (the recorded history in the database and the crossing times), we analysed the predictability of flows from crossing times as an information-theoretic problem. Our analysis shows that when a local cross section becomes small, and therefore generates less frequent events, the time resolution of our data becomes the dominating factor. For the analysis we partition our spike train time series into N_b bins of equal size; let N_s be the number of events within the spike train, and note that each bin can contain at most one spike. Then the number of distinct signals we can send by such a spike train is

\binom{N_b}{N_s} = \frac{N_b!}{N_s!\,(N_b - N_s)!}. \qquad (5)

Assuming that all signals are equally likely, we can calculate the amount of information in the series as (26)

I = \log_2 \frac{N_b!}{N_s!\,(N_b - N_s)!}. \qquad (6)

Applying Stirling’s approximation, we get

I \approx N_b \log_2 N_b - N_s \log_2 N_s - (N_b - N_s) \log_2 (N_b - N_s). \qquad (7)

Given that the number of events grows in proportion to the length of the spike train, we can assume that N_s and N_b are related by N_s = ρ N_b with a fixed ratio 0 < ρ < 1. Therefore the RHS of eq. (7) can be written as

I \approx N_b \left[ -\rho \log_2 \rho - (1 - \rho) \log_2 (1 - \rho) \right]. \qquad (8)

Since N_s gives the total number of spikes in the sequence and we only allow one spike per bin, the maximum possible N_s is directly related to the time resolution of the sequence. We therefore conclude that the time resolution, given by N_b, is directly proportional to the total amount of information contained in the time series. This has two consequences. For real-world data the time resolution is of utmost importance: using our algorithm in real-world applications requires instruments with high precision, but it is only the time precision that matters, not additionally the amplitude precision. The second consequence of eq. (8) relates to the efficiency of our method. Once the database is recorded, the data required to predict the future dynamics has a very small footprint. For example, the database needed to predict the dynamics shown in Fig. 2(b) and its crossing time series together occupy a certain amount of uncompressed disk space (we used the MATLAB command "whos" to evaluate the disk space needed for storing these variables), while the spike train recorded to estimate the state, and consequently to predict the dynamics, amounts to only a small percentage of their combined size. Having such a small memory demand makes this method attractive for real-world applications.
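As a numerical illustration of eqs. (6)-(8), the sketch below compares the exact information content with its Stirling approximation and with the form of eq. (8) for hypothetical values of N_b at a fixed ratio ρ = N_s/N_b, showing the roughly linear growth with the number of bins; the chosen values of N_b and ρ are not taken from the experiments above.

```python
import numpy as np
from math import lgamma, log

def info_exact(nb, ns):
    """I = log2( nb! / (ns! (nb - ns)!) ), eq. (6), evaluated via log-gamma."""
    return (lgamma(nb + 1) - lgamma(ns + 1) - lgamma(nb - ns + 1)) / log(2)

def info_stirling(nb, ns):
    """Stirling approximation of eq. (7)."""
    return nb * np.log2(nb) - ns * np.log2(ns) - (nb - ns) * np.log2(nb - ns)

rho = 0.05                                   # hypothetical fraction of occupied bins
for nb in (1_000, 10_000, 100_000):
    ns = int(rho * nb)
    entropy_form = nb * (-rho * np.log2(rho) - (1 - rho) * np.log2(1 - rho))   # eq. (8)
    print(nb, round(info_exact(nb, ns)), round(info_stirling(nb, ns)), round(entropy_form))
```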

As one example of a real-world application, we use our algorithm to predict wind speed (27). The original measurements were taken over 24 hours from around 2pm on 1 September 2005, at about 1 m above ground level, at the Institute of Industrial Science, The University of Tokyo, Tokyo, Japan, using an ultrasonic anemometer. The original measurements were made at 50 Hz, but we first sub-sample the time series using every tenth point. This sub-sampling allows us to cover a longer time span with the same amount of data, and therefore the database contains more of the long-term changes in the dynamics. Since we do not have access to the full state space of this environmental system, we have to reconstruct the attractor of the dynamics by using delay embedding (16); (17). Given that the system seems to have features of high-dimensional dynamics (28), we have to use a high embedding dimension. But the total recording time of the wind speeds is 10,000 seconds, and consequently we cannot use too high an embedding dimension without making the recurrence rate too low for our prediction algorithm. In addition, we only use the first 9,000 seconds as our database and use the remainder for comparison with our prediction. As a compromise, and after several tests, we decided to use a 10-dimensional embedding space. We find that our algorithm succeeds in capturing the large-scale features of the dynamics, i.e. prolonged changes of the wind velocity are similar in the experimental data and the prediction (Fig. 5). On the other hand, our predictions fail to reproduce the finer details of the system's dynamics. We conjecture that a higher embedding dimension together with a longer time series would be able to overcome these limitations. Another way to improve the performance of the method could be to use marked spike trains, assigning some additional value to each event. Several distances have been proposed for such marked point processes (29); (23); (30); (31), which might increase the prediction performance for such high-dimensional dynamics.
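Because the wind data require a reconstructed state space, the short sketch below builds the 10-dimensional delay embedding from the sub-sampled scalar series. The file name and the delay of one sub-sampled step are assumptions made for illustration, since the lag actually used is not stated above.

```python
import numpy as np

def delay_embed(x, dim=10, lag=1):
    """Delay embedding of a scalar series x: row i of the result is
    (x[i], x[i + lag], ..., x[i + (dim - 1) * lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

# wind speed recorded at 50 Hz, sub-sampled to every tenth point (5 Hz);
# "wind_speed.txt" is a hypothetical file name used only for illustration
# wind = np.loadtxt("wind_speed.txt")
# traj = delay_embed(wind[::10], dim=10, lag=1)
```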

Figure 5: Example of the wind speed time series (solid line) together with our prediction (dash-dotted line). The parameters of our algorithm are the size w of the time window, the recording time T and the radius r of the local section. Our algorithm predicts the following 1,000 seconds.

Applying the method of Ref. (32) to the above wind data yields a statistic greater than its 99% point under the null hypothesis of stationarity, meaning that the wind data should be regarded as non-stationary. The applicability of our method to non-stationary data should be evaluated more closely in future research, although our results in Fig. 5 seem to show some promise in this line of research.

Our numerical results, and to some extent the results on wind speeds, show that a local cross section has the generic property of allowing the underlying dynamics of a flow to be reconstructed almost perfectly. This is similar to generating partitions, which have been extensively studied for maps (33); (34); (35); (36); (37); (38). For maps it is a non-trivial task to find a generating partition. Our results show that for flows this is much easier, and the position of Λ does not matter much for the prediction quality (c.f. Fig. 3).

While there are many studies that try to predict the next crossing time based on previous crossings (for example (18)), the method presented here is different. Instead of focussing on just the next crossing, we are able, using cross-prediction, to estimate the complete future dynamics. If we regard the observations at a local cross section as a coincidence detector (39); (40), our results could explain how we can share the same experience even if we perceive a phenomenon in different ways, or in our case through different local cross sections. Hence, our results might also have some implications in the field of theoretical neuroscience.

We presented a method for state estimation and prediction of flows, given a database of the past dynamics and a crossing time series. Identifying the current state is done by ranking the crossing time sequences of the database according to their distance from the current crossing time sequence. We showed that our method requires little memory to store and send time series information via a spike train if the database is shared between sender and receiver. Under this assumption our method does not require us to store and send an entire time series; instead we simply record or send the times when the trajectory passes the local cross section. This advantage can be important for sensor networks (41), where each device has to store and communicate a lot of environmental information under severe energy constraints. Similarly, our method may be used to reconstruct missing data within observations, such as gaps in satellite images partly covered by clouds (42), historical series of sunspot numbers (43), or historical phenological data such as the start of the cherry blossom season in Japan (44).

Y. H. would like to thank Prof. Kazuyuki Aihara for the discussions. The research of Y. H. is supported by Kozo Keikaku Engineering Inc. The research of D. E. has been supported by the German-Israeli Foundation for Scientific Research and Development (GIF), Grant No. I-1298-415.13/2015.

References

  1. C.S. Greene, J. Tan, M. Ung, J.H. Moore, and C. Cheng, Journal of Cellular Physiology 229, 1896–1900 (2014).
  2. W.Q. Meeker and Y. Hong, Quality Engineering 26, 102–116 (2014).
  3. M. McCullough, K. Sakellariou, T. Stemler, and M. Small, Chaos 26, 123103 (2016).
  4. K. Sakellariou, M. McCullough, T. Stemler, and M. Small, Chaos 26, 123104 (2016).
  5. Y. Hirata and K. Aihara, J. Neurosci. Methods 183, 277-286 (2009).
  6. Y. Hirata and K. Aihara, Physica A 391, 760-766 (2012).
  7. Y. Hirata, E. J. Lang, and K. Aihara, Recurrence plots and the analysis of multiple spike trains, In. Springer Handbook of Bio-/Neuroinformatics, Ed. Nikola Kasabov, Springer Berlin Heidelberg, pp.735-744, 2014.
  8. I. Ozken, D. Eroglu, T. Stemler, N. Marwan, G. B. Bagci and J. Kurths, Phys. Rev. E 91, 062911 (2015).
  9. D. Eroglu, F. H. McRobie, I. Ozken, T. Stemler, K.-H. Wyrwoll, S. F. M. Breitenbach, N. Marwan and J. Kurths, Nat. Comm. 7, 12929 (2016).
  10. J. D. Victor and K. P. Purpura, Network: Comput. Neural Syst. 8, 127-164 (1997).
  11. H. Poincaré, “Sur le problème des trois corps et les équations de la dynamique”. Acta Math., 13, 1-270 (1890).
  12. N. Marwan, M. C. Romano, M. Thiel, J. Kurths, Recurrence Plots for the Analysis of Complex Systems, Physics Reports, 438(5-6), 237-329, (2007).
  13. Y. Hirata, K. Iwayama and K. Aihara, Phys. Rev. E 94, 042217 (2016).
  14. M. Le Van Quyen, J. Martinerie, C. Adam, and F. J. Varela, Physica D 127, 250-266 (1999).
  15. E. N. Lorenz, J. Atmos. Sci. 20, 130-141 (1963).
  16. F. Takens, Lect. Notes Math. 898, 366 (1981).
  17. T. Sauer, J. A. Yorke and M. Casdagli, J. Stat. Phys. 65, 579 (1991).
  18. T. Sauer, Phys. Rev. Lett. 72, 3811-3814 (1994).
  19. J. P. Huke and D. S. Broomhead, Nonlinearity 20, 2205-2244 (2007).
  20. S. Smale, Stable manifolds for differential equations and diffeomorphisms, in Topologia differenziale pp. 93-126, 2011.
  21. K. MacLeod, A. Bäcker, and G. Laurent, Nature 395, 693-698 (1998).
  22. R. Nomura, Y. Liang, and T. Okada, PLoS One 10, e0140774 (2015).
  23. S. Suzuki, Y. Hirata and K. Aihara, Int. J. Bifurcat. Chaos 20, 3699-3708 (2010).
  24. Y. Hirata and K. Aihara, Chaos 25, 123117 (2015).
  25. S. Shinomoto, K. Shima, and J. Tanji, Neural Comp. 15, 2823-2842 (2003).
  26. T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed., John Wiley & Sons, Hoboken, 2006.
  27. Y. Hirata, D. P. Mandic, H. Suzuki and K. Aihara, Phil. Trans. R. Soc. A 366, 591-607 (2008).
  28. Y. Hirata, T. Takeuchi, S. Horai, H. Suzuki, and K. Aihara, Sci. Rep. 5, 15736 (2015).
  29. F. P. Schoenberg and K. E. Tranbarger, Environmetrics 19, 271-286 (2008).
  30. H. Hino, K. Takano and N. Murata, R Journal 7, 237-248 (2015).
  31. K. Iwayama, Y. Hirata and K. Aihara, Phys. Lett. A 381, 257-262 (2017).
  32. M. B. Kennel, Phys. Rev. E 56, 316-321 (1997).
  33. P. Grassberger and H. Kantz, Phys. Lett. 113A, 235 (1985).
  34. R. L. Davidchack, Y.-C. Lai, E. M. Bollt, and M. Dhamala, Phys. Rev. E 61, 1353 (2000).
  35. J. Plumecoq and M. Lefranc, Physica D 144, 231 (2000).
  36. J. Plumecoq and M. Lefranc, Physica D 144, 259 (2000).
  37. M. B. Kennel and M. Buhl, Phys. Rev. Lett. 91, 084102 (2003).
  38. Y. Hirata, K. Judd and D. Kilminster, Phys. Rev. E 70, 016215 (2004).
  39. P. König, A. K. Engel and W. Singer, Trends Neurosci. 19, 130-137 (1996).
  40. R. Azouz and C. M. Gray, Proc. Natl. Acad. Sci. USA 97, 8110-8115 (2000).
  41. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, IEEE Communications Magazine 40, 102 (2002).
  42. R. Liu, R. Shang, Y. Liu, X. Lu, Remote Sensing of Environment 189, 164-179 (2017).
  43. F. Clette, L. Svalgaard, J.M. Vaquero, E.W. Cliver, Space Science Reviews 186, 35–103 (2014).
  44. Y. Aono, K. Kazui, Int. J. of Climatology 28, 905–914 (2008).