# Empirical fixed point bifurcation analysis

###### Abstract

In a common experimental setting, the behaviour of a noisy dynamical system is monitored in response to manipulations of one or more control parameters. Here, we introduce a structured model to describe parametric changes in qualitative system behaviour via stochastic bifurcation analysis. In particular, we describe an extension of Gaussian Process models of transition maps, in which the learned map is directly parametrised by its fixed points and associated local linearisations. We show that the model recovers the behaviour of a well-studied one-dimensional system from little data, then learn the behaviour of a more realistic two-dimensional process of mutually inhibiting neural populations.

## 1 Introduction

Data, especially from biological systems, is often gathered through slow, noisy measurements, and as such standard modelling tools are often not suited to processing the raw experimental data. Time evolution of processes is frequently ignored, both on short time scales, such as during a single measurement, and on long ones – through repeated trials taking place over weeks or months.

Here we wish to address two features of experiments that standard analyses often neglect. Firstly, we explicitly model the short term evolution of the system using a stochastic dynamical system. Secondly, we wish to understand how the behaviour changes as we vary a parameter that the experimenter either has direct access to, or that changes slowly and may be measured over long time periods.

Unfortunately, providing simple summaries and comparisons of dynamical systems learned from data is no easy task, and it is a very active research area (Sussillo & Barak, 2013; Nonnenmacher et al., 2017; Sussillo et al., 2016). One popular approach has been to fit a simple model to the data, which has enabled scientists to glean insight into how systems change and evolve (Park et al., 2015). A frequent criticism of this approach is that such models do not adequately capture the data itself, so one has to be very careful when interpreting results.

On the other hand, more complex models may provide a very good description of almost arbitrary data, especially with the recent rise of various deep learning algorithms. Despite their success in predicting newly observed data, we are far from understanding how changes in the fitted parameters of such systems relate to changes in the data.

We take a middle approach here: we follow changes in dynamical systems via the bifurcations of their fixed points as the experimental parameter varies, a type of analysis mostly applied to known, continuous-time, noiseless systems. In order to use it in the wild, we need to relax these constraints and use a non-parametric stochastic map model in which the fixed points can be identified robustly. For this purpose, we employ a Gaussian Process model of the map, which we parametrise in terms of its fixed points and associated Jacobians.

The paper is organised as follows: In section 2 we review deterministic bifurcation analysis on a simple example, and discuss the implications of noise. In section 3, we first describe how to infer a map from time series data using Gaussian Processes, then add the fixed points and derivative structures to our model of the map. Finally, in section 4, we prove the feasibility of our approach on a one-dimensional pitchfork bifurcation, then examine a more realistic experimental scenario using a model of mutually inhibiting neural populations.

To provide a concise description of the model with the added clarity of indexing, we employ indicial notation with the Einstein summation convention throughout the paper. In short, the number of lower indices describes the order of a tensor, and repeated indices in the same term induce summation, e.g. $a_i b_i = \sum_i a_i b_i$, and $A_{ij} x_j = \sum_j A_{ij} x_j$. Furthermore, the third order Kronecker delta tensor is $\delta_{ijk} = 1$ iff $i = j = k$, and $0$ otherwise. For a much more detailed description of the notation system, we refer the reader to Appendix A.

## 2 Bifurcation analysis in discrete stochastic systems

We introduce concepts in bifurcation analysis through the normal form of the pitchfork bifurcation, described by the map

$$x_{t+1} = f(x_t) = x_t + \alpha\, x_t - x_t^3 \qquad (1)$$

Bifurcation analysis concerns the number and stability of fixed points of a system as a function of a parameter – here $\alpha$. When these change, the system is said to go through a bifurcation, a qualitative change in long term behaviour. Equation eq. 1 has a fixed point at $x^* = 0$ for all values of $\alpha$. We understand the stability of a fixed point in a map by the magnitude of the derivative (or the eigenvalues of the Jacobian in higher dimensional systems). If it is smaller in absolute value than $1$, the system will move towards the point, otherwise away from it, as can be checked by so-called cobwebbing. As $f'(0) = 1 + \alpha$, the fixed point at 0 exhibits a change in its stability at $\alpha = 0$. Furthermore, for $\alpha > 0$ two new fixed points arise, $x^* = \pm\sqrt{\alpha}$. Examining their stability, $f'(\pm\sqrt{\alpha}) = 1 - 2\alpha$, they are initially stable, becoming unstable at $\alpha = 1$. This analysis of the system is summarised in Figure 1A.
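The stability criterion above is easy to check numerically. The sketch below assumes the normal form map of eq. 1; the function names are ours, and the derivative is estimated by a central difference rather than computed analytically:

```python
import numpy as np

def pitchfork_map(x, alpha):
    """Normal-form pitchfork map: x_{t+1} = x_t + alpha*x_t - x_t^3."""
    return x + alpha * x - x ** 3

def fixed_points(alpha):
    """Fixed points solve f(x) = x, i.e. alpha*x - x^3 = 0."""
    if alpha <= 0:
        return np.array([0.0])
    return np.array([-np.sqrt(alpha), 0.0, np.sqrt(alpha)])

def is_stable(x_star, alpha, eps=1e-6):
    """A fixed point of a map is stable iff |f'(x*)| < 1 (central difference)."""
    deriv = (pitchfork_map(x_star + eps, alpha)
             - pitchfork_map(x_star - eps, alpha)) / (2 * eps)
    return abs(deriv) < 1.0
```

For $\alpha = 0.25$, for example, this recovers the three fixed points $0$ and $\pm 0.5$, with the outer pair stable and the origin unstable.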

Unfortunately even minuscule amounts of noise destroy this beautiful picture (Crauel & Flandoli, 1998). Stochastic bifurcation analysis examines random dynamical systems, and distinguishes between phenomenological and dynamical bifurcations. The former examines profiles of stationary probability densities, whereas the latter examines systems with frozen noise. Numerical analysis is best suited to finding phenomenological bifurcations, as by definition these cause noticeable changes in system behaviour (Arnold & Crauel, 1991). Empirically, the number and stability of the so-called random fixed points often differ from those of the noiseless system at a particular parameter value, thus also shifting the point of bifurcation (Diks, 2006; Kuehn, 2011; Wang et al., 2018). To illustrate the difficulty of the problem even in our simple example, examine Figure 1 B and C, where a deterministic system already past the point of bifurcation reverts to its former behaviour once noise is added.
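This effect is straightforward to reproduce in simulation. In the sketch below, the parameter value $\alpha = 0.1$ and the noise level are illustrative choices, not those used in the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma, T = 0.1, 0.2, 1000   # illustrative values

# Deterministic map, just past the bifurcation: the trajectory settles
# at the stable fixed point sqrt(alpha).
x = 0.01
for _ in range(T):
    x = x + alpha * x - x ** 3

# Same map with additive noise of comparable scale: the state keeps
# crossing zero, so the empirical density stays unimodal around zero,
# as if the bifurcation had not yet happened.
xn, prev, crossings = 0.01, 0.01, 0
for _ in range(T):
    xn = xn + alpha * xn - xn ** 3 + sigma * rng.standard_normal()
    if xn * prev < 0:
        crossings += 1
    prev = xn
```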

In known systems, the expected number of fixed points may be given, but in real world measured datasets we have little idea. Therefore we need a model that is capable of automatically determining the number and location of likely fixed points, given little data. In the following section we describe such an approach, based on explicitly parametrising Gaussian Process transition functions in terms of fixed points and corresponding Jacobians, incorporating a form of Automatic Relevance Determination to identify the number of constraints best supported by the data.

## 3 Model

In this section, we first introduce the type of datasets we expect for a single value of the bifurcation parameter, which arise frequently in biological studies, and the assumptions that underlie the models used here. We then describe the general machinery of learning a Gaussian Process transition map from data. Finally, in section 3.2 we describe our extension of this machinery, which enables the fixed points and the corresponding local linearisations to be optimised directly.

Given a dataset of repeated measurements of time-series of measured variables, we aim to model the data using a discrete-time, latent, non-linear, stochastic dynamical system:

$$x^{(n)}_{t+1} = f\!\left(x^{(n)}_t\right) + \epsilon_t \qquad (2)$$

$$y^{(n)}_t = g\!\left(x^{(n)}_t\right) + \eta_t \qquad (3)$$

where $x^{(n)}_t$ represents the state of the dynamical system at time $t$ on trial $n$; $f$ is the transition function, with additive noise $\epsilon_t$; and $g$ is the observation function, with additive noise $\eta_t$.

For simplicity, we only discuss the case where $g$ is a linear function, $g(x) = Cx + d$, and $\eta_t \sim \mathcal{N}(0, R)$. We can thus write down the conditional probability

$$p\!\left(y^{(n)}_t \mid x^{(n)}_t\right) = \mathcal{N}\!\left(y^{(n)}_t;\; C x^{(n)}_t + d,\; R\right) \qquad (4)$$

The framework may be extended easily to arbitrary observation functions and likelihoods (including Gaussian Process models of the observation function), as long as we can compute the probability $p(y_t \mid x_t)$, estimate its gradient with respect to $x_t$, and infer an approximation to the posterior over the latent states.

### 3.1 Sparse Gaussian process transition function inference and learning

We represent the scalar-valued stochastic transition functions as independent Gaussian Processes (GPs), such that the latent map is a collection of independent scalar maps, and we can easily query the predicted mean and variance of the map for any given input. Unfortunately, inferring a full GP from time series observations requires re-estimating the posterior after incorporating every new data point, then reevaluating the influence of previous observations on the current belief state, given the new estimate of the transition functions. This is very much intractable, but choosing a parameterised representation of the function over inducing points de-couples the current estimate of the function from the observations (McHutchon, 2014) and enables us to evaluate the model.

The process is thus a Sparse Gaussian Process (SGP) parametrised by: a positive definite kernel function $k(\cdot, \cdot)$; the inducing point locations $z_a$; and the uncertain values at those locations, represented as random variables drawn from $\mathcal{N}(\mu_a, \Sigma_{ab})$. The ability to represent different noise levels at different locations makes it possible to learn maps with heteroscedastic noise, and may also serve as an Automatic Relevance Determination (ARD) system to determine the required set of inducing points. Given the parameters, we can evaluate the expected mean and variance of each map at any input value $x^*$:

$$K_{ab} = k(z_a, z_b) \qquad (5)$$

$$k^*_a = k(x^*, z_a) \qquad (6)$$

$$\mu(x^*) = k^*_a\, K^{-1}_{ab}\, \mu_b \qquad (7)$$

$$\sigma^2(x^*) = k(x^*, x^*) - k^*_a\, K^{-1}_{ab}\, k^*_b + k^*_a\, K^{-1}_{ab}\, \Sigma_{bc}\, K^{-1}_{cd}\, k^*_d \qquad (8)$$

We then recover the vector-valued transition function as a collection of the scalar functions, $f(x) = \left(f_1(x), \ldots, f_d(x)\right)$. The learning of the transition function thus boils down to the estimation of the inducing point parameters $\{z, \mu, \Sigma\}$. Often the kernel function itself is parametrised, $k_\theta(\cdot, \cdot)$, in which case we may choose to learn the kernel hyperparameters $\theta$, too.
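As a concrete illustration, the predictive mean and variance of a sparse GP with uncertain inducing values can be sketched for a one-dimensional input with an Exponentiated Quadratic kernel and zero prior mean. The function names, the jitter term and the hyperparameter defaults are our own illustrative choices:

```python
import numpy as np

def eq_kernel(a, b, ell=1.0, sf2=1.0):
    """Exponentiated Quadratic kernel matrix between 1-d input sets a and b."""
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

def sgp_predict(x_star, z, mu_u, S_u, ell=1.0, sf2=1.0, jitter=1e-8):
    """Predictive mean/variance of a sparse GP whose inducing values are
    uncertain, u ~ N(mu_u, S_u), at locations z (zero prior mean)."""
    Kzz = eq_kernel(z, z, ell, sf2) + jitter * np.eye(len(z))
    Ksz = eq_kernel(np.atleast_1d(np.asarray(x_star, dtype=float)), z, ell, sf2)
    A = Ksz @ np.linalg.inv(Kzz)                    # K_*z K_zz^{-1}
    mean = A @ mu_u
    var = (sf2 - np.sum(A * Ksz, axis=1)            # prior minus explained part
           + np.einsum('ij,jk,ik->i', A, S_u, A))   # inducing-value uncertainty
    return mean, var
```

Far from all inducing points the prediction reverts to the prior (zero mean, full prior variance), a property used in the discussion of global stability below.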

#### 3.1.1 Inference

In order to infer a latent trajectory given the data and the current estimate of the transition and observation functions, various algorithms are available (Wan & Merwe, 2000). We choose to use Assumed Density Filtering (Ramakrishnan et al., 2011) here, based on empirical performance in tasks similar to ours (McHutchon, 2014).

We represent our belief about the latent state at time $t$ as a normal distribution with mean $m_t$ and diagonal covariance matrix $S_t$. We first need to propagate our belief through the transition function to get an estimate of our updated belief, which we approximate as a normal distribution $\mathcal{N}\!\left(m^-_{t+1}, S^-_{t+1}\right)$, with a non-diagonal covariance. The moments of this distribution may then be computed as:

$$m^-_{t+1,i} = \mathbb{E}_{x_t}\!\left[\mu_i(x_t)\right] \qquad (9)$$

$$S^-_{t+1,ij} = \mathbb{E}_{x_t}\!\left[\mu_i(x_t)\,\mu_j(x_t)\right] - m^-_{t+1,i}\, m^-_{t+1,j} + \delta_{ij}\, \mathbb{E}_{x_t}\!\left[\sigma^2_i(x_t)\right] \qquad (10)$$

where the required expectations may be computed in closed form for linear and Exponentiated Quadratic kernels, and are shown in Appendix B.

Given our belief about the latent state at time $t+1$, we need to incorporate the observation $y_{t+1}$ into our belief to obtain the updated moments. Thanks to our simple linear-Gaussian observation model, and our approximate belief, this can be done exactly:

$$S_{yy} = C\, S^-_{t+1}\, C^\top + R \qquad (11)$$

$$K = S^-_{t+1}\, C^\top\, S_{yy}^{-1} \qquad (12)$$

$$m_{t+1} = m^-_{t+1} + K\!\left(y_{t+1} - C\, m^-_{t+1} - d\right) \qquad (13)$$

$$S_{t+1} = S^-_{t+1} - K\, C\, S^-_{t+1} \qquad (14)$$

We then approximate the covariance matrix with its diagonal, as per moment matching, and proceed to carry out the filtering for the next time step, until the complete trajectory has been recovered. The parameters of the latent model we require for inference are the observation parameters $\{C, d, R\}$ together with the SGP parameters of each transition function.
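The exact linear-Gaussian update, followed by the diagonal approximation, can be sketched as follows (the function name and argument conventions are ours):

```python
import numpy as np

def linear_gaussian_update(m_pred, S_pred, y, C, d, R):
    """Exact update of the belief x ~ N(m_pred, S_pred) given an
    observation y ~ N(C x + d, R), followed by the diagonal
    approximation used for the next filtering step."""
    S_yy = C @ S_pred @ C.T + R              # innovation covariance
    K = S_pred @ C.T @ np.linalg.inv(S_yy)   # gain
    m = m_pred + K @ (y - C @ m_pred - d)    # updated mean
    S = S_pred - K @ C @ S_pred              # updated covariance
    return m, np.diag(np.diag(S))            # keep only the diagonal
```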

#### 3.1.2 Learning

In order to obtain a good estimate of the parameters of our model, we need to be able to learn them. There are many frameworks available to carry out this estimation (Titsias, 2009; Titsias & Lawrence, 2010; Bui et al., 2016); we chose, again based on empirical evidence, to use gradient ascent with the exact log marginal likelihood of the above described model as our objective function:

$$\mathcal{L} = \log p\!\left(y_{1:T}\right) = \sum_{t=1}^{T} \log \mathcal{N}\!\left(y_t;\; C\, m^-_t + d,\; C\, S^-_t\, C^\top + R\right) \qquad (15)$$

Equipped with this objective function, and the fact that our model is differentiable with respect to all of its parameters, we may optimise the parameters via gradient ascent. We may then iterate inference and learning steps until convergence.
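Schematically, the learning loop is plain gradient ascent on the objective. In the sketch below, a toy concave surrogate stands in for the filter-based log marginal likelihood, and a finite-difference gradient stands in for the model's analytic derivatives:

```python
import numpy as np

def log_marginal_likelihood(params):
    """Stand-in objective: a toy concave surrogate, maximised at params = 1."""
    return -np.sum((params - 1.0) ** 2)

def grad(f, params, h=1e-6):
    """Central finite-difference gradient; in the full model this would
    be replaced by automatic differentiation."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params)
        e[i] = h
        g[i] = (f(params + e) - f(params - e)) / (2 * h)
    return g

params = np.zeros(3)
for _ in range(200):                     # plain gradient ascent
    params = params + 0.1 * grad(log_marginal_likelihood, params)
```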

### 3.2 Conditioning on fixed points

We now wish to extend our framework towards a Fixed Point Sparse Gaussian Process, which provides explicit representation of the learned map’s fixed points and their local linearisation as parameters of the model.

We may think of a fixed point in a Sparse Gaussian Process as a special inducing point, whose value is tied to its location. Furthermore, to represent and inquire about the stability of said fixed point, we wish to attach derivative observations to that location, representing the local Jacobian. This way we may uncover the location and stability of fixed points in non-linear dynamical systems, via any optimisation algorithm.

The steps we need to go through for the derivation of the system largely follow what has been described in the previous section, with a few extra complications. Let the fixed points be represented as random variables drawn from $\mathcal{N}\!\left(\tilde{z}_p, \tilde{\sigma}^2_p\right)$, where the variances $\tilde{\sigma}^2_p$ may be used by the system to disable unnecessary fixed points, by setting this variance high. Each fixed point is associated with a local Jacobian, stored as a tensor $J_{pij}$.

We can thus extend the parameter set describing our current belief of the map with the fixed point locations, variances and Jacobians. We use this set of parameters to carry out the inference, requiring us to revisit eqs. 5 to 10. The main change comes from the fact that we wish to use the derivative observations attached to our fixed points during inference. This requires establishing a derivative Gaussian Process, whose kernel function is given by the derivative of the original kernel function with respect to both arguments,

$$k^{\partial\partial}_{aibj} = \frac{\partial^2\, k(x_a, x_b)}{\partial x_{ai}\, \partial x_{bj}} \qquad (16)$$

resulting in a fourth order tensor. It is useful to define a block-structured matrix version $\tilde{K}$. We may similarly define the cross-covariance between a regular and a derivative process, by taking the derivative with respect to only the corresponding argument of the original kernel function:

$$k^{\partial\cdot}_{aib} = \frac{\partial\, k(x_a, x_b)}{\partial x_{ai}} \qquad (17)$$

$$k^{\cdot\partial}_{abj} = \frac{\partial\, k(x_a, x_b)}{\partial x_{bj}} \qquad (18)$$

This results in the predictive moments for a noiseless input:

$$\mu_i(x^*) = \tilde{k}^*_A\, \tilde{K}^{-1}_{AB}\, \tilde{\mu}_{Bi} \qquad (19)$$

$$\sigma^2_i(x^*) = k(x^*, x^*) - \tilde{k}^*_A\, \tilde{K}^{-1}_{AB}\, \tilde{k}^*_B + \tilde{k}^*_A\, \tilde{K}^{-1}_{AB}\, \tilde{\Sigma}_{BC,i}\, \tilde{K}^{-1}_{CD}\, \tilde{k}^*_D \qquad (20)$$

where $A$ and $B$ represent the concatenation of the indices ranging over inducing points, fixed points and their derivative observations. For the inference we still need to consider propagating beliefs represented as Gaussian random variables, thus requiring us to compute the expectations of the predictive mean and its products under the input distribution. For the linear and the Exponentiated Quadratic kernels these expressions are available in closed form, although care must be taken during the computations, as for the latter term we need to take the expectation of kernel products between regular and derivative processes. In the general case these expectations are to be approximated numerically (Girard, 2004). For the detailed computations, see Appendix B.
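For the Exponentiated Quadratic kernel in one dimension, the derivative and cross-covariance kernels have simple closed forms, stated in the sketch below and verifiable against finite differences (hyperparameter names are illustrative):

```python
import numpy as np

def k(x, y, ell=1.0):
    """Exponentiated Quadratic kernel (unit signal variance, lengthscale ell)."""
    return np.exp(-0.5 * ((x - y) / ell) ** 2)

def dk_dx(x, y, ell=1.0):
    """Cross-covariance between the derivative and the regular process."""
    return -(x - y) / ell ** 2 * k(x, y, ell)

def d2k_dxdy(x, y, ell=1.0):
    """Covariance of the derivative process: d^2 k / (dx dy)."""
    return (1.0 / ell ** 2 - (x - y) ** 2 / ell ** 4) * k(x, y, ell)
```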

The learning does not change significantly, our objective function remains the same, and our operations remain differentiable with respect to all parameters.

## 4 Experiments

Equipped with the fully described model, we are ready to test it. As our first example, we return to the well-studied system described in section 2. We then study changes of fixed point patterns in an influential model of mutually inhibiting neural populations during decision making (Machens et al., 2005).

### 4.1 Stochastic pitchfork bifurcation

We may now write down the stochastic version of eq. 1:

$$x_{t+1} = x_t + \alpha\, x_t - x_t^3 + \epsilon_t \qquad (21)$$

where $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ i.i.d. We examine how varying $\alpha$ affects the learned fixed points. We trained the system using 32 trials, lasting 20 time steps each, at a fixed noise level and initial condition. Note that this is less data in total than shown in Figure 1 C. We then fit our model to the data with 16 inducing points and an overestimate of 5 fixed points, letting the ARD formulation determine the number of fixed points present in the system.
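Data generation for this experiment can be sketched directly from eq. 21; the noise level, initial-condition distribution and seed below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def simulate_trials(alpha, sigma=0.2, n_trials=32, T=20, seed=0):
    """Generate trials from the stochastic pitchfork map of eq. (21).
    sigma, the initial-condition spread and the seed are illustrative."""
    rng = np.random.default_rng(seed)
    X = np.empty((n_trials, T + 1))
    X[:, 0] = rng.uniform(-0.5, 0.5, n_trials)   # assumed initial spread
    for t in range(T):
        X[:, t + 1] = (X[:, t] + alpha * X[:, t] - X[:, t] ** 3
                       + sigma * rng.standard_normal(n_trials))
    return X
```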

We first confirmed that the method indeed captures the available data very well for various values of the bifurcation parameter $\alpha$, as shown in Figure 2 A and B.

We then create the bifurcation plot, Figure 2C, based on the learned parameter values. The identified fixed points faithfully track the expected locations and stabilities, and the true number of fixed points is successfully recovered. Consistent with previous findings on similar systems (Diks, 2006), we indeed find that noise shifts the bifurcation towards larger values of $\alpha$, and when the distances between the noiseless fixed points are on the order of the noise, the random fixed points are not detectable from data.

### 4.2 Mutually inhibiting neural populations

Having recovered previous results with our highly flexible system, we now turn our attention to a system closer to the data-analysis-in-the-wild type problems we aimed to solve.

The data used comes from a simulated system, but one that was optimised to match the behaviour of measured neural populations. For more details about the experiment and the simulation, we refer the reader to the excellent paper by Machens, Romo and Brody (2005).

In short, the system consists of an external excitation and so-called negative and positive populations. In the current study we do not take into account possible differential inputs to the populations, so for our purposes the model is completely symmetric. We slightly reformulate the system equations to match the language used throughout the paper. The simulated system is thus governed by the map

$$x^{+}_{t+1} = h\!\left(s - w\, x^{-}_t\right) + \epsilon^{+}_t \qquad (22)$$

$$x^{-}_{t+1} = h\!\left(s - w\, x^{+}_t\right) + \epsilon^{-}_t \qquad (23)$$

where $\epsilon^{\pm}_t$ is iid noise, $w$ is the inhibitory weight of the populations, and, importantly, $h$ is a numerically optimised function, which defines the nullclines of the system and gives rise to interesting system behaviour as the external driving input $s$ – our bifurcation parameter – is varied.
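A minimal simulation of this symmetric map can be sketched as follows. Since the true nullcline-defining function is numerically optimised and not reproduced here, the sigmoidal `h`, the weight `w` and the noise level are hypothetical placeholders:

```python
import numpy as np

def h(u):
    """Hypothetical sigmoidal gain; the real nullcline-defining function
    in Machens et al. (2005) is numerically optimised."""
    return 1.0 / (1.0 + np.exp(-u))

def simulate(s, w=2.0, sigma=0.05, T=80, seed=0):
    """Symmetric pair of mutually inhibiting populations driven by the
    external input s (all parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.full(2, 0.5)
    traj = [x.copy()]
    for _ in range(T):
        x = (np.array([h(s - w * x[1]), h(s - w * x[0])])
             + sigma * rng.standard_normal(2))
        traj.append(x.copy())
    return np.array(traj)
```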

The nullclines generally cross one another, giving rise to fixed points whose stability depends on the angle of crossing. In particular, the original study finds that at one extreme, two stable fixed points are created far from one another, and may be used for decision making, whereas at the other, there is only a single fixed point, useful for loading new information robustly. Notably, for a special value of the external input, the nullclines were designed to overlap completely, creating a line attractor, which is used for maintaining the system state.

Note that this model was carefully designed to behave so, and our goal here is to estimate such behaviour purely from data generated from the model. We simulated 60 two-dimensional trajectories, each consisting of 80 time steps, corresponding to 2 seconds of experimental data sampled at 25 msec steps, and injected additive noise at every time step.

Examining the results in Figure 3 A–C, the system behaviour was very well captured by the estimated fixed points, including the number, location and stability of the points. Beyond the stability of the system being inferred correctly, examining the eigenspectrum of the central point in Figure 3D, we can indeed follow its stabilisation from a saddle with one stable and one unstable direction to a mostly stable fixed point.

## 5 Discussion

Studying real systems, especially in biological experiments where we have little knowledge of the governing equations, is a hard but ubiquitous problem. In the current study we designed an algorithm aimed at the study of random dynamical systems measured at discrete times, in which we can modify or measure a variable influencing the system. We provide a simple analysis of a very complicated question, namely when the system behaviour changes qualitatively as a system parameter varies. This question broadly encompasses large fields of research, particularly cell biology and neuroscience.

All analyses come with limitations, of course. The current work is aimed at fixed point bifurcations; at the moment we cannot sufficiently describe limit cycles or attractors of more than zero dimensions, beyond a rudimentary approximation in the form of aligned fixed points. Another issue that often comes up in learning dynamical systems is that of global stability. Fortunately, Gaussian Processes with many types of kernel functions decay rapidly away from data, so our learned map would likely push the state back towards the origin as soon as a new input wanders away from the observations.

Our core contribution is the Fixed Point Sparse Gaussian Process formulation, in which fixed points appear explicitly as parameters of the model fit to data, and may thus be identified directly by parameter optimisation methods. This core idea has many potential extensions highlighted throughout the paper, including extensions to non-linear maps from latent dynamics to observations, or to hierarchical (deep) Gaussian Processes models of the transition map itself, which are capable of capturing more complex and less smooth structures.

Furthermore, combining this powerful stochastic representation with the ability to robustly identify fixed points in unknown systems may indeed bring further effort into stochastic bifurcation analysis, an exciting and very powerful methodology, still in its infancy.

## References

- Arnold & Crauel (1991) Arnold, Ludwig and Crauel, Hans. Random dynamical systems. pp. 1–22, 01 1991.
- Bui et al. (2016) Bui, Thang D., Yan, Josiah, and Turner, Richard E. A unifying framework for gaussian process pseudo-point approximations using power expectation propagation, 2016.
- Crauel & Flandoli (1998) Crauel, Hans and Flandoli, Franco. Additive noise destroys a pitchfork bifurcation. J. Dyn. Differ. Equations, 10(2):259–274, 1998. ISSN 1040-7294. doi: 10.1023/A:1022665916629. URL http://link.springer.com/article/10.1023/A:1022665916629.
- Diks (2006) Diks, C. A weak bifurcation theory for discrete time stochastic dynamical systems. Tinbergen Inst. Discuss. Pap. No. 06-, (June):1–30, 2006. URL http://papers.ssrn.com/sol3/papers.cfm?abstract_id=901422.
- Girard (2004) Girard, Agathe. Approximate methods for propagation of uncertainty with Gaussian process models. Ph.D. Thesis, (October), 2004. URL http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.1826&rep=rep1&type=pdf.
- Kuehn (2011) Kuehn, Christian. A mathematical framework for critical transitions: Bifurcations, fast-slow systems and stochastic dynamics. Phys. D Nonlinear Phenom., 240(12):1020–1035, 2011. ISSN 01672789. doi: 10.1016/j.physd.2011.02.012.
- Machens et al. (2005) Machens, Christian K., Romo, Ranulfo, and Brody, Carlos D. Flexible control of mutual inhibition: A neural model of two-interval discrimination. Science, 307(5712):1121–1124, 2005. ISSN 00368075. doi: 10.1126/science.1104171. URL http://www.sciencemag.org/cgi/doi/10.1126/science.1104171.
- McHutchon (2014) McHutchon, Andrew. Nonlinear Modelling and Control using Gaussian Processes. Ph.D. Thesis, University of Cambridge, 2014.
- Nonnenmacher et al. (2017) Nonnenmacher, Marcel, Turaga, Srinivas C, and Macke, Jakob H. Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations. Adv. Neural Inf. Process. Syst., (Nips), 2017.
- Park et al. (2015) Park, Mijung, Bohner, Gergo, and Macke, Jakob H. Unlocking neural population non-stationarities using hierarchical dynamics models. Adv. Neural Inf. Process. Syst., pp. 145–153, 2015.
- Ramakrishnan et al. (2011) Ramakrishnan, N., Ertin, E., and Moses, R. L. Assumed density filtering for learning gaussian process models. In 2011 IEEE Statistical Signal Processing Workshop (SSP), pp. 257–260, June 2011. doi: 10.1109/SSP.2011.5967674.
- Sussillo & Barak (2013) Sussillo, David and Barak, Omri. Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks. Neural Comput., 25(3):626–649, 2013. ISSN 0899-7667. doi: 10.1162/NECO_a_00409. URL http://www.mitpressjournals.org/doi/10.1162/NECO_a_00409.
- Sussillo et al. (2016) Sussillo, David, Jozefowicz, Rafal, Abbott, L. F., and Pandarinath, Chethan. LFADS - Latent Factor Analysis via Dynamical Systems. arXiv, 2016. URL http://arxiv.org/abs/1608.06315.
- Titsias (2009) Titsias, Michalis. Variational learning of inducing variables in sparse gaussian processes. In van Dyk, David and Welling, Max (eds.), Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pp. 567–574, Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA, 16–18 Apr 2009. PMLR. URL http://proceedings.mlr.press/v5/titsias09a.html.
- Titsias & Lawrence (2010) Titsias, Michalis and Lawrence, Neil. Bayesian Gaussian Process Latent Variable Model. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 844–851, 2010. URL http://eprints.pascal-network.org/archive/00006343/.
- Wan & Merwe (2000) Wan, E. A. and Merwe, R. Van Der. The unscented kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), pp. 153–158, 2000. doi: 10.1109/ASSPCC.2000.882463.
- Wang et al. (2018) Wang, Hui, Chen, Xiaoli, and Duan, Jinqiao. A Stochastic Pitchfork Bifurcation in Most Probable Phase Portraits. pp. 1–9, 2018. URL http://arxiv.org/abs/1801.03739.