Computing invariant sets of random differential equations using polynomial chaos

Maxime Breden, Technical University of Munich, Faculty of Mathematics, Research Unit “Multiscale and Stochastic Dynamics”, 85748 Garching b. München, Germany. maxime.breden@tum.de
Christian Kuehn, Technical University of Munich, Faculty of Mathematics, Research Unit “Multiscale and Stochastic Dynamics”, 85748 Garching b. München, Germany. ckuehn@ma.tum.de
Abstract

Differential equations with random parameters have gained significant prominence in recent years due to their importance in mathematical modelling and data assimilation. In many cases, random ordinary differential equations (RODEs) are studied by using Monte-Carlo methods or by direct numerical simulation techniques using polynomial chaos (PC), i.e., by a series expansion of the random parameters in combination with forward integration. Here we take a dynamical systems viewpoint and focus on the invariant sets of differential equations such as steady states, stable/unstable manifolds, periodic orbits, and heteroclinic orbits. We employ PC to compute representations of all these different types of invariant sets for RODEs. This allows us to obtain fast sampling, geometric visualization of distributional properties of invariant sets, and uncertainty quantification of dynamical output such as periods or locations of orbits. We apply our techniques to a predator-prey model, where we compute steady states and stable/unstable manifolds. We also include several benchmarks to illustrate the numerical efficiency of adaptively chosen PC depending upon the random input. Then we employ the methods for the Lorenz system, obtaining computational PC representations of periodic orbits, stable/unstable manifolds and heteroclinic orbits.

Keywords: invariant manifold, periodic orbit, heteroclinic orbit, Lorenz system, polynomial chaos, random differential equation.

1 Introduction

In this work, we study random nonlinear dynamical systems. More precisely, we focus on nonlinear random ordinary differential equations (RODEs) of the form

(1)

where is a smooth vector field and denotes random parameters with given distributions. From the viewpoint of applications, it is frequently natural to assume that the parameters are only known from measurements, which naturally carry an associated probability distribution. Then the challenge is to quantify the uncertainty in the “output” of the RODE (1) based upon the random input. Yet, we are still far from fully understanding the nonlinear dynamics of such RODEs. In classical uncertainty quantification problems [12, 19], one is often interested [21, 33] in the moments as an output of the solution for , where denotes the expectation. Here we take a dynamical systems perspective focusing on the invariant sets (e.g. steady states, periodic orbits, invariant manifolds, connecting orbits, etc.) of (1), and especially on understanding their dependence on the noisy parameters . In this paper, we study the invariant sets from a numerical viewpoint. However, the framework that we develop is well suited to the usage of rigorous numerics [30], and in particular of a-posteriori validation techniques [32], which could be applied to obtain rigorous results about these stochastic invariant sets. This idea will be presented in a forthcoming work.

Let us start by mentioning that there exist many well-developed techniques to numerically study invariant sets of deterministic ODEs. Therefore, a natural way of studying the invariant sets of (1) would be to use a Monte-Carlo type approach: consider a large sample of values taken according to the distribution of , and for each study the invariant sets of the deterministic ODE . However, this approach is known to be very costly, because it requires a large sample to accurately represent the statistics of the invariant sets [10]. In this work we make use of a different technique, namely polynomial chaos (PC) expansions [35, 11], to accurately compute invariant sets of (1). Roughly speaking, we view each invariant set of (1) as a curve parametrized by (or as a manifold if is more than one-dimensional), and compute such a parameterization explicitly via a PC expansion. This can be thought of as a parameter continuation in , but in an astute way that allows us to obtain not only the geometrical object (i.e. the curve/manifold of invariant sets), but also statistical properties of this object (e.g. mean position, variance, …). Furthermore, our techniques naturally extend to numerical continuation algorithms [16, 17] if the probability distributions of contain further parameters, which is a direction we will pursue in future work.

Let be an invariant set of (1), i.e., trajectories starting inside remain in for all . In this work, we focus on the cases where represents a steady state, a stable/unstable manifold, a periodic orbit or a heteroclinic orbit, which are among the most important objects in nonlinear dynamics [18, 13]. One key observation is that PC expansions can be used to unify the computational framework for invariant sets and that they provide a natural deterministic structure to view the dynamics of nonlinear random ODEs. Our goal is to find a series expansion of as a function of :

(2)

Note that the coefficients have to be found as solutions of suitable nonlinear problems to correctly represent the different invariant sets; we develop the details for each type of invariant set in this work. In practice, the choice of the expansion basis is of course also crucial, as it determines the quality of the approximation, or more precisely the number of coefficients required to obtain a good enough approximation. For the rest of this discussion, we assume for simplicity that is one-dimensional.

If is analytic with respect to , then we can expect to also be analytic as a function of , at least around values of that are not bifurcation values. Therefore, in many cases one could think about writing as a Taylor series by taking . However, it is well known in approximation theory that faster convergence can be achieved by considering instead Chebyshev series or Legendre series (i.e. taking or , where and respectively denote Chebyshev and Legendre orthogonal polynomials), mainly because these expansions are less sensitive to potential poles of in the complex plane; see e.g. [29]. Chebyshev or Legendre expansions also have the advantage of being convergent even when is only of class , where the coefficients decay at an algebraic rate, rather than at the geometric rate obtained in the analytic case. If is deterministic, then looking at the decay rates of the coefficients is a relevant benchmark, because it is related to the error in -norm. For instance, if is the truncated series given by

then for a Taylor expansion, a Chebyshev expansion or a Legendre expansion alike one has

because for each of these choices one has . However, when is a random variable, one is more interested in controlling different quantities such as

where denotes the variance. To minimize the error in the moments, the choice of the expansion is critical not only because it influences the decay of the coefficients, but also because the themselves appear in the error term. For instance, one has

Therefore, two different expansions leading to the same decay of the coefficients may not lead to an error of the same order for the expectation or the variance. Of course, one could change these weights by rescaling the , but this only shifts the problem because the rescaling would then affect the decay of the coefficients.
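As a small illustration of this comparison (the function and all numerical values are ours, chosen only for illustration), the following snippet looks at the coefficient decay for the analytic function f(p) = 1/(2 - p) on [-1, 1]: its pole at p = 2 limits the Taylor coefficients to the geometric rate 1/2, while the Chebyshev coefficients decay roughly like (2 + sqrt(3))^(-k), i.e. considerably faster.

```python
import numpy as np
from numpy.polynomial import chebyshev

# Illustrative analytic function with a pole at p = 2, just outside [-1, 1]
f = lambda p: 1.0/(2.0 - p)

taylor = np.array([2.0**(-(k + 1)) for k in range(31)])   # exact Taylor coefficients about 0
cheb = chebyshev.chebinterpolate(f, 30)                    # degree-30 Chebyshev interpolant

for k in (5, 10, 20):
    print(f"k={k:2d}   |Taylor coeff| = {taylor[k]:.2e}   |Chebyshev coeff| = {abs(cheb[k]):.2e}")
```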

In this work, we use the PC paradigm to minimize such quantities, by choosing an expansion basis that is adapted to the distribution of the noisy parameter . PC expansions have become a very important tool in uncertainty quantification in the last decades, and a review of their many applications is far beyond the scope of the present work. We instead refer the interested reader to the survey [38] and the book [19].

The paper is structured as follows. First, we review the PC methodology in Section 2. In Section 3 we introduce some classical techniques to numerically study periodic orbits, invariant manifolds and connecting orbits of deterministic ODEs. Then we proceed to the main contributions of our work. We explain how to combine the numerical study of invariant sets with PC expansions to study the dynamics of nonlinear RODEs. In Section 4 we focus on a relatively simple example, where many quantities can also be computed analytically, which allows us to benchmark our numerical computations in the context of steady states. Yet, we also go beyond explicit structures and compute the distribution of stable/unstable manifolds using the “parameterization method” in combination with PC. Furthermore, we explain the implications of the random structure of the invariant sets and how to gain information from the moments of the invariant sets very efficiently. In Section 5 we then turn our attention to another example, where analytic computations are no longer available, and showcase the potential of our approach on the Lorenz system. For this system, we compute periodic orbits using a combined Fourier and PC ansatz, and we again compute invariant stable/unstable manifolds, as well as heteroclinic connections.

2 A quick review on PC

Let be a probability distribution function (PDF) having finite moments, i.e.

Given normalization constants , for all , there exists a unique family of orthogonal polynomials associated to the weight , i.e. satisfying

The most classical examples are:

  • The Hermite polynomials , which correspond to and ;

  • The Laguerre polynomials , which correspond to and ;

  • The Jacobi polynomials , , which correspond to and , where is the Euler beta function.

Within the class of the Jacobi polynomials, we list a few remarkable cases (sometimes having different normalizations) that we make use of in this work:

  • The Legendre polynomials , which correspond to and ;

  • The Chebyshev polynomials of the first kind , which correspond to and , , for ;

  • The Chebyshev polynomials of the second kind , which correspond to and ;

  • The Gegenbauer or ultraspherical polynomials , , , which correspond to and .

For a more complete description of PC choices and their relations to the Askey scheme, see [39].
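As a quick numerical sanity check of these orthogonality relations (illustrative only, with the weights normalized so that they integrate to one on [-1, 1]), one can verify a few inner products directly:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def inner_legendre(m, n):              # Legendre: weight rho(x) = 1/2 on [-1, 1]
    return quad(lambda x: eval_legendre(m, x)*eval_legendre(n, x)*0.5, -1, 1)[0]

def inner_chebyshev(m, n):             # Chebyshev T: weight 1/(pi*sqrt(1-x^2)); substitute x = cos(t)
    return quad(lambda t: np.cos(m*t)*np.cos(n*t)/np.pi, 0, np.pi)[0]

print(inner_legendre(3, 5), inner_legendre(4, 4))     # ~0 and 1/(2*4 + 1)
print(inner_chebyshev(2, 6), inner_chebyshev(3, 3))   # ~0 and 1/2
```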

Remark 2.1.

In this work, we only consider parameters having a PDF with bounded support. Indeed, in most applications these parameters have a physical meaning, for instance they could represent a quantity which must always be nonnegative, and a PDF with unbounded support, like a Gaussian, would mean that said parameters would be negative with positive probability, which is not realistic. However, many sources of uncertainty are still expected to have a Gaussian-like behavior, in the sense that their PDF should be concentrated around a point. In that case, a good compromise is to use Beta distributions, which are the weights associated to the Gegenbauer polynomials and provide good bounded approximations of Gaussian distributions, at least for small variances (see e.g. [36], or the comparison in Figure 1). Another widely used option is to use truncated Gaussian distributions.

Figure 1: Comparison between the Gaussian probability distribution , in blue, and the symmetric Beta probability distribution , in red, for several values of , and with .
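The comparison in Figure 1 can be reproduced qualitatively in a few lines; here the Beta parameters and the interval are illustrative choices (not taken from the paper), and the Gaussian is matched to the mean and standard deviation of the scaled Beta distribution.

```python
import numpy as np
from scipy import stats

mu, r, alpha = 1.0, 0.5, 10.0                              # illustrative center, half-width, Beta parameter
beta = stats.beta(alpha, alpha, loc=mu - r, scale=2*r)     # symmetric Beta on [mu - r, mu + r]
gauss = stats.norm(loc=mu, scale=beta.std())               # Gaussian with the same mean and std

x = np.linspace(mu - 0.8*r, mu + 0.8*r, 5)
print(beta.std())          # equals r/sqrt(2*alpha + 1)
print(beta.pdf(x))
print(gauss.pdf(x))        # close to the Beta density for moderately large alpha
```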

We recall that if has compact support, or decays at least exponentially fast at infinity (see e.g. [8]), then is a Hilbert basis of , i.e. any measurable function such that

admits a unique series expansion of the form

where the series converges in .

Now, assume that the noisy parameter (still assumed to be one dimensional for the moment) has a PDF given by . The PC paradigm then tells us that we should use the orthogonal polynomials associated to as a basis for the expansion (2) of . Notice that with such a choice, the coefficients of the expansion directly provide us with the mean and variance of . Indeed, by orthogonality (and assuming for simplicity) we have

and similarly
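In standard notation (the symbols below are ours, introduced only to make the identities explicit): writing $u(p)=\sum_{k\ge 0}u_k\phi_k(p)$ with $\phi_0\equiv 1$ and with $\rho$ denoting the PDF of $p$, these two identities read
\[
\mathbb{E}[u] \,=\, \int u(p)\,\rho(p)\,\mathrm{d}p \,=\, u_0,
\qquad
\operatorname{Var}[u] \,=\, \int \bigl(u(p)-u_0\bigr)^2\rho(p)\,\mathrm{d}p \,=\, \sum_{k\ge 1} u_k^2\,\|\phi_k\|_{\rho}^2,
\]
where $\|\phi_k\|_{\rho}^2=\int \phi_k(p)^2\rho(p)\,\mathrm{d}p$ is the normalization constant of the PC basis.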

If consists of several independent random variables, each with respective PDF , , one can consider the PC basis constructed as a tensor product of the univariate bases. That is, for all and all ,

where is a basis of orthogonal polynomials associated to the weight . In practice, there are several ways to compute the coefficients of a PC expansion (2). Notice that directly using the orthogonality relations to get

is usually not one of them, as is not known a-priori but is actually what we want to compute via the PC expansion. To formalize the discussion, assume that the quantity of interest for which we want to find a PC expansion solves a problem depending on a parameter , of the form

where . One common way to find the coefficients is to solve the system obtained by Galerkin projection:

(3)

Of course, in practice one only solves for a truncated expansion

by considering a finite-dimensional projection of associated dimension, i.e. in (3). This is the technique we make use of in this work. Another possible option is to use a collocation/interpolation approach [37]. This technique is based on first solving the deterministic problem several times for a well chosen sample of parameter values , i.e. computing that solves

The polynomial chaos coefficients are then constructed by interpolation:

where are weights associated to the interpolation points in such a way that, for any smooth function

where is the PDF of . For a detailed discussion about these two approaches and their respective merits and limitations, we refer to the survey [38], the book [19] and the references therein. In practice, after an accurate PC approximation

of has been computed, we can then do Monte Carlo simulations for a very large sample of values of . Indeed, instead of having to solve the deterministic problem for each value of the sample, we only have to evaluate to obtain .
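To make the last point concrete, here is a minimal non-intrusive sketch (all choices are illustrative and not from the paper): the "expensive deterministic solve" is replaced by an explicitly known scalar quantity u(p) = 1/(2 + p), the parameter is taken uniform on [-1, 1] so that Legendre polynomials form the associated PC basis, the coefficients are obtained by Gauss-Legendre quadrature, and the resulting expansion is then sampled at negligible cost.

```python
import numpy as np
from numpy.polynomial import legendre

K = 10                                      # PC truncation order
nodes, weights = legendre.leggauss(K + 1)   # Gauss-Legendre quadrature on [-1, 1]

u = lambda p: 1.0/(2.0 + p)                 # stand-in for an expensive deterministic solve
u_nodes = u(nodes)

# Projection: u_k = (2k+1)/2 * \int u(p) P_k(p) dp, approximated by quadrature
coeffs = np.array([(2*k + 1)/2 * np.sum(weights * u_nodes * legendre.legval(nodes, [0]*k + [1]))
                   for k in range(K + 1)])

# Once the expansion is known, sampling reduces to polynomial evaluation
p_sample = np.random.uniform(-1, 1, 10**6)
u_pc = legendre.legval(p_sample, coeffs)
print("mean:", coeffs[0], "vs MC", u_pc.mean())
print("var :", np.sum(coeffs[1:]**2/(2*np.arange(1, K + 1) + 1)), "vs MC", u_pc.var())
```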

3 Computation of periodic orbits, invariant manifolds and connecting orbits

In this section, we review some classical techniques to numerically study periodic orbits, invariant manifolds and connecting orbits of deterministic ODEs. An exhaustive review of these techniques is far beyond the scope of this work, and we only focus on one technique for each case, although alternative methods could certainly also be used; see e.g. [15] for a comparison of methods for computing stable/unstable manifolds. All the techniques presented here are based on series expansions, and this choice is motivated by two main reasons. The first and most important one is that series expansions can easily and efficiently be combined with PC expansions, once we go to the stochastic setting. This is particularly relevant for limit cycles, and is to be contrasted with more classical long-term integration approaches, for which PC expansions are known to be ill-behaved. The second one is that series expansions are particularly well suited for a-posteriori validation techniques [32], which we plan on developing in this context in a future work. In each case, we explain how an extra layer of PC expansion can be added, to keep track of the stochastic nature of the parameters. Illustrations for all these methods are presented in Section 4 and Section 5.

3.1 Periodic orbits via Fourier series

We start by considering the parameter dependent problem (1) from a deterministic point of view. Limit cycles of (1) can be efficiently studied and computed using Fourier series. That is, for a fixed , we write a -periodic solution of (1) as

(4)

where . To numerically find a periodic orbit, we thus solve for the coefficients and , which satisfy the system obtained by plugging (4) into (1), namely

(5)

where are the Fourier coefficients of .

Remark 3.1.

In practice, it is helpful to complement the above system with a phase condition. For simplicity we choose to fix a section through which the solution has to pass at time , but other options are available [18]. More precisely, we add a scalar equation of the form

where is some (approximate) point on the orbit, , and we use the dot product to denote the scalar product on (and avoid potential confusion with the scalar product ). The phase condition allows us to isolate the solution by eliminating time-shifts, which makes the system easier to solve in practice, especially using iterative methods. Indeed, if is a periodic orbit of (1), then so is any function of the form , but there is (locally) only one that also satisfies the phase condition.

This approach of solving for the Fourier coefficients and the frequency has several advantages, compared to numerically integrating (1) and trying to find an (approximately) closed orbit. Indeed, the approach is not sensitive to the linear stability or instability of the limit cycle, and is therefore not prone to diverging if one tries to approximate an unstable periodic orbit, which is a definite concern for time-integration based techniques. We illustrate this point in Section 5 by computing unstable limit cycles belonging to the chaotic attractor of the Lorenz system. The Fourier series approach is also particularly well adapted to continuation algorithms, especially to compute limit cycles originating from a Hopf bifurcation, where the linearized analysis can predict the first Fourier coefficient and the frequency.
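The following sketch illustrates this idea on the van der Pol oscillator (an illustrative choice, not a system from this paper), using Fourier collocation on a period-rescaled time grid: the unknowns are the grid values of the orbit together with the frequency, a section-type phase condition as in Remark 3.1 is appended, and the resulting square system is solved with a generic root finder. This is a collocation variant of (5), solving for grid values rather than for the Fourier coefficients directly.

```python
import numpy as np
from scipy.optimize import fsolve

mu = 1.0                                  # van der Pol parameter (illustrative)
N = 32                                    # collocation points on the rescaled period [0, 2*pi)
theta = 2*np.pi*np.arange(N)/N
k = np.fft.rfftfreq(N, d=1.0/N)           # integer wavenumbers 0, 1, ..., N/2

def dtheta(u):
    """Spectral derivative with respect to the rescaled time theta."""
    return np.fft.irfft(1j*k*np.fft.rfft(u), n=N)

def residual(z):
    x, y, omega = z[:N], z[N:2*N], z[2*N]
    r1 = omega*dtheta(x) - y                          # x' = y
    r2 = omega*dtheta(y) - (mu*(1 - x**2)*y - x)      # y' = mu*(1 - x^2)*y - x
    phase = y[0]                                      # section {y = 0} at theta = 0, cf. Remark 3.1
    return np.concatenate([r1, r2, [phase]])

# initial guess: harmonic oscillation of amplitude 2 and frequency 1
z0 = np.concatenate([2*np.cos(theta), -2*np.sin(theta), [1.0]])
z = fsolve(residual, z0)
print("frequency:", z[-1])                # about 0.943 for mu = 1, i.e. a period of about 6.66
```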

If we now consider that is random and has a given PDF , it is natural to still consider a Fourier ansatz [22]. It is straightforward to extend the Fourier series approach by expanding each Fourier coefficient, together with the frequency, with PC. Namely, write

or equivalently

(6)

In practice, we of course consider truncated expansions

for which we solve using (5). Each now belongs to instead of , and belongs to instead of . An explicit example of such a system together with numerical solutions is given in Section 5. In this setting, the Fourier series approach also has the notable advantage of separating the random period (or equivalently the random frequency ) from the description of the random cycle given by the coefficients . In particular, we avoid the usual pitfalls related to phase-drift and broadening of the spectrum, which are the main reasons why limit cycles are hard to compute using PC combined with time integration [7]. A similar idea was introduced in [20, 27], where a random time rescaling is used to compensate for the random period, which significantly improves the long-time behavior of the PC expansions and allows one to better capture stable limit cycles. Yet a Fourier series approach accomplishes this naturally, and can also be used to study unstable limit cycles.

3.2 Local invariant manifolds via Taylor series and the parameterization method

We again start by considering the parameter dependent problem (1) from a deterministic point of view, but now focus on studying local stable and unstable manifolds attached to equilibrium points. In this section we only discuss the case of stable manifolds, but unstable manifolds can of course be studied with the same techniques. Let be an equilibrium point of (1), i.e. , and assume that the derivative has exactly eigenvalues with negative real part. For simplicity, we also assume that each of these eigenvalues is simple, and denote by associated eigenvectors. Our goal is to find a parameterization of the local stable manifold of . We look for a power series representation of :

(7)

with the classical multi-indexes notations and . Since we want to be a parameterization of the local stable manifold of , we must have

(8)

where are scaling factors that can be adjusted. To obtain the higher-order terms, we follow the idea of the parameterization method, introduced in [3, 4, 5] (see also the recent book [14]). We want to obtain a parameterization that conjugates the dynamics on the stable manifold with the stable dynamics of the linearized system. More precisely, introducing the diagonal matrix with diagonal entries , we want to satisfy (see Figure 2)

(9)

where is the flow generated by the vector field and .

Figure 2: Schematic illustration of the parameterization method. We want the parameterization to conjugate the nonlinear flow to the linearized flow generated by on the stable subspace.
Remark 3.2.

If some of the stable eigenvalues are complex conjugate, say , it is more convenient to first look for a complex-valued parameterization with , and then recover a real-valued parameterization via

see e.g. [31].

Finding a parameterization satisfying (9) is interesting because it provides us with not only a local stable manifold but also an explicit description of the dynamics on this manifold. However, the formulation (9) is not the most convenient one to work with in order to determine the higher order coefficients of , because it involves the flow. To get rid of it, one can take a time derivative of (9) and evaluate at , to obtain the following invariance equation

(10)

One can check that, if solves (10) and is such that , then satisfies (9), and therefore is indeed a parameterization of the local stable manifold of . The invariance equation (10) is the one we are going to use to numerically find the coefficients . Plugging the expansion (7) into the invariance equation (10), we obtain

(11)

where are the Taylor coefficients of and , therefore the coefficients must satisfy

(12)

Notice that (8) already ensures that (12) is satisfied for . In order to solve (12) for , let us first describe how depends on . Given a Taylor series of the form (7), we define for all the truncated series

Notice also that, for any , . Besides, using a Taylor expansion of in the variable, we have for all

and looking at the coefficient of degree on each side we get

Therefore, assuming (8), having (12) for all is equivalent to having

(13)

Assuming the following non-resonance condition is satisfied:

we see that (13) has a unique solution that can be computed recursively via

(14)

since the right-hand side only depends on for . We can therefore compute a truncated parameterization of arbitrary order, starting from (8) and then computing (14) recursively for as large as desired. In practice, the weights in (8) are chosen in order to obtain a reasonable decay of the coefficients (we refer to [2] for a detailed explanation of how this choice can be optimized). Explicit examples are presented in Sections 4 and 5.
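A minimal sketch of this recursion for an illustrative planar vector field f(x, y) = (-x + y^2, y - x^2), whose origin is a saddle with stable eigenvalue -1 and stable eigenvector (1, 0) (this example is ours, not from the paper): the homological equations of the form (13) are solved order by order, in the spirit of (14).

```python
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, 1.0]])        # Df(0) for the illustrative field
lam = -1.0                                     # stable eigenvalue
Nmax = 30                                      # truncation order of the expansion (7)

a = np.zeros((Nmax + 1, 2))                    # a[n] = coefficient of s^n in P(s)
a[0] = [0.0, 0.0]                              # P(0) = equilibrium
a[1] = [0.5, 0.0]                              # scaled stable eigenvector, cf. (8)

for n in range(2, Nmax + 1):
    # order-n coefficient of the nonlinearity (y^2, -x^2), via Cauchy products
    Nn = np.array([ sum(a[j, 1]*a[n - j, 1] for j in range(n + 1)),
                   -sum(a[j, 0]*a[n - j, 0] for j in range(n + 1))])
    # homological equation: (n*lam*I - Df(0)) a_n = [nonlinear terms]_n, cf. (13)-(14)
    a[n] = np.linalg.solve(n*lam*np.eye(2) - A, Nn)

s = np.linspace(-1.0, 1.0, 201)                # evaluate the truncated parameterization
P = sum(np.outer(s**n, a[n]) for n in range(Nmax + 1))
print(np.abs(a).max(axis=1)[:6])               # the coefficients decay rapidly
```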

Remark 3.3.

In cases where resonant eigenvalues are present, a similar approach can still be used, but the conjugacy condition (9) defining has to be adapted [31].

If we now consider that is random and has a given PDF , this approach based on the parameterization method can also be easily generalized by adding a layer of PC expansion. Namely, we write

or equivalently

In practice we consider a truncation

which we compute via

(15)

where, compared to (14), is now a vector of size rather than , and

can be interpreted as a block matrix, with blocks each of size . For explicit computations we again refer to Sections 4 and 5.
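As a sketch of how the random parameter can be handled in practice, the following non-intrusive variant (collocation in the parameter rather than the Galerkin block system (15); the technique swap is deliberate and stated here) computes the Taylor coefficients of the manifold at Gauss-Legendre nodes and projects each of them onto Legendre polynomials. The parameter-dependent field f_p(x, y) = (-x + p*y^2, y - x^2), with p = 1 + 0.5*xi and xi uniform on [-1, 1], is again purely illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

Nmax, K = 20, 8                                  # Taylor order, PC order (illustrative)
lam = -1.0
A = np.array([[-1.0, 0.0], [0.0, 1.0]])          # Df_p(0), independent of p here

def taylor_coeffs(p):
    """Taylor coefficients of the stable-manifold parameterization for a fixed parameter p."""
    a = np.zeros((Nmax + 1, 2))
    a[1] = [0.5, 0.0]                            # scaled stable eigenvector
    for n in range(2, Nmax + 1):
        Nn = np.array([ p*sum(a[j, 1]*a[n - j, 1] for j in range(n + 1)),
                       -sum(a[j, 0]*a[n - j, 0] for j in range(n + 1))])
        a[n] = np.linalg.solve(n*lam*np.eye(2) - A, Nn)
    return a

xi, w = legendre.leggauss(K + 1)                 # quadrature nodes/weights on [-1, 1]
samples = np.array([taylor_coeffs(1 + 0.5*x) for x in xi])   # shape (K+1, Nmax+1, 2)

# Legendre-PC coefficients: projection with weight 1/2 and ||P_k||^2 = 1/(2k+1)
P = np.array([legendre.legval(xi, [0]*k + [1]) for k in range(K + 1)])
pc = np.einsum('q,kq,qnd->nkd', w/2, P, samples) * (2*np.arange(K + 1)[None, :, None] + 1)

print(pc[2, :3])   # first PC coefficients of the second-order Taylor coefficient
```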

3.3 Heteroclinic orbits via Chebyshev series and projected boundaries

We go back to considering the parameter dependent problem (1) from a deterministic point of view, and extend the discussion of the previous subsection by looking at more global solutions, namely connecting orbits between equilibrium points.

Let and be two equilibrium points of (1). Assume that has an unstable manifold of dimension and that has a stable manifold of dimension , such that . Then, if the two manifolds intersect, we can generically expect this intersection to be transverse in the phase space , in which case there exists a transverse heteroclinic orbit between and . We recall that a heteroclinic orbit between and (or homoclinic orbit if ) is a solution of (1) such that

In this work, we compute such a solution by solving a boundary value problem [1] between the unstable manifold of and the stable manifold of , for which we first compute local parameterizations as in Section 3.2. This allows us to only solve (1) on a finite time interval, and recover the remaining parts of the orbit via the conjugacy satisfied by the parameterizations. More precisely, we want to find an orbit such that

(16)

where and respectively denote the unstable manifold of and the stable manifold of . Note that, since we only consider autonomous vector fields in this work, only the length of the time interval is relevant, and the whole interval itself could of course be shifted. To numerically compute such an orbit, we use piece-wise Chebyshev series. In order to do so, we first introduce some notations.

For , we denote by the Chebyshev polynomial of order , defined for instance by . We introduce a partition of . We denote by the (unknown) time the orbit spends between the two local manifolds and consider the partition of given by , where for all . For all and , we also introduce the rescaled Chebyshev polynomial , defined as

We can then write the orbit between the two manifolds as

(17)

Finally, we assume that, following the methodology presented in Section 3.2, a truncated parameterization of the local unstable manifold of as well as a truncated parameterization of the local stable manifold of have been computed. Rewriting the differential equation in (16) as an integral one, plugging in the expansion (17) and using well known properties of the Chebyshev polynomials, namely

we obtain

(18)

where are the Chebyshev coefficients of on . The first line in (18) corresponds to the differential equation on each subinterval , the second line ensures that the solution connects continuously between two consecutive subintervals, and the last two lines correspond to the boundary conditions on each manifold.

Remark 3.4.

In (18), besides the unknown Chebyshev coefficients , we have additional scalar unknowns: , and . Still assuming , the system is then underdetermined and we can fix two of these parameters to recover a unique solution.
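For illustration, here is a minimal version of the boundary value problem (16) on a standard test case not taken from this paper: the Nagumo traveling-wave system u' = v, v' = -c*v - u*(1-u)*(u-a), whose front connecting the saddles (1, 0) and (0, 0) exists only for the speed c = (1 - 2a)/sqrt(2). The local manifolds are replaced by their first-order (eigenvector) approximations, the translation invariance is removed by prescribing the left endpoint on the unstable eigendirection, and SciPy's collocation solver is used instead of the Chebyshev series of (18).

```python
import numpy as np
from scipy.integrate import solve_bvp

a, eps, L = 0.25, -1e-3, 10.0                    # threshold, manifold offset, half-length (illustrative)

def f(u):                                        # bistable nonlinearity
    return u*(1 - u)*(u - a)

def rhs(x, y, p):                                # y = (U, U'), p = (c,) is the unknown wave speed
    c = p[0]
    return np.vstack((y[1], -c*y[1] - f(y[0])))

def bc(ya, yb, p):
    c = p[0]
    lam_u = (-c + np.sqrt(c**2 + 4*(1 - a)))/2   # unstable eigenvalue at (1, 0)
    lam_s = (-c - np.sqrt(c**2 + 4*a))/2         # stable eigenvalue at (0, 0)
    return np.array([
        ya[0] - (1 + eps),                       # left endpoint on the unstable eigendirection of (1, 0)
        ya[1] - eps*lam_u,
        lam_s*yb[0] - yb[1],                     # right endpoint in the stable eigendirection of (0, 0)
    ])

x = np.linspace(-L, L, 400)
y0 = np.vstack((0.5*(1 - np.tanh(x/2)), -0.25/np.cosh(x/2)**2))   # rough front-shaped guess
sol = solve_bvp(rhs, bc, x, y0, p=[0.3])
print("computed speed c =", sol.p[0], " exact:", (1 - 2*a)/np.sqrt(2))
```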

Similarly to the two previous cases of Fourier and Taylor series, when we consider as a random parameter having a given distribution we only need to expand every unknown in (18) using PC. We again refer to Section 5 for an explicit example.

4 First example: a Lotka-Volterra system

In this section, we compute steady states and invariant manifolds of a Lotka-Volterra system of competition type

(19)

where is a bounded random variable having a given distribution. This basic example allows us to easily study the quality of our numerical computations by comparing them to analytic results.

4.1 Analytical results (for benchmarking)

4.1.1 Deterministic framework

Assuming , system (19) has a nontrivial equilibrium given by

The eigenvalues of the Jacobian are

(20)

and associated eigenvectors are given by

(21)

In the case , is positive and the equilibrium is of saddle type. Its stable manifold is the line , and its unstable manifold is a (nonlinear) curve connecting to and .

4.1.2 Stochastic framework

We now assume that the parameter is a bounded random variable. We write

where and is a random variable taking values in . We assume and , so that a.e. For specific distributions of , one can compute analytically the first two moments of (a numerical cross-check of such formulas is sketched after the list below):

  • If is uniformly distributed on , that is if its PDF is given by , one has

  • If is beta-distributed with parameters , that is if its PDF is given by , one has

  • If is beta-distributed with parameters , that is if its PDF is given by , one has
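Such closed-form moments are convenient for benchmarking, and they can also be cross-checked by direct quadrature. The snippet below does this for a placeholder function g standing in for the (omitted) expression of the equilibrium component, with the parameter uniformly distributed and Beta-distributed on an interval of half-width 1/2; both g and all numerical values are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

g = lambda t: 1.0/(1.0 + t)          # placeholder quantity of interest (not the actual equilibrium)

r, alpha = 0.5, 3.0
dists = {
    "uniform": stats.uniform(loc=-r, scale=2*r),
    "beta   ": stats.beta(alpha, alpha, loc=-r, scale=2*r),
}
for name, d in dists.items():
    mean = quad(lambda t: g(t)*d.pdf(t), -r, r)[0]
    second = quad(lambda t: g(t)**2*d.pdf(t), -r, r)[0]
    print(name, "E[g] =", mean, " Var[g] =", second - mean**2)
```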

In order to limit the number of comparisons and illustrations in the next section, we only consider the first component of the equilibrium, but similar analytical and numerical computations could of course also be carried out for .

4.2 Numerical results

In this section, we compute, using PC, the equilibrium and its stable and unstable manifolds. For given ,