# Computing invariant sets of random differential equations using polynomial chaos

Maxime Breden Technical University of Munich, Faculty of Mathematics, Research Unit “Multiscale and Stochastic Dynamics”, 85748 Garching b. München, Germany. maxime.breden@tum.de    Christian Kuehn Technical University of Munich, Faculty of Mathematics, Research Unit “Multiscale and Stochastic Dynamics”, 85748 Garching b. München, Germany. ckuehn@ma.tum.de
###### Abstract

Differential equations with random parameters have gained significant prominence in recent years due to their importance in mathematical modelling and data assimilation. In many cases, random ordinary differential equations (RODEs) are studied by using Monte-Carlo methods or by direct numerical simulation techniques using polynomial chaos (PC), i.e., by a series expansion of the random parameters in combination with forward integration. Here we take a dynamical systems viewpoint and focus on the invariant sets of differential equations such as steady states, stable/unstable manifolds, periodic orbits, and heteroclinic orbits. We employ PC to compute representations of all these different types of invariant sets for RODEs. This allows us to obtain fast sampling, geometric visualization of distributional properties of invariant sets, and uncertainty quantification of dynamical output such as periods or locations of orbits. We apply our techniques to a predator-prey model, where we compute steady states and stable/unstable manifolds. We also include several benchmarks to illustrate the numerical efficiency of adaptively chosen PC depending upon the random input. Then we employ the methods for the Lorenz system, obtaining computational PC representations of periodic orbits, stable/unstable manifolds and heteroclinic orbits.

Keywords: invariant manifold, periodic orbit, heteroclinic orbit, Lorenz system, polynomial chaos, random differential equation.

## 1 Introduction

In this work, we study random nonlinear dynamical systems. More precisely, we focus on nonlinear random ordinary differential equations (RODEs) of the form

$$\frac{dx}{dt} = \dot{x} = f(x,p), \qquad x = x(t)\in\mathbb{R}^m, \tag{1}$$

where $f$ is a smooth vector field and $p$ denotes a vector of random parameters with given distributions. From the viewpoint of applications, it is frequently natural to assume that the parameters are only known from measurements, which naturally carry an associated probability distribution. The challenge is then to quantify the uncertainty in the “output” of the RODE (1) based upon the random input. Yet, we are still far away from fully understanding the nonlinear dynamics of such RODEs. In classical uncertainty quantification problems [12, 19], one is often interested [21, 33] in moments such as $\mathbb{E}(x(t))$ as an output of the solution $x(t) = x(t;p)$, where $\mathbb{E}$ denotes the expectation. Here we take a dynamical systems perspective, focusing on the invariant sets (e.g. steady states, periodic orbits, invariant manifolds, connecting orbits, etc.) of (1), and especially on understanding their dependence on the random parameters $p$. In this paper, we study the invariant sets from a numerical viewpoint. However, the framework that we develop is well suited to the usage of rigorous numerics [30], and in particular of a-posteriori validation techniques [32], which could be applied to obtain rigorous results about these stochastic invariant sets. This idea will be presented in a forthcoming work.

Let us start by mentioning that there exist many well developed techniques to numerically study invariant sets of deterministic ODEs. A natural way of studying the invariant sets of (1) would therefore be to use a Monte-Carlo type approach: consider a large sample of values $p_1,\dots,p_M$ drawn according to the distribution of $p$, and for each $p_i$ study the invariant sets of the deterministic ODE $\dot x = f(x,p_i)$. However, this approach is known to be very costly, because it requires a large sample to accurately represent the statistics of the invariant sets [10]. In this work we make use of a different technique, namely polynomial chaos (PC) expansions [35, 11], to accurately compute invariant sets of (1). Roughly speaking, we view each invariant set of (1) as a curve parametrized by $p$ (or as a manifold, if $p$ is more than one-dimensional), and compute such a parameterization explicitly via a PC expansion. This can be thought of as a parameter continuation in $p$, but in an astute way that allows us to obtain not only the geometrical object (i.e. the curve/manifold of invariant sets), but also statistical properties of this object (e.g. mean position, variance, …). Furthermore, our techniques naturally extend to numerical continuation algorithms [16, 17] if the probability distributions of $p$ contain further parameters, which is a direction we will pursue in future work.

Let $X = X(p)$ be an invariant set of (1), i.e., trajectories starting inside $X$ remain in $X$ for all times. In this work, we focus on the cases where $X$ represents a steady state, a stable/unstable manifold, a periodic orbit or a heteroclinic orbit, which are among the most important objects in nonlinear dynamics [18, 13]. One key observation is that PC expansions can be used to unify the computational framework for invariant sets, and that they provide a natural deterministic structure to view the dynamics of nonlinear random ODEs. Our goal is to find a series expansion of $X$ as a function of $p$:

$$X(p) = \sum_{n\in\mathbb{N}^d} X_n\,\phi_n(p). \tag{2}$$

Note that the coefficients $X_n$ have to be found as solutions of suitable nonlinear problems to correctly represent the different invariant sets; we develop the details for each type of invariant set in this work. In practice, the choice of the expansion basis $(\phi_n)_{n}$ is of course also crucial, as it determines the quality of the approximation, or more precisely the number of coefficients required to obtain a good enough approximation. For the rest of this discussion, we assume for simplicity that $p$ is one-dimensional.

If $f$ is analytic with respect to $p$, then we can expect $X$ to also be analytic as a function of $p$, at least around values of $p$ that are not bifurcation values. Therefore, in many cases one could think about writing $X$ as a Taylor series by taking $\phi_n(p) = p^n$. However, it is well known in approximation theory that faster convergence can be achieved by considering instead Chebyshev series or Legendre series (i.e. taking $\phi_n = T_n$ or $\phi_n = P_n$, where $T_n$ and $P_n$ respectively denote the Chebyshev and Legendre orthogonal polynomials), mainly because these expansions are less sensitive to potential poles of $X$ in the complex plane; see e.g. [29]. Chebyshev or Legendre expansions also have the advantage of being convergent even when $X$ is only of class $\mathcal{C}^k$, in which case the coefficients decay at an algebraic rate, rather than the geometric rate of the analytic case. If $X$ is deterministic, then looking at the decay rates of the coefficients is a relevant benchmark, because it is related to the error in $\mathcal{C}^0$-norm. For instance, if $X^N$ is the truncated series given by

$$X^N(p) = \sum_{n=0}^{N} X_n\,\phi_n(p),$$

then for a Taylor expansion, a Chebyshev expansion or a Legendre expansion alike one has

$$\sup_{p\in[-1,1]} \big\|X(p) - X^N(p)\big\| \le \sum_{n=N+1}^{\infty} \|X_n\|,$$

because for each of these choices one has $\sup_{p\in[-1,1]} |\phi_n(p)| \le 1$. However, when $p$ is a random variable, one is more interested in controlling different quantities such as

$$\big\|\mathbb{E}(X(p)) - \mathbb{E}(X^N(p))\big\| \qquad\text{or}\qquad \big\|\mathbb{V}(X(p)) - \mathbb{V}(X^N(p))\big\|,$$

where $\mathbb{V}$ denotes the variance. To minimize the error in the moments, the choice of the expansion is critical not only because it influences the decay of the coefficients, but also because the $\phi_n$ themselves appear in the error term. For instance, one has

$$\big\|\mathbb{E}(X(p)) - \mathbb{E}(X^N(p))\big\| \le \sum_{n=N+1}^{\infty} \|X_n\|\,\big|\mathbb{E}(\phi_n(p))\big|.$$

Therefore, two different expansions leading to the same decay of the coefficients may not lead to errors of the same order for the expectation or the variance. Of course, one could change these weights by rescaling the $\phi_n$, but this only shifts the problem, because the rescaling would then affect the decay of the coefficients.
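The influence of the basis on the decay of the coefficients is easy to check numerically. The following sketch (not from the paper; the test function $X(p) = 1/(1.2-p)$, with a pole just outside $[-1,1]$, is a hypothetical stand-in for an invariant set depending on $p$) compares Taylor and Chebyshev coefficients of the same function:

```python
import numpy as np

# stand-in quantity X(p) with a pole at p = 1.2, just outside [-1, 1]
X = lambda p: 1.0 / (1.2 - p)

# Taylor coefficients around 0: X(p) = sum_n (1/1.2)^(n+1) * p^n
n = np.arange(31)
taylor = (1 / 1.2) ** (n + 1)

# Chebyshev coefficients, computed by interpolation at Chebyshev points
cheb = np.polynomial.chebyshev.chebinterpolate(X, 30)

# the Chebyshev tail is orders of magnitude smaller at the same index
print(abs(taylor[20]), abs(cheb[20]))
```

The Taylor coefficients decay like $1/1.2$ per degree, while the Chebyshev coefficients decay like $1/\rho$ with $\rho = 1.2 + \sqrt{1.2^2-1} \approx 1.86$, consistent with the standard theory recalled above.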

In this work, we use the PC paradigm to minimize such quantities, by choosing an expansion basis that is adapted to the distribution of the noisy parameter $p$. PC expansions have become a very important tool in uncertainty quantification over the last decades, and a review of their many applications is far beyond the scope of the present work. We instead refer the interested reader to the survey [38] and the book [19].

The paper is structured as follows. First, we review the PC methodology in Section 2. In Section 3 we introduce some classical techniques to numerically study periodic orbits, invariant manifolds and connecting orbits of deterministic ODEs. Then we proceed to the main contributions of our work: we explain how to combine the numerical study of invariant sets with PC expansions to study the dynamics of nonlinear RODEs. In Section 4 we focus on a relatively simple example, where many quantities can also be computed analytically, which allows us to benchmark our numerical computations in the context of steady states. Yet, we also go beyond explicit structures and compute the distribution of stable/unstable manifolds using the “parameterization method” in combination with PC. Furthermore, we explain the implications of the random structure of the invariant sets and how to gain information from the moments of the invariant sets very efficiently. In Section 5 we then turn our attention to another example, where analytic computations are no longer available, and showcase the potential of our approach on the Lorenz system. For this system, we compute periodic orbits using a combined Fourier and PC ansatz, and we again compute invariant stable/unstable manifolds, as well as heteroclinic connections.

## 2 A quick review on PC

Let $\rho$ be a probability density function (PDF) having finite moments, i.e.

$$\int_{\mathbb{R}} s^n \rho(s)\,ds < \infty \qquad \forall\, n\in\mathbb{N}.$$

Given normalization constants $h_n > 0$, for all $n\in\mathbb{N}$, there exists a unique family $(\phi_n)_{n\in\mathbb{N}}$ of orthogonal polynomials, with $\deg \phi_n = n$, associated to the weight $\rho$, i.e. satisfying

$$\langle \phi_{n_1}, \phi_{n_2}\rangle_\rho := \int_{\mathbb{R}} \phi_{n_1}(s)\,\phi_{n_2}(s)\,\rho(s)\,ds = h_{n_1}\delta_{n_1,n_2}, \qquad \forall\, n_1, n_2\in\mathbb{N}.$$

The most classical examples are:

• The Hermite polynomials $(H_n)_{n\in\mathbb{N}}$, which correspond to $\rho(s) = e^{-s^2/2}/\sqrt{2\pi}$ on $\mathbb{R}$ and $h_n = n!$;

• The Laguerre polynomials $(L_n)_{n\in\mathbb{N}}$, which correspond to $\rho(s) = e^{-s}$ on $(0,+\infty)$ and $h_n = 1$;

• The Jacobi polynomials $(P_n^{(\alpha,\beta)})_{n\in\mathbb{N}}$, $\alpha,\beta > -1$, which correspond to $\rho(s) = (1-s)^\alpha(1+s)^\beta / \big(2^{\alpha+\beta+1} B(\alpha+1,\beta+1)\big)$ on $(-1,1)$, where $B$ is the Euler beta function (the normalization constants $h_n$ can also be expressed in terms of $B$).

Within the class of the Jacobi polynomials, we list a few remarkable cases (sometimes having different normalizations) that we make use of in this work:

• The Legendre polynomials $(P_n)_{n\in\mathbb{N}}$, which correspond to $\rho(s) = 1/2$ on $(-1,1)$ and $h_n = 1/(2n+1)$;

• The Chebyshev polynomials of the first kind $(T_n)_{n\in\mathbb{N}}$, which correspond to $\rho(s) = 1/(\pi\sqrt{1-s^2})$ on $(-1,1)$ and $h_0 = 1$, $h_n = 1/2$ for $n \ge 1$;

• The Chebyshev polynomials of the second kind $(U_n)_{n\in\mathbb{N}}$, which correspond to $\rho(s) = (2/\pi)\sqrt{1-s^2}$ on $(-1,1)$ and $h_n = 1$;

• The Gegenbauer or ultraspherical polynomials $(C_n^{(\mu)})_{n\in\mathbb{N}}$, $\mu > -1/2$, $\mu \neq 0$, which correspond to $\rho(s) = (1-s^2)^{\mu-1/2} / \big(2^{2\mu} B(\mu+1/2,\mu+1/2)\big)$ on $(-1,1)$, with normalization constants $h_n$ again expressible via the Euler beta function.

For a more complete description of PC choices and their relations to the Askey scheme, see [39].

###### Remark 2.1.

In this work, we only consider parameters having a PDF with bounded support. Indeed, in most applications these parameters have a physical meaning, for instance they could represent a quantity which must always be nonnegative, and having a PDF with unbounded support, like a Gaussian, would mean that said parameters would be negative with positive probability, which is not realistic. However, many sources of uncertainty are still expected to have a Gaussian-like behavior, in the sense that their PDF should be concentrated around a point. In that case, a good compromise is to use Beta distributions, which are the weights associated to the Gegenbauer polynomials and provide good bounded approximations of Gaussian distributions, at least for small variances (see e.g. [36], or the comparison in Figure 1). Another widely used option is to use truncated Gaussian distributions.
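As a quick numerical illustration of this remark (a sketch, not taken from the paper), a symmetric Beta distribution rescaled to $[-1,1]$ can be matched to a target Gaussian by equating variances: a symmetric Beta$(\alpha,\alpha)$ on $[-1,1]$ has mean $0$ and variance $1/(2\alpha+1)$, so $\alpha = (1/\sigma^2 - 1)/2$ matches a target variance $\sigma^2$.

```python
import numpy as np
from scipy import stats

sigma = 0.25                        # target Gaussian standard deviation
alpha = (1 / sigma**2 - 1) / 2      # variance matching: Var = 1/(2*alpha+1)
surrogate = stats.beta(alpha, alpha, loc=-1, scale=2)  # Beta on [-1, 1]
gauss = stats.norm(0, sigma)

# compare the two PDFs on the interior of the support
x = np.linspace(-0.99, 0.99, 401)
max_pdf_gap = np.max(np.abs(surrogate.pdf(x) - gauss.pdf(x)))
```

For small target variances the two densities are close pointwise, while the Beta surrogate has exactly bounded support.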

We recall that if $\rho$ has compact support, or decays at least exponentially fast at infinity (see e.g. [8]), then $(\phi_n)_{n\in\mathbb{N}}$ is a Hilbert basis of the weighted space $L^2_\rho$, i.e. any measurable function $g$ such that

$$\int_{\mathbb{R}} g^2(s)\,\rho(s)\,ds < \infty,$$

admits a unique series expansion of the form

$$g = \sum_{n\in\mathbb{N}} g_n\,\phi_n,$$

where the series converges in $L^2_\rho$.

Now, assume that the noisy parameter $p$ (still assumed to be one-dimensional for the moment) has a PDF given by $\rho$. The PC paradigm then tells us that we should use the orthogonal polynomials associated to $\rho$ as the basis for the expansion (2) of $X$. Notice that with such a choice, the coefficients of the expansion directly provide us with the mean and variance of $X(p)$. Indeed, by orthogonality (and assuming $h_n = 1$ for simplicity) we have

$$\mathbb{E}(X(p)) = \sum_{n=0}^{\infty} X_n\,\mathbb{E}(\phi_n(p)) = \sum_{n=0}^{\infty} X_n\,\langle \phi_n, 1\rangle_\rho = X_0,$$

and similarly

$$\mathbb{V}(X(p)) = \mathbb{E}(X(p)^2) - \mathbb{E}(X(p))^2 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} X_n X_m \langle \phi_m, \phi_n\rangle_\rho - X_0^2 = \sum_{n=1}^{\infty} X_n^2.$$
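These two identities can be checked on a toy quantity. The sketch below (not from the paper) takes $p$ uniform on $[-1,1]$, so $\rho = 1/2$ and the orthonormal basis is $\phi_n = \sqrt{2n+1}\,P_n$, and expands $X(p) = p^2$; the mean and variance are then read off from the coefficients:

```python
import numpy as np

# orthonormal Legendre basis for uniform p on [-1, 1] (rho = 1/2)
def phi(n, s):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * np.polynomial.legendre.legval(s, c)

# coefficients X_n = <X, phi_n>_rho for X(p) = p^2, via Gauss-Legendre quadrature
nodes, weights = np.polynomial.legendre.leggauss(20)
Xvals = nodes**2
coeffs = np.array([0.5 * np.sum(weights * Xvals * phi(n, nodes)) for n in range(5)])

mean = coeffs[0]              # should equal E[p^2] = 1/3
var = np.sum(coeffs[1:]**2)   # should equal V[p^2] = 1/5 - 1/9 = 4/45
```

Here `mean` $= 1/3$ and `var` $= 4/45$ are recovered directly from the coefficients, without any sampling.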

If $p$ consists of several independent random variables $p_1,\dots,p_d$, each with respective PDF $\rho^{(j)}$, $1 \le j \le d$, one can consider the PC basis constructed as a tensor product of the univariate bases. That is, for all $n\in\mathbb{N}^d$ and all $s\in\mathbb{R}^d$,

$$\phi_n(s) := \prod_{j=1}^{d} \phi^{(j)}_{n_j}(s_j),$$

where $(\phi^{(j)}_n)_{n\in\mathbb{N}}$ is a basis of orthogonal polynomials associated to the weight $\rho^{(j)}$. In practice, there are several ways to compute the coefficients of a PC expansion (2). Notice that directly using the orthogonality relations to get

$$X_n = \frac{1}{h_n}\,\langle X, \phi_n\rangle_\rho,$$

is usually not one of them, as $X$ is not known a priori but is actually what we want to compute via the PC expansion. To formalize the discussion, assume that the quantity of interest $X = X(p)$, for which we want to find a PC expansion, solves a problem depending on the parameter $p$, of the form

 F(X(p),p)=0,

where $F$ is a given, typically nonlinear, map. One common way to find the coefficients $X_n$ is to solve the system obtained by Galerkin projection:

$$\left\langle F\Big(\sum_{n_1=0}^{\infty} X_{n_1}\phi_{n_1}(p),\, p\Big),\; \phi_{n_2}(p)\right\rangle_\rho = 0 \qquad \forall\, n_2\in\mathbb{N}. \tag{3}$$

Of course, in practice one only solves for a truncated expansion

$$X^N(p) = \sum_{n_1=0}^{N} X_{n_1}\phi_{n_1}(p),$$

by considering a finite-dimensional projection of associated dimension, i.e. restricting to $0 \le n_2 \le N$ in (3). This is the technique we make use of in this work. Another possible option is to use a collocation/interpolation approach [37]. This technique is based on first solving the deterministic problem several times for a well chosen sample of parameter values $p_1,\dots,p_I$, i.e. computing $X(p_i)$ that solves

 F(X(pi),pi)=0,∀ 1≤i≤I.

The polynomial chaos coefficients are then constructed by interpolation:

$$X_n = \sum_{i=1}^{I} X(p_i)\,\phi_n(p_i)\,\alpha_i,$$

where the $\alpha_i$ are quadrature weights associated to the interpolation points $p_i$, chosen in such a way that, for any smooth function $g$,

$$\sum_{i=1}^{I} g(p_i)\,\alpha_i \approx \int g(s)\,\rho(s)\,ds,$$

where $\rho$ is the PDF of $p$. For a detailed discussion about these two approaches and their respective merits and limitations, we refer to the survey [38], the book [19] and the references therein. In practice, after an accurate PC approximation

$$X^N(p) = \sum_{n=0}^{N} X_n\,\phi_n(p)$$

of $X$ has been computed, we can then do Monte Carlo simulations for a very large sample of values of $p$. Indeed, instead of having to solve the deterministic problem for each value of the sample, we only have to evaluate $X^N$, which yields an accurate approximation of $X$ at negligible cost.
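The collocation route can be sketched on a scalar toy problem (a hypothetical example, not from the paper): take $F(X,p) = X^2 - (2+p)$, whose exact solution $X(p) = \sqrt{2+p}$ stands in for the deterministic solves, with $p$ uniform on $[-1,1]$ and Gauss–Legendre nodes as the collocation points $p_i$:

```python
import numpy as np

# orthonormal Legendre basis for uniform p on [-1, 1] (rho = 1/2)
def phi(n, s):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * np.polynomial.legendre.legval(s, c)

N = 8
nodes, w = np.polynomial.legendre.leggauss(N + 1)  # collocation points p_i
alpha = 0.5 * w                                    # sum alpha_i g(p_i) ~ int g rho
Xi = np.sqrt(2 + nodes)             # "deterministic solves" of F(X(p_i), p_i) = 0
Xn = np.array([np.sum(Xi * phi(n, nodes) * alpha) for n in range(N + 1)])

# cheap surrogate: evaluating X^N is far faster than re-solving F = 0
XN = lambda p: sum(Xn[n] * phi(n, p) for n in range(N + 1))
sample = np.random.default_rng(0).uniform(-1, 1, 10**5)
err = np.max(np.abs(XN(sample) - np.sqrt(2 + sample)))
```

With only nine deterministic solves, the surrogate reproduces $X(p)$ to high accuracy over a sample of $10^5$ parameter values, illustrating the fast-sampling point made above.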

## 3 Computation of periodic orbits, invariant manifolds and connecting orbits

In this section, we review some classical techniques to numerically study periodic orbits, invariant manifolds and connecting orbits of deterministic ODEs. An exhaustive review of these techniques is far beyond the scope of this work, and we only focus on one technique for each case, although alternative methods could certainly also be used; see e.g. [15] for a comparison of methods for computing stable/unstable manifolds. All the techniques presented here are based on series expansions, and this choice is motivated by two main reasons. The first and most important one is that series expansions can easily and efficiently be combined with PC expansions, once we go to the stochastic setting. This is particularly relevant for limit cycles, and is to be contrasted with more classical long-term integration approaches, for which PC expansions are known to be ill-behaved. The second one is that series expansions are particularly well suited for a-posteriori validation techniques [32], which we plan on developing in this context in a future work. In each case, we explain how an extra layer of PC expansion can be added, to keep track of the stochastic nature of the parameters. Illustrations for all these methods are presented in Section 4 and Section 5.

### 3.1 Periodic orbits via Fourier series

We start by considering the parameter dependent problem (1) from a deterministic point of view. Limit cycles of (1) can be efficiently studied and computed using Fourier series. That is, for a fixed $p$, we write a $T(p)$-periodic solution $X(\cdot,p)$ of (1) as

$$X(t,p) = \sum_{k\in\mathbb{Z}} X_k(p)\, e^{ik\Omega(p)t}, \qquad X_k(p)\in\mathbb{C}^m,\ \Omega(p)\in\mathbb{R}, \tag{4}$$

where $\Omega(p) = 2\pi/T(p)$ denotes the frequency of the orbit. To numerically find a periodic orbit, we thus solve for the coefficients $X_k(p)$ and the frequency $\Omega(p)$, which satisfy the system obtained by plugging (4) into (1), namely

$$ik\,\Omega(p)\,X_k(p) = f_k(X(\cdot,p),p) \qquad \forall\, k\in\mathbb{Z}, \tag{5}$$

where the $f_k(X(\cdot,p),p)$ are the Fourier coefficients of $t \mapsto f(X(t,p),p)$.

###### Remark 3.1.

In practice, it is helpful to complement the above system with a phase condition. For simplicity, we choose to fix a section through which the solution has to pass at time $t = 0$, but other options are available [18]. More precisely, we add a scalar equation of the form

 (X(0,p)−u(p))⋅v(p)=0,

where $u(p)$ is some (approximate) point on the orbit, $v(p)$ a vector defining the section, and we use the dot product $\cdot$ to denote the scalar product on $\mathbb{R}^m$ (and avoid potential confusion with the scalar product $\langle\cdot,\cdot\rangle_\rho$). The phase condition allows us to isolate the solution by eliminating time-shifts, which makes the system easier to solve in practice, especially using iterative methods. Indeed, if $X(\cdot,p)$ is a periodic orbit of (1), then so is any time-shift $t \mapsto X(t+s,p)$, but there is (locally) only one such orbit that also satisfies the phase condition.

This approach of solving for the Fourier coefficients and the frequency has several advantages, compared to numerically integrating (1) and trying to find an (approximately) closed orbit. Indeed, the approach is not sensitive to the linear stability or instability of the limit cycle, and is therefore not prone to diverging when one tries to approximate an unstable periodic orbit, which is a definite concern for time-integration based techniques. We illustrate this point in Section 5 by computing unstable limit cycles belonging to the chaotic attractor of the Lorenz system. The Fourier series approach is also particularly well adapted to continuation algorithms, especially to compute limit cycles originating from a Hopf bifurcation, where the linearized analysis can predict the first Fourier coefficient and the frequency.

If we now consider that $p$ is random and has a given PDF $\rho$, it is natural to still consider a Fourier ansatz [22]. It is straightforward to extend the Fourier series approach by expanding each Fourier coefficient, together with the frequency, with PC. Namely, write

$$X_k(p) = \sum_{n\in\mathbb{N}} X_{k,n}\,\phi_n(p) \qquad\text{and}\qquad \Omega(p) = \sum_{n\in\mathbb{N}} \Omega_n\,\phi_n(p),$$

or equivalently

$$X(t,p) = \sum_{k\in\mathbb{Z}}\sum_{n\in\mathbb{N}} X_{k,n}\,\phi_n(p)\, e^{ik\Omega(p)t}. \tag{6}$$

In practice, we of course consider truncated expansions

$$X_k^N(p) = \sum_{n=0}^{N} X_{k,n}\,\phi_n(p) \qquad\text{and}\qquad \Omega^N(p) = \sum_{n=0}^{N} \Omega_n\,\phi_n(p),$$

for which we solve using (5). Each unknown Fourier coefficient $X_k$ now belongs to $\mathbb{C}^{m\times(N+1)}$ instead of $\mathbb{C}^m$, and $\Omega$ belongs to $\mathbb{R}^{N+1}$ instead of $\mathbb{R}$. An explicit example of such a system, together with numerical solutions, is given in Section 5. In this setting, the Fourier series approach also has the notable advantage of separating the random period (or equivalently the random frequency $\Omega(p)$) from the description of the random cycle given by the coefficients $X_k(p)$. In particular, we avoid the usual pitfalls related to phase-drift and broadening of the spectrum, which are the main reasons why limit cycles are hard to compute using PC combined with time integration [7]. A similar idea was introduced in [20, 27], where a random time rescaling is used to compensate for the random period, which significantly improves the long time behavior of the PC expansions and allows one to better capture stable limit cycles. A Fourier series approach accomplishes this naturally, and can also be used to study unstable limit cycles.
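To make the deterministic Fourier approach concrete, the following sketch solves a collocation variant of (5) for the van der Pol oscillator $\dot u = v$, $\dot v = \mu(1-u^2)v - u$ (a hypothetical test case, not one of the examples of the paper): the unknowns are real Fourier coefficients of both components and the frequency $\Omega$, the ODE is enforced at equispaced points of the rescaled time $s = \Omega t$, the phase condition is $u'(0) = 0$, and the system is handed to a black-box nonlinear solver.

```python
import numpy as np
from scipy.optimize import fsolve

K = 7                          # highest Fourier mode kept
M = 2 * K + 1                  # collocation points in scaled time s = Omega * t
s = 2 * np.pi * np.arange(M) / M
k = np.arange(K + 1)
C, S = np.cos(np.outer(s, k)), np.sin(np.outer(s, k))   # basis at the nodes
Cd, Sd = -S * k, C * k                                  # d/ds of the basis
mu = 1.0

def unpack(z):
    au = z[:K + 1]
    bu = np.concatenate(([0.0], z[K + 1:2 * K + 1]))
    av = z[2 * K + 1:3 * K + 2]
    bv = np.concatenate(([0.0], z[3 * K + 2:4 * K + 2]))
    return au, bu, av, bv, z[-1]

def residual(z):
    au, bu, av, bv, Om = unpack(z)
    u, v = C @ au + S @ bu, C @ av + S @ bv
    us, vs = Cd @ au + Sd @ bu, Cd @ av + Sd @ bv
    r1 = Om * us - v                             # u' = v
    r2 = Om * vs - (mu * (1 - u**2) * v - u)     # v' = mu (1 - u^2) v - u
    phase = np.dot(k[1:], bu[1:])                # phase condition: u'(0) = 0
    return np.concatenate((r1, r2, [phase]))

z0 = np.zeros(4 * K + 3)
z0[1] = 2.0            # initial guess u ~ 2 cos(s)
z0[3 * K + 2] = -1.9   # initial guess v ~ -1.9 sin(s)
z0[-1] = 0.9           # initial guess for Omega
z, info, ier, msg = fsolve(residual, z0, full_output=True)
au, bu, av, bv, Omega = unpack(z)
period = 2 * np.pi / abs(Omega)

sd = np.linspace(0, 2 * np.pi, 2001)
amp = np.max(np.cos(np.outer(sd, k)) @ au + np.sin(np.outer(sd, k)) @ bu)
```

For $\mu = 1$ the computed period is close to the reference value $\approx 6.66$; the point is that exactly the same formulation, with no time integration, applies verbatim to unstable cycles.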

### 3.2 Local invariant manifolds via Taylor series and the parameterization method

We again start by considering the parameter dependent problem (1) from a deterministic point of view, but now focus on studying local stable and unstable manifolds attached to equilibrium points. In this section we only discuss the case of stable manifolds, but unstable manifolds can of course be studied with the same techniques. Let $\hat X(p)$ be an equilibrium point of (1), i.e. $f(\hat X(p),p) = 0$, and assume that the derivative $D_xf(\hat X(p),p)$ has exactly $m_s$ eigenvalues $\lambda^{(1)}(p),\dots,\lambda^{(m_s)}(p)$ with negative real part. For simplicity, we also assume that each of these eigenvalues is simple, and denote by $v^{(1)}(p),\dots,v^{(m_s)}(p)$ associated eigenvectors. Our goal is to find a parameterization $Q(\cdot,p)$ of the local stable manifold of $\hat X(p)$. We look for a power series representation of $Q$:

$$Q(\theta,p) = \sum_{|k|\ge 0} Q_k(p)\,\theta^k, \qquad \theta\in\mathbb{R}^{m_s},\ k\in\mathbb{N}^{m_s}, \tag{7}$$

with the classical multi-index notations $\theta^k = \theta_1^{k_1}\cdots\theta_{m_s}^{k_{m_s}}$ and $|k| = k_1 + \cdots + k_{m_s}$. Since we want $Q$ to be a parameterization of the local stable manifold of $\hat X(p)$, we must have

$$Q_0(p) = \hat X(p) \qquad\text{and}\qquad Q_{e_i}(p) = \gamma_i\, v^{(i)}(p), \quad \forall\, 1\le i\le m_s, \tag{8}$$

where $e_i$ denotes the $i$-th standard basis multi-index and the $\gamma_i$ are scalings that can be adjusted. To obtain the higher order terms, we follow the idea of the parameterization method, introduced in [3, 4, 5] (see also the recent book [14]). We want to obtain a parameterization $Q$ that conjugates the dynamics on the stable manifold with the stable dynamics of the linearized system. More precisely, introducing the diagonal matrix $\Lambda(p)$ with diagonal entries $\lambda^{(1)}(p),\dots,\lambda^{(m_s)}(p)$, we want $Q$ to satisfy (see Figure 2)

 φp(t,Q(θ,p))=Q(eΛ(p)tθ,p),∀ ∥θ∥∞≤1, ∀ t≥0, (9)

where $\varphi_p$ is the flow generated by the vector field $f(\cdot,p)$ and $\|\theta\|_\infty = \max_{1\le i\le m_s}|\theta_i|$.

###### Remark 3.2.

If some of the stable eigenvalues are complex conjugate, say $\lambda^{(2i)}(p) = \overline{\lambda^{(2i-1)}(p)}$ for $1\le i\le r$, it is more convenient to first look for a complex-valued parameterization with $Q_k(p)\in\mathbb{C}^m$, and then recover a real-valued parameterization via

 Qreal(θ,p)=Q(θ1+iθ2,θ1−iθ2,…,θ2r−1+iθ2r,θ2r−1−iθ2r,θ2r+1,…,θms,p),

see e.g. [31].

Finding a parameterization satisfying (9) is interesting because it provides us with not only a local stable manifold but also an explicit description of the dynamics on this manifold. However, the formulation (9) is not the most convenient one to work with in order to determine the higher order coefficients of $Q$, because it involves the flow. To get rid of the flow, one can take a time derivative of (9) and evaluate it at $t = 0$, to obtain the following invariance equation

 f(Q(θ,p),p)=DθQ(θ,p)Λ(p)θ,∀ ∥θ∥∞≤1. (10)

One can check that, if $Q$ solves (10) and satisfies (8), then $Q$ satisfies (9), and therefore $Q$ is indeed a parameterization of the local stable manifold of $\hat X(p)$. The invariance equation (10) is the one we are going to use to numerically find the coefficients $Q_k(p)$. Plugging the expansion (7) into the invariance equation (10), we obtain

$$\sum_{|k|\ge 0} f_k(Q(\cdot,p),p)\,\theta^k = \sum_{|k|\ge 0} (\lambda(p)\cdot k)\, Q_k(p)\,\theta^k, \tag{11}$$

where the $f_k(Q(\cdot,p),p)$ are the Taylor coefficients of $\theta \mapsto f(Q(\theta,p),p)$ and $\lambda(p)\cdot k = \lambda^{(1)}(p)k_1 + \cdots + \lambda^{(m_s)}(p)k_{m_s}$; therefore the coefficients $Q_k(p)$ must satisfy

 (λ(p)⋅k)Qk(p)=fk(Q(θ,p),p),∀k∈Nms. (12)

Notice that (8) already ensures that (12) is satisfied for $|k| \le 1$. In order to solve (12) for $|k| \ge 2$, let us first describe how $f_k(Q(\cdot,p),p)$ depends on the coefficients of $Q$. Given a Taylor series of the form (7), we define for all $K\in\mathbb{N}$ the truncated series

 πKQ(θ,p)=∑0≤|k|≤KQk(p)θk.

Notice also that, for any $K \ge |k|$, $f_k(Q(\cdot,p),p) = f_k(\pi^K Q(\cdot,p),p)$. Besides, using a Taylor expansion of $f$ in the $x$ variable, we have for all $|k| \ge 2$

$$\begin{aligned}
f\Big(\sum_{0\le|l|\le|k|} Q_l(p)\theta^l,\, p\Big) &= f\Big(\sum_{0\le|l|\le|k|-1} Q_l(p)\theta^l + \sum_{|l|=|k|} Q_l(p)\theta^l,\, p\Big)\\
&= f\Big(\sum_{0\le|l|\le|k|-1} Q_l(p)\theta^l,\, p\Big) + \sum_{|l|=|k|} D_xf\Big(\sum_{0\le|j|\le|k|-1} Q_j(p)\theta^j,\, p\Big) Q_l(p)\theta^l\\
&\quad + \text{higher order terms},
\end{aligned}$$

and looking at the coefficient of degree $k$ on each side we get

$$f_k(Q(\cdot,p),p) = f_k(\pi^{|k|}Q(\cdot,p),p) = f_k(\pi^{|k|-1}Q(\cdot,p),p) + D_xf(Q_0(p),p)\,Q_k(p).$$

Therefore, assuming (8), having (12) for all is equivalent to having

$$\big((\lambda(p)\cdot k)\,I - D_xf(\hat X(p),p)\big)\, Q_k(p) = f_k(\pi^{|k|-1}Q(\cdot,p),p), \qquad \forall\, |k|\ge 2. \tag{13}$$

Assuming the following non-resonance condition is satisfied:

$$\lambda(p)\cdot k \neq \lambda^{(i)}(p) \qquad \forall\, |k|\ge 2,\ \forall\, 1\le i\le m_s,$$

we see that (13) has a unique solution that can be computed recursively via

$$Q_k(p) = \big((\lambda(p)\cdot k)\,I - D_xf(\hat X(p),p)\big)^{-1} f_k(\pi^{|k|-1}Q(\cdot,p),p) \qquad \forall\, |k|\ge 2, \tag{14}$$

since the right-hand side only depends on the coefficients $Q_l(p)$ for $|l| \le |k|-1$. We can therefore compute a truncated parameterization of arbitrary order, starting from (8) and then applying (14) recursively for $|k|$ as large as desired. In practice, the weights $\gamma_i$ in (8) are chosen in order to obtain a reasonable decay of the coefficients (we refer to [2] for a detailed explanation of how this choice can be optimized). Explicit examples are presented in Sections 4 and 5.
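The recursion (14) is short to implement once the Taylor coefficients of the nonlinearity are computed by Cauchy products. As a minimal sketch (on a toy saddle, not an example from the paper), consider $\dot x = x$, $\dot y = -y + x^2$, whose one-dimensional unstable manifold through the origin is exactly the parabola $y = x^2/3$; the same recursion applies on the unstable side, with the unstable eigenvalue in place of the stable ones:

```python
import numpy as np

# toy saddle: x' = x, y' = -y + x^2, equilibrium at the origin;
# its 1D unstable manifold is exactly the parabola y = x^2 / 3
lam = 1.0                                  # unstable eigenvalue
A = np.array([[1.0, 0.0], [0.0, -1.0]])    # Jacobian D_x f at the origin
gamma = 0.5                                # eigenvector scaling (free weight)
Kmax = 10
Q = np.zeros((Kmax + 1, 2))
Q[1] = gamma * np.array([1.0, 0.0])        # first-order term: scaled eigenvector

for k in range(2, Kmax + 1):
    # order-k Taylor coefficient of f at the order-(k-1) truncation;
    # the only nonlinearity is x^2, i.e. a Cauchy product of the x-coefficients
    x2_k = sum(Q[l, 0] * Q[k - l, 0] for l in range(k + 1))
    rhs = np.array([0.0, x2_k])
    # homological equation (14): (lam * k * I - A) Q_k = f_k
    Q[k] = np.linalg.solve(lam * k * np.eye(2) - A, rhs)
```

Here $Q_2 = (0, \gamma^2/3)$ and all higher coefficients vanish, so the computed parameterization $\theta \mapsto (\gamma\theta, (\gamma\theta)^2/3)$ reproduces the exact manifold.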

###### Remark 3.3.

In cases where resonant eigenvalues are present, a similar approach can still be used, but the conjugacy condition (9) defining $Q$ has to be adapted [31].

If we now consider that $p$ is random and has a given PDF $\rho$, this approach based on the parameterization method can also be easily generalized by adding a layer of PC expansion. Namely, we write

 Q(θ,p)=∑|k|≥0∑n∈NQk,nϕn(p)θk,

or equivalently

 Qk(p)=∑n∈NQk,nϕn(p).

In practice we consider a truncation

 QNk(p)=N∑n=0Qk,nϕn(p),

which we compute via

 QNk(p)=((λN(p)⋅k)−Dxf(^XN(p),p))−1fk(π|k|−1QN(θ,p),p)∀ |k|≥2, (15)

where, compared to (14), $Q_k^N$ is now a vector of size $m(N+1)$ rather than $m$, and

 ((λN(p)⋅k)−Dxf(^XN(p),p))

can be interpreted as an $(N+1)\times(N+1)$ block matrix, with blocks of size $m\times m$. For explicit computations we again refer to Sections 4 and 5.

### 3.3 Heteroclinic orbits via Chebyshev series and projected boundaries

We go back to considering the parameter dependent problem (1) from a deterministic point of view, and extend the discussion of the previous subsection by looking at more global solutions, namely connecting orbits between equilibrium points.

Let $\hat X(p)$ and $\check X(p)$ be two equilibrium points of (1). Assume that $\hat X(p)$ has an unstable manifold of dimension $m_u$ and that $\check X(p)$ has a stable manifold of dimension $m_s$, such that $m_u + m_s = m + 1$. Then, if the two manifolds intersect, we can generically expect this intersection to be transverse in the phase space $\mathbb{R}^m$, in which case there exists a transverse heteroclinic orbit between $\hat X(p)$ and $\check X(p)$. We recall that a heteroclinic orbit between $\hat X(p)$ and $\check X(p)$ (or homoclinic orbit if $\hat X = \check X$) is a solution $x(\cdot,p)$ of (1) such that

 limt→−∞x(t,p)=^X(p)andlimt→+∞x(t,p)=ˇX(p).

In this work, we compute such solutions by solving a boundary value problem [1] between the unstable manifold of $\hat X(p)$ and the stable manifold of $\check X(p)$, for which we first compute local parameterizations as in Section 3.2. This allows us to solve (1) only on a finite time interval, and to recover the remaining parts of the orbit via the conjugacy satisfied by the parameterizations. More precisely, we want to find an orbit $X(\cdot,p)$ such that

$$\begin{cases}
\dot X(t,p) = f(X(t,p),p) & \forall\, t\in[0,\tau(p)],\\
X(0,p) \in W^u(\hat X(p)),\\
X(\tau(p),p) \in W^s(\check X(p)),
\end{cases} \tag{16}$$

where $W^u(\hat X(p))$ and $W^s(\check X(p))$ respectively denote the unstable manifold of $\hat X(p)$ and the stable manifold of $\check X(p)$. Note that, since we only consider autonomous vector fields in this work, only the length $\tau(p)$ of the time interval is relevant, and the whole interval itself could of course be shifted. To numerically compute such an orbit, we use piece-wise Chebyshev series. In order to do so, we first introduce some notations.

For $k\in\mathbb{N}$, we denote by $T_k$ the Chebyshev polynomial of order $k$, defined for instance by $T_k(\cos\alpha) = \cos(k\alpha)$. We denote by $\tau(p)$ the (unknown) time the orbit spends between the two local manifolds, fix normalized grid points $0 = \tilde t^{(0)} < \tilde t^{(1)} < \dots < \tilde t^{(J)} = 1$, and consider the partition of $[0,\tau(p)]$ given by $t^{(j)}(p) = \tau(p)\,\tilde t^{(j)}$, for all $0\le j\le J$. For all $1\le j\le J$ and $k\in\mathbb{N}$, we also introduce the rescaled Chebyshev polynomial $T_k^{(j)}(\cdot,p)$, defined as

 T(j)k(t,p)=Tk(2t−t(j)(p)−t(j−1)(p)t(j)(p)−t(j−1)(p)).

We can then write the orbit between the two manifolds as

 X(t,p)=X(j)0(p)+2∞∑k=1X(j)k(p)T(j)k(t,p),∀ t∈(t(j−1)(p),t(j)(p)), ∀ j∈{1,…,J}. (17)

Finally, we assume that, following the methodology presented in Section 3.2, a truncated parameterization $\hat Q$ of the local unstable manifold of $\hat X(p)$, as well as a truncated parameterization $\check Q$ of the local stable manifold of $\check X(p)$, have been computed. Rewriting the differential equation in (16) as an integral equation, plugging in the expansion (17) and using well known properties of the Chebyshev polynomials, namely

 Tk(1)=1,Tk(−1)=(−1)kand∫Tk=12(Tk+1k+1−Tk−1k−1),

we obtain

$$\begin{cases}
k\,X_k^{(j)}(p) = \tau(p)\,\dfrac{\tilde t^{(j)}-\tilde t^{(j-1)}}{4}\left(f_{k-1}^{(j)}(X(\cdot,p),p) - f_{k+1}^{(j)}(X(\cdot,p),p)\right), & \forall\, k\ge 1,\ \forall\, 1\le j\le J,\\[1ex]
X_0^{(j)}(p) + 2\displaystyle\sum_{k=1}^{\infty} X_k^{(j)}(p) = X_0^{(j+1)}(p) + 2\sum_{k=1}^{\infty} (-1)^k X_k^{(j+1)}(p), & \forall\, 1\le j\le J-1,\\[1ex]
X_0^{(1)}(p) + 2\displaystyle\sum_{k=1}^{\infty} (-1)^k X_k^{(1)}(p) = \hat Q(\hat\theta(p),p),\\[1ex]
X_0^{(J)}(p) + 2\displaystyle\sum_{k=1}^{\infty} X_k^{(J)}(p) = \check Q(\check\theta(p),p),
\end{cases} \tag{18}$$

where the $f_k^{(j)}(X(\cdot,p),p)$ are the Chebyshev coefficients of $t \mapsto f(X(t,p),p)$ on $(t^{(j-1)}(p), t^{(j)}(p))$. The first line in (18) corresponds to the differential equation on each subinterval, the second line ensures that the solution connects continuously between two consecutive subintervals, and the last two lines correspond to the boundary conditions on each manifold.

###### Remark 3.4.

In (18), besides the unknown Chebyshev coefficients $X_k^{(j)}(p)$, we have additional scalar unknowns: the time $\tau(p)$ and the parameterization variables $\hat\theta(p)\in\mathbb{R}^{m_u}$ and $\check\theta(p)\in\mathbb{R}^{m_s}$. Still assuming $m_u + m_s = m + 1$, the system is then underdetermined, and we can fix two of these scalar parameters to recover a (locally) unique solution.

Similarly to the two previous cases of Fourier and Taylor series, when we consider as a random parameter having a given distribution we only need to expand every unknown in (18) using PC. We again refer to Section 5 for an explicit example.

## 4 First example: a Lotka-Volterra system

In this section, we compute steady states and invariant manifolds of a Lotka-Volterra system of competition type

 {˙x=(1−x−ay)x,˙y=(1−y−bx)y, (19)

where $b$ is a bounded random variable having a given distribution. This basic example allows us to easily assess the quality of our numerical computations by comparing them to analytic results.

### 4.1 Analytical results (for benchmarking)

#### 4.1.1 Deterministic framework

Assuming $ab \neq 1$, system (19) has a non-trivial equilibrium given by

 (xeq,yeq)=(a−1ab−1,b−1ab−1).

The eigenvalues of the Jacobian are

$$\lambda^{(1)} = -1 \qquad\text{and}\qquad \lambda^{(2)} = \frac{(a-1)(b-1)}{ab-1}, \tag{20}$$

and associated eigenvectors are given by

$$V_{\lambda^{(1)}} = \begin{pmatrix} a-1 \\ b-1 \end{pmatrix} \qquad\text{and}\qquad V_{\lambda^{(2)}} = \begin{pmatrix} a \\ -b \end{pmatrix}. \tag{21}$$

In the case $a, b > 1$, $\lambda^{(2)}$ is positive and the equilibrium is of saddle type. Its stable manifold is the line $\{y = \frac{b-1}{a-1}x\}$, and its unstable manifold is a (nonlinear) curve connecting $(x_{eq}, y_{eq})$ to $(1,0)$ and $(0,1)$.

#### 4.1.2 Stochastic framework

We now assume that the parameter $b$ is a bounded random variable. We write

 b(ω)=¯b+σp(ω),

where $\bar b\in\mathbb{R}$, $\sigma > 0$, and $p$ is a random variable taking values in $[-1,1]$. We assume $a > 1$ and $\bar b - \sigma > 1$, so that $a, b > 1$ a.e. For specific distributions of $p$, one can compute analytically the first two moments of $x_{eq}$:

• If $p$ is uniformly distributed on $[-1,1]$, that is if its PDF is given by $\rho(s) = 1/2$ for $s\in[-1,1]$, one has

$$\mathbb{E}(x_{eq}) = \frac{a-1}{a\sigma}\,\operatorname{arctanh}\!\left(\frac{a\sigma}{a\bar b-1}\right) \qquad\text{and}\qquad \mathbb{E}(x_{eq}^2) = \frac{(a-1)^2}{(a\bar b-1)^2 - (a\sigma)^2}.$$
• If $p$ is beta-distributed with parameters $(1/2, 1/2)$, that is if its PDF is given by $\rho(s) = 1/(\pi\sqrt{1-s^2})$ for $s\in(-1,1)$, one has

$$\mathbb{E}(x_{eq}) = \frac{a-1}{a\bar b-1}\,\frac{1}{\sqrt{1-\left(\frac{a\sigma}{a\bar b-1}\right)^2}} \qquad\text{and}\qquad \mathbb{E}(x_{eq}^2) = \left(\frac{a-1}{a\bar b-1}\right)^2 \frac{1}{\left(1-\left(\frac{a\sigma}{a\bar b-1}\right)^2\right)^{3/2}}.$$
• For other beta distributions of $p$, analogous closed-form expressions can be derived in the same way.

In order to limit the number of comparisons and illustrations in the next section, we only consider the first component $x_{eq}$ of the equilibrium, but similar analytical and numerical computations could of course also be carried out for $y_{eq}$.
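The closed-form moments for the uniform case can be double-checked by quadrature. A short sketch (with arbitrarily chosen values $a = 2$, $\bar b = 2$, $\sigma = 0.2$, which satisfy the standing assumptions):

```python
import numpy as np

a, bbar, sigma = 2.0, 2.0, 0.2
xeq = lambda s: (a - 1) / (a * (bbar + sigma * s) - 1)   # x_eq as a function of p

# uniform p on [-1, 1]: E[g(p)] = (1/2) * int_{-1}^{1} g(s) ds, via Gauss-Legendre
nodes, weights = np.polynomial.legendre.leggauss(50)
E_num = 0.5 * np.sum(weights * xeq(nodes))
E2_num = 0.5 * np.sum(weights * xeq(nodes)**2)

# closed-form expressions for the uniform case
c = a * bbar - 1
E_exact = (a - 1) / (a * sigma) * np.arctanh(a * sigma / c)
E2_exact = (a - 1)**2 / (c**2 - (a * sigma)**2)
```

The quadrature values agree with the analytic formulas to machine precision.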

### 4.2 Numerical results

In this section, we compute, using PC, the equilibrium $(x_{eq}, y_{eq})$ and its stable and unstable manifolds. For given ,