Bounds for deterministic and stochastic dynamical systems using sum-of-squares optimization
We describe methods for proving upper and lower bounds on infinite-time averages in deterministic dynamical systems and on stationary expectations in stochastic systems. The dynamics and the quantities to be bounded are assumed to be polynomial functions of the state variables. The methods are computer-assisted, using sum-of-squares polynomials to formulate sufficient conditions that can be checked by semidefinite programming. In the deterministic case, we seek tight bounds that apply to particular local attractors. An obstacle to proving such bounds is that they do not hold globally; they are generally violated by trajectories starting outside the local basin of attraction. We describe two closely related ways past this obstacle: one that requires knowing a subset of the basin of attraction, and another that considers the zero-noise limit of the corresponding stochastic system. The bounding methods are illustrated using the van der Pol oscillator. We bound deterministic averages on the attracting limit cycle above and below to within 1%, which requires a lower bound that does not hold for the unstable fixed point at the origin. We obtain similarly tight upper and lower bounds on stochastic expectations for a range of noise amplitudes. Limitations of our methods for certain types of deterministic systems are discussed, along with prospects for improvement.
In the study of dynamical systems with complicated and possibly chaotic dynamics, average quantities are often of more interest than any particular solution trajectory. This is partly because of the difficulty of computing a trajectory precisely and partly because average quantities are more important in many applications. For instance, one might seek time-averaged drag forces in a model of an oil pipeline or ensemble-averaged temperatures in a stochastic climate model. One way to estimate averages quantitatively is to integrate the dynamical system numerically and average over the resulting particular solution. Such direct numerical simulations are often straightforward, but the accuracy of the result is not guaranteed unless errors are rigorously controlled. Moreover, even a perfectly accurate solution does not give information about the different trajectories that result from different initial conditions. Finally, the cost of computing well-converged averages is often prohibitive, especially in systems that are high-dimensional or stochastic.
A complementary approach, which we pursue here, is to prove bounds on average quantities directly from the governing equations. An advantage over numerical integration is that bounds can be proven without knowing any solution trajectories. Furthermore, they can be proven for all possible trajectories at once, or for all trajectories within a given region of state space. On the other hand, it is generally difficult to prove bounds that are tight enough to give good estimates of the average quantities being bounded.
The aim of this work is to develop methods for proving bounds that are tight, meaning that the upper and lower bounds are equal or nearly so. We assume that the quantity of interest can be described by a function $\Phi(x)$, where $x$ is the state vector of a dynamical system, and we seek to bound averages of $\Phi$. In deterministic systems, we consider averages over infinite time,
$$\overline{\Phi} \;=\; \lim_{T\to\infty}\frac{1}{T}\int_0^T \Phi(x(t))\,\mathrm{d}t. \tag{1.1}$$
If the above limit does not exist it can be replaced by the limit superior or inferior for the upper and lower bound problems, respectively. The value of $\overline{\Phi}$ depends in general on which trajectory is being averaged over. In stochastic systems with a stationary probability distribution $\rho_\varepsilon$, we consider stationary ensemble averages,
$$\mathbb{E}_\varepsilon\Phi \;=\; \int_{\mathbb{R}^n} \Phi(x)\,\rho_\varepsilon(x)\,\mathrm{d}x. \tag{1.2}$$
One obstacle to proving tight bounds is, for reasons described shortly, the need to determine whether certain complicated expressions are sign-definite. As proposed in [], this can be done systematically with computer assistance for finite-dimensional systems with polynomial dynamics. The main idea, described further in §2, is to construct a polynomial whose non-negativity implies the desired bound. By the methods of sum-of-squares (SoS) programming [, ], a sufficient condition for this non-negativity can then be posed as a semidefinite program (SDP).
In deterministic systems there is a second obstacle to proving tight bounds. It is generally easiest to construct bounds that hold for all possible initial conditions. Sometimes this is desired, but other times one is interested in a particular local attractor, and bounds holding globally are not generally tight for averages over a local attractor. In this sense, the local bound is spoiled by any other invariant structure in the state space, such as another attractor or an unstable fixed point.
We pursue two ways of obtaining tight bounds specific to a local attractor. The first is to enforce conditions that imply the bound only on an absorbing set around the attractor, thereby omitting other invariant structures. The second is to add noise to the system and prove bounds for ensemble averages in the vanishing noise limit. If the system is stochastically stable, then under certain conditions this limit will agree with the corresponding deterministic time average []. Note that these ideas are applicable irrespective of any special structure in the system (such as Hamiltonian), and can in principle be applied to systems of arbitrarily high but finite dimension.
Throughout this work, we illustrate the methods described using the van der Pol oscillator [], which can be written as
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = \mu\,(1 - x_1^2)\,x_2 - x_1, \tag{1.3}$$
where a dot denotes $\mathrm{d}/\mathrm{d}t$. The parameter $\mu > 0$ sets the strength of the nonlinear damping. There is a limit cycle that attracts all trajectories except the unstable fixed point at the origin (Figure 1), and the global invariant set is composed of the limit cycle and the fixed point. The system is a standard example of a nonlinear oscillator and has been studied extensively, including with stochastic forcing [, , ]. Here we find nearly tight bounds on averages of $\Phi$ both with and without noise.
The rest of this work is organized as follows. Section 2 reviews the framework of [] for bounding deterministic averages and uses it to find upper bounds on $\overline{\Phi}$ in the van der Pol system. Section 3 extends the framework to give attractor-specific bounds, which we use to find lower bounds on $\overline{\Phi}$ over the van der Pol limit cycle. These lower bounds are larger than zero and thus do not apply to the unstable fixed point. The bounding methods for stochastic dynamical systems are described in §4, and they are specialized to the case of small and vanishing noise in §§5–6. The methods of these sections give bounds on $\mathbb{E}_\varepsilon\Phi$ in the van der Pol example for a range of noise amplitudes. Section 7 discusses the limitations of our methods and gives ideas for improvement, and §8 offers concluding remarks.
2 Global bounds for deterministic systems
Consider an autonomous dynamical system
$$\dot{x} = f(x), \qquad x \in \mathbb{R}^n,$$
in which all trajectories remain bounded as $t \to \infty$. We assume nothing special about the structure of $f$ except that the system is bounded, although eventually we restrict attention to polynomial $f$. We wish to prove upper and lower bounds on $\overline{\Phi}$, the average of a function $\Phi(x)$ over a trajectory $x(t)$. In applications, $\Phi$ can be chosen as a quantity of interest for the system. Unless the system has a single attractor that is globally asymptotically stable, different trajectories can give different values of $\overline{\Phi}$. In this section we seek global bounds on $\overline{\Phi}$, meaning they hold for every possible value of $\overline{\Phi}$.
Suppose we wish to prove a constant lower bound $L$ on all possible $\overline{\Phi}$,
$$\overline{\Phi} \;\ge\; L. \tag{2.2}$$
Central to our method will be a suitably chosen differentiable storage function $V(x)$. No matter what $V$ is chosen, $\overline{\dot{V}} = 0$ along any bounded trajectory because
$$\overline{\tfrac{\mathrm{d}}{\mathrm{d}t}V(x(t))} \;=\; \lim_{T\to\infty}\frac{V(x(T)) - V(x(0))}{T} \;=\; 0.$$
The desired bound (2.2) is thus equivalent to the inequality
$$\overline{\Phi + f\cdot\nabla V} \;\ge\; L$$
for any differentiable $V$, since $\dot{V} = f\cdot\nabla V$ along trajectories. The above time average cannot be evaluated without knowing trajectories $x(t)$, but a sufficient condition for the inequality to hold is for it to hold pointwise for all $x$. Thus, we will have proven that $\overline{\Phi} \ge L$ if we can find any differentiable $V$ such that
$$\Phi(x) + f(x)\cdot\nabla V(x) - L \;\ge\; 0 \quad \text{for all } x \in \mathbb{R}^n.$$
An upper bound can be proven in a similar way by reversing the inequality sign in the pointwise sufficient condition. The following proposition summarizes both conditions.
Proposition 1. Let $\dot{x} = f(x)$ with $f \in C^1(\mathbb{R}^n)$ be a dynamical system whose trajectories are bounded forward in time, let $\Phi(x)$ be a scalar function and let $\overline{\Phi}$ be its time average defined as in (1.1). If there exist differentiable functions $V(x)$, $W(x)$, and constants $U$, $L$, such that
$$U - \Phi(x) - f(x)\cdot\nabla V(x) \;\ge\; 0 \quad \text{for all } x \in \mathbb{R}^n, \tag{2.6a}$$
$$\Phi(x) - L + f(x)\cdot\nabla W(x) \;\ge\; 0 \quad \text{for all } x \in \mathbb{R}^n, \tag{2.6b}$$
then along any trajectory $x(t)$,
$$L \;\le\; \overline{\Phi} \;\le\; U.$$
In the statement of Proposition 1 we have assumed that the time average exists. Should the limit in (1.1) not converge, the Proposition holds if we take the limit superior when computing the upper bound, and the limit inferior when computing the lower bound. That is, for any trajectory $x(t)$,
$$L \;\le\; \liminf_{T\to\infty}\frac{1}{T}\int_0^T \Phi(x(t))\,\mathrm{d}t \;\le\; \limsup_{T\to\infty}\frac{1}{T}\int_0^T \Phi(x(t))\,\mathrm{d}t \;\le\; U.$$
See [] for a discussion of when infinite-time averages do or do not converge.
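As a toy illustration of Proposition 1 (our example, not from the source), consider the one-dimensional system $\dot{x} = x - x^3$ with $\Phi = x^2$. The storage function $V = x^2/2$ certifies the global upper bound $\overline{\Phi} \le 1$, because the pointwise condition factors into an explicit square; the bound is tight, since trajectories attracted to the fixed points $x = \pm 1$ have $\overline{\Phi} = 1$. The algebra can be verified symbolically:

```python
import sympy as sp

# Toy illustration (ours, not from the source) of Proposition 1.
x = sp.symbols('x')
f = x - x**3          # 1D dynamics dx/dt = x - x**3
Phi = x**2            # quantity whose time average we bound
V = x**2 / 2          # candidate storage function
U = 1                 # candidate upper bound

# Pointwise sufficient condition for the upper bound:
# U - Phi - V'(x)*f(x) >= 0 for all x.
expr = sp.expand(U - Phi - sp.diff(V, x) * f)
print(expr)             # x**4 - 2*x**2 + 1
print(sp.factor(expr))  # (x - 1)**2*(x + 1)**2, manifestly non-negative
```

Because the certificate is a perfect square, this toy problem is exactly the kind of non-negativity check that the SoS machinery of the next paragraphs automates.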
There are two difficulties in applying the above proposition to yield good bounds $U$ and $L$. The first is choosing the storage functions $V$ and $W$. The second is checking whether the inequalities (2.6a) and (2.6b) hold for candidate storage functions and bound values. These difficulties can be prohibitive in general, but the task is greatly simplified when $f$ and $\Phi$ are polynomials of the state variables $x$.
In the rest of this work we assume that $f$ and $\Phi$ are polynomials. If the chosen $V$ and $W$ are also polynomials, then so are the left-hand sides of (2.6a) and (2.6b). Checking the sufficient conditions (2.6a) and (2.6b) thus amounts to verifying the non-negativity of polynomial expressions. While this is an NP-hard problem, the computational complexity can be significantly reduced by replacing the non-negativity conditions with the stronger conditions that the left-hand sides of (2.6a) and (2.6b) belong to the set of polynomials that are sums of squares (SoS). A polynomial $p(x)$ is said to be SoS if it can be expressed as the sum of squares of some other polynomials—that is, if there exist polynomials $q_1(x), \dots, q_m(x)$ such that
$$p(x) \;=\; \sum_{i=1}^{m} q_i(x)^2.$$
To prove the bounds $\overline{\Phi} \le U$ and $\overline{\Phi} \ge L$ it suffices to find polynomials $V$ and $W$ such that $U - \Phi - f\cdot\nabla V$ and $\Phi - L + f\cdot\nabla W$ are SoS. The best bounds that can be proven in this framework are
$$U^\star = \min_{U,\,V}\; U \quad \text{such that} \quad U - \Phi - f\cdot\nabla V \ \text{is SoS},$$
$$L^\star = \max_{L,\,W}\; L \quad \text{such that} \quad \Phi - L + f\cdot\nabla W \ \text{is SoS}. \tag{2.9}$$
For the storage functions $V$ and $W$, we must specify polynomial ansätze with undetermined coefficients. The decision variables in the upper bound optimization, for instance, are $U$ and the coefficients of $V$.
Optimization problems with SoS constraints such as (2.9) can be solved numerically using the methods of SoS programming. The main idea behind SoS programming is that every polynomial of even degree can be represented as a quadratic form $p(x) = z^{\mathsf T} G z$, where $z$ is a vector of monomials in a suitable basis and $G$ is a symmetric (Gram) matrix; the polynomial admits a SoS decomposition if and only if some such $G$ is positive semidefinite [, , ]. Constraints involving SoS polynomials, including additional equality and inequality constraints, can be posed as equality and inequality conditions on symmetric matrices. An optimization problem with constraints of this type is known as a semidefinite program (SDP). A number of efficient computer solvers are available for SDPs (e.g. [, , , , ]), and the software packages YALMIP [] or SOSTOOLS [] can assist in formulating SoS constraints as SDPs. More details on convex optimization and the link between SoS polynomials and SDPs can be found in [, , ], while examples using SoS programming to study dynamical systems can be found in [, , , , , , , , ].
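The Gram-matrix correspondence can be illustrated with a small example of our own (not from the source). For $p(x,y) = x^2 - 2xy + 2y^2$ and the monomial vector $z = (x, y)$, one Gram matrix is $G = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$, and its positive definiteness certifies $p = (x-y)^2 + y^2$:

```python
import numpy as np

# Toy Gram-matrix check (our example): p(x, y) = x^2 - 2xy + 2y^2
# equals z^T G z with z = (x, y); p is SoS iff some such G is PSD.
G = np.array([[1.0, -1.0],
              [-1.0, 2.0]])

print(np.linalg.eigvalsh(G))   # both eigenvalues positive: G is PSD

# A Cholesky factorization G = R^T R makes the decomposition explicit:
# p = |R z|^2, i.e. p = (x - y)^2 + y^2 here.
R = np.linalg.cholesky(G).T
print(R)
```

An SDP solver automates exactly this search: it looks for a PSD Gram matrix among all matrices $G$ representing the same polynomial.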
For our numerical implementation, we used the SoS module of YALMIP [] to transform SoS optimization problems into SDPs. To solve the SDPs, we used the multiple-precision solver SDPA-GMP [] for the following reasons. First, the SDPs we solved were badly conditioned even for modest polynomial degrees, and none of the standard double-precision solvers we tested converged reliably. This issue could be resolved by carefully rescaling the system, but there is no systematic rule to determine a suitable rescaling. Second, working in multiple precision offers a simple check on results: increase the precision, and confirm that the bounds $U^\star$ or $L^\star$ change very little. We have done this for all bounds reported here. Appendix A describes additional checks on numerical results, including how SDP computations could be part of fully rigorous computer-assisted proofs.
Example 1: deterministic upper bounds for the van der Pol limit cycle.
To illustrate the application of Proposition 1, we have computed upper bounds $U^\star$ in the van der Pol system (1.3) for various $\mu$ and four different polynomial degrees of $V$. Figure 2 shows the resulting $U^\star$, along with estimates of $\overline{\Phi}$ obtained by integrating over the limit cycle using the fourth-order Runge–Kutta method with fixed time steps. As expected, increasing the degree of the storage function gives a better (smaller) upper bound $U^\star$. For a given degree, the bounds become less tight as $\mu$ is raised because the shape of the limit cycle is more complicated (cf. Figure 1).
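For comparison with such bounds, a time average on the limit cycle can be estimated by direct integration. The sketch below (ours) uses a fixed-step fourth-order Runge–Kutta scheme at $\mu = 1$; the quantity $\Phi = x_1^2$ is an illustrative stand-in, since our copy of the text does not specify $\Phi$:

```python
import numpy as np

# Direct numerical estimate of a long-time average for the van der Pol
# oscillator at mu = 1, with the illustrative choice Phi = x1**2.
mu = 1.0

def f(x):
    return np.array([x[1], mu * (1.0 - x[0]**2) * x[1] - x[0]])

def rk4_step(x, h):
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

h = 0.01
x = np.array([2.0, 0.0])
for _ in range(5_000):       # discard the transient (t = 50)
    x = rk4_step(x, h)
total = 0.0
n_avg = 50_000               # then average over t = 500
for _ in range(n_avg):
    x = rk4_step(x, h)
    total += x[0]**2
print(total / n_avg)         # estimate of the time average of x1^2
```

Averages obtained this way are only as trustworthy as the discretization, which is precisely the motivation for the rigorous bounds above.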
The trivial lower bound $\overline{\Phi} \ge 0$ is the best global lower bound possible for the van der Pol system. It cannot be improved because it is saturated by the trajectory staying at the unstable fixed point at the origin. However, if one is interested only in the trajectories that tend to the stable limit cycle, then the lower bound of zero is not tight. Raising the lower bound requires a result that does not apply globally, but instead applies only to a subset of all possible trajectories.
3 Bounds on local attractors
Often, one is interested in the average of a function over only a subset of all possible trajectories, such as those tending to a particular attractor. If a bound on $\overline{\Phi}$ over such trajectories is to be tight, it should not apply to trajectories that tend to a different local attractor, nor should it apply to trajectories on unstable invariant structures that are not part of the attractor of interest (such as unstable fixed points, unstable limit cycles, or basin boundaries). However, global bounds of the kind derived using Proposition 1 are unlikely to be tight for a particular attractor since they must obey
$$L \;\le\; \overline{\Phi} \;\le\; U,$$
where the average is taken over any trajectory $x(t)$ of the dynamical system.
Suppose we wish to bound possible values of $\overline{\Phi}$, not for all trajectories but only for trajectories that eventually enter and remain inside a given absorbing domain $\Omega \subset \mathbb{R}^n$. To compute $\overline{\Phi}$ over any such trajectory, it suffices to begin averaging after the trajectory has permanently entered $\Omega$, so dynamics outside of $\Omega$ can be ignored. This suggests the following modification of Proposition 1, where the inequality conditions are imposed only on $\Omega$ and, consequently, the resulting bounds are proven only for trajectories that permanently enter $\Omega$. The comments made in Remark 1 still apply.
Proposition 2. Let $\dot{x} = f(x)$ be a dynamical system with $f \in C^1(\mathbb{R}^n)$, let $\Phi(x)$ be a scalar function, let $\overline{\Phi}$ be its time average defined as in (1.1), and let $\Omega \subset \mathbb{R}^n$ be an absorbing domain. If there exist differentiable functions $V(x)$, $W(x)$, and constants $U$, $L$, such that
$$U - \Phi(x) - f(x)\cdot\nabla V(x) \;\ge\; 0 \quad \text{for all } x \in \Omega,$$
$$\Phi(x) - L + f(x)\cdot\nabla W(x) \;\ge\; 0 \quad \text{for all } x \in \Omega,$$
then along any trajectory $x(t)$ that permanently enters $\Omega$,
$$L \;\le\; \overline{\Phi} \;\le\; U.$$
Applying Proposition 2 to a particular system requires specifying an appropriate absorbing domain. For instance, if one is interested in a local attractor with basin of attraction $B$, it suffices to choose any absorbing domain $\Omega$ such that $\Omega \subset B$. Bounds proven on $\Omega$ then apply to all trajectories that approach the attractor or are part of the attractor, but they need not apply to trajectories outside $\Omega$.
Finding a mathematical description of such an $\Omega$ for a given attractor is difficult in general, but this too can be done using SoS programming [, ]. An example in which it is not difficult to choose $\Omega$ is when the trajectory to be excluded from the bound is a repelling fixed point. In this case it suffices to choose $\Omega$ as the complement of any small enough set around that point.
If the absorbing domain can be specified as a semi-algebraic set—that is, defined by a set of polynomial inequalities and equalities—the conditions of Proposition 2 can be checked by SoS programming using the generalized $S$-procedure [, Lemma 2.1]. For instance, suppose $\Omega = \{x : g(x) \ge 0\}$ for some polynomial $g$; proving a lower bound for trajectories entering $\Omega$ calls for SoS conditions that imply $\Phi(x) - L + f(x)\cdot\nabla W(x) \ge 0$ when $g(x) \ge 0$. For this to be true, it suffices that there exists an SoS polynomial $s(x)$ such that $\Phi - L + f\cdot\nabla W - s\,g \ge 0$ for all $x$. Strengthening these non-negativity constraints to SoS constraints and making similar arguments for the upper bound gives the following two SoS programs:
$$U^\star = \min_{U,\,V,\,s}\; U \quad \text{such that} \quad U - \Phi - f\cdot\nabla V - s\,g \ \text{is SoS and} \ s \ \text{is SoS},$$
$$L^\star = \max_{L,\,W,\,s}\; L \quad \text{such that} \quad \Phi - L + f\cdot\nabla W - s\,g \ \text{is SoS and} \ s \ \text{is SoS}, \tag{3.4}$$
where polynomial ansätze are specified for $V$, $W$, and the multipliers $s$, and their free coefficients are the decision variables. Using similar ideas, the $S$-procedure can be generalized to semi-algebraic $\Omega$ defined by multiple polynomial inequalities and equality constraints [].
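As a minimal illustration of the $S$-procedure (our toy example, not from the source), consider certifying that $p(x) = x$ is non-negative on the set $\Omega = \{x : g(x) = x - 1 \ge 0\}$, even though $p$ is not globally non-negative:

```latex
% Choose the multiplier s(x) \equiv 1, which is trivially SoS. Then
p(x) - s(x)\,g(x) \;=\; x - (x - 1) \;=\; 1,
% and the constant 1 is SoS. Hence, for every x with g(x) \ge 0,
p(x) \;\ge\; s(x)\,g(x) \;\ge\; 0 ,
% which is the claimed non-negativity on \Omega.
```

The bounding programs (3.4) apply the same mechanism with $p$ replaced by the bounding expressions of Proposition 2.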
Example 2: deterministic lower bounds for the van der Pol limit cycle.
Let us revisit the lower bound on $\overline{\Phi}$ for the van der Pol oscillator. Suppose we want a bound that does not apply to the unstable fixed point at the origin but applies to the other trajectories, all of which approach the limit cycle. If such a lower bound is perfectly tight, it will equal the value of $\overline{\Phi}$ on the limit cycle. To apply Proposition 2, we must specify an absorbing domain that contains the entire limit cycle but omits the origin. As described in Appendix B, the complement of a sufficiently small disk around the origin is just such an absorbing domain.
Figure 3 shows the lower bounds obtained by solving the lower bound program of (3.4) using two absorbing domains of this type. At a given polynomial degree, using the smaller absorbing domain gives a better bound. Results would likely be improved further by using absorbing domains $\Omega$ that closely approximate the attractor. This suggests a two-step procedure: using SoS techniques to find the smallest possible $\Omega$ around the attractor of interest [, ], and then solving the SoS bounding programs (3.4).
4 Bounds for stochastic systems
Consider a stochastic dynamical system
$$\mathrm{d}x_t \;=\; f(x_t)\,\mathrm{d}t + \varepsilon\,\sigma\,\mathrm{d}B_t,$$
in which the random trajectories $x_t$ are bounded almost surely as $t \to \infty$. The standard vector Wiener process $B_t$ is pre-multiplied by the matrix $\sigma$ that describes the relative effect of each noise component on each state variable $x_i$, and the overall noise strength is scaled by $\varepsilon$.
We assume for simplicity that the noise is additive, meaning that $\sigma$ does not vary with $x$, but the present analysis extends with only minor modifications to multiplicative noise in either the Itô or Stratonovich interpretation. We also assume that the system has reached statistical equilibrium, in which case the probability density of its trajectories, $\rho_\varepsilon(x)$, decays at infinity and satisfies the stationary Fokker–Planck equation
$$\nabla\cdot\big(f\,\rho_\varepsilon\big) \;=\; \frac{\varepsilon^2}{2}\sum_{i,j=1}^{n}\partial_{x_i}\partial_{x_j}\Big[(\sigma\sigma^{\mathsf T})_{ij}\,\rho_\varepsilon\Big]. \tag{4.2}$$
Suppose we wish to prove a constant lower bound $\mathbb{E}_\varepsilon\Phi \ge L$, where $\mathbb{E}_\varepsilon\Phi$ is the stationary expectation of $\Phi$ defined as in (1.2). We assume that such a stationary average exists, which is true for all $\Phi$ that do not grow too fast as $|x| \to \infty$ (precise statements can be found in []). The subscript on $\mathbb{E}_\varepsilon$ indicates its dependence on the noise strength $\varepsilon$. Since $\rho_\varepsilon$ integrates to one, this is equivalent to proving
$$\mathbb{E}_\varepsilon\big[\Phi - L\big] \;\ge\; 0. \tag{4.3}$$
As in the deterministic case, our method of proof relies on a suitably chosen differentiable storage function $V(x)$. For any $V$ that does not grow too quickly as $|x| \to \infty$, the stationary expectation of $f\cdot\nabla V + \frac{\varepsilon^2}{2}\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V$ is zero because
$$\mathbb{E}_\varepsilon\Big[f\cdot\nabla V + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V\Big] \;=\; \int_{\mathbb{R}^n}\Big[f\cdot\nabla V + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V\Big]\rho_\varepsilon\,\mathrm{d}x \;=\; \int_{\mathbb{R}^n} V\,\Big[\tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}\partial_{x_i}\partial_{x_j}\big[(\sigma\sigma^{\mathsf T})_{ij}\,\rho_\varepsilon\big] - \nabla\cdot(f\,\rho_\varepsilon)\Big]\mathrm{d}x \;=\; 0. \tag{4.4}$$
The last equality above follows from the stationary Fokker–Planck equation (4.2). The middle equality follows from integration by parts, assuming that the boundary terms vanish—that is, assuming
$$\lim_{R\to\infty}\oint_{|x|=R}\Big[\rho_\varepsilon\,V\,f + \frac{\varepsilon^2}{2}\Big(\rho_\varepsilon\,\sigma\sigma^{\mathsf T}\nabla V - V\,\sigma\sigma^{\mathsf T}\nabla\rho_\varepsilon\Big)\Big]\cdot n \,\mathrm{d}S \;=\; 0, \tag{4.5}$$
where $n$ is the outwards unit normal to the sphere $|x| = R$, and $\mathrm{d}S$ is the surface element. The above condition holds in many cases, including any case where $V$ is polynomial and $\rho_\varepsilon$ decays exponentially at infinity. When this condition holds, then so does the equality (4.4), hence the inequality (4.3) is equivalent to
$$\mathbb{E}_\varepsilon\Big[\Phi - L + f\cdot\nabla V + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V\Big] \;\ge\; 0. \tag{4.6}$$
The above expectation cannot be evaluated without knowing $\rho_\varepsilon$, but it is sufficient for the inequality to hold pointwise for all $x$. The lower bound $\mathbb{E}_\varepsilon\Phi \ge L$ is therefore proven if we can find any differentiable $V$ satisfying the boundary integral condition (4.5) and the pointwise inequality
$$\Phi(x) - L + f\cdot\nabla V + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V \;\ge\; 0 \quad \text{for all } x \in \mathbb{R}^n.$$
The same argument with a reversed inequality sign gives a sufficient condition for an upper bound on $\mathbb{E}_\varepsilon\Phi$. We summarize these results in the following Proposition, which is the stochastic analog of Proposition 1.
Proposition 3. Let $\mathrm{d}x_t = f(x_t)\,\mathrm{d}t + \varepsilon\,\sigma\,\mathrm{d}B_t$ with $f \in C^1(\mathbb{R}^n)$, $\sigma \in \mathbb{R}^{n\times m}$, be a stochastic dynamical system in a statistically stationary state with probability distribution $\rho_\varepsilon$. If there exist differentiable functions $V(x)$, $W(x)$ and constants $U$, $L$, such that
$$U - \Phi(x) - f\cdot\nabla V - \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V \;\ge\; 0 \quad \text{for all } x \in \mathbb{R}^n, \tag{4.8a}$$
$$\Phi(x) - L + f\cdot\nabla W + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}W \;\ge\; 0 \quad \text{for all } x \in \mathbb{R}^n, \tag{4.8b}$$
where $(\sigma\sigma^{\mathsf T})_{ij}$ are the entries of the diffusion matrix, and if $V$, $W$ grow slowly enough at infinity to each satisfy (4.5), then
$$L \;\le\; \mathbb{E}_\varepsilon\Phi \;\le\; U.$$
The inequality conditions (4.8a) and (4.8b) were derived by a different method in [] for the case where $\sigma$ is the identity matrix (hence $\sigma\sigma^{\mathsf T} = I$). The conditions differ from their deterministic counterparts (2.6a) and (2.6b), respectively, only in the addition of the diffusive terms $\tfrac{\varepsilon^2}{2}\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V$ and $\tfrac{\varepsilon^2}{2}\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}W$. Similar results for non-negative $\Phi$ were also stated in [].
Like the deterministic bounds of §§2–3, the stochastic bounds and storage functions in Proposition 3 can be found numerically for a given $\varepsilon$ if $f$ and $\Phi$ are polynomials. Letting $V$ and $W$ be polynomials also, we replace (4.8a) and (4.8b) with stronger SoS constraints to obtain the SoS programs
$$U^\star_\varepsilon = \min_{U,\,V}\; U \quad \text{such that} \quad U - \Phi - f\cdot\nabla V - \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}V \ \text{is SoS},$$
$$L^\star_\varepsilon = \max_{L,\,W}\; L \quad \text{such that} \quad \Phi - L + f\cdot\nabla W + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}W \ \text{is SoS}. \tag{4.10}$$
Since Proposition 3 relies on the boundary integral condition (4.5), the $V$ and $W$ constructed by the above SoS programs must satisfy this condition in order for $U^\star_\varepsilon$ and $L^\star_\varepsilon$ to be proven bounds. One way to guarantee this is to prove that $\rho_\varepsilon$ decays exponentially as $|x| \to \infty$.
Example 3: stochastic bounds for the van der Pol oscillator.
To illustrate the application of Proposition 3, we have bounded the stationary expectation $\mathbb{E}_\varepsilon\Phi$ for the stochastic van der Pol oscillator
$$\mathrm{d}x_1 = x_2\,\mathrm{d}t, \qquad \mathrm{d}x_2 = \big[\mu\,(1 - x_1^2)\,x_2 - x_1\big]\,\mathrm{d}t + \varepsilon\,\mathrm{d}B_t, \tag{4.11}$$
with $\sigma = (0, 1)^{\mathsf T}$. For the matrix $\sigma$ chosen above, the Wiener process acts on the $x_2$ component alone. This corresponds to a stochastic physical force, as seen by rewriting (4.11) as a second-order equation for the position $x_1$,
$$\ddot{x}_1 - \mu\,(1 - x_1^2)\,\dot{x}_1 + x_1 \;=\; \varepsilon\,\dot{B}_t.$$
We have computed bounds using the SoS programs (4.10) for a range of noise amplitudes $\varepsilon$ and three different polynomial degrees of $V$, $W$. Figure 4 shows the optimal bounds, along with several values of $\mathbb{E}_\varepsilon\Phi$ computed by numerically solving the stationary Fokker–Planck equation (4.2). Our solution of the Fokker–Planck equation employed finite differences with operator splitting as in [], and the system was evolved to steady state by implicit Euler time stepping. The limiting expectation $\lim_{\varepsilon\to 0}\mathbb{E}_\varepsilon\Phi$ equals the deterministic average $\overline{\Phi}$ on the limit cycle, reflecting the fact that the van der Pol system is stochastically stable.
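The Fokker–Planck values can also be sanity-checked by a crude Monte Carlo estimate. The sketch below (ours) integrates the stochastic van der Pol system by the Euler–Maruyama method at $\mu = 1$ and $\varepsilon = 0.5$, again with the illustrative stand-in $\Phi = x_1^2$, and forms a long-time average along one trajectory, which approximates the stationary expectation when the dynamics are ergodic:

```python
import numpy as np

# Euler-Maruyama estimate of E[x1^2] for the stochastic van der Pol
# system, with noise forcing the x2 (velocity) equation only.
rng = np.random.default_rng(0)
mu, eps = 1.0, 0.5
h = 0.002
n_transient, n_avg = 25_000, 250_000  # t = 50 transient, t = 500 average

x1, x2 = 2.0, 0.0
total = 0.0
dW = rng.normal(0.0, np.sqrt(h), size=n_transient + n_avg)
for i in range(n_transient + n_avg):
    x1, x2 = (x1 + h * x2,
              x2 + h * (mu * (1.0 - x1**2) * x2 - x1) + eps * dW[i])
    if i >= n_transient:
        total += x1**2
print(total / n_avg)   # single-trajectory estimate of E[x1^2]
```

Such estimates carry both discretization and sampling error, so they serve only as a consistency check on the rigorous bounds.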
The upper bounds in Figure 4 are nearly tight at all noise strengths for $V$ of degree 10 and higher. They become less tight as $\varepsilon$ increases, but they still correctly capture the increase in $\mathbb{E}_\varepsilon\Phi$ that occurs when stochastic forcing “smears out” the van der Pol limit cycle.
The lower bounds in Figure 4 are good for strong noise but not for weak noise. In fact, $L^\star_\varepsilon \to 0$ as $\varepsilon \to 0$ for any fixed polynomial degree of $W$. To see that this is inevitable, observe that the diffusive term in (4.8b) vanishes as $\varepsilon \to 0$ if the second derivatives of $W$ remain bounded. As $\varepsilon \to 0$, the stochastic SoS programs (4.10) reduce to the deterministic SoS programs (2.9), so the stochastic bounds can be no tighter than the global deterministic bounds.
In the van der Pol example, upper bounds are not affected by this phenomenon since the deterministic global upper bound on $\overline{\Phi}$ of §2 is sharp, and $U^\star_\varepsilon \to U^\star$ as $\varepsilon \to 0$. However, a tight lower bound on $\mathbb{E}_\varepsilon\Phi$ at small $\varepsilon$ is impeded by the unstable deterministic trajectory at the origin, which saturates the deterministic global lower bound $\overline{\Phi} \ge 0$. In the deterministic case, a tight local lower bound was achieved in §3 by ignoring part of phase space, but we cannot do so in the stochastic case since $\rho_\varepsilon > 0$ everywhere. Instead, the diffusive term in the SoS constraint of (4.10) must remain significant near the origin as $\varepsilon \to 0$. Clearly, this cannot be accomplished with polynomial $W$. The next section illustrates how a non-polynomial $W$ can be used instead to improve the lower bounds on $\mathbb{E}_\varepsilon\Phi$ at small $\varepsilon$.
5 Bounds for stochastic systems with weak noise
An unstable trajectory of the deterministic system can interfere not only with tight bounds on $\overline{\Phi}$ for a local attractor but also with tight bounds on $\mathbb{E}_\varepsilon\Phi$ in the corresponding stochastic system, at least when the noise is weak. This was illustrated in the previous section, where lower bounds on $\mathbb{E}_\varepsilon\Phi$ in the van der Pol system were not tight: they approached zero as $\varepsilon \to 0$ with polynomial $W$ of fixed degree. In this section we derive lower bounds on $\mathbb{E}_\varepsilon\Phi$ that remain tight when $\varepsilon$ is small. We do this using a non-polynomial $W$ that depends explicitly on $\varepsilon$.
The method described below applies to stochastic upper or lower bounds, provided that the deterministic trajectory interfering with the bounds is a repelling fixed point. For concreteness, suppose we are trying to prove a tight lower bound on an expectation $\mathbb{E}_\varepsilon\Phi$ that is strictly positive, as in the van der Pol example. Suppose also that the global lower bound on $\overline{\Phi}$ in the corresponding deterministic system, $L^\star$, is zero and is saturated by the repelling fixed point at the origin. For the lower bound on $\mathbb{E}_\varepsilon\Phi$ to remain larger than the deterministic lower bound as $\varepsilon \to 0$, the stochastic bounding inequality (4.8b) must not reduce to its deterministic counterpart (2.6b). This requires that the diffusive term $\tfrac{\varepsilon^2}{2}\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}W$ remains commensurate with the other terms in (4.8b), at least near the fixed point at the origin.
The diffusive term can remain significant near the origin as $\varepsilon \to 0$ if $W$ develops a boundary layer there. Purely polynomial $W$ can have no such boundary layer, so we add a non-polynomial term and let
$$W(x) \;=\; P(x) + c\,\ln Q(x). \tag{5.1}$$
Here, $P(x)$ is a polynomial, $c$ is a tunable constant, and $Q(x)$ is a positive definite quadratic form,
$$Q(x) \;=\; x^{\mathsf T} M x,$$
for a suitable symmetric, positive definite matrix $M$ (denoted $M \succ 0$) to be determined. When $\varepsilon$ is small, the logarithmic term dominates near the origin, but $P$ remains significant away from the origin.
Despite not being polynomial, the ansatz (5.1) is suitable for deriving a SoS optimization problem for the bound because the relevant derivatives of $W$ are rational functions,
$$\nabla W \;=\; \nabla P + c\,\frac{\nabla Q}{Q}, \qquad \partial_{x_i}\partial_{x_j} W \;=\; \partial_{x_i}\partial_{x_j} P + c\,\frac{Q\,\partial_{x_i}\partial_{x_j}Q - (\partial_{x_i}Q)(\partial_{x_j}Q)}{Q^2}.$$
Substituting these expressions into the bounding inequality (4.8b) gives a rational inequality with the positive denominator $Q^2$. Multiplying this rational inequality by the denominator we obtain the equivalent polynomial inequality
$$Q^2\Big(\Phi - L + f\cdot\nabla P + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}P\Big) + c\,Q\,f\cdot\nabla Q + \tfrac{c\,\varepsilon^2}{2}\Big(Q\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}Q - \nabla Q^{\mathsf T}\sigma\sigma^{\mathsf T}\nabla Q\Big) \;\ge\; 0.$$
Consequently, a lower bound on $\mathbb{E}_\varepsilon\Phi$ can be calculated for a fixed, small $\varepsilon$ by solving the optimization problem
$$L^\star_\varepsilon = \max_{L,\,P,\,c}\; L \quad \text{such that} \quad Q^2\Big(\Phi - L + f\cdot\nabla P + \tfrac{\varepsilon^2}{2}\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}P\Big) + c\,Q\,f\cdot\nabla Q + \tfrac{c\,\varepsilon^2}{2}\Big(Q\textstyle\sum_{i,j}(\sigma\sigma^{\mathsf T})_{ij}\,\partial_{x_i}\partial_{x_j}Q - \nabla Q^{\mathsf T}\sigma\sigma^{\mathsf T}\nabla Q\Big) \ \text{is SoS}. \tag{5.6}$$
The above SoS program can produce lower bounds larger than zero for very weak noise strengths, meaning that these bounds are not spoiled by the fixed point at the origin. In fact, while the SoS bounding program (4.10) for polynomial $W$ reduces to its deterministic counterpart (2.9) as $\varepsilon \to 0$, the above program does not. Instead, the polynomial constraint reduces to
$$Q^2\big(\Phi - L + f\cdot\nabla P\big) + c\,Q\,f\cdot\nabla Q \;\ge\; 0,$$
retaining the term $c\,Q\,f\cdot\nabla Q$ that does not appear in the deterministic program. This term is due to the boundary layer built into $W$ around the origin.
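The claim that multiplying through by $Q^2$ clears all denominators can be checked symbolically. The sketch below (ours) uses the van der Pol drift at $\mu = 1$ together with stand-in choices for $\Phi$, $P$, and $Q$, and the schematic ansatz $W = P + c\ln Q$ assumed above; none of these specific choices come from the source:

```python
import sympy as sp

# Symbolic check (ours) that the bounding expression built from the
# schematic ansatz W = P + c*log(Q) becomes polynomial after
# multiplication by Q**2.  Phi, P, Q below are illustrative stand-ins.
x1, x2, c, eps, L = sp.symbols('x1 x2 c epsilon ell')
f = sp.Matrix([x2, (1 - x1**2) * x2 - x1])   # van der Pol drift, mu = 1
Phi = x1**2
P = x1**4 + x2**4                            # stand-in polynomial part
Q = 2*x1**2 + x1*x2 + x2**2                  # positive definite quadratic
W = P + c * sp.log(Q)

grad_W = sp.Matrix([sp.diff(W, v) for v in (x1, x2)])
drift = (f.T * grad_W)[0]
diffusive = (eps**2 / 2) * sp.diff(W, x2, 2)  # noise acts on x2 only

constraint = Phi - L + drift + diffusive      # rational in x1, x2
cleared = sp.cancel(sp.together(Q**2 * constraint))
num, den = sp.fraction(cleared)
print(sp.simplify(den))   # 1: the cleared expression is a polynomial
```

The logarithm disappears upon differentiation, which is what makes this non-polynomial ansatz compatible with polynomial SoS constraints.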
Choosing the quadratic form $Q$.
If the coefficients of the positive quadratic form $Q$ (equivalently, the entries of $M$) are tuned at the same time as the other coefficients, then the SoS constraint is quadratic in the decision variables, as opposed to linear. This produces a non-convex optimization problem that is harder than a standard SDP and requires a bilinear solver. Here we keep the problem convex by fixing a non-optimal $Q$ in advance and tuning only the other coefficients in $W$. Even a non-optimal $Q$ provides a good lower bound at small $\varepsilon$ provided that, for reasons explained in §6, we have not only $Q(x) > 0$ but also $f\cdot\nabla Q > 0$ near the unstable fixed point. Quadratic $Q$ satisfying these two conditions can always be found if the point is a repeller but not if it is a saddle, which is why we have restricted ourselves to repellers. Further discussion of how to choose or optimize $Q$ is given in §6 and Appendix D.
Example 4: stochastic van der Pol oscillator with weak noise.
We have used the methods of this section to derive lower bounds on $\mathbb{E}_\varepsilon\Phi$ for the stochastic van der Pol oscillator (4.11). In the logarithmic ansatz (5.1) for $W$, we used a positive quadratic form $Q$ that satisfies $f\cdot\nabla Q > 0$ near the origin. Figure 5 shows the resulting lower bounds $L^\star_\varepsilon$ computed using the SoS program (5.6) for various degrees of the polynomial $P$. When the noise amplitude $\varepsilon$ is moderate or small, these bounds dramatically improve on the bounds of Figure 4 produced using purely polynomial $W$. At the smallest noise amplitudes considered, $L^\star_\varepsilon$ is indistinguishable from the deterministic local lower bounds found in §3.
6 Bounds on local attractors: the vanishing noise limit
In this section we propose a second method for bounding deterministic averages on a local attractor. While in §3 we did this by segmenting the phase space, here we do it by adding noise. Many dynamical systems are stochastically stable, in the sense that the vanishing noise limit of a stationary expectation, $\lim_{\varepsilon\to 0}\mathbb{E}_\varepsilon\Phi$, is equal to the deterministic average on a particular attractor [, ]. This is true in the van der Pol example, where the vanishing noise limit of $\mathbb{E}_\varepsilon\Phi$ is equal to $\overline{\Phi}$ on the limit cycle. Such correspondence can be exploited to bound $\overline{\Phi}$ on a local attractor: by proving bounds on $\mathbb{E}_\varepsilon\Phi$ that hold as $\varepsilon \to 0$, we obtain bounds that hold also for $\overline{\Phi}$. In essence, the vanishing noise limit preserves the attractor while destroying unstable invariant structures. This idea was proposed in [], though here we extend it by treating the limit $\varepsilon \to 0$ rigorously, rather than numerically.
We continue the analysis of §5, still assuming that the origin is a repelling fixed point, $\Phi(0) = 0$, and $\overline{\Phi} > 0$ on the attractor of interest. We suppose also that $\lim_{\varepsilon\to 0}\mathbb{E}_\varepsilon\Phi = \overline{\Phi}$, meaning that the zero-noise limit of the expectation converges to the deterministic time average. Rigorous statements about when this property holds can be found in [, ] and references therein.
Under these assumptions, we can obtain a lower bound that is tight for the local attractor by deriving a lower bound on $\lim_{\varepsilon\to 0}\mathbb{E}_\varepsilon\Phi$. To this end, we recall that although the polynomial inequality considered in §5 suffices to prove the lower bound, we are really interested in proving the integral inequality (4.6). It is proven in Appendix C, under mild assumptions on the stationary distribution $\rho_\varepsilon$, that (4.6) holds in the limit $\varepsilon \to 0$ if there exists any $W$ of the form (5.1) such that $Q^2(\Phi - L + f\cdot\nabla P) + c\,Q\,f\cdot\nabla Q \ge 0$. Dividing this relation by $Q$ (which is constructed to be positive definite) and rearranging gives the sufficient condition
$$Q\,\big(\Phi - L + f\cdot\nabla P\big) + c\,f\cdot\nabla Q \;\ge\; 0 \quad \text{for all } x. \tag{6.1}$$
Since this weakens the bound by an arbitrarily small amount, we can let the bound be implied by a strict inequality and then relax it to a non-strict one. The lower bound we seek, $L^\star_0$, is thus returned by the SoS optimization problem
$$L^\star_0 = \max_{L,\,P,\,c}\; L \quad \text{such that} \quad Q\,\big(\Phi - L + f\cdot\nabla P\big) + c\,f\cdot\nabla Q \ \text{is SoS}. \tag{6.2}$$
Like the finite-noise program (5.6), the above program can be made convex by fixing in advance the positive definite quadratic form $Q$. The chosen $Q$ must satisfy $f\cdot\nabla Q > 0$ near the origin. That is, it must satisfy
$$x^{\mathsf T}\big(A^{\mathsf T} M + M A\big)\,x \;>\; 0 \quad \text{for all } x \ne 0, \tag{6.3}$$
where $A$ denotes the matrix of the linearized dynamics $\dot{x} \approx A x$ near the origin. This condition is needed because the SoS constraint becomes
$$c\,f\cdot\nabla Q - L\,Q \;\ge\; 0$$
at leading order near the origin. The only way the above condition can be satisfied for $L$ larger than zero is if its first term is positive. Appendix D gives a way of constructing an admissible $Q$ when the unstable fixed point is repelling. When the unstable fixed point is a saddle, it is not generally possible to satisfy (6.3) for positive definite $Q$.
Example 5: deterministic bounds for the van der Pol limit cycle—vanishing noise formulation.
To demonstrate the methods of this section, we have computed lower bounds on the vanishing noise limit of $\mathbb{E}_\varepsilon\Phi$ for the van der Pol system using the SoS program (6.2). The results serve also as deterministic lower bounds on $\overline{\Phi}$ for all trajectories approaching the limit cycle (strictly speaking, this conclusion requires proving that $\lim_{\varepsilon\to 0}\mathbb{E}_\varepsilon\Phi = \overline{\Phi}$, which is outside our present scope). Bounds of the latter type appear also in §3, computed differently using the SoS formulation (3.4).
Figure 6 shows the bounds obtained by fixing the degree of the polynomial $P$ to 12 for the three different quadratic forms $Q$ defined in Table 1. Each choice of $Q$ satisfies the condition (6.3) over a different range of $\mu$, and the failure of this condition is why each choice yields poor bounds outside its range. Taking the best of the three lower bounds at each $\mu$ gives a lower bound on $\overline{\Phi}$ within 1% of the value found by numerical integration over the entire range of $\mu$ considered.
7 Future directions
Despite our success in computing bounds for the van der Pol oscillator, the study of more complicated dynamical systems presents several obstacles that require further investigation. We discuss some of these below and put forward some preliminary ideas for future improvements.
7.1 Bounds for high-dimensional systems
The techniques we have developed in this paper can in principle be applied to systems of arbitrarily high but finite dimension. However, computing tight bounds typically requires the use of polynomial storage functions of large degree. Since the size of the SDP required to determine the existence of a SoS decomposition for a polynomial of degree $2d$ in $n$ variables grows combinatorially—the Gram matrix has dimension $\binom{n+d}{d}$ [, Theorem 3.3]—relatively large SDPs must be solved even for a low-dimensional system such as the van der Pol oscillator. Moreover, the SDPs seem to be consistently ill-conditioned, and more so for larger systems. Our present methods are therefore practical only for systems of moderate dimension.
One way to improve the numerical conditioning of the SDPs is to rescale the system. A preliminary investigation showed that rescaling the van der Pol system so that the limit cycle is contained in the box dramatically improves the numerical conditioning, and bounds very similar to those reported above can be obtained without the need for multiple-precision solvers. Researchers studying other aspects of dynamics using SDPs have had similar success rescaling the relevant dynamics to lie in . However, the appropriate rescaling is not generally clear a priori.
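The effect of scaling on conditioning can be illustrated in miniature. The sketch below compares the condition number of the moment matrix of the monomial basis 1, x, …, x⁵ on a unit interval with that on a ten-times-larger interval; the basis and intervals are hypothetical choices for illustration, not the actual SDPs from our computations:

```python
import numpy as np


def moment_matrix(half_width, max_deg):
    """Moment matrix M_ij = average of x**(i+j) over [-L, L] for the monomial
    basis 1, x, ..., x**max_deg; odd moments vanish by symmetry."""
    L = half_width
    M = np.zeros((max_deg + 1, max_deg + 1))
    for i in range(max_deg + 1):
        for j in range(max_deg + 1):
            if (i + j) % 2 == 0:
                M[i, j] = L ** (i + j) / (i + j + 1)
    return M


cond_unit = np.linalg.cond(moment_matrix(1.0, 5))
cond_big = np.linalg.cond(moment_matrix(10.0, 5))
print(cond_unit, cond_big)
```

Monomials of different degrees have wildly different magnitudes on the larger interval, and the condition number grows by many orders of magnitude; rescaling the dynamics into the unit box removes exactly this disparity.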
There are several possible ways to reduce the computational cost of constructing bounds. One option is to exploit special structures in the SDPs such as symmetries or sparsity (e.g. [, ]). It might also be profitable to exploit special structure of the underlying dynamics, such as conserved quantities. Finally, it is sometimes possible to work with a lower-dimensional truncation of a high-dimensional system and bound the errors introduced by the truncation (e.g. [, , ]).
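As a concrete instance of the symmetry savings: the van der Pol system is invariant under (x, y) ↦ (−x, −y), so if the quantity being bounded shares this symmetry, the auxiliary polynomial can be restricted to even total degree, roughly halving the monomial basis. A quick count at degree 12 (standard combinatorics):

```python
from math import comb


def count_monomials(n_vars, max_deg, even_only=False):
    """Count monomials of total degree <= max_deg in n_vars variables,
    optionally keeping only those of even total degree. Monomials of exact
    degree k number comb(k + n_vars - 1, n_vars - 1)."""
    step = 2 if even_only else 1
    return sum(comb(k + n_vars - 1, n_vars - 1)
               for k in range(0, max_deg + 1, step))


all_monos = count_monomials(2, 12)               # full basis
even_monos = count_monomials(2, 12, even_only=True)  # symmetry-reduced basis
print(all_monos, even_monos)
```

Since SDP cost grows rapidly with the basis size, cutting the basis from 91 to 49 monomials gives a substantial saving even in this two-dimensional example.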
7.2 Bounds for systems governed by partial differential equations
The methods developed here apply only to finite-dimensional systems, and more work is needed to extend them to partial differential equations (PDEs). One idea is to project the PDE variables onto a finite Galerkin basis. Bounds can then be constructed using the finite-dimensional system, provided that the influence of the unprojected component can be controlled. This approach has been used successfully for nonlinear stability analysis of a fluid dynamical PDE [, , ]. A second idea is to bound integrals of the PDE variables directly using the dissipation inequalities proposed in []. In essence, these inequalities are integral constraints that replace the polynomial constraints of Proposition 1 when a finite-dimensional system is replaced by a PDE. Despite recent advances in the numerical implementation of integral inequalities [], however, this technique has so far been applied only to simple examples.
7.3 Tight bounds for systems with saddle points
If deterministic bounds are spoiled by a saddle point that is not embedded in the attractor of interest, the -procedure can be used to remove a set containing that is disjoint from the attractor. However, it can be difficult to establish that is indeed separate from the attractor. If is embedded in the attractor, alternative methods must be found. The vanishing-noise approach of §§5–6 seems promising, but the present formulation works only for repelling points, not saddle points. This is because (6.4) requires that the polynomial increases along all trajectories near the unstable fixed point, but is monotonic and cannot increase simultaneously along trajectories on both the stable and unstable manifolds of .
Similarly, the logarithmic ansatz (5.1) is not suitable when stochastic bounds with weak (but finite) noise are spoiled by a saddle point. In fact, the term in each of the inequalities in (4.8) has opposite signs along the stable and unstable manifolds of unless changes rapidly, and any negative contribution must be balanced by large Laplacians. This requires polynomials of large degree, making the SoS programs intractable in practice.
One possible solution is to find an ansatz for that, like (5.1) for repelling fixed points, lets stochastic bounds remain tight as by developing appropriate boundary layers. The following proposition suggests a way to deduce the ideal scaling of in this limit.
Proposition 4. Consider the inequality
that is a sufficient condition for the lower bound . If is continuous and the stationary distribution is piecewise continuous, then the above inequality can be satisfied for the perfect lower bound only if wherever .
This statement is proven by adding , which holds because is a perfect bound, and expression (4.4) to find . Since is continuous and non-negative, is possible only if wherever , as claimed. An analogous statement holds for the perfect upper bound.
Proposition 4 suggests that as is improved and gets closer to its perfect value of , the inequality gets closer in some sense to being an equality. This motivates us to consider solutions to the equation
where and obeys the decay condition (4.5). A solution exists only when , as seen by taking the expectation of the equality and using (4.4). Asymptotic analysis of in the above equality as might suggest a good ansatz for in the corresponding inequality.
7.4 Assessing the quality of numerical bounds
Proposition 4 also provides a good way of assessing a posteriori whether the stochastic bounds obtained with SoS optimization are nearly sharp. This is useful, for instance, when data from numerical simulations are not available for comparison. For example, if a lower bound is close to the exact value of , should be close to the solution of (7.2) whenever . In addition, an argument similar to the proof of Proposition 4 for deterministic bounds shows that if an upper or lower bound is perfect, then
for any trajectory on the attractor. If a good lower bound is proven using , for instance, then should be very close to on the attractor.
For illustration, we have solved the lower bound program (6.2) for the van der Pol oscillator with of degree 6 and degree 12. The degree-6 lower bound of is rather poor, while the degree-12 bound of is nearly tight. Figure 7 shows the variation of along the limit cycle in each case and compares it to the value of along the limit cycle. The and curves match very closely in the degree-12 case but not in the degree-6 case.
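The diagnostic itself is cheap to script: integrate onto the limit cycle and monitor the bounding expression along it. The sketch below uses a placeholder quadratic storage function V = (x² + y²)/2, observable Φ = x², and μ = 1 (all assumptions made for illustration; the actual degree-12 polynomial from the SoS solve is not reproduced here), so the residual varies strongly along the cycle instead of hugging a constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # illustrative parameter value


def vdp(t, s):
    x, y = s
    return [y, MU * (1 - x**2) * y - x]


# Integrate well past transients so the sampled states lie on the limit cycle.
t = np.linspace(0.0, 100.0, 10001)
sol = solve_ivp(vdp, (0.0, 100.0), [2.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-9)
x, y = sol.y[:, t > 80.0]

# Vdot + Phi along the cycle; for a sharp bound this would be nearly constant.
vdot = x * y + y * (MU * (1 - x**2) * y - x)
resid = vdot + x**2
spread = resid.max() - resid.min()
print(spread)
```

With a near-optimal storage function the printed spread would be small; the large spread here reflects only the crudeness of the placeholder V.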
7.5 Formulating computer-assisted proofs
The bounds we have computed for the van der Pol system all rely on the numerical solution of SDPs, so they are subject to roundoff error. It is unlikely that roundoff error has led to any invalid bounds in our calculations because we have solved the SDPs using multiple-precision arithmetic and checked the results by increasing the precision. Nonetheless, such computations cannot be considered rigorous to the standard of computer-assisted proofs. Two different methods have been proposed for obtaining rigorous results from numerical SDP solutions. The first method is to project an approximate numerical solution onto an exact solution in terms of rational numbers [, ]. The second method is to use perturbation analysis, made rigorous by interval arithmetic, to construct a small interval around the approximate numerical solution in which the exact solution is guaranteed to lie. This approach has been implemented in the software VSDP [], but the relevant SDPs would need to be constructed manually since there is no sum-of-squares parser available that incorporates interval arithmetic.
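One small ingredient of the rationalization approach can already be implemented with the standard library: once a candidate Gram matrix has been rounded to rational entries, its positive semidefiniteness can be certified in exact arithmetic by an LDL-style elimination with `fractions.Fraction`. This is a sketch of the general technique, not of any particular software's implementation:

```python
from fractions import Fraction


def is_psd_exact(Q):
    """Certify positive semidefiniteness of a symmetric rational matrix by
    exact Gaussian elimination: PSD iff every pivot is >= 0 and any row with
    a zero pivot is identically zero in the remaining submatrix."""
    A = [[Fraction(v) for v in row] for row in Q]
    n = len(A)
    for k in range(n):
        if A[k][k] < 0:
            return False
        if A[k][k] == 0:
            # zero pivot: the rest of row k must vanish, else Q is indefinite
            if any(A[k][j] != 0 for j in range(k + 1, n)):
                return False
            continue
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return True


print(is_psd_exact([[2, 1], [1, 2]]))  # True
print(is_psd_exact([[1, 2], [2, 1]]))  # False
```

Because every operation is on exact rationals, a `True` here is a genuine certificate, free of roundoff; the remaining (harder) step in a full proof is producing a rational matrix that exactly reproduces the target polynomial.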
In this work we have presented computer-assisted methods for deriving bounds on average quantities in both deterministic and stochastic dynamical systems using sum-of-squares programming. We have given particular attention to proving bounds that apply only to trajectories approaching a particular attractor. One method is to use the -procedure to omit segments of phase space on which the bounds do not need to hold. Another strategy is to remove unstable invariant structures by adding noise to the system. This idea, proposed previously in [], has been extended here by analyzing the weak and vanishing noise cases when the unstable structures to be omitted are repelling fixed points. These methods give improved bounds for weak but finite noise, and also in the rigorous limit of vanishingly weak noise.
Our methods have worked well when applied to the van der Pol oscillator. The best deterministic and stochastic bounds proven throughout are summarized in Figure 8. In the deterministic case, we obtained upper and lower bounds on the infinite-time average of over the limit cycle that are all within 1% of the “true” values found by numerical integration. In the stochastic case, bounding the stationary expectation of at a variety of noise strengths, we obtained upper bounds all within 1% and lower bounds all within 10% of the “true” values found by solving the Fokker-Planck equation numerically.
Moving to dynamical systems other than the van der Pol oscillator, whether the methods we have described can yield similarly tight deterministic and stochastic bounds depends on the details of those systems. If the dimension of a system is small enough for SoS optimization to be computationally feasible, we expect that tight global bounds on deterministic averages can be obtained using the methods of §2. In fact, since the first draft of this work, the bounding techniques we have presented for deterministic systems have been successfully applied to the design of control systems for fluid flows [, ]. With stochastic forcing that is not too weak, we expect that fairly tight bounds on stationary expectations can also be obtained using the methods of §4. As the noise strength decreases, the tendency of unstable invariant structures to spoil tight bounds can be combated by the methods of §5 if these structures are repelling fixed points. New methods will be needed to maintain tight bounds as noise becomes weak in systems with other unstable structures, including saddle points. Lastly, the methods of §3 can be used to bound averages on trajectories within an absorbing domain and, in particular, on a single attractor. This much is true for any dynamical system, but whether these bounds can be tight depends on the nature of the attractor. For instance, bounds for a chaotic attractor must apply not just to generic chaotic trajectories but also to all trajectories embedded within the attractor, such as saddle points and unstable orbits. What can be deduced about chaotic attractors is of particular interest if the methods presented here are to be applied to complex systems of physical and engineering relevance.
Appendix A Towards rigorous bounds
As explained in §7.5, techniques are available to control the roundoff errors in the solution of the SDPs and compute rigorous bounds. However, these may be impractical for large problems. We are also unaware of any parsers for SoS programs that incorporate rigorous computations. For these reasons, we illustrate a method to produce rigorous bounds that can be implemented with existing software. For definiteness, we consider the upper bound problem in (2.9).
The solution of the SoS program consists of a polynomial storage function , a bound , a vector of monomials and a positive semidefinite, symmetric matrix such that
In practice, the decomposition is only approximate, and the error
is non-zero. However, according to Theorem 4 in [], the polynomial is still certifiably non-negative if
where is the coefficient of of largest magnitude, is the smallest eigenvalue of and denotes the dimension of .
Unfortunately, the optimal solution of a SoS problem rarely satisfies this condition, since for the optimal []. Consequently, we suggest computing a slightly suboptimal by manually decreasing an initial guess using a sequence of feasibility problems, checking after each step that (A.3) holds using rigorous computations such as interval arithmetic [, ]. The steps are outlined in Algorithm 1.
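The per-step check is inexpensive to script once the Gram matrix and the error coefficients are in hand. The sketch below uses a univariate toy problem and takes the condition in the assumed form λ_min(Q) ≥ N · max|e_α| with N = dim(z); the precise constant should be taken from the cited theorem:

```python
import numpy as np


def certified_sos(p_coeffs, Q, z_degs):
    """A-posteriori SoS check for a univariate polynomial p: with error
    e = p - z^T Q z, conclude SoS if lambda_min(Q) >= N * max|e_alpha|,
    where N = dim(z). The constant N is an assumed reading of (A.3)."""
    N = len(z_degs)
    deg = 2 * max(z_degs)
    g = np.zeros(deg + 1)  # coefficients of z^T Q z by total degree
    for i, di in enumerate(z_degs):
        for j, dj in enumerate(z_degs):
            g[di + dj] += Q[i, j]
    e = np.array(p_coeffs, dtype=float) - g
    lam_min = np.linalg.eigvalsh(Q)[0]
    return bool(lam_min >= N * np.max(np.abs(e)))


# p(x) = 1 + 2x^2 + x^4 with a Gram matrix perturbed off the exact one:
p = [1, 0, 2, 0, 1]
Q_small = np.array([[1.0, 0.0, 0.1],
                    [0.0, 2.0, 0.0],
                    [0.1, 0.0, 1.0]])  # small residual: certificate survives
Q_large = np.array([[1.0, 0.0, 0.5],
                    [0.0, 2.0, 0.0],
                    [0.5, 0.0, 1.0]])  # larger residual: check fails
ok = certified_sos(p, Q_small, [0, 1, 2])
bad = certified_sos(p, Q_large, [0, 1, 2])
print(ok, bad)
```

The first matrix passes because its residual is small relative to λ_min(Q), while the second fails even though Q itself is still positive definite, illustrating why slightly suboptimal solutions with healthy eigenvalue margins are needed.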
Figure 9 compares some preliminary bounds obtained with Algorithm 1 for the van der Pol oscillator to the optimal ones. The results are only slightly suboptimal; moreover, the feasibility problems were better conditioned than the full SoS optimization and could be solved with standard double-precision solvers such as SeDuMi []. We remark that, given the preliminary stage of this investigation, we avoided the technical step of computing rigorous roundoff errors for the eigenvalues of . Instead, we checked condition (A.3) using arbitrary-precision arithmetic, increasing the precision until changed by less than 1%. Although our results are “certified” rather than fully rigorous, we expect that similar results would be obtained when is computed rigorously. Moreover, we stress that Algorithm 1 was not very robust and required careful tuning (e.g. in the choice of decimal figures to retain). A thorough investigation and resolution of these numerical issues is left for future work.
Appendix B Construction of an absorbing domain for the van der Pol oscillator
To show that is an absorbing domain for the van der Pol oscillator for , let us reverse the direction of time in (1.3) and consider the energy of the system , for which the origin is a stable fixed point. One has
meaning that when . Therefore, any contour of which is contained in the strip defines the boundary of a region of attraction of the origin for the time-reversed oscillator. One concludes that in the original system all orbits (except of course the fixed point at the origin) will leave the ball if , i.e. is an absorbing domain.
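The sign computation in this appendix can be verified symbolically. The sketch below assumes the standard first-order form ẋ = y, ẏ = μ(1 − x²)y − x and the energy E = (x² + y²)/2 (assumed forms of the elided symbols); it shows Ė = μ(1 − x²)y² along the forward flow, whose sign flips under time reversal and is then nonpositive on the strip |x| ≤ 1:

```python
import sympy as sp

x, y, mu = sp.symbols('x y mu', real=True)

# Assumed van der Pol vector field: xdot = y, ydot = mu*(1 - x**2)*y - x
f = (y, mu * (1 - x**2) * y - x)

# Assumed energy E = (x^2 + y^2)/2; compute its derivative along the flow.
E = (x**2 + y**2) / 2
Edot = sp.expand(sp.diff(E, x) * f[0] + sp.diff(E, y) * f[1])
print(Edot)
```

The output reduces to μ(1 − x²)y², so along time-reversed trajectories Ė = −μ(1 − x²)y² ≤ 0 whenever |x| ≤ 1, as used in the contour argument above.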