# Phase Reduction Method for Strongly Perturbed Limit-Cycle Oscillators

Wataru Kurebayashi, Sho Shirasaka, and Hiroya Nakao
Graduate School of Information Science and Engineering, Tokyo Institute of Technology, O-okayama 2-12, Meguro, Tokyo 152-8552, Japan
July 3, 2019
###### Abstract

The phase reduction method for limit-cycle oscillators subjected to weak perturbations has significantly contributed to theoretical investigations of rhythmic phenomena. We here propose a generalized phase reduction method that is also applicable to strongly perturbed limit-cycle oscillators. The fundamental assumption of our method is that the perturbations can be decomposed into a large-amplitude component varying slowly as compared to the amplitude relaxation time and remaining weak fluctuations. Under this assumption, we introduce a generalized phase parameterized by the slowly varying large-amplitude component and derive a closed equation for the generalized phase describing the oscillator dynamics. The proposed method enables us to explore a broader class of rhythmic phenomena, in which the shape and frequency of the oscillation may vary largely because of the perturbations. We illustrate our method by analyzing the synchronization dynamics of limit-cycle oscillators driven by strong periodic signals. It is shown that the proposed method accurately predicts the synchronization properties of the oscillators, while the conventional method does not.

Keywords: limit cycle, phase reduction, neuron

Rhythmic phenomena are ubiquitous in nature and of great interest in many fields of science and technology, including chemical reactions, neural networks, genetic circuits, lasers, and structural vibrations [1-8]. These rhythmic phenomena often result from complex interactions among individual rhythmic elements, typically modeled as limit-cycle oscillators. In analyzing such systems, the phase reduction method [1-6] has been widely used and is considered an essential tool. It systematically approximates the high-dimensional dynamical equation of a perturbed limit-cycle oscillator by a one-dimensional reduced phase equation, with just a single phase variable representing the oscillator state.

A fundamental assumption of the conventional phase reduction method is that the applied perturbation is sufficiently weak; hence, the shape and frequency of the limit-cycle orbit remain almost unchanged. This assumption hinders application of the method to strongly perturbed oscillators, whose orbit shapes and frequencies can differ significantly from those in the unperturbed cases. Indeed, strong coupling can destabilize synchronized states of oscillators that are stable in the weak-coupling limit [9]. Strong coupling can further lead to nontrivial collective dynamics such as the quorum-sensing transition [8], amplitude death and bistability [9], and collective chaos [10]. Although not all of these collective phenomena are the subject of this study, our formulation gives insight into a certain class of them, e.g., bistability between phase-locked and drifting states [9]. The assumption of weak perturbations is also an obstacle to modeling real-world systems, which are often subjected to strong perturbations.

Although the phase reduction method has recently been extended to stochastic [11], delay-induced [12], and collective [13] oscillations, these extensions are still limited to the weakly perturbed regime. To analyze a broader class of synchronization phenomena exhibited by strongly driven or interacting oscillators, the conventional theory should be extended. This Letter proposes an extension of the phase reduction method to strongly perturbed limit-cycle oscillators, which enables us to derive a simple generalized phase equation that quantitatively describes their dynamics. We use it to analyze the synchronization dynamics of limit-cycle oscillators subjected to strong periodic forcing, which cannot be treated appropriately by the conventional method.

We consider a limit-cycle oscillator whose dynamics depends on a time-varying parameter I(t) representing general perturbations, described by

 ˙X(t) = F(X(t), I(t)),  (1)

where X(t) is the oscillator state and F(X, I) is an I-dependent vector field representing the oscillator dynamics. For example, X(t) and I(t) can represent the state of a periodically firing neuron and the injected current, respectively [4, 6]. In this Letter, we introduce a generalized phase θ(t) of the oscillator, which depends on the parameter I(t). In defining the phase θ(t), we require that the oscillator state X(t) can be accurately approximated by using θ(t) with sufficiently small error, and that θ(t) increases at a constant frequency when the parameter I remains constant. The former requirement is a necessary condition for the phase reduction, i.e., for deriving a closed equation for the generalized phase, and the latter enables us to derive an analytically tractable phase equation.
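As a concrete illustration of Eq. (1), the following minimal sketch (Python with NumPy; the model and parameter values are illustrative, not the MSL oscillator used later in this Letter) integrates a hypothetical Stuart-Landau-type oscillator whose frequency ω(I) = 1 + I depends on a time-varying parameter:

```python
import numpy as np

def f(state, I):
    """Toy Stuart-Landau-type vector field F(X, I): unit-circle limit
    cycle whose frequency omega(I) = 1 + I depends on the parameter I."""
    x, y = state
    r2 = x * x + y * y
    w = 1.0 + I
    return np.array([x - w * y - r2 * x,
                     w * x + y - r2 * y])

def rk4_step(state, I, dt):
    """One classical Runge-Kutta step for X' = F(X, I), with I frozen over dt."""
    k1 = f(state, I)
    k2 = f(state + 0.5 * dt * k1, I)
    k3 = f(state + 0.5 * dt * k2, I)
    k4 = f(state + dt * k3, I)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(state0, I_of_t, t_end, dt=1e-3):
    """Integrate the oscillator under a time-varying parameter I(t)."""
    state, t = np.array(state0, float), 0.0
    while t < t_end:
        state = rk4_step(state, I_of_t(t), dt)
        t += dt
    return state

# Example: slowly varying parameter q(eps*t) with eps = 0.01.
X_final = simulate([0.3, 0.0], lambda t: 0.5 * np.sin(0.01 * t), 200.0)
```

In this toy model the radial dynamics is independent of I, so any initial condition relaxes onto the unit circle regardless of how I(t) varies; only the rotation speed responds to the parameter.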

To define such a θ(t), we suppose that I is constant until further notice. We assume that Eq. (1) possesses a family of stable limit-cycle solutions X₀(θ, I) with period T(I) and frequency ω(I) = 2π/T(I) for I ∈ A, where A is an open subset of parameter values (e.g., an interval between two bifurcation points). An oscillator state on the limit cycle with parameter I can be parameterized by a phase θ ∈ [0, 2π) as X₀(θ, I). Generalizing the conventional phase reduction method [1-5], we define the phase θ such that, as the oscillator state evolves along the limit cycle, the corresponding phase increases at a constant frequency as ˙θ = ω(I) for each I ∈ A. We assume that X₀(θ, I) is continuously differentiable with respect to θ and I.

We consider an extended phase space consisting of the oscillator state X and the parameter I, as depicted schematically in Fig. 1 (a). We define C as a cylinder formed by the family of limit cycles X₀(θ, I) for θ ∈ [0, 2π) and I ∈ A, and define U as a neighborhood of C. For each I ∈ A, we assume that any orbit starting from an arbitrary point in U asymptotically converges to the limit cycle on C. We can then extend the definition of the phase into U, as in the conventional method [1-5], by introducing the asymptotic phase and isochrons around the limit cycle for each I. Namely, we can define a generalized phase function Θ(X, I) such that Θ(X, I) is continuously differentiable with respect to X and I, and ∂Θ(X, I)/∂X ⋅ F(X, I) = ω(I) holds everywhere in U, where ∂Θ(X, I)/∂X is the gradient of Θ(X, I) with respect to X and the dot denotes an inner product. This is a straightforward generalization of the conventional asymptotic phase [1-5] and guarantees that the phase of any orbit in U always increases with a constant frequency ω(I) at each I. For any oscillator state X₀(θ, I) on C, Θ(X₀(θ, I), I) = θ holds. In general, the origin of the phase can be arbitrarily defined for each I as long as it is continuously differentiable with respect to I. The assumptions that X₀(θ, I) and Θ(X, I) are continuously differentiable can be further relaxed for a certain class of oscillators, such as those considered in Ref. [14].

Now suppose that the parameter I(t) varies with time. To define a phase θ(t) that approximates the oscillator state with sufficiently small error, we assume that I(t) can be decomposed into a slowly varying component and remaining weak fluctuations as I(t) = q(ϵt) + σp(t). Here, the parameters ϵ and σ are assumed to be sufficiently small so that q(ϵt) varies slowly as compared to the relaxation time of a perturbed orbit to the cylinder C of the limit cycles, which we assume to be O(1) without loss of generality, and so that the oscillator state always remains in a close neighborhood of C, i.e., X(t) ∈ U holds (see Supplementary Information). We also assume that q(ϵt) is continuously differentiable with respect to t. Note that the slow component q(ϵt) itself does not need to be small.

Using the phase function Θ(X, I), we introduce a generalized phase θ(t) of the limit-cycle oscillator (1) as θ(t) = Θ(X(t), q(ϵt)). This definition guarantees that θ(t) increases at a constant frequency when q remains constant, and leads to a closed equation for θ(t). Expanding Eq. (1) in σ as ˙X = F(X, q(ϵt)) + σ G(X, q(ϵt)) p(t) + O(σ²) and using the chain rule, we can derive ˙θ = ω(q(ϵt)) + σ ∂Θ(X, I)/∂X ⋅ G(X, q(ϵt)) p(t) + ϵ ∂Θ(X, I)/∂I ⋅ ˙q(ϵt) + O(σ²), where G(X, I) is a matrix whose (i, j)-th element is given by ∂F_i(X, I)/∂I_j, ∂Θ(X, I)/∂I is the gradient of Θ(X, I) with respect to I, and ˙q(ϵt) denotes dq(s)/ds evaluated at s = ϵt.

To obtain a closed equation for θ(t), we use the lowest-order approximation in σ and ϵ, i.e., X(t) = X₀(θ(t), q(ϵt)) + O(σ, ϵ). Then, by defining a phase sensitivity function Z(θ, I) = ∂Θ(X, I)/∂X |_{X = X₀(θ, I)} and two other sensitivity functions ζ(θ, I) = G(X₀(θ, I), I)⊤ Z(θ, I) and ξ(θ, I) = ∂Θ(X, I)/∂I |_{X = X₀(θ, I)}, we can obtain a closed equation for the oscillator phase as

 ˙θ(t) = ω(q(ϵt)) + σ ζ(θ, q(ϵt))⋅p(t) + ϵ ξ(θ, q(ϵt))⋅˙q(ϵt) + O(σ², ϵ², σϵ),  (3)

which is the generalized phase equation that we propose in this study. The first three terms on the right-hand side of Eq. (3) represent the instantaneous frequency of the oscillator, the phase response to the weak fluctuations σp(t), and the phase response to deformation of the limit-cycle orbit caused by the slow variation in q(ϵt), respectively, all of which depend on the slowly varying component q(ϵt).
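The structure of Eq. (3) can be checked numerically on a toy model. For a hypothetical Stuart-Landau-type oscillator with ω(I) = 1 + I, radial isochrons, and an orbit shape independent of I (an illustrative choice, not the MSL model of this Letter), the sensitivity functions reduce to ζ = dω/dI = 1 and ξ = 0, so Eq. (3) with σ = 0 predicts ˙θ = ω(q(ϵt)) exactly. The sketch below (Python with NumPy) compares this prediction with a direct simulation:

```python
import numpy as np

eps = 0.05                      # slow-variation rate of q(eps*t)
q = lambda s: 0.5 * np.sin(s)   # slowly varying component (need not be small)

def f(state, I):
    # Toy oscillator: unit limit cycle, frequency omega(I) = 1 + I,
    # radial isochrons, so Theta(X, I) = atan2(y, x) exactly.
    x, y = state
    r2 = x * x + y * y
    w = 1.0 + I
    return np.array([x - w * y - r2 * x, w * x + y - r2 * y])

def rk4(state, I, dt):
    k1 = f(state, I); k2 = f(state + 0.5 * dt * k1, I)
    k3 = f(state + 0.5 * dt * k2, I); k4 = f(state + dt * k3, I)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, t_end = 1e-3, 100.0
X = np.array([1.0, 0.0])        # start on the limit cycle
theta_full, prev = 0.0, 0.0
n = int(round(t_end / dt))
for i in range(n):
    X = rk4(X, q(eps * (i + 0.5) * dt), dt)   # I frozen at mid-step value
    ang = np.arctan2(X[1], X[0])
    theta_full += (ang - prev + np.pi) % (2 * np.pi) - np.pi  # unwrapped increment
    prev = ang

# Eq. (3) for this toy model: sigma = 0 and xi = 0 (orbit shape is
# I-independent), so theta_dot = omega(q(eps*t)) = 1 + q(eps*t),
# which integrates in closed form:
theta_pred = t_end + (0.5 / eps) * (1.0 - np.cos(eps * t_end))
```

Because the toy model is exactly reducible, the two phases agree up to integration error; for a generic oscillator the agreement would hold only to the stated order in σ and ϵ.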

To address the validity of Eq. (3) more precisely, let λ(I) denote the absolute value of the second largest Floquet exponent of the oscillator for a fixed I ∈ A, which characterizes the amplitude relaxation timescale of the oscillator (λ(I) > 0). As argued in the Supplementary Information, we can show that the error terms in Eq. (3) remain sufficiently small when ϵ ≪ λ(q(ϵt)) and σ ≪ λ(q(ϵt)), namely, when the orbit of the oscillator relaxes to the cylinder C sufficiently faster than the variations in I(t).
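The rate λ(I) can be computed numerically from the monodromy (one-period fundamental) matrix of the variational equation along the limit cycle. A minimal sketch for the illustrative Stuart-Landau-type oscillator with ω(I) = 1 + I introduced above (for which the amplitude Floquet exponent is −2 for every I), assuming NumPy:

```python
import numpy as np

I = 0.3                  # frozen parameter value (illustrative)
w = 1.0 + I              # omega(I) for the toy oscillator
T = 2.0 * np.pi / w      # period of the limit cycle

def jac(t):
    """Jacobian of the toy vector field on the cycle X0(t) = (cos wt, sin wt)."""
    x, y = np.cos(w * t), np.sin(w * t)
    return np.array([[1 - 3 * x * x - y * y, -w - 2 * x * y],
                     [w - 2 * x * y,         1 - x * x - 3 * y * y]])

# Integrate the variational equation dPhi/dt = J(t) Phi over one period
# with RK4 to obtain the monodromy matrix M = Phi(T).
n = int(round(T / 1e-3))
dt = T / n               # make the steps fit the period exactly
Phi = np.eye(2)
for i in range(n):
    t = i * dt
    k1 = jac(t) @ Phi
    k2 = jac(t + 0.5 * dt) @ (Phi + 0.5 * dt * k1)
    k3 = jac(t + 0.5 * dt) @ (Phi + 0.5 * dt * k2)
    k4 = jac(t + dt) @ (Phi + dt * k3)
    Phi = Phi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

mu = np.sort(np.abs(np.linalg.eigvals(Phi)))  # Floquet multipliers
lam = -np.log(mu[0]) / T                      # amplitude exponent lambda(I)
```

One multiplier is 1 (the neutral phase direction); the other gives λ(I) = −ln μ / T, here equal to 2 independently of I.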

Note that if we define the phase variable as θc(t) = Θ(X(t), q₀) with some constant q₀ ∈ A instead of q(ϵt), θc gives the conventional phase. Then, we obtain the conventional phase equation ˙θc = ω₀ + ζ(θc, q₀)⋅ΔI(t) with ω₀ = ω(q₀) and ΔI(t) = I(t) − q₀. Here, ω₀ is a natural frequency and ζ(θc, q₀) is the conventional phase sensitivity function at I = q₀ [2]. This equation is valid only when ΔI(t) is sufficiently small (i.e., when the deviation of q(ϵt) from q₀ is O(σ)). By using the near-identity transformation [15], we can show that the conventional equation is actually a low-order approximation of the generalized equation (3) (see Sec. III of Supplementary Information).

In practice, we need to calculate ω(I), ζ(θ, I), and ξ(θ, I) numerically from mathematical models or estimate them through experiments. We can show that the following relations hold (see Supplementary Information for the derivation):

 ξ(θ, I) = −(∂X₀(θ, I)/∂I)⊤ Z(θ, I),  (4)
 ξ(θ, I) = ξ(θ₀, I) − (1/ω(I)) ∫_{θ₀}^{θ} [ζ(θ′, I) − ¯ζ(I)] dθ′,  (5)
 ¯ζ(I) := (1/2π) ∫₀^{2π} ζ(θ, I) dθ = dω(I)/dI,  (6)

where ∂X₀(θ, I)/∂I is a matrix whose (i, j)-th element is given by ∂[X₀(θ, I)]_i/∂I_j, ξ(θ₀, I) is a constant, and ¯ζ(I) is the average of ζ(θ, I) with respect to θ over one period of oscillation. From mathematical models of limit-cycle oscillators, Z(θ, I) can be obtained numerically by the adjoint method for each I [5, 6], and then ζ(θ, I) and ξ(θ, I) can be computed from Z(θ, I) and Eqs. (4) and (5). Experimentally, Z(θ, I) and ζ(θ, I) can be measured by applying small impulsive perturbations to the oscillator, while ξ(θ, I) can be obtained by applying small stepwise perturbations to the parameter I.
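The adjoint computation of Z(θ, I) can be sketched as follows for the illustrative Stuart-Landau-type oscillator with ω(I) = 1 + I used above (not the MSL model; for that oscillator Z(θ) = (−sin θ, cos θ) is known analytically, which makes the sketch verifiable). The adjoint equation ˙Z = −J(t)⊤Z is integrated backward in time so that the amplitude component decays, and the result is normalized by Z ⋅ F = ω:

```python
import numpy as np

I = 0.3
w = 1.0 + I
T = 2.0 * np.pi / w

def X0(t):  # limit cycle of the toy oscillator
    return np.array([np.cos(w * t), np.sin(w * t)])

def F0(t):  # vector field on the cycle (r^2 = 1 there)
    x, y = X0(t)
    return np.array([-w * y, w * x])

def jac(t):  # Jacobian of the toy vector field on the cycle
    x, y = X0(t)
    return np.array([[1 - 3 * x * x - y * y, -w - 2 * x * y],
                     [w - 2 * x * y,         1 - x * x - 3 * y * y]])

def adj_rhs(t, Z):
    return -jac(t).T @ Z   # adjoint equation dZ/dt = -J(t)^T Z

# Backward integration from t = 5T to t = 0: backward in time the
# amplitude component of Z decays, leaving the periodic adjoint solution.
dt = T / 2000
n = 10000                  # 5 periods, 2000 steps per period
Z = np.array([1.0, 1.0])   # arbitrary initial covector
for i in range(n):
    t = 5 * T - i * dt
    h = -dt
    k1 = adj_rhs(t, Z); k2 = adj_rhs(t + 0.5 * h, Z + 0.5 * h * k1)
    k3 = adj_rhs(t + 0.5 * h, Z + 0.5 * h * k2); k4 = adj_rhs(t + h, Z + h * k3)
    Z = Z + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

Z *= w / (Z @ F0(0.0))     # normalization Z . F = omega at theta = 0
```

At θ = 0 the analytic sensitivity is Z = (0, 1), which the backward-integrated and normalized result reproduces.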

To test the validity of the generalized phase equation (3), we introduce an analytically tractable model, a modified Stuart-Landau (MSL) oscillator (see Ref. [16] and Fig. 1 for the definition and details). We numerically predict the phase of a strongly perturbed MSL oscillator by both the conventional and generalized phase equations, and compare them with direct numerical simulations of the original system. In applying the conventional phase reduction, we set q₀ = ⟨I(t)⟩, where ⟨⋅⟩ denotes the time average. In Fig. 1, we can confirm that the generalized phase equation (3) accurately predicts the generalized phase of the original system, while the conventional phase equation does not predict the conventional phase well because of the large variations in I(t).

As an application of the generalized phase equation (3), we analyze phase locking [17] of the system (1) to a periodically varying parameter I(t) with period Te and frequency Ω = 2π/Te, in which frequency tuning occurs, i.e., the average of the instantaneous frequency ω(q(ϵt)) over one period of the forcing is tuned to Ω. Although the averaging approximation [18] for the phase difference between the oscillator and the forcing is generally used to analyze phase locking [2, 17], we cannot directly apply it in the present case because the frequency ω(q(ϵt)) can vary largely with time. Thus, generalizing the conventional definition, we introduce the phase difference ψ(t) = θ(t) − Ωt − Φ(ϵt), with an additional term Φ(ϵt) to remove the large periodic variations in θ(t) due to q(ϵt), where Φ(ϵt) is a Te-periodic function defined as Φ(ϵt) = ∫₀ᵗ [ω(q(ϵs)) − Ω] ds. By virtue of this term, temporal variations in ψ(t) remain of the order O(σ, ϵ), i.e., ˙ψ(t) = O(σ, ϵ), which enables us to apply the averaging approximation to ψ(t).

Introducing a small parameter δ = O(σ, ϵ) representing the magnitude of the variations in ψ(t), one can derive a dynamical equation for ψ(t) of the form ˙ψ = δ f(ψ, t), where δ f(ψ, t) collects the O(σ, ϵ) terms of the right-hand side of Eq. (3) expressed in terms of ψ. Using first- and second-order averaging [18], we can introduce slightly deformed phase differences ψ₁(t) and ψ₂(t) satisfying ψ_{1,2}(t) = ψ(t) + O(δ) and obtain the first- and second-order averaged equations, ˙ψ₁ = δ ¯f₁(ψ₁) + O(δ²) and ˙ψ₂ = δ ¯f₁(ψ₂) + δ² ¯f₂(ψ₂) + O(δ³), where ¯f₁ and ¯f₂ are given by time averages of f over one period of the forcing. These averaged equations can be considered autonomous by neglecting the O(δ²) and O(δ³) terms, respectively. Averaged equations for the conventional phase equation can be derived similarly. Thus, if the averaged equation has a stable fixed point, phase locking is expected to occur. As demonstrated below, the first-order averaging of the generalized phase equation already predicts qualitative features of the phase-locking dynamics, while the second-order averaging gives more precise results when the parameter varies significantly.
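The first-order averaging step can be illustrated in the simpler conventional weak-forcing setting, with a hypothetical sensitivity function ζ(θ) = −sin θ − c cos θ and sinusoidal forcing p(t) = cos(Ωt) at 1:1 resonance (all choices here are illustrative, not taken from the MSL model). The averaged drift is Γ(ψ) = (1/2π) ∫₀^{2π} ζ(ψ + φ) cos φ dφ, which evaluates analytically to −(sin ψ + c cos ψ)/2; the sketch below (Python with NumPy) checks the numerical average against this closed form:

```python
import numpy as np

c = 0.5                                          # hypothetical shear parameter
zeta = lambda th: -np.sin(th) - c * np.cos(th)   # toy phase sensitivity

def gamma(psi, n=4096):
    """First-order averaged drift: average of zeta(psi + phi) * p(phi)
    over one forcing period, with p(phi) = cos(phi) (1:1 resonance)."""
    phi = 2.0 * np.pi * np.arange(n) / n
    return np.mean(zeta(psi + phi) * np.cos(phi))

psis = np.linspace(0.0, 2.0 * np.pi, 17)
gamma_num = np.array([gamma(p) for p in psis])
gamma_exact = -0.5 * (np.sin(psis) + c * np.cos(psis))
```

A stable fixed point of ˙ψ = Δ + σΓ(ψ) (with detuning Δ) then signals phase locking, exactly as in the averaged equations above; the generalized method replaces ζ and ω by their q-dependent counterparts before averaging.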

As an example, we again use the MSL oscillator and investigate its phase locking to the periodic forcing. Figure 3 shows the results of the numerical simulations. We apply four types of periodically varying parameters q(ϵt) and predict whether the oscillator exhibits phase locking to the periodically varying parameter (a small fluctuation σp(t) is also added for completeness). We derive averaged equations for the phase differences using the proposed and conventional methods, and compare the results with direct numerical simulations of the MSL oscillator. We find that our new method correctly predicts the stable phase-locking point already at first-order averaging, while the conventional method does not. In particular, the conventional method can fail to predict whether phase locking takes place at all, as shown in Figs. 3 (g) and (h), even after the second-order averaging. In this case, the exponential dependence of the frequency ω(I) on the parameter I is the main cause of the breakdown of the conventional method (see Sec. III of Supplementary Information for a discussion). Typical trajectories of the oscillator state are plotted on the cylinder C of limit cycles in the extended phase space, which shows that the oscillator state migrates over C synchronously with the periodic forcing. The trajectories are closed when phase locking occurs.

In summary, we proposed a generalized phase reduction method that enables us to theoretically explore a broader class of strongly perturbed limit-cycle oscillators. Although still limited to slowly varying perturbations with weak fluctuations, our method avoids the assumption of weak perturbations, which has been a major obstacle in applying the conventional phase reduction method to real-world phenomena. It will therefore facilitate further theoretical investigations of nontrivial synchronization phenomena of strongly perturbed limit-cycle oscillators [9, 10]. As a final remark, we point out that a phase equation similar to Eq. (3) has been postulated in a completely different context, namely, in the analysis of the geometric phase in dissipative dynamical systems [19]. This formal similarity may provide an interesting possibility of understanding the synchronization dynamics of strongly perturbed oscillators from a geometrical viewpoint.

Financial support by JSPS KAKENHI (25540108, 22684020), the CREST Kokubu project of JST, and the FIRST Aihara project of JSPS is gratefully acknowledged.

## References

• (1) A. T. Winfree, The Geometry of Biological Time (Springer, New York, 2001).
• (2) Y. Kuramoto, Chemical Oscillations, Waves and Turbulence (Dover, New York, 2003).
• (3) A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge, 2001).
• (4) F. C. Hoppensteadt and E. M. Izhikevich, Weakly Connected Neural Networks (Springer, New York, 1997).
• (5) G. B. Ermentrout and D. H. Terman, Mathematical Foundations of Neuroscience (Springer, New York, 2010).
• (6) E. Brown, J. Moehlis, and P. Holmes, Neural Comput. 16, 673–715 (2004).
• (7) K. Wiesenfeld, C. Bracikowski, G. James, and R. Roy, Phys. Rev. Lett. 65, 1749–1752 (1990); S. H. Strogatz, D. M. Abrams, A. McRobie, B. Eckhardt, and E. Ott, Nature 438, 43–44 (2005); I. Z. Kiss, C. G. Rusin, H. Kori, and J. L. Hudson, Science 316, 1886–1889 (2007).
• (8) J. Garcia-Ojalvo, M. B. Elowitz, and S. H. Strogatz, Proc. Natl. Acad. Sci. USA 101, 10955–10960 (2004); S. De Monte, F. d’Ovidio, S. Danø, and P. G. Sørensen, Proc. Natl. Acad. Sci. USA 104, 18377–18381 (2007); A. F. Taylor, M. R. Tinsley, F. Wang, Z. Huang, and K. Showalter, Science 323, 614–617 (2009); T. Danino, O. Mondragón-Palomino, L. Tsimring, and J. Hasty, Nature 463, 326–330 (2010).
• (9) D. G. Aronson, G. B. Ermentrout, and N. Kopell, Physica D 41, 403–449 (1990); R. E. Mirollo and S. H. Strogatz, J. Stat. Phys. 50, 245–262 (1990); D. Hansel, G. Mato, and C. Meunier, Neural Comput. 7, 307–337 (1995); I. Z. Kiss, W. Wang, and J. L. Hudson, J. Phys. Chem. B 103, 11433–11444 (1999); P. C. Bressloff and S. Coombes, Neural Comput. 12, 91–129 (2000); Y. Zhai, I. Z. Kiss, and J. L. Hudson, Phys. Rev. E 69, 026208 (2004).
• (10) V. Hakim and W. J. Rappel, Phys. Rev. A 46, 7347–7350 (1992); N. Nakagawa and Y. Kuramoto, Prog. Theor. Phys. 89, 313–323 (1993); H. Nakao and A. S. Mikhailov, Phys. Rev. E 79, 036214 (2009).
• (11) K. Yoshimura and K. Arai, Phys. Rev. Lett. 101, 154101 (2008); J.-N. Teramae, H. Nakao, and G. B. Ermentrout, Phys. Rev. Lett. 102, 194102 (2009); D. S. Goldobin, J.-N. Teramae, H. Nakao, and G. B. Ermentrout, Phys. Rev. Lett. 105, 154101 (2010).
• (12) V. Novičenko and K. Pyragas, Physica D 241, 1090–1098 (2012); K. Kotani, I. Yamaguchi, Y. Ogawa, Y. Jimbo, H. Nakao, and G. B. Ermentrout, Phys. Rev. Lett. 109, 044101 (2012).
• (13) Y. Kawamura, H. Nakao, K. Arai, H. Kori, and Y. Kuramoto, Phys. Rev. Lett. 101, 024101 (2008); Y. Kawamura, H. Nakao, and Y. Kuramoto, Phys. Rev. E 84, 046211 (2011).
• (14) E. M. Izhikevich, SIAM J. App. Math. 60, 1789–1804 (2000).
• (15) J. P. Keener, Principles of Applied Mathematics: Transformation and Approximation (Addison Wesley, Boston, 1988).
• (16) The modified Stuart-Landau (MSL) oscillator has a two-dimensional state variable and an I-dependent vector field F(X, I), obtained by modifying the Stuart-Landau oscillator so that the shape and frequency of its limit cycle depend on the parameter I.
• (17) G. B. Ermentrout, J. Math. Biol. 12, 327–342 (1981).
• (18) J. A. Sanders and F. Verhulst, Averaging Methods in Nonlinear Dynamical Systems (Springer-Verlag, New York, 1985).
• (19) T. B. Kepler and M. L. Kagan, Phys. Rev. Lett. 66, 847–849 (1991).

## Appendix A Derivation of the generalized phase equation

In this section, we give a detailed derivation of the generalized phase equation (3) of the main article, which takes into account the effect of amplitude relaxation of the oscillator state to the cylinder of limit cycles C. Our aim is to evaluate the order of the error terms in Eq. (3). Our argument here is based on a formulation similar to that of Ref. [11] by Goldobin et al., in which the effect of colored noise on limit-cycle oscillators is analyzed and an effective phase equation that accurately describes the oscillator state is derived by incorporating the effect of amplitude relaxation of the oscillator state to the unperturbed limit-cycle orbit.

As in the main article, we consider a limit-cycle oscillator whose dynamics depends on a time-varying parameter I(t) representing general perturbations, described by

 ˙X(t) = F(X(t), I(t)).  (7)

For simplicity, we assume that the state variable X is two-dimensional (N = 2), but the formulation can be straightforwardly extended to higher-dimensional cases.

Suppose that the parameter I is constant for the moment. As explained in the main article, we introduce an extended phase space and define a generalized phase Θ(X, I) and amplitude R(X, I) as functions of X and I in the neighborhood U of C. Here, R(X, I) gives the distance of the oscillator state X from the unperturbed stable limit cycle X₀(θ, I). For each constant value of I, as argued in the Supplementary Information of Ref. [11], we can define a phase Θ(X, I) and an amplitude R(X, I) such that

 ∂Θ(X, I)/∂X ⋅ F(X, I) = ω(I),  (8)
 ∂R(X, I)/∂X ⋅ F(X, I) = −λ(I) R(X, I),  (9)

where λ(I) is the absolute value of the second Floquet exponent of Eq. (7) for each I. We further assume that Θ(X, I) and R(X, I) are continuously differentiable with respect to X and I. Equations (8) and (9) guarantee that

 ˙θ = ω(I),  ˙r = −λ(I) r  (10)

always hold for each constant I. In the absence of perturbations, the amplitude r decays to 0 exponentially, and the phase θ increases at a constant rate.

Now we suppose that the parameter I(t) can vary with time. As explained in the main article, we decompose the parameter into a slowly varying component and remaining weak fluctuations as I(t) = q(ϵt) + σp(t). We define a phase θ(t) and an amplitude r(t) of the oscillator as follows:

 θ(t) = Θ(X(t), q(ϵt)),  (11)
 r(t) = R(X(t), q(ϵt)).  (12)

Since Θ(X, I) and R(X, I) are continuously differentiable with respect to X and I, we can derive the dynamical equations for θ(t) and r(t) as

 ˙θ = ∂Θ(X, I)/∂X |_(X, q(ϵt)) ⋅ dX(t)/dt + ∂Θ(X, I)/∂I |_(X, q(ϵt)) ⋅ dq(ϵt)/dt,  (13)
 ˙r = ∂R(X, I)/∂X |_(X, q(ϵt)) ⋅ dX(t)/dt + ∂R(X, I)/∂I |_(X, q(ϵt)) ⋅ dq(ϵt)/dt.  (14)

Plugging I(t) = q(ϵt) + σp(t) into Eq. (7) and expanding it to the first order in σ, we can derive

 ˙X = F(X, q(ϵt)) + σ G(X, q(ϵt)) p(t) + O(σ²),  (15)

where the matrix G(X, I) is defined in the main article. Substituting Eqs. (8), (9), and (15) into Eqs. (13) and (14), we can obtain

 ˙θ = ω(q(ϵt)) + σ ∂Θ(X, I)/∂X |_(X, q(ϵt)) ⋅ G(X, q(ϵt)) p(t) + ϵ ∂Θ(X, I)/∂I |_(X, q(ϵt)) ⋅ ˙q(ϵt) + O(σ²),  (16)
 ˙r = −λ(q(ϵt)) r + σ ∂R(X, I)/∂X |_(X, q(ϵt)) ⋅ G(X, q(ϵt)) p(t) + ϵ ∂R(X, I)/∂I |_(X, q(ϵt)) ⋅ ˙q(ϵt) + O(σ²),  (17)

where ˙q(ϵt) denotes dq(s)/ds evaluated at s = ϵt. For simplicity of notation, we define ζθ, ζr, ξθ, and ξr, respectively, as

 ζθ(θ, r, I) = G(X, I)⊤ ∂Θ(X, I)/∂X |_{X = X(θ, r, I)},  (18)
 ζr(θ, r, I) = G(X, I)⊤ ∂R(X, I)/∂X |_{X = X(θ, r, I)},  (19)
 ξθ(θ, r, I) = ∂Θ(X, I)/∂I |_{X = X(θ, r, I)},  (20)
 ξr(θ, r, I) = ∂R(X, I)/∂I |_{X = X(θ, r, I)},  (21)

where X(θ, r, I) represents an oscillator state with phase θ, amplitude r, and parameter I. Using Eqs. (18), (19), (20), and (21), we can rewrite Eqs. (16) and (17) as

 ˙θ = ω(q(ϵt)) + σ ζθ(θ, r, q(ϵt))⋅p(t) + ϵ ξθ(θ, r, q(ϵt))⋅˙q(ϵt) + O(σ²),  (22)
 ˙r = −λ(q(ϵt)) r + σ ζr(θ, r, q(ϵt))⋅p(t) + ϵ ξr(θ, r, q(ϵt))⋅˙q(ϵt) + O(σ²).  (23)

Note that ζθ(θ, 0, I) and ξθ(θ, 0, I) are equivalent to the sensitivity functions ζ(θ, I) and ξ(θ, I) defined in the main article. The functions ζr and ξr represent sensitivities of the amplitude r to the weak fluctuations σp(t) and to the slowly varying component q(ϵt) of the applied perturbations, respectively. In the main article, we also assumed that q(ϵt) varies sufficiently slowly as compared to the relaxation time of perturbed orbits to C. By using the absolute value λ of the second Floquet exponent and the slowly varying component q(ϵt), this assumption can be written as

 ϵ ≪ λ(q(ϵt)),  or  ϵ/λ(q(ϵt)) ≪ 1.  (24)

Now, we show that the following relation between the sensitivity functions for the amplitude holds:

 ξr(θ, 0, I) = −(1/ω(I)) ∫₀^∞ e^{−λ(I)φ/ω(I)} ζr(θ − φ, 0, I) dφ  (25)
  = −(1/λ(I)) ∫₀^∞ e^{−s} ζr(θ − ω(I)s/λ(I), 0, I) ds,  (26)

where we defined s = λ(I)φ/ω(I) in the second line. From Eq. (9),

 ∂R(X, I)/∂X ⋅ F(X, I) = −λ(I) R(X, I)  (27)

holds. We differentiate Eq. (27) with respect to I and plug in X = X₀(θ, I). Then, from the left-hand side of Eq. (27), we obtain

 ∂/∂I [∂R(X, I)/∂X ⋅ F(X, I)] |_{X = X₀(θ, I)} = [∂/∂I (∂R(X, I)/∂X)]⊤ F(X, I) |_{X = X₀(θ, I)}  (28)
  + (∂F(X, I)/∂I)⊤ ∂R(X, I)/∂X |_{X = X₀(θ, I)}  (29)
  = [∂/∂X (∂R(X, I)/∂I)] F(X, I) |_{X = X₀(θ, I)} + ζr(θ, 0, I),  (30)

where ∂/∂I (∂R(X, I)/∂X) denotes a matrix of mixed second derivatives whose (i, j)-th element is given by ∂²R(X, I)/∂I_j ∂X_i, ∂F(X, I)/∂I is a matrix whose (i, j)-th element is given by ∂F_i(X, I)/∂I_j, and (∂F(X, I)/∂I)⊤ is the transpose of ∂F(X, I)/∂I. Here, the first term of the right-hand side of Eq. (30) can be written as

 [∂/∂X (∂R(X, I)/∂I)] F(X, I) |_{X = X₀(θ, I)} = [∂/∂X (∂R(X, I)/∂I)] |_{X = X₀(θ, I)} dX₀(ω(I)t, I)/dt |_{t = θ/ω(I)}  (32)
  = ω(I) [∂/∂X (∂R(X, I)/∂I)] |_{X = X₀(θ, I)} ∂X₀(θ, I)/∂θ = ω(I) ∂/∂θ (∂R(X, I)/∂I) |_{X = X₀(θ, I)}  (33)
  = ω(I) ∂ξr(θ, 0, I)/∂θ.  (34)

Furthermore, differentiating the right-hand side of Eq. (27) with respect to I, we can derive

 ∂/∂I [−λ(I) R(X, I)] |_{X = X₀(θ, I)} = −[(dλ(I)/dI) R(X, I) + λ(I) ∂R(X, I)/∂I] |_{X = X₀(θ, I)}  (35)
  = −λ(I) ξr(θ, 0, I),  (36)

where we used R(X₀(θ, I), I) = 0. Thus, from Eqs. (27)–(36), we can obtain

 ω(I) ∂ξr(θ, 0, I)/∂θ + ζr(θ, 0, I) = −λ(I) ξr(θ, 0, I).  (37)

Since Eq. (37) is a linear first-order ordinary differential equation for ξr(θ, 0, I), it can be solved as follows:

 ξr(θ, 0, I) = −(1/ω(I)) ∫_{−∞}^{θ} e^{λ(I)(θ′ − θ)/ω(I)} ζr(θ′, 0, I) dθ′,  (38)

which leads to Eqs. (25) and (26).
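This solution can be checked numerically for an illustrative choice of the amplitude sensitivity, say ζr(θ) = cos θ (a hypothetical example, with arbitrary values of ω and λ). Evaluating the integral in Eq. (26) analytically for this ζr gives ξr(θ) = −(λ cos θ + ω sin θ)/(λ² + ω²), and the sketch below (Python with NumPy) verifies both that numerical quadrature of Eq. (26) reproduces this closed form and that the closed form satisfies the ODE (37):

```python
import numpy as np

w, lam = 1.3, 2.0                  # illustrative omega(I) and lambda(I)
zr = lambda th: np.cos(th)         # toy amplitude sensitivity zeta_r

def xi_r_quad(th, s_max=40.0, n=400000):
    """xi_r from Eq. (26): -(1/lam) * int_0^inf e^{-s} zr(th - w s/lam) ds,
    by midpoint quadrature with the tail truncated at s_max."""
    ds = s_max / n
    s = (np.arange(n) + 0.5) * ds
    return -ds * np.sum(np.exp(-s) * zr(th - w * s / lam)) / lam

# Closed form of the same integral for zr = cos:
xi_exact = lambda th: -(lam * np.cos(th) + w * np.sin(th)) / (lam**2 + w**2)

ths = np.linspace(0.0, 2.0 * np.pi, 9)
quad_err = max(abs(xi_r_quad(t) - xi_exact(t)) for t in ths)

# Residual of Eq. (37): w * d(xi_r)/d(theta) + zr = -lam * xi_r.
dxi = lambda th: (lam * np.sin(th) - w * np.cos(th)) / (lam**2 + w**2)
resid = max(abs(w * dxi(t) + zr(t) + lam * xi_exact(t)) for t in ths)
```

Both checks confirm that Eq. (38), and hence Eqs. (25) and (26), solve the linear equation (37).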

Using the derived Eq. (26), we can estimate the order of ξr as

 ξr(θ, 0, q(ϵt)) = −(1/λ(I)) ∫₀^∞ e^{−s} ζr(θ − ω(I)s/λ(I), 0, I) ds |_{I = q(ϵt)}  (39)
  = −(1/λ(I)) ∫₀^∞ e^{−s} ζr(θ, 0, I) ds |_{I = q(ϵt)} + O(1/λ(q(ϵt))²)  (40)
  = O(1/λ(q(ϵt))),  (41)

where we expanded ζr in 1/λ(I) in the second line. For simplicity of notation, we introduce ~ξr(θ, I) as follows:

 ~ξr(θ, I) = λ(I) ξr(θ, 0, I) = −∫₀^∞ e^{−s} ζr(θ − ω(I)s/λ(I), 0, I) ds