Synchronization of coupled stochastic limit cycle oscillators

Georgi S. Medvedev Department of Mathematics, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, medvedev@drexel.edu, ph.: 1-215-895-6612, fax: 1-215-895-1582
July 3, 2019
Abstract

For a class of coupled limit cycle oscillators, we give a condition on a linear coupling operator that is necessary and sufficient for exponential stability of the synchronous solution. We show that with certain modifications our method of analysis applies to networks with partial, time-dependent, and nonlinear coupling schemes, as well as to ensembles of local systems with nonperiodic attractors. We also study robustness of synchrony to noise. To this end, we analytically estimate the degree of coherence of the network oscillations in the presence of noise. Our estimate of coherence highlights the main ingredients of stochastic stability of the synchronous regime. In particular, it quantifies the contribution of the network topology. The estimate of coherence for the randomly perturbed network can be used as a means for analytic inference of the degree of stability of the synchronous solution of the unperturbed deterministic network. Furthermore, we show that in large networks, the effects of noise on the dynamics of each oscillator can be effectively controlled by varying the strength of coupling, which provides a powerful mechanism of denoising. This suggests that the organization of oscillators in a coupled network may play an important role in maintaining robust oscillations in a random environment. The analysis is complemented with the results of numerical simulations of a neuronal network.
PACS: 05.45.Xt, 05.40.Ca
Keywords: synchronization, coupled oscillators, denoising, robustness to noise, compartmental model

1 Introduction

Consider a dynamical system forced by small noise:

(1.1)

where function is continuous together with its partial derivatives up to the second order, is a bounded continuous function of time, is a dimensional white noise process, and small is the noise intensity. We call (1.1) a local system and denote by (1.1) the underlying deterministic system (1.1) with . For , the stochastic differential equation (1.1) is understood in the Itô sense [1]. Many models of (bio)physical phenomena are formulated as coupled networks of identical local systems (1.1):

(1.2)

where , and , and , are independent copies of dimensional Brownian motion. The coupling is implemented by a linear operator and is interpreted as the strength of coupling. may depend on time. Our only assumption on is that it leaves the diagonal invariant, i.e., if solves (1.1), then solves (1.2). Here, and denotes the Kronecker product, so that is a solution of the coupled system. Such solutions, when asymptotically stable, feature synchronization, an important mode of collective behavior. Suppose is an asymptotically stable solution of the local system (1.1). Under what conditions on the coupling operator is an asymptotically stable solution of (1.2)? What information about is important for the synchronization properties of the coupled system? What determines the rate of attraction of the coupled limit cycle and its robustness to noise? Clearly, the answers to these questions depend in a nontrivial fashion on the properties of the local systems, the network topology, and the type of interactions between the local systems.
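
The diagonal-invariance assumption can be illustrated numerically. The sketch below uses a hypothetical 3-node diffusive coupling matrix D with zero row sums (not taken from the text) and the Kronecker product identity (D ⊗ I)(1 ⊗ x) = (D1) ⊗ x, which shows that such a coupling term vanishes on synchronous states:

```python
import numpy as np

# Hypothetical 3-oscillator coupling matrix with zero row sums;
# such a matrix leaves the diagonal (synchronous) subspace invariant.
D = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])
n = 2                                  # dimension of a local system
x = np.array([0.5, -1.3])              # an arbitrary local state
one = np.ones(3)

# (D kron I_n)(1_N kron x) = (D 1_N) kron x = 0 when D has zero row sums,
# so the coupling term vanishes when all oscillators share the same state.
coupling_on_diagonal = np.kron(D, np.eye(n)) @ np.kron(one, x)
assert np.allclose(coupling_on_diagonal, 0.0)
```

Any matrix with zero row sums passes the same check, which is the algebraic content of the diagonal being invariant under the coupling.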

Many different approaches have been proposed to study synchronization: asymptotic analysis [2, 3], phase reduction [4, 5], constructing Lyapunov functions [10] and estimating Lyapunov exponents [7], invariant manifold theory [6, 8], and graph-theoretic techniques [9]. Above we selected just a few representative studies illustrating these approaches. For more background and a more complete bibliography, we refer the interested reader to the recent monographs [11, 12, 16]. For weakly coupled networks, i.e., when , there are effective perturbation techniques for studying synchronization, such as asymptotic approximation of the Poincaré map and the method of averaging. These ideas underlie the widely used method of phase response curves and Kuramoto's phase reduction [5, 4]. For a review of techniques available for studying synchronization in weakly coupled networks, we refer to Chapter 10 in [13] and references therein. When the coupling is moderate or strong, these methods do not apply. Moreover, the mechanisms for synchronization in weakly and strongly coupled networks are different. In the case of strong coupling, a prevalent approach for studying synchronization is to use the properties of specific (albeit important in applications) coupling schemes, such as global all-to-all coupling (see, e.g., [14]), local nearest-neighbor coupling, or, more generally, schemes resulting from discretization of the Laplace operator [2, 6]. For these network topologies, one can use explicit information about the spectra of the coupling matrices; in addition, the former scheme has strong symmetry properties that can be used in understanding network dynamics. Networks with general coupling operators have been studied by Pecora and Carroll [7] and by V. Belykh, I. Belykh, and Hasler [9, 15]. The master stability function, constructed in [7], uses spectral properties of a given coupling operator to determine whether the synchronous solution is stable.
Practical implementation of this method relies on numerical computation of matrix eigenvalues. Analytical sufficient conditions for synchronization derived in [9, 15] use a graph-theoretic interpretation of the coupling operator to construct Lyapunov functions controlling the growth of perturbations of the synchronous solution. In this Letter, we look for an analytic, or rather algebraic, description of coupling operators that endow synchronous solutions with exponential stability. We define a class of matrices, dissipative matrices (cf. Definition 1), and show that these matrices generate exponentially stable synchronous solutions once the coupling strength exceeds a certain value. A random dissipative matrix shown in Section 4 suggests that many dissipative matrices do not fall into the class of coupling operators analyzed in [9, 15]. Therefore, by identifying dissipative matrices we have substantially extended the existing knowledge of linear coupling operators that enforce synchrony in coupled networks. Surprisingly, dissipative matrices admit an explicit algebraic characterization: Theorem 2 relates all dissipative matrices to a discrete Laplacian, justifying the common interpretation of the Laplacian as a prototypical diffusive coupling. To highlight the main ingredients of our treatment of synchronization, in Section 2 we present the analysis in the simplest meaningful setting, when the coupling is linear and stationary. In Section 3, we discuss how to apply our method to problems with time-dependent and nonlinear coupling operators and, importantly, to networks composed of local systems with nonperiodic attractors.

Adequate description of many physical phenomena requires the inclusion of stochastic terms in differential equation models. In the context of synchronization, this leads to an important question of robustness of synchrony to noise. This is the second problem investigated in this Letter. We analytically estimate the coherence of the coupled oscillations in the presence of noise. The estimate is tight. It reflects the main ingredients of robustness of synchronous oscillations to noise. In particular, it quantifies the contribution of the network topology to the stability of the synchronous solution. As a related result, we show that in large networks the effects of noise on oscillations can be reduced substantially by increasing the strength of coupling. For networks of simpler elements, so-called integrate-and-fire neurons, the denoising property was shown in [19]. The present Letter extends the result of [19] to systems of coupled limit cycle oscillators. Finally, in Section 4 we illustrate our findings with a discussion of the dynamics of a concrete biophysical model, an ensemble of neural oscillators.

2 The analysis

We start by specifying the structure of the coupling operator. We call the coupling operator separable if

(2.1)

Matrices and play distinct roles in the network organization. reflects the global architecture: which local system is connected to which. specifies how the coupling is organized at the level of a local system: roughly, which local variables are engaged in the coupling. The separable structure of the coupling is important: it translates naturally to the stability analysis of the synchronous state. For separable coupling, the condition that the diagonal is invariant translates to the following. Moreover, if the network is connected, then is one-dimensional. Thus, we are led to the following condition:

(2.2)

Further, we assume that is symmetric positive semidefinite, i.e., and . is not assumed symmetric. For simplicity, we keep and constant. In Section 3, we explain how our results extend to time-dependent and nonlinear coupling.

If is positive definite, we say that the coupling is full (rank); otherwise, we call it partial (rank). The distinction between full and partial coupling is important for the synchronization properties of (1.2). The following examples illustrate how full and partial coupling arise in physical models. Let

(2.3)

be the identity matrix, and . in (1.2) implements all-to-all coupling through all variables, while engages only the first variables of the local systems. The former is an example of full coupling, whereas the latter is that of partial coupling. The stability analysis of partially coupled systems has to deal with the degeneracy of the coupling matrix (due to the zero eigenvalues of ). A reader wishing to gain better physical intuition for the coupled system (1.2), (2.1) before embarking on the analysis is referred to the discussion of a compartmental model of a neuron in Section 4.
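
The distinction between full and partial coupling can be sketched with illustrative stand-ins for the matrices in (2.3): the identity couples all local variables (full rank), while a projection onto the first variable gives a degenerate, partial coupling.

```python
import numpy as np

n = 3                               # dimension of a local system (illustrative)
A_full = np.eye(n)                  # couples all n local variables
A_partial = np.zeros((n, n))
A_partial[0, 0] = 1.0               # couples only the first local variable

# Full coupling: positive definite, hence full rank.
assert np.linalg.matrix_rank(A_full) == n
# Partial coupling: positive semidefinite with zero eigenvalues.
assert np.linalg.matrix_rank(A_partial) == 1
```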

To study synchronization in (1.2)-(2.2), we derive the equation for the phase variables of the coupled system. To set up the notation, we first review phase reduction for a single oscillator. Let denote a periodic solution of (1.1) with least period . By we denote the corresponding orbit. Along we introduce an orthonormal moving coordinate frame (cf. [17]):

(2.4)

where the first vector is a unit vector moving along , . The change of variables

(2.5)

defines a smooth transformation in a sufficiently small neighborhood of (cf. Theorem VI.1.1 in [17]). Using Itô's formula, in the vicinity of the periodic orbit, we rewrite (1.1) in terms of and (cf. (2.5)) and project the resultant equation onto the subspaces spanned by and to obtain

(2.6)
(2.7)

where

(2.8)

The detailed derivation of the system of equations near a periodic orbit of a deterministic system in the moving coordinates can be found in the proof of Theorem VI.1.2 in [17]. The treatment of the stochastic case requires Itô's formula, which does not affect the leading order terms written out in (2.6) and (2.7) (see the proof of Lemma 4.1 in [18] for details). The solution of an initial value problem for (2.6) and (2.7) yields a real-valued function . The phase of the oscillations is given by . It will be more convenient to work with , which we will call a phase variable, rather than with its projection on .

Assume that the eigenvalues of , , , are negative

(2.9)

By applying the phase reduction to each oscillator in the network, in complete analogy to (2.6), we derive the phase equations for the coupled system

(2.10)

where denote the entries of (cf. (2.1)). The expression for the coupling terms in (2.10) simplifies to

(2.11)

Here and below, we ignore quadratic terms . By plugging (2.11) in (2.10), we arrive at the following system of equations

(2.12)

where

and

(2.13)

Next, we derive the system for the vector of the phase differences

(2.14)

where matrix is defined by

(2.15)

Multiply both sides of (2.12) by and note that

(2.16)

where is defined in (2.14). From (2.12) and (2.16) we have

(2.17)

where is defined by . For , is well-defined (cf. Appendix [19]). In the derivation of (2.17), we treated terms inherited from (2.16) as higher order terms. Since (cf. (2.14)),

and (2.17) is reduced to

(2.18)

In (2.18), terms are treated as higher order, because they do not affect the exponential stability of the synchronous solution. Note how the separable coupling translates to the structure of the phase equation. Matrix , which carries the information about the network topology, effectively determines the stability of the synchronous solution. Positive semidefinite matrix enters the factor (cf. (2.13)). The stability of (2.18) is determined from the homogeneous deterministic system:

(2.19)

where by we mean the first component of the solution of the deterministic equation (1.2). We continue our analysis assuming that , i.e., the coupling is full. In Section 3, we comment on how our results apply to partially coupled systems. Thus, and after changing the independent variable we have

(2.20)

For exponential stability of the synchronous solution, the symmetric part of must be negative definite. This motivates the following definition.

Definition 1.

Matrices from

(2.21)

are called dissipative.

Thus, we arrive at the first conclusion of this Letter: the synchronous solution of (1.2)-(2.2) is exponentially stable iff is dissipative. When studying (1.2)-(2.2) it is tempting to relate the stability of the synchronous solution to the spectrum of . It is important to realize that it is the spectrum of that is responsible for synchronization. Remarkably, dissipative matrices admit an explicit characterization.

Theorem 2.

iff

(2.22)

for some with negative definite symmetric part.

Proof. Suppose (2.22) holds. Let . Then

Furthermore, for any

(2.23)

where by we denote the inner product in . We will use the same notation for the inner product in any other Euclidean space used in this Letter, e.g., and . Note that , because the rows of are linearly independent. Thus, from (2.23) we conclude

i.e., .

Conversely, suppose . Note that is invertible on the orthogonal complement of , and define as follows: for , let

(2.24)

We show that the symmetric part of (and, therefore, itself) is negative definite. For any there exists such that , because is invertible. Moreover, such , can be chosen from because . Thus,

(2.25)

Here, we used the fact that and (because ). The combination of (2.24) and (2.25) yields (2.22).

Theorem 2 gives an explicit and, for separable coupling, exhaustive characterization of the coupling matrices that generate exponentially stable synchronous solutions. Synchronization is often attributed to systems with diffusive coupling, which are obtained by discretizing elliptic differential operators or, more generally, differential operators modeling diffusion on graphs. In this respect, it is remarkable that Theorem 2 relates all dissipative matrices to the discrete Laplacian

(2.26)

This is consistent with the common interpretation of the Laplacian as a prototypical elliptic operator. The explicit characterization of dissipative matrices by (2.22) is the second result of this Letter. Before turning to the question of robustness to noise, we state two corollaries of Theorem 2. The first corollary gives a convenient computational formula for , while the second one characterizes the spectrum of a dissipative matrix.

Corollary 3.

For , , where satisfies (2.22).

Corollary 4.

If then has a zero eigenvalue of multiplicity . All nonzero eigenvalues of are real and negative.

Proof. Matrix has a simple zero eigenvalue by (2.2). Suppose is a nonzero eigenvalue of and let be a corresponding eigenvector:

(2.27)

Note that . By multiplying both sides of (2.27) by , we have

(2.28)

The scalar product multiplying on the right hand side of (2.28) is positive, while is negative. Therefore, .
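
Corollary 4 can be verified numerically for the prototypical symmetric case. The sketch below assumes a reading of (2.22) in which a dissipative matrix has the form SᵀBS, with S the first-difference matrix and B having negative definite symmetric part; taking B = -I yields the discrete Laplacian of (2.26). These structural assumptions are inferred from the surrounding discussion, not stated explicitly in the text.

```python
import numpy as np

N = 6
S = np.zeros((N - 1, N))            # first-difference matrix
for k in range(N - 1):
    S[k, k], S[k, k + 1] = -1.0, 1.0

B = -np.eye(N - 1)                  # symmetric part negative definite
D = S.T @ B @ S                     # = discrete Laplacian when B = -I

ev = np.sort(np.linalg.eigvalsh(D)) # D symmetric here: real eigenvalues
assert abs(ev[-1]) < 1e-10          # a zero eigenvalue (synchronous mode)
assert np.all(ev[:-1] < -1e-10)     # all other eigenvalues negative
assert np.allclose(D @ np.ones(N), 0.0)   # zero row sums, cf. (2.2)
```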

Having understood the mechanism for synchronization in the deterministic network (1.2), we now turn to the question of robustness of the synchronous regime to noise. To this end, we return to (2.18). For the remainder of this Letter, . For small , on a finite time interval the solution of (2.18) can be expanded as

(2.29)

where deterministic function solves (2.19) (cf. Theorem 2.2, [20]). Since is an exponentially stable solution of (2.19), we take . The leading order correction is a Gaussian process, which for small approximates on a finite interval of time. This specifies the scope of applicability of our analysis. In particular, we are not concerned with large deviation type effects which become relevant on much longer timescales.

From (2.12) we have

(2.30)

By plugging (2.29) and (2.30) in (2.18), we have

(2.31)

After changing time to and ignoring higher order terms, (2.31) is rewritten as

(2.32)

The solution of (2.32) subject to initial condition is a Gaussian random process with zero mean and covariance matrix (cf. §5.6, [1])

(2.33)

where and is a nonnegative scalar function. Using standard properties of trace and the mean value theorem, from (2.33) we have

(2.34)

where continuous function is due to the application of the mean value theorem, , , and

(2.35)

Note that is uniformly bounded. Define the average variance of the variables as

(2.36)

Equation (2.34) yields an important estimate for the network variability

(2.37)

where is a positive constant. Nonnegative function reflects the properties of the local system, such as the geometric properties of the limit cycle and the matrix multiplying the stochastic term, whereas captures the network topology. For a network of fixed size, can be made arbitrarily small by taking sufficiently large . Moreover, by Chebyshev's inequality, for any ,

i.e., for strong coupling the phases of individual oscillators can be localized within arbitrarily narrow bounds. The control of the coherence by varying the coupling strength is more effective in networks with smaller . Thus, (2.37) shows explicitly the factors controlling the coherence in the presence of noise. Moreover, quantifies the contribution of the network topology to the stability of the synchronous state. This is the third conclusion of this Letter.
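
The qualitative content of this estimate, stronger coupling yields tighter phase coherence, can be checked with a Monte Carlo sketch. The linear noisy system below, with an all-to-all coupling Laplacian, is an illustrative stand-in for the phase equations; the parameter values are arbitrary choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, dt, steps = 10, 0.5, 1e-3, 20000

# All-to-all coupling Laplacian: zero row sums, nonzero eigenvalues -N.
L = np.ones((N, N)) - N * np.eye(N)

def avg_spread(g):
    """Time-averaged variance of the phases about their mean (Euler-Maruyama)."""
    x = np.zeros(N)
    acc = 0.0
    for _ in range(steps):
        x = x + g * (L @ x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        acc += np.var(x)
    return acc / steps

v_weak, v_strong = avg_spread(1.0), avg_spread(10.0)
assert v_strong < v_weak     # stronger coupling -> more coherent phases
```

For this linear model the stationary spread of each transverse mode scales like the noise variance divided by the coupling strength, which is the behavior the estimate above predicts.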

What features of the network topology are captured by ? We first go over the ingredients of the formula for (cf. (2.35)). Matrix is a Laplacian:

(2.38)

Unlike , is nonsingular. The following examples show that can vary severalfold across networks with different topologies. Let be as in (2.3) and (cf. (2.26)). and are the coupling matrices corresponding to the graphs modeling all-to-all and nearest-neighbor interactions in the network. By direct verification,

(2.39)

By plugging the explicit expressions (2.39) in (2.35), we find

(2.40)

Note that the all-to-all topology features a significant reduction in compared to the nearest-neighbor coupling. This reduction is proportional to the ratio of the degrees of the corresponding graphs: for the nearest-neighbor and for the all-to-all coupling. Thus, (2.37) suggests that networks with a higher density of connections are more robust to noise.
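
The contrast between the two topologies can be reproduced numerically under an assumed reading of the topology factor as the trace of the pseudoinverse of the coupling Laplacian (a plausible proxy, since the exact formula in (2.35) is not reproduced here):

```python
import numpy as np

N = 20
L_all = np.ones((N, N)) - N * np.eye(N)   # all-to-all Laplacian

L_nn = np.zeros((N, N))                   # nearest-neighbor (path) Laplacian
for i in range(N):
    L_nn[i, i] = -2.0
for i in range(N - 1):
    L_nn[i, i + 1] = L_nn[i + 1, i] = 1.0
L_nn[0, 0] = L_nn[-1, -1] = -1.0          # free ends

def topology_factor(L):
    """Trace of the pseudoinverse: sum of reciprocal nonzero |eigenvalues|."""
    ev = np.sort(np.linalg.eigvalsh(L))   # negative eigenvalues first, zero last
    return float(np.sum(1.0 / np.abs(ev[:-1])))

k_all, k_nn = topology_factor(L_all), topology_factor(L_nn)
assert k_all < k_nn      # denser graph -> smaller factor -> more robust
```

For the all-to-all graph the factor stays bounded as the network grows, while for the nearest-neighbor chain it grows rapidly with the network size, consistent with the comparison in the text.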

Equation (2.37) estimates the variability of the phase differences, revealing the main factors contributing to robustness of synchrony to noise. If is symmetric, Equation (2.37) can also be used to estimate the variability of the phase variables themselves. We follow the method used in [19] for a related problem. First, we derive the equation for the average phase of the coupled system.

By multiplying both sides of (2.12) by , we have

(2.41)

Here, we used the following approximations

As a linear combination of independent Gaussian processes , is distributed as , where is a dimensional Brownian motion. Thus,

(2.42)

and

(2.43)

where does not depend on and . Next, by noting, as in [19], that each phase variable can be represented as a linear combination of the average phase and the phase differences , we estimate using (2.43) and (2.37). Omitting further details, which are the same as in Step 3 of the proof of Theorem 3.1 in [19], we state the final result

(2.44)

where and are positive constants independent of and . The first term on the right hand side of (2.44) can be made arbitrarily small by increasing , while the second term decreases with increasing . Therefore, in large networks, the effects of noise on oscillations can be controlled by varying the strength of coupling. The two terms on the right hand side of (2.44) represent the two main ingredients of the mechanism of denoising: the first term is due to the averaging of statistically independent stochastic forces acting on connected local systems, whereas the second term reflects the dissipativity of the coupling. The latter is a critical property of the coupling operator that underlies both synchronization and denoising in coupled networks.
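
The first ingredient of the denoising mechanism, averaging of independent stochastic forces, can be illustrated directly: the variance of the average of N independent Brownian phases scales as 1/N. A Monte Carlo sketch with arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T, dt = 0.5, 1.0, 1e-3
steps = int(T / dt)

def mean_phase_var(N, trials=200):
    """Sample variance at time T of the average of N independent Brownian phases."""
    finals = np.empty(trials)
    for t in range(trials):
        increments = sigma * np.sqrt(dt) * rng.standard_normal((steps, N))
        finals[t] = increments.sum(axis=0).mean()   # average phase at time T
    return float(finals.var())

v4, v100 = mean_phase_var(4), mean_phase_var(100)
assert v100 < v4     # averaging N independent noises shrinks variance ~ 1/N
```

The theoretical values are sigma^2 T / N, so increasing the network size by a factor of 25 shrinks the variance of the average phase by the same factor, up to sampling error.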

The analysis of the phase equations above produced a necessary and sufficient condition for synchronization in systems with separable coupling and gave a compact explicit estimate for the spread of phases of coupled stochastic oscillators. For the phase equations to be valid, the trajectory of the coupled system must remain close to the limit cycle. To complete the analysis, we consider the system of equations for . The derivation of the system for is completely analogous to that for (cf. (2.12)). We omit the details and state the final result

(2.45)

where , . Matrices and are defined in (2.5) and (2.8), respectively. By Ważewski's inequality [21], the combination of (2.9), , and implies exponential stability of the equilibrium at in (2.45). Therefore, on finite time intervals, with overwhelming probability, remains small, provided and are sufficiently small. This justifies the phase reduction that we used above. Note that this conclusion holds for both full and partial coupling.

3 Generalizations

The analysis of this Letter admits several important generalizations.

A) Partial coupling. If the coupling is partial, the nonnegative function in (2.32) in general takes zero values. Dealing with the degeneracies in (2.32) requires additional care. For the case, common in applications, when has isolated zeros, one can, with technical modifications, obtain an estimate qualitatively similar to (2.37).

B) Time-dependent coupling. Our analysis remains unchanged if instead of a constant one uses a bounded measurable function of time. The definition of the full coupling is then modified to for some and uniformly in . Likewise, can be taken time-dependent as long as for all . In this case, exponential stability of follows from (2.20) if we require that all eigenvalues of are negative and bounded away from zero uniformly in :

for some . By Theorem 2, such matrices can be written as where is such that

for some . Also, in the time-dependent case, one can get a slightly weaker but qualitatively similar estimate on (cf. (2.37)).

C) Nonlinear coupling. The analysis can be extended to the systems with nonlinear coupling of the following form

(3.1)

where for every and is a smooth function. Since , (3.1) can be rewritten as

(3.2)

Suppose is a periodic solution of the local system. Then solves (3.2). In a neighborhood of , (3.2) can be rewritten using Taylor's formula

or, equivalently,

(3.3)

Equation (3.3) is now in the form, for which the analysis of Section 2 applies (cf. B) above).

D) Nonperiodic attractors. Moving coordinate systems similar to the one used in this Letter can be introduced in the vicinity of locally invariant sets of more general nature. For example, motions along certain attracting normally hyperbolic slow manifolds admit a similar description (cf. [17, 23]). Thus, our analysis can be adapted to study synchronization in a more general setting. Furthermore, in the case when the attractor of the local system is periodic and synchronization takes place, the analysis of Section 2 yields a precise description of the asymptotic behavior of trajectories of the coupled system. Specifically, the attractor of the coupled system includes a periodic orbit that is a direct product of the periodic orbits of the local systems. If one is only concerned with synchronization and does not aim at describing the asymptotic behavior of the coupled system, then the transformation to moving coordinates is not needed. To outline the analysis for this case, let be a solution of the local system. This solution is not necessarily periodic. Instead, we assume that does not leave a bounded domain in . Then solves the coupled system

(3.4)

Linearization about yields

(3.5)

By multiplying both sides of (3.5) by , we get the equation for :

(3.6)

Asymptotic stability of implies synchronization for (3.4). A sufficient condition for asymptotic stability of the trivial solution of (3.6) is that the eigenvalues of symmetric matrix

(3.7)

are negative and bounded away from zero uniformly in . Since is bounded, the desired property for for large follows from and being symmetric positive definite. Thus, we get a sufficient condition for synchronization for the full coupling. In the partial coupling case, has zero eigenvalues and one has to make sure that they do not give rise to nonnegative eigenvalues of . The condition for this can be obtained from the well-known formulas for the perturbations of the eigenvalues of symmetric matrices (cf. Appendix in [24]). A more complete analysis of synchronization for the partial coupling case will be given elsewhere [25].

4 Numerical example

To illustrate the analytical results of this Letter, we use a nondimensional model of a pacemaker neuron from [22]:

(4.1)
(4.2)

Dynamic variables and represent the membrane potential and calcium concentration in a given compartment of an axon or a dendrite of a neural cell. The compartments are sufficiently small that the membrane potential and calcium concentration can be assumed constant throughout one compartment (Fig. 1a). The terms on the right hand side of the voltage equation (4.1) model ionic currents: a calcium current, a calcium-dependent potassium current, and a small leak current. In addition, small white noise is added to account for random synaptic input or other fluctuations. The equation for calcium concentration (4.2) takes into account the calcium current and calcium efflux due to the calcium pump. The ionic conductances are sigmoid functions of the voltage and calcium concentration

(4.3)
(4.4)

We briefly comment on the meaning of the model parameters: and stand for the maximal conductances and reversal potentials of the corresponding ionic currents; are constants used in the description of the activation of the calcium and calcium-dependent currents; and are constants that come up in the process of nondimensionalization of the conductance-based model of a dopamine neuron (see [22] for details). The values of the parameters used in our simulations are given in the appendix to this Letter.

Figure 1: Schematic representation of spatial structure of a compartmental model: (a) linear cable, (b) branched cable. Dynamic variables and , , approximate voltage and calcium concentration in each compartment.

The coupling terms

(4.5)
(4.6)

model electrical current and calcium diffusion between adjacent compartments, respectively. In the case of the linear cable geometry of an axon (dendrite) shown in Fig. 1a, is the matrix corresponding to the nearest-neighbor coupling (cf. (2.26)). For branched dendrites (see Fig. 1b), may have a more complex structure. In either case, the structure of reflects the geometry of the neuron. By combining this information, we obtain a model in the form of (1.2), (2.1), with

(4.7)

where . The coupling is full rank. If we disregard calcium diffusion, i.e., set , the coupling becomes partial with . The system of equations (4.1) and (4.2) with admits an alternative interpretation. One can view and as the membrane potential and calcium concentration of Cell in a neuronal population. The coupling is due to the current through the gap junctions between adjacent cells. In this case, reflects the network connectivity. If the gap-junctional conductance depends on the voltage or calcium concentration, the coupling is nonlinear as in (3.1). If the gap junctions pass ions in only one direction, the coupling matrix is not symmetric.
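
The separable coupling operator of (4.7) for the linear cable of Fig. 1a can be sketched explicitly. The conductance values g_v and g_c below are hypothetical placeholders, not the paper's parameters:

```python
import numpy as np

N = 5                                   # compartments in a linear cable
# Nearest-neighbor (cable) matrix with zero row sums, cf. (2.26).
D = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
D[0, 0] = D[-1, -1] = -1.0              # sealed ends of the cable

g_v, g_c = 1.0, 0.2                     # hypothetical conductances
A_full = np.diag([g_v, g_c])            # voltage and calcium both coupled
A_partial = np.diag([g_v, 0.0])         # g_c = 0: electrical coupling only

C_full = np.kron(D, A_full)             # separable operator, cf. (2.1), (4.7)
C_partial = np.kron(D, A_partial)

assert np.linalg.matrix_rank(A_full) == 2      # full coupling
assert np.linalg.matrix_rank(A_partial) == 1   # partial coupling
# Both operators vanish on synchronous states (all compartments identical).
assert np.allclose(C_full @ np.kron(np.ones(N), np.array([1.0, 1.0])), 0.0)
```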

In the remainder of this section, we present the results of numerical simulations of (4.1) and (4.2). We choose the variant of the model with partial coupling, i.e., with . When , the model has even better synchronization properties. The upper panel of Fig. 2 shows the phase plane and the time series of five uncoupled oscillators forced by small noise. The initial condition is chosen on the limit cycle of the deterministic system and is the same for each oscillator. Fig. 2a shows phase trajectories of all five oscillators for approximately one cycle. As one can see from Fig.