Synchronization of coupled stochastic limit cycle oscillators


Abstract

For a class of coupled limit cycle oscillators, we give a condition on a linear coupling operator that is necessary and sufficient for exponential stability of the synchronous solution. We show that, with certain modifications, our method of analysis applies to networks with partial, time-dependent, and nonlinear coupling schemes, as well as to ensembles of local systems with nonperiodic attractors. We also study robustness of synchrony to noise. To this end, we analytically estimate the degree of coherence of the network oscillations in the presence of noise. Our estimate of coherence highlights the main ingredients of stochastic stability of the synchronous regime. In particular, it quantifies the contribution of the network topology. The estimate of coherence for the randomly perturbed network can also be used as a means for analytic inference of the degree of stability of the synchronous solution of the unperturbed deterministic network. Furthermore, we show that in large networks the effects of noise on the dynamics of each oscillator can be effectively controlled by varying the strength of coupling, which provides a powerful mechanism of denoising. This suggests that the organization of oscillators into a coupled network may play an important role in maintaining robust oscillations in a random environment. The analysis is complemented with the results of numerical simulations of a neuronal network.
PACS: 05.45.Xt, 05.40.Ca
Keywords: synchronization, coupled oscillators, denoising, robustness to noise, compartmental model

1 Introduction

Consider a dynamical system forced by small noise:

where the function is continuous together with its partial derivatives up to the second order, is a bounded continuous function of time, is a dimensional white noise process, and small is the noise intensity. We call (Equation 1) a local system and denote by (Equation 1) the underlying deterministic system, i.e., (Equation 1) with . For , the stochastic differential equation (Equation 1) is understood in Ito's sense [1]. Many models of (bio)physical phenomena are formulated as coupled networks of identical local systems (Equation 1):

where , and , are independent copies of dimensional Brownian motion. The coupling is implemented by a linear operator , and is interpreted as the strength of coupling; may depend on time. Our only assumption on is that it leaves the diagonal invariant, i.e., if solves (Equation 1), then solves (Equation 2). Here, and denotes the Kronecker product, so that is a solution of the coupled system. Such solutions, when asymptotically stable, feature synchronization, an important mode of collective behavior. Suppose is an asymptotically stable solution of the local system (Equation 1). Under what conditions on the coupling operator , is an asymptotically stable solution of (Equation 2)? What information about is important for the synchronization properties of the coupled system? What determines the rate of attraction of the coupled limit cycle and its robustness to noise? Clearly, the answers to these questions depend in a nontrivial fashion on the properties of the local systems, the network topology, and the type of interactions between the local systems.
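
As a simple computational illustration of this setup (with a Stuart-Landau-type oscillator standing in for the unspecified local system, a nearest-neighbor ring for the coupling operator, and all numerical values chosen for convenience), the following Python sketch integrates a small instance of (Equation 2) by the Euler-Maruyama method and reports how tightly the oscillators cluster at the final time; the Kronecker structure of the coupling anticipates the separable coupling introduced in Section 2.

import numpy as np

rng = np.random.default_rng(0)

def f_local(u):
    """Stand-in planar vector field with a stable limit cycle (the unit circle)."""
    x, y = u
    r2 = x * x + y * y
    return np.array([x - y - r2 * x, x + y - r2 * y])

N, m = 10, 2                     # number of oscillators, local dimension
eps, sigma = 2.0, 0.05           # coupling strength, noise intensity
dt, T = 1e-3, 20.0

# Nearest-neighbor ring coupling; zero row sums leave the diagonal invariant.
G = np.zeros((N, N))
for i in range(N):
    G[i, i] = -2.0
    G[i, (i - 1) % N] = 1.0
    G[i, (i + 1) % N] = 1.0
D = np.kron(G, np.eye(m))        # coupling through all local variables

X = rng.normal(0.0, 0.5, size=N * m)      # random initial conditions
for _ in range(int(T / dt)):
    drift = np.concatenate([f_local(X[i * m:(i + 1) * m]) for i in range(N)])
    drift += eps * D @ X
    X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N * m)

Xm = X.reshape(N, m)
print("spread around the mean state at time T:", np.linalg.norm(Xm - Xm.mean(axis=0)))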

Many different approaches have been proposed to study synchronization: asymptotic analysis [2], phase reduction [4], constructing Lyapunov functions [10] and estimating Lyapunov exponents [7], invariant manifold theory [6], and graph-theoretic techniques [9]. Above we selected just a few representative studies illustrating these approaches. For more background and a more complete bibliography, we refer the interested reader to recent monographs [11]. For weakly coupled networks, i.e., when , there are effective perturbation techniques for studying synchronization, such as asymptotic approximation of the Poincare map and the method of averaging. These ideas underlie the widely used method of phase response curves and Kuramoto's phase reduction [5]. For a review of techniques available for studying synchronization in weakly coupled networks we refer to Chapter 10 in [13] and references therein. When the coupling is moderate or strong, these methods do not apply. Moreover, the mechanisms for synchronization in weakly and strongly coupled networks are different. In the case of strong coupling, a prevalent approach for studying synchronization is to use the properties of specific (albeit important in applications) coupling schemes such as global all-to-all coupling (see, e.g., [14]), local nearest-neighbor coupling, or, more generally, schemes resulting from discretization of the Laplace operator [2]. For these network topologies, one can use explicit information about the spectra of the coupling matrices; in addition, the former scheme has strong symmetry properties that can be used in understanding network dynamics. Networks with general coupling operators have been studied by Pecora and Carroll [7] and by V. Belykh, I. Belykh, and Hasler [9]. The master stability function, constructed in [7], uses spectral properties of a given coupling operator to determine whether the synchronous solution is stable. Practical implementation of this method relies on numerical computation of matrix eigenvalues. Analytical sufficient conditions for synchronization derived in [9] use a graph-theoretic interpretation of the coupling operator to construct Lyapunov functions controlling the growth of perturbations of the synchronous solution. In this Letter, we look for an analytic, or rather algebraic, description of coupling operators that endow synchronous solutions with exponential stability. We define a class of matrices, dissipative matrices (cf. Definition ?), and show that these matrices generate exponentially stable synchronous solutions once the coupling strength exceeds a certain value. A random dissipative matrix shown in Section 4 suggests that many dissipative matrices do not fall into the class of coupling operators analyzed in [9]. Therefore, by identifying dissipative matrices we have substantially extended the existing knowledge of linear coupling operators that enforce synchrony in coupled networks. Surprisingly, dissipative matrices admit an explicit algebraic characterization: Theorem ? relates all dissipative matrices to a discrete Laplacian, justifying the common interpretation of the Laplacian as a prototypical diffusive coupling. To highlight the main ingredients of our treatment of synchronization, in Section 2 we present the analysis in the simplest meaningful setting, when the coupling is linear and stationary. In Section 3, we discuss how to apply our method to problems with time-dependent and nonlinear coupling operators and, importantly, to networks composed of local systems with nonperiodic attractors.

Adequate description of many physical phenomena requires including stochastic terms in differential equation models. In the context of synchronization, this leads to the important question of robustness of synchrony to noise. This is the second problem investigated in this Letter. We analytically estimate the coherence of the coupled oscillations in the presence of noise. The estimate is tight. It reflects the main ingredients of the robustness of synchronous oscillations to noise. In particular, it quantifies the contribution of the network topology to the stability of the synchronous solution. As a related result, we show that in large networks the effects of noise on the oscillations can be reduced substantially by increasing the strength of coupling. For networks of simpler elements, so-called integrate-and-fire neurons, the denoising property was shown in [19]. The present Letter extends the result of [19] to systems of coupled limit cycle oscillators. Finally, in Section 4 we illustrate our findings with a discussion of the dynamics of a concrete biophysical model, an ensemble of neural oscillators.

2 The analysis

We start by specifying the structure of the coupling operator. We call the coupling operator separable if

Matrices and play distinct roles in the network organization: reflects the global architecture, i.e., which local system is connected to which, while specifies how the coupling is organized on the level of a local system, roughly, which local variables are engaged in the coupling. The separable structure of the coupling is important: it translates naturally to the stability analysis of the synchronous state. For separable coupling, the condition that the diagonal is invariant translates to . Moreover, if the network is connected, then is one-dimensional. Thus, we are led to the following condition

Further, we assume that is symmetric positive semidefinite, i.e., and . is not assumed to be symmetric. For simplicity, we keep and constant; in Section 3, we explain how our results extend to time-dependent and nonlinear coupling.

If is positive definite, we say that the coupling is full (rank); otherwise, we call it partial (rank). The distinction between full and partial coupling is important for the synchronization properties of (Equation 2). The following examples illustrate how full and partial coupling arise in physical models. Let

be the identity matrix, and . In (Equation 2), implements all-to-all coupling through all variables, while engages only the first variables of the local systems. The former is an example of full coupling, whereas the latter is an example of partial coupling. The stability analysis of partially coupled systems has to deal with the degeneracy of the coupling matrix (due to the zero eigenvalues of ). A reader wishing to gain better physical intuition for the coupled system (Equation 2), (Equation 3) before embarking on the analysis is referred to the discussion of a compartmental model of a neuron in Section 4.
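
The distinction can also be seen directly on the matrices. In the sketch below (a hypothetical example; the diagonal-invariance condition is taken to mean zero row sums of the network matrix), an all-to-all network matrix is combined with a full-rank and with a partial-rank local coupling block, and in both cases the coupling operator annihilates the synchronous subspace.

import numpy as np

N, m, k = 5, 3, 1                # network size, local dimension, coupled variables

# All-to-all network matrix with zero row sums (simple zero eigenvalue).
G_all = np.ones((N, N)) - N * np.eye(N)

L_full = np.eye(m)                                 # full coupling: all variables
L_part = np.diag([1.0] * k + [0.0] * (m - k))      # partial: first k variables only

v = np.arange(1.0, m + 1.0)      # an arbitrary local state
for name, L in [("full", L_full), ("partial", L_part)]:
    D = np.kron(G_all, L)
    # The synchronous subspace {1_N (x) v} is annihilated: D (1_N (x) v) = 0.
    print(name, "coupling: rank(L) =", np.linalg.matrix_rank(L),
          ", |D (1 (x) v)| =", np.linalg.norm(D @ np.kron(np.ones(N), v)))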

To study synchronization in (Equation 2)-(Equation 4), we derive the equations for the phase variables of the coupled system. To set up the notation, we first review the phase reduction for a single oscillator. Let denote a periodic solution of (Equation 1) with the least period . By we denote the corresponding orbit. Along we introduce an orthonormal moving coordinate frame (cf. [17]):

where the first vector is a unit vector moving along , . The change of variables

defines a smooth transformation in a sufficiently small neighborhood of (cf. Theorem VI.1.1 in [17]). Using Ito’s formula, in the vicinity of the periodic orbit, we rewrite (Equation 1) in terms of and (cf. (Equation 7)) and project the resultant equation onto the subspaces spanned by and to obtain

where

The detailed derivation of the system of equations near a periodic orbit of a deterministic system in the moving coordinates can be found in the proof of Theorem VI.1.2 in [17]. The treatment of the stochastic case requires Ito's formula, which does not affect the leading order terms written out in (Equation 8) and ( ?) (see the proof of Lemma 4.1 in [18] for details). The solution of an initial value problem for (Equation 8) and ( ?) yields a real-valued function . The phase of the oscillations is given by . It will be more convenient to work with , which we will call a phase variable, rather than with its projection on .

Assume that the eigenvalues of , , , are negative

By applying the phase reduction to each oscillator in the network, in complete analogy to (Equation 8), we derive the phase equations for the coupled system

where denote the entries of (cf. (Equation 3)). The expression for the coupling terms in (Equation 11) simplifies to

Here and below, we ignore quadratic terms . By plugging (Equation 12) in (Equation 11), we arrive at the following system of equations

where

and

Next, we derive the system for the vector of the phase differences

where matrix is defined by

Multiply both sides of (Equation 13) by and note that

where is defined in (Equation 15). From (Equation 13) and (Equation 17) we have

where is defined by . For , is well-defined (cf. Appendix [19]). In the derivation of (Equation 18), we treated terms inherited from (Equation 17) as higher order terms. Since (cf. (Equation 15)),

and (Equation 18) is reduced to

In (Equation 19), the terms are treated as higher order, because they do not affect the exponential stability of the synchronous solution. Note how the separable structure of the coupling translates to the structure of the phase equation: matrix , which carries the information about the network topology, effectively determines the stability of the synchronous solution, while the positive semidefinite matrix enters through the factor (cf. (Equation 14)). The stability of (Equation 19) is determined from the homogeneous deterministic system:

where by we mean the first component of the solution of the deterministic equation (Equation 2). We continue our analysis assuming that , i.e., the coupling is full; in Section 3, we comment on how our results apply to partially coupled systems. Thus, , and after changing the independent variable we have

For exponential stability of the synchronous solution, the symmetric part of must be negative definite. This motivates the following definition.

Thus, we arrive at the first conclusion of this Letter: the synchronous solution of (Equation 2)-(Equation 4) is exponentially stable if and only if is dissipative. When studying (Equation 2)-(Equation 4), it is tempting to relate the stability of the synchronous solution to the spectrum of . It is important to realize that it is the spectrum of that is responsible for synchronization. Remarkably, dissipative matrices admit an explicit characterization.
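
A numerical version of this criterion is easy to set up. The sketch below assumes, consistently with the construction outlined in (Equation 15)-(Equation 18), that the phase differences are formed by the matrix of consecutive differences and that the reduced matrix is obtained by solving the corresponding intertwining relation; with these assumptions, dissipativity amounts to negative definiteness of the symmetric part of the reduced matrix. The all-to-all matrix passes the test, a "repulsive" two-oscillator coupling fails it, and the spectra show how the irrelevant zero eigenvalue drops out in the reduction.

import numpy as np

def difference_matrix(N):
    """(N-1) x N matrix of consecutive differences (assumed form of S)."""
    S = np.zeros((N - 1, N))
    for i in range(N - 1):
        S[i, i], S[i, i + 1] = 1.0, -1.0
    return S

def reduced_matrix(G):
    """G_hat = S G S^+, the unique solution of S G = G_hat S when G has zero row sums."""
    S = difference_matrix(G.shape[0])
    return S @ G @ np.linalg.pinv(S)

def is_dissipative(G, tol=1e-10):
    Gh = reduced_matrix(G)
    return bool(np.all(np.linalg.eigvalsh(0.5 * (Gh + Gh.T)) < -tol))

N = 5
G_all = np.ones((N, N)) - N * np.eye(N)            # all-to-all coupling
G_rep = np.array([[1.0, -1.0], [-1.0, 1.0]])       # "repulsive" coupling, N = 2

for name, G in [("all-to-all", G_all), ("repulsive", G_rep)]:
    print(name, ": dissipative =", is_dissipative(G),
          ", eigenvalues of the coupling matrix:", np.round(np.sort(np.linalg.eigvals(G).real), 3),
          ", of the reduced matrix:", np.round(np.sort(np.linalg.eigvals(reduced_matrix(G)).real), 3))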

Suppose ( ?) holds. Let . Then

Furthermore, for any

where by we denote the inner product in . We will use the same notation for the inner product in any other Euclidean space used in this Letter, e.g., and . Note that , because the rows of are linearly independent. Thus, from (Equation 22) we conclude

i.e., .

Conversely, suppose . Note that is invertible on the orthogonal complement of , , and define as follows: for let

We show that the symmetric part of (and, therefore, itself) is negative definite. For any there exists such that , because is invertible. Moreover, such can be chosen from , because . Thus,

Here, we used the fact that and (because ). The combination of (Equation 23) and (Equation 24) yields ( ?).

Theorem ? gives an explicit and, for separable coupling, exhaustive characterization of the coupling matrices that generate exponentially stable synchronous solutions. Synchronization is often attributed to systems with diffusive coupling, which are obtained by discretizing elliptic differential operators or, more generally, differential operators modeling diffusion on graphs. In this respect, it is remarkable that Theorem ? relates all dissipative matrices to the discrete Laplacian

This is consistent with the common interpretation of the Laplacian as a prototypical elliptic operator. The explicit characterization of dissipative matrices by ( ?) is the second result of this Letter. Before turning to the question of robustness to noise, we state two corollaries of Theorem ?. The first corollary gives a convenient computational formula for , while the second one characterizes the spectrum of a dissipative matrix.
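
The special role of the discrete Laplacian, and the spectral corollary below, can be checked numerically. In the sketch that follows (which restates the dissipativity test introduced above and samples random zero-row-sum perturbations of the second-difference matrix purely as a probing device, not as the explicit characterization given in the theorem), every sampled matrix that passes the test has its nonzero eigenvalues in the open left half-plane.

import numpy as np

rng = np.random.default_rng(1)

def reduced_matrix(G):
    """G_hat = S G S^+ with S the consecutive-difference matrix, as above."""
    N = G.shape[0]
    S = np.zeros((N - 1, N))
    for i in range(N - 1):
        S[i, i], S[i, i + 1] = 1.0, -1.0
    return S @ G @ np.linalg.pinv(S)

def is_dissipative(G, tol=1e-10):
    Gh = reduced_matrix(G)
    return bool(np.all(np.linalg.eigvalsh(0.5 * (Gh + Gh.T)) < -tol))

def discrete_laplacian(N):
    """Second-difference (nearest-neighbor) matrix on a path of N nodes."""
    G = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    G[0, 0] = G[-1, -1] = -1.0
    return G

N = 6
print("discrete Laplacian dissipative:", is_dissipative(discrete_laplacian(N)))

passed = 0
for _ in range(200):
    P = rng.normal(0.0, 0.02, size=(N, N))
    P -= np.outer(P.sum(axis=1), np.ones(N)) / N       # keep zero row sums
    G = discrete_laplacian(N) + P
    if is_dissipative(G):
        nonzero = [z for z in np.linalg.eigvals(G) if abs(z) > 1e-8]
        assert all(z.real < 0 for z in nonzero)        # consistent with the corollary
        passed += 1
print("dissipative samples with nonzero spectrum in the left half-plane:", passed, "/ 200")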

Matrix has a simple zero eigenvalue by (Equation 4). Suppose is a nonzero eigenvalue of and let be a corresponding eigenvector:

Note that . By multiplying both sides of (Equation 26) by , we have

The scalar product multiplying on the right hand side of (Equation 27) is positive, while is negative. Therefore, .

Having understood the mechanism for synchronization in the deterministic network (Equation 2), we now turn to the question of robustness of the synchronous regime to noise. To this end, we return to (Equation 19). For the remainder of this Letter, . For small , on a finite time interval the solution of (Equation 19) can be expanded as

where the deterministic function solves (Equation 20) (cf. Theorem 2.2, [20]). Since is an exponentially stable solution of (Equation 20), we take . The leading-order correction is a Gaussian process, which for small approximates on a finite interval of time. This specifies the scope of applicability of our analysis. In particular, we are not concerned with large-deviation effects, which become relevant on much longer timescales.
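
The nature of this expansion is illustrated by the following toy computation for a scalar linear test equation (not the phase model itself): the deviation of the stochastic solution from the deterministic one is approximately Gaussian, with a standard deviation proportional to the noise intensity.

import numpy as np

rng = np.random.default_rng(2)
dt, T, trials = 1e-3, 2.0, 2000
steps = int(T / dt)

def terminal_deviation(sigma):
    """Simulate dX = -X dt + sigma dW, X(0) = 1, and return X(T) minus the deterministic value."""
    X = np.ones(trials)
    for _ in range(steps):
        X = X - X * dt + sigma * np.sqrt(dt) * rng.normal(size=trials)
    return X - np.exp(-T)

for sigma in (0.01, 0.02, 0.04):
    dev = terminal_deviation(sigma)
    print(f"sigma = {sigma:.2f}: std of the deviation = {dev.std():.5f} "
          f"(Gaussian prediction {sigma * np.sqrt((1 - np.exp(-2 * T)) / 2):.5f})")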

From (Equation 13) we have

By plugging (Equation 28) and (Equation 29) in (Equation 19), we have

After changing time to and ignoring higher order terms, (Equation 31) is rewritten as

The solution of (Equation 31) subject to initial condition is a Gaussian random process with zero mean and covariance matrix (cf. §5.6, [1])

where and is a nonnegative scalar function. Using standard properties of trace and the mean value theorem, from (Equation 32) we have

where continuous function is due to the application of the mean value theorem, , , and

Note that is uniformly bounded. Define the average variance of the variables as

(Equation 33) yields an important estimate for the network variability

where is a positive constant. The nonnegative function reflects the properties of the local system, such as the geometry of the limit cycle and the matrix multiplying the stochastic term, whereas captures the network topology. For a network of fixed size, can be made arbitrarily small by taking large . Moreover, by Chebyshev's inequality, for any ,

i.e., for strong coupling the phases of the individual oscillators can be localized within arbitrarily narrow bounds. The control of coherence by varying the coupling strength is more effective in networks with smaller . Thus, (Equation 36) shows explicitly the factors controlling the coherence in the presence of noise. Moreover, quantifies the contribution of the network topology to the stability of the synchronous state. This is the third conclusion of this Letter.
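
The scaling of the coherence with the coupling strength can be seen on a linear surrogate of the phase-difference equation. In the sketch below, a linear Ornstein-Uhlenbeck system driven by the reduced nearest-neighbor coupling matrix (a stand-in for (Equation 31); its closed-form stationary covariance replaces the elided expression (Equation 34)) has an average stationary variance that decreases in inverse proportion to the coupling strength.

import numpy as np

def stationary_covariance(M, sigma):
    """Solve M*Sig + Sig*M^T + sigma^2 I = 0 for the stationary covariance Sig."""
    n = M.shape[0]
    K = np.kron(M, np.eye(n)) + np.kron(np.eye(n), M)
    return np.linalg.solve(K, -(sigma ** 2) * np.eye(n).ravel()).reshape(n, n)

def reduced_matrix(G):
    """G_hat = S G S^+ with S the consecutive-difference matrix, as above."""
    N = G.shape[0]
    S = np.zeros((N - 1, N))
    for i in range(N - 1):
        S[i, i], S[i, i + 1] = 1.0, -1.0
    return S @ G @ np.linalg.pinv(S)

N, sigma = 10, 0.1
G_nn = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
        + np.diag(np.ones(N - 1), -1))
G_nn[0, 0] = G_nn[-1, -1] = -1.0                # nearest-neighbor coupling on a path
A = reduced_matrix(G_nn)

for g in (1.0, 2.0, 4.0):
    Sig = stationary_covariance(g * A, sigma)
    print(f"g = {g}: average stationary variance of the phase differences = "
          f"{np.trace(Sig) / (N - 1):.5f}")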

What features of the network topology are captured by ? We first go over the ingredients of the formula for (cf. (Equation 34)). Matrix is a Laplacian:

Unlike , is nonsingular. The following examples show that can differ by a large factor for networks with different topologies. Let be as in (Equation 5) and (cf. (Equation 25)). and are the coupling matrices corresponding to the graphs modeling all-to-all and nearest-neighbor interactions in the network. By direct verification,

By plugging the explicit expressions (Equation 38) in (Equation 34), we find

Note that the all-to-all topology features a significant reduction in compared to the nearest-neighbor coupling. The reduction is proportional to the ratio of the degrees of the corresponding graphs: for the nearest-neighbor coupling and for the all-to-all coupling. Thus, (Equation 36) suggests that networks with a higher density of connections are more robust to noise.
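
The same linear surrogate makes the contribution of the topology explicit: at equal coupling strength and noise level, the all-to-all network confines the phase differences far more tightly than the nearest-neighbor chain, and the gap widens with the network size. (The unnormalized coupling matrices used below are illustrative, so the numerical ratio should not be compared directly with the degree ratio quoted above.)

import numpy as np

def stationary_avg_variance(G, g, sigma):
    """Average stationary variance of dZ = g*G_hat Z dt + sigma dW, with G_hat = S G S^+."""
    N = G.shape[0]
    S = np.zeros((N - 1, N))
    for i in range(N - 1):
        S[i, i], S[i, i + 1] = 1.0, -1.0
    A = g * (S @ G @ np.linalg.pinv(S))          # reduced coupling matrix
    n = N - 1
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    Sig = np.linalg.solve(K, -(sigma ** 2) * np.eye(n).ravel()).reshape(n, n)
    return np.trace(Sig) / n

g, sigma = 1.0, 0.1
for N in (10, 30):
    G_all = np.ones((N, N)) - N * np.eye(N)      # all-to-all
    G_nn = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
            + np.diag(np.ones(N - 1), -1))
    G_nn[0, 0] = G_nn[-1, -1] = -1.0             # nearest neighbors on a path
    v_all = stationary_avg_variance(G_all, g, sigma)
    v_nn = stationary_avg_variance(G_nn, g, sigma)
    print(f"N = {N}: all-to-all {v_all:.2e}, nearest-neighbor {v_nn:.2e}, ratio {v_nn / v_all:.1f}")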

(Equation 36) estimates the variability of the phase differences, revealing the main factors contributing to the robustness of synchrony to noise. If is symmetric, (Equation 36) can also be used to estimate the variability of the phase variables themselves. We follow the method used in [19] for a related problem. First, we derive the equation for the average phase of the coupled system

By multiplying both sides of (Equation 13) by , we have

Here, we used the following approximations

As a linear combination of independent Gaussian processes , is distributed as , where is a dimensional Brownian motion. Thus,

and

where does not depend on and . Next, by noting, as in [19], that each phase variable can be represented as a linear combination of the average phase and the phase differences , we estimate using (Equation 42) and (Equation 36). Omitting further details, which are the same as in Step 3 of the proof of Theorem 3.1 in [19], we state the final result

where and are positive constants independent of and . The first term on the right hand side of (Equation 42) can be made arbitrarily small by increasing , while the second term decreases with increasing . Therefore, in large networks, the effects of noise on the oscillations can be controlled by varying the strength of coupling. The two terms on the right hand side of (Equation 42) represent the two main ingredients of the mechanism of denoising: the first term is due to the averaging of statistically independent stochastic forces acting on connected local systems, whereas the second term reflects the dissipativity of the coupling. The latter is a critical property of the coupling operator that underlies both synchronization and denoising in coupled networks.
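
The interplay of the two terms is easy to see on a mean-field toy model of the phase dynamics (a stand-in for (Equation 13), with all parameter values chosen for illustration): the Monte Carlo variance of an individual phase at a fixed time decreases when the network size grows (averaging of independent noises) and when the coupling strength grows (dissipativity).

import numpy as np

rng = np.random.default_rng(4)

def phase_variance(N, g, sigma=0.2, dt=1e-2, T=10.0, trials=200):
    """Monte Carlo variance of theta_1(T) for the mean-field noisy phase model."""
    samples = np.zeros(trials)
    for r in range(trials):
        theta = np.zeros(N)
        for _ in range(int(T / dt)):
            theta = theta + g * (theta.mean() - theta) * dt \
                    + sigma * np.sqrt(dt) * rng.normal(size=N)
        samples[r] = theta[0]
    return samples.var()

for N in (5, 50):
    for g in (0.5, 5.0):
        print(f"N = {N:3d}, g = {g:3.1f}: Var[theta_1(T)] = {phase_variance(N, g):.4f}")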

The analysis of the phase equations above produced a necessary and sufficient condition for synchronization in systems with separable coupling and gave a compact explicit estimate for the spread of phases of coupled stochastic oscillators. For the phase equations to be valid, the trajectory of the coupled system must remain close to the limit cycle. To complete the analysis, we consider the system of equations for . The derivation of the system for is completely analogous to that for (cf. (Equation 13)). We omit the details and state the final result

where , . Matrices and are defined in (Equation 7) and (Equation 9), respectively. By Vazhevski's inequality [21], the combination of (Equation 10), , and implies exponential stability of the equilibrium at in (Equation 44). Therefore, on finite time intervals, with overwhelming probability, remains small, provided and are sufficiently small. This justifies the phase reduction that we used above. Note that this conclusion holds for both full and partial coupling.

3 Generalizations

The analysis of this Letter admits several important generalizations.

A) Partial coupling. If the coupling is partial, the nonnegative function in (Equation 31) in general takes zero values. Dealing with the degeneracies in (Equation 31) requires additional care. For the case, common in applications, when has isolated zeros, one can, with technical modifications, obtain an estimate qualitatively similar to (Equation 36).

B) Time-dependent coupling. Our analysis remains unchanged if instead of a constant one uses a bounded measurable function of time. The definition of full coupling is then modified to for some and uniformly in . Likewise, can be taken time-dependent as long as for all . In this case, exponential stability of follows from (Equation 21) if we require that all eigenvalues of are negative and bounded away from zero uniformly in :

for some . By Theorem ?, such matrices can be written as where is such that

for some . Also, in the time-dependent case, one can get a slightly weaker but qualitatively similar estimate on (cf. (Equation 36)).

C) Nonlinear coupling. The analysis can be extended to systems with nonlinear coupling of the following form

where for every and is a smooth function. Since , (Equation 45) can be rewritten as

Suppose is a periodic solution of the local system. Then solves (Equation 46). In a neighborhood of , (Equation 46) can be rewritten using Taylor's formula

or, equivalently,

(Equation 47) is now in a form to which the analysis of Section 2 applies (cf. B) above).

D) Nonperiodic attractors. Moving coordinate systems similar to the one used in this Letter can be introduced in the vicinity of locally invariant sets of a more general nature. For example, motions along certain attracting normally hyperbolic slow manifolds admit a similar description (cf. [17]). Thus, our analysis can be adapted to study synchronization in a more general setting. Furthermore, in the case when the attractor of the local system is periodic and synchronization takes place, the analysis of Section 2 yields a precise description of the asymptotic behavior of the trajectories of the coupled system. Specifically, the attractor of the coupled system includes a periodic orbit that is a direct product of the periodic orbits of the local systems. If one is only concerned with synchronization and does not aim at describing the asymptotic behavior of the coupled system, then the transformation to moving coordinates is not needed. To outline the analysis for this case, let be a solution of the local system. This solution is not necessarily periodic; instead, we assume that does not leave a bounded domain in . Then solves the coupled system

Linearization about yields

By multiplying both sides of (Equation 49) by , we get the equation for :

Asymptotic stability of implies synchronization for (Equation 48). A sufficient condition for asymptotic stability of the trivial solution of (Equation 50) is that the eigenvalues of symmetric matrix

are negative and bounded away from zero uniformly in . Since is bounded, the desired property of for large follows from and being symmetric positive definite. Thus, we get a sufficient condition for synchronization in the case of full coupling. In the partial coupling case, has zero eigenvalues, and one has to make sure that they do not give rise to nonnegative eigenvalues of . The condition for this can be obtained from the well-known formulas for the perturbations of the eigenvalues of symmetric matrices (cf. Appendix in [24]). A more complete analysis of synchronization for the partial coupling case will be given elsewhere [25].
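
As an illustration of synchronization with a nonperiodic local attractor under full coupling (with the Lorenz system serving as a standard chaotic stand-in for the local system and the coupling strength simply taken large), the following sketch shows three identical Lorenz systems with all-to-all diffusive coupling in all variables synchronizing from distinct initial conditions.

import numpy as np

def lorenz(u, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

N, g, dt, T = 3, 30.0, 5e-4, 20.0
rng = np.random.default_rng(5)
U = rng.normal(0.0, 5.0, size=(N, 3))               # distinct initial conditions

for _ in range(int(T / dt)):
    F = np.array([lorenz(U[i]) for i in range(N)])
    coupling = g * N * (U.mean(axis=0) - U)         # all-to-all: g * sum_j (u_j - u_i)
    U = U + (F + coupling) * dt

print("spread of the three trajectories at time T:",
      np.linalg.norm(U - U.mean(axis=0)))           # close to zero when synchronized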

4 Numerical example

To illustrate the analytical results of this Letter, we use a nondimensional model of a pacemaker neuron from [22]:

Dynamic variables and represent the membrane potential and the calcium concentration in a given compartment of an axon or a dendrite of a neural cell. The compartments are sufficiently small so that the membrane potential and the calcium concentration can be assumed constant throughout one compartment (Fig. ?a). The terms on the right hand side of the voltage equation (Equation 52) model ionic currents: a calcium current, a calcium-dependent potassium current, and a small leak current. In addition, small white noise is added to account for random synaptic input or other fluctuations. The equation for the calcium concentration ( ?) takes into account the calcium current and the calcium efflux due to the calcium pump. The ionic conductances are sigmoid functions of the voltage and the calcium concentration

We briefly comment on the meaning of the model parameters: and stand for the maximal conductances and reversal potentials of the corresponding ionic currents; are constants used in the description of the activation of the calcium and calcium-dependent currents; and are certain constants that arise in the nondimensionalization of the conductance-based model of a dopamine neuron (see [22] for details). The values of the parameters used in our simulations are given in the appendix to this Letter.

[Figure ?: (a) the linear cable geometry of an axon (dendrite) divided into compartments; (b) a branched dendrite.]

The coupling terms

model the electrical current and the calcium diffusion between adjacent compartments, respectively. In the case of the linear cable geometry of an axon (dendrite) shown in Fig. ?a, is the matrix corresponding to the nearest-neighbor coupling (cf. (Equation 25)). For branched dendrites (see Fig. ?b), may have a more complex structure. In either case, the structure of reflects the geometry of the neuron. By combining this information, we obtain a model in the form of (Equation 2), (Equation 3), with

where . The coupling is full rank. If we disregard calcium diffusion, i.e., set , the coupling becomes partial with . The system of equations (Equation 52) and ( ?) with admits an alternative interpretation: one can view and as the membrane potential and the calcium concentration of cell in a neuronal population. The coupling is then due to the current through the gap junctions between adjacent cells, and reflects the network connectivity. If the gap-junctional conductance depends on the voltage or the calcium concentration, the coupling is nonlinear, as in (Equation 45). If the gap junctions pass ions in one direction only, the coupling matrix is not symmetric.
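
The translation from geometry to the coupling matrix can be sketched as follows (with illustrative compartment graphs and an assumed relative strength of calcium diffusion): the compartments form a path for a linear cable and a tree for a branched dendrite, the network matrix is the corresponding negative graph Laplacian with zero row sums, and setting the calcium diffusion coefficient to zero turns the full-rank local coupling block into a partial-rank one.

import numpy as np

def graph_coupling_matrix(edges, n_nodes):
    """Negative graph Laplacian built from an edge list (zero row sums)."""
    G = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        G[i, j] += 1.0
        G[j, i] += 1.0
        G[i, i] -= 1.0
        G[j, j] -= 1.0
    return G

cable_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]                 # linear cable, 5 compartments
branched_edges = [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)]      # small branched dendrite (a tree)

gamma_ca = 0.1                          # assumed relative strength of calcium diffusion
L_full = np.diag([1.0, gamma_ca])       # couples both v and ca (full rank)
L_part = np.diag([1.0, 0.0])            # electrical coupling only (partial rank)
print("rank of the local coupling block: full =", np.linalg.matrix_rank(L_full),
      ", partial =", np.linalg.matrix_rank(L_part))

for name, edges, n in [("linear cable", cable_edges, 5), ("branched dendrite", branched_edges, 6)]:
    G = graph_coupling_matrix(edges, n)
    D = np.kron(G, L_full)
    sync = np.kron(np.ones(n), np.ones(2))          # a state on the synchronous subspace
    print(name, ": zero row sums =", bool(np.allclose(G.sum(axis=1), 0.0)),
          ", |D (1 (x) v)| =", np.linalg.norm(D @ sync))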

In the remainder of this section, we present the results of numerical simulations of (Equation 52) and ( ?). We choose the variant of the model with partial coupling, i.e., with . When , the model has even better synchronization properties. The upper panel of Fig. ? shows the phase plane and the time series of five uncoupled oscillators forced by small noise. The initial condition is chosen on the limit cycle of the deterministic system and is the same for each oscillator. Fig. ?a shows the phase trajectories of all five oscillators for approximately one cycle. As one can see from Fig. ?b, after the first few cycles, under the influence of noise the oscillations gradually lose coherence. In contrast, the trajectories shown in Fig. ?c and d remain tightly bundled. In these simulations, we used nearest-neighbor and all-to-all coupling, respectively.
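
The qualitative behavior described above can be reproduced with a generic stand-in for the pacemaker model, since the nondimensional equations (Equation 52), ( ?) and their parameters are specified in [22] and in the appendix rather than reproduced here. The sketch below couples five FitzHugh-Nagumo oscillators through the voltage-like variable only (partial coupling), drives them with the same noise realization in each run, and compares the time-averaged spread of the voltages for the uncoupled, nearest-neighbor, and all-to-all configurations; the coupled runs typically remain much more tightly bundled.

import numpy as np

def fhn(V, W, a=0.7, b=0.8, tau=12.5, I=0.5):
    """FitzHugh-Nagumo vector field, a generic stand-in for the pacemaker model."""
    return V - V ** 3 / 3 - W + I, (V + a - b * W) / tau

def simulate(G, eps, N=5, sigma=0.3, dt=2e-3, T=200.0, seed=7):
    rng = np.random.default_rng(seed)            # same noise realization in every run
    V, W = np.full(N, 1.0), np.full(N, 0.5)      # identical initial conditions
    steps = int(T / dt)
    spread, count = 0.0, 0
    for k in range(steps):
        dV, dW = fhn(V, W)
        dV = dV + eps * (G @ V)                  # partial coupling: voltage only
        V = V + dV * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
        W = W + dW * dt
        if k > 0.75 * steps:                     # time-average over the final quarter
            spread += V.std()
            count += 1
    return spread / count

N = 5
G_zero = np.zeros((N, N))                                        # uncoupled
G_nn = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
        + np.diag(np.ones(N - 1), -1))
G_nn[0, 0] = G_nn[-1, -1] = -1.0                                 # nearest neighbors on a path
G_all = np.ones((N, N)) - N * np.eye(N)                          # all-to-all

for name, G, eps in [("uncoupled", G_zero, 0.0),
                     ("nearest-neighbor", G_nn, 3.0),
                     ("all-to-all", G_all, 3.0)]:
    print(f"{name:17s}: time-averaged voltage spread = {simulate(G, eps):.3f}")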

[Figure ?: (a) phase trajectories of five uncoupled noisy oscillators over approximately one cycle; (b) gradual loss of coherence of the uncoupled oscillators; (c), (d) trajectories under nearest-neighbor and all-to-all coupling, respectively; (e) simulations with a random dissipative coupling matrix.]

To illustrate Theorem ?, in the simulations shown in Figure ?e we use a random dissipative matrix. Specifically, we pick the entries of