The Utility of Phase Models in Studying Neural Synchronization
Synchronization of neural activity is a ubiquitous phenomenon in the brain that has been associated with many cognitive functions [6, 21, 5] and is inferred from macroscopic electrophysiological recordings (LFP, EEG, MEG). These physiological recordings represent the aggregate rhythmic electrical activity in the cortex. To study synchrony, we consider only tonically firing neurons, allowing us to study synchrony solely in terms of spike times.
Weakly coupled oscillator theory provides a mechanistic description of synchronization rates and stability. We use this theory to predict and explain synchronization in two types of membranes: Class I membranes, which are characterized by the onset of oscillations that have nonzero amplitude and arbitrarily low frequency, and Class II membranes, which are characterized by the onset of oscillations at nonzero frequency with arbitrarily small amplitude. In terms of dynamics, Class II is associated with a Hopf bifurcation and Class I is associated with a saddle-node limit cycle (SNLC) bifurcation. (Class I and II are also called Type I and II, but to avoid confusion with the classification of PRCs, we use Class I and II to describe the neuronal dynamics.)
Reciprocally coupled neurons can synchronize their spiking according to how they respond to incoming spikes. The timing of spike events in a tonically firing neuron can be represented mathematically as the phase of an oscillator. The impact of incoming spikes on that neuron can thus be reduced to perturbations to the phase of an oscillator. How the perturbations advance or delay the phase is quantified by the phase response curve (PRC) and is typically measured directly from the neuron.
In Fig. 0.1(a), we show repetitive spiking in the Morris-Lecar model, a simple planar conductance-based model that was originally developed to explain voltage oscillations in the barnacle giant muscle fiber [17, 20]. The corresponding phase of the spike train is shown in Fig. 0.1(b). By plotting the voltage and gating variables of the spike train as a parametric curve, we obtain Fig. 0.1(c), the phase space representation of the model. The closed orbit that is shown is both periodic and attracting and therefore a limit cycle, which we denote $\Gamma$.
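The repetitive spiking of Fig. 0.1(a) is easy to reproduce numerically. The sketch below integrates the Morris-Lecar equations with forward Euler using a commonly used Class I parameter set (an assumption on our part; the values in Table .7.1 may differ):

```python
import numpy as np

# Forward-Euler sketch of the Morris-Lecar model.  The parameters are a
# commonly used Class I set, chosen for illustration only.
C, I = 20.0, 45.0                       # capacitance (uF/cm^2), applied current
gL, gK, gCa = 2.0, 8.0, 4.0             # maximal conductances
VL, VK, VCa = -60.0, -84.0, 120.0       # reversal potentials (mV)
v1, v2, v3, v4, phi = -1.2, 18.0, 12.0, 17.4, 0.0667

def rhs(V, w):
    minf = 0.5 * (1 + np.tanh((V - v1) / v2))
    winf = 0.5 * (1 + np.tanh((V - v3) / v4))
    tauw = 1.0 / np.cosh((V - v3) / (2 * v4))
    dV = (I - gL * (V - VL) - gK * w * (V - VK) - gCa * minf * (V - VCa)) / C
    dw = phi * (winf - w) / tauw
    return dV, dw

dt, Tmax = 0.05, 1000.0                 # ms
V, w, Vs = -60.0, 0.0, []
for _ in range(int(Tmax / dt)):
    dV, dw = rhs(V, w)
    V, w = V + dt * dV, w + dt * dw
    Vs.append(V)
Vs = np.array(Vs)

# crude spike count: upward crossings of 0 mV
spikes = int(np.sum((Vs[1:] >= 0) & (Vs[:-1] < 0)))
```

Plotting `Vs` against time gives a trace qualitatively like Fig. 0.1(a), while plotting voltage against the gating variable gives the phase-space picture of Fig. 0.1(c).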
The phase representation in Fig. 0.1(c,d) is achieved by parameterizing the $T$-periodic limit cycle $\Gamma$ by a phase parameter $\theta$. This formalism is standard in mathematical neuroscience.
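Given the spike times of a tonically firing neuron, the phase in Fig. 0.1(b) can be constructed by linear interpolation between consecutive spikes. A minimal sketch (the helper function is ours):

```python
import numpy as np

def phase_from_spikes(t, spike_times):
    """Phase in [0, 2*pi), defined as 0 at each spike and interpolated
    linearly between consecutive spikes (a hypothetical helper of ours)."""
    spike_times = np.asarray(spike_times, dtype=float)
    k = int(np.searchsorted(spike_times, t, side="right")) - 1
    if k < 0 or k >= len(spike_times) - 1:
        raise ValueError("t must lie between the first and last spike")
    t0, t1 = spike_times[k], spike_times[k + 1]
    return 2 * np.pi * (t - t0) / (t1 - t0)
```

For evenly spaced spikes this reduces to a sawtooth in time, exactly the picture in Fig. 0.1(b).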
0.2 Derivation of the Phase Model
The phase representation of a neuron allows for a substantial reduction in dimensionality of the system that is particularly useful when studying many coupled neurons in networks. All the complex biophysics, channels, ions, and synaptic interactions are reduced to a set of $N$ coupled phase models, where $N$ is the number of neurons in the network. The task at hand is to derive how the phases interact when coupled into a network. This simplification to phase comes with some assumptions that we will outline in the ensuing paragraphs.
For a generic membrane model, we assume the existence of a $T$-periodic limit cycle, $X_0(t)$, satisfying a system of ordinary differential equations,
$$\frac{dX}{dt} = F(X),$$
where $X \in \mathbb{R}^n$ and $F$ is a sufficiently differentiable function. The limit cycle is attracting. In neural models, the limit cycle represents the dynamics of a spiking neural membrane (for example, when injected with a current sufficient to induce repetitive firing), where one dimension typically represents the membrane voltage and the other dimensions represent recovery variables.
The phase of the limit cycle is a function $\theta : \Gamma \to [0, T)$. The phase can be rescaled onto any other interval (common choices include $[0, 2\pi)$ and $[0, 1)$), but we choose $[0, T)$ for convenience. In addition, we choose the phase to satisfy
$$\frac{d\theta}{dt} = 1.$$
This choice is a substantial yet powerful simplification of the neural dynamics, which allows us to study deviations from this constant rate, which in turn provide information about spike delays or advances. We account for different models with different spiking frequencies by rescaling time appropriately.
Winfree generalized the notion of phase (which, technically, is only defined on the limit cycle itself) to include all points in the basin of attraction of the limit cycle [25]. This generalization begins by choosing an initial condition, say at the square in Fig. 0.2. As time advances in multiples of the limit cycle period $T$, this point converges along the white curve to a unique point (pentagon) on the limit cycle $\Gamma$. The initial condition is then assigned the phase of this unique limit cycle point, say $\theta^*$. We repeat this method to assign a phase value to every point that converges to the limit cycle.
In mathematical terms, we choose two initial conditions, one in the basin of attraction and another on the limit cycle, say $x$ and $x_0$, respectively. Since $x_0$ is on the limit cycle, it has some phase associated with it, say $\theta^*$ (we use the same phase value as above for convenience). If this choice of initial conditions satisfies the property
$$\lim_{t \to \infty} \left\| \phi_t(x) - \phi_t(x_0) \right\| = 0, \tag{0.2}$$
where $\phi_t$ denotes the flow of the system,
then $x$ is said to have the asymptotic phase $\theta^*$. The set of all initial conditions sharing this asymptotic phase is called an isochron, and this isochron forms a curve in the plane in Fig. 0.2. This idea extends to all other phase values: for each phase value there exists a curve of initial conditions in the basin of attraction satisfying Eq. (0.2). Collectively, the isochrons form non-overlapping curves that fill the basin of attraction. The notion of isochrons extends beyond planar limit cycles to limit cycles in any dimension [7].
Equivalently, if $\Theta(x)$ denotes the asymptotic phase of a point $x$ in the basin of attraction, then the level curves of $\Theta$ are the isochrons. Due to this close relationship between asymptotic phase and isochrons, the terms are used interchangeably.
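A toy example makes the construction concrete. For the radial isochron clock (a standard example with $dr/dt = r(1 - r)$, $d\theta/dt = 1$; our choice here, not a neural model), the isochrons are the radial lines $\theta = \text{const}$, so the asymptotic phase of any point is simply its angular coordinate. The sketch below verifies this numerically:

```python
import numpy as np

# Radial isochron clock: dr/dt = r(1 - r), dtheta/dt = 1.  The limit cycle
# is r = 1 with period T = 2*pi, and the isochrons are radial lines, so a
# point's asymptotic phase equals its angular coordinate.
T = 2 * np.pi
dt = T / 1000.0
r, theta = 0.2, 1.0          # initial condition off the limit cycle
theta0 = theta
for _ in range(10 * 1000):   # forward Euler for exactly 10 periods
    r += dt * r * (1 - r)
    theta += dt
```

After ten periods the trajectory has (numerically) landed on the limit cycle at angle `theta0` modulo $2\pi$: the asymptotic phase of the initial point is its original angle, exactly as the isochron picture predicts.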
0.3 Phase Response Curve
A fundamental measurement underlying the study of synchrony of coupled oscillators is the phase response curve (PRC): the change in spike timing, or the change in phase, of an oscillating neuron in response to voltage perturbations. If the new phase is denoted $\theta_{\mathrm{new}}$ and the old phase $\theta_{\mathrm{old}}$, then we can quantify the phase shift as
$$\Delta\theta = \theta_{\mathrm{new}} - \theta_{\mathrm{old}}. \tag{0.3}$$
This phase shift defines the PRC, and is an easily measurable property of a neural oscillator in both theory and experiment [2, 23]. Neuroscientists often measure the PRC of a neuron by applying a brief current and measuring its change in spike timing. If $\Delta\theta$ is negative, then the perturbation lengthens the time to the next spike (phase delay). If $\Delta\theta$ is positive, then the perturbation decreases the time to the next spike (phase advance).
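This measurement protocol is easy to emulate in silico. The sketch below measures the PRC of the theta neuron (a standard Class I caricature, used here as a stand-in for an experiment): run the oscillator to its next spike with and without a brief pulse, and record the timing advance:

```python
import numpy as np

# PRC measurement for the theta neuron, d(theta)/dt = 1 - cos(theta)
# + (1 + cos(theta)) * eta, which "spikes" when theta reaches pi.  A brief
# current pulse of size a advances theta by a * (1 + cos(theta)), so the
# resulting PRC should be nonnegative everywhere (Type I).
eta, a, dt = 1.0, 0.05, 1e-3

def time_to_spike(kick_time=None):
    theta, t, kicked = -np.pi, 0.0, kick_time is None
    while theta < np.pi:
        if not kicked and t >= kick_time:
            theta += a * (1 + np.cos(theta))   # impulsive perturbation
            kicked = True
        theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * eta)
        t += dt
    return t

T0 = time_to_spike()                           # unperturbed period
kick_times = np.linspace(0.05, 0.95, 19) * T0
prc = np.array([(T0 - time_to_spike(kt)) / T0 for kt in kick_times])
```

Because the theta neuron's sensitivity to input is proportional to $1 + \cos\theta \ge 0$, every measured shift is a phase advance: the PRC is nonnegative (Type I). Dividing `prc` by the pulse size `a` yields an approximation to the iPRC.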
In the limit of weak and brief perturbations, the PRC becomes the infinitesimal phase response curve (iPRC). The theory of infinitesimal PRCs was independently proposed by Malkin [15, 16] and Winfree [25]. The iPRC results from a Taylor expansion of the phase function,
$$\Theta(x + \varepsilon\eta) = \Theta(x) + \varepsilon\,\eta \cdot \nabla_x\Theta(x) + \mathcal{O}(\varepsilon^2), \tag{0.4}$$
where $\eta$ is an arbitrary unit vector direction. The change in phase for this small perturbation is
$$\Delta\theta = \Theta(x + \varepsilon\eta) - \Theta(x) = \varepsilon\,\eta \cdot \nabla_x\Theta(x) + \mathcal{O}(\varepsilon^2). \tag{0.5}$$
By taking the limit $\varepsilon \to 0$, we arrive at the expression for the iPRC given a perturbation in the direction $\eta$:
$$Z_\eta(\theta) = \lim_{\varepsilon \to 0} \frac{\Delta\theta}{\varepsilon} = \eta \cdot \nabla_x\Theta(x), \tag{0.6}$$
where $x$ is the point on the limit cycle with phase $\theta$.
The iPRC is closely related to the PRC. If one measures the PRC using perturbations of small magnitude and divides the PRC by that magnitude, one obtains an approximation to the iPRC. The smaller the magnitude, the better the approximation.
Note that the iPRC $\nabla_x\Theta$ is a vector in $n$ dimensions, where the $k$th coordinate represents the iPRC of a perturbation in that coordinate direction. Neuroscientists are often interested in perturbations to the voltage state variable. Assuming voltage lies in the first coordinate, we take $\eta = (1, 0, \ldots, 0)^\top$, and the dot product in Eq. (0.6) recovers the first coordinate of $\nabla_x\Theta$, which is the iPRC of the voltage variable.
0.3.1 Phase Response Curves and Membranes
The shape of the PRC is informative about the oscillator's response to perturbations. An oscillator with a strictly positive (negative) PRC will only ever advance (retard) the phase in response to perturbations, as shown in Fig. 0.3(a). This type of PRC is classified as Type I. In neurons, this corresponds to advancing or retarding the time to the next spike. On the other hand, a PRC that can either advance or retard the phase depending on the arrival time of the perturbation is classified as Type II, as shown in Fig. 0.3(b). Oscillators with Type II PRCs have a greater propensity to synchronize to an incoming pulse train because they can both advance and retard their phase.
Membrane oscillations were characterized into two classes by Hodgkin [10, 12]: Class I and Class II, as noted in the introduction of this chapter. Rinzel and Ermentrout [20] showed that Hodgkin's classification could be related to the bifurcation mechanism by which the neurons make the transition from rest to repetitive firing as the input current changes. They showed that Class I excitability corresponds to a SNLC bifurcation and Class II to a Hopf bifurcation.
Remarkably, each PRC type is associated with a distinct excitable membrane property. In [1, 2], it is shown that Class I membranes have Type I PRCs, and that Class II membrane oscillations arising from a super- or sub-critical Andronov-Hopf bifurcation have Type II PRCs.
The figures used to demonstrate Type I and Type II PRCs are derived from the Morris-Lecar model. The parameters used for these models may be found in Table .7.1.
If the input current is chosen sufficiently far from the onset of oscillations (while membrane oscillations persist), Class I (Class II) oscillators do not generally have Type I (Type II) PRCs. For the remainder of this chapter, we choose parameters close to the onset of Class I (Class II) oscillations. Therefore, any mention of Class I (Class II) oscillations can be assumed to have an associated Type I (Type II) PRC.
0.4 Two Weakly Coupled Oscillators
With the PRC in hand, we now turn to the issue of coupling oscillators into a network. Networks of conductance-based neurons, such as the Morris-Lecar model, are generally coupled by synapses, and the effects of these synapses are additive, as they are physically currents. Thus, in order to analyze the dynamics of networks of rhythmic neurons, we have to (1) derive the interactions that arise after we reduce them to a phase model and (2) see how these interactions depend on the nature of the coupling. We will study this for a network of two neurons, keeping in mind that the pairwise interactions are all that we need in order to simulate and analyze large networks, since the networks are formed from weighted sums of the pairwise interactions.
For pairwise interactions, a natural question to ask is whether or not two oscillators with similar frequencies will synchronize or converge to some other periodic pattern, such as "anti-phase," where they oscillate a half-cycle apart. It is possible to predict synchrony between coupled oscillators with very general assumptions on the form of coupling by using the phase-reduction technique that we outline below. This generality comes at a price: we must assume that the interactions are "weak"; that is, the effects of all the inputs to an oscillator are small enough that it stays close to its uncoupled limit cycle. For this reason, what we next present is often called the theory of weakly coupled oscillators. To make the mathematics easier, we assume the reciprocally coupled oscillators are identical except for the coupling term. That is, we assume coupling of the form,
$$\frac{dX_1}{dt} = F(X_1) + \varepsilon G_1(X_1, X_2), \qquad \frac{dX_2}{dt} = F(X_2) + \varepsilon G_2(X_2, X_1), \tag{0.7}$$
where $\varepsilon$ is small and $G_1, G_2$ are vector-valued functions that represent the neural coupling. Note that the vector field $F$ is the same in both ODEs, and with $\varepsilon = 0$, the stable periodic solution $X_0(t)$ of $dX/dt = F(X)$ satisfies both ODEs. To make predictions regarding synchronization, we follow the geometric approach of Kuramoto [14, 4].
Let $i, j \in \{1, 2\}$ with $j \neq i$, and let $\Theta$ denote the asymptotic phase function. We start with a change of coordinates along the limit cycle, $\theta_i = \Theta(X_i)$. Because $X_i$ is a function of time, we apply the chain rule to rewrite $d\theta_i/dt$:
$$\frac{d\theta_i}{dt} = \nabla_X\Theta(X_i) \cdot \frac{dX_i}{dt}.$$
We substitute $dX_i/dt$ with its vector field definition to yield
$$\frac{d\theta_i}{dt} = \nabla_X\Theta(X_i) \cdot \left[ F(X_i) + \varepsilon G_i(X_i, X_j) \right].$$
Finally, we use the normalization property $\nabla_X\Theta \cdot F = 1$, which holds along the periodic solution $X_0(t)$ and, more generally, at any point in its basin of attraction, because the asymptotic phase advances at unit rate along trajectories. We arrive at an exact equation that provides some intuition of the role of the coupling term and the iPRC in the phase model:
$$\frac{d\theta_i}{dt} = 1 + \varepsilon\, \nabla_X\Theta(X_i) \cdot G_i(X_i, X_j). \tag{0.8}$$
Intuitively, this equation says that the phase of the oscillator advances at the usual rate of 1, with an additional weak nonlinear term that depends on the iPRC and the coupling term. We remark that the iPRC term, which is derived by considering instantaneous perturbations, appears naturally in a context where perturbations to the phase are not necessarily instantaneous.
While Eq. (0.8) is exact, we do not know the form of the solution $X_i(t)$ and therefore cannot evaluate this ODE. However, if $\varepsilon$ is sufficiently small, then interactions between the two oscillators are weak and the periodic solutions are almost identical to the unperturbed limit cycle, so that $X_i(t) \approx X_0(\theta_i(t))$. Making this substitution results in an equation that depends only on the phases:
$$\frac{d\theta_i}{dt} = 1 + \varepsilon\, Z(\theta_i) \cdot G_i\big(X_0(\theta_i), X_0(\theta_j)\big), \tag{0.9}$$
where $Z(\theta) = \nabla_X\Theta(X_0(\theta))$ denotes the iPRC.
By subtracting off the rotating frame using the change of variables $\theta_i = t + \phi_i$, we can study the effects of coupling without keeping track of a term that grows linearly in time. Eq. (0.9) becomes
$$\frac{d\phi_i}{dt} = \varepsilon\, Z(t + \phi_i) \cdot G_i\big(X_0(t + \phi_i), X_0(t + \phi_j)\big).$$
All terms multiplied by $\varepsilon$ are $T$-periodic in $t$, so we can apply the averaging theorem to eliminate the explicit time dependence. (This theorem says that for an equation $dx/dt = \varepsilon f(x, t)$, where $f$ is $T$-periodic in $t$, the dynamics of $x$ are close to those of $dy/dt = \varepsilon \bar{f}(y)$ for $\varepsilon$ small, where $\bar{f}(y) = \frac{1}{T}\int_0^T f(y, t)\, dt$.) We average the right-hand sides over one cycle to obtain:
$$\frac{d\phi_i}{dt} = \varepsilon H_i(\phi_j - \phi_i), \tag{0.10}$$
where the interaction functions $H_i$ are defined in Eq. (0.14) below.
That is, we have reduced the system of two coupled $n$-dimensional systems to a pair of coupled scalar equations. It should be clear that if the coupling terms are additive (as they would be in the case of synaptic coupling) and there are $N$ coupled oscillators, then the phase equations have the general form:
$$\frac{d\phi_i}{dt} = \varepsilon \sum_{j=1}^{N} H_{ij}(\phi_j - \phi_i), \qquad i = 1, \ldots, N. \tag{0.11}$$
We can make one more reduction in dimension by observing that all the interactions in (0.11) depend only on the phase differences. Thus we can study the relative phases by setting $\psi_i = \phi_i - \phi_1$ for $i = 2, \ldots, N$ and obtain the $(N-1)$-dimensional set of equations:
$$\frac{d\psi_i}{dt} = \varepsilon \sum_{j=1}^{N} \left[ H_{ij}(\psi_j - \psi_i) - H_{1j}(\psi_j) \right], \tag{0.12}$$
where we set $\psi_1 = 0$. The beauty of these equations is that equilibrium points correspond to periodic solutions of the original set of coupled oscillators, and these periodic solutions have the same stability properties as the equilibria of (0.12). For example, synchrony of the coupled oscillators corresponds to a solution of (0.12) with $\psi_i = 0$ for all $i$. An easily computed sufficient condition for stability of equilibria of (0.12) can be found in [3]. For the remainder of this chapter, we focus on $N = 2$, and define $\phi = \psi_2 = \phi_2 - \phi_1$ to obtain a single scalar equation for the phase difference of the two oscillators:
$$\frac{d\phi}{dt} = \varepsilon \left[ H_2(-\phi) - H_1(\phi) \right], \tag{0.13}$$
with
$$H_i(\phi) = \frac{1}{T} \int_0^T Z(t) \cdot G_i\big(X_0(t), X_0(t + \phi)\big)\, dt, \tag{0.14}$$
with $Z$ and $X_0$ as above. The function $H_i$ is often called the interaction function and is the convolution of the coupling term with the iPRC.
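As a numerical illustration, the convolution defining the interaction function and its odd part can be computed on a grid. The waveforms below are toy stand-ins of our own (a sinusoidal Type II-like iPRC and an exponentially decaying synapse), not the Morris-Lecar quantities:

```python
import numpy as np

# Toy ingredients sampled over one period T = 2*pi: a Type II-like iPRC
# Z(t) = sin(t) and an exponentially decaying synaptic waveform g(t).
N = 400
T = 2 * np.pi
t = np.linspace(0.0, T, N, endpoint=False)
Z = np.sin(t)
g = np.exp(-2.0 * t / T)

# H[k] approximates (1/T) * integral of Z(t) g(t + phi_k) dt for
# phi_k = k*T/N, with g extended periodically.
H = np.array([np.mean(Z * np.roll(g, -k)) for k in range(N)])

# For identical reciprocal coupling only the odd part of H matters:
# d(phi)/dt = eps * (H(-phi) - H(phi)) = -2 * eps * H_odd(phi).
H_at_minus_phi = np.roll(H[::-1], 1)   # H evaluated at -phi_k on the grid
H_odd = 0.5 * (H - H_at_minus_phi)
```

By construction `H_odd` vanishes at $\phi = 0$, so synchrony is always a fixed point of the symmetric phase-difference equation; its stability is determined by the slope of `H_odd` there.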
Remark 1. We note that equation (0.13) was derived under the assumption that there is no frequency difference between the two oscillators. However, if the frequency differences are small, that is, $\mathcal{O}(\varepsilon)$, then the equations (0.11) acquire an additional constant term $\varepsilon\omega_i$, representing the difference of the uncoupled frequency of oscillator $i$ from the common reference frequency. In neural models, the easiest way to change the frequency is by adding a small additional current, $\varepsilon I_i$. In this case
$$\omega_i = \frac{I_i}{C} \cdot \frac{1}{T} \int_0^T Z_V(t)\, dt, \tag{0.15}$$
where $Z_V$ is the voltage component of the iPRC and $C$ is the membrane capacitance. Equation (0.15) is intuitively appealing: oscillators with strictly positive PRCs are the most sensitive to currents, since the average of their PRC will generally be larger than that of PRCs with both positive and negative values. When the oscillators have slightly different frequencies (in the sense of this remark), equation (0.13) becomes:
$$\frac{d\phi}{dt} = \varepsilon \left[ \omega + H_2(-\phi) - H_1(\phi) \right], \tag{0.16}$$
where $\omega = \omega_2 - \omega_1$. Frequency differences have the effect of shifting the interaction function up and down.
Remark 2. Before continuing with our discussion of the behavior of the phase models, we briefly discuss the issues that arise from coupling different oscillators together (e.g., a Class I with a Class II neuron, as in Fig. 0.4(e,f)). Our results for phase models are strictly valid when the uncoupled systems are identical. However, in coupling Class I and Class II neurons, the uncoupled oscillators are different, and so the limit cycles are not the same functions. Thus the equations presented for the interaction functions (0.14) are not correct. We can still apply the averaging theorem as long as we adjust parameters of the two distinct systems so that the uncoupled frequencies are identical. We can then use the same ideas to compute the interaction functions. Let $X_0^{(i)}(t)$ and $Z^{(i)}(t)$, $i = 1, 2$, be the limit cycles and iPRCs of the two uncoupled systems. By assumption, they are both $T$-periodic. Then:
$$H_i(\phi) = \frac{1}{T} \int_0^T Z^{(i)}(t) \cdot G_i\big(X_0^{(i)}(t), X_0^{(j)}(t + \phi)\big)\, dt. \tag{0.17}$$
With these changes for the heterogeneous oscillators, we can now proceed.
There are many advantages to the result in Eq. (0.13). The ODE is autonomous and scalar, so we can apply standard stability analysis on the phase line. Stable (unstable) fixed points on the phase line correspond precisely to stable (unstable) phase-locked solutions. In particular, a fixed point at $\phi = 0$ corresponds to synchrony, and a fixed point at $\phi = T/2$ corresponds to anti-synchrony.
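The phase-line analysis itself is mechanical and easy to automate. The sketch below (function and test case are our own) locates sign changes of a sampled right-hand side and classifies each fixed point by the direction of the crossing:

```python
import numpy as np

def locked_states(q, dphi):
    """Fixed points of d(phi)/dt = q(phi) sampled on a uniform periodic
    grid with spacing dphi.  Returns (location, 'stable'|'unstable');
    a fixed point is stable when q crosses zero from + to -."""
    n, states = len(q), []
    for k in range(n):
        a, b = q[k], q[(k + 1) % n]
        if a == 0.0:
            states.append((k * dphi, "stable" if b < 0 else "unstable"))
        elif a * b < 0:
            root = (k + a / (a - b)) * dphi      # linear interpolation
            states.append((root, "stable" if a > 0 else "unstable"))
    return states

# toy right-hand side -sin(phi): synchrony (phi = 0) is stable,
# anti-phase (phi = pi) is unstable
n = 200
dphi = 2 * np.pi / n
q = -np.sin(dphi * np.arange(n))
states = locked_states(q, dphi)
```

Feeding this routine the sampled right-hand side of Eq. (0.13) reproduces the filled and open circles drawn on the phase lines of Fig. 0.4.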
We show various examples of the right-hand side of Eq. (0.13) in Fig. 0.4 using synaptically coupled Morris-Lecar models (relevant parameters are listed in Table .7.1). The phase line is shown on the horizontal axis of each subfigure. On the phase line, a filled black circle (open circle) corresponds to an asymptotically stable (unstable) phase-locked solution. Fig. 0.4(a) shows two weakly coupled Class I neurons with reciprocal excitatory coupling. In this case, the phase model predicts that the coupled oscillators will asymptotically converge to an anti-phase rhythm. If there is more than one stable solution, as in Fig. 0.4(b), then the asymptotic phase difference depends on the initial relative phase shift of the oscillators. Initializing with a sufficiently small phase difference results in asymptotic synchrony, while initializing with a larger phase difference close to half a period results in asymptotic anti-synchrony. This subfigure corresponds to two weakly coupled Class I neurons with reciprocal inhibitory coupling.
The remainder of Fig. 0.4 considers Class II excitatory to Class II excitatory coupling (Fig. 0.4(c)), Class II inhibitory to Class II inhibitory coupling (Fig. 0.4(d)), Class I inhibitory to Class II excitatory coupling (Fig. 0.4(e)), Class I excitatory to Class II inhibitory coupling (Fig. 0.4(f)), Class I excitatory to Class II excitatory coupling (Fig. 0.4(g)), and Class I inhibitory to Class II inhibitory coupling (Fig. 0.4(h)). As in the preceding discussion, determining asymptotic stability is a straightforward matter of finding stable fixed points on the phase line.
In Fig. 0.4(e), there are no fixed points. Such a case corresponds to phase drift: the oscillators never phase-lock. Reciprocal coupling of Class I with Class II neurons is delicate because we must choose the frequencies to be sufficiently similar (see Remark 2, above). We find that choosing the input currents appropriately for the Class I and Class II neurons both preserves the salient features of the respective PRC types and results in good agreement in oscillator frequency. Why are there no fixed points in case (e), and why are the fixed points nearly degenerate in (f)? We can understand this as follows. From Remark 2, we see that $H_1$ is found by convolving a Type I PRC with an excitatory synapse. This results in an $H_1$ that is positive everywhere. On the other hand, $H_2$ is found by convolving a Type II PRC with an inhibitory synapse, which results in a mixture of positive and negative regions. The large positive $H_1$, when subtracted from $H_2(-\phi)$ to get the equation for the phase difference, yields a right-hand side that is negative everywhere, as seen in the figure. More intuitively, the excitatory synapse onto the Type I neuron will constantly advance the phase of neuron 1, while the inhibitory synapse will cause a mix of advance and delay because the PRC is Type II. Thus there will be a net advance of oscillator 1 over oscillator 2, and we will see drift ($d\phi/dt < 0$ everywhere). Similar considerations hold for panel (f). From Remark 1 (above), we recall that introducing small frequency differences shifts the interaction functions up and down. Thus, we could obtain a phase-locked solution in, e.g., panel (f) by adding a small depolarizing current to oscillator 2, thus allowing it to speed up.
We list additional observations that follow from Eq. (0.13).
If the interaction terms are delta functions (used for arbitrarily fast synapses), the interaction function is directly proportional to the PRC.
If the reciprocal coupling is the same and the uncoupled oscillators are identical, then $G_1 = G_2$ and $H_1 = H_2 \equiv H$, and the right-hand side of Eq. (0.13) is proportional to the odd part of $H$, denoted $H_{\mathrm{odd}}$:
$$\frac{d\phi}{dt} = \varepsilon \left[ H(-\phi) - H(\phi) \right] = -2\varepsilon H_{\mathrm{odd}}(\phi).$$
In addition to predicting asymptotic stability, Eq. (0.13) also provides convergence rates of solutions, and therefore synchronization rates of the full dynamics.
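A quick check of this claim: linearizing Eq. (0.13) about a stable fixed point $\phi^*$ predicts exponential convergence at rate $\varepsilon|q'(\phi^*)|$, where $\varepsilon q(\phi)$ denotes the right-hand side. For the toy choice $q(\phi) = -\sin\phi$ (ours, for illustration):

```python
import numpy as np

# For q(phi) = -sin(phi), the stable fixed point is phi* = 0 and the
# predicted convergence rate is eps * |q'(0)| = eps.
eps, dt = 0.1, 0.01
phi = 0.1                            # small initial phase difference
for _ in range(1000):                # forward Euler to t = 10
    phi += dt * eps * (-np.sin(phi))

predicted = 0.1 * np.exp(-eps * 10.0)    # linearized prediction
```

The simulated phase difference matches the linearized exponential prediction closely, which is the same comparison made against the full conductance-based model in Fig. 0.5.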
We demonstrate the accuracy of the convergence rates in Fig. 0.5. The dashed and solid curves are computed using Eq. (0.14), with parameters chosen to represent Class I excitatory to Class I excitatory coupling and Class II excitatory to Class II excitatory coupling, respectively. The diamonds and squares represent the numerical phase difference in the full model. We find the phase difference of the full model by computing spike timing differences following the numerical integration of Eqs. (.18)–(.21). We choose a coupling strength $\varepsilon$ that is sufficiently small for accurate predictions.
0.5 Summary of Reciprocal Coupling
The results are summarized in Table 0.1, where the headers denote the excitatory or inhibitory effect of a given neuron. "Class I excitatory" is shorthand for an excitatory Class I neuron, and "Class II excitatory" is shorthand for an excitatory Class II neuron. The synaptic driving potentials for excitatory and inhibitory synaptic coupling are given in Sect. .7.1.
[Table 0.1: rows and columns correspond to Class I and Class II neurons with excitatory or inhibitory synapses; entries give the phase-locked solutions as a proportion of the period.]
Numbers in the table denote phase-locked solutions as a proportion of the respective period. As mentioned earlier, parameter values for Class I and Class II neurons and for excitatory and inhibitory synapses are chosen according to Table .7.1. Asymptotic convergence to 0 corresponds to synchrony, while convergence to 0.5 corresponds to anti-phase.
The diagonal entries of the table as well as the four combinations of Class I to Class II excitatory/inhibitory coupling have been shown in Fig. 0.4. The remaining table entries consider Class I excitatory to Class I inhibitory coupling (phase drift), and Class II excitatory to Class II inhibitory coupling (phase locked at 0.862).
Reducing tonically firing neurons to a phase model allows us to formulate a mathematically precise phase description of neural synchronization. Using this phase description, we quantified perturbations of phase using phase response curves. We also demonstrated using a qualitative geometric argument how a perturbation can push solutions to different isochrons, resulting in a phase shift. The phase description of a neural oscillator is useful because just one scalar variable represents the dynamics of what are generally high dimensional systems involving many conductances.
The knowledge of the iPRC and the coupling term is useful in predicting the synchronization outcome. By convolving the iPRC with the coupling term(s), we derive an autonomous scalar differential equation for the phase difference dynamics, which faithfully reproduces synchronization in the full numerical integration. Moreover, because the phase difference dynamics is given by a scalar, autonomous differential equation, an analysis on the phase line provides all the necessary insight into asymptotic phase-locking. We use a phase-line analysis to predict synchronization of various reciprocally coupled oscillators. Our synapses are slow, but the observations happen to agree with what is known in the literature for fast synapses, in particular that Class I excitatory to Class I excitatory coupling tends not to synchronize at zero lag, while Class II excitatory to Class II excitatory coupling tends to synchronize.
In addition to predicting asymptotic phase-locked states, knowledge of the iPRC and coupling term also leads to predictions of synchronization rates, as shown in Fig. 0.5. This figure also demonstrates the flexibility of weak coupling theory. Despite the nonlinear nature of synaptic coupling, sufficiently weak interactions lead to accurate predictions of both the rates and the stability of phase-locking.
Weak coupling theory naturally applies to networks of coupled oscillators with virtually no modifications, and relevant applications arise in biology, chemistry, and physics. Examples include the swimming locomotion of dogfish, lamprey, and lobster, the communication of fireflies, reaction-diffusion chemical systems, coupled reactor systems, and Josephson junctions.
.7 Morris-Lecar Model
The Morris-Lecar model is given by
$$C\frac{dV}{dt} = I - g_L(V - V_L) - g_K\, w\, (V - V_K) - g_{Ca}\, m_\infty(V)(V - V_{Ca}), \tag{.18}$$
$$\frac{dw}{dt} = \phi\, \frac{w_\infty(V) - w}{\tau_w(V)}, \tag{.19}$$
where the auxiliary functions
$$m_\infty(V) = \tfrac{1}{2}\left[1 + \tanh\!\big((V - v_1)/v_2\big)\right], \qquad w_\infty(V) = \tfrac{1}{2}\left[1 + \tanh\!\big((V - v_3)/v_4\big)\right], \qquad \tau_w(V) = 1/\cosh\!\big((V - v_3)/(2 v_4)\big)$$
are defined to keep the notation compact. The model is not analytically tractable due to multiple nonlinearities, so we proceed numerically.
.7.1 Synaptic Dynamics
We use the following coupling functions in our numerical examples of Eq. (0.13). Let $i, j \in \{1, 2\}$ with $j \neq i$. The coupling terms are defined as
$$G_i(X_i, X_j) = \big( s_j\, (E_{\mathrm{syn}} - V_i)/C,\; 0 \big)^{\!\top}, \tag{.20}$$
where the dynamics of the synaptic variable $s_j$ satisfy
$$\frac{ds_j}{dt} = \alpha\, K(V_j)(1 - s_j) - \beta s_j, \qquad K(V) = \frac{1}{1 + \exp\!\big(-(V - V_T)/\sigma\big)}. \tag{.21}$$
These dynamics are often used to model synaptic interactions. Qualitatively, the rate of activation is determined by $\alpha$ and the voltage-dependent degree of activation $K(V)$. If the voltage is large, say during an action potential, and the synapse is inactive, then $K(V)$ and $(1 - s)$ are maximized, resulting in an increase in synaptic activity. Eventually, the synapse is maximally active and the voltage has returned to its resting state, so $K(V)$ is close to zero and the synaptic activity decays at a rate $\beta$.
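These dynamics are easy to explore numerically. The sketch below drives the gating equation with a square voltage pulse standing in for an action potential (all parameter values here are ours, chosen only for illustration):

```python
import numpy as np

# Drive ds/dt = alpha*K(V)*(1 - s) - beta*s with a square voltage pulse.
# alpha, beta, Vt, sigma, and the pulse shape are illustrative choices.
def k_act(V, Vt=2.0, sigma=5.0):
    return 1.0 / (1.0 + np.exp(-(V - Vt) / sigma))

alpha, beta, dt = 1.0, 0.1, 0.01
s, trace = 0.0, []
for i in range(int(200.0 / dt)):              # 200 ms of simulated time
    t = i * dt
    V = 30.0 if 50.0 <= t < 55.0 else -60.0   # 5 ms depolarizing "spike"
    s += dt * (alpha * k_act(V) * (1.0 - s) - beta * s)
    trace.append(s)
trace = np.array(trace)
```

The gating variable stays near zero until the pulse, rises quickly during it (both $K(V)$ and $1 - s$ are large), and then decays at roughly the rate $\beta$ afterwards, matching the qualitative description above.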
We choose the synaptic driving potential separately for excitatory and inhibitory coupling.
- Eric Brown, Jeff Moehlis, and Philip Holmes. On the Phase Reduction and Response Dynamics of Neural Oscillator Populations. Neural Computation, 16(4):673–715, 2004.
- Bard Ermentrout. Type I Membranes, Phase Resetting Curves, and Synchrony. Neural Computation, 8(5):979–1001, 1996.
- G Bard Ermentrout. Stable periodic solutions to discrete and continuum arrays of weakly coupled nonlinear oscillators. SIAM Journal on Applied Mathematics, 52(6):1665–1687, 1992.
- GB Ermentrout and DH Terman. Mathematical Foundations of Neuroscience, volume 35 of Interdisciplinary Applied Mathematics. Springer New York, New York, NY, 2010.
- J Fell and N Axmacher. The role of phase synchronization in memory processes. Nature Reviews Neuroscience, 12(2):105–118, 2011.
- P Fries. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10):474–480, 2005.
- John Guckenheimer. Isochrons and phaseless sets. Journal of Mathematical Biology, 1(3):259–273, 1975.
- John Guckenheimer and Philip Holmes. Nonlinear oscillations, dynamical systems, and bifurcations of vector fields, volume 42. Springer Science & Business Media, 1983.
- D Hansel, G Mato, and C Meunier. Synchrony in excitatory neural networks. Neural computation, 7(2):307–337, 1995.
- AL Hodgkin. The local electric changes associated with repetitive action in a non-medullated axon. The Journal of physiology, 107(2):165–181, 1948.
- Frank C Hoppensteadt and Eugene M Izhikevich. Weakly connected neural networks, volume 126. Springer Science & Business Media, 2012.
- EM Izhikevich. Dynamical systems in neuroscience: The Geometry of Excitability and Bursting. MIT press, 2007.
- N Kopell and GB Ermentrout. Symmetry and phaselocking in chains of weakly coupled oscillators. Communications on Pure and Applied Mathematics, 39(5):623–660, 1986.
- Yoshiki Kuramoto. Chemical oscillations, waves, and turbulence, volume 19. Springer Science & Business Media, 2012.
- IG Malkin. Methods of Poincare and Liapunov in theory of non-linear oscillations. Gostexizdat, 1949.
- IG Malkin. Some problems in nonlinear oscillation theory. Gostexizdat, Moscow, 541, 1956.
- C Morris and H Lecar. Voltage oscillations in the barnacle giant muscle fiber. Biophysical journal, 35(1):193, 1981.
- J. Neu. Coupled Chemical Oscillators. SIAM Journal on Applied Mathematics, 37(2):307–315, 1979.
- PL Nunez and R Srinivasan. Electric Fields of the Brain: The neurophysics of EEG. OUP USA, 2nd edition, 2006.
- John Rinzel and G Bard Ermentrout. Analysis of neural excitability and oscillations. Methods in neuronal modeling, 2:251–292, 1998.
- A Schnitzler and J Gross. Normal and pathological oscillatory communication in the brain. Nature Reviews Neuroscience, 6(4):285–296, 2005.
- MA Schwemmer and TJ Lewis. The theory of weakly coupled oscillators. In Phase Response Curves in Neuroscience: Theory, Experiment, and Analysis. Springer, 2012.
- Benjamin Torben-Nielsen, Marylka Uusisaari, and Klaus M Stiefel. A comparison of methods to determine neuronal phase-response curves. Frontiers in neuroinformatics, 4, 2010.
- S Watanabe and SH Strogatz. Constants of motion for superconducting josephson arrays. Physica D: Nonlinear Phenomena, 74(3):197–253, 1994.
- AT Winfree. The Geometry of Biological Time. Interdisciplinary Applied Mathematics. Springer New York, 2001.