How effective delays shape oscillatory dynamics in neuronal networks
Synaptic, dendritic and single-cell kinetics generate significant time delays that shape the dynamics of large networks of spiking neurons. Previous work has shown that such effective delays can be taken into account with a rate model through the addition of an explicit, fixed delay (1); (2). Here we extend this work to account for arbitrary symmetric patterns of synaptic connectivity and generic nonlinear transfer functions. Specifically, we conduct a weakly nonlinear analysis of the dynamical states arising via primary instabilities of the asynchronous state. In this way we determine analytically how the nature and stability of these states depend on the choice of transfer function and connectivity. We arrive at two general observations of physiological relevance that could not be explained in previous works: (1) fast oscillations are always supercritical for realistic transfer functions, and (2) traveling waves are preferred over standing waves given plausible patterns of local connectivity. Finally, we demonstrate that these results are in good agreement with numerical simulations of a network of Hodgkin-Huxley neurons.
Keywords: delay, neuronal networks, neural field, amplitude equations, Wilson-Cowan networks, rate models, oscillations
PACS: 87.19.lj, 87.19.lp, 05.45.-a, 84.35.+i, 89.75.-k
When studying the collective dynamics of cortical neurons computationally, networks of large numbers of spiking neurons have naturally been the benchmark model. Network models incorporate the most fundamental physiological properties of neurons: sub-threshold voltage dynamics, spiking (via spike generation dynamics or a fixed threshold), and discontinuous synaptic interactions. For this reason, networks of spiking neurons are considered to be biologically realistic. However, with few exceptions, e.g. (5); (3); (4), network models of spiking neurons are not amenable to analytical work and thus constitute above all a computational tool. Instead, researchers use reduced or simplified models which describe some measure of the mean activity in a population of cells, oftentimes taken as the firing rate (for reviews, see (6); (7)). Firing-rate models are simple, phenomenological models of neuronal activity, generally in the form of continuous, first-order ordinary differential equations (8); (9). Such firing-rate models can be analyzed using standard techniques for differential equations, allowing one to understand the qualitative dependence of the dynamics on parameters. Nonetheless, firing-rate models do not represent, in general, proper mathematical reductions of the original network dynamics; rather, they are heuristic (but see (10)). As such, there is in general no clear relationship between the parameters in the rate model and those in the full network of spiking neurons, although for at least some specific cases quasi-analytical approaches may be of value (11). It therefore behooves the researcher to study rate models in conjunction with network simulations in order to ensure there is good qualitative agreement between the two.
Luckily, rate models have proven remarkably accurate in capturing the main types of qualitative dynamical states seen in networks of large numbers of asynchronously spiking neurons. For example, it is well known that in such networks the different temporal dynamics of excitatory and inhibitory neurons can lead to oscillations. These oscillations can be well captured using rate models (8). When the pattern of synaptic connectivity depends on the distance between neurons, these differences in the temporal dynamics can also lead to the emergence of waves (9); (12); (13); (14). This is certainly a relevant case for local circuits in cortical tissue, where the likelihood of finding a connection between any two neurons decreases as a function of the distance between them, e.g. (15).
When considering the spatial dependence of the patterns of synaptic connectivity between neurons, one must take into account the presence of time delays due to the finite propagation velocity of action potentials along axons. Such delays depend linearly on the distance between any two neurons. This has been the topic of much theoretical study using rate models with a space-dependent time delay e.g. (16); (17); (21); (14); (13); (18); (20); (23); (19); (22); (24). The presence of propagation delays can cause an oscillatory instability of the unpatterned state leading to homogeneous oscillations and waves (17); (20). The weakly nonlinear dynamics of waves in spatially extended rate models, i.e. describing large-scale (on the order of centimeters) activity, is described by the coupled mean-field Ginzburg-Landau equations (22), and thus exhibits the phenomenology of small amplitude waves familiar from other pattern forming systems (25). Also, it is important to note that discrete fixed delays have been used to model the time delayed interaction between discrete neuronal regions, as well as to model neuronal feedback, e.g. (26); (29); (27); (28); (30).
Localized solutions of integro-differential equations describing neuronal activity, including fronts and pulses, are also affected by distance-dependent axonal delays (13); (14); (17); (18); (19). Specifically, the velocity of propagation of the localized solution is proportional to the conduction velocity along the axon for small conduction velocities, while for large conduction velocities it is essentially constant. This reflects the fact that the propagation of activity in neuronal tissue is driven by local integration in which synaptic and membrane time constants provide the bottleneck. Also, allowing for different conduction velocities for separate excitatory and inhibitory populations can lead to bifurcations of localized bump states to breathers and traveling pulses (23).
Although the presence of time delays in the nervous system is most often associated with axonal propagation, significant time delays are also produced by the synaptic kinetics and single-cell dynamics (see the next section for a detailed discussion about the origin of such effective time delays in networks of spiking neurons). As a relevant example for the present work, it was shown in (1); (2) that the addition of an explicit, non-space-dependent delay in a rate equation was sufficient to explain the emergence of fast oscillations prevalent in networks of spiking neurons with strong inhibition and in the absence of any explicit delays.
Specifically, in (1); (2) the authors studied a rate model with a fixed delay on a ring geometry with two simplifying assumptions. First they assumed that the strength of connection between neurons could be expressed as a constant plus the cosine of the distance between the neurons. Secondly, they assumed a linear rectified form for the transfer function which relates inputs to outputs. These assumptions allowed them to construct a detailed phase diagram of dynamical states, to a large degree analytically. In addition to the stationary bump state (SB) which had been studied previously (31); (32), the presence of a delay led to two new states arising from primary instabilities of the stationary uniform state (SU): an oscillatory uniform state (OU) and a traveling wave state (TW). Secondary bifurcations of these three states (SB,OU,TW) led to yet more complex states including standing waves (SW) and oscillatory bump states (OB). Several regions of bistability between primary and secondary states were found, including OU-TW, OU-SB and OU-OB. They subsequently confirmed these results through simulations of networks of Hodgkin-Huxley neurons. Despite the good agreement between the rate equation and network simulations, several important issues remain unresolved:
The rate equation predicted that the primary instability of the SU state to waves should be to traveling waves, while in the network simulations standing waves were robustly observed.
The linear-threshold transfer function, albeit amenable to analysis, nonetheless leads to degenerate behavior at a bifurcation point. Specifically, any perturbations with a positive linear growth rate will continue to grow until the lower threshold of the transfer function is reached. This means that the amplitude of new solution branches at a bifurcation is always finite, although the solution itself may not be subcritical. In a practical sense then, this means that it is not possible to assess whether a particular solution, for example oscillations or bumps, will emerge continuously from the SU state as a parameter is changed, or if it will appear at finite amplitude and therefore be bistable with the SU state over some range.
The previous work only considered a simplified cosine connectivity. More realistic patterns of synaptic connectivity, such as a Gaussian dependence of connection strength on distance, might lead to different dynamical regimes. The effect of a general connectivity kernel on the dynamics of both the rate equation and the network of spiking neurons with fixed time delays remains to be explored.
In order to address these issues, and provide a more complete analysis of the role of fixed delays in neuronal tissue, we here study a rate equation with delay without imposing any restrictions on the form of the transfer function beyond smoothness or on the shape of the connectivity kernel beyond being symmetric. Our approach is similar to that of Curtu and Ermentrout in (33), who extended a simplified rate model with adaptation for orientation selectivity (32) to include a nonlinear transfer function and general connectivity kernel. Here we do the same for a rate model with a fixed time delay.
Thus in what follows we will study a rate equation with fixed delay and spatially modulated connectivity. In conjunction with this analysis we will conduct numerical simulations of a network of large numbers of spiking neurons in order to assess the qualitative agreement between the rate model and the network for the delay-driven instabilities, which are the primary focus of this work.
This article is organized as follows: In section 2 we provide an overview of the origin of the effective delay. We do this by looking at the dynamics of synaptically coupled conductance-based neurons. This will motivate the presence of an explicit fixed delay in a rate-model description of the dynamics in recurrently coupled networks of neurons. In section 3 we formulate the rate model and conduct a linear stability analysis of the SU state. In section 4 we conduct a weakly nonlinear analysis for the four possible primary instabilities of the SU state (asynchronous unpatterned state in a network model), thereby deriving amplitude equations for the steady, Turing (bumps), Hopf (global oscillations), and Turing-Hopf (waves) bifurcations. We will focus on the delay-driven instabilities, i.e. Hopf and Turing-Hopf. Finally, in section 5 we will study the interactions of pairs of solutions: bumps and global oscillations, and global oscillations and waves, respectively.
2 The origin of effective time delays
This section is intended to provide an intuitive illustration of the origin of an effective fixed delay in networks of spiking neurons. A detailed, analytical study of this phenomenon can be found in (3); (34); (35).
Fig.1 illustrates the origin of the effective delay in networks of model neurons. In this case we look at a single neuron pair: one presynaptic and one postsynaptic. The single-neuron dynamics are described in detail in A. The top panel of Fig.1 shows the membrane potential of an excitatory neuron subjected to a current injection of A/cm which causes it to fire action potentials. Numerically, an action potential is detected whenever the membrane voltage exceeds mV from below. When this occurs, an excitatory postsynaptic current (EPSC) is generated in the post-synaptic neuron, as seen in the second panel. This current is generated by the activation of an excitatory conductance which has the functional form of a difference of exponentials with rise time ms and decay time ms. The different colored curves correspond to different conductance strengths: black , red , blue and orange mSms/cm. The resulting excitatory postsynaptic potential (EPSP) in millivolts is shown in the third panel. At this point it is already clear that the postsynaptic response, although initiated here simultaneously with the presynaptic action potential, will nonetheless take a finite amount of time to bring the postsynaptic cell to threshold, thereby altering its firing rate. This is shown in the bottom panel. In this case the weakest input (black) was insufficient to cause the neuron to spike, while the other three inputs were all strong enough to cause an action potential. The latency until action potential firing is a function of the synaptic strength, with the latency going to zero as the synaptic strength goes to infinity. The very long latency for mSms/cm (red curve) is due in part to the intrinsic action potential generating mechanism. In fact, an input which brings the neuron sufficiently close to the bifurcation to spiking can generate arbitrarily long latencies.
Fig.2 illustrates how this effective delay is proportional to both the rise and decay time of the EPSC. In the top panel the decay time is fixed at ms and the rise time is varied; in the bottom panel the rise time is fixed at ms and the decay time is varied. From these figures it is clear that the effective delay is proportional to both the rise and decay times. Simulations with an EPSC modeled by a jump followed by a simple exponential decay reveal that the effective delay is proportional to the decay time in this case (not shown).
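This proportionality can also be checked analytically: for a difference-of-exponentials EPSC the time-to-peak has a closed form that grows with both time constants. A minimal sketch, with illustrative time constants rather than the values used in the figures:

```python
import numpy as np

def epsc(t, tau_r, tau_d):
    """Difference-of-exponentials EPSC (unnormalized)."""
    return np.exp(-t / tau_d) - np.exp(-t / tau_r)

def time_to_peak(tau_r, tau_d):
    """Closed-form peak time of the EPSC, where d(epsc)/dt = 0."""
    return tau_r * tau_d / (tau_d - tau_r) * np.log(tau_d / tau_r)

# the effective delay grows with both the rise and the decay time
peaks = {(tr, td): time_to_peak(tr, td)
         for tr, td in [(0.5, 3.0), (1.0, 3.0), (1.0, 6.0)]}
```

The same trend holds for the latency to the postsynaptic spike, which additionally depends on the spike-generation dynamics, as discussed above.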
It is instructive to note that the effective delay, due to the time course of the synaptic kinetics in the model neuron, can be captured by modeling the EPSC as a Dirac delta function with a fixed delay. This is shown in Fig. 3, where the curves shown in black are the same as in Fig. 1 for mSms/cm, while the red EPSC is a Dirac delta function which arrives with a fixed delay of ms. Note that the decay of the EPSP and the postsynaptic spike time are both well captured.
The fact that a jump in voltage with a fixed delay can capture the effect of having continuous synaptic kinetics was described already in (3). In that work, the authors studied a network of recurrently coupled integrate-and-fire neurons with inhibitory synapses, the time course of which was modeled as a jump in the voltage at a fixed delay. They showed that the fixed delay led to the emergence of fast oscillations, the period of which was approximately several times the delay. The advantage of using EPSCs modeled as Dirac delta functions is that the input current is delta-correlated in time, allowing one to solve the associated Fokker-Planck equation for the distribution of the membrane voltages in a straightforward way.
Subsequent work studied the emergence of fast oscillations in networks of integrate-and-fire neurons with continuous synaptic kinetics (34). There the authors determined the frequency of oscillations analytically and found that it is proportional to both the rise and decay times of the synaptic response. An extension of that work showed that for networks of Hodgkin-Huxley conductance-based neurons, the frequency of oscillations also depends on the single-cell dynamics, specifically the membrane time constant and action potential generation mechanism (35). This is consistent with the effect of the synaptic response and single-cell dynamics on the response latency that we have illustrated above.
Thus the same mechanism which generates an effective delay in the response of a postsynaptic neuron to a single excitatory presynaptic input can also generate coherent oscillations in a network of neurons coupled through inhibitory synapses. This can be seen in Fig. 4, which shows the results of simulations of a network of recurrently coupled inhibitory cells. The single-cell model is the same conductance-based model used in Figs. 1-3, see A for details. Synaptic connections are made between neurons with a probability of , leading to a sparse, random connectivity with each cell receiving an average of connections. Synapses are modeled as the difference of exponentials with a rise time and a decay time of ms and ms respectively and mSms/cm. All cells receive uncorrelated Poisson inputs with a rate of Hz and mSms/cm, and there is no delay in the interactions. Fig. 4A shows a raster plot of the network activity in the top panel. The activity is noisy, although periods of network synchrony are visible. The middle panel shows the firing rate averaged over all neurons in time bins of ms (black) and smoothed by averaging with a sliding window of ms (red). The large fluctuations in the firing rate indicate network synchrony, while the averaged trace shows clear periodic oscillations. This is even more evident in the bottom panel, which shows the subthreshold input current averaged over all cells. One can clearly see the ongoing oscillation, the amplitude of which undergoes slow fluctuations due to the noisy dynamics. Fig. 4B shows the smoothed firing rate (top) and average input current (bottom) from the same simulation, but on a shorter time scale. Note that the sign of the input current has been inverted so that downward deflections indicate increasing positive currents. Here it is clear that the input current is a delayed copy of the firing rate, with a delay on the order of ms which matches the time scale of the synaptic kinetics (ms, ms).
Therefore, Fig. 4 provides a clear prescription for developing a rate model description of fast oscillations in networks in the asynchronous regime. The input a neuron receives is not simply a nonlinear function of the instantaneous firing rate, rather it is a function of the delayed firing rate. Thus one should introduce a fixed delay in the rate model description. This was the underlying assumption behind the work in (1).
Before presenting the model we would like to emphasize that fixed delays, which are primarily due to synaptic and dendritic integration, and conduction delays, due to the propagation of action potentials along the axon, are both present in real neuronal systems. Importantly, this means that the delay in neuronal interactions at zero distance is not zero. In fact, fixed delays are always observed in paired intracellular recordings in cortical slices. The latency from the start of the fast rising phase of the action potential to the start of the post-synaptic current (or potential) has been measured for pairs of pyramidal cells in layers 3 to 6 of rat cortex and is on the order of milliseconds, see (37) for a recent review. Recordings from cat cortex, and between pyramidal cells and other cells including spiny cells and interneurons in the rat cortex, also reveal fixed delays which are rarely less than a millisecond. These delays are seen when neurons are spatially adjacent, indicating that axonal propagation is not an important contributing factor. On the other hand, the speed of propagation of action potentials along unmyelinated axons in mammals is on the order of m/s, which means a delay of 0.1-10 ms for neurons separated by 1 millimeter (38); (39). Thus fixed delays and conduction delays are of similar magnitude within a local patch of cortex and both would be expected to shape the dynamics of non-steady activity, i.e. neither is negligible. Here we have decided to focus on fixed delays, as in previous work (1); (2), due both to their physiological relevance and prevalence in networks of spiking neurons.
3 The rate model with fixed time delay
An effective delay roughly proportional to the time scale of the post-synaptic currents is always present in networks of spiking neurons, as we have illustrated in the previous section and as has been shown extensively elsewhere, e.g. (34); (35). In particular, this is true for networks in the asynchronous regime, for which a rate-equation description is, in general, appropriate. Given this, we consider here a rate model with fixed delay. Specifically, we study a heuristic equation describing the activity of a small patch of neural tissue consisting of two recurrently coupled populations of excitatory and inhibitory neurons. Our formulation is equivalent to the Wilson-Cowan equations without refractory period (8), and with spatially dependent synaptic connectivity, which was studied originally in (40). Additionally, we consider a fixed delay in the neuronal interactions. Given these assumptions, the full Wilson-Cowan equations are
In the original formulation (8), and represent the average number of active cells in the excitatory and inhibitory populations respectively, in this case at a position and at a time . The time constant () is roughly the time it takes for an excitatory (inhibitory) cell receiving “at least threshold excitation” (8) to generate a spike. This can reasonably be taken as the membrane time constant which is generally on the order of 10-20 ms. The functions are usually taken to be sigmoidal. Specifically, if all neurons in the population receive equal excitatory drive, and there is heterogeneity in some parameter across neurons, e.g. the threshold to spiking, which obeys a unimodal distribution, then the fraction of active neurons is just the integral over the distribution, up to the given level of excitation. The integral of a unimodal distribution is sigmoidal. In Eqs.(1a-1b), the functions represent the strength of synaptic connection from a neuron in population to a neuron in population separated by a distance . Here the neurons are arranged in one dimension on a domain . Input from excitatory (inhibitory) cells is furthermore delayed by a fixed amount (), which, as we have discussed in the introduction, is on the order of one millisecond. Finally, the excitatory and inhibitory populations are subject to an external drive of strength and respectively.
A general analysis of Eqs.(1a-1b) would be technically arduous, although it is a natural extension of the work presented here. Rather, we choose to study the dynamics of this system under the simplifying assumption that the excitatory and inhibitory neurons follow the same dynamics, i.e. , , , , , . If this is the case, then and the variable follows the dynamics given by
where we have chosen the domain to be a ring of normalized length . Furthermore, we have re-scaled time by the time constant . The normalized delay is therefore , which is the ratio of the effective delay in neuronal interactions to the integration time constant and should be much less than one in general. The synaptic connectivity expressed in terms of the excitatory and inhibitory contributions is , and thus represents an effective mixed coupling which may have both positive and negative regions.
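For orientation, the delayed rate dynamics on the ring can be integrated directly. The following sketch assumes a sigmoidal transfer function and a two-mode kernel J(x) = J0 + 2 J1 cos(2πx); both choices, and all parameter values, are illustrative:

```python
import numpy as np

def simulate_ring(J0=-10.0, J1=1.0, I0=2.0, D=0.1, T=20.0, dt=0.001, N=128):
    """Euler integration of dr/dt = -r + phi(I0 + (J * r)(x, t - D)) on a ring.

    Time is in units of the membrane time constant, so D is the normalized delay.
    The ring convolution with J(x) = J0 + 2*J1*cos(2*pi*x) is done spectrally.
    """
    phi = lambda u: 1.0 / (1.0 + np.exp(-u))     # illustrative sigmoid
    x = np.arange(N) / N
    J = J0 + 2.0 * J1 * np.cos(2.0 * np.pi * x)
    Jhat = np.fft.fft(J) / N                     # 1/N turns the sum into an integral
    n_delay = int(round(D / dt))
    # history buffer: constant initial condition plus a small perturbation
    rng = np.random.default_rng(0)
    buf = [0.1 + 1e-3 * rng.standard_normal(N) for _ in range(n_delay + 1)]
    for _ in range(int(T / dt)):
        r_del = buf[0]                           # r(x, t - D)
        inp = np.real(np.fft.ifft(Jhat * np.fft.fft(r_del)))
        r_new = buf[-1] + dt * (-buf[-1] + phi(I0 + inp))
        buf.pop(0)
        buf.append(r_new)
    return buf[-1]

r_final = simulate_ring()
```

Since the sigmoid is bounded, the activity remains confined between 0 and 1 for any coupling strength, which is convenient when scanning for the instabilities discussed below.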
In presenting Eq.(2) we have relied on the heuristic physiological motivation first put forth in (8). Nonetheless, as a phenomenological model, the terms and parameters in Eq.(2) may have alternative and equally plausible interpretations. Indeed, the variable is often thought of as the firing rate as opposed to the fraction of active cells, in which case the function can be thought of as the transfer function or fI curve of a cell.
Experimentally the function has been found to be well approximated by a power-law nonlinearity with a power greater than one (41); (36). Modeling studies show that the same nonlinearity applies to integrate-and-fire neurons and conductance based neurons driven by noisy inputs (36). Therefore it may be that such a choice of leads to better agreement of Eq.(2) with networks of spiking neurons and hence with actual neuronal activity. More fundamentally, we may ask if choosing as a sigmoid or a power law qualitatively alters the dynamical states arising in Eq.(2). This is precisely why we choose here not to impose restrictions on but rather conduct an analysis valid for any . How the choice of affects the generation of oscillations and waves is an issue we will return to in the corresponding sections of this paper.
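Since the analysis below depends on the transfer function only through its value and derivatives at the operating point, different choices are easy to compare numerically. A sketch with two illustrative nonlinearities (all parameters arbitrary):

```python
import numpy as np

# two commonly used transfer functions: a sigmoid, and a supralinear
# rectified power law phi(x) = [x]_+^p with p > 1
def sigmoid(x, beta=1.0, theta=0.0):
    return 1.0 / (1.0 + np.exp(-beta * (x - theta)))

def power_law(x, p=2.0):
    return np.maximum(x, 0.0) ** p

# the weakly nonlinear analysis uses phi and its derivatives at the fixed
# point; central finite differences suffice for any smooth phi
def derivs(phi, x0, h=1e-4):
    d1 = (phi(x0 + h) - phi(x0 - h)) / (2 * h)
    d2 = (phi(x0 + h) - 2 * phi(x0) + phi(x0 - h)) / h**2
    return d1, d2
```

For example, the sigmoid has vanishing second derivative at its inflection point, while the power law is convex everywhere on its rising branch; it is through such differences that the choice of transfer function enters the cubic coefficients derived below.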
3.1 Linear stability analysis
Stationary uniform solutions (SU) of Eq.(2) are given by
where is a constant non-zero rate, is the zeroth order spatial Fourier coefficient of the symmetric connectivity which can be expressed as
and is an integer. Depending on the form of , Eq.(3) may admit one or several solutions.
We study the linear stability of the SU state with the ansatz
where the slope is evaluated at the fixed point given by Eq.(3). The real and imaginary parts of the eigenvalue represent the linear growth rate and frequency of perturbations with spatial wavenumber respectively. At the bifurcation of a single mode, the growth rate will reach zero at exactly one point and be negative elsewhere. That is, for the critical mode . Given this, Eq.(6) yields the dispersion relation for the frequency of oscillation of the critical mode
From Eq.(7) it is clear that the wavelength of the critical mode depends crucially on the synaptic connectivity. In particular, the spatial Fourier coefficients of the connectivity kernel depend on the wavenumber , i.e. . Thus, the critical wavenumber is, in effect, selected by the choice of connectivity kernel. It is in this way that the nature of the instability depends on the synaptic connectivity at the linear level.
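Assuming the linearization yields the standard delayed-rate characteristic equation λ_k = −1 + φ′ J_k e^{−λ_k D} (a form consistent with the ansatz above, with time in units of τ), the rightmost eigenvalue of each spatial mode can be written with the Lambert W function, which makes mode selection easy to explore for any kernel. The spectrum below is illustrative:

```python
import numpy as np
from scipy.special import lambertw

def rightmost_eig(c, D, branches=range(-5, 6)):
    """Rightmost root of lambda = -1 + c*exp(-lambda*D).

    Substituting u = (lambda + 1)*D gives u*exp(u) = c*D*exp(D), so
    lambda = W_b(c*D*exp(D))/D - 1 on each branch b of the Lambert W.
    """
    z = c * D * np.exp(D)
    return max((lambertw(z, b) / D - 1.0 for b in branches),
               key=lambda lam: lam.real)

# illustrative mode spectrum c_k = phi' * J_k for a lateral-inhibition kernel
D = 0.1
c_k = {0: -5.0, 1: 1.2, 2: 0.1}
growth = {k: rightmost_eig(c, D).real for k, c in c_k.items()}
k_max = max(growth, key=growth.get)   # most unstable (or least stable) mode
```

Scanning `growth` over k for a given kernel reproduces the statement above: the critical wavenumber is selected by which Fourier coefficient J_k first pushes its rightmost eigenvalue across the imaginary axis.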
Depending on the values of and in Eq.(7) at the bifurcation from the SU state, four types of instabilities are possible:
Steady (, ): The instability leads to a global increase in activity.
Turing (, ): The instability leads to a stationary bump state (SB).
Hopf (, ): The instability leads to an oscillatory uniform state (OU).
Turing-Hopf (, ): The instability leads to waves (SW, TW).
For the non-oscillatory instabilities (i.e. ), Eq.(7) gives the critical value
while for the oscillatory ones Eq.(7) is equivalent to the system of two transcendental equations
Note that we have defined the critical values as and .
The small delay limit ()
It is possible to gain some intuition regarding the effect of fixed delays on the dynamics, by deriving asymptotic results in the limit of small delay. This limit is a relevant one physiologically, since fixed delays are on the order of a few milliseconds and the integration time constant is about an order of magnitude larger. Therefore throughout this work we will present asymptotic results, and compare them to the full analytical formulas as well as to numerical simulations.
In the limit , the asymptotic solutions of Eq.(9a) can be easily obtained graphically. Fig.5 shows two curves (black and grey) representing the right and left hand sides of Eq.(9a) respectively, where we defined . The intersections of these curves correspond to the roots of Eq.(9a). The plot shows three solutions, the trivial one (corresponding to the non-oscillatory instabilities), and an infinite number of solutions that clearly approach ( integer) in the small delay limit, since the slope of the straight line goes to infinity as . Substituting these solutions into Eq. (9b), we find that the first potentially unstable solution of the spatial Fourier mode is , that occurs at the critical value of the coupling
with a frequency
Fig. 6 shows the critical frequency and coupling as a function of the delay, up to a delay . The solutions obtained from the dispersion relation Eqs.(9a) and (9b) are given by solid lines, while the expressions obtained in the small delay limit are given by dotted lines. Thus the expressions in the small delay limit agree quite well with the full expressions even for .
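The graphical construction can also be reproduced numerically. The sketch below assumes the oscillatory branch of the dispersion relation reduces to ω = −tan(ωD) together with φ′ J_c cos(ωD) = 1, the form implied by a characteristic equation λ = −1 + φ′ J_k e^{−λD}; under this assumption the leading root gives ω → π/(2D) and φ′ J_c → −π/(2D) as D → 0:

```python
import numpy as np
from scipy.optimize import brentq

def hopf_point(D, phi_prime=1.0):
    """First oscillatory root of omega = -tan(omega*D), with Omega = omega*D
    in (pi/2, pi), and the critical coupling from phi' * J_c * cos(Omega) = 1."""
    Om = brentq(lambda Om: np.tan(Om) + Om / D, np.pi / 2 + 1e-9, np.pi - 1e-9)
    return Om / D, 1.0 / (phi_prime * np.cos(Om))

# as D -> 0 the root approaches omega = pi/(2D), with J_c -> -pi/(2D):
# the period of the emerging oscillation is roughly four times the delay
freqs = {D: hopf_point(D) for D in (0.2, 0.1, 0.05)}
```

Note that the critical coupling is negative, i.e. the instability requires effectively inhibitory-dominated connectivity, and that it diverges as the delay vanishes.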
3.2 An illustrative Phase Diagram
Throughout the analysis which follows we will illustrate our results with a phase diagram of dynamical states. Specifically, we will follow the analysis in (1); (2) in constructing a phase diagram of dynamical states as a function of and , the first two Fourier coefficients of the synaptic connectivity. We will set the higher order coefficients to zero for this particular phase diagram, although we will discuss the effect of additional modes in the text. Furthermore, unless otherwise noted, for simulations we choose a sigmoidal transfer function with and . As we vary the connectivity in the phase diagram, we also vary the constant input in order to maintain the same level of mean activity, i.e. we keep fixed. For the values of the parameters we have chosen here this results in . We also take unless noted otherwise.
The primary instability lines for the SU state can be seen in the phase diagram, Fig. 7. The region in space where the SU state is stable is shown in gray, while the primary instabilities, listed above, are shown as red lines. In Section 4 we will provide a detailed analysis of the bumps, global oscillations and waves (SB, OU and SW/TW) which arise due to the Turing, Hopf and Turing-Hopf instabilities respectively. The derivation of the amplitude equations is given in B, as well as a brief discussion of the steady, transcritical bifurcation which occurs for strong excitatory coupling and is not of primary interest for this study. Finally, in Section 5 we will analyze the codimension 2 bifurcations: Hopf and Turing-Hopf (OU and waves), and Turing and Hopf (SB and OU). This analysis will allow us to understand the dynamical states which appear near the upper and lower left hand corners of the gray shaded region in Fig. 7, i.e. the SW/OU and OB states.
4 Bifurcations of codimension 1
As we are interested in creating a phase diagram as a function of the connectivity, we will take changes in the connectivity as the bifurcation parameter. The small parameter is therefore defined by the expansion
The perturbative method we apply, which makes use of this small parameter, is called the multiple-scales method and is a standard approach for determining the weakly nonlinear behavior of pattern-forming instabilities (25). We choose the particular scaling of with the foreknowledge that if the amplitudes of the patterns of interest are scaled as , a solvability condition will arise at order . This solvability condition yields a dynamical equation governing the temporal evolution of the pattern (see Appendix B for details). Without loss of generality we will assume that an instability of a nonzero spatial wavenumber is for . We will furthermore co-expand the constant input so as to maintain a fixed value for the spatially homogeneous steady state solution
where the small parameter is defined by Eq.(12). Additionally we define the slow time
4.1 Turing Bifurcation
The emergence and nature of stationary bumps in rate equations have been extensively studied elsewhere, e.g. (40). We briefly describe this state here for completeness. The spatial Fourier mode of the connectivity is given by the critical value Eq. (8), while we assume that all other Fourier modes are sufficiently below their critical values to avoid additional instabilities. Without loss of generality we assume here.
We expand the parameters , and as in Eqs. (12,13,14), and define the slow time Eq. (15). The solution of Eq.(2) linearized about the SU state is a spatially periodic amplitude which we allow to vary slowly in time, i.e. . Carrying out a weakly nonlinear analysis to third order in leads to the amplitude equation
with the coefficients
The nature of the bifurcation (sub- or supercritical) clearly depends strongly on the sign and magnitude of the mean connectivity and the second spatial Fourier mode . Figure 8 shows a phase diagram of the bump state at the critical value of . The red lines indicate oscillatory and steady instability boundaries for the modes and . Clearly and over most of the region of allowable values, and the bump is therefore supercritical. There is only a narrow region of predominantly positive values (shaded region in Fig. 8) for which the cubic coefficient is positive. This indicates that the bifurcating solution branch is unstable. However, neuronal activity is bounded, which is captured in Eq.(2) by a saturating transfer function . Thus the instability will not grow without bound but rather will saturate, producing a finite amplitude bump solution. This stable, large amplitude branch and the unstable branch annihilate in a saddle-node bifurcation for values of below the critical value for the Turing instability. Such finite-amplitude bumps are therefore bistable with the SU state. In Fig. 8, the two insets show the connectivity kernel for parameter values given by the placement of the open triangle (subcritical bump) and the open square (supercritical bump).
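The distinction between the two cases can be made concrete by integrating a generic real amplitude equation of the form Ȧ = μA + cA³, the normal form of this bifurcation; the coefficient values below are purely illustrative, not those derived from Eq.(2):

```python
import numpy as np

def bump_amplitude(mu, c, A0=1e-3, dt=1e-3, T=300.0):
    """Euler integration of the real amplitude equation dA/dT = mu*A + c*A**3."""
    A = A0
    for _ in range(int(T / dt)):
        A += dt * (mu * A + c * A**3)
        if abs(A) > 1e6:          # cubic truncation fails to saturate
            return np.inf
    return A

A_super = bump_amplitude(mu=0.1, c=-1.0)   # saturates near sqrt(-mu/c)
A_sub = bump_amplitude(mu=0.1, c=+1.0)     # runs away: bistability expected
```

In the supercritical case the bump grows continuously from zero past the instability; in the subcritical case the cubic-order description blows up, and it is the saturation of the full transfer function that produces the finite-amplitude, bistable branch described above.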
In the phase diagram of Fig. 7, the Turing instability line (upper horizontal red line) is drawn thin for supercritical bumps and thick for subcritical bumps (here ).
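The distinction between the two bump branches can be illustrated with a one-dimensional caricature of the amplitude equation. The sketch below uses purely illustrative coefficients (not the values derived above): a cubic normal form for the supercritical case, and a cubic-quintic normal form for the subcritical case, in which a stable finite-amplitude branch coexists with the stable SU state.

```python
def integrate(rhs, a, dt=0.01, steps=20000):
    # simple forward-Euler integration of a scalar amplitude equation
    for _ in range(steps):
        a += dt * rhs(a)
    return a

eps = 0.1
# supercritical: dA/dT = eps*A - A^3; amplitude grows continuously as sqrt(eps)
a_super = integrate(lambda a: eps*a - a**3, 0.01)

# subcritical with quintic saturation: dA/dT = eps*A + A^3 - A^5;
# for eps slightly negative, a stable finite-amplitude branch coexists with A = 0
eps2 = -0.1
a_low  = integrate(lambda a: eps2*a + a**3 - a**5, 0.2)   # starts below the unstable branch
a_high = integrate(lambda a: eps2*a + a**3 - a**5, 1.2)   # starts above the unstable branch
```

Here a_super settles at sqrt(eps), while a_low decays to zero and a_high settles on the large-amplitude branch, mimicking the bistability between finite-amplitude bumps and the SU state described above; in this toy model the two branches annihilate in a saddle-node at eps2 = -1/4.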
4.2 Hopf Bifurcation
As shown previously (3); (4); (1), a network of recurrently coupled inhibitory neurons can generate fast oscillations due to the effective delay in the synaptic interactions. Fig.9A shows raster plots of the spiking activity in a network of randomly connected inhibitory neurons as the synaptic weight is increased (see Appendix A for a description of the network). The raster plots clearly show oscillations emerging as the parameter increases in strength. Fig.9B shows the autocorrelation function of the network-averaged firing rate from four seconds of simulation time. The curve is completely flat for the uncoupled case, as expected, while periodic peaks appear and grow as the synaptic weights are increased, indicating the presence of coherent network oscillations. Note that the oscillations appear to emerge continuously, which would indicate a supercritical bifurcation. This is consistent with the finding in (3), where fast oscillations in a network of integrate-and-fire neurons were shown analytically to be supercritical.
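The mechanism can be reproduced in a minimal delayed rate-model sketch. This is a toy caricature, not the spiking network of Appendix A; the logistic transfer function, delay, input and weights below are all assumed for illustration.

```python
import math

def simulate(J, I=4.0, D=2.0, tau=1.0, dt=0.01, T=300.0):
    # tau * dr/dt = -r + phi(I - J * r(t - D)), with phi a logistic sigmoid
    phi = lambda x: 1.0 / (1.0 + math.exp(-x))
    nd = int(round(D / dt))
    buf = [0.2] * nd               # constant history r(t < 0) = 0.2
    r, trace = 0.2, []
    for _ in range(int(T / dt)):
        r_delayed = buf.pop(0)     # value of r from D time units ago
        buf.append(r)
        r += dt / tau * (-r + phi(I - J * r_delayed))
        trace.append(r)
    return trace[len(trace)//2:]   # discard the transient

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m)**2 for x in xs) / len(xs)) ** 0.5

weak, strong = std(simulate(2.0)), std(simulate(10.0))
```

For weak inhibitory coupling the delayed feedback is too shallow to destabilize the fixed point and the rate settles to a constant (weak is essentially zero), while for strong coupling the effective delay produces sustained oscillations of the rate, qualitatively as in Fig. 9.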
We may ask whether fast oscillations are in general expected to bifurcate supercritically, or whether a subcritical bifurcation is also possible. To this end we study the rate equation, which allows us to determine the nature of the bifurcation analytically as a function of the transfer function and the connectivity.
In the rate model there is a spatially homogeneous oscillatory instability with frequency given by Eq.(9a). This occurs for a value of the spatial Fourier mode of the connectivity given by Eq.(9b), while we assume that all other Fourier modes are sufficiently below their critical values to avoid additional instabilities. We expand the parameters , and as in Eqs. (12,13,14), and define the slow time Eq. (15). The linear solution has an amplitude which we allow to vary slowly in time, i.e. . Carrying out a weakly nonlinear analysis to third order in leads to the amplitude equation
Figure 10 shows a typical bifurcation diagram (in this case ) for the Hopf bifurcation. Plotted is the amplitude of the limit cycle as a function of where symbols are from numerical simulation of Eq.(2) and the lines are from the amplitude equation, Eq.(18).
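Near a supercritical Hopf bifurcation, the amplitude equation predicts a limit cycle whose amplitude grows as the square root of the distance to threshold. A minimal Stuart-Landau sketch (generic illustrative coefficients, not those of Eq. (18)) makes this scaling explicit:

```python
# dA/dT = (mu + i*w0)*A - (1 + i*c3)*|A|^2*A ; the modulus relaxes to sqrt(mu)
def limit_cycle_amplitude(mu, w0=2.0, c3=1.0, dt=0.001, T=100.0):
    A = 0.1 + 0.0j
    for _ in range(int(T / dt)):
        A += dt * ((mu + 1j*w0)*A - (1 + 1j*c3)*abs(A)**2 * A)
    return abs(A)

amps = [limit_cycle_amplitude(mu) for mu in (0.05, 0.1, 0.2)]
```

The measured amplitudes follow sqrt(mu) to within the Euler discretization error, which is the square-root scaling against which theory and simulation are compared in Fig. 10.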
In the small delay limit () we can use the asymptotic values (11) to obtain, to leading order,
where we have defined the quantity . Figure 11 shows a comparison of the full expressions for the coefficients of the amplitude equation, Eqs. (52-53) with the expressions obtained in the limit , Eqs. (19a-19b). Again, the agreement is quite good, even up to , especially for the real part of the cubic coefficient , which is of primary interest here.
The asymptotic expression for the cubic coefficient , Eq.(19b), indicates that a subcritical limit cycle should occur for . This provides a simple criterion for determining whether or not a particular choice of the transfer function can generate oscillations which are bistable with the SU state. In fact, it is a difficult condition to fulfill given a sigmoidal-like input-output function. For example, given a sigmoid of the form , one finds that
It is straightforward to show that the expression on the right-hand side of Eq.(20) is bounded above by . In fact, . Such a nonlinear transfer function will therefore always generate supercritical oscillations.
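As a numerical check, consider the logistic sigmoid (an assumed example of a sigmoidal transfer function). Its derivatives obey closed-form identities, so the ratio of derivatives of the type entering Eq. (20) can be evaluated exactly across the whole input range:

```python
import math

def phi(x):
    # logistic sigmoid, an assumed example form
    return 1.0 / (1.0 + math.exp(-x))

def deriv_ratio(x):
    # for the logistic: phi' = p(1-p), phi'' = phi'(1-2p), phi''' = phi'(1-6p+6p^2),
    # hence phi' * phi''' / phi''^2 = (1 - 6p + 6p^2) / (1 - 2p)^2
    p = phi(x)
    return (1.0 - 6.0*p + 6.0*p*p) / (1.0 - 2.0*p)**2

# sample away from x = 0, where phi'' vanishes and the ratio diverges to -infinity
xs = [0.05*k for k in range(-200, 201) if k != 0]
vals = [deriv_ratio(x) for x in xs]
```

The ratio approaches 1 from below in the tails and plunges to large negative values near the inflection point, so for this family the subcriticality condition is never met, consistent with the claim that sigmoidal transfer functions generate only supercritical oscillations.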
If the nonlinear transfer function is interpreted as the single-cell fI curve, which is common in the literature, then we can use the fact that cortical cells operate in the fluctuation-driven regime. In particular, the mean input current to cortical cells is too low to cause spiking; rather, spikes are generated at very low rates by fluctuations in the membrane voltage. Although the fI curve for spiking neurons in the supra-threshold regime is concave down and saturates, in the fluctuation-driven, sub-threshold regime the fI curve exhibits a smoothed-out tail which is concave up. It has been shown that the sub-threshold portion of the fI curve of actual cells can be well fit by a function of the form , where (see e.g. (41); (36)). In this case
which again is bounded between and . This again rules out subcritical oscillations in the small delay limit.
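The same check for a power-law transfer function is even simpler, because for a function proportional to x^alpha (x > 0) the corresponding ratio of derivatives is the constant (alpha-2)/(alpha-1), independent of x. A short sketch, with an illustrative range of exponents:

```python
def powerlaw_ratio(alpha):
    # phi(x) = c*x**a (x > 0):
    #   phi'   = c*a*x**(a-1)
    #   phi''  = c*a*(a-1)*x**(a-2)
    #   phi''' = c*a*(a-1)*(a-2)*x**(a-3)
    # so phi' * phi''' / phi''**2 = (a-2)/(a-1) for every x > 0
    return (alpha - 2.0) / (alpha - 1.0)

def fd_ratio(alpha, x=2.0, h=1e-4):
    # finite-difference cross-check of the same ratio at a single point
    f = lambda t: t**alpha
    d1 = (f(x+h) - f(x-h)) / (2*h)
    d2 = (f(x+h) - 2*f(x) + f(x-h)) / h**2
    d3 = (f(x+2*h) - 2*f(x+h) + 2*f(x-h) - f(x-2*h)) / (2*h**3)
    return d1*d3/d2**2
```

For every exponent alpha > 1 the ratio lies below 1, so an expansive power-law transfer function of this form also yields only supercritical oscillations in the small-delay limit.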
Nonetheless, suitable functions for generating subcritical oscillations can be contrived, as shown in Fig. 12 A. Numerical simulation of Eq.(2) indeed reveals a subcritical bifurcation in this case (see Fig. 12 B). However, this type of transfer function does not seem consistent with the interpretation of as a single-cell fI curve, nor with that of as a cumulative distribution of activation, i.e. a sigmoid. This strongly suggests that delay-driven oscillations in networks of spiking neurons will be generically supercritical.
4.3 Turing-Hopf Bifurcation
As shown previously in (1), given an inverted Mexican-hat connectivity for which inhibition dominates locally, fast waves may emerge in networks of spiking neurons. This is illustrated in Fig.13A, where raster plots of three networks are shown with the degree of spatial modulation increasing from top to bottom.
Additionally, Fig. 13B (top) shows a spatial profile of the network activity averaged over 2500 ms for the three networks, while the bottom panel shows the autocorrelation function (AC) of the network firing rate for the three cases (same color scheme). Note that for small inhibitory spatial modulation (black curve) the profile is essentially flat while the AC exhibits an initial peak and dip, but an absence of multiple peaks which would indicate fast oscillations. As is increased (red curve), the profile remains flat but the AC clearly exhibits periodic peaks indicating that fast oscillations are present in the firing rate. The corresponding raster plot in the middle panel of Fig. 13 shows intermittent standing wave patterns which emerge and later disappear giving rise to a new pattern with a different spatial orientation (not shown). This explains why the time average of this spatial profile becomes flat. Finally, for strong enough spatial modulation, a stationary standing wave pattern is seen in Fig. 13A (bottom). In this case the time-averaged spatial profile shown in Fig. 13B (top, blue) shows two distinct maxima, whereas the AC indicates fast oscillations in the firing rate.
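The diagnostic used here, periodic peaks in the autocorrelation of the population rate, is easy to sketch. The synthetic signals below are stand-ins for the network-averaged firing rate (a coherent oscillation versus an asynchronous, noise-like trace):

```python
import math, random

def autocorr(sig, max_lag):
    # normalized autocorrelation of a 1-D signal for lags 0 .. max_lag-1
    n = len(sig)
    m = sum(sig) / n
    d = [s - m for s in sig]
    var = sum(x*x for x in d) / n
    return [sum(d[i]*d[i+L] for i in range(n - L)) / ((n - L) * var)
            for L in range(max_lag)]

random.seed(0)
period = 100
osc  = [1.0 + 0.5*math.cos(2*math.pi*i/period) for i in range(2000)]  # coherent oscillation
flat = [1.0 + random.gauss(0.0, 0.1) for i in range(2000)]            # asynchronous stand-in
ac_osc, ac_flat = autocorr(osc, 150), autocorr(flat, 150)
```

ac_osc shows a deep trough at half the period and a peak near 1 at the full period, the signature of coherent fast oscillations in Fig. 9B and Fig. 13B, while ac_flat stays near zero at all nonzero lags.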
Extensive simulations with such a cosine connectivity always yielded standing wave patterns for various choices of synaptic weights and input rates (not shown). We seek to understand why this is so, and whether delay-driven traveling wave patterns can also be seen in network simulations. To this end we study the emergence of fast oscillations in the rate equation.
There is a spatially inhomogeneous oscillatory instability with frequency given by Eq.(9a). This occurs for a value of the spatial Fourier mode of the connectivity given by Eq.(9b), while we assume that all other Fourier modes are sufficiently below their critical values to avoid additional instabilities. Without loss of generality we assume .
We expand the parameters , and as in Eqs. (12,13,14) and define the slow time Eq. (15). The linear solution consists of leftwards and rightwards traveling waves with an amplitude which we allow to vary slowly in time, i.e. . Carrying out a weakly nonlinear analysis to third order in leads to the coupled amplitude equations
In the small delay limit () we can use the asymptotic values (11) to obtain, to leading order,
where . Figure 14 shows a comparison of the full expressions (solid lines) for the real parts of the cubic and cross-coupling coefficients and with the asymptotic expressions above (dotted lines).
Wave solutions and their stability.
Traveling waves: Leftward and rightward traveling waves in Eqs. (23a) and (23b) are given by and respectively, where . The stability of traveling waves can be determined with the ansatz
The resulting eigenvalues are
The existence and stability of small-amplitude waves as described above is completely determined by the values of the cubic and cross-coupling coefficients and . This is illustrated in Fig. 15, where the parameter space is divided into five sectors. In each sector the type of solution which will be observed numerically is indicated where known, and otherwise a question mark is placed. Illustrative bifurcation diagrams are also given. Specifically, in the region labeled 1 (red online), the SW solution is supercritical and unstable while the TW solution is supercritical and stable. TW will therefore be observed. In the region labeled 2, the SW solution is supercritical and unstable while the TW solution is subcritical. Finite-amplitude TW are therefore expected to occur past the bifurcation point. In the region labeled 3, both solution branches are subcritical, indicating that the analysis up to cubic order is not sufficient to identify the type of wave which will be observed. In the region labeled 4, TW are supercritical and unstable while SW are subcritical. Finite amplitude SW are therefore expected past the bifurcation point. In the region labeled 5, the TW solution is supercritical and unstable while the SW solution is supercritical and stable. SW will therefore be observed.
From Fig. 15 we can see that the nature of the observed solution will depend crucially on the sign of the second term on the right-hand side of Eq.(24). In particular, the diagonal divides the parameter space into two qualitatively different regions: above this line TWs are favored, while below it SWs are favored. In the small delay limit, Eq.(24) indicates that the balance between the third derivative of the transfer function and the second spatial Fourier mode of the connectivity kernel determines whether TW or SW are favored.
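The selection principle can be seen in a stripped-down version of the coupled amplitude equations with real coefficients. The self- and cross-coupling values below are illustrative assumptions, not the coefficients derived above:

```python
def compete(g, h, mu=1.0, dt=0.01, T=200.0):
    # dA/dT = A(mu - g A^2 - h B^2),  dB/dT = B(mu - g B^2 - h A^2)
    # A, B: amplitudes of the right- and left-traveling components
    a, b = 0.30, 0.25            # slightly asymmetric initial condition
    for _ in range(int(T / dt)):
        a, b = (a + dt*a*(mu - g*a*a - h*b*b),
                b + dt*b*(mu - g*b*b - h*a*a))
    return a, b

tw = compete(g=1.0, h=2.0)   # cross-coupling dominates -> one component dies: TW
sw = compete(g=2.0, h=1.0)   # self-coupling dominates  -> equal amplitudes: SW
```

When the cross-coupling damps the opposing wave more strongly than the self-coupling (h > g), the initial asymmetry grows and a traveling wave survives; in the opposite case both components equilibrate and a standing wave results, mirroring the sectors of Fig. 15.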
For sigmoidal transfer functions, the third derivative changes sign from positive to negative already below the inflection point, while for expansive power-law nonlinearities, which fit cortical neuronal responses quite well in the fluctuation-driven regime, the third derivative is positive if the power is greater than 2 and negative otherwise. The contribution of this term will therefore depend on the details of the neuronal response. In simulations of large networks of conductance-based neurons in the fluctuation-driven regime in which was zero, the standing wave state was always observed, indicating a (1); (2).
The phase diagram for , Fig. 7, clearly shows the dominance of the SW solution, indicating for the parameter values chosen. Specifically, for values of , supercritical standing waves are stable (see region 5 in Fig. 15). Figures 16 A and 16 B show supercritical SW patterns for and , respectively. For , TW are supercritical and unstable while SW are subcritical [see region 4 in Fig. 15]. An example of subcritical SW is shown in Fig. 16 C. For , both SW and TW are subcritical (see region 3 in Fig. 15). Numerical simulations reveal TW patterns in this region (see an example in Fig. 16 D). In the region where SW are subcritical there is a small sliver in where the SW state is bistable with a TW state (TW/SW in the phase diagram). This TW branch most likely arises in a secondary bifurcation slightly below the Turing-Hopf bifurcation line. There is also a small region of bistability between large amplitude TW and the spatially uniform high activity state (TW/HA in the phase diagram Fig. 7).
Thus for and with a simple cosine connectivity, SW arise for most values of . However, adding a non-zero can lead to the TW solution winning out. The phase diagram of wave states as a function of and is shown in Fig. 17 A. In Fig. 17 A, the light shaded region indicates values of and for which TW are expected, whereas SW are expected in the dark shaded region. In the unshaded region, both TW and SW are subcritical and the solution type is therefore not determined by the analysis up to cubic order. These regions, delimited by the solid lines, were determined by numerically evaluating the real parts of the full expressions for the cubic and cross-coupling coefficients, Eqs. (54-55). Each region is furthermore numbered according to the existence and stability of the TW and SW solution branches as shown in Fig. 15. The dashed lines show the approximation to the solid lines given by the asymptotic formulas (22a-22b). The set of allowable values for and is bounded by the conditions (8-9b) corresponding to steady or oscillatory linear instabilities. These stability conditions are shown by the horizontal and vertical bounding lines (red online). All parameter values are as in Fig. 7.
From Fig. 17 we can now understand the discrepancy between the analytical results in (1) using a rate equation with a linear threshold transfer function, which predicted TW, and network simulations, which showed SW. Specifically, given a nonlinear transfer function with , then with a simple cosine coupling SW are predicted over almost the entire range of allowable (dark shaded region for ). The nonlinear transformation of inputs into outputs is thus crucial in determining the type of wave solution. The choice of a threshold linear transfer function results in the second and all higher order derivatives being zero. In this sense it produces degenerate behavior at a bifurcation point, and by continuation of the solution branches, in a finite region of the phase diagram.
We have shown that varying can change the nature of the bifurcation, e.g. supercritical to subcritical, while varying can switch the solution type, e.g. from SW to TW. As an example of a functional form of connectivity motivated by anatomical findings, e.g. (42), we consider a difference of Gaussians, written as
In this case, one finds that the Fourier coefficients are
where . Once has been fixed at the critical value for the onset of waves, it is straightforward to show from Eq.(26) that , where both and are constants which depend on and , the widths of the excitatory and inhibitory axonal projections respectively. Thus a difference-of-Gaussians connectivity constrains the possible values of and to lie along a straight line for fixed connectivity widths. This is illustrated in Fig. 17B, where three dashed lines are superimposed on the phase diagram, corresponding to the values ; ; and . Each of these lines bounds a region to its left where and are less than , and respectively. Given periodic boundary conditions with a system size of , a Gaussian with is already significantly larger than zero for or . Thus, restricting ourselves to Gaussians which essentially decay to zero at the boundaries means that TW will always be observed. The same holds true for qualitatively similar types of connectivity.
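The linear constraint between the Fourier coefficients can be verified directly. The sketch below uses unit-normalized Gaussians with assumed widths (a continuum approximation valid when the widths are small compared with the system size): it fixes the first Fourier coefficient, solves for the excitatory and inhibitory weights, and checks that the second coefficient then depends linearly on the zeroth.

```python
import math

def jhat(k, sigma):
    # Fourier coefficient of a unit-normalized Gaussian of width sigma
    # (continuum approximation, assuming sigma << system size 2*pi)
    return math.exp(-0.5 * (sigma * k)**2)

def j2_from_j0(j0, j1_fixed, s_e=0.5, s_i=1.0):
    # solve  J0 = Je - Ji,  J1 = Je*jhat(1,s_e) - Ji*jhat(1,s_i)  for (Je, Ji)
    e1, i1 = jhat(1, s_e), jhat(1, s_i)
    det = e1 - i1
    je = (-i1*j0 + j1_fixed) / det
    ji = (j1_fixed - e1*j0) / det
    return je*jhat(2, s_e) - ji*jhat(2, s_i)

# with the first coefficient held at an (illustrative) critical value,
# the second coefficient is an affine function of the zeroth
j2 = [j2_from_j0(j0, j1_fixed=1.5) for j0 in (0.0, 1.0, 2.0)]
```

The three points are exactly collinear (j2[1] - j2[0] equals j2[2] - j2[1]), so fixing the connectivity widths pins the coefficients to a straight line in the phase diagram, as in Fig. 17B.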
Classes of waves in Network Simulations
Our analytical results concerning waves in the rate equation Eq.(2) predict that a connectivity with a sufficiently strong second Fourier component of negative amplitude will lead to traveling waves (see the phase diagram in Fig. 17 A).
Here we have confirmed this prediction by performing numerical simulations of the network of spiking neurons described in Appendix A. Indeed, Fig. 18 shows that adding the second spatial Fourier component to the inhibitory connections converts standing waves (SW) into traveling waves (TW).
5 Bifurcations of codimension 2
For certain connectivity kernels we may be in the vicinity of two distinct instabilities. This is the case for certain Mexican-hat connectivities (OU and SB) and certain inverted Mexican-hat connectivities (OU and SW/TW). Although two instabilities will co-occur only at a single point in the phase diagram Fig. 7, i.e. where and are both at their critical values, the competition between these instabilities may lead to solutions which persist over a broad range of connectivities. This is the case here. We can investigate this competition once again using a weakly nonlinear approach. The main results of this section are summarized in Table 1.
5.1 Hopf and Turing-Hopf bifurcations
Here we consider the co-occurrence of a spatially homogeneous oscillation and a spatially inhomogeneous oscillation (OU and SW/TW), both with frequency given by Eq.(9a). This instability occurs when the zeroth and spatial Fourier modes of the connectivity both satisfy the relation Eq.(9b), while we assume that all other Fourier modes are sufficiently below their critical values to avoid additional instabilities. Without loss of generality we take for the SW/TW state.
We expand the parameters , , and as in Eqs. (12,13,14), and define the slow time (15). The linear solution consists of homogeneous, global oscillations, leftwards and rightwards traveling waves with amplitudes which we allow to vary slowly in time, i.e. . Carrying out a weakly nonlinear analysis to third order in leads to the coupled amplitude equations
Solution types and their stability
Eqs. (27a-27c) admit several types of steady-state solutions, including oscillatory uniform solutions (OU), traveling waves (TW), standing waves (SW) and mixed-mode oscillations/standing waves (OU-SW). The stability of these solutions depends on the values of the coefficients in Eqs. (27a-27c). In addition, non-stationary solutions are also possible. Here we briefly describe some stationary solutions and their stability. For details see Appendix B.
Oscillatory Uniform (OU): The oscillatory uniform solution has the form where
The OU state undergoes a steady instability along the line
Traveling Waves (TW): The traveling wave solution has the form = or , where
The TW state undergoes an oscillatory instability along the line
with a frequency
Standing Waves (SW): The standing wave solution has the form = , where
An oscillatory instability occurs along the line
with a frequency
A stationary instability occurs along the line
For Eq. (2) with the parameters used to generate the phase diagram Fig. 7, we find that the stationary instability precedes the oscillatory one and that . This agrees well with the numerically determined stability line near the codimension-2 point in the phase diagram of Fig. 7.
A simple example
We now turn to a simple example in order to illustrate the two main types of bifurcation scenarios that can arise when small-amplitude waves and oscillations interact in harmonic resonance.
i. Bistability: Here we take the parameters
Note that this scenario is the relevant one for the phase diagram shown in Fig. 7. That is, we find there is a region of bistability between the OU and SW solutions, bounded between two lines with slope and respectively.
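This bistability can be caricatured with real amplitudes in place of the complex ones of Eqs. (27a-27c). The coefficients below are assumptions chosen to place the system in a bistable regime, not the derived values:

```python
def evolve(ic, mu=1.0, g=0.5, h=2.0, dt=0.01, T=200.0):
    # a0: amplitude of the uniform oscillation (OU)
    # ap, am: amplitudes of the right- and left-traveling wave components
    a0, ap, am = ic
    for _ in range(int(T / dt)):
        a0, ap, am = (a0 + dt*a0*(mu - a0*a0 - h*(ap*ap + am*am)),
                      ap + dt*ap*(mu - ap*ap - g*am*am - h*a0*a0),
                      am + dt*am*(mu - am*am - g*ap*ap - h*a0*a0))
    return a0, ap, am

ou = evolve((0.5, 0.1, 0.1))   # uniform mode starts ahead -> pure OU survives
sw = evolve((0.1, 0.5, 0.5))   # wave pair starts ahead    -> SW (equal wave amplitudes)
```

With these couplings both pure states are stable and the mixed states are saddles, so the initial condition selects the attractor, a caricature of the bistability between the OU and SW solutions found near the codimension-2 point in Fig. 7.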
ii. Mixed Mode: Here we consider the parameters
It is easy to show that for the mixed-mode amplitudes approach and the phase . The mixed-mode solution thus bifurcates continuously from the oscillatory pure mode. Figure 19b shows the corresponding bifurcation diagram, where solid and dotted lines are the analytical expressions for the solution branches and symbols are from numerical simulation of Eqs. (27a-27c). As increases from the left we see that the SW solution indeed undergoes an oscillatory instability at , leading to an oscillatory mixed-mode solution indicated by small circles (the maximum and minimum amplitudes achieved on each cycle are shown). This oscillatory solution disappears in a saddle-node bifurcation, giving rise to a steady mixed-mode solution whose amplitude is given by Eqs. (36a-36c). This steady mixed-mode solution bifurcates from the pure oscillatory mode at