
# Equivalence between synaptic current dynamics and heterogeneous propagation delays in spiking neuron networks

Matteo Biggio, Marco Storace, Maurizio Mattia
DITEN, University of Genoa, 16145 Genova, Italy
Istituto Superiore di Sanità, 00161 Roma, Italy
July 19, 2019
###### Abstract

Message passing between components of a distributed physical system is non-instantaneous and contributes to determine the time scales of the emerging collective dynamics, like an effective inertia. In biological neuron networks this inertia is due in part to the local synaptic filtering of exchanged spikes, and in part to the distribution of the axonal transmission delays. How differently these two kinds of inertia affect the network dynamics is an open issue, not yet addressed due to the difficulties in dealing with the non-Markovian nature of synaptic transmission. Here, we develop a mean-field dimensional reduction yielding an effective Markovian dynamics of the population density of the neuronal membrane potential, valid under the hypothesis of small fluctuations of the synaptic current. The resulting theory allows us to prove the formal equivalence between local and distributed inertia, holding for any synaptic time scale, any integrate-and-fire neuron model, any spike emission regime, and different network states, even when the neuron number is finite.

###### pacs:
87.19.ll,87.19.lc,87.18.Sn

## I Introduction

Large distributed systems like brain neuronal networks often have to satisfy both timing and space constraints irrespective of their size. Indeed, they have to be compact in order to be portable, and at the same time their components (i.e., the neurons) always have to communicate at the same pace, which in turn is set by the environment Buzsáki et al. (2013). The parallel, non-instantaneous communication of messages like the spikes emitted by the neurons can be seen as an effective inertia, and it can be increased either locally, by slowing down the dynamics of the single elements composing the network, or globally, by distributing inertia across the system elements and their connections. A large local inertia implies a long memory (i.e., a long decay time over which initial conditions are forgotten) and usually a large size, as in the case of a large capacitance in an electronic circuit or a large mass in a mechanical system. This usually hinders the miniaturization of a many-body system made up of elements with large local inertia (each with a given memory capacity). Indeed, the bigger the overall memory capacity, the larger the required space and/or the consumed energy per element Mead (1989); Sarpeshkar (1998); Laughlin and Sejnowski (2003). Alternatively, a global inertia can be implemented by delaying in time the interactions between the network components, thus implementing a distributed memory of the collective activity spread across the whole system, possibly hindering fast synchronization and hence making the system slower and more stable Atay (2003); Mattia and Del Giudice (2003). This strategy also has drawbacks, such as the cost of implementing delay lines. For instance, a delayed distribution of the past activity/state of the system requires a memory resource or an additional energy demand in order to keep alive the messages to be exchanged between the network components Laughlin and Sejnowski (2003); Merolla et al. (2014).
It is perhaps not by chance that both these strategies coexist in a huge cellular network like the one composing our brain. Indeed, neuronal networks have both local low-pass filters at the level of synapses – which preferentially select slowly varying information and provide local inertia – and a global spread of their instantaneous activity through axons and dendrites, which propagate emitted spikes with suited delays and provide global inertia. A natural question then arises: do these two kinds of inertia affect the network dynamics differently?

This is a challenging question. Indeed, synapses in neuron networks filter incoming spikes with a wide variety of time scales, affecting the stability of various collective dynamics Wang (1999), the selectivity in transmitting information Abbott and Regehr (2004) and the reactivity to suddenly appearing exogenous stimuli Fourcaud and Brunel (2002). Even for extremely simplified models of spiking neurons and network connectivities, theoretical approaches including non-instantaneous transmission rely on approximations valid for relatively small time scales Brunel and Sergi (1998); Brunel et al. (2001); Fourcaud and Brunel (2002); Schuecker et al. (2015), or for quasi-adiabatic dynamical regimes Moreno-Bote and Parga (2004, 2006), or for neurons working with low-noise supra-threshold inputs Lindner (2004); Schwalger et al. (2015). Despite the long history of attempts in statistical physics to work out an effective one-dimensional Markovian description of the dynamics of a non-Markovian system Risken (1984); Hänggi and Jung (1995) (i.e., a suitable representation of the membrane potential dynamics of the neurons when synaptic filtering is incorporated), a theoretical framework valid for any correlation time of the synaptic input and any neuronal activity regime is still lacking if the boundary conditions describing the spike emission process are taken into account.

Here, we address this issue by developing a theoretical framework in which the same dynamical model for the instantaneous firing rate of spiking neuron networks holds from small to large synaptic time scales. Starting from the population density dynamics of the single-neuron state variables under mean-field approximation, we extend to the colored-noise case the spectral expansion of the associated Fokker-Planck (FP) equation previously derived under the white-noise assumption Mattia and Del Giudice (2002). Resorting to a kind of central moment closure method Levermore (1996), a zeroth-order approximation of the population density dynamics is worked out, recovering the same theoretical description as in the white-noise case. In the new model, the equation coefficients are modulated by the low-pass filtered mean synaptic current. This dimensional reduction yields results in remarkable agreement with microscopic simulations and finally allows us to answer the question asked above. This is done by proving a formal equivalence between local and distributed inertia: a suited distribution of spike transmission delays between neurons (global inertia) fully reproduces the dynamical features of the same network with non-instantaneous synaptic transmission (local inertia). This equivalence is independent of the integrate-and-fire (IF) neuron model (VIF, LIF, EIF), the spike emission regime (sub- and suprathreshold) and the dynamical state (stable asynchronous fixed point or limit cycle) chosen for the network, even when finite-size fluctuations are taken into account.

## II Local inertia in neuronal networks

As a paradigmatic example of local inertia in a many-body distributed system, we use a network composed of integrate-and-fire (IF) neurons, each receiving an input current $I(t)$ resulting from the low-pass filtered linear combination of the spikes emitted at times $t_k$ by a subset of presynaptic cells in the network Destexhe et al. (1998); Brunel et al. (2001). For simplicity, here we assume a first-order dynamics for the synaptic current, with decay time $\tau_s$ and constant efficacy $J$ (Fig. 1a-b). The membrane potential $V$ of an IF neuron evolves according to the general equation $\dot V = -f(V)/\tau_m + R\,I$, with membrane decay constant $\tau_m$, neuron resistance $R$ (henceforth set to 1 for simplicity) and a voltage-dependent leakage drift $f(V)$ which depends on the model neuron type. For instance, $f(V)$ is a constant drift for the perfect IF (PIF) neuron, while $f(V) = V$ for the leaky IF (LIF) Burkitt (2006). Neurons may receive an additional current $I_{\mathrm{ext}}$ modeling external sources, like incoming synaptic input from other networks. Once $V$ crosses a threshold value $v_{\mathrm{thr}}$, a spike is emitted and the potential is reset to $v_{\mathrm{res}}$. Due to these boundary conditions, even under stationary regimes the single-neuron dynamics is not analytically tractable. This is particularly apparent looking at the probability density $p(V,I)$ of states, estimated numerically from a long time series with presynaptic activity modeled as a Poissonian spike train (Fig. 1c). The absorbing barrier at $v_{\mathrm{thr}}$ and the reentering flux of realizations at $v_{\mathrm{res}}$ modeling the emission of spikes make $p(V,I)$ asymmetric Brunel and Sergi (1998); Fourcaud and Brunel (2002); Apfaltrer et al. (2006) and the correlation between $V$ and $I$ non-monotonic, a feature even more apparent for relatively slow synaptic filtering (Fig. 1d). Under diffusion approximation, holding for a large rate of incoming spikes each only mildly affecting $V$ Tuckwell (1988); Burkitt (2006), the membrane potential dynamics is described by the following system of Langevin equations Risken (1984)

$$\dot V = -\frac{f(V)}{\tau_m} + I + I_{\mathrm{ext}}\,, \qquad \tau_s\,\dot I = -I + \mu_I + \sigma_I\,\xi\,, \tag{1}$$

where for simplicity currents are scaled by a suitable factor (i.e., they are measured as voltage per unit time) and $\xi$ is a Gaussian white noise homogeneous in time.

When the mean driving force alone (i.e., the deterministic part of the input current) is not enough to make the membrane potential cross the threshold $v_{\mathrm{thr}}$, the neurons evolve in a noise-dominated (or subthreshold) regime and fire irregularly, driven by the stochastic part of the input. In the opposite case, an action potential can be emitted even in the absence of noisy afferent currents, and the neurons are in a drift-dominated (or suprathreshold) regime of activity, characterized by the regular emission of spike trains Fusi and Mattia (1999); Brunel (2000); Lansky and Ditlevsen (2008).
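These two regimes can be illustrated by integrating the Langevin system (1) directly. The following sketch uses the Euler–Maruyama scheme for a LIF neuron ($f(V)=V$) with first-order synaptic filtering; all parameter values are illustrative assumptions, not those of the paper's simulations:

```python
import math, random

def simulate_lif(mu, sigma, tau_m=20.0, tau_s=4.0, v_thr=20.0, v_res=0.0,
                 dt=0.1, t_max=60_000.0, seed=1):
    """Euler-Maruyama integration of Eq. (1) for a LIF neuron (f(V) = V),
    driven by a first-order filtered synaptic current (times in ms,
    potentials in mV; I_ext is folded into the mean input mu)."""
    rng = random.Random(seed)
    v, i_syn = v_res, mu
    spikes = []
    sq = math.sqrt(dt)
    for k in range(int(t_max / dt)):
        v += dt * (-v / tau_m + i_syn)
        i_syn += dt * (mu - i_syn) / tau_s + sigma * sq * rng.gauss(0.0, 1.0) / tau_s
        if v >= v_thr:              # absorbing barrier: spike emission...
            spikes.append(k * dt)
            v = v_res               # ...and reentering flux at the reset potential
    return spikes

def isi_cv(spikes):
    """Coefficient of variation (CV) of the inter-spike intervals."""
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    m = sum(isis) / len(isis)
    return math.sqrt(sum((x - m) ** 2 for x in isis) / len(isis)) / m

# drift-dominated: mu * tau_m = 30 mV > v_thr, regular firing
cv_drift = isi_cv(simulate_lif(mu=1.5, sigma=0.2))
# noise-dominated: mu * tau_m = 10 mV < v_thr, fluctuation-driven firing
cv_noise = isi_cv(simulate_lif(mu=0.5, sigma=3.0))
print(cv_drift, cv_noise)
```

In the drift-dominated setting the mean input alone crosses threshold and the CV stays small; in the noise-dominated one firing relies on fluctuations and the CV approaches 1.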

Equation (1) describes the time course of a generic realization of $(V(t), I(t))$, which under mean-field approximation represents the dynamics of each of the statistically identical neurons of the network. According to this theoretical ansatz, the infinitesimal moments

$$\mu_I(t) = J\,C\,\nu(t)\,, \qquad \sigma_I^2(t) = J^2\,C\,\nu(t) \tag{2}$$

characterize the white noise driving the activity-dependent synaptic current Amit and Brunel (1997) as a function of the network emission rate $\nu(t)$ (i.e., the average number of spikes each neuron emits per time unit) and the average number $C$ of presynaptic contacts per neuron. The probability density $P(x,y,t)$ to find neurons at time $t$ with membrane potential $x$ and synaptic current $y$ follows the FP equation Risken (1984)

$$\partial_t P = -\partial_x F_x(x,y) - \partial_y F_y(y,\nu) \equiv \frac{1}{\tau_m}\,\partial_x\!\left[\left(f(x) - y - \mu_{\mathrm{ext}}\right) + \frac{1}{2}\,\sigma_{\mathrm{ext}}^2\,\partial_x\right]P + \frac{1}{\tau_s}\,\partial_y\!\left[\left(y - \mu_I\right) + \frac{1}{2}\,\sigma_I^2\,\partial_y\right]P \equiv \left(L_x + L_y\right)P\,, \tag{3}$$

a continuity equation in which the density changes are determined by the divergence of the probability current $(F_x, F_y)$ (i.e., the flux of realizations), together with a few specific boundary conditions (see Fig. 1): i) spike emission, modeled as an absorbing barrier at $x = v_{\mathrm{thr}}$; ii) reentering flux of the absorbed realizations at the reset potential $x = v_{\mathrm{res}}$; and iii) a lower bound for the membrane potential, implemented as a reflecting barrier at $x = v_{\min}$ Knight et al. (1996); Brunel and Sergi (1998); Brunel and Hakim (1999); Mattia and Del Giudice (2002).

In general, Eq. (3) is hard to solve and only approximate solutions (mainly under stationary conditions) have been worked out Risken (1984); Doering et al. (1987); Hänggi and Jung (1995); Fourcaud and Brunel (2002). This is due both to the peculiar boundary conditions and to its intrinsic nonlinearity, as the equation depends on $\nu(t)$, the flux of realizations crossing the emission threshold $v_{\mathrm{thr}}$:

$$\nu(t) = \int_{-\infty}^{\infty} F_x(v_{\mathrm{thr}}, y)\, dy \equiv \int_{-\infty}^{\infty} \nu(y,t)\, dy\,, \tag{4}$$

where $\nu(y,t)$ is the ‘partial’ rate of spikes emitted by the subpopulation with synaptic current $y$. Here we derive a solution for this two-dimensional problem without making any assumption like stationarity in time or restricted ranges for the synaptic filtering timescale. We extend the spectral expansion approach exploited to study the one-dimensional case Knight et al. (1996); Mattia and Del Giudice (2002), by describing $P(x,y,t)$ as the superposition of an infinite set of one-dimensional interacting sub-populations, labeled by $y$. Sub-population dynamics are studied, as in Mattia and Del Giudice (2002), by projecting $P$ at fixed $y$ on the eigenfunctions $\phi_n$ of a suited one-dimensional FP operator, which here we set to be $\tilde L_x = L_x + \frac{\sigma^2(y)}{2\tau_m}\partial_x^2$, i.e., with an additional diffusion term with respect to the $L_x$ of Eq. (3). This allows us to rewrite this equation in terms of the expansion coefficients $a_n(y,t) = \langle \psi_n | P \rangle$, which now explicitly depend on the auxiliary variable $y$ and where $\psi_n$ is the $n$-th eigenfunction of the adjoint operator of $\tilde L_x$. The stationary mode with eigenvalue $\lambda_0 = 0$ has $\psi_0 = 1$, thus $a_0$ is the marginal probability density $\rho_y(y,t) = \int P(x,y,t)\,dx$.

The dynamics of the $a_n$ can be worked out by deriving $a_n = \langle\psi_n|P\rangle$ with respect to time Knight et al. (1996); Mattia and Del Giudice (2002), and rewriting $\partial_t P$ from Eq. (3):

$$\dot a_n = \lambda_n a_n + \sum_m \langle \psi_n | L_y\, a_m \phi_m\rangle - \frac{\sigma^2(y)}{2\tau_m}\sum_m a_m\, \langle \psi_n | \partial_x^2 \phi_m\rangle\,. \tag{5}$$

For $n = 0$, the dynamics of $\rho_y$ can be worked out explicitly, as both $\psi_0$ and $\lambda_0$ do not depend on $y$:

$$\dot\rho_y = L_y\,\rho_y - \rho_y\,\frac{\sigma^2(y)}{2\tau_m}\int_{v_{\min}}^{v_{\mathrm{thr}}} \partial_x^2\, \phi_0(x,y)\, dx\,.$$

The integral on the r.h.s. can be solved by parts and turns out to be 0, thanks both to the conservation of the flux of realizations exiting from $v_{\mathrm{thr}}$ and re-entering in $v_{\mathrm{res}}$, and to the reflecting barrier condition in $v_{\min}$, provided that the reasonable assumption of vanishing boundary terms holds (this is surely true for $\phi_0$, as $\phi_0$ is a probability density). Thus, the dynamics reduces to

$$\dot\rho_y = L_y\,\rho_y = \frac{\partial_y\!\left[(y - \mu_I)\,\rho_y\right]}{\tau_s} + \frac{\sigma_I^2}{2\tau_s}\,\partial_y^2\,\rho_y\,. \tag{6}$$

Relying on the same spectral expansion as in Mattia and Del Giudice (2002), each ‘partial’ rate $\nu(y,t)$ in Eq. (4) can be decomposed as follows:

$$\nu(y,t) = \rho_y(y,t)\,\Phi(y,\nu) + \sum_{n\neq 0} a_n(y,t)\, f_n(y,\nu)\,, \tag{7}$$

where the stationary mode contribution is separated from the others. Here, $\Phi(y,\nu)$ is the emission rate of the stationary one-dimensional density at fixed $y$, while the other nonstationary modes contribute with the fluxes $f_n(y,\nu)$. Both $\Phi$ and $f_n$ depend on the total emission rate $\nu$ through the moments of the input current.

## III Dimensional reduction of the emission rate dynamics

Equations (4)–(7) together are a mere reformulation of the original problem (3), not a solution. However, in this framework a perturbative approach can be envisaged. From Eq. (6), the synaptic current turns out to have time-dependent mean $\mu_y$ and variance $\sigma_y^2$

$$\tau_s\,\dot\mu_y = -\mu_y + \mu_I\,, \qquad \tau_s\,\dot\sigma_y^2 = -2\,\sigma_y^2 + \sigma_I^2\,, \tag{8}$$

like an Ornstein-Uhlenbeck process with nonstationary input moments $\mu_I$ and $\sigma_I^2$ Risken (1984). Hence, the displacement in time of the synaptic current is of order $\sigma_y$ and, for small $\sigma_I$, we can assume a negligible role for subpopulations labeled with a $y$ distant enough from $\mu_y$. This assumption of a narrow $\rho_y$ centered around $\mu_y$ allows us to simplify the integrals across the $y$ domain weighted by the expansion coefficients $a_n(y,t)$. Indeed, approximations of these integrals can be worked out by expanding in Taylor series the $y$-dependent functions in the integrands. By combining Eqs. (4) and (7), the total emission rate can then be rewritten as

$$\nu = \int_{-\infty}^{\infty} \nu(y,t)\, dy = \Phi(\mu_y,\nu)\int_{-\infty}^{\infty}\rho_y(y,t)\,dy + \sum_{n\neq0} f_n(\mu_y,\nu)\int_{-\infty}^{\infty} a_n(y,t)\,dy + O(\sigma_I) \equiv \Phi_0(\nu) + \sum_{n\neq0} a_n^0\, f_n^0(\nu) + O(\sigma_I)\,, \tag{9}$$

where the 0-th order terms of the Taylor expansions are defined as $\Phi_0(\nu) \equiv \Phi(\mu_y,\nu)$ and $f_n^0(\nu) \equiv f_n(\mu_y,\nu)$, together with the introduction of the new integral expansion coefficients $a_n^0(t) \equiv \int a_n(y,t)\,dy$. The next step to find a self-consistent approximated spectral expansion of Eq. (3) is to work out the dynamics of these $a_n^0$. This can be done by integrating both sides of Eq. (5) over $y$ and constraining the additional diffusion term in $\tilde L_x$ to be as small as the fluctuation size of the synaptic current, $\sigma^2(y) = O(\sigma_I^2)$. The dynamics of the integral expansion coefficients reduces to

$$\dot a_n^0 = \int_{-\infty}^{\infty} \lambda_n(y)\, a_n(y,t)\, dy + \sum_m \int_{-\infty}^{\infty}\Big\langle \psi_n \Big| \partial_y\Big(\frac{y-\mu_I}{\tau_s}\, a_m\,\phi_m\Big)\Big\rangle\, dy + O(\sigma_I^2) = \lambda_n^0\, a_n^0 - \frac{\mu_y-\mu_I}{\tau_s}\sum_m \langle \partial_y\psi_n|\phi_m\rangle\big|_{y=\mu_y}\, a_m^0 + O(\sigma_I)\,,$$

where we integrated by parts the second integral and set $\lambda_n^0 \equiv \lambda_n(\mu_y)$. This equation for the $a_n^0$ can be further recast as follows, by taking into account Eq. (8):

$$\dot a_n^0 = \lambda_n^0\, a_n^0 + \dot\mu_y \sum_m \langle \partial_y\psi_n|\phi_m\rangle\big|_{y=\mu_y}\, a_m^0 + O(\sigma_I)\,. \tag{10}$$

This is a dimensional reduction of Eq. (5), as the coefficients $a_n^0$ do not depend on $y$, and it holds provided that the synaptic current has a narrow distribution across neurons at each time (i.e., small $\sigma_I$), leaving the timescale $\tau_s$ unconstrained.

To work out a self-consistent equation for the emission rate $\nu$, we still need to choose a suited $\sigma^2(y)$ in $\tilde L_x$. To this purpose, the case of instantaneous synaptic transmission ($\tau_s \to 0$) can be used as reference, as Eq. (3) then reduces to a one-dimensional FP equation with the only operator $L_x$ Brunel and Hakim (1999); Fusi and Mattia (1999); Mattia and Del Giudice (2002). Hence, $\tilde L_x$ must tend to this operator for vanishing $\tau_s$, which is the case if $\sigma^2(y) = \sigma_I^2$. Indeed, in this limit $\rho_y \to \delta(y - \mu_y)$, being $\mu_y \to \mu_I$ from Eq. (8). This implies that the eigenvalues and eigenfunctions in Eq. (10) are the same as in the one-dimensional case with $\delta$-correlated synaptic input, but with collective emission rate $\tilde\nu = \mu_y/(J C)$.

In conclusion, this dimensional reduction extends the emission rate equation previously worked out for instantaneous synaptic transmission Mattia and Del Giudice (2002) and, by combining Eqs. (8)–(10) and neglecting $O(\sigma_I)$ terms, it reads

$$\begin{cases} \dot{\vec a}^{\,0} = \left(\Lambda^0 + W^0\,\dot\mu_y\right)\vec a^{\,0} + \vec w^{\,0}\,\dot\mu_y \\ \dot\mu_y = \left(\mu_I(\nu) - \mu_y\right)/\tau_s \\ \nu = \Phi_0 + \vec f^{\,0}\cdot\vec a^{\,0} \end{cases} \tag{11}$$

Here, the infinite vectors $\vec a^{\,0} = \{a_n^0\}$ and $\vec f^{\,0} = \{f_n^0\}$ are introduced together with the diagonal eigenvalue matrix $\Lambda^0 = \mathrm{diag}\{\lambda_n^0\}$. Separating the stationary from the non-stationary modes, the synaptic coupling vector $\vec w^{\,0}$ (with elements $\langle\partial_y\psi_n|\phi_0\rangle|_{y=\mu_y}$) and matrix $W^0$ (with elements $\langle\partial_y\psi_n|\phi_m\rangle|_{y=\mu_y}$ for $m \neq 0$) are also included, respectively. For the above chosen $\sigma^2(y)$, we remark that all these terms depend on the total emission rate $\nu$ only through $\mu_y$.

## IV Local and distributed inertia equivalence

So far, we considered the filtering activity operated by local synapses, which transmit each incoming spike as a post-synaptic potential non-instantaneous in time. Here, we recall the known dynamics of a network of spiking neurons where synaptic transmission is instantaneous but a distribution of axonal delays is taken into account Brunel and Hakim (1999); Mattia and Del Giudice (2002, 2003). In this one-dimensional case, the network dynamics can be simply worked out by replacing $\nu(t)$ in $\mu_I$ and $\sigma_I^2$ (see Eqs. (2)) with the instantaneous rate $\tilde\nu(t)$ of spikes received by the neurons when a distribution of axonal transmission delays is taken into account. Interestingly, the coefficients of the related emission rate equation turn out to have a straightforward relationship with those in Eq. (11). Indeed, the elements of the synaptic coupling vector and matrix are now those of Eq. (11) multiplied by $J C$, since

$$\tilde\nu = \frac{\mu_y}{J\,C}$$

as shown above. Due to this, the sought emission rate equation reduces to

$$\begin{cases} \dot{\vec a} = \left(\Lambda^0 + W^0\, J C\,\dot{\tilde\nu}\right)\vec a + \vec w^{\,0}\, J C\,\dot{\tilde\nu} \\ \dot{\tilde\nu} = \left(\nu - \tilde\nu\right)/\tau_d \\ \nu = \Phi_0 + \vec f^{\,0}\cdot\vec a \end{cases} \tag{12}$$

where the specific delay distribution

$$\rho(\delta) = \frac{1}{\tau_d}\, e^{-\delta/\tau_d}\,\Theta(\delta) \tag{13}$$

has been taken into account. This leads $\tilde\nu(t)$ to be a version of the collective emission rate $\nu(t)$ smoothed in time by a first-order low-pass filter with decay time $\tau_d$. Here $\Theta(\delta)$ is the Heaviside function, as only positive transmission delays are admitted.
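That the exponential delay distribution (13) acts exactly as a first-order low-pass filter can be checked numerically: convolving an arbitrary rate trace $\nu(t)$ with the kernel of Eq. (13) must reproduce the solution of $\tau_d\dot{\tilde\nu} = \nu - \tilde\nu$. A minimal sketch (the test signal and all parameter values are arbitrary):

```python
import math

tau_d, dt, t_max = 5.0, 0.05, 100.0    # delay constant, step and horizon (ms)
n = int(t_max / dt)
# arbitrary test rate trace: 10 Hz mean plus a slow oscillation
nu = [10.0 + 4.0 * math.sin(2.0 * math.pi * k * dt / 30.0) for k in range(n)]

# (i) distributed inertia: convolution with rho(delta) = exp(-delta/tau_d)/tau_d
kernel = [math.exp(-j * dt / tau_d) * dt / tau_d for j in range(n)]
nu_conv = [sum(kernel[j] * nu[k - j] for j in range(k + 1)) for k in range(n)]

# (ii) local inertia: Euler integration of tau_d * d(nu~)/dt = nu - nu~
nu_lp = [0.0] * n
for k in range(1, n):
    nu_lp[k] = nu_lp[k - 1] + dt * (nu[k - 1] - nu_lp[k - 1]) / tau_d

# once the initial transient has decayed, the two smoothed rates coincide
err = max(abs(a - b) for a, b in zip(nu_conv[n // 2:], nu_lp[n // 2:]))
print(err)
```

The residual `err` is only due to time discretization and vanishes as `dt` is reduced.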

A comparison between Eqs. (11) and (12) remarkably points out the equivalence of having, in a spiking neuron network, a non-instantaneous synaptic transmission or a suited distribution of axonal delays. In Eq. (12) the expansion coefficients $\vec a$ refer to the one-dimensional density, while in Eq. (11) $\vec a^{\,0}$ is only an effective representation of the two-dimensional density $P(x,y,t)$. Nevertheless, provided that $\tau_d = \tau_s$, the dynamics seen from the perspective of $\nu(t)$ are the same in both cases, as the functions $\Phi_0$, $\vec f^{\,0}$, $\Lambda^0$, $\vec w^{\,0}$ and $W^0$ are the same. In other words, under mean-field approximation and for not too large synaptic current fluctuations, having a local synaptic filtering with cut-off frequency $1/\tau_s$ is equivalent to having random axonal delays with exponential distribution (13) and decay constant $\tau_d = \tau_s$.

## V Testing inertia equivalence in simulation

The theory developed in the previous sections has been tested through extensive numerical simulations by using NEST Gewaltig and Diesmann (2007) and the high-performance custom simulator implementing the event-based approach described in Mattia and Del Giudice (2000). The parameters for the numerical simulations have been identified following the procedure described in Appendix A.2. The network parameters corresponding to the results proposed in this section are summarized in Appendix A.3.

### V.1 Equivalence in the asynchronous state of finite-size networks

A first evaluation of the effectiveness of the low-dimensional description derived above is possible by inspecting the activity, in the presence of endogenous noise, of a network of a finite number $N$ of neurons trapped in a stable asynchronous state. As previously done in Mattia and Del Giudice (2002), finite-size fluctuations can be incorporated into Eqs. (11) and (12) as an additive forcing term in the expansion coefficient dynamics, giving rise to an equation for the emission rate $\nu_N$ of a finite pool of neurons (see Appendix A.1 for details). The resulting dynamics can be linearized around the fixed point $\nu^*$, allowing us to compute the Fourier transform of the rate fluctuations and, from it, the power spectral density $P_\nu(\omega)$, which turns out to be

$$P_\nu(\omega) = \frac{1 + 2\,\mathrm{Re}\!\left[\vec f^{\,0}\cdot\left(i\omega\,\mathbb I - \Lambda^0\right)^{-1}\vec\psi^{\,0}\right]}{\left|\,1 - \left[\Phi_0' + i\omega\,\vec f^{\,0}\cdot\left(i\omega\,\mathbb I - \Lambda^0\right)^{-1}\vec w^{\,0}\right]\rho(i\omega)\,\right|^2}\;\frac{\nu^*}{N}\,. \tag{14}$$

Here, $\mathbb I$ is the identity matrix, all the coefficients depending on $\nu$ are now constants computed at $\nu = \nu^*$, and $\rho(i\omega)$ is the ratio between the Fourier transforms of $\tilde\nu$ and $\nu$, which results to be $\rho(i\omega) = 1/(1 + i\omega\tau_d)$. Note that this is the same as the Fourier transform of the delay distribution (13). If we consider an additional fixed transmission delay $\delta_0$, this transform generalizes to $\rho(i\omega) = e^{-i\omega\delta_0}/(1 + i\omega\tau_d)$ Mattia and Del Giudice (2002, 2003).
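The transfer function $\rho(i\omega)$ can be evaluated directly; in particular, an additional fixed delay $\delta_0$ only adds a phase lag, leaving the low-pass magnitude $1/\sqrt{1+(\omega\tau_d)^2}$ untouched. A small sketch (parameter values are arbitrary):

```python
import cmath, math

def rho_hat(omega, tau_d, delta0=0.0):
    """Fourier transform of the delay distribution: a fixed delay delta0
    followed by an exponential spread with decay constant tau_d."""
    return cmath.exp(-1j * omega * delta0) / (1.0 + 1j * omega * tau_d)

tau_d = 8.0                      # ms, illustrative
omega_c = 1.0 / tau_d            # cut-off angular frequency of the low-pass filter
g = abs(rho_hat(omega_c, tau_d, delta0=2.0))
print(g, 1.0 / math.sqrt(2.0))   # -> equal: the fixed delay only shifts the phase
```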

We compared this theoretical result with the $P_\nu(\omega)$ estimated from simulations of networks composed of simple IF neurons with synaptic filtering (see Appendix A.4 for details), finding a remarkable agreement (Fig. 2). Here we resorted to the VIF neuron model Fusi and Mattia (1999), an extended version of the widely used PIF neuron Gerstein and Mandelbrot (1964), for its amenability to analytical treatment Mattia and Del Giudice (2002) (see Appendix A.3 for details). Besides the good matching between theory and simulations, it is interesting to note how resonant peaks are differently affected by an increasing $\tau_s$. Not surprisingly, high-frequency peaks are dampened due to the low-pass filtering of synaptic transmission. These resonances are related to the synaptic reverberation of spiking activity, which gives rise to the so-called transmission poles in the linearized dynamics Treves (1993); Mattia and Del Giudice (2002). In an excitatory network as in Fig. 2, these are expected to be found at frequencies multiple of the inverse of the average time needed by a presynaptic spike to affect the postsynaptic potential, here roughly the same as $\tau_s$. On the other hand, the low-frequency peaks do not display any shift in frequency, although their power is dampened as well. This is because such peaks are due to the so-called diffusion poles in the linearized dynamics Spiridon and Gerstner (1999); Mattia and Del Giudice (2002). They occur when neurons emit spikes in the drift-dominated (suprathreshold) regime. In this regime, the distribution of the inter-spike intervals is narrow and resonant peaks emerge at multiples of the single-neuron firing rate (equal to 10 Hz in Fig. 2). We remark that, due to the shift toward low frequencies of the transmission peaks, a constructive interference with the diffusion peaks may occur. This explains the non-monotonic change of power of the second diffusion peak at 20 Hz in Fig. 2. Indeed, its power decreases when $\tau_s$ is increased from 0 ms to 4 ms, as expected, whereas due to the mentioned interference it is heightened for longer $\tau_s$, for which the lowest transmission peak is expected to fall close to the 20 Hz diffusion peak.
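The finite-size floor $\nu^*/N$ appearing in Eq. (14) can be illustrated with the simplest possible microscopic model: $N$ uncoupled Poisson neurons, whose population-rate spectrum is flat at that level. The sketch below estimates the two-sided spectral density with an averaged periodogram; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, rate = 200, 10.0              # neurons and single-neuron rate (Hz), illustrative
dt, t_max = 1e-3, 50.0           # 1 ms bins, 50 s of activity
n_bins = int(t_max / dt)

# uncoupled Poisson neurons: only finite-size fluctuations, no interaction
counts = rng.poisson(N * rate * dt, size=n_bins)
nu_N = counts / (N * dt)         # instantaneous population-rate estimate (Hz)

# averaged periodogram (Bartlett's method) of the rate fluctuations
seg = 4096
n_seg = n_bins // seg
x = (nu_N - nu_N.mean())[: n_seg * seg].reshape(n_seg, seg)
psd = (np.abs(np.fft.rfft(x, axis=1)) ** 2).mean(axis=0) * dt / seg  # two-sided density

flat_level = psd[1:].mean()      # skip the (mean-removed) zero-frequency bin
print(flat_level, rate / N)      # flat spectrum at the finite-size floor nu*/N
```

In a recurrent network the spectrum is no longer flat: the denominator of Eq. (14) carves the transmission and diffusion resonances out of this white floor.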

### V.2 Equivalence under noise- and drift-dominated regime

To further test the equivalence between local and distributed inertia, we investigated whether this match holds not only for networks of neurons in the drift-dominated spiking regime, as shown in Fig. 2, but also in fluctuation-driven regimes. Indeed, synaptic filtering may have significantly different effects in these two regimes Brunel et al. (2001); Moreno-Bote and Parga (2004).

Figure 3 shows for comparison the power spectral densities obtained by varying $\tau_s$ and $\tau_d$ in the same excitatory VIF neuron networks in which either non-instantaneous synaptic transmission or a distribution of transmission delays was incorporated (Fig. 3a and b, respectively). Networks with neurons spiking in the drift-dominated regime (top panels) were the same as those shown in Fig. 2. A remarkable agreement between simulations with local and distributed inertia was confirmed, as iso-power curves tightly overlapped (see Fig. 3c) for a wide range of filter timescales. The same good match between these two network types was apparent also in the noise-dominated regime (bottom panels).

As expected in this regime, the diffusion poles become real numbers Mattia and Del Giudice (2002) and the resonant peaks at frequencies multiple of the firing rate disappear, although the high-frequency resonances corresponding to transmission poles remain almost unaffected by this change of regime. It is interesting to point out here that widening the filtering time window (i.e., increasing $\tau_s$ and $\tau_d$) leads to almost flat power spectra, consistently with the flattening of the response amplitude of isolated neurons receiving colored noise as input currents Brunel et al. (2001); Schuecker et al. (2015).

We remark that in Fig. 3a-c comparisons rely on normalized spectra. On the one hand, this means that the power spectrum shape does not depend on the particular inertia implemented in the network. On the other hand, this does not guarantee that the average firing rates are the same and do not change with the inertia-related timescales. To address this issue, in Fig. 3d the measured rates are compared. As expected, the average firing rate when a distribution of delays is incorporated does not change by varying $\tau_d$. The mean-field fixed point mildly overestimates the measured rates, due to the assumptions underlying the diffusion approximation. Indeed, the difference shrinks as the number of synaptic contacts per neuron and the network size increase, keeping unchanged the drift and diffusion coefficients of the related Langevin equation (result not shown in the figures). What is important to note here is rather that the average firing rates measured for local and distributed inertia are different, and that the difference increases with $\tau_s$. This trend is particularly apparent in the noise-dominated regime, and it is not completely unexpected. Indeed, in the small-$\tau_s$ limit Brunel and Sergi (1998); Moreno-Bote and Parga (2004); Schuecker et al. (2015) and in the long synaptic timescale condition Moreno-Bote and Parga (2004), macroscopic differences are known to exist. This phenomenon is not captured by our zeroth-order perturbative approach, since the total flux of realizations crossing the emission threshold $v_{\mathrm{thr}}$, which contributes to the total emission rate, depends on the shape of the probability current distribution Brunel and Sergi (1998); Schuecker et al. (2015). Indeed, as shown in Fig. 1c-d, the section of $p(V,I)$ close to the emission threshold shrinks when $\tau_s$ increases from 4 ms to 64 ms. In our perturbative approach this dependence is completely neglected, as the distribution $\rho_y$ is assumed to be a Dirac $\delta$.

### V.3 Independence from spiking neuron models

The developed theory is of general applicability, i.e., it applies to a wide range of spiking neuron models. As a result, the equivalence between local and distributed inertia proved for a network of simplified VIF neurons, together with the theoretical expression for the power spectra of $\nu_N$, are both expected to hold also for networks of more realistic single-cell models like the leaky IF (LIF, Tuckwell (1988)) and the exponential IF (EIF, Fourcaud-Trocmé et al. (2003)) neurons.

To verify this expectation, we directly simulated networks of LIF and EIF neurons with a stable mean-field equilibrium point $\nu^*$. In Fig. 4 the mean $P_\nu(\omega)$ for these kinds of networks is compared both under the drift-dominated (top panels) and the noise-dominated (bottom) regime. A remarkable agreement is apparent for both LIF and EIF neuron networks (Fig. 4a and b, respectively), confirming the generality of the developed approach. To further test the absence of any bias due to the specific neuron model chosen, we computed the relative differences between the power spectral densities when non-instantaneous synaptic filtering and a transmission delay distribution are taken into account. The cumulative distributions of these discrepancies across all tested inertia time scales and Fourier frequencies are shown in Fig. 4c. Interestingly, no significant differences are visible for the three chosen models (VIF, LIF, EIF), further proving that in all regimes (drift- and noise-dominated) our dimensional reduction does not depend on the specific single-neuron dynamics.

Starting from this, it is not surprising to note here that spectral features similar to those highlighted in Fig. 3 for VIF neuron networks are also displayed by LIF and EIF neuron networks. More specifically, resonant peaks at frequencies multiple of the firing rate in the drift-dominated regime, and resonances at higher frequencies due to the transmission poles – i.e., those related to the average spike transmission delay – are also visible in LIF and EIF neuron networks (Fig. 4a and b, respectively).

### V.4 Equivalence beyond the asynchronous state

So far, we tested the validity of the equivalence between non-instantaneous synaptic transmission and delay inhomogeneity in linearizable dynamical regimes like the asynchronous state. To further assess the generality of our theoretical results, we simulated an inhibitory network of LIF neurons (see Appendix A.3) undergoing a transition from a point-attractor dynamics to a regime of collective oscillations through a supercritical Hopf bifurcation as in Brunel and Hakim (1999).

Figure 5a shows three examples of population activity for increasing magnitude of the inhibitory coupling, when a non-instantaneous synaptic transmission with decay time $\tau_s$ is incorporated. For relatively weak couplings, the network is in an asynchronous state and its activity fluctuates around the equilibrium point $\nu^*$ (black line). By increasing the synaptic inhibition above a threshold value, the network starts to display coherent global oscillations of the emission rate, with increasing amplitude (light and dark gray lines).

We spanned the same range of synaptic strengths making the Hopf bifurcation apparent in networks with both synaptic filtering and delay distribution, taking care to use the same mean-field parameters (Fig. 5b). For each coupling strength we computed the external excitatory current needed to keep the fixed point at $\nu^*$ unchanged, regardless of its stability (see details in Appendix A.3). We compared the dynamics of the two network types by measuring the standard deviation $\sigma_\nu$ across time of the emission rate under stationary conditions (i.e., discarding the first second of the time series). As a remarkable result, we found an almost perfect match between the $\sigma_\nu$ measured in networks with local and distributed inertia: their relative mismatch is always small (compare solid and dashed lines). This also implies that, regardless of the inertia type, the transition across regimes roughly occurs at the same critical coupling, with a discontinuity of the derivative of $\sigma_\nu$, highlighting in both cases the same supercritical Hopf bifurcation. Similar results were obtained for longer filtering timescales (Fig. 5c), although in this case significant differences can be measured for large enough couplings.
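The signature used here – a discontinuity in the slope of $\sigma_\nu$ at the oscillation onset – can be reproduced in a toy delayed inhibitory rate model, which is only a stand-in for the simulated LIF network (all parameters and the sigmoidal gain are illustrative assumptions):

```python
import math

def rate_std(J, tau=10.0, delta=10.0, nu_max=50.0, dt=0.1, t_max=4000.0):
    """Toy delayed inhibitory rate model (a stand-in, NOT the LIF network):
    tau * dnu/dt = -nu + phi(i0 - J * nu(t - delta)), with sigmoidal phi.
    Returns the std of nu over the second half of the run (stationary part)."""
    def phi(u):
        return nu_max / (1.0 + math.exp(-u))
    i0 = J * nu_max / 2.0                        # places the fixed point at nu_max / 2
    d_steps = int(delta / dt)
    buf = [nu_max / 2.0 + 1.0] * (d_steps + 1)   # history buffer, kicked off the fixed point
    trace = []
    n_steps = int(t_max / dt)
    for k in range(n_steps):
        nu_new = buf[-1] + dt * (-buf[-1] + phi(i0 - J * buf[0])) / tau
        buf.append(nu_new)
        buf.pop(0)                               # buf[0] is always nu(t - delta)
        if k >= n_steps // 2:
            trace.append(nu_new)
    m = sum(trace) / len(trace)
    return math.sqrt(sum((x - m) ** 2 for x in trace) / len(trace))

std_weak = rate_std(J=0.05)   # weak inhibition: stable fixed point, perturbation dies out
std_strong = rate_std(J=0.8)  # strong inhibition: limit cycle beyond the Hopf bifurcation
print(std_weak, std_strong)
```

Below the critical coupling the stationary $\sigma_\nu$ is essentially zero; above it, a finite-amplitude limit cycle makes $\sigma_\nu$ jump, mirroring the kink observed in Fig. 5b-c.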

Altogether, these results further prove the generality of the theoretical equivalence derived in Sec. IV, clearly pointing out that local and distributed inertia give rise to the same collective dynamics not only in a tractable condition like the linearizable asynchronous state, but also for nonstationary network states like the global oscillatory regime.

## VI Discussion

Local and distributed inertia in networks of spiking neurons are equivalent, provided that the non-instantaneous post-synaptic currents elicited by incoming spikes (local inertia) have the same time course as the distribution of spike transmission delays across axons and dendrites (distributed inertia). This is one of the main findings of this work, and we proved such equivalence by developing a novel population density approach to the network dynamics of neurons with non-Markovian membrane potentials. This general method consists of a dimensional reduction of the two-dimensional population density dynamics arising from the Markovian embedding Risken (1984); Hänggi et al. (1985) of the membrane potential evolution. The effective one-dimensional description we obtained relies on both an extended mean-field approximation Amit and Brunel (1997) and the assumption of a relatively low synaptic noise, without requiring any additional constraint on the correlation time due to the incorporated non-instantaneous synaptic transmission. This is our second main result, from which we obtained the same emission rate equation (ERE) as the one arising from the spectral expansion of the one-dimensional FP equation for the population density in the absence of synaptic filtering Mattia and Del Giudice (2002), but with an important difference: the drift and diffusion coefficients, related to the mean and variance of the input currents, respectively, now depend on a low-pass filtered version of the instantaneous firing rate of the network. We tested both the generality of the developed theory and the equivalence between local and distributed inertia through extensive numerical simulations, finding a remarkable agreement with theoretical expectations in a variety of dynamical scenarios, including noise- and drift-dominated regimes, equilibrium states and collective oscillations, excitatory and inhibitory synaptic connections, and for a wide set of spiking neuron models.

### vi.1 Comparison with previous studies

Reducing the dynamics of a non-Markovian system – which in turn can be embedded into a larger set of coupled Markovian processes – to an effective one-dimensional Markovian problem has a long history in statistical physics Risken (1984); Hänggi and Jung (1995). Of course, the resulting approximation schemes have unavoidable limitations in representing the true dynamics of the original non-Markovian problem. Nevertheless, many successes accumulated during the 1980s Sancho et al. (1982); Hänggi et al. (1985); Fox (1986); Grigolini (1986); Jung and Hänggi (1987); Masoliver et al. (1987). More specifically, many of these works focused on the nonstationary dynamics underlying the specific problem of the first-passage time (FPT), thus taking into account an absorbing barrier, similarly to the spike emission threshold in neuronal modeling. Remarkably, in this FPT framework too, stochastic processes driven by ‘colored noise’ can be effectively approximated by one-dimensional Markovian processes, provided that the related FP equations incorporate a state-dependent nonlinear diffusion coefficient Hänggi et al. (1985); Masoliver et al. (1987). This coefficient is time-independent only in the limit of small noise and/or short correlation times Fox (1986); Grigolini (1986). Other perturbative approaches have been worked out to cover the long correlation-time range Jung and Hänggi (1987); H’walisz et al. (1989), but the effectiveness of such Markovian approximations is mainly limited by the lack of a proper treatment of the boundary conditions Doering et al. (1987); Hänggi and Jung (1995); Kłosek and Hagan (1998). Intriguingly, here, by including the proper absorbing-barrier prescription due to the presence of a forcing white noise of arbitrarily small size (similarly to Moreno et al. (2002)), the theoretical derivation of the approximated ERE (11) eventually recovers a representation with time-varying, state-dependent diffusion coefficients (see, for instance, Section V.B of Hänggi and Jung (1995)). Indeed, all the coefficients of this ERE depend on a filtered version of the total emission rate, which from Eq. (4) is in turn a nonlinear functional of the probability density.
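The Markovian embedding invoked above can be made concrete with a short sketch (ours, with arbitrary parameter values): exponentially correlated ‘colored’ noise, non-Markovian on its own, becomes Markovian once it is generated as the solution of an Ornstein-Uhlenbeck equation, whose stationary variance and correlation time we can check numerically.

```python
import numpy as np

# Illustrative sketch: colored noise via a Markovian (Ornstein-Uhlenbeck)
# embedding,  tau * dy = -y dt + sqrt(2 tau) * sigma * dW,
# with stationary variance sigma^2 and autocorrelation ~ exp(-s/tau).
# Parameter values are arbitrary.
rng = np.random.default_rng(0)
dt, tau, sigma, n = 0.01, 1.0, 1.0, 400_000
xi = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for k in range(1, n):
    # explicit Euler-Maruyama step of the OU equation
    y[k] = y[k - 1] - dt * y[k - 1] / tau + sigma * np.sqrt(2.0 * dt / tau) * xi[k]

lag = int(tau / dt)
var = np.mean(y * y)                      # stationary variance, ~ sigma^2
corr = np.mean(y[:-lag] * y[lag:]) / var  # autocorrelation at lag tau, ~ e^(-1)
```

The pair (y, driving white noise) is Markovian even though y alone is not, which is exactly the structure exploited by the two-dimensional population density of the paper.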

The approximated population density approach developed here also brought another interesting result. Typically, power spectra and Fourier transfer functions of the population activity in networks of spiking neurons with synaptic filtering are characterized only outside biologically relevant regimes. This is because the available theoretical approaches are perturbative and target extreme conditions, in which either fast Brunel et al. (2001); Fourcaud and Brunel (2002); Schuecker et al. (2015) or slow Moreno-Bote and Parga (2004, 2006) synapses are considered, or in which neurons work under strongly drift-dominated (low-noise) regimes Lindner (2004); Schwalger et al. (2015). Here, we bridge this gap by presenting a theoretical description valid for both noise- and drift-dominated regimes, and for the whole spectrum of synaptic time scales and Fourier frequencies (Figs. 2–4).

It is also interesting to remark that the proved equivalence between local and distributed inertia allows one to map a many-body system of coupled non-Markovian elements onto another many-body system in which many Markovian units have delayed interactions. In the latter framework, different types of collective dynamics and, in particular, synchronization have been widely studied Boccaletti et al. (2006). The related theoretical results are thus expected to be exportable to the case in which an equivalent synaptic filtering is taken into account. An intriguing example is the use of effective time-delayed feedback loops to control pathological neuronal states like the synchronous firing observed in Parkinson’s disease or epilepsy Schiff et al. (1994); Rosenblum and Pikovsky (2004). The same strategy could be implemented by relying on a suitably slow synaptic feedback, taking advantage of the natural plasticity of the inter-neuronal couplings. Due to this neurophysiological adaptability, the optimal delayed feedback needed to control pathological dynamics could in principle be ‘learned’, similarly to what occurs in ‘echo-state networks’ and ‘liquid-state machines’ Jaeger and Haas (2004); Maass et al. (2007); Sussillo and Abbott (2009).

### vi.2 Dimensional reduction

Other dimensional reductions of the Fokker-Planck dynamics arising when synaptic filtering is incorporated in networks of spiking neurons have been proposed in the past to obtain effective kinetic representations amenable to numerical integration Cai et al. (2004); Rangan and Cai (2006); Ly and Tranchina (2007). They mainly rely on the centered moment closure method usually applied to kinetic problems in statistical physics Levermore (1996). The drawbacks of this approach are related to its limited applicability to noise-dominated regimes Ly and Tranchina (2007); Ly (2013), which are of particular relevance in neuroscience, being associated with activity states with balanced excitation and inhibition van Vreeswijk and Sompolinsky (1996); Shu et al. (2003); Yizhar et al. (2011). Even when some of these limitations are removed by modifying the standard mean-field approximation Ly (2013), this kind of dimensional reduction mainly remains a computational method in which, in addition to the dynamics of the centered moments, a one-dimensional partial differential equation has to be numerically integrated. On the contrary, our approach, in which a spectral expansion of the two-dimensional FP operator is combined with a suitable centered moment closure, is not only amenable to numerical integration: theoretical insights into the dynamics of the investigated system can also be effectively worked out Mattia and Del Giudice (2002, 2004); Schaffer et al. (2013) by focusing on the slowest modes of the expansion.

### vi.3 Limitations

The developed theoretical description of the network dynamics with synaptic currents is approximated, and as such it is subject to several limitations. Our starting point is a population density approach which, like others Knight et al. (1996); Brunel and Hakim (1999); Mattia and Del Giudice (2002), relies on both a diffusion and a mean-field approximation. The underlying hypotheses require that each neuron receives a large amount of spikes per unit time, and that postsynaptic currents due to single spikes induce small changes of the membrane potential. These conditions are well satisfied in neocortex Tuckwell (1988); Amit and Brunel (1997).
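The hypotheses behind the diffusion approximation can be checked with a short sketch (our own, with arbitrary parameter values): when each neuron receives many small synaptic events per unit time, the Poissonian input in a time bin has mean and variance matching the drift and diffusion terms of the mean-field description.

```python
import numpy as np

# Illustrative sketch of the diffusion approximation: with C presynaptic
# neurons firing at rate nu and synaptic efficacy J, the input received in a
# bin dt is J * Poisson(C*nu*dt). Its mean J*C*nu*dt and variance J^2*C*nu*dt
# are the drift and diffusion terms of the mean-field description.
# Parameter values are arbitrary.
rng = np.random.default_rng(1)
J, C, nu, dt, n = 0.02, 1000, 5.0, 0.001, 200_000
increments = J * rng.poisson(C * nu * dt, size=n)

mean_emp, var_emp = increments.mean(), increments.var()
mean_th, var_th = J * C * nu * dt, J**2 * C * nu * dt
```

With many input spikes per bin and a small efficacy J, the Poissonian increments are well approximated by Gaussian ones with these two moments, which is the regime assumed throughout the paper.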

To reduce the dimensionality of the population density dynamics, we further resorted to two simplifications. First, to safely manage the boundary conditions in the spectral expansion, we introduced a forcing white-noise current of arbitrarily small size (as in Moreno et al. (2002)). The intrinsic stochasticity of the ionic channels underlying the neuronal membrane potential can be the source of such an additional input current Linaro et al. (2011); Goldwyn and Shea-Brown (2011). But, even in the limit of vanishing noise amplitude, we expect to find a good agreement between our theory and simulations. Indeed, although this amplitude was widely varied in the comparisons presented in Figs. 2–5, the agreement between theory and the simulations, with either synaptic filtering or a delay distribution, did not show any significant difference. However, it is important to remark that, in the absence of this external noise, the spectral expansion we used is no longer valid, as it relies on the eigenfunctions of a FP operator whose probability density vanishes at the absorbing barrier. This boundary condition does not hold without white noise Doering et al. (1987); Kłosek and Hagan (1998); Brunel and Sergi (1998); Schuecker et al. (2015), in which case a different basis for the probability density decomposition has to be adopted.

The second strong simplification we implemented was to assume that the fluctuations of the synaptic currents are small enough. Such a low-noise regime allowed us to assume the marginal distribution of the current to be narrowly peaked around its time-varying mean. As a result, the dynamics were reduced to a one-dimensional FP equation centered around this mean. Rather surprisingly, such a rough approximation, in which the shape of the marginal distribution is completely neglected, seems to work remarkably well. We suspect this is due to the symmetry of the narrow distribution, such that the contributions of the terms on one side of the mean are almost perfectly compensated by those on the other side. The only main discrepancy we found is in the mean firing rate, which our approximated theory systematically overestimated (see Fig. 3d). This is an expected error, since synaptic filtering is known to mildly reduce the emission rate of spikes in both the small- Brunel and Sergi (1998); Moreno-Bote and Parga (2004); Schuecker et al. (2015) and the large- Moreno-Bote and Parga (2004) limit of the synaptic time constant. Such a discrepancy should disappear from our approximated development if higher orders of the centered moment closure were incorporated. Indeed, this would allow us to also take into account other features of the marginal distribution, such as its standard deviation and skewness.

###### Acknowledgements.
Work partially supported by the University of Genoa (M.S.) and by the EU Horizon 2020 Research and Innovation Programme under HBP SGA1 (grant no. 720270 to M.M.).

## References

• Buzsáki et al. (2013) G. Buzsáki, N. K. Logothetis,  and W. Singer, Neuron 80, 751 (2013).
• Sarpeshkar (1998) R. Sarpeshkar, Neural Comput. 10, 1601 (1998).
• Laughlin and Sejnowski (2003) S. B. Laughlin and T. J. Sejnowski, Science 301, 1870 (2003).
• Atay (2003) F. M. Atay, Phys. Rev. Lett. 91, 094101 (2003).
• Mattia and Del Giudice (2003) M. Mattia and P. Del Giudice, Scientiae Mathematicae Japonicae 58, 335 (2003).
• Merolla et al. (2014) P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar,  and D. S. Modha, Science 345, 668 (2014).
• Wang (1999) X.-J. Wang, J. Neurosci. 19, 9587 (1999).
• Abbott and Regehr (2004) L. F. Abbott and W. G. Regehr, Nature 431, 796 (2004).
• Fourcaud and Brunel (2002) N. Fourcaud and N. Brunel, Neural Comput. 14, 2057 (2002).
• Brunel and Sergi (1998) N. Brunel and S. Sergi, J. Theor. Biol. 195, 87 (1998).
• Brunel et al. (2001) N. Brunel, F. S. Chance, N. Fourcaud,  and L. F. Abbott, Phys. Rev. Lett. 86, 2186 (2001).
• Schuecker et al. (2015) J. Schuecker, M. Diesmann,  and M. Helias, Phys. Rev. E 92, 052119 (2015).
• Moreno-Bote and Parga (2004) R. Moreno-Bote and N. Parga, Phys. Rev. Lett. 92, 028102 (2004).
• Moreno-Bote and Parga (2006) R. Moreno-Bote and N. Parga, Phys. Rev. Lett. 96, 028101 (2006).
• Lindner (2004) B. Lindner, Phys. Rev. E 69, 1 (2004).
• Schwalger et al. (2015) T. Schwalger, F. Droste,  and B. Lindner, J. Comput. Neurosci. 39, 29 (2015).
• Risken (1984) H. Risken, Fokker-Planck Equation (Springer, 1984).
• Hänggi and Jung (1995) P. Hänggi and P. Jung, Adv. Chem. Phys. 89, 239 (1995).
• Mattia and Del Giudice (2002) M. Mattia and P. Del Giudice, Phys. Rev. E 66, 051917 (2002).
• Levermore (1996) C. D. Levermore, J. Stat. Phys. 83, 1021 (1996).
• Destexhe et al. (1998) A. Destexhe, Z. F. Mainen,  and T. J. Sejnowski, “Methods in neuronal modeling,”  (MIT Press, Cambridge, MA, 1998) Chap. Kinetic models of synaptic transmission, pp. 1–25, 2nd ed.
• Burkitt (2006) A. N. Burkitt, Biol. Cybern. 95, 1 (2006).
• Apfaltrer et al. (2006) F. Apfaltrer, C. Ly,  and D. Tranchina, Network 17, 373 (2006).
• Tuckwell (1988) H. C. Tuckwell, Introduction to theoretical neurobiology: volume 2, nonlinear and stochastic theories, Vol. 8 (Cambridge University Press, 1988).
• Fusi and Mattia (1999) S. Fusi and M. Mattia, Neural Comput. 11, 633 (1999).
• Brunel (2000) N. Brunel, J. Comput. Neurosci. 8, 183 (2000).
• Lansky and Ditlevsen (2008) P. Lansky and S. Ditlevsen, Biol. Cybern. 99, 253 (2008).
• Amit and Brunel (1997) D. J. Amit and N. Brunel, Cereb. Cortex 7, 237 (1997).
• Knight et al. (1996) B. W. Knight, D. Manin,  and L. Sirovich, in Symposium on Robotics and Cybernetics: Computational Engineering in Systems Applications, edited by E. Gerf (Cite Scientifique, Cite Scientifique, Lille, France, 1996) pp. 1–5.
• Brunel and Hakim (1999) N. Brunel and V. Hakim, Neural Comput. 11, 1621 (1999).
• Doering et al. (1987) C. R. Doering, P. S. Hagan,  and C. D. Levermore, Phys. Rev. Lett. 59, 2129 (1987).
• Gewaltig and Diesmann (2007) M.-O. Gewaltig and M. Diesmann, Scholarpedia 2, 1430 (2007).
• Mattia and Del Giudice (2000) M. Mattia and P. Del Giudice, Neural Comput. 12, 2305 (2000).
• Gerstein and Mandelbrot (1964) G. L. Gerstein and B. Mandelbrot, Biophys. J. 4, 41 (1964).
• Treves (1993) A. Treves, Network 4, 259 (1993).
• Spiridon and Gerstner (1999) M. Spiridon and W. Gerstner, Network 10, 257 (1999).
• Fourcaud-Trocmé et al. (2003) N. Fourcaud-Trocmé, D. Hansel, C. van Vreeswijk,  and N. Brunel, J. Neurosci. 23, 11628 (2003).
• Hänggi et al. (1985) P. Hänggi, T. J. Mroczkowski, F. Moss,  and P. V. McClintock, Phys. Rev. A 32, 695 (1985).
• Sancho et al. (1982) J. M. Sancho, M. San Miguel, S. L. Katz,  and J. D. Gunton, Phys. Rev. A 26, 1589 (1982).
• Fox (1986) R. F. Fox, Phys. Rev. A 34, 4525 (1986).
• Grigolini (1986) P. Grigolini, Phys. Lett. A 119, 157 (1986).
• Jung and Hänggi (1987) P. Jung and P. Hänggi, Phys. Rev. A 35, 4464 (1987).
• Masoliver et al. (1987) J. Masoliver, B. J. West,  and K. Lindenberg, Phys. Rev. A 35, 3086 (1987).
• H’walisz et al. (1989) L. H’walisz, P. Jung, P. Hänggi, P. Talkner,  and L. Schimansky-Geier, Z. Phys. B 77, 471 (1989).
• Kłosek and Hagan (1998) M. M. Kłosek and P. S. Hagan, J. Math. Phys. 39, 931 (1998).
• Moreno et al. (2002) R. Moreno, J. de la Rocha, A. Renart,  and N. Parga, Phys. Rev. Lett. 89, 288101 (2002).
• Boccaletti et al. (2006) S. Boccaletti, V. Latora, Y. Moreno, M. Chavez,  and D. Hwang, Phys. Rep. 424, 175 (2006).
• Schiff et al. (1994) S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano,  and W. L. Ditto, Nature 370, 615 (1994).
• Rosenblum and Pikovsky (2004) M. G. Rosenblum and A. S. Pikovsky, Phys. Rev. Lett. 92, 114102 (2004).
• Jaeger and Haas (2004) H. Jaeger and H. Haas, Science 304, 78 (2004).
• Maass et al. (2007) W. Maass, P. Joshi,  and E. D. Sontag, PLoS Comput. Biol. 3, e165 (2007).
• Sussillo and Abbott (2009) D. Sussillo and L. F. Abbott, Neuron 63, 544 (2009).
• Cai et al. (2004) D. Cai, L. Tao, M. Shelley,  and D. W. McLaughlin, Proc. Natl. Acad. Sci. USA 101, 7757 (2004).
• Rangan and Cai (2006) A. V. Rangan and D. Cai, Phys. Rev. Lett. 96, 178101 (2006).
• Ly and Tranchina (2007) C. Ly and D. Tranchina, Neural Comput. 19, 2032 (2007).
• Ly (2013)
• van Vreeswijk and Sompolinsky (1996) C. van Vreeswijk and H. Sompolinsky, Science 274, 1724 (1996).
• Shu et al. (2003) Y. Shu, A. Hasenstaub,  and D. A. McCormick, Nature 423, 288 (2003).
• Yizhar et al. (2011) O. Yizhar, L. E. Fenno, M. Prigge, F. Schneider, T. J. Davidson, D. J. O’Shea, V. S. Sohal, I. Goshen, J. Finkelstein, J. T. Paz, K. Stehfest, R. Fudim, C. Ramakrishnan, J. R. Huguenard, P. Hegemann,  and K. Deisseroth, Nature 477, 171 (2011).
• Mattia and Del Giudice (2004) M. Mattia and P. Del Giudice, Phys. Rev. E 70, 052903 (2004).
• Schaffer et al. (2013) E. S. Schaffer, S. Ostojic,  and L. F. Abbott, PLoS Comput. Biol. 9, e1003301 (2013).
• Linaro et al. (2011) D. Linaro, M. Storace,  and M. Giugliano, PLoS Comput. Biol. 7, e1001102 (2011).
• Goldwyn and Shea-Brown (2011) J. H. Goldwyn and E. Shea-Brown, PLoS Comput. Biol. 7, e1002247 (2011).
• Ricciardi and Sacerdote (1979) L. M. Ricciardi and L. Sacerdote, Biol. Cybern. 35, 1 (1979).

## Appendix A

### a.1 Derivation of the power spectral density Pν(ω) of νN(t)

In a network of a finite number of neurons, the instantaneous firing rate displays endogenous fluctuations. Indeed, due to the finite number of spikes emitted by such a network in relatively small time intervals, the Poissonian counting statistics makes the instantaneous firing rate well described by a stochastic variable whose variance scales as $1/N$ Spiridon and Gerstner (1999); Brunel and Hakim (1999); Mattia and Del Giudice (2002). These activity fluctuations can be seen as an ongoing stimulation of the infinite-size network and can be taken into account by introducing an effective stochastic driving force, different for each eigenmode of the FP operator Mattia and Del Giudice (2002). Following this approach and generalizing Eq. (11) to the case of a finite-size network, we obtain the following stochastic emission rate equations

$$
\begin{cases}
\dot{\vec a}_0 = \left(\Lambda_0 + W_0\,\dot\mu_y\right)\vec a_0 + \vec w_0\,\dot\mu_y + \vec\psi_0\,\eta_N \\
\dot\mu_y = \left(\mu_I(\nu_N) - \mu_y\right)/\tau_s \\
\nu = \Phi_0 + \vec f_0 \cdot \vec a_0 \\
\nu_N = \nu + \eta_N,
\end{cases} \tag{15}
$$

where $\eta_N$ models the finite-size fluctuations of the instantaneous firing rate $\nu_N$ around its infinite-size limit $\nu$. For large enough $N$, $\eta_N$ is well approximated by a Gaussian memoryless white noise with zero mean and variance $\nu/N$. In the above equations the coefficients depend on $\mu_y$ instead of $\nu$, as the infinitesimal mean and variance are functions of the instantaneous spike rate of the presynaptic neurons. The additional coefficients $\vec\psi_0$ result from having incorporated the finite-size fluctuations into the boundary condition enforcing the conservation of the flux of realizations exiting from the emission threshold and reentering at the reset potential Mattia and Del Giudice (2002).
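The finite-size scaling of these fluctuations can be verified directly (our own sketch, with arbitrary parameter values): the variance of the instantaneous firing rate of a pool of independent Poisson neurons shrinks in proportion to 1/N.

```python
import numpy as np

# Illustrative sketch: the instantaneous rate nu_N = (spike count)/(N*dt) of
# N independent Poisson neurons firing at rate nu has variance nu/(N*dt),
# i.e., fluctuations whose variance scales as 1/N. Values are arbitrary.
rng = np.random.default_rng(2)
nu, dt, bins = 10.0, 0.001, 100_000

def rate_variance(N):
    counts = rng.poisson(N * nu * dt, size=bins)
    return np.var(counts / (N * dt))

v_small, v_large = rate_variance(1000), rate_variance(4000)
ratio = v_small / v_large                 # quadrupling N quarters the variance
```

This is the 1/N scaling that enters the white-noise term driving the stochastic emission rate equations (15).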

In the $N \to \infty$ limit, the network dynamics can be set into an asynchronous state such that $\nu = \nu^*$ is a fixed point [$\Phi(\nu^*) = \nu^*$] with local stability [$\Phi'(\nu^*) < 1$] and the single-neuron spiking activity is asynchronous Amit and Brunel (1997); Brunel (2000). In this state, finite-size fluctuations make $\nu_N$ wander around the fixed point $\nu^*$, and Eq. (15) can be linearized by neglecting terms of order higher than one in the Taylor series expansion around $\nu^*$. From this linearized dynamics, the Fourier transform can be obtained (see Mattia and Del Giudice (2002, 2004) for details), and the power spectral density turns out to be

$$
P_\nu(\omega) = \frac{1 + 2\,\mathrm{Re}\!\left[\vec f_0 \cdot (i\omega I - \Lambda_0)^{-1}\,\vec\psi_0\right]}{\left|\,1 - \left[\Phi'_0 + i\omega\,\vec f_0 \cdot (i\omega I - \Lambda_0)^{-1}\,\vec w_0\right]\rho(i\omega)\,\right|^{2}}\,\frac{\nu^*}{N}, \tag{16}
$$

which is the same expression detailed in Eq. (14).

### a.2 Identification of mean-field parameters

To find the parameters for the numerical simulations (with NEST Gewaltig and Diesmann (2007) and the high-performance custom simulator implementing the event-based approach described in Mattia and Del Giudice (2000)), the following procedure has been adopted.

An emission rate fixed point $\nu^*$ was fixed a priori, and the contour line in the $(\mu, \sigma^2)$ plane defined by $\Phi(\mu, \sigma^2) = \nu^*$ was numerically determined. $\Phi$ was computed analytically for the VIF Fusi and Mattia (1999) and LIF Ricciardi and Sacerdote (1979); Amit and Brunel (1997) neuron models. For the EIF neuron model, in the absence of an analytical expression for $\Phi$ valid in all the considered conditions, we used a numerical cubic interpolation (with the Matlab – The MathWorks, Natick, MA – function interp2) passing through samples obtained from NEST simulation data. Depending on the regime of interest (noise- or drift-dominated), a proper point along this iso-frequency line was chosen.
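The contour-tracing step can be sketched as follows (our own illustration: the sigmoidal gain function below is a toy stand-in, not the VIF/LIF/EIF transfer function of the paper, and all values are arbitrary): for each noise level, a bisection in the mean input finds the point where the gain function attains the target rate.

```python
import numpy as np

# Illustrative sketch: trace the iso-frequency contour Phi(mu, sigma2) = nu_star
# in the (mu, sigma2) plane by bisection in mu for each sigma2.
# phi below is a toy sigmoid, NOT the paper's transfer function.
def phi(mu, sigma2):
    return 50.0 / (1.0 + np.exp(-(mu - 1.0) / np.sqrt(sigma2)))

def contour_mu(nu_star, sigma2, lo=-10.0, hi=10.0, iters=60):
    # bisection works because phi is monotonically increasing in mu
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid, sigma2) < nu_star:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu_star = 20.0
sigmas = np.linspace(0.1, 2.0, 8)
mus = np.array([contour_mu(nu_star, s) for s in sigmas])
```

Each pair (mus[i], sigmas[i]) lies on the iso-frequency line, among which a working point is then selected according to the desired (noise- or drift-dominated) regime.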

Once $\mu$ and $\sigma^2$ are determined, the recurrent synaptic efficacy $J$ can also be determined by imposing the value of $d\Phi/d\nu$ at the fixed point $\nu^*$, which is directly related to its degree of stability Mattia and Del Giudice (2002). Indeed, by taking into account Eq. (2), it is sufficient to solve the following second-order equation in $J$:

$$
\frac{d}{d\nu}\Phi(\mu,\sigma^2) = C\,J\left(\frac{\partial\Phi}{\partial\mu} + J\,\frac{\partial\Phi}{\partial\sigma^2}\right), \tag{17}
$$

where both $\partial\Phi/\partial\mu$ and $\partial\Phi/\partial\sigma^2$ can be suitably computed from $\Phi$. Of the possible solutions, we take the one corresponding to the stable fixed point with the highest firing rate, since it is related to the proper range of values.
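Since Eq. (17) is quadratic in the efficacy J, this step reduces to solving a second-order polynomial. The sketch below (ours; the gain function is a toy sigmoid and all values are arbitrary, not the paper's models) obtains the partial derivatives by central finite differences and solves for J.

```python
import numpy as np

# Illustrative sketch of solving Eq. (17) for J: given a target slope
# s = dPhi/dnu at the fixed point, C*J*(dPhi/dmu + J*dPhi/dsigma2) = s
# is a quadratic in J. phi is a toy gain function, NOT the paper's.
def phi(mu, sigma2):
    return 50.0 / (1.0 + np.exp(-(mu - 1.0) / np.sqrt(sigma2)))

mu0, s20, C, s_target = 1.2, 0.5, 100, 0.5
h = 1e-5
dphi_dmu = (phi(mu0 + h, s20) - phi(mu0 - h, s20)) / (2 * h)
dphi_ds2 = (phi(mu0, s20 + h) - phi(mu0, s20 - h)) / (2 * h)

# quadratic coefficients: (C*dphi_ds2) J^2 + (C*dphi_dmu) J - s_target = 0
roots = np.roots([C * dphi_ds2, C * dphi_dmu, -s_target])
J = roots[np.isreal(roots)].real          # the two candidate efficacies
```

Of the two roots, the procedure in the text then keeps the one associated with the stable high-rate fixed point.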

The final step is to determine the mean and variance of the external current $I_{\mathrm{ext}}$ to be added to the recurrent synaptic contribution in order to obtain the chosen $\mu$ and $\sigma^2$. $I_{\mathrm{ext}}$ is the sum of a constant current bias $I_{DC}$ and a Poissonian spike train from $C_{\mathrm{ext}}$ independent sources firing at rate $\nu_{\mathrm{ext}}$ through an instantaneous synaptic transmission with efficacy $J_{\mathrm{ext}}$. Under the diffusion approximation, $I_{\mathrm{ext}}$ is a memoryless Wiener process and, by imposing the spike rate $\nu_{\mathrm{ext}}$ from the external neurons, the two remaining parameters $J_{\mathrm{ext}}$ and $I_{DC}$ are uniquely determined, in the case of a distribution of delays, by solving the following system:

$$
\begin{cases}
\mu = J C \nu^* + J_{\mathrm{ext}} C_{\mathrm{ext}} \nu_{\mathrm{ext}} + I_{DC} \\
\sigma^2 = J^2 C \nu^* + J_{\mathrm{ext}}^2 C_{\mathrm{ext}} \nu_{\mathrm{ext}}.
\end{cases} \tag{18}
$$

When the non-instantaneous synaptic transmission is incorporated, the recurrent synaptic efficacy $J$ simply has to be rescaled by the synaptic time constant $\tau_s$. This is the approach used to design all the VIF, LIF and EIF neuron networks analyzed in Figs. 2-4.
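System (18) is triangular once the recurrent parameters are fixed: the variance equation yields the external efficacy, after which the mean equation yields the current bias. A minimal sketch (ours; all parameter values are arbitrary):

```python
import numpy as np

# Illustrative sketch of solving system (18): given target mean mu and
# variance sigma2 at the fixed point, the variance equation gives J_ext and
# the mean equation then gives the bias I_DC. Values are arbitrary.
J, C, nu_star = 0.001, 1000, 5.0
C_ext, nu_ext = 800, 3.0
mu, sigma2 = 0.5, 0.01

# sigma^2 = J^2*C*nu* + J_ext^2*C_ext*nu_ext  ->  solve for J_ext
J_ext = np.sqrt((sigma2 - J**2 * C * nu_star) / (C_ext * nu_ext))
# mu = J*C*nu* + J_ext*C_ext*nu_ext + I_DC    ->  solve for I_DC
I_DC = mu - J * C * nu_star - J_ext * C_ext * nu_ext
```

Note that a real solution requires the recurrent input variance J²Cν* not to exceed the target σ², which constrains the admissible recurrent parameters.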

For the results shown in Fig. 5, we used the following protocol. We started with the simulation of a LIF inhibitory neuron network in an asynchronous state, thus determining $\mu$, $\sigma^2$ and $J$ as stated above. Then, we increased the synaptic time constant $\tau_s$ and kept the (stable or unstable) fixed point unchanged by increasing $J$ accordingly.

### a.3 Spiking neuron network simulations

All the simulated IF neuron networks in the asynchronous state (Figs. 2-4) are modeled by considering a sparse recurrent connectivity with connection probability $C/N$, $C$ being the average number of synapses per neuron. Each neuron receives an external input from $C_{\mathrm{ext}}$ independent Poissonian spike trains with frequency $\nu_{\mathrm{ext}}$ through instantaneous synapses with efficacy $J_{\mathrm{ext}}$. Spikes are always delivered to post-synaptic neurons with a minimum delay. The values for both the synaptic time constant and the decay constant of the exponential delay distribution are  ms.
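The delay structure described above can be sketched as follows (our own illustration; parameter values are arbitrary, not those of the paper's tables): each connection receives a minimum delay plus an exponentially distributed part.

```python
import numpy as np

# Illustrative sketch: transmission delays drawn as a minimum delay d_min plus
# an exponentially distributed part with decay constant tau_d, as in the
# 'distributed inertia' networks. Parameter values are arbitrary.
rng = np.random.default_rng(3)
d_min, tau_d, n = 1.0, 4.0, 500_000       # milliseconds, arbitrary
delays = d_min + rng.exponential(tau_d, size=n)
```

The resulting delay density is rho(d) = exp(-(d - d_min)/tau_d)/tau_d for d >= d_min, with mean d_min + tau_d.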

The VIF neuron networks are composed of the ‘VLSI’ integrate-and-fire neurons introduced in Fusi and Mattia (1999). This model neuron is an extended version of the standard ‘perfect integrate-and-fire’ (PIF) neuron introduced in Gerstein and Mandelbrot (1964): in addition to a constant leakage current, which here we consider as a part of the current bias, a reflecting barrier is set at the lower bound of the membrane potential to avoid a divergent diffusion towards negative values. Contrary to the PIF neuron, this makes the VIF neuron capable of a non-zero (positive) mean firing rate also under subthreshold regimes, i.e., for negative drifts. For the VIF neuron, the input-output gain function $\Phi$ and the eigenfunctions and eigenvalues of the related FP operator have explicit analytical expressions Fusi and Mattia (1999); Mattia and Del Giudice (2002, 2004), which makes this model particularly suited for matching theory and simulations. The parameters for the VIF neuron networks used in this work are listed in Tab. 1. For this model, the natural unit of measure for the membrane voltage is the emission threshold.
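The role of the reflecting barrier can be illustrated with a minimal simulation (our own sketch, with arbitrary units and parameter values, not the paper's Tab. 1): a perfect integrator with a reflecting lower bound keeps firing even under negative drift, unlike the plain PIF.

```python
import numpy as np

# Illustrative sketch of a VIF-like neuron: a perfect integrator with
# threshold at v = 1 (spike, then reset to 0) and a reflecting barrier at
# v = 0, driven by Gaussian input with negative drift (subthreshold regime).
# Units and parameter values are arbitrary.
rng = np.random.default_rng(4)
dt, T, mu, sigma = 1e-4, 50.0, -0.5, 1.0   # negative drift
steps = int(T / dt)
noise = sigma * np.sqrt(dt) * rng.standard_normal(steps)
v, spikes = 0.0, 0
for k in range(steps):
    v += mu * dt + noise[k]
    if v < 0.0:
        v = 0.0        # reflecting barrier at the lower bound
    elif v >= 1.0:
        v = 0.0        # spike emission and reset
        spikes += 1
rate = spikes / T      # stays positive despite mu < 0
```

Without the reflecting barrier, trajectories with negative drift would diffuse towards arbitrarily negative potentials and the firing rate would vanish; the barrier is what keeps the rate positive in this regime.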

For the LIF Tuckwell (1988) and the EIF Fourcaud-Trocmé et al. (2003) neuron networks, the parameters at drift- (DD) and noise-dominated (ND) regimes are listed in Tabs. 2 and 3, respectively.

The LIF neuron networks for the Hopf bifurcation test (Fig. 5) are similar to the one described in Tab. 2, but with a doubled connection probability and an increased number of neurons, which are now inhibitory. The list of changed parameters is in Tab. 4. Only two values of the synaptic time constant were considered: ms (T4) and ms (T16). Table 5 lists the values of $\tau_s$ and the corresponding values of $J$ set to keep the fixed point constant, either stable or not.