Drifting states and synchronization induced chaos in autonomous networks of excitable neurons
Abstract
The study of balanced networks of excitatory and inhibitory neurons has led to several open questions. On the one hand, it is yet unclear whether the asynchronous state observed in the brain is autonomously generated, or whether it results from the interplay between external drive and internal dynamics. It is also not known which kinds of network variability lead to irregular spiking and which to synchronous firing states. Here we show how isolated networks of purely excitatory neurons generically show asynchronous firing whenever a minimal level of structural variability is present together with a refractory period.
Our autonomous networks are composed of excitable units, in the form of leaky integrators that spike only in response to driving currents, remaining otherwise quiet. For a nonuniform network, composed exclusively of excitatory neurons, we find a rich repertoire of self-induced dynamical states. We show in particular that asynchronous drifting states may be stabilized in purely excitatory networks whenever a refractory period is present. Other states found are either fully synchronized or mixed, containing both drifting and synchronized components. The individual neurons considered are excitable and hence do not possess intrinsic natural firing frequencies. An effective network-wide distribution of natural frequencies is however generated autonomously through self-consistent feedback loops. The asynchronous drifting state is, additionally, amenable to an analytic solution.
We find two types of asynchronous activity, with the individual neurons spiking regularly in the pure drifting state, albeit with a continuous distribution of firing frequencies. The activity of the drifting component, however, becomes irregular in the mixed state, due to the periodic driving of the synchronized component. We propose a new tool for the study of chaos in spiking neural networks, which consists of an analysis of the time series of pairs of consecutive interspike intervals. In this space, we show that a strange attractor with a fractal dimension of about 1.8 is formed in the mentioned mixed state.
1 Introduction
The study of collective synchronization has attracted the
attention of researchers across fields for over half
a century now [1, 2, 3, 4, 5]. Kuramoto's
exactly solvable mean-field model of coupled limit cycles
[4], formulated originally by
Winfree [2], has helped in this
context to establish the link between the distribution
of natural frequencies and the degree of synchronization
[6]. Moreover, the functional simplicity
of this model, and of its extensions, has made it possible to
study analytically the collective response of the system
to external perturbations in the form of phase resets
[7].
Networks of phase-coupled oscillators
may show, in addition, attracting states corresponding to
limit cycles, heteroclinic networks, and chaotic phases
[8, 9], with
full, partial, or clustered synchrony
[10], or asynchronous behaviour
[11].
Different degrees of collective synchronization may occur also in networks
of elements that do not emit signals continuously, as
limit-cycle oscillators do, but via short-lived pulses
[12, 13, 11].
Networks of pacemaker cells in the heart [3],
for instance, synchronize with high precision, acting together
as a robust macroscopic oscillator. Other well-known examples
are the simultaneous flashing of extended populations of
Southeast Asian fireflies [5, 14]
and the neuronal oscillations of cortical networks
[15]. The study
of synchronization in the brain is, in particular, relevant
for the understanding of epileptic states, or seizures
[16].
The individual elements are usually modeled in this context as integrate-and-fire units [17, 18], where the evolution (in between pulses, flashes, or spikes) of a continuous internal state variable $V$ is governed by an equation of the type:

$$\tau_v \dot{V} = f(V) + I \qquad (1)$$

Here $\tau_v$ is the characteristic relaxation timescale of
$V$, with $f(V)$ representing the intrinsic dynamics of the unit,
and $I$ the overall input (both from other units and from
external stimuli). Whenever $V$ reaches a threshold value
$V_\theta$, a pulse is emitted (the only information carried
to other units) and the internal variable is reset to
$V_r$.
These units are usually classified either as oscillators or
as excitable units, depending on their intrinsic dynamics $f(V)$.
The unit will fire periodically even in the absence of input
($I = 0$) when $f(V) > 0$ holds for all $V < V_\theta$. Units of this kind
are denoted pulse-coupled oscillators. The unit is,
on the other hand, an excitable unit if an additional
input is required to induce firing.
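The distinction can be made concrete with a minimal Euler integration of Eq. (1) for a leaky unit, $f(V) = -(V - V_r)$; all parameter values below are illustrative assumptions, not taken from the text:

```python
def spike_count(I, T=1.0, dt=1e-4, tau=0.02, V_r=0.0, V_theta=1.0):
    """Euler integration of tau * dV/dt = -(V - V_r) + I with
    threshold-and-reset; returns the number of emitted pulses in T seconds."""
    V, spikes = V_r, 0
    for _ in range(int(T / dt)):
        V += dt / tau * (-(V - V_r) + I)
        if V >= V_theta:      # threshold crossed: emit a pulse ...
            spikes += 1
            V = V_r           # ... and reset the internal variable
    return spikes

# excitable regime: the input is too weak for the unit to reach threshold
assert spike_count(I=0.5) == 0
# oscillator regime: the unit fires periodically for constant input
assert spike_count(I=2.0) > 0
```

For the leaky unit the fixed point $V_r + I$ lies below threshold for weak input, so the unit is excitable; sufficiently strong input turns it into a pulse-coupled oscillator.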
A natural frequency, given by the inverse integration time of the
autonomous dynamics, exists in the case of pulse-coupled oscillators.
There is hence a preexisting, albeit discontinuous, limit cycle,
which is then perturbed by external inputs. One can therefore use
phase-coupling methods to study networks of pulse-coupled oscillators
[13, 17, 18],
by establishing a map between the internal state variable
$V$ and a periodic phase given by the state of the
unit within its limit cycle. From this point of view,
systems of pulse-coupled units share many properties with
sets of coupled Kuramoto-like oscillators [4],
albeit with generally more complex coupling functions
[18]. For reviews and examples of
synchronization in populations of coupled oscillators see
[19, 9].
These assumptions break down for networks of coupled excitable
units like the ones described here.
In this case the units remain silent without inputs
from other elements of the system; there are no preexisting
limit cycles and consequently also no preexisting natural
frequencies (unlike rotators [20],
which are defined in terms of a periodic phase variable and
hence do possess a natural frequency). The firing rate hence depends exclusively on the
amount of input received. The overall system activity must
therefore either die out or be sustained collectively
in a self-organized fashion [6]. The
spiking frequencies generated for a given mean network activity
could be considered in this context
as self-generated natural frequencies.
The study of pulse coupled excitable units is of particular
relevance within the neurosciences, where neurons are
often modeled as spike emitting units that continuously integrate
the input they receive from other cells [21].
The proposal [22, 23], and later
the empirical observation that excitatory and inhibitory inputs
to cortical neurons are closely matched in time
[24, 25], has led
researchers to focus on dynamical states (asynchronous states
in particular) in networks characterized by a balance between
excitation and inhibition
[26, 27, 28, 29, 30, 11].
This balance (E/I balance) is generally presumed to be
an essential condition for the stability of states showing
irregular spiking, such as those arising in balanced
networks of integrate-and-fire neurons [31].
The connectivity usually employed in network studies is, however,
either global or local, consisting of repeated patterns
or of random connections drawn from identical distributions
[32, 33, 34, 8].
Our results show, by contrast, that only a minimal level of structural
variability is necessary for excitatory networks to display a wide
variety of dynamical states, including stable autonomous irregular
spiking. We believe that these studies are not only interesting in
their own right, because of the richness of dynamical states, but also
provide valuable insight into the role of inhibition.
Alternatively, one could have built networks of excitatory
neurons with high variability in the connection parameters,
reproducing realistic connectivity distributions, such as
those found in the brain.
The large number of parameters involved would, however, make
it difficult to fully characterize the system from
a dynamical-systems point of view, the approach taken here.
An exhaustive phase-space study would also become intractable.
We hence restricted ourselves in the present work to a
scenario of minimal variability, as given by a network of
globally coupled excitatory neurons, where the coupling
strength of each neuron to the mean field is nonuniform.
Our key result is that stable irregular
spiking states emerge even when only a minimal level of
variability is present at a network level.
Another point we would like to stress here is that
asynchronous firing states may be stabilized in the
absence of external inputs. In the case studied here there is an
additional difficulty, in the sense that the
pulse-coupled units considered are in excitable
states, remaining quiet without sufficient drive from the
other units in the network. The observed sustained
asynchronous activity is hence self-organized.
We characterize how the features of the network dynamics depend
on the coupling properties of the network and, in particular,
we explore the possibility of chaos for the excitable units
studied here when partial synchrony is present. This link has
already been established for coupled oscillators with a distribution
of natural frequencies [35], while other studies
have shown how stable chaos emerges in inhibitory networks with
homogeneous connection statistics [36].
2 The Model
In the current work we study the properties of the self-induced stationary dynamical states in autonomous networks of excitable integrate-and-fire neurons. The neurons considered are characterized by a continuous state variable $V_i$ (as in Eq. (1)), representing the membrane potential, and a discrete state variable $s_i$ that indicates whether the neuron fires a spike ($s_i = 1$) or not ($s_i = 0$) at a particular point in time. More precisely, we work here with a conductance-based (COBA) integrate-and-fire (IF) model, as employed in [28] (here, however, without inhibitory neurons), in which the evolution of each neuron in the system is described by:
$$\tau_v \dot{V}_i = (V_r - V_i) + g_i (E_e - V_i) \qquad (2)$$

where $E_e$ represents
the excitatory reversal potential and $\tau_v$
is the membrane time constant. Whenever the membrane potential
$V_i$ reaches the threshold $V_\theta$, the
discrete state of the neuron is set to $s_i = 1$ for the duration of
the spike. The voltage is reset, in
addition, to its resting value $V_r$,
where it remains fixed for a refractory period
$t_{ref}$. Eq. (2) is not computed
during the refractory period. Except for the times of spike
occurrences, the discrete state of the neuron remains $s_i = 0$
(no spike).
The conductance $g_i$ in (2) integrates the influence of the time series of presynaptic spikes, decaying otherwise in the absence of inputs:

$$\tau_g \dot{g}_i = -g_i \qquad (3)$$

where $\tau_g$ is the conductance time constant. Incoming spikes from the other neurons produce an increase of the conductance $g_i$, with:

$$g_i \to g_i + w_{ij} \quad \text{upon a spike of neuron } j. \qquad (4)$$

Here the synaptic weights $w_{ij}$ represent the intensity of the connection between the presynaptic neuron $j$ and the postsynaptic neuron $i$. We will generally employ normalized synaptic matrices with $\sum_j w_{ij} = \alpha_i$. In this way we can scale the overall strength of the incoming connections via the scaling constants $\alpha_i$, retaining at the same time the structure of the connectivity matrix.
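One Euler step of the dynamics sketched by Eqs. (2)-(4) may be written as follows; the dimensionless parameter values are illustrative assumptions, and refractoriness is omitted for brevity:

```python
import numpy as np

def network_step(V, g, s, w, dt=1e-4, tau_v=0.02, tau_g=0.02,
                 E_e=2.0, V_r=0.0, V_theta=1.0):
    """One Euler step: conductances decay (Eq. (3)) and jump by w_ij on
    presynaptic spikes (Eq. (4)); potentials follow the COBA form (Eq. (2))."""
    g = g + dt * (-g / tau_g) + w @ s                # decay plus spike input
    V = V + dt / tau_v * ((V_r - V) + g * (E_e - V))
    s = (V >= V_theta).astype(float)                 # detect threshold crossings
    V = np.where(s > 0, V_r, V)                      # reset spiking neurons
    return V, g, s
```

A full simulation would iterate this step, additionally holding spiking neurons at $V_r$ for the refractory period.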
2.1 Global couplings
Different connectivity structures are usually employed in the study of coupled oscillators, ranging from purely local rules to global couplings [32, 33, 34, 8]. We start here by employing a global coupling structure, where each neuron is coupled to the overall firing activity of the system:

$$w_{ij} = \frac{\alpha_i}{N-1}\,(1 - \delta_{ij}), \qquad (5)$$

which corresponds to a uniform connectivity matrix without self-coupling. All couplings are excitatory. The update rule (4) for the conductance upon presynaptic spiking then takes the form:

$$g_i \to g_i + \alpha_i A(t), \qquad A(t) = \frac{1}{N-1} \sum_{j \ne i} s_j(t), \qquad (6)$$

where $A(t)$ represents the time-dependent mean field of the network, viz the average over all firing activities. $A(t)$ is hence equivalent to the mean field present in the Kuramoto model [4], resulting in a global coupling function as an aggregation of local couplings. With our choice (5) for the coupling matrix, the individual excitable units may be viewed, whenever the mean field is strong enough, as oscillators emitting periodic spikes with an 'effective' natural frequency determined by the afferent coupling strength $\alpha_i$. The resulting neural activities determine in turn the mean field $A(t)$.
2.2 Coupling strength distribution
We are interested in studying networks with nonuniform $\alpha_i$. We mostly consider here the case of equidistant $\alpha_i$, defined by:

$$\alpha_i = \bar{\alpha} + \Delta\alpha \left( \frac{2(i-1)}{N-1} - 1 \right), \qquad i = 1, \dots, N, \qquad (7)$$

for the $N$ neurons, where $\bar{\alpha}$ represents the mean scaling parameter and $\Delta\alpha$ the maximal distance to the mean. It is possible, alternatively, to use a flat distribution, with the $\alpha_i$ drawn from an interval of width $2\Delta\alpha$ around the mean $\bar{\alpha}$. For large systems there is, as we have tested, no discernible difference between using equidistant afferent coupling strengths and drawing them randomly from a flat distribution.
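The equidistant choice of Eq. (7) amounts to a uniform grid of coupling strengths; a short sketch:

```python
import numpy as np

def equidistant_alphas(N, alpha_mean, delta_alpha):
    """N coupling strengths spread uniformly over
    [alpha_mean - delta_alpha, alpha_mean + delta_alpha], as in Eq. (7)."""
    return alpha_mean + delta_alpha * (2.0 * np.arange(N) / (N - 1) - 1.0)

alphas = equidistant_alphas(5, 1.0, 0.2)
assert np.isclose(alphas.mean(), 1.0)                # mean scaling parameter
assert np.isclose(alphas.max() - alphas.min(), 0.4)  # full spread 2 * delta
```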
3 Results
Several aspects of our model, in particular the asynchronous
drifting state, can be investigated analytically as a
consequence of the global coupling structure (5),
as shown in Sect. 3.1. All further
results are obtained from numerical simulations, for which,
if not otherwise stated, a fixed integration timestep has
been used. We have also set the spike duration to one timestep,
although these two parameters can be modified separately if
desired. Our results do not depend on the choice
of the timestep, while the spike width does introduce minor
quantitative changes, as discussed later.
3.1 Stationary meanfield solution for the drifting state
As a first approach we compute the response of a neuron with coupling constant $\alpha$ to a stationary mean field $A$, as defined by Eq. (6), representing the average firing rate (in spikes per second) of the network. This is precisely the situation in the asynchronous drifting state, for which the firing rates of the individual units are incommensurate. With $A$ constant we can combine the update rules (3) and (4) for the conductances to

$$\bar{g} = \alpha \tau_g A, \qquad (8)$$

where we have denoted with $\bar{g}$ the steady-state conductance. With the individual conductance becoming a constant we may also integrate the evolution equation (2) for the membrane potential,

$$V(t) = V_\infty + (V_r - V_\infty)\, e^{-(1+\bar{g})\, t / \tau_v}, \qquad (9)$$

obtaining the time $T$ it takes for the membrane potential to reach the threshold $V_\theta$, when starting from the resting potential $V_r$:

$$T = \frac{\tau_v}{1+\bar{g}} \ln \frac{V_\infty - V_r}{V_\infty - V_\theta}, \qquad (10)$$

with:

$$V_\infty = \frac{V_r + \bar{g} E_e}{1 + \bar{g}}. \qquad (11)$$

We note that the threshold potential is only reached if $\dot{V} > 0$ for all $V \le V_\theta$. For $T$ to be finite we hence have (from Eq. (9))

$$V_\infty > V_\theta, \qquad \text{viz} \qquad \bar{g} > \frac{V_\theta - V_r}{E_e - V_\theta}. \qquad (12)$$

The spiking frequency is $f = 1/(T + t_{ref})$, with the intervals between consecutive spikes given by $T + t_{ref}$, when (12) is satisfied. Otherwise the neuron does not fire. The mean field is defined as the average firing frequency

$$A = \frac{1}{N} \sum_{i=1}^{N} f_i \qquad (13)$$

of the $N$ neurons. Eqs. (13) and (10) describe the asynchronous drifting state in the thermodynamic limit $N \to \infty$. We denote this the self-consistency condition for the stationary mean-field (SMF) solution.
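The SMF self-consistency can be iterated numerically as a fixed-point map $A \mapsto \bar{f}(A)$. The sketch below assumes the standard conductance-based leaky integrate-and-fire form discussed above; all parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def smf_rate(A, alphas, tau_g=0.02, tau_v=0.02, t_ref=0.002,
             E_e=2.0, V_r=0.0, V_theta=1.0):
    """Average firing rate generated by a constant mean field A."""
    g = alphas * tau_g * A                        # steady-state conductances
    V_inf = (V_r + g * E_e) / (1.0 + g)           # asymptotic potential
    active = V_inf > V_theta                      # firing condition
    T = np.full_like(g, np.inf)                   # time to threshold
    T[active] = tau_v / (1.0 + g[active]) * np.log(
        (V_inf[active] - V_r) / (V_inf[active] - V_theta))
    f = np.where(active, 1.0 / (T + t_ref), 0.0)  # rate incl. refractory period
    return f.mean()                               # the new mean field

# fixed-point iteration towards the self-consistent SMF solution
alphas = np.linspace(40.0, 60.0, 1000)
A = 100.0
for _ in range(200):
    A = smf_rate(A, alphas)
assert 0.0 < A < 1.0 / 0.002          # rate bounded by the refractory period
assert abs(A - smf_rate(A, alphas)) < 1e-6
```

The refractory period bounds the attainable rates, which is why the iteration converges instead of diverging.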
3.2 Numerical simulations
We studied our model, as defined by Eqs. (2)
and (3), numerically for networks of $N$ neurons,
with a uniform coupling matrix
(see Eq. (5)) and coupling parameters
$\bar{\alpha}$ and $\Delta\alpha$ given by Eq. (7). We did not
find qualitative changes when scaling the size of the network
up or down for testing purposes, see Fig. 7.
Random initial conditions were used.
The network-wide distribution of firing rates is
computed after the system settles into a
dynamical equilibrium.
Three examples of firing-rate distributions, for fixed
$\Delta\alpha$ and three values of $\bar{\alpha}$,
are presented in
Fig. 1 in comparison with the analytic
results obtained from the stationary mean-field
approach (SMF), as given by Eq. (13).
The presence or absence of synchrony is directly visible.
In all three parameter settings presented
in Fig. 1 there is a drifting component,
characterized by a set of neurons with a continuum of
frequencies. These neurons fire asynchronously,
generating a constant contribution to the collective
mean field.
The plateau present in one of the cases shown in Fig. 1
corresponds, on the other hand, to a set of neurons
firing with identical frequencies and hence synchronously.
Neurons firing synchronously will, however, do so with
finite pairwise phase lags, the reason being
the modulation of the common mean field
through the distinct afferent coupling strengths $\alpha_i$.
We note that the stationary mean-field theory
(13) holds, as expected, for
drifting states, but not when synchronized clusters
of neurons are present.
In Fig. 2 we systematically explore the phase space as a function of and . For labeling the distinct phases we use the notation
I  :  inactive, 
I+D  :  partially inactive and drifting, 
D  :  fully drifting (asynchronous), 
D+S  :  mixed, containing both drifting and synchronized components, and 
S  :  fully synchronized. 
Examples of the rate distributions present in the
individual phases are presented in
Fig. 2 c).
The phase diagram is presented in
Fig. 2 a).
The activity dies out for a low mean
coupling strength $\bar{\alpha}$, but not for
larger $\bar{\alpha}$. Partial synchronization is present
when both $\bar{\alpha}$ and the spread $\Delta\alpha$ are small,
taking over completely for larger values of $\bar{\alpha}$
and small $\Delta\alpha$. The phase space is otherwise
dominated by a fully drifting state. The network
average of the neural firing rates, given in
Fig. 2 b), drops only
close to the transition to the inactive state I,
showing otherwise no discernible features at the
phase boundaries.
The dashed lines in Fig. 2 a)
and b) represent the transitions between the
inactive state I and the active state I+D, and between the states I+D
and D, as predicted by the stationary mean-field
approximation (13), which becomes
exact in the thermodynamic limit. The shaded region
in these plots indicates the coexistence of the
attracting states S and S+D. As a note, we found
that the location of this shaded region depends on
the spike width, shifting for narrower spikes.
While real spikes in neurons
have a finite width, we note, from a dynamical-systems
point of view, that this region would most likely
vanish in the limit of delta spikes.
For a stable (nontrivial) attractor to arise in a
network composed only of excitatory neurons,
some limiting mechanism needs to be at play. Otherwise
one observes a bifurcation phenomenon, similar to that
of branching problems, in which only a critical network
could be stable in the thermodynamic limit [6].
In our case,
the limiting factor is the refractory period.
Refractoriness prevents neurons from firing continuously,
and keeps the system activity from exploding. Interestingly,
this does not mean that the neurons will fire at the
maximal rate allowed by the refractory period. The existence
of the refractory period allows for self-organized states
with frequencies even well below this limit, as seen
in Fig. 2 b). We have tested
these claims numerically by setting $t_{ref} = 0$, observing that
the neural activity then either dies out or the neurons fire
continuously.
In order to study the phase transitions
between states D and D+S and between D+S and S, we will resort
in the following section to adiabatic paths in phase space
crossing these lines.
3.2.1 Adiabatic parameter evolution
Here we study the nature of the phase transitions between
different dynamical states in Fig. 2. To do so,
we resort to adiabatic trajectories in phase space, crossing these
lines. Beginning in a given phase, we modify the
coupling parameters $\bar{\alpha}$ and $\Delta\alpha$ (Eq. (7))
on a timescale much slower than that of the network dynamics.
Along these trajectories, we then freeze the system in a
number of windows in which we compute the rate distribution
as a function of the (see Fig. 1).
During these observation windows the parameters do not change.
In this way, we can follow how the rate distribution varies
across the observed phase transitions. The results are presented
in Fig. 3.
We observe that the emergence of synchronized clusters,
the transition D $\leftrightarrow$ (D+S), is completely reversible. We
believe this transition to be of second order and that
the small discontinuity in the respective firing-rate
distributions observed in Fig. 3a)
is due to finite-size effects. The time to reach the
stationary state diverges, additionally, close to the
transition, making it difficult to resolve its locus
with high accuracy.
The disappearance of the subset of drifting neurons, the
transition S $\leftrightarrow$ (D+S), is, on the other hand, not
reversible. In this case, when $\bar{\alpha}$ is reduced,
the system tends to get stuck in metastable attractors
in the S phase, producing irregular jumps in the rate
distributions. Furthermore, when we increase $\bar{\alpha}$,
we observe that the system jumps back and forth between
the S and D+S states in the vicinity of the phase transition,
indicating that both states may coexist as metastable
attractors close to the transition. We note that a
similar metastability has been observed in the partially
synchronized phase of the Kuramoto model
[35].
3.2.2 Time structure
In networks of spiking neurons it is essential to characterize not only the rate distribution of the system, but also the neurons' interspike-time statistics [37, 38, 39, 40, 41]. We have hence computed the distribution of the interspike intervals (ISI) of the individual neurons, respectively for fully and partially drifting and synchronized states. The ISI distribution shown in Fig. 4 is the network average of the individual distributions, each normalized with respect to the neuron's average spiking interval.

D+S : The input received by a drifting neuron, in a state where other neurons form a synchronized subcluster, is intrinsically periodic in time, and the resulting ISI distribution is nontrivial, as evident in Fig. 4.

S : The ISI distribution is a delta function for all neurons in the fully synchronized state, which fire with identical interspike intervals.
As a frequently used measure of the regularity of a time distribution, we have included in Fig. 4 the coefficient of variation (CV),

$$\mathrm{CV} = \frac{\sqrt{\langle T^2 \rangle - \langle T \rangle^2}}{\langle T \rangle}, \qquad (14)$$

the standard deviation of the interspike intervals $T$ in units of their mean. Of interest here are the finite CVs of the drifting units in the D+S state, which are considerably larger than the CVs of the drifting neurons when no synchronized component is present. This phenomenon is a consequence of the periodic driving of the drifting neurons by the synchronized subcluster in the D+S state, where the driving frequency will in general be mismatched with the effective, self-organized natural frequency of the drifting neurons. The firing of a drifting neuron is hence irregular in the mixed D+S state, becoming however regular in the absence of synchronized drivings.
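The coefficient of variation of Eq. (14) is straightforward to compute from a spike train; a minimal sketch:

```python
import numpy as np

def cv(isi):
    """Coefficient of variation: ISI standard deviation over ISI mean."""
    isi = np.asarray(isi, dtype=float)
    return isi.std() / isi.mean()

# regular spiking (states D and S): identical intervals, CV = 0
assert cv([0.01, 0.01, 0.01]) == 0.0
# Poisson-like spiking for comparison: exponential intervals, CV close to 1
isi = np.random.default_rng(0).exponential(0.01, size=10_000)
assert 0.9 < cv(isi) < 1.1
```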
3.2.3 Self induced chaos
The high variability of the spiking intervals observed
in the mixed state, as presented in Fig. 4,
indicates that the firing in the mixed state may have a
chaotic component, and hence positive Lyapunov exponents
[6].
Alternatively to a numerical evaluation of the Lyapunov exponents (a demanding task for large networks of spiking neurons), a somewhat more direct understanding of the chaotic state can be obtained by studying the relation between consecutive spike intervals. In Fig. 5 we plot for this purpose a time series of 2000 consecutive interspike intervals, for one of the drifting neurons in the D+S state (with the parameters of the third example of Fig. 1). We note that the spiking would be regular, viz the intervals constant, for all neurons in either the fully drifting state (D) or the fully synchronized state (S). The plot of consecutive spike intervals presented in Fig. 5 can be viewed as a poor man's approximation to Takens' embedding theorem [43], which states that a chaotic attractor arising in a $d$-dimensional phase space can be reconstructed from the series of delay-embedded tuples of the time events of a single variable.
With a blue line we follow in Fig. 5 a
representative segment of the trajectory, which jumps
irregularly. A first indication that the attractor in
Fig. 5 may be chaotic comes from the
observation that the trajectory does not seem to
settle (within the observation window) within a limit
cycle. The time series of consecutive spike-interval
pairs will nevertheless come arbitrarily close to
any given previous pair,
a consequence of the generic ergodicity of attracting
sets [6]. One of these close
reencounters occurs in Fig. 5
near the center of the dashed circle, with the
trajectory diverging again after the close
reencounter. This divergence indicates the presence
of positive Lyapunov exponents.
We have determined the fractal dimension of
the attracting set of pairs of spike intervals
in the mixed phase by overlaying the attractor
with grids of squares of decreasing size. For this
calculation we employed a longer simulation. The resulting
box count, presented in Fig. 6,
yields a Minkowski or box-counting dimension
of about $D \approx 1.8$, embedded in the 2D space of the plot,
thus confirming that the drifting neurons in
the D+S phase indeed spike chaotically. For comparison,
a limit cycle in this space has a dimension of one.
While we present here the result for one particular neuron,
the same holds for every drifting neuron in this state,
albeit with slightly different values of the fractal dimension.
We note that the fractal dimension determined in this way
is not that of the full attractor in phase space,
for which higher-order tuples of consecutive interspike intervals
would need to be considered [43, 44].
Our point here is that a noninteger result for a single
neuron strongly indicates that the full attractor,
in the full phase space, is chaotic.
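The box-counting estimate used above can be sketched as follows; the point set here is a filled square of known dimension two, serving only as a sanity check:

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Count occupied boxes N(eps) for a set of 2D points on grids of
    decreasing box size eps, and fit the slope of log N against log(1/eps)."""
    counts = []
    for eps in epsilons:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)),
                          np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
square = rng.random((20_000, 2))          # uniformly filled unit square
d = box_counting_dimension(square, epsilons=[0.2, 0.1, 0.05, 0.025])
assert 1.8 < d < 2.2                      # recovers dimension two
```

Applied instead to the set of consecutive interspike-interval pairs of a drifting neuron, the same procedure yields the noninteger dimension reported above.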
We believe that the chaotic state arising in the mixed
D+S state may be understood in analogy to
the occurrence of chaos in the periodically driven
pendulum [42]. A drifting neuron
with a coupling constant $\alpha_i$ in the D+S state does indeed
receive two types of inputs to its conductance
(compare Eq. (4)): a constant input, resulting from the
firing of the other drifting neurons, and a periodic one,
resulting from the synchronized subcluster. The frequency
of the periodic driving will then be strictly smaller than the
natural frequency of the drifting neuron
resulting from the constant input (compare
Fig. 1). It is known from the
theory of driven oscillators [42]
that an oscillator may not be able to synchronize
with the external driving frequency
when the ratio of the two frequencies is
small enough and the relative strength of
the driving not too strong.
3.2.4 Robustness
In order to evaluate the robustness and the
generality of the results presented here, we have
examined the effects of changing
the size of the network and of allowing for variability in the
connectivity matrix $w_{ij}$. We have also considered an adiabatically
increasing external input, as well as Gaussian noise.
In Fig. 7 (top half), the effect of the network size on the rate
distribution is evaluated, employing several different
system sizes.
We observe that the corresponding plots overlap
within the precision of the simulations.
This result is, on the one hand, a consequence
of the $1/(N-1)$ scaling of the overall strength of
the afferent links and, on the other hand, of the regularity
in firing discussed in Sect. 3.2.2.
The neural activity is driven by the mean field,
which is, in the thermodynamic limit $N \to \infty$, constant in
the fully drifting state and nonconstant but smooth (apart
from an initial jump) in the synchronized states. Fluctuations
due to finite network sizes are already small for the sizes
employed in our simulations, justifying
this choice.
In the previous sections we considered the uniform connectivity matrix described by (5). This allowed us to formulate the problem in terms of a mean-field coupling. We now analyze the robustness of the states found when a certain degree of variability is present in the weight matrix, viz when an extra variability term $\epsilon_{ij}$ is present:

$$w_{ij} = \frac{\alpha_i}{N-1}\,(1 + \epsilon_{ij}), \qquad i \ne j. \qquad (15)$$

Here we consider the $\epsilon_{ij}$ to be drawn from a flat distribution
with zero mean and a finite width. Tests with
several widths were performed.
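One possible reading of the variability term of Eq. (15), with a multiplicative flat-distributed perturbation (an assumption on our part), can be sketched as:

```python
import numpy as np

def perturbed_weights(alphas, width, rng):
    """Mean-field weight matrix with flat-distributed variability:
    w_ij = alpha_i / (N - 1) * (1 + eps_ij), eps_ij uniform and zero-mean."""
    N = len(alphas)
    eps = rng.uniform(-width, width, size=(N, N))
    w = alphas[:, None] / (N - 1) * (1.0 + eps)
    np.fill_diagonal(w, 0.0)              # no self-coupling
    return w

rng = np.random.default_rng(0)
alphas = np.linspace(0.8, 1.2, 200)
w = perturbed_weights(alphas, width=0.1, rng=rng)
assert np.all(np.diag(w) == 0.0)
# incoming weights still sum to roughly alpha_i for each neuron
assert np.allclose(w.sum(axis=1), alphas, atol=0.05)
```

Because the perturbation has zero mean, the row sums, and hence the effective couplings to the mean field, are only weakly affected.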
In Fig. 7 (lower half), the results for one
representative width are presented. We observe that the
fully drifting state is the one least affected by the
variability in the weight matrix. The influence of
variable weight matrices becomes, on the other hand,
more evident when the state is partially
or fully synchronized, with the separation between
the locked and the drifting neurons becoming less
pronounced in the case of partial synchronization
(lower panel of Fig. 7b)).
The larger standard deviation evident for larger values
of $\bar{\alpha}$ in the lower
panel of Fig. 7c) indicates the
presence of drifting states in some of the ensemble
realizations of weight matrices.
Finally, we test the robustness of the drifting state
when perturbed by an external stimulus. To determine
the stability of the state, we adiabatically increase
an external input $I_{ext}$ and compute the firing rate
as a function of the rank of the neuron for several values
of $I_{ext}$. We perform two excursions, one for positive
and one for negative values of $I_{ext}$. These plots are presented
in Fig. 8. We observe that the firing
rates evolve in a continuous fashion, indicating that the drifting
state is indeed stable. While positive inputs push the
system towards saturation, negative inputs reduce the average
rate. We find, as is to be expected, that a large enough
negative input silences the system. As a final test
(not shown here), we have perturbed the system with
uncorrelated Gaussian noise, observing that
the attractors found are all robust with respect to
this type of noise as well.
4 Discussion
In the present work we have studied a network of excitable
units, consisting exclusively of excitatory neurons. In the absence of
external stimuli, the network is only able to remain active
through its own activity, in a self-organized fashion. Below
a certain average coupling strength of the network the
activity dies out, whereas, if the average coupling is
strong enough, the excitable units will collectively behave
as pulsecoupled oscillators.
We have shown how the variability of coupling
strengths determines the synchronization characteristics
of the network, ranging from fully asynchronous, to fully
synchronous activity. Interestingly, this variability,
together with the neurons’ refractoriness, is enough to
keep the neural activity from exploding.
While we initially assumed a purely mean-field coupling
(by setting all synaptic weights $w_{ij}$ according to Eq. (5)),
regulating only the intensity with which a neuron integrates
the mean field through the scaling constant
$\alpha_i$, we later showed that the states found here also
survive when the $w_{ij}$ are allowed to individually vary
up or down by a finite amount. We have also shown how
the variability in coupling strength makes the asynchronous,
or drifting, state extremely robust with respect to strong
homogeneous external inputs.
Finally, we have studied the time structure of spikes in
the different dynamical states observed. It is in the time
domain that we find the main difference with natural neural
networks. Spiking in real neurons is usually irregular,
and it is often modeled as Poissonian, whereas in our
network we found a very high degree of regularity, even in the
asynchronous state. Only in the partially synchronous state
did we find a higher degree of variability, as a result of
chaotic behavior. We have determined the fractal dimension
of the respective strange attractor in the space of
pairs of consecutive interspike intervals, finding
fractional values of roughly 1.8 for the different
neurons in this state.
While it has often been stated that inhibition is a
necessary condition for bounded and uncorrelated activity,
we have shown here that uncorrelated aperiodic (and even chaotic)
activity can be obtained in a network with excitatory-only
connections, in a stable fashion and without external input.
We are of course aware that the firing rates obtained
in our simulations are high compared to in vivo activity
levels and that the degree of variability in the time domain of
spikes is far from Poissonian. We have however incorporated in
this work only variability of the interneural connectivity,
keeping other neural properties (such as the neural intrinsic
parameters) constant and homogeneous. In this sense, it would be
interesting to study in future work how intrinsic and synaptic
plasticity [45] modify these statistics,
incorporating plasticity in terms of interspike times
[46, 47] and in terms
of neural rates [48, 49, 50]. Here, instead of trying to
reproduce the detailed connection statistics of the brain,
which would in any case never be realistic without inhibitory
neurons, we have shown how a minimal-variability model, in
terms of nonuniform link matrices, is able to give rise
to asynchronous spiking states, even without inhibition.
Our results indicate therefore that further studies are
needed for an improved understanding of which features of
the asynchronous spiking state depend essentially on
inhibition, and which do not.
We have shown here that autonomous activity (sustained even in the absence of external inputs) may arise in networks of coupled excitable units, i.e. for units which are not intrinsically oscillating. We have also proposed a new tool for studying the appearance of chaos in spiking neural networks, which applies a box-counting method to consecutive pairs of interspike intervals from a single unit. This tool is readily applicable both to experimental data and to the results of theoretical simulations in general.
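The proposed tool can be sketched as follows: map the spike train to points (T_n, T_{n+1}) of consecutive interspike intervals and estimate the box-counting dimension of the resulting point cloud. The implementation below is a minimal illustration under these assumptions; the point cloud used in the self-test is a synthetic line segment (dimension one), not the paper's mixed-state data.

```python
# Sketch: box-counting estimate of the fractal dimension of the
# attractor formed by consecutive inter-spike-interval (ISI) pairs.
# In the paper the points (T_n, T_{n+1}) come from a neuron in the
# mixed state; here a synthetic point cloud serves as a stand-in.
import numpy as np

def isi_pairs(spike_times):
    """Map a spike train to points (T_n, T_{n+1}) of consecutive ISIs."""
    isi = np.diff(np.asarray(spike_times, dtype=float))
    return np.column_stack((isi[:-1], isi[1:]))

def box_counting_dimension(points, epsilons):
    """Slope of log N(eps) versus log(1/eps), where N(eps) is the
    number of occupied boxes of side eps covering the point cloud."""
    pts = points - points.min(axis=0)  # shift into the positive quadrant
    counts = []
    for eps in epsilons:
        # each point is assigned to a box by its integer grid index
        boxes = np.unique(np.floor(pts / eps).astype(int), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)),
                          np.log(counts), 1)
    return slope

if __name__ == "__main__":
    # Sanity check: points on a line segment should give dimension ~1.
    t = np.random.default_rng(0).uniform(0.0, 1.0, 20000)
    line = np.column_stack((t, 0.5 * t))
    eps = np.logspace(-3, -1, 10)
    print(box_counting_dimension(line, eps))  # close to 1
```

Applied to the ISI pairs of a neuron in the mixed state, the same fit yields the fractional dimension reported above; the choice of box sizes must of course be restricted to the scaling region where the log-log plot is linear.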
5 Acknowledgments
The support of the German Science Foundation (DFG) and the German Academic Exchange Service (DAAD) is acknowledged.
References
 [1] Arkady Pikovsky and Michael Rosenblum. Dynamics of globally coupled oscillators: Progress and perspectives. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(9):097616, 2015.
 [2] Arthur T Winfree. Biological rhythms and the behavior of populations of coupled oscillators. Journal of theoretical biology, 16(1):15–42, 1967.
 [3] Charles S Peskin. Mathematical aspects of heart physiology. Courant Institute of Mathematical Sciences, New York University, 1975.
 [4] Yoshiki Kuramoto. Self-entrainment of a population of coupled nonlinear oscillators. In International symposium on mathematical problems in theoretical physics, pages 420–422. Springer, 1975.
 [5] John Buck. Synchronous rhythmic flashing of fireflies. II. Quarterly review of biology, pages 265–289, 1988.
 [6] C. Gros. Complex and adaptive dynamical systems: A primer. Springer Verlag, 2010.
 [7] Zoran Levnajić and Arkady Pikovsky. Phase resetting of collective rhythm in ensembles of oscillators. Physical Review E, 82(5):056202, 2010.
 [8] Peter Ashwin, Gábor Orosz, John Wordsworth, and Stuart Townley. Dynamics on networks of cluster states for globally coupled phase oscillators. SIAM Journal on Applied Dynamical Systems, 6(4):728–758, 2007.
 [9] Florian Dörfler and Francesco Bullo. Synchronization in complex networks of phase oscillators: A survey. Automatica, 50(6):1539–1564, 2014.
 [10] D Golomb, David Hansel, B Shraiman, and Haim Sompolinsky. Clustering in globally coupled phase oscillators. Physical Review A, 45(6):3516, 1992.
 [11] LF Abbott and Carl van Vreeswijk. Asynchronous states in networks of pulse-coupled oscillators. Physical Review E, 48(2):1483, 1993.
 [12] Steven H Strogatz, Ian Stewart, et al. Coupled oscillators and biological synchronization. Scientific American, 269(6):102–109, 1993.
 [13] Renato E Mirollo and Steven H Strogatz. Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics, 50(6):1645–1662, 1990.
 [14] Frank E Hanson. Comparative studies of firefly pacemakers. In Federation proceedings, volume 37, pages 2158–2164, 1978.
 [15] György Buzsáki and Andreas Draguhn. Neuronal oscillations in cortical networks. Science, 304(5679):1926–1929, 2004.
 [16] JL Perez Velazquez, RF Galan, L Garcia Dominguez, Y Leshchenko, S Lo, J Belkas, and R Guevara Erra. Phase response curves in the characterization of epileptiform activity. Physical Review E, 76(6):061912, 2007.
 [17] Yoshiki Kuramoto. Collective synchronization of pulse-coupled oscillators and excitable units. Physica D: Nonlinear Phenomena, 50(1):15–30, 1991.
 [18] Eugene M Izhikevich. Weakly pulse-coupled oscillators, FM interactions, synchronization, and oscillatory associative memory. Neural Networks, IEEE Transactions on, 10(3):508–526, 1999.
 [19] Steven H Strogatz. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D: Nonlinear Phenomena, 143(1):1–20, 2000.
 [20] Bernard Sonnenschein, Thomas K DM Peron, Francisco A Rodrigues, Jürgen Kurths, and Lutz Schimansky-Geier. Cooperative behavior between oscillatory and excitable units: the peculiar role of positive coupling-frequency correlations. The European Physical Journal B, 87(8):1–11, 2014.
 [21] Anthony N Burkitt. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological cybernetics, 95(1):1–19, 2006.
 [22] Michael N Shadlen and William T Newsome. Noise, neural codes and cortical organization. Current opinion in neurobiology, 4(4):569–579, 1994.
 [23] Daniel J Amit and Nicolas Brunel. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral cortex, 7(3):237–252, 1997.
 [24] Maria V Sanchez-Vives and David A McCormick. Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nature neuroscience, 3(10):1027–1034, 2000.
 [25] Bilal Haider, Alvaro Duque, Andrea R Hasenstaub, and David A McCormick. Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. The Journal of neuroscience, 26(17):4535–4545, 2006.
 [26] Carl van Vreeswijk and Haim Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724–1726, 1996.
 [27] Arvind Kumar, Sven Schrader, Ad Aertsen, and Stefan Rotter. The highconductance state of cortical networks. Neural computation, 20(1):1–43, 2008.
 [28] Tim P Vogels and Larry F Abbott. Signal propagation and logic gating in networks of integrate-and-fire neurons. The Journal of neuroscience, 25(46):10786–10795, 2005.
 [29] Roxana A Stefanescu, Viktor K Jirsa, et al. A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLoS Comput. Biol, 4(11):e1000219, 2008.
 [30] D Hansel and G Mato. Existence and stability of persistent states in large neuronal networks. Physical Review Letters, 86(18):4175, 2001.
 [31] Nicolas Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of computational neuroscience, 8(3):183–208, 2000.
 [32] Yoshiki Kuramoto and Dorjsuren Battogtokh. Coexistence of coherence and incoherence in nonlocally coupled phase oscillators. arXiv preprint cond-mat/0210694, 2002.
 [33] Leandro M Alonso and Gabriel B Mindlin. Average dynamics of a driven set of globally coupled excitable units. Chaos: An Interdisciplinary Journal of Nonlinear Science, 21(2):023102, 2011.
 [34] Daniel M Abrams and Steven H Strogatz. Chimera states for coupled oscillators. Physical review letters, 93(17):174102, 2004.
 [35] Giovanna Miritello, Alessandro Pluchino, and Andrea Rapisarda. Central limit behavior in the Kuramoto model at the “edge of chaos”. Physica A: Statistical Mechanics and its Applications, 388(23):4818–4826, 2009.
 [36] David Angulo-Garcia and Alessandro Torcini. Stable chaos in fluctuation driven neural circuits. Chaos, Solitons & Fractals, 69:233–245, 2014.
 [37] Benjamin Lindner. Interspike interval statistics of neurons driven by colored noise. Physical Review E, 69(2):022901, 2004.
 [38] Donald H Perkel, George L Gerstein, and George P Moore. Neuronal spike trains and stochastic point processes: I. the single spike train. Biophysical journal, 7(4):391, 1967.
 [39] Donald H Perkel, George L Gerstein, and George P Moore. Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophysical journal, 7(4):419, 1967.
 [40] Farzad Farkhooi, Martin F Strube-Bloss, and Martin P Nawrot. Serial correlation in neural spike trains: Experimental evidence, stochastic modeling, and single neuron variability. Physical Review E, 79(2):021905, 2009.
 [41] Maurice J Chacron, Benjamin Lindner, and André Longtin. Noise shaping by interval correlations increases information transfer. Physical review letters, 92(8):080601, 2004.
 [42] D d’Humieres, MR Beasley, BA Huberman, and A Libchaber. Chaotic states and routes to chaos in the forced pendulum. Physical Review A, 26(6):3483, 1982.
 [43] Floris Takens. Detecting strange attractors in turbulence. In Dynamical systems and turbulence, Warwick 1980, pages 366–381. Springer, 1981.
 [44] Mingzhou Ding, Celso Grebogi, Edward Ott, Tim Sauer, and James A Yorke. Estimating correlation dimension from a chaotic time series: when does plateau onset occur? Physica D: Nonlinear Phenomena, 69(3):404–424, 1993.
 [45] Jochen Triesch. Synergies between intrinsic and synaptic plasticity mechanisms. Neural Computation, 19(4):885–909, 2007.
 [46] Claudia Clopath, Lars Büsing, Eleni Vasilaki, and Wulfram Gerstner. Connectivity reflects coding: a model of voltagebased stdp with homeostasis. Nature neuroscience, 13(3):344–352, 2010.
 [47] Rodrigo Echeveste and Claudius Gros. Two-trace model for spike-timing-dependent synaptic plasticity. Neural computation, 2015.
 [48] Aapo Hyvärinen and Erkki Oja. Independent component analysis by general nonlinear Hebbian-like learning rules. Signal Processing, 64(3):301–313, 1998.
 [49] Elie L Bienenstock, Leon N Cooper, and Paul W Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 2(1):32–48, 1982.
 [50] Rodrigo Echeveste and Claudius Gros. An objective function for self-limiting neural plasticity rules. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2015 proceedings), Bruges (Belgium), 22-24 April 2015. i6doc.com publ., 2015.