Can dynamical synapses produce true self-organized criticality?
Abstract
Neuronal networks can present activity described by power-law distributed avalanches, presumed to be a signature of a critical state. Here we study a random-neighbor network of excitable cellular automata coupled by dynamical synapses. The model exhibits behavior very similar to that of conservative self-organized criticality (SOC) models, even with dissipative bulk dynamics. This occurs because, in the stationary regime, the model is conservative on average and, in the thermodynamic limit, the probability distribution for the global branching ratio converges to a delta function centered at its critical value. So, this nonconservative model pertains to the same universality class as conservative SOC models, and contrasts with other dynamical-synapses models, which present only self-organized quasi-criticality (SOqC). Analytical results show very good agreement with simulations of the model and enable us to study the emergence of SOC as a function of the parametric derivatives of the stationary branching ratio.
1 Introduction
In the past few decades there has been increasing exploration of the heuristic idea that the concepts of complexity and criticality are intermingled [1, 2, 3, 4, 5]. Criticality is associated with critical surfaces in parameter space, for example a critical point or a critical line in a continuous phase transition. If the system has a high-dimensional parameter space, being critical at such a lower-dimensional surface necessarily implies fine tuning of the system parameters to lie close to that surface. Since several complex systems seem to lie spontaneously at the border of such critical surfaces, we need to explain this apparent fine tuning.
Several mechanisms have been proposed to do that, the most popular being based on the notion of Self-Organized Criticality (SOC). SOC in microscopically conservative systems, whose prototypical example is the sandpile model [3], is by now a well understood issue [6, 7, 8]. The case of nonconservative (bulk) dynamics has also been subject to intense study, and the present consensus is that these systems are critical only in the conservative limit [9, 10, 11, 12, 6, 13, 14, 15, 16, 17]. Away from that limit, what one observes is simply a large critical region surrounding the critical point, which can be confounded with a generic critical interval due to finite-size effects (pseudo-SOC) [11, 12, 6, 13, 14]. Another class of models has loading mechanisms that make the system hover around the critical point [9, 18], with fluctuations that do not vanish in the thermodynamic limit, a case which has been called Self-Organized quasi-Criticality (SOqC) [15, 16]. A review of SOC applied to neural networks discusses several limitations of the present models [19].
Here we introduce a new class of models that, although nonconservative microscopically, after a transient achieve a stationary regime which is conservative on average, with vanishing fluctuations in the thermodynamic limit. This new behavior, not seen in previous SOC models, is due to an intra-avalanche ultra-soft loading mechanism (loading on a slower time scale than that present in SOqC models). Another advantage of our model is that it admits a very simple, exact and transparent mean-field treatment, in comparison to other models in the literature.
2 The model
First we introduce the Levina-Herrmann-Geisel (LHG) model [18, 16], which directly inspired our model. Surprisingly, we will find that the two models pertain to different universality classes. In the LHG model the basic excitable elements are integrate-and-fire neurons with time-dependent membrane potentials $h_i(t)$. The synapses are denoted as $J_{ij}(t)$ and also evolve in time. The synaptic coupling is all-to-all (so there are $N(N-1) \approx N^2$ synapses). The model dynamics is given by
(1) $\dot{h}_i = I^{\mathrm{ext}}\,\delta(t - t_{\mathrm{D}}) + \frac{1}{N}\sum_{j=1}^{N} u\,J_{ij}\,\delta(t - t_j^{sp}) - \theta\,\delta(t - t_i^{sp}) ,$

(2) $\dot{J}_{ij} = \frac{1}{\tau}\left(\frac{\alpha}{u} - J_{ij}\right) - u\,J_{ij}\,\delta(t - t_j^{sp}) .$

Driving: the term $I^{\mathrm{ext}}\,\delta(t - t_{\mathrm{D}})$ is a slow drive on neuron $i$ that acts at times $t_{\mathrm{D}}$ with a given rate.

Firing: neuron $i$ spikes if $h_i \geq \theta$, which defines the spike time $t_i^{sp}$. The term $-\theta\,\delta(t - t_i^{sp})$ corresponds to a reset of the membrane potential.

Integration: the sum over $j$ corresponds to the integration, by the postsynaptic neuron $i$, of the synaptic contributions $u J_{ij}$ of all firing presynaptic neurons $j$.

Synaptic depression: when a presynaptic neuron $j$ fires (which defines the spike time $t_j^{sp}$), all its synapses (out-links) are depressed by the amount $u J_{ij}$.

Synaptic recovery: synapses recover to a target value $\alpha/u$ with a time scale given by $\tau$.
Both LHG [18] and Bonachela et al. [16] did not study the full parameter space but mainly examined the effect of varying $\alpha$ with the other parameters held fixed. They found a critical-like region around a particular value of $\alpha$. However, Bonachela et al. showed that this region is pseudo-critical, because the system's behavior is an oscillation around the critical point with an amplitude that does not vanish in the large-$N$ limit.
Our model builds upon a random-neighbor network of $N$ excitable neurons used previously [20] (inspired by the well-known SIRS epidemiological model). The possible states $s_i \in \{0, 1, \ldots, n-1\}$ are:

Susceptible or Quiescent: $s_i = 0$;

Infected or Firing: $s_i = 1$;

Recovering or Refractory: $s_i = 2, \ldots, n-1$.
The network is of random-neighbor type, each presynaptic neuron $j$ having exactly $K$ out-links to postsynaptic neurons $i$, described by probabilistic couplings (synapses) $p_{ij} \in [0,1]$. Firing sites can induce firing in neighbor sites, creating an avalanche (defined as an uninterrupted sequence of firing sites). The dynamics is composed of the following steps:

Driving: after an avalanche ends and all sites are either quiescent or refractory, we choose a single site at random and force it to fire ($s_i = 1$), creating a new avalanche.

Firing: the probability that a presynaptic neighbor $j$ does not induce firing in the postsynaptic neuron $i$ is $1 - p_{ij}\,\delta(s_j, 1)$, where henceforth $\delta(a,b)$ is the Kronecker delta function. So, the probability that a quiescent ($s_i = 0$) site spikes ($s_i = 1$) is given by $P(0 \to 1) = 1 - \prod_j [1 - p_{ij}\,\delta(s_j, 1)]$, where the product runs over the $K_i$ incoming links of site $i$ (note that $\langle K_i \rangle = K$).

Refractory time: after a site spikes, it deterministically becomes refractory for $n-2$ time steps, and then returns to quiescence:

(3) $s_i(t+1) = s_i(t) + 1 \;\;\mathrm{if}\;\; 1 \leq s_i(t) < n-1 , \qquad s_i(t+1) = 0 \;\;\mathrm{if}\;\; s_i(t) = n-1 .$
Synaptic depression and recovery: to be described below.
Each site has a local branching ratio $\sigma_i = \sum_j p_{ij}$, and we can define a global branching ratio $\sigma = (1/N)\sum_i \sigma_i$. If the $p_{ij}$ are drawn from a uniform distribution with average $p$ and kept fixed, then the average branching ratio $\sigma = Kp$ controls the collective behavior of the network: for $\sigma < 1$ ($\sigma > 1$) the system is subcritical (supercritical), with unstable (self-sustained) activity. This is the static version of the model, without dynamical synapses, which undergoes a continuous absorbing-state phase transition at $\sigma_c = 1$ [20]. For this static model, criticality is only achieved by fine tuning $\sigma$ to the critical value $\sigma_c$. This continuous absorbing phase transition pertains to the class of Compact Directed Percolation (CDP) and is identical to that present in canonical conservative SOC models [7, 8, 15].
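The static annealed version of these rules is simple enough to simulate in a few lines. The sketch below is our own minimal illustration (function name, parameter values and the annealed redrawing of neighbors at each step are our choices, not the original simulation code): each firing site tries to excite $K$ randomly chosen sites with probability $p = \sigma/K$, and states follow the quiescent/firing/refractory cycle.

```python
import random

def avalanche_size(N=2000, K=10, sigma=1.0, n=3, seed=1, max_steps=10**6):
    """Run one avalanche of the static annealed model and return its size.

    States: 0 = quiescent, 1 = firing, 2..n-1 = refractory.
    Each firing site tries to excite K random (annealed) neighbors,
    each with probability p = sigma / K.
    """
    rng = random.Random(seed)
    p = sigma / K
    s = [0] * N
    s[rng.randrange(N)] = 1           # driving: force one site to fire
    size = 1
    steps = 0
    while any(x == 1 for x in s) and steps < max_steps:
        firing = [i for i, x in enumerate(s) if x == 1]
        to_fire = set()
        for i in firing:
            for _ in range(K):        # K annealed out-links per firing site
                j = rng.randrange(N)
                if s[j] == 0 and rng.random() < p:
                    to_fire.add(j)
        # deterministic refractory progression (the cycle of Eq. (3))
        for i, x in enumerate(s):
            if 1 <= x < n - 1:
                s[i] = x + 1
            elif x == n - 1:
                s[i] = 0
        for j in to_fire:
            s[j] = 1
        size += len(to_fire)
        steps += 1
    return size
```

For $\sigma < 1$ avalanches die out quickly, while at $\sigma = 1$ their sizes become power-law distributed, which is the transition described above.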
Here we report new results obtained with a homeostatic synaptic mechanism described by time-dependent probabilities $p_{ij}(t)$ that follow a depressing/recovering synaptic dynamics similar to that used in the LHG model [18, 16]. The crucial difference with respect to their model, however, is that the recovery mechanism of our synaptic dynamics occurs on a slower, $O(1/N)$, time scale:
(4) $p_{ij}(t+1) = p_{ij}(t) + \frac{1}{\tau N}\left[A - p_{ij}(t)\right] - u\,p_{ij}(t)\,\delta(t, t_j^{sp}) ,$
where $t_j^{sp}$ is the spiking time of the $j$th presynaptic neuron. A comparison with the synaptic part of the LHG model (Eq. (2)) shows that $A$ plays the role of the target value $\alpha/u$ (remember that $p_{ij}$ is proportional to $J_{ij}$), and that the recovery time here is $\tau N$ instead of $\tau$, with $u$ having the same meaning in both models. The parameters of Eq. (4) can be better understood in Figure 1.
Notice that the dynamics in $p_{ij}(t)$ leads to a time-dependent global branching ratio $\sigma(t)$. The different time scale from the LHG model arises because they use an all-to-all coupling where synapses already have a $1/N$ scale (like in the all-to-all Ising model). So, synaptic recovery in the LHG model perturbs the synapses with a term of the same order, and this could be the origin of the large variance of the average synaptic weight measured in [16]. In our case, $p_{ij}$ is $O(1)$ and the $1/(\tau N)$ factor in Eq. (4) defines an ultra-soft recovery mechanism. Although this could be considered a small modeling change, it produces an important change in the universality class of our model.
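The depression/recovery rule of Eq. (4) can be stated in a few lines of code. The sketch below (our own naming and parameter values, chosen for illustration only) shows the two terms acting on a single synapse over one time step:

```python
def synapse_step(p, A, u, tau, N, presynaptic_spiked):
    """One time step of Eq. (4) for a single synapse p_ij.

    Recovery drives p toward the target A on the ultra-soft time
    scale tau*N; depression removes a fraction u of the current p
    when the presynaptic neuron fires. Both terms use p(t), the
    value at the start of the step.
    """
    recovery = (A - p) / (tau * N)       # O(1/N) loading per step
    depression = u * p if presynaptic_spiked else 0.0
    return p + recovery - depression
```

The stationary balance between the two terms, averaged over the network, is exactly the content of Eq. (9) below.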
We observe that the $1/N$ scaling in the synaptic dynamics, also present in the LHG model, is somewhat problematic from the biological point of view, since synapses do not have information about how many neurons there are in the network. However, in the literature on networks with dynamical links, this feature corresponds to the modeling state of the art, and there are no models without it. Perhaps, in real networks, the recovery dynamics mediated by $\tau$ does not depend on $N$ but has anyway a very large recovery time, which is sufficient to put the system in a state very close to the critical one. This problem also suggests a research topic: on what time scales do biological synapses recover after a spike, and does the recovery time depend on the size of the network?
We also observe that $\sigma(t)$ is a property of the architecture, that is, of the network links and how they change. It reflects the average number of firings potentially induced by a firing site in the absence of overall network activity. So, $\sigma$ is not the dynamical branching ratio measured, from an actual firing time series, by averaging the ratio between the numbers of firing sites at successive time steps, which is equal to one even for supercritical systems [19]. The two concepts (branching ratio of the architecture and branching ratio of the activity) have the same interpretation only in the critical limit $\sigma \to 1$.
3 MeanField analysis
The model is amenable to a very simple and transparent analytic treatment at the mean-field level. We initially follow the steps developed for the static model with fixed $p_{ij}$ [20]. Let $\rho_k(t)$ be the density of sites in state $k$ at time $t$. The ensemble average of $p_{ij}$ is $p$. The probability that a quiescent neuron at time step $t$ will be excited in the next time step by at least one of its neighbors can be approximated [20] by $\Lambda(t) = 1 - [1 - p\,\rho(t)]^K$, where $\rho \equiv \rho_1$ is the density of active sites. Given that a site can only be excited when it is quiescent, the dynamics of $\rho$ can be written by resorting again to the mean-field approximation $p_{ij} \approx p$. The dynamics of $\rho_0$, the density of quiescent neurons, is coupled to those of the refractory states, whose deterministic dynamics immediately yield, in the thermodynamic limit $N \to \infty$:
(5) $\rho_1(t+1) = \Lambda(t)\,\rho_0(t) ,$

(6) $\rho_{k+1}(t+1) = \rho_k(t), \qquad k = 1, \ldots, n-2 ,$

(7) $\rho_0(t+1) = [1 - \Lambda(t)]\,\rho_0(t) + \rho_{n-1}(t) .$
To study the stationary state of the mean-field equations, we drop the time dependence ($\rho_k(t) \to \rho_k$) in the above equations to obtain $\rho_1 = \rho_2 = \cdots = \rho_{n-1} \equiv \rho$. Imposing the stationary condition onto the normalization $\sum_{k=0}^{n-1} \rho_k = 1$, we arrive at $\rho_0 = 1 - (n-1)\rho$. We therefore have, in the stationary state, the first of our self-consistent equations, which describes the stationary density of active sites for fixed coupling $p$ [20]:
(8) $\rho = \left[1 - (n-1)\rho\right]\left[1 - (1 - p\rho)^K\right] .$
By considering the limit $\rho \to 0$ in Eq. (8), we obtain the mean-field behavior of a critical branching process: $\rho \propto (\sigma - \sigma_c)^{\beta}$, with $\sigma = Kp$, the critical value $\sigma_c = 1$ and the usual mean-field exponent $\beta = 1$ [7].
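Eq. (8) has no closed-form solution for general $n$ and $K$, but it can be solved by direct fixed-point iteration. The sketch below (our own code and parameter choices) also illustrates the transcritical behavior around $\sigma = 1$: the iteration converges to the absorbing state $\rho = 0$ for $\sigma < 1$ and to a positive stationary density for $\sigma > 1$.

```python
def stationary_rho(sigma, K=10, n=3, iters=2000, rho0=0.1):
    """Fixed-point iteration of Eq. (8):
    rho = [1 - (n-1) rho] [1 - (1 - p rho)^K], with p = sigma / K."""
    p = sigma / K
    rho = rho0
    for _ in range(iters):
        rho = (1.0 - (n - 1) * rho) * (1.0 - (1.0 - p * rho) ** K)
    return rho
```

Near $\sigma_c = 1$ the stationary density grows linearly in $\sigma - 1$, consistent with $\beta = 1$.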
In Eq. (8), $p$ was assumed constant. To obtain its value, we impose the stationary condition on Eq. (4) for the synaptic dynamics, such that the dissipation and the driving (loading) of the system must balance. Dropping the time dependence and averaging Eq. (4) over the ensemble, we obtain
(9) $\frac{1}{\tau N}\,(A - p) = u\,p\,\rho ,$
which can be solved for $p$, rendering

(10) $p = \frac{A}{1 + u\tau N \rho} .$
This is the second of our self-consistent equations, stating the average coupling $p$ as a result of the interplay between synaptic depression ($u$) and recovery ($\tau$, $A$), in the presence of a constant density $\rho$ of spiking neurons.
Together, Eqs. (8) and (10) can be solved to determine $\rho$ and $p$ (and hence $\sigma = Kp$). In particular, in the critical region $\sigma \approx 1$, we can understand how the model parameters affect the distance of the stationary branching ratio $\sigma^*$ from $\sigma_c = 1$, the critical value of a truly self-organized system:
(11) $\sigma^* \simeq 1 + \frac{C}{x} , \qquad x \equiv \frac{u\tau N}{KA - 1} ,$
where the scaling variable $x$ condenses the effect of most of the parameters and the important $N$ dependence, and $C$ is a constant of order unity depending only on $n$ and $K$. Therefore, the mean-field calculation predicts that when $x \gg 1$ (which is realized for finite $u$, $\tau$, $KA > 1$ and large $N$), the stationary value $\sigma^*$ differs from $\sigma_c = 1$ by a term of order $1/N$. The several parameters of the model only affect the constant prefactor of this term. We have a critical state without the need of fine tuning of the parameters, requiring only the large-$N$ limit, which enables the evolution of $\sigma(t)$ to approach the region where Eq. (11) is valid.
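The order-$1/N$ behavior of the stationary branching ratio follows from a small-$\rho$ expansion of the two self-consistent equations. The following derivation sketch uses our own shorthand for the constant (not necessarily the exact notation of the figures):

```latex
% Expand Eq. (8) to second order in \rho, with \sigma = K p:
\rho \simeq \sigma\rho - \left[(n-1)\,\sigma + \tfrac{K-1}{2K}\,\sigma^{2}\right]\rho^{2}
\;\Longrightarrow\;
\sigma - 1 \simeq C\,\rho ,
\qquad C \to (n-1) + \tfrac{K-1}{2K} \quad (\sigma \to 1).
% Eq. (10), rewritten as u\tau N \rho = KA/\sigma - 1, gives for \sigma \simeq 1:
\rho \simeq \frac{KA - 1}{u\tau N}.
% Combining the two relations:
\sigma^{*} - 1 \simeq \frac{C\,(KA-1)}{u\tau N} = \frac{C}{x},
\qquad x \equiv \frac{u\tau N}{KA-1}.
```

The expansion makes explicit that all parameters other than $N$ enter only through the prefactor $C\,(KA-1)/(u\tau)$.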
We also note that $\sigma^* = 1$ can be produced exactly in our model, but at the expense of choosing $A = 1/K$, which is a fine-tuning operation (pulling $\sigma^*$ toward $\sigma_c = 1$). In short, due to the synaptic dynamics, $\sigma$ is no longer a parameter (as in the static model [20]) but rather a slow dynamical variable whose stationary value $\sigma^*$ depends on the parameters $u$, $\tau$, $A$, $K$ and $N$. The coupled equations (8) and (10) are solved numerically to give the curves $\sigma^*(N)$ and $\rho(N)$:
(12) $\sigma^*(N) = 1 + \frac{C\,(KA - 1)}{u\tau N} + \mathcal{O}(1/N^2) .$
This shows that, for large $N$, we have $\sigma^* \to 1$ (critical state); $\sigma^* - 1$ grows with the parameters (for example with $A$) but has a $1/N$ prefactor. So, a graph of $\sigma^*$ as a function of any parameter of the model shows that the critical state (and the avalanche behavior) depends very weakly on the parameters as $N$ grows. We will call this dependence, which vanishes fast with $N$, gross tuning, to differentiate it from the fine tuning needed in several models to achieve SOC [11, 12, 6, 13, 14, 15].
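The coupled system (8) and (10) can also be iterated jointly. The sketch below (our own code, with illustrative parameter values chosen so that $KA > 1$) substitutes Eq. (10) into Eq. (8) at each step and returns the stationary $\sigma^* = Kp$; it can be used to check the $1/N$ approach to criticality numerically.

```python
def self_organized_sigma(N, K=10, n=3, A=0.2, u=0.1, tau=10.0,
                         iters=20000, rho0=1e-3):
    """Jointly iterate Eq. (10), p = A / (1 + u tau N rho), and
    Eq. (8), rho = [1 - (n-1) rho][1 - (1 - p rho)^K];
    return the stationary sigma* = K p."""
    rho = rho0
    for _ in range(iters):
        p = A / (1.0 + u * tau * N * rho)                           # Eq. (10)
        rho = (1.0 - (n - 1) * rho) * (1.0 - (1.0 - p * rho) ** K)  # Eq. (8)
    return K * A / (1.0 + u * tau * N * rho)
```

For these values, $(\sigma^* - 1) \cdot N$ is essentially constant across decades of $N$, which is the $1/N$ scaling of Eq. (12).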
4 Results
Our mean-field calculation describes a system without spatial correlations, in which neighbors are chosen at random at each time step (annealed model). Although a step back in biological realism, the annealed model is very important due to the insights furnished alongside the mean-field results.
Irrespective of the initial distribution of couplings $p_{ij}(0)$, which defines an initial value $\sigma(0)$, the network architecture evolves ("self-organizes") during a transient toward a stationary regime with $\sigma(t) \approx \sigma^*$. This self-organization can be followed by measuring $\sigma(t)$, see Fig. 2. The fact that the branching ratio evolves and self-organizes in time is a characteristic of networks with adaptive links, not present in classical SOC models like sandpiles and earthquake models, where the links are static and represent, say, how much a given toppling site gives to its neighbors. Also, if perturbed or damaged, the set of synapses recovers and achieves a new stationary state similar to the previous one. The evolution of $p_{ij}(t)$ and of the corresponding $\sigma(t)$ toward criticality, as exposed in Fig. 2, seems to us a stronger instantiation of Per Bak's original idea of a truly self-organizing system [21].
The stationary time series of $\sigma(t)$ presents fluctuations around an average value $\sigma^*$, with standard deviation $\Delta\sigma$. The stationary distribution $P(\sigma)$ is roughly Gaussian (Fig. 3), with $\sigma^*$ approaching the critical point $\sigma_c = 1$ for large $N$ (see Eq. (11)). The most important result is that $P(\sigma)$ tends to a delta function, with $\Delta\sigma \to 0$ as $N \to \infty$, see inset of Fig. 3.
In the stationary state, the model is therefore conservative on average, in the sense that it conserves the average number of active sites. In other words, its timeaveraged branching ratio is critical for large enough .
In Fig. 4 we present theoretical (Eq. (8)) and simulation results for the annealed model. To show the supercritical regime, we used large values of $A$ for a given finite $N$, in order to produce $\sigma^* > 1$. From this figure it is clear that the synaptic dynamics induces the system to lie at the critical point of a continuous absorbing-state phase transition [7, 8]. This is an important feature, not present in the LHG model, as extensively discussed by Bonachela et al. [16].
The main characteristic of our model is that if we plot $\sigma^*$ versus a parameter (for example $A$, $u$ or $\tau$), not only does $\sigma^*$ tend to $1$, but also the parametric derivative tends to zero as $N$ increases. This means that a plateau appears around the critical point, so that the parametric dependence (for all parameters) vanishes for large $N$. This can be seen explicitly in the mean-field equations, where, for any parameter $\gamma$, we have:
(13) $\frac{\partial \sigma^*}{\partial \gamma} = \frac{f(\gamma)}{N} ,$
for some function $f(\gamma)$ that does not depend on $N$. For example, the emergence of a parametric plateau can be seen in Fig. 5 (notice the logarithmic scale). The same behavior can be observed for the other parameters.
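The $1/N$ collapse of the parametric derivative can be probed by finite differences on the numerical solution of Eqs. (8) and (10). The self-contained sketch below (our own solver and parameter values, chosen only for illustration) estimates $\partial\sigma^*/\partial A$ at two system sizes:

```python
def sigma_star(N, A, K=10, n=3, u=0.1, tau=10.0, iters=20000):
    """Stationary sigma* from jointly iterating Eq. (8) and Eq. (10)."""
    rho = 1e-3
    for _ in range(iters):
        p = A / (1.0 + u * tau * N * rho)
        rho = (1.0 - (n - 1) * rho) * (1.0 - (1.0 - p * rho) ** K)
    return K * A / (1.0 + u * tau * N * rho)

def dsigma_dA(N, A=0.2, h=1e-4):
    """Centered finite difference of sigma*(A) at fixed N."""
    return (sigma_star(N, A + h) - sigma_star(N, A - h)) / (2 * h)
```

Increasing $N$ by a factor of 10 should shrink the derivative by roughly the same factor, which is the plateau of Eq. (13).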
The avalanche finite-size scaling, however, is somewhat problematic, as also observed in other nonconservative models [15, 16]. To obtain a precise scaling of critical avalanches for finite $N$, one needs to tune the parameters. For example, with the other parameters fixed, the choice
(14) $\tau = \tau_0 N^{\delta} ,$
as suggested by Bonachela et al. [15, 16], leads to the correct scaling of the cumulative avalanche size distribution, $C(s) \propto s^{1 - \tau_s}$ with the mean-field exponent $\tau_s = 3/2$, where $C(s) = \sum_{s' \geq s} P(s')$ and $P(s)$ is the probability that an avalanche has size $s$ (see Fig. 6).
The scaling with $N$ in Eq. (14) is not so problematic, since it can be absorbed into the original $1/N$ scaling of the synaptic recovery, Eq. (4), that is, by using from the start a recovery term with prefactor $1/(\tau_0 N^{1+\delta})$. However, critical avalanches are observed only for a definite choice of $\tau_0$ (which now does not depend on $N$). To what extent this tuning implies that our system is an SOqC model is discussed below.
In our model, the avalanche cutoff scales as a power of $N$, with an exponent different from the one found in the LHG model [16] and from the one found in other models [15]. Bonachela et al. [16] observed that a random-neighbor version of the LHG model presented an anomalous cutoff exponent, but did not report its value. Naive scaling considerations, similar to those done in [15], do not account for the measured exponents, so we prefer to reserve this issue for future consideration.
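At the mean-field level the avalanches are a critical branching process, so the expected $C(s) \sim s^{-1/2}$ decay of the cumulative size distribution can be checked with a few lines. The sketch below is our own illustration (a pure Galton-Watson approximation with binomial offspring of mean $\sigma$, not the original finite-$N$ network code):

```python
import random

def gw_avalanche_size(rng, K=10, sigma=1.0, cap=10**4):
    """Size of one Galton-Watson avalanche: each firing site excites
    each of its K neighbors independently with probability sigma/K,
    ignoring refractoriness (valid in the sparse, large-N limit).
    Sizes are capped at `cap` to keep critical runs finite."""
    p = sigma / K
    active, size = 1, 1
    while active and size < cap:
        offspring = sum(1 for _ in range(active * K) if rng.random() < p)
        active = offspring
        size += offspring
    return size

rng = random.Random(42)
sizes = [gw_avalanche_size(rng) for _ in range(5000)]

def cumulative(sizes, s):
    """Empirical C(s): fraction of avalanches with size >= s."""
    return sum(1 for x in sizes if x >= s) / len(sizes)
```

At criticality $C(s)$ falls roughly as $s^{-1/2}$, so quadrupling $s$ should halve $C(s)$; away from $\sigma = 1$ the decay becomes exponential, which is the finite-size crossover discussed above.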
5 Discussion
Bonachela et al. [15, 16] sought to define a universality class, with a definite field theory, for bulk-dissipative models, which they call Self-Organized quasi-Criticality (SOqC). In doing so, they claimed that this class is characterized by three (necessary) features:

A) The stationary distribution of coupling values, which corresponds to $P(\sigma)$ in our model, has finite variance even in the infinite-size limit;

B) The system hovers around the critical point, with excursions into the supercritical and subcritical phases, so that the avalanche distribution is constructed by summing supercritical and subcritical avalanches;

C) For finite $N$, to obtain a correct scaling with power-law avalanches, one must use tuned parameters (like the choice in Eq. (14)).
The LHG model [18, 16] presents all these features, being classified as an SOqC model. The same occurs with other bulk-dissipative models [15]. Our model, however, presents only feature C (see Fig. 6) and lacks the important features A and B. In contrast to the LHG model, our model presents vanishing variance for $N \to \infty$, so that it neither oscillates nor makes supercritical (or subcritical) excursions in the large-$N$ limit. Also, the limit $\sigma^* \to 1$ is achieved very fast, with weak dependence (tuning) on the parameters, because they enter only as constants in front of a $1/N$ factor. Anyway, we observe that, in practice, neuronal networks always work with a very large number of elements (say, one million), compared, for example, with the sizes used in our simulations. So, the large-$N$ limit is the relevant one, and in this limit the avalanche behavior depends very weakly on the parameters, as can be seen from the mean-field results (Eq. (12)). So, for large $N$, it is more precise to talk about "gross tuning" instead of "fine tuning" to describe the finite-size avalanche behavior of our model.
Our model lies at the border of a phase transition to an absorbing state (Compact Directed Percolation), instead of a dynamical percolation transition, which is the relevant transition in bona fide SOC models [8, 7, 15]. Since the universality class of our model is different from that of the LHG model, it should not be put in the same SOqC class. The problem here is one of definition: if item C is sufficient to classify a system as SOqC, then our model is SOqC (but then the SOqC class comprises two universality classes). But if items A and B are necessary conditions, then our model is not SOqC and another class must be created. What we can claim at this moment is that, since our model lacks features A and B, its behavior resembles the conserving SOC models much more than the LHG model or other nonconservative models [15, 16].
The synaptic depression, mediated by the parameter $u$, is not conservative. The absence of conserved quantities in the bulk, and especially during the (self-organization) transient, is another feature that sets our model apart from conventional SOC models. The fact that our model violates conservation in the bulk, however, does not preclude true criticality. Recently, Moosavi and Montakhab [22] showed that sandpile models with noise (which violates microscopic conservation but preserves conservation on average) can be critical if the noiseless model is critical and the noise has zero mean. In the case of our model, conservation on average is achieved in the stationary state, after a nonconserving transient. So, we conclude that nonconservative bulk dynamics is not a sufficient feature to put a system in the SOqC class.
Which ingredients could account for the differences between our model and the LHG model, which clearly pertain to different universality classes? We identify four main possibilities: i) their model uses continuous-time integrate-and-fire units, in contrast to our excitable (SIRS) discrete-time units; ii) their units are deterministically coupled via weighted synaptic sums, while our discrete automata are coupled by probabilistic synapses; iii) in the LHG model the avalanches are deterministic, while in our model they are stochastic; iv) their model is based on a complete graph with $N^2$ synapses, while our model sits on a random graph with finite average degree $K$ and hence $KN$ synapses.
It seems to us that items i), ii) and iii) could hardly be responsible for a change of universality class. On the other hand, item iv) refers to a change of topology, along with a change of time scale in the synaptic dynamics: the LHG recovery perturbs synapses, which already have a $1/N$ scale (because of the complete-graph topology), with a term of the same order, which means that the synapses are strongly perturbed over time. In our random-neighbor model, since $K$ is finite, the synapses are $O(1)$ and the synaptic change of $O(1/N)$ per time step is infinitesimal for large $N$.
This means that the correction to $\sigma(t)$ per time step diminishes for increasing $N$, preventing large excursions or oscillations around the critical point. We believe that this ultra-soft synaptic correction is the missing element, not contemplated in the literature, that produces an SOC model with vanishing variance. If this is true, then one can predict that a simulation of the Levina et al. model with a finite number $K$ of random neighbors should fall in our universality class, that is, should present vanishing variance of the average coupling in the thermodynamic limit around a CDP transition.
We finally observe that, although $P(\sigma)$ tends to a delta function, the distribution of local couplings $p_{ij}$ or, equivalently, of local branching ratios $\sigma_i$, is not a delta function in the $N \to \infty$ limit. The two facts are not in contradiction because $\sigma$ is an average over sites and the delta-function limit is a large-$N$ effect. In other words, the delta-function limit is an effect of the law of large numbers on the average of a distribution which continues to have finite variance for large $N$. This means that the model is nontrivial: there is sufficient diversity in the couplings (synapses) to mimic a real biological network.
In conclusion, we have presented an excitable (SIRS) automata model for neural networks with dynamical synapses which seems to pertain to a new universality class: models with dissipative bulk dynamics that, due to homeostatic mechanisms, achieve stationary dynamics that are conservative on average. In this model, as in conservative SOC models, the relevant transition pertains to the CDP class. An evolving "control" parameter (the architectural branching ratio) self-organizes toward criticality, and its variance around the critical point vanishes in the thermodynamic limit.
References
 [1] C. G. Langton. Computation at the edge of chaos: Phase transitions and emergent computation. Physica D, 42:12–37, 1990.
 [2] S. Wolfram. Statistical mechanics of cellular automata. Rev. Mod. Phys., 55:601–644, 1983.
 [3] P Bak, C Tang, and K Wiesenfeld. Selforganized criticality: An explanation of the 1/f noise. Phys. Rev. Lett., 59:381–384, 1987.
 [4] D. R. Chialvo. Emergent complex neural dynamics. Nat. Phys., 6:744–750, 2010.
 [5] S. M. Reia and O. Kinouchi. Conway’s game of LIFE is a near critical metastable state in the multiverse of cellular automata. Phys. Rev. E, 89:052123, 2014.
 [6] H. J. Jensen. SelfOrganized Criticality: Emergent Complex Behavior in Physical and Biological Systems. Cambridge University Press, Cambridge, 1998.
 [7] M. A. Muñoz, R. Dickman, A. Vespignani, and S. Zapperi. Avalanche and spreading exponents in systems with absorbing states. Phys. Rev. E, 59(5):6175–6179, 1999.
 [8] R. Dickman, M. A. Muñoz, A. Vespignani, and S. Zapperi. Paths to selforganized criticality. Braz. J. Phys., 30:27 – 41, 2000.
 [9] B. Drossel and F. Schwabl. Selforganized critical forestfire model. Phys. Rev. Lett., 69(11):1629, 1992.
 [10] J. E. S. Socolar, G. Grinstein, and C. Jayaprakash. On selforganized criticality in nonconserving systems. Phys. Rev. E, 47:2366–2376, Apr 1993.
 [11] HM. Bröker and P. Grassberger. Random neighbor theory of the olamifederchristensen earthquake model. Phys. Rev. E, 56:3944–3952, Oct 1997.
 [12] M. L. Chabanol and V. Hakim. Analysis of a dissipative model of selforganized criticality with random neighbors. Phys. Rev. E, 56:R2343–R2346, 1997.
 [13] O. Kinouchi and C. P. C. Prado. Robustness of scale invariance in models with selforganized criticality. Phy. Rev. E, 59(5):4964, 1999.
 [14] J. X. de Carvalho and C. P. C. Prado. Selforganized criticality in the OlamiFederChristensen model. Phys. Rev. Lett., 84(17):4006, 2000.
 [15] J. A. Bonachela and M. A. Muñoz. Selforganization without conservation: true or just apparent scaleinvariance? J. Stat. Mech., 2009:P09009, 2009.
 [16] J. A. Bonachela, S. de Franciscis, J. J. Torres, and M. A. Muñoz. Selforganization without conservation: are neuronal avalanches generically critical? J. Stat. Mech., 2010:P02015, 2010.
 [17] E. Martin, A. Shreim, and M. Paczuski. Activitydependent branching ratios in stocks, solar xray flux, and the baktangwiesenfeld sandpile model. Phys. Rev. E, 81:016109, Jan 2010.
 [18] A. Levina, J. M. Herrmann, and T. Geisel. Dynamical synapses causing selforganized criticality in neural networks. Nat. Phys., 3:857–860, 2007.
 [19] J. Hesse and T. Gross. Selforganized criticality as a fundamental property of neural systems. Front. Syst. Neurosci., 8:1–5, 2014.
 [20] O. Kinouchi and M. Copelli. Optimal dynamical range of excitable networks at criticality. Nat. Phys., 2:348–351, 2006.
 [21] P. Bak. How Nature Works: The Science of SelfOrganized Criticality. Oxford University Press, New York, 1997.
 [22] S. A. Moosavi and A. Montakhab. Meanfield behavior as a result of noisy local dynamics in selforganized criticality: Neuroscience implications. Phys. Rev. E, 89:052139, May 2014.