# Distinct dynamical behavior in random and all-to-all neuronal networks

###### Abstract

Neuronal network dynamics depends on network structure. It is often assumed that neurons are connected at random when their actual connectivity structure is unknown. Such models are then often approximated by replacing the random network by an all-to-all network, where every neuron is connected to all other neurons. This mean-field approximation is a common approach in statistical physics. In this paper we show that such an approximation can be invalid. We solve analytically a neuronal network model with binary-state neurons in both random and all-to-all networks. We find strikingly different phase diagrams corresponding to each network structure. Neuronal network dynamics is not only different within certain parameter ranges, but it also undergoes different bifurcations. Our results therefore suggest caution when using mean-field models based on all-to-all network topologies to represent random networks.

## I Introduction

The brain is an enormous network of neurons connected by synapses. Neurons are dynamical systems whose dynamics depends on the interaction with other neurons. Understanding how network structure shapes neuronal dynamics is of fundamental importance to unveil the workings of the brain. Modelling of neuronal networks often considers neurons connected in all-to-all or random networks (e.g. Brunel and Hakim (1999); Koulakov et al. (2002); Börgers and Kopell (2003); Izhikevich (2003); Yuste (2015)). In the case of random neuronal networks, a lack of mathematical methods to treat these networks exactly usually forces approximations that may use all-to-all neuronal interactions (see e.g. Wang et al. (1995); Golomb and Hansel (2000); Hansel and Mato (2001)). Such an approximation may seem to be supported by mean-field models such as the Wilson–Cowan model, which do not explicitly define the underlying neuronal network architecture Wilson and Cowan (1973); Destexhe and Sejnowski (2009). The fact that most mean-field models do not account for network structure may give the misimpression that structure plays only a minor role in neuronal dynamics.

Many models of statistical physics, including the Ising, Potts, Kuramoto and other models, display standard mean-field behavior, as in all-to-all networks, provided that the heterogeneity of the network is sufficiently weak, namely, when the second moment of the degree distribution is finite Dorogovtsev et al. (2002, 2008); Lopes et al. (2016). Additionally, the annealed network approximation, by which an uncorrelated random network may be replaced by a weighted all-to-all network Giuraniuc et al. (2006); Dorogovtsev et al. (2008); Lopes et al. (2016), may also suggest that representing a random network with an all-to-all network is an acceptable approximation. However, an all-to-all network is always denser and more clustered than a random network, properties that may have a strong impact on dynamics.

In this paper we consider a neuronal network model previously introduced and studied in Refs. Goltsev et al. (2010); Holstein et al. (2013); Lee et al. (2014); Lopes et al. (2014, 2017). Herein, we solve the model analytically in both all-to-all and random networks. Our aim is to clarify whether these two network structures underpin equivalent or distinct neuronal dynamics.

## II Model

We consider the neuronal network model introduced in Refs. Goltsev et al. (2010); Lee et al. (2014) and further studied in Refs. Holstein et al. (2013); Lopes et al. (2014, 2017). The network consists of $N$ neurons, $g_e N$ excitatory neurons and $g_i N$ inhibitory neurons ($g_e + g_i = 1$). Neurons can either be active and fire spike trains or be inactive and stay silent. Their state is a function of positive currents coming from presynaptic excitatory neurons and negative currents from presynaptic inhibitory neurons. Additionally, neurons are also stimulated by noise, which accounts for both internal and external stochastic processes that may influence neuronal dynamics Faisal et al. (2008). The neurons act as stochastic integrators: they sum their input currents during an integration time $\tau$ and switch their dynamical state with a probability that depends on whether the input is larger or smaller than a threshold $\Omega$. More specifically, an inactive excitatory (inhibitory) neuron becomes active with probability $\mu_e \tau$ ($\mu_i \tau$) if its total input current is larger than $\Omega$. Conversely, an active neuron becomes inactive with probability $\mu_a \tau$ if its total input current is smaller than $\Omega$ ($a = e$ for excitatory neurons, and $a = i$ for inhibitory neurons). $\tau_e \equiv 1/\mu_e$ and $\tau_i \equiv 1/\mu_i$ are the first-spike latencies of excitatory and inhibitory neurons, respectively. As we shall see, the ratio $\alpha \equiv \tau_i/\tau_e$ plays an important role in the model by controlling the relative response times of excitatory and inhibitory neurons.

We define the fractions of active excitatory and inhibitory neurons at time $t$, $\rho_e(t)$ and $\rho_i(t)$, to characterise the neuronal network dynamics. We will refer to these fractions as activities. These activities follow the rate equations Goltsev et al. (2010); Lee et al. (2014)

$$\dot{\rho}_e(t) = \mu_e\left[F(\rho_e,\rho_i) - \rho_e\right], \qquad \dot{\rho}_i(t) = \mu_i\left[F(\rho_e,\rho_i) - \rho_i\right], \tag{1}$$

where $\mu_e = 1/\tau_e$, $\mu_i = 1/\tau_i$, and $F(\rho_e,\rho_i)$ is the probability of a randomly chosen neuron to become active at time $t$. This function encodes all information concerning single neuron dynamics, noise, and network structure.
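As a concrete illustration, Eq. (1) can be integrated with a simple Euler scheme once $F$ is specified. In the sketch below the sigmoidal `F_toy` is only a stand-in for the model's actual activation function (given later by Eqs. (2) and (5)), and all parameter values are arbitrary:

```python
import math

def simulate(F, mu_e=1.0, mu_i=2.0, rho0=(0.0, 0.0), dt=0.01, steps=5000):
    """Euler-integrate the rate equations (1):
    drho_e/dt = mu_e * (F(rho_e, rho_i) - rho_e)
    drho_i/dt = mu_i * (F(rho_e, rho_i) - rho_i)"""
    rho_e, rho_i = rho0
    for _ in range(steps):
        f = F(rho_e, rho_i)
        rho_e += dt * mu_e * (f - rho_e)
        rho_i += dt * mu_i * (f - rho_i)
    return rho_e, rho_i

def F_toy(rho_e, rho_i):
    """Toy sigmoidal activation: NOT the model's F; it merely stands in
    so the integrator can be exercised (fixed point at rho = 0.5)."""
    return 1.0 / (1.0 + math.exp(-(4.0 * rho_e - 3.0 * rho_i - 0.5)))
```

Starting from the inactive state, the activities relax to the self-consistent fixed point $\rho_e = \rho_i = F(\rho_e, \rho_i)$.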

### II.1 Random network

We have previously solved the model in the case where neurons are connected in an Erdős–Rényi network Goltsev et al. (2010); Lee et al. (2014). We found the heterogeneous mean-field function $F(\rho_e,\rho_i)$,

$$F(\rho_e,\rho_i) = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} P(k,\tilde{c}_e)\,P(l,\tilde{c}_i) \int_{-\infty}^{\infty} G(n)\,\Theta\!\left[J_e(k+n) + J_i l - \Omega\right] \mathrm{d}n. \tag{2}$$

The function considers a randomly chosen neuron that integrates $k$ spikes from excitatory presynaptic neurons, $l$ spikes from inhibitory presynaptic neurons, and $n$ spikes from noise. $J_e$ and $J_i$ are synaptic efficacies that weight these contributions ($J_e > 0$ and $J_i < 0$). $\Theta$ is the Heaviside step function, $\Theta(x) = 1$ if $x \geq 0$, $\Theta(x) = 0$ otherwise. The numbers of excitatory and inhibitory spikes, $k$ and $l$, follow a Poisson distribution, $P(k,\tilde{c}) = \tilde{c}^{\,k} e^{-\tilde{c}}/k!$, that accounts for the random structure Goltsev et al. (2010). The average number of spikes is $\tilde{c}_a = c g_a \rho_a$ for $a \in \{e,i\}$, where $c$ is the mean degree and $g_a \rho_a$ accounts for the average fraction of active presynaptic neurons in population $a$. The noise $n$ follows a Gaussian distribution $G(n)$ with mean $\langle n \rangle$ and variance $\sigma^2$ as in Refs. Lee et al. (2014); Lopes et al. (2014, 2017). For more details about the derivation of this function see Refs. Goltsev et al. (2010); Lee et al. (2014).
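Numerically, Eq. (2) can be evaluated by truncating the Poisson sums and folding the Gaussian noise into a normal CDF. The sketch below assumes the structure described above (Poisson inputs, noise spikes weighted like excitatory ones); the default parameter values are illustrative, chosen for speed, and are not the values used in the paper:

```python
import math

def poisson_pmf(k, lam):
    """P(k, lam) = lam^k e^(-lam) / k!, computed in log space."""
    if lam == 0.0:
        return 1.0 if k == 0 else 0.0
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_random(rho_e, rho_i, c=100.0, ge=0.75, Je=1.0, Ji=-3.0,
             Omega=30.0, n_mean=25.0, sigma=10.0, kmax=400):
    """Eq. (2): a randomly chosen neuron receives k excitatory and l
    inhibitory spikes (Poisson with means c*ge*rho_e and c*gi*rho_i)
    plus Gaussian noise n, and activates when Je*(k+n) + Ji*l >= Omega."""
    gi = 1.0 - ge
    lam_e, lam_i = c * ge * rho_e, c * gi * rho_i
    total = 0.0
    for k in range(kmax):
        pk = poisson_pmf(k, lam_e)
        if pk < 1e-12 and k > lam_e:
            break  # past the Poisson bulk; remaining terms negligible
        for l in range(kmax):
            pl = poisson_pmf(l, lam_i)
            if pl < 1e-12 and l > lam_i:
                break
            # P[n >= (Omega - Je*k - Ji*l)/Je] for n ~ N(n_mean, sigma^2)
            thr = (Omega - Je * k - Ji * l) / Je
            total += pk * pl * std_normal_cdf((n_mean - thr) / sigma)
    return total
```

As expected, the activation probability grows with the excitatory activity and decreases with the inhibitory one.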

### II.2 All-to-all network

In the case of an all-to-all network, every neuron receives spikes from all other active neurons in the network,

$$F(\rho_e,\rho_i) = \int_{-\infty}^{\infty} \hat{G}(\hat{n})\,\Theta\!\left[\hat{J}_e g_e N \rho_e + \hat{J}_i g_i N \rho_i + J_e \hat{n} - \hat{\Omega}\right] \mathrm{d}\hat{n}, \tag{3}$$

where we use the standard normalisations $\hat{J}_e = J_e/N$ and $\hat{J}_i = J_i/N$. Note that these normalisations imply that both the noise intensity and the threshold must be rescaled. Given that the input current in Eq. (2) is proportional to the mean degree $c$, for the sake of comparison we define $\hat{\Omega} \equiv \Omega/c$ and $\langle \hat{n} \rangle \equiv \langle n \rangle/c$, and consequently $\hat{\sigma} \equiv \sigma/c$, where $\hat{G}(\hat{n})$ is the Gaussian noise distribution with mean $\langle \hat{n} \rangle$ and variance $\hat{\sigma}^2$. We thus find the function $F$ for an all-to-all network,

$$F(\rho_e,\rho_i) = \int_{-\infty}^{\infty} \hat{G}(\hat{n})\,\Theta\!\left[J_e(g_e \rho_e + \hat{n}) + J_i g_i \rho_i - \hat{\Omega}\right] \mathrm{d}\hat{n}. \tag{4}$$

As above, we consider Gaussian noise and therefore $F$ can be written as

$$F(\rho_e,\rho_i) = \Phi\!\left(\frac{J_e g_e \rho_e + J_i g_i \rho_i + J_e \langle \hat{n} \rangle - \hat{\Omega}}{J_e \hat{\sigma}}\right), \tag{5}$$

where $\Phi(x)$ is the cumulative distribution function of the standard normal distribution Abramowitz and Stegun (1970),

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,\mathrm{d}t. \tag{6}$$

Thus, the neuronal network dynamics are governed by the following rate equations:

$$\dot{\rho}_a = \mu_a\left[\Phi\!\left(\frac{J_e g_e \rho_e + J_i g_i \rho_i + J_e \langle \hat{n} \rangle - \hat{\Omega}}{J_e \hat{\sigma}}\right) - \rho_a\right], \qquad a = e, i. \tag{7}$$
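In code, Eqs. (5) and (6) reduce each step of the all-to-all dynamics to a single error-function evaluation. A minimal sketch, with illustrative (not the paper's) values for the rescaled threshold and noise:

```python
import math

def Phi(x):
    """Standard normal CDF, Eq. (6), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_all_to_all(rho_e, rho_i, ge=0.6, Je=1.0, Ji=-3.0,
                 Omega_hat=0.30, n_hat=0.25, sigma_hat=0.10):
    """Eq. (5): the Heaviside average over Gaussian noise is the
    normal CDF of the normalised mean input current."""
    gi = 1.0 - ge
    arg = (Je * ge * rho_e + Ji * gi * rho_i
           + Je * n_hat - Omega_hat) / (Je * sigma_hat)
    return Phi(arg)
```

Unlike the double Poisson sum of Eq. (2), this closed form is exact for any network size, which is what makes the all-to-all case analytically convenient.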

### II.3 Parameters

We considered the following model parameters except when otherwise specified. In random networks the parameters are the mean degree $c$, the threshold $\Omega$, the integration time $\tau$, the synaptic efficacies $J_e > 0$ and $J_i < 0$, and the noise variance $\sigma^2$; their values have been discussed and justified elsewhere Goltsev et al. (2010); Lee et al. (2014); Lopes et al. (2014). Analogously, in all-to-all networks we used the rescaled quantities $\hat{\Omega} = \Omega/c$, $\langle \hat{n} \rangle = \langle n \rangle/c$ and $\hat{\sigma} = \sigma/c$, with the same integration time and synaptic efficacies. The algorithm employed in our numerical simulations was explained in Lee et al. (2014).

## III Steady states

To characterise and compare the neuronal dynamics in random and all-to-all networks, we first find the steady states in the two networks. The system reaches a steady state when $\dot{\rho}_e = \dot{\rho}_i = 0$. In both networks, the steady excitatory activity is equal to the steady inhibitory activity, $\rho^*_e = \rho^*_i \equiv \rho^*$. In random networks, we find the steady state equation

$$\rho^* = F(\rho^*, \rho^*), \tag{8}$$

where $F$ is given by Eq. (2). Similarly, we find the steady state equation in all-to-all networks,

$$\rho^* = \Phi\!\left(\frac{(J_e g_e + J_i g_i)\rho^* + J_e \langle \hat{n} \rangle - \hat{\Omega}}{J_e \hat{\sigma}}\right). \tag{9}$$

Figure 1 shows the steady-state activity $\rho^*$ as a function of the noise level in networks with different fractions of excitatory neurons $g_e$. The noise has an excitatory effect on neurons and, as a result, $\rho^*$ grows with increasing noise. We also find a strong dependence of $\rho^*$ on $g_e$. Note that at the intermediate fraction considered the network is balanced, i.e. $g_e J_e + g_i J_i = 0$, and therefore the net synaptic input $(g_e J_e + g_i J_i)\rho^*$ is zero at the steady states, whilst it is negative at smaller $g_e$ and positive at larger $g_e$. We observe that larger fractions $g_e$ are responsible for more pronounced increases of $\rho^*$ as a function of noise. However, although we find a bistability region bounded by activity jumps in random networks at intermediate noise levels in all three cases (panels in the left column), all-to-all networks show no bistability at the two smaller fractions of excitatory neurons; instead, $\rho^*$ grows gradually with increasing noise. The steepness of $\rho^*$ as a function of noise gets higher with increasing $g_e$, and a bistability region emerges when the steepness becomes infinite. Panel (f) further shows that the bistability region appears in all-to-all networks only above the balanced fraction of excitatory neurons, bounded by a critical noise level. In contrast, random networks display a bistability region both above and below the balanced fraction, bounded by critical noise levels at which the activity jumps take place.
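The jumps and hysteresis just described can be reproduced numerically by sweeping the noise level up and down along a steady-state branch of Eq. (9). The sketch below uses illustrative parameter values, chosen only so that the net coupling $J_e g_e + J_i g_i$ is positive (which is what permits bistability); they are not the values used in the paper:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def branch(noises, rho_start, ge=0.8, Je=1.0, Ji=-3.0,
           Omega_hat=0.30, sigma_hat=0.02):
    """Follow a steady-state branch rho* = F(rho*, rho*) of the
    all-to-all network across a noise sweep, relaxing each solution
    from the previous one by damped fixed-point iteration."""
    gi, rho, out = 1.0 - ge, rho_start, []
    for n_hat in noises:
        for _ in range(20000):
            f = Phi(((Je * ge + Ji * gi) * rho
                     + Je * n_hat - Omega_hat) / (Je * sigma_hat))
            rho = 0.95 * rho + 0.05 * f
        out.append(rho)
    return out

noises = [0.0, 0.1, 0.2, 0.3, 0.4]
forward = branch(noises, 0.0)         # start on the low-activity state
backward = branch(noises[::-1], 1.0)  # start on the high-activity state
```

Inside the bistability window the two sweeps settle on different branches; this hysteresis is what the activity jumps in Fig. 1 bound.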

## IV Phase diagrams and dynamics

To further characterise the neuronal dynamics, we study the local stability of the fixed points determined by Eqs. (8) and (9) Strogatz (1994); Lee et al. (2014). This stability is determined by the eigenvalues of the Jacobian of Eqs. (1),

$$\mathcal{J} = \begin{pmatrix} \mu_e\!\left[\partial_{\rho_e} F - 1\right] & \mu_e\,\partial_{\rho_i} F \\ \mu_i\,\partial_{\rho_e} F & \mu_i\!\left[\partial_{\rho_i} F - 1\right] \end{pmatrix}, \tag{10}$$

evaluated at the fixed points $(\rho^*_e, \rho^*_i)$. In the case of the all-to-all network, the Jacobian of the dynamical system described by Eqs. (7) is

$$\mathcal{J} = \begin{pmatrix} \mu_e\!\left[g_e \mathcal{G}(V) - 1\right] & \mu_e \frac{J_i}{J_e}\, g_i \mathcal{G}(V) \\ \mu_i\, g_e \mathcal{G}(V) & \mu_i\!\left[\frac{J_i}{J_e}\, g_i \mathcal{G}(V) - 1\right] \end{pmatrix}, \tag{11}$$

where $\mathcal{G}$ is the Gaussian distribution with zero mean and standard deviation $\hat{\sigma}$,

$$\mathcal{G}(V) = \frac{1}{\sqrt{2\pi}\,\hat{\sigma}}\, e^{-V^2/2\hat{\sigma}^2}, \tag{12}$$

and $V = g_e \rho^*_e + (J_i/J_e)\, g_i \rho^*_i + \langle \hat{n} \rangle - \hat{\Omega}/J_e$.

The eigenvalues of the Jacobian matrices are given by the same equation,

$$\lambda_{\pm} = \frac{\mathcal{J}_{11} + \mathcal{J}_{22}}{2} \pm \sqrt{\frac{\left(\mathcal{J}_{11} - \mathcal{J}_{22}\right)^2}{4} + \mathcal{J}_{12}\mathcal{J}_{21}}, \tag{13}$$

where $\mathcal{J}_{mn}$ are the entries of the Jacobian.
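Eq. (13) is the standard eigenvalue formula for a $2\times 2$ matrix; a small generic helper makes the real/imaginary split used below explicit (it is written for arbitrary entries, not tied to specific model parameters):

```python
import math

def eigvals_2x2(j11, j12, j21, j22):
    """Eigenvalues of a 2x2 Jacobian, Eq. (13):
    lambda = (j11+j22)/2 +/- sqrt((j11-j22)^2/4 + j12*j21),
    returned as complex numbers so that damped oscillations
    (nonzero imaginary part) are representable."""
    half_tr = 0.5 * (j11 + j22)
    disc = 0.25 * (j11 - j22) ** 2 + j12 * j21
    if disc >= 0:
        root = complex(math.sqrt(disc), 0.0)
    else:
        root = complex(0.0, math.sqrt(-disc))
    return half_tr + root, half_tr - root
```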

To find phase boundaries separating different dynamical behaviors in both random and all-to-all networks, we solve the conditions

$$\mathrm{Re}\,\lambda_{\pm} = 0 \tag{14}$$

and

$$\mathrm{Im}\,\lambda_{\pm} = 0. \tag{15}$$

Additionally, we solve the equation

$$\left.\frac{\mathrm{d}F(\rho,\rho)}{\mathrm{d}\rho}\right|_{\rho = \rho^*} = 1, \tag{16}$$

which determines the level of noise at which the neuronal activity jumps observed in Fig. 1 take place. We have previously demonstrated that the jumps correspond to saddle-node bifurcations Lee et al. (2014). Figure 2 shows the numerical solutions of Eqs. (14), (15) and (16) in noise–$\alpha$ planes at different fractions of excitatory neurons $g_e$ for both random and all-to-all networks. We identify four regions of neuronal activity: in region I, activity relaxes exponentially to a low activity state; region II is a bistability region where the lower and upper metastable states may be stable or unstable (see Ref. Lee et al. (2014) for more details); region III corresponds to sustained network oscillations; and in regions IVa and IVb, activity relaxes to a high activity state exponentially and in the form of damped oscillations, respectively. Note that in all-to-all networks the absence of a saddle-node bifurcation enables regions I and IVa to form a continuum from low to high activity as noise increases (region I+IVa in Fig. 2(b) and (d)). We observe that as we increase the fraction of excitatory neurons $g_e$, the region of neuronal network oscillations shrinks in both network topologies. At the largest $g_e$ considered, the all-to-all network no longer displays network oscillations, in striking contrast with random networks, which present a large area in parameter space with oscillations. Furthermore, we find that whilst region III in Fig. 2(a), (c) and (e) is bounded by both a saddle-node on invariant circle (SNIC) bifurcation and a Hopf bifurcation in random networks, oscillations in all-to-all networks emerge only via a Hopf bifurcation.
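The region labels above correspond to sign patterns of the eigenvalues in Eq. (13); a minimal classifier making that mapping explicit (the label strings are ours, for illustration only):

```python
def classify(lam1, lam2, tol=1e-12):
    """Map Jacobian eigenvalues to a qualitative regime:
    Re > 0 on either eigenvalue -> unstable fixed point (a complex
    pair crossing Re = 0 is the Hopf scenario bounding region III);
    Re < 0 with Im != 0 -> damped oscillations (region IVb);
    Re < 0 with Im == 0 -> exponential relaxation (regions I, IVa)."""
    re = max(lam1.real, lam2.real)
    osc = abs(lam1.imag) > tol or abs(lam2.imag) > tol
    if re > tol:
        return "unstable"
    return "damped oscillations" if osc else "exponential relaxation"
```

Scanning this classification over the noise–$\alpha$ plane is what produces phase diagrams like Fig. 2.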

Figure 3 displays representative neuronal network activity in three of the regions identified in Fig. 2. We chose equivalent parameters in the two networks corresponding to comparable regions of the phase diagrams. As expected from Fig. 1, the steady states are quantitatively different in the two networks. Moreover, we observe that the network oscillations present different shapes. Figure 3 also shows the results of simulations using finite networks. Note that in the low activity state, panels (a) and (b), the activity is very low, hence most neurons are silent most of the time in the simulations, except for occasional random firings. In the high activity state, whilst neuronal activity fluctuates in random networks (see panel (c)), it does not in all-to-all networks (see panel (d)). Overall, we observe good agreement between the numerical integrations of Eqs. (1) and (7), corresponding to the infinite-size limit, and numerical simulations of finite neuronal networks, in both random and all-to-all topologies.

## V Discussion and Conclusions

All-to-all connectivity has been assumed to provide a reasonable approximation of random networks for the analytical treatment of neuronal network models (see e.g. Wang et al. (1995); Golomb and Hansel (2000); Hansel and Mato (2001)). In this paper, we addressed the reliability of this assumption. We compared neuronal network dynamics in random and all-to-all networks using the same analytically solvable model in both topologies. The considered model comprised stochastic binary-state excitatory and inhibitory neurons interacting in a network Goltsev et al. (2010); Lee et al. (2014); Lopes et al. (2014, 2017). We found that the network structure has a strong impact on the observed dynamics and bifurcation diagram. Depending on the parameters and phenomena of interest, the replacement of a random network by an all-to-all network can lead to qualitatively different dynamical behavior. The approximation may be particularly unreliable if one is interested in neuronal oscillations and critical phenomena in the vicinity of bifurcations.

Our results in Fig. 1 show that for balanced networks and for networks slightly unbalanced towards inhibition there is bistability in random networks but not in all-to-all networks. At a larger fraction of excitatory neurons we found bistability in both networks. However, the upper metastable state in random networks comprises about half the neuronal population, whereas the equivalent state in all-to-all networks involves the whole network. Such differences may help in deciding whether a random or an all-to-all network is more appropriate to model, for example, neuronal cultures Orlandi et al. (2013).

We found that fixed points characterised by complete activation of the network ($\rho^* = 1$) are incompatible with oscillations in both random and all-to-all networks. Larger fractions of excitatory neurons in the network lead to higher activities, and consequently we observe that the region of network oscillations shrinks as we increase $g_e$. Interestingly, when we observed a region of oscillations in both network structures [see Fig. 2(a)-(d)], this region appears to be symmetrical with regard to the level of noise in all-to-all networks, but not in random networks. More importantly, oscillations may emerge due to a SNIC bifurcation or a Hopf bifurcation in random networks, whereas in all-to-all networks the oscillatory regime is bounded only by a Hopf bifurcation. Although the results in Fig. 2 may seem to suggest that network oscillations vanish in all-to-all networks when the saddle-node bifurcation emerges, that is not actually the case: further numerical analysis revealed a narrow region of parameters in which the saddle-node bifurcation coexists with network oscillations in all-to-all networks; nevertheless, the region of network oscillations remains bounded only by the Hopf bifurcation (results not presented here).

Figure 3 shows that even for parameters at which random and all-to-all networks are expected to be in similar dynamical regimes, we found significant differences, particularly when simulating the dynamics on finite systems. Whilst we found irregular fluctuations around a high activity state in random networks (see panel (c)), we observed stable full network activation in all-to-all networks (see panel (d)). Network oscillations also present distinctive shapes in random (see panel (e)) and all-to-all networks (see panel (f)).

Based on these results, we would like to stress how profoundly network structure can influence network dynamics and how unreliable it can be to replace a random network by an all-to-all network. Note that random and all-to-all networks are actually opposite extremes in regard to clustering. The clustering coefficient of an undirected Erdős–Rényi network is $C \approx c/N$, which tends to zero in the infinite-size limit Dorogovtsev and Mendes (2002). In contrast, $C = 1$ in all-to-all networks. The clustering coefficient characterises the occurrence of triangles and other motifs in a network Dorogovtsev et al. (2008). Thus, whilst motifs may be neglected in random networks, they may not in all-to-all networks. In our neuronal network there are many different motifs, since the network is directed and there are two types of nodes (excitatory and inhibitory neurons), which makes it difficult to predict how these motifs may influence the dynamics. In certain instances, an all-to-all network may be a better representation of a real neuronal network than a random network, given that large clustering coefficients have been observed in large-scale brain networks Sporns et al. (2004). At smaller scales, neurons are connected on average to thousands of other neurons in the cortex Kandel et al. (2000), while packed in minicolumns Mountcastle (2003), and are thus likely organised in dense, clustered networks. Future work should aim to clarify how increasing network density impacts emerging neuronal dynamics and which network topology better represents real neuronal networks.
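To make the clustering contrast concrete, the sketch below estimates the average clustering coefficient of an Erdős–Rényi graph, where it is approximately $c/N$, and of a complete graph, where it is exactly 1. The graph sizes are small and purely illustrative:

```python
import random

def clustering(adj):
    """Average clustering coefficient of an undirected graph,
    given as one adjacency set per node."""
    total, n = 0.0, len(adj)
    for v in range(n):
        nbrs = list(adj[v])
        k = len(nbrs)
        if k < 2:
            continue  # coefficient undefined; count as zero
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / n

def er_graph(n, c, seed=1):
    """Erdos-Renyi G(n, p) with mean degree c, i.e. p = c/(n-1)."""
    rng = random.Random(seed)
    p = c / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def complete_graph(n):
    return [set(range(n)) - {v} for v in range(n)]
```

For the Erdős–Rényi graph the measured coefficient stays near $p = c/(N-1)$, vanishing as the network grows, while the complete graph saturates at 1.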

## VI Acknowledgements

This work was partially supported by FET IP Project MULTIPLEX 317532. A.V.G. is grateful to LA I3N for Grant No. PEST UID/CTM/50025/2013. M.A.L. acknowledges the financial support of the Medical Research Council (MRC) via grant MR/K013998/1.

## References

- Brunel and Hakim (1999) N. Brunel and V. Hakim, Neural Comput. 11, 1621 (1999).
- Koulakov et al. (2002) A. A. Koulakov, S. Raghavachari, A. Kepecs, and J. E. Lisman, Nat. Neurosci. 5, 775 (2002).
- Börgers and Kopell (2003) C. Börgers and N. Kopell, Neural Comput. 15, 509 (2003).
- Izhikevich (2003) E. M. Izhikevich, IEEE Trans. Neural Netw. 14, 1569 (2003).
- Yuste (2015) R. Yuste, Nat. Rev. Neurosci. 16, 487 (2015).
- Wang et al. (1995) X. Wang, D. Golomb, and J. Rinzel, Proc. Natl. Acad. Sci. USA 92, 5577 (1995).
- Golomb and Hansel (2000) D. Golomb and D. Hansel, Neural Comput. 12, 1095 (2000).
- Hansel and Mato (2001) D. Hansel and G. Mato, Phys. Rev. Lett. 86, 4175 (2001).
- Wilson and Cowan (1973) H. R. Wilson and J. D. Cowan, Kybernetik 13, 55 (1973).
- Destexhe and Sejnowski (2009) A. Destexhe and T. J. Sejnowski, Biol. Cybern. 101, 1 (2009).
- Dorogovtsev et al. (2002) S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. E 66, 016104 (2002).
- Dorogovtsev et al. (2008) S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Rev. Mod. Phys. 80, 1275 (2008).
- Lopes et al. (2016) M. Lopes, E. Lopes, S. Yoon, J. Mendes, and A. Goltsev, Phys. Rev. E 94, 012308 (2016).
- Giuraniuc et al. (2006) C. Giuraniuc, J. Hatchett, J. Indekeu, M. Leone, I. P. Castillo, B. Van Schaeybroeck, and C. Vanderzande, Phys. Rev. E 74, 036108 (2006).
- Goltsev et al. (2010) A. V. Goltsev, F. V. de Abreu, S. N. Dorogovtsev, and J. F. F. Mendes, Phys. Rev. E 81, 061921 (2010).
- Holstein et al. (2013) D. Holstein, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. E 87, 032717 (2013).
- Lee et al. (2014) K.-E. Lee, M. A. Lopes, J. F. F. Mendes, and A. V. Goltsev, Phys. Rev. E 89, 012701 (2014).
- Lopes et al. (2014) M. A. Lopes, K.-E. Lee, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. E 90, 052709 (2014).
- Lopes et al. (2017) M. A. Lopes, K.-E. Lee, and A. V. Goltsev, Phys. Rev. E 96, 062412 (2017).
- Faisal et al. (2008) A. Faisal, L. Selen, and D. M. Wolpert, Nat. Rev. Neurosci. 9, 292 (2008).
- Abramowitz and Stegun (1970) M. Abramowitz and I. A. Stegun, Handbook of mathematical functions: with formulas, graphs, and mathematical tables (Courier Dover Publications, Washington, D.C., 1970).
- Strogatz (1994) S. H. Strogatz, Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, And Engineering (Perseus Books Group, New York, 1994).
- Orlandi et al. (2013) J. G. Orlandi, J. Soriano, E. Alvarez-Lacalle, S. Teller, and J. Casademunt, Nat. Phys. 9, 582 (2013).
- Dorogovtsev and Mendes (2002) S. N. Dorogovtsev and J. F. F. Mendes, Adv. Phys. 51, 1079 (2002).
- Sporns et al. (2004) O. Sporns, D. R. Chialvo, M. Kaiser, and C. C. Hilgetag, Trends Cogn. Sci. 8, 418 (2004).
- Kandel et al. (2000) E. R. Kandel, J. H. Schwartz, and T. M. Jessell, Principles of neural science (McGraw-Hill, New York, 2000).
- Mountcastle (2003) V. B. Mountcastle, Cereb. Cortex 13, 2 (2003).