
Stationary-State Statistics of a Binary Neural Network Model with Quenched Disorder

Diego Fasoli, Stefano Panzeri

1 Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, 38068 Rovereto, Italy

Corresponding Author. E-mail: diego.fasoli@iit.it

Abstract

We study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derived semi-analytical expressions of the occurrence probability of the stationary states and the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, which can be calculated in terms of the permanent of special matrices, according to extreme value theory. While our semi-analytical expressions are exact for any size of the network and for any distribution of the synaptic connections, we also specialized our calculations to the case of statistically-homogeneous multi-population networks. In the specific case of this network topology, we calculated analytically the permanent, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. To conclude, by applying the Fisher-Tippett-Gnedenko theorem, we derived asymptotic expressions of the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code.

Keywords: stationary states, multistability, binary neural network, quenched disorder, bifurcations, extreme value theory, order statistics, matrix permanent, Fisher-Tippett-Gnedenko theorem, Gumbel distribution

1 Introduction

Biological networks are typically characterized by highly heterogeneous synaptic connections. Variability in the biophysical properties of the synapses has been observed experimentally between different types of synapses, as well as within a given synaptic type [36]. Among these properties, the synaptic weights represent the dynamically regulated strength of connections between pairs of neurons, which quantify the influence the firing of one neuron has on another. The magnitude of the synaptic weights is determined by several variable factors, which in the case, for example, of chemical synapses, include the size of the available vesicle pool, the release probability of neurotransmitters into the synaptic cleft, the amount of neurotransmitter that can be absorbed in the postsynaptic neuron, the efficiency of neurotransmitter replenishment, etc. [17, 51, 41, 36, 11]. These factors differ from synapse to synapse, causing heterogeneity in the magnitude of the synaptic weights. An additional source of variability in biological networks is represented by the number of connections made by the axon terminals to the dendrites of post-synaptic neurons, which affects both the magnitude of the synaptic weights and the local topology of neural circuits [11]. Heterogeneity is an intrinsic network property [41], which likely plays an important functional role in preventing neurological disorders [44].

Biologically realistic neural network models should capture most of the common features of real networks. In particular, synaptic heterogeneity is typically described by assuming that the synaptic connections can be modeled by random variables with some known distribution. Under this assumption, the standard deviation of the distribution can be interpreted as a measure of synaptic heterogeneity. So far, only a few papers have investigated the dynamical properties of neural networks with random synaptic connections, see e.g. [47, 13, 21, 29, 12]. These papers focused on the special case of fully-connected networks of graded-rate neurons in the thermodynamic limit. The neurons in these models are all-to-all connected with unit probability, namely the network topology is deterministic, while the strength of the synaptic connections is normally distributed. The synaptic weights are described by frozen random variables which do not evolve over time; these models are therefore said to present quenched disorder.

In condensed matter physics, quenched disorder describes mathematically the highly irregular atomic bond structure of amorphous solids known as spin glasses [46, 30, 15, 39, 38, 45]. Spin models are characterized by discrete output; therefore, the powerful statistical approaches developed for studying spin glasses can be adapted to the investigation of networks of binary-rate neurons in the thermodynamic limit. Typically, the synaptic weights in these models are chosen according to the Hebbian rule, so that the systems behave as attractor neural networks, which can be designed for storing and retrieving desired patterns of neural activity (see [14] and references therein).

In order to obtain statistically representative and therefore physically relevant results in mathematical models with quenched disorder, one needs to average physical observables over the variability of the connections between the network units [45]. In other words, one starts by generating several copies or repetitions of the network, where the weights and/or the topology of the connections are generated randomly from given probability distributions. Each copy has a different set of frozen connections, which affects the values of the physical observables. Therefore the physical observables present a probability distribution over the set of connections. Since measurements of macroscopic physical observables are dominated by their mean values [45], one typically deals with the difficult problem of calculating averages over the distribution of the connections.

In [19] we introduced optimized algorithms for investigating changes in the long-time dynamics of arbitrary-size recurrent networks, composed of binary-rate neurons that evolve in discrete time steps. These changes of dynamics are known in mathematical terms as bifurcations [32]. In particular, in [19] we studied changes in the number of stationary network states and the formation of neural oscillations, elicited by variations in the external stimuli to the network. The synaptic weights and the topology of the network were arbitrary (possibly random and asymmetric), and we studied the bifurcation structure of single network realizations without averaging it over several copies of the model.

In the present paper we extended the mathematical formalism and the algorithms introduced in [19], by deriving a complete semi-analytical description of the statistical properties of the long-time network states and of the corresponding bifurcations, across network realizations. For simplicity, we focused on the mathematical characterization of the stationary states of random networks, while the more difficult case of neural oscillations will be discussed briefly at the end of the paper. Unlike spin-glass theory [46, 30, 15, 39, 38, 45], we extended our work beyond the calculation of expected values, by deriving complete probability distributions. Moreover, unlike previous work on random networks of graded neurons [47, 13, 21, 29, 12], which focused on fully-connected models with normally distributed weights in the thermodynamic limit, we investigated the more complicated case of arbitrary-size networks, with arbitrary distributions of the synaptic weights and of the topology of the connections. Furthermore, unlike spin-glass models of attractor neural networks [14], we did not consider specifically the problem of storing/retrieving desired sequences of neural activity patterns. Rather, we determined the fixed-point attractors, namely the stationary solutions of the network equations, that are generated by a given arbitrary distribution of the synaptic connections (i.e. by connections that are not necessarily designed to store and retrieve desired patterns).

The range of network architectures that can be studied with our formalism is wide, and includes networks whose size, number of neural populations, synaptic weights distribution, topology and sparseness of the connections are arbitrary. For this reason, performing a detailed analysis of how all these factors affect the statistical properties of the stationary states and of their bifurcation points is not feasible. Rather, the purpose of this paper is to introduce the mathematical formalism that allows the statistical analysis of these networks in the long-time regime, and to show the results it produces when applied to some examples of neural network architectures.

Paper Outline: In Sec. (2) we introduced the binary neural network model (SubSec. (2.1)) and we studied semi-analytically the statistical properties of the stationary states and of their bifurcation points, provided the probability distribution of the synaptic connections is known (SubSec. (2.2)). In particular, in SubSec. (2.2.1) we calculated the probability distribution of the bifurcation points in the stimuli space in terms of the permanent of special matrices, while in SubSec. (2.2.2) we used this result to derive the mean multistability diagram of the model. In SubSecs. (2.2.3) and (2.2.4), we calculated the probability that a given firing-rate state is stationary, for a fixed combination of stimuli and regardless of the value of the stimuli, respectively. In SubSec. (2.3) we specialized to the case of statistically-homogeneous multi-population networks of arbitrary size, and we derived a compact expression of the matrix permanent, while in SubSec. (2.4) we calculated the asymptotic form of the stationary-state statistics in the large-size limit. In SubSec. (2.5) we described the Monte Carlo techniques that we used for estimating numerically the statistical properties of the stationary states and of their bifurcation points. In Sec. (3) we used these numerical results to further validate our semi-analytical formulas, by studying specific examples of network topologies. In Sec. (4) we discussed the importance of our results, by comparing our approach with previous work on random neural network models (SubSec. (4.1)). We also discussed the limitations of our technique (SubSec. (4.2)), and the possibility to extend our work to the study of more complicated kinds of neural dynamics (SubSec. (4.3)). To conclude, in the supplemental Python script "Statistics.py", we implemented the comparison between our semi-analytical and numerical results for arbitrary-size networks. Then, in the script "Permanent.py", we implemented the comparison between the formula of the permanent of statistically-homogeneous networks, that we introduced in SubSec. (2.3), and the Balasubramanian-Bax-Franklin-Glynn algorithm.

2 Materials and Methods

2.1 The Network Model

We studied a recurrent neural network model composed of $N$ binary neurons, whose state evolves in discrete time steps. The firing rate of the $i$th neuron at time $t$ is represented by the binary variable $\nu_i(t) \in \{0, 1\}$, so that $\nu_i(t) = 0$ if the neuron is not firing at time $t$, and $\nu_i(t) = 1$ if it is firing at the maximum rate. We also defined $\boldsymbol{\nu}(t) = \left[\nu_0(t), \dots, \nu_{N-1}(t)\right]^T$, namely the vector containing the firing rates of all the neurons at time $t$.

If the neurons respond synchronously to the local fields $\mathfrak{A}_i(t)$, the firing rates at the time instant $t+1$ are updated according to the following activity-based equation (see e.g. [19]):

$\nu_i(t+1) = \mathcal{H}\left(\mathfrak{A}_i(t)\right), \qquad \mathfrak{A}_i(t) = \sum_{j=0}^{N-1} J_{ij}\, \nu_j(t) + I_i, \qquad i = 0, \dots, N-1 \qquad (1)$

As we anticipated, $N$ is the number of neurons in the network, which in this work is supposed to be finite. Moreover, $I_i$ is a deterministic external input (i.e. an afferent stimulus) to the $i$th neuron, while $\mathcal{H}\left(\cdot\right)$ is the Heaviside step function:

$\mathcal{H}(x) = \begin{cases} 0, & x < \theta_i \\ 1, & x \geq \theta_i \end{cases} \qquad (2)$

with deterministic firing threshold $\theta_i$.

$J_{ij}$ is the (generally asymmetric) $ij$th entry of the synaptic connectivity matrix $J$, and represents the weight of the random and time-independent synaptic connection from the $j$th (presynaptic) neuron to the $i$th (postsynaptic) neuron. The randomness of the synaptic weights is quenched, therefore a "frozen" connectivity matrix is randomly generated at every realization of the network, according to the following formula:

$J_{ij} = T_{ij}\, W_{ij}, \qquad i, j = 0, \dots, N-1 \qquad (3)$

In Eq. (3), $T_{ij}$ is the $ij$th entry of the topology matrix $T$, so that $T_{ij} = 0$ if there is no synaptic connection from the $j$th neuron to the $i$th neuron, and $T_{ij} = 1$ if the connection is present. The topology of the network is generally random and asymmetric, and it depends on the (arbitrary) entries of the connection probability matrix $P$. In particular, we supposed that $T_{ij} = 1$ with probability $P_{ij}$, while $T_{ij} = 0$ with probability $1 - P_{ij}$. Moreover, in Eq. (3) also the terms $W_{ij}$ are (generally asymmetric and non-identically distributed) random variables, distributed according to marginal probability distributions $p_{W_{ij}}$ (for simplicity, here we focused on continuous distributions, however our calculations can be extended to discrete random variables, if desired). In order to ensure the mathematical tractability of the model, we supposed that the terms $W_{ij}$ are statistically independent from each other and from the variables $T_{ij}$, and that the variables $T_{ij}$ are independent too.

On the other hand, if the neurons respond asynchronously to the local fields, at each time instant only a single, randomly drawn neuron undergoes an update (see [19]):

$\nu_i(t+1) = \begin{cases} \mathcal{H}\left(\mathfrak{A}_i(t)\right), & i = i_t \\ \nu_i(t), & i \neq i_t \end{cases} \qquad (4)$

where $i_t$ is the index of the randomly drawn neuron, and the local field $\mathfrak{A}_i(t)$ is defined as in Eq. (1). The results that we derived in this paper are valid for both kinds of network updates, since they generate identical bifurcation diagrams of the stationary states for a given set of network parameters, as we proved in [19].

For simplicity, from now on we will represent the vector $\boldsymbol{\nu}(t)$ by the binary string $\nu_0(t)\nu_1(t)\cdots\nu_{N-1}(t)$, obtained by concatenating the firing rates at time $t$. For example, in a network composed of $N = 3$ neurons, a vector such as $\boldsymbol{\nu} = [0, 1, 1]^T$ will be represented by the string $011$ (in this notation, no multiplication is intended between the bits).
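To make the model concrete, the following minimal Python sketch generates one quenched realization of the connectivity according to Eq. (3) and iterates the synchronous update of Eq. (1) until a stationary state is reached. All parameter values, and the choice of normally distributed weights, are illustrative assumptions rather than prescriptions of the model.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 8                                   # network size (illustrative)
    P = np.full((N, N), 0.5)                # connection probability matrix
    theta = np.zeros(N)                     # firing thresholds
    I_ext = rng.normal(0.0, 1.0, size=N)    # external stimuli

    # Quenched disorder, Eq. (3): J_ij = T_ij * W_ij, frozen for this realization
    T = (rng.random((N, N)) < P).astype(float)   # random topology matrix
    W = rng.normal(0.0, 1.0, size=(N, N))        # synaptic weights (normal, as an example)
    J = T * W

    def sync_update(nu):
        """Synchronous update, Eqs. (1)-(2): fire iff the local field reaches threshold."""
        local_field = J @ nu + I_ext
        return (local_field >= theta).astype(int)

    nu = rng.integers(0, 2, size=N)         # random initial firing-rate state
    for _ in range(100):
        nu_next = sync_update(nu)
        if np.array_equal(nu_next, nu):     # reached a stationary state
            break
        nu = nu_next

    print("final state:", "".join(map(str, nu)))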

2.2 Statistical Properties of the Network Model

In this paper, we focused on the calculation of the statistical properties of the stationary solutions of Eqs. (1) and (4), provided the probability distribution of the entries $J_{ij}$ of the connectivity matrix is known. Therefore we did not consider the problem of storing/retrieving desired sequences of neural activity patterns.

Our formalism is based on a branch of statistics known as extreme value theory [16], which deals with the extreme deviations from the median of probability distributions. Formally, extreme deviations are described by the minimum and maximum of a set of random variables, which correspond, respectively, to the smallest and largest order statistics of that set [49, 5, 4, 26].

In this section, we derived semi-analytical formulas of the probability distribution of the bifurcation points of the stationary states (SubSec. (2.2.1)), of their mean multistability diagram (SubSec. (2.2.2)), of the probability that a state is stationary for a given combination of stimuli (SubSec. (2.2.3)), and of the probability that a state is stationary regardless of the stimuli (SubSec. (2.2.4)). We implemented these formulas in the "Semi-Analytical Calculations" section of the supplemental Python script "Statistics.py". Note that our formulas are semi-analytical, in that they are expressed in terms of 1D definite integrals containing the arbitrary probability distributions $p_{W_{ij}}$. In the Python script, these integrals are calculated through numerical integration schemes. However, note that generally the integrals may be calculated through analytical approximations, while for some distributions exact formulas may also exist, providing a fully-analytical description of the statistical properties of the stationary states.

2.2.1 Probability Distribution of the Bifurcation Points

The multistability diagram of the network provides a complete picture of the relationship between the stationary solutions of the network and a set of network parameters. In particular, in [19] we studied how the degree of multistability (namely the number of stationary solutions) of the firing-rate states and their symmetry depend on the stimuli $I_i$, which represent our bifurcation parameters. Given a network with $\mathfrak{n}$ distinct input currents (i.e. $I_i \in \left\{\mathcal{I}_0, \dots, \mathcal{I}_{\mathfrak{n}-1}\right\}$), we defined $\mathfrak{P}_\alpha$ to be the set of neurons that share the same external current $\mathcal{I}_\alpha$ (namely $I_i = \mathcal{I}_\alpha$ $\forall i \in \mathfrak{P}_\alpha$), while $\mathfrak{P}_\alpha \cap \mathfrak{P}_\beta = \emptyset$, for $\alpha \neq \beta$. Then in [19] we proved that a given firing-rate state $\boldsymbol{\nu}$ is a stationary solution of the network equations (1) and (4) for every combination of stimuli $\left(\mathcal{I}_0, \dots, \mathcal{I}_{\mathfrak{n}-1}\right) \in \mathcal{V}_{\boldsymbol{\nu}}$, where:

$\mathcal{V}_{\boldsymbol{\nu}} = \bigtimes_{\alpha=0}^{\mathfrak{n}-1} \left[\Lambda_\alpha^{\boldsymbol{\nu}}, \Xi_\alpha^{\boldsymbol{\nu}}\right), \qquad \Lambda_\alpha^{\boldsymbol{\nu}} = \max_{i \in \mathfrak{P}_\alpha :\, \nu_i = 1} \left(\theta_i - \mathfrak{S}_i^{\boldsymbol{\nu}}\right), \qquad \Xi_\alpha^{\boldsymbol{\nu}} = \min_{i \in \mathfrak{P}_\alpha :\, \nu_i = 0} \left(\theta_i - \mathfrak{S}_i^{\boldsymbol{\nu}}\right), \qquad \mathfrak{S}_i^{\boldsymbol{\nu}} = \sum_{j=0}^{N-1} J_{ij}\, \nu_j \qquad (5)$

(with the conventions $\Lambda_\alpha^{\boldsymbol{\nu}} = -\infty$ if $\nu_i = 0$ $\forall i \in \mathfrak{P}_\alpha$, and $\Xi_\alpha^{\boldsymbol{\nu}} = +\infty$ if $\nu_i = 1$ $\forall i \in \mathfrak{P}_\alpha$).

By calculating the hyperrectangles $\mathcal{V}_{\boldsymbol{\nu}}$ for every $\boldsymbol{\nu} \in \{0, 1\}^N$, we obtain a complete picture of the relationship between the stationary states and the set of stimuli. If the hyperrectangles corresponding to $\mathcal{M}$ different states overlap, the overlapping region has multistability degree $\mathcal{M}$ (i.e. for combinations of stimuli lying in that region, the network has $\mathcal{M}$ distinct stationary states). A stationary state loses its stability at the boundaries $\mathcal{I}_\alpha = \Lambda_\alpha^{\boldsymbol{\nu}}$ and $\mathcal{I}_\alpha = \Xi_\alpha^{\boldsymbol{\nu}}$, turning into another stationary state or an oscillation. Therefore $\Lambda_\alpha^{\boldsymbol{\nu}}$ and $\Xi_\alpha^{\boldsymbol{\nu}}$ represent the bifurcation points of the stationary solution $\boldsymbol{\nu}$ of Eqs. (1) and (4).
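As a concrete illustration of Eq. (5), the sketch below computes, for a single quenched realization, the interval $\left[\Lambda_\alpha^{\boldsymbol{\nu}}, \Xi_\alpha^{\boldsymbol{\nu}}\right)$ of each stimulus group for a candidate state. The grouping of the neurons and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    N = 6
    J = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.5)
    theta = np.ones(N)
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # sets P_alpha sharing I_alpha

    def stationarity_box(nu):
        """Per-group bounds [Lambda_alpha, Xi_alpha) of Eq. (5); nu is stationary
        for every stimulus combination with Lambda_alpha <= I_alpha < Xi_alpha."""
        x = theta - J @ nu          # X_i = theta_i - S_i^nu
        boxes = []
        for idx in groups:
            active, inactive = idx[nu[idx] == 1], idx[nu[idx] == 0]
            lam = x[active].max() if active.size else -np.inf
            xi = x[inactive].min() if inactive.size else np.inf
            boxes.append((lam, xi))
        return boxes

    nu = np.array([1, 0, 1, 0, 0, 1])
    for a, (lam, xi) in enumerate(stationarity_box(nu)):
        if lam < xi:
            print(f"group {a}: stationary for {lam:.3f} <= I_{a} < {xi:.3f}")
        else:
            print(f"group {a}: empty interval (state never stationary)")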

In what follows, we derived the probability density functions of the bifurcation points (note that, for simplicity, from now on we will omit the superscript $\boldsymbol{\nu}$ in all the formulas). Since the random variables $X_i \overset{\mathrm{def}}{=} \theta_i - \mathfrak{S}_i$ are independent for a given firing-rate state (as a consequence of the independence of the synaptic weights $J_{ij}$), if we call $\operatorname{per}\left(\cdot\right)$ the matrix permanent and $F_{X_i}$ the cumulative distribution function of $X_i$, then from Eq. (5) we obtain that $\Lambda_\alpha$ and $\Xi_\alpha$ are distributed according to the following order statistics [49, 5, 4, 26]:

$p_{\Lambda_\alpha}(x) = \frac{1}{\left(\mathfrak{a}_\alpha - 1\right)!} \operatorname{per}\left[\mathbf{F}(x) \;\; \mathbf{f}(x)\right], \qquad p_{\Xi_\alpha}(x) = \frac{1}{\left(\mathfrak{b}_\alpha - 1\right)!} \operatorname{per}\left[\mathbf{g}(x) \;\; \mathbb{1} - \mathbf{G}(x)\right] \qquad (6)$

where $\mathfrak{a}_\alpha$ and $\mathfrak{b}_\alpha$ are the numbers of active and inactive neurons in $\mathfrak{P}_\alpha$, while $\mathbf{F}(x)$ and $\mathbf{G}(x)$ are $\mathfrak{a}_\alpha \times \left(\mathfrak{a}_\alpha - 1\right)$ and $\mathfrak{b}_\alpha \times \left(\mathfrak{b}_\alpha - 1\right)$ matrices respectively, whose identical columns collect the cumulative distribution functions $F_{X_i}(x)$ of the active (respectively inactive) neurons in $\mathfrak{P}_\alpha$; $\mathbf{f}(x)$ and $\mathbf{g}(x)$ are the corresponding column vectors of the probability densities $p_{X_i}(x)$, and $\mathbb{1}$ is the all-ones matrix. Note that in the supplemental Python script "Statistics.py" the permanent is calculated by means of the Balasubramanian-Bax-Franklin-Glynn (BBFG) formula, which is the fastest known algorithm for the numerical calculation of the permanent of arbitrary matrices [3, 7, 6, 23].
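For reference, a compact (unoptimized) Python implementation of the BBFG formula; production code would enumerate the sign vectors in Gray-code order to reuse partial sums, which we omit here for clarity:

    import numpy as np
    from itertools import product

    def permanent_bbfg(A):
        """Permanent of a square matrix via the BBFG (Glynn) formula, O(2^n n)."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        total = 0.0
        # sum over sign vectors d with the first component fixed to +1
        for signs in product((1.0, -1.0), repeat=n - 1):
            d = np.array((1.0,) + signs)
            total += np.prod(d) * np.prod(d @ A)
        return total / 2.0 ** (n - 1)

    # sanity check against the 2x2 permanent: per([[a, b], [c, d]]) = a*d + b*c
    print(permanent_bbfg([[1.0, 2.0], [3.0, 4.0]]))   # -> 10.0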

According to Eq. (6), in order to complete the derivation of the probability densities $p_{\Lambda_\alpha}$ and $p_{\Xi_\alpha}$, we need to evaluate the probability density $p_{X_i}$ and the cumulative distribution function $F_{X_i}$ of $X_i$. By defining $X_i \overset{\mathrm{def}}{=} \theta_i - \mathfrak{S}_i$ and $\mathfrak{S}_i \overset{\mathrm{def}}{=} \sum_{j=0}^{N-1} J_{ij}\, \nu_j$, from the definition of $\mathcal{V}_{\boldsymbol{\nu}}$ in Eq. (5) it follows that:

$p_{X_i}(x) = p_{\mathfrak{S}_i}\left(\theta_i - x\right) \qquad (7)$

Since the synaptic weights are independent by hypothesis, the probability distribution of $\mathfrak{S}_i$ can be calculated through the convolution formula:

$p_{\mathfrak{S}_i} = \mathop{*}_{j :\, \nu_j = 1} p_{J_{ij}} \qquad (8)$

namely the convolution of the densities of the weights of the active presynaptic neurons.

According to Eq. (3):

$p_{J_{ij}}(y) = \left(1 - P_{ij}\right) \delta(y) + P_{ij}\, p_{W_{ij}}(y) \qquad (9)$

where $\delta\left(\cdot\right)$ is the Dirac delta function. Then, from Eqs. (8) and (9), it follows that:

$p_{\mathfrak{S}_i}(y) = \sum_{\Omega \in \wp\left(A_{\boldsymbol{\nu}}\right)} \left[\prod_{j \in \Omega} P_{ij}\right] \left[\prod_{j \in A_{\boldsymbol{\nu}} \setminus \Omega} \left(1 - P_{ij}\right)\right] \left(\mathop{*}_{j \in \Omega} p_{W_{ij}}\right)(y) \qquad (10)$

where $\wp\left(A_{\boldsymbol{\nu}}\right)$ represents the power set of $A_{\boldsymbol{\nu}} \overset{\mathrm{def}}{=} \left\{j : \nu_j = 1\right\}$ (with the convention that the empty convolution, corresponding to $\Omega = \emptyset$, equals $\delta(y)$). Finally, we obtain $p_{X_i}$ from Eqs. (7) and (10), while the corresponding cumulative distribution function is:

$F_{X_i}(x) = \int_{-\infty}^{x} p_{X_i}(y)\, dy = \int_{\theta_i - x}^{+\infty} p_{\mathfrak{S}_i}(y)\, dy \qquad (11)$

Note that the definite integral in Eq. (11) depends on the probability distributions $p_{W_{ij}}$, which are arbitrary. For this reason, we did not provide any analytical expression of this integral, though exact formulas exist for specific distributions, e.g. when the weights $W_{ij}$ are normally distributed (in the supplemental Python script "Statistics.py", the distribution $p_{W_{ij}}$ is defined by the user, and the integrals are calculated by means of numerical integration schemes).
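As an illustration, when the weights are i.i.d. normal and the connection probability is uniform, the power-set sum of Eq. (10) collapses into a binomial mixture over the number of realized connections, and Eq. (11) can be evaluated up to Gaussian integrals. A sketch under these simplifying assumptions (all parameter values illustrative), cross-checked by Monte Carlo:

    import numpy as np
    from scipy.stats import binom, norm

    # Illustrative assumptions: i.i.d. weights W ~ N(mu, sigma^2), uniform connection
    # probability p, k active presynaptic neurons, and firing threshold theta.
    mu, sigma, p, k, theta = 0.5, 1.0, 0.3, 10, 1.0

    def cdf_X(x):
        """F_X(x) = P(theta - S <= x) = P(S >= theta - x), where S is the sum of the
        weights of the m realized connections, with m ~ Binomial(k, p) (cf. Eq. (10))."""
        total = binom.pmf(0, k, p) * float(theta - x <= 0.0)   # atom S = 0 when m = 0
        for m in range(1, k + 1):
            # conditional on m realized connections, S ~ N(m * mu, m * sigma^2)
            total += binom.pmf(m, k, p) * norm.sf(theta - x, loc=m * mu,
                                                  scale=sigma * np.sqrt(m))
        return total

    # Monte Carlo cross-check
    rng = np.random.default_rng(2)
    S = np.array([rng.normal(mu, sigma, rng.binomial(k, p)).sum()
                  for _ in range(100_000)])
    print("semi-analytical:", cdf_X(0.2))
    print("Monte Carlo    :", np.mean(theta - S <= 0.2))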

Now we have all the ingredients for calculating the probability distributions of the bifurcation points from Eq. (6). Note, however, that this formula cannot be used in its current form, because it involves the ill-defined product between the Dirac delta function (which is contained in the probability density $p_{X_i}$, see Eqs. (7), (9) and (10)) and a discontinuous function (i.e. the cumulative distribution function $F_{X_i}$, which jumps at the discrete values of $X_i$). To see that this product is generally ill-defined, consider for example the probability density of $\Lambda_\alpha$, in the case when the $X_i$ are $\mathfrak{a}_\alpha$ independent and identically distributed random variables:

$p_{\Lambda_\alpha}(x) = \mathfrak{a}_\alpha\, p_X(x)\, F_X^{\mathfrak{a}_\alpha - 1}(x) \qquad (12)$

If $F_X$ is absolutely continuous, we can prove that $p_{\Lambda_\alpha}$, as given by Eq. (12), is correctly normalized:

$\int_{-\infty}^{+\infty} p_{\Lambda_\alpha}(x)\, dx = \int_{-\infty}^{+\infty} \mathfrak{a}_\alpha\, p_X(x)\, F_X^{\mathfrak{a}_\alpha - 1}(x)\, dx = \left[F_X^{\mathfrak{a}_\alpha}(x)\right]_{-\infty}^{+\infty} = 1$

However, in the limit of a discrete random variable $X$ (e.g. $p_X(x) = \delta\left(x - x_0\right)$), the cumulative distribution function is discontinuous at $x = x_0$, and we get:

$p_{\Lambda_\alpha}(x) = \mathfrak{a}_\alpha\, \delta\left(x - x_0\right) H^{\mathfrak{a}_\alpha - 1}\left(x - x_0\right)$

where $H$ is the (zero-threshold) Heaviside step function. If now we attempt to apply the famous formula:

$\delta(x)\, H(x) = \frac{1}{2}\, \delta(x)$

we get that the integral of $p_{\Lambda_\alpha}$ equals $\mathfrak{a}_\alpha / 2^{\mathfrak{a}_\alpha - 1}$, which generally differs from $1$, therefore $p_{\Lambda_\alpha}$ is not properly normalized. The same problem occurs for $p_{\Xi_\alpha}$.

To fix it, consider the general case when a random variable $Z$ is a mixture of continuous and discrete components. Its probability density can be decomposed as follows:

$p_Z(z) = \widetilde{p}_Z(z) + \sum_{z_0 \in \mathcal{D}_Z} P\left(Z = z_0\right) \delta\left(z - z_0\right) \qquad (13)$

In Eq. (13), $\widetilde{p}_Z$ is the component of $p_Z$ that describes the statistical behavior of the continuous values of $Z$. Moreover, $\mathcal{D}_Z$ represents the set of the discrete values of $Z$, at which the cumulative distribution function $F_Z$ is (possibly) discontinuous. In the specific case when $Z = \Lambda_\alpha$ or $Z = \Xi_\alpha$, by comparing Eq. (13) with Eqs. (6), (7) and (10), we get:

$\widetilde{p}_{\Lambda_\alpha}(x) = \frac{1}{\left(\mathfrak{a}_\alpha - 1\right)!} \operatorname{per}\left[\mathbf{F}(x) \;\; \widetilde{\mathbf{f}}(x)\right], \qquad \widetilde{p}_{\Xi_\alpha}(x) = \frac{1}{\left(\mathfrak{b}_\alpha - 1\right)!} \operatorname{per}\left[\widetilde{\mathbf{g}}(x) \;\; \mathbb{1} - \mathbf{G}(x)\right] \qquad (14)$

where $\widetilde{\mathbf{f}}(x)$ (respectively $\widetilde{\mathbf{g}}(x)$) is the column vector of the continuous components $\widetilde{p}_{X_i}(x)$ of the active (respectively inactive) neurons in $\mathfrak{P}_\alpha$. Moreover, $\mathcal{D}_Z = \left\{\theta_i : i \in \mathfrak{P}_\alpha,\ \nu_i = 1\right\}$ and $\mathcal{D}_Z = \left\{\theta_i : i \in \mathfrak{P}_\alpha,\ \nu_i = 0\right\}$ for $Z = \Lambda_\alpha$ and $Z = \Xi_\alpha$ respectively, since the discrete component of each $X_i$ is concentrated at $\theta_i$, the value attained when the $i$th neuron receives no realized connection from the active neurons. According to Eq. (13), we need also to evaluate the cumulative distribution functions of $\Lambda_\alpha$ and $\Xi_\alpha$. By following [4], we get:

$F_{\Lambda_\alpha}(x) = \frac{1}{\mathfrak{a}_\alpha!} \operatorname{per}\left[\widehat{\mathbf{F}}(x)\right] = \prod_{i \in \mathfrak{P}_\alpha :\, \nu_i = 1} F_{X_i}(x), \qquad F_{\Xi_\alpha}(x) = 1 - \frac{1}{\mathfrak{b}_\alpha!} \operatorname{per}\left[\mathbb{1} - \widehat{\mathbf{G}}(x)\right] = 1 - \prod_{i \in \mathfrak{P}_\alpha :\, \nu_i = 0} \left[1 - F_{X_i}(x)\right] \qquad (15)$

where $\widehat{\mathbf{F}}(x)$ and $\widehat{\mathbf{G}}(x)$ are the $\mathfrak{a}_\alpha \times \mathfrak{a}_\alpha$ and $\mathfrak{b}_\alpha \times \mathfrak{b}_\alpha$ square counterparts of the matrices $\mathbf{F}(x)$ and $\mathbf{G}(x)$.
We observe that Eqs. (13), (14) and (15) no longer depend on the ill-defined product between the Dirac delta distribution and the Heaviside step function. These formulas will be used in the next subsections to calculate the mean multistability diagram of the network (SubSec. (2.2.2)), the probability that a firing-rate state is stationary for a given combination of stimuli (SubSec. (2.2.3)), and the probability that a state is stationary regardless of the stimuli (SubSec. (2.2.4)).

2.2.2 Mean Multistability Diagram

The mean multistability diagram is the plot of the bifurcation points $\left\langle \Lambda_\alpha \right\rangle$ and $\left\langle \Xi_\alpha \right\rangle$, averaged over the network realizations. The mean bifurcation points $\left\langle \Lambda_\alpha \right\rangle$ and $\left\langle \Xi_\alpha \right\rangle$ (where the brackets $\left\langle \cdot \right\rangle$ represent the statistical mean over the network realizations) correspond to the values of the stimulus $\mathcal{I}_\alpha$ at which a given firing-rate state loses its stability on average, turning into a different stationary state or an oscillatory solution. We propose two different approaches for evaluating the mean bifurcation points, which we implemented in the supplemental Python script "Statistics.py".

The first method is based on Eq. (13), from which we obtain:

$\left\langle Z \right\rangle = \int_{-\infty}^{+\infty} x\, \widetilde{p}_Z(x)\, dx + \sum_{x_0 \in \mathcal{D}_Z} x_0 \left[F_Z\left(x_0\right) - F_Z\left(x_0^-\right)\right] \qquad (16)$

for $Z = \Lambda_\alpha$ and $Z = \Xi_\alpha$. The cumulative distribution function in Eq. (16) is calculated by means of Eq. (15), while the function $\widetilde{p}_Z$ is given by Eq. (14).

The second method takes advantage of the following formula:

$\left\langle Z \right\rangle = \int_{-\infty}^{+\infty} x\, dF_Z(x) = \int_0^{+\infty} \left[1 - F_Z(x)\right] dx - \int_{-\infty}^0 F_Z(x)\, dx$

where the second equality is obtained by integrating the Lebesgue-Stieltjes integral by parts. After some algebra, in the special case $Z \in \left\{\Lambda_\alpha, \Xi_\alpha\right\}$ we get:

$\left\langle \Lambda_\alpha \right\rangle = \int_0^{+\infty} \left[1 - F_{\Lambda_\alpha}(x)\right] dx - \int_{-\infty}^0 F_{\Lambda_\alpha}(x)\, dx, \qquad \left\langle \Xi_\alpha \right\rangle = \int_0^{+\infty} \left[1 - F_{\Xi_\alpha}(x)\right] dx - \int_{-\infty}^0 F_{\Xi_\alpha}(x)\, dx \qquad (17)$

where $F_{\Lambda_\alpha}$ and $F_{\Xi_\alpha}$ are given again by Eq. (15). By running both methods in the supplemental Python script, the reader can easily check that Eqs. (16) and (17) provide identical results for $\left\langle \Lambda_\alpha \right\rangle$ and $\left\langle \Xi_\alpha \right\rangle$, apart from rounding errors.
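As a quick numerical illustration of the second method, the sketch below computes the mean of the maximum of five i.i.d. standard normal variables both via the CDF identity of Eq. (17) and by direct Monte Carlo; the Gaussian choice is an illustrative assumption.

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    a = 5                                    # number of i.i.d. variables (illustrative)
    F = lambda x: norm.cdf(x) ** a           # CDF of the maximum, cf. Eq. (15)

    # Eq. (17): <Z> = int_0^inf [1 - F(x)] dx - int_{-inf}^0 F(x) dx
    mean_cdf = quad(lambda x: 1.0 - F(x), 0.0, np.inf)[0] - quad(F, -np.inf, 0.0)[0]

    # direct Monte Carlo average over realizations
    rng = np.random.default_rng(3)
    mean_mc = rng.standard_normal((200_000, a)).max(axis=1).mean()

    print(f"Eq. (17): {mean_cdf:.4f}   Monte Carlo: {mean_mc:.4f}")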

It is important to observe that the multistability diagram shows only those stability regions for which $\left\langle \Lambda_\alpha \right\rangle < \left\langle \Xi_\alpha \right\rangle$ for every $\alpha$, because if this condition is not satisfied, the state is not stationary on average for any combination of stimuli. Moreover, beyond multistability, the diagram provides also a complete picture of the spontaneous symmetry-breaking of the stationary solutions of the firing rates. Spontaneous symmetry-breaking occurs whenever neurons in homogeneous populations fire at different rates, despite the symmetry of the underlying neural equations. We define the population function $\varrho\left(\cdot\right)$ that maps the neuron index $i$ to the index $\alpha$ of the population the neuron belongs to, so that $\varrho(i) = \alpha$. Then, in a single network realization, a population $\alpha$ is said to be homogeneous if the sum $\sum_{j=0}^{N-1} J_{ij}$, the firing threshold $\theta_i$ and the external stimulus $I_i$ do not depend on the index $i$, for every index $i$ such that $\varrho(i) = \alpha$ (see [19]). However, in the present article we studied network statistics across realizations. For this reason, the homogeneity of a neural population should be defined in a statistical sense, namely by checking whether the probability distribution of $\sum_{j=0}^{N-1} J_{ij}$ is independent of the index $i$, for every neuron $i$ in the population $\alpha$. Whenever the neurons in a population show heterogeneous firing rates even though the homogeneity condition is satisfied, we say that the symmetry of that population is spontaneously broken. In order to check whether the probability distribution of $\sum_{j=0}^{N-1} J_{ij}$ is neuron-dependent, it is possible to calculate numerically the Kullback-Leibler divergence between the distributions of all the pairs of neurons that belong to the same population $\alpha$. However, in the supplemental script "Statistics.py", we checked the statistical homogeneity of the neural populations in a simpler and computationally more efficient way, though our approach is less general than that based on the Kullback-Leibler divergence. Our method relies on the assumption that a small number of moments of $\sum_{j=0}^{N-1} J_{ij}$, for example just the mean and the variance:

$\mu_i \overset{\mathrm{def}}{=} \left\langle \sum_{j=0}^{N-1} J_{ij} \right\rangle, \qquad \sigma_i^2 \overset{\mathrm{def}}{=} \left\langle \left(\sum_{j=0}^{N-1} J_{ij} - \mu_i\right)^2 \right\rangle$

are sufficient for discriminating between the probability distributions of two random variables. In other words, we assumed that if $\mu_i \neq \mu_j$ and/or $\sigma_i^2 \neq \sigma_j^2$, then the corresponding sums are differently distributed, and therefore the neural population is statistically heterogeneous [1].

[1] Note, however, that the probability distribution of a scalar random variable with finite moments at all orders is generally not uniquely determined by its sequence of moments. It follows that there exist (rare) cases of differently distributed random variables that share the same sequence of moments. For this reason, the moments are not always sufficient for discriminating between two probability distributions. Note also that a sufficient condition for the sequence of moments to uniquely determine the random variable is that the moment generating function has a positive radius of convergence (see Thm. (30.1) in [9]).
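A minimal sketch of this moment-based check (names and tolerances are ours, not those of "Statistics.py"): estimate $\mu_i$ and $\sigma_i^2$ of the row sums across realizations, and flag a population as heterogeneous when they differ appreciably between its neurons.

    import numpy as np

    rng = np.random.default_rng(4)

    N, n_real = 6, 20_000
    pop = np.array([0, 0, 0, 1, 1, 1])     # population labels (illustrative)

    # row sums of J across quenched realizations
    R = np.empty((n_real, N))
    for r in range(n_real):
        J = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.5)
        R[r] = J.sum(axis=1)

    mu, var = R.mean(axis=0), R.var(axis=0)

    def looks_homogeneous(alpha, tol=0.1):
        """Flag population alpha as homogeneous if the estimated mean and variance
        of the row sums agree (within tol) across all of its neurons."""
        idx = np.flatnonzero(pop == alpha)
        return np.ptp(mu[idx]) < tol and np.ptp(var[idx]) < tol

    for alpha in np.unique(pop):
        print(f"population {alpha}: homogeneous = {looks_homogeneous(alpha)}")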

2.2.3 Occurrence Probability of the Stationary States for a Given Combination of Stimuli

In this subsection we calculated the probability that a given firing-rate state $\boldsymbol{\nu}$ is stationary, for a fixed combination of stimuli. According to Eq. (5), $\boldsymbol{\nu}$ is stationary for every $\mathcal{I} \overset{\mathrm{def}}{=} \left(\mathcal{I}_0, \dots, \mathcal{I}_{\mathfrak{n}-1}\right) \in \mathcal{V}_{\boldsymbol{\nu}}$. Since the boundaries of $\mathcal{V}_{\boldsymbol{\nu}}$ (namely the functions $\Lambda_\alpha$ and $\Xi_\alpha$) are random variables, it follows that the probability that the firing-rate state $\boldsymbol{\nu}$ is stationary, for a fixed combination of stimuli $\mathcal{I}$, is $P_{\mathcal{I}}\left(\boldsymbol{\nu}\right) = P\left(\mathcal{I} \in \mathcal{V}_{\boldsymbol{\nu}}\right)$. Since the variables $\Lambda_\alpha$, for a given firing rate $\boldsymbol{\nu}$ and for $\alpha = 0, \dots, \mathfrak{n}-1$, are independent (and the same holds for the variables $\Xi_\alpha$), it follows that $P\left(\mathcal{I} \in \mathcal{V}_{\boldsymbol{\nu}}\right)$ can be factored into the product of the probabilities $P\left(\Lambda_\alpha \leq \mathcal{I}_\alpha < \Xi_\alpha\right)$. In particular, whenever $\mathfrak{a}_\alpha = 0$ (i.e. no neuron in $\mathfrak{P}_\alpha$ is active), from Eq. (5) we see that $\boldsymbol{\nu}$ is stationary for every $\mathcal{I}_\alpha < \Xi_\alpha$. It follows that, in this case, $P\left(\Lambda_\alpha \leq \mathcal{I}_\alpha < \Xi_\alpha\right) = P\left(\mathcal{I}_\alpha < \Xi_\alpha\right) = 1 - F_{\Xi_\alpha}\left(\mathcal{I}_\alpha\right)$. On the other hand, whenever $\mathfrak{b}_\alpha = 0$, the state is stationary for every $\mathcal{I}_\alpha \geq \Lambda_\alpha$, so that $P\left(\Lambda_\alpha \leq \mathcal{I}_\alpha < \Xi_\alpha\right) = F_{\Lambda_\alpha}\left(\mathcal{I}_\alpha\right)$. In all the other cases, $\boldsymbol{\nu}$ is stationary for every $\Lambda_\alpha \leq \mathcal{I}_\alpha < \Xi_\alpha$. This condition can be decomposed as $\left\{\Lambda_\alpha \leq \mathcal{I}_\alpha\right\} \cap \left\{\mathcal{I}_\alpha < \Xi_\alpha\right\}$, and since the sets of active and inactive neurons in $\mathfrak{P}_\alpha$ are disjoint, the random variables $\Lambda_\alpha$ and $\Xi_\alpha$ are independent, so that $P\left(\Lambda_\alpha \leq \mathcal{I}_\alpha < \Xi_\alpha\right) = F_{\Lambda_\alpha}\left(\mathcal{I}_\alpha\right)\left[1 - F_{\Xi_\alpha}\left(\mathcal{I}_\alpha\right)\right]$. Therefore, to summarize, the probability that the firing-rate state $\boldsymbol{\nu}$ is stationary, for a fixed combination of stimuli $\mathcal{I}$, is:

$P_{\mathcal{I}}\left(\boldsymbol{\nu}\right) = \prod_{\alpha=0}^{\mathfrak{n}-1} \begin{cases} 1 - F_{\Xi_\alpha}\left(\mathcal{I}_\alpha\right), & \mathfrak{a}_\alpha = 0 \\ F_{\Lambda_\alpha}\left(\mathcal{I}_\alpha\right), & \mathfrak{b}_\alpha = 0 \\ F_{\Lambda_\alpha}\left(\mathcal{I}_\alpha\right)\left[1 - F_{\Xi_\alpha}\left(\mathcal{I}_\alpha\right)\right], & \text{otherwise} \end{cases} \qquad (18)$

Note that $P_{\mathcal{I}}\left(\boldsymbol{\nu}\right)$ can be equivalently interpreted as the conditional probability $P\left(\boldsymbol{\nu} \,|\, \mathcal{I}\right)$, and that generally $\sum_{\boldsymbol{\nu}} P_{\mathcal{I}}\left(\boldsymbol{\nu}\right) \neq 1$, therefore $P_{\mathcal{I}}$ is not normalized over the set of the $2^N$ possible firing-rate states.
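The sketch below estimates $P_{\mathcal{I}}\left(\boldsymbol{\nu}\right)$ by brute force across quenched realizations, using the stationarity intervals of Eq. (5); the network parameters, the candidate state and the fixed stimuli are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    N = 6
    theta = np.ones(N)
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # stimulus sets P_alpha
    I_vals = np.array([0.8, 1.2])                          # fixed stimuli (illustrative)
    nu = np.array([1, 0, 1, 1, 0, 0])                      # candidate state

    def stationary_at_I(J):
        """True if nu is stationary for the stimuli I_vals in this realization."""
        x = theta - J @ nu
        for idx, I_a in zip(groups, I_vals):
            active, inactive = idx[nu[idx] == 1], idx[nu[idx] == 0]
            lam = x[active].max() if active.size else -np.inf
            xi = x[inactive].min() if inactive.size else np.inf
            if not (lam <= I_a < xi):
                return False
        return True

    n_real = 50_000
    hits = sum(stationary_at_I(rng.normal(0.0, 1.0, (N, N)) *
                               (rng.random((N, N)) < 0.5)) for _ in range(n_real))
    print("P_I(nu) ~", hits / n_real)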

2.2.4 Occurrence Probability of the Stationary States Regardless of the Stimuli

In this subsection we calculated the probability to observe a given firing-rate state $\boldsymbol{\nu}$ in the whole multistability diagram of a single network realization, namely the probability that the state is stationary regardless of the specific combination of stimuli to the network. In other words, this represents the probability that $\boldsymbol{\nu}$ is stationary for at least one combination of stimuli. The firing-rate state $\boldsymbol{\nu}$ is observed in the multistability diagram only if its corresponding hyperrectangle $\mathcal{V}_{\boldsymbol{\nu}}$ has positive hypervolume $V\left(\mathcal{V}_{\boldsymbol{\nu}}\right)$. Since $V\left(\mathcal{V}_{\boldsymbol{\nu}}\right) = \prod_{\alpha=0}^{\mathfrak{n}-1} L\left(\mathfrak{I}_\alpha\right)$, it follows that $V\left(\mathcal{V}_{\boldsymbol{\nu}}\right) > 0$ only if $L\left(\mathfrak{I}_\alpha\right) > 0$ for every $\alpha$, where $L\left(\mathfrak{I}_\alpha\right) = \Xi_\alpha - \Lambda_\alpha$ represents the length of the interval $\mathfrak{I}_\alpha \overset{\mathrm{def}}{=} \left[\Lambda_\alpha, \Xi_\alpha\right)$ [2]. In particular, when $\mathfrak{a}_\alpha = 0$ (respectively $\mathfrak{b}_\alpha = 0$), from Eq. (5) we get $L\left(\mathfrak{I}_\alpha\right) = +\infty$ for every realization, or in other words $P\left(L\left(\mathfrak{I}_\alpha\right) > 0\right) = 1$. On the other hand, when $\mathfrak{a}_\alpha, \mathfrak{b}_\alpha > 0$, according to Eq. (5) we obtain $P\left(L\left(\mathfrak{I}_\alpha\right) > 0\right) = P\left(\Lambda_\alpha < \Xi_\alpha\right)$. Since $\Lambda_\alpha$ and $\Xi_\alpha$ are independent for a given $\boldsymbol{\nu}$, we can write:

$P\left(\Lambda_\alpha < \Xi_\alpha\right) = \int_{-\infty}^{+\infty} p_{\Lambda_\alpha}(x)\left[1 - F_{\Xi_\alpha}(x)\right] dx$

and therefore, by using Eq. (13):

$P\left(\Lambda_\alpha < \Xi_\alpha\right) = \int_{-\infty}^{+\infty} \widetilde{p}_{\Lambda_\alpha}(x)\left[1 - F_{\Xi_\alpha}(x)\right] dx + \sum_{x_0 \in \mathcal{D}_{\Lambda_\alpha}} \left[F_{\Lambda_\alpha}\left(x_0\right) - F_{\Lambda_\alpha}\left(x_0^-\right)\right]\left[1 - F_{\Xi_\alpha}\left(x_0\right)\right]$

[2] Note that in Sec. (3) we consider an example of neural network model with $\mathfrak{n} = 2$, therefore in that case $V\left(\mathcal{V}_{\boldsymbol{\nu}}\right)$ represents the area of the rectangles in the stimuli space. Nevertheless, to avoid confusion, we will continue to use the general notation $V\left(\mathcal{V}_{\boldsymbol{\nu}}\right)$.
To conclude, since the quantities $L\left(\mathfrak{I}_\alpha\right)$ are independent, we obtain that the probability to observe a given firing-rate state $\boldsymbol{\nu}$ in the whole multistability diagram of a single network realization is:

$P\left(\boldsymbol{\nu}\right) = \prod_{\alpha=0}^{\mathfrak{n}-1} \begin{cases} 1, & \mathfrak{a}_\alpha = 0 \ \text{or} \ \mathfrak{b}_\alpha = 0 \\ P\left(\Lambda_\alpha < \Xi_\alpha\right), & \text{otherwise} \end{cases} \qquad (19)$
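This probability can be cross-checked numerically in the same way as $P_{\mathcal{I}}\left(\boldsymbol{\nu}\right)$; the sketch below differs from the previous one only in testing whether each interval $\left[\Lambda_\alpha, \Xi_\alpha\right)$ is non-empty, rather than whether it contains a fixed stimulus.

    import numpy as np

    rng = np.random.default_rng(6)

    N = 6
    theta = np.ones(N)
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    nu = np.array([1, 0, 1, 1, 0, 0])

    def appears_in_diagram(J):
        """True if the hyperrectangle V_nu of Eq. (5) has positive hypervolume."""
        x = theta - J @ nu
        for idx in groups:
            active, inactive = idx[nu[idx] == 1], idx[nu[idx] == 0]
            lam = x[active].max() if active.size else -np.inf
            xi = x[inactive].min() if inactive.size else np.inf
            if not lam < xi:
                return False
        return True

    n_real = 50_000
    hits = sum(appears_in_diagram(rng.normal(0.0, 1.0, (N, N)) *
                                  (rng.random((N, N)) < 0.5)) for _ in range(n_real))
    print("P(nu) ~", hits / n_real)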

2.3 The Special Case of Multi-Population Networks Composed of Statistically-Homogeneous Populations

In biological networks, heterogeneity is experimentally observed between different types of synapses (e.g. excitatory vs inhibitory ones), as well as within a given synaptic type [36]. For this reason, in this subsection we focused our attention on the study of random networks composed of statistically-homogeneous populations. As we explained in SubSec. (2.2.2), by statistical homogeneity we mean that the synaptic weights are random and therefore heterogeneous, but the probability distributions $p_{W_{ij}}$, as well as the firing thresholds $\theta_i$ and the external stimuli $I_i$, depend only on the populations the neurons belong to. This model has been used previously in neuroscience to study the dynamical consequences of heterogeneous synaptic connections in multi-population networks (see e.g. [47, 29]). However, while previous studies focused on the thermodynamic limit of the network model, here we considered the case of arbitrary-size networks.

We called $N_\alpha$ the size of population $\alpha$, so that $N = \sum_{\alpha=0}^{\mathfrak{p}-1} N_\alpha$, where $\mathfrak{p}$ is the number of populations. Moreover, we rearranged the neurons so that the connection probabilities can be written in the following block-matrix form:

$P = \begin{bmatrix} \mathfrak{B}_{0,0} & \cdots & \mathfrak{B}_{0,\mathfrak{p}-1} \\ \vdots & \ddots & \vdots \\ \mathfrak{B}_{\mathfrak{p}-1,0} & \cdots & \mathfrak{B}_{\mathfrak{p}-1,\mathfrak{p}-1} \end{bmatrix}, \qquad \mathfrak{B}_{\alpha\beta} = P_{\alpha\beta}\, \mathbb{1}_{\alpha\beta} + \delta_{\alpha\beta}\left(\zeta_\alpha - P_{\alpha\alpha}\right)\mathrm{Id}_{N_\alpha} \qquad (20)$

In Eq. (20), $\mathfrak{B}_{\alpha\beta}$ is an $N_\alpha \times N_\beta$ matrix, while $\zeta_\alpha$ represents the magnitude of the diagonal entries of the matrix $P$, namely the probability to observe a self-connection or autapse [53]. $P_{\alpha\beta}$ represents the probability to observe a synaptic connection from a neuron in population $\beta$ to a (distinct) neuron in population $\alpha$. Moreover, $\mathbb{1}_{\alpha\beta}$ is the all-ones matrix (here we used the simplified notation $\mathbb{1}_{\alpha\beta} \overset{\mathrm{def}}{=} \mathbb{1}_{N_\alpha \times N_\beta}$), while $\mathrm{Id}_{N_\alpha}$ is the $N_\alpha \times N_\alpha$ identity matrix. According to the homogeneity assumption, we also supposed that the strength of the non-zero synaptic connections from population $\beta$ to population $\alpha$ is generated from a population-dependent probability distribution:

$W_{ij} \sim p_{W_{\alpha\beta}}$

for every pair of neurons $i, j$ belonging to populations $\alpha, \beta$, respectively. For every excitatory population $\beta$ the support of the distributions $p_{W_{\alpha\beta}}$ must be non-negative, while for every inhibitory population it must be non-positive. We also supposed that all the neurons in population $\alpha$ have the same firing threshold $\theta_\alpha$ and share the same stimulus. For example, if each population receives a distinct external stimulus, then $\mathfrak{n} = \mathfrak{p}$, where $\mathfrak{n}$ is the number of distinct stimuli. However, generally, there may exist distinct populations that share the same stimulus.

Now consider the following block matrix:

$M = \begin{bmatrix} \pi_{0,0}\, \mathbb{1}_{R_0 \times C_0} & \cdots & \pi_{0,\mathfrak{m}-1}\, \mathbb{1}_{R_0 \times C_{\mathfrak{m}-1}} \\ \vdots & \ddots & \vdots \\ \pi_{\mathfrak{l}-1,0}\, \mathbb{1}_{R_{\mathfrak{l}-1} \times C_0} & \cdots & \pi_{\mathfrak{l}-1,\mathfrak{m}-1}\, \mathbb{1}_{R_{\mathfrak{l}-1} \times C_{\mathfrak{m}-1}} \end{bmatrix}$

with homogeneous blocks (where $\sum_{r=0}^{\mathfrak{l}-1} R_r = \sum_{c=0}^{\mathfrak{m}-1} C_c$, while the entries $\pi_{rc}$ are free parameters). We found that:

$\operatorname{per}\left(M\right) = \sum_{K} \left[\prod_{c=0}^{\mathfrak{m}-1} C_c!\right] \prod_{r=0}^{\mathfrak{l}-1} \binom{R_r}{k_{r,0}, \dots, k_{r,\mathfrak{m}-1}} \prod_{c=0}^{\mathfrak{m}-1} \pi_{rc}^{k_{rc}} \qquad (21)$

where the sum runs over all the matrices $K = \left(k_{rc}\right)$ of non-negative integers with row sums $\sum_{c} k_{rc} = R_r$ and column sums $\sum_{r} k_{rc} = C_c$, and with multinomial coefficients:

$\binom{R_r}{k_{r,0}, \dots, k_{r,\mathfrak{m}-1}} = \frac{R_r!}{k_{r,0}! \cdots k_{r,\mathfrak{m}-1}!}$
As a consequence of the statistical homogeneity of the multi-population network considered in this subsection, the matrices in Eqs. (14) and (15) are composed of homogeneous block submatrices. For this reason, in the specific case of this multi-population network, the permanents in Eqs. (14) and (15) can be calculated by means of Eq. (21). Note that, for a given $\alpha$, the parameter $\mathfrak{l}$ represents the number of distinct populations that share the current $\mathcal{I}_\alpha$ (for example, $\mathfrak{l} = 1$ if each population receives a distinct external stimulus), while $\mathfrak{m}$ is the number of block columns (for example, $\mathfrak{m} = 1$ when calculating $F_{\Lambda_\alpha}$, see Eq. (15), and $\mathfrak{m} = 2$ when calculating $\widetilde{p}_{\Lambda_\alpha}$, see Eq. (14)). $R_r$ corresponds to the number of neurons with index $i \in \mathfrak{P}_\alpha$ that belong to the $r$th population, while $C_c$ represents the number of columns of the $c$th block matrix (for example, when calculating $\widetilde{p}_{\Lambda_\alpha}$, we set $C_0 = \mathfrak{a}_\alpha - 1$ and $C_1 = 1$, which correspond to the number of columns of the submatrices $\mathbf{F}(x)$ and $\widetilde{\mathbf{f}}(x)$ respectively, see Eq. (14)). Moreover, $\sum_r R_r$ corresponds to $\mathfrak{a}_\alpha$ (or $\mathfrak{b}_\alpha$), while the parameters $\pi_{rc}$ represent the entries of the matrices in Eqs. (14) and (15) (for example, $\pi_{r,0} = F_{X_i}(x)$ and $\pi_{r,1} = \widetilde{p}_{X_i}(x)$ for the neurons $i$ in the $r$th population, when calculating $\widetilde{p}_{\Lambda_\alpha}$).

For the sake of clarity, we implemented Eq. (21) in the supplemental Python script "Permanent.py". Since the permanents in Eqs. (14) and (15) can be obtained from Eq. (21) for $\mathfrak{m} = 1$ and $\mathfrak{m} = 2$, in the script we specifically implemented these two cases. The computation of the permanent by means of Eq. (21) generally proved much faster than the BBFG algorithm, see Sec. (3). However, it is important to note that while the BBFG algorithm can be applied to neural networks with any topology, Eq. (21) is specific to multi-population networks composed of statistically-homogeneous populations.
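A self-contained sketch of a block-permanent formula of this kind (our own notation and enumeration strategy, independent of "Permanent.py"), cross-checked against a brute-force permanent on a small example:

    import numpy as np
    from itertools import permutations, product
    from math import factorial, prod

    def permanent_bruteforce(A):
        """O(n!) reference permanent, for validation on small matrices only."""
        n = len(A)
        return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

    def compositions(total, parts):
        """All tuples of `parts` non-negative integers summing to `total`."""
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest

    def permanent_block(pi, R, C):
        """Permanent of a matrix made of constant blocks pi[r][c] of shape R[r] x C[c].

        Sums over count matrices K (k[r][c] = number of rows of block r assigned to
        columns of block c) with row sums R and column sums C:
        per = sum_K (prod_c C_c!) * prod_r [R_r! / prod_c k_rc!] * prod_{r,c} pi_rc^k_rc
        """
        l, m = len(R), len(C)
        total = 0.0
        for K in product(*(tuple(compositions(R[r], m)) for r in range(l))):
            if any(sum(K[r][c] for r in range(l)) != C[c] for c in range(m)):
                continue
            term = prod(factorial(c) for c in C)
            for r in range(l):
                term *= factorial(R[r])
                for c in range(m):
                    term = term / factorial(K[r][c]) * pi[r][c] ** K[r][c]
            total += term
        return total

    # cross-check on a small two-by-two-block example (sizes and entries illustrative)
    R, C = [2, 3], [4, 1]
    pi = [[0.3, 1.2], [0.7, 0.5]]
    A = np.block([[pi[r][c] * np.ones((R[r], C[c])) for c in range(2)] for r in range(2)])
    print(permanent_block(pi, R, C), permanent_bruteforce(A.tolist()))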

2.4 Large-Network Limit

In computational neuroscience, statistically-homogeneous multi-population networks represent an important class of network models, since their large-size limit is typically well-defined and serves as a basis for understanding the asymptotic behavior of neural systems [47, 13, 21, 29, 12]. In this subsection, we derived the large-size limit of the class of statistically-homogeneous multi-population networks with quenched disorder that we introduced in SubSec. (2.3). In particular, we focused on the case when each neural population receives a distinct external stimulus current, and we also supposed that the contribution of self-connections to the statistics of the firing rates is negligible in the large-network limit. The consequences of the relaxation of these two assumptions will be discussed at the end of this subsection.

The derivation of the asymptotic form of the stationary-state statistics required the introduction of a proper normalization of the sum $\sum_{j=0}^{N-1} J_{ij}\, \nu_j(t)$ in Eq. (1), in order to prevent the divergence of the mean and the variance of the local fields in the thermodynamic limit. To this purpose, we chose the mean and the variance of the random variables $W_{ij}$ as follows:

$\left\langle W_{ij} \right\rangle = \frac{\mu_{\alpha\beta}}{N_\beta}, \qquad \operatorname{Var}\left(W_{ij}\right) = \frac{\sigma_{\alpha\beta}^2}{N_\beta} \qquad (22)$

given size-independent parameters $\mu_{\alpha\beta}$ and $\sigma_{\alpha\beta}$, for every pair of neurons $i, j$ (with $i \neq j$) in the populations $\alpha, \beta$, respectively. Eq. (22) implies that:

$\left\langle J_{ij} \right\rangle = P_{\alpha\beta}\, \frac{\mu_{\alpha\beta}}{N_\beta}, \qquad \operatorname{Var}\left(J_{ij}\right) = P_{\alpha\beta}\, \frac{\sigma_{\alpha\beta}^2}{N_\beta} + P_{\alpha\beta}\left(1 - P_{\alpha\beta}\right) \frac{\mu_{\alpha\beta}^2}{N_\beta^2}$

and therefore:

$\left\langle \mathfrak{S}_i^{\boldsymbol{\nu}} \right\rangle = \sum_{\beta=0}^{\mathfrak{p}-1} P_{\alpha\beta}\, \mu_{\alpha\beta}\, \frac{A_\beta}{N_\beta}, \qquad \operatorname{Var}\left(\mathfrak{S}_i^{\boldsymbol{\nu}}\right) = \sum_{\beta=0}^{\mathfrak{p}-1} \left[P_{\alpha\beta}\, \sigma_{\alpha\beta}^2 + P_{\alpha\beta}\left(1 - P_{\alpha\beta}\right) \frac{\mu_{\alpha\beta}^2}{N_\beta}\right] \frac{A_\beta}{N_\beta}$

where $A_\beta$ is the number of active neurons (i.e. with $\nu_j = 1$) in population $\beta$, having neglected the contribution of the autapses. Therefore the mean and the variance of $\mathfrak{S}_i^{\boldsymbol{\nu}}$ are finite for every state in the thermodynamic limit, as desired. Now, consider any of the firing-rate states composed of $A_\beta$ active neurons in the population $\beta$ (