A unified view on weakly correlated recurrent networks

Abstract

The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.


Dmytro Grytskyy, Tom Tetzlaff, Markus Diesmann, Moritz Helias

1 Inst. of Neuroscience and Medicine (INM-6) and Inst. for Advanced Simulation (IAS-6)

Jülich Research Centre and JARA, Jülich, Germany

2 Medical Faculty

RWTH Aachen University, Germany

email: d.grytskyy@fz-juelich.de

Keywords

Correlations, linear response, Hawkes process, leaky integrate-and-fire model, binary neuron, linear rate model, Ornstein-Uhlenbeck process

1 Introduction

The meaning of correlated neural activity for the processing and representation of information in cortical networks is still not understood, but evidence for a pivotal role of correlations increases (recently reviewed in Cohen & Kohn, 2011). Different studies have shown that correlations can either decrease (Zohary et al., 1994) or increase (Sompolinsky et al., 2001) the signal to noise ratio of population signals, depending on the readout mechanism. The architecture of cortical networks is dominated by convergent and divergent connections among the neurons (Braitenberg & Schüz, 1991) causing correlated neuronal activity by common input from shared afferent neurons in addition to direct connections between pairs of neurons and common external signals. It has been shown that correlated activity can faithfully propagate through convergent-divergent feed forward structures, such as synfire chains (Abeles, 1991; Diesmann et al., 1999), a potential mechanism to convey signals in the brain. Correlated firing was also proposed as a key to the solution of the binding problem (von der Malsburg, 1981; Bienenstock, 1995; Singer, 1999), an idea that has been discussed controversially (Shadlen & Movshon, 1999). Independent of a direct functional role of correlations in cortical processing, the covariance function between the spiking activity of a pair of neurons contains the information about time intervals between spikes. Changes of synaptic coupling, mediated by spike-timing dependent synaptic plasticity (STDP, Markram et al., 1997; Bi & Poo, 1999), are hence sensitive to correlations. Understanding covariances in spiking networks is thus a prerequisite to investigate the evolution of synapses in plastic networks (Burkitt et al., 2007; Gilson et al., 2009, 2010).

Apart from these theoretical considerations, there is ubiquitous experimental evidence of correlated spike events in biological neural networks, going back to early reports on multi-unit recordings in cat auditory cortex (Perkel et al., 1967; Gerstein & Perkel, 1969), the observation of closely time-locked spikes appearing at behaviorally relevant points in time (Kilavik et al., 2009; Ito et al., 2011), and collective oscillations in cortex (recently reviewed in Buzsáki & Wang, 2012).

The existing theories explaining correlated activity use a multitude of different neuron models. Hawkes (1971) developed the theory of covariances for linear spiking Poisson neurons (Hawkes processes). Ginzburg & Sompolinsky (1994) presented the approach of linearization to treat fluctuations around the point of stationary activity and to obtain the covariances for networks of non-linear binary neurons. The formal concept of linearization allowed Brunel & Hakim (1999) and Brunel (2000) to explain fast collective gamma oscillations in networks of spiking leaky integrate-and-fire (LIF) neurons. Correlations in feed-forward networks of leaky integrate-and-fire models are studied in Moreno-Bote & Parga (2006); exact analytical solutions for such network architectures are given in Rosenbaum & Josic (2011) for the case of stochastic random walk models, and threshold crossing neuron models are considered in Tchumatchenko et al. (2010) and Burak et al. (2009). Covariances in structured networks are investigated for Hawkes processes (Pernice et al., 2011), and in linear approximation for LIF (Pernice et al., 2012) and exponential integrate-and-fire neurons (Trousdale et al., 2012). The latter three works employ an expansion of the propagator (time evolution operator) in terms of the order of interaction. Finally, Buice et al. (2009) investigate higher order cumulants of the joint activity in networks of binary model neurons.

Analytical insight into a neuroscientific phenomenon based on correlated neuronal activity often requires a careful choice of the neuron model to arrive at a solvable problem. Hence a diversity of models has been proposed and is in use. This raises the question which features of covariances are generic properties of recurrent networks and which are specific to a certain model. Only if this question can be answered can one be sure that a particular result is not an artifact of oversimplified neuronal dynamics. Currently it is unclear how different neuron models relate to each other and whether and how results obtained with one model carry over to another. In this work we present a unified theoretical view on pairwise correlations in recurrent networks in the asynchronous and collective-oscillatory regime, approximating the response of different models to linear order. The joint treatment allows us to answer the question of genericity and moreover naturally leads to a classification of the considered models into only two categories, as illustrated in Figure 1. The classification in addition enables us to extend existing theoretical results to biologically relevant parameters, such as synaptic delays and the presence of inhibition, and to derive explicit expressions for the time-dependent covariance functions, in quantitative agreement with direct simulations, which can serve as a starting point for further work.

The remainder of this article is organized as follows. In the first part of our results in “2 Covariance structure of noisy rate models” we investigate the activity and the structure of covariance functions for two versions of linear rate models (LRM); one with input the other with output noise. If the activity relaxes exponentially after application of a short perturbation, both models coincide with the Ornstein-Uhlenbeck process (OUP). We mainly consider the latter case, although most results hold for arbitrary kernel functions. We extend the analytical solutions for the covariances in networks of OUP (Risken, 1996) to the neuroscientifically important case of synaptic conduction delays. Solutions are derived first for general forms of connectivity in “2.2 Solution of the convolution equation with input noise” for input noise and in “2.3 Solution of convolution equation with output noise” for output noise. After analyzing the spectral properties of the dynamics in the frequency domain in “2.4 Spectrum of the dynamics”, identifying poles of the propagators and their relation to collective oscillations in neuronal networks, we show in “2.5 Population-averaged covariances” how to obtain pairwise averaged covariances in homogeneous Erdös-Rényi random networks. We explain in detail the use of the residue theorem to perform the Fourier back-transformation of covariance functions to the time domain in “2.6 Fourier back transformation” for general connectivity and in “2.7 Explicit expression for the population averaged cross covariance in the time domain” for averaged covariance functions in random networks, which allows us to obtain explicit results and to discuss class dependent features of covariance functions.

Figure 1: Mapping different descriptions of neuronal dynamics to linear rate models (LRM). The arrows indicate analytical methods which enable a mapping from the original spiking (leaky integrate-and-fire model, Hawkes model) or binary neuron dynamics to the analytically more tractable linear rate models. Depending on the original dynamics (spiking or binary) the resulting LRM contains an additive noise component either on the output side (left) or on the input side (right).

In the second part of our results in “3 Binary neurons”, “4 Hawkes processes”, and “5 Leaky integrate-and-fire neurons” we consider the mapping of different neuronal dynamics onto either of the two flavors of the linear rate models discussed in the first part. The mapping procedure is qualitatively the same for all dynamics as illustrated in Figure 1: Starting from the dynamic equations of the respective model, we first determine the working point described in terms of the mean activity in the network. For unstructured homogeneous random networks this amounts to a mean-field description in terms of the population averaged activity (i.e. firing rate in spiking models). In the next step, a linearization of the dynamical equations is performed around this working point. We explain how fluctuations can be taken into account in the linearization procedure to improve its accuracy and we show how the effective linear dynamics maps to the LRM. We illustrate the results throughout by a quantitative comparison of the analytical results to direct numerical simulations of the original non-linear dynamics. The appendices “8.2 Implementation of noisy rate models”, “8.3 Implementation of binary neurons in a spiking simulator code”, and “8.4 Implementation of Hawkes neurons in a spiking simulator code” describe the model implementations and are modules of our long-term collaborative project to provide the technology for neural systems simulations (Gewaltig & Diesmann, 2007).

2 Covariance structure of noisy rate models

2.1 Definition of models

Let us consider a network of linear model neurons, each characterized by a continuous fluctuating rate $y_i(t)$ and connections from neuron $j$ to neuron $i$ given by the element $w_{ij}$ of the connectivity matrix $W$. We assume that the response of neuron $i$ to input can be described by a linear kernel $h_i$ so that the activity in the network fulfills

$y = h * (W y_d + \xi), \qquad (1)$

where $y_d(t) = y(t - d)$ denotes the function shifted by the delay $d$, the noise term $\xi$ is constructed from an uncorrelated noise $x$ with

$\langle x_i(t) \rangle = 0, \qquad \langle x_i(t+s)\, x_j(t) \rangle = \delta_{ij}\, c_i\, \delta(s), \qquad (2)$

e.g. a Gaussian white noise, and $*$ denotes the convolution $(f * g)(t) = \int f(t - t')\, g(t')\, dt'$. With the particular choice $\xi = W x_d$ we obtain

$y = h * \big(W\, (y + x)_d\big). \qquad (3)$

We call the dynamics (3) the linear noisy rate model (LRM) with noise applied to the output, as the sum $y + x$ appears on the right-hand side. Alternatively, choosing $\xi = x$ we define the model with input noise as

$y = h * (W y_d + x). \qquad (4)$

Hence, equations (3) and (4) are special cases of (1). In the following we consider the particular case of an exponential kernel

$h(t) = \frac{1}{\tau}\, e^{-t/\tau}\, \theta(t), \qquad (5)$

where $\theta$ denotes the Heaviside function, $\theta(t) = 1$ for $t \ge 0$, $\theta(t) = 0$ else. Applying to (1) the operator $\tau \frac{d}{dt} + 1$, which has $h$ as a Green's function (i.e. $(\tau \frac{d}{dt} + 1)\, h = \delta$), we get

$\tau \frac{dy}{dt}(t) = -y(t) + W\, y(t - d) + \xi(t), \qquad (6)$

which is the equation describing a set of delay coupled Ornstein-Uhlenbeck processes (OUP) with input or output noise for $\xi = x$ or $\xi = W x_d$, respectively. We use this representation in “3 Binary neurons” to show the correspondence to networks of binary neurons.
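To make the two noise placements concrete, the following minimal sketch integrates (6) with an Euler-Maruyama scheme for a small random network; all parameter values and the network construction are illustrative assumptions, not the parameters of the paper's simulations.

```python
import numpy as np

# Euler-Maruyama integration of eq. (6); parameters are illustrative.
N, tau, d, dt, T = 100, 10.0, 3.0, 0.1, 1000.0
rng = np.random.default_rng(0)
NE = int(0.8 * N)                                   # 80% excitatory neurons
w, g, p = 0.1, 5.0, 0.1
col_weight = np.where(np.arange(N) < NE, w, -g * w)
W = (rng.random((N, N)) < p) * col_weight[np.newaxis, :]

steps, D = int(T / dt), int(round(d / dt))          # D: delay in time steps
y = np.zeros((steps, N))
x = rng.standard_normal((steps, N)) / np.sqrt(dt)   # discretized white noise
for t in range(1, steps):
    y_d = y[t - 1 - D] if t - 1 >= D else np.zeros(N)   # delayed rate y(t-d)
    x_d = x[t - 1 - D] if t - 1 >= D else np.zeros(N)   # delayed noise x(t-d)
    xi = x[t - 1]            # input noise; set xi = W @ x_d for output noise
    y[t] = y[t - 1] + dt / tau * (-y[t - 1] + W @ y_d + xi)
```

Covariances estimated from $y$ (input noise) or from $z = y + x$ (output noise) can then be compared with the closed form solutions derived below.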

2.2 Solution of the convolution equation with input noise

The solution for the system with input noise obtained from the definition (4) after Fourier transformation is

$Y(\omega) = \big(\mathbb{1} - H_d(\omega)\, W\big)^{-1}\, H(\omega)\, X(\omega), \qquad (7)$

where the delay is consumed by the kernel function $H_d(\omega) = H(\omega)\, e^{-i\omega d}$. We use capital letters throughout the text to denote objects in the Fourier domain and lower case letters for objects in the time domain. The covariance function of $y$ in the Fourier domain, also called the cross spectrum, is found with the Wiener–Khinchin theorem (Gardiner, 2004) as $C(\omega) = \langle Y(\omega)\, Y^\dagger(\omega) \rangle$,

$C(\omega) = P(\omega)\, H(\omega)\, D\, H^\dagger(\omega)\, P^\dagger(\omega) = P(\omega)\, H_d(\omega)\, D\, H_d^\dagger(\omega)\, P^\dagger(\omega), \qquad (8)$

where we introduced the propagator matrix $P(\omega) = (\mathbb{1} - H_d(\omega) W)^{-1}$ and the noise correlation matrix $D = \langle X X^\dagger \rangle$. From the first to the second form we used the fact that the non-delayed kernels $H$ can be replaced by delayed kernels $H_d$, as the corresponding phase factors $e^{-i\omega d}$ and $e^{i\omega d}$ cancel each other. If $x$ is a vector of pairwise uncorrelated noise, $D$ is a diagonal matrix and needs to be chosen accordingly in order for the cross spectrum (8) to coincide (neglecting non-linear effects) with the cross spectrum of a network of binary neurons, as described in “3.1 Equivalence of binary neurons and Ornstein-Uhlenbeck processes”.
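As a numerical companion to (8), the following sketch evaluates the cross-spectral matrix at a single frequency for the exponential kernel (5); connectivity $W$ and diagonal noise matrix $D$ are arbitrary inputs supplied by the caller.

```python
import numpy as np

def cross_spectrum_input_noise(omega, W, D, tau, d):
    """Evaluate eq. (8), C = P H_d D H_d^dagger P^dagger, at frequency omega."""
    H = 1.0 / (1.0 + 1j * omega * tau)        # exponential kernel (5) in Fourier space
    Hd = H * np.exp(-1j * omega * d)          # delay absorbed into the kernel
    P = np.linalg.inv(np.eye(W.shape[0]) - Hd * W)   # propagator (1 - H_d W)^-1
    PH = P * Hd                               # Hd is a scalar, so this is P H_d
    return PH @ D @ PH.conj().T
```

A spectrum estimated from simulated activity, as in the sketch of the previous subsection, should agree with this expression up to sampling noise.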

2.3 Solution of convolution equation with output noise

For the system with output noise we consider the quantity $z = y + x$ as the dynamic variable representing the activity of neuron $i$ and aim to determine pairwise correlations. From (3) we obtain after Fourier transformation

$Y(\omega) = H_d(\omega)\, W\, \big(Y(\omega) + X(\omega)\big), \qquad (9)$

which can be solved for $Y$ in order to determine the Fourier transform of $z$ as

$Z(\omega) = Y(\omega) + X(\omega) = \big(\mathbb{1} - H_d(\omega)\, W\big)^{-1}\, X(\omega) = P(\omega)\, X(\omega). \qquad (10)$

The cross spectrum $C(\omega) = \langle Z(\omega)\, Z^\dagger(\omega) \rangle$ hence follows as

$C(\omega) = P(\omega)\, D\, P^\dagger(\omega), \qquad (11)$

with the propagator $P(\omega) = (\mathbb{1} - H_d(\omega) W)^{-1}$ as before. $D = \langle X X^\dagger \rangle$ is a diagonal matrix whose $i$-th diagonal entry is the variance $c_i$ of the noise driving neuron $i$. For the correspondence to spiking models $D$ must be chosen appropriately, as discussed in “4 Hawkes processes” and “5 Leaky integrate-and-fire neurons” for Hawkes processes and leaky integrate-and-fire neurons, respectively.

2.4 Spectrum of the dynamics

For both linear rate dynamics, with output and with input noise, the cross spectrum has poles at certain frequencies in the complex plane. These poles are determined by the zeros of $\det\big(\mathbb{1} - H_d(\omega) W\big)$ and of the corresponding term with the opposite sign of $\omega$. The zeros of $\det\big(\mathbb{1} - H_d(\omega) W\big)$ are solutions of the equation

$1 - H_d(\omega)\, \lambda_i = 0,$

where $\lambda_i$ is the $i$-th eigenvalue of $W$. The same set of poles arises from (1) when solving for $Y$. Writing $z = i\omega$, for the exponential kernel (5) the poles can be expressed as

$z_k = \frac{1}{d}\, \mathcal{W}_k\!\left(\frac{\lambda_i d}{\tau}\, e^{d/\tau}\right) - \frac{1}{\tau}, \qquad (12)$

where $\mathcal{W}_k$ is the $k$-th of the infinitely many branches of the Lambert-W function (Corless et al., 1996). For vanishing synaptic delay $d = 0$ there is obviously only one solution for every $\lambda_i$, given by $z = (\lambda_i - 1)/\tau$.

Given the same parameters $W$, $\tau$, $d$, the pole structures of the cross spectra of both systems (8) and (11) are identical, since the former can be obtained from the latter by multiplication with $H_d(\omega) H_d^\dagger(\omega)$, which has no zeros. The only exception causing a different pole structure for the two models is the existence of an eigenvalue $\lambda_i = 0$ of the connectivity matrix $W$, corresponding to a pole $z = -1/\tau$. However, this pole corresponds to an exponential decay of the covariance for input noise in the time domain and hence does not contribute to oscillations. For output noise, the multiplication with the term $H^{-1}(\omega) = 1 + i\omega\tau$, vanishing at $z = -1/\tau$, cancels this pole in the covariance. Consequently both dynamics exhibit similar oscillations. A typical spectrum of poles for a negative eigenvalue is shown in Figure 2B,D.
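The branches of the Lambert-W function in (12) are directly available in SciPy, so the pole positions can be computed numerically. The sketch below evaluates a few branches for a single eigenvalue $\lambda$ of $W$; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.special import lambertw

def poles(lam, tau, d, branches=range(-3, 4)):
    """Poles z_k from eq. (12): solutions of (1 + z*tau)*exp(z*d) = lam."""
    return np.array([lambertw(lam * d / tau * np.exp(d / tau), k=k) / d - 1.0 / tau
                     for k in branches])

# negative eigenvalue (net inhibition): complex-conjugate pole pairs whose
# imaginary parts mark the oscillation frequencies visible in Figure 2B,D
print(np.round(poles(lam=-4.0, tau=10.0, d=3.0), 3))
```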

2.5 Population-averaged covariances

Often it is desirable to consider not the whole covariance matrix but averages over subpopulations of pairs of neurons. For instance, the average over the whole network would result in a single scalar value. Separately averaging pairs, distinguishing excitatory and inhibitory neuron populations, yields a $2 \times 2$ matrix of covariances. For these simpler objects closed form solutions can be obtained, which preserve some useful information and show important features of the network. Averaged covariances are also useful for comparison with simulations and experimental results.

In the following we consider a recurrent random network of $N_E$ excitatory and $N_I$ inhibitory neurons with synaptic weight $w$ for excitatory and $-gw$ for inhibitory synapses. The probability $p$ determines the existence of a connection between two randomly chosen neurons. We study the dynamics averaged over the two subpopulations by introducing the quantities $y_\alpha = \frac{1}{N_\alpha} \sum_{i \in \alpha} y_i$ and noise terms $x_\alpha$ for $\alpha \in \{E, I\}$; the indices $I$ and $E$ stand for inhibitory and excitatory neurons and corresponding quantities. Calculating the average local input to a neuron of type $\alpha$, we obtain

$\frac{1}{N_\alpha} \sum_{i \in \alpha} \sum_j w_{ij}\, y_j = \frac{1}{N_\alpha} \sum_{\beta \in \{E, I\}} \sum_{i \in \alpha} \sum_{j \in \beta} w_{ij}\, y_j \simeq p N_E\, w\, y_E - p N_I\, g w\, y_I, \qquad (13)$

where from the second to the third expression we used the fact that in expectation a given neuron has $p N_\alpha$ targets in the population $\alpha$. The reduction to the averaged system in (13) is exact if every column of $W$ contains exactly $p N_E$ non-zero elements in the rows belonging to the excitatory and $p N_I$ in the rows belonging to the inhibitory population, which is the case for networks with fixed out-degree (the number of outgoing connections of a neuron to the neurons of a particular type is kept constant), as noted earlier (Tetzlaff et al., 2012). For fixed in-degree (the number of connections to a neuron coming in from the neurons of a particular type is kept constant) the substitution of $w_{ij}$ by its expectation value is an additional approximation, which can be considered as an average over possible realizations of the random connectivity. In both cases the effective population-averaged connectivity matrix turns out to be

$M = p N_E w \begin{pmatrix} 1 & -\gamma g \\ 1 & -\gamma g \end{pmatrix}, \qquad (14)$

with $\gamma = N_I / N_E$. So the averaged activities fulfill the same equations (3) and (4) with the non-averaged quantities $y$, $x$, and $W$ replaced by their averaged counterparts $y_\alpha$, $x_\alpha$, and $M$. The population averaged activities are directly related to the block-wise averaged covariance matrix $\bar{c}$ with elements $\bar{c}_{\alpha\beta} = \langle \delta y_\alpha\, \delta y_\beta \rangle$. With the averaged noise correlation matrix

$\bar{D}_{\alpha\beta} = \delta_{\alpha\beta}\, \frac{c_\alpha}{N_\alpha}, \qquad (15)$

we replace $W$ by $M$ and $D$ by $\bar{D}$, so that the same equations (11) and (8) and their general solutions also hold for the block-wise averaged covariance matrices.
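Since (8) and (11) keep their form under the averaging, the function cross_spectrum_input_noise from the sketch in “2.2 Solution of the convolution equation with input noise” can be reused directly with the $2 \times 2$ matrices in place of $W$ and $D$; the numerical values below are again illustrative assumptions.

```python
import numpy as np

# Population-averaged spectrum: the 2x2 averaged matrices replace W and D.
NE, NI, p, w, g, tau, d, c_noise = 8000, 2000, 0.1, 0.01, 5.0, 10.0, 3.0, 1.0
gamma = NI / NE
M = p * NE * w * np.array([[1.0, -gamma * g],
                           [1.0, -gamma * g]])      # eq. (14)
D_bar = np.diag([c_noise / NE, c_noise / NI])       # eq. (15)
C_avg = cross_spectrum_input_noise(omega=2 * np.pi * 0.01, W=M, D=D_bar,
                                   tau=tau, d=d)
```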

Figure 2: Pole structure determines dynamics. Autocovariance of the population activity (A,C) and its Fourier transform, the power spectrum (B,D), of the rate models with output noise (dots) (3) and input noise (diagonal crosses) (4) for a short delay (A,B) and a longer delay (C,D). Black symbols show averages over the excitatory population activity and gray symbols over the inhibitory activity obtained by direct simulation. Light gray curves show theoretical predictions for the spectrum (20) and the covariance (21) for output noise and the spectrum (17) and the covariance (18) for input noise. Black crosses (12) in B, D denote the locations of the poles of the cross spectra, with the real parts corresponding to the damping (vertical axis) and the imaginary parts to oscillation frequencies (horizontal axis). The detailed parameters for this and the following figures are given in “8.5 Parameters of simulations”.

The covariance matrices separately averaged over pairs of excitatory, inhibitory or mixed pairs are shown in Figure 2 for both linear rate dynamics (3) and (4). (Parameters for all simulations presented in this article are collected in “8.5 Parameters of simulations”, the implementation of linear rate models is described in “8.2 Implementation of noisy rate models”). The poles of both models shown in Figure 2B are given by (12) and coincide with the peaks in the cross spectra (8) and (11) for output and input noise, respectively. The results of direct simulation and the theoretical prediction are shown for two different delays, with the longer delay leading to stronger oscillations.

Figure 3: Limits of the theory for fixed in-degree and fixed out-degree. Autocovariance (A) and covariance (B) in random networks with fixed in-degree (dots) and fixed out-degree (crosses). Simulation results for three different networks are shown in dark gray, black and light gray, respectively, for a synaptic weight far from the bifurcation. For a larger synaptic weight close to the bifurcation (see text at the end of “2.5 Population-averaged covariances”), the covariance is also shown in D for fixed in-degree (dark gray dots) and for fixed out-degree (black dots). Corresponding theoretical predictions for the autocovariance (34) (A) and the covariance (18) (B,D) are plotted as light gray curves throughout. The set of eigenvalues is shown as black dots in panel C for the smaller weight. The gray circle denotes the spectral radius (Rajan & Abbott, 2006; Kriener et al., 2008) confining the set of eigenvalues for the larger weight. The small filled gray circle and the triangle show the effective eigenvalues of the averaged systems for small and large weight, respectively.

Figure 3C shows the distribution of eigenvalues in the complex plane for two random connectivity matrices with different synaptic amplitudes. The model exhibits a bifurcation if at least one eigenvalue assumes a zero real part. For fixed out-degree the averaging procedure (13) is exact, reflected by the precise agreement of theory and simulation in Figure 3D. For fixed in-degree, the averaging procedure (13) is an approximation, which is good only for parameters far from the bifurcation. Even in this regime small deviations of the theory from the simulation results are still visible in Figure 3B. On the stable side close to a bifurcation, the appearance of long living modes causes large fluctuations. These weakly damped modes appearing in one particular realization of the connectivity matrix are not represented after the replacement of the full matrix by the average over matrix realizations. The eigenvalue spectrum of the connectivity matrix provides an alternative way to understand the deviations. By the averaging, the set of eigenvalues of the connectivity matrix $W$ is replaced by the two eigenvalues of the reduced matrix $M$, one of which is zero due to the identical rows of $M$. The eigenvalue spectrum of the full matrix is illustrated in Figure 3C. Even if the eigenvalues of $M$ are far in the stable region, some eigenvalues of the full connectivity matrix may still lie in the vicinity of the bifurcation region and the system can feel their influence, as shown in Figure 3D.
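The contrast between the eigenvalue cloud of one realization and the two eigenvalues of the reduced matrix, as displayed in Figure 3C, can be reproduced with a few lines; network size and weights below are illustrative assumptions.

```python
import numpy as np

# Eigenvalues of one fixed in-degree realization of W vs. the two
# eigenvalues of the reduced matrix M (illustrative parameters).
NE, NI, p, w, g = 800, 200, 0.1, 0.1, 5.0
N = NE + NI
rng = np.random.default_rng(1)
W = np.zeros((N, N))
for i in range(N):            # each row: p*NE excitatory, p*NI inhibitory inputs
    W[i, rng.choice(NE, int(p * NE), replace=False)] = w
    W[i, NE + rng.choice(NI, int(p * NI), replace=False)] = -g * w

bulk = np.linalg.eigvals(W)   # eigenvalue cloud confined by the spectral radius
M = p * NE * w * np.array([[1.0, -NI / NE * g], [1.0, -NI / NE * g]])
print(np.linalg.eigvals(M))   # 0 and p*NE*w*(1 - gamma*g), cf. Figure 3C
```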

2.6 Fourier back transformation

Although the cross spectral matrices (8) and (11) for both dynamics look similar in the Fourier domain, the procedures for back transformation differ in detail. In both cases, the Fourier integral along the real $\omega$-axis can be extended to a closed integration contour by a semi-circle with infinite radius centered at the origin in the appropriately chosen half-plane. The half-plane needs to be selected such that the contribution of the integration along the semi-circle vanishes. By employing the residue theorem (Bronstein et al., 1999) the integral can be replaced by a sum over the residues of the poles encircled by the contour. For a general covariance matrix we only need to calculate $c(s)$ for time lags $s \ge 0$, as for $s < 0$ the solution can be found by the symmetry $c(-s) = c^T(s)$.

For input noise it is possible to close the contour in the upper half-plane, where the integrand vanishes for all $s \ge 0$, as the cross spectrum decays as $\omega^{-2}$. This can be seen from (8): for $|\omega| \to \infty$ the propagator $P(\omega) = (\mathbb{1} - H_d(\omega) W)^{-1}$ approaches the identity, because the highest power of $\omega$ in its denominator $\det(\mathbb{1} - H_d(\omega) W)$ equals the dimensionality of $W$ and exceeds the highest power in the numerator, while the remaining kernel factors contribute $H_d(\omega) H_d^\dagger(\omega) \propto \omega^{-2}$. So $C(\omega)$ is proportional to $\omega^{-2}$ and vanishes on the semi-circle for large $|\omega|$.

For the case of output noise, (11) can be obtained from the cross spectrum for input noise (8) by multiplication with $\big(H_d(\omega) H_d^\dagger(\omega)\big)^{-1} \propto \omega^{2}$ for large $|\omega|$. The multiplication with this factor changes the asymptotic behavior of the integrand, which therefore contains terms converging to a constant value and terms decaying like $\omega^{-1}$ for $|\omega| \to \infty$. These terms result in non-vanishing integrals over the semicircle in the upper half-plane and have to be considered separately. To this end we use $P = \mathbb{1} + P H_d W$ to rewrite (11) as

$C(\omega) = P D P^\dagger = D + P H_d W\, D\, W^T H_d^\dagger P^\dagger + P H_d W\, D + D\, W^T H_d^\dagger P^\dagger \qquad (16)$

and find the constant term $D$, which turns into a $\delta$-function in the time domain. The first non-constant term in (16) decays like $\omega^{-2}$ and can be transformed just as for input noise by closing the contour in the upper half-plane. The second and third non-constant terms are the transposed complex conjugates of each other, because of the dependence of the latter on $e^{i\omega d}$ instead of $e^{-i\omega d}$, and require a special consideration. Multiplied by $e^{i\omega s}$ under the Fourier integral, the first of these terms vanishes faster than $\omega^{-1}$ for large $|\omega|$ in the upper half-plane for $s > d$ and in the lower half-plane for $s < d$. For the second term the half-planes are interchanged, with the boundary at $s = -d$. The application of the residue theorem requires closing the integration contour in the half-plane where the integral over the semi-circle vanishes. For $0 \le s < d$, and in the general case of a stable dynamics, all poles of the first term lie in the upper half-plane and hence have no contribution to $c(s)$ for $0 \le s < d$; the same holds for the second term for $s > -d$. These terms correspond to the jumps of the covariance after one delay, caused by the effect of the sending neuron arriving at the other neurons in the system after one synaptic delay. They correspond to the response of the system to the impulse of the sending neuron – hence we call them “echo terms” in the following (Helias et al., 2013). The presence of such discontinuous jumps at time points $s = d$ and $s = -d$ in the case of output noise is reflected in the convolution of the covariance with the kernel in the time domain in (37). For input noise the absence of discontinuities can be inferred from the absence of such terms in (33), where the derivative of the correlation function is equal to a sum of finite terms. The first non-constant term in (16) corresponds to the covariance evoked by fluctuations propagating through the system originating from the same neuron, and we call it the “correlated input term”. In the system with input noise a similar separation into effective echo and correlated input terms can be performed. We obtain the correlated input term as the covariance in an auxiliary population without outgoing connections, and the echo terms as the difference between the full covariance between neurons within the network and the correlated input term.
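The contour argument can be checked numerically in the simplest setting: for a single uncoupled unit ($W = 0$) with input noise of unit intensity, the only pole is the kernel pole at $\omega = i/\tau$, and its residue yields $c(s) = e^{-|s|/\tau}/(2\tau)$. The sketch below confirms this with a discrete inverse Fourier transform.

```python
import numpy as np

# Scalar check of the back transformation: C(omega) = 1/(1 + (omega*tau)^2)
# has one pole at omega = i/tau in the upper half-plane; the residue theorem
# gives c(s) = exp(-|s|/tau)/(2*tau).
tau, dt, n = 10.0, 0.01, 2**18
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
C = 1.0 / (1.0 + (omega * tau) ** 2)
c = np.fft.ifft(C).real / dt               # approximates (1/2pi) int C e^{iws} dw
s = np.array([0.0, 5.0, 10.0, 20.0])
print(c[(s / dt).astype(int)])             # numerical values at lags s
print(np.exp(-s / tau) / (2 * tau))        # residue-theorem prediction
```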

2.7 Explicit expression for the population averaged cross covariance in the time domain

We obtain the population averaged cross spectrum in a recurrent random network of Ornstein-Uhlenbeck processes with input noise by inserting the averaged connectivity matrix (14) into (8). The explicit expression for the covariance function follows by taking into account all (both) eigenvalues of $M$, with values $0$ and $\lambda = p N_E w (1 - \gamma g)$. The detailed derivation of the results presented in this section is documented in “8.1 Calculation of the population averaged cross covariance in time domain”. The expression for the cross spectrum (8) takes the form

(17)

where we introduced a short-hand notation. Sorting the terms by their dependence on the time lag, introducing functions for this dependence, and the corresponding functions in the time domain, we obtain the covariance in the time domain.

The resulting expression is valid for arbitrary $\bar{D}$. In the simulations presented in this article we consider identical marginal input statistics for all neurons. In this case the averaged activities for excitatory and inhibitory neurons are the same, so we can insert the special form of $\bar{D}$ given in (15), which results in

(18)

The time-dependent functions are the same in both cases. Using the residue theorem, for $s > 0$ they can be expressed as a sum over the poles given by (12) and the pole of the kernel at $z = -1/\tau$, so that their explicit forms follow as

(19)

The corresponding expression for the cross spectrum for output noise is obtained by multiplying (17) with the inverse kernel factor $\big(H_d(\omega) H_d^\dagger(\omega)\big)^{-1}$,

(20)

which, after Fourier transformation, provides the expression for the covariance in the time domain for $s \ge 0$,

(21)

As in (18), the first line holds for arbitrary $\bar{D}$, and the second for the $\bar{D}$ given by (15), valid if the firing rates are homogeneous. The short-hand notation is defined as before, and the echo function introduced in

(22)

vanishes for $|s| < d$. All matrix elements of the first term in (21) are identical; therefore all elements of the covariance matrix are equal for $|s| < d$. Both rows of the matrix in front of the echo term are identical, so for $s > 0$ the off-diagonal element $\bar{c}_{EI}$ coincides with $\bar{c}_{II}$ and $\bar{c}_{IE}$ with $\bar{c}_{EE}$, and vice versa for $s < 0$.

As an illustration we show the functions for one set of parameters in Figure 4. The left panels (A,C) correspond to contributions to the covariance caused by common input to a pair of neurons, the right panels (B,D) to terms due to the effect of one of the neurons' activities on the remaining network (echo terms). The upper panels (A,B) belong to the model with input noise, the lower panels (C,D) to the one with output noise.

For the rate dynamics with output noise, the common-input term in (21) (shown in Figure 4C) is symmetric, while the echo term (shown in Figure 4D) describes the echo part of the covariance. For the rate dynamics with input noise (18), the term caused by common input (shown in Figure 4A) is hence also symmetric; the echo terms (shown in Figure 4B) have their peaks outside the origin. The second echo term in (18) is equal to the first one transposed and with opposite sign of the time argument, so we show both together in one panel in Figure 4B. Note that for input noise the common-input term also describes the autocovariance, which corresponds to the term with the $\delta$-function in the case of output noise.

The solution (18) is visualized in Figure 6, the solution (21) in Figure 7, and the decomposition into common input and echo parts is also shown and compared to direct simulations in Figure 8.

Figure 4: Functions introduced in (19) and (22) for the decomposition of the covariance. In panel B one of the two echo functions is shown in gray and the other in black. The two functions are continuations of each other, joined at one synaptic delay. Both functions appear in the echo term for input noise. The function in panel D, describing the corresponding echo term in the case of output noise, is shifted to be aligned with the functions in panel B to facilitate the comparison of panels B and D. Parameters are identical in all panels.

3 Binary neurons

In the following sections we study, in turn, the binary neuron model, the Hawkes model and the leaky integrate-and-fire model and show how they can be mapped to one of the two OUPs; either the one with input or the one with output noise, so that the explicit solutions (18) and (21) for the covariances derived in the previous section can be applied. In the present section, we start with the binary neuron model (Ginzburg & Sompolinsky, 1994; Buice et al., 2009).

Following Ginzburg & Sompolinsky (1994), the state of the network of $N$ binary model neurons is described by a binary vector $\mathbf{n} \in \{0, 1\}^N$ and each neuron is updated at independently drawn time points with exponentially distributed intervals of mean duration $\tau$. This stochastic update constitutes a source of noise in the system. Given that the $i$-th neuron is updated, the probability to end in the up-state ($n_i = 1$) is determined by the gain function $F_i(\mathbf{n})$, which depends on the activity of all other neurons. The probability to end in the down-state ($n_i = 0$) is $1 - F_i(\mathbf{n})$. Here we implemented the binary model in the NEST simulator (Gewaltig & Diesmann, 2007) as described in “8.3 Implementation of binary neurons in a spiking simulator code”. Such systems have been considered earlier (Ginzburg & Sompolinsky, 1994; Buice et al., 2009), and here we follow the notation employed in the latter work. In the following we collect results that have been derived in these works and refer the reader to these publications for the details of the derivations. The zero-time lag covariance function is defined as $c_{ij} = \langle \delta n_i\, \delta n_j \rangle$ with $\delta n = n - m$, with the expectation value $\langle \cdot \rangle$ taken over different realizations of the stochastic dynamics. Here $m$ is the vector of mean activities $m_i = \langle n_i \rangle$. The covariance fulfills the differential equation

$\tau \frac{d}{dt} c_{ij} = -2\, c_{ij} + \langle \delta n_i\, F_j(\mathbf{n}) \rangle + \langle F_i(\mathbf{n})\, \delta n_j \rangle.$

In the stationary state, the correlation therefore fulfills

$c_{ij} = \frac{1}{2} \Big( \langle \delta n_i\, F_j(\mathbf{n}) \rangle + \langle F_i(\mathbf{n})\, \delta n_j \rangle \Big). \qquad (23)$

The time lagged covariance $c_{ij}(t+s, t) = \langle \delta n_i(t+s)\, \delta n_j(t) \rangle$ fulfills for $s > 0$ the differential equation

$\tau \frac{\partial}{\partial s} c_{ij}(t+s, t) = -c_{ij}(t+s, t) + \langle F_i(\mathbf{n}(t+s))\, \delta n_j(t) \rangle. \qquad (24)$

This equation is also true for $i = j$, the autocovariance. The term $\langle F_i(\mathbf{n}(t+s))\, \delta n_j(t) \rangle$ has a simple interpretation: it measures the influence of a fluctuation of neuron $j$ at time $t$ around its mean value on the gain of neuron $i$ at time $t + s$ (Ginzburg & Sompolinsky, 1994). We now assume a particular form for the coupling between neurons,

$F_i(\mathbf{n}) = S(\mathbf{w}_i \cdot \mathbf{n}), \qquad (25)$

where $\mathbf{w}_i$ is the vector of incoming synaptic weights into neuron $i$ and $S$ is a non-linear gain function. Assuming that the fluctuations of the total input $u_i = \mathbf{w}_i \cdot \mathbf{n}$ into the $i$-th neuron are sufficiently small to allow a linearization of the gain function $S$, we obtain the Taylor expansion

$F_i(\mathbf{n}) \simeq S(\mu_i) + S'(\mu_i)\, (u_i - \mu_i),$

where

$S'(\mu_i) = \frac{dS}{du} \bigg|_{u = \mu_i}, \qquad \mu_i = \mathbf{w}_i \cdot \mathbf{m}, \qquad (26)$

is the slope of the gain function at the point of mean input.

Figure 5: Alternative linearizations of the binary neuron model. The black curve represents the non-linear gain function $S(u)$. The dashed gray line is its tangent at the mean input value (denoted by the diagonal cross). The solid line has the slope averaged over the distribution of the fluctuating input (27). This distribution, estimated from direct simulation, is presented by black dots; the corresponding theoretical prediction of a normal distribution (27) is shown as the light gray curve.

Up to this point the treatment of the system is identical to the work of Ginzburg & Sompolinsky (1994). Now we present an alternative approach for the linearization which takes into account the effect of fluctuations in the input. For sufficiently asynchronous network states, the fluctuations in the input $u_i$ to neuron $i$ can be approximated by a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$. In the following we consider a homogeneous random network with fixed in-degree as described in “2.5 Population-averaged covariances”. As each neuron receives the same number $K_E = p N_E$ of excitatory and $K_I = p N_I$ of inhibitory synapses, the marginal statistics of the summed input to each neuron are identical. The mean input to a neuron then is $\mu = (K_E - g K_I)\, w\, \bar{m}$, where $\bar{m}$ is the mean activity of a neuron in the network. If correlations are small, the variance of this input signal distribution can be approximated as the sum of the variances of the individual contributions from the incoming signals, resulting in $\sigma^2 = (K_E + g^2 K_I)\, w^2\, \bar{m}(1 - \bar{m})$, where we used the fact that the variance of a binary variable with mean $\bar{m}$ is $\bar{m}(1 - \bar{m})$. This results from a direct calculation: since $n_i \in \{0, 1\}$ we have $n_i^2 = n_i$, so that the variance is $\langle n_i^2 \rangle - \langle n_i \rangle^2 = \bar{m} - \bar{m}^2 = \bar{m}(1 - \bar{m})$. Averaging the slope of the gain function over the distribution of the input variable results in the averaged slope

$\bar{S}'(\mu, \sigma) = \int S'(u)\, \mathcal{N}(\mu, \sigma^2)(u)\, du. \qquad (27)$

The two alternative methods of linearization of $S$ are illustrated in Figure 5. In the given example, the linearization procedure taking into account the fluctuations of the input signal results in a smaller effective slope than taking the slope at the mean input, which here lies near the maximum of $S'$. The slope averaged over this distribution fits simulation results better than the slope calculated at the mean of $u$, as shown in Figure 6.
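A numerical version of the fluctuation-corrected slope (27) is straightforward. The sketch below uses an assumed sigmoidal gain function $S$ (its shape and parameters are illustrative, not the paper's) and Gauss-Hermite quadrature for the Gaussian average.

```python
import numpy as np

def S(u, theta=2.0, beta=1.0):
    """Illustrative sigmoidal gain function with threshold theta."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def Sprime(u, eps=1e-6):
    return (S(u + eps) - S(u - eps)) / (2.0 * eps)

def averaged_slope(mu, sigma, n=200):
    """Eq. (27): average of S' over a Gaussian N(mu, sigma^2) of the input."""
    x, wts = np.polynomial.hermite_e.hermegauss(n)   # weight exp(-x^2/2)
    return np.sum(wts * Sprime(mu + sigma * x)) / np.sqrt(2.0 * np.pi)

print(Sprime(2.0), averaged_slope(2.0, 2.0))  # tangent slope vs. averaged slope
```

For a mean input at the steepest point of $S$, the averaged slope is smaller than the tangent slope, as in Figure 5.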

The finite slope of the non-linear gain function can be understood as resulting from the combination of a hard threshold with an intrinsic local source of noise, whose inverse strength determines the slope parameter of $S$ (Ginzburg & Sompolinsky, 1994). In this sense, the network model contains two sources of noise: the explicit local noise, quantified by the slope of $S$, and the fluctuating synaptic input, interpreted as self-generated noise on the network level and quantified by $\sigma$. Even in the absence of local noise, i.e. for a hard threshold gain function, the above mentioned linearization is applicable and yields a finite effective slope (27). In the latter case the resulting effective synaptic weight is independent of the original synapse strength (Grytskyy et al., 2013).
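The update rule itself is easy to simulate directly. Since all neurons share the update rate $1/\tau$, the following Gillespie-style sketch draws the waiting time to the next update from an exponential distribution and updates one uniformly chosen neuron; connectivity and gain function are illustrative assumptions.

```python
import numpy as np

# Asynchronous binary dynamics: each neuron is updated at Poisson points
# with mean interval tau; on update, neuron i goes to the up state with
# probability F_i(n) = S(w_i . n). All parameters are illustrative.
N, tau, T, p, w, g = 200, 10.0, 2000.0, 0.2, 0.3, 5.0
rng = np.random.default_rng(2)
NE = int(0.8 * N)
col = np.where(np.arange(N) < NE, w, -g * w)
W = (rng.random((N, N)) < p) * col[np.newaxis, :]

def S(u, theta=2.0, beta=1.0):                # smooth gain; a hard threshold
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))  # is the limit beta -> inf

n = rng.integers(0, 2, N).astype(float)
t, trace = 0.0, []
while t < T:
    t += rng.exponential(tau / N)             # waiting time to the next update
    i = rng.integers(N)                       # uniformly chosen neuron
    n[i] = float(rng.random() < S(W[i] @ n))
    trace.append((t, n.mean()))               # population activity over time
```

Mean activities and covariance functions estimated from such runs can then be compared with the linearized theory.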

We now extend the classical treatment of covariances in binary networks (Ginzburg & Sompolinsky, 1994) by synaptic conduction delays. In (25) $F_i$ must therefore be understood as a functional acting on the activity $\mathbf{n}(t - d)$ one delay in the past, so that also synaptic connections with time delay $d$ can be realized. We define an effective weight matrix $\tilde{W}$ absorbing the gain factor, $\tilde{w}_{ij} = \alpha\, w_{ij}$, where the slope $\alpha$ is either $S'(\mu)$ or $\bar{S}'(\mu, \sigma)$ depending on the linearization procedure, and expand the right hand side of (24) to obtain

$\tau \frac{\partial}{\partial s} c_{ij}(t+s, t) = -c_{ij}(t+s, t) + \sum_k \tilde{w}_{ik}\, c_{kj}(t+s-d, t).$

Thus the cross-covariance fulfills the matrix delay differential equation

$\tau \frac{\partial}{\partial s} c(t+s, t) = -c(t+s, t) + \tilde{W}\, c(t+s-d, t). \qquad (28)$

This differential equation is valid for $s > 0$. For the stationary solution, the differential equation only depends on the relative timing,

$\tau \frac{d}{ds} c(s) = -c(s) + \tilde{W}\, c(s - d) \quad \text{for } s > 0. \qquad (29)$

The same linearization applied to (23) results in the boundary condition for the solution of the previous equation

$c(0) = \frac{1}{2} \Big( \tilde{W}\, c(-d) + \big( \tilde{W}\, c(-d) \big)^T \Big), \qquad (30)$

or, if we split $c$ into its diagonal part (the autocovariances) and its off-diagonal part,

(31)
In the following section we use this representation to demonstrate the equivalence of the covariance structure of binary networks to the solution for Ornstein-Uhlenbeck processes with input noise.

3.1 Equivalence of binary neurons and Ornstein-Uhlenbeck processes

In the following subsection we show that the same equations (29) and (31) derived for binary neurons also hold for the Ornstein-Uhlenbeck process (OUP) with input noise. In doing so we also extend the existing framework of Ornstein-Uhlenbeck processes (Risken, 1996) to synaptic conduction delays $d$. A network of such processes is described by

$\tau \frac{dy}{dt}(t) = -y(t) + W\, y(t - d) + x(t), \qquad (32)$

where $x$ is a vector of pairwise uncorrelated white noise with $\langle x_i(t) \rangle = 0$ and $\langle x_i(t+s)\, x_j(t) \rangle = \delta_{ij}\, c_i\, \delta(s)$. With the help of the Green's function $u(t)$ satisfying $(\tau \frac{d}{dt} + 1)\, u(t) = \delta(t)$, namely $u(t) = \frac{1}{\tau}\, e^{-t/\tau}\, \theta(t)$, we obtain the solution of equation (32) as

$y = u * (W y_d + x).$

The equation for the fluctuations $\delta y = y - \langle y \rangle$ around the expectation value,

$\delta y = u * (W\, \delta y_d + x),$

coincides with the noisy rate model with input noise (4) with delay $d$ and convolution kernel $h = u$. In the next step we investigate the covariance matrix $c(t+s, t) = \langle \delta y(t+s)\, \delta y^T(t) \rangle$ to show for which choice of parameters the covariance matrices for the binary model and the OUP with input noise coincide. To this end we derive the differential equation with respect to the time lag $s$ for positive lags,

$\tau \frac{\partial}{\partial s} c(t+s, t) = -c(t+s, t) + W\, c(t+s-d, t), \qquad (33)$

where we used $\langle x(t+s)\, \delta y^T(t) \rangle = 0$ for $s > 0$, because the noise is realized independently for each time step and the system is causal. Eq. (33) is identical to the differential equation (28) satisfied by the covariance matrix for binary neurons (Ginzburg & Sompolinsky, 1994). To determine the initial condition of (33) we need to take the limit $s \searrow 0$. This initial condition can be obtained as the stationary solution of the following differential equation

$\tau \frac{d}{dt} c(t, t) = -2\, c(t, t) + W\, c(t-d, t) + c(t, t-d)\, W^T + \frac{D}{\tau}.$

Here we used that $\langle x(t+s)\, \delta y^T(t) \rangle$ vanishes for $s > 0$ due to independent noise realizations and causality, and $D$ with $D_{ij} = \delta_{ij}\, c_i$ denotes the matrix of noise intensities. In the stationary state, $c(t+s, t)$ only depends on the time lag $s$ and is independent of the first time argument $t$, which, with the symmetry $c(-s) = c^T(s)$, yields the additional condition for the solution of (33)

$c(0) = \frac{1}{2} \left( W\, c(-d) + \big( W\, c(-d) \big)^T + \frac{D}{\tau} \right),$

or, if $c$ is split in its diagonal and off-diagonal parts, respectively,