# Joint Statistics of Strongly Correlated Neurons via Dimensional Reduction

###### Abstract

The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.

## I Introduction

Electric activity generated by different neurons in the brain is often strongly correlated Rosenbaum et al. (2014); Lampl et al. (1999); Okun and Lampl (2008); Poulet and Petersen (2008). These correlations arise as a result of shared input, or input components that are themselves correlated. Correlated activity can be a consequence of shared background fluctuations Arieli et al. (1996), but strong correlations might also reflect synchronous action potentials at the input, indicating temporal coding. However, a clear-cut dichotomy between decorrelated and synchronized dynamics does not exist Staude et al. (2008); Kumar et al. (2010); Doiron et al. (2016). Rather, one should consider these two extremes as two faces of the same coin. Recent high-precision measurements reported very low average correlations suggesting a mechanism of active decorrelation in cortical networks Ecker et al. (2010); Renart et al. (2010); Pernice et al. (2011); Helias et al. (2014). At the same time it was observed by intracellular measurements that nearby neurons receive very similar input Lampl et al. (1999); Okun and Lampl (2008); Poulet and Petersen (2008).

Several studies of pair correlations in neural networks relate structure and dynamics assuming a fluctuating dynamics about a fixed point that is characterized by asynchronous (A) population activity and irregular (I) spike trains Brunel (2000); Pernice et al. (2011, 2012); Trousdale et al. (2012). They employ essentially linear perturbation theory Brunel and Hakim (1999); Lindner and Schimansky-Geier (2001) to compute correlation functions. Nevertheless, some of these works push the limits of existing methods. First of all, a qualitatively different AI state was observed in simulations of spiking neural networks with stronger couplings Ostojic (2014). Secondly, a partial extension of the theory to the strongly correlated regime was based on numerically determined spike response functions Pernice et al. (2012). Thirdly, pair correlation studies were generalized to higher-order correlations in recurrent networks by accounting for certain network connectivity motifs Jovanović et al. (2015); Jovanović and Rotter (2016). These studies exploit and extend existing methods, but they also demonstrate the need for a new approach.

The main assumption of perturbation theory is that the common drive of the two neurons is weak. Yet, this criterion depends on the background state and strength of interactions in a given network. We showed previously that low background rates, for example, may lead to a breakdown of perturbation theory even for low correlation coefficients Deniz and Rotter (2016). This makes non-perturbative methods indispensable for the modeling and analysis of correlations, as low spike rates are typical in experiments Shadlen and Newsome (1998). All in all, a proper treatment of strong correlations must take the non-linear correlation transfer into account, which appears to play an important role in sensory processing Lyamzin et al. (2015). However, a unified and transparent framework to calculate correlations of all strengths for neuron models of the integrate-and-fire type does not exist.

The pitfall of previously suggested non-perturbative methods is their immense computational cost due to the high dimensionality of the discretized problem Rosenbaum et al. (2012). This makes computations practically impossible for a large range of parameters. For instance, the state space of the pair correlation problem scales as $N^2$, where $N$ is the number of grid points used to approximate single neuron membrane potentials. Although limiting the grid size is possible Helias et al. (2010), a too coarse voltage grid fails to properly reflect the statistics of leaky integrate-and-fire neurons with Poisson input. The precision issue gets even more severe for correlations of higher order, which are needed to parametrize the joint statistics of multiple neurons. With our methods, in contrast, we observed that joint membrane potential distributions of even strongly correlated neurons can be reduced to a small set of principal vectors via singular value decomposition (SVD). This suggests that strong correlations can be computed with high precision resorting to subspaces of relatively low dimension. In this work, specifically, we devise an SVD-based method that allows us to compute spike correlation functions of two leaky integrate-and-fire neurons with high accuracy.
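The reduction idea can be illustrated with a generic sketch: truncating the SVD of a 2D joint density and keeping only the leading singular triplets. The density below is a synthetic stand-in (a correlated Gaussian on a grid), not the LIF solution discussed in the text; grid size, correlation value and truncation rank are illustrative assumptions.

```python
import numpy as np

# Low-rank truncation of a 2D joint density via SVD.
n = 64
u = np.linspace(-3.0, 3.0, n)
U1, U2 = np.meshgrid(u, u, indexing="ij")
c = 0.8  # correlation of the stand-in density (illustrative)
P = np.exp(-(U1**2 - 2 * c * U1 * U2 + U2**2) / (2 * (1 - c**2)))
P /= P.sum()  # normalize to a probability distribution on the grid

# Keep only the k leading singular triplets.
U, s, Vt = np.linalg.svd(P)
k = 8
P_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

err = np.abs(P - P_k).sum()  # L1 truncation error
print(f"rank-{k} truncation error (L1): {err:.3e}")
```

Even a strongly correlated density is captured by a handful of singular triplets, which is the observation the method below exploits.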

Similar problems were studied analytically for arbitrary input correlations of the stochastic dynamics of neural oscillators Abouzeid and Ermentrout (2011), and for level-crossings of correlated Gaussian processes Tchumatchenko et al. (2010). Related numerical work considered strong input correlations for integrate-and-fire neurons receiving white noise input Vilela and Lindner (2009) or shot noise input with nontrivial temporal correlations Schwalger et al. (2015); Voronenko et al. (2015). The problem of analytically calculating the stationary distributions conditional on a spike from the exit current at the threshold is also discussed in the case of colored noise Schwalger et al. (2015). A method to deal with very strong input correlations in a specific input model (correlated Poisson processes) was suggested by Schultze-Kraft et al. (2013). Our study further suggests a novel technique, extending Helias et al. (2010), to solve 2D jump equations for leaky integrate-and-fire neurons by mapping them to a Markov chain. This method provides an accurate estimate of the steady state joint distribution of membrane potentials.

We test our method for large input correlations and demonstrate its power for different types of correlation asymmetries by comparing our semi-analytical approach to correlations extracted from simulated neuronal spike trains. We look at the full range of input correlations and provide an example of a non-linear correlation transfer function. Our method can be extended to more general integrate-and-fire models, higher-order input correlations and third order output correlations. However, we have to defer a detailed analysis of such cases to future work.

## II Models & Methods

### II.1 Two neurons with shared Poisson input

The leaky integrate-and-fire neuron model with postsynaptic potentials of finite amplitudes was studied previously in Helias et al. (2010). The stochastic equation that describes the membrane potential dynamics of one particular neuron is given as

(1) $\tau\,\dot{V}(t) = -V(t) + \tau w_E\, s_E(t) - \tau w_I\, s_I(t)$

where $w_E$ and $w_I$ represent the amplitudes of individual EPSPs and IPSPs, respectively, and $s_E(t)$ and $s_I(t)$ are the spike trains of excitatory and inhibitory presynaptic neurons, with each of their spikes represented by a Dirac delta function

(2) $s_E(t) = \sum_k \delta(t - t_k^E)$

An analogous definition holds for inhibitory neurons. In both cases, if the membrane potential reaches the firing threshold, $V_\theta$, a spike is elicited and the voltage is reset to its resting value at $V_r$.
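A minimal direct simulation of this single-neuron dynamics can be sketched as follows, using an Euler scheme with Poisson-distributed jump counts per time step. All parameter values (time constant, amplitudes, rates, threshold) are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Single LIF neuron with finite-amplitude Poisson jumps,
# threshold-and-reset as described above.
rng = np.random.default_rng(1)

tau, dt = 0.02, 1e-4            # membrane time constant, time step (s)
w_e, w_i = 0.1, 0.1             # EPSP / IPSP amplitudes (mV)
lam_e, lam_i = 10000.0, 1000.0  # total input rates (1/s)
v_th, v_reset = 20.0, 0.0       # threshold and reset (mV)

T = 10.0
n_steps = int(T / dt)
v, spikes = 0.0, 0
for _ in range(n_steps):
    # exponential decay toward rest, then Poisson jump increments
    v *= np.exp(-dt / tau)
    v += w_e * rng.poisson(lam_e * dt) - w_i * rng.poisson(lam_i * dt)
    if v >= v_th:
        v = v_reset
        spikes += 1

print(f"output rate ~ {spikes / T:.1f} Hz")
```

With these values the free membrane potential has mean $\tau(w_E \lambda_E - w_I \lambda_I) = 18$ mV and fluctuates below the 20 mV threshold, so firing is fluctuation-driven.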

In order to study correlations between the spike trains of two neurons we look at two coupled stochastic equations, describing the membrane potentials of two neurons that share a certain fraction of their excitatory and inhibitory input spikes

(3) $\tau\,\dot{V}_1(t) = -V_1(t) + \tau w_E \left[ s_{E_1}(t) + s_{E_c}(t) \right] - \tau w_I \left[ s_{I_1}(t) + s_{I_c}(t) \right]$

(4) $\tau\,\dot{V}_2(t) = -V_2(t) + \tau w_E \left[ s_{E_2}(t) + s_{E_c}(t) \right] - \tau w_I \left[ s_{I_2}(t) + s_{I_c}(t) \right]$

where the shared excitatory and inhibitory input spike trains are denoted as $s_{E_c}(t)$ and $s_{I_c}(t)$, respectively. Comparing the parameters of the jump model to shared white noise input, we implicitly specified the firing rates of 6 independent Poisson processes (Fig. 1)

(5) $\mu = \tau \left( w_E \lambda_E - w_I \lambda_I \right)$

(6) $\sigma^2 = \tau \left( w_E^2 \lambda_E + w_I^2 \lambda_I \right)$

For notational convenience, the shared input rates $\lambda_{E_c}$ and $\lambda_{I_c}$ are computed from a shared Wiener process with zero mean. Setting $\mu_c = 0$ and $\sigma_c^2 = c\,\sigma^2$, the rates for each independent excitatory and inhibitory process are given as

(7) $\lambda_{E_1} = \lambda_{E_2} = \lambda_E - \lambda_{E_c}$

(8) $\lambda_{I_1} = \lambda_{I_2} = \lambda_I - \lambda_{I_c}$

Here we note that some combinations of $(\mu, \sigma, c)$ do not correspond to any combination of positive Poisson rates, as they must satisfy

(9) $\lambda_{I_c} \le \lambda_I$

to guarantee a positive rate for the private inhibitory processes.
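The rate bookkeeping above can be sketched in code. The total rates follow from the standard diffusion-matching relations of Eqs. 5 and 6; the split into shared and private parts assumes, as described in the text, that the shared component has zero mean and carries the fraction $c$ of the variance. The explicit closed-form expressions are our own derivation under these assumptions.

```python
# Decompose a target (mu, sigma^2, c) into the rates of the
# 6 independent Poisson processes, raising an error when the
# positivity constraint of Eq. 9 is violated.

def poisson_rates(mu, sigma2, c, w_e, w_i, tau):
    # invert mu = tau*(w_e*lam_e - w_i*lam_i),
    #        sigma2 = tau*(w_e**2*lam_e + w_i**2*lam_i)
    lam_e = (sigma2 + w_i * mu) / (tau * w_e * (w_e + w_i))
    lam_i = (sigma2 - w_e * mu) / (tau * w_i * (w_e + w_i))
    # zero-mean shared component carrying variance c*sigma2
    lam_ec = c * sigma2 / (tau * w_e * (w_e + w_i))
    lam_ic = c * sigma2 / (tau * w_i * (w_e + w_i))
    rates = {"E_private": lam_e - lam_ec, "I_private": lam_i - lam_ic,
             "E_shared": lam_ec, "I_shared": lam_ic}
    if any(r < 0 for r in rates.values()):
        raise ValueError("parameter combination needs negative rates")
    return rates

r = poisson_rates(mu=15.0, sigma2=4.0, c=0.5, w_e=0.1, w_i=0.1, tau=0.02)
print(r)
```

One can verify that the resulting totals reproduce the target mean and variance, and that the shared pair alone has zero mean and variance $c\,\sigma^2$.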

### II.2 Discretized Markov operators for the LIF model

In this section we summarize the discrete approximation to the dynamics of a LIF neuron, as developed in Helias et al. (2010). The coarse graining of the membrane potential is given by the map

(10) $V \mapsto i(V) = \left\lfloor (V - u_{\min})/\Delta u \right\rfloor$

The probability density function of the membrane potential then becomes a vector satisfying

(11) $p_i(t) = \Pr\{ V(t) \in [u_i, u_{i+1}) \}$

with $\sum_i p_i(t) = 1$. As we impose a cut-off lower boundary $u_{\min}$ of the voltage scale, the dimension of the discrete state space is given as $N = (V_\theta - u_{\min})/\Delta u$.

The temporal evolution of the membrane potential distribution is now described in terms of a Markov process. The Markov propagator can be expressed as a composition of three operators: a decay operator $D$ describing leaky integration, a jump operator $J$ accounting for the action of synaptic inputs, and a threshold-and-reset operator $T$ that implements spike generation upon threshold crossing. The discrete decay operator is derived from its continuous counterpart, the exponential relaxation $V(t + \Delta t) = V(t)\, e^{-\Delta t/\tau}$ of the membrane potential toward rest. This definition leads to the following decay matrix

(12) $(D)_{ij} = \delta_{i,\, i(u_j e^{-\Delta t/\tau})}$

The jump distribution, which underlies the jump operator $J$, can be derived from the count distribution of excitatory and inhibitory synaptic events in a small time interval $\Delta t$. Assuming that they arrive with Poisson statistics (maximum entropy) at rates $\lambda_E$ and $\lambda_I$, respectively, we obtain

(13) $P(k_E, k_I) = e^{-\mu_E} \frac{\mu_E^{k_E}}{k_E!}\; e^{-\mu_I} \frac{\mu_I^{k_I}}{k_I!}$

where $\mu_E = \lambda_E \Delta t$ and $\mu_I = \lambda_I \Delta t$ are the corresponding expected event counts. Dummy indices $k_E$ and $k_I$ correspond to excitatory and inhibitory counts. The jump distribution of the membrane potential is given by

(14) $p_{\mathrm{jump}}(u) = \sum_{k_E, k_I} P(k_E, k_I)\, \delta(u - k_E w_E + k_I w_I)$

The jump operator $J$ is then derived by discretizing this distribution on the voltage grid, with a jump matrix given by

(15) $(J)_{ij} = \sum_{\substack{k_E, k_I \ge 0 \\ i = j + k_E n_E - k_I n_I}} P(k_E, k_I)$, with $w_E = n_E \Delta u$ and $w_I = n_I \Delta u$

The threshold-and-reset operator $T$ takes all the states above threshold and inserts them at the reset potential. This is simply given as

(16) $T_{ij} = \delta_{ij}$ for $j < N$, and $T_{ij} = \delta_{i, i_r}$ for $j \ge N$, where $i_r$ is the index of the reset bin

Finally, the time evolution matrix of the corresponding Markov chain is given as a product of the three operators described above

(17) $M = T\, J\, D$

The discrete stationary distribution is

(18) $p^* = M\, p^*$

and the corresponding stationary rate is given as

(19) $\nu = \frac{1}{\Delta t} \sum_{i \ge N} (J D\, p^*)_i$
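The operator pipeline of this section can be sketched end to end in a few dozen lines. The construction below (grid, decay, jump, threshold-and-reset, fixed point by power iteration, rate as the threshold flux) follows the structure described above, but all parameter values, the truncation of the Poisson counts, and the operator ordering are our own illustrative assumptions.

```python
import numpy as np

tau, dt, du = 0.02, 1e-4, 0.1
v_min, v_th = -10.0, 20.0
n = int((v_th - v_min) / du)       # bins below threshold
r = int(-v_min / du)               # index of the reset bin (V = 0)
lam_e, lam_i = 10000.0, 1000.0     # Poisson input rates (1/s)
kmax = 8                           # truncation of the Poisson counts
m = n + kmax                       # extended grid allows overshoot

# decay operator: every bin moves toward 0 by the factor exp(-dt/tau)
shrink = np.exp(-dt / tau)
D = np.zeros((n, n))
for j in range(n):
    D[int(round(r + (j - r) * shrink)), j] = 1.0

def pois(mu):                       # truncated Poisson pmf
    p = np.empty(kmax + 1)
    p[0] = np.exp(-mu)
    for k in range(1, kmax + 1):
        p[k] = p[k - 1] * mu / k
    return p

# jump operator on the extended grid; jump size = one bin per event
pe, pi_ = pois(lam_e * dt), pois(lam_i * dt)
J = np.zeros((m, n))
for j in range(n):
    for ke, a in enumerate(pe):
        for ki, b in enumerate(pi_):
            J[min(max(j + ke - ki, 0), m - 1), j] += a * b

# threshold-and-reset: mass at or above bin n goes to the reset bin
T = np.zeros((n, m))
T[:n, :n] = np.eye(n)
T[r, n:] = 1.0

M = T @ J @ D                       # one-step propagator

p = np.ones(n) / n                  # stationary distribution (power iter.)
for _ in range(20000):
    p = M @ p
p /= p.sum()

rate = (J @ (D @ p))[n:].sum() / dt  # threshold flux per unit time
print(f"stationary rate ~ {rate:.1f} Hz")
```

Each operator is column-stochastic up to the tiny Poisson truncation, which keeps the chain well-behaved under iteration.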

### II.3 Spike train correlations

The cross-covariance function of two stationary spike trains $s_i(t)$ ($i = 1, 2$) is defined as

(20) $c_{12}(\tau) = \langle s_1(t + \tau)\, s_2(t) \rangle - \nu_1 \nu_2$

where $\nu_i = \langle s_i(t) \rangle$, with $\langle \cdot \rangle$ indicating the ensemble average. For the model studied here, stationary rates were computed using Eq. 19. The cross-covariance function can be expressed in terms of the conditional firing rate $\nu_{1|2}(\tau)$, the two stationary rates $\nu_1$ and $\nu_2$, and the amplitude $a$ of a $\delta$-function

(21) $c_{12}(\tau) = \nu_2 \left( \nu_{1|2}(\tau) - \nu_1 \right) + a\, \delta(\tau)$

Given the spiking neuron model considered here, Eq. 21 can be derived from the stationary joint membrane potential distribution $P(u_1, u_2)$. Conditional on a spike at time $t = 0$ in the first neuron, one has to find the instantaneous distribution of the membrane potential of the second neuron

(22) |

The probability of observing a consecutive spike is given by the flux over the threshold, with the normalization in Eq. 23. The conditional flux is computed for the Markov approximation as in Eq. 47. In general, we simply compute the exit rate at the threshold, distributed over time, by solving the following initial value problem

(23) $q(t + \Delta t) = M\, q(t)$

(24) $q(0) = q_{2|1}$

where $M$ is the discrete time evolution matrix of the neuron model given by Eq. 17. $M$ leads to a (forward) time evolution in steps of $\Delta t$. The instantaneous conditional rate in Eq. 21 is computed with a formula similar to Eq. 19

(25) $\nu_{1|2}(n \Delta t) = \frac{1}{\Delta t} \sum_{i \ge N} \left( J D\, q(n \Delta t) \right)_i$

Finally, the covariance function in Eq. 21 is derived by using $\nu_{1|2}(\tau)$ and the two stationary rates $\nu_1$ and $\nu_2$. Note that the method described here can be generalized to higher order correlations as well.
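The initial value problem can be demonstrated in a self-contained way with the single-neuron operators. Here the initial condition is, for illustration, the neuron's own post-spike state (a delta at the reset bin), so the iterated exit rate relaxes from zero back to the stationary rate. Operators are rebuilt compactly; all parameter values, the one-bin jump size, and the operator ordering are illustrative assumptions.

```python
import numpy as np

tau, dt, du = 0.02, 2e-4, 0.2
v_min, v_th = -10.0, 20.0
n = int((v_th - v_min) / du)      # bins below threshold
r = int(-v_min / du)              # reset bin (V = 0)
lam_e, lam_i, kmax = 5000.0, 500.0, 10
m = n + kmax                      # extended grid for overshoot

def pois(mu):
    p = np.empty(kmax + 1)
    p[0] = np.exp(-mu)
    for k in range(1, kmax + 1):
        p[k] = p[k - 1] * mu / k
    return p

shrink = np.exp(-dt / tau)
D = np.zeros((n, n))
for j in range(n):
    D[int(round(r + (j - r) * shrink)), j] = 1.0
pe, pi_ = pois(lam_e * dt), pois(lam_i * dt)
J = np.zeros((m, n))
for j in range(n):
    for ke, a in enumerate(pe):
        for ki, b in enumerate(pi_):
            J[min(max(j + ke - ki, 0), m - 1), j] += a * b
T = np.zeros((n, m))
T[:n, :n] = np.eye(n)
T[r, n:] = 1.0
M = T @ J @ D

p = np.ones(n) / n                # stationary reference
for _ in range(10000):
    p = M @ p
p /= p.sum()
nu_stat = (J @ (D @ p))[n:].sum() / dt

q = np.zeros(n)                   # IVP: q(0) = delta at the reset bin
q[r] = 1.0
nu_t = []
for _ in range(int(0.5 / dt)):    # 500 ms of conditional rate
    nu_t.append((J @ (D @ q))[n:].sum() / dt)
    q = M @ q
    q /= q.sum()                  # absorb tiny pmf truncation loss
print(f"nu(0) = {nu_t[0]:.2f} Hz -> nu(0.5 s) = {nu_t[-1]:.2f} Hz "
      f"(stationary {nu_stat:.2f} Hz)")
```

Right after the reset the exit rate is exactly zero (no jump can bridge the full distance to threshold in one step), and it recovers toward the stationary rate on the timescale of the interspike interval.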

### II.4 Correlated jump distribution in 2D

We now use a 2-dimensional state space describing the joint membrane potential evolution of two neurons. Correlated and uncorrelated Poisson jumps push the 2D membrane potential vector into three independent directions, $(1,0)$, $(0,1)$ and $(1,1)$, each allowing jumps in the positive (excitatory) and negative (inhibitory) direction. Hence, the jump distribution from an initial position to a new position in state space driven by independent Poisson processes is obtained from a 2D convolution

(26) $p_{\mathrm{jump}}^{2D}(\mathbf{u}) = \left( p_1 * p_2 * p_c \right)(\mathbf{u})$, with $p_1$, $p_2$ the private and $p_c$ the shared jump distributions

Inserting the expressions for the jump distributions in each direction, collecting all terms and using Eq. 13, we obtain an explicit series representation of the correlated jump distribution. This expression is also valid for more general input statistics models that rely on a decomposition into statistically independent parts Kuhn et al. (2003); Schultze-Kraft et al. (2013). Here we consider the shared Poisson input model as described by Eq. 4. Integrating the expression over the jump variable and inserting the mean event counts, the resulting expression is given in a compact form

We can simplify this expression by choosing a regular grid according to $w_E = n_E \Delta u$ and $w_I = n_I \Delta u$, for integers $n_E$ and $n_I$. In practice, however, the resulting sum will be truncated for a given tolerance. We will use the matrix form of the discretized operators in the following subsection, which is equivalent to the above expression.
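The convolution structure is easy to verify numerically. The sketch below builds a 2D distribution over joint jump counts from two private Poisson processes acting along the axes and one shared process acting along the diagonal; jump amplitudes of one grid bin and the specific rates are illustrative assumptions. The marginal along either axis must then be the Poisson distribution of the summed (private + shared) rate.

```python
import numpy as np

def poisson_pmf(m, kmax):
    pmf = np.empty(kmax + 1)
    pmf[0] = np.exp(-m)
    for k in range(1, kmax + 1):
        pmf[k] = pmf[k - 1] * m / k
    return pmf

kmax = 20
p1 = poisson_pmf(0.5, kmax)   # private jumps, neuron 1
p2 = poisson_pmf(0.5, kmax)   # private jumps, neuron 2
pc = poisson_pmf(0.3, kmax)   # shared jumps (diagonal direction)

# 2D distribution over joint jump numbers (j1, j2) = (k1+kc, k2+kc)
P = np.zeros((2 * kmax + 1, 2 * kmax + 1))
for kc, w in enumerate(pc):
    P[kc:kc + kmax + 1, kc:kc + kmax + 1] += w * np.outer(p1, p2)
P /= P.sum()                   # renormalize the pmf truncation

# marginal of neuron 1 = convolution of private and shared pmfs
m1 = P.sum(axis=1)
ref = np.convolve(p1, pc)[: len(m1)]
print(np.abs(m1 - ref).max())
```

The shared component is what makes the 2D distribution non-separable: its mass sits on the diagonal, coupling the two coordinates.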

### II.5 Linear operators for correlated dynamics in 2D

Here we discuss the action of operators on state vectors of the discretized space, assuming $N$ bins in each dimension. We write the stationary voltage distribution in the form

(27) $P = \sum_{i,j} A_{ij}\; x_i\, y_j^{\mathsf{T}}$

where $X = (x_1, \dots, x_N)$ and $Y = (y_1, \dots, y_N)$ are two suitable orthogonal bases of $\mathbb{R}^N$. We will define a specific choice for $X$ and $Y$ later in Section II.6. As there is no crosstalk between the two neurons except by shared input leading to a correlated jump distribution, threshold and decay operators (tensors) are separable

(28) $T^{2D} = T \otimes T$

(29) $D^{2D} = D \otimes D$

Separability would also apply to the jump distribution for an input correlation coefficient of $c = 0$, corresponding to two neurons without shared input

(30) $J^{2D}_{c=0} = J_1 \otimes J_2$

However, in the case of non-zero correlation, this relation holds only for a single path among the many connecting two points in state space. Therefore, every path must be taken into account by considering the contribution of each operator to a movement in the oblique direction.

Once we have computed the correct Markov matrix for 2D via

(31) $M^{2D} = T^{2D}\, J^{2D}\, D^{2D}$

one can also find the stationary joint membrane potential distribution as the Perron-Frobenius eigenvector of the propagator matrix

(32) $P^* = M^{2D}\, P^*$

Regarding the correlated jump distribution there are two ways of constructing operators. One possibility is described in Eq. 26. Alternatively, we construct linear Markov jump operators in 2D exploiting the commutativity of infinitesimal operators

Here $\mathbb{1}$ denotes the identity matrix. Using the properties of the operator product and the commutativity of the individual factors, we can simplify this expression

In order to expand the third term we define the operators $U$ and $L$ as up and down transition matrices, where $U^n$ corresponds to an $n$-step up transition, and $L^n$ corresponds to an $n$-step down transition. This leads to

(33) |

Hence, discrete approximations to the infinitesimal generators of the private components are given as

(34) $G_i = \lambda_{E_i} \left( U^{n_E} - \mathbb{1} \right) + \lambda_{I_i} \left( L^{n_I} - \mathbb{1} \right)$

where the matrix powers $U^{n_E}$ and $L^{n_I}$ are defined through $w_E = n_E \Delta u$ and $w_I = n_I \Delta u$ for simplicity. (This restrictive assumption can be generalized easily by computing the continuous jump distribution in Eq. 14 and then discretizing it, which leads to the same result.) On the other hand, we need to be careful with correlated spikes, which trigger jumps into the oblique direction with probability

(35) $\lambda_{E_c} \Delta t$ and $\lambda_{I_c} \Delta t$

for excitatory and inhibitory shared events, respectively. Expanding the correlated part of the generator yields

(36) |

As we noted before, the above construction is quite general and can be applied easily to general amplitude distributions. We only need to consider a discrete amplitude distribution in a given time bin of size $\Delta t$, as described above; see also Kuhn et al. (2003); Richardson and Swarbrick (2010).

A final remark on the method described in this section concerns the commutativity of matrices. This property leads to a numerically more economic expression

(37) |

where the set of integers is defined as the list of all jump numbers of the private and shared processes. The coefficient

(38) $P(k_1, k_2, k_c) = \prod_{a \in \{1, 2, c\}} e^{-\mu_a} \frac{\mu_a^{k_a}}{k_a!}$

is the probability of the corresponding numbers of jumps. The jump generator is then defined in terms of matrix powers

(39) |

### II.6 Operator decomposition and SVD basis reduction

The expansion method described above is straightforward, but rather cumbersome to implement. We will now introduce a basis for the expansion of correlated jump operators that is suitable to reduce the cost of the computations involved, and helpful to increase the accuracy of a truncation. Compared to other approaches, our method solves linear equations of lower dimensionality while yielding a better approximation of the correlation function. Some further arguments for selecting singular value decomposition (SVD) are discussed in the results section. The SVD of a matrix $M$ is given as

(40) $M = U \Sigma V^{\mathsf{T}}$

where the diagonal entries of the diagonal matrix $\Sigma$ are the square roots of the non-zero eigenvalues of $M M^{\mathsf{T}}$ and $M^{\mathsf{T}} M$. Both matrices $U$ and $V$ are orthogonal, with columns consisting of the eigenvectors of $M M^{\mathsf{T}}$ and $M^{\mathsf{T}} M$, respectively. We show a 2D example of the SVD of a Markov matrix in Fig. 2. For $M$ replaced by the time evolution matrix of the first (second) neuron, we define the matrix $X$ ($Y$) by the selected orthogonal subspace of right singular vectors, according to the largest singular values, respectively. In order to project the single-neuron operators to this subspace, we also extend the orthogonal bases $X$ and $Y$ to supra-threshold transitions

(41) |

where the two factors are rectangular extension operators, padded with identity blocks whose dimensions are set by the maximal supra-threshold jump numbers induced by both private and shared inputs. The combined action of the extension and the projection is then expressed as reduced operators

(42) |

which map the full operators to reduced matrices acting on the subspace. Below we use the same dimensional reduction for the correlated operators. In Eq. 37 the correlated jump operators are expressed as

A dimensional reduction is then achieved by using Eq. 42 in

In order to find the coefficient matrix $A$ in Eq. 27, we need to solve Eq. 32, which reads

The projection operators are orthogonal, $X^{\mathsf{T}} X = Y^{\mathsf{T}} Y = \mathbb{1}$. Applying them to the left hand side of Eq. 32, we obtain

The result can be expressed in a simpler form as

for monomials in the reduced operators. The reduced equation is then

(43) |

where the right hand side is a tensor defined as

(44) |

The dimensionally reduced problem in Eq. 43 can then be solved in practice by reindexing the tensor indices, i.e. by flattening the coefficient matrix into a vector.
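The subspace-projection step can be illustrated on a toy fixed-point problem. Below, a small random column-stochastic matrix stands in for the 2D propagator: the fixed point is sought in the span of the leading right singular vectors, following the basis choice discussed in the results section. This is our own minimal sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
M = rng.random((n, n))**4          # heavy-tailed entries
M /= M.sum(axis=0, keepdims=True)  # column-stochastic propagator

# reference: Perron-Frobenius eigenvector (stationary distribution)
w, V = np.linalg.eig(M)
p_ref = np.real(V[:, np.argmax(np.real(w))])
p_ref /= p_ref.sum()

U, s, Vt = np.linalg.svd(M)

def reduced_fixed_point(k):
    X = Vt[:k].T                   # k leading right singular vectors
    Mk = X.T @ M @ X               # reduced propagator
    wk, Vk = np.linalg.eig(Mk)
    a = np.real(Vk[:, np.argmin(np.abs(wk - 1.0))])
    p = X @ a                      # lift back to the full space
    return p / p.sum()

err_full = np.abs(reduced_fixed_point(n) - p_ref).sum()
err_half = np.abs(reduced_fixed_point(n // 2) - p_ref).sum()
print(err_full, err_half)
```

At full subspace dimension the reduction is exact; the interesting regime is how slowly the error grows as the dimension shrinks, which for the structured LIF propagators of the text is reported to be very favorable.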

### II.7 Conditional flux distribution

Using the decomposed 2D stationary distribution obtained by the reduction, one can compute the flux distribution with the help of matrix operators. We compute the flux distribution using the 2D decay and jump operators with thresholding imposed only at one of the boundaries. A scheme illustrating the situation is shown in Fig. 1c. Here we explain how to obtain the flux distribution conditional on a spike in one neuron. In the general case of correlated neurons, the action of the full decay and jump operators is given as

(45) |

where the result is a matrix on the 2D grid. The implicitly summed subspace indices are not shown, and the individual factors are defined in Eq. 37. This equation can be written in a concise form with implicit indices as

Again, for practical reasons, computations were reduced by projecting to the extended subspaces, similar to Eq. 42

(46) |

In order to compute the probability of threshold crossings we need to sum the probabilities for a jump over the threshold of the first or the second neuron. The flux distribution is obtained as

(47) |

defined for sub-threshold values of the remaining membrane potential variable. These expressions are normalized to unit total flux. The amplitude of the delta singularity at the reset potential is obtained as

(48) |

where $\lambda_{E_c}$ is the rate of the shared excitatory spike train. Once we have found the conditional flux distribution, we can solve the initial value problem defined in Eq. 23 in order to determine the conditional rates $\nu_{1|2}(\tau)$ or $\nu_{2|1}(\tau)$.

### II.8 Diffusion approximation vs. finite PSPs

We compare the exit rate of the stochastic system with post-synaptic potentials of finite amplitude with the analytic result obtained for the diffusion approximation Brunel and Hakim (1999). For small enough PSPs the difference in rates of the two models is small

(49) $\nu_{\mathrm{diff}}^{-1} = \tau \sqrt{\pi} \int_{(V_r - \mu)/\sigma}^{(V_\theta - \mu)/\sigma} e^{u^2} \left( 1 + \operatorname{erf}(u) \right) du$

(50) $\nu_{\mathrm{fp}} = \frac{1}{\Delta t} \sum_{i \ge N} (J D\, p^*)_i$

We use the absolute difference between the two rates

(51) $\Delta\nu = \left| \nu_{\mathrm{diff}} - \nu_{\mathrm{fp}} \right|$

to account for the accuracy of a specific space-time grid $(\Delta u, \Delta t)$.

### II.9 Correlation coefficient and comparison to diffusion approximation

The correlation coefficient in Fig. 10a is computed with the formula

(52) |

We used Eq. 49 for the stationary rate $\nu$, $a$ is the amplitude of the $\delta$-function in Eq. 48, and the coefficient of variation, $CV$, is computed with the equation

(53) $CV^2 = 2\pi \nu^2 \tau^2 \int_{(V_r - \mu)/\sigma}^{(V_\theta - \mu)/\sigma} dx\, e^{x^2} \int_{-\infty}^{x} dy\, e^{y^2} \left( 1 + \operatorname{erf}(y) \right)^2$

as given in Brunel (2000). Thereby $\operatorname{erf}$ is the error function Abramowitz (1974). We computed correlation coefficients of the diffusion approximation of the finite PSP system (meaning a 2D Fokker-Planck equation with the same $\mu$ and $\sigma$) as in Deniz and Rotter (2016).
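The diffusion-approximation quantities (stationary rate via the Siegert-type integral of Eq. 49 and the CV attributed above to Brunel (2000)) can be evaluated numerically as sketched below. This is an independent implementation; the trapezoidal grids and the lower integration cutoff at $-8$ (where the inner integrand is numerically zero) are implementation assumptions.

```python
import numpy as np
from math import erf, pi, sqrt

def _trapz(f, x):
    # simple trapezoid rule (avoids version-specific numpy names)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def cv_brunel(mu, sigma, tau, v_th, v_r, npts=4001):
    a, b = (v_r - mu) / sigma, (v_th - mu) / sigma
    y = np.linspace(min(a, -8.0), b, npts)
    erf_y = np.array([erf(t) for t in y])
    f = np.exp(y**2) * (1.0 + erf_y)        # rate integrand
    g = np.exp(y**2) * (1.0 + erf_y)**2     # inner CV integrand
    dy = y[1] - y[0]
    # cumulative inner integral G(x) = int_{-inf}^{x} g dy
    G = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dy)))
    m = y >= a
    nu = 1.0 / (tau * sqrt(pi) * _trapz(f[m], y[m]))
    cv2 = 2.0 * pi * (nu * tau)**2 * _trapz(np.exp(y[m]**2) * G[m], y[m])
    return nu, sqrt(cv2)

nu, cv = cv_brunel(mu=10.0, sigma=5.0, tau=0.02, v_th=20.0, v_r=0.0)
print(f"rate = {nu:.3f} Hz, CV = {cv:.3f}")
```

For deeply subthreshold parameters the CV approaches one (Poisson-like output), which provides a quick sanity check of the implementation.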

### II.10 Direct numerical simulations and data smoothing

We used the neural simulation tool NEST Gewaltig and Diesmann (2007) to perform numerical simulations of input and output spike trains in the scenario described above. All analyses were based on discretized voltage data recorded during the simulations at a fixed time resolution.

Empirical voltage distributions were obtained by normalizing histograms appropriately. Further smoothing using a simple moving average was performed before comparing these distributions to the analytically obtained stationary distribution. We also performed the comparison using cumulative distributions, as the implicit integration very efficiently reduces the noise in the data. Two 2D distributions are compared via visual inspection of contour lines. We also directly compare spike train cross-correlation functions to assess efficiency and accuracy of the method.

### II.11 Numerical evaluation of cross-correlation functions

We compute cross-correlation functions of spike trains from conditional PSTHs. One can express this as an integral over two time variables with bin size $\Delta t$

(54) |

where we set

(55) |

with a finite observation window.
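An empirical cross-correlogram of this kind can be sketched directly on binned spike trains. The spike trains below are synthetic: a common Poisson parent injects shared spikes into both trains, which guarantees a covariance peak at zero lag. Rates, duration, and bin size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 200.0, 1e-3                  # duration (s), bin size (s)
n = int(T / dt)
shared = rng.random(n) < 5.0 * dt    # 5 Hz shared parent
s1 = (rng.random(n) < 10.0 * dt) | shared
s2 = (rng.random(n) < 10.0 * dt) | shared

lags = np.arange(-50, 51)            # lags in bins (+-50 ms)
rate1, rate2 = s1.mean() / dt, s2.mean() / dt
# raw cross-covariance estimate per lag, in units of (1/s)^2
cc = np.array([
    np.mean(s1 * np.roll(s2, L)) / dt**2 - rate1 * rate2
    for L in lags
])
print(f"peak at lag {lags[np.argmax(cc)]} ms")
```

The zero-lag peak height is set by the rate of the shared parent; off-zero lags fluctuate around zero, as expected for otherwise independent trains.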

### II.12 Convergence and error bounds

The direct singular value decomposition of a 2D membrane potential distribution shows that there are only few singular values that deviate significantly from zero (Fig. 4). This behavior does not depend strongly on the chosen discretization, but it does depend on the input correlation coefficient $c$. Although singular vectors are not probability distributions in their own right, the singular vectors derived from the neuronal dynamics matrix (except the first few vectors) have the property

(56) $\sum_k (x_i)_k = 0$

provided the discretization is fine enough. This behavior is demonstrated in Fig. 6. The contribution of each term to the overall normalization

(57) |

is progressively small, where the factors are the sums over the entries of the $i$-th and $j$-th singular vectors of the first and the second matrix, respectively

(58) |

This shows that the sum converges rather quickly. This error measure is related to projections of the 1D discrete stationary distribution (satisfying $p^* = M p^*$) onto the SVD basis. All other eigenvectors of a Markovian matrix, with eigenvalues of modulus smaller than one, satisfy the zero-sum property. We want to avoid underestimating the total probability mass as a result of the truncated sum in Eq. 27. Hence, the above justifies that the remainder of the projections after truncation can be omitted up to a certain precision. On the other hand, in order to describe the cumulative contribution of singular vectors we look at the distance of the omitted remainder

(59) |

which describes how well the method converges self-consistently. Note that we did not renormalize this measure for every added term; we rely instead on the fast convergence of the projections measured by Eq. 58, so the first few error terms can be misleading.

## III Results

In order to treat strong correlations we devised a robust numerical method to study the joint statistics of membrane potentials and spike trains of integrate-and-fire model neurons. The case study reported here covers the leaky integrate-and-fire (LIF) model with Poisson input spike trains. However, our method can be easily generalized to non-linear leak functions Gerstner and Kistler (2002), conductance based synaptic inputs Kuhn et al. (2004) and more complex input correlation models Kuhn et al. (2003); Schultze-Kraft et al. (2013), although we have to leave the details of such generalizations open. In this section we will explain how to select a ‘good basis for expansion’, and we will give numerical examples that demonstrate the power of the method.

### III.1 SVD of joint probability distributions and choice of expansion basis

We started from a simple observation: The stationary joint membrane potential distribution for two neurons with independent input is given as

(60) $P(u_1, u_2) = p_1(u_1)\, p_2(u_2)$

where $p_1$ and $p_2$ are the stationary membrane potential distributions of two independent neurons, as described in Eq. 4. A similar relation for a discretized voltage grid can be written as

(61) $P = p_1\, p_2^{\mathsf{T}}$

For the case of shared input this simple relation is not valid any more. On the other hand, we observed that a value of the input correlation $c$ even close to $1$ will practically recruit only a small number of additional principal components for any given precision, cf. Fig. 4. Here we perform a singular value decomposition of the full solution of Eq. 32 given in terms of the matrix

(62) $P^* = U \Sigma V^{\mathsf{T}}$

generalizing the case of independent neurons to also reconstruct the joint membrane potential distribution for neurons with shared input. As demonstrated in Fig. 4, convergence is rather quick, even for moderate subspace dimensions.

Another aim of our study was to gain some understanding about the influence of the space-time grid. We observed that left and right singular vectors are of the form

(63) |

where the singular vectors combine a quasi-continuous part with the coupling matrix $A$ as defined in Eq. 27. Here we need to make sure that the emerging singularity at the reset bin is not causing any numerical problems. One needs to first consider a small time step and adapt the stepping in space accordingly. A more thorough discussion of a suitable coarse graining strategy, however, is postponed to a later section of this paper.

Here we suggest SVD as a method to achieve a dimensional reduction of the full system. As it is a numerical method, its convergence and efficiency need to be addressed. Generally, there are several different options to select a basis. Specifically, we use the right singular vectors of single-neuron Markov matrices. As demonstrated in Fig. 4, right singular vectors lead to an expansion that converges faster for coarse grids. Although for finer grids the difference is less prominent (Figs. 5a and b), right singular vectors still converge slightly faster than left singular vectors (Fig. 5d). Right singular vectors of the single-neuron time evolution matrix yield an orthogonal coordinate system with very good properties.

As reported previously Deniz and Rotter (2016), we may also use a direct analytical approach using the eigenbasis of the single-neuron Fokker-Planck operator, and its adjoint basis

(64) |

whose rows are the left eigenvectors of the single-neuron operators. However, the issue is that the discrete adjoint basis blows up at the lower boundary. The effect of this on our approximation is demonstrated in Fig. 3. In general, SVD eliminates the kernel of singular matrices.

In our treatment of the 2D Fokker-Planck equation, which is the infinitesimal limit of the theory considered above, we used the eigenbasis and its adjoint to project linear operators to a subspace. This has certain advantages, as it satisfies constraints for marginal distributions and preserves the Markov property to some extent. Positivity of the solution in the subspace is not guaranteed, but the time evolution is probability preserving.

First, SVD is computationally convenient, because it leads to a real orthogonal basis which resembles the eigenbasis of the propagator. Second, numerical instabilities due to an ill-conditioned time evolution matrix (some eigenvalues close to zero) are cured by SVD. Third, although the basis vectors implied by SVD have the disadvantage of not completely preserving positivity, the deviation remains within tight bounds even for a relatively small number of basis vectors.

### III.2 Comparison to direct numerical simulations

We compare our SVD-based Markov theory and direct numerical simulations of spiking neurons both on the level of joint 2D membrane potential distributions and on the level of spike train covariance functions, cf. Fig. 7. The empirical distributions derived from 2D histograms are slightly smoothed in order to compare them to the distributions derived from the Markov theory on the level of contour lines. We also considered 2D cumulative distribution functions, where the smoothing step can be omitted. Moreover, we computed output spike train covariance functions as described in the Methods section and compared them to the covariance functions obtained directly from the simulated spike trains.

### III.3 Application 1: Non-linear correlation transfer

Two neurons that are driven by correlated input current will exhibit correlated output spike trains. This transfer of correlation reflects an important property of neuronal dynamics, which is of particular relevance for understanding the contribution of neurons to network dynamics. Recently, we were able to demonstrate, by exact analytical treatment, that the correlation transfer for leaky integrate-and-fire neurons is strongly non-linear Deniz and Rotter (2016). Only for weak input correlation can it be described by perturbative methods, and deviations from linear response theory depend on the background firing rate. In the present work we demonstrate the same non-linear correlation transfer, cf. Fig. 10. There we also demonstrate how the parameters of the spatial and temporal coarse graining affect the precision of the Markov approximation. Our main conclusion is that dimensional reduction via SVD subspace projections makes it possible to achieve a superior precision with small bin sizes. Fine enough grids, however, could not be dealt with on typical computers without using the reduction suggested here.

### III.4 Application 2: Asymmetric cross-covariance functions

Neurons in biological networks have widely distributed parameters, and this heterogeneity may also influence information processing Padmanabhan and Urban (2010); Yim et al. (2012, 2013). Moreover, robust asymmetries in spike correlations could lead to asymmetric synaptic efficacies, if they are subject to spike timing dependent plasticity Morrison et al. (2008); Babadi and Abbott (2013). Our approach reveals a generic temporal asymmetry in cross-covariance functions, related to the heterogeneity of intrinsic neuron parameters and input variables. Such temporal asymmetry is more pronounced for larger values of the input correlation $c$, especially in the non-perturbative regime that we address in this work.

We document here an application of our method to four types of parameter asymmetries Yim et al. (2013); Deniz and Rotter (2016). We quantified each asymmetry by the relative difference of the respective parameter between the two neurons (specific values given in Table Appendix: Parameters). Asymmetric correlations have been described previously, both in numerical simulations and experimentally Ostojic et al. (2009); Padmanabhan and Urban (2010); Yim et al. (2013, 2014).

***

## IV Discussion

### IV.1 Relevance of the new approach presented here

Models of correlated neuronal activity describe the origin of correlations in spiking model neurons, induced by the structure of the network and/or feedforward input. Such neuron models, however, are notoriously nonlinear. Nevertheless, most treatments rely on linearization and other simplifying assumptions, as nonlinear correlation transfer functions (i.e. relation of input and output correlations) are difficult to derive. Previous analytical approaches have employed perturbation theory Brunel and Hakim (1999); Lindner and Schimansky-Geier (2001) to study pairwise correlations under the assumption of weak input correlation de la Rocha et al. (2007); Shea-Brown et al. (2008). As a consequence of this technical convenience, we still lack a systematic approach that allows us to deal with a broad range of correlations, and to gain an understanding of their implications for network dynamics.

### IV.2 Extensions of the theory

All computations described in our paper can be applied to more general integrate-and-fire models with a non-linear leak function

(65) $\tau\,\dot{V}(t) = f(V) + \tau w_E\, s_E(t) - \tau w_I\, s_I(t)$

We only need to rewrite the decay matrix as a discrete approximation of the differential operator associated with the non-linear leak.