# Spike trains statistics in Integrate and Fire Models: exact results.

ABSTRACT We briefly review and highlight the consequences of rigorous and exact results obtained in [1], characterizing the statistics of spike trains in a network of leaky Integrate-and-Fire neurons, where time is discrete, where neurons are subject to noise, and without restriction on the synaptic connectivity. The main result is that spike train statistics are characterized by a Gibbs distribution whose potential is explicitly computable. This establishes, on the one hand, a rigorous ground for the current investigations attempting to characterize real spike train data with Gibbs distributions, such as the Ising-like distribution [2], using the maximal entropy principle. However, it transpires from the present analysis that the Ising model might be a rather weak approximation. Indeed, the Gibbs potential (the formal “Hamiltonian”) is the log of the so-called “conditional intensity” (the probability that a neuron fires given the past of the whole network [3, 4, 5, 6, 7, 8, 9, 10]). But, in the present example, this probability has an infinite memory, and the corresponding process is non-Markovian (equivalently, the Gibbs potential has infinite range). Moreover, causality implies that the conditional intensity does not depend on the state of the other neurons at the same time, ruling out the Ising model as a candidate for an exact characterization of spike train statistics. However, Markovian approximations can be proposed whose degree of approximation can be rigorously controlled. In this setting, the Ising model appears as the “next step” after the Bernoulli model (independent neurons), since it introduces spatial pairwise correlations but no time correlations. The range of validity of this approximation is discussed, together with possible approaches allowing one to introduce time correlations, with algorithmic extensions.

KEY WORDS Spike trains statistics, Gibbs measures, Integrate and Fire Models, chains with complete connections, Markov approximations, Ising model.

## 1 Introduction.

At first glance, neuronal activity manifests itself by the emission of action potentials (“spikes”) constituting spike trains. Those spike trains are usually not exactly reproducible when repeating the same experiment, even with a very good control ensuring that the experimental conditions have not changed. Therefore, researchers seek statistical regularities in order to provide an accurate model for spike train statistics, i.e. a probability distribution fitting the experimental data and/or matching what neuroscientists believe models should predict. However, obtaining statistical models from experimental data remains a difficult task.

It appears simpler to characterize spike train statistics in neural network models where one controls exactly the network parameters, the number of involved neurons, the number of samples, and the duration of the experiment (with a possible mathematical extrapolation to infinite time). Especially, producing analytical (and when possible, rigorous) results on those statistics provides clues toward resolving experimental questions and new algorithms for data treatment. Here, we propose a complete and rigorous characterization of spike train statistics for the discrete-time leaky Integrate-and-Fire model with noise and time-independent stimuli. This framework affords extrapolation to more realistic neural network models such as generalized Integrate-and-Fire models [11, 12]. Our results hold for finite-sized networks, and all types of synaptic graph structure are allowed. Also, we are not constrained by an ad hoc choice of the initial distribution of membrane potentials; instead, we propose a procedure where this distribution is selected by the dynamics and is uniquely determined. More precisely, we show that spike train statistics are characterized by a (unique) invariant probability distribution (an equilibrium state), whatever the model-parameter values, which satisfies a variational principle (the maximal entropy principle) and is a Gibbs distribution whose potential is explicitly computed. This has several deep consequences, discussed in this paper.

## 2 Model definition.

We consider a discrete-time (continuous-state) Integrate-and-Fire model, introduced in [13, 14], whose dynamics is given by:

$$V_i(t+1)=\gamma V_i(t)\bigl(1-Z[V_i(t)]\bigr)+\sum_{j=1}^{N}W_{ij}\,Z[V_j(t)]+I_i+\sigma_B B_i(t), \qquad (1)$$

where $i=1,\dots,N$ is the neuron index, $V_i(t)$ the membrane potential of neuron $i$ (a continuous variable), $\gamma\in[0,1)$ is the leak rate, $Z[V]=1$ whenever $V\geq\theta$ and $Z[V]=0$ otherwise, where $\theta$ is the firing threshold, $W_{ij}$ is the synaptic weight from neuron $j$ to neuron $i$, $I_i$ is a constant external current, $\sigma_B>0$, and $B_i(t)$ is a noise, namely the $B_i(t)$’s are Gaussian i.i.d. random variables with mean zero and variance $1$. The term $\gamma V_i(t)\bigl(1-Z[V_i(t)]\bigr)$ corresponds to the reset of the membrane potential whenever neuron $i$ fires, while the term $\sum_{j=1}^{N}W_{ij}Z[V_j(t)]$ is the post-synaptic potential generated by the pre-synaptic spikes.
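To make the model concrete, here is a minimal Python sketch of dynamics (1). It is not part of [1]; the function name and parameter values are illustrative:

```python
import numpy as np

def simulate(W, I, gamma=0.5, theta=1.0, sigma_B=0.1, T=100, seed=0):
    """Simulate the discrete-time leaky Integrate-and-Fire network of eq. (1).

    W[i, j] is the synaptic weight from neuron j to neuron i, I the constant
    external current.  Returns the raster omega[t, i] = Z[V_i(t)].
    """
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    V = np.zeros(N)                          # membrane potentials
    raster = np.zeros((T, N), dtype=int)
    for t in range(T):
        Z = (V >= theta).astype(float)       # firing indicator Z[V_i(t)]
        raster[t] = Z
        # eq. (1): leak/reset term + post-synaptic potentials + current + noise
        V = gamma * V * (1.0 - Z) + W @ Z + I + sigma_B * rng.standard_normal(N)
    return raster
```

Note that the reset is implicit in the factor $(1-Z[V_i(t)])$: whenever neuron $i$ fires, its membrane potential restarts from the synaptic input, current and noise alone.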

## 3 Spike train statistics.

To each membrane potential value $V_i(t)$ we associate a binary variable $\omega_i(t)=Z[V_i(t)]$. The “spiking pattern” of the neural network at time $t$ is the vector $\omega(t)=\bigl(\omega_i(t)\bigr)_{i=1}^{N}$: it tells us which neurons are firing at time $t$ ($\omega_i(t)=1$) and which neurons are not firing at time $t$ ($\omega_i(t)=0$). We denote by $\omega_{t_1}^{t_2}$ the sequence, or spike block, $\omega(t_1)\,\omega(t_1+1)\cdots\omega(t_2)$. A bi-infinite sequence $\omega=\{\omega(t)\}_{t=-\infty}^{+\infty}$ of spiking patterns is called a “raster plot”. It tells us which neurons are firing at each time $t\in\mathbb{Z}$. In practice, experimental raster plots are obviously finite sequences of spiking patterns, but the extension to $\mathbb{Z}$, and especially the possibility of considering an arbitrarily distant past (negative times), is a key tool in the present work. We are seeking invariant probability distributions on the set of raster plots generated by the dynamical system (1). In the next sections we give our main results, whose proofs can be found in [1].

## 4 The transition probability.

In this model it is possible to compute exactly and rigorously the probability, called the conditional intensity in [3, 4, 5, 6, 7, 8, 9, 10], of a spiking pattern $\omega(t+1)$ at time $t+1$, given the past of the network, $\omega_{-\infty}^{t}$. To our knowledge, this is the first time that such an exact and rigorous computation is achieved at the network level. It is given by:

$$P\left(\omega(t+1)\,\middle|\,\omega_{-\infty}^{t}\right)=\prod_{i=1}^{N}P\left(\omega_i(t+1)\,\middle|\,\omega_{-\infty}^{t}\right), \qquad (2)$$

with

$$P\left(\omega_i(t+1)\,\middle|\,\omega_{-\infty}^{t}\right)=\omega_i(t+1)\,\pi\!\left(\frac{\theta-C_i(\omega_{-\infty}^{t})}{\sigma_i(\omega_{-\infty}^{t})}\right)+\bigl(1-\omega_i(t+1)\bigr)\left(1-\pi\!\left(\frac{\theta-C_i(\omega_{-\infty}^{t})}{\sigma_i(\omega_{-\infty}^{t})}\right)\right), \qquad (3)$$

where $\pi(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{+\infty}e^{-u^2/2}\,du$ is the tail of the standard Gaussian distribution, and where

$$C_i(\omega_{-\infty}^{t})=\sum_{j=1}^{N}W_{ij}\,x_{ij}(\omega_{-\infty}^{t})+I_i\,\frac{1-\gamma^{\,t+1-\tau_i(\omega_{-\infty}^{t})}}{1-\gamma},$$

with

$$x_{ij}(\omega_{-\infty}^{t})=\sum_{l=\tau_i(\omega_{-\infty}^{t})}^{t}\gamma^{\,t-l}\,\omega_j(l),$$

is the deterministic contribution of the network to the membrane potential of neuron $i$ at time $t+1$, when the spike train up to time $t$ is $\omega_{-\infty}^{t}$. The term $C_i(\omega_{-\infty}^{t})$ integrates the pre-synaptic flux that arrived at neuron $i$ in the past, as well as the external current $I_i$. As a consequence, this term depends on the past history up to a time $\tau_i(\omega_{-\infty}^{t})$, which is the last time, up to $t$, when neuron $i$ has fired in the sequence $\omega_{-\infty}^{t}$ (the mathematical definition is given below).

Likewise,

$$\sigma_i^{2}(\omega_{-\infty}^{t})=\frac{\sigma_B^{2}\left(1-\gamma^{\,2\left(t+1-\tau_i(\omega_{-\infty}^{t})\right)}\right)}{1-\gamma^{2}}$$

is the variance of the noise-induced term applied to neuron $i$ at time $t+1$, resulting from the integration of the noise from time $\tau_i(\omega_{-\infty}^{t})$ to time $t$.

Finally, the mathematical definition of $\tau_i(\omega_{-\infty}^{t})$ is:

$$\tau_i(\omega_{-\infty}^{t})=\sup\left\{\,l\leq t \;:\; \omega_i(l)=1\,\right\},$$

with the convention that $\sup\,\emptyset=-\infty$.

Basically, this notion expresses the fact that the state of neuron $i$ depends on spikes emitted in the past by the network, up to the last time when this neuron has fired (thus, the present analysis relies heavily on the structure of IF models, where the reset of the membrane potential to a fixed value erases the past evolution). Depending on the sequence $\omega_{-\infty}^{t}$, this time can go arbitrarily far in the past. Now, sequences in which neuron $i$ does not fire during $n$ consecutive time steps have, for every positive $n$, a positive probability (decaying exponentially fast with $n$). The consequence is that the transition probability (2) depends on the history of the network over an unbounded past. As a consequence, the dynamics is non-Markovian.
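For illustration, the quantities $\tau_i$, $C_i$ and $\sigma_i^2$ can be computed from a finite raster as in the sketch below. This is not code from [1]: working with a finite raster forces a truncation of the history at time $0$, so the convention $\tau_i=-\infty$ is replaced by a start-of-raster cut-off:

```python
import numpy as np

def tau(raster, i, t):
    """Last time l <= t at which neuron i fired; returns -1 if it never fired
    inside the available raster (a finite-history stand-in for -infinity)."""
    fired = np.flatnonzero(raster[: t + 1, i])
    return int(fired[-1]) if fired.size else -1

def C_and_var(raster, W, I, gamma, sigma_B, i, t):
    """C_i and sigma_i^2 for neuron i given the spike train up to time t:
    leaky integration of pre-synaptic spikes, current and noise since the
    last reset of neuron i (truncated at the start of the raster)."""
    ti = max(tau(raster, i, t), 0)   # truncate the history at time 0
    # synaptic term: sum_j W_ij x_ij, with x_ij = sum_{l=ti}^{t} gamma^(t-l) omega_j(l)
    syn = sum(gamma ** (t - l) * float(W[i] @ raster[l]) for l in range(ti, t + 1))
    cur = I[i] * (1 - gamma ** (t + 1 - ti)) / (1 - gamma)
    var = sigma_B ** 2 * (1 - gamma ** (2 * (t + 1 - ti))) / (1 - gamma ** 2)
    return syn + cur, var
```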

Note also that (evidently?) the structure of the neural network dynamics imposes causality. This appears explicitly in (3), where the probability that neuron $i$ fires at time $t+1$ depends on spikes emitted up to time $t$. In other words, this probability does not depend on the spikes emitted by the other neurons at the same time $t+1$. We shall come back to this remark when discussing the relevance of the Ising model.

Finally, remark that the factorization of the probability (2) expresses the conditional independence of spikes at a given time. This is simply due to the fact that the only source of stochasticity, once past spikes are fixed (by the conditioning), is the noise, which is assumed, in this model, to be independent between neurons. But (2) does not imply that spikes are independent.

Let us emphasize what we have obtained. We have an explicit form for the probability that a neuron fires given its past. From this we can obtain the probability of any spike block. But here there is a technical difficulty, since the corresponding stochastic process is not Markovian. Such a process corresponds to a more elaborate concept called a chain with complete connections. As a consequence, the existence and uniqueness of an invariant measure is not given by classical theorems on Markov chains but requires more elaborate tools, widely developed by the community of stochastic processes and ergodic theory (see [15] for a nice introduction and useful references). The price to pay is a rather abstract mathematical formalism, but the reward is the possibility of considering spike statistics arising in such models, including memory effects and causality, with, we believe, possible extensions toward more realistic neural models.

## 5 Gibbs distribution.

In the present setting, where the current $I_i$ does not depend on time, the family of transition probabilities (2) is stationary: for any fixed spiking sequence, the probability of the transition does not depend on the time $t$ at which it occurs. Therefore, instead of considering a family of transition probabilities indexed by $t$, it suffices to define the transition probability at one time, for example $t=0$. The next results are based on techniques from ergodic theory and chains with complete connections [15].

We recall first a few definitions. A Gibbs distribution is a probability distribution $\mu_\psi$ on the set $X$ of (infinite) spike sequences (raster plots), for which one can find constants $c_1, c_2$ with $0<c_1\leq 1\leq c_2$ such that, for all $n\geq 0$ and for all $\omega\in X$:

$$c_1\;\leq\;\frac{\mu_\psi\left(\omega_0^{n}\right)}{\exp\left[-(n+1)P(\psi)+\sum_{k=0}^{n}\psi(T^{k}\omega)\right]}\;\leq\;c_2,$$

where $T$ is the right shift¹ over the set of infinite sequences, i.e. $(T\omega)(t)=\omega(t-1)$. $P(\psi)$, called the topological pressure, is formally analogous to the free energy and is a central quantity since it allows the computation of averages with respect to $\mu_\psi$. In the previous equation, the term $\sum_{k=0}^{n}\psi(T^{k}\omega)$ may be viewed as a formal Hamiltonian over a finite chain of symbols $\omega_0^{n}$. However, as exemplified below (eq. (5)), this term also depends on the past (the infinite chain $\omega_{-\infty}^{-1}$). There is an analogy with statistical physics systems with boundary conditions, where the Gibbs distribution depends not only on the configuration inside the lattice, but also on the boundary term [16].

Moreover, $\mu_\psi$ is an equilibrium state if it satisfies the following variational principle (“maximum entropy principle”):

$$P(\psi)\;\stackrel{\mathrm{def}}{=}\;h(\mu_\psi)+\mu_\psi(\psi)=\sup_{\mu\in\mathcal{P}_T(X)}\bigl[h(\mu)+\mu(\psi)\bigr], \qquad (4)$$

where $\mathcal{P}_T(X)$ is the set of $T$-invariant probability measures on $X$, $h(\mu)$ is the entropy of $\mu$, and $\mu(\psi)$ is the average value of $\psi$ with respect to the measure $\mu$. Though Gibbs states and equilibrium states are non-equivalent notions in the general case, they are equivalent in the present setting.

Now, our main result is:

The dynamical system (1) has a unique invariant measure, $\mu_\psi$, whatever the values of the parameters. This is a Gibbs distribution and an equilibrium state for the potential:

$$\psi(\omega)=\sum_{i=1}^{N}\Bigl[\omega_i(0)\log\pi\bigl(X_i(\omega)\bigr)+\bigl(1-\omega_i(0)\bigr)\log\Bigl(1-\pi\bigl(X_i(\omega)\bigr)\Bigr)\Bigr], \qquad (5)$$

where we note $X_i(\omega)=\dfrac{\theta-C_i(\omega_{-\infty}^{-1})}{\sigma_i(\omega_{-\infty}^{-1})}$.

Note therefore that the Gibbs potential is simply the log of the conditional intensity. We believe that this last remark extends to more general models.

Knowing that $\mu_\psi$ is a Gibbs distribution with a known potential allows one to control the probability distribution of finite sequences. Note that here the potential is explicitly known. This has several other deep consequences. First, this formalism opens up the possibility of characterizing $P(\psi)$ and $\mu_\psi$ by a spectral approach, via the (unique) largest eigenvalue and the related left eigenfunction of the Ruelle-Perron-Frobenius operator $\mathcal{L}_\psi$, acting on $C(X)$, the set of continuous real functions on $X$. This last property has deep consequences at the implementation level [17].

Also, it is possible to propose Markovian approximations of this distribution, whose degree of accuracy can be controlled, as we now discuss.

## 6 Markov approximations.

The main difficulty in handling the transition probabilities (2) and the related equilibrium state is that they depend on a history dating back to $\tau_i(\omega_{-\infty}^{t})$, where $t-\tau_i(\omega_{-\infty}^{t})$ is unbounded. On the other hand, the influence of the activity of the network, say at time $t-n$, on the membrane potential at time $t$, appearing in the term $x_{ij}$, decays exponentially fast as $n$ grows. Thus, one may argue that after a characteristic time of order $\frac{1}{|\log\gamma|}$, the past network activity has little influence on the current membrane potential value.

Assume that we want to approximate the statistics of spikes, given by the dynamics (1), by fixing a finite time horizon $R$ such that the membrane potential at time $t$ depends on the past only up to some finite time $t-R$. In this way, we truncate the histories and we approximate the transition probabilities $P(\omega(t+1)\,|\,\omega_{-\infty}^{t})$, with unbounded memory, by transition probabilities $P(\omega(t+1)\,|\,\omega_{t-R+1}^{t})$, thus limiting the memory to at most $R$ time steps in the past. These approximated transition probabilities constitute therefore a Markov chain with memory depth $R$. But how good is this approximation?

The truncation of histories leads to a truncation of the Gibbs potential, denoted $\psi^{(R)}$. The invariant measure of the Markov chain is a Gibbs distribution for the potential $\psi^{(R)}$, denoted $\mu_{\psi^{(R)}}$. One can show that the Kullback-Leibler divergence $d$ between the exact measure $\mu_\psi$ and the Markov measure $\mu_{\psi^{(R)}}$ obeys:

$$d\left(\mu_{\psi^{(R)}},\mu_\psi\right)\;\leq\;C\,\gamma^{R}, \qquad (6)$$

where $C$ can be computed explicitly as a function of the synaptic weights and currents.

Therefore, the Kullback-Leibler divergence decays exponentially fast with $R$, with a decay rate $|\log\gamma|$, as expected from our prior qualitative analysis. It is thus sufficient, for practical purposes, to approximate $\psi$ with a potential of range:

$$R\sim\frac{1}{|\log\gamma|}. \qquad (7)$$

Consequently, the smaller the leak rate $\gamma$, the shorter is the range.
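Numerically, a back-of-the-envelope version of (7) picks the smallest $R$ at which the residual influence $\gamma^R$ of a spike emitted $R$ steps in the past falls below a tolerance $\varepsilon$; the function below is illustrative, not taken from [1]:

```python
import math

def memory_range(gamma, eps=1e-3):
    """Smallest R with gamma**R <= eps: spikes older than R time steps
    contribute less than eps to the membrane potential (cf. eq. (7))."""
    return math.ceil(math.log(eps) / math.log(gamma))
```

For instance, $\gamma=0.5$ gives $R=10$ at $\varepsilon=10^{-3}$, while $\gamma=0.9$ gives $R=66$: the closer the leak rate is to $1$, the longer the memory that must be retained.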

Note that the previous result is classical in ergodic theory and expresses that finite-range potentials are dense in the set of Gibbs potentials [16]. What we bring is the computation of the decay rate and an estimation of the constant $C$ for the present model (see [1] for details).

## 7 Raster plots statistics

As discussed in the introduction, the neuroscience community is confronted with the delicate problem of characterizing statistical properties of raster plots from finite-time spike trains and/or from a finite number of experiments. This requires an a priori guess for the probability of raster plots, which we call a statistical model. These models can be extrapolated from methods such as Jaynes’ [18]: “maximising the statistical entropy under constraints”. Actually, this is none other than the variational formulation (4). Now, this method only provides an approximation of the sought probability, relying heavily on the choice of constraints, usually fixed from phenomenological arguments, and this approximation can be rather bad [19]. Moreover, note that Jaynes’ method is typically used for a finite number of constraints, while (4) holds for a very general potential, actually corresponding, in the present context, to infinitely many constraints.

On phenomenological grounds, spike $n$-uplets (neuron $i_1$ fires at time $t_1$, and neuron $i_2$ fires at time $t_2$, ..., and neuron $i_n$ fires at time $t_n$) provide natural choices of constraints, since they correspond to experimentally measurable events whose probability of occurrence has a phenomenological relevance. For example, the probability of occurrence of the event $\omega_i(t)=1$ is the instantaneous firing rate of neuron $i$ at time $t$. On more formal grounds, a spike $n$-uplet corresponds to an order-$n$ monomial; this is a function which associates to a raster $\omega$ the product $\omega_{i_1}(t_1)\cdots\omega_{i_n}(t_n)$, where $i_k\in\{1,\dots,N\}$, $t_k\in\mathbb{Z}$, and where there is no repeated pair of indexes $(i_k,t_k)$. Hence, this monomial is equal to $1$ if and only if neuron $i_1$ fires at time $t_1$, ..., and neuron $i_n$ fires at time $t_n$ in the raster $\omega$ (and is $0$ otherwise). A polynomial is a linear combination of monomials.
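In code, an order-$n$ monomial is just a product of raster entries; the sketch below (not from [1]) uses the indexing `raster[t][i]`:

```python
def monomial(pairs):
    """Build the order-n monomial phi(omega) = prod_k omega_{i_k}(t_k) from a
    list of (neuron, time) pairs: phi is 1 iff every listed neuron fires at
    the listed time in the raster (indexed raster[t][i]), and 0 otherwise."""
    def phi(raster):
        out = 1
        for i, t in pairs:
            out *= raster[t][i]
        return out
    return phi
```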

Fixing constraints on spike $n$-uplet (monomial) averages leads to specific forms of Gibbs distributions. As an example, constraining firing rates only corresponds to a Bernoulli distribution, where neurons spike independently and where the potential reads $\psi=\sum_{i=1}^{N}\lambda_i\,\omega_i(0)$. Constraining rates plus the probability that two neurons fire simultaneously leads to the so-called Ising distribution, where the potential reads $\psi=\sum_{i}\lambda_i\,\omega_i(0)+\sum_{i\neq j}\lambda_{ij}\,\omega_i(0)\,\omega_j(0)$ [2]. Here, the related probability measure does not factorize any more, but all information about spike train statistics is contained in the first- and second-order spike uplets at time $0$.

More generally, imposing constraints on a (finite) set of monomials $\phi_l$, $l=1,\dots,K$, leads to a parametric form of Gibbs potential, $\psi=\sum_{l=1}^{K}\lambda_l\,\phi_l$, where the $\lambda_l$'s are Lagrange multipliers. Note that the $\lambda_l$'s are related by an additional constraint, the normalisation of the probability. Again, this potential relies heavily on the choice of constraints.

On the opposite, in the present example, instead of an ad hoc guess, an exact polynomial expansion can be obtained from the explicit form of the potential (5). It is given by expanding (5) in series, via the series expansion of the logarithm (note that $X_i(\omega)$ is bounded in absolute value provided the synaptic weights are finite and the noise intensity is positive; thus, $a\leq\pi\bigl(X_i(\omega)\bigr)\leq 1-a$, for some $a>0$). This series involves products of past spike terms, leading to monomials of the form $\omega_i(0)\,\omega_{j_1}(t_1)\cdots\omega_{j_n}(t_n)$, where the $j_k$'s are neurons pre-synaptic to $i$ (with $W_{ij_k}\neq 0$) and $t_k\leq -1$. As a consequence, the potential (5) has the form $\psi=\sum_l\lambda_l\,\phi_l$, where the $\phi_l$'s are monomials of the form $\omega_i(0)\,\omega_{j_1}(t_1)\cdots\omega_{j_n}(t_n)$ and the $\lambda_l$'s contain products of the synaptic weights $W_{ij_k}$ and of powers of $\gamma$.

Note that terms of the form $\omega_i(0)\,\omega_j(0)$ do not appear in this expansion. Especially, it does not contain the Ising term.

As discussed in the previous section, truncating this series to a finite polynomial involving monomials with a time depth $R$ (i.e. of the form $\omega_i(0)\,\omega_{j_1}(t_1)\cdots\omega_{j_n}(t_n)$ with $t_k\geq -R$) amounts to considering Markovian approximations of $\mu_\psi$. How well the corresponding Gibbs distribution approximates the exact one is controlled by eq. (6), where the constant depends on the synaptic weights and currents.

This expansion contains an exponentially increasing number of terms as $R$, or $N$, the number of neurons, grows. However, it does not contain all possible monomials. Indeed, beyond the remark above about the vanishing of non-causal terms, the form of the potential can be considerably reduced from elementary considerations such as stationarity. Moreover, some $\lambda_l$'s can be quite close to zero, and eliminating them does not significantly increase the Kullback-Leibler divergence. This can be used to perform a systematic (algorithmic) reduction of the potential in a more general context than the present model (see the next section and [17] for more details).

To summarize, one can obtain, from the previous analysis, a canonical form for a range-$R$ approximation of $\psi$, of the form:

$$\psi_\lambda^{(R)}=\sum_{l=0}^{K}\lambda_l\,\phi_l, \qquad (8)$$

where all terms contribute significantly to the probability distribution, i.e. removing any of them leads to a significant increase of the KL divergence between the exact Gibbs distribution and its approximation. Thus, in the present case, the relevant constraints (and the values of the related $\lambda_l$'s) can be estimated from the analytic form of the potential.
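A parametric potential of the form (8) is then just a weighted sum of monomials; the following sketch (illustrative $\lambda_l$'s and monomials, not from [1]) evaluates it on a finite spike block:

```python
def potential(lambdas, monomials):
    """Range-R parametric potential psi = sum_l lambda_l phi_l of eq. (8),
    evaluated on a spike block (a list of spiking patterns, block[t][i])."""
    def psi(block):
        return sum(lam * phi(block) for lam, phi in zip(lambdas, monomials))
    return psi
```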

## 8 Parametric estimation of spike trains statistics.

This analysis opens up the possibility of developing efficient algorithms to estimate at best the statistics of spike trains from experimental data, using several guess potentials and selecting the one which minimizes the KL divergence between the Gibbs measure and the empirical measure attached to some experimental raster [17]. The idea is to start from a parametric form of potential (8), of range $R$, and to compute the empirical average of all monomials from the experimental raster. Then, one adjusts the parameters $\lambda_l$ by minimizing the KL divergence. This computation can easily be done using spectral properties of the Perron-Frobenius operator. This algorithm, described in [17], will be presented in another communication at this conference, and is freely available as C++ source code at http://enas.gforge.inria.fr.
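The first step of such an algorithm, computing empirical averages of monomials over an experimental raster, can be sketched as follows (assuming stationarity, so that averages over time windows estimate the expectation; the function name is ours, not from [17]):

```python
def empirical_average(phi, raster, R):
    """Empirical average of a monomial phi over all length-R windows of an
    experimental raster (indexed raster[t][i]); stationarity is assumed."""
    windows = [raster[t : t + R] for t in range(len(raster) - R + 1)]
    return sum(phi(w) for w in windows) / len(windows)
```

With $R=1$ and $\phi(\omega)=\omega_i(0)$ this is simply the empirical firing rate of neuron $i$.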

## 9 Discussion

In this paper we have addressed the question of characterizing the spike train statistics of a network of LIF neurons with noise, in the stationary case, with two aims. Firstly, to obtain analytic and rigorous results allowing the characterization of the process of spike generation. Secondly, to make a connection from this mathematical analysis toward the empirical methods used in the neuroscience community for the analysis of spike trains. Especially, we have shown that the “Ising model” provides, in this example, a bad approximation of the exact Gibbs potential. This is due, on the one hand, to the fact that the Ising potential does not take causality into account and, on the other hand, to the fact that the exact potential includes quite a few more “constraints” than the mere average value of pairwise spike coincidence. Although the first objection can somewhat be relaxed when considering data binning (which we do not consider here), the second one seems unavoidable.

Also, we have shown that the Jaynes method, based on an a priori choice of a “guess” potential with finite range, amounts to approximating the exact probability distribution by the Gibbs distribution of a Markov chain. The degree of approximation can be controlled by the Kullback-Leibler divergence.

One may wonder whether our conclusions, obtained for a model that is rather trivial from the biological point of view, have any relevance for the analysis of real neuron statistics. We would like to point out that the main ingredients making the statistics of this model so complex are the causality induced by dynamics and the integration over past events via the leak term. We do not see any reason why the situation should be simpler for more realistic models. Especially, what makes the analysis tractable is the reset of the membrane potential to a constant value after firing, inherent to IF models and rather unrealistic for real neurons. Thus, the memory effects could be even worse in realistic neurons, with the added difficulty of extracting, from a thorough mathematical analysis, the relevant time scales for a memory cut-off, as the log of the leak rate is in the present model. Nevertheless, this work is a beginning, with a clear need to be extended toward more realistic models.

A natural extension concerns the so-called Generalized Integrate-and-Fire models [11], which are closer to biology [21]. The occurrence of a post-synaptic potential on a synapse, at a given time, results in a change of membrane potential. In conductance-based models this change is integrated in the adaptation of conductances. It has been shown in [12], under natural assumptions on spike-time precision, that the continuous-time evolution of these equations reduces to a discrete-time dynamics. In this case the computation of the potential corresponding to (5) is clearly more complex, but still manageable. This case is under current investigation.

One weakness of the present work is that it only considers stationary dynamics, where e.g. the external current is independent of time. Besides, we consider here an asymptotic dynamics. However, real neural systems are subjected to non-static stimuli, and transients play a crucial role. To extend the present analysis to these cases, one needs the proper mathematical framework. Non-stationarity requires handling time-dependent Gibbs measures. In the realm of ergodic theory applied to non-equilibrium statistical physics, Ruelle has introduced the notion of a time-dependent SRB measure [22]. A similar approach could be used here, at least formally.

In neural networks, synaptic weights are not fixed, as in (1), but evolve with the activity of the pre- and post-synaptic neurons (synaptic plasticity). This means that synaptic weights evolve according to spike train statistics, while spike train statistics are constrained by synaptic weights. This interwoven evolution has been considered in [23], under the assumption that spike-train statistics are characterized by a Gibbs distribution. It is shown that synaptic mechanisms occurring on a time scale which is slow compared to neural dynamics are associated with a variational principle: there is a function, closely related to the topological pressure, which decreases when the synaptic adaptation process takes place. Moreover, the synaptic adaptation has the effect of reinforcing specific terms in the potential, directly related to the form of the synaptic plasticity mechanism. The interest of this result is that it provides an a priori guess of the relevant terms in the potential expansion. Conversely, it allows one to constrain the spike train statistics of a LIF model, using synaptic plasticity with an appropriate rule determined from the form of the expected potential.

Acknowledgment. We are indebted to T. Viéville and H. Rostro for valuable discussions and comments.

### Footnotes

1. The use of the right shift instead of the left shift is standard in the context of chains with complete connections [15].

### References

1. B. Cessac. A discrete time neural network model with spiking neurons ii. dynamics with noise. J.Math. Biol., accepted, 2010.
2. E. Schneidman, M.J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, 2006.
3. Don H. Johnson and Ananthram Swami. The transmission of signals by auditory-nerve fiber discharge patterns. J. Acoust. Soc. Am., 74(2):493–501, August 1983.
4. D. R. Brillinger. Maximum likelihood analysis of spike trains of interacting nerve cells. Biol. Cybern., 59(3):189–200, 1988.
5. E. S. Chornoboy, L. P. Schramm, and A. F. Karr. Maximum likelihood identification of neural point process systems. Biol. Cybern., 59(4-5):265–275, 1988.
6. R. E. Kass and V. Ventura. A spike-train probability model. Neural Comput., 13(8):1713–1720, 2001.
7. Wilson Truccolo, Uri T. Eden, Matthew R. Fellows, John P. Donoghue, and Emery N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J Neurophysiol, 93:1074–1089, 2005.
8. Murat Okatan, Matthew A. Wilson, and Emery N. Brown. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17(9):1927–1961, September 2005.
9. Wilson Truccolo and John P. Donoghue. Nonparametric modeling of neural point processes via stochastic gradient boosting regression. Neural Computation, 19(3):672–705, 2007.
10. C. Pouzat and A. Chaffiol. On goodness of fit tests for models of neuronal spike trains considered as counting processes. http://arxiv.org/abs/0909.2785v1, 2009.
11. M. Rudolph and A. Destexhe. Analytical integrate and fire neuron models with conductance-based dynamics for event driven simulation strategies. Neural Computation, 18:2146–2210, 2006.
12. B. Cessac and T. Viéville. On dynamics of integrate-and-fire neural networks with adaptive conductances. Frontiers in neuroscience, 2(2), jul 2008.
13. H. Soula, G. Beslon, and O. Mazet. Spontaneous dynamics of asymmetric random recurrent spiking neural networks. Neural Computation, 18(1), 2006.
14. B. Cessac. A discrete time neural network model with spiking neurons. i. rigorous results on the spontaneous dynamics. J. Math. Biol., 56(3):311–345, 2008.
15. G. Maillard. Introduction to chains with complete connections. Ecole Polytechnique Fédérale de Lausanne, winter 2007.
17. J.C. Vasquez, B. Cessac, and T. Vieville. Entropy-based parametric estimation of spike train statistics. Journal of Computational Neuroscience, 2010. submitted.
18. E.T. Jaynes. Information theory and statistical mechanics. Phys. Rev., 106(620), 1957.
19. Imre Csiszár. Sanov property, generalized I-projection and a conditional limit theorem. Ann. Prob., 12(3):768–793, 1984.
20. E. Schneidman, M.J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007–1012, 2006.
21. R. Jolivet, A. Rauch, H.-R. Lüscher, and W. Gerstner. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience, 21:35–49, 2006.
22. D. Ruelle. Smooth dynamics and new theoretical ideas in nonequilibrium statistical mechanics. J. Statist. Phys., 95:393–468, 1999.
23. B. Cessac, H. Rostro-Gonzalez, J.C. Vasquez, and T. Viéville. How gibbs distribution may naturally arise from synaptic adaptation mechanisms: a model based argumentation. J. Stat. Phys, 136(3):565–602, August 2009.