Modeling networks of spiking neurons as interacting processes with memory of variable length

Abstract

We consider a new class of non Markovian processes with a countable number of interacting components, both in discrete and continuous time. Each component is represented by a point process indicating if it has a spike or not at a given time. The system evolves as follows. For each component, the rate (in continuous time) or the probability (in discrete time) of having a spike depends on the entire time evolution of the system since the last spike time of the component. In discrete time this class of systems extends in a non trivial way both Spitzer’s interacting particle systems, which are Markovian, and Rissanen’s stochastic chains with memory of variable length which have finite state space. In continuous time they can be seen as a kind of Rissanen’s variable length memory version of the class of self-exciting point processes which are also called “Hawkes processes”, however with infinitely many components. These features make this class a good candidate to describe the time evolution of networks of spiking neurons. In this article we present a critical reader’s guide to recent papers dealing with this class of models, both in discrete and in continuous time. We briefly sketch results concerning perfect simulation and existence issues, de-correlation between successive interspike intervals, the longtime behavior of finite non-excited systems and propagation of chaos in mean field systems.

Key words : Biological neural nets, chains of variable length memory, Hawkes processes, interacting particle systems, mean field interaction, perfect simulation, propagation of chaos.

AMS Classification : 60K35, 60J25, 60J55

1 Introduction

A biological neural system has the following characteristics. It is a system with a huge number of interacting components, the neurons. The activity of each neuron is represented by a point process, namely the successive times at which the neuron emits an action potential, a so-called spike. It is generally considered that this spiking activity is the way the system encodes and transmits information.

The spiking probability or rate of a given neuron depends on its membrane potential. The membrane potential of a given neuron is affected by the actions of all other neurons interacting with it. Neurons interact either by chemical or by electrical synapses. Chemical synapses can be described as follows. Each neuron spikes randomly following a point process with rate depending on its membrane potential. At its spiking time, the membrane potential of the spiking neuron is reset to an equilibrium potential. At the same time, the neurons affected by it receive an additional amount of potential which is added to their membrane potential.

Electrical synapses occur through gap-junctions, which allow neurons in the brain to communicate with one another. They induce an attraction between the membrane potential values of the neurons involved and, as a consequence, a drift of the system towards its center of mass. Finally, leakage channels may induce a loss of membrane potential for each neuron.

The fact that the membrane potential of each neuron is reset to the equilibrium potential when it spikes makes its time evolution depend on a stretch of the past of variable length. More precisely, it depends on the influence received from its presynaptic neurons since its last spiking time. In other terms, the time evolution of such a system is obviously not described by a Markov process (Cessac 2011). In particular, if we consider a continuous-time description of the system, the waiting times between two successive spikes of a single neuron are not exponentially distributed (see, for instance, Brillinger 1988).

Such a system can be described in discrete time in the following way. Consider a small interval of time, of the order of the time it takes for a neuron to emit a spike followed by a refractory period. We indicate the presence or absence of a spiking activity for each neuron within each such time window. The process we obtain is then a system of interacting chains with memory of variable length and a large number of components. This class of systems extends in a non trivial way both the interacting particle systems, which are Markov, see Spitzer (1970), and the stochastic chains with memory of variable length which have finite state space, see Rissanen (1983) or Galves and Löcherbach (2008).

A continuous time description of the process seems however more convenient both from a modeling and a mathematical point of view. This is the point of view adopted in De Masi et al. (2015), Duarte and Ost (2014) and Fournier and Löcherbach (2014).

The present article is intended as a critical reader's guide to the papers mentioned above. We briefly sketch the main results of these papers as well as challenges and next steps to be addressed.

This paper is organized as follows. In Section 2 we introduce a model of an infinite network of spiking neurons in discrete time; in Section 3 an analogous continuous time model using point processes is introduced. In Section 4, we show that under appropriate conditions, such a system of interacting point processes can be represented via an associated interacting particle system which is Markovian. In Section 5 we give an existence and perfect simulation result for the processes defined in Sections 2 and 3. Section 6 considers a finite system composed of $N$ neurons where the graph of synaptic weights is a realization of a (slightly) supercritical directed Erdös-Rényi random graph. In this case, two neighboring interspike intervals are shown to be asymptotically de-correlated as the system size tends to infinity. Sections 7 to 9 are devoted to a study of the associated interacting particle system introduced in Section 4, where we deal successively with the longtime behavior of finite particle systems, with the hydrodynamical limit within a mean field system and finally with asymptotic properties of the limit process. In Section 10 we mention challenges and next steps to be addressed. We close our paper with a discussion in Section 11.

2 Infinite systems of interacting processes with memory of variable length in discrete time

We introduce a new class of stochastic processes, both in discrete and in continuous time, which are models of networks of spiking neurons. The processes we consider are infinite systems of interacting processes with memory of variable length.

Let $I$ be a countable set of neurons and introduce a family of synaptic weights $W_{j \to i} \in \mathbb{R},$ for $j \neq i,$ with $W_{j \to j} = 0$ for all $j.$ We interpret $W_{j \to i}$ as the synaptic weight of neuron $j$ on neuron $i.$ We suppose that the synaptic weights have the following property of uniform summability:

$\sup_{i \in I} \sum_{j \in I} |W_{j \to i}| < \infty.$   (2.1)

Moreover, we shall use a family of spiking rate functions $\phi_i : \mathbb{R} \to [0, 1],\ i \in I,$ and a family of leak functions $g_j : \mathbb{N} \to \mathbb{R}_+,\ j \in I.$ All functions $\phi_i$ and $g_j$ are measurable. We assume that each $\phi_i$ is increasing and uniformly Lipschitz continuous, i.e. there exists a positive constant $\gamma$ such that for all $x, y \in \mathbb{R},$

$|\phi_i(x) - \phi_i(y)| \le \gamma\, |x - y|.$   (2.2)

We start by considering a model in discrete time which is partly inspired by Cessac (2011), who proposes a finite dimensional system. We consider a stochastic chain $X = (X_t)_{t \in \mathbb{Z}}$ taking values in $\{0, 1\}^I,$ where $I$ is the countable set of neurons, defined on a suitable probability space $(\Omega, \mathcal{A}, P).$ For each neuron $i$ at each time $t \in \mathbb{Z},$ $X_t(i) = 1$ reports that neuron $i$ has a spike at that time $t.$ Otherwise we put $X_t(i) = 0.$ The global configuration of neurons at time $t$ is denoted $X_t = (X_t(i),\ i \in I).$ For each neuron $i$ and each time $t,$ let

$L_t^i = \sup\{ s < t : X_s(i) = 1 \}$   (2.3)

be the last spike time of neuron $i$ strictly before time $t.$ At each time $t,$ conditionally on the whole past, sites update independently. This means that for any finite subset $F \subset I,$ if we introduce the filtration

$\mathcal{F}_t = \sigma\big( X_s(i),\ s \le t,\ i \in I \big),\quad t \in \mathbb{Z},$

then we have

$P\big( X_t(i) = a_i,\ i \in F \mid \mathcal{F}_{t-1} \big) = \prod_{i \in F} P\big( X_t(i) = a_i \mid \mathcal{F}_{t-1} \big),\quad a_i \in \{0, 1\},$   (2.4)

where

$P\big( X_t(i) = 1 \mid \mathcal{F}_{t-1} \big) = \phi_i\Big( \sum_{j \in I} W_{j \to i} \sum_{s = L_t^i}^{t-1} g_j(t - s)\, X_s(j) \Big).$   (2.5)

Here $\phi_i$ is the spiking rate function of neuron $i$ introduced above, and $g_j$ the leak function associated to the spiking events of the $j$th neuron. Observe that, since $\phi_i$ is increasing, the contribution of a component $j$ is either excitatory or inhibitory, depending on the sign of $W_{j \to i}.$
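To make the mechanism of (2.4) and (2.5) concrete, the following minimal simulation sketch (ours, not taken from the papers under review) implements the dynamics for a finite set of neurons. The sigmoid choice of $\phi,$ the exponential leak $g$ and the Gaussian weights are purely illustrative, and we adopt the artificial convention that every neuron has spiked at time $0.$

```python
import numpy as np

# Minimal sketch of the discrete-time dynamics (2.4)-(2.5) for a finite set
# of neurons. The sigmoid phi, the exponential leak g and the Gaussian
# weights W are illustrative assumptions, not prescribed by the model.
rng = np.random.default_rng(0)

N, T = 10, 200                        # neurons, time steps
W = rng.normal(0.0, 0.5, (N, N))      # W[j, i] = synaptic weight of j on i
np.fill_diagonal(W, 0.0)

def phi(x):
    # spiking probability: increasing, Lipschitz, with values in [0, 1]
    return 1.0 / (1.0 + np.exp(-x))

def g(s):
    # leak function applied to a spike emitted s time steps ago
    return np.exp(-0.2 * s)

X = np.zeros((T, N), dtype=int)       # X[t, i] = 1 iff neuron i spikes at t
last = np.zeros(N, dtype=int)         # last spike time of each neuron
                                      # (artificial convention: all spiked at 0)
for t in range(1, T):
    for i in range(N):
        # input accumulated by neuron i since its last spike time L_t^i
        u = sum(W[j, i] * sum(g(t - s) * X[s, j] for s in range(last[i], t))
                for j in range(N))
        X[t, i] = rng.random() < phi(u)   # conditionally independent update
    last[X[t] == 1] = t                   # variable length memory: reset
print("spikes per neuron:", X.sum(axis=0))
```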

3 Infinite systems of interacting processes with memory of variable length in continuous time

In continuous time, the activity of each neuron $i$ is described by a counting process $Z^i(t)$ recording, for any $t \ge 0,$ the number of spikes of neuron $i$ during the interval $[0, t].$ The sequence of counting processes $(Z^i(t))_{t \ge 0},\ i \in I,$ is characterized by its intensity process $(\lambda_t^i),\ i \in I,$ defined through the relation

$P\big( Z^i \text{ has a jump in } (t, t + dt] \mid \mathcal{F}_t \big) = \lambda_t^i\, dt,$

where $\mathcal{F}_t = \sigma\big( Z^i(s),\ s \le t,\ i \in I \big),$ and where

$\lambda_t^i = M_i\, \phi_i\Big( \sum_{j \in I} W_{j \to i} \int_{(L_t^i,\, t)} g_j(t - s)\, dZ_s^j \Big).$   (3.6)

Here, $(M_i)_{i \in I}$ is a collection of positive numbers giving the maximal intensity of spiking per neuron (recall that $\phi_i$ takes values in $[0, 1]$), and $L_t^i = \sup\{ s \le t : \Delta Z^i(s) = 1 \}$ is the last spike time of neuron $i$ before time $t.$

This form of intensity process is close to the typical form of the intensity of a multivariate nonlinear Hawkes process as it has been considered since Hawkes (1971). We refer the interested reader to Brémaud and Massoulié (1996) for an extensive and comprehensible study of stability properties of nonlinear Hawkes processes, and to Hansen, Reynaud-Bouret and Rivoirard for the use of Hawkes processes as models of spike trains in neuroscience. See also Delattre, Fournier and Hoffmann (2014) for a study of infinite systems of nonlinear Hawkes processes. Our form of the intensity (3.6) differs from the classical Hawkes setting by its variable memory structure, introduced through the term $L_t^i.$ Hence the spiking intensity of a neuron depends on its history only up to its last spike time, which is a biologically very plausible assumption on the memory structure of the process. Therefore, our model can be seen as a nonlinear multivariate Hawkes process with infinitely many components and a variable memory structure.
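Since $\phi_i$ takes values in $[0, 1],$ the intensity (3.6) is bounded by $M_i,$ which makes simulation by thinning natural. The following sketch is ours; all concrete choices of $\phi,$ $g,$ $W$ and $M$ are illustrative assumptions. Candidate spike times are drawn from a homogeneous Poisson process and accepted with probability $\lambda_t^i / M_i.$

```python
import numpy as np

# Sketch of a simulation of the continuous-time system by thinning, using the
# bound lambda_t^i <= M_i from (3.6) (phi_i takes values in [0, 1]). Candidate
# times come from a Poisson process of rate sum(M); a candidate for neuron i
# is accepted with probability lambda_t^i / M_i. phi, g, W are illustrative.
rng = np.random.default_rng(1)

N, T_max = 10, 50.0
M = np.full(N, 2.0)                        # maximal spiking intensities M_i
W = rng.normal(0.0, 0.5, (N, N))
np.fill_diagonal(W, 0.0)
phi = lambda x: 1.0 / (1.0 + np.exp(-x))   # values in (0, 1)
g = lambda s: np.exp(-0.2 * s)

spikes = [[] for _ in range(N)]            # spike times of each neuron
last = np.zeros(N)                         # last spike time (0 by convention)
t = 0.0
while True:
    t += rng.exponential(1.0 / M.sum())    # next candidate time
    if t > T_max:
        break
    i = rng.integers(N)                    # candidate neuron (uniform: equal M_i)
    # intensity (3.6): spikes received since the last spike of neuron i
    u = sum(W[j, i] * sum(g(t - s) for s in spikes[j] if s > last[i])
            for j in range(N))
    if rng.random() < phi(u):              # accept with prob. lambda_t^i / M_i
        spikes[i].append(t)
        last[i] = t
print("spike counts:", [len(s) for s in spikes])
```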

4 Associated Markov interacting particle system

The choice of a constant leak function $g_j \equiv 1$ in (3.6) gives rise to an intensity process which is Markov. In this case, write

$U^i(t) = \sum_{j \in I} W_{j \to i}\, \big( Z^j(t) - Z^j(L_t^i) \big).$   (4.7)

We can interpret $U^i(t)$ as the value of the membrane potential of neuron $i$ at time $t.$ Then it is straightforward to see that $U(t) = (U^i(t),\ i \in I)$ is a Markov process taking values in $\mathbb{R}^I,$ whose generator is given, for any smooth cylindrical test function $f,$ by

$L f(u) = \sum_{i \in I} M_i\, \phi_i(u_i)\, \big[ f(\Delta_i(u)) - f(u) \big],$   (4.8)

where

$\big( \Delta_i(u) \big)_j = u_j + W_{i \to j} \ \text{ for } j \neq i, \qquad \big( \Delta_i(u) \big)_i = 0.$   (4.9)

In such a system of interacting processes, each neuron is represented by the height of its membrane potential. It spikes at a rate depending on this height. When spiking, its potential goes back to the value $0,$ which can be interpreted as resting potential. At the same time, the neurons influenced by it, i.e. the postsynaptic neurons, receive an additional amount of potential $W_{i \to j},$ independently of the former value of the membrane potential of the spiking neuron. In particular, there is no conservation of mass (i.e. of potential), since the spiking neuron does not re-distribute its own potential (this is, for instance, a main difference with the Potlatch process or with sandpile processes).
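A practical consequence of the Markov property: with $g_j \equiv 1,$ the potentials are constant between spikes, so for a finite system the dynamics (4.8) can be simulated exactly by a Gillespie-type scheme in which the next spike is an exponential race with frozen rates $M_i\, \phi_i(U^i).$ A minimal sketch (ours; $\phi$ and the weights are illustrative):

```python
import numpy as np

# Exact (Gillespie-type) simulation sketch of the finite-dimensional Markov
# dynamics (4.8): between spikes all potentials are constant, so the jump
# rates M_i * phi_i(u_i) are frozen and the next spike is an exponential
# race between the neurons. phi and W are illustrative choices.
rng = np.random.default_rng(2)

N, T_max = 10, 50.0
M = np.full(N, 2.0)
W = np.abs(rng.normal(0.0, 0.3, (N, N)))   # excitatory weights, for simplicity
np.fill_diagonal(W, 0.0)
phi = lambda x: 1.0 / (1.0 + np.exp(-x))

u = rng.uniform(0.0, 1.0, N)               # initial membrane potentials
t, n_spikes = 0.0, 0
while True:
    rates = M * phi(u)                     # constant until the next spike
    t += rng.exponential(1.0 / rates.sum())
    if t > T_max:
        break
    i = rng.choice(N, p=rates / rates.sum())   # the neuron that spikes
    u += W[i]                              # postsynaptic gain W[i, j] for j != i
    u[i] = 0.0                             # spiking neuron resets to 0
    n_spikes += 1
print("total spikes:", n_spikes, "final potentials:", np.round(u, 2))
```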

Notice that from a mathematical point of view, the existence of such a process in infinite dimension, i.e. in the case when $I$ is infinite, is not evident, since interactions might come down from infinity. We do not go into the details, but for a general discussion of existence issues in infinite dimension, we invite the interested reader to consult for example Chapter 1 of Liggett (1985).

Sometimes, we will concentrate on the finite case and take $I = \{1, \ldots, N\}$ for some fixed $N.$ In this case, we might add to the above dynamics (4.8) two terms. The first one is a leak term modeling the fact that throughout its evolution, the membrane potential loses potential due to leakage channels. The second is a drift term modeling the effects of gap-junctions on the system. Whereas the leakage channels tend to push the membrane potential of each neuron towards zero, the gap-junctions, on the contrary, tend to push the whole system towards its average membrane potential value. We are thus led to consider a continuous time Markov process taking values in $\mathbb{R}_+^N$ whose infinitesimal generator is given, for any smooth test function $f,$ by

$L^N f(u) = \sum_{i=1}^N M_i\, \phi_i(u_i)\, \big[ f(\Delta_i(u)) - f(u) \big] - \lambda \sum_{i=1}^N (u_i - \bar{u})\, \frac{\partial f}{\partial u_i}(u) - \alpha \sum_{i=1}^N u_i\, \frac{\partial f}{\partial u_i}(u),$   (4.10)

where $\lambda$ and $\alpha$ are positive parameters and where $\bar{u} = \frac{1}{N} \sum_{j=1}^N u_j.$ Here, $\lambda$ models the strength of the electrical synapses, and $\alpha$ the leakage effect.
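Between two consecutive spikes, the process with generator (4.10) follows the deterministic flow $\dot{u}_i = -\lambda (u_i - \bar{u}) - \alpha u_i.$ Under the parametrization above, this flow can be integrated in closed form (a short computation of ours, useful e.g. for exact simulation of the PDMP): averaging over $i$ gives $\dot{\bar{u}} = -\alpha \bar{u},$ and each difference $u_i - \bar{u}$ decays at rate $\lambda + \alpha,$ so that

$\bar{u}(t) = \bar{u}(0)\, e^{-\alpha t}, \qquad u_i(t) = \bar{u}(t) + \big( u_i(0) - \bar{u}(0) \big)\, e^{-(\lambda + \alpha) t}.$

In particular, in the absence of spiking, the system collapses exponentially fast onto its center of mass, which itself leaks to $0.$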

5 Existence results and perfect simulation

It is natural to ask if there exists at least (and at most) one stationary process which is consistent with the dynamics defined through (2.4) and (2.5) in discrete time, or through (3.6) in continuous time. The answer to this question is intimately related to the structure of the interactions given by the synaptic weights. These interactions can be represented as a directed weighted graph where the directed link $j \to i$ is present if and only if $W_{j \to i} \neq 0,$ and where each such link carries the weight $W_{j \to i}.$ For each neuron $i,$ we introduce

$\mathcal{V}_{\cdot \to i} = \{ j \in I : W_{j \to i} \neq 0 \},$

the set of all neurons that have a direct influence on neuron $i.$ Notice that in our model, $\mathcal{V}_{\cdot \to i}$ can be both finite or infinite. We fix, for each $i,$ a growing sequence $(V_i(k))_{k \ge 1}$ of finite subsets of $I$ such that $V_i(k) \subset V_i(k+1)$ and $\bigcup_k V_i(k) = I.$

We now state our existence and uniqueness result. We formulate it for discrete time systems incorporating spontaneous spike times, see Condition (5.11) below. These spontaneous spikes can be interpreted as an external stimulus or, alternatively, as autonomous activity of the brain. In order to state our result, let us introduce, for all $s \le t,$ the process $X_{[s, t]} = \big( X_r(i),\ s \le r \le t,\ i \in I \big),$ which is the trajectory of $X$ between times $s$ and $t.$

Theorem 1

[Theorem 1 of Galves and Löcherbach (2013)]
Grant conditions (2.1) and (2.2). Assume that the functions $\phi_i$ and $g_j$ moreover satisfy the following assumptions:

i) There exists $\delta > 0$ such that for all $i \in I$ and all $x \in \mathbb{R},$

$\phi_i(x) \ge \delta.$   (5.11)

ii) A smallness condition (5.12) holds, relating the Lipschitz constant $\gamma$ of (2.2), the synaptic weights and the leak functions to the lower bound $\delta$ of condition i); we refer to Galves and Löcherbach (2013) for its precise form.

iii) We have fast decay of the synaptic weights, i.e. the tail sums $\sum_{j \notin V_i(k)} |W_{j \to i}|$ vanish sufficiently fast as $k \to \infty;$ this is condition (5.13), whose precise summability form is given in Galves and Löcherbach (2013).

Then the following assertions hold true.
1) There exists a critical parameter $\delta_* < 1$ such that for any $\delta > \delta_*,$ there exists a unique probability measure $P$ under which the chain $X$ satisfies (2.4) and (2.5).
2) There exists a non increasing function $\varphi$ with $\varphi(n) \to 0$ as $n \to \infty$ such that for any $\delta > \delta_*$ the following loss of memory property holds: for all $n \ge 1$ and all bounded measurable functions $f,$ the dependence of $E\big( f(X_0(i)) \big)$ on the configuration before time $-n$ is bounded by $\varphi(n)\, \| f \|_\infty;$ this is estimate (5.14), stated precisely in Galves and Löcherbach (2013). Moreover, $\varphi(n)$ decays at an explicit rate governed by a fixed constant.

Example 1

Take $I = \mathbb{Z}^d$ and, for all $j \neq i,$

$W_{j \to i} = e^{-\beta \| j - i \|}$

for some fixed $\beta > 0,$ where $\| \cdot \|$ is a norm on $\mathbb{Z}^d.$ In this case, if we choose $V_i(k) = \{ j \in I : \| j - i \| \le k \},$ the tail sums $\sum_{j \notin V_i(k)} |W_{j \to i}|$ decay exponentially in $k,$ and it is easy to see that condition (5.13) is satisfied.

The proof of Theorem 1 implies the existence of a perfect simulation algorithm for the stochastic chain $X.$ By a perfect simulation algorithm we mean a simulation which samples, in a finite space-time window, precisely from the stationary law $P.$ We refer the interested reader to Galves and Löcherbach (2013). In continuous time, the existence of a unique stationary version of a process having intensity (3.6) can be shown by following similar ideas. Details can be found in Hodara and Löcherbach (2014). In particular, here again, we obtain a perfect simulation algorithm for the stationary law.

6 The interaction graph and de-correlation of neighboring interspike intervals

Throughout this chapter, we work within the discrete time model of Section 2.

One central question in theoretical neuroscience concerns the distribution of consecutive interspike intervals (ISI) and in particular their dependence or independence. In order to answer this question, we have to specify our choice of an interaction graph. This is related to a second central question in theoretical neuroscience: what kind of graph should be considered? Beggs and Plenz (2003) argue that networks of living neurons should behave in a slightly supercritical state. Therefore we consider a slightly supercritical directed Erdös-Rényi random graph.

More precisely, for a large but finite system of $N$ neurons, let $(W_{j \to i})_{i \neq j}$ be random synaptic weights, given by a sequence of i.i.d. Bernoulli random variables defined on some probability space $(\tilde{\Omega}, \tilde{\mathcal{A}}, \tilde{P}),$ with parameter $p_N,$ i.e.

$\tilde{P}(W_{j \to i} = 1) = 1 - \tilde{P}(W_{j \to i} = 0) = p_N,$

where $p_N$ is chosen slightly supercritical, i.e.

$p_N = \frac{\lambda_N}{N} \text{ with } \lambda_N \to 1^+ \text{ as } N \to \infty$   (6.15)

(we refer to Galves and Löcherbach (2013) for the precise choice). We put $W_{i \to i} = 0$ for all $i.$ Conditionally on the choice of the connectivities $W = (W_{j \to i})_{i \neq j},$ the dynamics of the chain are then given by (2.4) and (2.5) as before. We denote by $P^W$ the conditional law of the process, conditioned on the choice of $W.$

Fix a neuron $i$ and consider its associated sequence of successive spike times,

$S_1^i < S_2^i < S_3^i < \ldots,$   (6.16)

where

$S_1^i = \inf\{ t \ge 0 : X_t(i) = 1 \}$

and

$S_{k+1}^i = \inf\{ t > S_k^i : X_t(i) = 1 \},\quad k \ge 1.$

Let us fix $k \ge 1.$ We are interested in the covariance between successive inter-spike intervals,

$\mathrm{Cov}\big( S_{k+2}^i - S_{k+1}^i,\ S_{k+1}^i - S_k^i \big),$

for any neuron $i.$ Being in the stationary regime, the above covariance does not depend on the particular choice of $k.$ The next theorem shows that neighboring inter-spike intervals are asymptotically uncorrelated as the number $N$ of neurons tends to infinity.

Theorem 2

[Theorem 3 of Galves and Löcherbach (2013)] Assume that (2.2), (5.11) and (5.12) are satisfied. Then there exists a measurable subset $A_N \subset \tilde{\Omega}$ of “good” graphs such that, on $A_N,$ the covariance between two neighboring inter-spike intervals under $P^W$ is bounded by an explicit quantity, involving the lower bound $\delta$ appearing in Condition (5.11), which tends to $0$ as $N \to \infty.$ Moreover, $\tilde{P}(A_N) \to 1$ as $N \to \infty.$

For large $N,$ if the graph of synaptic weights belongs to the “good” set $A_N,$ the above result is compatible with the discussion in Gerstner and Kistler (2002), arguing that two consecutive interspike intervals can be considered as independent.
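Theorem 2 can also be probed numerically. The following sketch (ours) simulates the discrete-time chain on one realization of the random graph, with the illustrative assumptions $g \equiv 1$ and $\phi(x) = 0.1 + 0.8/(1 + e^{-x}),$ so that the lower bound of Condition (5.11) is $\delta = 0.1,$ and estimates the covariance of neighboring inter-spike intervals of a fixed neuron.

```python
import numpy as np

# Monte Carlo probe of Theorem 2: simulate the discrete-time chain (2.4)-(2.5)
# with g == 1 on a directed Erdos-Renyi graph, then estimate the covariance of
# neighboring inter-spike intervals of neuron 0. Illustrative, not the proof.
rng = np.random.default_rng(3)

N, T = 200, 5000
p_N = 1.05 / N                                   # slightly supercritical regime
W = (rng.random((N, N)) < p_N).astype(float)     # W[j, i] in {0, 1}
np.fill_diagonal(W, 0.0)
phi = lambda x: 0.1 + 0.8 / (1.0 + np.exp(-x))   # spiking prob., delta = 0.1

acc = np.zeros(N)        # input accumulated since each neuron's last spike
spikes0 = []             # spike times of neuron 0
for t in range(T):
    X = (rng.random(N) < phi(acc)).astype(float)
    acc = acc * (1.0 - X) + X @ W    # reset spiking neurons, then add inputs
    if X[0] == 1.0:
        spikes0.append(t)

isi = np.diff(spikes0)               # inter-spike intervals of neuron 0
print("cov of neighboring ISIs:", np.cov(isi[:-1], isi[1:])[0, 1])
```

For large $N,$ the estimated covariance should be close to $0,$ in line with Theorem 2.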

7 Longtime behavior for the associated Markov interacting particle system

We briefly discuss the longtime behavior of the Markov interacting particle system having generator (4.10). Notice that such a process is a piecewise deterministic Markov process (PDMP). We concentrate on the case when all synapses are excitatory, i.e. $W_{j \to i} \ge 0$ for all $i, j.$ Moreover, we consider a homogeneous population where all neurons have the same spiking behavior, i.e. $M_i\, \phi_i = \phi$ for all $i.$ We do not suppose $\phi$ to be bounded nor to be globally Lipschitz continuous any more. All we have to assume is

Assumption 1

$\phi$ is non-decreasing, $\phi(0) = 0,$ and $\phi(u) > 0$ for all $u > 0;$ moreover, $\phi$ satisfies growth conditions, governed by a parameter, near the origin and at infinity (see Duarte and Ost (2014) for their precise form).

In this case, the existence of the process is deduced by a simple coupling argument going back to Fournier and Löcherbach (2014), proving that the total number of jumps during any finite time interval is finite almost surely. Moreover, interestingly enough, Duarte and Ost (2014) show that, if the parameter of Assumption 1 satisfies a suitable bound and if $\alpha > 0,$ i.e. there is some leakage phenomenon, then almost surely the system spikes only a finite number of times (Theorem 2.3 of Duarte and Ost (2014), compare also to Theorem 1 of Robert and Touboul (2014)). In particular, in this case, the process goes extinct almost surely. However, if $\alpha = 0$ and there is no leak effect, then it is straightforward to show that the process does not go extinct, but converges to a non trivial invariant measure. Even more, in this case the process is recurrent in the sense of Harris on a non trivial part of the state space; see Theorem 2.4 of Duarte and Ost (2014). In particular, the trivial invariant measure $\delta_0$ is non attractive.

8 Mean field limits

Suppose we observe a large homogeneous population of neurons in continuous time, evolving according to (4.10). Then we can assume that we are in an idealized situation where all neurons have identical properties, leading to a mean field description. The mean field assumption appears through the choice of the synaptic weights $W_{j \to i} = \frac{1}{N}$ for all $i \neq j.$ Moreover, we suppose from now on that there is no leakage effect, i.e. $\alpha = 0.$ In order to keep track of the size of the system, we denote the process by

$\big( U^{N,1}(t), \ldots, U^{N,N}(t) \big)_{t \ge 0},$

and identify the state of the system at time $t$ with its empirical measure

$\mu_t^N = \frac{1}{N} \sum_{i=1}^N \delta_{U^{N,i}(t)}.$   (8.17)

In Theorem 2 of De Masi et al. (2015) it has been shown that, in the limit as $N \to \infty,$ this membrane potential distribution becomes deterministic and is described by a density $\rho_t(u),$ where for any interval $J \subset \mathbb{R}_+,$ $\int_J \rho_t(u)\, du$ is the limit fraction of neurons whose membrane potential belongs to $J$ at time $t.$ The limit density is proved to obey a non linear PDE which is a conservation law of hyperbolic type,

$\frac{\partial}{\partial t} \rho_t(u) + \frac{\partial}{\partial u} \big( v_{\rho_t}(u)\, \rho_t(u) \big) = - \phi(u)\, \rho_t(u),\quad u > 0,$   (8.18)

where

$v_{\rho_t}(u) = - \lambda\, (u - m_t) + p_t$   (8.19)

is the velocity field: the first term describes the attraction to the average membrane potential of the system, due to the gap junction effect, the second one the drift produced by the spiking of the other neurons. Here, the limit total firing rate per unit time, $p_t,$ and the limit average membrane potential, $m_t,$ are

$p_t = \int_0^\infty \phi(u)\, \rho_t(u)\, du, \qquad m_t = \int_0^\infty u\, \rho_t(u)\, du.$   (8.20)

In (8.18), the expression $- \phi(u)\, \rho_t(u)$ is a loss of mass term due to spiking. Finally, since (8.18) is only defined for $u > 0,$ we have to complete the PDE with boundary conditions which read as follows:

$\rho_t(0)\, \big( p_t + \lambda\, m_t \big) = p_t \ \text{ for all } t > 0, \qquad \rho_t \big|_{t=0} = \rho_0,$   (8.21)

where $\rho_0$ is some initial density of neurons at time $0.$ The first condition in (8.21) equates the mass flux entering at $u = 0,$ namely $v_{\rho_t}(0)\, \rho_t(0),$ with the total firing rate $p_t$ of neurons being reset.

It can be shown that, under suitable assumptions on the initial distribution of neurons at time $0,$ there exists a unique weak solution of (8.18)–(8.21). This is e.g. the case if the distribution of neurons at time $0$ has compact support. For further details, we refer the reader to Theorem 4 of Fournier and Löcherbach (2014).

Theorem 3 (Theorem 2 of De Masi et al. (2015))

Grant Assumption 1 and suppose that $U^{N,1}(0), \ldots, U^{N,N}(0)$ are i.i.d. random variables having a smooth density $\rho_0.$ Let $(\rho_t)_{t \le T}$ be the unique weak solution of (8.18)–(8.21). Then for any fixed $T > 0,$

$\mathcal{L}\big( (\mu_t^N)_{0 \le t \le T} \big) \longrightarrow \mathcal{P}^*$   (8.22)

(weak convergence) as $N \to \infty,$ where $\mathcal{P}^*$ is the law supported by the single distribution valued trajectory $(\pi_t)_{0 \le t \le T}$ given by

$\pi_t(du) = \rho_t(u)\, du$

for all $t \le T.$

Remark 1

The equivalence between the “chaoticity” of the system and a weak law of large numbers for the empirical measures, as proven in Theorem 3, is well-known (see for instance Sznitman (1991)). Chaoticity means that in the large population limit, fixed finite collections of neurons converge in law to independent and identically distributed copies of the same limit law. This property is usually called “propagation of chaos” in the literature.

In case $\lambda = 0$ (recall that we already imposed $\alpha = 0$), (8.18) reads as follows:

$\frac{\partial}{\partial t} \rho_t(u) + p_t\, \frac{\partial}{\partial u} \rho_t(u) = - \phi(u)\, \rho_t(u),\quad u > 0, \qquad \rho_t(0) = 1.$

This equation is different from the so-called “population density equations” which are obtained for integrate-and-fire neurons as considered e.g. in Chapter 6.2.1 of Gerstner and Kistler (2002), see in particular their formula (6.14). As in integrate-and-fire models, in our model spiking neurons are also reset to a reversal potential (which equals $0$); but spiking does not create Dirac-masses at the reset value. This is due to the Poissonian mechanism giving rise to spiking in our model. The loss of mass at time $t$ due to spiking of neurons having potential height $u$ is therefore described by the term $\phi(u)\, \rho_t(u).$

At the same time, spiking induces a deterministic drift $p_t$ for those neurons that are not spiking. Finally, conservation of total mass implies that the density of neurons at the border $u = 0$ is of height $\rho_t(0) = 1.$ This boundary condition is different from the usual condition obtained in integrate-and-fire models.
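To see the limit equation in action, here is a numerical sketch (ours) for the case $\lambda = 0$: a first-order upwind discretization of the transport equation above, with the illustrative choice $\phi(u) = u^2$ (which satisfies Assumption 1).

```python
import numpy as np

# Upwind finite-difference sketch of the limit PDE for lambda = 0:
#   d/dt rho + p_t d/du rho = -phi(u) rho,   rho_t(0) = 1,
# where p_t = int phi(u) rho_t(u) du is the instantaneous firing rate.
# Grid sizes and phi(u) = u**2 are illustrative choices.
phi = lambda u: u ** 2
L_dom, J, dt, steps = 5.0, 500, 1e-4, 20000
du = L_dom / J
u = np.linspace(0.0, L_dom, J + 1)
rho = np.exp(-u)                         # some initial density of mass ~ 1

for _ in range(steps):
    p = (phi(u) * rho).sum() * du        # p_t, by a Riemann sum
    dru = np.zeros_like(rho)
    dru[1:] = (rho[1:] - rho[:-1]) / du  # upwind d/du (transport speed p >= 0)
    rho = rho - dt * (p * dru + phi(u) * rho)
    rho[0] = 1.0                         # boundary condition at u = 0
print("mass:", rho.sum() * du, "firing rate:", (phi(u) * rho).sum() * du)
```

The printed mass should remain approximately $1$: the loss $\int \phi\, \rho_t$ is balanced by the inflow $p_t\, \rho_t(0) = p_t$ at the boundary.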

Remark 2

The mean field approach intending to replace individual behavior in large homogeneous systems of interacting neurons by the mean behavior of the neuronal population has a long tradition in the frame of neural networks, see e.g. Chapter 6 of Gerstner and Kistler (2002) or Faugeras et al. (2009) and the references therein. Most of the models used in the literature are either based on rate models where randomness comes in through random synaptic weights (see e.g. Cessac et al. (1994) or Moynot and Samuelides (2002)); or they are based on populations of integrate and fire neurons which are diffusion models in either finite or infinite dimension, see for instance Delarue et al. (2012) or Touboul (2014). The model we consider is reminiscent of integrate-and-fire models but firing does not occur when reaching a fixed threshold, and the membrane potential is not described by a diffusion process. The only noise which comes in is the Poissonian noise given by the spiking features, compare also to Robert and Touboul (2014).

9 Further results

The limit density $\rho_t$ which solves (8.18)–(8.21) can be interpreted as the density of a typical single neuron evolving within an infinite system of neurons according to (4.8). Its dynamics can be described as follows. Let $Y_0$ be a $\rho_0$-distributed random variable, independent of a Poisson random measure $N(ds, dz)$ on $\mathbb{R}_+ \times \mathbb{R}_+$ having intensity measure $ds\, dz.$ Then

$Y_t = Y_0 + \int_0^t p_s\, ds - \lambda \int_0^t (Y_s - m_s)\, ds - \int_0^t \int_0^\infty Y_{s-}\, 1_{\{ z \le \phi(Y_{s-}) \}}\, N(ds, dz),$   (9.23)

where $p_s = E\, \phi(Y_s)$ and $m_s = E\, Y_s.$

Notice that the above dynamics is that of a nonlinear Markov process (in particular, non homogeneous in time), since the law of the process itself – representing the state of the other neurons within the infinitely large system – is involved in its dynamics.

As in the case of finite systems of neurons, the limit process possesses exactly two invariant measures supported in $\mathbb{R}_+.$ The first one is the trivial one, $\delta_0.$ The second one is of the form $g(x)\, dx,$ with a density $g$ determined by the stationarity of the limit dynamics, where the stationary firing rate $p$ and the stationary mean potential $m$ are uniquely determined by the constraints $\int_0^\infty g(x)\, dx = 1$ and $\int_0^\infty \phi(x)\, g(x)\, dx = p.$ Note that for $\lambda = 0$ this reads as

$g(x) = \exp\Big( - \frac{1}{p} \int_0^x \phi(y)\, dy \Big),\quad x \ge 0,$   (9.24)

with $p > 0$ uniquely determined by the normalization $\int_0^\infty g(x)\, dx = 1.$
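To see why (9.24) is the natural candidate, here is a short consistency check (our computation, under the reconstruction above). For $\lambda = 0,$ a stationary density $g$ must balance transport at constant speed $p$ against killing at rate $\phi,$ i.e. $p\, g'(x) = - \phi(x)\, g(x)$ for $x > 0,$ whence

$g(x) = g(0)\, \exp\Big( - \frac{1}{p} \int_0^x \phi(y)\, dy \Big).$

If $\int_0^\infty \phi(y)\, dy = \infty,$ the total killed mass is $\int_0^\infty \phi(x)\, g(x)\, dx = p\, g(0),$ and equating it with the firing rate $p$ reinjected at $0$ gives $g(0) = 1,$ i.e. exactly (9.24); the normalization $\int_0^\infty g(x)\, dx = 1$ then pins down $p.$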

Contrary to the case of a finite system of neurons, it is surprisingly difficult to show that the limit system does not go extinct, i.e. that the law of $Y_t$ does not tend to $\delta_0$ weakly, at least in the presence of gap junctions, $\lambda > 0.$ The whole picture is only known in the case $\lambda = 0.$

Proposition 1 (Prop. 9 of Fournier and Löcherbach (2014))

Grant Assumption 1 and suppose moreover that $\phi$ is convex increasing and that $\lambda = 0.$ Suppose that the law of $Y_0$ has a density satisfying suitable regularity and integrability conditions (we refer to Fournier and Löcherbach (2014) for their precise form). Denote by $\mu_t$ the law of $Y_t,$ and write $\mu_{\mathrm{inv}}$ for the invariant probability measure having the density $g$ defined in (9.24). Then we have $\| \mu_t - \mu_{\mathrm{inv}} \|_{TV} \to 0$ as $t \to \infty,$ where $\| \cdot \|_{TV}$ denotes the total variation distance. In particular, the process does not go extinct, almost surely.

The case $\lambda > 0$ is more subtle, and there we only know that the process does not go extinct under minimal conditions on the spiking rate function, cf. Proposition 11 of Fournier and Löcherbach (2014).

10 Questions and challenges

In this section we raise several natural questions in the context of the models considered in this paper.

To which extent is a mean field description, as adopted in Sections 8 and 9 above, relevant from a neurobiological point of view? The mean field approach, intending to replace individual behavior in large homogeneous systems of interacting neurons by the mean behavior of the neuronal population, has a long tradition in the frame of neural networks. Bressloff (2009) argues that considering homogeneous populations “is motivated by the observation that neurons in cortex with similar response properties tend to be arranged in vertical columns. A classical example is the columnar-like arrangement of neurons in primary visual cortex that prefer stimuli of similar orientation”. Therefore it is reasonable to consider that such systems of neurons are governed by interactions of mean-field type.

Moreover, Bojak et al. (2010) claim “that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD …”. For a recent review paper we refer the reader to Pinotsis et al. (2014).

Description of the system and propagation of chaos. EEG as well as fMRI data describe the collective behavior of huge subpopulations of neurons. This makes it reasonable to consider a space-time rescaling of the “microscopic system”, reminiscent of what is usually done for interacting particle systems under the name of “hydrodynamical limits”, see De Masi and Presutti (1991) and Kipnis and Landim (1999). The difficulty for neurobiological models is that, contrary to the case of thermodynamical systems considered in statistical physics, we have no macroscopic qualitative results available. In a nutshell, as far as we know, in neurobiology there is presently nothing that plays the role that the Fourier law plays in thermodynamics. Therefore, the first problem is to understand what kind of limiting behavior should be obtained when rescaling a stochastic model describing a system of spiking neurons.

Chaos propagation is an issue directly associated to the above discussion. The concept of propagation of chaos was introduced by Mark Kac in his seminal paper of 1956, see Kac (1956). His goal was to show that within a system of a huge number of interacting components, in a suitable limit, any fixed number of components behave as independent stochastic processes having the same distribution (for more details, we refer the reader to the classical reference Sznitman (1991)).

The recent literature in neuromathematics presents many results concerning the propagation of chaos in stochastic models of neuronal networks. However, it is far from clear what the neurobiological meaning of these results is. Moreover, it is difficult to find neurobiological experimental results clearly related to chaos propagation. At the same time, in a large majority of the mathematical papers, including ours, there is no discussion of the neurobiological relevance of this issue.

Exceptions are recent papers and lectures of Olivier Faugeras and members of his team. For instance, in Baladron et al. (2012) the authors write the following: “We prove a propagation of chaos property which shows that in the mean-field limit, the neurons become independent, in agreement with some recent experimental work [13] and with the idea that the brain processes information in a somewhat optimal way.” In the above, [13] refers to Ecker et al. (2010), which presents experimental evidence concerning the de-correlation of neuronal firing in the visual cortex. There, the authors argue that “the de-correlated state of the neocortex […] offers substantial advantages for information processing.”

We believe that a more systematic discussion of the relation between mathematical results on chaos propagation and qualitative experimental results of neurobiology should be undertaken by the neuromathematical community.

What about inhibitions? Inhibitions are considered in Sections 2, 3 and 5, but the theoretical results proved there do not take advantage of the balance between excitatory and inhibitory neurons. As far as we know, no rigorous neuromathematical result relies on this balance. This is clearly a step which must be achieved in the near future.

On the other hand, many articles in the neurobiological literature suggest that inhibition should play a crucial role in explaining qualitative aspects found in the data. For instance, Benayoun et al. (2010) suggest that the balance between inhibition and excitation is an important ingredient in the explanation of avalanche phenomena. However, it is quite simple to conceive mathematical toy models without inhibition for systems of spiking neurons in which avalanches are produced. Another example is the belief, which seems to be widespread in the neuroscientific community, concerning the role played by inhibition in phenomena like chaos propagation (cf. for instance van Vreeswijk and Sompolinsky (1996) and Huntsman et al. (1999)). However, our Theorem 3 concerning propagation of chaos does not rely at all on the presence of inhibitory synapses. We believe that it is important to better understand the importance of inhibition in the qualitative behavior of stochastic models like those presented in this paper.

What about statistical model selection? We have presented in this paper what we believe to be a good class of candidate models describing spiking neuronal behavior. Therefore, if we were able to do statistical model selection for this class of models, we would be able to clarify issues like the way different neurons and regions of the cortex interact. In mathematical terms, this amounts to estimating the underlying interaction graph. There are two obvious difficulties.

First of all, our data concern only an extremely tiny part of the cortex (a very small number of recorded neurons). What does this very small view tell us about the entire region? In former papers, one of the authors showed that for models coming from statistical physics like the Ising model, under a Dobrushin type condition, it is possible to make inference about the entire system by just observing a small part of it. We refer the interested reader to Galves, Orlandi and Takahashi (2015). Is this type of result still valid when we look at a system like the brain?

The second problem is the computational complexity of the selection procedure among all possible interaction graphs supporting models like ours. In the case of chains with memory of variable length, thanks to the famous Context algorithm introduced by Rissanen (1983), to more recent versions of the BIC approach to the same problem by Csiszár and Talata (2006), as well as to its application to random Markov fields with variable neighborhoods in Löcherbach and Orlandi (2011), this selection problem can be reduced to a problem of linear complexity. Is a solution like this available for the class of models considered here?

11 Discussion

We close with a discussion of particular aspects of the models we propose in this paper. We start by discussing the “variable length memory dependence” on the past which is incorporated in our models via (2.5) or (3.6). From a theoretical point of view, Cessac (2011) suggested the same kind of dependence on the past in a discrete time model. In the framework of leaky integrate and fire models, he considers a system with a finite number of membrane potential processes. The image of this process in which only the spike times are recorded is a stochastic chain of infinite order where each neuron has to look back into the past until its last spike time. Cessac’s process is a finite dimensional version of the model considered in Section 2.

A second important feature of the class of processes introduced in our paper is the fact that we are able to deal with infinite systems of neurons in a rigorous mathematical way. Finite systems of point processes in discrete or continuous time aiming to describe biological neural systems have a long history whose starting points are probably Hawkes (1971) from a probabilistic point of view and Brillinger (1988) from a statistical point of view; see also the interesting paper by Krumin et al. (2010) for a review of the statistical aspects. For nonlinear Hawkes processes, but in the frame of a finite number of components, Brémaud and Massoulié (1996) address the problem of existence, uniqueness and stability. Møller and coauthors propose a perfect simulation algorithm in the linear case, see Møller and Rasmussen (2005). In spite of the great attention that Hawkes processes have received recently, especially in association with modeling problems in finance and biology, all these studies are restricted to systems with a finite number of components. Only recently, Delattre, Fournier and Hoffmann (2014) studied an infinite system of nonlinear Hawkes processes, but did not address the perfect simulation issue. Notice also that in none of the above articles a variable length memory dependence on the past is incorporated, although this kind of dependence is completely natural in the frame of neural nets.

Acknowledgments

We thank Pierre Hodara and Daniel Takahashi for careful reading of this manuscript and valuable comments.

This article was produced as part of the activities of FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). This work is part of USP project “Mathematics, computation, language and the brain” and CNPq project “Stochastic modeling of the brain activity” (grant 480108/2012-9). AG is partially supported by a CNPq fellowship (grant 309501/2011-3).
