A phase transition in the first passage of a Brownian process through a fluctuating boundary: implications for neural coding.

Abstract

Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics and industry. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied settings in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability that a Gauss-Markov process will first exceed the boundary at time $t$ undergoes a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent $H$, with critical value $H = 1/2$. For smoother boundaries, $H > 1/2$, the probability density is a continuous function of time. For rougher boundaries, $H < 1/2$, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point $H = 1/2$ corresponds to a widely studied case in the theory of neural coding, where the external input integrated by a model neuron is a white-noise process, such as uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.

random walk — first-passage time — phase transition — neural code


DRAFT: To Be Submitted to Proceedings of the National Academy of Sciences of the United States of America

Abbreviations:

FPT, first passage time; OUP, Ornstein-Uhlenbeck process; LIF, leaky integrate and fire neuron


A Brownian process $x(t)$ which starts at time $t = 0$ from $x(0) = 0$ will fluctuate up and down, eventually crossing any given value $x^*$ infinitely many times: for any given realization of the process there will be infinitely many different values of $t$ for which $x(t) = x^*$. Finding the very first such time,

$$t^* = \inf\{\, t > 0 : x(t) = x^* \,\},$$

known as the first passage of the process through the boundary $x^*$, is easier said than done, one of those classical problems whose concise statements conceal their difficulty [1, 2, 3, 4]. For general fluctuating random processes the first passage time problem (FPTP) is both extremely difficult [5, 6, 7, 8, 9] and highly relevant, due to its manifold practical applications: it models phenomena as diverse as the onset of chemical reactions [10, 11, 12, 13, 14], transitions of macromolecular assemblies [15, 16, 17, 18, 19], time-to-failure of a device [20, 21, 22], accumulation of evidence in neural decision-making circuits [23], the “gambler’s ruin” problem in game theory [24], species extinction probabilities in ecology [25], survival probabilities of patients and disease progression [26, 27, 28], triggering of orders in the stock market [29, 30, 31], and firing of neural action potentials [32, 33, 34, 35, 36, 37].
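To build intuition, the first-passage time is straightforward to estimate by direct simulation of discretized Brownian paths. The following minimal sketch is ours (it is not the algorithm used in the Results section), with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(boundary, n_paths=200, dt=1e-3, t_max=10.0, sigma=1.0):
    """Estimate first-passage times of discretized Brownian paths through boundary(t)."""
    n = int(t_max / dt)
    t = dt * np.arange(1, n + 1)
    # Each row is one Brownian path: cumulative sum of Gaussian increments.
    paths = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n)), axis=1)
    crossed = paths >= boundary(t)        # True wherever the path is above the boundary
    hit = crossed.any(axis=1)             # which paths crossed at all before t_max
    first = np.argmax(crossed, axis=1)    # index of the first crossing (0 if none)
    return t[first[hit]], hit

# Constant boundary at x* = 1: a majority of paths cross within 10 time units.
times, hit = first_passage_times(lambda t: 1.0)
```

Note that a fixed time grid systematically overestimates the first-passage time, since the path can cross and return between grid points; this is one reason exact methods, discussed later, refine the grid recursively.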

Much attention has been devoted to two extensions of this basic problem. One is the first passage through a stationary boundary within a complex spatial geometry, such as diffusion in porous media or complex networks, as this describes foraging search patterns in ecology [38, 39] and the speed at which a node can receive and relay information in a complex network [40, 41].

The second extension is the first passage through a boundary that is a fluctuating function of time [42, 43, 44], a problem with direct application to the modeling of neural encoding of information [45, 46]. This problem and its application are the subject of this paper. The connection arises as follows. The membrane voltage of a neuron fluctuates in response to both synaptic inputs and internal noise. As soon as a threshold voltage is exceeded, nonlinear avalanche processes are awakened which cause the neuron to generate an action potential or spike. Therefore the generation of an action potential by a neuron involves the first passage of the fluctuating membrane voltage through the threshold. These dynamics of spike generation underlie neural coding: neurons communicate information through their electrical spiking, and the functional relation between the information being encoded and the spikes is called a neural code. Two important classes of neural code are rate codes, in which information is encoded only in the average number of spikes per unit of time (the rate) without regard to their precise temporal pattern, and temporal codes, in which the precise timing of action potentials, either absolute or relative to one another, conveys information.

Central to the distinction between rate and temporal codes is the notion of jitter or temporal reliability. This notion originates from repeating an input again and again and aligning the resulting spikes to the onset of the stimulus. Time jittering is assessed graphically through a raster plot and quantitatively through a peristimulus time histogram (PSTH), which permits verifying the temporal accuracy with which the neuron repeats action potentials. A fundamental observation is that the very same neuron may lock onto fast features of a stimulus yet show great variability when presented with a featureless, smooth stimulus [33]. These two are extreme examples from a continuum: the jitter in spike times depends directly on the stimulus being presented [47].
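In code, a PSTH is just a normalized histogram of spike times pooled over trials. The sketch below uses made-up uniform spike data purely for illustration:

```python
import numpy as np

def psth(spike_times, n_trials, t_max, bin_width):
    """Peristimulus time histogram: trial-averaged firing rate in each time bin."""
    n_bins = int(round(t_max / bin_width))
    edges = np.linspace(0.0, t_max, n_bins + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    # Divide by trials and bin width to convert counts to a rate (spikes per second).
    return edges[:-1], counts / (n_trials * bin_width)

# Hypothetical data: 500 spikes pooled over 50 presentations of a 1 s stimulus.
rng = np.random.default_rng(3)
spikes = rng.uniform(0.0, 1.0, size=500)
t_bins, rate = psth(spikes, n_trials=50, t_max=1.0, bin_width=0.01)
```

A raster plot displays the same pooled spike times stacked by trial; the PSTH is its column-wise summary.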

1 First passage through a rough boundary

We shall make use of a simple geometrical construction, mapping the dynamics of a neuron with an input, internal noise and a constant threshold voltage, onto a neuron with internal noise and a fluctuating threshold voltage; the construction thus maps the input onto fluctuations of the threshold. We use as our model neuron the leaky integrate-and-fire neuron (LIF), a simple yet widely-used [36, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] model of neuronal function defined by

$$\tau\,\dot V(t) = -V(t) + I(t) + \xi(t) \tag{1}$$

where $V$ is the membrane voltage, $\tau$ is a decay time given by the $RC$ constant of the membrane, $I(t)$ the current that the neuron receives as an input through synapses, and $\xi(t)$ an internal noise. When $V$ first reaches a threshold value $\theta$ an action potential is generated, and the voltage is reset to zero. The nonlinearity of the model is concentrated in the spike generation and subsequent reset, so that between spikes we can integrate separately the effect of the input and of the noise: writing $V = u + X$, the deterministic part $u$ satisfies $\tau\,\dot u = -u + I(t)$, while the stochastic part $X$ is the Ornstein-Uhlenbeck process $\tau\,\dot X = -X + \xi(t)$.

Because the input $I(t)$ is fixed, the equation for $u$ needs to be solved just once. Then the problem of $V$ reaching the threshold $\theta$ can be recast as $X$ reaching the boundary $B(t) = \theta - u(t)$: we have transformed a problem with an input and a constant threshold into a problem with no input and a fluctuating threshold $B(t)$. The reset operation is modified accordingly (see Appendix).

These considerations lead us to examine the problem of the first passage time through a fluctuating threshold. In order to develop some intuition about the problem, we are going to break it up into two parts, a “geometrical optics” part, in which most first passages can be accounted for by simple “visibility” considerations, and a “diffractive” correction in which we take into account that random walkers can turn around corners. The geometrical part is simple: most first passages are generated by the walker running into a hard-to-avoid obstacle, as shown in Figure 1a. The intuition is that the walkers are moving left to right, rising onto a ceiling from which features are hanging, and as the walkers rise they collide with some feature. The problem is thus twice symmetry-broken: what matters are local minima of the boundary, not the maxima, which are hard to get into; and the walkers only spontaneously run onto the left flank of a local minimum. Therefore, a good first order approximation follows from observing that most of the first passages occur on the left flanks of local minima, and deeper local minima cast “shadows” on subsequent shallower minima.

Figure 1: How a random walk first hits a moving boundary $B(t)$. In all panels, time is horizontal, the process and the boundary vertical. (A) It is highly probable to hit the left flank of a minimum, as the walkers are moving left to right and from the bottom up. (B) Each minimum “casts a shadow” behind it, so that hitting features behind it may be hard, as this requires missing the minimum, then rising sufficiently high to hit the second feature. (C) Hitting the right (rising) flank of a minimum is hardest, since it requires missing the minimum narrowly, then rising up, setting up a “race condition” between the boundary and the walker. Lower panels D and E: 300 sample paths which start at the red point on the left and have their first passage through the boundary (white) at the red point on the right. White curve: average trajectory (analytic). Sample paths are colored by the probability density of the point they go through. In (D), hitting a left flank of a minimum is easy, and the average trajectory to do so does not significantly deviate from the deterministic trajectory until the very end, where the white curve can be seen to rise onto the minimum following a square root. In (E), hitting the right flank of a minimum is hard, and the average trajectory to do so strongly deviates from the deterministic trajectories of the system, missing the minimum by just enough not to collide with it, then rapidly rising to meet the first passage point, again in a square-root profile.

Figure 2: Raster plots and PSTHs. A small segment of our dataset is displayed for clarity. A raster plot and a plot of the PSTH are shown for each of three Hölder exponents: 0.25 (rough), 0.5 (transition) and 0.75 (smoother, though still not differentiable). There are approximately the same number of spikes in all three groups. The raster plots display the times at which the neuron fired (i.e. a first passage) stacked vertically (as a function of stimulus presentation number) to show repeatability. The PSTHs show a temporal histogram of said spikes. Please note the differences in vertical scale of the PSTHs: for Hölder exponent 0.75 there are no bins with fewer than 10 counts or more than 60, while for 0.25 most bins have 0 counts while a few have over 1000 counts.

Figure 3: (a) Probability density of firing as a function of time (horizontal) and Hölder exponent (vertical), color coded in log scale. 51 values of the Hölder exponent between 0.25 and 0.75 are stacked vertically. The bin counts shown in the PSTHs of Fig 2 are color coded with a logarithmic code. (b) 3D rendering of a section of the data in (a): the vertical axis and color scale are logarithmic in the rate, where it is evident that towards the back of the figure (Hölder exponents below 1/2) the rate either diverges or goes to zero almost everywhere.

However, there is a finite probability that a walker may narrowly avoid a local minimum and pass just under it, only to rapidly rise afterwards and hit the right, rising flank of the barrier, as shown in Figure 1C. This is, effectively, a race between the boundary and the walker: if the walker can rise far faster than the boundary, then there is some probability of passage right of the minimum. But if the boundary rises faster than a walker can catch up with, then the probability of passage right of the minimum can be exponentially small. Let us consider a local minimum of the barrier at time $t_0$ of the form

$$f(t) = f(t_0) + c\,|t - t_0|^{H},$$

and consider a walker that has just narrowly missed the minimum by an amount $\Delta$: $x(t_0) = f(t_0) - \Delta$. The probability of the process to be at value $x$ at time $t_0 + t$ is, to leading order,

$$P(x, t) \approx \frac{1}{\sqrt{2\pi\sigma^2 t}}\,\exp\!\left(-\frac{\big(x - x(t_0)\big)^2}{2\sigma^2 t}\right),$$

and thus the probability of arriving at the barrier at time $t_0 + t$ is approximately

$$p(t) \propto \frac{1}{\sqrt{t}}\,\exp\!\left(-\frac{\big(\Delta + c\,t^{H}\big)^2}{2\sigma^2 t}\right).$$

When $H < 1/2$ this expression has an essential singularity: its value is exponentially small for small times, and in fact the probability and all of its derivatives are zero at $t = 0$. For instance, consider a barrier whose flank to the right of the local minimum rises like $t^{1/4}$. As the fourth root in the barrier rises much more rapidly than the square-root spread of the walker, the probability of hitting the barrier just after the minimum looks like $e^{-c^2/(2\sigma^2 \sqrt{t})}$, a function that has an essential singularity at $t = 0$: the function as well as all of its derivatives approach $0$ as $t \to 0^{+}$.

The exponent we described above, called the Hölder exponent of the function, quantifies the ability of the barrier to, locally, rise faster or slower than a random walk. More formally, a function $f$ is said to be $H$-Hölder continuous if it satisfies $|f(t) - f(s)| \le C\,|t - s|^{H}$ for some constant $C$ and all $s, t$; the roughness exponent of the function is the largest value of $H$ for which the function satisfies a Hölder condition. Up to now we have considered a single local minimum, and even though the probability of crossing just right of it is exponentially small for $H < 1/2$, it is still nonzero. However, if the boundary is rugged, the local minima are dense. This density is not an issue for $H > 1/2$, that is, inputs which are smoother than the internal noise; in this case the probability density of first passages is nowhere zero. But when $H < 1/2$, so that the input is rougher or burstier than the internal noise, the probability density ceases to be a function: it is zero almost everywhere except for a set of zero measure where it diverges.
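The Hölder (roughness) exponent can be estimated numerically from how the mean absolute increment of a function scales with the increment size. The sketch below, entirely ours, checks this on a Weierstrass-type function built with a prescribed exponent:

```python
import numpy as np

def weierstrass(t, H, n_terms=20):
    """Weierstrass-type function with roughness (Hölder) exponent H."""
    return sum(2.0 ** (-n * H) * np.cos(2.0 ** n * t) for n in range(n_terms))

def estimate_holder(f, t, scales):
    """Slope of log(mean |f(t+s) - f(t)|) against log(s)."""
    log_inc = [np.log(np.abs(f(t + s) - f(t)).mean()) for s in scales]
    slope, _ = np.polyfit(np.log(scales), log_inc, 1)
    return slope

t = np.linspace(0.0, 10.0, 20000)
scales = 2.0 ** -np.arange(4, 10)
H_est = estimate_holder(lambda u: weierstrass(u, H=0.3), t, scales)
```

For a true Brownian path the same estimator returns values near 1/2, which is precisely why $H = 1/2$ separates boundaries the walker can outrun from boundaries it cannot.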

2 Results

We postpone to the Appendices the more formal proofs of regularity of the first passage time probability distributions. We proceed now, instead, to discuss numerical simulations and their analysis.

We carried out careful numerical integration of equation (1) for Hölder exponents in the range $[0.25, 0.75]$ in increments of $0.01$. In order for the results of the simulations at different Hölder exponents to be directly comparable to one another, we generated the inputs by using the exact same overall coefficients in the basis functions of the Ornstein-Uhlenbeck process described in [60], but scaled differently according to the Hölder exponent laws (see Appendix). For each of the 51 Hölder exponents between 0.25 and 0.75, repeated presentations of the stimulus were performed, accumulating the first passages that make up our dataset of 7.5 billion events. We computed the first passages using the fast algorithm described in [55, 60], which carries out exact integration in intervals which are recursively subdivided when the probability that the process attains the first passage within the interval exceeds a preset threshold. The first passages were computed to high accuracy, with the allowable probability that a computed passage is not in fact the first one set small enough that the overall probability that any one of our 7.5 billion numbers is not a true first passage is negligible. The values of the first passages were histogrammed in equal-width bins; this histogram, which we call our PSTH (peristimulus time histogram) in analogy to the term in use in neural coding, represents the instantaneous probability distribution of first passage integrated over the bins, or, equivalently, the finite differences over a grid of the cumulative probability distribution function for firing.
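The exact recursive algorithm of [55, 60] is too long to reproduce here, but its core idea, bracketing a crossing on a coarse grid and refining the bracket by sampling Brownian-bridge midpoints, can be sketched as follows. This is our own simplified illustration: a small upward drift and all parameter values are chosen only so the demo terminates quickly.

```python
import numpy as np

rng = np.random.default_rng(1)

def refine_crossing(t0, x0, t1, x1, barrier, tol=1e-9):
    """Bisect a bracket [t0, t1] known to contain a crossing, sampling the
    midpoint value from the Brownian bridge pinned at the two endpoints."""
    while t1 - t0 > tol:
        tm = 0.5 * (t0 + t1)
        # Bridge midpoint: Gaussian around the linear interpolant,
        # with standard deviation sqrt(t1 - t0) / 2.
        xm = 0.5 * (x0 + x1) + 0.5 * np.sqrt(t1 - t0) * rng.standard_normal()
        if xm >= barrier(tm):
            t1, x1 = tm, xm    # a crossing already occurred in the left half
        else:
            t0, x0 = tm, xm    # keep refining the right half
    return 0.5 * (t0 + t1)

barrier = lambda t: 1.0
dt, t, x, mu = 0.01, 0.0, 0.0, 1.0   # drift mu > 0 makes the crossing quick
t_hit = None
for _ in range(100000):
    t_next = t + dt
    x_next = x + mu * dt + np.sqrt(dt) * rng.standard_normal()
    if x_next >= barrier(t_next):
        t_hit = refine_crossing(t, x, t_next, x_next, barrier)
        break
    t, x = t_next, x_next
```

This sketch only refines crossings already visible on the coarse grid; the algorithm of [55, 60] additionally bounds the probability that an unseen excursion crossed between grid points, which is what drives its recursive subdivision.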

Figure 4: Density map of PSTH bin counts. The individual bin counts of the PSTHs as shown in Figs 2 and 3 are histogrammed here, and the value displayed as a logarithmic color map. All 7.5 billion spikes in our dataset were used for this plot. The bin counts are normalized by the average bin count. For large Hölder exponents, the probability of observing an actual count agrees with counting statistics given the average. As the Hölder exponent becomes smaller, this distribution becomes wider, until below 0.5 it becomes heavy-tailed. Notice the bottom row of the figure, representing the probability of observing a bin with zero counts. It is zero for all Hölder exponents above 0.5, becomes nonzero at 0.5, and for exponents below 0.5 it is the maximum of the distribution (i.e. the brightest red value).

The transition from a smooth probability distribution to a singular measure is illustrated in Figures 2 and 3, where, as the Hölder exponent is lowered, the concentration of the first-passage probability on a small set is evident. Histogramming the individual bins of the PSTH we get the probability distribution of observing a given instantaneous rate of firing, shown in Figure 4. For large Hölder exponents the rate does not deviate far from its mean. However, as the Hölder exponent approaches $1/2$, both the probability of observing a zero rate, as well as the probability of seeing a rate far larger than the mean, become substantial. For $H < 1/2$ it becomes very probable to observe either zeros or large values of the instantaneous rate. This statement can be made precise by observing the tails of the probability distribution, and this is best accomplished, given our numerical setup, by looking at the tails of the cumulative probability distribution, namely

$$F(n) = \mathrm{Prob}(\text{bin count} \ge n),$$

and then analyzing $\log F(n)$ vs $n$ for large $n$, which is carried out in Figure 5. Figure 5a shows that the tails of the distribution decay exponentially for $H > 1/2$ but behave like stretched exponentials when $H \le 1/2$:

$$F(n) \sim e^{-a n}, \qquad H > 1/2, \tag{2}$$
$$F(n) \sim e^{-b \sqrt{n}}, \qquad H \le 1/2. \tag{3}$$

This observation is quantified in Fig 5b, where $\log F(n)$ is fitted with a quadratic polynomial in $\sqrt{n}$, namely

$$\log F(n) \approx c - b\,\sqrt{n} - a\,n.$$

For $H \le 1/2$ the coefficient $a$ in the fit, which gives the convergent linear term in $n$, vanishes, uncovering the stretched-exponential behavior. This quantitatively establishes our assertion of a phase transition at $H = 1/2$.
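The fitting procedure can be checked on synthetic tails. In the sketch below (synthetic numbers, chosen by us for illustration), fitting the log-tail with a quadratic polynomial in $\sqrt{n}$ recovers the coefficient of the linear-in-$n$ term for an exponential tail, and correctly finds it to vanish for a stretched-exponential tail:

```python
import numpy as np

def fit_tail(n, logF):
    """Fit log F(n) with a quadratic polynomial in sqrt(n):
    log F(n) ~= c2 * n + c1 * sqrt(n) + c0, since (sqrt(n))**2 = n."""
    s = np.sqrt(n)
    c2, c1, c0 = np.polyfit(s, logF, 2)
    return c2, c1, c0

n = np.arange(1.0, 200.0)
log_exponential = -0.05 * n         # tail of the form exp(-a n)
log_stretched = -1.5 * np.sqrt(n)   # tail of the form exp(-b sqrt(n))

c2_exp, _, _ = fit_tail(n, log_exponential)
c2_str, c1_str, _ = fit_tail(n, log_stretched)
# c2_exp recovers the exponential decay rate; c2_str vanishes,
# exposing the sqrt(n) term carried by c1_str.
```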

Figure 5: The tail of the cumulative probability distribution of observing a given count in the PSTH becomes a stretched exponential at Hölder exponent $H = 1/2$. Top: the tails of the cumulative probability distribution, plotted as $\log F(n)$ vs. $n$, for a range of Hölder exponents (right to left). The probability distribution is minus the derivative of these curves. Superposed on the data (black) is a fit to the last data points in the cumulative, i.e., the top 2% percentile (red), in the form $c - b\sqrt{n} - a\,n$. Right: the coefficients $a$, $b$ and $c$ for the aforementioned fit, plotted as a function of the Hölder exponent $H$. Notice that the linear component $a$ is (numerically) zero for $H \le 1/2$, exposing the $\sqrt{n}$ term as the next higher order. For $H > 1/2$ the positive linear term guarantees convergence of all moments of the distribution.

3 Discussion

In abstract, mathematical terms, we have shown that the probability of observing a first passage of a Gauss-Markov process through a rough boundary of Hölder exponent $H$ undergoes a phase transition at $H = 1/2$. The integral of the probability over equispaced grids becomes a stretched exponential, showing that the underlying instantaneous probability has ceased to be a function: it is concentrated on a Cantor-like set within which it is infinite, and it is zero outside this set. Gauss-Markov processes, such as the Ornstein-Uhlenbeck process, can be mapped to the canonical Wiener process through a deterministic joint scaling and time-change operation that preserves Hölder continuity. Furthermore, being solutions to a linear Langevin equation, first-passage problems for drifted Gauss-Markov processes can always be formulated in terms of a fluctuating effective barrier that integrates the drift contribution. Therefore, our analysis directly applies to this situation. As nonlinear diffusions with bounded drift behave like Brownian motion at vanishingly small scales, we envision that our result is valid for this more general class of stochastic processes with Hölder-continuous barriers. However, in this case, the barrier under consideration does not summarize the drift contribution of the diffusion.

In terms of the original motivating problem, the encoding of an input into the timing of action potentials by a model neuron, this means that within our (theoretical and rather aseptic) model there is an abrupt transition in the character of the PSTH, the instantaneous firing rate constructed from histogramming repetitions of the same stimulus. The transition happens when the input has the roughness of white noise, conceptually the case in which the neuron is receiving a barrage of statistically independent excitatory and inhibitory inputs, each with a random, Poisson character. For inputs which are smoother than this, the PSTH is a well-behaved function whose finite-resolution approximations converge nicely and properly to finite values. However, when the input is rougher than uncorrelated excitation and inhibition, for example when excitatory and inhibitory activities are clustered positively with themselves and negatively with one another, then the PSTH is concentrated on a singularly small set, which means that the PSTH consists of a large number of sharply-defined peaks of many different amplitudes, each of them having precisely zero width. The width of the peaks is zero regardless of the amplitude of the internal noise; increasing internal noise only leads to power from the tall peaks being transferred to lower peaks, but all peaks stay zero width. Since the set of peaks is dense, refining the bins over which the PSTH is histogrammed leads to divergences.

Concentration of the input into rougher temporal patterns would evidently be a function of the circuit organization. For example, in primary auditory cortex, the temporal precision observed in neuronal responses [61] appears to originate in the concentration of excitatory input into sharp “bump”-like features [62].

It currently remains to be seen whether our mechanism will survive the multiple layers of real-world detail separating the abstract equation (1) from real neurons in a living brain. Obviously, the infinite sharpness of our mathematical result will not withstand many relevant perturbations, which will broaden our zero-width peaks to finite thickness. That this will happen is certain, but not necessarily relevant, because a defining characteristic of phase transitions is that their presence shapes the surrounding parameter space even under strong perturbations: that is why studying phase transitions in abstract, schematic models has been fruitful. Thus the real remaining question is whether our mechanism can retain enough temporal accuracy to be relevant to understanding the organization of high-temporal-accuracy systems such as the auditory pathways, and whether our description of the roughness of the input as the primary determinant of coding modality, temporal code or rate code, may illuminate and inform further studies.


Proofs

Consider the stochastic leaky integrate-and-fire model with a spike-triggering membrane threshold $\theta$ and a post-spiking reset value $V_r$. Suppose a spike is emitted at time $t_0$. With initial condition $V_{t_0} = V_r$, the inhomogeneous linear stochastic differential system

$$dV_t = -\frac{V_t}{\tau}\,dt + dI_t + \sigma\,dW_t \tag{4}$$

describes the ensuing sub-threshold noisy dynamics of the potential when driven by the input current. Here, $dI_t$ shall be considered as the infinitesimal increment of a time-varying load function $I$ that is $H$-continuous for a given Hölder exponent $H$, i.e. for every $T > 0$, there exists a constant $C$ such that for all $t, s$ in $[0, T]$

$$|I(t) - I(s)| \le C\,|t - s|^{H}.$$

Notice that, at the cost of rescaling $t$ and $V$ by $\tau$, we can restrict ourselves to the study of the case $\tau = 1$.

Appendix A Effective Barrier Formulation

The nonlinearity of the leaky integrate-and-fire model lies entirely in the spike generation and subsequent reset, so that we can separately integrate input and noise between spikes. Thus, our first-passage problem with a constant threshold and a varying forcing becomes a first-passage problem, without driving forces, to a fluctuating effective barrier. Precisely, we solve (4) by writing $V = U + X$, where we separate the stochastic part $X$ (the Ornstein-Uhlenbeck process obtained for $I \equiv 0$) and the deterministic part $U$ arising from the integration of the input $I$:

$$dX_t = -X_t\,dt + \sigma\,dW_t, \qquad dU_t = -U_t\,dt + dI_t.$$

Determining the next spiking time can then be cast as a first-passage problem for the process $X$ with an effective barrier of the form $B(t) = \theta - U(t)$: denoting by $B_n$ the effective barrier obtained by restarting the integration of $U$ from the reset condition at the $n$-th spiking time $t_n$, the next spiking time is

$$t_{n+1} = \inf\{\, t > t_n : X_t \ge B_n(t) \,\}. \tag{5}$$

Therefore, a train of spikes is determined by solving consecutively the first-passage problems (5). Note that, due to the reset rule, the effective barriers $B_n$ do not agree at the spiking times $t_n$. However, the linearity of the subthreshold dynamics allows the reset to be absorbed into the initial condition: the deterministic part restarted at $t_n$ with the reset condition is again a solution of (4), so that the train of spikes is determined by the sequence of first-passage problems

$$t_{n+1} = \inf\{\, t > t_n : X_t \ge B(t) \,\}, \tag{6}$$

where $X$ is the standard Ornstein-Uhlenbeck process whose initial condition at $t_n$ is set by the suitably altered reset rule. In other words, by altering the reset rule, the linearity of the stochastic dynamics allows us to recast the successive first-passage problems (5) as a sequence of first-passage problems for one single continuous barrier (6).

Appendix B First-Passage Markov Chain

In a typical experiment, the spiking history of a neuron is recorded in response to repeated presentations of the same stimulus. We idealize this situation by studying the distribution of spiking events when an input cyclically forces a leaky integrate-and-fire neuron. To avoid discontinuity effects, we choose a barrier satisfying $B(0) = B(T)$ for some period $T > 0$ and then extend the definition of $B$ to the whole time-line by the periodization $B(t + T) = B(t)$. Then the sequence of successive first-passage times through $B$, taken modulo $T$, defines a discrete-time Markov chain over the finite time period $[0, T)$, seen as an oriented circle.
To make this more formal, assume we can choose a load function satisfying, for some period $T > 0$,

$$I(t + T) = I(t) \quad \text{for all } t, \tag{7}$$

which amounts to having a periodic effective barrier, $B(t + T) = B(t)$. For any time $t_0$ in $[0, T)$, consider the first-passage time of an Ornstein-Uhlenbeck process starting at $t_0$ through the barrier $B$. Because $B$ is a continuous function, it is known that this random variable admits a continuous non-decreasing cumulative distribution function [70]. We then define, for each starting time $t_0$, a measure on the Borel sets of $[0, T)$ as the law of this first-passage time reduced modulo $T$.

Moving forward, we identify $[0, T)$ with the circle $\mathbb{S}$, which is compact for the Euclidean distance and on which the open arcs, oriented counter-clockwise, generate the collection of Borel sets. Equipped with the quotient map, we obtain on this compact measurable state space a family of measure kernels $Q(x, \cdot)$, one for each starting point $x$ on the circle.

The collection of measures $Q(x, \cdot)$ forms a transition kernel on the compact state space. Given an initial probability measure on the circle, it defines a continuous-state, discrete-time Markov chain [67, 73, 75]. In particular, for every $x$, the cumulative distribution $t \mapsto Q(x, [0, t))$ is continuous and non-decreasing.

We shall view $Q(x, \cdot)$ as the distribution of a spiking event knowing that the previous spike occurred at $x$. As such, the kernels need not admit a density $q$ satisfying $Q(x, dt) = q(x, t)\,dt$, similarly to the “Devil's staircase” resulting from the integration of the uniform measure over the triadic Cantor set [72].
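The analogy can be made concrete: the sketch below (our own illustration) computes the Cantor function and its finite differences on a fine triadic grid, the analogue of a PSTH for a singular measure. Most bins carry exactly zero mass while the total increase is 1.

```python
import numpy as np

def cantor_cdf(x, depth=20):
    """Approximate the Cantor function (the 'Devil's staircase') on [0, 1]."""
    x = np.array(x, dtype=float)
    y = np.zeros_like(x)
    scale = 0.5
    for _ in range(depth):
        y += scale * (x >= 1.0 / 3.0)            # middle and right thirds add `scale`
        x = np.where(x < 1.0 / 3.0, 3.0 * x,     # left third: zoom in
            np.where(x >= 2.0 / 3.0, 3.0 * x - 2.0,  # right third: zoom in
                     0.0))                       # middle third: frozen, adds nothing more
        scale *= 0.5
    return y

grid = np.linspace(0.0, 1.0, 3 ** 7 + 1)
c = cantor_cdf(grid)
mass = np.diff(c)                      # "bin counts" of the singular measure
frac_empty = np.mean(mass < 1e-12)     # fraction of bins carrying no mass at all
```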

Appendix C Ergodicity of the Markov Chain

We are interested in using this Markov framework to elucidate the distribution of spiking events when a neuron is driven cyclically by an input defined as in (7). To ensure that the instantaneous firing rate and the probability of spiking coincide, we show that the Markov chain is ergodic, a notion we define in the following.

A distribution $\mu$ is invariant under the transition kernel $Q$ if it satisfies

$$\mu(A) = \int_{\mathbb{S}} Q(x, A)\,\mu(dx) \quad \text{for every Borel set } A,$$

so that if $X_n$ is distributed according to $\mu$, so is $X_{n+1}$. When there exists a unique such measure $\mu$, for any initial distribution and any measurable set $A$ on the circle

$$\frac{1}{N} \sum_{n=1}^{N} \mathbf{1}_{A}(X_n) \xrightarrow[N \to \infty]{} \mu(A) \quad \text{almost surely},$$

and the Markov chain is said to be ergodic. Simply stated, the mean sojourn time of the Markov chain in $A$ tends toward the measure of $A$ under $\mu$.
We can show that the Markov chain is indeed ergodic for $H$-continuous barriers. Since the state space of the chain is compact, it is enough to show that it has the strong Feller property [68] to prove the existence of invariant measures, i.e. that the kernel maps bounded measurable functions to continuous functions. To establish the uniqueness of the invariant measure $\mu$, it is enough to show that the Markov chain is irreducible [68]: every open set of the circle is reached with positive probability from every starting point. We deduce the two properties above from considerations about the first-passage time problem in the Supplementary Materials.
The Feller property specifies that, if two identical leaky integrate-and-fire neurons spike respectively at times $t$ and $t'$, then, as $t'$ approaches $t$, the probability that the first neuron later spikes in a given time interval becomes the same as for the second neuron. In other words, close initial conditions entail similar probability laws for the occurrence of the next spiking events (in the sense of the Kolmogorov test).

The irreducibility property, which states that if one spiking time is achievable for a given starting condition (previous reset time) then it is attainable from any starting time, similarly stems from intuitive observations. If a trajectory starting at $x$ has a non-zero probability to hit the barrier in a given time region, we can easily convince ourselves that a trajectory starting at any other point has a non-zero probability to come close to the reset value near $x$, and, from there, to unfold as a trajectory that has been reset at $x$.

Intuitively, these properties hold for our first-passage Markov chain for two reasons. First, the continuity of the barrier ensures the continuity of the cumulative distributions of the transition kernels. Second, the non-zero reset rule constrains the membrane potential to be reset away from the barrier, thus avoiding pathological situations such as immediate absorption.

Appendix D Numerical Simulation of the Markov Chain

Even when the first-passage Markov chain is ergodic, the possible irregularity of the barrier means that numerical simulation of its invariant measure demands an approximation scheme. To justify this approach, we adapt a general result from [69], clarifying in which sense a sequence of Markov chains converges toward a limit chain as the approximation index $N$ tends to infinity.

Theorem 2 (adapted from [69]): Let $(X^N)_N$ be a sequence of strongly Feller Markov chains defined on a compact state space. If, for every point $x$, the kernel probability measures $Q^N(x, \cdot)$ converge in law toward a limit probability measure $Q(x, \cdot)$, then any limit in law of a sequence of invariant measures of the $X^N$ is an invariant measure of the Markov chain corresponding to the limit kernel $Q$.

In particular, if all the $X^N$ and the limit chain are ergodic, the sequence of invariant measures is uniquely defined and so is its limit distribution, which is the stationary measure of the limit chain.
For our purpose, an efficient strategy for approximating the invariant measure consists in exhibiting a sequence of ergodic, strongly Feller Markov chains whose kernels converge in law to the limit kernel. This is accomplished by considering the sequence of first-passage Markov chains defined for the piecewise-linear continuous periodic barriers $B_N$ that interpolate $B$ on the dyadic points $k T/2^N$ (see [77]). Such Markov chains are ergodic by the same argument as before. Moreover, since we restrict ourselves to barriers that are $H$-continuous, the sequence $B_N$ converges uniformly toward $B$ (see Supplementary Materials), which in turn implies the convergence in law of the kernels $Q^N$ toward $Q$. This justifies approximating the exact chain by its dyadic approximations.
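The approximation scheme is easy to sketch: interpolate the barrier linearly on progressively finer dyadic grids and check uniform convergence. The test barrier below is a Weierstrass-type function of our choosing, standing in for an $H$-continuous barrier.

```python
import numpy as np

def dyadic_interpolant(f, level):
    """Piecewise-linear interpolation of f at the dyadic points k / 2**level of [0, 1]."""
    knots = np.linspace(0.0, 1.0, 2 ** level + 1)
    values = f(knots)
    return lambda t: np.interp(t, knots, values)

H = 0.6   # illustrative roughness exponent
def barrier(t):
    # H-continuous but nowhere differentiable Weierstrass-type test barrier.
    return sum(2.0 ** (-n * H) * np.cos(2.0 ** n * 2.0 * np.pi * t) for n in range(16))

t = np.linspace(0.0, 1.0, 5001)
sup_err = [float(np.max(np.abs(barrier(t) - dyadic_interpolant(barrier, m)(t))))
           for m in (4, 6, 8, 10)]
# For an H-continuous barrier the sup-norm error shrinks roughly like 2**(-level * H).
```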

Appendix E Frozen Noise as Injected Current

In addition to providing a valid numerical method, the previous approach provides an easy description of the input that gives rise to the barrier. The central result is adapted from [78]:

Theorem: There exists a Schauder basis of continuous functions $(s_{n,k})$ compactly supported on $[0, T]$ such that, for all $t$,

$$X_t = \sum_{n,k} \xi_{n,k}\, s_{n,k}(t),$$

where the $\xi_{n,k}$ are the independent standard Gaussian variables

$$\xi_{n,k} = \int_0^T f_{n,k}(t)\,dW_t,$$

and the thus-defined functions $f_{n,k}$ form an orthonormal system of $L^2([0, T])$.

Equipped with this result, it is easy to see that, writing the input as a “Gaussian white noise” with the coefficients $\xi_{n,k}$, the statistics of the resulting random barrier are the same as those of an Ornstein-Uhlenbeck process centered around zero and translated upward by the threshold $\theta$. Moreover, a suitable choice of the lowest-order coefficient naturally enforces the periodic condition $B(0) = B(T)$.

However, we aim at studying the distribution of spiking events of a neuron cyclically driven by a deterministic input. Accordingly, suppose now that the input is a particular realization of our “Gaussian white noise”, i.e. a frozen noise. Then the barrier is the sample path of an Ornstein-Uhlenbeck bridge translated upward, which is almost surely $H$-continuous for every exponent $H < 1/2$. For this reason, we index such an input, the associated barrier and the coefficients by the exponent $1/2$.

Appendix F Family of Hölder Continuous Barriers

Building on this, let us consider the set of coefficients for which continuous barriers of the form

converge uniformly on . It can be shown [78] that contains the set

From this, we deduce that given , for any real such that , the barrier

is well-defined as a continuous function of . With this in mind, we have at our disposal a well-known result [65] relating the local Hölder exponent of a function to the asymptotic behavior of the coefficients of its decomposition in the Schauder basis. Adapted to our setting, it directly entails that for all , , the barriers are almost surely -continuous. Therefore, we can continuously (in the -norm) control the asymptotic Hölder continuity of the effective barrier driving the activity of a leaky integrate-and-fire neuron by smoothly changing the coefficient used to construct the piecewise approximations .
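A minimal numerical illustration of this control: damping the level-n Schauder coefficients by a factor of 2^{-nH} produces a random barrier whose roughness is set by H. This is a sketch based on the Ciesielski-type criterion; function names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def holder_barrier(H, n_levels=10, n_grid=1024):
    """Random function on [0, 1] built from unit-height Schauder tents whose level-n
    coefficients decay as 2^{-n H}; by the Ciesielski-type criterion this yields an
    (almost-sure) Holder exponent of roughly H."""
    t = np.linspace(0.0, 1.0, n_grid)
    f = np.zeros_like(t)
    for n in range(n_levels):
        scale = 2.0 ** (-n * H)
        xi = rng.normal(size=2 ** n)
        for k in range(2 ** n):
            xp = [k * 2.0 ** -n, (k + 0.5) * 2.0 ** -n, (k + 1) * 2.0 ** -n]
            f += xi[k] * scale * np.interp(t, xp, [0.0, 1.0, 0.0])
    return t, f

t, rough = holder_barrier(0.3)   # rougher than Brownian regularity (H < 1/2)
_, smooth = holder_barrier(0.7)  # smoother than Brownian regularity (H > 1/2)
```

Varying H continuously between these two regimes is the numerical counterpart of sweeping the barrier through the critical roughness.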
In order to emphasize the effect of the varying Hölder regularity, we adopt a slightly modified version of our barriers by weighting them with a continuous function of the form . The function is chosen so that the overall mean firing rate of the neuron (as opposed to the instantaneous mean firing rate, which is time-dependent) remains constant when changing . Formally, this constraint is equivalent to holding constant the mean inter-spike time

while varying (see footnote 3).

Appendix G Integral Equation for the First-Passage Time

We establish the existence of a density function for the first-passage time of a Wiener process hitting a -continuous barrier with . This property is formally referred to as the absolute continuity of the first-passage time distribution with respect to the Lebesgue measure on the real half-line. Without loss of generality, we adopt the point of view of a killed Wiener process absorbed on a fluctuating boundary, which allows us to use the powerful machinery of the heat equation. The presented result stems from the groundbreaking work of Gevrey [66] on parabolic differential equations, later recast in a modern form by Cannon [63].
Integral equations for the cumulative distribution of the first-passage time of a Wiener process naturally arise from probabilistic arguments. Consider the event for a continuous barrier satisfying . Then the first-passage time with certainly occurs before and we can condition this event with respect to , which yields

(8)

where denotes the first-passage time probability measure. Using the strong Markov property, on we can disregard the past trajectory of and equate the probabilities and . Differentiating equation (8) with respect to , we obtain

where denotes the heat kernel.
It is important to observe that as long as is -continuous with , we have

Since is a smooth function, we can let the arbitrary value tend toward the barrier from above and, through the dominated convergence theorem, we obtain the following Volterra integral equation [?, ?]:

(9)

This integral equation, which dates back to the original work of Siegert [74], stems from the fact that , indexed by , is a martingale [76]; this observation offers a convenient way to generalize the equation to general time-inhomogeneous diffusion processes.
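As a sanity check of this integral-equation logic, note that for a constant barrier the conditional probability in the kernel equals 1/2 (a Wiener path restarted at the barrier is equally likely to end up above or below it), so equation (9) collapses to the reflection principle, P(tau ≤ T) = 2 P(W_T > a). A Monte Carlo sketch with illustrative parameter values:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)

a, T = 1.0, 1.0                      # constant barrier level and time horizon
n_paths, n_steps = 4000, 2048
dt = T / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
hit_prob = float(np.mean((W >= a).any(axis=1)))

# For a flat barrier, equation (9) reduces to P(tau <= T) = 2 P(W_T > a):
exact = erfc(a / sqrt(2.0 * T))
print(hit_prob, exact)               # the discrete skeleton slightly underestimates
```

The small gap between the two numbers is the usual discretization bias of crossing detection on a finite grid, which shrinks as the step size decreases.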

Appendix H Absolute Continuity of the First-Passage Time

The integral equation is of the Volterra type, which comes in two flavors: equations of the first kind and equations of the second kind [71]. To ensure the existence and uniqueness of a solution to equations of the second kind, we have the following powerful result:

Theorem (adapted from [64, 79]): The linear Volterra equation of the second kind

where is a piecewise continuous function, has a unique piecewise continuous solution for all if is bounded on and if there exists a monotone increasing function with , such that for all
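To make the second-kind setting concrete, a simple trapezoidal marching scheme solves such equations numerically; here it is checked on the toy case x(t) = 1 + ∫₀ᵗ x(s) ds, whose unique solution is eᵗ. The kernel and data are illustrative, not the paper's.

```python
import numpy as np

def solve_volterra2(f, K, T=1.0, n=2000):
    """March the trapezoidal rule for x(t) = f(t) + int_0^t K(t, s) x(s) ds.
    At each step x[i] appears linearly on both sides and is solved for directly."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.zeros(n + 1)
    x[0] = f(t[0])
    for i in range(1, n + 1):
        acc = 0.5 * h * K(t[i], t[0]) * x[0] + h * np.sum(K(t[i], t[1:i]) * x[1:i])
        x[i] = (f(t[i]) + acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, x

# toy check: x(t) = 1 + int_0^t x(s) ds  has the unique solution x(t) = exp(t)
t, x = solve_volterra2(lambda u: 1.0, lambda u, s: np.ones_like(s))
print(x[-1], np.e)
```

The direct solve at each step is what the bounded-kernel hypothesis of the theorem buys: the denominator stays away from zero for small enough step size.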

Unfortunately, equation (9) is a Volterra equation of the first kind and as such cannot be dealt with directly. However, for barriers that are -continuous, it can be recognized as a generalized linear Abel integral equation, that is, an equation of the type

where is the unknown, is a continuous function, and is a continuous kernel for and .
Abel integral equations are frequently encountered in physics, and there are methods to prove the existence and uniqueness of a solution by transforming the original equation into a Volterra equation of the second kind. In our case, this proceeds through the use of the Abel integral transform, which is designed to solve the canonical Abel equation

The unique solution is given by

where is the Abel inverse operator. Applying to equation (9) reduces the problem to a Volterra equation of the second kind:

Proposition (adapted from [63]): If is -continuous with , then, through the application of the Abel operator, the Volterra equation of the first kind (9) is equivalent to the Volterra equation of the second kind

with the kernel being defined as

and denotes the continuous function .
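The Abel machinery above can be exercised numerically on the classical closed-form pair: g ≡ 1 has forward transform f(t) = 2√t, and the inversion formula recovers g ≡ 1. In the sketch below, the substitution s = t sin²θ removes the square-root singularity; all names are illustrative.

```python
import numpy as np

def abel_forward(g, t, n=2000):
    """f(t) = int_0^t g(s) / sqrt(t - s) ds, computed with s = t sin^2(theta)."""
    theta = (np.arange(n) + 0.5) * (np.pi / 2.0) / n        # midpoint rule
    s = t * np.sin(theta) ** 2
    return float(np.sum(g(s) * 2.0 * np.sqrt(t) * np.sin(theta)) * (np.pi / 2.0) / n)

def abel_inverse(f, t, h=1e-4, n=2000):
    """g(t) = (1/pi) d/dt int_0^t f(s) / sqrt(t - s) ds (central difference in t)."""
    def I(u):
        theta = (np.arange(n) + 0.5) * (np.pi / 2.0) / n
        s = u * np.sin(theta) ** 2
        return np.sum(f(s) * 2.0 * np.sqrt(u) * np.sin(theta)) * (np.pi / 2.0) / n
    return float((I(t + h) - I(t - h)) / (2.0 * np.pi * h))

f_val = abel_forward(lambda s: np.ones_like(s), 1.0)   # expect 2 sqrt(1) = 2
g_val = abel_inverse(lambda s: 2.0 * np.sqrt(s), 1.0)  # expect 1
print(f_val, g_val)
```

The same forward/inverse pair underlies the reduction of equation (9) to the second-kind form studied in the proposition.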

A careful study shows that the kernel satisfies the conditions of the theorem above [63]. Thus, the integral equation (H) obtained through the Abel transform admits a unique continuous solution, which is the density of the first-passage time to the barrier .

Acknowledgements.
This work was partially supported by NSF under grant EF-0928723. We are indebted to Jonathan Touboul for helpful comments.

Footnotes

  1. This transformation is referred to as Doob’s transform.
  2. The passage of time orients the circle, and we identify the future time with the past time 0.
  3. Notice that, for the sake of well-posedness, the kernels that appear in the formulation of the mean inter-spike time are computed for a periodic barrier defined on instead of being wrapped on .

References

  1. Risken H (1996) The Fokker-Planck equation : methods of solution and applications (Springer-Verlag, Berlin ; New York) 2nd Ed pp xiv, 472 p.
  2. Wasan MT (1994) Stochastic processes and their first passage times : lecture notes (Queen’s University, Kingston, Ont. Canada) pp ix, 616 p.
  3. Redner S (2001) A guide to first-passage processes (Cambridge University Press, Cambridge ; New York) pp ix, 312 p.
  4. van Kampen NG (2007) Stochastic processes in physics and chemistry. North-Holland Personal Library (Elsevier, Amsterdam) pp xvi, 463 p.
  5. Siegert AJF (1951) On the 1st Passage Time Probability Problem. Phys Rev 81(4):617-623.
  6. Mehr CB & McFadden JA (1964) Explicit Results for Probability Density of First-Passage Time for 2 Classes of Gaussian Processes. Ann Math Stat 35(1):457-
  7. Vanmarcke EH (1975) Distribution of First-Passage Time for Normal Stationary Random Processes. J Appl Mech-T Asme 42(1):215-220.
  8. Domine M (1995) Moments of the first-passage time of a Wiener process with drift between two elastic barriers. J Appl Probab 32(4):1007-1013.
  9. Sacerdote L & Tomassetti F (1996) On evaluations and asymptotic approximations of first-passage-time probabilities. Adv Appl Probab 28(1):270-284.
  10. Kramers HA (1940) Brownian motion in a field of force and the diffusion model of chemical reactions. Physica 7:284-304.
  11. Strenzwilk DF (1973) Mean First Passage Time for a Unimolecular Reaction in a Solid. B Am Phys Soc 18(4):671.
  12. Solc M (2000) Time necessary for reaching chemical equilibrium: First passage time approach. Z Phys Chem 214:253-258.
  13. Chelminiak P & Kurzynski M (2000) Mean first-passage time in the steady-state kinetics of biochemical processes. J Mol Liq 86(1-3):319-325.
  14. Arribas E, et al. (2008) Mean lifetime and first-passage time of the enzyme species involved in an enzyme reaction. Application to unstable enzyme systems. B Math Biol 70(5):1425-1449.
  15. Montroll EW (1969) Random Walks on Lattices. III. Calculation of First-Passage Times with Application to Exciton Trapping on Photosynthetic Units. J Math Phys 10(4):753-.
  16. Ansari A (2000) Mean first passage time solution of the Smoluchowski equation: Application to relaxation dynamics in myoglobin. J Chem Phys 112(5):2516-2522.
  17. Goychuk I & Hanggi P (2002) Ion channel gating: A first-passage time analysis of the Kramers type. P Natl Acad Sci USA 99(6):3552-3556.
  18. Kurzynski M & Chelminiak P (2003) Mean first-passage time in the stochastic theory of biochemical processes. Application to actomyosin molecular motor. J Stat Phys 110(1-2):137-181.
  19. Abdolvahab RH, Metzler R, & Ejtehadi MR (2011) First passage time distribution of chaperone driven polymer translocation through a nanopore: Homopolymer and heteropolymer cases. J Chem Phys 135(24).
  20. Roberts JB (1974) Probability of First Passage Failure for Stationary Random Vibration. Aiaa J 12(12):1636-1643.
  21. Kahle W & Lehmann A (1998) Parameter estimation in damage processes: Dependent observation of damage increments and first passage time. Advances in Stochastic Models for Reliability, Quality and Safety:139-152.
  22. Khan RA, Ahmad S, & Datta TK (2003) First passage failure of cable stayed bridge under random ground motion. Applications of Statistics and Probability in Civil Engineering, Vols 1 and 2:1659-1666.
  23. Mazurek ME, Roitman JD, Ditterich J, & Shadlen MN (2003) A role for neural integrators in perceptual decision making. Cereb Cortex 13(11):1257-1269.
  24. Schmitt FG (1972) Gambler’s Ruin Problem. Am Math Mon 79(1):90-.
  25. Richter-Dyn N & Goel NS (1972) Extinction of a Colonizing Species. Theor Popul Biol 3(4):406-.
  26. Saebo S, Almoy T, Heringstad B, Klemetsdal G, & Aastveit AH (2005) Genetic evaluation of mastitis resistance using a first-passage time model for Wiener processes for analysis of time to first treatment. J Dairy Sci 88(2):834-841.
  27. Lo CF (2006) First passage time density for the disease progression of HIV infected patients. Lect Notes Eng Comp 62:117-122.
  28. Xu RM, McNicholas PD, Desmond AF, & Darlington GA (2011) A First Passage Time Model for Long-Term Survivors with Competing Risks. Int J Biostat 7(1).
  29. Ammann M (2001) Credit risk valuation : methods, models, and applications (Springer, New York) 2nd Ed pp x, 255 p.
  30. Zhang D & Melnik RVN (2009) First passage time for multivariate jump-diffusion processes in finance and other areas of applications. Appl Stoch Model Bus 25(5):565-582.
  31. Yi CA (2010) On the first passage time distribution of an Ornstein-Uhlenbeck process. Quant Financ 10(9):957-960.
  32. Capocelli RM & Ricciardi LM (1971) Diffusion Approximation and First Passage Time Problem for a Model Neuron. Kybernetik 8(6):214-.
  33. Mainen ZF & Sejnowski TJ (1995) Reliability of Spike Timing in Neocortical Neurons. Science 268(5216):1503-1506.
  34. Shimokawa T, Pakdaman K, Takahata T, Tanabe S, & Sato S (2000) A first-passage-time analysis of the periodically forced noisy leaky integrate-and-fire model. Biol Cybern 83(4):327-340.
  35. Arcas BAY, Fairhall AL, & Bialek W (2003) Computation in a single neuron: Hodgkin and Huxley revisited. Neural Comput 15(8):1715-1749.
  36. Arcas BAY & Fairhall AL (2003) What causes a neuron to spike? Neural Comput 15(8):1789-1807.
  37. Sacerdote L & Zucca C (2005) Inverse first passage time method in the analysis of neuronal interspike intervals of neurons characterized by time varying dynamics. Brain, Vision, and Artificial Intelligence, Proceedings 3704:69-77.
  38. Fauchald P & Tveraa T (2003) Using first-passage time in the analysis of area-restricted search and habitat selection. Ecology 84(2):282-288.
  39. Le Corre M, et al. (2008) A multi-patch use of the habitat: testing the First-Passage Time analysis on roe deer Capreolus capreolus paths. Wildlife Biol 14(3):339-349.
  40. Noh JD & Rieger H (2004) Random walks on complex networks. Phys Rev Lett 92(11).
  41. Condamin S, Benichou O, Tejedor V, Voituriez R, & Klafter J (2007) First-passage times in complex scale-invariant media. Nature 450(7166):77-80.
  42. Buonocore A, Nobile AG, & Ricciardi LM (1987) A New Integral-Equation for the Evaluation of 1st-Passage-Time Probability Densities. Adv Appl Probab 19(4):784-800.
  43. Lo CF & Hui CH (2006) Computing the first passage time density of a time-dependent Ornstein-Uhlenbeck process to a moving boundary. Appl Math Lett 19(12):1399-1405.
  44. Peskir G & Shiryaev AN (1999) On the Brownian first-passage time over a one-sided stochastic boundary. Theor Probab Appl+ 42(3):444-453.
  45. Rieke F (1997) Spikes : exploring the neural code (MIT Press, Cambridge, Mass.) pp xvi, 395 p.
  46. Abbott LF & Sejnowski TJ (1999) Neural codes and distributed representations : foundations of neural computation (MIT Press, Cambridge, Mass.) pp xxiii, 345 p.
  47. Cecchi GA, et al. (2000) Noise in neurons is message dependent. P Natl Acad Sci USA 97(10):5557-5561.
  48. Arcas BAY, Fairhall AL, & Bialek W (2001) What can a single neuron compute? Adv Neur In 13:75-81.
  49. Tiesinga PHE & Sejnowski TJ (2001) Precision of pulse-coupled networks of integrate-and-fire neurons. Network-Comp Neural 12(2):215-233.
  50. Beierholm U, Nielsen CD, Ryge J, Alstrom P, & Kiehn O (2001) Characterization of reliability of spike timing in spinal interneurons during oscillating inputs. J Neurophysiol 86(4):1858-1868.
  51. Tiesinga PHE, Fellous JM, & Sejnowski TJ (2002) Spike-time reliability of periodically driven integrate-and-fire neurons. Neurocomputing 44:195-200.
  52. Brette R & Guigon E (2003) Reliability of spike timing is a general property of spiking model neurons. Neural Comput 15(2):279-308.
  53. Lo CF & Chung TK (2006) First passage time problem for the Ornstein-Uhlenbeck neuronal model. Neural Information Processing, Pt 1, Proceedings 4232:324-331.
  54. Buonocore A, Caputo L, Pirozzi E, & Ricciardi LM (2009) On a Generalized Leaky Integrate-and-Fire Model for Single Neuron Activity. Computer Aided Systems Theory - Eurocast 2009 5717:152-158.
  55. Taillefumier T & Magnasco MO (2010) A Fast Algorithm for the First-Passage Times of Gauss-Markov Processes with Hölder Continuous Boundaries. J Stat Phys 140(6):1130-1156.
  56. Buonocore A, Caputo L, Pirozzi E, & Ricciardi LM (2010) On a Stochastic Leaky Integrate-and-Fire Neuronal Model. Neural Comput 22(10):2558-2585.
  57. Buonocore A, Caputo L, Pirozzi E, & Ricciardi LM (2011) The First Passage Time Problem for Gauss-Diffusion Processes: Algorithmic Approaches and Applications to LIF Neuronal Model. Methodol Comput Appl 13(1):29-57.
  58. Dong Y, Mihalas S, & Niebur E (2011) Improved Integral Equation Solution for the First Passage Time of Leaky Integrate-and-Fire Neurons. Neural Comput 23(2):421-434.
  59. Thomas PJ (2011) A Lower Bound for the First Passage Time Density of the Suprathreshold Ornstein-Uhlenbeck Process. J Appl Probab 48(2):420-434.
  60. Taillefumier T & Magnasco MO (2008) A Haar-like construction for the Ornstein Uhlenbeck process. J Stat Phys 132(2):397-415.
  61. Elhilali M, Fritz JB, Klein DJ, Simon JZ, & Shamma SA (2004) Dynamics of precise spike timing in primary auditory cortex. J Neurosci 24(5):1159-1172. doi:10.1523/JNEUROSCI.3825-03.2004
  62. DeWeese MR & Zador AM (2006) Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. J Neurosci 26(47):12206-12218.
  63. John Rozier Cannon. The one-dimensional heat equation, volume 23 of Encyclopedia of Mathematics and its Applications. Addison-Wesley Publishing Company Advanced Book Program, Reading, MA, 1984. With a foreword by Felix E. Browder.
  64. R. Courant and D. Hilbert. Methods of mathematical physics. Vol. II: Partial differential equations. (Vol. II by R. Courant.). Interscience Publishers (a division of John Wiley & Sons), New York-Lon don, 1962.
  65. K. Daoudi, J. Lévy Véhel, and Y. Meyer. Construction of continuous functions with prescribed local regularity. Constr. Approx., 14(3):349–385, 1998.
  66. Maurice. Gevrey. Sur les équations aux dérivées partielles du type parabolique. Gauthier-Villars, Paris, 1913.
  67. Olle Häggström. Finite Markov chains and algorithmic applications, volume 52 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 2002.
  68. Onésimo Hernández-Lerma and Jean Bernard Lasserre. Markov chains and invariant probabilities, volume 211. Birkhäuser Verlag, Basel, 2003.
  69. Alan F. Karr. Weak convergence of a sequence of markov chains. Probability Theory and Related Fields, 33:41–48, 1975. 10.1007/BF00539859.
  70. Axel Lehmann. Smoothness of first passage time distributions and a new integral equation for the first passage time density of continuous Markov processes. Adv. in Appl. Probab., 34(4):869–887, 2002.
  71. Peter Linz. Analytical and numerical methods for Volterra equations, volume 7 of SIAM Studies in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1985.
  72. Benoit Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman, 1982.
  73. J. R. Norris. Markov chains, volume 2 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 1998. Reprint of 1997 original.
  74. Arnold J. F. Siegert. On the first passage time probability problem. Physical Rev. (2), 81:617–623, 1951.
  75. William J. Stewart. Probability, Markov chains, queues, and simulation. Princeton University Press, Princeton, NJ, 2009. The mathematical basis of performance modeling.
  76. Daniel W. Stroock and S. R. Srinivasa Varadhan. Multidimensional diffusion processes. Classics in Mathematics. Springer-Verlag, Berlin, 2006. Reprint of the 1997 edition.
  77. Thibaud Taillefumier and Marcelo Magnasco. A fast algorithm for the first-passage times of Gauss-Markov processes with Hölder continuous boundaries. Journal of Statistical Physics, 140(6):1–27, 2010.
  78. Thibaud Taillefumier and Jonathan Touboul. Multiresolution hilbert approach to multidimensional gauss-markov processes. International Journal of Stochastic Analysis, 2011, 2011.
  79. F. G. Tricomi. Integral equations. Dover Publications Inc., New York, 1985. Reprint of the 1957 original.