# Time to reach the maximum for a random acceleration process

###### Abstract

We study the random acceleration model, which is perhaps one of the simplest nontrivial non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density of the time $t_m$ at which the process reaches its maximum within a fixed time interval $[0, T]$. We study two different boundary conditions, under which the process represents respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.

## 1 Introduction

Consider a general stochastic process $x(t)$, starting from $x(0) = 0$, over a fixed time interval $[0, T]$. Let $t_m$ denote the time at which the process achieves its maximum value during the interval (see Fig. 1). Clearly, $t_m$ is a random variable that fluctuates from one realization of the process to another. Our main goal is to investigate the probability density function (pdf) $P(t_m \mid T)$ of the stochastic time $t_m$, given the interval length $T$ and the underlying stochastic process $x(t)$.
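In a discrete simulation of any such process, $t_m$ is simply the time index of the largest sampled value. The following minimal sketch (in Python, using a unit-diffusion Brownian path merely as an example process, an assumption not tied to this paper's model) illustrates how $t_m$ is measured:

```python
import numpy as np

rng = np.random.default_rng(0)

def argmax_time(path, dt):
    """Time at which a discretely sampled path attains its maximum."""
    return np.argmax(path) * dt

# Example: a Brownian path on [0, T], sampled with step dt, with x(0) = 0.
T, dt = 1.0, 1e-3
n = int(T / dt)
x = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))
t_m = argmax_time(x, dt)
```

On a discrete grid the estimate of $t_m$ is accurate only up to one time step $dt$, which is why the simulations discussed later use a small integration step.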

This question naturally arises in a variety of fields. For instance, in the context of queueing theory, the stochastic process may represent the length of a queue at time $t$, and one would like to know $t_m$, i.e., the time at which the queue length is maximum over a given time span. In the context of finance, $x(t)$ may represent the price of a stock, and a trader is evidently interested in $t_m$, i.e., the time at which the stock price is at its highest.

Perhaps one of the simplest examples is provided by the case where the underlying stochastic process $x(t)$ is an ordinary Brownian motion,

$$\frac{dx}{dt} = \xi(t)\,, \qquad (1)$$

where $\xi(t)$ is a Gaussian white noise with zero mean, $\langle \xi(t) \rangle = 0$, and a delta correlator, $\langle \xi(t)\xi(t') \rangle = 2D\,\delta(t - t')$. In this case, the probability density $P(t_m \mid T)$ is independent of the diffusion constant and is well known [1]:

$$P(t_m \mid T) = \frac{1}{\pi \sqrt{t_m (T - t_m)}}\,. \qquad (2)$$

This curve is symmetric around the mid-point $t_m = T/2$, has a ‘U shape’ with a minimum at $t_m = T/2$, and diverges at the two end points $t_m = 0$ and $t_m = T$. The cumulative distribution, $\mathrm{Prob}(t_m \le x\,T) = (2/\pi)\arcsin(\sqrt{x})$, is known as one of Lévy’s celebrated ‘arcsine laws’ [1].
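The arcsine law is easy to verify by direct simulation. The sketch below (assuming unit diffusion, which does not affect the law) compares the empirical cumulative distribution of $t_m/T$ at one interior point with $(2/\pi)\arcsin(\sqrt{x})$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 10000, 400                 # number of paths, steps per path (T = 1)
dt = 1.0 / n
incr = rng.normal(0.0, np.sqrt(dt), size=(M, n))
paths = np.concatenate((np.zeros((M, 1)), np.cumsum(incr, axis=1)), axis=1)
t_m = np.argmax(paths, axis=1) * dt          # location of the maximum, per path
x = 0.25
empirical = np.mean(t_m <= x)                # empirical CDF at x
theory = (2.0 / np.pi) * np.arcsin(np.sqrt(x))   # arcsine law gives 1/3 at x = 1/4
```

The agreement is limited only by the statistical error, of order $M^{-1/2}$, and the time-discretization error.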

For a Brownian bridge, i.e., a Brownian motion constrained to return to its starting point at $t = T$ (so that $x(T) = x(0) = 0$), the corresponding pdf is known to be uniform: $P(t_m \mid T) = 1/T$ for $0 \le t_m \le T$ [2].
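The uniform law for the bridge can likewise be checked numerically. A standard way to generate a bridge, used in the sketch below, is to pin a free Brownian path: $B(t) = W(t) - (t/T)\,W(T)$ (unit diffusion assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
M, n = 10000, 400
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
incr = rng.normal(0.0, np.sqrt(dt), size=(M, n))
W = np.concatenate((np.zeros((M, 1)), np.cumsum(incr, axis=1)), axis=1)
bridge = W - np.outer(W[:, -1], t)        # pin the endpoint: bridge(T) = 0
t_m = np.argmax(bridge, axis=1) * dt
frac = np.mean(t_m <= 0.3)                # the uniform law predicts 0.3
```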

Motivated principally by applications in queueing theory and finance, the pdf has been computed explicitly for a variety of other ‘constrained’ Brownian motions by using suitably adapted path-integral methods. These examples include the Brownian excursion [3], the Brownian meander [3], the reflected Brownian bridge [3], Brownian motion up to its first-passage time [4], and Brownian motion with a drift [5]. Some of these exact results were then reobtained via a functional renormalization group method [6]. In Ref. [6], the authors also computed the corresponding pdf for Bessel processes and for continuous-time random walks. Curiously, this pdf also appeared recently as an important input in the calculation of the area enclosed by the convex hull of a planar Brownian motion [7, 8], which has an interesting application in ecology. In addition, we have recently shown that it also appears in the computation of the disorder-averaged equilibrium distribution of a particle moving in a random self-affine potential [9].

The results for $P(t_m \mid T)$ mentioned above have been obtained for Brownian motion and its variants, which are all Markov processes. The purpose of this paper is to go beyond Markov processes and present an exact result for $P(t_m \mid T)$ for a non-Markov process. The non-Markov process that we study here is the well-known random acceleration process, which evolves via

$$\frac{d^2 x}{dt^2} = \xi(t)\,, \qquad (3)$$

$\xi(t)$ being a Gaussian white noise with $\langle \xi(t) \rangle = 0$, as before, and $\langle \xi(t)\xi(t') \rangle = 2D\,\delta(t - t')$. For simplicity, we will choose $2D = 1$ subsequently. The process starts at $t = 0$ with $x(0) = 0$ and $v(0) = \dot{x}(0) = 0$, and evolves until the time $T$. We denote by $x(T)$ and $v(T)$ the final position and velocity of the process, respectively (see Fig. 1). The random acceleration process in the single variable $x(t)$ is non-Markovian, whereas the joint position-velocity process $(x(t), v(t))$ is Markovian and does not depend on the past history.
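The process is straightforward to simulate by building the velocity as a Brownian motion and integrating it once more. Under the unit-noise convention assumed here ($\langle \xi(t)\xi(t')\rangle = \delta(t-t')$), the final position satisfies $\mathrm{Var}\,x(T) = T^3/3$, which the sketch below uses as a sanity check:

```python
import numpy as np

rng = np.random.default_rng(3)
M, n, T = 10000, 500, 1.0
dt = T / n
# Velocity: a Brownian motion built from Gaussian increments, v(0) = 0.
dv = rng.normal(0.0, np.sqrt(dt), size=(M, n))
v = np.concatenate((np.zeros((M, 1)), np.cumsum(dv, axis=1)), axis=1)
# Position: Euler integral of the velocity, x(0) = 0.
x = np.concatenate((np.zeros((M, 1)), np.cumsum(v[:, :-1], axis=1) * dt), axis=1)
var_xT = np.var(x[:, -1])     # should approach T**3 / 3 for unit noise
```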

The random acceleration process, being among the simplest non-Markovian processes, has been intensely studied in both the physics and mathematics literature. In physics, for instance, it appears in the continuum description of the equilibrium Boltzmann weight of a semiflexible polymer chain with nonzero bending energy [10]. It also describes the steady-state profile of a -dimensional Gaussian interface [11] with dynamical exponent , the continuum version of the Golubovic-Bruinsma-Das Sarma-Tamborenea model [12]. This process also appears in the description of the statistical properties of the Burgers equation with Brownian initial velocity [13]. The first-passage properties and a variety of other related properties of the random acceleration model are highly nontrivial and have been studied extensively over the last few decades [14, 15, 16, 10, 17, 19, 11, 18, 20, 21].

Thus, in addition to being relevant in many applications, the random acceleration model represents a simple, yet nontrivial, non-Markov process for which one can try to compute observables of physical interest. Recent studies have concerned the distribution of extreme observables associated with this process, notably the global maximum itself over the interval $[0, T]$ [10, 22, 23]. In this paper, we focus instead on the time $t_m$ at which the global maximum occurs, and we show that $P(t_m \mid T)$ can be computed explicitly.

Let us first summarize our main results. Since the only time scale in the problem is $T$, it is evident that $P(t_m \mid T)$, normalized to unity over $[0, T]$, has the scaling form

$$P(t_m \mid T) = \frac{1}{T}\, f\!\left(\frac{t_m}{T}\right), \qquad (4)$$

where the scaling function $f(x)$, defined over $x \in [0, 1]$, satisfies the normalization condition $\int_0^1 f(x)\,dx = 1$. We will consider two different boundary conditions, detailed below, for which we are able to compute $f(x)$ explicitly.
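The scaling form (4) can be checked numerically: the distribution of $t_m/T$ should not depend on $T$. A minimal sketch (unit noise and an Euler scheme assumed, with the same number of grid points for both horizons so that the discretization error is identical):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_tm_over_T(T, M=5000, n=500):
    """Sample t_m / T for the random acceleration process on [0, T]."""
    dt = T / n
    v = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(M, n)), axis=1)
    x = np.concatenate((np.zeros((M, 1)), np.cumsum(v, axis=1) * dt), axis=1)
    return np.argmax(x, axis=1) * dt / T

s1 = sample_tm_over_T(1.0)    # horizon T = 1
s4 = sample_tm_over_T(4.0)    # horizon T = 4: same distribution of t_m / T
```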

### 1.1 Integral of a Brownian Bridge

When the final velocity vanishes, $v(T) = 0$, the process $x(t)$ can be interpreted as the integral of a Brownian bridge. In this case, we will show that

(5) |

which is evidently symmetric around the mid-point $x = 1/2$ and diverges at the two end-points $x \to 0$ and $x \to 1$ as and , respectively. It is also useful to consider the cumulative distribution, which reads

(6) |

Here is the incomplete Beta function [24]. A plot of the cumulative distribution is shown in Fig. 4, where it is also compared to direct numerical simulations, with excellent agreement.

### 1.2 Integral of a free Brownian Motion

When the final velocity $v(T)$ is arbitrary, the process $x(t)$ can be interpreted as the integral of a free Brownian motion. In this case, we obtain the following exact result for the normalized pdf:

(7) |

where the constant has the exact value

(8) |

The density is thus asymmetric around the mid-point $t_m = T/2$ (i.e., $x = 1/2$), and the maximum may either occur at some time strictly smaller than $T$, namely $t_m < T$ (i.e., $x < 1$), or, with a finite nonvanishing probability, at the end point $t_m = T$ of the interval (or equivalently $x = 1$). Representative trajectories for these two situations are shown in Fig. 1 and Fig. 2, respectively. In other words, a finite fraction of all trajectories, starting at $x(0) = 0$ and $v(0) = 0$, achieve their maximum only at the end of the interval. The corresponding cumulative distribution is given by

(9) |

where is zero for and is equal to for , i.e., exhibits a discontinuous jump at from to . A plot of the cumulative distribution is provided in Fig. 5, where it is also compared to direct simulation results: we find excellent agreement between the analytical and numerical results.
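The weight of the delta function at $t_m = T$ can be estimated directly as the fraction of simulated paths whose discrete maximum sits at the final grid point. A sketch (unit noise assumed; the numerical bounds in the test are deliberately loose, since the exact weight is derived later in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
M, n = 10000, 500
dt = 1.0 / n
# Integral of a free Brownian motion: v(T) is left unconstrained.
v = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(M, n)), axis=1)
x = np.concatenate((np.zeros((M, 1)), np.cumsum(v, axis=1) * dt), axis=1)
frac_end = np.mean(np.argmax(x, axis=1) == n)   # estimate of the delta weight
```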

The paper is organized as follows. In Section 2, we outline the main ideas used for the exact derivation of via path decomposition techniques. In Subsection 2.3, we also compare our analytical predictions with the results obtained via Monte Carlo simulations. A concluding Section 3 presents a summary and raises some open questions. The details of the calculations are left to four Appendices, as they involve rather cumbersome multiple integrations of Airy functions.

## 2 Calculations

The basic ingredient in our computation is the propagator , i.e., the probability that the process , starting at with velocity , reaches the point with velocity at time , without ever crossing the origin during this time. The Laplace transform of the propagator has been explicitly computed by Burkhardt and is recalled in A. We show that the location of the maximum can be expressed in terms of the propagator . Then, we make use of the known results about in Laplace space to derive a closed form expression for .

The velocity $v(t)$ of a random acceleration process is a Brownian motion, and is thus continuous everywhere, which implies that $x(t)$ is continuous and differentiable everywhere. An important consequence is that the global maximum can lie either inside the interval $(0, T)$, with a vanishing velocity $v(t_m) = 0$ (Fig. 1), or at the boundary $t_m = T$, with a positive velocity $v(T) > 0$ (Fig. 2).

With reference to Fig. 1, consider the case where the global maximum of the process, starting at $x(0) = 0$ and ending at $x(T)$, lies inside the interval $(0, T)$. It is useful to decompose the total path into two portions: a first path from $x(0) = 0$ to the maximum in a time $t_m$, and a second path from the maximum to $x(T)$ in a time $T - t_m$ (see Fig. 3a). Note that the maximum is common to both paths. Denote by the probability of the first path and by the probability of the second path. With respect to the variables $(x, v)$, the process is Markovian, and the probability of the entire path is given by the product

(10) |

We introduce the change of variables , (which implies ) for the first path (see Fig. 3b). Therefore, we can rewrite

(11) |

As for the second path, the change of variables , (see Fig. 3b) gives

(12) |

When the maximum lies at the boundary $t_m = T$, as illustrated in Fig. 2, an analogous argument, with a change of variables and (which implies ), allows rewriting the probability of such a path as .

### 2.1 Integral of a Brownian Bridge

In this case, $v(T) = 0$, and the location $t_m$ of the maximum lies inside the interval, as in Fig. 1. It follows that $P(t_m \mid T)$ can be obtained by integrating the two paths of Eq. (10) over and , with

(13) |

We define

(14) |

which denotes the probability that the process, starting at the initial position at , with initial velocity , remains positive up to time , with a vanishing final velocity (the final position being arbitrary).

The integral can be computed exactly, and it turns out that, in the limit, this integral vanishes, as is apparent from Eq. (70). This is not surprising: indeed, if the particle starts at the origin with vanishing velocity, it cannot survive up to a finite time , since it will cross the boundary almost immediately. This is due to the continuous nature of the Brownian velocity. Therefore, if one puts straightaway, the rhs of Eq. (13) vanishes. The reason for this is clear: the lhs of Eq. (13) represents the probability that the maximum lies in , i.e., , and hence is proportional to the small time increment . Setting essentially implies , which therefore gives on the rhs. To extract the nonzero probability density , one therefore needs to keep a finite (and therefore a finite ) on the rhs of Eq. (13). Finally, one uses the following limiting procedure, which gives a finite answer:

(15) |

We note that this type of regularization procedure has been adopted before in computing the distribution of the area under a Brownian excursion [25] and, more recently, in the context of computing for several constrained Brownian motions, including the Brownian excursion [3].

### 2.2 Integral of a free Brownian Motion

When the final velocity of the particle is arbitrary, there is a finite probability that the global maximum occurs at the end of the observation time window . This means that the probability density includes a delta function (with a nonzero weight ). This is in addition to a non-delta function part with a nonzero support (and a total weight ) over the full interval .

Let us first compute the delta-function component in at . This contribution comes from paths that start at the origin with zero velocity and end up at (with being the maximum) in a small time window , where and is free to take any positive value. Following the notations and the change of variables discussed earlier, one can write the net probability of such paths as

(16) |

We compute this integral explicitly in B, and it turns out to be a constant independent of , . Thus, it follows that the delta-function component of the probability density is .

We next focus on the non-delta-function part, where the maximum occurs at some time well inside the interval . The calculation proceeds along the same lines as for the integral of a Brownian bridge. The integral over the two paths of Eq. (10) now reads

(17) |

where

(18) |

### 2.3 Numerical simulations

To verify our main theoretical predictions in the two cases, namely, (i) the integral of a Brownian bridge in Eq. (6) and (ii) the integral of a free Brownian motion in Eq. (9), we have also performed Monte Carlo simulations of the two processes. In both cases, we simulated realizations, with and an integration step . In Figs. 4 and 5, for the two cases (i) and (ii) respectively, the theoretical and numerical results are compared. The circles represent the simulation results and the solid lines represent the analytical formulae, respectively in Eq. (6) (Fig. 4) and Eq. (9) (Fig. 5). The agreement between simulations and analytical curves is excellent.
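For case (i) the velocity must vanish at both ends. One convenient way to realize this, used in the sketch below (unit noise assumed; not necessarily the scheme used for the figures), is to simulate the velocity as a Brownian bridge and integrate it. The symmetry of the predicted density can then be checked through the sample mean of $t_m/T$:

```python
import numpy as np

rng = np.random.default_rng(6)
M, n = 10000, 500
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
incr = rng.normal(0.0, np.sqrt(dt), size=(M, n))
W = np.concatenate((np.zeros((M, 1)), np.cumsum(incr, axis=1)), axis=1)
v = W - np.outer(W[:, -1], t)      # velocity is a Brownian bridge: v(0) = v(T) = 0
x = np.concatenate((np.zeros((M, 1)), np.cumsum(v[:, :-1], axis=1) * dt), axis=1)
t_m = np.argmax(x, axis=1) * dt    # symmetry of f(x) predicts mean 1/2
```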

## 3 Summary and conclusions

In this paper we have studied two non-Markov stochastic processes: the integral of a Brownian bridge and the integral of a free Brownian motion up to time $T$, i.e., the so-called random acceleration process. For both processes, we have computed exactly the probability density $P(t_m \mid T)$ of the time $t_m$ at which the process achieves its maximum when observed over a time window $[0, T]$. In the former case, the density is symmetric around the midpoint $t_m = T/2$, with power-law divergences at the end points $t_m = 0$ and $t_m = T$. In contrast, in the latter case, $P(t_m \mid T)$ is asymmetric around the midpoint and, in addition, has an unusual delta-function contribution at the end point $t_m = T$. The exact results are given respectively in Eqs. (5) and (7). Our analytical findings are in excellent agreement with numerical Monte Carlo simulations (see Figs. 4 and 5).

Note that, even though our final results look rather simple, their derivations, starting from the basic propagator, are nontrivial and involve rather delicate mathematical manipulations of multiple integrals, which we have tried to present systematically in a stepwise fashion in the Appendices. Hopefully, some of these details might also be useful in other problems involving the integrals of a Brownian motion.

The exact result for $P(t_m \mid T)$ that we obtain in this paper is directly relevant for a particle diffusing in a one-dimensional random potential. In a finite box of size $L$, the particle finally thermalizes into an equilibrium Boltzmann density proportional to $e^{-\beta U(x)}/Z$, where $\beta$ is the inverse temperature and $Z$ is the partition function. In the low-temperature limit, by averaging over the disorder, one can show [9] that this density is precisely equal to the probability density that the potential $U(x)$, viewed as a stochastic process in space (where space plays the role of ‘time’), attains its minimum (or equivalently its maximum [26]) at $x$. For example, for the well-known Sinai model, where the potential itself is a free Brownian trajectory, one recovers the arcsine law. Our findings then generalize this result to the case where the potential represents the trajectory of a random acceleration process.

Let us end with a curious observation. For the free Brownian motion, Lévy’s arcsine law governs the distribution of two different observables: (a) $t_m$, the location of the maximum, and (b) the occupation time, i.e., the time the process spends on the positive half axis within the interval $[0, T]$. One may then ask whether these two observables share the same distribution also for the random acceleration process. While we are able to compute the distribution of $t_m$ explicitly in this case, the computation of the occupation time seems harder for the random acceleration process. Our numerical evidence shows that, unlike in the Brownian case, the two distributions are different in the random acceleration model. Therefore, computing the occupation time distribution for the random acceleration model remains a challenging open problem.

## Appendix A Propagator with an absorbing boundary

In the absence of boundaries, the free propagator for the random acceleration process reads [10]

(20) |

In the presence of the absorbing boundary at , the propagator reads [10]

(21) |

where the Laplace transform of , i.e.,

(22) |

has a rather complicated expression [10]

(23) |

being the Airy function [27]. In subsequent calculations, we will also use the following amazing identity [16, 10]

(24) |

## Appendix B The first integral

In this Appendix we compute the first integral

(25) |

where the propagator is given in Eq. (20), and the Laplace transform of is given in Eq. (23). We show below that is actually a constant, independent of , and is given by an amazingly simple expression

(26) |

Performing these integrals in closed form requires a number of identities from Ref. [24]. To use the available identities, however, we first need to express the integral in a suitable form. In short, the derivation is not straightforward and requires quite a few mathematical manipulations and tricks. To ease the reading, we lay out the main steps of the derivation below.
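Closed-form Airy integrals of this kind can be sanity-checked numerically before being used in the derivation. As an illustration (not necessarily one of the specific identities invoked below), the standard result $\int_0^\infty \mathrm{Ai}(u)\,du = 1/3$ is verified with scipy:

```python
import numpy as np
from scipy import integrate, special

# Standard Airy integral: the area of Ai over the positive half axis is 1/3.
# special.airy returns (Ai, Ai', Bi, Bi'); we keep only Ai.
val, _ = integrate.quad(lambda u: special.airy(u)[0], 0.0, np.inf)
```

Since $\mathrm{Ai}(u)$ decays super-exponentially for large $u$, the quadrature converges to high accuracy.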

Step 1: Upon substituting the exact from Eq. (20), the first integral in Eq. (25) can be performed in a straightforward manner and gives

(27) |

Step 2: To perform the second integral, we first take its Laplace transform with respect to , substitute the result from Eq. (23), and then easily perform the integration over . This yields

(28) |

Step 3: Next, we split the term in the integrand on the rhs of Eq. (28) into two parts, namely,

(29) |

and thus split the integral into two parts, i.e., . This allows dealing with each contribution separately, so that a number of identities can be used, as explained below.

Step 4: Let us first consider the first term,

(30) |

Now, the integral over can be performed explicitly using the identity (24). Next, for the integral over , we make a change of variables: . Substituting back, we get

(31) |

Then, making a further change of variables, , yields a simpler expression

(32) |

The rhs can be computed by integration by parts, which gives

(33) |

Using [27] finally gives the rather simple exact expression

(34) |

Step 5: We now turn to the second contribution , which is given by

(35) |

Now, the integral over can be separated from the integrals over and , and we write

(36) |

where involves only the integral over

(37) |

and involves the integration over and

(38) |

Step 6: We now perform the integral in Eq. (37). We first make a change of variables from to . Next, we use the identity [27]

$$\mathrm{Ai}(z) = \frac{1}{\pi}\sqrt{\frac{z}{3}}\; K_{1/3}\!\left(\frac{2}{3}\, z^{3/2}\right), \qquad (39)$$

where $K_{1/3}$ is the modified Bessel function of index $1/3$ [24]. Substituting back in Eq. (37), we get

(40) |

The integral over can be explicitly performed [24], and gives . Thus, we get a very simple expression

(41) |
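The Airy–Bessel connection used in this step is the classical relation $\mathrm{Ai}(z) = (1/\pi)\sqrt{z/3}\, K_{1/3}(2z^{3/2}/3)$ (assumed here to be the identity of Eq. (39)); it can be verified numerically at a few points:

```python
import numpy as np
from scipy import special

# Check Ai(z) = (1/pi) * sqrt(z/3) * K_{1/3}(2 z^{3/2} / 3) on a few abscissas.
z = np.array([0.5, 1.0, 2.0, 4.0])
ai = special.airy(z)[0]                                  # Ai(z)
bessel = np.sqrt(z / 3.0) / np.pi * special.kv(1.0/3.0, 2.0 * z**1.5 / 3.0)
```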

Step 7: Now we turn to the computation of in Eq. (38). Making the change of variables from to , we get

(42) |

Next, we make a change of variables from to , so as to rewrite the integral as

(43) |

Substituting the expressions for and in Eq. (36) finally yields

(44) |

where the constant is given by the integral

(45) |

Step 8: We now have to evaluate the constant in Eq. (45). As a first step, let us again use the identity in Eq. (39), so as to express the inner integral in Eq. (45) as

(46) |

This follows by making a change of variables on the lhs of Eq. (46) and then using the identity in Eq. (39). Thus, we get

(47) |

Next, we perform integration by parts and use the incomplete Gamma function defined by . This gives

(48) |

Using the result [24] , we get

(49) |

Step 9: Finally, concerning the integral in Eq. (49), namely,

(50) |

we first write

(51) |

which can be easily proved by integration by parts. Substituting this result in Eq. (50), the first term can be computed easily. We then get

(52) |

To perform the second integral, we first use the integral representation [24]

(53) |

Substituting this result in Eq. (52) gives

(54) |

Next, we use the identity [24]

(55) |

so as to perform the inner integral over in Eq. (54). The integral over then becomes simple, and using once again yields an exact expression for , namely,

(56) |

Substituting this result in Eq. (49) gives the final expression for

(57) |

## Appendix C The second integral

(60) |

The first term can be easily integrated, and expanding in small yields

(61) |

We next take the Laplace transform of the second term with respect to , and then integrate over using the expression in Eq. (23). To extract the leading dependence, it turns out to be useful to split . This gives

(62) |

where

(63) |

The integration over in the first term of Eq. (62) can be carried out by resorting to the identity (24), so as to get