Noise sensitivity of functionals of fractional Brownian motion driven stochastic differential equations: Results and perspectives
We present an innovative sensitivity analysis for stochastic differential equations: we study the sensitivity, when the Hurst parameter H of the driving fractional Brownian motion tends to the pure Brownian value H = 1/2, of probability distributions of smooth functionals of the trajectories of the solutions and of the Laplace transform of the first passage time of the solution at a given threshold. Our technique requires us to extend already known Gaussian estimates on the density of the solution to estimates with constants which are uniform w.r.t. the time variable in the whole half-line and w.r.t. H as H tends to 1/2.
Key words: Fractional Brownian motion, Malliavin calculus, first hitting time.
Recent statistical studies show memory effects in biological, financial, and physical data: see e.g.  for statistical evidence in climatology and  for a financial model and the citations therein for evidence in finance. For such data, the Markov structure of Lévy-driven stochastic differential equations makes these models questionable. It therefore seems worth proposing new models driven by noises with long-range memory, such as fractional Brownian motions.
In practice the accurate estimation of the Hurst parameter H of the noise is difficult (see e.g. ), and therefore one needs to develop sensitivity analyses w.r.t. H of the probability distributions of smooth and non-smooth functionals of the solutions to stochastic differential equations. Similar ideas were developed in  for symmetric integrals of the fractional Brownian motion.
Here we review and illustrate by numerical experiments our theoretical results obtained in  for two extreme situations in terms of Malliavin regularity: on the one hand, expectations of smooth functions of the solution at a fixed time; on the other hand, Laplace transforms of first passage times at prescribed thresholds. Our motivation to consider first passage times comes from their many uses in various applications: default risk in mathematical finance, spike trains in neuroscience (spike trains, the sequences of times at which the membrane potential of a neuron reaches a limit threshold and is then reset to a resting value, are essential to describe neuronal activity), stochastic numerics (see e.g. [3, Sec.3]) and physics (see e.g. ). Long-range dependence leads to analytical and numerical difficulties: see e.g. .
Our theoretical estimates and numerical results tend to show that the Markov Brownian model is a good proxy model as long as the Hurst parameter remains close to 1/2. This robustness property, even for probability distributions of singular functionals (in the sense of Malliavin calculus) of the paths such as first hitting times, is an important piece of information for modeling and simulation purposes: when statistical or calibration procedures lead to estimated values of H close to 1/2, it is reasonable to work with Brownian SDEs, which allows one to analyze the model by means of PDE techniques and stochastic calculus for semimartingales, and to simulate it by means of standard stochastic simulation methods.
1 Our main results
The fractional Brownian motion B^H with Hurst parameter H ∈ (0,1) is the centred Gaussian process with covariance

R_H(s,t) = (1/2) ( s^{2H} + t^{2H} − |t − s|^{2H} ),  s, t ≥ 0.
Given x_0 ∈ R, we consider the process X^H solution to the following stochastic differential equation driven by B^H:

X_t^H = x_0 + ∫_0^t b(X_s^H) ds + ∫_0^t σ(X_s^H) dB_s^H,   (1;H)
where the last integral is a pathwise Stieltjes integral in the sense of . For H = 1/2 the process solves the following SDE in the classical Stratonovich sense:
Below we use the following set of hypotheses:
There exists such that ;
The function σ satisfies a strong ellipticity condition: there exists σ_0 > 0 such that |σ(x)| ≥ σ_0 for all x.
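Under such hypotheses, the solution of (1;H) can be approximated by a pathwise Euler scheme along a sampled fractional Brownian path; the sketch below is our own illustration with hypothetical coefficients (for H > 1/2 the scheme is justified by the Stieltjes/Young interpretation of the integral).

```python
import numpy as np

def euler_fbm_sde(b, sigma, x0, fbm_increments, dt):
    """Pathwise Euler scheme X_{k+1} = X_k + b(X_k)*dt + sigma(X_k)*dB^H_k."""
    x = np.empty(len(fbm_increments) + 1)
    x[0] = x0
    for k, db in enumerate(fbm_increments):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * db
    return x

# Sanity check: with sigma = 0 the scheme integrates the ODE x' = b(x);
# for b(x) = -x and x0 = 1 the exact value at time 1 is exp(-1).
n = 1000
x_euler = euler_fbm_sde(lambda x: -x, lambda x: 0.0, 1.0, np.zeros(n), 1.0 / n)
```

In practice `fbm_increments` would be the increments of a simulated fractional Brownian path.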
Our first theorem is elementary. It describes the sensitivity w.r.t. H, around the critical Brownian parameter H = 1/2, of the time-marginal probability distributions of X^H.
Our next theorem concerns the first passage time at threshold 1 of X^H issued from x_0 < 1: τ^H := inf{ t ≥ 0 : X_t^H = 1 }. The probability distribution of the first passage time of a fractional Brownian motion is not explicitly known.  obtained the asymptotic behaviour of its tail distribution function and  obtained an upper bound on the Laplace transform of τ^H. The recent work of  proposes an asymptotic expansion (in powers of H − 1/2) of the density of τ^H, formally obtained by perturbation analysis techniques.
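By contrast, in the Brownian case H = 1/2 with b ≡ 0, σ ≡ 1 and x_0 = 0, the law of the first passage time at threshold 1 is explicit: it has Lévy's density p(t) = (2πt³)^{−1/2} e^{−1/(2t)} and Laplace transform E[e^{−λτ}] = e^{−√(2λ)}. A quick numerical consistency check (our own sketch):

```python
import numpy as np

lam = 1.0
t = np.linspace(1e-6, 60.0, 400_000)
# Levy's density of the first passage time of Brownian motion at level 1
density = np.exp(-1.0 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t ** 3)
integrand = np.exp(-lam * t) * density
# trapezoid rule for the Laplace transform E[exp(-lam * tau)]
laplace = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
exact = np.exp(-np.sqrt(2.0 * lam))  # closed-form value
```

The truncation at t = 60 is harmless because the exponential factor makes the tail negligible.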
To prove the preceding theorem we need accurate estimates on the density of X_t^H, with constants which are uniform w.r.t. t over both small and long times and w.r.t. H in a neighbourhood of 1/2. Our next theorem improves estimates in [2, 5]. Our contribution consists in getting constants which are uniform w.r.t. t in the whole half-line and w.r.t. H as H tends to 1/2.
Note that Theorems 1.1, 1.2 and 1.3 are proved in , including extensions to . We do not address the proof of Theorem 1.3 here.
We sketch the proofs of Theorems 1.1 and 1.2 in Section 2. In Section 3 we consider a case which was not tackled in , that is, the case . Finally, in Section 4 we present numerical results which illustrate Theorem 1.2 and suggest that the rate is sub-optimal.
2 Sketch of the proofs
2.1 Reminders on Malliavin calculus
We denote by D and δ the classical derivative and Skorokhod operators of Malliavin calculus w.r.t. the Brownian motion on the time interval [0,T] (see e.g. ). In the fractional Brownian motion framework the Malliavin derivative is defined as an operator on the smooth random variables with values in the Hilbert space defined as the completion of the space of step functions on [0,T] with the following scalar product:

⟨1_{[0,t]}, 1_{[0,s]}⟩ = R_H(s,t).
The domain of D in L^p(Ω) is denoted by D^{1,p}; it is the closure of the space of smooth random variables with respect to the norm

‖F‖_{1,p} = ( E|F|^p + E‖DF‖^p )^{1/p}.
Equivalently, and are defined as and for (cf. [15, p.288]), where for any the operator is defined as follows: for any with suitable integrability properties,
We denote by ‖·‖_∞ the sup norm and by ‖·‖_α the α-Hölder norm for functions on the interval [0,T]. Under Assumption 3, there exists a transformation F, called the Lamperti transform, such that X^H is mapped to the solution of (1;H) with drift coefficient (b/σ)∘F^{-1} and diffusion coefficient identically equal to 1. Since F is one-to-one, we assume in the rest of this paper that σ is uniformly equal to 1. See  for details on the Lamperti transform in this framework.
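Concretely, in this scalar setting the Lamperti transform can be written explicitly (a standard computation under the pathwise or Stratonovich interpretation of (1;H); we omit the regularity checks):

```latex
F(x) = \int_{x_0}^{x} \frac{dy}{\sigma(y)}, \qquad
Y_t^H := F\bigl(X_t^H\bigr)
       = \int_0^t \frac{b}{\sigma}\Bigl(F^{-1}\bigl(Y_s^H\bigr)\Bigr)\, ds \;+\; B_t^H,
```

so that Y^H solves an equation of type (1;H) with drift (b/σ)∘F^{-1} and unit diffusion coefficient.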
Let X^H be the solution to (1;H). There exist modifications of the processes X^H and DX^H such that for any α < H it a.s. holds that
2.2 Sketch of the proof of Theorem 1.1
Proving Theorem 1.1 is easy. A first technique consists in using pathwise estimates on the difference X^H − X^{1/2}, with B^H and B^{1/2} defined on the same probability space. A second technique, which we present here in order to introduce the reader to the method of proof for Theorem 1.2, consists in differentiating the map H ↦ E[f(X_t^H)], where
which leads to
As u solves a parabolic PDE driven by the generator of X^{1/2} and as the Skorokhod integral has zero mean, we get
It then remains to use the estimates (2.1).
2.3 Sketch of the proof of Theorem 1.2
We now sketch the proof of Theorem 1.2. We will soon limit ourselves to the pure fBm case (b ≡ 0 and σ ≡ 1) in order to show the main ideas used in the proof and avoid too many technicalities. For now, our previous remark on the Lamperti transform implies that σ can be chosen uniformly equal to 1.
Our sensitivity analysis of Laplace transforms is based on a PDE representation of first hitting time Laplace transforms in the case H = 1/2.
For H = 1/2 it is well known that

E[ e^{−λ τ^{1/2}} ] = u(x_0),
where the function u is the classical solution, with bounded continuous first and second derivatives, to the ODE

(1/2) σ²(x) u''(x) + b(x) u'(x) = λ u(x) for x < 1, with u(1) = 1.   (2.2)

Expanding e^{−λt} u(X_t^H) by means of the Itô formula for X^H, where the last term corresponds to the Itô term, and using the ODE (2.2) satisfied by u, we get
where the remainder kernel is defined by the preceding computation. Observe that the last term vanishes for H close to 1/2, since this kernel is an approximation of the identity and converges to a Dirac mass as H tends to 1/2. This argument is made rigorous in .
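In the pure Brownian case (b ≡ 0, σ ≡ 1) the solution of the ODE (2.2) with threshold 1 is explicit, u(x) = e^{−√(2λ)(1−x)}. The following finite-difference verification of (1/2)u'' = λu with u(1) = 1 is our own illustration:

```python
import numpy as np

lam = 2.0
root = np.sqrt(2.0 * lam)
u = lambda x: np.exp(-root * (1.0 - x))  # candidate solution, x <= 1

x = np.linspace(-3.0, 0.99, 500)
h = 1e-5
u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2  # central second difference
residual = 0.5 * u_xx - lam * u(x)  # should be (numerically) zero
```

Consequently E[e^{−λτ^{1/2}}] = u(x_0) = e^{−√(2λ)(1−x_0)} for any x_0 < 1.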
We now limit ourselves to the pure fBm case (b ≡ 0 and σ ≡ 1) to make the rest of the computations more understandable, although the differences are essentially technical. Given that now X_t^H = x_0 + B_t^H, the previous equality becomes
Evaluate the previous equation at t ∧ τ^H, take expectations and let t tend to infinity. For any λ > 0 we obtain:
Let be the function of defined by if and if . There exists a constant such that
where is the function defined in Theorem 1.2.
Sketch of proof.
From Fubini’s theorem, we get
These computations lead to the desired result. ∎
Note that this proof adapts to diffusions, but then the density of X_t^H is needed, which is the purpose of Theorem 1.3.
Compared to the proof of Theorem 1.1, an important difficulty appears in the estimation above: as the optional stopping theorem does not hold for Skorokhod integrals of the fBm, one has to carefully estimate expectations of stopped Skorokhod integrals and obtain estimates which decrease infinitely fast as t goes to infinity. We obtained the following result.
Proposition 13 of  shows that
Define the field and the process by
For any real-valued function with one has
Suppose for a moment that we have proved the following: there exists such that for all and all , there exist constants such that
We would then get:
which is the desired result (2.5).
In order to estimate the left-hand side of Inequality (2.7) we aim to apply Garsia-Rodemich-Rumsey’s lemma (see below). However, it seems hard to get the desired estimate by estimating moments of increments of , in particular because is not smooth in the Malliavin sense. We thus proceed by localization and construct a continuous process which is smooth on the event and is close to 0 on the complementary event. To this end we introduce the following new notations.
For some small to be fixed set
where is a smooth function taking values in such that , and .
The crucial property is the following: for all and , a.s. This is a consequence of the local property of the Skorokhod integral ([15, p.47]). Therefore, for any ,
Recall the Garsia–Rodemich–Rumsey lemma in the following form: if Z is a continuous process on [0,T], then for q ≥ 1 and α > 0 one has

‖Z‖_α ≤ C_{α,q} ( ∫_0^T ∫_0^T |Z_u − Z_v|^q / |u − v|^{αq+2} du dv )^{1/q},   (2.3)

provided the right-hand side is finite. In order to apply (2.3), we thus need to estimate moments of
. Note that Lemmas 2.3 and 2.4 (below) both give bounds on these moments in terms of a power of the time increment. Thus the process has a continuous modification by Kolmogorov's continuity criterion, and the GRR lemma will be applicable to it.
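For completeness, the Hölder estimate invoked here is a standard corollary of the classical Garsia–Rodemich–Rumsey inequality: for a continuous function f on [0,T] and strictly increasing continuous Ψ, p with Ψ(0) = p(0) = 0,

```latex
\int_0^T\!\!\int_0^T \Psi\!\left(\frac{|f(t)-f(s)|}{p(|t-s|)}\right) ds\, dt \le B < \infty
\;\Longrightarrow\;
|f(t)-f(s)| \le 8 \int_0^{|t-s|} \Psi^{-1}\!\left(\frac{4B}{u^2}\right) dp(u).
```

Taking Ψ(x) = x^q and p(u) = u^{α + 2/q} yields the moment/Hölder form used in the proof.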
We can easily obtain bounds on the norm in terms of . This observation leads us to notice that
Choosing and we thus get
from which Inequality (2.7) follows. ∎
There exists such that: for all , for all and for all , there exist such that
where the function is defined as in Theorem 1.2.
There exists such that: For all and , there exist such that