Nonparametric estimation for irregularly sampled Lévy processes
Abstract
We consider nonparametric statistical inference for Lévy processes sampled irregularly, at low frequency. The estimation of the jump dynamics as well as the estimation of the distributional density are investigated.
Nonasymptotic risk bounds are derived and the corresponding rates of convergence are discussed under global as well as local regularity assumptions. Moreover, minimax optimality is proved for the estimator of the jump measure. Some numerical examples are given to illustrate the practical performance of the estimation procedure.
Keywords. Nonparametric statistical inference. Lévy processes. Irregular sampling. Density estimation.
AMS Subject Classification. 62G05 62G07 62G20 62M05
1 Introduction
Nonparametric statistical inference for stochastic processes with jumps has a long history, dating back as far as the work by Rubin and Tucker (1959) and Basawa and Brockwell (1982).
In the past decade, jump processes have become increasingly popular among practitioners, especially in the field of financial applications, and the interest in the topic has constantly grown. In particular, a vast amount of literature has been published on the estimation of the characteristics of Lévy processes.
In estimation problems for Lévy processes, one can essentially distinguish between two different types of observation schemes: In a high frequency framework, it is assumed that the maximal distance between the observation times tends to zero as the number of observations increases to infinity, whereas in a low frequency regime, it is not assumed that the observation distances are asymptotically small.
So far, when low-frequency observations of the underlying Lévy process are considered, most publications have focused on the case where the observations are homogeneous, which means that the distance between any two consecutive observation times is fixed and does not vary with the index; see, for example, Neumann and Reiß (2009), Comte and Genon-Catalot (2010) or Kappus (2014). Contrary to this, when a high-frequency sampling scheme is investigated, the homogeneity assumption can easily be disposed of; see, among many others, the work by Figueroa-López (2009) or Comte and Genon-Catalot (2009, 2011).
In the present work, we focus on nonparametric statistical inference for Lévy processes when the sampling scheme is low-frequency and irregular. This means that the (deterministic) distances between any two consecutive observation times may vary with the index, from values close to zero up to some possibly very large maximal distance.
Belomestny (2011) and, most recently, Belomestny and Trabs (2015) have investigated the estimation of the characteristics of a Lévy process when homogeneous and low-frequency observations of a time-changed process are available. In this framework, the time change is understood to be random and independent of the underlying Lévy process.
It is important to point out that the deterministic and irregular sampling scheme discussed in the present work is not embedded, as a special case, in the time-changed model which has been investigated in Belomestny (2011). In that paper, the estimator is constructed for a stationary time-change process, under a standing assumption on the time change referred to as (ATI). When a deterministic time change is considered, this assumption would readily imply that the sampling is equidistant, and the problem would hence degenerate to a standard estimation problem with homogeneous observations.
The statistical framework considered in the present publication, with arbitrary irregular sampling, is hence new in the literature on nonparametric estimation for Lévy processes. We focus on the following problems: Firstly, the nonparametric estimation of the jump measure is discussed under some additional assumptions on the process. Secondly, we discuss the estimation of the distributional density of the process under very general a priori assumptions. This second problem has not yet received much attention in the literature on Lévy processes; it is, indeed, quite standard when high-frequency or homogeneous observations are available. However, under low-frequency and irregular sampling, the estimation of distributional densities is not straightforward.
This paper is organized as follows: In Section 2, the statistical framework is made precise and some notation is introduced. In Section 3, the estimation of the jump measure is investigated. An oracle-type estimator for the Lévy density is introduced, which turns out to depend on the appropriate choice of certain weight functions. Nonasymptotic bounds are established for the oracle estimator, and the corresponding rates of convergence are derived and shown to be optimal in a minimax sense. Density estimation is then addressed in Section 4. In Section 5, an algorithm for the fully data-driven choice of the weights as well as of the cutoff parameter is introduced. All proofs are postponed to Section 6.
2 Statistical model, assumptions and notation
A one-dimensional Lévy process is observed at deterministic time points. Throughout the rest of this work, we assume that the distances between consecutive observation times are bounded by some positive constant. Apart from this upper bound, no additional assumptions on the observation times are imposed, so the sampling is irregular and fully general.
The goal of this paper is twofold. Firstly, we focus on the estimation of the jump dynamics under the following additional assumptions.

(A1) The process has finite variation on compact sets.

(A2) The process has no drift component.

(A3) For one, and hence for any, $t > 0$, the corresponding moment condition on $X_t$ is satisfied.

(A4) The Lévy measure has a Lebesgue density $\nu$ which is continuous on $\mathbb{R} \setminus \{0\}$.
Under (A1)–(A4), the characteristic function of $X_t$ is known to admit the following representation,
(2.1) $\varphi_t(u) := \mathbb{E}\big[ e^{iuX_t} \big] = e^{t\psi(u)},$
with characteristic exponent
(2.2) $\psi(u) = \int \big( e^{iux} - 1 \big)\, \nu(x)\, \mathrm{d}x.$
For the proof, see e.g. Theorem 8.1 in Sato (1999). The process is then entirely described by the jump measure and hence by the function $g(x) := x\,\nu(x)$. We discuss the estimation of $g$ with $L^2$-loss on some interval.
The above assumptions are met for many prototypical Lévy processes such as compound Poisson processes, gamma processes or tempered stable processes without drift and with index of stability $\alpha < 1$. Notice that square integrability of $g$ may fail to hold true, but it is satisfied if $\nu(x) \leq C |x|^{-(1+\alpha)}$ near the origin for some $\alpha < 1/2$. This is met, for example, for compound Poisson processes, gamma processes and tempered stable processes with $\alpha < 1/2$. If $\alpha \geq 1/2$, estimating $g$ with $L^2$-loss does still make sense for compact sets bounded away from the origin.
It is worth mentioning that the assumptions (A1) and (A2) can be omitted and a fully general treatment of the problem is possible, but at the cost of additional technical complications, see, for example, Neumann and Reiß (2009) for the homogeneous case.
Secondly, we investigate the estimation of the distributional density of $X_t$ with $L^2$-loss, in case a square integrable Lebesgue density exists. It is well known that $X_t$ possesses a density if the process has a sufficiently high activity of small jumps or a nonzero Gaussian component. This follows, for example, from arguments given in Orey (1968). In particular, the Lebesgue density exists and is square integrable if the process is of pure jump type and satisfies an Orey-type lower bound on the activity of small jumps.
Typical examples are gamma processes with a sufficiently large scale parameter, stable or tempered stable processes, as well as Brownian motion. Prototypical counterexamples are compound Poisson processes or gamma processes with a small scale parameter.
We conclude this section by introducing some notation which will be used throughout the rest of the text.
For a function $f$, $\mathcal{F}f(u) = \int e^{iux} f(x)\,\mathrm{d}x$ is understood to be the Fourier transform. By $\Delta_j = t_j - t_{j-1}$, we denote the distance between consecutive observation times; $Z_j = X_{t_j} - X_{t_{j-1}}$ is the corresponding increment of the process and $\varphi_{\Delta_j}$ its characteristic function. Given a kernel $K$ and a bandwidth $h > 0$, we write $K_h(x) = \frac{1}{h} K(x/h)$. Finally, $X$ and $\widetilde{X}$ denote independent Lévy processes with Lévy measures $\nu$ and $\widetilde{\nu}$.
3 Estimation of the jump dynamics
3.1 Estimation procedure and nonasymptotic risk bounds
It follows from formulas (2.1) and (2.2) that the Fourier transform of $g$ can be recovered by differentiating the characteristic exponent,
(3.1) $\mathcal{F}g(u) = -i\,\psi'(u).$
For (possibly complex) weight functions $w_1(u), \dots, w_n(u)$, to be chosen appropriately, we define
(3.2) $F(u) := \sum_{j=1}^{n} w_j(u)\, \varphi_{\Delta_j}'(u) \quad \text{and} \quad \Phi(u) := \sum_{j=1}^{n} w_j(u)\, \Delta_j\, \varphi_{\Delta_j}(u).$
Since $\varphi_{\Delta_j}'(u) = \Delta_j\, \psi'(u)\, \varphi_{\Delta_j}(u)$, the derivative of the characteristic exponent equals the ratio $\psi'(u) = F(u)/\Phi(u)$, and formula (3.1) permits us to recover the Fourier transform of $g$ as follows,
(3.3) $\mathcal{F}g(u) = -i\, \frac{F(u)}{\Phi(u)}.$
Let us introduce the empirically accessible counterparts of $F$ and $\Phi$,
(3.4) $\widehat{F}(u) := \sum_{j=1}^{n} w_j(u)\, i Z_j\, e^{iuZ_j}$
and
(3.5) $\widehat{\Phi}(u) := \sum_{j=1}^{n} w_j(u)\, \Delta_j\, e^{iuZ_j}.$
Taking expectations, we find that these quantities are unbiased estimators of $F$ and $\Phi$. This suggests using $-i\,\widehat{F}(u)/\widehat{\Phi}(u)$ as an estimator of $\mathcal{F}g(u)$.
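The unbiasedness claim is a one-line computation; under our reading of the elided formulas (3.4) and (3.5), with one increment $Z_j$ per sampling interval, it reads as follows.

```latex
% Unbiasedness of the empirical counterparts (sketch):
\mathbb{E}\big[\, \Delta_j\, e^{iuZ_j} \big] = \Delta_j\, \varphi_{\Delta_j}(u),
\qquad
\mathbb{E}\big[\, iZ_j\, e^{iuZ_j} \big]
  = \frac{\partial}{\partial u}\, \mathbb{E}\big[ e^{iuZ_j} \big]
  = \varphi_{\Delta_j}'(u),
% so taking expectations term by term in (3.4) and (3.5)
% returns F(u) and \Phi(u), respectively.
```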
However, the estimation of the Lévy density from low-frequency observations is a prototypical statistical inverse problem, and the rates of convergence are governed by the smoothness of the underlying density as well as by the decay behavior of the function in the denominator. Too small values of the denominator will typically lead to a highly irregular behavior of the estimator and hence to a large variance. Inspired by Neumann (1997), we introduce a regularized version of the inverse of $\widehat{\Phi}$: once $|\widehat{\Phi}(u)|$ falls below some threshold of the order of its standard deviation, a reasonable estimate of $1/\Phi(u)$ is no longer possible, so the estimator is set to zero. Moreover, in order to ensure that the denominator is bounded from below, we introduce an additional constant threshold value. For a threshold function $\kappa(u)$ to be chosen,
(3.6) $\frac{1}{\widetilde{\Phi}(u)} := \frac{\mathbf{1}\{ |\widehat{\Phi}(u)| \geq \kappa(u) \}}{\widehat{\Phi}(u)}.$
The corresponding regularized estimator of $\mathcal{F}g$ is $-i\,\widehat{F}(u)/\widetilde{\Phi}(u)$. Finally, let $K$ be a kernel function having an integrable Fourier transform. For a bandwidth $h > 0$, the corresponding kernel estimator of $g$ is
(3.7) $\widehat{g}_h(x) := \frac{1}{2\pi} \int e^{-iux}\, \mathcal{F}K(hu)\, \Big( -i\, \frac{\widehat{F}(u)}{\widetilde{\Phi}(u)} \Big)\, \mathrm{d}u.$
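To make the construction concrete, here is a small Python sketch of the pipeline (3.4)–(3.7): empirical counterparts, Neumann-type truncation of the denominator, and Fourier inversion with the sinc kernel. The weight formula $|\varphi_{\Delta_j}(u)|^2/\Delta_j$, all identifiers and the numerical discretization are our own illustrative choices, not taken from the text.

```python
import numpy as np

def estimate_g(Z, deltas, u_grid, phi_abs, h, kappa=None):
    """Oracle-type estimator of g(x) = x * nu(x), sketching (3.4)-(3.7).

    Z        : increments Z_j of the process (one per sampling interval)
    deltas   : observation distances Delta_j
    u_grid   : equidistant frequency grid for the numerical inversion
    phi_abs  : callable u -> array of |phi_{Delta_j}(u)| over j (oracle
               input, unknown in practice; see Section 5)
    h        : bandwidth; the sinc kernel FK = 1_{[-1,1]} is used
    kappa    : truncation threshold for the regularized inverse (3.6)
    """
    n = len(Z)
    du = u_grid[1] - u_grid[0]
    thr = n ** -0.5 if kappa is None else kappa
    Fg = np.zeros(len(u_grid), dtype=complex)
    for k, u in enumerate(u_grid):
        w = phi_abs(u) ** 2 / deltas          # illustrative oracle-type weights
        w = w / w.sum()
        e = np.exp(1j * u * Z)
        F_hat = np.sum(w * 1j * Z * e)        # empirical counterpart of F(u)
        Phi_hat = np.sum(w * deltas * e)      # empirical counterpart of Phi(u)
        if abs(Phi_hat) >= thr:               # Neumann-type truncation (3.6)
            Fg[k] = -1j * F_hat / Phi_hat
    mask = np.abs(u_grid) <= 1.0 / h          # spectral cutoff of the sinc kernel
    def g_hat(x):
        # Fourier inversion: (1/2pi) int e^{-iux} FK(hu) Fg(u) du
        return float(np.real(np.sum(np.exp(-1j * u_grid[mask] * x) * Fg[mask])) * du / (2 * np.pi))
    return g_hat
```

In practice the oracle input `phi_abs` is unknown; Section 5 replaces it by an iterative plug-in.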
It remains to specify the weights. It is intuitive that the optimal choice of $w_j(u)$ will depend on $\Delta_j$ as well as on $\varphi_{\Delta_j}(u)$. For small values of $|\varphi_{\Delta_j}(u)|$, the noise dominates and the quality of the estimator deteriorates, which should lead to choosing a relatively small weight. Moreover, the smaller $\Delta_j$ is, the more information on the jump dynamics is contained in the observation, which motivates giving a high weight to increments with small $\Delta_j$. These considerations lead to the specification of ideal (oracle) weights. Notice that this statistical framework has strong structural similarities to a deconvolution problem with heteroscedastic errors, see Delaigle and Meister (2008).
In what follows, the oracle estimator is understood to be the estimator corresponding to the ideal weights. It is clear, however, that the ideal weights are not feasible to compute in practice, since they depend on the (unknown) characteristic functions. In the sequel, we derive risk bounds and optimality properties for the oracle estimator. A procedure for the fully data-driven choice of the weights is then proposed in Section 5.
Theorem 3.1.
Under suitable additional assumptions, there exists a constant, monotonically increasing with respect to both of its components, such that
(3.8)  
(3.9) 
Discussion. It is interesting to note that the upper risk bound in the preceding theorem confirms the analogy between the nonparametric estimation of $g$ and a deconvolution problem with heteroscedastic errors, see Delaigle and Meister (2008).
We concentrate, in this work, on a deterministic sampling scheme. However, the construction of the estimator and the upper bounds may be generalized to the case where the process is sampled at random times, provided that the random sampling scheme does not depend on the underlying process. In the proof of the rate results, one will then have to consider expectations conditional on the sampling times.
3.2 Rates of convergence
Minimax upper bounds
In this section, we study the asymptotic properties of the oracle estimator which can be derived from the upper risk bound formulated in Theorem 3.1.
Global regularity
We start by considering the estimation of $g$ with $L^2$-loss on the whole real axis and the resulting rates of convergence over certain nonparametric classes. It has been pointed out that the estimation of $g$ resembles the estimation of a distributional density from observations with additional heteroscedastic errors. The rates of convergence will thus depend on the observation distances and on the decay of the characteristic functions, as well as on the smoothness of $g$.
However, one needs to beware of the fact that (unlike in a standard deconvolution framework) the smoothness of $g$ and the decay of the characteristic functions are not independent of each other.
It is easily seen that polynomial or exponential decay conditions on the characteristic functions imply a logarithmic or polynomial growth, respectively, of the characteristic exponent, which in turn restricts the decay of $\mathcal{F}g$. The faster the characteristic function decays, the slower will hence be the decay of the Fourier transform of $g$. For processes of compound Poisson type, the absolute value of the characteristic function is bounded from below and $g$ may be infinitely differentiable.
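To illustrate this interplay, consider the gamma process with parameters $\lambda, \eta > 0$, a standard example which we work out here for the reader's convenience.

```latex
% Gamma process: \nu(x) = \lambda x^{-1} e^{-\eta x}\, \mathbf{1}_{(0,\infty)}(x).
% By Frullani's integral,
\psi(u) = \int_0^\infty \big( e^{iux} - 1 \big)\, \lambda x^{-1} e^{-\eta x}\, \mathrm{d}x
        = \lambda \log\!\Big( \frac{\eta}{\eta - iu} \Big),
\qquad
\varphi_t(u) = \Big( 1 - \frac{iu}{\eta} \Big)^{-\lambda t}.
% The characteristic exponent grows logarithmically, so \varphi_t decays
% polynomially, at a rate deteriorating in t.  At the same time,
g(x) = x\,\nu(x) = \lambda e^{-\eta x}\, \mathbf{1}_{(0,\infty)}(x),
\qquad
\mathcal{F}g(u) = \frac{\lambda}{\eta - iu},
% so |\mathcal{F}g(u)| decays only polynomially, of order |u|^{-1}.
```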
Let us introduce the following nonparametric classes of functions and of corresponding Lévy processes: For the polynomial and exponential decay scenarios, respectively, let the classes consist of those functions $g$ such that, for the corresponding Lévy process, (A1)–(A4) are met,
and, in addition,
Moreover, let the compound Poisson class consist of those functions $g$ such that the corresponding process is a compound Poisson process,
(3.10) 
and
(3.11) 
The following result, which describes the rates of convergence over the smoothness classes introduced above, is a direct consequence of Theorem 3.1.
Proposition 3.2.
Let $K$ be the sinc kernel, which is equivalent to stating that $\mathcal{F}K = \mathbf{1}_{[-1,1]}$.

Let the bandwidth be implicitly defined as the solution of the minimization equation
(3.12) Then
(3.13) 
Let be the solution of
Then
(3.14) 
Let be the solution of

Assume that holds and . Then
(3.15) 
Assume that and . Then
(3.16)

The convergence rates summarized above, with the bandwidth defined implicitly, are not particularly intuitive. For this reason, we give some examples to better understand the underlying structure.
Examples.

Consider the special case of homogeneous observations. Then, for the polynomial decay class, the implicit definition of the bandwidth can be solved explicitly. This leads to the convergence rate
(3.17) which is optimal for estimating $g$ with $L^2$-loss from homogeneous observations. For the exponential decay class, the corresponding convergence rate is logarithmic.
(Notice that in the homogeneous case, the weights coincide for all $j$ and hence cancel in the construction of the estimator.)

For nonhomogeneous observations, the rates of convergence are intimately connected to the maximal distance between the observation times. Consider the polynomial decay class. Part (i) of Proposition 3.2 then yields the order of the optimal bandwidth and, consequently,
(3.18) In particular, for a small maximal observation distance, we approach the rate of convergence which is optimal for estimating a density with $L^2$-loss from direct observations. The estimation of $g$ can hence be understood in analogy with density deconvolution whenever the observations are low-frequency, and in analogy with standard density estimation when the maximal observation distance is small.
In the compound Poisson case, the optimal bandwidth can again be made explicit. The resulting rate of convergence coincides with the optimal convergence rate for standard density estimation problems with supersmooth densities. In this particular case, the maximal observation distance does not affect the rate of convergence but only enters as a constant factor.
Local regularity
It has been pointed out in the preceding section that the global regularity of $g$ is linked to the decay behavior of the characteristic functions, which influences the attainable rates of convergence. In particular, an exponential decay of the characteristic function will always lead to slow, logarithmic rates of convergence.
However, one may as well be interested in the estimation of $g$ on some compact set bounded away from the origin. The Lévy density, and hence the function $g$, may then be arbitrarily regular, and even infinitely differentiable, on this set. In what follows, we investigate rates of convergence over nonparametric classes of locally smooth functions.
Recall that for an open interval $O$, the Hölder class $\mathcal{C}^s(O, C)$ consists of those functions $f$ defined on $O$ whose absolute value is bounded above by $C$, which are $\lfloor s \rfloor$ times continuously differentiable and for which
$|f^{(\lfloor s \rfloor)}(x) - f^{(\lfloor s \rfloor)}(y)| \leq C\, |x - y|^{s - \lfloor s \rfloor}, \quad x, y \in O,$
holds. In the present case, $\lfloor s \rfloor$ denotes the largest integer which is strictly smaller than $s$.
Let us introduce nonparametric classes of locally Hölder regular functions and corresponding Lévy processes. In the sequel, denotes the class of functions for which the following holds:

There exists a Lévy process for which (A1)–(A4) hold, with Lévy density $\nu$.

can be extended to a function and .

.

has a finite fourth moment and .
The following lemma gives a bound on the bias term for locally Hölder regular functions.
Lemma 3.3.
Let a compact interval bounded away from the origin be given. Assume that there exists an open interval containing it on which $g$ can be extended to a Hölder regular function. Let the kernel be chosen such that it has the appropriate order and, moreover, for some constant,
(3.19) 
Then, for some positive constant depending on the choice of the kernel, on the smoothness parameters and on the interval, we can estimate
(3.20) 
Comment. It is important to realize that condition (3.19) on the kernel is crucial. The sinc kernel is thus not a reasonable choice in the present situation, for estimating $g$ on a compact set bounded away from the origin. This is a consequence of the fact that, no matter how smooth $g$ may be locally, it will have a discontinuity at zero when the jump activity is infinite, which implies that the function is globally nonsmooth.
The following proposition is a direct consequence of the bound on the bias given above, combined with Theorem 3.1.
Proposition 3.4.
Let the assumptions summarized in Lemma 3.3 be met and, moreover, let the kernel be compactly supported. Let the bandwidth be implicitly defined as the solution of
(3.21) 
Then
(3.22) 
Example. Again, we investigate how the maximal observation distance influences the attainable rate of convergence. We find that the optimal bandwidth satisfies
(3.23) 
The resulting rate of convergence is faster than or equal to the stated bound. This implies that the rate approaches the rate of convergence which is known to be optimal for estimating $g$ (locally) with $L^2$-loss when continuous-time observations of the process are available.
Comment. In the same spirit, we may consider the case where the characteristic function decays exponentially. It can then be shown that, with an optimal choice of the bandwidth, one attains a corresponding explicit rate of convergence.
3.3 Lower bounds
In the sequel, we show that the rates of convergence presented in the preceding paragraphs are minimax optimal. However, to avoid some technical difficulties, we content ourselves with considering the case of polynomially decaying characteristic functions.
4 Estimating the distributional density
Let us have a look at the situation where one is interested in estimating the density of $X_t$ rather than the underlying Lévy measure. We can now drop the technical assumptions (A1)–(A4) on the process, thus allowing for a nonzero Gaussian part, a high activity of small jumps and the presence of a drift. In the sequel, it is only assumed that the density of $X_t$ exists and is square integrable on the whole real axis, thus excluding processes of compound Poisson type.
Without specifying any particular assumptions on the Lévy measure, drift or Gaussian part, the estimation procedure proposed in Section 3 can still be used to build an estimator of the derivative of the characteristic exponent,
(4.1) $\widehat{\psi'}(u) := \frac{\widehat{F}(u)}{\widetilde{\Phi}(u)}.$
Since $\psi(0) = 0$ holds by definition of the characteristic exponent, a corresponding estimator of the characteristic exponent can be defined by integration, $\widehat{\psi}(u) := \int_0^u \widehat{\psi'}(v)\, \mathrm{d}v.$
The characteristic function of $X_t$ can then be estimated by $e^{t \widehat{\psi}(u)}$.
However, there is a priori no guarantee that $e^{t \widehat{\psi}}$ is a characteristic function, and its absolute value may be larger than one. For this reason, we introduce an additional truncation, thus defining the final estimator of $\varphi_t$,
(4.2) $\widehat{\varphi}_t(u) := \frac{e^{t \widehat{\psi}(u)}}{\max\big\{ 1, \big| e^{t \widehat{\psi}(u)} \big| \big\}}.$
Given a kernel function $K$ and a bandwidth $h > 0$, the density is then estimated using kernel smoothing and Fourier inversion,
(4.3) $\widehat{p}_{t,h}(x) := \frac{1}{2\pi} \int e^{-iux}\, \mathcal{F}K(hu)\, \widehat{\varphi}_t(u)\, \mathrm{d}u.$
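The construction (4.1)–(4.3) can be sketched numerically as follows; the equal weighting, the grid-based integration of the estimated derivative of the characteristic exponent and all identifiers are our own simplifying assumptions, not the paper's.

```python
import numpy as np

def estimate_density(Z, deltas, t, u_grid, h):
    """Sketch of the density estimator (4.1)-(4.3), with equal weights."""
    n = len(Z)
    du = u_grid[1] - u_grid[0]
    psi_prime = np.zeros(len(u_grid), dtype=complex)
    for k, u in enumerate(u_grid):
        e = np.exp(1j * u * Z)
        F_hat = np.mean(1j * Z * e)           # estimates (1/n) sum_j phi'_{Delta_j}(u)
        Phi_hat = np.mean(deltas * e)         # estimates (1/n) sum_j Delta_j phi_{Delta_j}(u)
        if abs(Phi_hat) >= n ** -0.5:         # regularized inverse, cf. (3.6)
            psi_prime[k] = F_hat / Phi_hat    # estimates psi'(u)
    # integrate psi' outward from u = 0, using psi(0) = 0
    pos = u_grid >= 0.0
    psi = np.zeros(len(u_grid), dtype=complex)
    psi[pos] = np.cumsum(psi_prime[pos]) * du
    psi[~pos] = -np.cumsum(psi_prime[~pos][::-1])[::-1] * du
    phi_t = np.exp(t * psi)
    phi_t = phi_t / np.maximum(1.0, np.abs(phi_t))   # truncation (4.2): |phi_t| <= 1
    mask = np.abs(u_grid) <= 1.0 / h                 # sinc-kernel cutoff, cf. (4.3)
    def p_hat(x):
        return float(np.real(np.sum(np.exp(-1j * u_grid[mask] * x) * phi_t[mask])) * du / (2 * np.pi))
    return p_hat
```

Note that the ratio of the two plain empirical means is still consistent for the derivative of the characteristic exponent, since each numerator term equals $\Delta_j \psi'(u) \varphi_{\Delta_j}(u)$ in expectation; the weights of Section 3 only improve efficiency.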
Theorem 4.1.
Let be supported on . Assume that .
Then there exists a constant, monotonically increasing in both components, such that
(4.4)  
(4.5) 
with
Rates of convergence
For , let be the class of densities corresponding to an infinitely divisible distribution for which the following holds.

For the characteristic function of ,
(4.6) 
is square integrable and is integrable, with

The random variable with density has finite moments up to order and .
This function class contains, for example, the densities of gamma or bilateral gamma distributions.
For , let be the class of infinitely divisible densities such that



The moments up to order are finite and .
Typical examples are tempered stable laws with index of stability , or (for ) processes with nonzero Gaussian part.
In the sequel, we define, for the exponential or polynomial decay scenario, respectively,
Moreover,
The convergence rates over nonparametric function classes summarized in the following proposition are immediate consequences of Theorem 4.1, obtained by balancing the bias and the variance term.
Proposition 4.2.
Let $K$ be the sinc kernel.

Consider the first pair of classes. Then, with the bandwidth implicitly defined as the solution to
(4.7) we derive that
(4.8) and
(4.9) 
Consider or . Let be the solution to
(4.10) Then it follows that
(4.11)
Again, the implicit definition of the smoothing parameter is not particularly intuitive, so we have a look at some examples to understand how the convergence rates are connected to the maximal distance between the observation times.
Examples.

In the polynomial decay scenario, (4.7) implies that , with
(4.12) and the corresponding convergence rate is faster than or equal to . This implies, in particular, that for , without any further restriction on the regularity of the observation times, the estimator attains the parametric rate of convergence if a finite th moment exists and .

In the exponential decay scenario with , (4.7) implies that , with
The corresponding rate of convergence is faster than or equal to , with .
It follows that the parametric rate is attained whenever the th moment is finite and .
For exponential decay with , the arguments are the same, apart from an additional logarithmic loss.
5 Numerical examples
5.1 Practical calculation of the weights and data driven bandwidth selection
The weights assigned to each of the increments are crucial for the construction of the estimator, and the ideal weights are not feasible to compute in practice.
One may consider the following selection algorithm for the weights. The interval under consideration is divided into disjoint subintervals of equal length. Then, on each subinterval, a biased estimator of the ideal weights can be constructed.
It can be shown that this estimation procedure has good theoretical properties: when the number of subintervals is logarithmic in the sample size, it preserves, up to some logarithmic loss, the upper risk bound and the convergence rates which have been derived for the oracle estimator. However, in numerical examples, the practical performance of this estimator turns out to be unsatisfactory. For this reason, we cannot recommend the procedure for applications.
Instead, we propose an iterative selection algorithm for the weights. A preliminary estimator of the characteristic exponent is calculated using initial weights. Improved estimators of the weights are then obtained by plugging the preliminary estimator into the formula for the ideal weights. These improved weights are then applied to build a new estimator. The procedure is iterated until the distance between successive empirical weights is sufficiently small.
The algorithm can be summarized as follows.
(5.1) Initialize the weights.
(5.2) Estimate the characteristic exponent with the current weights and recompute the weights from the resulting plug-in estimates.
(5.3) Iterate step (5.2) until the distance between successive weights falls below a prescribed tolerance.
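A minimal Python sketch of this iteration follows, under the assumption that the ideal weights are proportional to $|\varphi_{\Delta_j}(u)|^2/\Delta_j$; this weight formula, the stopping rule and all identifiers are our own illustrative choices, not reproduced from the text.

```python
import numpy as np

def estimate_psi_prime(Z, deltas, W, u_grid):
    """One pass: weighted estimator of psi'(u) on the grid (rows of W)."""
    n = len(Z)
    psi_prime = np.zeros(len(u_grid), dtype=complex)
    for k, u in enumerate(u_grid):
        e = np.exp(1j * u * Z)
        F_hat = np.sum(W[k] * 1j * Z * e)
        Phi_hat = np.sum(W[k] * deltas * e)
        if abs(Phi_hat) >= n ** -0.5:         # regularized inverse
            psi_prime[k] = F_hat / Phi_hat
    return psi_prime

def iterative_weights(Z, deltas, u_grid, n_iter=10, tol=1e-4):
    """Iterative weight selection (sketch)."""
    n = len(Z)
    du = u_grid[1] - u_grid[0]
    i0 = int(np.argmin(np.abs(u_grid)))       # grid point closest to u = 0
    W = np.full((len(u_grid), n), 1.0 / n)    # (5.1): start from equal weights
    for _ in range(n_iter):
        psi_prime = estimate_psi_prime(Z, deltas, W, u_grid)
        # integrate Re psi' outward from u = 0, using psi(0) = 0
        re = np.real(psi_prime)
        re_psi = np.concatenate([
            -np.cumsum(re[:i0][::-1])[::-1],  # u < 0
            np.cumsum(re[i0:]),               # u >= 0
        ]) * du
        # (5.2): plug-in |phi_{Delta_j}(u)| = exp(Delta_j * Re psi(u))
        W_new = np.exp(2.0 * np.outer(re_psi, deltas)) / deltas
        W_new = W_new / W_new.sum(axis=1, keepdims=True)
        done = np.max(np.abs(W_new - W)) < tol
        W = W_new
        if done:                              # (5.3): stop when weights settle
            break
    return W
```

The plug-in step uses that $|\varphi_{\Delta_j}(u)| = \exp(\Delta_j \operatorname{Re}\psi(u))$, so one estimate of the characteristic exponent yields updated weights for all increments simultaneously.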