Predicting the Last Zero of a Spectrally Negative Lévy Process
Abstract
Last passage times arise in a number of areas of applied probability, including risk theory and degradation models. Such times are obviously not stopping times since they depend on the whole path of the underlying process. We consider the problem of finding a stopping time that minimises the distance to the last time a spectrally negative Lévy process is below zero. Examples of related problems in a finite horizon setting for processes with continuous paths are Du Toit et al. (2008) and Glover and Hulley (2014), where the last zero is predicted for a Brownian motion with drift, and for a transient diffusion, respectively.
As we consider the infinite horizon setting, the problem is interesting only when the Lévy process drifts to $\infty$, which we will assume throughout. Existing results allow us to rewrite the problem as a classical optimal stopping problem, i.e. one with an adapted payoff process. We use a direct method to show that an optimal stopping time is given by the first passage time above a level defined in terms of the median of the convolution with itself of the distribution function of $-\underline{X}_\infty$, the negative of the overall infimum of the process. We also characterise when continuous and/or smooth fit holds.
Keywords: Lévy processes, optimal prediction, optimal stopping.
Mathematics Subject Classification (2000): 60G40, 62M20
1 Introduction
In recent years last exit times have been studied in several areas of applied probability, e.g. in risk theory (see Chiu et al. (2005)). Consider the Cramér–Lundberg process, a process consisting of a deterministic drift plus a compound Poisson process with only negative jumps (see Figure 1), which typically models the capital of an insurance company. A key quantity of interest is the time of ruin $\tau_0^-$, i.e. the first time the process becomes negative. Suppose the insurance company has funds to endure negative capital for some time. Then another quantity of interest is $g$, the last time that the process is below zero. In a more general setting we may consider a spectrally negative Lévy process instead of the classical risk process. We refer to Chiu et al. (2005) and Baurdoux (2009) for the Laplace transform of the last time a spectrally negative Lévy process is below some level before an exponential time.
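As an aside, the Cramér–Lundberg dynamics just described can be simulated exactly, since between claims the path is a deterministic upward drift, so both the ruin time and the last up-crossing of zero can be located in closed form on each path. The following sketch is illustrative only; the parameters (premium rate `c`, claim arrival rate `lam`, Exp(`eta`) claim sizes) are assumptions, not taken from the paper.

```python
import random

def ruin_and_last_zero(x0, c, lam, eta, horizon, rng):
    """Simulate one path of the Cramer-Lundberg process
    X_t = x0 + c*t - (compound Poisson: rate lam, Exp(eta) claim sizes)
    and return (ruin_time, last_zero): the first time X goes below 0 and the
    last up-crossing of 0 observed along the simulated claims (None if the
    corresponding event does not occur).  Between claims the path drifts
    upward, so both times are found exactly, without time discretisation."""
    t, x = 0.0, x0
    ruin, last_zero = None, None
    while True:
        w = rng.expovariate(lam)              # waiting time until the next claim
        if x < 0.0 and x + c * w >= 0.0:
            last_zero = t + (-x) / c          # X creeps back up to 0 by drift
        if t + w > horizon:
            return ruin, last_zero
        t += w
        x += c * w - rng.expovariate(eta)     # drift up, then a negative jump
        if x < 0.0 and ruin is None:
            ruin = t
```

Note that ruin can only happen at a claim arrival and recovery only by creeping upwards, which is exactly the structural feature of spectrally negative processes used later in the paper.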
Last passage times also appear in financial modeling. In particular, Madan et al. (2008a, b) showed that the price of a European put and call option for certain nonnegative, continuous martingales can be expressed in terms of the probability distributions of last passage times.
Another application is in degradation models. Paroissin and Rabehasaina (2013) proposed a spectrally positive Lévy process as a degradation model: a subordinator perturbed by an independent Brownian motion. The Brownian motion can model small repairs of the component or system, while the jumps represent major deterioration. Classically, the failure time of a component or system is defined as the first hitting time of a critical level representing failure or bad performance. Another approach is to consider instead the last time that the process is below this critical level. Indeed, for this process the paths are not necessarily monotone, so even when the process is above the level it can still return below it later.
The main aim of this paper is to predict the last time a spectrally negative Lévy process is below zero. More specifically, we aim to find a stopping time that is closest (in the $L^1$ sense) to the above random time. This is an example of an optimal prediction problem. Recently, these problems have received considerable attention: for example, Bernyk et al. (2011) predicted the time at which a stable spectrally negative Lévy process attains its supremum in a finite time horizon, and a few years later the infinite horizon version was solved in Baurdoux and Van Schaik (2014) for a general Lévy process. Glover et al. (2013) predicted the time of the ultimate minimum of a transient diffusion process. Du Toit et al. (2008) predicted the last zero of a Brownian motion with drift, and Glover and Hulley (2014) predicted the last zero of a transient diffusion. It turns out that the problems just mentioned are equivalent to optimal stopping problems; in other words, optimal prediction problems and optimal stopping problems are intimately related.
2 Prerequisites and formulation of the problem
Formally, let $X = \{X_t, t \ge 0\}$ be a spectrally negative Lévy process drifting to infinity, i.e. $\lim_{t \to \infty} X_t = \infty$ a.s., defined on a filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$, where $\mathbb{F} = \{\mathcal{F}_t, t \ge 0\}$ is the filtration generated by $X$, naturally enlarged (see Definition 1.3.38 in Bichteler (2002)). Suppose that $X$ has Lévy triplet $(\mu, \sigma, \Pi)$, where $\mu \in \mathbb{R}$, $\sigma \ge 0$ and $\Pi$ is the so-called Lévy measure, concentrated on $(-\infty, 0)$ and satisfying $\int_{(-\infty,0)} (1 \wedge x^2)\,\Pi(dx) < \infty$. Then the characteristic exponent $\Psi(\theta) = -\log \mathbb{E}[e^{i\theta X_1}]$ takes the form
$$\Psi(\theta) = -i\mu\theta + \frac{\sigma^2}{2}\theta^2 + \int_{(-\infty,0)} \left(1 - e^{i\theta x} + i\theta x \mathbb{1}_{\{x > -1\}}\right) \Pi(dx).$$
Moreover, the Lévy–Itô decomposition states that $X$ can be represented as
$$X_t = \mu t + \sigma B_t + \int_{[0,t]} \int_{\{x < -1\}} x\, N(ds, dx) + \int_{[0,t]} \int_{\{-1 \le x < 0\}} x\, \left(N(ds, dx) - ds\,\Pi(dx)\right),$$
where $B = \{B_t, t \ge 0\}$ is a standard Brownian motion, $N$ is a Poisson random measure on $[0,\infty) \times (-\infty,0)$ with intensity $ds \times \Pi(dx)$, and the compensated integral over the small jumps is a square-integrable martingale. Furthermore, it can be shown that all Lévy processes satisfy the strong Markov property.
Let $W^{(q)}$ and $Z^{(q)}$, $q \ge 0$, be the scale functions corresponding to the process $X$ (see Kyprianou (2014) or Bertoin (1998) for more details). That is, $W^{(q)}$ is such that $W^{(q)}(x) = 0$ for $x < 0$, and on $[0,\infty)$ it is characterised as the strictly increasing and continuous function whose Laplace transform satisfies
$$\int_0^\infty e^{-\beta x} W^{(q)}(x)\,dx = \frac{1}{\psi(\beta) - q}, \qquad \beta > \Phi(q),$$
and
$$Z^{(q)}(x) = 1 + q \int_0^x W^{(q)}(y)\,dy,$$
where $\psi$ and $\Phi$ are, respectively, the Laplace exponent and its right inverse, given by
$$\psi(\lambda) = \log \mathbb{E}\left[e^{\lambda X_1}\right] \qquad \text{and} \qquad \Phi(q) = \sup\{\lambda \ge 0 : \psi(\lambda) = q\}$$
for $\lambda, q \ge 0$. When $q = 0$ we simply write $W = W^{(0)}$ and $Z = Z^{(0)}$.
Note that $\psi$ is zero at zero and tends to infinity at infinity. Moreover, it is infinitely differentiable and strictly convex, with $\psi'(0+) = \mathbb{E}[X_1] > 0$ (since $X$ drifts to infinity). The latter directly implies that $\Phi(0) = 0$.
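For concreteness, in the Cramér–Lundberg case with premium rate $c$, claim rate $\lambda_0$ and Exp($\eta$) claim sizes, the Laplace exponent is $\psi(\lambda) = c\lambda - \lambda_0 \lambda/(\eta + \lambda)$, and under the net-profit condition $c > \lambda_0/\eta$ it is strictly increasing on $[0,\infty)$, so the right inverse $\Phi$ is an ordinary inverse and can be computed by bisection. The sketch below is an illustration under these assumptions, not part of the paper.

```python
def psi(lmbda, c, lam, eta):
    """Laplace exponent of X_t = c*t - (compound Poisson: rate lam, Exp(eta)
    claim sizes): psi(lmbda) = c*lmbda - lam*lmbda/(eta + lmbda), lmbda >= 0."""
    return c * lmbda - lam * lmbda / (eta + lmbda)

def phi(q, c, lam, eta, tol=1e-12):
    """Right inverse of psi by bisection.  Under the net-profit condition
    c > lam/eta, psi is strictly increasing on [0, inf), so Phi(q) is the
    unique nonnegative root of psi(lmbda) = q for q >= 0."""
    lo, hi = 0.0, 1.0
    while psi(hi, c, lam, eta) < q:    # grow the bracket until psi(hi) >= q
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid, c, lam, eta) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with $c = 2$, $\lambda_0 = \eta = 1$ one has $\psi'(0+) = 1 > 0$, so `phi(0.0, 2.0, 1.0, 1.0)` returns (numerically) zero, matching $\Phi(0) = 0$.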
We know that the right and left derivatives of $W$ exist (see Kyprianou (2014), Lemma 8.2). For ease of notation we shall assume that $\Pi$ has no atoms when $X$ is of finite variation, which guarantees that $W$ is continuously differentiable on $(0,\infty)$. Moreover, for every $x \ge 0$ the function $q \mapsto W^{(q)}(x)$ is an analytic function on $\mathbb{C}$.
If $X$ is of finite variation we may write
$$\psi(\lambda) = \delta \lambda - \int_{(-\infty,0)} (1 - e^{\lambda x})\,\Pi(dx),$$
where necessarily
$$\delta := \mu - \int_{(-1,0)} x\,\Pi(dx) > 0.$$
With this notation, from the fact that $e^{\lambda x} \to 0$ as $\lambda \to \infty$ for $x < 0$ and using the dominated convergence theorem, we have that
$$\lim_{\lambda \to \infty} \frac{\psi(\lambda)}{\lambda} = \delta. \qquad (1)$$
For all $q \ge 0$, the function $W^{(q)}$ may have a discontinuity at zero, and this depends on the path variation of $X$: in the case that $X$ is of infinite variation we have $W^{(q)}(0) = 0$, otherwise
$$W^{(q)}(0) = \frac{1}{\delta}. \qquad (2)$$
There are many important fluctuation identities in terms of the scale functions $W^{(q)}$ and $Z^{(q)}$ (see Bertoin (1998), Chapter VII, or Kyprianou (2014), Chapter 8). We mention some of them that will be useful later on. Denote by $\tau_0^-$ the first time the process is below zero, i.e.
$$\tau_0^- = \inf\{t > 0 : X_t < 0\}.$$
We then have, for $q \ge 0$ and $x \in \mathbb{R}$,
$$\mathbb{E}_x\left[e^{-q \tau_0^-} \mathbb{1}_{\{\tau_0^- < \infty\}}\right] = Z^{(q)}(x) - \frac{q}{\Phi(q)} W^{(q)}(x), \qquad (3)$$
where $\mathbb{P}_x$ denotes the law of $X$ started from $x \in \mathbb{R}$ (with $\mathbb{E}_x$ the corresponding expectation), and $q/\Phi(q)$ is understood in the limiting sense $\psi'(0+)$ when $q = 0$.
Let us define the $q$-potential measure of $X$ killed on exiting $[0,\infty)$, for $q \ge 0$ and $x, y \ge 0$, as follows:
$$R^{(q)}(x, dy) = \int_0^\infty e^{-qt}\, \mathbb{P}_x\left(X_t \in dy,\, t < \tau_0^-\right) dt.$$
The potential measure has a density $r^{(q)}(x, y)$ (see Kuznetsov et al. (2011), Theorem 2.7, for details), which is given by
$$r^{(q)}(x, y) = e^{-\Phi(q) y}\, W^{(q)}(x) - W^{(q)}(x - y). \qquad (4)$$
In particular, the case $q = 0$ will be useful later. Another pair of processes that will be useful later on are the running supremum and running infimum, defined by
$$\overline{X}_t = \sup_{0 \le s \le t} X_s \qquad \text{and} \qquad \underline{X}_t = \inf_{0 \le s \le t} X_s.$$
The well-known duality lemma states that the pairs $(\overline{X}_t, \overline{X}_t - X_t)$ and $(X_t - \underline{X}_t, -\underline{X}_t)$ have the same distribution under the measure $\mathbb{P}$. Moreover, with $e_q$ an independent exponentially distributed random variable with parameter $q > 0$, we deduce from the Wiener–Hopf factorisation that the random variables $\overline{X}_{e_q}$ and $\overline{X}_{e_q} - X_{e_q}$ are independent. Furthermore, in the spectrally negative case, $\overline{X}_{e_q}$ is exponentially distributed with parameter $\Phi(q)$. From the theory of scale functions we can also deduce that $-\underline{X}_{e_q}$ is a continuous random variable on $(0, \infty)$ with
$$\mathbb{P}\left(-\underline{X}_{e_q} \le x\right) = \frac{q}{\Phi(q)} W^{(q)}(x) - q \int_0^x W^{(q)}(y)\,dy \qquad (5)$$
for $x \ge 0$.
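The law of the overall infimum can be sanity-checked numerically in a case where everything is explicit. For the Cramér–Lundberg process with Exp($\eta$) claims, the classical Cramér formula gives $\mathbb{P}(-\underline{X}_\infty \le x) = 1 - \frac{\lambda_0}{c\eta} e^{-(\eta - \lambda_0/c)x}$. The sketch below (illustrative assumptions throughout; the finite horizon stands in for $t = \infty$) estimates the left-hand side by exact path simulation, exploiting that between claims the path increases, so the infimum is attained immediately after a claim.

```python
import math
import random

def running_infimum(c, lam, eta, horizon, rng):
    """Infimum over [0, horizon] of X_t = c*t - compound Poisson(lam, Exp(eta)),
    started at 0.  Between claims the path increases, so the infimum is
    attained either at time 0 or immediately after some claim."""
    t, x, inf_x = 0.0, 0.0, 0.0
    while True:
        w = rng.expovariate(lam)
        if t + w > horizon:
            return inf_x
        t += w
        x += c * w - rng.expovariate(eta)
        inf_x = min(inf_x, x)

rng = random.Random(2024)
c, lam, eta = 2.0, 1.0, 1.0          # net-profit condition: c > lam/eta
n, x_level = 10000, 1.0
hits = sum(-running_infimum(c, lam, eta, 150.0, rng) <= x_level
           for _ in range(n))
empirical = hits / n
# Cramer's formula for exponential claims (an atom at 0 plus an
# exponential tail): P(-inf X <= x) = 1 - (lam/(c*eta)) * exp(-(eta - lam/c)*x)
exact = 1.0 - (lam / (c * eta)) * math.exp(-(eta - lam / c) * x_level)
```

The truncation at a finite horizon is harmless here because the process has drifted far above zero by then, so new record minima are overwhelmingly unlikely.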
Denote by $g_r$ the last passage time below $r \in \mathbb{R}$, i.e.
$$g_r = \sup\{t \ge 0 : X_t \le r\}. \qquad (6)$$
When $r = 0$ we simply write $g = g_0$.
Remark 2.1.
Note that from the fact that $X$ drifts to infinity we have that $g < \infty$ a.s. Moreover, as $X$ is a spectrally negative Lévy process, and hence the case of a compound Poisson process is excluded, the only way of exiting the set $(-\infty, 0]$ is by creeping upwards. This tells us that $X$ leaves zero continuously at $g$, i.e. $X_g = 0$ on $\{0 < g < \infty\}$, and that $X_t > 0$ for all $t > g$ a.s.
Clearly, up to any time $t \ge 0$ the value of $g$ is unknown (unless $X$ is trivial), and it is only with the realisation of the whole process that we know that the last passage time below zero has occurred. However, this is often too late: typically one would like to know how close the current time is to $g$ at any time $t \ge 0$ and then take some action based on this information. We search for a stopping time $\tau$ of $X$ that is as “close” as possible to $g$. Consider the optimal prediction problem
$$V_* = \inf_{\tau \in \mathcal{T}} \mathbb{E}|g - \tau|, \qquad (7)$$
where $\mathcal{T}$ is the set of all stopping times.
3 Main result
Before giving an equivalence between the optimal prediction problem (7) and an optimal stopping problem, we prove that the random times $g_r$, $r \in \mathbb{R}$, have finite mean. For this purpose, let $\tau_a^+$ be the first passage time above $a \in \mathbb{R}$, i.e.
$$\tau_a^+ = \inf\{t > 0 : X_t > a\}.$$
Lemma 3.1.
Let $X$ be a spectrally negative Lévy process drifting to infinity with Lévy measure $\Pi$ such that
$$\int_{(-\infty,-1]} x^2\,\Pi(dx) < \infty. \qquad (8)$$
Then $\mathbb{E}_x[g_r] < \infty$ for every $x, r \in \mathbb{R}$.
Proof.
Note that by the spatial homogeneity of Lévy processes it suffices to prove that $\mathbb{E}[g_r] < \infty$ for all $r \in \mathbb{R}$, and in fact it suffices to take $r = 0$. From Baurdoux (2009) (Theorem 1) or Chiu et al. (2005) (Theorem 3.1) we know that for a spectrally negative Lévy process such that $\psi'(0+) > 0$, the Laplace transform of $g$ is given, for $q > 0$, by
$$\mathbb{E}\left[e^{-q g}\right] = \psi'(0+)\,\frac{\Phi(q)}{q}.$$
Then, from the well-known result which links the moments of a random variable with the derivatives of its Laplace transform (see Feller (1971), Section XIII.2), the expectation of $g$ is given by
$$\mathbb{E}[g] = -\lim_{q \downarrow 0} \frac{d}{dq}\, \psi'(0+)\,\frac{\Phi(q)}{q}.$$
We know that for any $q > 0$ the function $\Phi$ is analytic; therefore the expression above is finite provided $\Phi'(0+)$ and $\Phi''(0+)$ are finite. Recall that the function $\psi$ is zero at zero and tends to infinity at infinity. Further, it is infinitely differentiable and strictly convex on $(0, \infty)$. Since $X$ drifts to infinity we have that $\psi'(\lambda) > 0$ for any $\lambda \ge 0$. We deduce that $\psi$ is strictly increasing on $[0, \infty)$ and the right inverse $\Phi$ is the usual inverse for $q \ge 0$. From the fact that $\psi$ is strictly convex we have that $\psi'(\Phi(q)) \ge \psi'(0+) > 0$ for all $q \ge 0$.
We then compute
$$\Phi'(q) = \frac{1}{\psi'(\Phi(q))}$$
and
$$\Phi''(q) = -\frac{\psi''(\Phi(q))}{\psi'(\Phi(q))^3}.$$
From the Lévy–Itô decomposition of $X$ we know that
$$\psi''(0+) = \sigma^2 + \int_{(-\infty,0)} x^2\,\Pi(dx) < \infty,$$
where the last inequality holds by assumption (8) and from the fact that $\int_{(-1,0)} x^2\,\Pi(dx) < \infty$ since $\Pi$ is a Lévy measure. Then we have that $\Phi'(0+)$ and $\Phi''(0+)$ are finite and hence $\mathbb{E}[g_r] < \infty$ for all $r \in \mathbb{R}$. ∎
Now we are ready to state the equivalence between the optimal prediction problem and an optimal stopping problem mentioned earlier. This equivalence is mainly based on the work of Urusov (2005).
Lemma 3.2.
Consider the standard optimal stopping problem
$$V = \inf_{\tau \in \mathcal{T}} \mathbb{E}\left[\int_0^\tau G(X_s)\,ds\right], \qquad (9)$$
where the function $G$ is given by $G(x) = 2\psi'(0+)W(x) - 1$ for $x \in \mathbb{R}$. Then the stopping time which minimises (7) is the same as the one which minimises (9). In particular,
$$V_* = \mathbb{E}[g] + V. \qquad (10)$$
Proof.
Fix any stopping time $\tau$ of $X$. We then have
$$\mathbb{E}|g - \tau| = \mathbb{E}\left[\int_0^\tau \mathbb{1}_{\{g \le s\}}\,ds + \int_\tau^\infty \mathbb{1}_{\{g > s\}}\,ds\right] = \mathbb{E}[g] + \mathbb{E}\left[\int_0^\tau \left(2\,\mathbb{1}_{\{g \le s\}} - 1\right) ds\right].$$
From Fubini's Theorem we have
$$\mathbb{E}\left[\int_0^\tau \left(2\,\mathbb{1}_{\{g \le s\}} - 1\right) ds\right] = \int_0^\infty \mathbb{E}\left[\mathbb{1}_{\{s < \tau\}}\left(2\,\mathbb{P}(g \le s \mid \mathcal{F}_s) - 1\right)\right] ds.$$
Note that due to Remark 2.1, the event $\{g \le s\}$ is equal to $\{\inf_{u \ge s} X_u > 0\}$ (up to a null set). Hence, since $\{s < \tau\}$ is $\mathcal{F}_s$-measurable, it suffices to compute $\mathbb{P}(\inf_{u \ge s} X_u > 0 \mid \mathcal{F}_s)$. From the Markov property for Lévy processes we have that $\widetilde{X}_u = X_{s+u} - X_s$, $u \ge 0$, is a Lévy process with the same law as $X$, independent of $\mathcal{F}_s$. We therefore find that
$$\mathbb{P}\left(\inf_{u \ge s} X_u > 0 \,\Big|\, \mathcal{F}_s\right) = \mathbb{P}\left(-\underline{\widetilde{X}}_\infty < y\right)\Big|_{y = X_s},$$
where $\underline{\widetilde{X}}_\infty = \inf_{u \ge 0} \widetilde{X}_u$. Note that the event $\{-\underline{\widetilde{X}}_\infty < y\}$ is equal to $\{\tau_0^- = \infty\}$ under $\mathbb{P}_y$ (up to a null set). Hence, by the spatial homogeneity of Lévy processes,
$$\mathbb{P}\left(-\underline{\widetilde{X}}_\infty < y\right) = \mathbb{P}_y(\tau_0^- = \infty) = \psi'(0+)W(y),$$
where the last equality holds by identity (3) (taking $q \downarrow 0$) and the fact that $-\underline{X}_\infty$ has no atoms on $(0, \infty)$. Therefore,
$$2\,\mathbb{P}(g \le s \mid \mathcal{F}_s) - 1 = 2\psi'(0+)W(X_s) - 1 = G(X_s).$$
Hence,
$$\mathbb{E}|g - \tau| = \mathbb{E}[g] + \mathbb{E}\left[\int_0^\tau G(X_s)\,ds\right].$$
∎
To find the solution of the optimal stopping problem (9) we will embed it into a family of optimal stopping problems for the strong Markov process $X$ with arbitrary starting value $x \in \mathbb{R}$. Specifically, we define the value function $V$ as
$$V(x) = \inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left[\int_0^\tau G(X_s)\,ds\right], \qquad x \in \mathbb{R}. \qquad (11)$$
Thus, $V = V(0)$.
Remark 3.3.
Note that the distribution function of $-\underline{X}_\infty$ is given by
$$F(x) := \mathbb{P}\left(-\underline{X}_\infty \le x\right) = \psi'(0+)\,W(x), \qquad x \in \mathbb{R}.$$
Hence the function $G$ can be written in terms of $F$ as $G(x) = 2F(x) - 1$.
Let us now give some intuition about the optimal stopping problem (11). For this, define $x_0$ as the lowest value for which $G$ is nonnegative, i.e.
$$x_0 = \inf\{x \in \mathbb{R} : G(x) \ge 0\}. \qquad (12)$$
We know that $W$ is continuous and strictly increasing on $[0, \infty)$ and vanishes on $(-\infty, 0)$. Moreover, we have that $\lim_{x \to \infty} F(x) = 1$ (since $F$ is a distribution function). As a consequence, $G$ is a strictly increasing and continuous function on $[0, \infty)$ such that $G(x) = -1$ for $x < 0$ and $\lim_{x \to \infty} G(x) = 1$. In the same way as $W$, the function $G$ may have a discontinuity at zero, depending on the path variation of $X$. From the fact that $G(x) = -1$ for $x < 0$ and the definition of $x_0$ given in (12), we have that $x_0 \ge 0$.
The above observations tell us that, to solve the optimal stopping problem (11), we are interested in a stopping time such that, before stopping, the process spends most of its time at values where $G$ is negative, taking into account that $X$ can spend some time in the set $[x_0, \infty)$ and then return to the set $(-\infty, x_0)$.
It therefore seems reasonable to think that a stopping time which attains the infimum in (11) is of the form
$$\tau_a^+ = \inf\{t > 0 : X_t > a\}$$
for some $a \ge x_0$.
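The threshold intuition can be probed numerically. The sketch below (illustrative only; the Cramér–Lundberg parameters and the comparison levels are assumptions, not taken from the paper) Monte Carlo-estimates the expected distance between a first-passage rule over a level $a$ and the last zero, for the exponential-claims example: a far-too-high threshold is clearly penalised, while stopping near zero is competitive.

```python
import random

def mean_abs_gap(a, c, lam, eta, horizon, n, rng):
    """Monte Carlo estimate of E|tau_a - g| for the Cramer-Lundberg process
    X_t = c*t - compound Poisson(lam, Exp(eta)), started at 0, where tau_a is
    the first passage above level a and g is the last zero.  Upward movement
    is by drift only, so both times are located exactly on each path."""
    total = 0.0
    for _ in range(n):
        t, x = 0.0, 0.0
        tau = 0.0 if a <= 0.0 else None
        g = 0.0                                   # X_0 = 0, so g >= 0
        while t < horizon:
            w = rng.expovariate(lam)
            if tau is None and x + c * w >= a:
                tau = t + (a - x) / c             # drift creeps up to level a
            if x < 0.0 and x + c * w >= 0.0:
                g = t + (-x) / c                  # recovery crossing of 0
            t += w
            x += c * w - rng.expovariate(eta)
        total += abs((tau if tau is not None else horizon) - g)
    return total / n
```

With drift well above the mean claim inflow, first passage over a large level happens long after the last zero, so its expected distance grows roughly linearly in the level, whereas small thresholds stay close to $g$.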
The following theorem is the main result of this work. It confirms the intuition above and links the optimal stopping level with the median of the convolution with itself of the distribution function $F$ of $-\underline{X}_\infty$.
Theorem 3.4.
Suppose that $X$ is a spectrally negative Lévy process drifting to infinity with Lévy measure $\Pi$ satisfying
$$\int_{(-\infty,-1]} x^2\,\Pi(dx) < \infty.$$
Then there exists some $a^* \ge 0$ such that an optimal stopping time in (11) is given by
$$\tau_{a^*}^+ = \inf\{t > 0 : X_t > a^*\}.$$
The optimal stopping level $a^*$ is defined by
$$a^* = \inf\{x \ge 0 : F * F(x) \ge 1/2\}, \qquad (13)$$
where $F * F$ is the convolution of $F$ with itself, i.e.,
$$F * F(x) = \int_{[0,\infty)} F(x - y)\,dF(y).$$
Furthermore, $V$ is a nondecreasing, continuous function satisfying the following:
(i) If $X$ is of infinite variation, or of finite variation with
$$\psi'(0+) < \frac{\delta}{\sqrt{2}}, \qquad (14)$$
then $a^* > 0$ is the median of the distribution function $F * F$, i.e. $a^*$ is the unique value which satisfies the equation
$$F * F(a^*) = \frac{1}{2}. \qquad (15)$$
The value function is given by
$$V(x) = \int_{-\infty}^{a^*} G(y)\left[W(a^* - y) - W(x - y)\right] dy, \qquad x \le a^*. \qquad (16)$$
Moreover, there is smooth fit at $a^*$, i.e. $V'(a^*-) = V'(a^*+) = 0$.
(ii) If $X$ is of finite variation with $\psi'(0+) \ge \delta/\sqrt{2}$, then $a^* = 0$ and
$$V(x) = \int_{-\infty}^{0} G(y)\left[W(-y) - W(x - y)\right] dy, \qquad x \le 0.$$
In particular, there is continuous fit at $a^* = 0$, i.e. $\lim_{x \uparrow 0} V(x) = V(0) = 0$, and there is no smooth fit at $0$, i.e. $\lim_{x \uparrow 0} V'(x) < 0 = V'(0+)$.
Remark 3.5.

Note that since $F$ corresponds to the distribution function of $-\underline{X}_\infty$, $F * F$ can be interpreted as the distribution function of $-\underline{X}_\infty - \underline{\widetilde{X}}_\infty$, where $\underline{\widetilde{X}}_\infty$ is the overall infimum of an independent copy $\widetilde{X}$ of $X$. Moreover, $F * F$ can be written in terms of scale functions as
$$F * F(x) = \psi'(0+)^2 \int_{[0,x]} W(x - y)\,W(dy), \qquad (17)$$
and then equation (15) reads
$$\int_{[0,a^*]} W(a^* - y)\,W(dy) = \frac{1}{2\,\psi'(0+)^2}.$$
Using Fubini’s Theorem the value function takes the form
(18) 
Note that in the case that $X$ is of finite variation, the condition $\psi'(0+) \ge \delta/\sqrt{2}$ is equivalent to $\int_{(-\infty,0)} |x|\,\Pi(dx) \le \delta(1 - 1/\sqrt{2})$ (since $\psi'(0+) = \delta - \int_{(-\infty,0)} |x|\,\Pi(dx)$ and $\delta > 0$), so the condition given in (ii) tells us that the drift is much larger than the average size of the jumps. This implies that the process drifts quickly to infinity, and then we have to stop the first time that the process is above zero. In this case, concerning the optimal prediction problem, the stopping time which is nearest (in the $L^1$ sense) to the last time that the process is below zero is the first time that the process is above the level zero.

If $X$ is of finite variation with $\psi'(0+) < \delta/\sqrt{2}$, then the average size of the jumps of $X$ is sufficiently large that, when the process crosses above the level zero, it is more likely (than in the case above) to jump below zero again and spend more time in the region where $G$ is negative. This condition also tells us that the process drifts to infinity a little more slowly than in the previous case. The stopping time which is nearest (in the $L^1$ sense) to the last time that the process is below zero is the first time that the process is above the strictly positive level $a^*$.
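In the exponential-claims example the level defined in (13) can be computed in closed form plus a one-dimensional root search: the law of $-\underline{X}_\infty$ is a mixture of an atom at $0$ with mass $1 - q$ and an Exp($\rho$) variable with mass $q$, where $q = \lambda_0/(c\eta)$ and $\rho = \eta - \lambda_0/c$, so $F*F$ is the law of the sum of two independent such mixtures. The sketch below is illustrative; the mixture representation and the parameters are assumptions for this special case, not a formula from the paper.

```python
import math

def conv_cdf(a, q, rho):
    """(F*F)(a) for F the mixture: atom of mass 1-q at 0 plus q * Exp(rho).
    For the Cramer-Lundberg process with exponential claims this is the law
    of the sum of two independent copies of -inf X, with q = lam/(c*eta)
    and rho = eta - lam/c."""
    p = 1.0 - q
    expo = 1.0 - math.exp(-rho * a)                       # Exp(rho) cdf
    gamma2 = 1.0 - math.exp(-rho * a) * (1.0 + rho * a)   # Gamma(2, rho) cdf
    return p * p + 2.0 * p * q * expo + q * q * gamma2

def optimal_level(q, rho, tol=1e-10):
    """Smallest a >= 0 with (F*F)(a) >= 1/2 -- the level in equation (13),
    found by bisection since conv_cdf is nondecreasing in a."""
    if conv_cdf(0.0, q, rho) >= 0.5:
        return 0.0
    lo, hi = 0.0, 1.0
    while conv_cdf(hi, q, rho) < 0.5:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if conv_cdf(mid, q, rho) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_star = optimal_level(0.5, 0.5)   # e.g. c=2, lam=1, eta=1 gives q = rho = 0.5
```

Here $(F*F)(0) = (1-q)^2$, so the level is strictly positive exactly when the atom of the convolution at zero has mass below one half, mirroring the dichotomy in the theorem.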
4 Proof of Main Result
In this section we prove Theorem 3.4 using a direct method based on the general theory of optimal stopping (see Peskir and Shiryaev (2006)). Since the proof is rather long, we break it into a number of lemmas. First, using the Snell envelope, we show that an optimal stopping time for (11) is given by the first time that the process enters a stopping set $D$, defined in terms of the value function $V$. Recall that $\mathcal{T}$ denotes the set of all stopping times.
The next lemma is standard in optimal stopping theory and we include the proof for completeness.
Lemma 4.1.
Denoting by $D = \{x \in \mathbb{R} : V(x) = 0\}$ the stopping set, we have that for any $x \in \mathbb{R}$ the stopping time
$$\tau_D = \inf\{t \ge 0 : X_t \in D\}$$
attains the infimum in (11), i.e. $V(x) = \mathbb{E}_x\left[\int_0^{\tau_D} G(X_s)\,ds\right]$.
Proof.
From the general theory of optimal stopping, consider the Snell envelope
$$S_t = \operatorname*{ess\,inf}_{\tau \in \mathcal{T},\, \tau \ge t} \mathbb{E}\left[\int_0^\tau G(X_s + x)\,ds \,\Big|\, \mathcal{F}_t\right]$$
and define the stopping time
$$\tau^* = \inf\left\{t \ge 0 : S_t = \int_0^t G(X_s + x)\,ds\right\}.$$
Then we have that the stopping time $\tau^*$ is optimal for
$$V(x) = \inf_{\tau \in \mathcal{T}} \mathbb{E}\left[\int_0^\tau G(X_s + x)\,ds\right]. \qquad (19)$$
On account of the Markov property we have
$$S_t = \int_0^t G(X_s + x)\,ds + \inf_{\tau \in \mathcal{T}} \mathbb{E}_{X_t + x}\left[\int_0^\tau G(X_s)\,ds\right] = \int_0^t G(X_s + x)\,ds + V(X_t + x),$$
where the last equality follows from the spatial homogeneity of Lévy processes and from the definition of $V$. Therefore $\tau^* = \inf\{t \ge 0 : V(X_t + x) = 0\}$. So we have
$$\tau^* = \inf\{t \ge 0 : X_t + x \in D\}.$$
Thus
$$V(x) = \inf_{\tau \in \mathcal{T}} \mathbb{E}\left[\int_0^\tau G(X_s + x)\,ds\right] = \mathbb{E}\left[\int_0^{\tau^*} G(X_s + x)\,ds\right] = \mathbb{E}_x\left[\int_0^{\tau_D} G(X_s)\,ds\right],$$
where the second equality holds since $\tau^*$ is optimal for (19) and the third follows from the spatial homogeneity of Lévy processes. Therefore the stopping time $\tau_D$ is optimal in (11) for all $x \in \mathbb{R}$. ∎
Next, we will prove that $V(x)$ is finite for all $x \in \mathbb{R}$, which implies that there exists a stopping time such that the infimum in (11) is attained. Recall the definition of $x_0$ in (12).
Lemma 4.2.
The function $V$ is nondecreasing and satisfies $-\mathbb{E}_x[g_{x_0}] \le V(x) \le 0$ for all $x \in \mathbb{R}$. In particular, $V(x) > -\infty$ for any $x \in \mathbb{R}$.
Proof.
From the spatial homogeneity of Lévy processes,
$$V(x) = \inf_{\tau \in \mathcal{T}} \mathbb{E}\left[\int_0^\tau G(X_s + x)\,ds\right].$$
Then, if $x \le y$, we have $G(X_s + x) \le G(X_s + y)$, since $G$ is a nondecreasing function (see the discussion before Theorem 3.4). This implies that $V(x) \le V(y)$, so $V$ is nondecreasing as claimed. If we take the stopping time $\tau \equiv 0$, then for any $x \in \mathbb{R}$ we have $V(x) \le 0$.
Now we will see that $V(x) > -\infty$ for all $x \in \mathbb{R}$. Note that $G(y) \ge -1$ holds for all $y \in \mathbb{R}$, and that $G(X_s + x) \ge 0$ for $s > g_{x_0 - x}$, since then $X_s + x > x_0$. Thus, for any stopping time $\tau \in \mathcal{T}$,
$$\mathbb{E}\left[\int_0^\tau G(X_s + x)\,ds\right] \ge -\mathbb{E}\left[\int_0^\infty \mathbb{1}_{\{s \le g_{x_0 - x}\}}\,ds\right] = -\mathbb{E}[g_{x_0 - x}] = -\mathbb{E}_x[g_{x_0}].$$
From Lemma 3.1 we have that $\mathbb{E}_x[g_{x_0}] < \infty$. Hence $V(x) \ge -\mathbb{E}_x[g_{x_0}] > -\infty$ for all $x \in \mathbb{R}$.
∎
Next, we derive some properties of $V$ which will be useful to find the form of the set $D$.
Lemma 4.3.
The set $D$ is nonempty. Moreover, there exists an $x^* \in \mathbb{R}$ such that $[x^*, \infty) \subset D$.
Proof.
Let $m$ be the median of $F$, i.e.
$$m = \inf\{x \in \mathbb{R} : F(x) \ge 1/2\},$$
and let $g_m$ be the last time that the process is below the level $m$, as defined in (6). Then, for any $\tau \in \mathcal{T}$,
$$\mathbb{E}\left[\int_0^\tau G(X_s + x)\,ds\right] = \mathbb{E}\left[\int_0^\tau G(X_s + x)\,\mathbb{1}_{\{s \le g_{m - x}\}}\,ds\right] + \mathbb{E}\left[\int_0^\tau G(X_s + x)\,\mathbb{1}_{\{s > g_{m - x}\}}\,ds\right]. \qquad (20)$$
Note that from the fact that $g_{m - x}$ is finite and has finite expectation (see Lemma 3.1), the first term on the right-hand side of (20) is finite. Now we analyse the second term on the right-hand side of (20). With $\tau \in \mathcal{T}$, since $G(X_s + x)$ is nonnegative for all $s > g_{m - x}$, we have