# Predicting the Last Zero of a Spectrally Negative Lévy Process

Erik J. Baurdoux (Department of Statistics, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, United Kingdom; e.j.baurdoux@lse.ac.uk) and J. M. Pedraza (Department of Statistics, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, United Kingdom; j.m.pedraza-ramirez@lse.ac.uk)
July 14, 2019
###### Abstract

Last passage times arise in a number of areas of applied probability, including risk theory and degradation models. Such times are obviously not stopping times since they depend on the whole path of the underlying process. We consider the problem of finding a stopping time that minimises the $L^1$-distance to the last time a spectrally negative Lévy process is below zero. Examples of related problems in a finite-horizon setting for processes with continuous paths are Du Toit et al. (2008) and Glover and Hulley (2014), where the last zero is predicted for a Brownian motion with drift and for a transient diffusion, respectively.

As we consider the infinite-horizon setting, the problem is interesting only when the Lévy process drifts to $+\infty$, which we will assume throughout. Existing results allow us to rewrite the problem as a classical optimal stopping problem, i.e. one with an adapted payoff process. We use a direct method to show that an optimal stopping time is given by the first passage time above a level defined in terms of the median of the convolution with itself of the distribution function of the negative of the ultimate infimum of the process. We also characterise when continuous and/or smooth fit holds.

Keywords: Lévy processes, optimal prediction, optimal stopping.

Mathematics Subject Classification (2000): 60G40, 62M20

## 1 Introduction

In recent years last exit times have been studied in several areas of applied probability, e.g. in risk theory (see Chiu et al. (2005)). Consider the Cramér–Lundberg process, a deterministic drift plus a compound Poisson process with only negative jumps (see Figure 1), which typically models the capital of an insurance company. A key quantity of interest is the time of ruin, i.e. the first time the process becomes negative. Suppose the insurance company has funds to endure negative capital for some time. Then another quantity of interest is the last time that the process is below zero. In a more general setting we may consider a spectrally negative Lévy process instead of the classical risk process. We refer to Chiu et al. (2005) and Baurdoux (2009) for the Laplace transform of the last time a spectrally negative Lévy process is below some level before an exponential time.

Last passage times also appear in financial modeling. In particular, Madan et al. (2008a, b) showed that the price of a European put and call option for certain non-negative, continuous martingales can be expressed in terms of the probability distributions of last passage times.

Another application is in degradation models. Paroissin and Rabehasaina (2013) proposed a spectrally positive Lévy process as a degradation model: a subordinator perturbed by an independent Brownian motion. The Brownian motion can model small repairs of the component or system, while the jumps represent major deterioration. Classically, the failure time of a component or system is defined as the first hitting time of a critical level which represents failure or bad performance. Another approach is to consider instead the last time that the process is below that critical level. Indeed, for this process the paths are not necessarily monotone, so after crossing above the level the process can return below it later.

The main aim of this paper is to predict the last time a spectrally negative Lévy process is below zero. More specifically, we aim to find a stopping time that is closest (in the $L^1$ sense) to the above random time. This is an example of an optimal prediction problem. Recently, these problems have received considerable attention: for example, Bernyk et al. (2011) predicted the time at which a stable spectrally negative Lévy process attains its supremum in a finite time horizon, and the infinite-horizon version was solved a few years later in Baurdoux and Van Schaik (2014) for a general Lévy process. Glover et al. (2013) predicted the time of the ultimate minimum of a transient diffusion process. Du Toit et al. (2008) predicted the last zero of a Brownian motion with drift, and Glover and Hulley (2014) predicted the last zero of a transient diffusion. It turns out that the problems just mentioned are equivalent to optimal stopping problems; in other words, optimal prediction problems and optimal stopping problems are intimately related.

The rest of this paper is organised as follows. In Section 2 we discuss some preliminaries and technicalities to be used later. Section 3 concerns the main result, Theorem 3.4. Section 4 is then dedicated to the proof of Theorem 3.4. In the final section we consider specific examples.

## 2 Prerequisites and formulation of the problem

Formally, let $X$ be a spectrally negative Lévy process drifting to infinity, i.e. $\lim_{t\to\infty} X_t = \infty$ a.s., starting from $0$, defined on a filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$, where $\mathbb{F} = \{\mathcal{F}_t, t\ge0\}$ is the filtration generated by $X$, naturally enlarged (see Definition 1.3.38 in Bichteler (2002)). Suppose that $X$ has Lévy triplet $(c, \sigma, \Pi)$, where $c \in \mathbb{R}$, $\sigma \ge 0$ and $\Pi$ is the so-called Lévy measure, concentrated on $(-\infty,0)$ and satisfying $\int_{(-\infty,0)} (1 \wedge x^2)\,\Pi(dx) < \infty$. Then the characteristic exponent $\Psi(\theta) = -\log E(e^{i\theta X_1})$ takes the form

$$\Psi(\theta) = ic\theta + \frac{1}{2}\sigma^2\theta^2 + \int_{(-\infty,0)} \bigl(1 - e^{i\theta x} + i\theta x\,\mathbb{I}_{\{x > -1\}}\bigr)\,\Pi(dx).$$

Moreover, the Lévy–Itô decomposition states that $X$ can be represented as

$$X_t = -ct + \sigma B_t + \int_{[0,t]}\int_{\{x \le -1\}} x\,N(ds,dx) + \int_{[0,t]}\int_{\{x > -1\}} \bigl(x\,N(ds,dx) - x\,\Pi(dx)\,ds\bigr),$$

where $B$ is a standard Brownian motion, $N$ is a Poisson random measure with intensity $ds \times \Pi(dx)$, and the last (compensated) integral is a square-integrable martingale. Furthermore, it can be shown that all Lévy processes satisfy the strong Markov property.

Let $W^{(q)}$ and $Z^{(q)}$ denote the scale functions corresponding to the process $X$ (see Kyprianou (2014) or Bertoin (1998) for more details). That is, $W^{(q)}(x) = 0$ for $x < 0$, and on $[0,\infty)$ the function $W^{(q)}$ is characterised as the strictly increasing and continuous function whose Laplace transform satisfies

$$\int_0^\infty e^{-\beta x}\, W^{(q)}(x)\,dx = \frac{1}{\psi(\beta) - q} \qquad \text{for } \beta > \Phi(q),$$

and

$$Z^{(q)}(x) = 1 + q\int_0^x W^{(q)}(y)\,dy,$$

where $\psi$ and $\Phi$ are, respectively, the Laplace exponent of $X$ and its right inverse, given by

$$\psi(\lambda) = \log E\bigl(e^{\lambda X_1}\bigr), \qquad \Phi(q) = \sup\{\lambda \ge 0 : \psi(\lambda) = q\}$$

for $q \ge 0$. When $q = 0$ we simply write $W = W^{(0)}$. Note that $\psi$ is zero at zero and tends to infinity at infinity. Moreover, it is infinitely differentiable and strictly convex, with $\psi'(0+) > 0$ (since $X$ drifts to infinity). The latter directly implies that $\Phi(0) = 0$.

We know that the right and left derivatives of $W$ exist (see Kyprianou (2014), Lemma 8.2). For ease of notation we shall assume that $\Pi$ has no atoms when $X$ is of finite variation, which guarantees that $W$ is continuously differentiable on $(0,\infty)$. Moreover, for every $x \ge 0$ the function $q \mapsto W^{(q)}(x)$ is analytic.
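As a concrete illustration of these definitions, consider the textbook example of a Brownian motion with drift, $X_t = \mu t + \sigma B_t$ with $\mu > 0$ (an example chosen by us, not part of the argument above); a short computation verifies the defining Laplace transform of $W$:

```latex
% Brownian motion with drift: \psi(\beta) = \mu\beta + \tfrac{1}{2}\sigma^2\beta^2,
% so \psi'(0+) = \mu > 0 and \Phi(0) = 0. The 0-scale function is
W(x) = \frac{1}{\mu}\Bigl(1 - e^{-2\mu x/\sigma^2}\Bigr), \qquad x \ge 0,
% since for every \beta > \Phi(0) = 0,
\int_0^\infty e^{-\beta x}\,W(x)\,dx
  = \frac{1}{\mu}\Bigl(\frac{1}{\beta} - \frac{1}{\beta + 2\mu/\sigma^2}\Bigr)
  = \frac{1}{\beta\bigl(\mu + \tfrac{1}{2}\sigma^2\beta\bigr)}
  = \frac{1}{\psi(\beta)}.
```

Since this $X$ has paths of infinite variation, $W(0) = 0$, consistent with the dichotomy in (2) below.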

If $X$ is of finite variation we may write

$$\psi(\lambda) = d\lambda - \int_{(-\infty,0)} (1 - e^{\lambda y})\,\Pi(dy),$$

where necessarily

$$d = -c - \int_{(-1,0)} x\,\Pi(dx) > 0.$$

With this notation, from the fact that $|y\,e^{\lambda y}| \le |y|$ for $y < 0$ and $\lambda \ge 0$, and using the dominated convergence theorem, we have that

$$\psi'(0+) = d + \int_{(-\infty,0)} x\,\Pi(dx). \tag{1}$$

For all $q \ge 0$ the function $W^{(q)}$ may have a discontinuity at zero, depending on the path variation of $X$: in the case that $X$ is of infinite variation we have $W^{(q)}(0) = 0$, otherwise

$$W^{(q)}(0) = \frac{1}{d}. \tag{2}$$

There are many important fluctuation identities in terms of the scale functions $W^{(q)}$ and $Z^{(q)}$ (see Bertoin (1998), Chapter VII, or Kyprianou (2014), Chapter 8). We mention some that will be useful later on. Denote by $\tau_0^-$ the first time the process is below zero, i.e.

$$\tau_0^- = \inf\{t > 0 : X_t < 0\}.$$

We then have, for $x \in \mathbb{R}$,

$$P_x(\tau_0^- < \infty) = \begin{cases} 1 - \psi'(0+)W(x) & \text{if } \psi'(0+) \ge 0, \\ 1 & \text{if } \psi'(0+) < 0, \end{cases} \tag{3}$$

where $P_x$ denotes the law of $X$ started from $x$.

Let us define the $q$-potential measure of $X$ killed on exiting $(-\infty, a]$, for $q \ge 0$, as follows:

$$R^{(q)}(a,x,dy) := \int_0^\infty e^{-qt}\, P_x\bigl(X_t \in dy,\ \tau_a^+ > t\bigr)\,dt.$$

The potential measure has a density $r^{(q)}(a,x,y)$ (see Kuznetsov et al. (2011), Theorem 2.7, for details) which is given by

$$r^{(q)}(a,x,y) = e^{-\Phi(q)(a-x)}\, W^{(q)}(a-y) - W^{(q)}(x-y). \tag{4}$$

This density will be useful later. Another pair of processes that will be useful later on are the running supremum and running infimum, defined by

$$\overline{X}_t = \sup_{0\le s\le t} X_s, \qquad \underline{X}_t = \inf_{0\le s\le t} X_s.$$

The well-known duality lemma states that the pairs $(\overline{X}_t, \overline{X}_t - X_t)$ and $(X_t - \underline{X}_t, -\underline{X}_t)$ have the same distribution under the measure $P$. Moreover, with $e_q$ an independent exponentially distributed random variable with parameter $q > 0$, we deduce from the Wiener–Hopf factorisation that the random variables $\overline{X}_{e_q}$ and $\overline{X}_{e_q} - X_{e_q}$ are independent. Furthermore, in the spectrally negative case, $\overline{X}_{e_q}$ is exponentially distributed with parameter $\Phi(q)$. From the theory of scale functions we can also deduce that the law of $-\underline{X}_{e_q}$ satisfies

$$P\bigl(-\underline{X}_{e_q} \in dz\bigr) = \frac{q}{\Phi(q)}\, W^{(q)}(dz) - q\, W^{(q)}(z)\,dz \tag{5}$$

for $z \ge 0$.
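As a sanity check (a short derivation added here, not in the original text), letting $q \downarrow 0$ in (5) recovers the distribution function of $-\underline{X}_\infty$ used repeatedly below: since $\Phi(0) = 0$ and $\Phi'(0) = 1/\psi'(0+)$, we have $q/\Phi(q) \to \psi'(0+)$, while the term $q\,W^{(q)}(z)\,dz$ vanishes, so

```latex
P\bigl(-\underline{X}_\infty \le x\bigr)
  = \lim_{q \downarrow 0} P\bigl(-\underline{X}_{e_q} \le x\bigr)
  = \lim_{q \downarrow 0} \int_{[0,x]}
      \Bigl(\frac{q}{\Phi(q)}\,W^{(q)}(dz) - q\,W^{(q)}(z)\,dz\Bigr)
  = \psi'(0+)\,W(x),
```

which matches the identity $F(x) = \psi'(0+)W(x)$ appearing in Remark 3.3.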

Denote by $g_r$ the last passage time below $r$, i.e.

$$g_r = \sup\{t \ge 0 : X_t \le r\}. \tag{6}$$

When $r = 0$ we simply write $g = g_0$.

###### Remark 2.1.

Note that from the fact that $X$ drifts to infinity we have that $g_r < \infty$ $P$-a.s. Moreover, as $X$ is a spectrally negative Lévy process, and hence the case of a compound Poisson process is excluded, the only way of exiting the set $(-\infty, 0]$ is by creeping upwards. This tells us that $X_g = 0$ and that, for each $s \ge 0$, the event $\{g \le s\}$ coincides with $\{X_u > 0 \text{ for all } u \in [s,\infty)\}$ up to a $P$-null set.

Clearly, up to any time $t$ the value of $g$ is unknown (unless $X$ is trivial), and it is only with the realisation of the whole process that we know that the last passage time below zero has occurred. However, this is often too late: typically one would like to know how close we are to $g$ at any time $t$, and then take some action based on this information. We search for a stopping time $\tau$ of $X$ that is as “close” as possible to $g$. Consider the optimal prediction problem

$$V_* = \inf_{\tau\in\mathcal{T}} E|g - \tau|, \tag{7}$$

where $\mathcal{T}$ is the set of all stopping times.

## 3 Main result

Before giving an equivalence between the optimal prediction problem (7) and an optimal stopping problem, we prove that the random times $g_r$, $r \in \mathbb{R}$, have finite mean. For this purpose, let $\tau_x^+$ be the first passage time above $x$, i.e.

$$\tau_x^+ = \inf\{t > 0 : X_t > x\}.$$
###### Lemma 3.1.

Let $X$ be a spectrally negative Lévy process drifting to infinity with Lévy measure $\Pi$ such that

$$\int_{(-\infty,-1)} x^2\,\Pi(dx) < \infty. \tag{8}$$

Then $E_x(g_r) < \infty$ for every $x, r \in \mathbb{R}$.

###### Proof.

Note that by the spatial homogeneity of Lévy processes we have, for all $x, r \in \mathbb{R}$,

$$E_x(g_r) = E_{x-r}(g),$$

so it suffices to prove that $E_x(g) < \infty$ for all $x \in \mathbb{R}$, i.e. to take $r = 0$. From Baurdoux (2009) (Theorem 1) or Chiu et al. (2005) (Theorem 3.1) we know that for a spectrally negative Lévy process with $\psi'(0+) > 0$ the Laplace transform of $g$, for $q \ge 0$ and $x \in \mathbb{R}$, is given by

$$E_x\bigl(e^{-qg}\bigr) = \psi'(0+)\,\Phi'(q)\,e^{\Phi(q)x} + \psi'(0+)\bigl(W(x) - W^{(q)}(x)\bigr).$$

Then, from the well-known result linking the moments and the derivatives of the Laplace transform (see Feller (1971), Section XIII.2), the expectation of $g$ is given by

$$\begin{aligned} E_x(g) &= -\left.\frac{\partial}{\partial q} E_x\bigl(e^{-qg}\bigr)\right|_{q=0+} \\ &= \psi'(0+)\left.\frac{\partial}{\partial q} W^{(q)}(x)\right|_{q=0+} - \psi'(0+)\left[\Phi''(q)\,e^{\Phi(q)x} + x\,\Phi'(q)^2\,e^{\Phi(q)x}\right]_{q=0+} \\ &= \psi'(0+)\left.\frac{\partial}{\partial q} W^{(q)}(x)\right|_{q=0+} - \psi'(0+)\bigl[\Phi''(0) + x\,\Phi'(0)^2\bigr]. \end{aligned}$$

We know that for any $x \ge 0$ the function $q \mapsto W^{(q)}(x)$ is analytic, so the first term in the last expression is finite. Hence $E_x(g)$ is finite provided $\Phi'(0)$ and $\Phi''(0)$ are finite. Recall that $\psi$ is zero at zero, tends to infinity at infinity, and is infinitely differentiable and strictly convex on $[0,\infty)$. Since $X$ drifts to infinity we have $\psi'(0+) > 0$; we deduce that $\psi$ is strictly increasing on $[0,\infty)$ and that the right inverse $\Phi$ is the usual inverse of $\psi$ there. From the fact that $\psi$ is strictly convex we have $\psi'(\lambda) > 0$ for all $\lambda \ge 0$.

We then compute

$$\Phi'(0) = \frac{1}{\psi'(\Phi(0)+)} = \frac{1}{\psi'(0+)} < \infty$$

and

$$\Phi''(0) = -\left.\frac{\psi''(\Phi(q)+)\,\Phi'(q)}{\psi'(\Phi(q)+)^2}\right|_{q=0} = -\frac{\psi''(0+)}{\psi'(0+)^3}.$$

From the Lévy–Itô decomposition of $X$ we know that

$$\psi''(0+) = \sigma^2 + \int_{(-\infty,0)} x^2\,\Pi(dx) = \sigma^2 + \int_{(-\infty,-1)} x^2\,\Pi(dx) + \int_{(-1,0)} x^2\,\Pi(dx) < \infty,$$

where the last inequality holds by assumption (8) and from the fact that $\int_{(-1,0)} x^2\,\Pi(dx) < \infty$, since $\Pi$ is a Lévy measure. Then we have that $E_x(g) < \infty$, and hence $E_x(g_r) < \infty$ for all $x, r \in \mathbb{R}$. ∎
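The two derivative identities just used can be checked numerically. The sketch below (our own illustrative check; the Brownian-motion-with-drift model and the parameters are assumptions, not part of the proof) computes $\Phi$ by bisection and approximates $\Phi'(0)$ and $\Phi''(0)$ by finite differences, comparing them with $1/\psi'(0+)$ and $-\psi''(0+)/\psi'(0+)^3$:

```python
# Numerical check of Phi'(0) = 1/psi'(0+) and Phi''(0) = -psi''(0+)/psi'(0+)^3
# for a Brownian motion with drift: psi(lam) = mu*lam + 0.5*sigma^2*lam^2,
# so psi'(0+) = mu and psi''(0+) = sigma^2 (mu, sigma are illustrative).
mu, sigma = 1.5, 2.0

def psi(lam):
    return mu * lam + 0.5 * sigma**2 * lam**2

def Phi(q, tol=1e-12):
    """Right inverse of psi on [0, inf): solve psi(lam) = q by bisection."""
    lo, hi = 0.0, 1.0
    while psi(hi) < q:          # grow the bracket until psi(hi) >= q
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h = 1e-4                        # finite-difference step
Phi1 = (Phi(h) - Phi(0.0)) / h                        # ~ Phi'(0)  = 1/mu
Phi2 = (Phi(2 * h) - 2 * Phi(h) + Phi(0.0)) / h**2    # ~ Phi''(0) = -sigma^2/mu^3
print(Phi1, Phi2)
```

For these parameters the approximations should be close to $1/\mu = 2/3$ and $-\sigma^2/\mu^3 \approx -1.185$.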

Now we are ready to state the equivalence, mentioned earlier, between the optimal prediction problem and an optimal stopping problem. This equivalence is mainly based on the work of Urusov (2005).

###### Lemma 3.2.

Consider the standard optimal stopping problem

$$V = \inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau G(X_s)\,ds\right), \tag{9}$$

where the function $G$ is given by $G(x) = 2\psi'(0+)W(x) - 1$ for $x \in \mathbb{R}$. Then the stopping time which minimises (7) is the same as the one which minimises (9). In particular,

$$V_* = V + E(g). \tag{10}$$
###### Proof.

Fix any stopping time $\tau$ of $X$. We then have

$$\begin{aligned} |g-\tau| &= (\tau-g)^+ + (\tau-g)^- \\ &= (\tau-g)^+ + g - (\tau\wedge g) \\ &= \int_0^\tau \mathbb{I}_{\{g\le s\}}\,ds + g - \int_0^\tau \mathbb{I}_{\{g>s\}}\,ds \\ &= \int_0^\tau \mathbb{I}_{\{g\le s\}}\,ds + g - \int_0^\tau \bigl[1 - \mathbb{I}_{\{g\le s\}}\bigr]\,ds \\ &= g + \int_0^\tau \bigl[2\,\mathbb{I}_{\{g\le s\}} - 1\bigr]\,ds. \end{aligned}$$

From Fubini's theorem we have

$$\begin{aligned} E\left[\int_0^\tau \mathbb{I}_{\{g\le s\}}\,ds\right] &= E\left[\int_0^\infty \mathbb{I}_{\{s<\tau\}}\mathbb{I}_{\{g\le s\}}\,ds\right] = \int_0^\infty E\bigl[\mathbb{I}_{\{s<\tau\}}\mathbb{I}_{\{g\le s\}}\bigr]\,ds \\ &= \int_0^\infty E\Bigl[E\bigl[\mathbb{I}_{\{s<\tau\}}\mathbb{I}_{\{g\le s\}}\,\big|\,\mathcal{F}_s\bigr]\Bigr]\,ds = \int_0^\infty E\Bigl[\mathbb{I}_{\{s<\tau\}}\,E\bigl[\mathbb{I}_{\{g\le s\}}\,\big|\,\mathcal{F}_s\bigr]\Bigr]\,ds \\ &= E\left[\int_0^\infty \mathbb{I}_{\{s<\tau\}}\,E\bigl[\mathbb{I}_{\{g\le s\}}\,\big|\,\mathcal{F}_s\bigr]\,ds\right] = E\left[\int_0^\tau P(g\le s\,|\,\mathcal{F}_s)\,ds\right]. \end{aligned}$$

Note that due to Remark 2.1 the event $\{g \le s\}$ is equal to $\{X_u > 0 \text{ for all } u \in [s,\infty)\}$ (up to a $P$-null set). Hence, since $X_s$ is $\mathcal{F}_s$-measurable,

$$\begin{aligned} P(g\le s\,|\,\mathcal{F}_s) &= P\bigl(X_u > 0 \text{ for all } u\in[s,\infty)\,\big|\,\mathcal{F}_s\bigr) \\ &= P\Bigl(\inf_{u\ge s} X_u \ge 0\,\Big|\,\mathcal{F}_s\Bigr) \\ &= P\Bigl(\inf_{u\ge s}(X_u - X_s) \ge -X_s\,\Big|\,\mathcal{F}_s\Bigr) \\ &= P\Bigl(\inf_{u\ge 0}\widetilde{X}_u \ge -X_s\,\Big|\,\mathcal{F}_s\Bigr), \end{aligned}$$

where $\widetilde{X}_u = X_{s+u} - X_s$ for $u \ge 0$. From the Markov property for Lévy processes we have that $\widetilde{X} = \{\widetilde{X}_u, u\ge0\}$ is a Lévy process with the same law as $X$, independent of $\mathcal{F}_s$. We therefore find that

$$P(g\le s\,|\,\mathcal{F}_s) = h(X_s),$$

where $h(b) := P(\inf_{u\ge 0} X_u \ge -b)$. Note that the event $\{\inf_{u\ge 0} X_u \ge -b\}$ is equal to $\{\tau_{-b}^- = \infty\}$, where $\tau_{-b}^- = \inf\{t > 0 : X_t < -b\}$. Hence, by the spatial homogeneity of Lévy processes,

$$h(b) = P\Bigl(\inf_{u\ge 0} X_u \ge -b\Bigr) = P_b\Bigl(\inf_{u\ge 0} X_u \ge 0\Bigr) = P_b(\tau_0^- = \infty) = 1 - P_b(\tau_0^- < \infty) = \psi'(0+)W(b),$$

where the last equality holds by identity (3) and the fact that $\psi'(0+) > 0$. Therefore,

$$\begin{aligned} V_* &= \inf_{\tau\in\mathcal{T}} E(|g-\tau|) \\ &= E(g) + \inf_{\tau\in\mathcal{T}}\left\{2E\left(\int_0^\tau \mathbb{I}_{\{g\le s\}}\,ds\right) - E(\tau)\right\} \\ &= E(g) + \inf_{\tau\in\mathcal{T}}\left\{2E\left(\int_0^\tau P(g\le s\,|\,\mathcal{F}_s)\,ds\right) - E(\tau)\right\} \\ &= E(g) + \inf_{\tau\in\mathcal{T}}\left\{2E\left(\int_0^\tau h(X_s)\,ds\right) - E(\tau)\right\} \\ &= E(g) + \inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau \bigl[2h(X_s) - 1\bigr]\,ds\right). \end{aligned}$$

Hence,

$$V_* = E(g) + \inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau \bigl[2\psi'(0+)W(X_s) - 1\bigr]\,ds\right) = E(g) + \inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau G(X_s)\,ds\right). \qquad ∎$$

To find the solution of the optimal stopping problem (9) we embed it in an optimal stopping problem for the strong Markov process $X$ with an arbitrary starting value $x \in \mathbb{R}$. Specifically, we define the function $V$ as

$$V(x) = \inf_{\tau\in\mathcal{T}} E_x\left(\int_0^\tau G(X_s)\,ds\right). \tag{11}$$

Thus,

$$V_* = V(0) + E(g).$$
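The key identity used in the proof above, $h(b) = P(\inf_{u\ge0} X_u \ge -b) = \psi'(0+)W(b)$, can be illustrated by simulation. The following sketch (our own illustrative check; the Brownian-motion-with-drift model, parameters and discretisation grid are assumptions, not from the text) compares a Monte Carlo estimate of $h(b)$ with the closed form $1 - e^{-2\mu b/\sigma^2}$, which equals $\psi'(0+)W(b)$ for $X_t = \mu t + \sigma B_t$:

```python
import numpy as np

# X_t = mu*t + sigma*B_t has psi'(0+) = mu and W(b) = (1 - exp(-2*mu*b/sigma^2))/mu,
# so h(b) = P(inf_{u>=0} X_u >= -b) should equal 1 - exp(-2*mu*b/sigma^2).
rng = np.random.default_rng(1)
mu, sigma, b = 1.0, 1.0, 1.0
T, n_steps, n_paths = 10.0, 2000, 5000   # finite-horizon, discrete-grid approximation
dt = T / n_steps

steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
inf_est = np.minimum(paths.min(axis=1), 0.0)   # include the starting point X_0 = 0

estimate = float((inf_est >= -b).mean())       # Monte Carlo estimate of h(b)
exact = 1.0 - np.exp(-2.0 * mu * b / sigma**2)
print(estimate, exact)
```

The discrete grid can only overestimate the running infimum, so the estimate is biased slightly upwards; refining `dt` and extending `T` shrinks the gap.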
###### Remark 3.3.

Note that the distribution function of $-\underline{X}_\infty$ is given by

$$F(x) = P\bigl(-\underline{X}_\infty \le x\bigr) = P_x(\tau_0^- = \infty) = \psi'(0+)W(x).$$

Hence the function $G$ can be written in terms of $F$ as $G = 2F - 1$.

Let us now give some intuition about the optimal stopping problem (11). For this, define $x_0$ as the lowest value at which $G$ is non-negative, i.e.

$$x_0 = \inf\{x\in\mathbb{R} : G(x) \ge 0\}. \tag{12}$$

We know that $W$ is continuous and strictly increasing on $[0,\infty)$ and vanishes on $(-\infty,0)$. Moreover, $\lim_{x\to\infty}\psi'(0+)W(x) = 1$ (since $F$ is a distribution function). As a consequence, $G$ is a strictly increasing and continuous function on $[0,\infty)$ such that $G(x) = -1$ for $x < 0$ and $\lim_{x\to\infty} G(x) = 1$. In the same way as $W$, the function $G$ may have a discontinuity at zero, depending on the path variation of $X$. From the fact that $G(x) = -1$ for $x < 0$ and the definition of $x_0$ given in (12), we have that $x_0 \ge 0$.

The above observations tell us that, to solve the optimal stopping problem (11), we are interested in a stopping time such that before stopping the process spends most of its time at values where $G$ is negative, taking into account that $X$ can spend some time in the set $\{x : G(x) \ge 0\}$ and then return to $\{x : G(x) < 0\}$.

It therefore seems reasonable to think that a stopping time which attains the infimum in (11) is of the form

$$\tau_a^+ = \inf\{t > 0 : X_t > a\}$$

for some $a \ge x_0$.

The following theorem is the main result of this work. It confirms the intuition above and links the optimal stopping level with the median of the convolution with itself of the distribution function of $-\underline{X}_\infty$.

###### Theorem 3.4.

Suppose that $X$ is a spectrally negative Lévy process drifting to infinity with Lévy measure $\Pi$ satisfying

$$\int_{(-\infty,-1)} x^2\,\Pi(dx) < \infty.$$

Then there exists $a^* \in [x_0, \infty)$ such that an optimal stopping time in (11) is given by

$$\tau^* = \inf\{t\ge0 : V(X_t) = 0\} = \inf\{t\ge0 : X_t \ge a^*\}.$$

The optimal stopping level is defined by

$$a^* = \inf\{x\in\mathbb{R} : H(x) \ge 1/2\}, \tag{13}$$

where $H$ is the convolution of $F$ with itself, i.e.

$$H(x) = \int_{[0,x]} F(x-y)\,dF(y).$$

Furthermore, $V$ is a non-decreasing, continuous function, and the following hold:

• If $X$ is of infinite variation, or of finite variation with

$$F(0)^2 < 1/2, \tag{14}$$

then $a^*$ is the median of the distribution function $H$, i.e. $a^*$ is the unique value which satisfies the equation

$$H(a^*) = \int_{[0,a^*]} F(a^*-y)\,dF(y) = \frac{1}{2}. \tag{15}$$

The value function is given by

$$V(x) = \left(\frac{2}{\psi'(0+)}\int_x^{a^*} H(y)\,dy - \frac{a^*-x}{\psi'(0+)}\right)\mathbb{I}_{\{x\le a^*\}}. \tag{16}$$

Moreover, there is smooth fit at $a^*$, i.e. $V'(a^*-) = V'(a^*+) = 0$.

• If $X$ is of finite variation with $F(0)^2 \ge 1/2$, then $a^* = 0$ and

$$V(x) = \frac{x}{\psi'(0+)}\,\mathbb{I}_{\{x\le 0\}}.$$

In particular, there is continuous fit at $0$, i.e. $V(0-) = V(0) = 0$, but there is no smooth fit at $0$, i.e. $V'(0-) = 1/\psi'(0+) \ne 0 = V'(0+)$.

###### Remark 3.5.
• Note that since $F$ corresponds to the distribution function of $-\underline{X}_\infty$, the function $H$ can be interpreted as the distribution function of $-\underline{X}_\infty - \underline{Y}_\infty$, where $\underline{Y}_\infty$ is an independent copy of $\underline{X}_\infty$. Moreover, $H$ can be written in terms of scale functions as

$$H(x) = \psi'(0+)^2\, W(x)W(0) + \psi'(0+)^2 \int_0^x W(y)\,W'(x-y)\,dy, \tag{17}$$

and then equation (15) reads

$$\psi'(0+)^2\left[W(a^*)W(0) + \int_0^{a^*} W(y)\,W'(a^*-y)\,dy\right] = \frac{1}{2}.$$

Using Fubini's theorem the value function takes the form

$$V(x) = \left(2\psi'(0+)\int_0^{a^*} W(y)W(a^*-y)\,dy - 2\psi'(0+)\int_0^{x} W(y)W(x-y)\,dy - \frac{a^*-x}{\psi'(0+)}\right)\mathbb{I}_{\{x\le a^*\}}. \tag{18}$$

• Note that in the case that $X$ is of finite variation the condition $F(0)^2 \ge 1/2$ is equivalent to $-\int_{(-\infty,0)} x\,\Pi(dx) \le (1 - 1/\sqrt{2})\,d$ (since $F(0) = \psi'(0+)/d$ and $\psi'(0+) = d + \int_{(-\infty,0)} x\,\Pi(dx)$), so this condition tells us that the drift is much larger than the average size of the jumps. This implies that the process drifts quickly to infinity, and then we have to stop the first time the process is above zero. In this case, concerning the optimal prediction problem, the stopping time which is nearest (in the $L^1$ sense) to the last time the process is below zero is the first time the process is above the level zero.

• If $X$ is of finite variation with $F(0)^2 < 1/2$, then the average size of the jumps of $X$ is sufficiently large that, when the process crosses above zero, it is more likely (than in the previous case) to jump back below zero and spend more time in the region where $G$ is negative. This condition also tells us that the process drifts to infinity a little more slowly than in the previous case. The stopping time which is nearest (in the $L^1$ sense) to the last time the process is below zero is then the first time the process is above the level $a^* > 0$.
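To make the optimal level concrete, take again a Brownian motion with drift, $X_t = \mu t + \sigma B_t$ (an illustrative choice of ours, not from the remark above). Then $F(x) = \psi'(0+)W(x) = 1 - e^{-\rho x}$ with $\rho = 2\mu/\sigma^2$, i.e. $-\underline{X}_\infty$ is exponentially distributed, so $H = F * F$ is the Erlang$(2,\rho)$ distribution function and, by (13) and (15), $a^*$ is its median. A minimal sketch locating $a^*$ by bisection:

```python
from math import exp

mu, sigma = 1.0, 1.0          # illustrative parameters
rho = 2.0 * mu / sigma**2     # F(x) = 1 - exp(-rho*x): -inf X is exponential(rho)

def H(x):
    # Convolution of F with itself: Erlang(2, rho) distribution function.
    return 1.0 - exp(-rho * x) * (1.0 + rho * x)

def median(cdf, lo=0.0, hi=1.0, tol=1e-10):
    while cdf(hi) < 0.5:      # grow the bracket until it contains the median
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_star = median(H)            # optimal level a* solving H(a*) = 1/2
print(a_star)
```

Since Brownian motion has paths of infinite variation, the first case of Theorem 3.4 applies and $a^* > 0$ solves (15) uniquely; here $a^* \approx 0.84$ for $\rho = 2$.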

## 4 Proof of Main Result

In this section we prove Theorem 3.4 using a direct method. Since the proof is rather long, we break it into a number of lemmas.

In particular, we will use the general theory of optimal stopping (see Peskir and Shiryaev (2006)). First, using the Snell envelope, we will show that an optimal stopping time for (11) is the first time the process enters a stopping set $D$, defined in terms of the value function $V$. Recall the set

$$\mathcal{T}_t = \{\tau \ge t : \tau \text{ is a stopping time}\}.$$

We denote by $\mathcal{T} = \mathcal{T}_0$ the set of all stopping times.

The next lemma is standard in optimal stopping theory; we include the proof for completeness.

###### Lemma 4.1.

Denoting by $D = \{x \in \mathbb{R} : V(x) = 0\}$ the stopping set, we have that for any $x \in \mathbb{R}$ the stopping time

$$\tau_D = \inf\{t\ge0 : X_t \in D\}$$

attains the infimum in (11), i.e. $V(x) = E_x\bigl(\int_0^{\tau_D} G(X_s)\,ds\bigr)$.

###### Proof.

From the general theory of optimal stopping, consider the Snell envelope defined as

$$S_t^x = \operatorname*{ess\,inf}_{\tau\in\mathcal{T}_t} E\left(\int_0^\tau G(X_s+x)\,ds\,\Big|\,\mathcal{F}_t\right)$$

and define the stopping time

$$\tau_x^* = \inf\left\{t\ge0 : S_t^x = \int_0^t G(X_s+x)\,ds\right\}.$$

Then the stopping time $\tau_x^*$ is optimal for

$$\inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau G(X_s+x)\,ds\right). \tag{19}$$

On account of the Markov property we have

$$\begin{aligned} S_t^x &= \int_0^t G(X_s+x)\,ds + \operatorname*{ess\,inf}_{\tau\in\mathcal{T}_t} E\left(\int_0^\tau G(X_s+x)\,ds - \int_0^t G(X_s+x)\,ds\,\Big|\,\mathcal{F}_t\right) \\ &= \int_0^t G(X_s+x)\,ds + \operatorname*{ess\,inf}_{\tau\in\mathcal{T}_t} E\left(\int_t^\tau G(X_s+x)\,ds\,\Big|\,\mathcal{F}_t\right) \\ &= \int_0^t G(X_s+x)\,ds + \operatorname*{ess\,inf}_{\tau\in\mathcal{T}_t} E\left(\int_0^{\tau-t} G(X_{s+t}+x)\,ds\,\Big|\,\mathcal{F}_t\right) \\ &= \int_0^t G(X_s+x)\,ds + \inf_{\tau\in\mathcal{T}} E_{X_t}\left(\int_0^{\tau} G(X_s+x)\,ds\right) \\ &= \int_0^t G(X_s+x)\,ds + V(X_t+x), \end{aligned}$$

where the last equality follows from the spatial homogeneity of Lévy processes and from the definition of $V$. Therefore $\tau_x^* = \inf\{t\ge0 : V(X_t+x) = 0\}$, so we have

$$\tau_x^* = \inf\{t\ge0 : X_t + x \in D\}.$$

Thus

$$\begin{aligned} V(x) &= \inf_{\tau\in\mathcal{T}} E_x\left(\int_0^\tau G(X_t)\,dt\right) = \inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau G(X_t+x)\,dt\right) \\ &= E\left(\int_0^{\tau_x^*} G(X_t+x)\,dt\right) = E_x\left(\int_0^{\tau_D} G(X_t)\,dt\right), \end{aligned}$$

where the third equality holds since $\tau_x^*$ is optimal for (19) and the fourth follows from the spatial homogeneity of Lévy processes. Therefore the stopping time $\tau_D$ is optimal in (11) for all $x \in \mathbb{R}$. ∎

Next, we will prove that $V(x)$ is finite for all $x \in \mathbb{R}$, which implies that there exists a stopping time such that the infimum in (11) is attained. Recall the definition of $x_0$ in (12).

###### Lemma 4.2.

The function $V$ is non-decreasing, with $V(x) < 0$ for all $x < x_0$. In particular, $V(x) \ge -E_x(g_{x_0}) > -\infty$ for any $x \in \mathbb{R}$.

###### Proof.

From the spatial homogeneity of Lévy processes,

$$V(x) = \inf_{\tau\in\mathcal{T}} E\left(\int_0^\tau G(X_s+x)\,ds\right).$$

Then, if $x \le y$, we have $G(X_s+x) \le G(X_s+y)$ for all $s \ge 0$, since $G$ is a non-decreasing function (see the discussion before Theorem 3.4). This implies that $V(x) \le V(y)$, and $V$ is non-decreasing as claimed. Taking the stopping time $\tau \equiv 0$ we see that $V(x) \le 0$ for any $x \in \mathbb{R}$. Now let $x < x_0$ and let $y_0 \in (x, x_0)$; then $G(y_0) < 0$ and, from the fact that $X_s \le y_0$ for all $s < \tau_{y_0}^+$ under $P_x$, we have

$$V(x) \le E_x\left(\int_0^{\tau_{y_0}^+} G(X_s)\,ds\right) \le E_x\left(\int_0^{\tau_{y_0}^+} G(y_0)\,ds\right) = G(y_0)\,E_x\bigl(\tau_{y_0}^+\bigr) < 0,$$

where the last inequality holds because $G(y_0) < 0$ and $E_x(\tau_{y_0}^+) \in (0,\infty)$; hence $V(x) < 0$ for all $x < x_0$.

Now we will see that $V(x) > -\infty$ for all $x \in \mathbb{R}$. Note that $G(y) \ge -\mathbb{I}_{\{y \le x_0\}}$ holds for all $y \in \mathbb{R}$, and thus

$$\begin{aligned} V(x) &= \inf_{\tau\in\mathcal{T}} E_x\left(\int_0^\tau G(X_s)\,ds\right) \ge \inf_{\tau\in\mathcal{T}} E_x\left(\int_0^\tau -\mathbb{I}_{\{X_s\le x_0\}}\,ds\right) \\ &= -\sup_{\tau\in\mathcal{T}} E_x\left(\int_0^\tau \mathbb{I}_{\{X_s\le x_0\}}\,ds\right) \ge -E_x\left(\int_0^\infty \mathbb{I}_{\{X_s\le x_0\}}\,ds\right) \ge -E_x(g_{x_0}), \end{aligned}$$

where the last inequality holds since $X_s > x_0$ for $s > g_{x_0}$, so $\int_0^\infty \mathbb{I}_{\{X_s\le x_0\}}\,ds \le g_{x_0}$.

From Lemma 3.1 we have that $E_x(g_{x_0}) < \infty$. Hence for all $x \le x_0$ we have $V(x) > -\infty$ and, due to the monotonicity of $V$, $V(x) > -\infty$ for all $x \in \mathbb{R}$.

Next, we derive some properties of $V$ which will be useful to find the form of the set $D$.

###### Lemma 4.3.

The set $D$ is non-empty. Moreover, there exists $\widetilde{x} \in \mathbb{R}$ such that

$$V(x) = 0 \qquad \text{for all } x \ge \widetilde{x}.$$
###### Proof.

Suppose that $D = \emptyset$. Then by Lemma 4.1 the optimal stopping time for (11) is $\tau_D = \infty$. This implies that

$$V(x) = E_x\left(\int_0^\infty G(X_t)\,dt\right).$$

Let $m$ be the smallest value at which $G$ reaches $1/2$, i.e.

$$m = \inf\{x\in\mathbb{R} : G(x) \ge 1/2\},$$

and let $g_m$ be the last time that the process is below the level $m$, as defined in (6). Then

$$E_x\left(\int_0^\infty G(X_t)\,dt\right) = E_x\left(\int_0^{g_m} G(X_t)\,dt\right) + E_x\left(\int_{g_m}^\infty G(X_t)\,dt\right). \tag{20}$$

Note that, from the fact that $G$ is bounded and $g_m$ has finite expectation (see Lemma 3.1), the first term on the right-hand side of (20) is finite. Now we analyse the second term on the right-hand side of (20). With $t > g_m$ we have $X_t > m$; since $G(x)$ is non-negative for all $x \ge m$, we have

 Ex(∫∞gmG(Xt)dt) =Ex(I{gm