
# Damped jump-telegraph processes

Nikita Ratanov, Universidad del Rosario, Cl. 12c, No. 4-69, Bogotá, Colombia
###### Abstract

We study a one-dimensional Markov-modulated random walk with jumps. It is assumed that the jump amplitudes, as well as the chosen velocity regime, are random and depend on the time spent by the process in the previous state of the underlying Markov process.

Equations for the distribution and for its moments are derived. We characterise the martingale distributions in terms of observable proportions between the jump and velocity regimes.

###### keywords:
inhomogeneous jump-telegraph process, Volterra equation, martingale measure
###### MSC:
primary 60J27; secondary 60J75, 60K99

## 1 Introduction

Telegraph processes with different switching and velocity regimes have recently been studied in connection with various applications, such as queueing theory (see Zacks (2004), Stadje and Zacks (2004)) and mathematical biology (see Hadeler (1999)). Special attention has been devoted to financial applications (see Ratanov (2007), López and Ratanov (2012)). In the latter case, arbitrage reasoning demands the presence of jumps.

Motions with deterministic jumps have been studied in detail; see the explicit expressions for the transition densities in Ratanov (2007) and Di Crescenzo and Martinucci (2013). Such a model has been developed for the option pricing problem, based on the risk-neutral approach, see Ratanov (2007). The case of random jump amplitudes is less well known. Telegraph processes of this type have previously been studied only under the assumption of mutual independence of the jump times and the jump amplitudes, see Stadje and Zacks (2004) and Di Crescenzo and Martinucci (2013). A similar setting was used for financial applications in López and Ratanov (2012).

We present here a jump-telegraph process in which the amplitude of the next jump depends on the (random) time spent by the process in the previous state. This approach is of special interest for economic and financial applications, wherever the behaviour of the process involves friction and memory.

Assume that the particle moves with random (and variable) velocities, performing jumps of random amplitude whenever the velocity is changed. More precisely, the actual velocity regime and the amplitude of the next jump are defined as (alternating) functions of the time spent by the particle in the previous state. We also assume that the time intervals between successive state changes have fairly arbitrary alternating distributions. This creates a damping effect, where friction is generated by memory.

This setting generalises the processes previously used for market modelling by Ratanov (2007) and López and Ratanov (2012).

The underlying processes are described in Sections 2 and 3. Section 4 presents a result which can be interpreted as a Doob-Meyer decomposition. Several examples with different regimes of velocities and of jumps are presented.

## 2 Generalised jump-telegraph processes: distribution

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space. Consider two continuous-time Markov processes $\varepsilon_i=\varepsilon_i(t)\in\{0,1\}$, $t\ge0$, $i=0,1$. The subscript indicates the initial state, $\varepsilon_i(0)=i$ (with probability 1). Assume that $\varepsilon_0$ and $\varepsilon_1$ are left-continuous a.s.

Let $\tau_0=0<\tau_1<\tau_2<\ldots$ be the Markov flow of switching times. The increments $T_n=\tau_n-\tau_{n-1}$, $n\ge1$, are independent and possess alternating distributions (with the distribution functions $F_0$, $F_1$, the survival functions $\bar F_i=1-F_i$ and the densities $f_0$, $f_1$). We assume that $\tau_0=0$, i.e. the state process is started at a switching instant. The distributions of $\tau_n$ and $T_n$ depend on the initial state $i$. For brevity, we will not always indicate this dependence.

Consider a particle moving on the line $\mathbb R$ with two alternating velocity regimes $0$ and $1$. These velocities are described by two continuous functions $c_0(\tau,t)$ and $c_1(\tau,t)$. At each instant $t\in[\tau_n,\tau_{n+1})$ the particle takes the velocity regime $c_{\varepsilon_i(\tau_n)}(T_n,\,t-\tau_n)$, where $T_n$ is the (random) time spent by the particle in the previous state. We define a pair of (generalised) telegraph processes driven by variable velocities as follows:

$$T_i(t)=T_i(t;c_0,c_1)=\sum_{n=0}^{\infty}c_{\varepsilon_i(\tau_n)}(T_n,\,t-\tau_n)\,\mathbf 1_{\{\tau_n\le t<\tau_{n+1}\}},\qquad i=0,1.\tag{2.1}$$

The integral $\int_0^t T_i(s)\,\mathrm ds$ is called the integrated telegraph process.

Let $N_i=N_i(t):=\max\{n\ge0:\ \tau_n\le t\}$, $i=0,1$, be the counting process of velocity reversals. Notice that $\tau_{N_i(t)}\le t<\tau_{N_i(t)+1}$ and $N_i(t)<\infty$ a.s. for all $t\ge0$.

The integrated telegraph process can be interpreted as a sum of a random number of random variables. If $N_i(t)=0$, then

$$\int_0^t T_i(s)\,\mathrm ds=l_i(T_0;t);\tag{2.2}$$

if $N_i(t)\ge1$, then the integrated telegraph process is expressed as

$$\int_0^t T_i(s)\,\mathrm ds=\sum_{n=0}^{N_i(t)-1}l_{\varepsilon_i(\tau_n)}(T_n;\tau_n,\tau_{n+1})+l_{\varepsilon_i(\tau_{N_i(t)})}\big(T_{N_i(t)};\tau_{N_i(t)},t\big).\tag{2.3}$$

Here

$$l_i(T;u,t):=\int_u^t c_i(T,s)\,\mathrm ds,\qquad i=0,1.$$

Notice that $l_i(T;u,t)=l_i(T;0,t)-l_i(T;0,u)$. Simplifying, we write $l_i(T;t)$ instead of $l_i(T;0,t)$.

In the same manner we define the jump component. Let $h_0(t)$ and $h_1(t)$ be a pair of deterministic continuous (or, at least, bounded measurable) functions. Consider the telegraph processes (2.1) based on $h_0,h_1$ instead of $c_0,c_1$:

$$T_i(t;h_0,h_1)=\sum_{n=1}^{\infty}h_{\varepsilon_i(\tau_n)}(T_n)\,\mathbf 1_{\{\tau_n\le t<\tau_{n+1}\}},\qquad i=0,1.$$

The integrated jump process is defined, in the form of a compound Poisson process, by the integral

$$\int_0^t T_i(s;h_0,h_1)\,\mathrm dN_i(s)=\sum_{n=1}^{N_i(t)}h_{\varepsilon_i(\tau_n)}(T_n),\qquad i=0,1.\tag{2.4}$$

The amplitude of a jump depends on the time spent by the particle at the current state.

Finally, the generalised integrated jump-telegraph process is the sum of the integrated telegraph process defined by (2.2)-(2.3) and the jump component defined by (2.4):

$$X_i(t)=\int_0^t T_i(s;c_0,c_1)\,\mathrm ds+\int_0^t T_i(s;h_0,h_1)\,\mathrm dN_i(s),\qquad t\ge0,\ i=0,1.\tag{2.5}$$

It describes a particle which moves, alternating the velocity regimes at the random times $\tau_n$, and starting from the origin in the velocity regime $i$. Each velocity reversal is accompanied by a jump of random amplitude; $X_i(t)$ is the particle's current position.
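The construction (2.1)-(2.5) can be sketched in code. The simulator below is only a minimal illustration, not an algorithm from the paper: the holding-time sampler and the velocity and jump functions are hypothetical placeholders supplied by the caller, and we adopt one consistent reading of the damping mechanism, namely that the regime active after a switch uses the previous sojourn duration, while the jump closing a sojourn of length $T$ in state $s$ has amplitude $h_s(T)$.

```python
import random

def integrate(g, a, b, n=200):
    """Midpoint rule for the velocity integral l_i(T; a, b)."""
    dx = (b - a) / n
    return sum(g(a + (k + 0.5) * dx) for k in range(n)) * dx

def simulate_jump_telegraph(i0, t_max, sample_T, c, h, T0=1.0, rng=None):
    """One realisation of X_{i0}(t_max) in the spirit of (2.5).

    sample_T(state, rng) draws a holding time with density f_state;
    c(state, T_prev, u)  is the velocity u time units after a switch,
                         T_prev being the previous sojourn duration;
    h(state, T)          is the jump amplitude closing a sojourn of
                         length T in `state`;
    T0                   is the (assumed given) sojourn duration that
                         ended at time 0.
    """
    rng = rng or random.Random()
    state, t, x, T_prev = i0, 0.0, 0.0, T0
    while True:
        T = sample_T(state, rng)                 # current holding time
        if t + T > t_max:                        # no switch before t_max
            return x + integrate(lambda u: c(state, T_prev, u), 0.0, t_max - t)
        x += integrate(lambda u: c(state, T_prev, u), 0.0, T)  # drift part
        x += h(state, T)                         # damped jump at the switch
        t, T_prev, state = t + T, T, 1 - state   # alternate the regime
```

With constant velocities and zero jumps the path degenerates into straight-line motion, which gives a quick sanity check of the bookkeeping.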

Conditioning on the first velocity reversal, notice that

$$\begin{aligned}X_0(t)&\stackrel{D}{=}l_0(T_0;t)\,\mathbf 1_{\{\tau_1>t\}}+\big[l_0(T_0;\tau_1)+h_0(\tau_1)+X_1(t-\tau_1)\big]\mathbf 1_{\{\tau_1\le t\}},\\ X_1(t)&\stackrel{D}{=}l_1(T_0;t)\,\mathbf 1_{\{\tau_1>t\}}+\big[l_1(T_0;\tau_1)+h_1(\tau_1)+X_0(t-\tau_1)\big]\mathbf 1_{\{\tau_1\le t\}}.\end{aligned}\tag{2.6}$$

Here $\stackrel{D}{=}$ denotes equality in distribution. In each of the two equalities the first term represents the movement without velocity reversals; the second one is the sum of three terms: the path till the first reversal, the jump value, and the movement initiated after the first reversal.

The distribution of $X_i(t)$ is separated into singular and absolutely continuous parts.

The singular part of the distribution corresponds to the movement without any velocity reversals; let $P_i^{(0)}$ be the respective (defective) distribution for the fixed initial state $i$: for any Borel set $A$ we set

$$P_i^{(0)}(A):=\mathbb P\{X_i(t)\in A,\ N_i(t)=0\},\qquad i=0,1.$$

We denote the corresponding expectation by $\mathbb E_i^{(0)}$. On the space of (continuous) test functions $\varphi$ consider the linear functional (generalised function) $\langle p_i(\cdot,t;0),\varphi\rangle$. It is easy to see that

$$\mathbb E_i^{(0)}\{\varphi(X_i(t))\}=\int_{-\infty}^{\infty}\varphi(y)\,P_i^{(0)}(\mathrm dy)=\bar F_i(t)\int_0^{\infty}\varphi\big(l_i(s;t)\big)\,f_{1-i}(s)\,\mathrm ds=:\langle p_i(\cdot,t;0),\varphi\rangle.$$

The generalised function

$$p_i(x,t;0)=\bar F_i(t)\int_0^{\infty}\delta_{l_i(s;t)}(x)\,f_{1-i}(s)\,\mathrm ds=\bar F_i(t)\int_0^{\infty}\delta_0\big(x-l_i(s;t)\big)\,f_{1-i}(s)\,\mathrm ds\tag{2.7}$$

can be viewed as the distribution “density”. Here $\delta_a$ is the Dirac measure (of unit mass) at the point $a$.

The absolutely continuous part of the distribution of is characterised by the densities

$$p_i(x,t;n)=\mathbb P\{X_i(t)\in\mathrm dx,\ N_i(t)=n\}/\mathrm dx,\qquad i=0,1,\ n\ge1.$$

The sum

$$p_i(x,t)=\sum_{n=1}^{\infty}p_i(x,t;n)$$

corresponds to the absolutely continuous part of the distribution of $X_i(t)$.

Conditioning on the first velocity reversal, similarly to (2.6) we obtain the following equations,

$$\begin{aligned}p_0(x,t;n)&=\int_0^{\infty}f_1(\tau)\,\mathrm d\tau\int_0^t p_1\big(x-l_0(\tau;s)-h_0(s),\,t-s;\,n-1\big)f_0(s)\,\mathrm ds,\\ p_1(x,t;n)&=\int_0^{\infty}f_0(\tau)\,\mathrm d\tau\int_0^t p_0\big(x-l_1(\tau;s)-h_1(s),\,t-s;\,n-1\big)f_1(s)\,\mathrm ds\end{aligned}\tag{2.8}$$

(the inner integrals are understood in the sense of the theory of generalised functions). Summing over $n$ in (2.8) we get the system of integral equations for the (complete) distribution densities,

$$\begin{aligned}p_0(x,t)&=p_0(x,t;0)+\int_0^{\infty}f_1(\tau)\,\mathrm d\tau\int_0^t p_1\big(x-l_0(\tau;s)-h_0(s),\,t-s\big)f_0(s)\,\mathrm ds,\\ p_1(x,t)&=p_1(x,t;0)+\int_0^{\infty}f_0(\tau)\,\mathrm d\tau\int_0^t p_0\big(x-l_1(\tau;s)-h_1(s),\,t-s\big)f_1(s)\,\mathrm ds.\end{aligned}\tag{2.9}$$

Here $p_0(x,t;0)$ and $p_1(x,t;0)$ are defined by (2.7).

If the velocities and the jump amplitudes are constant, $c_i(\tau,t)\equiv c_i$ and $h_i(t)\equiv h_i$, and the interarrival times are exponentially distributed with parameters $\lambda_0,\lambda_1$, equations (2.8) and (2.9) can be solved explicitly using the following notation:

$$\xi=\xi(x,t):=\frac{x-c_1t}{c_0-c_1}\qquad\text{and}\qquad t-\xi=\frac{c_0t-x}{c_0-c_1}.$$

Notice that $0<\xi<t$, if $c_1t<x<c_0t$ (say, with $c_0>c_1$). Define the functions $q_0(x,t;n)$, $q_1(x,t;n)$: for $c_1t<x<c_0t$,

$$q_0(x,t;2n)=\frac{\lambda_0^n\lambda_1^n}{(n-1)!\,n!}\,\xi^n(t-\xi)^{n-1},\qquad q_1(x,t;2n)=\frac{\lambda_0^n\lambda_1^n}{(n-1)!\,n!}\,\xi^{n-1}(t-\xi)^{n},\qquad n\ge1,\tag{2.10}$$

and

$$q_0(x,t;2n+1)=\frac{\lambda_0^{n+1}\lambda_1^n}{(n!)^2}\,\xi^n(t-\xi)^{n},\qquad q_1(x,t;2n+1)=\frac{\lambda_0^{n}\lambda_1^{n+1}}{(n!)^2}\,\xi^n(t-\xi)^{n},\qquad n\ge0.\tag{2.11}$$

Denote $\theta(x,t):=\mathrm e^{-\lambda_0\xi(x,t)-\lambda_1(t-\xi(x,t))}\,\mathbf 1_{\{c_1t<x<c_0t\}}$.

Equations (2.8) have the following solution:

$$\begin{aligned}p_i(x,t;0)&=\mathrm e^{-\lambda_it}\,\delta(x-c_it),\\ p_i(x,t;n)&=q_i(x-j_{i,n},\,t;\,n)\,\theta(x-j_{i,n},\,t),\qquad n\ge1,\ i=0,1,\end{aligned}\tag{2.12}$$

where the displacements $j_{i,n}$ are defined as the sums of alternating jump amplitudes, $j_{i,n}=\sum_{k=1}^{n}h_i^{(k)}$, where $h_i^{(k)}=h_i$ if $k$ is odd, and $h_i^{(k)}=h_{1-i}$ if $k$ is even.

Summing over $n$ we obtain the solution of (2.9):

$$\begin{aligned}p_i(x,t)=\ &\mathrm e^{-\lambda_it}\,\delta_0(x-c_it)\\ &+\frac{1}{c_0-c_1}\Bigg[\lambda_i\,\theta(x-h_i,t)\,I_0\!\left(\frac{2\sqrt{\lambda_0\lambda_1(c_0t-x+h_i)(x-h_i-c_1t)}}{c_0-c_1}\right)\\ &\qquad+\sqrt{\lambda_0\lambda_1}\,\theta(x,t)\left(\frac{x-c_1t}{c_0t-x}\right)^{\frac12-i}I_1\!\left(\frac{2\sqrt{\lambda_0\lambda_1(c_0t-x)(x-c_1t)}}{c_0-c_1}\right)\Bigg],\end{aligned}\tag{2.13}$$

where $I_0$ and $I_1$ are the modified Bessel functions.

See the proof of (2.10)-(2.13) in Ratanov (2007).

## 3 Generalised jump-telegraph processes: moments

Using (2.9), the equations for the expectations can also be derived. Let $\mu_i(t):=\mathbb E\{X_i(t)\}$ and $\bar l_i(t):=\mathbb E\{l_i(T_0;t)\}=\int_0^\infty l_i(\tau;t)f_{1-i}(\tau)\,\mathrm d\tau$, $i=0,1$. Equations (2.9) lead to

$$\mu_i(t)=\bar F_i(t)\,\bar l_i(t)+\int_0^t\big(\bar l_i(s)+h_i(s)+\mathbb E\{X_{1-i}(t-s)\}\big)f_i(s)\,\mathrm ds,\qquad i=0,1.$$

Therefore the expectations satisfy equations of Volterra type:

$$\begin{aligned}\mu_0(t)&=a_0(t)+\int_0^t\mu_1(t-s)f_0(s)\,\mathrm ds,\\ \mu_1(t)&=a_1(t)+\int_0^t\mu_0(t-s)f_1(s)\,\mathrm ds,\end{aligned}\tag{3.1}$$

where

$$a_i(t):=\bar F_i(t)\,\bar l_i(t)+\int_0^t\big(\bar l_i(s)+h_i(s)\big)f_i(s)\,\mathrm ds,\qquad i=0,1.$$

Integrating by parts in the latter integral we have

$$\int_0^t\bar l_i(s)f_i(s)\,\mathrm ds=-\bar F_i(t)\,\bar l_i(t)+\int_0^t\bar c_i(s)\bar F_i(s)\,\mathrm ds,$$

which gives the following simplification for the functions $a_i$:

$$a_i(t)=\int_0^t\big(\bar c_i(s)\bar F_i(s)+h_i(s)f_i(s)\big)\,\mathrm ds,\qquad i=0,1.\tag{3.2}$$

Here we denote $\bar c_i(s):=\mathbb E\{c_i(T_0,s)\}=\int_0^\infty c_i(\tau,s)f_{1-i}(\tau)\,\mathrm d\tau$.

Equations for the variances $\sigma_i(t):=\mathrm{Var}\{X_i(t)\}$ can be derived similarly:

$$\begin{aligned}\sigma_0(t)&=b_0(t)+\int_0^t\sigma_1(t-s)f_0(s)\,\mathrm ds,\\ \sigma_1(t)&=b_1(t)+\int_0^t\sigma_0(t-s)f_1(s)\,\mathrm ds,\end{aligned}\tag{3.3}$$

where

$$b_i(t):=\bar F_i(t)\big(\bar l_i(t)-\mu_i(t)\big)^2+\int_0^t\big(\bar l_i(s)+h_i(s)+\mu_{1-i}(t-s)-\mu_i(t)\big)^2f_i(s)\,\mathrm ds,\qquad i=0,1.$$

Generalising (3.1)-(3.3), we have the following result.

###### Theorem 3.1.

Let $g=g(x)$ be a locally bounded measurable function. Assume that

$$\int_0^\infty f_{1-i}(\tau)\,\big|g\big(x+l_i(\tau;t)\big)\big|\,\mathrm d\tau<\infty,\qquad i=0,1.\tag{3.4}$$

Then the expectations

$$u_0(x,t)=\mathbb E\{g(x+X_0(t))\},\qquad u_1(x,t)=\mathbb E\{g(x+X_1(t))\}$$

exist, and they satisfy the system

$$u_0(x,t)=G_0(x,t)+\int_0^\infty\!\!\int_0^t u_1\big(x+l_0(\tau;s)+h_0(s),\,t-s\big)f_1(\tau)f_0(s)\,\mathrm d\tau\,\mathrm ds,\tag{3.5}$$
$$u_1(x,t)=G_1(x,t)+\int_0^\infty\!\!\int_0^t u_0\big(x+l_1(\tau;s)+h_1(s),\,t-s\big)f_0(\tau)f_1(s)\,\mathrm d\tau\,\mathrm ds,\tag{3.6}$$

where $G_i(x,t):=\bar F_i(t)\int_0^\infty g\big(x+l_i(\tau;t)\big)f_{1-i}(\tau)\,\mathrm d\tau$, $i=0,1$.

###### Proof.

Equations (3.5)-(3.6) follow by conditioning on the first velocity reversal, see (2.6). ∎

The equations for the moments can be derived by applying Theorem 3.1 with $g(x)=x^N$, see (3.5)-(3.6).

###### Corollary 3.0.

Let $\mu_i^{(N)}(t):=\mathbb E\{X_i(t)^N\}$, $N\ge1$, $i=0,1$.

The functions $\mu_i^{(N)}$ satisfy the equations

$$\begin{aligned}\mu_0^{(N)}(t)&=\bar F_0(t)\int_0^\infty f_1(\tau)\,l_0(\tau;t)^N\,\mathrm d\tau+\sum_{k=0}^{N}\binom{N}{k}\int_0^t g_{0,N-k}(s)\,\mu_1^{(k)}(t-s)\,f_0(s)\,\mathrm ds,\\ \mu_1^{(N)}(t)&=\bar F_1(t)\int_0^\infty f_0(\tau)\,l_1(\tau;t)^N\,\mathrm d\tau+\sum_{k=0}^{N}\binom{N}{k}\int_0^t g_{1,N-k}(s)\,\mu_0^{(k)}(t-s)\,f_1(s)\,\mathrm ds.\end{aligned}\tag{3.7}$$

Here $g_{i,0}(t)\equiv1$ and

$$g_{0,m}(t)=\int_0^\infty f_1(\tau)\big(l_0(\tau;t)+h_0(t)\big)^m\,\mathrm d\tau,\qquad g_{1,m}(t)=\int_0^\infty f_0(\tau)\big(l_1(\tau;t)+h_1(t)\big)^m\,\mathrm d\tau,\qquad m\ge1.$$

In general, systems (3.1), (3.3) and (3.7) take the form of recursive Volterra equations of the second kind:

$$\begin{aligned}\mu_0^{(N)}(t)&=a_0^{(N)}(t)+\int_0^t\mu_1^{(N)}(t-s)f_0(s)\,\mathrm ds,\\ \mu_1^{(N)}(t)&=a_1^{(N)}(t)+\int_0^t\mu_0^{(N)}(t-s)f_1(s)\,\mathrm ds,\end{aligned}\tag{3.8}$$

where the functions $a_i^{(N)}$ are generated by the preceding moments,

$$\begin{aligned}a_0^{(N)}(t)&:=\bar F_0(t)\int_0^\infty l_0(\tau;t)^Nf_1(\tau)\,\mathrm d\tau+\sum_{k=0}^{N-1}\binom{N}{k}\int_0^t g_{0,N-k}(s)\,\mu_1^{(k)}(t-s)\,f_0(s)\,\mathrm ds,\\ a_1^{(N)}(t)&:=\bar F_1(t)\int_0^\infty l_1(\tau;t)^Nf_0(\tau)\,\mathrm d\tau+\sum_{k=0}^{N-1}\binom{N}{k}\int_0^t g_{1,N-k}(s)\,\mu_0^{(k)}(t-s)\,f_1(s)\,\mathrm ds.\end{aligned}\tag{3.9}$$

Here $\mu_i^{(0)}(t)\equiv1$, $i=0,1$.

System (3.8) possesses a unique solution, see e.g. Linz (1985). Under appropriate assumptions the solution can be found explicitly. Consider the following example. Let the distributions of the interarrival times be exponential:

$$f_i(t)=\lambda_i\exp(-\lambda_it),\qquad t\ge0,\ i=0,1.$$

In this particular case system (3.8) is solved by

$$\mu(t)=a(t)+\int_0^t\big(I+\varphi(t-s)\Lambda\big)L\,a(s)\,\mathrm ds,\tag{3.10}$$

where $\varphi(t)=\dfrac{1-\mathrm e^{-2\lambda t}}{2\lambda}$ and $2\lambda=\lambda_0+\lambda_1$. Here we use the matrix notations $\mu(t)=\big(\mu_0^{(N)}(t),\,\mu_1^{(N)}(t)\big)^{\mathrm T}$, $a(t)=\big(a_0^{(N)}(t),\,a_1^{(N)}(t)\big)^{\mathrm T}$,

$$L=\begin{pmatrix}0&\lambda_0\\ \lambda_1&0\end{pmatrix}\qquad\text{and}\qquad\Lambda=\begin{pmatrix}-\lambda_0&\lambda_0\\ \lambda_1&-\lambda_1\end{pmatrix}.$$

To check this, notice that system (3.8) is equivalent to the ODE with zero initial condition:

$$\frac{\mathrm d\mu}{\mathrm dt}=\Lambda\mu(t)+\phi(t),\qquad\mu(0)=0,$$

where $\phi(t)=a'(t)+\operatorname{diag}(\lambda_0,\lambda_1)\,a(t)$. We get this equation by differentiating in (3.8) with a subsequent integration by parts. Clearly, the unique solution is

$$\mu(t)=\int_0^t\mathrm e^{(t-s)\Lambda}\phi(s)\,\mathrm ds.\tag{3.11}$$

Integrating by parts in (3.11) we obtain

$$\mu(t)=a(t)+\int_0^t\mathrm e^{(t-s)\Lambda}L\,a(s)\,\mathrm ds.$$

Now, the desired representation (3.10) follows from

$$\exp\{t\Lambda\}=I+\varphi(t)\Lambda=\frac{1}{2\lambda}\begin{pmatrix}\lambda_1+\lambda_0\mathrm e^{-2\lambda t}&\lambda_0(1-\mathrm e^{-2\lambda t})\\ \lambda_1(1-\mathrm e^{-2\lambda t})&\lambda_0+\lambda_1\mathrm e^{-2\lambda t}\end{pmatrix}.\tag{3.12}$$
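Identity (3.12) is easy to check numerically. The sketch below (pure Python, for illustration only) compares a plain Taylor-series matrix exponential of $t\Lambda$ with the closed form $I+\varphi(t)\Lambda$, where $\varphi(t)=(1-\mathrm e^{-2\lambda t})/(2\lambda)$ and $2\lambda=\lambda_0+\lambda_1$.

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=60):
    """exp(A) for a 2x2 matrix via the Taylor series (fine for small ||A||)."""
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul([[v / n for v in row] for row in term], A)  # A^n / n!
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

def exp_t_lambda(l0, l1, t):
    """Right-hand side of (3.12): exp(t*Lambda) = I + phi(t)*Lambda."""
    lam = 0.5 * (l0 + l1)
    e = math.exp(-2.0 * lam * t)
    return [[(l1 + l0 * e) / (2 * lam), l0 * (1.0 - e) / (2 * lam)],
            [l1 * (1.0 - e) / (2 * lam), (l0 + l1 * e) / (2 * lam)]]
```

With $\Lambda=\begin{pmatrix}-\lambda_0&\lambda_0\\ \lambda_1&-\lambda_1\end{pmatrix}$ as above, the two computations agree to machine precision for moderate $t$.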

## 4 Martingales

Let $X_0=X_0(t)$ and $X_1=X_1(t)$ be the (integrated) jump-telegraph processes defined by (2.5) on the probability space $(\Omega,\mathcal F,\mathbb P)$. Let $\mathbb E$ denote the expectation, and let the coefficients $a_i=a_i(t)$ be defined by (3.2).

Notice that by (3.1) $\mu_0(t)\equiv0$ and $\mu_1(t)\equiv0$ if and only if $a_0(t)\equiv0$ and $a_1(t)\equiv0$, which is equivalent to the following set of identities, see (3.2):

$$\bar c_i(t)\,\bar F_i(t)+h_i(t)\,f_i(t)\equiv0,\qquad t\ge0,\ i=0,1.\tag{4.1}$$

Let $\{\mathcal F_t\}_{t\ge0}$ be the filtration generated by the underlying processes $\varepsilon_i$.

###### Theorem 4.1.

The integrated jump-telegraph processes $X_0$ and $X_1$ defined by (2.5) are $\mathcal F_t$-martingales if and only if (4.1) holds.

###### Proof.

The proof can be done by computing the conditional expectation $\mathbb E\{X_i(t_2)-X_i(t_1)\mid\mathcal F_{t_1}\}$ for $t_2>t_1$. Indeed,

$$\mathbb E\{X_i(t_2)-X_i(t_1)\mid\mathcal F_{t_1}\}=\mathbb E\Bigg\{\int_0^{t_2-t_1}T_{\varepsilon_i(t_1+s)}(t_1+s)\,\mathrm ds+\sum_{n=1}^{N_i(t_2)-N_i(t_1)}h_{\varepsilon_i(\tau_{n+N_i(t_1)})}\big(T_{n+N_i(t_1)}\big)\ \Bigg|\ \mathcal F_{t_1}\Bigg\}.$$

According to the Markov property applied to the processes $\varepsilon_i$ and $N_i$ we have

$$\begin{aligned}\varepsilon_i(t_1+s)&\stackrel{D}{=}\tilde\varepsilon_{\varepsilon_i(t_1)}(s),&\qquad N_i(t_1+s)&\stackrel{D}{=}N_i(t_1)+\tilde N_{\varepsilon_i(t_1)}(s),&\qquad s\ge0,\\ \tau_{n+N_i(t_1)}&\stackrel{D}{=}\tilde\tau_n,&\qquad T_{n+N_i(t_1)}&\stackrel{D}{=}\tilde T_n,&\qquad n\ge1,\end{aligned}$$

where $\tilde\varepsilon$, $\tilde N$, $\tilde\tau_n$ and $\tilde T_n$ are copies of $\varepsilon$, $N$, $\tau_n$ and $T_n$ respectively, independent of $\mathcal F_{t_1}$. Therefore,

Here $\tilde X_{\varepsilon_i(t_1)}$ denotes the integrated jump-telegraph process which is initiated from the state $\varepsilon_i(t_1)$ and is based on $\tilde\varepsilon$ and $\tilde N$, so that $\mathbb E\{X_i(t_2)-X_i(t_1)\mid\mathcal F_{t_1}\}=\mathbb E\{\tilde X_{\varepsilon_i(t_1)}(t_2-t_1)\}$. The latter expectation is equal to zero if and only if (4.1) holds.∎

###### Remark 4.0.

Notice that if (4.1) holds, then the direction of each jump must be opposite to the sign of the mean velocity $\bar c_i$.

###### Corollary 4.0.

If the jump-telegraph processes $X_0$ and $X_1$ defined by (2.5) are martingales, then

$$\frac{\bar c_i(t)}{h_i(t)}<0\qquad\forall\,t\ge0,\tag{4.2}$$
$$\int_0^\infty\frac{\bar c_i(s)}{h_i(s)}\,\mathrm ds=-\infty,\qquad i=0,1.\tag{4.3}$$

Moreover, $X_0$ and $X_1$ are martingales if and only if the distribution densities of the interarrival times satisfy the following integral relations:

$$f_i(t)=-\frac{\bar c_i(t)}{h_i(t)}\,\exp\left\{\int_0^t\frac{\bar c_i(s)}{h_i(s)}\,\mathrm ds\right\},\qquad i=0,1.\tag{4.4}$$
###### Proof.

Inequality (4.2) follows directly from (4.1). Identities (4.1) are equivalent to

$$\frac{\bar c_i(t)}{h_i(t)}=-\frac{f_i(t)}{\bar F_i(t)}\equiv\big(\ln\bar F_i(t)\big)',\qquad i=0,1.\tag{4.5}$$

Therefore

$$\bar F_i(t)=\exp\left\{\int_0^t\frac{\bar c_i(s)}{h_i(s)}\,\mathrm ds\right\},\qquad t\ge0,\ i=0,1.$$

The latter equality is equivalent to (4.4).

Notice that, by definition, $\bar F_i(t)\to0$ as $t\to\infty$. Hence, condition (4.3) is fulfilled. ∎

In this framework, various particular cases of the martingale distributions and the corresponding distributions of the interarrival times can be obtained by applying Corollary 3. Consider the following examples.

Exponential distribution. Assume that the functions $\bar c_i$ and $h_i$ are proportional:

$$\frac{\bar c_i(t)}{h_i(t)}\equiv-\lambda_i,\qquad\lambda_i>0,\ i=0,1.\tag{4.6}$$

Relations (4.4) mean that the integrated jump-telegraph process is a martingale if the distributions of the interarrival times are exponential, $f_i(t)=\lambda_i\mathrm e^{-\lambda_it}$.

Identities (4.6) can be written in detail as follows. The (observable) parameters of the model, i.e. the velocity regimes $c_i$ and the jump regimes $h_i$, satisfy the equations

$$\lambda_1\int_0^\infty\mathrm e^{-\lambda_1\tau}c_0(\tau,t)\,\mathrm d\tau=-\lambda_0h_0(t),\qquad\lambda_0\int_0^\infty\mathrm e^{-\lambda_0\tau}c_1(\tau,t)\,\mathrm d\tau=-\lambda_1h_1(t)$$

with some positive constants $\lambda_0$ and $\lambda_1$. These equations help to compute the switching intensities $\lambda_0$ and $\lambda_1$ from the (observable) proportions between the velocity and jump values. On the other hand, if the mean velocity regimes $\bar c_0$ and $\bar c_1$ are given, from these equations we can conclude that small jumps occur with high frequency, while big jumps are rare. The direction of a jump must be opposite to the velocity sign, see also Remark 2.
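In the simplest constant-parameter case, $c_i(\tau,t)\equiv c_i$ (so that $\bar c_i\equiv c_i$) and $h_i(t)\equiv h_i$, condition (4.6) reduces to $h_i=-c_i/\lambda_i$. The following Monte Carlo sketch, with arbitrarily chosen parameters, illustrates the resulting martingale property $\mathbb E\{X_0(t)\}=0$; it is a sanity check, not part of the paper.

```python
import random

def sample_path(c, h, lam, t_max, rng):
    """One value X_0(t_max) of the constant-parameter jump-telegraph
    process: drift c[s] between switches, jump h[s] on leaving state s,
    exponential holding times with rates lam[s]."""
    s, t, x = 0, 0.0, 0.0
    while True:
        T = rng.expovariate(lam[s])
        if t + T > t_max:
            return x + c[s] * (t_max - t)
        x += c[s] * T + h[s]
        t += T
        s = 1 - s

lam = (2.0, 0.5)
c = (1.0, -0.4)
h = (-c[0] / lam[0], -c[1] / lam[1])   # martingale condition h_i = -c_i/lam_i
rng = random.Random(7)
n_paths = 20000
est = sum(sample_path(c, h, lam, 3.0, rng) for _ in range(n_paths)) / n_paths
# est stays near zero, within Monte Carlo error
```

Note how the example matches the remark above: a large intensity $\lambda_i$ forces small jumps $h_i=-c_i/\lambda_i$ occurring with high frequency, each jump directed against the velocity sign.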

###### Proposition 4.0.

In the framework of (2.5) we assume that the Markov flow of switching times has interarrival intervals which are exponentially distributed with alternating constant intensities $\lambda_0,\lambda_1>0$. Let the velocity regimes $c_i$ and the jump amplitudes $h_i$ be given, and let them be proportional as in (4.6).

The martingale measure for $X_i$ exists and it is unique.

###### Proof.

According to the Girsanov theorem, see Ratanov (2007), we apply a Radon-Nikodym derivative of the form

 dQdP=Et{X∗}=exp{∫