# Splitting and time reversal for Markov additive processes

## Abstract.

We consider a Markov additive process with a finite phase space and study its path decompositions at the times of extrema, first passage and last exit. For these three families of times we establish splitting conditional on the phase, and provide various relations between the laws of post- and pre-splitting processes using time reversal. These results offer valuable insight into behaviour of the process, and while being structurally similar to the Lévy process case, they demonstrate various new features. As an application we formulate the Wiener-Hopf factorization, where time is counted in each phase separately and killing of the process is phase dependent. Assuming no positive jumps, we find concise formulas for these factors, and also characterize the time of last exit from the negative half-line using three different approaches, which further demonstrates applicability of path decomposition theory.

###### Key words and phrases:
Lévy process, path decomposition, time reversal, Wiener-Hopf factorization, last exit, conditioned to stay positive
Financial support by the Swiss National Science Foundation Project 200020 143889 is gratefully acknowledged.

## 1. Introduction

In the theory of Markov processes, path decomposition or splitting time theorems have a long history, see e.g. [21, 25] and references therein. For a nice Markov process these results concern random times τ such that the post-τ process has a Markovian structure and is independent of the events before τ given the state at τ (and sometimes also τ itself). The main examples of such times are stopping times, the time of the infimum and the last exit time from a given set, all of which fall into the category of so-called randomized coterminal times [21]. The first example is rather trivial, and the latter two have the same structure, i.e. the time of the infimum can be seen as the last exit time from a random interval. Thus it is not surprising that they lead to the same law for the post-τ process: the law of the process conditioned to stay above a certain level.

For a Lévy process the theory of path decomposition becomes especially compelling and rich with various relations, see [22, 10] and also [4, Ch. VII.4] for the one-sided case. It provides valuable insight into the behaviour of the process and proves to be useful in various computations. In particular, the celebrated Wiener-Hopf factorization is just an application of splitting at the infimum, see [22, 11] or [4, Lem. VI.6]. The aim of this work is to develop the corresponding theory for Markov additive processes (MAPs), which are processes with stationary and independent increments given the current phase. These processes often appear in finance, queueing and risk theories [2], and recently were found to be instrumental in the analysis of real self-similar Markov processes, see [8]. Even though MAPs closely resemble Lévy processes, their increments are not exchangeable, and hence one may expect certain difficulties as well as a priori non-obvious differences from the Lévy case.

### 1.1. Overview of the results

The results of this paper are formulated for defective MAPs, where general phase-dependent killing is allowed. This is important in applications since it permits tracking time spent in different phases by way of joint Laplace transforms, see Section 2.1. In the following we provide a brief overview of the main results and point out the main differences/difficulties as compared to the case of a Lévy process, see also Figure 1.

• Splitting and conditioning:

• Section 3.1 presents splitting at the infimum and defines the law of the post-infimum process given the phase at the infimum. There may be a phase switch at the infimum, see Figure 2, and it is crucial to condition on the correct phase. Moreover, it is convenient to choose this phase in a slightly different way for the infimum and the supremum.

• Section 3.2 shows that the post-infimum law corresponds to the original process conditioned to stay positive. For certain phases this process starts from a positive level and possibly a different phase. This initial distribution is addressed in Proposition 3.3.

• Section 3.3 shows that splitting also holds at the last exit time from an interval given that the exit is continuous, and then the post-exit process has the law of the process conditioned to stay positive. Thus the post-infimum and post-exit processes have the same law given that they start in the same phase (continuous exit can be realized only in some phases).

• Time reversal and equivalence of laws:

• Section 4.1 discusses time reversal at the life time of the process. In general, the life time depends on the process and hence the time reversal identity is different from the classical formula concerning reversal at an independent time.

• Section 4.2 expresses the law of the process reversed at the infimum via the dual process conditioned to stay non-positive, see Theorem 4.1. Importantly, there are new non-trivial constants in this identity.

• Section 4.3 shows that the process reversed at the last exit is closely related to the dual process considered up to the last exit (on the events of continuous exit), see Theorem 4.2. There is another set of constants which have a clear probabilistic interpretation.

• Section 4.4 completes the list of reversal identities by showing that reversal at the first passage (when continuous) results in the dual process conditioned to stay positive and considered up to the last exit, see Theorem 4.3. Again there is a need for appropriate scaling.

• Wiener-Hopf factorization: The statement of the factorization and its relation to the results in [17, 18, 8] can be found in Section 5. These works consider MAPs killed at independent exponential times, which would be natural for Lévy processes, but is not fully satisfactory for MAPs. Moreover, we allow for counting time in each phase separately. Finally, we provide some necessary corrections to the factorizations in the literature.

• Spectrally negative MAPs: Section 6 further develops the above results in the important case when the process does not have positive jumps. In particular, the Wiener-Hopf factors are given by compact explicit formulas, which are then compared to the results in [9, 20, 13].

• Application: Section 6.3 studies the times spent in different phases up to the last exit time from the negative half-line (together with the phase at that time). Our theory allows for three different approaches: (i) is based on splitting at the last exit time and law equivalence of the post-exit and post-infimum processes, (ii) is based on the formula for time reversal at the last exit time, and (iii) is based on splitting at the infimum and the relation between the post-infimum process considered up to its last exit and the dual process reversed at its first passage, see Theorem 4.3.

The above list shows that splitting and time reversal results for MAPs closely resemble the analogous results for Lévy processes, which should help in understanding the present theory and its applications. Importantly, there are non-trivial differences in each of these results, and we have made an attempt to clearly stress these points. Some parts of the proofs are rather standard and so we keep them very brief. Many results, however, cannot be obtained by a simple generalization of the arguments in the Lévy case. This is especially true when it comes to time reversal, see e.g. Proposition 4.1. It should also be noted that Nagasawa's reversal theory for Markov processes [23] does not provide easy-to-use tools in our case, and instead we employ probabilistic arguments.

The subject of this work is rather technical, and our main goal is to present the results in a clear and comprehensible way while treating the details with care. Thus most of the results are supplemented with comments and remarks. Moreover, it is often convenient to draw a picture such as Figure 1.

This picture illustrates splitting at the infimum and the last exit from an interval in a continuous way. Both post-splitting processes lead to the same law corresponding to the process conditioned to stay positive, see the grey axes. Time reversal at these times is depicted by the small black axes; consider turning the page upside down.

## 2. Preliminaries and notation

Consider a Markov additive process (X, J), where X is the additive component taking values in ℝ and J is the phase taking values in a finite set E. The defining property of a MAP reads: for all t ≥ 0, given {J_t = i} the shifted process (X_{t+s} − X_t, J_{t+s}), s ≥ 0, is independent of the past and has the same law as the original process given {J_0 = i}. Moreover, we allow for killed processes by adding an absorbing state †, and write ζ to denote the life time of the process. We require the above defining property to hold, but we do not add † to E. Observe that J is a (possibly transient) Markov chain on E and assume that it is irreducible. Finally, any MAP can be seen as a Markov-modulated Lévy process: the additive component evolves as some Lévy process X^i while J is in phase i, and has a jump distributed as some U_{ij} at a phase switch from i to j, where all these components X^i and U_{ij} are independent. We refer to [2] and [13] for basic facts about MAPs.
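To make the Markov-modulated picture concrete, the following is a minimal simulation sketch (all parameter values are assumptions chosen for illustration, not taken from the paper); it uses two phases driving Brownian motions with phase-dependent drift and volatility, and omits the phase-switch jumps for simplicity.

```python
# Toy simulation of a two-phase MAP (Markov-modulated Brownian motion).
# mu, sigma, Q below are assumed illustration values.
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([1.0, -2.0])       # phase-dependent drifts
sigma = np.array([0.5, 1.0])     # phase-dependent volatilities
Q = np.array([[-1.0, 1.0],       # transition rate matrix of the phase J
              [ 2.0, -2.0]])

def simulate_map(T, dt=1e-3):
    """Euler-type simulation of (X, J) on [0, T] on a grid of step dt."""
    n = int(T / dt)
    X = np.zeros(n + 1)
    J = np.zeros(n + 1, dtype=int)
    for k in range(n):
        i = J[k]
        # phase switch with probability ~ -Q[i, i] * dt over one step
        J[k + 1] = (1 - i) if rng.random() < -Q[i, i] * dt else i
        X[k + 1] = X[k] + mu[i] * dt + sigma[i] * np.sqrt(dt) * rng.standard_normal()
    return X, J

X, J = simulate_map(10.0)
```

Conditionally on the phase path, X behaves as the corresponding Lévy process, which is exactly the defining property above restricted to this toy model.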

It is common to write E[Y; J_t] for the matrix with elements E_i[Y; J_t = j], i, j ∈ E. It is well known that there exists a matrix-valued function Ψ(α) such that E[e^{αX_t}; J_t] = e^{Ψ(α)t} for all t ≥ 0 and at least all purely imaginary α. It is noted that Ψ(α) is the analogue of the Laplace exponent of a Lévy process, and that it characterizes the law of the corresponding MAP. Finally, we define the first passage times:

 τ_x := inf{t ≥ 0 : X_t > x},  τ⁻_x := inf{t > 0 : X_t < −x}.
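A toy discrete-path analogue of these first passage times (my own illustration): on a sampled path, the first passage is simply the first grid index at which the path crosses the level.

```python
# Discrete analogue of tau_x and tau^-_x on a sampled path.
import numpy as np

def tau(X, x):
    """First grid index at which the sampled path exceeds x (inf if never)."""
    hit = X > x
    return int(np.argmax(hit)) if hit.any() else np.inf

def tau_minus(X, x):
    """First grid index at which the sampled path drops below -x (inf if never)."""
    hit = X < -x
    return int(np.argmax(hit)) if hit.any() else np.inf

X = np.array([0.0, 0.5, -0.2, 1.2, -1.5])
print(tau(X, 1.0))        # 3
print(tau_minus(X, 1.0))  # 4
```

Here `np.argmax` on a boolean array returns the index of the first `True`, which is why the guard `hit.any()` is needed for levels never reached.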

### 2.1. Defective processes and time spent in different phases

Note that Ψ(0) is the transition rate matrix of J restricted to E, and hence q := −Ψ(0)𝟙 is the vector of transition rates into †; these are called the killing rates. Throughout this work we assume that our MAP is killed/defective:

 Assumption: q ≠ 0, i.e. ζ < ∞ a.s.

Note that ζ has a phase-type distribution (one under each P_i) with phase generator Ψ(0) and exit vector q, see e.g. [2, Ch. III.4]. Clearly, the life time ζ depends on the process unless q_i = q for all i, in which case it is independent and exponentially distributed. On top of the implicit killing with rates q we sometimes need additional killing with rates β; we write E_β, P_β and so on, meaning that the underlying killing rates are q + β. Finally, some results of this paper (but not all) can be generalized to non-defective processes by taking the limit q_i ↓ 0 for all i.
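The phase-type claim can be checked numerically; the sketch below (with an assumed sub-generator, not a matrix from the paper) evaluates the survival function P_i(ζ > t) = (e^{Ψ(0)t} 𝟙)_i.

```python
# Phase-type life time: P_i(zeta > t) = (expm(Q t) @ 1)_i with Q = Psi(0).
# Q below is an assumed toy sub-generator.
import numpy as np

Q = np.array([[-3.0, 1.0],   # sub-generator: row sums are -q_i < 0
              [ 0.5, -1.5]])
q = -Q.sum(axis=1)           # killing rates q = (2.0, 1.0)

def expm(A):
    """Matrix exponential via scaling-and-squaring of a power series."""
    s = max(0, int(np.ceil(np.log2(max(1.0, np.linalg.norm(A, 1))))))
    B = A / 2.0 ** s
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 30):
        term = term @ B / k
        out += term
    for _ in range(s):
        out = out @ out
    return out

def survival(t):
    """Vector of P_i(zeta > t), one entry per initial phase i."""
    return expm(Q * t) @ np.ones(2)

print(survival(0.0))  # [1. 1.]
```

If all killing rates were equal to a common q, the survival function would reduce to e^{−qt} in every phase, in line with the independent exponential case mentioned above.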

For any random time T let T_j be the time spent in phase j up to T, and write T = (T_j)_{j∈E} for the corresponding vector. Note that by the properties of an exponential random variable we have

 E[e^{αX_t − ⟨β, t⟩}; J_t] = E_β[e^{αX_t}; J_t] = e^{Ψ_β(α)t}.

Moreover, the joint transform of the position and the phase just before the killing time, together with the times spent in every phase, is given by the following expression.

 (1) E[e^{αX_{ζ−} − ⟨β, ζ⟩}; J_{ζ−}] = ∫_0^∞ E[e^{αX_t − ⟨β, t⟩}; J_t] Δ_q dt = ∫_0^∞ e^{Ψ_β(α)t} dt Δ_q = −(Ψ_β(α))^{−1} Δ_q,

where Δ_q denotes the diagonal matrix with the vector q on the diagonal. Here we used the fact that the set of jump epochs of the process has 0 Lebesgue measure, and that all the eigenvalues of Ψ_β(α) have negative real parts for purely imaginary α. It is noted that (1) should be compared to the Lévy identity E[e^{αX_{e_q}}] = q/(q − ψ(α)), where e_q is an independent exponential random variable with rate q, see the left side of (6.17) in [19].
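Formula (1) is easy to sanity-check numerically at α = 0, β = 0, where it states that −(Ψ(0))^{−1} Δ_q is the matrix of probabilities P_i(J_{ζ−} = j); the values below are assumed toy inputs.

```python
# Check of (1) at alpha = beta = 0: -(Psi(0))^{-1} Delta_q is a stochastic
# matrix and matches the Riemann sum of expm(Psi(0) t) Delta_q dt.
import numpy as np

Q = np.array([[-3.0, 1.0],    # Psi(0): killed phase generator (assumed)
              [ 0.5, -1.5]])
Dq = np.diag(-Q.sum(axis=1))  # Delta_q, diagonal matrix of killing rates

closed_form = -np.linalg.inv(Q) @ Dq

def expm(A):
    """Matrix exponential via scaling-and-squaring of a power series."""
    s = max(0, int(np.ceil(np.log2(max(1.0, np.linalg.norm(A, 1))))))
    B = A / 2.0 ** s
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 30):
        term = term @ B / k
        out += term
    for _ in range(s):
        out = out @ out
    return out

dt, n = 0.002, 12500          # integrate up to t = 25
step = expm(Q * dt)           # e^{Q dt}, so e^{Q k dt} = step^k
P, numeric = np.eye(2), np.zeros((2, 2))
for _ in range(n):
    numeric += P @ Dq * dt    # left Riemann sum of e^{Q t} Delta_q dt
    P = P @ step

print(np.allclose(closed_form.sum(axis=1), 1.0))     # True
print(np.allclose(closed_form, numeric, atol=1e-2))  # True
```

The row sums being one reflects that J_{ζ−} takes some value in E a.s.; the integral representation is exactly the middle expression of (1).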

### 2.2. Partition of phases

Recall that for a Lévy process the point 0 is said to be regular for an open or closed set B if the process hits B immediately (a 0–1 event), and irregular otherwise. The conditions for regularity can be found in [19, Thm. 6.5]. In the following it will be convenient to partition the phases into three groups:

• the first group contains phases i such that 0 is regular for (0,∞) and irregular for (−∞,0) under X^i; the prime example: X^i has bounded variation and positive drift.

• the second group contains phases i such that 0 is irregular for (0,∞) and regular for (−∞,0) under X^i; the prime example: X^i has bounded variation and negative drift; X^i can also be a compound Poisson process.

• the third group contains phases i such that 0 is regular for both (0,∞) and (−∞,0) under X^i; the prime example: X^i has unbounded variation.

It is noted that if X^i has bounded variation and zero drift then i may belong to any of the three groups.

## 3. Splitting and conditioning

### 3.1. Splitting at extrema

Define the overall infimum and its (last) time, and the overall supremum and its (first) time:

 X̲ = inf_{t∈[0,ζ)} X_t,  G̲ = sup{t ∈ [0,ζ) : X_t ∧ X_{t−} = X̲},  X̄ = sup_{t∈[0,ζ)} X_t,  Ḡ = inf{t ∈ [0,ζ) : X_t ∨ X_{t−} = X̄}.
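On a sampled path these four quantities have an elementary discrete analogue (a toy illustration, not the paper's construction):

```python
# Discrete analogue: overall infimum with its last time, and overall
# supremum with its first time, on a sampled path.
import numpy as np

X = np.array([0.0, -1.0, 0.5, -1.0, 2.0, 2.0, 1.0])

X_inf = X.min()
G_inf = int(np.flatnonzero(X == X_inf).max())  # last time of the infimum
X_sup = X.max()
G_sup = int(np.flatnonzero(X == X_sup).min())  # first time of the supremum

print(X_inf, G_inf, X_sup, G_sup)  # -1.0 3 2.0 4
```

The path above attains its infimum twice and its supremum twice, which is the discrete caricature of the compound Poisson situation discussed next, where first and last extrema genuinely differ.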

Note that the distinction between first and last extrema is only necessary if for some phase i the underlying Lévy process X^i is a compound Poisson process; in this case one may similarly consider the first infimum and last supremum times.

#### Splitting at the infimum

###### Proposition 3.1.

The following splitting result holds true.

• On the event {X_{G̲} = X̲} the process splits at G̲: given J_{G̲} the processes (X_t, J_t), t ∈ [0, G̲], and (X_{G̲+t} − X̲, J_{G̲+t}), t ≥ 0, are independent.

• On the event {X_{G̲} > X̲} the process splits at G̲−: given J_{G̲−} the processes (X_t, J_t), t ∈ [0, G̲), and (X_{G̲+t} − X̲, J_{G̲+t}), t ≥ 0, are independent.

In the case of a Lévy process the event {X_{G̲} = X̲} has probability either 1 or 0, where the first case corresponds to two of the three regularity types of Section 2.2 and the second to the remaining type. Hence the splitting result can be formulated for these two cases separately, see [4, Lem. VI.6]. This issue is more complicated for MAPs. Letting

 (2) J̲ = J_{G̲} 𝟙{X_{G̲} = X̲} + J_{G̲−} 𝟙{X_{G̲} > X̲}

be the phase in which the infimum was achieved, we have the following important observation.

###### Lemma 3.1.

The following dichotomy holds with probability one:

• If then and ,

• If then and .

###### Proof.

Consider separately the cases when the phase is switched and not switched at , see also Remark 3.1. Note also that does not imply that according to the first scenario in Remark 3.1. ∎

It is clear from Lemma 3.1 and Proposition 3.1 that J̲ determines the law of the post-infimum process, and there is no need to parameterize it by the initial phase; we denote this law by P^↑_j.

###### Definition 3.1.

Let (X, J) under P^↑_j be distributed as the post-infimum process given J̲ = j.

If P_i(J̲ = j) = 0 for some j and all i then P^↑_j is left undefined. Now Proposition 3.1 states that given J̲ the process splits at G̲ or at G̲− according to the events {X_{G̲} = X̲} and {X_{G̲} > X̲}, and the post-infimum process has the law P^↑_{J̲}.

###### Remark 3.1.

It is important to understand the cases when J may switch at G̲, say from phase i to phase j. For this to occur either X^i must be able to achieve its infimum at an exponential time or X^j must be able to achieve its infimum at 0. According to [22, Prop. 2.1] there are three (non-exclusive) cases:

• and for some ;

• and for some ;

• and for some ,

see Figure 2. Note that in the last scenario one may alternatively split at the phase-switch epoch; consider the stopping times of phase switches. Finally, the time of killing can be interpreted as the time of a phase switch from some phase i to †, and if then necessarily .

###### Proof of Proposition 3.1.

Splitting at the infimum for MAPs can be proven along the lines of [22] or [11], and so we keep the proof rather brief.

First, consider the epochs T_n at which X is at its running minimum while J is in phase j. These epochs are stopping times, and hence we may apply classical splitting at each T_n. Thus for any bounded functionals f, g we have

 E_i( f{(X_t, J_t), t∈[0,G̲]} g{(X_{G̲+t} − X̲, J_{G̲+t}), t≥0}; J̲ = j )
 = Σ_n E_i( f{(X_t, J_t), t∈[0,T_n]}; T_n < ∞ ) × E_j( g{(X_t, J_t), t≥0}; X_t > 0, t > 0 )
 = Σ_n E_i( f{(X_t, J_t), t∈[0,T_n]}; T_n < ∞, X_{T_n+t} − X_{T_n} > 0, t > 0 ) × E_j( g{(X_t, J_t), t≥0} | X_t > 0, t > 0 )
 = E_i( f{(X_t, J_t), t∈[0,G̲]}; J̲ = j ) E_j( g{(X_t, J_t), t≥0} | X_t > 0, t > 0 ),

which establishes splitting at G̲.

Next, consider the case which implies that there is no phase switch at G̲ and no jump of X at G̲. In this case we approximate G̲ by an array of stopping times. The standard way is to use the inverse local time at the infimum, see e.g. [4, Lem. VI.6] for the Lévy case and [18, 8] discussing the same procedure for MAPs. More precisely, let L be a continuous local time process of X at its infimum. On the corresponding event we apply classical splitting at the inverse local time, a stopping time, similarly to the previous paragraph. Summing up over these stopping times and taking the limit yields splitting at G̲.

For the remaining case one may use the same argument as in the previous paragraph. Nevertheless, we present an alternative approach providing better insight into the post-infimum process. Consider the epochs of jumps of X larger than ϵ, which constitute a sequence of stopping times. Note that a given epoch coincides with G̲ if the jump occurs at the running infimum and the process stays above this level afterwards. But for each of these epochs we can apply classical splitting right before the jump, since the jump is also independent of the past given the corresponding phase. These ideas result in the following identity

 E_i( f{(X_t, J_t), t∈[0,G̲)} g{(X_{G̲+t} − X̲, J_{G̲+t}), t≥0}; X_{G̲} − X_{G̲−} > ϵ, J_{G̲−} = j )
 = E_i( f{(X_t, J_t), t∈[0,G̲)}; X_{G̲} − X_{G̲−} > ϵ, J_{G̲−} = j ) × E_j( g{(X^{(ϵ)}_t, J^{(ϵ)}_t), t≥0} | X̲^{(ϵ)} > 0 ),

where (X^{(ϵ)}, J^{(ϵ)}) has the law of the original process started from an independent (level, phase) pair: the first jump larger than ϵ while in phase j, and the phase right after this jump; see Section 3.2.1 for further comments about the distribution of this pair. Finally, let ϵ ↓ 0 to deduce splitting at G̲−. ∎

#### Splitting at the supremum

The result about splitting at the supremum can be obtained by considering the process −X, and adapting the statements and the proofs slightly where a compound Poisson component X^i is concerned. Note also that when discussing splitting at the supremum we place the absorbing state at −∞. Define

 (3) J̄ := 𝟙{X_{Ḡ−} = X̄} J_{Ḡ−} + 𝟙{X_{Ḡ−} < X̄} J_{Ḡ},

where we take X_{Ḡ−} in the indicators as compared to X_{G̲} in the definition (2) of J̲; by convention X_{0−} = X_0. This choice turns out to be more convenient when it comes to time reversal at the infimum, see Section 4.2. In fact, the only difference appears when there is a phase switch at Ḡ but no jump of X, see the third scenario of Remark 3.1: now we take the phase right before the switch and not right after.

Next, we note that on the set of probability one it holds that

• if then and ,

• if then and .

Finally, given J̄ the process splits at Ḡ− or at Ḡ according to the events {X_{Ḡ−} = X̄} and {X_{Ḡ−} < X̄}, and the law of the post-supremum process is denoted by P^↓_{J̄}.

### 3.2. Process conditioned to stay positive

For x > 0 and i ∈ E define P^↑_{x,i} as the law of (X, J) under P_{x,i} conditioned on τ⁻_0 = ∞, i.e. on X_t ≥ 0 for all t. Clearly, under this law (X, J) is a time-homogeneous Markov process with the semigroup

 (4) P^↑_{x,i}(X_t ∈ dy, J_t = j) = P_{x,i}(X_t ∈ dy, J_t = j | τ⁻_0 = ∞) = P_{x,i}(X_t ∈ dy, J_t = j, t < τ⁻_0) P_{y,j}(τ⁻_0 = ∞) / P_{x,i}(τ⁻_0 = ∞)

for x, y > 0. Recall that we only consider killed processes, and hence P_{x,i}(τ⁻_0 = ∞) > 0.

###### Remark 3.2.

There is a large body of literature devoted to Lévy processes conditioned to stay positive, where the original process drifts to −∞. This case is more complex than ours and may lead to different laws depending on the way the conditioning (on an event of probability 0) is implemented, see [12].

The following result is a straightforward adaptation of [22, Prop. 4.1]. It explains the notation chosen for the law of the post-infimum process. The corresponding result for Lévy processes can also be found in [3, 5].

###### Proposition 3.2.

The post-infimum process, i.e. (X, J) under P^↑_j, is a Markov process with the transition semigroup defined in (4).

It is noted that if the post-infimum process starts at 0 (which happens for certain phases j, cf. Section 2.2) then it leaves 0 immediately, and hence the semigroup determines the law P^↑_j. Moreover, it can be shown that P^↑_{x,j} converges on the Skorokhod space of paths to P^↑_j as x ↓ 0, see [5, Thm. 2] for a similar statement concerning Lévy processes. This result follows from splitting at the infimum and a fact obtained from the corresponding theory for Lévy processes. Note also that in this case the semigroup (4) can be extended to include x = 0, because P_i(τ⁻_0 = ∞) > 0. Then

 P^↑_i(·) = P_i(· | τ⁻_0 = ∞) = P_i(· | X̲ ≥ 0),

which also follows from the proof of Proposition 3.1.

Finally, for the remaining phases the corresponding result is less neat. Compared to the Lévy case there is an additional problem that P^↑_{x,j} does not necessarily converge to P^↑_j as x ↓ 0. Instead we use an alternative approach described in the following section.

#### On the initial distribution

Let us focus on the law P^↑_i for the remaining phases i, which corresponds to the post-infimum process starting from a positive level at time 0. The proof of Proposition 3.1 implies that

 E^↑_i g{(X_t, J_t), t≥0} = lim_{ϵ↓0} E_i( g{(X^{(ϵ)}_t, J^{(ϵ)}_t), t≥0} | X̲^{(ϵ)} > 0 ),

where (X^{(ϵ)}, J^{(ϵ)}) has the law of the original process started from an independent (level, phase) pair. Notice that this initial pair is determined by competing independent exponential clocks:

• for a jump within phase i the rate is ν_i((ϵ,∞)) and the level law is the normalized restriction of ν_i to (ϵ,∞),

• for a switch to a phase j ≠ i the rate is Ψ_ij(0) and the level law is that of U_ij,

• for killing the rate is q_i and the process is absorbed in †.

It is convenient to define

 (5) U_ij(dx) := 𝟙{i=j} ν_i(dx) + 𝟙{i≠j} Ψ_ij(0) P(U_ij ∈ dx),

which we call the jump measure associated to a MAP. Now the above observations lead to the following result.

###### Proposition 3.3.

For such phases i it holds that

 P^↑_i(X_0 ∈ dx; J_0 = j) = (c_i + q_i)^{−1} v_j(x) U_ij(dx),  P^↑_i(J_0 = †) = q_i/(c_i + q_i),

where

 v_j(x) := P_j(X̲ > −x),  c_i := ∫_0^∞ U_{i·}(dy) v(y) ∈ [0, ∞).
###### Proof.

Observe that

 P^↑_i(X_0 > x; J_0 = j) = lim_{ϵ↓0} P_i(X^{(ϵ)}_0 > x; X̲^{(ϵ)} > 0, J^{(ϵ)}_0 = j) / P_i(X̲^{(ϵ)} > 0)
 = lim_{ϵ↓0} E_i( v_j(X^{(ϵ)}_0); X^{(ϵ)}_0 > x, J^{(ϵ)}_0 = j ) / ( Σ_j E_i( v_j(X^{(ϵ)}_0); J^{(ϵ)}_0 = j ) + P_i(J^{(ϵ)}_0 = †) ),

where we conditioned on the initial level and phase, and normalized by the total rate of the competing clocks. Hence we have

 (6) P^↑_i(X_0 > x; J_0 = j) = ∫_x^∞ v_j(y) U_ij(dy) / ( ∫_0^∞ U_{i·}(dy) v(y) + q_i ),

where the first integral is clearly finite. If then it must be that , but the probabilistic reasoning shows that this is only possible when , i.e. no jumps up in phase  are possible, showing that in fact . ∎

Let us mention that we elaborate on these results a bit further in Section 6.4 in the case of spectrally positive processes.

#### Process conditioned to stay non-positive

Similarly, the law P^↓ can be seen as the law of the process conditioned to stay non-positive. There are two features of P^↓ which are different from those of P^↑. Firstly, under P^↓_j the process starts from the level 0 for some phases j, but it may also do so in the exceptional scenario, see (3). Secondly, if X^j is a compound Poisson process then under P^↓_j the process stays at the level 0 before jumping down.

### 3.3. Splitting at the last exit from an interval

Define the last exit time from  by

 σ_a := sup{t ≥ 0 : X_t ≤ a}

and note that there are two trivial cases: if for all , and if .

The following result is well-known in the theory of Markov processes, see [21] and references therein.

###### Proposition 3.4.

Conditionally on X_{σ_a} and the phase at σ_a, the post-σ_a process is a Markov process with the semigroup (4) (shifted by the level a), and it is independent of the pre-σ_a process.

This result can be proven in a way similar to the proof of Proposition 3.1, that is, we make use of classical splitting at certain stopping times.

The main problem in relating the laws of the post-infimum and the post-σ_a processes lies in the initial distribution of the latter. This problem is avoided on the event {X_{σ_a} = a}, i.e. when the post-σ_a process starts at 0 relative to the level a.

###### Corollary 3.1.

On the event {X_{σ_a} = a, J_{σ_a} = j} the post-σ_a process, shifted down by a, has the law P^↑_j, assuming that this event has positive probability.

###### Proof.

It is easy to see that X cannot jump onto the level a at time σ_a, because otherwise the underlying Lévy process is either a compound Poisson process or a process which visits a level at discrete times (if ever) and goes immediately below this level; in either case the event of interest has 0 probability. Now right-continuity of paths and convergence of P^↑_{x,j} towards P^↑_j as x ↓ 0 implies the result. ∎

Note that the results of Proposition 3.4 and Corollary 3.1 also hold for the process under with the same law for the post- process, see also [4, Cor. VII.19] for the case of Lévy processes with no positive jumps.

Finally, we identify the cases when {X_{σ_a} = a} has positive probability. It is said that a Lévy process admits continuous passage upward (creeps upward) if it crosses some level x > 0 continuously with positive probability (and then every level), assuming that the process is not a compound Poisson process. Sufficient and necessary conditions for this are given in [19, Thm. 7.11].

###### Lemma 3.2.

Suppose X^j is not monotone and is not a compound Poisson process. Then the event {X_{σ_a} = a, J_{σ_a} = j} has positive probability if and only if X^j admits continuous passage upward. Moreover, on this event there is no jump and no phase switch at σ_a.

###### Proof.

Note that the position of X must be diffuse at each phase switch, and hence it cannot be at the level a at these times. Moreover, X cannot jump onto the level a. This proves the second statement. Now the first statement follows from a similar result for Lévy processes, see [22, Prop. 5.1], which is based on a time reversal argument.

Note that we exclude the case of a monotone X^j, because one may take X^j to be a process of bounded variation, zero drift and appropriately chosen jumps. Such a process does not admit continuous passage upward, but the above probability would be positive. A similar problem may arise for a compound Poisson X^j whose jump measure has atoms. ∎

## 4. Time reversal

Throughout this section it will be convenient to work with the canonical probability space, where the sample space is a set of càdlàg paths equipped with the Skorokhod topology and the corresponding Borel σ-field. Let us define the killing operator k_T and the reversal operator r_T. Namely,

 k_T(ω)_t := 𝟙{t < T} ω_t + 𝟙{t ≥ T} †,  r_T(ω)_t := (X_T − X_{(T−t)−}, J_{(T−t)−}), t ∈ [0, T],

where X and J denote the coordinates of ω, i.e. ω = (X, J). If T = ∞ then we agree that r_T produces a path identically equal to †. Note that r_T inverts both time and space, and that the additive component is reversed in the 'additive' sense, see Figure 1. In addition, it leads to a càdlàg path which may start at some non-zero level if there is a jump of X at T; sometimes we will consider r_{T−} in order to ignore this jump, i.e. when reversing at the life time ζ. Next, for an arbitrary non-negative measurable functional F we define

 F_T := F ∘ k_T,  F̂_T := F ∘ r_T,

i.e. we apply F to the original process considered up to time T and to the process reversed at T, respectively. Finally, note that in general a 'MAP killed at T' is not a (defective) MAP unless T has a very particular law, see Section 2.1.
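The two operators have a simple discrete-path analogue; the sketch below (my own toy version, not the paper's canonical-space construction) reverses a sampled path in the 'additive' sense, so that applying the reversal twice recovers the path.

```python
# Toy discrete analogues of the killing operator k_T and the reversal
# operator r_T acting on a sampled additive path.
import numpy as np

def kill(X, m):
    """Keep the sampled path up to step m (discrete killing operator)."""
    return X[:m + 1]

def reverse(X, m):
    """Reverse at step m in the additive sense: t -> X[m] - X[m - t]."""
    return X[m] - X[m::-1]

X = np.array([0.0, 1.0, 3.0, 2.0])
print(reverse(X, 3))  # [ 0. -1.  1.  2.]
```

Note that the increments of the reversed path are the original increments taken in reverse order, which is the discrete counterpart of reversing 'both time and space' described above.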

Assume for a moment that q = 0 and let π be the stationary distribution of J. Consider now a deterministic time t and note that neither X nor J jumps at t a.s. It is well known (and easy to see) that if J is in stationarity then the process time-reversed at t is also a MAP (with some law P̂) killed at t:

 E(F̂_t, J_0 = i, J_t = j) = Ê(F_t, J_t = i, J_0 = j).

Therefore, we have the following basic time reversal identity

 (7) πiEi(^Ft,Jt=j)=πj^Ej(Ft,Jt=i),

because J is in stationarity under both P and P̂. It is easy to see that this identity still holds true if the same killing rates are used to kill the process under P and P̂; notice that killing amounts to multiplying the functionals F_t and F̂_t by e^{−⟨q, t⟩}, which clearly preserves the correspondence, because t is the vector of total times spent in different phases and so it is invariant under time reversal. Thus we again assume implicit killing with rates q in the following, and determine π by the non-killed phase generator Ψ(0) + Δ_q.

In matrix notation we have

 E[F̂_t; J_t] = Δ_π^{−1} Ê[F_t; J_t]^T Δ_π.

Applying this with F̂_t = F_t = e^{αX_t} (the total increment is invariant under reversal) we get

 e^{Ψ(α)t} = E[e^{αX_t}; J_t] = Δ_π^{−1} Ê[e^{αX_t}; J_t]^T Δ_π = Δ_π^{−1} e^{Ψ̂(α)^T t} Δ_π,

which implies a well-known relation

 Ψ̂(α) = Δ_π^{−1} Ψ(α)^T Δ_π.
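This relation can be illustrated numerically at α = 0 (toy conservative generator, assumed values): the reversed exponent Δ_π^{−1} Ψ(0)^T Δ_π is again a generator, with the same stationary distribution π.

```python
# Check at alpha = 0: hat-Q = Dpi^{-1} Q^T Dpi is a generator and pi is
# stationary for it as well. Q below is an assumed toy generator.
import numpy as np

Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.4, -0.9, 0.5],
              [ 0.2, 0.8, -1.0]])

# stationary distribution: solve pi @ Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

Dpi = np.diag(pi)
Qhat = np.linalg.inv(Dpi) @ Q.T @ Dpi

print(np.allclose(Qhat.sum(axis=1), 0.0))  # True: hat-Q is a generator
print(np.allclose(pi @ Qhat, 0.0))         # True: same stationary law
```

For a reversible chain Qhat coincides with Q; in general it differs, which is exactly the difference between P and P̂ on the phase level.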

In what follows we develop identities similar to (7) for time reversal at the life time ζ, at the infimum, at the last exit time σ_a and at the first passage time τ_x, assuming the exit and passage are continuous. It must be noted that there is a well-developed theory of time reversal for Markov processes, see [23, 26] and also [24] for an illustration. This theory of Nagasawa builds on potential theory, and thus it requires conversions between potential densities (resolvents) and the corresponding Markov processes, which makes it hard to apply in our case even when the resulting Markov process can be guessed by some other means. Instead, we follow a direct probabilistic path.

### 4.1. Time reversal at the life time

Importantly, one can also time-revert the process at the killing time ζ even though ζ depends on the process. In this case the relation becomes slightly different:

 (8) π_i q_i E_i(F̂_{ζ−}; J_{ζ−} = j) = π_j q_j Ê_j(F_{ζ−}; J_{ζ−} = i),

which readily follows by integrating over the life time and noting that

 π_i q_i ∫_0^∞ E_i(F̂_t; J_t = j) q_j dt = π_j q_j ∫_0^∞ Ê_j(F_t; J_t = i) q_i dt

according to (7). Note that , and above we chose the first one for the reason of symmetry only. Note also, that killing at an independent exponential time means that and hence (8) simplifies to (7) with , which is not true for general killing.

### 4.2. Time reversal at the infimum

Similarly to the case of Lévy processes we may express the law of a MAP time-reversed at its infimum and supremum through the law of this MAP conditioned to stay non-positive and positive, respectively. These identities follow from time reversal at the killing time together with splitting at the extrema. Importantly, there is a new term in these identities as compared to the Lévy case, see the definition of the constants c_j below.

Recall that splitting at the infimum holds either at G̲ or at G̲− depending on the scenario, see also Figure 2. This difficulty is resolved using the following definition:

 F̲̂ := F̂_{G̲} 𝟙{X_{G̲} = X̲} + F̂_{G̲−} 𝟙{X_{G̲} > X̲},

which is to be compared with the definition (2) of J̲.

###### Theorem 4.1.

It holds that

 π_i q_i E_i(F̲̂; J̲ = j) = c_j Ê^↓_j(F_{ζ−}; J_{ζ−} = i),

where the constants c_j are identified in the proof below.

###### Proof.

We may assume that the entries of q are strictly positive, because the other case follows by taking limits. Time reversal at the killing time (8) and splitting at the infimum, and also at the supremum for the reversed process, yield

 π_i q_i E_i(F̲̂; J̲ = j) P^↑_j(J_{ζ−} = k) = π_k q_k P̂_k(J̄ = j) Ê^↓_j(F_{ζ−}; J_{ζ−} = i).

Here we have used the fact that J̲ computed for the reversed process coincides with J̄, see the corresponding definitions (2) and (3) and check the third scenario of Figure 2. In particular, we also have

 πiqiPi(J––=j)^P↓j(Jζ−