
# Tail asymptotics of the stationary distribution of a two dimensional reflecting random walk with unbounded upward jumps

## Abstract

We consider a two dimensional reflecting random walk on the nonnegative integer quadrant. This random walk is assumed to be skip free toward the boundary of the quadrant, but may have unbounded jumps in the opposite direction, which are referred to as upward jumps. We are interested in the tail asymptotic behavior of its stationary distribution, provided it exists. Assuming the upward jump size distributions have light tails, we completely determine the rough tail asymptotics of the marginal stationary distributions in all directions. This generalizes the corresponding results for the skip free reflecting random walk in [25]. We exemplify these results for a two node network with exogenous batch arrivals.

AMS classification: Primary: 60K25; Secondary: 60F10, 60G50

Keywords: two dimensional reflecting random walk, stationary distribution, tail asymptotics, linear convex order, unbounded jumps, two node queueing network, batch arrival

## 1 Introduction

We consider a two dimensional reflecting random walk on the nonnegative integer quadrant. This random walk is assumed to be skip free toward the boundary of the quadrant but may have unbounded jumps in the opposite direction, which we call upward jumps. Here, the boundary is composed of the origin and two half coordinate axes, which are called boundary faces. The transitions on each boundary face are assumed to be homogeneous. This reflecting process is referred to as a double M/G/1-type process. It naturally arises in queueing networks with two dimensional compound Poisson arrivals and exponentially distributed service times, where customers may simultaneously arrive in batches at different nodes. It is also of interest in its own right as a multidimensional reflecting random walk on the nonnegative integer quadrant.

A stationary distribution is one of the most important characteristics of this reflecting random walk in applications. However, it is known to be very hard to derive it analytically except for some special cases. Thus, theoretical interest has been directed to its tail asymptotic behavior. Borovkov and Mogul’skiĭ [5] made great contributions to this problem. They proposed the so called partially homogeneous chain, which includes the present random walk as a special case, and studied the tail asymptotic behavior of its stationary distribution. Their results are very general, but have the following limitations.

• (a) The results are not very explicit. That is, it is hard to see how the modeling primitives, that is, the parameters which describe the model, influence the tail asymptotics.

• (b) The tail asymptotics are only obtained for small rectangles, and no marginal distribution is considered. Furthermore, some extra technical conditions which seem unnecessary are assumed.

For the skip free two-dimensional reflecting random walk, these issues have been addressed by Miyazawa [25], and the tail asymptotics of the marginal distributions have recently been obtained in [20], which considers the stationary equation using generating or moment generating functions and applies classical results from complex analysis (see, e.g., [26] and references therein). This classical approach has been renewed in combination with other methods in recent years (see, e.g., [19, 24]). However, its application is limited to skip free processes for technical reasons (see Remark 2.3).

In this paper, our primary interest is to see how the tail asymptotics are changed when upward jumps may be unbounded. For this, we consider the asymptotics of the stationary probability of the tail set:

 {(i1,i2)∈Z2+;c1i1+c2i2≥x}

as x goes to infinity for each c, where Z+ is the set of all nonnegative integers, and a vector c is called a directional vector if c ≥ 0 and c ≠ 0. We are interested in its decay rate, where α is said to be the decay rate of a function p if

 α = −limx→∞ (1/x) log p(x).

We aim to derive the tail decay rates of the marginal distributions in all directions. These decay rates will be obtained geometrically from curves determined by the modeling primitives, that is, the one step transitions of the reflecting random walk. In this way, we answer our question and simultaneously resolve the issues (a) and (b).
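As a quick numerical sanity check on the decay rate definition (a toy illustration of ours, independent of the model), note that a constant prefactor in an exponential tail only contributes a vanishing log(3)/x correction to −(1/x) log p(x):

```python
import math

def empirical_decay_rate(p, x):
    """Finite-level estimate -log p(x) / x of the decay rate."""
    return -math.log(p(x)) / x

# Purely illustrative tail p(x) = 3 e^{-0.5 x}: its decay rate is 0.5.
alpha = 0.5
p = lambda x: 3.0 * math.exp(-alpha * x)

print(empirical_decay_rate(p, 10.0))    # still biased by log(3)/10
print(empirical_decay_rate(p, 1000.0))  # close to 0.5
```

This is why the decay rate is called a rough asymptotic: it ignores any subexponential prefactor.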

Obviously, if the tail decay rates are positive, then the one step transitions of the random walk must have light tails, that is, they decay exponentially fast. Thus, we assume this light tail condition. Then, the mean drifts of the reflecting random walk in the interior and on the boundary faces are finite. Using these means, Fayolle, Malyshev and Menshikov [12] characterize stability, that is, the existence of the stationary distribution. However, their result is incomplete as they did not consider all cases, which we redress in Lemma 2.1. If the mean drifts vanish in all directions, then it is not hard to see that the stationary distribution does not have a light tail in any direction. We also formally verify this fact. Thus, in addition to the light tail condition for the one step transitions, we assume that not all the mean drifts vanish.

Under these conditions, we completely solve the decay rate problem for the marginal stationary distribution in each direction, provided the stability conditions hold. The decay rate may be considered as a rough asymptotic, and we also study some finer asymptotics, called exact asymptotics. Here, the tail probability is said to have exact asymptotic f for some function f if the ratio of the tail probability at level x to f(x) converges to a positive constant as x goes to infinity. In particular, if f is exponential (or geometric), the tail is said to be exactly exponential (or geometric). We derive some sufficient conditions for the tail asymptotics to be exactly exponential.

The difficulty of the tail asymptotic problem mainly arises from the reflections at the boundary of the quadrant. We have two major boundary faces corresponding to the coordinate axes. The reflections on these faces may or may not influence the tail asymptotics. Foley and McDonald [16] classified them as jitter, branch and cascade cases. We need to consider all of these influences to find the tail asymptotics. The problem becomes even harder because of the unbounded jumps.

To overcome these difficulties, we here take a new approach. Based on the stability conditions, we first derive the convergence domain of the moment generating function of the stationary distribution. For this, we use a stationary inequality, which was recently introduced by the second author [26] (see also [6]), and a lower bound for the large deviations of the stationary distribution due to [5]. Once the domain is obtained, it is expected that the decay rate would be obtained through the boundary of the domain. However, this is not immediate. We need sharper lower bounds for the large deviations in coordinate directions. For this, we use a method based on Markov additive processes (e.g., see [28]).

We apply two of our main results, Theorems 3.2 and 3.3, to a batch arrival network with two nodes to see how the modeling primitives influence the tail asymptotics. We use the linear convex order for this. We also show that the stochastic upper bound for the stationary distribution obtained by Miyazawa and Taylor [27] is not tight unless one of the nodes has no batch arrivals.

This paper consists of six sections. In Section 2, we formally introduce the reflecting random walk and discuss its basic properties, including stability and the stationary equation. In Section 3, the main results on the domain and tail asymptotics, Theorems 3.1, 3.2 and 3.3, are presented. We also discuss linear convex order. These theorems are proved in Section 4. We apply them to a two node network with exogenous batch arrivals in Section 5, and give some remarks on extensions and further work in Section 6.

## 2 Double M/G/1-type process

Denote the state space by S = Z2+. Recall that Z+ is the set of all nonnegative integers. Similarly, Z denotes the set of all integers. Define the boundary faces of S as

 S0={(0,0)},S1={(i,0)∈Z2+;i≥1},S2={(0,i)∈Z2+;i≥1}.

Let ∂S = S0 ∪ S1 ∪ S2 and S+ = S ∖ ∂S. We refer to ∂S and S+ as the boundary and interior of S, respectively.

We first consider a random walk on Z2; its one step increments are independent and identically distributed, and we denote a random vector subject to their common distribution by X(+). From this random walk, we generate a reflecting process {L(ℓ)} in such a way that it is a discrete time Markov chain with state space S and transition probabilities given by

 P(L(ℓ+1) = j | L(ℓ) = i) = P(X(+) = j − i) for j ∈ S, i ∈ S+;  P(X(k) = j − i) for j ∈ S, i ∈ Sk, k = 0, 1, 2,

where X(k) is a random vector taking values in Z2. For this process to be non defective, it is assumed that X(+) ≥ (−1,−1), X(1) ≥ (−1,0), X(2) ≥ (0,−1) and X(0) ≥ (0,0), respectively, with probability one. Here, inequalities of vectors are meant component-wise.

Thus, {L(ℓ)} is reflected at the boundary ∂S and skip free in the direction toward ∂S. In particular, its entries L1(ℓ) and L2(ℓ) have transitions similar to the queue length process of the M/G/1 queue. So, we refer to {L(ℓ)} as a double M/G/1-type process.

Let R be the set of all real numbers. Similar to Z+, let R+ be the set of all nonnegative real numbers. We denote the moment generating functions of X(+) and X(k) by γ and γk, that is, for θ ∈ R2,

 γ(θ)=E(e⟨θ,X(+)⟩), γk(θ)=E(e⟨θ,X(k)⟩), k=0,1,2,

where ⟨a, b⟩ is the inner product of vectors a and b. As usual, R2 is considered to be a metric space with the Euclidean norm. In this paper, we assume that

• (i) The random walk is irreducible and aperiodic.

• (ii) The reflecting process {L(ℓ)} is irreducible and aperiodic.

• (iii) For each satisfying or , there exist and such that and for .

• (iv) Not all of the mean drifts vanish; that is, the means of the one step increments in the interior and on the boundary faces are not all zero.

Conditions (i) and (iii) are stronger than what is actually required, but we use them to simplify the arguments. For example, except for the exact asymptotics, (iii) can be weakened to the following condition (see Remark 2.2).

• (iii)' γ(θ) and γk(θ) for k = 1, 2 are finite for some θ > 0.
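To make the objects γ and γk concrete, here is a toy numerical sketch with a hypothetical skip-free interior jump distribution of our own choosing (not the paper's general model):

```python
import math

# Hypothetical interior jump distribution (illustration only): one step
# in each coordinate direction, with a negative mean drift in both.
jumps = {(1, 0): 0.2, (0, 1): 0.2, (-1, 0): 0.3, (0, -1): 0.3}

def gamma(theta):
    """gamma(theta) = E exp(<theta, X(+)>) for the toy distribution."""
    t1, t2 = theta
    return sum(p * math.exp(t1 * v1 + t2 * v2) for (v1, v2), p in jumps.items())

print(gamma((0.0, 0.0)))  # equals 1: the probabilities sum to one
print(gamma((0.3, 0.0)))  # below 1: theta points along the negative drift
```

Since each jump here is bounded, γ(θ) is finite for every θ, so both (iii)'-type finiteness and the light tail condition hold trivially in this toy case.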

In the rest of this section, we discuss three basic topics. First, we consider necessary and sufficient conditions for the stability of the reflecting random walk, that is, the existence of its stationary distribution, and explain why (iv) is assumed. We then formally define rough and exact asymptotics. Finally, we derive the stationary equation in terms of moment generating functions.

### 2.1 Stability condition and tail asymptotics

It is claimed in the book of Fayolle, Malyshev and Menshikov [12] that necessary and sufficient conditions for stability are obtained (see Theorem 3.3.1 of that book). However, the proof of their Theorem 3.3.1 is incomplete because important steps are omitted. A better proof can be found in [10]. Furthermore, some exceptional cases are missing, as we will see. Nevertheless, those necessary and sufficient conditions are valid under minor amendments.

In [12], those necessary and sufficient conditions are considered separately according to whether all the mean drifts are null or not. One may easily guess that null drifts imply that the stationary distribution has a heavy tail in all directions, that is, that the tail decay rates in all directions vanish. We formally prove this fact in Remark 4.1. Thus, we may assume (iv) to study the light tail asymptotics.

We now present the stability conditions of [12] under assumption (iv), together with their geometric interpretations. For this, we introduce some notation. Let, for k = 1, 2,

 mk=E(X(+)k),m(1)k=E(X(1)k),m(2)k=E(X(2)k)

Define vectors

 m=(m1,m2),m(1)=(m(1)1,m(1)2),m(2)=(m(2)1,m(2)2), m(1)⊥=(m(1)2,−m(1)1),m(2)⊥=(−m(2)2,m(2)1).

Note that not all of the vectors m, m(1) and m(2) vanish by condition (iv). Obviously, m(k)⊥ is orthogonal to m(k) for each k = 1, 2. We present the stability conditions of [12] using these vectors below, in which we make some minor corrections for the missing cases.

###### Lemma 2.1 (Corrected Theorem 3.3.1 of [12])

If m ≠ 0, the reflecting random walk has a stationary distribution, that is, it is stable, if and only if one of the following three conditions holds.

• , , and .

• , . In addition to these conditions, is required if .

• , . In addition to these conditions, is required if .

###### Remark 2.1

The additional conditions in (II) and (III) are missing in Theorem 3.3.1 of [12]. To see their necessity, assume the extreme situation in (II). Then the reflecting random walk cannot leave the 2nd coordinate axis, except at the origin, once it hits that axis. On the other hand, the drift along that axis may take any value because it is not constrained by the first three conditions of (II). Hence, the additional condition is necessary for stability. By symmetry, the extra condition is similarly required for (III). It is not hard to see the sufficiency of these conditions by tracing the proofs in [10, 12], so we omit the verification.

We will see that the stability conditions are closely related to the curves γ(θ) = 1 and γk(θ) = 1. So, we introduce the following notation.

 Γ={θ∈R2;γ(θ)<1},∂Γ={θ∈R2;γ(θ)=1}, Γk={θ∈R2;γk(θ)<1},∂Γk={θ∈R2;γk(θ)=1},k=1,2.
###### Remark 2.2

(a) Γ and Γk are convex sets because γ and γk are convex functions. Furthermore, Γ is a bounded set by condition (i). (b) By condition (iii), ∂Γ and ∂Γk are continuous curves. If we replace (iii) by the weaker assumption (iii)', then ∂Γ and ∂Γk may be empty sets. In these cases, we redefine them as the boundaries of Γ and Γk, respectively. This does not change our arguments as long as the solutions of γ(θ) = 1 or γk(θ) = 1 are not used analytically. Thus, we will see that the rough asymptotics are still valid under (iii)', but this is not the case for the exact asymptotics.

Note that m and m(k) are normal to the tangents of ∂Γ and ∂Γk at the origin and point toward the outside of Γ and Γk, respectively (see Figure 1). From this observation, we obtain the following lemma, which gives a geometric interpretation of the stability condition. We prove it in Appendix A.

###### Lemma 2.2

Under conditions (iii) and (iv), the reflecting random walk has the stationary distribution if and only if, for each k = 1, 2, Γ ∩ Γk contains a vector θ with θk > 0. Furthermore, if this is the case, for at least one of k = 1, 2, there exists a θ ∈ Γ ∩ Γk such that θ > 0.

Throughout the paper, we assume stability, that is, one of the stability conditions (I), (II) and (III), in addition to conditions (i)–(iv). We are now ready to formally introduce tail asymptotics for the stationary distribution of the double M/G/1-type process. We denote this stationary distribution by ν. Let L be a random vector subject to the distribution ν, that is,

 ν(i)=P(L=i), i∈S

We are interested in the tail asymptotic behavior of this distribution. For this, we define rough and exact asymptotics. We refer to a vector c as a direction vector if c ≥ 0 and c ≠ 0. For an arbitrary direction vector c, we define αc as

 αc=−limx→∞1xlogP(⟨c,L⟩≥x),

as long as it exists. This is referred to as the decay rate in the direction c. Thus, if the decay rate αc exists, P(⟨c, L⟩ ≥ x) is bounded from below and above by e−(αc+ε)x and e−(αc−ε)x, respectively, for any ε > 0 and sufficiently large x. If there exist a function f and a positive constant b such that

 limx→∞P(⟨c,L⟩≥x)f(x)=b, (2.1)

then P(⟨c, L⟩ ≥ x) is said to have the exact asymptotic f(x). In particular, if f is exponential, that is, f(x) = e−αx for some α > 0, then the asymptotic is called exactly exponential. It is notable that the random variable ⟨c, L⟩ takes only countably many real values. Hence, we must be careful about their periodicity.

###### Definition 2.1

(a) A countable set F of real numbers is said to be δ-arithmetic at infinity for some δ > 0 if δ is the largest number such that, for some x0 ≥ 0, F ∩ [x0, ∞) is a subset of δZ ≡ {δn; n ∈ Z}. On the other hand, F is said to be asymptotically dense at infinity if, for each ε > 0, there is a positive number x0 such that, for all x ≥ x0, |x − y| < ε for some y ∈ F. (b) A random variable taking at most countably many real values is said to be δ-arithmetic for some δ > 0 if δ is the largest positive number such that its values are contained in δZ.

The following fact is an easy consequence of Lemma 2 and Corollary in Section V.4a of [13], but we prove it in Appendix B for completeness.

###### Lemma 2.3

For a directional vector c, let F = {⟨c, i⟩; i ∈ Z2+}. Then, F is asymptotically dense at infinity if and only if neither c1 nor c2 vanishes and c1/c2 is irrational. Otherwise, F is arithmetic at infinity.

Because of this lemma, the x in (2.1) runs over either δ-arithmetic numbers for some δ > 0 or real numbers. In particular, if F is 1-arithmetic at infinity, then we replace x by an integer n. For example, this is used for the asymptotics of P(Lk ≥ n, L3−k = i) for each fixed i.
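The dichotomy in Lemma 2.3 can be seen numerically. In this sketch (our own illustration, with an arbitrary finite grid), we enumerate the values ⟨c, i⟩ and inspect the gaps between consecutive values far from the origin:

```python
import math

# Values <c, i> on a finite grid are either trapped on an arithmetic
# lattice or develop ever smaller gaps as the grid grows.
def values(c, n=40):
    """Sorted distinct values of c1*i + c2*j over the grid {0..n}^2."""
    c1, c2 = c
    return sorted({c1 * i + c2 * j for i in range(n + 1) for j in range(n + 1)})

def min_gap(vals, floor=20.0):
    """Smallest positive gap between consecutive values at least `floor`."""
    tail = [v for v in vals if v >= floor]
    return min(b - a for a, b in zip(tail, tail[1:]) if b > a + 1e-12)

# Rational ratio c1/c2: the values stay 0.5-arithmetic.
print(min_gap(values((1.0, 0.5))))
# Irrational ratio: gaps shrink toward zero (asymptotic density in the limit).
print(min_gap(values((1.0, math.sqrt(2)))))
```

On a finite grid the irrational case can only exhibit small, not arbitrarily small, gaps; enlarging `n` makes the minimum gap as small as desired.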

### 2.2 Moment generating function and stationary equation

There are two typical ways to represent the stationary distribution of the double M/G/1-type process. One is a traditional expression using either generating or moment generating functions. The other is a matrix analytic expression viewing one of the coordinates as a background state. In this paper, we will use both of them because each has its own merits. We first consider the stationary equation using moment generating functions. Since the states are vectors of nonnegative integers, it may be questioned why generating functions are not used instead. This question will be answered at the end of this section.

We denote the moment generating function of the stationary random vector by

 φ(θ)≡E(e⟨θ,L⟩),θ∈R2.

We define a light tail for the stationary distribution according to [26].

###### Definition 2.2

The stationary distribution ν is said to have a light tail in all directions if there is a θ > 0 such that φ(θ) < ∞. Otherwise, it is said to have a heavy tail in some direction.

Define the convergence domain of the moment generating function φ as

 D = the interior of {θ ∈ R2; φ(θ) < ∞}.

Then, we can expect that the tail asymptotics of the stationary distribution are obtained through the boundary of the domain D. Obviously, D is a convex set because φ is a convex function on R2. Let us derive the stationary equation for this φ. Let

 φ+(θ)=E(e⟨θ,L⟩1(L>0)), φk(θk)=E(eθkLk1(Lk≥1,L3−k=0)),k=1,2,

where 1(·) is the indicator function. Then,

 φ(θ)=φ+(θ)+φ1(θ1)+φ2(θ2)+φ0(0),

where φ0(0) = P(L = 0). From this relation and the stationary equation:

 L≃L+X(+)1(L∈S+)+∑k∈{0,1,2}X(k)1(L∈Sk), (2.2)

where ≃ stands for the equality in distribution, and the random vectors on the right hand side are assumed to be independent, we have

 (1−γ(θ))φ+(θ)=∑k∈{1,2}(γk(θ)−1)φk(θk)+(γ0(θ)−1)φ0(0), (2.3)

as long as all the terms are finite. This equation holds at least for θ ≤ 0.

Equation (2.3) is equivalent to the stationary equation of the Markov chain {L(ℓ)}, and therefore characterizes the stationary distribution. Thus, the stationary distribution can be obtained if we can solve (2.3) for the unknown function φ, equivalently, φ+, φ1, φ2 and φ0(0). However, this is known as a notoriously hard problem. It is relatively tractable when the jumps are skip free (see, e.g., [11, 20, 26]). This point is detailed below.

###### Remark 2.3 (Kernel method and generating function)

To get useful information from (2.3), a key idea is to consider it on the surface obtained from γ(θ) = 1. This enables us to express φ+ in terms of the other unknown functions under the constraint γ(θ) = 1. Then, we may apply analytic extensions using complex variables. This analytic approach is called a kernel method (see, e.g., [19, 24, 26]). In the kernel method, generating functions are more convenient than moment generating functions: substituting zk = eθk turns each moment generating function into the corresponding generating function, and the kernel corresponding to 1 − γ(θ) becomes a polynomial in z1 and z2. In particular, for the skip free two-dimensional reflecting random walk, the kernel equation corresponding to γ(θ) = 1 is a quadratic equation in z2 for each fixed z1. Hence, we can solve it algebraically, which is thoroughly studied in the book [11]. However, the problem becomes hard if the random walk is not skip free. If the jumps are unbounded, the equation has infinitely many solutions in the complex number field for each fixed z1 (or z2), and there may be no hope of solving the equation. Even if these roots are found, it would be difficult to analytically extend the unknown functions.

This remark suggests that the kernel method based on complex analysis is hard to use for the double M/G/1-type process. We look at the problem in a different way, and consider the equation γ(θ) = 1 in the real number field. In this case, it has at most two solutions in θ2 for each fixed θ1 because γ is a two variable convex function. However, we have a problem with the stationary equation (2.3) because we only know its validity for θ ≤ 0. To overcome this difficulty, we will introduce a new tool, called a stationary inequality, and work on θ for which φ(θ) is not a priori known to be finite. For this, moment generating functions are more convenient because φ and the φk are convex. This convexity may not hold for generating functions because two variable generating functions may not be convex.

Although we mainly use moment generating functions, we do not exclude the use of generating functions. In fact, they are convenient for considering the tail asymptotics in the coordinate directions. For other directions, we again need moment generating functions because ⟨c, L⟩ may not be arithmetic. Thus, we will use both of them.

## 3 Convergence domain and main results

The aim of this section is to present the main results on the domain D and the tail asymptotics. They will be proved in Section 4. We first give a key tool for finding the domain D, which allows us to extend the region of validity of (2.3) from θ ≤ 0.

###### Lemma 3.1

For θ ∈ R2, we have φ(θ) < ∞ and (2.3) holds true if one of the following conditions is satisfied.

• and for .

• and .

• and .

This lemma is a version of Lemma 6.1 of [26], and is proved in Appendix C. The lemma suggests that Γ, Γ1 and Γ2 are important for finding the θ for which φ(θ) is finite. The intersections Γ ∩ Γk are not empty by Lemma 2.2. Obviously, these sets are convex. We will use the following extreme points of them.

 θ(k,max) = arg max(θ1,θ2) {θk; γ(θ1,θ2) = 1},  θ(k,min) = arg min(θ1,θ2) {θk; γ(θ1,θ2) = 1},
 θ(k,c) = arg sup(θ1,θ2) {θk; θ ∈ Γ ∩ Γk},  θ(k,e) = arg max(θ1,θ2) {θk; θ ∈ ∂Γ ∩ ∂Γk},

where c and e in the superscripts stand for convergence parameter and edge, respectively (see Figure 2). Their meanings will be clarified in the context of the Markov additive process .
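To see one of these extreme points concretely, consider a toy skip-free walk of our own choosing (an assumption for this sketch, not the paper's model), whose γ separates additively as γ(θ1, θ2) = g1(θ1) + g2(θ2); then θ(1,max) sits where g2 attains its minimum, and the remaining equation for θ1 is an explicit quadratic in e^θ1:

```python
import math

# g_i(t) = up * e^t + down * e^{-t}; here g1 uses (0.2, 0.3), g2 likewise.
g = lambda t, up, down: up * math.exp(t) + down * math.exp(-t)

# g2 = 0.2 e^{t2} + 0.3 e^{-t2} has minimum 2*sqrt(0.06) at t2 = ln(1.5)/2.
g2_min = 2.0 * math.sqrt(0.2 * 0.3)

# Solve g1(t1) = 1 - g2_min, a quadratic 0.2 u^2 - (1 - g2_min) u + 0.3 = 0
# in u = e^{t1}; the larger root gives the first coordinate of theta^(1,max).
b = 1.0 - g2_min
u_max = (b + math.sqrt(b * b - 4.0 * 0.2 * 0.3)) / (2.0 * 0.2)
theta_1_max = math.log(u_max)
print(theta_1_max)  # about 0.489
```

The separable form is a special convenience of this toy example; in general θ(k,max) must be found by maximizing θk over the closed convex curve γ(θ) = 1.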

From these definitions, it is easy to see that, for k = 1, 2,

 θ(k,c) = θ(k,e) if γk(θ(k,max)) > 1,  and θ(k,c) = θ(k,max) if γk(θ(k,max)) ≤ 1.

Using these points, we classify their configurations into three categories as:

(D1) and ,  (D2) ,  (D3) ,

where we exclude the remaining case because it is impossible (see Figures 2 and 3). These categories were first introduced in [25] for the double QBD process, and have been shown to be useful for tail asymptotic problems.

Using this classification, we define vector as

 τ = (θ(1,c)1, θ(2,c)2) if (D1) holds,  (ξ̄1(θ(2,c)2), θ(2,c)2) if (D2) holds,  (θ(1,c)1, ξ̄2(θ(1,c)1)) if (D3) holds, (3.1)

where ξ̄1 and ξ̄2 are defined as

 ξ̄1(θ2) = max{θ1; γ(θ1,θ2) = 1},  ξ̄2(θ1) = max{θ2; γ(θ1,θ2) = 1}.

We will also use the following corresponding notation.

 ξ̲1(θ2) = min{θ1; γ(θ1,θ2) = 1},  ξ̲2(θ1) = min{θ2; γ(θ1,θ2) = 1}.
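For a skip-free toy walk (our own illustrative choice, echoing Remark 2.3), γ(·, θ2) = 1 reduces to a quadratic in e^θ1, so ξ̄1 and ξ̲1 are explicit:

```python
import math

# Toy jump probabilities for (1,0), (0,1), (-1,0), (0,-1).
p_e, p_n, p_w, p_s = 0.2, 0.2, 0.3, 0.3

def xi_1(theta2):
    """Return (min root, max root) of gamma(theta1, theta2) = 1 in theta1,
    i.e. the pair (xi_underline_1, xi_bar_1) for this toy gamma."""
    # gamma = p_e*u + p_w/u + p_n*e^{t2} + p_s*e^{-t2} with u = e^{theta1},
    # so gamma = 1 becomes p_e*u^2 + b*u + p_w = 0.
    b = p_n * math.exp(theta2) + p_s * math.exp(-theta2) - 1.0
    disc = b * b - 4.0 * p_e * p_w
    if disc < 0:
        raise ValueError("gamma(., theta2) never reaches 1")
    u_lo = (-b - math.sqrt(disc)) / (2.0 * p_e)
    u_hi = (-b + math.sqrt(disc)) / (2.0 * p_e)
    return math.log(u_lo), math.log(u_hi)

lo, hi = xi_1(0.0)
print(lo, hi)  # roots 0.0 and log(1.5), about 0.4055
```

With unbounded jumps no such closed form exists, but ξ̄ and ξ̲ can still be computed by one-dimensional root finding on the convex function γ(·, θ2).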

Then, the domain D is obtained as follows.

###### Theorem 3.1

Under conditions (i)–(iv) and the stability condition, we have

 D={θ∈Γmax;θ<τ}, (3.2)

where Γmax = {θ ∈ R2; θ ≤ θ′ for some θ′ ∈ Γ}. Furthermore, for k = 1, 2,

 τk=sup{θk≥0;φk(θk)<∞}. (3.3)
###### Remark 3.1

This result generalizes Theorem 3.1 of [25] in two respects. First, the domain D is fully identified. Secondly, the skip free condition in [25] is relaxed.

We next consider the decay rate of P(⟨c, L⟩ ≥ x) as x goes to infinity. In some cases, we also derive its exact asymptotics. From the domain D obtained in Theorem 3.1, we can expect that this decay rate is given by

 αc=sup{x≥0;xc∈D}. (3.4)

One may consider this to be immediate, claiming that the tail decay rate of a distribution on R+ is obtained from the convergence parameter of its moment generating function. This claim has been used in the literature, but it is not true in general (see Appendix D for a counterexample). Thus, we do need to prove (3.4).
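Formula (3.4) is easy to evaluate once D is known. In this sketch we use a toy skip-free γ and a hypothetical cap τ (both our own assumptions, not derived from the paper's analysis); bisection works because D is convex and contains uc for all small u > 0 when the drift along c is negative:

```python
import math

gamma = lambda t1, t2: (0.2 * math.exp(t1) + 0.3 * math.exp(-t1)
                        + 0.2 * math.exp(t2) + 0.3 * math.exp(-t2))
tau = (0.3, 0.25)  # hypothetical tau, for illustration only

def in_domain(t1, t2):
    # Membership test mimicking (3.2): inside the gamma < 1 region, below tau.
    return gamma(t1, t2) < 1.0 and t1 < tau[0] and t2 < tau[1]

def alpha(c, hi=10.0, iters=60):
    """alpha_c = sup{u >= 0 : u*c in D}, by bisection along direction c."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_domain(mid * c[0], mid * c[1]):
            lo = mid
        else:
            hi = mid
    return lo

print(alpha((1.0, 0.0)))  # capped by tau_1 = 0.3 here
print(alpha((0.0, 1.0)))  # capped by tau_2 = 0.25
```

Of course, this only computes the candidate value in (3.4); the content of Theorems 3.2 and 3.3 is precisely that this candidate is the true decay rate.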

We simplify the notation for the decay rates in the coordinate directions. Note that

 α1 = τ1 if ξ̄2(τ1) ≥ 0, and α1 = β1 if ξ̄2(τ1) < 0;  α2 = τ2 if ξ̄1(τ2) ≥ 0, and α2 = β2 if ξ̄1(τ2) < 0, (3.5)

where βk is the positive solution of γ(θ) = 1 on the k-th coordinate axis (see Figure 4 below). We now present asymptotic results in two theorems, which will be proved in the next section.

###### Theorem 3.2

Under conditions (i)–(iv) and the stability condition, we have,

 limn→∞1nlogP(Lk≥n,L3−k=i)=−τk,i∈Z+,k=1,2. (3.6)

Furthermore, under a further condition, if the Markov additive kernel is 1-arithmetic, then we have the following asymptotics for some constants bki > 0.

 liminfn→∞eτknP(Lk≥n,L3−k=i)≥bki,i∈Z+,k=1,2. (3.7)

In particular, for (D1) with , (D2) with and (D3) with , this is refined as

 limn→∞eτknP(Lk≥n,L3−k=i)=ckii∈Z+ for some cki>0. (3.8)
###### Remark 3.2

(a) In the case (D1), if and , then we can show

 limn→∞eτknP(Lk≥n,L3−k=i)=0. (3.9)

This is immediate from Remark 3.3 (b). We conjecture that (3.9) is also true for and . (b) When we apply the kernel method to the skip free case, with complex variable (or the corresponding generating function) has a branch point at for under the case (D1), which is a dominant singular point of (see [20]). Thus, the rightmost point corresponds with a branch point of the complex analysis. This is also the reason why (3.9) holds because has the exact asymptotics of the form with as .

###### Theorem 3.3

Under the same conditions as in Theorem 3.2, we have, for any directional vector c,

 limx→∞1xlogP(⟨c,L⟩≥x)=−αc, (3.10)

where we recall that αc is given by (3.4). Under some additional assumptions, we have, for some positive constant bc,

 limx→∞eαcxP(⟨c,L⟩≥x)=bc, (3.11)

where x runs over δZ if ⟨c, L⟩ is δ-arithmetic for some δ > 0, while it runs over R+ otherwise, which is equivalent to c1c2 ≠ 0 and c1/c2 being irrational.

###### Remark 3.3

(a) Since αk may be less than τk, the decay rate of P(Lk ≥ n) may be different from that of P(Lk ≥ n, L3−k = i). (b) Similar to Remark 3.2, in the corresponding boundary case we can show, using the expression (4.23) in the proof of this theorem, that

 limn→∞eαknP(Lk≥n)=0. (3.12)

This obviously implies (3.9). (c) If the jumps are skip free, then finer exact asymptotics are obtained for the coordinate directions in [20], which partially uses the kernel method discussed in Section 2.2. One may think of applying the same approach as in [20]. Unfortunately, this approach does not work, for the same reason discussed in Remark 2.3.

We now consider how the decay rates τk and αc are influenced by the modeling primitives. Obviously, if γ, γ1 and γ2 are increased by changing the modeling primitives, then the open sets Γ, Γ1 and Γ2 shrink. Hence, we have

###### Lemma 3.2

Under the assumptions of Theorem 3.2, if the distributions of X(+), X(1) and X(2) are changed so as to increase γ(θ), γ1(θ) and γ2(θ) for each fixed θ, then the decay rates τk and αc are decreased.

Here, decreasing and increasing are used in the weak sense. This convention will be used throughout the paper. To materialize the monotonicity in Lemma 3.2, we use the following stochastic orders for random vectors.

###### Definition 3.1 (Müller and Stoyan [30])

(a) For random variables X and Y, the distribution of X is said to be less than the distribution of Y in convex order, which is denoted by X ≤cx Y, if, for any convex function f from R to R,

 E(f(X))≤E(f(Y)),

as long as the expectations exist. (b) For two dimensional random vectors X and Y, if ⟨θ, X⟩ ≤cx ⟨θ, Y⟩ for all θ ∈ R2, then the distribution of X is said to be less than the distribution of Y in linear convex order, which is denoted by X ≤lcx Y.

For the linear convex order, Koshevoy and Mosler [22] give several equivalent conditions (see also [34]). Among them, Strassen's characterization for convex order visualizes the variability of this order (see Theorem 2.6.6 of [30]). That is, X ≤lcx Y if and only if, for each θ ∈ R2, there is a random variable Uθ with E(Uθ | ⟨θ, X⟩) = 0 such that

 ⟨θ,Y⟩≃⟨θ,X⟩+Uθ, (3.13)

where ≃ denotes the equality in distribution.
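A minimal discrete illustration of this Strassen-type construction (with toy distributions of our own choosing): adding zero-mean noise U to X produces a Y that dominates X in convex order, so every convex test function has a larger mean under Y.

```python
# Distributions as (value, probability) lists.
X = [(-1.0, 0.5), (1.0, 0.5)]
U = [(-2.0, 0.5), (2.0, 0.5)]  # zero mean, taken independent of X

def expect(f, dist):
    """Expectation of f under a finite (value, probability) distribution."""
    return sum(p * f(v) for v, p in dist)

# Y = X + U; independence gives the product weights.
Y = [(x + u, px * pu) for x, px in X for u, pu in U]

f = lambda v: v * v  # a convex test function
print(expect(f, X), expect(f, Y))  # 1.0 and 5.0: the variance increases
```

This is exactly the sense in which increasing variability (here, of the jump distributions in each direction θ) enlarges the moment generating functions and thus, by Lemma 3.2, degrades the decay rates.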

Since the exponential function is convex, the following fact is immediate from Theorems 3.2 and 3.3 and Lemma 3.2.

###### Corollary 3.1

If the distributions of X(+), X(1) and X(2) are increased in linear convex order, then the decay rates τk and αc are decreased for k = 1, 2 and any direction c, where the stability condition is unchanged due to (3.13).

This corollary shows how the decay rates are degraded by increasing the variability of the transition jumps X(+), X(1) and X(2).

## 4 Proofs of the theorems

The aim of this section is to prove Theorems 3.1, 3.2 and 3.3. Before the proofs, we prepare two sets of auxiliary results. One is another representation of the stationary distribution using a Markov additive process. The other is an iteration algorithm for deriving the convergence domain D.

### 4.1 Occupation measure and Markov additive process

We first represent the stationary distribution using so called censoring. That is, the stationary probability of each state is computed via the expected number of visits to that state between returns to the boundary or one of its faces. The collection of these expected numbers is called an occupation measure. We now introduce these objects formally.

Let U be a subset of S, and let σU = inf{ℓ ≥ 1; L(ℓ) ∈ U} be the first return time to U. For this U, define the distribution of the first return state and the occupation measure as

 gU(i, j) = P(L(σU) = j, σU < ∞ | L(0) = i),  i, j ∈ U,
 hU(i, j) = E( 1(σU < ∞) ∑ℓ=0,…,σU−1 1(L(ℓ) = j) | L(0) = i ),  i, j ∈ S ∖ U.

Let GU and HU be the matrices whose entries are gU(i, j) and hU(i, j), respectively. HU may also be considered as a potential kernel (see, e.g., Section 2.1 of [32] for this kernel). Note that GU is stochastic since {L(ℓ)} has the stationary distribution ν. Let νU be the stationary distribution of GU, which is uniquely determined up to a constant multiplier by νU = νU GU. By EνU, we denote the expectation with respect to νU. By the existence of the stationary distribution, EνU(σU) is finite. Then, as discussed in Section 2 of [29], it follows from censoring on the set U that

 ν(j)=1EνU(σU)∑i∈UνU(i)∑k∈S∖Up(i,k)hU(k,j),j∈S∖U, (4.1)

where p(i, k) = P(L(1) = k | L(0) = i).

We need the distribution νU here because U may not be a singleton. The censored process is particularly useful when the occupation measure is simple. For example, if we choose U = ∂S, then hU is obtained from the random walk free of reflection, which is simpler than the reflecting process {L(ℓ)}.

We next choose for . This occupation measure has been used to find tail asymptotics of the marginal stationary distribution (see, e.g., [21, 25, 28]). For each , we denote the matrix whose entry is by , where . Since this does not depend on , we simply denote it by for , where is the identity matrix.

We need further notation. For nonnegative integers m and n, let

 r(1)((m,i),(m+n,j)) (4.2) =∞∑ℓ=1P(L(ℓ)=(m+n,j), m

For , does not depend on , so, for each , we denote the matrix whose entry is by for . On the other hand, for , we denote the corresponding matrix by for .

For each n ∈ Z+, let ν(1)n be the vector whose i-th entry is ν((n, i)). Similar to (4.1), it follows from censoring that

 ν(1)n=ν(1)0R(1)0n+n−1∑ℓ=1ν(1)ℓR(1)n−ℓ,n≥1. (4.3)

Using the well known identity that

 ∞∑n=0snH(1)n=(I−∞∑n=1snR(1)n)−1,

(4.3) can be written as

 ν(1)n = ν(1)0 ∑ℓ=1,…,n R(1)0ℓ H(1)n−ℓ ≥ ν(1)1 H(1)n−1,  n ≥ 1. (4.4)

These formulas are also found in [28, 29]. Thus, ν(1)n is given in terms of ν(1)0, R(1)0ℓ and H(1)n. These expressions will also be useful.
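The identity behind (4.4) can be checked numerically in a scalar toy case (our own numbers): the series coefficients H_n defined by H_0 = I and H_n = Σℓ R_ℓ H_{n−ℓ} do reproduce the inverse on the right hand side.

```python
# Scalar toy check of sum_{n>=0} s^n H_n = (I - sum_{n>=1} s^n R_n)^{-1}.
R = [0.0, 0.3, 0.2, 0.1]  # R_1, R_2, R_3 packed after a dummy R_0 = 0

H = [1.0]  # H_0 = I in the scalar case
for n in range(1, 8):
    H.append(sum(R[l] * H[n - l] for l in range(1, min(n, 3) + 1)))

# Compare both sides of the identity at s = 0.5, truncating the series.
s = 0.5
lhs = sum(s**n * h for n, h in enumerate(H))
rhs = 1.0 / (1.0 - sum(s**n * R[n] for n in range(1, 4)))
print(lhs, rhs)  # agree up to the truncation error
```

In the matrix case, the same recursion holds with matrix products, and the inverse exists for |s| small enough that the spectral radius of Σ sⁿR(1)n stays below one.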

We next consider a Markov additive process generated from the reflecting process by removing the boundary transitions, and present a useful identity, called the Wiener–Hopf factorization.

Denote this Markov additive process by {Z(1)(ℓ)}. Specifically, let A(1)n be its Markov additive kernel, that is, the (i, j) entry of the matrix A(1)n is defined as

 [A(1)n]ij=P(Z(1)(ℓ+1)=(m+n,j)|Z(