# Correlated fractional counting processes on a finite time interval

Luisa Beghin Address: Dipartimento di Scienze Statistiche, Sapienza Università di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy. e-mail: luisa.beghin@uniroma1.it    Roberto Garra Address: Dipartimento di Scienze di Base e Applicate per l’Ingegneria, Sapienza Università di Roma, Via A. Scarpa 16, I-00161 Roma, Italy. e-mail: rolinipame@yahoo.it    Claudio Macci Address: Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, I-00133 Rome, Italy. e-mail: macci@mat.uniroma2.it
###### Abstract

We present some correlated fractional counting processes on a finite time interval. This will be done by considering a slight generalization of the processes in [9]. The main case concerns a class of space-time fractional Poisson processes and, when the correlation parameter is equal to zero, the univariate distributions coincide with the ones of the space-time fractional Poisson process in [24]. On the other hand, when we consider the time fractional Poisson process, the multivariate finite dimensional distributions are different from the ones presented for the renewal process in [26]. Another case concerns a class of fractional negative binomial processes.

Keywords: Poisson process, negative binomial process, weighted process.
Mathematics Subject Classification: 60G22, 60G55, 60E05, 33E12.

## 1 Introduction

Several fractional processes in the literature are defined by considering some known equations in terms of suitable fractional derivatives. In this paper we are interested in particular Lévy counting processes, as in the recent paper [5]; specifically we deal with Poisson and negative binomial processes. There is a wide literature on fractional Poisson processes: see e.g. [16], [19], [7], [8], [24] and [26] (we also cite [15] and [20], where their representation in terms of randomly time-changed and subordinated processes was studied in detail). Some references with fractional negative binomial processes are [5] (see Example 3) and [28]. Among the other fractional processes in the literature we recall the diffusive processes (see e.g. [2], [3], [18], [22], [27]), the telegraph processes in [21] and the pure birth processes in [23].

Often the results for these fractional processes are given in terms of the Mittag-Leffler function

$$E_{\alpha,\beta}(x):=\sum_{r\geq 0}\frac{x^r}{\Gamma(\alpha r+\beta)}$$

(see e.g. [25], page 17); we also recall the generalized Mittag-Leffler function

$$E^{\gamma}_{\alpha,\beta}(x):=\sum_{r\geq 0}\frac{(\gamma)_{(r)}x^r}{r!\,\Gamma(\alpha r+\beta)},$$

where

$$(\gamma)_{(r)}:=\begin{cases}\gamma(\gamma+1)\cdots(\gamma+r-1)&\text{if }r\geq 1\\1&\text{if }r=0\end{cases}\quad(\text{for }\gamma\in\mathbb{R})$$

is the rising factorial (also called Pochhammer symbol); $E^{\gamma}_{\alpha,\beta}$ coincides with $E_{\alpha,\beta}$ when $\gamma=1$.
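Both series can be evaluated numerically by direct truncation. The following minimal Python sketch (the function names are ours, and plain truncation is only adequate for arguments of moderate size) also illustrates the reduction to $E_{\alpha,\beta}$ when $\gamma=1$:

```python
from math import gamma

def mittag_leffler(x, alpha, beta, terms=150):
    """Mittag-Leffler function E_{alpha,beta}(x) by truncated series."""
    return sum(x**r / gamma(alpha * r + beta) for r in range(terms))

def rising_factorial(g, r):
    """Pochhammer symbol (g)_(r) = g(g+1)...(g+r-1), with (g)_(0) = 1."""
    out = 1.0
    for i in range(r):
        out *= g + i
    return out

def gen_mittag_leffler(x, alpha, beta, gam, terms=150):
    """Generalized Mittag-Leffler function E^gam_{alpha,beta}(x)."""
    return sum(rising_factorial(gam, r) * x**r / (gamma(r + 1) * gamma(alpha * r + beta))
               for r in range(terms))
```

For instance $E_{1,1}$ is the exponential function, and `gen_mittag_leffler` with `gam=1` agrees with `mittag_leffler`.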

In this paper we consider some processes $\{N_\rho(t):t\in[0,T]\}$ on a finite time interval $[0,T]$, for some $T>0$. More precisely $N_\rho(t)$ is defined by

$$N_\rho(t):=\sum_{n=1}^{M_g}\mathbf{1}_{[0,t]}(X^{F,\rho}_n),$$

where $M_g$ is a nonnegative integer-valued random variable with probability generating function $g$, i.e.

$$g(u):=\mathbb{E}[u^{M_g}],$$

and $\{X^{F,\rho}_n:n\geq 1\}$ is a sequence of random variables with (common) distribution function $F$ such that $F(0)=0$ and $F(T)=1$, and independent of $M_g$; moreover the correlation coefficient between any pair of random variables $X^{F,\rho}_m$ and $X^{F,\rho}_n$, with $m\neq n$, is equal to a common value $\rho$.

###### Remark 1.

We have $N_\rho(T)=M_g$ (because $F(T)=1$); thus the distribution of $N_\rho(T)$ does not depend on $\rho$.

In this way we are considering a slight generalization of the processes presented in [9]; indeed we can recover several formulas in [9] by setting $g(u):=e^{\lambda(u-1)}$ for some $\lambda>0$ (which concerns a Poisson distributed random variable with mean $\lambda$), together with a suitable choice of the distribution function $F$. The case without correlation, i.e. the case $\rho=0$, appears in [4]; see also [17], where that process is considered as a claim number process in insurance. Here, in view of what follows, we recall the following formulas (see e.g. (9) and (10) in [9]): we have the probability generating function

$$G_{N_\rho(t)}(u)=\rho(1-F(t))+\rho F(t)g(u)+(1-\rho)g(1-F(t)+F(t)u),\qquad(1)$$

and the probability mass function

$$P(N_\rho(t)=k)=(1-\rho)P(N_0(t)=k)+\rho\big\{(1-F(t))\mathbf{1}_{k=0}+F(t)P(M_g=k)\big\}\quad(\text{for all }k\geq 0),\qquad(2)$$

where

$$P(N_0(t)=k)=\sum_{n=k}^{\infty}\binom{n}{k}F^k(t)(1-F(t))^{n-k}P(M_g=n)\quad(\text{for all }k\geq 0)\qquad(3)$$

concerns the case $\rho=0$ (see (2.4) in [4]).

As pointed out in [4], this class of counting processes can be useful to tackle the problem of overdispersion and underdispersion in the analysis of count data where correlations between events are present. A possible application can be given for example in models of non-exponential extinction of radiation in correlated random media (see e.g. [13]). We also remark that, as far as the marginal distribution of each random variable $N_\rho(t)$ is concerned, in (2) we have a mixture of three probability mass functions, i.e. $P(N_0(t)=\cdot)$, $\mathbf{1}_{\cdot=0}$ and $P(M_g=\cdot)$, and the weights are $1-\rho$, $\rho(1-F(t))$ and $\rho F(t)$, respectively.
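As a concrete sanity check of (2) and (3) (not part of the original text), take $M_g$ Poisson distributed with mean $\lambda$: then (3) is a binomial thinning of a Poisson random variable, so it must return the Poisson law with mean $\lambda F(t)$, and (2) is the mixture with weights $1-\rho$, $\rho(1-F(t))$ and $\rho F(t)$. A minimal Python sketch (names are ours):

```python
from math import comb, exp, factorial

def poisson_pmf(n, mean):
    """Poisson probability mass function."""
    return exp(-mean) * mean**n / factorial(n)

def pmf_N0(k, F_t, lam, terms=150):
    """Equation (3) with M_g ~ Poisson(lam): binomial thinning by F(t)."""
    return sum(comb(n, k) * F_t**k * (1 - F_t)**(n - k) * poisson_pmf(n, lam)
               for n in range(k, terms))

def pmf_Nrho(k, rho, F_t, lam, terms=150):
    """Equation (2): mixture of pmf_N0, a point mass at 0 and the law of M_g,
    with weights (1 - rho), rho*(1 - F(t)) and rho*F(t)."""
    return ((1 - rho) * pmf_N0(k, F_t, lam, terms)
            + rho * ((1 - F_t) * (k == 0) + F_t * poisson_pmf(k, lam)))
```

In this Poisson case `pmf_N0(k, F_t, lam)` coincides with `poisson_pmf(k, lam * F_t)`, and the mixture in `pmf_Nrho` sums to one over $k$.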

The aim of this paper is to present some correlated fractional counting processes by choosing in a suitable way the probability generating function $g$ and the distribution function $F$ above. In Section 2 we present a class of space-time fractional Poisson processes (in fact we have the same univariate distributions of the space-time fractional Poisson process in [24] when $\rho=0$). A class of fractional negative binomial processes is presented in Section 3.

Finally, since the presentation of the results in [9] refers to the concept of weighted Poisson processes (see also the previous reference [4] concerning the case $\rho=0$), in the final Section 4 we give some minor results on weighted processes. Even though this section seems to be disconnected from the other ones in this paper, in our opinion it is a small but nice enrichment of the content of [9].

## 2 A class of correlated fractional Poisson processes

For the aims of this section, some preliminaries are needed. Firstly we consider the Caputo (left fractional) derivative of order $\nu>0$ (see e.g. (2.4.14) and (2.4.15) in [12] with $a=0$; we use the notation $\frac{d^\nu}{dt^\nu}$) defined by

$$\frac{d^\nu}{dt^\nu}f(t):=\begin{cases}\frac{1}{\Gamma(n-\nu)}\int_0^t\frac{1}{(t-s)^{\nu-n+1}}\frac{d^n}{ds^n}f(s)\,ds&\text{if }\nu\text{ is not an integer, where }n=[\nu]+1\\\frac{d^\nu}{dt^\nu}f(t)&\text{if }\nu\text{ is an integer}\end{cases}\quad(\text{for all }t\geq 0);$$

note that, since here we consider $\nu\in(0,1]$, we have (see e.g. (2.4.17) in [12] with $n=1$)

$$\frac{d^\nu}{dt^\nu}f(t):=\begin{cases}\frac{1}{\Gamma(1-\nu)}\int_0^t\frac{1}{(t-s)^{\nu}}\frac{d}{ds}f(s)\,ds&\text{if }\nu\in(0,1)\\\frac{d}{dt}f(t)&\text{if }\nu=1\end{cases}\quad(\text{for all }t\geq 0).$$

We also consider the (fractional) difference operator $(I-B)^\alpha$ in [24]; more precisely $I$ is the identity operator and $B$ is the backward shift operator, defined by $Bp_k(t)=p_{k-1}(t)$ and $B^jp_k(t)=p_{k-j}(t)$, and therefore

$$(I-B)^\alpha=\sum_{j=0}^{\infty}(-1)^j\binom{\alpha}{j}B^j.\qquad(4)$$

We now recall that Orsingher and Polito in [24] considered the space-time fractional Poisson process $\{N^{\alpha,\nu}_0(t):t\geq 0\}$, for $\alpha,\nu\in(0,1]$, whose probability mass functions $p_k(t)=P(N^{\alpha,\nu}_0(t)=k)$ solve the Cauchy problem

$$\begin{cases}\frac{d^\nu}{dt^\nu}p_k(t)=-\lambda^\alpha(I-B)^\alpha p_k(t)\\p_k(0)=\begin{cases}0,&k>0\\1,&k=0.\end{cases}\end{cases}$$

The explicit form of the probability generating function of this process is (see [24], equation (2.28))

$$\mathbb{E}\big[u^{N^{\alpha,\nu}_0(t)}\big]=E_{\nu,1}(-\lambda^\alpha t^\nu(1-u)^\alpha).$$

In this section we consider a class of correlated space-time fractional Poisson processes on a finite time interval $[0,T]$. For $\alpha,\nu\in(0,1]$ we consider $\{N^{\alpha,\nu}_\rho(t):t\in[0,T]\}$ such that the probability generating function of $M_g$ is

$$g(u):=E_{\nu,1}(-\lambda^\alpha T^\nu(1-u)^\alpha),\qquad(5)$$

and the distribution function of the random variables $\{X^{F,\rho}_n:n\geq 1\}$ is

$$F(t):=(t/T)^{\nu/\alpha}\quad(\text{for }t\in[0,T]).$$

In what follows we present the probability generating functions in Proposition 2.1 and the corresponding probability mass functions in Proposition 2.2. Moreover, in Proposition 2.3, we give an equation for the probability mass functions in Proposition 2.2 with respect to time $t$.

###### Proposition 2.1.

The probability generating functions are

$$\begin{aligned}G_{N^{\alpha,\nu}_\rho(t)}(u)=&\,\rho(1-(t/T)^{\nu/\alpha})+\rho(t/T)^{\nu/\alpha}E_{\nu,1}(-\lambda^\alpha T^\nu(1-u)^\alpha)\\&+(1-\rho)E_{\nu,1}(-\lambda^\alpha t^\nu(1-u)^\alpha).\end{aligned}$$

Proof. We have

$$\begin{aligned}G_{N^{\alpha,\nu}_\rho(t)}(u)=&\,\rho(1-(t/T)^{\nu/\alpha})+\rho(t/T)^{\nu/\alpha}E_{\nu,1}(-\lambda^\alpha T^\nu(1-u)^\alpha)\\&+(1-\rho)E_{\nu,1}\big(-\lambda^\alpha T^\nu\big(1-\{1-(t/T)^{\nu/\alpha}+(t/T)^{\nu/\alpha}u\}\big)^\alpha\big)\end{aligned}$$

by (1), and we conclude by noting that $1-\{1-(t/T)^{\nu/\alpha}+(t/T)^{\nu/\alpha}u\}=(t/T)^{\nu/\alpha}(1-u)$, whence $T^\nu\big((t/T)^{\nu/\alpha}(1-u)\big)^\alpha=t^\nu(1-u)^\alpha$.

###### Remark 2.

By Proposition 2.1, if $\rho=0$ we have the probability generating function

$$G_{N^{\alpha,\nu}_0(t)}(u)=E_{\nu,1}(-\lambda^\alpha t^\nu(1-u)^\alpha)\qquad(6)$$

which coincides with the one presented in the last case of Table 1 in [24]; note that (6) is a generalization of (5) with $t$ in place of $T$. Thus the univariate distributions of the random variables $N^{1,\nu}_0(t)$ (for the case $\alpha=1$) coincide with the ones of the random variables $M(t)$ of the renewal process in [26] (restricted to the same finite time interval). On the other hand one can check that the multivariate finite dimensional distributions are different from the ones in [26] (and, in fact, $\{N^{1,\nu}_0(t):t\in[0,T]\}$ is not a renewal process). We explain this with a simple example where we take into account that

$$P(M(s)=1)=P(N^{1,\nu}_0(s)=1)=\lambda s^\nu E^2_{\nu,\nu+1}(-\lambda s^\nu)\quad(\text{for }s\in[0,T])$$

by (2.5) in [8]. In fact, for $t\in(0,T)$, we have

$$P(M(t)=1,M(T)=1)=\lambda t^\nu E^2_{\nu,\nu+1}(-\lambda t^\nu)E_{\nu,1}(-\lambda(T-t)^\nu)\qquad(7)$$

by combining (11) and (14) in [26] with (2) and (4) in the same reference, and

$$P(N^{1,\nu}_0(t)=1,N^{1,\nu}_0(T)=1)=(t/T)^{\nu}\,\lambda T^\nu E^2_{\nu,\nu+1}(-\lambda T^\nu)\qquad(8)$$

because $\{N^{1,\nu}_0(t)=1,N^{1,\nu}_0(T)=1\}=\{M_g=1,\,X^{F,0}_1\leq t\}$ by construction (recall that $N^{1,\nu}_0(T)=M_g$ by Remark 1, and $F(t)=(t/T)^{\nu}$ when $\alpha=1$). Then (7) and (8) coincide only for the non-fractional case $\nu=1$ (see Figure 1 below).
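The comparison between (7) and (8) can be explored numerically with truncated Mittag-Leffler series. A sketch (helper names are ours, and we assume the prefactor $F(t)=(t/T)^\nu$ in (8); recall that $(2)_{(r)}/r!=r+1$):

```python
from math import gamma

def ml(x, a, b, terms=150):
    """Mittag-Leffler E_{a,b}(x) by truncated series."""
    return sum(x**r / gamma(a * r + b) for r in range(terms))

def ml2(x, a, b, terms=150):
    """Generalized Mittag-Leffler E^2_{a,b}(x); note (2)_(r)/r! = r + 1."""
    return sum((r + 1) * x**r / gamma(a * r + b) for r in range(terms))

def joint_renewal(t, T, lam, nu):
    """Equation (7): P(M(t)=1, M(T)=1) for the renewal process in [26]."""
    return lam * t**nu * ml2(-lam * t**nu, nu, nu + 1) * ml(-lam * (T - t)**nu, nu, 1)

def joint_correlated(t, T, lam, nu):
    """Equation (8): P(N^{1,nu}_0(t)=1, N^{1,nu}_0(T)=1)."""
    return (t / T)**nu * lam * T**nu * ml2(-lam * T**nu, nu, nu + 1)
```

For $\nu=1$ both quantities reduce to $\lambda t e^{-\lambda T}$, while for $\nu\in(0,1)$ they differ, in agreement with the remark.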

###### Proposition 2.2.

The probability mass functions are

$$\begin{aligned}P(N^{\alpha,\nu}_\rho(t)=k)=&\,(1-\rho)\frac{(-1)^k}{k!}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha t^\nu)^r}{\Gamma(\nu r+1)}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}+\rho\Big\{(1-(t/T)^{\nu/\alpha})\mathbf{1}_{k=0}\\&+(t/T)^{\nu/\alpha}\cdot\frac{(-1)^k}{k!}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha T^\nu)^r}{\Gamma(\nu r+1)}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}\Big\}\quad(\text{for all }k\geq 0).\end{aligned}$$

Proof. Firstly we have

$$P(M_g=n)=P(N^{\alpha,\nu}_\rho(T)=n)=\frac{(-1)^n}{n!}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha T^\nu)^r}{\Gamma(\nu r+1)}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-n)}\quad(\text{for all }n\geq 0)\qquad(9)$$

by the probability generating function in (5) (see (1.8) in [24]) and by Remark 1. Moreover, if we consider (3), we get

$$\begin{aligned}P(N^{\alpha,\nu}_0(t)=k)=&\sum_{n=k}^{\infty}\binom{n}{k}(t/T)^{\frac{\nu}{\alpha}k}(1-(t/T)^{\nu/\alpha})^{n-k}\frac{(-1)^n}{n!}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha T^\nu)^r}{\Gamma(\nu r+1)}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-n)}\\=&\frac{(-1)^k}{k!}(t/T)^{\frac{\nu}{\alpha}k}\sum_{n=k}^{\infty}\frac{(-1)^{n-k}}{(n-k)!}(1-(t/T)^{\nu/\alpha})^{n-k}\\&\cdot\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha t^\nu)^r}{\Gamma(\nu r+1)}(T/t)^{\nu r}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}\frac{\Gamma(\alpha r+1-k)}{\Gamma(\alpha r+1-n)}\\=&\frac{(-1)^k}{k!}(t/T)^{\frac{\nu}{\alpha}k}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha t^\nu)^r}{\Gamma(\nu r+1)}(T/t)^{\nu r}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}\\&\cdot\sum_{j=0}^{\infty}\frac{(-1)^j}{j!}(1-(t/T)^{\nu/\alpha})^j\frac{\Gamma(\alpha r+1-k)}{\Gamma(\alpha r+1-k-j)}\quad(\text{for all }k\geq 0).\end{aligned}$$

Then, by the well-known Newton's generalized binomial theorem, we obtain

$$\begin{aligned}P(N^{\alpha,\nu}_0(t)=k)=&\frac{(-1)^k}{k!}(t/T)^{\frac{\nu}{\alpha}k}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha t^\nu)^r}{\Gamma(\nu r+1)}(T/t)^{\nu r}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}\big(1+(t/T)^{\nu/\alpha}-1\big)^{\alpha r-k}\\=&\frac{(-1)^k}{k!}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha t^\nu)^r}{\Gamma(\nu r+1)}(t/T)^{\frac{\nu}{\alpha}k-\nu r+\nu r-\frac{\nu}{\alpha}k}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}\\=&\frac{(-1)^k}{k!}\sum_{r=0}^{\infty}\frac{(-\lambda^\alpha t^\nu)^r}{\Gamma(\nu r+1)}\frac{\Gamma(\alpha r+1)}{\Gamma(\alpha r+1-k)}\quad(\text{for all }k\geq 0),\end{aligned}$$

where, as we expected by (6), the last expression meets the one in (9) (here we have $t$ and $k$ in place of $T$ and $n$ in (9)). We conclude the proof by considering (2) together with the last expression obtained for the case $\rho=0$.
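The final series can also be checked numerically: for $\alpha=\nu=1$ it must reduce to the Poisson law with mean $\lambda t$. A minimal sketch (our own names; the reciprocal of $\Gamma$ is set to zero at its poles, which kills the terms with $r<k$ when $\alpha=1$):

```python
from math import gamma, exp, factorial

def rgamma(x):
    """1/Gamma(x), set to 0 at the poles x = 0, -1, -2, ..."""
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / gamma(x)

def st_frac_poisson_pmf(k, t, lam, alpha, nu, terms=120):
    """P(N^{alpha,nu}_0(t) = k) via the final series above (case rho = 0)."""
    s = sum((-lam**alpha * t**nu)**r * rgamma(nu * r + 1)
            * gamma(alpha * r + 1) * rgamma(alpha * r + 1 - k)
            for r in range(terms))
    return (-1)**k / factorial(k) * s
```

For $\alpha=\nu=1$ the summand becomes $(-\lambda t)^r/(r-k)!$ and the series collapses to $e^{-\lambda t}(\lambda t)^k/k!$.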

In view of the next Proposition 2.3 we remark that, in a part of the proof, we refer to Theorem 2 in [6], which can be derived from a subordinated representation of the space-time fractional Poisson process in terms of the stable subordinator and its inverse (see also (3.20), together with (3.1), in the same reference).

###### Proposition 2.3.

Let $\{P(N^{\alpha,\nu}_\rho(t)=k):k\geq 0\}$ be the probability mass functions in Proposition 2.2. Then we have the following equations: for $k=0$,

$$\begin{aligned}\frac{d^\nu}{dt^\nu}P(N^{\alpha,\nu}_\rho(t)=0)=&-\lambda^\alpha P(N^{\alpha,\nu}_\rho(t)=0)+\lambda^\alpha\rho\\&-\rho(t/T)^{\nu/\alpha}\Big(\lambda^\alpha+t^{-\nu}\frac{\Gamma(\nu/\alpha+1)}{\Gamma(\nu/\alpha-\nu+1)}\Big)\big(1-P(N^{\alpha,\nu}_\rho(T)=0)\big);\end{aligned}$$

for all $k\geq 1$,

$$\begin{aligned}\frac{d^\nu}{dt^\nu}P(N^{\alpha,\nu}_\rho(t)=k)=&-\lambda^\alpha(I-B)^\alpha P(N^{\alpha,\nu}_\rho(t)=k)+\lambda^\alpha\rho(1-(t/T)^{\nu/\alpha})(-1)^k\binom{\alpha}{k}\\&+\rho(t/T)^{\nu/\alpha}\Big[\lambda^\alpha(I-B)^\alpha+t^{-\nu}\frac{\Gamma(\nu/\alpha+1)}{\Gamma(\nu/\alpha-\nu+1)}\Big]P(N^{\alpha,\nu}_\rho(T)=k).\end{aligned}$$

In all cases we have the initial conditions $P(N^{\alpha,\nu}_\rho(0)=0)=1$ and $P(N^{\alpha,\nu}_\rho(0)=k)=0$ for all $k\geq 1$.

Proof. The initial conditions trivially hold. Throughout this proof we consider the notation

$$p^{\alpha,\nu}_k(t)=P(N^{\alpha,\nu}_0(t)=k)\quad(\text{for all }k\geq 0)$$

for the probability mass functions concerning the case $\rho=0$. Then, by (2) and Remark 1, we get

$$\begin{aligned}\frac{d^\nu}{dt^\nu}P(N^{\alpha,\nu}_\rho(t)=k)=&\,(1-\rho)\frac{d^\nu}{dt^\nu}p^{\alpha,\nu}_k(t)+\rho\Big\{-\frac{\mathbf{1}_{k=0}}{T^{\nu/\alpha}}\frac{d^\nu}{dt^\nu}t^{\nu/\alpha}+\frac{P(N^{\alpha,\nu}_\rho(T)=k)}{T^{\nu/\alpha}}\frac{d^\nu}{dt^\nu}t^{\nu/\alpha}\Big\}\\=&\,(1-\rho)\frac{d^\nu}{dt^\nu}p^{\alpha,\nu}_k(t)-\frac{\rho}{T^{\nu/\alpha}}\big\{\mathbf{1}_{k=0}-P(N^{\alpha,\nu}_\rho(T)=k)\big\}\frac{d^\nu}{dt^\nu}t^{\nu/\alpha}.\end{aligned}$$

Moreover we have

$$\frac{d^\nu}{dt^\nu}p^{\alpha,\nu}_k(t)=-\lambda^\alpha(I-B)^\alpha p^{\alpha,\nu}_k(t)$$

by Theorem 2 in [6] and

$$\frac{d^\nu}{dt^\nu}t^{\nu/\alpha}=t^{\nu/\alpha-\nu}\frac{\Gamma(\nu/\alpha+1)}{\Gamma(\nu/\alpha-\nu+1)}$$

(see e.g. (2.2.11) and (2.4.8) in [12], or a correction of (2.4.28) in the same reference); then we get

$$\begin{aligned}\frac{d^\nu}{dt^\nu}P(N^{\alpha,\nu}_\rho(t)=k)=&-\lambda^\alpha(I-B)^\alpha(1-\rho)p^{\alpha,\nu}_k(t)\\&-\rho(t/T)^{\nu/\alpha}\big\{\mathbf{1}_{k=0}-P(N^{\alpha,\nu}_\rho(T)=k)\big\}t^{-\nu}\frac{\Gamma(\nu/\alpha+1)}{\Gamma(\nu/\alpha-\nu+1)}.\end{aligned}$$

From now on we consider the cases $k=0$ and $k\geq 1$ separately.
Case $k=0$. Firstly we have

$$(I-B)^\alpha p^{\alpha,\nu}_0(t)=\sum_{j=0}^{\infty}(-1)^j\binom{\alpha}{j}p^{\alpha,\nu}_{0-j}(t)=p^{\alpha,\nu}_0(t)$$

by (4), since $p^{\alpha,\nu}_{-j}(t)=0$ for all $j\geq 1$; therefore

$$\frac{d^\nu}{dt^\nu}P(N^{\alpha,\nu}_\rho(t)=0)=-\lambda^\alpha(1-\rho)p^{\alpha,\nu}_0(t)-\rho(t/T)^{\nu/\alpha}\big\{1-P(N^{\alpha,\nu}_\rho(T)=0)\big\}t^{-\nu}\frac{\Gamma(\nu/\alpha+1)}{\Gamma(\nu/\alpha-\nu+1)};$$

then, by (2) and Remark 1,

$$\begin{aligned}\frac{d^\nu}{dt^\nu}P(N^{\alpha,\nu}_\rho(t)=0)=&-\lambda^\alpha\Big\{P(N^{\alpha,\nu}_\rho(t)=0)-\rho\big\{1-(t/T)^{\nu/\alpha}+(t/T)^{\nu/\alpha}P(N^{\alpha,\nu}_\rho(T)=0)\big\}\Big\}\\&-\rho(t/T)^{\nu/\alpha}\big\{1-P(N^{\alpha,\nu}_\rho(T)=0)\big\}t^{-\nu}\frac{\Gamma(\nu/\alpha+1)}{\Gamma(\nu/\alpha-\nu+1)},\end{aligned}$$

and, finally, we can check by inspection that the last equation is equivalent to the one in the statement of the proposition.
Case $k\geq 1$. Firstly, again by (2) and Remark 1, we have

 dνdtνP(Nα,νρ(t)=k) =−λα(I−B)α[P(Nα,νρ(t)=k) −ρ(1−(t/T)ν/α)1k=0−ρ(t/T)ν/αP(Nα,νρ(T)=k)] +ρ(t/T)ν/αP(Nα,νρ(T)=k)t−νΓ(ν/α+1)Γ(ν/α−ν+1) =−λα(I−B)αP(Nα,νρ(t)=k)+λαρ(1−(t/T)ν/α)(I−B)α1k=0 +ρ(t/T)ν/α[λα(I−B)α+t−νΓ(ν/α+1)Γ(ν/α−ν+1)]P(Nα,νρ(T)=k);

then we get the desired equation by noting that

$$(I-B)^\alpha\mathbf{1}_{k=0}=\sum_{j=0}^{\infty}(-1)^j\binom{\alpha}{j}\mathbf{1}_{k-j=0}=(-1)^k\binom{\alpha}{k}.$$

The proof is complete.

Finally we remark that, even if the equations in Proposition 2.3 have some analogies with other results for fractional Poisson processes in the literature, here some standard techniques do not work because we deal with a finite time horizon (i.e. $t\in[0,T]$ for some $T<\infty$).

## 3 A class of correlated fractional negative binomial processes

It is well-known that the negative binomial process can be seen as a suitable compound Poisson process with logarithmic distributed summands (see e.g. Proposition 1.1 in [14]). More precisely, for some $p\in(0,1)$ and some integer $r\geq 1$, we have the probability generating function

$$u\mapsto\{h(m(u))\}^r,$$

where:

$$h(u):=e^{\lambda(u-1)},\ \text{with}\ \lambda=-\log p,$$

is the probability generating function of a Poisson distributed random variable with mean $\lambda$;

$$m(u):=\frac{\log(1-(1-p)u)}{\log p},\ \text{for}\ |u|<\frac{1}{1-p},$$

is the probability generating function of a logarithmic distributed random variable (obviously we have $|u|<\frac{1}{1-p}$ if $|u|\leq 1$).
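The compound Poisson representation can be verified directly: composing $h$ (with $\lambda=-\log p$) and $m$, and raising to the power $r$, must give the negative binomial probability generating function $\big(\frac{p}{1-(1-p)u}\big)^r$. A quick numeric sketch (names are ours):

```python
from math import exp, log

def h(u, lam):
    """Poisson pgf with mean lam."""
    return exp(lam * (u - 1))

def m(u, p):
    """Logarithmic-distribution pgf, defined for |u| < 1/(1-p)."""
    return log(1 - (1 - p) * u) / log(p)

def negbin_pgf(u, p, r):
    """Negative binomial pgf (p / (1 - (1-p)u))^r."""
    return (p / (1 - (1 - p) * u)) ** r

# With lam = -log p, h(m(u), lam) simplifies to p / (1 - (1-p)u),
# so its r-th power is exactly negbin_pgf(u, p, r).
```

The simplification is immediate: $h(m(u))=\exp(-\log p\,(m(u)-1))=\exp(-\log(1-(1-p)u)+\log p)$.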

In this section we present a class of correlated fractional negative binomial processes on a finite time interval $[0,T]$. More precisely we consider the same approach with the probability generating function of a space-time fractional Poisson distributed random variable; thus, for $\alpha,\nu\in(0,1]$, we have

$$h^{\alpha,\nu}(u):=E_{\nu,1}(-\lambda^\alpha(1-u)^\alpha)$$

in place of $h$ (note that $h^{1,1}$ coincides with $h$), again with $\lambda=-\log p$, and this meets $g$ in (5) with $T=1$. Thus we have

$$g(u):=\Big\{E_{\nu,1}\Big(-(-\log p)^\alpha\Big(1-\frac{\log(1-(1-p)u)}{\log p}\Big)^\alpha\Big)\Big\}^r=\Big\{E_{\nu,1}\Big(-\log^\alpha\Big(\frac{1-(1-p)u}{p}\Big)\Big)\Big\}^r,\qquad(10)$$

where, again, $\{\cdot\}^r$ is an integer power of the function, and $\lambda=-\log p$. We remark that $g$ in (10) is the probability generating function of $M_g=N^{\alpha,\nu}_\rho(T)$, but it does not depend on $T$, as happens instead for $g$ in (5).

As far as the distribution function $F$ is concerned, we argue as in Section 2 as follows: for all $t\in[0,T]$, we want to have the condition

$$G_{N^{\alpha,\nu}_0(t)}(u)=\Big\{E_{\nu,1}\Big(-\log^\alpha\Big(\frac{1-(1-q(t))u}{q(t)}\Big)\Big)\Big\}^r$$

for some $q(t)$ such that $q(t)\in(0,1]$ for all $t\in[0,T]$ and $q(T)=p$. Then, by (1) with $\rho=0$ and by (10), we require that

$$\frac{1-(1-q(t))u}{q(t)}=\frac{1-(1-p)(1-F(t)+F(t)u)}{p}=\frac{1-(1-p)(1-F(t))-(1-p)F(t)u}{p};$$

so, if we divide both the numerator and the denominator by $1-(1-p)(1-F(t))$, we get

$$q(t)=\frac{p}{1-(1-p)(1-F(t))}.$$

Moreover we have

$$q(t)=\frac{p}{p+(1-p)F(t)}=\frac{1}{1+(\frac{1}{p}-1)F(t)}$$

which yields

$$F(t):=\frac{\frac{1}{q(t)}-1}{\frac{1}{p}-1}\quad(\text{for }t\in[0,T]),\qquad(11)$$
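The algebra above is easy to verify numerically: the $q(t)$ obtained from $F(t)$ makes the two arguments of the probability generating functions coincide, and (11) inverts the relation. A minimal sketch (names are ours):

```python
def q_from_F(F, p):
    """q(t) = p / (1 - (1-p)(1-F(t)))."""
    return p / (1 - (1 - p) * (1 - F))

def F_from_q(q, p):
    """Equation (11): F(t) = (1/q(t) - 1) / (1/p - 1)."""
    return (1 / q - 1) / (1 / p - 1)

def pgf_argument(u, q):
    """The quantity (1 - (1-q)u) / q appearing in the pgf condition."""
    return (1 - (1 - q) * u) / q
```

For any $p\in(0,1)$ and $F(t)\in[0,1]$, `pgf_argument(u, q_from_F(F, p))` coincides with $\frac{1-(1-p)(1-F(t)+F(t)u)}{p}$; moreover `q_from_F(0, p) = 1` and `q_from_F(1, p) = p`, matching the boundary conditions on $q$.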

and the function $q$ has to be decreasing. We also give a particular example with a choice of $q(t)$, and we provide the corresponding distribution function $F$.

###### Example 3.1.

If we set

$$q(t)=\frac{1-\lambda}{1-(1-\frac{t}{T})\lambda}$$

for some $\lambda\in(0,1)$, we recover the example in Section 3.3 in [4] (see also Section 4.3 in [9] for a generalization). In fact this choice of $q$ is the analogue of (3.6) in [4]; moreover, if we set $\lambda=1-p$, we have

$$q(t)=\frac{p}{1-(1-\frac{t}{T})(1-p)}=\frac{1}{1+(\frac{1}{p}-1)\frac{t}{T}}$$

and therefore $F(t)=t/T$ by (11).

In what follows we present the probability generating functions in Proposition 3.1 and, for $r=1$ only, the corresponding probability mass functions in Proposition 3.2 (for $r\geq 2$ we have the $r$-fold convolution of the probability mass function of the case $r=1$, but we cannot provide manageable formulas). Moreover, in Proposition 3.3, we give an equation for the probability generating functions in Proposition 3.1 for a particular choice of the parameters; in this case we consider fractional derivatives with respect to their argument $u$, and not with respect to time $t$.

###### Proposition 3.1.

The probability generating functions are

$$\begin{aligned}G_{N^{\alpha,\nu}_\rho(t)}(u)=&\,\rho\Big(1-\frac{\frac{1}{q(t)}-1}{\frac{1}{p}-1}\Big)+\rho\,\frac{\frac{1}{q(t)}-1}{\frac{1}{p}-1}\Big\{E_{\nu,1}\Big(-\log^\alpha\Big(\frac{1-(1-p)u}{p}\Big)\Big)\Big\}^r\\&+(1-\rho)\Big\{E_{\nu,1}\Big(-\log^\alpha\Big(\frac{1-(1-q(t))u}{q(t)}\Big)\Big)\Big\}^r.\end{aligned}$$

Proof. This is an immediate consequence of (1) and the formulas above.

In view of the next Proposition 3.2 some preliminaries are needed. Firstly we consider the Stirling numbers $s_{k,h}$; for their definition and some properties used below see e.g. [1], page 824. Moreover

$${}_p\Psi_q\left[\begin{matrix}(a_1,\alpha_1)\cdots(a_p,\alpha_p)\\(b_1,\beta_1)\cdots(b_q,\beta_q)\end{matrix}\,\Big|\,z\right]:=\sum_{j\geq 0}\frac{\prod_{h=1}^{p}\Gamma(a_h+\alpha_h j)}{\prod_{k=1}^{q}\Gamma(b_k+\beta_k j)}\frac{z^j}{j!}$$

is the Fox-Wright function (see e.g. (1.11.14) in [12]) under the convergence condition

$$\sum_{k=1}^{q}\beta_k-\sum_{h=1}^{p}\alpha_h>-1\qquad(12)$$

(see e.g. (1.11.15) in [12]).

###### Proposition 3.2.

If $r=1$, the probability mass functions are

$$\begin{aligned}P(N^{\alpha,\nu}_\rho(t)=k)=&\,(1-\rho)P(N^{\alpha,\nu}_0(t)=k)\\&+\rho\left\{\frac{\frac{1}{p}-\frac{1}{q(t)}}{\frac{1}{p}-1}\mathbf{1}_{k=0}+\frac{\frac{1}{q(t)}-1}{\frac{1}{p}-1}P(N^{\alpha,\nu}_0(T)=k)\right\}\quad(\text{for all }k\geq 0),\end{aligned}$$

where, for all $t\in[0,T]$,

$$P(N^{\alpha,\nu}_0(t)=k)=\begin{cases}E_{\nu,1}(-\log^\alpha(1+A_t))&\text{if }k=0\\\frac{1}{k!}\frac{(-A_t)^k}{(1+A_t)^k}\sum_{h=1}^{k}\log^{-h}(1+A_t)\,s_{k,h}\cdot{}_2\Psi_2\left[\begin{matrix}(1,\alpha)\ (1,1)\\(1-h,\alpha)\ (1,\nu)\end{matrix}\,\Big|\,-\log^\alpha(1+A_t)\right]&\text{if }k\geq 1\end{cases}\qquad(13)$$

and $A_t:=\frac{1-q(t)}{q(t)}$ (note that the convergence condition (12) holds because we have $(\alpha+\nu)-(\alpha+1)=\nu-1>-1$).

Proof. Firstly we remark that we need only check (13) (concerning the case $\rho=0$); in fact we obtain the formula for the general case by combining (2), $F$ in (11) and (13). It is well-known that

$$P(N^{\alpha,\nu}_0(t)=k)=\begin{cases}G_{N^{\alpha,\nu}_0(t)}(0)&\text{if }k=0\\\frac{1}{k!}\frac{d^k}{du^k}G_{N^{\alpha,\nu}_0(t)}(u)\Big|_{u=0}&\text{if }k\geq 1.\end{cases}\qquad(14)$$

Firstly, if $r=1$ as in the statement of the proposition, we have

$$G_{N^{\alpha,\nu}_0(t)}(0)=E_{\nu,1}\Big(-\log^\alpha\Big(\frac{1}{q(t)}\Big)\Big)=E_{\nu,1}(-\log^\alpha(1+A_t)),$$

and we immediately obtain (13) for $k=0$. Moreover, if we prove that

 dkduk