
# A Generalization of the Exponential-Logarithmic Distribution for Reliability and Life Data Analysis

Mohieddine Rahmouni,  Ayman Orabi

Corresponding author: ESSECT, University of Tunis, Tunisia, and Community College in Abqaiq, King Faisal University, Saudi Arabia. E-mail address: mohieddine.rahmouni@gmail.com

January 5, 2018
###### Abstract

In this paper, we introduce a new two-parameter lifetime distribution, called the exponential-generalized truncated logarithmic (EGTL) distribution, by compounding the exponential and generalized truncated logarithmic distributions. Our procedure generalizes the exponential-logarithmic (EL) distribution, which models the reliability of systems by first-order concepts, where the minimum lifetime is considered (Tahmasbi & Rezaei, 2008). In our approach, we assume that a system fails once a given number $k$ of its components have failed, and we therefore consider the $k$th-smallest lifetime instead of the minimum lifetime. The reliability and failure rate functions as well as their properties are presented for some special cases. The estimation of the parameters is attained by the maximum likelihood method, the expectation-maximization algorithm, the method of moments and the Bayesian approach, with a simulation study performed to illustrate the different methods of estimation. The application study is illustrated based on two real data sets used in many applications of reliability.

Keywords: Lifetime distributions, reliability, failure rate, order statistics, exponential distribution, truncated logarithmic distribution.

## 1 Introduction

Lifetime distributions are often used in reliability theory and survival analysis for modelling real data. They play a fundamental role in reliability in diverse disciplines such as finance, manufacturing, biological sciences, physics and engineering. The exponential distribution is a basic model in reliability theory and survival analysis. It is often used to model system reliability at a component level, assuming the failure rate is constant (Balakrishnan & Basu, 1995; Barlow & Proschan, 1975; Sinha & Kale, 1980). In recent years, a growing number of scholarly papers have been devoted to accommodating lifetime distributions with increasing or decreasing failure rate functions. The motivation is to give a parametric fit for real data sets where the underlying failure rates, arising from a latent competing-risk problem, present monotone shapes (non-constant hazard rates). The proposed distributions are introduced as extensions of the exponential distribution, following Adamidis & Loukas (1998) and Kuş (2007), by compounding some useful lifetime and truncated discrete distributions (for a review, see Barreto-Souza & Cribari-Neto (2009); Chahkandi & Ganjali (2009); Silva et al. (2010); Barreto-Souza et al. (2011); Cancho et al. (2011); Louzada-Neto et al. (2011); Morais & Barreto-Souza (2011); Hemmati et al. (2011); Nadarajah et al. (2013); Bakouch et al. (2014), and others). The genesis lies in competing-risk scenarios in the presence of latent risks, i.e. there is no information about the cause of the component’s failure (Basu & Klein, 1982). In fact, a system may experience multiple failure processes that compete against each other, and whichever occurs first causes the system to fail (Rafiee et al., 2017; Kalbfleisch & Prentice, 2002; Andersen et al., 2002; Tsiatis, 1998). The term competing risks refers to duration data where two or more causes are competing to determine the observed time-to-failure.
The potential multiple causes of failure are not mutually exclusive, but the interest lies in the time to the first of them (Putter et al., 2007; Bakoyannis & Touloumi, 2012). For further details see Basu (1981).

In the same way, the exponential-logarithmic (EL) distribution was proposed by Tahmasbi & Rezaei (2008) as a log-series mixture of exponential random variables. This two-parameter distribution with decreasing failure rate (DFR) is obtained by mixing the exponential and logarithmic distributions. It is based on the idea of modelling the reliability of a system where the time-to-failure occurs due to the presence of an unknown number of initial defects of some components. Suppose the breakdown (failure) of a system of components occurs due to the presence of a non-observable number, $Z$, of initial defects of the same kind, which can be identified only after causing failure and are then repaired perfectly (Adamidis & Loukas, 1998; Kuş, 2007). Let $X_i$ be the failure time of the system due to the $i$th defect, for $i=1,\dots,Z$. If we assume that the $X_i$ are iid exponential random variables independent of $Z$, which follows a truncated logarithmic distribution, then the time to the first failure $X_{(1)}=\min(X_1,\dots,X_Z)$ is adequately modelled by the EL distribution (Barreto-Souza & Silva, 2015; Bourguignon et al., 2014; Ross, 1976). For reliability studies, $X_{(1)}$ and $X_{(Z)}$ are used respectively in serial and parallel systems with identical components (Chahkandi & Ganjali, 2009; Ramos et al., 2015). However, one may determine the distribution of the $k$th-smallest time-to-failure (the $k$th order statistic), instead of the minimum lifetime (first order statistic).

There is a huge literature on the order statistics for reliability engineering (for a review, see Barlow & Proschan (1975); Sarhan & Greenberg (1962); Barlow & Proschan (1965); Pyke (1965); Gnedenko et al. (1969); Pledger & Proschan (1971); Barlow & Proschan (1981); David (1981); Bain & Englehardt (1991), and references contained therein). The motivation arises in reliability theory, where the so-called $k$-out-of-$n$ systems are studied (Xie & Wang, 2008; Xie et al., 2005; Proschan & Sethuraman, 1976; Kim et al., 1988). An engineering system consisting of $n$ components is working if at least $k$ out of the total $n$ components are operating, and it breaks down if $(n-k+1)$ or more components fail. Hence, a $k$-out-of-$n$ system fails at the time of the $(n-k+1)$th component failure (Barlow & Proschan, 1975; Kamps, 1995; Cramer & Kamps, 2001). This binary-state context is based on the assumption that a system or its components can be either fully working or completely failing. However, in reality, a system may provide its specified function at less than full capacity when some of its components operate in a degraded state (Ramirez-Marquez & Coit, 2005). The binary $k$-out-of-$n$ system reliability models have been extended to multi-state $k$-out-of-$n$ system reliability models by allowing more than two performance levels for the system and its components (Eryilmaz, 2014). Multi-state systems contain units presenting two or more failed states with multiple modes of failure and one working state (Anzanello, 2009). Reliability models provide, through multi-state, more realistic representations of engineering systems (Yingkui & Jing, 2012). Many authors have made contributions about the reliability estimation approaches for multi-state systems (Jenney & Sherwin, 1986; Page & Perry, 1988; Rocco et al., 2005; Ramirez-Marquez & Coit, 2004; Ramirez-Marquez & Levitin, 2008; Levitin, 2007; Levitin & Amari, 2008).

In this paper, we generalize the EL distribution (Tahmasbi & Rezaei, 2008), which models the time to the first failure, to a distribution more appropriate for modelling any order statistic (second, third, or any $k$th lifetime). For instance, suppose a machine produces a random number, $Z$, of units (light bulbs or wire fuses) which are put through a life test. Each item has a random lifetime $X_i$, $i=1,\dots,Z$. The EL distribution (Tahmasbi & Rezaei, 2008) is focused only on the minimum time-to-failure $X_{(1)}$, i.e. the first of the functioning components to fail. However, we may be interested in the $k$th failure time and then determine the lifetime distribution of the $k$th order statistic, assuming the system will fail once $k$ of the $Z$ units have failed. We may let $X_{(1)}\le X_{(2)}\le\cdots\le X_{(Z)}$ be the order statistics of the $Z$ independent observations of the lifetime, and then we consider the $k$th-smallest value $X_{(k)}$ instead of the minimum lifetime. We assume that the $X_i$’s are not observable, but that $X_{(k)}$ is. We would like to estimate the lifetime distribution given the observations on $X_{(k)}$. The proposed new family of lifetime distributions is obtained by compounding the exponential and generalized truncated logarithmic distributions, named the exponential-generalized truncated logarithmic (EGTL) distribution.

The paper is organized as follows: In section 2, we present the new family of lifetime distributions and the probability density function (pdf) for some special cases. The moment generating function, the moment, the reliability, the failure rate function and the random number generation are discussed in this section. The estimation of parameters for this new family of distributions will be discussed in section 3. It is attained by maximum likelihood (MLEs) and expectation maximization (EM) algorithms. The method of moments and the Bayesian approach are also presented as possible alternatives to the MLEs method. As illustration of these three methods of estimation, numerical computations will be performed in section 4. The application study is illustrated based on two real data sets in section 5. The last section concludes the paper.

## 2 Properties of the distribution

### 2.1 Distribution

The derivation of the new family of lifetime distributions depends on the generalization of the compound exponential and truncated logarithmic distributions as follows. Let $X_1,\dots,X_Z$ be iid exponential random variables with parameter $\theta$ and pdf given by $f(x)=\theta e^{-\theta x}$, for $x>0$, where $Z$ is a discrete random variable following a logarithmic-series distribution with parameter $p$ and probability mass function (pmf), $P(Z=z)$, given by:

$$P(Z=z)=\frac{-1}{\ln(1-p)}\,\frac{p^{z}}{z},\qquad z\in\{1,2,3,\dots\} \tag{1}$$

If $Z$ is a truncated at $k-1$ logarithmic random variable with parameter $p$, then its probability function is given by:

$$P_{k-1}(Z=z)=\frac{1}{A(p,k)}\,\frac{p^{z}}{z},\qquad k=1,2,3,\dots \ \text{ and } \ z=k,k+1,\dots \tag{2}$$

where,

$$A(p,k)=\sum_{j=k}^{\infty}\frac{p^{j}}{j}=-\ln(1-p)-\psi(k)\sum_{j=1}^{k-1}\frac{p^{j}}{j} \tag{3}$$

and,

$$\psi(k)=\begin{cases}0 & \text{if } k=1\\[2pt] 1 & \text{if } k=2,3,\dots\end{cases} \tag{4}$$

The pdf of the $k$th order statistic (the $k$th-smallest value of lifetime) among $z$ iid exponentially distributed lifetimes is given by equation (5) (see David & Nagaraja (1970); Balakrishnan & Cohen (1991); Balakrishnan (1996)):

$$f_k(x\,|\,z,\theta)=\theta\,\frac{\Gamma(z+1)}{\Gamma(k)\,\Gamma(z-k+1)}\,e^{-\theta(z-k+1)x}\left(1-e^{-\theta x}\right)^{k-1},\qquad \theta,x>0 \tag{5}$$

From equations (2) and (5), the joint probability density is derived as (the proofs of all steps and equations are presented in the appendix):

$$g_k(x,z\,|\,p,\theta)=\frac{\Gamma(z)}{\Gamma(k)\,\Gamma(z-k+1)}\,\frac{\theta\,p^{z}\,e^{-\theta(z-k+1)x}\left(1-e^{-\theta x}\right)^{k-1}}{A(p,k)} \tag{6}$$

where $x>0$ is the lifetime of the system and $z$ is the last order statistic. In equation (6) we consider the ascending order $X_{(1)}\le X_{(2)}\le\cdots\le X_{(z)}$. The joint probability density is determined by compounding a truncated at $k-1$ logarithmic-series distribution and the pdf of the $k$th order statistic ($X_{(k)}$). The use of the truncated at $k-1$ logarithmic distribution is motivated by mathematical interest, because we are interested in the $k$th order statistic. There is a left-truncation scheme, where only units that survive a sufficient time are included, i.e. we observe only individuals or units for which $Z$ is at least the order $k$ of the failure that truncates them. In comparison with the formulation of Tahmasbi & Rezaei (2008) and Adamidis & Loukas (1998), we consider the $k$th-smallest value of lifetime $X_{(k)}$ instead of the minimum lifetime $X_{(1)}$. So, our proposed new lifetime distribution, named the exponential-generalized truncated logarithmic (EGTL) distribution, is the marginal density distribution of $X_{(k)}$ given by:

$$g_k(x\,|\,p,\theta)=\frac{\theta\,p^{k}\,e^{-\theta x}\left(1-e^{-\theta x}\right)^{k-1}}{A(p,k)\left(1-p\,e^{-\theta x}\right)^{k}},\qquad x\in[0,\infty) \tag{7}$$

where $k$ is the shape parameter and $\theta$ is the scale parameter. This distribution is more appropriate for modelling any order statistic (first, second, or any $k$th lifetime). The particular case of the EGTL density function, for $k=1$, is the EL distribution modelling the time of the first failure $X_{(1)}$, given by Tahmasbi & Rezaei (2008):

$$g_1(x;p,\theta)=\frac{p\,\theta\,e^{-\theta x}}{-\ln(1-p)\left(1-p\,e^{-\theta x}\right)}$$

For $p\in(0,1)$, this pdf decreases strictly in $x$ and tends to zero as $x\to\infty$. The modal value of the density of the EL distribution, attained at $x=0$, is $p\theta/\left(-(1-p)\ln(1-p)\right)$, and its median is $\frac{1}{\theta}\ln\left(1+\sqrt{1-p}\right)$. The EL distribution tends to an exponential distribution with rate parameter $\theta$ as $p\to 0^{+}$. The density is convex (concave upward) on $(0,\infty)$. The graphs of the density resemble those of the exponential and Pareto II distributions (see Figure 1).
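As a quick numerical sanity check, the EGTL density in equation (7) can be implemented directly. The sketch below is ours (plain Python; the function names and the quadrature settings are illustrative choices, not from the paper): it evaluates $g_k$ and confirms by crude trapezoidal integration that the density integrates to one.

```python
import math

def A(p, k):
    # A(p, k) = sum_{j >= k} p^j / j = -ln(1-p) - psi(k) * sum_{j=1}^{k-1} p^j / j;
    # the empty sum for k = 1 plays the role of the indicator psi(k)
    return -math.log(1.0 - p) - sum(p**j / j for j in range(1, k))

def egtl_pdf(x, p, theta, k):
    # Equation (7)
    e = math.exp(-theta * x)
    return (theta * p**k * e * (1.0 - e)**(k - 1)
            / (A(p, k) * (1.0 - p * e)**k))

def total_mass(p, theta, k, upper=50.0, n=20000):
    # trapezoidal rule on [0, upper]; upper is chosen so the tail is negligible
    h = upper / n
    s = 0.5 * (egtl_pdf(0.0, p, theta, k) + egtl_pdf(upper, p, theta, k))
    s += sum(egtl_pdf(i * h, p, theta, k) for i in range(1, n))
    return s * h
```

For instance, `total_mass(0.5, 1.0, 1)` and `total_mass(0.5, 1.0, 3)` should both be very close to 1.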

Also, the cumulative distribution function (cdf) of $X_{(k)}$ corresponding to the pdf in equation (7) is given by:

$$G_k(x\,|\,p,\theta)=\frac{1}{A(p,k)}\sum_{r=0}^{k-1}\binom{k-1}{r}(-1)^{r}(1-p)^{r}\,I(x,r) \tag{8}$$

where,

$$I(x,r)=\int_{1-p}^{1-p\,e^{-\theta x}} t^{-(r+1)}\,dt$$

Then, the final cdf can be reduced to:

$$G_k(x\,|\,p,\theta)=\frac{1}{A(p,k)}\sum_{j=k}^{\infty}\frac{(py)^{j}}{j}=\frac{1}{A(p,k)}\left[-\ln(1-py)-\psi(k)\sum_{j=1}^{k-1}\frac{(py)^{j}}{j}\right]=\frac{\ln(1-py)+\psi(k)\sum_{j=1}^{k-1}\frac{(py)^{j}}{j}}{\ln(1-p)+\psi(k)\sum_{j=1}^{k-1}\frac{p^{j}}{j}} \tag{9}$$

where,

$$y=\frac{1-e^{-\theta x}}{1-p\,e^{-\theta x}}$$

### 2.2 Moment generating function and $r$th moment

Suppose $X$ has the pdf in equation (7); then the moment generating function (mgf) is given by:

$$E\!\left(e^{tx}\right)=\frac{p^{k}}{A(p,k)}\sum_{i=0}^{\infty}\binom{k-1+i}{i}p^{i}\,\beta\!\left(i-\frac{t}{\theta}+1,\,k\right) \tag{10}$$

where,

$$\beta(a,b)=\int_{0}^{1}t^{a-1}(1-t)^{b-1}\,dt$$

and hence, we can write the mgf as:

$$E\!\left(e^{tx}\right)=\frac{p^{k}}{A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}p^{i}(-1)^{j}\,\frac{1}{i+j-\frac{t}{\theta}+1} \tag{11}$$

$$E(x)=\left.\frac{\partial}{\partial t}E\!\left(e^{tx}\right)\right|_{t=0}=\left.\frac{p^{k}}{\theta\,A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}p^{i}(-1)^{j}\,\frac{1}{\left(i+j-\frac{t}{\theta}+1\right)^{2}}\right|_{t=0}=\frac{p^{k}}{\theta\,A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{2}}$$

The $r$th moment is given by:

$$E\!\left(x^{r}\right)=\frac{\Gamma(r+1)}{\theta^{r}}\,\frac{p^{k}}{A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{r+1}} \tag{12}$$
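The double series in equation (12) converges geometrically in $i$, so truncating the sum over $i$ gives a practical way to compute moments. A minimal sketch (our code; the truncation level is our assumption and is safe for $p$ not too close to 1):

```python
import math

def A(p, k):
    # A(p, k) = -ln(1-p) - psi(k) * sum_{j=1}^{k-1} p^j / j
    return -math.log(1.0 - p) - sum(p**j / j for j in range(1, k))

def egtl_moment(r, p, theta, k, terms=400):
    # Equation (12); the infinite sum over i is truncated at `terms`
    s = sum(math.comb(k - 1 + i, i) * math.comb(k - 1, j)
            * p**i * (-1)**j / (i + j + 1)**(r + 1)
            for i in range(terms) for j in range(k))
    return math.gamma(r + 1) / theta**r * p**k / A(p, k) * s
```

A convenient check: as $p\to 0^{+}$ with $k=1$ the distribution approaches an exponential with rate $\theta$, so `egtl_moment(1, 0.01, 2.0, 1)` should be close to $1/\theta=0.5$.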

### 2.3 Reliability and failure rate functions

It is well known that the reliability (survival) function is the probability of being alive just before duration $x$, given by $S(x)=P(X\geq x)$, which is the probability that the event of interest has not occurred by duration $x$. So, the reliability is the probability that a system will be successful in the interval from time $0$ to time $x$, where $X$ is a random variable denoting the time-to-failure or failure time. One may refer to the literature on reliability theory (Barlow & Proschan, 1975, 1981; Basu, 1988). The survival function corresponding to the pdf in equation (7) is given by equation (13). Table (1) presents the reliability function for some special cases.

$$S_k(x\,|\,p,\theta)=1-\frac{\ln\!\left(1-p\,\frac{1-e^{-\theta x}}{1-p\,e^{-\theta x}}\right)+\psi(k)\sum_{j=1}^{k-1}\frac{1}{j}\left(p\,\frac{1-e^{-\theta x}}{1-p\,e^{-\theta x}}\right)^{j}}{\ln(1-p)+\psi(k)\sum_{j=1}^{k-1}\frac{p^{j}}{j}} \tag{13}$$

The failure rate, known as the hazard rate function $h(x)$, is the instantaneous rate of occurrence of the event of interest at duration $x$ (i.e. the rate of event occurrence per unit of time). Mathematically, it is equal to the pdf of events at $x$ divided by the probability of surviving to that duration without experiencing the event. Thus, we define the failure rate function as in Barlow et al. (1963) by $h_k(x)=g_k(x)/S_k(x)$. The hazard function for some special cases is given in table (2).

The failure rate function is analytically related to the failure’s probability distribution. It leads to the examination of the increasing (IFR) or decreasing (DFR) failure rate properties of life-length distributions. $F$ is an IFR distribution if $h_k(x)$ increases for all $x$ such that $F(x)<1$. The motivation of the EGTL lifetime distribution is the realistic feature of the hazard rate in many real-life physical and non-physical systems, which is not necessarily monotonically increasing, decreasing or constant. If $k=1$, the hazard rate function is decreasing, following Tahmasbi & Rezaei (2008): if $x\to 0$ then $h_1(x)\to -p\theta/\left((1-p)\ln(1-p)\right)$, and if $x\to\infty$ then $h_1(x)\to\theta$. For $k\geq 2$, there is an increasing failure rate: if $x\to 0$ then $h_k(x)\to 0$, and if $x\to\infty$ then $h_k(x)\to\theta$ (see Figure 2).
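These monotonicity properties are easy to probe numerically. The sketch below is ours: it implements the cdf from equation (9), takes $S_k=1-G_k$ as in equation (13), and evaluates the hazard $h_k(x)=g_k(x)/S_k(x)$ on a few points.

```python
import math

def A(p, k):
    return -math.log(1.0 - p) - sum(p**j / j for j in range(1, k))

def egtl_pdf(x, p, theta, k):
    # Equation (7)
    e = math.exp(-theta * x)
    return (theta * p**k * e * (1.0 - e)**(k - 1)
            / (A(p, k) * (1.0 - p * e)**k))

def egtl_cdf(x, p, theta, k):
    # Equation (9), with y = (1 - e^{-theta x}) / (1 - p e^{-theta x});
    # numerator and denominator are both negative, so the ratio is in [0, 1]
    e = math.exp(-theta * x)
    y = (1.0 - e) / (1.0 - p * e)
    num = math.log(1.0 - p * y) + sum((p * y)**j / j for j in range(1, k))
    den = math.log(1.0 - p) + sum(p**j / j for j in range(1, k))
    return num / den

def hazard(x, p, theta, k):
    # h_k(x) = g_k(x) / S_k(x)
    return egtl_pdf(x, p, theta, k) / (1.0 - egtl_cdf(x, p, theta, k))
```

With $p=0.5$ and $\theta=1$, the hazard decreases along $x=0.1,1,3$ for $k=1$ and increases for $k=2$, and in both cases it approaches $\theta$ for large $x$.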

### 2.4 Random number generation

We can generate a random variable $X$ from the cdf of $X_{(k)}$ in equation (9) using the following steps:

• Generate a random variable $U$ from the standard uniform distribution.

• Solve the non-linear equation in $y$:

$$U=\frac{\ln(1-py)+\psi(k)\sum_{j=1}^{k-1}\frac{(py)^{j}}{j}}{\ln(1-p)+\psi(k)\sum_{j=1}^{k-1}\frac{p^{j}}{j}} \tag{14}$$

where,

$$\psi(k)=\begin{cases}0 & \text{if } k=1\\[2pt] 1 & \text{if } k=2,3,\dots\end{cases}$$
• Calculate the values of $X$ such that:

$$X=-\frac{1}{\theta}\ln\!\left(\frac{1-y}{1-py}\right) \tag{15}$$

where $X$ is an EGTL random variable with parameters $p$ and $\theta$. Note that for the special case $k=1$, we generate $X$ directly from the following equation:

$$X=-\frac{1}{\theta}\ln\!\left(\frac{1-(1-p)^{1-u}}{p}\right) \tag{16}$$
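The steps above amount to inverse-transform sampling. The sketch below is ours: it uses the closed form (16) for $k=1$, and for $k\geq 2$ it solves $G_k(X)=u$ directly by bisection on the cdf (9), which is equivalent to solving (14) for $y$ and applying (15).

```python
import math
import random

def egtl_cdf(x, p, theta, k):
    # Equation (9)
    e = math.exp(-theta * x)
    y = (1.0 - e) / (1.0 - p * e)
    num = math.log(1.0 - p * y) + sum((p * y)**j / j for j in range(1, k))
    den = math.log(1.0 - p) + sum(p**j / j for j in range(1, k))
    return num / den

def egtl_rvs(p, theta, k, u=None, rng=random):
    # one draw by inversion; pass u explicitly to make the draw deterministic
    if u is None:
        u = rng.random()
    if k == 1:                                    # closed form, equation (16)
        return -math.log((1.0 - (1.0 - p)**(1.0 - u)) / p) / theta
    lo, hi = 0.0, 1.0
    while egtl_cdf(hi, p, theta, k) < u:          # bracket the u-quantile
        hi *= 2.0
    for _ in range(80):                           # bisection on G_k(x) = u
        mid = 0.5 * (lo + hi)
        if egtl_cdf(mid, p, theta, k) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

By construction, feeding a generated value back into the cdf recovers the uniform draw, which makes the sampler easy to validate.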

## 3 Estimation of the parameters

In this section, we will determine the estimates of the parameters $p$ and $\theta$ for the EGTL new family of distributions. There are many methods available for estimating the parameters of interest. We present here the three most popular ones: maximum likelihood, method of moments and Bayesian estimation.

### 3.1 Maximum Likelihood estimation

Let $(x_1,\dots,x_n)$ be a random sample from the EGTL distribution. The log-likelihood function given the observed values is:

$$\ln L=n\ln\theta+nk\ln p-\theta\sum_{i=1}^{n}x_i+(k-1)\sum_{i=1}^{n}\ln\!\left(1-e^{-\theta x_i}\right)-n\ln A(p,k)-k\sum_{i=1}^{n}\ln\!\left(1-p\,e^{-\theta x_i}\right) \tag{17}$$

We subsequently derive the associated gradients:

$$\frac{\partial \ln L}{\partial p}=\frac{n\,p^{k-1}}{(1-p)\left[\ln(1-p)+\psi(k)\sum_{j=1}^{k-1}\frac{p^{j}}{j}\right]}+\frac{nk}{p}+k\sum_{i=1}^{n}\frac{1}{e^{\theta x_i}-p}$$

$$\frac{\partial \ln L}{\partial \theta}=\frac{n}{\theta}-\sum_{i=1}^{n}x_i+(k-1)\sum_{i=1}^{n}\frac{x_i}{e^{\theta x_i}-1}-kp\sum_{i=1}^{n}\frac{x_i}{e^{\theta x_i}-p}$$
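These score equations can be validated against finite differences of the log-likelihood (17). The sketch below is ours (the sample and the test points are arbitrary illustrative values):

```python
import math

def A(p, k):
    return -math.log(1.0 - p) - sum(p**j / j for j in range(1, k))

def loglik(xs, p, theta, k):
    # Equation (17)
    n = len(xs)
    return (n * math.log(theta) + n * k * math.log(p) - theta * sum(xs)
            + (k - 1) * sum(math.log(1.0 - math.exp(-theta * x)) for x in xs)
            - n * math.log(A(p, k))
            - k * sum(math.log(1.0 - p * math.exp(-theta * x)) for x in xs))

def score(xs, p, theta, k):
    # analytic gradients; note that dA/dp = p^(k-1)/(1-p) and
    # ln(1-p) + sum_{j<k} p^j/j = -A(p,k)
    n = len(xs)
    dlog = math.log(1.0 - p) + sum(p**j / j for j in range(1, k))
    dp = (n * p**(k - 1) / ((1.0 - p) * dlog) + n * k / p
          + k * sum(1.0 / (math.exp(theta * x) - p) for x in xs))
    dth = (n / theta - sum(xs)
           + (k - 1) * sum(x / (math.exp(theta * x) - 1.0) for x in xs)
           - k * p * sum(x / (math.exp(theta * x) - p) for x in xs))
    return dp, dth
```

A central finite difference at any interior point $(p,\theta)$ should agree with `score` to several decimal places.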

We need the Fisher information matrix for interval estimation and tests of hypotheses on the parameters. It can be expressed in terms of the second derivatives of the log-likelihood function:

$$I(\hat p,\hat\theta)=-\begin{pmatrix}E\!\left(\dfrac{\partial^{2}\ln L}{\partial p^{2}}\right) & E\!\left(\dfrac{\partial^{2}\ln L}{\partial p\,\partial\theta}\right)\\[8pt] E\!\left(\dfrac{\partial^{2}\ln L}{\partial\theta\,\partial p}\right) & E\!\left(\dfrac{\partial^{2}\ln L}{\partial\theta^{2}}\right)\end{pmatrix}\Bigg|_{(p,\theta)=(\hat p,\hat\theta)}$$

The maximum likelihood estimates (MLEs) $\hat p$ and $\hat\theta$ of the EGTL parameters $p$ and $\theta$, respectively, can be found numerically using the iterative EM algorithm to handle the incomplete-data problem (Dempster et al., 1977; Krishnan & McLachlan, 1997). The iterative method consists in repeatedly updating the parameter estimates by replacing the "missing data" with the new estimated values. The standard method used to determine the MLEs is the Newton-Raphson algorithm, which requires second derivatives of the log-likelihood function at every iteration. The main drawback of the EM algorithm is its rather slow convergence, compared to the Newton-Raphson method, when the "missing data" contain a relatively large amount of information (Little & Rubin, 1983). Recently, several researchers have used the EM method, such as Adamidis & Loukas (1998), Adamidis et al. (2005), Karlis & Xekalaki (2003), Ng et al. (2002) and others. Newton-Raphson is required for the M-step of the EM algorithm. To start the algorithm, a hypothetical distribution of the complete data is defined with the pdf in equation (6), and then we derive the conditional mass function as:

$$p(z\,|\,x,p,\theta)=\frac{\Gamma(z)}{\Gamma(k)\,\Gamma(z-k+1)}\,p^{z-k}e^{-\theta(z-k)x}\left(1-p\,e^{-\theta x}\right)^{k} \tag{18}$$

E-step:

$$E(z\,|\,x,p,\theta)=\frac{k}{1-p\,e^{-\theta x}} \tag{19}$$

M-step:

$$\hat p^{\,(r+1)}=-\frac{\left(1-p^{(r+1)}\right)\left[\ln\!\left(1-p^{(r+1)}\right)+\psi(k)\sum_{j=1}^{k-1}\frac{\left(p^{(r+1)}\right)^{j}}{j}\right]}{n\left(p^{(r+1)}\right)^{k-1}}\sum_{i=1}^{n}\frac{k}{1-p^{(r)}e^{-\theta^{(r)}x_i}} \tag{20}$$

$$\hat\theta^{\,(r+1)}=n\left[\sum_{i=1}^{n}\frac{k\,x_i}{1-p^{(r)}e^{-\theta^{(r)}x_i}}-(k-1)\sum_{i=1}^{n}\frac{x_i}{1-e^{-\theta^{(r+1)}x_i}}\right]^{-1} \tag{21}$$

### 3.2 Method of moments estimation

The method of moments involves equating theoretical and sample moments. The estimate of the $r$th moment is $\frac{1}{n}\sum_{i=1}^{n}x_i^{r}$. For the EGTL distribution, the $r$th moment is given by equation (12). The corresponding first and second moments are given by:

$$\frac{1}{n}\sum_{i=1}^{n}x_i=E(x)=\frac{1}{\theta}\,\frac{p^{k}}{A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{2}} \tag{22}$$

$$\frac{1}{n}\sum_{i=1}^{n}x_i^{2}=E\!\left(x^{2}\right)=\frac{2}{\theta^{2}}\,\frac{p^{k}}{A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{3}} \tag{23}$$

From equation (22) we obtain:

$$\theta=\frac{p^{k}}{\bar{x}\,A(p,k)}\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{2}} \tag{24}$$

and then we should solve the following equation in $p$:

$$\frac{1}{n}\sum_{i=1}^{n}x_i^{2}-\frac{2\,\bar{x}^{2}A(p,k)\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{3}}}{p^{k}\left[\sum_{i=0}^{\infty}\sum_{j=0}^{k-1}\binom{k-1+i}{i}\binom{k-1}{j}\frac{p^{i}(-1)^{j}}{(i+j+1)^{2}}\right]^{2}}=0 \tag{25}$$

Thereafter, we determine $\hat\theta$ by replacing $p$ with its estimated value $\hat p$ in equation (24).
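The moment equations can be solved numerically. The sketch below is ours: it treats the left-hand side of equation (25) as a function of $p$, brackets a sign change on a coarse grid, bisects to the root, and then recovers $\theta$ from equation (24). The grid, the truncation level, and the assumption that a single sign change exists on $(0,0.95)$ are ours, not the paper's.

```python
import math

def A(p, k):
    return -math.log(1.0 - p) - sum(p**j / j for j in range(1, k))

def series(p, k, power, terms=800):
    # the double sum appearing in equations (22)-(25), truncated in i
    return sum(math.comb(k - 1 + i, i) * math.comb(k - 1, j)
               * p**i * (-1)**j / (i + j + 1)**power
               for i in range(terms) for j in range(k))

def mom_estimates(m1, m2, k):
    def f(p):
        # left-hand side of equation (25)
        s2, s3 = series(p, k, 2), series(p, k, 3)
        return m2 - 2.0 * m1**2 * A(p, k) * s3 / (p**k * s2**2)
    grid = [i / 100.0 for i in range(1, 96)]
    vals = [f(p) for p in grid]
    lo = hi = None
    for a_, b_, fa, fb in zip(grid, grid[1:], vals, vals[1:]):
        if fa * fb <= 0.0:          # bracket found
            lo, hi = a_, b_
            break
    if lo is None:
        raise ValueError("no sign change: moments inconsistent with this k")
    for _ in range(60):             # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    p_hat = 0.5 * (lo + hi)
    theta_hat = p_hat**k * series(p_hat, k, 2) / (m1 * A(p_hat, k))  # eq. (24)
    return p_hat, theta_hat
```

As a self-consistency check, moments computed from known parameters should be mapped back to (approximately) those parameters.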

### 3.3 Bayesian estimation

In the Bayesian approach, inferences are expressed in a posterior distribution for the parameters, which is, according to Bayes’ theorem, given in terms of the likelihood and a prior density function by:

$$P_k(p,\theta\,|\,x_1,x_2,\dots,x_n)=\frac{g_k(x\,|\,p,\theta)\,\pi_k(p,\theta)}{g_k(x)} \tag{26}$$

where $\pi_k(p,\theta)$ is a prior probability distribution function and $g_k(x\,|\,p,\theta)$ is the likelihood of the observations $(x_1,\dots,x_n)$. Note that $g_k(x)$ is the normalizing constant for the function $g_k(x\,|\,p,\theta)\,\pi_k(p,\theta)$, given by:

$$g_k(x)=\int\!\!\int \pi_k(p,\theta)\,g_k(x\,|\,p,\theta)\,dp\,d\theta \tag{27}$$

We should first specify our initial beliefs or other sorts of knowledge on the prior distribution $\pi_k(p,\theta)$. Here, we suppose that the standard uniform distribution on the interval $(0,1)$ is a prior distribution for the parameter $p$, and that the gamma distribution, $\Gamma(a,b)$, is a prior distribution for the parameter $\theta$, where $a$ is a shape parameter and $b$ is a scale parameter. The prior probability function is then equal to:

$$\pi_k(p,\theta)=\pi_{1,k}(p)\,\pi_{2,k}(\theta) \tag{28}$$

where $\pi_{1,k}(p)=1$ for $0<p<1$, and $\pi_{2,k}(\theta)=\frac{1}{\Gamma(a)\,b^{a}}\,\theta^{a-1}e^{-\theta/b}$ for $\theta>0$.

Using the mean square error as a risk function, we obtain the Bayes estimates as the means of the posterior distribution:

$$\hat\theta=E(\theta\,|\,x)=\int_{0}^{\infty}\!\!\int_{0}^{1}\theta\,P_k(p,\theta\,|\,x)\,dp\,d\theta \tag{29}$$

$$\hat p=E(p\,|\,x)=\int_{0}^{1}\!\!\int_{0}^{\infty}p\,P_k(p,\theta\,|\,x)\,d\theta\,dp \tag{30}$$
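For small samples, the posterior means (29)-(30) can be approximated by brute-force numerical integration on a grid, since normalizing constants cancel in the ratio (26). The sketch below is ours; the grid sizes, the integration cutoff for $\theta$, and the sample data are illustrative assumptions.

```python
import math

def A(p, k):
    return -math.log(1.0 - p) - sum(p**j / j for j in range(1, k))

def egtl_pdf(x, p, theta, k):
    # Equation (7)
    e = math.exp(-theta * x)
    return (theta * p**k * e * (1.0 - e)**(k - 1)
            / (A(p, k) * (1.0 - p * e)**k))

def posterior_means(xs, k, a=2.0, b=1.0, n_p=80, n_th=80, th_max=8.0):
    # Riemann-sum version of equations (29)-(30): uniform(0,1) prior on p,
    # gamma(a, b) prior kernel on theta (normalizing constants cancel)
    total = ep = eth = 0.0
    for i in range(1, n_p):
        p = i / n_p
        for j in range(1, n_th + 1):
            th = th_max * j / n_th
            w = th**(a - 1.0) * math.exp(-th / b)   # prior kernel for theta
            for x in xs:
                w *= egtl_pdf(x, p, th, k)          # likelihood, eq. (7)
            total += w
            ep += p * w
            eth += th * w
    return ep / total, eth / total
```

Scaling the data up should pull the posterior mean of the rate parameter $\theta$ down, which gives a cheap qualitative check.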

## 4 Simulation study

As an illustration of the three methods of estimation, numerical computations have been performed using the steps presented in section 2.4 for the random number generation. The numerical study was based on random samples of increasing sizes from the EGTL distribution, for several values of the parameters $(p,\theta)$ and three cases of $k$, with fixed initial values for the iterative procedures. For this purpose, we have used the program Mathcad 14.0. After determining the parameter estimates, we compute the biases, the variances and the mean square errors (MSEs), where $\mathrm{MSE}(\hat\gamma)=E\!\left[(\hat\gamma-\gamma)^{2}\right]=\mathrm{Var}(\hat\gamma)+\mathrm{bias}^{2}(\hat\gamma)$ and $\mathrm{bias}(\hat\gamma)=E(\hat\gamma)-\gamma$. An estimator is said to be efficient if its mean square error (MSE) is minimum among all competitors; in fact, $\hat\gamma_1$ is more efficient than $\hat\gamma_2$ if $\mathrm{MSE}(\hat\gamma_1)\leq \mathrm{MSE}(\hat\gamma_2)$.

Table 3 reports the results from the simulated data, where the variances and the MSEs of the parameters are given. The results show that, for each case of $k$, the variances and the MSEs decrease as the sample size increases. We see that the values from the Bayesian method are generally lower than those obtained using the ML approach.

## 5 Application examples

In this section, we fit the EGTL distribution to two real data sets using the MLEs. The first set (table 4) consists of "failure times for right rear brakes on D9G-66A caterpillar tractors", reproduced from Barlow & Campo (1975) and used also by Chang & Rao (1993). These data are used in many applications of reliability (Adamidis et al., 2005; Tsokos, 2012; Shahsanaei et al., 2012). The second set of data involves observations (table 5) of the results from an experiment concerning "the tensile fatigue characteristics of a polyester/viscose yarn". These data were presented by Picciotto (1970) to study the problem of warp breakage during weaving. The observations were obtained on the cycles to failure of a yarn sample put to test under a given strain level. The sample is used in Quesenberry & Kent (1982) as an example to illustrate a selection procedure among probability distributions used in reliability. The reliability function of these two data sets belongs to the increasing failure rate class (Doksum & Yandell, 1984; Adamidis et al., 2005). In addition to our class of distributions, the gamma and Weibull distributions were fitted to these data sets. The respective densities of the gamma and Weibull distributions are $f(x)=\frac{x^{\alpha-1}e^{-x/\beta}}{\Gamma(\alpha)\,\beta^{\alpha}}$ and $f(x)=\frac{\alpha}{\beta}\left(\frac{x}{\beta}\right)^{\alpha-1}e^{-(x/\beta)^{\alpha}}$.

Table 6 shows the fitted parameters, the calculated values of the Kolmogorov-Smirnov (K-S) statistic and their respective p-values for the two sets of data. It should be noted that the K-S test compares an empirical distribution with a known (not estimated) one. It is used to decide if a sample comes from a population with a specific distribution ($H_0$: the data follow the specified distribution). We estimate some special cases of $k$ for the EGTL family of distributions. The p-values are significant only for the case $k=1$ for the Barlow & Campo (1975) and Quesenberry & Kent (1982) data sets. In fact, the data exhibit increasing failure rates, but the EGTL distribution has a decreasing failure rate if $k=1$ (see figure 2). The new lifetime distribution provides a good fit to the data sets. The K-S test shows that the EGTL distribution is an attractive alternative to the popular gamma and Weibull distributions. It generalizes the reliability lifetime distributions to any order statistic. Indeed, as shown in section 2.3, if $k=1$ the hazard rate function is decreasing, following Tahmasbi & Rezaei (2008), and there is an increasing hazard rate for $k\geq 2$.

## 6 Conclusion

We define a new two-parameter lifetime distribution, called the EGTL distribution. Our procedure generalizes the EL distribution proposed by Tahmasbi & Rezaei (2008). We derive some mathematical properties and we present the plots of the pdf and the failure rate functions for some special cases. The estimation of the parameters is attained by the maximum likelihood method, the EM algorithm, the method of moments and the Bayesian approach, with numerical computations performed as an illustration of the different methods of estimation. The application study is illustrated based on two real data sets used in many applications of reliability. We have shown that our proposed EGTL distribution is suitable for modelling the time to any failure, and not only the time to the first or the last failure. It is very competitive compared with its standard counterpart distributions.

Ordered random variables have been considered here in ascending order. The paper may be extended to the concept of dual generalized order statistics, introduced by Burkschat et al. (2003), which enables a common approach to descending ordered spacings such as reversed order statistics and lower record values.

## Appendix

Let $X_1,\dots,X_Z$ be iid exponential random variables with pdf given by $f(x)=\theta e^{-\theta x}$, for $x>0$, where $Z$ is a log-series random variable with pmf, $P(Z=z)$, given by:

$$P(Z=z)=\frac{-1}{\ln(1-p)}\,\frac{p^{z}}{z},\qquad z\in\{1,2,3,\dots\} \tag{31}$$

From the Taylor series, for $-1<x\leq 1$ we have:

$$\ln(1+x)=\sum_{j=1}^{\infty}\frac{(-1)^{j+1}}{j}\,x^{j}$$

then,

$$\ln(1-p)=-\sum_{j=1}^{\infty}\frac{p^{j}}{j}$$

The truncated at $k-1$ logarithmic distribution with parameter $p$ is:

$$P_{k-1}(Z=z)=\frac{1}{A(p,k)}\,\frac{p^{z}}{z},\qquad k=1,2,3,\dots \ \text{ and } \ z=k,k+1,\dots \tag{32}$$

where,

$$A(p,k)=\sum_{j=k}^{\infty}\frac{p^{j}}{j}=-\ln(1-p)-\psi(k)\sum_{j=1}^{k-1}\frac{p^{j}}{j} \tag{33}$$

and,

$$\psi(k)=\begin{cases}0 & \text{if } k=1\\[2pt] 1 & \text{if } k=2,3,\dots\end{cases} \tag{34}$$

The pdf of the $k$th order statistic is:

$$f_k(x\,|\,z,\theta)=\theta\,\frac{\Gamma(z+1)}{\Gamma(k)\,\Gamma(z-k+1)}\,e^{-\theta(z-k+1)x}\left(1-e^{-\theta x}\right)^{k-1},\qquad \theta,x>0 \tag{35}$$

Then, the joint distribution from equations (32) and (35) is:

$$g_k(x,z\,|\,p,\theta)=f_k(x\,|\,z,\theta)\,P_{k-1}(z\,|\,p)=\theta\,\frac{\Gamma(z+1)}{\Gamma(k)\,\Gamma(z-k+1)}\,e^{-\theta(z-k+1)x}\left(1-e^{-\theta x}\right)^{k-1}\frac{1}{A(p,k)}\,\frac{p^{z}}{z}=\frac{\Gamma(z)}{\Gamma(k)\,\Gamma(z-k+1)}\,\frac{\theta\,p^{z}e^{-\theta(z-k+1)x}\left(1-e^{-\theta x}\right)^{k-1}}{A(p,k)} \tag{36}$$

Let $F=1-e^{-\theta x}$, $f=\theta e^{-\theta x}$, and $a=p\,e^{-\theta x}$; then

$$g_k(x,z\,|\,p,\theta)=\frac{1}{A(p,k)}\,\frac{(z-1)!\,p^{k}}{(z-k)!\,(k-1)!}\,f\,F^{k-1}a^{z-k} \tag{37}$$

the marginal density of $X_{(k)}$ is:

$$g_k(x\,|\,p,\theta)=\frac{p^{k}f\,F^{k-1}}{A(p,k)}\sum_{z=k}^{\infty}\frac{(z-1)!}{(z-k)!\,(k-1)!}\,a^{z-k} \tag{38}$$

Letting $s=z-k$, we have

$$g_k(x\,|\,p,\theta)=\frac{p^{k}f\,F^{k-1}}{A(p,k)}\sum_{s=0}^{\infty}\frac{(s+k-1)!}{s!\,(k-1)!}\,a^{s}=\frac{p^{k}f\,F^{k-1}}{A(p,k)}\sum_{s=0}^{\infty}\binom{s+k-1}{s}a^{s}=\frac{p^{k}f\,F^{k-1}}{A(p,k)}\,\frac{1}{(1-a)^{k}}=\frac{\theta\,p^{k}e^{-\theta x}\left(1-e^{-\theta x}\right)^{k-1}}{A(p,k)\left(1-p\,e^{-\theta x}\right)^{k}},\qquad x\in[0,\infty) \tag{39}$$

With $U=1-e^{-\theta x}$,

$$G_k(x\,|\,p,\theta)=\int_{0}^{U}\frac{p^{k}}{A(p,k)}\,\frac{u^{k-1}}{\left(1-p(1-u)\right)^{k}}\,du=\frac{p^{k}}{A(p,k)}\int_{0}^{U}\frac{u^{k-1}}{\left(1-p(1-u)\right)^{k}}\,du \tag{40}$$

let