# Bartlett corrections in beta regression models

F. M. Bayer (Departamento de Estatística, Universidade Federal de Santa Maria) and F. Cribari-Neto (Departamento de Estatística, Universidade Federal de Pernambuco)
###### Abstract

We consider the issue of performing accurate small-sample testing inference in beta regression models, which are useful for modeling continuous variates that assume values in (0,1), such as rates and proportions. We derive the Bartlett correction to the likelihood ratio test statistic and also consider a bootstrap Bartlett correction. Using Monte Carlo simulations we compare the finite sample performances of the two corrected tests to that of the standard likelihood ratio test and also to its variant that employs Skovgaard’s adjustment; the latter is already available in the literature. The numerical evidence favors the corrected tests we propose. We also present an empirical application.

###### keywords:
Bartlett correction, beta regression, bootstrap, likelihood ratio test
###### Msc:
[2010] 62F03, 62F05, 62F40, 62E17, 62E20

## 1 Introduction

Regression analysis is commonly used to model the relationship between a dependent variable (response) and a set of explanatory variables (covariates). The linear regression model is the most widely used regression model in empirical applications, but it is not appropriate when the variable of interest assumes values in the standard unit interval (0,1), as is the case with rates and proportions. For such situations, Ferrari and Cribari-Neto (2004) proposed a regression model based on the assumption that the response (y) is beta distributed. Their model is similar to those that belong to the class of generalized linear models (McCullagh and Nelder, 1989).

The beta density can be expressed as

$$f(y;\mu,\phi)=\frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma((1-\mu)\phi)}\,y^{\mu\phi-1}(1-y)^{(1-\mu)\phi-1},\qquad 0<y<1, \tag{1}$$

and

$$\mathrm{E}(y)=\mu,\qquad \operatorname{var}(y)=\frac{V(\mu)}{1+\phi},$$

where V(μ) = μ(1 − μ) is the variance function and ϕ can be viewed as a precision parameter. The beta distribution is very flexible since its density can assume different shapes depending on the values of the two parameters. In particular, it can be symmetric, asymmetric, J-shaped and inverted J-shaped; see Figure 1.
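To make the parameterization concrete, the following minimal Python sketch (with illustrative helper names `beta_logpdf` and `beta_var`, not part of the paper) evaluates the log-density (1) and the variance μ(1 − μ)/(1 + ϕ); setting μ = 1/2 and ϕ = 2 recovers the uniform density on (0,1).

```python
from math import exp, lgamma, log

def beta_logpdf(y, mu, phi):
    """Log-density of the beta law in the mean/precision parameterization:
    f(y; mu, phi) = Gamma(phi) / (Gamma(mu*phi) * Gamma((1-mu)*phi))
                    * y**(mu*phi - 1) * (1 - y)**((1-mu)*phi - 1), 0 < y < 1."""
    return (lgamma(phi) - lgamma(mu * phi) - lgamma((1 - mu) * phi)
            + (mu * phi - 1) * log(y) + ((1 - mu) * phi - 1) * log(1 - y))

def beta_var(mu, phi):
    # var(y) = V(mu)/(1 + phi) with V(mu) = mu*(1 - mu): larger phi, smaller variance.
    return mu * (1 - mu) / (1 + phi)
```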

The class of beta regression models allows practitioners to model responses that belong to the standard unit interval using a regression structure that contains a link function, covariates and unknown parameters. Several authors have used beta regression models and alternative modeling strategies in different fields; see, e.g., Brehm and Gates (1993), Hancox et al. (2010), Kieschnick and McCullough (2003), Smithson and Verkuilen (2006) and Zucco (2008).

One may be tempted to view logistic regression as an alternative to the class of beta regressions. However, logistic regression is used when the response is binary, i.e., when it only assumes two values, namely 0 and 1. In that case, one models the probability of a success as a function of covariates. Beta regression, on the other hand, is used when the response is continuous and assumes values in the standard unit interval. Beta regression is useful for modeling rates, proportions, concentration indices (e.g., Gini) and other variates that assume values in (0,1) or, more generally, in (a,b), where a and b are known (a < b).

Testing inference in beta regression is usually carried out using the likelihood ratio test. The test employs an approximate critical value which is obtained from the test statistic’s limiting null distribution (χ²). It is thus an approximate test, and size distortions are likely to take place in small samples. This happens because when the number of data points is not large the test statistic’s exact null distribution is oftentimes poorly approximated by its asymptotic counterpart. Testing inference can be made more reliable by transforming the likelihood ratio statistic using a Bartlett correction (Bartlett, 1937). Such a correction depends on the log-likelihood cumulants and mixed cumulants up to fourth order. The derivation of a closed-form expression for the Bartlett correction factor in beta regressions can be quite cumbersome since the mean and precision parameters are not orthogonal, unlike what happens in generalized linear models.

A useful approach to improve inferences in small samples, particularly when the Bartlett correction is analytically cumbersome, is Skovgaard’s adjustment (Skovgaard, 2001). This adjustment is more straightforward than the Bartlett correction, only requiring second order log-likelihood derivatives. It does not require orthogonality between nuisance parameters and parameters of interest. Skovgaard’s adjustment for varying dispersion beta regressions and for inflated beta regressions was derived by Ferrari and Pinheiro (2011) and Pereira (2010), respectively. Ferrari and Cysneiros (2008) obtained a similar adjustment for exponential family nonlinear models. The numerical results presented by these authors reveal that the modified likelihood ratio test obtained using Skovgaard’s proposal is less size distorted than the original likelihood ratio test when the sample size is small.

A shortcoming of Skovgaard’s correction is that it does not improve the rate at which size distortions vanish, i.e., it does not yield asymptotic refinements. As noted earlier, however, Bartlett corrections are more difficult to obtain. They deliver asymptotic refinements and are usually derived using a general result given by Lawley (1956). An alternative is to use results in Cordeiro (1993), which are written in matrix fashion. Another alternative for models in which the derivation of the Bartlett correction is analytically cumbersome is the bootstrap Bartlett correction (Rocke, 1989). Here, the Bartlett correction factor is determined using bootstrap resampling (Efron, 1979).

Our main goal in this paper is to derive the Bartlett correction factor to the likelihood ratio test in the class of beta regressions. The derivation is quite cumbersome since in beta regressions the mean regression parameter vector is not orthogonal to the precision parameter. We were able to obtain, after extensive algebra, the Bartlett correction for fixed dispersion beta regressions. We also consider the bootstrap Bartlett correction, i.e., we numerically estimate the Bartlett correction factor. Finally, we perform extensive Monte Carlo simulations where we compare the finite sample behavior of Bartlett corrected tests (analytically and numerically) to that of the modified likelihood ratio test of Ferrari and Pinheiro (2011). The numerical evidence favors the two Bartlett corrected tests, especially the bootstrap Bartlett corrected test.

The paper unfolds as follows. Section 2 introduces the beta regression model proposed by Ferrari and Cribari-Neto (2004). In Section 3 we derive the Bartlett correction factor to the likelihood ratio test in fixed dispersion beta regressions. We also present the bootstrap Bartlett correction and the modified likelihood ratio statistics obtained by Ferrari and Pinheiro (2011). Monte Carlo simulation results are presented and discussed in Section 4. Section 5 presents an application that uses real (not simulated) data. Concluding remarks are offered in the last section, and the log-likelihood cumulants we derived are presented in A.

## 2 The beta regression model

Let y = (y₁, …, yₙ)⊤ be a vector of independent random variables, each yᵢ, i = 1, …, n, having density (1) with mean μᵢ and unknown precision parameter ϕ. The beta regression model can be written as

$$g(\mu_i)=\sum_{j=1}^{p}x_{ij}\beta_j=\eta_i, \tag{2}$$

where β = (β₁, …, βₚ)⊤ is an unknown vector parameter and xᵢ₁, …, xᵢₚ are observations on p covariates (p < n). When an intercept is included in the model, we have xᵢ₁ = 1 for i = 1, …, n. Finally, g(·) is a strictly monotonic and twice differentiable link function, with domain in (0,1) and image in IR. Some commonly used link functions are the logit, probit, cloglog, loglog and Cauchy links.

Estimation of the k-dimensional parameter vector θ = (β⊤, ϕ)⊤, where k = p + 1, can be performed by maximum likelihood. The log-likelihood function is

$$\ell(\theta)=\ell(\theta;y)=\sum_{i=1}^{n}\ell_i(\mu_i,\phi), \tag{3}$$

where

$$\ell_i(\mu_i,\phi)=\log\Gamma(\phi)-\log\Gamma(\mu_i\phi)-\log\Gamma((1-\mu_i)\phi)+(\mu_i\phi-1)\log y_i+\{(1-\mu_i)\phi-1\}\log(1-y_i).$$
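A direct transcription of this log-likelihood into code is handy for checking derivations numerically. The sketch below assumes a logit link; `inv_logit` and `loglik` are illustrative names, not from the paper.

```python
from math import exp, lgamma, log

def inv_logit(eta):
    # Inverse of the logit link: mu = 1/(1 + exp(-eta)).
    return 1.0 / (1.0 + exp(-eta))

def loglik(beta, phi, X, y):
    # ell(theta) = sum_i ell_i(mu_i, phi), with mu_i = g^{-1}(x_i^T beta)
    # and ell_i as in the display above.
    total = 0.0
    for xi, yi in zip(X, y):
        mu = inv_logit(sum(b * x for b, x in zip(beta, xi)))
        total += (lgamma(phi) - lgamma(mu * phi) - lgamma((1 - mu) * phi)
                  + (mu * phi - 1) * log(yi) + ((1 - mu) * phi - 1) * log(1 - yi))
    return total
```

With an intercept-only design, β = 0 and ϕ = 2 give μᵢ = 1/2, so every contribution vanishes (the uniform case).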

The score function is obtained by differentiating the log-likelihood function with respect to the unknown parameters. The score functions with respect to β and ϕ are, respectively,

$$U_\beta(\theta)=\phi X^\top T(y^*-\mu^*),\qquad U_\phi(\theta)=\sum_{i=1}^{n}\left\{\mu_i(y_i^*-\mu_i^*)+\log(1-y_i)-\psi((1-\mu_i)\phi)+\psi(\phi)\right\},$$

where X is the n × p covariates matrix whose i-th row is xᵢ⊤ = (xᵢ₁, …, xᵢₚ). Also, y* = (y*₁, …, y*ₙ)⊤ and μ* = (μ*₁, …, μ*ₙ)⊤, with y*ᵢ = log{yᵢ/(1 − yᵢ)} and μ*ᵢ = ψ(μᵢϕ) − ψ((1 − μᵢ)ϕ), T = diag{1/g′(μ₁), …, 1/g′(μₙ)}, and ψ(·) is the digamma function. (The polygamma function is defined, for m = 0, 1, 2, …, as ψ⁽ᵐ⁾(x) = dᵐ⁺¹ log Γ(x)/dxᵐ⁺¹, x > 0; the digamma function is obtained by setting m = 0.)

The maximum likelihood estimators are the solution to the following system:

$$\begin{cases}U_\beta(\theta)=0,\\ U_\phi(\theta)=0.\end{cases}$$

The maximum likelihood estimators, β̂ and ϕ̂, cannot be expressed in closed form. They are typically obtained by numerically maximizing the log-likelihood function using a Newton or quasi-Newton nonlinear optimization algorithm. For details on nonlinear optimization algorithms, see Press et al. (1992).
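As a dependency-free illustration of this numerical maximization, the sketch below fits an intercept-only logit-link model by a crude grid search over (β₀, ϕ). A real implementation would hand ℓ(θ) to a Newton or quasi-Newton routine; this stand-in (illustrative names throughout) only demonstrates that the estimates come from maximizing ℓ numerically.

```python
from math import exp, lgamma, log

def loglik(b0, phi, y):
    # Intercept-only logit-link beta regression: mu_i = 1/(1 + exp(-b0)) for all i.
    mu = 1.0 / (1.0 + exp(-b0))
    return sum(lgamma(phi) - lgamma(mu * phi) - lgamma((1 - mu) * phi)
               + (mu * phi - 1) * log(t) + ((1 - mu) * phi - 1) * log(1 - t)
               for t in y)

def fit_grid(y, b0_grid, phi_grid):
    # Crude stand-in for a Newton/quasi-Newton optimizer: maximize ell over a grid.
    return max((loglik(b, p, y), b, p) for b in b0_grid for p in phi_grid)

y = [0.2, 0.3, 0.4, 0.35, 0.25]
b0_grid = [i / 20.0 - 2.0 for i in range(81)]   # b0 in [-2, 2], step 0.05
phi_grid = [1.0 + i for i in range(30)]         # phi in [1, 30]
best_ll, b0_hat, phi_hat = fit_grid(y, b0_grid, phi_grid)
```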

Fisher’s joint information matrix for β and ϕ is given by

$$K=K(\theta)=\begin{pmatrix}K_{(\beta,\beta)} & K_{(\beta,\phi)}\\ K_{(\phi,\beta)} & K_{(\phi,\phi)}\end{pmatrix},$$

where K₍β,β₎ = ϕX⊤WX, K₍β,ϕ₎ = K₍ϕ,β₎⊤ = X⊤Tc and K₍ϕ,ϕ₎ = tr(D). Here, W (an n × n diagonal matrix), c (an n-vector) and D (an n × n diagonal matrix) have typical elements wᵢ, cᵢ and dᵢ, respectively. That is, wᵢ = ϕ{ψ′(μᵢϕ) + ψ′((1 − μᵢ)ϕ)}/{g′(μᵢ)}², cᵢ = ϕ{ψ′(μᵢϕ)μᵢ − ψ′((1 − μᵢ)ϕ)(1 − μᵢ)} and dᵢ = ψ′(μᵢϕ)μᵢ² + ψ′((1 − μᵢ)ϕ)(1 − μᵢ)² − ψ′(ϕ). For details on log-likelihood derivatives, see A.

Under mild regularity conditions, and in large samples, the joint distribution of β̂ and ϕ̂ is approximately k-variate normal:

$$\begin{pmatrix}\hat\beta\\ \hat\phi\end{pmatrix}\sim N_k\!\left(\begin{pmatrix}\beta\\ \phi\end{pmatrix},\,K^{-1}\right),$$

approximately.

## 3 Improved likelihood ratio testing inference

Consider the parametric model presented in (2) and the corresponding log-likelihood function given in (3), where θ = (θ₁⊤, θ₂⊤)⊤ is the model’s k-dimensional parameter vector, θ₁ being a q-dimensional vector of parameters of interest and θ₂ containing the remaining k − q parameters. Suppose that we wish to test the null hypothesis

$$H_0:\ \theta_1=\theta_1^{(0)}$$

against the alternative hypothesis

$$H_1:\ \theta_1\neq\theta_1^{(0)},$$

where θ₁⁽⁰⁾ is a given vector of scalars. Hence, θ₂ is the vector of nuisance parameters and θ₁ is the vector of parameters of interest. The null hypothesis imposes q restrictions on the parameter vector. The likelihood ratio test statistic can be written as

$$LR=2\{\ell(\hat\theta;y)-\ell(\tilde\theta;y)\},$$

where the vector θ̃ = (θ₁⁽⁰⁾⊤, θ̃₂⊤)⊤ is the restricted maximum likelihood estimator of θ obtained by imposing the null hypothesis.

In large samples, the likelihood ratio statistic is approximately distributed as χ²_q under H₀, with error of order n⁻¹. In small samples, however, this approximation may be poor. Since the test is conducted using critical values obtained from the limiting null distribution (χ²_q), and since that distribution may provide a poor approximation to the test statistic’s exact null distribution in small samples, the likelihood ratio test may be considerably size distorted.
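Computationally, the test amounts to two lines: form LR from the two maximized log-likelihoods and compare it with the χ²_q distribution. The helpers below (illustrative names, not from the paper) compute LR and, to stay dependency-free, the χ² survival function via its closed form for even degrees of freedom; for odd q one would use the regularized incomplete gamma function instead.

```python
from math import exp

def lr_stat(ell_hat, ell_tilde):
    # LR = 2 { ell(theta_hat; y) - ell(theta_tilde; y) }.
    return 2.0 * (ell_hat - ell_tilde)

def chi2_sf(x, df):
    """P(chi2_df > x) for even df, using the closed form
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!  (illustrative helper)."""
    assert df % 2 == 0 and df > 0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= (x / 2.0) / k
        total += term
    return exp(-x / 2.0) * total
```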

Likelihood ratio testing inference can be made more accurate by applying a correction factor to the test statistic. This correction factor is known as the Bartlett correction and was proposed by Bartlett (1937) and later generalized by Lawley (1956). The underlying idea is to base inferences on the modified statistic LR/c, where c is the Bartlett correction factor. It is possible to express the Bartlett correction factor using moments of log-likelihood derivatives; see Lawley (1956). It is noteworthy that the Bartlett correction delivers an improvement in the rate at which size distortions vanish; see Barndorff-Nielsen and Cox (1984). In particular, under the null hypothesis, Pr(LR ≤ x) = Pr(χ²_q ≤ x) + O(n⁻¹), whereas Pr(LR/c ≤ x) = Pr(χ²_q ≤ x) + O(n⁻²).

### 3.1 A matrix formula for the Bartlett correction factor

The Bartlett correction factor can be written as

$$c=1+\frac{\epsilon_k-\epsilon_{k-q}}{q}.$$

Using Lawley’s expansion (Lawley, 1956), the expected value of the likelihood ratio statistic can be expressed as

$$\mathrm{E}(LR)=q+\epsilon_k-\epsilon_{k-q}+O(n^{-2}),$$

where

$$\epsilon_k=\sum_{\theta}\left(\lambda_{rstu}-\lambda_{rstuvw}\right), \tag{4}$$

$$\lambda_{rstu}=\kappa^{rs}\kappa^{tu}\left\{\frac{\kappa_{rstu}}{4}-\kappa_{rst}^{(u)}+\kappa_{rt}^{(su)}\right\},$$

$$\lambda_{rstuvw}=\kappa^{rs}\kappa^{tu}\kappa^{vw}\left\{\kappa_{rtv}\left(\frac{\kappa_{suw}}{6}-\kappa_{sw}^{(u)}\right)+\kappa_{rtu}\left(\frac{\kappa_{svw}}{4}-\kappa_{sw}^{(v)}\right)+\kappa_{rt}^{(v)}\kappa_{sw}^{(u)}+\kappa_{rt}^{(u)}\kappa_{sw}^{(v)}\right\}$$

and

$$\kappa_{rs}=\mathrm{E}\!\left(\frac{\partial^2\ell(\theta)}{\partial\theta_r\,\partial\theta_s}\right),\qquad \kappa_{rst}=\mathrm{E}\!\left(\frac{\partial^3\ell(\theta)}{\partial\theta_r\,\partial\theta_s\,\partial\theta_t}\right),\qquad \kappa_{rs}^{(t)}=\frac{\partial\kappa_{rs}}{\partial\theta_t},\quad\text{etc.}$$

Notice that κ^{rs} is the (r,s) element of the inverse of Fisher’s information matrix, K⁻¹. The summation in (4) runs over all components of θ, i.e., the indices r, s, t, u, v and w vary over all k parameters. The expression for ϵ_{k−q} is obtained from (4) by letting the summation only run over the nuisance parameters in θ₂. All κ’s are of order O(n), and ϵ_k and ϵ_{k−q} are of order O(n⁻¹).

It can be quite hard to derive the Bartlett correction using Lawley’s general formula, since it involves products of mixed cumulants that are not invariant under index permutations (Cordeiro, 1993). In particular, in the beta regression model the parameters β and ϕ are not orthogonal, i.e., Fisher’s information matrix is not block diagonal, and as a consequence the derivation of the Bartlett correction via Lawley’s approach becomes especially cumbersome. An alternative is to use the general matrix formula given by Cordeiro (1993).

In order to express ϵ_k in matrix form, we first define the following k × k matrices: A⁽ᵗᵘ⁾, P⁽ᵗ⁾ and Q⁽ᵘ⁾, for t, u = 1, …, k. The elements of such matrices are

$$A^{(tu)}=\left\{\frac{\kappa_{rstu}}{4}-\kappa_{rst}^{(u)}+\kappa_{rt}^{(su)}\right\},\qquad P^{(t)}=\{\kappa_{rst}\},\qquad Q^{(u)}=\{\kappa_{su}^{(r)}\}, \tag{5}$$

for r, s = 1, …, k. Using matrix notation, we can write

$$\sum_{\theta}\lambda_{rstu}=\operatorname{tr}(K^{-1}L),$$

$$\sum_{\theta}\kappa^{rs}\kappa^{tu}\kappa^{vw}\left\{\frac{1}{6}\kappa_{rtv}\kappa_{suw}-\kappa_{rtv}\kappa_{sw}^{(u)}+\kappa_{rt}^{(v)}\kappa_{sw}^{(u)}\right\}=-\frac{1}{6}\operatorname{tr}(K^{-1}M_1)+\operatorname{tr}(K^{-1}M_2)-\operatorname{tr}(K^{-1}M_3), \tag{6}$$

$$\sum_{\theta}\kappa^{rs}\kappa^{tu}\kappa^{vw}\left\{\frac{1}{4}\kappa_{rtu}\kappa_{svw}-\kappa_{rtu}\kappa_{sw}^{(v)}+\kappa_{rt}^{(u)}\kappa_{sw}^{(v)}\right\}=-\frac{1}{4}\operatorname{tr}(K^{-1}N_1)+\operatorname{tr}(K^{-1}N_2)-\operatorname{tr}(K^{-1}N_3), \tag{7}$$

where the elements of the L, M₁, M₂, M₃, N₁, N₂ and N₃ matrices are given, respectively, by

$$L=\{\operatorname{tr}(K^{-1}A^{(rs)})\},\qquad M_1=\{\operatorname{tr}(K^{-1}P^{(r)}K^{-1}P^{(s)})\},\qquad M_2=\{\operatorname{tr}(K^{-1}P^{(r)}K^{-1}Q^{(s)\top})\},$$

$$M_3=\{\operatorname{tr}(K^{-1}Q^{(r)}K^{-1}Q^{(s)})\},\qquad N_1=\{\operatorname{tr}(P^{(r)}K^{-1})\operatorname{tr}(P^{(s)}K^{-1})\},$$

$$N_2=\{\operatorname{tr}(P^{(r)}K^{-1})\operatorname{tr}(Q^{(s)}K^{-1})\},\qquad N_3=\{\operatorname{tr}(Q^{(r)}K^{-1})\operatorname{tr}(Q^{(s)}K^{-1})\}.$$

Using (4)–(7) we can write

$$\epsilon_k=\operatorname{tr}\!\left[K^{-1}(L-M-N)\right],$$

where M = −(1/6)M₁ + M₂ − M₃ and N = −(1/4)N₁ + N₂ − N₃.

The term ϵ_k above can be easily computed using a matrix programming language, like Ox (Doornik, 2007) and R (R Development Core Team, 2009). It only requires the computation of the k × k matrices K, A⁽ᵗᵘ⁾, P⁽ᵗ⁾ and Q⁽ᵘ⁾; the remaining matrices can be obtained from them using simple matrix operations. In order to obtain the matrices A⁽ᵗᵘ⁾, P⁽ᵗ⁾ and Q⁽ᵘ⁾ we need cumulants of log-likelihood derivatives up to fourth order. We have derived these cumulants for the beta regression model and present them in A.

The usual Bartlett corrected likelihood ratio statistic is given by LR/c. There are, however, two other equivalent specifications that deliver the same order of accuracy. The three Bartlett corrected test statistics are

$$LR_{b1}=\frac{LR}{c},\qquad LR_{b2}=LR\,\exp\!\left\{-\frac{\epsilon_k-\epsilon_{k-q}}{q}\right\},\qquad LR_{b3}=LR\left\{1-\frac{\epsilon_k-\epsilon_{k-q}}{q}\right\}.$$

The corrected statistics LR_b1, LR_b2 and LR_b3 are equivalent to order O(n⁻¹) (Lemonte et al., 2010), and LR_b2 has the advantage of only taking positive values.
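Given ϵ_k, ϵ_{k−q} and q, the three corrected statistics are immediate to compute; the sketch below (illustrative function name) simply evaluates the three formulas above.

```python
from math import exp

def bartlett_corrected(LR, eps_k, eps_kq, q):
    """The three asymptotically equivalent Bartlett corrections:
    LRb1 = LR/c, LRb2 = LR*exp(-d), LRb3 = LR*(1 - d),
    with d = (eps_k - eps_kq)/q and c = 1 + d."""
    d = (eps_k - eps_kq) / q
    return LR / (1.0 + d), LR * exp(-d), LR * (1.0 - d)

b1, b2, b3 = bartlett_corrected(5.0, 0.5, 0.1, 2)  # d = 0.2
```

Note that LR_b2 stays positive for any d, while LR_b3 can go negative when d > 1.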

### 3.2 Bootstrap Bartlett correction

Rocke (1989) introduced a numeric alternative to the analytic Bartlett correction in which the correction factor is computed using Efron’s bootstrap (Efron, 1979). The bootstrap Bartlett correction can be described as follows. Bootstrap resamples are used to estimate the expected value of the likelihood ratio statistic. Here, B bootstrap resamples (y*₁, …, y*_B) are generated using the parametric bootstrap and imposing H₀. Data generation is performed from the postulated model after replacing the unknown parameter vector by its restricted estimate, i.e., by the estimate obtained under the null hypothesis. For each pseudo sample y*_b, b = 1, …, B, the statistic LR*_b is computed as

$$LR_b^{*}=2\{\ell(\hat\theta_b^{*};y_b^{*})-\ell(\tilde\theta_b^{*};y_b^{*})\},$$

where θ̂*_b and θ̃*_b are the maximum likelihood estimators of θ obtained from the maximization of ℓ(θ; y*_b) under H₁ and H₀, respectively. The bootstrap Bartlett corrected likelihood ratio statistic is then computed as

$$LR_{boot}=\frac{LR\,q}{\overline{LR^{*}}},$$

where $\overline{LR^{*}}=B^{-1}\sum_{b=1}^{B}LR_b^{*}$.

It is noteworthy that the bootstrap Bartlett correction is computationally more efficient than the usual approach of using the bootstrap method to obtain a critical value (or a p-value), since it requires a smaller number of resamples. The usual bootstrap approach typically requires 1,000 bootstrap resamples, since it involves estimating tail quantities (Efron, 1986, 1987); on the other hand, the bootstrap Bartlett correction is expected to work well when based on only 200 artificial samples. Notice that in the latter we use data resampling to estimate the mean of a distribution, and not an upper quantile. According to Rocke (1989), the bootstrap Bartlett correction based on B = 200 typically yields inferences that are as accurate as those obtained using the usual bootstrapping scheme with B = 1,000.
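The resampling scheme above can be sketched on a toy problem. The code below applies Rocke's correction to testing the mean of a N(μ, 1) sample, where q = 1 and LR has the closed form n(ȳ − μ₀)²; it illustrates the scheme only and is not the beta regression implementation used in the paper (all names are illustrative).

```python
import random
from statistics import mean

def lr_normal_mean(y, mu0):
    # LR statistic for H0: mu = mu0 in the N(mu, 1) model: LR = n*(ybar - mu0)^2.
    return len(y) * (mean(y) - mu0) ** 2

def bootstrap_bartlett(y, mu0, B=200, seed=123):
    """Estimate E(LR) under H0 by the average of B parametric-bootstrap
    LR* statistics, then rescale: LR_boot = LR * q / mean(LR*), q = 1 here."""
    rng = random.Random(seed)
    n = len(y)
    lr = lr_normal_mean(y, mu0)
    lr_bar = mean(lr_normal_mean([rng.gauss(mu0, 1.0) for _ in range(n)], mu0)
                  for _ in range(B))
    return lr * 1.0 / lr_bar, lr_bar

y = [0.3, -0.1, 0.8, 1.2, 0.5, -0.4, 0.9, 0.2, 0.6, 0.1]
lr_boot, lr_bar = bootstrap_bartlett(y, mu0=0.0)
```

Since each LR*_b is approximately χ²₁ under H₀, the bootstrap average should sit near 1 and the correction leaves LR nearly unchanged in this well-behaved toy case.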

In a different approach, Skovgaard (2001) generalized the results in Skovgaard (1996) and presented a much simpler way to improve likelihood ratio testing inference. His adjustment was later computed for various models; see, e.g., Ferrari and Cysneiros (2008), Ferrari and Pinheiro (2011), Melo et al. (2009) and Pereira (2010). The numerical evidence presented by these authors indicates that hypothesis testing inference based on Skovgaard’s modified likelihood ratio statistic is typically more accurate than that based on the uncorrected statistic.

In order to present Skovgaard’s adjustment to the likelihood ratio test statistic, which was derived by Ferrari and Pinheiro (2011) for beta regressions, we shall now introduce some additional notation. Recall that θ = (θ₁⊤, θ₂⊤)⊤, where θ₁ and θ₂ are the interest and nuisance parameters, respectively. Let U = U(θ) denote the score function, let J = J(θ) denote the observed information matrix and let J₁₁ be the block of J corresponding to θ₁. As before, K = K(θ) denotes Fisher’s information matrix; a hat and a tilde indicate evaluation at the unrestricted (θ̂) and restricted (θ̃) maximum likelihood estimators, respectively.

The Skovgaard modified likelihood ratio test statistic is given by

$$LR_{sk1}=LR-2\log\xi,$$

where

$$\xi=\frac{\left\{|\tilde K|\,|\hat K|\,|\tilde J_{11}|\right\}^{1/2}}{|\bar\Upsilon|\,\bigl|\{\tilde K\bar\Upsilon^{-1}\hat J\hat K^{-1}\bar\Upsilon\}_{11}\bigr|^{1/2}}\;\frac{\bigl\{\tilde U^\top\bar\Upsilon^{-1}\hat K\hat J^{-1}\bar\Upsilon\tilde K^{-1}\tilde U\bigr\}^{q/2}}{LR^{\,q/2-1}\,\tilde U^\top\bar\Upsilon^{-1}\bar\upsilon}.$$

Here, Ῡ (a matrix) and ῡ (a vector) are obtained from expected values of log-likelihood derivatives; the expected values are computed first and then evaluated at θ̃ and θ̂, as appropriate. Their exact expressions for beta regressions are given in Ferrari and Pinheiro (2011).

An asymptotically equivalent version of the above test statistic is

$$LR_{sk2}=LR\left(1-\frac{1}{LR}\log\xi\right)^{2}.$$

Under H₀, LR_sk1 and LR_sk2 are approximately distributed as χ²_q with a high degree of accuracy (Skovgaard, 2001; Ferrari and Pinheiro, 2011). For more details and matrix formulas for Ῡ and ῡ in beta regressions, see Ferrari and Pinheiro (2011), where the Skovgaard adjustment is derived for a general class of beta regressions that allows for nonlinearities and varying dispersion.
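Given LR and ξ, both Skovgaard statistics are one-liners; the sketch below (illustrative function name) evaluates the two formulas, which coincide with LR when ξ = 1.

```python
from math import log

def skovgaard(LR, xi):
    """Skovgaard's two modified statistics given LR and the adjustment term xi:
    LRsk1 = LR - 2*log(xi) and LRsk2 = LR*(1 - log(xi)/LR)**2."""
    lrsk1 = LR - 2.0 * log(xi)
    lrsk2 = LR * (1.0 - log(xi) / LR) ** 2
    return lrsk1, lrsk2
```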

## 4 Numerical evidence

This section presents Monte Carlo simulation results on the small sample performance of the likelihood ratio test (LR) in beta regression and also of six tests that are based on corrected statistics, namely: the three Bartlett corrected statistics (LR_b1, LR_b2 and LR_b3), the bootstrap Bartlett corrected statistic (LR_boot) and the two modified statistics obtained using Skovgaard’s adjustment (LR_sk1 and LR_sk2). The number of Monte Carlo replications is 10,000. For each Monte Carlo replication we performed 500 bootstrap replications. All simulations were carried out using the R programming language (R Development Core Team, 2009).

We consider the following beta regression model:

$$\operatorname{logit}(\mu_i)=\beta_1+\beta_2x_{2i}+\beta_3x_{3i}+\beta_4x_{4i}+\beta_5x_{5i}.$$

The covariate values are chosen as random draws and are kept fixed during the simulations. We use four different values for the precision parameter ϕ. Restrictions on β are tested using samples of 15, 20, 30 and 40 observations and at three nominal levels. The null hypotheses impose one (q = 1), two (q = 2) and three (q = 3) restrictions and are tested against two-sided alternative hypotheses; in each case, the parameters not fixed by the null hypothesis are held at fixed values for data generation.

Tables 1 (q = 1), 2 (q = 2) and 3 (q = 3) present the null rejection rates of the different tests. The figures in these tables clearly show that the likelihood ratio test is considerably oversized (liberal); its null rejection rate can be as much as eight times larger than the nominal level in some of the scenarios of Table 2. In general, larger sample sizes and/or larger values of ϕ lead to smaller size distortions.

The simulation results for q = 1 presented in Table 1 indicate that the corrected tests display good small sample behavior. The analytically Bartlett corrected tests perform best, followed by the Skovgaard adjusted tests and by the bootstrap Bartlett corrected test, LR_boot; the latter outperforms the competition in some settings. For instance, in one scenario at the 10% nominal level, the null rejection rates of two of the corrected tests for the four sample sizes are 10.2%, 10.3%, 10.6% and 10.0%, and 10.2%, 10.3%, 10.8% and 10.2%. The good performance of the corrected tests can be observed in all scenarios.

The results for the cases in which we impose more than one restriction, namely q = 2 and q = 3, are presented in Tables 2 and 3 and are similar to those obtained for q = 1. The modified tests once again display small size distortions: across all configurations considered, their type I error frequencies lie much closer to the nominal levels than those of the uncorrected likelihood ratio test, whose null rejection rates can be several times the nominal level in the smallest samples.

The numerical results presented in Tables 1, 2 and 3 show that the corrected tests outperform the uncorrected test in small samples. The best performing corrected tests are the analytically Bartlett corrected test, the bootstrap Bartlett corrected test and the Skovgaard adjusted test. Their null rejection rates are closer to the nominal levels than those of the uncorrected test and also than those of the other corrected tests. In particular, the bootstrap Bartlett correction works very well even in the smallest samples.

Table 4 presents moments and quantiles of the different test statistics alongside their asymptotic counterparts for a configuration with q = 2. It is noteworthy that the χ² approximation to the likelihood ratio null distribution is quite poor. For example, the limiting null distribution variance equals 4 whereas the variance of LR exceeds 7. On the other hand, the same approximation works quite well for the (analytically and numerically) Bartlett corrected statistics: their means and variances are very close to two and four, the χ²₂ mean and variance. The worst performing corrected statistic displays noticeable departures, especially in terms of skewness and kurtosis. We also note that the limiting null approximation is not as accurate for the Skovgaard adjusted statistics as it is for the Bartlett corrected statistics; this is evidenced by their variance, skewness, kurtosis and 90th quantile (4.6612), which differ considerably from the respective χ²₂ reference values.

Figure 2 contains QQ plots (exact empirical quantiles versus asymptotic quantiles) for different sample sizes, and Figure 3 shows estimated null densities of some of the test statistics. These densities were estimated using the kernel method with a Gaussian kernel function. (For details on nonparametric density estimation, see Silverman (1986) and Venables and Ripley (2002).) In both figures we consider the likelihood ratio statistic, the best performing Bartlett corrected statistic, the bootstrap Bartlett corrected statistic and the best performing statistic modified using Skovgaard’s approach. The QQ plots in Figure 2 show that the corrected statistics’ null distributions are much closer to the reference χ² distribution than that of LR, with the closest agreement between exact and limiting null distributions achieved by the corrected statistics. The same conclusion can be drawn from the estimated null densities presented in Figure 3.