# Prediction Measures in Beta Regression Models

###### Abstract

We consider the issue of constructing PRESS statistics and coefficients of prediction for the class of beta regression models. Our aim is to provide measures of the predictive power of a model regardless of its goodness-of-fit. Monte Carlo simulation results on the finite sample behavior of such measures are provided. We also present an application that relates to the distribution of natural gas for home usage in São Paulo, Brazil. Faced with the economic risk of overestimating or underestimating the distribution of gas, it was necessary to construct prediction limits using beta regression models (Espinheira et al., 2014). This motivates the aim of the present work: selecting the model with the best predictive ability in order to construct the best prediction limits.

###### keywords:

Beta distribution, beta regression, PRESS, prediction coefficient.

## 1 Introduction

The beta distribution is commonly used to model random variables that assume values in the standard unit interval, $(0,1)$, such as percentages, rates and proportions. The beta density can display quite different shapes depending on the parameter values. Oftentimes the variable of interest is related to a set of independent (explanatory) variables. Ferrari and Cribari-Neto (2004) introduced a regression model in which the response is beta-distributed, its mean being related to a linear predictor through a link function. The linear predictor includes independent variables and regression parameters. Their model also includes a precision parameter whose reciprocal can be viewed as a dispersion measure. In the standard formulation of the beta regression model it is assumed that the precision is constant across observations. However, in many practical situations this assumption does not hold. Smithson and Verkuilen (2006) consider a beta regression specification in which the dispersion is not constant, but rather a function of covariates and unknown parameters. Parameter estimation is carried out by maximum likelihood (ML) and standard asymptotic hypothesis testing can be easily performed. Practitioners can use the betareg package, which is available for the R statistical software (http://www.r-project.org), for fitting beta regressions. Cribari-Neto and Zeileis (2010) provide an overview of varying dispersion beta regression modeling using the betareg package.
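As a quick numerical illustration of this parameterization (a sketch using Python and scipy rather than the R betareg package; the helper name `beta_mp` is ours), the beta law with mean $\mu$ and precision $\phi$ corresponds to shape parameters $a=\mu\phi$ and $b=(1-\mu)\phi$, so the variance $\mu(1-\mu)/(1+\phi)$ shrinks as the precision grows:

```python
from scipy import stats

def beta_mp(mu, phi):
    """Beta distribution in the mean/precision parameterization:
    shape parameters a = mu * phi and b = (1 - mu) * phi."""
    return stats.beta(a=mu * phi, b=(1 - mu) * phi)

mu, phi = 0.3, 40.0
d = beta_mp(mu, phi)
print(d.mean())                   # ~0.3, the mean parameter mu
print(d.var())                    # ~mu * (1 - mu) / (1 + phi)
print(mu * (1 - mu) / (1 + phi))  # reciprocal precision acts as dispersion
```

Doubling $\phi$ roughly halves the variance around the same mean, which is why $1/\phi$ can be read as a dispersion measure.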

Recently, Espinheira et al. (2014) built and evaluated bootstrap-based prediction intervals for the class of beta regression models with varying dispersion. A prior step is necessary, however: the selection of the model with the best predictive ability, regardless of its goodness-of-fit. Indeed, model selection is a crucial step in data analysis, since all inference is based on the selected model. Bayer and Cribari-Neto (2014) evaluated the finite sample performance of different model selection criteria in beta regression, such as the Akaike Information Criterion (AIC) (Akaike, 1973), the Schwarz Bayesian Criterion (SBC) (Schwarz, 1978), the residual sum of squares (RSS), and various functions of the RSS, such as the coefficient of determination and the adjusted coefficient of determination. However, these methods do not offer any insight into the quality of the predicted values. In this context, Allen (1974) proposed the PRESS (predictive residual sum of squares) criterion, which can be used as an indication of the predictive power of a model. The PRESS statistic is independent of the goodness-of-fit of the model, since its calculation leaves out the observations that the model is trying to predict. The PRESS statistic can be viewed as a sum of squares of external residuals. In a similar vein, Mediavilla et al. (2008) proposed a coefficient of prediction based on PRESS, namely $P^2$. The $P^2$ statistic can be used to select models from a predictive perspective, adding important information about the predictive ability of the model in various scenarios.
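In the linear model, the leave-one-out structure of PRESS can be computed without $n$ refits, because the leave-one-out prediction error equals the ordinary residual scaled by $1/(1-h_{tt})$. A minimal numpy sketch on simulated data (all names ours) verifies the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=0.1, size=n)

# Full-sample fit: hat matrix and ordinary residuals.
H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y

# PRESS shortcut: e_t / (1 - h_tt) is the leave-one-out prediction error.
press = np.sum((e / (1 - np.diag(H))) ** 2)

# Brute-force leave-one-out computation for comparison.
loo = 0.0
for t in range(n):
    keep = np.arange(n) != t
    b_t = np.linalg.solve(X[keep].T @ X[keep], X[keep].T @ y[keep])
    loo += (y[t] - X[t] @ b_t) ** 2

print(abs(press - loo) < 1e-8)  # True: both routes agree
```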

## 2 On beta regression residuals

Let $y_1,\ldots,y_n$ be independent random variables such that each $y_t$, for $t=1,\ldots,n$, is beta-distributed, i.e., each $y_t$ has density function given by

$$f(y_t;\mu_t,\phi_t)=\frac{\Gamma(\phi_t)}{\Gamma(\mu_t\phi_t)\Gamma((1-\mu_t)\phi_t)}\,y_t^{\mu_t\phi_t-1}(1-y_t)^{(1-\mu_t)\phi_t-1},\quad 0<y_t<1, \tag{1}$$

where $0<\mu_t<1$ and $\phi_t>0$. Here, $\mathrm{E}(y_t)=\mu_t$ and $\mathrm{Var}(y_t)=V(\mu_t)/(1+\phi_t)$, where $V(\mu_t)=\mu_t(1-\mu_t)$. In the beta regression model introduced by Ferrari and Cribari-Neto (2004) the mean of $y_t$ can be written as

$$g(\mu_t)=\sum_{i=1}^{k}x_{ti}\beta_i=\eta_t. \tag{2}$$

In addition to the relation given in (2), it is possible to assume that the precision parameter is not constant and write

$$h(\phi_t)=\sum_{i=1}^{q}z_{ti}\gamma_i=\vartheta_t. \tag{3}$$

In (2) and (3), $\eta_t$ and $\vartheta_t$ are linear predictors, $\beta=(\beta_1,\ldots,\beta_k)^\top$ and $\gamma=(\gamma_1,\ldots,\gamma_q)^\top$ are unknown parameter vectors ($\beta\in\mathbb{R}^k$; $\gamma\in\mathbb{R}^q$), $x_{t1},\ldots,x_{tk}$ and $z_{t1},\ldots,z_{tq}$ are fixed covariates ($k+q<n$), and $g(\cdot)$ and $h(\cdot)$ are link functions, which are strictly increasing and twice-differentiable.
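For concreteness, the model in (2) and (3) can be fitted by directly maximizing the beta log-likelihood. The sketch below assumes a logit link $g$ and a log link $h$, one covariate in each submodel, and simulated data; it is an illustrative alternative to the Fisher scoring scheme used in the paper, and every name in it is ours:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(size=n)])  # mean submodel
Z = np.column_stack([np.ones(n), rng.uniform(size=n)])  # precision submodel
beta_true, gamma_true = np.array([-1.0, 2.0]), np.array([3.0, 1.0])
mu = expit(X @ beta_true)          # logit link for the mean
phi = np.exp(Z @ gamma_true)       # log link for the precision
y = rng.beta(mu * phi, (1 - mu) * phi)

def negloglik(theta):
    """Negative beta log-likelihood in the (mu, phi) parameterization."""
    b, g = theta[:2], theta[2:]
    m, p = expit(X @ b), np.exp(Z @ g)
    a1, a2 = m * p, (1 - m) * p
    return -np.sum(gammaln(p) - gammaln(a1) - gammaln(a2)
                   + (a1 - 1) * np.log(y) + (a2 - 1) * np.log(1 - y))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print(np.round(fit.x, 2))  # estimates close to (-1.0, 2.0, 3.0, 1.0)
```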

The PRESS statistic is based on a sum of external residuals obtained from the exclusion of observations. For beta regression models, Ferrari et al. (2011) present a standardized residual for $\beta$ obtained using Fisher's scoring iterative algorithm under varying dispersion. Here, we propose a new residual based on a combination of the ordinary residuals obtained from the scoring algorithms for $\beta$ and $\gamma$ under varying dispersion. At the outset, consider Fisher's scoring iterative algorithm for estimating $\beta$ (see Appendix A). From (21) it follows that the $m$-th step of the scoring scheme is

$$\beta^{(m+1)}=\beta^{(m)}+\big(X^\top\Phi W^{(m)}X\big)^{-1}X^\top\Phi T^{(m)}\big(y^*-\mu^{*(m)}\big), \tag{4}$$

where the $t$-th elements of the vectors $y^*$ and $\mu^*$ are given, respectively, by

$$y_t^*=\log\{y_t/(1-y_t)\}\quad\text{and}\quad\mu_t^*=\psi(\mu_t\phi_t)-\psi((1-\mu_t)\phi_t), \tag{5}$$

with $\psi(\cdot)$ denoting the digamma function, i.e., $\psi(u)=\mathrm{d}\log\Gamma(u)/\mathrm{d}u$ for $u>0$. The matrices $W$ and $T$ are given in (17) and (19), respectively, $X$ is an $n\times k$ matrix whose $t$-th row is $x_t^\top$ and $\Phi=\mathrm{diag}(\phi_1,\ldots,\phi_n)$. Note that $U_\beta(\beta,\gamma)=X^\top\Phi T(y^*-\mu^*)$ (see (22); Appendix A). Similarly, from (21) it follows that the $m$-th step of the scoring scheme for $\gamma$ is given by

$$\gamma^{(m+1)}=\gamma^{(m)}+\big(Z^\top D^{(m)}Z\big)^{-1}Z^\top S^{(m)}a^{(m)}, \tag{6}$$

where the $t$-th element of the vector $a$ is given by

$$a_t=\mu_t(y_t^*-\mu_t^*)+\log(1-y_t)-\psi((1-\mu_t)\phi_t)+\psi(\phi_t), \tag{7}$$

and the matrices $D$ and $S$ are given in (18) and (20), respectively, and $Z$ is an $n\times q$ matrix whose $t$-th row is $z_t^\top$. It is possible to write the iterative schemes in (4) and (6) in terms of weighted least squares regressions: at each step, $\beta^{(m+1)}$ and $\gamma^{(m+1)}$ are obtained by regressing working response vectors $u_1^{(m)}$ and $u_2^{(m)}$, built from the quantities in (5) and (7), on $X$ and $Z$ with weight matrices $W^{(m)}$ and $D^{(m)}$, respectively. Upon convergence,

$$\hat\beta=\big(X^\top\hat W X\big)^{-1}X^\top\hat W\hat u_1 \quad\text{and}\quad \hat\gamma=\big(Z^\top\hat D Z\big)^{-1}Z^\top\hat D\hat u_2. \tag{8}$$

Here, $\hat W$, $\hat D$, $\hat u_1$ and $\hat u_2$ are the quantities $W$, $D$, $u_1$ and $u_2$, respectively, evaluated at the maximum likelihood estimator. We note that $\hat\beta$ and $\hat\gamma$ in (8) can be viewed as the least squares estimates of $\beta$ and $\gamma$ obtained by regressing $\hat W^{1/2}\hat u_1$ and $\hat D^{1/2}\hat u_2$ on $\hat W^{1/2}X$ and $\hat D^{1/2}Z$, respectively. The ordinary residuals obtained from the iterative processes of $\hat\beta$ and $\hat\gamma$ are given by $\hat W^{1/2}(\hat u_1-X\hat\beta)$ and $\hat D^{1/2}(\hat u_2-Z\hat\gamma)$, respectively. Hence, using the definitions of the matrices given from (17) to (21), we can rewrite the residuals obtained from the iterative processes of $\hat\beta$ and $\hat\gamma$, respectively, as

(9)

where the quantities involved are given in (19) and (20), respectively. Thus, we propose a new residual based on the residuals in (9), which we shall refer to as the combined residual; here $y_t^*$ and $\mu_t^*$ are given in (5). Assuming that $\beta$ and $\gamma$ are known, from (22) to (26) it follows that the variance of the combined residual is given by

(10)

Then, we can define the following standardized combined residual:

(11)

Here, the variance in (10) is evaluated at $\hat\beta$ and $\hat\gamma$. It is important to note that when the precision is constant, it is only necessary to replace $\phi_t$ by $\hat\phi$ in all elements of (11). We should emphasize that here we are interested in the combined residual only as an ingredient of the PRESS statistic.

## 3 PRESS and $P^2$ statistics

Consider the linear model $y=X\beta+\varepsilon$, where $y$ is an $n$-vector of responses, $X$ is a known covariate matrix of dimension $n\times p$, $\beta$ is the parameter vector of dimension $p$ and $\varepsilon$ is a vector of errors distributed as $\mathrm{N}(0,\sigma^2 I_n)$. Let $\hat\beta_{(t)}$ be the estimate of $\beta$ computed without the $t$-th observation and let $\hat y_{(t)}=x_t^\top\hat\beta_{(t)}$ be the case-deleted predicted value of the response when the independent variable has value $x_t$. Thus, for multiple regression, $\mathrm{PRESS}=\sum_{t=1}^{n}(y_t-\hat y_{(t)})^2$, which can be rewritten as $\mathrm{PRESS}=\sum_{t=1}^{n}\{\hat\varepsilon_t/(1-h_{tt})\}^2$, where $\hat\varepsilon_t=y_t-x_t^\top\hat\beta$ is the $t$-th ordinary residual and $h_{tt}$ is the $t$-th diagonal element of the hat matrix $H=X(X^\top X)^{-1}X^\top$.

In the beta regression model, $\hat\beta$ in (8) can be viewed as the least squares estimate of $\beta$ obtained by regressing

(12)

Thus, the prediction error is $y_t^*-x_t^\top\hat\beta_{(t)}$. Using the ideas proposed by Pregibon (1981) and the fact that the case-deleted fit can be expressed through the hat matrix

$$H=\hat W^{1/2}X(X^\top\hat W X)^{-1}X^\top\hat W^{1/2},$$

whose $t$-th diagonal element is $h_{tt}$, it then follows that the prediction error can be written as $r_t^{\beta}/(1-h_{tt})$, where $r_t^{\beta}$ is given in (9). Finally, for the beta regression model the PRESS statistic is given by

$$\mathrm{PRESS}=\sum_{t=1}^{n}\left(\frac{r_t^{\beta}}{1-h_{tt}}\right)^2. \tag{13}$$

In (13) the $t$-th observation is not used in fitting the regression model to predict $y_t$; hence, both the external predicted values and the external residuals are independent of $y_t$. This fact enables the PRESS statistic to be a true assessment of the prediction capabilities of the regression model, regardless of the overall quality of the fit of the model.

Considering the same approach used for the coefficient of determination $R^2$, we can define a prediction coefficient based on PRESS, namely

$$P^2=1-\frac{\mathrm{PRESS}}{\sum_{t=1}^{n}\big(y_t^*-\bar y^*\big)^2}, \tag{14}$$

wherein $\bar y^*$ is the arithmetic average of the $y_t^*$. In the beta regression model with varying dispersion, the $y_t^*$ are given in (12), $\bar y^*$ is their arithmetic average and the number of model parameters is $m=k+q$.
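To make the construction concrete, the sketch below fits a fixed dispersion beta regression by maximum likelihood and then assembles PRESS and $P^2$ from the working responses in (5), a standardized working residual, and a GLM-type weighted hat matrix. The specific residual standardization and weight matrix used here are simplifying assumptions of ours, not the paper's Appendix A expressions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, psi, polygamma, expit

rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
mu = expit(X @ np.array([-0.5, 1.5]))
phi_true = 40.0
y = rng.beta(mu * phi_true, (1 - mu) * phi_true)

def nll(theta):
    """Negative log-likelihood; last component is log(phi)."""
    b, p = theta[:2], np.exp(theta[2])
    m = expit(X @ b)
    return -np.sum(gammaln(p) - gammaln(m * p) - gammaln((1 - m) * p)
                   + (m * p - 1) * np.log(y) + ((1 - m) * p - 1) * np.log(1 - y))

fit = minimize(nll, x0=np.array([0.0, 0.0, 1.0]), method="BFGS")
bhat, phat = fit.x[:2], np.exp(fit.x[2])
mhat = expit(X @ bhat)

# Working responses as in (5) and a standardized working residual.
ystar = np.log(y / (1 - y))
mustar = psi(mhat * phat) - psi((1 - mhat) * phat)
v = polygamma(1, mhat * phat) + polygamma(1, (1 - mhat) * phat)
r = (ystar - mustar) / np.sqrt(v)

# GLM-type weighted hat matrix for the logit link (assumed weight form).
w = phat * v * (mhat * (1 - mhat)) ** 2
Wh = np.diag(np.sqrt(w))
H = Wh @ X @ np.linalg.solve(X.T @ np.diag(w) @ X, X.T) @ Wh
h = np.diag(H)

press = np.sum((r / (1 - h)) ** 2)
p2 = 1 - press / np.sum((ystar - ystar.mean()) ** 2)
print(press > 0, p2 <= 1)  # True True
```

Since PRESS is a sum of squares, $P^2$ is bounded above by one by construction, matching the discussion that follows.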

Cook and Weisberg (1982) suggest other versions of the PRESS statistic based on different residuals. Thus, we present another version of the PRESS statistic, and of the associated $P^2$, considering the new residual presented in (11):

$$\mathrm{PRESS}_{\beta\gamma}=\sum_{t=1}^{n}\left(\frac{r_t^{\beta\gamma}}{1-h_{tt}}\right)^2 \quad\text{and}\quad P^2_{\beta\gamma}=1-\frac{\mathrm{PRESS}_{\beta\gamma}}{\sum_{t=1}^{n}\big(y_t^*-\bar y^*\big)^2}, \tag{15}$$

respectively. It is noteworthy that the measures $R^2$ and $P^2$ are distinct: the former measures the quality of the fit of the model, whereas $P^2$ and $P^2_{\beta\gamma}$ measure its predictive power. Additionally, $P^2$ and $P^2_{\beta\gamma}$ are not positive measures. In fact, PRESS is a positive quantity, and thus $P^2$ and the associated $P^2_{\beta\gamma}$, given in (14) and (15), respectively, take values in $(-\infty,1]$. The closer to one, the better the predictive power of the model. In order to check the goodness-of-fit of the estimated model, we used the approach suggested by Bayer and Cribari-Neto (2014) for beta regression models with varying dispersion, a version of $R^2$ based on the likelihood ratio, denoted $R^2_{LR}$, wherein the comparison involves the maximum log-likelihood achievable (saturated model) and the log-likelihood achieved by the model under investigation.

### 3.1 Monte Carlo results

The Monte Carlo experiments were carried out using both fixed and varying dispersion beta regressions as data generating processes. All results are based on 10,000 Monte Carlo replications. Table 1 contains numerical results for the fixed dispersion beta regression model used as the data generating process, in which the mean submodel includes four covariates.

The covariate values were independently obtained as random draws and were kept fixed throughout the experiment. The precisions and the sample sizes are, respectively, $\phi=40, 80, 120$ and $n=50, 150, 400$, and three ranges of mean responses are considered. To investigate the performance of the statistics under omission of covariates, we considered Scenarios 1, 2 and 3, in which three, two and one covariates are omitted, respectively. In the fourth scenario the estimated model is correctly specified. Additionally, we calculated $R^2_{LR}$ for the same scenarios. The results in Table 1 show that the values of all statistics increase as important covariates are included in the model. The statistics behave similarly across sample sizes and precisions, indicating that the most important factor is the correct specification of the model. Considering the three ranges for the mean response, it should be noted that the statistic values are considerably larger when the mean responses are scattered over the standard unit interval, and the values approach one when the estimated model is closest to the true model. For instance, in Scenario 4, for $n=50, 150, 400$, the values of the two prediction coefficients are, respectively, (0.8354, 0.9357, 0.9748) and (0.8349, 0.9376, 0.9758).

The finite sample behavior of the statistics changes substantially when the mean responses are close to the upper limit of the standard unit interval. The reduction in the statistic values is noteworthy, revealing the difficulty of fitting the model and making predictions in this case. Indeed, in this range it is more difficult to make predictions than to fit the model. For example, in Scenario 1, in which three covariates are omitted from the model, for $n=50, 150, 400$ the $P^2$ values equal 0.0580, 0.0636 and 0.0972, whereas the $R^2_{LR}$ values are 0.1553, 0.1999 and 0.2496, respectively. Similar results were obtained for the other precision values. Even for the correctly specified four-covariate model (Scenario 4), the predictive power of the model is more affected than the quality of its fit when the mean responses are close to one. In this situation, it is noteworthy that the finite sample predictive performance improves as the value of the precision parameter increases. It is also possible to see that the $R^2_{LR}$ statistic shows larger values than the $P^2$ statistic when the mean responses are close to the upper limit of the standard unit interval. However, the two measures behave similarly when used to investigate model misspecification.

The same difficulty in obtaining predictions and in fitting the regression model occurs when the mean responses are close to the lower limit of the standard unit interval. Once again, the greatest difficulty lies in the predictive power of the model. It is also noteworthy that in this case point prediction becomes even less reliable, since the $P^2$ and $P^2_{\beta\gamma}$ values decrease substantially and become considerably distant from the $R^2_{LR}$ values. When the mean responses are close to the lower limit of the standard unit interval, the $P^2_{\beta\gamma}$ statistic seems more able to identify poor predictions; see, for instance, the Scenario 4 entries (model correctly specified; four covariates) in Table 1.

Scenarios | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | |||||||||

Estimated | |||||||||||||

model | |||||||||||||

50 | 150 | 400 | 50 | 150 | 400 | 50 | 150 | 400 | 50 | 150 | 400 | ||

40 | $P^2$ | 0.359 | 0.392 | 0.406 | 0.457 | 0.501 | 0.518 | 0.595 | 0.655 | 0.679 | 0.835 | 0.935 | 0.974 |

| $P^2_{\beta\gamma}$ | 0.454 | 0.471 | 0.478 | 0.567 | 0.599 | 0.611 | 0.704 | 0.754 | 0.774 | 0.856 | 0.938 | 0.974 |

| $R^2_{LR}$ | 0.354 | 0.390 | 0.405 | 0.467 | 0.514 | 0.532 | 0.613 | 0.674 | 0.697 | 0.857 | 0.946 | 0.979 |

80 | $P^2$ | 0.341 | 0.377 | 0.392 | 0.439 | 0.487 | 0.505 | 0.575 | 0.642 | 0.668 | 0.819 | 0.929 | 0.972 |

| $P^2_{\beta\gamma}$ | 0.437 | 0.457 | 0.465 | 0.551 | 0.587 | 0.601 | 0.689 | 0.745 | 0.768 | 0.842 | 0.932 | 0.971 |

| $R^2_{LR}$ | 0.351 | 0.389 | 0.404 | 0.462 | 0.512 | 0.531 | 0.605 | 0.671 | 0.696 | 0.848 | 0.942 | 0.977 |

120 | $P^2$ | 0.335 | 0.372 | 0.387 | 0.432 | 0.482 | 0.501 | 0.569 | 0.638 | 0.664 | 0.813 | 0.927 | 0.971 |

| $P^2_{\beta\gamma}$ | 0.431 | 0.452 | 0.460 | 0.546 | 0.583 | 0.598 | 0.685 | 0.742 | 0.765 | 0.838 | 0.930 | 0.970 |

| $R^2_{LR}$ | 0.350 | 0.389 | 0.404 | 0.460 | 0.511 | 0.531 | 0.603 | 0.670 | 0.696 | 0.845 | 0.941 | 0.977 |

50 | 150 | 400 | 50 | 150 | 400 | 50 | 150 | 400 | 50 | 150 | 400 | ||

40 | $P^2$ | 0.058 | 0.063 | 0.097 | 0.062 | 0.070 | 0.117 | 0.065 | 0.205 | 0.409 | 0.071 | 0.296 | 0.610 |

| $P^2_{\beta\gamma}$ | 0.092 | 0.112 | 0.217 | 0.106 | 0.152 | 0.298 | 0.109 | 0.445 | 0.711 | 0.132 | 0.601 | 0.858 |

| $R^2_{LR}$ | 0.155 | 0.199 | 0.249 | 0.225 | 0.292 | 0.364 | 0.350 | 0.486 | 0.621 | 0.441 | 0.619 | 0.794 |

80 | $P^2$ | 0.033 | 0.037 | 0.072 | 0.038 | 0.044 | 0.097 | 0.035 | 0.165 | 0.385 | 0.037 | 0.240 | 0.574 |

| $P^2_{\beta\gamma}$ | 0.067 | 0.080 | 0.192 | 0.081 | 0.115 | 0.277 | 0.069 | 0.404 | 0.699 | 0.079 | 0.551 | 0.843 |

| $R^2_{LR}$ | 0.149 | 0.195 | 0.246 | 0.212 | 0.283 | 0.358 | 0.329 | 0.471 | 0.612 | 0.412 | 0.597 | 0.781 |

120 | $P^2$ | 0.025 | 0.028 | 0.063 | 0.030 | 0.036 | 0.090 | 0.025 | 0.151 | 0.376 | 0.027 | 0.222 | 0.562 |

| $P^2_{\beta\gamma}$ | 0.058 | 0.069 | 0.184 | 0.072 | 0.103 | 0.270 | 0.057 | 0.390 | 0.694 | 0.063 | 0.534 | 0.838 |

| $R^2_{LR}$ | 0.147 | 0.194 | 0.245 | 0.207 | 0.280 | 0.357 | 0.322 | 0.466 | 0.609 | 0.403 | 0.591 | 0.777 |

50 | 150 | 400 | 50 | 150 | 400 | 50 | 150 | 400 | 50 | 150 | 400 | ||

40 | $P^2$ | 0.067 | 0.055 | 0.080 | 0.072 | 0.048 | 0.070 | 0.072 | 0.144 | 0.285 | 0.079 | 0.327 | 0.663 |

| $P^2_{\beta\gamma}$ | 0.044 | 0.043 | 0.044 | 0.049 | 0.041 | 0.035 | 0.061 | 0.067 | 0.073 | 0.076 | 0.093 | 0.111 |

| $R^2_{LR}$ | 0.214 | 0.252 | 0.294 | 0.274 | 0.327 | 0.381 | 0.378 | 0.482 | 0.576 | 0.526 | 0.700 | 0.847 |

80 | $P^2$ | 0.044 | 0.031 | 0.057 | 0.050 | 0.028 | 0.057 | 0.046 | 0.113 | 0.269 | 0.046 | 0.271 | 0.632 |

| $P^2_{\beta\gamma}$ | 0.022 | 0.021 | 0.022 | 0.025 | 0.020 | 0.017 | 0.029 | 0.037 | 0.047 | 0.036 | 0.046 | 0.060 |

| $R^2_{LR}$ | 0.209 | 0.249 | 0.292 | 0.263 | 0.320 | 0.377 | 0.361 | 0.470 | 0.568 | 0.504 | 0.683 | 0.838 |

120 | $P^2$ | 0.037 | 0.023 | 0.049 | 0.044 | 0.022 | 0.053 | 0.037 | 0.101 | 0.262 | 0.036 | 0.252 | 0.621 |

| $P^2_{\beta\gamma}$ | 0.015 | 0.014 | 0.015 | 0.018 | 0.013 | 0.011 | 0.019 | 0.027 | 0.038 | 0.023 | 0.032 | 0.043 |

| $R^2_{LR}$ | 0.207 | 0.248 | 0.291 | 0.259 | 0.317 | 0.375 | 0.356 | 0.465 | 0.566 | 0.497 | 0.677 | 0.834 |

We have also carried out Monte Carlo simulations using a varying dispersion beta regression model, in which we increased the number of covariates and used different covariates in the mean and precision submodels. In this case the data generating process and the postulated model are the same. We report results for $n=20, 50, 100$ and $\lambda=40, 80, 120$. Here,

$$\lambda=\frac{\max_t\phi_t}{\min_t\phi_t} \tag{16}$$

is the measure of the intensity of nonconstant dispersion. The covariate values in the mean submodel and in the precision submodel were obtained as random draws from the stated distributions, such that the covariate values in the two submodels are not the same. Finally, we also considered covariate values generated from the $t_3$ distribution (Student's $t$ with 3 degrees of freedom). The results are presented in Table 2. We emphasize that only the $n=20$ covariate values were generated; for the larger samples the covariate values are replications of this original set. In this sense, the intensity of nonconstant dispersion remains the same across sample sizes.
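The replication scheme keeps $\lambda$ fixed by construction, since the ratio of the largest to the smallest precision is unchanged when the covariate draws are tiled. A small sketch (hypothetical coefficients; our reading of (16)):

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.uniform(size=20)              # the original covariate draws
gamma = np.array([1.0, 2.0])          # hypothetical precision coefficients

def intensity(zv):
    """lambda = max(phi_t) / min(phi_t) under a log link for phi."""
    phi = np.exp(gamma[0] + gamma[1] * zv)
    return phi.max() / phi.min()

lam20 = intensity(z)                  # n = 20
lam100 = intensity(np.tile(z, 5))     # n = 100 via replication
print(np.isclose(lam20, lam100))  # True: lambda is unchanged
```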

When the mean responses are scattered over the standard unit interval, the three statistics display similar values. It seems that neither the degree of intensity of nonconstant dispersion nor the simultaneous increase in the number of covariates in the two submodels noticeably affects the predictive power and the fit of the model when the sample size is fixed. However, a reduction in the statistic values is noteworthy when the response values are close to one or close to zero, making clear the difficulty of fitting the regression model and obtaining good predictions in these ranges when the precision is modelled. Small values of the $P^2_{\beta\gamma}$ statistic reveal the problem of making good predictions in one of these ranges, whereas in the other the problem is singled out by smaller values of the $P^2$ statistic. Here, the model fit is more affected when the number of covariates increases simultaneously in the two submodels. For instance, consider $n=100$ and $\lambda=40$. In Scenario 5 (one covariate in each submodel), we have $P^2$ = 0.8117, $P^2_{\beta\gamma}$ = 0.3677 and $R^2_{LR}$ = 0.8228, whereas in Scenario 8 (four covariates in each submodel) we have $P^2$ = 0.8627, $P^2_{\beta\gamma}$ = 0.4863 and $R^2_{LR}$ = 0.6447.

We also display in Table 2 the statistic values when the model is correctly specified but leverage points are introduced in the data. To that end, only the covariate values of the mean submodel were obtained as random draws from the $t_3$ distribution, which yielded one point with leverage measure ten times greater than the average value when $n=20$, two high leverage points when $n=50$ and three when $n=100$. Here, we used generalized leverage (Espinheira et al., 2008) as the leverage measure. Notice that in Scenarios 5, 6 and 7 the $P^2$ measure seems more able than the $P^2_{\beta\gamma}$ measure to correctly identify that the leverage points affect the quality of the predictions, whereas the latter outperforms the former in Scenario 8. It is interesting to notice that the smallest values of the three statistics occur in Scenario 5, in which there is a single covariate in each submodel and the only covariate of the mean submodel has values generated from the $t_3$ distribution. Thus, the statistics correctly lead to the conclusion that the greater the influence of the leverage points in the data, the worse the predictions and the model fit.

Scenarios | Scenario 5 | Scenario 6 | Scenario 7 | Scenario 8 | |||||||||

Mean | |||||||||||||

submodels | |||||||||||||

Dispersion | |||||||||||||

submodels | |||||||||||||

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.794 | 0.764 | 0.742 | 0.743 | 0.721 | 0.699 | 0.792 | 0.769 | 0.749 | 0.731 | 0.731 | 0.725 |

| $P^2_{\beta\gamma}$ | 0.823 | 0.812 | 0.806 | 0.772 | 0.762 | 0.755 | 0.850 | 0.843 | 0.838 | 0.819 | 0.824 | 0.826 |

| $R^2_{LR}$ | 0.834 | 0.837 | 0.842 | 0.771 | 0.784 | 0.797 | 0.779 | 0.779 | 0.785 | 0.712 | 0.738 | 0.761 |

80 | $P^2$ | 0.773 | 0.739 | 0.714 | 0.702 | 0.674 | 0.649 | 0.745 | 0.715 | 0.687 | 0.646 | 0.642 | 0.630 |

| $P^2_{\beta\gamma}$ | 0.803 | 0.789 | 0.781 | 0.732 | 0.717 | 0.708 | 0.814 | 0.802 | 0.794 | 0.758 | 0.762 | 0.763 |

| $R^2_{LR}$ | 0.840 | 0.844 | 0.850 | 0.783 | 0.796 | 0.810 | 0.789 | 0.790 | 0.796 | 0.724 | 0.749 | 0.772 |

120 | $P^2$ | 0.766 | 0.731 | 0.704 | 0.688 | 0.657 | 0.630 | 0.729 | 0.696 | 0.665 | 0.615 | 0.609 | 0.596 |

| $P^2_{\beta\gamma}$ | 0.796 | 0.781 | 0.771 | 0.717 | 0.701 | 0.690 | 0.801 | 0.788 | 0.778 | 0.737 | 0.740 | 0.740 |

| $R^2_{LR}$ | 0.842 | 0.846 | 0.852 | 0.786 | 0.799 | 0.813 | 0.793 | 0.793 | 0.799 | 0.727 | 0.753 | 0.775 |

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.448 | 0.569 | 0.633 | 0.527 | 0.594 | 0.650 | 0.576 | 0.671 | 0.730 | 0.789 | 0.818 | 0.841 |

| $P^2_{\beta\gamma}$ | 0.775 | 0.870 | 0.905 | 0.829 | 0.878 | 0.909 | 0.839 | 0.901 | 0.933 | 0.950 | 0.960 | 0.968 |

| $R^2_{LR}$ | 0.455 | 0.557 | 0.617 | 0.432 | 0.494 | 0.557 | 0.353 | 0.445 | 0.513 | 0.454 | 0.501 | 0.544 |

80 | $P^2$ | 0.410 | 0.534 | 0.599 | 0.461 | 0.534 | 0.592 | 0.482 | 0.592 | 0.661 | 0.707 | 0.743 | 0.774 |

| $P^2_{\beta\gamma}$ | 0.770 | 0.862 | 0.898 | 0.809 | 0.861 | 0.895 | 0.804 | 0.879 | 0.915 | 0.928 | 0.942 | 0.954 |

| $R^2_{LR}$ | 0.491 | 0.588 | 0.644 | 0.471 | 0.530 | 0.589 | 0.399 | 0.485 | 0.551 | 0.493 | 0.536 | 0.576 |

120 | $P^2$ | 0.396 | 0.522 | 0.587 | 0.436 | 0.511 | 0.571 | 0.451 | 0.566 | 0.637 | 0.678 | 0.714 | 0.750 |

| $P^2_{\beta\gamma}$ | 0.767 | 0.859 | 0.895 | 0.801 | 0.855 | 0.890 | 0.794 | 0.872 | 0.909 | 0.921 | 0.935 | 0.948 |

| $R^2_{LR}$ | 0.501 | 0.597 | 0.653 | 0.482 | 0.541 | 0.599 | 0.412 | 0.497 | 0.563 | 0.504 | 0.544 | 0.586 |

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.680 | 0.769 | 0.811 | 0.641 | 0.692 | 0.732 | 0.647 | 0.739 | 0.797 | 0.800 | 0.832 | 0.862 |

| $P^2_{\beta\gamma}$ | 0.218 | 0.298 | 0.367 | 0.257 | 0.296 | 0.341 | 0.281 | 0.332 | 0.387 | 0.409 | 0.442 | 0.486 |

| $R^2_{LR}$ | 0.719 | 0.781 | 0.822 | 0.609 | 0.639 | 0.677 | 0.464 | 0.547 | 0.621 | 0.532 | 0.585 | 0.644 |

80 | $P^2$ | 0.657 | 0.748 | 0.792 | 0.584 | 0.639 | 0.683 | 0.567 | 0.675 | 0.742 | 0.721 | 0.763 | 0.804 |

| $P^2_{\beta\gamma}$ | 0.166 | 0.248 | 0.321 | 0.175 | 0.216 | 0.262 | 0.169 | 0.220 | 0.278 | 0.271 | 0.305 | 0.355 |

| $R^2_{LR}$ | 0.743 | 0.754 | 0.761 | 0.639 | 0.667 | 0.704 | 0.504 | 0.585 | 0.657 | 0.566 | 0.617 | 0.676 |

120 | $P^2$ | 0.650 | 0.741 | 0.784 | 0.565 | 0.619 | 0.664 | 0.540 | 0.653 | 0.722 | 0.691 | 0.737 | 0.781 |

| $P^2_{\beta\gamma}$ | 0.150 | 0.232 | 0.306 | 0.148 | 0.189 | 0.236 | 0.132 | 0.183 | 0.242 | 0.225 | 0.260 | 0.311 |

| $R^2_{LR}$ | 0.750 | 0.760 | 0.774 | 0.649 | 0.675 | 0.711 | 0.515 | 0.596 | 0.668 | 0.577 | 0.628 | 0.689 |

covariate values of the mean submodel generated from the $t_3$ distribution. |||||||||||||

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.426 | 0.401 | 0.385 | 0.733 | 0.705 | 0.680 | 0.624 | 0.603 | 0.585 | 0.775 | 0.773 | 0.768 |

| $P^2_{\beta\gamma}$ | 0.526 | 0.544 | 0.565 | 0.822 | 0.812 | 0.806 | 0.685 | 0.700 | 0.716 | 0.751 | 0.762 | 0.768 |

| $R^2_{LR}$ | 0.515 | 0.555 | 0.593 | 0.756 | 0.772 | 0.787 | 0.641 | 0.671 | 0.697 | 0.741 | 0.776 | 0.800 |

80 | $P^2$ | 0.400 | 0.364 | 0.340 | 0.696 | 0.658 | 0.628 | 0.553 | 0.516 | 0.490 | 0.710 | 0.701 | 0.692 |

| $P^2_{\beta\gamma}$ | 0.633 | 0.618 | 0.613 | 0.793 | 0.779 | 0.769 | 0.750 | 0.744 | 0.740 | 0.673 | 0.680 | 0.686 |

| $R^2_{LR}$ | 0.537 | 0.577 | 0.616 | 0.767 | 0.782 | 0.799 | 0.657 | 0.685 | 0.711 | 0.754 | 0.789 | 0.815 |

120 | $P^2$ | 0.386 | 0.348 | 0.322 | 0.682 | 0.641 | 0.608 | 0.523 | 0.482 | 0.453 | 0.687 | 0.675 | 0.663 |

| $P^2_{\beta\gamma}$ | 0.639 | 0.622 | 0.614 | 0.783 | 0.767 | 0.756 | 0.742 | 0.732 | 0.726 | 0.644 | 0.650 | 0.655 |

| $R^2_{LR}$ | 0.545 | 0.584 | 0.623 | 0.770 | 0.785 | 0.802 | 0.661 | 0.689 | 0.715 | 0.759 | 0.793 | 0.819 |

Scenarios | Scenario 5 | Scenario 6 | Scenario 7 | Scenario 8 | |||||||||

True | |||||||||||||

models | |||||||||||||

Estimated | |||||||||||||

models | |||||||||||||

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.778 | 0.734 | 0.699 | 0.761 | 0.721 | 0.677 | 0.707 | 0.674 | 0.639 | 0.718 | 0.707 | 0.685 |

| $P^2_{\beta\gamma}$ | 0.792 | 0.761 | 0.740 | 0.752 | 0.717 | 0.684 | 0.775 | 0.757 | 0.735 | 0.776 | 0.768 | 0.752 |

| $R^2_{LR}$ | 0.777 | 0.734 | 0.701 | 0.722 | 0.676 | 0.624 | 0.607 | 0.555 | 0.505 | 0.571 | 0.549 | 0.512 |

80 | $P^2$ | 0.759 | 0.711 | 0.671 | 0.728 | 0.682 | 0.630 | 0.650 | 0.608 | 0.562 | 0.643 | 0.626 | 0.594 |

| $P^2_{\beta\gamma}$ | 0.772 | 0.738 | 0.714 | 0.714 | 0.673 | 0.631 | 0.728 | 0.707 | 0.679 | 0.717 | 0.703 | 0.680 |

| $R^2_{LR}$ | 0.781 | 0.739 | 0.707 | 0.732 | 0.687 | 0.637 | 0.630 | 0.582 | 0.533 | 0.600 | 0.579 | 0.544 |

120 | $P^2$ | 0.753 | 0.703 | 0.660 | 0.717 | 0.669 | 0.614 | 0.631 | 0.584 | 0.534 | 0.617 | 0.598 | 0.560 |

| $P^2_{\beta\gamma}$ | 0.764 | 0.729 | 0.702 | 0.702 | 0.656 | 0.611 | 0.712 | 0.688 | 0.660 | 0.696 | 0.680 | 0.651 |

| $R^2_{LR}$ | 0.783 | 0.741 | 0.708 | 0.735 | 0.690 | 0.640 | 0.637 | 0.589 | 0.541 | 0.608 | 0.588 | 0.552 |

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.141 | 0.163 | 0.175 | 0.194 | 0.208 | 0.220 | 0.280 | 0.292 | 0.300 | 0.347 | 0.350 | 0.351 |

| $P^2_{\beta\gamma}$ | 0.274 | 0.349 | 0.390 | 0.314 | 0.360 | 0.395 | 0.433 | 0.460 | 0.480 | 0.560 | 0.550 | 0.545 |

| $R^2_{LR}$ | 0.250 | 0.244 | 0.233 | 0.177 | 0.153 | 0.134 | 0.058 | 0.039 | 0.023 | 0.086 | 0.064 | 0.041 |

80 | $P^2$ | 0.093 | 0.114 | 0.127 | 0.115 | 0.127 | 0.136 | 0.165 | 0.172 | 0.176 | 0.215 | 0.211 | 0.209 |

| $P^2_{\beta\gamma}$ | 0.242 | 0.320 | 0.364 | 0.253 | 0.296 | 0.327 | 0.332 | 0.352 | 0.368 | 0.464 | 0.442 | 0.431 |

| $R^2_{LR}$ | 0.275 | 0.268 | 0.257 | 0.213 | 0.191 | 0.171 | 0.115 | 0.097 | 0.082 | 0.162 | 0.139 | 0.118 |

120 | $P^2$ | 0.077 | 0.098 | 0.111 | 0.089 | 0.100 | 0.109 | 0.125 | 0.130 | 0.133 | 0.170 | 0.163 | 0.159 |

| $P^2_{\beta\gamma}$ | 0.231 | 0.311 | 0.356 | 0.231 | 0.274 | 0.304 | 0.295 | 0.313 | 0.325 | 0.428 | 0.400 | 0.385 |

| $R^2_{LR}$ | 0.282 | 0.275 | 0.265 | 0.225 | 0.203 | 0.182 | 0.131 | 0.114 | 0.098 | 0.181 | 0.157 | 0.138 |

20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | 20 | 50 | 100 | ||

40 | $P^2$ | 0.295 | 0.312 | 0.316 | 0.270 | 0.285 | 0.295 | 0.317 | 0.331 | 0.338 | 0.371 | 0.374 | 0.377 |

| $P^2_{\beta\gamma}$ | 0.119 | 0.123 | 0.128 | 0.168 | 0.172 | 0.179 | 0.243 | 0.252 | 0.257 | 0.278 | 0.292 | 0.301 |

| $R^2_{LR}$ | 0.560 | 0.522 | 0.484 | 0.400 | 0.363 | 0.326 | 0.159 | 0.143 | 0.130 | 0.165 | 0.146 | 0.124 |

80 | $P^2$ | 0.272 | 0.288 | 0.290 | 0.202 | 0.217 | 0.226 | 0.205 | 0.214 | 0.218 | 0.246 | 0.239 | 0.234 |

| $P^2_{\beta\gamma}$ | 0.068 | 0.073 | 0.077 | 0.086 | 0.090 | 0.096 | 0.127 | 0.133 | 0.137 | 0.138 | 0.149 | 0.155 |

| $R^2_{LR}$ | 0.603 | 0.576 | 0.545 | 0.439 | 0.422 | 0.399 | 0.216 | 0.207 | 0.201 | 0.245 | 0.223 | 0.204 |

120 | $P^2$ | 0.264 | 0.280 | 0.283 | 0.177 | 0.194 | 0.203 | 0.166 | 0.171 | 0.175 | 0.202 | 0.189 | 0.180 |

| $P^2_{\beta\gamma}$ | 0.052 | 0.058 | 0.062 | 0.059 | 0.063 | 0.069 | 0.087 | 0.092 | 0.095 | 0.093 | 0.101 | 0.105 |

| $R^2_{LR}$ | 0.618 | 0.596 | 0.570 | 0.449 | 0.439 | 0.427 | 0.231 | 0.222 | 0.220 | 0.266 | 0.241 | 0.252 |

Finally, Monte Carlo simulations were carried out to assess the performance of the statistics when the dispersion modelling is neglected. To that end, the true data generating process considers varying dispersion, but a fixed dispersion beta regression is estimated; see Table 3. In this case the estimated model is misspecified. Thus, we expect the statistics to display smaller values than those in Table 2. In this sense, when