Jackknife Empirical Likelihood Methods for Gini Correlations and their Equality Testing

Yongli Sang^a, Xin Dang^b and Yichuan Zhao^c. CONTACT: Yongli Sang. Email: yongli.sang@louisiana.edu
^a Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
^b Department of Mathematics, University of Mississippi, University, MS 38677, USA
^c Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, USA
Abstract

The Gini correlation, whose properties are a mixture of those of Pearson's and Spearman's correlations, plays an important role in measuring dependence between random variables with heavy-tailed distributions. Due to the structure of this dependence measure, there are two Gini correlations between each pair of random variables, and they are not equal in general. Both the Gini correlation and the equality of the two Gini correlations play important roles in economics. In the literature, few papers focus on inference for the Gini correlations and on testing their equality. In this paper, we develop the jackknife empirical likelihood (JEL) approach for a single Gini correlation, for testing the equality of the two Gini correlations, and for the differences of the Gini correlations of two independent samples. The standard limiting chi-square distributions of these jackknife empirical likelihood ratio statistics are established and used to construct confidence intervals and rejection regions, and to calculate p-values of the tests. Simulation studies show that our methods are competitive with existing methods in terms of coverage accuracy and length of confidence intervals, as well as in terms of power of the tests. The proposed methods are illustrated in an application to a real data set from the UCI Machine Learning Repository.


Keywords: Jackknife empirical likelihood; Gini correlation; U-statistics; Wilks' theorem; Test
MSC 2010 subject classification: 62G35, 62G20

1 Introduction

The Gini correlation has been used in a wide range of fields since it was proposed in 1987 ([17]). In economic data analysis, the Gini correlation enables us to test whether an asset increases or decreases the risk of a portfolio ([18]), and can be used to model the relationship between family income and its components ([17]); in plant systems biology, the Gini correlation can compensate for the shortcomings of popular correlations in inferring regulatory relationships in transcriptome analysis ([11]); it has also been widely used in many branches of modern signal processing ([26]).

Let X and Y be two non-degenerate random variables with continuous marginal distribution functions F and G, respectively, and joint distribution function H. Then the two Gini correlations are defined as

γ(X, Y) = cov(X, G(Y)) / cov(X, F(X))   and   γ(Y, X) = cov(Y, F(X)) / cov(Y, G(Y))      (1)

to reflect the different roles of X and Y. This representation indicates that the Gini correlation has mixed properties of the Pearson and Spearman correlations, and thus complements those two correlations ([17], [18], [19]). The two Gini correlations in (1) are not symmetric in X and Y in general. The equality of the two Gini correlations is involved in many procedures in economics. For example, it can be used to assess the similarity of two popular methodologies for constructing portfolios, the MV and MG approaches ([20]), and the equality of the two Gini correlations between the return on each asset and the return on the portfolio is a necessary condition for all the Security Characteristic curves to be linear ([20]); that is, rejecting the hypothesis of equal Gini correlations amounts to rejecting the assumption that all the Security Characteristic curves are linear. Therefore, understanding the Gini correlation and testing the equality of the two Gini correlations are essential. In this paper, we develop procedures to estimate the Gini correlation and to test the equality of the two Gini correlations. To the best of our knowledge, there is no existing nonparametric approach for inference on the Gini correlations.

As a nonparametric method, the empirical likelihood (EL) approach introduced by Owen ([12], [13]) has been widely used for constructing confidence intervals. It combines the effectiveness of likelihood with the reliability of a nonparametric approach. On the computational side, it involves maximizing a nonparametric likelihood supported on the data subject to constraints. If these constraints are linear, the computation of the EL method is particularly easy. However, EL loses this efficiency when nonlinear constraints are involved. To overcome this computational difficulty, Wood ([25]) proposed a sequential linearization method that linearizes the nonlinear constraints, but did not provide a Wilks' theorem and stated that it was not easy to establish. Jing et al. ([4]) proposed the jackknife empirical likelihood (JEL) approach, which transforms the maximization problem of the EL with nonlinear constraints into the simple case of EL for the mean of jackknife pseudo-values and is very effective in handling one- and two-sample U-statistics; Wilks' theorems for one- and two-sample U-statistics are established there. This approach has attracted strong interest in a wide range of fields due to its efficiency, and many papers have investigated the method, for example [9], [14], [2], [22], [23], [7] and [6]. However, the theorems derived in [4] cover only the simple case of a single U-statistic, whereas the Gini correlation cannot be estimated by a single U-statistic, so the results of [4] cannot be applied directly. It can, however, be estimated by a functional of multiple U-statistics ([17]). Due to this specific form of the Gini correlation, we propose a novel U-statistic type functional and apply a JEL-based procedure with this U-structured estimating function to the Gini correlation. This approach may also be useful for making inferences about difference functions with a multiple U-statistic structure when nuisance parameters are involved.

In the test

H₀ : γ(X, Y) = γ(Y, X)   versus   Hₐ : γ(X, Y) ≠ γ(Y, X),      (2)

the natural empirical estimator of the difference γ(X, Y) − γ(Y, X) is a function of four dependent U-statistics. By U-statistic theory, this estimator, after appropriate normalization, has a limiting normal distribution; however, the asymptotic variance is complicated to calculate. In the present paper, by proposing a new U-statistic type functional system, we avoid estimating the asymptotic variance when carrying out the test. Only some of the parameters are of interest, though. When only part of the parameters are of interest, Qin and Lawless ([15]) proposed a profile empirical likelihood method, an important tool that transforms nonlinear constraints into linear ones by introducing linked nuisance parameters; however, the profile EL can be computationally costly. Hjort, McKeague and Van Keilegom ([3]) proposed reducing the computational complexity by allowing plug-in estimates of nuisance parameters in the estimating equations, at the cost that the standard Wilks' theorem may no longer hold. Li et al. ([6]) proposed a jackknife plug-in estimate expressed as a function of the parameters of interest so that their EL still has a standard chi-square limiting distribution. We cannot take advantage of their method, however, since the parameters of interest in this paper are estimated by solving estimating functions with a U-statistic structure. Nor can we apply the theoretical results of the profile JEL method in [7]: Li, Xu and Zhao ([7]) developed a JEL-based inferential procedure for general U-structured estimating functions, but it requires the kernel functions to be bounded both in the sample space and in the parameter space. Under merely second-order moment assumptions, we establish the Wilks' theorem for the jackknife empirical log-likelihood ratio for the difference of the two Gini correlations. The computation is also easy, since a simple plug-in estimate of the nuisance parameter is used.

It is often of considerable interest to compare the Gini correlations from two independent populations. For instance, Lohweg ([10]) constructed adaptive wavelets for the analysis of different print patterns on banknotes, making it possible to use mobile devices for banknote authentication. After the wavelet transformations, there are four continuous variables: the variance, skewness, kurtosis and entropy of the wavelet-transformed images. It is natural to ask what the correlations between each pair of these variables are, and whether they differ between genuine and forged banknotes. One of the main goals of this paper is to develop the JEL method for comparing the Gini correlations of independent data sets.

The remainder of the paper is organized as follows. In Section 2, we develop the JEL method for the Gini correlations. The JEL method for testing the equality of the Gini correlations is proposed in Section 3. In Section 4, we consider the JEL method for comparing Gini correlations across two samples. Following the introduction of the methods in each section, simulation studies are conducted to compare our JEL methods with existing methods. A real data analysis is presented in Section 5. Section 6 concludes the paper with a brief summary. All proofs are given in the Appendix.

2 JEL for the Gini correlation

The Gini correlation has mixed properties of the Pearson and Spearman correlations: (1) if X and Y are statistically independent, then γ(X, Y) = γ(Y, X) = 0; (2) γ(X, Y) is invariant under all strictly increasing transformations of Y and under changes of scale and location in X; (3) γ(X, Y) is bounded between −1 and 1; and (4) if Y is a monotone increasing (decreasing) function of X, then both γ(X, Y) and γ(Y, X) equal +1 (−1). From [17], γ(X, Y) in (1) can be written in the form below

(3)

where (X₁, Y₁) and (X₂, Y₂) are independent copies of (X, Y),

(4)

and

(5)

Given an i.i.d. sample {(X₁, Y₁), …, (Xₙ, Yₙ)} from H, the Gini correlation can be estimated by a ratio of two U-statistics

(6)

where the kernels of the numerator and denominator U-statistics are given by (4) and (5), respectively.
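For reference, under the standard covariance identities of Schechtman and Yitzhaki (our reconstruction; the exact kernels in (4) and (5) are not reproduced above), γ(X, Y) can be written as a ratio of expectations of two symmetric kernels:

```latex
% (X_1, Y_1) and (X_2, Y_2) denote independent copies of (X, Y).
\[
\gamma(X,Y)
  = \frac{\operatorname{cov}\!\bigl(X, G(Y)\bigr)}{\operatorname{cov}\!\bigl(X, F(X)\bigr)}
  = \frac{E\bigl[(X_1 - X_2)\,\operatorname{sgn}(Y_1 - Y_2)\bigr]}{E\,\lvert X_1 - X_2\rvert},
\]
% using cov(X, G(Y)) = (1/4) E[(X_1 - X_2) sgn(Y_1 - Y_2)]
% and   cov(X, F(X)) = (1/4) E|X_1 - X_2|.
```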

Remark 2.1

A direct computation of the U-statistics is time-intensive, with complexity O(n²). Rewriting the numerator and denominator as linear combinations of order statistics reduces the computation to O(n log n): the denominator is a linear combination of the order statistics X₍₁₎ ≤ ⋯ ≤ X₍ₙ₎ of X, and the numerator is the same linear combination applied to the concomitants, i.e. the X value that is paired with the i-th smallest Y ([17]).
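As an illustration of Remark 2.1, the following sketch computes both sample Gini correlations in O(n log n) time using centered ranks; the function names are ours and the estimator form is the standard rank representation (assumed, not copied from (6)).

```python
import numpy as np
from scipy.stats import rankdata

def gini_correlations(x, y):
    """Return (gamma_hat(X, Y), gamma_hat(Y, X)) via centered ranks, O(n log n)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Centered ranks 2*rank - (n + 1); ranking is a sort, hence O(n log n).
    rx = 2.0 * rankdata(x) - (n + 1)
    ry = 2.0 * rankdata(y) - (n + 1)
    # gamma(X, Y) ~ cov(X, G(Y)) / cov(X, F(X)); common normalizing constants cancel.
    gamma_xy = np.sum(x * ry) / np.sum(x * rx)
    gamma_yx = np.sum(y * rx) / np.sum(y * ry)
    return gamma_xy, gamma_yx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=2000)
    print(gini_correlations(z[:, 0], z[:, 1]))  # both close to 0.5 under normality
```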

By U-statistic theory, the estimator (6) of γ(X, Y) is asymptotically normal ([17], [16]):

√n (γ̂(X, Y) − γ(X, Y)) → N(0, σ²_γ) in distribution as n → ∞,      (7)

with

(8)

where

and

In particular, under a bivariate normal distribution with correlation ρ, Xu ([26]) provided an explicit formula for this asymptotic variance. However, the asymptotic variance is difficult to obtain for non-normal distributions, and an estimate is needed, either by Monte Carlo simulation or by the jackknife method. Let γ̂(−i) be the Gini correlation estimator based on the sample with the i-th observation deleted. Then the jackknife estimator of (8) is

(9)

where γ̄ denotes the average of the leave-one-out estimates γ̂(−i); see [21].
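A brief computational sketch of this jackknife variance estimate and the resulting normal-approximation interval (the rank representation and names are ours; the variance formula is the usual leave-one-out jackknife estimator, not necessarily the exact expression (9)):

```python
import numpy as np
from scipy.stats import norm, rankdata

def gini_corr_xy(x, y):
    """Sample gamma(X, Y) via centered ranks."""
    n = len(x)
    num = np.sum(x * (2.0 * rankdata(y) - (n + 1)))
    den = np.sum(x * (2.0 * rankdata(x) - (n + 1)))
    return num / den

def jackknife_ci(x, y, level=0.95):
    """Normal-approximation confidence interval for gamma(X, Y) with a
    leave-one-out jackknife estimate of the variance of the estimator."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    point = gini_corr_xy(x, y)
    loo = np.array([gini_corr_xy(np.delete(x, i), np.delete(y, i)) for i in range(n)])
    var_hat = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)  # jackknife Var(gamma_hat)
    half = norm.ppf(0.5 + level / 2.0) * np.sqrt(var_hat)
    return point - half, point + half
```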

In this section, we combine the jackknife approach with the EL method to make inference on the Gini correlation.

2.1 JEL for the Gini correlation

Without loss of generality, we consider the case of γ(X, Y); the procedure for γ(Y, X) is similar. For simplicity, we use γ to denote γ(X, Y) in this section. Define a novel U-statistic type functional as

(10)

where

(11)

with the two kernels given by (4) and (5), respectively. By (3), this functional has expectation zero at the true value of γ. To apply the JEL, we define the jackknife pseudo-values as

where the leave-one-out statistic is computed from the sample with the i-th observation deleted. It can easily be shown that the average of the pseudo-values recovers the full-sample statistic.
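For concreteness, the jackknife pseudo-values used in the JEL of [4] take the standard form below (our reconstruction of the convention; T_n denotes the U-statistic type functional computed from the full sample and T_{n-1}^{(-i)} its leave-one-out version):

```latex
\[
\widehat{V}_i \;=\; n\,T_n \;-\; (n-1)\,T_{n-1}^{(-i)}, \qquad i = 1, \dots, n,
\]
% for a U-statistic of degree 2 the pseudo-values average back to the full-sample statistic:
\[
\frac{1}{n}\sum_{i=1}^{n} \widehat{V}_i \;=\; T_n .
\]
```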

Let p₁, …, pₙ be nonnegative numbers summing to one. Then, following the standard empirical likelihood method for a univariate mean applied to the jackknife pseudo-values ([12], [13]), we define the JEL ratio at γ as

Utilizing the standard Lagrange multiplier technique, the jackknife empirical log-likelihood ratio at γ is

where the Lagrange multiplier λ satisfies

We then have the following Wilks' theorem, which requires only the existence of second moments:

Theorem 2.1

Assume that the kernels in (4) and (5) have finite second moments. Then, as n → ∞, the jackknife empirical log-likelihood ratio evaluated at the true value of γ converges in distribution to a chi-square distribution with one degree of freedom.

Based on the theorem above, a level (1 − α) jackknife empirical likelihood confidence interval for γ can be constructed as

where χ²₁,₁₋α denotes the (1 − α) quantile of the chi-square distribution with one degree of freedom, and the observed empirical log-likelihood ratio is evaluated at each candidate value of γ.

In application, an under-coverage problem may appear when the sample size is relatively small. In order to improve coverage probabilities, we utilize the adjusted empirical likelihood method ([1]) by adding one more pseudo-value

where the added pseudo-value is the negative of a scaled average of the original n pseudo-values. Following the recommendation of [1], we take the scaling factor to be aₙ = log(n)/2.
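A minimal numerical sketch of the JEL confidence interval described above, assuming the rank representation of the estimating function (the function names, the bisection bounds and the grid inversion are ours; the adjusted version simply appends the extra pseudo-value of [1]):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, rankdata

def estimating_stat(x, y, gamma):
    """U-statistic type estimating function with mean zero at the true gamma:
    numerator of gamma(X, Y) minus gamma times its denominator (centered ranks)."""
    n = len(x)
    num = np.sum(x * (2.0 * rankdata(y) - (n + 1)))
    den = np.sum(x * (2.0 * rankdata(x) - (n + 1)))
    return (num - gamma * den) / (n * (n - 1))

def neg2_log_jel(x, y, gamma, adjusted=False):
    """-2 log jackknife empirical likelihood ratio at a candidate value gamma."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    full = estimating_stat(x, y, gamma)
    loo = np.array([estimating_stat(np.delete(x, i), np.delete(y, i), gamma)
                    for i in range(n)])          # O(n^2 log n); fine for a sketch
    v = n * full - (n - 1) * loo                 # jackknife pseudo-values
    if adjusted:                                 # adjusted EL pseudo-value, a_n = log(n)/2
        v = np.append(v, -(np.log(n) / 2.0) * v.mean())
    if v.min() >= 0 or v.max() <= 0:             # zero outside the convex hull
        return np.inf
    score = lambda lam: np.sum(v / (1.0 + lam * v))
    lam = brentq(score, -1.0 / v.max() + 1e-8, -1.0 / v.min() - 1e-8)
    return 2.0 * np.sum(np.log1p(lam * v))

def jel_confidence_interval(x, y, level=0.95, adjusted=False):
    """Level-'level' JEL confidence interval for gamma(X, Y) by grid inversion."""
    cutoff = chi2.ppf(level, df=1)
    grid = np.linspace(-0.999, 0.999, 500)
    kept = [g for g in grid if neg2_log_jel(x, y, g, adjusted=adjusted) <= cutoff]
    return (min(kept), max(kept)) if kept else (np.nan, np.nan)
```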

2.2 Empirical performance

To evaluate the empirical performance of our JEL methods (denoted 'JEL', with subscripts for γ(X, Y) and γ(Y, X), respectively), we conduct a simulation study. Another purpose is to examine whether the adjusted JEL methods (denoted 'AJEL') improve on the JEL method for small sample sizes. The interval estimators for the Gini correlations based on the asymptotic normality in (7), with the variance calculated by (8), are denoted 'AV', while 'J' denotes the methods that use (9) to estimate the variance. Similar notation is used for the other methods in the following sections.

We also present the results for Pearson's correlation. The limiting distribution of the sample Pearson correlation coefficient is normal, with an asymptotic variance given by

(12)

see, for example, [24]. The Pearson correlation estimator requires a finite fourth moment of the distribution to evaluate its asymptotic variance. For bivariate normal distributions, the asymptotic variance simplifies to (1 − ρ²)². For distributions other than the normal, the asymptotic variance may be estimated by a Monte Carlo simulation or by the jackknife variance method. We do not include another popular correlation, Kendall's tau, in the simulation; its performance is discussed in [16].

We generate 3000 samples of two different sample sizes from two bivariate distributions, namely the normal and the t distribution, with a common scatter matrix. Without loss of generality, we consider only nonnegative values of ρ. For each simulated data set, 90% and 95% confidence intervals are calculated using the different methods. We repeat this procedure 30 times. The average coverage probabilities and average lengths of the confidence intervals, as well as their standard deviations (in parentheses), are presented in Tables 1 and 2.

Method
CovProb Length CovProb Length CovProb Length CovProb Length
(left to right: 90% and 95% intervals for the smaller sample size, then 90% and 95% intervals for the larger sample size)
JEL .880(.005) .642(.002) .931(.005) .799(.004) .900(.005) .208(.002) .950(.003) .230(.000)
AJEL .907(.005) .710(.003) .951(.004) .937(.005) .905(.005) .210(.000) .953(.003) .232(.000)
J .869(.006) .812(.002) .916(.006) .968(.004) .898(.005) .238(.000) .947(.004) .284(.000)
AV .886(.005) .748(.000) .945(.005) .891(.000) .900(.005) .237(.000) .951(.004) .282(.000)
JEL .880(.005) .642(.002) .931(.006) .798(.006) .900(.005) .208(.000) .950(.003) .230(.000)
AJEL .908(.003) .709(.003) .951(.005) .936(.007) .904(.005) .209(.000) .953(.003) .232(.000)
J .869(.004) .811(.003) .916(.006) .968(.005) .897(.005) .238(.000) .947(.003) .283(.000)
AV .886(.004) .748(.000) .945(.005) .891(.000) .900(.005) .237(.000) .951(.003) .282(.000)
.889(.004) .728(.000) .947(.006) .868(.000) .899(.005) .230(.000) .950(.004) .274(.000)
JEL .876(.007) .506(.002) .925(.006) .572(.004) .900(.006) .156(.000) .950(.004) .171(.000)
AJEL .903(.006) .530(.003) .944(.005) .673(.005) .905(.006) .157(.000) .953(.004) .173(.000)
J .867(.007) .634(.003) .908(.007) .756(.005) .898(.006) .182(.000) .947(.004) .217(.000)
AV .886(.005) .568(.000) .937(.005) .677(.000) .898(.006) .180(.000) .949(.004) .214(.000)
JEL .876(.006) .506(.002) .926(.006) .572(.004) .900(.006) .156(.000) .950(.003) .171(.000)
AJEL .903(.006) .530(.002) .945(.005) .673(.004) .905(.006) .157(.000) .953(.003) .173(.000)
J .865(.006) .634(.003) .908(.007) .757(.005) .897(.006) .182(.000) .946(.004) .217(.000)
AV .886(.005) .568(.000) .937(.005) .677(.000) .899(.006) .180(.000) .949(.003) .214(.000)
.893(.005) .552(.000) .941(.005) .657(.000) .900(.005) .175(.000) .950(.003) .208(.000)
JEL .874(.005) .144(.001) .919(.005) .183(.002) .899(.005) .040(.000) .948(.005) .044(.000)
AJEL .898(.005) .152(.002) .936(.005) .254(.003) .903(.006) .040(.000) .951(.005) .044(.000)
J .857(.007) .185(.002) .892(.005) .220(.002) .897(.006) .048(.000) .942(.005) .057(.000)
AV .876(.006) .146(.000) .919(.006) .174(.000) .894(.006) .046(.000) .943(.005) .055(.000)
JEL .874(.007) .143(.001) .919(.004) .184(.002) .899(.006) .040(.000) .949(.005) .044(.000)
AJEL .898(.006) .151(.002) .937(.004) .254(.003) .903(.006) .040(.000) .952(.005) .044(.000)
J .858(.007) .184(.002) .892(.006) .220(.002) .896(.006) .048(.000) .943(.006) .057(.000)
AV .876(.007) .146(.000) .918(.006) .174(.000) .894(.006) .046(.000) .944(.005) .055(.000)
.894(.006) .140(.000) .929(.005) .167(.000) .900(.006) .044(.000) .947(.004) .053(.000)
Table 1: Coverage probabilities (standard deviations) and average lengths (standard deviations) of the Gini correlations’ interval estimators from a variety of methods under bivariate normal distributions.

Under elliptical distributions, including the normal and t distributions, the two Gini correlations and the Pearson correlation are all equal to the linear correlation parameter ρ ([16]). Thus, all the methods listed in Tables 1 and 2 estimate the same quantity ρ.

From Table 1, we observe that under the bivariate normal distribution all methods attain good coverage probabilities when the sample size is large, but the JEL methods produce the shortest intervals for all values of ρ; they even behave better than the Pearson method, which is asymptotically optimal under normal distributions. The AV methods are slightly better than the J methods but not as good as the Pearson method. Note that the lengths of the two AV intervals are the same, and that the standard deviations of the confidence interval lengths for the AV, J and Pearson methods are always 0. When the sample size is relatively small, our JEL method always produces better coverage probabilities and shorter confidence intervals than the J method, and performs better than the AV method when ρ is relatively large. All of the JEL, J and AV methods exhibit slight under-coverage. However, the adjusted JEL method corrects the under-coverage effectively while keeping shorter intervals.

Method
CovProb Length CovProb Length CovProb Length CovProb Length
(left to right: 90% and 95% intervals for the smaller sample size, then 90% and 95% intervals for the larger sample size)
JEL .853(.007) .467(.003) .910(.005) .538(.002) .892(.004) .210(.001) .944(.003) .239(.001)
AJEL .887(.006) .505(.003) .936(.005) .585(.002) .897(.004) .212(.001) .948(.003) .242(.001)
J .853(.007) .912(.005) .901(.006) 1.08(.005) .893(.004) .285(.000) .944(.004) .340(.001)
AV .901(.006) .894(.000) .957(.003) 1.07(.000) .897(.004) .283(.000) .948(.003) .337(.000)
JEL .854(.007) .468(.003) .911(.004) .538(.002) .892(.005) .210(.001) .944(.003) .239(.001)
AJEL .887(.006) .505(.003) .937(.004) .584(.002) .897(.005) .213(.001) .947(.003) .242(.001)
J .853(.006) .909(.005) .901(.005) 1.08(.005) .893(.005) .285(.001) .943(.003) .340(.001)
AV .903(.006) .894(.000) .958(.004) 1.07(.000) .897(.006) .283(.000) .948(.003) .337(.000)
.977(.003) 1.25(.000) .994(.002) 1.49(.000) .940(.004) .395(.000) .971(.004) .470(.000)
JEL .847(.007) .532(.003) .906(.005) .611(.003) .890(.006) .186(.001) .942(.004) .209(.001)
AJEL .880(.007) .572(.003) .932(.004) .666(.003) .896(.006) .188(.001) .945(.004) .211(.001)
J .847(.008) .717(.005) .892(.005) .854(.007) .892(.005) .221(.001) .940(.004) .263(.001)
AV .908(.005) .699(.000) .948(.004) .833(.000) .902(.005) .221(.000) .951(.004) .264(.000)
JEL .848(.006) .531(.002) .906(.005) .611(.003) .892(.005) .186(.001) .943(.004) .208(.001)
AJEL .881(.006) .572(.003) .933(.005) .665(.003) .897(.005) .187(.001) .947(.004) .211(.001)
J .846(.006) .712(.004) .894(.006) .849(.006) .894(.006) .220(.001) .941(.004) .262(.001)
AV .910(.006) .699(.000) .950(.004) .833(.000) .903(.005) .221(.000) .951(.004) .264(.000)
.956(.003) .929(.000) .976(.003) 1.11(.000) .936(.003) .294(.000) .969(.003) .350(.000)
JEL .841(.006) .159(.001) .895(.005) .214(.003) .883(.004) .048(.000) .938(.004) .062(.001)
AJEL .871(.006) .173(.002) .917(.005) .280(.003) .888(.004) .049(.000) .941(.004) .064(.001)
J .827(.005) .207(.002) .864(.006) .245(.003) .882(.005) .059(.000) .929(.004) .071(.000)
AV .907(.004) .190(.000) .932(.003) .226(.000) .905(.006) .060(.000) .945(.003) .071(.000)
JEL .842(.006) .159(.002) .894(.005) .215(.003) .883(.005) .048(.000) .938(.004) .062(.000)
AJEL .871(.005) .173(.002) .917(.005) .282(.003) .888(.005) .049(.000) .941(.004) .064(.001)
J .828(.006) .207(.002) .864(.006) .245(.003) .884(.004) .059(.000) .930(.004) .071(.000)
AV .907(.005) .190(.000) .931(.003) .226(.000) .906(.005) .060(.000) .950(.003) .071(.000)
.938(.004) .235(.000) .957(.003) .280(.000) .935(.004) .074(.000) .966(.003) .089(.000)
Table 2: Coverage probabilities (standard deviations) and average lengths (standard deviations) of the Gini correlations' interval estimators from a variety of methods under bivariate t distributions.

Table 2 lists the results under the bivariate t distribution. As expected, the Pearson method performs poorly for heavy-tailed distributions; it suffers from a serious over-coverage problem in all cases. For the AV method, the asymptotic variance (8) is calculated by a large Monte Carlo simulation; in this sense AV is a parametric method, and it yields good coverage probabilities. The two nonparametric methods, JEL and J, both have slight under-coverage problems, especially when ρ is large and the sample size is small, but the JEL method produces better coverage probabilities and shorter confidence intervals than J. When the sample size and ρ are both small, the JEL interval estimators can be as short as half the length of the J and AV interval estimators. Compared with the AV methods, the JEL method always has shorter confidence intervals. Additionally, the adjusted JEL methods alleviate the under-coverage problems.

3 JEL test for the equality of Gini correlations

The two Gini correlations in (1) are generally not equal. One sufficient condition for their equality is that X and Y are exchangeable up to a linear transformation; that is, there exist constants a and b such that (X, a + bY) and (a + bY, X) are equally distributed. In particular, if (X, Y) is elliptically distributed with linear correlation parameter ρ, then X and Y are exchangeable up to a linear transformation; hence γ(X, Y) = γ(Y, X), and both are equal to ρ. More details can be found in [17] and [27]. Let Δ = γ(X, Y) − γ(Y, X). The hypotheses of interest are

H₀ : Δ = 0   versus   Hₐ : Δ ≠ 0.      (13)

The objective of this section is to test the equality of the two Gini correlations via the JEL method.

3.1 JEL test for the equality of the two Gini correlations

For simplicity, we use γ₁ and γ₂ to denote γ(X, Y) and γ(Y, X), respectively, and we are interested in making inference about Δ = γ₁ − γ₂. With the kernels defined as in (11), we define a vector U-statistic type functional as

with kernels

It is easy to see that this functional has expectation zero at the true parameter values.

We do not apply the profile EL method, since the computation of the profile EL can be very demanding even for estimating equations without a U-structure. In our case, since one component of the estimating function does not depend on Δ, the nuisance parameter can be estimated by solving

It is easy to check that the resulting estimator can be computed with complexity O(n log n). We plug it in and conduct the JEL method for Δ. More specifically, let

(14)

and

Then the jackknife pseudo samples are

(15)

and the jackknife empirical likelihood ratio at Δ is

By the standard Lagrange multiplier method, we obtain the log empirical likelihood ratio as

where the Lagrange multiplier λ satisfies

(16)

With the notation above, we have the following result.

Theorem 3.1

Under finite second moment conditions on the kernels, the jackknife empirical log-likelihood ratio evaluated at the true value of Δ converges in distribution to a chi-square distribution with one degree of freedom.

The proof of Theorem 3.1, which must handle the extra variation introduced by the plug-in estimator of the nuisance parameter, is given in the Appendix.

Remark 3.1

Li, Xu and Zhao ([7]) established the Wilks' theorem for a general U-type profile empirical likelihood ratio under the strong condition that the kernel is uniformly bounded in both the variables and the parameters. Here we only assume the existence of second moments of the kernel functions.

Remark 3.2

The profile empirical likelihood ratio is usually computed as the ratio of the EL at the hypothesized value of the parameter to the EL at the maximum empirical likelihood estimate. In our case, because the nuisance estimating equation does not involve Δ and the estimating function is linear in the parameters, our JEL ratio does not require the maximum empirical likelihood estimate, which makes the computation easy.

We can obtain a jackknife empirical likelihood confidence interval for Δ as

where the observed log-likelihood ratio is evaluated at each candidate value of Δ. If 0 is not contained in this confidence interval, we reject H₀ at the corresponding significance level. For the hypothesis test (13), under the null hypothesis, the p-value can be calculated by

where the reference distribution is chi-square with one degree of freedom. For instance, under elliptical distributions we are able to compute p-values for the test in this way. On the other hand, when the true parameter Δ is nonzero, Theorem 3.1 holds under the alternative rather than under H₀, and the power of the test can then be computed accordingly.
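As an illustration of the test, the sketch below computes a p-value for H₀: γ(X, Y) = γ(Y, X) by applying one-sample JEL machinery to jackknife pseudo-values of the estimated difference; this is a simplified stand-in for the plug-in estimating-equation formulation derived above, with illustrative names.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, rankdata

def gini_pair(x, y):
    """Return (gamma_hat(X, Y), gamma_hat(Y, X)) via centered ranks."""
    n = len(x)
    rx = 2.0 * rankdata(x) - (n + 1)
    ry = 2.0 * rankdata(y) - (n + 1)
    return np.sum(x * ry) / np.sum(x * rx), np.sum(y * rx) / np.sum(y * ry)

def el_log_ratio(v):
    """-2 log empirical likelihood ratio for 'the mean of v is zero' (Owen's EL)."""
    if v.min() >= 0 or v.max() <= 0:
        return np.inf                      # zero outside the convex hull
    score = lambda lam: np.sum(v / (1.0 + lam * v))
    lam = brentq(score, -1.0 / v.max() + 1e-8, -1.0 / v.min() - 1e-8)
    return 2.0 * np.sum(np.log1p(lam * v))

def equality_test_pvalue(x, y):
    """JEL-style p-value for H0: gamma(X, Y) = gamma(Y, X) (simplified sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    d_full = np.subtract(*gini_pair(x, y))
    d_loo = np.array([np.subtract(*gini_pair(np.delete(x, i), np.delete(y, i)))
                      for i in range(n)])
    v = n * d_full - (n - 1) * d_loo       # pseudo-values of the estimated difference
    return 1.0 - chi2.cdf(el_log_ratio(v), df=1)
```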

Next, we consider simulations of two cases: one in which the null hypothesis holds and one in which it does not.

3.2 Empirical performance

In the simulation, 3000 samples of two different sample sizes are drawn from bivariate normal and normal-lognormal distributions. Under the bivariate normal distribution, the null hypothesis Δ = 0 is true, and p-values are provided in Table 3 along with averages and standard deviations of the coverage probabilities. Under the normal-lognormal distribution, the two Gini correlations are not equal for nonzero ρ, so powers of the test are presented in Table 4.

Method
CovProb P-Value CovProb P-Value CovProb P-Value CovProb P-Value
(left to right: 90% and 95% intervals for the smaller sample size, then 90% and 95% intervals for the larger sample size)
JELdelta .918(.005) .515(.004) .962(.004) .515(.005) .905(.005) .505(.005) .952(.003) .505(.006)
AJELdelta .965(.004) .565(.004) .991(.002) .565(.005) .909(.005) .509(.005) .955(.003) .509(.005)
deltaJ .964(.004) .747(.005) .986(.002) .745(.005) .914(.005) .723(.005) .958(.003) .724(.006)
JELdelta .941(.004) .537(.005) .975(.003) .537(.004) .912(.005) .509(.004) .958(.004) .511(.005)
AJELdelta .979(.002) .586(.005) .996(.001) .586(.004) .916(.005) .514(.004) .961(.004) .515(.005)
deltaJ .978(.003) .750(.006) .993(.002) .749(.004) .925(.005) .726(.004) .967(.004) .727(.006)
JELdelta .971(.004) .582(.005) .991(.002) .580(.004) .962(.004) .564(.004) .987(.002) .567(.005)
AJELdelta .994(.002) .629(.004) .999(.000) .627(.004) .964(.004) .568(.004) .989(.002) .571(.005)
deltaJ .993(.002) .756(.004) .999(.001) .754(.005) .971(.003) .743(.004) .992(.002) .742(.005)
Table 3: Coverage probabilities (standard deviations) of interval estimators of Δ and p-values (standard deviations) of the test for H₀ under bivariate normal distributions.
Method
CovProb Power CovProb Power CovProb Power CovProb Power
(left to right: 90% and 95% intervals for the smaller sample size, then 90% and 95% intervals for the larger sample size)
JELdelta .857(.007) .256(.007) .921(.005) .196(.008) .873(.006) .192(.008) .930(.005) .154(.006)
AJELdelta .896(.006) .226(.007) .950(.004) .155(.006) .879(.006) .189(.008) .933(.005) .151(.006)
deltaJ .938(.004) .062(.004) .970(.003) .031(.003) .893(.006) .110(.006) .945(.004) .056(.004)
JELdelta .858(.007) .213(.006) .917(.005) .161(.007) .876(.005) .235(.008) .932(.005) .171(.006)
AJELdelta .895(.006) .184(.005) .945(.004) .127(.006) .881(.005) .230(.008) .936(.005) .166(.006)
deltaJ .947(.004) .052(.004) .978(.003) .021(.002) .897(.005) .164(.007) .949(.004) .094(.004)
JELdelta .865(.005) .144(.007) .916(.004) .093(.005) .884(.005) .370(.009) .939(.004) .268(.010)
AJELdelta .898(.005) .114(.006) .941(.004) .066(.004) .889(.005) .362(.008) .943(.004) .261(.010)
deltaJ .976(.003) .031(.004) .993(.002) .008(.001) .907(.005) .293(.008) .955(.004) .187(.008)
Table 4: Coverage probabilities (standard deviations) of interval estimators of Δ and powers (standard deviations) of the test for H₀ under the normal-lognormal distributions.

Under bivariate normal distributions, both the JEL and deltaJ methods have over-coverage problems, especially for large ρ with small sample sizes. The JEL method performs better than the deltaJ method for all sample sizes and all values of ρ. The p-values in Table 3 are all greater than 0.5, indicating that we cannot reject H₀ under a bivariate normal distribution. This implies there is no evidence against exchangeability up to a linear transformation, which is the correct decision under bivariate normal distributions.

From Table 4, under normal-lognormal distributions we observe that the powers of the test are not high for any of the listed methods. This can be explained by the fact that the true values of Δ are very close to 0, which makes it difficult for the procedures to reject H₀. However, the JEL method is more efficient, with higher power than deltaJ for all sample sizes. Among the approaches in Table 4, the deltaJ method produces good coverage probabilities when the sample size is large but has serious over-coverage problems when the sample size is small and ρ is large. This is due to a characteristic of the lognormal distribution: the bias and variance of the sample correlation may be quite significant, especially when the correlation coefficient is not close to zero ([24]). On the other hand, the JEL method has under-coverage problems when the sample size is small, but these problems are corrected effectively by the adjusted JEL method.

4 JEL for independent data

Let two independent samples, of sizes m and n, be drawn from two distributions, and let the two Gini correlations between X and Y be defined for each distribution. With the differences of the corresponding Gini correlations as the parameters of interest, the hypotheses of interest are

(17)

Our aim in this section is to derive a JEL method to test the above hypotheses.
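Before the derivation, a simplified computational sketch of such a two-sample comparison is given below; it applies pooled leave-one-out pseudo-values to the plug-in difference of the two estimated Gini correlations, a stand-in for (not identical to) the two-sample functional defined next, with illustrative names.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, rankdata

def gini_corr_xy(x, y):
    """Sample gamma(X, Y) via centered ranks."""
    n = len(x)
    return (np.sum(x * (2.0 * rankdata(y) - (n + 1)))
            / np.sum(x * (2.0 * rankdata(x) - (n + 1))))

def el_log_ratio(v):
    """-2 log empirical likelihood ratio for 'the mean of v is zero'."""
    if v.min() >= 0 or v.max() <= 0:
        return np.inf
    score = lambda lam: np.sum(v / (1.0 + lam * v))
    lam = brentq(score, -1.0 / v.max() + 1e-8, -1.0 / v.min() - 1e-8)
    return 2.0 * np.sum(np.log1p(lam * v))

def two_sample_pvalue(x1, y1, x2, y2):
    """JEL-style p-value for H0: the Gini correlation gamma(X, Y) is the same
    in the two independent populations (simplified sketch)."""
    x1, y1, x2, y2 = (np.asarray(a, float) for a in (x1, y1, x2, y2))
    m, n = len(x1), len(x2)
    N = m + n
    diff = gini_corr_xy(x1, y1) - gini_corr_xy(x2, y2)
    loo = []
    for k in range(N):                     # delete one observation from the pooled data
        if k < m:
            d = gini_corr_xy(np.delete(x1, k), np.delete(y1, k)) - gini_corr_xy(x2, y2)
        else:
            j = k - m
            d = gini_corr_xy(x1, y1) - gini_corr_xy(np.delete(x2, j), np.delete(y2, j))
        loo.append(d)
    v = N * diff - (N - 1) * np.array(loo)  # pooled jackknife pseudo-values
    return 1.0 - chi2.cdf(el_log_ratio(v), df=1)
```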

4.1 JEL for Gini correlation differences for independent data

Due to the independence of the two samples, we have

and

This motivates us to define a two-sample U-statistic type functional as

(18)

with