Fast Computing for Distance Covariance

Xiaoming Huo
School of Industrial and Systems Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332
and National Science Foundation,
Arlington, VA 22203
email: xiaoming@isye.gatech.edu
& Gábor J. Székely
National Science Foundation
Arlington, VA 22203
and Alfréd Rényi Institute of Mathematics
email: gszekely@nsf.gov
July 21, 2019
Abstract

Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O(n²), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be implemented by an O(n log n) algorithm, and this is comparable to other computationally efficient algorithms. The new formula we derive for an unbiased estimator of the squared distance covariance turns out to be a U-statistic. This fact implies some nice asymptotic properties that were derived before via more complex methods. We apply the fast computing algorithm to some synthetic data. Our work will make distance correlation applicable to a much wider class of applications.

Author’s Footnote:

Dr. Xiaoming Huo is a professor in the School of Industrial and Systems Engineering at the Georgia Institute of Technology. He has been serving as a rotator at the National Science Foundation since August 2013. Mailing address: 765 Ferst Dr, Atlanta, GA 30332 (email: xiaoming@isye.gatech.edu).

Dr. Gábor J. Székely is a program officer at the National Science Foundation. Mailing address: 4201 Wilson Blvd, Arlington, VA 22203 (email: gszekely@nsf.gov).

Keywords: distance correlation, fast algorithm, statistical dependence

1 Introduction

Since its introduction (?), distance correlation has had many applications in, e.g., life science (?) and variable selection (?), and has been analyzed (??) and extended (??) in various respects. If distance correlation were implemented straightforwardly from its definition, its computational complexity could be as high as a constant times n² for a sample of size n. This fact has been cited numerous times in the literature as a disadvantage of adopting the distance correlation. In this paper, we demonstrate that an O(n log n) algorithm for a version of the distance correlation exists.

The main idea behind the proposed algorithm is rooted in the AVL tree structure (?). The same idea has been utilized to develop a fast algorithm for computing Kendall's rank correlation coefficient (?) (?). We extend it to make it suitable for our purpose. The derivation of the fast algorithm also involves a significant reformulation of the original version of the distance correlation. Details are presented in this paper.

In simulations, we not only demonstrate the effectiveness of the fast algorithm but also show that the advantage of using distance correlation (in comparison with other existing methods) becomes more evident as the sample size increases. These experiments become feasible due to the availability of the proposed fast algorithm. In one experiment (see details in Section 5.3), we increased the sample size many-fold relative to a previously published simulation study.

The rest of this paper is organized as follows. Section 2 reviews the distance covariance/correlation and its relevant properties. In Section 3, we consider a reformulation of the distance covariance, such that the new estimator is both unbiased and a U-statistic. In Section 4, an algorithm with average complexity O(n log n) is presented. Extensive simulations are presented in Section 5 to demonstrate the additional capability we obtained due to the proposed fast algorithm. Finally, some concluding remarks are made in Section 6. A detailed description of the algorithm is relegated to the Appendix, along with most of the technical proofs.

2 A Review of Distance Covariance

Distance covariance and distance correlation were introduced in 2005 by one of the co-authors of this paper, G. J. Székely, in several lectures to address a deficiency of Pearson's correlation, namely that Pearson's correlation can be zero for dependent variables. In the following, we start with a definition of the distance covariance.

Definition 2.1.

The population distance covariance between random vectors X and Y with finite first moments is the nonnegative number V(X, Y) defined by a weighted L2 norm measuring the distance between the joint characteristic function (c.f.) of X and Y and the product of the marginal c.f.'s of X and Y. If X and Y take values in R^p and R^q, respectively, then V²(X, Y) is

V²(X, Y) = (1/(c_p c_q)) ∫_{R^{p+q}} |f_{X,Y}(t, s) − f_X(t) f_Y(s)|² / ( |t|_p^{1+p} |s|_q^{1+q} ) dt ds,

where c_d = π^{(1+d)/2} / Γ((1+d)/2). The integral exists provided that X and Y have finite first moments.

This immediately shows that the distance covariance is zero if and only if the underlying random variables are independent. The beauty of this kernel is that the corresponding sample statistic has the following surprisingly simple form. Denote the pairwise distances of the X observations by a_{kl} = |X_k − X_l| and the pairwise distances of the Y observations by b_{kl} = |Y_k − Y_l|, for k, l = 1, …, n, and denote the corresponding double centered distance matrices by (A_{kl}) and (B_{kl}), where

(2.1)  A_{kl} = a_{kl} − ā_{k·} − ā_{·l} + ā_{··},
(2.2)  B_{kl} = b_{kl} − b̄_{k·} − b̄_{·l} + b̄_{··},

where ā_{k·} = (1/n) Σ_{l=1}^n a_{kl}, ā_{·l} = (1/n) Σ_{k=1}^n a_{kl}, and ā_{··} = (1/n²) Σ_{k,l=1}^n a_{kl} are the row means, column means, and the grand mean of (a_{kl}), and the quantities with b are defined analogously.

It is clear that the row sums and column sums of these double centered matrices are 0. The squared sample distance covariance is given by the following simple formula:

V_n²(X, Y) = (1/n²) Σ_{k,l=1}^n A_{kl} B_{kl}.

The corresponding squared sample distance variance is

V_n²(X) = V_n²(X, X) = (1/n²) Σ_{k,l=1}^n A_{kl}²,

and we can define the sample distance correlation as the standardized sample distance covariance:

(2.3)  R_n²(X, Y) = V_n²(X, Y) / sqrt( V_n²(X) V_n²(Y) )  whenever V_n²(X) V_n²(Y) > 0, and R_n²(X, Y) = 0 otherwise.

For more details see ? and a discussion paper (?). It is clear that R_n is invariant under rigid motions and changes of scale. For recent applications of distance correlation, see, e.g., ? and ?.
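For concreteness, the following is a minimal MATLAB sketch (our illustration, not the implementation benchmarked in Section 5.1) of the direct computation of the sample quantities (2.1)-(2.3) for univariate samples; it takes O(n²) time and O(n²) memory.

```matlab
function R2 = dcor2_direct(x, y)
    % Direct O(n^2) computation of the squared sample distance correlation.
    n = numel(x);
    x = x(:);  y = y(:);
    a = abs(x*ones(1,n) - ones(n,1)*x.');   % pairwise distances a_{kl} = |x_k - x_l|
    b = abs(y*ones(1,n) - ones(n,1)*y.');   % pairwise distances b_{kl} = |y_k - y_l|
    A = a - mean(a,2)*ones(1,n) - ones(n,1)*mean(a,1) + mean(a(:));  % double centering (2.1)
    B = b - mean(b,2)*ones(1,n) - ones(n,1)*mean(b,1) + mean(b(:));  % double centering (2.2)
    V2xy = sum(sum(A.*B))/n^2;              % squared sample distance covariance
    V2x  = sum(sum(A.*A))/n^2;              % squared sample distance variance of x
    V2y  = sum(sum(B.*B))/n^2;
    if V2x*V2y > 0
        R2 = V2xy/sqrt(V2x*V2y);            % squared sample distance correlation (2.3)
    else
        R2 = 0;
    end
end
```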

The population version of distance covariance and distance correlation can be defined without characteristic functions; see ?. This definition is as follows. Let X and Y be random variables with finite expectations, and let (X′, Y′) denote an independent and identically distributed (i.i.d.) copy of (X, Y). The relevant random distances are |X − X′| and |Y − Y′|.

Introduce the real-valued function

a(x) = E|x − X| = ∫ |x − u| dF_X(u),

where F_X is the cumulative distribution function (cdf) of X, and

a(X) = E( |X − X′| | X ),

which is a real-valued random variable. For simplicity we write a(·) for this function and a(X) for the corresponding random variable; b(·) and b(Y) are defined analogously from Y.

Next we define the counterpart of the centered distance matrices. The centered distance function is

A(x, x′) = |x − x′| − a(x) − a(x′) + E|X − X′|.

For the random variables we have

A = A(X, X′),

where X′ is the i.i.d. copy of X introduced above. Similarly define the centered distance function B(y, y′) and the random variable B = B(Y, Y′). Now, for (X, Y) and (X′, Y′) i.i.d. such that |X − X′| and |Y − Y′| have finite expectations, the population distance covariance is defined by

(2.4)  V²(X, Y) = E[ A(X, X′) B(Y, Y′) ].

We have that V²(X, Y) is always nonnegative, and it equals zero if and only if X and Y are independent.

It is clear by inspection that, without further effort, the implementation of the sample distance covariance and the corresponding sample distance correlation requires O(n²) steps. In this paper we show that for real-valued random variables X and Y, we do not need more than O(n log n) steps.

3 The unbiased version of the squared sample distance covariance: reformulation and relation to U-statistics

In this section, a reformulation is given in Section 3.1. We then show in Section 3.2 that the newly formed unbiased estimator is a U-statistic.

3.1 Reformulation

We will work with the unbiased version of the squared sample distance covariance, which was published in ?. The definition is as follows.

Definition 3.1 (U-centered matrix).

Let A = (a_{ij}) be a symmetric, real-valued n × n matrix with zero diagonal, n > 2. Define the U-centered matrix Ã as follows: the (i, j)-th entry of Ã is

(3.1)  Ã_{ij} = a_{ij} − (1/(n − 2)) Σ_{l=1}^n a_{il} − (1/(n − 2)) Σ_{k=1}^n a_{kj} + (1/((n − 1)(n − 2))) Σ_{k,l=1}^n a_{kl}  if i ≠ j,  and  Ã_{ii} = 0.

Here the name "U-centered" is used because, as shown below, the corresponding inner product (which will be specified in (3.2)) defines an unbiased estimator of the squared distance covariance.

Proposition 3.2.

Let (x, y) = {(x_i, y_i) : i = 1, …, n} denote a sample of observations from the joint distribution of random vectors X and Y. Let A = (a_{ij}) be the Euclidean distance matrix of the sample x_1, …, x_n from the distribution of X, and let B = (b_{ij}) be the Euclidean distance matrix of the sample y_1, …, y_n from the distribution of Y. Then if n > 3, for the U-centered matrices Ã and B̃, the quantity

(3.2)  (Ã · B̃) = (1/(n(n − 3))) Σ_{i ≠ j} Ã_{ij} B̃_{ij}

is an unbiased estimator of the squared population distance covariance V²(X, Y).

The proof of the above proposition is in the appendix of ?.
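The following MATLAB sketch (ours, for illustration; the function name dcov2_ustat is not from the paper) forms the U-centered matrices of Definition 3.1 and evaluates the inner product (3.2) directly; it still costs O(n²) and serves only as a reference implementation.

```matlab
function Omega = dcov2_ustat(x, y)
    % Unbiased estimator of the squared distance covariance, via (3.1)-(3.2).
    % Requires n >= 4 and univariate x, y.
    n = numel(x);
    x = x(:);  y = y(:);
    a = abs(x*ones(1,n) - ones(n,1)*x.');
    b = abs(y*ones(1,n) - ones(n,1)*y.');
    % U-centering, as in (3.1); the diagonal is set to zero afterwards
    A = a - sum(a,2)*ones(1,n)/(n-2) - ones(n,1)*sum(a,1)/(n-2) + sum(a(:))/((n-1)*(n-2));
    B = b - sum(b,2)*ones(1,n)/(n-2) - ones(n,1)*sum(b,1)/(n-2) + sum(b(:))/((n-1)*(n-2));
    A(1:n+1:end) = 0;
    B(1:n+1:end) = 0;
    Omega = sum(sum(A.*B))/(n*(n-3));       % the inner product (3.2)
end
```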

Let Ω_n denote the inner product defined in (3.2). The following notation will be used. Define the row and column sums and the grand totals as follows:

a_{i·} = Σ_{j=1}^n a_{ij},  a_{·j} = Σ_{i=1}^n a_{ij},  a_{··} = Σ_{i,j=1}^n a_{ij},

and define b_{i·}, b_{·j}, and b_{··} analogously.

We will need the following lemma.

Lemma 3.3.

If Ω_n is the inner product defined in (3.2), then we have

(3.3)  Ω_n = (1/(n(n − 3))) [ Σ_{i ≠ j} a_{ij} b_{ij} − (2/(n − 2)) Σ_{i=1}^n a_{i·} b_{i·} + (a_{··} b_{··}) / ((n − 1)(n − 2)) ].

For the proof see the Appendix. Formula (3.3) will be used to prove that (i) the estimator in (3.2) is a U-statistic, and thus we can apply the relevant limit theorems to study its asymptotic behavior; and (ii) the estimator in (3.2) can be computed in O(n log n) steps.
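As a sanity check on (3.3), the following sketch (ours) evaluates its right-hand side using only the entrywise products, the row sums, and the grand totals of the raw distance matrices; on any data set with n ≥ 4 it agrees with dcov2_ustat above up to rounding error. It is still O(n²) here because the distance matrices are formed explicitly; Section 4 removes this bottleneck.

```matlab
function Omega = dcov2_reformulated(x, y)
    % Evaluates the right-hand side of (3.3).
    n = numel(x);
    x = x(:);  y = y(:);
    a = abs(x*ones(1,n) - ones(n,1)*x.');
    b = abs(y*ones(1,n) - ones(n,1)*y.');
    arow = sum(a,2);   brow = sum(b,2);     % row sums a_{i.} and b_{i.}
    adot = sum(arow);  bdot = sum(brow);    % grand totals a_{..} and b_{..}
    Omega = ( sum(sum(a.*b)) ...            % sum over i ~= j of a_{ij} b_{ij}
            - 2*(arow.'*brow)/(n-2) ...
            + adot*bdot/((n-1)*(n-2)) ) / (n*(n-3));
end
```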

3.2 Validating the Statistic is a U-Statistic

Suppose x = (x_1, …, x_n) is a sample. For a positive integer m ≤ n, let C(n, m) denote the collection of all distinct m-subsets of {1, …, n}. For a set S = {i_1, …, i_m} ∈ C(n, m), we write x_S = (x_{i_1}, …, x_{i_m}). Let h(·) be a symmetric real-valued or complex-valued kernel function of m variables. For each n ≥ m, the associated U-statistic of order m, U_n(x), is equal to the average of the kernel values h(x_S) over all m-subsets of the sample. In other words,

(3.4)  U_n(x_1, …, x_n) = binom(n, m)^{-1} Σ_{S ∈ C(n, m)} h(x_S).

For U-statistics, we can verify the following lemma.

Lemma 3.4.

For i = 1, …, n, we denote

(3.5)  U_{n−1}^{(i)} = U_{n−1}(x_1, …, x_{i−1}, x_{i+1}, …, x_n),

where U_{n−1} is defined as in (3.4) after removing the element x_i from the sample. Then we must have

(3.6)  U_n(x_1, …, x_n) = (1/n) Σ_{i=1}^n U_{n−1}^{(i)}.
Proof of Lemma 3.4.

Multiply both sides of (3.6) by n · binom(n, m). Then each kernel term h(x_S), S ∈ C(n, m), is counted exactly n times on both sides. Hence the equality holds. ∎

In fact, using an elementary counting argument, one can prove that the converse of the above is also true. In other words, this jackknife invariance is a necessary and sufficient condition for being a U-statistic. For a very similar (equivalent) approach see ?.

Lemma 3.5.

If there exists a positive integer m such that, for every n > m, the functions U_n satisfy (3.5) and (3.6), then there must be a kernel function h of m variables such that U_n can be written in the form (3.4); i.e., U_n is a U-statistic.

A proof of the above can be found in the Appendix.

The two lemmas above show that the recursive relation (3.6) is a necessary and sufficient condition for a U-statistic. For later use, we explicitly restate the result below.

Theorem 3.6.

Let U_n be a statistic of a sample (x_1, …, x_n). For i = 1, …, n, let U_{n−1}^{(i)} be the same statistic evaluated on the reduced sample (x_1, …, x_{i−1}, x_{i+1}, …, x_n); i.e., U_{n−1}^{(i)} is the statistic after removing the observation x_i. The necessary and sufficient condition for U_n to be a U-statistic of order m is that

(3.7)  n U_n(x_1, …, x_n) = Σ_{i=1}^n U_{n−1}^{(i)}

holds for all n > m.

The above can be extended to a two-sample problem, in which a sample is {(x_i, y_i)} for i = 1, …, n. By replacing x_i with the pair (x_i, y_i), all the previous arguments still hold.

Proof of Theorem 3.6.

Combining Lemma 3.4 and Lemma 3.5 and simplifying (3.6), we obtain (3.7). ∎
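To illustrate (3.7) numerically, here is a tiny MATLAB check (ours): the unbiased sample variance is a U-statistic of order m = 2 with kernel h(x_i, x_j) = (x_i − x_j)²/2, and the two printed quantities coincide.

```matlab
% Numerical check of (3.7) with the unbiased sample variance.
rng(1);
n  = 20;
x  = randn(n, 1);
Un = var(x);                                 % U_n on the full sample
Ui = zeros(n, 1);
for i = 1:n
    Ui(i) = var(x([1:i-1, i+1:n]));          % U_{n-1}^{(i)}: observation i removed
end
fprintf('n*U_n = %.12f, sum_i U_{n-1}^(i) = %.12f\n', n*Un, sum(Ui));
```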

Let Ω_n denote the inner product that is defined in (3.2). Note that Ω_n is based on the entire sample, i.e., on (x_i, y_i), i = 1, …, n. For i = 1, …, n, let Ω_{n−1}^{(i)} denote the corresponding statistic after knocking out the pair (x_i, y_i) from the entire sample.

The following lemma establishes the counterpart of (3.3) for Ω_{n−1}^{(i)}, where i = 1, …, n.

Lemma 3.7.

For i = 1, …, n, let a_{k·}^{(i)}, b_{k·}^{(i)}, a_{··}^{(i)}, and b_{··}^{(i)} denote the corresponding row sums and grand totals after the entry (x_i, y_i) is removed from the sample. If Ω_{n−1}^{(i)} is the inner product that is defined in (3.2) after knocking off the i-th entry (x_i, y_i), we have

(3.8)  Ω_{n−1}^{(i)} = (1/((n − 1)(n − 4))) [ Σ_{k ≠ l; k, l ≠ i} a_{kl} b_{kl} − (2/(n − 3)) Σ_{k ≠ i} a_{k·}^{(i)} b_{k·}^{(i)} + (a_{··}^{(i)} b_{··}^{(i)}) / ((n − 2)(n − 3)) ].

We do not provide a proof for Lemma 3.7, because it would be identical to the proof of Lemma 3.3.

Theorem 3.8.

The estimator Ω_n, i.e., the inner product that is defined in (3.2), is a U-statistic. The kernel function of the corresponding U-statistic is the inner product defined in (3.2) with n = 4.

See a proof in the Appendix.

Now we know that (3.2) is a U-statistic, and it is easy to see that (3.2) is in fact a U-statistic with a degenerate kernel under the null hypothesis of independence of X and Y. Thus we can see from Corollary 4.4.2 of ? that if the second moments of X and Y are finite, then under the null hypothesis the limit distribution of n Ω_n has the form Σ_j λ_j (Z_j² − 1), where the λ_j are nonnegative constants and Z_1, Z_2, … are i.i.d. standard normal random variables. Under the alternative hypothesis we have that Ω_n converges to V²(X, Y) > 0, thus we can easily construct a consistent test of independence. For a technically much more difficult approach, see ?, where a similar result was derived for a related V-statistic using deep results on complex-valued Gaussian processes.
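The limiting null distribution above involves unknown constants λ_j, so one convenient way to calibrate a test in practice is by permutation; the sketch below is our illustration (not a procedure prescribed in this paper) and reuses the dcov2_ustat function shown in Section 3.1.

```matlab
function pval = dcov_perm_test(x, y, nperm)
    % Permutation test of independence based on the unbiased estimator (3.2).
    % Permuting y breaks any dependence while preserving both marginals.
    stat = dcov2_ustat(x, y);
    cnt  = 0;
    for r = 1:nperm
        if dcov2_ustat(x, y(randperm(numel(y)))) >= stat
            cnt = cnt + 1;
        end
    end
    pval = (cnt + 1)/(nperm + 1);            % add-one correction for validity
end
```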

4 Fast Algorithm

We now argue that when X and Y are univariate, there is an O(n log n) algorithm to implement (3.3). We start with several intermediate results, which are presented as lemmas below.

Lemma 4.1.

Assume, for ease of notation, that the sample has been sorted so that x_1 ≤ x_2 ≤ … ≤ x_n. Denote the partial sums

s_i = Σ_{j=1}^i x_j,  i = 1, …, n.

For i = 1, …, n, we also denote the row sums of the distance matrix by a_{i·} = Σ_{j=1}^n |x_i − x_j|. We have

(4.1)  a_{i·} = (2i − n) x_i + s_n − 2 s_i.

A proof is relegated to the appendix.
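To make (4.1) concrete, the following MATLAB fragment (ours) computes all row sums a_{i·} from one sort and one pass of cumulative sums; for small n it can be checked against the direct O(n²) sum.

```matlab
% Row sums a_{i.} = sum_j |x_i - x_j| in O(n log n), via formula (4.1).
n = numel(x);                  % x: a numeric vector
[xs, ord] = sort(x(:));        % order statistics of the sample
s = cumsum(xs);                % partial sums s_i = xs_1 + ... + xs_i
arow_sorted = (2*(1:n)' - n).*xs + s(n) - 2*s;
arow = zeros(n, 1);
arow(ord) = arow_sorted;       % map back to the original ordering
% check for small n: max(abs(arow - sum(abs(x(:)*ones(1,n) - ones(n,1)*x(:).'), 2)))
```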

Due to symmetry, the following is the counterpart fact for the y sample. We state it without a proof.

Lemma 4.2.

Assume, for ease of notation, that the y sample has been sorted so that y_1 ≤ y_2 ≤ … ≤ y_n. Denote the partial sums

t_i = Σ_{j=1}^i y_j,  i = 1, …, n.

For i = 1, …, n, we denote b_{i·} = Σ_{j=1}^n |y_i − y_j|. We have

(4.2)  b_{i·} = (2i − n) y_i + t_n − 2 t_i.

Using formulas (4.1) and (4.2), the following two equations can be easily established. We state them without a proof.

Corollary 4.3.

We have

(4.3)  a_{··} = Σ_{i=1}^n a_{i·} = 2 Σ_{i=1}^n (2i − n − 1) x_i  (with the x sample sorted as in Lemma 4.1),

and

(4.4)  b_{··} = Σ_{i=1}^n b_{i·} = 2 Σ_{i=1}^n (2i − n − 1) y_i  (with the y sample sorted as in Lemma 4.2).

The following lemma will be used.

Lemma 4.4.

We define a sign function: for z ∈ R,

sgn(z) = 1 if z > 0,  sgn(z) = 0 if z = 0,  and  sgn(z) = −1 if z < 0.

For any sequence c = (c_1, …, c_n), for i = 1, …, n, we define

g_i(c) = Σ_{j=1}^n c_j · sgn(x_i − x_j) · sgn(y_i − y_j).

The following is true:

(4.5)  Σ_{i ≠ j} a_{ij} b_{ij} = Σ_{i ≠ j} |x_i − x_j| |y_i − y_j| = Σ_{i=1}^n [ x_i y_i g_i(1) − x_i g_i(y) − y_i g_i(x) + g_i(x ∘ y) ],

where 1, x, y, and x ∘ y denote the sequences (1, …, 1), (x_1, …, x_n), (y_1, …, y_n), and (x_1 y_1, …, x_n y_n), respectively.
Proof of Lemma 4.4.

We have

Σ_{i ≠ j} |x_i − x_j| |y_i − y_j| = Σ_{i ≠ j} (x_i − x_j)(y_i − y_j) · sgn(x_i − x_j) · sgn(y_i − y_j)
= Σ_{i=1}^n Σ_{j=1}^n ( x_i y_i − x_i y_j − x_j y_i + x_j y_j ) · sgn(x_i − x_j) · sgn(y_i − y_j),

where extending the sums to include j = i is harmless because sgn(0) = 0. Per the definition of g_i(·), one can verify that the above equates to (4.5). ∎

Lemma 4.5.

For any sequence c = (c_1, …, c_n), there is an O(n log n) algorithm to compute g_i(c) for all i (i = 1, …, n), where g_i(c) = Σ_{j=1}^n c_j · sgn(x_i − x_j) · sgn(y_i − y_j) is as defined in Lemma 4.4.

Again, we relegate the proof to the Appendix. The main idea of the proposed algorithm is a modification as well as an extension of the idea that was used in ? and ?, which developed a fast algorithm for computing Kendall's rank correlation coefficient. The principle of the AVL tree structure (?) was adopted. Although they are similar in spirit, the algorithmic details are different. We now present the main result in the following theorem.
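The following MATLAB sketch (ours; the paper's own dyadic-updating routine is Algorithm 3 in the Appendix) shows the kind of O(n log n) primitive that Lemma 4.5 relies on: for a weight sequence c it returns, for every i, the total weight of the points lying strictly below and to the left of (x_i, y_i). It sweeps the points in increasing order of x and maintains a binary indexed (Fenwick) tree over the ranks of y, assuming no ties; the four sign combinations in g_i(c) can all be reduced to this primitive together with ordinary prefix sums.

```matlab
function S = lower_left_sums(x, y, c)
    % S(i) = sum over j of c(j) * I(x(j) < x(i) and y(j) < y(i)),
    % assuming no ties in x or in y; overall cost O(n log n).
    n = numel(x);
    lowbit = @(k) bitand(k, bitxor(k, k - 1));   % lowest set bit of k >= 1
    [~, ord] = sort(x(:));                       % sweep in increasing order of x
    [~, tmp] = sort(y(:));
    yrank = zeros(n, 1);  yrank(tmp) = (1:n)';   % ranks of the y values
    tree  = zeros(n, 1);                         % Fenwick tree over y-ranks
    S     = zeros(n, 1);
    for t = 1:n
        i = ord(t);
        k = yrank(i) - 1;                        % query: weights already inserted
        while k > 0                              %   with strictly smaller y-rank
            S(i) = S(i) + tree(k);
            k = k - lowbit(k);
        end
        k = yrank(i);                            % update: insert weight c(i)
        while k <= n
            tree(k) = tree(k) + c(i);
            k = k + lowbit(k);
        end
    end
end
```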

Theorem 4.6.

The unbiased estimator of the squared population distance covariance (that was defined in (3.2)) can be computed by an O(n log n) algorithm.

Proof of Theorem 4.6.

In Lemma 3.3, the unbiased statistic has been rewritten as in (3.3). For the first term on the right-hand side of (3.3), Σ_{i ≠ j} a_{ij} b_{ij}, Lemmas 4.4 and 4.5 provide an O(n log n) algorithm to compute it.

For the second term on the right-hand side of (3.3), note that the quantities s_i and t_i that were defined in Lemmas 4.1 and 4.2, respectively, are partial sums, which can be computed for all i's with O(n log n) algorithms. The log n factor appears because one may need to sort the x_i's or the y_i's before computing the partial sums. Then, by (4.1) and (4.2), all a_{i·} and b_{i·} can be computed with O(n) additional operations. Consequently, the second term on the right-hand side of (3.3) can be computed by an O(n log n) algorithm.

For the third term on the right-hand side of (3.3), using (4.3) and (4.4) in Corollary 4.3, we can easily see that it can be computed via an O(n log n) algorithm. From all the above, the theorem is established. ∎

For readers' convenience, we present a detailed algorithm description in the Appendix, where Algorithm 2 realizes the idea that is described in the proof of Lemma 4.5; Algorithm 3 is a subroutine that is called in Algorithm 2; and Algorithm 1 is the overall algorithm that computes the distance covariance in O(n log n) time.
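Putting the pieces together, here is a hedged end-to-end sketch (ours; it is not Algorithms 1-3 of the Appendix and it assumes no ties in x or y). The row sums come from (4.1)-(4.2), the grand totals from (4.3)-(4.4), and the cross term Σ_{i≠j} a_{ij} b_{ij} from (4.5), where each g_i(c) is reduced to the lower_left_sums primitive above via the elementary (no-ties) identity g_i(c) = 4 LL_i(c) + T(c) − c_i − 2 B_i(c) − 2 C_i(c); here T(c) is the total weight, and B_i(c), C_i(c) are the total weights of the points with smaller x value (respectively smaller y value) than point i.

```matlab
function Omega = dcov2_fast_sketch(x, y)
    % O(n log n) evaluation of (3.3); calls lower_left_sums defined above.
    n = numel(x);  x = x(:);  y = y(:);
    % Row sums a_{i.} and b_{i.} via (4.1)-(4.2)
    [xs, ox] = sort(x);  sx = cumsum(xs);
    arow = zeros(n,1);   arow(ox) = (2*(1:n)' - n).*xs + sx(n) - 2*sx;
    [ys, oy] = sort(y);  sy = cumsum(ys);
    brow = zeros(n,1);   brow(oy) = (2*(1:n)' - n).*ys + sy(n) - 2*sy;
    adot = sum(arow);    bdot = sum(brow);          % grand totals, cf. (4.3)-(4.4)
    % Cross term D = sum_{i ~= j} |x_i - x_j||y_i - y_j|, via (4.5)
    D = 0;
    weights = {ones(n,1), y, x, x.*y};              % the sequences c in g_i(c)
    front   = {x.*y, -x, -y, ones(n,1)};            % the matching factors in (4.5)
    for k = 1:4
        c = weights{k};
        B = zeros(n,1);  B(ox) = cumsum([0; c(ox(1:n-1))]);  % weight of points with smaller x
        C = zeros(n,1);  C(oy) = cumsum([0; c(oy(1:n-1))]);  % weight of points with smaller y
        g = 4*lower_left_sums(x, y, c) + sum(c) - c - 2*B - 2*C;
        D = D + sum(front{k}.*g);
    end
    Omega = ( D - 2*(arow.'*brow)/(n-2) + adot*bdot/((n-1)*(n-2)) ) / (n*(n-3));
end
```

On small simulated data sets, dcov2_fast_sketch agrees with dcov2_ustat of Section 3.1 up to rounding error, which is a convenient way to test any implementation.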

5 Numerical Experiments

In Section 5.1, we describe a MATLAB and C based implementation of the newly proposed fast algorithm. This fast algorithm enables us to run simulations with sample sizes that were impossible to experiment with before its appearance. We report some numerical experiments in Section 5.2. Distance correlation has been found helpful in feature screening. In Section 5.3, we redo experiments in this regard with a substantially increased sample size. It is observed that the advantage of using the distance correlation is more evident when the sample size becomes larger.

5.1 Matlab Implementation

The fast algorithm was implemented in MATLAB, with a key step (of dyadic updating) being implemented in C. It was then compared against the direct (i.e., slow) implementation. Table 1 presents the average running time of the two implementations in MATLAB, with repeated runs at each sample size. The sample size goes from 32 (= 2^5) to 2048 (= 2^11). In all these cases, the two methods ended with identical solutions; this validates our fast algorithm. Note that a comparison within MATLAB is not entirely favorable to our fast algorithm: the direct method calls built-in MATLAB functions, which achieve the speed of a low-level language implementation, while most of the fast method is implemented in interpreted code. In theory, the fast algorithm will compare even more favorably if both methods are implemented in a low-level language, such as C or C++.

Sample Size Direct method Fast method
32 0.0006 (0.0001) 0.0014 (0.0001)
64 0.0008 (0.0001) 0.0024 (0.0002)
128 0.0019 (0.0004) 0.0053 (0.0006)
256 0.0083 (0.0010) 0.0120 (0.0011)
512 0.0308 (0.0021) 0.0272 (0.0018)
1024 0.1223 (0.0051) 0.0647 (0.0037)
2048 0.4675 (0.0172) 0.1478 (0.0045)
Table 1: Running times (in seconds) for the direct method and the fast method for computing the distance correlations. The values in the parentheses are sample standard errors. At each sample size, repetitions were run.
Figure 1: A comparison of running time between the direct method and the fast method for the computation of the distance correlations.

Fig. 1 provides a visual comparison of the two methods. All the experiments that are reported in this paper are run on a laptop (Lenovo T520, Intel Core i7-2640M CPU @ 2.80GHz) with 975 MB of memory allowed in MATLAB Version 8.2.0.89 (R2013b).

When the sample size is large, the direct method generates an "out-of-memory" message. Recall that the direct method computes all pairwise distances, hence it requires O(n²) memory. The fast method only requires O(n) memory. For illustration purposes, we run the fast algorithm for sample sizes going from 4,096 (which is 2^12) to 1,048,576 (which is 2^20). The running times are reported in Tab. 2 and Fig. 2. When n = 1,048,576, the running time is a little more than three minutes. The trend that is observable from Fig. 2 is consistent with our claim that the fast method is an O(n log n) algorithm: the running time scales approximately linearly with the sample size n. We did not run experiments with larger sample sizes, because their outcomes are predictable from this scaling of the fast method.

Sample Size Fast method
4,096 0.3323 (0.0044)
8,192 0.7432 (0.0051)
16,384 1.6752 (0.0137)
32,768 3.7686 (0.0238)
65,536 8.5158 (0.0654)
131,072 19.1241 (0.4688)
262,144 42.2150 (0.3918)
524,288 93.1250 (0.6422)
1,048,576 204.3403 (1.7328)
Table 2: Running times (in seconds) for the fast method for computing the distance correlations, when the sample sizes are large. The values in the parentheses are sample standard errors. At each sample size, repetitions were run.
Figure 2: An illustration of running times of the fast method for the computation of the distance correlations. The dashed line corresponds to an O(n log n) algorithm.

5.2 Measuring Effectiveness of Distance Correlation

The distance correlation is zero if and only if the corresponding two random variables are independent of each other. Pearson's correlation does not have such a property. There have been intuitive numerical examples to illustrate such an advantage of using the distance correlation; see the Wikipedia page on "distance correlation." When the direct implementation of the distance correlation is adopted, the sample size (denoted by n) cannot be large, due to the O(n²) complexity of the direct method. In Fig. 3, we compare Pearson's correlation with the distance correlation in nine representative cases:

  1. (X, Y) is a bivariate normal with moderate correlation;

  2. a bivariate normal with a correlation close to 1;

  3. a thickened rippled curve;

  4. a rotation of a uniformly filled rectangle;

  5. a further rotation of the aforementioned uniformly filled rectangle;

  6. a thickened quadratic curve;

  7. bifurcated quadratic curves;

  8. a thickened circle; and

  9. a bivariate mixed normal with independent coordinates.

When the sample sizes are 40 and 400, respectively, Fig. 3 presents the Pearson's correlation and the distance correlation in all nine cases. In cases (3) through (8), we seemingly observe the trend that the Pearson's correlations are getting close to zero, while the distance correlations are not. However, the significance of such a pattern is not evident at these sample sizes.

(a) Sample size: n = 40.
(b) Sample size: n = 400.
Figure 3: Comparison of the Pearson's correlation and the distance correlation in nine cases. In each sub-figure, the two coordinates correspond to the random variables (X, Y). Each dot is a sample point. In the title, the first value is the Pearson's correlation, and the second one is the corresponding distance correlation.

With the fast method, we now can run the same experiments with larger sample sizes. In Fig. 4, we run the comparison with sample size n = 10,000.


Figure 4: Comparison of the Pearson's correlation and the distance correlation when the sample size n is large: n = 10,000. Each dot is a realization of the pair of random variables (X, Y). In each case, the first value is the Pearson's correlation, and the second one is the corresponding distance correlation. We can clearly observe that in cases (3) through (8), the Pearson's correlations are close to zero, while the distance correlations are not. In cases (1), (2), and (9), the two correlations are close to each other, as the theory predicts.

This is a sample size for which the corresponding experiment cannot be done with the direct method. It is clear that the Pearson's correlations become nearly zero in cases (3) through (8), even though the two random variables are not independent. The corresponding distance correlations are clearly far from zero. For readers' convenience, we summarize the results in Table 3.

Sample Size (1) (2) (3) (4) (5) (6)
n = 40 0.92; 0.91 0.98; 0.98 0.08; 0.35 -0.04; 0.00 -0.07; 0.00 -0.38; 0.44
n = 400 0.80; 0.75 0.98; 0.97 0.04; 0.30 0.10; 0.17 0.08; 0.18 -0.03; 0.40
n = 10,000 0.81; 0.76 0.98; 0.97 0.01; 0.33 0.00; 0.14 0.01; 0.14 0.01; 0.42
(7) (8) (9)
n = 40 0.04; 0.21 0.16; 0.22 0.10; 0.00
n = 400 0.10; 0.25 0.04; 0.19 0.01; 0.00
n = 10,000 0.01; 0.25 0.00; 0.18 -0.01; 0.00
Table 3: Pearson's correlations (left) and distance correlations (right) for the nine cases that are studied in Figures 3 and 4. It is of particular interest to observe that when n = 10,000, the distance correlations in cases (3) through (8) are clearly nonzero, while the Pearson's correlations in these cases converge to zero.

The fast method allows us to study how the sample distance correlation converges to the population counterpart as a function of the sample size. Fig. 5 shows the convergence of the sample distance correlation and the sample Pearson's correlation. It is worth noting that in cases (3)-(8), the Pearson's correlation quickly converges to zero, while the sample distance correlation clearly stays away from zero. This experiment shows that the observation made in Fig. 4 should occur with large probability.

Figure 5: Convergence of the covering interval of both the sample Pearson's correlation (solid lines, with the lower and upper sample quartiles marked) and the sample distance correlation (dotted lines, with both quartiles marked). The horizontal axis is the sample size. The vertical axis corresponds to the values of the correlations. In cases (3) through (8), the two correlations clearly converge to different constants, while the Pearson's correlation always seems to converge to zero.

5.3 Feature Screening

In ?, distance correlation has been proposed to facilitate feature screening in ultrahigh-dimensional data analysis. The proposed sure independence screening procedure based on the distance correlation (DC-SIS) was shown to be effective in their simulation study. Due to the use of the direct method, the sample size in their study was necessarily restricted. We redo the simulations as in ?, but increase the sample size to many times the originally attempted one. It is observed that the use of distance correlation becomes more advantageous when the sample size increases.

The screening algorithm, which was initially advocated by ?, works as follows. For each covariate, a 'marginal utility' is computed. Such a marginal utility can be Pearson's correlation, the distance correlation that is discussed in this paper, or another dependence measure such as the one in ? that was also used in the simulation studies of ?. The 'screening' is based on the magnitudes of the values of these marginal utilities. Sometimes a forward, backward, or hybrid stepwise approach is proposed. In this paper, we refrain from further discussion of this potential research direction.

Our simulation setup follows the one in ?. Note that an alternative approach named sure independent ranking and screening (SIRS) (?) was compared against. For a sample, {(x_i, y_i) : i = 1, …, n}, of two random variables X and Y, the SIRS dependence measure (i.e., the marginal utility function) is defined as

(5.1)

where I(·) is an indicator function. The formulation in the above definition seemingly hints at an O(n²) algorithm. The following theorem shows that it can be computed via an O(n log n) algorithm. The proof and the algorithmic details are relegated to the appendix.

Theorem 5.1.

For a sample, {(x_i, y_i) : i = 1, …, n}, of a bivariate random vector (X, Y), the SIRS measure (?) in (5.1) can be computed via an algorithm whose average complexity is O(n log n).

For completeness, we state our simulation setup below. We generate the covariate vector x = (X_1, …, X_p)^T from a normal distribution with zero mean and covariance matrix Σ = (σ_{jk}), and the error term from the standard normal distribution N(0, 1). Two covariance matrices are considered to assess the performance of the DC-SIS and to compare with existing methods; both are of the autoregressive form σ_{jk} = ρ^{|j−k|}, with two different values of ρ. Note that a covariance matrix with such entries enjoys a known Cholesky decomposition Σ = L L^T, where L_{j1} = ρ^{j−1}, L_{jk} = ρ^{j−k} sqrt(1 − ρ²) for 2 ≤ k ≤ j, and L_{jk} = 0 for k > j. In our simulations, we take advantage of this known decomposition (a data-generation sketch is given after the list of criteria below). The dimension p varies from 2,000 to 5,000. Each experiment was repeated many times, and the performance is evaluated through the following three criteria:

  1. S: the minimum model size needed to include all active predictors. We report the 5%, 25%, 50%, 75%, and 95% quantiles of S over the replications.

  2. P_s: the proportion, over the replications, that an individual active predictor is selected for a given model size d.

  3. P_a: the proportion, over the replications, that all active predictors are selected for a given model size d.

The minimum model size S is used to measure the model complexity of the resulting model of an underlying screening procedure. The closer S is to the true minimal model size, the better the screening procedure is. The sure screening property ensures that P_s and P_a are both close to one when the estimated model size d is sufficiently large. Different from ?, the model size d is chosen at three increasing levels throughout our simulations to empirically examine the effect of this cutoff; the levels are specified in terms of [n / log n], where [x] denotes the integer part of x.
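To illustrate the data-generation step referred to above (the sketch promised before the list of criteria), the fragment below uses the AR(1)-type recursion that realizes the Cholesky factor L described earlier; the values of rho, p, and n are placeholders of our own, not the elided settings of this study.

```matlab
% Hedged sketch: generate x with cov(X_j, X_k) = rho^|j-k| via the AR(1)
% recursion that is equivalent to multiplying i.i.d. normals by L.
rho = 0.5;  p = 2000;  n = 500;   % placeholder values, not the paper's settings
Z = randn(n, p);                  % i.i.d. standard normal innovations
X = zeros(n, p);
X(:, 1) = Z(:, 1);
for j = 2:p
    X(:, j) = rho*X(:, j-1) + sqrt(1 - rho^2)*Z(:, j);
end
err = randn(n, 1);                % standard normal error term
% The response is then generated from one of the models (1.a)-(1.d).
```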

An innovative stopping rule is introduced in ? for DC-SIS. We did not implement it here, because the new stopping rule requires a multivariate version of the distance correlation, which is not covered by this paper.

The example is designed to compare the finite-sample performance of the DC-SIS with the SIS (?) and the SIRS (?). In this example, we generate the response Y from the following four models:

  • ,

  • ,

  • ,

  • ,

where I(·) is an indicator function.

The regression functions in models (1.a)-(1.d) are all nonlinear in x. In addition, models (1.b) and (1.c) contain an interaction term, and model (1.d) is heteroscedastic. Following Fan and Lv (2008), each nonzero regression coefficient is of the form (−1)^U (a + |Z|), where a is a constant that depends on the sample size, U is a Bernoulli random variable, and Z is standard normal. The remaining settings in this example are chosen to be consistent with the experiments in ?, so as to challenge the feature screening procedures under consideration. For each independence screening procedure, we compute the associated marginal utility between each predictor X_k and the response Y. That is, we regard x as the predictor vector in this example.

Model | SIS: 5%, 25%, 50%, 75%, 95% | SIRS: 5%, 25%, 50%, 75%, 95% | DC-SIS: 5%, 25%, 50%, 75%, 95%
Case 1: p = 2,000
(1.a) 4 4 6 10 22 4 5 6 10 20 4 5 6 9 20
(1.b) 76 551 1180 1592 1918 237 814 1269 1789 1959 4 6 8 11 14
(1.c) 591 922 1364 1781 1941 342 827 1354 1637 1930 6 6 6 8 11
(1.d) 8 237 726 1310 1827 58 273 919 1444 1878 4 4 6 8 1001
Case 2: p = 2,000
(1.a) 5 9 14 21 46 5 9 14 22 69 4 9 14 20 36
(1.b) 28 32 35 109 1497 29 33 40 327 1658 4 10 15 20 26
(1.c) 39 222 711 1418 1924 37 109 379 1053 1843 10 11 14 18 23
(1.d) 13 89 547 1152 1823 22 77 338 863 1679 5 8 11 17 355
Case 3: p = 5,000
(1.a) 4 5 6 9 19 4 5 6 9 20 4 5 6 9 19
(1.b) 59 1107 2751 3920 4737 299 1755 3255 4289 4837 4 6 8 10 14
(1.c) 998 2652 3790 4425 4898 321 1864 3269 4303 4857 6 6 6 8 10
(1.d) 10 221 1346 3055 4585 64 596 1894 3500 4791 4 4 5 7 1024
Case 4: p = 5,000
(1.a) 5 10 16 23 46 6 11 17 23 48 5 10 16 22 35
(1.b) 28 32 36 314 3907 29 34 49 665 4449 5 9 14 20 27
(1.c) 45 447 1759 3538 4818 41 235 1063 2603 4762 10 11 14 18 23
(1.d) 14 216 1307 3018 4695 23 115 747 2135 4368 5 8 11 16 150
Table 4: The 5%, 25%, 50%, 75%, and 95% quantiles of the minimum model size S over the replications.
Model | Size | SIS: P_s for each active predictor, All (= P_a) | SIRS: P_s for each active predictor, All (= P_a) | DC-SIS: P_s for each active predictor, All (= P_a)
Case 1: p = 2,000
(1.a) 1.00 0.95 1.00 1.00 0.95 1.00 0.95 1.00 1.00 0.95 1.00 0.95 1.00 1.00 0.95
1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96
1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96 1.00 0.97 1.00 1.00 0.97
(1.b) 0.01 0.02 1.00 1.00 0.01 0.00 0.00 1.00 1.00 0.00 1.00 1.00 1.00 1.00 1.00
0.05 0.06 1.00 1.00 0.02 0.01 0.03 1.00 1.00 0.00 1.00 1.00 1.00 1.00 1.00
0.07 0.08 1.00 1.00 0.03 0.02 0.04 1.00 1.00 0.00 1.00 1.00 1.00 1.00 1.00
(1.c) 0.08 0.07 0.00 1.00 0.00 0.00 0.03 1.00 1.00 0.00 1.00 1.00 1.00 1.00 1.00
0.11 0.07 0.01 1.00 0.00 0.01 0.03 1.00 1.00 0.00 1.00 1.00 1.00 1.00 1.00
0.14 0.11 0.01 1.00 0.00 0.02 0.04 1.00 1.00 0.01 1.00 1.00 1.00 1.00 1.00
(1.d) 0.60 0.40 0.52 0.52 0.11 0.98 0.93 1.00 0.03 0.03 0.98 0.89 1.00 1.00 0.88
0.72 0.42 0.54 0.59 0.12 0.98 0.95 1.00 0.04 0.04 0.98 0.90 1.00 1.00 0.89
0.78 0.44 0.63 0.62 0.14 0.98 0.95 1.00 0.05 0.05 0.98 0.91 1.00 1.00 0.89
Case 2: p = 2,000
(1.a) 0.92 0.85 0.88 1.00 0.74 0.92 0.84 0.88 1.00 0.73 0.92 0.84 0.91 1.00 0.76
0.99 0.97 0.98 1.00 0.95 0.99 0.97 0.98 1.00 0.94 0.99 0.97 1.00 1.00 0.96
0.99 0.97 0.99 1.00 0.95 0.99 0.97 0.99 1.00 0.95 0.99 0.97 1.00 1.00 0.96
(1.b) 0.00 0.01 0.97 1.00 0.00 0.00 0.00 0.97 1.00 0.00 0.82 0.84 0.96 1.00 0.77
0.64 0.74 0.99 1.00 0.64 0.52 0.65 0.98 1.00 0.50 1.00 1.00 1.00 1.00 1.00
0.71 0.79 0.99 1.00 0.70 0.62 0.71 0.99 1.00 0.61 1.00 1.00 1.00 1.00 1.00
(1.c) 0.01 0.01 0.50 1.00 0.00 0.00 0.00 0.99 1.00 0.00 0.94 0.94 0.93 1.00 0.87
0.11 0.09 0.91 1.00 0.05 0.11 0.13 1.00 1.00 0.07 1.00 1.00 1.00 1.00 1.00
0.15 0.14 0.91 1.00 0.09 0.19 0.25 1.00 1.00 0.16 1.00 1.00 1.00 1.00 1.00
(1.d) 0.66 0.60 0.49 0.42 0.09 1.00 0.94 0.97 0.02 0.02 0.94 0.86 0.97 1.00 0.82
0.72 0.64 0.55 0.49 0.16 1.00 0.96 0.97 0.18 0.17 0.99 0.95 0.98 1.00 0.91
0.76 0.68 0.59 0.54 0.20 1.00 0.96 0.98 0.22 0.20 0.99 0.95 0.99 1.00 0.93
Table 5: The proportions P_s (for each individual active predictor) and P_a (the column "All") in our experiment for the first two cases. Within each model, the three rows correspond to the three user-specified model sizes d.
Model | Size | SIS: P_s for each active predictor, All (= P_a) | SIRS: P_s for each active predictor, All (= P_a) | DC-SIS: P_s for each active predictor, All (= P_a)
Case 3: p = 5,000
(1.a) 1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96
1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96
1.00 0.97 1.00 1.00 0.96 1.00 0.97 1.00 1.00 0.96 1.00 0.96 1.00 1.00 0.96
(1.b) 0.09 0.09 1.00 1.00 0.05 0.04 0.03 1.00 1.00 0.01 1.00 1.00 1.00 1.00 1.00
0.13 0.13 1.00 1.00 0.06 0.05 0.05 1.00 1.00 0.01 1.00 1.00 1.00 1.00 1.00
0.15 0.15 1.00 1.00 0.08 0.07 0.07 1.00 1.00 0.02 1.00 1.00 1.00 1.00 1.00
(1.c) 0.09 0.10 0.01 1.00 0.00 0.03 0.03 0.99 1.00 0.01 1.00 1.00 0.99 1.00 0.99
0.12 0.14 0.03 1.00 0.00 0.06 0.05 1.00 1.00 0.02 1.00 1.00 1.00 1.00 1.00
0.14 0.15 0.04 1.00 0.00 0.06 0.07 1.00 1.00 0.02 1.00 1.00 1.00 1.00 1.00
(1.d) 0.77 0.47 0.52 0.53 0.14 1.00 0.95 1.00 0.04 0.04 1.00 0.93 1.00 1.00 0.92
0.80 0.54 0.58 0.57 0.19 1.00 0.95 1.00 0.08 0.07 1.00 0.93 1.00 1.00 0.93
0.82 0.57 0.60 0.60 0.21 1.00 0.96 1.00 0.10 0.10 1.00 0.93 1.00 1.00 0.93
Case 4: p = 5,000
(1.a) 0.99 0.98 0.98 1.00 0.95 0.99 0.98 0.98 1.00 0.95 0.99 0.98 1.00 1.00 0.97
0.99 0.98 0.99 1.00 0.96 0.99 0.99 0.99 1.00 0.96 0.99 0.99 1.00 1.00 0.98
0.99 0.99 0.99 1.00 0.97 0.99 0.99 0.99 1.00 0.97 0.99 0.99 1.00 1.00 0.98
(1.b) 0.62 0.71 0.98 1.00 0.61 0.52 0.63 0.99 1.00 0.51 1.00 1.00 1.00 1.00 1.00
0.68 0.76 0.99 1.00 0.67 0.60 0.70 0.99 1.00 0.60 1.00 1.00 1.00 1.00 1.00
0.71 0.78 0.99 1.00 0.70 0.62 0.73 0.99 1.00 0.62 1.00 1.00 1.00 1.00 1.00
(1.c) 0.11 0.11 0.91 1.00 0.06 0.11 0.13 1.00 1.00 0.08 1.00 1.00 1.00 1.00 1.00
0.16 0.16 0.93 1.00 0.10 0.20 0.24 1.00 1.00 0.16 1.00 1.00 1.00 1.00 1.00
0.20 0.21 0.94 1.00 0.13 0.22 0.30 1.00 1.00 0.19 1.00 1.00 1.00 1.00 1.00
(1.d) 0.70 0.61 0.50 0.49 0.14 0.98 0.98 0.99 0.18 0.17 0.97 0.96 1.00 1.00 0.94
0.73 0.65 0.55 0.53 0.18 0.99 0.98 0.99 0.25 0.24 0.98 0.97 1.00 1.00 0.95
0.76 0.67 0.57 0.57 0.21 0.99 0.98 0.99 0.30 0.28 0.98 0.97 1.00 1.00 0.95
Table 6: The proportions P_s and P_a in our example, for the remaining two cases. Within each model, the three rows correspond to the three user-specified model sizes d.

Tables 4, 5, and 6 present the simulation results for S, P_s, and P_a. The performances of the DC-SIS, SIS, and SIRS are quite similar in model (1.a), indicating that the SIS has a robust performance if the working linear model does not deviate far from the underlying true model. The DC-SIS outperforms the SIS and the SIRS significantly in models (1.b)-(1.d). Both the SIS and the SIRS have little chance of identifying some of the important predictors in models (1.b) and (1.c), as well as in model (1.d).

Comparing Tables 4 and 5 with their counterparts in ?, one can clearly see that the advantage of using the distance correlation becomes more evident: the sample quantiles of S for the DC-SIS become smaller, and the coverage proportions P_s and P_a become larger.

6 Conclusion

Distance correlation has been found useful in many applications (??). A direct implementation of the distance correlation led to an algorithm with sample size