Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs


Dávid Pál
Department of Computing Science
University of Alberta
Edmonton, AB, Canada
dpal@cs.ualberta.ca
Barnabás Póczos
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA, USA
poczos@ualberta.ca
Csaba Szepesvári
Department of Computing Science
University of Alberta
Edmonton, AB, Canada
szepesva@ualberta.ca
Abstract

We present simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. sample drawn from an unknown, absolutely continuous distribution over $\mathbb{R}^d$. The estimators are calculated as the sum of $p$-th powers of the Euclidean lengths of the edges of the 'generalized nearest-neighbor' graph of the sample and of the empirical copula of the sample, respectively. For the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter under the assumption that the density underlying the sample is Lipschitz continuous. Experiments demonstrate their usefulness in independent subspace analysis.

 

1 Introduction

We consider the nonparametric problem of estimating Rényi $\alpha$-entropy and mutual information (MI) based on a finite sample drawn from an unknown, absolutely continuous distribution over $\mathbb{R}^d$. There are many applications that make use of such estimators, of which we list a few to give the reader a taste: Entropy estimators can be used for goodness-of-fit testing (vasicek76test; goria05new), parameter estimation in semi-parametric models (Wolsztynski85minimum), studying fractal random walks (Alemany94fractal), and texture classification (hero2002aes; hero02alpha). Mutual information estimators have been used in feature selection (peng05feature), clustering (aghagolzadeh07hierarchical), causality detection (Hlavackova07causality), optimal experimental design (lewi07realtime; poczos09identification), fMRI data processing (chai09exploring), prediction of protein structures (adami04information), and boosting and facial expression recognition (Shan05conditionalmutual). Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis (radical03; poczos05geodesic; Hulle08constrained; szabo07undercomplete_TCC), and image registration (kybic06incremental; hero2002aes; hero02alpha). For further applications, see Leonenko-Pronzato-Savani2008; WKV2009Survey.

In a naïve approach to Rényi entropy and mutual information estimation, one could use the so-called "plug-in" estimates. These are based on the obvious idea that since entropy and mutual information are determined solely by the density (and its marginals), it suffices to first estimate the density using one's favorite density estimator, which is then "plugged into" the formulas defining entropy and mutual information. The density is, however, a nuisance parameter which we do not want to estimate. Density estimators have tunable parameters, and we may need cross-validation to achieve good performance.
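To make the contrast concrete, here is a minimal sketch of such a plug-in estimate (our own illustration, not a method proposed in this paper; the function name and the fixed bin count are ours, whereas the histogram baseline of Section 6 uses Scott's rule): the density is first estimated by a histogram and then substituted into the defining integral of the Rényi entropy.

```python
import numpy as np

def renyi_entropy_histogram_plugin(X, alpha, bins=20):
    """Plug-in estimate of H_alpha: estimate the density by a histogram,
    then compute (1/(1-alpha)) * log( integral of f^alpha )."""
    X = np.asarray(X, dtype=float)            # shape (n, d)
    hist, edges = np.histogramdd(X, bins=bins)
    widths = [np.diff(e)[0] for e in edges]   # equal-width bins per dimension
    cell_volume = float(np.prod(widths))
    density = hist / (hist.sum() * cell_volume)
    nonzero = density[density > 0]
    integral = np.sum(nonzero ** alpha) * cell_volume
    return np.log(integral) / (1.0 - alpha)
```

The bin count (or bin width) is exactly the kind of tunable parameter alluded to above, and the construction becomes impractical in high dimensions, which motivates the direct, graph-based estimator studied in this paper.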

The entropy estimation algorithm considered here is direct: it does not build on density estimators. It is based on $k$-nearest-neighbor (NN) graphs with a fixed $k$. A variant of these estimators, where each sample point is connected to its $k$-th nearest neighbor only, was recently studied by goria05new for Shannon entropy estimation (i.e. the special case $\alpha = 1$) and by Leonenko-Pronzato-Savani2008 for Rényi $\alpha$-entropy estimation. They proved the weak consistency of their estimators under certain conditions. However, their proofs contain some errors, and it is not obvious how to fix them. Namely, Leonenko-Pronzato-Savani2008 apply the generalized Helly-Bray theorem, while goria05new apply the inverse Fatou lemma, under conditions when these theorems do not hold. This latter error originates from the article of kozachenko87statistical, and the same mistake can also be found in Wang-Kulkarni-Verdu2009.

The first main contribution of this paper is to give a correct proof of consistency of these estimators. Employing very different proof techniques than the papers mentioned above, we show that these estimators are, in fact, strongly consistent provided that the unknown density has bounded support and $0 < \alpha < 1$. At the same time, we allow for more general nearest-neighbor graphs: as opposed to connecting each point only to its $k$-th nearest neighbor, we allow each point to be connected to an arbitrary subset of its $k$ nearest neighbors. Besides adding generality, our numerical experiments suggest that connecting each sample point to all of its $k$ nearest neighbors improves the rate of convergence of the estimator.

The second major contribution of our paper is that we prove a finite-sample high-probability bound on the error (i.e. the rate of convergence) of our estimator provided that the density is Lipschitz. To the best of our knowledge, this is the very first result that gives a rate for the estimation of Rényi entropy. The closest to our result in this respect is the work by tsybakov96rootn, who proved the root-$n$ consistency of an estimator of the Shannon entropy, and only in one dimension.

The third contribution is a strongly consistent estimator of Rényi mutual information that is based on NN graphs and the empirical copula transformation (dedecker07weak). This result is proved for $d \ge 2$ (our result for Rényi entropy estimation holds for $d = 1$ and for the whole range $0 < \alpha < 1$, too) and $1/2 < \alpha < 1$. This builds upon and extends the previous work of Poczos-Kirshner-Szepesvari2010, where instead of NN graphs, the minimum spanning tree (MST) and the shortest tour through the sample (i.e. the traveling salesman problem, TSP) were used, but it was only conjectured that NN graphs can be applied as well.

There are several advantages of using the $k$-NN graph over the MST and TSP (besides the obvious conceptual simplicity of $k$-NN): On a serial computer the $k$-NN graph can be computed somewhat faster than the MST and much faster than the TSP tour; furthermore, in contrast to MST and TSP, the computation of the $k$-NN graph can be easily parallelized. Secondly, for different values of $p$, the MST and the TSP tour need to be recomputed, since the weight of an edge between two points is the $p$-th power of their Euclidean distance. The $k$-NN graph, however, does not change for different values of $p$, since taking the $p$-th power is a monotone transformation, and hence the estimates for multiple values of $p$ can be calculated without the extra penalty incurred by recomputing the graph. This can be advantageous, e.g., in intrinsic dimension estimators of manifolds (costa03entropic), where $p$ is a free parameter, and thus one can calculate the estimates efficiently for several different parameter values.

The fourth major contribution is a proof of a finite-sample high-probability error bound (i.e. the rate of convergence) for our mutual information estimator, which holds under the assumption that the density of the copula of the underlying distribution is Lipschitz. To the best of our knowledge, this is the first result that gives a rate for the estimation of Rényi mutual information.

The toolkit for proving our results derives from the rich literature on Euclidean functionals, see (Steele1997; Yukich1998). In particular, our strong consistency result uses a theorem due to Redmond-Yukich1996 that essentially states that any quasi-additive power-weighted Euclidean functional can be used as a strongly consistent estimator of Rényi entropy (see also HeMi99). We also make use of a result due to Koo-Lee2007, who proved a rate-of-convergence result that holds under more stringent conditions. Thus, the main thrust of the present work is showing that these conditions hold for $p$-power weighted nearest-neighbor graphs. Curiously enough, up to now, no one has shown this, except for a special case studied in Section 8.3 of (Yukich1998). However, the conditions imposed there give results only for a restricted range of $\alpha$.

All proofs and supporting lemmas can be found in the appendix. In the main body of the paper, we focus on a clear explanation of the Rényi entropy and mutual information estimation problems, the estimation algorithms, and the statements of our convergence results.

Additionally, we report on two numerical experiments. In the first experiment, we compare the empirical rates of convergence of our estimators with our theoretical results and with plug-in estimates. Empirically, the NN methods are the clear winners. The second experiment is an illustrative application of mutual information estimation to an Independent Subspace Analysis (ISA) task.

The paper is organized as follows: In the next section, we formally define Rényi entropy and Rényi mutual information and the problem of their estimation. Section 3 explains the 'generalized nearest-neighbor' graph. This graph is then used in Section 4 to define our Rényi entropy estimator. In the same section, we state a theorem containing our convergence results for this estimator (strong consistency and rates). In Section 5, we explain the copula transformation, which connects Rényi entropy with Rényi mutual information. The copula transformation, together with the Rényi entropy estimator from Section 4, is used to build an estimator of Rényi mutual information. We conclude this section with a theorem stating the convergence properties of the estimator (strong consistency and rates). Section 6 contains the numerical experiments. We conclude the paper with a detailed discussion of further related work in Section 7 and a list of open problems and directions for future research in Section 8.

2 The Formal Definition of the Problem

The Rényi entropy and Rényi mutual information of $d$ real-valued random variables $X = (X^1, \dots, X^d)$ (we use superscripts for indexing the dimension coordinates) with joint density $f$ and marginal densities $f_1, \dots, f_d$ are defined for any real parameter $\alpha$, assuming the underlying integrals exist. For $\alpha \neq 1$, the Rényi entropy and Rényi mutual information are defined respectively as (the base of the logarithms in the definition is not important: any base strictly bigger than $1$ is allowed, and, as with Shannon entropy and mutual information, one traditionally uses either base $2$ or base $e$; the particular choice does not affect our results)

\[
H_\alpha(X^1, \dots, X^d) = \frac{1}{1-\alpha} \log \int_{\mathbb{R}^d} f^\alpha(x) \, \mathrm{d}x , \tag{1}
\]
\[
I_\alpha(X^1, \dots, X^d) = \frac{1}{\alpha-1} \log \int_{\mathbb{R}^d} f^\alpha(x) \left( \prod_{i=1}^d f_i(x^i) \right)^{1-\alpha} \mathrm{d}x . \tag{2}
\]

For $\alpha = 1$ they are defined by the limits $H_1 = \lim_{\alpha \to 1} H_\alpha$ and $I_1 = \lim_{\alpha \to 1} I_\alpha$. In fact, the Shannon (differential) entropy and the Shannon mutual information are just special cases of Rényi entropy and Rényi mutual information with $\alpha = 1$.
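To make this connection explicit, the following short calculation (a standard argument, added here for completeness and not spelled out in the original text) shows how the limit $\alpha \to 1$ of (1) recovers the Shannon differential entropy; the argument for (2) is analogous. Assuming we may differentiate under the integral sign, L'Hôpital's rule gives

\[
\lim_{\alpha \to 1} H_\alpha
= \lim_{\alpha \to 1} \frac{\log \int_{\mathbb{R}^d} f^\alpha(x)\,\mathrm{d}x}{1 - \alpha}
= \lim_{\alpha \to 1} \frac{\int_{\mathbb{R}^d} f^\alpha(x) \log f(x)\,\mathrm{d}x}{- \int_{\mathbb{R}^d} f^\alpha(x)\,\mathrm{d}x}
= - \int_{\mathbb{R}^d} f(x) \log f(x)\,\mathrm{d}x .
\]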

The goal of this paper is to present estimators of the Rényi entropy (1) and the Rényi mutual information (2) and to study their convergence properties. To be more explicit, we consider the problem where we are given $n$ i.i.d. random variables $X_{1:n} = (X_1, X_2, \dots, X_n)$, where each $X_j \in \mathbb{R}^d$ has density $f$ and marginal densities $f_1, \dots, f_d$, and our task is to construct an estimate $\widehat H_\alpha(X_{1:n})$ of $H_\alpha$ and an estimate $\widehat I_\alpha(X_{1:n})$ of $I_\alpha$ using the sample $X_{1:n}$.

3 Generalized Nearest-Neighbor Graphs

The basic tool to define our estimators is the generalized nearest-neighbor graph and, more specifically, the sum of the $p$-th powers of the Euclidean lengths of its edges.

Formally, let $V$ be a finite set of points in a Euclidean space $\mathbb{R}^d$ and let $S$ be a finite non-empty set of positive integers; we denote by $k$ the maximum element of $S$. We define the generalized nearest-neighbor graph $NN_S(V)$ as a directed graph on $V$. The edge set of $NN_S(V)$ contains, for each $i \in S$, an edge from each vertex $x \in V$ to its $i$-th nearest neighbor. That is, if we sort the points of $V \setminus \{x\}$ according to their Euclidean distance to $x$ (breaking ties arbitrarily), then the $i$-th point in this ordering is the $i$-th nearest neighbor of $x$, and for each $i \in S$ there is an edge from $x$ to it in the graph.

For $p > 0$, let us denote by $L_p(V)$ the sum of the $p$-th powers of the Euclidean lengths of the edges of $NN_S(V)$. Formally,

\[
L_p(V) = \sum_{(x, y) \in E(NN_S(V))} \|x - y\|^p , \tag{3}
\]

where $E(NN_S(V))$ denotes the edge set of $NN_S(V)$. We intentionally hide the dependence on $S$ in the notation $L_p(V)$. For the rest of the paper, the reader should think of $S$ as a fixed but otherwise arbitrary finite non-empty set of positive integers.
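To make the definition concrete, the following sketch (our illustration; the function name is ours and not from the paper) computes $L_p(V)$ with a k-d tree from SciPy: for every point it finds the $i$-th nearest neighbor for each $i \in S$ and sums the $p$-th powers of the corresponding edge lengths.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_graph_power_sum(V, p, S):
    """L_p(V): sum of p-th powers of the edge lengths of the generalized
    nearest-neighbor graph NN_S(V).  V is an (n, d) array, S a collection
    of positive integers with max(S) < n."""
    V = np.asarray(V, dtype=float)
    k = max(S)
    tree = cKDTree(V)
    # query k+1 neighbors; column 0 is the point itself (distance 0),
    # column i is the distance to the i-th nearest neighbor
    dists, _ = tree.query(V, k=k + 1)
    return sum(np.sum(dists[:, i] ** p) for i in S)

# example: 1000 uniform points in [0,1]^3, S = {1, 2, 3}, p = 1.5
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
print(nn_graph_power_sum(X, p=1.5, S={1, 2, 3}))
```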

The following is a basic result about $L_p$. The proof can be found in the appendix.

Theorem 1 (Constant $\gamma$).

Let $X_{1:n}$ be an i.i.d. sample from the uniform distribution over the $d$-dimensional unit cube $[0,1]^d$. For any $p \in (0, d)$ and any finite non-empty set $S$ of positive integers, there exists a constant $\gamma > 0$ such that

\[
\lim_{n \to \infty} \frac{L_p(X_{1:n})}{n^{1 - p/d}} = \gamma \qquad \text{almost surely.} \tag{4}
\]

The value of $\gamma$ depends on $d$, $p$ and $S$ and, except for special cases, an analytical formula for its value is not known. This causes a minor problem, since the constant appears in our estimators. A simple and effective way to deal with this problem is to generate a large i.i.d. sample from the uniform distribution over $[0,1]^d$ and estimate $\gamma$ by the empirical value of $L_p(X_{1:n})/n^{1-p/d}$.
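The following sketch (again our own illustration, reusing nn_graph_power_sum and the imports from the previous snippet) implements exactly this Monte Carlo recipe: draw a large uniform sample from $[0,1]^d$ and normalize $L_p$ by $n^{1-p/d}$.

```python
def estimate_gamma(d, p, S, n=100_000, n_repeats=4, seed=0):
    """Monte Carlo estimate of the constant gamma of Theorem 1:
    gamma ~ L_p(X_{1:n}) / n**(1 - p/d) for X uniform on [0,1]^d."""
    rng = np.random.default_rng(seed)
    vals = [nn_graph_power_sum(rng.random((n, d)), p, S) / n ** (1.0 - p / d)
            for _ in range(n_repeats)]
    return float(np.mean(vals)), float(np.std(vals))

# example: gamma for d = 3, p = 1.5, S = {1, 2, 3}
gamma_hat, gamma_std = estimate_gamma(d=3, p=1.5, S={1, 2, 3})
```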

4 An Estimator of Rényi Entropy

We are now ready to present an estimator of Rényi entropy based on the generalized nearest-neighbor graph. Suppose we are given an i.i.d. sample $X_{1:n}$ from a distribution $\mu$ over $\mathbb{R}^d$ with density $f$. We estimate the entropy $H_\alpha$ for $\alpha \in (0, 1)$ by

\[
\widehat H_\alpha(X_{1:n}) = \frac{1}{1-\alpha} \, \log \frac{L_p(X_{1:n})}{\gamma \, n^{1 - p/d}} , \qquad \text{where } p = d(1-\alpha), \tag{5}
\]

and $L_p(X_{1:n})$ is the sum of the $p$-th powers of the Euclidean lengths of the edges of the generalized nearest-neighbor graph $NN_S(X_{1:n})$ for some finite non-empty $S$, as defined by equation (3). The constant $\gamma$ is the same as in Theorem 1.
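Putting the pieces together, a minimal sketch of the estimator (5) could look as follows (our illustration, building on the two snippets above; the constant gamma has to be estimated for the same $d$, $p$ and $S$). For a uniform distribution on $[0,2]^3$ the true value is $H_\alpha = \log 8 \approx 2.08$ (natural logarithm), which serves as a sanity check.

```python
def renyi_entropy_knn(X, alpha, S, gamma):
    """Estimate of H_alpha from an (n, d) sample X via equation (5),
    with p = d * (1 - alpha) and n**(1 - p/d) = n**alpha."""
    n, d = X.shape
    p = d * (1.0 - alpha)
    L = nn_graph_power_sum(X, p, S)
    return np.log(L / (gamma * n ** alpha)) / (1.0 - alpha)

# sanity check: uniform distribution on [0,2]^3, alpha = 0.8
alpha, S = 0.8, {1, 2, 3}
gamma = estimate_gamma(d=3, p=3 * (1 - alpha), S=S)[0]
X = 2.0 * rng.random((5000, 3))
print(renyi_entropy_knn(X, alpha, S, gamma))   # should be close to log(8)
```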

The following theorem is our main result about the estimator $\widehat H_\alpha$. It states that $\widehat H_\alpha$ is strongly consistent and gives upper bounds on its rate of convergence. The proof of the theorem is in the appendix.

Theorem 2 (Consistency and Rate for $\widehat H_\alpha$).

Let $0 < \alpha < 1$. Let $\mu$ be an absolutely continuous distribution over $\mathbb{R}^d$ with bounded support and let $f$ be its density. If $X_{1:n}$ is an i.i.d. sample from $\mu$, then

\[
\lim_{n \to \infty} \widehat H_\alpha(X_{1:n}) = H_\alpha(X^1, \dots, X^d) \qquad \text{almost surely.} \tag{6}
\]

Moreover, if $f$ is Lipschitz, then for any $\delta > 0$, with probability at least $1 - \delta$,

(7)

5 Copulas and Estimator of Mutual Information

Estimating mutual information is slightly more complicated than estimating entropy. We start with a basic property of mutual information which we call rescaling. It states that if $g_1, \dots, g_d$ are arbitrary strictly increasing functions, then

\[
I_\alpha\big(g_1(X^1), \dots, g_d(X^d)\big) = I_\alpha(X^1, \dots, X^d). \tag{8}
\]

A particularly clever choice is $g_i = F_i$ for all $1 \le i \le d$, where $F_i$ is the cumulative distribution function (c.d.f.) of $X^i$. With this choice, the marginal distribution of $F_i(X^i)$ is the uniform distribution over $[0,1]$, assuming that $F_i$, the c.d.f. of $X^i$, is continuous. Looking at the definitions of $H_\alpha$ and $I_\alpha$ we see that

\[
I_\alpha(X^1, \dots, X^d) = - H_\alpha\big(F_1(X^1), \dots, F_d(X^d)\big).
\]

In other words, the calculation of mutual information can be reduced to the calculation of entropy, provided that the marginal c.d.f.'s are known. The problem is, of course, that these are not known and need to be estimated from the sample. We will use the empirical c.d.f.'s as their estimates. Given an i.i.d. sample $X_{1:n}$ from the distribution of $X$ with density $f$, the empirical c.d.f.'s are defined as

\[
\widehat F_i(x) = \frac{1}{n} \left| \left\{ j \,:\, 1 \le j \le n, \ X_j^i \le x \right\} \right| , \qquad 1 \le i \le d .
\]

Introduce the compact notation $F(x) = (F_1(x^1), \dots, F_d(x^d))$ and $\widehat F(x) = (\widehat F_1(x^1), \dots, \widehat F_d(x^d))$ for $x = (x^1, \dots, x^d) \in \mathbb{R}^d$, and let

\[
Z_j = F(X_j), \qquad 1 \le j \le n, \tag{9}
\]
\[
\widehat Z_j = \widehat F(X_j), \qquad 1 \le j \le n. \tag{10}
\]

Let us call the maps $F$ and $\widehat F$ the copula transformation and the empirical copula transformation, respectively. The joint distribution of $Z = F(X)$ is called the copula of $X$, and the sample $\widehat Z_{1:n} = (\widehat Z_1, \dots, \widehat Z_n)$ is called the empirical copula (dedecker07weak). Note that the $i$-th coordinate of $\widehat Z_j$ equals

\[
\widehat Z_j^i = \frac{1}{n} \left| \left\{ \ell \,:\, 1 \le \ell \le n, \ X_\ell^i \le X_j^i \right\} \right| ,
\]

i.e. $1/n$ times the number of elements of $\{X_1^i, \dots, X_n^i\}$ less than or equal to $X_j^i$. Also, observe that the random variables $\widehat Z_1, \dots, \widehat Z_n$ are not even independent! Nonetheless, the empirical copula is a good approximation of an i.i.d. sample from the copula of $X$. Hence, we estimate the Rényi mutual information by

\[
\widehat I_\alpha(X_{1:n}) = - \widehat H_\alpha(\widehat Z_{1:n}) , \tag{11}
\]

where $\widehat H_\alpha$ is defined by (5). The following theorem is our main result about the estimator $\widehat I_\alpha$. It states that $\widehat I_\alpha$ is strongly consistent and gives upper bounds on its rate of convergence. The proof of this theorem can be found in the appendix.
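A minimal sketch of the estimator (11) (our own illustration, not the authors' code; it reuses renyi_entropy_knn and estimate_gamma from Section 4): the empirical copula transformation is a column-wise rank transform, and the mutual information estimate is the negative Rényi entropy estimate of the transformed sample.

```python
from scipy.stats import rankdata

def empirical_copula(X):
    """Column-wise rank transform: coordinate i of point j becomes
    (number of X[:, i] values <= X[j, i]) / n, i.e. hat-F_i(X[j, i]);
    for continuous data ties occur with probability zero."""
    n = X.shape[0]
    return np.column_stack([rankdata(X[:, i]) / n for i in range(X.shape[1])])

def renyi_mi_knn(X, alpha, S, gamma):
    """Estimate of I_alpha via (11): minus the Renyi entropy estimate
    computed on the empirical copula of the sample."""
    return -renyi_entropy_knn(empirical_copula(X), alpha, S, gamma)

# example: two correlated coordinates (d = 2), alpha = 0.8
alpha, S = 0.8, {1, 2, 3}
gamma = estimate_gamma(d=2, p=2 * (1 - alpha), S=S)[0]
Z = rng.standard_normal((5000, 2))
X = np.column_stack([Z[:, 0], 0.8 * Z[:, 0] + 0.6 * Z[:, 1]])
print(renyi_mi_knn(X, alpha, S, gamma))
```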

Theorem 3 (Consistency and Rate for $\widehat I_\alpha$).

Let $1/2 < \alpha < 1$ and $d \ge 2$. Let $\mu$ be an absolutely continuous distribution over $\mathbb{R}^d$ with density $f$. If $X_{1:n}$ is an i.i.d. sample from $\mu$, then $\lim_{n \to \infty} \widehat I_\alpha(X_{1:n}) = I_\alpha(X^1, \dots, X^d)$ almost surely.

Moreover, if the density of the copula of $X$ is Lipschitz, then for any $\delta > 0$, with probability at least $1 - \delta$, the estimation error $|\widehat I_\alpha(X_{1:n}) - I_\alpha(X^1, \dots, X^d)|$ is bounded by an explicit rate that tends to zero as $n \to \infty$.

6 Experiments

In this section we present two numerical experiments: one to support our theoretical results about the convergence rates, and one to demonstrate the applicability of the proposed Rényi mutual information estimator $\widehat I_\alpha$.

6.1 The Rate of Convergence

In our first experiment (Fig. 1), we demonstrate that the derived rate is indeed an upper bound on the convergence rate. Figures 1(a)-(c) show the estimation error of $\widehat I_\alpha$ as a function of the sample size. Here, the underlying distribution was a 3D uniform, a 3D Gaussian, and a 20D Gaussian with randomly chosen nontrivial covariance matrices, respectively. The value of $\alpha$ was kept fixed across these experiments. For the estimation we used the sets $S = \{k\}$ (kth) and $S = \{1, \dots, k\}$ (knn). Our results also indicate that these estimators achieve better performance than the histogram-based plug-in estimators (hist). The number and the sizes of the bins were determined with the rule of scott79optimal. The histogram-based estimator is not shown in the 20D case, as in such a high dimension it is not applicable in practice. The figures are based on averaging 25 independent runs, and they also show the theoretical upper bound (Theoretical) on the rate derived in Theorem 3. It can be seen that the theoretical rates are rather conservative. We think that this is because the theory allows for quite irregular densities, while the densities considered in this experiment are very nice.

Figure 1: Error of the estimated Rényi information as a function of the number of samples: (a) 3D uniform, (b) 3D Gaussian, (c) 20D Gaussian.

6.2 Application to Independent Subspace Analysis

An important application of dependence estimators is the Independent Subspace Analysis (ISA) problem (cardoso98multidimensional). This problem is a generalization of Independent Component Analysis (ICA), where we assume the independent sources are multidimensional vector-valued random variables. The formal description of the problem is as follows. We have $m$ independent $d$-dimensional sources $S^1, \dots, S^m$, i.e. $I(S^1, \dots, S^m) = 0$. (Here we need the generalization of MI to multidimensional arguments, but that is obvious: simply replace the 1D marginals by $d$-dimensional ones.) In the ISA statistical model we assume that the stacked source vector $S$ is hidden, and only i.i.d. samples from the observation $X = AS$ are available, where $A$ is an unknown invertible mixing matrix with full rank. Based on the i.i.d. observations of $X$, our task is to estimate the hidden sources and the mixing matrix $A$. Let the estimate of the sources be denoted by $\widehat S = WX$, where $W$ is the estimated demixing matrix. The goal of ISA is to find $W$ such that $WA$ is a block-permutation matrix of full rank, so that each estimated subspace recovers one source up to an invertible transformation. Following the ideas of cardoso98multidimensional, this ISA problem can be solved by first preprocessing the observed quantities with a traditional ICA algorithm, which provides an estimated separation matrix $W_{\mathrm{ICA}}$ (for simplicity we used the FastICA algorithm in our experiments (ICAbook01)), and then simply grouping the estimated ICA components into ISA subspaces by maximizing the sum of the MI within the estimated subspaces; that is, we have to find a permutation matrix $P$ which solves

\[
\max_{P} \ \sum_{i=1}^{m} \widehat I\big( Y^{i,1}, \dots, Y^{i,d} \big) , \tag{12}
\]

where $Y = P\, W_{\mathrm{ICA}} X$ and $Y^{i,j}$ denotes the $j$-th coordinate of the $i$-th estimated subspace of $Y$. We used the proposed copula-based information estimator $\widehat I_\alpha$ with $\alpha$ close to $1$ to approximate the Shannon mutual information. Our experiment shows that this ISA algorithm using the proposed MI estimator can indeed provide a good estimate of the ISA subspaces. We used a standard ISA benchmark dataset from szabo07undercomplete_TCC; we generated 2,000 i.i.d. sample points from six 3D geometric wireframe distributions, independently from each other. These sampled points can be seen in Fig. 2(a), and they represent the sources. Then we mixed these sources by a randomly chosen invertible matrix $A$. The six 3-dimensional projections of the observed quantities are shown in Fig. 2(b). Our task was to estimate the original sources using the sample of the observed quantity $X$ only. By maximizing the estimated MI objective in (12), we could recover the original subspaces, as can be seen in Fig. 2(c). The successful subspace separation is also shown in the form of a Hinton diagram, namely the product of the estimated ISA separation matrix and the mixing matrix. This product is a block-permutation matrix if and only if the subspace separation is perfect (Fig. 2(d)).

Figure 2: ISA experiment with six 3-dimensional sources: (a) original sources, (b) mixed observations, (c) estimated sources, (d) Hinton diagram of the product of the estimated separation matrix and the mixing matrix.
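For concreteness, here is a toy sketch of the grouping step in (12) (our own illustration, not the code used in the experiments; it reuses renyi_mi_knn and estimate_gamma from the snippets above and brute-forces the grouping, which is feasible only for a small number of components).

```python
from itertools import permutations

def isa_objective(Y, groups, alpha, S, gamma):
    """Sum of estimated within-subspace mutual informations, cf. (12).
    Y is an (n, m*d) array of ICA components, groups a list of index tuples."""
    return sum(renyi_mi_knn(Y[:, list(g)], alpha, S, gamma) for g in groups)

def best_grouping(Y, subspace_dim, alpha=0.99, S=(1, 2, 3)):
    """Exhaustive search over groupings of the columns of Y into subspaces
    of size subspace_dim (the number of columns must be divisible by it),
    maximizing the ISA objective."""
    m = Y.shape[1]
    gamma = estimate_gamma(subspace_dim, subspace_dim * (1 - alpha), S)[0]
    best, best_val, seen = None, -np.inf, set()
    for perm in permutations(range(m)):
        groups = tuple(tuple(sorted(perm[i:i + subspace_dim]))
                       for i in range(0, m, subspace_dim))
        key = tuple(sorted(groups))
        if key in seen:                  # skip equivalent groupings
            continue
        seen.add(key)
        val = isa_objective(Y, groups, alpha, S, gamma)
        if val > best_val:
            best, best_val = groups, val
    return best
```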

7 Further Related Works

As was pointed out earlier, in this paper we build heavily on results from the theory of Euclidean functionals (Steele1997; Redmond-Yukich1996; Koo-Lee2007). However, now we can be more precise about earlier work concerning nearest-neighbor based Euclidean functionals: The closest to our work is Section 8.3 of Yukich1998, where $p$-power weighted Euclidean functionals based on nearest-neighbor graphs were investigated under more restrictive conditions on $p$ and $S$.

Nearest-neighbor graphs were first proposed for Shannon entropy estimation by kozachenko87statistical. In that work, only the case of graphs with $S = \{1\}$ was considered. More recently, goria05new generalized this approach to $S = \{k\}$ and proved the resulting estimator's weak consistency under some conditions on the density. The estimator in that paper has a form quite similar to that of ours:

Here $\psi$ stands for the digamma function, and $e_j$ is the directed edge pointing from $X_j$ to its $k$-th nearest neighbor. Comparing this with (5), unsurprisingly, we find that the main difference is the use of the logarithm function instead of the $p$-th power, and the different normalization. As mentioned before, Leonenko-Pronzato-Savani2008 proposed an estimator that uses the graph with $S = \{k\}$ for the purpose of estimating the Rényi entropy. Their estimator takes the form

where $\Gamma$ stands for the Gamma function, $V_d$ is the volume of the $d$-dimensional unit ball, and again $e_j$ is the directed edge in the graph starting from node $X_j$ and pointing to its $k$-th nearest neighbor. Comparing this estimator with (5), it is apparent that it is (essentially) a special case of our estimator with $S = \{k\}$. From the results of Leonenko-Pronzato-Savani2008 it is obvious that the constant $\gamma$ in (5) can be found in analytical form when $S = \{k\}$. However, we warn the reader again that the proofs of the last three cited articles (kozachenko87statistical; goria05new; Leonenko-Pronzato-Savani2008) contain a few errors, just like the Wang-Kulkarni-Verdu2009 paper on KL divergence estimation from two samples. Kraskov04estimating also proposed a $k$-nearest-neighbor based estimator of the Shannon mutual information, but the theoretical properties of their estimator are unknown.

8 Conclusions and Open Problems

We have studied Rényi entropy and mutual information estimators based on generalized nearest-neighbor graphs. The estimators were shown to be strongly consistent. In addition, we derived upper bounds on their convergence rates under some technical conditions. Several open problems remain unanswered:

An important open problem is to understand how the choice of the set $S$ affects our estimators. Perhaps there exists a way to choose $S$ as a function of the sample size $n$ (and the dimension $d$) which strikes the optimal balance between the bias and the variance of our estimators.

Our method can be used for the estimation of Shannon entropy and mutual information by simply using $\alpha$ close to $1$. The open problem is to come up with a way of choosing $\alpha$, approaching $1$, as a function of the sample size $n$ (and $d$) such that the resulting estimator is consistent and converges as rapidly as possible. An alternative is to use the logarithm function in place of the power function. However, the theory would need to be changed significantly to show that the resulting estimator remains strongly consistent.

In the proof of consistency of our mutual information estimator we used the Dvoretzky-Kiefer-Wolfowitz theorem to handle the effect of the inaccuracy of the empirical copula transformation. Our particular use of the theorem seems to restrict $\alpha$ to the interval $(1/2, 1)$ and the dimension to values larger than $1$. Is there a better way to estimate the error caused by the empirical copula transformation and to prove consistency of the estimator for a larger range of $\alpha$ and $d$?

Finally, it is an important open problem to prove bounds on convergence rates for densities that have higher-order smoothness (i.e. $\beta$-Hölder smooth densities with $\beta > 1$). A related open problem, in the context of the theory of Euclidean functionals, is stated in Koo-Lee2007.

Acknowledgements

This work was supported in part by AICML, AITF (formerly iCore and AIF), NSERC, the PASCAL2 Network of Excellence under EC grant no. 216886 and by the Department of Energy under grant number DESC0002607. Cs. Szepesvári is on leave from SZTAKI, Hungary.

References

Appendix A Quasi-Additive and Very Strong Euclidean Functionals

The basic tool to prove the convergence properties of our estimators is the theory of quasi-additive Euclidean functionals developed by Yukich1998, Steele1997, Redmond-Yukich1996, Koo-Lee2007 and others. We apply this machinery to the nearest-neighbor functional $L_p$ defined in equation (3).

In particular, we use the axiomatic definition of a quasi-additive Euclidean functional from Yukich1998 and the definition of a very strong Euclidean functional from Koo-Lee2007, who add two extra axioms. We then use the results of Redmond-Yukich1996 and Koo-Lee2007 which hold for these kinds of functionals. These results determine the limit behavior of the functionals on a set of points chosen i.i.d. from an absolutely continuous distribution over $[0,1]^d$. As we show in the following sections, the nearest-neighbor functional $L_p$ defined by equation (3) is a very strong Euclidean functional, and thus both of these results apply to it.

Technically, a quasi-additive Euclidean functional is a pair of real non-negative functionals $(L(C, F), L_B(C, F))$, where $C$ is a $d$-dimensional cube and $F \subseteq C$ is a finite set of points. Here, a $d$-dimensional cube is a set of the form $C = x + [0, s]^d$, where $x \in \mathbb{R}^d$ is its "lower-left" corner and $s > 0$ is its side length. The functional $L_B$ is called the boundary functional. The common practice is to neglect $L_B$ and refer to the pair simply as $L$. We provide a boundary functional for the nearest-neighbor functional in the next section.

Definition 4 (Quasi-additive Euclidean functional).

$(L, L_B)$ is a quasi-additive Euclidean functional of power $p$ if it satisfies axioms (A1)–(A7) below.

Definition 5 (Very strong Euclidean functional).

$(L, L_B)$ is a very strong Euclidean functional of power $p$ if it satisfies axioms (A1)–(A9) below.

Axioms.

For all cubes $C$, all finite $F \subseteq C$, all $y \in \mathbb{R}^d$, and all $a > 0$,

(A1)
(A2)
(A3)
(A4)

For all $m \ge 1$ and a partition of $C$ into $m^d$ subcubes $C_1, \dots, C_{m^d}$ of side $s/m$,

(A5)

For all finite $F, F' \subseteq C$,

(A6)

For a set $X_{1:n}$ of $n$ points drawn i.i.d. from the uniform distribution over $[0,1]^d$,

(A7)
(A8)
(A9)

Axiom (A2) is translation invariance, and axiom (A3) is scaling. The first part of (A5) is subadditivity of $L$, and the second part is superadditivity of $L_B$. Axiom (A6) is smoothness, and we call (A7) quasi-additivity. Axiom (A8) is a strengthening of (A7) with an explicit rate. Axiom (A9) is the add-one bound. The axioms in Koo-Lee2007 are slightly different; however, it is routine to check that they are implied by our set of axioms.

We will use two fundamental results about Euclidean functionals. The first is (Redmond-Yukich1996, Theorem 2.2) and the second is essentially (Koo-Lee2007, Theorem 4).

Theorem 6 (Redmond-Yukich).

Let $(L, L_B)$ be a quasi-additive Euclidean functional of power $p$, $0 < p < d$. Let $X_{1:n}$ consist of $n$ points drawn i.i.d. from an absolutely continuous distribution $\mu$ over $[0,1]^d$ with common probability density function $f$. Then,

\[
\lim_{n \to \infty} \frac{L(X_{1:n})}{n^{1 - p/d}} = \gamma \int_{[0,1]^d} f^{1 - p/d}(x) \, \mathrm{d}x \qquad \text{almost surely,}
\]

where $\gamma$ is a constant depending only on the functional $L$ and the dimension $d$.

Theorem 7 (Koo-Lee).

Let $(L, L_B)$ be a very strong Euclidean functional of power $p$, $0 < p < d$. Let $X_{1:n}$ consist of $n$ points drawn i.i.d. from an absolutely continuous distribution over $[0,1]^d$ with common probability density function $f$. If $f$ is Lipschitz (recall that a function $f$ is Lipschitz if there exists a constant $C$ such that $|f(x) - f(y)| \le C \|x - y\|$ for all $x, y$ in the domain of $f$), then the normalized functional $L(X_{1:n})/n^{1-p/d}$ converges to its limit at an explicit polynomial rate in $n$, where the limit is given in Theorem 6 with the constant $\gamma$ from Theorem 6.

Theorem 7 differs from its original statement (Koo-Lee2007, Theorem 4) in two ways. First, our version is restricted to Lipschitz densities. Koo and Lee prove a generalization of Theorem 7 for $\beta$-Hölder smooth density functions. The coefficient $\beta$ then appears in the exponent of $n$ in the rate. However, their result holds only for $\beta$ in the interval $(0, 1]$, which does not make it very interesting. The case $\beta = 1$ corresponds to Lipschitz densities and is perhaps the most important in this range. Second, Theorem 7 contains a slight improvement in the rate: Koo and Lee have an extraneous factor, which we remove by "correcting" their axiom (A8).

In the next section, we prove that the nearest-neighbor functional $L_p$ defined by (3) is a very strong Euclidean functional. First, in Section B, we provide a boundary functional for $L_p$. Then, in Section C, we verify that $L_p$ and its boundary functional satisfy axioms (A1)–(A9). Once the verification is done, Theorem 1 follows from Theorem 6.

Theorem 2 will follow from Theorem 7 and a concentration result. We prove the concentration result in Section D and finish that section with the proof of Theorem 2. The proof of Theorem 3 requires more work: we need to deal with the effect of the empirical copula transformation. We handle this in Section E by employing the classical Dvoretzky-Kiefer-Wolfowitz theorem.

Appendix B The Boundary Functional

We start by constructing the nearest-neighbor boundary functional $L_p^B$. For that we will need to introduce an auxiliary graph, which we call the nearest-neighbor graph with boundary. This graph is related to $NN_S$ and will be useful later.

Let $C$ be a $d$-dimensional cube, let $V \subseteq C$ be finite, and let $S$ be a non-empty finite set of positive integers. We define the nearest-neighbor graph with boundary, $NN_S^B(C, V)$, to be a directed graph, with possibly parallel edges, on the vertex set $V \cup \partial C$, where $\partial C$ denotes the boundary of $C$. Roughly speaking, for every vertex $x \in V$ and every $i \in S$ there is an edge to its "$i$-th nearest neighbor" in $V \cup \partial C$.

Figure 3: (a) An example of a nearest-neighbor graph in two dimensions; (b) the corresponding boundary nearest-neighbor graph on the same point set.

More precisely, we define the edges from a vertex $x \in V$ as follows: Let $b$ be the boundary point closest to $x$. (If there are multiple boundary points that are closest to $x$, we choose one arbitrarily.) For each $i \in S$, if the $i$-th nearest neighbor of $x$ in $V$ is at least as close to $x$ as $b$, then the edge from $x$ to that neighbor also belongs to $NN_S^B(C, V)$; for each $i \in S$ for which this is not the case, we create in $NN_S^B(C, V)$ one copy of the edge $(x, b)$. In other words, there is a bijection between the edge sets of $NN_S(V)$ and $NN_S^B(C, V)$. An example of a graph $NN_S(V)$ and a corresponding graph $NN_S^B(C, V)$ is shown in Figure 3.

Analogously, we define $L_p^B(C, V)$ as the sum of the $p$-th powers of the Euclidean lengths of the edges of $NN_S^B(C, V)$. Formally,

\[
L_p^B(C, V) = \sum_{(x, y) \in E(NN_S^B(C, V))} \|x - y\|^p . \tag{13}
\]

We will need some basic geometric properties of $NN_S(V)$ and $NN_S^B(C, V)$. By construction, the edges of $NN_S^B(C, V)$ are no longer than the corresponding edges of $NN_S(V)$. As an immediate consequence we get the following proposition.

Proposition 8 (Upper Bound).

For any cube $C$, any $p > 0$, and any finite set $V \subseteq C$, $L_p^B(C, V) \le L_p(V)$.

Appendix C Verification of Axioms (A1)–(A9) for $L_p$

It is easy to see that the nearest-neighbor functional $L_p$ and its boundary functional $L_p^B$ satisfy axioms (A1)–(A3). Axiom (A4) is verified by Proposition 8. It thus remains to verify axioms (A5)–(A9), which we do in subsections C.1, C.2 and C.3. We start with two simple lemmas.

Lemma 9 (In-Degree).

For any finite $V \subseteq \mathbb{R}^d$, the in-degree of any vertex in $NN_S(V)$ is bounded by a constant that depends only on $d$ and $k$.

Proof.

Fix a vertex $x \in V$. We show that the in-degree of $x$ is bounded by a constant that depends only on $d$ and $k$. For any unit vector $u$ we consider the convex open cone with apex at $x$, rotationally symmetric about its axis $u$, and with angle $\pi/3$:

\[
\mathrm{Cone}(x, u) = \left\{ y \in \mathbb{R}^d \,:\, \langle y - x, u \rangle > \|y - x\| \cos(\pi/6) \right\} .
\]

As is well known, $\mathbb{R}^d$ can be written as a union of finitely many, possibly overlapping, such cones $\mathrm{Cone}(x, u_1), \dots, \mathrm{Cone}(x, u_{c_d})$, where $c_d$ depends only on the dimension $d$. We show that the in-degree of $x$ is at most $k \cdot c_d$.

Suppose, by contradiction, that the in-degree of $x$ is larger than $k \cdot c_d$. Then, by the pigeonhole principle, there is a cone containing at least $k+1$ vertices of the graph with an incoming edge to $x$. Denote these vertices $y_1, \dots, y_{k+1}$ and assume that they are indexed so that $\|x - y_1\| \le \dots \le \|x - y_{k+1}\|$.

By a simple calculation, we can verify that $\|y_i - y_{k+1}\| < \|x - y_{k+1}\|$ for all $1 \le i \le k$. Indeed, by the law of cosines,

\[
\|y_i - y_{k+1}\|^2 = \|x - y_i\|^2 + \|x - y_{k+1}\|^2 - 2 \|x - y_i\| \, \|x - y_{k+1}\| \cos \theta
< \|x - y_i\|^2 + \|x - y_{k+1}\|^2 - \|x - y_i\| \, \|x - y_{k+1}\|
\le \|x - y_{k+1}\|^2 ,
\]

where the sharp inequality follows from the fact that $y_i$ and $y_{k+1}$ lie in the same cone, so the angle $\theta$ between the vectors $y_i - x$ and $y_{k+1} - x$ is strictly less than $\pi/3$ and hence $\cos\theta > 1/2$, and the second inequality follows from $\|x - y_i\| \le \|x - y_{k+1}\|$. Thus, $x$ cannot be among the $k$ nearest neighbors of $y_{k+1}$, which contradicts the existence of the edge from $y_{k+1}$ to $x$. ∎

Lemma 10 (Growth Bound).

For any $p > 0$ and any finite $V \subseteq [0,1]^d$, $L_p(V) \le c \, \max\{|V|^{1 - p/d}, 1\}$, where $c$ is a constant depending only on $d$, $p$ and $S$.

Proof.

An elegant way to prove the lemma is with the use of space-filling curves. (There is an elementary proof, too, based on a discretization argument; however, that proof introduces an extraneous logarithmic factor in some cases.) Since Peano1890 and Hilbert1891, it has been known that there exists a continuous function $\psi$ from the unit interval $[0,1]$ onto the cube $[0,1]^d$ (i.e. a surjection). For obvious reasons, $\psi$ is called a space-filling curve. Moreover, there are space-filling curves which are $1/d$-Hölder; see Milne1980. In other words, we can assume that there exists a constant $c_\psi$ such that

\[
\|\psi(s) - \psi(t)\| \le c_\psi \, |s - t|^{1/d} \qquad \text{for all } s, t \in [0,1]. \tag{14}
\]

Since $\psi$ is a surjective function, we can consider a right inverse, i.e. a function $\varphi : [0,1]^d \to [0,1]$ such that $\psi(\varphi(x)) = x$ for all $x \in [0,1]^d$, and we let $t_v = \varphi(v)$ for each $v \in V$. Let $t_{(1)} \le t_{(2)} \le \dots \le t_{(n)}$ be the points $\{ t_v : v \in V \}$ sorted in increasing order. We construct a "nearest neighbor" graph $H$ on $\{ t_{(1)}, \dots, t_{(n)} \}$: for every $j$ and every $i \in \{1, \dots, k\}$ we create a directed edge $(t_{(j)}, t_{(j+i)})$, where the addition in the index is taken modulo $n$. It is not hard to see that the total length of the edges of $H$ satisfies

\[
\sum_{(s, t) \in E(H)} |s - t| \le 2 \sum_{i=1}^{k} i \le 2 k^2 . \tag{15}
\]

To see more clearly why (15) holds, note that, for a fixed $i$, every line segment $[t_{(j)}, t_{(j+1)}]$ belongs to at most $i$ of the non-wrap-around edges, the total length of these line segments is at most $1$, and each of the at most $i$ wrap-around edges has length at most $1$.

Let $G$ be a graph on $V$ isomorphic to $H$, where for each edge $(t_u, t_v)$ of $H$ there is a corresponding edge $(u, v)$ of $G$. By the construction of $G$, every vertex has $k$ out-neighbors, so for every $i \in S$ the distance from a vertex to its $i$-th nearest neighbor in $V$ is at most the $i$-th smallest length among that vertex's out-edges in $G$, and therefore

\[
L_p(V) \le \sum_{(u, v) \in E(G)} \|u - v\|^p . \tag{16}
\]

The Hölder property (14) of $\psi$ implies that

\[
\sum_{(u, v) \in E(G)} \|u - v\|^p \le c_\psi^p \sum_{(s, t) \in E(H)} |s - t|^{p/d} . \tag{17}
\]

If $p \ge d$, then $|s - t|^{p/d} \le |s - t|$ for every edge, since $p/d \ge 1$ and $|s - t| \le 1$, and thus

\[
\sum_{(s, t) \in E(H)} |s - t|^{p/d} \le \sum_{(s, t) \in E(H)} |s - t| .
\]

Chaining the last inequality with (16), (17) and (15), we obtain that $L_p(V) \le 2 c_\psi^p k^2$ for $p \ge d$.

If $p < d$, we use the inequality between the arithmetic mean and the $(p/d)$-power mean. It states that for positive numbers $a_1, \dots, a_m$ and $0 < q < 1$,

\[
\sum_{j=1}^{m} a_j^q \le m^{1 - q} \left( \sum_{j=1}^{m} a_j \right)^{q} .
\]

In our case