Low Dimensional Euclidean Volume Preserving Embeddings


Abstract

Let P be an n-point subset of Euclidean space and let d be a positive integer. In this paper we study the following question: what is the smallest (normalized) relative change of the volume of subsets of P when P is projected into R^d? We prove that there exists a linear mapping into R^d that preserves the (normalized) volume of all subsets of P up to a given size, within a multiplicative factor that is quantified in Theorem 1.

Keywords: Volume, Embeddings, Dimensionality Reduction, Discrete Geometry, Distortion

1 Introduction

A classical result of Johnson and Lindenstrauss [JL84] states that any n-point subset of Euclidean space can be projected into O(ε⁻² log n) dimensions while preserving its metric structure up to a factor of 1 + ε. A natural question to pose is the following: what is the smallest distortion of an n-point subset of Euclidean space when it is projected into a fixed number d of dimensions? This problem was first studied by Matoušek [Mat90], who proved an upper bound on the distortion by projecting the points onto a random d-dimensional subspace. In Section 3 we re-prove Matoušek's result using the simplified analysis of [DG03, IM98], adapted to this setting, i.e., bounding the distortion for a fixed target dimension instead of bounding the target dimension for a fixed distortion. Although this simplified proof is well known and well understood, we hope that it is not redundant and that it helps the reader digest the following theorem.

Theorem 1.

Let P be an n-point subset of Euclidean space and let k ≤ d. Then there is a linear mapping f into R^d that changes the (normalized) volume of every subset S ⊆ P of size at most k by at most a multiplicative factor depending only on n, k and d,

where the constants involved in this factor are absolute, and the volume of a subset S is understood as the volume of its convex hull (of the appropriate dimension).

Remark: The case where we fix the relative change of the volume of subsets to be arbitrarily close to one, and ask for the minimum dimension of such a mapping, was studied in [MZ08].

Notice that if we only require pairwise distances to be preserved, the best known upper bound is the one proved in Section 3; therefore our result can be thought of as a generalization of distance-preserving embeddings, since it also guarantees distance preservation. Moreover, there exist n-point subsets of Euclidean space for which any embedding into R^d incurs distortion that grows as a fixed power of n (depending on d) [Mat90], and thus the above worst-case upper bound cannot be improved by much.

2 Preliminaries and Technical Lemmas

We start by defining a (stochastic) ordering between two random variables X and Y, but first let us motivate this definition. Assume that we have upper and lower bounds on the distribution function of Y, and also assume that it is hard to give precise bounds on the distribution function of X. Using this notion of ordering, if X is "smaller than" Y, then we can bound the "complicated" variable X through bounds on the "easy" variable Y. We use this notion extensively in this paper.

More formally, let X and Y be two random variables, not necessarily defined on the same probability space. The random variable X is stochastically smaller than the random variable Y when, for every t ∈ R, the inequality

Pr[X ≥ t] ≤ Pr[Y ≥ t]    (1)

holds. We denote this by X ⪯ Y.
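As a quick numerical illustration of this ordering (a minimal Python sketch; the use of scipy and the particular degrees of freedom are choices made for illustration, not part of the argument), one can check that a Chi-square variable with more degrees of freedom stochastically dominates one with fewer, which is the kind of comparison used later on:

    # Numeric check of the stochastic ordering X <= Y: Pr[X >= t] <= Pr[Y >= t] for every t.
    import numpy as np
    from scipy.stats import chi2

    d_small, d_large = 5, 9                      # degrees of freedom (illustrative choice)
    t = np.linspace(0.0, 50.0, 1001)             # grid of thresholds

    # sf(t) = Pr[X >= t]; the ordering holds iff the survival functions are ordered.
    assert np.all(chi2.sf(t, d_small) <= chi2.sf(t, d_large) + 1e-12)
    print("chi-square with", d_small, "d.o.f. is stochastically smaller than with", d_large)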

Next we recall known facts about the Chi-square distribution and also give bounds on its cumulative distribution function. If X₁, …, X_d are independent, identically distributed standard normal random variables, then the random variable Q = X₁² + ⋯ + X_d² is a Chi-square random variable with d degrees of freedom. Notice that the expected value of Q is d. It is well known ([Fel71], Chapter II) that the Chi-square distribution is a special case of the Gamma distribution and that its cumulative distribution function is given by

Pr[Q ≤ x] = γ(d/2, x/2) / Γ(d/2) = 1 − Γ(d/2, x/2) / Γ(d/2),    (2)

where Γ(·) is the Gamma function, and γ(·, ·) and Γ(·, ·) are the lower and upper incomplete Gamma functions, respectively. Next we present some bounds on the Gamma and incomplete Gamma functions that we use in Sections 3 and 4. We start with the following bound on the Gamma function; see for instance [CD05] and [WW63].

Lemma 1 (Stirling Bound on Gamma Function).

For every x > 0,

√(2π) · x^(x − 1/2) · e^(−x) ≤ Γ(x) ≤ √(2π) · x^(x − 1/2) · e^(−x) · e^(1/(12x)).    (3)

Next we upper bound the lower incomplete Gamma function γ(a, x) = ∫₀^x t^(a−1) e^(−t) dt. Note that e^(−t) ≤ 1 for t ≥ 0, hence

γ(a, x) ≤ ∫₀^x t^(a−1) dt = x^a / a.    (4)
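As a numerical sanity check of Equation (2) and of the bound (4) (a minimal Python sketch; the scipy dependency and the particular values of d and x are illustrative assumptions, not part of the argument):

    # Check Equation (2): the Chi-square CDF equals the regularized lower incomplete Gamma
    # function, and check the elementary bound (4), gamma(a, x) <= x^a / a, on the lower tail.
    import numpy as np
    from scipy.stats import chi2
    from scipy.special import gammainc, gamma   # gammainc(a, x) = lower incomplete / Gamma(a)

    d = 7
    a = d / 2.0
    x = np.linspace(0.01, 40.0, 500)

    # Equation (2): Pr[Q <= x] = gamma(d/2, x/2) / Gamma(d/2).
    assert np.allclose(chi2.cdf(x, d), gammainc(a, x / 2.0))

    # Bound (4): gamma(a, y) <= y^a / a, hence Pr[Q <= x] <= (x/2)^a / (a * Gamma(a)).
    assert np.all(chi2.cdf(x, d) <= (x / 2.0) ** a / (a * gamma(a)) + 1e-12)

    # The mean of a Chi-square with d degrees of freedom is d.
    rng = np.random.default_rng(0)
    q = (rng.standard_normal((100_000, d)) ** 2).sum(axis=1)
    print("empirical mean of Q:", q.mean(), "expected:", d)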

Now for the upper incomplete gamma, we have the following bound.

Lemma 2.

If x and a lie in the range needed in Sections 3 and 4, then

(5)
Proof.

This follows from the corresponding lemma of [CD05] by an appropriate choice of parameters. ∎

It is well known ([FB95]) that the volume spanned by the convex hull of a k-point subset of R^k together with the origin equals |det(A)| / k!, where A is the k × k matrix that contains the points as columns. The following lemma gives a connection between the volume of the convex hull of a set of points and the determinant of a specific matrix constructed from these points.

Lemma 3.

Let S be a subset of R^m in general position, let f : R^m → R^d be a linear mapping, and let A be the matrix whose columns are the points of S after translating one of them to the origin. Then

(6)

where T is the d × m matrix that corresponds to f.

Proof.

By a translation of the point set S, i.e., identifying one of its points with the origin, we may assume that S contains the origin; this changes neither vol(S) nor vol(f(S)), since volume is translation invariant and f is linear. Since S is in general position, the determinant formula stated before the lemma applies to both the original and the projected points, and the claimed identity follows. ∎
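To make the determinant formula concrete, here is a small numerical sketch (Python with numpy; the Gram-determinant form sqrt(det(AᵀA))/k! used below is the standard extension of |det(A)|/k! to points living in a higher-dimensional ambient space, and the specific dimensions are arbitrary choices):

    import numpy as np
    from math import factorial

    def simplex_volume(A):
        """A has shape (m, k): k points in R^m as columns; volume of conv(0, columns of A)."""
        k = A.shape[1]
        gram = A.T @ A
        return np.sqrt(np.linalg.det(gram)) / factorial(k)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))             # 3 points in R^3 (plus the origin)
    print(simplex_volume(A), abs(np.linalg.det(A)) / factorial(3))   # the two values agree

    # The Gram form is invariant under orthogonal maps, as a volume should be.
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    print(np.isclose(simplex_volume(Q @ A), simplex_volume(A)))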

Now, let us consider the above lemma in the setting where f is a random linear mapping. More specifically, let the matrix T of f be a d × m Gaussian matrix, i.e., a matrix whose entries are i.i.d. Gaussian N(0, 1) random variables. First observe that the ratio of the volumes vol(f(S)) / vol(S) is now a random variable. Surprisingly enough, as the following lemma states, the distribution of this ratio does not depend on the particular point set S, only on its size and on d. This can be thought of as a generalization of the 2-stability property of inner products with Gaussian random vectors to matrix multiplication with Gaussian matrices.

Lemma 4.

Let S be a subset of R^m in general position, and let f be a random Gaussian linear mapping into R^d. Then

(7)
Proof.

This is a simple consequence of the corresponding lemma of [MZ08] and the lemma above. ∎

Remark 1.

For subsets of size two, Lemma 4 states that the squared distance ratio ‖f(x) − f(y)‖² / ‖x − y‖² is distributed as a Chi-square random variable with d degrees of freedom (up to normalization).
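The independence from the point set asserted by Lemma 4 can also be observed empirically. The following Monte Carlo sketch (Python; the point sets, dimensions and trial count are arbitrary choices) compares the squared volume-ratio distribution for two very different point sets; classically, this common distribution is a product of independent Chi-square variables with d, d − 1, …, d − k + 1 degrees of freedom, consistent with the product form in Equation (7):

    # For a d x m Gaussian matrix G and a fixed full-rank m x k matrix A (translated points
    # as columns), the squared volume ratio det((GA)^T GA) / det(A^T A) has a distribution
    # that does not depend on A.
    import numpy as np

    rng = np.random.default_rng(2)
    m, d, k, trials = 20, 8, 3, 20_000

    def squared_volume_ratios(A):
        out = np.empty(trials)
        for t in range(trials):
            G = rng.standard_normal((d, m))
            GA = G @ A
            out[t] = np.linalg.det(GA.T @ GA) / np.linalg.det(A.T @ A)
        return out

    A1 = rng.standard_normal((m, k))              # an arbitrary point set
    A2 = 10.0 * rng.standard_normal((m, k)) + 5.0 # a very different one
    r1, r2 = squared_volume_ratios(A1), squared_volume_ratios(A2)

    # A few empirical quantiles of the two ratio distributions; they agree up to Monte Carlo noise.
    qs = [0.1, 0.5, 0.9]
    print(np.quantile(r1, qs))
    print(np.quantile(r2, qs))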

Equation (7) gives the distribution of the volume ratio as a product of independent random variables. However, in general it is difficult to deal with such a product directly, and so we employ the following theorem, which sandwiches this product between single Chi-square distributions.

Theorem 2 (Theorem 4, [Gor89]).

Let X₁, …, X_k be independent Chi-square random variables, where X_i has d − i + 1 degrees of freedom. Then the following holds for every t > 0:

(8)
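To get a feel for this sandwich, the following Monte Carlo sketch compares the lower tail of the normalized product of Chi-squares with the lower tails of two single Chi-square variables (a rough numerical illustration only; the exact form of Gordon's inequality is the one in [Gor89], and the parameters below are arbitrary choices):

    # Tail probabilities of the normalized product of independent chi-squares with
    # d, d-1, ..., d-k+1 degrees of freedom, next to the tails of single chi-squares.
    # The exact form of Gordon's bound is in [Gor89]; this only eyeballs the comparison.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(3)
    d, k, trials = 10, 3, 200_000

    dofs = np.arange(d, d - k, -1)                       # d, d-1, ..., d-k+1
    prod = np.prod(chi2.rvs(dofs, size=(trials, k), random_state=rng), axis=1)
    geo = (prod / np.prod(dofs)) ** (1.0 / k)            # normalized geometric mean

    for t in (0.5, 0.8, 1.2):
        print(t,
              np.mean(geo <= t),                         # empirical lower tail of the product
              chi2.cdf(t * d, d),                        # lower tail of chi2_d / d
              chi2.cdf(t * (d - k + 1), d - k + 1))      # lower tail of chi2_{d-k+1} / (d-k+1)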

We now have enough tools at our disposal to prove Theorem 1.

3 Distance Distortion

In this section we prove the following

Theorem 3.

Let P be an n-point subset of Euclidean space and let d be a fixed positive integer, at least a suitable constant. Then there exists a linear mapping f into R^d with (distance) distortion O(n^(2/d) · √(log n)); i.e., there exists an absolute constant C such that every pairwise distance in P is preserved up to a factor of C · n^(2/d) · √(log n).

Proof.

We argue as in [Mat90]. Consider the random linear map f(x) = (1/√d) · T x, where T is a d × m random Gaussian matrix. Using the linearity of f and Remark 1, it follows that E[‖f(x) − f(y)‖²] = ‖x − y‖² for every pair x, y ∈ P. Our goal is to show that each such squared ratio is sufficiently concentrated. More specifically, it suffices to show that, with constant probability, none of the ratios ‖f(x) − f(y)‖² / ‖x − y‖² falls outside a suitable interval; this amounts to upper bounding a lower-tail and an upper-tail probability of the Chi-square distribution with d degrees of freedom.

The elements of P determine at most n(n − 1)/2 distinct direction vectors. Applying a union bound over all pairs of points of P gives that, if

(9)

then there exists a realization of f that neither expands nor contracts any distance in P by more than the allowed factors, and the claimed distortion follows. Our goal therefore is to choose the two tail thresholds, as functions of n and d, so that Inequality (9) holds. To do so, we first bound the Gamma function from below using Lemma 1; this will be used later. We then bound the two tail probabilities separately. For the contraction side, we seek a threshold below which the lower-tail probability is small enough; using Equation (4) and the previous analysis, the required inequality holds for all n once the threshold is set to a suitable function of n and d involving an absolute constant. For the expansion side, we similarly seek a threshold above which the upper-tail probability is small enough. Using Lemma 2, and assuming for the moment a restriction on d that we will justify shortly, we obtain an upper bound on this tail probability whose exponent we now analyze.

It suffices to show that the exponent of this bound is negative for all large enough n.

Note that if d exceeds a suitable threshold in terms of n, the required bound holds trivially; thus we may assume that d is below this threshold, since establishing the bound in that range establishes it for every fixed d. With this restriction, and choosing the expansion threshold as an appropriate function of n and d, a direct calculation shows that the exponent is indeed negative, as desired. Hence we can choose the two thresholds, as functions of n and d, such that Inequality (9) holds. ∎
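The construction in this proof is easy to simulate. The following sketch (Python; the point set, its size and the target dimension are arbitrary choices, and the 1/√d scaling is the normalization used above) projects a point set with a scaled Gaussian matrix and reports the worst-case pairwise distortion:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)
    n, m, d = 200, 100, 6
    P = rng.standard_normal((n, m))               # an arbitrary n-point set in R^m
    T = rng.standard_normal((d, m)) / np.sqrt(d)  # scaled Gaussian projection

    ratios = []
    for i, j in combinations(range(n), 2):
        ratios.append(np.linalg.norm(T @ (P[i] - P[j])) / np.linalg.norm(P[i] - P[j]))
    ratios = np.array(ratios)

    # distortion = (largest expansion) * (largest contraction)
    print("worst-case pairwise distortion:", ratios.max() / ratios.min())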

4 Proof of Main Theorem

Our goal is to find a mapping f such that

(10)

where the factor appearing in (10) quantifies the volume distortion of the mapping; the analysis below shows how it can be chosen as a function of n, k and d. We can assume w.l.o.g. that the input points are in general position, i.e., that every subset under consideration is affinely independent; otherwise both the original and the projected points of such a subset span zero volume.

As in Section 3, we take a random map f given by a Gaussian random matrix and show that it satisfies (10) with constant probability. To do so, we first bound the probability that the volume of a fixed subset "contracts" by more than a given factor.

Lemma 5.

Fix any subset S ⊆ P of size s, with 2 ≤ s ≤ k. Then the probability that f contracts the (normalized) volume of S by more than the factor allowed in (10) is bounded by a lower-tail probability of a single Chi-square random variable; the precise bound is derived in the proof below.

Proof.

Using Lemma 4, the probability in question equals the lower-tail probability of a product of independent Chi-square random variables. Using Theorem 2, i.e., the stochastic ordering stated there, we may replace this product by a single Chi-square variable for every threshold in the relevant range. We are then left with a single Chi-square random variable, whose lower tail we bound exactly as in Section 3, using Lemma 1 and Equation (4). The claimed bound follows. ∎

Similarly, we bound the probability that the volume of a fixed subset "expands" by more than a given factor.

Lemma 6.

Fix any subset S ⊆ P of size s, with 2 ≤ s ≤ k. If d satisfies the mild restriction discussed below, then the probability that f expands the (normalized) volume of S by more than the factor allowed in (10) is bounded by an upper-tail probability of a single Chi-square random variable; the precise bound is derived in the proof below.

Proof.

As in the previous lemma, the probability in question equals the upper-tail probability of a product of independent Chi-square random variables, and again using Theorem 2 we may replace the product by a single Chi-square variable. Using Lemmas 1 and 2, the claimed bound follows. ∎

Notice that a Chi-square random variable with more degrees of freedom stochastically dominates one with fewer degrees of freedom; this lets us compare tails across the different subset sizes. Now we are ready to apply the union bound. Our goal is to find a contraction factor such that, with constant probability, our embedding does not contract the volume of any subset of size up to k by more than that factor.

Union bounding over all sets of a fixed size s, for 2 ≤ s ≤ k, we want to choose the contraction factor so that the contraction probability of a single subset, multiplied by the number of subsets of size s, is sufficiently small.

Summing over all subset sizes s = 2, …, k then gives a total failure probability bounded by a small constant. It suffices to show that the relevant exponent is negative for all large enough n and every admissible s and d, or equivalently that the following quantity is negative:

Let us group the terms of this quantity and bound them individually. Using the admissible ranges of s, k and d, the lower-order terms are easily controlled, so it remains to handle the dominant term.

Set the contraction factor to be a suitable function of n, k and d, involving a parameter to be specified shortly and a sufficiently small positive constant. Recall that we want the inequality above to hold for every admissible s. The constant can be chosen small enough to absorb the lower-order term, so let us focus on the dominant term; it follows that the quantity above is negative provided a certain relation between k, d and n holds. To verify this relation, consider the corresponding expression as a function of s: a simple calculation (details omitted) shows that it is convex in s on the relevant domain and increasing in the relevant range for any fixed n and d. By convexity in s, it therefore suffices to check the two extreme values of s, and the required inequality follows for all intermediate values.

The above analysis yields a bound on the parameter k, i.e., on the maximum size of the subsets that we can consider: k must be at most a quantity determined by n and d.

To sum up, we have proved that if k is within this admissible range, then with constant probability our embedding does not contract the normalized volumes of subsets of size at most k by more than the claimed multiplicative factor.

Next, our goal is to find an expansion factor such that, with constant probability, f does not expand volumes by more than that factor. We apply the union bound over all sets of a fixed size s, together with Lemma 6, assuming for the moment that the restriction on d needed there holds. We want to choose the expansion factor so that the per-size failure probability is sufficiently small.

Summing over all subset sizes, we obtain the desired property with constant probability.

It suffices to show that the relevant exponent is negative for every admissible s and d. As in Section 3, we can assume without loss of generality that d lies in the restricted range, using the fact that outside this range the required bound holds trivially.

Now, since there are at most n^s subsets of size s, it suffices to show that the following quantity is negative:

Note that in the last quantity the positive terms and the negative terms have different orders of magnitude in n and d. It is not hard to see that by choosing the expansion factor with a sufficiently large constant, the negative terms dominate and the whole quantity tends to minus infinity as n grows, for every admissible s.

To sum up, we have proved that, with constant probability, f does not expand the normalized volumes of subsets of size at most k by more than the claimed multiplicative factor.

Rescaling f appropriately, we conclude that there exists a linear mapping into R^d that satisfies the volume-distortion guarantee of Theorem 1 for all subsets of size at most k.

This concludes the proof of Theorem 1.
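For completeness, the quantity controlled by Theorem 1 can also be examined experimentally. The sketch below (Python; the point set and the parameters n, m, d, k are arbitrary, and "normalized" volume change is interpreted here as the (|S| − 1)-th root of the volume ratio, an assumption made in the spirit of the normalization in [MZ08]) projects a point set with a scaled Gaussian matrix and records the extreme normalized volume changes over all subsets of size at most k:

    import numpy as np
    from itertools import combinations
    from math import factorial

    def simplex_volume(points):
        """s points given as rows; returns the (s-1)-dimensional volume of their convex hull."""
        E = (points[1:] - points[0]).T            # edge vectors, translated to the origin
        return np.sqrt(max(np.linalg.det(E.T @ E), 0.0)) / factorial(E.shape[1])

    rng = np.random.default_rng(6)
    n, m, d, k = 30, 40, 10, 3
    P = rng.standard_normal((n, m))               # arbitrary point set
    T = rng.standard_normal((d, m)) / np.sqrt(d)  # scaled Gaussian projection
    Q = P @ T.T                                   # projected points

    ratios = []
    for s in range(2, k + 1):                     # subsets of size 2, ..., k
        for idx in combinations(range(n), s):
            v = simplex_volume(P[list(idx)])
            w = simplex_volume(Q[list(idx)])
            ratios.append((w / v) ** (1.0 / (s - 1)))   # per-dimension ("normalized") change
    print("min / max normalized volume change:", min(ratios), max(ratios))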


References

  1. Z. Chen and J. J. Dongarra. Condition numbers of Gaussian random matrices. SIAM Journal on Matrix Analysis and Applications, 27(3):603–620, 2005.
  2. S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms, 22(1):60–65, 2003.
  3. J. B. Fraleigh and R. A. Beauregard. Linear Algebra. Addison-Wesley, third edition, 1995.
  4. W. Feller. An Introduction to Probability Theory and Its Applications, volume II. Wiley, New York, 1971.
  5. L. Gordon. Bounds for the distribution of the generalized variance. The Annals of Statistics, 17(4):1684–1692, 1989.
  6. P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In STOC '98: Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
  7. W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in Modern Analysis and Probability, pages 189–206. Amer. Math. Soc., Providence, RI, 1984.
  8. J. Matoušek. Bi-Lipschitz embeddings into low-dimensional Euclidean spaces. Comment. Math. Univ. Carolinae, 31:589–600, 1990.
  9. A. Magen and A. Zouzias. Near optimal dimensionality reductions that preserve volumes. In APPROX-RANDOM, volume 5171 of Lecture Notes in Computer Science, pages 523–534. Springer, 2008.
  10. E. T. Whittaker and G. N. Watson. A Course of Modern Analysis. Cambridge University Press, fourth edition, 1963.