Incidence Theorems and Their Applications
Abstract
We survey recent (and not so recent) results concerning arrangements of lines, points and other geometric objects and the applications these results have in theoretical computer science and combinatorics. The three main types of problems we will discuss are:
Counting incidences: Given a set (or several sets) of geometric objects (lines, points, etc.), what is the maximum number of incidences (or intersections) that can exist between elements in different sets? We will see several results of this type, such as the Szemerédi–Trotter theorem, over the reals and over finite fields, and discuss their applications in combinatorics (e.g., in the recent solution of Guth and Katz to Erdős' distance problem) and in computer science (in explicit constructions of multi-source extractors).
Kakeya-type problems: These problems deal with arrangements of lines that point in different directions. The goal is to understand to what extent these lines can overlap one another. We will discuss these questions both over the reals and over finite fields and see how they come up in the theory of randomness extractors.
Sylvester–Gallai-type problems: In these problems, one is presented with a configuration of points that contains many 'local' dependencies (e.g., three points on a line) and is asked to derive a bound on the dimension of the span of all points. We will discuss several recent results of this type, over various fields, and see their connection to the theory of locally correctable error-correcting codes.
Throughout the different parts of the survey, two types of techniques will make frequent appearances. One is the polynomial method, which uses polynomial interpolation to impose an algebraic structure on the problem at hand. The other recurrent techniques come from the area of additive combinatorics.
Chapter 1 Overview
Consider a finite set of points, $P$, in some vector space and another finite set, $L$, of lines. An incidence is a pair $(p,\ell) \in P \times L$ such that $p \in \ell$. There are many types of questions one can ask about the set of incidences and many different conditions one can impose on the corresponding set of points and lines. For example, the Szemerédi–Trotter theorem (which will be discussed at length below) gives an upper bound on the number of possible incidences. More generally, in this survey we will be interested in a variety of problems and theorems relating to arrangements of lines and points and the surprising applications these theorems have in theoretical computer science and in combinatorics. The term 'incidence theorems' is used in a very broad sense and might include results that could fall under other categories. We will study questions about incidences between lines and points, lines and lines (where an incidence is a pair of intersecting lines), circles and points, and more.
Some of the results we will cover have direct and powerful applications to problems in theoretical computer science and combinatorics. One example in combinatorics is the recent solution of Erdős' distance problem by Guth and Katz [GK10b]. The problem is to lower bound the number of distinct distances determined by a set of $n$ points in the real plane, and the solution (which is optimal up to logarithmic factors) uses a clever reduction to a problem on counting incidences of lines [ES10].
In theoretical computer science, incidence theorems (mainly over finite fields) have been used in recent years to construct extractors, which are procedures that transform weak sources of randomness (that is, distributions that have some amount of randomness but are not completely uniform) into completely uniform random bits. Extractors have many theoretical applications, ranging from cryptography to data structures to metric embeddings (to name just a few), and the current state-of-the-art constructions all use incidence theorems in one way or another. The need to understand incidences comes from trying to analyze simple-looking constructions that use basic algebraic operations. For example, how 'random' is $a \cdot b + c$, when $a, b, c$ are three independent random variables, each distributed uniformly over a large subset of $\mathbb{F}_p$?
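As a toy illustration of the kind of question being asked (with hypothetical small parameters, not the actual constructions from the literature), one can measure empirically how far $a \cdot b + c$ is from uniform when $a, b, c$ are each uniform over a small interval in $\mathbb{F}_p$:

```python
# Illustrative simulation (not an actual extractor construction): measure how
# close a*b + c mod p is to uniform when a, b, c are drawn uniformly from a
# small structured subset A of F_p.
p = 101
A = list(range(20))  # a small "structured" source: an interval in F_p

counts = [0] * p
for a in A:
    for b in A:
        for c in A:
            counts[(a * b + c) % p] += 1

total = len(A) ** 3
# statistical (total variation) distance from the uniform distribution on F_p
tv = 0.5 * sum(abs(cnt / total - 1 / p) for cnt in counts)
print(f"TV distance from uniform: {tv:.4f}")
```

Making such heuristics precise, for worst-case (rather than interval) sources, is exactly where the incidence theorems below enter.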
We will see incidence problems over finite fields, over the reals, in low dimension and in high dimension. These changes in field/dimension are pretty drastic and, as a consequence, the ideas appearing in the proofs will be quite diverse. However, two main techniques will make frequent appearances. One is the 'polynomial method', which uses polynomial interpolation to 'force' an algebraic structure on the problem. The other recurrent techniques come from additive combinatorics. These are general tools to argue about sets in Abelian groups and the way they behave under certain group operations. These two techniques are surprisingly flexible and can be applied in many different scenarios and over different fields.
The survey is divided into four chapters, following this overview chapter. The first chapter will be devoted to problems of counting incidences over the real numbers (Szemerédi–Trotter and others) and will contain applications mostly from combinatorics (including the Guth–Katz solution to Erdős' distance problem). The second chapter will be devoted to the Szemerédi–Trotter theorem over finite fields and its applications to the explicit constructions of multi-source extractors. The third chapter will be devoted to Kakeya-type problems, which deal with arrangements of lines pointing in different directions (over finite and infinite fields). The applications in this chapter will be to the construction of another variant of extractors – seeded extractors. The fourth and final chapter will deal with arrangements of points with many collinear triples. These are related to questions in theoretical computer science having to do with locally correctable error-correcting codes. More details and definitions relating to each of the aforementioned chapters are given in the next four subsections of this overview, which serves as a road map to the various sections.
This survey is aimed at both mathematicians and computer scientists and could serve as a basis for a one semester course. Ideally, each chapter should be read from start to finish (the different chapters are mostly independent of each other). We only assume familiarity with undergraduate level algebra, including the basics of finite fields and polynomials.
Notations:
We will use $\lesssim$ and $\gtrsim$ to denote (in)equality up to multiplicative absolute constants. That is, $X \lesssim Y$ means 'there exists an absolute constant $C$ such that $X \le C \cdot Y$'. In some places, we opt to use instead the computer science notations of $O(\cdot)$, $\Omega(\cdot)$ and $\Theta(\cdot)$ to make some expressions more readable. So $X = O(Y)$ is the same as $X \lesssim Y$, $X = \Omega(Y)$ is the same as $X \gtrsim Y$, and $X = \Theta(Y)$ is the same as $X \lesssim Y$ and $Y \lesssim X$. This allows us to write, for example, $X \ge \Omega(Y)$ to mean that there exists an absolute constant $C$ such that $X \ge C \cdot Y$.
Sources:
Aside from research papers, there were two main sources used in the preparation of this survey. The first is a sequence of posts on Terry Tao's blog which cover a large portion of Chapter 2 (see e.g. [Tao09]). Ben Green's lecture notes on additive combinatorics [Gre09] were the main source in preparing Chapter 3. Both of these sources were indispensable in preparing this survey and I am grateful to both authors.
Chapter 2: Counting incidences over the reals
Let $P$ be a finite set of points and $L$ a finite set of lines in $\mathbb{R}^2$. Let
$$I(P,L) = \{(p,\ell) \in P \times L \mid p \in \ell\}$$
denote the set of incidences between $P$ and $L$. A basic question we will ask is how big $|I(P,L)|$ can be. The Szemerédi–Trotter (ST) theorem [ST83] gives the (tight) upper bound of
$$|I(P,L)| \lesssim |P|^{2/3}|L|^{2/3} + |P| + |L|.$$
We begin this chapter in Section 2.1 with two different proofs of this theorem. The first proof, presented in Section 2.1.1, is due to Tao [Tao09] (based on [CEG90b] and similar to the original proof of [ST83]) and uses the method of cell partitions. The idea is to partition the two-dimensional plane into cells, each containing a bounded number of points/lines, and to argue about each cell separately. This uses the special 'ordered' structure of the real numbers (this proof strategy is also the only one that generalizes to the complex numbers [Tot03]). The second proof, presented in Section 2.1.2, is due to Szekely [Szé97] and uses the crossing number inequality for planar drawings of graphs; it is perhaps the most elegant proof known for this theorem. This proof can also be adapted easily to handle intersections of more complex objects such as curves. We continue in Section 2.2 with some simple applications of the ST theorem to geometric and algebraic problems. These include proving sum-product estimates and counting distances between sets of points.
Sections 2.3 to 2.6 are devoted to the proof of the Guth–Katz theorem on Erdős' distance counting problem. This theorem, obtained in [GK10b], says that a set of $n$ points in the real plane determines at least $\Omega(n/\log n)$ distinct distances. This gives an almost complete answer to an old question of Erdős (the upper bound, given by the integer grid, has a factor of $\sqrt{\log n}$ instead of $\log n$). The tools used in the proof are developed over several sections, which also contain other related results.
In Section 2.3 we discuss the Elekes–Sharir framework [ES10], which reduces distance counting to a question about incidences of a specific family of lines in $\mathbb{R}^3$, much in the spirit of the ST theorem. Sections 2.4 and 2.5 introduce the two main techniques used in the proof of the Guth–Katz theorem. In Section 2.4 we introduce for the first time one of the main characters of this survey – the polynomial method. As a first example of the power of this method, we show how it can be used to give a solution to another beautiful geometric conjecture – the joints conjecture [GK10a]. Here, we have a set of lines in $\mathbb{R}^3$ and want to upper bound the number of joints, or non-coplanar intersections of three lines or more. In Section 2.5 we introduce the second ingredient in the Guth–Katz theorem – the polynomial Ham-Sandwich theorem. This technique, introduced by Guth in [Gut08], combines the polynomial method with the method of cell partitions. As an example of how this theorem is used, we give a third proof of the ST theorem, which was discovered recently [KMS11].
Section 2.6 contains a relatively detailed sketch of the proof of the Guth–Katz theorem (omitting some of the more technical algebraic parts). The main result proved in this section is an incidence theorem upper bounding the number of pairwise intersections in a set of $N$ lines in $\mathbb{R}^3$. If we don't assume anything, $N$ lines can have $\Theta(N^2)$ intersections (an intersection here is a pair of lines that intersect). An example is a set of $N/2$ horizontal lines and $N/2$ vertical lines, all lying in the same plane. If we assume, however, that the lines are 'truly' in 3 dimensions, in the sense that no large subset of them lies in two dimensions, we can get a better (and tight) bound of roughly $N^{3/2}$, up to logarithmic factors. This theorem then implies the bound on distinct distances using the Elekes–Sharir framework.
Chapter 3: Counting incidences over finite fields
This chapter deals with the analog of the Szemerédi–Trotter theorem over finite fields and its applications. When we replace the field $\mathbb{R}$ with a finite field $\mathbb{F}_q$ of $q$ elements, things become much more tricky and much less is known (in particular, there are no tight bounds). Assuming nothing on the field, the best possible upper bound on the number of incidences between $n$ lines and $n$ points is $O(n^{3/2})$, which is what one gets from only using the fact that two points determine a line (using a simple Cauchy–Schwarz calculation). However, if we assume that $\mathbb{F}_q$ does not contain large subfields (as is the case, for example, if $q$ is prime), one can obtain a small improvement of the form $O(n^{3/2-\epsilon})$ for some positive $\epsilon$, provided $n$ is not too large with respect to $q$. This was shown by Bourgain, Katz and Tao as an application of the sum-product theorem over finite fields [BKT04]. The sum-product theorem says that, under the same conditions on subfields, for every set $A \subset \mathbb{F}_q$ of size at most $q^{1-\delta}$ we have $\max(|A+A|, |A \cdot A|) \ge |A|^{1+\epsilon}$, where $\epsilon > 0$ depends only on $\delta$. The set $A + A$ is defined as the set of all elements of the form $a + a'$ with $a, a' \in A$ ($A \cdot A$ is defined in a similar way).
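The sum-product phenomenon is easy to observe experimentally. The following sketch (with illustrative parameters only; the prime $p$ is chosen small enough to enumerate everything) compares the sumset and product set of an arithmetic progression with those of a random set:

```python
import random

random.seed(0)

# Sum-product demo over F_p (illustrative parameters): an arithmetic
# progression has a tiny sumset but a large product set, while a random
# set tends to grow under both operations.
p = 10007
A_ap = set(range(1, 101))                      # arithmetic progression, size 100
A_rand = set(random.sample(range(1, p), 100))  # random set, size 100

def sumset(A):
    return {(a + b) % p for a in A for b in A}

def prodset(A):
    return {(a * b) % p for a in A for b in A}

for name, A in [("progression", A_ap), ("random", A_rand)]:
    print(name, "|A+A| =", len(sumset(A)), "|A*A| =", len(prodset(A)))
```

The progression's sumset has size exactly $2 \cdot 100 - 1 = 199$, while its product set (the 'multiplication table') is far larger; neither operation compresses the random set.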
The proof of the finite field ST theorem is given in Sections 3.1 – 3.4. Section 3.1 describes the machinery called 'Ruzsa calculus' – a set of useful claims for working with sumsets. Section 3.2 proves a theorem about growth of subsets of $\mathbb{F}_p$ (we will only deal with prime fields) which is a main ingredient of the proof of the ST theorem. Section 3.3 proves the Balog–Szemerédi–Gowers theorem, a crucial tool in this proof and in many other results in additive combinatorics. Finally, Section 3.4 puts it all together and proves the final result. We note that, unlike previous expositions (and the original [BKT04]), we opt to first prove the ST theorem and then derive the sum-product theorem from it as an application. This choice allows us to derive a slightly more streamlined proof of the ST theorem.
As an application of these results over finite fields we will discuss, in Section 3.5, the theory of multi-source extractors coming from theoretical computer science. We will see how to translate the finite field ST theorem into explicit mappings which transform 'weak' structured sources of randomness into purely random bits. More precisely, suppose you are given samples from several (at least two) independent random variables and want to use them to output uniform random bits. It is not hard to show that a random function will do the job, but finding explicit (that is, efficiently computable) constructions is a difficult task. Such constructions have applications in theoretical computer science, in particular in the area of derandomization, which studies the power of randomized computation vs. deterministic computation.
We will discuss in some detail two representative results in this area: the extractors of Barak, Impagliazzo and Wigderson for several independent blocks [BIW06], which were the first to introduce the tools of additive combinatorics to this area, and Bourgain's two-source extractor [Bou05]. Both rely crucially on the finite field Szemerédi–Trotter theorem of [BKT04].
Chapter 4: Packing lines in different directions – Kakeya sets
This chapter deals with a somewhat different type of theorem, describing the way lines in different directions can overlap. In Sections 4.1 and 4.2 we will discuss these questions over the real numbers and over finite fields, respectively. In Section 4.3 we will discuss applications of the finite field results to problems in theoretical computer science.
A Kakeya set in $\mathbb{R}^n$ is a compact set containing a unit line segment in every direction. These sets can have measure zero. An important open problem is to understand the minimum Minkowski or Hausdorff dimension (for definitions see Section 4.1) of a Kakeya set. This question reduces in a natural way to a discrete incidence question involving a finite set of lines in many 'sufficiently separated' directions. The Kakeya conjecture states that Kakeya sets must have maximal dimension (i.e., dimension $n$). The conjecture is open in dimensions $n \ge 3$ and was shown to have deep connections with other problems in analysis, number theory and PDEs (see [Tao01]).
The most successful line of attack on this conjecture was initiated by Bourgain [Bou99], later developed by Katz and Tao [KT02], and uses tools from additive combinatorics. In Section 4.1 we will discuss Kakeya sets over the reals and prove a lower bound on the Minkowski dimension which comes close to the best known bounds. The underlying additive combinatorics problem that arises in this context is upper bounding the number of differences $a - b$, over pairs $(a,b)$ in some graph, as a function of the number of sums $a + b$ (or, more generally, linear combinations) on the same graph. We will not discuss the applications of the Euclidean Kakeya conjecture since they are out of scope for this survey (we are focusing on applications in discrete mathematics and computer science). Even though we will not directly use the additive combinatorics results developed in Chapter 3, they will be in the background and will provide intuition as to what is going on.
Over a finite field $\mathbb{F}_q$, a Kakeya set is a set $K \subset \mathbb{F}_q^n$ containing a line in every direction (a line here contains $q$ points). It was conjectured by Wolff [Wol99] that the minimum size of a Kakeya set is at least $C_n \cdot q^n$ for some constant $C_n$ depending only on $n$. We will see the proof of this conjecture (obtained by the author in [Dvi09]), which uses the polynomial method. An application of this result, described in Section 4.3, is a construction of seeded extractors, which are explicit mappings that transform a 'weak' random source into a close-to-uniform distribution with the aid of a short random 'seed' (since there is a single source, the extractor must use a seed). A specific question that arises in this setting is the following: Suppose Alice and Bob each pick a point in $\mathbb{F}_q^n$ ($x$ for Alice, $y$ for Bob). Consider the random variable $z$ computed by picking a random point on the line through $x$ and $y$. If both Alice and Bob pick their points independently at random, then it is easy to see that $z$ will also be random. But what happens when Bob picks his point to be some function $y = f(x)$ of Alice's point? Using the connection to the Kakeya conjecture one can show that, in this case, $z$ is still sufficiently random in the sense that it cannot hit any small set with high probability.
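The question above can be simulated on a small example. In the sketch below (hypothetical parameters; the function $f$ is an arbitrary fixed choice made for illustration, not anything from the literature), Bob's point is a deterministic function of Alice's, yet the sampled point on the line remains spread out:

```python
import random
from collections import Counter

random.seed(0)
p = 101  # a small prime; points live in F_p^2

def f(x):
    # an arbitrary fixed (deterministic) response for Bob; purely illustrative
    return ((x[0] * x[0] + 3 * x[1]) % p, (x[0] + 7) % p)

counts = Counter()
trials = 20000
for _ in range(trials):
    x = (random.randrange(p), random.randrange(p))  # Alice's random point
    y = f(x)                                        # Bob's deterministic reply
    t = random.randrange(p)
    # a random point z = x + t*(y - x) on the line through x and y
    z = ((x[0] + t * (y[0] - x[0])) % p, (x[1] + t * (y[1] - x[1])) % p)
    counts[z] += 1

max_prob = max(counts.values()) / trials
print(f"most likely value of z has probability {max_prob:.5f}; uniform is {1 / p**2:.5f}")
```

Of course, a simulation against one fixed $f$ proves nothing about all $f$; that is exactly what the Kakeya-based analysis delivers.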
Chapter 5: From local to global – Sylvester–Gallai type theorems
The Sylvester–Gallai (SG) theorem says that, in a finite set of points in $\mathbb{R}^2$, not all on the same line, there exists a line intersecting exactly two of the points. In other words, if for every two points $p, q$ in the set the line through $p$ and $q$ contains a third point in the set, then all points are on the same line. Besides being a natural incidence theorem, one can also look at this theorem as converting local geometric information (collinear triples) into a global upper bound on the dimension (i.e., putting all points on a single line, which is one-dimensional). We will see several generalizations of this theorem, obtained in [BDYW11], in various settings. For example, assume that for every point $p$ in a set of $n$ points there are at least $\delta n$ other points $q$ such that the line through $p$ and $q$ contains a third point. We will see that in this case the points all lie in an affine subspace of dimension bounded by a constant depending only on $\delta$. The proof technique here is different from what we have seen so far and will rely on convex optimization techniques, among other things. These results will be described in Section 5.1, with the main technical tool, a rank lower bound for design matrices, proved in Section 5.2.
In Section 5.3 we will consider this type of question over a finite field and see how the bounds are weaker in this case. In particular, under the same assumption as above, the best possible upper bound on the dimension grows like $\log_p n$, where $p$ is the characteristic of the field [BDSS11]. Here, we will again rely on tools from additive combinatorics and will use results proved in Chapter 3.
In Section 5.4 we will see how this type of question arises naturally in computer science applications involving error-correcting codes which are 'locally correctable'. A (linear) Locally Correctable Code (LCC) is a (linear) error-correcting code in which each symbol of a possibly corrupted codeword can be corrected by looking at only a few other locations (in the same corrupted codeword). Such codes are very different from 'regular' error-correcting codes (in which decoding is usually done in one shot for all symbols) and have interesting applications in complexity theory (they are also very much related to Locally Decodable Codes (LDCs), which are discussed at length in the survey [Yek11]).
Chapter 2 Counting Incidences Over the Reals
2.1 The Szemerédi–Trotter theorem
Let $P$ be a finite set of points in $\mathbb{R}^2$ and let $L$ be a finite set of lines in $\mathbb{R}^2$. We define
$$I(P,L) = \{(p,\ell) \in P \times L \mid p \in \ell\}$$
to be the set of incidences between $P$ and $L$. We will prove the following result of Szemerédi and Trotter [ST83].
Theorem 2.1.1 (ST theorem).
Under the above notations we have
$$|I(P,L)| \lesssim |P|^{2/3}|L|^{2/3} + |P| + |L|.$$
We use $\lesssim$ and $\gtrsim$ to denote (in)equality up to multiplicative absolute constants.
The following example shows that this bound is tight. Let $L$ be the set of $n^3$ lines of the form $y = ax + b$ with $a \in [n]$ and $b \in [n^2]$, where $[k] = \{1,\ldots,k\}$. Let $P = [n] \times [2n^2]$, a set of $2n^3$ points. Observe that each line in $L$ intersects $P$ in $n$ points (for each $x \in [n]$, $ax + b \le n^2 + n^2 = 2n^2$). This gives a total of $n^4 = \Theta\left(|P|^{2/3}|L|^{2/3}\right)$ incidences.
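For small $n$ this construction can be checked directly. The quick script below (illustrative only) counts the incidences of the grid construction — lines $y = ax + b$ with $a \in [n]$, $b \in [n^2]$ against the point set $[n] \times [2n^2]$ — and compares them with the ST bound:

```python
# Sanity check of the tightness example: count incidences for the grid
# construction and compare against |P|^{2/3} |L|^{2/3} + |P| + |L|.
n = 6
P = {(x, y) for x in range(1, n + 1) for y in range(1, 2 * n * n + 1)}
lines = [(a, b) for a in range(1, n + 1) for b in range(1, n * n + 1)]

incidences = sum(
    1 for (a, b) in lines for x in range(1, n + 1) if (x, a * x + b) in P
)
st_bound = (len(P) * len(lines)) ** (2 / 3) + len(P) + len(lines)
print(incidences, st_bound)  # incidences equals n^4
```

Up to the hidden constant, the count $n^4$ matches the $|P|^{2/3}|L|^{2/3}$ term of the theorem.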
As a step towards proving the ST theorem we prove the following claim, which gives an 'easy' bound on the number of incidences. It is 'easy' not just because it has a simple proof but also because it only uses the fact that every two points define a single line and every pair of lines can intersect in at most one point (these facts hold over any field). The proof of the claim will use the Cauchy–Schwarz inequality, which says that
$$\left(\sum_{i=1}^{k} a_i\right)^2 \le k \cdot \sum_{i=1}^{k} a_i^2$$
whenever $a_1, \ldots, a_k$ are positive real numbers.
Claim 2.1.2.
Let $P, L$ be as above. Then we have the following two bounds:
$$|I(P,L)| \lesssim |P| \cdot |L|^{1/2} + |L|$$
and
$$|I(P,L)| \lesssim |L| \cdot |P|^{1/2} + |P|.$$
Proof.
We will only prove the first assertion (the second one follows using a similar argument or by duality). The only geometric property used is that through every two points passes only one line. First, observe that
(2.1) $|I(P,L)| \le |P|^2 + |L|.$
To see this, count first the lines that have at most one point of $P$ on them. These lines contribute at most $|L|$ incidences. The rest of the lines have at least two points of $P$ on each line. The total number of incidences on these lines is at most $|P|^2$, since otherwise there would be a point that lies on more than $|P|$ such lines; each of these lines must have one additional point of $P$ on it (a different point for each line, since two lines meet in at most one point), and so there would be more than $|P|$ points – a contradiction.
We now bound the number of incidences. We use $1_{p \in \ell}$ to denote the indicator function which is equal to $1$ if $p \in \ell$ and equal to zero otherwise.
(2.2) $|I(P,L)| = \sum_{\ell \in L} \sum_{p \in P} 1_{p \in \ell}$
(2.3) $\le \left( |L| \cdot \sum_{\ell \in L} \Big( \sum_{p \in P} 1_{p \in \ell} \Big)^2 \right)^{1/2}$
(2.4) $= \left( |L| \cdot \sum_{\ell \in L} \sum_{p, q \in P} 1_{p \in \ell} \cdot 1_{q \in \ell} \right)^{1/2}$
(2.5) $\le \left( |L| \cdot \left( |P|^2 + |I(P,L)| \right) \right)^{1/2}$
(2.6) $\le \left( |L| \cdot \left( 2|P|^2 + |L| \right) \right)^{1/2},$
where (2.3) uses Cauchy–Schwarz, (2.5) uses the fact that two distinct points lie on at most one common line, and (2.6) uses (2.1). This implies the bound. ∎
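The bound of Claim 2.1.2 is concrete enough to test empirically. The sketch below (an illustration, not a proof; the factor 2 is a generous stand-in for the hidden constant) builds lines through random pairs of random integer points and checks the first bound $|I(P,L)| \lesssim |P| \cdot |L|^{1/2} + |L|$:

```python
import random
from math import gcd

random.seed(1)

# Random instance: integer points, and lines spanned by random pairs of them.
P = list({(random.randint(0, 10), random.randint(0, 10)) for _ in range(30)})

def line_through(p, q):
    # encode the line through p and q as a normalized integer triple (a, b, c)
    # with a*x + b*y = c, so that equal lines get equal triples
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):
        a, b, c = -a, -b, -c
    return (a, b, c)

L = {line_through(p, q) for p, q in [random.sample(P, 2) for _ in range(40)]}

incidences = sum(1 for (a, b, c) in L for (x, y) in P if a * x + b * y == c)
easy_bound = 2 * (len(P) * len(L) ** 0.5 + len(L))
print(incidences, easy_bound)
```

Exact integer arithmetic (rather than floating point slopes) is used so that incidences are detected reliably.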
2.1.1 Proof using cell partitions
The first proof of the ST theorem we will see uses the idea of cell partitions and is perhaps the most direct of the three proofs we will encounter. The proof is due to Tao [Tao09] (based loosely on [CEG90b]) and is similar in spirit to the original proof of Szemerédi and Trotter. The idea is to use the properties of the real plane to partition it into small regions such that each region intersects a small fraction of the lines in our set $L$. This allows us to 'amplify' the easy bound (Claim 2.1.2) to a stronger (indeed, optimal) bound by applying it to separate smaller instances of the problem.
Lemma 2.1.3.
For every $1 \le r \le |L|$ there exists a set $L'$ of $O(r)$ lines, plus some additional line segments not containing any point of $P$, that partition $\mathbb{R}^2$ into at most $O(r^2)$ regions (convex open sets) such that the interior of each region is intersected by at most $O(|L|/r)$ lines of $L$.
We will sketch the proof of this lemma later. Before that, let's see how it implies the ST theorem: First we can assume w.l.o.g. that
$$|L|^{1/2} \ll |P| \ll |L|^2$$
(we use $X \ll Y$ to mean that $X \le cY$ for some sufficiently small constant $c$). If not, then the bound in the ST theorem follows from Claim 2.1.2. We will apply Lemma 2.1.3 with some parameter $r$ to be chosen later. Let $L'$ be the set of $O(r)$ lines defining the partition (recall that there are some additional line segments, not counted in $L'$, that do not contain points in $P$). For each cell we apply Claim 2.1.2 to bound the number of incidences in this cell (the cell does not include the boundary). We get that a cell containing $|P_i|$ points of $P$ can have at most $O(|P_i| \cdot (|L|/r)^{1/2} + |L|/r)$ incidences. Summing over all cells we get that
$$|I(P,L)| \lesssim |I(P,L')| + \left( |P| \cdot (|L|/r)^{1/2} + r|L| \right) + r|L|,$$
where the first term counts the incidences of points with lines in $L'$, the second term counts incidences in the open cells and the third term counts the incidences of lines not in $L'$ with points on the cell boundary $B$ (each line not in $L'$ has at most $O(r)$ incidences with points on $B$). Setting
$$r = |P|^{2/3}/|L|^{1/3}$$
we get that
$$|I(P,L)| \lesssim |I(P,L')| + |P|^{2/3}|L|^{2/3}.$$
Since $|L'| = O(r) \ll |L|$, we can repeat the same argument on $I(P,L')$, obtaining a geometric sum that only adds up to a constant factor. This completes the proof of the ST theorem.
Proof of the cell partition lemma
We only sketch the proof. The proof will be probabilistic. We will pick a random subset of the lines in $L$ to be the set $L'$ (plus some additional segments) and will argue that it satisfies the lemma with positive probability (this will imply that a good choice exists). This type of argument is common in combinatorics and is usually referred to as the 'probabilistic method'. We will make two simplifying assumptions: one is that at most two lines of $L$ pass through any single point (this can be removed by a limiting argument). The second is that there are no vertical lines in $L$ and that no point in $P$ is on a vertical line through the intersection of two lines in $L$ (this can be removed by a random rotation).
The particular procedure we will use to pick the partition is the following: first we take each line of $L$ to be in $L'$ independently with probability $r/|L|$. This will give us $O(r)$ lines with high probability. This set of lines can create at most $O(r^2)$ cells. Then, we 'fix' the partition so that each cell has a bounded (at most 4) number of line segments bordering it. This 'fix' is done by adding vertical line segments through every point that is adjacent to a cell with more than 4 border segments (the number 4 is not important, it can be any constant). These extra line segments are not in $L'$ and, by our 'random rotation' assumption, do not hit any point in $P$. One can verify that adding these segments does not increase the number of cells above $O(r^2)$ (there are at most $O(r^2)$ initial 'corners' to fix).
Having described the probabilistic construction we turn to analyze the probability of a cell having too many lines passing through it. Consider a cell with at most 4 border segments. Each line passing through the cell must intersect at least one of these bordering segments. If there are more than $k$ lines in the cell, then one segment must have at least $k/4$ lines of $L$ passing through it. Since all of these lines were not chosen in the partition, we get that this event (for this specific segment) happens with probability at most
$$\left(1 - \frac{r}{|L|}\right)^{k/4}.$$
Taking $k$ to be roughly $(|L|/r) \cdot \log|L|$ we get that this probability is at most an arbitrarily large negative power of $|L|$. Therefore, we can bound the probability of the union of all 'bad' events of this form (i.e., of a particular segment or a line in $L$ containing a series of lines not chosen in $L'$) by the product of the number of events times this small probability. Since the number of bad events is polynomial in $|L|$, we get that there exists a partition with a bound of $O((|L|/r)\log|L|)$ lines per cell. A more careful argument can get rid of the logarithmic factor by arguing that (a) the number of 'bad' cells is very small and (b) we can use induction on this smaller set to get the required partition.
This proof seems messy but is actually much cleaner than the original partition proof of Szemerédi and Trotter (which was deterministic). Next, we will see a much simpler proof of ST that does not use cell partitions (later on we will see a third proof that uses a very different kind of cell partition using polynomials).
2.1.2 Proof using the crossing number inequality
Next, we will see a different, very elegant, proof of the ST theorem due to Szekely [Szé97], based on the powerful crossing number inequality [ACNS82, Lei81]. We will consider undirected graphs $G = (V,E)$ on a finite set $V$ of vertices and with a set $E$ of edges. A drawing of a graph is a placing of the vertices in the real plane with simple curves connecting two vertices if there is an edge between them (we omit the 'formal' definition since this is a very intuitive notion). For a drawing $D$ of $G$ we denote by $\mathrm{cr}(D)$ the number of 'crossings', or intersections of edges, in the drawing. The crossing number of $G$, denoted $\mathrm{cr}(G)$, is the minimum of $\mathrm{cr}(D)$ over all drawings $D$ of $G$. Thus, a graph is planar if it has a crossing number of zero.
A useful tool when talking about planar graphs is Euler's formula. Given a planar drawing of a connected graph $G = (V,E)$ we have the following equality
(2.7) $|V| - |E| + |F| = 2,$
where $F$ is the set of faces of the drawing (including the unbounded face). The proof is a very simple induction on $|F|$. If there is one face then the graph is a tree, so $|E| = |V| - 1$ and the formula holds. If there are more faces then we can remove a single edge bordering two distinct faces and decrease the number of faces by one.
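Euler's formula is easy to sanity-check on a family of planar graphs whose face count is explicit, e.g. grid graphs (a small illustrative script):

```python
# Check Euler's formula |V| - |E| + |F| = 2 on the planar m x n grid graph,
# whose face count can be written down directly.
def euler_check(m, n):
    V = m * n
    E = m * (n - 1) + n * (m - 1)  # horizontal + vertical edges
    F = (m - 1) * (n - 1) + 1      # inner square faces + the unbounded face
    return V - E + F

for m in range(2, 6):
    for n in range(2, 6):
        assert euler_check(m, n) == 2
print("Euler's formula holds on all tested grids")
```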
The proof of the ST theorem will use a powerful inequality called the crossing number inequality: if $|E| \ge 4|V|$ then $\mathrm{cr}(G) \gtrsim |E|^3/|V|^2$. This inequality gives a strong lower bound on $\mathrm{cr}(G)$ given the number of edges in $G$. As a preliminary step we shall prove a weaker bound (which we will amplify later).
Claim 2.1.4.
Let $G = (V,E)$ be a graph. Then $\mathrm{cr}(G) \ge |E| - 3|V|$.
Proof.
W.l.o.g. we can assume $G$ is connected and has at least 3 vertices. It is easy to check that, if $G$ is planar, then $2|E| \ge 3|F|$ (draw two points on either side of each edge and count them once by going over all edges and once by going over all faces, using the fact that each face has at least 3 edges adjacent to it). Plugging this into Euler's formula we get that, for planar graphs,
$$|E| \le 3|V| - 6 \le 3|V|.$$
If the claim were false, we could remove fewer than $|E| - 3|V|$ edges (one for each crossing in an optimal drawing) and obtain a planar graph. The new graph would have more than $3|V|$ edges – a contradiction. ∎
This is clearly not a very good bound, as some simple examples demonstrate. To get the final bound we will apply Claim 2.1.4 to a random vertex-induced subgraph and do some expectation analysis. This is a beautiful example of the power of the probabilistic method.
Proof.
Let $G' = (V', E')$ be a random vertex-induced subgraph, with each vertex of $G$ chosen to be in $V'$ independently with probability $p$, to be chosen later. Taking the expectation of
$$\mathrm{cr}(G') \ge |E'| - 3|V'|$$
we get that
$$p^4 \cdot \mathrm{cr}(G) \ge p^2 \cdot |E| - 3p \cdot |V|.$$
The right hand side is equal to the expectation of the r.h.s of the original inequality by linearity of expectation (an edge survives with probability $p^2$, a vertex with probability $p$). The left hand side requires some explanation: consider a single crossing in a drawing of $G$ which has the smallest number of crossings. This crossing involves four distinct vertices and so will remain after the random restriction with probability $p^4$. Thus, the expected number of crossings in the induced drawing will be $p^4 \cdot \mathrm{cr}(G)$. This is, however, only an upper bound on the expectation of $\mathrm{cr}(G')$, since there could be new ways of obtaining an even better drawing after we move to $G'$ (but this inequality is in the 'right' direction, so we're fine). Setting $p = 4|V|/|E|$ (which is at most 1 by our assumption) gives the required bound $\mathrm{cr}(G) \ge |E|^3 / (64|V|^2)$. ∎
We now prove the ST theorem using this inequality (this proof is by Szekely [Szé97]): Let $P$ and $L$ be our finite sets of points and lines (as above). Put aside the lines that have at most one point of $P$ on them (these contribute at most $|L|$ incidences) so that every remaining line has at least two points. Consider the drawing of the graph $G$ whose vertex set is $P$ and in which two points share an edge if they are (1) on the same line and (2) there is no third point on the line segment connecting them. The number of edges on a line $\ell$ is $k_\ell - 1 \ge k_\ell/2$, where $k_\ell = |\ell \cap P| \ge 2$. The total number of edges is therefore
$$|E| \ge \frac{1}{2}\sum_{\ell} k_\ell \ge \frac{1}{2}\left(|I(P,L)| - |L|\right).$$
The crossing number of this graph is at most $|L|^2$ since each crossing is obtained from the intersection of two lines in $L$. Plugging all this into the crossing number inequality we get that either $|E| \le 4|V| = 4|P|$ or that
$$|L|^2 \ge \mathrm{cr}(G) \gtrsim \frac{|E|^3}{|P|^2}, \quad \text{i.e.,} \quad |E| \lesssim |P|^{2/3}|L|^{2/3}.$$
Putting all of this together we get $|I(P,L)| \lesssim |P|^{2/3}|L|^{2/3} + |P| + |L|$.
Notice that this proof can be made to work with simple curves instead of lines, as long as two curves intersect in at most $O(1)$ points and every two points sit on at most $O(1)$ curves together. In particular, a set $P$ of points and a set $C$ of unit circles can have at most $O(|P|^{2/3}|C|^{2/3} + |P| + |C|)$ incidences (we will use this fact later).
2.2 Applications of Szemerédi–Trotter over $\mathbb{R}$
We mentioned that the crossing number inequality proof of the Szemerédi–Trotter theorem works also for unit circles. In general, the following is also true and very useful (the proof is left as an easy exercise using the crossing number inequality).
Theorem 2.2.1.
Suppose we have a family $\Gamma$ of simple curves (i.e., curves that do not intersect themselves) and a set $P$ of points in $\mathbb{R}^2$ such that (1) every two points define at most $C$ curves in $\Gamma$ and (2) every two curves in $\Gamma$ intersect in at most $C$ points. Then
$$|I(P,\Gamma)| \lesssim |P|^{2/3}|\Gamma|^{2/3} + |P| + |\Gamma|,$$
where the hidden constant in the inequality depends only on $C$.
If we only have a bound on the number of curves passing through $k$ of the points (for some integer $k$), the following was shown by Pach and Sharir (and is not known to be tight for $k > 2$):
Theorem 2.2.2 (Pach–Sharir [PS98]).
Let $\Gamma$ be a family of curves in the plane and let $P$ be a set of points. Suppose that through every $k$ points of $P$ there are at most $C$ curves and that every two curves intersect in at most $C$ points. Then
$$|I(P,\Gamma)| \lesssim |P|^{\frac{k}{2k-1}}|\Gamma|^{\frac{2k-2}{2k-1}} + |P| + |\Gamma|,$$
where the hidden constant depends only on $C$ and $k$.
In particular, this bound can be used for families of algebraic curves of bounded degree.
A simple but useful corollary of the ST theorem is the following. The proof is an easy calculation and is left to the reader.
Corollary 2.2.3.
Let $P$ and $L$ be sets of points and lines in $\mathbb{R}^2$. For $k \ge 2$ let $L_k$ denote the set of lines in $L$ that contain at least $k$ elements of $P$. Then,
$$|L_k| \lesssim \frac{|P|^2}{k^3} + \frac{|P|}{k}.$$
2.2.1 Beck’s theorem
A nice theorem that follows from ST using purely combinatorial arguments is Beck’s theorem:
Theorem 2.2.4 (Beck’s theorem [Bec83]).
Let $P$ be a set of $n$ points in the plane and let $L$ be the set of lines containing at least 2 points in $P$. Then one of these two cases must hold:

1. There exists a line in $L$ that contains $\Omega(n)$ of the points.

2. $|L| \ge \Omega(n^2)$.

In other words, if a set of $n$ points defines $o(n^2)$ lines then there is a line containing a constant fraction of the points.
Proof.
Let $n = |P|$. We partition the lines in $L$ into sets $L_1, L_2, \ldots$, where the $j$'th set $L_j$ contains the lines with roughly $2^j$ points from $P$ (more precisely, the lines that contain between $2^j$ and $2^{j+1}$ points). Using Corollary 2.2.3 on each of the $L_j$'s we get that
$$|L_j| \le O\left(\frac{n^2}{2^{3j}} + \frac{n}{2^j}\right).$$
Take $C$ to be some large constant to be chosen later and let $L_{med}$ be the set of lines with 'medium' multiplicity: between $C$ and $n/C$ points. Since each line in $L_j$ contains at most $2^{2j+2}$ pairs of points we get that there are at most $O(n^2/2^j + n \cdot 2^j)$ pairs of points on lines in $L_j$. Summing over all $j$'s with $C \le 2^j \le n/C$ and taking $C$ to be sufficiently large we get that the number of pairs of points on lines in $L_{med}$ is at most $n^2/100$. Thus, there are two cases: either there is a line with at least $n/C = \Omega(n)$ points and we are done. Alternatively, at least $\Omega(n^2)$ of the pairs of points lie on lines that contain at most $C$ points each. Since each such line accounts for at most $\binom{C}{2}$ pairs, we get that $|L| \ge \Omega(n^2/C^2)$, which is $\Omega(n^2)$. ∎
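Beck's dichotomy can be observed on small examples. The sketch below (not from the survey; the exact-arithmetic normalization of lines is our own) computes the set of lines spanned by a point set and contrasts the two extreme cases:

```python
from fractions import Fraction
from itertools import combinations

def spanned_lines(points):
    """Distinct lines through at least two of the points, each normalized
    as (a, b, c) with ax + by = c and first nonzero coefficient equal to 1."""
    lines = set()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a, b = Fraction(y2 - y1), Fraction(x1 - x2)
        c = a * x1 + b * y1
        scale = a if a != 0 else b  # points are distinct, so (a, b) != (0, 0)
        lines.add((a / scale, b / scale, c / scale))
    return lines

collinear = [(i, 0) for i in range(10)]              # Beck's first case: one rich line
grid = [(x, y) for x in range(4) for y in range(4)]  # second case: ~n^2 spanned lines
print(len(spanned_lines(collinear)), len(spanned_lines(grid)))
```

The collinear set spans a single line, while the grid (with no line containing a constant fraction of its points) spans many more lines than it has points.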
2.2.2 Erdos unit distance and distance counting problems
Let $P$ be a set of $n$ points in $\mathbb{R}^2$. We define
$$U(P) = \left\{(p, q) \in P^2 \mid \|p - q\| = 1\right\}$$
(i.e., all pairs of Euclidean distance 1). Erdos conjectured (and this is still open) that for all sets $P$ we have $|U(P)| \le O(n^{1+\epsilon})$ for all $\epsilon > 0$. Again, the construction which obtains the maximal known number of unit distances is a grid (this time, however, the step size must be chosen carefully using number theoretic properties).
Using the unit-circle version of the ST theorem we can get a bound of $O(n^{4/3})$, which is the best bound known. To see this, consider the $n$ circles of radius 1 centered at the points of $P$. Two circles can intersect in at most two points and every two points define at most two radius-one circles that pass through them. Therefore, we can use the curve version of the ST theorem to bound the number of incidences (which counts each unit distance twice) by $O(n^{4/3})$. In four dimensions the number of unit distances in an arrangement can be as high as $\Omega(n^2)$. In three dimensions the answer is not known.
A related question of Erdos is to lower bound the number of distinct distances defined by the pairs in $P$. Let
$$D(P) = \left\{\|p - q\| \mid p, q \in P\right\}.$$
It was conjectured by Erdos that
$$|D(P)| \ge \Omega\left(\frac{n}{\sqrt{\log n}}\right).$$
Considering $n$ points on a $\sqrt{n} \times \sqrt{n}$ integer grid gives an example showing this bound is tight. Using the bound on unit distances above (which, by scaling, bounds the maximal number of pairs at any fixed distance) we immediately get a lower bound of $\Omega(n^{2/3})$ on the number of distinct distances. A result which comes incredibly close to proving Erdos's conjecture (with $\log n$ instead of $\sqrt{\log n}$) is the recent breakthrough of Guth and Katz giving $\Omega(n/\log n)$, which uses a three dimensional incidence theorem in the spirit of the ST theorem (we will see this proof later on).
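The grid's shortage of distinct distances is easy to see experimentally. This small sketch (ours, not the survey's; comparing squared distances keeps the arithmetic exact) contrasts a grid with points placed on a cubic curve:

```python
from itertools import combinations

def distinct_distances(points):
    """Number of distinct pairwise distances; squared distances are compared
    so everything stays in exact integer arithmetic."""
    return len({(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(points, 2)})

k = 5
grid = [(x, y) for x in range(k) for y in range(k)]  # n = 25 points, few distances
curve = [(x, x ** 3) for x in range(25)]             # 25 'spread out' points
print(distinct_distances(grid), distinct_distances(curve))
```

The $5 \times 5$ grid realizes only 14 distinct distances among its 300 pairs, while the same number of points on a cubic realizes well over a hundred.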
2.2.3 Sum-Product theorem over $\mathbb{R}$
Let $A \subset \mathbb{R}$ be a finite set and define
$$A + A = \{a + b \mid a, b \in A\}$$
and
$$A \cdot A = \{a \cdot b \mid a, b \in A\}$$
to be the sumset and product set of $A$. If $A$ is an arithmetic progression then we have $|A + A| \le 2|A|$ and if $A$ is a geometric progression we have $|A \cdot A| \le 2|A|$. But can both be small? In other words, can we have an 'approximate' subring of the real numbers (one can also ask this for the integers)? Using the ST theorem, Elekes [Ele97] proved the following theorem. The same theorem with the exponent $5/4$ replaced by some other constant larger than 1 was proved earlier by Erdos and Szemeredi [ES83].
Theorem 2.2.5 (Sum-Product Theorem over $\mathbb{R}$).
Let $A \subset \mathbb{R}$ be a finite set. Then
$$\max\left(|A + A|, |A \cdot A|\right) \ge \Omega\left(|A|^{5/4}\right).$$
Proof.
Let $P = (A + A) \times (A \cdot A)$ and let $L$ contain all lines defined by an equation of the form $y = a(x - b)$ with $a, b \in A$ (we can discard zero from $A$, which does not affect the bound). Then $|L| = |A|^2$ and $|P| = |A + A| \cdot |A \cdot A|$. Each line in $L$ has at least $|A|$ incidences with $P$ (take all points $(b + c, ac)$ on the line with $c \in A$) and so we have
$$|A|^3 \le I(P, L) \le O\left(|P|^{2/3}|L|^{2/3} + |P| + |L|\right) = O\left(\left(|A + A| \cdot |A \cdot A|\right)^{2/3}|A|^{4/3}\right).$$
If both $|A + A|$ and $|A \cdot A|$ were $o(|A|^{5/4})$ we would get that the r.h.s is smaller than the l.h.s, a contradiction. ∎
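It is instructive to see the dichotomy numerically. The following sketch (ours, not part of the survey) computes the sumset and product set of an arithmetic and a geometric progression and checks the $|A|^{5/4}$ threshold from the theorem:

```python
def sum_set(A):
    return {a + b for a in A for b in A}

def product_set(A):
    return {a * b for a in A for b in A}

ap = list(range(1, 21))           # arithmetic progression: tiny sumset
gp = [2 ** i for i in range(20)]  # geometric progression: tiny product set
for A in (ap, gp):
    s, p = len(sum_set(A)), len(product_set(A))
    print(len(A), s, p, max(s, p) >= len(A) ** 1.25)
```

For the arithmetic progression $|A + A| = 2|A| - 1 = 39$ but the product set is large; for the geometric progression the roles are exactly reversed, in line with the theorem.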
A more intricate application of the ST theorem can give an improved bound of $\Omega(|A|^{14/11})$ [Sol05]. The conjectured bound is $\Omega(|A|^{2-\epsilon})$ for all $\epsilon > 0$.
2.2.4 Number of points on a convex curve
Let $\gamma$ be a strictly convex curve in the plane contained in the range $[0, n] \times [0, n]$. How many integer lattice points can $\gamma$ contain? Using the curve version of the ST theorem we can bound this by $O(n^{2/3})$ (this proof is due to Iosevich [Ios04]). This bound is tight and the example which matches it is the convex hull of the integer points contained in a ball of radius $n$ [BL98]. The argument proceeds as follows: take the family $\Gamma$ to include all curves obtained from $\gamma$ by translating it by all integer points in $[-n, n]^2$. One has $|\Gamma| = O(n^2)$. We take $P$ to be all integer points that are on some curve from $\Gamma$ so that $|P| \le O(n^2)$. The condition on the number of curves through every two points and the maximum intersection of two curves can be readily verified (here we must use the strict convexity). Thus the number of incidences is at most $O(|P|^{2/3}|\Gamma|^{2/3} + |P| + |\Gamma|) = O(n^{8/3})$. Since the curves are all translations of each other they all contain the same number of integer points. Therefore, each one will contain at most $O(n^{8/3}/n^2) = O(n^{2/3})$ points.
2.3 The Elekes-Sharir framework
In a recent breakthrough Guth and Katz [GK10b] proved that any finite set of $n$ points in the real plane defines at least $\Omega(n/\log n)$ distinct distances. This is tight up to a $\sqrt{\log n}$ factor and was conjectured by Erdos [Erd46]. The proof combines ideas that were developed in previous works and in particular a general framework developed by Elekes and Sharir in [ES10] that gives a 'generic' way to approach such problems by reducing them to an incidence problem. (The original paper of Elekes and Sharir reduced the distance counting problem to an incidence problem between higher degree curves but this was simplified by Guth and Katz to give lines instead of curves.)
To begin, observe that a 4-tuple of points $(p, q, p', q')$ satisfies $\|p - q\| = \|p' - q'\|$ iff there exists a rigid motion $T$ (a rotation composed with a translation) such that $T(p) = p'$ and $T(q) = q'$. Let us denote the set of rigid motions by $\mathcal{F}$. To define a rigid motion we need to specify a translation (which has two independent parameters) and a rotation (one parameter); thus, we can think of $\mathcal{F}$ as a three dimensional space. Later on we will fix a concrete parametrization of $\mathcal{F}$ (minus some points) as $\mathbb{R}^3$ but for now it doesn't matter. For each pair $p, q \in P$ we define the set
$$\ell_{p,q} = \{T \in \mathcal{F} \mid T(p) = q\}$$
of rigid motions mapping $p$ to $q$. This set is 'one dimensional' since, after specifying the image of $p$, we can only change the rotation parameter. In our concrete parametrization (which we will see shortly) all of the sets $\ell_{p,q}$ will in fact be lines in $\mathbb{R}^3$. Let $L_P = \{\ell_{p,q} \mid p, q \in P\}$ be the set of $n^2$ lines defined by the point set $P$. We would like to bound the number of distances defined by $P$, denoted $d(P)$, as a function of the number of incidences between the lines in $L_P$. To this end, consider the following set:
$$Q = \left\{(p, q, p', q') \in P^4 \mid \|p - q\| = \|p' - q'\|\right\}.$$
A quick Cauchy-Schwarz calculation shows that
$$|Q| \ge \Omega\left(\frac{n^4}{d(P)}\right).$$
On the other hand, since each 4-tuple $(p, q, p', q')$ in $Q$ gives an intersection between two lines in $L_P$ (the rigid motion sending $p$ to $p'$ and $q$ to $q'$ lies on both $\ell_{p,p'}$ and $\ell_{q,q'}$), we have that
$$|Q| \le O\left(\left|\left\{(\ell, \ell') \in L_P^2 \mid \ell \cap \ell' \neq \emptyset\right\}\right|\right).$$
Thus, it will suffice to give a bound of $O(n^3 \log n)$ on the number of pairs of intersecting lines in $L_P$ to obtain the bound of Guth-Katz on $d(P)$. In general, $n^2$ lines in $\mathbb{R}^3$ can have many more intersections but, using some special properties of this specific family of lines, we will in fact obtain this bound.
We now describe the concrete parametrization of $\mathcal{F}$ mentioned above. We begin by removing from $\mathcal{F}$ all translations. It is easy to see that the number of 4-tuples in $Q$ arising from pure translations is at most $O(n^3)$ (since every three points determine the fourth one). Now, every map in $\mathcal{F}$ which is not a translation is a rotation by some angle $\theta$ (say, to the right) around some fixed point $(x, y)$. If $T(p) = q$ then one sees that the fixed point of $T$ must lie on the perpendicular bisector of the segment $pq$. That is, on the line passing through the midpoint $(p + q)/2$ in the direction perpendicular to $q - p$. Our parametrization $\rho: \mathcal{F} \to \mathbb{R}^3$ will be defined as
$$\rho(T) = \left(x, y, \cot(\theta/2)\right),$$
where $(x, y)$ is the fixed point of $T$ and $\theta$ its angle of rotation. We now show that this parametrization maps each set $\ell_{p,q}$ to a line:
Claim 2.3.1.
For each $p, q \in P$ we have that $\rho(\ell_{p,q})$ is a line in $\mathbb{R}^3$.
Proof.
It will help to draw a picture at this point with the two points $p, q$, the perpendicular bisector $b$ of the segment $pq$ and the fixed point $z = (x, y)$ on this line. We will consider the triangle formed by the three points $p$, $z$ and $m = (p + q)/2$ (the point on $b$ that is directly between $p$ and $q$). This is a right angled triangle with an angle of $\theta/2$ between the segments $zm$ and $zp$. Thus, $\|z - m\| = \|p - m\|\cot(\theta/2) = \frac{\|p - q\|}{2}\cot(\theta/2)$. Let $v$ be a unit vector parallel to $b$. We thus have that $z = m + \frac{\|p - q\|}{2}\cot(\theta/2) \cdot v$ (or with a minus sign). Setting $t = \cot(\theta/2)$, this shows that
$$\rho(\ell_{p,q}) = \left\{\left(m + t \cdot \frac{\|p - q\|}{2}\, v,\; t\right) \;\middle|\; t \in \mathbb{R}\right\},$$
which is a line. ∎
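The claim can be checked numerically: for each value of the third coordinate $t = \cot(\theta/2)$, rotating $p$ by $\theta$ about the corresponding point of the line should land exactly on $q$. The sketch below is our own sanity check (not from the survey), with one fixed sign convention for the bisector direction:

```python
import math

def rotate_about(z, theta, p):
    """Rotate point p counterclockwise by angle theta around center z."""
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = p[0] - z[0], p[1] - z[1]
    return (z[0] + c * dx - s * dy, z[1] + s * dx + c * dy)

p, q = (0.0, 0.0), (2.0, 1.0)
m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)       # midpoint of pq
d = math.hypot(q[0] - p[0], q[1] - p[1])
v = (-(q[1] - p[1]) / d, (q[0] - p[0]) / d)      # unit vector along the bisector
for t in (-2.0, 0.5, 3.0):                       # third coordinate t = cot(theta/2)
    z = (m[0] + t * (d / 2) * v[0], m[1] + t * (d / 2) * v[1])  # point of l_{p,q}
    theta = 2 * math.atan2(1, t)                 # inverts cot(theta/2) = t
    img = rotate_about(z, theta, p)
    print(round(img[0], 9), round(img[1], 9))
```

Each iteration prints (up to floating point error) the coordinates of $q$, confirming that every point of the parametrized line is a rigid motion mapping $p$ to $q$.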
Let $N = |L_P| = n^2$. We have $N$ lines in $\mathbb{R}^3$ and want to show that there are at most $O(N^{3/2}\log N)$ incidences (pairs of intersecting lines). This is clearly not true for an arbitrary set of lines. A trivial example where the number of incidences is $\binom{N}{2}$ is when all lines pass through a single point. Another example is when the lines are all in a single plane inside $\mathbb{R}^3$. If no two lines are parallel we would have $\binom{N}{2}$ incidences (every pair intersects). Surprisingly enough, there is only one more type of example with many more than $N^{3/2}$ incidences: doubly ruled surfaces. Take for example the set $\{(x, y, xy) \mid x, y \in \mathbb{R}\}$. This set is 'ruled' by two families of lines: the lines of the form $\{(x, b, xb) \mid x \in \mathbb{R}\}$ and the lines of the form $\{(a, y, ay) \mid y \in \mathbb{R}\}$. If we take $N/2$ lines from each family we will get $N^2/4$ intersections. In other words, the set contains a 'grid' of lines (horizontal and vertical) such that every horizontal line intersects every vertical line. In general a doubly ruled surface is defined as a surface in which every point has two lines passing through it. A singly ruled surface is a surface in which every point has at least one line passing through it. It is known that the only nonplanar doubly ruled surfaces (up to linear change of coordinates) are the one we just saw and the surface $\{x^2 + y^2 - z^2 = 1\}$. There are no nonplanar triply ruled surfaces.
Guth and Katz proved the following:
Theorem 2.3.2 (Guth-Katz).
Let $L$ be a set of $N = n^2$ lines in $\mathbb{R}^3$ such that no more than $O(n)$ lines intersect at a single point and no plane or doubly ruled surface contains more than $O(n)$ lines of $L$. Then the number of incidences of lines in $L$ (pairs of intersecting lines) is at most $O(n^3 \log n) = O(N^{3/2}\log N)$.
The bound in the theorem is tight (even with the logarithmic factor) and clearly the conditions cannot be relaxed. Luckily, the lines defined by a point set $P$ in the above manner satisfy the conditions of the theorem and so this theorem implies the $\Omega(n/\log n)$ bound on the number of distinct distances. An example of a set of lines matching the bound in the theorem is as follows: Take an $n \times n$ grid of points in the plane $z = 0$ and another identical grid in the plane $z = 1$ and pass a line through every two points, one on each grid. The number of lines is thus $N = n^4$ and a careful calculation shows that the number of incidences is $\Omega(N^{3/2}\log N)$ (see the appendix in Guth and Katz's original paper for the proof).
For each $k \ge 2$ let $P_{\ge k}$ denote the set of points that have at least $k$ lines in $L$ passing through them. Define $P_{=k}$ in a similar manner requiring that there are exactly $k$ lines through the point. Theorem 2.3.2 will follow from the following lemma (which is also tight using the same construction as above).
Lemma 2.3.3.
Let $L$ be as in Theorem 2.3.2. Then for every $2 \le k \le n$,
$$|P_{\ge k}| \le O\left(\frac{n^3}{k^2}\right).$$
Let us see how this Lemma implies Theorem 2.3.2. A point with exactly $k$ lines through it accounts for $\binom{k}{2}$ intersecting pairs, so the number of incidences is $\sum_{k \ge 2}\binom{k}{2}|P_{=k}|$, which by summation by parts is at most $O\left(\sum_{k=2}^{n} k \cdot |P_{\ge k}|\right) \le O\left(\sum_{k=2}^{n}\frac{n^3}{k}\right) = O(n^3 \log n)$.
The cases $k = 2$ and $k \ge 3$ of the Lemma are proved in [GK10b] using different arguments (the case $k \ge 3$ was proved earlier in [EKS11]). Interestingly, the case $k \ge 3$ does not require the condition on doubly ruled surfaces and can be proven without it.
We still need to show that the lines coming from $P$ satisfy the conditions of Theorem 2.3.2 (we omit the mapping $\rho$ at this point to save on notations). To see that at most $n$ lines meet at a point observe that, if more than $n$ lines met at a point, we could find two intersecting lines $\ell_{p,q}$ and $\ell_{p,q'}$ with $q \ne q'$ (there are only $n$ choices for the first index $p$). This clearly cannot happen since it would imply that there is a rigid motion mapping $p$ to $q$ and also mapping $p$ to $q'$. Let us consider now the maximum number of lines in a plane. Fix $p \in P$, let $L_p = \{\ell_{p,q} \mid q \in P\}$ and observe that the lines in $L_p$ are pairwise disjoint. Moreover, by the parametrization of the lines, we have that the lines in $L_p$ are of different directions. Thus, every plane can contain at most one line from $L_p$ (two disjoint lines in a common plane must be parallel). Thus, there could be at most $n$ lines of $L_P$ in a single plane. The condition regarding doubly ruled surfaces is more delicate and can be found in the Guth-Katz paper.
In the next few sections we will develop the necessary machinery for proving Lemma 2.3.3. Since the proof is quite lengthy we will omit the proofs of some claims having to do with doubly ruled surfaces that are used in the proof of the $k = 2$ case. As a 'warm-up' to the full proof we will see two proofs of related theorems which use this machinery in a slightly simpler way. One of these will be yet another proof of the Szemeredi-Trotter theorem, this time using the polynomial ham sandwich theorem. The other will be the proof of the joints conjecture which uses the polynomial method.
2.4 The Polynomial Method and the joints Conjecture
The polynomial method is used to impose an algebraic structure on a geometric problem. The basic ingredient in this method is the following simple claim which holds over any field. Notice that the phrase 'non zero polynomial' used in the claim refers to a polynomial with at least one non zero coefficient (over a finite field such a polynomial might still evaluate to zero everywhere).
Claim 2.4.1.
Let $S \subset \mathbb{F}^n$ be a finite set, with $\mathbb{F}$ some field. If $|S| < \binom{n+d}{n}$ then there exists a non zero polynomial $f \in \mathbb{F}[x_1, \ldots, x_n]$ of degree at most $d$ such that $f(s) = 0$ for all $s \in S$.
Proof.
Each constraint of the form $f(s) = 0$ is a homogenous linear equation in the coefficients of $f$. The number of coefficients for a polynomial of degree at most $d$ in $n$ variables is exactly $\binom{n+d}{n}$, so there are more unknowns than equations and there must be a non zero solution. ∎
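The binomial count of coefficients used in the proof can be verified by direct enumeration. This sketch (ours, for illustration) counts the monomials $x_1^{e_1}\cdots x_n^{e_n}$ of total degree at most $d$ and compares with $\binom{n+d}{n}$:

```python
from itertools import product
from math import comb

def num_monomials(n, d):
    """Count monomials in n variables of total degree at most d by enumeration."""
    return sum(1 for e in product(range(d + 1), repeat=n) if sum(e) <= d)

for n, d in [(2, 3), (3, 4), (4, 2)]:
    print(num_monomials(n, d), comb(n + d, n))  # the two counts agree
```

Whenever $|S|$ is below this count, the homogeneous linear system has more unknowns than equations and hence a nontrivial solution.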
The second component of the polynomial method is given by bounding the maximum number of zeros a polynomial can have. In the univariate case, this is given by the following wellknown fact. Later, we will see a variant of this claim for polynomials with more variables.
Claim 2.4.2.
A non zero univariate polynomial of degree $d$ over a field can have at most $d$ zeros.
To illustrate the power of the polynomial method we will see how it gives a simple proof of the joints conjecture. To this end we must first prove some rather easy claims about restrictions of polynomials. Let $f \in \mathbb{F}[x_1, \ldots, x_n]$ be a degree $d$ polynomial over a field $\mathbb{F}$. Let $V \subset \mathbb{F}^n$ be any affine subspace of dimension $k$. We can restrict $f$ to $V$ in the following way: write $V$ as the image of a degree one mapping $M: \mathbb{F}^k \to \mathbb{F}^n$ so that
$$V = \left\{M(t_1, \ldots, t_k) \mid t_1, \ldots, t_k \in \mathbb{F}\right\}.$$
The restriction of $f$ to $V$ is the polynomial $f|_V = f \circ M \in \mathbb{F}[t_1, \ldots, t_k]$. This depends in general on the particular choice of $M$ but for our purposes all $M$'s will be the same (we can pick any one). A basic and useful fact is that $\deg(f|_V) \le \deg(f)$ for any restriction of $f$.
Suppose now that $\ell$ is a line in $\mathbb{F}^n$ and write $\ell$ as $\{a + tv \mid t \in \mathbb{F}\}$ for some $a, v \in \mathbb{F}^n$. Restricting $f$ to $\ell$ we get a univariate polynomial $g(t) = f(a + tv)$. It will be useful to prove some properties of this restriction. In particular, we would like to understand some of its coefficients. The constant coefficient is the value of $g$ at zero and is simply $f(a)$. On the other hand, the coefficient of highest degree will be $f_d(v)$, where $f_d$ is the highest degree homogenous component of $f$ (that is, the sum of all monomials of maximal degree in $f$). Another coefficient we will try to understand is that of $t$. To this end we will use the partial derivatives of $f$. Recall that $\frac{\partial f}{\partial x_i}$ is a polynomial in $x_1, \ldots, x_n$ obtained by taking the derivative of $f$ as a polynomial in $x_i$ (with coefficients that are polynomials in the remaining variables). This is defined over any field but notice some weird things can happen if $\mathbb{F}$ has positive characteristic (e.g., the partial derivative of $x^p$ is zero over $\mathbb{F}_p$ even though this is a non constant polynomial). The gradient of $f$ is the vector of polynomials
$$\nabla f = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right).$$
This vector has geometric meaning which we will not discuss here. Algebraically, we have that the coefficient of $t$ in the restriction $g(t) = f(a + tv)$ is exactly $\langle \nabla f(a), v \rangle$. To see this, observe that the coefficient of $t$ is obtained by taking the derivative of $g$ (w.r.t the single variable $t$) and then evaluating the derivative at $t = 0$. Using the chain rule for functions of several variables we get that the derivative of $g(t)$ is $\sum_{i=1}^n v_i \frac{\partial f}{\partial x_i}(a + tv)$ and so the claim follows.
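The identity "coefficient of $t$ in $f(a + tv)$ equals $\langle \nabla f(a), v\rangle$" can be checked exactly with dual numbers (numbers $a + b\varepsilon$ with $\varepsilon^2 = 0$), which compute first derivatives of polynomials without any symbolic machinery. This sketch and the sample polynomial in it are ours, not the survey's:

```python
class Dual:
    """Dual numbers a + b*eps with eps^2 = 0: exact first derivatives of polynomials."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x, y, z):  # a sample polynomial
    return x * x * y + 3 * y * z + z * z * z

a, v = (1, 2, -1), (2, 0, 5)
# g(t) = f(a + t v); its coefficient of t is g'(0), the eps-part of a dual evaluation
coeff_t = f(*(Dual(ai, vi) for ai, vi in zip(a, v))).b
# gradient of f at a, computed coordinate by coordinate the same way
grad = [f(*(Dual(aj, 1 if j == i else 0) for j, aj in enumerate(a))).b
        for i in range(3)]
print(coeff_t, sum(g * vi for g, vi in zip(grad, v)))  # prints 53 53
```

Both computations give $\langle \nabla f(a), v \rangle$, as the chain rule argument above predicts.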
2.4.1 The joints problem
Let $L$ be a set of $n$ lines in $\mathbb{R}^3$. A 'joint' w.r.t the arrangement $L$ is a point through which pass at least three, non coplanar, lines. The basic question one can ask is 'what is the maximal number of joints possible in an arrangement of $n$ lines'. This problem, raised in [CEG90a] in relation to computer graphics algorithms, was answered completely by Guth and Katz [GK10a] using a clever application of the polynomial method. This result followed a long line of papers proving incremental results using various techniques, quite different from the polynomial method (see [GK10a] for a list of references). This proof of the joints conjecture by Guth and Katz was the first case where the polynomial method was used directly to argue about problems in Euclidean space (in contrast to finite fields where it was more common). Later in [GK10b], ideas from this proof were used in part of the proof of the Erdos distance problem bound.
A simple lower bound of $\Omega(n^{3/2})$ on the number of joints is obtained from taking $L$ to be the union of the following three sets, each containing $k^2$ lines (so that $n = 3k^2$):
$$\{(t, i, j) \mid t \in \mathbb{R}\}, \quad \{(i, t, j) \mid t \in \mathbb{R}\}, \quad \{(i, j, t) \mid t \in \mathbb{R}\}, \qquad i, j \in \{1, \ldots, k\}.$$
In other words, the set $L$ contains the axis-parallel lines through a three dimensional $k \times k \times k$ grid. It is easy to check that every point in the grid is a joint and so we have that the number of joints can be as large as $k^3 = \Omega(n^{3/2})$. Not surprisingly (at this point), this is tight.
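The grid count can be verified directly. The sketch below (ours; lines are encoded as an axis label plus the two fixed coordinates) builds the $3k^2$ axis-parallel lines and checks that every grid point is a joint:

```python
from itertools import product

k = 4
# 3k^2 axis-parallel lines through the k x k x k grid, encoded as
# ('x', j, l) for {(t, j, l)}, ('y', i, l) for {(i, t, l)}, ('z', i, j) for {(i, j, t)}
lines = ([('x', j, l) for j in range(k) for l in range(k)]
         + [('y', i, l) for i in range(k) for l in range(k)]
         + [('z', i, j) for i in range(k) for j in range(k)])

def lines_through(p):
    x, y, z = p
    return [L for L in lines if
            (L[0] == 'x' and L[1:] == (y, z)) or
            (L[0] == 'y' and L[1:] == (x, z)) or
            (L[0] == 'z' and L[1:] == (x, y))]

# three pairwise-orthogonal directions are automatically non-coplanar
joints = [p for p in product(range(k), repeat=3)
          if len({L[0] for L in lines_through(p)}) == 3]
print(len(lines), len(joints))  # 3k^2 = 48 lines make k^3 = 64 joints
```

With $n = 3k^2$ lines this gives $k^3 = (n/3)^{3/2}$ joints, matching the lower bound above.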
Theorem 2.4.3 (Guth-Katz [GK10a]).
Let $L$ be a set of $n$ lines in $\mathbb{R}^3$. Then $L$ defines at most $O(n^{3/2})$ joints.
Proof.
The proof given here is a simplified proof found by Kaplan, Sharir and Shustin [KSS10] who also generalized it to higher dimensions.
Let $J$ be the set of joints and suppose in contradiction that $|J| > Cn^{3/2}$ for some large constant $C$ to be chosen later. We can throw away all lines in $L$ that have fewer than $|J|/2n$ joints on them (together with the joints covered only by them). This can decrease the size of $J$ by at most a half.
Let $f \in \mathbb{R}[x_1, x_2, x_3]$ be a non zero polynomial with real coefficients of minimal degree that vanishes on the set $J$. We saw in previous sections that $\deg(f) \le O(|J|^{1/3})$ (since a polynomial of this degree will have more than $|J|$ coefficients).
Since each of the lines in $L$ contains more than $\deg(f)$ joints (here we take the constant $C$ to be large enough) we get that $f$ must vanish identically on each line in $L$. To see this, observe that the restriction of $f$ to a line is also a polynomial of degree at most $\deg(f)$ and so, if it is not identically zero, it can have at most $\deg(f)$ zeros. Thus, we have moved from knowing that $f$ vanishes on all joints to knowing that $f$ vanishes on all lines!
Consider a joint $p \in J$ and let $\ell_1, \ell_2, \ell_3$ be three non coplanar lines passing through $p$. We can find three linearly independent vectors $v_1, v_2, v_3$ such that for all $i \in \{1, 2, 3\}$ we have $\ell_i = \{p + tv_i \mid t \in \mathbb{R}\}$. Since $f$ vanishes on these three lines we get that for $i \in \{1, 2, 3\}$, the polynomial $g_i(t) = f(p + tv_i)$ is identically zero. This means that all the coefficients of $g_i$ are zero and in particular the coefficient of $t$ which is, as we saw, equal to $\langle \nabla f(p), v_i \rangle$. Since the $v_i$'s are linearly independent, we get that $\nabla f(p) = 0$ for all joints $p \in J$. One can now check that, over the reals, a non zero non constant polynomial has at least one non zero partial derivative of degree strictly less than the degree of $f$. Therefore, we get a contradiction since we assumed that $f$ is a minimal degree polynomial vanishing on $J$. ∎
It is not hard to generalize this proof to finite fields using the fact that a polynomial all of whose partial derivatives are zero must be of the form $f(x_1, \ldots, x_n) = g(x_1^p, \ldots, x_n^p)$ for some other polynomial $g$, where $p$ is equal to the characteristic of the field. For other generalizations, including to algebraic curves, see [KSS10].
2.5 The Polynomial ham sandwich theorem
One of the main ingredients in the proof of the GuthKatz theorem is an ingenious use of the polynomial ham sandwich theorem, originally proved by Stone and Tukey [ST42]. This is a completely different use of polynomials than the one we saw for the joints problem and combines both algebra and topology. We will demonstrate its usefulness by seeing how it can be used to give yet another proof of the SzemerediTrotter theorem in two dimensions. This proof will be both ‘intuitive’ (not ‘magical’ like the crossing number inequality proof) and simple (without the messy technicalities of the cell partition proof we saw).
The folklore ham sandwich theorem states that every three bounded open sets in $\mathbb{R}^3$ can be simultaneously bisected using a single plane. If we identify the three sets with two slices of bread and a slice of ham we see the practical significance of this theorem. More generally, we have:
Theorem 2.5.1 ([ST42]).
Let $U_1, \ldots, U_n \subset \mathbb{R}^n$ be bounded open sets. Then there exists a hyperplane $H = \{x \in \mathbb{R}^n \mid h(x) = 0\}$ (with $h$ a degree one polynomial in $n$ variables) such that for each $i$ the two sets $U_i \cap \{h > 0\}$ and $U_i \cap \{h < 0\}$ have equal volume. In this case we say that $H$ bisects each of the $U_i$'s.
This can be thought of as extending the basic fact that for every $n$ points in $\mathbb{R}^n$ there is a hyperplane that passes through all of them. The proof uses the Borsuk-Ulam theorem from topology, whose proof can be found in [Mat07].
Theorem 2.5.2 (Borsuk-Ulam [Bor33]).
Let $S^n \subset \mathbb{R}^{n+1}$ be the $n$ dimensional unit sphere and let $g: S^n \to \mathbb{R}^n$ be a continuous map such that $g(-x) = -g(x)$ for all $x$ (such a map is called antipodal). Then there exists $x \in S^n$ such that $g(x) = 0$.
Proof of the ham sandwich theorem.
Each hyperplane corresponds to some degree one polynomial $h_a(x) = a_0 + a_1 x_1 + \cdots + a_n x_n$. Since we only care about the sign of $h_a$ we can scale so that the coefficients form a unit vector $a = (a_0, a_1, \ldots, a_n) \in S^n$. We define a function $g: S^n \to \mathbb{R}^n$ as follows:
$$g(a)_i = \mathrm{vol}\left(U_i \cap \{h_a > 0\}\right) - \mathrm{vol}\left(U_i \cap \{h_a < 0\}\right), \qquad i = 1, \ldots, n.$$
It is clear that $g$ is continuous and that $g(-a) = -g(a)$. Thus, there exists a zero of $g$, which gives a bisecting hyperplane, and we are done. ∎
In their original paper, Stone and Tukey also observed that if we want to bisect more sets, we can do it as long as we have enough degrees of freedom. One way to allow for more degrees of freedom is to replace a hyperplane with a hypersurface.
A hypersurface is a set $Z(f) = \{x \in \mathbb{R}^n \mid f(x) = 0\}$ where now $f$ can be a polynomial of arbitrary degree $d$. The degree of $Z(f)$ is defined to be the degree of its defining polynomial (we will abuse this definition a bit and say that a hypersurface has degree $d$ if it has degree bounded by $d$). Recall that, if we have fewer than $\binom{n+d}{n}$ points in $\mathbb{F}^n$ then we can find, by interpolation, a non zero degree $d$ polynomial that is zero on all of them. For the problem of bisecting open sets the same holds: if the number of sets is smaller than $\binom{n+d}{n}$ we can find a degree $d$ polynomial that bisects all of the sets.
Theorem 2.5.3 (Polynomial ham sandwich (PHS)).
Let $U_1, \ldots, U_t \subset \mathbb{R}^n$ be bounded open sets with $t < \binom{n+d}{n}$. Then there exists a degree $d$ hypersurface that bisects each of the sets $U_i$.
Proof.
The proof is identical to the degree one proof. Identify each degree $d$ hypersurface with its (unit) vector of coefficients and apply the Borsuk-Ulam theorem on the function mapping this vector to the $t$ volume differences. ∎
2.5.1 Cell partition using polynomials
The PHS theorem gives a particularly nice way to partition into cells. In addition to having a ‘balanced’ partition (as we had in the cell partition method we saw earlier) we will have the additional useful property that the boundaries of the partition are defined using a low degree polynomial. The use of the PHS for cell partition originated in a paper of Guth [Gut08] on the multilinear Kakeya problem.
The first step for obtaining the cell partition theorem is to take the PHS to the 'limit' and replace the open sets with discrete sets. If $S \subset \mathbb{R}^n$ is a finite set and $Z(f)$ is a hypersurface, we say that $Z(f)$ bisects $S$ if both sets $S \cap \{f > 0\}$ and $S \cap \{f < 0\}$ have size at most $|S|/2$. Notice that this definition allows for an arbitrary number of points from $S$ to belong to the hypersurface $Z(f)$ itself.
Lemma 2.5.4 (Discrete PHS).
Let $S_1, \ldots, S_t \subset \mathbb{R}^n$ be finite sets of points with $t < \binom{n+d}{n}$. Then, there exists a degree $d$ hypersurface that bisects each of the sets $S_i$.
Proof.
Consider the $\epsilon$-neighborhoods of the sets $S_i$ and apply the PHS on this family of open sets, obtaining a bisecting hypersurface $Z(f_\epsilon)$. Taking $\epsilon$ to zero and using the compactness of the unit sphere of coefficient vectors we get that there is a sequence of bisecting hypersurfaces converging to some degree $d$ hypersurface $Z(f)$. If one of the sets $S_i \cap \{f > 0\}$ or $S_i \cap \{f < 0\}$ had size larger than $|S_i|/2$ we could find a small $\epsilon$ for which the limit hypersurface does not bisect the $\epsilon$-neighborhood of $S_i$, a contradiction. ∎
Notice that, if $n$, the dimension, is fixed and the number of sets $t$ grows, we always have a degree $O(t^{1/n})$ polynomial that bisects the $t$ sets. In particular, over $\mathbb{R}^2$, a family of $t$ discrete sets can be bisected by a degree $O(\sqrt{t})$ h.s.
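The degree-versus-number-of-sets tradeoff is a one-line counting computation. The following sketch (ours, for illustration) finds the least degree with enough coefficients to bisect $t$ sets in the plane:

```python
from math import comb

def bisecting_degree(t, n):
    """Smallest d with comb(d + n, n) - 1 >= t, i.e., enough polynomial
    coefficients (minus scaling) to bisect t discrete sets in R^n."""
    d = 0
    while comb(d + n, n) - 1 < t:
        d += 1
    return d

for t in (10, 100, 1000, 10000):
    print(t, bisecting_degree(t, 2))  # grows like sqrt(t) in the plane
```

Quadrupling the number of sets roughly doubles the needed degree, matching the $O(\sqrt{t})$ statement above.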
We will now use the discrete PHS to get our final cell partition theorem. We will only need this theorem over $\mathbb{R}^2$ and $\mathbb{R}^3$ but will state it over $\mathbb{R}^n$ for all $n$ (it will help to think of $n$ as a fixed constant and of the size of the point set as growing to infinity).
Theorem 2.5.5 (Polynomial Cell Partition).
Let $S \subset \mathbb{R}^n$ be a finite set and let $1 \le r \le |S|$. Then, there exists a decomposition
$$\mathbb{R}^n = Z(f) \cup C_1 \cup \cdots \cup C_m$$
of $\mathbb{R}^n$ into $m = O(r)$ cells (open sets) such that each cell has boundary contained in a hypersurface $Z(f)$ of degree $O(r^{1/n})$ and each cell contains at most $|S|/r$ points from $S$. Notice that the cells do not have to be connected.
Proof.
We will apply the discrete PHS iteratively to obtain finer and finer partitions. Initially, we get a h.s of degree one that bisects the single set $S$ into two sets of size at most $|S|/2$ each (plus some points on the boundary). Applying the discrete PHS again on these two sets we obtain a degree