The Local Cut Lemma
The Lovász Local Lemma is a very powerful tool in probabilistic combinatorics that is often used to prove the existence of combinatorial objects satisfying certain constraints. Moser and Tardos have shown that the LLL gives more than just pure existence results: there is an effective randomized algorithm that can be used to find a desired object. In order to analyze this algorithm, Moser and Tardos developed the so-called entropy compression method. It turned out that one can often obtain better combinatorial results by a direct application of the entropy compression method rather than by simply appealing to the LLL. The aim of this paper is to provide a generalization of the LLL that implies these new combinatorial results. This generalization, which we call the Local Cut Lemma, concerns a random cut in a directed graph with certain properties. Note that our result has a short probabilistic proof that does not use entropy compression. As a consequence, it not only shows that a certain probability is positive, but also gives an explicit lower bound on this probability. As an illustration, we present a new application (an improved lower bound on the number of edges in color-critical hypergraphs) and explain how to use the Local Cut Lemma to derive some of the results previously obtained via the entropy compression method.
- 1 Introduction
- 2 The Local Cut Lemma
- 3 Applications
One of the most useful tools in probabilistic combinatorics is the so-called Lovász Local Lemma (the LLL for short), which was proved by Erdős and Lovász in their seminal paper . Roughly speaking, the LLL asserts that, given a family $\mathcal{A}$ of random events whose individual probabilities are small and whose dependency is somehow limited, there is a positive probability that none of the events in $\mathcal{A}$ happen. More precisely:
Theorem 1.1 (Lovász Local Lemma, ).
Let $A_1$, …, $A_n$ be random events in a probability space $\Omega$. For each $1 \le i \le n$, let $D_i$ be a subset of $\{1, \ldots, n\} \setminus \{i\}$ such that the event $A_i$ is independent from the algebra generated by the events $A_j$ with $j \notin D_i \cup \{i\}$. Suppose that there exists a function $\omega \colon \{1, \ldots, n\} \to [0, 1)$ such that for every $1 \le i \le n$,
$$\Pr[A_i] \le \omega(i) \prod_{j \in D_i} \bigl(1 - \omega(j)\bigr).$$
Then
$$\Pr\Bigl[\,\bigwedge_{i=1}^n \overline{A_i}\,\Bigr] \ge \prod_{i=1}^n \bigl(1 - \omega(i)\bigr) > 0.$$
Note that the probability $\Pr[\bigwedge_{i=1}^n \overline{A_i}]$, which the LLL bounds from below, is usually exponentially small (in the parameter $n$). This is in contrast to the more common situation in the probabilistic method, when the probability of interest is not only positive but bounded away from zero. Although this property of the LLL makes it an indispensable tool in proving combinatorial existence results, it also makes these results seemingly nonconstructive, since sampling the probability space to find an object with the desired properties would usually take an exponentially long expected time. A major breakthrough was made by Moser and Tardos , who showed that, in a special framework for the LLL called the variable version (the name is due to Kolipaka and Szegedy ), there exists a simple Las Vegas algorithm with expected polynomial runtime that searches the probability space for a point that avoids all the events $A_1$, …, $A_n$. Their algorithm was subsequently refined and extended to other situations by several authors; see e.g. , , , .
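In the variable version, the Moser–Tardos algorithm repeatedly picks a bad event that currently holds and resamples the random variables it depends on. The following minimal Python sketch is our own illustration (all names are ours), using proper hypergraph coloring as the running example of a constraint satisfaction problem:

```python
import random

def moser_tardos_coloring(n, edges, q=2, rng=None):
    """Illustrative Moser-Tardos resampling loop: q-color vertices 0..n-1
    so that no edge of the hypergraph `edges` (a list of vertex tuples) is
    monochromatic. While some "bad" event (a monochromatic edge) holds,
    redraw the colors of that edge's vertices uniformly at random."""
    rng = rng or random.Random(0)
    color = [rng.randrange(q) for _ in range(n)]
    while True:
        bad = next((e for e in edges if len({color[v] for v in e}) == 1), None)
        if bad is None:
            return color              # no bad event holds: proper coloring found
        for v in bad:                 # resample the variables of the bad event
            color[v] = rng.randrange(q)
```

The Moser–Tardos theorem guarantees that, under the LLL condition, the expected number of resampling steps is polynomial; the sketch above only shows the shape of the loop, not the analysis.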
The key ingredient of Moser and Tardos’s proof is the so-called entropy compression method (the name is due to Tao ). The idea of this method is to encode the execution process of the algorithm in such a way that the original sequence of random inputs can be uniquely recovered from the resulting encoding. One then proceeds to show that if the algorithm runs for too long, the space of possible codes becomes smaller than the space of inputs, which leads to a contradiction.
It was recently (and somewhat unexpectedly) discovered that applying the entropy compression method directly can often produce better combinatorial results than simply using the LLL. The idea, first introduced by Grytczuk, Kozik, and Micek in their study of nonrepetitive sequences , is to construct a randomized procedure that solves a given combinatorial problem and then apply the entropy compression argument to show that it runs in expected finite time. A wealth of new results have been obtained using this paradigm; see e.g. , , . Some of these examples are discussed in more detail in Section 3.
Note that the entropy compression method is indeed a “method” that one can use to attack a problem rather than a general theorem that contains various combinatorial results as its special cases. It is natural to ask if such a theorem exists, i.e., if there is a generalization of the LLL that implies the new combinatorial results obtained using the entropy compression method. The goal of this paper is to provide such a generalization, which we call the Local Cut Lemma (the LCL for short). It is important to note that this result is purely probabilistic and similar to the LLL in flavor. In particular, its short and simple probabilistic proof does not use the entropy compression method. Instead, it estimates certain probabilities explicitly, in much the same way as the original (nonconstructive) proof of the LLL does. We state and prove the LCL in Section 2. Section 3 is dedicated to applications of the LCL. We start by introducing a simplified special case of the LCL (namely Theorem 3.1) in Subsection 3.1, which turns out to be sufficient for most applications. In fact, Theorem 3.1 already implies the classical LLL, as we show in Subsection 3.2. In Subsection 3.3, we discuss one simple example (namely hypergraph coloring), which provides the intuition behind the LCL and serves as a model for more substantial applications described later. In Subsections 3.4 and 3.5 we show how to use the LCL to prove several results obtained previously using the entropy compression method. We also present a new application (an improved lower bound on the number of edges in color-critical hypergraphs) in Subsection 3.6. The last application, discussed in Subsection 3.7, is a curious probabilistic corollary of the LCL.
2 The Local Cut Lemma
2.1 Statement of the LCL
To state our main result, we need to fix some notation and terminology. In what follows, a digraph always means a finite directed multigraph. Let $D$ be a digraph with vertex set $V$ and edge set $E$. For $u$, $v \in V$, let $E(u, v)$ denote the set of all edges with tail $u$ and head $v$.
A digraph $D$ is simple if $|E(u, v)| \le 1$ for all $u$, $v \in V$. If $D$ is simple and $E(u, v) \ne \emptyset$, then the unique edge with tail $u$ and head $v$ is denoted by $uv$. For an arbitrary digraph $D$, let $\tilde{D}$ denote its underlying simple digraph, i.e., the simple digraph with vertex set $V$ in which $uv$ is an edge if and only if $E(u, v) \ne \emptyset$. Denote the edge set of $\tilde{D}$ by $\tilde{E}$. For a set $F \subseteq E$, let $\tilde{F}$ be the set of all edges $uv \in \tilde{E}$ such that $E(u, v) \cap F \ne \emptyset$. A set $S \subseteq V$ is out-closed (resp. in-closed) if for all $uv \in \tilde{E}$, $u \in S$ implies $v \in S$ (resp. $v \in S$ implies $u \in S$).
Let $D$ be a digraph with vertex set $V$ and edge set $E$ and let $A \subseteq V$ be an out-closed set of vertices. A set of edges $X \subseteq E$ is an $A$-cut if $A$ is in-closed in $\tilde{D} - \tilde{X}$. In other words, a set $X \subseteq E$ is an $A$-cut if it contains at least one edge $e \in E(u, v)$ for all $u$, $v \in V$ such that $u \notin A$ and $v \in A$ (see Fig. 1).
We say that a vertex $v$ is reachable from a vertex $u$ if $D$ (or, equivalently, $\tilde{D}$) contains a directed $uv$-path. The set of all vertices reachable from $u$ is denoted by $R(u)$.
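For a simple digraph, the definitions above are easy to make concrete. The sketch below is our own illustration (helper names are ours; edges are (tail, head) pairs), checking out-closedness, the cut condition as we have stated it, and reachability:

```python
def is_out_closed(S, edges):
    """S is out-closed if every edge of the simple digraph `edges`
    whose tail lies in S also has its head in S."""
    return all(v in S for (u, v) in edges if u in S)

def is_cut(X, A, edges):
    """Under the reading of the cut condition given above: for an
    out-closed A, the edge set X is an A-cut if X contains every edge
    that enters A from outside A."""
    return all((u, v) in X for (u, v) in edges if u not in A and v in A)

def reachable(u, edges):
    """All vertices reachable from u by a directed path (including u)."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for (a, b) in edges:
            if a == x and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen
```

For instance, in the digraph with edges $1 \to 2$, $2 \to 3$, $4 \to 2$, the set $\{2, 3\}$ is out-closed, and a $\{2,3\}$-cut must block both edges entering it, namely $(1, 2)$ and $(4, 2)$.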
Let be a digraph with vertex set and edge set . For a function and vertices and , define
For a set $S$, we use $2^S$ to denote the power set of $S$, i.e., the set of all subsets of $S$.
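As a quick concrete aid (our own helper, not part of the paper's formalism), the power set can be enumerated directly:

```python
from itertools import combinations

def power_set(S):
    """The power set of S: all subsets of S, as frozensets."""
    s = list(S)
    return [frozenset(c) for r in range(len(s) + 1)
                         for c in combinations(s, r)]
```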
Let $D$ be a digraph with vertex set $V$ and edge set $E$. Let $\Omega$ be a probability space and let $A$ and $X$ be random variables such that, with probability $1$, $A \subseteq V$ is an out-closed set of vertices and $X \subseteq E$ is an $A$-cut. Fix a function $\omega$ on $E$. For $u$, $v \in V$ and $e \in E(u, v)$, let
For a vertex $v \in V$, define the risk to $v$ as
For random events $B$ and $C$, the conditional probability $\Pr[B \mid C]$ is only defined if $\Pr[C] > 0$. For convenience, we adopt the following notational convention in Definition 2.3: if $C$ is a random event with $\Pr[C] = 0$, then we set $\Pr[B \mid C] := 0$ for all events $B$. Note that this way the crucial equation $\Pr[B \mid C] \cdot \Pr[C] = \Pr[B \wedge C]$ is satisfied even when $\Pr[C] = 0$, and this is the only property of conditional probability we will use.
We are now ready to state the main result of this paper.
Theorem 2.5 (Local Cut Lemma).
Let $D$ be a digraph with vertex set $V$ and edge set $E$. Let $\Omega$ be a probability space and let $A$ and $X$ be random variables such that, with probability $1$, $A \subseteq V$ is an out-closed set of vertices and $X \subseteq E$ is an $A$-cut. If a function $\omega$ satisfies the following inequality for all $e \in E$:
then for all $v \in V$,
The following immediate corollary is the main tool used in combinatorial applications of Theorem 2.5:
Let $D$, $\Omega$, $A$, and $X$ be as in Theorem 2.5. Let $u$, $v \in V$ and suppose that $\Pr[u \in A] > 0$. Then
2.2 Proof of the LCL
for all . For each , let be defined by
Also, let , where and denote the constant and functions respectively. Then (2.1.1) is equivalent to
Note that the map is monotone increasing, i.e., if for all , then for all as well.
Let and let for all . To simplify the notation, let .
For all and ,
For all and ,
Since the sequence is monotone increasing and bounded by , it has a limit, so let
Note that we still have for all . Hence it is enough to prove that for all ,
We will derive (2.2.4) from the following lemma.
For every and ,
If Lemma 2.9 holds, then we are done, since it implies that
To establish Lemma 2.9, we need the following claim.
Let and suppose that for all , (2.2.5) holds. Then for all and ,
The proof of Claim 2.10 uses the following simple algebraic inequality.
Let , …, , , …, be nonnegative real numbers with for all . Then
Proof is by induction on . If , then both sides of (2.2.7) are equal to . If the claim is established for some , then for we get
as desired. ∎
Proof of Claim 2.10.
Let … be some directed -path in . For , let and . Note that .
Proof of Lemma 2.9.
Proof is by induction on . For , the lemma simply asserts that . Now assume that (2.2.5) holds for some and consider an edge . Since is out-closed, implies , so
Since is an -cut, it contains at least one edge whenever and . Using the union bound, we obtain
Let us now estimate for each . Consider any . Since is out-closed, implies , so
Due to Claim 2.10,
Since and , we get
The last inequality holds for every , so we can replace in it by , obtaining
The right hand side of the last inequality can be rewritten as
as desired. ∎
3.1 A special version of the LCL
In this subsection we introduce a particular, and perhaps more intuitive, set-up for the LCL that will be sufficient for almost all applications discussed in this paper.
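The central combinatorial notion of this set-up is that of a downwards-closed family of subsets, defined next. As a concrete aid, the property can be checked by brute force (our own helper, with names of our choosing):

```python
from itertools import combinations

def is_downwards_closed(F):
    """F is a collection of frozensets over some finite ground set.
    F is downwards-closed if every proper subset of every member of F
    is again a member of F."""
    return all(frozenset(c) in F
               for S in F
               for r in range(len(S))
               for c in combinations(S, r))
```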
Let $Y$ be a finite set. A family $\mathcal{F} \subseteq 2^Y$ of subsets of $Y$ is downwards-closed if for each $S \in \mathcal{F}$, every subset of $S$ also belongs to $\mathcal{F}$. The boundary of a downwards-closed family $\mathcal{F}$ is defined to be
Suppose that $\Omega$ is a probability space and $\mathcal{F}$ is a random variable such that $\mathcal{F}$ is downwards-closed with probability $1$. Let $B$ be a random event and let $\omega$ be a function. For a subset $S \subseteq Y$, let
Finally, for an element $a \in Y$, let
The following statement is a straightforward, yet useful, corollary of the LCL:
Let $Y$ be a finite set. Let $\Omega$ be a probability space and let $\mathcal{F}$ be a random variable such that, with probability $1$, $\mathcal{F}$ is a nonempty downwards-closed family of subsets of $Y$. For each $a \in Y$, let $\mathcal{B}_a$ be a finite collection of random events such that whenever $S \in \mathcal{F}$ and $S \cup \{a\} \notin \mathcal{F}$ for some $S \subseteq Y$, at least one of the events in $\mathcal{B}_a$ holds. Suppose that there is a function $\omega$ such that for all $a \in Y$, we have
For convenience, we may assume that for each $a \in Y$, the set $\mathcal{B}_a$ is nonempty (we can arrange that by adding the empty event to each $\mathcal{B}_a$). Let $D$ be the digraph with vertex set $2^Y$ and edge set
where the edge goes from $S \cup \{a\}$ to $S$. Thus, we have
which implies that for , ,
Moreover, if $S' \subseteq S$, then all directed paths from $S$ to $S'$ in $D$ have length exactly $|S \setminus S'|$.
Since $\mathcal{F}$ is downwards-closed, it is out-closed in $D$. Let $X$ be a random set of edges defined by
We claim that $X$ is an $\mathcal{F}$-cut. Indeed, consider any edge from $S \cup \{a\}$ to $S$ and suppose that we have $S \cup \{a\} \notin \mathcal{F}$ and $S \in \mathcal{F}$. By definition, at least one event in $\mathcal{B}_a$ holds; but then the edge belongs to $X$, as desired.
Let be a function satisfying (3.1.1) and let be given by . Note that for any , we have .
Let , , and . Then
Let be a set with such that and let . We have
Since and takes values in , we have , so
3.2 The LCL implies the Lopsided LLL
In this subsection we use the LCL to prove the Lopsided LLL, which is a strengthening of the standard LLL.
Theorem 3.2 (Lopsided Lovász Local Lemma, ).
Let $A_1$, …, $A_n$ be random events in a probability space $\Omega$. For each $1 \le i \le n$, let $D_i$ be a subset of $\{1, \ldots, n\} \setminus \{i\}$ such that for all $J \subseteq \{1, \ldots, n\} \setminus (D_i \cup \{i\})$, we have
$$\Pr\Bigl[A_i \,\Big|\, \bigwedge_{j \in J} \overline{A_j}\Bigr] \le \Pr[A_i].$$
Suppose that there exists a function $\omega \colon \{1, \ldots, n\} \to [0, 1)$ such that for every $1 \le i \le n$, we have
$$\Pr[A_i] \le \omega(i) \prod_{j \in D_i} \bigl(1 - \omega(j)\bigr).$$
Then
$$\Pr\Bigl[\,\bigwedge_{i=1}^n \overline{A_i}\,\Bigr] \ge \prod_{i=1}^n \bigl(1 - \omega(i)\bigr) > 0.$$
We will use Theorem 3.1. Set $Y := \{1, \ldots, n\}$ and define the random variable $\mathcal{F}$ so that a set $S \subseteq Y$ belongs to $\mathcal{F}$ if and only if $\bigwedge_{i \in S} \overline{A_i}$ holds. It follows that $\mathcal{F}$ is a nonempty downwards-closed family of subsets of $Y$, and $Y \in \mathcal{F}$ if and only if $\bigwedge_{i=1}^n \overline{A_i}$ holds. Therefore, we can apply Theorem 3.1 with $\mathcal{B}_i := \{A_i\}$ for each $i \in Y$.
The above derivation of the Lopsided LLL from Theorem 3.1 clarifies the precise relationship between the two statements. Essentially, Theorem 3.1 reduces to the classical LLL under the following two main assumptions: (1) the family $\mathcal{F}$ contains an inclusion-maximum element; and (2) each of the sets $\mathcal{B}_a$ is a singleton, containing only one “bad” event. Neither of these assumptions is satisfied in the applications discussed later, where the LCL outperforms the LLL.
3.3 First example: hypergraph coloring
In this subsection we provide some intuition behind the LCL using a very basic example: coloring uniform hypergraphs with $q$ colors.
Let $H$ be a $\Delta$-regular $k$-uniform hypergraph with vertex set $V$ and edge set $E$, and suppose we want to establish a relation between $\Delta$ and $k$ that guarantees that $H$ is $q$-colorable. A straightforward application of the LLL gives the bound
$$e \, q^{1-k} \bigl(k(\Delta - 1) + 1\bigr) \le 1,$$
which is equivalent to
$$\Delta \le \frac{q^{k-1}}{ek} - \frac{1}{k} + 1.$$
Let us now explain how to apply the LCL (in the simplified form of Theorem 3.1) to this problem. Choose a coloring $\varphi \colon V \to \{1, \ldots, q\}$ uniformly at random. Define $\mathcal{F}(\varphi) \subseteq 2^V$ by
$$\mathcal{F}(\varphi) := \{S \subseteq V \,:\, \text{no edge } e \in E \text{ with } e \subseteq S \text{ is } \varphi\text{-monochromatic}\}.$$
Clearly, $\mathcal{F}(\varphi)$ is downwards-closed, and, since we always have $\emptyset \in \mathcal{F}(\varphi)$, it is nonempty. Moreover, $V \in \mathcal{F}(\varphi)$ if and only if $\varphi$ is a proper coloring of $H$. Therefore, if we can apply Theorem 3.1 to show that $\Pr[V \in \mathcal{F}(\varphi)] > 0$, then $H$ is $q$-colorable.
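For comparison, the symmetric-LLL computation mentioned at the start of this subsection (a fixed edge is monochromatic with probability $q^{1-k}$, and it shares a vertex with at most $k(\Delta - 1)$ other edges) can be checked numerically. The following sketch and its names are ours:

```python
import math

def lll_condition_holds(q, k, delta):
    """Symmetric LLL check for q-coloring a delta-regular k-uniform
    hypergraph: a fixed edge is monochromatic with probability
    p = q**(1 - k) and shares a vertex with at most d = k*(delta - 1)
    other edges; the symmetric LLL applies when e * p * (d + 1) <= 1."""
    p = q ** (1 - k)
    d = k * (delta - 1)
    return math.e * p * (d + 1) <= 1
```

For example, for $q = 2$ and $k = 10$, the condition holds for $\Delta = 18$ but already fails for $\Delta = 20$.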
In order to apply Theorem 3.1, we have to specify, for each $v \in V$, a finite family $\mathcal{B}_v$ of “bad” random events such that whenever $S \in \mathcal{F}(\varphi)$ and $S \cup \{v\} \notin \mathcal{F}(\varphi)$, at least one of the events in $\mathcal{B}_v$ holds. Notice that if $S \cup \{v\} \notin \mathcal{F}(\varphi)$, i.e., for some edge $e \in E$, we have $e \subseteq S \cup \{v\}$ and $e$ is $\varphi$-monochromatic, then, since $S \in \mathcal{F}(\varphi)$, there must exist at least one $\varphi$-monochromatic edge $e \ni v$. Thus, we can set
where the event $B_e$ happens if and only if $e$ is $\varphi$-monochromatic. Since $H$ is $\Delta$-regular, $|\mathcal{B}_v| = \Delta$.
We will assume that is a constant function. In that case, for any , . Let and let be such that . To verify (3.1.1), we require an upper bound on the quantity . By definition,
so it is sufficient to upper bound for some set . Since
we just need to find a set such that the conditional probability for is easy to bound. Moreover, we would like to be as small as possible (to minimize the factor ).
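Since distinct vertices are colored independently, the probability that a fixed $k$-edge is monochromatic equals $q^{1-k}$, with or without conditioning on the color of one of its vertices. This can be sanity-checked by brute-force enumeration (illustrative helper, names ours):

```python
from itertools import product

def mono_prob(q, k, fixed_first=None):
    """Exact probability that a uniformly random q-coloring of the k
    vertices of an edge is monochromatic, optionally conditioning on the
    color of one fixed vertex. By independence of the vertex colors,
    both the unconditional and the conditional value equal q**(1 - k)."""
    colorings = [c for c in product(range(q), repeat=k)
                 if fixed_first is None or c[0] == fixed_first]
    mono = sum(len(set(c)) == 1 for c in colorings)
    return mono / len(colorings)
```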
Since the colors of distinct vertices are independent, the events and “” are independent whenever . Therefore, for ,
(The inequality might be strict if , in which case as well, due to our convention regarding conditional probabilities; see Remark 2.4.) Thus, it is natural to take , which gives
Hence it is enough to ensure that $\omega$ satisfies
A straightforward calculation shows that the following condition is sufficient:
or, a bit more crudely,