Complete minors in graphs without sparse cuts
Abstract
We show that if $G$ is a graph on $n$ vertices, with all degrees comparable to some $d$, and without a sparse cut, for a suitably chosen notion of sparseness, then it contains a complete minor of order $\sqrt{nd/\log d}$.
As a corollary we determine the order of a largest complete minor one can guarantee in $d$-regular graphs for which the second largest eigenvalue is bounded away from $d$, in jumbled graphs, and in random $d$-regular graphs, for almost all $d$.
1 Introduction
We say that a graph $G$ contains a graph $H$ as a minor ($G \succeq H$ for short) if we can obtain a graph isomorphic to $H$ from $G$ by a series of contractions of edges and deletions of vertices and edges of $G$. Equivalently, $G \succeq H$ if there are disjoint subsets $\{B_v\}_{v \in V(H)}$ of $V(G)$ such that each $B_v$ induces a connected subgraph of $G$ and there is an edge between $B_u$ and $B_w$ for every $uw \in E(H)$. The contraction clique number of $G$, denoted by $\mathrm{ccl}(G)$, is defined as the largest integer $t$ such that $G \succeq K_t$.
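To make the witness definition concrete, the following Python sketch (our illustration, not part of the paper; the adjacency-set representation and the function name are ours) checks whether given branch sets witness a complete minor:

```python
from collections import deque

def is_kt_minor_witness(adj, branch_sets):
    """Check that branch_sets witness a K_t-minor in the graph adj.

    adj: dict mapping each vertex to the set of its neighbours.
    branch_sets: list of t vertex sets B_1, ..., B_t.  They must be
    pairwise disjoint, each must induce a connected subgraph, and
    every pair must be joined by at least one edge.
    """
    seen = set()
    for b in branch_sets:                      # pairwise disjoint
        if seen & b:
            return False
        seen |= b
    for b in branch_sets:                      # each B_i is connected
        start = next(iter(b))
        reached, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for u in adj[v] & b:               # BFS inside B_i only
                if u not in reached:
                    reached.add(u)
                    queue.append(u)
        if reached != b:
            return False
    for i in range(len(branch_sets)):          # an edge between every pair
        for j in range(i + 1, len(branch_sets)):
            if not any(adj[v] & branch_sets[j] for v in branch_sets[i]):
                return False
    return True
```

For instance, on the 6-cycle the three pairs of consecutive vertices witness a $K_3$-minor.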
Due to the importance of minors in graph theory it is natural to expect an abundance of results and conjectures providing sufficient conditions for the existence of large complete minors. One such example is Hadwiger's conjecture from 1943, which states $\mathrm{ccl}(G) \ge \chi(G)$, where $\chi(G)$ denotes the chromatic number of $G$. If true, this would be a far-reaching generalisation of the four-colour theorem [5]. So far it has only been resolved for graphs with $\chi(G) \le 6$ (see, e.g., [34]). For larger values of $\chi(G)$ not even the weaker bound $\mathrm{ccl}(G) \ge \chi(G)/C$, for an absolute constant $C$, is known to be true. Currently the best bound is of order $\chi(G)/\sqrt{\log \chi(G)}$, obtained independently by Kostochka [17] and by Thomason [29]. Bollobás, Catlin, and Erdős [7] showed that if there exists a counterexample to Hadwiger's conjecture then it has to be atypical, that is, the conjecture holds for almost all graphs. More precisely, they showed that $G(n, 1/2)$, a binomial random graph with edge probability $1/2$, with high probability satisfies $\mathrm{ccl}(G) = (1 + o(1))\,n/\sqrt{\log_2 n}$, whereas it is known that $\chi(G) = (1 + o(1))\,n/(2\log_2 n)$ with high probability (see, e.g., [15]).
Other examples include results on large complete minors in graphs with large girth [10, 19, 21, 32], $K_{s,s'}$-free graphs [19, 22], graphs without small vertex separators [3, 16, 27], lifts of graphs [12], random graphs [13] and random regular graphs [14], and others. Krivelevich and Sudakov [19] studied complete minors in graphs with good vertex expansion properties. We complement this by investigating a connection between the contraction clique number of a graph and its edge expansion properties. It turns out that, in some cases, this is the right parameter to look at: as a straightforward corollary of Theorem 1.1 we determine the order of magnitude of a largest clique minor one can guarantee in classes of random and pseudorandom graphs, for which a result from [19] falls short by a logarithmic factor. Another advantage over vertex expansion is that edge expansion is often easier to verify.
Let $G$ be a graph with $n$ vertices. Given a subset $S \subseteq V(G)$, the edge-expansion of $S$ is defined as
$$h_S = \frac{e(S, V(G) \setminus S)}{|S|},$$
where $e(S, V(G) \setminus S)$ denotes the number of edges with exactly one endpoint in $S$. In other words, $h_S$ denotes the average number of edges a vertex in $S$ sends across the cut $(S, V(G) \setminus S)$. The Cheeger constant of $G$ is then defined as the smallest edge-expansion over all subsets of size at most $n/2$:
$$h(G) = \min_{S \subseteq V(G),\ 0 < |S| \le n/2} h_S.$$
It is a standard exercise to show that every graph with $m$ edges admits a nearly balanced cut which contains at most $m/2$ edges, thus $h(G) \le d/2$ for any graph with average degree $d$. However, it could happen that in highly unbalanced cuts we actually have a stronger expansion. To capture this, for $\beta \in (0, 1/2]$ we introduce a restricted Cheeger constant defined as follows:
$$h_\beta(G) = \min_{S \subseteq V(G),\ 0 < |S| \le \beta n} h_S.$$
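On small graphs these quantities can be checked directly; the following Python sketch (our illustration, exponential in the number of vertices and meant only for sanity checks) computes $h_S$ and the (restricted) Cheeger constant by brute force:

```python
from itertools import combinations

def edge_expansion(adj, S):
    """h_S: the number of edges leaving S, divided by |S|."""
    return sum(len(adj[v] - S) for v in S) / len(S)

def cheeger(adj, beta=0.5):
    """Restricted Cheeger constant h_beta: the minimum of h_S over all
    nonempty subsets S of size at most beta*n.  The default beta = 1/2
    recovers the usual Cheeger constant h(G)."""
    vertices = list(adj)
    best = float("inf")
    for k in range(1, int(beta * len(vertices)) + 1):
        for S in combinations(vertices, k):
            best = min(best, edge_expansion(adj, set(S)))
    return best
```

On the 6-cycle, for example, the minimising set is an arc of three vertices, giving $h(C_6) = 2/3$, while for $\beta = 1/5$ only singletons are allowed and $h_{1/5}(C_6) = 2$.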
With this notation at hand, we are ready to state our main result.
Theorem 1.1.
For every $\delta > 0$ there exist $c > 0$ and $C > 0$ such that the following holds for all $n$ and $d \ge C$. Let $G$ be a graph with $n$ vertices and maximum degree at most $d$. If $h(G) \ge \delta d$ and $h_\delta(G) \ge 2\delta d$, then
$$\mathrm{ccl}(G) \ge c\sqrt{\frac{nd}{\log d}}.$$
It should be noted that in Theorem 1.1 we allow $d$ to depend on $n$. This is also the case in all other stated results.
Using a first-moment calculation, Fountoulakis, Kühn, and Osthus [13] showed that with high probability
(1) $\mathrm{ccl}(G(n, d/n)) = O\left(\sqrt{\dfrac{nd}{\log d}}\right)$
for $d \ge C$, with $C$ a sufficiently large constant.^1 For such $d$ we have that with high probability $G(n, d/n)$ contains a large induced subgraph that satisfies the assumptions of Theorem 1.1 with, say, a small constant $\delta$ and some parameter $d' = \Theta(d)$, thus the bound on $\mathrm{ccl}(G)$ given in Theorem 1.1 is, in general, optimal up to a constant factor.

^1 Throughout the paper, $\log$ denotes the natural logarithm.
While the second requirement on the expansion in Theorem 1.1 ($h_\delta(G) \ge 2\delta d$) might seem restrictive at first, it will easily be satisfied in all our applications. The role of this assumption will become apparent in the proof. It remains an interesting problem to determine whether one can guarantee the same lower bound on $\mathrm{ccl}(G)$ assuming only $h(G) \ge \delta\Delta$, where $\Delta$ is the maximum degree of $G$. Adapting the proof of Theorem 1.1, in [18] we showed that this is the case when $\Delta$ is a constant.
Theorem 1.2.
For every $\delta > 0$ there exist $c > 0$ and $C > 0$ such that the following holds. Let $G$ be a graph with $n$ vertices and maximum degree at most $\Delta$. If $h(G) \ge \delta\Delta$ then $\mathrm{ccl}(G) \ge c\sqrt{n}$.
Note that Theorem 1.2 is applicable with any $\Delta$; however, as already remarked, it is most likely optimal only in the case where $\Delta$ is a constant.
1.1 Applications
As a straightforward corollary of Theorem 1.1, we improve and extend (and reprove) several results. Previous proofs of some of these results relied on more specific, and difficult to show, properties of studied graphs.
One attractive corollary of Theorem 1.1 relates the size of a largest complete minor in a $d$-regular graph to the second largest eigenvalue $\lambda$ of its adjacency matrix. This is not surprising knowing that $\lambda$ governs the number of edges in a cut (see, e.g., [4, Theorem 9.2.1]).
Corollary 1.3.
For every $\varepsilon > 0$ there exist $c > 0$ and $C > 0$ such that the following holds. Let $G$ be a $d$-regular graph with $n$ vertices, for some $d \ge C$, and let $\lambda$ be the second largest eigenvalue of the adjacency matrix of $G$. If $\lambda \le (1 - \varepsilon)d$, then $\mathrm{ccl}(G) \ge c\sqrt{nd/\log d}$.
Proof.
By [4, Theorem 9.2.1] (the expander mixing lemma), every nonempty $S \subseteq V(G)$ with $|S| \le (1 - \beta)n$ satisfies $e(S, V(G) \setminus S) \ge (d - \lambda)\beta|S| \ge \varepsilon\beta d|S|$. In particular $h(G) \ge \varepsilon d/2$ and $h_\delta(G) \ge (1 - \delta)\varepsilon d$. Taking $\delta = \varepsilon/4$, both assumptions of Theorem 1.1 are satisfied and the claim follows. ∎
Using known bounds on the likely value of the second largest eigenvalue of the adjacency matrix of a random $d$-regular graph [8, 20, 33] (these results bound the second largest absolute eigenvalue, which is stronger than what we need) in the case $d \ge C$, for a sufficiently large constant $C$, and a result by Bollobás [6] on the Cheeger constant of random $d$-regular graphs for $d < C$, we immediately obtain the following result from Corollary 1.3 and Theorem 1.2.
Corollary 1.4.
For almost all values of $d = d(n)$, a $d$-regular graph $G_{n,d}$ chosen uniformly at random among all $d$-regular graphs with $n$ vertices with high probability satisfies
$$\mathrm{ccl}(G_{n,d}) = \Theta\left(\sqrt{\frac{nd}{\log d}}\right).$$
This extends a result by Fountoulakis, Kühn, and Osthus [14] who showed the same statement for constant $d$ (whereas we allow $d$ to be a function of $n$). Optimality of Corollary 1.4 can be derived as follows: Calculations in [13] show that with probability at least $1 - e^{-\gamma n \log d}$ we have $\mathrm{ccl}(G(n, d/n)) = O(\sqrt{nd/\log d})$, for a constant $\gamma$ of our choice (having an impact on the hidden constant in $O(\cdot)$). For $d = o(\sqrt{n})$, the number of (labelled) $d$-regular graphs with $n$ vertices is of order
$$e^{-\lambda - \lambda^2 + o(1)}\,\frac{(nd)!}{(nd/2)!\,2^{nd/2}\,(d!)^n},$$
where $\lambda = (d-1)/2$ (see [23, 24, 25]). A simple calculation using Stirling's approximation shows that by taking $p = d/n$ the random graph $G(n, p)$ is $d$-regular with probability at least $e^{-\gamma_0 n \log d}$, for some absolute constant $\gamma_0$. Therefore we conclude that a random $d$-regular graph satisfies $\mathrm{ccl}(G_{n,d}) = O(\sqrt{nd/\log d})$ with high probability.
As our last application we mention the problem of estimating the contraction clique number in jumbled graphs, first studied by Thomason [30]. Let $G$ be a graph with $n$ vertices and let $p \in (0, 1)$ and $\beta > 0$. We say that $G$ is $(p, \beta)$-jumbled if for every subset $S \subseteq V(G)$ we have
$$\left| e(S) - p\binom{|S|}{2} \right| \le \beta|S|,$$
where $e(S)$ denotes the number of edges of $G$ with both endpoints in $S$. Krivelevich and Sudakov [19] showed that every $(p, \beta)$-jumbled graph contains a complete minor of order
(2) $\Omega\left(n\sqrt{\dfrac{p}{\log n}}\right),$
and the question of whether this bound can be improved to $\Omega\big(n\sqrt{p/\log(np)}\big)$ was raised in [14]. Note that the improved bound matches the one in (2) in case $np \ge n^{\varepsilon}$ for a constant $\varepsilon > 0$, while for smaller $np$ the bound in (2) falls short by a factor of $\sqrt{\log n/\log(np)}$. Here we settle it in the affirmative for all $p$.
Corollary 1.5.
Let $G$ be a $(p, \beta)$-jumbled graph with $n$ vertices, for some $\beta \le np/C$ where $C$ is a sufficiently large constant. Then $\mathrm{ccl}(G) = \Omega\big(n\sqrt{p/\log(np)}\big)$.
2 Preliminaries
We use standard graph-theoretic notation. In particular, given a graph $G$ and a vertex $v \in V(G)$, we denote by $N(v)$ the neighbourhood of $v$. Given a subset $X \subseteq V(G)$, we abbreviate with $N(X)$ the external neighbourhood of $X$, that is
$$N(X) = \Big(\bigcup_{v \in X} N(v)\Big) \setminus X.$$
The distance between two vertices in $G$ is defined as the length (number of edges) of a shortest path between them (thus every vertex is at distance 0 from itself). We say that a subset of vertices is connected if it induces a connected subgraph of $G$.
We start by relating the Cheeger constant to the vertex expansion of a graph and, as a corollary, give a lower bound on the size of a ball around a subset of vertices.
Lemma 2.1.
Let $G$ be a graph with $n$ vertices and maximum degree $\Delta$. Then for every subset $X \subseteq V(G)$ of size $|X| \le n/2$ we have
$$|N(X)| \ge \frac{h(G)}{\Delta}|X|.$$
Proof.
By the definition of $h(G)$ there are at least $h(G)|X|$ edges between $X$ and $V(G) \setminus X$. The desired inequality follows from the fact that every vertex in $N(X)$ is incident to at most $\Delta$ of these edges. ∎
The following lemma shows that the ball around a subset of vertices grows exponentially until it expands to at least half of the vertex set.
Lemma 2.2.
Let $G$ be a graph with $n$ vertices and maximum degree $\Delta$. Then for every subset $X \subseteq V(G)$ of vertices and any integer $r \ge 0$, the set
$$B_r(X) = \{v \in V(G) : \mathrm{dist}(v, X) \le r\}$$
is of size at least
$$\min\left\{\frac{n}{2},\ \left(1 + \frac{h(G)}{\Delta}\right)^{r}|X|\right\}.$$
Proof.
We prove the lower bound on $|B_r(X)|$ by induction on $r$. The claim trivially holds for $r = 0$. Suppose it holds for some $r \ge 0$. If $|B_r(X)| \ge n/2$ then $|B_{r+1}(X)| \ge n/2$ as well, in which case we are done. Otherwise, if $|B_r(X)| < n/2$ then we can apply Lemma 2.1 to conclude $|N(B_r(X))| \ge \frac{h(G)}{\Delta}|B_r(X)|$. The desired bound now follows from $B_{r+1}(X) \supseteq B_r(X) \cup N(B_r(X))$. ∎
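The ball $B_r(X)$ from Lemma 2.2 is computed by a breadth-first search; a short Python sketch (our illustration, not part of the paper):

```python
from collections import deque

def ball(adj, X, r):
    """B_r(X): all vertices of the graph adj at distance at most r
    from the set X, found by breadth-first search."""
    dist = {v: 0 for v in X}
    queue = deque(X)
    while queue:
        v = queue.popleft()
        if dist[v] == r:            # do not expand past radius r
            continue
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return set(dist)
```

On the 10-cycle, for example, $|B_r(\{v\})| = 2r + 1$ for small $r$, comfortably above the guaranteed growth factor $1 + h(G)/\Delta = 1.2$ per step.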
The following lemma shows that one can find a subset of $V(G)$ of prescribed size that has a significantly larger external neighbourhood than the one given by Lemma 2.1, and which moreover induces a connected subgraph. The proof and the later use of the lemma are inspired by a similar statement from [19].
Lemma 2.3.
Let $h > 0$ and $\beta \in (0, 1/2]$, and suppose $G$ is a graph with $n$ vertices and maximum degree $\Delta$. If $h_\beta(G) \ge h$ then for every integer $s \le \beta n/2$ and a vertex $v \in V(G)$, there exists a connected subset $S \subseteq V(G)$ of size $|S| = s$ which contains $v$ and satisfies
$$|N(S)| \ge \frac{hs}{2}.$$
Proof.
We prove the claim by induction on $s$. For $s = 1$ we take $S = \{v\}$. As $\deg(v) \ge h_\beta(G) \ge h$, the set $S$ satisfies the desired property. Suppose now that the statement holds for some $s \ge 1$, and let $S$ be one such subset of size $s$ and $N = N(S)$ its external neighbourhood. If $|N| \ge h(s+1)/2 + 1$ then we can take $S'$ to be the union of $S$ and an arbitrary vertex in $N$. Otherwise, from $|S \cup N| \le \beta n$ we have $h_{S \cup N} \ge h$, thus there are at least
$$h(|S| + |N|) \ge hs$$
edges between $S \cup N$ and the rest of the graph. As none of these edges is incident to $S$, there exists a vertex $u \in N$ such that $|N(S \cup \{u\})| \ge h(s+1)/2$. Adding such a vertex to $S$ gives the desired set. ∎
2.1 Random walks
A lazy random walk on a graph $G$ with vertex set $V(G)$ is a Markov chain whose matrix of transition probabilities $P = (p_{uv})$ is defined by
$$p_{uv} = \begin{cases} 1/2, & u = v,\\ \frac{1}{2\deg(u)}, & uv \in E(G),\\ 0, & \text{otherwise.}\end{cases}$$
In other words, if at some point we are at a vertex $u$ then with probability $1/2$ we stay at $u$ and with probability $1/2$ we move to a uniformly chosen neighbour of $u$. It is easy to verify (and is well known) that this Markov chain has the stationary distribution $\pi$ given by $\pi(v) = \deg(v)/(2|E(G)|)$. The following lemma gives an upper bound on the probability that a lazy random walk avoids some subset $W \subseteq V(G)$.
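The transition probabilities and the claimed stationary distribution are easy to verify exactly; a Python sketch (our illustration, using exact rational arithmetic and assuming no isolated vertices):

```python
from fractions import Fraction

def lazy_transition(adj):
    """Transition probabilities of the lazy random walk on adj:
    stay put with probability 1/2, otherwise move to a uniformly
    chosen neighbour.  Assumes every vertex has a neighbour."""
    P = {}
    for u in adj:
        for v in adj:
            if u == v:
                P[u, v] = Fraction(1, 2)
            elif v in adj[u]:
                P[u, v] = Fraction(1, 2 * len(adj[u]))
            else:
                P[u, v] = Fraction(0)
    return P

# Verify that pi(v) = deg(v) / (2|E|) satisfies pi P = pi, on a path P_3.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
P = lazy_transition(adj)
two_m = sum(len(adj[v]) for v in adj)                  # 2|E|
pi = {v: Fraction(len(adj[v]), two_m) for v in adj}
for v in adj:
    assert sum(pi[u] * P[u, v] for u in adj) == pi[v]
```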
Lemma 2.4.
Let $G$ be a graph with $n$ vertices and maximum degree $\Delta$. Then for any $W \subseteq V(G)$ the probability that a lazy random walk on $G$ which starts from the stationary distribution and makes $t$ steps does not visit $W$ is at most
$$\exp\left(-\frac{t\,h(G)^2\,|W|}{16\,\Delta^3\,n}\right).$$
For the rest of this section we prove Lemma 2.4.
Let $1 = \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n \ge 0$ be the eigenvalues of the transition matrix $P$. The spectral gap of $P$ is defined as $\gamma = 1 - \lambda_2$. The following result of Mossel et al. [26] (more precisely, the first case of [26, Theorem 5.4]) relates the spectral gap to the probability that a lazy random walk does not leave a specific subset. We state a version tailored to our application.
Theorem 2.5.
Let $G$ be a connected graph with $n$ vertices and let $\gamma$ be the spectral gap of the transition matrix $P$. Then the probability that a lazy random walk of length $t$ which starts from a vertex chosen according to the stationary distribution does not leave a nonempty subset $U \subseteq V(G)$ is at most
$$\left(1 - \frac{\gamma\,\pi(V(G) \setminus U)}{2}\right)^{t}.$$
The second ingredient is a result of Jerrum and Sinclair [28, Lemma 3.3] which relates the spectral gap of $P$ to its conductance $\Phi$, defined as
$$\Phi = \min_{U :\ 0 < \pi(U) \le 1/2} \frac{\sum_{u \in U,\ v \notin U} \pi(u)\,p_{uv}}{\pi(U)}.$$
Note that $\Phi \ge \frac{h(G)}{2\Delta}$, where $\Delta$ denotes the maximum degree of $G$.
Lemma 2.6.
Let $G$ be a connected graph. Then the spectral gap of $P$ is at least $\Phi^2/2$.
We are now ready to prove Lemma 2.4.
Proof of Lemma 2.4.
Consider the nonempty subset $U = V(G) \setminus W$. Theorem 2.5 states that a lazy random walk never leaves the set $U$ with probability at most
(3) $\left(1 - \dfrac{\gamma\,\pi(W)}{2}\right)^{t} \le \exp\left(-\dfrac{t\,\gamma\,\pi(W)}{2}\right) \le \exp\left(-\dfrac{t\,h(G)^2\,\pi(W)}{16\,\Delta^2}\right),$
where in the last inequality we used Lemma 2.6 and $\Phi \ge h(G)/(2\Delta)$. Note that the events 'leave $U$' and 'visit $W$' are the same. From the trivial bounds $\deg(v) \ge 1$ and $2|E(G)| \le \Delta n$ we get
$$\pi(W) = \sum_{v \in W} \frac{\deg(v)}{2|E(G)|} \ge \frac{|W|}{\Delta n},$$
which after plugging into (3) gives the desired bound on the probability that the random walk misses $W$. ∎
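For intuition, the avoidance probability bounded by Lemma 2.4 can be computed exactly on small graphs by iterating the transition matrix restricted to $V(G) \setminus W$; a Python sketch (our illustration, not part of the paper):

```python
def avoid_probability(adj, W, t):
    """Exact probability that a lazy random walk on adj, started from
    the stationary distribution, makes t steps without visiting W.
    Iterates the substochastic transition matrix restricted to V \\ W."""
    two_m = sum(len(adj[v]) for v in adj)              # 2|E|
    prob = {v: len(adj[v]) / two_m for v in adj if v not in W}
    for _ in range(t):
        nxt = {v: prob[v] / 2 for v in prob}           # lazy: stay put
        for v in prob:
            share = prob[v] / (2 * len(adj[v]))
            for u in adj[v]:
                if u not in W:                         # mass reaching W is lost
                    nxt[u] += share
        prob = nxt
    return sum(prob.values())
```

On the 4-cycle with $W$ a single vertex the probability is $3/4$ at $t = 0$ and $5/8$ after one step, and it keeps decreasing with $t$, in line with the lemma.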
3 Proof of Theorem 1.1
The proof of Theorem 1.1 combines an approach of Plotkin, Rao, and Smith [27] with an idea of Krivelevich and Sudakov [19] of using random walks to find connected subsets of $V(G)$ with some desired properties. That being said, the main new ingredient is the following lemma.
Lemma 3.1.
For every $\alpha > 0$ there exist positive constants $C$ and $c$ such that the following holds. Let $G$ be a graph with $n$ vertices, maximum degree at most $\Delta$, and $h(G) \ge \alpha\Delta$. Given $K, s \in \mathbb{N}$ such that $K \ge 2$, and subsets $W_1, \dots, W_K \subseteq V(G)$ where each $W_i$ is of size $|W_i| \ge s$, there exists a connected set $T \subseteq V(G)$ of size at most
$$C\left(\frac{n\log K}{s} + K\log n\right)$$
which intersects every $W_i$.
We mention in passing that, for many pairs of values of $K$ and $s$, by choosing the subsets $W_i$ of size $s$ at random one can see that there is a family whose covering number has order of magnitude $\frac{n\log K}{s}$. Thus Lemma 3.1 delivers a nearly optimal promise of the size of a hitting set, with an additional, and for us very important, benefit of this set inducing a connected subgraph of $G$.
Proof.
Let
$$t = C'\,\frac{n\log K}{s}$$
for a sufficiently large constant $C' = C'(\alpha)$, and consider a lazy random walk $R$ in $G$ which starts from the stationary distribution and makes $t$ steps. A desired connected subset $T$ is constructed by taking the union of $R$ with a shortest path between $R$ and $W_i$, for each $i \in [K]$. We argue that with positive probability $T$ is of the required size.

For $i \in [K]$, let $X_i$ be the random variable measuring the distance from $R$ to $W_i$ in $G$. Then the set $T$ has expected size at most $t + 1 + \sum_{i=1}^{K} \mathbb{E}[X_i]$.

In order to estimate the expectation of $X_i$, for a positive integer $j$ write $B_j = B_j(W_i)$ and note that $X_i > j$ only if the walk $R$ avoids $B_j$. Trivially $\Pr[X_i > j] \le \Pr[R \cap B_j = \emptyset]$ and, as $G$ is connected, $X_i \le n$. Hence
$$\mathbb{E}[X_i] = \sum_{j \ge 0}\Pr[X_i > j] \le \sum_{j \ge 0}\Pr[R \cap B_j = \emptyset].$$
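The construction in this proof is straightforward to simulate; a Python sketch (our illustration; the properties asserted below, connectivity and hitting every $W_i$, hold by construction for any outcome of the walk):

```python
import random
from collections import deque

def shortest_path(adj, sources, targets):
    """BFS from the set `sources`; returns a shortest path (endpoints
    included) from some source vertex to the set `targets`."""
    parent = {v: None for v in sources}
    queue = deque(sources)
    while queue:
        v = queue.popleft()
        if v in targets:
            path = [v]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path
        for u in adj[v]:
            if u not in parent:
                parent[u] = v
                queue.append(u)
    raise ValueError("targets unreachable")

def hitting_connected_set(adj, sets, t, rng):
    """Lemma 3.1 style construction: run a lazy random walk of length t
    started from the stationary distribution, then join its trajectory
    to each W_i by a shortest path.  The result is connected and
    intersects every W_i; its size is the quantity the lemma bounds."""
    vertices = sorted(adj)
    v = rng.choices(vertices, weights=[len(adj[u]) for u in vertices])[0]
    walk = {v}
    for _ in range(t):
        if rng.random() < 0.5:                 # lazy step: stay put
            continue
        v = rng.choice(sorted(adj[v]))
        walk.add(v)
    T = set(walk)
    for W in sets:
        T |= set(shortest_path(adj, walk, W))
    return T
```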
The proof of Theorem 1.1 splits into three cases, depending on the relation between $d$ and $n$: the dense case $d \ge \beta n$, for some small constant $\beta = \beta(\delta)$, the intermediate case $e^{\sqrt{\log n}} \le d < \beta n$, and the most difficult sparse case $d < e^{\sqrt{\log n}}$. In the first two cases we shall make use of a classical result obtained independently by Kostochka [17] and by Thomason [29], which states that any graph with average degree $r$ contains a complete minor of order $\Omega(r/\sqrt{\log r})$. Remarkably, Thomason [31] later determined the correct constant in the leading term.
Let us say a few words about the similarities and differences between the sparse and intermediate cases. In both cases we follow almost the same arguments, however with different goals. As already said, in the intermediate case we do not directly show that $G$ contains a large complete minor, but rather only a minor of some graph $H$ with $N$ vertices and linear (in $N$) average degree. Applying the Kostochka-Thomason result to $H$ gives a complete minor of order $\Omega(N/\sqrt{\log N})$. Note that this is the same strategy as the one taken by Krivelevich and Sudakov [19], with the main technical ingredient being Lemma 2.4. However, in the sparse case such a bound falls short of the desired one, and we have to show directly that $G$ contains a large complete minor. The most important difference is that instead of Lemma 2.4 we use Lemma 3.1.
For technical reasons our proof of the sparse case fails to work past $d = e^{\sqrt{\log n}}$, thus to avoid worrying about exact calculations we choose this value as the delimiter between the two cases. Since, as already mentioned, the arguments are quite similar, we only spell out the details of the sparse case and then describe the necessary changes for the intermediate one.
Proof of Theorem 1.1.
The actual value of the constant $c$ is not important and it will be clear that the calculations work if it is chosen sufficiently small.
Dense case $d \ge \beta n$.
From $h(G) \ge \delta d$ we conclude that $G$ has linear minimum (and thus average) degree. By the Kostochka-Thomason result it contains a complete minor of order $\Omega(n/\sqrt{\log n}) = \Omega(\sqrt{nd/\log d})$, as desired.
Sparse case $d < e^{\sqrt{\log n}}$.
Let $\alpha = \delta$ and let $C$ be the constant given by Lemma 3.1 (for this $\alpha$). Set
$$K = c\sqrt{\frac{nd}{\log d}} \qquad\text{and}\qquad s = \frac{n}{2K}.$$
Here $K$ denotes the size of the complete minor we aim to find and $s$ is an upper bound on the size of each subset in the witness for such a minor (recall the definition of minors from the very beginning of the paper).
Let $\mathcal{P}$ be the family of all ordered partitions $(B_1, \dots, B_k, A, R)$ of $V(G)$ which satisfy the following constraints:

1. for each $i \in [k]$ we have $|B_i| \le s$, $B_i$ induces a connected subgraph of $G$, and $|N(B_i)| \ge 2s$,

2. all $B_i$'s are mutually disjoint and between every $B_i$ and $B_j$ with $i \neq j$ there exists an edge,

3. $|R| \le n/4$, and

4. either $k < K$ and $|A| \ge n/2$, or $k = K$.
Taking $k = 0$, $A = V(G)$ and $R = \emptyset$ trivially satisfies all the conditions, thus $\mathcal{P}$ is nonempty.
Let $(B_1, \dots, B_k, A, R) \in \mathcal{P}$ be a partition which maximises $k$ and, among all such partitions, one which further maximises $|A|$. We show that then necessarily $k = K$, which by 1 and 2 gives a witness for $\mathrm{ccl}(G) \ge K$. Suppose towards a contradiction that $k < K$.
We first rule out . If then by the first part of 4 we have . From we obtain
with room to spare, thus by the assumption that has maximum degree we get a contradiction:
Otherwise, if then from and we get
Therefore and a contradiction follows as in the previous case.
For the rest of the proof we assume . Let us collect some further properties of the chosen partition. First, note that for every we have
(4) 
Indeed, if this was not the case then from the property 1 we get
(5) 
By adding to (recall that , thus the resulting set is smaller than ), relabelling as and decreasing we obtain a partition which satisfies the desired constraints and has a larger set , contradicting the fact that the chosen partition maximises .
Second, we verify that the subgraph $G_A$ of $G$ induced by $A$ satisfies $h(G_A) \ge \delta d/2$ and $h_\delta(G_A) \ge \delta d$. This is, again, a matter of routine calculations. If there existed a subset $S \subseteq A$ of size $|S| \le |A|/2$ such that
$$e(S, A \setminus S) < \frac{\delta d}{2}|S|,$$
then by removing such $S$ from $A$ and adding it to $R$ we obtain a new partition with the set $R$ of size at most $n/4$ which clearly satisfies all the constraints, again contradicting the choice of the partition. Similarly, if there exists $S \subseteq A$ of size $|S| \le \delta|A|$ such that
(6) $e(S, A \setminus S) < \delta d|S|,$
then from $h_\delta(G) \ge 2\delta d$ we get that $S$ sends more than $\delta d|S|$ edges to $V(G) \setminus A$, which implies we can remove $S$ from $A$ and add it to $R$. Note that the new set $R$ has size at most $n/4$, with room to spare, again yielding a contradiction.
Having these properties at hand we are ready to proceed towards the final contradiction with the maximality of the chosen partition. From the previous paragraph, the subgraph induced by $A$ satisfies the assumptions of Lemma 3.1. As $k < K$ we can apply Lemma 3.1 to this subgraph and the sets $W_i = N(B_i) \cap A$ for $i \in [k]$ (we can take them to be of size at least $s$ by (4)). If $k < K$ then we add sufficiently many 'dummy' sets of size $s$. Let $T$ be the obtained connected set of size
(7) as guaranteed by Lemma 3.1,
which intersects every $W_i$.
For every $i \in [k]$ we have that $T$ is disjoint from $B_i$ and there is an edge between them. The only thing that prevents us from declaring $T$ to be the new set $B_{k+1}$ in the partition (after removing it from $A$) is that we are not able to guarantee the last part of the property 1. In fact, as we have chosen $T$ to be rather small this cannot possibly be the case. However, even if we had chosen $T$ to be of size $s$ such a property would not necessarily hold. To fix this, we use the fact that $T$ is small, which leaves us plenty of space to extend it to a superset which satisfies this inequality. By Lemma 2.3, applied to the subgraph induced by $A$, there exists a connected subset $S \subseteq A$ of size $s - |T|$ which contains some vertex from $T$ and satisfies
$$|N(S) \cap A| \ge \frac{\delta d\,(s - |T|)}{4}.$$
It is worth noting that this is the only place so far where we used an upper bound on the size of $T$, and the previous inequality is the only place where we used the assumption on the restricted Cheeger constant.
The set $B_{k+1} = T \cup S$ is still disjoint from all the sets $B_i$ for $i \in [k]$ and satisfies the property 1. By removing $B_{k+1}$ from $A$ and increasing $k$ we obtain a new partition which by construction satisfies all the constraints, the set $R$ did not change and $k$ has increased; thus a contradiction. This proves that $k = K$.
Remark: The application of Lemma 2.3 explains how we benefit from stronger expansion properties of small subsets. Namely, the restricted Cheeger constant and Lemma 2.3 gave us a connected set which has a large external neighbourhood, in particular large enough to satisfy the property 1. The property 1 is necessary for the equation (5) to hold which, in turn, shows that (4) holds. Without the bound from (4) we would not be able to take sets $W_i$ of size $s$, which would, finally, result in a larger set in (7) than what we need.
Intermediate case $e^{\sqrt{\log n}} \le d < \beta n$.
We borrow an idea from the proof of [19, Theorem 4.4]: in order to obtain the desired bound on $\mathrm{ccl}(G)$ it suffices to show that $G$ contains a minor of a graph $H$ with $N$ vertices, for a suitable $N$, and average degree at least $N/2$. By the Kostochka-Thomason result such a graph contains a complete minor of order $\Omega(N/\sqrt{\log N})$ thus, as $d \ge e^{\sqrt{\log n}}$, we obtain the desired bound.
We show that $G$ contains a minor of such a graph by modifying the proof of the sparse case as follows. Instead of asking in 2 that between every $B_i$ and $B_j$ there exists an edge, we ask that there are, say, at least $K^2/4$ pairs $\{B_i, B_j\}$ for which this holds. Consequently, for this it suffices that each new set is connected to only a constant fraction of the previous ones.
The argument now remains the same until the point where we seek a contradiction with the maximality of the chosen partition. The crucial observation is that we do not need to find a set in the subgraph induced by $A$ which intersects every $W_i$, but only a constant fraction of them. A simple application of Lemma 2.4 gives such a set of significantly smaller size. Exactly as in the proof of the sparse case, we then extend such a set $T$ by finding a connected subset which has good vertex expansion properties. Here it is crucial that $|T|$ is much smaller than $s$, which we can achieve by choosing $c$ to be sufficiently small. We leave the details to the reader. ∎
The proof of Theorem 1.2 follows the same lines as the presented proof, thus we give a short sketch of the necessary changes. For the full proof see [18].
Proof of Theorem 1.2 (sketch).
Define both $K$ and $s$ to be of order $\sqrt{n}$. In the definition of $\mathcal{P}$, we drop the last requirement in 1, and in 4 we only require $|A| \ge n/2$. The rest of the argument proceeds in the same way, with some small further changes. In particular, we replace (4) by a weaker condition on the number of edges between each $B_i$ and $A$.
If this is not true then we can safely move $B_i$ to $R$ and all the constraints would remain satisfied. Such a lower bound on the number of edges, together with the bound on the maximum degree, implies that $N(B_i) \cap A$ is still of size $\Omega(s)$, which allows us to take sets $W_i$ of size $\Omega(s)$ only, instead of $s$. Lemma 3.1 then gives us a set of size
$$O\left(\frac{n\log K}{s}\right),$$
and we completely omit the use of Lemma 2.3 as we no longer require the new set to have an (exceptionally) large external neighbourhood. ∎
4 Concluding remarks
We showed that a good edge expander contains a large complete minor. As a corollary, we determined the size of a largest complete minor in random regular graphs and some families of pseudorandom graphs. These results extend and improve some of the previously known results in a unified way.
Without any doubt, Theorem 1.1 would be aesthetically more appealing if the same order of $\mathrm{ccl}(G)$ would hold without the stronger edge-expansion assumption on (very) unbalanced cuts. Of course, having the present proof of Theorem 1.1 in mind, one can also come up with different assumptions which would work. For example, $h(G) \ge \delta d$ together with an upper bound on the number of edges every small set sends to every other small set. From the point of view of the given applications such an assumption would suffice, however it feels more artificial compared to the restricted Cheeger constant. That being said, we ask the following question.
Question 4.1.
Let $G$ be a graph with $n$ vertices, maximum degree at most $\Delta$, and $h(G) \ge \delta\Delta$, for some constant $\delta > 0$. Is it true that $\mathrm{ccl}(G) = \Omega\big(\sqrt{n\Delta/\log \Delta}\big)$?
In other words, Question 4.1 asks if one can strengthen the conclusion of Theorem 1.2 to match the one of Theorem 1.1.
A possible further direction would be to weaken the assumption that the maximum degree and the Cheeger constant are of the same order. Unfortunately, such a weakening is no longer sufficient to guarantee even a minor of order $\omega(\sqrt{n})$. This can be seen by the following example. Let $G$ be a complete bipartite graph where one side has $\sqrt{n}$ and the other side has $n - \sqrt{n}$ vertices. Even though this graph has a very large Cheeger constant, namely of order $\sqrt{n}$, it does not contain a complete minor of order larger than $\sqrt{n} + 1$: all but at most one of the branch sets have to contain a vertex from the smaller class. It is worth noting that this is in contrast with the case of vertex expansion. A result of Kawarabayashi and Reed [16] shows that if a graph $G$ is such that every subset $S$ of vertices of size at most $n/2$, where $n$ is the number of vertices of $G$, has an external neighbourhood of size at least $\varepsilon|S|$ for some $\varepsilon > 0$, then $G$ contains a complete minor of order $c\sqrt{n}$, for some $c = c(\varepsilon) > 0$. This being said, it remains an interesting question to determine the correct dependency of $\mathrm{ccl}(G)$ on $h(G)$ and the maximum degree of $G$. Some dependency could be retrieved from the proof of Theorem 1.2, however we did not try to optimise it.
Another interesting class of graphs studied by Krivelevich and Sudakov [19] are $K_{s,s'}$-free graphs. While the proof from [19] relies on establishing vertex expansion properties through edge expansion, it is plausible that using some of the ideas presented here could simplify their proof. However, as the optimal results in this direction were already obtained in [19] and they do not seem to follow from our main theorems used as a black box, we did not pursue this.
We now discuss the algorithmic aspect of the problem and of the proof of Theorem 1.1. First, some background on spectral graph theory (for a thorough introduction to the topic see, e.g., [9]). Given a graph $G$, let $h_{\mathrm{vol}}(G)$ be defined as follows:
$$h_{\mathrm{vol}}(G) = \min_{S :\ 0 < \mathrm{vol}(S) \le \mathrm{vol}(V(G))/2} \frac{e(S, V(G) \setminus S)}{\mathrm{vol}(S)},$$
where $\mathrm{vol}(S) = \sum_{v \in S} \deg(v)$. In case $G$ is a $d$-regular graph we have $h_{\mathrm{vol}}(G) = h(G)/d$, and it is actually $h_{\mathrm{vol}}$ that some authors refer to as the Cheeger constant. While we could have stated Theorem 1.1 in terms of $h_{\mathrm{vol}}$, we have opted for $h$ for convenience. The famous Cheeger inequality for graphs [1, 2, 11] states that
(8) $\dfrac{\nu_2}{2} \le h_{\mathrm{vol}}(G) \le \sqrt{2\nu_2},$
where $\nu_2$ denotes the second smallest eigenvalue of the normalised Laplacian of $G$.
where denotes the second smallest eigenvalue of the normalised Laplacian of . A proof of the right hand side of (8) yields a polynomial time algorithm which produces a set such that and .
We can now describe (somewhat informally) a randomised algorithm which, given a graph $G$ satisfying the assumptions of Theorem 1.1, with high probability succeeds in finding a $K_t$-minor in $G$, for some $t = \Omega(\sqrt{nd/\log d})$. We only do it for the sparse case. The other two cases depend on the algorithmic aspects of the Kostochka-Thomason proof, which goes beyond the scope of this paper. The numbers used in the algorithm are (almost) the same as in the proof. Start with the partition given by $k = 0$, $A = V(G)$ and $R = \emptyset$, and in each iteration do the following until $k = K$:

1. If there exists $B_i$ such that $B_i$ has fewer than $s$ neighbours in $A$ then move it to $R$ and decrease $k$;

2. If there exists a vertex in $A$ with fewer than $\delta d/2$ neighbours in $A$ then move it to $R$;

3. If the subgraph $G_A$ of $G$ induced by $A$ has a sparse cut then we can find a subset $S \subseteq A$ such that $\mathrm{vol}(S) \le \mathrm{vol}(A)/2$ and $S$ has small edge-expansion in $G_A$. If $|S| \le |A|/2$ then we keep $S$, and otherwise we take $A \setminus S$ instead. In any case, we can efficiently find a subset $S \subseteq A$ such that $|S| \le |A|/2$ and $S$ has small edge-expansion in $G_A$. Move such $S$ from $A$ to $R$.

4. Otherwise, if no such sparse cut is found then by (8) we have a lower bound on the second smallest eigenvalue of the normalised Laplacian of $G_A$. As the minimum degree of $G_A$ is at least $\delta d/2$ we have $\deg_{G_A}(v) \ge \delta d/2$ for every $v \in A$, thus $h(G_A) = \Omega(\delta d)$. This is somewhat weaker than what we had in the proof, but it nonetheless suffices.
We can now apply the algorithm described in Lemma 3.1. Run a lazy random walk of length in