In the Minimum k-Union problem (MU) we are given a set system with n sets and are asked to select k sets in order to minimize the size of their union. Despite being a very natural problem, it has received surprisingly little attention: the only known approximation algorithm is an O(√n)-approximation due to [Chlamtáč et al. APPROX '16]. This problem can also be viewed as the bipartite version of the Small Set Vertex Expansion problem (SSVE), which we call the Small Set Bipartite Vertex Expansion problem (SSBVE). SSVE, in which we are asked to find a set of k nodes minimizing their vertex expansion, has not been as well studied as its edge-based counterpart Small Set Expansion (SSE), but has recently received significant attention, e.g. [Louis-Makarychev APPROX '15]. However, due to the connection to Unique Games and hardness of approximation, the focus has mostly been on sets of size Ω(n), while we focus on the case of general k, for which no polylogarithmic approximation is known.
We improve the upper bound for this problem by giving an O(n^{1/4+ε})-approximation for SSBVE for any constant ε > 0. Our algorithm follows in the footsteps of Densest k-Subgraph (DkS) and related problems, by designing a tight algorithm for random models, and then extending it to give the same guarantee for arbitrary instances. Moreover, we show that this is tight under plausible complexity conjectures: SSBVE cannot be approximated better than O(n^{1/4}) assuming an extension of the so-called "Dense versus Random" conjecture for DkS to hypergraphs.
In addition to the conjectured hardness via our reduction, we show that the same lower bound is also matched by an integrality gap for a super-constant number of rounds of the Sherali-Adams LP hierarchy, and by an even worse integrality gap for the natural SDP relaxation. Finally, we note that there exists a simple bicriteria Õ(√n)-approximation for the more general SSVE problem (where no non-trivial approximations were known for general k).
Suppose we are given a ground set 𝒰, a set system 𝒮 of subsets of 𝒰, and an integer k. One very natural problem is to choose k sets from 𝒮 in order to maximize the number of elements of 𝒰 that are covered. This is precisely the classical Maximum Coverage problem, which has been well studied and is known to admit a (1 − 1/e)-approximation (which is also known to be tight). But just as natural a problem is to instead choose k sets in order to minimize the number of elements of 𝒰 that are covered. This is known as the Minimum k-Union problem (MU), and unlike Maximum Coverage, it has not been studied until recently (to the best of our knowledge), when an O(√m)-approximation algorithm was given by Chlamtáč et al. (where m is the number of sets in the system). This may in part be because MU seems to be significantly harder than Maximum Coverage: when all sets have size exactly 2, MU is precisely the Smallest k-Edge Subgraph problem (SkES), which is the minimization version of the well-known Densest k-Subgraph problem (DkS) and is thought to be hard to approximate better than a polynomial factor. MU is the natural hypergraph extension of SkES.
MU is also related to another set of problems which are thought to be hard to approximate: problems similar to Small Set Expansion. Given an instance of MU, we can construct the obvious bipartite graph in which the left side represents sets, the right side represents elements, and there is an edge between a set node and an element node if the set contains the element. Then MU is clearly equivalent to the problem of choosing k left nodes in order to minimize the size of their neighborhood. We call this the Small Set Bipartite Vertex Expansion (SSBVE) problem, and this is the way we will generally think of MU throughout this paper. It is the bipartite version of the Small Set Vertex Expansion (SSVE) problem (in which we are given an arbitrary graph and are asked to choose k nodes to minimize the size of their neighborhood), which is in turn the vertex version of the Small Set Expansion (SSE) problem (in which we are asked to choose a set of k nodes to minimize the number of edges with exactly one endpoint in the set). SSE, and SSVE to a lesser extent, have been extensively studied due to their connections to other hard problems (including the Unique Games Conjecture), but based on these connections they have generally been considered only when k is relatively large (in particular, when k = Ω(n)).
1.1 Random models and worst case approximations
Given the immediate connection to DkS and SkES, it is natural not only to examine the techniques used for those problems, but also the general algorithmic framework – the so-called "log-density" framework – developed for them. This approach, first introduced by Bhaskara et al. [4], can be summarized as follows. Begin by considering the problem of distinguishing between a random structure (for DkS/SkES, a random graph) and a random structure in which a small, statistically similar solution has been planted. Several algorithmic techniques (both combinatorial and LP/SDP based) fail to solve such distinguishing problems [4, 13, 5], and thus the gap (in the optimum) between the random and planted cases can be seen as a natural lower bound for approximations. To match these lower bounds algorithmically, one develops robust algorithms for this distinguishing problem for the case when the planted solution does have statistically significant properties that would not appear in a purely random instance, and then, with additional work, adapts these algorithms to work for worst-case instances while guaranteeing the same approximation.
While this framework was new when introduced in [4], the actual technical tools used to adapt algorithms for random planted instances of DkS and SkES to the adversarial setting were not particularly complex. (In the work on SkES, most of the technical effort focused on certain strong LP rounding properties; however, the basic combinatorial algorithm was similar to the algorithm for DkS in [4].) While a great many problems have strong hardness based on reductions from DkS/SkES (most importantly Label Cover, and the great many problems with hardness reductions from Label Cover), tight approximations in this framework have not been achieved for any problem since these initial results. This is despite the fact that a number of algorithms for such problems can be seen as partial applications of the same techniques (e.g. [7, 16, 15]). The reason is that the approach only provides a general framework. As with any tool (e.g. SDPs), to apply it one must deal with the unique technical challenges posed by the specific problem one is attacking.
As we shall see, we are able to successfully apply this approach to MU/SSBVE and achieve tight approximations, making this only the third complete application of the framework (it has also been applied to Label Cover in a recent submission which includes the first author, though only in the semi-random setting), and the first one to overcome technical obstacles which deviate significantly from those encountered for DkS.
1.2 Problem Definitions and Equivalence
We study MU/SSBVE, giving an improved upper bound which is tight in the log-density framework. We also strengthen the lower bounds from conjectures for random models in this framework by showing that they are matched by integrality gaps for the Sherali-Adams LP hierarchy, and that the natural SDP relaxation has an even worse integrality gap.
Slightly more formally, we will mostly study the following two problems. Given a graph G = (V, E) and a subset S ⊆ V, define the neighborhood N(S) = {v ∈ V : {u, v} ∈ E for some u ∈ S}. In a bipartite graph G = (U, V, E), the expansion of a set S ⊆ U (or S ⊆ V) is |N(S)|/|S|.
In the Minimum k-Union problem (MU), we are given a universe 𝒰 of n elements and a collection of m sets 𝒮 ⊆ 2^𝒰, as well as an integer k. The goal is to return a collection 𝒯 ⊆ 𝒮 with |𝒯| = k minimizing |∪_{S ∈ 𝒯} S|.
In the Small Set Bipartite Vertex Expansion problem (SSBVE) we are given a bipartite graph G = (U, V, E) with |U| = n, and an integer k. The goal is to return a subset S ⊆ U with |S| = k minimizing the expansion |N(S)|/|S| (equivalently, minimizing |N(S)|).
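To make the two definitions above concrete, here is a tiny illustrative sketch: a brute-force evaluation of the SSBVE objective on a dictionary-encoded bipartite graph. This is purely for fixing the definitions (it is exponential in k), not the algorithm of this paper, and the instance encoding is our own.

```python
from itertools import combinations

def neighborhood(adj, S):
    """N(S): the union of the neighbor sets of the left vertices in S."""
    out = set()
    for u in S:
        out |= adj[u]
    return out

def ssbve_brute_force(adj, k):
    """Exhaustive search over all k-subsets of U; only for tiny instances."""
    best_set, best_size = None, None
    for S in combinations(adj, k):
        size = len(neighborhood(adj, S))
        if best_size is None or size < best_size:
            best_set, best_size = set(S), size
    return best_set, best_size

# Three "sets" over six "elements": the best 2-union is {a, b}, covering {1, 2, 3}.
adj = {'a': {1, 2}, 'b': {2, 3}, 'c': {4, 5, 6}}
```

Viewed as an MU instance, the keys of `adj` are the sets and the values are their elements, matching the bipartite-graph translation described above.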
The next lemma is obvious, and will allow us to use the two problems interchangeably.
MU and SSBVE are equivalent: there is an f(m)-approximation for MU if and only if there is an f(n)-approximation for SSBVE. (Footnote: note that it is not immediately clear why n is the natural input parameter for an approximation guarantee for SSBVE. This is discussed in Section 6.)
While the two problems are equivalent, in different contexts one may be more natural than the other. As we have noted, and will also discuss later, all previous work on these problems has been through the lens of MU, which is especially natural in the graph case when studying DkS and SkES. Moreover, the random distinguishing models used in our conjectured lower bounds are based on random graphs and their extension to r-uniform hypergraphs, and thus are best understood as applied to MU. However, our approximation algorithm is much more easily explained as an algorithm for SSBVE. Thus, we will use the MU notation when discussing random models and conjectured hardness, and SSBVE when describing our algorithm.
1.3 Our Results and Techniques
As mentioned, we look at random distinguishing problems of the form studied by Bhaskara et al. [4] and Chlamtáč et al. [8]. Define the log-density of a graph on n nodes to be log_n(d̄), where d̄ is the average degree. One of the problems considered in [4, 8] is the Dense vs Random problem, which is parameterized by k and constants 0 < α, β < 1: Given a graph G, distinguish between the following two cases: 1) G = G(n, p) where p = n^{α−1} (and thus the graph has log-density concentrated around α), and 2) G is adversarially chosen so that the densest k-subgraph has log-density β, where 0 < β < α (and thus the average degree inside this subgraph is approximately k^β). The following conjecture was explicitly given in [8], and implies that the known algorithms for DkS and SkES are tight:
For all 0 < α < 1, for all sufficiently small ε > 0, and for all k ≤ √n, we cannot solve Dense vs Random with log-density α and planted log-density β in polynomial time (w.h.p.) when β ≤ α − ε.
This conjecture can quite naturally be extended to hypergraphs. Let G^{(r)}(n, p) denote the distribution over r-uniform hypergraphs obtained by choosing every subset of cardinality r to be a hyperedge independently with probability p. Define the Hypergraph Dense vs Random problem as follows, again parameterized by k and constants α, β. Given an r-uniform hypergraph G on n nodes, distinguish between the following two cases: 1) G = G^{(r)}(n, p) where p = n^{α−(r−1)} (and thus the log-density is concentrated around α), and 2) G is adversarially chosen so that the densest subhypergraph on k vertices has log-density β (and thus the average degree in the subhypergraph is approximately k^β).
For all constant r ≥ 2 and 0 < α < r − 1, for all sufficiently small ε > 0, and for all k such that k ≤ √n, we cannot solve Hypergraph Dense vs Random with log-density α and planted log-density β in polynomial time (w.h.p.) when β ≤ α − ε.
An easy corollary of this conjecture (proved in the appendices by setting parameters appropriately) is that for any constant ε > 0, there is no polynomial-time algorithm which can distinguish between the two cases of Hypergraph Dense vs Random when the gap between the MU objective function in the two instances is Ω(m^{1/4−ε}). By transforming to SSBVE, we get a similar gap of Ω(n^{1/4−ε}). This also clearly implies the same gap for the worst-case setting.
Complementing this lower bound, we show that in the random planted setting we can indeed appropriately modify the basic structure of the algorithm of [4] and achieve an O(n^{1/4+ε})-approximation for any constant ε > 0, matching the above lower bound for this model. However, our main technical contribution is an algorithm which matches this guarantee in the worst-case setting (thus improving over [8], who gave an O(√m)-approximation for MU, i.e. an O(√n)-approximation for SSBVE).
For any constant ε > 0, there is a polynomial-time O(n^{1/4+ε})-approximation algorithm for SSBVE.
We prove Theorem 1.6 in Section 2. This implies an O(m^{1/4+ε})-approximation for MU (recall that m is the number of sets in an MU instance). It is natural to wonder whether we can instead get an approximation depending on n (the number of elements). Unfortunately, we show in Appendix A that this is not possible assuming Conjecture 1.5.
While our aim is to apply the framework of [4] to SSBVE, we note that SSBVE and DkS (or its minimization version SkES) differ from each other in important ways, making it impossible to straightforwardly apply the ideas from [4] or [8]. The asymmetry between U and V in SSBVE requires us to fundamentally change our approach; loosely speaking, in SSBVE we are looking not for an arbitrary dense subgraph of a bipartite graph, but rather for a dense subgraph of the form (S, N(S)) (where S is a subset of U of size k), since once we choose the set S we must take into account all neighbors of S in V.
For example, suppose that there are k nodes in U of degree d whose neighborhoods overlap on some common set of nodes of V, but each of the k nodes also has one neighbor that is not shared by any of the others. Then a DkS algorithm (or any straightforward modification) might return the bipartite subgraph induced by those k nodes and their common neighbors. But even though the intersection of the neighborhoods is large, making the returned subgraph very dense, their union is much larger (since k could be significantly larger than d). So taking those k left nodes as our SSBVE solution would be terrible, as would any straightforward pruning of this set.
This example shows that we cannot simply use a DkS algorithm, and there is also no reduction which lets us transform an arbitrary SSBVE instance into a DkS instance on which we could use such an algorithm. Instead, we must fundamentally change the approach of [4] to take into account the asymmetry of SSBVE. One novel aspect of our approach is a new asymmetric pruning idea which allows us to isolate a relatively small set of nodes in V that will be responsible for collecting all of the "bad" neighbors of small sets which would otherwise have small expansion. Even with this tool in place, we still need to trade off a number of procedures in each step to ensure that if the algorithm halts, it returns a set that is both small and has small expansion (ignoring the pruned set on the right).
In addition to the conditional lower bound guaranteed by the log-density framework (matching our upper bound), we can show unconditional lower bounds against certain types of algorithms: those that depend on Sherali–Adams (SA) lifts of the basic LP, or those that depend on the basic SDP relaxation. We do this by showing integrality gaps which match or exceed our upper bound.
The basic SDP relaxation of SSBVE has an integrality gap of Ω̃(√n).
When k = √n, the integrality gap of the Ω(log n / log log n)-round Sherali–Adams relaxation of SSBVE is Ω̃(n^{1/4}).
We show these integrality gaps in Section 4. In our SA gap construction, we use the general framework of [5], where an SA integrality gap for Densest k-Subgraph is presented. However, we have to make significant changes to their gap construction, since there is an important difference between DkS and SSBVE: the former problem does not have hard constraints, while the latter does. Specifically, in SSBVE, if a vertex u belongs to the solution S, then every neighbor of u is in the neighborhood of S. This means, in particular, that in the SA solution, the variable x_{{u}} (the indicator variable for the event that u ∈ S) must be exactly equal to x_{{u} ∪ T} (the indicator variable for the event that u ∈ S and each vertex in T is in the neighborhood of S) for every subset T of neighbors of u. More generally, if W ⊆ U and T ⊆ N(W), then x_W must be equal to x_{W ∪ T}. However, in the SA integrality gap construction for DkS, x_{W ∪ T} is exponentially smaller than x_W; this inequality is crucial because it guarantees that the SA solution for DkS is feasible. In our construction, we have to very carefully define the variables in order to ensure that, on one hand, x_{W ∪ T} = x_W and, on the other hand, the solution is feasible.
While it is not the main focus of this paper, we also give an improved approximation for the Small Set Vertex Expansion problem (SSVE) and explore its relationship to SSBVE. In SSVE we are given a graph G = (V, E) and an integer k, and the goal is to find a set S ⊆ V with |S| = k such that |N(S) \ S| is minimized. Louis and Makarychev gave a polylogarithmic bicriteria approximation for this problem for k = Ω(n), but to the best of our knowledge there are no existing bounds for general k. We give the first nontrivial upper bound, which is detailed in Section 5:
There is a bicriteria Õ(√n)-approximation for SSVE (the algorithm chooses a set of size at most O(k) but is compared to the optimum over sets of size at most k).
Finally, we note that as written, SSVE is an "at most" problem, in which we are allowed any set of size at most k, and the set size appears in the denominator. On the other hand, our definition of SSBVE requires picking exactly k nodes. We could define an equivalent exact problem for SSVE, where we require |S| = k, and an equivalent at-most problem for SSBVE, where we allow sets of size at most k but, instead of minimizing |N(S)|, minimize |N(S)|/|S|. It is straightforward to show that, up to a logarithmic factor, the at-most and the exact versions of the problems are the same; the following lemma appears in [8] for SSBVE (the equivalent statement for SSVE is just as easy).
An f(n)-approximation algorithm for the at-most version of SSBVE (resp. SSVE) implies an O(f(n) log n)-approximation for the exact version of SSBVE (resp. SSVE), and vice versa.
Hence we will generally feel free to consider one version or the other depending on which is easier to analyze.
1.4 Related Work
As mentioned, SSBVE is a generalization of the Smallest k-Edge Subgraph problem (SkES), which is the minimization version of Densest k-Subgraph [12, 4]. The best-known approximation for SkES is O(n^{3−2√2+ε}), while the best-known approximation for DkS is O(n^{1/4+ε}).
The immediate predecessor to this work is [8], which provided an O(√n)-approximation for SSBVE (an O(√m)-approximation for MU) using relatively straightforward techniques. SSVE was also studied by Louis and Makarychev, who provided a polylogarithmic bicriteria approximation when k is very close to n (namely, k = Ω(n)). To the best of our knowledge, no approximation was known for SSVE for general k.
While defined slightly differently, the maximization version of MU, the Densest k-Subhypergraph problem (DkSH), was defined earlier by Applebaum in the context of cryptography: he showed that if certain one-way functions exist (or certain pseudorandom generators exist), then DkSH is hard to approximate within n^δ for some constant δ > 0. Based on this result, DkSH and MU were used to prove hardness for other problems, such as the k-route cut problem. He also explicitly considered something similar to Hypergraph Dense vs Random, but instead of distinguishing between a random instance and an adversarial instance with essentially the same log-density, he considered the problem of distinguishing a random instance from a random instance with a planted dense solution, in which every hyperedge not in the planted solution is removed with some extra probability. He showed that even this problem is hard if certain pseudorandom generators exist.
2 Approximation algorithm for random planted instances
Before describing our algorithm for SSBVE (in either the random or the worst-case setting), let us mention a key tool, which was also used in [8], and which can be seen as a straightforward generalization of an LP relaxation (and rounding) used for SkES:
There is a polynomial-time algorithm which exactly solves the Least Expanding Set problem, in which we are given a bipartite graph G = (U, V, E) and wish to find a nonempty set S ⊆ U with minimum expansion |N(S)|/|S|.
Note that in the Least Expanding Set problem there is no constraint on the cardinality of the set S, and so the above lemma has no immediate implication for SSBVE.
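The lemma above is stated without proof here. One standard way such a ratio objective can be minimized in polynomial time is via a search over the O(|U| · |V|) possible ratio values, testing each candidate with a minimum s-t cut: put capacity λ on source-to-left edges, capacity 1 on right-to-sink edges, and infinite capacity across, forcing one vertex into S to rule out the empty set. The sketch below is our own flow-based construction (the paper's proof may instead be LP-based), using exact rational arithmetic.

```python
from fractions import Fraction
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on an adjacency-dict capacity graph; the returned max-flow
    value equals the minimum s-t cut capacity."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    flow = Fraction(0)
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            x = queue.popleft()
            for y, c in residual.get(x, {}).items():
                if c > 0 and y not in parent:
                    parent[y] = x
                    queue.append(y)
        if t not in parent:
            return flow
        path, y = [], t
        while parent[y] is not None:
            path.append((parent[y], y))
            y = parent[y]
        aug = min(residual[x][y] for x, y in path)
        for x, y in path:
            residual[x][y] -= aug
            residual.setdefault(y, {}).setdefault(x, Fraction(0))
            residual[y][x] += aug
        flow += aug

def least_expanding_ratio(adj, V):
    """min over nonempty S of |N(S)|/|S|.  For a guess lam, a nonempty S with
    |N(S)| <= lam*|S| exists iff, for some vertex forced onto the source side,
    the min cut is at most lam*|U|."""
    U = list(adj)
    INF = Fraction(10 ** 9)

    def feasible(lam):
        for forced in U:
            cap = {'s': {u: (INF if u == forced else lam) for u in U}}
            for u in U:
                cap[u] = {v: INF for v in adj[u]}
            for v in V:
                cap.setdefault(v, {})['t'] = Fraction(1)
            if max_flow(cap, 's', 't') <= lam * len(U):
                return True
        return False

    candidates = sorted({Fraction(a, b) for b in range(1, len(U) + 1)
                         for a in range(len(V) + 1)})
    return next(lam for lam in candidates if feasible(lam))
```

For example, three left vertices all adjacent to the same two right vertices have least expansion 2/3, attained by taking all of them.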
Recall that Lemma 1.10 shows the equivalence of the "at most" and "exact" versions of SSBVE. Thus, throughout this section and the next, we will use these two versions interchangeably.
To understand our approximation algorithm, let us first consider the following random model. For constants α and β with β < α, let G = (U, V, E) be a random bipartite graph where |U| = n and |V| = m, and let d denote the left degree. The edges are defined by first choosing d neighbors in V independently at random for every u ∈ U, and then choosing subsets S ⊆ U and T ⊆ V of size k and k′ respectively, independently at random; for every node u ∈ S, we remove its neighbors and resample them uniformly at random from T.
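A generator for this planted model can be sketched as follows. The parameter names (n = |U|, m = |V|, d = left degree, k = |S|, k_planted = |T|) are our own labels for the model's parameters, and the uniform resampling inside T is the planting step described above.

```python
import random

def planted_ssbve_instance(n, m, d, k, k_planted, seed=0):
    """Random bipartite SSBVE instance with a planted low-expansion set:
    every left vertex gets d random neighbors, then the k planted vertices
    have their neighbors resampled inside a random right set T of size
    k_planted."""
    rng = random.Random(seed)
    adj = {u: set(rng.sample(range(m), d)) for u in range(n)}
    S = set(rng.sample(range(n), k))         # planted small set on the left
    T = rng.sample(range(m), k_planted)      # planted small neighborhood on the right
    for u in S:                              # resample planted vertices inside T
        adj[u] = set(rng.sample(T, d))
    return adj, S, set(T)
```

By construction the planted set S expands only into T, so its expansion is at most k_planted / k, while a typical k-subset expands to roughly k·d vertices.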
Suppose first that the log-density gap between the random graph and the planted subgraph is small. Note that an arbitrary k-subset of U will expand to at most kd vertices in V, compared to the optimum (planted) solution, which expands to at most k′ vertices. In this case, up to a logarithmic factor, choosing any such set gives us an approximation ratio of at most kd/k′.
It is easily seen that this expression is maximized for k = √n, giving an approximation ratio which is clearly at most O(n^{1/4+ε}). Since this is the approximation ratio we are aiming for, we may focus on the case when the log-density gap is large. For simplicity, let us look at the tight case, when the planted subgraph has essentially the same log-density as the graph itself. The expansion of the hidden subset S is k′/k.
Consider as a simple example the case of log-density 1/2. For simplicity, let us think of the left degree d as some large constant. In this case every vertex in T has roughly kd/k′ neighbors in S. Choosing a vertex v ∈ T (say, by guessing over all possible vertices in V) gives us the following: the neighborhood N(v) may be large, but because of the planted solution, roughly kd/k′ of the vertices in N(v) also belong to S. We know that N(v) ∩ S expands to a subset of T, which has size k′; that is, it has expansion at most k′ · k′/(kd). Thus, by Lemma 2.1 applied to the subgraph induced on N(v) ∪ N(N(v)), we can find a set with at most this expansion, which gives the desired approximation ratio; moreover, the set found has size at most k, so we are done.
Now let us consider the general case. Suppose the log-density is p/q for some relatively prime integers p and q. Note that the degree of every vertex in U is tightly concentrated around d, and the S-degree of every vertex in T (the cardinality of its neighborhood intersected with S) is concentrated around kd/k′.
Following the approach of [4] for DkS, we can think of an algorithm which inductively constructs all possible copies of a caterpillar with fixed leaves which are chosen (guessed) along the way. In our case, the caterpillar is similar, but not identical, to the corresponding caterpillar used in [4]. Each step in the construction of the caterpillar corresponds to an interval of the form ((i − 1)p/q, ip/q]. The overall construction is as follows:
First step (step 1): This step corresponds to the interval (0, p/q]. In this step we guess a vertex, and add an edge, which forms the first edge of the "backbone" (the main path in the caterpillar).
Final step (step q): This step corresponds to the interval ((q − 1)p/q, p]. In this step we add a final edge to the backbone.
Intermediate step: For every 1 < i < q, step i corresponds to the interval ((i − 1)p/q, ip/q]. If this interval contains an integer, choose a new vertex and attach an edge (a "hair") from it to the (currently) last vertex in the backbone. Otherwise, extend the backbone by two edges. (This is the only difference from the caterpillar of [4], where one edge is added to the backbone in this case.)
Note that if we start the caterpillar with a vertex in V, then the next backbone vertex will be in U, and since we then add two backbone edges at a time, the current backbone vertex will always be in U (until the last step), and all leaves guessed along the way will be in V.
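The schedule of steps determined by a log-density p/q can be sketched as below. The half-open interval convention ((i−1)p/q, ip/q] and the labels for the first and final steps are our reading of the construction above, not a quotation of the paper's exact formulation.

```python
def caterpillar_schedule(p, q):
    """Step types for log-density p/q (p, q relatively prime, p < q):
    intermediate step i is a 'hair' step iff the interval ((i-1)p/q, ip/q]
    contains an integer, and otherwise a 'backbone' step (two backbone
    edges)."""
    steps = ["first-backbone-edge"]                # step 1
    for i in range(2, q):                          # intermediate steps 2..q-1
        if (i * p) // q > ((i - 1) * p) // q:      # integer inside ((i-1)p/q, ip/q]
            steps.append("hair")
        else:
            steps.append("backbone")
    steps.append("final-backbone-edge")            # step q
    return steps
```

With this convention a caterpillar for log-density p/q has q steps, of which p − 1 are hair steps, e.g. two hairs and one double-edge backbone step for 3/5.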
How do we turn this into an algorithm for random planted instances as above? Start by guessing a vertex v ∈ T (by exhaustive enumeration over all vertices in V), and start with the set A = N(v). Whenever the caterpillar construction adds an edge to the backbone, update the set A to N(A). Whenever the caterpillar construction adds a hair, guess a new vertex w ∈ T (again we can assume we guess a vertex in T, by exhaustive enumeration) and update the set A to A ∩ N(w). Do this for all edges except the last edge in the caterpillar. Note that whenever we have a "backbone step", the caterpillar construction adds two edges to the backbone, so if we had a set A ⊆ U (as we do at the beginning of every intermediate step), we end up with the set N(N(A)) ⊆ U.
An easy inductive argument using concentration bounds in random graphs shows that after every step i, w.h.p. A ∩ S is a subset of S of the size predicted by the planted parameters, and, as long as the vertices we guessed were indeed in T, for sufficiently small ε the set A itself has the size predicted by the random graph parameters.
In particular, right after step q − 1, we have A ⊆ U, and A contains a large subset of S. At this point the set A ∩ S expands to a subset of T, and so has expansion at most k′/|A ∩ S|. Thus, by Lemma 2.1 applied to the subgraph induced on A ∪ N(A), we can find a set (of size at most k) with at most this expansion, giving the desired approximation ratio. While we cannot guarantee that the log-density will be exactly p/q for some reasonably small constants p and q, we can guarantee that we will not lose more than an n^{O(ε)} factor by running this algorithm on nearby values of p/q, and so, at least in this random model, we can always achieve an approximation guarantee of O(n^{1/4+ε}). Our main technical contribution is translating this overall framework into an algorithm that achieves the same approximation guarantee for worst-case instances, which we do in the next section.
3 Approximation algorithm for worst case instances
In this section, we show the desired approximation for worst-case instances. As planned, we follow the above caterpillar structure for random planted instances, so that (after some preprocessing) at every step either the set sizes behave as they do in the random planted setting, or we can find a set with small expansion. We start with some preprocessing which will be useful in the analysis.
Using standard bucketing and subsampling techniques, together with the trivial algorithm (taking an arbitrary k-subset when there is no log-density gap, or only a small gap), we can restrict our attention to a fairly uniform setting:
Suppose that for every sufficiently small ε > 0 and for all relatively prime integers p ≤ q there is some constant c such that we can obtain an O(c · n^{1/4+ε})-approximation for SSBVE on instances of the following form:
Every vertex in U has the same degree d.
The size of the neighbor set of the least-expanding k-subset of U is known, and thus so is the average degree from this set back to the least-expanding k-set, which we denote by Δ.
We have d = n^{p/q},
The optimum average back-degree satisfies Δ ≥ k^{p/q}.
Then there is an O(n^{1/4+ε})-approximation for SSBVE for every sufficiently small ε > 0.
We defer the proof of this lemma to Appendix B. From now on, we will assume the above setting, and denote the uniform left degree by d as before. We will also denote by S some (unknown) least-expanding k-subset, and let T = N(S) be its neighbor set (the set that S expands to). Note that the optimum expansion in such an instance is |T|/k, and so to obtain a ρ-approximation we need to find a set with expansion at most ρ · |T|/k.
As we noted, the optimum neighbor set T has average back-degree Δ into S. However, this might not be the case for all vertices in T. A common approach for avoiding non-uniformity in DkS and other problems is to prune small-degree vertices. For example, a common pruning argument shows that a large fraction of edges is retained even after deleting all vertices with at most half the average degree on each side (solely as a thought experiment, since we do not know S and T). However, the fundamental challenge in SSBVE is that we cannot delete small-degree vertices in V, since this can severely skew the expansion of any set. A different and somewhat more subtle pruning argument, which explicitly bounds the "penalty" we pay for small-degree vertices in V, gives the following.
For every γ ∈ (0, 1), there exist a set S′ ⊆ S of size at least (1 − γ)k and a set B ⊆ V with the following properties:
Every vertex in T \ B has at least γΔ/4 neighbors in S′.
Every vertex in S′ has at most γd/4 neighbors in B.
Consider the following procedure:
Start with S′ = S and B = ∅.
As long as |S′| ≥ (1 − γ)k and the conditions are not met:
Remove every vertex with at most γΔ/4 neighbors in S′ from T \ B (adding it to B).
Remove every vertex with at least γd/4 neighbors in B from S′.
Call the vertices removed at iteration t of the loop time-t vertices. Note that time-t vertices in B have at most γΔ/4 neighbors in S′ among the vertices removed at later times, while time-t vertices removed from S′ have at least γd/4 neighbors in B which were removed at time at most t. Thus, if we look at the set of edges between the removed vertices on the two sides, counting it in both ways shows that the process cannot remove too many vertices from either side. Thus |S′| ≥ (1 − γ)k, and both properties hold when the loop terminates. ∎
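The alternating pruning loop above can be sketched as follows. The thresholds `low` (on a right-vertex's remaining back-degree into S) and `high` (on a left-vertex's number of neighbors already pruned) are left as explicit inputs, since the exact thresholds in the claim depend on parameters not fixed in this sketch.

```python
def alternating_prune(adj, S, T, low, high):
    """Alternately move right-vertices with back-degree <= low into the 'bad'
    set B, and drop left-vertices with >= high neighbors already in B, until
    both conditions stabilize."""
    S, T = set(S), set(T)
    bad = set()                                   # the pruned right-vertices B
    changed = True
    while changed:
        changed = False
        for v in sorted(T, key=str):              # low back-degree into S
            if sum(1 for u in S if v in adj[u]) <= low:
                T.remove(v); bad.add(v); changed = True
        for u in sorted(S, key=str):              # too many pruned neighbors
            if len(adj[u] & bad) >= high:
                S.remove(u); changed = True
    return S, T, bad
```

On termination, every surviving right-vertex has back-degree above `low` and every surviving left-vertex has fewer than `high` neighbors in the bad set, mirroring the two properties of the claim.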
For some constant c to be determined later, let S′ and B be the sets derived from the above claim for γ = 1/c. From now on, call a vertex v ∈ V "good" if v ∉ B, and otherwise call it "bad". Thus, we can restrict our attention to S′, allowing us to make the following simplifying assumption while losing at most an additional constant factor in the approximation:
All vertices in U have at most d/(4c) bad neighbors.
Note that all the good vertices of T have at least Δ/(4c) neighbors in S′. In addition, our assumption gives the following useful corollary:
Every set A ⊆ U with expansion at least d/(2c) into some set W ⊆ V has good expansion at least half as large into W.
Every vertex in A has at most d/(4c) bad neighbors overall (and in particular in W), and so A has at most |A| · d/(4c) bad neighbors in W, which is at most half of |N(A) ∩ W|. The rest must be good. ∎
Finally, define V_good = V \ B (the set of good vertices), and let T_good = T ∩ V_good. Note that T_good ⊆ T, and so every k-subset of S has expansion at most |T|/k into the good vertices. Thus, while by Lemma 1.10 it suffices to find any set of size at most k with expansion O(n^{1/4+ε}) · |T|/k, it turns out that a weaker goal suffices:
To find a k-subset of U with expansion O(n^{1/4+ε}) · |T|/k (into V), it suffices to have an algorithm which returns a set of size at most k with expansion O(n^{1/4+ε}) · |T|/k into the good vertices.
Let us examine the reduction which allows us to find a k-subset of U with small expansion by repeatedly choosing small sets, rather than choosing k vertices in one shot (proving Lemma 1.10): we repeatedly find such a set, and remove its vertices from U until we have removed k vertices. Note that the definition of B may change as vertices are removed: at each such iteration the degrees may decrease, the value of Δ decreases, and therefore the pruning threshold increases. However, these changes can only cause vertices to be removed from B, not added. Thus, every vertex which is in B at some iteration was also in B at the start of the algorithm.
If at each iteration we find a small set with small expansion into the good vertices, then, by the argument proving Lemma 1.10, this is sufficient to bound the total number of good neighbors (of our k-subset of U) at the end of the algorithm, while losing only an additional logarithmic factor. Now, while the expansion into B may have been very bad at any given iteration, the total number of neighbors accrued throughout all iterations in B (as defined at the start) is still at most kd/(4c) = Δ|T|/(4c). Since Δ ≤ O(n^{1/4+ε}) in our setting, this gives us an O(n^{1/4+ε})-approximation, as we wanted. ∎
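The iterative reduction just described can be sketched as follows. The subroutine interface `find_small_set` is hypothetical: in the paper it is supplied by the least-expanding-set algorithm of Lemma 2.1, while here any callable returning a small subset of the remaining left vertices will do.

```python
def accumulate_solution(adj, k, find_small_set):
    """Repeatedly extract a small low-expansion subset of the remaining left
    vertices, remove it, and stop once k vertices have been collected
    (trimming any overshoot in the final batch)."""
    remaining = dict(adj)
    solution = []
    while len(solution) < k and remaining:
        for u in find_small_set(remaining):
            remaining.pop(u, None)
            if len(solution) < k:
                solution.append(u)
    covered = set().union(*(adj[u] for u in solution)) if solution else set()
    return solution, covered
```

For instance, plugging in a toy subroutine that always returns the minimum-degree remaining vertex greedily accumulates a k-set together with its final neighborhood.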
Thus, we may shift our focus to finding small subsets with small expansion into the good vertices, even if their expansion into B is huge.
3.2 The algorithm
Before we describe the algorithm, let us once again state that, thanks to Claim 3.5, our goal is simply to find a set of size at most k with expansion at most O(n^{1/4+ε}) · |T|/k into the good vertices.
Our algorithm will proceed according to the same caterpillar construction described for the random planted setting, though unlike in the random setting, each step will require a technically involved algorithm to ensure that either we maintain the same set-size bounds as one would expect in a random instance, or we can abort the process and find a small set with small expansion into the good vertices.
3.2.1 First step
Consider the following algorithm:
Let V₁ = {v ∈ V : |N(v)| ≥ Δ/(4c)}. If |V₁| is at most the required neighborhood bound, return an arbitrary subset of U of size k.
Otherwise, guess a vertex v ∈ V₁, and proceed to the next step with the set A = N(v).
The above algorithm either returns a set with the required expansion, or, for at least one guess, returns a set A such that |A| satisfies the size bound from the random planted setting and |A ∩ S′| ≥ Δ/(4c).
If all good vertices (in V) belong to V₁, then all vertices in V \ V₁ are bad. Since by Assumption 3.3 every vertex in U has at most d/(4c) bad neighbors, for every u ∈ U we have |N(u) \ V₁| ≤ d/(4c). That is, an arbitrary k-subset of U has at most |V₁| + kd/(4c) neighbors that can count toward its good expansion. Thus, the first step returns a set of size k with at most the required number of good neighbors, as required.
Otherwise, there exists a good vertex of T in V₁ (with at least Δ/(4c) neighbors in S′). Thus, guessing such a vertex ensures that A = N(v) has the desired properties by definition. ∎
3.2.2 Hair step
In the random planted setting, a hair step involves guessing a vertex w ∈ T and replacing our current set A with A ∩ N(w). In that setting, the change in the cardinality of A and of A ∩ S is concentrated around its expectation by Chernoff bounds. However, in a worst-case setting, we need to ensure that the updated sets have the same cardinality as in the random planted setting in order to proceed to the next step. We do this using degree classification, as in the first step, and the least-expanding-set algorithm. If either of these approaches gives a small set with small expansion, we are done. Otherwise, we show that we have the required cardinality bounds. Specifically, the algorithm for this step is as follows:
Given: a set A ⊆ U, where |A| and |A ∩ S′| satisfy the size bounds maintained inductively by the algorithm (matching the random planted setting).
Let W = N(A), let W₁ be the set of vertices of W obeying the degree bounds from the random planted setting, and let ρ = |E(A, W)|/|W| denote the average back-degree from W into A.
If |W| is at most the required neighborhood bound, return an arbitrary k-subset A′ ⊆ A.
Otherwise, run the least-expanding-set algorithm on the subgraph induced on A ∪ N(A). If the resulting set has sufficiently small expansion, return this set.
Otherwise, guess a vertex w ∈ W₁, and proceed to the next step with the set A′ = A ∩ N(w).
The outcome of this algorithm and its analysis are somewhat similar to those of the first step, and are captured by the following lemma.
If , then the above algorithm either returns a set with the required expansion, or for at least one guess, returns a set such that and .
First, note that , and so
Thus, if then every vertex in has at most neighbors in , and altogether we get neighbors in this set. On the other hand, we get at most neighbors in , so we get a total of neighbors for vertices, and we are done.
Suppose, on the other hand, that . Let and . Note that since the vertices in have edges going into at most vertices (in ), the vertices in have average back-degree at least into . In this context, call the vertices in with at least neighbors in “-good”, and the rest “-bad”.
Similarly to Claim 3.2, it is easy to check that at least vertices in must have at most -bad neighbors. Call this set .
If all -good vertices (in ) belong to , then all vertices in are -bad. Thus, by the above, for every vertex we have . That is, . Note that has at least vertices and at most neighbors, and so it has expansion at most (where the first inequality follows since ). Since , the least-expanding-set algorithm will return a set of size at most with at most the same expansion.
Otherwise, there exists an -good vertex in . Thus, guessing such a vertex ensures that has the desired properties by definition: is -good, so
and , so . ∎
3.2.3 Backbone step
In the random planted setting, a backbone step involves replacing our current set with . As in the hair step, the change in the cardinality of and is concentrated around the expectation in the random planted setting, while in a worst-case setting, we need to ensure that the updated sets have the same cardinality as in the random planted setting in order to proceed to the next step. This is also done with the least-expanding-set algorithm between and . If this procedure gives a small set with small expansion, we are done. Otherwise, by binning vertices in by degree, we produce sets, at least one of which we show will have the required cardinality bounds. The algorithm for this step is as follows:
Given: A set , where and .
Run the least-expanding-set algorithm on the subgraph induced on . If the resulting set has sufficiently small expansion, return this set.
Otherwise, guess some , let , and let . Subsample this set by retaining every vertex independently with probability , and let be the resulting set. Proceed to the next step with .
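The binning and subsampling above can be sketched as follows. This is only an illustration: the power-of-two bin boundaries are a natural reading of "binning vertices by degree," and the retention probability `p` stands in for the elided value that the paper fixes so the subsampled set matches the cardinality seen in the random planted setting.

```python
import random

# Rough sketch of the backbone step: first try the least-expanding-set
# subroutine; otherwise bin the neighborhood of S by back-degree into S
# and subsample each guessed bin independently with probability p.
def backbone_step(graph, S, least_expanding_set, expansion_goal, p, rng):
    expand = lambda A: len(set().union(*(graph[v] for v in A))) / len(A)
    T = least_expanding_set(graph, S)
    if T and expand(T) <= expansion_goal:
        return ('done', T)
    nbhd = set().union(*(graph[v] for v in S))
    back_deg = {u: sum(u in graph[v] for v in S) for u in nbhd}
    branches, i = [], 0
    while 2 ** i <= max(back_deg.values()):
        # Bin i holds the vertices with back-degree in [2^i, 2^(i+1)).
        bin_i = [u for u in sorted(nbhd)
                 if 2 ** i <= back_deg[u] < 2 ** (i + 1)]
        sub = {u for u in bin_i if rng.random() < p}
        if sub:
            branches.append(sub)
        i += 1
    return ('guesses', branches)
```

With `p = 1` the subsampling is a no-op, and the branches simply partition the neighborhood of `S` by back-degree.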
The guarantee of the backbone step algorithm is as follows.
If , then the above algorithm either returns a set with the required expansion, or for at least one guess, returns a set such that w.h.p. and .
Let and let .
First, suppose . In this case, has expansion at most into , and the least-expanding-set algorithm will return a set of at most vertices with at most the above expansion into , as required.
Otherwise, , and has expansion at least into , and so by Corollary 3.4, has good expansion at least into . That is, it has at least good neighbors in . Let us call this set of good neighbors .
Consider the sets . We know that at least one of them must cover at least a -fraction of the edges between and . Choose such that this is the case. Then since all the vertices in are good, we know that
On the other hand, we know that every vertex in contributes at most edges to this set, so we also have . Putting these together, we get the bound
Now consider itself. Since vertices in have degree , we know that . Furthermore, since , we know that every vertex in has degree at most , and so we have
On the other hand, every vertex in contributes at least edges to this set, and so . Putting these together, we get
The required bounds on and now follow from Chernoff bounds. ∎
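The concentration invoked at the end of the proof is easy to see empirically. In the sketch below (the set size `n`, probability `p`, trial count, and the 10% tolerance are ours, chosen only for illustration), subsampling a set of size n with probability p keeps a count tightly concentrated around its mean pn, as the Chernoff bound predicts.

```python
import random

# Empirical illustration of Chernoff concentration for Bernoulli
# subsampling: each of n elements survives independently with
# probability p, so the surviving count has mean p*n and standard
# deviation sqrt(n*p*(1-p)) ~ 46 here, far below the 10% band of 300.
rng = random.Random(0)
n, p, trials = 10_000, 0.3, 200
deviations = []
for _ in range(trials):
    kept = sum(rng.random() < p for _ in range(n))
    deviations.append(abs(kept - p * n) / (p * n))
# Every trial lands within 10% of the mean for these parameters.
assert max(deviations) < 0.10
```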
3.3 Putting everything together: the final step
Before examining the final step, let us consider the effect of the first steps, assuming none of them stopped and returned a small set with small expansion, and assuming all the guesses were good (giving the guarantees in the various lemmas). Let be the set passed on from step to step , and let and . Then to summarize the effect of the various steps, the first step gives
a hair step gives
and a backbone step gives
By induction, we can see that after step we have
In particular, choosing such that , we ensure the correctness of the assumptions for backbone steps and for hair steps. When , we get
Given with the above cardinality bounds, the final step is to simply run the least-expanding-set algorithm:
If has the above cardinality bounds, then running the least-expanding-set algorithm on and removing vertices arbitrarily to reduce the cardinality of the resulting set to gives us a subset of of size at most with expansion at most
Note that has expansion at most
Thus the least-expanding-set algorithm on will return a set with at most this expansion. However, this expansion will increase if we have more than vertices, and need to remove all but