Randomized contractions meet lean decompositions

This research is a part of projects that have received funding from the European Research Council (ERC) under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 306992 (S. Saurabh) and under the European Union's Horizon 2020 research and innovation programme, Grant Agreements no. 677651 (M. Cygan and Michał Pilipczuk) and 714704 (Marcin Pilipczuk). The research of P. Komosa is supported by the Polish National Science Centre grant UMO-2015/19/N/ST6/03015. The foundations of this paper were developed at the International Workshop on Graph Decompositions, held at CIRM Marseille in January 2015.

Marek Cygan Institute of Informatics, University of Warsaw, Poland, cygan@mimuw.edu.pl.    Paweł Komosa Institute of Informatics, University of Warsaw, Poland, p.komosa@mimuw.edu.pl.    Daniel Lokshtanov Department of Informatics, University of Bergen, Norway, daniello@ii.uib.no.    Marcin Pilipczuk Institute of Informatics, University of Warsaw, Poland, marcin.pilipczuk@mimuw.edu.pl.    Michał Pilipczuk Institute of Informatics, University of Warsaw, Poland, michal.pilipczuk@mimuw.edu.pl.    Saket Saurabh Department of Informatics, University of Bergen, Norway and Institute of Mathematical Sciences, India, saket@imsc.res.in.
Abstract

The randomized contractions technique, introduced by Chitnis et al. in 2012, is a robust framework for designing parameterized algorithms for graph separation problems. At a high level, an algorithm in this framework recurses on balanced separators while possible, and in the leaves of the recursion uses the high connectivity of the graph at hand to highlight a solution by color coding.

In 2014, a subset of the current authors showed that, given a graph $G$ and a budget $k$ for the cut size in the studied separation problem, one can compute a tree decomposition of $G$ with adhesions of size bounded in $k$ and with bags exhibiting the same high connectivity properties with respect to cuts of size at most $k$ as the leaves of the recursion in the randomized contractions framework. This led to an FPT algorithm for the Minimum Bisection problem.

In this paper, we provide a new construction algorithm for a tree decomposition with the aforementioned properties, by using the notion of lean decompositions of Thomas. Our algorithm is not only arguably simpler than the one from 2014, but also gives better parameter bounds; in particular, we provide best possible high connectivity properties with respect to edge cuts. This allows us to provide $2^{O(k \log k)} \cdot n^{O(1)}$-time parameterized algorithms for Minimum Bisection, Steiner Cut, and Steiner Multicut.

1 Introduction

Since the work of Marx [20] that introduced the notion of important separators, the study of graph separation problems has been a large and lively subarea of parameterized complexity. It led to the development of many interesting algorithmic techniques, including the aforementioned important separators and the related shadow removal [5, 8, 18, 21, 22], branching algorithms based on half-integral relaxations [10, 11, 12, 13, 14], matroid-based algorithms for preprocessing [16, 17], and, most relevantly for this work, the framework of randomized contractions [6, 15].

The early work of Marx [20] left a number of questions open, including the parameterized complexity of the $k$-Way Cut problem: given a graph $G$ and integers $k$ and $s$, can one delete at most $k$ edges from $G$ to obtain a graph with at least $s$ connected components? We remark that it is easy to reduce the problem to the case when $G$ is connected and $s \leq k+1$. Marx proved hardness of the vertex-deletion variant of the problem, but the complexity of the edge-deletion variant remained elusive until Kawarabayashi and Thorup settled it in the affirmative in 2011 [15].

In their algorithm, Kawarabayashi and Thorup introduced a useful recursive scheme. For a graph $G$, an edge cut is a pair $(A,B)$ such that $A \cup B = V(G)$ and $A \cap B = \emptyset$. The order of an edge cut $(A,B)$ is the number of edges of $G$ with one endpoint in $A$ and the other in $B$. Assume one discovers in the input graph $G$ an edge cut $(A,B)$ of order at most $k$ such that both $A$ and $B$ are large; say $|A|, |B| > q$ for some parameter $q$ to be fixed later. Then one can recurse on one of the sides, say $A$, in the following manner. For every behavior of the problem on the endpoints in $A$ of the cut edges — in the context of $k$-Way Cut, for every assignment of these vertices into target components — one recurses on an annotated version of the problem to find a minimum-size partial solution in $G[A]$. Since the number of these endpoints is bounded by the order of the edge cut $(A,B)$, there is only a number of behaviors bounded in $k$ to consider. Thus, if $q$ is larger than the number of behaviors times $k$ (which is still a function of $k$ only), there is an edge of $G[A]$ that is not used in any of the found minimum partial solutions. Such an edge can be safely contracted and the algorithm is restarted.

It remains to show how to find such an edge cut and how the algorithm should work in the absence of such a cut. Since the absence of such balanced cuts is a critical notion in this work, we make the following definitions, which also take into account vertex cuts.

Definition 1.1 (unbreakability).

A set $X \subseteq V(G)$ is $(q,k)$-edge-unbreakable if every edge cut $(A,B)$ of $G$ of order at most $k$ satisfies $|A \cap X| \leq q$ or $|B \cap X| \leq q$.

A set $X \subseteq V(G)$ is $(q,k)$-unbreakable if every separation $(A,B)$ of $G$ of order at most $k$ satisfies $|(A \setminus B) \cap X| \leq q$ or $|(B \setminus A) \cap X| \leq q$.

Here, a separation of $G$ is a pair $(A,B)$ such that $A \cup B = V(G)$ and there is no edge between $A \setminus B$ and $B \setminus A$. The order of a separation $(A,B)$ is $|A \cap B|$.
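To make Definition 1.1 concrete, the following brute-force Python sketch checks $(q,k)$-edge-unbreakability directly from the definition by enumerating all bipartitions of the vertex set; it is purely illustrative (exponential time), and the adjacency-dictionary representation is our own choice, not part of the algorithms developed in this paper.

```python
from itertools import combinations

def edge_cut_order(G, A):
    """Number of edges of G with one endpoint in A and the other outside A.
    G is a dict: vertex -> set of neighbours; A is a set of vertices."""
    return sum(1 for u in A for v in G[u] if v not in A)

def is_edge_unbreakable(G, X, q, k):
    """Brute-force check whether X is (q, k)-edge-unbreakable in G:
    no edge cut (A, B) of order at most k has more than q vertices of X
    on both sides.  Exponential in |V(G)|; for illustration only."""
    V = list(G)
    for r in range(len(V) + 1):
        for A in map(set, combinations(V, r)):
            B = set(V) - A
            if edge_cut_order(G, A) <= k and len(A & X) > q and len(B & X) > q:
                return False  # (A, B) witnesses breakability
    return True

# Example: a path on 6 vertices; its whole vertex set is not (2, 1)-edge-unbreakable,
# since cutting the middle edge leaves 3 vertices on each side.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
print(is_edge_unbreakable(path, set(range(6)), 2, 1))  # False
print(is_edge_unbreakable(path, set(range(6)), 3, 1))  # True
```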

Hence, the leaves of the recursion of Kawarabayashi and Thorup deal with graphs $G$ where $V(G)$ is $(q,k)$-edge-unbreakable. The algorithm of [15] employs involved arguments stemming from the graph minor theory both to deal with this case and to find the desired edge cut for recursion. These arguments, unfortunately, imply a large overhead in the running time bound, and are problem-specific.

A year later, Chitnis et al. [7, 6] replaced the arguments based on the graph minor theory with steps based on color coding: a simple yet powerful algorithmic technique introduced by Alon, Yuster, and Zwick in 1995 [1]. This approach is both arguably simpler and leads to better running time bounds. Furthermore, the general methodology of [6] — dubbed randomized contractions — turns out to be robust, and allowed for solving such problems as Unique Label Cover, Multiway Cut-Uncut [6], or Steiner Multicut [4]. All aforementioned algorithms have running time bounds of the order of $2^{O(k^2 \log k)} \cdot n^{O(1)}$, with both $O(\cdot)$-notations hiding quadratic or cubic polynomials. Later, Lokshtanov et al. [19] showed how the idea of randomized contractions can be applied to give a reduction for the CMSO model-checking problem from general graphs to highly connected graphs.

While powerful, the randomized contractions technique seemed to be one step short of providing a parameterized algorithm for the Minimum Bisection problem, which was an open problem at that time. In this problem, given a graph $G$ and an integer $k$, one asks for an edge cut $(A,B)$ of order at most $k$ such that $\lvert |A| - |B| \rvert \leq 1$. The only step that fails is the recursive step itself: the number of possible behaviors of the problem on an edge cut of small order is not bounded by any function of $k$, as the description of the behavior needs to include some indicator of the balance between the numbers of vertices assigned to the sides $A$ and $B$. This problem has been circumvented by a subset of the current authors in 2014 [9] by replacing the recursive strategy with a dynamic programming algorithm on an appropriately constructed tree decomposition. To properly describe the contribution, we need some more definitions.

A tree decomposition of a graph $G$ is a pair $(T, \beta)$ where $T$ is a tree and $\beta$ is a mapping that assigns to every node $t \in V(T)$ a set $\beta(t) \subseteq V(G)$, called a bag, such that the following holds: (i) for every $uv \in E(G)$ there exists $t \in V(T)$ with $u, v \in \beta(t)$, and (ii) for every $v \in V(G)$ the set $\{t \in V(T) : v \in \beta(t)\}$ induces a connected nonempty subgraph of $T$.

For a tree decomposition $(T,\beta)$ of $G$, fix an edge $tt' \in E(T)$. The deletion of $tt'$ from $T$ splits $T$ into two trees $T_t$ and $T_{t'}$ (containing $t$ and $t'$, respectively), and naturally induces a separation $(A_{tt'}, B_{tt'})$ in $G$ with $A_{tt'} = \bigcup_{s \in V(T_t)} \beta(s)$ and $B_{tt'} = \bigcup_{s \in V(T_{t'})} \beta(s)$, which we henceforth call the separation associated with $tt'$. The set $\sigma_{(T,\beta)}(tt') := A_{tt'} \cap B_{tt'}$ is called the adhesion of $tt'$. We suppress the subscript if the decomposition is clear from the context.

Some of our tree decompositions are rooted, that is, the tree $T$ in a tree decomposition $(T,\beta)$ is rooted at some node $r$. For $s, t \in V(T)$ we say that $s$ is a descendant of $t$, or that $t$ is an ancestor of $s$, if $t$ lies on the unique path from $s$ to the root; note that a node is both an ancestor and a descendant of itself. For a node $t$ that is not the root of $T$, by $\sigma_{(T,\beta)}(t)$ we mean the adhesion $\sigma_{(T,\beta)}(tt')$ for the edge $tt'$ connecting $t$ with its parent $t'$ in $T$. We extend this notation by setting $\sigma_{(T,\beta)}(r) = \emptyset$ for the root $r$. Again, we omit the subscript if the decomposition is clear from the context.
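As a small illustration of this notation (with a parent-pointer representation of rooted tree decompositions chosen by us), the adhesions $\sigma(t)$ can be computed using the standard fact that the adhesion of a tree edge equals the intersection of its two incident bags:

```python
def adhesions(bags, parent):
    """For a rooted tree decomposition given by bags and parent pointers,
    compute sigma(t) for every non-root node t, using the standard fact
    that the adhesion of a tree edge tt' equals the intersection of the
    two bags beta(t) and beta(t')."""
    return {t: bags[t] & bags[p] for t, p in parent.items() if p is not None}

# Small example: a path a-b-c decomposed into bags {a, b} and {b, c}.
bags = {0: {"a", "b"}, 1: {"b", "c"}}
parent = {0: None, 1: 0}
print(adhesions(bags, parent))  # {1: {'b'}}
```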

We define the following functions for convenience: for a rooted tree decomposition $(T,\beta)$ of $G$ and a node $t \in V(T)$, let $\gamma(t) := \bigcup_{s \text{ a descendant of } t} \beta(s)$ and $\alpha(t) := \gamma(t) \setminus \sigma(t)$.

We say that a rooted tree decomposition $(T,\beta)$ of $G$ is compact if for every node $t$ for which $\sigma(t) \neq \emptyset$ we have that $G[\alpha(t)]$ is connected and $N_G(\alpha(t)) = \sigma(t)$.

The main technical contribution of [9] is an algorithm that, given a graph $G$ and an integer $k$, computes a tree decomposition of $G$ with the following properties: (i) the size of every adhesion is bounded by a function of $k$, and (ii) every bag of the decomposition is $(q(k), k)$-unbreakable for some function $q$. In [9], the construction relied on involved arguments using the framework of important separators and, in essence, also shadow removal, leading to significantly larger bounds on the obtained unbreakability parameter $q(k)$ and on the sizes of adhesions, as well as a larger parametric factor in the running time of the construction algorithm.

Our results

The main technical contribution of this paper is an improved construction algorithm of a decomposition with the aforementioned properties.

Theorem 1.2.

Given an $n$-vertex graph $G$ and an integer $k$, one can in time $2^{O(k \log k)} \cdot n^{O(1)}$ compute a rooted compact tree decomposition $(T, \beta)$ of $G$ such that

  1. every adhesion of $(T,\beta)$ is of size at most $k$;

  2. every bag of $(T,\beta)$ is both $(k,k)$-edge-unbreakable and $(2k,k)$-unbreakable in $G$.

The main highlights of Theorem 1.2 are the improved dependency on $k$ in the running time bound and the best possible edge-unbreakability and adhesion bounds. These properties allow us to develop $2^{O(k \log k)} \cdot n^{O(1)}$-time parameterized algorithms for a number of problems that ask for an edge cut of order at most $k$, with the most prominent one being Minimum Bisection.

Theorem 1.3.

Minimum Bisection can be solved in time $2^{O(k \log k)} \cdot n^{O(1)}$.

This improves the parametric factor of the running time from $2^{O(k^3)}$, provided in [9], to $2^{O(k \log k)}$.

In our second application, the Steiner Cut problem, we are given an undirected graph $G$, a set $T \subseteq V(G)$ of terminals, and integers $k$ and $s$. The goal is to delete at most $k$ edges from $G$ so that the resulting graph has at least $s$ connected components containing at least one terminal. This problem generalizes $s$-Way Cut, which corresponds to the case $T = V(G)$.

Theorem 1.4.

Steiner Cut can be solved in time $2^{O(k \log k)} \cdot n^{O(1)}$.

This improves the parametric factor of the running time from $2^{O(k^2 \log k)}$, provided in [6], to $2^{O(k \log k)}$.

In the Steiner Multicut problem we are given an undirected graph $G$, sets of terminals $T_1, T_2, \ldots, T_t$, each of size at most $p$, and an integer $k$. The goal is to delete at most $k$ edges from $G$ such that every terminal set is separated: for every $i \in [t]$, there does not exist a single connected component of the resulting graph that contains the entire set $T_i$. Note that for $p = 2$, the problem becomes the classic Edge Multicut problem. Bringmann et al. [4] showed an FPT algorithm for Steiner Multicut when parameterized by $k$ and $t$. We use our decomposition theorem to improve the exponential part of the running time of this algorithm.

Theorem 1.5.

Steiner Multicut can be solved in time .

This improves the parametric factor of the running time from , provided in [4], to .

Our techniques

Our starting point is the definition of a lean tree decomposition of Thomas [23]; we follow the phrasing of [2].

Definition 1.6.

A tree decomposition $(T,\beta)$ of a graph $G$ is called lean if for every $t_1, t_2 \in V(T)$ and all sets $Z_1 \subseteq \beta(t_1)$ and $Z_2 \subseteq \beta(t_2)$ with $|Z_1| = |Z_2|$, either $G$ contains $|Z_1|$ vertex-disjoint $Z_1$–$Z_2$ paths, or there exists an edge $tt'$ on the path from $t_1$ to $t_2$ in $T$ such that $|\sigma(tt')| < |Z_1|$.

For a graph $G$ and a tree decomposition $(T,\beta)$ that is not lean, a quadruple $(t_1, t_2, Z_1, Z_2)$ for which the above assertion is not true is called a lean witness. Note that it may happen that $t_1 = t_2$ or $Z_1 \cap Z_2 \neq \emptyset$. In particular, a lean witness with $t_1 = t_2$ is called a single bag lean witness. The order of a lean witness is the minimum order of a separation $(A_1, A_2)$ such that $Z_i \subseteq A_i$ for $i \in \{1,2\}$.

Bellenbaum and Diestel [2] defined an improvement step that, given a tree decomposition and a lean witness, refines the decomposition so that it is in some sense closer to being lean. Given a lean witness $(t_1, t_2, Z_1, Z_2)$, the refinement step finds a minimum order separation $(A_1, A_2)$ with $Z_i \subseteq A_i$ for $i \in \{1,2\}$ and rearranges the tree decomposition so that $A_1 \cap A_2$ appears as a new adhesion on some edge of the decomposition. Bellenbaum and Diestel introduced a potential function, bounded exponentially in $n$, that decreases at every refinement step. Thus, one can exhaustively apply the refinement step while a lean witness exists, obtaining (after possibly an exponential number of steps) a lean decomposition.

A simple but crucial observation connecting lean decompositions with the decomposition promised by Theorem 1.2 is that if a tree decomposition admits no single bag lean witness of order at most $k$, then every bag is $(k,k)$-unbreakable. Combining it with the fact that the refinement step applied to a lean witness of order $k'$ introduces one new adhesion of size $k'$ (and does not increase the sizes of other adhesions), we obtain the following.

Theorem 1.7.

For every graph $G$ and integer $k$, there exists a tree decomposition $(T,\beta)$ of $G$ such that every adhesion of $(T,\beta)$ is of size at most $k$ and every bag is $(k,k)$-unbreakable and $(k,k)$-edge-unbreakable.

Proof sketch..

Start with a trivial tree decomposition $(T,\beta)$ that consists of a single bag $V(G)$. As long as there exists a single bag lean witness of order at most $k$ in $(T,\beta)$, apply the refinement step of Bellenbaum and Diestel to it. It now remains to observe that if any bag $\beta(t)$ for some $t \in V(T)$ were either not $(k,k)$-edge-unbreakable or not $(k,k)$-unbreakable, then the edge cut or separation witnessing this would give rise to a single bag lean witness of order at most $k$ for $t$. ∎

A naive implementation of the procedure of Theorem 1.7 runs in time exponential in $n$, while for any application in parameterized algorithms one needs an FPT algorithm with $k$ as a parameter.

To achieve this goal, one needs to overcome two obstacles. First, the potential provided by Bellenbaum and Diestel gives only an exponential in $n$ bound on the number of needed refinement steps. Fortunately, one can use the fact that we only refine using single bag witnesses of bounded order to provide a different potential, this time bounded polynomially in $n$.

Second, one needs to efficiently (in FPT time) verify whether a bag is $(k,k)$-(edge)-unbreakable and, if not, find a corresponding single bag lean witness. With the help of color coding, we provide such an algorithm for edge-unbreakability, that is, in time $2^{O(k \log k)} \cdot n^{O(1)}$ we can either certify that all bags of a given tree decomposition are $(k,k)$-edge-unbreakable or produce a single bag lean witness of order at most $k$. However, for vertex-unbreakability, we could only show an approximate version that either certifies that all bags are $(2k,k)$-unbreakable or finds a desired lean witness. These ingredients lead to constructing a decomposition with guarantees as in Theorem 1.2.

All applications use the decomposition of Theorem 1.2 and follow the well-paved ways of [6, 9] to perform a bottom-up dynamic programming algorithm. Let us briefly sketch this for the case of Minimum Bisection.

Let $(G, k)$ be a Minimum Bisection instance and let $(T,\beta)$ be a tree decomposition of $G$ promised by Theorem 1.2. The states of our dynamic programming algorithm are the straightforward ones: for every $t \in V(T)$, every $S \subseteq \sigma(t)$, and every $0 \leq n_A \leq |\gamma(t)|$ we compute a value that equals the minimum order of an edge cut $(A,B)$ in $G[\gamma(t)]$ such that $A \cap \sigma(t) = S$ and $|A| = n_A$. Furthermore, we are not interested in cut orders larger than $k$, and we replace them with $+\infty$. By using the edge-unbreakability of $\beta(t)$, we can additionally require that either $A \cap \beta(t)$ or $B \cap \beta(t)$ is of size at most $k$.

In a single step of the dynamic programming algorithm, one would like to populate the table for a node $t$ using the values for all children of $t$ in $T$. Fix a cell for $t$. Intuitively, one would like to iterate over all partitions $(A_\beta, B_\beta)$ of $\beta(t)$ with $A_\beta \cap \sigma(t) = S$ and, for fixed $(A_\beta, B_\beta)$, for every child $s$ of $t$ use the cells for $s$ to read the best way to extend $(A_\beta, B_\beta)$ to $\gamma(s)$. However, $\beta(t)$ can be large, thus we cannot iterate over all such partitions $(A_\beta, B_\beta)$. Here, the properties of the decomposition come into play: since $\beta(t)$ is $(k,k)$-edge-unbreakable, and in the end we are looking for a solution to Minimum Bisection of order at most $k$, we can focus only on partitions such that $|A_\beta| \leq k$ or $|B_\beta| \leq k$. While this still does not allow us to iterate over all such partitions, we can highlight important parts of them by color coding, similarly as is done in the leaves of the recursion in the randomized contractions framework [6].
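To make the shape of the dynamic-programming table concrete, the following sketch computes the value of a single cell by brute force, directly following the semantics described above; the function name, the adjacency-dictionary representation, and the toy instance are ours, and the actual algorithm evaluates these cells bottom-up with color coding instead of enumerating the set $A$.

```python
from itertools import combinations

INF = float("inf")

def cut_order(G, A, vertices):
    """Number of edges of G[vertices] with exactly one endpoint in A."""
    return sum(1 for u in vertices for v in G[u]
               if v in vertices and u < v and (u in A) != (v in A))

def dp_cell(G, gamma_t, sigma_t, S, n_A, k):
    """Semantics of one DP cell for Minimum Bisection (brute force, for
    illustration only): the minimum order of an edge cut (A, B) of
    G[gamma_t] with A ∩ sigma_t = S and |A| = n_A, where orders above k
    are replaced by infinity."""
    free = [v for v in gamma_t if v not in sigma_t]
    if not (S <= sigma_t) or n_A < len(S) or n_A > len(S) + len(free):
        return INF
    best = INF
    for extra in combinations(free, n_A - len(S)):
        A = set(S) | set(extra)
        order = cut_order(G, A, gamma_t)
        if order <= k:
            best = min(best, order)
    return best

# Toy example (hypothetical instance): a 4-cycle 0-1-2-3-0, the whole graph
# forming a single bag with an empty adhesion.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(dp_cell(G, gamma_t={0, 1, 2, 3}, sigma_t=set(), S=frozenset(), n_A=2, k=2))  # 2
```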

2 Preliminaries

Color coding toolbox

Throughout this paper we sometimes use the shorthand $[q] := \{1, 2, \ldots, q\}$ for a positive integer $q$.

Many of our proofs follow the same outline as the treatment of the high-connectivity phase of the randomized contractions technique [6]. As in [6], the color coding step in these algorithms is abstracted in the following lemma:

Lemma 2.1 ([6]).

Given a set $U$ of size $n$ and integers $0 \leq a, b \leq n$, one can in time $2^{O(\min(a,b) \log(a+b))} \cdot n \log n$ construct a family $\mathcal{F}$ of at most $2^{O(\min(a,b) \log(a+b))} \cdot \log n$ subsets of $U$, such that the following holds: for any sets $A, B \subseteq U$ with $A \cap B = \emptyset$, $|A| \leq a$, $|B| \leq b$, there exists a set $X \in \mathcal{F}$ with $A \subseteq X$ and $B \cap X = \emptyset$.
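For intuition, a Monte Carlo variant of such a family can be obtained by random sampling, as in the sketch below; the construction behind Lemma 2.1 in [6] is deterministic (based on splitters) and guarantees the property for all pairs $(A,B)$ simultaneously, while the sketch only handles a fixed pair with high probability.

```python
import math
import random

def random_cover_family(U, a, b, error=0.01):
    """Monte Carlo illustration of the family from Lemma 2.1 (the actual
    construction in [6] is deterministic).  Each set includes every element
    of U independently with probability a / (a + b); a single random set
    works for a fixed pair (A, B) with probability at least
    (a/(a+b))**a * (b/(a+b))**b, so repeating enough times succeeds for a
    fixed pair with probability at least 1 - error."""
    if a == 0:
        return [set()]
    if b == 0:
        return [set(U)]
    p = a / (a + b)
    success = p ** a * (1 - p) ** b
    reps = math.ceil(math.log(1 / error) / success)
    return [{u for u in U if random.random() < p} for _ in range(reps)]

# Usage: some set in the family should contain A and avoid B.
U = list(range(20))
F = random_cover_family(U, a=2, b=3)
A, B = {1, 5}, {2, 7, 11}
print(any(A <= X and not (B & X) for X in F))  # True with high probability
```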

We also need the following more general version that can be obtained from Lemma 2.1 by a straightforward induction on $q$.

Lemma 2.2.

Given a set $U$ of size $n$, an integer $q \geq 2$, and integers $a_1, a_2, \ldots, a_q$ with $a_1 \geq a_2 \geq \cdots \geq a_q \geq 0$ and $\sum_{i=1}^q a_i \leq n$, one can in time $2^{O((\sum_{i=2}^q a_i) \log(\sum_{i=1}^q a_i))} \cdot n^{O(1)}$ construct a family $\mathcal{F}$ of at most $2^{O((\sum_{i=2}^q a_i) \log(\sum_{i=1}^q a_i))} \cdot \log n$ functions $f \colon U \to [q]$, such that the following holds: for any pairwise disjoint sets $A_1, A_2, \ldots, A_q \subseteq U$ such that $|A_i| \leq a_i$ for every $i \in [q]$, there exists a function $f \in \mathcal{F}$ with $f(u) = i$ for every $i \in [q]$ and every $u \in A_i$.

Compactifying tree decompositions

It is well-known that every rooted tree decomposition can be refined to a compact one; see e.g. [3, Lemma 2.8]. For convenience, we provide a proof of this fact, with formulation suited for our needs.

Lemma 2.3.

Given a graph $G$ and its tree decomposition $(T,\beta)$, one can compute in polynomial time a compact tree decomposition $(\hat{T}, \hat{\beta})$ of $G$ such that every bag of $(\hat{T},\hat{\beta})$ is a subset of some bag of $(T,\beta)$, and every adhesion of $(\hat{T},\hat{\beta})$ is a subset of some adhesion of $(T,\beta)$.

Proof.

We assume that $(T,\beta)$ is rooted by rooting $T$ at any node. We first argue that we may assume that $T$ has at most $|V(G)|$ edges. This can be achieved by performing the following operation as long as possible: if there is an edge $st \in E(T)$, with $s$ a child of $t$, such that $\beta(s) \subseteq \beta(t)$, contract this edge, keeping the bag $\beta(t)$ at the resulting node. To see that the obtained tree decomposition has at most $|V(G)|$ edges, observe that going from child to parent on every edge we forget at least one vertex, and every vertex can be forgotten only once.

Having cleaned $(T,\beta)$ as above, we proceed to the construction of $(\hat{T},\hat{\beta})$. We gradually modify the decomposition $(T,\beta)$ as long as it is not compact, and output it as $(\hat{T},\hat{\beta})$ once compactness is achieved. Suppose then that $(T,\beta)$ is still not compact. Then there exists a node $t$ that violates the definition of compactness; we proceed as follows.

First, observe that the properties of a tree decomposition imply that $N_G(\alpha(t)) \subseteq \sigma(t)$. If there exists $v \in \sigma(t) \setminus N_G(\alpha(t))$, then we can delete $v$ from the bag of $t$ and from the bags of all its descendants. This operation strictly decreases the sum of the sizes of all bags, while every bag and adhesion can only be replaced by its subset. Hence, we may apply it exhaustively, to all nodes violating compactness in this manner, thus arriving in polynomial time at the situation where we can assume that $N_G(\alpha(t)) = \sigma(t)$ for every $t \in V(T)$.

Therefore, if now a node $t$ violates compactness, then we have that $G[\alpha(t)]$ is disconnected. Let $(C_1, C_2)$ be an edge cut of $G[\alpha(t)]$ of zero order with $C_1, C_2 \neq \emptyset$. Since $\sigma(t) \neq \emptyset$, $t$ is not the root of $T$; let $s$ be the parent of $t$. Let $T_t$ be the subtree of $T$ rooted at $t$. We make two copies of $T_t$, $T_t^1$ and $T_t^2$, and define the tree $T'$ as $T$ with $T_t$ replaced with $T_t^1$ and $T_t^2$, both with roots being children of $s$. Furthermore, we define $\beta'$ as follows: $\beta'(x^i) = \beta(x) \setminus C_{3-i}$ for the copy $x^i$ in $T_t^i$ of a node $x \in V(T_t)$, and $\beta'(x) = \beta(x)$ otherwise. It is straightforward to verify that $(T', \beta')$ is a tree decomposition of $G$. Moreover, every bag of $(T',\beta')$ is a subset of a bag of $(T,\beta)$, and similarly for adhesions. Finally, we clean up $(T',\beta')$ so that it has at most $|V(G)|$ edges, using the same operation as in the first paragraph. We conclude by replacing $(T,\beta)$ with $(T',\beta')$ and applying the reasoning again.

To bound the number of iterations of the presented procedure, one can exhibit an integral potential that is initially bounded polynomially in the size of $G$ and that strictly decreases in every iteration; hence the procedure stops after polynomially many iterations. Since every iteration can be performed in polynomial time, the whole algorithm works in polynomial time as well. ∎
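As an aside, the edge-contraction cleanup used at the beginning of the proof above (and again in Section 3.2) is easy to implement; the sketch below uses a parent-pointer representation of our own choosing and is only an illustration of that single step.

```python
def prune_subset_bags(bags, parent):
    """Exhaustively contract tree-decomposition edges st (s a child of t)
    with bags[s] a subset of bags[t], keeping bags[t]; children of s are
    reattached to t.  Afterwards every child bag contains a vertex absent
    from its parent bag, so the tree has at most |V(G)| edges."""
    bags, parent = dict(bags), dict(parent)
    changed = True
    while changed:
        changed = False
        for s, t in list(parent.items()):
            if t is not None and bags[s] <= bags[t]:
                for u, pu in parent.items():
                    if pu == s:
                        parent[u] = t      # reattach children of s to t
                del parent[s], bags[s]
                changed = True
                break
    return bags, parent

# Toy example: node 1's bag {b} is contained in node 0's bag {a, b},
# so node 1 is contracted and node 2 becomes a child of node 0.
bags = {0: {"a", "b"}, 1: {"b"}, 2: {"b", "c"}}
parent = {0: None, 1: 0, 2: 1}
print(prune_subset_bags(bags, parent))
```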

3 Constructing a lean decomposition: Proof of Theorem 1.2

3.1 Refinement step

Bellenbaum and Diestel [2] defined an improvement step that, given a tree decomposition and a lean witness, refines the decomposition so that it is in some sense closer to being lean. We will use the same refinement step, but only in the special single bag case, and thus in subsequent sections focus on finding a lean witness in a current candidate tree decomposition. Observe that by Menger's theorem the following conditions are equivalent.

Claim 3.1.

For a tree decomposition $(T,\beta)$ of a graph $G$, a node $t \in V(T)$, and subsets $Z_1, Z_2 \subseteq \beta(t)$ with $|Z_1| = |Z_2|$, either both or none of the following two conditions are true:

  • $(t, t, Z_1, Z_2)$ is a single bag lean witness for $(T,\beta)$,

  • there is a separation $(A_1, A_2)$ of $G$ with $Z_i \subseteq A_i$ for $i \in \{1,2\}$, where $k' := |A_1 \cap A_2|$, such that $k' < |Z_1|$, and there is a set $\{P_x : x \in A_1 \cap A_2\}$ of vertex-disjoint $Z_1$–$Z_2$ paths such that $x \in V(P_x)$ for every $x \in A_1 \cap A_2$.

Moreover, given a single bag lean witness, one can find the above separation and set of paths in polynomial time.

The minimum order of a separation as in the second condition is called the order of the single bag lean witness $(t, t, Z_1, Z_2)$.

To argue that the refinement process stops after a small number of steps, or that it stops at all, we define a potential $\Phi_k(T,\beta)$ for a graph $G$, a tree decomposition $(T,\beta)$ of $G$, and an integer $k$. The potential is a sum of contributions of the individual bags of $(T,\beta)$; the contribution of a bag is a nondecreasing function of its size that equals zero for bags with at most $2k$ vertices, is positive for larger bags, and is bounded polynomially in the size of the bag.

Note that this potential is different from the one used in [2], as the latter can be exponential in $n$ while being oblivious to the cut size $k$.

Given a witness, the single refinement step we use is encapsulated in the following lemma, which is essentially a repetition of the refinement process of [2] together with an analysis of the new potential. We emphasize that in this part, all considered tree decompositions are unrooted.

Lemma 3.2.

Assume we are given a graph $G$, an integer $k$, a tree decomposition $(T,\beta)$ of $G$ with every adhesion of size at most $k$, one node $t \in V(T)$ with $|\beta(t)| > 2k$, and a single bag lean witness $(t, t, Z_1, Z_2)$ of order at most $k$. Then one can in polynomial time compute a tree decomposition $(T', \beta')$ of $G$ with every adhesion of size at most $k$ such that $\Phi_k(T',\beta') < \Phi_k(T,\beta)$.

Proof.

Apply Claim 3.1, yielding a separation $(A_1, A_2)$ of minimum possible order with $Z_i \subseteq A_i$ for $i \in \{1,2\}$ and a family $\{P_x : x \in A_1 \cap A_2\}$ of vertex-disjoint $Z_1$–$Z_2$ paths with $x \in V(P_x)$ for every $x \in A_1 \cap A_2$. Note that the order of the witness $(t,t,Z_1,Z_2)$ is at most $k$, hence the order of $(A_1, A_2)$ is at most $k$.

We construct a tree decomposition $(T',\beta')$ as follows. First, for every $i \in \{1,2\}$, we construct a decomposition $(T^i, \beta^i)$ of $G[A_i]$: we start with $T^i$ being a copy of $T$, where a node $s^i \in V(T^i)$ corresponds to a node $s \in V(T)$, and $\beta^i(s^i) = \beta(s) \cap A_i$ for every $s \in V(T)$. Then for every $x \in A_1 \cap A_2$ we take the node $s_x \in V(T)$ such that $x \in \beta(s_x)$ and $s_x$ is closest to $t$ among such nodes, and insert $x$ into every bag $\beta^i(s^i)$ for $s$ lying on the path between $s_x$ (inclusive) and $t$ (exclusive) in $T$.

Clearly, $(T^i, \beta^i)$ is a tree decomposition of $G[A_i]$ and $Z_i \subseteq \beta^i(t^i)$. We construct $(T',\beta')$ by taking $T'$ to be a disjoint union of $T^1$ and $T^2$, with the copies $t^1, t^2$ of the node $t$ connected by an edge $t^1t^2$, and $\beta'(t^i) = \beta^i(t^i) \cup (A_1 \cap A_2)$ for $i \in \{1,2\}$, while $\beta'$ coincides with $\beta^i$ on all other nodes. Since $(A_1, A_2)$ is a separation and $A_1 \cap A_2$ is present in both bags $\beta'(t^1)$, $\beta'(t^2)$, we infer that $(T',\beta')$ is a tree decomposition of $G$.

We now argue that every adhesion of $(T',\beta')$ is of size at most $k$. This is clearly true for the edge connecting $t^1$ and $t^2$, as the adhesion there is exactly $A_1 \cap A_2$ and $|A_1 \cap A_2| \leq k$.

Consider now a bag $\beta'(s^i)$ in the tree $T^i$. The set $\beta'(s^i) \setminus \beta(s)$ consists of some vertices $x \in A_1 \cap A_2$ with $x \notin \beta(s)$. However, for every such $x$, by the properties of a tree decomposition and Menger's theorem, $\beta(s)$ contains at least one vertex of the path $P_x$, different from $x$, that lies on the subpath from $x$ to the endpoint of $P_x$ in $Z_{3-i}$; any such vertex belongs to $A_{3-i} \setminus A_i$. This vertex is not present in $\beta'(s^i)$, and since the paths $P_x$ are vertex-disjoint, these vertices are distinct for distinct $x$; consequently, $|\beta'(s^i)| \leq |\beta(s)|$. The same argumentation holds for every edge of $T^i$ and the adhesion of this edge.

We are left with analysing the potential decrease. Fix $s \in V(T)$. We analyse the difference between the contribution to the potential of $\beta(s)$ in $(T,\beta)$ and the total contribution of the two copies of $s$ in $(T',\beta')$. First, by the analysis in the previous paragraph, we have $|\beta'(s^i)| \leq |\beta(s)|$ for $i \in \{1,2\}$. In particular, if the contribution of $\beta(s)$ is zero, then so are the contributions of both copies of $s$. A direct check based on the definition of the potential shows that also in the remaining cases the contribution of $\beta(s)$ is not smaller than the total contribution of the two copies of $s$ in $(T',\beta')$.

To prove strict inequality, we show that these contributions are not equal for the node $t$. Recall that we assumed $|\beta(t)| > 2k$, so the contribution of $\beta(t)$ to the potential is positive. By the previous argumentation, the only chance for equal contributions is that $|\beta'(t^i)| = |\beta(t)|$ for some $i \in \{1,2\}$. However, note that $Z_{3-i} \subseteq \beta(t)$ and $|Z_{3-i}| > |A_1 \cap A_2|$. Consequently, as the copy $t^i$ loses all vertices of $\beta(t) \cap (A_{3-i} \setminus A_i) \supseteq Z_{3-i} \setminus (A_1 \cap A_2)$ and gains only vertices of $A_1 \cap A_2 \setminus \beta(t)$, we have $|\beta'(t^i)| \leq |\beta(t)| - |Z_{3-i}| + |A_1 \cap A_2| < |\beta(t)|$, and hence the contributions are not equal. This finishes the proof of the lemma. ∎

3.2 The algorithm

In the subsequent two sections, we prove the following two lemmas that allow us to find a single-bag lean witness in the case of a bag being either not $(k,k)$-edge-unbreakable or not $(2k,k)$-unbreakable. The proofs follow the principles of the algorithms for the high-connectivity phase in the technique of randomized contractions [6] and the Hypergraph Painting subroutine of [9]; in particular, the main ingredient is color coding.

Lemma 3.3.

Given an $n$-vertex graph $G$, an integer $k$, and a set $S \subseteq V(G)$ with the property that every connected component $C$ of $G - S$ satisfies $|N_G(C)| \leq k$, one can in time $2^{O(k \log k)} \cdot n^{O(1)}$ either find an edge cut $(A,B)$ in $G$ such that the order of $(A,B)$ is some $k' \leq k$ and it holds that $|A \cap S| > k'$ as well as $|B \cap S| > k'$, or correctly conclude that no such edge cut exists.

Lemma 3.4.

Given an $n$-vertex graph $G$, an integer $k$, and a set $S \subseteq V(G)$ with the property that every connected component $C$ of $G - S$ satisfies $|N_G(C)| \leq k$, one can in time $2^{O(k \log k)} \cdot n^{O(1)}$ either find a separation $(A,B)$ in $G$ such that the order of $(A,B)$ is some $k' \leq k$ and it holds that $|(A \setminus B) \cap S| > k'$ and $|(B \setminus A) \cap S| > k'$, or correctly conclude that no separation $(A,B)$ of order at most $k$ in $G$ has the property that $|(A \setminus B) \cap S| > 2k$ and $|(B \setminus A) \cap S| > 2k$.

With Lemmas 3.3 and 3.4 in hand, we now formally prove Theorem 1.2.

Proof of Theorem 1.2..

First, we can assume that $G$ is connected, as otherwise we can compute a tree decomposition for every connected component separately, and then glue them together in an arbitrary fashion.

We start with a naive unrooted tree decomposition that has a single bag containing the entire vertex set, and iteratively improve it, using Lemma 3.2, until it satisfies the conditions of Theorem 1.2, except for compactness, which we will handle at the end. We will maintain the invariant that every adhesion of the current decomposition $(T,\beta)$ is of size at most $k$. At every step the potential $\Phi_k(T,\beta)$ will strictly decrease, leading to a number of steps bounded polynomially in $n$.

Let us now elaborate on a single step of the algorithm. There are two reasons why $(T,\beta)$ may not satisfy the conditions of Theorem 1.2: either it contains a bag that is not $(k,k)$-edge-unbreakable, or a bag that is not $(2k,k)$-unbreakable. Note here that a bag that is not $(k,k)$-edge-unbreakable or not $(2k,k)$-unbreakable necessarily has more than $2k$ vertices.

For the first case, consider a bag $\beta(t)$ that is not $(k,k)$-edge-unbreakable. Since every adhesion of $(T,\beta)$ is of size at most $k$, we have that for every connected component $C$ of $G - \beta(t)$ it holds that $|N_G(C)| \leq k$. Consequently, Lemma 3.3 allows us to find a single-bag lean witness of order at most $k$ for the node $t$ in time $2^{O(k \log k)} \cdot n^{O(1)}$.

For the second case, an analogous argument using Lemma 3.4 allows us to find a single-bag lean witness of order at most $k$ inside a bag that is not $(2k,k)$-unbreakable.

In both cases, we uncovered a single-bag lean witness of order at most $k$ for a node $t$ satisfying $|\beta(t)| > 2k$. Hence, we may refine the decomposition by applying Lemma 3.2 and proceed iteratively with the refined decomposition. As asserted by Lemma 3.2, the potential strictly decreases in each iteration.

We remark that between the refinement steps we need to reduce the number of edges in the decomposition to at most $|V(G)|$, using the same reasoning as in the proof of Lemma 2.3. That is, as long as there exists an edge $st \in E(T)$ with $\beta(s) \subseteq \beta(t)$, we can contract $s$ onto $t$, keeping $\beta(t)$ as the bag of the new node. A direct check shows that this operation increases neither the sizes of adhesions nor the potential of the decomposition, while, as argued in the proof of Lemma 2.3, it bounds the number of edges of $T$ by $|V(G)|$.

Observe that the potential is bounded polynomially in $n$ and every iteration can be executed in time $2^{O(k \log k)} \cdot n^{O(1)}$. Hence, we conclude that the refinement process finishes within the claimed running time and outputs an unrooted tree decomposition that satisfies all the requirements of Theorem 1.2, except for being compact. This can be remedied by applying the algorithm of Lemma 2.3. Note that neither the (edge-)unbreakability of bags nor the upper bound on the sizes of adhesions can deteriorate as a result of applying the algorithm of Lemma 2.3, as every bag (resp. every adhesion) of the obtained tree decomposition is a subset of a bag (resp. an adhesion) of the original one. ∎

3.3 Finding lean witness: edge cuts

Proof of Lemma 3.3..

If the vertices of $S$ are present in more than one connected component of $G$, then we can return an edge cut of order zero with some vertices of $S$ on both sides of the cut, and we are done. Otherwise, as the statement is trivial for $S = \emptyset$, we can assume that $G$ is connected by focusing on the single connected component of $G$ that contains all vertices of $S$. Furthermore, since the problem is easily solvable by brute force when $n$ is bounded by a function of $k$, we assume that $n$ is sufficiently large in terms of $k$.

We start with a regularization step.

Claim 3.5.

If there exists an edge cut $(A,B)$ as in the lemma statement, then there exists one where $G[A]$ and $G[B]$ are connected.

Proof.

By definition, every edge of $E(A,B)$ has one endpoint in $A$ and one endpoint in $B$. Since $|A \cap S| > |E(A,B)|$, there exists a connected component $A'$ of $G[A]$ such that $|A' \cap S| > |E(A', B)|$. Also, note that $E(A', V(G) \setminus A') \subseteq E(A,B)$ as $A'$ is a connected component of $G[A]$, and clearly $|(V(G) \setminus A') \cap S| \geq |B \cap S| > |E(A,B)|$. Thus, the cut $(A', V(G) \setminus A')$ also satisfies the requirements of the lemma. Consequently, we can assume that $G[A]$ is connected.

We now perform exactly the same operation, but with the roles of $A$ and $B$ swapped, using a component $B'$ of $G[B]$. By the connectivity of $G$, since $G[A]$ is connected, $G[V(G) \setminus B']$ is connected as well. Consequently, after performing the same operation using the component $B'$, both sides of the obtained cut induce connected subgraphs.

We note here that the main reason why we are not able to get an exact FPT algorithm for checking $(k,k)$-unbreakability (as opposed to $(k,k)$-edge-unbreakability) is that the analogue of Claim 3.5 fails for vertex cuts.

For a connected component $C$ of $G - S$, by $G_C$ we denote the graph $(N_G[C], \{uv \in E(G) : u \in C \text{ or } v \in C\})$, that is, the subgraph of $G$ induced by all edges with at least one endpoint in $C$. Note that the graphs $G_C$ for different components $C$ are pairwise edge-disjoint.

Assume that an edge cut satisfying the conditions of the lemma exists, and fix one such edge cut $(A,B)$ of order $k' \leq k$. By Claim 3.5, we can additionally assume that both $G[A]$ and $G[B]$ are connected. Let $F := E(A,B)$.

We say that a component $C$ of $G - S$ is touched if at least one edge of $F$ is present in $G_C$, and a vertex $v \in S$ is touched if either $v$ is an endpoint of an edge of $F$, or there exists a touched component $C$ with $v \in N_G(C)$. Note that every touched component gives rise to at most $k$ touched vertices, while every edge of $F$ gives rise to at most two touched vertices. Since $|F| = k' \leq k$, there are at most $k$ touched components of $G - S$ and at most $k^2 + 2k$ touched vertices of $S$.

We construct an auxiliary graph $H$ as follows. We take $V(H) = S$, and two vertices $u, v \in S$ are connected by an edge in $H$ if $uv \in E(G)$ or there exists a component $C$ of $G - S$ such that $u, v \in N_G(C)$. The graph $H$ is sometimes called the torso of $S$ in the literature.
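The torso can be built directly from its definition; the following sketch (with an adjacency-dictionary representation of our own choosing) computes $H$ from $G$ and $S$:

```python
def torso(G, S):
    """Torso of S in G: graph on vertex set S where u, v are adjacent iff
    uv is an edge of G or some connected component of G - S is adjacent to
    both u and v.  G is an adjacency dictionary."""
    S = set(S)
    H = {v: set() for v in S}
    for u in S:                       # edges of G with both endpoints in S
        H[u] |= (G[u] & S)
    seen = set()
    for start in G:                   # enumerate components of G - S
        if start in S or start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                  # DFS inside G - S
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in G[v] if w not in S and w not in comp)
        seen |= comp
        boundary = {w for v in comp for w in G[v] if w in S}  # N_G(comp)
        for u in boundary:            # the neighbourhood becomes a clique in H
            H[u] |= boundary - {u}
    return H

# Toy example: a 4-cycle a-b-c-d with S = {a, c}; the component {b} makes
# a and c adjacent in the torso even though ac is not an edge of G.
G = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(torso(G, {"a", "c"}))  # a and c are adjacent in H
```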

Since $G[A]$ and $G[B]$ are connected, $H[A \cap S]$ and $H[B \cap S]$ are connected as well. Let $A_S$ (resp. $B_S$) be an arbitrary set of $\min(|A \cap S|, k+1)$ vertices of $A \cap S$ (resp. $B \cap S$) such that $H[A_S]$ (resp. $H[B_S]$) is connected. Finally, let $W$ be the set of all touched vertices of $S$ that are not contained in $A_S \cup B_S$.

We apply Lemma 2.2 to the universe $U = S$ with $q = 3$, $a_1 = k^2 + 2k$, and $a_2 = a_3 = k+1$. In time $2^{O(k \log k)} \cdot n^{O(1)}$ we obtain a family $\mathcal{F}$ of functions $f \colon S \to [3]$ such that there exists $f \in \mathcal{F}$ with $W \subseteq f^{-1}(1)$, $A_S \subseteq f^{-1}(2)$, and $B_S \subseteq f^{-1}(3)$. Note that if this is the case, then $A_S$ is contained in one connected component of $H[f^{-1}(2)]$ and $B_S$ is contained in one connected component of $H[f^{-1}(3)]$.

We observe the following.

Claim 3.6.

If $f \in \mathcal{F}$ is such that $W \subseteq f^{-1}(1)$, $A_S \subseteq f^{-1}(2)$, and $B_S \subseteq f^{-1}(3)$, then the connected component of $H[f^{-1}(2)]$ containing $A_S$ is completely contained in $A$ and, symmetrically, the connected component of $H[f^{-1}(3)]$ containing $B_S$ is completely contained in $B$.

Proof.

The claim follows from the observation that for every edge $uv \in E(H)$ with $u \in A$ and $v \in B$ we have that $u$ and $v$ are touched and, consequently, $u \in A_S \cup W$ and $v \in B_S \cup W$. Hence, the connected component of $H[f^{-1}(2)]$ containing $A_S$ cannot contain any vertex of $B$, and a symmetrical claim holds for $B_S$ and $A$.

By iterating over all the fewer than $|\mathcal{F}| \cdot n^2$ options, we guess the function $f \in \mathcal{F}$ satisfying the assumptions of Claim 3.6 and the connected components $D_A$ and $D_B$ of $H[f^{-1}(2)]$ and $H[f^{-1}(3)]$, respectively, that contain $A_S$ and $B_S$, respectively. Note that $D_A \subseteq A$ and $D_B \subseteq B$. On the other hand, since $|D_A| \geq |A_S| > |E(A,B)|$ and $|D_B| \geq |B_S| > |E(A,B)|$, while $(A,B)$ separates $D_A$ from $D_B$ with at most $k$ edges, one can in polynomial time find an edge cut $(A^*, B^*)$ of order at most $|E(A,B)|$ with $D_A \subseteq A^*$ and $D_B \subseteq B^*$. While we may not necessarily obtain $(A^*, B^*) = (A,B)$, the cut $(A^*, B^*)$ satisfies the desired properties and can be returned by the algorithm. This finishes the proof of Lemma 3.3. ∎

3.4 Finding lean witness: vertex cuts

Proof of Lemma 3.4..

Similarly as in the proof of Lemma 3.3, we can restrict our attention to connected graphs $G$. That is, if the vertices of $S$ are present in more than one connected component, then we can return a separation of order zero with some vertices of $S$ on both sides, and we are done. Otherwise, as the statement is trivial for $S = \emptyset$, we can assume that $G$ is connected by focusing on the single connected component of $G$ that contains all vertices of $S$.

As in the proof of Lemma 3.3, we define $H$ to be the torso of $S$ in $G$. That is, we take $V(H) = S$, and two vertices $u, v \in S$ are connected by an edge in $H$ if $uv \in E(G)$ or there exists a component $C$ of $G - S$ such that $u, v \in N_G(C)$.

Assume that there exists a separation $(A,B)$ of order at most $k$ such that $|(A \setminus B) \cap S| > 2k$ and $|(B \setminus A) \cap S| > 2k$; fix one such separation. We say that a component $C$ of $G - S$ is touched if $C \cap (A \cap B) \neq \emptyset$. A vertex $v \in S$ is touched if either $v \in A \cap B$ or there exists a touched component $C$ with $v \in N_G(C)$. Note that every touched component gives rise to at most $k$ touched vertices. There are at most $k$ touched components of $G - S$ and at most $k^2 + k$ touched vertices of $S$.

We now consider two cases.

Case 1:

We first assume that both $H[(A \setminus B) \cap S]$ and $H[(B \setminus A) \cap S]$ contain connected components with at least $k+1$ vertices each. Let $A_S$ be a set of $k+1$ vertices of such a component of $H[(A \setminus B) \cap S]$ such that $H[A_S]$ is connected, and similarly define $B_S$. Let $W$ be the set of touched vertices that are not contained in $A_S \cup B_S$.

We apply Lemma 2.2 to the universe $U = S$ with $q = 3$, $a_1 = k^2 + k$, and $a_2 = a_3 = k+1$. In time $2^{O(k \log k)} \cdot n^{O(1)}$ we obtain a family $\mathcal{F}$ of functions $f \colon S \to [3]$ such that there exists $f \in \mathcal{F}$ with $W \subseteq f^{-1}(1)$, $A_S \subseteq f^{-1}(2)$, and $B_S \subseteq f^{-1}(3)$. Note that if this is the case, then $A_S$ is contained in one connected component of $H[f^{-1}(2)]$ and $B_S$ is contained in one connected component of $H[f^{-1}(3)]$.

The following claim is straightforward by the definition of touched vertices.

Claim 3.7.

If $f \in \mathcal{F}$ is such that $W \subseteq f^{-1}(1)$, $A_S \subseteq f^{-1}(2)$, and $B_S \subseteq f^{-1}(3)$, then the connected component of $H[f^{-1}(2)]$ containing $A_S$ is completely contained in $A \setminus B$ and, symmetrically, the connected component of $H[f^{-1}(3)]$ containing $B_S$ is completely contained in $B \setminus A$.

By iterating over all the less than