# Brief Announcement: Almost-Tight Approximation Distributed Algorithm for Minimum Cut

###### Abstract

In this short paper, we present an improved algorithm for approximating the minimum cut on distributed (CONGEST) networks. Let $\lambda$ be the minimum cut. Our algorithm can compute $\lambda$ exactly in $\tilde O((\sqrt{n}+D)\operatorname{poly}(\lambda))$ time, where $n$ is the number of nodes (processors) in the network, $D$ is the network diameter, and $\tilde O$ hides polylogarithmic factors. By a standard reduction, we can convert this algorithm into a $(1+\epsilon)$-approximation $\tilde O((\sqrt{n}+D)\operatorname{poly}(1/\epsilon))$-time algorithm. The latter result improves over the previous $(2+\epsilon)$-approximation $\tilde O(\sqrt{n}+D)$-time algorithm of Ghaffari and Kuhn [DISC 2013]. Due to the lower bound of $\tilde\Omega(\sqrt{n}+D)$ by Das Sarma et al. [SICOMP 2013], this running time is tight up to a polylogarithmic factor. Our algorithm is an extremely simple combination of Thorup’s tree packing theorem [Combinatorica 2007], Kutten and Peleg’s tree partitioning algorithm [J. Algorithms 1998], and Karger’s dynamic programming [JACM 2000].

## 1 Introduction

### Problem.

In this paper, we study the time complexity of the fundamental problem of computing a minimum cut on a distributed network. Given a graph $G=(V,E)$, an edge weight assignment $w: E \to \mathbb{R}^+$, and any set $S$ of nodes in $G$, the cut $C(S)$ is defined as

$$C(S) = \{uv \in E \mid u \in S,\ v \notin S\}, \qquad w(C(S)) = \sum_{e \in C(S)} w(e).$$

Our goal is to find $\lambda = \min_{\emptyset \neq S \subsetneq V} w(C(S))$.
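
To make the definition concrete, here is a minimal centralized sketch in Python. The helper names are ours, and the brute-force search is exponential; it only illustrates the objective on toy graphs and has nothing to do with the distributed algorithm.

```python
from itertools import combinations

def cut_weight(edges, S):
    """Weight of the cut C(S): total weight of edges with exactly one endpoint in S."""
    return sum(w for (u, v, w) in edges if (u in S) != (v in S))

def min_cut_brute_force(nodes, edges):
    """Minimum of w(C(S)) over all nonempty proper subsets S of V.
    Exponential time; for tiny illustrative graphs only."""
    nodes = list(nodes)
    best = float("inf")
    for r in range(1, len(nodes)):
        for S in combinations(nodes, r):
            best = min(best, cut_weight(edges, set(S)))
    return best
```

For example, on a 4-cycle with unit weights, every single vertex already attains the minimum cut value 2.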

### Communication Model.

We use a standard message passing network (the CONGEST model [Pel00]). Throughout the paper, we let $n$ be the number of nodes and $D$ be the diameter of the network. Every node is assumed to have a unique ID, and initially knows the weights of the edges incident to it. The execution in this network proceeds in synchronous rounds, and in each round, each node can send a message of $O(\log n)$ bits to each of its neighbors. The goal of the problem is to find the minimum or approximately minimum cut $C(S^*)$. (Every node outputs whether it is in $S^*$ at the end of the process.) The time complexity is the number of rounds needed to compute this. (For more detail, see [GK13].)

### Previous Work.

The current best algorithm is by Ghaffari and Kuhn [GK13], which takes $\tilde O(\sqrt{n}+D)$ time with an approximation ratio of $(2+\epsilon)$. ($\tilde O$ hides the $\operatorname{poly}\log n$ factor.) The running time of this algorithm matches the lower bound of Das Sarma et al. [DHK11], who showed that this problem cannot be computed faster than $\tilde\Omega(\sqrt{n}+D)$ even when we allow a large approximation ratio. (This lower bound was also shown to hold even when quantum communication is allowed [EKNP14], and when the capacity of an edge is proportional to its weight [GK13].) For a more comprehensive literature review, see [GK13].

### Our Results.

Our main result is a distributed algorithm that can compute the minimum cut $\lambda$ exactly in $\tilde O((\sqrt{n}+D)\operatorname{poly}(\lambda))$ time. For the case where the minimum cut is small (i.e. $\lambda = O(\operatorname{poly}\log n)$), the running time of our algorithm matches the $\tilde\Omega(\sqrt{n}+D)$ lower bound [DHK11, GK13]. When the minimum cut is large, Karger’s edge sampling technique [Kar94] can be used to reduce the minimum cut to $O(\epsilon^{-2}\log n)$ at the cost of a $(1+\epsilon)$ approximation factor (due to the space limit, we refer the readers to [Tho07, Lemma 7] for the statement of Karger’s sampling result). This makes our algorithm a $(1+\epsilon)$-approximation $\tilde O((\sqrt{n}+D)\operatorname{poly}(1/\epsilon))$-time one, improving the previous $(2+\epsilon)$-approximation algorithm of Ghaffari and Kuhn [GK13].

### Techniques.

Our algorithm is a simple combination of techniques from [Tho07, KP98, Kar00]. The starting point of our algorithm is Thorup’s tree packing theorem, which shows that if we greedily generate sufficiently many trees $T_1, \ldots, T_k$ (with $k = \operatorname{poly}(\lambda)\cdot\operatorname{poly}\log n$), where tree $T_i$ is the minimum spanning tree with respect to the loads induced by $T_1, \ldots, T_{i-1}$, then one of these trees will contain exactly one edge of the minimum cut. (Due to the space limit, we refer the readers to [Tho07, Theorem 9] for the full statement.) Since we can use the $O(\sqrt{n}\log^* n + D)$-time algorithm of Kutten and Peleg [KP98] to compute the minimum spanning tree (MST), the problem of finding a minimum cut is reduced to finding the minimum cut that 1-respects a tree; i.e., finding which edge in a given spanning tree defines a smallest cut (see the formal definition in Section 2). Solving this problem is our main result.
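
The greedy packing can be sketched as follows. This is a centralized illustration with our own helper names (a Kruskal-based MST where an edge's "weight" is its current load, i.e. the number of earlier trees using it); it is not the distributed implementation, and the exact value of $k$ required by Thorup's theorem is omitted.

```python
def mst(nodes, edges, load):
    """Kruskal's algorithm with edge loads as weights (union-find with path halving)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for e in sorted(edges, key=lambda e: load[e]):
        u, v = e
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def greedy_tree_packing(nodes, edges, k):
    """Trees T_1..T_k, where T_i is an MST w.r.t. the loads induced by T_1..T_{i-1}."""
    load = {e: 0 for e in edges}
    trees = []
    for _ in range(k):
        t = mst(nodes, edges, load)
        for e in t:
            load[e] += 1           # each tree increases the load of the edges it uses
        trees.append(t)
    return trees
```

On a 4-cycle, for instance, each generated tree spans all four nodes with three edges, and successive trees avoid the most-loaded edge.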

To solve this problem, we use a simple observation of Karger [Kar00], which reduces the problem to computing, for each node, the sum of degrees and the number of edges contained in the subtree rooted at that node. We use this observation along with Kutten and Peleg’s tree partitioning [KP98] to quickly compute these quantities. This requires several (elementary) steps, which we discuss in more detail in Section 2.

### Concurrent Result.

Independently from our work, Su [Su14] also achieved a $(1+\epsilon)$-approximation $\tilde O(\sqrt{n}+D)$-time algorithm for this problem. His starting point is, like ours, Thorup’s theorem [Tho07]. The way he finds the minimum cut that 1-respects a tree is, however, very different. In particular, he uses edge sampling to make the minimum cut of a certain graph be one and uses Thurimella’s algorithm [Thu97] to find a bridge. (See Algorithm 2 in [Su14] for details.) This gives a nice and simple way to achieve essentially the same approximation result as ours, with the small drawback that the minimum cut cannot be computed exactly, even when it is small.

## 2 Distributed Algorithm for Finding a Cut that 1-Respects a Tree

In this section, we solve the following problem: Given a spanning tree $T$ of the network rooted at some node $r$, we want to find an edge $e$ in $T$ such that when we cut it, the cut defined by the edges connecting the two connected components of $T \setminus \{e\}$ is smallest. To be precise, for any node $v$, define $v^{\downarrow}$ to be the set of nodes that are descendants of $v$ in $T$, including $v$ itself. The problem is then to compute $\min_{v \neq r} w(C(v^{\downarrow}))$.

###### Theorem 2.1 (Main Result).

There is an $\tilde O(\sqrt{n}+D)$-time distributed algorithm that can compute $\min_{v \neq r} w(C(v^{\downarrow}))$ as well as find a node $v^*$ such that $w(C(v^{*\downarrow})) = \min_{v \neq r} w(C(v^{\downarrow}))$.

In fact, at the end of our algorithm every node $v$ knows $w(C(v^{\downarrow}))$. Our algorithm is inspired by the following observation used in Karger’s dynamic programming [Kar00]. For any node $v$, let $\delta(v)$ be the weighted degree of $v$, i.e. $\delta(v) = \sum_{e \ni v} w(e)$. Let $\rho(v)$ denote the total weight of edges whose endpoints’ least common ancestor is $v$. Let $\delta(v^{\downarrow}) = \sum_{u \in v^{\downarrow}} \delta(u)$ and $\rho(v^{\downarrow}) = \sum_{u \in v^{\downarrow}} \rho(u)$.

###### Lemma 2.2 (Karger [Kar00, Lemma 5.9]).

For every node $v \neq r$, $w(C(v^{\downarrow})) = \delta(v^{\downarrow}) - 2\rho(v^{\downarrow})$.

Our algorithm will make sure that every node $v$ knows $\delta(v^{\downarrow})$ and $\rho(v^{\downarrow})$. By Lemma 2.2, this is sufficient for every node $v$ to compute $w(C(v^{\downarrow}))$. The algorithm is divided into several steps, as follows.
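
Lemma 2.2 is easy to sanity-check centrally. The sketch below (our own helper names; it assumes the rooted tree is given by child lists and parent pointers) verifies $w(C(v^{\downarrow})) = \delta(v^{\downarrow}) - 2\rho(v^{\downarrow})$ against a direct cut computation for every non-root node.

```python
def check_karger_identity(nodes, edges, tree_children, root, parent):
    """Verify w(C(v_down)) = delta(v_down) - 2*rho(v_down) for every non-root v."""
    def lca(u, v):
        # collect u's ancestors (including u), then lift v until it hits that set
        anc = set()
        while u is not None:
            anc.add(u)
            u = parent.get(u)
        while v not in anc:
            v = parent[v]
        return v

    def subtree(v):
        # v_down: descendants of v in the rooted tree, including v itself
        stack, out = [v], set()
        while stack:
            x = stack.pop()
            out.add(x)
            stack.extend(tree_children.get(x, []))
        return out

    delta = {v: 0 for v in nodes}   # weighted degrees
    rho = {v: 0 for v in nodes}     # weight of edges whose endpoints' LCA is v
    for (u, v, w) in edges:
        delta[u] += w
        delta[v] += w
        rho[lca(u, v)] += w
    for v in nodes:
        if v == root:
            continue
        down = subtree(v)
        cut = sum(w for (x, y, w) in edges if (x in down) != (y in down))
        assert cut == sum(delta[u] for u in down) - 2 * sum(rho[u] for u in down)
    return True
```

The check passes on, e.g., a 4-node graph with a path spanning tree and two non-tree edges.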

### Step 1: Partition $T$ into Fragments and Compute the “Fragment Tree” $\mathcal{T}$.

We use the algorithm of Kutten and Peleg [KP98, Section 3.2] to partition the nodes of tree $T$ into $O(\sqrt{n})$ subtrees, where each subtree has diameter $O(\sqrt{n})$. (To be precise, we compute a spanning forest. Also note that we in fact do not need this algorithm, since we obtain $T$ by using Kutten and Peleg’s MST algorithm, which already computes the spanning forest as a subroutine. See [KP98] for details.) Every node knows which edges incident to it are in the subtree containing it. This algorithm takes $O(\sqrt{n}\log^* n + D)$ time. We call these subtrees fragments and denote them by $F_1, \ldots, F_N$, where $N = O(\sqrt{n})$.
For any $i$, let $\mathrm{ID}(F_i)$ be the ID of $F_i$. We can assume that every node in $F_i$ knows $\mathrm{ID}(F_i)$. This can be achieved in $O(\sqrt{n})$ time by communication within each fragment.

Let $\mathcal{T}$ be the rooted tree obtained by contracting the nodes in the same fragment into one node. This naturally defines the child-parent relationship between fragments (e.g. the fragments labeled (5), (6), and (7) in Figure 1(b) are children of the fragment labeled (0)). Let the root of any fragment $F_i$, denoted by $r(F_i)$, be the node in $F_i$ that is nearest to the root $r$ in $T$. We now make every node know $\mathcal{T}$: for every “inter-fragment” edge, i.e. every edge $uv$ in $T$ such that $u$ and $v$ are in different fragments, either node $u$ or $v$ broadcasts this edge and the IDs of the fragments containing $u$ and $v$ to the whole network. This step takes $O(\sqrt{n}+D)$ time since there are $O(\sqrt{n})$ edges in $T$ that link between different fragments. Note that this process also makes every node know the roots of all fragments since, for every inter-fragment edge $uv$, every node knows the child-parent relationship between the two fragments that contain $u$ and $v$.
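
As a rough illustration of fragment decomposition, the toy routine below splits a rooted tree bottom-up into pieces of size about $\sqrt{n}$. It is *not* Kutten and Peleg's algorithm (which also bounds fragment diameter and runs distributively); the function and variable names are ours.

```python
import math

def partition_into_fragments(children, root, n):
    """Bottom-up toy partition: a node closes a fragment once its accumulated
    piece reaches sqrt(n) nodes (the root always closes the last fragment)."""
    limit = max(1, math.isqrt(n))
    frag = {}          # node -> fragment id
    fragments = []     # fragment id -> list of nodes
    def dfs(v):
        piece = [v]
        for c in children.get(v, []):
            sub = dfs(c)
            if sub is not None:      # child's piece is still open: absorb it
                piece.extend(sub)
        if len(piece) >= limit or v == root:
            fid = len(fragments)
            fragments.append(piece)
            for u in piece:
                frag[u] = fid
            return None              # piece closed
        return piece                 # piece stays open, handed to the parent
    dfs(root)
    return fragments, frag
```

On a 9-node path, this yields three fragments of three nodes each. (The recursion depth equals the tree height, so very deep trees would need an iterative rewrite.)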

### Step 2: Compute Fragments in Subtrees of Ancestors.

For any node $v$, let $\mathcal{F}(v)$ be the set of fragments $F_i$ such that $F_i \subseteq v^{\downarrow}$. For any node $v$ in any fragment $F_i$, let $A(v)$ be the set of ancestors of $v$ in $T$ that are in $F_i$ or the parent fragment of $F_i$ (also let $A(v)$ contain $v$ itself). (For example, see Figure 1(c).) The goal of this step is to make every node $v$ know (i) $\mathcal{F}(v)$ and (ii) $\mathcal{F}(u)$ for all $u \in A(v)$.

First, we make every node $v$ know $\mathcal{F}(v)$: for every fragment $F_i$, we aggregate from the leaves to the root of $F_i$ (i.e. upcast) the list of child fragments of $F_i$. This takes $O(\sqrt{n})$ time since there are $O(\sqrt{n})$ fragments to aggregate. In this process every node $v$ receives the list of child fragments of $F_i$ that are contained in $v^{\downarrow}$. It can then use $\mathcal{T}$ to compute the fragments that are descendants of these child fragments, and thus compute all fragments contained in $v^{\downarrow}$. Next, we make every node $v$ in every fragment know $A(v)$: every node $u$ sends a message containing its ID down the tree $T$ until this message reaches the leaves of the child fragments of the fragment containing $u$. Since each fragment has diameter $O(\sqrt{n})$, this process takes $O(\sqrt{n})$ time. With some minor modifications, we can also make every node $v$ know $\mathcal{F}(u)$ for all $u \in A(v)$: Initially every node $u$ sends a message $\langle u, F_i \rangle$, for every $F_i \in \mathcal{F}(u)$, to its children. Every node $v$ that receives a message $\langle u, F_i \rangle$ from its parent sends this message further to its children if $F_i \notin \mathcal{F}(v)$. (A message $\langle u, F_i \rangle$ that a node $v$ sends to its children should be interpreted as “$u$ is the lowest ancestor of $v$ such that $F_i \in \mathcal{F}(u)$”.)
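
A centralized sketch of what Step 2 computes: for each node $v$, the set $\mathcal{F}(v)$ of fragments fully contained in $v^{\downarrow}$. Bottom-up counting of fragment nodes stands in for the distributed upcast; all names are ours.

```python
def fragments_in_subtrees(children, root, frag):
    """For every v, the frozenset of fragment ids fully contained in v's subtree.
    `frag` maps each node to its fragment id."""
    size = {}                       # fragment id -> number of nodes in it
    for f in frag.values():
        size[f] = size.get(f, 0) + 1
    contained = {}                  # v -> frozenset of fully contained fragment ids
    count = {}                      # v -> {fragment id: nodes of it in v's subtree}
    def dfs(v):
        c = {frag[v]: 1}
        for u in children.get(v, []):
            dfs(u)
            for f, k in count[u].items():
                c[f] = c.get(f, 0) + k
        count[v] = c
        # a fragment is contained in v's subtree iff all of its nodes appear there
        contained[v] = frozenset(f for f, k in c.items() if k == size[f])
    dfs(root)
    return contained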

### Step 3: Compute $\delta(v^{\downarrow})$.

For every fragment $F_i$, we let $\delta(F_i) = \sum_{u \in F_i} \delta(u)$. For every node $v$ in every fragment $F_i$, we will compute $\delta(v^{\downarrow})$ by separately computing (i) $\sum_{u \in v^{\downarrow} \cap F_i} \delta(u)$ and (ii) $\sum_{F_j \in \mathcal{F}(v)} \delta(F_j)$. The first quantity can be computed in $O(\sqrt{n})$ time by computing the sum within $F_i$ (every node sends the sum over its subtree piece to its parent). To compute the second quantity, it suffices to make every node know $\delta(F_j)$ for all $j$, since every node $v$ already knows $\mathcal{F}(v)$. To do this, we make every root $r(F_j)$ know $\delta(F_j)$ in $O(\sqrt{n})$ time by computing the sum of the degrees of the nodes within each $F_j$. Then, we can make every node know $\delta(F_j)$ for all $j$ by letting $r(F_j)$ broadcast $\delta(F_j)$ to the whole network.
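
Step 3 is, at its core, a subtree-sum aggregation: each fragment runs it on its own piece, and fragment totals are then broadcast. A centralized sketch of the aggregation itself (helper name is ours):

```python
def subtree_sums(children, root, value):
    """For every v, the sum of value[u] over all u in v's subtree."""
    total = dict(value)
    order = [root]
    for v in order:              # appending during iteration yields a top-down traversal
        order.extend(children.get(v, []))
    for v in reversed(order):    # reverse order processes children before parents
        for u in children.get(v, []):
            total[v] += total[u]
    return total
```

With `value` set to the weighted degrees $\delta(u)$, `subtree_sums` returns $\delta(v^{\downarrow})$ for every $v$; the same routine later serves for $\rho(v^{\downarrow})$.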

### Step 4: Compute Merging Nodes and $T'$.

We say that a node $v$ is a merging node if there are two distinct children $u_1$ and $u_2$ of $v$ such that both $u_1^{\downarrow}$ and $u_2^{\downarrow}$ contain some fragments (see Figure 1(a) for examples). In other words, it is a point where two fragments “merge”. Let $T'$ be the following tree: the nodes of $T'$ are the roots of fragments (the $r(F_i)$’s) and the merging nodes. The parent of each node in $T'$ is its lowest ancestor in $T$ that appears in $T'$ (see Figure 1(d) for an example). Note that every merging node has at least two children in $T'$; since $T'$ has $O(\sqrt{n})$ leaves (all of them fragment roots), this shows that there are $O(\sqrt{n})$ merging nodes. The goal of this step is to let every node know $T'$.

First, note that every node $v$ can easily know whether it is a merging node or not in one round by checking, for each child $u$, whether $u^{\downarrow}$ contains any fragment (i.e. whether $\mathcal{F}(u) \neq \emptyset$). The merging nodes then broadcast their IDs to the whole network. (This takes $O(\sqrt{n}+D)$ time since there are $O(\sqrt{n})$ merging nodes.) Note further that every node $v$ in $T'$ knows its parent in $T'$, because its parent in $T'$ is one of the ancestors in $A(v)$. So, we can make every node know $T'$ in $O(\sqrt{n}+D)$ rounds by letting every node in $T'$ broadcast the edge between itself and its parent in $T'$ to the whole network.
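
The structure built in Step 4 can be mimicked centrally as follows. This is a sketch with our own names: `frag` maps nodes to fragment ids, merging nodes are detected from subtree fragment counts, and $T'$ parents are found by walking up the tree.

```python
def skeleton_tree(children, root, parent, frag):
    """Return (merging nodes, fragment roots, T' parent pointers)."""
    size = {}                                    # fragment id -> fragment size
    for f in frag.values():
        size[f] = size.get(f, 0) + 1
    order = [root]
    for v in order:                              # top-down traversal
        order.extend(children.get(v, []))
    count = {v: {frag[v]: 1} for v in order}     # fragment nodes per subtree
    for v in reversed(order):                    # children before parents
        for u in children.get(v, []):
            for f, k in count[u].items():
                count[v][f] = count[v].get(f, 0) + k
    def holds_fragment(v):                       # does v's subtree contain a whole fragment?
        return any(k == size[f] for f, k in count[v].items())
    merging = {v for v in order
               if sum(1 for u in children.get(v, []) if holds_fragment(u)) >= 2}
    frag_roots = {v for v in order
                  if parent.get(v) is None or frag[parent[v]] != frag[v]}
    in_tp = merging | frag_roots                 # the node set of T'
    tp_parent = {}
    for v in in_tp:                              # parent in T' = lowest T'-ancestor in T
        u = parent.get(v)
        while u is not None and u not in in_tp:
            u = parent.get(u)
        if u is not None:
            tp_parent[v] = u
    return merging, frag_roots, tp_parent
```

On a small example where two single-fragment branches hang off the root, the root is the unique merging node and the parent of both fragment roots in $T'$.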

### Step 5: Compute $\rho(v^{\downarrow})$.

We now count, for every node $v$, the number of edges whose end-nodes’ least common ancestor (LCA) is $v$ (weights are handled by summing weights instead of counting). For every edge $uv$ in $G$, we claim that $u$ and $v$ can compute the LCA of $u$ and $v$, denoted by $\ell(uv)$, by exchanging $O(\sqrt{n})$ messages through the edge $uv$. Consider three cases (see Figure 1(e)). Case 1: First, consider when $u$ and $v$ are in the same fragment, say $F_i$. In this case we know that $\ell(uv)$ must be in $F_i$. Since $u$ and $v$ have the lists of their ancestors in $F_i$, they can find $\ell(uv)$ by exchanging these lists. In the next two cases we assume that $u$ and $v$ are in different fragments, say $F_u$ and $F_v$, respectively. Case 2: $\ell(uv)$ is in neither $F_u$ nor $F_v$. In this case, $\ell(uv)$ is a merging node such that $\ell(uv)^{\downarrow}$ contains both $F_u$ and $F_v$. Since both $u$ and $v$ know $T'$ and their ancestors in $T'$, they can find $\ell(uv)$ by exchanging the lists of their ancestors in $T'$. Case 3: $\ell(uv)$ is in $F_u$ (the case where $\ell(uv)$ is in $F_v$ can be handled in a similar way). In this case $\ell(uv)^{\downarrow}$ contains $F_v$. Since $u$ knows $\mathcal{F}(u')$ for all its ancestors $u' \in A(u)$, it can compute its lowest ancestor $u'$ such that $\mathcal{F}(u')$ contains $F_v$. Such an ancestor is the LCA of $u$ and $v$.

Now we compute $\rho(v)$ for every node $v$ by splitting the edges whose LCA is $v$ into two types (see Figure 1(f)): (i) those edges $xy$ such that $x$ and $y$ are both in fragments different from that of $\ell(xy)$, and (ii) the rest. For (i), note that $\ell(xy)$ must be a merging node. In this case one of $x$ and $y$ creates a message $\langle \ell(xy) \rangle$. We then count the number of messages of the form $\langle v \rangle$ for every merging node $v$ by computing the sum along a breadth-first search tree of $G$. This takes $O(\sqrt{n}+D)$ time since there are $O(\sqrt{n})$ merging nodes. For (ii), the node among $x$ and $y$ that is in the same fragment as $\ell(xy)$ creates and keeps a message $\langle \ell(xy) \rangle$. Now every node $v$ in every fragment $F_i$ counts the number of messages of the form $\langle v \rangle$ in $v^{\downarrow} \cap F_i$ by computing the sum through the tree $F_i$. Note that, to do this, every node has to send the number of messages of the form $\langle u \rangle$ to its parent, for every $u$ that is an ancestor of it in the same fragment. There are $O(\sqrt{n})$ such ancestors, so we can compute the number of messages of the form $\langle v \rangle$ for every node $v$ concurrently in $O(\sqrt{n})$ time (by pipelining). Having computed $\rho(v)$ for every node $v$, we obtain $\rho(v^{\downarrow})$ in the same way as $\delta(v^{\downarrow})$ in Step 3.
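
The bookkeeping of Step 5 can be sketched centrally as follows (our own names; weights are summed rather than counted): each edge is classified by where its LCA lies, its $\langle \ell(e) \rangle$ message is tallied in the corresponding bucket, and the buckets combine to $\rho(v)$ for every node. In the distributed version, the type (i) bucket is summed along a BFS tree of $G$ and the type (ii) bucket within a fragment.

```python
def rho_by_message_counting(edges, parent, frag):
    """rho(v) for every v with at least one LCA edge, via the two message types of Step 5."""
    def lca(u, v):
        anc = set()
        x = u
        while x is not None:
            anc.add(x)
            x = parent.get(x)
        while v not in anc:
            v = parent[v]
        return v
    global_msgs = {}   # type (i): LCA lies outside both endpoint fragments (a merging node)
    local_msgs = {}    # type (ii): LCA shares a fragment with one endpoint
    for (x, y, w) in edges:
        l = lca(x, y)
        if frag[l] not in (frag[x], frag[y]):
            global_msgs[l] = global_msgs.get(l, 0) + w
        else:
            local_msgs[l] = local_msgs.get(l, 0) + w
    rho = {}
    for bucket in (global_msgs, local_msgs):
        for v, c in bucket.items():
            rho[v] = rho.get(v, 0) + c
    return rho
```

On the small two-branch example from Step 4, a cross-branch edge lands in the global bucket of the merging node, while intra-fragment and root-incident edges stay local.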

### Acknowledgment:

The author would like to thank Thatchaphol Saranurak for bringing Thorup’s tree packing theorem [Tho07] to his attention.

## References

- [DHK11] Atish Das Sarma, Stephan Holzer, Liah Kor, Amos Korman, Danupon Nanongkai, Gopal Pandurangan, David Peleg, and Roger Wattenhofer. Distributed verification and hardness of distributed approximation. In STOC, pages 363–372, 2011.
- [EKNP14] Michael Elkin, Hartmut Klauck, Danupon Nanongkai, and Gopal Pandurangan. Can quantum communication speed up distributed computation? In PODC, 2014.
- [GK13] Mohsen Ghaffari and Fabian Kuhn. Distributed minimum cut approximation. In DISC, pages 1–15, 2013.
- [Kar94] David R. Karger. Random sampling in cut, flow, and network design problems. In STOC, pages 648–657, 1994.
- [Kar00] David R. Karger. Minimum cuts in near-linear time. J. ACM, 47(1):46–76, 2000.
- [KP98] Shay Kutten and David Peleg. Fast distributed construction of small k-dominating sets and applications. J. Algorithms, 28(1):40–66, 1998.
- [Pel00] David Peleg. Distributed Computing: A Locality-Sensitive Approach. SIAM Monographs on Discrete Mathematics and Applications, Philadelphia, 2000.
- [Su14] Hsin-Hao Su. Brief announcement: A distributed minimum cut approximation scheme. In SPAA, 2014.
- [Tho07] Mikkel Thorup. Fully-dynamic min-cut. Combinatorica, 27(1):91–127, 2007.
- [Thu97] Ramakrishna Thurimella. Sub-linear distributed algorithms for sparse certificates and biconnected components. J. Algorithms, 23(1):160–179, 1997.