
# Faster Exact and Approximate Algorithms for k-Cut

Anupam Gupta (CMU, anupamg@cs.cmu.edu). Supported in part by NSF awards CCF-1536002, CCF-1540541, and CCF-1617790.

Euiwoong Lee (NYU, euiwoong@cims.nyu.edu). Part of this work was done as a research fellow at the Simons Institute.

Jason Li (CMU, jmli@cs.cmu.edu). Supported in part by NSF awards CCF-1536002, CCF-1540541, and CCF-1617790.
###### Abstract

In the $k$-Cut problem, we are given an edge-weighted graph $G$ and an integer $k$, and have to remove a set of edges with minimum total weight so that $G$ has at least $k$ connected components. The current best algorithms are an $\tilde{O}(n^{2(k-1)})$ randomized algorithm due to Karger and Stein, and an $\tilde{O}(n^{2k})$ deterministic algorithm due to Thorup. Moreover, several $2$-approximation algorithms are known for the problem (due to Saran and Vazirani, Naor and Rabani, and Ravi and Sinha).

It has remained an open problem to (a) improve the runtime of exact algorithms, and (b) to get better approximation algorithms. In this paper we show an $\tilde{O}(k^{O(k)}\, n^{(2\omega/3 + o(1))k})$-time algorithm for $k$-Cut. Moreover, we show a $(1+\epsilon)$-approximation algorithm that runs in time $(k/\epsilon)^{O(k)}\, n^{k+O(1)}$, and a $1.81$-approximation in fixed-parameter time $2^{O(k^2)}\, n^{O(1)}$.

## 1 Introduction

In this paper we consider the $k$-Cut problem: given an edge-weighted graph $G = (V, E, w)$ and an integer $k$, delete a minimum-weight set of edges so that $G$ has at least $k$ connected components. This problem is a natural generalization of the global min-cut problem, where the goal is to break the graph into two pieces. The problem has been actively studied from the viewpoints of both exact and approximation algorithms, and each improvement has brought new insights and tools for graph cuts.

It is not a priori clear how to obtain poly-time algorithms for any constant $k$, since guessing one vertex from each part only reduces the problem to the NP-hard Multiway Cut problem. Indeed, the first result along these lines was the work of Goldschmidt and Hochbaum [GH94], who gave an $n^{O(k^2)}$-time exact algorithm for $k$-Cut. Since then, the exact exponent in terms of $k$ has been actively studied. The current best runtime is achieved by an $\tilde{O}(n^{2(k-1)})$ randomized algorithm due to Karger and Stein [KS96], which performs random edge contractions until the remaining graph has few nodes, and shows that the resulting cut is optimal with probability at least $\approx n^{-2(k-1)}$. The asymptotic runtime of $\tilde{O}(n^{2k})$ was later matched by a deterministic algorithm of Thorup [Tho08]. His algorithm was based on tree-packing theorems; it showed how to efficiently find a tree for which the optimal $k$-cut crosses it at most $2k-2$ times. Enumerating over all possible $2k-2$ edges of this tree gives the algorithm.
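To make the contraction idea concrete, here is a toy Python sketch of the basic weighted-contraction step (purely illustrative, with helper names of our choosing; this is the single-run contraction heuristic, not the recursive Karger–Stein algorithm): contract random edges, chosen with probability proportional to weight, until $k$ supernodes remain, and report the weight crossing the induced partition.

```python
import random

def contract_k_cut(edges, n, k, rng):
    """One run of weighted random contraction: contract edges (picked with
    probability proportional to weight) until k supernodes remain, then
    return the total weight crossing the induced k-way partition."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    while components > k:
        # candidate edges joining two distinct supernodes
        cand = [(u, v, w) for u, v, w in edges if find(u) != find(v)]
        u, v, _ = rng.choices(cand, weights=[w for _, _, w in cand], k=1)[0]
        parent[find(u)] = find(v)
        components -= 1
    return sum(w for u, v, w in edges if find(u) != find(v))

def min_k_cut_estimate(edges, n, k, trials, seed=0):
    """Best k-cut value found over many independent contraction runs."""
    rng = random.Random(seed)
    return min(contract_k_cut(edges, n, k, rng) for _ in range(trials))
```

Heavy intra-cluster edges are contracted first with high probability, which is why the light optimal cut tends to survive until the end.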

These elegant $\tilde{O}(n^{2k})$-time algorithms are the state-of-the-art, and it has remained an open question to improve on them. An easy observation is that the problem is closely related to $k$-Clique, so we may not expect the exponent of $n$ to go below $(\omega/3)k$. Given the interest in fine-grained analysis of algorithms, where in the range $[(\omega/3)k, 2k]$ does the correct answer lie?

Our main results give faster deterministic and randomized algorithms for the problem.

###### Theorem 1.1 (Faster Randomized Algorithm).

Let $W$ be a positive integer. There is a randomized algorithm for $k$-Cut on graphs with edge weights in $\{1, \ldots, W\}$ with runtime

$$\tilde{O}\Big(k^{O(k)}\, n^{\,k + \lfloor (k-2)/3 \rfloor \omega + 1 + ((k-2) \bmod 3)}\, W\Big) \;\approx\; O\Big(k^{O(k)}\, n^{(1 + \omega/3)k}\Big),$$

that succeeds with probability $1 - 1/\operatorname{poly}(n)$.
###### Theorem 1.2 (Even Faster Deterministic Algorithm).

Let $W$ be a positive integer. For any $\epsilon > 0$, there is a deterministic algorithm for exact $k$-Cut on graphs with edge weights in $\{1, \ldots, W\}$ with runtime

$$k^{O(k)}\, n^{(2\omega/3 + \epsilon)k + O(1)}\, W \;\approx\; O\Big(k^{O(k)}\, n^{(2\omega/3)k}\Big).$$

In the above theorems, $\omega$ is the matrix multiplication constant, and $\tilde{O}(\cdot)$ hides polylogarithmic terms. While the deterministic algorithm from Theorem 1.2 is asymptotically faster, the randomized algorithm is better for small values of $k$. Indeed, using the current best value of $\omega \le 2.373$ [LG14], Theorem 1.1 gives a randomized algorithm for exact $k$-Cut on unweighted graphs which improves upon the previous best $\tilde{O}(n^{2(k-1)})$-time algorithm of Karger and Stein for all $k \ge 8$. For $k \le 6$, faster algorithms were given by Levine [Lev00].
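The crossover point can be sanity-checked by comparing exponents directly (a small sketch using the exponent from Theorem 1.1 and the bound $\omega \le 2.373$; the helper names are ours):

```python
OMEGA = 2.373  # current best bound on the matrix-multiplication exponent [LG14]

def randomized_exponent(k):
    """Exponent of n in Theorem 1.1's runtime, ignoring k^O(k) factors."""
    return k + ((k - 2) // 3) * OMEGA + 1 + ((k - 2) % 3)

def karger_stein_exponent(k):
    """Exponent of n in the Karger-Stein bound ~O(n^(2(k-1)))."""
    return 2 * (k - 1)

# smallest k at which the new exponent beats Karger-Stein
crossover = min(k for k in range(3, 60)
                if randomized_exponent(k) < karger_stein_exponent(k))
```

Asymptotically, `randomized_exponent(k) / k` tends to $1 + \omega/3 \approx 1.79$, matching the approximate form of Theorem 1.1.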

##### Approximation algorithms.

The $k$-Cut problem has also received significant attention from the approximation algorithms perspective. There are several $2$-approximation algorithms that run in time $\operatorname{poly}(n)$ [SV95, NR01, RS08], and this factor cannot be improved assuming the Small Set Expansion Hypothesis [Man17]. Recently, we gave a $1.9997$-approximation algorithm that runs in FPT time $2^{O(k^6)}\, n^{O(1)}$ [GLL18]. In this current paper, we give a $(1+\epsilon)$-approximation algorithm for this problem that is much faster than the current best exact algorithms; prior to our work, nothing better was known for $(1+\epsilon)$-approximation than for exact solutions.

###### Theorem 1.3 (Approximation).

For any $\epsilon > 0$, there is a randomized (combinatorial) algorithm for $k$-Cut with runtime $(k/\epsilon)^{O(k)}\, n^{k+O(1)}$ on general graphs, that outputs a $(1+\epsilon)$-approximate solution with probability $1 - 1/\operatorname{poly}(n)$.

The techniques from the above theorem, combined with previous ideas from [GLL18], immediately give an improved FPT approximation guarantee for the $k$-Cut problem:

###### Theorem 1.4 (FPT Approximation).

There is a deterministic $1.81$-approximation algorithm for the $k$-Cut problem that runs in time $2^{O(k^2)}\, n^{O(1)}$.

##### Limitations.

Our exact algorithms raise the natural question: how fast can exact algorithms for $k$-Cut be? We give a simple reduction showing that while there is still room for improvement in the running time of exact algorithms, such improvements can only improve the constant in front of the $k$ in the exponent, assuming a popular conjecture on algorithms for the $k$-Clique problem.

###### Theorem 1.5 (Hardness).

Any exact algorithm for the $k$-Cut problem on graphs with edge weights in $\{1, \ldots, \operatorname{poly}(n)\}$ can be used to solve the $k$-Clique problem in the same runtime. Hence, assuming $k$-Clique cannot be solved in time faster than $n^{(\omega/3 - o(1))k}$, the same lower bound holds for the $k$-Cut problem.

### 1.1 Our Techniques

Our algorithms build on the approach pioneered by Thorup: using tree packings, he showed how to find a tree that crosses the optimal $k$-cut at most $2k-2$ times. (We call such a tree a Thorup tree, or T-tree.) Brute-force search over which $2k-2$ edges to delete from the T-tree (and how to combine the resulting parts together) then gives an $\tilde{O}(n^{2k})$-time deterministic algorithm. This last step, however, raises a natural question: having found such a T-tree, can we use the structure of the $k$-Cut problem to beat brute force? Our algorithms answer the question in the affirmative, in several different ways. The main ideas behind our algorithms are dynamic programming and fast matrix multiplication, carefully combined with the fixed-parameter tractable algorithm technique of color-coding, and with random sampling in general.

##### Fast matrix multiplication.

Our idea to apply fast matrix multiplication starts with the crucial observation that if (i) the T-tree is “tight” and crosses the optimal $k$-cut only $k-1$ times, and (ii) these $k-1$ edges are “incomparable”, i.e., no two of them lie on a common root-leaf path, then the problem of finding these edges can be modeled as a max-weight clique-like problem! (And hence we can use matrix-multiplication ideas to speed up their computation.) An important property of this special case is that choosing an edge $e$ to cut fixes one component in the $k$-Cut solution: by incomparability, the subtree below $e$ cannot be cut any more. The cost of a $k$-cut can be determined by the weight of edges between each pair of components (just as being a clique is determined by pairwise connectivity), so this case can be solved via an algorithm similar to that for $k$-Clique.

##### Randomized algorithm.

Our randomized algorithm removes these two assumptions step by step. First, while the above intuition crucially relies on assumption (ii), we give a more sophisticated dynamic program, using color-coding schemes, for the case where the edges are not incomparable. Moreover, to remove assumption (i), we show a randomized reduction that, given a tree that crosses the optimal cut as many as $2k-2$ times, finds a “tight” tree with only $k-1$ crossings (which is the least possible), at the expense of a factor of $k^{O(k)}\, n^{k-1}$ in the runtime. Note that guessing which edges to delete is easily done in $n^{k-1}$ time, but adding edges to regain connectivity while not increasing the number of crossings can naively take a factor of $n^{\Omega(k)}$ more time. We lose only a factor $k^{O(k)}$ using our random-sampling based algorithm, using the fact that in an optimal $k$-Cut, a split cluster should have more weight of edges going between its own parts than to other clusters.

##### Deterministic algorithm.

The deterministic algorithm proceeds along a different direction and removes both assumptions (i) and (ii) at once. We show that by deleting some carefully chosen edges from the T-tree $T$, we can break it into three forests such that we only need to delete about $(2k-2)/3$ edges from each of these forests. Such a deletion is not possible when $T$ is a star, but appropriately extending $T$ by introducing Steiner nodes admits this deletion. (And this bound is tight in this extension.) For each forest, there are about $n^{(2/3)k}$ ways to cut these edges, and once a choice of edges is made, the forest will not be cut any more. This property allows us to bypass (ii) and establish the desired pairwise relationships between the choices of edges to delete in two forests. Indeed, we set up a tripartite graph where each part corresponds to the choices of which edges to cut in one forest, and the cost of the min $k$-cut is the weight of the min-weight triangle, which we find efficiently using fast matrix multiplication. Some technical challenges arise because some components for some forests may contain only Steiner vertices, but we overcome these problems using color-coding.

##### Approximation schemes.

The $(1+\epsilon)$-approximation algorithm again uses the $k^{O(k)}\, n^{k-1}$-time randomized reduction, so that we have to cut exactly $k-1$ edges from a “tight” T-tree $T$. An exact dynamic program for this problem takes $n^{\Omega(k)}$ time, as it should, since even this tight case captures clique: when $T$ is a star, these $k-1$ edges are incomparable. And again, we need to handle the case where these edges are not incomparable. For the former problem, we replace the problem of finding cliques by approximately finding “partial vertex covers” instead. (In this new problem we find a set of $k$ vertices that minimizes the total number of edges incident to them.) Secondly, in the DP we cannot afford to maintain the “boundary” of up to $k-1$ subtrees explicitly any more. We show how to maintain an “$\epsilon$-net” of nodes, so that carefully “rounding” the DP table to only track a small $(k/\epsilon)^{O(k)}$-sized set of these rounded subproblems incurs only a $(1+\epsilon)$-factor loss in quality.

Our approximate DP technique turns out to be useful for getting a $1.81$-approximation for $k$-Cut in FPT time, improving on our previous approximation factor of $1.9997$ [GLL18]. In particular, the Laminar $k$-Cut problem from [GLL18] also has a tight T-tree structure, and hence we can use (a special case of) our approximate DP algorithm to get a $(1+\epsilon)$-approximation for laminar cut, instead of the weaker constant factor previously known. Combining this with other ideas from the previous paper gives us the $1.81$-approximation.

### 1.2 Related Work

The first non-trivial exact algorithm for the $k$-Cut problem was by Goldschmidt and Hochbaum, who gave an $n^{O(k^2)}$-time algorithm [GH94]; this is somewhat surprising because the related Multiway Cut problem is NP-hard even for $3$ terminals. They also proved the problem to be NP-hard when $k$ is part of the input. Karger and Stein improved this to an $\tilde{O}(n^{2(k-1)})$-time randomized Monte-Carlo algorithm using the idea of random edge contractions [KS96]. Thorup improved upon the deterministic algorithm of Kamidoi et al. [KYN07] with an $\tilde{O}(n^{2k})$-time deterministic algorithm based on tree packings [Tho08]. Better algorithms are known for small values of $k$ [NI92, HO94, BG97, Kar00, NI00, NKI00, Lev00].

##### Approximation algorithms.

The first such result for $k$-Cut was a $2(1 - 1/k)$-approximation of Saran and Vazirani [SV95]. Later, Naor and Rabani [NR01], and also Ravi and Sinha [RS08], gave $2$-approximation algorithms using tree packing and network strength, respectively. Xiao et al. [XCY11] extended Kapoor [Kap96] and Zhao et al. [ZNI01] and generalized Saran and Vazirani to give a $(2 - h/k)$-approximation in time $n^{O(h)}$. On the hardness front, Manurangsi [Man17] showed that for any $\epsilon > 0$, it is NP-hard to achieve a $(2-\epsilon)$-approximation in time polynomial in $n$ and $k$, assuming the Small Set Expansion Hypothesis.

In recent work [GLL18], we gave a $1.9997$-approximation for $k$-Cut in FPT time $2^{O(k^6)}\, n^{O(1)}$; this does not contradict Manurangsi’s work, since $k$ is polynomial in $n$ for his hard instances. We improve that guarantee to $1.81$ by getting a better approximation ratio for the “laminar” $k$-cut subroutine. This follows as a special case of the techniques we develop in §4; the rest of the ideas in this current paper are orthogonal to those in [GLL18].

##### FPT algorithms.

Kawarabayashi and Thorup gave the first algorithm for unweighted graphs that is fixed-parameter tractable in the size of the optimal cut [KT11]. Chitnis et al. [CCH16] used a randomized color-coding idea to give a better runtime, and to extend the algorithm to weighted graphs. Here, the FPT algorithm is parameterized by the cardinality of edges in the optimal $k$-Cut, not by the number of parts $k$. For more details on FPT algorithms and approximations, see the book [CFK15], and the survey [Mar07].

### 1.3 Preliminaries

For a graph $G = (V, E)$ with edge weights $w$, consider some collection of disjoint sets $S_1, \ldots, S_r \subseteq V$. Let $E(S_1, \ldots, S_r)$ (or $E_G(S_1, \ldots, S_r)$ when the graph must be explicit) denote the set of edges in $G[S_1 \cup \cdots \cup S_r]$ (i.e., among the edges both of whose endpoints lie in these sets) whose endpoints belong to different sets $S_i$. For any vertex set $S$, let $\partial S$ denote the edges with exactly one endpoint in $S$; hence $\partial S = E(S, V - S)$. For a collection of edges $F$, let $w(F)$ be the sum of weights of edges in $F$. In particular, for a $k$-Cut solution with components $S_1, \ldots, S_k$, the value of the solution is $w(E(S_1, \ldots, S_k))$.

For a rooted tree $T$, let $T_v$ denote the subtree of $T$ rooted at $v$. For an edge $e$ with child vertex $v$ (the endpoint of $e$ farther from the root), let $T_e := T_v$; we often identify a subtree with its vertex set. Finally, for any set $S \subseteq V$, $G[S]$ denotes the subgraph of $G$ induced by $S$.

For some sections, we make no assumptions on the edge weights of $G$, while in other sections we will assume that all edge weights in $G$ are integers in $\{1, \ldots, W\}$, for a fixed positive integer $W$. We default to the former unrestricted case, and explicitly mention the transition to the latter case when needed.
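The notation above can be made concrete with two small helpers (a sketch; the function names are of our choosing, and edges are triples $(u, v, w)$):

```python
def cut_weight(edges, parts):
    """w(E(S1,...,Sr)): total weight of edges whose endpoints both lie in
    the disjoint sets `parts`, but in two different sets."""
    where = {v: i for i, part in enumerate(parts) for v in part}
    return sum(w for u, v, w in edges
               if u in where and v in where and where[u] != where[v])

def boundary_weight(edges, S):
    """w(boundary of S) = w(E(S, V - S)): total weight of edges with
    exactly one endpoint in S."""
    S = set(S)
    return sum(w for u, v, w in edges if (u in S) != (v in S))
```

For a bipartition $(S, V - S)$ the two quantities coincide, matching the identity $\partial S = E(S, V - S)$.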

## 2 A Fast Randomized Algorithm

In this section, we present a randomized algorithm to solve $k$-Cut exactly in time $\tilde{O}(k^{O(k)}\, n^{(1+\omega/3)k}\, W)$, proving Theorem 1.1. Section 2.1 introduces our high-level ideas based on Thorup’s tree packing results. Section 2.2 shows how to refine Thorup’s tree to a good tree that crosses the optimal $k$-cut exactly $k-1$ times, and Section 2.3 presents an algorithm given such a good tree.

### 2.1 Thorup’s Tree Packing and Thorup’s Algorithm

Our starting point is a transformation from the general $k$-Cut problem to a problem on trees, inspired by Thorup’s algorithm [Tho08] based on greedy tree packings. We will be interested in trees that cross the optimal partition only a few times. We fix an optimal $k$-Cut solution, with parts $S_1^*, \ldots, S_k^*$. Let $\mathrm{OPT} := E(S_1^*, \ldots, S_k^*)$ be the set of edges in the solution, so that $w(\mathrm{OPT})$ is the solution value.

###### Definition 2.1 (T-trees).

A tree $T$ of $G$ is an $\alpha$-T-tree if it crosses the optimal cut at most $\alpha$ times; i.e., $|E(T) \cap \mathrm{OPT}| \le \alpha$. If $\alpha = 2k-2$, we often drop the quantification and simply call it a T-tree. If $\alpha = k-1$, the minimum value possible, then we call it a tight T-tree.

Our first step is the same as in [Tho08]: we compute a collection of trees such that there exists a T-tree among them, i.e., a tree that crosses $\mathrm{OPT}$ at most $2k-2$ times.

###### Theorem 2.2 ([Tho08], Theorem 1).

For $k > 1$, let $\mathcal{T}$ be a greedy tree packing with at least $k m^7 \log^3 m$ trees. Then, on the average, the trees of $\mathcal{T}$ cross each minimum $k$-cut less than $2k-2$ times. Furthermore, the greedy tree packing algorithm takes $\tilde{O}(k m^8)$ time.

The running time comes from the execution of $|\mathcal{T}|$ minimum spanning tree computations. Note that, since our results are only interesting for moderately large $k$, resulting in algorithms of running time $n^{\Omega(k)}$, we can completely ignore the running time of the greedy tree packing algorithm, which is only run once. Letting $\mathcal{T}$ be such a greedy tree packing, we get the following corollary.

###### Corollary 2.3.

We can find a collection $\mathcal{T}$ of polynomially many trees such that for a uniformly random tree $T \in \mathcal{T}$, $\mathbb{E}\big[|E(T) \cap \mathrm{OPT}|\big] < 2k-2$. In particular, there exists a T-tree $T \in \mathcal{T}$.

In other words, once we choose such a T-tree $T$, we get the following problem: find the best way to cut at most $2k-2$ edges of $T$, and then merge the resulting connected components into exactly $k$ components, so that the total weight of the edges of $G$ crossing the final partition is minimized. Thorup’s algorithm accomplishes this task using brute force: try all possible ways to cut and merge, and output the best one. This gives a runtime of $\tilde{O}(n^{2k})$, or even slightly better with a more careful analysis [Tho08]. The natural question is: can we do better than brute force?

For the min-cut problem (when $k = 2$), Karger was able to speed up this step to near-linear time using dynamic tree data structures [Kar00]. However, this case is special: since at most three components are produced from cutting the (at most two) tree edges, only one pair of components ever needs to be merged. For larger values of $k$, it is not clear how to generalize the use of clever data structures to handle multiple merges.

Our randomized algorithm gets the improvement in three steps:

• First, instead of trying all possible trees $T \in \mathcal{T}$, we only look at a random subset of the trees. By Corollary 2.3 and Markov’s inequality, the probability that a random tree $T \in \mathcal{T}$ satisfies $|E(T) \cap \mathrm{OPT}| \le 2k-2$ is $\Omega(1/k)$. Therefore, by trying $O(k \log n)$ random trees, we find a T-tree w.h.p.

• Next, given a T-tree $T$ from above, we show how to find a collection of $k^{O(k)}\, n^{k-1} \log n$ trees such that, with high probability, one of these trees is a tight T-tree, i.e., it intersects $\mathrm{OPT}$ in exactly $k-1$ edges. We show this in §2.2.

• Finally, given a tight T-tree from the previous step, we show how to solve the optimal $k$-Cut in time $k^{O(k)}\, \tilde{O}(n^{\lceil (k-1)/3 \rceil \omega + O(1)}\, W)$, much like the $O(n^{(\omega/3)k})$-time algorithm for the $k$-Clique problem [NP85]. The runtime is not coincidental; the hardness of $k$-Cut derives from $k$-Clique, and hence techniques for the former must work also for the latter. We show this in §2.3.

### 2.2 A Small Collection of “Tight” Trees

In this section we show how to find a collection of trees such that, with high probability, one of these trees is a tight T-tree. Formally,

###### Lemma 2.4.

There is an algorithm that takes as input a tree $T$ such that $|E(T) \cap \mathrm{OPT}| \le 2k-2$, and produces a collection of $k^{O(k)}\, n^{k-1} \log n$ trees, such that one of the new trees $T'$ satisfies $|E(T') \cap \mathrm{OPT}| = k-1$ w.p. $1 - 1/\operatorname{poly}(n)$. The algorithm runs in time $k^{O(k)}\, \tilde{O}(n^{k-1} m)$.

The algorithm proceeds by iterations. In each iteration, our goal is to remove one edge of $T$ and then add another edge back in, so that the result is still a tree. In doing so, the value of $|E(T) \cap \mathrm{OPT}|$ can either decrease by $1$, stay the same, or increase by $1$. We call an iteration successful if $|E(T) \cap \mathrm{OPT}|$ decreases by $1$. Throughout the iterations, we will always refer to $T$ as the current tree, which may be different from the original tree. Finally, if $|E(T) \cap \mathrm{OPT}| = k-1+t$ initially, then after $t$ consecutive successful iterations, we have the desired tight T-tree.

Assume we know $t$ beforehand; we can easily discharge this assumption later. For an intermediate tree $T$ in the algorithm, we say that component $S_i^*$ is unsplit if $S_i^*$ induces exactly one connected component in $T$ (i.e., $T[S_i^*]$ is connected), and split otherwise. Initially, there are at most $t$ split components, possibly fewer if some components induce many components in $T$. Moreover, if all iterations are successful, all components are unsplit at the end.

###### Lemma 2.5.

The probability of any iteration being successful, i.e., reducing the number of tree edges belonging to the optimal cut, is at least $\Omega(1/(n k^2))$.

###### Proof.

Each successful iteration has two parts: first we must delete a “deletion-worthy” edge (which happens with probability $\ge 1/(n-1)$), and then we add a “good” connecting edge (which happens with probability $\Omega(1/k^2)$). The former just uses that a tree has $n-1$ edges, but the latter must use the fact that there are many good edges crossing the resulting cut; a naive analysis may only give an inverse-polynomial-in-$n$ bound for the second part.

We first describe the edges in $T$ that we would like to delete. These are the edges such that if we delete one of them, then we are likely to make a successful iteration (after selectively adding an edge back in). We call these edges deletion-worthy. Let us first root the tree at an arbitrary, fixed root $r$. For any edge $e$, let $T_e$ denote the subtree below it obtained by deleting the edge $e$.

###### Definition 2.6.

A deletion-worthy edge $e \in E(T)$ satisfies the following two properties:

• The edge crosses between two parts of the optimal partition, i.e., $e \in \mathrm{OPT}$.

• There is exactly one part $S_i^*$ satisfying $S_i^* \cap T_e \neq \emptyset$ and $S_i^* \not\subseteq T_e$. In other words, exactly one component of the optimal partition intersects $T_e$ but is not completely contained in $T_e$. Note that, by condition (1), this $S_i^*$ is necessarily split.

###### Claim 2.7.

If there is a split component $S_i^*$, there exists a deletion-worthy edge in $T$.

###### Proof.

For each $i$, contract every connected component of $T[S_i^*]$, so that split components contract to multiple vertices. Root the resulting tree at (the contraction of the component containing) $r$, and take a vertex $u$ of maximum depth whose corresponding component belongs to a split part. It is easy to see that $u$ is not the root, and that the parent edge of $u$ in the rooted tree is deletion-worthy. ∎

Finally, we describe the deletion part of our algorithm. The procedure is simple: choose a uniformly random edge of $T$ to delete. With probability at least $1/(n-1)$, we remove a deletion-worthy edge of $T$. This gives rise to the $1/n$ factor in the probability of a successful iteration.

Now we show that, conditioned on deleting a deletion-worthy edge $e$, we can selectively add an edge to produce a successful iteration with probability $\Omega(1/k^2)$. In particular, we add a random edge in $E(T_e, V - T_e)$, i.e., an edge from the subtree under $e$ to the rest of the vertices, where each edge is chosen with probability proportional to its weight. We show that this makes the iteration successful with probability $\Omega(1/k^2)$. (Recall that the iteration is successful if the number of tree edges lying in the optimal cut decreases by $1$.)

First of all, it is clear that adding any edge in $E(T_e, V - T_e)$ will get back a tree. Next, to lower bound the probability of success, we begin with an auxiliary claim.

###### Claim 2.8.

Given a set of components $S_1, \ldots, S_{k+1}$ that partition $V$, we have

$$w(\mathrm{OPT}) \;\le\; \left(1 - \binom{k+1}{2}^{-1}\right) \cdot w\big(E_G(S_1, \ldots, S_{k+1})\big).$$
###### Proof.

Consider merging two of the $k+1$ components uniformly at random, which yields a $k$-cut. Every edge in $E_G(S_1, \ldots, S_{k+1})$ has probability $\binom{k+1}{2}^{-1}$ of disappearing from the cut, so the expected weight of the new cut is

$$\left(1 - \binom{k+1}{2}^{-1}\right) \cdot w\big(E_G(S_1, \ldots, S_{k+1})\big),$$

and $w(\mathrm{OPT})$ can only be smaller. ∎
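The averaging step in this proof is easy to verify numerically: each inter-part edge disappears in exactly one of the $\binom{k+1}{2}$ possible merges, so the average merged-cut weight is exactly $\big(1 - \binom{k+1}{2}^{-1}\big)$ times the total, and the minimum (hence $w(\mathrm{OPT})$) is at most the average. A small sketch (helper names ours):

```python
from itertools import combinations

def cut_weight(edges, parts):
    where = {v: i for i, part in enumerate(parts) for v in part}
    return sum(w for u, v, w in edges if where[u] != where[v])

def merge_check(edges, parts):
    """Return (average, minimum, claimed bound) of the cut weight over all
    ways to merge two of the given parts into one."""
    vals = []
    for i, j in combinations(range(len(parts)), 2):
        merged = [p for t, p in enumerate(parts) if t not in (i, j)]
        merged.append(parts[i] | parts[j])
        vals.append(cut_weight(edges, merged))
    r = len(parts) * (len(parts) - 1) // 2   # C(k+1, 2) pairs
    bound = (1 - 1 / r) * cut_weight(edges, parts)
    return sum(vals) / len(vals), min(vals), bound
```

On any instance the computed average matches the bound exactly, which is the content of Claim 2.8.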

For convenience, define $C := S_i^* \cap T_e$, where $S_i^*$ is the split component corresponding to the deletion-worthy edge $e$ we just deleted. Observe that the only edges in $E(T_e, V - T_e)$ that are not in $\mathrm{OPT}$ must be in $E(C, S_i^* - C)$; this is because, of the components intersecting $T_e$, only $S_i^*$ is not completely contained in $T_e$. Therefore,

$$w\big(E(T_e, V - T_e)\big) \;\le\; w(\mathrm{OPT}) + w\big(E(C, S_i^* - C)\big),$$

and the probability of selecting an edge in $E(C, S_i^* - C)$ is

$$\frac{w\big(E(C, S_i^* - C)\big)}{w\big(E(T_e, V - T_e)\big)} \;\ge\; \frac{w\big(E(C, S_i^* - C)\big)}{w(\mathrm{OPT}) + w\big(E(C, S_i^* - C)\big)}. \tag{2.1}$$

###### Claim 2.9.

$w\big(E(C, S_i^* - C)\big) \;\ge\; \binom{k+1}{2}^{-1} \cdot w(\mathrm{OPT})$.

###### Proof.

The set of edges $\mathrm{OPT} \cup E(C, S_i^* - C)$ cuts the graph into at least $k+1$ components. Claim 2.8 implies this set has total weight at least $w(\mathrm{OPT}) \big/ \big(1 - \binom{k+1}{2}^{-1}\big)$. Observing that the edge sets $\mathrm{OPT}$ and $E(C, S_i^* - C)$ are disjoint from each other completes the proof. ∎

Using the above claim in (2.1), the probability of selecting an edge in $E(C, S_i^* - C)$ is at least $\binom{k+1}{2}^{-1} \big/ \big(1 + \binom{k+1}{2}^{-1}\big) = \Omega(1/k^2)$. Hence the probability of an iteration being successful is $\Omega(1/(n k^2))$, completing the proof of Lemma 2.5. ∎

Since we have $t \le k-1$ iterations, the probability that each of them is successful is at least $\Omega(1/(n k^2))^{k-1}$. If we repeat this algorithm $k^{O(k)}\, n^{k-1} \log n$ times, then with probability $1 - 1/\operatorname{poly}(n)$, one of the final trees will satisfy $|E(T') \cap \mathrm{OPT}| = k-1$. We can remove the assumption of knowing $t$ by trying all possible values $t \in \{0, 1, \ldots, k-1\}$, giving a collection of $k^{O(k)}\, n^{k-1} \log n$ trees in running time $k^{O(k)}\, \tilde{O}(n^{k-1} m)$. This completes the proof of Lemma 2.4.
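One delete-and-reconnect iteration of the above procedure can be sketched as follows (a simplified illustration with names of our choosing; it deletes a uniformly random tree edge and then adds back a crossing edge of $G$ sampled proportionally to weight, exactly as in the analysis above):

```python
import random

def improvement_iteration(n, graph_edges, tree_edges, rng):
    """One iteration: delete a uniformly random tree edge e, then
    reconnect by an edge of G crossing (T_e, V - T_e), sampled with
    probability proportional to its weight. Always returns a spanning tree."""
    e = rng.choice(sorted(tree_edges))
    remaining = set(tree_edges) - {e}
    # component of one endpoint of e after the deletion
    adj = {v: [] for v in range(n)}
    for u, v in remaining:
        adj[u].append(v)
        adj[v].append(u)
    side, stack = {e[0]}, [e[0]]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in side:
                side.add(y)
                stack.append(y)
    crossing = [(u, v, w) for u, v, w in graph_edges
                if (u in side) != (v in side)]
    u, v, _ = rng.choices(crossing, weights=[w for _, _, w in crossing], k=1)[0]
    return remaining | {(min(u, v), max(u, v))}
```

Any crossing edge reconnects the two sides, so the output is always a spanning tree; the analysis shows that with probability $\Omega(1/(nk^2))$ the swap also reduces the number of tree edges in $\mathrm{OPT}$.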

### 2.3 Solving k-Cut on “Tight” Trees

In the previous section, we found a collection of trees such that, with high probability, the intersection of one of these trees with the optimal $k$-cut consists of only $k-1$ edges. In this section, we show that given this tree, we can find the optimal $k$-cut in time $k^{O(k)}\, \tilde{O}(n^{\lceil (k-1)/3 \rceil \omega + O(1)}\, W)$. This will follow from Lemma 2.10 below. In this section, we restrict the edge weights of our graph to be positive integers in $\{1, \ldots, W\}$.

###### Lemma 2.10.

There is an algorithm that takes a tree $T$ and outputs, from among all partitions $S_1, \ldots, S_k$ of $V$ that satisfy $|E(T) \cap E(S_1, \ldots, S_k)| = k-1$, a partition minimizing the weight of the inter-cluster edges $w(E(S_1, \ldots, S_k))$, in time $k^{O(k)}\, \tilde{O}\big(n^{\lceil (k-1)/3 \rceil \omega + O(1)}\, W\big)$.

Given a tree $T$ and a set $F \subseteq E(T)$ of tree edges, deleting these edges gives us a vertex partition $S_1, \ldots, S_{|F|+1}$. Let $\mathrm{Cut}(F)$ be the set of edges in $G$ that go between the clusters in this partition; i.e.,

$$\mathrm{Cut}(F) := E(S_1, \ldots, S_{|F|+1}). \tag{2.2}$$

Put another way, these are the edges $(u, v) \in E(G)$ such that the unique $u$-$v$ path in $T$ contains an edge in $F$. Note that Lemma 2.10 seeks a set $F$ of size $k-1$ that minimizes $w(\mathrm{Cut}(F))$.
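Under this definition, $\mathrm{Cut}(F)$ is straightforward to compute: delete $F$ from the tree, form the components with union-find, and collect the graph edges whose endpoints land in different components. A sketch (names ours):

```python
def cut_of(tree_edges, F, graph_edges, n):
    """Cut(F): edges of G whose endpoints lie in different components of
    the tree after the edge set F is deleted (equivalently, whose tree
    path contains an edge of F)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # union the endpoints of every surviving tree edge
    for u, v in tree_edges:
        if (u, v) not in F and (v, u) not in F:
            parent[find(u)] = find(v)
    return [(u, v, w) for u, v, w in graph_edges if find(u) != find(v)]
```

Deleting $|F| = k-1$ tree edges yields exactly $k$ components, so $w(\mathrm{Cut}(F))$ is the value of the corresponding $k$-cut.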

#### 2.3.1 A Simple Case: Incomparable Edges

Our algorithm builds upon the algorithm of Nešetřil and Poljak [NP85] for $k$-Clique, using fast matrix multiplication to obtain the speedup over the naive brute-force algorithm. It is instructive to first consider a restricted setting that highlights the similarity between the two algorithms. This setting is as follows: we are given a vertex $r$ and the promise that if the input tree $T$ is rooted at $r$, then the optimal $k-1$ edges to delete are incomparable. By incomparable, we mean that any root-leaf path in $T$ contains at most one edge of the set.
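Incomparability is easy to test directly: identify each tree edge with its lower (child) endpoint, and check that no such endpoint is an ancestor of another. A sketch (names ours):

```python
def incomparable(tree_adj, root, edges):
    """Check that the given tree edges are pairwise incomparable, i.e. no
    root-leaf path contains two of them. Two edges are comparable exactly
    when one edge's child endpoint is an ancestor of the other's."""
    parent = {root: None}
    order = [root]
    for x in order:                      # BFS to record parents
        for y in tree_adj[x]:
            if y not in parent:
                parent[y] = x
                order.append(y)

    def child_end(e):                    # the endpoint farther from the root
        u, v = e
        return v if parent[v] == u else u

    def is_ancestor(a, b):               # is a an ancestor of (or equal to) b?
        while b is not None:
            if b == a:
                return True
            b = parent[b]
        return False

    lows = [child_end(e) for e in edges]
    return not any(is_ancestor(lows[i], lows[j]) or is_ancestor(lows[j], lows[i])
                   for i in range(len(lows)) for j in range(i + 1, len(lows)))
```

For a star rooted at its center every pair of edges is incomparable, which is exactly the clique-like special case discussed below.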

Like the algorithm of [NP85], our algorithm creates an auxiliary graph $H$ on $O(n^{\lceil (k-1)/3 \rceil})$ nodes. Our graph construction differs slightly in that it always produces a tripartite graph, and in that this graph has edge weights. In this auxiliary graph, we will call the vertices nodes in order to differentiate them from the vertices of the tree.

• The nodes in graph $H$ will form a tripartition $P_1 \cup P_2 \cup P_3$. For each $a \in \{1, 2, 3\}$, define $r_a$ so that $r_1 + r_2 + r_3 = k-1$ and each $r_a \le \lceil (k-1)/3 \rceil$. For each $a$, let $\mathcal{F}_a$ be the family of all sets of exactly $r_a$ edges in $E(T)$ that are pairwise incomparable in $T$. For each $a$ and each $F \in \mathcal{F}_a$, add a node $v^a_F$ to $P_a$ representing set $F$.

• Consider a pair of parts $P_a, P_b$ in the tripartition with $(a, b) \in \{(1,2), (2,3), (3,1)\}$. Consider a pair of sets $F_a = \{e^a_1, \ldots, e^a_{r_a}\} \in \mathcal{F}_a$ and $F_b = \{e^b_1, \ldots, e^b_{r_b}\} \in \mathcal{F}_b$; recall these are sets of $r_a$ and $r_b$ incomparable edges in $T$. If the edges in $F_a$ are also pairwise incomparable with the edges in $F_b$, then add an edge of weight

$$w_H(v^a_{F_a}, v^b_{F_b}) := \sum_{i=1}^{r_a} w\big(E(T_{e^a_i}, V - T_{e^a_i})\big) \;-\; \sum_{i=1}^{r_a} \sum_{j=i+1}^{r_a} w\big(E(T_{e^a_i}, T_{e^a_j})\big) \;-\; \sum_{i=1}^{r_a} \sum_{j=1}^{r_b} w\big(E(T_{e^a_i}, T_{e^b_j})\big).$$

Observe that every triple of nodes in graph $H$ that form a triangle together represent $k-1$ pairwise incomparable edges. Moreover, the weights are set up so that for any triangle $(v^1_{F_1}, v^2_{F_2}, v^3_{F_3})$ such that $F_1 \cup F_2 \cup F_3 = \{e_1, \ldots, e_{k-1}\}$, the total weight of the edges is equal to

$$w_H(v^1_{F_1}, v^2_{F_2}) + w_H(v^2_{F_2}, v^3_{F_3}) + w_H(v^3_{F_3}, v^1_{F_1}) = \sum_{i=1}^{k-1} w\big(E(T_{e_i}, V - T_{e_i})\big) - \sum_{i=1}^{k-1} \sum_{j=i+1}^{k-1} w\big(E(T_{e_i}, T_{e_j})\big). \tag{2.3}$$

A straightforward counting argument shows that this is exactly $w(\mathrm{Cut}(F_1 \cup F_2 \cup F_3))$, the value of the solution cutting the edges in $F_1 \cup F_2 \cup F_3$.

Hence, the problem reduces to computing a minimum-weight triangle in graph $H$. While the minimum-weight triangle problem is unlikely to admit a truly subcubic algorithm on a graph with $N$ nodes and arbitrary edge weights, the problem does admit an $\tilde{O}(M N^\omega)$-time algorithm when the graph has integral edge weights in the range $\{-M, \ldots, M\}$ [WW10]. Since the original graph has integral edge weights in $\{1, \ldots, W\}$, the edge weights in $H$ must lie in the range $\{-n^2 W, \ldots, n^2 W\}$. Therefore, we can set $N = O(n^{\lceil (k-1)/3 \rceil})$ and $M = n^2 W$ to obtain an $\tilde{O}(n^{\lceil (k-1)/3 \rceil \omega + 2}\, W)$-time algorithm in this restricted setting.
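The reduction target can be stated concretely: given the three weight matrices of the tripartite graph $H$, find the minimum-weight triangle. The cubic baseline below is a sketch (names ours); the speedup described above evaluates the same minimum via fast matrix multiplication when the weights are small integers [WW10]:

```python
import math

def min_weight_triangle(w12, w23, w31):
    """Minimum of w12[i][j] + w23[j][k] + w31[k][i] over all node triples
    (i, j, k) of the tripartition; math.inf marks a missing edge (an
    incompatible pair of node sets). Cubic baseline only."""
    best = math.inf
    for i in range(len(w12)):
        for j in range(len(w23)):
            if w12[i][j] == math.inf:
                continue
            for k in range(len(w31)):
                best = min(best, w12[i][j] + w23[j][k] + w31[k][i])
    return best
```

With $N$ nodes per part this takes $O(N^3)$ additions, whereas the integer-weight algorithm of [WW10] brings the exponent down to $\omega$.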

#### 2.3.2 The General Algorithm

Now we prove Lemma 2.10 in full generality, and show how to find the minimizing set $F$. The ideas we use here combine the matrix-multiplication idea from the restricted case of incomparable edges with dynamic programming.

Given a tree edge $e$, and an integer $s \ge 0$, let $\mathrm{State}(e, s)$ denote a set of edges $F$ in the subtree $T_e$ such that $F \subseteq E(T_e)$ with $|F| = s$, and $w(\mathrm{Cut}(\{e\} \cup F))$ is minimized; we write $w(\mathrm{State}(e, s))$ for this minimum value.

In other words, $\mathrm{State}(e, s)$ represents the optimal way to cut edge $e$ along with $s$ more edges in $T_e$. For ease of presentation, we assume that this minimizer is unique. Observe that, once all of these states are computed, the remaining problem boils down to choosing an integer $\ell$, integers $s_1, \ldots, s_\ell$ whose sum is $k-1-\ell$, and incomparable edges $e_1, \ldots, e_\ell$ that minimize

$$w\bigg(\mathrm{Cut}\Big(\bigcup_{i=1}^{\ell} \big(\{e_i\} \cup \mathrm{State}(e_i, s_i)\big)\Big)\bigg) \;=\; \sum_{i=1}^{\ell} w\big(\mathrm{State}(e_i, s_i)\big) \;-\; \sum_{i=1}^{\ell} \sum_{j=i+1}^{\ell} w\big(E(T_{e_i}, T_{e_j})\big).$$

Comparing this expression to (2.3) suggests that this problem is similar to the incomparable case in §2.3.1, a connection to be made precise later.

We now compute the states for all edges $e \in E(T)$, which we do from bottom to top (leaf to root). When $e$ is a leaf edge, the states are straightforward: $\mathrm{State}(e, 0) = \emptyset$, and $\mathrm{State}(e, s)$ is undefined for $s \ge 1$. Also, for each edge $e$, define $D_e$ to be all “descendant edges” of $e$, formally defined as all edges whose path to the root contains edge $e$.

Fix an edge $e$ and an integer $s$, for which we want to compute $\mathrm{State}(e, s)$. Suppose we order the edges of $T$ in an arbitrary but fixed order. Let us now figure out some properties of this (unknown) set $\mathrm{State}(e, s)$. As a thought experiment, let $e^\dagger_1, \ldots, e^\dagger_{\ell^\dagger}$ be the list of all the “maximal” edges in $\mathrm{State}(e, s)$; in other words, $e' \in \{e^\dagger_1, \ldots, e^\dagger_{\ell^\dagger}\}$ iff $e' \in \mathrm{State}(e, s)$ and $e' \notin D_{e''}$ for all $e'' \in \mathrm{State}(e, s) \setminus \{e'\}$. Let $e^\dagger_1, \ldots, e^\dagger_{\ell^\dagger}$ be this sequence in the defined order, and for each $i$, let $s^\dagger_i := |\mathrm{State}(e, s) \cap D_{e^\dagger_i}| - 1$. Observe that $\sum_{i=1}^{\ell^\dagger} (1 + s^\dagger_i) = s$, and that we must satisfy

$$\mathrm{State}(e, s) = \bigcup_{i=1}^{\ell^\dagger} \Big(\{e^\dagger_i\} \cup \mathrm{State}(e^\dagger_i, s^\dagger_i)\Big). \tag{2.4}$$

Also,

$$w\big(\mathrm{State}(e, s)\big) = w\big(E(T_e, V - T_e)\big) + \sum_{i=1}^{\ell^\dagger} w\Big(E(G[T_e]) \cap \mathrm{Cut}\big(\{e^\dagger_i\} \cup \mathrm{State}(e^\dagger_i, s^\dagger_i)\big)\Big) - \sum_{i=1}^{\ell^\dagger} \sum_{j=i+1}^{\ell^\dagger} w\big(E_{G[T_e]}(T_{e^\dagger_i}, T_{e^\dagger_j})\big),$$

since the only edges double-counted in the first summation are those connecting different subtrees $T_{e^\dagger_i}$.

Given these “ideal” values $\ell^\dagger$ and $s^\dagger_1, \ldots, s^\dagger_{\ell^\dagger}$, our algorithm repeats the following procedure multiple times:

• Pick a number $\ell$ uniformly at random in $\{1, \ldots, s\}$. Then, let a function $\sigma : \{1, \ldots, \ell\} \to \{0, \ldots, s-1\}$ be chosen uniformly at random among all possible functions satisfying $\sum_{i=1}^{\ell} (1 + \sigma(i)) = s$. With probability $k^{-O(k)}$, we correctly guess $\ell = \ell^\dagger$ and $\sigma(i) = s^\dagger_i$ for each $i$. (Of course, we could instead brute-force over all possible choices of $\ell$ and $\sigma$.)

• Construct an auxiliary graph $H$ as follows. As in §2.3.1, $H$ has a tripartition $P_1 \cup P_2 \cup P_3$, and assume there is an arbitrary but fixed total ordering on the edges of the tree. For each $a \in \{1, 2, 3\}$, define $r_a$ so that $r_1 + r_2 + r_3 = \ell$ and each $r_a \le \lceil \ell/3 \rceil$. For each $a$, let $\mathcal{F}_a$ be the family of all sets of exactly $r_a$ edges in $E(T_e)$ that are pairwise incomparable in $T$. For each $a$ and each $F \in \mathcal{F}_a$, add a node $v^a_F$ to $P_a$ representing the edges of $F$ as a sequence in the total order.

Also, define $R_a := r_1 + \cdots + r_{a-1}$ for $a \in \{1, 2, 3\}$. Note that $R_1 = 0$ and $R_3 + r_3 = \ell$. Our intention is to map the integer values $\{1, \ldots, \ell\}$ to the sequences represented by nodes in $H$, as we will see later. Consider each tripartition pair $(P_a, P_b)$ with $(a, b) \in \{(1,2), (2,3), (3,1)\}$. For each pair $F_a \in \mathcal{F}_a$, $F_b \in \mathcal{F}_b$, represented as ordered sequences $(e^a_1, \ldots, e^a_{r_a})$ and $(e^b_1, \ldots, e^b_{r_b})$, if the edges in $F_a$ are pairwise incomparable with the edges in $F_b$, then add an edge in the auxiliary graph of weight

$$w_H(v^a_{F_a}, v^b_{F_b}) := \sum_{i=1}^{r_a} w\big(\mathrm{State}(e^a_i, \sigma(R_a + i))\big) - \sum_{i=1}^{r_a} \sum_{j=i+1}^{r_a} w\big(E_{G[T_e]}(T_{e^a_i}, T_{e^a_j})\big) - \sum_{i=1}^{r_a} \sum_{j=1}^{r_b} w\big(E_{G[T_e]}(T_{e^a_i}, T_{e^b_j})\big). \tag{2.5}$$

For any triangle $(v^1_{F_1}, v^2_{F_2}, v^3_{F_3})$ such that $F_1 \cup F_2 \cup F_3$ has ordered sequence $(e_1, \ldots, e_\ell)$, the total weight of the edges is equal to

$$w_H(v^1_{F_1}, v^2_{F_2}) + w_H(v^2_{F_2}, v^3_{F_3}) + w_H(v^3_{F_3}, v^1_{F_1}) = \sum_{i=1}^{\ell} w\big(\mathrm{State}(e_i, \sigma(i))\big) - \sum_{i=1}^{\ell} \sum_{j=i+1}^{\ell} w\big(E_{G[T_e]}(T_{e_i}, T_{e_j})\big). \tag{2.6}$$

A straightforward counting argument shows that this is exactly

$$w\bigg(\mathrm{Cut}\Big(\{e\} \cup \bigcup_{i=1}^{\ell} \big(\{e_i\} \cup \mathrm{State}(e_i, \sigma(i))\big)\Big)\bigg) - w\big(E(T_e, V - T_e)\big).$$

Thus, the weight of each triangle, with $w(E(T_e, V - T_e))$ added to it, corresponds to the cut value of one possible solution for $\mathrm{State}(e, s)$. Moreover, if we guess $\ell$ and $\sigma$ correctly, then the triangle corresponding to the true $\mathrm{State}(e, s)$ exists in the auxiliary graph $H$, and we compute the correct state by finding a minimum-weight triangle in $\tilde{O}(n^{\lceil (k-1)/3 \rceil \omega + 2}\, W)$ time. Since the probability of guessing correctly is $k^{-O(k)}$, we repeat the guessing $k^{O(k)} \log n$ times to succeed w.h.p. This concludes the computation of each $\mathrm{State}(e, s)$; since there are $O(nk)$ such states, the total running time becomes $k^{O(k)}\, \tilde{O}(n^{\lceil (k-1)/3 \rceil \omega + 3}\, W)$.

Lastly, to compute the final $k$-Cut value, we set $s = k-1$ and construct the same auxiliary graph $H$, except that the subtree $T_e$ is replaced by the whole tree $T$ and the relevant graph becomes the entire $G$. By the same counting arguments, the weight of a triangle $(v^1_{F_1}, v^2_{F_2}, v^3_{F_3})$ such that $F_1 \cup F_2 \cup F_3$ has ordered sequence $(e_1, \ldots, e_\ell)$ is exactly

$$w\bigg(\mathrm{Cut}\Big(\bigcup_{i=1}^{\ell} \big(\{e_i\} \cup \mathrm{State}(e_i, \sigma(i))\big)\Big)\bigg).$$

Again, by repeating the procedure $k^{O(k)} \log n$ times, we compute an optimal $k$-Cut w.h.p. Note that this final step is dominated by the running time of computing the states.

In order to get the runtime claimed in Theorem 1.1, we need a couple more ideas; however, they can be skipped on a first reading, and we defer them to Appendix C.

## 3 A Faster Deterministic Algorithm

In this section, we show how to build on the randomized algorithm of the previous section and improve it in two ways: we give a deterministic algorithm, and it has a better asymptotic runtime. (The algorithm of the previous section retains a better runtime for smaller values of $k$.) Formally, the main theorem of this section is the following:

###### Theorem 1.2 (Even Faster Deterministic Algorithm, restated).

Let $W$ be a positive integer. For any $\epsilon > 0$, there is a deterministic algorithm for exact $k$-Cut on graphs with edge weights in $\{1, \ldots, W\}$ with runtime $k^{O(k)}\, n^{(2\omega/3 + \epsilon)k + O(1)}\, W$.

Our main idea is a more direct application of matrix multiplication that avoids the $n^{k-1}$-factor overhead paid in the previous section. Instead of converting a given T-tree to a “tight” tree on which matrix multiplication can be combined with dynamic programming, incurring that overhead, we partition the given T-tree into subforests that are amenable to a direct matrix-multiplication approach.

As in §