Large Cuts with Local Algorithms on
Triangle-Free Graphs
Juho Hirvonen
Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science, Aalto University, Finland
juho.hirvonen@aalto.fi
Joel Rybicki
Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science, Aalto University, Finland
joel.rybicki@aalto.fi
Stefan Schmid
TU Berlin & T-Labs, Germany
stefan@net.t-labs.tu-berlin.de
Jukka Suomela
Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science, Aalto University, Finland
jukka.suomela@aalto.fi
Abstract. We study the problem of finding large cuts in $d$-regular triangle-free graphs. In prior work, Shearer (1992) gives a randomised algorithm that finds a cut of expected size $(1/2 + 0.177/\sqrt{d})m$, where $m$ is the number of edges. We give a simpler algorithm that does much better: it finds a cut of expected size $(1/2 + 0.28125/\sqrt{d})m$. As a corollary, this shows that in any $d$-regular triangle-free graph there exists a cut of at least this size.
Our algorithm can be interpreted as a very efficient randomised distributed algorithm: each node needs to produce only one random bit, and the algorithm runs in one synchronous communication round. This work is also a case study of applying computational techniques in the design of distributed algorithms: our algorithm was designed by a computer program that searched for optimal algorithms for small values of $d$.
1 Introduction
We study the problem of finding large cuts in triangle-free graphs. In particular, we are interested in the design of fast and simple randomised distributed algorithms.
1.1 Random Cuts
Let $G = (V, E)$ be a simple undirected graph. A cut is a function $c \colon V \to \{0, 1\}$ that labels the nodes with symbols $0$ and $1$. An edge $\{u, v\} \in E$ is a cut edge if $c(u) \ne c(v)$. We use the convention that the weight of a cut is the fraction of edges that are cut edges; that is, the weight of the cut is normalised so that it is in the range $[0, 1]$. See Figure 1 for an illustration.
While the problem of finding a maximum cut (or a good approximation of one) is NP-hard [4, 12, 5, 16, 7], there is a very simple randomised algorithm that finds a relatively large cut: for each node $v \in V$, pick $c(v) \in \{0, 1\}$ independently and uniformly at random. We say that $c$ is a uniform random cut.
In a uniform random cut, each edge is a cut edge with probability $1/2$. It follows that the expected weight of a uniform random cut is also $1/2$.
1.2 Regular Triangle-Free Graphs
In general graphs, we cannot expect to find cuts that are much better than uniform random cuts. For example, in a complete graph on $n$ nodes, the weight of any cut is at most $1/2 + 1/(2n-2)$.
However, there is a family of graphs that makes for a much more interesting case from the perspective of the max-cut problem: regular triangle-free graphs. Erdős [2] raised the problem of estimating the minimum possible size of a maximum cut in a high-girth graph, and especially the case of triangle-free graphs attracted much interest from the research community [15, 13, 1].
Accordingly, from now on, we assume that $G$ is a $d$-regular graph for some constant $d$, and that there are no triangles (cycles of length three) in $G$. While focusing on regular triangle-free graphs may seem overly restrictive, our algorithm can be applied in a much more general setting; we will briefly discuss extensions in Section 3.
1.3 Shearer’s Algorithm
In triangle-free graphs, it is easy to find cuts that are (in expectation) larger than uniform random cuts. Nevertheless, a uniform random cut is a good starting point.
Shearer’s [15] algorithm proceeds as follows. Pick three uniform random cuts $c_1$, $c_2$, and $c_3$. For each node $v$, let
$k(v) = \bigl|\{u \in N(v) : c_1(u) = c_1(v)\}\bigr|$
be the number of like-minded neighbours in $c_1$. Then the output of a node $v$ is
(1) $c(v) = c_1(v)$ if $k(v) < d/2$, $\;c(v) = c_2(v)$ if $k(v) > d/2$, and $c(v) = c_3(v)$ if $k(v) = d/2$.
Put otherwise, a node follows $c_1$ if it seems that there are many cut edges w.r.t. $c_1$ in its immediate neighbourhood, and it falls back to another cut $c_2$ otherwise. The value $c_3(v)$ is just used as a random tiebreaker. Shearer showed that in $d$-regular triangle-free graphs the resulting cut has expected weight at least
(2) $\frac{1}{2} + \frac{0.177}{\sqrt{d}}.$
1.4 Our Algorithm
Shearer’s algorithm can be characterised as follows: take a uniform random cut $c_1$ and then improve it with the help of the randomised rule described in (1). In this work, we show that we can do much better with the help of a simple deterministic rule.
In our algorithm we pick one uniform random cut $c$. Again, each node $v$ counts the number of like-minded neighbours
$k(v) = \bigl|\{u \in N(v) : c(u) = c(v)\}\bigr|.$
We define the threshold
(3) $\tau(d) = \bigl\lceil (d + \sqrt{d})/2 \bigr\rceil.$
Now the output of a node $v$ is simply
(4) $A(v) = c(v)$ if $k(v) < \tau(d)$, and $A(v) = \bar{c}(v)$ otherwise.
Here $\bar{c}$ is the complement of $c$, that is, $\bar{0} = 1$ and $\bar{1} = 0$. In the algorithm each node simply changes its mind if it seems that there are too many like-minded neighbours.
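To make the one-round rule concrete, the following is a small illustrative sketch (not part of the original analysis): each node flips its random bit if it has at least a threshold number of like-minded neighbours, and the exact expected cut weight of a small graph is computed by enumerating all random cuts. The graph (the cycle $C_8$, which is 2-regular and triangle-free) and the threshold value $\tau = 2$ are chosen here just for the demonstration.

```python
import itertools
from fractions import Fraction

def one_round_cut(adj, c, tau):
    # Node v flips its random bit iff it has at least tau like-minded neighbours.
    return [1 - c[v] if sum(c[u] == c[v] for u in adj[v]) >= tau else c[v]
            for v in range(len(adj))]

def expected_weight(adj, tau):
    # Exact expected fraction of cut edges, by enumerating all 2^n random cuts.
    n = len(adj)
    edges = [(u, v) for u in range(n) for v in adj[u] if u < v]
    total = Fraction(0)
    for c in itertools.product((0, 1), repeat=n):
        out = one_round_cut(adj, c, tau)
        total += Fraction(sum(out[u] != out[v] for u, v in edges), len(edges))
    return total / 2 ** n

# The cycle C_8 is 2-regular and triangle-free.
cycle = [[(v - 1) % 8, (v + 1) % 8] for v in range(8)]
print(expected_weight(cycle, 2))  # 3/4, well above the 1/2 of a uniform random cut
```

Note that the expectation is computed exactly with rational arithmetic, so the outcome is an exact per-edge probability rather than a Monte Carlo estimate.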
It is not obvious that such a rule makes sense, or that this particular choice of $\tau(d)$ is good. Nevertheless, we show in this work that the expected weight of cut (4) is at least
(5) $\frac{1}{2} + \frac{0.28125}{\sqrt{d}},$
which is much larger than Shearer’s bound (2), at least in low-degree graphs. As a corollary, any $d$-regular triangle-free graph admits a cut of at least this size.
Our algorithm can be implemented very efficiently in a distributed setting: each node only needs to produce one random bit, and the algorithm only requires one communication round. In Shearer’s algorithm each node has to produce up to three random bits.
Perhaps the most interesting feature of the algorithm is that it was not designed by a human being: it was discovered by a computer program. Indeed, cuts in triangle-free graphs serve as an example of a computational problem in which computer-aided methods can be used to partially automate algorithm design and analysis (this process is also known as “algorithm synthesis” or “protocol synthesis”). There is a wide range of other graph problems in which a similar approach has a lot of potential as a shortcut to the discovery of new distributed algorithms.
2 Algorithm Design and Analysis
We begin this section with an informal overview of so-called neighbourhood graphs. The formal definitions that we use in this work are given after that.
2.1 Neighbourhood Graphs in Prior Work
In the context of distributed systems, the radius-$r$ neighbourhood of a node $v$ refers to all information that node $v$ may gather in $r$ communication rounds. Depending on the model of computation that we use, this may include all nodes that are within distance $r$ from $v$, the edges incident to these nodes, their local inputs, and the random bits that these nodes have generated. The idea is that whatever decision node $v$ takes, it can only depend on its radius-$r$ neighbourhood: any distributed algorithm that runs in $r$ communication rounds can be interpreted as a mapping from local neighbourhoods to local outputs.
A neighbourhood graph is a graph representation of all possible radius-$r$ neighbourhoods that a distributed algorithm may encounter. Each node $\sigma$ of the neighbourhood graph corresponds to a possible local neighbourhood: there is at least one communication network in which some node has a local neighbourhood isomorphic to $\sigma$. We have an edge $\{\sigma_1, \sigma_2\}$ in the neighbourhood graph if there is some communication network in which nodes with local neighbourhoods $\sigma_1$ and $\sigma_2$ are adjacent; see Figure 2 for an example.
Neighbourhood graphs are a convenient concept in the study of graph colouring algorithms, both from the perspective of traditional algorithm design [10, 11, 6, 9, 3] and from the perspective of computational algorithm design [14]. The key observation is that the following two statements are equivalent:
1. $A$ is a proper colouring of the neighbourhood graph,
2. $A$ is a distributed algorithm that finds a proper colouring in $r$ rounds.
To see this, consider any graph $G$. If nodes $u$ and $v$ are adjacent in $G$, then their local views $\sigma_u$ and $\sigma_v$ are adjacent in the neighbourhood graph, and by assumption $A$ assigns a different colour to $\sigma_u$ and $\sigma_v$. Hence distributed algorithm $A$ finds a proper colouring of $G$. Conversely, if algorithm $A$ finds a proper colouring in any communication network, it defines a proper colouring of the neighbourhood graph.
In summary, colourings of the neighbourhood graph correspond to distributed algorithms for graph colouring, and vice versa. In general, a similar property does not hold for arbitrary graph problems. For example, there is no one-to-one correspondence between maximal independent sets of the neighbourhood graph and distributed algorithms that find maximal independent sets [14, Section 8.5].
However, as we will see in this work, we can use neighbourhood graphs also in the context of the maximum cut problem. It turns out that we can define a weighted version of neighbourhood graphs, so that there is a one-to-one correspondence between heavy cuts in the weighted neighbourhood graph and randomised distributed algorithms that find large cuts in expectation.
2.2 Model of Distributed Computing
Next, we formalise the model of distributed computing that is sufficient for the purposes of our algorithm. Fix the parameter $d$; recall that we are interested in $d$-regular triangle-free graphs. Let $G = (V, E)$ be such a graph, and let $c$ be a uniform random cut in $G$. The local neighbourhood of a node $v \in V$ is the pair $(c(v), k(v))$, where
$k(v) = \bigl|\{u \in N(v) : c(u) = c(v)\}\bigr|$
is the number of neighbours with the same random bit. Note that there are only $2(d+1)$ possible local neighbourhoods.
A distributed algorithm is a function $A$ that associates an output $A(\sigma) \in \{0, 1\}$ with each local neighbourhood $\sigma$. For any $d$-regular triangle-free graph $G$, function $A$ defines a randomised process that produces a random cut as follows:
1. Pick a uniform random cut $c$.
2. For each node $v \in V$, let the output of $v$ be $A(c(v), k(v))$.
We use the notation $A(G)$ for the random cut produced by algorithm $A$ in graph $G$. In particular, we are interested in the quantity $W(A, G)$, the expected weight of cut $A(G)$.
A priori, we might expect that $W(A, G)$ would depend on the structure of $G$. However, as we will soon see, this is not the case: the expected weight only depends on parameter $d$ and algorithm $A$.
2.3 Weighted Neighbourhood Graph
A weighted digraph is a pair $H = (V_H, w)$ with $w \colon V_H \times V_H \to [0, \infty)$. Here $V_H$ is the set of nodes, and $w$ associates a nonnegative weight $w(\sigma_1, \sigma_2)$ with each directed edge $(\sigma_1, \sigma_2)$. Let $A \colon V_H \to \{0, 1\}$ be a cut in weighted digraph $H$. The weight of cut $A$ is
$w(A) = \sum_{\sigma_1, \sigma_2 \,:\, A(\sigma_1) \ne A(\sigma_2)} w(\sigma_1, \sigma_2),$
the total weight of all cut edges.
The weighted neighbourhood graph $N_d = (V_d, w_d)$ is a weighted digraph defined as follows (see Figure 3 for an illustration). The set of nodes
$V_d = \{0, 1\} \times \{0, 1, \dots, d\}$
consists of all possible neighbourhoods $\sigma = (x, j)$ that we may encounter in $d$-regular triangle-free graphs. We define the edge weights as follows:
$w_d\bigl((x_1, j_1), (x_2, j_2)\bigr) = \frac{1}{4} \cdot 2^{-2(d-1)} \binom{d-1}{j_1 - 1} \binom{d-1}{j_2 - 1}$ if $x_1 = x_2$,
$w_d\bigl((x_1, j_1), (x_2, j_2)\bigr) = \frac{1}{4} \cdot 2^{-2(d-1)} \binom{d-1}{j_1} \binom{d-1}{j_2}$ if $x_1 \ne x_2$.
We follow the convention that $\binom{d-1}{j} = 0$ for $j < 0$ and $j > d - 1$.
Note that the weights are symmetric, $w_d(\sigma_1, \sigma_2) = w_d(\sigma_2, \sigma_1)$, and the total weight of all edges is $1$. The following lemma shows that the weight of the edge $(\sigma_1, \sigma_2)$ in the neighbourhood graph equals the probability of “observing” adjacent neighbourhoods of types $\sigma_1$ and $\sigma_2$; see Figure 4. Note that the probability does not depend on the choice of graph $G$ or edge $\{u, v\}$.
Lemma 1.
Let $G$ be a $d$-regular triangle-free graph, and let $\{u, v\}$ be an edge of $G$. Consider a uniform random cut $c$ of $G$. Then for any given neighbourhoods $\sigma_1, \sigma_2 \in V_d$ we have
$\Pr\bigl[(c(u), k(u)) = \sigma_1 \text{ and } (c(v), k(v)) = \sigma_2\bigr] = w_d(\sigma_1, \sigma_2).$
Proof.
In what follows, we will denote the neighbours of $u$ by $v$ and $u_1, u_2, \dots, u_{d-1}$. Similarly, the neighbours of $v$ are $u$ and $v_1, v_2, \dots, v_{d-1}$. As $G$ is triangle-free, the sets $\{u_1, \dots, u_{d-1}\}$ and $\{v_1, \dots, v_{d-1}\}$ are disjoint. In particular, the random bits of the nodes $u$, $v$, $u_1, \dots, u_{d-1}$, and $v_1, \dots, v_{d-1}$ are independent.
Let $\sigma_1 = (x_1, j_1)$ and $\sigma_2 = (x_2, j_2)$. There are two cases. First assume that $x_1 = x_2$. Then $v$ itself is one of the $j_1$ like-minded neighbours of $u$, and hence exactly $j_1 - 1$ of the nodes $u_1, \dots, u_{d-1}$ must agree with $u$; similarly, exactly $j_2 - 1$ of the nodes $v_1, \dots, v_{d-1}$ must agree with $v$. Therefore the probability of the event is
$\frac{1}{4} \cdot \binom{d-1}{j_1 - 1} 2^{-(d-1)} \cdot \binom{d-1}{j_2 - 1} 2^{-(d-1)} = w_d(\sigma_1, \sigma_2).$
Second, assume that $x_1 \ne x_2$. Then $u$ and $v$ are not like-minded, and exactly $j_1$ of the nodes $u_1, \dots, u_{d-1}$ and exactly $j_2$ of the nodes $v_1, \dots, v_{d-1}$ must agree with $u$ and $v$, respectively. Therefore the probability of the event is
$\frac{1}{4} \cdot \binom{d-1}{j_1} 2^{-(d-1)} \cdot \binom{d-1}{j_2} 2^{-(d-1)} = w_d(\sigma_1, \sigma_2).$ ∎
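The weights of the neighbourhood graph are easy to check mechanically. The following sketch (an illustration, not part of the original analysis) tabulates $w_d$ as binomial products, with the convention that out-of-range binomial coefficients are zero, and verifies that the weights are symmetric and sum to $1$; the pair encoding $(x, j)$ of a neighbourhood follows Section 2.2, and the exact binomial-product form of the weights is an assumption carried over from Lemma 1.

```python
from fractions import Fraction
from math import comb

def binom(n, k):
    # Binomial coefficient with the convention binom(n, k) = 0 for k < 0 or k > n.
    return comb(n, k) if 0 <= k <= n else 0

def neighbourhood_graph(d):
    # Nodes are pairs (x, j): random bit x, number j of like-minded neighbours.
    nodes = [(x, j) for x in (0, 1) for j in range(d + 1)]
    def w(s1, s2):
        (x1, j1), (x2, j2) = s1, s2
        # If the endpoints agree, the edge partner itself is like-minded,
        # so only j - 1 of the d - 1 remaining neighbours need to agree.
        s = 1 if x1 == x2 else 0
        return Fraction(binom(d - 1, j1 - s) * binom(d - 1, j2 - s), 4 ** d)
    return nodes, w

for d in range(2, 7):
    nodes, w = neighbourhood_graph(d)
    assert all(w(a, b) == w(b, a) for a in nodes for b in nodes)  # symmetry
    assert sum(w(a, b) for a in nodes for b in nodes) == 1        # total weight 1
print("ok")
```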
2.4 Cuts in Neighbourhood Graphs
Any function $A \colon V_d \to \{0, 1\}$ can be interpreted in two ways:
1. A cut of weight $w_d(A)$ in the weighted neighbourhood graph $N_d$.
2. A distributed algorithm that finds a cut in any $d$-regular triangle-free graph: the algorithm picks a uniform random cut $c$, and then node $v$ outputs $A(c(v), k(v))$.
The following lemma shows that the two interpretations are closely related: if $A$ is a cut of weight $w_d(A)$ in neighbourhood graph $N_d$, then it immediately gives us a distributed algorithm that finds a cut of expected weight $w_d(A)$ in any $d$-regular triangle-free graph.
Lemma 2.
If $A$ is a cut in neighbourhood graph $N_d$, and $G$ is a $d$-regular triangle-free graph, then $W(A, G) = w_d(A)$.
Proof.
Consider an edge $\{u, v\}$ of $G$. By Lemma 1, the probability that $\{u, v\}$ is a cut edge is
$\sum \bigl\{ w_d(\sigma_1, \sigma_2) : A(\sigma_1) \ne A(\sigma_2) \bigr\} = w_d(A).$
By linearity of expectation, the expected fraction of edges that are cut edges is also $w_d(A)$. ∎
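For small cases, the correspondence of Lemma 2 can also be verified exhaustively. The sketch below computes the weight of a cut in $N_2$ and the exact expected cut weight of the corresponding distributed algorithm on a concrete 2-regular triangle-free graph. The particular cut (flip iff $j \ge 2$) and the choice of graph (the cycle $C_8$) are illustrative assumptions, and the weights of $N_2$ are again written as binomial products in the form of Lemma 1.

```python
import itertools
from fractions import Fraction
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

d = 2
def w(s1, s2):
    # Weights of the neighbourhood graph N_2 (binomial products of Lemma 1).
    (x1, j1), (x2, j2) = s1, s2
    s = 1 if x1 == x2 else 0
    return Fraction(binom(d - 1, j1 - s) * binom(d - 1, j2 - s), 4 ** d)

# A cut of N_2: node (x, j) outputs x if j < 2, and the complement otherwise.
A = {(x, j): 1 - x if j >= 2 else x for x in (0, 1) for j in range(d + 1)}
cut_in_N = sum(w(a, b) for a in A for b in A if A[a] != A[b])

# Exact expected weight of the same algorithm on the cycle C_8.
adj = [[(v - 1) % 8, (v + 1) % 8] for v in range(8)]
edges = [(u, v) for u in range(8) for v in adj[u] if u < v]
total = Fraction(0)
for c in itertools.product((0, 1), repeat=8):
    out = [A[c[v], sum(c[u] == c[v] for u in adj[v])] for v in range(8)]
    total += Fraction(sum(out[u] != out[v] for u, v in edges), len(edges))
expected = total / 2 ** 8

print(cut_in_N, expected)  # the two quantities agree, as Lemma 2 predicts
assert cut_in_N == expected
```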
2.5 Computational Algorithm Design
Now we have all the tools that we need. Lemma 2 gives a one-to-one correspondence between large cuts of the neighbourhood graph and distributed algorithms that find large cuts. For any fixed value of $d$, the task of designing a distributed algorithm is now straightforward:
1. Construct the weighted neighbourhood graph $N_d$.
2. Find a heavy cut $A$ in $N_d$.
See Figure 5 for an example. For , the heaviest cut of is
(6) 
This is also the best possible algorithm for this value of $d$, for the model of computing that we defined in Section 2.2.
Remark 1.
Of course finding a maximum-weight cut is hard in the general case. However, in this particular case neighbourhood graphs are relatively small (only $2(d+1)$ nodes).
While the smallest cases could be easily solved with brute force, slightly more refined approaches are helpful for moderate values of $d$. We took the following approach. First, we reduced the max-weight-cut instance to a max-weight-SAT instance in a straightforward manner:
1. For each node $\sigma \in V_d$ we have a Boolean variable $x_\sigma$ in formula $\varphi$.
2. For each edge $(\sigma_1, \sigma_2)$ of weight $w = w_d(\sigma_1, \sigma_2) > 0$ we have two clauses in formula $\varphi$, both of weight $w$:
$(x_{\sigma_1} \lor x_{\sigma_2})$ and $(\bar{x}_{\sigma_1} \lor \bar{x}_{\sigma_2}).$
Note that at least one of these clauses is always satisfied, while both of them are satisfied if and only if $x_{\sigma_1}$ and $x_{\sigma_2}$ have different values.
Now it is easy to see that a variable assignment of $\varphi$ that maximises the total weight of satisfied clauses also gives a maximum-weight cut in $N_d$: let $A(\sigma) = 1$ iff $x_\sigma$ is true. More precisely, the total weight of the clauses satisfied by $A$ is $W + w_d(A)$, where $W$ is the total weight of all edges.
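The satisfied-weight identity can be checked for all assignments in the smallest case. A sketch follows; the clauses are represented implicitly rather than in a solver input format, and the edge weights of $N_2$ are an assumption written as the binomial products of Lemma 1.

```python
import itertools
from fractions import Fraction
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

d = 2
nodes = [(x, j) for x in (0, 1) for j in range(d + 1)]
def w(s1, s2):
    (x1, j1), (x2, j2) = s1, s2
    s = 1 if x1 == x2 else 0
    return Fraction(binom(d - 1, j1 - s) * binom(d - 1, j2 - s), 4 ** d)

edges = [(a, b) for a in nodes for b in nodes if w(a, b) > 0]
W_total = sum(w(a, b) for a, b in edges)

def satisfied_weight(assign):
    # Two soft clauses per edge, each of weight w(a, b):
    # (x_a or x_b) and (not x_a or not x_b).
    total = Fraction(0)
    for a, b in edges:
        if assign[a] or assign[b]:
            total += w(a, b)
        if not assign[a] or not assign[b]:
            total += w(a, b)
    return total

def cut_weight(assign):
    return sum(w(a, b) for a, b in edges if assign[a] != assign[b])

# Satisfied clause weight = total edge weight + weight of the cut, always.
for bits in itertools.product((False, True), repeat=len(nodes)):
    assign = dict(zip(nodes, bits))
    assert satisfied_weight(assign) == W_total + cut_weight(assign)
print("ok")
```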
With this reduction, we can then resort to off-the-shelf max-weight-SAT solvers. In our experiments we used the akmaxsat solver [8]; with it we can solve the cases very quickly (e.g., the case on a low-end laptop in less than 5 seconds).
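For the very smallest case, brute force suffices and already reveals the structure reported below. The sketch enumerates all $2^{2(d+1)}$ cuts of $N_3$ (weights again written as the binomial products of Lemma 1, an assumption of this illustration) and checks that a threshold cut with $\tau = 3$, the value given in Table 1, is among the heaviest.

```python
import itertools
from fractions import Fraction
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

d = 3
nodes = [(x, j) for x in (0, 1) for j in range(d + 1)]
def w(s1, s2):
    (x1, j1), (x2, j2) = s1, s2
    s = 1 if x1 == x2 else 0
    return Fraction(binom(d - 1, j1 - s) * binom(d - 1, j2 - s), 4 ** d)

def cut_weight(A):
    return sum(w(a, b) for a in nodes for b in nodes if A[a] != A[b])

# Exhaustive search over all 2^8 cuts of N_3.
best = max(cut_weight(dict(zip(nodes, bits)))
           for bits in itertools.product((0, 1), repeat=len(nodes)))

# Threshold cut: output x if j < tau, and the complement otherwise.
tau = 3
threshold = {(x, j): x if j < tau else 1 - x for (x, j) in nodes}
print(best, cut_weight(threshold))  # the threshold cut attains the maximum
assert best == cut_weight(threshold)
```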
Surprisingly, in all cases the max-weight cut $A$ has the following simple structure:
(7) $A(x, j) = x$ if $j < \tau$, and $A(x, j) = \bar{x}$ if $j \ge \tau$.
The exact values of $\tau$ for the heaviest cuts are given in Table 1; note that all values are slightly larger than $d/2$.
$d$:    2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32
$\tau$: 2  3  3  4  5  5  6  6   7   7   8   9   9  10  10  11  11  12  12  13  14  14  15  15  16  16  17  17  18  18  19
2.6 Generalisation
Now it is easy to generalise the findings: we can make the educated guess that algorithms of form (7) are good also in the case of a general $d$. All we need to do is to find a general expression for the threshold $\tau(d)$, and prove that the algorithm indeed works well in the general case.
To facilitate algorithm analysis, let us define the shorthand notation
$W(\tau, d) = w_d(A_\tau)$
for the performance of the threshold algorithm $A_\tau$ of form (7). It is easy to see that $W(d+1, d) = W(0, d) = 1/2$, as the threshold value $\tau = d + 1$ simply means that algorithm $A_\tau$ outputs a uniform random cut, while $\tau = 0$ means that $A_\tau$ outputs the complement of the uniform random cut. The general shape of $W(\tau, d)$ is illustrated in Figure 6.
We are interested in the region $d/2 < \tau \le d + 1$, in which $W(\tau, d) \ge 1/2$. In the following, we derive a relatively simple expression for $W(\tau, d)$ in this region; the proof strategy is inspired by Shearer [15].
Lemma 3.
For all $d$ and $d/2 < \tau \le d + 1$ we have
$W(\tau, d) = \frac{1}{2} + 4^{-(d-1)} \binom{d-1}{\tau-1} \sum_{i=d-\tau}^{\tau-2} \binom{d-1}{i}.$
Proof.
Fix a triangle-free $d$-regular graph $G$. Recall that $c$ is a uniform random cut, $(c(v), k(v))$ is the local neighbourhood of node $v$, and $A_\tau(c(v), k(v))$ is the output of algorithm $A_\tau$ at node $v$.
Consider an edge $\{u, v\}$ of $G$. We will calculate the probability that $\{u, v\}$ is a cut edge. As $G$ is triangle-free, nodes $u$ and $v$ do not have common neighbours. Hence, if we let $X$ denote the number of nodes in $N(u) \setminus \{v\}$ that agree with $c(u)$, and $Y$ the number of nodes in $N(v) \setminus \{u\}$ that agree with $c(v)$, then $c(u)$, $c(v)$, $X$, and $Y$ are mutually independent, and both $X$ and $Y$ have the binomial distribution $\mathrm{Bin}(d-1, 1/2)$.
First assume that $c(u) = c(v)$; this happens with probability $1/2$. Now $k(u) = X + 1$ and $k(v) = Y + 1$, and $\{u, v\}$ is a cut edge if and only if exactly one of the two nodes changes its mind, that is, exactly one of the events $X \ge \tau - 1$ and $Y \ge \tau - 1$ holds. Writing $p = \Pr[X \ge \tau - 1]$, this happens with probability $2p(1-p)$.
Second, assume that $c(u) \ne c(v)$. Now $k(u) = X$ and $k(v) = Y$, and $\{u, v\}$ is a cut edge if and only if both nodes change their minds or neither does. Writing $q = \Pr[X \ge \tau]$, this happens with probability $q^2 + (1-q)^2$.
Hence the probability that $\{u, v\}$ is a cut edge is
(8) $W(\tau, d) = \tfrac{1}{2} \cdot 2p(1-p) + \tfrac{1}{2}\bigl(q^2 + (1-q)^2\bigr) = \tfrac{1}{2} + r(1 - r - 2q),$
where $r = p - q = \Pr[X = \tau - 1] = \binom{d-1}{\tau-1} 2^{-(d-1)}$. Finally, by the symmetry of the distribution of $X$ we have $q = \Pr[X \le d - 1 - \tau]$, and therefore
$1 - r - 2q = \Pr[X \le \tau - 2] - \Pr[X \le d - 1 - \tau] = 2^{-(d-1)} \sum_{i=d-\tau}^{\tau-2} \binom{d-1}{i};$
here we used the assumption $\tau > d/2$. The claim follows. ∎
Now we can easily find an optimal threshold $\tau$ for any given $d$: simply try all possible values $\tau \in \{1, 2, \dots, d+1\}$ and apply Lemma 3. Figure 7 is a plot of the optimal $\tau$ as a function of $d$. At least for small values of $d$, it appears that
$\tau \approx \frac{d + \sqrt{d}}{2}$
is close to the optimum. For notational convenience, we pick a slightly larger value
$\tau(d) = \left\lceil \frac{d + \sqrt{d}}{2} \right\rceil.$
Now we have arrived at the algorithm that we already described in Section 1.4.
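The optimisation over thresholds can be replayed mechanically. In the sketch below, $W(\tau, d)$ is evaluated in the equivalent form $\tfrac{1}{2} + r(1 - r - 2q)$, where $r = \Pr[X = \tau - 1]$, $q = \Pr[X \ge \tau]$, and $X \sim \mathrm{Bin}(d-1, 1/2)$; this form comes from conditioning on whether the two endpoints of an edge pick the same random bit, as in the proof of Lemma 3, and writing it this way is a choice of this illustration. The scan recovers the optimal thresholds of Table 1 for small $d$.

```python
from fractions import Fraction
from math import comb

def W(tau, d):
    # W(tau, d) = 1/2 + r(1 - r - 2q) with X ~ Bin(d - 1, 1/2),
    # r = P[X = tau - 1] and q = P[X >= tau].
    half = Fraction(1, 2 ** (d - 1))
    r = comb(d - 1, tau - 1) * half
    q = sum(comb(d - 1, i) for i in range(tau, d)) * half
    return Fraction(1, 2) + r * (1 - r - 2 * q)

def best_tau(d):
    # Try all possible thresholds 1, ..., d + 1 and keep the best one.
    return max(range(1, d + 2), key=lambda tau: W(tau, d))

print([best_tau(d) for d in range(2, 11)])  # [2, 3, 3, 4, 5, 5, 6, 6, 7], as in Table 1
```

Exact rational arithmetic avoids any floating-point ties when comparing candidate thresholds.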
What remains is a proof of the performance guarantee (5). Figure 8 gives some intuition on how good the bounds are.
Theorem 4.
Let $d \ge 2$ and
$\tau(d) = \left\lceil \frac{d + \sqrt{d}}{2} \right\rceil.$
Then
$W(\tau(d), d) \ge \frac{1}{2} + \frac{0.28125}{\sqrt{d}}.$
Proof.
See Appendix A. ∎
Legend for Figure 8: the expected weight of a cut found by an optimal threshold algorithm; the expected weight of a cut found by our algorithm; a lower bound on the latter; and a lower bound for Shearer’s [15] algorithm.
3 Conclusions
In this work, we have presented a new randomised distributed algorithm for finding large cuts. The key observation was that the task of designing randomised distributed algorithms for finding large cuts can be reduced to the problem of finding a max-weight cut in a weighted neighbourhood graph. This way we were able to use computers to find optimal algorithms for small values of $d$. The general form of the optimal algorithms was apparent, and hence the results were easy to generalise.
Our algorithm was designed for $d$-regular triangle-free graphs. However, it can be easily applied in a much more general setting as well. To see this, recall that $W(\tau(d), d)$ is not only the expected weight of the cut, but also the probability that any individual edge is a cut edge. The analysis only assumes that the endpoints $u$ and $v$ are of degree $d$ and that they do not have a common neighbour. Hence we have the following immediate generalisations.
1. Our algorithm can be applied in triangle-free graphs of maximum degree $d$ as follows: a node of degree $d' < d$ simulates the behaviour of $d - d'$ missing neighbours. We still have the same guarantee that each original edge is a cut edge with probability at least $1/2 + 0.28125/\sqrt{d}$. The running time of the algorithm is still one communication round; however, some nodes need to produce more random bits.
2. Our algorithm can also be applied in any graph, even in those that contain triangles. Now our analysis shows that each edge that is not part of a triangle will be a cut edge with probability at least $1/2 + 0.28125/\sqrt{d}$. This observation already gives a simple bound: if at most a fraction $\epsilon$ of all edges are part of a triangle, we will find a cut of expected weight at least $(1 - \epsilon)\bigl(1/2 + 0.28125/\sqrt{d}\bigr)$.
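The first generalisation can be made concrete with a small sketch: on the path $P_4$ (maximum degree 2), each endpoint simulates one missing neighbour with an extra random bit, and exhaustive enumeration shows that the per-edge guarantee matches the 2-regular case exactly. The graph and the threshold $\tau = 2$ are illustrative choices of this sketch.

```python
import itertools
from fractions import Fraction

def padded_cut(adj, c, virt, tau):
    # Node v of degree d' < d simulates d - d' missing neighbours;
    # virt[v] holds the random bits of these virtual neighbours.
    out = []
    for v in range(len(adj)):
        k = sum(c[u] == c[v] for u in adj[v]) + sum(b == c[v] for b in virt[v])
        out.append(1 - c[v] if k >= tau else c[v])
    return out

# Path on 4 nodes; the two endpoints each simulate one virtual neighbour.
adj = [[1], [0, 2], [1, 3], [2]]
edges = [(0, 1), (1, 2), (2, 3)]
total = Fraction(0)
for bits in itertools.product((0, 1), repeat=6):  # 4 real + 2 virtual random bits
    c, virt = bits[:4], [[bits[4]], [], [], [bits[5]]]
    out = padded_cut(adj, c, virt, tau=2)
    total += Fraction(sum(out[u] != out[v] for u, v in edges), len(edges))
print(total / 2 ** 6)  # 3/4: each original edge is cut with the same probability as in the 2-regular case
```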
Acknowledgements
Computer resources were provided by the Aalto University School of Science “ScienceIT” project (Triton cluster), and by the Department of Computer Science at the University of Helsinki (Ukko cluster).
References
 Alon [1996] Noga Alon. Bipartite subgraphs. Combinatorica, 16(3):301–311, 1996.
 Erdős [1979] Paul Erdős. Problems and results in graph theory and combinatorial analysis. In John Adrian Bondy and U. S. R. Murty, editors, Proc. Graph Theory and Related Topics (University of Waterloo, July 1977), pages 153–163. Academic Press, 1979.
 Fraigniaud et al. [2007] Pierre Fraigniaud, Cyril Gavoille, David Ilcinkas, and Andrzej Pelc. Distributed computing with advice: information sensitivity of graph coloring. In Proc. 34th International Colloquium on Automata, Languages and Programming (ICALP 2007), volume 4596 of Lecture Notes in Computer Science, pages 231–242. Springer, 2007. doi:10.1007/978-3-540-73420-8_22.
 Garey and Johnson [1979] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.
 Håstad [2001] Johan Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001. doi:10.1145/502090.502098.
 Kelsen [1996] Pierre Kelsen. Neighborhood graphs and distributed coloring. In Proc. 5th Scandinavian Workshop on Algorithm Theory (SWAT 1996), volume 1097 of Lecture Notes in Computer Science, pages 223–233. Springer, 1996. doi:10.1007/3-540-61422-2_134.
 Khot et al. [2007] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O’Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing, 37(1):319–357, 2007. doi:10.1137/S0097539705447372.
 Kügel [2012] Adrian Kügel. Improved exact solver for the weighted MaxSAT problem. In Daniel Le Berre, editor, Proc. Pragmatics of SAT Workshop (POS 2010), volume 8 of EasyChair Proceedings in Computing, pages 15–27, 2012. http://www.easychair.org/publications/?page=2003892821.
 Kuhn and Wattenhofer [2006] Fabian Kuhn and Roger Wattenhofer. On the complexity of distributed graph coloring. In Proc. 25th Annual ACM Symposium on Principles of Distributed Computing (PODC 2006), pages 7–15. ACM Press, 2006. doi:10.1145/1146381.1146387.
 Linial [1992] Nathan Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992. doi:10.1137/0221015.
 Naor [1991] Moni Naor. A lower bound on probabilistic algorithms for distributive ring coloring. SIAM Journal on Discrete Mathematics, 4(3):409–412, 1991. doi:10.1137/0404036.
 Papadimitriou and Yannakakis [1991] Christos H. Papadimitriou and Mihalis Yannakakis. Optimization, approximation, and complexity classes. Journal of Computer and System Sciences, 43(3):425–440, 1991. doi:10.1016/0022-0000(91)90023-X.
 Poljak and Tuza [1995] Svatopluk Poljak and Zsolt Tuza. Maximum cuts and largest bipartite subgraphs. In William Cook, László Lovász, and Paul Seymour, editors, Combinatorial Optimization, volume 20 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 181–244. AMS, 1995.
 Rybicki [2011] Joel Rybicki. Exact bounds for distributed graph colouring. Master’s thesis, Department of Computer Science, University of Helsinki, May 2011. http://urn.fi/URN:NBN:fi-fe201106091715.
 Shearer [1992] James B. Shearer. A note on bipartite subgraphs of trianglefree graphs. Random Structures & Algorithms, 3(2):223–226, 1992. doi:10.1002/rsa.3240030211.
 Trevisan et al. [2000] Luca Trevisan, Gregory B. Sorkin, Madhu Sudan, and David P. Williamson. Gadgets, approximation, and linear programming. SIAM Journal on Computing, 29(6):2074–2097, 2000. doi:10.1137/S0097539797328847.
Appendix A Proof of Theorem 4
We need to prove a lower bound on $W(\tau(d), d)$, as given by Lemma 3, in the region of interest. Our general strategy is as follows:
1. Verify the small cases with a computer.
2. Prove a closed-form lower bound for all remaining (larger) values of $d$.
The first part is easily solved with a simple Python script or with a short calculation in Mathematica (see Figure 8 for examples of the results). We will now focus on the second part; for that we will need various estimates of binomial coefficients.
The proof given here is certainly not the most elegant way to derive the bound, but it is self-contained and gets the job done. Proving the claim for a “sufficiently large” $d$ would be straightforward. However, we need to show that already a concrete, relatively small $d$ is enough.
We will first approximate binomial coefficients with the normal distribution. Let , and define
for each .
Fact 5.
For any we have
Lemma 6.
For any , , and we have
Proof.
We can estimate
where
Now as . For each we can verify that when . ∎
Lemma 7.
For and we have
Proof.
Now we have the estimates that we will use in the proof of Theorem 4. We will consider the odd and even values of $d$ separately.
Odd $d$.
Assume that , . Let
and observe that
It follows that
Therefore
Even $d$.
Assume that , . Let
Now we have
For any we have the identity
We can use it to derive
This completes the proof of Theorem 4.