
Large Cuts with Local Algorithms on
Triangle-Free Graphs

Juho Hirvonen

Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science, Aalto University, Finland
juho.hirvonen@aalto.fi

Joel Rybicki

Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science, Aalto University, Finland
joel.rybicki@aalto.fi

Stefan Schmid

TU Berlin & T-Labs, Germany
stefan@net.t-labs.tu-berlin.de

Jukka Suomela

Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science, Aalto University, Finland
jukka.suomela@aalto.fi

Abstract. We study the problem of finding large cuts in d-regular triangle-free graphs. In prior work, Shearer (1992) gives a randomised algorithm that finds a cut of expected size (1/2 + 0.177/√d)m, where m is the number of edges. We give a simpler algorithm that does much better: it finds a cut of expected size (1/2 + 0.28125/√d)m. As a corollary, this shows that in any d-regular triangle-free graph there exists a cut of at least this size.

Our algorithm can be interpreted as a very efficient randomised distributed algorithm: each node needs to produce only one random bit, and the algorithm runs in one synchronous communication round. This work is also a case study of applying computational techniques in the design of distributed algorithms: our algorithm was designed by a computer program that searched for optimal algorithms for small values of d.

## 1 Introduction

We study the problem of finding large cuts in triangle-free graphs. In particular, we are interested in the design of fast and simple randomised distributed algorithms.

### 1.1 Random Cuts

Let G = (V, E) be a simple undirected graph. A cut is a function c: V → {a, b} that labels the nodes with symbols a and b. An edge {u, v} ∈ E is a cut edge if c(u) ≠ c(v). We use the convention that the weight w(c) of a cut is the fraction of edges that are cut edges; that is, the weight of the cut is normalised so that it is in the range [0, 1]. See Figure 1 for an illustration.

While the problem of finding a maximum cut (or a good approximation of one) is NP-hard [4, 12, 5, 16, 7], there is a very simple randomised algorithm that finds a relatively large cut: for each node v ∈ V, pick c(v) ∈ {a, b} independently and uniformly at random. We say that c is a uniform random cut.

In a uniform random cut, each edge is a cut edge with probability 1/2. It follows that the expected weight of a uniform random cut is also 1/2.

### 1.2 Regular Triangle-Free Graphs

In general graphs, we cannot expect to find cuts that are much better than uniform random cuts. For example, in a complete graph on n nodes, the weight of any cut is at most 1/2 + O(1/n).

However, there is a family of graphs that makes for a much more interesting case from the perspective of the max-cut problem: regular triangle-free graphs. Erdős [2] raised the problem of estimating the minimum possible size of a maximum cut in a high-girth graph, and especially the case of triangle-free graphs attracted much interest from the research community [15, 13, 1].

Accordingly, from now on, we assume that G is a d-regular graph for some constant d, and that there are no triangles (cycles of length three) in G. While focusing on regular triangle-free graphs may seem overly restrictive, our algorithm can be applied in a much more general setting; we will briefly discuss extensions in Section 3.

### 1.3 Shearer’s Algorithm

In triangle-free graphs, it is easy to find cuts that are (in expectation) larger than uniform random cuts. Nevertheless, a uniform random cut is a good starting point.

Shearer’s [15] algorithm proceeds as follows. Pick three uniform random cuts c₁, c₂, and c₃. For each node v, let

 ℓ(v) = |{ {u, v} ∈ E : c₁(v) = c₁(u) }|

be the number of like-minded neighbours in c₁. Then the output of a node v is

 c(v) = { c₁(v)  if ℓ(v) < d/2,
          c₂(v)  if ℓ(v) > d/2,     (1)
          c₃(v)  if ℓ(v) = d/2.

Put otherwise, a node follows c₁ if it seems that there are many cut edges w.r.t. c₁ in its immediate neighbourhood, and it falls back to another cut c₂ otherwise. The value c₃ is just used as a random tie-breaker.

Shearer [15] shows that the expected weight of cut (1) is at least

 1/2 + √2/(8√d) ≈ 1/2 + 0.177/√d     (2)

in d-regular triangle-free graphs.
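For intuition, rule (1) can be sketched in a few lines of Python. This is only an illustrative reading of the rule (not code from the paper), and the graph is assumed to be given as an adjacency list; the function name `shearer_cut` is our own.

```python
import random

def shearer_cut(adj, d):
    """One-round sketch of Shearer's rule (1): follow c1 when a node has
    few like-minded neighbours, fall back to c2 when it has many, and
    let c3 break the tie at exactly d/2."""
    c1, c2, c3 = [{v: random.choice("ab") for v in adj} for _ in range(3)]
    c = {}
    for v, neighbours in adj.items():
        like_minded = sum(1 for u in neighbours if c1[u] == c1[v])
        if like_minded < d / 2:
            c[v] = c1[v]
        elif like_minded > d / 2:
            c[v] = c2[v]
        else:
            c[v] = c3[v]
    return c
```

Note that every node reads only its immediate neighbours' bits, so the whole rule runs in one communication round.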

### 1.4 Our Algorithm

Shearer’s algorithm can be characterised as follows: take a uniform random cut and then improve it with the help of a randomised rule described in (1). In this work, we show that we can do much better with the help of a simple deterministic rule.

In our algorithm we pick one uniform random cut c₁. Again, each node v counts the number of like-minded neighbours

 ℓ(v) = |{ {u, v} ∈ E : c₁(v) = c₁(u) }|.

We define the threshold

 τ = ⌈(d + √d)/2⌉.     (3)

Now the output of a node v is simply

 c(v) = { c₁(v)   if ℓ(v) < τ,
          −c₁(v)  if ℓ(v) ≥ τ.     (4)

Here −c₁(v) is the complement of c₁(v); that is, −a = b and −b = a. In the algorithm each node simply changes its mind if it seems that there are too many like-minded neighbours.

It is not obvious that such a rule makes sense, or that this particular choice of τ is good. Nevertheless, we show in this work that the expected weight of cut (4) is at least

 1/2 + 9/(32√d) = 1/2 + 0.28125/√d,     (5)

which is much larger than Shearer’s bound (2), at least in low-degree graphs. As a corollary, any d-regular triangle-free graph admits a cut of at least this size.

Our algorithm can be implemented very efficiently in a distributed setting: each node only needs to produce one random bit, and the algorithm only requires one communication round. In Shearer’s algorithm each node has to produce up to three random bits.
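For concreteness, here is a minimal Python sketch of the whole algorithm: threshold (3) plus rule (4). The adjacency-list input and helper names are our own illustration, not part of the paper.

```python
import math
import random

def local_cut(adj, d):
    """Rule (4): each node flips its uniformly random label c1 when it
    has at least tau like-minded neighbours; tau is the threshold (3)."""
    tau = math.ceil((d + math.sqrt(d)) / 2)
    c1 = {v: random.choice("ab") for v in adj}  # one random bit per node
    flip = {"a": "b", "b": "a"}
    c = {}
    for v, neighbours in adj.items():
        like_minded = sum(1 for u in neighbours if c1[u] == c1[v])
        c[v] = flip[c1[v]] if like_minded >= tau else c1[v]
    return c
```

On the 6-cycle, which is 2-regular and triangle-free, the expected weight of the resulting cut works out to 3/4, consistent with bound (5).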

Perhaps the most interesting feature of the algorithm is that it was not designed by a human being—it was discovered by a computer program. Indeed, cuts in triangle-free graphs serve as an example of a computational problem in which computer-aided methods can be used to partially automate algorithm design and analysis (this process is also known as “algorithm synthesis” or “protocol synthesis”). There is a wide range of other graph problems in which a similar approach has a lot of potential as a shortcut to the discovery of new distributed algorithms.

In Section 2, we outline the procedure that we used to design the algorithm, and then present an analysis of its performance. In Section 3 we discuss how to apply the algorithm in a more general setting beyond regular triangle-free graphs.

## 2 Algorithm Design and Analysis

We begin this section with an informal overview of so-called neighbourhood graphs. The formal definitions that we use in this work are given after that.

### 2.1 Neighbourhood Graphs in Prior Work

In the context of distributed systems, the radius-r neighbourhood of a node v refers to all information that node v may gather in r communication rounds. Depending on the model of computation that we use, this may include all nodes that are within distance r from v, the edges incident to these nodes, their local inputs, and the random bits that these nodes have generated. The idea is that whatever decision node v takes, it can only depend on its radius-r neighbourhood: any distributed algorithm that runs in r communication rounds can be interpreted as a mapping from local neighbourhoods to local outputs.

A neighbourhood graph N is a graph representation of all possible radius-r neighbourhoods that a distributed algorithm may encounter. Each node N of the neighbourhood graph corresponds to a possible local neighbourhood: there is at least one communication network in which some node has a local neighbourhood isomorphic to N. We have an edge {N₁, N₂} in the neighbourhood graph if there is some communication network in which nodes with local neighbourhoods N₁ and N₂ are adjacent; see Figure 2 for an example.

Neighbourhood graphs are a convenient concept in the study of graph colouring algorithms, both from the perspective of traditional algorithm design [10, 11, 6, 9, 3] and from the perspective of computational algorithm design [14]. The key observation is that the following two statements are equivalent:

• A is a proper k-colouring of the neighbourhood graph N,

• A is a distributed algorithm that finds a proper k-colouring in r rounds.

To see this, consider any graph G. If nodes u and v are adjacent in G, then their local views N(u) and N(v) are adjacent in N, and by assumption A assigns a different colour to N(u) and N(v). Hence distributed algorithm A finds a proper k-colouring of G. Conversely, if algorithm A finds a proper colouring in any communication network, it defines a proper k-colouring of N.

In summary, colourings of the neighbourhood graph correspond to distributed algorithms for graph colouring, and vice versa. In general, a similar property does not hold for arbitrary graph problems. For example, there is no one-to-one correspondence between maximal independent sets of N and distributed algorithms that find maximal independent sets [14, Section 8.5].

However, as we will see in this work, we can use neighbourhood graphs also in the context of the maximum cut problem. It turns out that we can define a weighted version of neighbourhood graphs, so that there is a one-to-one correspondence between heavy cuts in the weighted neighbourhood graph, and randomised distributed algorithms that find large cuts in expectation.

### 2.2 Model of Distributed Computing

Next, we formalise the model of distributed computing that is sufficient for the purposes of our algorithm. Fix the parameter d; recall that we are interested in d-regular triangle-free graphs. Let G = (V, E) be such a graph, and let c be a uniform random cut in G. The local neighbourhood of a node v ∈ V is N_c(v) = (c(v), ℓ_c(v)), where

 ℓ_c(v) = |{ {u, v} ∈ E : c(v) = c(u) }|

is the number of neighbours with the same random bit. Note that there are only 2(d + 1) possible local neighbourhoods.

A distributed algorithm is a function A that associates an output A(N) ∈ {a, b} with each local neighbourhood N. For any d-regular triangle-free graph G, function A defines a randomised process that produces a random cut as follows:

1. Pick a uniform random cut c.

2. For each node v ∈ V, let c_A(v) = A(N_c(v)).

We use the notation c_A for the random cut produced by algorithm A in graph G. In particular, we are interested in the quantity E[w(c_A)], the expected weight of cut c_A.

A priori, we might expect that E[w(c_A)] would depend on G. However, as we will soon see, this is not the case: it only depends on the parameter d and the algorithm A.

### 2.3 Weighted Neighbourhood Graph

A weighted digraph is a pair H = (V, w) with w: V × V → [0, 1]. Here V is the set of nodes, and w associates a non-negative weight w(u, v) with each directed edge (u, v). Let c: V → {a, b} be a cut in weighted digraph H. The weight of cut c is

 w(c) = Σ_{(u,v) ∈ V×V : c(u) ≠ c(v)} w(u, v),

the total weight of all cut edges.

The weighted neighbourhood graph N = (V_N, w_N) is a weighted digraph defined as follows (see Figure 3 for an illustration). The set of nodes

 V_N = { (k, i) : k ∈ {a, b}, i ∈ {0, 1, …, d} }

consists of all possible neighbourhoods that we may encounter in d-regular triangle-free graphs. We define the edge weights as follows:

 w_N((k₁, i₁), (k₂, i₂)) = { (1/4^d) · C(d−1, i₁) · C(d−1, i₂)          if k₁ ≠ k₂,
                             (1/4^d) · C(d−1, i₁−1) · C(d−1, i₂−1)     if k₁ = k₂.

Here C(n, j) denotes the binomial coefficient; we follow the convention that C(n, j) = 0 for j < 0 and for j > n.

Note that the weights are symmetric, and the total weight of all edges is 1. The following lemma shows that the weight of the edge (N₁, N₂) in the neighbourhood graph equals the probability of “observing” adjacent neighbourhoods of types N₁ and N₂; see Figure 4. Note that the probability does not depend on the choice of the graph G or the edge {u, v}.
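The weights w_N are easy to tabulate programmatically. The following sketch (our own illustration, with a hypothetical helper name) builds them for a given d; the check that the total weight is 1 follows from the binomial theorem.

```python
from math import comb

def neighbourhood_graph(d):
    """Nodes (k, i) and edge weights w_N of the weighted neighbourhood
    graph for d-regular triangle-free graphs."""
    def binom(n, j):  # the stated convention: 0 outside the range 0..n
        return comb(n, j) if 0 <= j <= n else 0
    nodes = [(k, i) for k in "ab" for i in range(d + 1)]
    w = {}
    for (k1, i1) in nodes:
        for (k2, i2) in nodes:
            if k1 != k2:
                weight = binom(d - 1, i1) * binom(d - 1, i2) / 4 ** d
            else:
                weight = binom(d - 1, i1 - 1) * binom(d - 1, i2 - 1) / 4 ** d
            w[(k1, i1), (k2, i2)] = weight
    return nodes, w
```

For example, `neighbourhood_graph(4)` returns the 2·(4+1) = 10 nodes and all 100 directed edge weights, which sum to 1.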

###### Lemma 1.

Let G be a d-regular triangle-free graph, and let {u, v} be an edge of G. Consider a uniform random cut c of G. Then for any given neighbourhoods N₁ and N₂ we have

 Pr[N_c(u) = N₁ and N_c(v) = N₂] = w_N(N₁, N₂).
###### Proof.

In what follows, we will denote the neighbours of u by S_u ∪ {v}, where |S_u| = d − 1. Similarly, the neighbours of v are S_v ∪ {u}, where |S_v| = d − 1. As G is triangle-free, the sets S_u and S_v are disjoint. In particular, the random variables c(x) for x ∈ S_u ∪ S_v ∪ {u, v} are independent.

Let N₁ = (k₁, i₁) and N₂ = (k₂, i₂). There are two cases. First assume that k₁ = k₂. Then

 Pr[N_c(u) = N₁ and N_c(v) = N₂]
   = Pr[c(u) = k₁ and c(v) = k₂] · Pr[|{y ∈ S_u : c(y) = k₁}| = i₁ − 1] · Pr[|{y ∈ S_v : c(y) = k₂}| = i₂ − 1]
   = 1/4 · (1/2^{d−1}) C(d−1, i₁−1) · (1/2^{d−1}) C(d−1, i₂−1)
   = w_N(N₁, N₂).

Second, assume that k₁ ≠ k₂. Then

 Pr[N_c(u) = N₁ and N_c(v) = N₂]
   = Pr[c(u) = k₁ and c(v) = k₂] · Pr[|{y ∈ S_u : c(y) = k₁}| = i₁] · Pr[|{y ∈ S_v : c(y) = k₂}| = i₂]
   = 1/4 · (1/2^{d−1}) C(d−1, i₁) · (1/2^{d−1}) C(d−1, i₂)
   = w_N(N₁, N₂). ∎

### 2.4 Cuts in Neighbourhood Graphs

Any function A: V_N → {a, b} can be interpreted in two ways:

1. A cut of weight w(A) in the weighted neighbourhood graph N.

2. A distributed algorithm that finds a cut in any d-regular triangle-free graph: the algorithm picks a uniform random cut c, and then each node v outputs A(N_c(v)).

The following lemma shows that the two interpretations are closely related: if A is a cut of weight w(A) in the neighbourhood graph N, then it immediately gives us a distributed algorithm that finds a cut of expected weight w(A) in any d-regular triangle-free graph.

###### Lemma 2.

If A is a cut in the neighbourhood graph N, and G is a d-regular triangle-free graph, then E[w(c_A)] = w(A).

###### Proof.

Fix a graph G and an edge {u, v} of G. By Lemma 1 we have

 Pr[c_A(u) ≠ c_A(v)] = Σ_{(N₁, N₂) : A(N₁) ≠ A(N₂)} Pr[N_c(u) = N₁ and N_c(v) = N₂]
                     = Σ_{(N₁, N₂) : A(N₁) ≠ A(N₂)} w_N(N₁, N₂) = w(A).

The claim follows by averaging over all edges of G. ∎

### 2.5 Computational Algorithm Design

Now we have all the tools that we need. Lemma 2 gives a one-to-one correspondence between large cuts of the neighbourhood graph and distributed algorithms that find large cuts. For any fixed value of d, the task of designing a distributed algorithm is now straightforward:

1. Construct the weighted neighbourhood graph N.

2. Find a heavy cut in N.

See Figure 5 for an example. For that value of d, the heaviest cut of N is

 A_opt((k, i)) = { k   if i < 3,
                   −k  if i ≥ 3.     (6)

This is also the best possible algorithm for this value of d, for the model of computing that we defined in Section 2.2.

###### Remark 1.

The reader may want to compare (6) with Section 1.4. For this value of d, the algorithms are identical, albeit with a slightly different notation. Note that the threshold (3) evaluates to τ = 3 in this case.

Of course finding a maximum-weight cut is hard in the general case. However, in this particular case the neighbourhood graphs are relatively small (only 2(d + 1) nodes).
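To illustrate, for the smallest values of d the heaviest cut of N can be found by exhaustive search over all 2^{2(d+1)} candidate cuts. The sketch below, with our own helper names, recomputes the weights of Section 2.3; it is an illustration, not the search procedure used by the authors.

```python
from itertools import product
from math import comb

def binom(n, j):
    return comb(n, j) if 0 <= j <= n else 0

def w_N(d, N1, N2):
    """Edge weight between neighbourhoods N1 = (k1, i1), N2 = (k2, i2)."""
    (k1, i1), (k2, i2) = N1, N2
    if k1 != k2:
        return binom(d - 1, i1) * binom(d - 1, i2) / 4 ** d
    return binom(d - 1, i1 - 1) * binom(d - 1, i2 - 1) / 4 ** d

def max_cut_weight(d):
    """Try all 2^(2(d+1)) cuts of the neighbourhood graph; small d only."""
    nodes = [(k, i) for k in "ab" for i in range(d + 1)]
    best = 0.0
    for labels in product("ab", repeat=len(nodes)):
        A = dict(zip(nodes, labels))
        weight = sum(w_N(d, N1, N2)
                     for N1 in nodes for N2 in nodes if A[N1] != A[N2])
        best = max(best, weight)
    return best
```

For d = 4, the threshold cut with τ = 3 has weight 1/2 + 9/64 = 0.640625, so the brute-force maximum is at least that.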

While the smallest cases could be easily solved with brute force, slightly more refined approaches are helpful for moderate values of d. We took the following approach. First, we reduced the max-weight-cut instance to a max-weight-SAT instance in a straightforward manner:

• For each node u ∈ V_N we have a Boolean variable x_u in formula φ.

• For each edge {u, v} of weight w we have two clauses in formula φ, both of weight w:

 x_u ∨ x_v   and   ¬x_u ∨ ¬x_v.

Note that at least one of these clauses is always satisfied, while both of them are satisfied if and only if x_u and x_v have different values.

Now it is easy to see that a variable assignment of φ that maximises the total weight of satisfied clauses also gives a maximum-weight cut in N: let c(u) = a iff x_u is true. More precisely, the total weight of the clauses satisfied by c is W + w(c), where W is the total weight of all edges.
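The reduction is mechanical. A sketch (our own illustrative encoding, with 1-based DIMACS-style variable ids) that emits the weighted clauses:

```python
def cut_to_maxsat(nodes, w):
    """Emit the weighted clauses: variable u is true iff c(u) = a, and
    each weighted edge contributes two clauses of the same weight."""
    var = {u: i + 1 for i, u in enumerate(nodes)}
    clauses = []
    for (u, v), weight in w.items():
        if weight > 0:
            clauses.append((weight, [var[u], var[v]]))    # x_u OR x_v
            clauses.append((weight, [-var[u], -var[v]]))  # NOT x_u OR NOT x_v
    return clauses
```

A real run would print these clauses in a solver input format (e.g., weighted CNF) and feed them to a max-weight-SAT solver such as akmaxsat.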

With this reduction, we can then resort to off-the-shelf max-weight-SAT solvers. In our experiments we used the akmaxsat solver [8]; with it we can solve all the relevant cases very quickly (e.g., on a low-end laptop in less than 5 seconds).

Surprisingly, in all cases the max-weight cut has the following simple structure:

 A_τ((k, i)) = { k   if i < τ,
                 −k  if i ≥ τ.     (7)

The exact values of τ for the heaviest cuts are given in Table 1; note that all values are slightly larger than d/2.

### 2.6 Generalisation

Now it is easy to generalise the findings: we can make the educated guess that algorithms of form (7) are good also for a general d. All we need to do is to find a general expression for the threshold τ, and prove that algorithm A_τ indeed works well in the general case.

To facilitate algorithm analysis, let us define the shorthand notation

 α(τ, d) = E[w(c_{A_τ})]

for the performance of algorithm A_τ. It is easy to see that α(0, d) = α(d + 1, d) = 1/2: the threshold value τ = d + 1 simply means that algorithm A_τ outputs a uniform random cut, while τ = 0 means that A_τ outputs the complement of the uniform random cut. The general shape of α is illustrated in Figure 6.

We are interested in the region d/2 < τ ≤ d, where α(τ, d) ≥ 1/2. In the following, we derive a relatively simple expression for α(τ, d) in this region; the proof strategy is inspired by Shearer [15].

###### Lemma 3.

For all d and all τ with d/2 < τ ≤ d we have

 α(τ, d) = 1/2 + (1/4^{d−1}) · C(d−1, τ−1) · Σ_{i=d−τ+1}^{τ−1} C(d−1, i).
###### Proof.

Fix a triangle-free d-regular graph G. Recall that c is a uniform random cut, N_c(v) is the local neighbourhood of node v, and c_A(v) = A_τ(N_c(v)) is the output of algorithm A_τ at node v.

Consider an edge {u, v} of G. We will calculate the probability that {u, v} is a cut edge. To this end, define

 p = Pr[c(u) ≠ c(v) and ℓ_c(u) ≥ τ and ℓ_c(v) ≥ τ],
 q = Pr[c(u) ≠ c(v) and ℓ_c(u) < τ and ℓ_c(v) < τ],
 r = Pr[c(u) = c(v) and either ℓ_c(u) < τ ≤ ℓ_c(v) or ℓ_c(v) < τ ≤ ℓ_c(u)].

These are precisely the cases in which c_A(u) ≠ c_A(v); hence {u, v} is a cut edge with probability p + q + r. For each x ∈ {u, v}, let

 p_x = Pr[ℓ_c(x) ≥ τ | c(u) ≠ c(v)],
 q_x = Pr[ℓ_c(x) < τ | c(u) ≠ c(v)],
 r_x = Pr[ℓ_c(x) ≥ τ | c(u) = c(v)].

Now we have the following identities:

 p = (1/2) p_u p_v,
 q = (1/2) q_u q_v,
 r = (1/2) (r_v (1 − r_u) + r_u (1 − r_v)).

By definition, p_x + q_x = 1, and by symmetry, p_u = p_v, q_u = q_v, and r_u = r_v. Hence the probability that {u, v} is a cut edge is

 p + q + r = (1/2) p_u² + (1/2) q_u² + r_u (1 − r_u)
           = 1/2 + p_u (p_u − 1) + r_u (1 − r_u)
           = 1/2 − p_u q_u + r_u (p_u + q_u − r_u)
           = 1/2 + (r_u − p_u)(q_u − r_u).     (8)

An argument similar to what we used in Lemma 1 gives

 p_u = (1/2^{d−1}) Σ_{i=τ}^{d−1} C(d−1, i),
 q_u = (1/2^{d−1}) Σ_{i=0}^{τ−1} C(d−1, i),
 r_u = (1/2^{d−1}) Σ_{i=τ−1}^{d−1} C(d−1, i).

Recall that we assumed that τ > d/2; hence d − τ ≤ τ − 1 and

 2^{d−1} (r_u − p_u) = C(d−1, τ−1),
 2^{d−1} (q_u − r_u) = Σ_{i=0}^{τ−1} C(d−1, i) − Σ_{i=0}^{d−τ} C(d−1, i) = Σ_{i=d−τ+1}^{τ−1} C(d−1, i).

From (8) we therefore obtain

 p + q + r = 1/2 + (1/4^{d−1}) C(d−1, τ−1) Σ_{i=d−τ+1}^{τ−1} C(d−1, i). ∎

Now we can easily find an optimal threshold τ for any given d: simply try all possible values and apply Lemma 3. Figure 7 is a plot of the optimal τ. At least for small values of d, it appears that

 τ ≈ (d + 1)/2 + 0.439√d

is close to the optimum. For notational convenience, we pick a slightly larger value

 τ = ⌈(d + √d)/2⌉.

Now we have arrived at the algorithm that we already described in Section 1.4.
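With Lemma 3, this search is immediate in a short script. The following sketch (our own illustration, not the authors' code) evaluates α(τ, d) exactly and scans the region d/2 < τ ≤ d:

```python
from math import ceil, comb, sqrt

def alpha(tau, d):
    """Expected cut weight of A_tau (Lemma 3), for d/2 < tau <= d."""
    s = sum(comb(d - 1, i) for i in range(d - tau + 1, tau))
    return 0.5 + comb(d - 1, tau - 1) * s / 4 ** (d - 1)

def optimal_tau(d):
    """Scan all thresholds in the region d/2 < tau <= d."""
    return max(range(d // 2 + 1, d + 1), key=lambda t: alpha(t, d))
```

For example, for d = 10 the scan gives the optimal threshold 7, which coincides with ⌈(10 + √10)/2⌉.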

What remains is a proof of the performance guarantee (5). Figure 8 gives some intuition on how good the bounds are.

###### Theorem 4.

Let d ≥ 2 and

 τ = ⌈(d + √d)/2⌉.

Then

 α(τ, d) ≥ 1/2 + 9/(32√d).
###### Proof.

See Appendix A. ∎

## 3 Conclusions

In this work, we have presented a new randomised distributed algorithm for finding large cuts. The key observation was that the task of designing randomised distributed algorithms for finding large cuts can be reduced to the problem of finding a max-weight cut in a weighted neighbourhood graph. This way we were able to use computers to find optimal algorithms for small values of d. The general form of the optimal algorithms was apparent, and hence the results were easy to generalise.

Our algorithm was designed for d-regular triangle-free graphs. However, it can be easily applied in a much more general setting as well. To see this, recall that α(τ, d) is not only the expected weight of the cut, but also the probability that any individual edge {u, v} is a cut edge. The analysis only assumes that u and v are of degree d and that they do not have a common neighbour. Hence we have the following immediate generalisations.

1. Our algorithm can be applied in triangle-free graphs of maximum degree d as follows: a node of degree d′ < d simulates the behaviour of d − d′ missing neighbours by generating a random bit for each of them. We still have the same guarantee that each original edge is a cut edge with probability α(τ, d). The running time of the algorithm is still one communication round; however, some nodes need to produce more random bits.

2. Our algorithm can also be applied in any graph, even in those that contain triangles. Now our analysis shows that each edge that is not part of a triangle will be a cut edge with probability α(τ, d). This observation already gives a simple bound: if at most a fraction ε of all edges are part of a triangle, we will find a cut of expected weight at least (1 − ε) · α(τ, d).

## Acknowledgements

Computer resources were provided by the Aalto University School of Science “Science-IT” project (Triton cluster), and by the Department of Computer Science at the University of Helsinki (Ukko cluster).

## References

• Alon [1996] Noga Alon. Bipartite subgraphs. Combinatorica, 16(3):301–311, 1996.
• Erdős [1979] Paul Erdős. Problems and results in graph theory and combinatorial analysis. In John Adrian Bondy and U. S. R. Murty, editors, Proc. Graph Theory and Related Topics (University of Waterloo, July 1977), pages 153–163. Academic Press, 1979.
• Fraigniaud et al. [2007] Pierre Fraigniaud, Cyril Gavoille, David Ilcinkas, and Andrzej Pelc. Distributed computing with advice: information sensitivity of graph coloring. In Proc. 34th International Colloquium on Automata, Languages and Programming (ICALP 2007), volume 4596 of Lecture Notes in Computer Science, pages 231–242. Springer, 2007.
• Garey and Johnson [1979] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.
• Håstad [2001] Johan Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001.
• Kelsen [1996] Pierre Kelsen. Neighborhood graphs and distributed -coloring. In Proc. 5th Scandinavian Workshop on Algorithm Theory (SWAT 1996), volume 1097 of Lecture Notes in Computer Science, pages 223–233. Springer, 1996.
• Khot et al. [2007] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O’Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing, 37(1):319–357, 2007.
• Kügel [2012] Adrian Kügel. Improved exact solver for the weighted Max-SAT problem. In Daniel Le Berre, editor, Proc. Pragmatics of SAT Workshop (POS 2010), volume 8 of EasyChair Proceedings in Computing, pages 15–27, 2012.
• Kuhn and Wattenhofer [2006] Fabian Kuhn and Roger Wattenhofer. On the complexity of distributed graph coloring. In Proc. 25th Annual ACM Symposium on Principles of Distributed Computing (PODC 2006), pages 7–15. ACM Press, 2006.
• Linial [1992] Nathan Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992.
• Naor [1991] Moni Naor. A lower bound on probabilistic algorithms for distributive ring coloring. SIAM Journal on Discrete Mathematics, 4(3):409–412, 1991.
• Papadimitriou and Yannakakis [1991] Christos H. Papadimitriou and Mihalis Yannakakis. Optimization, approximation, and complexity classes. Journal of Computer and System Sciences, 43(3):425–440, 1991.
• Poljak and Tuza [1995] Svatopluk Poljak and Zsolt Tuza. Maximum cuts and largest bipartite subgraphs. In William Cook, László Lovász, and Paul Seymour, editors, Combinatorial Optimization, volume 20 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 181–244. AMS, 1995.
• Rybicki [2011] Joel Rybicki. Exact bounds for distributed graph colouring. Master’s thesis, Department of Computer Science, University of Helsinki, May 2011.
• Shearer [1992] James B. Shearer. A note on bipartite subgraphs of triangle-free graphs. Random Structures & Algorithms, 3(2):223–226, 1992.
• Trevisan et al. [2000] Luca Trevisan, Gregory B. Sorkin, Madhu Sudan, and David P. Williamson. Gadgets, approximation, and linear programming. SIAM Journal on Computing, 29(6):2074–2097, 2000.

## Appendix A Proof of Theorem 4

We need to prove a lower bound on

 α(τ, d) = 1/2 + (1/4^{d−1}) C(d−1, τ−1) Σ_{i=d−τ+1}^{τ−1} C(d−1, i)

for the threshold τ = ⌈(d + √d)/2⌉. Our general strategy is as follows:

1. Verify the cases of small d with a computer.

2. Prove a closed-form lower bound for all larger d.

The first part is easily solved with a simple Python script or with a short calculation in Mathematica (see Figure 8 for examples of the results). We will now focus on the second part; for that we will need various estimates of binomial coefficients.
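Such a script might look as follows; this is our own sketch, and the range of degrees below is illustrative rather than the paper's actual cutoff. Python's integer arithmetic keeps the binomials exact until the final division.

```python
from math import ceil, comb, sqrt

def alpha(tau, d):
    """Exact evaluation of the expression above; the arithmetic stays
    in integers until the final division."""
    s = sum(comb(d - 1, i) for i in range(d - tau + 1, tau))
    return 0.5 + comb(d - 1, tau - 1) * s / 4 ** (d - 1)

# check the bound of Theorem 4 for a range of small degrees;
# d = 4 attains the bound with equality, hence the tiny tolerance
for d in range(2, 300):
    tau = ceil((d + sqrt(d)) / 2)
    assert alpha(tau, d) >= 0.5 + 9 / (32 * sqrt(d)) - 1e-12, d
```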

The proof given here is certainly not the most elegant way to derive the bound, but it is self-contained and gets the job done. Proving the claim for a “sufficiently large” d would be straightforward. However, we need to show that already a concrete, relatively small d is enough.

We will first approximate binomial coefficients with the normal distribution. Let n be a positive integer, and define

 δ_j(n) = ⌊j·√(n/32)⌋,   g_j = e^{−j²/32}

for each non-negative integer j.

###### Fact 5.

For all sufficiently large n we have

 0.999/√(πn) < (1/4^n) · C(2n, n) < 1/√(πn).
###### Lemma 6.

For all sufficiently large n, j ∈ {1, 2, 3, 4}, and δ_{j−1}(n) < δ ≤ δ_j(n) we have

 C(2n, n+δ) > 0.995 · g_j · C(2n, n).
###### Proof.

We can estimate

 C(2n, n+δ) / C(2n, n) = (n! · n!) / ((n+δ)! · (n−δ)!)
                       = ((n−δ+1)/(n+1)) · ((n−δ+2)/(n+2)) ⋯ (n/(n+δ))
                       > (1 − δ/n)^δ ≥ h_j(δ),

where

 h_j(δ) = (1 − j²/(32δ))^δ.

Now h_j(δ) → g_j as δ → ∞. For each j we can verify that h_j(δ) > 0.995 · g_j for all relevant δ; the assumption δ > δ_{j−1}(n) guarantees this once n is sufficiently large. ∎

###### Lemma 7.

For all sufficiently large n and δ = δ₄(n) we have

 (1/4^n) Σ_{i=−δ+1}^{δ} C(2n, n+i) > 0.6088,
 (1/4^n) Σ_{i=−δ+1}^{δ−1} C(2n, n+i) > 0.5975.
###### Proof.

Here we could apply the Berry–Esseen theorem, but the following simple piecewise estimate is sufficient for our purposes. As

 δ_j(n) > j·√(n/32) − 1,

each of the four blocks of indices (δ_{j−1}(n), δ_j(n)] contains more than √(n/32) − 1 terms. Hence using Fact 5 and Lemma 6 we have

 (1/4^n) Σ_{i=1}^{δ} C(2n, n+i) > 0.995 · (0.999/√(πn)) · (√(n/32) − 1) · (g₁ + g₂ + g₃ + g₄) > 0.3044

for all sufficiently large n. The claim follows from the observations

 (1/4^n) Σ_{i=−δ+1}^{δ} C(2n, n+i) > 2 · (1/4^n) Σ_{i=1}^{δ} C(2n, n+i) > 2 · 0.3044 = 0.6088,
 (1/4^n) Σ_{i=−δ+1}^{δ−1} C(2n, n+i) > (2 − 1/δ) · (1/4^n) Σ_{i=1}^{δ} C(2n, n+i) > 1.9629 · 0.3044 > 0.5975. ∎

Now we have the estimates that we will use in the proof of Theorem 4. We will consider the odd and even values of d separately.

#### Odd d.

Assume that d = 2n + 1, where n is sufficiently large. Let

 δ = τ − n = ⌈√(n/2 + 1/4) + 1/2⌉,   δ′ = δ₄(n),

and observe that

 √(n/2) < √(n/2 + 1/4) + 1/2 < √(n/2) + 1.

It follows that

 δ′ + 1 ≤ δ ≤ δ′ + 2.

Therefore

 α(τ, d) = 1/2 + (1/4^{d−1}) C(d−1, τ−1) Σ_{i=d−τ+1}^{τ−1} C(d−1, i)
         = 1/2 + (1/4^{2n}) C(2n, n+δ−1) Σ_{i=−δ+2}^{δ−1} C(2n, n+i)
         ≥ 1/2 + (1/4^{2n}) C(2n, n+δ′+1) Σ_{i=−δ′+1}^{δ′} C(2n, n+i)
         = 1/2 + ((n−δ′)/(n+δ′+1)) · (1/4^n) C(2n, n+δ′) · (1/4^n) Σ_{i=−δ′+1}^{δ′} C(2n, n+i)
         > 1/2 + 0.964 · 0.995·g₄ · (0.999/√(πn)) · 0.6088
         > 1/2 + 0.2823/√(d−1)
         > 1/2 + 9/(32√d).

#### Even d.

Assume that d = 2n, where n is sufficiently large. Let

 δ = τ − n = ⌈√(n/2)⌉,   δ′ = δ₄(n).

Now we have

 δ′ ≤ δ ≤ δ′ + 1.

For any k ∈ {0, 1, …, n − 1} we have the identity

 Σ_{i=−k}^{k} C(2n, n+i) = Σ_{i=−k}^{k} (C(2n−1, n+i−1) + C(2n−1, n+i))
                         = Σ_{i=−k}^{k} (C(2n−1, n−i) + C(2n−1, n+i))
                         = 2 Σ_{i=−k}^{k} C(2n−1, n+i).

We can use it to derive

 α(τ, d) = 1/2 + (1/4^{d−1}) C(d−1, τ−1) Σ_{i=d−τ+1}^{τ−1} C(d−1, i)
         = 1/2 + (1/4^{2n−1}) C(2n−1, n+δ−1) Σ_{i=−δ+1}^{δ−1} C(2n−1, n+i)
         ≥ 1/2 + (1/4^{2n−1}) C(2n−1, n+δ′) Σ_{i=−δ′+1}^{δ′−1} C(2n−1, n+i)
         = 1/2 + (1/4^{2n−1}) · ((n−δ′)/(2n)) C(2n, n+δ′) · (1/2) Σ_{i=−δ′+1}^{δ′−1} C(2n, n+i)
         = 1/2 + ((n−δ′)/n) · (1/4^n) C(2n, n+δ′) · (1/4^n) Σ_{i=−δ′+1}^{δ′−1} C(2n, n+i)
         > 1/2 + 0.982 · 0.995·g₄ · (0.999/√(πn)) · 0.5975
         > 1/2 + 0.2822/√d
         > 1/2 + 9/(32√d).

This completes the proof of Theorem 4.
