Tight Hardness for Shortest Cycles and Paths in Sparse Graphs
Abstract
Fine-grained reductions have established equivalences between many core problems with $\tilde{O}(n^3)$-time algorithms on $n$-node weighted graphs, such as Shortest Cycle, All-Pairs Shortest Paths (APSP), Radius, Replacement Paths, Second Shortest Paths, and so on. These problems also have $\tilde{O}(mn)$-time algorithms on $n$-node, $m$-edge weighted graphs, and such algorithms have wider applicability. Are these $mn$ bounds optimal when $m = o(n^2)$?
Starting from the hypothesis that the minimum weight $k$-Clique problem in edge weighted graphs requires $n^{k-o(1)}$ time, we prove that for all sparsities of the form $m = \Theta(n^{1+1/\ell})$, there is no $O(mn^{1-\epsilon})$ time algorithm for any $\epsilon > 0$ for any of the below problems:

Minimum Weight $(2\ell+1)$-Cycle in a directed weighted graph,

Shortest Cycle in a directed weighted graph,

APSP in a directed or undirected weighted graph,

Radius (or Eccentricities) in a directed or undirected weighted graph,

Wiener index of a directed or undirected weighted graph,

Replacement Paths in a directed weighted graph,

Second Shortest Path in a directed weighted graph,

Betweenness Centrality of a given node in a directed weighted graph.
That is, we prove hardness for a variety of sparse graph problems from the hardness of a dense graph problem. Our results also lead to new conditional lower bounds from several related hypotheses for unweighted sparse graph problems, including $k$-cycle, Shortest Cycle, Radius, Wiener index and APSP.
Andrea Lincoln (andreali@mit.edu; supported by the EECS Merrill Lynch Fellowship), Virginia Vassilevska Williams (virgi@csail.mit.edu; supported by an NSF CAREER Award, NSF Grants CCF-1417238, CCF-1528078 and CCF-1514339, and BSF Grant BSF:2012338), and Ryan Williams (rrw@mit.edu; supported by an NSF CAREER Award).
1 Introduction
The All-Pairs Shortest Paths (APSP) problem is among the most basic computational problems. A powerful primitive, APSP can be used to solve many other problems on graphs (e.g. graph parameters such as the girth or the radius), but also many non-graph problems such as finding a subarray of maximum sum [TT98] or parsing stochastic context-free grammars (e.g. [Aku99]). Over the years, many APSP algorithms have been developed. For edge-weighted $n$-node, $m$-edge graphs, the fastest known algorithms run in $n^3/2^{\Theta(\sqrt{\log n})}$ time [Wil14] for dense graphs, and in $O(mn + n^2\log\log n)$ time [Pet02] for sparse graphs.
These running times are also essentially the best known for many of the problems that APSP can solve: Shortest Cycle, Radius, Median, Eccentricities, Second Shortest Paths, Replacement Paths, and so on. (For Shortest Cycle, an $O(mn)$ time algorithm was recently developed by Orlin and Sedeño-Noda [OS17]. For a full discussion of the best known running times of these problems see Appendix A.) For dense graphs, this was explained by Vassilevska Williams and Williams [VW10] and later Abboud et al. [AGV15], who showed that either all of APSP, Minimum Weight Triangle, Shortest Cycle, Radius, Median, Eccentricities, Second Shortest Paths, Replacement Paths, Betweenness Centrality have truly subcubic algorithms (with runtime $O(n^{3-\epsilon})$ for constant $\epsilon > 0$), or none of them do. Together with the popular hypothesis that APSP requires $n^{3-o(1)}$ time on a word-RAM (see e.g. [AV14, AGV15, Vas15, BGMW17]), these equivalences suggest that all these graph problems require $n^{3-o(1)}$ time to solve.
However, these equivalences no longer seem to hold for sparse graphs. The running times for these problems still match: $\tilde{O}(mn)$ is the best running time known for all of these problems. In recent work, Agarwal and Ramachandran [AR16] show that some reductions from prior work can be modified to preserve sparsity. Their main result is that if Shortest Cycle in directed weighted graphs requires $mn^{1-o(1)}$ time, then so do Radius, Eccentricities, Second Shortest Paths, Replacement Paths, Betweenness Centrality and APSP in directed weighted graphs. (This is analogous to the dense graph regime of [VW10], where the main reductions went from Minimum Weight 3-Cycle, i.e. triangle. The key point of [AR16] is that one can replace Minimum Weight 3-Cycle by Minimum Weight Cycle, and preserve the sparsity in the reduction.)
Unfortunately, there is no known reduction that preserves sparsity from APSP (or any of the other problems) back to Shortest Cycle, and there are no known reductions to Shortest Cycle from any other problems used as a basis for hardness within Fine-Grained Complexity, such as the Strong Exponential Time Hypothesis [IP01, IPZ01], 3SUM [GO95] or Orthogonal Vectors [Wil05, Vas15]. Without a convincing reduction, one might wonder:
Can Shortest Cycle in weighted directed graphs be solved in, say, $O(m^{1.5})$ time?
Can APSP be solved in $O(m^{1.5} + n^2)$ time?
Such runtimes are consistent with the dense regime bound of $mn$ (for $m = \Theta(n^2)$, $m^{1.5} = n^3 = mn$). Minimum Weight Triangle, which is the basis of many reductions in the dense case, can be solved in $O(m^{1.5})$ time (e.g. [IR78]). What prevents us from having such running times for all the problems that are equivalent in the dense regime to Minimum Weight Triangle? Why do our best algorithms for these other problems take $\tilde{O}(mn)$ time, and no faster? In fact, we know of no sparsity $m(n)$ for which problems like Shortest Cycle can be solved in $O((mn)^{1-\epsilon})$ time for some $\epsilon > 0$. Such a running time is $O(n^{2(1-\epsilon)})$ for $m = \Theta(n)$ and $O(n^{3(1-\epsilon)})$ for $m = \Theta(n^2)$. Notice that the dense case $m = \Theta(n^2)$ is the special case addressed by the APSP Hypothesis. Is there a good reason why no $O((mn)^{1-\epsilon})$ time algorithms have been found?
Our results.
We give compelling reasons for the difficulty of improving over $\tilde{O}(mn)$ for Shortest Cycle, APSP and other problems. We show that for an infinite number of sparsities, namely any sparsity of the form $m = \Theta(n^{1+1/\ell})$ for integer $\ell \geq 1$, obtaining an $O(mn^{1-\epsilon})$ time algorithm for Shortest Cycle (or any of the other fundamental problems) in weighted graphs for any constant $\epsilon > 0$ would refute a popular hypothesis about the complexity of weighted $k$-Clique.
Hypothesis 1.1 (Min Weight $k$-Clique)
There is a constant $c$ such that, on a Word-RAM with $O(\log n)$ bit words, finding a $k$-Clique of minimum total edge weight in an $n$-node graph with nonnegative integer edge weights bounded by $n^{ck}$ requires $n^{k-o(1)}$ time.
The Min Weight $k$-Clique Hypothesis has been considered for instance in [BT16] and [AVW14] to show hardness for improving upon the Viterbi algorithm, and for Local Sequence Alignment. The (unweighted) $k$-Clique problem is NP-Complete when $k$ is part of the input, but can be solved in $O(n^{\omega k/3})$ time when $k$ is fixed [NP85] (when $k$ is divisible by $3$; slightly slower otherwise), where $\omega < 2.373$ [Vas12, Gal14] is the matrix multiplication exponent. The problem is W[1]-complete and under the Exponential Time Hypothesis [IP01] it cannot be solved in $n^{o(k)}$ time. Finding a $k$-Clique of minimum total weight (a Min Weight $k$-Clique) in an edge-weighted graph can also be solved in $O(n^{\omega k/3})$ time if the edge weights are small enough. However, when the edge weights are integers larger than $n^{ck}$ for large enough constant $c$, the fastest known algorithm for Min Weight $k$-Clique runs in essentially $O(n^k)$ time (ignoring $n^{o(k)}$ improvements). The special case $k = 3$, Min Weight $3$-Clique, is the aforementioned Minimum Weight Triangle problem, which is equivalent to APSP under subcubic reductions and is believed to require $n^{3-o(1)}$ time.
Theorems 1.1 and F.4 of Vassilevska Williams and Williams [VW10], and Theorem 1.1 and Lemma 2.2 of Abboud et al. [AGV15], give subcubic dense reductions from APSP to many fundamental graph problems. Agarwal and Ramachandran [AR16] build on these reductions to show many sparsity-preserving reductions from Shortest Cycle to various fundamental graph problems.
They thus identify Shortest Cycle as a fundamental bottleneck to improving upon $\tilde{O}(mn)$ for many problems. However, so far there is no compelling reason why Shortest Cycle itself should need $mn^{1-o(1)}$ time.
Theorem 1.1 ([AR16])
Suppose that there is a constant $\epsilon > 0$ such that one of the following problems on $n$-node, $m$-edge weighted graphs can be solved in $O(mn^{1-\epsilon})$ time:

APSP in a directed weighted graph,

Radius (or Eccentricities) in a directed weighted graph,

Replacement Paths in a directed weighted graph,

Second Shortest Path in a directed weighted graph,

Betweenness Centrality of a given node in a directed weighted graph.
Then, the Minimum Weight Cycle problem is solvable in $O(mn^{1-\delta})$ time for some $\delta > 0$.
Our main technical contribution connects the complexity of small cliques in dense graphs to that of small cycles in sparse graphs:
Theorem 1.2
Suppose that there is an integer $\ell \geq 1$ and a constant $\epsilon > 0$ such that one of the following problems on $n$-node, $m = \Theta(n^{1+1/\ell})$-edge weighted graphs can be solved in $O(mn^{1-\epsilon})$ time:

Minimum Weight $(2\ell+1)$-Cycle in a directed weighted graph,

Shortest Cycle in a directed weighted graph,
Then, the Min Weight Clique Hypothesis is false.
Combining our main Theorem 1.2 with the results from previous work in Theorem 1.1 gives us new conditional lower bounds for fundamental graph problems. We also create novel reductions from the $k$-Cycle problem, in Appendix B, and these give us novel hardness results for many new problems. The main new contributions are reductions to Radius in undirected graphs (the result in [AR16] is only for directed graphs) and to the Wiener Index problem, which asks for the sum of all pairwise distances in the graph. Together all these pieces give us the following theorem.
Theorem 1.3
Suppose that there is an integer $\ell \geq 1$ and a constant $\epsilon > 0$ such that one of the following problems on $n$-node, $m = \Theta(n^{1+1/\ell})$-edge weighted graphs can be solved in $O(mn^{1-\epsilon})$ time:

Minimum Weight $(2\ell+1)$-Cycle in a directed weighted graph,

Shortest Cycle in a directed weighted graph,

APSP in a directed or undirected weighted graph,

Radius (or Eccentricities) in a directed or undirected weighted graph,

Wiener index of a directed or undirected weighted graph,

Replacement Paths in a directed weighted graph,

Second Shortest Path in a directed weighted graph,

Betweenness Centrality of a given node in a directed weighted graph.
Then, the Min Weight Clique Hypothesis is false.
So, either minimum weight cliques can be found faster, or $\tilde{O}(mn)$ is the optimal running time for these problems, up to $n^{o(1)}$ factors, for an infinite family of edge sparsities $m = \Theta(n^{1+1/\ell})$. See Figure 7 in the Appendix for a pictorial representation of our conditional lower bounds.
Another intriguing consequence of Theorem 1.3 is that, assuming Min Weight Clique is hard, running times of the form $O(m^{2-\epsilon})$ for $\epsilon > 0$ are impossible! If Shortest Cycle had such an algorithm for some $\epsilon > 0$, then for every integer $\ell > 1/\epsilon$ and $m = \Theta(n^{1+1/\ell})$ we have that $m^{2-\epsilon} = O(mn^{1-\epsilon'})$ for some $\epsilon' > 0$, and hence the Min Weight Clique Hypothesis is false.
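The exponent arithmetic behind this kind of consequence can be spelled out (our own worked calculation, assuming a hypothetical $O(m^{2-\epsilon})$ time algorithm at sparsity $m = \Theta(n^{1+1/\ell})$):

```latex
m^{2-\epsilon} \;=\; n^{(1+1/\ell)(2-\epsilon)} \;=\; n^{2 + 2/\ell - \epsilon(1+1/\ell)} .
% Comparing against the conditional lower bound mn^{1-o(1)} = n^{2 + 1/\ell - o(1)}:
2 + \tfrac{2}{\ell} - \epsilon\left(1 + \tfrac{1}{\ell}\right) \;<\; 2 + \tfrac{1}{\ell}
\quad\Longleftrightarrow\quad
\tfrac{1}{\ell} \;<\; \epsilon\left(1 + \tfrac{1}{\ell}\right),
% which holds for every integer \ell > 1/\epsilon.
```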
Our reduction from Minimum Weight $(2\ell+1)$-Clique to Minimum Weight $(2\ell+1)$-Cycle produces a directed graph on $O(n^\ell)$ nodes and $O(n^{\ell+1})$ edges, and hence if directed Minimum Weight $(2\ell+1)$-Cycle can be solved in $O(mn^{1-\epsilon})$ time for some $\epsilon > 0$, then the Min Weight Clique Hypothesis is false. We present an extension for weighted cycles of even length as well, obtaining:
Corollary 1.1
If Minimum Weight $k$-Cycle in directed $m$-edge graphs is solvable in $O(m^{2k/(k+1)-\epsilon})$ time for some $\epsilon > 0$ for $k$ odd, or in $O(m^{2(k-1)/k-\epsilon})$ time for $k$ even, then the Minimum Weight Clique Hypothesis is false (for cliques of size $k$ when $k$ is odd, and of size $k-1$ when $k$ is even).
Directed $k$-cycles in unweighted graphs were studied by Alon, Yuster and Zwick [AYZ97], who gave algorithms with a runtime of $O(m^{2-2/(k+1)})$ for odd $k$, and $O(m^{2-2/k})$ for even $k$. We show that their algorithm can be extended to find Minimum Weight $k$-Cycles with only a polylogarithmic overhead, proving that the above conditional lower bound is tight.
Theorem 1.4
The Minimum Weight $k$-Cycle in directed $m$-edge graphs can be solved in $\tilde{O}(m^{2k/(k+1)})$ time for odd $k$, and in $\tilde{O}(m^{2(k-1)/k})$ time for even $k$.
Sparse Unweighted Problems.
We have proven tight conditional lower bounds for weighted graphs. However, for sparse enough unweighted graphs (e.g. with $m = O(n^{\omega-1})$, where running BFS from every node is as fast as any known matrix-multiplication-based method), the best algorithms for APSP and its relatives also run in $\tilde{O}(mn)$ time (see Section A for the relevant prior work on APSP). We hence turn our attention to the unweighted versions of these problems.
Our reduction from Min Weight Clique to Min Weight Cycle still works for unweighted graphs just by disregarding the weights. We can get superlinear lower bounds for sparse unweighted problems from three different plausible assumptions.
As mentioned before, the fastest algorithm for $k$-Clique (for $k$ divisible by $3$) runs in $O(n^{\omega k/3})$ time [NP85, EG04]. This algorithm has remained unchallenged for many decades and led to the following hypothesis (see e.g. [ABV15]).
Hypothesis 1.2 (The $k$-Clique Hypothesis)
Detecting a $k$-Clique in a graph with $n$ nodes requires $n^{\omega k/3 - o(1)}$ time on a Word RAM.
From this we get a superlinear lower bound for the shortest cycle problem. We get an analogous result to the one we had before:
Theorem 1.5
If the $k$-Clique Hypothesis is true, Shortest Cycle in undirected or directed graphs with $m = \Theta(n^{1+1/\ell})$ edges requires $m^{\omega(2\ell+1)/(3(\ell+1)) - o(1)}$ time.
We get superlinear lower bounds for various graph problems as a corollary of Theorem 1.5.
Corollary 1.2
If the $k$-Clique Hypothesis is true, the following problems in unweighted graphs with $m = \Theta(n^{1+1/\ell})$ edges require $m^{\omega(2\ell+1)/(3(\ell+1)) - o(1)}$ time:

Betweenness Centrality in a directed graph,

APSP in an undirected or directed graph,

Radius in an undirected or directed graph,

Wiener Index in an undirected or directed graph.
The reader may notice that the matrix multiplication exponent $\omega$ shows up repeatedly in the unweighted cases of these problems. This is no coincidence. The best known combinatorial algorithms (informally, combinatorial algorithms are algorithms that do not use fast matrix multiplication) for the unweighted $k$-clique problem take essentially $n^k$ time, up to polylogarithmic improvements. This has led to a new hypothesis.
Hypothesis 1.3 (Combinatorial $k$-Clique)
Any combinatorial algorithm to detect a $k$-Clique in a graph with $n$ nodes requires $n^{k-o(1)}$ time on a Word RAM [ABV15].
Our reduction from clique to cycle is combinatorial. Thus, an $O(m^{2k/(k+1)-\epsilon})$ time (for $\epsilon > 0$) combinatorial algorithm for the directed unweighted $k$-cycle problem for odd $k$ would imply a combinatorial algorithm for the $k$-clique problem with running time $O(n^{k-\delta})$ for some $\delta > 0$. Any algorithm with a competitive running time must therefore use fast matrix multiplication, or give an exciting new combinatorial algorithm for $k$-clique.
Currently, the best bound on $\omega$ is $\omega < 2.373$ [Gal14, Vas12], and if $\omega = 2$, the lower bound for Shortest Cycle and related problems might conceivably be as high as $m^{4/3-o(1)}$, which is not far from the best known running time for $k$-cycle [YZ04]. Yuster and Zwick gave an algorithm based on matrix multiplication for directed $k$-Cycle; however, they were unable to analyze its running time for general $k$. They conjecture that if $\omega = 2$, their algorithm runs faster than the best combinatorial algorithms for every $k$; however, even the conjectured runtime grows as $k$ grows. In contrast, for large $\ell$, our lower bound based on the $k$-Clique Hypothesis is only around $m^{4/3}$ when $\omega = 2$. (Of course, the Yuster and Zwick running time for $k$-cycle might not be optimal, and a faster algorithm might be possible for $k$-cycle for all $k$.) We thus search for higher conditional lower bounds based on different and at least as believable hypotheses.
To this end, we formalize a working hypothesis about the complexity of finding a hyperclique in a hypergraph. A $k$-hyperclique in an $\ell$-uniform hypergraph $G$ is composed of a set of $k$ nodes of $G$ such that all $\ell$-tuples of them form a hyperedge in $G$.
Hypothesis 1.4 (Hyperclique Hypothesis)
Let $3 \leq \ell < k$ be integers. On a Word-RAM with $O(\log n)$ bit words, finding a $k$-hyperclique in an $\ell$-uniform hypergraph on $n$ nodes requires $n^{k-o(1)}$ time.
Why should one believe the hyperclique hypothesis? There are many reasons: (1) When $\ell \geq 3$, there is no known $n^{k-\epsilon}$ time algorithm for any $\epsilon > 0$ for $k$-hyperclique in $\ell$-uniform hypergraphs. (2) The natural extension of the matrix multiplication techniques used to solve $k$-clique in graphs will NOT solve $k$-hyperclique in $\ell$-uniform hypergraphs in $n^{k-\epsilon}$ time for any $\epsilon > 0$ when $\ell \geq 3$. We prove this in Section 8. (3) There are known reductions from notoriously difficult problems such as Exact Weight $k$-Clique, Max $\ell$-SAT and even harder Constraint Satisfaction Problems (CSPs) to $k$-Hyperclique, so that if the hypothesis is false, then all of these problems have exciting improved algorithms. For these and more, see the Discussion in Section 7.
Now, let us state our results for unweighted Shortest Cycle based on the Hyperclique Hypothesis. The same lower bounds apply to the other problems under consideration (APSP, Radius, etc.).
Theorem 1.6
Under the $(\ell,k)$-Hyperclique Hypothesis, the Shortest Cycle problem in directed unweighted $m$-edge graphs requires $m^{k/t - o(1)}$ time on a Word RAM with $O(\log m)$ bit words, where $t = k - \lceil (k-\ell)/\ell \rceil$.
The theorem implies in particular that Shortest Cycle in unweighted directed graphs requires (a) $m^{3/2-o(1)}$ time, unless Max $3$-SAT (and other CSPs) have faster than $2^{(1-\epsilon)n}$ time algorithms, and (b) $m^{4/3-o(1)}$ time, unless Exact Weight $k$-Clique has a significantly faster than $n^k$ time algorithm. The latter is the same lower bound as from $k$-Clique when $\omega = 2$, but it is from a different and potentially more believable hypothesis. Finally, Shortest Cycle and its relatives are not solvable in linear time, unless the $(\ell,k)$-Hyperclique Hypothesis is false for every constant $\ell \geq 3$ and $k > \ell$.
Overview
See Figure 1 for a depiction of our core reductions.
In Sections 3 to 6 we cover the core reductions and show they are tight against the best known algorithms. The reduction from hyperclique to hypercycle is covered in Section 3, and the reduction from hypercycle to directed cycle in Section 4. The algorithms for weighted minimum cycle which match the conditional lower bounds are discussed in Section 5. The reduction from minimum weight clique to shortest cycle is in Section 6.
In Sections 7 to 9 we give justification for the hardness of the unweighted versions of these problems. In Section 7 we discuss the hyperclique hypothesis and give justification for it. In Section 8 we show that the generalized matrix product related to finding hypercliques in $\ell$-uniform hypergraphs cannot be sped up with a Strassen-like technique. In Section 9 we reduce Max $\ell$-SAT to Tight Hypercycle.
In Appendix A we discuss the prior work on fast algorithms for the sparse graph problems we study. In Appendix B we present the reductions from minimum weight cycle and minimum (unweighted) cycle to Radius, Wiener Index and APSP. In Appendix C we reduce general CSPs to the Hyperclique problem. Finally, in Appendix D we extend our lower bounds to give improved but non-matching lower bounds for graph densities lying between the sparsities $\Theta(n^{1+1/\ell})$ covered by our tight results.
2 Preliminaries
In this section we define various notions that we will be using and prove some simple lemmas.
Definitions and notation.
Throughout this paper we will be discussing problems indexed by $k$ and $\ell$, for example $k$-cycle, $k$-clique, and $(\ell,k)$-Hyperclique. We will treat the $k$ and $\ell$ values as constants in these problems. A hypergraph $G = (V, E)$ is defined by its vertices $V$ and its hyperedges $E$, where each $e \in E$ is a subset of $V$. $G$ is an $\ell$-uniform hypergraph if all its hyperedges are of size $\ell$.
Graphs are just $2$-uniform hypergraphs. Unless otherwise stated, the variables $m$ and $n$ will refer to the number of hyperedges and vertices of the hypergraph in question. Unless otherwise stated, the graphs in this paper will be directed; hypergraphs will not be directed. We will use node and vertex interchangeably.
A $k$-hypercycle in an $\ell$-uniform hypergraph is an ordered tuple of $k$ vertices $(v_1, \dots, v_k)$ such that for every $i$, $\{v_i, v_{i+1}, \dots, v_{i+\ell-1}\}$ is a hyperedge (where the indices are mod $k$).
We will be dealing with simple hypercycles, in which all the $v_i$ are distinct. These types of hypercycles are known as tight hypercycles. We will omit the term tight for conciseness.
A $k$-hyperclique in an $\ell$-uniform hypergraph is a set of $k$ vertices such that every $\ell$ of them form a hyperedge.
A circle-layered graph is a $k$-partite directed graph where edges only exist between adjacent partitions. More formally, the vertices of $G$ can be partitioned into $k$ groups $P_1, \dots, P_k$ such that the only edges out of a partition $P_i$ go to the partition $P_{i+1 \bmod k}$.
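As a concrete rendering of the hyperclique and tight hypercycle definitions, the following brute-force checkers (our own illustration; the names and the set-based representation are not from the paper) may help:

```python
from itertools import combinations

def is_hyperclique(nodes, hyperedges, l):
    """k-hyperclique: every l-subset of `nodes` is a hyperedge."""
    return all(frozenset(s) in hyperedges for s in combinations(nodes, l))

def is_tight_hypercycle(order, hyperedges, l):
    """Tight k-hypercycle: the nodes are distinct and every window of
    l cyclically consecutive nodes (indices mod k) is a hyperedge."""
    k = len(order)
    if len(set(order)) != k:
        return False
    return all(
        frozenset(order[(i + j) % k] for j in range(l)) in hyperedges
        for i in range(k)
    )
```

Note that a $k$-hyperclique ordered arbitrarily is always a tight $k$-hypercycle, but not conversely.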
Hardness Hypotheses.
We will state several hardness hypotheses that we will be using.
The first concerns the Min Weight $k$-Clique problem. Min Weight $3$-Clique (Minimum Weight Triangle) is known to be equivalent to APSP and other problems [VW10], and no truly subcubic algorithms are known for the problem. This issue extends to larger cliques: if the edge weights are large enough, no algorithms significantly faster than the $O(n^k)$ brute-force algorithm are known. This motivates the following hypothesis used as the basis of hardness in prior work (see e.g. [BT16, AVW14]).
Reminder of Hypothesis 1.1 (Min Weight Clique Hypothesis). There is a constant $c$ such that, on a Word-RAM with $O(\log n)$ bit words, finding a $k$-Clique of minimum total edge weight in an $n$-node graph with nonnegative integer edge weights bounded by $n^{ck}$ requires $n^{k-o(1)}$ time.
The exact weight version of the clique problem is at least as hard as Min Weight Clique [VW13], so that if the previous hypothesis is true, then so is the following one. For $k = 3$, the Exact Weight Clique problem is known to be at least as hard as both APSP and 3SUM, making the following hypothesis even more believable.
Hypothesis 2.1 (Exact Weight $k$-Clique)
There is a constant $c$ such that, on a Word-RAM with $O(\log n)$ bit words, finding a $k$-Clique of total edge weight exactly $0$, in an $n$-node graph with integer edge weights in $[-n^{ck}, n^{ck}]$, requires $n^{k-o(1)}$ time.
Let $\ell \geq 3$ be an integer. The following hypothesis concerns the Max $\ell$-SAT problem. The brute-force algorithm for Max $\ell$-SAT on $n$ variables and $m$ clauses runs in $O(2^n m)$ time. There have been algorithmic improvements for the approximation of Max $\ell$-SAT [AW02, ABZ05, FG95] and Max $2$-SAT [Wil07]. No $O(2^{(1-\epsilon)n})$ time algorithms are known for any $\epsilon > 0$ for $\ell \geq 3$. Williams [Wil05, Wil07] showed that Max $2$-SAT does have a faster algorithm running in $O(2^{\omega n/3})$ time; however, the algorithm used cannot extend to Max $\ell$-SAT for $\ell \geq 3$ (see the discussion in Section 8).
Hypothesis 2.2 (Max $\ell$-SAT Hypothesis)
On a Word-RAM with $O(\log n)$ bit words, given an $\ell$-CNF formula on $n$ variables, finding a Boolean assignment to the variables that satisfies a maximum number of clauses requires $2^{n-o(n)}$ time.
The Max $\ell$-SAT hypothesis implies the following hypothesis about hyperclique detection, as shown by Williams [Wil07] for $\ell = 2$ (see the Appendix for the generalization to $\ell \geq 3$). Williams [Wil07] in fact showed that hyperclique detection solves even more difficult problems, such as Satisfiability of Constraint Satisfaction Problems whose constraints are given by degree-$\ell$ polynomials defining Boolean functions on the variables. Thus if the following hypothesis is false, then more complex MAX-CSP problems than MAX-$\ell$-SAT can be solved in $O(2^{(1-\epsilon)n})$ time for some $\epsilon > 0$.
Reminder of Hypothesis 1.4 (Hyperclique Hypothesis). Let $3 \leq \ell < k$ be integers. On a Word-RAM with $O(\log n)$ bit words, finding a $k$-hyperclique in an $\ell$-uniform hypergraph on $n$ nodes requires $n^{k-o(1)}$ time.
Abboud et al. [ABDN17] have shown (using techniques from [ALW14]) that if the $(\ell,k)$-Hyperclique Hypothesis is false for some $\ell \geq 3$, then the Exact Weight Clique Hypothesis is also false. Thus, the Hyperclique Hypothesis should be very believable even for $\ell = 3$. The hypergraphs we are considering are dense, with $m = \Theta(n^\ell)$ hyperedges. Hyperclique can be solved faster in sparser hypergraphs [GIKW17].
Simple Cycle Reductions.
Note that throughout this paper we will use the fact that the cycle and clique problems we consider are as hard in $k$-partite graphs as they are in general graphs. Furthermore, the cycle problems we consider are as hard in circle-layered graphs as they are in general graphs. Using the $k$-partite or circle-layered versions often makes reductions more legible.
$k$-cycle has different behavior when $k$ is even and when $k$ is odd. To get some results we will use a simple reduction from $k$-cycle to $(k+1)$-cycle.
Lemma 2.1
Let $G$ be an $n$-node, $m$-edge circle-layered graph with $k$ layers. Suppose further that the edges have integer weights in $[-W, W]$. Then in $O(m+n)$ time one can construct a $(k+1)$-partite directed graph $G'$ on at most $2n$ nodes and $m+n$ edges with weights in $[-W, W]$, so that $G'$ contains a directed $(k+1)$-cycle of weight $w$ if and only if $G$ contains a directed $k$-cycle of weight $w$.

Proof. Take $P_1$, say, and split every node $v \in P_1$ into $v_{in}$ and $v_{out}$, placing a directed edge $(v_{in}, v_{out})$ of weight $0$ and splitting the edges incident to $v$ among $v_{in}$ and $v_{out}$, so that $v_{in}$ gets all edges incoming from $P_k$ and $v_{out}$ gets all edges outgoing to $P_2$.
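The splitting step can be sketched as follows (a minimal illustration with our own graph representation; not code from the paper):

```python
def split_first_layer(layers, edges):
    """
    k-cycle -> (k+1)-cycle reduction on a circle-layered graph: split each
    node v of layer 0 into (v,'in') and (v,'out'), joined by a zero-weight
    edge, so every cycle through layer 0 gains exactly one edge and keeps
    its weight.  `edges` maps (u, v) -> weight.
    """
    first = set(layers[0])
    new_edges = {}
    for (u, v), w in edges.items():
        u2 = (u, 'out') if u in first else u   # edges leaving layer 0
        v2 = (v, 'in') if v in first else v    # edges entering layer 0
        new_edges[(u2, v2)] = w
    for v in first:                            # zero-weight splitting edge
        new_edges[((v, 'in'), (v, 'out'))] = 0
    return new_edges
```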
An immediate corollary is:
Corollary 2.1
Suppose that there is a $T(m, n)$ time algorithm that can detect a (min-weight / exact-weight / unweighted) $(k+1)$-cycle in a circle-layered directed $n$-node, $m$-edge graph. Then there is a $T(O(m+n), O(n)) + O(m+n)$ time algorithm that can detect a (min-weight / exact-weight / unweighted) $k$-cycle in a circle-layered $n$-node, $m$-edge directed graph.
The following Lemma allows us to assume that all graphs that we are dealing with are circlelayered.
Lemma 2.2
Suppose that a (min-weight / exact-weight / unweighted) $k$-cycle can be detected in $T(m, n)$ time in a circle-layered directed graph where the edges have integer weights in $[-W, W]$. Then in $O(k^{k-1} \log n \cdot T(m, n))$ time one can detect a (min-weight / exact-weight / unweighted) $k$-cycle in a directed graph (not necessarily circle-layered) on $n$ nodes and $m$ edges with weights in $[-W, W]$.

Proof. We use the method of color-coding [AYZ16]. We present the randomized version, but this can all be derandomized using perfect families of hash functions, resulting in roughly the same runtime. Every node in the graph selects a color $c(\cdot)$ from $\{0, \dots, k-1\}$ independently and uniformly at random. We take the original graph $G$ and we only keep an edge $(u, v)$ if $c(v) \equiv c(u) + 1 \pmod{k}$; we remove edges that do not satisfy this condition. The created subgraph $G'$ is $k$-partite: there is a partition for each color and, by construction, the edges only go between adjacent colors, so that the graph is circle-layered.
Since $G'$ is a subgraph of $G$, if $G'$ has a $k$-cycle $C$, then $C$ is also a $k$-cycle in $G$. Suppose now that $G$ has a $k$-cycle $C = v_1, \dots, v_k$. If for each $i$, $c(v_{i+1}) \equiv c(v_i) + 1 \pmod{k}$, then $C$ is preserved in $G'$. Thus, $C$ is preserved with probability at least $k^{1-k}$, and repeating $O(k^{k-1} \log n)$ times, we will find $C$ whp.
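One round of the randomized color-coding step can be sketched as follows (our own illustration; the derandomized variant is not shown):

```python
import random

def colour_code_once(n, edges, k, rng):
    """
    One round of colour-coding: colour each node uniformly from {0..k-1}
    and keep only edges (u, v) with colour(v) = colour(u)+1 (mod k).
    The surviving subgraph is circle-layered; any fixed k-cycle survives
    a round with probability k / k^k = k^(1-k).
    """
    colour = {v: rng.randrange(k) for v in range(n)}
    kept = [(u, v) for (u, v) in edges
            if colour[v] == (colour[u] + 1) % k]
    return colour, kept
```

Repeating $O(k^{k-1} \log n)$ independent rounds finds a fixed $k$-cycle with high probability.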
3 Reduction from Hyperclique to Hypercycle
In this section we will reduce the problem of finding a $k$-hyperclique in an $\ell$-uniform hypergraph to finding a $k$-hypercycle in a $t$-uniform hypergraph, for a value $t$ which is roughly $k(\ell-1)/\ell$.
By a color-coding argument we can assume that the hypergraph is $k$-partite: the vertex set is partitioned into $k$ parts $P_1, \dots, P_k$ so that no hyperedge contains two nodes in the same $P_i$. The color-coding approach reduces the $k$-hyperclique problem to instances of the $k$-partite $k$-hyperclique problem. A simple randomized approach assigns each vertex a random color from $\{1, \dots, k\}$, and then part $P_i$ includes the vertices colored $i$. One removes all hyperedges containing two vertices colored the same and argues that any particular hyperclique has all its vertices colored differently with probability $k!/k^k \geq e^{-k}$. Thus $O(e^k \log n)$ instances of the $k$-partite $k$-hyperclique problem suffice with high probability. The approach can be derandomized with standard techniques.
In the following theorem, an arc will refer to a valid partial list of nodes from a hyperclique or hypercycle. This usage is attempting to get across the intuition that a set of nodes in a hyperclique can be covered by a small number of overlapping sets if those sets are large enough. See Figure 2 for a depiction.
We will hence prove the following theorem:
Theorem 3.1
Let $G$ be an $\ell$-uniform hypergraph on $n$ vertices, partitioned into $k$ parts $P_1, \dots, P_k$. Let $t = k - \lceil (k-\ell)/\ell \rceil$. In $O(k n^t \binom{t}{\ell})$ time we can create a $t$-uniform hypergraph $G'$ on the same node set as $G$, so that $G'$ contains a $k$-hypercycle if and only if $G$ contains a $k$-hyperclique with one node from each $P_i$.
If $G$ has weights on its hyperedges in the range $[-W, W]$, then one can also assign weights to the hyperedges of $G'$ so that a minimum weight $k$-hypercycle in $G'$ corresponds to a minimum weight $k$-hyperclique in $G$, and every hyperedge of $G'$ has weight in $[-W\binom{t}{\ell}, W\binom{t}{\ell}]$. Notably, $\binom{t}{\ell}$ is a constant depending only on $k$ and $\ell$.

Proof. Consider the numbers $1, \dots, k$ written in order around a circle and let $a_1 < a_2 < \dots < a_\ell$ be any $\ell$ of them. We are interested in covering all these numbers by an arc of the circle. What is the least number of numbers from $1$ to $k$ an arc covers if it covers all the $a_i$?
It is not hard to see that the best arc starts at one of the $a_i$, goes clockwise and ends at $a_{i-1}$ (indices mod $\ell$). Let $g_i$ be the number of numbers strictly between $a_i$ and $a_{i+1}$ on the circle. The number of numbers that the arc starting at $a_i$ contains is thus $k - g_{i-1}$, and the best arc picks the $i$ that maximizes $g_{i-1}$.
The sum $\sum_{i=1}^{\ell} g_i$ equals $k - \ell$, and hence the maximum $g_i$ is at least the average and is thus at least $\lceil (k-\ell)/\ell \rceil$. Hence the best arc has at most $k - \lceil (k-\ell)/\ell \rceil = t$ numbers. See Figure 2.
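The arc bound can be sanity-checked by brute force for small parameters (our own illustration; `smallest_covering_arc` and `t_bound` are hypothetical helper names):

```python
from itertools import combinations
from math import ceil

def smallest_covering_arc(k, marked):
    """Fewest numbers an arc of the circle 1..k must contain to cover all
    of `marked`: drop the largest gap between cyclically consecutive
    marked numbers."""
    a = sorted(marked)
    l = len(a)
    gaps = [(a[(i + 1) % l] - a[i] - 1) % k for i in range(l)]
    return k - max(gaps)

def t_bound(k, l):
    """The uniformity t = k - ceil((k - l)/l) from the theorem."""
    return k - ceil((k - l) / l)
```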
Now, let $G$ be the given $k$-partite $\ell$-uniform hypergraph in which we want to find a $k$-hyperclique. Let $P_1, \dots, P_k$ be the vertex parts and let $E$ be the set of hyperedges. We will build a new hypergraph $G'$ on the same set of nodes but with hyperedges of size $t$, as follows.
Consider each and every choice of $t$ nodes drawn one per part from $t$ consecutive parts $P_i, P_{i+1}, \dots, P_{i+t-1}$ (mod $k$); call the set of chosen nodes $S$. We need only consider sets drawn from consecutive parts because every subset of $\ell$ parts will be contained in one of these windows, by our choice of $t$. We add a hyperedge between the nodes in $S$ if every size-$\ell$ subset of $S$ forms a hyperedge in $G$. That is, we create a big hyperedge in $G'$ if all the $\ell$-tuples contained in it form a hyperedge in $G$. The runtime to create $G'$ is $O(k n^t \binom{t}{\ell})$, as $O(k n^t)$ is the number of hyperedges created. Clearly $G'$ is $t$-uniform.
Now suppose that $v_1 \in P_1, \dots, v_k \in P_k$ is a $k$-hyperclique in $G$. All the hyperedges $\{v_i, \dots, v_{i+t-1}\}$ (indices mod $k$) are present in $G'$, so $(v_1, \dots, v_k)$ forms a $k$-hypercycle in $G'$.
Now suppose that $(v_1, \dots, v_k)$ with $v_i \in P_i$ is a $k$-hypercycle in $G'$. We will show that it is a $k$-hyperclique in $G$. Let $v_{a_1}, \dots, v_{a_\ell}$ for $a_1 < \dots < a_\ell$ be any $\ell$ nodes of the hypercycle.
Let $i$ be the index that maximizes $g_{i-1}$ as in the beginning of the proof. Then the arc starting at $a_i$ (which contains all the $a_j$) contains at most $t$ numbers and is thus contained in a window of $t$ consecutive indices, whose nodes form a hyperedge in $G'$ since $(v_1, \dots, v_k)$ is a $k$-hypercycle in $G'$. However, by the way we constructed the hyperedges, it must then be that $\{v_{a_1}, \dots, v_{a_\ell}\}$ is a hyperedge of $G$. Thus all $\ell$-tuples are hyperedges in $G$ and $\{v_1, \dots, v_k\}$ is a $k$-hyperclique in $G$.
So far we have shown that we can construct a hypergraph $G'$ so that the $k$-hypercliques in $G$ correspond to the $k$-hypercycles in $G'$. Suppose now that $G$ is a hypergraph with weights on its hyperedges. We will define weights for the hyperedges of $G'$ so that the weight of any $k$-hypercycle in $G'$ equals the weight of the $k$-hyperclique in $G$ that it corresponds to. To achieve this, we will assign each hyperedge of $G$ to some hyperedges of $G'$, and we will say that these hyperedges are responsible for it. Then we will set the weight of a hyperedge of $G'$ to be the sum of the weights of the hyperedges of $G$ that it is responsible for. We will guarantee that for any hypercycle of $G'$, no two hyperedges in it are responsible for the same hyperedge of $G$, and that every hyperedge of the hyperclique that the hypercycle represents is assigned to some of the hypercycle's hyperedges.
Consider any hyperedge $e = \{v_{a_1}, \dots, v_{a_\ell}\}$ of $G$, with $a_1 < \dots < a_\ell$. Let $i$ be the smallest index that maximizes $g_{i-1}$, as before. We assign $e$ to every hyperedge of $G'$ whose window of parts starts at $P_{a_i}$ and that intersects $P_{a_1} \cup \dots \cup P_{a_\ell}$ exactly at $\{v_{a_1}, \dots, v_{a_\ell}\}$. Then notice that any hypercycle containing the nodes of $e$ contains exactly one of these hyperedges, so that the weight of the hypercycle is exactly the weight of the hyperclique that it corresponds to. Since every hyperedge of $G'$ is responsible for at most $\binom{t}{\ell}$ hyperedges of $G$, the weights of the hyperedges of $G'$ lie in $[-W\binom{t}{\ell}, W\binom{t}{\ell}]$.
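The unweighted construction of $G'$ can be rendered by brute force as follows (our own sketch with hypothetical names, exponential in the constants $k$ and $t$, with no attempt at optimizing constants):

```python
from itertools import combinations, product

def build_hypercycle_instance(parts, hyperedges, l, t):
    """
    Build G': for each window of t consecutive parts (mod k), add a
    t-hyperedge on a choice of one node per part iff every l-subset of
    the choice is a hyperedge of the k-partite l-uniform input G.
    """
    k = len(parts)
    new_edges = set()
    for i in range(k):
        window = [parts[(i + j) % k] for j in range(t)]
        for choice in product(*window):
            if all(frozenset(s) in hyperedges for s in combinations(choice, l)):
                new_edges.add(frozenset(choice))
    return new_edges
```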
4 Reduction from Hypercycle to Cycle in Directed Graphs
We have shown hardness for hypercycle from hyperclique. However, in order to get results on cycles in normal graphs, we have to show that hypercycle can be solved efficiently using cycle detection in graphs. We do so below.
Lemma 4.1
Given an $n$-node $t$-uniform hypergraph $G$ with nodes partitioned into $P_1, \dots, P_k$, in which one wants to find a $k$-hypercycle with $v_i \in P_i$ for each $i$, one can in $O(k n^t)$ time create a circle-layered directed graph $G'$ on $O(k n^{t-1})$ nodes and $O(k n^t)$ edges, so that $G$ contains a $k$-hypercycle with one node in each partition if and only if $G'$ contains a directed $k$-cycle. Moreover, if $G$ has integer weights on its hyperedges bounded by $W$ in absolute value, then one can add integer edge weights to the edges of the graph $G'$, bounded by $W$ in absolute value, so that the minimum weight $k$-cycle in $G'$ has the same weight as the minimum weight $k$-hypercycle in $G$.
If $k$ is odd, the graph $G'$ can be made undirected.

Proof. Recall that a $k$-hypercycle in a $t$-uniform hypergraph is formed by having a list of $k$ nodes $v_1, \dots, v_k$ and having a hyperedge for each set $\{v_i, v_{i+1}, \dots, v_{i+t-1}\}$, where we consider indices mod $k$.
We describe the construction of the directed graph $G'$. It will be circle-layered with node parts $Q_1, \dots, Q_k$. For each $i$, we will add a node in part $Q_i$ of $G'$ for every choice of $t-1$ nodes $x_i \in P_i, x_{i+1} \in P_{i+1}, \dots, x_{i+t-2} \in P_{i+t-2}$ (indices mod $k$). This totals $O(k n^{t-1})$ nodes. Call this node $(x_i, \dots, x_{i+t-2})$.
We will add a directed edge in $G'$ between nodes $(x_i, \dots, x_{i+t-2}) \in Q_i$ and $(y_{i+1}, \dots, y_{i+t-1}) \in Q_{i+1}$ if $x_j = y_j$ for all $j \in \{i+1, \dots, i+t-2\}$ and $\{x_i, \dots, x_{i+t-2}, y_{i+t-1}\}$ is a hyperedge in $G$. Assign this edge the weight of the hyperedge in $G$. Every node in $G'$ can connect to a maximum of $n$ other nodes, giving us $O(k n^t)$ edges.
Now note that $G'$ is a circle-layered graph. Further note that if a $k$-cycle exists in $G'$, then each of its edges corresponds to a hyperedge in $G$, and the set of vertices represented in the cycle corresponds to a choice of nodes $v_1 \in P_1, \dots, v_k \in P_k$. Further, every edge of the cycle covers $t$ adjacent vertices from $v_1, \dots, v_k$, so the $k$-cycle exists if and only if the corresponding $k$-hypercycle exists in $G$, and the cycle's weight counts each hyperedge weight exactly once.
We also note that if $k$ is odd, then the edges of $G'$ can be made undirected: any $k$-cycle in $G'$ must use a node from each $Q_i$, as removing any $Q_i$ from $G'$ makes it bipartite, and no odd cycles can exist in a bipartite graph.
We immediately obtain the following corollaries:
Corollary 4.1
Let $3 \leq \ell < k$ and let $t = k - \lceil (k-\ell)/\ell \rceil$. Under the $(\ell,k)$-Hyperclique Hypothesis, min weight $k$-cycle in directed graphs (or in undirected graphs for odd $k$) cannot be solved in $O(m^{k/t - \epsilon})$ time for any $\epsilon > 0$ in $m$-edge graphs.

Proof. We start with an $\ell$-uniform hypergraph with $n$ nodes. The number of edges in the graph produced by Lemma 4.1, applied to the $t$-uniform hypergraph from Theorem 3.1, is $m = O(n^t)$. By the $(\ell,k)$-Hyperclique Hypothesis, any algorithm to find a $k$-Hyperclique should take $n^{k-o(1)}$ time. Combining these facts we get a bound of $m^{k/t - o(1)}$.
The number of nodes produced by Lemma 4.1 is $N = O(n^{t-1})$ and the number of edges is $m = O(n^t)$; thus $m = \Theta(N^{1+1/(t-1)})$.
Corollary 4.2
Let $k \geq 3$ and let $t = \lfloor k/2 \rfloor + 1$. Under the Min Weight $k$-Clique Hypothesis, min weight $k$-cycle in directed graphs (or in undirected graphs for odd $k$) cannot be solved in $O(m^{k/t - \epsilon})$ time for any $\epsilon > 0$ in $m$-edge graphs.

The Min Weight Clique Hypothesis is equivalent to the Min Weight Hyperclique Hypothesis. We can plug in these numbers to get the result above.
When considering odd sizes of cliques and cycles, these results show hardness for the undirected cycle problems at certain densities.
Corollary 4.3
Under the Min Weight Clique Hypothesis, min weight $(2k-1)$-cycle in directed or undirected graphs cannot be solved in $O(m^{2 - 1/k - \varepsilon})$ time for any $\varepsilon > 0$ for $m$-edge, $O(m^{(k-1)/k})$-node graphs.

The Min Weight Clique Hypothesis is equivalent to the Min Weight Hyperclique Hypothesis. We can plug in $\ell = 2k-1$ to get the result above. We then note that directed $(2k-1)$-cycle is solved by undirected $(2k-1)$-cycle because $2k-1$ is odd.
Corollary 4.4
Under the Exact Weight Clique Hypothesis, exact weight $(2k-1)$-cycle in directed and undirected graphs cannot be solved in $O(m^{2 - 1/k - \varepsilon})$ time for any $\varepsilon > 0$ for $m$-edge, $O(m^{(k-1)/k})$-node graphs.

The Exact Weight Clique Hypothesis is equivalent to the Exact Weight Hyperclique Hypothesis. We can plug in these numbers to get the directed version of the above corollary. We then note that directed $(2k-1)$-cycle is solved by undirected $(2k-1)$-cycle because $2k-1$ is odd.
5 Probably Optimal Weighted $k$-Cycle Algorithms
The reductions from hyperclique in $k$-uniform hypergraphs (through hypercycle) to directed $\ell$-cycle produce graphs on $N = O(\ell n^{k-1})$ nodes and $m = O(\ell n^k)$ edges, where $m = \Theta(N^{k/(k-1)})$.
For the special case of the reduction from Min Weight $k$-Clique (through a hypercycle in an $\lceil (k+1)/2 \rceil$-uniform hypergraph), one obtains a graph on $O(n^{\lceil (k+1)/2 \rceil})$ edges. Suppose that $k$ is odd. The number of edges in the graph is $m = O(n^{(k+1)/2})$, and solving the Shortest Cycle problem in this graph in $O(m^{2 - 2/(k+1) - \varepsilon})$ time for any $\varepsilon > 0$ would refute the Min Weight Clique Hypothesis. We immediately obtain that Min Weight $k$-Cycle on $m$-edge graphs requires $m^{2 - 2/(k+1) - o(1)}$ time.
Using Lemma 2.1, we can also conclude that if $k$ is even, then solving Min Weight $k$-Cycle on $m$-edge graphs requires $m^{2 - 2/k - o(1)}$ time.
Theorem 5.1
Assuming the Min Weight Clique Hypothesis, on a Word RAM with $O(\log n)$-bit words, Min Weight $k$-Cycle on $m$-edge graphs requires $m^{2-2/k - o(1)}$ time if $k$ is even and $m^{2-2/(k+1) - o(1)}$ time if $k$ is odd.
The rest of this section will show that the above runtime can be achieved:
Theorem 5.2
Min Weight $k$-Cycle on $m$-edge graphs can be solved in $\tilde{O}(m^{2-2/k})$ time if $k$ is even and $\tilde{O}(m^{2-2/(k+1)})$ time if $k$ is odd.
The proof proceeds analogously to Alon, Yuster and Zwick’s algorithm [AYZ97] for $k$-Cycle in unweighted directed graphs. Let us review how their algorithm works and see how to modify it to handle weighted graphs. First, pick a parameter $\Delta$ and take all nodes of degree at least $\Delta$. Call the set of these nodes $S$; note that $|S| \le 2m/\Delta$. For every $v \in S$, Alon, Yuster and Zwick use an $O(m)$ time algorithm by Monien [Mon85] to check whether there is a $k$-cycle going through $v$. If no cycle is found, they consider the subgraph $G'$ with all nodes of $S$ removed and enumerate all $\lceil k/2 \rceil$-edge paths $P_1$ and all $\lfloor k/2 \rfloor$-edge paths $P_2$ in it. The number of $j$-edge paths in $G'$ is $O(m\Delta^{j-1})$. Then one sorts $P_1$ and $P_2$ in lexicographic order of the path endpoints and searches in linear time in $|P_1| + |P_2|$ for a path in $P_1$ from $x$ to $y$ and a path in $P_2$ from $y$ to $x$. To make sure that the cycle closed by these paths is simple, one can first start by color coding in two colors red and blue and let $P_1$ contain only paths with red internal nodes and $P_2$ only paths with blue internal nodes, or one can just go through all paths that share the same end points. Either way, the total runtime is asymptotically $\tilde{O}(m^2/\Delta + m\Delta^{\lceil k/2 \rceil - 1})$, and setting $\Delta = m^{1/\lceil k/2 \rceil}$ gives a runtime of $\tilde{O}(m^{2 - 1/\lceil k/2 \rceil})$.
One can modify the algorithm to give a Shortest $k$-cycle in an edge-weighted graph, as follows. First, we replace Monien’s algorithm with an algorithm that, given a weighted graph and a source $v$, can in $\tilde{O}(m)$ time determine a shortest $k$-cycle containing $v$. To this end, we use color-coding: we give every node of $G$ a random color from $1$ to $k$ and note that with probability at least $1/k^k$, the $i$th node of the sought cycle $C$ is colored $i$, for all $i$; as $v$ is the first node of $C$, we can assume that $v$ is colored $1$. As usual, this can be derandomized using perfect hash families. Now, in $G$, only keep the edges $(u,w)$ such that $c(w) = c(u) + 1$ (not mod $k$, so there are no edges between nodes colored $k$ and nodes colored $1$). This makes the obtained subgraph $k$-partite and acyclic. Now, run Dijkstra’s algorithm from $v$, computing the distances $d(v,x)$ for each $x$. Then for every in-neighbor $y$ of $v$ colored $k$, compute $d(v,y) + w(y,v)$ and take the minimum of these, $D$. If the nodes of $C$ are colored properly (the $i$th node is colored $i$), then $D$ is the weight of the shortest $k$-cycle through $v$, since the shortest path from $v$ to any $y$ colored $k$, if the distance is finite, must have nodes colored consecutively from $1$ to $k$. Dijkstra’s algorithm runs in $O(m + n \log n)$ time, and one would want to repeat $O(k^k \log n)$ times to get the correct answer with high probability (the same cost is obtained in the derandomization).
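As an illustration, here is a Monte Carlo Python sketch of this subroutine (the function name and data layout are ours; nonnegative weights are assumed so that Dijkstra applies, and the perfect-hash derandomization is omitted):

```python
import heapq
import random
from math import inf

def shortest_k_cycle_through(adj, v, k, trials=500):
    """Min weight simple k-cycle through v via color coding (Monte Carlo).
    adj: {u: {w: weight}} directed, nonnegative weights.
    Never underestimates; finds the optimum with high probability."""
    nodes = list(adj)
    best = inf
    for _ in range(trials):
        color = {u: random.randint(1, k) for u in nodes}
        color[v] = 1  # v plays the role of the first cycle node
        # keep only edges from color c to color c+1 (no wraparound):
        # the surviving subgraph is k-partite and acyclic
        dist = {u: inf for u in nodes}
        dist[v] = 0
        pq = [(0, v)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for w, wt in adj[u].items():
                if color[w] == color[u] + 1 and d + wt < dist[w]:
                    dist[w] = d + wt
                    heapq.heappush(pq, (d + wt, w))
        # close the cycle with an edge from a color-k in-neighbor of v
        for y in nodes:
            if color[y] == k and v in adj[y]:
                best = min(best, dist[y] + adj[y][v])
    return best
```

Since colors strictly increase along any surviving path, every value considered is the weight of a genuine simple $k$-cycle through $v$, so the only possible error is missing the optimum (probability at most $(1 - k^{-(k-1)})^{\text{trials}}$).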
Now that we have a counterpart of Monien’s algorithm, let’s see how to handle the case when the shortest cycle in the graph only contains nodes of low degree. Similar to the original algorithm, we again compute the sets of paths $P_1$ and $P_2$, but we only consider shortest paths together with their weights: for each pair of endpoints we keep a minimum weight path. Then one is looking for two paths (one between $x$ and $y$ and the other between $y$ and $x$) so that their sum of weights is minimized. This can also be found in linear time in $|P_1|$ and $|P_2|$ when they are sorted by end points and by weight. The total runtime is again $\tilde{O}(m^{2 - 1/\lceil k/2 \rceil})$, which is $\tilde{O}(m^{2-2/k})$ for even $k$ and $\tilde{O}(m^{2-2/(k+1)})$ for odd $k$.
6 Hardness Results for Shortest Cycle
Theorem 6.1
If Shortest Cycle in an $n$-node, $m$-edge directed graph can be solved in $T(n,m)$ time, then the Minimum Weight $k$-Cycle problem in an $n$-node, $m$-edge directed graph is solvable in $O(T(kn, km))$ time.

Let the weights of the $k$-cycle instance range between $-M$ and $M$. Use Lemma 2.2 to reduce the Min Weight $k$-Cycle problem to one in a circle-layered graph $G$ with partitions $P_1,\dots,P_k$. Add the value $4M$ to each edge, which adds $4Mk$ to the value of every $k$-cycle. Every cycle in a directed circle-layered graph with $k$ layers is a $jk$-cycle for a positive integer $j$, since every cycle must go around the graph circle some number of times. Due to the added weight $4M$, the Shortest Cycle in the new graph will minimize the number of edges: Any $jk$-cycle $C$ for $j \ge 2$ will have weight $4Mjk + w(C) \ge 3Mjk \ge 6Mk$, where $w(C)$ is the weight of $C$ in $G$. The weight of a $k$-cycle, however, is at most $4Mk + Mk = 5Mk$. Thus, the weight of the Shortest Cycle in the new graph is exactly the weight of the Min Weight $k$-Cycle in $G$, plus $4Mk$, and the Shortest Cycle will exactly correspond to the Min Weight $k$-Cycle in $G$.
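A small brute-force Python check of the weight-shift argument (illustration only: the helper names are ours, the shift value $4M$ is one valid choice consistent with the argument, and the cycle enumeration is exponential, so this is for toy instances):

```python
def all_simple_cycles(adj):
    """Enumerate all directed simple cycles; each is reported once,
    rooted at its smallest node (exponential time, toy sizes only)."""
    cycles = []
    def dfs(start, u, path, onpath):
        for v in adj[u]:
            if v == start:
                cycles.append(path[:])
            elif v > start and v not in onpath:
                onpath.add(v)
                path.append(v)
                dfs(start, v, path, onpath)
                path.pop()
                onpath.remove(v)
    for s in sorted(adj):
        dfs(s, s, [s], {s})
    return cycles

# 3 layers {0,1}, {2,3}, {4,5}; edges go to the next layer, weights in [-M, M]
adj = {0: {2: 1, 3: -2}, 1: {2: 0, 3: 2}, 2: {4: 1, 5: -1},
       3: {4: 0, 5: 1}, 4: {0: 2, 1: -1}, 5: {0: 0, 1: 1}}
M = 2

def cycle_weight(cyc, shift):
    """Weight of a cycle after adding `shift` to every edge."""
    return sum(adj[cyc[i]][cyc[(i + 1) % len(cyc)]] + shift
               for i in range(len(cyc)))

cycles = all_simple_cycles({u: list(vs) for u, vs in adj.items()})
best = min(cycles, key=lambda c: cycle_weight(c, 4 * M))
```

In this toy instance the minimum weight $3$-cycle is $0 \to 3 \to 5 \to 0$ of weight $-1$; after the shift the overall shortest cycle is that same $3$-cycle, of weight $-1 + 3\cdot 4M = 23$, while every $6$-cycle has shifted weight at least $6 \cdot 3M = 36$.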
Lemma 6.1
If Shortest Cycle can be solved in $T(n,m)$ time in an $n$-node, $m$-edge directed unweighted graph, then $k$-cycle in a directed unweighted $n$-node, $m$-edge graph is solvable in $O(T(kn, km))$ time.

The proof is similar to, but simpler than, that of Theorem 6.1. We first reduce to $k$-cycle in a circle-layered graph, and then just find the Shortest Cycle in it. Since the graph obtained is directed and circle-layered, every cycle in it has length a positive multiple of $k$; hence if it contains a $k$-cycle, then that $k$-cycle is its shortest cycle.
Corollary 6.1
If Min Weight $k$-Clique requires $n^{k - o(1)}$ time, then Shortest Cycle in directed weighted graphs requires $(mn)^{1 - o(1)}$ time whenever $m = \Theta(n^{1 + 1/\ell})$ for a constant integer $\ell \ge 1$.
Directed Shortest Cycle in unweighted graphs requires $m^{3/2 - o(1)}$ time under the Max-$3$-SAT Hypothesis, $m^{2 - o(1)}$ time under the Exact Weight Clique Hypothesis, and $m^{2\omega/3 - o(1)}$ time under the $k$-Clique Hypothesis.

The first statement follows immediately from Theorem 6.1 and Corollary 4.2. We will focus on the second part of the corollary.
The reduction in Corollary 9.1 from Max-$3$-SAT on $n$ variables to $k$-cycle (for any large enough $k$ divisible by $3$) produces a graph with $m$ edges where $m^{3/2 - o(1)} = 2^{n - o(n)}$, so that if one solves $k$-cycle in it in $O(m^{3/2 - \varepsilon})$ time for some $\varepsilon > 0$, then the Max-$3$-SAT Hypothesis is false. Now suppose that Shortest Cycle in a directed graph can be solved in $O(m^{3/2 - \varepsilon})$ time for some $\varepsilon > 0$. Set $k$ to be a large enough integer, depending on $\varepsilon$, that is divisible by $3$. Consider the $k$-cycle problem in $m$-edge graphs obtained via the reduction from Max-$3$-SAT. Reduce it to Shortest Cycle as in Lemma 6.1. As Lemma 6.1 increases the number of edges by only a constant factor $k$, the number of edges in consideration is $O(m)$. Then, applying the $O(m^{3/2 - \varepsilon})$ time algorithm, we can solve the $k$-cycle instance in $O(m^{3/2 - \varepsilon})$ time. As we set $k$ large enough, the resulting running time is $2^{n(1-\delta)}$ for some $\delta > 0$, and hence we obtain a faster algorithm for Max-$3$-SAT and contradict the Max-$3$-SAT Hypothesis.
A similar argument applies to show that $m^{2 - o(1)}$ time is needed under the Exact Weight Clique Hypothesis, and $m^{2\omega/3 - o(1)}$ time is needed under the $k$-Clique Hypothesis.
7 Discussion of the Hyperclique Hypothesis
In this section we discuss why the Hyperclique hypothesis is believable.
First, when $k \ge 3$, the fastest algorithms for the $\ell$-hyperclique problem in $k$-uniform hypergraphs run in $n^{\ell}$ time, up to lower order factors, and this is not for lack of trying. Many researchers [WBK] have attempted to design a faster algorithm, for instance by mimicking the matrix multiplication approach for $k$-Clique. However, in doing this, one needs to design a nontrivial algorithm for a generalized version of matrix multiplication. Unfortunately, in Section 8, we show that the rank and even the border rank of the tensor associated with this generalized product is as large as possible, thus ruling out the arithmetic circuit approach for the problem. Thus, if a faster than $n^{\ell}$ algorithm exists for $k$-uniform hypergraphs with $k \ge 3$, then it must use radically different techniques than the Strassen-like approach to regular matrix multiplication.
Another reason to believe the Hyperclique hypothesis is due to its relationship to Maximum Constraint Satisfaction Problems (CSPs). R. Williams [Wil07] showed that Max-$2$-SAT can be reduced to finding an $\ell$-Hyperclique in a $2$-uniform hypergraph, so that if the latter can be solved in $O(n^{\ell - \varepsilon})$ time for $n$-node graphs and some $\varepsilon > 0$, then Max-$2$-SAT can be solved in $O(2^{(1-\delta)n})$ time for some $\delta > 0$ for formulas on $n$ variables.
Max-$3$-SAT has long resisted attempts to improve upon the brute-force runtime. Recent results (e.g. [ACW16]) obtained time improvements, but there is still no $O(2^{(1-\varepsilon)n})$ time algorithm for any constant $\varepsilon > 0$. Generalizing the reduction from [Wil07] (see Section 9), one can reduce Max-$d$-SAT to $\ell$-hyperclique in a $d$-uniform hypergraph for any $\ell > d$, so that if the latter problem can be solved in $O(n^{\ell - \varepsilon})$ time for $n$-node hypergraphs and some $\varepsilon > 0$, then Max-$d$-SAT can be solved in $O(2^{(1-\delta)n})$ time for some $\delta > 0$ for formulas on $n$ variables. In fact, R. Williams [Wil07] showed that even harder Constraint Satisfaction Problems (CSPs) can be reduced to hyperclique: CSPs where the constraints are degree-$2$ polynomials representing Boolean functions over the variables. In Section C we generalize this to CSPs where the constraints are degree-$d$ polynomials. Such CSPs include Max-$d$-SAT and also include some CSPs with constraints involving more than $d$ variables. In any case, the Hyperclique Hypothesis captures the difficulty of this very general class of CSPs.
Another reason to believe the Hypothesis is due to its relationship to the Exact Weight Clique Conjecture [VW13], which states that finding a $k$-Clique of total edge weight exactly $0$ in an $n$-node graph with large integer weights requires $n^{k - o(1)}$ time. The Exact Weight Clique conjecture is implied by the Min Weight Clique conjecture, so it is at least as believable. Furthermore, for the special case $k = 3$, both $3$SUM and APSP can be reduced to Exact Weight $3$-Clique, so that a truly subcubic algorithm for the latter problem would refute both the APSP and the $3$SUM conjectures [Pat10, VW13, VW10]. Exact Weight Clique is thus a very difficult problem. Recent work by Abboud et al. [ABDN17] shows how to use the techniques in [ALW14] to reduce the Exact Weight Clique problem to (unweighted) Clique in a uniform hypergraph. Thus, if one believes the Exact Weight Clique conjecture, then one should definitely believe the Hyperclique Hypothesis. (A generalization of this approach also shows that Exact Weight Hyperclique in a $k$-uniform hypergraph can be tightly reduced to (unweighted) hyperclique in a hypergraph of larger uniformity.)
We note that the hypothesis concerns dense hypergraphs. For hyperclique in sparse hypergraphs, faster algorithms are known: the results of Gao et al. [GIKW17] imply that an $\ell$-hyperclique in an $m$-hyperedge, $n$-node $k$-uniform hypergraph (for $\ell > k$) can be found in $O(m^{\ell - 1})$ time.
8 No Generalized Matrix Multiplication for $k > 2$
The fastest known algorithm for $k$-clique reduces $k$-clique to triangle detection in a graph on $O(n^{k/3})$ nodes and then uses matrix multiplication to find a triangle [NP85]. One might ask, is there a similar approach to finding an $\ell$-hyperclique in a $k$-uniform hypergraph faster than $n^{\ell}$ time?
The first step would be to reduce the $\ell$-hyperclique problem in a $k$-uniform hypergraph to $(k+1)$-hyperclique in a $k$-uniform hypergraph. This step works fine: Assume for simplicity that $\ell$ is divisible by $k+1$, so that $\ell = b(k+1)$ for an integer $b$. We will build a new $k$-uniform hypergraph $G'$. Take all $b$-tuples of vertices of $G$ and create a vertex in $G'$ corresponding to the tuple if it forms a hyperclique in $G$ (if $b < k$, any tuple is a hyperclique, and if $b \ge k$, it is a hyperclique if all of its size-$k$ subsets are hyperedges). For every choice of $k$ distinct tuples, create a hyperedge in $G'$ on them if every choice of $k$ nodes from their union forms a hyperedge in $G$. Now, $(k+1)$-hypercliques of $G'$ correspond to $\ell$-hypercliques of $G$. $G'$ is formed in $O(n^{bk})$ time (up to factors depending on $\ell$ and $k$) and has $O(n^b)$ nodes. Hence if a $(k+1)$-hyperclique in a $k$-uniform hypergraph on $N$ nodes can be found in $O(N^{k+1-\varepsilon})$ time for some $\varepsilon > 0$, then an $\ell$-hyperclique in a $k$-uniform hypergraph on $n$ nodes can be found in $O(n^{\ell - \varepsilon'})$ time for $\varepsilon' = b\varepsilon$.
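A toy Python illustration of this blow-up (the helper names are ours; everything is brute force, so only for tiny hypergraphs). For $k = 2$ and $b = 1$ it degenerates to the familiar statement that a $3$-clique in a graph is a triangle:

```python
from itertools import combinations

def is_hyperclique(nodes, hyperedges, k):
    """True iff every k-subset of `nodes` is a hyperedge (vacuous if |nodes| < k)."""
    return all(frozenset(s) in hyperedges for s in combinations(sorted(nodes), k))

def tuple_blowup(n, hyperedges, k, b):
    """Vertices of G' are b-tuples of V(G) inducing hypercliques; k of them form
    a hyperedge of G' iff they are disjoint and their union induces a hyperclique.
    Then (k+1)-hypercliques of G' correspond to b(k+1)-hypercliques of G."""
    verts = [t for t in combinations(range(n), b)
             if is_hyperclique(t, hyperedges, k)]
    new_edges = set()
    for tups in combinations(verts, k):
        union = set().union(*map(set, tups))
        if len(union) == b * k and is_hyperclique(union, hyperedges, k):
            new_edges.add(frozenset(tups))
    return verts, new_edges

def has_small_hyperclique(verts, new_edges, k):
    """Brute-force (k+1)-hyperclique search in G'."""
    return any(all(frozenset(s) in new_edges for s in combinations(c, k))
               for c in combinations(verts, k + 1))
```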
Thus it suffices to just find $(k+1)$-hypercliques in $k$-uniform hypergraphs. Following the approach for finding triangles (the case $k = 2$), we want to define a suitable generalized matrix product.
In the matrix multiplication problem we are given two matrices and we are asked to compute a third. Matrices are just tensors of order $2$. The new product we will define is for tensors of order $k$. We will call these tensors $k$-tensors for brevity. The natural generalization of matrix multiplication for $k$-tensors of dimensions $n \times \cdots \times n$ ($k$ times) is as follows.

Given $k$-tensors $A^1, \dots, A^k$ of dimensions $n \times \cdots \times n$, compute the $k$-tensor $C$ given by
$$C[i_1,\dots,i_k] \;=\; \sum_{\ell=1}^{n} A^1[\ell, i_2, i_3, \dots, i_k]\cdot A^2[\ell, i_3, i_4, \dots, i_k, i_1]\cdots A^k[\ell, i_k, i_1, \dots, i_{k-2}].$$
The special case of $k = 3$ was defined in 1990 by Mesner and Bhattacharya [MB90]: Given three tensors $A, B, C$ with indices in $[n]$, compute the product $D$ defined as $D[i_1,i_2,i_3] = \sum_{\ell=1}^n A[\ell, i_2, i_3]\, B[\ell, i_3, i_1]\, C[\ell, i_1, i_2]$. The general definition as above was given later by [GER11], and its properties have been studied within algebra and combinatorics, e.g. [Gna15].
Now, if one can compute the $k$-wise matrix product in $T(n)$ time, then one can also find a $(k+1)$-hyperclique in a $k$-uniform hypergraph in essentially the same time: define $A$ to be the adjacency tensor of the hypergraph – it is of order $k$ and has a $1$ in every entry indexed by a $k$-tuple that forms a hyperedge; if the $k$-wise product of $k$ copies of $A$ has a nonzero entry at some $k$-tuple that is also a hyperedge, then the hypergraph contains a $(k+1)$-hyperclique.
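As a sanity check of this use of the product, here is a naive $O(n^4)$ Python implementation of the $3$-wise product and the induced $4$-hyperclique test (the helper names are ours; for a symmetric adjacency tensor the cyclic indexing above is equivalent to the other standard index conventions):

```python
from itertools import permutations, product

def threewise_product(A, B, C, n):
    """Naive 3-wise tensor product in O(n^4) time:
    D[i1,i2,i3] = sum_l A[l,i2,i3] * B[l,i3,i1] * C[l,i1,i2]."""
    D = {}
    for i1, i2, i3 in product(range(n), repeat=3):
        D[(i1, i2, i3)] = sum(A[(l, i2, i3)] * B[(l, i3, i1)] * C[(l, i1, i2)]
                              for l in range(n))
    return D

def has_4_hyperclique(hyperedges, n):
    """4-hyperclique detection in a 3-uniform hypergraph on {0, ..., n-1}."""
    A = {t: 0 for t in product(range(n), repeat=3)}
    for e in hyperedges:              # symmetric adjacency tensor
        for t in permutations(e):
            A[t] = 1
    D = threewise_product(A, A, A, n)
    # a nonzero entry at a tuple that is itself a hyperedge certifies 4 nodes
    # all of whose triples are hyperedges
    return any(A[t] and D[t] for t in product(range(n), repeat=3))
```

The naive product takes $O(n^{k+1})$ time ($n^4$ here), matching exhaustive search; the question is whether this exponent can be beaten.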
Now the question is: “Is there an $O(n^{k+1-\varepsilon})$ time algorithm for