(Gap/S)ETH Hardness of SVP

Divesh Aggarwal
Centre for Quantum Technologies, NUS
divesh.aggarwal@gmail.com
Supported by the Singapore Ministry of Education and the National Research Foundation, also through the Tier 3 Grant “Random numbers from quantum processes” MOE2012-T3-1-009.
   Noah Stephens-Davidowitz
Princeton University
noahsd@gmail.com
Supported by the Simons Collaboration on Algorithms and Geometry.
Abstract

We prove the following quantitative hardness results for the Shortest Vector Problem in the $\ell_p$ norm ($\mathrm{SVP}_p$), where $n$ is the rank of the input lattice.

  1. For "almost all" $p > p_0 \approx 2.14$, there is no $2^{n/C_p}$-time algorithm for $\mathrm{SVP}_p$ for some explicit constant $C_p > 0$ unless the (randomized) Strong Exponential Time Hypothesis (SETH) is false.

  2. For any $p > 2$, there is no $2^{o(n)}$-time algorithm for $\mathrm{SVP}_p$ unless the (randomized) Gap-Exponential Time Hypothesis (Gap-ETH) is false. Furthermore, for each $p > 2$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate $\mathrm{SVP}_p$.

  3. There is no $2^{o(n)}$-time algorithm for $\mathrm{SVP}_p$ for any $p$ unless either (1) (non-uniform) Gap-ETH is false; or (2) there is no family of lattices with exponential kissing number in the $\ell_2$ norm. Furthermore, for each $p$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate $\mathrm{SVP}_p$.

1 Introduction

A lattice $\mathcal{L}$ is the set of all integer combinations of linearly independent basis vectors $\mathbf{b}_1, \ldots, \mathbf{b}_n \in \mathbb{R}^d$,
$$\mathcal{L} := \Big\{ \sum_{i=1}^{n} z_i \mathbf{b}_i \;:\; z_i \in \mathbb{Z} \Big\} \, .$$
We call $n$ the rank of the lattice and $d$ the dimension or the ambient dimension of the lattice $\mathcal{L}$.

The Shortest Vector Problem ($\mathrm{SVP}$) takes as input a basis for a lattice $\mathcal{L}$ and a length $r > 0$ and asks us to decide whether the shortest non-zero vector in $\mathcal{L}$ has length at most $r$. Typically, we define length in terms of the $\ell_p$ norm for some $1 \le p \le \infty$, defined as
$$\|\mathbf{x}\|_p := \big( |x_1|^p + \cdots + |x_d|^p \big)^{1/p}$$
for finite $p$ and
$$\|\mathbf{x}\|_\infty := \max_{1 \le i \le d} |x_i| \, .$$
In particular, the $\ell_2$ norm is the familiar Euclidean norm, and it is the most interesting case from our perspective. We write $\mathrm{SVP}_p$ for $\mathrm{SVP}$ in the $\ell_p$ norm (and just $\mathrm{SVP}$ when we do not wish to specify a norm).
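To make these definitions concrete, here is a small, purely illustrative Python sketch (our own example, not an algorithm from this paper) that computes $\ell_p$ norms and estimates $\lambda_1^{(p)}$ of a tiny lattice by brute-force enumeration of bounded integer combinations:

```python
import itertools
import numpy as np

def lp_norm(x, p):
    """l_p norm of a vector; p = float('inf') gives the max norm."""
    x = np.abs(np.asarray(x, dtype=float))
    return x.max() if np.isinf(p) else (x ** p).sum() ** (1.0 / p)

def shortest_vector_bruteforce(B, p, coeff_bound=3):
    """Estimate the length of the shortest non-zero vector of the lattice with
    basis columns B by enumerating all integer coefficient vectors with entries
    in [-coeff_bound, coeff_bound].  Exponential time; tiny examples only."""
    n = B.shape[1]
    best = float('inf')
    for z in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if any(z):  # skip the zero vector
            best = min(best, lp_norm(B @ np.array(z), p))
    return best

B = np.array([[2.0, 1.0], [0.0, 2.0]])  # toy rank-2 basis
for p in (1, 2, 3, float('inf')):
    print(p, shortest_vector_bruteforce(B, p))
```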

Starting with the breakthrough work of Lenstra, Lenstra, and Lovász in 1982 [LLL82], algorithms for solving $\mathrm{SVP}$ in both its exact and approximate forms have found innumerable applications, including factoring polynomials over the rationals [LLL82], integer programming [Len83, Kan87, DPV11], cryptanalysis [Sha84, Odl90, JS98, NS01], etc. More recently, many cryptographic primitives have been constructed whose security is based on the (worst-case) hardness of $\mathrm{SVP}$ or closely related lattice problems [Ajt04, Reg09, GPV08, Pei10, Pei16]. Such lattice-based cryptographic constructions are likely to be used on massive scales (e.g., as part of the TLS protocol) in the not-too-distant future [ADPS16, BCD16, NIS16].

Most of the above applications rely on approximate variants of $\mathrm{SVP}$ with rather large approximation factors (e.g., the relevant approximation factors are polynomial in the rank $n$ for most cryptographic constructions). However, the best known algorithms for the approximate variant of $\mathrm{SVP}$ use an algorithm for exact $\mathrm{SVP}$ over lower-rank lattices as a subroutine [Sch87, GN08, MW16]. So, the complexity of the exact problem is of particular interest. We briefly discuss some of what is known below.

Algorithms for $\mathrm{SVP}$.

Most of the asymptotically fastest known algorithms for $\mathrm{SVP}$ are variants of the celebrated randomized sieving algorithm due to Ajtai, Kumar, and Sivakumar [AKS01], which solved $\mathrm{SVP}_2$ in $2^{O(d)}$ time. This was extended to all $\ell_p$ norms [BN09], then to $\mathrm{SVP}$ in all norms [AJ08], and then even to "norms" whose unit balls are not necessarily symmetric [DPV11]. These $2^{O(d)}$-time algorithms that work in all norms in particular imply $2^{O(n)}$-time algorithms for $\mathrm{SVP}_p$, by taking the ambient space to be the span of the lattice. We are therefore primarily interested in the running time of these algorithms as a function of the rank $n$. (Notice that, in the $\ell_2$ norm, we can always assume that $d = n$.)

In the special case of $p = 2$, quite a bit of work has gone into improving the constant in the exponent in these $2^{O(n)}$-time algorithms [NV08, PS09, MV10, LWXZ11]. The current fastest known algorithm for $\mathrm{SVP}_2$ runs in time $2^{n + o(n)}$ [ADRS15, AS17]. But, this is unlikely to be the end of the story. Indeed, there is also a $2^{n/2 + o(n)}$-time algorithm that approximates $\mathrm{SVP}_2$ up to a small constant factor,¹Unlike all other algorithms mentioned here, this $2^{n/2 + o(n)}$-time algorithm does not actually find a short vector; it only outputs a length. In the exact case, these two problems are known to be equivalent under an efficient rank-preserving reduction [MG02], but this is not known to be true in the approximate case. and there is some reason to believe that this algorithm can be modified to solve the exact problem [ADRS15, AS17]. Further complicating the situation, there exist even faster "heuristic algorithms," whose correctness has not been proven but can be shown under certain heuristic assumptions [NV08, WLTB11, Laa15]. The fastest such heuristic algorithm runs in time $2^{0.292 n + o(n)}$ [BDGL16].

Hardness of $\mathrm{SVP}$.

Van Emde Boas first asked whether $\mathrm{SVP}$ was NP-hard in 1981, and he proved NP-hardness in the special case when $p = \infty$ [van81]. Despite much work, his question went unanswered until 1998, when Ajtai showed NP-hardness of $\mathrm{SVP}_p$ for all $p$ (under randomized reductions) [Ajt98]. A series of works by Cai and Nerurkar [CN98], Micciancio [Mic01], Khot [Kho05], and Haviv and Regev [HR12] simplified the reduction and showed hardness of the approximate version of $\mathrm{SVP}$. We now know that $\mathrm{SVP}_p$ is NP-hard to approximate to within any constant factor and hard to approximate to within approximation factors as large as $n^{c/\log\log n}$ for some constant $c > 0$ under reasonable complexity-theoretic assumptions.²All of these reductions for finite $p$ are randomized, as are ours. An unconditional deterministic reduction would be a major breakthrough. See [Mic01, Mic12] for more discussion and even a conditional deterministic reduction that relies on a certain number-theoretic assumption.

However, such hardness proofs tell us very little about the quantitative or fine-grained complexity of $\mathrm{SVP}$. E.g., does the fastest possible algorithm for $\mathrm{SVP}$ still run in time exponential in the rank $n$, or is there an algorithm that runs in time $2^{o(n)}$ or even $\mathrm{poly}(n)$? The above hardness results cannot distinguish between these cases, but we certainly need to be confident in our answers to such questions if we plan to base the security of widespread cryptosystems on these answers. Indeed, most proposed instantiations of lattice-based cryptosystems (i.e., proposed cryptosystems that specify a key size) can essentially be broken by solving $\mathrm{SVP}$ on lattices of moderate rank, or its approximate variant with small polynomial approximation factors on lattices of somewhat larger rank. So, if we discovered an algorithm for these problems whose running time is $2^{n/C}$ for a sufficiently large constant $C$, then these schemes would be broken in practice. And, given the large number of recent algorithmic advances, one might (reasonably?) worry that such algorithms will be found. We would therefore very much like to rule out this possibility!

To rule out such algorithms, we typically rely on a fine-grained complexity-theoretic hypothesis, such as the Strong Exponential Time Hypothesis (SETH, see Section 2.3) or the Exponential Time Hypothesis (ETH). To that end, Bennett, Golovnev, and Stephens-Davidowitz recently showed quantitative hardness results for the Closest Vector Problem in $\ell_p$ norms ($\mathrm{CVP}_p$) [BGS17], which is a close relative of $\mathrm{SVP}_p$ that is known to be at least as hard (so that this was a necessary first step towards proving similar results for $\mathrm{SVP}_p$). In particular, assuming SETH, [BGS17] showed that there is no $2^{(1-\varepsilon)n}$-time algorithm for $\mathrm{CVP}_p$ or $\mathrm{SVP}_\infty$ for any $\varepsilon > 0$ and "almost all" $p$ (not including $p = 2$). Under ETH, [BGS17] showed that there is no $2^{o(n)}$-time algorithm for $\mathrm{CVP}_p$ for any $p$. We prove similar results for $\mathrm{SVP}_p$ for $p > 2$ (and a conditional result for all $p$ that holds if there exists a family of lattices satisfying certain geometric conditions).

1.1 Our results

We now present our results, which are also summarized in Table 1.

[Table 1 layout not reproduced here. Columns: upper bound; lower bounds under SETH, under Gap-ETH, and under Gap-ETH plus the exponential kissing-number assumption. Rows cover the various ranges of $p$; see Fig. 1 and [BGS17].]
Table 1: Summary of known fine-grained upper and lower bounds for $\mathrm{SVP}_p$ for various $p$ under various assumptions, with new results in blue. Lower bounds in bold also apply for some constant approximation factor strictly greater than one. The one upper bound in parentheses is due to a heuristic algorithm. The SETH-based lower bound only applies for "almost all" $p$, in the sense of Theorem 4.3. We have suppressed low-order terms for simplicity.

SETH-hardness.

Our first main result essentially gives an explicit constant $C_p$ for each $p > p_0 \approx 2.14$ such that, under (randomized) SETH, there is no algorithm for $\mathrm{SVP}_p$ that runs in time better than $2^{n/C_p}$. The constants $p_0$ and $C_p$ do not have a closed form, but they are easily computable to high precision in practice. We plot $C_p$ over a wide range of $p$ in Figure 1. Notice that $C_p$ is unbounded as $p$ approaches $p_0$, but it is a relatively small constant for moderately large $p$.

We present this result informally here, as the actual statement is rather technical. In particular, because we use the theorem from [BGS17] that only applies to "almost all" $p$, our result also has this property. See Theorem 4.3 for the formal statement.

Figure 1: The value of $C_p$ for different values of $p$. In particular, up to the minor technical issues in Theorem 4.3, there is no $2^{n/C_p}$-time algorithm for $\mathrm{SVP}_p$ unless SETH is false. The plot on the left shows $C_p$ over a wide range of $p$, while the plot on the right shows the behavior as $p$ approaches its minimal value $p_0$.
Theorem 1.1 (Informal).

For "almost all" $p > p_0 \approx 2.14$ (including all odd integers $p \geq 3$), there is no $2^{n/C_p}$-time algorithm for $\mathrm{SVP}_p$ unless (randomized) SETH is false, where $C_p$ is as in Figure 1. Furthermore, $C_p \to 1$ as $p \to \infty$.

To prove this theorem, we give a (randomized) reduction from the $\mathrm{CVP}_p$ instances created by the reduction of [BGS17] to $\mathrm{SVP}_p$ that only increases the rank of the lattice by a constant factor. As we describe in Section 1.3, our reduction is surprisingly simple. In particular, the key step in Khot's reduction [Kho05] uses a certain "gadget" consisting of a lattice $\mathcal{L}^\dagger$, vector $\mathbf{t}^\dagger$, and distance $r^\dagger$ to convert a provably hard $\mathrm{CVP}_p$ instance into an $\mathrm{SVP}_p$ instance. Our reduction is similar to Khot's reduction with the simple gadget given by $\mathcal{L}^\dagger = \mathbb{Z}^n$, $\mathbf{t}^\dagger = (1/2, \ldots, 1/2)$, and $r^\dagger = \tfrac{1}{2} n^{1/p}$.
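To see concretely why this gadget is natural, note that every $\mathbf{z} \in \{0,1\}^n \subset \mathbb{Z}^n$ is an exact closest vector to the all-halves target (the following short calculation is only an illustration of this claim):
$$\big\| \mathbf{z} - \mathbf{t}^\dagger \big\|_p^p = \sum_{i=1}^{n} \big| z_i - \tfrac{1}{2} \big|^p = \frac{n}{2^p} \qquad \text{for every } \mathbf{z} \in \{0,1\}^n \, ,$$
so all $2^n$ such points lie at $\ell_p$ distance exactly $\tfrac{1}{2} n^{1/p} = r^\dagger$ from $\mathbf{t}^\dagger$, and no integer point can be closer, since each coordinate of an integer point is at distance at least $\tfrac{1}{2}$ from $\tfrac{1}{2}$.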

We note in passing that we actually do not need the full strength of SETH. We can instead rely on the analogous assumption for Max-$2$-SAT, which is potentially weaker. (We inherit this property directly from [BGS17]. See Section 4.)

Gap-ETH-hardness.

Our second main result is the Gap-ETH-hardness of $\mathrm{SVP}_p$ for all $p > 2$.³Gap-ETH is the assumption that there is no $2^{o(n)}$-time algorithm that distinguishes a satisfiable $3$-SAT formula from one in which at most a constant fraction $\delta < 1$ of the clauses are simultaneously satisfiable. See Section 2.3. In fact, we prove this even for the problem of approximating $\mathrm{SVP}_p$ up to some fixed constant $\gamma_p > 1$ depending only on $p$ (and the approximation factor implicit in the Gap-ETH assumption). See Corollary 5.5.

Theorem 1.2 (Informal).

For any $p > 2$, there is no $2^{o(n)}$-time algorithm for $\mathrm{SVP}_p$ unless (randomized) Gap-ETH is false. Furthermore, for each such $p$ there is a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate $\mathrm{SVP}_p$.

Our reduction is again quite simple (though the proof of correctness is not). We follow Khot's reduction from approximate Exact Set Cover, and we again use the integer lattice $\mathbb{Z}^n$ as our gadget (with a different target).⁴We note that Khot claimed in Section 8 of [Kho05] that he had discovered a linear reduction from $\gamma$-approximate $\mathrm{SVP}_2$ to $\gamma'$-approximate $\mathrm{SVP}_p$ for $p > 2$ and some unspecified constant $\gamma' > 1$. However, it is not clear whether $\gamma$ is a small enough constant to yield an alternate proof of Theorem 1.2 for $p > 2$. In particular, one would need to show Gap-ETH-hardness of $\gamma$-approximate $\mathrm{SVP}_2$.

We note in passing that for this result (as well as Theorem 1.3 and Corollary 1.4), we actually rule out even $2^{o(d)}$-time algorithms, where $d$ is the ambient dimension. However, we focus on the rank $n$ instead of the dimension $d$ for simplicity.

Towards $p = 2$.

We are unable to extend either Theorem 1.1 or Theorem 1.2 to the important case when $p = 2$. Indeed, we cannot use the integer lattice $\mathbb{Z}^n$ as a gadget in the Euclidean norm. However, we do show that the existence of a certain type of lattice that is believed to exist would be sufficient to show (possibly non-uniform) Gap-ETH-hardness of $\mathrm{SVP}_2$. In particular, it would suffice to show the existence of any family of lattices with exponentially large kissing number. See Theorem 5.9 for the precise statement, which requires only the existence of a structure that might be easier to construct (and see, e.g., [Alo97, CS98] for discussion of the lattice kissing number).

Theorem 1.3 (Informal).

There is no $2^{o(n)}$-time algorithm for $\mathrm{SVP}_2$ unless either (1) (non-uniform) Gap-ETH is false; or (2) the lattice kissing number is $2^{o(n)}$. Furthermore, there exists a constant $\gamma > 1$ such that the same result holds even for $\gamma$-approximate $\mathrm{SVP}_2$.

In fact, Regev and Rosen show that $\ell_2$ is in some sense the "easiest norm" [RR06]. (See Theorem 2.3.) In particular, to show that $\mathrm{SVP}_p$ is Gap-ETH-hard for all $p$, it suffices to show it for $p = 2$. From this, we derive the following. (See Corollary 5.10 for the formal statement.)

Corollary 1.4 (Informal).

There is no $2^{o(n)}$-time algorithm for $\mathrm{SVP}_p$ for any $p$ unless either (1) (non-uniform) Gap-ETH is false; or (2) the lattice kissing number is $2^{o(n)}$ (in the $\ell_2$ norm). Furthermore, for each $p$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate $\mathrm{SVP}_p$.

1.2 Khot’s reduction

Before we describe our own contribution, it will be useful to review Khot's elegant reduction from $\mathrm{CVP}$ to $\mathrm{SVP}$ [Kho05]. We do our best throughout this description to hide technicalities in an effort to focus on the high-level simplicity of Khot's reduction.⁵Khot's primary motivation for his reduction was to prove hardness of approximating $\mathrm{SVP}$ to within any constant factor, by showing a reduction that is well-behaved under a certain tensor product. We are not interested in taking tensor products (since they produce lattices of superlinear rank), so we ignore this issue entirely. (Since the hardness of $\mathrm{SVP}$ went unproven for many years, this simplicity is truly remarkable.)

First, some basic definitions and notation. For a lattice $\mathcal{L}$ and $1 \le p \le \infty$, we write
$$\lambda_1^{(p)}(\mathcal{L}) := \min_{\mathbf{y} \in \mathcal{L} \setminus \{\mathbf{0}\}} \|\mathbf{y}\|_p$$
for the length of the shortest non-zero vector in $\mathcal{L}$ in the $\ell_p$ norm. For a target vector $\mathbf{t}$, we write
$$\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) := \min_{\mathbf{y} \in \mathcal{L}} \|\mathbf{y} - \mathbf{t}\|_p$$
for the distance between $\mathbf{t}$ and $\mathcal{L}$. For any radius $r > 0$, we write
$$N_p(\mathcal{L}, \mathbf{t}, r) := \big| \{ \mathbf{y} \in \mathcal{L} \;:\; \|\mathbf{y} - \mathbf{t}\|_p \le r \} \big|$$
for the number of lattice vectors within distance $r$ of $\mathbf{t}$.

Recall that $\mathrm{CVP}_p$ is the problem that takes as input a lattice $\mathcal{L}$, target vector $\mathbf{t}$, and distance $r > 0$ and asks us to distinguish the YES case when $\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) \le r$ from the NO case when $\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) > r$. When talking about a particular $\mathrm{CVP}_p$ instance, we naturally call a lattice vector $\mathbf{y} \in \mathcal{L}$ with $\|\mathbf{y} - \mathbf{t}\|_p \le r$ a close vector, and we notice that the number of close vectors is $N_p(\mathcal{L}, \mathbf{t}, r)$.

The naive reduction and sparsification.

The "naive reduction" from $\mathrm{CVP}_p$ to $\mathrm{SVP}_p$ simply takes a $\mathrm{CVP}_p$ instance consisting of a lattice $\mathcal{L}$ with basis $\mathbf{B}$, target $\mathbf{t}$, and distance $r$ and constructs the $\mathrm{SVP}_p$ instance given by the basis $\mathbf{B}'$ of a lattice $\mathcal{L}'$ of the form
$$\mathbf{B}' := \begin{pmatrix} \mathbf{B} & \mathbf{t} \\ \mathbf{0}^T & s \end{pmatrix} \, ,$$
where $s > 0$ is some parameter depending on the instance. Notice that, if $\mathbf{B}\mathbf{z}$ is a close vector (i.e., $\|\mathbf{B}\mathbf{z} - \mathbf{t}\|_p \le r$), then $\|\mathbf{B}'(\mathbf{z}, -1)\|_p^p \le r^p + s^p$. Therefore, in the YES case when there exists a vector close to $\mathbf{t}$, we will have $\lambda_1^{(p)}(\mathcal{L}')^p \le r^p + s^p$.
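As a small illustration (a hypothetical sketch of this embedding, using the Euclidean norm for concreteness, not the reduction exactly as formalized later), the following Python snippet builds the embedded basis $\mathbf{B}'$ from a CVP instance and checks that a coefficient vector $(\mathbf{z}, -1)$ yields the vector $(\mathbf{B}\mathbf{z} - \mathbf{t}, -s)$:

```python
import numpy as np

def embed_cvp_as_svp(B, t, s):
    """Append the target t as an extra basis column with an extra coordinate s
    (the 'naive reduction' described above)."""
    d, n = B.shape
    top = np.hstack([B, t.reshape(-1, 1)])
    bottom = np.zeros((1, n + 1))
    bottom[0, n] = s
    return np.vstack([top, bottom])

# Toy example; all numbers are illustrative only.
B = np.array([[3.0, 1.0], [0.0, 3.0]])
t = np.array([1.4, 1.6])
s = 1.0
B_prime = embed_cvp_as_svp(B, t, s)

z = np.array([0, 1])      # candidate coefficient vector for the CVP instance
coeff = np.append(z, -1)  # coefficient vector (z, -1) in the embedded lattice
v = B_prime @ coeff       # equals (B z - t, -s)
print(np.linalg.norm(B @ z - t), np.linalg.norm(v))  # ||v||^2 = dist^2 + s^2
```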

However, in the NO case there might still be non-zero vectors in $\mathcal{L}'$ whose length is less than $(r^p + s^p)^{1/p}$. These vectors must be of the form $(\mathbf{B}\mathbf{z} - a\mathbf{t}, -as)$ for some integer $a$. Let us for now only consider the case $a = 0$, in which case these vectors are in one-to-one correspondence with the non-zero vectors in $\mathcal{L}$ of length less than $(r^p + s^p)^{1/p}$. We naturally call these short vectors.

Khot showed that a (randomized) reduction exists if we just assume that the number of close vectors in any YES case is significantly larger than the number of short vectors in any NO case. In particular, Khot showed that we can randomly "sparsify" the lattice $\mathcal{L}'$ to obtain a sublattice $\mathcal{L}'' \subseteq \mathcal{L}'$ such that each of the short non-zero vectors in $\mathcal{L}'$ stays in $\mathcal{L}''$ with probability roughly $1/q$, where $q$ is some parameter that we can choose. So, if we take $q$ to be significantly smaller than the number of close vectors in the YES case but significantly larger than the number of short vectors in the NO case, we can show that the resulting lattice will have $\lambda_1^{(p)}(\mathcal{L}'')^p \le r^p + s^p$ in the YES case but $\lambda_1^{(p)}(\mathcal{L}'')^p > r^p + s^p$ in the NO case with high probability.

Unfortunately, the $\mathrm{CVP}_p$ instances produced by most hardness reductions typically have exponentially many short vectors, and they might only have one close vector in the YES case. So, if we want this reduction to work, we will need some way to increase this ratio by an exponential factor.

Adding the gadget.

To increase the ratio of close vectors to short vectors, Khot uses a certain gadget that is itself a $\mathrm{CVP}_p$ instance $(\mathcal{L}^\dagger, \mathbf{t}^\dagger, r^\dagger)$, where $\mathcal{L}^\dagger$ is a lattice with basis $\mathbf{B}^\dagger$, $\mathbf{t}^\dagger$ is a target vector, and $r^\dagger > 0$ is some distance. He then takes the direct sum of the two instances. I.e., Khot considers the lattice
$$\mathcal{L}^+ := \mathcal{L} \oplus \mathcal{L}^\dagger$$
with basis
$$\mathbf{B}^+ := \begin{pmatrix} \mathbf{B} & \mathbf{0} \\ \mathbf{0} & \mathbf{B}^\dagger \end{pmatrix} \, ,$$
the target $\mathbf{t}^+ := (\mathbf{t}, \mathbf{t}^\dagger)$, and the distance $r^+ := \big( r^p + (r^\dagger)^p \big)^{1/p}$. We wish to apply the sparsification-based reduction described above to this new lattice. So, we proceed to make some observations about $(\mathcal{L}^+, \mathbf{t}^+, r^+)$ to deduce some properties that the gadget must have in order to make this reduction sufficient to derive our hardness results.

First, we simply notice that the rank of $\mathcal{L}^+$ is the sum of the ranks of $\mathcal{L}$ and $\mathcal{L}^\dagger$. To prove the kind of fine-grained hardness results that we are after, we are only willing to increase the rank by a constant factor, so the rank of $\mathcal{L}^\dagger$ must be at most $O(n)$. (Of course, prior work did not have this restriction.)

Next, we notice that any $(\mathbf{y}, \mathbf{y}^\dagger) \in \mathcal{L}^+$ with $\|\mathbf{y} - \mathbf{t}\|_p \le r$ and $\|\mathbf{y}^\dagger - \mathbf{t}^\dagger\|_p \le r^\dagger$ satisfies $\|(\mathbf{y}, \mathbf{y}^\dagger) - \mathbf{t}^+\|_p \le r^+$. We call these good vectors, and we notice that there are at least $N_p(\mathcal{L}^\dagger, \mathbf{t}^\dagger, r^\dagger)$ good vectors in the YES case.

Now, we worry about short vectors in $\mathcal{L}^+$ in the NO case, i.e., non-zero $(\mathbf{x}, \mathbf{x}^\dagger) \in \mathcal{L}^+$ with $\|(\mathbf{x}, \mathbf{x}^\dagger)\|_p \le r^+$. Clearly, $(\mathbf{x}, \mathbf{x}^\dagger)$ will be short if $\|\mathbf{x}\|_p \le r$ and $\|\mathbf{x}^\dagger\|_p \le r^\dagger$. Therefore, the number of short vectors is at least
$$N_p(\mathcal{L}, \mathbf{0}, r) \cdot N_p(\mathcal{L}^\dagger, \mathbf{0}, r^\dagger) \;\ge\; 2^{\Omega(n)} \cdot N_p(\mathcal{L}^\dagger, \mathbf{0}, r^\dagger) \, ,$$
where we have used the fact that $\mathbf{0}$ lies in both lattices and the fact that the input $\mathrm{CVP}_p$ instances that interest us have $2^{\Omega(n)}$ short vectors. (This is not true in general, but it is true of most $\mathrm{CVP}_p$ instances resulting from hardness proofs.) Since the number of good vectors in the YES case is potentially only $N_p(\mathcal{L}^\dagger, \mathbf{t}^\dagger, r^\dagger)$, our gadget lattice must satisfy
$$N_p(\mathcal{L}^\dagger, \mathbf{t}^\dagger, r^\dagger) \;\ge\; 2^{\Omega(n)} \cdot N_p(\mathcal{L}^\dagger, \mathbf{0}, r^\dagger) \, . \tag{1}$$

Though this in itself is not sufficient to make our reduction work, it is the most important feature that a gadget lattice must have. Indeed, we show in Corollary 5.3 that a slightly stronger condition is sufficient to prove Gap-ETH hardness. (This property and various variants are sometimes called local density, and they play a key role in many hardness proofs for $\mathrm{SVP}$.)

However, short vectors are no longer our only concern. We also have to worry about close vectors that are not good vectors, i.e., vectors $(\mathbf{y}, \mathbf{y}^\dagger) \in \mathcal{L}^+$ in the NO case such that $\|(\mathbf{y}, \mathbf{y}^\dagger) - \mathbf{t}^+\|_p \le r^+$ but $\|\mathbf{y} - \mathbf{t}\|_p > r$. We call such vectors impostors. Impostors certainly can exist in general, but our sparsification procedure will work on them just like any other vector. So, as long as our gadget lattice is chosen such that the number of impostors in the NO case is significantly lower than the number of good vectors in the YES case, they will not trouble us.

1.3 Our techniques

We learned in the previous section that, in order to make our reduction work, it is necessary (though not always sufficient) that our gadget has exponentially more close vectors than short vectors. I.e., we need to find a family of gadgets that satisfies Eq. (1). Furthermore, we must somehow ensure that the number of impostors in the NO case is exponentially lower than the number of good vectors in the YES case.

The integer lattice, $\mathbb{Z}^n$, and SETH-hardness.

To prove Theorem 1.1, we take $\mathcal{L}^\dagger = \mathbb{Z}^n$, $\mathbf{t}^\dagger = (1/2, \ldots, 1/2)$, and $r^\dagger = \tfrac{1}{2} n^{1/p}$. Notice that, by taking $r^\dagger = \mathrm{dist}_p(\mathbf{t}^\dagger, \mathbb{Z}^n)$, we ensure that there simply are no impostors in the NO instance (i.e., when $\|\mathbf{y} - \mathbf{t}\|_p > r$, we can never have $\|(\mathbf{y}, \mathbf{y}^\dagger) - \mathbf{t}^+\|_p \le r^+$).⁶We note that any gadget that allows us to use $r^\dagger = \mathrm{dist}_p(\mathbf{t}^\dagger, \mathcal{L}^\dagger)$ must satisfy quite rigid requirements. We need exponentially many vectors that are all exact closest vectors, and we still must satisfy Eq. (1).

To prove that our reduction works, we wish to show that the ratio
$$\frac{N_p(\mathbb{Z}^n, \mathbf{t}^\dagger, r^\dagger)}{N_p(\mathbb{Z}^n, \mathbf{0}, r^\dagger)}$$
is (exponentially) large. Of course, the numerator is easy to calculate. It is $2^n$. So, we wish to prove that
$$N_p\big(\mathbb{Z}^n, \mathbf{0}, n^{1/p}/2\big) \;\le\; 2^{(1-\varepsilon)n} \tag{2}$$
for some constant $\varepsilon > 0$.

Unfortunately, Eq. (2) does not hold for all norms. For example, for $p = 2$, consider the points in $\{-1, 0, 1\}^n$ with $n/4$ non-zero coordinates, which have $\ell_2$ norm $\sqrt{n}/2$. There are
$$\binom{n}{n/4} \cdot 2^{n/4} \;>\; 2^n$$
such points. (In fact, this is a reasonable estimate for the exact value of $N_2(\mathbb{Z}^n, \mathbf{0}, \sqrt{n}/2)$, which is $2^{cn}$ for some constant $c > 1$, as we show in Section 6.) However, $N_p(\mathbb{Z}^n, \mathbf{0}, n^{1/p}/2)$ is decreasing in $p$. So, one might hope that Eq. (2) holds for slightly larger $p$.
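Concretely, the count above exceeds $2^n$ by the standard entropy estimate (included here only to make the constants explicit):
$$\log_2 \binom{n}{n/4} = \big( H(1/4) + o(1) \big) n \approx 0.811\, n \, , \qquad \text{so} \qquad \log_2 \Big( \binom{n}{n/4} \cdot 2^{n/4} \Big) \approx 1.061\, n > n \, ,$$
where $H(\delta) := -\delta \log_2 \delta - (1 - \delta) \log_2 (1 - \delta)$ is the binary entropy function.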

To prove this, we wish to find a good upper bound on the number of integer points in a centered ball, $N_p(\mathbb{Z}^n, \mathbf{0}, r)$. A very nice way to do this uses the function
$$\Theta_p(\tau) := \sum_{z \in \mathbb{Z}} e^{-\tau |z|^p}$$
for $\tau > 0$ [MO90, EOR91].⁷One can think of this as a variant of Jacobi's theta function. In particular, with $p = 2$, this is Jacobi's theta function with a slightly different parametrization. Computer scientists might be more familiar with the closely related Gaussian mass $\rho_s(\mathcal{L}) := \sum_{\mathbf{y} \in \mathcal{L}} e^{-\pi \|\mathbf{y}\|_2^2 / s^2}$. Notice that
$$\Theta_p(\tau)^n = \sum_{\mathbf{z} \in \mathbb{Z}^n} e^{-\tau \|\mathbf{z}\|_p^p} \, .$$
In particular,
$$\Theta_p(\tau)^n \;\ge\; \sum_{\mathbf{z} \in \mathbb{Z}^n,\ \|\mathbf{z}\|_p \le r} e^{-\tau \|\mathbf{z}\|_p^p} \;\ge\; e^{-\tau r^p} \cdot N_p(\mathbb{Z}^n, \mathbf{0}, r) \, .$$
Rearranging and taking the infimum over $\tau > 0$, we see that
$$N_p(\mathbb{Z}^n, \mathbf{0}, r) \;\le\; \inf_{\tau > 0} e^{\tau r^p} \, \Theta_p(\tau)^n \, . \tag{3}$$
We can relatively easily compute this value numerically for $r = n^{1/p}/2$ and see that it is less than $2^{(1-\varepsilon)n}$ for some constant $\varepsilon > 0$ whenever $p > p_0 \approx 2.14$. (Indeed, we will see below that there is a nearly matching lower bound in a more general context. So, Eq. (3) is quite tight.)
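The following short Python sketch (our own illustration, not code from the paper) evaluates the right-hand side of Eq. (3) with $r = n^{1/p}/2$, i.e., it minimizes $e^{\tau/2^p}\,\Theta_p(\tau)$ over $\tau$. The per-coordinate quantity drops below $2$ only once $p$ exceeds roughly $2.14$, in line with the threshold $p_0$ discussed above:

```python
import math

def theta_p(tau, p, terms=200):
    """Theta_p(tau) = sum over integers z of exp(-tau * |z|^p)."""
    return 1.0 + 2.0 * sum(math.exp(-tau * k**p) for k in range(1, terms + 1))

def per_coordinate_bound(p, taus=None):
    """min over tau of exp(tau / 2^p) * Theta_p(tau).

    By Eq. (3) with r = n^{1/p} / 2, the number of integer points in the
    centered l_p ball of radius r is at most (this value)^n, so a value below
    2 means exponentially fewer short vectors than the 2^n close vectors of
    the gadget."""
    if taus is None:
        taus = [0.01 * i for i in range(1, 2000)]
    return min(math.exp(tau / 2**p) * theta_p(tau, p) for tau in taus)

for p in (2.0, 2.14, 2.5, 3.0, 5.0, 10.0):
    print(f"p = {p:5.2f}   min_tau e^(tau/2^p) * Theta_p(tau) = {per_coordinate_bound(p):.4f}")
```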

To prove Theorem 1.1, we can plug this very simple gadget into Khot's reduction described in Section 1.2 to reduce the SETH-hard instances of $\mathrm{CVP}_p$ from [BGS17] to $\mathrm{SVP}_p$. To make the constant $C_p$ as tight as we can, we exploit the structure of these SETH-hard instances. In particular, we observe that these instances themselves actually look quite a bit like our gadget, in that they are in some sense "small perturbations" of the integer lattice with the all-one-halves point as the target. (See Section 4. This is in fact quite common for the $\mathrm{CVP}_p$ instances resulting from hardness proofs.) This allows us to analyze the direct sum resulting from Khot's reduction very accurately in this case.

More values of $p$, and Gap-ETH hardness.

To extend our hardness results to all $p > 2$, we need to construct a gadget with exponentially more close vectors than short vectors for such $p$. We again choose our gadget lattice as $\mathcal{L}^\dagger = \mathbb{Z}^n$, but we now take $\mathbf{t}^\dagger = \alpha \cdot (1, \ldots, 1)$ for some $\alpha > 0$, and we take $r^\dagger = \beta \, n^{1/p}$ for some constant $\beta > 0$.

Our previous gadget was quite convenient in that it was very easy to count the number of close vectors, but for arbitrary $\mathbf{t}^\dagger$ and $r^\dagger$, it is no longer clear how to do this. Fortunately, $\Theta_p$ can be used for this purpose. In particular, we define the shifted version
$$\Theta_p(\tau; \mathbf{t}) := \sum_{\mathbf{z} \in \mathbb{Z}^n} e^{-\tau \|\mathbf{z} - \mathbf{t}\|_p^p} \, .$$
By the same argument as before, we see that
$$N_p(\mathbb{Z}^n, \mathbf{t}, r) \;\le\; e^{\tau r^p} \, \Theta_p(\tau; \mathbf{t}) \qquad \text{for any } \tau > 0 \, .$$
But, we need a lower bound on $N_p(\mathbb{Z}^n, \mathbf{t}, r)$. To that end, we show that the above is actually quite tight. In particular,
$$N_p(\mathbb{Z}^n, \mathbf{t}, r) \;\ge\; 2^{-o(n)} \cdot \inf_{\tau > 0} e^{\tau r^p} \, \Theta_p(\tau; \mathbf{t}) \tag{4}$$
for the relevant range of parameters. I.e., $\Theta_p$ tells us the number of integer points in an $\ell_p$ ball up to lower-order terms. (Eq. (4) was already proven for $p = 2$ by Mazo and Odlyzko [MO90] and for all $p$ by Elkies, Odlyzko, and Rush [EOR91]. See Section 6 for the proof.)

It follows that there exists a $\mathbf{t}^\dagger$ and $r^\dagger$ with exponentially more close integer vectors than short integer vectors in the $\ell_p$ norm if and only if there exists a $\tau > 0$ and a shift $\mathbf{t}$ such that $\Theta_p(\tau; \mathbf{t}) > \Theta_p(\tau; \mathbf{0})$. Furthermore, this holds if and only if $p > 2$. See Section 6 for the proof.

So, to prove Theorem 1.2, we start with the observation that approximating the Exact Set Cover problem is Gap-ETH-hard for some constant approximation factor $\eta > 1$. We then plug our gadget into Khot's reduction from constant-factor-approximate Exact Set Cover to $\mathrm{SVP}_p$. (This reduction uses $\mathrm{CVP}_p$ as an intermediate problem.) The above discussion explains why Eq. (1) is satisfied. And, like Khot, we exploit the approximation factor to show that the number of impostors in a NO instance is much smaller than the number of good vectors in a YES instance.

Building gadgets in $\ell_2$ from lattices with high kissing number.

While we are not able to construct a gadget that satisfies Eq. (1) in the $\ell_2$ norm, we show the existence of such a gadget under the reasonable assumption that for any (sufficiently large) $n$, there exists a lattice of rank $n$ with exponentially many non-zero vectors of minimal norm. I.e., we show that such a gadget exists if there is a family of lattices with exponentially large kissing number. (We actually show that something potentially weaker suffices. See Theorem 5.9.)

To prove this, we show how to choose a $\mathbf{t}^\dagger$ and $r^\dagger$ such that Eq. (1) is satisfied. Indeed, we show that if we choose the vector $\mathbf{t}^\dagger$ uniformly at random from vectors of an appropriate length, then the expected number of lattice vectors within distance $r^\dagger$ from $\mathbf{t}^\dagger$ is exponential in the rank. And, we again exploit the fact that there is a constant-factor gap between the YES and the NO instances to show that the number of impostors in the NO instances is exponentially smaller than the number of good vectors in the YES instances.

1.4 Directions for future work

Our dream result would be an explicit $2^{Cn}$-time lower bound on approximate $\mathrm{SVP}$ for the approximation factors most relevant to cryptography (e.g., $\gamma = \mathrm{poly}(n)$) for some not-too-small explicit constant $C > 0$, under a reasonable complexity-theoretic assumption. This seems very far out of reach. There are even complexity-theoretic barriers towards achieving this result, since $\mathrm{SVP}$ with these approximation factors cannot be NP-hard unless the polynomial-time hierarchy collapses [AR05, Pei08]. So, any proof of something this strong would presumably have to use a non-standard reduction (e.g., a non-deterministic reduction). Nevertheless, we can still dream of such a result and take more modest steps to at least get results closer to this dream.

One obvious such step would be to extend our hardness results to the $p = 2$ case, i.e., to show that there is no $2^{o(n)}$-time algorithm for $\mathrm{SVP}_2$ under reasonable purely complexity-theoretic assumptions (as opposed to our geometric assumption). We provide one potential route towards proving this in Theorem 1.3 (or its more general version in Theorem 5.9), but this would require resolving an older open problem in the geometry of numbers. Perhaps a different approach will prove to be more fruitful?

Alternatively, one could try to improve the approximation factor given by Theorem 1.2. The currently known hardness of approximation proofs for $\mathrm{SVP}$ with large approximation factor (e.g., a large constant or superconstant) work by "boosting" the approximation factor via repeatedly taking the tensor product [Kho05, HR12]. I.e., given a family of lattices for which we know that $\mathrm{SVP}$ is hard to approximate to within some small constant factor $\gamma > 1$, we argue that it is hard to approximate to within a factor of roughly $\gamma^k$ on the $k$-fold tensor product for some $k \ge 2$. Unfortunately, even a single tensor product increases the rank of the lattice quadratically. So, we cannot afford to use this technique to prove reasonable fine-grained hardness of approximation results. We therefore need a new technique.

Yet another direction would be to try to improve the constant $C_p$ in Theorem 1.1. Perhaps the simple gadget that we use is not the best possible.

Finally, in a completely different direction, we note that Theorem 1.1 provides some additional incentive to study algorithms for $\mathrm{SVP}_p$ for $p > 2$ to improve the hidden (very large) constant in the exponent of the running time of existing algorithms. In particular, it would be interesting to see how close we can get to the lower bound given by Theorem 1.1.

Acknowledgments

The authors thank Huck Bennett, Vishwas Bhargav, Noam Elkies, Sasha Golovnev, Pasin Manurangsi, Priyanka Mukhopadhyay, and Oded Regev for helpful discussions. In particular, we thank Noam Elkies for pointing us to [EOR91] and Oded Regev for observing that the gadgets that we need are related to lattices with high kissing number.

2 Preliminaries

We denote column vectors by bold lower-case letters. Matrices are denoted by bold upper-case letters, and we often think of a matrix as a list of column vectors. For $\mathbf{x} \in \mathbb{R}^{d_1}$ and $\mathbf{y} \in \mathbb{R}^{d_2}$, we abuse notation a bit and write $(\mathbf{x}, \mathbf{y}) \in \mathbb{R}^{d_1 + d_2}$ when we should technically write $(\mathbf{x}^T, \mathbf{y}^T)^T$. For $1 \le p < \infty$ and $\mathbf{x} \in \mathbb{R}^d$, we write
$$\|\mathbf{x}\|_p := \Big( \sum_{i=1}^{d} |x_i|^p \Big)^{1/p} \, ,$$
and we write $\|\mathbf{x}\|_\infty := \max_i |x_i|$. Logarithms are base $2$.

Throughout this paper, we consider computational problems over the real numbers. Formally, we should specify a method of representing arbitrary real numbers, and our running times should depend in some way on the bit length of these representations and the cost of doing arithmetic in this representation. For convenience, we ignore these issues (in particular assuming that basic arithmetic operations always have unit cost), and we simply note that all of our reductions remain efficient when instantiated with any reasonable representation of the real numbers. When we say that something is efficiently computable as a function of a dimension $d$, rank $n$, or cardinality $N$, we mean that it is computable in time $\mathrm{poly}(d)$, $\mathrm{poly}(n)$, or $\mathrm{poly}(N)$, respectively (as opposed to polynomial in the logarithm of these numbers).

2.1 Lattice problems

Definition 2.1.

For any $1 \le p \le \infty$ and any $\gamma \ge 1$, the $\gamma$-approximate Shortest Vector Problem in the $\ell_p$ norm ($\gamma$-$\mathrm{SVP}_p$) is the promise problem defined as follows. The input is a (basis for a) lattice $\mathcal{L}$ and a length $r > 0$. It is a YES instance if $\lambda_1^{(p)}(\mathcal{L}) \le r$ and a NO instance if $\lambda_1^{(p)}(\mathcal{L}) > \gamma r$.

Definition 2.2.

For any $1 \le p \le \infty$ and any $\gamma \ge 1$, the $\gamma$-approximate Closest Vector Problem in the $\ell_p$ norm ($\gamma$-$\mathrm{CVP}_p$) is the promise problem defined as follows. The input is a (basis for a) lattice $\mathcal{L}$, a target $\mathbf{t}$, and a distance $r > 0$. It is a YES instance if $\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) \le r$ and a NO instance if $\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) > \gamma r$.

When $\gamma = 1$, we simply write $\mathrm{SVP}_p$ and $\mathrm{CVP}_p$. We will need the following (simplified version of a) celebrated result, due to Figiel, Lindenstrauss, and Milman [FLM76].

Theorem 2.3 ([FLM76]).

For any $\varepsilon > 0$, $1 \le p < \infty$, and any positive integers $n$ and $m$ with $m \ge C_{p,\varepsilon} \, n^{\max\{1,\, p/2\}}$ for some constant $C_{p,\varepsilon}$ depending only on $p$ and $\varepsilon$, there exists a linear map $A : \mathbb{R}^n \to \mathbb{R}^m$ such that for any $\mathbf{x} \in \mathbb{R}^n$,
$$(1 - \varepsilon) \, \|\mathbf{x}\|_2 \;\le\; \|A \mathbf{x}\|_p \;\le\; (1 + \varepsilon) \, \|\mathbf{x}\|_2 \, .$$
Regev and Rosen showed how theorems like this can be applied to obtain reductions between lattice problems in different norms [RR06]. Here, we only need the following immediate consequence of the above theorem. (The non-uniform reduction can be converted into an efficient randomized reduction, and a similar result holds for $\mathrm{CVP}$, but we do not need this for our use case.)

Corollary 2.4.

For any constants $\varepsilon > 0$ and $1 \le p < \infty$, there is an efficient rank-preserving non-uniform reduction from $\gamma$-$\mathrm{SVP}_2$ in dimension $d$ to $\gamma'$-$\mathrm{SVP}_p$ in dimension $\mathrm{poly}(d)$, where $\gamma' := (1 - \varepsilon)\gamma$.
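To make the flavor of such embeddings concrete, here is a small numerical sketch (our own illustration; the Gaussian random matrix and the empirical normalization are assumptions for this demo, not the construction from [FLM76]) showing that a suitably scaled random linear map sends $\ell_2$ norms to $\ell_p$ norms with small multiplicative distortion on random test vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_l2_to_lp_embedding(n, m, p, samples=20000):
    """Return an m x n matrix A such that ||Ax||_p is approximately ||x||_2,
    using i.i.d. Gaussian entries normalized by an empirical estimate of
    (m * E[|g|^p])^(1/p)."""
    g = rng.standard_normal(samples)
    scale = (np.mean(np.abs(g) ** p) * m) ** (1.0 / p)
    return rng.standard_normal((m, n)) / scale

n, m, p = 20, 4000, 3.0
A = random_l2_to_lp_embedding(n, m, p)
for _ in range(3):
    x = rng.standard_normal(n)
    ratio = np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x)
    print(f"||Ax||_p / ||x||_2 = {ratio:.3f}")  # close to 1 for large m
```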

2.2 Sparsification

A lattice vector $\mathbf{y} \in \mathcal{L}$ is non-primitive if $\mathbf{y} = k \mathbf{y}'$ for some scalar $k \ge 2$ and lattice vector $\mathbf{y}' \in \mathcal{L}$. Otherwise, $\mathbf{y}$ is primitive. (Notice that $\mathbf{0}$ is non-primitive.) For a radius $r > 0$, we write
$$N_p^{\mathrm{prim}}(\mathcal{L}, r) := \tfrac{1}{2} \, \big| \{ \mathbf{y} \in \mathcal{L} \text{ primitive} \;:\; \|\mathbf{y}\|_p \le r \} \big|$$
for the number of primitive lattice vectors of length at most $r$ in the $\ell_p$ norm (counting $\pm \mathbf{y}$ only once). We will use the following generalization of a sparsification theorem from [Ste16] to all $\ell_p$ norms.

Theorem 2.5 ([Ste16, Proposition 4.2]).

There is an efficient randomized algorithm that takes as input (a basis for) a lattice $\mathcal{L}$ of rank $n$ and a prime $q$ and outputs a sublattice $\mathcal{L}' \subseteq \mathcal{L}$ of rank $n$ such that, for any radius $r > 0$, each primitive lattice vector of $\ell_p$ norm at most $r$ lies in $\mathcal{L}'$ with probability roughly $1/q$, and these events are nearly pairwise independent,
as long as $N_p^{\mathrm{prim}}(\mathcal{L}, r)$ is sufficiently small relative to $q$, where $N_p^{\mathrm{prim}}(\mathcal{L}, r)$ is the number of primitive lattice vectors of length at most $r$ in the $\ell_p$ norm (counted up to sign). Furthermore, $q \mathcal{L} \subseteq \mathcal{L}'$ always.

We note in passing that the algorithm works by taking a random linear equation $\langle \mathbf{z}, \mathbf{a} \rangle \equiv 0 \pmod q$ for uniformly random $\mathbf{z} \in \mathbb{Z}_q^n$ and setting $\mathcal{L}'$ to be the set of lattice vectors whose coordinates $\mathbf{a}$ in some arbitrary fixed basis satisfy this linear equation. (This idea was originally introduced by Khot.)
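As an illustration of this construction (a hypothetical sketch with our own names; the basis-completion step below is one standard way to realize the sublattice, not necessarily the one used in [Ste16]), the following Python snippet builds a basis for $\{\mathbf{B}\mathbf{a} : \langle \mathbf{z}, \mathbf{a}\rangle \equiv 0 \pmod q\}$ when some coordinate of $\mathbf{z}$ is invertible modulo $q$:

```python
import numpy as np

def sparsify(B, q, rng=np.random.default_rng()):
    """Return a basis of the sublattice {B a : <z, a> = 0 (mod q)} for a
    uniformly random z in Z_q^n (resampled if z = 0 mod q)."""
    n = B.shape[1]
    while True:
        z = rng.integers(0, q, size=n)
        nonzero = np.flatnonzero(z)
        if nonzero.size > 0:
            break
    i = nonzero[0]                      # coordinate with z_i invertible mod q
    z_i_inv = pow(int(z[i]), -1, q)     # requires q prime
    # Basis of the integer solution lattice {a : <z, a> = 0 mod q}:
    #   q * e_i, and e_j + m_j * e_i with m_j := (-z_j * z_i^{-1}) mod q, j != i.
    T = np.zeros((n, n), dtype=np.int64)
    T[i, 0] = q
    col = 1
    for j in range(n):
        if j == i:
            continue
        T[j, col] = 1
        T[i, col] = (-(int(z[j]) * z_i_inv)) % q
        col += 1
    return B @ T                        # columns generate the sublattice

B = np.eye(2, dtype=np.int64)           # Z^2 as a toy example
print(sparsify(B, q=5))
```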

2.3 Fine-grained assumptions

Recall that, for integer $k \ge 2$, a $k$-SAT formula is the conjunction of clauses, where each clause is the disjunction of at most $k$ literals. I.e., $k$-SAT formulas have the form $\bigwedge_i (\ell_{i,1} \vee \cdots \vee \ell_{i,k})$, where each literal $\ell_{i,j}$ is either $x_a$ or $\lnot x_a$ for some boolean variable $x_a$.

Definition 2.6.

For any $k \ge 2$, the decision problem $k$-SAT is defined as follows. The input is a $k$-SAT formula. It is a YES instance if there exists an assignment to the variables that makes the formula evaluate to true and a NO instance otherwise.

Definition 2.7.

For any $k \ge 2$, the decision problem Max-$k$-SAT is defined as follows. The input is a $k$-SAT formula and an integer $W \ge 0$. It is a YES instance if there exists an assignment to the variables such that at least $W$ of the clauses evaluate to true and a NO instance otherwise.

Notice that $k$-SAT is a special case of Max-$k$-SAT.

Impagliazzo and Paturi introduced the following celebrated and well-studied hypothesis concerning the fine-grained complexity of $k$-SAT [IP99].

Definition 2.8 (SETH).

The (randomized) Strong Exponential Time Hypothesis ((randomized) SETH) asserts that, for every constant $\varepsilon > 0$, there exists a constant $k \ge 3$ such that there is no $2^{(1-\varepsilon)n}$-time (randomized) algorithm for $k$-SAT formulas with $n$ variables.

Definition 2.9.

For $k \ge 2$ and $\delta \in (0,1)$, the promise problem Gap-$k$-SAT$_\delta$ is defined as follows. The input is a $k$-SAT formula with $m$ clauses. It is a YES instance if the formula is satisfiable, and it is a NO instance if the maximal number of simultaneously satisfiable clauses is strictly less than $\delta m$.

Dinur [Din16] and Manurangsi and Raghavendra [MR16] recently introduced the following natural assumption, called Gap-ETH. We also consider a non-uniform variant.

Definition 2.10 (Gap-ETH).

The (randomized) Gap-Exponential Time Hypothesis ((randomized) Gap-ETH) asserts that there exists a constant $\delta \in (0,1)$ such that there is no (randomized) $2^{o(n)}$-time algorithm for Gap-$3$-SAT$_\delta$ over $n$ variables.

Non-uniform Gap-ETH asserts that there is no circuit family of size $2^{o(n)}$ for Gap-$3$-SAT$_\delta$ over $n$ variables.

Definition 2.11.

For $k \ge 2$, $\delta \in (0,1)$, and $B > 0$, the promise problem Gap-$k$-SAT$_{\delta,B}$ is defined as follows. The input is a $k$-SAT formula with $m$ clauses such that each variable appears in at most $B$ clauses. It is a YES instance if the formula is satisfiable, and it is a NO instance if the maximal number of simultaneously satisfiable clauses is at most $\delta m$.

We will need the following result due to Manurangsi and Raghavendra [MR16].

Theorem 2.12 ([MR16]).

Unless Gap-ETH is false, there exist constants $\delta \in (0,1)$ and $B > 0$ such that there is no $2^{o(n)}$-time algorithm for Gap-$3$-SAT$_{\delta,B}$.

Definition 2.13.

For $\eta \ge 1$, the promise problem $\eta$-approximate Exact Set Cover is defined as follows. The input consists of sets $S_1, \ldots, S_N \subseteq U$ with $\bigcup_i S_i = U$ and a positive integer "size bound" $Z$. It is a YES instance if there exist disjoint sets $S_{i_1}, \ldots, S_{i_z}$ such that $\bigcup_j S_{i_j} = U$ for some $z \le Z$. It is a NO instance if for every collection of (not necessarily disjoint) sets $S_{i_1}, \ldots, S_{i_z}$ with $\bigcup_j S_{i_j} = U$, $z > \eta Z$.

The following reduction is due to [Man17].

Theorem 2.14.

For any constants $k \ge 2$, $\delta \in (0,1)$, and $B > 0$, there is a polynomial-time Karp reduction from Gap-$k$-SAT$_{\delta,B}$ on $n$ variables to $\eta$-approximate Exact Set Cover with $|U| \le c_1 n$ and $N \le c_2 n$ for some constants $\eta > 1$, $c_1$, and $c_2$ depending only on $k$, $\delta$, and $B$.

Proof.

The reduction takes as input a set of clauses $C_1, \ldots, C_m$ over a set of variables $x_1, \ldots, x_n$, where each variable is in at most $B$ clauses. We assume without loss of generality that each variable or its negation is in at least one clause, and so $m \ge n/k$.

Define $U$ to be the set $\{C_1, \ldots, C_m\} \cup \{x_1, \ldots, x_n\}$. For each literal $\ell \in \{x_i, \lnot x_i\}$ and for each set $S$ of clauses containing $\ell$, we create the set $\{x_i\} \cup S$ in our Exact Set Cover instance. I.e., a literal that is contained in exactly $j$ clauses will give rise to exactly $2^j$ sets. The output instance has size bound $Z := n$; it should be a YES instance if there exists an exact set cover of size at most $n$, and a NO instance otherwise.

It is easy to see that the reduction is efficient and that $|U| = m + n \le (B+1)n$ and $N \le 2^{B+1} n$. We now argue correctness.

Suppose the Gap-$k$-SAT$_{\delta,B}$ instance is a YES instance, i.e., the formula is satisfiable. Then there exists a satisfying assignment obtained by setting each literal $\ell_i$ to true, where each $\ell_i$ is either $x_i$ or $\lnot x_i$. Thus, for all $i$, let $S_i$ be the set of clauses containing $\ell_i$ but not containing any of $\ell_1, \ldots, \ell_{i-1}$. Clearly, the sets $\{x_i\} \cup S_i$ are disjoint, and $\bigcup_i (\{x_i\} \cup S_i) = U$, since $(\ell_1, \ldots, \ell_n)$ is a satisfying assignment. Thus, these sets form an exact set cover of $U$ of size $n$.

Suppose, on the other hand, that the Gap-$k$-SAT$_{\delta,B}$ instance is a NO instance, i.e., any assignment satisfies at most $\delta m$ clauses. Let $T_1, \ldots, T_z$ be a set cover of $U$, where the sets are not necessarily disjoint. We wish to show that $z \ge \eta n$ for some constant $\eta > 1$.

For a literal $\ell$, let $A_\ell$ be the set of all clauses containing $\ell$. Without loss of generality, we can assume that each set $T_j$ equals either $\{x_i\} \cup A_{x_i}$ or $\{x_i\} \cup A_{\lnot x_i}$ for some $i$. Since every variable element $x_i$ must be covered, the cover contains at least one set associated with each variable, so the total number of variables for which sets associated with both $x_i$ and $\lnot x_i$ are in the set cover is at most $z - n$. Consider the assignment that, for each variable, satisfies some literal whose associated set is in the cover. Then the total number of clauses covered by $T_1, \ldots, T_z$ is at most $\delta m + (z - n) B$, so we must have $\delta m + (z - n) B \ge m$. This implies that
$$z \;\ge\; n + \frac{(1 - \delta) m}{B} \;\ge\; \Big( 1 + \frac{1 - \delta}{k B} \Big) n \, ,$$
as needed. ∎
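The following Python sketch (our own illustration of the construction described in this proof, with hypothetical names and data structures) builds the Exact Set Cover instance from a bounded-occurrence CNF formula; each output set consists of one variable element together with a subset of the clauses containing one of its literals:

```python
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def sat_to_exact_set_cover(clauses, num_vars):
    """clauses: list of clauses, each a tuple of non-zero ints
    (DIMACS-style: literal v means x_v, -v means NOT x_v).
    Returns (universe, sets, size_bound) for the Exact Set Cover instance."""
    universe = {('clause', j) for j in range(len(clauses))} | \
               {('var', i) for i in range(1, num_vars + 1)}
    sets = []
    for i in range(1, num_vars + 1):
        for literal in (i, -i):
            occurrences = [j for j, c in enumerate(clauses) if literal in c]
            # One set per subset of the clauses containing this literal.
            for subset in powerset(occurrences):
                sets.append(frozenset({('var', i)} |
                                      {('clause', j) for j in subset}))
    return universe, sets, num_vars   # size bound Z := n

# Tiny example: (x1 or x2) and (not x1 or x2)
universe, sets, Z = sat_to_exact_set_cover([(1, 2), (-1, 2)], num_vars=2)
print(len(universe), len(sets), Z)
```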

3 A reduction from a variant of CVP to SVP

As we discussed in Section 1.2, the "naive reduction" from $\mathrm{CVP}_p$ to $\mathrm{SVP}_p$ simply takes a $\mathrm{CVP}_p$ instance consisting of a basis $\mathbf{B}$ for a lattice $\mathcal{L}$, target $\mathbf{t}$, and distance $r$, and constructs the $\mathrm{SVP}_p$ instance given by the basis $\mathbf{B}'$ for $\mathcal{L}'$ of the form
$$\mathbf{B}' := \begin{pmatrix} \mathbf{B} & \mathbf{t} \\ \mathbf{0}^T & s \end{pmatrix}$$
and length $r' := (r^p + s^p)^{1/p}$, where $s > 0$. Notice that, if the input is a YES instance (i.e., $\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) \le r$), then $\lambda_1^{(p)}(\mathcal{L}') \le r'$.

If the input $\mathrm{CVP}_p$ instance is a NO instance (i.e., if $\mathrm{dist}_p(\mathbf{t}, \mathcal{L}) > r$), then we call a non-zero vector $\mathbf{x}' \in \mathcal{L}'$ annoying if $\|\mathbf{x}'\|_p \le r'$. As Khot showed, we can sparsify $\mathcal{L}'$ (as in Theorem 2.5) to make this naive reduction work as long as there are significantly fewer annoying vectors than close vectors. We therefore define a rather unnatural quantity below that exactly counts the number of annoying vectors in a NO instance.

For $1 \le p \le \infty$, a lattice $\mathcal{L}$, target $\mathbf{t}$, and distances $r, s > 0$, we define
$$A_p(\mathcal{L}, \mathbf{t}, r, s) := \sum_{a \in \mathbb{Z},\; |a|^p s^p \le r^p + s^p} N_p\Big(\mathcal{L},\; a\mathbf{t},\; \big(r^p + s^p - |a|^p s^p\big)^{1/p}\Big) \, .$$
Notice that $A_p(\mathcal{L}, \mathbf{t}, r, s)$ does in fact count the number of annoying vectors resulting from the above reduction (up to sign and the zero vector). In particular, the summand is the number of vectors $(\mathbf{y} - a\mathbf{t}, -as) \in \mathcal{L}'$ of length at most $r'$ for some fixed $a$.
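To make this quantity concrete, here is a small brute-force Python sketch (an illustration with hypothetical parameter names, feasible only for tiny instances) that counts the non-zero vectors $(\mathbf{B}\mathbf{z} - a\mathbf{t}, -as)$ of $\ell_p$ norm at most $(r^p + s^p)^{1/p}$ directly from the definition, up to sign:

```python
import itertools
import numpy as np

def count_annoying(B, t, r, s, p, coeff_bound=4):
    """Brute-force count of non-zero vectors (B z - a t, -a s) with l_p norm
    at most (r^p + s^p)^(1/p), counted up to sign.  Tiny instances only."""
    n = B.shape[1]
    threshold = (r ** p + s ** p) ** (1.0 / p)
    count = 0
    rng = range(-coeff_bound, coeff_bound + 1)
    for a in rng:
        for z in itertools.product(rng, repeat=n):
            if a == 0 and not any(z):
                continue  # skip the zero vector
            v = np.append(B @ np.array(z) - a * t, -a * s)
            if np.sum(np.abs(v) ** p) ** (1.0 / p) <= threshold:
                count += 1
    return count // 2  # each vector is counted together with its negation

B = np.array([[2.0, 0.0], [1.0, 2.0]])
t = np.array([0.9, 1.1])
print(count_annoying(B, t, r=1.0, s=1.0, p=2))
```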

We now define the class of $\mathrm{CVP}_p$ instances on which this sparsification-based reduction works.

Definition 3.1.

For $1 \le p \le \infty$, $A$ (the number of annoying vectors), $G$ (the number of "good" or close vectors), and $\gamma \ge 1$ (the approximation factor), the promise problem $(A, G)$-$\gamma$-$\mathrm{CVP}_p$ is defined as follows. The input is a (basis for a) lattice $\mathcal{L}$, target $\mathbf{t}$, and distances