Solving the Closest Vector Problem in 2^{n} Time—The Discrete Gaussian Strikes Again!


Divesh Aggarwal
Divesh.Aggarwal@epfl.ch
Department of Computer Science, EPFL.
   Daniel Dadush  
dadush@cwi.nl
Centrum Wiskunde & Informatica, Amsterdam. Funded by NWO project number 613.009.031 in the research cluster DIAMANT.
   Noah Stephens-Davidowitz  
noahsd@cs.nyu.edu
Courant Institute of Mathematical Sciences, New York University. This material is based upon work supported by the National Science Foundation under Grant No. CCF-1320188. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Abstract

We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^n)$-time and $\widetilde{O}(2^n)$-space algorithm of Micciancio and Voulgaris [MV13].

We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $\mathcal{L} - \mathbf{t}$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP with the very good approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $\mathbf{t}$ can be grouped into “lower-dimensional clusters,” and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that “behaves well with these clusters.” Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP.

The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.

Keywords. Discrete Gaussian, Closest Vector Problem, Lattice Problems.

1 Introduction

A lattice $\mathcal{L} \subset \mathbb{R}^n$ is the set of all integer combinations of $n$ linearly independent vectors $\mathbf{b}_1, \ldots, \mathbf{b}_n \in \mathbb{R}^n$. The matrix $\mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_n)$ is called a basis of $\mathcal{L}$, and we write $\mathcal{L}(\mathbf{B})$ for the lattice generated by $\mathbf{B}$.

The two most important computational problems on lattices are the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Given a basis for a lattice $\mathcal{L} \subset \mathbb{R}^n$, SVP asks us to compute a non-zero vector in $\mathcal{L}$ of minimal length, and CVP asks us to compute a lattice vector nearest in Euclidean distance to a target vector $\mathbf{t} \in \mathbb{R}^n$.

Starting with the seminal work of [LLL82], algorithms for solving these problems either exactly or approximately have been studied intensely. Such algorithms have found applications in factoring polynomials over rationals [LLL82], integer programming [LJ83, Kan87, DPV11], cryptanalysis [Odl90, JS98, NS01], checking the solvability by radicals [LM83], and solving low-density subset-sum problems [CJL92]. More recently, many powerful cryptographic primitives have been constructed whose security is based on the worst-case hardness of these or related lattice problems [Ajt96, MR07, Gen09, Reg09, BV11, BLP13, BV14].

In their exact forms, both problems are known to be NP-hard (although SVP is only known to be NP-hard under randomized reductions), and they are even hard to approximate to within a factor of $n^{c/\log\log n}$ for some constant $c > 0$ under reasonable complexity assumptions [ABSS93, Ajt98, CN98, BS99, DKRS03, Mic01, Kho05, HR12]. CVP is thought to be the “harder” of the two problems, as there is a simple reduction from SVP to CVP that preserves the dimension of the lattice [GMSS99], even in the approximate case, while there is no known reduction in the other direction that preserves the dimension. (Since both problems are NP-complete, there is necessarily an efficient reduction from CVP to SVP. However, all known reductions either blow up the approximation factor or the dimension of the lattice by a polynomial factor [Kan87, DH11]. Since we are interested in an algorithm for solving exact CVP whose running time is exponential in the dimension, such reductions are not useful for us.) Indeed, CVP is in some sense nearly “complete for lattice problems,” as there are known dimension-preserving reductions from nearly all important lattice problems to CVP, such as the Shortest Independent Vectors Problem, the Subspace Avoidance Problem, the Generalized Closest Vector Problem, and the Successive Minima Problem [Mic08]. (The Lattice Isomorphism Problem is an important exception.) None of these problems has a known dimension-preserving reduction to SVP.

Exact algorithms for CVP and SVP have a rich history. Kannan initiated their study with an enumeration-based $n^{O(n)}$-time algorithm for CVP [Kan87], and many others improved upon his technique to achieve better running times [Hel85, HS07, MW15]. Since these algorithms solve CVP, they also imply solutions for SVP and all of the problems listed above. (Notably, these algorithms use only polynomial space.)

For over a decade, these $n^{O(n)}$-time algorithms remained the state of the art until, in a major breakthrough, Ajtai, Kumar, and Sivakumar (AKS) published the first $2^{O(n)}$-time algorithm for SVP [AKS01]. The AKS algorithm is based on “randomized sieving,” in which many randomly generated lattice vectors are iteratively combined to create successively shorter lattice vectors. The work of AKS led to two major questions: First, can CVP be solved in $2^{O(n)}$ time? And second, what is the best achievable constant in the exponent? Much work went into solving both of these problems using AKS’s sieving technique [AKS01, AKS02, NV08, AJ08, BN09, PS09, MV10, HPS11], culminating in a $2^{2.465n+o(n)}$-time algorithm for SVP and a $2^{O(n)} \cdot (1/\epsilon)^{O(n)}$-time algorithm for $(1+\epsilon)$-approximate CVP.

But exact CVP is a much subtler problem than approximate CVP or exact SVP. In particular, for any approximation factor $\gamma > 1$, a target vector $\mathbf{t}$ can have arbitrarily many $\gamma$-approximate closest vectors in the lattice $\mathcal{L}$. For example, $\mathcal{L}$ might contain many vectors whose length is arbitrarily smaller than the distance between $\mathbf{t}$ and the lattice, so that any closest lattice vector is “surrounded by” many $\gamma$-approximate closest vectors. Randomized sieving algorithms for CVP effectively sample from a distribution that assigns weight to each lattice vector $\mathbf{y}$ according to some smooth function of $\|\mathbf{y} - \mathbf{t}\|$. Such algorithms face a fundamental barrier in solving exact CVP: they can “barely distinguish between” $(1+\epsilon)$-approximate closest vectors and exact closest vectors for very small $\epsilon$. (This problem does not arise when solving SVP because upper bounds on the lattice kissing number show that there cannot be arbitrarily many $(1+\epsilon)$-approximate shortest lattice vectors. Indeed, such upper bounds play a crucial role in the analysis of sieving algorithms for exact SVP.)

So, the important question of whether CVP could be solved exactly in singly exponential time remained open until the landmark algorithm of Micciancio and Voulgaris [MV13] (MV), which built upon the approach of Sommer, Feder, and Shalvi [SFS09]. MV showed a deterministic $\widetilde{O}(4^n)$-time and $\widetilde{O}(2^n)$-space algorithm for exact CVP. The MV algorithm uses the Voronoi cell of the lattice—the centrally symmetric polytope corresponding to the points closer to the origin than to any other lattice point. Until very recently, this algorithm had the best known asymptotic running time for both SVP and CVP. Prior to this work, it was the only known algorithm to solve CVP exactly in $2^{O(n)}$ time.

Very recently, Aggarwal, Dadush, Regev, and Stephens-Davidowitz (ADRS) gave a $2^{n+o(n)}$-time and space algorithm for SVP [ADRS15]. They accomplished this by giving an algorithm that solves the Discrete Gaussian Sampling problem (DGS) over a lattice $\mathcal{L}$. (As this is the starting point for our work, we describe their techniques in some detail below.) They also showed how to use their techniques to approximate CVP to within a factor of $1.97$ in time $2^{n+o(n)}$, but like AKS a decade earlier, they left open a natural question: is there a corresponding algorithm for exact CVP (or even $(1+\epsilon)$-approximate CVP for small $\epsilon$)?

Main contribution.

Our main result is a $2^{n+o(n)}$-time and space algorithm that solves CVP exactly via discrete Gaussian sampling. We achieve this in three steps. First, we show how to modify the ADRS sampling algorithm to solve DGS over lattice shifts, $\mathcal{L} - \mathbf{t}$. While the actual algorithm is a trivial generalization of ADRS, the analysis uses substantial new ideas. This result alone immediately gives a $2^{n+o(n)}$-time algorithm to approximate CVP to within any approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target $\mathbf{t}$ can be grouped into “lower-dimensional clusters.” We use this to show a reduction from exact CVP to a variant of approximate CVP. Third, we show that our sampling algorithm actually solves this variant of approximate CVP, yielding a $2^{n+o(n)}$-time algorithm for exact CVP.

We find this result to be quite surprising as, in spite of much research in this area, all previous “truly randomized” algorithms only gave approximate solutions to CVP. Indeed, this barrier seemed inherent, as we described above. Our solution depends crucially on the large number of outputs from our sampling algorithm and new properties of the discrete Gaussian.

1.1 Our techniques

The ADRS algorithm for centered DGS and our generalization.

The centered discrete Gaussian distribution over a lattice $\mathcal{L}$ with parameter $s > 0$, denoted $D_{\mathcal{L},s}$, is the probability distribution obtained by assigning to each vector $\mathbf{y} \in \mathcal{L}$ a probability proportional to its Gaussian mass, $\rho_s(\mathbf{y}) = e^{-\pi \|\mathbf{y}\|^2/s^2}$. As the parameter $s$ becomes smaller, $D_{\mathcal{L},s}$ becomes more concentrated on the shorter vectors in the lattice. So, for a properly chosen parameter, a sample from $D_{\mathcal{L},s}$ is guaranteed to be a shortest lattice vector with not-too-small probability.
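To make the definition concrete, the following Python sketch (our illustration, not part of the paper) computes the Gaussian mass $\rho_s(\mathbf{y}) = e^{-\pi\|\mathbf{y}\|^2/s^2}$ and samples from the centered discrete Gaussian by brute force over a bounded box of lattice points; the basis, parameter, and truncation bound are arbitrary illustrative choices, and the approach is only feasible in very small dimension.

```python
import itertools
import numpy as np

def gaussian_mass(y, s):
    """rho_s(y) = exp(-pi * ||y||^2 / s^2), the Gaussian mass of a point y."""
    return np.exp(-np.pi * np.dot(y, y) / s**2)

def sample_centered_dgs(B, s, coeff_bound=10, rng=None):
    """Sample (approximately) from D_{L(B), s} by brute-force enumeration of all
    lattice points with coefficients in [-coeff_bound, coeff_bound].
    Illustration only: exponentially slower than the samplers discussed here."""
    rng = rng or np.random.default_rng()
    points = [B @ np.array(c) for c in
              itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[1])]
    masses = np.array([gaussian_mass(p, s) for p in points])
    return points[rng.choice(len(points), p=masses / masses.sum())]

# As s shrinks, samples from D_{Z^2, s} concentrate on the shortest vectors.
B = np.eye(2)
print(sample_centered_dgs(B, s=0.8))
```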

ADRS’s primary contribution was an algorithm that solves DGS in the centered case, i.e., an algorithm that samples from $D_{\mathcal{L},s}$ for any $s > 0$. To achieve this, they show how to build a discrete Gaussian “combiner,” which takes samples from $D_{\mathcal{L},s}$ and converts them to samples from $D_{\mathcal{L},s/\sqrt{2}}$. The combiner is based on the simple but powerful observation that the average of two vectors sampled from $D_{\mathcal{L},s}$ is distributed exactly as $D_{\mathcal{L},s/\sqrt{2}}$, provided that we condition on the result being in the lattice [ADRS15, Lemma 3.4]. Note that the average of two lattice vectors is in the lattice if and only if they lie in the same coset of $2\mathcal{L}$. The ADRS algorithm therefore starts with many samples from $D_{\mathcal{L},s}$ for some very high parameter $s$ (such samples can be computed efficiently [Kle00, GPV08, BLP13]) and repeatedly takes the average of carefully chosen pairs of vectors that lie in the same coset of $2\mathcal{L}$ to obtain samples from the discrete Gaussian with a much lower parameter.
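The following Python fragment is a minimal sketch of the combining step just described, under the assumption that lattice vectors are represented by their integer coefficient vectors with respect to a fixed basis (so that the coset of $2\mathcal{L}$ is determined by the coefficients mod 2). It pairs vectors within each coset and outputs their averages; the rejection-sampling step that ADRS use to decide which pairs to keep is deliberately omitted.

```python
from collections import defaultdict
import numpy as np

def combine_pairs(coeff_vectors):
    """Given lattice vectors as integer coefficient vectors, bucket them by their
    coset of 2L (coefficients mod 2), pair vectors within each bucket, and return
    the averages of the pairs. Since paired vectors agree mod 2, each average is
    again an integer coefficient vector, i.e., a lattice vector.
    (Sketch only: the ADRS combiner chooses pairs by rejection sampling over the
    cosets rather than pairing greedily as done here.)"""
    buckets = defaultdict(list)
    for c in coeff_vectors:
        c = np.asarray(c, dtype=np.int64)
        buckets[tuple(c % 2)].append(c)
    averages = []
    for vectors in buckets.values():
        for x, y in zip(vectors[0::2], vectors[1::2]):
            averages.append((x + y) // 2)  # exact division: x and y agree mod 2
        # an unpaired leftover vector in a bucket is simply discarded
    return averages
```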

The ADRS algorithm chooses which vectors to combine via rejection sampling applied to the cosets of $2\mathcal{L}$, and a key part of the analysis shows that this rejection sampling does not “throw out” too many vectors. In particular, ADRS show that, if a single run of the combiner starts with $M$ samples from $D_{\mathcal{L},s}$, then the output will be roughly $\chi M$ samples from $D_{\mathcal{L},s/\sqrt{2}}$, where the “loss factor” $\chi$ is equal to the ratio of the collision probability of $D_{\mathcal{L},s}$ mod $2\mathcal{L}$ divided by the maximal weight of a single coset (with some smaller factors that we ignore here for simplicity). It is not hard to check that for any probability distribution over $N$ elements, this loss factor is lower bounded by $1/N$. This observation does not suffice, however, since the combiner must be run many times to solve SVP. It is easy to see that the central coset, $2\mathcal{L}$, has maximal weight proportional to $\rho_{s/2}(\mathcal{L})$, and ADRS show that the collision probability is proportional to $\rho_{s/\sqrt{2}}(\mathcal{L})^2$. Indeed, the loss factor for a single step is given by $\chi \approx \rho_{s/\sqrt{2}}(\mathcal{L})^2/(\rho_s(\mathcal{L}) \cdot \rho_{s/2}(\mathcal{L}))$. Therefore, the total loss factor accumulated after running the combiner $\ell$ times is given by a telescoping product, which is easily bounded by $2^{-n/2}$. So, (ignoring small factors) their sampler returns at least $2^{n/2}$ samples from the target distribution. The ADRS combiner requires $2^{n+o(n)}$ vectors “just to get started,” so they obtain a $2^{n+o(n)}$-time algorithm for centered DGS that yields $2^{n/2}$ samples.
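For intuition about the quantities in this loss factor, the following Python sketch computes (a truncated version of) the Gaussian mass of each coset of $2\mathcal{L}$ in a toy two-dimensional lattice, along with the resulting collision probability, maximal coset weight, and their ratio. The basis, parameter, and truncation bound are illustrative assumptions, and the brute-force enumeration is of course nothing like the actual algorithm.

```python
import itertools
import numpy as np

def coset_masses(B, s, coeff_bound=12):
    """Truncated Gaussian mass rho_s of each coset of 2L in L = L(B), indexed by
    the parity of the coefficient vector (brute force; illustration only)."""
    masses = {}
    for c in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[1]):
        y = B @ np.array(c)
        key = tuple(ci % 2 for ci in c)
        masses[key] = masses.get(key, 0.0) + np.exp(-np.pi * np.dot(y, y) / s**2)
    return masses

B = np.array([[1.0, 0.3], [0.0, 1.2]])  # columns are the basis vectors (arbitrary choice)
probs = np.array(list(coset_masses(B, s=1.0).values()))
probs /= probs.sum()                    # distribution of D_{L,s} over the cosets of 2L
collision_probability = float(np.sum(probs ** 2))
max_coset_weight = float(probs.max())
print(collision_probability / max_coset_weight)  # the per-step "loss factor" (up to small factors)
```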

In this work, we show that some of the above analysis carries over easily to the more general case of shifted discrete Gaussians, $D_{\mathcal{L}-\mathbf{t},s}$ for $\mathbf{t} \in \mathbb{R}^n$—the distribution that assigns Gaussian weight $\rho_s(\mathbf{w})$ to each $\mathbf{w} \in \mathcal{L} - \mathbf{t}$. As in the centered case, the average of two vectors sampled from $D_{\mathcal{L}-\mathbf{t},s}$ is distributed exactly as $D_{\mathcal{L}-\mathbf{t},s/\sqrt{2}}$, provided that we condition on the two vectors landing in the same coset of $2\mathcal{L}$. (See Lemma 4.1 and Proposition 4.2.) We can therefore use essentially the same combiner as ADRS to obtain discrete Gaussian samples from the shifted discrete Gaussian with low parameters.

The primary technical challenge in this part of our work is to bound the accumulated loss factor. While the loss factor for a single run of the combiner is again equal to the ratio of the collision probability over the cosets to the maximal weight of a coset, this ratio does not seem to have such a nice representation in the shifted case. (See Corollary 4.2.) In particular, it is no longer clear which coset has maximal weight, and this coset can even vary with $s$! To solve this problem, we first introduce a new inequality (Corollary 3.3), which relates the maximal weight of a coset with parameter $s$ to the maximal weight of a coset with parameter $\sqrt{2} s$. (This inequality is closely related to that of [RS15], and it (or the more general Lemma 3.2) may be of independent interest. Indeed, we use it in two seemingly unrelated contexts in the sequel—to bound the loss factor of the sampler, and to show that cosets that contain a closest vector have relatively high weight.) We then show how to use this inequality to inductively bound the accumulated loss factor by (ignoring small factors)

$$2^{-n} \cdot \frac{\rho_s(\mathcal{L} - \mathbf{t})}{\max_{\mathbf{c} \in \mathcal{L}/(2\mathcal{L})} \rho_s(\mathbf{c} - \mathbf{t})} \; . \tag{1}$$

So, we only need to start out with $2^{n+o(n)}$ vectors to guarantee that our sampler will return at least one vector. (Like the ADRS algorithm, our algorithm requires at least $2^{n}$ vectors “just to get started.”)

This is already sufficient to obtain a $2^{n+o(n)}$-time solution to approximate CVP for any approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$. (See Corollary 4.8.) Below, we show that the loss factor in (1) is essentially exactly what we need to construct our exact CVP algorithm. In particular, we note that if we start with $2^{n+o(n)}$ vectors, then the number of output samples is

$$2^{o(n)} \cdot \frac{\rho_s(\mathcal{L} - \mathbf{t})}{\max_{\mathbf{c} \in \mathcal{L}/(2\mathcal{L})} \rho_s(\mathbf{c} - \mathbf{t})} \; . \tag{2}$$

I.e., we obtain roughly enough samples to “see each coset whose mass is within a $2^{o(n)}$ factor of the maximum.”

A reduction from exact CVP to a variant of approximate CVP.

In order to solve exact CVP, we consider a new variant of approximate CVP called the cluster Closest Vector Problem (cCVP). The goal of cCVP is to find a vector that is not only very close to the target, but also very close to an exact CVP solution. More specifically, a lattice vector $\mathbf{y}$ is a valid solution if there exists an exact closest vector $\mathbf{x}$ such that $\mathbf{y} - \mathbf{x}$ is very short. We will show below that approximate closest lattice vectors can be grouped into “clusters” contained in balls of small radius. If the approximation factor is sufficiently close to one, then we can find a lower-rank sublattice $\mathcal{L}' \subset \mathcal{L}$ so that each cluster is actually contained in a shift of $\mathcal{L}'$. (I.e., each cluster is contained in a lower-dimensional affine subspace. See Figure 1 for an illustration of the clustering phenomenon.) Furthermore, a cCVP oracle is sufficient to find this sublattice $\mathcal{L}'$. So, we can solve exact CVP by (1) computing $\mathcal{L}'$; (2) solving cCVP to find a lattice vector $\mathbf{y}$ that is in the “correct” shift of $\mathcal{L}'$; and then (3) solving CVP recursively over the lower-rank shifted lattice $\mathcal{L}' + \mathbf{y}$. (See Claim 5.2 for the full reduction.)

Figure 1: A two-dimensional lattice $\mathcal{L}$ and a target point $\mathbf{t}$, showing the “clustering” of the approximate closest points. The lattice points inside the dotted circle are approximate closest vectors, and they are clearly organized into two clusters that lie in two distinct one-dimensional affine subspaces. The closest lattice point is highlighted in blue; the points in the same cluster (i.e., the valid solutions to cCVP) are shown in purple; and approximate closest points in a different cluster are shown in red. Notice that close points in the same coset mod $2\mathcal{L}$ (i.e., points separated by a vector in $2\mathcal{L}$) are necessarily in the same cluster.

This reduction might seem a bit too simple, and indeed we do not know how to use it directly. While we will be able to show that our sampling algorithm does in fact output a solution to cCVP with sufficiently high probability, it will typically output very many vectors, many of which will not be valid solutions to cCVP! We do not know of any efficient way of “picking out” a solution to cCVP from a list of lattice vectors that contains at least one solution. (Note that this issue does not arise for CVP or even approximate CVP, since for these problems we can just take the vector in the list that is closest to the target.) So, we consider an easier problem, the list version of cCVP. A valid solution to this problem is a (not too long) list of lattice vectors, at least one of which lies in the same “cluster” as an exact closest vector, as described above. (See Definition 5.1.) This leads to a natural generalization of the reduction described above, as follows: (1) compute the lower-rank sublattice $\mathcal{L}'$ as before; (2) solve the list version of cCVP to obtain a list of vectors $\mathbf{y}_1, \ldots, \mathbf{y}_m$, one of which must lie in the “correct” shift of $\mathcal{L}'$; (3) solve CVP recursively on all distinct shifts $\mathcal{L}' + \mathbf{y}_i$; and finally (4) output the closest resulting point to the target.

Correctness of this procedure follows immediately from correctness in the special case when the list contains a single vector. However, bounding the number of recursive calls is more difficult. We accomplish this by first showing that any two approximate closest vectors that are in the same coset mod $2\mathcal{L}$ must also be in the same cluster. (See Lemma 5.3.) This shows that there are at most $2^n$ clusters and therefore at most $2^n$ recursive calls, which by itself would give only a much weaker bound on the running time. We obtain a much better bound via a technical lemma, which shows that we can always choose the parameters such that either (1) the number of clusters is small as a function of the rank $d$ of the sublattice $\mathcal{L}'$; or (2) there are “slightly more” clusters than this, but the rank of $\mathcal{L}'$ is “significantly less than” $n$. (See Lemma 5.6.) This will allow us to show that the total number of calls made on sublattices of rank $d$ after a full run of the algorithm is at most roughly $2^{n-d}$. (See Theorem 5.7.) In particular, this shows that, in order to solve exact CVP in time $2^{n+o(n)}$, it suffices to find an algorithm for the list version of cCVP (with a sufficiently good approximation factor) that itself runs in time $2^{d+o(d)}$ on lattices of rank $d$.
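The parity argument behind Lemma 5.3 is easy to see in code. The following runnable Python sketch (our illustration; the lattice, target, approximation factor, and search box are arbitrary choices) brute-forces the approximate closest vectors to a target in a two-dimensional lattice, groups them by their coset modulo $2\mathcal{L}$, and checks that each group has small diameter: if $\mathbf{x} \equiv \mathbf{y} \pmod{2\mathcal{L}}$ are both $\gamma$-approximate closest vectors, then their midpoint is a lattice point at distance at least $\mathrm{dist}(\mathbf{t},\mathcal{L})$ from $\mathbf{t}$, and the parallelogram law gives $\|\mathbf{x}-\mathbf{y}\| \leq 2\sqrt{\gamma^2-1} \cdot \mathrm{dist}(\mathbf{t},\mathcal{L})$.

```python
import itertools
import numpy as np

def approx_closest_vectors(B, t, gamma, coeff_bound=6):
    """Brute force: all points of L(B) (with coefficients in a bounded box) whose
    distance to t is at most gamma * dist(t, L). Illustration only."""
    cands = [(np.array(c), B @ np.array(c)) for c in
             itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[1])]
    d = min(np.linalg.norm(y - t) for _, y in cands)
    return d, [(c, y) for c, y in cands if np.linalg.norm(y - t) <= gamma * d]

# A "long and thin" lattice, so that there are several approximate closest vectors.
B = np.array([[1.0, 0.0], [0.0, 50.0]]).T
t = np.array([0.3, 24.0])
gamma = 1.01

d, close = approx_closest_vectors(B, t, gamma)
clusters = {}
for c, y in close:
    clusters.setdefault(tuple(c % 2), []).append(y)  # group by coset of 2L

bound = 2.0 * np.sqrt(gamma**2 - 1.0) * d            # diameter bound from the parity argument
for coset, pts in clusters.items():
    diam = max(np.linalg.norm(p - q) for p in pts for q in pts)
    print(f"coset {coset}: {len(pts)} points, diameter {diam:.2f} <= {bound:.2f}")
```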

Solving cluster CVP.

Our final task is to solve the list version of cCVP in $2^{n+o(n)}$ time for a sufficiently good approximation factor. In other words, we must find an algorithm that outputs a list of approximate closest vectors to the target $\mathbf{t}$, at least one of which is very close to an exact closest vector. As we noted above, our discrete Gaussian sampler can be used to obtain approximate closest vectors with extremely good approximation factors. Furthermore, any two approximate closest vectors that lie in the same coset mod $2\mathcal{L}$ must be very close to each other. It therefore suffices to show that at least one of the output vectors of our DGS algorithm will be in the same coset as an exact closest vector mod $2\mathcal{L}$.

This is why the number of output samples that we computed in (2) is so remarkably convenient. If a coset’s Gaussian mass is within some not-too-large ($2^{o(n)}$) multiplicative factor of the maximal mass of any coset and we run our sampler, say, $2^{o(n)}$ times, then with high probability one of our output vectors will land in this coset! In particular, if we can find such a bound on the ratio between the maximal mass of any coset and the mass of a coset with a closest vector, then we can simply run our sampler $2^{o(n)}$ times to find a vector in the same coset as this closest vector. In other words, we obtain a $2^{n+o(n)}$-time solution to the list version of cCVP, as needed. Intuitively, such a bound seems reasonable, since a closest vector itself has higher mass than any other point, so one might hope that its coset has relatively high mass.
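The following Python sketch shows the shape of this idea on a toy instance. A brute-force, truncated sampler for the shifted discrete Gaussian (our stand-in for the $2^{n+o(n)}$-time sampler of Section 4; the basis, target, parameter, sample count, and truncation bound are all illustrative assumptions) is run many times, and one output vector is kept per coset of $2\mathcal{L}$; the kept vectors form the candidate list, and the point of the analysis sketched above is that some coset containing an exact closest vector is hit with good probability.

```python
import itertools
import numpy as np

def cluster_cvp_candidates(B, t, s, num_samples=1000, coeff_bound=8, seed=0):
    """Sample repeatedly from (a truncated, brute-force version of) D_{L(B)-t, s}
    and keep one representative lattice vector per coset of 2L.
    Illustration only: the real sampler never enumerates lattice points."""
    rng = np.random.default_rng(seed)
    coeffs = [np.array(c) for c in
              itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[1])]
    masses = np.array([np.exp(-np.pi * np.sum((B @ c - t) ** 2) / s**2) for c in coeffs])
    draws = rng.choice(len(coeffs), size=num_samples, p=masses / masses.sum())
    reps = {}
    for i in draws:
        c = coeffs[i]
        reps.setdefault(tuple(c % 2), B @ c)  # one candidate per coset of 2L
    return list(reps.values())

B = np.array([[1.0, 0.0], [0.0, 1.0]])
t = np.array([0.4, 0.6])
candidates = cluster_cvp_candidates(B, t, s=1.0)
print(len(candidates), "candidates, closest at distance",
      min(np.linalg.norm(y - t) for y in candidates))
```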

Unfortunately, we cannot have such a bound for arbitrary parameters $s$. There exist “pathological” lattices and targets such that for some parameter $s$, the coset of a closest vector to $\mathbf{t}$ has relatively low mass, while some other coset contains many points whose combined mass is quite high, even though it does not contain an exact closest vector. However, we can show that this cannot happen for “too many” different parameters $s$. Specifically, we show how to pick a small list of parameters such that, for at least one of these parameters, the bound that we required above will hold. This suffices for our purposes. The proof of this statement is quite technical and relies heavily on the new inequality that we prove in Section 3. (See Corollary 6.3.)

1.2 Related work

Our exact CVP algorithm uses many ideas from many different types of lattice algorithms, including sieving, basis reduction, and discrete Gaussian sampling. Our algorithm combines these ideas in a way that (almost magically, and in ways that we do not fully understand) avoids the major pitfalls of each. We summarize the relationship of our algorithm to some prior work below.

First, our algorithm finds an approximate Hermite-Korkine-Zolotarev (HKZ) basis and essentially “guesses” the last few coefficients of a closest vector with respect to this basis. HKZ bases are extremely well-studied by the basis reduction community [Kan87, Hel85, LJS90, HS07, MW15], and this idea is used in essentially all enumeration algorithms for CVP. However, there are examples where the standard basis enumeration techniques require $n^{\Omega(n)}$ time to solve CVP. (See, e.g., [BGJ14].) The main reason for this is that such techniques work recursively on projections of the base lattice, and the projected lattice often contains many points close to the projected target that do not “lift” to points close to the target in the full lattice. Using our techniques, we never need to project, and we are therefore able to ignore these useless points while still guaranteeing that we will find a point whose last few coefficients with respect to the basis are equal to those of the closest vector.

Many other authors have noted that the approximate closest lattice vectors form clusters, mostly in the context of AKS-like sieving algorithms. For example, the $O(1)$-approximate closest vectors to $\mathbf{t}$ can be grouped into $2^{O(n)}$ clusters of bounded diameter (see, e.g., [AJ08, DK13]). While the clustering bound that we obtain is both stronger and simpler to prove (using an elementary parity argument), we are unaware of prior work mentioning this particular bound. This is likely because sieving algorithms are typically concerned with constant-factor approximations, whereas our sampler allows us to work with “unconscionably” good approximation factors $\gamma = 1 + 2^{-o(n/\log n)}$. Our clustering bound seems to be both less natural and less useful for the constant-factor approximations achieved by $2^{O(n)}$-time sieving algorithms.

[BD15] improve on the MV algorithm by showing that, once the Voronoi cell of $\mathcal{L}$ has been computed, CVP on $\mathcal{L}$ can be solved in $2^{n+o(n)}$ expected time. Indeed, before we found this algorithm, we hoped to solve CVP quickly by using the ADRS sampler to compute the Voronoi cell in $2^{n+o(n)}$ time. (This corresponds to computing the shortest vectors in every coset of $\mathcal{L}/(2\mathcal{L})$.) Even with our current techniques, we do not know how to achieve this, and we leave this as an open problem.

Finally, after this work was published, [Ste15] showed a dimension-preserving reduction from DGS to CVP, answering a question posed in an earlier version of this paper. Together with our work, this reduction immediately implies a $2^{n+o(n)}$-time algorithm for DGS with any parameter $s > 0$. (Our algorithm works only for parameters that are not too small, not for arbitrarily small $s$.) This also provides some (arguably weak) evidence that our technique of using DGS for solving CVP is “correct,” in the sense that any faster algorithm for CVP necessarily yields a faster algorithm for DGS.

1.3 Open problems and directions for future work

Of course, the most natural and important open problem is whether a faster algorithm for CVP is possible. (Even an algorithm with the same running time as ours that is simpler or deterministic would be very interesting.) There seem to be fundamental barriers to significantly improving our method, as both our sampler and our reduction to exact CVP require enumeration over the $2^n$ cosets of $\mathcal{L}/(2\mathcal{L})$. And, Micciancio and Voulgaris note that their techniques also seem incapable of yielding an algorithm that runs in less than $2^n$ time (for similar reasons) [MV13]. Indeed, our techniques and those of MV seem to inherently solve the harder (though likely not very important) problem of finding all closest vectors simultaneously. Since there can be $2^n$ such vectors, this problem trivially cannot be solved in better than $2^n$ time in the worst case. So, if an algorithm with a better running time is to be found, it would likely require substantial new ideas.

Given these barriers, we also ask whether we can find a comparable lower bound. In particular, Micciancio and Voulgaris note that the standard NP-hardness proof for CVP actually shows that, assuming the Exponential Time Hypothesis, there is some constant $c > 0$ such that no $2^{cn}$-time algorithm solves CVP [MV13]. Recent unpublished work by Samuel Yeom gives an explicit value for this constant under plausible complexity assumptions [Vai15]. Obviously, this gap is quite wide, and we ask whether we can make significant progress towards closing it.

In this work, we show how to use a technique that seems “inherently approximate” to solve exact CVP. I.e., our algorithm is randomized and, during any given recursive call, each $(1+\epsilon)$-approximate closest vector has nearly the same likelihood of appearing as an exact closest vector for sufficiently small $\epsilon$. Indeed, prior to this work, the only known algorithm that solved exact CVP in $2^{O(n)}$ time was the deterministic MV algorithm, while the “AKS-like” randomized sieving algorithms for CVP achieve only constant approximation factors. It would be very interesting to find exact variants of the sieving algorithms. The primary hurdle towards adapting our method to such algorithms seems to be the very good approximation factor that we require—our ideas seem to require an approximation factor exponentially close to one, while $2^{O(n)}$-time sieving algorithms only achieve constant approximation factors. But, it is plausible that our techniques could be adapted to work in this setting, potentially yielding an “AKS-like” algorithm for exact CVP. Even if such an algorithm were not provably faster than ours, it might be more efficient in practice, as sieving algorithms tend to outperform their provable running times (while our algorithm quite clearly runs in time at least $2^n$).

A long-standing open problem is to find an algorithm that solves CVP in $2^{O(n)}$ time but polynomial space. Currently, the only known algorithms that run in polynomial space are the enumeration-based method of Kannan and its variants, which run in $n^{O(n)}$ time. Indeed, even for SVP, there is no known polynomial-space algorithm that runs in $2^{O(n)}$ time. This is part of the reason why $n^{O(n)}$-time enumeration-based methods are often used in practice to solve large instances of CVP and SVP, in spite of their much worse asymptotic running time.

The authors are particularly interested in finding a better explanation for why “everything seems to work out” so remarkably well in the analysis of our algorithm. It seems almost magical that we end up with exactly as many samples as we need for our CVP to DGS reduction to go through. We do not have a good intuitive understanding of why our sampler returns the number of samples that it does, but it seems largely unrelated to the reason that our CVP algorithm needs as many samples as it does. The fact that these two numbers are the same is remarkable, and we would love a clear explanation. A better understanding of this would be interesting in its own right, and it could lead to an improved algorithm.

Organization

In Section 2, we provide an overview of the necessary background material and give the basic definitions used throughout the paper. In Section 3, we derive an inequality (Corollary 3.3) that will allow us to bound the “loss factor” of our sampler and the running time of our exact CVP algorithm. In Section 4, we present our discrete Gaussian sampler, which immediately yields an approximate CVP algorithm. In Section 5, we analyze the structure of the approximate closest vectors and show that this leads to a reduction from exact CVP to a variant of approximate CVP. Finally, in Section 6, we show that our DGS algorithm yields a solution to this variant of approximate CVP (and as a consequence, we derive our exact CVP algorithm.)

2 Preliminaries

Let . Except where we specify otherwise, we use , , and to denote universal positive constants, which might differ from one occurrence to the next (even in the same sequence of (in)equalities). We use bold letters for vectors and denote a vector’s coordinates with indices . Throughout the paper, will always be the dimension of the ambient space .

2.1 Lattices

A rank $d$ lattice $\mathcal{L} \subset \mathbb{R}^n$ is the set of all integer linear combinations of $d$ linearly independent vectors $\mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_d)$. $\mathbf{B}$ is called a basis of the lattice and is not unique. Formally, a lattice is represented by a basis $\mathbf{B}$ for computational purposes, though for simplicity we often do not make this explicit. If $d = n$, we say that the lattice has full rank. We often implicitly assume that the lattice is full rank, as otherwise we can simply work over the subspace spanned by the lattice.

Given a basis, $\mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_d)$, we write $\mathcal{L}(\mathbf{B})$ to denote the lattice with basis $\mathbf{B}$. The length of a shortest non-zero vector in the lattice is written $\lambda_1(\mathcal{L})$. For a vector $\mathbf{t} \in \mathbb{R}^n$, we write $\mathrm{dist}(\mathbf{t}, \mathcal{L})$ to denote the distance between $\mathbf{t}$ and the lattice, $\min_{\mathbf{y} \in \mathcal{L}} \|\mathbf{y} - \mathbf{t}\|$. We call any $\mathbf{y} \in \mathcal{L}$ minimizing $\|\mathbf{y} - \mathbf{t}\|$ a closest vector to $\mathbf{t}$. The covering radius is $\mu(\mathcal{L}) = \max_{\mathbf{t} \in \mathrm{span}(\mathcal{L})} \mathrm{dist}(\mathbf{t}, \mathcal{L})$.
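For concreteness, a small brute-force Python sketch computing $\lambda_1(\mathcal{L})$ and $\mathrm{dist}(\mathbf{t}, \mathcal{L})$ for a lattice given by the columns of a basis matrix (our illustration; the search box and the example basis are arbitrary, and this is only feasible in very small dimension):

```python
import itertools
import numpy as np

def lattice_points(B, coeff_bound=10):
    """All points of L(B) with coefficients in [-coeff_bound, coeff_bound]."""
    return [B @ np.array(c) for c in
            itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[1])]

def lambda_1(B, coeff_bound=10):
    """Length of a shortest non-zero lattice vector (within the search box)."""
    return min(np.linalg.norm(y) for y in lattice_points(B, coeff_bound)
               if np.linalg.norm(y) > 0)

def dist_to_lattice(B, t, coeff_bound=10):
    """dist(t, L): distance from t to a closest lattice vector (within the box)."""
    return min(np.linalg.norm(y - t) for y in lattice_points(B, coeff_bound))

B = np.array([[2.0, 1.0], [0.0, 3.0]])  # columns are the basis vectors
print(lambda_1(B), dist_to_lattice(B, np.array([0.9, 0.4])))
```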

Definition 2.1.

For a lattice $\mathcal{L} \subset \mathbb{R}^n$, the $i$th successive minimum of $\mathcal{L}$ is $\lambda_i(\mathcal{L}) = \min\{ r > 0 : \dim(\mathrm{span}(\mathcal{L} \cap r B_2^n)) \geq i \}$, where $B_2^n$ denotes the closed Euclidean unit ball.

Intuitively, the $i$th successive minimum of $\mathcal{L}$ is the smallest value $r$ such that there are $i$ linearly independent vectors in $\mathcal{L}$ of length at most $r$. We will need the following two facts.

Theorem 2.2 ([BHW93, Theorem 2.1]).

For any lattice and ,

Lemma 2.3.

For any lattice with basis ,

2.2 The discrete Gaussian distribution

For any $s > 0$, we define the function $\rho_s : \mathbb{R}^n \to \mathbb{R}$ as $\rho_s(\mathbf{x}) = e^{-\pi \|\mathbf{x}\|^2 / s^2}$. When $s = 1$, we simply write $\rho$. For a discrete set $A \subset \mathbb{R}^n$ we define $\rho_s(A) = \sum_{\mathbf{x} \in A} \rho_s(\mathbf{x})$.

Definition 2.4.

For a lattice $\mathcal{L} \subset \mathbb{R}^n$, a shift $\mathbf{t} \in \mathbb{R}^n$, and a parameter $s > 0$, let $D_{\mathcal{L}-\mathbf{t},s}$ be the probability distribution over $\mathcal{L} - \mathbf{t}$ such that the probability of drawing $\mathbf{x} \in \mathcal{L} - \mathbf{t}$ is proportional to $\rho_s(\mathbf{x})$. We call this the discrete Gaussian distribution over $\mathcal{L} - \mathbf{t}$ with parameter $s$.

We make frequent use of the discrete Gaussian over the cosets of a sublattice. If $\mathcal{L}' \subseteq \mathcal{L}$ is a sublattice of $\mathcal{L}$, then the set of cosets, $\mathcal{L}/\mathcal{L}'$, is the set of translations of $\mathcal{L}'$ by lattice vectors, $\mathbf{c} = \mathcal{L}' + \mathbf{y}$ for some $\mathbf{y} \in \mathcal{L}$. (Note that $\mathbf{c}$ is a set, not a vector.) Banaszczyk proved the following three bounds [Ban93].

Lemma 2.5 ([Ban93, Lemma 1.4]).

For any lattice and ,

Lemma 2.6.

For any lattice , ,

Lemma 2.7 ([DRS14, Lemma 2.13]).

For any lattice , , , and ,

From these, we derive the following corollary.

Corollary 2.8.

For any lattice , , and , let . Then, for any ,

(3)

Furthermore, if , we have that

Proof.

We can assume without loss of generality that is a closest vector to in and therefore . Equation 3 then follows from combining Lemma 2.6 with Lemma 2.7.

Let , and note that . Then, by the first part of the corollary, we have that

as needed. ∎

2.3 The Gram-Schmidt orthogonalization and -HKZ bases

Given a basis, $\mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_n)$, we define its Gram-Schmidt orthogonalization $(\widetilde{\mathbf{b}}_1, \ldots, \widetilde{\mathbf{b}}_n)$ by

$$\widetilde{\mathbf{b}}_i = \pi_{\{\mathbf{b}_1, \ldots, \mathbf{b}_{i-1}\}^\perp}(\mathbf{b}_i) \; ,$$

and the corresponding Gram-Schmidt coefficients $\mu_{i,j}$ by

$$\mu_{i,j} = \frac{\langle \mathbf{b}_i, \widetilde{\mathbf{b}}_j \rangle}{\langle \widetilde{\mathbf{b}}_j, \widetilde{\mathbf{b}}_j \rangle} \; .$$

Here, $\pi_A$ is the orthogonal projection onto the subspace $A$, and $\{\mathbf{b}_1, \ldots, \mathbf{b}_{i-1}\}^\perp$ denotes the subspace orthogonal to $\mathbf{b}_1, \ldots, \mathbf{b}_{i-1}$.
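For reference, a short Python sketch of this (entirely standard) computation, taking the basis vectors as the columns of a matrix and returning the Gram-Schmidt vectors together with the coefficients $\mu_{i,j}$:

```python
import numpy as np

def gram_schmidt(B):
    """Given basis vectors as the columns of B, return (B_tilde, mu), where
    B_tilde[:, i] is the Gram-Schmidt vector b~_i and
    mu[i, j] = <b_i, b~_j> / <b~_j, b~_j> for j < i."""
    n = B.shape[1]
    B_tilde = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = (B[:, i] @ B_tilde[:, j]) / (B_tilde[:, j] @ B_tilde[:, j])
            B_tilde[:, i] -= mu[i, j] * B_tilde[:, j]
    return B_tilde, mu

B_tilde, mu = gram_schmidt(np.array([[2.0, 1.0], [0.0, 3.0]]))
print(B_tilde)  # columns are pairwise orthogonal
```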

Definition 2.9.

A basis of is a -approximate Hermite-Korkin-Zolotarev (-HKZ) basis if

  1. ;

  2. the Gram-Schmidt coefficients of satisfy for all ; and

  3. is a -HKZ basis of .

We use -HKZ bases in the sequel to find “sublattices that contain all short vectors.” In particular, note that if is a -HKZ basis for , then for any index , contains all lattice vectors with . When , we omit it.

2.4 Lattice problems

Definition 2.10.

For $\gamma = \gamma(n) \geq 1$ (the approximation factor), the search problem $\gamma$-CVP (Closest Vector Problem) is defined as follows: The input is a basis $\mathbf{B}$ for a lattice $\mathcal{L} \subset \mathbb{R}^n$ and a target vector $\mathbf{t} \in \mathbb{R}^n$. The goal is to output a vector $\mathbf{y} \in \mathcal{L}$ with $\|\mathbf{y} - \mathbf{t}\| \leq \gamma \cdot \mathrm{dist}(\mathbf{t}, \mathcal{L})$.

When $\gamma = 1$, we omit it and call the problem exact CVP or simply CVP.

Definition 2.11.

For (the error), (the minimal parameter) a function that maps shifted lattices to non-negative real numbers, and (the desired number of output vectors) a function that maps shifted lattices and positive real numbers to natural numbers, (the Discrete Gaussian Sampling problem) is defined as follows: The input is a basis for a lattice , a shift , and a parameter . The goal is to output a sequence of vectors whose joint distribution is -close to .

We stress that bounds the statistical distance between the joint distribution of the output vectors and independent samples from .

2.5 Some known algorithms

The following theorem was proven by Ajtai, Kumar, and Sivakumar [AKS01], building on work of Schnorr [Sch87].

Theorem 2.12.

There is an algorithm that takes as input a lattice , target , and parameter and outputs a -HKZ basis of and a -approximate closest vector to in time , where and .

The next theorem was proven by [GMSS99].

Theorem 2.13.

For any , there is an efficient dimension-preserving reduction from the problem of computing a -HKZ basis to -CVP.

We will also need the following algorithm.

Theorem 2.14 ([ADRS15, Theorem 3.3]).

There is an algorithm that takes as input (the confidence parameter) and elements from and outputs a sequence of elements from the same set such that

  1. the running time is ;

  2. each appears at least twice as often in the input as in the output; and

  3. if the input consists of independent samples from the distribution that assigns probability to element , then the output is within statistical distance of independent samples with respective probabilities where is a random variable.

3 Some inequalities concerning Gaussians on shifted lattices

We first prove an inequality (Corollary 3.3) concerning the Gaussian measure over shifted lattices. We will use this inequality to show that our sampler outputs sufficiently many samples; and to show that our recursive CVP algorithm will “find a cluster with a closest point” with high probability. The inequality is similar in flavor to the main inequality in [RS15], and it (or the more general form given in Lemma 3.2) may have additional applications. The proof uses the following identity from [RS15].

Lemma 3.1 ([RS15, Eq. (3)]).

For any lattice , any two vectors , and , we have

Our inequality then follows easily.

Lemma 3.2.

For any lattice , any two vectors , and , we have

Proof.

Using Lemma 3.1, we get the following.

Setting for any and switching with gives the following inequality.

Corollary 3.3.

For any lattice , , and , we have

4 Sampling from the discrete Gaussian

4.1 Combining discrete Gaussian samples

The following lemma and proposition are the shifted analogues of [ADRS15, Lemma 3.4] and [ADRS15, Proposition 3.5] respectively. Their proofs are nearly identical to the related proofs in [ADRS15], and we include them in the appendix for completeness. (We note that Lemma 4.1 can be viewed as a special case of Lemma 3.1.)

Lemma 4.1.

Let , and . Then for all ,

(4)
Proposition 4.2.

There is an algorithm that takes as input a lattice , , (the confidence parameter), and a sequence of vectors from , and outputs a sequence of vectors from such that, if the input consists of

independent samples from for some , then the output is within statistical distance of independent samples from where is a random variable with

The running time of the algorithm is at most .

We will show in Theorem 4.3 that by calling the algorithm from Proposition 4.2 repeatedly, we obtain a general discrete Gaussian combiner.

Theorem 4.3.

There is an algorithm that takes as input a lattice , (the step parameter), (the confidence parameter), , and vectors in such that, if the input vectors are distributed as for some , then the output is a list of vectors whose distribution is within statistical distance of at least

independent samples from . The algorithm runs in time .

Proof.

Let be the sequence of input vectors. For , the algorithm calls the procedure from Proposition 4.2 with input , , and , receiving an output sequence of length . Finally, the algorithm outputs the sequence .

The running time is clear. Fix , , and . Define , , and .

We wish to prove by induction that is within statistical distance of with

(5)

for all . This implies that as needed.

Let

be the “loss factor” resulting from the st run of the combiner, ignoring the factor of . By Corollary 3.3, we have

(6)

By Proposition 4.2, up to statistical distance , we have that has the right distribution with

where we used Eq. (6) with . By noting that , we see that (5) holds when .

Suppose that has the correct distribution and (5) holds for some with . In particular, we have that is at least . This is precisely the condition necessary to apply Proposition 4.2. So, we can apply the proposition and the induction hypothesis and obtain that (up to statistical distance at most ), has the correct distribution with

where in the second inequality we used the induction hypothesis and Eq. (6). ∎

4.2 Initializing the sampler

In order to use our combiner, we need to start with samples from the discrete Gaussian distribution with some large parameter $s$. For very large parameters, the algorithm introduced by Klein and further analyzed by Gentry, Peikert, and Vaikuntanathan suffices [Kle00, GPV08]. For convenience, we use the following strengthening of their result due to Brakerski et al., which provides exact samples and gives better bounds on the parameter $s$.

Theorem 4.4 ([BLP13, Lemma 2.3]).

There is a probabilistic polynomial-time algorithm that takes as input a basis for a lattice with , a shift , and and outputs a vector that is distributed exactly as , where .
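For intuition, here is a rough Python sketch of the classic Klein/GPV randomized nearest-plane sampler referenced above (not the exact sampler of Theorem 4.4; the one-dimensional sampler below simply enumerates a window of integers, and the output distribution is only close to the discrete Gaussian when the parameter is sufficiently large relative to the Gram-Schmidt norms):

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt vectors of the basis given by the columns of B."""
    Bs = B.astype(float).copy()
    for i in range(B.shape[1]):
        for j in range(i):
            Bs[:, i] -= (Bs[:, i] @ Bs[:, j]) / (Bs[:, j] @ Bs[:, j]) * Bs[:, j]
    return Bs

def sample_z(center, s, rng, tail=12.0):
    """Sample an integer from a discrete Gaussian over Z with the given center and
    parameter s, by enumerating a window of width ~2*tail*s (illustration only)."""
    lo, hi = int(np.floor(center - tail * s)), int(np.ceil(center + tail * s))
    zs = np.arange(lo, hi + 1)
    w = np.exp(-np.pi * (zs - center) ** 2 / s**2)
    return int(rng.choice(zs, p=w / w.sum()))

def klein_sample(B, t, s, rng=None):
    """Randomized nearest-plane sketch: returns a lattice vector of L(B) roughly
    distributed as the discrete Gaussian centered at t with parameter s."""
    rng = rng or np.random.default_rng()
    Bs = gram_schmidt(B)
    c, v = t.astype(float).copy(), np.zeros(B.shape[0])
    for i in reversed(range(B.shape[1])):
        center = (c @ Bs[:, i]) / (Bs[:, i] @ Bs[:, i])
        z = sample_z(center, s / np.linalg.norm(Bs[:, i]), rng)
        c -= z * B[:, i]
        v += z * B[:, i]
    return v

print(klein_sample(np.array([[2.0, 1.0], [0.0, 3.0]]), t=np.array([0.5, 0.7]), s=10.0))
```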

When instantiated with a -HKZ basis, Theorem 4.4 allows us to sample with parameter . After running our combiner times, this will allow us to sample with any parameter . The following proposition and corollary show that we can sample with any parameter by working over a shifted sublattice that will contain all high-mass vectors of the original lattice.

Proposition 4.5.

There is an algorithm that takes as input a lattice , shift , , and parameter , such that if

then the output of the algorithm is and a basis of a (possibly trivial) sublattice such that all vectors from of length at most are also contained in , and . The algorithm runs in time .

Proof.

On input a lattice , , and , the algorithm behaves as follows. First, it calls the procedure from Theorem 2.12 to compute a -HKZ basis of . Let be the corresponding Gram-Schmidt vectors. Let be maximal such that for , and let . Let and . The algorithm then calls the procedure from Theorem 2.12 again with the same and input and , receiving as output where , a -approximate closest vector to in . Finally, the algorithm returns and .

The running time is clear, as is the fact that . It remains to prove that contains all sufficiently short vectors in . If , then and is irrelevant, so we may assume that . Note that, since is a -HKZ basis, . In particular, . So, there is a unique closest vector to in , and by triangle inequality, the next closest vector is at distance greater than . Therefore, the call to the subprocedure from Theorem 2.12 will output the exact closest vector to .

Let so that . We need to show that is relatively long. Since is a -HKZ basis, it follows that

Applying triangle inequality, we have

as needed. ∎

Corollary 4.6.

There is an algorithm that takes as input a lattice with , shift , (the desired number of output vectors), and parameters and and outputs , a (possibly trivial) sublattice , and vectors from such that if

then the output vectors are distributed as independent samples from , and contains all vectors in of length at most . The algorithm runs in time .

Proof.

The algorithm first calls the procedure from Proposition 4.5 with input , , and

receiving as output and a basis of a sublattice . It then runs the algorithm from Theorem 4.4 times with input , , and and outputs the resulting vectors, , and .

The running time is clear. By Proposition 4.5, contains all vectors of length at most in , and . So, it follows from Theorem 4.4 that the output has the correct distribution. ∎

4.3 The sampler

We are now ready to present our discrete Gaussian sampler.

Theorem 4.7.

For any efficiently computable function , let be the function defined by for any lattice and . Let

Then, there is an algorithm that solves with in time .

Proof.

We assume without loss of generality that . The algorithm behaves as follows on input a lattice , a shift , and a parameter . First, it runs the procedure from Corollary 4.6 with input , , with , , and