# Linear-time nearest point algorithms for Coxeter lattices

Robby G. McKilliam, Warren D. Smith and I. Vaughan L. Clarkson. Robby McKilliam is partly supported by a scholarship from the Wireless Technologies Laboratory, CSIRO ICT Centre, Sydney, Australia. Warren Smith is with the Center for Range Voting, 21 Shore Oaks Drive, Stony Brook NY 11790, USA. Robby McKilliam and Vaughan Clarkson are with the School of Information Technology & Electrical Engineering, The University of Queensland, Qld., 4072, Australia.
###### Abstract

The Coxeter lattices, which we denote $A_{n/m}$, are a family of lattices containing many of the important lattices in low dimensions. This includes $A_n$, $E_7$ and $E_8$ and their duals $A_n^*$, $E_7^*$ and $E_8^*$. We consider the problem of finding a nearest point in a Coxeter lattice. We describe two new algorithms, one with worst case arithmetic complexity $O(n\log n)$ and the other with worst case complexity $O(n)$, where $n$ is the dimension of the lattice. We show that for the particular lattices $A_n$ and $A_n^*$ the algorithms reduce to simple nearest point algorithms that already exist in the literature.

Index Terms: lattice theory, nearest point algorithm, quantization, channel coding

## I Introduction

The study of point lattices is of great importance in several areas of number theory, particularly the studies of quadratic forms, the geometry of numbers and simultaneous Diophantine approximation, and also to the practical engineering problems of quantisation and channel coding. They are also important in studying the sphere packing problem and the kissing number problem [1, 2]. Lattices have recently found significant application in cryptography [3, 4] and in communications systems using multiple antennas [5, 6].

A lattice, $L$, is a set of points in $\mathbb{R}^n$ such that

$$L = \{\mathbf{x}\in\mathbb{R}^n \mid \mathbf{x} = \mathbf{B}\mathbf{w},\ \mathbf{w}\in\mathbb{Z}^n\}$$

where $\mathbf{B}$ is termed the generator (or basis) matrix. We will write vectors and matrices in bold font. The $i$th element in a vector $\mathbf{x}$ is denoted by a subscript: $x_i$. The generator matrix for a lattice is not unique. Let $\mathbf{M}$ be an $n\times n$ matrix with integer elements such that $\det\mathbf{M} = \pm 1$. $\mathbf{M}$ is called a unimodular matrix. Then both $\mathbf{B}$ and $\mathbf{B}\mathbf{M}$ are generator matrices for the lattice $L$.
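As a concrete illustration (our own toy example, not from the paper): the columns of $\mathbf{B}$ below form a basis for the hexagonal lattice $A_2$ embedded in $\mathbb{R}^3$, and multiplying on the right by any unimodular $\mathbf{M}$ yields another basis for the same lattice. A minimal numpy sketch:

```python
import numpy as np

# A basis for A_2 embedded in R^3: the columns are zero-sum integer vectors.
B = np.array([[1, 0],
              [-1, 1],
              [0, -1]])
# An integer matrix with determinant +1, hence unimodular.
M = np.array([[1, 1],
              [0, 1]])
assert round(abs(np.linalg.det(M))) == 1  # unimodular check
# B and B @ M generate the same lattice.
B2 = B @ M
```

Any integer combination of the columns of `B2` is an integer combination of the columns of `B` and vice versa, since `M` has an integer inverse.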

Lattices are equivalent under scaling, rotation and reflection. A lattice $L$ with generator matrix $\mathbf{B}$ and a lattice $\hat{L}$ with generator matrix $\hat{\mathbf{B}}$ are equivalent, or isomorphic, iff

$$\mathbf{B} = \alpha\mathbf{R}\hat{\mathbf{B}}\mathbf{M}$$

where $\alpha$ is real, $\mathbf{R}$ is a matrix consisting of only rotations and reflections, and $\mathbf{M}$ is unimodular. We write $L \simeq \hat{L}$.

The Voronoi region or nearest-neighbour region $\operatorname{Vor}(L)$ for a lattice $L$ is the subset of $\mathbb{R}^n$ such that, with respect to a given norm, all points in $\operatorname{Vor}(L)$ are nearer to the origin than to any other point in $L$. The Voronoi region is an $n$-dimensional polytope [2]. Given some lattice point $\mathbf{x}\in L$ we will write $\operatorname{Vor}(L)+\mathbf{x}$ to denote the Voronoi region centered around the lattice point $\mathbf{x}$. It follows that $\operatorname{Vor}(L)+\mathbf{x}$ is the subset of $\mathbb{R}^n$ that is nearer to $\mathbf{x}$ than to any other lattice point in $L$.

The nearest lattice point problem is: given $\mathbf{y}\in\mathbb{R}^n$ and some lattice $L$ whose lattice points lie in $\mathbb{R}^n$, find a lattice point $\mathbf{x}\in L$ such that the Euclidean distance between $\mathbf{y}$ and $\mathbf{x}$ is minimised. We use the notation $\operatorname{NearestPt}(\mathbf{y},L)$ to denote the nearest point to $\mathbf{y}$ in the lattice $L$. It follows from the definition of the Voronoi region that¹

$$\mathbf{x} = \operatorname{NearestPt}(\mathbf{y},L) \iff \mathbf{y}\in\operatorname{Vor}(L)+\mathbf{x}.$$

¹There is a slight technical deficiency here. We actually require half of the faces of $\operatorname{Vor}(L)$ to be closed and half to be open. Ties can then be broken accordingly.

The nearest lattice point problem has significant practical application. If the lattice is used for vector quantisation then the nearest lattice point corresponds to the minimum-distortion point. If the lattice is used as a code for a Gaussian channel, then the nearest lattice point corresponds to maximum likelihood decoding [7]. The closely related shortest vector problem has been used in public key cryptography [3, 4, 8, 9, 10].

Van Emde Boas [11] and Ajtai [12] have shown that the nearest lattice point problem is NP-complete under certain conditions when the lattice itself, or rather a basis thereof, is considered as an additional input parameter. It has even been shown that finding approximately nearest points is NP-hard [8, 13, 14]. Nevertheless, algorithms exist that can compute the nearest lattice point in reasonable time if the dimension is small [15, 16, 17]. One such algorithm introduced by Pohst [17] in 1981 was popularised in signal processing and communications fields by Viterbo and Boutros [16] and has since been called the sphere decoder.

For specific lattices, the nearest point problem is considerably easier and for many classical lattices, fast nearest point algorithms are known [7, 18, 1, 19, 20, 2].

The Coxeter lattices, denoted $A_{n/m}$, are a family of lattices first described by H.S.M. Coxeter [21, 22]:

$$A_{n/m} = \{\mathbf{Q}\mathbf{x} \mid \mathbf{x}\in\mathbb{Z}^{n+1},\ \mathbf{x}'\mathbf{1} \bmod m = 0\} \qquad (1)$$

where $\mathbf{Q}$ is the orthogonal projection matrix

$$\mathbf{Q} = \Bigl(\mathbf{I} - \frac{\mathbf{1}\mathbf{1}'}{n+1}\Bigr), \qquad (2)$$

$\mathbf{I}$ is the $(n+1)\times(n+1)$ identity matrix, $\mathbf{1}$ is the all-ones vector, and $'$ indicates the vector or matrix transpose. If $m$ does not divide $n+1$ then $A_{n/m} = A_{n/\gcd(m,n+1)}$. Hence, in the sequel, we assume that $m$ divides $n+1$.

A simple geometric description of $A_{n/m}$ is to consider the subset of $\mathbb{Z}^{n+1}$ consisting of the points whose coordinate sum is divisible by $m$. This subset consists of points that lie in 'layers' parallel to the hyperplane orthogonal to $\mathbf{1}$. By projecting the subset orthogonally to $\mathbf{1}$ we obtain a set of points equivalent to the $n$-dimensional lattice $A_{n/m}$.
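This construction can be checked numerically. A small sketch (our illustration, not from the paper) with $n+1 = 4$ and $m = 2$:

```python
import numpy as np

n1 = 4                                    # n + 1
m = 2                                     # m divides n + 1
Q = np.eye(n1) - np.ones((n1, n1)) / n1   # projection orthogonal to the all-ones vector
x = np.array([1, -1, 3, 1])               # integer point with coordinate sum 4 = 0 mod m
assert x.sum() % m == 0
p = Q @ x                                 # a point of the Coxeter lattice A_{3/2}
assert abs(p.sum()) < 1e-12               # p lies in the zero-mean hyperplane H
```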

The family of Coxeter lattices contains many of the important lattices in low dimension. The family is related to the well studied root lattice $A_n$ and its dual lattice $A_n^*$. When $m = 1$,

$$A_{n/1} = A_n^* = \{\mathbf{Q}\mathbf{x} \mid \mathbf{x}\in\mathbb{Z}^{n+1}\} \qquad (3)$$

and when $m = n+1$,

$$A_{n/(n+1)} = A_n = \{\mathbf{x}\in\mathbb{Z}^{n+1} \mid \mathbf{x}'\mathbf{1} = 0\}. \qquad (4)$$

It follows that $A_n \subseteq A_{n/m} \subseteq A_n^*$ [2, 22]. Note that $A_{n/m} \subseteq A_{n/k}$ whenever $k$ divides $m$ and therefore

$$\operatorname{Vor}(A_{n/k}) \subset \operatorname{Vor}(A_{n/m}). \qquad (5)$$

Other isomorphisms exist: $A_{7/2}\simeq E_7^*$, $A_{7/4}\simeq E_7$ and $A_{8/3}\simeq E_8$. Of significant practical interest is the lattice $E_8$. Due to its excellent packing and quantising properties $E_8$ has found applications to trellis codes [23, 24, 25, 26] and vector quantisation [2, 27, 28]. The particular representation of $E_8$ as $A_{8/3}$ was used by Secord and de Buda to create a code with a spectral null at DC [29].

The lattice $A_n^*$ is also of practical interest. It gives the thinnest known sphere-covering in low dimensions [2] and has found application in a number of estimation problems including period estimation from sparse timing data [30, 31], frequency estimation [32], direction of arrival estimation [33] and noncoherent detection [34].

The paper is organised as follows. Section II describes an $O(n\log n)$ nearest point algorithm for $A_{n/m}$. This algorithm is a generalisation of a nearest point algorithm for $A_n^*$ that was derived in [19]. Section III improves this to worst case $O(n)$. The speedup employs both a partial sorting procedure called a bucket sort [35] and the linear-time Rivest-Tarjan selection algorithm [36, 37, 38, 39]. In Section IV we show how the discussed nearest point algorithms for the Coxeter lattices reduce to simple nearest point algorithms for $A_n$ and $A_n^*$ that already exist in the literature [2, 19, 20]. In Section V we review a simple nearest point algorithm for $A_{n/m}$ based on translates of the lattice $A_n$. This algorithm was previously described by Conway and Sloane [7, 18] but not directly applied to the Coxeter lattices. The algorithm requires $O(n^2)$ arithmetic operations in the worst case. In Section VI we evaluate the practical computational performance of the algorithms.

## II Log-linear-time algorithm

In this section we describe a nearest point algorithm for $A_{n/m}$ that requires $O(n\log n)$ operations in the worst case. This algorithm is a generalisation of the nearest point algorithm for $A_n^*$ described in [19]. To describe the algorithm we first derive some properties of the Voronoi region of $A_{n/m}$. This is done in Lemmata 1 and 2. We firstly require the following definitions.

Let $H$ be the hyperplane in $\mathbb{R}^{n+1}$ orthogonal to $\mathbf{1}$. $H$ is typically referred to as the zero-mean plane. For some lattice $L$ we will use the notation $\operatorname{Vor}_H(L)$ to denote the region $\operatorname{Vor}(L)\cap H$. For example $\operatorname{Vor}_H(A_n)$ is the cross-section of $\operatorname{Vor}(A_n)$ lying in the hyperplane $H$. Given some region $R\subseteq H$ we define the $H$-volume of $R$ as its $n$-dimensional volume within $H$. For example, the $H$-volume of $\operatorname{Vor}_H(A_n)$ is denoted by $\operatorname{vol}_H\operatorname{Vor}_H(A_n)$.

Given a set $S$ of $(n+1)$-dimensional vectors and a suitable matrix $\mathbf{M}$ we will write $\mathbf{M}S$ to denote the set with elements $\mathbf{M}\mathbf{s}$ for all $\mathbf{s}\in S$. For example $\mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1})$ denotes the region of space that results from projecting $\operatorname{Vor}(\mathbb{Z}^{n+1})$ onto the hyperplane $H$.

###### Lemma 1.
$$\mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1}) \subseteq \operatorname{Vor}_H(A_n)$$
###### Proof:

Let $\mathbf{y}\in\operatorname{Vor}(\mathbb{Z}^{n+1})$. Decompose $\mathbf{y}$ into orthogonal components so that $\mathbf{y} = \mathbf{Q}\mathbf{y} + t\mathbf{1}$ for some $t\in\mathbb{R}$. Assume that $\mathbf{Q}\mathbf{y}\notin\operatorname{Vor}_H(A_n)$. Then there exists some $\mathbf{x}\in A_n$ with $\mathbf{x}\neq\mathbf{0}$ such that

$$\|\mathbf{x}-\mathbf{Q}\mathbf{y}\|^2 < \|\mathbf{0}-\mathbf{Q}\mathbf{y}\|^2 \;\Rightarrow\; \|\mathbf{x}-\mathbf{y}+t\mathbf{1}\|^2 < \|\mathbf{y}-t\mathbf{1}\|^2 \;\Rightarrow\; \|\mathbf{x}-\mathbf{y}\|^2 + 2t\mathbf{x}'\mathbf{1} < \|\mathbf{y}\|^2.$$

By definition (4) $\mathbf{x}'\mathbf{1} = 0$ and so $\|\mathbf{x}-\mathbf{y}\|^2 < \|\mathbf{y}\|^2$. This violates $\mathbf{y}\in\operatorname{Vor}(\mathbb{Z}^{n+1})$ and hence $\mathbf{Q}\mathbf{y}\in\operatorname{Vor}_H(A_n)$. ∎

###### Lemma 2.
$$\operatorname{Vor}_H(A_{n/m}) \subseteq \mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1})$$

with equality only when $m = n+1$.

###### Proof:

When $m = n+1$, $A_{n/(n+1)} = A_n$. The $H$-volume $\operatorname{vol}_H\operatorname{Vor}_H(A_n) = \sqrt{n+1}$ [2]. From Burger et al. [40] we find that the $H$-volume of the projected polytope $\mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1})$ is $\sqrt{n+1}$ also. As $\operatorname{Vor}_H(A_n)$ and $\mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1})$ are convex polytopes it follows from Lemma 1 that

$$\operatorname{Vor}_H(A_n) = \mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1}).$$

The proof follows from the fact that $\operatorname{Vor}_H(A_{n/m}) \subseteq \operatorname{Vor}_H(A_n)$ for all $m$ dividing $n+1$ (5). ∎

We will now prove Lemma 3, from which our algorithm is derived. We firstly need the following definition. Given two sets $A$ and $B$ we let $A+B$ be their Minkowski sum. That is, $\mathbf{c}\in A+B$ iff $\mathbf{c} = \mathbf{a}+\mathbf{b}$ where $\mathbf{a}\in A$ and $\mathbf{b}\in B$. We will also write $\mathbf{1}\mathbb{R}$ to denote the line of points $\lambda\mathbf{1}$ for all $\lambda\in\mathbb{R}$. Then $\operatorname{Vor}_H(A_{n/m})+\mathbf{1}\mathbb{R}$ is an infinite cylinder with cross-section $\operatorname{Vor}_H(A_{n/m})$. It follows that $\operatorname{Vor}(A_{n/m}) = \operatorname{Vor}_H(A_{n/m})+\mathbf{1}\mathbb{R}$.

###### Lemma 3.

If $\mathbf{Q}\mathbf{k}$, with $\mathbf{k}\in\mathbb{Z}^{n+1}$, is a closest point in $A_{n/m}$ to $\mathbf{y}$ then there exists some $\lambda\in\mathbb{R}$ for which $\mathbf{k}$ is a closest point in $\mathbb{Z}^{n+1}$ to $\mathbf{y}+\lambda\mathbf{1}$.

###### Proof:

As $\mathbf{Q}\mathbf{k}$ is the nearest point to $\mathbf{y}$ then for all $\lambda\in\mathbb{R}$

$$\mathbf{y}+\mathbf{1}\lambda \in \operatorname{Vor}(A_{n/m})+\mathbf{Q}\mathbf{k} = \operatorname{Vor}_H(A_{n/m})+\mathbf{k}+\mathbf{1}\mathbb{R}.$$

It follows from Lemma 2 that

$$\operatorname{Vor}_H(A_{n/m})+\mathbf{k}+\mathbf{1}\mathbb{R} \subseteq \mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1})+\mathbf{k}+\mathbf{1}\mathbb{R}.$$

Then $\mathbf{y}+\mathbf{1}\lambda \in \mathbf{Q}\operatorname{Vor}(\mathbb{Z}^{n+1})+\mathbf{k}+\mathbf{1}\mathbb{R}$ and for some $\lambda$

$$\mathbf{y}+\mathbf{1}\lambda \in \operatorname{Vor}(\mathbb{Z}^{n+1})+\mathbf{k}.$$

The proof now follows from the definition of the Voronoi region. ∎

Now consider the function $f:\mathbb{R}\to\mathbb{Z}^{n+1}$ defined so that

$$f(\lambda) = \lfloor\mathbf{y}+\lambda\mathbf{1}\rceil \qquad (6)$$

where $\lfloor\cdot\rceil$ applied to a vector denotes the vector in which each element is rounded to a nearest integer.² That is, $f(\lambda)$ gives a nearest point in $\mathbb{Z}^{n+1}$ to $\mathbf{y}+\lambda\mathbf{1}$ as a function of $\lambda$. Observe that $f(\lambda+1) = f(\lambda)+\mathbf{1}$. Hence,

$$\mathbf{Q}f(\lambda+1) = \mathbf{Q}f(\lambda). \qquad (7)$$

²The direction of rounding for half-integers is not important so long as it is consistent. The authors have chosen to round up half-integers in their own implementation.

Lemma 3 implies there exists some $\lambda\in\mathbb{R}$ such that $\mathbf{Q}f(\lambda)$ is a closest point to $\mathbf{y}$. Furthermore, we see from (7) that such a $\lambda$ can be found within an interval of length 1. Hence, if we define the set

$$S = \{f(\lambda) \mid \lambda\in[0,1)\}$$

then $\mathbf{Q}S$ contains a closest point in $A_{n/m}$ to $\mathbf{y}$. In order to evaluate the elements in $S$ we require the following function.

###### Definition 1.

(sort indices)

We define the function

$$\mathbf{s} = \operatorname{sortindices}(\mathbf{z})$$

to take a vector $\mathbf{z}$ of length $n+1$ and return a vector of indices $\mathbf{s}$ such that

$$z_{s_1} \ge z_{s_2} \ge z_{s_3} \ge \dots \ge z_{s_{n+1}}.$$

Let

$$\mathbf{s} = \operatorname{sortindices}(\{\mathbf{y}\})$$

where $\{x\} = x - \lfloor x\rceil$ denotes the centered fractional part of $x$ and we define $\{\cdot\}$ to operate on vectors by taking the centered fractional part of each element in the vector. It is clear that $S$ contains at most $n+2$ vectors, i.e.,

$$S = \bigl\{\lfloor\mathbf{y}\rceil,\ \lfloor\mathbf{y}\rceil+\mathbf{e}_{s_1},\ \lfloor\mathbf{y}\rceil+\mathbf{e}_{s_1}+\mathbf{e}_{s_2},\ \dots,\ \lfloor\mathbf{y}\rceil+\mathbf{e}_{s_1}+\dots+\mathbf{e}_{s_{n+1}}\bigr\} \qquad (8)$$

where $\mathbf{e}_i$ is a vector of 0's with a 1 in the $i$th position. It can be seen that the last vector listed in the set is simply $\lfloor\mathbf{y}\rceil+\mathbf{1}$ and so, once multiplied by $\mathbf{Q}$, the first and the last vector are identical.
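The set (8) is cheap to enumerate: sort the centered fractional parts once and add the unit vectors in that order. A Python sketch (the function name is ours):

```python
import numpy as np

def candidate_set(y):
    """Enumerate the n + 2 candidates u_0, ..., u_{n+1} of (8)."""
    k = np.round(y).astype(int)   # u_0 = a nearest point of Z^{n+1} to y
    frac = y - k                  # centered fractional parts {y_i}
    s = np.argsort(-frac)         # {y_{s_1}} >= {y_{s_2}} >= ... >= {y_{s_{n+1}}}
    cands = [k]
    for i in s:                   # increasing lambda rounds up the coordinate
        k = k.copy()              # with the largest fractional part first
        k[i] += 1
        cands.append(k)
    return cands
```

Note that numpy rounds half-integers to even rather than up, so ties are broken differently from the authors' convention; either choice is consistent with the footnote above.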

We can define the set $W\subseteq S$ such that

$$W = \{\mathbf{x}\in S \mid \mathbf{x}\cdot\mathbf{1} \bmod m = 0\}. \qquad (9)$$

Noting (1), $\mathbf{Q}W$ then contains the nearest point in $A_{n/m}$ to $\mathbf{y}$.

An algorithm suggests itself: test each of the distinct vectors in $\mathbf{Q}W$ and find the closest one to $\mathbf{y}$. This is the principle of the algorithm we propose in this section. It remains to show that this can be done in $O(n\log n)$ arithmetic operations.

We label the elements of $S$ according to the order given in (8). That is, we set $\mathbf{u}_0 = \lfloor\mathbf{y}\rceil$ and, for $i = 1,\dots,n+1$,

$$\mathbf{u}_i = \mathbf{u}_{i-1} + \mathbf{e}_{s_i}. \qquad (10)$$

Let $\mathbf{z}_i = \mathbf{y}-\mathbf{u}_i$. Clearly, $\mathbf{z}_0 = \{\mathbf{y}\}$. Decompose $\mathbf{y}$ into orthogonal components $\mathbf{Q}\mathbf{y}$ and $t\mathbf{1}$ for some $t\in\mathbb{R}$. The squared distance between $\mathbf{Q}\mathbf{u}_i$ and $\mathbf{y}$ is

$$\|\mathbf{y}-\mathbf{Q}\mathbf{u}_i\|^2 = d_i + t^2(n+1) \qquad (11)$$

where we define $d_i$ as

$$d_i = \|\mathbf{Q}\mathbf{z}_i\|^2 = \Bigl\|\mathbf{z}_i - \frac{\mathbf{z}_i'\mathbf{1}}{n+1}\mathbf{1}\Bigr\|^2 = \mathbf{z}_i'\mathbf{z}_i - \frac{(\mathbf{z}_i'\mathbf{1})^2}{n+1}. \qquad (12)$$

We know that the nearest point to $\mathbf{y}$ is that $\mathbf{Q}\mathbf{u}_i$ with $\mathbf{u}_i\in W$ which minimizes (11). Since the term $t^2(n+1)$ is independent of the index $i$, we can ignore it. That is, it is sufficient to minimize $d_i$ over those $i$ with $\mathbf{u}_i\in W$.

We now show that $d_i$ can be calculated inexpensively in a recursive fashion. We define two new quantities, $\alpha_i = \mathbf{z}_i'\mathbf{1}$ and $\beta_i = \mathbf{z}_i'\mathbf{z}_i$. Clearly $\alpha_0 = \{\mathbf{y}\}'\mathbf{1}$ and $\beta_0 = \{\mathbf{y}\}'\{\mathbf{y}\}$. From (10),

$$\alpha_i = \mathbf{z}_i'\mathbf{1} = (\mathbf{z}_{i-1}-\mathbf{e}_{s_i})'\mathbf{1} = \alpha_{i-1}-1 \qquad (13)$$

and

$$\beta_i = \mathbf{z}_i'\mathbf{z}_i = (\mathbf{z}_{i-1}-\mathbf{e}_{s_i})'(\mathbf{z}_{i-1}-\mathbf{e}_{s_i}) = \beta_{i-1} - 2\{y_{s_i}\} + 1. \qquad (14)$$

Algorithm 1 now follows. The main loop calculates the $\alpha_i$ and $\beta_i$ recursively. There is no need to retain their previous values, so the subscripts are dropped. One variable maintains the minimum value of the (implicitly calculated) $d_i$ so far encountered, together with the corresponding index. Another variable maintains the value of $\mathbf{u}_i'\mathbf{1} \bmod m$, which must equal 0 in order for $\mathbf{Q}\mathbf{u}_i\in A_{n/m}$.

Each line of the main loop requires $O(1)$ arithmetic operations, so the loop requires $O(n)$ operations in total. The function $\operatorname{sortindices}$ requires sorting $n+1$ elements. This requires $O(n\log n)$ arithmetic operations. The remaining vector operations all require $O(n)$ operations, and the final multiplication by $\mathbf{Q}$ can be performed in $O(n)$ operations as

$$\mathbf{Q}\mathbf{u} = \mathbf{u} - \frac{\mathbf{1}'\mathbf{u}}{n+1}\mathbf{1}.$$

It can be seen, then, that the computational cost of the algorithm is dominated by the $\operatorname{sortindices}$ function and is therefore $O(n\log n)$.

This algorithm is similar to the nearest point algorithm for $A_n^*$ described in [19]. The significant difference is the additional test that $\mathbf{u}_i'\mathbf{1} \bmod m = 0$. This ensures that the lattice points considered are elements of $A_{n/m}$, i.e. they satisfy (1). We further discuss the relationship between the algorithms in Section IV.

*(Algorithm 1 pseudocode omitted.)*
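A Python sketch of Algorithm 1 as described above (our reconstruction, not the authors' implementation; variable names are illustrative, and half-integers are rounded by numpy's round-half-to-even convention rather than the authors' round-up convention):

```python
import numpy as np

def nearest_point_coxeter(y, m):
    """O(n log n) sketch of Algorithm 1: returns u in Z^{n+1} with
    u'1 = 0 (mod m) such that Qu is a nearest point of A_{n/m} to y."""
    n1 = len(y)                          # n + 1
    u0 = np.round(y).astype(int)         # u_0 = round(y) elementwise
    z = y - u0                           # centered fractional parts {y}
    s = np.argsort(-z)                   # sortindices({y})
    alpha = float(z.sum())               # alpha_0 = z'1
    beta = float(z @ z)                  # beta_0 = z'z
    gamma = int(u0.sum()) % m            # u_i'1 mod m, tracked incrementally
    best_d, best_i = np.inf, None
    for i in range(n1 + 1):              # candidates u_0, ..., u_{n+1}
        if gamma == 0:                   # u_i satisfies the mod-m constraint (1)
            d = beta - alpha**2 / n1     # d_i from (12)
            if d < best_d:
                best_d, best_i = d, i
        if i < n1:
            alpha -= 1                   # recursion (13)
            beta += 1 - 2 * z[s[i]]      # recursion (14)
            gamma = (gamma + 1) % m
    u = u0.copy()
    u[s[:best_i]] += 1                   # rebuild u_{best_i} from (10)
    return u
```

The loop is $O(n)$ and the cost is dominated by the sort, matching the analysis above.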

## III Linear-time algorithm

In the previous section we showed that the nearest point to $\mathbf{y}$ in $A_{n/m}$ lies in the set $\mathbf{Q}W$ (9). We will show that some of the elements of $W$ can be immediately excluded from consideration. This property leads to a nearest point algorithm that requires at most $O(n)$ arithmetic operations.

###### Lemma 4.

Suppose, for some integers $i$ and $k$, that

$$\{y_{s_i}\} - \{y_{s_{i+km}}\} \le \frac{m}{n+1}. \qquad (15)$$

Then the minimum of the $d_{i+jm}$, $j\in\{0,1,\dots,k\}$, occurs at $j=0$ or $j=k$.

###### Proof:

The proof proceeds by contradiction. Suppose, to the contrary, that

$$d_{i+cm} < d_i \quad\text{and}\quad d_{i+cm} < d_{i+km}$$

for some integer $c$ with $0 < c < k$.

Observe that

$$d_{i+cm} - d_i = \frac{2\alpha_i cm - (cm)^2}{n+1} + \sum_{j=1}^{cm}\bigl(1 - 2\{y_{s_{i+j}}\}\bigr).$$

Now, since $\{y_{s_{i+j}}\} \le \{y_{s_i}\}$ for $j \ge 1$, it follows that

$$d_{i+cm} - d_i \ge \frac{2\alpha_i cm - (cm)^2}{n+1} + cm\bigl(1 - 2\{y_{s_i}\}\bigr).$$

With the assumption that $d_{i+cm} < d_i$, we have that

$$\frac{2\alpha_i - cm}{n+1} < 2\{y_{s_i}\} - 1. \qquad (16)$$

Similarly, observe that

$$d_{i+km} - d_{i+cm} = \frac{2\alpha_i(k-c)m - (k^2-c^2)m^2}{n+1} + \sum_{j=cm+1}^{km}\bigl(1 - 2\{y_{s_{i+j}}\}\bigr).$$

Since $\{y_{s_{i+j}}\} \ge \{y_{s_{i+km}}\}$ for $j \le km$, it follows that

$$d_{i+km} - d_{i+cm} \le \frac{2\alpha_i(k-c)m - (k^2-c^2)m^2}{n+1} + (k-c)m\bigl(1 - 2\{y_{s_{i+km}}\}\bigr).$$

With the assumption that $d_{i+cm} < d_{i+km}$, we have that

$$\frac{2\alpha_i - cm}{n+1} > \frac{km}{n+1} - 1 + 2\{y_{s_{i+km}}\}. \qquad (17)$$

Equations (16) and (17) together imply that

$$\{y_{s_i}\} - \{y_{s_{i+km}}\} > \frac{km}{2(n+1)},$$

which contradicts (15) because $k \ge 2$. ∎

From $S$ we can construct the subsets

$$U_j = \Bigl\{\mathbf{u}_i \;\Bigm|\; 0.5 - \{y_{s_i}\} \in \Bigl(\frac{m(j-1)}{n+1}, \frac{mj}{n+1}\Bigr]\Bigr\} \qquad (18)$$

where $j = 1,\dots,\frac{n+1}{m}$. Note that the $U_j$ partition $\{\mathbf{u}_1,\dots,\mathbf{u}_{n+1}\}$. We are interested in the elements of $U_j$ that also lie in $W$. Let $g$ be the smallest integer such that $\mathbf{u}_g\in U_j\cap W$. Let $p$ be the largest integer such that $\mathbf{u}_p\in U_j\cap W$. It follows that $p = g+km$ for some nonnegative integer $k$. Also, from (18),

$$\{y_{s_g}\} - \{y_{s_p}\} \le \frac{m}{n+1}.$$

It then follows from Lemma 4 that (11) is minimised over $U_j\cap W$ either by $\mathbf{u}_g$ or $\mathbf{u}_p$ and not by any $\mathbf{u}_i$ where $g < i < p$. We see that for each set $U_j$ there are at most two elements that are candidates for the nearest point. An algorithm can be constructed as follows: test the (at most two) candidates in each set $U_j$ and return the closest one to $\mathbf{y}$. We will now show how this can be achieved in linear time.

We construct the sets

$$B_j = \Bigl\{i \;\Bigm|\; 0.5 - \{y_i\} \in \Bigl(\frac{m(j-1)}{n+1}, \frac{mj}{n+1}\Bigr]\Bigr\} \qquad (19)$$

and the related sets

$$K_j = \bigcup_{t=1}^{j} B_t.$$

It follows that

$$\mathbf{u}_{|K_j|} = \lfloor\mathbf{y}\rceil + \sum_{t\in K_j}\mathbf{e}_t.$$
###### Definition 2.

(quick partition)

We define the function

$$\mathbf{b} = \operatorname{quickpartition}(\mathbf{z}, B_j, c)$$

to take a vector $\mathbf{z}$, a set of indices $B_j$ and an integer $c$ and return a vector $\mathbf{b}$ of length $|B_j|$ containing the elements of $B_j$ ordered such that, for $i < c < t$,

$$z_{b_i} \ge z_{b_c} \ge z_{b_t}.$$
Somewhat surprisingly, $\operatorname{quickpartition}$ can be implemented such that the required number of operations is $O(|B_j|)$. This is facilitated by the Rivest-Tarjan selection algorithm [36, 37, 38, 39]. We can compute

$$\mathbf{b} = \operatorname{quickpartition}(\mathbf{z}, B_j, c) \qquad (20)$$

for some integer $c$. Then

$$\mathbf{u}_{|K_{j-1}|+c} = \mathbf{u}_{|K_{j-1}|} + \sum_{t=1}^{c}\mathbf{e}_{b_t}. \qquad (21)$$
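In place of a hand-rolled Rivest-Tarjan routine, linear-time selection is available off the shelf; numpy's `argpartition` (introselect) gives the behaviour of Definition 2. A sketch (ours), with $c$ 1-indexed as in the paper:

```python
import numpy as np

def quickpartition(z, Bj, c):
    """Return the indices of Bj ordered so that, with 1-indexed position c,
    z[b_1..b_{c-1}] >= z[b_c] >= z[b_{c+1}..].  Linear time in |Bj|."""
    Bj = np.asarray(Bj)
    # argpartition orders ascending, so negate z to partition by decreasing value
    order = np.argpartition(-z[Bj], c - 1)
    return Bj[order]
```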

Let $g$ be the smallest integer such that $1 \le g \le |B_j|$ and

$$\mathbf{1}\cdot\mathbf{u}_{|K_{j-1}|+g} \bmod m = 0 \qquad (22)$$

and let $p$ be the largest integer such that $p \le |B_j|$ and

$$\mathbf{1}\cdot\mathbf{u}_{|K_{j-1}|+p} \bmod m = 0. \qquad (23)$$

From the previous discussion the only candidates for the nearest point out of the elements

$$\mathbf{Q}\{\mathbf{u}_{|K_{j-1}|+1},\dots,\mathbf{u}_{|K_{j-1}|+|B_j|}\} = \mathbf{Q}U_j$$

are $\mathbf{Q}\mathbf{u}_{|K_{j-1}|+g}$ and $\mathbf{Q}\mathbf{u}_{|K_{j-1}|+p}$. We can compute these quickly using the $\operatorname{quickpartition}$ function as in (20) and (21).

Algorithm 2 now follows. The algorithm first constructs the sets $B_j$. The main loop then computes the values of $g$ and $p$ for each $B_j$. We define the function

$$\mathbf{b} = \operatorname{quickpartition2}(\mathbf{z}, B_j, g, p)$$

to return $\mathbf{b}$ so that, for $i < g < t < p < c$,

$$z_{b_i} \ge z_{b_g} \ge z_{b_t} \ge z_{b_p} \ge z_{b_c}.$$

Notice that $\operatorname{quickpartition2}$ can be performed by two consecutive iterations of the Rivest-Tarjan algorithm and therefore requires $O(|B_j|)$ operations. The $\alpha_i$ and $\beta_i$ are computed within an inner loop and the index of the nearest lattice point found so far is retained. A concatenation function adds the elements of $\mathbf{b}$ to the end of an array recording the order in which the indices have been incremented; this can be performed in $O(|B_j|)$ operations. Finally the nearest lattice point is recovered from this array and multiplied by $\mathbf{Q}$.

In practice the $B_j$ can be implemented as lists so that the set insertion operations can be performed in constant time. The loops that construct the $B_j$ then require $O(n)$ arithmetic operations. The operations inside the main loop require $O(|B_j|)$ operations. The complexity of these loops is then

$$\sum_{j=1}^{(n+1)/m} O(|B_j|) = O(n).$$

The remaining lines require $O(n)$ or fewer operations. The algorithm then requires $O(n)$ arithmetic operations in the worst case.

*(Algorithm 2 pseudocode omitted.)*

## IV Specific algorithms for $A_n$ and $A_n^*$

For the lattices $A_n$ and $A_n^*$, Algorithms 1 and 2 reduce to simpler algorithms that have previously been described in the literature. For $A_n$ a log-linear-time algorithm similar to that of Conway and Sloane [7, 18] is derived from Algorithm 1 by noting that when $m = n+1$ only one iteration in the main loop will satisfy $\mathbf{u}_i'\mathbf{1} \bmod m = 0$. Algorithm 3 now follows.

*(Algorithm 3 pseudocode omitted.)*

A simple linear-time algorithm for $A_n$ can be constructed from Algorithm 3 by replacing the $\operatorname{sortindices}$ function with $\operatorname{quickpartition}$. Pseudocode is provided in Algorithm 4. In effect this is a modification of Algorithm 2 where the sets $B_j$ from (19) are replaced by a single set. This algorithm has previously been suggested by A. M. Odlyzko [2, page 448].

*(Algorithm 4 pseudocode omitted.)*

For $A_n^*$ a log-linear-time algorithm identical to that described in [19] can be derived from Algorithm 1 by noting that when $m = 1$ the condition $\mathbf{u}_i'\mathbf{1} \bmod m = 0$ holds for all $i$. A linear-time algorithm for $A_n^*$ can be constructed from Algorithm 2 by noting that when $m = 1$ the conditions (22) and (23) are satisfied for all indices. This removes the need for the $\operatorname{quickpartition2}$ function. A further simplification is noted in [20], where it was shown that the nearest point is one of the $\mathbf{u}_{|K_j|}$. The reader is referred to [20] for further details. The proofs used in [20] are significantly different to those in this paper and are only applicable to $A_n^*$.

## V Algorithm based on glue vectors

In this section we describe a simple nearest point algorithm for $A_{n/m}$. This algorithm was described by Conway and Sloane [7, 18] but not directly applied to the Coxeter lattices. The algorithm has worst case complexity $O(n^2)$.

$A_{n/m}$ can be constructed by gluing translates of the lattice $A_n$ [2]. That is,

$$A_{n/m} = \bigcup_{i=0}^{q-1}\bigl([im] + A_n\bigr) \qquad (24)$$

where $q = \frac{n+1}{m}$ and the $[i]$ are called glue vectors and are defined as

$$[i] = \frac{1}{n+1}\bigl(\underbrace{i,\dots,i}_{j\ \text{times}},\ \underbrace{-j,\dots,-j}_{i\ \text{times}}\bigr) \qquad (25)$$

for $0 \le i \le n$ with $i + j = n + 1$. Following the notation of Conway and Sloane the glue vectors will not be written in boldface. Instead they are indicated by square brackets.

Noting that $A_{n/m}$ can be constructed as a union of translates of the lattice $A_n$, we can use a nearest point algorithm for $A_n$ to find the nearest point in each of the translates. The translate containing the closest point yields the nearest point in $A_{n/m}$. A pseudocode implementation is provided in Algorithm 5. The function $\operatorname{NearestPt}(\cdot, A_n)$ can be implemented by either Algorithm 3 or 4 of Section IV.

*(Algorithm 5 pseudocode omitted.)*

The algorithm requires iterating $q = \frac{n+1}{m}$ times. Assuming that $\operatorname{NearestPt}(\cdot, A_n)$ is implemented using the linear-time algorithm (Algorithm 4), then if $q$ is a constant this yields a linear-time algorithm. At worst $q$ may grow linearly with $n$. In this case the algorithm requires $O(n^2)$ operations.
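A compact sketch of the glue-vector approach (our reconstruction; `nearest_An` is a simple Conway-Sloane style nearest point routine for $A_n$, used here in place of Algorithms 3/4, and function names are illustrative):

```python
import numpy as np

def nearest_An(y):
    """Nearest point of A_n = {x in Z^{n+1} : x'1 = 0} to a zero-sum y
    (round each coordinate, then fix the coordinate-sum deficiency)."""
    f = np.round(y).astype(int)
    delta = int(f.sum())               # deficiency of the rounded point
    err = y - f                        # rounding errors
    idx = np.argsort(err)              # ascending
    if delta > 0:                      # decrement where rounding overshot most
        f[idx[:delta]] -= 1
    elif delta < 0:                    # increment where rounding undershot most
        f[idx[delta:]] += 1
    return f

def glue_vector(i, n1):
    """Glue vector [i] of (25): i repeated j times then -j repeated i times,
    all divided by n + 1, where i + j = n + 1."""
    j = n1 - i
    return np.concatenate([np.full(j, i), np.full(i, -j)]) / n1

def nearest_coxeter_glue(y, m):
    """Algorithm 5 sketch: nearest point of A_{n/m} to y via the q translates (24)."""
    n1 = len(y)
    y0 = y - y.mean()                  # project y onto the zero-mean hyperplane H
    best, best_d = None, np.inf
    for i in range(n1 // m):           # q = (n+1)/m translates
        g = glue_vector(i * m, n1)     # glue vector [im]
        x = g + nearest_An(y0 - g)     # nearest point within this translate
        d = float(np.sum((y0 - x) ** 2))
        if d < best_d:
            best_d, best = d, x
    return best
```

Each translate costs $O(n\log n)$ here because of the sort inside `nearest_An`; replacing the sort with linear-time selection recovers the $O(nq)$ bound discussed above.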

## VI Run-time analysis

In this section we tabulate some practical computation times attained with the nearest point algorithms described in Sections II, III and V, and also some of the specialised algorithms for $A_n$ and $A_n^*$ discussed in Section IV. The algorithms were written in Java and the computer used is a 900 MHz Intel Celeron M.

Table I shows the computation times for the three algorithms from Sections II, III and V for the case where the number of translates $q = \frac{n+1}{m}$ grows with $n$. It is evident that the linear-time algorithm is the fastest. The glue vector algorithm is significantly slower for large $n$. By comparison, Table II shows the computation times for the algorithms when $q$ is held constant. The glue vector algorithm now performs similarly to the other algorithms. This behaviour is expected. As discussed in Section V, the glue vector algorithm has linear complexity when $q$ is constant, but quadratic complexity when $q$ increases with $n$.

Tables III and IV show the performance of the linear-time Coxeter lattice algorithm compared to the specialised algorithms for the lattices $A_n$ and $A_n^*$ discussed in Section IV. It is evident that the specialised algorithms are faster. This behaviour is expected as the specialised algorithms have less computational overhead.

## VII Conclusion

In this paper we have described two new nearest point algorithms for the Coxeter lattices. The first algorithm is a generalisation of the nearest point algorithm for $A_n^*$ described in [19] and requires $O(n\log n)$ arithmetic operations. The second algorithm requires $O(n)$ operations in the worst case. The second algorithm makes use of a partial sorting procedure called a bucket sort [35] and also the linear-time Rivest-Tarjan selection algorithm [36, 37, 38, 39]. In Section IV we showed how the log-linear and linear-time algorithms for the Coxeter lattices reduce to simple nearest point algorithms for $A_n$ and $A_n^*$ that already exist in the literature [2, 19, 20].

## References

• [1] I. V. L. Clarkson, “An algorithm to compute a nearest point in the lattice $A_n^*$,” in Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, Marc Fossorier, Hideki Imai, Shu Lin, and Alain Poli, Eds., vol. 1719 of Lecture Notes in Computer Science, pp. 104–120. Springer, 1999.
• [2] J. H. Conway and N. J. A. Sloane, Sphere packings, lattices and groups, Springer, 3rd edition, 1998.
• [3] M. Ajtai, “Generating hard instances of lattice problems,” in Proc. 28th ACM Symposium on Theory of Computing, pp. 99–108, May 1996.
• [4] M. Ajtai and C. Dwork, “A public-key cryptosystem with worst-case/average-case equivalence,” in Proc. 29th ACM Symposium on Theory of Computing, pp. 284–293, May 1997.
• [5] L. Brunel and J. J. Boutros, “Lattice decoding for joint detection in direct-sequence CDMA systems,” IEEE Trans. Inform. Theory, vol. 49, pp. 1030–1037, 2003.
• [6] D. J. Ryan, I. V. L. Clarkson, I. B. Collings, and R. W. Heath Jr., “Performance of vector perturbation multiuser MIMO systems with limited feedback,” Accepted for IEEE Trans. Commun., September 2008.
• [7] J. H. Conway and N. J. A. Sloane, “Fast quantizing and decoding and algorithms for lattice quantizers and codes,” IEEE Trans. Inform. Theory, vol. 28, no. 2, pp. 227–232, Mar. 1982.
• [8] U. Feige and D. Micciancio, “The inapproximability of lattice and coding problems with preprocessing,” Journal of Computer and System Sciences, vol. 69, no. 1, pp. 45–67, Aug 2004.
• [9] O. Regev, “New lattice-based cryptographic constructions,” J. ACM, vol. 51, no. 6, pp. 899–942, 2004.
• [10] D. Micciancio and O. Regev, “Lattice based cryptography,” in Post Quantum Cryptography, D .J. Bernstein, J. Buchmann, and E. Dahmen, Eds. Springer, 2009.
• [11] P. van Emde Boas, “Another NP-complete partition problem and the complexity of computing short vectors in a lattice,” Tech. Rep., Mathematisch Instituut, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands, Apr. 1981.
• [12] M. Ajtai, “The shortest vector problem in $L_2$ is NP-hard for randomized reductions,” in Proc. 30th ACM Symposium on Theory of Computing, pp. 10–19, May 1998.
• [13] S. Arora, L. Babai, J. Stern, and Z. Sweedyk, “The hardness of approximate optima in lattices, codes, and systems of linear equations,” in IEEE Symposium on Foundations of Computer Science, 1993, pp. 724–733.
• [14] D. Micciancio and O. Regev, “Worst-case to average-case reductions based on Gaussian measures,” SIAM J. on Computing, vol. 37, pp. 372–381, 2004.
• [15] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, “Closest point search in lattices,” IEEE Trans. Inform. Theory, vol. 48, no. 8, pp. 2201–2214, Aug. 2002.
• [16] E. Viterbo and J. Boutros, “A universal lattice code decoder for fading channels,” IEEE Trans. Inform. Theory, vol. 45, no. 5, pp. 1639–1642, Jul 1999.
• [17] M. Pohst, “On the computation of lattice vectors of minimal length, successive minima and reduced bases with applications,” SIGSAM Bull., vol. 15, no. 1, pp. 37–44, 1981.
• [18] J. H. Conway and N. J. A. Sloane, “Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice,” IEEE Trans. Inform. Theory, vol. 32, no. 1, pp. 41–50, Jan. 1986.
• [19] R. G. McKilliam, I. V. L. Clarkson, and B. G. Quinn, “An algorithm to compute the nearest point in the lattice $A_n^*$,” IEEE Trans. Inform. Theory, vol. 54, no. 9, pp. 4378–4381, Sep. 2008.
• [20] R. G. McKilliam, I. V. L. Clarkson, W. D. Smith, and B. G. Quinn, “A linear-time nearest point algorithm for the lattice $A_n^*$,” International Symposium on Information Theory and its Applications, 2008.
• [21] H.S.M. Coxeter, “Extreme forms,” Canad. J. Math., vol. 3, pp. 391–441, 1951.
• [22] J. Martinet, Perfect lattices in Euclidean spaces, Springer, 2003.
• [23] A.R. Calderbank and N.J.A. Sloane, “An eight-dimensional trellis code,” Proc. IEEE, vol. 74, no. 5, pp. 757–759, May 1986.
• [24] Lee-Fang Wei, “Trellis-coded modulation with multidimensional constellations,” IEEE Trans. Inform. Theory, vol. 33, no. 4, pp. 483–501, Jul 1987.
• [25] G. D. Forney Jr., “Coset codes I: Introduction and geometrical classification,” IEEE Trans. Inform. Theory, vol. 34, no. 5, pp. 1123–1151, Sep 1988.
• [26] G. D. Forney Jr., “Coset codes II: Binary lattices and related codes,” IEEE Trans. Inform. Theory, vol. 34, no. 5, pp. 1152–1187, Sep 1988.
• [27] J. Conway and N. Sloane, “Voronoi regions of lattices, second moments of polytopes, and quantization,” IEEE Trans. Inform. Theory, vol. 28, no. 2, pp. 211–226, Mar 1982.
• [28] M.S. Postol, “Some new lattice quantization algorithms for video compression coding,” IEEE Trans. Circuits Systems, vol. 12, no. 1, pp. 53–60, Jan 2002.
• [29] N. Secord and R. de Buda, “Demodulation of a Gosset lattice code having a spectral null at DC,” IEEE Trans. Inform. Theory, vol. 35, no. 2, pp. 472–477, Mar. 1989.
• [30] I. V. L. Clarkson, “Approximate maximum-likelihood period estimation from sparse, noisy timing data,” IEEE Trans. Signal Process., vol. 56, no. 5, pp. 1779–1787, May 2008.
• [31] R. G. McKilliam and I. V. L. Clarkson, “Maximum-likelihood period estimation from sparse, noisy timing data,” Proc. Internat. Conf. Acoust. Speech Signal Process., pp. 3697–3700, Mar. 2008.
• [32] I. V. L. Clarkson, “Frequency estimation, phase unwrapping and the nearest lattice point problem,” Proc. Internat. Conf. Acoust. Speech Signal Process., vol. 3, pp. 1609–1612, Mar. 1999.
• [33] B. G. Quinn, “Estimating the mode of a phase distribution,” Asilomar Conference on Signals, Systems and Computers, pp. 587–591, Nov 2007.
• [34] R. G. McKilliam, I. V. L. Clarkson, D. J. Ryan, and I. B. Collings, “Linear-time block noncoherent detection of PSK,” Accepted for Proc. Internat. Conf. Acoust. Speech Signal Process., 2008.
• [35] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, MIT Press. and McGraw-Hill, 2nd edition, 2001.
• [36] M. Blum, R. W. Floyd, V. R. Pratt, R. L. Rivest, and R. E. Tarjan, “Time bounds for selection,” J. Comput. Syst. Sci., vol. 7, no. 4, pp. 448–461, 1973.
• [37] R. W. Floyd and R. L. Rivest, “The algorithm SELECT—for finding the $i$th smallest of $n$ elements,” Commun. ACM, vol. 18, no. 3, p. 173, 1975.
• [38] R. W. Floyd and R. L. Rivest, “Expected time bounds for selection,” Commun. ACM, vol. 18, pp. 165–172, Mar 1975.
• [39] D. E. Knuth, The Art of Computer Programming, vol. 2 (Seminumerical Algorithms), Addison-Wesley, Reading, Ma., 3rd edition, 1997.
• [40] T. Burger, P. Gritzmann, and V. Klee, “Polytope projection and projection polytopes,” The American Mathematical Monthly, vol. 103, no. 9, pp. 742–755, Nov 1996.