# Volumes and distributions for random unimodular complex and quaternion lattices

###### Abstract.

We develop links between harmonic analysis, the geometry of numbers and random matrix theory in the setting of complex and quaternion vector spaces. Harmonic analysis enters through the computation of volumes — done using the Selberg integral — of truncations of SL and SL endowed with invariant measure, which have applications to asymptotic counting formulas, for example of matrices in SL. The geometry of numbers enters through imposing an invariant measure on the space of unimodular lattices, the latter constructed using certain complex quadratic integers (complex case) and the Hurwitz integers (quaternion case). We take up the problem of applying lattice reduction in this setting, in two dimensions, giving a unified proof of convergence of the appropriate analogue of the Lagrange–Gauss algorithm to the shortest basis. A decomposition of measure corresponding to the QR decomposition is used to specify the invariant measure in the coordinates of the shortest basis vectors, which also encodes the specification of fundamental domains of certain quotient spaces. Integration over the latter gives rise to certain number theoretic constants, which are also present in the asymptotic forms of the distribution of the lengths of the shortest basis vectors, which we compute explicitly. Siegel’s mean value theorem can be used to reclaim these same arithmetic constants. Numerical implementation of the Lagrange–Gauss algorithm allows for statistics relating to the shortest basis to be generated by simulation. Excellent agreement with the theory is obtained in all cases.


## 1. Introduction

Let be a basis of , and require that the corresponding parallelotope have unit volume. Let

(1.1) |

denote the corresponding lattice. The Minkowski-Hlawka theorem tells us that for large , there exists lattices such that the shortest vectors have length proportional to . By the Minkowski convex body theorem this is also the maximum possible order of magnitude of the shortest vectors; see e.g. [3]. Siegel [30] introduced the notion of a random lattice, and was able to show that for large dimension , a random lattice will typically achieve the Minkowski-Hlawka bound.

The construction of Siegel of a random lattice requires first the specification of the unique invariant measure for the matrix group ; each such matrix is interpreted as having columns forming a basis . One also requires the fact that the quotient space can be identified with the lattice , and that this quotient space has finite volume with respect to the invariant measure.

In a recent work [11] by one of the present authors, a viewpoint from random matrix theory was taken on the computation of volumes associated with , and this led to a Monte Carlo procedure to generate random lattices in the sense of Siegel. In low dimensions there are fast exact lattice reduction algorithms to find the shortest lattice vectors [27, 23] – the two-dimensional case is classical, being due to Lagrange and Gauss; see e.g. [2]. These were implemented in dimensions two and three to obtain histograms of the lengths and their mutual angles; in dimension two the exact functional forms were obtained by integration over the fundamental domain. For general , it was shown how a mean value theorem derived by Siegel in [30] implies the exact functional form of the distribution of the length of the shortest vector for general ,

(1.2) |

where denotes the Riemann zeta function, and the volume of the unit ball in dimension (actually only the case was presented, but the derivation applies for general to give (1.2)).

In random matrix theory, matrix groups with entries from any of the three associative normed division algebras (the reals, the complex numbers or the quaternions) are fundamental [8] (dropping the requirement of associativity permits the octonions to be added to the list; see the recent work [12] for spectral properties of various ensembles of Hermitian matrices with octonion entries). As such, attention is drawn to extending the considerations of [11] to the case of complex and quaternion vector spaces. One remarks that lattices in these vector spaces, with scalars equal to the Gaussian integers and Eisenstein integers in the complex case, and the Hurwitz integers in the quaternion case, received earlier attention for their application to signal processing in wireless communication [37, 15, 36, 32], and for their consequences for lattice packing bounds [34] respectively. The study [22] extends the LLL lattice reduction algorithm to these settings.

Of particular interest from the viewpoint of [11] are the invariant measures for the complex and quaternion groups, the associated volumes, and the corresponding lattice reduction problems. Following the work of Jack and Macbeath in the real case, we begin in §2 by using the singular value factorisation to decompose the invariant measures. To obtain a finite volume, a certain truncation must be introduced, most naturally by restricting the norm to be bounded by a fixed value. We do this in the case of the operator norm, given by the largest singular value, and the 2-norm, given by the square root of the sum of the squares of the singular values. The large-cutoff form of the volume is of particular relevance due to counting formulas of the type [7]

(1.3) |

Here is the Haar measure on , and vol the volume of the corresponding fundamental domain.

For lattices in with scalars from particular rings of complex quadratic integers, there is a generalisation of the Lagrange-Gauss algorithm that allows for the determination of a reduced basis with the shortest possible lengths. For the Gaussian and Eisenstein integers this has been noted previously [37, 32], although our proofs given in §4.1 are different and apply to all cases at once. They are motivated by known theory in the real case, which we revise in §3. Another point covered in §3 is the observation in [4] that the original Lagrange-Gauss algorithm is equivalent to a simple mapping in the complex plane, related to the Gauss map for continued fractions. We show in §4.2 that in the case of lattices in , the generalisation of the Lagrange-Gauss algorithm for lattice reduction can be written as a scalar mapping of quaternions.

In the Gaussian case, the PDF for the lengths of the reduced basis vectors and the scaled inner product are computed analytically in Section 4.4. For values of less than , it is found for a particular , thus relating to (1.2) with . This latter result is found too in the case of the Eisenstein integers, for a different value of , upon the exact calculation of the functional form of the PDF of the length of the shortest vector carried out in Section 4.5. Siegel’s mean value theorem [30] is used to give an independent computation of in the two cases.

Analogous considerations are applied to lattices formed from vectors in with scalars the integer Hurwitz quaternions in Section 5; now for a particular , thus relating to (1.2) with . Here the direct computation of as done for the case of the Gaussian and Eisenstein integers appears not to be tractable, but the exact value can be found indirectly by use of Siegel’s mean value theorem.

In §6 we show how to sample matrices from and with a bounded operator norm. As for the case of discussed in the second paragraph of this section, we take the viewpoint that the columns of these matrices specify bases for and respectively. We implement the Lagrange-Gauss algorithm in the complex and quaternion case, obtaining histograms approximating the PDF for the lengths of the reduced basis vectors and the scaled inner product , and where possible compare against the analytic results.

## 2. Invariant measure and volumes for and

### 2.1. Invariant measure

By way of preliminaries, one recalls that the quaternions are a non-commutative algebra with elements of the form

(2.1) |

where the coefficients are real, and each distinct pair of the quaternion units anti-commutes. However, matrix groups with quaternion elements typically make use of the representation of quaternions as 2 × 2 complex matrices

(2.2) |

Thus for example matrices from and are then block matrices with each entry a 2 × 2 block of the form (2.2), and hence complex matrices of twice the linear dimension.
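As a concrete illustration of the representation (2.2), the following sketch builds the 2 × 2 complex matrix of a quaternion in one common sign convention (conventions vary between references) and checks that the map respects multiplication and that the determinant recovers the quaternion norm. The helper name `quat_to_complex` is ours.

```python
import numpy as np

def quat_to_complex(a, b, c, d):
    """2x2 complex matrix representing q = a + b i + c j + d k
    (one common convention; signs vary between references)."""
    alpha = complex(a, b)
    beta = complex(c, d)
    return np.array([[alpha, beta],
                     [-beta.conjugate(), alpha.conjugate()]])

# The map is an algebra homomorphism: the matrix product matches the
# quaternion product, e.g. i * j = k, and the determinant equals the
# squared quaternion norm a^2 + b^2 + c^2 + d^2.
i_rep = quat_to_complex(0, 1, 0, 0)
j_rep = quat_to_complex(0, 0, 1, 0)
k_rep = quat_to_complex(0, 0, 0, 1)
assert np.allclose(i_rep @ j_rep, k_rep)

q = quat_to_complex(1, 2, 3, 4)
assert abs(np.linalg.det(q) - 30) < 1e-9  # 1 + 4 + 9 + 16 = 30
```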

Let , where , or . Label the latter by , , respectively according to the number of independent real parts in an element of . The symbol denotes the product of differentials of all the real and imaginary parts of . Since for fixed

(these follow from e.g. [10, Prop. 3.2.4]), one has that

(2.3) |

is unchanged by both left and right group multiplication, and is thus the left and right invariant Haar measure for the group. In the real case (2.3) was identified by Siegel [30]. Matrices in form the subgroup of with unit determinant. Using a delta function distribution to implement this constraint, (2.3) becomes

(2.4) |

In preparation for computing volumes associated with (2.4), as done in the pioneering work of Jack and Macbeath [19] in the case , we make use of a singular value decomposition

(2.5) |

where the factors are unitary matrices with entries in . In the quaternion case each entry is a 2 × 2 block, so viewed as a complex matrix each singular value is repeated twice along the diagonal. For (2.5) to be one-to-one it is required that the singular values be ordered

and that the entries in the first row of be real and positive.

Changing variables according to (2.5) gives (see e.g. [6, Prop. 2])

(2.6) |

where and are the invariant measures on . For and this was first identified by Hurwitz [18]; the extension of Hurwitz’s ideas to the case of unitary matrices with quaternion entries is given in [5]. The factor comes about due to the restriction on the entries in the first row of .
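The invariant (Haar) measure on the unitary group appearing here can also be sampled numerically, which is useful for the simulations of §6. A minimal sketch in the complex case, via QR decomposition of a complex Ginibre matrix with the phases of the diagonal of R absorbed so the decomposition is unique (a standard recipe, not taken from this paper; the helper `haar_unitary` is ours):

```python
import numpy as np

def haar_unitary(n, seed=None):
    """Sample an n x n unitary matrix from Haar measure by QR of a
    complex Gaussian (Ginibre) matrix, with the phase convention fixed
    to remove the ambiguity Q -> Q D of the QR decomposition."""
    rng = np.random.default_rng(seed)
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # multiply columns by unit phases

u = haar_unitary(4, seed=0)
# u is unitary: u @ u.conj().T equals the identity to machine precision.
```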

Let us now first restrict the matrices to have positive determinant, then to have determinant unity by imposing the delta function constraint in (2.4). This requires that we multiply (2.6) by

(2.7) |

Consequently, with

(2.8) |

it follows from this modification of (2.6) that

(2.9) |

The precise value of depends on the convention used to relate the line element corresponding to the differential to the Euclidean line element; see [11, Remark 2.3]. This convention can be uniquely specified by integrating (2.6) against Gaussian weighted matrices – see [11, Remark 2.3] – with the result [6, Eq. (1) with ]

(2.10) |

In the case the multiple integral in (2.9) was first evaluated by Jack and Macbeath [19]. In the recent work [11] a simplified derivation was given by making use of the Selberg integral [26, 13, 10]. This strategy can be extended to general .

###### Proposition 1.

Define

(2.11) |

and set

(2.12) |

For we have

(2.13) |

###### Proof.

Replace the delta function factor by and denote (2.11) in this setting by . Making the change of variables and taking the Mellin transform of both sides shows

Here is the Selberg integral in the notation of [10, Ch. 4]. Making use of the gamma function evaluation of the Selberg integral [26], [10, Eq. (4.3)], and the notation (2.12) reduces this to

Now taking the inverse Mellin transform, (2.13) follows. ∎
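For reference, the gamma function evaluation of the Selberg integral used in the proof reads, in the classical normalisation,

```latex
\int_0^1 \cdots \int_0^1 \prod_{l=1}^{N} t_l^{\alpha-1}(1-t_l)^{\beta-1}
\prod_{1 \le j < k \le N} |t_k - t_j|^{2\gamma} \, dt_1 \cdots dt_N
= \prod_{j=0}^{N-1}
\frac{\Gamma(\alpha + j\gamma)\,\Gamma(\beta + j\gamma)\,\Gamma(1 + (j+1)\gamma)}
     {\Gamma(\alpha + \beta + (N+j-1)\gamma)\,\Gamma(1 + \gamma)}.
```

The notation of [10, Ch. 4] differs slightly (the exponents there are shifted by one), but the product of gamma functions is the ingredient that reduces the Mellin transform to the stated form.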

###### Remark 2.

###### Corollary 3.

As

(2.16) |

where

(2.17) |

and

(2.18) |

###### Proof.

The results (2.16) and (2.17) follow by closing the contour in the left half plane, and noting that the leading behaviour is determined by the singularity closest to the origin, which occurs at . Recalling the definition (2.11) and substituting (2.16) in (2.9), making use too of (2.10), gives (2.18). ∎

Also of interest is the analogue of (2.8) for the 2-norm

for which the analogue of (2.9) reads

(2.19) |

where

(2.20) |

The integral was evaluated in [10, Prop. 2.9] for , according to a strategy that extends to general .

###### Proposition 4.

For we have

(2.21) |

###### Proof.

First introduce

so that

(2.22) |

Forming the Mellin transform with respect to shows, after minor manipulation, that

The multidimensional integral in this expression is closely related to the Selberg integral, and has the known evaluation in terms of gamma functions [39], [10, Eq. (4.154)]. Substituting the latter shows

The stated result (2.21) now follows by taking the inverse Mellin transform and setting . ∎

###### Corollary 5.

As

(2.23) |

where

(2.24) |

and

(2.25) |

###### Proof.

###### Remark 6.

As commented in the Introduction in the case , one interest in the asymptotic volume formulas (2.18) and (2.23) lies in asymptotic counting formulas of the type (1.3). Indeed, F. Calegari (private correspondence) remarks that in the context of [7], or also of Eskin–McMullen [9, Theorem 1.4], the basic point is that the points of a semi-simple group (like SL) are the points of another group (the Weil restriction of scalars), so one can apply these theorems to show that the counting problem over the ring of integers of some (any) number field reduces to a volume calculation. For example, as a natural extension of (1.3), one might expect in the case that

(2.26) |

where denotes the Gaussian integers. The leading asymptotics of the integral over are given by (2.18) with for and by (2.23) with for .

In a normalisation of compatible with that used in [7] to give in (1.3) that

(2.27) |

where denotes the Riemann zeta function (in relation to (2.27), see [16]), a result of Siegel [29] gives

(2.28) |

where denotes the Dedekind zeta function for the Gaussian integers,

(2.29) |

where the second equality is a well-known factorisation; see e.g. [1]. For future reference we note that for this gives

(2.30) |

where

denotes Catalan’s constant.
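The factorisation (2.29) can be checked numerically. The Dedekind zeta function of the Gaussian integers at s = 2 is one quarter of the sum of N(q)^(-2) over nonzero Gaussian integers (the quarter accounting for the four units), and the factorisation asserts this equals ζ(2) times Catalan's constant. A quick sketch; the truncation radius is our choice:

```python
import math

# Truncated lattice sum for zeta_{Q(i)}(2):
# (1/4) * sum over 0 < m^2 + n^2 <= R^2 of (m^2 + n^2)^{-2};
# the factor 1/4 accounts for the four units +-1, +-i.
R = 400
total = 0.0
for m in range(-R, R + 1):
    for n in range(-R, R + 1):
        N = m * m + n * n
        if 0 < N <= R * R:
            total += 1.0 / (N * N)
lattice_sum = total / 4

# Factorisation: zeta_{Q(i)}(2) = zeta(2) * G, with G Catalan's constant;
# the truncation error is of order pi / (4 R^2), well below 1e-4 here.
catalan = 0.915965594177219
assert abs(lattice_sum - (math.pi ** 2 / 6) * catalan) < 1e-4
```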

## 3. The Lagrange-Gauss algorithm – the real case

Our study of lattice reduction in and draws heavily on the theory of lattice reduction in . For the logical development of our work we must revise some essential aspects of the latter, presenting in particular theory associated with the Lagrange-Gauss algorithm.

### 3.1. Vector recurrence and shortest reduced basis

Let with say, be a basis for , and let be the corresponding lattice. The lattice reduction problem in is to find the shortest nonzero vector in (call this ), and the shortest nonzero vector linearly independent from (call this ) to obtain a new, reduced basis.

Let us suppose that a fundamental cell in has unit volume. Then with written as column vectors, the matrix has unit modulus for its determinant, which we denote . Similarly with we have . The matrices and are related by

(3.1) |

The Lagrange-Gauss algorithm finds a sequence of matrices () such that

(3.2) |

(in fact for chosen with invariant measure, samples from uniformly; see [24]). Defining

(3.3) |

the first column of is the second column of so that we can now set

for some column vectors . Then (3.3) reduces to a single vector recurrence

(3.4) |

The integer in (3.4) is chosen to minimise and is given by

(3.5) |

where denotes the closest integer function (boundary case ), and so

(3.6) |

Geometrically, the RHS of (3.6) is recognised as the formula for the component of near orthogonal to . The qualification "near" is required because is constrained to be an integer so that .

A basic property of (3.4) is that successive vectors are smaller in magnitude whenever ; see e.g. [2].

###### Lemma 7.

Suppose . We have

(3.7) |

###### Proof.

Since the vectors in with length less than some value form a finite set, Lemma 7 implies that for some we must have . Then (3.4) gives . If at this stage , the algorithm stops with in (3.2), and outputs

(3.11) |

as the reduced basis. If instead the algorithm stops with in (3.2) and outputs

(3.12) |

as the reduced basis. Equivalently, the recurrence (3.2) is iterated until for some , , and the output is the shortest basis and .

For both (3.11) and (3.12) it follows from (3.9) with respectively, and the relative length of that

or equivalently

(3.13) |

One observes that the final inequality is equivalent to requiring that

(3.14) |

An alternative way to see (3.14) is to recall that the integer value which minimises (3.4) is given by (3.5), and to apply this with , for which . Basis vectors which satisfy (3.14), together with the first inequality in (3.11), are said to be greedy reduced in two dimensions [23]. Of fundamental importance is the classical fact that a greedy reduced basis in two dimensions is a shortest reduced basis (the converse is immediate).

###### Proposition 8.

Let be a greedy reduced basis. Then is a shortest reduced basis.

###### Proof.

We follow the proof given in [14], which begins with the greedy reduced basis inequalities

(3.15) |

Let be any nonzero element of . In the case it is immediate that . In the case , write with such that

(3.16) |

Then

and thus by the triangle inequality

(3.17) |

Now by (3.15), and so

(3.18) |

where the second inequality follows from (3.16). Finally, applying (3.15) gives as required. ∎

### 3.2. Complex scalar recurrence

The vector equation (3.6) can also be written in scalar form, albeit involving complex numbers [4]. Thus, set and write . The fact that

(3.19) |

then allows (3.6) to be written

or equivalently, with ()

(3.20) |

With and the complex numbers corresponding to the vectors and , setting the conditions (3.13) for a reduced basis read

(3.21) |

The inequalities (3.21) are recognised as specifying the fundamental domain in the upper half plane model of hyperbolic geometry, up to details on the boundary; see e.g. [33]. Starting with , , the recurrence (3.20) is to be iterated until .
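The iteration of translations and inversions implicit here can be made concrete. The following sketch reduces a point of the upper half plane into the standard fundamental domain with |Re z| ≤ 1/2 and |z| ≥ 1, up to boundary details; it is our rendering of the standard reduction, not code from the paper:

```python
def reduce_to_fundamental_domain(z, max_iter=1000):
    """Map z in the upper half plane into the standard fundamental
    domain |Re z| <= 1/2, |z| >= 1 by integer translations z -> z - n
    and inversions z -> -1/z (boundary conventions ignored)."""
    for _ in range(max_iter):
        z -= round(z.real)   # translate so that |Re z| <= 1/2
        if abs(z) < 1:
            z = -1 / z       # invert when strictly inside the unit circle
        else:
            return z
    return z

w = reduce_to_fundamental_domain(0.7 + 0.1j)
# w lies in the fundamental domain; for this starting point w is
# (numerically) the point i.
```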

As already noted in [11], the Haar measure for with can be parametrised in terms of variables which allow for a seemingly different simplification of the inequalities (3.13), which can in fact be identified with (3.21). The variables of interest come about by writing in the form , where is a real orthogonal matrix with determinant and is an upper triangular matrix with positive diagonal entries,

(3.22) |

With , the matrix can be used to rotate the lattice so that lies along the positive x-axis. Thus (3.22) gives , and the inequalities (3.13) read

(3.23) |

Further, [11, Eq. (4.13)] tells us that the invariant measure in the coordinates and is equal to

(3.24) |

where for , otherwise. In relation to (3.20) and (3.21), we should introduce the scaled vector and thus identify . The inequalities (3.23) then reduce to (3.21), while changing variables in the invariant measure (3.24) gives

(3.25) |

The factor , in keeping with the remark below (3.21), is familiar as the invariant measure in the upper half plane model of hyperbolic geometry [33].

Distributions for the lengths of and can be computed by appropriate integrations over (3.24) and (3.25) [11]. In the present context, the first calculation of this type appears to have been carried out by Shlosman and Tsfasman [28], who computed the distribution of the random variable — this has the interpretation as the sphere (disk) packing density. Integrations with respect to (3.24) are also a feature of exact calculations for the distribution of certain scaled diameters for random -regular circulant graphs with [21]; of the study of kinetic transport in the two-dimensional periodic Lorenz gas [20]; and of calculations relating to the asymptotics of certain random linear congruences , as [31], amongst other recent examples.

## 4. Lattice reduction in

### 4.1. The complex Lagrange-Gauss algorithm

We seek a generalisation of the Lagrange-Gauss lattice reduction algorithm to the case of lattices in