
Fluctuations of the increment of the argument for the Gaussian entire function.

Jeremiah Buckley Department of Mathematics, King’s College London, Strand, London, WC2R 2LS, UK  and  Mikhail Sodin School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel

The Gaussian entire function is a random entire function, characterised by a certain invariance with respect to isometries of the plane. We study the fluctuations of the increment of the argument of the Gaussian entire function along planar curves. We introduce an inner product on finite formal linear combinations of curves (with real coefficients), that we call the signed length, which describes the limiting covariance of the increment. We also establish asymptotic normality of fluctuations.

The first author is supported by ISF Grants  1048/11 and 166/11, by ERC Grant 335141 and by the Raymond and Beverly Sackler Post-Doctoral Scholarship 2013–14. The second author is supported by ISF Grants 166/11 and 382/15.

Let (ξ_n)_{n≥0} be a sequence of i.i.d. standard complex Gaussian random variables (that is, each ξ_n has density (1/π) e^{−|w|²} with respect to the Lebesgue measure on the plane), and define the Gaussian entire function by

F(z) = Σ_{n≥0} ξ_n zⁿ/√(n!).   (1)
A remarkable feature of this random entire function is the invariance of the distribution of its zero set with respect to isometries of the plane. The invariance of the distribution of the zero set under rotations is obvious, by the invariance of the distribution of each ξ_n. The translation invariance arises from the fact that, for any w ∈ ℂ, the Gaussian processes F(z + w) e^{−z w̄ − |w|²/2} and F(z) have the same distribution; this follows, for instance, by inspecting the covariances

E[F(z₁) · conj F(z₂)] = e^{z₁ z̄₂}.

Further, by Calabi’s rigidity, F is (essentially) the only Gaussian entire function whose zeroes satisfy such an invariance (see [HKPV, Chapter 2] for details and further references).
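As a numerical aside (not part of the paper), the covariance kernel above is easy to check by simulating a truncated version of the series (1); the truncation level, sample size and evaluation points below are arbitrary choices of ours.

```python
import numpy as np
from math import factorial, sqrt

def sample_gef(zs, n_terms=40, n_samples=50_000, seed=0):
    """Sample a truncated GEF  F(z) = sum_{n < n_terms} xi_n z^n / sqrt(n!)."""
    rng = np.random.default_rng(seed)
    # standard complex Gaussians: E[xi] = 0, E[|xi|^2] = 1
    xi = (rng.standard_normal((n_samples, n_terms))
          + 1j * rng.standard_normal((n_samples, n_terms))) / sqrt(2)
    powers = np.array([[z ** n / sqrt(factorial(n)) for n in range(n_terms)]
                       for z in zs])                 # shape (len(zs), n_terms)
    return xi @ powers.T                             # shape (n_samples, len(zs))

z, w = 0.5 + 0.0j, 0.3 + 0.2j
F = sample_gef([z, w])
emp = np.mean(F[:, 0] * np.conj(F[:, 1]))   # estimates E[F(z) conj(F(w))]
print(abs(emp - np.exp(z * np.conj(w))))    # small Monte Carlo error
```

For |z|, |w| ≤ 1 the tail of the series beyond 40 terms is negligible, so the empirical covariance should match e^{z w̄} up to Monte Carlo error.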

Given a large parameter R, the argument of F gives rise to multi-valued fields with a high intensity of logarithmic branch points, which is somewhat reminiscent of chiral bosonic fields as described by Kang and Makarov [KM, Lecture 12]. One way to understand asymptotic fluctuations of these fields as R → ∞ is to study asymptotic fluctuations of the increment of the argument of F along a given curve, which will be our concern in this paper. Note that, by the argument principle, if the curve bounds a domain Ω then this observable coincides with the number of zeroes of F in RΩ (the dilation of the set Ω by R), up to a factor 2π (and a sign change if the curve is negatively oriented with respect to the domain it bounds).
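To illustrate the argument principle numerically (an aside of ours, not from the paper), take a polynomial with known zeroes in place of F: the increment of the argument along a positively oriented circle, computed by unwrapping phases along a fine discretisation, is 2π times the number of enclosed zeroes.

```python
import numpy as np

zeros = [0.2, -0.3j, 0.0, 2.0]           # three zeroes inside the unit circle
p = lambda z: np.prod([z - a for a in zeros], axis=0)

t = np.linspace(0.0, 2.0 * np.pi, 20_001)
gamma = np.exp(1j * t)                   # positively oriented unit circle
phase = np.unwrap(np.angle(p(gamma)))    # continuous branch of arg p
increment = phase[-1] - phase[0]
print(round(increment / (2 * np.pi)))    # 3, by the argument principle
```

The discretisation must be fine enough that consecutive phase steps stay below π, otherwise `np.unwrap` picks the wrong branch.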

We begin with the following definition.

Definition 1.

In what follows a curve is always a C¹-smooth regular oriented simple curve in the plane, of finite length (by finite length we mean finite and positive; we do not consider a single point to be a regular curve). An ℝ-chain is a finite formal sum γ = Σ_j c_j Γ_j, where Γ_j are curves and the coefficients c_j are real numbers.

Note that if the coefficients c_j are integer valued, then we can assign an obvious geometric meaning to the formal sum γ.

Definition 2.

Given a curve Γ and R > 0 we define δ(RΓ) to be the random variable given by the increment of the argument of F along the dilated curve RΓ. Given an ℝ-chain γ = Σ_j c_j Γ_j we define δ(Rγ) = Σ_j c_j δ(RΓ_j).

In order for this definition to make sense, we need to see that almost surely F does not vanish on a fixed curve. Note that the mean number of zeroes in a (measurable) subset of the plane is proportional to the Lebesgue measure of the set. Since the number of zeroes on a fixed curve is a non-negative random variable whose mean is zero, the required conclusion follows. A quantitative version of this is given by [NSV, Lemma 8].
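The constant of proportionality here is classical; as a sketch (a standard fact, not spelled out in the original), it follows from the Edelman–Kostlan formula for the first intensity of the zero set of a Gaussian analytic function:

```latex
% Edelman--Kostlan: for a Gaussian analytic function with covariance kernel K,
% the first intensity of the zero set is (1/(4\pi)) \Delta_z \log K(z, z).
% Here K(z, z) = \mathbb{E}|F(z)|^2 = \sum_{n \ge 0} \frac{|z|^{2n}}{n!} = e^{|z|^2},
% hence
\rho_1(z) \;=\; \frac{1}{4\pi}\,\Delta_z \log e^{|z|^2}
          \;=\; \frac{1}{4\pi}\,\Delta_z |z|^2
          \;=\; \frac{1}{\pi}\,.
% In particular the mean number of zeroes in a measurable set E equals |E|/\pi,
% which vanishes when E is a curve (a set of zero area).
```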

It is worth pointing out that the observable δ(RΓ) is invariant with respect to rotations but not with respect to translations. Indeed, since the Gaussian functions F(z + w) e^{−z w̄ − |w|²/2} and F(z) are equidistributed, the observable δ(R(Γ + w)) has the same distribution as δ(RΓ) + R² Im((z₂ − z₁) w̄), where z₁ and z₂ are the initial and terminal points of Γ. Note that the term R² Im((z₂ − z₁) w̄) is not random, and that it vanishes whenever γ is a closed chain. This implies that δ(R(γ + w)) and δ(Rγ) have the same fluctuations, and furthermore hints that the mean of the random variable δ(Rγ) should be

E[δ(Rγ)] = R² Im ∫_γ z̄ dz.   (2)
This formula is not difficult to justify, see the beginning of Section 2.
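One way to see where this formula for the mean comes from (a sketch of ours; the details are carried out in Section 2, and we use the covariance kernel e^{z w̄} together with the Gaussian-pair identity of Lemma 4 below):

```latex
% By the argument principle, the increment of the argument along R\Gamma is
%   \delta(R\Gamma) = \operatorname{Im} \int_{R\Gamma} \frac{F'(z)}{F(z)} \, dz .
% For the jointly Gaussian pair (F'(z), F(z)) one has (cf. Lemma 4)
\mathbb{E}\left[\frac{F'(z)}{F(z)}\right]
  \;=\; \frac{\mathbb{E}\bigl[F'(z)\,\overline{F(z)}\bigr]}{\mathbb{E}\bigl[|F(z)|^2\bigr]}
  \;=\; \frac{\bar z\, e^{|z|^2}}{e^{|z|^2}} \;=\; \bar z \,,
% so that, taking expectation under the integral sign,
\mathbb{E}\,\delta(R\Gamma)
  \;=\; \operatorname{Im} \int_{R\Gamma} \bar z \, dz
  \;=\; R^2 \operatorname{Im} \int_{\Gamma} \bar z \, dz \,.
```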

We are interested in studying the asymptotic fluctuations of the observable δ(Rγ), as R → ∞. In order to understand the limiting covariance of δ(Rγ₁) and δ(Rγ₂) we introduce an inner product on ℝ-chains. (Strictly speaking, we introduce an inner product on equivalence classes of ℝ-chains, where we identify two chains if their difference is the zero chain. We shall ignore this issue throughout.)

Definition 3.

Suppose that Γ₁ and Γ₂ are curves, whose unit normal vectors are denoted n₁ and n₂ respectively. We define the signed length of their intersection to be

L(Γ₁, Γ₂) = ∫ 1_{Γ₁}(z) 1_{Γ₂}(z) ⟨n₁(z), n₂(z)⟩ dH¹(z),   (3)

where 1_{Γ₁} and 1_{Γ₂} are the indicator functions of the supports (by the support of a curve we mean its image, as a subset of the plane) of the curves Γ₁ and Γ₂ respectively, ⟨·,·⟩ is the inner product on ℂ given by the standard inner product on ℝ² (we shall frequently identify ℂ with ℝ² without further comment) and H¹ is the one-dimensional Hausdorff measure. More generally, given ℝ-chains γ = Σ_j c_j Γ_j and γ′ = Σ_k c′_k Γ′_k we define

L(γ, γ′) = Σ_{j,k} c_j c′_k L(Γ_j, Γ′_k).

This definition needs several comments.

  1. If γ₁: [0, ℓ₁] → ℂ and γ₂: [0, ℓ₂] → ℂ are unit speed parameterisations of the curves Γ₁ and Γ₂ then, for z ∈ Γ₁, we define t₁(z) to be the unique value t such that γ₁(t) = z. We then have


    where δ_z is the point mass at z.

  2. Since we deal with C¹-smooth regular curves, for most of the intersection points of Γ₁ and Γ₂ the angle between the curves is either 0 or π; there are at most countably many points where this does not hold. This means that in (3) we can replace the term ⟨n₁(z), n₂(z)⟩ by σ(z), where

    σ(z) = +1 if n₁(z) = n₂(z) and σ(z) = −1 if n₁(z) = −n₂(z).

    In other words, L(Γ₁, Γ₂) indeed measures the signed length of the intersection of the curves Γ₁ and Γ₂, see Figure 1.

    Figure 1. Illustration of the signed length of curves Γ₁ and Γ₂; the sign is indicated at the points of intersection.
  3. The signed length is a bilinear form on ℝ-chains, which is obviously symmetric. If γ = Σ_j c_j Γ_j then the associated quadratic form is

    L(γ, γ) = ∫ ‖ Σ_j c_j 1_{Γ_j}(z) n_j(z) ‖² dH¹(z).

    We see that this quadratic form is non-negative and it vanishes if and only if γ is the zero chain, that is, Σ_j c_j 1_{Γ_j} n_j is the zero function in L²(H¹). Thus the signed length defines an inner product on ℝ-chains. (It might be of some interest to describe the completion of this pre-Hilbert space, though for the purposes of this paper we shall have no need for such a description.)
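As a toy illustration (ours, with conventions chosen for concreteness): for two horizontal unit-speed segments at the same height, with unit normal obtained by rotating the direction of traversal by +90°, the signed length reduces to ± the length of the overlap.

```python
def signed_length(seg1, seg2):
    """Signed length of the intersection of two horizontal oriented segments.

    A segment is (x_start, x_end) at a fixed common height, oriented from
    x_start to x_end; its unit normal is the tangent rotated by +90 degrees,
    i.e. (0, 1) for left-to-right and (0, -1) for right-to-left traversal.
    """
    (a0, a1), (b0, b1) = seg1, seg2
    overlap = min(max(a0, a1), max(b0, b1)) - max(min(a0, a1), min(b0, b1))
    overlap = max(0.0, overlap)                        # empty intersection -> 0
    sign = 1.0 if (a1 - a0) * (b1 - b0) > 0 else -1.0  # <n1, n2> = +/- 1
    return sign * overlap

print(signed_length((0, 2), (1, 4)))   # 1.0: same orientation, overlap [1, 2]
print(signed_length((0, 2), (4, 1)))   # -1.0: opposite orientations
print(signed_length((0, 1), (2, 3)))   # 0.0: disjoint supports
```

Transversal crossings contribute nothing here, consistent with the fact that isolated intersection points carry zero one-dimensional Hausdorff measure.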

We are ready to state our main result.

Theorem 1.

Let F be the Gaussian entire function (1), and let γ be a non-zero ℝ-chain. Then, as R → ∞,

Var[δ(Rγ)] = ((√π/2) ζ(3/2) + o(1)) · R · L(γ, γ),   (4)
where ζ is the Riemann zeta function, and the random variable

(δ(Rγ) − E[δ(Rγ)]) / Var[δ(Rγ)]^{1/2}

converges in distribution to the standard (real) Gaussian distribution.

Less formally, our result says that the observables δ(Rγ) have a scaling limit which is a Gaussian field built on the linear space of ℝ-chains equipped with the inner product defined by the signed length.

It is worth singling out a special case of Theorem 1, when each Γ_j is the positively oriented boundary of a bounded domain Ω_j. In this case

δ(Rγ) = 2π Σ_j c_j n(RΩ_j),

where n(RΩ_j) is the number of zeroes of the entire function F in the domain RΩ_j, the homothety of Ω_j with scaling factor R. Here the Gaussian scaling limit is built on finite linear combinations of the counting functions n(RΩ_j), and the limiting covariance of n(RΩ₁) and n(RΩ₂) is proportional to the signed length L(∂Ω₁, ∂Ω₂). Note that the same scaling limit appears in a physics paper of Lebowitz [Leb] which deals with fluctuations of classical Coulomb systems.

The Gaussian scaling limit described in this special case corresponds to high-frequency fluctuations of linear statistics of the zero set of the Gaussian entire function F. For low frequencies the limiting Gaussian field is built on the Sobolev-type space of L²-functions whose weak Laplacian also belongs to L². This scaling limit was described in [ST1], see also [NS]. The co-existence of different scaling limits of linear statistics, with different scaling exponents, is a curious feature of the zeroes of the Gaussian entire function. We expect that a similar phenomenon should arise in other natural homogeneous point processes with suppressed fluctuations (so-called superhomogeneous point processes).

Our work also has a one-dimensional analogue. The natural analogue of a curve in one dimension is the boundary of a finite interval, and we attach a unit “normal” vector to each of the two end-points in the following manner: we say the interval is positively oriented if the normals are inward-pointing, that is, the normal on the left end-point points right, and the normal on the right end-point points left. Otherwise the interval is negatively oriented and the normals point in the opposite directions. Given two such boundaries ∂I₁ and ∂I₂, denoting the respective normals n₁ and n₂, we define an inner product by

⟨∂I₁, ∂I₂⟩ = (1/2) ∫ 1_{∂I₁}(x) 1_{∂I₂}(x) n₁(x) n₂(x) dH⁰(x),

where H⁰ is the (Hausdorff) counting measure, in analogy with the signed length (and we include the factor 1/2 to agree exactly with the results cited below). Given an ordered pair of distinct real numbers (a, b), we identify the pair with the boundary of an interval which is positively oriented if a < b and negatively oriented if a > b. The corresponding inner product is then

⟨(a, b), (c, d)⟩ = (1/2) (1_{a=c} + 1_{b=d} − 1_{a=d} − 1_{b=c}).

This inner product appears as a limiting covariance in Gaussian limit theorems for eigenvalues of random unitary matrices ([DE, Theorem 6.1], [HKO, Theorem 2.2], [W, Theorem 1]) and for the logarithm of the Riemann zeta function on the critical line ([HNY, Theorem 1 and Section 2]).

We end this introduction with a brief discussion of the proof of Theorem 1. We follow the scheme developed in [ST1]. The proof of the asymptotic (4), after some preliminaries, boils down to a Laplace-type asymptotic evaluation of certain integrals. The proof of asymptotic normality uses the method of moments, and these moments are estimated using a combinatorial argument based on the diagram method. As often happens, the devil is in the details: numerous difficulties (somewhat similar difficulties were encountered by Montgomery in his study of discrepancies of uniformly distributed points [Mon, Chapter 6, Theorem 3]) arise from the fact that we cannot say much about the intersection of two “nice” curves other than that it is a one-dimensional compact subset of the plane. For example, the intersection of the graph of an arbitrary non-negative C¹-smooth function on [0, 1] with the segment [0, 1] × {0} can be an arbitrary closed subset of [0, 1]. We also mention that it seems likely that one may apply the Fourth Moment Theorem of Peccati and Tudor [PT4mom, Proposition 1] to see asymptotic normality, similarly to [MPRW]. We have not pursued this since, in our case, computing higher moments only introduces difficulties at the level of notation, and we do not think this is a sufficient reason to employ such powerful machinery, which relies on deep results from [NP].

Finally, a word on notation. We write a ≲ b to mean that a ≤ C b for some constant C > 0, which may depend on certain fixed parameters. If a ≲ b and b ≲ a then we write a ≃ b. We write a = O(b) if |a| ≲ b. We write a = o(b) if a/b → 0 as R → ∞. We write a ∼ b if a/b → 1 as R → ∞.


The authors thank Fedor Nazarov for a helpful discussion of the subtleties of the Laplace method, and Alexander Borichev and Nikolai Makarov for several useful conversations.

1. Preliminary lemmas

1.1. Some elementary Gaussian estimates

Suppose that ξ is a standard complex Gaussian random variable. Then a routine computation shows that, for p > −2,

E[|ξ|^p] = Γ(p/2 + 1),   (5)

where Γ is the Euler gamma function. An immediate consequence of (5) is the following.

Lemma 2.

Let be a complex Gaussian random variable and let be a polynomial. Then, for ,
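As a quick numerical aside (ours, not part of the paper): for a standard complex Gaussian ξ the squared modulus |ξ|² is a mean-one exponential variable, so E[|ξ|^p] = Γ(p/2 + 1) for every p > −2, which is easy to confirm by a seeded simulation.

```python
import numpy as np
from math import gamma, sqrt

# standard complex Gaussian: density exp(-|z|^2)/pi, so E[|xi|^2] = 1
rng = np.random.default_rng(1)
xi = (rng.standard_normal(400_000) + 1j * rng.standard_normal(400_000)) / sqrt(2)

# |xi|^2 ~ Exp(1), hence E[|xi|^p] = Gamma(p/2 + 1)
emp = {p: float(np.mean(np.abs(xi) ** p)) for p in (1.0, 2.0, 3.0)}
for p, m in emp.items():
    print(p, m, gamma(p / 2 + 1))
```

For p close to −2 the Monte Carlo estimator has heavy tails, so only moderate exponents are checked here.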

The next lemma is also a simple consequence of (5).

Lemma 3.

Let and be complex Gaussian random variables with , and let . Then


If p > 1 and q is the Hölder conjugate of p (i.e., 1/p + 1/q = 1), we have

The next lemma is given as an exercise in Kahane’s celebrated book; for the reader’s convenience we provide a proof.

Lemma 4 ([Kah, Chapter 12, Section 8, Exercise 3]).

Let ξ₁ and ξ₂ be jointly (complex) Gaussian random variables, with E[|ξ₂|²] = 1. Then

E[ξ₁/ξ₂] = E[ξ₁ ξ̄₂].

Let η₁, η₂ be two i.i.d. standard complex Gaussian random variables. Since ξ₁, ξ₂ are jointly Gaussian, there are a, b ∈ ℂ such that the pair (ξ₁, ξ₂) has the same distribution as (a η₁ + b η₂, η₁). In particular,

ξ₁/ξ₂ has the same distribution as a + b η₂/η₁.

Taking expectation, and recalling that E[η₂/η₁] = E[η₂] E[1/η₁] = 0, we get

E[ξ₁/ξ₂] = a.

All that remains is to note that E[ξ₁ ξ̄₂] = a E[|η₁|²] + b E[η₂ η̄₁] = a and that E[|η₁|²] = 1. ∎

Lemma 5 ([Feld, Lemma B.2]).

Let and be random variables with and suppose that and . Then


If then the expectation is divergent, even for .

1.2. Gradients

For convenience we write , and define

and note that is a random variable that satisfies

Furthermore . To simplify our notation, we define

The next lemma will be important later.

Lemma 6.

Given a compact and , we have

for all .


It is easy to see that

Trivially is finite and independent of , and the Cauchy–Schwarz inequality implies that

since is a complex Gaussian with variance . ∎

Lemma 7.

Given a compact , a polynomial and , we have

for all .


This lemma follows from the Cauchy–Schwarz inequality, Lemma 6 and Lemma 2. ∎

1.3. Interchange of operations

In the proof of Theorem 1 we will repeatedly need to apply Fubini’s Theorem and to exchange derivatives with expectations. In this subsection we prove some lemmas that will allow us to do precisely this. Throughout this section Γ₁, …, Γ_k will be curves, and n_j(z) will denote the normal vector to the curve Γ_j at the point z. We begin with a lemma that covers all of the cases we need.

Lemma 8.

Let be differentiable functions for and let . Suppose that


and that, for almost every tuple with respect to the measure , there exists and such that




Trivially (6) implies that the left-hand side of (8) is well defined. However, as will be clear from the proof, we can only infer that the integrand on the right-hand side, that is the term , is well-defined at the points where (7) holds.


Note that (6) immediately implies, by Fubini, that

It therefore suffices to show that, for almost every tuple with respect to the measure ,


Fix a tuple satisfying (7) for and , and define, for ,

We will show that


which will imply (9), and therefore prove the lemma.

We begin by establishing the existence of the inner limit on the right-hand side of (10). Notice first that, almost surely, does not vanish on the line intervals joining to . Therefore, there exist some (random) neighbourhoods of these intervals where the gradient is a well-defined function. We conclude that the limits

exist almost surely. Finally we show that


By a standard argument, this implies that the functions in question form a uniformly integrable family and, since we have already shown almost sure convergence (and therefore convergence in measure), we may infer (10).

Once more we note that, almost surely, does not vanish on the line interval joining to . This implies that


We get

by Fubini. By (7) we see that this is bounded uniformly in , which is precisely (11). ∎

We now show that the hypotheses of the previous lemma hold in each of the specific cases we will need.

Lemma 9.

Suppose that are polynomials for . Then (6) and (7) hold.


In this case, (7) holds for every tuple .


First note that, repeatedly applying Cauchy-Schwarz, both (6) and (7) follow if we see that

is uniformly bounded for in a compact and any . But this is precisely the conclusion of Lemma 7. ∎

Lemma 10.

Suppose that and is a polynomial. Then (6) and (7) hold (with ).


In this case, (7) holds for every pair .


Suppose that , choose and let q be the Hölder conjugate of p (i.e., 1/p + 1/q = 1). Note that

Once more, applying Lemma 7, the term involving is uniformly bounded. It therefore suffices to see that

is uniformly bounded for in a compact and . Since

we have

and Lemma 3 completes the proof. ∎

Lemma 11.

Suppose that . Then (with ) (6) holds and for every pair with , (7) holds.


First fix , let and let be the Hölder conjugate of . Then, for ,

since is a complex Gaussian with variance . (The constant depends on and, if and are restricted to lie in a compact , on .) Applying Lemma 5 we have


Once more we note that for . Therefore to show (6) it suffices to see that

Now for and we have , where the implicit constant depends only on and . We conclude that, taking in (12),

since . This proves (6).

Now fix and with and . Since we can find such that

Then, by (12),

This implies that (7) holds, and completes the proof of the lemma. ∎

2. The mean and variance

In this section we prove the first part of our theorem, the asymptotic (4). We begin by computing the mean of δ(Rγ), that is, proving (2); note that, by linearity, it is enough to show that

for a regular oriented simple curve Γ. For such a curve we have (note that almost surely F does not vanish on RΓ)

which implies that

(we may apply Fubini here, by Lemma 3). Applying Lemma 4 we see that

which is precisely (2).

2.1. The variance

Given a chain γ = Σ_j c_j Γ_j, to prove (4) it is enough to show that


and the rest of this section will be devoted to establishing this asymptotic. First note that we have


and that (2) may be re-written as


Recalling that

we see that