Fluctuations of the increment of the argument for the Gaussian entire function.
Abstract.
The Gaussian entire function is a random entire function, characterised by a certain invariance with respect to isometries of the plane. We study the fluctuations of the increment of the argument of the Gaussian entire function along planar curves. We introduce an inner product on finite formal linear combinations of curves (with real coefficients), that we call the signed length, which describes the limiting covariance of the increment. We also establish asymptotic normality of fluctuations.
Let $(\zeta_k)_{k \ge 0}$ be a sequence of i.i.d. standard complex Gaussian random variables (that is, each $\zeta_k$ has density $\frac{1}{\pi} e^{-|z|^2}$ with respect to the Lebesgue measure on the plane), and define the Gaussian entire function $F$ by
(1) $\displaystyle F(z) = \sum_{k=0}^{\infty} \zeta_k \frac{z^k}{\sqrt{k!}}.$
A remarkable feature of this random entire function is the invariance of the distribution of its zero set with respect to isometries of the plane. The invariance of the distribution of $F$ under rotations is obvious, by the invariance of the distribution of each $\zeta_k$. The translation invariance arises from the fact that, for any $w \in \mathbb{C}$, the Gaussian processes $F(z+w)\,e^{-z\bar{w}-|w|^2/2}$ and $F(z)$ have the same distribution; this follows, for instance, by inspecting the covariances $\mathbb{E}\big[F(z_1)\overline{F(z_2)}\big] = e^{z_1\bar{z}_2}.$
Further, by Calabi’s rigidity, is (essentially) the only Gaussian entire function whose zeroes satisfy such an invariance (see [HKPV, Chapter 2] for details and further references).
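The translation-invariance computation can be checked numerically. A minimal sketch (the function names are ours), assuming the standard normalisation $F(z)=\sum_{k\ge 0}\zeta_k z^k/\sqrt{k!}$, under which the covariance kernel is $\mathbb{E}[F(z)\overline{F(w)}]=e^{z\bar{w}}$ and the covariance of the normalised field $F(z)e^{-|z|^2/2}$ has modulus $e^{-|z-w|^2/2}$, a translation-invariant quantity:

```python
import math

def gef_cov(z, w, n_terms=60):
    """Truncated covariance kernel E[F(z) conj(F(w))] = sum_k (z*conj(w))^k / k!."""
    s, term = 0j, 1 + 0j
    for k in range(n_terms):
        s += term
        term *= z * w.conjugate() / (k + 1)
    return s

def normalised_cov(z, w):
    """|Cov| of the normalised field F(z) e^{-|z|^2/2}; equals e^{-|z-w|^2/2}."""
    return abs(gef_cov(z, w)) * math.exp(-(abs(z) ** 2 + abs(w) ** 2) / 2)

z, w = 1.3 + 0.4j, -0.2 + 0.9j
t = 0.7 - 1.1j  # an arbitrary translation
# translating both points leaves the normalised covariance unchanged
print(abs(normalised_cov(z, w) - normalised_cov(z + t, w + t)))  # ~0
# and it agrees with the closed form e^{-|z-w|^2/2}
print(abs(normalised_cov(z, w) - math.exp(-abs(z - w) ** 2 / 2)))  # ~0
```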
Given a large parameter , the function gives rise to multi-valued fields with a high intensity of logarithmic branch points, which is somewhat reminiscent of chiral bosonic fields as described by Kang and Makarov [KM, Lecture 12]. One way to understand asymptotic fluctuations of these fields as is to study asymptotic fluctuations of the increment of the argument of along a given curve, which will be our concern in this paper. Note that, by the argument principle, if the curve bounds a domain then this observable coincides with the number of zeroes of in (the dilation of the set ), up to a factor (and a sign change if the curve is negatively oriented with respect to the domain it bounds).
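The argument-principle bookkeeping just described can be illustrated on a deterministic function. The following sketch (our own toy example, not from the paper) accumulates the continuous argument increment of $f(z)=z^3-1$ along a circle enclosing all three of its zeroes:

```python
import cmath
import math

def argument_increment(f, radius=2.0, n=4000):
    """Numerically accumulate the increment of arg f along the positively
    oriented circle |z| = radius, summing small argument steps so that no
    branch jump occurs."""
    total = 0.0
    prev = f(radius)  # start at angle 0
    for k in range(1, n + 1):
        z = radius * cmath.exp(2j * math.pi * k / n)
        cur = f(z)
        total += cmath.phase(cur / prev)  # small-angle step
        prev = cur
    return total

# f(z) = z^3 - 1 has three zeroes inside |z| = 2, so the increment is 6*pi
inc = argument_increment(lambda z: z ** 3 - 1)
print(inc / (2 * math.pi))  # ~3.0
```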
We begin with the following definition.
Definition 1.
In what follows a curve is always a smooth regular oriented simple curve in the plane, of finite length (by finite length we mean finite and positive; we do not consider a single point to be a regular curve). A chain is a finite formal sum, where the summands are curves and the coefficients are real numbers.
Note that if the coefficients are integer-valued, then we can assign an obvious geometric meaning to the formal sum.
Definition 2.
Given a curve and we define to be the random variable given by the increment of the argument of along . Given a chain we define by linearity.
In order for this definition to make sense, we need to see that almost surely does not vanish on a fixed curve. Note that the mean number of zeroes in a (measurable) subset of the plane is proportional to the Lebesgue measure of the set. Since the number of zeroes on a fixed curve is a nonnegative random variable, whose mean is zero, the required conclusion follows. A quantitative version of this is given by [NSV, Lemma 8].
It is worth pointing out that the observable is invariant with respect to rotations but not with respect to translations. Indeed, since the Gaussian functions and are equidistributed, the observable has the same distribution as . Note that the term is not random, and that it vanishes whenever is a closed chain. This implies that and have the same fluctuations, and furthermore hints that the mean of the random variable should be
(2) 
This formula is not difficult to justify, see the beginning of Section 2.
We are interested in studying the asymptotic fluctuations of the observable , as . In order to understand the limiting covariance of and we introduce an inner product on chains (strictly speaking, an inner product on equivalence classes of chains, where we identify two chains if their difference is the zero chain; we shall ignore this issue throughout).
Definition 3.
Suppose that and are curves, whose unit normal vectors are denoted and respectively. We define the signed length of their intersection to be
where and are the indicator functions of the supports of the curves and respectively (by the support of a curve we mean its image under a parameterisation), is the inner product on given by the standard inner product on (we shall frequently identify with without further comment), and is the one-dimensional Hausdorff measure. More generally, given chains and we define
This definition needs several comments.

If , , are unit-speed parameterisations of the curves and then, if , we define to be the unique value such that . We then have
(3) where and is the point mass at .

Since we deal with smooth regular curves, for most of the intersection points of and the angle between the curves is either or ; there are at most countably many points where this does not hold. This means that in (3) we can replace the term by where
where is the standard Euclidean norm on . In other words, indeed measures the signed length of the intersection of the curves and , see Figure 1.

The signed length is a bilinear form on chains, which is obviously symmetric. If then the associated quadratic form is
We see that this quadratic form is nonnegative and it vanishes if and only if is the zero chain, that is, is the zero function in . Thus the signed length defines an inner product on chains (it might be of some interest to describe the completion of this pre-Hilbert space, though for the purposes of this paper we shall have no need for such a description).
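As a toy illustration of the signed length (our own sketch, not from the paper): for two overlapping collinear unit-speed segments the intersection is an interval, and the integrand — the inner product of the two unit normals — is $\pm 1$ according to whether the chosen normals agree:

```python
def signed_length_1d_overlap(seg1, seg2):
    """Signed length of the intersection of two horizontal segments lying on
    the same line.  Each segment is ((a, b), normal) with a < b, where normal
    is +1 for the unit normal (0, 1) and -1 for (0, -1); the integrand
    <n1, n2> is then the product of the two labels."""
    (a1, b1), n1 = seg1
    (a2, b2), n2 = seg2
    overlap = max(0.0, min(b1, b2) - max(a1, a2))
    return n1 * n2 * overlap

up, down = +1, -1
# same normal on an overlap of length 2 -> signed length +2
print(signed_length_1d_overlap(((0.0, 3.0), up), ((1.0, 5.0), up)))    # 2.0
# opposite normals -> signed length -2
print(signed_length_1d_overlap(((0.0, 3.0), up), ((1.0, 5.0), down)))  # -2.0
# disjoint supports -> 0
print(signed_length_1d_overlap(((0.0, 1.0), up), ((2.0, 3.0), up)))    # 0.0
```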
We are ready to state our main result.
Theorem 1.
Let be the Gaussian entire function (1), and let be a nonzero chain. Then, as ,
(4) 
where is the Riemann zeta function, and the random variable
converges in distribution to the standard (real) Gaussian distribution.
Less formally our result says that the observables have a scaling limit which is a Gaussian field built on the linear space of chains equipped with the inner product defined by the signed length.
It is worth singling out a special case of Theorem 1, when each is the positively oriented boundary of a bounded domain . In this case
where is the number of zeroes of the entire function in the domain , the homothety of with scaling factor . Here the Gaussian scaling limit is built on finite linear combinations and the limiting covariance of and is proportional to the signed length of . Note that the same scaling limit appears in a physics paper of Lebowitz [Leb] which deals with fluctuations of classical Coulomb systems.
The Gaussian scaling limit described in this special case corresponds to high-frequency fluctuations of linear statistics of the zero set of the Gaussian entire function . For low frequencies the limiting Gaussian field is built on the Sobolev space , which consists of functions whose weak Laplacian also belongs to . This scaling limit was described in [ST1], see also [NS]. The coexistence of different scaling limits of linear statistics, with different scaling exponents, is a curious feature of the zeroes of the Gaussian entire function. We expect that a similar phenomenon should arise in other natural homogeneous point processes with suppressed fluctuations (so-called super-homogeneous point processes).
Our work also has a one-dimensional analogue. The natural analogue of a curve in one dimension is the boundary of a finite interval, and we attach a unit “normal” vector to each of the two endpoints in the following manner: we say the interval is positively oriented if the normals are inward-pointing, that is, the normal on the left endpoint points right, and the normal on the right endpoint points left. Otherwise the interval is negatively oriented and the normals point in the opposite directions. Given two such boundaries and , denoting the respective normals and , we define an inner product by
where is the (Hausdorff) counting measure, in analogy with the signed length (and we include the factor to agree exactly with the results cited below). Given an ordered pair of distinct real numbers , we identify the pair with the boundary of an interval which is positively oriented if and negatively oriented if . The corresponding inner product is then
This inner product appears as a limiting covariance in Gaussian limit theorems for eigenvalues of random unitary matrices [DE, Theorem 6.1], [HKO, Theorem 2.2], [W, Theorem 1], and for the logarithm of the Riemann zeta function on the critical line [HNY, Theorem 1 and Section 2].
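A sketch of the one-dimensional analogue (our own illustration; the paper's normalising factor, which is elided above, is omitted here): each interval boundary is two points carrying $\pm 1$ “normals”, and the inner product sums the products of the normals over shared endpoints against the counting measure:

```python
def boundary(a, b):
    """Oriented boundary of the interval with endpoints a, b, as a dict
    point -> 'normal' (+1 points right, -1 points left).  Positively
    oriented when a < b, so that the normals are inward-pointing."""
    lo, hi = min(a, b), max(a, b)
    sign = 1 if a < b else -1
    return {lo: sign, hi: -sign}

def endpoint_product(b1, b2):
    """Sum of n1(x) * n2(x) over common endpoints (the counting-measure
    analogue of the signed length; the paper's normalising factor is
    omitted here)."""
    return sum(n * b2[x] for x, n in b1.items() if x in b2)

# a boundary paired with itself: both endpoints shared, both products +1
print(endpoint_product(boundary(0, 1), boundary(0, 1)))  # 2
# adjacent positively oriented intervals share one endpoint, opposite normals
print(endpoint_product(boundary(0, 1), boundary(1, 2)))  # -1
# reversing the orientation of one interval flips the sign
print(endpoint_product(boundary(0, 1), boundary(1, 0)))  # -2
```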
We end this introduction with a brief discussion of the proof of Theorem 1. We follow the scheme developed in [ST1]. The proof of the asymptotic (4), after some preliminaries, boils down to a Laplace-type asymptotic evaluation of certain integrals. The proof of asymptotic normality uses the method of moments, and these moments are estimated using a combinatorial argument based on the diagram method. As often happens, the devil is in the details: numerous difficulties (somewhat similar difficulties were encountered by Montgomery in his study of discrepancies of uniformly distributed points [Mon, Chapter 6, Theorem 3]) arise from the fact that we cannot say much about the intersection of two “nice” curves other than that it is a one-dimensional compact subset of the plane. For example, if for and is an arbitrary valued function on , then the intersection of the corresponding curves can be an arbitrary closed subset of . We also mention that it seems likely that one may apply the Fourth Moment Theorem of Peccati and Tudor [PT4mom, Proposition 1] to see asymptotic normality, similarly to [MPRW]. We have not pursued this since, in our case, computing higher moments only introduces difficulties at the level of notation, and we do not think this is a sufficient reason to employ such powerful machinery, which relies on deep results from [NP].
Finally, a word on notation. We write to mean that for some constant , which may depend on certain fixed parameters. If and then we write . We write if . We write if as . We write if as .
Acknowledgements
The authors thank Fedor Nazarov for a helpful discussion of the subtleties of the Laplace method, and Alexander Borichev and Nikolai Makarov for several useful conversations.
1. Preliminary lemmas
1.1. Some elementary Gaussian estimates
Suppose that $\zeta$ is a standard complex Gaussian random variable. Then a routine computation shows that, for $p > -1$,
(5) $\displaystyle \mathbb{E}\,|\zeta|^{2p} = \Gamma(p+1),$
where $\Gamma$ is the Euler gamma function. An immediate consequence of (5) is the following.
Lemma 2.
Let be a complex Gaussian random variable and let be a polynomial. Then, for ,
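In its standard form, the moment identity (5) for a standard complex Gaussian reads $\mathbb{E}|\zeta|^{2p} = \Gamma(p+1)$: passing to polar coordinates against the density $e^{-|z|^2}/\pi$ and substituting $t = r^2$ reduces it to $\int_0^\infty t^p e^{-t}\,dt$. A quick midpoint-rule check of this reduction (our own sketch):

```python
import math

def moment_abs_2p(p, n=200000, t_max=60.0):
    """E|zeta|^{2p} for a standard complex Gaussian (density e^{-|z|^2}/pi):
    in polar coordinates, with t = r^2, this is int_0^inf t^p e^{-t} dt
    = Gamma(p+1), evaluated here by a composite midpoint rule."""
    h = t_max / n
    return h * sum(((k + 0.5) * h) ** p * math.exp(-(k + 0.5) * h)
                   for k in range(n))

for p in (1, 2, 3):
    print(moment_abs_2p(p), math.gamma(p + 1))  # ~ p!
```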
The next lemma is also a simple consequence of (5).
Lemma 3.
Let and be complex Gaussian random variables with , and let . Then
Proof.
If and is the Hölder conjugate of (i.e., ), we have
The next lemma is given as an exercise in Kahane’s celebrated book; for the reader’s convenience we provide a proof.
Lemma 4 ([Kah, Chapter 12, Section 8, Exercise 3]).
Let and be jointly (complex) Gaussian random variables, with . Then
Proof.
Let be two i.i.d. random variables. Since are jointly Gaussian, there are such that the pair has the same distribution as . In particular,
Taking expectation, and recalling that , we get
All that remains is to note that and that . ∎
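One classical ingredient in computations of this kind is $\mathbb{E}\log|\zeta|^2 = -\gamma$ (Euler's constant) for a standard complex Gaussian, which follows by differentiating the moment formula $\mathbb{E}|\zeta|^{2p} = \Gamma(p+1)$ at $p = 0$. A numerical check of the underlying derivative identity $\Gamma'(1) = -\gamma$ (our own sketch):

```python
import math

EULER_GAMMA = 0.5772156649015329

# E log|zeta|^2 = (d/dp) E|zeta|^{2p} at p = 0, which equals Gamma'(1) = -gamma;
# here Gamma'(1) is approximated by a central finite difference.
h = 1e-5
gamma_prime_at_1 = (math.gamma(1 + h) - math.gamma(1 - h)) / (2 * h)
print(gamma_prime_at_1)  # ~ -0.5772
```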
Lemma 5 ([Feld, Lemma B.2]).
Let and be random variables with and suppose that and . Then
Remark.
If then the expectation is divergent, even for .
1.2. Gradients
For convenience we write , and define
and note that is a random variable that satisfies
Furthermore . To simplify our notation, we define
The next lemma will be important later.
Lemma 6.
Given a compact and , we have
for all .
Proof.
It is easy to see that
Trivially is finite and independent of , and Cauchy–Schwarz implies that
since is a complex Gaussian with variance . ∎
Lemma 7.
Given a compact , a polynomial and , we have
for all .
1.3. Interchange of operations
In the proof of Theorem 1 we will repeatedly need to apply Fubini’s Theorem and exchange derivatives with expectation. In this subsection we prove some lemmas that will allow us to do precisely this. Throughout this section will be curves and will denote the normal vector to the curve at the point . We begin with a lemma that covers all of the cases we need.
Lemma 8.
Let be differentiable functions for and let . Suppose that
(6) 
and that, for almost every tuple with respect to the measure , there exists and such that
(7) 
Then
(8) 
Remark.
Proof.
Note that (6) immediately implies, by Fubini, that
It therefore suffices to show that, for almost every tuple with respect to the measure ,
(9) 
Fix a tuple satisfying (7) for and , and define, for ,
We will show that
(10) 
which will imply (9), and therefore prove the lemma.
We begin by establishing the existence of the inner limit on the right-hand side of (10). Notice first that, almost surely, does not vanish on the line intervals joining to . Therefore, there exist some (random) neighbourhoods of these intervals where the gradient is a well-defined function. We conclude that the limits
exist almost surely. Finally we show that
(11) 
By a standard argument, this implies that for is a uniformly integrable class of functions, and since we have already shown almost sure convergence (and therefore convergence in measure), we may infer (10).
We now show that the hypotheses of the previous lemma hold in each of the specific cases we will need.
Remark.
In this case, (7) holds for every tuple .
Proof.
Remark.
In this case, (7) holds for every pair .
Proof.
Proof.
First fix , let and let be the Hölder conjugate of . Then, for ,
since is a complex Gaussian with variance . (The constant depends on and, if and are restricted to lie in a compact , on .) Applying Lemma 5 we have
(12) 
2. The mean and variance
In this section we prove the first part of our theorem, the asymptotic (4). We begin by computing the mean of , that is, proving (2); note that by linearity it is enough to show that
for a regular oriented simple curve . For such a curve we have (note that almost surely does not vanish on )
which implies that
we may apply Fubini by Lemma 3. Applying Lemma 4 we see that
which is precisely (2).