Large Complex Correlated Wishart Matrices:
The Pearcey Kernel and Expansion at the Hard Edge.

Walid Hachem (CNRS LTCI; Télécom ParisTech, 46 rue Barrault, 75634 Paris Cedex 13, France. Email: walid.hachem@telecom-paristech.fr), Adrien Hardy (Department of Mathematics, KTH Royal Institute of Technology, Lindstedtsvägen 25, 10044 Stockholm, Sweden. Email: ahardy@kth.se), Jamal Najim (CNRS LIGM; Université Paris-Est, Cité Descartes, 5 Boulevard Descartes, Champs sur Marne, 77 454 Marne-la-Vallée Cedex 2, France. Email: najim@univ-mlv.fr)
Abstract

We study the eigenvalue behaviour of large complex correlated Wishart matrices near an interior point of the limiting spectrum where the density vanishes (cusp point), and refine the existing results at the hard edge as well. More precisely, under mild assumptions for the population covariance matrix, we show that the limiting density vanishes at generic cusp points like a cube root, and that the local eigenvalue behaviour is described by means of the Pearcey kernel if an extra decay assumption is satisfied. As for the hard edge, we show that the density blows up like an inverse square root at the origin. Moreover, we provide an explicit formula for the correction term for the fluctuation of the smallest random eigenvalue.

AMS 2000 subject classification: Primary 15A52, Secondary 15A18, 60F15.
Key words and phrases: Large random matrices, Wishart matrix, Pearcey kernel, Bessel kernel.

1 Introduction

Empirical covariance matrices are natural random matrix models in applied mathematics, and their study goes back at least to the work of Wishart [36]. In the large dimensional regime, where both the dimension of the observations and the sample size go to infinity at the same speed, Marčenko and Pastur provided in the seminal paper [26] the first description of the limiting spectral distribution of such matrices, see also [32]. For instance, this limiting distribution has a continuous density on ; its support is compact if the spectral norm of the population covariance matrix is bounded; it may include the origin and may also have several connected components.

Afterwards, attention turned to the local behaviour of the random eigenvalues near points of interest in the limiting spectrum, such as positive endpoints (soft edges), see e.g. [21, 23, 4, 15, 18], interior points where the density vanishes (cusp points) [27], or the origin when it belongs to the spectrum (hard edge) [16, 18]. Complex correlated Wishart matrices, namely covariance matrices with complex Gaussian entries, play a particular role in such investigations since their random eigenvalues form a determinantal point process. Indeed, for determinantal point processes, a local asymptotic analysis can often be performed by using tools from complex analysis such as saddle point analysis or Riemann-Hilbert techniques. In the more general setting of not necessarily Gaussian entries, one then typically shows that the local behaviours are the same as in the Gaussian case by comparison or interpolation methods, see e.g. [24, 25].

For complex correlated Wishart matrices, a fairly complete picture of the local fluctuations at every edge of the limiting spectrum has been obtained in the recent work [18], provided that a regularity condition is satisfied. This condition essentially guarantees that the local fluctuations follow the usual laws from random matrix theory. For instance, if one considers a soft edge of the limiting spectrum, then this regularity condition ensures that the limiting density vanishes like a square root at the edge, and the fluctuations of the associated extremal eigenvalues follow the Tracy-Widom law involving the Airy kernel. As for the hard edge, when it is present, the fluctuations are described instead by means of the Bessel kernel.

The aim of this work is twofold. First, we investigate the local behaviour of the eigenvalues near a cusp point which satisfies the regularity condition: We show that the limiting density vanishes like a cube root near the cusp point (hence justifying the name) and, under an extra assumption on the decay of a speed parameter, we establish that the local fluctuations of the eigenvalues near the cusp point are described by means of the Pearcey kernel.

Our second contribution is to strengthen the results of [18] concerning the local analysis at the hard edge: We show that the density behaves like an inverse square root near the origin, and we provide an explicit formula for the next-order correction term for the fluctuations. This last result is motivated by the recent work [13] by Edelman, Guionnet and Péché where they conjecture a precise formula for the next-order term for the non-correlated Wishart matrix, a conjecture then proven right by Bornemann [7] and Perret and Schehr [30], with different strategies. Our result hence extends this formula, with an alternative proof, to the more general setting of correlated Wishart matrices.

The reader interested in a pedagogical overview on the results from [18] and the present work may have a look at the survey [19]; it also contains further information on the matrix model and lists some open problems.

Let us also stress that, at the technical level, the study of this matrix model shares similar features with the study of the additive perturbation of a GUE random matrix [10], and random Gelfand-Tsetlin patterns [11, 12], although each model ultimately brings up its own share of technicalities.

We provide precise statements for our results in Section 2, and then prove the results on the density behaviour in Section 3, the cusp point fluctuations in Section 4, and the expansion at the hard edge in Section 5.

Acknowledgements.

The authors are pleased to thank Folkmar Bornemann, Antti Knowles and Anthony Metcalfe for fruitful discussions. During this work, AH was supported by the grant KAW 2010.0063 from the Knut and Alice Wallenberg Foundation. The work of WH and JN was partially supported by the program “modèles numériques” of the French Agence Nationale de la Recherche under the grant ANR-12-MONU-0003 (project DIONISOS). Support of Labex BÉZOUT from Université Paris Est is also acknowledged.

2 Statement of the main results

2.1 The matrix model and assumptions

The random matrix model of interest here is the matrix

(2.1)

where is a matrix with independent and identically distributed (i.i.d.) entries with zero mean and unit variance, and is a deterministic positive definite Hermitian matrix. The random matrix thus has non-negative eigenvalues, which may however be of two different kinds: The smallest eigenvalues are deterministic and all equal to zero, whereas the other eigenvalues are random. The problem is then to describe the asymptotic behaviour of the random eigenvalues of , as the size of the matrix grows to infinity. As for the asymptotic regime of interest, we let both the number of rows and columns of grow to infinity at the same speed: We assume and so that

(2.2)

This regime will be simply referred to as in the sequel.
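
For concreteness, a standard way of writing this model and this regime, in a notation of our own that may differ from the precise conventions adopted here, is the following: let $\Sigma$ be the $M\times M$ population covariance matrix and $X$ an $M\times N$ matrix with i.i.d. centered entries of unit variance, and set
\[
\mathbf{M}_N \;=\; \frac{1}{N}\,\Sigma^{1/2} X X^{*}\, \Sigma^{1/2},
\qquad
\frac{M}{N}\;\xrightarrow[N\to\infty]{}\;\gamma\in(0,\infty).
\]
When $M>N$, the matrix $\mathbf{M}_N$ has $M-N$ deterministic null eigenvalues, while the remaining $\min(M,N)$ eigenvalues are random; this is the dichotomy described above.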

Let us mention that the random covariance matrix

which is also under consideration, has exactly the same random eigenvalues as , and hence results on the random eigenvalues can be carried over from one model to the other immediately.

Our first assumption is that the entries of are complex Gaussian. As we shall state later on, this assumption is fundamental for our local eigenvalue behaviour analysis, but not for our results on the limiting density behaviour, see Remark 2.3.

Assumption 1.

The entries of are i.i.d. standard complex Gaussian random variables.

Considering now the matrix , we denote by its eigenvalues and by

(2.3)

its spectral measure. We also make the following assumption.

Assumption 2.
  (a) For large enough, the eigenvalues of stay in a compact subset of independent of , i.e.

    (2.4)
  (b) The measure weakly converges towards a limiting probability measure as , namely

    (2.5)

    for every bounded and continuous function .

Again, Assumption 2(a) is necessary for our results on the local eigenvalue behaviour, but our results on the limiting density behaviour require a weaker assumption, see Remark 2.3.

We now turn to the description of the asymptotic eigenvalue distribution.

2.2 Limiting eigenvalue distribution

Consider the empirical distribution of the eigenvalues of , namely

Since the seminal work of Marčenko and Pastur [26], it is known that this measure almost surely (a.s.) converges weakly towards a limiting probability measure with compact support, provided that Assumption 2 holds true:

(2.6)

for every bounded and continuous function . As a probability measure, is characterized by its Cauchy-Stieltjes transform, which is the holomorphic function defined by

(2.7)

Marčenko and Pastur proved that is the unique solution of the fixed-point equation

(2.8)

where we recall that has been introduced in (2.2) and is the weak limit of , see (2.5). Thanks to this equation, Silverstein and Choi then showed in [32] that and exists for every . Consequently, the function can be continuously extended to and, furthermore, has a density on given by

(2.9)

We therefore have the representation

(2.10)
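
For later reference, here is how these objects look in one standard normalization (our own notation, which may differ from the conventions of (2.7), (2.8) and (2.9) by constants): with $m$ the Cauchy-Stieltjes transform of $\mu$, $\nu$ the limiting population spectral measure and $\gamma$ the limiting dimension ratio,
\[
m(z)\;=\;\int\frac{\mu(d\lambda)}{\lambda-z},
\qquad
m(z)\;=\;\Big(-z+\gamma\int\frac{t\,\nu(dt)}{1+t\,m(z)}\Big)^{-1},
\qquad \operatorname{Im} z>0,
\]
the second identity being Silverstein's form of the Marčenko-Pastur fixed-point equation, and the density is recovered by Stieltjes inversion,
\[
\rho(x)\;=\;\frac{1}{\pi}\,\lim_{\varepsilon\downarrow 0}\operatorname{Im} m(x+i\varepsilon),
\qquad x>0,
\]
the measure $\mu$ being $\rho(x)\,dx$, possibly completed by an atom at the origin accounting for the deterministic null eigenvalues.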

They also obtained that is real analytic wherever it is positive, and they moreover characterized the (compact) support of the measure by building on ideas from [26]. More specifically, one can see that the function has an explicit inverse (for the composition law) on given by

(2.11)

If we introduce the open subset of the real line

(2.12)

then the map analytically extends to . It was shown in [32] that

(2.13)
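
Still in the assumed normalization of the sketch above, the explicit inverse of (2.11) and the support characterization of (2.12) and (2.13) take the classical Silverstein-Choi form (the exact conventions used here may differ):
\[
g(m)\;=\;-\,\frac{1}{m}\;+\;\gamma\int\frac{t\,\nu(dt)}{1+tm},
\]
and, for $x>0$,
\[
x\notin\operatorname{supp}(\mu)
\quad\Longleftrightarrow\quad
x=g(m)\ \text{ for some real }m\neq0\ \text{ with }\ -\tfrac{1}{m}\notin\operatorname{supp}(\nu)\ \text{ and }\ g'(m)>0.
\]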

Equipped with the definitions of , and , we are now able to state our results concerning the behaviours of the limiting density near a cusp point or at the hard edge.

2.3 Density behaviour near a cusp point

Figure 1: Plot of with parameters and with a cusp point .

As stated in the introduction, we define a cusp point as an interior point where the density vanishes, namely such that . In particular, by virtue of (2.9). Our first result states that the density behaves like a cube root near a cusp point, provided that .

Proposition 1.

Let be such that , and assume that . Then we have

Moreover,

(2.14)

In particular, there exists such that for every , we have .
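
The cube root in (2.14) can be seen heuristically with the assumed normalization above: writing $\mathsf{m}_a$ for the (real) boundary value of the Cauchy-Stieltjes transform at the cusp point $a$, so that $a=g(\mathsf{m}_a)$ with $g'(\mathsf{m}_a)=g''(\mathsf{m}_a)=0$ and $g'''(\mathsf{m}_a)\neq0$, one inverts the cubic Taylor expansion of $g$ and takes imaginary parts (a sketch of the mechanism only; the actual proof is in Section 3.1):
\[
x-a\;\approx\;\frac{g'''(\mathsf{m}_a)}{6}\,\big(m(x)-\mathsf{m}_a\big)^{3}
\quad\Longrightarrow\quad
\operatorname{Im} m(x)\;\approx\;\frac{\sqrt{3}}{2}\,\Big|\frac{6\,(x-a)}{g'''(\mathsf{m}_a)}\Big|^{1/3},
\]
the branch of the cube root being selected by the constraint $\operatorname{Im} m\ge0$, whence
\[
\rho(x)\;=\;\frac{1}{\pi}\operatorname{Im} m(x)\;\approx\;\frac{\sqrt{3}}{2\pi}\Big(\frac{6}{|g'''(\mathsf{m}_a)|}\Big)^{1/3}|x-a|^{1/3}
\qquad\text{as }x\to a.
\]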

Remark 2.1.

In the forthcoming local analysis of the random eigenvalues near a cusp point, we shall focus on cusp points satisfying a regularity condition. This extra assumption automatically yields that , see Remark 2.5.

Conversely, we have the following result.

Proposition 2.

If satisfies , then belongs to and . In particular, and satisfies (2.14).

We prove Propositions 1 and 2 in Sections 3.1 and 3.2 respectively. Their proofs are based on the fact that there is a strong relation between the property that is a cusp point and the local behaviour of near . For an illustration of these propositions, we refer to Figure 2, which displays the graph of the map associated with the density from Figure 1.

Figure 2: Plot of the function on for and . The vertical dotted lines are ’s asymptotes at and . The thick segment on the vertical axis represents the set delimited by the local extrema of g (see Eq. (2.13)). The point is a cusp point.

We now turn to the hard edge setting.

2.4 Density behaviour near the hard edge

As usual in random matrix theory, the hard edge refers here to the origin when it belongs to the limiting spectrum. In general, the limiting eigenvalue distribution may not display a hard edge, as in Figure 1. In fact, this is always the case when , see [18, Proposition 2.4 (a),(c)]. Our next result states that there is a hard edge, namely , if and only if , and that in this case blows up like an inverse square root at the origin. We furthermore relate the presence of a hard edge to the behaviour of near . More precisely, since by Assumption 2, one can see from the definition (2.11) of that the map is holomorphic at the origin. Thus, we have the analytic expansion as

(2.15)

Clearly, and the coefficients and are respectively given by the first and second derivative of the map evaluated at .
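
To illustrate how such an expansion can arise, evaluate the assumed form of $g$ from the sketches above at $-1/m$, so that the hard edge corresponds to $m\to0$; since $\operatorname{supp}(\nu)$ stays away from the origin, the resulting map is holomorphic there. This is only one natural parametrization, not necessarily the one behind (2.15):
\[
g\!\Big(-\frac{1}{m}\Big)\;=\;m+\gamma\int\frac{t\,m}{m-t}\,\nu(dt)
\;=\;(1-\gamma)\,m\;-\;\gamma\Big(\int\frac{\nu(dt)}{t}\Big)m^{2}\;+\;O(m^{3}),
\qquad m\to0.
\]
In this parametrization the linear coefficient vanishes exactly when $\gamma=1$, while the quadratic coefficient is always negative.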

Proposition 3.

The following three assertions are equivalent:

Moreover, we have and, if one of these assertions is satisfied, then

(2.16)

Proposition 3 is proven in Section 3.3.

Remark 2.2.

There is an analogous statement for any left edge of the spectrum satisfying which follows from [32]; see also [19, Section 2] for further information. Indeed, in this case we have

and furthermore, as ,

By analogy with this equation, the preimage corresponding to the hard edge is . The fact that it actually belongs to follows from Assumption 2(a).

Remark 2.3.

As we shall see in Section 3, the proofs of Propositions 1, 2 and 3 only rely on the properties of the limiting eigenvalue distribution , which do not depend on whether the entries of are Gaussian or not. More precisely, the exact assumptions required for these propositions are that the entries of are i.i.d. centered random variables with variance one, Assumption 2(b), and that (which follows from Assumption 2(a)), see [26, 32].

2.5 The Pearcey kernel and fluctuations near a cusp point

Our next result essentially states that the random eigenvalues of , properly scaled near a regular cusp point, asymptotically behave like the determinantal point process associated with the Pearcey kernel, provided that an extra condition on a speed parameter is satisfied.

In order to state this result, we first introduce this limiting point process. Next, we define what we mean by regular, and establish the existence of appropriate scaling parameters. Finally, we state our result for the fluctuations near a cusp point.

2.5.1 Determinantal point processes

A point process on (or on a subset thereof), namely a probability distribution over the locally finite discrete subsets of , is determinantal if there exists an appropriate kernel which characterizes the correlation functions in the following way: For every and every compactly supported Borel function , we have

(2.17)

In particular, the gap probabilities can be expressed as Fredholm determinants. Namely, given any interval , the probability that the point process avoids reads

(2.18)

and the right hand side is the Fredholm determinant of the integral operator acting on with kernel , provided that it makes sense. For instance, if one assumes that is a compact interval, which is enough for the purpose of this work, then is well-defined and finite as soon as . Moreover, the map is Lipschitz with respect to when restricted to the kernels satisfying (see e.g. [3, Lemma 3.4.5]). We refer the reader to [20, 22, 3] for further information on determinantal point processes.
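
Spelled out for a kernel $K$ as above, the correlation structure encoded by (2.17) and the Fredholm expansion behind (2.18) take the standard form (see [20, 22, 3]): the $k$-point correlation functions are
\[
\rho_k(x_1,\dots,x_k)\;=\;\det\big[K(x_i,x_j)\big]_{i,j=1}^{k},
\]
and the gap probability over an interval $J$ reads
\[
\mathbb{P}\big(\text{no particle falls in }J\big)
\;=\;\det\big(I-K\big)_{L^{2}(J)}
\;=\;1+\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n!}\int_{J^{n}}\det\big[K(x_i,x_j)\big]_{i,j=1}^{n}\,dx_1\cdots dx_n.
\]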

2.5.2 The Pearcey kernel

Given any , consider the Pearcey-like integral functions

where the contour has two non-intersecting components, one which goes from to , whereas the other one goes from to . More precisely, we parametrize here this contour by

(2.19)

with the orientation as shown in Figure 3.

Figure 3: The contour .

It follows from their definitions that the functions and satisfy the respective differential equations
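
To illustrate the type of differential equations meant here, consider one common normalization of Pearcey-like integrals, not necessarily the one adopted in this section: $p(x)=\int_{\mathbb{R}}e^{-t^{4}/4+ixt}\,dt$ and $q(y)=\int_{\Sigma}e^{s^{4}/4-sy}\,ds$, the second integral converging since $\operatorname{Re}(s^{4})\to-\infty$ along the four rays of the contour. Differentiating three times under the integral sign and integrating by parts (the boundary terms vanish) gives the third-order equations
\[
p'''(x)\;=\;-\,i\int_{\mathbb{R}}t^{3}\,e^{-t^{4}/4+ixt}\,dt
\;=\;i\int_{\mathbb{R}}\Big(\frac{d}{dt}\,e^{-t^{4}/4}\Big)e^{ixt}\,dt
\;=\;x\,p(x),
\]
\[
q'''(y)\;=\;-\int_{\Sigma}s^{3}\,e^{s^{4}/4-sy}\,ds
\;=\;-\int_{\Sigma}\Big(\frac{d}{ds}\,e^{s^{4}/4}\Big)e^{-sy}\,ds
\;=\;-\,y\,q(y),
\]
which is the hallmark of Pearcey-type functions.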

The Pearcey kernel is then defined for by

(2.20)

see for instance [9]. One can alternatively represent this kernel as a double contour integral

(2.21)

from which one can easily see the symmetry by performing the changes of variables and .

The Pearcey kernel first appeared in the works of Brézin and Hikami [8, 9] when studying the eigenvalues of a specific additive perturbation of a GUE random matrix near a cusp point. Subsequent generalizations have been considered by Tracy and Widom [35], and a Riemann-Hilbert analysis has been performed by Bleher and Kuijlaars [6] as well. This kernel also arises in the description of random combinatorial models, such as certain plane partitions [28]. Furthermore, it has been established that the gap probabilities for the associated point process satisfy differential equations. For instance, satisfies PDEs with respect to the variables , and , see [35, 5, 2], which should be compared to the connection between the Tracy-Widom distribution and the Painlevé II equation.

2.5.3 The regularity condition

We start with the following definition.

Definition 2.4.

A cusp point is regular if satisfies

(2.22)

The regularity condition (2.22) has been considered in [18] when dealing with soft edges, to ensure the appearance of the Tracy-Widom distribution. Similarly here, as we shall soon see, this condition enables the Pearcey kernel to arise at a cusp point. Moreover, the behaviour of at such cusp points is well described by Proposition 1, as explained in the next remark.

Remark 2.5.

If is a regular cusp point, then it follows from the weak convergence and the definition of that necessarily . Thus, satisfies the hypothesis of Proposition 1. In particular, we have and behaves like a cube root near .

Finally, the regularity assumption yields the existence of natural scaling parameters for the eigenvalue local asymptotics. Consider the counterpart of the map introduced in (2.11) after replacing by , see (2.3), and by , namely

(2.23)

The map is the inverse Cauchy-Stieltjes transform of a probability measure usually referred to as a deterministic equivalent for the distribution of the random eigenvalues of , see [19, Section 3.2]. According to Section 2.2, has a decomposition of the form (2.10). In particular, it has a density on which is analytic wherever it is positive.
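
In the assumed normalization of the earlier sketches, replacing $\nu$ by the spectral measure (2.3) of the population covariance matrix and $\gamma$ by the finite-$N$ dimension ratio would give, with $\lambda_1,\dots,\lambda_M$ denoting the population eigenvalues (a sketch of the finite-$N$ counterpart, up to the exact conventions of (2.23)),
\[
g_N(m)\;=\;-\,\frac{1}{m}\;+\;\frac{M}{N}\int\frac{t\,\nu_N(dt)}{1+tm}
\;=\;-\,\frac{1}{m}\;+\;\frac{1}{N}\sum_{j=1}^{M}\frac{\lambda_j}{1+\lambda_j m}.
\]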

The next proposition provides an appropriate sequence of finite– approximations of , which we will use in the definition of the scaling parameters.

Proposition 4.

Let be a regular cusp point. Then there exists a sequence of real numbers, unique up to a finite number of terms, converging to and such that for every large enough, we have and

This proposition is the counterpart of [18, Proposition 2.7], with a similar proof. Let us only provide a sketch here: Combined with Montel’s theorem, the regularity condition ensures that converge uniformly to on a neighbourhood of for every . The proposition then follows by applying Hurwitz’s theorem to since is a simple root for , according to Proposition 1.

Let us emphasize that there is however an important difference with regular soft edges as described in [18], where it was shown that if is a regular soft edge for , then is a soft edge for .

Remark 2.6.

A regular cusp point may not be the limit of finite– cusp points. More precisely, if is a regular cusp point, then in particular . However, this only ensures the existence of a sequence such that . A priori, converges to zero as but might not be equal to zero. In fact, it is not hard to show that the following alternatives hold:

  • if , then is a cusp point for ;

  • if , then the density is positive in a vicinity of ;

  • if , then does not belong to the support of .

We are finally in position to state our result concerning the eigenvalue behaviour at a regular cusp point.

2.5.4 Fluctuations around a cusp point

Thanks to Assumption 1, the random eigenvalues of form a determinantal point process with respect to a kernel , see [4, 29]. An explicit formula for this kernel is provided in Section 4. The main result of this section is the local uniform convergence of this kernel, properly scaled, towards the Pearcey kernel.

Theorem 5.

Let be a regular cusp point. Let be the sequence associated to coming from Proposition 4. Assume moreover that the following decay assumption holds true: There exists such that

(2.24)

Set

(2.25)

so that and as by Proposition 4. Then, we have

(2.26)

uniformly for in compact subsets of .
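
As a rule of thumb, the spatial scale at which the Pearcey kernel is expected at a cusp can be read off the cube-root vanishing of the density: if $\rho(x)\approx C|x-a|^{1/3}$ near the cusp point $a$, then the expected number of eigenvalues at distance at most $\varepsilon$ from $a$ is of order
\[
N\int_{a-\varepsilon}^{a+\varepsilon}\rho(x)\,dx\;\asymp\;N\,C\,\varepsilon^{4/3},
\]
which is of order one precisely when $\varepsilon\asymp N^{-3/4}$, the usual Pearcey scale at a cusp.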

This result was obtained by Mo [27] in the special case where the matrix has exactly two distinct eigenvalues (each with multiplicity proportional to ), by means of a Riemann-Hilbert asymptotic analysis.

Notice that if is determinantal with kernel , then is determinantal with the kernel given by the left hand side of (2.26). Thus, having in mind Section 2.5.1, a direct consequence of this theorem is the convergence of the compact gap probabilities.

Corollary 2.7.

Under the assumptions of Theorem 5, we have for every ,

(2.27)

We now make a few comments on the assumption (2.24).

Remark 2.8.

The decay assumption (2.24) roughly states that the cusp point appears fast enough. More precisely, for a cusp point , one has in particular .

  • If , then for large . According to Remark 2.6, the density is positive near and will converge to zero, asymptotically giving birth to a cusp point. The family of densities displays a sharp non-negative minimum at , converging to zero, which may be thought of as the erosion of a valley, see the thin curve in Figure 4.

  • If , then for large and the density vanishes in a vicinity of . However, this interval will shrink and asymptotically disappear. Thus, two connected components of the support of move towards one another (moving cliffs), see the dotted curve in Figure 4.

The assumption (2.24) gives an indication of the speed at which the bottom of the valley reaches zero , or at which the two cliffs approach one another . See [19] for a more in-depth discussion.

Figure 4: Zoom of the density of near the cusp point . The thick curve is the density of in the framework of Figure 1. The thin curve (resp. the dotted curve) is the density of when (resp. ).
Remark 2.9.

When the assumption (2.24) is not satisfied, namely when goes to zero as (which is always true by Proposition 4) slowly enough that diverges to plus or minus infinity, we do not expect the Pearcey kernel to arise. See [19] for further discussion.

The proof of Theorem 5 is provided in Section 4.

2.6 Asymptotic expansion at the hard edge

Our last result concerns the behaviour of the smallest random eigenvalue of when the hard edge is present. Recall that . By Proposition 3, the limiting density displays a hard edge if and only if . In this respect, we restrict ourselves here to the case where is fixed and does not depend on .

2.6.1 The Bessel kernel

The Bessel function of the first kind with parameter is defined by

(2.28)

with the convention that, when , the first terms in the series vanish (since the Gamma function has simple poles at the non-positive integers).

The Bessel kernel is then defined for by

(2.29)
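
In the standard normalization (the conventions of (2.28) and (2.29) may differ by constants), the Bessel function of the first kind with parameter $\alpha$ and the associated Bessel kernel read
\[
J_\alpha(z)\;=\;\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!\,\Gamma(k+\alpha+1)}\Big(\frac{z}{2}\Big)^{2k+\alpha},
\qquad
K_{\mathrm{Be}}^{(\alpha)}(x,y)\;=\;\frac{J_\alpha(\sqrt{x})\,\sqrt{y}\,J_\alpha'(\sqrt{y})-\sqrt{x}\,J_\alpha'(\sqrt{x})\,J_\alpha(\sqrt{y})}{2\,(x-y)},
\qquad x,y>0,
\]
the value on the diagonal $x=y$ being defined by continuity; when $\alpha$ is a negative integer, the vanishing of $1/\Gamma$ at the non-positive integers kills the first terms of the series, which is the convention mentioned above.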

One can alternatively express it as a double contour integral,

(2.30)

where and the contours are simple and oriented counterclockwise, see for instance [18, Lemma 6.2]. We set for convenience

(2.31)

where the right hand side stands for the Fredholm determinant of the restriction to of the integral operator . According to (2.18), is the probability that the smallest particle of the determinantal point process associated with the Bessel kernel is larger than . Tracy and Widom [34] established that certain simple transformations of satisfy Painlevé equations (Painlevé III and Painlevé V are involved).

2.6.2 Correction for the smallest eigenvalue’s fluctuations

We denote by the smallest random eigenvalue of , namely

(2.32)

Our last result is stated as follows.

Theorem 6.

Assume where is fixed and does not depend on . Set

(2.33)

so that

by Assumption 2. Then, for every , we have as ,

(2.34)

The convergence towards was first observed by Forrester [16] when is the identity. As for the general case, it has been established by the authors in [18]. An explicit formula for the -correction term was conjectured when is the identity by Edelman, Guionnet and Péché [13], a conjecture proved true soon after by Perret and Schehr [30] and Bornemann [7], with different techniques. We thus extend this formula to the general correlated case. The strategy of the proof is rather similar to Bornemann’s: it relies on an identity, due to Tracy and Widom, involving the resolvent of , although we cannot rely on existing estimates for the kernel in this general setting.

Remark 2.10.

In fact, as we shall see in the proof of Theorem 6 (see Remark 5.2), our method easily yields for every an expansion of the form

(2.35)

for every as . Although we are able to provide a closed formula for the coefficient (as stated in Theorem 6) thanks to a formula due to Tracy and Widom, to the best of our knowledge the next order coefficients do not seem to benefit from such a simple representation.

We prove Theorem 6 in Section 5.

3 Proofs of the limiting density behaviours

This section is devoted to the proofs of Propositions 1, 2 and 3.

We first recall a few facts stated in Section 2.2 that we shall use in the forthcoming proofs: The map is the Cauchy-Stieltjes transform (2.7); it is analytic on and extends continuously to . Moreover, for every . The map defined in (2.11) is analytic on . In particular, has isolated zeroes on . Moreover, after noticing that , we have the identity for every .

We start with a simple but useful fact, which follows by taking the limit in the previous identity and using the continuity of and on their respective domains:

Lemma 3.1.

If is such that , then .

We will also use the following property.

Lemma 3.2.

If satisfies and , then .

Proof.

Consider the map

(3.1)

and notice it is continuous on its domain of definition by dominated convergence. It follows from the definition (2.11) of that when , and moreover that for every we have the identity

By using the fact that on , we thus obtain for every ,

(3.2)

If is such that , then by letting in (3.2) we see that necessarily because . Now, since by assumption, there exists a sequence such that as and , and hence . Since and , this yields . ∎

3.1 Density behaviour near a cusp point: Proof of Proposition 1

We now turn to the proof of the first proposition.

Proof.

Assume that and . Set and assume moreover that . Thus, the facts that and follow directly from Lemma 3.1 and Lemma 3.2.

First, we prove that . To do so, we show that on for some . Since this would indeed yield that is a local extremum for . We proceed by contradiction: Assume there exists a sequence in such that and . Since has isolated zeroes on , necessarily for every large enough. It then follows from (2.13) that and, since , this contradicts the assumption that .

Next, we similarly show that there exists such that

(3.3)

We will use (3.3) later on in this proof. Assume there exists a sequence in such that and . Since and by assumption, we have and for every large enough. Moreover, we have , but since Lemma 3.2 then yields , this contradicts the fact that has isolated zeroes on , and (3.3) follows.

Now, we show that by direct computation. Recalling the definition (2.11) of , the equation reads

As a consequence, we obtain

We finally turn to the proof of the cube root behaviour (2.14). Since and , there exists an analytic map defined on a complex neighbourhood of such that and , see e.g. [31, Theorem 10.32]. In particular, we have . Moreover, the inverse function theorem yields that has a local inverse , defined on a neighbourhood of zero, such that and . If is small enough, then by (3.3), hence , and we have by Lemma 3.1. In particular, we have

and, by taking the cube root (principal determination), applying , and performing a Taylor expansion, we obtain

(3.4)

where is an undetermined cube root of unity. Since , necessarily if and if . Finally, (2.14) follows by taking the imaginary part in (3.4).

The proof of Proposition 1 is therefore complete. ∎

3.2 Identification of a cusp point: Proof of Proposition 2

The main part of the proof consists in showing the following lemma.

Lemma 3.3.

Let be such that . Then and .

Equipped with Lemma 3.3, let us first show how the proposition follows.

Proof of Proposition 2.

Assume that satisfies and set . In particular, by (2.13), and we just have to show that and . We know from Lemma 3.3 that and . As a consequence, . Finally, since with and , [32, Theorem 5.2] shows that the condition requires , which is not possible by assumption. ∎

We now prove the lemma.

Proof of Lemma 3.3.

For every we have

and because by assumption, by taking we see that necessarily . Let us now prove that . Introduce for convenience the map

By combining the fixed point equation (2.8) for with and that

we obtain