
# Distribution of the Smallest Eigenvalue in the Correlated Wishart Model

Tim Wirtz and Thomas Guhr, Fakultät für Physik, Universität Duisburg-Essen, 47048 Duisburg, Germany
August 27, 2019
###### Abstract

Wishart random matrix theory is of major importance for the analysis of correlated time series. The distribution of the smallest eigenvalue for Wishart correlation matrices is particularly interesting in many applications. In the complex and in the real case, we calculate it exactly for arbitrary empirical eigenvalues, i.e., for fully correlated Gaussian Wishart ensembles. To this end, we derive certain dualities of matrix models in ordinary space. We thereby completely avoid the otherwise insurmountable problem of computing a highly non-trivial group integral. Our results are compact and much easier to handle than previous ones. Furthermore, we obtain a new universality for the distribution of the smallest eigenvalue on the proper local scale.

###### pacs:
05.45.Tp, 02.50.-r, 02.20.-a

In a large number of complex systems, time series are measured which yield rich information about the dynamics but also about the correlations. Examples are found in physics, climate research, biology, medicine, wireless communication, finance and many other fields chatfield ; Kanasewich ; TulinoVerdu ; Gnanadesikan ; BarnettLewis ; VinayakPandey ; AbeSuzuki ; Muelleretal ; Seba ; SanthanamPatra ; LalouxCizeauBouchaudPotters ; ple02 . Consider a set of $p$ time series $M_j$, $j=1,\dots,p$, of $n$ ($n \ge p$) time steps each, which are normalized to zero mean and unit variance. The entries $M_{jk}$, $k=1,\dots,n$, are either real or complex; these two cases are labeled by $\beta=1$ or $\beta=2$, respectively. The time series form the rows of the rectangular $p\times n$ data matrix $M$. The empirical correlation matrix of these data,

$$C = \frac{1}{n}\,M M^\dagger \ , \qquad (1)$$

is positive definite and either real symmetric ($\beta=1$) or Hermitian ($\beta=2$). Wishart random matrix theory plays a prominent role for the study of its statistical features muirhead ; VinayakPandey ; Johnstone ; Muelleretal ; AbeSuzuki ; Seba ; SanthanamPatra ; TulinoVerdu ; LalouxCizeauBouchaudPotters ; ple02 ; ForresterHughes ; ChenTseKahnValenzuela . The ensemble of Wishart correlation matrices $WW^\dagger/n$ is constructed from random $p\times n$ matrices $W$ such that it fluctuates around the empirical correlation matrix $C$. The probability distribution for this ensemble is usually chosen as the Gaussian muirhead

$$P_\beta(W|C) \sim \exp\!\left(-\frac{\beta}{2}\,\mathrm{tr}\, W W^\dagger C^{-1}\right) \ , \qquad (2)$$

where the dagger simply indicates the transpose for $\beta=1$. The corresponding volume element or measure $d[W]$ and all other measures occurring later on are flat, i.e., they are the products of the independent differentials. The Wishart correlation matrices $WW^\dagger/n$ yield upon average the empirical correlation matrix $C$. Invariant observables depend only on the always non-negative eigenvalues $\Lambda_j$, $j=1,\dots,p$, of $C$, which are referred to as the empirical ones. We order them in the diagonal matrix $\Lambda = \mathrm{diag}(\Lambda_1,\dots,\Lambda_p)$. Data analyses strongly corroborate the Gaussian Wishart model, see e.g. Refs. TulinoVerdu ; AbeSuzuki ; Muelleretal ; Seba ; SanthanamPatra ; LalouxCizeauBouchaudPotters .
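The setup so far is easy to make concrete. The sketch below (an illustration only, not part of the derivation) builds the empirical correlation matrix (1) from normalized data and draws one matrix from the real ($\beta=1$) Gaussian ensemble (2) via the standard device $W = C^{1/2}A$ with i.i.d. standard normal $A$; sample sizes and dimensions are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 4, 50

# Empirical correlation matrix, Eq. (1): C = M M^T / n, from data whose
# rows are normalized to zero mean and unit variance.
M = rng.standard_normal((p, n))
M -= M.mean(axis=1, keepdims=True)
M /= M.std(axis=1, keepdims=True)
C = M @ M.T / n

# One draw from the real (beta = 1) ensemble (2): with A an i.i.d.
# standard normal p x n matrix, W = C^{1/2} A has density proportional
# to exp(-(1/2) tr W W^T C^{-1}).
sqrtC = np.linalg.cholesky(C)      # any square root of C works here
A = rng.standard_normal((p, n))
W = sqrtC @ A

# W W^T / n fluctuates around C; its non-negative eigenvalues are the
# random counterparts of the empirical eigenvalues Lambda_j.
ev = np.linalg.eigvalsh(W @ W.T / n)
```

Averaging $WW^\dagger/n$ over many such draws recovers $C$, which is the defining property of the ensemble stated above.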

The smallest eigenvalue of the Wishart correlation matrix $WW^\dagger/n$, or equivalently, of $WW^\dagger$, is of considerable interest for statistical analysis, from a general viewpoint and in many concrete applications. In linear discriminant analysis it gives the leading contribution for the threshold estimate LarryWasserman . It is most sensitive to noise in the data Gnanadesikan . In linear principal component analysis, the smallest eigenvalue determines the plane of closest fit Gnanadesikan . It is also crucial for the identification of single statistical outliers BarnettLewis . In numerical studies involving large random matrices, the condition number is used, which depends on the smallest eigenvalue Edelman1992 ; Edelmann1988 . In wireless communication, $W$ models the Multi-Input-Multi-Output (MIMO) channel matrix of an antenna system FoschiniGans . The smallest eigenvalue of $WW^\dagger$ yields an estimate for the error of a received signal Burel02statisticalanalysis ; ChenTseKahnValenzuela ; UpamanyuMadhow . In finance, the optimal portfolio is associated with the eigenvector to the smallest eigenvalue of the covariance matrix, which is directly related to the correlation matrix Markowitz . This incomplete list shows the considerable theoretical and practical relevance muirhead ; Johnstone of studying the distribution of the smallest eigenvalue. For given empirical eigenvalues $\Lambda$, one has muirhead ; MehtasBook

$$P^{(\beta)}_{\min}(t) = -\frac{d}{dt}\,E^{(\beta)}_p(t) \ , \qquad (3)$$

where $E^{(\beta)}_p(t)$ is the gap probability that all eigenvalues of $WW^\dagger$ lie in $(t,\infty)$.
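Relation (3) is simply the statement that the gap probability is the survival function of the smallest eigenvalue, and the distribution is its negative derivative. A quick Monte Carlo illustration for the uncorrelated real case ($C = 1_p$; dimensions and sample sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, samples = 3, 8, 20_000

# Smallest eigenvalue of W W^T for an uncorrelated (C = 1_p) real ensemble.
A = rng.standard_normal((samples, p, n))
lam_min = np.linalg.eigvalsh(A @ A.transpose(0, 2, 1))[:, 0]

# The gap probability E_p(t) = P(all eigenvalues > t) is the survival
# function of lam_min.
t = np.linspace(0.0, 3.0, 31)
E = (lam_min[None, :] > t[:, None]).mean(axis=1)

# By Eq. (3), -dE/dt (here: finite differences) estimates P_min(t).
P_min = -np.gradient(E, t)
```

The finite-difference curve `P_min` agrees with a histogram density of `lam_min`, which is the comparison made against the exact formulas later on.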

We have three goals: First, we calculate the above quantities exactly. In the real case, we provide, for the first time, explicit and easy–to–use formulas for applications. Second, we uncover mutual dualities between matrix models which make the calculation possible. Third, we find a new universality on a local scale referred to as microscopic in Chiral Random Matrix Theory Shuryak1993306 ; VerbaarschotWettig .

To begin with, we diagonalize $WW^\dagger = VXV^\dagger$ with $V \in \mathrm{O}(p)$ if $\beta=1$ or $V \in \mathrm{U}(p)$ if $\beta=2$. The eigenvalues are non-negative and ordered in the diagonal matrix $X = \mathrm{diag}(x_1,\dots,x_p)$. The volume element transforms as

$$d[W] = |\Delta_p(X)|^\beta\,\det{}^{\gamma}X\;d[X]\,d\mu(V) \ , \qquad (4)$$

where $d\mu(V)$ is the Haar measure and $\Delta_p(X) = \prod_{i<j}(x_i - x_j)$ is the Vandermonde determinant MehtasBook . We introduce

$$\gamma = \frac{\beta}{2}(n-p+1)-1 = \begin{cases} (n-p-1)/2\,, & \beta=1\,,\\ n-p\,, & \beta=2\,, \end{cases} \qquad (5)$$
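A tiny helper (our own, purely illustrative) makes the two branches of Eq. (5) explicit; exact rational arithmetic keeps visible the half-integer values that occur for $\beta=1$ when $n-p$ is even:

```python
from fractions import Fraction

def gamma_exponent(beta, p, n):
    # gamma = (beta/2)(n - p + 1) - 1, Eq. (5)
    return Fraction(beta * (n - p + 1), 2) - 1

# The two branches of Eq. (5):
assert gamma_exponent(1, 10, 25) == Fraction(25 - 10 - 1, 2)  # (n-p-1)/2
assert gamma_exponent(2, 10, 25) == 25 - 10                   # n-p

# For beta = 1 and even n - p, gamma is half-integer, a case that
# becomes relevant further below:
assert gamma_exponent(1, 10, 24) == Fraction(13, 2)
```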

which involves the "rectangularity" $n-p$ of the matrix $W$. Thus, the joint distribution of the eigenvalues reads

$$P_\beta(X|\Lambda) = K_{p\times n}\,|\Delta_p(X)|^\beta\,\det{}^{\gamma}X\;\Phi_\beta(X,\Lambda^{-1}) \ , \qquad (6)$$

with the normalization constant $K_{p\times n}$. The highly non-trivial part is the group integral

$$\Phi_\beta(X,\Lambda^{-1}) = \int d\mu(V)\,\exp\!\left(-\frac{\beta}{2}\,\mathrm{tr}\,V X V^\dagger \Lambda^{-1}\right) \ . \qquad (7)$$

The gap probability can then be cast into the form MehtasBook

$$E^{(\beta)}_p(t) = K_{p\times n}\,\exp\!\left(-\frac{\beta t}{2}\,\mathrm{tr}\,\Lambda^{-1}\right)\int d[X]\,|\Delta_p(X)|^\beta\,\det{}^{\gamma}(X+t\,1_p)\,\Phi_\beta(X,\Lambda^{-1}) \ , \qquad (8)$$

where $1_p$ is the $p$-dimensional unit matrix. Importantly, the derivation of Eq. (8) involves the shift $X \to X + t\,1_p$. The group integral in (7) is known exactly for $\beta=2$: it is the Harish-Chandra-Itzykson-Zuber integral ItzyksonZuber ; HarishChandra . For $\beta=1$, it is the orthogonal Gelfand spherical function GelfandSpherical or the orthogonal Itzykson-Zuber integral. Unfortunately, explicit results are not available, although the real case is the much more relevant one in applications. To make progress, zonal or Jack polynomials muirhead were developed, which are only given by complicated recursions. The resulting formulas for observables are therefore cumbersome. In Refs. RecheretalPRL ; Recheretal , when calculating the spectral density, we circumvented this severe problem by employing the Supersymmetry method efetov ; Verbaarschotetsal . In the present context, a supermatrix model is quickly constructed by multiplying the right-hand side of Eq. (8) with $\det^\gamma X/\det^\gamma X$, which completes the Jacobian according to Eq. (4). This allows one to reintroduce the matrices $W$. The remaining determinants $\det^\gamma(X+t\,1_p)$ and $\det^\gamma X$ in the numerator and the denominator, respectively, then inevitably lead to a supermatrix model.

Here, we put forward a different approach which will eventually lead us to a much more convenient matrix model in ordinary space. Anticommuting variables will only be used in intermediate steps. Our key idea is to identify rectangular matrices $\overline{W}$ of dimension $p\times\bar n$, with $\bar n$ yet to be determined, such that the gap probability acquires the form of a matrix model in $\overline{W}$ without a determinant in the denominator. As one sees from Eq. (4), this is achieved if the corresponding determinant factor in the Jacobian becomes unity, i.e., if the condition

$$\bar n = p + \frac{2}{\beta} - 1 = p + 2 - \beta \ , \qquad \text{for } \beta=1,2 \ , \qquad (9)$$

is fulfilled. We thus arrive at the matrix model

$$E^{(\beta)}_p(t) = K_{p\times\bar n}\,\exp\!\left(-\frac{\beta t}{2}\,\mathrm{tr}\,\Lambda^{-1}\right)\int d[\overline W]\;\det{}^{\gamma}\!\left(\overline W\,\overline W^\dagger + t\,1_p\right)\exp\!\left(-\frac{\beta}{2}\,\mathrm{tr}\,\overline W\,\overline W^\dagger\Lambda^{-1}\right) \ , \qquad (10)$$

which is dual to the model (8). Since the only determinant is in the numerator, we just need anticommuting variables to lift $\det^\gamma(\overline W\,\overline W^\dagger + t\,1_p)$ into the exponent and can carry out the ensemble average. Along lines similar to, e.g., Refs. GuhrWettigWilke ; RecheretalPRL ; Recheretal , we then find

$$E^{(\beta)}_p(t) = K_{p\times\bar n}\,\exp\!\left(-\frac{\beta t}{2}\,\mathrm{tr}\,\Lambda^{-1}\right)\int d[\sigma]\,\exp(-\mathrm{tr}\,\sigma)\; f_{\beta,\bar n}(\sigma)\prod_{k=1}^{p}\det{}^{\beta/2}\!\left(\frac{\beta t}{2}\,1_{2\gamma/\beta}-\Lambda_k\,\sigma\right) \ , \qquad (11)$$

where we restrict ourselves to integer $\gamma$ for $\beta=1$. Although we used anticommuting variables, the matrix $\sigma$ is ordinary GuhrWettigWilke : it is either a Hermitian $\gamma\times\gamma$ matrix ($\beta=2$) or a self-dual Hermitian $2\gamma\times 2\gamma$ matrix ($\beta=1$). The function

$$f_{\beta,\bar n}(\sigma) = \int d[\varrho]\,\det{}^{\beta\bar n/2}\varrho\;\exp\!\left(-\imath\,\mathrm{tr}\,\varrho\,\sigma\right) \qquad (12)$$

is related to the Ingham-Siegel integral, see Ref. Fyodorov . It is obviously invariant, $f_{\beta,\bar n}(\sigma) = f_{\beta,\bar n}(s)$, where $s$ is the diagonal matrix of the eigenvalues $s_i$ of $\sigma$, for $\beta=2$ and for $\beta=1$. Using Refs. Guhr2006I ; KieburgGroenqvistGuhr , we conclude that

$$f_{\beta,\bar n}(s) \sim \prod_{i=1}^{\gamma}\frac{\partial^{\,\bar n + 2(\gamma-1)/\beta}}{\partial s_i^{\,\bar n + 2(\gamma-1)/\beta}}\,\delta(s_i) \ . \qquad (13)$$

Remarkably, the ordinary matrix model (11) is rotation invariant, once $f_{\beta,\bar n}(\sigma)$ is evaluated. There is no symmetry breaking which would lead to an Itzykson-Zuber-type of integral as is present in the above mentioned supermatrix model. Since the latter involves for $\beta=1$ an explicitly unknown supergroup integral, the model (11) is much better tractable. By constructing the model (11) in ordinary space, we fully outmaneuvered the substantial difficulties related to the orthogonal Itzykson-Zuber integral and to the zonal or Jack polynomials. Due to the lack of Efetov-Wegner terms in ordinary space, even the complex case is considerably easier to treat. We mention in passing that for $\beta=1$ a half-integer value of $\gamma$ enforces a supermatrix model, but this will be discussed elsewhere WirtzGuhrII .
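The duality between (8) and (10) can be checked numerically. Writing (10) as a ratio of Gaussian averages fixes the normalization through $E^{(\beta)}_p(0)=1$; the sketch below does this for $\beta=2$, $p=2$, $n=4$ (so $\bar n=p$ and $\gamma=n-p$), comparing against a direct Monte Carlo estimate of the gap probability. Dimensions, sample size and the test point $t$ are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 2, 4
nbar = p           # Eq. (9) for beta = 2
gam = n - p        # Eq. (5) for beta = 2
Lam = np.array([1.0, 2.0])
sqrtL = np.diag(np.sqrt(Lam))
samples = 200_000

def cgauss(shape):
    # complex entries with density ~ exp(-|a|^2), i.e. Eq. (2) with C = 1
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Direct estimate: E_p(t) = P(all eigenvalues of W W^dagger exceed t),
# with W = Lambda^{1/2} A drawn from the p x n model (8).
W = sqrtL @ cgauss((samples, p, n))
lam_min = np.linalg.eigvalsh(W @ W.conj().transpose(0, 2, 1))[:, 0]
t = 0.5
E_direct = (lam_min > t).mean()

# Dual p x nbar model (10), written as a ratio of Gaussian averages so
# that E_p(0) = 1 is enforced automatically.
Wb = sqrtL @ cgauss((samples, p, nbar))
G = Wb @ Wb.conj().transpose(0, 2, 1)
num = np.linalg.det(G + t * np.eye(p)).real ** gam
den = np.linalg.det(G).real ** gam
E_dual = np.exp(-t * np.sum(1.0 / Lam)) * num.mean() / den.mean()
```

Within Monte Carlo accuracy, `E_direct` and `E_dual` agree, even though the first involves $p\times n$ matrices and the second only $p\times p$ ones.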

Hence, applying standard techniques, we arrive at

$$E^{(\beta)}_p(t) = \frac{\exp\!\left(-\frac{\beta t}{2}\,\mathrm{tr}\,\Lambda^{-1}\right)}{\det^{\gamma}\Lambda}\;\det{}^{\beta/2}\!\left[Q^{(\beta,p)}_{ij}(t)\right] \ , \qquad (14)$$

where the elements of this and all other determinants run over $i,j = 1,\dots,2\gamma/\beta$. The kernel in Eq. (14) is a finite polynomial in $t$,

$$Q^{(\beta,p)}_{ij}(t) = q_{ij}\,\Theta(\alpha_{p,\beta})\sum_{k=0}^{\min(p,\,\alpha_{p,\beta})}\frac{e_k(\Lambda)\;t^{\,p-k}}{(\alpha_{p,\beta}-k)!} \ . \qquad (15)$$

Here we defined the coefficients $q_{ij}$ and the indices $\alpha_{p,\beta}$ for $\beta=1$ and $\beta=2$, respectively. We also introduced the Heaviside step function $\Theta$ and the elementary symmetric polynomials

$$e_k(\Lambda) = \sum_{1\le i_1<\cdots<i_k\le p} \Lambda_{i_1}\cdots\Lambda_{i_k} \ , \qquad (16)$$

with $e_0(\Lambda)=1$. Applying Eq. (3), we obtain the distribution of the smallest eigenvalue in the explicit form

$$P^{(\beta)}_{\min}(t) = \frac{\beta}{2}\,\mathrm{tr}\,\Lambda^{-1}\,E^{(\beta)}_p(t) - \frac{\beta}{2}\,\frac{\exp\!\left(-\frac{\beta t}{2}\,\mathrm{tr}\,\Lambda^{-1}\right)}{\det^{\gamma}\Lambda}\sum_{l=1}^{2\gamma/\beta}\det\!\left[G^{(l)}_{ij}(t)\right]\det{}^{1-\beta/2}\!\left[Q^{(\beta,p)}_{ij}(t)\right] \ , \qquad (17)$$

where another polynomial kernel occurs,

$$G^{(l)}_{ij}(t) = \begin{cases} Q^{(\beta,p)}_{ij}(t)\,, & l \ne i\,,\\[2pt] \dfrac{d}{dt}\,Q^{(\beta,p)}_{ij}(t)\,, & l = i\,. \end{cases} \qquad (18)$$

These results are exact and valid for all integer values of $\gamma$. As already mentioned, half-integer values are possible for $\beta=1$.
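The only $\Lambda$ dependent input of the kernel (15) are the elementary symmetric polynomials (16). They need not be computed from the defining sum: they are the coefficients of $\prod_i(x+\Lambda_i)$, which makes them cheap to obtain. A sketch (the helper name is ours):

```python
import numpy as np

def elementary_symmetric(Lam):
    # e_k(Lambda), k = 0..p, read off from the expansion
    # prod_i (x + Lambda_i) = sum_k e_k(Lambda) x^{p-k}.
    return np.poly(-np.asarray(Lam, dtype=float))

Lam = [0.6, 1.2, 6.7, 9.3]
e = elementary_symmetric(Lam)   # e[0] = 1, e[1] = sum, ..., e[-1] = product
```

This is the standard $O(p^2)$ recursion in disguise, so even large matrix dimensions $p$ pose no problem.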

Our formulas are much more compact and also easier to handle than the previously known expressions. The duality which we uncovered leads to much clearer structures. In Ref. koev , the gap probability and the distribution of the smallest eigenvalue for $\beta=1$ are expressed, apart from an exponential, in terms of a finite series in zonal polynomials. Unfortunately, the latter are only given recursively and are thus cumbersome in applications. Even for $\beta=2$, where the Itzykson-Zuber integral is explicitly known, our formulas have a more direct structure and are more convenient in applications than the ones in Refs. ForresterminEig ; ZhangNuiYangZahngYang . We illustrate our findings in Fig. 1 for $p=10$ real time series ($\beta=1$). As empirical eigenvalues $\Lambda_j$, we chose 0.6, 1.2, 6.7, 9.3, 10.5, 15.5, 17.2, 20.25, 30.1, 35.4. To demonstrate the validity of our results, we compare them with numerical simulations. Using the program R RMan , we generate and diagonalize 50,000 correlated random Wishart matrices drawn from the distribution (2). As expected, the agreement is perfect. In contrast to the formulae in Ref. koev , numerical evaluation of Eq. (17) is easily possible even for large matrix dimension $p$.
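The Monte Carlo side of this comparison is easy to reproduce. The sketch below uses numpy instead of R, a reduced sample size, and an assumed time-series length $n$ (its value for Fig. 1 is not restated here); the histogram is what gets laid over the analytical curve from Eq. (17).

```python
import numpy as np

rng = np.random.default_rng(3)
Lam = np.array([0.6, 1.2, 6.7, 9.3, 10.5, 15.5, 17.2, 20.25, 30.1, 35.4])
p = Lam.size
n = 25             # assumed length; the value used for Fig. 1 is not restated here
samples = 20_000   # reduced from the 50,000 of the text, for speed

# Real (beta = 1) correlated Wishart matrices drawn from (2):
# W = Lambda^{1/2} A with i.i.d. standard normal A.
A = rng.standard_normal((samples, p, n))
W = np.sqrt(Lam)[None, :, None] * A
lam_min = np.linalg.eigvalsh(W @ W.transpose(0, 2, 1))[:, 0]

# Histogram estimate of the smallest-eigenvalue density.
density, edges = np.histogram(lam_min, bins=60, density=True)
```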

We now show the existence of a new kind of universality in the correlated Wishart model. In addition to the theoretical interest in this issue, the universal results to be presented now might also be of practical importance when analyzing correlation matrices built from many long time series. We want to zoom into a local scale set by the mean level spacing of the spectral density near the origin. To this end, we consider the limit $p,n\to\infty$ in such a way that the "rectangularity" $n-p$ is held fixed. A similar limit, referred to as microscopic, was introduced in Chiral Random Matrix Theory Shuryak1993306 ; VerbaarschotWettig , which models statistical aspects of Quantum Chromodynamics. However, in contrast to our case, the probability density there is fully rotation invariant, corresponding to $\Lambda = 1_p$. Here we consider an arbitrary $\Lambda$, and thus have to take into account how the local mean level spacing depends on $\Lambda$. From Refs. Forrester1993709 ; ZhangNuiYangZahngYang it follows that we have to rescale $t$ with $1/p$ when taking the limit $p\to\infty$. Hence we introduce a new variable $u$ by making the ansatz

$$t = \frac{u}{4p\eta} \ , \qquad (19)$$

with a $\Lambda$ dependent constant $\eta$ yet to be determined. We restrict our analysis to the commonly encountered situation in which almost all empirical eigenvalues do not depend on $p$, and only few are proportional to $p$. In the microscopic limit, the gap probability and the distribution of the smallest eigenvalue are defined as

$$E^{(\beta)}(u) = \lim_{p\to\infty} E^{(\beta)}_p\!\left(\frac{u}{4p\eta}\right) \ , \qquad (20)$$
$$\wp^{(\beta)}_{\min}(u) = \lim_{p\to\infty}\frac{1}{4p\eta}\,P^{(\beta)}_{\min}\!\left(\frac{u}{4p\eta}\right) \ . \qquad (21)$$

The limit is non-trivial, because the function $f_{\beta,\bar n}$ and the normalization constant $K_{p\times\bar n}$ depend on $p$. The $\Lambda$ dependence of the latter is determined by evaluating Eq. (11) at $t=0$, which shows that the $\Lambda$ dependent part of $K_{p\times\bar n}$ is $(-1)^{p\gamma}\det^{-\gamma}\Lambda$, with the remaining factor independent of $\Lambda$. Next we investigate how the $p$ fold product of determinants in Eq. (11) behaves on the local scale,

$$\prod_{k=1}^{p}\det{}^{\beta/2}\!\left(\frac{\beta u}{8p\eta}\,1_{2\gamma/\beta}-\Lambda_k\,\sigma\right) = \exp\!\left(\frac{\beta}{2}\sum_{k=1}^{p}\mathrm{tr}\,\ln\!\left(\frac{\beta u}{8p\eta}\,1_{2\gamma/\beta}-\Lambda_k\,\sigma\right)\right) \ . \qquad (22)$$

Expanding the logarithm to leading order in $1/p$ shows that we have to choose

$$\eta = \frac{1}{p}\sum_{k=1}^{p}\frac{1}{\Lambda_k} = \frac{1}{p}\,\mathrm{tr}\,\Lambda^{-1} \ , \qquad (23)$$

to fix the microscopic scale, provided $\eta$ converges to a non-zero constant for $p\to\infty$. The large-$p$ limit of the above expression is then

$$(-1)^{p\gamma}\,\det{}^{\gamma}\Lambda\;\det{}^{p\beta/2}\sigma\;\exp\!\left(-\frac{\beta^2 u}{16}\,\mathrm{tr}\,\sigma^{-1}\right) \ . \qquad (24)$$

The factor $(-1)^{p\gamma}\det^\gamma\Lambda$ cancels with the $\Lambda$ dependent part of $K_{p\times\bar n}$. Combining with Eq. (13) and performing a series of integrations by parts leads to

$$E^{(\beta)}(u) = K_\gamma\,\exp\!\left(-\frac{\beta u}{8}\right)\int d[\sigma]\,\exp(-\mathrm{tr}\,\sigma)\, f_{\beta,2-\beta}(\sigma)\,\exp\!\left(-\frac{\beta^2 u}{16}\,\mathrm{tr}\,\sigma^{-1}\right) \ , \qquad (25)$$

which can be evaluated either by using Bessel functions or by proper contour integration. Remarkably, the very same matrix model results on the microscopic scale in the case that all empirical eigenvalues are equal to $1/\eta$, as seen from Eq. (11),

$$E^{(\beta)}_p\!\left(\frac{u}{4p\eta}\right)\bigg|_{\Lambda = 1_p/\eta} = K_\gamma\,\exp\!\left(-\frac{\beta u}{8}\right)\int d[\sigma]\,\exp(-\mathrm{tr}\,\sigma)\, f_{\beta,\bar n}(\sigma)\,\det{}^{p\beta/2}\!\left(\frac{\beta u}{8p}\,1_{2\gamma/\beta}-\sigma\right) \ , \qquad (26)$$

where $\eta$ dropped out on the right-hand side. Thus, we have effectively traced back the problem to the uncorrelated case considered in Refs. Edelman1991 ; GuhrWettigWilke ; Forrester1993709 ; DamgaardNishigaki ; KatzavPerez . Using the scaling Eq. (19), with $\eta$ given by Eq. (23), the results coincide with the formulae of Ref. DamgaardNishigaki ,

$$E^{(\beta)}(u) = \exp\!\left(-\frac{\beta u}{8}\right)\det{}^{\beta/2}\!\left[\tilde q_{ij}\, L^{(0)}_{ij}(u)\right] \ , \qquad (27)$$

where the kernel $L^{(0)}_{ij}(u)$ is built from the modified Bessel functions $I_\nu$ of order $\nu$. We also have defined the coefficients $\tilde q_{ij}$ for $\beta=1$ and for $\beta=2$, respectively. The distribution of the smallest eigenvalue on the microscopic scale is then given by

$$\wp^{(\beta)}_{\min}(u) = \frac{\beta}{8}\,E^{(\beta)}(u) - \frac{\beta}{8\sqrt{u}}\,\exp\!\left(-\frac{\beta u}{8}\right)\sum_{l=1}^{2\gamma/\beta}\det\!\left[\tilde q_{ij}\,L^{(l)}_{ij}(u)\right]\det{}^{1-\beta/2}\!\left[\tilde q_{ij}\,L^{(0)}_{ij}(u)\right] \ . \qquad (28)$$

To illustrate this new universality, we carry out numerical simulations for $\beta=2$, shown in Fig. 1. We generate 30,000 Hermitian Wishart correlation matrices, with $p$ and $n$ as well as a non-trivial empirical correlation matrix as indicated in Fig. 1.
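The universality can also be probed directly: after the rescaling $u = 4p\eta t$ of Eq. (19), with $\eta$ from Eq. (23), the distribution of $u$ should not depend on $\Lambda$ for fixed rectangularity. A sketch for the square complex case $n=p$ (so $\gamma=0$), where the limiting gap probability $\exp(-\beta u/8) = \exp(-u/4)$ means $u$ is exponential with mean 4; dimensions, spectra and sample sizes are our choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def rescaled_lam_min(Lam, samples, rng):
    # Smallest eigenvalue of W W^dagger for the square complex case
    # (beta = 2, n = p), mapped to u = 4 p eta t with eta from Eq. (23).
    p = Lam.size
    eta = np.mean(1.0 / Lam)
    A = (rng.standard_normal((samples, p, p)) +
         1j * rng.standard_normal((samples, p, p))) / np.sqrt(2)
    W = np.sqrt(Lam)[None, :, None] * A
    t = np.linalg.eigvalsh(W @ W.conj().transpose(0, 2, 1))[:, 0]
    return 4 * p * eta * t

p, samples = 40, 4000
u_flat = rescaled_lam_min(np.ones(p), samples, rng)            # Lambda = 1_p
u_corr = rescaled_lam_min(np.linspace(0.5, 3.0, p), samples, rng)

# Both samples of u follow (approximately) the same exponential law
# with mean 4, independently of the chosen empirical spectrum.
```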

In conclusion, we have calculated the gap probability and the distribution of the smallest eigenvalue for the complex and the real correlated Gaussian Wishart ensemble. By numerical evaluation of our results we demonstrated that they are easy to use in applications. We also found a new universality on the microscopic scale. On the conceptual level, our most important result is the discovery of the duality between the $p\times n$ and the $p\times\bar n$ matrix models. Actually, there are infinitely many dualities, as there is full freedom in choosing that dimension of the matrices which corresponds to the number of time steps. In turn, each of these models has a dual model in superspace with, in general, different bosonic and fermionic dimensions. Of course, here we chose the simplest duality, which led to a model in a superspace that collapses to an ordinary space, because the bosonic dimension is zero. A presentation with further results and more mathematical details will be given elsewhere WirtzGuhrII .

###### Acknowledgements.
We thank Rudi Schäfer for fruitful discussions on applications of our results. We are grateful to Mario Kieburg and Santosh Kumar for many useful comments. We acknowledge support from the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich Transregio 12.

## References

• (1) C. Chatfield, The Analysis of Time Series: An Introduction, 6th ed. (Chapman and Hall/CRC, 2003), ISBN 1-58488-317-0
• (2) E. R. Kanasewich, Time Sequence Analysis in Geophysics, 3rd ed. (The University of Alberta Press, Edmonton, Alberta, Canada, 1974), ISBN 0888640749
• (3) A. M. Tulino and S. Verdu, Random Matrix Theory and Wireless Communications, Foundations and Trends in Communications and Information Theory (Now Publishers, 2004)
• (4) R. Gnanadesikan, Methods for Statistical Data Analysis of Multivariate Observations, 2nd ed. (John Wiley & Sons, 1997)
• (5) V. Barnett and T. Lewis, Outliers in Statistical Data, 1st ed. (John Wiley & Sons, 1980)
• (6) Vinayak and A. Pandey, Phys. Rev. E 81, 036202 (2010)
• (7) S. Abe and N. Suzuki, “Universal and nonuniversal distant regional correlations in seismicity: Random-matrix approach,” ePrint (2009), arXiv:physics.geo-ph/0909.3830
• (8) M. Müller, G. Baier, A. Galka, U. Stephani, and H. Muhle, Phys. Rev. E 71, 046116 (2005)
• (9) P. Šeba, Phys. Rev. Lett. 91, 198104 (2003)
• (10) M. S. Santhanam and P. K. Patra, Phys. Rev. E 64, 016102 (2001)
• (11) L. Laloux, P. Cizeau, J.-P. Bouchaud, and M. Potters, Phys. Rev. Lett. 83, 1467 (1999)
• (12) V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, T. Guhr, and H. E. Stanley, Phys. Rev. E 65, 066126 (2002)
• (13) R. J. Muirhead, Aspects of Multivariate Statistical Theory (Wiley-Interscience, 2005)
• (14) I. M. Johnstone, eprint arXiv:math/0611589 (2006)
• (15) P. J. Forrester and T. D. Hughes, Journal of Mathematical Physics 35, 6736 (1994)
• (16) C.-N. Chuah, D. Tse, J. Kahn, and R. Valenzuela, IEEE Trans. Inf. Theory 48, 637 (2002)
• (17) L. Wasserman, All of Statistics: A Concise Course in Statistical Inference (Springer, 2003)
• (18) A. Edelman, Math. Comp. 58, 185 (1992)
• (19) A. Edelman, SIAM Journal on Matrix Analysis and Applications 9, 543 (1988)
• (20) G. Foschini and M. Gans, Wireless Personal Communications 6, 311 (1998)
• (21) G. Burel, in Proc. WSEAS Int. Conf. on Signal, Speech and Image Processing (ICOSSIP) (2002)
• (22) E. Visotsky and U. Madhow, in Space-time precoding with imperfect feedback (IEEE eXpress Conference Publishing, Sorrento, Italy, 2000) pp. 312–
• (23) H. Markowitz, Portfolio Selection: Efficient Diversification of Investments (J. Wiley and Sons, 1959)
• (24) M. L. Mehta, Random Matrices, 3rd ed. (Elsevier Academic Press, 2004)
• (25) E. Shuryak and J. Verbaarschot, Nucl. Phys. A 560, 306 (1993)
• (26) J. Verbaarschot and T. Wettig, Annual Review of Nuclear and Particle Science 50, 343 (2000)
• (27) C. Itzykson and J.-B. Zuber, J. Math. Phys. 21, 411 (1980)
• (28) Harish-Chandra, Proc. Natl. Acad. Sci. 42, 252 (1956)
• (29) I. Gelfand, Dokl. Akad. Nauk SSSR 70, 5 (1950)
• (30) C. Recher, M. Kieburg, and T. Guhr, Phys. Rev. Lett. 105, 244101 (2010)
• (31) C. Recher, M. Kieburg, T. Guhr, and M. R. Zirnbauer, J. Stat. Phys. 148, 981 (2012)
• (32) K. Efetov, Adv. Phys. 32, 53 (1983)
• (33) J. Verbaarschot, M. Zirnbauer, and H. Weidenmüller, Phys. Rep. 129, 367 (1985)
• (34) T. Wilke, T. Guhr, and T. Wettig, Phys.Rev. D 57, 6486 (1998)
• (35) Y. V. Fyodorov, Nucl.Phys. B 621, 643 (2002)
• (36) T. Guhr, J.Phys. A 39, 13191 (2006)
• (37) M. Kieburg, J. Grönqvist, and T. Guhr, J. Phys. A 42, 275205 (2009)
• (38) T. Wirtz and T. Guhr, “Distribution of the smallest eigenvalue in complex and real correlated wishart ensembles,” arXiv:math-ph/1310.2467 (2013)
• (39) P. Koev, “Computing multivariate statistics,” unpublished notes (2012), http://math.mit.edu/~plamen/files/mvs.pdf
• (40) P. J. Forrester, Journal of Physics A: Mathematical and Theoretical 40, 11093 (2007)
• (41) H. Zhang, F. Niu, H. Yang, X. Zhang, and D. Yang, in Vehicular Technology Conference, 2008. VTC 2008-Fall. IEEE 68th (IEEE eXpress Conference Publishing, Calgary, BC, 2008) pp. 1–4
• (42) R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2012), ISBN 3-900051-07-0, http://www.R-project.org/
• (43) P. Forrester, Nuclear Physics B 402, 709 (1993)
• (44) A. Edelman, Linear Algebra and its Applications 159, 55 (1991)
• (45) P. H. Damgaard and S. M. Nishigaki, Phys. Rev. D 63, 045012 (2001)
• (46) E. Katzav and I. Pérez Castillo, Phys. Rev. E 82, 040104 (2010)