Distribution of the Smallest Eigenvalue in the Correlated Wishart Model

Abstract

Wishart random matrix theory is of major importance for the analysis of correlated time series. The distribution of the smallest eigenvalue for Wishart correlation matrices is particularly interesting in many applications. In the complex and in the real case, we calculate it exactly for arbitrary empirical eigenvalues, i.e., for fully correlated Gaussian Wishart ensembles. To this end, we derive certain dualities of matrix models in ordinary space. We thereby completely avoid the otherwise insurmountable problem of computing a highly non–trivial group integral. Our results are compact and much easier to handle than previous ones. Furthermore, we obtain a new universality for the distribution of the smallest eigenvalue on the proper local scale.

pacs:
05.45.Tp, 02.50.-r, 02.20.-a

In a large number of complex systems, time series are measured which yield rich information about the dynamics but also about the correlations. Examples are found in physics, climate research, biology, medicine, wireless communication, finance and many other fields (1); (2); (3); (4); (5); (6); (7); (8); (9); (10); (11); (12). Consider a set of p time series of N (N ≥ p) time steps each, which are normalized to zero mean and unit variance. The entries are either real or complex; these two cases are labeled by β = 1 or β = 2, respectively. The time series form the rows of the p × N rectangular data matrix A. The empirical correlation matrix of these data,

C = (1/N) A A† ,        (1)

is positive definite and either real symmetric (β = 1) or Hermitian (β = 2). Wishart random matrix theory plays a prominent role for the study of statistical features (13); (6); (14); (8); (7); (9); (10); (3); (11); (12); (15); (16). The ensemble of Wishart correlation matrices W W†/N is constructed from p × N random matrices W such that it fluctuates around the empirical correlation matrix C. The probability distribution for this ensemble is usually chosen as the Gaussian (13)

P(W|C) ∝ det^(−βN/2)(C) exp( −(β/2) tr C⁻¹ W W† ) ,        (2)

where the dagger simply indicates the transpose for β = 1. The corresponding volume element or measure d[W] and all other measures occurring later on are flat, i.e., they are the products of the independent differentials. The Wishart correlation matrices W W†/N yield upon average the empirical correlation matrix C. Invariant observables depend only on the always non–negative eigenvalues Λ_k, k = 1, …, p, of C, which are referred to as the empirical ones. We order them in the diagonal matrix Λ. Data analyses strongly corroborate the Gaussian Wishart model, see e.g. Refs. (3); (7); (8); (9); (10); (11).
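The construction above can be simulated directly. The following sketch (a minimal illustration in Python; the function name and the parameter values are ours, not from the paper) draws real (β = 1) Wishart correlation matrices W W†/N that fluctuate around a given empirical correlation matrix C, imprinting the correlations through a matrix square root of C, and checks that the ensemble average reproduces C.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_wishart_correlation(C, N, rng):
    """Draw one Wishart correlation matrix W W^T / N whose ensemble
    average is the empirical correlation matrix C (real case, beta = 1)."""
    p = C.shape[0]
    # matrix square root of C via its eigendecomposition
    vals, vecs = np.linalg.eigh(C)
    C_sqrt = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    G = rng.standard_normal((p, N))   # uncorrelated Gaussian time series
    W = C_sqrt @ G                    # imprint the empirical correlations
    return W @ W.T / N

# small example: p = 3 empirical eigenvalues, N = 200 time steps
C = np.diag([0.5, 1.0, 2.5])
avg = sum(sample_wishart_correlation(C, 200, rng) for _ in range(2000)) / 2000
print(np.round(avg, 1))   # fluctuates around diag(0.5, 1.0, 2.5)
```

The individual matrices scatter around C; only the ensemble average converges to it, which is exactly the sense in which the model (2) is built around the empirical correlations.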

The smallest eigenvalue of the Wishart correlation matrix W W†/N, or equivalently, of W W†, is of considerable interest for statistical analysis, from a general viewpoint and in many concrete applications. In linear discriminant analysis it gives the leading contribution for the threshold estimate (17). It is most sensitive to noise in the data (4). In linear principal component analysis, the smallest eigenvalue determines the plane of closest fit (4). It is also crucial for the identification of single statistical outliers (5). In numerical studies involving large random matrices, the condition number is used, which depends on the smallest eigenvalue (18); (19). In wireless communication, the data matrix models the Multi–Input–Multi–Output (MIMO) channel matrix of an antenna system (20); the smallest eigenvalue then yields an estimate for the error of a received signal (21); (16); (22). In finance, the optimal portfolio is associated with the eigenvector of the smallest eigenvalue of the covariance matrix, which is directly related to the correlation matrix (23). This incomplete list shows the considerable theoretical and practical relevance (13); (14) of studying the distribution of the smallest eigenvalue. For given empirical eigenvalues Λ, one has (13); (24)

P(t) = − dE(t)/dt ,        (3)

where E(t) is the gap probability that all eigenvalues of W W†/N lie in (t, ∞).
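Relation (3) is easy to check empirically. The sketch below (a Monte Carlo toy of our own, using the uncorrelated case C = 𝟙 and parameter values chosen for speed) estimates the gap probability E(t) as the fraction of sampled matrices whose eigenvalues all exceed t, and recovers the smallest-eigenvalue density as the negative derivative −dE/dt:

```python
import numpy as np

rng = np.random.default_rng(1)
p, N, n_samp = 4, 50, 20000

# smallest eigenvalues of uncorrelated real Wishart matrices W W^T / N
lmins = np.empty(n_samp)
for i in range(n_samp):
    W = rng.standard_normal((p, N))
    lmins[i] = np.linalg.eigvalsh(W @ W.T / N).min()

# gap probability E(t): all eigenvalues lie in (t, oo), i.e. lambda_min > t,
# so E(t) is simply the survival function of the smallest eigenvalue
t = np.linspace(0.0, lmins.max(), 200)
E = np.array([(lmins > ti).mean() for ti in t])

# P(t) = -dE/dt by finite differences; it integrates to E(0) - E(t_max) ~ 1
P = -np.gradient(E, t)
dt = t[1] - t[0]
print(E[0], P.sum() * dt)
```

Since the smallest eigenvalue is positive, E(0) = 1, and the recovered density P is normalized to unity up to discretization error.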

We have three goals: First, we calculate the above quantities exactly. In the real case, we provide, for the first time, explicit and easy–to–use formulas for applications. Second, we uncover mutual dualities between matrix models which make the calculation possible. Third, we find a new universality on a local scale referred to as microscopic in Chiral Random Matrix Theory (25); (26).

To begin with, we diagonalize W W† = U X U†, with U orthogonal if β = 1 or unitary if β = 2. The eigenvalues are non–negative and ordered in the diagonal matrix X = diag(x_1, …, x_p). The volume element transforms as

(4)

where dμ(U) is the Haar measure and Δ_p(X) is the Vandermonde determinant (24). We introduce

(5)

which involves the “rectangularity” of the data matrix W. Thus, the joint distribution of the eigenvalues reads

(6)

with a normalization constant. The highly non–trivial part is the group integral

(7)

The gap probability can then be cast into the form (24)

(8)

where 𝟙 denotes the unit matrix. Importantly, the derivation of Eq. (8) involves the shift x_k → x_k + t. The group integral in (7) is known exactly for β = 2; it is the Harish–Chandra–Itzykson–Zuber integral (27); (28). For β = 1, it is the orthogonal Gelfand spherical function (29) or the orthogonal Itzykson–Zuber integral. Unfortunately, explicit results are not available, although the real case is the much more relevant one in applications. To make progress, zonal or Jack polynomials (13) were developed, which are only given by complicated recursions. The resulting formulas for observables are therefore cumbersome. In Refs. (30); (31), when calculating the spectral density, we circumvented this severe problem by employing the Supersymmetry method (32); (33). In the present context, a supermatrix model is quickly constructed by multiplying the right–hand side of Eq. (8) with a factor which completes the Jacobian according to Eq. (4). This allows one to reintroduce the matrices W. The remaining determinants in the numerator and in the denominator then inevitably lead to a supermatrix model.

Here, we put forward a different approach which will eventually lead us to a much more convenient matrix model in ordinary space. Anticommuting variables will only be used in intermediate steps. Our key idea is to identify rectangular matrices, with a number of columns yet to be determined, such that the gap probability acquires the form of a matrix model without a determinant in the denominator. As one sees from Eq. (4), this is achieved if the eigenvalue prefactor in the measure becomes unity, i.e., if the condition

(9)

is fulfilled. We thus arrive at the matrix model

(10)

which is dual to the model (8). Since the only determinant is in the numerator, we just need anticommuting variables to lift it into the exponent and can carry out the ensemble average. Along lines similar to, e.g., Refs. (34); (30); (31), we then find

(11)

where we restrict ourselves to integer values for β = 1. Although we used anticommuting variables, the resulting matrix is an ordinary one (34); it is either a Hermitian or a self–dual Hermitian matrix. The function

(12)

is related to the Ingham–Siegel integral, see Ref. (35). It is obviously invariant and thus depends only on the diagonal matrix of the eigenvalues. Using Refs. (36); (37), we conclude that

(13)

Remarkably, the ordinary matrix model (11) is invariant once the function (12) is evaluated. There is no symmetry breaking which would lead to an Itzykson–Zuber–type integral as is present in the above mentioned supermatrix model. Since the latter involves for β = 1 an explicitly unknown supergroup integral, the model (11) is much more tractable. By constructing the model (11) in ordinary space, we fully outmaneuvered the substantial difficulties related to the orthogonal Itzykson–Zuber integral and to the zonal or Jack polynomials. Due to the lack of Efetov–Wegner terms in ordinary space, even the complex case is considerably easier to treat. We mention in passing that for β = 1 a half–integer value of the rectangularity enforces a supermatrix model, but this will be discussed elsewhere (38).

Hence, applying standard techniques, we arrive at

(14)

where the elements of this and of all other determinants run over the full index range. The kernel in Eq. (14) is a finite polynomial in t,

(15)

Here we also introduced the Heaviside step function and the elementary symmetric polynomials

e_k(x_1, …, x_p) = Σ_{1 ≤ i_1 < ⋯ < i_k ≤ p} x_{i_1} ⋯ x_{i_k} ,        (16)

with the convention e_0 ≡ 1. Applying Eq. (3), we obtain the distribution of the smallest eigenvalue in the explicit form

(17)

where another polynomial kernel occurs,

(18)

These results are exact and valid for all integer values of the rectangularity. As already mentioned, half–integer values are possible for β = 1.
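The elementary symmetric polynomials of Eq. (16), which enter the polynomial kernels, can be generated without any recursion formulas for zonal polynomials. A small sketch (in Python, with illustrative values of our own choosing) builds all e_k at once by multiplying out the generating product ∏_k (1 + x_k z) one factor at a time; e_k is the coefficient of z^k:

```python
def elementary_symmetric(xs):
    """Return [e_0, e_1, ..., e_n] for the values xs.
    Each pass multiplies the current coefficient list by (1 + x * z),
    updating in place from the highest coefficient downwards."""
    e = [1.0]
    for x in xs:
        e.append(0.0)
        for k in range(len(e) - 1, 0, -1):
            e[k] += x * e[k - 1]
    return e

# for the values 1, 2, 3: e_1 = 1+2+3 = 6, e_2 = 2+3+6 = 11, e_3 = 6
print(elementary_symmetric([1.0, 2.0, 3.0]))   # [1.0, 6.0, 11.0, 6.0]
```

This costs O(n²) operations for all n + 1 polynomials together, so evaluating the kernels remains cheap even for large matrix dimension.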

Our formulas are much more compact and also easier to handle than the previously known expressions. The duality which we uncovered leads to much clearer structures. In Ref. (39), the gap probability and the smallest–eigenvalue distribution for β = 1 are expressed, apart from an exponential, in terms of a finite series in zonal polynomials. Unfortunately, the latter are only given recursively and are thus cumbersome in applications. Even for β = 2, where the Itzykson–Zuber integral is explicitly known, our formulas have a more direct structure and are more convenient in applications than the ones in Refs. (40); (41). We illustrate our findings in Fig. 1 for real

Figure 1: (color online) Distribution of the smallest eigenvalue (top) at finite matrix dimension and (bottom) on the microscopic scale, in the case of correlated Wishart ensembles. The solid lines correspond to analytic results and the histograms to numerical simulations.

time series (β = 1). As empirical eigenvalues Λ_k, we chose 0.6, 1.2, 6.7, 9.3, 10.5, 15.5, 17.2, 20.25, 30.1, 35.4. To demonstrate the validity of our results, we compare them with numerical simulations. Using the program R (42), we generate and diagonalize 50,000 correlated random Wishart matrices drawn from the distribution (2). As expected, the agreement is perfect. In contrast to the formulae in Ref. (39), numerical evaluation of Eq. (17) is easily possible even for large matrix dimension.

We now show the existence of a new kind of universality in the correlated Wishart model. In addition to the theoretical interest in this issue, the universal results to be presented now might also be of practical importance when analyzing correlation matrices built from many long time series. We want to zoom into a local scale set by the mean level spacing of the spectral density near the origin. To this end, we consider the limit N, p → ∞ in such a way that the “rectangularity” is held fixed. A similar limit, referred to as microscopic, was introduced in Chiral Random Matrix Theory (25); (26), which models statistical aspects of Quantum Chromodynamics. However, in contrast to our case the probability density there is fully rotation invariant, corresponding to C = 𝟙. Here we consider an arbitrary C, and thus have to take into account how the local mean level spacing depends on it. From Refs. (43); (41) it follows how we have to rescale the smallest eigenvalue when taking this limit. Hence we introduce a new variable by making the ansatz

(19)

with a constant yet to be determined. We thus restrict our analysis to the commonly encountered situation in which almost all empirical eigenvalues do not depend on the matrix dimension, and only a few grow proportionally to it.

(20)
(21)

The limit is non–trivial, because the function (12) and the normalization constant depend on the matrix dimension. The dependence of the latter is determined by evaluating Eq. (11) at t = 0. Next we investigate how the p–fold product of determinants in Eq. (11) behaves on the local scale,

(22)

Expanding the logarithm to leading order shows that we have to choose

(23)

to fix the microscopic scale, provided this quantity converges to a non–zero constant in the limit. The large–dimension limit of the above expression is then

(24)

This factor cancels with the corresponding part of the normalization constant. Combining with Eq. (13) and performing a series of integrations by parts leads to

(25)

which can be evaluated either by using special functions or by proper contour integration. Remarkably, the very same matrix model results on the microscopic scale in the case that all empirical eigenvalues are equal to unity, as seen from Eq. (11),

(26)

where the empirical eigenvalues dropped out on the right–hand side. Thus, we have effectively traced the problem back to the uncorrelated case considered in Refs. (44); (34); (43); (45); (46). Using the scaling of Eq. (19), with the constant given by Eq. (23), the results coincide with the formulae of Ref. (45),

(27)

where I_ν denotes the modified Bessel function of order ν. The distribution of the smallest eigenvalue on the microscopic scale is then given by

(28)

To illustrate this new universality, we carry out numerical simulations for β = 2, shown in Fig. 1. We generate 30,000 Hermitian Wishart correlation matrices for a non–trivial empirical correlation matrix as indicated in Fig. 1.
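A reader can probe the uncorrelated endpoint of this universality with a few lines of code. For the square complex case (β = 2, zero rectangularity, C = 𝟙), the smallest eigenvalue of W W† is known to follow an exact exponential law with density p exp(−p λ) (44); (43), so the rescaled variable x = p λ_min is Exp(1)-distributed with mean and standard deviation one. The following Monte Carlo sketch (parameter values are ours) checks this:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_samp = 8, 5000

# rescaled smallest eigenvalues x = p * lambda_min of square complex
# Wishart matrices W W^dagger with unit-variance entries (C = 1, beta = 2)
x = np.empty(n_samp)
for i in range(n_samp):
    W = (rng.standard_normal((p, p))
         + 1j * rng.standard_normal((p, p))) / np.sqrt(2)
    x[i] = p * np.linalg.eigvalsh(W @ W.conj().T).min()

# for an Exp(1) variable both the mean and the standard deviation are 1
print(x.mean(), x.std())
```

Already for moderate p the sampled mean and standard deviation sit close to one, illustrating how quickly the microscopic limit is approached.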

In conclusion, we have calculated the gap probability and the distribution of the smallest eigenvalue for the complex and the real correlated Gaussian Wishart ensemble. By numerical evaluation of our results we demonstrated that they are easy to use in applications. We also found a new universality on the microscopic scale. On the conceptual level, our most important result is the discovery of the duality between the and the matrix models. Actually, there are infinitely many dualities, as there is full freedom in choosing that dimension of the matrices which corresponds to the number of time steps. In turn, each of these models has a dual model in superspace with, in general, different bosonic and fermionic dimensions. Of course, here we chose the simplest duality that led to a model in a superspace which collapses to an ordinary space, because the bosonic dimension is zero. A presentation with further results and more mathematical details will be given elsewhere (38).

Acknowledgements.
We thank Rudi Schäfer for fruitful discussions on applications of our results. We are grateful to Mario Kieburg and Santosh Kumar for many useful comments. We acknowledge support from the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich Transregio 12.

References

  1. C. Chatfield, The Analysis of Time Series: An Introduction, 6th ed. (Chapman and Hall/CRC, 2003), ISBN 1-58488-317-0
  2. E. R. Kanasewich, Time Sequence Analysis in Geophysics, 3rd ed. (The University of Alberta Press, Edmonton, Alberta, Canada, 1974), ISBN 0888640749
  3. A. M. Tulino and S. Verdu, Random Matrix Theory and Wireless Communications, Foundations and Trends in Communications and Information Theory (Now Publishers Inc., 2004)
  4. R. Gnanadesikan, Methods for Statistical Data Analysis of Multivariate Observations, 2nd ed. (John Wiley & Sons, 1997)
  5. V. Barnett and T. Lewis, Outliers in Statistical Data, 1st ed. (John Wiley & Sons, 1980)
  6. Vinayak and A. Pandey, Phys. Rev. E 81, 036202 (2010)
  7. S. Abe and N. Suzuki, “Universal and nonuniversal distant regional correlations in seismicity: Random-matrix approach,” e-print arXiv:0909.3830 [physics.geo-ph] (2009)
  8. M. Müller, G. Baier, A. Galka, U. Stephani, and H. Muhle, Phys. Rev. E 71, 046116 (2005)
  9. P. Šeba, Phys. Rev. Lett. 91, 198104 (2003)
  10. M. S. Santhanam and P. K. Patra, Phys. Rev. E 64, 016102 (2001)
  11. L. Laloux, P. Cizeau, J.-P. Bouchaud, and M. Potters, Phys. Rev. Lett. 83, 1467 (1999)
  12. V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, T. Guhr, and H. E. Stanley, Phys. Rev. E 65, 066126 (2002)
  13. R. J. Muirhead, Aspects of Multivariate Statistical Theory (Wiley-Interscience, 2005)
  14. I. M. Johnstone, e-print arXiv:math/0611589 (2006)
  15. P. J. Forrester and T. D. Hughes, Journal of Mathematical Physics 35, 6736 (1994)
  16. C.-N. Chuah, D. Tse, J. Kahn, and R. Valenzuela, IEEE Trans. Inf. Theory 48, 637 (2002)
  17. L. Wasserman, All of Statistics: A Concise Course in Statistical Inference (Springer, 2003)
  18. A. Edelman, Math. Comp 58, 185 (1992)
  19. A. Edelman, SIAM Journal on Matrix Analysis and Applications 9, 543 (1988)
  20. G. Foschini and M. Gans, Wireless Personal Communications 6, 311 (1998)
  21. G. Burel, in Proc. of the WSEAS Int. Conf. on Signal, Speech and Image Processing (ICOSSIP) (2002)
  22. E. Visotsky and U. Madhow, “Space-time precoding with imperfect feedback,” (IEEE eXpress Conference Publishing, Sorrento, Italy, 2000), p. 312
  23. H. Markowitz, Portfolio Selection: Efficient Diversification of Investments (J. Wiley and Sons, 1959)
  24. M. L. Mehta, Random Matrices, 3rd ed. (Elsevier Academic Press, 2004)
  25. E. Shuryak and J. Verbaarschot, Nucl. Phys. A 560, 306 (1993)
  26. J. Verbaarschot and T. Wettig, Annual Review of Nuclear and Particle Science 50, 343 (2000)
  27. C. Itzykson and J.-B. Zuber, J. Math. Phys. 21, 411 (1980)
  28. Harish-Chandra, Proc. Natl. Acad. Sci. 42, 252 (1956)
  29. I. Gelfand, Dokl. Akad. Nauk SSSR 70, 5 (1950)
  30. C. Recher, M. Kieburg, and T. Guhr, Phys. Rev. Lett. 105, 244101 (2010)
  31. C. Recher, M. Kieburg, T. Guhr, and M. R. Zirnbauer, J. Stat. Phys. 148, 981 (2012)
  32. K. Efetov, Adv. Phys. 32, 53 (1983)
  33. J. Verbaarschot, M. Zirnbauer, and H. Weidenmüller, Phys. Rep. 129, 367 (1985)
  34. T. Wilke, T. Guhr, and T. Wettig, Phys.Rev. D 57, 6486 (1998)
  35. Y. V. Fyodorov, Nucl.Phys. B 621, 643 (2002)
  36. T. Guhr, J.Phys. A 39, 13191 (2006)
  37. M. Kieburg, J. Grönqvist, and T. Guhr, J. Phys. A 42, 275205 (2009)
  38. T. Wirtz and T. Guhr, “Distribution of the smallest eigenvalue in complex and real correlated wishart ensembles,” arXiv:math-ph/1310.2467 (2013)
  39. P. Koev, “Computing multivariate statistics,” unpublished notes (2012), http://math.mit.edu/~plamen/files/mvs.pdf
  40. P. J. Forrester, Journal of Physics A: Mathematical and Theoretical 40, 11093 (2007)
  41. H. Zhang, F. Niu, H. Yang, X. Zhang, and D. Yang, in Vehicular Technology Conference, 2008. VTC 2008-Fall. IEEE 68th (IEEE eXpress Conference Publishing, Calgary, BC, 2008) pp. 1–4
  42. R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2012), ISBN 3-900051-07-0, http://www.R-project.org/
  43. P. Forrester, Nuclear Physics B 402, 709 (1993)
  44. A. Edelman, Linear Algebra and its Applications 159, 55 (1991)
  45. P. H. Damgaard and S. M. Nishigaki, Phys. Rev. D 63, 045012 (2001)
  46. E. Katzav and I. Pérez Castillo, Phys. Rev. E 82, 040104 (2010)