New spectral relations between products and powers of isotropic random matrices

Z. Burda (zdzislaw.burda@uj.edu.pl), M.A. Nowak (nowak@th.if.uj.edu.pl), A. Swiech (artur.swiech@uj.edu.pl)
Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center,
Jagiellonian University, Reymonta 4, 30-059 Kraków, Poland
July 3, 2019
Abstract

We show that the limiting eigenvalue density of the product of n identically distributed random matrices from an isotropic unitary ensemble (IUE) is equal to the eigenvalue density of the n-th power of a single matrix from this ensemble, in the limit when the size of the matrix tends to infinity. Using this observation one can derive the limiting density of the product of n independent identically distributed non-Hermitian matrices with unitarily invariant measures. In this paper we discuss two examples: the product of Girko-Ginibre matrices and the product of truncated unitary matrices. We also provide evidence that the result holds for isotropic orthogonal ensembles (IOE) as well.

PACS: 02.50.Cw (Probability theory), 02.70.Uu (Applications of Monte Carlo methods)
Keywords: isotropic random matrices, free probability

Introduction

Free probability theory is a fusion of non-commutative probability theory and the concept of free independence. Since 1991, when the link between free probability theory and random matrix theory was established vdn (), several new results have been proven in an easy and powerful way in the limit of infinitely large random matrices hp (); agz (); ns (). In this note we demonstrate a simple albeit quite counterintuitive result: the spectral density of the product of n free, identically distributed random matrices from an isotropic unitary ensemble (IUE) is equal to the spectral density of the n-th power of a single matrix from this ensemble in the limit of infinite matrix size. The proof is based on the multiplicative properties of the S-transform and the Haagerup-Larsen theorem hl ().

The motivation for the present work comes from the observation made in bjw (); agt1 (); agt2 () that the eigenvalue density of the product of n independent Girko-Ginibre g1 (); g2 () matrices is identical to the eigenvalue density of the n-th power of a single Girko-Ginibre matrix in the limit of infinite size. This observation leads to the question whether this is a feature of only this particular class of matrices or whether there exists a larger class of matrices that have this property. In the present paper we show that there is indeed a larger class of matrices sharing this property: the class of random isotropic matrices. We begin by defining isotropic matrices. Then we present the main result in detail and its derivation. Finally we outline a few sample applications related to recent interest in the literature. In particular we apply our result to the product of Girko-Ginibre matrices and rederive the density known from bjw (); agt1 (); agt2 (). We also consider classes of truncated unitary and orthogonal matrices and compare our predictions to Monte Carlo simulations, which allow us to identify finite size corrections. We conclude the paper with a short summary.

Isotropic random matrices

It is convenient to introduce the concept of isotropic random matrices in analogy to isotropic complex random variables, which have a circularly symmetric probability distribution depending only on the modulus |z|. Using polar decomposition, one can write z = r e^{i phi}, where r is a real non-negative random variable and phi is a random variable (phase) with a uniform distribution on [0, 2 pi). Isotropic random matrices are defined by a straightforward generalization of isotropic complex random variables. A square N x N random matrix X is said to be an isotropic random matrix if it has a polar decomposition X = HU in which H is a positive semi-definite Hermitian random matrix and U is a unitary random matrix independent of H and distributed on the unitary group U(N) with the Haar measure. In short, U is a Haar unitary matrix. The random matrix H plays the role of the radial part of X. Such random matrices form an ensemble of isotropic unitary matrices (IUE). An example is an ensemble generated by the partition function fz (); fsz ():

Z = \int \mathcal{D}X \, e^{-N \,\mathrm{Tr}\, V(X^\dagger X)}    (1)

where \mathcal{D}X is a flat measure and V is a polynomial potential. Another natural class of IUE matrices are matrices of the form X = UDV, where D is an N x N diagonal matrix having real positive random eigenvalues with a given probability distribution, and U and V are two independent Haar unitary matrices on the unitary group U(N). By analogy one can also consider the isotropic orthogonal ensemble (IOE) given by the decomposition X = HO, with H being a positive semi-definite real symmetric matrix and O being a Haar orthogonal matrix. In this case, when one considers an ensemble given by a partition function like the one above, one has to replace X by a real matrix.
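The construction X = UDV is straightforward to sample numerically. Below is a minimal numpy sketch of ours (the uniform radial spectrum is an arbitrary illustrative choice, not one of the ensembles studied later); the Haar unitaries are drawn via the phase-fixed QR decomposition of complex Ginibre matrices:

import numpy as np

def haar_unitary(n, rng):
    # Draw a Haar-distributed unitary matrix via QR of a complex Ginibre matrix.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the phase ambiguity of QR so that q is Haar distributed.
    return q * (np.diag(r) / np.abs(np.diag(r)))

def iue_sample(n, radial_eigs, rng):
    # Build an IUE matrix X = U D V with independent Haar unitaries U, V
    # and a diagonal D of prescribed non-negative "radial" eigenvalues.
    u = haar_unitary(n, rng)
    v = haar_unitary(n, rng)
    return u @ np.diag(radial_eigs) @ v

rng = np.random.default_rng(0)
n = 500
d = rng.uniform(0.0, 2.0, size=n)   # illustrative choice of the radial spectrum
x = iue_sample(n, d, rng)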

In the mathematical literature, isotropic matrices are called R-diagonal ns2 (). In this note we prefer to call them isotropic (or IUE, IOE) in the large-N limit. IUE matrices have an eigenvalue distribution independent of the polar angle on the complex plane. In the limit when the matrix size N tends to infinity one can find an explicit relation between the eigenvalue density of the matrix X and that of the matrix X^dagger X hl (); fz (); fsz (); gkz (). We briefly recall this relation below. Let us mention that the angular independence of the eigenvalue density does not imply that the matrix is isotropic. For example, a block diagonal matrix of the form:

X = \begin{pmatrix} H_1 U_1 & 0 \\ 0 & H_2 U_2 \end{pmatrix}    (2)

where H_1 and H_2 are independent Hermitian matrices of dimensions N_1 x N_1 and N_2 x N_2, N_1 + N_2 = N, and U_1 and U_2 are Haar unitary matrices on U(N_1) and U(N_2) respectively, has a circularly symmetric eigenvalue density in the complex plane, but it is not isotropic. Intuitively, this is because the split into the two blocks breaks the isotropy in the whole group U(N).

Main result

The main result of this paper is as follows: consider n identically distributed isotropic matrices X_1, X_2, ..., X_n, generated independently from a given IUE (isotropic unitary ensemble). In the limit N tending to infinity the eigenvalue density of the product Y = X_1 X_2 ... X_n becomes identical to the eigenvalue density of the n-th power of a single matrix from this ensemble (e.g. X_1^n). In other words, the probability F_Y(r) that a randomly chosen eigenvalue of Y lies within a circle of radius r centered at the origin approaches, for N tending to infinity, the probability F_{X_1^n}(r) that a randomly chosen eigenvalue of X_1^n lies within the same circle. One can use this observation to derive the eigenvalue density of the product if the eigenvalue density of a single matrix is known. In particular one can immediately show that the eigenvalue distribution of the product of n independent Girko-Ginibre matrices has the simple form:

\rho(z) = \frac{1}{n\pi}\, |z|^{\frac{2}{n}-2}, \qquad |z| \le 1    (3)

and zero for |z| > 1, in agreement with bjw (); agt1 (); agt2 (); bjn (); b1 (); b2 (); os (). It is interesting to note that the Hermitian matrices YY^dagger obtained from the products Y of Girko-Ginibre matrices generate the Fuss-Catalan family of distributions cnz () that have, however, a much more complicated limiting eigenvalue density pz (). Another interesting case is the product of n independent truncated unitary matrices sz (), for which

\rho(z) = \frac{\kappa}{n\pi}\, \frac{|z|^{\frac{2}{n}-2}}{\left(1-|z|^{\frac{2}{n}}\right)^2}, \qquad |z| \le (1+\kappa)^{-n/2}    (4)

and zero otherwise. The truncated matrices have dimensions M x M. They are obtained by removing N - M columns and rows from an original N x N Haar unitary random matrix; kappa = (N - M)/M. The result holds for N, M tending to infinity with kappa fixed.

This is a counterintuitive result, so let us stress that it only holds in the limit N tending to infinity. For finite N the eigenvalue distributions of the product X_1 X_2 ... X_n and of the power X_1^n differ. The difference however disappears when N tends to infinity, as we illustrate below.
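The statement can be probed directly at finite N. The following numpy sketch of ours (matrix size, seed and number of factors are arbitrary choices) compares the empirical radial cumulative distributions of a product of n = 3 independent Ginibre matrices and of the third power of a single one against F(r) = r^{2/n} (derived in (20) below):

import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 3

def ginibre(N):
    # Complex Ginibre matrix normalized so that its spectrum fills the unit disk for large N.
    return (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Moduli of the eigenvalues of a product of n independent matrices ...
r_prod = np.abs(np.linalg.eigvals(np.linalg.multi_dot([ginibre(N) for _ in range(n)])))

# ... versus the n-th power of a single matrix.
r_pow = np.abs(np.linalg.eigvals(np.linalg.matrix_power(ginibre(N), n)))

# Both empirical radial CDFs should approach F(r) = r^(2/n) as N grows.
for r in (0.25, 0.5, 0.75):
    print(r, (r_prod <= r).mean(), (r_pow <= r).mean(), r ** (2 / n))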

Derivation

Consider an IUE ensemble of random matrices X of dimensions N x N. In the large-N limit the random matrices can be represented as free random variables and one can use the Haagerup-Larsen theorem hl (), which relates the eigenvalue density of X to the eigenvalue density of X^dagger X by the following formula:

S_{X^\dagger X}\big(F(r)-1\big) = \frac{1}{r^2}    (5)

where F(r) is the cumulative density function for the eigenvalues of X on the complex plane and S_{X^dagger X} is the S-transform of the matrix X^dagger X. The cumulative density function

F(r) = 2\pi \int_0^r \rho(s)\, s\, ds    (6)

can be interpreted as the fraction of eigenvalues of X lying in the circle of radius r centered at the origin of the complex plane. It is related to the eigenvalue density rho, which for isotropic matrices depends only on the distance s = |z| from the origin. The integrand is interpreted as the probability of finding eigenvalues of X in a narrow ring between radii r and r + dr:

dF(r) = F'(r)\, dr = 2\pi r\, \rho(r)\, dr    (7)

The prime denotes differentiation with respect to the radial variable r. The cumulative density function enters equation (5) as the argument of the S-transform, which is in turn determined by the eigenvalue density of the Hermitian matrix X^dagger X (see Appendix A). The Haagerup-Larsen theorem also states hl (); fz (); fsz (); gkz () that the support of the eigenvalue density of X is a ring with radii r_in and r_out, or a disk (if r_in = 0):

r_{\rm in}^2 = \frac{1}{S_{X^\dagger X}(-1)}, \qquad r_{\rm out}^2 = \frac{1}{S_{X^\dagger X}(0)}    (8)
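As a simple illustration of (8) (a check of ours, using the S-transform S_{X^dagger X}(z) = 1/(1+z) of the Marchenko-Pastur distribution with unit ratio, derived in Appendix A): for a Girko-Ginibre matrix X,

r_{\rm out}^2 = \frac{1}{S_{X^\dagger X}(0)} = 1, \qquad
r_{\rm in}^2 = \lim_{z\to -1^+} \frac{1}{S_{X^\dagger X}(z)} = \lim_{z\to -1^+}(1+z) = 0,

so the support is the full unit disk, in agreement with the circular law.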

Let us make a few comments. For an R-diagonal (isotropic) matrix given by the radial decomposition X = HU, where H is Hermitian and U is a Haar unitary matrix, the two matrices X^dagger X = U^dagger H^2 U and H^2 have identical eigenvalues and therefore their S-transforms are identical: S_{X^dagger X} = S_{H^2}. This means that (5) can be written as

S_{H^2}\big(F(r)-1\big) = \frac{1}{r^2}    (9)

Let us now apply this equation to the product Y = X_1 X_2 ... X_n of n identically distributed R-diagonal (isotropic) matrices. The product of R-diagonal elements is itself R-diagonal, so we can apply (9), replacing X in this equation by Y:

S_{Y^\dagger Y}\big(F_Y(r)-1\big) = \frac{1}{r^2}    (10)

The S-transform for the matrix Y^dagger Y which appears in the last equation can be replaced by the product of the S-transforms of the individual factors. Indeed, writing

Y_n^\dagger Y_n = X_n^\dagger \left( Y_{n-1}^\dagger Y_{n-1} \right) X_n    (11)

where Y_k = X_1 X_2 ... X_k (so Y_n = Y), we see that

S_{Y_n^\dagger Y_n}(z) = S_{Y_{n-1}^\dagger Y_{n-1}}(z)\; S_{X_n^\dagger X_n}(z)    (12)

since, due to the cyclic property of the trace, the moments of X_n^dagger (Y_{n-1}^dagger Y_{n-1}) X_n are identical to those of (Y_{n-1}^dagger Y_{n-1}) X_n X_n^dagger, and the moments of X_n X_n^dagger to those of X_n^dagger X_n. Applying the last equation recursively we eventually obtain

S_{Y^\dagger Y}(z) = \prod_{k=1}^{n} S_{X_k^\dagger X_k}(z)    (13)

Taking into account that all the X_k are identically distributed and hence have the same S-transform, which we denote by S(z) = S_{X^dagger X}(z), we can write the last equation as

S_{Y^\dagger Y}(z) = \left[S(z)\right]^n    (14)

Inserting this into (10) we have

\left[S\big(F_Y(r)-1\big)\right]^n = \frac{1}{r^2}, \qquad \text{i.e.} \qquad S\big(F_Y(r)-1\big) = \frac{1}{r^{2/n}}    (15)

This equation has the same form as (9), except that F(r) on the left-hand side is replaced by F_Y(r) and r on the right-hand side is replaced by r^{1/n}. From this observation it immediately follows that

F_Y(r) = F\!\left(r^{1/n}\right) = F_{X^n}(r)    (16)

The last equality follows from the fact that the eigenvalues of the matrix X^n are the n-th powers of the corresponding eigenvalues of X, so |lambda_i(X^n)| <= r is equivalent to |lambda_i(X)| <= r^{1/n}. So we see that indeed the product of n identically distributed isotropic matrices has the same eigenvalue distribution as the n-th power of a single matrix in the product. In practice, the eigenvalue distribution of Y can be calculated directly from the eigenvalue distribution of a single matrix X by substituting r -> r^{1/n} in the cumulative distribution function (6). The corresponding eigenvalue densities may be found using (7). In terms of the radial density f(r) = F'(r) and the two-dimensional density rho of a single matrix X, they read

f_Y(r) \equiv F_Y'(r) = \frac{1}{n}\, r^{\frac{1}{n}-1}\, f\!\left(r^{1/n}\right)    (17)

and

\rho_Y(z) = \frac{1}{n}\, |z|^{\frac{2}{n}-2}\, \rho\!\left(|z|^{1/n}\right)    (18)

Applications

Let us apply these formulas to a couple of examples. First consider Girko-Ginibre matrices g1 (); g2 (), whose eigenvalues in the large-N limit are uniformly distributed inside the unit circle |z| <= 1. We have

F(r) = r^2, \qquad r \le 1    (19)

and F(r) = 1 otherwise. For the product of n independent Girko-Ginibre matrices we have from (16)

F_Y(r) = r^{2/n}, \qquad r \le 1    (20)

and one otherwise. Taking the derivative with respect to r as in (7) we find the corresponding densities:

\rho(z) = \frac{1}{\pi}\, \theta(1-|z|)    (21)

and

\rho_Y(z) = \frac{1}{n\pi}\, |z|^{\frac{2}{n}-2}\, \theta(1-|z|)    (22)

where theta denotes the Heaviside step function. This result agrees with those obtained using different methods in bjw (); bjn (); b1 (); b2 (); os (), as mentioned in the introduction of the paper.
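As a quick consistency check of ours (not taken from the cited works), the density (22) integrates back to the cumulative distribution (20):

\int_{|z|\le r} \rho_Y(z)\, d^2 z = \frac{2\pi}{n\pi} \int_0^r s^{\frac{2}{n}-1}\, ds = r^{2/n} = F_Y(r), \qquad r \le 1,

which correctly reaches 1 at the edge of the support r = 1.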

As the second example, we consider the product of n truncated unitary matrices sz (). The cumulative eigenvalue distribution of a single matrix from this ensemble is

F(r) = \frac{\kappa\, r^2}{1-r^2}, \qquad r^2 \le \frac{1}{1+\kappa}    (23)

and F(r) = 1 otherwise. The coefficient kappa = (N - M)/M is the ratio of the number of rows and columns removed from a Haar unitary matrix of dimensions N x N to the number retained. The truncation leaves a matrix of dimensions M x M. In Appendix B we show how to derive this result using free random variables. The corresponding density reads:

\rho(z) = \frac{\kappa}{\pi}\, \frac{1}{\left(1-|z|^2\right)^2}, \qquad |z|^2 \le \frac{1}{1+\kappa}    (24)

Using (16) we find the cumulative distribution of eigenvalues for the product of n such matrices:

F_Y(r) = \frac{\kappa\, r^{2/n}}{1-r^{2/n}}, \qquad r \le (1+\kappa)^{-n/2}    (25)

and one otherwise. The corresponding eigenvalue density is

\rho_Y(z) = \frac{\kappa}{n\pi}\, \frac{|z|^{\frac{2}{n}-2}}{\left(1-|z|^{\frac{2}{n}}\right)^2}, \qquad |z| \le (1+\kappa)^{-n/2}    (26)

Numerical comparison and finite size effects

In order to crosscheck our results, we use Monte Carlo simulations to generate (sample) finite-size random matrices from the ensembles in question. Agreement between the analytical formulas (3) and (4) and the numerical results is observed once finite size corrections are taken into account. The shape of the obtained radial distributions (7) is shown in figure 1. In the limit N tending to infinity the distributions have compact support with a sharp drop at the edge. For finite N the spectra do not have a sharp threshold; instead they tend to zero continuously in an extended crossover region, and the difference between the product of independent matrices and the corresponding power of a single one is visible in this region (figure 2). The eigenvalue density of the product of independent matrices approaches the theoretical curve faster than that of the corresponding power of a single matrix. Only radial distributions (7) are shown, since the eigenvalue densities are circularly symmetric on the complex plane. The shape of the finite size corrections for the Girko-Ginibre distribution was discussed in b1 (); b2 ().
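A minimal version of such a cross-check for the truncated unitary case (matrix sizes, the seed and the sampled radii below are arbitrary choices of ours) generates truncated Haar unitary matrices and compares the empirical radial CDF of their product with (25):

import numpy as np

rng = np.random.default_rng(2)
N_big, M, n = 300, 150, 2          # truncate N_big x N_big Haar unitaries to M x M; n factors
kappa = (N_big - M) / M            # the ratio kappa of eqs. (23)-(26)

def truncated_unitary(N, M):
    z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))   # phase fix makes q Haar distributed
    return q[:M, :M]                            # keep the upper-left M x M block

radii = np.abs(np.linalg.eigvals(
    np.linalg.multi_dot([truncated_unitary(N_big, M) for _ in range(n)])))

# Empirical radial CDF versus F_Y(r) = kappa * r^(2/n) / (1 - r^(2/n)) inside the support.
for r in (0.2, 0.3, 0.45):
    print(r, (radii <= r).mean(), kappa * r ** (2 / n) / (1 - r ** (2 / n)))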

Figure 1: (Color online) Numerical verification of the theoretical formulas (3) (a) and (4) (b),(c) for the radial part (7) of the mean spectral density of the product of independent matrices. (a) Numerical histograms for the product of 3 independent Gaussian random matrices at three increasing matrix sizes (black crosses, red circles and blue rotated crosses) compared to the theoretical prediction (solid green line). The numerical histograms approach the theoretical curve as the size of the matrices increases. (b) An analogous plot for the product of 2 independent truncated unitary matrices with fixed truncation ratio at three increasing matrix sizes (black crosses, red circles and blue rotated crosses). (c) An analogous plot for the product of 3 independent truncated unitary matrices with fixed truncation ratio at three increasing matrix sizes (black crosses, red circles and blue rotated crosses).
Figure 2: (Color online) Numerical analysis of finite size effects for the radial part (7) of the mean spectral density of the product of independent matrices in comparison to the power of a single matrix. (a) Numerical histograms for the product of 3 independent Gaussian random matrices (black crosses) and a single matrix raised to the third power (blue circles) compared to the theoretical prediction (solid green line). The plots are zoomed into the region where the difference in shape is visible. (b) An analogous plot for the product of 2 independent truncated unitary matrices (black crosses) and the second power of a single truncated unitary matrix (blue circles). (c) An analogous plot for the product of 3 independent truncated unitary matrices (black crosses) and the third power of a single truncated unitary matrix (blue circles).

We also performed numerical simulations for products of truncated orthogonal matrices as an example of multiplication of IOE matrices. In the large-N limit both the IUE and IOE densities are expected to approach the same limiting distribution, while for finite N the distribution in the IOE case is expected to display a characteristic pattern that weakly breaks the circular symmetry of the eigenvalue distribution on the complex plane. More precisely, one expects that a fraction of eigenvalues accumulates on the real axis and disappears from a narrow depletion region close to the axis. The effect was first discussed for real Girko-Ginibre matrices e () and later also for truncated orthogonal matrices ksz (). It is known to be a finite size effect in the sense that the fraction of eigenvalues forming the pattern tends to zero as N tends to infinity, so the full circular symmetry of the eigenvalue density is restored in the limit. This is exactly what we see in our numerical simulations of the product of truncated matrices. First we observe that the radial distribution of eigenvalues of the product of two truncated unitary matrices is identical to that of truncated orthogonal matrices except in a small region (see fig. 3.a). In figure 3.b we compare the finite size distributions for the product of IUE matrices (lower part) and IOE matrices (upper part). We see that the IUE distribution is circularly symmetric up to statistical noise, while the IOE distribution has an elongated shape close to the real axis, as expected. Finally, in fig. 3.c we show the full spectrum, on which one can clearly see an accumulation of eigenvalues on the real axis.
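A minimal sketch of ours for the IOE simulation (sizes and the reality tolerance 1e-8 are arbitrary choices), exhibiting the accumulation of eigenvalues on the real axis for the product of two truncated orthogonal matrices:

import numpy as np

rng = np.random.default_rng(3)
N_big, M = 200, 100

def truncated_orthogonal(N, M):
    q, r = np.linalg.qr(rng.standard_normal((N, N)))
    q = q * np.sign(np.diag(r))     # sign fix makes q Haar distributed on the orthogonal group
    return q[:M, :M]

eig = np.linalg.eigvals(truncated_orthogonal(N_big, M) @ truncated_orthogonal(N_big, M))

# Fraction of (numerically) real eigenvalues; it decreases as M grows.
print(np.mean(np.abs(eig.imag) < 1e-8))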

Figure 3: (Color online) Numerical comparison of the eigenvalues of the product of two truncated orthogonal and two truncated unitary matrices. (a) The radial part (7) of the mean spectral density for unitary (black crosses) and orthogonal (red circles) matrices. The theoretical prediction is shown for comparison (solid green line). (b) Full eigenvalue distribution of the product of truncated orthogonal matrices (upper half of the complex plane) and truncated unitary matrices (lower half of the complex plane). (c) Full eigenvalue distribution of the product of truncated orthogonal matrices for the same parameters. The band of real eigenvalues is clearly visible.

Discussion

In this note we have shown a simple (and as far as we know new) relation between the spectral properties of the product of n identically distributed isotropic random matrices from a given IUE ensemble and the spectral properties of the n-th power of a single matrix from this ensemble. We stress a nonintuitive aspect of this result: it tells us that n independent matrices, when multiplied, give the same eigenvalue density as the product of n fully correlated (identical) matrices. In a sense this is a self-averaging effect: a single random matrix from an isotropic ensemble is a good representative for describing products of matrices from this ensemble in the limit N tending to infinity.

We have supplemented our analytic proof with extensive numerical simulations, which allow us to see how the finite size effects vanish in the thermodynamic limit. For the Girko-Ginibre case the finite size effects agree with those conjectured in b1 (); b2 ().

Our result elucidates the transparent analytic structure noted in several recently published papers on products of random matrices bjw (); agt1 (); agt2 (); b1 (); b2 (); bjn (); os (); sz (); bmj (); bbcc (); ms (); gv (); fk (); djl () and provides a powerful tool for deriving similar results for products of application-designed isotropic random matrices of large (infinite) size.

Acknowledgments

We thank Yan Fyodorov and Karol Zyczkowski for stimulating discussions. This work was supported by the Polish Ministry of Science Grant No. N N202 229137 (2009-2012) and by Grant No. DEC-2011/02/A/ST1/00119 of the National Science Centre.

Appendix A

In this appendix we briefly recall basic facts about the S-transform, introduced by Voiculescu in free probability v (). Consider a Hermitian random matrix H. One usually defines the Green's function

G(z) = \frac{1}{N}\left\langle \mathrm{Tr}\, \frac{1}{z-H} \right\rangle = \int \frac{\rho(\lambda)\, d\lambda}{z-\lambda}    (27)

that is directly related to the eigenvalue density rho(lambda). Note that the density is a function of a real variable, while the Green's function is a function of a complex variable. The Green's function generates the moments (if they exist)

G(z) = \sum_{k=0}^{\infty} \frac{m_k}{z^{k+1}}    (28)

of the eigenvalue density

m_k = \int \lambda^k \rho(\lambda)\, d\lambda    (29)

Sometimes it is more convenient to use another generating function, given by a power series in the variable u = 1/z:

\psi(u) = \sum_{k=1}^{\infty} m_k u^k = \frac{1}{u}\, G\!\left(\frac{1}{u}\right) - 1    (30)

and to introduce its functional inverse chi:

\chi\big(\psi(u)\big) = u    (31)

which can also be expressed as a power series in z if the first moment is nonzero, m_1 != 0. The S-transform for the matrix H is related to the chi-transform as

S(z) = \frac{1+z}{z}\, \chi(z)    (32)

The relevance of the S-transform in free probability is related to the fact that it allows one to concisely formulate the law of free multiplication. The S-transform of the product of two free (independent) matrices from invariant ensembles is the product of the S-transforms of the individual matrices:

S_{AB}(z) = S_A(z)\, S_B(z)    (33)

The multiplication law was formulated in free probability v (), but it can also be rederived in a random-matrix setup using field-theoretical techniques for the summation of planar Feynman diagrams, and it can be generalized to non-Hermitian matrices bjn ().
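As a worked example (used in the applications above), consider X^dagger X for a Girko-Ginibre matrix X, whose moments in the large-N limit are the Catalan numbers, m_k = C_k. The Catalan generating function c(u) = 1 + psi(u) obeys c = 1 + u c^2, so setting psi(chi(z)) = z as in (31) gives

1 + z = 1 + \chi(z)\,(1+z)^2 \quad\Rightarrow\quad \chi(z) = \frac{z}{(1+z)^2} \quad\Rightarrow\quad S_{X^\dagger X}(z) = \frac{1+z}{z}\,\chi(z) = \frac{1}{1+z}.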

Appendix B

In this appendix we rederive the cumulative distribution (23) of a single truncated unitary matrix using free probability and the Haagerup-Larsen theorem. We first construct the density of the N x N matrix Y = PU, where

P = \mathrm{diag}(\,\underbrace{1,\ldots,1}_{M},\ \underbrace{0,\ldots,0}_{N-M}\,)    (34)

is a projection matrix and U is a Haar unitary matrix of dimensions N x N. In order to calculate the S-transform for the projector we first observe that all moments of P are equal: m_k = c = M/N. Hence (30), (31) and eventually (32) give

\psi(u) = \frac{c\,u}{1-u}, \qquad \chi(z) = \frac{z}{z+c}, \qquad S(z) = \frac{1+z}{z+c}    (35)

Inserting this into (5) we find

F(r) = \frac{1-c}{1-r^2}, \qquad 0 \le r \le \sqrt{c}    (36)

and F(r) = 1 otherwise. We see that F(0) = 1 - c. This means that there are (1 - c)N = N - M eigenvalues equal to zero. They are inherited from the zero eigenvalues of the projector. We can now reduce the dimensionality of the matrix by removing the zero eigenvectors. The remaining M x M matrix has no trivial zero eigenvalues. This gives the result quoted in equation (23) for a single truncated matrix:

F_{\rm trunc}(r) = \frac{F(r) - (1-c)}{c} = \frac{\kappa\, r^2}{1-r^2}, \qquad \kappa = \frac{1-c}{c} = \frac{N-M}{M}    (37)

and one otherwise. Subtracting the term 1 - c removes the N - M zero eigenvalues out of the N eigenvalues of PU, and the factor 1/c restores the total normalization.
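The algebra of this appendix is easy to verify mechanically. A small sympy sketch of ours (symbols named after the quantities above):

import sympy as sp

u, z, r, c = sp.symbols('u z r c', positive=True)
F = sp.symbols('F')

# Moment generating function (30) of the projector P: all moments equal c = M/N.
psi = c * u / (1 - u)

# chi is the functional inverse (31) of psi: solve psi(u) = z for u.
chi = sp.solve(sp.Eq(psi, z), u)[0]                 # -> z / (c + z)

# S-transform, eq. (32).
S = sp.simplify((1 + z) / z * chi)                  # -> (1 + z) / (c + z)

# Haagerup-Larsen equation (5): S(F - 1) = 1/r^2, solved for the radial CDF F.
sol = sp.solve(sp.Eq(S.subs(z, F - 1), 1 / r**2), F)[0]
print(sp.factor(sol))                               # equals (1 - c)/(1 - r**2), i.e. eq. (36)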

References

  • (1) D. V. Voiculescu, K. J. Dykema and A. Nica, Free random variables, CRM Monograph Series, Vol. 1 (American Mathematical Society, Providence, RI, 1992). A non-commutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups.
  • (2) F. Hiai and D. Petz, The semicircle law, free random variables and entropy, Mathematical Surveys and Monographs, Vol. 77, (American Mathematical Society, RI, 2000).
  • (3) G. W. Anderson, A. Guionnet and O. Zeitouni, An Introduction to Random Matrices, (Cambridge University Press, UK, 2010).
  • (4) A. Nica and R. Speicher, Lectures on the combinatorics of free probability, London Mathematical Society Lecture Note Series, Vol. 335 (Cambridge University Press, Cambridge, 2006).
  • (5) U. Haagerup and F. Larsen, Journal of Functional Analysis, 176, 331 (2000).
  • (6) Z. Burda, R. A. Janik, B. Waclaw, Phys. Rev. E 81, 041132 (2010).
  • (7) N. Alexeev, F. Götze, A. Tikhomirov, Doklady Mathematics 82, 505 (2010).
  • (8) N. Alexeev, F. Götze, A. Tikhomirov, Lithuanian Math. Journal, 50, 121 (2010).
  • (9) J. Ginibre, J. Math. Phys. 6, 440 (1965).
  • (10) V. Girko, Theor. Prob. Appl. 29, 694 (1985).
  • (11) J. Feinberg and A. Zee, Nucl.Phys. B 501, 643 (1997).
  • (12) J. Feinberg, R. Scalettar, A. Zee, J.Math. Phys. 42, 5718 (2001).
  • (13) A. Nica, R. Speicher, Fields Institute Communications, 12, 149 (1997).
  • (14) A. Guionnet, M. Krishnapur and O. Zeitouni, Ann. Math. 174, 1189 (2011).
  • (15) B. Collins, I. Nechita and K. Zyczkowski, J. Phys. A 43, 275303 (2010).
  • (16) K. A. Penson and K. Zyczkowski, Phys. Rev. E 83, 061118 (2011).
  • (17) K. Zyczkowski and H.-J. Sommers, J. Phys. A 33, 2045 (2000).
  • (18) Z. Burda et al., Phys. Rev. E 82, 061114 (2010).
  • (19) Z. Burda et al., Acta Phys. Polon. B 42, 939 (2011).
  • (20) A. Edelman, J. Multivariate Anal. 60, 203 (1997).
  • (21) B. A. Khoruzhenko, H.-J. Sommers and K. Zyczkowski, Phys. Rev. E 82, 040106 (2010).
  • (22) Z. Burda, R. Janik and M.A. Nowak, Phys. Rev. E 84, 061125 (2011).
  • (23) S. O’Rourke, A. Soshnikov, Electron. J. Probab. 16, 2219 (2011).
  • (24) Z. D. Bai, B. Miao, B. Jin, J. Multivariate Anal. 98, 76 (2007).
  • (25) T. Banica, S. Belinschi, M. Capitaine and B. Collins, Free Bessel laws, arXiv:0710.5931 (2007).
  • (26) J. A. Mingo and R. Speicher, Sharp bounds for sums associated to graphs of matrices, arXiv:0909.4277 (2009).
  • (27) V. L. Girko, A.I. Vladimirova, Random Oper. Stochastic Equations 17, 243 (2009).
  • (28) P. J. Forrester and M. Krishnapur, J. Phys. A: Math. Theor. 42, 385204 (2009).
  • (29) Z. Dong, T. Jiang and D. Li, J. Math. Phys. 53, 013301 (2012).
  • (30) D. V. Voiculescu, J. Operator Theory 18, 223 (1987).