
# Wigner surmise for mixed symmetry classes in random matrix theory

## Abstract

We consider the nearest-neighbor spacing distributions of mixed random matrix ensembles interpolating between different symmetry classes, or between integrable and non-integrable systems. We derive analytical formulas for the spacing distributions of 2×2 or 4×4 matrices and show numerically that they provide very good approximations for those of random matrices with large dimension. This generalizes the Wigner surmise, which is valid for pure ensembles that are recovered as limits of the mixed ensembles. We show how the coupling parameters of small and large matrices must be matched depending on the local eigenvalue density.

PACS numbers: 02.10.Yn

## I Introduction

Random matrix theory (RMT) is a powerful mathematical tool which can be used to describe the statistical behavior of quantities arising in a wide variety of complex systems. It has been applied to many mathematical and physical problems with great success, see Guhr et al. (1998); Verbaarschot and Wettig (2000); Akemann et al. (2011) for reviews. This wide range of applications is based on the fact that RMT describes universal quantities that do not depend on the detailed dynamical properties of a given system but rather are determined by global symmetries that are shared by all systems in a given symmetry class.

In RMT the operator governing the behavior of the system, such as the Hamilton or Dirac operator, is replaced by a random matrix with suitable symmetries. One then studies statistical properties of the eigenvalue spectrum of such random matrices, typically in the limit of large matrix dimension. To compare different systems in the same symmetry class with RMT, the eigenvalues of the physical system as well as those of the random matrices need to be “unfolded” Brody et al. (1981). The purpose of such an unfolding procedure is to separate the average behavior of the spectral density (which is not universal) from the spectral fluctuations (which are universal). Unfolding is essentially a local rescaling of the eigenvalues, resulting in an unfolded spectrum with mean level spacing equal to unity. How the rescaling is to be done is not unique and may depend on the system under study.

In this paper we focus on the so-called nearest-neighbor spacing distribution P(s), i.e., the probability density to find two adjacent (unfolded) eigenvalues at a distance s. This quantity probes the strength of the eigenvalue repulsion due to interactions and can be computed analytically for the classical RMT ensembles, resulting in rather complicated expressions given in terms of prolate spheroidal functions Mehta (2004). However, it was realized early on that the level spacing distribution of large random matrices is very well approximated by that of 2×2 matrices in the same symmetry class.1 For most practical purposes it is sufficient to use this so-called Wigner surmise Wigner () instead of the exact analytical result. It is given by

 P_β(s) = a_β s^β e^{−b_β s²} (1)

with β = 1, 2, 4 corresponding to the Gaussian orthogonal (GOE), unitary (GUE), and symplectic (GSE) ensemble of RMT, respectively. The quantities a_β and b_β are chosen such that

 ∫₀^∞ ds P_β(s) = 1 and ⟨s⟩ = ∫₀^∞ ds P_β(s) s = 1 (2)

in all three cases. Explicit formulas will be given in Sec. II.
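The two conditions in Eq. (2) fix a_β and b_β uniquely; the standard closed forms are b_β = [Γ(β/2+1)/Γ((β+1)/2)]² and a_β = 2b_β^{(β+1)/2}/Γ((β+1)/2). A minimal numerical check (our own Python sketch, not part of the derivation):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def wigner_constants(beta):
    # b_beta and a_beta are fixed by unit norm and unit mean spacing, Eq. (2)
    b = (gamma(beta / 2 + 1) / gamma((beta + 1) / 2)) ** 2
    a = 2 * b ** ((beta + 1) / 2) / gamma((beta + 1) / 2)
    return a, b

for beta in (1, 2, 4):
    a, b = wigner_constants(beta)
    norm, _ = quad(lambda s: a * s**beta * np.exp(-b * s**2), 0, np.inf)
    mean, _ = quad(lambda s: a * s**(beta + 1) * np.exp(-b * s**2), 0, np.inf)
    print(beta, a, b, round(norm, 6), round(mean, 6))  # norm and mean -> 1
```

For β = 1 this reproduces the familiar GOE values a₁ = π/2 and b₁ = π/4.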

RMT describes quantum systems whose classical counterparts are chaotic Bohigas et al. (1984) and correctly predicts the strong short-range correlations of the eigenvalues due to interactions. In contrast, the level spacing distribution of a quantum system whose classical counterpart is integrable is given by that of a Poisson process,

 P₀(s) = e^{−s}, (3)

corresponding to uncorrelated eigenvalues. We assign the Dyson index β = 0 to ensembles of this kind, which is a consistent extension of the generalized Gaussian ensembles with arbitrary real β introduced in Dumitriu and Edelman (2002).

Often physical systems consist of parts with different symmetries, or of a classically integrable and a chaotic part. Changing a parameter of the system may then result in transitions between different symmetry classes. Now, the question is whether a symmetry transition in a given physical system can be described by a transition between RMT ensembles (or Poisson). It has been shown in numerous studies that this is indeed the case. For example, billiards are showcases for the interplay of chaos and integrability, and certain billiards exhibit Poisson-GOE transitions Cheon et al. (1991); Shigehara et al. (1993); Csordás et al. (1994); Abul-Magd et al. (2008). A transition between GOE and GUE behavior takes place in the spectrum of a kicked top Lenz and Haake (1991) or kicked rotor Shukla and Pandey (1997) when time-reversal symmetry is gradually broken. Furthermore, a transition from Poisson to GOE statistics was found for random points on fractals as the dimension is changed Sakhr and Nieminen (2005). In the spectrum of the hydrogen atom in a magnetic field, transitions were observed from Poisson to GOE Wintgen and Friedrich (1987) as well as from GOE to GUE Goldberg et al. (1991). Transitions from Poisson to GOE or GUE statistics also occur in condensed matter physics, e.g., in the metal-insulator (Anderson) transition Shklovskii et al. (1993); Shukla (2005) whose properties are similar to those of the Brownian motion model introduced in Ref. Dyson (1962). In relativistic particle physics the Dirac operator shows transitions between different chiral symmetry classes Follana et al. (2006) or an Anderson-type transition García-García and Osborn (2007); Kovács (2010); Bruckmann et al. (2011). In the spectra of nuclei a transition between GOE and Poisson spectral statistics takes place when level sequences with different exact quantum numbers are mixed Brody et al. (1981).
We thus conclude that RMT is broadly applicable not only to pure systems but also to mixed systems.

In this paper, we assume the Hamiltonian describing the mixed system to be of the form2

 H = H_β + λH_{β′}, (4)

where H_β represents the original system whose symmetry/integrability is broken by the perturbation H_{β′} for small coupling parameter λ, and vice versa for large λ. For the quantities we analyze the absolute scale of H is irrelevant, only the relative scale between the different parts matters.

From the level statistics point of view, H_β and H_{β′} correspond either to a Poisson process or to one of the three RMT ensembles. Hence, there are six possibilities for a transition between two of these four cases in Eq. (4), i.e., Poisson-GOE, Poisson-GUE, Poisson-GSE, GOE-GUE, GOE-GSE, and GUE-GSE. If a GSE matrix is involved in the transition, there are two possibilities for the other matrix: self-dual or not.3 This leads to an even larger variety of mixed ensembles. Many transitions of this kind have been studied in earlier works, usually for large matrix dimension. Transitions between Gaussian ensembles are considered in Mehta (2004), but closed forms for the spacing distribution could not be obtained, and self-dual symmetry was not conserved in the transitions involving the GSE. Mixtures of Gaussian ensembles with conserved self-dual symmetry and small matrix size are considered in Nieminen (2009), but only numerical results are given for the spacing distributions. Other examples include the heuristic Brody distribution Brody (1973) interpolating between Poisson and the GOE, the spacing distribution of a generalized Gaussian ensemble of 2×2 real random matrices Berry and Shukla (2009), and a complete study of the transition between Poisson and the GUE Guhr (1996). The two-point correlation function of the latter case is also studied in Kunz and Shapiro (1998).

Note that an exact analytical calculation of P(s) for systems described by an Ansatz of the form (4) is much harder than, e.g., the analytical calculation of low-order spectral correlation functions, which are already difficult to obtain. Here, we do not attempt an analytical calculation of P(s) for large matrix dimension. Rather, motivated by the reliability of the Wigner surmise, we study the possible transitions in Eq. (4) for 2×2 matrices (or, in the symplectic case, 4×4 matrices, because the smallest non-trivial self-dual matrix has this size) and compare the resulting level spacing distributions with those of large random matrices, the latter obtained numerically. The cases of Poisson-GOE and GOE-GUE were worked out earlier by Lenz and Haake Lenz and Haake (1991), and the spacing distribution of a 2×2 matrix interpolating between Poisson and GUE is given in Kota and Sumedha (1999). These cases will briefly be reviewed below, and the remaining ones are the main subject of this work.

This paper is organized as follows. In Sec. II we derive analytical results for P(s) for small matrix sizes. If H_{β′} is from the GSE (i.e., self-dual) we construct in Secs. II.4, II.6, and II.7 self-dual matrices H_β to maintain the Kramers degeneracy. In Sec. II.8 we consider the case where a 4×4 GSE matrix is perturbed by a non-self-dual GUE matrix. Section III provides strong numerical evidence that the results obtained in Sec. II approximate the spacing distributions of large random matrices very well. We give a perturbative argument for the matching of the couplings used for the Wigner surmise and for large matrices, respectively, and derive an approximate result that involves the eigenvalue density. This result describes the numerical data rather well. We also show that the transitions from the GSE to either a non-self-dual Poissonian ensemble or the GOE proceed via an intermediate transition to the GUE and can also be described by the surmises calculated in Sec. II. We summarize our findings and conclude in Sec. IV. Technical details are worked out in several appendices.

## II Spacing distributions for small matrices

### II.1 Preliminaries

In the spirit of the Wigner surmise, we now calculate the distributions of eigenvalue spacings of mixed ensembles for the smallest nontrivial (i.e., 2×2 or 4×4) matrices, with P(s) normalized as in Eq. (2). Unfolding is not needed for these matrices since they have only two independent eigenvalues (except for Sec. II.8). We first study the transitions from the integrable to the chaotic case for the three Gaussian ensembles and then proceed to the transitions between different symmetry classes.

We define the 2×2 Poisson process by a matrix

 H₀ = ( 0 0 ; 0 p ), (5)

where p is a Poisson distributed non-negative random number with unit mean value, i.e., its probability density is e^{−p}. The eigenvalue spacing of this matrix is obviously Poissonian, as the spacing is just p, and therefore we obtain Eq. (3).

The choice of H₀ may look like a special case, but it suffices for our purposes. The most general Hermitian 2×2 matrix with spacing p can be obtained from Eq. (5) by a common shift of the eigenvalues (which does not influence the spacing) and a basis transformation. This transformation can be absorbed in the perturbing matrix since it does not change the probability distribution of the latter. To see this suppose we had started with a general nondiagonal H̃₀, also with eigenvalues 0 and p, instead of Eq. (5). When added to a random matrix H_β with β = 1, 2, 4, we choose H̃₀ to be real symmetric, Hermitian, or self-dual, respectively, in order to preserve the symmetry properties of H_β. Then H̃₀ is diagonalized by a suitable matrix U, i.e., H₀ = U H̃₀ U^{−1}, where U is orthogonal (β = 1), unitary (β = 2), or symplectic (β = 4). In the total matrix H this is equivalent to perturbing H₀ by U H_β U^{−1}, but the probability distribution of the perturbation is invariant under the transformation H_β → U H_β U^{−1}.

For matrices H_β (β = 1, 2, 4) from the GOE, GUE, and GSE, respectively, we choose the mean values of the matrix elements to be zero and the normalization

 ⟨(H_β)²_{nn}⟩ = 1, ⟨[(H_β)^{(ν)}_{nm}]²⟩ = 1/2 for n ≠ m. (6)

The index ν distinguishes the components of the complex/quaternion GUE/GSE matrix elements, while the GOE matrix elements possess only a real part.

All results we derive from Eq. (4) will be symmetric in λ since the distribution of the elements of H_{β′} is symmetric about zero (the perturbation will be taken from one of the Gaussian ensembles in each case). This means that our results should be expressed in terms of |λ|. To avoid such cumbersome notation we restrict ourselves to non-negative λ.

### II.2 Poisson to GOE

We first consider the case of a classically integrable system perturbed by a chaotic part with anti-unitary symmetry squaring to +1. The integrable part is represented by a Poisson process, and the chaotic one by the GOE. The spacing distribution for this case has been derived in Lenz and Haake (1991), and we state it here for the sake of completeness.

The 2×2 random matrix

 H = H₀ + λH₁ = ( 0 0 ; 0 p ) + λ( a c ; c b ) (7)

consists of H₀ from Eq. (5) and H₁ from the GOE, i.e., a real symmetric matrix with normalization given in Eq. (6). The calculations are very similar to the ones for the transition from Poisson to the GSE, which are presented in Sec. II.4 (see also App. A). The resulting spacing distribution of H reads

 P_{0→1}(s;λ) = C s e^{−D²s²} ∫₀^∞ dx e^{−x²/(4λ²) − x} I₀(xDs/λ) (8)

with

 D(λ) = (√π/(2λ)) U(−1/2, 0, λ²), (9)   C(λ) = 2D(λ)², (10)

where U is the Tricomi confluent hypergeometric function (or Kummer function) (Abramowitz and Stegun, 1964, Eq. (13.1.3)) and I₀ is a modified Bessel function (Abramowitz and Stegun, 1964, Eq. (9.6.3)). P_{0→1}(s;λ) is plotted in Fig. 1 (left) for various values of λ. The formula is equivalent to the one given in Lenz and Haake (1991), but our integration variable x is scaled differently.

In the limiting cases of λ → 0 and λ → ∞ we have

 D(λ) ∼ { 1/(2λ)  for λ → 0,
          √π/2    for λ → ∞. (11)

Using the asymptotic expansion of the Bessel function, it is straightforward to show that for λ → 0 we obtain the Poisson result P₀(s) = e^{−s}. It is even simpler to show that the Wigner surmise for the GOE is obtained for λ → ∞.
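These two limits can also be checked by direct Monte Carlo sampling of Eq. (7). The following Python sketch (our own illustration; seed, sample size, and tolerances are arbitrary choices) compares the variance of the unfolded spacing with the Poisson value 1 and the GOE-surmise value 4/π − 1:

```python
import numpy as np

rng = np.random.default_rng(42)

def spacings(lam, n=200_000):
    # Eq. (7): H = diag(0, p) + lam * (2x2 GOE), with the normalization of Eq. (6)
    p = rng.exponential(1.0, n)
    a, b = rng.normal(0, 1, (2, n))
    c = rng.normal(0, np.sqrt(0.5), n)
    # eigenvalue spacing of ((lam*a, lam*c), (lam*c, p + lam*b))
    s = np.sqrt((lam * (a - b) - p) ** 2 + 4 * lam**2 * c**2)
    return s / s.mean()                     # unfold: <s> = 1

# lam = 0: Poisson, var(s) = 1;  large lam: GOE surmise, var(s) = 4/pi - 1
print(spacings(0.0).var(), spacings(50.0).var())
```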

The small-s behavior of P_{0→1} shows interesting features. To investigate this behavior, we consider separately the cases λ = 0 and λ ≠ 0. For λ = 0 we have by construction

 P_{0→1}(s;0) = e^{−s} = 1 − s + O(s²). (12)

For λ ≠ 0 we obtain from Eq. (8)

 P_{0→1}(s;λ) = c(λ) s + O(s³) (13)

with

 c(λ) ∼ √π/(2λ) for λ → 0, (14)

which means that we recover the linear level repulsion of the GOE for arbitrarily small λ, i.e., for arbitrarily small admixture of the chaotic part, as also observed in Robnik (1987); Hasegawa et al. (1988); Caurier et al. (1990). This implies that in the limit λ → 0 the distribution, viewed as a function of s, develops a discontinuity at s = 0, since P_{0→1}(0;λ) = 0 for λ ≠ 0 while P_{0→1}(0;0) = 1. This effect is clearly seen in Fig. 1 (left).

For small values of λ and s, we observe something reminiscent of the Gibbs phenomenon, i.e., the interpolating distribution overshoots the Poisson curve considerably. In the limit λ → 0, one can show (see App. B.2) that the position of the maximum of P_{0→1} scales with λ while its height remains finite, which implies an overshoot compared to the Poisson curve. Such an effect also occurs in the transitions from Poisson to GUE and GSE that are treated in Secs. II.3 and II.4 below, with a quadratic/quartic level repulsion in the small-s regime.

The large-s behavior of P_{0→1} is analyzed in App. A, and we obtain Poisson-like behavior for any finite λ, see Eq. (108). This is in contrast to the small-s behavior, which is GOE-like for any nonzero λ.

### II.3 Poisson to GUE

We now consider the transition from Poisson to the GUE. This corresponds to a classically integrable system with a chaotic perturbation without anti-unitary symmetry. The 2×2 random matrix

 H = H₀ + λH₂ = ( 0 0 ; 0 p ) + λ( a c₀+ic₁ ; c₀−ic₁ b ) (15)

contains H₂ from the GUE, i.e., a complex Hermitian matrix with normalization (6). The spacing distribution of an equivalent setup with different normalizations of the random matrix elements was already considered in Kota and Sumedha (1999), so we just state the result,

 P_{0→2}(s;λ) = C s² e^{−D²s²} ∫₀^∞ dx e^{−x²/(4λ²) − x} sinh(z)/z (16)

with z = xDs/λ and

 D(λ) = 1/√π + (1/(2λ)) e^{λ²} erfc(λ) − (λ/2) Ei(λ²) + (2λ²/√π) ₂F₂(1/2, 1; 3/2, 3/2; λ²), (17)
 C(λ) = 4D(λ)³/√π. (18)

Here, erfc is the complementary error function (Abramowitz and Stegun, 1964, Eq. (7.1.2)), Ei is the exponential integral (Abramowitz and Stegun, 1964, Eq. (5.1.2)), and ₂F₂ is a generalized hypergeometric function (Gradshteyn and Ryzhik, 1994, Eq. (9.14.1)). We could also have written the result in the form of Eqs. (103) and (104).

To check the validity of Eq. (16) and to see the emergence of the limiting spacing distributions, we now consider the limits λ → 0 and λ → ∞. First note that for λ → 0 we have

 D ∼ 1/(2λ) and C ∼ 1/(2λ³√π) (19)

so that Eq. (16) becomes for λ → 0

 P_{0→2}(s;0) = lim_{λ→0} (s²/(2λ³√π)) ∫₀^∞ dx e^{−(s²+x²)/(4λ²) − x} sinh(z)/z
  = (s/(2√π)) ∫₀^∞ dx (e^{−x}/x) lim_{λ→0} (1/λ) (e^{−(s−x)²/(4λ²)} − e^{−(s+x)²/(4λ²)})
  = (s/(2√π)) ∫₀^∞ dx (e^{−x}/x) 2√π [δ(s−x) − δ(s+x)]
  = e^{−s}, (20)

which is the Poisson distribution as required. For λ → ∞ we have

 D ∼ 2/√π and C ∼ 32/π² (21)

so that Eq. (16) becomes

 P_{0→2}(s;∞) = (32s²/π²) e^{−4s²/π} lim_{λ→∞, z→0} ∫₀^∞ dx e^{−x²/(4λ²) − x} sinh(z)/z = (32s²/π²) e^{−4s²/π}, (22)

which is the Wigner surmise for the GUE.

The integral in Eq. (16) can be computed numerically without difficulties as the integrand decays like a Gaussian for large x and becomes constant for small x.4 The resulting distribution is plotted in Fig. 1 (middle).

As in Sec. II.2, a discontinuity is found at s = 0 in the limit λ → 0 towards the Poisson result. For λ ≠ 0 we obtain from Eq. (16)

 P_{0→2}(s;λ) = c(λ) s² + O(s⁴) (23)

with

 c(λ) ∼ 1/(2λ²) for λ → 0. (24)

Hence we obtain the quadratic level repulsion of the GUE for arbitrarily small coupling parameter. For λ → 0, the position of the maximum of the distribution again scales with λ, with a finite limiting height (see App. B.2).

The large-s behavior of P_{0→2} is given by Eq. (108), i.e., it is Poisson-like.

### II.4 Poisson to GSE

In this case, a classically integrable system is perturbed by a chaotic part with anti-unitary symmetry squaring to −1 and hence represented by the self-dual matrices of the GSE. One has to consider 4×4 matrices here, because a self-dual 2×2 matrix is proportional to the identity and has only one non-degenerate eigenvalue. As mentioned in the introduction, there are now two possibilities: The Poisson process could be represented by a self-dual or a non-self-dual matrix. Here we only consider the former possibility, while the latter will be discussed in Sec. III.5. A self-dual Poisson matrix is obtained by taking a tensor product of Eq. (5) with 1₂. Thus the transition matrix is

 H = H₀ ⊗ 1₂ + λH₄
   = ( 0 0 0 0
       0 0 0 0
       0 0 p 0
       0 0 0 p )
   + λ( a        0        c₀+ic₃   c₁+ic₂
        0        a       −c₁+ic₂   c₀−ic₃
        c₀−ic₃  −c₁−ic₂   b        0
        c₁−ic₂   c₀+ic₃   0        b ), (25)

where the GSE matrix H₄ is Hermitian and self-dual, and can be represented by a 2×2 matrix whose elements are real quaternions, see Mehta (2004) for details.
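The structure of Eq. (25) can be made concrete in code. The following Python sketch (our own, with arbitrary seed and coupling) builds H and checks Hermiticity, the Kramers degeneracy of its spectrum, and that the gap between the two distinct eigenvalues equals λ√((a−b−p/λ)² + 4c_μc_μ), which follows from the quaternion 2×2 block structure:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_H(lam):
    # H0 (x) 1_2 + lam * H4 as in Eq. (25), normalization of Eq. (6):
    # diagonal variance 1, each quaternion component c_mu variance 1/2
    p = rng.exponential(1.0)
    a, b = rng.normal(0, 1, 2)
    c0, c1, c2, c3 = rng.normal(0, np.sqrt(0.5), 4)
    H4 = np.array([
        [a,          0,           c0 + 1j*c3,  c1 + 1j*c2],
        [0,          a,          -c1 + 1j*c2,  c0 - 1j*c3],
        [c0 - 1j*c3, -c1 - 1j*c2, b,           0         ],
        [c1 - 1j*c2,  c0 + 1j*c3, 0,           b         ]])
    H0 = np.diag([0, 0, p, p]).astype(complex)
    S = lam * np.sqrt((a - b - p/lam)**2 + 4*(c0**2 + c1**2 + c2**2 + c3**2))
    return H0 + lam * H4, S

H, S = sample_H(0.8)
ev = np.linalg.eigvalsh(H)
print(np.allclose(H, H.conj().T))                 # Hermitian
print(abs(ev[0] - ev[1]), abs(ev[2] - ev[3]))     # Kramers pairs: ~ 0
print(abs((ev[2] - ev[0]) - S))                   # gap matches the formula: ~ 0
```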

We now explain the calculation of the spacing distribution for this transition. The computation of the previous cases, Poisson to GOE and Poisson to GUE, can be done in a similar fashion.

Due to the self-dual structure of H, the spacing between its non-degenerate eigenvalues can be computed analytically and reads

 S = λ √( (a − b − p/λ)² + 4c_μc_μ ), (26)

where the repeated index μ indicates a sum from 0 to 3. We have intentionally written S instead of s since we eventually need to rescale the spacing to ensure ⟨s⟩ = 1. The desired spacing distribution is proportional to the integral

 P(S) ∝ ∫ da db d⁴c ∫₀^∞ dp e^{−λp} ρ(a) ρ(b) ρ(c_μ) δ( S − λ√((a−b−p)² + 4c_μc_μ) ), (27)

where we have rescaled p by λ for simplicity and are not yet concerned with the normalization. The distributions ρ of the random variables are Gaussian, with variances given by Eq. (6),

 σ²_{a,b} = 2σ²_{c₀,c₁,c₂,c₃} = 1. (28)

Inserting this into Eq. (27) and shifting a → a + p gives a multi-dimensional Gaussian integral, which is computed in App. C.1. Rescaling the spacing and normalizing the distribution to satisfy Eq. (2), we obtain

 P_{0→4}(s;λ) = C s⁴ e^{−D²s²} ∫₀^∞ dx e^{−x²/(4λ²) − x} (z cosh z − sinh z)/z³ (30)

with z = xDs/λ and

 D(λ) = (λ/(2√π)) ∫₀^∞ dx e^{−2λx} [ (4x³+2x) e^{−x²} + √π (4x⁴+4x²−1) erf(x) ] / x³, (31)
 C(λ) = 8D(λ)⁵/√π, (32)

where erf is the error function (Abramowitz and Stegun, 1964, Eq. (7.1.1)). The last term in the integrand of Eq. (30) is in agreement with Eqs. (103) and (104).

In the limiting cases of λ → 0 and λ → ∞ we find

 D(λ) ∼ { 1/(2λ)    for λ → 0,
          8/(3√π)   for λ → ∞. (33)

For λ → 0, manipulations analogous to those performed in Eq. (20) lead to the Poisson result P₀(s) = e^{−s}. For λ → ∞ the integral in Eq. (30) becomes trivial, so that we obtain the Wigner surmise for the GSE.

Equation (30) is plotted in Fig. 1 (right) and again displays a discontinuity at s = 0 as λ → 0. For λ ≠ 0 we now have

 P_{0→4}(s;λ) = c(λ) s⁴ + O(s⁶) (34)

with

 c(λ) ∼ 1/(12λ⁴) for λ → 0. (35)

For λ → 0, the position of the maximum of the distribution again scales with λ, with a finite limiting height (see App. B.2).

The large-s behavior of P_{0→4} is again Poisson-like and given by Eq. (108).

### II.5 GOE to GUE

With this subsection we start the investigation of transitions between different chaotic ensembles using the smallest possible matrix size.

We consider the 2×2 matrix

 H = H₁ + λH₂. (36)

The spacing distribution for this transition was already computed in Lenz and Haake (1991). With the normalization of ensembles given in Eq. (6), it reads

 P_{1→2}(s;λ) = C s e^{−D²s²} erf(Ds/λ) (37)

with

 D(λ) = (√(1+λ²)/√π) ( λ/(1+λ²) + arccot λ ), (38)
 C(λ) = 2√(1+λ²) D(λ)². (39)

This formula matches the result of Lenz and Haake (1991) up to a rescaling of the coupling parameter, which is due to a different normalization of the ensembles used there.

In the limiting cases of λ → 0 and λ → ∞ we have

 D(λ) ∼ { √π/2   for λ → 0,
          2/√π   for λ → ∞. (40)

For λ → 0, the error function in Eq. (37) can be replaced by unity (for s ≠ 0), and we obtain the Wigner surmise for the GOE. For λ → ∞, using the first-order Taylor expansion of the error function yields the Wigner surmise for the GUE.
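Since Eqs. (37)-(39) are in closed form, the normalization conditions (2) can be verified directly for any λ. A short Python sketch (our own illustration):

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def D12(lam):
    # Eq. (38); arccot(x) = arctan(1/x) for x > 0
    return np.sqrt(1 + lam**2) / np.sqrt(np.pi) * (lam / (1 + lam**2) + np.arctan(1 / lam))

def P12(s, lam):
    D = D12(lam)
    C = 2 * np.sqrt(1 + lam**2) * D**2          # Eq. (39)
    return C * s * np.exp(-D**2 * s**2) * erf(D * s / lam)

for lam in (0.3, 1.0, 3.0):
    norm, _ = quad(lambda s: P12(s, lam), 0, np.inf)
    mean, _ = quad(lambda s: s * P12(s, lam), 0, np.inf)
    print(lam, round(norm, 6), round(mean, 6))  # both -> 1, Eq. (2)
```

The limits (40) are recovered as well, e.g., D12(λ) → √π/2 for small λ and → 2/√π for large λ.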

The result (37) is plotted in Fig. 2 (left). In the small-s region, we now have for λ ≠ 0

 P_{1→2}(s;λ) = c(λ) s² + O(s⁴) (41)

with

 c(λ) ∼ π/(2λ) for λ → 0. (42)

Similar to the previous subsections, a non-analytic transition between weaker and stronger level repulsion develops as λ → 0, except that now there is no jump in the function itself but rather in its derivative at s = 0. Therefore, the stronger level repulsion takes over immediately in the small-s regime if λ ≠ 0. As we shall see below, this also happens in the remaining transitions, GOE to GSE and GUE to GSE, and seems to be a characteristic feature of the mixed ensembles.

The large-s behavior of P_{1→2} is obtained immediately from Eq. (37) by noticing that erf(Ds/λ) → 1 for s → ∞. In analogy to the transitions from Poisson to RMT this implies that the large-s behavior is dominated by the ensemble with the smaller β.

### II.6 GOE to GSE

As the GSE is involved in this transition, we need matrices of size 4×4. Again there are two possibilities: The GOE matrix could be made self-dual, or it could be non-self-dual (as it generically is). Here we only consider the former case, while the latter case will be discussed in Sec. III.5. As in Nieminen (2009) we define a modified GOE matrix by

 H₁ ⊗ 1₂ = ( a 0 c 0
             0 a 0 c
             c 0 b 0
             0 c 0 b ) (43)

with real parameters a, b, c. This matrix is self-dual, so we can add it to a matrix from the GSE without spoiling the symmetry properties of the latter. Thus we consider

 H = H₁ ⊗ 1₂ + λH₄, (44)

where H₁ and H₄ are normalized according to Eq. (6). The eigenvalues of the sum are doubly degenerate and can be calculated easily due to self-duality.

After some algebra (see App. C.2) we obtain for the spacing distribution of H

 P_{1→4}(s;λ) = C s⁴ e^{−(1+2λ²)D²s²} ∫₀¹ dx (1−x²) e^{(xDs)²} [I₀(z) − I₁(z)], (45)

where I₀ and I₁ are modified Bessel functions, and

 D(λ) = [λ − λ³ + (1+λ²)² arccot λ] / (√(2π) λ √(1+λ²)), (46)
 C(λ) = (2^{9/2}/√π) λ² (1+λ²)^{3/2} D(λ)⁵. (47)

In the limiting cases of λ → 0 and λ → ∞ we have

 D(λ) ∼ { √π/(2^{3/2}λ)   for λ → 0,
          8/(3√(2π)λ)     for λ → ∞. (48)

For λ → 0, we use the asymptotic expansion of the Bessel functions to simplify the integral over x in Eq. (45) and obtain the Wigner surmise for the GOE. For λ → ∞, the exponential and the difference of the Bessel functions in the integral over x can be replaced by unity, and the Wigner surmise for the GSE follows trivially.

The distribution is plotted for several values of λ in Fig. 2 (middle) and displays a continuous interpolation between the GOE and GSE curves. In the small-s region, the level repulsion is of fourth order for non-vanishing λ. This is visible in the plots and can be shown by expanding for λ ≠ 0 and small s,

 P_{1→4}(s;λ) = c(λ) s⁴ + O(s⁶) (49)

with

 c(λ) ∼ π²/(12λ³) for λ → 0. (50)

The large-s behavior of P_{1→4} can be obtained using the asymptotic expansion

 I₀(z) − I₁(z) = e^z [ 1/(√(8π) z^{3/2}) + O(z^{−5/2}) ] (51)

in Eq. (45), resulting in

 P_{1→4}(s;λ) ∼ (√π/32) C D³ s e^{−2(λDs)²} for s → ∞. (52)

Again, the large-s behavior is dominated by the ensemble with the smaller β.

### II.7 GUE to GSE

Again, due to the presence of the GSE, we have two possibilities for the GUE: self-dual or not. The former case is simpler and analyzed here, while the latter case will be considered in Sec. II.8. We first have to clarify how to obtain a self-dual 4×4 matrix whose eigenvalues have the same probability distribution as those of a 2×2 matrix from the GUE. In analogy to Sec. II.6, one could try H₂ ⊗ 1₂, but the resulting matrix is not self-dual. Instead, we consider the matrix

 H₂^{(4)} = ( H₂  0
              0   H₂ᵀ ) (53)

with H₂ given in Eq. (15). The eigenvalues of H₂^{(4)} are obviously equal to those of H₂, but twofold degenerate. Interchanging the second and third row and column of H₂^{(4)}, we obtain the matrix

 H₂^{sd} = ( a        0        c₀+ic₁   0
             0        a        0        c₀−ic₁
             c₀−ic₁   0        b        0
             0        c₀+ic₁   0        b ), (54)

which is self-dual and has the same eigenvalues as H₂^{(4)}. A matrix of this form was already introduced in Nieminen (2009).

The proper self-dual matrix for the GUE to GSE transition is thus

 H = H₂^{sd} + λH₄ (55)

with H₄ given in Eq. (25). The calculation of the corresponding spacing distribution proceeds in close analogy with the one presented in App. C.2, and we find the closed expression

 P_{2→4}(s;λ) = C e^{−(λDs)²} [ 2(Ds)² − √π Ds e^{−(Ds)²} erfi(Ds) ] (56)

with the imaginary error function erfi and

 D(λ) = (1/(λ√π)) [ 2 + λ² − λ⁴ arccsch(λ)/√(1+λ²) ], (57)
 C(λ) = (2λ³/√π) (1+λ²) D(λ), (58)

where arccsch is defined in (Abramowitz and Stegun, 1964, Eq. (4.6.17)).

In the limiting cases of λ → 0 and λ → ∞ we have

 D(λ) ∼ { 2/(λ√π)    for λ → 0,
          8/(3λ√π)   for λ → ∞. (59)

For λ → 0, the asymptotic expansion of the second term in the square brackets of Eq. (56) yields 1 + O((Ds)^{−2}). This can be neglected compared to the first term in the square brackets, which gives the Wigner surmise for the GUE. For λ → ∞, Taylor expansion of the square brackets in Eq. (56) yields the Wigner surmise for the GSE.

The result (56) is plotted in Fig. 2 (right). In the small-s region, we have for λ ≠ 0

 P_{2→4}(s;λ) = c(λ) s⁴ + O(s⁶) (60)

with

 c(λ) ∼ 256/(3π³λ²) for λ → 0. (61)

The large-s behavior of P_{2→4} can be obtained by noticing that for large s the first term in the square brackets of Eq. (56) dominates the second term so that

 P_{2→4}(s;λ) ∼ 2C D² s² e^{−(λDs)²} for s → ∞. (62)

Again, the large-s behavior is dominated by the ensemble with the smaller β.

### II.8 GSE to GUE without self-dual symmetry

In this section, we consider a matrix taken from the GSE whose Kramers degeneracy is lifted by a perturbation taken from the GUE without self-dual symmetry. As we shall see, this case also gives a surmise for other transitions involving the GSE and another ensemble without self-dual symmetry. We will return to this point in Sec. III.5.

#### General considerations

The 4×4 transition matrix is

 H = H₄ + λH₂ (63)

with H₄ taken from the GSE and H₂ from the GUE, both in standard normalization, Eq. (6). As H₂ has no self-dual symmetry, the two-fold degeneracy of the GSE spectrum is removed and eigenvalue pairs split up. If the perturbation is small, there are two different spacing scales in this setup, as shown in Fig. 3 where the perturbation of two nearest-neighbor eigenvalues is sketched:

• s₁: The spacings between previously degenerate eigenvalues, which are of the same order of magnitude as the coupling parameter λ for small couplings. They are formed by the two smallest/largest eigenvalues of H.

• s₂: The intermediate spacing, which is formed by the second and third largest eigenvalue of H. In the limit λ → 0 this is the original spacing of the GSE matrix H₄.

The joint probability density of the eigenvalues of H is given, up to a rescaling, by (Mehta, 2004, Eq. (14.2.7))

 P(θ₁,θ₂,θ₃,θ₄) = C₀ exp(−∑_{i=1}^{4} θᵢ²) Δ(θ₁,θ₂,θ₃,θ₄) × [ h(d₂₁)h(d₄₃) + h(d₃₂)h(d₄₁) − h(d₃₁)h(d₄₂) ] (64)

with

 Δ(θ₁,θ₂,θ₃,θ₄) = ∏_{i<j} (θⱼ − θᵢ), (65)

with d_{ij} = θᵢ − θⱼ and a function h that depends on λ; its explicit form follows from (Mehta, 2004, Sec. 14.2).

As we are only interested in spacings and thus in differences of eigenvalues, we introduce new variables

 t₁ = d₂₁ = θ₂ − θ₁, (69)
 t₂ = d₃₂ = θ₃ − θ₂, (70)
 t₃ = d₄₃ = θ₄ − θ₃ (71)

and keep the original variable θ₁. The Jacobi determinant of this transformation is unity, and we can now perform the θ₁ integration, which results (up to a constant factor) in

 P(t₁,t₂,t₃) = Δ(−t₁, 0, t₂, t₂+t₃) exp[ −(3t₁² + 4t₂² + 3t₃²)/4 − t₁t₂ − t₁t₃/2 − t₂t₃ ] × [ h(t₁)h(t₃) − h(t₁+t₂)h(t₂+t₃) + h(t₁+t₂+t₃)h(t₂) ]. (72)

We now derive the distributions of the two different kinds of spacings from this formula. We assume θ₁ < θ₂ < θ₃ < θ₄ and include the resulting combinatorial factor explicitly.

#### Spacings between originally degenerate eigenvalues

To obtain the distribution of the spacing between the two smallest eigenvalues of H (the two largest ones give the same result due to symmetry), we set t₁ = Ds₁ and integrate over t₂ and t₃ from 0 to ∞. This results in the spacing distribution

 P¹_{4→2}(s₁;λ) = CD ∫₀^∞ dt₂ dt₃ P(Ds₁, t₂, t₃) (73)

with

 C(λ) = (4/3) π^{−3/2} λ^{−6} (2+λ²)⁵, (74)
 D(λ) = C(λ) ∫₀^∞ dS₁ dt₂ dt₃ S₁ P(S₁, t₂, t₃). (75)

We replaced S₁ by Ds₁ to indicate that this is the spacing on the unfolded scale, i.e., with a mean value of 1. One of the integrals could in principle be done analytically, but this results in such a lengthy expression that it seems more sensible to evaluate all integrals numerically.

The distribution in the limit λ → 0 can either be obtained by perturbation theory, see App. D.1, or by directly evaluating the spacing distribution in this limit. First note that

 lim_{λ→0} (2/(√π λ³)) x h(x) = δ(x), (76)

where the λ-dependence of h, which is suppressed in our notation, plays a crucial role. As the mean value of the spacing S₁ on the original scale has to become arbitrarily small in the GSE limit λ → 0 due to the Kramers degeneracy, we consider a rescaled spacing s̃₁ = S₁/λ. Therefore h becomes for small λ

 h(S₁) = h(λs̃₁) ≈ λs̃₁ e^{−s̃₁²} for λ → 0. (77)

With these considerations we obtain from Eq. (72)

 P(λs̃₁, t₂, t₃) ∝ [ (2s̃₁²/(√π λ)) e^{−s̃₁²} δ(t₃) (λs̃₁+t₂)(λs̃₁+t₂+t₃) t₂ (t₂+t₃)
  − δ(λs̃₁+t₂) δ(t₂+t₃) λs̃₁ t₂ t₃ (λs̃₁+t₂+t₃)
  + δ(λs̃₁+t₂+t₃) δ(t₂) λs̃₁ (λs̃₁+t₂)(t₂+t₃) t₃ ] (78)

as λ → 0. The last two terms in square brackets vanish upon evaluation of the t₂ and t₃ integrals, because the zeros of the arguments of their δ-functions lie outside of the integration region. Performing the t₃ integration in the first term we obtain for nonzero s̃₁ and λ → 0

 P¹_{4→2}(s̃₁;λ) ∝ s̃₁² e^{−s̃₁²} for λ → 0. (79)

Up to normalization and rescaling this is the spacing distribution of a 2×2 GUE matrix.

In the opposite limit λ → ∞ the result (73) reduces to the distribution of the first and last spacings of a pure 4×4 GUE matrix. This distribution can be obtained from similar considerations, starting from (Mehta, 2004, Eq. (3.3.7)).

The result (73) is shown in Fig. 4 (left and middle) for several values of λ, along with the limiting distributions for λ → 0 and λ → ∞. All these curves are very similar and can only be distinguished by the naked eye in the zoomed-in plot.

We have validated the result (73) by comparing it to the spacing distribution of numerically obtained 4×4 random matrices.
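Such a validation can be sketched as follows (our own Python Monte Carlo; seed, coupling, and sample size are hypothetical choices). For small λ the unfolded splitting s₁ should follow Eq. (79), whose unit-mean form has second moment 3π/8:

```python
import numpy as np

rng = np.random.default_rng(0)

def gse4():
    # 4x4 GSE matrix as in Eq. (25), normalization of Eq. (6)
    a, b = rng.normal(0, 1, 2)
    c = rng.normal(0, np.sqrt(0.5), 4)
    return np.array([
        [a,            0,             c[0] + 1j*c[3],  c[1] + 1j*c[2]],
        [0,            a,            -c[1] + 1j*c[2],  c[0] - 1j*c[3]],
        [c[0] - 1j*c[3], -c[1] - 1j*c[2], b,           0             ],
        [c[1] - 1j*c[2],  c[0] + 1j*c[3], 0,           b             ]])

def gue4():
    # generic 4x4 GUE matrix (no self-dual symmetry), Eq. (6) normalization
    M = np.diag(rng.normal(0, 1, 4)).astype(complex)
    off = rng.normal(0, np.sqrt(0.5), (4, 4)) + 1j * rng.normal(0, np.sqrt(0.5), (4, 4))
    iu = np.triu_indices(4, 1)
    M[iu] = off[iu]
    return M + np.triu(M, 1).conj().T

lam, n = 0.01, 10_000
s1 = []
for _ in range(n):
    ev = np.linalg.eigvalsh(gse4() + lam * gue4())
    s1 += [ev[1] - ev[0], ev[3] - ev[2]]      # split Kramers pairs
s1 = np.array(s1) / np.mean(s1)               # unfold: <s1> = 1
print(np.mean(s1**2))  # ~ 3*pi/8 for small lambda, cf. Eq. (79)
```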

#### Perturbed GSE-spacing

We now consider the perturbed spacing of the original GSE matrix, which was formed by the two degenerate eigenvalue pairs of H₄. The distribution of this spacing is obtained by setting t₂ = Ds₂ and integrating P defined in Eq. (72) over t₁ and t₃ from 0 to ∞. With proper normalization as required by Eq. (2), this yields

 P²_{4→2}(s₂;λ) = CD ∫₀^∞ dt₁ dt₃ P(t₁, Ds₂, t₃) (80)

with

 C(λ) = (4/3) π^{−3/2} λ^{−6} (2+λ²)⁵, (81)
 D(λ) = C(λ) ∫₀^∞ dS₂ dt₁ dt₃ S₂ P(t₁, S₂, t₃). (82)

Again, the replacement of S₂ by Ds₂ means that this is the intermediate spacing on the unfolded scale, i.e., with a mean value of 1.

In the limit λ → 0 the result (80) reduces to the Wigner surmise for the GSE, while in the opposite limit λ → ∞ it reduces to the distribution of the intermediate spacing of a pure 4×4 GUE matrix, which can again be obtained from similar considerations.

The result (80) is shown in Fig. 4 (right) for several values of λ, along with the limiting distributions for λ → 0 and λ → ∞. The maximum of the interpolating distribution first drops as λ is increased from 0, while at an intermediate value of λ it starts to rise again as the distribution approaches its λ → ∞ limit. Note that the limiting distributions of s₁ and s₂ for λ → ∞, i.e., the red dashed curves in Fig. 4, turn out to be almost identical to each other and to the Wigner surmise for the GUE.

We have also validated the result (80) by comparing it to the spacing distribution of numerically obtained 4×4 random matrices.

## III Application to large spectra

In this section we will show numerically that the formulas derived in Sec. II for small matrices describe the spacing distributions of large random matrices very well. This observation should be viewed as our main result.

When comparing the results obtained from large matrices to our generalized Wigner surmises, a natural question is how the corresponding coupling parameters, i.e., λ in Eq. (4), should be matched. This question will be addressed in the next subsection based on perturbation theory, while the numerical results will be presented in the remaining subsections.

### III.1 Matching of the coupling parameters

The setup is most easily explained by means of the transition from Poisson to the GUE. The Poisson case is represented by a diagonal N×N matrix H₀ with independent entries, each distributed according to the same distribution, which we choose independent of N. The eigenvalue density ρ of H₀ is thus proportional to N, and the local mean level spacing is 1/ρ. We consider

 H = H₀ + αH₂, (83)

where H₂ is an N×N random matrix taken from the GUE, subject to the usual normalization, Eq. (6).

As in the 2×2 case, the eigenvalues will experience a repulsion through the perturbation αH₂. We will show in first-order perturbation theory that the relevant quantity for the repulsion is a combination of the eigenvalue density of H₀ and the variance of the matrix elements of αH₂.

Ordinary perturbation theory in α yields a first-order shift of the eigenvalues θᵢ of

 Δθᵢ^{(1)} = α (H₂)ᵢᵢ. (84)

This shift does not lead to a correlation of the eigenvalues, as it just adds an independent Gaussian random number to each of them. Therefore, the eigenvalues remain uncorrelated, and their spacing distribution remains Poissonian.
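This can be illustrated numerically (our own Python sketch; choosing the shift size relative to the mean level spacing is an arbitrary illustration choice): adding independent diagonal shifts to uncorrelated levels leaves the spacing distribution exponential, i.e., Poissonian, with unit variance of the unfolded spacing.

```python
import numpy as np

rng = np.random.default_rng(1)

# First-order shift, Eq. (84): each Poisson level acquires an independent
# Gaussian shift; the levels stay uncorrelated.
N, alpha = 20_000, 0.3
levels = np.sort(rng.uniform(0, 1, N))                        # Poisson spectrum
shifted = np.sort(levels + alpha * rng.normal(0, 1, N) / N)   # shift ~ alpha * mean spacing
s = np.diff(shifted) * N                                      # unfold: <s> ~ 1
print(np.mean(s), np.var(s))  # Poisson statistics: mean ~ 1, variance ~ 1
```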

However, if there is a small spacing of order α between two5 adjacent eigenvalues θₖ and θₖ₊₁ of H₀, first-order almost-degenerate perturbation theory Gottfried and Yan (2004) predicts that the perturbed eigenvalues are the eigenvalues of the 2×2 matrix

 ( θₖ + α(H₂)ₖₖ        α(H₂)ₖ,ₖ₊₁
   α(H₂)ₖ₊₁,ₖ          θₖ₊₁ + α(H₂)ₖ₊₁,ₖ₊₁ )