
# How Many Eigenvalues of a Random Symmetric Tensor are Real?

Paul Breiding
###### Abstract

This article answers a question posed by Draisma and Horobet, who asked for a closed formula for the expected number of real eigenvalues of a random real symmetric tensor drawn from the Gaussian distribution relative to the Bombieri norm. This expected value is equal to the expected number of real critical points on the unit sphere of a Kostlan polynomial. We also derive an exact formula for the expected absolute value of the determinant of a matrix from the Gaussian Orthogonal Ensemble.


Max-Planck Institute for Mathematics in the Sciences, Leipzig, breiding@mis.mpg.de.
Partially supported by DFG research grant BU 1371/2-2.

## 1 Introduction

The title of this article is a homage to the article in which Edelman, Kostlan and Shub compute the expected number of real eigenvalues of a matrix filled with i.i.d. standard Gaussian random variables. This model of a random matrix extends to matrices of higher order, called tensors. In a previous article the author computed the expected number of real eigenvalues of a random tensor whose entries are i.i.d. standard Gaussian random variables. The content of this article is the computation of the expected number of real eigenvalues of a random symmetric tensor. Unlike symmetric matrices, symmetric tensors may have complex eigenvalues. But, as we will see, symmetric tensors have in a sense more real eigenvalues than non-symmetric tensors.

In this article, a tensor $A=(a_{i_1,\dots,i_p})$ is an array of numbers arranged in $p$ dimensions, where the $j$-th dimension is of length $n_j$. We call $p$ the order of $A$. We are interested in the case when $n_1,\dots,n_p$ are all equal to $n$. The space of real tensors of order $p$ is denoted by $(\mathbb{R}^n)^{\otimes p}$. For $p>2$ tensors are higher-dimensional analogues of matrices, which form the case $p=2$. A tensor is called symmetric, if for all permutations $\pi$ on $p$ elements we have $a_{i_1,\dots,i_p}=a_{i_{\pi(1)},\dots,i_{\pi(p)}}$. We denote the vector space of symmetric tensors by $S^p(\mathbb{R}^n)$. We say that a random real tensor $A\in(\mathbb{R}^n)^{\otimes p}$ is a Gaussian tensor, if it has density proportional to $e^{-\frac{\lVert A\rVert^2}{2}}$, where $\lVert A\rVert^2$ is the square of the Frobenius norm.

###### Definition 1.1.

A Gaussian symmetric tensor is a random tensor $A\in S^p(\mathbb{R}^n)$ given by the density proportional to $e^{-\frac{\lVert A\rVert_B^2}{2}}$, where $\lVert\cdot\rVert_B$ denotes the Bombieri norm (Draisma and Horobet call the distribution given by this density the Gaussian distribution with respect to the Bombieri norm).

The complex number $\lambda$ is called an eigenvalue of the tensor $A$, if there exists a vector $v\in\mathbb{C}^n$, such that the following equation holds:

$$Av^{p-1}:=\Bigl(\sum_{1\le i_2,\dots,i_p\le n}a_{j,i_2,\dots,i_p}\,v_{i_2}\cdots v_{i_p}\Bigr)_{1\le j\le n}=\lambda v,\quad\text{and}\quad v^Tv=1. \qquad (1.1)$$

The pair $(v,\lambda)$ is called an eigenpair in this case. For $p=2$, equation (1.1) is the defining equation of matrix eigenpairs. The condition $v^Tv=1$ serves for selecting a point from each class of eigenpairs: if $(v,\lambda)$ is an eigenpair, then also $(-v,(-1)^p\lambda)$ is an eigenpair. In particular, if $p$ is odd and if $\lambda$ is an eigenvalue of $A$, then also $-\lambda$ is an eigenvalue of $A$. To take into account this reflection property we make the following definition.

###### Definition 1.2.

If $p$ is odd we define the number of eigenvalues of $A$ to be the number of solutions of (1.1) divided by two. For even $p$ we define the number of eigenvalues of $A$ to be the number of solutions of (1.1).

For this definition Cartwright and Sturmfels show that the number of complex eigenpairs of a generic tensor is $D(n,p):=\frac{(p-1)^n-1}{p-2}$. In the following we use the notation

$$E(n,p):=\underset{A\in S^p(\mathbb{R}^n)\ \text{Gaussian symmetric}}{\mathbb{E}}\ \#\{\text{real eigenvalues of }A\}.$$
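As an aside (not part of the original article), the contraction $Av^{p-1}$ from (1.1) is straightforward to evaluate numerically. The following sketch assumes numpy; the helper name `apply_tensor` is our own.

```python
import numpy as np

def apply_tensor(A, v):
    """Contract an order-p tensor A with v in all but the first slot:
    (A v^{p-1})_j = sum over i_2..i_p of a_{j,i_2,...,i_p} v_{i_2} ... v_{i_p}."""
    T = A
    while T.ndim > 1:
        T = T @ v  # '@' contracts the last axis of T with v
    return T

rng = np.random.default_rng(0)

# For p = 2 the eigenpair equation A v^{p-1} = lambda*v is the usual
# matrix eigenvalue problem, so apply_tensor must agree with A @ v.
M = rng.standard_normal((3, 3))
v = rng.standard_normal(3)
assert np.allclose(apply_tensor(M, v), M @ v)

# Order p = 3: compare against the defining sum from (1.1).
A = rng.standard_normal((2, 2, 2))
w = np.array([1.0, 2.0])
expected = np.array([
    sum(A[j, i2, i3] * w[i2] * w[i3] for i2 in range(2) for i3 in range(2))
    for j in range(2)
])
assert np.allclose(apply_tensor(A, w), expected)
```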

In Theorem 1.4 below we give an exact formula for $E(n,p)$. This complements our result from the non-symmetric case, where we have given an exact formula for

$$E_{\text{non-sym}}(n,p):=\underset{A\in(\mathbb{R}^n)^{\otimes p}\ \text{Gaussian}}{\mathbb{E}}\ \#\{\text{real eigenvalues of }A\}.$$

This formula is given in terms of Gauss' hypergeometric function and the Gamma function (see (2.2) and (2.4) for their definitions):

$$E_{\text{non-sym}}(n,p)=\frac{2^{n-1}\sqrt{p-1}^{\,n}\,\Gamma\!\left(\frac{n-1}{2}\right)}{\sqrt{\pi}\;p^{\frac{n-1}{2}}\,\Gamma(n)}\left(2(n-1)\,F\!\left(1,\tfrac{n-1}{2},\tfrac{3}{2},\tfrac{p-2}{p}\right)+F\!\left(1,\tfrac{n-1}{2},\tfrac{n+1}{2},\tfrac{1}{p}\right)\right).$$

Furthermore, we have shown the following asymptotic formulas:

$$\lim_{n\to\infty}\frac{E_{\text{non-sym}}(n,p)}{\sqrt{D(n,p)}}=\lim_{p\to\infty}\frac{E_{\text{non-sym}}(n,p)}{\sqrt{D(n,p)}}=1.$$

Auffinger et al. provide in [2, Theorem 2.17] the following formula:

$$\frac{E(n,p)}{\sqrt{D(n,p)}}=\sqrt{\frac{8n}{\pi}}\,\bigl(1+o(1)\bigr)\quad\text{as }n\to\infty. \qquad (1.2)$$

Comparing (1.2) with the corresponding limit for $E_{\text{non-sym}}(n,p)$, it is fair to say that:

For fixed $p$ and large $n$, and on the average, a real symmetric tensor has more real eigenvalues than a real general tensor.

However, the point of view of Auffinger et al. is not eigenvalues of tensors, but critical points of the polynomial $Ax^p:=\sum_{1\le i_1,\dots,i_p\le n}a_{i_1,\dots,i_p}x_{i_1}\cdots x_{i_p}$ restricted to the unit sphere. If $A$ is Gaussian symmetric, $Ax^p$ is called a Kostlan polynomial. Indeed, every solution of (1.1) corresponds to a critical point of $Ax^p$ on the unit sphere and vice versa. This point of view is also taken by Fyodorov, Lerario and Lundberg, who argue that each connected component of the zero set of a polynomial contains at least one critical point. Consequently, $E(n,p)$ yields information about the average topology of the zero set of a Kostlan polynomial.

Using the formula from Theorem 1.4 we get the following asymptotic formulas for large $p$:

$$\lim_{p\to\infty}\frac{E(2m+1,p)}{\sqrt{D(2m+1,p)}}=\frac{\sqrt{3\pi}}{\prod_{i=1}^{2m+1}\Gamma(\frac{i}{2})}\sum_{1\le i,j\le m}\det(\Gamma_1^{i,j})\,g_{i-1,j},\quad\text{and} \qquad (1.3)$$

$$\lim_{p\to\infty}\frac{E(2m,p)}{\sqrt{D(2m,p)}}=\frac{\sqrt{3}}{\prod_{i=1}^{2m}\Gamma(\frac{i}{2})}\sum_{0\le i,j\le m-1}\det(\Gamma_2^{i,j})\,g_{i,j},$$

where $\Gamma_1^{i,j}$ and $\Gamma_2^{i,j}$ denote the following matrices depending on $i$ and $j$:

$$\Gamma_1^{i,j}:=\Bigl[\Gamma\bigl(\tfrac{r+s-1}{2}\bigr)\Bigr]_{\substack{1\le r\le m,\,r\ne i\\ 1\le s\le m,\,s\ne j}}\quad\text{and}\quad\Gamma_2^{i,j}=\Bigl[\Gamma\bigl(\tfrac{r+s+1}{2}\bigr)\Bigr]_{\substack{0\le r\le m-1,\,r\ne i\\ 0\le s\le m-1,\,s\ne j}}. \qquad (1.4)$$

and where

$$g_{i,j}=\begin{cases}\dfrac{\sqrt{\pi}\,(2i+1)!\,(-1)^i}{3\cdot 2^{2i}\,i!}\,F\!\left(-i,\tfrac12,\tfrac32,-\tfrac13\right)-\dfrac{\Gamma(i+\tfrac12)}{2}\left(-\tfrac34\right)^{i+1},&\text{if }j=0,\\[2ex]\dfrac{\Gamma(i+j+\tfrac12)}{(1-2i-2j)(1-2i+2j)}\left(-\tfrac34\right)^{i+j}F\!\left(-2i,-2j+1,\tfrac32-i-j,\tfrac34\right),&\text{if }j>0.\end{cases} \qquad (1.5)$$

Note that $g_{i,j}=\lim_{p\to\infty}g_{i,j}(p)$, where the $g_{i,j}(p)$ are the rational functions from Theorem 1.4.

At first glance the formulas in (1.3) don't provide much insight, and unfortunately we don't know how to simplify them any further, nor do we know how to compute the leading order as in (1.2). But we have the following interesting corollary:

###### Corollary 1.3.

For fixed $n$, the limit $\lim_{p\to\infty}\frac{E(n,p)}{\sqrt{D(n,p)}}$ contains no factor of $\sqrt{\pi}$; in particular, it is an algebraic number.

###### Proof.

For all odd $i$ we have $\Gamma(\frac{i}{2})=q\sqrt{\pi}$ for some $q\in\mathbb{Q}$; see, e.g., [14, 43:4:3]. In both formulas in (1.3) there are as many $\sqrt{\pi}$s in the numerator as there are in the denominator. ∎

Figure 1.1: The histogram shows the output of the following experiment: for $3\le n\le 7$ we sampled $10^4$ Gaussian symmetric tensors in $S^3(\mathbb{R}^n)$ and computed their numbers of real eigenvalues numerically. Theorem 1.4 predicts $E(3,3)\approx 5.28$, $E(4,3)\approx 9.4$, $E(5,3)\approx 15.75$, $E(6,3)\approx 25.44$ and $E(7,3)\approx 40.1$.

Here is our main theorem. We give a proof in Section 3.

###### Theorem 1.4 (An exact formula for E(n,p)).

We define the rational functions

$$g_{i,j}(p)=\begin{cases}\dfrac{\sqrt{\pi}\,(2i+1)!\,(-1)^i}{2^{2i}\,i!}\,\dfrac{(p-2)^i\,p}{(p-1)^i\,(3p-2)}\,F\!\left(-i,\tfrac12,\tfrac32,-\tfrac{p^2}{(3p-2)(p-2)}\right)-\dfrac{\Gamma(i+\tfrac12)}{2}\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+1},&\text{if }j=0,\\[2ex]\dfrac{\Gamma(i+j+\tfrac12)}{(1-2i-2j)(1-2i+2j)}\left(-\tfrac{3p-2}{4(p-1)}\right)^{i+j}F\!\left(-2i,-2j+1,\tfrac32-i-j,\tfrac{3p-2}{4(p-1)}\right),&\text{if }j>0.\end{cases}$$

Then, for all $m\ge 1$ we have

$$E(2m+1,p)=1+\frac{\sqrt{\pi}\,(p-1)^{m-1}\sqrt{(p-1)(3p-2)}}{\prod_{i=1}^{2m+1}\Gamma(\frac{i}{2})}\sum_{1\le i,j\le m}\det(\Gamma_1^{i,j})\,g_{i-1,j}(p),\quad\text{and}$$

$$E(2m,p)=\frac{(p-1)^{m-1}\sqrt{3p-2}}{\prod_{i=1}^{2m}\Gamma(\frac{i}{2})}\sum_{0\le i,j\le m-1}\det(\Gamma_2^{i,j})\,g_{i,j}(p).$$

Using essentially the same argument as for Corollary 1.3 we can prove the following.

###### Corollary 1.5.

We have and

### 1.1 Random matrix theory

Gaussian symmetric tensors of order $p=2$ are better known under another name: the Gaussian Orthogonal Ensemble. If $A$ is a matrix from this ensemble, we write $A\sim\mathrm{GOE}(n)$. For $u\in\mathbb{R}$, let us denote

$$I_n(u):=\underset{A\sim\mathrm{GOE}(n)}{\mathbb{E}}\bigl|\det(A-uI_n)\bigr|,\quad\text{and}\quad J_n(u):=\underset{A\sim\mathrm{GOE}(n)}{\mathbb{E}}\det(A-uI_n), \qquad (1.6)$$

where $I_n$ is the $n\times n$ identity matrix. The proof of Theorem 1.4 is based on the computation of $I_n(u)$. We remark that $I_n(u)\ge|J_n(u)|$ by the triangle inequality. A computation of $J_n(u)$ can be found in [12, Section 22], and the ideas in this paper are inspired by the computations in this reference. The following result, which is new to the best of our knowledge, shows that $I_n(u)$ can be expressed in terms of $J_n(u)$ and a collection of Hermite polynomials.
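As a plausibility check (not from the paper), $I_n(u)$ and $J_n(u)$ can be estimated by Monte Carlo. The sketch below assumes numpy and the common GOE normalization (diagonal entries $N(0,1)$, off-diagonal entries $N(0,\frac12)$); the paper's scaling may differ by a constant.

```python
import numpy as np

def goe_sample(n, rng):
    """Symmetric matrix with diagonal N(0,1) and off-diagonal N(0,1/2) entries
    (a common GOE normalization; the paper's convention may be scaled differently)."""
    G = rng.standard_normal((n, n))
    return (G + G.T) / 2.0

def estimate_I_J(n, u, samples=20000, seed=1):
    """Monte Carlo estimates of I_n(u) = E|det(A - u I)| and J_n(u) = E det(A - u I)."""
    rng = np.random.default_rng(seed)
    dets = np.array([np.linalg.det(goe_sample(n, rng) - u * np.eye(n))
                     for _ in range(samples)])
    return np.abs(dets).mean(), dets.mean()

I1, J1 = estimate_I_J(1, 0.0)
assert I1 >= abs(J1)                        # triangle inequality: I_n(u) >= |J_n(u)|
assert abs(I1 - np.sqrt(2 / np.pi)) < 0.03  # for n = 1, I_1(0) = E|g| = sqrt(2/pi)
```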

###### Theorem 1.6 (The expected absolute value of the determinant of a GOE matrix).

Let $m\ge 1$ be fixed. Define the functions $P_k$ via

$$P_k(x)=\begin{cases}-e^{\frac{x^2}{2}}\displaystyle\int_{-\infty}^{x}e^{-\frac{t^2}{2}}\,dt,&\text{if }k=-1,\\[1.5ex]He_k(x),&\text{if }k=0,1,2,\dots,\end{cases}$$

where $He_k$ is the $k$-th (probabilist's) Hermite polynomial; see (2.16). Then, we have

$$I_{2m}(u)=J_{2m}(u)+\frac{\sqrt{2\pi}\,e^{-\frac{u^2}{2}}}{\prod_{i=1}^{2m}\Gamma(\frac{i}{2})}\sum_{1\le i,j\le m}\det(\Gamma_1^{i,j})\,\det\begin{bmatrix}P_{2i-1}(u)&P_{2j}(u)\\P_{2i-2}(u)&P_{2j-1}(u)\end{bmatrix},\quad\text{and}$$

$$I_{2m-1}(u)=J_{2m-1}(u)+\frac{\sqrt{2}\,e^{-\frac{u^2}{2}}}{\prod_{i=1}^{2m-1}\Gamma(\frac{i}{2})}\sum_{0\le i,j\le m-1}\det(\Gamma_2^{i,j})\,\det\begin{bmatrix}P_{2j}(u)&P_{2i+1}(u)\\P_{2j-1}(u)&P_{2i}(u)\end{bmatrix}.$$

Here, $\Gamma_1^{i,j}$ and $\Gamma_2^{i,j}$ are the matrices from (1.4).

###### Remark.

The computation of $E(n,p)$ is based on the formula (3.1) by Draisma and Horobet, which involves the expectation of $I_n(u)$ for a Gaussian random variable $u$. In a recent article, together with Khazhgali Kozhasov and Antonio Lerario, we have computed the volume of the set of matrices with repeated eigenvalues, and this computation is based on Theorem 1.6.

### 1.2 Organization of the article

In the next section we give some preliminary material. Then, in Section 3 we prove Theorem 1.4. In Section 4 we compute several integrals that are used in the proof of Theorem 1.6, which we prove in Section 5.

## 2 Preliminaries

We first fix notation: in what follows $n$ is always a positive integer and $m:=\lfloor\frac{n}{2}\rfloor$; that is, $n=2m$, if $n$ is even, and $n=2m+1$, if $n$ is odd. The symbols $x$, $y$, $t$ and $u$ will denote variables or real numbers. By capital calligraphic letters we denote matrices. The symbols $P_k$ and $G_k$ are reserved for the functions defined in (2.16) and (2.17), and $M$ and $F$ denote the two hypergeometric functions defined in (2.1) and (2.2) below. The symbol $\langle\cdot,\cdot\rangle$ always denotes the inner product defined in (2.18).

### 2.1 Special functions

Throughout the article we use a collection of special functions. We present them in this subsection. The Pochhammer polynomials [14, 18:3:1] are defined by $(a)_k:=a(a+1)\cdots(a+k-1)$, where $k$ is a positive integer. If $k=0$, the definition is $(a)_0:=1$. Kummer's confluent hypergeometric function [14, Sec. 47] is defined as

$$M(a,c,x):=\sum_{k=0}^{\infty}\frac{(a)_k}{(c)_k}\,\frac{x^k}{k!}, \qquad (2.1)$$

and Gauss’ hypergeometric function [14, Sec. 60] is defined as

$$F(a,b,c,x):=\sum_{k=0}^{\infty}\frac{(a)_k(b)_k}{(c)_k}\,\frac{x^k}{k!}, \qquad (2.2)$$

where $c\ne 0,-1,-2,\dots$. Generally, neither $M$ nor $F$ converges for all $x$. But if either of the numeratorial parameters is a non-positive integer, both $M$ and $F$ reduce to polynomials and hence are defined for all $x$ (and this is the only case we will meet throughout the paper).
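Because only terminating series appear in this paper, $F$ can be evaluated as a plain polynomial. The following stdlib-only sketch (our own helper names, not from the article) makes this concrete.

```python
from math import factorial, isclose

def pochhammer(a, k):
    """(a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def hyp2f1_terminating(a, b, c, x):
    """Gauss' F(a,b,c,x) when a is a non-positive integer: the series terminates
    after -a + 1 terms, so the value is a polynomial in x (defined for all x)."""
    assert a <= 0 and a == int(a)
    return sum(pochhammer(a, k) * pochhammer(b, k) / pochhammer(c, k)
               * x**k / factorial(k) for k in range(int(-a) + 1))

# F(-1, b, c, x) = 1 - (b/c) x, for every x, since the series terminates.
assert isclose(hyp2f1_terminating(-1, 2.0, 3.0, 5.0), 1.0 - (2.0 / 3.0) * 5.0)
```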

###### Remark.

Other common notations are $M(a,c,x)={}_{1}F_{1}(a;c;x)$ and $F(a,b,c,x)={}_{2}F_{1}(a,b;c;x)$. This is due to the fact that both $M$ and $F$ are special cases of the generalized hypergeometric function ${}_{q}F_{r}$.

The following will be useful later.

###### Lemma 2.1.

Let $a,b$ be non-positive integers and $c\ne 0,-1,-2,\dots$. Then

$$F(a,b+1,c,x)-F(a+1,b,c,x)=\frac{(a-b)\,x}{c}\,F(a+1,b+1,c+1,x).$$
###### Proof.

Since $-a$ and $-b$ are non-negative integers, $F(a,b+1,c,x)$ and $F(a+1,b,c,x)$ are polynomials, whose constant term is equal to $1$. Therefore,

$$F(a,b+1,c,x)-F(a+1,b,c,x)=\sum_{k=1}^{\infty}\frac{(a)_k(b+1)_k-(a+1)_k(b)_k}{(c)_k}\,\frac{x^k}{k!}. \qquad (2.3)$$

We have $(a)_k(b+1)_k-(a+1)_k(b)_k=\bigl(a(b+k)-b(a+k)\bigr)(a+1)_{k-1}(b+1)_{k-1}=(a-b)\,k\,(a+1)_{k-1}(b+1)_{k-1}$. According to [14, 18:5:7] we have $(c)_k=c\,(c+1)_{k-1}$ and, moreover, $k!=k\,(k-1)!$. The claim follows when plugging this into (2.3). ∎

For $x>0$ the Gamma function [14, Sec. 43] is defined as

$$\Gamma(x):=\int_{0}^{\infty}t^{x-1}e^{-t}\,dt. \qquad (2.4)$$

The cumulative distribution function of the normal distribution [14, 40:14:2] and the error function [14, 40:3:2] are respectively defined as

$$\Phi(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{t^2}{2}}\,dt,\quad\text{and}\quad\operatorname{erf}(x):=\frac{2}{\sqrt{\pi}}\int_{0}^{x}\exp(-t^2)\,dt. \qquad (2.5)$$

The error function and $\Phi$ are related by the following equation [14, 40:14:2]:

$$2\,\Phi(x)=1+\operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right). \qquad (2.6)$$
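The relation (2.6) is easy to verify numerically. The following sketch (not part of the article) approximates $\Phi$ by a midpoint rule and compares against the stdlib `math.erf`.

```python
from math import erf, exp, pi, sqrt

def Phi(x, steps=200000, lo=-12.0):
    """Standard normal CDF: (2*pi)^{-1/2} * integral of exp(-t^2/2) over (-inf, x],
    approximated by a midpoint rule on [lo, x] (the tail below lo is negligible)."""
    h = (x - lo) / steps
    s = sum(exp(-0.5 * (lo + (i + 0.5) * h) ** 2) for i in range(steps))
    return s * h / sqrt(2.0 * pi)

# Check 2*Phi(x) = 1 + erf(x / sqrt(2)) at a few points.
for x in (-1.5, 0.0, 0.7, 2.0):
    assert abs(2.0 * Phi(x) - (1.0 + erf(x / sqrt(2.0)))) < 1e-6
```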

The error function and Kummer's confluent hypergeometric function are related by

$$\operatorname{erf}(x)=\frac{2x}{\sqrt{\pi}}\,M\!\left(\tfrac12,\tfrac32,-x^2\right); \qquad (2.7)$$

see [1, 13.6.19].

### 2.2 Hermite polynomials

Hermite polynomials are a family of polynomials that are defined as

$$H_n(x):=(-1)^n\,e^{x^2}\,\frac{d^n}{dx^n}\,e^{-x^2}; \qquad (2.8)$$

see [14, 24:3:2]. An alternative Hermite function is defined by

$$He_n(x):=(-1)^n\,e^{\frac{x^2}{2}}\,\frac{d^n}{dx^n}\,e^{-\frac{x^2}{2}}. \qquad (2.9)$$

The two definitions are related by the following equality [14, 24:1:1]

$$He_n(x)=\frac{1}{\sqrt{2^n}}\,H_n\!\left(\frac{x}{\sqrt{2}}\right). \qquad (2.10)$$

By [14, 24:5:1] we have that

$$H_k(-z)=(-1)^k\,H_k(z)\quad\text{and}\quad He_k(-z)=(-1)^k\,He_k(z). \qquad (2.11)$$
###### Remark.

In the literature, the polynomials are sometimes called the physicists’ Hermite polynomials and the are sometimes called the probabilists’ Hermite polynomials. We will refer to both simply as Hermite polynomials and distinguish them by using the respective symbols.

Hermite polynomials can be expressed in terms of Kummer's confluent hypergeometric function from (2.1):

$$H_{2k+1}(x)=(-1)^k\,\frac{(2k+1)!\;2x}{k!}\,M\!\left(-k,\tfrac32,x^2\right),\quad\text{and} \qquad (2.12)$$

$$H_{2k}(x)=(-1)^k\,\frac{(2k)!}{k!}\,M\!\left(-k,\tfrac12,x^2\right); \qquad (2.13)$$

see [1, 13.6.17 and 13.6.18].
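The standard three-term recurrences $H_{k+1}(x)=2xH_k(x)-2kH_{k-1}(x)$ and $He_{k+1}(x)=xHe_k(x)-kHe_{k-1}(x)$ give a quick way to check the relations (2.10) and (2.11) numerically. A sketch of ours, not from the paper:

```python
from math import isclose, sqrt

def hermite_H(n, x):
    """Physicists' Hermite polynomial via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hermite_He(n, x):
    """Probabilists' Hermite polynomial via He_{k+1} = x He_k - k He_{k-1}."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

x = 1.3
for n in range(8):
    # relation (2.10): He_n(x) = 2^{-n/2} H_n(x / sqrt(2))
    assert isclose(hermite_He(n, x), hermite_H(n, x / sqrt(2.0)) / 2.0 ** (n / 2.0))
    # parity relation (2.11)
    assert isclose(hermite_He(n, -x), (-1.0) ** n * hermite_He(n, x))
```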

### 2.3 Orthogonality relations of Hermite polynomials

The Hermite polynomials satisfy the following orthogonality relations. By [11, 7.374.2] we have

$$\int_{\mathbb{R}}He_m(x)\,He_n(x)\,e^{-x^2}\,dx=\begin{cases}(-1)^{\lfloor\frac{m}{2}\rfloor+\lfloor\frac{n}{2}\rfloor}\,\Gamma\!\left(\frac{m+n+1}{2}\right),&\text{if }m+n\text{ is even},\\[1ex]0,&\text{if }m+n\text{ is odd},\end{cases} \qquad (2.14)$$

where $\Gamma$ is the Gamma function from (2.4). More generally, by [3, p. 289, eq. (12)], if $m+n$ is even, we have for $\alpha>0$ with $2\alpha^2\ne 1$ that

$$\int_{-\infty}^{\infty}He_m(x)\,He_n(x)\,e^{-\alpha^2x^2}\,dx=\frac{(1-2\alpha^2)^{\frac{m+n}{2}}\,\Gamma\!\left(\frac{m+n+1}{2}\right)}{\alpha^{m+n+1}}\,F\!\left(-m,-n;\frac{1-m-n}{2};\frac{\alpha^2}{2\alpha^2-1}\right). \qquad (2.15)$$

Here $F$ is Gauss' hypergeometric function as defined in (2.2). Recall from (2.5) the definition of $\Phi$. In the following we abbreviate

$$P_k(x):=\begin{cases}He_k(x),&\text{if }k=0,1,2,\dots,\\[1ex]-\sqrt{2\pi}\,e^{\frac{x^2}{2}}\,\Phi(x),&\text{if }k=-1,\end{cases} \qquad (2.16)$$

and put

$$G_k(x):=\int_{-\infty}^{x}P_k(y)\,e^{-\frac{y^2}{2}}\,dy,\qquad k=0,1,2,\dots \qquad (2.17)$$

We can express the functions $G_k$ in terms of the $P_k$.

###### Lemma 2.2.

We have

1. For all $k\ge 1$: $G_k(x)=-P_{k-1}(x)\,e^{-\frac{x^2}{2}}$.

2. For all $k\ge 1$: $\lim_{x\to\pm\infty}G_k(x)=0$.

###### Proof.

Note that (2) is a direct consequence of (1). For (1) let $k\ge 1$ and write

$$G_k(x)=\int_{-\infty}^{x}P_k(y)\,e^{-\frac{y^2}{2}}\,dy\;\overset{(2.16)}{=}\;\int_{-\infty}^{x}He_k(y)\,e^{-\frac{y^2}{2}}\,dy\;\overset{(2.9)}{=}\;\int_{-\infty}^{x}(-1)^k\,\frac{d^k}{dy^k}\,e^{-\frac{y^2}{2}}\,dy.$$

Thus $G_k(x)=(-1)^k\,\frac{d^{k-1}}{dx^{k-1}}\,e^{-\frac{x^2}{2}}=-He_{k-1}(x)\,e^{-\frac{x^2}{2}}=-P_{k-1}(x)\,e^{-\frac{x^2}{2}}$, as desired. ∎
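The identity just proved, $G_k(x)=-He_{k-1}(x)\,e^{-x^2/2}$ for $k\ge 1$, can be sanity-checked numerically by computing $G_k$ with a quadrature rule. A stdlib-only sketch (ours, not from the article):

```python
from math import exp

def hermite_He(n, x):
    """Probabilists' Hermite polynomial via the three-term recurrence."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def G(k, x, steps=60000, lo=-12.0):
    """G_k(x): midpoint approximation of the integral of He_k(y) e^{-y^2/2}
    over (-infty, x], truncated at lo (for k >= 1)."""
    h = (x - lo) / steps
    ys = (lo + (i + 0.5) * h for i in range(steps))
    return h * sum(hermite_He(k, y) * exp(-0.5 * y * y) for y in ys)

for k in (1, 2, 3):
    for x in (-1.0, 0.5, 2.0):
        closed_form = -hermite_He(k - 1, x) * exp(-0.5 * x * x)
        assert abs(G(k, x) - closed_form) < 1e-5
```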

We now fix the following notation: if two functions $f$ and $g$ satisfy $\int_{\mathbb{R}}f(x)^2\,e^{-\frac{x^2}{2}}\,dx<\infty$ and $\int_{\mathbb{R}}g(x)^2\,e^{-\frac{x^2}{2}}\,dx<\infty$, we define

$$\langle f(x),g(x)\rangle:=\int_{\mathbb{R}}f(x)\,g(x)\,e^{-\frac{x^2}{2}}\,dx. \qquad (2.18)$$

The Cauchy–Schwarz inequality implies $|\langle f(x),g(x)\rangle|<\infty$. The functions $P_k$ and $G_k$ satisfy the following orthogonality relations.

###### Lemma 2.3.

For all $k,\ell\ge 0$ with $(k,\ell)\ne(0,0)$ we have

1. $\langle G_k(x),P_\ell(x)\rangle=-\langle G_\ell(x),P_k(x)\rangle$.

2. $\langle G_k(x),P_\ell(x)\rangle=(-1)^{\lfloor\frac{k-1}{2}\rfloor+\lfloor\frac{\ell}{2}\rfloor+1}\,\Gamma\bigl(\frac{k+\ell}{2}\bigr)$, if $k+\ell$ is odd, and $\langle G_k(x),P_\ell(x)\rangle=0$, if $k+\ell$ is even.

###### Proof.

For (1) we have

$$\begin{aligned}\langle G_k(x),P_\ell(x)\rangle&=\int_{\mathbb{R}}\left(\int_{-\infty}^{x}P_k(y)\,e^{-\frac{y^2}{2}}\,dy\right)P_\ell(x)\,e^{-\frac{x^2}{2}}\,dx \qquad (2.19)\\&=\int_{\mathbb{R}}\left(\int_{y}^{\infty}P_\ell(x)\,e^{-\frac{x^2}{2}}\,dx\right)P_k(y)\,e^{-\frac{y^2}{2}}\,dy\\&=(-1)^{\ell}\int_{\mathbb{R}}\left(\int_{-\infty}^{-y}P_\ell(x)\,e^{-\frac{x^2}{2}}\,dx\right)P_k(y)\,e^{-\frac{y^2}{2}}\,dy\\&=(-1)^{k+\ell}\int_{\mathbb{R}}\left(\int_{-\infty}^{y}P_\ell(x)\,e^{-\frac{x^2}{2}}\,dx\right)P_k(y)\,e^{-\frac{y^2}{2}}\,dy\\&=(-1)^{k+\ell}\,\langle G_\ell(x),P_k(x)\rangle,\end{aligned}$$

where the second equality is due to Fubini's theorem, the third equality is due to the transformation $x\mapsto -x$ and equation (2.11), and the fourth equality is obtained using the transformation $y\mapsto -y$. This shows (1) for the case $k+\ell$ odd. The case $k+\ell$ even is implied by (2), which we prove next.

Since $k$ and $\ell$ are not both zero, by (2.19) we may assume that $k\ge 1$. In this case, by Lemma 2.2, we have $G_k(x)=-P_{k-1}(x)\,e^{-\frac{x^2}{2}}$, so that

$$\langle G_k(x),P_\ell(x)\rangle=-\int_{\mathbb{R}}P_{k-1}(x)\,P_\ell(x)\,e^{-x^2}\,dx=-\int_{\mathbb{R}}He_{k-1}(x)\,He_\ell(x)\,e^{-x^2}\,dx.$$

Combining this equation with (2.14), we have

$$\langle G_k(x),P_\ell(x)\rangle=\begin{cases}(-1)^{\lfloor\frac{k-1}{2}\rfloor+\lfloor\frac{\ell}{2}\rfloor+1}\,\Gamma\!\left(\frac{k+\ell}{2}\right),&\text{if }k+\ell-1\text{ is even},\\[1ex]0,&\text{if }k+\ell-1\text{ is odd}.\end{cases}$$

In particular, $\langle G_k(x),P_\ell(x)\rangle=0$ for $k+\ell$ even, which finishes the proof of the first part of this lemma. The second part follows for $k\ge 1$ from the display above. The case $k=0$ and $\ell\ge 1$ is a consequence of the case $k\ge 1$ and $\ell=0$ together with the first part of the lemma (we can't prove this last case simply by plugging in, because $k=0$ violates the assumption $k\ge 1$ of Lemma 2.2). This finishes the proof. ∎
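The Hermite orthogonality relation (2.14), which drives the proof above, is easy to sanity-check numerically. The following stdlib-only sketch (ours, not from the paper) compares a midpoint quadrature against the closed form.

```python
from math import exp, gamma

def hermite_He(n, x):
    """Probabilists' Hermite polynomial via the three-term recurrence."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def weighted_inner(m, n, steps=80000, lim=10.0):
    """Midpoint approximation of the integral of He_m He_n e^{-x^2} over R."""
    h = 2.0 * lim / steps
    return h * sum(hermite_He(m, x) * hermite_He(n, x) * exp(-x * x)
                   for x in (-lim + (i + 0.5) * h for i in range(steps)))

# Compare with (2.14) for a few pairs with m + n even ...
for m, n in [(0, 2), (1, 1), (1, 3), (2, 2)]:
    predicted = (-1.0) ** (m // 2 + n // 2) * gamma((m + n + 1) / 2.0)
    assert abs(weighted_inner(m, n) - predicted) < 1e-5
# ... and a pair with m + n odd, where the integral vanishes.
assert abs(weighted_inner(1, 2)) < 1e-9
```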

### 2.4 The expected value of Hermite polynomials

In this section we will compute the expected value of the Hermite polynomials when the argument follows a normal distribution.

###### Lemma 2.4.

For $k\ge 0$ we have $\displaystyle\underset{u\sim N(0,\sigma^2)}{\mathbb{E}}H_{2k}(u)=\frac{(2k)!\,(2\sigma^2-1)^k}{k!}$.

###### Proof.

Write

$$\underset{u\sim N(0,\sigma^2)}{\mathbb{E}}H_{2k}(u)\;\overset{\text{by definition}}{=}\;\frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty}H_{2k}(u)\,e^{-\frac{u^2}{2\sigma^2}}\,du=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}H_{2k}\bigl(\sqrt{2\sigma^2}\,w\bigr)\,e^{-w^2}\,dw,$$

where the second equality is due to the change of variables $u=\sqrt{2\sigma^2}\,w$. Applying [11, 7.373.2] we get

$$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}H_{2k}\bigl(\sqrt{2\sigma^2}\,w\bigr)\,e^{-w^2}\,dw=\frac{(2k)!\,(2\sigma^2-1)^k}{k!}.$$

This finishes the proof. ∎
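Lemma 2.4 can also be checked numerically (a sketch of ours, not from the paper): approximate the Gaussian expectation by a midpoint rule and compare with the closed form.

```python
from math import exp, factorial, pi, sqrt

def hermite_H(n, x):
    """Physicists' Hermite polynomial via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def expected_H2k(k, sigma, steps=80000, lim=14.0):
    """E H_{2k}(u) for u ~ N(0, sigma^2), by midpoint quadrature on [-lim, lim]."""
    h = 2.0 * lim / steps
    z = sqrt(2.0 * pi) * sigma
    us = (-lim + (i + 0.5) * h for i in range(steps))
    return h * sum(hermite_H(2 * k, u) * exp(-u * u / (2.0 * sigma * sigma)) / z
                   for u in us)

for k in (0, 1, 2):
    for sigma in (0.5, 1.0, 1.3):
        predicted = factorial(2 * k) * (2.0 * sigma * sigma - 1.0) ** k / factorial(k)
        assert abs(expected_H2k(k, sigma) - predicted) < 1e-4
```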

###### Lemma 2.5.

Let $\sigma>0$ and recall from (2.16) the definition of $P_k$, $k\ge -1$.

1. If $k,\ell\ge 0$ and $k+\ell$ is even, we have

$$\underset{u\sim N(0,\sigma^2)}{\mathbb{E}}\,P_k(u)\,P_\ell(u)\,e^{-\frac{u^2}{2}}=\frac{(-1)^{\frac{k+\ell}{2}}\,\sqrt{2}^{\,k+\ell}\,\Gamma\!\left(\frac{k+\ell+1}{2}\right)}{\sqrt{\pi}\,\sqrt{\sigma^2+1}^{\,k+\ell+1}}\,F\!\left(-k,-\ell;\frac{1-k-\ell}{2};\frac{\sigma^2+1}{2}\right).$$
2. For all $k\ge 0$ we have

 Eu∼N(0,σ2)P−1(u)P2k+1(u)e−u22=(−1