
# Low-degree factors of random polynomials

Sean O’Rourke Department of Mathematics, University of Colorado at Boulder, Boulder, CO 80309  and  Philip Matchett Wood Department of Mathematics, University of Wisconsin-Madison, 480 Lincoln Dr., Madison, WI 53706
###### Abstract.

Inspired by the question of whether a random polynomial with integer coefficients is likely to be irreducible, we study the probability that a monic polynomial with integer coefficients has a low-degree factor over the integers, which is equivalent to having a low-degree algebraic root. It is known in certain cases that random polynomials with integer coefficients are very likely to be irreducible, and our project can be viewed as part of the general program of testing whether this is a universal behavior exhibited by many random polynomial models.

Our main result shows that pointwise delocalization of the roots of a random polynomial can be used to imply that the polynomial is unlikely to have a low-degree factor over the integers. We apply our main result to a number of models of random polynomials, including characteristic polynomials of random matrices, where strong delocalization results are known. Studying a variety of random matrix models—including iid matrices, symmetric matrices, elliptical matrices, and adjacency matrices of random graphs and digraphs—we show that, for a random square matrix with integer entries, the characteristic polynomial is unlikely to have a low-degree factor over the integers, which is equivalent to the matrix having an eigenvalue that is algebraic with low degree. Having a low-degree algebraic eigenvalue generalizes the questions of whether the matrix has a rational eigenvalue and whether the matrix is singular (i.e., has an eigenvalue equal to zero).

The second author was partially supported by National Security Agency (NSA) Young Investigator Grant number H98230-14-1-0149.

## 1. Introduction

Consider the following question: is it true that a random monic polynomial with integer coefficients is irreducible with high probability? For example, a version of Hilbert's Irreducibility Theorem (see [52] for a modern formulation) states that if f is a monic polynomial in one variable of fixed degree n, where all coefficients except the leading coefficient are chosen independently and uniformly at random from among all integers in the interval [−N, N], then the probability that f is irreducible approaches 1 in the limit as N → ∞. This was first proved by van der Waerden in 1934 [49]; in fact, the probability that f is reducible is of order 1/N, which was proven by van der Waerden two years later [50]. (The existence and value of the limiting constant was determined by Chela [7] in 1963 in terms of the Riemann zeta function.) Note that the probability that the constant coefficient of f equals zero is of order 1/N; thus the probability of this elementary factorization matches the order of the probability that f is reducible. Van der Waerden [49, 50] also showed that, with probability tending to 1 as N → ∞, the Galois group of the random polynomial is the full symmetric group S_n on n elements (which implies irreducibility). Estimates for the exact order of the probability that the Galois group is not S_n have been improved since van der Waerden, first in 1955 and 1956 by Knobloch [27, 28], then in 1973 by Gallagher [18], who applied the large sieve, followed by more recent progress in 2010 by Zywina [52], in 2013 by Dietmann [13], and in 2015 by Rivin [41] (see also [8, 9, 51] and references therein).

How the random polynomial is generated matters, and there is a general heuristic that if the random integer coefficients are generated so that "elementary" factorizations are avoided (for example, one ensures that the constant coefficient is not likely to be zero, in which case z would be a factor of the polynomial f(z)), then the polynomial is very likely to be irreducible. One can think of this heuristic as suggesting a kind of universality, and in some specific instances, it has been conjectured that the behavior in Hilbert's Irreducibility Theorem extends to different settings, including when the degree is growing. For example, one can define a random polynomial f of degree n where the constant coefficient and the leading coefficient are equal to 1, and all other coefficients are 0 or 1 independently with probability 1/2. In the limit as the degree n goes to infinity (in contrast to the degree being fixed in Hilbert's Irreducibility Theorem and the results discussed above), it has been conjectured that once again, the probability that f is irreducible approaches one as n → ∞ (see [26, 35]; see Figure 1 for numerical evidence that the probability that f is reducible goes to zero as n → ∞).
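A quick way to reproduce this kind of numerical evidence is to factor samples of f over the rationals in a computer algebra system. The sketch below (our own illustration, assuming SymPy is available; the function names are ours) estimates the probability that the zero-one polynomial above is reducible:

```python
import random
import sympy as sp

z = sp.symbols('z')

def random_zero_one_poly(n, rng):
    # f(z) = z^n + a_{n-1} z^{n-1} + ... + a_1 z + 1, with the middle
    # coefficients 0 or 1 independently with probability 1/2.
    coeffs = [1] + [rng.randint(0, 1) for _ in range(n - 1)] + [1]
    return sp.Poly(coeffs, z)  # coefficients listed leading-first

def is_reducible(f):
    # A monic integer polynomial is reducible over the integers iff its
    # factorization over the rationals has more than one irreducible
    # factor, counted with multiplicity.
    _, factors = f.factor_list()
    return not (len(factors) == 1 and factors[0][1] == 1)

def reducible_fraction(n, trials, seed=0):
    # Monte Carlo estimate of the probability that f is reducible.
    rng = random.Random(seed)
    return sum(is_reducible(random_zero_one_poly(n, rng))
               for _ in range(trials)) / trials
```

For instance, `reducible_fraction(20, 200)` gives a rough Monte Carlo estimate; consistent with the conjecture, such estimates tend to shrink as n grows.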

The question of proving irreducibility in the case where the degree of the random polynomial tends to infinity and the support of the coefficients remains bounded (or bounded by a function of the degree) seems to be quite challenging. For example, in the specific case of the polynomials f described above, the current best result (due to Konyagin [26]) shows that the probability of irreducibility is bounded below by c/log n, where c is a positive constant (see Example 1.6), and as far as the authors know, there is not a result showing that the probability that f is irreducible remains bounded away from zero as the degree increases, even though this probability is conjectured to approach 1. One key step in Konyagin's result [26] is showing that f is unlikely to have a low-degree factor over the integers, which is a step towards proving irreducibility; note that showing that there is no factor over the integers of degree up to n/2 would prove irreducibility for a degree-n polynomial.

In the current note, we show that the phenomenon of random polynomials having no factors over the integers with small degree is quite general, and in fact can be implied by pointwise delocalization of the roots of the random polynomial. Generally speaking, we show that, for a random monic polynomial f with integer coefficients, if sup_z P(f(z) = 0) is sufficiently small, then the probability of a low-degree factor over the integers is also small. We refer to the quantity sup_z P(f(z) = 0) being small as pointwise delocalization. In particular, pointwise delocalization rules out the possibility that f has a deterministic (or near deterministic) root. More generally, pointwise delocalization can be viewed as measuring the probability that f has some "elementary" factorization. For instance, P(f(0) = 0) is the probability that z is a factor of f(z).

While the results in many of the previously cited papers apply to random polynomials with independent coefficients, our main result also provides useful bounds in the case of random polynomials with correlated and highly dependent coefficients. In fact, our main result does not require any knowledge of the coefficients; instead, we require pointwise delocalization, which is a statement about the roots. Thus, our results can be viewed as a way to study low-degree factors when the randomness is more easily described using the roots rather than the coefficients. This is exactly the situation, for example, that arises when studying the characteristic polynomial of a random matrix: the coefficients are typically dependent and correlated, but often more is known about the roots, which are the eigenvalues of the matrix.

When f is the characteristic polynomial of a square random matrix, we can often show that the pointwise delocalization condition holds by using sufficiently general results which bound the probability that the matrix is singular or has a very small singular value. In Section 2, we consider various models of random polynomials and random matrices for which good pointwise delocalization results are known. For example, we show that for any ε > 0 and for an n by n random matrix with each entry +1 or −1 independently with probability 1/2, the characteristic polynomial factors over the integers with a factor of degree at most n^{1/2−ε} with probability at most (1/√2 + o(1))^n (see Theorem 2.3).

We begin by fixing some terminology and notation. If F is a field, a polynomial with coefficients in F is irreducible over F if the polynomial is nonconstant and cannot be factored into the product of two nonconstant polynomials with coefficients in F. More generally, a polynomial with coefficients in a unique factorization domain R (for example, the integers) is said to be irreducible over R if it is an irreducible element of the polynomial ring R[z], meaning that the polynomial is nonzero, is not invertible, and cannot be written as the product of two non-invertible polynomials with coefficients in R. Irreducibility of a polynomial over a ring generalizes the definition given for the case of coefficients in a field because, in the field case, the nonconstant polynomials are exactly the polynomials that are non-invertible and nonzero. We say f is reducible over R if f is not irreducible over R.

As a simple example, consider the polynomials z² − 2 and z² + 1. In this case, z² − 2 is irreducible over the integers but reducible over the real numbers, where it factors as (z − √2)(z + √2). On the other hand, z² + 1 is irreducible over both the integers and the real numbers, but is reducible over the field of complex numbers. We will focus on polynomials over the integers and rationals.

Recall that an algebraic number is a possibly complex number that is a root of a finite-degree, nonzero polynomial in one variable with rational coefficients (or equivalently, by clearing the denominators, with integer coefficients). Given an algebraic number α, there is a unique monic polynomial with rational coefficients of least degree that has α as a root. This polynomial is called the minimal polynomial for α, and if α is a root of a polynomial g with rational coefficients, then the minimal polynomial for α divides g over the rationals. If the minimal polynomial has degree k, then the algebraic number α is said to be of degree k. For instance, an algebraic number of degree one is a rational number. An algebraic integer is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). The question of whether a monic polynomial f with integer coefficients has an irreducible degree-k factor when factored over the rationals is thus equivalent to whether f has a root that is an algebraic number of degree k; in fact, by Gauss's Lemma (see for instance [14]), f being monic implies that such a root is an algebraic integer.
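These notions are easy to experiment with in a computer algebra system. The following SymPy snippet (an illustration on our part, not from the paper) computes minimal polynomials and hence algebraic degrees:

```python
import sympy as sp

x = sp.symbols('x')

# sqrt(2) is an algebraic integer of degree two: its minimal
# polynomial is the monic integer polynomial x**2 - 2.
p = sp.minimal_polynomial(sp.sqrt(2), x)

# A rational number is algebraic of degree one.
q = sp.minimal_polynomial(sp.Rational(3, 4), x)

# The golden ratio (1 + sqrt(5))/2 is a root of the monic integer
# polynomial x**2 - x - 1, so it too is an algebraic integer.
r = sp.minimal_polynomial((1 + sp.sqrt(5)) / 2, x)
```

Note that `q` is a degree-one polynomial with integer coefficients but is not monic, reflecting the fact that 3/4 is an algebraic number of degree one but not an algebraic integer.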

Let f be a polynomial of degree n over the complex numbers. We let λ_1(f), …, λ_n(f) denote the zeros (counted with multiplicity) of f, and we define

 Λ(f) := {λ_1(f), …, λ_n(f)} (1.1)

to be the set of zeros of f.

### 1.1. Models of random monic polynomials with integer coefficients

As mentioned above, there are many ensembles of random polynomials. We begin with the most general ensemble of random monic polynomials with integer coefficients.

###### Definition 1.1 (Random monic polynomial).

We say f(z) = z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0 is a degree-n random monic polynomial with integer coefficients if a_0, a_1, …, a_{n−1} are integer-valued random variables (not necessarily independent).

We emphasize that the integer-valued random variables a_0, a_1, …, a_{n−1} are not assumed to be independent or identically distributed. There are many examples of such random polynomials.

###### Example 1.2 (Random polynomial with independent Rademacher coefficients).

Let a_0, a_1, …, a_{n−1} be independent Rademacher random variables, which take the values +1 or −1 with equal probability. Then f(z) = z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0 is a random monic polynomial with integer coefficients. More generally, one can consider the case when a_0, a_1, …, a_{n−1} are independent and identically distributed (iid) copies of an integer-valued random variable (not necessarily Rademacher); see Example 1.3 below for one such example.

###### Example 1.3 (Random polynomial with independent uniform coefficients).

Let N ≥ 1 be a given parameter. Let a_0, a_1, …, a_{n−1} be independent and identically distributed (iid) random variables uniformly distributed on the discrete set {0, 1, …, N}. Then f(z) = z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0 is a random monic polynomial with integer coefficients.

###### Example 1.4 (Characteristic polynomial of random matrices).

Let ξ be an integer-valued random variable, and let M_n be an n × n random matrix whose entries are iid copies of ξ. Then the characteristic polynomial f(z) := det(zI_n − M_n) is a random monic polynomial with integer coefficients. Here, I_n denotes the n × n identity matrix.

###### Example 1.5 (Random polynomial with independent roots).

Let λ_1, …, λ_n be iid copies of an integer-valued random variable. Then

 f(z) := ∏_{j=1}^n (z − λ_j)

is a random monic polynomial with integer coefficients.

Let us now consider the question of irreducibility. On the one hand, it is clear that the random polynomial in Example 1.5 is not irreducible over the integers for n ≥ 2 (since the polynomial is already written as a product of irreducible factors). On the other hand, for fixed degree n and N → ∞, the polynomial in Example 1.3 is irreducible over the rationals with probability tending to 1. Indeed, this is implied by Hilbert's Irreducibility Theorem as discussed above (see also [10, Section 4.3] using sieve methods; for the related, non-monic case see [29]). This implies that the answer to the question of irreducibility will depend on the particular random polynomial model under consideration.

###### Example 1.6 (Zero-one polynomials).

Consider again Example 1.3 when N = 1. In this case, f is a random monic polynomial whose coefficients are iid Bernoulli random variables, which take the values 0 or 1 with equal probability. Since a_0 = 0 with probability 1/2, it follows that zero is a root of f with probability 1/2. However, if we condition on the event that a_0 = 1 (which eliminates the possibility that zero is a root of f), it has been conjectured that, as n → ∞, f is irreducible with probability tending to one (see [35]). For further details about such polynomials, we refer the reader to [26, 35] and references therein. In particular, it is shown by Konyagin [26] that, conditioned on the event that a_0 = 1, the probability that f is irreducible is at least c/log n for some absolute constant c > 0.

###### Example 1.7 (Random permutation matrices).

Let π be a random permutation on {1, …, n}, uniformly sampled from all n! permutations. Let P_π denote the corresponding permutation matrix, i.e., the (i, j)-entry of P_π is one if π(j) = i and zero otherwise. Clearly, P_π is an orthogonal matrix. The permutation π may be written as a product of ℓ disjoint cycles with lengths c_1, …, c_ℓ. Let f_π denote the characteristic polynomial of P_π. Then, as can be seen by reordering the rows and columns of P_π so that it is block diagonal, we have

 f_π(z) := det(zI − P_π) = ∏_{j=1}^ℓ (z^{c_j} − 1),

where I is the identity matrix. Clearly 1 is always a root of f_π, making z − 1 a factor and f_π reducible. In addition, f_π will have other (possibly repeated) factors as well if some cycle length c_j is composite or if the number of cycles ℓ is at least 2. One way to measure randomness in the roots of a random polynomial is testing whether the polynomial has any double roots. For example, Tao and Vu [46] have shown that the spectrum of a random real symmetric n by n matrix with independent entries contains no double roots with probability tending to 1 as n increases (see also [16, 39] for a related question on another class of random polynomials). For contrast, in the case of the characteristic polynomial of a random permutation matrix, the probability that the spectrum contains no double roots is the same as the probability of the permutation having only one cycle, which occurs with probability 1/n and tends to zero, rather than 1.
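The displayed factorization is easy to verify symbolically. The sketch below (our own illustration, using SymPy) builds the permutation matrix of a permutation with cycle lengths 3 and 2 and checks that its characteristic polynomial equals the product of the factors z^{c_j} − 1:

```python
import sympy as sp

z = sp.symbols('z')

def cycle_lengths(perm):
    # Lengths of the disjoint cycles of a permutation given as a tuple,
    # where perm[j] is the (zero-indexed) image of j.
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return lengths

# The permutation (0 1 2)(3 4) on five elements: cycles of length 3 and 2.
perm = (1, 2, 0, 4, 3)
# (i, j)-entry of the permutation matrix is 1 exactly when perm maps j to i.
P = sp.Matrix(5, 5, lambda i, j: 1 if perm[j] == i else 0)
charpoly = P.charpoly(z).as_expr()  # det(zI - P)

product = sp.Integer(1)
for c in cycle_lengths(perm):
    product *= z**c - 1
```

Here `charpoly` and `product` agree as polynomials, and the repeated factor z − 1 (one copy per cycle) exhibits the double root at 1 discussed above.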

###### Example 1.8 (Erdős–Rényi random graphs).

Let G(n, p) be the Erdős–Rényi random graph on n vertices with edge density p. That is, G(n, p) is a simple graph on n vertices (which we shall label as 1, …, n) such that each edge {i, j} is in G(n, p) with probability p, independent of other edges. In the special case when p = 1/2, one can view G(n, 1/2) as a random graph selected uniformly among all simple graphs on n vertices. The random graph can be defined by its adjacency matrix A_n, which is a real symmetric matrix with (i, j)-entry equal to 1 if there is an edge between vertices i and j, and the entry equal to zero otherwise. It is widely believed (and numerical evidence suggests) that the characteristic polynomial of A_n is irreducible with probability tending to one as n → ∞. We discuss this example more in Section 2.7 and Section 3.

We have chosen to focus on monic polynomials, but the question of irreducibility can also be asked for non-monic random polynomials with integer coefficients (or equivalently, by dividing by the leading coefficient, for random monic polynomials with rational coefficients). For fixed degree polynomials with independent coefficients, this question was addressed by Kuba [29]. When the degree tends to infinity, we again expect the answer to depend on the random polynomial model; the following example shows that certain models of random polynomials will be reducible over the rationals with high probability.

###### Example 1.9 (Derivatives of random polynomials with iid roots).

Let λ_1, …, λ_n be iid copies of an integer-valued random variable λ, and let

 f(z) := ∏_{j=1}^n (z − λ_j).

Then the derivative f′ is a non-monic integer-valued random polynomial of degree n − 1. We claim that, as n → ∞, the probability that f′ is irreducible over the rationals approaches zero. Indeed, since λ is integer-valued, there exists an integer k such that P(λ = k) > 0. So, by the law of large numbers, with probability tending to one as n tends to infinity, k is a root of f with multiplicity at least two. Thus, k is a root of f′, and hence (due to Lemma 5.1) f′ is reducible over the rationals. A similar argument shows that, for any fixed positive integer j, the j-th derivative f^{(j)} is reducible over the rationals with probability approaching 1 in the limit as n → ∞.
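The mechanism in this example can be checked directly in SymPy (our own illustration): a repeated integer root of f forces a rational root, and hence a linear factor, of f′.

```python
import sympy as sp

z = sp.symbols('z')

# Take roots 1, 1, 2, so the integer 1 is a root of f with multiplicity two.
f = (z - 1)**2 * (z - 2)
fprime = sp.expand(sp.diff(f, z))  # 3*z**2 - 8*z + 5

# The repeated root 1 of f is also a root of f', so (z - 1) divides f'
# and f' is reducible over the rationals.
_, factors = sp.factor_list(fprime)
```

Here `factors` contains two irreducible factors, z − 1 and 3z − 5, confirming that f′ is reducible.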

### 1.2. Main results

In this paper, we focus on the algebraic degree of the roots of a random monic polynomial f. We now state our main results, which hold for a very general class of random monic polynomials with integer coefficients. We will return to several of the previously mentioned examples as well as several others in the following sections. Since we are working with such a general ensemble of random polynomials, it will be convenient to understand precisely where the roots of the degree-n random monic polynomial f are located. To this end, for M > 0, we define the event

 B_{f,M} := { sup_{1≤i≤n} |λ_i(f)| ≤ M }. (1.2)

In particular, on the event B_{f,M}, all the roots of f will be contained in the disk {z ∈ ℂ : |z| ≤ M}. The value of M will often depend on n, the degree of the random polynomial f, and will also vary depending on which model of random polynomial we are considering.

Our main result below bounds from above the probability that f has an algebraic root of degree k, for some given value of k. In particular, our main result is related to the question of irreducibility, since a monic polynomial with integer coefficients of degree n is irreducible if and only if its roots are all algebraic of degree n. We expect many random monic polynomial models to yield irreducible polynomials with high probability. Intuitively then, algebraic roots of small degree should be rare. Our main result (we thank Melanie Matchett Wood for providing key ideas for the formulation and proof of Theorem 1.10) is an attempt to quantify the probability of their occurrence.

###### Theorem 1.10.

Let f be a degree-n random monic polynomial with integer coefficients (as in Definition 1.1). Let k ≥ 1 be an integer and M > 0. Take Ω ⊆ ℂ to be a set that, on the event B_{f,M}, contains all the roots of f. Suppose there exists p ≥ 0 such that

 sup_{z∈Ω} P(f(z) = 0) ≤ p (1.3)

(in other words, pointwise delocalization holds on Ω), and assume the event B_{f,M}, defined in (1.2), holds with probability at least 1/2. Then, conditioned on B_{f,M}, the probability that f has an algebraic root of degree k in Ω is at most

 2pk ∏_{j=1}^k ( 2 (k choose j) M^j + 1 ). (1.4)

To show that, with high probability, f has no algebraic roots of degree k, one would need to show that the bound in (1.4) is small, which in turn means obtaining a good upper bound for p. The bounds we obtain for p will depend on the particular random polynomial model under consideration. In the next section, we will consider some specific examples where strong estimates for p can be obtained.
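To get a feel for the tradeoff, one can evaluate the bound (1.4) directly. The helper below (our own sketch; `math.comb` supplies the binomial coefficient) computes 2pk ∏_{j=1}^k (2 (k choose j) M^j + 1) for given p, k, and M:

```python
from math import comb

def root_degree_bound(p, k, M):
    # Evaluates the right-hand side of (1.4):
    # 2 * p * k * prod_{j=1}^{k} (2 * binom(k, j) * M**j + 1).
    prod = 1
    for j in range(1, k + 1):
        prod *= 2 * comb(k, j) * M**j + 1
    return 2 * p * k * prod
```

The product grows roughly like (eM)^{(k²+k)/2} (see Proposition 1.12), so the bound is only useful when p is extremely small, e.g. exponentially small in n while k and M grow at most polynomially in n.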

Often, we will want to consider the probability that f has an algebraic root of degree at most k. By an application of the union bound, we immediately obtain the following corollary.

###### Corollary 1.11.

Let f be a degree-n random monic polynomial with integer coefficients (as in Definition 1.1). Let k ≥ 1 be an integer and M > 0. Take Ω ⊆ ℂ to be a set that, on the event B_{f,M}, contains all the roots of f. Suppose there exists p ≥ 0 such that

 sup_{z∈Ω} P(f(z) = 0) ≤ p

(in other words, pointwise delocalization holds on Ω), and assume the event B_{f,M}, defined in (1.2), holds with probability at least 1/2. Then, conditioned on B_{f,M}, the probability that f has an algebraic root of degree k or less in Ω is at most

 2p ∑_{l=1}^k l ∏_{j=1}^l ( 2 (l choose j) M^j + 1 ). (1.5)

The bounds appearing in (1.4) and (1.5) naturally follow from our proofs. However, in applications, these bounds are not the most convenient. Indeed, in most applications we present, it will be simpler to use the following bounds.

###### Proposition 1.12 (Some useful bounds).

For every integer k ≥ 1 and every M ≥ 2,

 M^{(k²+k)/2} e^{(k² − k log k)/2} ≤ ∏_{j=1}^k ( 2 (k choose j) M^j + 1 ) ≤ (eM)^{(k²+k)/2} (1.6)

and

 ∑_{l=1}^k l ∏_{j=1}^l ( 2 (l choose j) M^j + 1 ) ≤ k² (eM)^{(k²+k)/2}. (1.7)

If M ≥ 1, the upper bound of (2eM)^{(k²+k)/2} holds in (1.6) and (1.7).

The proof of Proposition 1.12 is given in Section 5 and is based on Stirling's approximation; in particular, the proof shows that the lower bound in (1.6) gives the correct order for the exponent of M.
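As a sanity check on Proposition 1.12, one can compare the middle product against the stated bounds numerically (our own sketch, not part of the proof):

```python
from math import comb, exp, log

def middle_product(k, M):
    # prod_{j=1}^{k} (2 * binom(k, j) * M**j + 1), as in (1.6).
    prod = 1
    for j in range(1, k + 1):
        prod *= 2 * comb(k, j) * M**j + 1
    return prod

def bounds_hold(k, M):
    # Checks M^((k^2+k)/2) * e^((k^2 - k log k)/2) <= product
    # and product <= (e*M)^((k^2+k)/2), as in (1.6).
    lower = M**((k*k + k) / 2) * exp((k*k - k*log(k)) / 2)
    upper = (exp(1) * M)**((k*k + k) / 2)
    return lower <= middle_product(k, M) <= upper
```

For instance, for k = 4 and M = 3 the product is 96,385,975, which indeed sits between the lower bound (about 1.1 × 10⁷) and the upper bound (about 1.3 × 10⁹).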

### 1.3. Random polynomials over finite fields

There are, of course, many other ensembles of random polynomials one can consider. For instance, one can study monic polynomials over the finite field F_q, where q is a power of a prime. Indeed, there are q^n monic polynomials of degree n over F_q, and we can consider selecting one uniformly at random. Using Galois theory for finite fields and Möbius inversion (see [14, Section 14.3]), one can show that the number of degree-n irreducible monic polynomials over F_q is

 (1/n) ∑_{d | n} μ(d) q^{n/d},

where μ is the Möbius function. Thus, the probability that a randomly selected degree-n monic polynomial over F_q is irreducible is

 (1/(n q^n)) ∑_{d | n} μ(d) q^{n/d} = 1/n + O(q^{−n/2}),

(using the coarse bound that n has at most n divisors, each term with d ≥ 2 contributing at most q^{n/2} in absolute value) for any prime power q and any n ≥ 1. Thus, in a finite field, a degree-n polynomial chosen uniformly at random is irreducible only with probability close to 1/n. This contrasts sharply with the case of polynomials over the integers, where Hilbert's Irreducibility Theorem shows that a randomly chosen polynomial is very likely to be irreducible (see, for example, [52]).
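The Möbius-inversion count is simple to compute. The sketch below (pure Python; the helper functions are ours) evaluates the formula and exhibits the ≈ 1/n irreducibility probability:

```python
def mobius(n):
    # Mobius function via trial-division factorization (fine for small n).
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n has a squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def num_irreducible(q, n):
    # Number of monic irreducible polynomials of degree n over F_q:
    # (1/n) * sum_{d | n} mu(d) * q^(n/d).
    return sum(mobius(d) * q**(n // d) for d in divisors(n)) // n
```

For example, `num_irreducible(2, 10)` returns 99, and 99/2¹⁰ ≈ 0.097 is close to 1/10, as the formula predicts.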

### 1.4. Overview and outline

The paper is organized as follows. In Section 2, we give some example applications of our main results, including the cases of random polynomials with iid coefficients, the characteristic polynomial of random matrices (non-symmetric, non-symmetric sparse, symmetric, and elliptical), and adjacency matrices of random graphs (directed, undirected, and fixed outdegree). Often we will consider the case where the underlying random variables are Rademacher for simplicity. Section 3 motivates the model of random polynomials studied in this paper by illustrating a connection that exists between irreducible random polynomials, random graphs, and control theory on large scale graphs and networks. Section 4 contains the proof for one of the applications discussed in Section 2. Finally, Theorem 1.10 and Proposition 1.12 are proven in Section 5.

### 1.5. Notation

We use asymptotic notation (such as O and o) under the assumption that n → ∞. In particular, o(1) denotes a term which tends to zero as n → ∞. Let [n] := {1, …, n} denote the discrete interval. We let √−1 denote the imaginary unit and reserve i as an index. For a finite set S, we use |S| to denote the cardinality of S. For a vector v, we use ‖v‖ for the Euclidean norm. We let u · v denote the dot product between two vectors u and v. For a matrix A, we let ‖A‖ denote the spectral norm, i.e., ‖A‖ is the largest singular value of A. We let I_n denote the n × n identity matrix; often we will drop the subscript n when the size can be deduced from context. For a polynomial f, deg(f) denotes the degree of f.

## 2. Example applications of the main results

We now specialize Theorem 1.10 and Corollary 1.11 to some specific examples. Before we do, we begin with a non-example.

### 2.1. Random polynomials with iid roots

Let λ_1, …, λ_n be iid copies of an integer-valued random variable λ. Then

 f(z) := ∏_{j=1}^n (z − λ_j)

is the random monic polynomial with integer coefficients from Example 1.5. Clearly, Corollary 1.11 can give no useful information in this case because each root of f is an integer (hence algebraic of degree one). To see this another way, we note that since λ is integer-valued, there exists an integer k such that P(λ = k) > 0. Hence, the value of p from Corollary 1.11 satisfies

 p ≥ sup_{x∈ℝ} P(f(x) = 0) ≥ P(λ = k).

In other words, p is bounded below by a positive constant not depending on n, and so the polynomial f does not satisfy a strong pointwise delocalization bound. As we will shortly see, in many other cases, Theorem 1.10 and Corollary 1.11 do give useful information about the algebraic degree of the roots.

### 2.2. Random polynomials with iid coefficients

We now consider Example 1.2, where the coefficients of the random polynomial are iid random variables.

###### Theorem 2.1 (Random polynomials with iid coefficients).

For each n, let f(z) = z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0, where a_0, …, a_{n−1} are iid Rademacher random variables, which take the values +1 or −1 with equal probability. Then there exists a sequence of natural numbers k_n such that k_n → ∞ as n → ∞ and the roots of f are algebraic numbers of degree at least k_n with probability 1 − o(1).

Stated another way, Theorem 2.1 implies that, for any fixed integer k, the probability that f has an algebraic root of degree k or less is o(1). See Figure 2 for numerical evidence indicating that the probability that f is reducible goes to zero as n → ∞. We present a proof of Theorem 2.1 in Section 4, and below we will comment on potential generalizations of Theorem 2.1 and its connections to the work of Konyagin [26].

Beyond Theorem 2.1, our methods can also be used when a_0, …, a_{n−1} are more general iid integer-valued random variables satisfying some technical assumptions. However, a number of complications can arise in this case. For instance, zero will be a root of f with probability P(a_0 = 0). Thus, one needs to assume a_0 cannot take the value zero (alternatively, one can consider all roots except for possible roots at zero). We focus on the Rademacher case for simplicity.

In [26], Konyagin studies the random degree-n polynomial which has 1 for the constant coefficient and the degree-n coefficient, and every other coefficient is 0 or 1 independently with equal probability. In particular, he gives explicit bounds showing that this polynomial is unlikely to have a root that is an algebraic number of low degree. In contrast, Theorem 2.1 (which studies a different random polynomial model) does not give an explicit dependence between the probability and the degree of the algebraic root. In other applications of Theorem 1.10, particularly to random matrices, we are able to prove an explicit dependence between the probability and the algebraic degree by using strong pointwise delocalization results.

Finally, one should note that elementary Galois theory can be used to prove that if the powers of 2 generate all of the nonzero residues modulo n + 1 (note that this implies n + 1 is prime), then every polynomial of degree n with coefficients iid Rademacher random variables (as in Theorem 2.1) is in fact irreducible. (We thank Melanie Matchett Wood for describing the formulation and proof of this result.) One can prove this by considering the polynomials modulo 2, in which case every coefficient is congruent to 1 and every such polynomial is equal to 1 + z + ⋯ + z^n (i.e., there is no randomness); thus every root of the polynomial modulo 2 must be an (n+1)-st root of unity. To complete the argument, one can use the fact that a finite field has cyclic multiplicative group and the fact that the Galois group of a finite field extension is also cyclic and generated by the Frobenius endomorphism (see [14]). Interestingly, letting n + 1 be a prime, Artin's conjecture on primitive roots would imply that 2 should generate the multiplicative group modulo n + 1 for infinitely many n, and in fact, the proportion of primes for which 2 generates the multiplicative group should asymptotically approach Artin's constant, which is approximately 0.3739 (see the survey [30]).

### 2.3. Random matrices with iid Rademacher ±1 entries

While delocalization estimates for random polynomials with iid coefficients are fairly weak, we now consider random matrices with independent entries, for which much better delocalization bounds are known. Indeed, we will use the following theorem from [5] to bound the supremum in (1.3).

###### Theorem 2.2 (Bourgain–Vu–Wood, Corollary 3.3 in [5]).

Let q be a constant such that 0 < q ≤ 1, and let S ⊂ ℂ be a set with cardinality |S| = O(1). If M_n is an n by n matrix with independent random entries taking values in S such that for any entry m_{ij} and any value s ∈ S we have P(m_{ij} = s) ≤ q, then

 P(M_n is singular) ≤ (√q + o(1))^n.

Furthermore, by inspecting the proof one can see that the o(1) error term depends only on q and the cardinality of the set S, and not on the values in the set S.

In [5], it was shown using the above result that an iid random Rademacher matrix (i.e., where each entry is +1 or −1 independently with probability 1/2) is very unlikely to have a rational eigenvalue. Our result below extends this fact by showing that, for any ε > 0, an eigenvalue that is algebraic with degree at most n^{1/2−ε} (which includes all rational numbers) is similarly unlikely. Our approach here does not extend to algebraic degree n^{1/2} or larger; however, in analogy with Hilbert's Irreducibility Theorem and related results described in the introduction above, it seems likely that the characteristic polynomial of an iid random Rademacher matrix is in fact irreducible with high probability, which would imply that the matrix has no algebraic eigenvalues of degree less than n.
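To make the statement concrete: eigenvalues of small algebraic degree certainly occur for small ±1 matrices; the theorem below says they become exponentially unlikely as n grows. For instance (our own illustration, using SymPy), a 2 × 2 matrix with ±1 entries can have the degree-two algebraic integers ±√2 as eigenvalues:

```python
import sympy as sp

z = sp.symbols('z')

# A 2x2 matrix with +1/-1 entries whose eigenvalues are +sqrt(2) and
# -sqrt(2), each an algebraic integer of degree two.
M = sp.Matrix([[1, 1], [1, -1]])
charpoly = M.charpoly(z).as_expr()  # det(zI - M) = z**2 - 2

# z**2 - 2 is irreducible over the rationals, so neither eigenvalue is
# rational, but both have algebraic degree two.
_, factors = sp.factor_list(charpoly)
```

Here the factor list contains the single irreducible factor z² − 2, so the matrix has no rational eigenvalue yet does have eigenvalues of algebraic degree two.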

###### Theorem 2.3.

Let ε > 0 be a constant, and let M_n be an n by n matrix where each entry takes the value +1 or −1 independently with probability 1/2. Then, the probability that M_n has an eigenvalue that is an algebraic number with degree at most n^{1/2−ε} is bounded above by (1/√2 + o(1))^n.

###### Proof.

Let f(z) := det(zI_n − M_n) be the characteristic polynomial of M_n, so that the eigenvalues of M_n are the roots of f, and note that B_{f,n}, which is the event that all eigenvalues of M_n have absolute value at most n, holds with probability 1 by an elementary bound. (In fact, the eigenvalues of M_n are all of order at most √n with exponentially high probability using, for example, [43, Proposition 2.4]; we will not need such a refined bound here.)

Let Ω := {z ∈ ℂ : |z| ≤ n}. Using Theorem 2.2 above, we have for any z ∈ Ω that

 P(f(z) = 0) = P(M_n − zI_n is singular) ≤ (1/√2 + o(1))^n, (2.1)

where the o(1) error is uniform for all z ∈ Ω (this follows using the facts that {1, −1, 1 − z, −1 − z} is the set of values that can appear in M_n − zI_n and that the cardinality of this set and the value of q are the same for any z). Thus,

 sup_{z∈Ω} P(f(z) = 0) ≤ (1/√2 + o(1))^n.

Combining Corollary 1.11 (with k = n^{1/2−ε} and M = n) with Proposition 1.12, we see that the probability that f has an algebraic root of degree at most n^{1/2−ε} is bounded above by

 2 (1/√2 + o(1))^n (n^{1/2−ε})² (en)^{((n^{1/2−ε})² + n^{1/2−ε})/2}
 ≤ (1/√2 + o(1))^n 2 n^{1−2ε} (en)^{n^{1−2ε}}
 = [ (1/√2 + o(1)) exp( log(2)/n + (1−2ε) log(n)/n + log(en)/n^{2ε} ) ]^n.

The expression inside the square brackets is equal to 1/√2 + o(1) (by adjusting the o(1) term), completing the proof. ∎

### 2.4. Random symmetric matrices

In [47], Vershynin proves a general result for real symmetric random matrices bounding the singularity probability, quantifying the smallest singular value, and showing that the spectrum is delocalized at the optimal scale. Here, we will use the following special case, giving only pointwise delocalization, to illustrate an application of Corollary 1.11.

###### Theorem 2.4 (Vershynin, following from Theorem 1.2 in [47]).

Let K > 0 be a real constant and let M_n be a real symmetric n by n matrix whose entries on and above the diagonal (m_{ij} for i ≤ j) are iid random variables with mean zero and unit variance satisfying |m_{ij}| ≤ K. Then, there exists an absolute constant c > 0 (depending only on K) such that, for every real number r,

 P(r is an eigenvalue of M_n) ≤ 2 e^{−n^c}. (2.2)
###### Remark 2.5.

It is natural to only consider real numbers r in (2.2) since M_n is real symmetric, and real symmetric matrices have all real eigenvalues.

###### Remark 2.6.

The constant c appearing in Theorem 2.4 is typically less than one and may be much smaller.

The more general version of the above result proven by Vershynin [47, Theorem 1.2] applies to real symmetric matrices with entries having subgaussian tails (see [48] for why bounded random variables are subgaussian), and the bound we will prove on the probability of having low-degree algebraic numbers as eigenvalues (Theorem 2.7 below) extends to this setting.

###### Theorem 2.7.

Let K > 0 be a real constant, let c′ be an absolute constant satisfying 0 < 2c′ < c, where c is the absolute constant from Theorem 2.4 (which depends only on K; we may take c ≤ 1 since the bound (2.2) only weakens as c decreases), and let M_n be an n by n real symmetric matrix whose entries on and above the diagonal are iid integer-valued random variables which are bounded in absolute value by K. Then the probability that M_n has an eigenvalue that is algebraic of degree at most n^{c′} is bounded above by e^{−n^c/2} for all sufficiently large n.

For example, setting K = 1, Theorem 2.7 above applies to real symmetric matrices with entries independently taking the values +1 or −1 with equal probability on and above the diagonal.
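The object studied in Theorem 2.7 can be explored empirically. The following sketch (our own illustration; the function name and parameters are arbitrary) samples a small random symmetric ±1 matrix and factors its characteristic polynomial over the integers; the smallest degree among the irreducible factors is the smallest algebraic degree of an eigenvalue, which Theorem 2.7 predicts is typically large.

```python
import numpy as np
import sympy

def min_eigenvalue_degree(n, rng):
    # Fill the upper triangle with iid +/-1 entries and symmetrize.
    upper = rng.choice([-1, 1], size=(n, n))
    m = np.triu(upper) + np.triu(upper, 1).T
    x = sympy.symbols('x')
    char_poly = sympy.Matrix([[int(v) for v in row] for row in m]).charpoly(x).as_expr()
    # Factor over the integers; the minimal polynomial of each eigenvalue
    # divides the (monic, integer) characteristic polynomial.
    _, factors = sympy.factor_list(char_poly)
    return min(int(sympy.degree(f, x)) for f, _ in factors)

rng = np.random.default_rng(0)
degrees = [min_eigenvalue_degree(7, rng) for _ in range(10)]
print(degrees)
```

In most samples the characteristic polynomial is irreducible, so the minimum degree equals n itself.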

###### Proof of Theorem 2.7.

Let p(z) be the characteristic polynomial of M_n, so that the eigenvalues of M_n are the roots of p, and note that E_n := {‖M_n‖ ≤ C√n} is the event that all eigenvalues of M_n have absolute value at most C√n. By [47, Lemma 2.3], we know that E_n holds with probability at least 1 − 2e^{−n} for some constant C (depending only on K).

Since M_n is a real symmetric matrix, the eigenvalues of M_n are all real. Hence, on the event E_n, all roots of p are contained in [−C√n, C√n]. Moreover, Theorem 2.4 implies that sup_{r∈ℝ} P(r is an eigenvalue of M_n) ≤ 2e^{−n^c}. Thus, combining Corollary 1.11 and Proposition 1.12 (and using the fact that P(E_n) ≥ 1 − 2e^{−n} from [47, Lemma 2.3]), we have that the probability that p has an algebraic root of degree at most n^{c′} is bounded above by

 4e^{−n^c} n^{2c′} (eC√n)^{(n^{2c′}+n^{c′})/2} + 2e^{−n}
   ≤ 4e^{−n^c} n^{2c′} (eC√n)^{n^{2c′}} + 2e^{−n}
   ≤ [exp(−1 + log(4)/n^c + 2c′ log(n)/n^c + n^{2c′−c} log(eC√n))]^{n^c} + 2e^{−n}.

For sufficiently large n, one observes that the expression inside the exp function is at most −1/2 (using the fact that 2c′ < c), and also for sufficiently large n we have that 2e^{−n} ≤ e^{−n^c/2}, proving the result. ∎

### 2.5. Elliptical random matrices

Elliptical random matrices interpolate between iid random matrices and random symmetric matrices. In an elliptical random matrix, all the entries are independent with the exception that the (i,j)-entry may depend on the (j,i)-entry, and one also requires that the correlation between the (i,j)-entry and the (j,i)-entry is a constant ρ for all i ≠ j. Thus, an iid matrix is an example of an elliptical random matrix with ρ = 0, and ρ = 1 if the matrix is symmetric. There are results showing that the limiting distribution of the eigenvalues also interpolates between iid random matrices and symmetric random matrices; in particular, for 0 ≤ ρ < 1, the limiting eigenvalue distribution (suitably scaled) is uniform on an ellipse whose eccentricity depends on ρ; see Nguyen and O’Rourke [33] and Naumov [31].

To apply Theorem 1.10, we will use a result due to Nguyen and O’Rourke [33] bounding the smallest singular value, and we will focus on the special case of Rademacher elliptical random matrices for simplicity. Let M_{n,ρ} be an elliptical random matrix with covariance parameter ρ with entries m_{i,j} defined as follows: let {m_{i,j} : i ≤ j} ∪ {ξ_{i,j} : i < j} be a collection of independent random variables, where m_{i,j} takes the values +1 and −1 with equal probability for i ≤ j and where

 ξ_{i,j} := {  1  with probability (1+ρ)/2,
            { −1  with probability (1−ρ)/2,

for i < j. Then let m_{j,i} := ξ_{i,j} m_{i,j} whenever i < j. Thus,

 E[m_{i,j} m_{j,i}] = ρ for all i ≠ j,

and note that each entry takes the values +1 or −1 with equal probability. We will call M_{n,ρ} a Rademacher elliptical random matrix with parameter ρ. Observe that a Rademacher elliptical random matrix with parameter ρ = 0 is just an iid random Rademacher matrix.
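The construction just described can be sketched in code (a minimal sketch with our own function name and parameters): draw iid ±1 entries on and above the diagonal, draw independent signs ξ with P(ξ = 1) = (1+ρ)/2, and mirror each above-diagonal entry across the diagonal with that sign, so that opposite entries have correlation ρ.

```python
import numpy as np

def rademacher_elliptical(n, rho, rng):
    # iid Rademacher entries on and above the diagonal
    m = rng.choice([-1.0, 1.0], size=(n, n))
    # independent correlation signs xi_{i,j} with P(xi = 1) = (1 + rho)/2
    xi = np.where(rng.random((n, n)) < (1 + rho) / 2, 1.0, -1.0)
    for i in range(n):
        for j in range(i + 1, n):
            m[j, i] = xi[i, j] * m[i, j]   # mirror with sign xi_{i,j}
    return m

rng = np.random.default_rng(1)
m = rademacher_elliptical(300, 0.5, rng)
iu = np.triu_indices(300, 1)
print(np.mean(m[iu] * m.T[iu]))   # empirical correlation, close to rho = 0.5
```

Taking ρ = 1 forces every sign to be +1 and recovers a symmetric ±1 matrix, while ρ = 0 gives the iid Rademacher case.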

###### Theorem 2.8 (Nguyen-O’Rourke, following from Theorem 1.9 in [33]).

Let M_{n,ρ} be an n by n Rademacher elliptical random matrix with parameter ρ ∈ [0, 1), and let B > 0 be a constant. Then, for all n sufficiently large (depending only on ρ and B), we have that

 sup_{z∈ℂ, |z|≤n} P(z is an eigenvalue of M_{n,ρ}) ≤ n^{−B}.

We can combine Theorem 2.8 with Corollary 1.11 and Proposition 1.12 to get the following result.

###### Theorem 2.9.

Let M_{n,ρ} be an n by n Rademacher elliptical random matrix with parameter ρ ∈ [0, 1), and let B > 0 and K ≥ 1 be constants. Then, for all n sufficiently large (depending only on ρ, B, and K), the probability that the matrix M_{n,ρ} has an eigenvalue that is algebraic of degree at most K is bounded above by n^{−B}.

###### Proof.

Let p(z) be the characteristic polynomial of M_{n,ρ}, so that the eigenvalues of M_{n,ρ} are the roots of p, and note that the event that all eigenvalues of M_{n,ρ} have absolute value at most n holds with probability 1 by an elementary bound (each entry has absolute value 1, so the operator norm of M_{n,ρ} is at most n).

Note that Theorem 2.8 (applied with 2B + K² in place of B) implies that sup_{z∈ℂ, |z|≤n} P(z is an eigenvalue of M_{n,ρ}) ≤ n^{−2B−K²} for all sufficiently large n. Thus, combining Corollary 1.11 and Proposition 1.12, we have that the probability that p has an algebraic root of degree at most K is bounded above by

 2n^{−2B−K²} K² (en)^{(K²+K)/2} ≤ 2K² n^{−2B−K²} (en)^{K²} = (2K² e^{K²} n^{−B}) n^{−B} ≤ n^{−B},

where the last inequality holds for all sufficiently large n, completing the proof. ∎

### 2.6. Product matrices

We now show how Corollary 1.11 can be applied to products of independent random matrices. We begin with the following result from [36].

###### Theorem 2.10 (O’Rourke-Renfrew-Soshnikov-Vu, following from Theorem 5.2 in [36]).

Let m ≥ 1 be an integer, and let B > 0 and γ > 0 be constants. Let M_n^{(1)}, …, M_n^{(m)} be independent n by n matrices in which each entry independently takes the values +1 and −1 with probability 1/2. Define the product

 M_n := M_n^{(1)} ⋯ M_n^{(m)}.

Then, for all n sufficiently large (depending only on m, B, and γ), we have

 sup_{z∈ℂ, |z|≤n^γ} P(z is an eigenvalue of M_n) ≤ n^{−B}.

We can combine Theorem 2.10 with Corollary 1.11 and Proposition 1.12 to get the following result.

###### Theorem 2.11.

Let m ≥ 1 be an integer, and let B > 0 and K ≥ 1 be constants. Let M_n^{(1)}, …, M_n^{(m)} be independent n by n matrices in which each entry independently takes the values +1 and −1 with probability 1/2. Then, for all n sufficiently large (depending only on m, B, and K), the probability that the matrix

 M_n := M_n^{(1)} ⋯ M_n^{(m)}

has an eigenvalue that is algebraic of degree at most K is bounded above by n^{−B}.

###### Proof.

Let p(z) be the characteristic polynomial of M_n, so that the eigenvalues of M_n are the roots of p, and note that the event that all eigenvalues of M_n have absolute value at most n^m holds with probability 1 by an elementary bound (each factor has operator norm at most n).

Note that Theorem 2.10 (applied with γ := m and with 2B + mK² in place of B) implies that sup_{z∈ℂ, |z|≤n^m} P(z is an eigenvalue of M_n) ≤ n^{−2B−mK²} for all sufficiently large n. Thus, combining Corollary 1.11 and Proposition 1.12, we have that the probability that p has an algebraic root of degree at most K is bounded above by

 2n^{−2B−mK²} K² (en)^{m(K²+K)/2} ≤ 2K² n^{−2B−mK²} (en)^{mK²} = (2K² e^{mK²} n^{−B}) n^{−B} ≤ n^{−B},

where the last inequality holds for all sufficiently large n, completing the proof. ∎
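The elementary bound invoked at the start of the proof can be checked numerically (our own sketch, with arbitrary small parameters): each ±1 factor has operator norm at most n, so a product of m independent Rademacher matrices has spectral radius at most n^m.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_factors = 30, 3

# Multiply m independent n-by-n Rademacher matrices together.
product = np.eye(n)
for _ in range(m_factors):
    product = product @ rng.choice([-1.0, 1.0], size=(n, n))

# Every eigenvalue is bounded in modulus by the product of the operator
# norms of the factors, each of which is at most n.
spectral_radius = np.abs(np.linalg.eigvals(product)).max()
print(spectral_radius <= n ** m_factors)   # -> True
```

In practice the spectral radius is far below the crude bound n^m, but the deterministic bound is all the proof requires.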

More generally, Theorem 2.10 can be extended to products of elliptical random matrices which satisfy a number of constraints (see [36, Theorem 5.2] for details). This leads naturally to a version of Theorem 2.11 for the product of m independent Rademacher elliptical random matrices with parameters ρ_1, …, ρ_m satisfying 0 ≤ ρ_i < 1 for each i.

In addition, Theorem 2.11 can also be extended to the product of two independent random symmetric matrices with iid Rademacher entries using the least singular value bounds in [36, Section 7]. For technical reasons, this bound has not been extended beyond the product of two such matrices; see [36, Remark 5.3] for details.

### 2.7. Erdős–Rényi random graphs

We now consider Erdős–Rényi random graphs on n vertices, where each edge is present independently at random with a constant probability p satisfying 0 < p < 1. We denote such a graph by G(n, p) and observe that the graph can be defined by its adjacency matrix A_n, which is a real symmetric matrix with (i,j)-entry equal to 1 if there is an edge between vertices i and j, and equal to zero otherwise.
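The adjacency matrix just described can be sketched as follows (a minimal sketch; the function name and parameters are ours): each edge {i, j} with i < j is present independently with probability p, and the matrix is symmetric with zero diagonal (no loops).

```python
import numpy as np

def erdos_renyi_adjacency(n, p, rng):
    coin = rng.random((n, n)) < p
    a = np.triu(coin, 1).astype(int)   # strict upper triangle decides edges
    return a + a.T                     # symmetric, zero diagonal

rng = np.random.default_rng(3)
a = erdos_renyi_adjacency(6, 0.5, rng)
print(a)
```

Only the strict upper triangle is random; the lower triangle is its mirror image, which is exactly the independence structure exploited below.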

In the Erdős–Rényi model, the independence among edges means that all entries in the strict upper triangle of A_n are also independent. Thus, the following result due to Nguyen [32] is applicable.

###### Theorem 2.12 (Nguyen, following from Theorem 1.4 in [32]).

Let B > 0 and 0 < p < 1 be constants, and let A_n be the adjacency matrix of G(n, p). Then, for n sufficiently large (depending only on B and p),

 sup_{z∈ℂ, |z|≤n} P(z is an eigenvalue of A_n) ≤ n^{−B}.

By following the proof of Theorem 2.9 and applying Theorem 2.12 in place of Theorem 2.8, we find that for any constants B > 0 and K ≥ 1, the probability that A_n has an eigenvalue that is algebraic of degree at most K is bounded above by n^{−B} for sufficiently large n (depending only on B, K, and p). We state this result explicitly in Section 3 (see Theorem 3.9). The result is also true when the diagonal entries of A_n are allowed to be one (this corresponds to the case where loops are allowed in the graph).

### 2.8. Directed random graphs

In the case of directed random graphs, where directed edges (including loops) are included independently at random with probability p, where 0 < p < 1 is a constant, the adjacency matrix A_n is an n by n matrix whose entries independently equal 1 with probability p and zero otherwise. In this case, Theorem 2.2 applies, and thus, following the proof of Theorem 2.3, one can show that for any ϵ > 0, the probability that A_n has an eigenvalue that is an algebraic number of degree at most n^{1/2−ϵ} is exponentially small in n.

### 2.9. Directed random graphs with fixed outdegrees

Let d be a positive integer, and let v be a random binary vector uniformly chosen from among all binary vectors of length n containing exactly d ones. If M_n is the n by n matrix whose rows are iid copies of the vector v, then M_n can be viewed as the adjacency matrix of a random directed graph on n vertices (where loops are allowed) such that each vertex has outdegree d. In this case, M_n always has d as an eigenvalue (with the corresponding eigenvector being the all-ones vector), and hence not every eigenvalue of M_n can be of high algebraic degree. Using Corollary 1.11, we show that, besides this trivial eigenvalue, the other eigenvalues cannot be low-degree algebraic numbers.
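The trivial eigenvalue is easy to see in code (a sketch in our own notation): every row of the matrix has exactly d ones, so every row sum is d, and applying the matrix to the all-ones vector returns d times the all-ones vector.

```python
import numpy as np

def fixed_outdegree_matrix(n, d, rng):
    # Each row independently gets d ones in uniformly chosen positions.
    m = np.zeros((n, n), dtype=int)
    for i in range(n):
        m[i, rng.choice(n, size=d, replace=False)] = 1
    return m

rng = np.random.default_rng(4)
n, d = 8, 3
m = fixed_outdegree_matrix(n, d, rng)

# Row sums are all d, so the all-ones vector is an eigenvector with
# eigenvalue d -- the trivial eigenvalue discussed above.
print(np.allclose(m @ np.ones(n), d * np.ones(n)))   # -> True
```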

###### Theorem 2.13.

Let B > 0, K ≥ 1, and 0 < δ < 1/2 be constants, and let v be a random binary vector uniformly chosen from among all binary vectors of length n containing exactly d ones for some d satisfying δn ≤ d ≤ (1−δ)n. If M_n is a random n by n matrix whose rows are iid copies of the vector v, then, for all n sufficiently large (depending only on B, K, and δ), the probability that one of the non-trivial eigenvalues of the matrix M_n is algebraic of degree at most K is bounded above by n^{−B}.

###### Proof.

The proof of Theorem 2.13 follows closely the proof of Theorem 2.9, where instead of using Theorem 2.8 we apply Theorem 2.14 below. The main difference comes from the fact that we must now deal with the trivial eigenvalue at d.

Let p(z) be the characteristic polynomial of M_n, so that the eigenvalues of M_n are the roots of p, and note that the event that all eigenvalues of M_n have absolute value at most n holds with probability 1 by an elementary bound.

Let Ω := {z ∈ ℂ : |z| ≤ n, z ≠ d}, and note that Theorem 2.14 below implies that sup_{z∈Ω} P(z is an eigenvalue of M_n) ≤ n^{−2B−K²−1} for all sufficiently large n. Thus, combining Corollary 1.11 and Proposition 1.12, we have that the probability that p has an algebraic root of degree at most K in Ω is bounded above by

 2n^{−2B−K²−1} K² (en)^{(K²+K)/2} ≤ 2K² n^{−2B−K²−1} (en)^{K²} = (2K² e^{K²} n^{−B}) n^{−B−1} ≤ n^{−B−1},

where the last inequality holds for all sufficiently large n. Therefore, we conclude that, with probability at least 1 − n^{−B−1}, M_n has no eigenvalues of algebraic degree at most K in Ω. The second bound in Theorem 2.14 below (applied with B + 1 in place of B) implies that, with probability at least 1 − n^{−B−1}, the eigenvalue d has algebraic multiplicity one. Hence, on this event, the other n − 1 eigenvalues of M_n (counting algebraic multiplicity) are contained in Ω. As 2n^{−B−1} ≤ n^{−B} for n ≥ 2, the proof is complete. ∎

It remains to verify the following bounds.

###### Theorem 2.14.

Let B > 0 and 0 < δ < 1/2 be constants, and let v be a random binary vector uniformly chosen from among all binary vectors of length n containing exactly d ones for some d satisfying δn ≤ d ≤ (1−δ)n. If M_n is a random n by n matrix whose rows are iid copies of the vector v, then, for sufficiently large n (depending only on B and δ),

 sup_{z∈ℂ, z≠d} P(z is an eigenvalue of M_n) ≤ n^{−B} (2.3)

and

 P(d is an eigenvalue of M_n with algebraic multiplicity at least 2) ≤ n^{−B}. (2.4)
###### Proof.

The proof follows the arguments given by Nguyen and Vu in [34]. We begin with the bound in (2.3). Let Ω := {z ∈ ℂ : |z| ≤ n, z ≠ d}. Since, with probability 1, all eigenvalues of M_n are contained in the disk {z ∈ ℂ : |z| ≤ n}, it suffices to show

 sup_{z∈Ω} P(z is an eigenvalue of M_n) ≤ n^{−B}

for n sufficiently large. Define the matrix

 X_n := 2M_n − J_n,

where J_n is the n by n all-ones matrix. In particular, X_n is an n by n random matrix with +1 and −1 entries whose rows are independent with common row sum s := 2d − n, where |s| ≤ (1 − 2δ)n. Such matrices were explicitly studied in [34], and the estimate below follows from [34, Theorem 2.8]. Let M_{n−1} be the submatrix of M_n formed from M_n by removing the last row and column. Similarly, let

 X_{n−1} := 2M_{n−1} − J_{n−1}.

Then, for any deterministic matrix F satisfying ‖F‖ ≤ n², [34, Theorem 2.8] implies that

 sup_{z∈ℂ, |z|≤2n} P(z is an eigenvalue of X_{n−1} + F) ≤ n^{−B} (2.5)

for all n sufficiently large (depending only on B and δ).

The advantage of working with X_{n−1} is that it does not have a trivial eigenvalue at d. Thus, we will reduce to the case where the bound in (2.5) is relevant. Let m_{i,j} denote the (i,j)-entry of M_n. Define M := M_n − zI_n. Then det(M) = det(M′), where M′ is obtained from M by adding the first n − 1 columns to the last. Since each entry of the last column of M′ takes the value d − z, det(M′) = (d − z)det(M′′), where M′′ is obtained from M′ by replacing each entry in the last column by 1, i.e.,

 M′′ := ⎡ m_{1,1}−z    m_{1,2}     …  m_{1,n−1}       1 ⎤
        ⎢    ⋮            ⋮        ⋱      ⋮           ⋮ ⎥
        ⎢ m_{n−1,1}    m_{n−1,2}   …  m_{n−1,n−1}−z   1 ⎥
        ⎣ m_{n,1}      m_{n,2}     …  m_{n,n−1}       1 ⎦.

Since d − z ≠ 0 for z ∈ Ω, it now suffices to show

 sup_{z∈ℂ, |z|≤n} P(det(M′′) = 0) ≤ n^{−B} (2.6)

for n sufficiently large. Additionally, since det(M_n − zI_n) = (d − z)det(M′′), the eigenvalue d has algebraic multiplicity at least 2 precisely when det(M′′) vanishes at z = d, so the bound in (2.6) would also imply (2.4).

By subtracting the last row of M′′ from each of the previous rows, it follows that

 det(M′′) = det(M_{n−1} − Q_{n−1} − zI_{n−1}),

where Q_{n−1} is an (n−1) by (n−1) rank-one matrix whose rows are each given by (m_{n,1}, m_{n,2}, …, m_{n,n−1}). Since these entries are independent of the entries in M_{n−1}, we condition on Q_{n−1} and now treat this matrix as deterministic. Observe that

 det(M_{n−1} − Q_{n−1} − zI_{n−1}) = 0

if and only if 2z is an eigenvalue of X_{n−1} + F, where F := J_{n−1} − 2Q_{n−1}.

By an elementary bound,

 ‖F‖ ≤ 2‖Q_{n−1}‖ + ‖J_{n−1}‖ ≤ 3n ≤ n²

for n ≥ 3. Therefore, we conclude from (2.5) that

 sup_{z∈ℂ, |z|≤n} P(det(M_{n−1} − Q_{n−1} − zI_{n−1}) = 0) ≤ n^{−B}

for n sufficiently large, and the proof is complete. ∎
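The determinant manipulation at the heart of the proof can be spot-checked numerically (our own sketch with hypothetical small parameters): for a 0/1 matrix with constant row sum d, the column and row operations above give det(M_n − zI_n) = (d − z) det(M_{n−1} − Q_{n−1} − zI_{n−1}), where M_{n−1} drops the last row and column and every row of Q_{n−1} is the last row of M_n with its last entry removed.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, z = 6, 2, 0.37

# Random 0/1 matrix with exactly d ones in each row (constant row sum d).
m = np.zeros((n, n))
for i in range(n):
    m[i, rng.choice(n, size=d, replace=False)] = 1

lhs = np.linalg.det(m - z * np.eye(n))

# Rank-one matrix whose rows all equal the last row of m (minus last entry).
q = np.tile(m[-1, :-1], (n - 1, 1))
rhs = (d - z) * np.linalg.det(m[:-1, :-1] - q - z * np.eye(n - 1))

print(abs(lhs - rhs) < 1e-8)   # -> True
```

The identity holds exactly; the tolerance only absorbs floating-point rounding in the two determinant evaluations.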

### 2.10. Sparse matrices

We define the sparse random matrix as follows. Let ξ be a random variable taking the value 1 with probability p and taking the value 0 otherwise, and let ε be a Rademacher random variable. Then we define each entry of the n by n