Local Kesten–McKay law for random regular graphs

Roland Bauerschmidt1    Jiaoyang Huang2    Horng-Tzer Yau3
1University of Cambridge, Statistical Laboratory, DPMMS. E-mail: rb812@cam.ac.uk.
3Harvard University, Department of Mathematics. E-mail: htyau@math.harvard.edu; partially supported by NSF grants DMS-1307444, DMS-1606305 and a Simons Investigator award.

We study the adjacency matrices of random $d$-regular graphs with large but fixed degree $d$. In the bulk of the spectrum down to the optimal spectral scale, we prove that the Green’s functions can be approximated by those of certain infinite tree-like (few cycles) graphs that depend only on the local structure of the original graphs. This result implies that the Kesten–McKay law holds for the spectral density down to the smallest possible scale, and it implies the complete delocalization of the bulk eigenvectors. Our method is based on estimating the Green’s function of the adjacency matrices and on a resampling of the boundary edges of large balls in the graphs.


1 Main results I: Spectral density and eigenvectors

1.1 Introduction

Random regular graphs with fixed degree are fundamental models of sparse random graphs and they arise naturally in many different contexts. The spectral properties of their adjacency matrices are of particular interest in computer science, combinatorics, and statistical physics. The relevant topics include the theory of expanders (see e.g. [71]), quantum chaos (see e.g. [72]), and graph $\zeta$-functions (see e.g. [76]). There has been significant progress in the understanding of the spectra of (random and deterministic) regular graphs. For fixed degree, these results generally concern properties of eigenvalues and eigenvectors near the macroscopic scale, and their proofs use the local tree-like structure of these graphs as an important input. On the other hand, dense regular graphs belong to the random matrix universality class and their spectral properties are known to resemble those of Wigner matrices. In this paper, we introduce an approach that allows the Green’s function method of random matrix theory to make use of the local tree-like structure of the random regular graph, while it also captures key random matrix behavior.

Throughout the paper, $A$ is the adjacency matrix of a (uniform) random $d$-regular graph on $N$ vertices. Thus $A$ is uniformly chosen among all symmetric $N \times N$ matrices with entries in $\{0,1\}$, with $A_{ii} = 0$ and $\sum_j A_{ij} = d$ for all $i$. Note that $A$ has the trivial constant eigenvector $(1, \dots, 1)$ with eigenvalue $d$. We also use the rescaled adjacency matrix $H = A/\sqrt{d-1}$, and we denote the set of (simple) $d$-regular graphs on $N$ vertices by $\mathcal{G}_{N,d}$.

Below we first discuss some known consequences of the tree-like and of the random matrix-like structure.

Tree-like structure

It is well known that most regular graphs of a fixed degree are locally tree-like in the following sense: (i) for any fixed radius $r$ (and in fact for radii up to order $\log N$), the radius-$r$ neighborhoods of almost all vertices are the same as those in the infinite $d$-regular tree; (ii) the neighborhoods of all vertices have bounded excess, which is the smallest number of edges that must be removed to yield a tree; see e.g. Proposition 4.1 below. The tree-like structure is important for the following results, valid in general for deterministic graphs and in some cases requiring randomness as well.

  1. For regular graphs with locally tree-like structure, the macroscopic spectral density of $H$ converges to the Kesten–McKay law [53, 63], characterized by the density $\rho_d$ given in (1.1) below. For random regular graphs, the Kesten–McKay law was established on small mesoscopic spectral scales [32, 44, 15] by using the fact that the locally tree-like structure holds with high probability in neighborhoods of radius of order $\log N$.

  2. For regular graphs with locally tree-like structure, the eigenvectors of $H$ are weakly delocalized: their $\ell^\infty$-norms are bounded by an inverse power of $\log N$ [32, 44, 24] and their $\ell^2$-mass cannot concentrate on a small set [24]. If, in addition, the graphs are expanders, the eigenvectors of $H$ also satisfy the quantum ergodicity property [15, 25, 14].

  3. For random regular graphs, using the locally tree-like structure as important input, it is known that for any fixed $\varepsilon > 0$, the nontrivial eigenvalues of $A$ are contained in $[-2\sqrt{d-1}-\varepsilon,\, 2\sqrt{d-1}+\varepsilon]$ with high probability. This was conjectured in [13] and proved in [42]; see also [70, 20] for recent alternative arguments. It was also shown in [20] that $\varepsilon$ can in fact be taken to tend to zero with $N$.

Random matrix-like structure

For random matrices of Wigner type, precise estimates on the spectral properties of these matrices were proved (see e.g., [38, 74, 56, 39]):

  1. The spectral density in the bulk is given by the semicircle law on all scales larger than $1/N$.

  2. The eigenvectors are uniformly bounded in $\ell^\infty$-norm by $N^{-1/2}$ (up to logarithmic corrections).

  3. The extremal eigenvalues are concentrated on the scale $N^{-2/3}$.

  4. Both bulk and edge universality hold; in particular, the distributions of the extremal eigenvalues are the same as those of the Gaussian matrix ensembles (Tracy–Widom distributions).

The first three properties usually can be proved via estimates on the Green’s function; the proofs of universality involve Dyson Brownian Motion or other comparison methods (see [39] for a review).

For random $d$-regular graphs with degree growing sufficiently fast with $N$, properties (i) and (ii), and also bulk universality, were proved in [19, 18] (the lower bound on $d$ can be relaxed for properties (i) and (ii)). Simulations indicate that (i)–(iv) hold for random regular graphs of fixed degree [67, 50, 68, 46].

In this paper, we consider random regular graphs of large but fixed degree $d$. One of our key ideas to prove the properties (i) and (ii) is to use switchings to resample the boundaries of large balls (see Section 7). This operation preserves the local tree-like structure and it also captures sufficient global structure in random regular graphs. This resampling generalizes and adds a geometric component to the local resampling method we introduced with A. Knowles in [19] for random regular graphs of growing degree. The idea of using some form of switchings in studying random regular graphs goes back at least to [64], where it was used in the enumeration of such graphs; see also [80] for further applications in enumeration. Finally, to analyze the propagation of the boundary effect to the interior of the ball in the Green’s function, we explicitly compute the Green’s function of the tree-like graphs.
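The switchings mentioned above act on pairs of edges. As a minimal illustration (not the paper’s boundary resampling, which switches the boundary edges of a ball), the following sketch performs a single double edge switch on an ad hoc 3-regular graph and checks that the degree sequence and simplicity are preserved:

```python
import random

def try_switch(edges, i, j):
    """Attempt a double edge switch: replace {a,b},{c,d} by {a,c},{b,d}.

    Returns the new edge list if the result is still simple (no loops or
    multi-edges), otherwise None. Degrees are preserved by construction,
    which is what makes switchings useful for sampling regular graphs.
    """
    (a, b), (c, d) = edges[i], edges[j]
    # Reject switches that would create a self-loop ...
    if a == c or b == d:
        return None
    # ... or a multi-edge (compare edges as unordered pairs).
    present = {frozenset(e) for e in edges}
    if frozenset((a, c)) in present or frozenset((b, d)) in present:
        return None
    out = list(edges)
    out[i], out[j] = (a, c), (b, d)
    return out

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

# A 3-regular graph on 6 vertices (the prism graph; an arbitrary choice).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
         (0, 3), (1, 4), (2, 5)]
random.seed(0)
switched = None
while switched is None:
    i, j = random.sample(range(len(edges)), 2)
    switched = try_switch(edges, i, j)
assert degrees(switched) == degrees(edges)  # still 3-regular
```

The rejection step (keeping the graph simple) corresponds to the notion of switchable pairs discussed in Section 7.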


For two quantities $X$ and $Y$ depending on $N$, we write $X = o(Y)$ if $Y$ is positive and $X/Y \to 0$ as $N \to \infty$; $X = O(Y)$ if $X, Y$ are positive and there exists some universal constant $C$ such that $X \le CY$; $X \ll Y$, or equivalently $Y \gg X$, if $Y$ is positive and $X/Y \to 0$; and $X \asymp Y$ if $X, Y$ are positive and both $X = O(Y)$ and $Y = O(X)$.

1.2 Spectral density and eigenvector delocalization

Our main result, Theorem 2.4, is a precise estimate on the local profile of the Green’s function down to the smallest possible spectral scales, with high probability. Its statement requires several definitions, and we therefore only state it in Section 2. In the remainder of this section, we state some direct consequences of Theorem 2.4, which can be stated in elementary terms. The proofs of these corollaries are given in Section 2.4.

1.2.1 Spectral density

With high probability, the spectral measure of the rescaled adjacency matrix $H = A/\sqrt{d-1}$ converges weakly to the rescaled Kesten–McKay law with density given by

\[ \rho_d(x) = \Big( 1 + \frac{1}{d-1} - \frac{x^2}{d} \Big)^{-1} \frac{\sqrt{4 - x^2}}{2\pi} \quad \text{for } x \in [-2, 2]. \qquad (1.1) \]

This convergence can be expressed as $m_N(z) - m_d(z) \to 0$ for any $z \in \mathbb{H}$ independent of $N$, where $m_d$ is the Stieltjes transform of $\rho_d$, and $m_N$ is the Stieltjes transform of the empirical spectral measure of $H$,

\[ m_N(z) = \frac{1}{N} \operatorname{Tr}\,(H - z)^{-1} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\lambda_i(H) - z}, \qquad m_d(z) = \int \frac{\rho_d(x)}{x - z}\, dx, \]

and $\mathbb{H} = \{ z \in \mathbb{C} : \operatorname{Im} z > 0 \}$ is the upper half-plane. The imaginary part $\eta = \operatorname{Im} z$ of the spectral parameter determines the scale of the convergence. In particular, the convergence for all fixed $z \in \mathbb{H}$ corresponds to the convergence on the macroscopic scale, i.e., for intervals containing order $N$ eigenvalues. The following theorem gives the convergence on the optimal mesoscopic scale $\operatorname{Im} z \ge N^{-1+\varepsilon}$, away from the spectral edges at $\pm 2$.
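As a sanity check of these definitions (with a small ad hoc graph, not a uniform random regular graph), the Stieltjes transform of the empirical spectral measure can be computed either from the eigenvalues or as the normalized trace of the Green’s function:

```python
import numpy as np

d = 4                      # degree (illustrative choice)
# Adjacency matrix of a small 4-regular graph: the circulant graph on
# 12 vertices with connections at offsets +-1, +-2 (an ad hoc example).
N = 12
A = np.zeros((N, N))
for i in range(N):
    for off in (1, 2):
        A[i, (i + off) % N] = A[(i + off) % N, i] = 1
H = A / np.sqrt(d - 1)     # rescaled adjacency matrix

z = 0.3 + 0.1j             # spectral parameter in the upper half-plane
G = np.linalg.inv(H - z * np.eye(N))      # Green's function (H - z)^{-1}
m_from_trace = np.trace(G) / N            # m_N(z) as a normalized trace
eigs = np.linalg.eigvalsh(H)
m_from_eigs = np.mean(1.0 / (eigs - z))   # m_N(z) from the eigenvalues
assert np.isclose(m_from_trace, m_from_eigs)
```

The two expressions agree by the spectral theorem; note that $\operatorname{Im} m_N(z) > 0$ whenever $\operatorname{Im} z > 0$.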

Theorem 1.1 (Local Kesten–McKay Law).

Fix $d$ sufficiently large, $\kappa > 0$, and $\varepsilon > 0$. Then, with high probability with respect to the uniform measure on $\mathcal{G}_{N,d}$,

\[ m_N(z) = m_d(z) + o(1) \]

uniformly for $z$ in the spectral domain

\[ \{ z \in \mathbb{H} : |\operatorname{Re} z| \le 2 - \kappa, \ \operatorname{Im} z \ge N^{-1+\varepsilon} \}. \qquad (1.4) \]
While Theorem 1.1 shows that the spectral density (or its Stieltjes transform, which is the trace of the Green’s function) does concentrate, the individual entries of the Green’s function of the random regular graph with bounded degree do not concentrate; see also Remark 2.5 below. This is different from the typical examples in random matrix theory, and it is one of the reasons that the fixed degree graphs require a more delicate analysis. For example, the random regular graph contains a triangle with probability uniformly bounded from below. For graphs with bounded degree, triangles and other short cycles have a strong local influence on the elements of the Green’s function, and thus the spectrum.

The spectral density of random regular graphs at scales much larger than the typical eigenvalue spacing has been studied in [78, 32, 44, 15]. Results for the spectral density near the typical eigenvalue spacing appeared only very recently [19], where the semicircle law down to the optimal mesoscopic scale was established for degrees $d$ growing with $N$. The methods of the current paper could be extended from fixed $d$ to $d$ growing slowly with $N$, for example up to the range in which the results of [19] apply. Thus the results of this paper complement those of [19]. For simplicity, we restrict this paper to the most interesting case of fixed degree $d$.

1.2.2 Eigenvectors

Theorem 2.4 implies delocalization estimates of the eigenvectors in the bulk of the spectrum.

Theorem 1.2 (Eigenvector delocalization).

Fix $d$ sufficiently large, $\kappa > 0$, and $\varepsilon > 0$. Then, with high probability with respect to the uniform measure on $\mathcal{G}_{N,d}$, all $\ell^2$-normalized eigenvectors $v$ of $H$ whose eigenvalue $\lambda$ obeys $|\lambda| \le 2 - \kappa$ are simultaneously delocalized:

\[ \|v\|_\infty \le N^{-1/2+\varepsilon}. \qquad (1.5) \]
Theorem 1.2 shows that, with high probability, the eigenvectors are completely delocalized. On the other hand, it is easy to see that, with polynomially small (in $N$) probability, the random $d$-regular graph has a localized eigenvector (see Figure 1). In particular, (1.5) cannot hold with probability higher than $1 - N^{-C}$ for some constant $C$. Moreover, the Erdős–Rényi graph with finite average degree has localized eigenvectors with probability $1 - o(1)$. Thus (1.5) with probability tending to $1$ is false for the Erdős–Rényi graph with finite average degree.

Figure 1: Theorem 1.2 shows that a random $d$-regular graph has only completely delocalized eigenvectors with high probability. On the other hand, it is not difficult to show that a random $d$-regular graph has localized eigenvectors with polynomially small probability. For example, a random $d$-regular graph contains the subgraph shown on the left with polynomially small probability. For comparison, notice also that an Erdős–Rényi graph with finite average degree contains localized eigenvectors with probability $1 - o(1)$; see the right figure.
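The localization mechanism can be illustrated in a minimal sketch. The graph below is chosen ad hoc (it is not regular, and it is not the subgraph of Figure 1, which is not reproduced here), but it shows the general principle: two non-adjacent vertices with identical neighborhoods support an eigenvector on only two sites.

```python
import numpy as np

# If two non-adjacent vertices u, v have exactly the same neighbors, the
# rows of A indexed by u and v are equal, so A(e_u - e_v) = 0: the vector
# e_u - e_v is an eigenvector with eigenvalue 0 supported on two sites.
A = np.zeros((5, 5))
for u, v in [(0, 2), (0, 3), (1, 2), (1, 3), (2, 4), (3, 4)]:
    A[u, v] = A[v, u] = 1
# Vertices 0 and 1 both have neighborhood {2, 3}.
w = np.zeros(5)
w[0], w[1] = 1, -1
assert np.allclose(A @ w, 0)   # localized eigenvector, support size 2
```

In a regular graph the responsible subgraphs are more constrained, but the principle that a rare local configuration produces a localized eigenvector is the same.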

The delocalization of eigenvectors of (random and deterministic) regular graphs has been studied in [78, 32, 44, 15, 57, 24, 25, 14] (see also [69] for a survey of results on eigenvector delocalization in random matrices). Our result implies the optimal bound of order $N^{-1/2}$ (up to logarithmic corrections) on the $\ell^\infty$-norms of the (bulk) eigenvectors of random regular graphs.

For (deterministic) locally tree-like regular graphs, it was previously proved that the eigenvectors are weakly delocalized in the sense that their $\ell^\infty$-norms are bounded by an inverse power of $\log N$ [32, 44, 24], and that eigenvectors cannot concentrate on a small set, in the sense that any vertex set carrying a fixed fraction of the $\ell^2$-mass of an eigenvector must have at least $N^c$ elements [24]. Moreover, for deterministic locally tree-like regular expander graphs, it was proved that the eigenvectors satisfy a quantum ergodicity property: for test vectors with bounded entries and mean zero, averages of the corresponding eigenvector observables over many eigenvectors are close to zero [15, 25, 14].

Theorem 1.2 and the exchangeability of the random regular graph also imply the following isotropic version of Theorem 1.2, showing that the eigenvectors are delocalized not only in the standard basis, but in any deterministic orthonormal basis. In addition, a probabilistic version of the quantum unique ergodicity property (QUE) holds for these graphs. Note that the estimates (1.7), (1.8) are not uniform over all directions $\mathbf{q}$ or test vectors. Therefore the direction and the test vector cannot be chosen depending on the random graph.

Corollary 1.3.

Under the assumptions of Theorem 1.2, the following estimates hold with high probability with respect to the uniform measure on $\mathcal{G}_{N,d}$. For any deterministic $\mathbf{q} \in \mathbb{R}^N$ with $\|\mathbf{q}\|_2 = 1$ and $\mathbf{q} \perp (1, \dots, 1)$ ($\mathbf{q}$ can depend on $N$), and for all normalized eigenvectors $v$ whose eigenvalue $\lambda$ obeys $|\lambda| \le 2 - \kappa$, we have:

  1. (Isotropic delocalization) The eigenvectors are delocalized in the direction $\mathbf{q}$:

  2. (Probabilistic QUE) The eigenvector densities are flat with respect to deterministic test vectors:


    In particular, with high probability, simultaneously for any deterministic index sets and for all eigenvectors with eigenvalue in the bulk,


The proof of Corollary 1.3 makes strong use of the exchangeability of the random regular graph. On the other hand, the proof of Theorem 2.4, and its consequences Theorem 1.1 and Theorem 1.2, do not exploit exchangeability in a significant way, and we believe that the method could be extended, for example, to graphs with more general degree sequences.

1.3 Related results

Macroscopic eigenvalue statistics for random regular graphs of fixed degree have been studied using the techniques of Poisson approximation of short cycles [31, 52] and (non-rigorously) using the replica method [66]. These results show that the macroscopic eigenvalue statistics for random regular graphs of fixed degree are different from those of a Gaussian matrix. However, this is not predicted to be the case for the local eigenvalue statistics. Spectral properties of regular directed graphs have also been studied recently [28, 27].

The second largest eigenvalue of regular graphs is of particular interest. For the case of fixed degree, see in particular [42, 70, 20, 43, 29]. The conjecture that the distribution of the second largest eigenvalue on the scale $N^{-2/3}$ is the same as that of the largest eigenvalue of the Gaussian Orthogonal Ensemble [71] would imply that slightly more than half of all regular graphs are Ramanujan graphs, namely $d$-regular graphs whose nontrivial eigenvalues $\lambda$ all satisfy $|\lambda| \le 2\sqrt{d-1}$ (for explicit and probabilistic constructions of sequences of Ramanujan graphs, see [59, 62, 61]). The spectrum of random regular graphs has also received interest from the study of $\zeta$-functions, as it can be related by an exact relationship to the poles of the Ihara $\zeta$-function of regular graphs [49, 17]; see also [76, 77].

Another interesting direction related to the spectral properties of random regular graphs concerns the phase diagram of the Anderson model. The model was originally defined on the square lattice $\mathbb{Z}^d$, but only limited progress has been made on the delocalization problem in this setting. A simplified model on the infinite regular tree (Bethe lattice) is well understood [54, 2, 4, 3, 10, 9, 8, 7, 6, 5]; see also [11] for a review. At large disorder, it is known that the Anderson model on the random regular graph exhibits Poisson statistics [45]. The eigenstates of the Anderson model on the random regular graph have also been studied in connection with many-body localization [30, 60].

In random matrix theory, the local spectral statistics of generalized Wigner matrices are well understood; see in particular [51, 38, 37, 36, 41, 74, 40, 21, 34, 39]. Many results on local eigenvalue statistics also exist for Erdős–Rényi random graphs, in particular [35, 34, 48, 47]; the latter results apply down to logarithmically small average degrees. Similar results have also been proved for more general degree distributions [1, 12]. However, these types of results are false for the Erdős–Rényi graph with bounded average degree. For a review of other results for discrete random matrices, see also [79]. For the eigenvectors of random regular graphs of growing degree, asymptotic normality was proved in [22]; see also the prior results for generalized Wigner matrices [55, 75, 23]. For random regular graphs of fixed degree, a Gaussian wave correlation structure for the eigenvectors was predicted in [33] and partially confirmed in [16].

2 Main results II: Local approximation of the Green’s function

2.1 Graphs

The main result of this paper, Theorem 2.4 below, is a precise local approximation result of the Green’s function, which in particular implies the results stated in Section 1. To state the main result, we require several definitions, which we give below.

Graphs, adjacency matrices, Green’s functions

Throughout this paper, graphs are always simple (i.e., they have no self-loops or multiple edges) and have vertex degrees at most $d$ (non-regular graphs are also used). The geodesic distance (the length of the shortest path between two vertices) in a graph $\mathcal{G}$ is denoted by $d_{\mathcal{G}}(x, y)$. For any graph $\mathcal{G}$, the adjacency matrix $A$ is the (possibly infinite) symmetric matrix indexed by the vertices of the graph, with $A_{xy} = 1$ if there is an edge between $x$ and $y$, and $A_{xy} = 0$ otherwise. Throughout the paper, we denote the normalized adjacency matrix by $H = A/\sqrt{d-1}$, where the normalization by $\sqrt{d-1}$ is chosen independently of the actual degrees of the graph. Moreover, we denote by $E_{(a,b)}$ the (unnormalized) adjacency matrix of a directed edge $(a, b)$, i.e. the matrix with entry $1$ at position $(a, b)$ and $0$ elsewhere. The Green’s function of a graph is the unique matrix $G(z)$ defined by $G(z) = (H - z)^{-1}$ for $z \in \mathbb{H}$, where $\mathbb{H}$ is the upper half-plane.

In Appendix B, several well-known properties of the Green’s function are summarized; they will be used throughout the paper. The Green’s function encodes all spectral information of $H$ (and thus of $A$). In particular, the spectral resolution is given by $\eta = \operatorname{Im} z$: the macroscopic behavior corresponds to $\eta$ of order $1$, the mesoscopic behavior to $1/N \ll \eta \ll 1$, and the microscopic behavior of individual eigenvalues corresponds to $\eta$ below $1/N$.

Subsets and Subgraphs

Let $\mathcal{G}$ be a graph; we denote the set of its edges by the same symbol and its vertex set by $V(\mathcal{G})$. More generally, throughout the paper, we use blackboard bold letters for sets or subsets of vertices, and calligraphic letters for graphs or subgraphs. For any subset $\mathbb{X}$ of the vertices, we define the graph $\mathcal{G}^{(\mathbb{X})}$ by removing from $\mathcal{G}$ the vertices in $\mathbb{X}$ and the edges adjacent to $\mathbb{X}$; i.e., the adjacency matrix of $\mathcal{G}^{(\mathbb{X})}$ is the restriction of that of $\mathcal{G}$ to the complement of $\mathbb{X}$. We write $G^{(\mathbb{X})}$ for the Green’s function of $\mathcal{G}^{(\mathbb{X})}$. For any subgraph $\mathcal{S} \subset \mathcal{G}$, we denote by $\partial_V \mathcal{S}$ the vertex boundary of $\mathcal{S}$ in $\mathcal{G}$, and by $\partial_E \mathcal{S}$ the edge boundary of $\mathcal{S}$ in $\mathcal{G}$. Moreover, for any subset $\mathbb{X}$ of the vertices, we denote by $\partial_V \mathbb{X}$ and $\partial_E \mathbb{X}$ the vertex and edge boundaries of the subgraph induced by $\mathcal{G}$ on $\mathbb{X}$.


Given a subset $\mathbb{X}$ of the vertex set of a graph $\mathcal{G}$ and an integer $r \ge 0$, we denote the $r$-neighborhood of $\mathbb{X}$ in $\mathcal{G}$ by $B_r(\mathbb{X}, \mathcal{G})$, i.e., it is the subgraph induced by $\mathcal{G}$ on the set of vertices at distance at most $r$ from $\mathbb{X}$. In particular, $B_r(x, \mathcal{G})$ is the radius-$r$ neighborhood of the vertex $x$.

Moreover, given vertices $x, y$ in $\mathcal{G}$ and an integer $r \ge 0$, we denote by $P_r(x, y, \mathcal{G})$ the smallest subgraph of $\mathcal{G}$ that contains all paths of length at most $r$ between $x$ and $y$.

Notice that $P_r(x, y, \mathcal{G}) \subset B_r(x, \mathcal{G}) \cap B_r(y, \mathcal{G})$.


The infinite $d$-regular tree is the unique (up to isomorphism) infinite connected $d$-regular graph without cycles, and is denoted by $\mathbb{T}_d$. For $0 \le a \le d$, the rooted $d$-regular tree with root degree $a$ is the unique (up to isomorphism) infinite connected graph without cycles that is $d$-regular at every vertex except for a distinguished root vertex, which has degree $a$.

2.2 Tree extension

The local approximation of the Green’s function of a graph will be defined in terms of the tree extension, defined next.

Definition 2.1 (deficit function).

Given a graph $\mathcal{G}$ with vertex set $\mathbb{V}$ and degrees bounded by $d$, a deficit function for $\mathcal{G}$ is a function $g : \mathbb{V} \to \{0, 1, \dots, d\}$ satisfying $\deg_{\mathcal{G}}(v) + g(v) \le d$ for all vertices $v$. We call a vertex $v$ extensible if $\deg_{\mathcal{G}}(v) + g(v) < d$.

Figure 2: The left figure illustrates a finite graph ; its extensible vertices are shown as grey circles. The right figure shows the tree extension , in which a rooted tree (darkly shaded) is attached to every extensible vertex.
Definition 2.2 (tree extension).

Let $\mathcal{G}$ be a finite graph with deficit function $g$.

  1. The tree extension (abbreviated TE) of $\mathcal{G}$ is the (possibly infinite) graph $\operatorname{TE}(\mathcal{G})$ defined by attaching to every extensible vertex $v$ in $\mathcal{G}$ a rooted $d$-regular tree with root degree $d - \deg_{\mathcal{G}}(v) - g(v)$, identifying the root with $v$.

  2. The Green’s function of $\mathcal{G}$ with tree extension is the Green’s function of the (possibly infinite) graph $\operatorname{TE}(\mathcal{G})$.

See Figure 2 for an illustration of the tree extension. In our main result, stated in Section 2.3, we approximate the Green’s function of a regular graph at two vertices by that of the tree extension of a neighbourhood of these vertices. This requires the specification of a deficit function, which we will usually do using the following conventions for deficit functions, assumed throughout the paper.

Conventions for deficit functions

Throughout this paper, all graphs are equipped with a deficit function $g$. The interpretation of the deficit function is that it measures the difference between the degree of a vertex and its desired degree. We use the following conventions for deficit functions.

  • If the deficit function of $\mathcal{G}$ is not specified explicitly, it is given by $g(v) = d - \deg_{\mathcal{G}}(v)$.

    Thus no vertex is extensible and the tree extension of $\mathcal{G}$ is trivial: $\operatorname{TE}(\mathcal{G}) = \mathcal{G}$.

  • If $\mathbb{X}$ is a subset of the vertices of $\mathcal{G}$, and $g$ is the deficit function of $\mathcal{G}$, then the deficit function of the graph $\mathcal{G}^{(\mathbb{X})}$, obtained by removing $\mathbb{X}$, is given by $g^{(\mathbb{X})}(v) = g(v) + \deg_{\mathcal{G}}(v) - \deg_{\mathcal{G}^{(\mathbb{X})}}(v)$, unless specified explicitly.

    Thus, when the edges incident to $\mathbb{X}$ are removed from $\mathcal{G}$, they are also absent in the tree extension.

  • If $\mathcal{S} \subset \mathcal{G}$ is a subgraph (which was not obtained by removing a vertex set as above), then the deficit function of $\mathcal{S}$ is given by the restriction of the deficit function of $\mathcal{G}$ to $\mathcal{S}$, unless specified explicitly.

    Thus any vertex in $\mathcal{S}$ has the same degree in the tree extension of $\mathcal{S}$ as in the tree extension of $\mathcal{G}$.

The above conventions are illustrated in Figure 3. In particular, in the case that $\mathcal{G}$ is a $d$-regular graph, the deficit function is always $g \equiv 0$, so that $\operatorname{TE}(\mathcal{G}) = \mathcal{G}$. Moreover, by our conventions, the tree extension of a subgraph of a $d$-regular graph is again a $d$-regular graph.

Figure 3: Given a graph with the standard deficit function, the left figure illustrates a subgraph, which by our conventions inherits its deficit function by restriction. Thus all vertices of the subgraph have the same degrees in the tree extension as in the original graph. The right figure illustrates the graph obtained by removing a vertex set. By our convention on the deficit function, the tree extension of this graph is then trivial.
Definition 2.3.

Given an integer $r \ge 0$, we call the Green’s function of the tree extension $\operatorname{TE}(P_r(x, y, \mathcal{G}))$ the localized Green’s function of $\mathcal{G}$ at the vertices $x$ and $y$.

Thus the localized Green’s function at $x$ and $y$ is the Green’s function of a graph that itself depends on a small neighborhood of $x$ and $y$. However, the dependence on this neighborhood is weak, in the sense that, up to a small error, the graph could be replaced by any neighborhood of $x$ and $y$ that is not too small and not too large; see Proposition 5.2 and Remark 5.3.

In our main result, stated in Section 2.3 below, we will show that the Green’s function of $\mathcal{G}$ can be approximated by the localized Green’s function. To interpret this result, we note the following elementary properties of the localized Green’s function.

  • If $d_{\mathcal{G}}(x, y) > r$, then $P_r(x, y, \mathcal{G})$ is the empty graph, and therefore the localized Green’s function at $x$ and $y$ vanishes.

  • If $P_r(x, y, \mathcal{G})$ has no cycles (thus it is a tree), then its tree extension is an infinite tree. In particular, if $\mathcal{G}$ is $d$-regular, then the tree extension is the infinite $d$-regular tree $\mathbb{T}_d$, and therefore the localized Green’s function is that of $\mathbb{T}_d$. By a straightforward calculation (see Section 5), it then follows that

\[ G^{\mathbb{T}_d}_{xx}(z) = m_d(z), \qquad G^{\mathbb{T}_d}_{xy}(z) = m_d(z) \Big( {-\frac{m_{sc}(z)}{\sqrt{d-1}}} \Big)^{d(x,y)}, \qquad (2.2) \]

    where $m_d$ and $m_{sc}$ are the Stieltjes transforms of the Kesten–McKay and semicircle laws; see (2.3) below.

  • If $P_r(x, y, \mathcal{G})$ has bounded excess, then upper bounds similar to the right-hand side of (2.2) hold. In particular, the localized Green’s function is uniformly bounded in the bulk and decays exponentially in the distance $d(x, y)$, with rate $|m_{sc}(z)|/\sqrt{d-1} < 1$ per step (see Section 5).
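These decay properties can be checked numerically on a truncated tree. The sketch below assumes the standard closed forms $m_{sc}(z) = (-z + \sqrt{z^2 - 4})/2$ and $m_d(z) = (-z - \frac{d}{d-1} m_{sc}(z))^{-1}$; the choices of $d$, depth, and $z$ are illustrative:

```python
import numpy as np

d, depth = 3, 9
z = 0.2 + 1.0j

# Adjacency matrix of the d-regular tree truncated at the given depth:
# the root 0 has d children, every other internal vertex has d - 1.
edges = []
last_level, next_id = [0], 1
for _ in range(depth):
    new_level = []
    for v in last_level:
        for _ in range(d if v == 0 else d - 1):
            edges.append((v, next_id))
            new_level.append(next_id)
            next_id += 1
    last_level = new_level
n = next_id
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
G = np.linalg.inv(A / np.sqrt(d - 1) - z * np.eye(n))

# Standard closed forms (assumption of this sketch).
m_sc = (-z + np.sqrt(z * z - 4)) / 2     # branch with Im m_sc > 0
m_d = 1 / (-z - d / (d - 1) * m_sc)

# The diagonal entry at the root approximates m_d (the truncation error
# is tiny at this depth), and the entries along a ray from the root
# decay geometrically with rate |m_sc| / sqrt(d - 1) < 1 per step.
assert abs(G[0, 0] - m_d) < 1e-3
ray = [0]
for _ in range(4):
    ray.append(next(w for (u, w) in edges if u == ray[-1]))
rate = abs(m_sc) / np.sqrt(d - 1)
for k in range(1, 5):
    assert np.isclose(abs(G[0, ray[k]]), abs(m_d) * rate**k, rtol=0.05)
```

With these parameters the decay rate is roughly $0.4$ per step, so entries a few steps from the root are already an order of magnitude smaller than the diagonal entry.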

Kesten–McKay and semicircle law

Throughout this paper, the Stieltjes transform of the Kesten–McKay law and that of the closely related semicircle law play an important role. Let $\rho_d$ be the density of the (normalized) Kesten–McKay law (1.1) and $\rho_{sc}(x) = \frac{1}{2\pi}\sqrt{4-x^2}$ (for $x \in [-2,2]$) that of Wigner’s semicircle law. We denote their Stieltjes transforms by

\[ m_d(z) = \int \frac{\rho_d(x)}{x - z}\, dx, \qquad m_{sc}(z) = \int \frac{\rho_{sc}(x)}{x - z}\, dx. \qquad (2.3) \]

Then $m_d$ is explicitly related to $m_{sc}$ by the equation (see also Proposition 5.1)

\[ m_d(z) = \frac{1}{-z - \frac{d}{d-1}\, m_{sc}(z)}. \qquad (2.4) \]

Moreover, it is well known that $m_{sc}$ is a holomorphic bijection from the upper half-plane to the upper half of the unit disk, and that it satisfies the algebraic equation

\[ m_{sc}(z)^2 + z\, m_{sc}(z) + 1 = 0, \qquad (2.5) \]

and in particular that $m_{sc}(z) = \big({-z - m_{sc}(z)}\big)^{-1}$.
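These relations can be verified numerically. The following sketch checks the algebraic equation for $m_{sc}$, its mapping property at one point, and the relation between $m_d$ and $m_{sc}$ against direct quadrature of the Kesten–McKay density (the grid and test point are arbitrary choices):

```python
import numpy as np

d = 4
z = 0.5 + 0.7j

# Stieltjes transform of the semicircle law (branch with Im m_sc > 0).
m_sc = (-z + np.sqrt(z * z - 4)) / 2
assert abs(m_sc**2 + z * m_sc + 1) < 1e-12   # algebraic equation (2.5)
assert abs(m_sc - 1 / (-z - m_sc)) < 1e-12   # fixed-point form
assert m_sc.imag > 0 and abs(m_sc) < 1       # lands in upper half-disk

# Kesten-McKay density and its Stieltjes transform by quadrature.
trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))
x = np.linspace(-2 + 1e-9, 2 - 1e-9, 400001)
rho = (1 + 1/(d - 1) - x**2/d)**(-1) * np.sqrt(4 - x**2) / (2*np.pi)
assert abs(trapz(rho, x) - 1) < 1e-6         # probability density
m_d_quad = trapz(rho / (x - z), x)
m_d = 1 / (-z - d/(d - 1) * m_sc)            # relation (2.4)
assert abs(m_d_quad - m_d) < 1e-4
```

The `getattr` fallback handles the renaming of `numpy.trapz` to `numpy.trapezoid` across NumPy versions.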

2.3 Main result

Recall that $\mathcal{G}_{N,d}$ denotes the set of simple $d$-regular graphs on the vertex set $\{1, \dots, N\}$. Throughout the paper, we control error estimates in terms of (large powers of) a small parameter depending on $N$ and $d$, which, for fixed $d$, tends to zero as $N \to \infty$. We will often omit the spectral parameter $z$ from the notation if it is clear from the context.

Our main result is the following theorem.

Theorem 2.4.

Fix $d$ sufficiently large and $\kappa > 0$, and fix the radii $r$ and $R$ as in Section 3.1. Then, for $\mathcal{G}$ chosen uniformly from $\mathcal{G}_{N,d}$, each entry $G_{xy}(z)$ of the Green’s function is approximated by the corresponding entry of the localized Green’s function at $x$ and $y$, up to an error tending to zero as $N \to \infty$ (see (2.7)), with high probability, uniformly in the vertices $x, y$, and uniformly in the spectral domain (1.4). Here we assume that $N$ is large enough and that $Nd$ is even.

We emphasize that, for fixed $d$, the right-hand side of (2.7) converges to $0$ as $N \to \infty$, uniformly in the spectral domain (1.4). The constants in the statement of the theorem can be improved at the expense of a longer proof and a more complicated statement. We do not pursue this.

2.4 Interpretation of Theorem 2.4; proofs of Theorems 1.1, 1.2, and Corollary 1.3

Theorem 2.4 states that, in the spectral domain (1.4), the Green’s function entry $G_{xy}$ is well approximated by the localized Green’s function, which is random, but only depends on the local graph structure of $\mathcal{G}$ near the vertices $x$ and $y$. Since the local structure of a random regular graph is well understood, the theorem has a number of consequences. Specifically, under the assumptions of the theorem, it is well known that, with a radius $R$ of order $\log N$, one can assume that the radius-$R$ neighborhoods of all but a small power of $N$ many vertices of $\mathcal{G}$ coincide with those of the infinite $d$-regular tree, and that the $R$-neighborhoods of all other vertices have excess at most a fixed constant (see e.g. Proposition 4.1). Moreover, for the vertices $x$ that have radius-$R$ tree neighborhoods, we have (see e.g. Proposition 5.1)

\[ G_{xx}(z) = m_d(z) + o(1). \qquad (2.8) \]

The vertices $x$ whose $R$-neighbourhood has bounded excess still satisfy (see e.g. Proposition 5.2)

\[ |G_{xx}(z)| = O(1). \qquad (2.9) \]

Together with this information on the local graph structure, the result of Theorem 2.4 implies the results stated in Section 1.
Together with this information on the local graph structure, the result of Theorem 2.4 implies the results stated in Section 1.

Remark 2.5.

The equation (2.7) implies that the individual entries of the Green’s function do not concentrate. For example, the diagonal entry $G_{xx}(z)$ is given, up to a vanishing error, by the corresponding entry of the localized Green’s function,

and this first term can easily be seen to depend strongly on the local graph structure near $x$ (for instance, on whether $x$ lies on a short cycle). Its fluctuation therefore does not vanish as $N \to \infty$.

Proof of Theorem 1.1.

(2.7) and (2.8) imply that $G_{xx}(z) = m_d(z) + o(1)$ for all $z$ in the spectral domain (1.4) and for all but a small power of $N$ many vertices $x$. For the remaining vertices, by (2.9), we still have $|G_{xx}(z)| = O(1)$. Thus

\[ m_N(z) = \frac{1}{N} \sum_x G_{xx}(z) = m_d(z) + o(1), \]

as claimed. ∎

Proof of Theorem 1.2.

(2.7) and (2.9) imply that $\operatorname{Im} G_{xx}(z) = O(1)$ for all $z$ in the spectral domain (1.4) and all vertices $x$. Taking $z = \lambda + i\eta$ with $\eta = N^{-1+\varepsilon}$, it follows that, for any normalized eigenvector $v$ with eigenvalue $\lambda$ obeying $|\lambda| \le 2 - \kappa$,

\[ |v(x)|^2 \le \eta\, \operatorname{Im} G_{xx}(\lambda + i\eta) = O(N^{-1+\varepsilon}), \]

which implies the claim (1.5). ∎

Proof of Corollary 1.3.

In [19, Section 8], it is proved that any exchangeable random vector $u \in \mathbb{R}^N$ satisfies, for any (deterministic) $\mathbf{q} \in \mathbb{R}^N$ with $\|\mathbf{q}\|_2 = 1$ and $\mathbf{q} \perp (1, \dots, 1)$, and for any even integer $p$,


Let $\Xi$ be the indicator function of the event that, for all eigenvectors $v$ with eigenvalue $\lambda$ obeying $|\lambda| \le 2 - \kappa$, the delocalization estimate $\|v\|_\infty \le \zeta$ holds, where $\zeta = N^{-1/2+\varepsilon}$. Let $v_\alpha$ be the normalized eigenvector corresponding to the $\alpha$-th largest eigenvalue $\lambda_\alpha$ of $H$, and set $u_\alpha = \Xi\, v_\alpha$. The entries of $u_\alpha$ are exchangeable, by the exchangeability of the random regular graph. By (2.10) with $u = u_\alpha$ and Markov’s inequality, for $p$ large enough,

By a union bound over the eigenvector index $\alpha$, it follows that

where the maximum is over all $\alpha$ with $|\lambda_\alpha| \le 2 - \kappa$. Since $\Xi = 1$ with high probability by Theorem 1.2, and choosing $p$ large enough, we have

which implies the claim. The proof of (1.7) is analogous. ∎

3 Proof outline and main ideas

In this section, we give a high-level outline of the proof of Theorem 2.4, whose details occupy the remainder of the paper. The proof is based on the general principle that, for small distances, a random regular graph behaves almost deterministically, while on the other hand, for large distances, it behaves much like a random matrix.

3.1 Parameters

Throughout the paper, we fix the degree $d$ (large but fixed), the bulk parameter $\kappa > 0$, an excess bound $\omega$, and several further constants, and we set the radii $r$ and $R$, both of order $\log N$, used to define neighborhoods. We also write the spectral parameter as $z = E + i\eta$, with $\eta > 0$ in the range given by (1.4). We always assume that $Nd$ is even and that $N$ is sufficiently large (depending on the previous parameters).

3.2 Structure of the proof

The proof consists of several parts, which we briefly describe in this section. Here, we also define several subsets of $\mathcal{G}_{N,d}$, distinguished by the estimates that the graphs in them satisfy. These sets depend on the spectral parameter $z$ and on further parameters (and also on the previously fixed parameters).

Small distance structure; the set

The small distance behavior is captured in terms of cycles in neighborhoods of radius $R$. For any graph, we define the excess to be the smallest number of edges that must be removed to yield a graph with no cycles (a forest). Then, with $R$ as fixed above, we define the a priori set to consist of graphs such that

  • the radius-$R$ neighborhood of any vertex has excess at most a fixed constant $\omega$;

  • the number of vertices that have an $R$-neighborhood containing a cycle is at most a small power of $N$.

This a priori set provides rough stability at small distances. All regular graphs appearing throughout the paper will be members of it. It is well known that it has probability $1 - o(1)$; see Proposition 4.1.
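The excess can be computed with the standard identity excess $= |E| - |V| + {}$(number of connected components), which counts independent cycles; a minimal sketch:

```python
def excess(n_vertices, edges):
    """Excess = |E| - |V| + (#connected components): the number of
    independent cycles, i.e. the minimal number of edges whose removal
    leaves a forest."""
    # Count connected components with a small union-find structure.
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(x) for x in range(n_vertices)})
    return len(edges) - n_vertices + components

assert excess(3, [(0, 1), (1, 2), (2, 0)]) == 1   # a triangle: one cycle
assert excess(4, [(0, 1), (1, 2), (2, 3)]) == 0   # a path: a tree
assert excess(4, [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3)]) == 2
```

Applied to the radius-$R$ ball around a vertex, this quantity is exactly the neighborhood excess appearing in the definition above.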

Green’s function approximation; the sets and

For $z \in \mathbb{H}$, we define the corresponding set of graphs by the condition that for any two vertices $x$ and $y$ it holds that


Our main goal is to prove that this event has high probability uniformly in the spectral domain (1.4). That it has high probability is not difficult to show if $\operatorname{Im} z$ is large enough; see Section 6. To extend this estimate to smaller $\operatorname{Im} z$, we define a second set by the same conditions, except that the right-hand side in (3.2) is smaller by a constant factor:


Our main goal is to show that, for any $z$ in the spectral domain (defined in (1.4); see also (16.15)), if the weaker estimate holds with high probability, then the event that the stronger estimate fails has very small probability, so that the stronger estimate also holds with high probability. Then, by the Lipschitz continuity of the Green’s function, it follows that the weaker estimate holds for spectral parameters with slightly smaller imaginary part, and thus again with high probability. This step can be repeated, and since the resulting spectral parameters cover the whole domain (1.4), it follows that the estimate holds for all $z$ in (1.4) with high probability.

Local resampling

To show that the complementary event has small probability, we use the random matrix-like structure of random regular graphs at large distances. To this end, we fix a vertex, without loss of generality chosen to be the vertex $1$, and abbreviate its radius-$r$ neighborhood (as a set of vertices in $\mathcal{G}$ and as a graph, respectively; see Section 2 for our notational conventions) by $\mathbb{T}$ and $\mathcal{T} = B_r(1, \mathcal{G})$.
In Section 7, we resample the boundary of the neighborhood $\mathcal{T}$ by switching the boundary edges with uniformly chosen edges from the remainder of the graph. On the vertex set $\mathbb{T}$, the switched graph coincides with the unswitched graph $\mathcal{G}$, but the boundary of $\mathbb{T}$ in the switched graph is now essentially random compared to the original graph $\mathcal{G}$.

Given $\mathcal{G}$, the switching is specified by the resampling data $S$, which consists of independently and uniformly chosen oriented edges from the remainder of the graph. The local resampling is implemented by switching each boundary edge of $\mathcal{T}$ with one of the independently chosen edges encoded by $S$. In fact, in this operation, not all pairs of edges can be switched (are switchable) while keeping the graph simple. Therefore, given $S$, we denote the index set of switchable edges by $W$ (see Section 7 for the definition); switching the edges indexed by $W$ leaves the uniform measure on $\mathcal{G}_{N,d}$ invariant. For notational convenience, without loss of generality, we assume throughout the paper (except in the definition in Section 7) that the switchable edges are indexed by $1, \dots, \mu$, where $\mu = |W|$.

Switching from to

Throughout Sections 8–15, we condition on a graph $\mathcal{G}$ that satisfies certain estimates, and only use the randomness of the switching that specifies how to modify $\mathcal{G}$ to the switched graph. By our choice of $r$, and using that $\mathcal{T}$ has bounded excess (which we can and do assume), the number of edges in the boundary of $\mathcal{T}$ is of order $(d-1)^r$. The randomness of these edges ultimately provides access to concentration estimates, which exhibit the random matrix-like structure of the random regular graph at large distances.

Note that, if we remove the vertex set $\mathbb{T}$ from $\mathcal{G}$, our switchings have a simpler effect than in $\mathcal{G}$: they only consist of removing the chosen random edges and adding, instead, edges connecting their endpoints to the former boundary vertices of $\mathcal{T}$, for the switchable indices. Therefore, instead of studying the change from the unswitched to the switched graph at once, it will be convenient to analyze the effect of the switching in several steps. For this, we define the following graphs (which need not be regular):

  • the original unswitched graph;

  • the unswitched graph with the vertices of $\mathbb{T}$ removed;

  • the intermediate graph obtained from the previous one by removing the chosen random edges;

  • the switched graph with the vertices of $\mathbb{T}$ removed, obtained from the intermediate graph by adding the switched edges; and

  • the switched graph (including the vertices of $\mathbb{T}$).

Following the conventions of Section 2.2, the deficit functions of these graphs are given by $g(v) = d - \deg(v)$, where $\deg$ is the degree function of the graph considered, and we abbreviate the corresponding Green’s functions accordingly.

Distance estimates

To use the local resampling, we require some estimates on the local distance structure of graphs and some a priori estimates on their Green’s functions. These are collected in Sections 8–9. In fact, we use both the usual graph distance (of the unswitched and switched graphs) and a notion of “distance” that is defined in terms of the size of the Green’s function of the graph from which the set $\mathbb{T}$ is removed (again for the unswitched and the switched graph).

The need for the Green’s function distance arises as follows. While estimates that involve sums over the diagonal of the Green’s function can be controlled quite well using only the graph distance, estimates of sums of off-diagonal terms are more delicate because the number of terms is squared compared to the diagonal terms. By direct combinatorial arguments, it would be difficult to control large distances sufficiently precisely. However, to understand spectral properties, it is the size of the Green’s function rather than the distance itself that is relevant; and while the size of the Green’s function between two vertices is directly related to the distance between them if there are only few cycles, on a global scale (where many cycles could be present) cancellations can make the Green’s function much smaller. These cancellations are captured in terms of a Ward identity, which states that the Green’s function of any symmetric matrix obeys (see also Appendix B)

\[ \sum_{y} |G_{xy}(z)|^2 = \frac{\operatorname{Im} G_{xx}(z)}{\operatorname{Im} z}. \]
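The Ward identity can be checked directly for any symmetric matrix; a minimal numerical sketch (with an arbitrary Wigner-type matrix, unrelated to the graphs of the paper):

```python
import numpy as np

# Numerical check of the Ward identity for the Green's function of a
# real symmetric matrix H: sum_y |G_xy|^2 = Im G_xx / Im z.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
H = (M + M.T) / 2                      # any real symmetric matrix
z = 0.3 + 0.05j
G = np.linalg.inv(H - z * np.eye(20))
lhs = np.sum(np.abs(G) ** 2, axis=1)   # row sums of |G_xy|^2
rhs = G.diagonal().imag / z.imag       # Im G_xx / eta
assert np.allclose(lhs, rhs)
```

The identity follows from the resolvent identity $G(z) - G(\bar z) = (z - \bar z)\, G(z) G(\bar z)$, and it is what converts off-diagonal sums into diagonal information.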

Removing the neighborhood and stability under resampling; the sets

Our goal is to show that estimates on the Green’s function of $\mathcal{G}$ improve near the vertex $1$ under the above mentioned local resampling. For this, we work with the Green’s function of the graph obtained from $\mathcal{G}$ by removing the vertex set $\mathbb{T}$ (on which the graph does not change under switching).

As a preliminary step to showing that the estimates for the Green’s function improve, we show that they are stable under the operations of removing $\mathbb{T}$ and resampling, i.e., roughly, that estimates analogous to those assumed continue to hold. More precisely, in Section 10, we show that if the graph $\mathcal{G}$ satisfies the assumed estimates, then the (non-regular) graph with $\mathbb{T}$ removed obeys the analogous estimate


We define a further set similarly to the set defined by (3.2), except that the graph is replaced by the graph with $\mathbb{T}$ removed (and with a different constant), i.e., it is the set of graphs such that


Clearly, by (3.6), the former set is contained in the latter. In Section 11, we show that if the graph $\mathcal{G}$ obeys the (stronger) estimate (3.6), then with high probability the resampled graph obeys the analogous estimate.

Locally improved Green’s function approximation; the sets

This set is defined by the improved estimates (15.1)–(15.4) near the vertex $1$, with an improved constant. In Sections 12–15, it is proved that if we start with a graph satisfying the estimates of the previous steps, then, with high probability with respect to the local resampling around the vertex $1$, the switched graph satisfies the improved estimates.


To sum up, the argument outlined above shows that, for any graph satisfying the assumed estimates, with high probability with respect to the randomness of the local resampling, the switched graph satisfies the locally improved estimates. However, our goal was to show that a uniform $d$-regular graph satisfies the improved estimates, except for an event of small probability. This follows from the statement proved for the switched graph, using that our switching acts as an involution on the larger product probability space (see Proposition 7.5).

Self-consistent equation

The sets above depend on the choice of the vertex $1$. However, for any vertex $x$, we can define the analogous sets in the same way, by replacing the vertex $1$ in the above definitions by the vertex $x$ (or by using symmetry). By a union bound, the intersection of the corresponding events over all vertices $x$ then also holds with high probability. On the latter event, we derive (in Section 16) a self-consistent equation for the quantity

where the sum ranges over the set of oriented edges of the graph, and