Expansion of percolation critical points
for Hamming graphs
Abstract.
The Hamming graph H(d,n) is the Cartesian product of d complete graphs on n vertices. Let m = d(n-1) be the degree and V = n^d be the number of vertices of H(d,n). Let p_c be the critical point for bond percolation on H(d,n). We show that, for fixed d and as n tends to infinity,
which extends the asymptotics found in [BorChaHofSlaSpe05b] by one order. The term is the width of the critical window. For we have , and so the above formula represents the full asymptotic expansion of . In [FedHofHolHul16a] we show that this formula is a crucial ingredient in the study of critical bond percolation on for . The proof uses a lace expansion for the upper bound and a novel comparison with a branching random walk for the lower bound. The proof of the lower bound also yields a refined asymptotics for the susceptibility of a subcritical Erdős–Rényi random graph.
MSC 2010. 60K35, 60K37, 82B43.
Keywords and phrases. Hamming graph, percolation, critical point, critical window, lace expansion.
1. Introduction and main result
1.1. Percolation on the Hamming graph
The Hamming graph H(d,n) is the Cartesian product of d complete graphs on n vertices (e.g., ). Bernoulli bond percolation is the model where, given a graph, each edge is retained independently with the same probability . In this paper we study the location of the critical point of bond percolation on H(d,n) for the phase transition in the size of the largest connected component when d is fixed and n tends to infinity.
Formally, we define the Hamming graph for as the graph with vertex set and edge set
(1.1) 
Thus, is a transitive graph on vertices with degree . Bernoulli bond percolation is synonymous with the probability space , where and is the measure such that
(1.2) 
where is the Kronecker delta. When we say that the edge is open, when we say that the edge is closed. Given a vertex , we write for the graph whose vertex set consists of all vertices that can be reached from through a path of open edges, and whose edge set consists of all open edges between these vertices. We call the connected component of , or cluster of , and write for its number of vertices. We write for the cluster with the largest cardinality (using some tie-breaking rule). Two of the main objects of study in percolation are and , the cardinalities of and .
(1.3) 
is nontrivial, i.e., (see for example Grimmett [Grim12]) for most infinite graphs (an exception being ). Moreover, Aizenman and Barsky [AizBar87] and independently Menshikov [Mens86] proved that on transitive graphs,
(1.4) 
Since we consider percolation on with finite and is a product measure, any event that is measurable with respect to has a probability that is a polynomial in , and therefore is continuous in : the finite model cannot undergo a nontrivial phase transition in as described above. Nevertheless, it does make sense to study the percolation phase transition on finite graphs in the limit as . To see why, let us give a rough sketch of an important related problem: the emergence of the giant component in the Erdős–Rényi Random Graph (ERRG).
1.2. Giant component
The Erdős–Rényi random graph is the common name for percolation on the complete graph . Erdős and Rényi [ErdRen59] proved that in the limit as , if , then w.h.p., ^{1}^{1}1Given a sequence of random variables , we write w.h.p. (with high probability) if there exist constants such that as . while if , then w.h.p. Moreover, zooming in on the transition point by choosing for a sequence such that , Bollobás [Bol84] showed that ^{2}^{2}2Subsequent results in [LucPitWie94, Ald97, Pit01, NacPer07] are much sharper and more comprehensive than what is summarized here, and there is an extensive body of literature on the problem.

w.h.p. when (subcritical),

w.h.p. when (critical),

w.h.p. when (supercritical).
What this shows is that the size of the largest component undergoes a sharp transition around . As mentioned above, there is no critical point for a finite graph, but the transition occurs in a slice of the parameter space with a width of order , which is asymptotically vanishing with respect to the center of the window located around . This behaviour inspired the notion of critical window: to indicate that the transition of the ERRG occurs around in a range of width , we use the shorthand notation ^{3}^{3}3Given three sequences , we write that when there exists a constant such that for all .
(1.5) 
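The sharp transition described above is easy to observe numerically, even at modest sizes. The following is a minimal, self-contained simulation sketch (our own illustration, not code from the paper; all function names are ours): it percolates the complete graph K_n using a union-find structure and reports the largest component size for one subcritical and one supercritical choice of p.

```python
import random

def largest_component(n, p, seed):
    """Percolate K_n: keep each of the n(n-1)/2 edges independently
    with probability p, then return the size of the largest connected
    component, computed with a union-find (disjoint-set) structure."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 1000
sub = largest_component(n, 0.5 / n, seed=1)  # subcritical: largest cluster O(log n)
sup = largest_component(n, 2.0 / n, seed=1)  # supercritical: giant cluster, linear in n
print(sub, sup)
```

With these parameters the subcritical largest component is of order a few dozen vertices, while the supercritical one contains a positive fraction of the graph, illustrating the dichotomy around p = 1/n.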
Erdős and Spencer [ErdSpe79] conjectured that if we replace by a more “geometric” graph sequence (their primary candidate was , the dimensional hypercube, with ), then the critical behaviour should remain largely intact. In fact, it turned out that to a large extent the picture is the same for a large class of graph sequences with “sufficiently weak” geometries.
Of particular interest to us here are the papers by Borgs et al. [BorChaHofSlaSpe05a, BorChaHofSlaSpe05b, BorChaHofSlaSpe06], demonstrating that graph sequences satisfying the so-called triangle condition (which serves as an indicator of what is meant by sufficiently weak geometry; see e.g. [AizNew84, BorChaHofSlaSpe05a, BorChaHofSlaSpe05b, BorChaHofSlaSpe06, HarSla90]) have a phase transition that strongly resembles that of the ERRG, and that both and satisfy the triangle condition. More precisely, consider a sequence of vertex-transitive graphs of degree , and write . Write for the event that , and define the two-point function and the susceptibility (note that does not depend on by transitivity and that depends on the relative difference of and only because the graphs under consideration are tori). The triangle condition is satisfied for percolation on if for all such that for some sufficiently small , and for all , we have ^{4}^{4}4Here and below we will frequently suppress sub- and superscripts when their presence is clear from context. Likewise, we do not always stress that we are considering asymptotic results for sequences.
(1.6) 
Borgs et al. prove that the triangle condition holds for a class of models that includes for any fixed (see [BorChaHofSlaSpe05b, Theorem 1.3]). An alternative proof, applying to e.g. Hamming graphs and hypercubes, was given by van der Hofstad and Nachmias [HofNac12, HofNac13].
1.3. Critical window
Fix some and define as the unique solution of the equation
(1.7) 
Borgs et al. [BorChaHofSlaSpe05a, BorChaHofSlaSpe05b] prove that if we consider percolation on a sequence that satisfies the triangle condition (1.6) with and , then we see subcritical behaviour when and critical behaviour when , just as in the ERRG. Sharper results about mean-field supercritical behaviour of percolation models when were derived later by van der Hofstad and Nachmias [HofNac12], who investigate the supercritical phase and thus establish that (1.7) really constitutes the critical window for several high-dimensional tori including the hypercube and Hamming graphs. Moreover, it was shown in [BorChaHofSlaSpe05a, Theorem 1.1] that the critical window satisfies
(1.8) 
and that for any , i.e., any choice of yields the same critical window.
Compare (1.8) with the critical window of the ERRG in (1.5), and note that has because and . Thus, by that analogy, the second error term above corresponds to the width of the critical window, while the first error term can be viewed as a “correction” in to itself. In this interpretation, (1.8) describes the critical window asymptotically precisely for the two-dimensional Hamming graph , since in this case and , so that the correction term is vanishingly small compared to . Moreover, (1.8) is also asymptotically precise for because the two terms coincide.
1.4. Expansion of the critical point
This brings us to the main result of our paper. We write for the critical value of percolation on defined in (1.7), and compute the second term of for all :
Theorem 1.1 (Critical window for percolation on ).
For all and all ,
(1.9) 
where the constants in the error terms may depend on .
Observe that for , the correction term of order is asymptotically larger than the width of the critical window, and that when the above expansion is again asymptotically precise, since we have .
To see the relevance of Theorem 1.1, we compare it with other expansions of in the literature. Van der Hofstad and Slade [HofSla06] proved that for percolation on , with either the infinite lattice with nearest-neighbour edges or the hypercube , as , can be expanded up to three terms as
(1.10) 
where in both cases denotes the degree of the graph . Moreover, they [HofSla05] also proved that, for any ,
(1.11) 
where are rational coefficients. The critical window of the hypercube has width , so we believe that the expansion cannot be asymptotically precise, regardless of the choice of . Furthermore, it was conjectured that the expansion for , although it may exist, is divergent for all as (in the sense that the power series has radius of convergence ). We conjecture that the expansion for the Hamming graph is very different. We believe that for any there exist coefficients such that
(1.12) 
i.e., we conjecture that has an asymptotically precise expansion in of order for all . Heydenreich and van der Hofstad state the conjecture in (1.12) as [HeyHof16, Open Problem 15.4].
Theorem 1.1 in [BorChaHofSlaSpe05a] confirms this conjecture for , and our current work confirms it for . The argument of van der Hofstad and Slade [HofSla05] establishing (1.11) for the lattice and the hypercube crucially uses the fact that a ball of a radius restricted to a dimensional subspace has the same shape for all , so that we can express each coefficient in terms of events that happen on a fixed subgraph. Balls in the Hamming graph instead grow very rapidly when increases. Each coefficient is obtained as a limit and it will be more involved to prove the existence of this limit. Hence we do not have significant evidence suggesting that all coefficients in (1.12) have to be rational.
We note that the existence of a finite asymptotically precise expansion makes the proof of the critical window of the Hamming graph more challenging than for the hypercube. Roughly speaking, because the critical window of the hypercube is exponentially narrower than any of the expansion terms, we can approximate up to any fixed order by a value that is in fact subcritical, by choosing a negative coefficient for the error term. This allows one to exploit the fact that is polylogarithmic in , which simplifies the analysis considerably. In our case, the approximating will be much closer to , and so we need a much more refined analysis. We will explain this in more detail in Section LABEL:subsect:Pin.
1.5. Scaling limit of largest cluster sizes
Besides offering an interesting comparison with other graphs with sufficiently weak geometry, the expansion of also has another motivation. The Hamming graph is an excellent example to investigate the universality class of the ERRG, since it has a nontrivial geometry yet is highly mean-field. See [HofLuc10, HofLucSpe10, FedHofHul15, MilSen16] for a small sample of the literature from this perspective. A crucial motivation for the present paper is that it serves as a companion paper to [FedHofHolHul16a], where we establish the scaling limit of the sizes of the largest clusters within the critical window. More precisely, writing for the th largest cluster, we prove that for any fixed and for the largest critical clusters of Hamming graph percolation satisfy
(1.13) 
for a certain sequence of dependent continuous random variables supported on . Aldous [Ald97] proved this scaling limit for the ERRG. Since then, many other random graph models have been shown to have the same (or at least a similar) scaling limit. See for instance [BhaHofLee10a, BhaHofLee12, Jos14, NacPer10] and the references therein. The above result for the Hamming graph, however, is the first indication that the same scaling occurs for models with an underlying highdimensional geometry. Moreover, it is the most precise determination to date of the critical behaviour of percolation on a finite transitive graph (other than the ERRG scaling limit of Aldous). The proof of (1.13) and various other results in [FedHofHolHul16a] crucially rely on the asymptotically precise determination of the critical window that we give here.
1.6. Alternative definition of the critical point
It is worth noting that a disadvantage of the definition in (1.7) is that it imposes an ad hoc relation between and , which is known not to hold in general and is believed to be associated with “high-dimensional” models. In other words, (1.7) is possibly only a valid definition of for percolation models in the universality class of the ERRG. Nachmias and Peres in [NacPer08] observed that it would be desirable to have a definition of that applies more generally, and they proposed
(1.14) 
as a definition of the critical point for any graph . Their motivation for this definition is that Russo’s formula [Rus81] implies that is the point where a small change in has the greatest impact on the relative size of the connected components, i.e., changes most dramatically at . A serious downside of this definition appears to be that may be very difficult to compute. Thus far, the only nontrivial determination of is given in recent work by Janson and Warnke [JanWer16]. They determine that, for the ERRG, , so is a point inside the critical window (1.5), that around describes the critical window (1.5) as well, and that, interestingly, does not equal either or . It would be interesting to see whether their methods can be applied to the current setting of percolation on .
1.7. Susceptibility of the subcritical ERRG
In Section 2 we prove Theorem 1.2 below, deriving refined asymptotics for the susceptibility of a subcritical ERRG, its second moment, and its surplus: given a connected graph , let denote the number of surplus edges in . Besides being interesting in their own right, these will be crucial for proving the lower bound on , because the restriction of critical percolation on to a one-dimensional subspace of is equivalent to a subcritical ERRG. To prove the lower bound of Theorem 1.1 we rely on the following asymptotics, which, to the best of our knowledge, are sharper than results in the literature:
Theorem 1.2 (Second-order asymptotics for the susceptibility of the subcritical ERRG).
Let be the ERRG with and . Then as ,
(1.15)  
(1.16)  
(1.17) 
The second-order coefficient computed in (1.15) improves the result by Durrett in [Dur07, Theorem 2.2.1], which states that , while (1.16) provides the matching lower bound to the well-known upper bound derived with the usual branching process domination. To achieve the sharper asymptotics we need a new way to encode the usual breadth-first search in the ERRG with the help of a branching random walk. We believe that there exists an infinite polynomial expansion of in powers of for all with . There is substantial literature related to (1.17), see e.g. the classic book on random graphs by Bollobás [Boll01, Section 5.2] as well as the seminal paper by Janson, Knuth, Łuczak and Pittel [JanKnuLucPit93] computing generating functions of components having various cycle structures. As far as we are aware, the second-order asymptotics in (1.17) is new.
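While the refined second-order terms of Theorem 1.2 are the point of the theorem, the leading order of the susceptibility, namely that it tends to 1/(1-λ) for a subcritical ERRG with edge probability λ/n, is easy to check by simulation. The following sketch is our own illustration (it does not reproduce the paper's second-order corrections): it computes the susceptibility of one sample exactly as (1/n)·Σ|C|² over clusters, and averages over independent samples.

```python
import random

def susceptibility(n, lam, seed):
    """Susceptibility chi = E_v |C(v)| = (1/n) * sum over clusters of
    |C|^2, for one sample of the Erdos-Renyi graph G(n, lam/n),
    computed with a union-find structure."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    p = lam / n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(s * s for s in sizes.values()) / n

lam = 0.5
est = sum(susceptibility(400, lam, s) for s in range(25)) / 25
print(est)  # close to the leading order 1/(1 - lam) = 2
```

Averaging over graphs makes the estimate concentrate near 1/(1-λ); the finite-n corrections that Theorem 1.2 quantifies are too small to be visible at this resolution.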
1.8. Outline
We prove Theorem 1.1 by separately proving a lower bound and an upper bound on . In Section 2 we prove Theorem 1.2. This theorem is used in Section LABEL:sectlow to prove the lower bound in Theorem 1.1 with the help of an exploration process that uses the fact that the restriction of critical bond percolation on to a one-dimensional subspace has the same distribution as a subcritical ERRG. This is used to obtain a sharp enough branching process upper bound on the susceptibility. In Section LABEL:sectconn we estimate connection probabilities as well as bubble, triangle and polygon diagrams. In Section LABEL:sectup we prove the upper bound in Theorem 1.1 with the help of the lace expansion. Perhaps surprisingly, these disparate methods yield compatible bounds, due to the fact that both methods are asymptotically sharp. The lace expansion method could perhaps be improved to prove all of Theorem 1.1, but this would be more difficult than our current proof and less interesting. We do not see how the exploration process proof could be improved to also prove the upper bound in Theorem 1.1.
2. Susceptibility of the subcritical Erdős–Rényi Random Graph
In this section we prove Theorem 1.2. To give our estimate of the expected size of a subcritical cluster, we couple a breadth-first exploration process of the cluster to a process related to a Branching Random Walk (BRW). The breadth-first exploration process is defined in Section 2.1, the branching random walk exploration in Section 2.2. The proof of the susceptibility asymptotics is given in Section LABEL:ssectsa.
2.1. Breadth-first/surplus exploration
We start by defining a version of the breadth-first (BF) exploration. This is a very standard tool in the study of the ERRG (see e.g. [Hofs17, Section 5.2.1]). In a nutshell, a breadth-first exploration is a process that, starting from a vertex , “discovers” its adjacent edges, “activating” the direct neighbours of in some fixed order, and then explores those vertices, discovering their adjacent edges and activating any unexplored, unactivated neighbours, and so on, always choosing the vertex that was activated the longest time ago as the next vertex to explore from. The BF exploration keeps track of which vertices have been explored (the “dead” set), which vertices have been activated but not explored (the “active” set), and the time at which a vertex was activated or explored. Crucially, the “traditional” BF exploration will only explore a vertex once, so the process terminates once all vertices are explored, and the edges associated with newly activated vertices describe a subtree of the component of , but the process provides little information about the surplus, i.e., the discovered edges that do not activate new vertices (also sometimes referred to as the “tree excess” of the graph). For our purposes it is important that we also know about the surplus, so we consider the following modification of the BF exploration:
Definition 2.1 (BF exploration process of a graph).
Given a graph and a vertex we define the breadth-first/surplus (BF) exploration process as the sequence of dead, active and surplus sets as follows:

Initiation. Initiate the exploration with the dead, active and surplus sets at time as
(2.1) and at time as
(2.2) 
Time . Choose the vertex that minimizes , breaking ties according to an arbitrary but predetermined rule.^{5}^{5}5An example of such a rule: Fix an order on the vertex set . If at step we have explored and/or activated a total of vertices, and we activate more at step , then we assign to these newly explored vertices the labels through , according to the order on . At time we explore from the active vertex with the smallest label. Update the active, dead and surplus sets as follows:
(2.3) 
Stop. Terminate the exploration when . Set .
Note that and are subsets of , whereas is a subset of . When , this means that we have completely explored the connected component and . In the BF we find a new edge every time we activate a vertex (except the initial vertex ) or we discover an edge between active vertices. It follows that . We conclude that .
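To make Definition 2.1 concrete, here is a small code sketch (our own; the graph is passed as an adjacency dictionary, and the tie-breaking rule is the natural order on vertex labels). It tracks the dead set and the surplus edges, and on any finite graph it satisfies the relation |E(C(v))| = |C(v)| - 1 + |surplus| discussed above.

```python
from collections import deque

def bf_surplus(adj, v0):
    """Breadth-first/surplus exploration, in the spirit of
    Definition 2.1: explore from the vertex activated longest ago;
    an edge to an unseen vertex activates it (tree edge), an edge to
    an active vertex is a surplus edge, and edges to dead vertices
    were already discovered earlier.  Returns (dead set = cluster of
    v0, set of surplus edges)."""
    dead, surplus = set(), set()
    active = deque([v0])
    seen = {v0}
    while active:
        u = active.popleft()          # oldest active vertex
        dead.add(u)
        for w in sorted(adj[u]):      # fixed tie-breaking rule
            if w not in seen:         # tree edge: activate w
                seen.add(w)
                active.append(w)
            elif w not in dead:       # w is active: surplus edge
                surplus.add(frozenset((u, w)))
    return dead, surplus

# 4-cycle 0-1-2-3 with the chord {0, 2}: 4 vertices, 5 edges
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
cluster, sp = bf_surplus(adj, 0)
print(len(cluster), len(sp))  # 5 edges = (4 - 1) tree edges + 2 surplus
```

On this example the exploration finds the full cluster of size 4 and exactly two surplus edges, matching the edge count 5 = 3 + 2.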
2.2. The branching random walk exploration
The subtree generated by a “traditional” BF exploration is often studied through a comparison to a branching process (see e.g. [Hofs17, Dur07]). To study our BF exploration, we define a suitable extension, the branching random walk (BRW) exploration, in which we randomly embed a branching process in the graph, and keep track of its self-intersections.^{6}^{6}6From now on the term nodes will refer to elements of GW trees, while vertices will refer to elements of graphs. Moreover, the progeny of a node will indicate the set of nodes whose path to the root passes through , while the children of are only the nodes for which is the first node encountered on such a path. We write for the set of children of in . This is made precise in the following definition:
Definition 2.2 (Branching random walk).
Given an regular graph and , we define the branching random walk (BRW) on started at as the pair , where is a Galton–Watson tree, and is a random mapping of into the vertex set whose law satisfies: (1) maps the root of to ; (2) given any node and its set of children , the marginal law of is the same as that of distinct neighbours of in chosen uniformly at random, independently for all . (Here, for a set and a mapping , we define , and by convention set .)
Next, we define a process that explores a BRW and keeps track of any selfintersections. Briefly, the idea is that we explore the BRW by exploring the tree in a breadthfirst fashion from the root upward. If the BRW intersects its own trace, then we declare the particle that intersected, and all its offspring, to have become “ghosts”. We differentiate between particles that became ghosts through intersecting with active and dead vertices. In Proposition 2.4 below we prove that this exploration process can be coupled to a BF exploration of a percolation cluster:
Definition 2.3 (BRW exploration process).
Given an regular graph , a vertex , and a BRW on , we define the BRW exploration process as the sequence of dead, active, active ghost and dead ghost sets as follows:

Initiation. Initiate the exploration with the dead, active, active ghost and dead ghost sets at time as
(2.4) and at time as
(2.5) 
Time . Choose the node that minimizes , breaking ties according to an arbitrary but predetermined rule, and update the exploration as follows:
(2.6) 
Stop. If , then terminate the exploration. Set .
Using the BRW exploration, we define the subgraph as the graph traced out by a BRW where the particles are killed when they intersect with the active set. More precisely, we let be the subtree in induced by , and define
(2.7) 
Note that, by Definition 2.3, , so is indeed a subgraph of .
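The exploration of Definitions 2.2 and 2.3 can likewise be sketched in code. The version below is our own simplification, run on the complete graph K_n (which is (n-1)-regular): each particle receives a Binomial(n-1, p) number of children placed on distinct uniform neighbours, the natural offspring law for a coupling with percolation at parameter p, and a particle landing on an already-visited vertex becomes a ghost and is not explored further. The Galton-Watson-tree bookkeeping and ghost multiplicities are collapsed to vertex sets, since only the traced cluster matters here.

```python
import random

def brw_exploration(n, p, v0, seed):
    """BRW exploration on K_n started at v0 (sketch of Definition 2.3).
    Returns (dead set = traced cluster, set of ghost vertices)."""
    rng = random.Random(seed)
    dead, ghosts = set(), set()
    active = [v0]
    while active:
        u = active.pop(0)                        # oldest active particle
        dead.add(u)
        nbrs = [w for w in range(n) if w != u]
        k = sum(rng.random() < p for _ in nbrs)  # Binomial(n - 1, p) children
        for w in rng.sample(nbrs, k):            # distinct uniform neighbours
            if w in dead or w in active:
                ghosts.add(w)                    # self-intersection: ghost
            else:
                active.append(w)
    return dead, ghosts

# mean traced-cluster size over many runs, subcritical p = 0.5/(n - 1)
n = 100
runs = [len(brw_exploration(n, 0.5 / (n - 1), 0, seed=s)[0]) for s in range(400)]
mean = sum(runs) / len(runs)
print(mean)  # near 1/(1 - 0.5) = 2, as the branching heuristic suggests
```

The mean traced-set size is close to the subcritical branching-process value, consistent with the coupling to a percolation cluster established next.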
We now show that has the same law as , the connected component of in an ERRG, by coupling the BF and BRW explorations:
Proposition 2.4 (Coupling of BF and BRW explorations).
Consider percolation on an regular graph with parameter . Consider the BF exploration on the percolated graph and the BRW exploration process on , both starting from the vertex (and using the same tie-breaking rule). Then with respect to has the same law as .
Proof.
We show inductively that we can couple each step of the BRW and of the BF exploration in such a way that almost surely. We start by showing that there exists a coupling such that for all ,
(2.8)  
We start with the inductive base. At time , by Definitions 2.1 and 2.3,
(2.9)  
Next, we prove the inductive step: the induction hypothesis is that the relations in (2.8) hold for all . We extend the coupling so that it also holds at time . Our assumption is that we use the same tie-breaking rule for both explorations, so by the induction hypothesis we choose .
Given , fix a set of neighbours of . By Definition 2.2, the mapping is such that neighbours of are distinct neighbours chosen uniformly at random, so
(2.10)  
Next, consider the BF exploration at time . Given , we can determine . For , let denote an independent set-valued random variable that contains the vertex with probability , independently for all such that , so that . For every set of neighbours of we have , and so there exists a trivial coupling of and such that .
Consider an edge . Observe that if , then has not been discovered by the exploration, so it is open in the percolation conditionally independently with probability , while if , then has been discovered in the BF exploration, so its status can be determined from . Let denote the vertices that are endpoints of edges that are discovered in the th step, i.e.,
(2.11) 
Note that if , then either becomes activated at time or . By the above observation, we can couple to such that almost surely, conditionally on .
Consider henceforth the setting in which , , , and are simultaneously coupled according to the above description. (Since both and are essentially independent random subsets, it is easy to make this coupling explicit; we leave those details to the reader.) Using this coupling, the induction hypothesis (2.8), and Definitions 2.1 and 2.3, we derive
(2.12) 
and
(2.13)  
and
(2.14)  
Since , we obtain that (2.8) holds also at time almost surely, and thus, by induction, for all , almost surely.
To conclude the proof, we show that the coupling (2.8) for all implies that almost surely. Recall the definition of in (2.7), and of above it. Since , it follows directly from (2.8) that the vertex sets of and coincide. To see that the edge sets coincide, note that, by Definition 2.3, contains only edges such that , with and . Indeed, by the construction of the BRW exploration it is impossible that both , since vertices in are never explored further. Let . Then, from the definition of the BRW exploration, and . We then obtain
(2.15) 
An application of (2.8) now completes the proof. ∎
We conclude by deriving some consequences of Proposition 2.4 that will be useful in the proof of Theorem 1.2:
Corollary 2.5.
Consider a BF exploration on an regular graph with parameter , and a BRW exploration process on , both starting from the vertex . Then
(2.16)  
(2.17)  
(2.18) 
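The coupling of Proposition 2.4, and hence the identities behind Corollary 2.5, can also be checked empirically: the mean cluster size obtained from a BF exploration of a percolated K_n should match the mean traced-set size of the BRW exploration. The following self-contained check is our own illustration (the two sides use independent randomness, so only the means are compared, not a pathwise coupling).

```python
import random

def bf_cluster_size(n, p, seed):
    """|C(0)| under Bernoulli bond percolation on K_n: breadth-first
    search that reveals each potential edge at most once."""
    rng = random.Random(seed)
    seen, queue = {0}, [0]
    while queue:
        u = queue.pop(0)
        for w in range(n):
            if w != u and w not in seen and rng.random() < p:
                seen.add(w)
                queue.append(w)
    return len(seen)

def brw_traced_size(n, p, seed):
    """Size of the dead set of the BRW exploration on K_n started at 0:
    particles landing on previously visited vertices become ghosts."""
    rng = random.Random(seed)
    dead, active = set(), [0]
    while active:
        u = active.pop(0)
        dead.add(u)
        nbrs = [w for w in range(n) if w != u]
        k = sum(rng.random() < p for _ in nbrs)   # Binomial(n - 1, p)
        for w in rng.sample(nbrs, k):
            if w not in dead and w not in active:
                active.append(w)
    return len(dead)

n, lam, trials = 100, 0.5, 600
p = lam / (n - 1)
m_bf = sum(bf_cluster_size(n, p, s) for s in range(trials)) / trials
m_brw = sum(brw_traced_size(n, p, 10_000 + s) for s in range(trials)) / trials
print(m_bf, m_brw)  # both concentrate near the same subcritical mean
```

Both empirical means agree to within Monte Carlo error, as the coupling predicts: the traced cluster of the BRW exploration has the same law as a percolation cluster.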