# Entropy reduction in Euclidean first-passage percolation

###### Abstract.

The Euclidean first-passage percolation (FPP) model of Howard and Newman is a rotationally invariant model of FPP which is built on a graph whose vertices are the points of a homogeneous Poisson point process. It was shown in [howard2001geodesics] that one has (stretched) exponential concentration of the passage time $T(0,x)$ from $0$ to $x$ about its mean on scale $\sqrt{|x|}$, and this was used to show the bound $|\mathbb{E}T(0,x) - \mu|x|| \leq C\sqrt{|x|}\log|x|$ for large $|x|$ on the discrepancy between the expected passage time and its deterministic approximation $\mu|x|$. In this paper, we introduce an inductive entropy reduction technique that gives the stronger upper bound $|\mathbb{E}T(0,x) - \mu|x|| \leq C_k\,\psi(|x|)\log^{(k)}|x|$, where $\psi$ is a general scale of concentration and $\log^{(k)}$ is the $k$-th iterate of $\log$. This gives evidence that the inequality $|\mathbb{E}T(0,x) - \mu|x|| \leq C\,\psi(|x|)$ may hold.

###### Key words and phrases:

Euclidean first-passage percolation; rate of convergence; nonrandom fluctuations; entropy reduction

## 1. Introduction

In [howardnewman1997], C. D. Howard and C. M. Newman introduced the following Euclidean first-passage percolation (FPP) model on $\mathbb{R}^d$: Let $Q$ be a rate one Poisson point process. Denote by $q(x)$, for $x \in \mathbb{R}^d$, the closest point to $x$ in $Q$, breaking ties arbitrarily. Fix $\alpha > 1$ and define, for $q, q' \in Q$ and a finite sequence of points $r = (q_1, \dots, q_n)$ in $Q$,

$T(r) = \sum_{i=1}^{n-1} |q_{i+1} - q_i|^\alpha,$

where $|\cdot|$ is the Euclidean norm. Such a sequence is called a path in $Q$. A path $r$ can also be viewed as a subset of $\mathbb{R}^d$, and we write $x \in r$ accordingly. Define, for $q, q' \in Q$, $T(q, q') = \inf_r T(r)$, where the infimum is over all finite sequences $r = (q_1, \dots, q_n)$ with $q_1 = q$ and $q_n = q'$, and $n$ is the length of $r$. (The condition $\alpha > 1$ is imposed because if $\alpha \leq 1$, then the straight line segment connecting any two Poisson points is a minimizing path for $T$, and the analysis becomes trivial.) For $x, y \in \mathbb{R}^d$, define $T(x, y) = T(q(x), q(y))$ and set $T(x) = T(0, x)$. By subadditivity, the time constant $\mu$ exists and is defined by the formula

$\mu = \lim_{n \to \infty} \frac{\mathbb{E}T(0, n\mathbf{e}_1)}{n}.$

By the subadditive ergodic theorem, the convergence also holds almost surely, so that in a certain sense, $T(0,x) \approx \mu|x|$ for large $|x|$.
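To make the model concrete, the following is a minimal simulation sketch (all function names and parameters here are our own illustrations, not notation from the paper): it fixes a number of uniform points in a box as a stand-in for the Poisson sample and computes passage times by Dijkstra's algorithm on the complete graph with edge weights $|q_{i+1} - q_i|^\alpha$.

```python
import heapq
import math
import random

def euclidean_fpp(points, alpha):
    """Return a function T(i, j) giving the passage time between
    points[i] and points[j]: the minimum over paths through the point
    set of sum |q_{k+1} - q_k|^alpha, via Dijkstra on the complete graph."""
    n = len(points)

    def T(src, dst):
        best = [math.inf] * n
        best[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best[u]:
                continue  # stale heap entry
            for v in range(n):
                if v != u:
                    nd = d + math.dist(points[u], points[v]) ** alpha
                    if nd < best[v]:
                        best[v] = nd
                        heapq.heappush(heap, (nd, v))
        return best[dst]

    return T

# Toy stand-in for a rate one Poisson sample in a box (a true sample
# would also draw the number of points from a Poisson distribution).
rng = random.Random(1)
pts = [(rng.uniform(0, 8), rng.uniform(0, 8)) for _ in range(64)]
T = euclidean_fpp(pts, alpha=2.0)
```

Because $\alpha > 1$, a long straight jump costs more than the same displacement split into short hops, so minimizing paths pass through many intermediate points; for $\alpha \leq 1$ the single direct jump would always be optimal, which is why the condition $\alpha > 1$ is imposed.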

In this and related models (lattice FPP and continuum analogues, for example), it is customary to measure the rate of convergence in the definition of $\mu$ by splitting $T(0,x) - \mu|x|$ into a random fluctuation term and a nonrandom fluctuation term:

$T(0,x) - \mu|x| = \left[T(0,x) - \mathbb{E}T(0,x)\right] + \left[\mathbb{E}T(0,x) - \mu|x|\right].$

Typically the random term is analyzed using concentration inequalities (for functions of independent random variables), which have developed significantly in recent years. In FPP models, current bounds on random fluctuations are still quite far from the predictions, and this presents an ongoing challenge to researchers. In contrast, there is no general method for providing upper bounds on nonrandom fluctuations of subadditive ergodic sequences. In recent years, though, techniques have been developed [alexander1997, tessera] to bound these nonrandom errors for many lattice models in terms of the random ones. Specifically, if one has a concentration inequality of the type

(1.1) $\mathbb{P}\left(|T(0,x) - \mathbb{E}T(0,x)| \geq \lambda\, \psi(|x|)\right) \leq C_1 e^{-C_2 \lambda}$

for $\lambda$ in a suitable range and a suitable function $\psi$ (so far, only results with $\psi(t)$ at least of order $\sqrt{t}$ (in Euclidean FPP) or $\sqrt{t/\log t}$ (in lattice FPP) have been proved), then one can derive the bound

$|\mathbb{E}T(0,x) - \mu|x|| \leq C\,\psi(|x|)\log|x|.$

(In fact, only the lower tail inequality is usually needed.) A natural question emerges: in these models, can one find a constant $C$ such that

$|\mathbb{E}T(0,x) - \mu|x|| \leq C\,\psi(|x|)?$

If the answer is yes, it means that the difference $T(0,x) - \mu|x|$ (used to control geodesics, for instance) can be reasonably well approximated by the random fluctuation term $T(0,x) - \mathbb{E}T(0,x)$. Furthermore, due to the general lower bounds on nonrandom fluctuations proved in [ADHgamma], it would suggest that the nonrandom fluctuation term is of the same order as the random one (as is the case in exactly solvable directed last-passage percolation [baikLPP, Corollary 1.3]).

This question is the focus of our paper. Although we cannot prove this inequality, we prove a weaker but closely related one. Specifically, our main method is an inductive “entropy reduction” technique which shows that for any $k \geq 1$, there is a constant $C_k$ such that for large $|x|$,

$|\mathbb{E}T(0,x) - \mu|x|| \leq C_k\,\psi(|x|)\log^{(k)}|x|,$

where $\psi$ is from (1.1) and $\log^{(k)}$ is the $k$-th iterate of $\log$ (see Theorem 2.5). This gives strong evidence that the answer to the above question is yes.
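To see how much the iterated logarithm gains, a quick numerical sketch (our own illustration, using the natural logarithm):

```python
import math

def iterated_log(t, k):
    """k-th iterate of the natural logarithm: log^{(1)} t = log t,
    log^{(k)} t = log(log^{(k-1)} t), while the value stays positive."""
    for _ in range(k):
        assert t > 0, "iterate left the domain of log"
        t = math.log(t)
    return t

# At t = 10^12 the successive correction factors collapse quickly:
# log t ~ 27.6, log log t ~ 3.3, log log log t ~ 1.2.
t = 1e12
factors = [iterated_log(t, k) for k in range(1, 4)]
```

Thus for any fixed $k$, $\psi(t)\log^{(k)} t$ is barely larger than $\psi(t)$ itself once $t$ is large.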

In the next subsection, we give some background on Euclidean FPP from [howard2001geodesics] and sketch the main strategy for proving general bounds on nonrandom fluctuations in the model. In Section 2, we state our main assumptions on $\psi$ and the four results (bounds on nonrandom fluctuations, concentration estimates, and geodesic wandering estimates) which come out of our inductive method.

### 1.1. Background

A geodesic between two points $q, q' \in Q$ is a path $r$ from $q$ to $q'$ such that $T(r) = T(q, q')$. Since $\alpha > 1$, geodesics exist and are unique almost surely [howard2001geodesics, Proposition 1.1]. Denote by $G(q, q')$ the geodesic between $q$ and $q'$. Note that $G(q, q')$ can also be viewed as a subset of $\mathbb{R}^d$.

First we quote some results from [howard2001geodesics]. Define

(1.2)

and write $\mathbf{e}_1, \dots, \mathbf{e}_d$ for the standard basis vectors of $\mathbb{R}^d$.

###### Theorem 1.1 ([howard2001geodesics], Theorem 2.1).

Define . Then there exist constants such that and

for all and .

###### Theorem 1.2 ([howard2001geodesics], Eqn. (4.3)).

There exists a constant $C$ such that

(1.3) $|\mathbb{E}T(0, n\mathbf{e}_1) - \mu n| \leq C\sqrt{n}\log n.$

Define, for ,

Denote by the line segment between and .

###### Theorem 1.3 ([howard2001geodesics], Theorem 2.4).

For any , there exist constants such that

By a simple modification of the proof of [howard2001geodesics, Theorem 2.4], one can show that for some constant ,

(1.4)

The factor $\log n$ in (1.3) and (1.4) comes from the proof technique. Here we give a sketch of the proof of Theorem 1.2, hinging on the following result, which is [howard2001geodesics, Lemma 4.2].

###### Lemma 1.4.

Suppose that the functions $\bar\mu$ and $g$ satisfy the following conditions: $\bar\mu(t)/t \to \mu$ as $t \to \infty$, $g$ is nondecreasing with $g(2t) \leq \gamma g(t)$ for some $\gamma < 2$, and $\bar\mu(2t) \geq 2\bar\mu(t) - g(t)$ for all large $t$. Then for any $c \geq 1/(2-\gamma)$, $\bar\mu(t) - \mu t \leq c\,g(t)$ for all large $t$.

Proof: The proof is copied from [howard2001geodesics] for completeness. It is easily verified that, for $h(t) = \bar\mu(t) - \mu t$, the hypothesis implies that $h$ satisfies $h(2t) \geq 2h(t) - g(t)$ for all large $t$. Iterating this $n$ times yields $h(2^n t) \geq 2^n h(t) - \sum_{j=0}^{n-1} 2^{n-1-j} g(2^j t)$, or $h(t) \leq 2^{-n} h(2^n t) + \frac{g(t)}{2} \sum_{j=0}^{n-1} (\gamma/2)^j$. Under our hypotheses on $\bar\mu$ and $g$, $2^{-n} h(2^n t) \to 0$ as $n \to \infty$, so letting $n \to \infty$ shows that $h(t) \leq g(t)/(2-\gamma) \leq c\,g(t)$ for all large $t$.
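Under one standard reading of the doubling argument sketched above (assuming the recursion $\bar\mu(2t) \geq 2\bar\mu(t) - g(t)$ and the growth condition $g(2t) \leq \gamma g(t)$ with $\gamma < 2$; both are illustrative assumptions on our part), the iteration bounds the error by the series $\sum_{j \geq 0} 2^{-(j+1)} g(2^j t)$, which is geometric and sums to at most $g(t)/(2-\gamma)$. A numerical check with $g(t) = \sqrt{t}$, for which $\gamma = \sqrt{2}$:

```python
import math

def series_bound(g, t, terms=200):
    """Partial sum of sum_{j>=0} 2^{-(j+1)} g(2^j t), the error series
    produced by iterating mu(2t) >= 2 mu(t) - g(t)."""
    return sum(2.0 ** -(j + 1) * g(2.0 ** j * t) for j in range(terms))

g = math.sqrt            # g(2t) = sqrt(2) g(t), so gamma = sqrt(2) < 2
gamma = math.sqrt(2.0)
t = 100.0
lhs = series_bound(g, t)
rhs = g(t) / (2.0 - gamma)   # closed form of the geometric bound
```

The point is that the whole infinite iteration costs only a constant multiple of $g(t)$, which is how the lemma converts one doubling inequality into a bound of order $g(t)$ on the nonrandom error.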

Returning to the proof of (1.3), due to the previous lemma, it suffices to prove $\mathbb{E}T(0, 2n\mathbf{e}_1) \geq 2\,\mathbb{E}T(0, n\mathbf{e}_1) - C\sqrt{n}\log n$. Now consider the geodesic from $q(0)$ to $q(2n\mathbf{e}_1)$ and let $q^*$ be the first point in it whose first coordinate exceeds $n$. Then we have $T(0, 2n\mathbf{e}_1) \geq T(0, q^*) + T(q^*, 2n\mathbf{e}_1)$. Then the proof is completed once we show that with positive probability, both of the following bounds hold:

$T(0, q^*) \geq \mathbb{E}T(0, n\mathbf{e}_1) - C\sqrt{n}\log n \quad \text{and} \quad T(q^*, 2n\mathbf{e}_1) \geq \mathbb{E}T(0, n\mathbf{e}_1) - C\sqrt{n}\log n.$

Since $q^*$ is a random point, in order to prove the second bound, one needs to apply Theorem 1.1 to all pairs of the form $(q, q(2n\mathbf{e}_1))$, where $q \in Q$ satisfies the constraints placed on $q^*$. Because we have to apply Theorem 1.1 at least polynomially many (in $n$) times, if we use a union bound, we need the probability in Theorem 1.1 to be at most of the order $n^{-a}$ for some large $a$. Taking the deviation parameter of order $\log n$ in Theorem 1.1 will achieve this and thus complete the sketch of the proof.

Our main goal is to improve the term $\log n$ in Theorem 1.2. This has been done recently in a lattice FPP model and a directed polymer model in [alexander2011subgaussian, alexander2013subgaussian] by an entropy reduction technique, showing that one can replace the factor $\log n$ by $\log\log n$. Their key idea is to exploit the dependence between passage times between nearby points to reduce the number of times a concentration result like Theorem 1.1 is applied.
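The counting behind entropy reduction can be illustrated with toy numbers (every quantity below is our own, purely for illustration): with an exponential tail $e^{-c\lambda}$, a union bound over $N$ events forces a deviation parameter $\lambda$ of order $\log N$, while coarse-graining to $K \ll N$ representative points, with nearby points handled deterministically, only forces order $\log K$:

```python
import math

def deviation_needed(num_events, tail_c=1.0, target=0.01):
    """Smallest lambda with num_events * exp(-tail_c * lambda) <= target,
    i.e. the deviation scale a union bound demands under the tail
    P(deviation > lambda * psi) <= exp(-tail_c * lambda)."""
    return (math.log(num_events) - math.log(target)) / tail_c

N = 10 ** 6   # naive union bound: one concentration event per point
K = 10 ** 2   # after reducing to a coarse net of representatives
naive = deviation_needed(N)      # order log N
reduced = deviation_needed(K)    # order log K, a much smaller scale
```

Iterating this reduction is what replaces $\log$ by $\log^{(k)}$ in our bounds.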

The improvement from $\log n$ to $\log\log n$ is important, especially when a sub-gaussian concentration bound for the passage time is available. For the lattice FPP model, [damron2015subdiffusive] proved sub-gaussian concentration on the scale $\sqrt{n/\log n}$ (extending work in [benaimrossignol]). Using this, [alexander2011subgaussian] proved that for a directed FPP model, nonrandom fluctuations can be bounded by the order

$\sqrt{n/\log n}\,\log\log n.$

These bounds have not yet been extended to Euclidean FPP. The strongest concentration inequality to date is Theorem 1.1 of Howard and Newman.

A consequence of our main results is that one can replace the term $\log n$ in Theorem 1.2 by $\log^{(k)} n$, where $\log^{(k)}$ can be an arbitrary iterate of $\log$. Our proof works in a general framework which does not depend on any particular scale of concentration. So if a sub-gaussian concentration result for Euclidean FPP is proved, then our result would immediately imply a correspondingly stronger bound in Theorem 1.2.

Notation: we use boldface letters (e.g. , , ) to denote elements in or . Denote by the corresponding -norm and the -norm. We use to denote a small constant and a large constant, with values that may vary from case to case. We use notation like to denote constants whose values may depend on and/or , but not on . The subscript refers to the result number. For example, denotes the constant in Theorem 2.3.

## 2. Main results

In this section, we state the main theorems. We state our results in a general framework which does not depend on any one particular concentration result. Let $\psi$ be a real function. We assume the following concentration on the scale $\psi(|x|)$.

###### Assumption 2.1.

There exist constants $C_1$, $C_2$, and $C_3$ such that

$\mathbb{P}\left(|T(0,x) - \mathbb{E}T(0,x)| \geq \lambda\, \psi(|x|)\right) \leq C_1 e^{-C_2 \lambda}$

for all and .

We put the following assumptions on .

###### Assumption 2.2.

There exists $t_0 > 0$ such that $\psi$ is increasing for $t \geq t_0$. In addition, there exist constants and such that for all and , we have

Note that the above assumption implies that and for any . In addition, the above assumption also implies the following simple bounds: For large and ,

We will assume Assumptions 2.1 and 2.2 throughout the rest of the paper, and let the constants , , , and be as in Assumptions 2.1 and 2.2. We further define three constants as follows:

(2.1)

These constants appear as exponents in our main theorems below, and the reasons for these choices will become clear in the proofs.

Define $\log^{(1)} t = \log t$ and $\log^{(k)} t = \log\log^{(k-1)} t$ for $k \geq 2$, whenever this is well-defined. Write where and . Define for and ,

###### Theorem 2.3.

Write and . For any and , there exists a constant such that for large

Note that the scale of concentration in Theorem 2.3 is smaller than that of the next theorem (and is independent of $k$). This is the main reason why we can use estimates for any value of $k$ to give improved ones for $k+1$.

One key ingredient in the proof of the above result is a simple bound on $\mathbb{E}T(0,x)$ that reflects the fact that $\mathbb{E}T(0,x)$ is simply a function of $|x|$. This is not true for general lattice models. Indeed, it is a standard technique (see [Newman, Kesten], among many others) to decompose a difference like that from the last theorem as

$T(0,x) - T(0,y) = \left[T(0,x) - \mathbb{E}T(0,x)\right] - \left[T(0,y) - \mathbb{E}T(0,y)\right]$
$\qquad + \left[\mathbb{E}T(0,x) - g(x)\right] - \left[\mathbb{E}T(0,y) - g(y)\right]$
$\qquad + g(x) - g(y).$

(Here we are writing $g(x)$ for the limit $\lim_{n\to\infty} \mathbb{E}T(0,nx)/n$, which in our model is simply $\mu|x|$.) The idea then is to use information about the limiting shape for the model (for instance curvature) to control $g(x) - g(y)$ directly, but then one must bound both the random and nonrandom errors on the first two lines. The bounds available for nonrandom errors are generally worse (by some logarithmic factor) than those available for random errors, so one cannot obtain better concentration for the difference than the bounds on nonrandom errors. In our case, we can directly decompose

$T(0,x) - T(0,y) = \left[T(0,x) - \mathbb{E}T(0,x)\right] - \left[T(0,y) - \mathbb{E}T(0,y)\right] + \left[\mathbb{E}T(0,x) - \mathbb{E}T(0,y)\right]$

and exploit the rotational invariance of $\mathbb{E}T(0,x)$ (from the underlying Poisson process) to obtain bounds without needing control of the nonrandom error.

###### Theorem 2.4.

Write and . For any and , there exists a constant such that for large

###### Theorem 2.5.

Let be the time constant. For any , there exists a constant such that for large

Define for any and

Define for

Recall that for ,

###### Theorem 2.6.

Write and . For any and , there exists a constant such that for all large and

We will prove Theorems 2.3 to 2.6 by induction on $k$. Note that Theorem 2.3 is stated for while the other three theorems are stated for . The framework of the induction can be summarized in the following three steps:

Organization of the paper: In Section 3, we prove some basic results about the Euclidean FPP model. In Section 4, we verify the initial step of the induction. In Section 5, we carry out the induction step, thereby completing the proofs of Theorems 2.3, 2.4, 2.5 and 2.6.

## 3. Preliminary Results

In this section, we prove some basic properties of the Euclidean FPP model under Assumptions 2.1 and 2.2. The proofs of these results are analogous to the ones for the case $\psi(t) = \sqrt{t}$ treated in [howard2001geodesics].

As a result of [howard2001geodesics, Lemma 5.2], we have the following lemma. Define for and

###### Lemma 3.1.

Define the events , for , as follows:

(i) There exist constants such that

(3.1)

(ii) Furthermore, there exists a constant such that, restricted to , we have

Proof: (The proof follows exactly from [howard2001geodesics, Lemma 5.2], whose statement is similar but with replaced by for some .) It is sufficient to prove (3.1). Note that can be covered with balls of radius . If occurs, then the intersection of and one of these balls must be empty. Therefore

where the last line uses the fact that for large . This completes the proof.
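The covering estimate above rests on the void probability of a Poisson process: a region of volume $V$ contains no point of a rate one process with probability $e^{-V}$. A numerical sketch of the union bound (the radius and counts are our own hypothetical choices, with $d = 2$) shows why a ball radius growing like $\log t$ makes the bound decay faster than any power of $t$:

```python
import math

def void_union_bound(num_balls, radius, intensity=1.0):
    """Union bound for 'some covering ball contains no Poisson point':
    each planar ball of radius r is empty with probability
    exp(-intensity * pi * r^2)."""
    vol = math.pi * radius ** 2  # area of a 2-dimensional ball
    return num_balls * math.exp(-intensity * vol)

t = 1000.0
r = math.log(t)            # illustrative radius scale
num_balls = (t / r) ** 2   # order of balls covering a box of side t
bound = void_union_bound(num_balls, r)
```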

For any $x \in \mathbb{R}^d$, define $\rho(x) = \mathbb{E}T(0,x)$. By the symmetry of the Poisson point process, there is a function $\rho_0$ such that $\rho(x) = \rho_0(|x|)$, where $|\cdot|$ is the Euclidean norm. As a result of subadditivity, we have the following simple lemma.

###### Lemma 3.2.

There exists a constant such that, for all , .

Proof: By subadditivity,

Then since ,

Reversing the roles of and gives the same bound for . Last, we note that an immediate consequence of [howardnewman1997, Lemma 1] is that for all .

We also need the following simple lemma to control the difference of passage times when the endpoints do not differ too much.

###### Lemma 3.3.

There exists a constant such that, restricted to , for such that ,

Proof: When restricted to , we have . The proof then follows from the following bound from [howard2001geodesics, (2.14)]:

The last result in this section is a global concentration result which plays an important role in verifying the initial cases for the mathematical induction.

###### Lemma 3.4.

Define the set as follows:

For any , there exists a constant such that for all large

where the events , are defined as follows:

Proof: For any , there exists

such that and . By Lemma 3.3, restricted to , when is large,

By Lemma 3.2,

In the rest of the proof we will replace by in the definition of . Combining the above two bounds, when is large, implies that there exists such that

when is large. By Assumption 2.1, for any fixed pair ,

Let . Since and is large, one has and therefore when is large,

Therefore

Since , by a union bound,

Combining this bound with Lemma 3.1 and taking large completes the proof.

## 4. The Initial Step

The goal of this section is to verify the initial step of the induction. Precisely, we will prove the following three lemmas in this section. Lemmas 4.1, 4.2, and 4.3 imply the initial cases of Theorems 2.4, 2.5, and 2.6, respectively. Note that Lemmas 4.1 and 4.3 are actually stronger than the corresponding initial versions of the theorems.

###### Lemma 4.1.

Define and . For any , there exists a constant such that for large

In fact, one can take .

Proof: When is large,

When , the event considered in this lemma implies . Therefore Lemma 4.1 follows from Lemma 3.4 immediately.

###### Remark 1.

###### Lemma 4.2.

There exists a constant such that for large .

Proof: By Lemma 1.4, it is sufficient to show that there exists a constant such that for all large

(4.1)

The proof closely follows that of [howard2001geodesics, Lemma 4.1]. Note that restricted to , there exists such that is on the geodesic . Therefore

Applying this to an outcome in (which has positive probability), for such a we have

Then by Lemma 3.2, we have

Combining the above two inequalities, we have

This implies (4.1) for large .

###### Lemma 4.3.

Write and . For any , there exists a constant such that for all large ,

Proof: Restricted to , the event considered in Lemma 4.3 implies that there exist , and such that and is on the geodesic from to , i.e.,

(4.2)

Meanwhile, elementary geometry shows that there exists a constant such that for large , , and as above,

Therefore by Lemma 4.2 and the fact that ,

Comparing (4.2) and the above bound, we have