
# Analysis of unbounded operators and random motion

Palle E.T. Jorgensen
Department of Mathematics, 14 MLH
The University of Iowa
Iowa City, IA 52242-1419
USA
###### Abstract.

We study infinite weighted graphs with a view to “limits at infinity,” or boundaries at infinity. Examples of such weighted graphs arise in infinite (in practice, that means “very” large) networks of resistors, or in statistical mechanics models for classical or quantum systems. But more generally our analysis includes reproducing kernel Hilbert spaces and associated operators on them. If $X$ is some infinite set of vertices or nodes, then in applications the essential ingredient going into the definition is a reproducing kernel Hilbert space $\mathcal{H}$; it measures the differences of functions on $X$ evaluated on pairs of points in $X$. And the Hilbert norm-squared in $\mathcal{H}$ will represent a suitable measure of energy. Associated unbounded operators will define a notion of dissipation; this can be a graph Laplacian, or a more abstract unbounded Hermitian operator defined from the reproducing kernel Hilbert space under study. We prove that there are two closed subspaces in the reproducing kernel Hilbert space $\mathcal{H}$ which measure quantitative notions of limits at infinity in $X$: one generalizes the finite-energy harmonic functions in $\mathcal{H}$, and the other is a deficiency space of a natural operator in $\mathcal{H}$ associated directly with the diffusion. We establish these results in the abstract, and we offer examples and applications. Our results are related to, but different from, potential theoretic notions of “boundaries” in more standard random walk models. Comparisons are made.

###### Key words and phrases:
Operators in Hilbert space, deficiency spaces, networks of resistors, statistical mechanics models, harmonic functions, graphs, Schrödinger equations, selfadjoint extension operators, reproducing kernel Hilbert spaces.
###### 2000 Mathematics Subject Classification:
Primary 47B25,47B32, 47B37, 47S50, 60H25, 81S10, 81S25.
Work supported in part by the US National Science Foundation.

## I. Introduction

We will use the theory of unbounded Hermitian operators with dense domain in Hilbert space in a study of infinite weighted graphs with a view to “limits at infinity.” We begin by introducing the tools from operator theory as developed by M. H. Stone, John von Neumann, Kurt Friedrichs, and Tosio Kato, with a view to our particular setup. We stress that a Hermitian operator may not be selfadjoint, and that the discrepancy is measured by deficiency-indices (details below and the books [31] and [32], and more recently [11] and [28]). In physical problems, see e.g., [7], these mathematical notions of defect take the form of “boundary conditions”; for example, waves that are diffracted on the boundary of a region in Euclidean space; the scattering of classical waves on a bounded obstacle [25]; a quantum mechanical “particle” in a repulsive potential that shoots to infinity in finite time; or, in more recent applications (see e.g., [22], [8], [9], [27]), random walks on infinite weighted graphs that “wander off” to points on an idealized boundary. In all of these instances, one is faced with a dynamical problem: for example, the solution to a Schrödinger equation represents the time evolution of quantum states in a particular problem in atomic physics.

The operators in these applications will be Hermitian, but in order to solve the dynamical problems, one must first identify a selfadjoint extension of the initially given operator. Once that is done, von Neumann’s spectral theorem can then be applied to the selfadjoint operator. A choice of selfadjoint extension will have a spectral resolution, i.e., it is an integral of an orthogonal projection valued measure; with the different extensions representing different “physical” boundary conditions. Hence non-zero deficiency indices measure degrees of non-selfadjointness, and deficiency spaces “measure” boundary obstructions or scattering on an obstacle.

The variety of applied problems that lend themselves to computation of deficiency indices and the study of selfadjoint extensions are vast and diverse. As a result, it helps if one can identify additional structures that throw light on the problem.

Our results are inspired in part by the following recent developments in related areas: fractals in the small and in the large [13, 17, 19], representation theory [15, 16], operator algebras [10, 30], harmonic analysis [14, 17, 12, 20]; and multiresolutions/wavelets [1, 29, 23, 21].

We will further use notions from dynamics and infinite matrix products to prove essential selfadjointness of families of Hermitian operators arising naturally in reproducing kernel Hilbert spaces. The latter include graph Laplacians for infinite weighted graphs $G$, with the Laplacian in this context presented as a Hermitian operator in an associated Hilbert space of finite-energy functions on the vertex set $G^0$ in $G$. Other examples include Hilbert spaces of band-limited signals.

Further applications enter into the techniques used in discrete simulations of stochastic integrals, see [18].

We encountered the present operator theoretic results in our study of discrete Laplacians, which in turn has part of its motivation in numerical analysis. A key tool in applying numerical analysis to solving partial differential equations is discretization, and the use of repeated differences; see e.g., [6]. Specifically, one picks a grid size $h$, and then proceeds in steps: (1) Starting with a partial differential operator, study an associated discretized operator obtained by repeated differences on the $h$-lattice. (2) Solve the discretized problem for $h$ fixed. (3) As $h$ tends to zero, numerical analysts evaluate the resulting approximation limits, and they bound the error terms. For this purpose, one must use a metric, and the norm in Hilbert space has proved an effective tool; hence the Hilbert spaces and the operator theory.
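As a minimal sketch of steps (1)–(3) (not from the paper; the test function and grid sizes are illustrative choices), the second derivative can be discretized by repeated differences on an $h$-lattice, and the error shrinks as $h$ tends to zero:

```python
import math

def second_difference(f, x, h):
    # repeated (forward then backward) difference on the h-lattice:
    # approximates f''(x) with an O(h^2) error term
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# illustrative test: f = sin, whose exact second derivative is -sin
x = 0.7
errors = [abs(second_difference(math.sin, x, h) + math.sin(x))
          for h in (0.1, 0.05, 0.025)]
# halving h roughly quarters the error
```

Bounding this error term as the grid is refined is exactly the step where the Hilbert space norm enters in the text above.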

This procedure connects to our present graph-Laplacians: When discretization is applied to the Laplace operator in continuous variables, the result is the graph of integer points $\mathbb{Z}^n$ with constant weights. But if numerical analysis is applied instead to a continuous Laplace operator on a Riemannian manifold, the discretized Laplace operator will instead involve an infinite graph with variable weights, so with vertices in other configurations than $\mathbb{Z}^n$.

Inside the technical sections we will use standard tools from analysis and probability. References to the fundamentals include [6], [24], [26] and [33].

There is a large literature covering the general theory of reproducing kernel Hilbert spaces and its applications, see e.g., [4], [2], [3], [5], and [34]. Such applications include potential theory, stochastic integration, and boundary value problems from PDEs, among others. In brief summary, a reproducing kernel Hilbert space consists of two things: a Hilbert space $\mathcal{H}$ of functions on a set $X$, and a reproducing kernel $k$, i.e., a complex valued function on $X \times X$ such that for every $x$ in $X$, the function $k_x := k(\cdot, x)$ is in $\mathcal{H}$ and reproduces the value $f(x)$ from the inner product in $\mathcal{H}$, so the formula

 $f(x) = \langle k_x, f \rangle$

holds for all $f$ in $\mathcal{H}$. Moreover, there is a set of axioms for a function $k$ in two variables that characterizes precisely when it determines a reproducing kernel Hilbert space. And conversely there are necessary and sufficient conditions that apply to Hilbert spaces of functions and decide when $\mathcal{H}$ is a reproducing kernel Hilbert space.

Here we shall restrict these “reproducing” axioms and obtain instead a smaller class of reproducing kernel Hilbert spaces. We add two additional axioms: Firstly, we will be reproducing not the values $f(x)$ themselves of the functions $f$ in $\mathcal{H}$, but rather the differences $f(x) - f(y)$ for all pairs of points $x, y$ in $X$; and secondly, we will impose one additional axiom to the effect that the Dirac mass $\delta_x$ is contained in $\mathcal{H}$ for all $x$ in $X$. When these two additional conditions are satisfied, we say that $\mathcal{H}$ is a relative reproducing kernel Hilbert space.

It is known that every weighted graph $G$ (the infinite case is of main concern here) induces a relative reproducing kernel Hilbert space, and an associated graph Laplacian. Under certain conditions, the converse holds as well: Given a relative reproducing kernel Hilbert space $\mathcal{H}$ on a set $X$, it is then possible in a canonical way to construct a weighted graph $G$ such that $X$ is the set of vertices in $G$, and such that its energy Hilbert space coincides with $\mathcal{H}$ itself. In our construction, the surprise is that the edges in $G$ as well as the weights on the edges may be built directly from only the Hilbert space axioms defining the initially given relative reproducing kernel Hilbert space. Since this includes all infinite graphs of electrical resistors and their potential theory (boundaries, harmonic functions, and graph Laplacians), the result has applications to these fields, and it serves to unify diverse branches in a vast research area.

## II. Operator Theoretic Framework

This section contains the precise definitions of the terms used above, and to be used in later sections: the particular discrete networks, the weights on edges, the associated Hilbert spaces, and the infinite Laplacians. We open with two lemmas which establish links between the graph theoretic networks on one side, and the operator theory (unbounded Hermitian operators) on the other. This will allow us to encode certain “boundaries” with two subspaces of an associated Hilbert space.

Let $X$ be a set, and let $c: X \times X \to \mathbb{R}_{\geq 0}$ be a function satisfying the following four conditions:

1. For all $x \in X$,

 $\#\{ y \in X \mid c(x,y) \neq 0 \} < \infty.$
2. Symmetry:

 $c(x,y) = c(y,x), \quad \forall x, y \in X.$
3.  $c(x,x) = \sum_{y \in X \setminus \{x\}} c(x,y).$
4. For all $x, y \in X$ such that $x \neq y$, there is a finite set of distinct points $x_0, x_1, \dots, x_n$ in $X$ such that $x_0 = x$, $x_n = y$, and $c(x_{i-1}, x_i) \neq 0$, $i = 1, \dots, n$.

###### Lemma 2.1.

Let $(X, c)$ be a system as described above, with the function

 $c: X \times X \to \mathbb{R}_{\geq 0}$

satisfying (i)–(iv).

For $x \in X$, let $\delta_x$ be the Dirac-function on $X$ supported at $x$.

Then there is a Hilbert space $\mathcal{H}$ containing $\{\delta_x \mid x \in X\}$ with inner product $\langle \cdot, \cdot \rangle$ such that

 $\langle \delta_x, \delta_x \rangle = c(x,x),$ (1)

and

 $\langle \delta_x, \delta_y \rangle = -c(x,y) \quad \text{if } x \neq y.$ (2)
###### Proof.

We will be working with functions on $X$ modulo multiples of the constant function $\mathbb{1}$ on $X$. We define the Hilbert space $\mathcal{H} = \mathcal{H}(X,c)$ as follows:

 $\mathcal{H}(X,c) := \Big\{ \text{all functions } f: X \to \mathbb{C} \text{ s.t. } \sum_{x} \sum_{y:\, y \neq x} c(x,y) |f(x) - f(y)|^2 < \infty \Big\};$ (3)

and we set

 $\|f\|^2 = \langle f, f \rangle = E(f) := \frac{1}{2} \sum\!\sum_{x \neq y} c(x,y) |f(x) - f(y)|^2 \quad \text{for } f \in \mathcal{H}(X,c).$ (4)

It is immediate that $\mathcal{H}$ is then a Hilbert space, and that the Dirac-functions $\delta_x$ are in $\mathcal{H}$. If $\langle \cdot, \cdot \rangle$ is the inner product corresponding to (4), then a direct computation shows that (1) and (2) are satisfied. ∎

###### Lemma 2.2.

Let $(X, c)$ be a system satisfying conditions (i)–(iv) above, and let $\mathcal{H}$ be the Hilbert space introduced in Lemma 2.1.

For elements $u \in \mathcal{H}$, set

 $(\Delta u)(x) := \langle \delta_x, u \rangle \quad \text{for } x \in X.$ (5)

Then $\Delta$ is a Hermitian operator with a dense domain $\mathcal{D}$ in $\mathcal{H}$.

###### Proof.

Select some base-point $o$ in $X$. Let $x \in X$, and select distinct points $x_0, x_1, \dots, x_n$ such that $x_0 = o$, $x_n = x$, and $c(x_{i-1}, x_i) \neq 0$, $i = 1, \dots, n$. Then if $f \in \mathcal{H}$, we get the following estimate:

 $|f(x) - f(o)| \leq \left( \sum_{i=1}^{n} \frac{1}{c(x_{i-1}, x_i)} \right)^{\frac{1}{2}} \|f\|$ (6)

where $\|\cdot\|$ is the norm in $\mathcal{H}$. By Riesz’ lemma, there is a unique $v_x \in \mathcal{H}$ such that

 $f(x) - f(o) = \langle v_x, f \rangle \quad \text{for all } f \in \mathcal{H}.$ (7)

Now let $\mathcal{D}$ be the linear span of $\{ v_x \mid x \in X \setminus \{o\} \}$. It follows from (7) that $\mathcal{D}$ is dense in $\mathcal{H}$, and that

 $\Delta v_x = \delta_x - \delta_o.$ (8)

To prove (8), first note that the functions $v_x$ in (7) must be real-valued. This is a consequence of the uniqueness in Riesz’ lemma. We now verify (8) by the following computation:

 $(\Delta v_x)(y) \underset{\text{by (5)}}{=} \langle \delta_y, v_x \rangle \underset{\text{by (7)}}{=} \delta_y(x) - \delta_y(o) = (\delta_x - \delta_o)(y),$

which is the desired conclusion (8).

A direct inspection yields the following two additional properties:

 $\langle \Delta u_1, u_2 \rangle = \langle u_1, \Delta u_2 \rangle, \quad \text{and} \quad \langle u, \Delta u \rangle \geq 0, \quad \text{for all } u_1, u_2, u \in \mathcal{D}.$ ∎

###### Definition 2.3.

1. Let $G^0$ (vertices) and $G^1$ (edges) be two sets (in our case the infinite cases are of main importance), and assume the following:

• $G^1 \subset G^0 \times G^0$.

• If $(x,y) \in G^1$, then $(y,x) \in G^1$, and we write $x \sim y$.

• If $x \in G^0$ then $(x,x) \notin G^1$.

• For every pair of points $x$ and $y$ in $G^0$, there is a finite subset

 $\{ (x_{i-1}, x_i) \mid i = 1, 2, \dots, n \} \subset G^1$

such that $x_0 = x$ and $x_n = y$.

• For every $x \in G^0$ we assume that the set of neighbors is finite, i.e.,

 $\# \mathrm{Nbh}_G(x) < \infty,$ (9)

where

 $\mathrm{Nbh}_G(x) = \{ y \in G^0 \mid y \sim x \}.$ (10)
2. Let $c: G^1 \to \mathbb{R}_+$ be a function satisfying

 $c(x,y) = c(y,x) \quad \text{for all } (x,y) \in G^1.$ (11)
3. For functions $u: G^0 \to \mathbb{C}$, set

 $E(u) := \frac{1}{2} \sum\!\sum_{x \sim y} c(x,y) |u(x) - u(y)|^2.$ (12)

We will consider the Hilbert space $\mathcal{H}_E$ of all functions $u$ modulo constants such that

 $E(u) < \infty;$ (13)

$\mathcal{H}_E$ is called the energy Hilbert space, with (13) referring to finite energy.

For functions $u: G^0 \to \mathbb{C}$, we define the graph Laplacian, or simply the Laplace operator, $\Delta$ by

 $(\Delta u)(x) = \sum_{y \sim x} c(x,y)(u(x) - u(y)).$ (14)
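As a concrete illustration of (12) and (14) (a sketch; the path graph and conductance values are illustrative choices, not from the paper), the following computes the energy form and the graph Laplacian on a finite path and checks the identity $\langle \delta_x, u \rangle_E = (\Delta u)(x)$ that underlies Lemma 2.2:

```python
# Sketch: energy form E of (12) and graph Laplacian (14) on the finite path
# 0 -- 1 -- 2 -- 3 with conductances c(x-1, x) = mu[x] (illustrative values).
mu = {1: 2.0, 2: 3.0, 3: 5.0}
vertices = [0, 1, 2, 3]
edges = [(x - 1, x) for x in mu]           # each unordered edge listed once

def c(x, y):
    lo, hi = min(x, y), max(x, y)
    return mu.get(hi, 0.0) if hi - lo == 1 else 0.0

def nbh(x):
    return [y for y in vertices if c(x, y) > 0]

def energy_form(f, g):
    # E(f, g): the (1/2) sum over ordered pairs x ~ y = sum over edges
    return sum(c(x, y) * (f[x] - f[y]) * (g[x] - g[y]) for (x, y) in edges)

def laplacian(u, x):
    # (Delta u)(x) = sum_{y ~ x} c(x, y)(u(x) - u(y)), as in (14)
    return sum(c(x, y) * (u[x] - u[y]) for y in nbh(x))

u = {0: 0.0, 1: 1.0, 2: -2.0, 3: 4.0}      # an arbitrary real-valued function
delta_1 = {x: 1.0 if x == 1 else 0.0 for x in vertices}

lhs = energy_form(delta_1, u)              # <delta_1, u>_E
rhs = laplacian(u, 1)                      # (Delta u)(1)
```

By the computation in the proof of Lemma 2.2, `lhs` and `rhs` agree; the energy of any constant function is zero, which is why $\mathcal{H}_E$ consists of functions modulo constants.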
###### Proposition 2.4.

(a) The Laplace operator $\Delta$ in (14) is defined on a dense linear subspace $\mathcal{D}$ in $\mathcal{H}_E$, and maps $\mathcal{D}$ into $\mathcal{H}_E$.

(b) Pick some $o$ in $G^0$. Then for every $x \in G^0 \setminus \{o\}$, there is a unique $v_x \in \mathcal{H}_E$ such that

 $\langle v_x, u \rangle_E = u(x) - u(o) \quad \text{for all } u \in \mathcal{H}_E.$ (15)

(c) The function $v_x$ in (15) satisfies

 $\Delta v_x = \delta_x - \delta_o.$ (16)

(d) For the subspace $\mathcal{D}$ in (a) we may take

 $\mathcal{D} = \mathrm{span}\{ v_x \mid x \in G^0 \setminus \{o\} \},$ (17)

i.e., all finite linear combinations of the family of vectors $v_x$.

(e) The Dirac-functions $\delta_x$ in (16) satisfy

 $\delta_x = c(x) v_x - \sum_{y \sim x} c(x,y) v_y$ (18)

where

 $c(x) := \sum_{y \sim x} c(x,y).$ (19)

(f) The operator $\Delta$ is Hermitian on $\mathcal{D}$, i.e.,

 $\langle \Delta u_1, u_2 \rangle = \langle u_1, \Delta u_2 \rangle \quad \text{holds for all } u_1, u_2 \in \mathcal{D},$ (20)

where

 $\langle u_1, u_2 \rangle := \langle u_1, u_2 \rangle_E := E(u_1, u_2).$ (21)

(g) Semiboundedness:

 $\langle u, \Delta u \rangle \geq 0 \quad \text{for all } u \in \mathcal{D}.$ (22)
###### Proof.

(a) Follows from (b) and (c).

(b) This is an application of Riesz’ lemma for the Hilbert space $\mathcal{H}_E$; see [22]. Indeed, pick $x_0 = o, x_1, \dots, x_n = x$ such that $x_{i-1} \sim x_i$, and $c(x_{i-1}, x_i) > 0$, $i = 1, \dots, n$; then

 $|u(x) - u(o)| \leq \left( \sum_{i=1}^{n} \frac{1}{c(x_{i-1}, x_i)} \right)^{\frac{1}{2}} \|u\|_{\mathcal{H}_E}$ (23)

where

 $\|u\|_{\mathcal{H}_E} := E(u)^{\frac{1}{2}}, \quad u \in \mathcal{H}_E.$ (24)

The desired conclusion (15) is then immediate from Riesz’ lemma.

(c) It is clear from (15) that each function

 $v_x: G^0 \to \mathbb{R}$ (25)

is real valued and that the vectors $v_x$ span a dense subspace in $\mathcal{H}_E$.

Hence to prove (16), we need to show that

 $(\Delta v_x)(y) = (\delta_x - \delta_o)(y), \quad \forall y \in G^0.$ (26)

Because of (12), we may impose the following re-normalization:

 $v_x(o) = 0, \quad \forall x \in G^0 \setminus \{o\}.$ (27)

With this, now (26) follows from (12), (14) and (15), by a direct computation which we leave to the reader; see also [22].

(d) Follows from (c).

(e) By (d) it is enough to prove that for all $z \in G^0 \setminus \{o\}$, we have the identity:

 $\Big\langle v_z, \, \delta_x - c(x) v_x + \sum_{y \sim x} c(x,y) v_y \Big\rangle = 0,$

which in turn is a computation similar to the one used in (c).

(f) and (g) follow by polarization, and a direct computation of (22). Indeed, if

 $u = \sum_{x} \xi_x v_x$ (28)

is a finite summation over $G^0 \setminus \{o\}$, $\xi_x \in \mathbb{C}$, then $\Delta u = \sum_x \xi_x (\delta_x - \delta_o)$, and

 $\langle u, \Delta u \rangle = \sum_{x} |(\Delta u)(x)|^2 + \Big| \sum_{x} (\Delta u)(x) \Big|^2.$ (29)

∎
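The identity (29) can be checked numerically. On a finite path with base point $o = 0$, the dipole vectors $v_x$ of (15) have the explicit resistance-sum form $v_x(y) = \sum_{i \le \min(x,y)} 1/c(x_{i-1}, x_i)$ (a standard fact for path networks; the conductances and coefficients below are illustrative choices, not from the paper):

```python
# Sketch: numerical check of identity (29) on the finite path 0--1--2--3--4
# with base point o = 0; conductances and coefficients are illustrative.
mu = {1: 2.0, 2: 0.5, 3: 4.0, 4: 1.0}       # c(x-1, x) = mu[x]
vertices = list(range(5))

def v(x):
    # dipole vector of (15) on a path: v_x(y) = sum_{i <= min(x,y)} 1/c(i-1, i),
    # normalized so that v_x(0) = 0 as in (27)
    return {y: sum(1.0 / mu[i] for i in range(1, min(x, y) + 1)) for y in vertices}

def energy(f, g):
    # E(f, g) = sum over edges c(x-1, x)(f(x) - f(x-1))(g(x) - g(x-1))
    return sum(mu[i] * (f[i] - f[i - 1]) * (g[i] - g[i - 1]) for i in mu)

def laplacian(f):
    # pointwise graph Laplacian (14) on the path
    out = {}
    for x in vertices:
        s = 0.0
        if x in mu:
            s += mu[x] * (f[x] - f[x - 1])
        if x + 1 in mu:
            s += mu[x + 1] * (f[x] - f[x + 1])
        out[x] = s
    return out

xi = {1: 0.7, 2: -1.3, 3: 0.4, 4: 2.0}      # u = sum_x xi_x v_x as in (28)
u = {y: sum(xi[x] * v(x)[y] for x in xi) for y in vertices}
Du = laplacian(u)

lhs = energy(u, Du)                          # <u, Delta u>
rhs = sum(Du[x] ** 2 for x in xi) + sum(Du[x] for x in xi) ** 2
```

Here `Du[x]` recovers the coefficient $\xi_x$ for $x \neq o$, exactly as in the derivation of (29), and `lhs` is manifestly nonnegative.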

###### Definition 2.5.

The following two closed subspaces play a critical role in our understanding of geometric boundaries of weighted graphs $G = (G^0, G^1, c)$; i.e., a graph with $G^0$ serving as the set of vertices, $G^1$ the set of edges, and

 $c: G^1 \to \mathbb{R}_+$ (30)

a fixed conductance function.

Given $G$, we introduce the Hilbert space $\mathcal{H}_E$ and the operator $\Delta$ as in (12), (13) and (14) above. Note that both depend on the choice of the conductance function $c$ in (30).

1. Set

 $\mathrm{Harm} := \{ h \in \mathcal{H}_E \mid \Delta h = 0 \};$ (31)

the finite-energy harmonic functions; “Harm” is short for “harmonic”. Note that functions in $\mathrm{Harm}$ may be unbounded as functions on $G^0$.

Setting $\mathrm{Fin} :=$ the closed span of $\{ \delta_x \mid x \in G^0 \}$ in $\mathcal{H}_E$, it is easy to see that

 $\mathrm{Harm} = \mathcal{H}_E \ominus \mathrm{Fin},$ (32)

and

 $\mathrm{Fin} = \mathcal{H}_E \ominus \mathrm{Harm},$ (33)

i.e., that the decomposition

 $\mathcal{H}_E = \mathrm{Fin} \oplus \mathrm{Harm}$ (34)

holds; see [22].

2. Set

 $\mathrm{Def} := \{ u \in \mathcal{H}_E \mid \Delta u = -u \};$ (35)

the finite-energy deficiency vectors for the operator $\Delta$.

(The reader will notice that solutions to (35) appear to contradict the estimates (22) and (29); but this is not so. The deeper explanation lies in the theory of deficiency-indices, and deficiency-subspaces for unbounded linear operators in Hilbert space.) The notation “Def” is short for deficiency-subspace.

## III. Computations

While the analysis of the two closed subspaces from Definition 2.5 (the finite-energy harmonic functions, and the functions in the defect-space, or deficiency-space, for the Laplacian) will be carried out in general (sections VI and VII), it will be useful first to work them out in a particular family of special cases and examples. At first glance, these examples may indeed appear rather special, but we will show in section VII that they have wider use in our analysis of the “boundary at infinity.” The special cases discussed here will further show that the abstract spaces and operators from section II allow for explicit computations.

###### Example 3.1.

In an infinite weighted graph take for vertex-set

 $G^0 = \{0\} \cup \mathbb{Z}_+ = \{0, 1, 2, \dots\},$ (36)

with edges between nearest neighbors,

 $\mathrm{Nbh}_G(0) = \{1\},$ (37)

and

 $\mathrm{Nbh}_G(x) = \{x-1, x+1\} \quad \text{if } x \in \mathbb{Z}_+.$ (38)

Pick a function $\mu: \mathbb{Z}_+ \to \mathbb{R}_+$, and set

 $c(x-1, x) =: \mu(x) \quad \text{if } x \in \mathbb{Z}_+.$ (39)

We will be interested in an extended version of the example when $G^0 = \mathbb{Z}$, and

 $\mathrm{Nbh}_G(x) = \{x-1, x+1\}$ (40)

for all $x \in \mathbb{Z}$. In this case, we will extend (39) by symmetry, as follows:

 $c(x-1, x) =: \mu(x) \quad \text{if } x \in \mathbb{Z}_+,$ (41)

and

 $c(x, x+1) = \mu(|x|) \quad \text{if } x \in \mathbb{Z}_-.$ (42)

To distinguish the two cases we will denote the first graph (on $\{0\} \cup \mathbb{Z}_+$) by $G_+$, and the second (on $\mathbb{Z}$) by $G(\mathbb{Z})$.

It turns out that there are important distinctions between the two, relative to the two subspaces $\mathrm{Harm}$ and $\mathrm{Def}$ introduced above.

The matrix form of the operator $\Delta$ is as follows in the two cases:

Let $M \in \mathbb{R}$; suppose $M > 1$, and set $\xi := M^{-1}$. For the conductance function in (41) and (42) we set

 $\mu(x) := M^x = \exp(x \ln M), \quad x \in \mathbb{Z}_+.$ (43)
###### Theorem 3.2.

In the examples with $\mu$ as in (43), the subspaces $\mathrm{Harm}$ and $\mathrm{Def}$ for the two graphs are as summarized in Table 3.

###### Proof.

It will be convenient for us to organize the proof into the four parts indicated in Table 3.

Part 1. With the definitions as given, we consider the possible solutions $h: G^0 \to \mathbb{R}$ to the equation

 $\Delta h = 0.$ (44)

Hence

 $0 = \mu(1)(h(0) - h(1)),$ (45)

and

 $0 = \mu(x)(h(x) - h(x-1)) + \mu(x+1)(h(x) - h(x+1)) \quad \text{for } x \in \mathbb{Z}_+.$ (46)

Setting

 $(\delta h)(x) := h(x) - h(x-1),$ (47)

equations (45) and (46) then take the following form:

 $\mu(1)(\delta h)(1) = 0,$ (48)

and

 $\mu(x)(\delta h)(x) = \mu(x+1)(\delta h)(x+1) \quad \text{for all } x \in \mathbb{Z}_+.$ (49)

Since $\mu(x) > 0$, clearly the last two equations imply $(\delta h)(x) = 0$ for all $x$. So

 $h(x) = h(0) + (\delta h)(1) + \dots + (\delta h)(x) = h(0),$

and $h$ must be the constant function. But $\mathcal{H}_E$ is obtained by modding out with the constants, so $h = 0$ in $\mathcal{H}_E$.

Part 2. In this case, $G^0 = \mathbb{Z}$, and we set $h(0) = 0$, and

 $h(-x) = -h(x), \quad x \in \mathbb{Z}.$ (50)

In that case,

 $0 = (\Delta h)(0) = \mu(1)(h(0) - h(-1) + h(0) - h(1)) = \mu(1)(2h(0) - h(-1) - h(1)),$

so the conditions (50) work. This together with (49) then determines $h$ on $\mathbb{Z}$.

Pick some constant $t \neq 0$, and set

 $\mu(x)(\delta h)(x) = t \quad \text{for all } x \in \mathbb{Z}_+;$ (51)

and then

 $h(x) = (\delta h)(1) + (\delta h)(2) + \dots + (\delta h)(x) = t\left( \mu(1)^{-1} + \mu(2)^{-1} + \dots + \mu(x)^{-1} \right).$

With conductance $\mu(x) = M^x$, and resistance $\xi := M^{-1}$, we get

 $h(x) = t(\xi + \xi^2 + \dots + \xi^x) = t\xi \, \frac{1 - \xi^x}{1 - \xi} \underset{x \to \infty}{\longrightarrow} \frac{t\xi}{1 - \xi}.$ (52)

Moreover,

 $E(h) = 2 \sum_{x=1}^{\infty} \mu(x) ((\delta h)(x))^2 = 2 \sum_{x=1}^{\infty} M^x (t \xi^x)^2 = 2 t^2 \sum_{x=1}^{\infty} \xi^x = \frac{2 t^2 \xi}{1 - \xi} < \infty.$
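The computation in Part 2 can be confirmed numerically (a sketch with the illustrative choices $M = 2$, $t = 1$, and a finite truncation of $\mathbb{Z}$): the odd extension of $h(x) = t(\xi + \dots + \xi^x)$ is harmonic, and its energy matches $2t^2\xi/(1-\xi)$.

```python
# Sketch: check that h(x) = t(xi + ... + xi^x), extended oddly to Z, satisfies
# Delta h = 0 on the two-sided graph and that E(h) = 2 t^2 xi / (1 - xi).
# The values M = 2, t = 1 and the truncation N are illustrative choices.
M, t = 2.0, 1.0
xi = 1.0 / M
N = 60                                      # truncation of the infinite graph

def h(x):
    s = sum(xi ** k for k in range(1, abs(x) + 1))
    return t * s if x >= 0 else -t * s      # odd extension, h(0) = 0

def c(x, y):                                # conductance of edge {x, y}, |x - y| = 1
    return M ** max(abs(x), abs(y))

# harmonicity near 0 (the only delicate vertex is x = 0)
residuals = [abs(sum(c(x, y) * (h(x) - h(y)) for y in (x - 1, x + 1)))
             for x in range(-3, 4)]

# energy: sum over edges {x, x+1} of the truncated graph
energy = sum(c(x, x + 1) * (h(x) - h(x + 1)) ** 2 for x in range(-N, N))
predicted = 2.0 * t ** 2 * xi / (1.0 - xi)
```

The tail of the energy sum is geometric in $\xi$, so the truncation error is negligible at this depth.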

Part 3. Now the “defect equation” is

 $\Delta u = -u, \quad u \in \mathcal{H}_E,$ (53)

starting with

 $(\Delta u)(0) = \mu(1)(u(0) - u(1)) = -u(0);$

so

 $M(\delta u)(1) = u(0),$

or

 $u(1) = (1 + \xi) u(0).$ (54)

Equation (53) for $x \in \mathbb{Z}_+$ yields

 $\mu(x+1)(\delta u)(x+1) = \mu(x)(\delta u)(x) + u(x).$ (55)

Now set $\lambda := u(0)$. For $n \in \mathbb{Z}_+$, we now define two sequences of polynomials $p_n, q_n$ as follows:

 $(\delta u)(n) := \xi^n p_n(\xi) \lambda,$ (56)
 $u(n) := q_n(\xi) \lambda,$ (57)

where

 $\xi = M^{-1}, \quad \text{and} \quad \mu(n) = M^n.$ (58)

It follows that $u$ is a multiple of the function

 $u_1(x) := q_x(\xi), \quad x \in \{0\} \cup \mathbb{Z}_+.$ (59)

Specifically, $p_0 = 0$, $p_1 = 1$, $q_0 = 1$, $q_1 = 1 + \xi$; and

 $p_{n+1} = p_n + q_n,$ (60)
 $q_{n+1} = q_n + \xi^{n+1} p_{n+1} = \xi^{n+1} p_n + (1 + \xi^{n+1}) q_n.$ (61)

In matrix form:

 $\begin{pmatrix} p_{n+1} \\ q_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \xi^{n+1} & 1 + \xi^{n+1} \end{pmatrix} \begin{pmatrix} p_n \\ q_n \end{pmatrix}.$ (62)

The polynomials $p_n, q_n$ may well be of independent interest, and we will need the first few in the infinite string; they are listed in Table 4.
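The recursions (60) and (61) generate these polynomials directly; the following sketch (representing a polynomial in $\xi$ as its list of integer coefficients, constant term first) reproduces the kind of data displayed in Table 4:

```python
# Sketch: generate p_n, q_n from the recursions (60)-(61), with polynomials in
# xi represented as coefficient lists [a_0, a_1, ...] (constant term first).
def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_shift(a, k):                     # multiply by xi^k
    return [0] * k + a

def pq_sequences(n_max):
    p, q = [0], [1]                       # p_0 = 0, q_0 = 1
    ps, qs = [p], [q]
    for n in range(n_max):
        p = poly_add(p, q)                        # (60): p_{n+1} = p_n + q_n
        q = poly_add(q, poly_shift(p, n + 1))     # (61): q_{n+1} = q_n + xi^{n+1} p_{n+1}
        ps.append(p)
        qs.append(q)
    return ps, qs

ps, qs = pq_sequences(6)
# e.g. p_2 = 2 + xi and q_2 = 1 + xi + 2 xi^2 + xi^3
```

The output agrees with Lemma 3.3 below: the constant term of $p_n$ is $n$, both leading coefficients are one, and the degrees are $(n-1)n/2$ and $n(n+1)/2$ respectively.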

###### Lemma 3.3.

The first and the last terms in the polynomials $p_n$ and $q_n$ are as indicated in the following formulas:

 $p_n(\xi) = n + (n-1)\xi + \dots + \xi^{\frac{(n-1)n}{2}},$ (63)

and

 $q_n(\xi) = 1 + \xi + \dots + \xi^{\frac{n(n+1)}{2}}.$ (64)

So the degree of $p_n$ is $\frac{(n-1)n}{2}$, and the degree of $q_n$ is $\frac{n(n+1)}{2}$.

###### Proof.

Note that Table 4 already suggests the start of an induction proof. Now suppose (63) and (64) hold up to $n$, i.e., $p_n(0) = n$, $q_n(0) = 1$; and that both $p_n$ and $q_n$ have leading coefficient one.

With the use of (60) and (61) we then get

 $p_{n+1}(0) = p_n(0) + q_n(0) = n + 1,$

which is the next step in the induction. Now apply the same argument to

 $\left. \frac{d}{d\xi} p_{n+1} \right|_{\xi = 0},$

and the result in (63) for the next coefficient follows.

Setting $\xi = 0$ in

 $q_{n+1}(\xi) = \xi^{n+1} p_n(\xi) + (1 + \xi^{n+1}) q_n(\xi),$ (65)

and using the induction hypothesis, we get $q_{n+1}(0) = 1$.

We now turn to the leading coefficient in $q_{n+1}$. As before, we use the induction hypothesis, and (65). We get

 $q_{n+1}(\xi) = \xi^{n+1}\left( n + \dots + \xi^{\frac{(n-1)n}{2}} \right) + (1 + \xi^{n+1})\left( 1 + \dots + \xi^{\frac{n(n+1)}{2}} \right) = 1 + \dots + \xi^{\frac{(n+1)(n+2)}{2}}.$

This completes the induction proof.

In other words, the degree of $q_n$ is

 $1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2};$

and each $q_n$ has leading coefficient one. ∎

###### Proposition 3.4.

(a) The two generating functions

 $P(X, \xi) := \sum_{n=1}^{\infty} p_n(\xi) X^n,$ (66)

and

 $Q(X, \xi) := \sum_{n=0}^{\infty} q_n(\xi) X^n$ (67)

satisfy

 $X\left( P(X, \xi) + Q(X, \xi) \right) = P(X, \xi)$ (68)

and

 $Q(X, \xi) = 1 + X Q(X, \xi) + P(\xi X, \xi).$ (69)

(b) Here we have picked $0 < \xi < 1$; and we note that, in the first variable, $P(\cdot, \xi)$ has radius of convergence at least $\sqrt{\xi}$, while $Q(\cdot, \xi)$ has radius of convergence $1$.

###### Proof.

Multiplying through by $X^{n+1}$ in

 $p_{n+1}(\xi) = p_n(\xi) + q_n(\xi),$

summing over $n$, and using

 $\begin{pmatrix} p_0 \\ q_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$ (70)

we arrive at the first formula (68) in the statement of the proposition.

For the proof of (69) we again multiply through by $X^{n+1}$, now in

 $q_{n+1}(\xi) = q_n(\xi) + \xi^{n+1} p_{n+1}(\xi).$

After adding up the terms with $n = 0, 1, 2, \dots$ and using (70), we arrive at the desired conclusion (69).

We now turn to the radii of convergence:

Since we already established that

 $\sum_{n=1}^{\infty} \xi^n p_n(\xi)^2 < \infty,$

it follows that there is a finite constant $C$ such that

 $p_n(\xi) \leq C \xi^{-\frac{n}{2}};$ (71)

and we conclude that $P(\cdot, \xi)$ has radius of convergence at least $\sqrt{\xi}$ as claimed.

But further note that the sequence $(q_n(\xi))_n$ is bounded, as a consequence of the estimate (71). Hence we conclude that $Q(\cdot, \xi)$ has radius of convergence $1$. ∎
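The functional equations (68) and (69) can be checked coefficient-by-coefficient at a fixed $\xi$ (a sketch; the value $\xi = 1/2$ and the truncation order are illustrative choices):

```python
# Sketch: verify (68), X(P + Q) = P, and (69), Q = 1 + XQ + P(xi X, xi), as
# identities between truncated power series in X, at the illustrative xi = 0.5.
xi, N = 0.5, 30

p, q = [0.0], [1.0]                 # numeric sequences p_n(xi), q_n(xi)
for n in range(N):
    p.append(p[-1] + q[-1])                   # (60)
    q.append(q[-1] + xi ** (n + 1) * p[-1])   # (61), using the new p_{n+1}

# (68): the coefficient of X^n in X(P + Q) is p_{n-1} + q_{n-1}; in P it is p_n
lhs68 = [p[n - 1] + q[n - 1] for n in range(1, N)]
rhs68 = [p[n] for n in range(1, N)]

# (69): the coefficient of X^n (n >= 1) in 1 + XQ + P(xi X, xi) is q_{n-1} + xi^n p_n
lhs69 = [q[n - 1] + xi ** n * p[n] for n in range(1, N)]
rhs69 = [q[n] for n in range(1, N)]
```

Matching coefficients here is just the recursions (60) and (61) in disguise, which is precisely the content of the proof above.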

The purpose of the previous discussion is to find the deficiency vector $u$, i.e., the solution in $\mathcal{H}_E$ to $\Delta u = -u$ in a random walk model with $\mu(x) = M^x$, where $M > 1$. We now turn to the corresponding generating function

 $Q\left( X, \tfrac{1}{M} \right) = \sum_{n=0}^{\infty} u(n) X^n.$
###### Corollary 3.5.

The generating function $P$ from (66) in Proposition 3.4 has the representation

 $P(X, \xi) = \sum_{n=1}^{\infty} \frac{\xi^{\frac{(n-1)n}{2}} X^n}{(1 - X)^2 (1 - \xi X)^2 (1 - \xi^2 X)^2 \cdots (1 - \xi^{n-1} X)^2}.$ (72)
###### Proof.

This is an application of (68) and (69): eliminate $Q$, and iterate the substitution $X \mapsto \xi X$. ∎

###### Corollary 3.6.

The generating function for $u$ itself is as follows:

 $Q(X, \xi) = \sum_{n=0}^{\infty} u(n) X^n = \frac{1}{1 - X} \left( 1 + \sum_{n=1}^{\infty} \frac{\xi^{\frac{n(n+1)}{2}} X^n}{(1 - \xi X)^2 \cdots (1 - \xi^n X)^2} \right),$

where

 $\prod_{k=1}^{\infty} (1 - \xi^k X) = \exp\left( -\frac{\xi X}{1 - X} \right).$
###### Proof.

Combine the previous formulas. ∎
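As a numerical sanity check (a sketch with the illustrative values $\xi = 1/2$, $X = 1/4$), the series $\sum_n q_n(\xi) X^n$ computed from the recursions agrees with the product representation in Corollary 3.6:

```python
# Sketch: compare Q(X, xi) = sum_n q_n(xi) X^n (from the recursion) with the
# closed form (1/(1-X)) (1 + sum_{n>=1} xi^{n(n+1)/2} X^n / prod_{k=1}^n (1 - xi^k X)^2).
xi, X = 0.5, 0.25

p, q, series = 0.0, 1.0, 1.0          # p_0, q_0; series accumulates q_n X^n
for n in range(200):
    p = p + q                          # p_{n+1}, by (60)
    q = q + xi ** (n + 1) * p          # q_{n+1}, by (61)
    series += q * X ** (n + 1)

closed = 1.0
for n in range(1, 60):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (1.0 - xi ** k * X) ** 2
    closed += xi ** (n * (n + 1) // 2) * X ** n / prod
closed /= (1.0 - X)
```

Since $(q_n(\xi))$ is bounded and $|X| < 1$, both truncations converge rapidly, so the two numbers agree to machine precision.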

In the next section, we take up a number of dynamics related issues concerning these polynomials. Below we are concerned with the proof of the following:

###### Lemma 3.7.

For every $m \in \mathbb{Z}_+$, there is an $M$ such that

 $p_x(\xi) \leq x^m,$ (73)

and

 $q_x(\xi) \leq (x+1)^m - x^m, \quad \text{for all } x \in \mathbb{Z}_+,$ (74)

where $\xi = M^{-1}$ depends on $M$.

###### Proof.

Table 4 makes clear the start of an induction on (73) and (74). Now suppose $M$ has been chosen such that (73) and (74) hold up to $x$.

Then

 $p_{x+1}(\xi) = p_x(\xi) + q_x(\xi) \leq x^m + (x+1)^m - x^m = (x+1)^m.$

As a result, we get

 $q_{x+1}(\xi) = q_x(\xi) + \xi^{x+1} p_{x+1}(\xi) \leq (x+1)^m - x^m + \xi^{x+1} (x+1)^m.$ (75)

Now in the next step, we adjust $M$ such that

 $(x+1)^m - x^m + \xi^{x+1} (x+1)^m \leq (x+2)^m - (x+1)^m.$

Dividing by $(x+1)^m$, we rewrite this:

 $2 + \xi^{x+1} \leq \left( 1 - \frac{1}{x+1} \right)^m + \left( 1 + \frac{1}{x+1} \right)^m.$

Since the right-hand side is bounded below by $2\left( 1 + \binom{m}{2} \left( \frac{1}{x+1} \right)^2 \right)$, it suffices to have

 $(x+1)^2 \xi^{x+1} \leq m(m-1).$ (76)

But

 $\max_{t \in \mathbb{R}_+} t^2 \xi^t = \left( \frac{2}{\ln \xi} \right)^2 e^{-2}.$ (77)

It follows that (76) holds if

 $|\ln \xi| > \frac{2}{e \sqrt{m(m-1)}},$

and so if

 $M > \exp\left( \frac{2}{e \sqrt{m(m-1)}} \right).$ (78)

Since the RHS in (78) tends to $1$ as $m \to \infty$, and $M > 1$ is fixed, it follows that $m$ can be adjusted such that (78) holds. With this choice, in fact (74) will be satisfied for all $x \in \mathbb{Z}_+$. To see this, apply (77) to $t = x + 1$. ∎
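For instance (a sketch; the margin above the threshold is an illustrative choice), with $m = 3$ the proof requires $M > \exp(2/(e\sqrt{6})) \approx 1.35$, and the bounds (73) and (74) can be observed directly along the recursion:

```python
import math

# Sketch: observe the bounds of Lemma 3.7 with m = 3. The choice M = 1.4 is
# illustrative; it exceeds the threshold exp(2/(e*sqrt(6))) ~ 1.35 from (78).
M = 1.4
xi = 1.0 / M
threshold = math.exp(2.0 / (math.e * math.sqrt(6.0)))

p, q = 0.0, 1.0                     # p_0 = 0, q_0 = 1
ok_p, ok_q = [], []
for x in range(1, 200):
    p = p + q                       # p_x, by (60)
    q = q + xi ** x * p             # q_x, by (61)
    ok_p.append(p <= x ** 3)                  # (73) with m = 3
    ok_q.append(q <= (x + 1) ** 3 - x ** 3)   # (74) with m = 3
```

Here the maximum of $t^2 \xi^t$ is about $4.8 \le m(m-1) = 6$, so the inductive step (76) holds at every $x$.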

###### Lemma 3.8.

The solution $u$ from Part 3 in Table 3 satisfies

 $E(u) < \infty.$ (79)
###### Proof.
 $E(u) = \sum_{x \in \mathbb{Z}_+} \mu(x) ((\delta u)(x))^2 = \sum_{x \in \mathbb{Z}_+} M^x (\xi^x p_x(\xi))^2 \underset{\text{by Lemma 3.7}}{\leq} \sum_{x \in \mathbb{Z}_+} x^{2m} \xi^x < \infty.$ ∎

This finishes the proof of Part 3 in Theorem 3.2.

Part 4. Here the model is the two-sided graph on $\mathbb{Z}$, where the conductance function $c$ satisfies (41) and (42). As a result, the solution $u$ to $\Delta u = -u$ considered in the theorem satisfies

 $u(x) = u(-x), \quad x \in \mathbb{Z}.$ (80)

We get

 $(\Delta u)(0) = M(2u(0) - u(1) - u(-1)) = -2M(\delta u)(1) = -u(0).$

Hence

 $(\delta u)(1) = \left( \frac{\xi}{2} \right) u(0),$ (81)

and

 $u(1) = \left( 1 + \frac{\xi}{2} \right) u(0).$

Now $u(x)$ is the second component in the vector

 $\begin{pmatrix} * \\ u(x) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \xi^x & 1 + \xi^x \end{pmatrix} \begin{pmatrix} 1 & 1 \\ \xi^{x-1} & 1 + \xi^{x-1} \end{pmatrix} \cdots \begin{pmatrix} 1 & 1 \\ \xi^2 & 1 + \xi^2 \end{pmatrix} \begin{pmatrix} \frac{1}{2} \\ 1 + \frac{\xi}{2} \end{pmatrix} u(0).$
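The matrix product above can be evaluated numerically; the following sketch (with the illustrative value $M = 2$) reproduces $u(x)$ on the two-sided graph and confirms the defect equation $\Delta u = -u$ at each vertex:

```python
# Sketch: compute the symmetric solution u of Delta u = -u on the two-sided
# graph for the illustrative value M = 2, via the matrix product above, and
# verify the defect equation pointwise.
M = 2.0
xi = 1.0 / M

def u_of(x, u0=1.0):
    # second component of the 2x2 matrix product applied to (1/2, 1 + xi/2)
    if x == 0:
        return u0
    vec = (0.5, 1.0 + xi / 2.0)
    for k in range(2, x + 1):
        a, b = vec
        vec = (a + b, xi ** k * a + (1.0 + xi ** k) * b)
    return vec[1] * u0

u = {x: u_of(abs(x)) for x in range(-12, 13)}

def c(x, y):                         # conductance on Z: c = M^max(|x|, |y|)
    return M ** max(abs(x), abs(y))

# residual of (Delta u)(x) + u(x) at interior vertices of the truncation
residuals = [abs(sum(c(x, y) * (u[x] - u[y]) for y in (x - 1, x + 1)) + u[x])
             for x in range(-11, 12)]
```

By construction the recursion (55) is built into the matrix product, so the residuals vanish up to rounding.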

## IV. The Polynomials $p_n$ and $q_n$, $n = 1, 2, \dots$

We begin with some technical lemmas, and we further observe that the examples from section III indeed have a more general flavor: for example (Proposition 5.1), they may be derived from a standard random walk model. We will do the computations here just for a one-dimensional walk, but the basic idea carries over much more generally as we show in the next section.

Optimal Estimates. In the previous section, we considered $\xi = M^{-1}$ and the two sequences $p_n(\xi)$, $q_n(\xi)$ for $n \in \mathbb{Z}_+$. In the proof of Theorem 3.2, we showed the following:

###### Proposition 4.1.

If

 $M > \exp\left( e^{-1} \sqrt{2/3} \right),$ (82)

then

 $p_n(\xi) \leq n^3 \quad \text{for all } n \in \mathbb{Z}_+.$ (83)

In the following sense, this is not optimal; in fact, there is a finite constant $C$ such that we get linear bounds in both directions. Specifically:

 $n \leq p_n(\xi) \leq C n \quad \text{for all } n \in \mathbb{Z}_+.$ (84)
###### Proof.

We will establish the existence of by induction; and the size of will follow from the a priori estimates to follow. The induction begins with an inspection of Table 4 in section III.

###### Lemma 4.2.

For every $\xi \in (0,1)$, the following finite limit exists:

 $Q(\infty, \xi) := \lim_{n \to \infty} q_n(\xi);$ (85)

the sequence $(q_n(\xi))$ is monotone increasing, and $Q(\infty, \xi) < \infty$.

###### Proof.

Let $m$ and $M$ be as in Lemma 3.7, and set $u_\xi(n) := q_n(\xi)$.

We established that

 $(\delta u_\xi)(n) = \xi^n p_n(\xi),$

where $\xi = M^{-1}$. If $p_n(\xi)^2 \leq A \xi^{-n}$ for some fixed constant $A$, then

 $(\delta u_\xi)(n) \leq \sqrt{A}\, \xi^{n/2}.$ (86)

Since

 $u_\xi(n) = 1 + \sum_{k=1}^{n} (\delta u_\xi)(k),$ (87)

we get

 $u_\xi(n) \leq 1 + \sqrt{A\xi}\, \frac{1 - \xi^{n/2}}{1 - \sqrt{\xi}} < 1 + \frac{\sqrt{A\xi}}{1 - \sqrt{\xi}}.$ (88)

Since $u_\xi(n)$ is monotone increasing in $n$ by (61), the desired conclusion follows, and

 $Q(\infty, \xi) \leq 1 + \frac{\sqrt{A\xi}}{1 - \sqrt{\xi}}.$ (89)

We will see below that

 $p_n(\xi) = 1 + \sum_{x=1}^{n-1} q_x(\xi),$

so

 $p_n(\xi) \leq 1 + \left( 1 + \frac{\sqrt{A\xi}}{1 - \sqrt{\xi}} \right) n.$

The linear upper bound on the RHS in (84) now follows.

Returning to $p_n$, we have

 $p_n(\xi) = p_{n-1}(\xi) + q_{n-1}(\xi)$ (90)
 $= q_{n-1}(\xi) + q_{n-2}(\xi) + \dots + q_1(\xi) + p_1(\xi)$

where

 (p1(ξ)q1(