Analysis of unbounded operators and random motion
We study infinite weighted graphs with a view to “limits at infinity,” or boundaries at infinity. Examples of such weighted graphs arise in infinite (in practice, “very large”) networks of resistors, and in statistical mechanics models for classical or quantum systems. More generally, our analysis covers reproducing kernel Hilbert spaces and the operators associated with them. Starting from an infinite set of vertices or nodes, the essential ingredient in applications is a reproducing kernel Hilbert space; it measures the differences of functions evaluated at pairs of points, and the Hilbert norm-squared represents a suitable measure of energy. Associated unbounded operators define a notion of dissipation; such an operator may be a graph Laplacian, or a more abstract unbounded Hermitian operator defined from the reproducing kernel Hilbert space under study. We prove that there are two closed subspaces of the reproducing kernel Hilbert space which measure quantitative notions of limits at infinity: one generalizes the finite-energy harmonic functions, and the other is a deficiency space of a natural operator associated directly with the diffusion. We establish these results in the abstract, and we offer examples and applications. Our results are related to, but different from, potential-theoretic notions of “boundaries” in more standard random walk models; comparisons are made.
Key words and phrases: Operators in Hilbert space, deficiency spaces, networks of resistors, statistical mechanics models, harmonic functions, graphs, Schrödinger equations, selfadjoint extension operators, reproducing kernel Hilbert spaces.
2000 Mathematics Subject Classification: Primary 47B25, 47B32, 47B37, 47S50, 60H25, 81S10, 81S25.
We will use the theory of unbounded Hermitian operators with dense domain in Hilbert space in a study of infinite weighted graphs, with a view to “limits at infinity.” We begin by introducing the tools from operator theory, as developed by M. H. Stone, John von Neumann, Kurt Friedrichs, and Tosio Kato, adapted to our particular setup. We stress that a Hermitian operator may not be selfadjoint, and that the discrepancy is measured by deficiency indices (details below; see also the classical books and the more recent treatments cited in the references). In physical problems, these mathematical notions of defect take the form of “boundary conditions”: for example, waves that are diffracted on the boundary of a region in Euclidean space; the scattering of classical waves on a bounded obstacle; a quantum mechanical “particle” in a repulsive potential that shoots to infinity in finite time; or, in more recent applications, random walks on infinite weighted graphs that “wander off” to points on an idealized boundary. In all of these instances, one is faced with a dynamical problem: for example, the solution to a Schrödinger equation represents the time evolution of quantum states in a particular problem in atomic physics.
The operators in these applications will be Hermitian, but in order to solve the dynamical problems one must first identify a selfadjoint extension of the initially given operator. Once that is done, von Neumann’s spectral theorem applies to the selfadjoint extension. Each selfadjoint extension has a spectral resolution, i.e., it is an integral against an orthogonal projection-valued measure, with different extensions representing different “physical” boundary conditions. Hence non-zero deficiency indices measure degrees of non-selfadjointness, and deficiency spaces “measure” boundary obstructions, or scattering on an obstacle.
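In standard notation (a summary of von Neumann's classical framework, not specific to this paper's setup), the situation may be expressed as follows.

```latex
% For a densely defined Hermitian operator A in a Hilbert space H, set
\[
  \mathcal{D}_{\pm} := \ker\!\left( A^{*} \mp i\,I \right),
  \qquad
  n_{\pm} := \dim \mathcal{D}_{\pm}.
\]
% A admits selfadjoint extensions if and only if n_+ = n_-, and it is
% essentially selfadjoint if and only if n_+ = n_- = 0.  Any selfadjoint
% extension B has a spectral resolution
\[
  B = \int_{\mathbb{R}} \lambda \, dE(\lambda),
\]
% with E an orthogonal projection-valued measure; the associated
% Schroedinger dynamics is solved by the unitary group U(t) = e^{itB}.
```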
The variety of applied problems that lend themselves to a computation of deficiency indices and the study of selfadjoint extensions is vast and diverse. As a result, it helps if one can identify additional structures that throw light on the problem.
Our results are inspired in part by the following recent developments in related areas: fractals in the small and in the large [13, 17, 19], representation theory [15, 16], operator algebras [10, 30], harmonic analysis [14, 17, 12, 20]; and multiresolutions/wavelets [1, 29, 23, 21].
We will further use notions from dynamics, in particular infinite matrix products, to prove essential selfadjointness of families of Hermitian operators arising naturally in reproducing kernel Hilbert spaces. The latter include graph Laplacians for infinite weighted graphs, with the Laplacian in this context presented as a Hermitian operator in an associated Hilbert space of finite-energy functions on the vertex set. Other examples include Hilbert spaces of band-limited signals.
Further applications enter into the techniques used in discrete simulations of stochastic integrals.
We encountered the present operator-theoretic results in our study of discrete Laplacians, which in turn has part of its motivation in numerical analysis. A key tool in applying numerical analysis to the solution of partial differential equations is discretization, and the use of repeated differences. Specifically, one picks a grid size $h$, and then proceeds in steps: (1) starting with a partial differential operator, study an associated discretized operator built with repeated differences on the $h$-lattice; (2) solve the discretized problem for $h$ fixed; (3) as $h$ tends to zero, evaluate the resulting approximation limits, and bound the error terms. For this purpose, one must use a metric, and the norm in Hilbert space has proved an effective tool; hence the Hilbert spaces and the operator theory.
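As a minimal numerical sketch of steps (1)–(3) (our own illustration, not taken from the paper), consider the repeated-difference approximation of a second derivative on an $h$-lattice; halving $h$ roughly quarters the error, reflecting second-order accuracy:

```python
import numpy as np

def second_difference(f, x, h):
    """Discretized second derivative: repeated differences on the h-lattice."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# Test on f(x) = sin(x), whose second derivative is -sin(x).
x0 = 1.0
exact = -np.sin(x0)
errors = [abs(second_difference(np.sin, x0, h) - exact) for h in (0.1, 0.05, 0.025)]

# Halving h should roughly quarter the error (second-order accuracy).
print(errors[0] / errors[1], errors[1] / errors[2])  # both close to 4
```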
This procedure connects to our present graph Laplacians: when discretization is applied to the Laplace operator in continuous variables, the result is the graph of integer points with constant weights. But if numerical analysis is applied instead to a continuous Laplace operator on a Riemannian manifold, the discretized Laplace operator will instead involve an infinite graph with variable weights, with vertices in configurations other than the integer lattice.
There is a large literature covering the general theory of reproducing kernel Hilbert spaces and its applications; such applications include potential theory, stochastic integration, and boundary value problems from PDEs, among others. In brief summary, a reproducing kernel Hilbert space consists of two things: a Hilbert space $\mathcal{H}$ of functions on a set $X$, and a reproducing kernel $k$, i.e., a complex-valued function on $X \times X$ such that for every $x$ in $X$, the function $k_x := k(\cdot, x)$ is in $\mathcal{H}$ and reproduces the value $f(x)$ from the inner product in $\mathcal{H}$, so the formula
\[
  f(x) = \langle k_x, f \rangle_{\mathcal{H}}
\]
holds for all $f$ in $\mathcal{H}$. Moreover, there is a set of axioms for a function in two variables that characterizes precisely when it determines a reproducing kernel Hilbert space; and conversely, there are necessary and sufficient conditions that apply to a Hilbert space of functions and decide when it is a reproducing kernel Hilbert space.
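The characterization in question is the classical Moore–Aronszajn correspondence; in standard notation (a set $X$ and a kernel $k$ on $X \times X$), it reads:

```latex
% A function k : X x X -> C is positive semidefinite when
\[
  \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{c_i}\, c_j \, k(x_i, x_j) \;\geq\; 0
  \quad \text{for all } n,\; x_1, \dots, x_n \in X,\; c_1, \dots, c_n \in \mathbb{C}.
\]
% Moore--Aronszajn: k is positive semidefinite if and only if it is the
% reproducing kernel of a unique Hilbert space H(k) of functions on X,
% namely the completion of span{ k(., x) : x in X } in the inner product
\[
  \langle k(\cdot, x), k(\cdot, y) \rangle := k(x, y).
\]
```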
Here we shall restrict these “reproducing” axioms and obtain instead a smaller class of reproducing kernel Hilbert spaces. We add two additional axioms: first, we will reproduce not the values themselves of the functions in the Hilbert space, but rather the differences $f(x) - f(y)$ for all pairs of points $x, y$; and second, we will impose one additional axiom to the effect that the Dirac mass $\delta_x$ is contained in the Hilbert space for every point $x$. When these two additional conditions are satisfied, we say that we have a relative reproducing kernel Hilbert space.
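In formulas, the two added axioms may be paraphrased as follows (our notation: a Hilbert space $\mathcal{H}$ of functions on a set $X$):

```latex
% (1) Differences, rather than values, are reproduced: for every pair
%     x, y in X there is a vector v_{x,y} in H with
\[
  f(x) - f(y) = \langle v_{x,y}, f \rangle_{\mathcal{H}}
  \qquad \text{for all } f \in \mathcal{H};
\]
% (2) the Dirac masses belong to the space:
\[
  \delta_x \in \mathcal{H} \qquad \text{for all } x \in X.
\]
```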
It is known that every weighted graph (the infinite case is of main concern here) induces a relative reproducing kernel Hilbert space, and an associated graph Laplacian. Under certain conditions, the converse holds as well: given a relative reproducing kernel Hilbert space on a set, it is then possible, in a canonical way, to construct a weighted graph on this set such that its energy Hilbert space coincides with the given Hilbert space itself. In our construction, the surprise is that the edges as well as the weights on the edges may be built directly from only the Hilbert space axioms defining the initially given relative reproducing kernel Hilbert space. Since this includes all infinite graphs of electrical resistors and their potential theory (boundaries, harmonic functions, and graph Laplacians), the result has applications to these fields, and it serves to unify diverse branches in a vast research area.
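On a finite graph, this correspondence can be tested directly. The following sketch (our own illustration, with made-up conductances on a 4-cycle) builds the energy form from a conductance matrix and verifies the relative reproducing property: the dipole vector $v_{xy}$, solving $\Delta v_{xy} = \delta_x - \delta_y$, reproduces the difference $f(x) - f(y)$ through the energy inner product.

```python
import numpy as np

# Hypothetical symmetric conductances on a 4-cycle (illustrative numbers only).
C = np.array([[0., 2., 0., 1.],
              [2., 0., 3., 0.],
              [0., 3., 0., 4.],
              [1., 0., 4., 0.]])

L = np.diag(C.sum(axis=1)) - C        # graph Laplacian
# Energy form: E(u, v) = (1/2) sum_{x,y} c(x,y)(u(x)-u(y))(v(x)-v(y)) = u^T L v
energy = lambda u, v: u @ L @ v

x, y = 0, 2
rhs = np.zeros(4)
rhs[x], rhs[y] = 1.0, -1.0
v_xy = np.linalg.pinv(L) @ rhs        # dipole: L v_xy = delta_x - delta_y (mod constants)

f = np.array([0.3, -1.2, 2.5, 0.7])   # an arbitrary function on the vertices
assert np.isclose(energy(v_xy, f), f[x] - f[y])   # E(v_xy, f) = f(x) - f(y)
```

The pseudo-inverse handles the one-dimensional kernel of the Laplacian (the constants), matching the convention of working with functions modulo constants.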
II. Operator Theoretic Framework
This section contains the precise definitions of the terms used above, and to be used in later sections: the particular discrete networks, the weights on edges, the associated Hilbert spaces, and the infinite Laplacians. We open with two lemmas which establish links between the graph theoretic networks on one side, and the operator theory (unbounded Hermitian operators) on the other. This will allow us to encode certain “boundaries” with two subspaces of an associated Hilbert space.
Let $X$ be a set, and let $c \colon X \times X \to [0, \infty)$ be a function satisfying the following four conditions:
(i) $c(x, y) \geq 0$ for all $x, y \in X$;
(ii) $c(x, y) = c(y, x)$ for all $x, y \in X$;
(iii) $c(x, x) = 0$ for all $x \in X$;
(iv) for all $x, y \in X$ such that $x \neq y$, there is a finite set of distinct points $x_0, x_1, \ldots, x_n$ in $X$ such that $x_0 = x$, $x_n = y$, and $c(x_{i-1}, x_i) > 0$ for $i = 1, \ldots, n$.
Let $(X, c)$ be a system as described above, with the function $c$ satisfying conditions (i)–(iv). For $x \in X$, let $\delta_x$ be the Dirac function on $X$ supported at $x$. Then there is a Hilbert space $\mathcal{H}$ containing the functions $\delta_x$, with inner product $\langle \cdot, \cdot \rangle$, such that
\[
\langle \delta_x, \delta_y \rangle =
\begin{cases}
\sum_{z \in X} c(x, z), & y = x, \\
-\,c(x, y), & y \neq x.
\end{cases}
\]
We will be working with functions on $X$ modulo multiples of the constant function on $X$. We define the Hilbert space $\mathcal{H}$ as follows:
\[
\mathcal{H} := \Big\{\, f \colon X \to \mathbb{C} \;\Big|\; \sum_{x, y \in X} c(x, y)\, |f(x) - f(y)|^{2} < \infty \,\Big\} \Big/ \text{(constants)},
\]
and we set
\[
\langle f, g \rangle := \frac{1}{2} \sum_{x, y \in X} c(x, y)\, \overline{\big( f(x) - f(y) \big)}\, \big( g(x) - g(y) \big).
\]
Let $(X, c)$ be a system satisfying conditions (i)–(iv) above, and let $\mathcal{H}$ be the Hilbert space introduced in Lemma 2.1.
For elements $f$ in the linear span of the Dirac functions, set
\[
(\Delta f)(x) := \sum_{y \in X} c(x, y)\, \big( f(x) - f(y) \big).
\]
Then $\Delta$ is a Hermitian operator with a dense domain in $\mathcal{H}$.
Select some base-point $o$ in $X$. Let $x \in X$, and select distinct points $x_0, x_1, \ldots, x_n$ such that $x_0 = o$, $x_n = x$, and $c(x_{i-1}, x_i) > 0$ for $i = 1, \ldots, n$. Then if $f \in \mathcal{H}$, we get the following estimate
\[
|f(x) - f(o)| \leq \Big( \sum_{i=1}^{n} \frac{1}{c(x_{i-1}, x_i)} \Big)^{1/2} \|f\|,
\]
where $\|\cdot\|$ is the norm in $\mathcal{H}$. By Riesz's lemma, there is a unique $v_x \in \mathcal{H}$ such that
\[
f(x) - f(o) = \langle v_x, f \rangle \quad \text{for all } f \in \mathcal{H}.
\]
Now let $\mathcal{D}$ be the linear span of $\{\, v_x : x \in X \,\}$. It follows from (7) that $\mathcal{D}$ is dense in $\mathcal{H}$, and that
which is the desired conclusion (8).
A direct inspection yields the following two additional properties.
Let $V$ and $E$ be two sets, of vertices and edges respectively (in our case the infinite cases are of main importance), and assume the following:
If $(x, y) \in E$, then $(y, x) \in E$, and we write $x \sim y$. If $x \sim y$ then $x \neq y$.
For every pair of points $x$ and $y$ in $V$, there is a finite subset
\[
\{ x_0, x_1, \ldots, x_n \} \subset V
\]
such that $x_0 = x$, $x_n = y$, and $x_{i-1} \sim x_i$ for $i = 1, \ldots, n$.
For every $x \in V$ we assume that the set of neighbors is finite, i.e.,
\[
\# \{\, y \in V : y \sim x \,\} < \infty.
\]
Let $c \colon E \to \mathbb{R}_{+}$ be a function satisfying
\[
c(x, y) = c(y, x) \quad \text{for all } (x, y) \in E.
\]
For functions $f \colon V \to \mathbb{C}$, set
\[
\|f\|_{E}^{2} := \frac{1}{2} \sum_{x \sim y} c(x, y)\, |f(x) - f(y)|^{2}.
\]
We will consider the Hilbert space $\mathcal{H}_E$ of all functions $f$ modulo constants such that
\[
\|f\|_{E}^{2} < \infty,
\]
called the energy Hilbert space, with (13) expressing finite energy.
For functions $f \colon V \to \mathbb{C}$, we define the graph Laplacian, or simply the Laplace operator, by
\[
(\Delta f)(x) := \sum_{y \sim x} c(x, y)\, \big( f(x) - f(y) \big).
\]
The Laplace operator in (14) is defined on a dense linear subspace in $\mathcal{H}_E$, and maps this subspace into $\mathcal{H}_E$.
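For a finite graph, these statements can be verified numerically. The following sketch (our own check, with hypothetical conductances on a path graph) confirms the identity relating the Dirac functions to the Laplacian, and the Hermitian symmetry of the Laplacian with respect to the energy inner product:

```python
import numpy as np

# Hypothetical conductances on a path graph with 5 vertices.
n = 5
C = np.zeros((n, n))
for i in range(n - 1):
    C[i, i + 1] = C[i + 1, i] = float(i + 1)   # c(x_i, x_{i+1}) = i + 1

L = np.diag(C.sum(axis=1)) - C                 # (Delta f)(x) = sum_y c(x,y)(f(x)-f(y))
energy = lambda u, v: u @ L @ v                # energy inner product (real-valued case)

rng = np.random.default_rng(0)
f, g = rng.normal(size=n), rng.normal(size=n)

delta2 = np.eye(n)[2]                          # Dirac function at vertex x = 2
assert np.isclose(energy(delta2, f), (L @ f)[2])        # <delta_x, f>_E = (Delta f)(x)
assert np.isclose(energy(L @ f, g), energy(f, L @ g))   # Delta is Hermitian on H_E
```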
Pick some base-point $o$ in $V$. Then for every $x \in V$, there is a unique $v_x \in \mathcal{H}_E$ such that
\[
f(x) - f(o) = \langle v_x, f \rangle_E \quad \text{for all } f \in \mathcal{H}_E.
\]
The function $v_x$ in (15) satisfies
\[
\Delta v_x = \delta_x - \delta_o.
\]
For the subspace in (a) we may take
\[
\mathcal{D} := \operatorname{span} \{\, v_x : x \in V \,\},
\]
i.e., all finite linear combinations of the family of vectors $v_x$.
The Dirac functions $\delta_x$ in (16) satisfy
\[
\langle \delta_x, f \rangle_E = (\Delta f)(x) \quad \text{for all } f \in \mathcal{H}_E.
\]
The operator $\Delta$ is Hermitian on $\mathcal{D}$, i.e.,
\[
\langle \Delta u, v \rangle_E = \langle u, \Delta v \rangle_E \quad \text{for all } u, v \in \mathcal{D}.
\]
Follows from (b) and (c).
It is clear from (15) that each function $v_x$ is real-valued, and that the vectors $v_x$ span a dense subspace in $\mathcal{H}_E$. Hence to prove (16), it suffices to verify the asserted identity on this spanning family.
Follows from (c).
By (d) it is enough to prove the identity for all pairs of vectors from the spanning family, which in turn is a computation similar to the one used in (c).
(f) and (g) follow by polarization, and a direct computation of (22). Indeed, if $u = \sum_x a_x \delta_x$ is a finite summation over $x \in V$, $a_x \in \mathbb{C}$, then $u \in \mathcal{D}$, and the computation reduces to finitely many non-zero terms.
The following two closed subspaces play a critical role in our understanding of geometric boundaries of weighted graphs $(V, E, c)$; i.e., graphs with $V$ serving as the set of vertices, $E$ the set of edges, and $c$ a fixed conductance function.
\[
\operatorname{Harm} := \{\, f \in \mathcal{H}_E : \Delta f = 0 \,\},
\]
the finite-energy harmonic functions; “Harm” is short for “harmonic.” Note that functions in $\operatorname{Harm}$ may be unbounded as functions on $V$.
Setting $\operatorname{Fin} :=$ the closed span of $\{\, \delta_x : x \in V \,\}$ in $\mathcal{H}_E$, it is easy to see that $\operatorname{Harm} = \operatorname{Fin}^{\perp}$, i.e., that the decomposition
\[
\mathcal{H}_E = \operatorname{Fin} \oplus \operatorname{Harm}
\]
holds.
\[
\operatorname{Def} := \{\, f \in \mathcal{H}_E : \Delta f = -f \,\},
\]
the finite-energy deficiency vectors for the operator $\Delta$.
(The reader will notice that solutions to (35) appear to contradict the estimates (22) and (29); but this is not so. The deeper explanation lies in the theory of deficiency-indices, and deficiency-subspaces for unbounded linear operators in Hilbert space.) The notation “Def” is short for deficiency-subspace.
While the analysis of the two closed subspaces from Definition 2.5 (the finite-energy harmonic functions, and the functions in the defect space, or deficiency space, for the Laplacian) will be carried out in general (sections VI and VII), it will be useful to work them out in a particular family of special cases. At first glance, these examples may appear rather special, but we will show in section VII that they have wider use in our analysis of the “boundary at infinity.” The special cases discussed here will further show that the abstract spaces and operators from section II allow for explicit computations.
In an infinite weighted graph, take for vertex-set the non-negative integers, with edges between nearest neighbors, so that $n \sim n + 1$ for $n = 0, 1, 2, \ldots$. Pick a positive function $n \mapsto c_n$, and set $c(n, n + 1) := c_n$.
We will be interested in an extended version of the example in which the vertex-set is all of $\mathbb{Z}$, with the same conductance sequence for all $n$. In this case, we will extend (39) by symmetry, as follows: $c(-n - 1, -n) := c(n, n + 1)$. To distinguish the two cases, we will refer to the first by its vertex-set $\mathbb{N}_0$, and to the second by $\mathbb{Z}$.
It turns out that there are important distinctions between the two, relative to the two subspaces $\operatorname{Harm}$ and $\operatorname{Def}$ introduced above.
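To see how such distinctions can play out, here is a small numerical illustration (our own, with the hypothetical choice $c(n, n+1) = 2^{|n|}$ on $\mathbb{Z}$; the paper's concrete weights may differ). A function whose increments equal the resistances $1/c(n, n+1)$ is harmonic at every vertex of $\mathbb{Z}$, and it has finite energy precisely because the total resistance $\sum_n 1/c(n, n+1)$ is finite:

```python
# Hypothetical conductances on the integer line: c(n, n+1) = 2**|n|
# (illustrative numbers only).
c = lambda n: 2.0 ** abs(n)

# h has increments equal to the resistances 1/c(n, n+1); such a function
# is harmonic at every vertex, since the two "current" terms cancel:
#   (Delta h)(n) = c(n-1,n)(h(n)-h(n-1)) + c(n,n+1)(h(n)-h(n+1)) = 1 - 1 = 0.
def h(n, base=-30):
    return sum(1.0 / c(k) for k in range(base, n))

def laplacian_h(n):
    return c(n - 1) * (h(n) - h(n - 1)) + c(n) * (h(n) - h(n + 1))

for n in range(-10, 11):
    assert abs(laplacian_h(n)) < 1e-12   # harmonic at every tested vertex

# Finite energy: sum_n c(n,n+1) * (h(n+1)-h(n))**2 = sum_n 1/c(n,n+1) < infinity.
energy = sum(1.0 / c(n) for n in range(-30, 31))
print(energy)  # close to 3
```

On the half-line $\mathbb{N}_0$ the analogous construction fails to be harmonic at the endpoint $0$, which is one way of seeing why the two cases behave differently with respect to $\operatorname{Harm}$.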
The matrix form of the operator $\Delta$ is as follows in the two cases:
In the examples with conductance as in (43), we have the following (Table 3): the subspace $\operatorname{Def}$ is one-dimensional in the one case, and of dimension zero or one in the other; see details below.
It will be convenient for us to organize the proof into the four parts indicated in Table 3.
Part 1. With the definitions as given, we consider the possible solutions $f \colon V \to \mathbb{C}$ to the equation
\[
\Delta f = 0.
\]
Since the conductances are positive, clearly the last two equations imply that the successive differences of $f$ vanish. So $f$ must be the constant function. But $\mathcal{H}_E$ is obtained by modding out with the constants, so $f = 0$ in $\mathcal{H}_E$.
Part 2. In this case the vertex-set is $\mathbb{Z}$, and we proceed as follows. Pick some constant, and define the candidate harmonic function through its increments along the edges. With conductance $c(n, n + 1)$, and resistance $r(n) := 1/c(n, n + 1)$, we get
Part 3. Now the “defect equation” is
\[
\Delta f = -f.
\]
Equation (53) for the values of $f$ yields
For $n = 0, 1, 2, \ldots$, we now define two sequences of polynomials recursively, as follows:
It follows that the deficiency solution is a multiple of a function expressed in terms of these polynomials. Specifically, the first few instances may be computed directly from the recursion, which can also be written in matrix form. The polynomials may well be of independent interest, and we will need the first few in the infinite string:
The first and the last terms in the two polynomial sequences are as indicated in the following formulas, which in particular determine the degrees.
which is the next step in the induction. Now apply the same argument to
and the result in (63) for the next coefficient follows.
and using the induction hypothesis, we get .
We now turn to the leading coefficient. As before, we use the induction hypothesis, and (65). We get
This completes the induction proof.
In other words, the degrees grow as asserted, and each polynomial has leading coefficient one. ∎
(a) The two generating functions
(b) Here we have picked a fixed value of the parameter; and we note that, in the first variable, the two generating functions have different radii of convergence, as computed below.
Multiplying through by the generating variable in the first recursion, we arrive at the first formula (68) in the statement of the proposition. For the proof of (69), we again multiply through by the generating variable, now in the second recursion:
We now turn to the radii of convergence:
Since we already established the growth estimate above, it follows that there is a finite constant bounding the relevant coefficients, and we conclude that the first generating function has the radius of convergence claimed. But further note that the second sequence of coefficients is bounded, as a consequence of the estimate (71). Hence we conclude that the second generating function has the claimed radius of convergence as well. ∎
The purpose of the previous discussion is to find the deficiency vector, i.e., the solution of the defect equation in the energy Hilbert space, in a random walk model with the conductances specified above. We now turn to the corresponding generating functions.
The generating function for the solution itself is as follows:
Combine the previous formulas. ∎
In the next section, we take up a number of dynamics related issues concerning these polynomials. Below we are concerned with the proof of the following:
For every value of the parameter, there is a constant, depending on that value, such that the following estimate holds.
As a result, we get
Now, in the next step, we adjust the constant such that
We rewrite this:
It follows that (76) holds if
The solution from Part 3 in Table 3 satisfies
This finishes the proof of Part 3 in Theorem 3.2.
IV. The Polynomials
We begin with some technical lemmas, and we further observe that the examples from section III indeed have a more general flavor: for example (Proposition 5.1), they may be derived from a standard random walk model. We will do the computations here just for a one-dimensional walk, but the basic idea carries over much more generally as we show in the next section.
Optimal Estimates. In the previous section, we considered two sequences of polynomials. In the proof of Theorem 3.2, we showed the following:
In the following sense, this is not optimal; in fact, there is a finite constant such that we get linear bounds in both directions. Specifically:
We will establish the existence of the constant by induction; and its size will follow from the a priori estimates below. The induction begins with an inspection of Table 4 in section III.
For every value of the parameter, the following finite limit exists; the limit is monotone.