Spectral Theory of Discrete Processes
Abstract.
We offer a spectral analysis for a class of transfer operators. These transfer operators arise for a wide range of stochastic processes, ranging from random walks on infinite graphs to the processes that govern signals and recursive wavelet algorithms, and even to spectral theory for fractal measures. In each case, there is an associated class of harmonic functions which we study. In addition, we study three questions in depth:
In specific applications, and for a specific stochastic process, how do we realize the transfer operator as an operator in a suitable Hilbert space? How do we carry out the spectral analysis once the right Hilbert space has been selected? Finally, we characterize the stochastic processes that are governed by a single transfer operator.
In our applications, the particular stochastic process will live on an infinite path space which is realized in turn on a state space. In the case of a random walk on a graph, the state space will be the set of vertices of the graph. The Hilbert space on which the transfer operator acts will then be an $\ell^2$ space on the vertices, or a Hilbert space defined from an energy (quadratic) form.
This circle of problems is both interesting and nontrivial, as it turns out that the transfer operator may often be an unbounded linear operator in the Hilbert space; but even if it is bounded, it is a nonnormal operator, so its spectral theory is not amenable to an analysis with the use of von Neumann's spectral theorem. While we offer a number of applications, we believe that our spectral analysis will have intrinsic interest for the theory of operators in Hilbert space.
Key words and phrases:
Hilbert space, spectrum, encoding, transfer operator, infinite graphs, wavelets, fractals, stochastic process, path measures, transition probability.
1991 Mathematics Subject Classification:
Primary 42C40, 47S50, 62B5, 68U10, 94A08
Contents
1. Introduction
In this paper, we consider infinite configurations of vectors in a Hilbert space. Since our Hilbert spaces are typically infinite-dimensional, this can be quite complicated, and it will be difficult to make sense of finite and infinite linear combinations of the vectors.
In case the system is orthogonal, the problem is easy, but nonorthogonality serves as an encoding of statistical correlations, which in turn motivates our study. In applications, a particular system of vectors may often be analyzed with the use of a single unitary operator in . This happens if there is a fixed vector such that for all . When this is possible, the spectral theorem will then apply to this unitary operator. A key idea in our paper is to identify a spectral density function and a transfer operator, both computed directly from the pair .
We show that the study of linear expressions may be done with the aid of the spectral function for a pair. A spectral function for a unitary operator is really a system of functions, one for each cyclic subspace. In each cyclic subspace, the function is a complete unitary invariant for the operator restricted to that subspace: by this we mean that the function encodes all the spectral data coming from the vectors in it. For background literature on spectral functions and their applications, we refer to [1, 10, 16, 19, 20, 21].
In summary, the spectral representation theorem is the assertion that commuting unitary operators in Hilbert space may be represented as multiplication operators in an $L^2$ Hilbert space. The understanding is that this representation is defined as a unitary equivalence, and that the $L^2$ Hilbert space to be used allows arbitrary measures, and will be a Hilbert space of vector-valued functions; see, e.g., [6]. Because of applications, our systems of vectors will be indexed by an arbitrary discrete set rather than merely the integers.
We will attack this problem via an isometric embedding of the initial Hilbert space into an $L^2$ space built on infinite paths, in such a way that the vectors transform into a system of random variables. Specifically, via certain encodings we build a path space for the particular problem at hand, as well as a path-space measure defined on a sigma-algebra of subsets of the path space.
If consists of a space of functions on a state space , we will need the covariance numbers
where the stochastic process takes its values; the set of values is called the state space.
The paper is organized as follows. In section 2, for later use, we present our pathspace approach, and we discuss the pathspace measures that we will use in computing transitions for stochastic processes. We prove two theorems making the connection between our pathspace measures on the one hand, and the operator theory on the other. Several preliminary results are established proving how the transfer operator governs the process and its applications.
The applications we give in sections 3 and 4 are related. In fact, we unify these applications with the use of an encoding map which is also studied in detail. It is applied to transitions on certain infinite graphs, to dynamics of (noninvertible) endomorphisms (measures on solenoids), to digital filters and their use in wavelets and signals, and to harmonic analysis on fractals.
The remaining sections deal primarily with applications to a sample of concrete cases.
2. Stochastic Processes
A key tool in our analysis is the construction of pathspace measures on infinite paths, primarily in the case of discrete paths, but the fundamental ideas are the same in the continuous case. Both viewpoints are used in [12]. Readers who wish to review the ideas behind these constructions (stochastic processes and consistent families of measures) are referred to [8, 9, 7] and [18].
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a Borel probability space, with $\Omega$ a compact Hausdorff space. (Expectation $\mathbb{E}$.)
Let $(X_n)$ be a stochastic process, and
(2.1) $\mathcal{F}_n := \sigma(X_0, X_1, \dots, X_n)$
the corresponding filtration. Let $\mathcal{H}_n$ be the subspace in $L^2(\Omega, \mathbb{P})$ generated by the $\mathcal{F}_n$-measurable random variables. Let $P_n$ be the orthogonal projection of $L^2(\Omega, \mathbb{P})$ onto $\mathcal{H}_n$; then the conditional expectation $\mathbb{E}[\,\cdot \mid \mathcal{F}_n]$ is simply $P_n$.
We say that has the generalized Markov property if and only if there exists a state space (also a compact Borel space):
such that for all bounded functions on , for all , .
To make precise the operator-theoretic tools going into our construction, we must first introduce the ambient Hilbert spaces. We are restricting here to $L^2$ processes, so the corresponding stochastic integrals will take values in an ambient $L^2$ space of random variables. For our analysis, we must therefore specify a fixed probability space, with its sigma-algebra and probability measure.
We will have occasion to vary this initial probability space, depending on the particular transition operator that governs the process.
In the most familiar case of Brownian motion, or random walk, the probability space amounts to a somewhat standard construction of Wiener and Kolmogorov, but here with some modification for the problem at hand: the essential axiom in Wiener's case is that all finite samples are jointly Gaussian, but we will drop this restriction and consider general stochastic processes, and so we will not make restricting assumptions on the sample distributions or on the underlying probability space. For more details and concrete applications of this stochastic approach, see sections 2 and 4 below.
We begin here with the particular case of a process taking values in the set of vertices of a fixed infinite graph $G$ [13].
2.1. Starting Assumptions and Constructions.

$G = (V, E)$: a graph, $V$ the set of vertices, $E$ the set of edges.

$(\Omega, \mathcal{F}, \mathbb{P})$: a probability space.

The transition matrix is the function $P(x, y) = \mathrm{Prob}(X_{n+1} = y \mid X_n = x)$,
defined for all $x, y \in V$, and we assume that it is independent of $n$.

From (a) and (b), we construct the path space
and the path measure $\mathbb{P}$. The cylinder sets are given by the following data: for vertices $x_0, x_1, \dots, x_n$ in $V$, set

Starting with $\mathbb{P}$, if $\mathcal{A}$ is a sub-sigma-algebra, let $\mathbb{E}[\,\cdot \mid \mathcal{A}]$ be the conditional expectation, conditioned by $\mathcal{A}$.
If $(Y_i)$ is a family of random variables, and $\mathcal{A}$ is the sigma-algebra generated by the family, we write $\mathbb{E}[\,\cdot \mid (Y_i)]$ in place of $\mathbb{E}[\,\cdot \mid \mathcal{A}]$.
Let be as above. We say that is Markov if and only if

From (b) and (d) we define the transfer operator $R$ by
(2.2) $(Rf)(x) = \sum_{y \in V} P(x, y) f(y)$ for measurable functions $f$ on $V$. If $\mathbb{1}$ denotes the constant function one on $V$, then $R\mathbb{1} = \mathbb{1}$.
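As a numerical illustration of the action in (2.2) on a finite state space: the three-state chain below, and all names in it, are our own illustrative choices, not taken from the paper.

```python
# Sketch of the transfer operator (Rf)(x) = sum_y P(x, y) f(y) on a
# finite state space; the example chain is illustrative.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],    # row-stochastic transition matrix
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

def transfer(P, f):
    """Apply the transfer operator: (Rf)(x) = sum_y P[x, y] * f[y]."""
    return P @ f

ones = np.ones(3)
# R fixes the constant function 1, since each row of P sums to 1.
print(transfer(P, ones))          # -> [1. 1. 1.]
```

The identity $R\mathbb{1} = \mathbb{1}$ is exactly the statement that each row of the transition matrix sums to one.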

Let the path measure and the transfer operator be as in (g), see (2.12). A measure on the state space is said to be a Perron–Frobenius measure if and only if
(2.3) 
Let the setting be as above, and let $R$ denote the transfer operator. If $\nu$ is a Perron–Frobenius measure, let $\mathbb{P}_\nu$ be the measure on the path space determined by using $\nu$ as the first factor, i.e.,
In many cases, it is possible to choose specific Perron–Frobenius measures, i.e., measures satisfying
(Note the normalization!)
Theorem 2.1.
(D. Ruelle) [2] Suppose there is a norm on bounded measurable functions on the state space such that the completion is embedded in $L^\infty$, and that there are constants such that
where $\|\cdot\|_\infty$ is the essential supremum norm. Then the transfer operator has a Perron–Frobenius measure.
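For a finite state space, a Perron–Frobenius measure can be approximated by power iteration on the left action of the transition matrix. The chain below is an illustrative assumption on our part, not an example from the text.

```python
# A Perron-Frobenius measure for a finite-state transfer operator is a
# probability vector nu fixed under the left action, nu P = nu, so that
# nu(Rf) = nu(f) for all f.  Power-iteration sketch; chain illustrative.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

nu = np.full(3, 1.0 / 3.0)        # start from the uniform distribution
for _ in range(500):
    nu = nu @ P                   # iterate nu <- nu P
nu /= nu.sum()                    # keep the normalization nu(1) = 1

print(np.allclose(nu @ P, nu))    # -> True: nu is a Perron-Frobenius vector
```

For this particular birth–death chain the fixed vector is $(1/4, 1/2, 1/4)$, which one can also read off from detailed balance.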
Theorem 2.2.
Let a probability space be given, with the state space carrying a separate sigma-algebra and a measure defined on it. Let $\Omega$ be the path space, and suppose the transfer operator has a Perron–Frobenius measure $\nu$; then
(2.4) 
for all such functions, and all $n$. Here $\mathbb{E}[\psi]$ is defined for all integrable random variables $\psi$; $\mathbb{E}$ stands for expectation.
Proof.
∎
It is not necessary in (2.4) to restrict attention to functions in this class. The important thing is that the integral exists, and the resulting quantity may then be used on the right-hand side of (2.4).
Let $(X_n)$ be a stochastic process, and let $\mathcal{F}_n$ be the sigma-algebra generated by $X_0, \dots, X_n$. Furthermore, let $\mathbb{E}[\,\cdot \mid \mathcal{F}_n]$ be the conditional expectation conditioned by $\mathcal{F}_n$.
Theorem 2.3.
Let be a stochastic process with stationary transitions and operator . Then
(2.5) 
for all bounded measurable functions on the state space, and all $n$.
Proof.
We may assume that $f$ is a real-valued function on the state space, and we let $f$ range over all bounded measurable functions. Then the assertion in (2.5) may be restated as:
(2.6) 
for all .
Corollary 2.4.
Let be as in the theorem. Then the process is Markov.
Proof.
We must show that
By the theorem, we only need to show that
In checking this we use the transition operator . As a result we may now assume that has the form for a measurable function on . Hence
which is the desired conclusion. ∎
Definition 2.5.
We say that a measurable function $h$ on the state space is harmonic if $Rh = h$, where $R$ is the transfer operator.
Definition 2.6.
A sequence of random variables $(Y_n)$ is said to be a martingale if and only if $\mathbb{E}[Y_{n+1} \mid \mathcal{F}_n] = Y_n$ for all $n$.
Corollary 2.7.
Let $(X_n)$ be a stochastic process with stationary transitions and transfer operator $R$. Let $h$ be a measurable function on the state space.
Then $h$ is harmonic if and only if $(h(X_n))$ is a martingale.
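The forward direction can be checked numerically in a simple case: for harmonic $h$ (i.e., $Rh = h$), the one-step conditional expectation $\mathbb{E}[h(X_{n+1}) \mid X_n = x]$ equals $(Rh)(x) = h(x)$, which is the martingale property. The gambler's-ruin chain below is an illustrative choice of ours.

```python
# Check the martingale criterion: for a harmonic h (Rh = h), the one-step
# conditional expectation E[h(X_{n+1}) | X_n = x] = (Rh)(x) equals h(x).
# Example: simple random walk on {0, 1, 2, 3}, absorbing at 0 and 3.
import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

h = np.array([0.0, 1/3, 2/3, 1.0])  # h(x) = x/3 is harmonic for this walk

print(np.allclose(P @ h, h))        # -> True: (h(X_n)) is a martingale
```

Here $h(x) = x/3$ is also the probability of absorption at the right endpoint when started from $x$, a standard instance of a bounded harmonic function.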
Corollary 2.8.
Suppose a process is stationary with a fixed transition operator . Then for all .
Proof.
Let and be a pair of functions on as specified above. Then we showed that
which is the desired conclusion. ∎
2.2. Martingales and Boundaries
Let $G = (V, E)$ be an infinite graph with a fixed conductance function $c$, and let the corresponding operators be the Laplacian $\Delta$ and the transfer operator $R$.
Let $h$ be a harmonic function, i.e., $Rh = h$, or equivalently $\Delta h = 0$.
As an application of Corollary 2.7, we may then apply a theorem of J. Doob to the associated martingale $(h(X_n))$, $n \in \mathbb{N}$. This means that the sequence will then have an a.e. limit, i.e.,
(2.7) 
The limit function will satisfy , or equivalently,
(2.8) 
The existence of the limit in (2.7) holds if either of the following two conditions is satisfied:

; or

.
Proposition 2.9.
[11] If $h$ is harmonic and if (i) or (ii) holds, then
(2.9) 
where the measure conditioned with . The converse implication holds as well.
2.3. Solenoids
Example 2.10.
Let $X$ be a compact Hausdorff space, and $\sigma$ a finite-to-one endomorphism of $X$ onto $X$. Let the corresponding solenoid be:
(2.10) $\mathrm{Sol}(\sigma) := \{ (x_0, x_1, x_2, \dots) \in \prod_{n=0}^{\infty} X : \sigma(x_{n+1}) = x_n \text{ for all } n \}.$
One advantage of a choice of solenoid over the initial endomorphism is that $\sigma$ induces an automorphism of the solenoid, as follows:
Let be a Borel measurable function, and set
(2.11) 
Assume
(2.12) 
For points , set . A measure on is said to be strongly invariant if
Lemma 2.11.
Assume a measure on is strongly invariant, and let be a function on . Set . Then the adjoint operator
Proof.
See [11]. ∎
Set the path space as above and equip it with the sigma-algebra and the topology which are generated by the cylinder sets.
Set ,
(2.13) 
Let be a Borel set, and consider
(2.14) 
Then the sigma-algebra on the path space is generated by the sets
(2.15) 
Set
(2.16) 
where the notation refers to the sigma-algebra as specified in (2.14).
In , consider the following random walk: For points , a transition is possible if and only if ; and in this case the transition probability is .
Let $\mu$ be a probability measure on $X$. On the path space we introduce the following Kolmogorov measure, which is determined on cylinder sets as follows:
(2.17)  
(2.18) 
More specifically, is a measure on infinite paths, and
(2.19) 
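The cylinder-set prescription behind (2.17)–(2.19) can be sketched for a generic finite-state chain: the measure of a cylinder fixing the first coordinates is the initial measure times the product of one-step transition probabilities. The transition matrix and initial measure below are illustrative assumptions of ours.

```python
# Sketch of a Kolmogorov cylinder measure: the measure of the cylinder
# fixing coordinates x0, ..., xn is mu(x0) p(x0,x1) ... p(x_{n-1},x_n).
# The chain and the initial measure mu are illustrative.
import numpy as np

P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
mu = np.array([0.5, 0.5])           # initial measure on the state space

def cylinder_measure(path):
    """Measure of the cylinder set {omega : omega_i = path[i] for i <= n}."""
    prob = mu[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a, b]
    return prob

# Kolmogorov consistency: summing over the last coordinate recovers the
# measure of the shorter cylinder.
total = sum(cylinder_measure([0, 1, b]) for b in (0, 1))
print(np.isclose(total, cylinder_measure([0, 1])))   # -> True
```

The consistency check at the end is what allows the finite-dimensional formulas to determine a single measure on infinite paths.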
Example 2.12.
The following is a solenoid which is used both in number theory (the study of algebraic irrational numbers) and in ergodic theory [4]. For this family of examples, the solenoids are associated with specific polynomials.
Let where is fixed; and let ; , be a polynomial, . Set
Consider the shift on the infinite torus , and set
Then it follows that this set is shift-invariant and closed. As a result, it is a compact solenoid.
3. Graphs
One additional application of these ideas is to infinite graph systems, where $G$ is a graph and $c$ is a positive conductance function. A comprehensive study of this class of examples was carried out in the paper [12]. We will adopt the conventions from that paper:

$V$: the set of vertices in $G$;

$E$: the set of edges in $G$;

$c$: the conductance function.
Assumptions

Edge symmetry. If and , then we assume that . Moreover, .

Finite neighborhoods. For all , the set is finite.

No self-loops. If $(x, y) \in E$, then $x \neq y$.
Convention: If , we write iff . 
Connectedness. For all there exists such that , and .

Choice of origin. We select an origin .
Definition 3.1.

The Laplace operator :

Hilbert spaces:

: functions such that . Set . For every , set ,
Note that is an orthonormal basis (ONB) in .

: finite-energy functions modulo constants:
(3.1) Set
(3.2)


Dipoles. For all $x \in V$ there is a unique $v_x$ in the energy Hilbert space such that
$\langle v_x, u \rangle = u(x) - u(o)$ for all finite-energy $u$. In this case, $v_x$ satisfies $\Delta v_x = \delta_x - \delta_o$, and we make the choice $v_x(o) = 0$. The function $v_x$ is called a dipole.
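On a finite connected graph, a dipole can be computed by solving the (singular) Laplacian system $\Delta v = \delta_x - \delta_o$ and normalizing at the origin. The triangle graph and conductances below are illustrative, and the solver choice is ours.

```python
# Sketch: on a finite connected graph, compute a dipole v, i.e. a
# solution of L v = delta_x - delta_o, normalized so that v(o) = 0.
# The triangle graph and conductances are illustrative.
import numpy as np

# conductance matrix c (symmetric, zero diagonal) on vertices {0, 1, 2}
c = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
L = np.diag(c.sum(axis=1)) - c       # graph Laplacian

o, x = 0, 2                          # chosen origin o and target vertex x
rhs = np.zeros(3)
rhs[x], rhs[o] = 1.0, -1.0           # delta_x - delta_o

# L is singular (constants are in its kernel), but rhs sums to zero, so
# a least-squares solve returns an exact solution; then normalize at o.
v = np.linalg.lstsq(L, rhs, rcond=None)[0]
v -= v[o]

print(np.allclose(L @ v, rhs))       # -> True: v is a dipole for (x, o)
```

Subtracting the constant $v(o)$ does not change $Lv$, since constants lie in the kernel of the Laplacian; this is exactly why dipoles are naturally defined modulo constants.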
Example 3.2.
The dyadic tree.

the alphabet of two letters, the bits $\{0, 1\}$.

: the set of all finite words in the alphabet, including the empty word; a word of length $n$ is a string of $n$ bits.

the edges in the dyadic tree: each word is joined by an edge to each of its one-letter extensions, and the two one-letter words are joined to the root (the empty word).

Constant conductance.
This is the restriction on the conductance function: every edge is assigned the same conductance.

Paths in the tree. If $v, w$ are two words, there is a unique path from $v$ to $w$: the path passes through the longest common prefix $v \wedge w$,
and consists of $|v| + |w| - 2\,|v \wedge w|$ edges.

Concatenation of words: For , . Set .
The dipoles are indexed by the words $w \neq o$, where $o$ is the chosen origin. In the tree, $o$ is the root, i.e., the empty word.
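The unique tree path described above can be sketched in a few lines: it passes through the longest common prefix of the two words. The helper names are ours.

```python
# Sketch of paths in the dyadic tree: the unique path between two finite
# binary words passes through their longest common prefix, so its edge
# count is len(a) + len(b) - 2 * len(lcp(a, b)).
def lcp(a, b):
    """Longest common prefix of two words over the alphabet {0, 1}."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return a[:n]

def path_length(a, b):
    """Number of edges on the unique tree path from a to b."""
    k = len(lcp(a, b))
    return (len(a) - k) + (len(b) - k)

print(path_length("010", "011"))   # -> 2 (up to "01", then down)
print(path_length("", "101"))      # -> 3 (from the root / empty word)
```

The two terms count the upward leg from $a$ to the common prefix and the downward leg from the prefix to $b$.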
Lemma 3.3.
[12] Let , , ; and , , . Then


, and .

, for all .
Proof.

By the uniqueness of dipoles (Definition 3.1), it is enough to prove that the function in (i) satisfies the dipole equation at every vertex, and therefore also
(3.3) and that (ii)(iii) hold.
Specifically, we must prove that
Each is a computation:
And if , but , then
Finally, we compute the case as follows:
We leave the case to the reader.

Suppose , . From (3.2), we see that the contribution to only includes words with .
The desired conclusion
follows as in (ii). The possibilities may be illustrated in Figure 1 below.
∎
4. Specific Transition Operators
4.1. Transition on Graphs
Let $G = (V, E)$ be a graph with conductance function $c$, and transition probabilities
$p(x, y) = c_{xy} / c(x)$, where $c(x) := \sum_{y \sim x} c_{xy}$. Note that $c(x)\, p(x, y) = c_{xy} = c(y)\, p(y, x)$, which makes the corresponding random walk reversible.
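The detailed-balance identity behind reversibility can be verified numerically; the symmetric conductance matrix below is an illustrative assumption of ours.

```python
# Transition probabilities from a conductance function,
# p(x, y) = c_xy / c(x) with c(x) = sum_y c_xy, and the detailed-balance
# identity c(x) p(x, y) = c_xy = c(y) p(y, x) (reversibility).
import numpy as np

c = np.array([[0.0, 2.0, 1.0],      # symmetric conductances, illustrative
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
ctot = c.sum(axis=1)                # c(x): total conductance at x
p = c / ctot[:, None]               # row-stochastic transition matrix

# Detailed balance: c(x) p(x, y) recovers the symmetric matrix c.
print(np.allclose(ctot[:, None] * p, (ctot[:, None] * p).T))  # -> True
```

Symmetry of the conductance function is thus exactly what makes $c$ a reversing measure for the walk.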
Lemma 4.1.
Assume that for all . Set
and let be the random walk on with transition probabilities on edges in , i.e.,
Let be the transition operator, and for , set
then for pairs of functions and on , we have
with