On the Randomization of Frolov’s Algorithm for Multivariate Integration
Abstract
We are concerned with the numerical integration of functions from the Sobolev space $H^r_{\mathrm{mix}}([0,1]^d)$ of dominating mixed smoothness $r$ over the $d$-dimensional unit cube.
In [F76], K. K. Frolov introduced a deterministic quadrature rule whose worst case error has the order $n^{-r} (\log n)^{(d-1)/2}$ with respect to the number $n$ of function evaluations. This is known to be optimal. In [KN16], 39 years later, Erich Novak and I introduced a randomized version of this algorithm using random dilations. We showed that its error is bounded above by a constant multiple of $n^{-r-1/2} (\log n)^{(d-1)/2}$ in expectation and by $n^{-r} (\log n)^{(d-1)/2}$ almost surely. The main term $n^{-r-1/2}$ is again optimal and it turns out that the very same algorithm is also optimal for the isotropic Sobolev space of smoothness $s$. We also added a random shift to this algorithm to make it unbiased. Just recently, Mario Ullrich proved in [U16] that the expected error of the resulting algorithm on $\mathring{H}^r_{\mathrm{mix}}([0,1]^d)$ is even bounded above by a constant multiple of $n^{-r-1/2}$. This thesis is a review of the mentioned upper bounds and their proofs.
On the Randomization of Frolov’s Algorithm for Multivariate Integration
MASTERARBEIT (Master's Thesis)
submitted to obtain the academic degree
Master of Science (M. Sc.)
in the degree program Mathematics
Friedrich-Schiller-Universität Jena
Faculty of Mathematics and Computer Science
submitted by David Krieg
born on 08.07.1991 in Würzburg
Advisor: Prof. Dr. Erich Novak
Jena, 04.02.2016
1 Introduction
Many applications deal with multivariate functions which are smooth in the sense that certain weak derivatives exist and are square-integrable, that is, functions from a Sobolev space.
Which derivatives $D^\alpha f$ of $f$ are known to exist and be square-integrable highly depends on the actual problem. Classically, $\alpha$ covers the range of all vectors in $\mathbb{N}_0^d$ with $\alpha_1 + \ldots + \alpha_d \le s$ for some $s \in \mathbb{N}$. The corresponding Sobolev space is called the isotropic Sobolev space of smoothness $s$. For instance, the solutions of elliptic partial differential equations in general, and of Poisson's equation in particular, have this form. They typically appear in electrostatics or continuum mechanics.
But often $f$ is known to satisfy a stronger smoothness condition: the derivatives $D^\alpha f$ for each $\alpha \in \mathbb{N}_0^d$ with $\max_j \alpha_j \le r$ exist and are square-integrable. This is typically the case if $f$ is a tensor product of $r$ times differentiable functions of one variable: $f(x) = f_1(x_1) \cdots f_d(x_d)$. We say that $f$ is from a Sobolev space of dominating mixed smoothness $r$. For example, solutions of the electronic Schrödinger equation are of this form.
We are concerned with the numerical integration of such functions and refer to [HT08] and [GN01] for a treatise on elliptic partial differential equations and their connection with Sobolev spaces and to [Y10] for further information about electronic wave functions.
More precisely, we want to use linear quadrature rules to approximate the integral of integrable, real valued functions $f$ in $d$ real variables, with a particular interest in functions with dominating mixed smoothness $r$. A linear quadrature rule, algorithm or method $Q_n$ is given by a finite number of weights $a_1, \dots, a_n \in \mathbb{R}$ and nodes $x_1, \dots, x_n \in [0,1]^d$, and the rule
$$Q_n(f) = \sum_{i=1}^n a_i\, f(x_i).$$
All these numbers and vectors can be deterministic or random variables. Since $n$ counts the number of function values computed by $Q_n$, it is a measure for the cost of $Q_n$, commonly referred to as the information cost of the algorithm.
The error of $Q_n$ associated with the integration of $f$ is
$$e(Q_n, f) = \left| \int_{[0,1]^d} f(x)\, \mathrm{d}x - Q_n(f) \right|.$$
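For readers who like to see these objects in code, here is a minimal sketch of such a linear quadrature rule, instantiated with the plain $d$-dimensional midpoint rule (a simple instance, not Frolov's algorithm; all function and variable names are illustrative, not notation from the thesis):

```python
import numpy as np

# A linear quadrature rule Q_n(f) = sum_i a_i f(x_i), applied here with
# the d-dimensional midpoint rule as a simple (non-Frolov) instance.

def quadrature(f, nodes, weights):
    """Evaluate the linear rule Q_n(f) = sum_i weights[i] * f(nodes[i])."""
    return sum(a * f(x) for a, x in zip(weights, nodes))

d, k = 2, 32                                    # dimension, points per axis
grid = (np.stack(np.meshgrid(*[np.arange(k)] * d, indexing="ij"), axis=-1)
        .reshape(-1, d) + 0.5) / k              # n = k**d midpoint nodes
weights = np.full(len(grid), 1.0 / len(grid))   # equal weights, summing to 1

f = lambda x: np.prod(np.sin(np.pi * x))        # exact integral: (2/pi)**d
error = abs((2 / np.pi) ** d - quadrature(f, grid, weights))
print(error)  # small; this is e(Q_n, f) for this particular rule and f
```

The printed quantity is exactly the error $e(Q_n, f)$ defined above, for one concrete rule and one concrete integrand.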
We are interested in sequences of quadrature rules $(Q_n)_{n \in \mathbb{N}}$ whose error decreases fast with respect to the growing information cost $n$. In this sense, numerical integration of functions with dominating mixed smoothness $r$ is significantly easier than the integration of functions with isotropic smoothness $r$, especially if the number $d$ of variables is large: It turns out that the convergence order $n^{-r-1/2}$ can be achieved for the expected error, while $n^{-r/d-1/2}$ is the best possible rate in the isotropic case.
From now on, for the sake of distinction, we will use $s$ as a parameter for isotropic smoothness and $r$ as a parameter for dominating mixed smoothness. The smoothness parameters $r$ and $s$ and the dimension $d$ are arbitrary natural numbers, with the single condition that $2s > d$. But they are considered to be fixed in the sense that any constant in this thesis is merely a constant with respect to the information cost $n$ and may depend on $r$, $s$ and $d$.
Let us end this introductory section with an outline of the thesis.
We start with a brief compilation of the definitions and fundamental properties of the above mentioned Sobolev spaces. In Section 3, we will present a family of deterministic quadrature rules for the integration of compactly supported, continuous functions. Among those rules is Frolov's algorithm, which will be examined in Section 4. With respect to the information cost $n$, its integration error for functions $f$ with dominating mixed smoothness $r$ and compact support in the open unit cube is bounded above by a constant multiple of $n^{-r} (\log n)^{(d-1)/2}$ times the corresponding norm of $f$. This order is optimal. For functions with support in the open unit cube and isotropic smoothness $s$, the order $n^{-s/d}$ is achieved, which is also optimal.
In Section 5, we will add random dilations to Frolov's algorithm and examine the integration error of the resulting algorithm for the same types of functions. We will see that in both cases the random dilations improve the order of the algorithm's error by a factor $n^{-1/2}$ in expectation, while not changing it in the worst case. The additional random shift introduced in Section 6 makes the algorithm unbiased and, in the case of functions with dominating mixed smoothness and compact support in the open unit cube, further improves the order of its expected error by a logarithmic term.
Section 7 shows that the condition of having support in the open unit cube can be dropped by applying a suitable change of variables to the above algorithms. The resulting algorithms satisfy the error bounds from above for any function on $[0,1]^d$ with dominating mixed smoothness $r$ or isotropic smoothness $s$. Beyond that, the change of variables preserves unbiasedness.
2 The Function Spaces
For natural numbers $r$ and $d$, the Sobolev space of dominating mixed smoothness $r$ is the real vector space
$$H^r_{\mathrm{mix}} = \left\{ f \in L_2(\mathbb{R}^d) \;\middle|\; D^\alpha f \in L_2(\mathbb{R}^d) \text{ for all } \alpha \in \{0, \dots, r\}^d \right\}$$
of $d$-variate, real valued functions, equipped with the scalar product
$$\langle f, g \rangle_{H^r_{\mathrm{mix}}} = \sum_{\alpha \in \{0, \dots, r\}^d} \langle D^\alpha f, D^\alpha g \rangle_{L_2}.$$
The scalar product induces the norm $\|f\|_{H^r_{\mathrm{mix}}} = \langle f, f \rangle_{H^r_{\mathrm{mix}}}^{1/2}$.
It is known that $H^r_{\mathrm{mix}}$ is a Hilbert space and its elements can be considered to be continuous functions. In this thesis, the Fourier transform is the unique continuous linear operator $\mathcal{F}: L_2(\mathbb{R}^d) \to L_2(\mathbb{R}^d)$ satisfying
$$\mathcal{F} f(y) = \int_{\mathbb{R}^d} f(x)\, e^{-2\pi i \langle x, y \rangle}\, \mathrm{d}x$$
for integrable $f$ and $y \in \mathbb{R}^d$. The space $H^r_{\mathrm{mix}}$ contains exactly those functions $f \in L_2(\mathbb{R}^d)$ with $w_r \cdot \mathcal{F} f \in L_2(\mathbb{R}^d)$ for the Fourier transform $\mathcal{F} f$ of $f$ and the weight function
$$w_r(x) = \prod_{j=1}^d \left( \sum_{k=0}^r (2\pi x_j)^{2k} \right)^{1/2}.$$
In terms of its Fourier transform, the norm of $f \in H^r_{\mathrm{mix}}$ is given by $\|f\|_{H^r_{\mathrm{mix}}} = \|w_r \cdot \mathcal{F} f\|_{L_2}$.
Analogously, the isotropic Sobolev space of smoothness $s$ is
$$H^s = \left\{ f \in L_2(\mathbb{R}^d) \;\middle|\; D^\alpha f \in L_2(\mathbb{R}^d) \text{ for all } \alpha \in \mathbb{N}_0^d \text{ with } |\alpha| \le s \right\},$$
equipped with the scalar product
$$\langle f, g \rangle_{H^s} = \sum_{|\alpha| \le s} \langle D^\alpha f, D^\alpha g \rangle_{L_2}$$
and its induced norm $\|\cdot\|_{H^s}$. For $\alpha \in \mathbb{N}_0^d$, we will frequently use the abbreviation $|\alpha| = \alpha_1 + \ldots + \alpha_d$.
The space $H^s$ is a Hilbert space, too. In the following, we will assume that $2s$ is greater than $d$. Then $H^s$ also consists of continuous functions, exactly those functions $f \in L_2(\mathbb{R}^d)$ with $w_s \cdot \mathcal{F} f \in L_2(\mathbb{R}^d)$ for the Fourier transform $\mathcal{F} f$ of $f$ and the weight function
$$w_s(x) = \left( \sum_{|\alpha| \le s} \prod_{j=1}^d (2\pi x_j)^{2\alpha_j} \right)^{1/2}.$$
In terms of its Fourier transform, the norm of $f \in H^s$ is given by $\|f\|_{H^s} = \|w_s \cdot \mathcal{F} f\|_{L_2}$.
Furthermore, let $\mathring{C}$ be the real vector space of all continuous real valued functions with compact support in the open unit cube $(0,1)^d$. The spaces $\mathring{H}^r_{\mathrm{mix}}$ and $\mathring{H}^s$ of functions in $H^r_{\mathrm{mix}}$ or $H^s$ with compact support in the unit cube are subspaces of $\mathring{C}$. They can also be considered as subspaces of the Hilbert space $H^r_{\mathrm{mix}}$, equipped with the scalar product
$$\langle f, g \rangle_{H^r_{\mathrm{mix}}} = \langle w_r \cdot \mathcal{F} f,\; w_r \cdot \mathcal{F} g \rangle_{L_2},$$
or the Hilbert space $H^s$ with the scalar product
$$\langle f, g \rangle_{H^s} = \langle w_s \cdot \mathcal{F} f,\; w_s \cdot \mathcal{F} g \rangle_{L_2}.$$
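To make the Fourier-analytic description concrete, the univariate case $d = r = 1$ can be worked out directly; the display below is a routine application of Plancherel's theorem under the Fourier convention $\mathcal{F} f(y) = \int f(x)\, e^{-2\pi i x y}\, \mathrm{d}x$ (the $2\pi$ factors change with the convention), using $\mathcal{F} f'(y) = 2\pi i y\, \mathcal{F} f(y)$:

```latex
\|f\|_{H^1_{\mathrm{mix}}}^2
  = \|f\|_{L_2}^2 + \|f'\|_{L_2}^2
  = \int_{\mathbb{R}} \bigl(1 + (2\pi y)^2\bigr)\, |\mathcal{F}f(y)|^2 \,\mathrm{d}y
  = \|w_1 \cdot \mathcal{F}f\|_{L_2}^2,
\qquad
w_1(y) = \bigl(1 + (2\pi y)^2\bigr)^{1/2}.
```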
3 The Basic Quadrature Rule
We introduce a family of deterministic and linear quadrature rules. This family is fundamental to our studies. All the algorithms to be presented are based on the following definition.
Algorithm. Let $B \in \mathbb{R}^{d \times d}$ be invertible and $v$ be a vector in $[0,1)^d$. We define
$$Q_B^v(f) = \frac{1}{|\det B|} \sum_{m \in \mathbb{Z}^d} f\left( B^{-\top} (m + v) \right)$$
for any admissible input function $f$. We call $v$ the shift parameter and denote by $Q_B^v$ the algorithm for shift parameter $v$.
The matrix $B^{-\top} = \left( B^{-1} \right)^{\top}$ is the transpose of the inverse of $B$.
For now, $B$ can be any invertible matrix. But later on, it will be a fixed matrix multiplied with a number $n^{1/d}$ and a dilation matrix $D_u = \mathrm{diag}(u_1, \dots, u_d)$ for a dilation parameter $u \in [1,2]^d$. The dilation parameter and shift parameter are also arbitrary. As we go along, they will be chosen as independent random variables $u$ and $v$ that are uniformly distributed in $[1,2]^d$ and $[0,1)^d$, respectively.
The rule $Q_B^v$ adds up the values of $f$ at the lattice points $B^{-\top}(m+v)$, $m \in \mathbb{Z}^d$, in the corner of each parallelepiped
$$P_m = B^{-\top}\left( [m_1 + v_1, m_1 + v_1 + 1) \times \ldots \times [m_d + v_d, m_d + v_d + 1) \right),$$
weighted with the volume $|\det B|^{-1}$ of this parallelepiped. The value $Q_B^v(f)$ hence can be thought of as a Riemann sum of $f$ over $\mathbb{R}^d$ with respect to the partition $\{ P_m \mid m \in \mathbb{Z}^d \}$.
Admissible input functions are, for instance, functions with compact support. For such functions the above sum is a finite sum. To integrate $f$, the algorithm uses the nodes $B^{-\top}(m+v)$, where $m+v$ is a lattice point in the compact set $B^{\top}(\operatorname{supp} f)$ of volume $|\det B| \cdot \lambda(\operatorname{supp} f)$. Here, $\lambda$ is the Lebesgue measure in $\mathbb{R}^d$. The indicated volume is the approximate number of function values computed by $Q_B^v$. In particular, the number of nodes of $Q_B^v$ for growing $|\det B|$ is of order $|\det B|$. The following simple lemma gives an exact upper bound, see [S94] for other bounds.
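A naive implementation of $Q_B^v$ for compactly supported integrands may help fix ideas. The sketch below enumerates only the finitely many lattice points whose nodes can hit the support; the names and the bounding-box trick are ours, not the thesis's:

```python
import numpy as np
from itertools import product

def lattice_rule(f, B, v):
    """Q_B^v(f) = |det B|^{-1} * sum over m in Z^d of f(B^{-T}(m + v)),
    for f vanishing outside the unit cube [0, 1]^d."""
    d = B.shape[0]
    BinvT = np.linalg.inv(B).T
    # A node B^{-T}(m + v) can only hit the support if m + v lies in
    # B^T [0,1]^d; enumerate the integer points of its bounding box.
    corners = B.T @ np.array(list(product([0.0, 1.0], repeat=d))).T
    lo = np.floor(corners.min(axis=1) - 1).astype(int)
    hi = np.ceil(corners.max(axis=1) + 1).astype(int)
    total = 0.0
    for m in product(*[range(l, h + 1) for l, h in zip(lo, hi)]):
        x = BinvT @ (np.array(m, dtype=float) + v)
        if np.all((0 < x) & (x < 1)):
            total += f(x)
    return total / abs(np.linalg.det(B))

# Example: B = 8 * I and v = (1/2, 1/2) reduce Q_B^v to the two-dimensional
# midpoint rule; f below has integral exactly 1/4 over the unit square.
f = lambda x: np.prod(np.sin(np.pi * x) ** 2)
estimate = lattice_rule(f, 8.0 * np.eye(2), np.array([0.5, 0.5]))
print(estimate)  # ≈ 0.25
```

With a diagonal $B$ the rule degenerates to a shifted rectangle rule; the interesting behavior comes from the non-diagonal Frolov matrices introduced in the next section.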
Lemma 1.
Suppose $f$ is supported in an axis-parallel cube of edge length $\ell$. For any invertible matrix $B \in \mathbb{R}^{d \times d}$ and any $v \in [0,1)^d$, the quadrature rule $Q_B^v$ uses at most $\prod_{j=1}^d \left( \ell\, |B e_j|_1 + 1 \right)$ function values of $f$.
Proof.
By assumption, $f$ has compact support in $x + [0, \ell]^d$ for some $x \in \mathbb{R}^d$. The number of computed function values is the number of points $m \in \mathbb{Z}^d$ for which $B^{-\top}(m+v)$ is in $x + [0, \ell]^d$ and hence bounded by the size of
$$\mathbb{Z}^d \cap \left( B^{\top}\left( x + [0, \ell]^d \right) - v \right).$$
Since for $m = B^{\top} y - v$ with $y \in x + [0, \ell]^d$, the coordinate $m_j = \langle B e_j, y \rangle - v_j$ ranges over an interval of length $\ell\, |B e_j|_1$, and an interval of this length contains at most $\ell\, |B e_j|_1 + 1$ integers. Taking the product over $j = 1, \dots, d$, we get the estimate of Lemma 1. ∎
The error of this algorithm for integration on $\mathbb{R}^d$ can be expressed in terms of the Fourier transform.
Lemma 2.
For any invertible matrix $B \in \mathbb{R}^{d \times d}$, $v \in [0,1)^d$ and $f \in \mathring{C}$,
$$\left| \int_{\mathbb{R}^d} f(x)\, \mathrm{d}x - Q_B^v(f) \right| \le \sum_{m \in \mathbb{Z}^d \setminus \{0\}} \left| \mathcal{F} f(Bm) \right|.$$
Proof.
The function $f$ is continuous with compact support. Hence, the Poisson summation formula and an affine linear substitution yield
$$Q_B^v(f) = \frac{1}{|\det B|} \sum_{m \in \mathbb{Z}^d} f\left( B^{-\top}(m+v) \right) = \sum_{m \in \mathbb{Z}^d} e^{2\pi i \langle m, v \rangle}\, \mathcal{F} f(Bm),$$
if the latter series converges absolutely, see [K00, pp. 356]. If not, the stated inequality is obvious. This proves the statement, since $\mathcal{F} f(0) = \int_{\mathbb{R}^d} f(x)\, \mathrm{d}x$. ∎
4 Frolov’s Deterministic Algorithm on $\mathring{H}^r_{\mathrm{mix}}$ and $\mathring{H}^s$
It is known how to choose the matrix $B$ in the rule $Q_B^v$ to get a good deterministic quadrature rule on $[0,1]^d$. Let the matrix $B \in \mathbb{R}^{d \times d}$ satisfy the following three conditions:

(a) $B$ is invertible,

(b) $\left| \prod_{j=1}^d (Bm)_j \right| \ge 1$ for any $m \in \mathbb{Z}^d \setminus \{0\}$,

(c) for any $C \ge 1$, every axis-parallel box with volume $C$ contains at most $c \cdot C$ lattice points $Bm$, $m \in \mathbb{Z}^d$, where $c$ is a constant that may depend on $B$.

Such a matrix shall be called a Frolov matrix. Property (b) says that every point of the lattice $B\mathbb{Z}^d$ but zero lies in the set of all vectors $x$ with $\left| \prod_{j=1}^d x_j \right| \ge 1$, the complement of a hyperbolic cross.
[Figure: the lattice $B\mathbb{Z}^2$ for $d = 2$ and a Frolov matrix $B$. Except zero, every lattice point lies inside the complement of the hyperbolic cross.]
It is known that one can construct such a matrix in the following way. Let $p$ be a polynomial of degree $d$ with leading coefficient 1 which is irreducible over $\mathbb{Q}$ and has $d$ different real roots $\eta_1, \dots, \eta_d$. Then the matrix
$$B = \left( \eta_j^{\,k-1} \right)_{j,k=1}^d$$
has the desired properties, as shown in [T93, p. 364] and [U14].
In arbitrary dimension $d$ we can choose $p(x) = (x-1)(x-3) \cdots (x - 2d + 1) - 1$, see [F76] or [U14], but there are many other possible choices. For example, if $d$ is a power of two, we can set $p(x) = 2\, T_d(x/2)$, where $T_d$ is the Chebyshev polynomial of degree $d$, see [T93, p. 365]. Then the roots of $p$ are explicitly given by $\eta_j = 2 \cos\left( \frac{(2j-1)\pi}{2d} \right)$ for $j = 1, \dots, d$.
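The construction can also be checked numerically. The sketch below builds the Vandermonde matrix for $p(x) = (x-1)(x-3)(x-5) - 1$, i.e. $d = 3$, and tests property (b) on a finite range of lattice points; this is a sanity check under our naming, not a proof:

```python
import numpy as np
from itertools import product

# Build B as the Vandermonde matrix of the roots of
# p(x) = (x - 1)(x - 3)(x - 5) - 1 and test property (b),
# |prod_j (Bm)_j| >= 1, on a finite range of lattice points m != 0.

d = 3
coeffs = np.poly(np.arange(1, 2 * d, 2))   # monic polynomial with roots 1, 3, ..., 2d-1
coeffs[-1] -= 1.0                          # subtract 1 to obtain p
roots = np.sort(np.roots(coeffs).real)     # d distinct real roots of p
B = np.vander(roots, increasing=True)      # B[j, k] = roots[j] ** k

worst = min(abs(np.prod(B @ np.array(m, dtype=float)))
            for m in product(range(-5, 6), repeat=d) if any(m))
print(worst)  # close to 1: e.g. m = (1, 0, 0) gives Bm = (1, 1, 1)
```

The products $\prod_j (Bm)_j$ are (up to sign) norms of nonzero algebraic integers, which is why they are integers of absolute value at least 1; the finite check only illustrates this.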
From now on, let $B$ be an arbitrary but fixed $d$-dimensional Frolov matrix. Constants may depend on the choice of $B$.
Algorithm. For any natural number $n$, we consider the quadrature rule $Q_n := Q_{B_n}^0$ with $B_n = n^{1/d} B$ from Section 3 with shift parameter zero. This deterministic algorithm is usually referred to as Frolov's algorithm.
For input functions with support in the unit cube, the number of function values computed by $Q_n$ is of order $n$. To be precise, Lemma 1 says that $Q_n$ uses at most $\prod_{j=1}^d \left( n^{1/d}\, |B e_j|_1 + 1 \right)$ function values of $f$.
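The order-$n$ cost can also be observed empirically. The sketch below counts the nodes of $Q_n$ falling into the unit square for the $d = 2$ Frolov matrix obtained from $p(x) = (x-1)(x-3) - 1$, whose roots are $2 \pm \sqrt{2}$; the finite check and all names are ours:

```python
import numpy as np
from itertools import product

# Empirical node count for Q_n: a node B_n^{-T} m lies in [0,1]^d iff m
# lies in B_n^T [0,1]^d, a set of volume |det B_n| = n |det B|, so the
# count should grow proportionally to n.

B = np.vander([2 - np.sqrt(2), 2 + np.sqrt(2)], increasing=True)  # d = 2
ratios = []
for n in (100, 400, 1600):
    Bn = np.sqrt(n) * B                       # B_n = n^{1/d} B with d = 2
    inv = np.linalg.inv(Bn).T                 # maps m to the node B_n^{-T} m
    corners = Bn.T @ np.array(list(product([0.0, 1.0], repeat=2))).T
    lo = np.floor(corners.min(axis=1)).astype(int)
    hi = np.ceil(corners.max(axis=1)).astype(int)
    count = 0
    for m in product(*[range(l, h + 1) for l, h in zip(lo, hi)]):
        x = inv @ np.array(m, dtype=float)
        count += bool(np.all((0 <= x) & (x <= 1)))
    ratios.append(count / (n * abs(np.linalg.det(B))))
print(ratios)  # each ratio is roughly 1, up to boundary effects
```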
K. K. Frolov already observed in 1976 that the algorithm $Q_n$ is optimal on $\mathring{H}^r_{\mathrm{mix}}$ in the sense of the order of convergence. It satisfies the following error bound.
Theorem 1.
There is some $c > 0$ such that for every $n \ge 2$ and $f \in \mathring{H}^r_{\mathrm{mix}}$,
$$e(Q_n, f) \le c\, n^{-r} (\log n)^{(d-1)/2}\, \|f\|_{H^r_{\mathrm{mix}}}.$$
See [F76] and [U14] or my Bachelor thesis for a proof of this error bound and its optimality. In fact, this error bound holds uniformly for the dilated and shifted versions of $Q_n$, for any dilation parameter $u$ and any shift parameter $v$, which is the statement of Theorem 3 in Section 5.1. Theorem 1 is only a special case.
But Frolov’s algorithm is also optimal among deterministic quadrature rules on $\mathring{H}^s$ in the sense of the order of convergence. It satisfies:
Theorem 2.
There is some $c > 0$ such that for every $n \in \mathbb{N}$ and $f \in \mathring{H}^s$,
$$e(Q_n, f) \le c\, n^{-s/d}\, \|f\|_{H^s}.$$
5 The Effect of Random Dilations
We study the impact of random dilations on Frolov's algorithm $Q_n$.
Algorithm. For any natural number $n$ and shift parameter $v$, we consider the method $Q_{B_n^u}^v$ with $B_n^u = n^{1/d} D_u B$ from Section 3, with a dilation parameter $u$ that is uniformly distributed in the box $[1,2]^d$.
For input functions from $\mathring{H}^r_{\mathrm{mix}}$ or $\mathring{H}^s$, the information cost of $Q_{B_n^u}^v$ is roughly between $n\, |\det B|$ and $2^d\, n\, |\det B|$. More precisely, it uses at most $\prod_{j=1}^d \left( 2\, n^{1/d}\, |B e_j|_1 + 1 \right)$ function values of $f$.
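A single realization of the randomly dilated rule can be simulated as follows. The box $[1,2]^d$ for the dilation parameter matches the construction above, but all names, the test integrand, and the bounding-box enumeration are ours:

```python
import numpy as np
from itertools import product

# One realization of the randomly dilated rule: draw u uniformly from
# [1, 2]^d, set B = n^{1/d} * diag(u) * B0 for a fixed 2x2 Frolov matrix
# B0, and apply the lattice rule Q_B^0 to a smooth f supported in [0,1]^2.

def lattice_rule(f, B):
    """Q_B^0(f) for f vanishing outside the unit cube [0, 1]^d."""
    d = B.shape[0]
    BinvT = np.linalg.inv(B).T
    corners = B.T @ np.array(list(product([0.0, 1.0], repeat=d))).T
    lo = np.floor(corners.min(axis=1) - 1).astype(int)
    hi = np.ceil(corners.max(axis=1) + 1).astype(int)
    total = 0.0
    for m in product(*[range(l, h + 1) for l, h in zip(lo, hi)]):
        x = BinvT @ np.array(m, dtype=float)
        if np.all((0 < x) & (x < 1)):
            total += f(x)
    return total / abs(np.linalg.det(B))

rng = np.random.default_rng(0)
B0 = np.vander([2 - np.sqrt(2), 2 + np.sqrt(2)], increasing=True)  # d = 2
f = lambda x: np.prod(np.sin(np.pi * x) ** 2)   # integral over [0,1]^2 is 1/4

n = 200
u = rng.uniform(1.0, 2.0, size=2)               # random dilation parameter
estimate = lattice_rule(f, np.sqrt(n) * np.diag(u) @ B0)
print(abs(estimate - 0.25))  # small for this single realization
```

Averaging such realizations over many draws of $u$ is exactly what the expectation in the error bounds of this section refers to.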
5.1 Worst Case Errors
In the worst case, the error of this method has the same order of convergence as Frolov's algorithm, both on $\mathring{H}^r_{\mathrm{mix}}$ and $\mathring{H}^s$.
Theorem 3.
There is a constant $c > 0$ such that for any shift parameter $v$, any dilation parameter $u \in [1,2]^d$, $n \ge 2$ and $f \in \mathring{H}^r_{\mathrm{mix}}$,
$$e\left( Q_{B_n^u}^v, f \right) \le c\, n^{-r} (\log n)^{(d-1)/2}\, \|f\|_{H^r_{\mathrm{mix}}}.$$
Proof.
Let $Q_B^v$ with $B = B_n^u$ be an arbitrary realization of the algorithm under consideration. By Lemma 2 and Hölder's inequality,
$$\left| \int_{\mathbb{R}^d} f(x)\, \mathrm{d}x - Q_B^v(f) \right| \le \left( \sum_{m \ne 0} w_r(Bm)^{-2} \right)^{1/2} \left( \sum_{m \ne 0} w_r(Bm)^2\, \left| \mathcal{F} f(Bm) \right|^2 \right)^{1/2}.$$
We first prove that the first factor in this product is bounded above by a constant multiple of $n^{-r} (\log n)^{(d-1)/2}$.
Consider the auxiliary set
for and .
The domain of summation is the disjoint union of all over .
For , the points in satisfy
. But the second property of the Frolov matrix yields
for any .
Hence, is empty for .
For , any satisfies
and hence . Because of the third property of the Frolov matrix, we obtain
That yields
This is the desired estimate, since .
We now show that the second factor in the above inequality is bounded above by a constant multiple of $\|f\|_{H^r_{\mathrm{mix}}}$. This proves the theorem.
For we have
The function has compact support in the parallelepiped . Consider the set of all for which has a nonempty intersection with . Then
Thus we obtain
Since both and are of order , their ratio is bounded by a constant and the above inequality yields the statement. ∎
Theorem 4.
There is a constant $c > 0$ such that for any shift parameter $v$, any dilation parameter $u \in [1,2]^d$, $n \in \mathbb{N}$ and $f \in \mathring{H}^s$,
$$e\left( Q_{B_n^u}^v, f \right) \le c\, n^{-s/d}\, \|f\|_{H^s}.$$
Proof.
Let $Q_B^v$ with $B = B_n^u$ be an arbitrary realization of the algorithm under consideration. By Lemma 2 and Hölder's inequality,
$$\left| \int_{\mathbb{R}^d} f(x)\, \mathrm{d}x - Q_B^v(f) \right| \le \left( \sum_{m \ne 0} w_s(Bm)^{-2} \right)^{1/2} \left( \sum_{m \ne 0} w_s(Bm)^2\, \left| \mathcal{F} f(Bm) \right|^2 \right)^{1/2}.$$
The first factor in this product is bounded above by a constant multiple of $n^{-s/d}$: Since $w_s(Bm) \ge c_1 \left( 1 + |Bm|_2 \right)^s$ and $|Bm|_2 \ge c_2\, n^{1/d}\, |m|_2$ for all $m \ne 0$, we have
$$\sum_{m \ne 0} w_s(Bm)^{-2} \le c_3\, n^{-2s/d} \sum_{m \ne 0} |m|_2^{-2s},$$
where this last series converges for $2s > d$.
We show that the second factor in the above inequality is bounded above by a constant multiple of $\|f\|_{H^s}$. This proves the theorem.
For any $m \in \mathbb{Z}^d$ we have