Lattice rules for nonperiodic smooth integrands
The aim of this paper is to show that one can achieve convergence rates of $N^{-\alpha+\delta}$ for $\alpha > 1/2$ (and $\delta > 0$ arbitrarily small) for nonperiodic $\alpha$-smooth cosine series using lattice rules without random shifting. The smoothness of the functions can be measured by the decay rate of the cosine coefficients. For a specific choice of the parameters the cosine series space coincides with the unanchored Sobolev space of smoothness $1$.
We study the embeddings of various reproducing kernel Hilbert spaces and numerical integration in the cosine series function space and show that by applying the so-called tent transformation to a lattice rule one can achieve the (almost) optimal rate of convergence of the integration error. The same holds true for symmetrized lattice rules for the tensor product of the direct sum of the Korobov space and cosine series space, but with a stronger dependence on the dimension in this case.
Quasi-Monte Carlo (QMC) rules are equal weight quadrature rules
\[ Q_N(f) = \frac{1}{N} \sum_{k=0}^{N-1} f(\boldsymbol{x}_k), \qquad \boldsymbol{x}_0, \dots, \boldsymbol{x}_{N-1} \in [0,1]^s, \]
which can be used to approximate integrals of the form
\[ I(f) = \int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}; \]
see [10, 23, 28] for more information. In QMC rules, the quadrature points are chosen according to some deterministic algorithm. One can show that the convergence rate of the integration error depends on the smoothness of the integrand and some property of the quadrature points.
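For concreteness, the following sketch (our own illustration, not part of the paper; the van der Corput points and the test integrand are arbitrary choices) shows the general form of an equal-weight QMC rule with a classical deterministic point set.

```python
def qmc(f, points):
    """Equal-weight QMC rule: Q_N(f) = (1/N) * sum of f over the point set."""
    return sum(f(x) for x in points) / len(points)

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) point in [0,1), a classical
    one-dimensional deterministic QMC point set."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

N = 256
points = [van_der_corput(n) for n in range(N)]
# Approximate the integral of x^2 over [0,1] (exact value 1/3).
estimate = qmc(lambda x: x * x, points)
```

The error of this equal-weight rule decreases roughly like $1/N$ here, in line with the rates quoted below for points of low discrepancy.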
There are several deterministic construction methods for the quadrature points. One such method yields so-called digital nets. These yield a convergence of the integration error of order $O(N^{-1}(\log N)^{s-1})$ for functions of bounded variation [10, 23]. Higher order digital nets, using an interlacing factor of $\alpha$, yield a convergence rate of order $O(N^{-\alpha}(\log N)^{s\alpha})$ for integrands with square integrable partial mixed derivatives of order $\alpha$ in each variable [8, 9].
Another construction method yields (rank-$1$) lattice rules, which use quadrature points of the form $\boldsymbol{x}_k = \{k\boldsymbol{z}/N\}$ for $k = 0, 1, \dots, N-1$, where $\boldsymbol{z} \in \mathbb{Z}^s$ is called the generating vector. Here, for a real number $x$, the braces denote the fractional part, i.e., $\{x\} = x$ modulo $1$. For vectors, the fractional part is taken component wise. It is well known that there are generating vectors for lattice rules for which the integration error converges with order $O(N^{-\alpha+\delta})$ ($\delta > 0$) for smoothness $\alpha$, but with the restriction that the integrand and its partial derivatives up to order $\alpha - 1$ in each variable have to be periodic [19, Theorem 18, p. 120] and [29, 30]. Fast computer search algorithms for such vectors are known from [25, 26]. Hence, in order to be able to benefit from the fast rate of convergence, one needs to apply a transformation which makes the integrand (and its partial derivatives) periodic. This can cause some problems though and is not always recommended [19, 22]. Since arbitrarily high rates of convergence can be obtained using digital nets for nonperiodic functions, the question arises whether this is also possible for lattice rules. Until now, lattice rules achieve a convergence rate of at most $O(N^{-1+\delta})$ ($\delta > 0$) for nonperiodic integrands (via estimates of the star-discrepancy). If one applies the so-called tent transformation and a random shift, this rate of convergence can be improved to $O(N^{-2+\delta})$ for any $\delta > 0$ for the worst-case error in an unanchored Sobolev space of smoothness $2$.
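As a small illustration of a rank-1 lattice rule (the Fibonacci generating vector below is a classical textbook choice, not one from this paper), the points $\{k\boldsymbol{z}/N\}$ integrate a trigonometric polynomial exactly whenever its frequencies avoid the dual lattice:

```python
import math

def lattice_points(N, z):
    """Rank-1 lattice rule points x_k = {k z / N}, k = 0, ..., N-1,
    with the fractional part taken component-wise."""
    return [tuple((k * zj % N) / N for zj in z) for k in range(N)]

# Fibonacci lattice: N = 144, z = (1, 89) -- a classical 2D generating vector.
N, z = 144, (1, 89)
pts = lattice_points(N, z)

# Smooth 1-periodic integrand with exact integral 1; its only frequencies
# are in {-1, 0, 1}^2, none of which lie in the dual lattice of this rule.
def f(x):
    return (1 + 0.5 * math.sin(2 * math.pi * x[0])) * \
           (1 + 0.5 * math.sin(2 * math.pi * x[1]))

estimate = sum(f(x) for x in pts) / N
```

Because no nonzero frequency of $f$ satisfies $h_1 + 89 h_2 \equiv 0 \pmod{144}$, the rule is exact up to floating-point rounding.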
In this paper we present quadrature rules which achieve a convergence rate of order $O(N^{-\alpha+\delta})$, $\delta > 0$, for nonperiodic functions with smoothness $\alpha$. The way we measure smoothness in this paper is slightly different from the setting used in, for instance, higher order digital nets [8, 9] though. We consider functions which can be represented by a cosine series. Note that every continuous function $f\colon [0,1] \to \mathbb{R}$ can be represented by a cosine series (see [18, Theorem 1] for this basis over $[-1,1]$). This follows from the fact that the functions
\[ 1 \quad \text{and} \quad \sqrt{2}\cos(\pi k x), \quad k \in \mathbb{N}, \]
are $L_2([0,1])$-orthogonal and complete.
As mentioned above, the “smoothness” of the cosine series in our context is measured by the rate of decay of the cosine coefficients. To illustrate this, consider a one-dimensional function $f$ given by its cosine series
\[ f(x) = \widetilde{f}(0) + \sqrt{2} \sum_{k=1}^{\infty} \widetilde{f}(k) \cos(\pi k x). \]
The sum over the even frequencies is a $1$-periodic function over $\mathbb{R}$. If the coefficients decay with order $k^{-\alpha}$, then this periodic part is roughly $\alpha$-smooth in the classical sense. However, this does not apply to the sum over the odd coefficients. For instance, the cosine series for $f(x) = x$ is given by
\[ x = \frac{1}{2} - \frac{4}{\pi^2} \sum_{\substack{k=1 \\ k \text{ odd}}}^{\infty} \frac{\cos(\pi k x)}{k^2}, \]
and hence the odd coefficients converge with order $k^{-2}$ only, although $f$ is infinitely differentiable.
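This decay can be checked numerically. The sketch below (our own illustration) approximates the coefficients $\widetilde{f}(k) = \int_0^1 f(x)\,\sqrt{2}^{\,\min(1,k)}\cos(\pi k x)\,\mathrm{d}x$ by a midpoint rule and confirms that, for $f(x)=x$, the even coefficients vanish while the odd ones equal $-2\sqrt{2}/(\pi^2 k^2)$:

```python
import math

def cosine_coeff(f, k, n=20000):
    """Midpoint-rule approximation of the k-th half-period cosine
    coefficient of f on [0,1], w.r.t. the system 1, sqrt(2)cos(pi k x)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        w = math.sqrt(2.0) * math.cos(math.pi * k * x) if k > 0 else 1.0
        total += f(x) * w
    return total * h

f = lambda x: x
c0, c1, c2, c3 = (cosine_coeff(f, k) for k in range(4))
# Expected: c0 = 1/2, c2 = 0, and c_k = -2*sqrt(2)/(pi^2 * k^2) for odd k,
# so c1/c3 = 9, i.e. quadratic decay in k.
```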
Below we introduce a reproducing kernel Hilbert space based on cosine series, with the smoothness measured by the decay rate of the cosine coefficients. Although the smoothness of a cosine series measured by the differentiability of the series can be larger than the decay rate of the cosine coefficients suggests, the opposite cannot happen. That is, we show that the reproducing kernel Hilbert space based on cosine series is embedded in the unanchored Sobolev space with the same value of the smoothness parameter. The case of smoothness $1$ provides an exception, since there the cosine series space and the unanchored Sobolev space coincide. Various reproducing kernel Hilbert spaces are introduced in Section 2 and their embeddings are studied in Section 3.
In this paper we present two methods which allow us to achieve a higher convergence rate for smoother nonperiodic functions using lattice rules, namely:
application of the tent transformation to the integration nodes;
symmetrization of the integration nodes.
The tent transformation, $\phi(x) = 1 - |2x - 1|$,
is a Lebesgue measure preserving function. The idea of using this transformation in conjunction with a random shift for integration based on lattice rules comes from Hickernell and was also used for digital nets. In contrast to these works, here we do not rely on a random element in our quadrature rules. We show that a tent transformed lattice rule achieves an integration error of order $O(N^{-\alpha+\delta})$, for any $\delta > 0$, for functions belonging to a certain reproducing kernel Hilbert space of cosine series with smoothness parameter $\alpha$. This result follows by showing that the worst-case error in the cosine series space for a tent-transformed lattice rule is the same as the worst-case error in a Korobov space of smooth periodic functions using lattice rules. Thus all the results for integration in Korobov spaces using lattice rules [7, 12, 20, 25, 26] also apply for integration in the cosine series space using tent-transformed lattice rules. In particular for smoothness $1$, this yields deterministic point sets for numerical integration in unanchored Sobolev spaces with the same tractability properties as for numerical integration in the Korobov space.
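The construction can be sketched as follows (the generating vector and test integrand are our own illustrative choices): apply $\phi$ componentwise to the lattice points and use the plain equal-weight average, with no random shift.

```python
import math

def tent(t):
    """Tent transformation phi(t) = 1 - |2t - 1|; it maps [0,1] onto [0,1]
    and preserves Lebesgue measure."""
    return 1.0 - abs(2.0 * t - 1.0)

def tent_lattice_points(N, z):
    """Tent-transformed rank-1 lattice points phi({k z / N})."""
    return [tuple(tent((k * zj % N) / N) for zj in z) for k in range(N)]

N, z = 144, (1, 89)              # Fibonacci lattice (illustrative choice)
pts = tent_lattice_points(N, z)

# Nonperiodic smooth integrand f(x, y) = x * y with exact integral 1/4.
estimate = sum(x * y for (x, y) in pts) / N
error = abs(estimate - 0.25)
```

No periodizing variable transformation of the integrand itself is needed; the tent transformation of the nodes alone does the work.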
Furthermore, we also use symmetrized lattice rules. We show that these rules achieve the optimal order of convergence for integration of sums of products of cosine series and Fourier series. We apply the transformation $x_j \mapsto 1 - x_j$ to each possible set of coordinates separately, so that if we start off with $N$ points we get $2^s N$ points (see Section 4.2). This symmetrization approach is also mentioned in [19, 28, 34] and is one of the symmetry groups applied in the construction of cubature formulae, see, e.g., [4, 13]. We prove that a lattice rule symmetrized this way achieves an integration error of order $O(N^{-\alpha+\delta})$, $\delta > 0$, for functions belonging to a certain reproducing kernel Hilbert space of cosine series and Fourier series with smoothness parameter $\alpha$. The advantage of symmetrized lattice rules is that functions of the form $(x - 1/2)^{2a+1}$, where $a$ is a nonnegative integer, are integrated exactly and hence only the smoothness of the periodic part determines the convergence rate. Note that the decay rate of the cosine series coefficients of the periodic part and of the Fourier series coefficients coincides with the classical smoothness. Thus the problem with functions where the smoothness in terms of differentiability differs from the rate of decay of the cosine series is overcome using symmetrization. For instance, the function $f(x) = x$ is integrated exactly using symmetrization. However, a disadvantage of the symmetrization compared to the tent transformation is that the number of function evaluations grows exponentially in the dimension and therefore symmetrization is only useful in smaller dimensions.
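The symmetrization step itself is straightforward to implement; the sketch below (our own illustration, with arbitrary base points) reflects the point set about $1/2$ in every subset of the coordinates and verifies the exactness claims on two simple examples.

```python
from itertools import product

def symmetrize(points):
    """Full symmetrization: apply x_j -> 1 - x_j for every subset of the
    coordinates, turning N points in [0,1]^s into 2^s * N points."""
    s = len(points[0])
    sym = []
    for flips in product((False, True), repeat=s):
        for x in points:
            sym.append(tuple(1.0 - c if flip else c for c, flip in zip(x, flips)))
    return sym

# Arbitrary (even badly chosen) base points in [0,1]^2:
base = [(0.1, 0.7), (0.37, 0.2), (0.55, 0.91)]
pts = symmetrize(base)

# f(x, y) = x has integral 1/2 and is integrated exactly by any symmetrized
# rule, since the points come in pairs x and 1 - x.
est_linear = sum(x for (x, y) in pts) / len(pts)

# (x - 1/2)^3 has integral 0 and is likewise integrated exactly, since the
# integrand is antisymmetric about 1/2.
est_cubic = sum((x - 0.5) ** 3 for (x, y) in pts) / len(pts)
```

The factor $2^s$ in the point count is visible directly: three base points in dimension $s = 2$ become twelve.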
In the next section we introduce four reproducing kernel Hilbert spaces, the unanchored Sobolev space, the Korobov space, the cosine series space and the sum of the cosine and Korobov space. Since the unanchored Sobolev space and the Korobov space are frequently studied in the literature, we study the relations among these four spaces in Section 3 to put our results into context. It is shown that the Korobov space and the cosine series space differ, but both are embedded in the sum of the cosine series and Korobov space, which is itself embedded in the unanchored Sobolev space. In Section 4 we study numerical integration in the cosine series space using tent-transformed lattice rules and numerical integration in the sum of the cosine series and Korobov space using symmetrized lattice rules. Numerical results are presented in Section 5 and a conclusion is presented in Section 6.
We write $\mathbb{Z}$ for the set of integers, $\mathbb{N}$ for the set of positive integers and $\mathbb{N}_0$ for the set of nonnegative integers. We also write $\mathbb{Z}_\ast = \mathbb{Z} \setminus \{0\}$. Furthermore, for $s \in \mathbb{N}$ we write $[s] = \{1, 2, \dots, s\}$.
2 Reproducing kernel Hilbert spaces
In this section we introduce several reproducing kernel Hilbert spaces. For a reproducing kernel $K\colon [0,1] \times [0,1] \to \mathbb{R}$ we denote by $\mathcal{H}(K)$ the corresponding reproducing kernel Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and corresponding norm $\|\cdot\|$. For any $y \in [0,1]$ we have $K(\cdot, y) \in \mathcal{H}(K)$ and we have the reproducing property
\[ f(y) = \langle f, K(\cdot, y) \rangle \quad \text{for all } f \in \mathcal{H}(K). \]
Further, the function $K$ is symmetric in its arguments and positive semi-definite.
For higher dimensions we consider tensor product spaces. The reproducing kernel is in this case given by
\[ K_s(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j=1}^{s} K(x_j, y_j), \]
where $\boldsymbol{x} = (x_1, \dots, x_s)$ and $\boldsymbol{y} = (y_1, \dots, y_s)$. Again, the corresponding reproducing kernel Hilbert space is denoted by $\mathcal{H}(K_s)$, the corresponding inner product is denoted by $\langle \cdot, \cdot \rangle$ and the corresponding norm by $\|\cdot\|$.
2.1 The unanchored Sobolev space
The unanchored Sobolev space is a reproducing kernel Hilbert space with reproducing kernel given by
\[ K_{\mathrm{sob}}(x, y) = 1 + \gamma B_1(x) B_1(y) + \frac{\gamma}{2} B_2(|x - y|), \]
where $B_1(x) = x - 1/2$ and $B_2(x) = x^2 - x + 1/6$ are Bernoulli polynomials and $\gamma > 0$ is a real number. The inner product in this space is given by
\[ \langle f, g \rangle = \int_0^1 f(x) \,\mathrm{d}x \int_0^1 g(y) \,\mathrm{d}y + \gamma^{-1} \int_0^1 f'(x) g'(x) \,\mathrm{d}x. \]
In higher dimensions we consider the tensor product space. The reproducing kernel is in this case given by
\[ K_{\mathrm{sob},s}(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j=1}^{s} \Bigl( 1 + \gamma_j B_1(x_j) B_1(y_j) + \frac{\gamma_j}{2} B_2(|x_j - y_j|) \Bigr), \]
where $\boldsymbol{x} = (x_1, \dots, x_s)$, $\boldsymbol{y} = (y_1, \dots, y_s)$ and $\boldsymbol{\gamma} = (\gamma_j)_{j \ge 1}$ is a sequence of positive weights.
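The sketch below assumes the standard form of the smoothness-$1$ kernel, $K(x,y) = 1 + \gamma B_1(x)B_1(y) + \tfrac{\gamma}{2}B_2(|x-y|)$, and checks two of its basic properties numerically: symmetry, and the normalization $\int_0^1 K(x,y)\,\mathrm{d}y = 1$ (both $B_1$ and $B_2$ have zero mean on $[0,1]$).

```python
def B1(x):
    return x - 0.5

def B2(x):
    return x * x - x + 1.0 / 6.0

def k_sob(x, y, gamma=1.0):
    """Reproducing kernel of the unanchored Sobolev space of smoothness 1
    (standard form; the weight gamma scales the non-constant part)."""
    return 1.0 + gamma * (B1(x) * B1(y) + 0.5 * B2(abs(x - y)))

# Symmetry: K(x, y) = K(y, x).
sym_gap = abs(k_sob(0.3, 0.8) - k_sob(0.8, 0.3))

# Normalization: the kernel integrates to 1 in each variable, since B1 and
# B2 integrate to zero over [0,1].  Midpoint rule in y at fixed x = 0.3:
n = 4000
mean = sum(k_sob(0.3, (i + 0.5) / n) for i in range(n)) / n
```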
The reproducing kernel Hilbert space can be generalized to higher order smoothness. For $\alpha \in \mathbb{N}$, consider the reproducing kernel given by
\[ K_{\mathrm{sob}}^{\alpha}(x, y) = 1 + \gamma \sum_{\tau=1}^{\alpha} \frac{B_\tau(x) B_\tau(y)}{(\tau!)^2} + \gamma \frac{(-1)^{\alpha+1}}{(2\alpha)!} B_{2\alpha}(|x - y|), \]
where $B_\tau$ is the Bernoulli polynomial of order $\tau$. The inner product for this space is given by
\[ \langle f, g \rangle = \int_0^1 f \,\mathrm{d}x \int_0^1 g \,\mathrm{d}x + \gamma^{-1} \sum_{\tau=1}^{\alpha-1} \int_0^1 f^{(\tau)} \,\mathrm{d}x \int_0^1 g^{(\tau)} \,\mathrm{d}x + \gamma^{-1} \int_0^1 f^{(\alpha)}(x) g^{(\alpha)}(x) \,\mathrm{d}x. \]
To obtain reproducing kernel Hilbert spaces for the domain $[0,1]^s$, we consider again the tensor product of the one-dimensional spaces. This space has the reproducing kernel
\[ K_{\mathrm{sob},s}^{\alpha}(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j=1}^{s} K_{\mathrm{sob}}^{\alpha}(x_j, y_j). \]
Quasi-Monte Carlo rules in $[0,1]^s$ which yield the optimal rate of convergence of order $O(N^{-\alpha+\delta})$ for any $\delta > 0$ were studied in [8, 9].
2.2 The Korobov space
For $\alpha > 1/2$, $\gamma > 0$ and $h \in \mathbb{Z}$ we define
\[ r_\alpha(\gamma, h) = \begin{cases} 1 & \text{if } h = 0, \\ \gamma |h|^{-2\alpha} & \text{if } h \neq 0. \end{cases} \]
For $\boldsymbol{h} = (h_1, \dots, h_s) \in \mathbb{Z}^s$ and $\boldsymbol{\gamma} = (\gamma_j)_{j \ge 1}$, we set
\[ r_\alpha(\boldsymbol{\gamma}, \boldsymbol{h}) = \prod_{j=1}^{s} r_\alpha(\gamma_j, h_j). \]
The Korobov space is a reproducing kernel Hilbert space of Fourier series. The reproducing kernel for this space is given by
\[ K_{\mathrm{kor}}(x, y) = \sum_{h \in \mathbb{Z}} r_\alpha(\gamma, h) \,\mathrm{e}^{2\pi \mathrm{i} h (x - y)}, \]
and in higher dimensions by
\[ K_{\mathrm{kor},s}(\boldsymbol{x}, \boldsymbol{y}) = \sum_{\boldsymbol{h} \in \mathbb{Z}^s} r_\alpha(\boldsymbol{\gamma}, \boldsymbol{h}) \,\mathrm{e}^{2\pi \mathrm{i} \boldsymbol{h} \cdot (\boldsymbol{x} - \boldsymbol{y})}, \]
where the “$\cdot$” denotes the usual inner product in $\mathbb{R}^s$. Let the Fourier coefficient for a function $f\colon [0,1]^s \to \mathbb{R}$ be given by
\[ \widehat{f}(\boldsymbol{h}) = \int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{e}^{-2\pi \mathrm{i} \boldsymbol{h} \cdot \boldsymbol{x}} \,\mathrm{d}\boldsymbol{x}. \]
Then the inner product in the reproducing kernel Hilbert space is given by
\[ \langle f, g \rangle = \sum_{\boldsymbol{h} \in \mathbb{Z}^s} \frac{\widehat{f}(\boldsymbol{h}) \, \overline{\widehat{g}(\boldsymbol{h})}}{r_\alpha(\boldsymbol{\gamma}, \boldsymbol{h})}. \]
The corresponding norm is defined by $\|f\| = \langle f, f \rangle^{1/2}$.
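For $\alpha = 1$ the kernel sum has the closed form $\sum_{h \neq 0} \mathrm{e}^{2\pi\mathrm{i}hx}/h^2 = 2\pi^2 B_2(\{x\})$, which makes the worst-case error of a lattice rule in this space directly computable. The sketch below uses this standard formula (under the normalization $r_1(\gamma, h) = \gamma |h|^{-2}$ as above; the two Fibonacci lattices are our own illustrative choices) and confirms that the larger rule has the smaller worst-case error.

```python
import math

def B2(x):
    return x * x - x + 1.0 / 6.0

def wce2_korobov(N, z, gamma=1.0):
    """Squared worst-case error of the rank-1 lattice rule (N, z) in the
    Korobov space with alpha = 1:
        e^2 = -1 + (1/N) sum_k prod_j (1 + gamma * 2 * pi^2 * B2({k z_j / N}))."""
    total = 0.0
    for k in range(N):
        prod = 1.0
        for zj in z:
            prod *= 1.0 + gamma * 2.0 * math.pi ** 2 * B2((k * zj % N) / N)
        total += prod
    return total / N - 1.0

e2_small = wce2_korobov(13, (1, 8))     # Fibonacci lattice, N = 13
e2_large = wce2_korobov(144, (1, 89))   # Fibonacci lattice, N = 144
```

Expressions of this type are what the fast component-by-component constructions [25, 26] minimize over generating vectors.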
2.3 The half-period cosine space
The half-period cosine space is a reproducing kernel Hilbert space of (half-period) cosine series with reproducing kernel
\[ K_{\mathrm{cos}}(x, y) = \sum_{k=0}^{\infty} r_\alpha(\gamma, k) \, \sqrt{2}^{\,\min(1,k)} \cos(\pi k x) \, \sqrt{2}^{\,\min(1,k)} \cos(\pi k y) = 1 + 2\gamma \sum_{k=1}^{\infty} \frac{\cos(\pi k x)\cos(\pi k y)}{k^{2\alpha}}, \]
where $r_\alpha$ is defined as in (3) and where $x, y \in [0,1]$. The inner product is given by
\[ \langle f, g \rangle = \sum_{k=0}^{\infty} \frac{\widetilde{f}(k) \, \widetilde{g}(k)}{r_\alpha(\gamma, k)}, \]
where $\widetilde{f}(k)$ and $\widetilde{g}(k)$ are the cosine coefficients of $f$ and $g$, respectively, as defined in (1).
We can generalize the reproducing kernel Hilbert space to the domain $[0,1]^s$ by setting
\[ K_{\mathrm{cos},s}(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j=1}^{s} K_{\mathrm{cos}}(x_j, y_j), \]
where $\boldsymbol{x} = (x_1, \dots, x_s)$ and $\boldsymbol{y} = (y_1, \dots, y_s)$. The inner product is then given by
\[ \langle f, g \rangle = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \frac{\widetilde{f}(\boldsymbol{k}) \, \widetilde{g}(\boldsymbol{k})}{r_\alpha(\boldsymbol{\gamma}, \boldsymbol{k})}, \]
where the multi-dimensional cosine coefficients for a function $f\colon [0,1]^s \to \mathbb{R}$ are given by
\[ \widetilde{f}(\boldsymbol{k}) = 2^{|\boldsymbol{k}|_\ast / 2} \int_{[0,1]^s} f(\boldsymbol{x}) \prod_{j=1}^{s} \cos(\pi k_j x_j) \,\mathrm{d}\boldsymbol{x}, \]
where for $\boldsymbol{k} \in \mathbb{N}_0^s$ we define $|\boldsymbol{k}|_\ast$ to be the number of nonzero components in $\boldsymbol{k}$. The corresponding norm is defined by $\|f\| = \langle f, f \rangle^{1/2}$, in particular we have
\[ \|f\|^2 = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \frac{|\widetilde{f}(\boldsymbol{k})|^2}{r_\alpha(\boldsymbol{\gamma}, \boldsymbol{k})}. \]
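As a quick numerical sanity check on the normalization $2^{|\boldsymbol{k}|_\ast/2}$ (our own illustration), the function $f(x,y) = 2\cos(\pi x)\cos(\pi y) = (\sqrt{2}\cos\pi x)(\sqrt{2}\cos\pi y)$ should have coefficient $1$ at $\boldsymbol{k} = (1,1)$ and $0$ elsewhere:

```python
import math

def cos_coeff_2d(f, k1, k2, n=300):
    """Midpoint-rule approximation of the 2D half-period cosine coefficient
    2^{|k|_*/2} * int_{[0,1]^2} f(x,y) cos(pi k1 x) cos(pi k2 y) dx dy."""
    nonzero = (k1 > 0) + (k2 > 0)
    scale = 2.0 ** (nonzero / 2.0)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        cx = math.cos(math.pi * k1 * x)
        for j in range(n):
            y = (j + 0.5) * h
            total += f(x, y) * cx * math.cos(math.pi * k2 * y)
    return scale * total * h * h

f = lambda x, y: 2.0 * math.cos(math.pi * x) * math.cos(math.pi * y)
c11 = cos_coeff_2d(f, 1, 1)   # expected 1
c10 = cos_coeff_2d(f, 1, 0)   # expected 0
c22 = cos_coeff_2d(f, 2, 2)   # expected 0
```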
2.4 The sum of the Korobov space and the half-period cosine space
In this section we introduce the kernel
\[ K_{\mathrm{kor+cos}}(x, y) = K_{\mathrm{kor}}(x, y) + K_{\mathrm{cos}}(x, y). \]
The space resulting from the sum of kernels is studied in [2, Part I, Section 6]. The norm in the reproducing kernel Hilbert space $\mathcal{H}(K_{\mathrm{kor+cos}})$ is then defined by
\[ \|f\|_{K_{\mathrm{kor+cos}}}^2 = \min \bigl( \|f_1\|_{K_{\mathrm{kor}}}^2 + \|f_2\|_{K_{\mathrm{cos}}}^2 \bigr), \]
where the minimum is taken over all functions $f_1 \in \mathcal{H}(K_{\mathrm{kor}})$ and $f_2 \in \mathcal{H}(K_{\mathrm{cos}})$ such that $f = f_1 + f_2$.
For dimensions $s > 1$ we define the reproducing kernel by
\[ K_{\mathrm{kor+cos},s}(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j=1}^{s} K_{\mathrm{kor+cos}}(x_j, y_j). \]
Thus the space is the tensor product $\mathcal{H}(K_{\mathrm{kor+cos},s}) = \bigotimes_{j=1}^{s} \mathcal{H}(K_{\mathrm{kor+cos}})$. For $\mathfrak{u} \subseteq [s]$ we define
\[ K_{\mathfrak{u}}(\boldsymbol{x}, \boldsymbol{y}) = \prod_{j \in \mathfrak{u}} K_{\mathrm{kor}}(x_j, y_j) \prod_{j \notin \mathfrak{u}} K_{\mathrm{cos}}(x_j, y_j), \]
where as usual an empty product is considered to be one. This is a reproducing kernel for the space
\[ \mathcal{H}(K_{\mathfrak{u}}) = \Bigl( \bigotimes_{j \in \mathfrak{u}} \mathcal{H}(K_{\mathrm{kor}}) \Bigr) \otimes \Bigl( \bigotimes_{j \notin \mathfrak{u}} \mathcal{H}(K_{\mathrm{cos}}) \Bigr), \]
with the inner product inherited from the tensor product structure. Clearly we have that
\[ K_{\mathrm{kor+cos},s} = \sum_{\mathfrak{u} \subseteq [s]} K_{\mathfrak{u}}. \]
3 Embeddings
In this section we investigate the relationships between the spaces introduced above.
For two reproducing kernel Hilbert spaces $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ we say that $\mathcal{H}(K_1)$ is continuously embedded in $\mathcal{H}(K_2)$ if $\mathcal{H}(K_1) \subseteq \mathcal{H}(K_2)$ and
\[ \|f\|_{K_2} \le C \, \|f\|_{K_1} \quad \text{for all } f \in \mathcal{H}(K_1), \]
for some constant $C > 0$ independent of $f$. We write
\[ \mathcal{H}(K_1) \hookrightarrow \mathcal{H}(K_2) \]
in this case.
On the other hand it is possible that $\mathcal{H}(K_1)$ is not a subset of $\mathcal{H}(K_2)$, i.e., there is a function in $\mathcal{H}(K_1)$ which is not in $\mathcal{H}(K_2)$. In this case we write $\mathcal{H}(K_1) \not\hookrightarrow \mathcal{H}(K_2)$.
If $\mathcal{H}(K_1) \hookrightarrow \mathcal{H}(K_2)$ and $\mathcal{H}(K_2) \hookrightarrow \mathcal{H}(K_1)$ we write
\[ \mathcal{H}(K_1) \leftrightarrow \mathcal{H}(K_2). \]
3.1 The Korobov space and the unanchored Sobolev space
It is well known, see, e.g., [24, Appendix A], that the Korobov space is continuously embedded in the unanchored Sobolev space,
\[ \mathcal{H}(K_{\mathrm{kor}}) \hookrightarrow \mathcal{H}(K_{\mathrm{sob}}). \]
Conversely, as is also well known, for instance the function $f(x) = x$ is in $\mathcal{H}(K_{\mathrm{sob}})$ for every smoothness $\alpha \in \mathbb{N}$, but not in $\mathcal{H}(K_{\mathrm{kor}})$, since $f$ is not periodic. Thus
\[ \mathcal{H}(K_{\mathrm{sob}}) \not\hookrightarrow \mathcal{H}(K_{\mathrm{kor}}). \]
We note that for $\alpha = 1$ and $\boldsymbol{\gamma} = (\gamma_j)_{j \ge 1}$, we have that $\mathcal{H}(K_{\mathrm{kor},s}(\boldsymbol{\gamma})) \hookrightarrow \mathcal{H}(K_{\mathrm{sob},s}(\widetilde{\boldsymbol{\gamma}}))$, where $\widetilde{\boldsymbol{\gamma}}$ denotes the rescaled sequence $\widetilde{\gamma}_j = (2\pi)^2 \gamma_j$.
3.2 The half-period cosine space and the unanchored Sobolev space
We now consider the half-period cosine space and the unanchored Sobolev space. For $\alpha = 1$ there is a peculiarity.
For completeness we include a short proof.
In the following we calculate the cosine coefficients of $K_{\mathrm{sob}}$. It is easy to check that the cosine coefficients of $B_1$ are $-2\sqrt{2}/(\pi^2 k^2)$ for odd $k$ and $0$ for even $k \ge 0$. Further we have $B_2(|x - y|) = B_2(\{x - y\})$.
Using (2) we obtain
\[ \frac{1}{2} B_2(\{x - y\}) = \frac{1}{2\pi^2} \sum_{h=1}^{\infty} \frac{\cos(2\pi h x)\cos(2\pi h y) + \sin(2\pi h x)\sin(2\pi h y)}{h^2}. \]
This immediately implies that the cosine coefficient of $K_{\mathrm{sob}}$ with respect to $\cos(\pi k x)\cos(\pi l y)$ vanishes
if $k$ is even and $l$ is odd, or $k$ is odd and $l$ is even, or $k, l$ are even with $k \neq l$. If $k = l$ for even $k > 0$, we obtain the coefficient $\gamma/(\pi^2 k^2)$.
Now let $k, l$ be odd. We have $\sum_{h=1}^{\infty} (4h^2 - k^2)^{-1} = 1/(2k^2)$ and therefore the contribution of the term $B_1(x)B_1(y)$ and of the sine part of $\frac{1}{2}B_2(\{x-y\})$ cancel for $k \neq l$, while for $k = l$ they add up to $\gamma/(\pi^2 k^2)$.
For $k = l$ we thus obtain the same coefficient for even and odd $k$, and hence
\[ K_{\mathrm{sob}}(x, y) = 1 + \frac{2\gamma}{\pi^2} \sum_{k=1}^{\infty} \frac{\cos(\pi k x)\cos(\pi k y)}{k^2}. \]
Note that the cosine series for $K_{\mathrm{sob}}$ converges absolutely. Since the function is continuous, the cosine series converges to the function pointwise. This completes the proof. ∎
The above lemma and Mercer’s theorem also yield the eigenfunctions of the operator
\[ T(f)(y) = \int_0^1 K_{\mathrm{sob}}(x, y) f(x) \,\mathrm{d}x. \]
These are $1$ and $\sqrt{2}\cos(\pi k x)$, $k \in \mathbb{N}$, and the corresponding eigenvalues are $1$ and $\gamma/(\pi^2 k^2)$.
We thus find
\[ \mathcal{H}(K_{\mathrm{sob}}(\gamma)) = \mathcal{H}(K_{\mathrm{cos}}(\gamma/\pi^2)) \qquad \text{for } \alpha = 1. \]
The same also applies for the higher dimensional tensor product space,
\[ \mathcal{H}(K_{\mathrm{sob},s}(\boldsymbol{\gamma})) = \mathcal{H}(K_{\mathrm{cos},s}(\widetilde{\boldsymbol{\gamma}})), \]
where $\widetilde{\boldsymbol{\gamma}}$ denotes the sequence $(\gamma_j/\pi^2)_{j \ge 1}$. Thus
\[ \mathcal{H}(K_{\mathrm{sob},s}) \leftrightarrow \mathcal{H}(K_{\mathrm{cos},s}) \qquad \text{for } \alpha = 1. \]
Now consider $\alpha > 1$. The function $f(x) = x$ belongs to $\mathcal{H}(K_{\mathrm{sob}}^{\alpha})$ for all $\alpha \in \mathbb{N}$. On the other hand, its odd cosine coefficients decay only with order $k^{-2}$, so that
\[ \sum_{k=0}^{\infty} \frac{|\widetilde{f}(k)|^2}{r_\alpha(\gamma, k)} = \infty \quad \text{for } \alpha \ge 3/2. \]
Hence the function $f$ is not in $\mathcal{H}(K_{\mathrm{cos}}^{\alpha})$ for $\alpha \ge 3/2$ and therefore
\[ \mathcal{H}(K_{\mathrm{sob}}^{\alpha}) \not\hookrightarrow \mathcal{H}(K_{\mathrm{cos}}^{\alpha}). \]
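This divergence can be seen numerically (our own illustration, using the convention $r_\alpha(\gamma, k) = \gamma k^{-2\alpha}$ from Section 2.2 and the fact that the odd cosine coefficients of $f(x) = x$ satisfy $|\widetilde{f}(k)|^2 = 8/(\pi^4 k^4)$): the partial sums of the squared norm settle for $\alpha$ below $3/2$ but keep growing for $\alpha$ above $3/2$.

```python
import math

def norm2_partial(alpha, K, gamma=1.0):
    """Partial sum (over odd k < K) of the squared cosine-space norm of
    f(x) = x: the terms |f~(k)|^2 / r_alpha(gamma, k) behave like
    k^(2*alpha - 4), so the series converges iff alpha < 3/2."""
    c = 8.0 / math.pi ** 4          # |f~(k)|^2 = 8 / (pi^4 k^4) for odd k
    return sum(c * k ** (2 * alpha - 4) / gamma for k in range(1, K, 2))

# alpha = 1.2 < 3/2: the tail contributes less and less.
tail_12 = norm2_partial(1.2, 40000) - norm2_partial(1.2, 20000)
# alpha = 1.6 > 3/2: the partial sums keep growing.
tail_16 = norm2_partial(1.6, 40000) - norm2_partial(1.6, 20000)
```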
Conversely, let $f \in \mathcal{H}(K_{\mathrm{cos}}^{\alpha})$, $\alpha \in \mathbb{N}$. Let $f$ be given by
\[ f(x) = \widetilde{f}(0) + \sqrt{2} \sum_{k=1}^{\infty} \widetilde{f}(k) \cos(\pi k x). \]
Then for $0 \le \tau \le \alpha$ we have
\[ f^{(\tau)}(x) = \sqrt{2} \sum_{k=1}^{\infty} (\pi k)^{\tau} \, \widetilde{f}(k) \, (-1)^{\lceil \tau/2 \rceil} g_\tau(\pi k x), \]
where $g_\tau = \cos$ for even $\tau$ and $g_\tau = \sin$ for odd $\tau$ and where for a real number $x$, $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$. Thus $f^{(\tau)} \in L_2([0,1])$ for all $0 \le \tau \le \alpha$, and the norm of $f$ in the unanchored Sobolev space of smoothness $\alpha$ is bounded by a constant times the norm of $f$ in the cosine series space. Thus we have
\[ \mathcal{H}(K_{\mathrm{cos}}^{\alpha}) \hookrightarrow \mathcal{H}(K_{\mathrm{sob}}^{\alpha}). \]
This result can be generalized to the tensor product space, thus
\[ \mathcal{H}(K_{\mathrm{cos},s}^{\alpha}) \hookrightarrow \mathcal{H}(K_{\mathrm{sob},s}^{\alpha}). \]
3.3 The half-period cosine space and the Korobov space
For $\alpha = 1$ the embedding results for the half-period cosine space and the Korobov space follow from the previous two subsections.
Let now $\alpha > 1/2$. Let
\[ f(x) = \cos(\pi x). \]
Then $f \in \mathcal{H}(K_{\mathrm{cos}}^{\alpha})$ for all $\alpha > 1/2$. On the other hand we have for $h \in \mathbb{Z} \setminus \{0\}$
\[ |\widehat{f}(h)| = \frac{4|h|}{\pi |4h^2 - 1|} \asymp |h|^{-1}, \]
so that $\sum_{h \in \mathbb{Z}} |\widehat{f}(h)|^2 / r_\alpha(\gamma, h) = \infty$. Thus we have $f \notin \mathcal{H}(K_{\mathrm{kor}}^{\alpha})$ for $\alpha > 1/2$. Thus
\[ \mathcal{H}(K_{\mathrm{cos}}^{\alpha}) \not\hookrightarrow \mathcal{H}(K_{\mathrm{kor}}^{\alpha}). \]
Conversely, let now
\[ f(x) = \sin(2\pi x). \]
Then $f \in \mathcal{H}(K_{\mathrm{kor}}^{\alpha})$ for all $\alpha > 1/2$. On the other hand we have for odd $k$
\[ \widetilde{f}(k) = \frac{4\sqrt{2}}{\pi (4 - k^2)} \asymp k^{-2}. \]
For any $\alpha \ge 3/2$ we have
\[ \sum_{k=0}^{\infty} \frac{|\widetilde{f}(k)|^2}{r_\alpha(\gamma, k)} = \infty, \]
which implies that
\[ \mathcal{H}(K_{\mathrm{kor}}^{\alpha}) \not\hookrightarrow \mathcal{H}(K_{\mathrm{cos}}^{\alpha}). \]