Abstract

In this paper we propose a family of tractable kernels that is dense in the family of bounded positive semi-definite functions (i.e. can approximate any bounded kernel with arbitrary precision). We start by discussing the case of stationary kernels, and propose a family of spectral kernels that extends existing approaches such as spectral mixture kernels and sparse spectrum kernels. Our extension has two primary advantages. Firstly, unlike existing spectral approaches that yield infinite differentiability, the kernels we introduce allow learning the degree of differentiability of the latent function in Gaussian process (GP) models and functions in the reproducing kernel Hilbert space (RKHS) in other kernel methods. Secondly, we show that some of the kernels we propose require considerably fewer parameters than existing spectral kernels for the same accuracy, thereby leading to faster and more robust inference. Finally, we generalize our approach and propose a flexible and tractable family of spectral kernels that we prove can approximate any continuous bounded nonstationary kernel.

Generalized Spectral Kernels

Yves-Laurent Kom Samo (ylks@robots.ox.ac.uk)
Stephen J. Roberts (sjrob@robots.ox.ac.uk)
University of Oxford

1 Introduction

Over the past two decades, the use of kernels has been at the heart of many endeavours in the statistics and machine learning communities. Kernels are often used as a flexible way of departing from linear hypotheses in learning machines, thereby allowing for more complex nonlinear patterns (Vapnik (1995, 1998)). They have been successfully applied to problems of classification, clustering, density estimation and regression. The duality between kernels and covariance functions has made kernels a critical tool for both frequentist and Bayesian statisticians.¹ In the Bayesian nonparametric community, kernels are often used as the covariance function of a Gaussian process (GP), introduced as a prior over a latent function that is to be inferred from the data. The family of covariance functions postulated for the GP is typically chosen so as to express prior domain knowledge about the underlying function, such as periodicity, regularity and range. The parameters of the kernel are then learned from the data. When one is concerned with automatically uncovering structure in datasets, a flexible family of kernels should be used that can account for intricate patterns. In that regard, it is worth noting that, as most (if not all) loss functions in kernel methods are continuous in the Gram/covariance matrix,² if a family of kernels can approximate arbitrarily well any continuous bounded kernel, then for any continuous bounded kernel $k$ there exists a kernel in the foregoing family that is at least as good as $k$ for the problem at hand (i.e. one that achieves a loss at least as small as that of $k$).

¹We will use the expressions 'kernel' and 'covariance function' interchangeably to denote any symmetric positive semi-definite function. Unless stated otherwise, kernels in this paper are real-valued.

²E.g. the negative log-likelihood in GP methods, the Lagrangian in kernel SVMs, etc.

Related work

Approaches have been proposed in recent years that introduce greater flexibility by combining standard unidimensional kernels through series of compounded operations preserving the positive semi-definite property. Examples of such approaches include the hierarchical kernel learning model of Bach (2008), the additive kernels of Duvenaud et al. (2011) and the compositional search method of Duvenaud et al. (2013). Although these methods may be major improvements on popular isotropic kernels for some applications, they are limited in that they may not approximate every stationary kernel with arbitrary precision.

The aforementioned limitation has been addressed by spectral approaches such as the sparse spectrum kernels of Lazaro-Gredilla et al. (2010) and the spectral mixture kernels of Wilson and Adams (2013). Their theoretical underpinning, namely Bochner's theorem (Stein (1999); Rasmussen and Williams (2005); Rudin (1962)), is particularly helpful for constructing flexible classes of kernels in that it fully characterises all stationary kernels with a relatively simple spectral representation condition.

Theorem (Bochner's theorem). A complex-valued function $k$ on $\mathbb{R}^d$ is the covariance function of a weakly stationary mean square continuous complex-valued random process on $\mathbb{R}^d$ if and only if it can be represented as

$$k(\tau) = \int_{\mathbb{R}^d} e^{2\pi i \omega^\top \tau}\, \psi(d\omega), \qquad (1)$$

where $\psi$ is a positive finite measure. Bochner's theorem introduces a duality between the flexibility of a class of stationary kernels and the flexibility of the corresponding family of spectral measures $\psi$. The link between sparse spectrum kernels and spectral mixture kernels can be understood in the light of Lebesgue's decomposition theorem (Halmos (1950); Hewitt and Stromberg (1975)). Lebesgue's decomposition theorem implies that any positive finite measure $\psi$ can be uniquely decomposed as

$$\psi = \psi_{ac} + \psi_s, \qquad (2)$$

where $\psi_{ac}$ is a finite measure that is absolutely continuous with respect to Lebesgue's measure, and $\psi_s$ is a finite measure that is mutually singular with Lebesgue's measure.³ Examples of positive finite measures that are mutually singular with Lebesgue's measure are the discrete⁴ symmetric measures:

$$\psi_s = \sum_k \frac{\alpha_k}{2}\left(\delta_{\omega_k} + \delta_{-\omega_k}\right),$$

where $\alpha_k \geq 0$, $\sum_k \alpha_k < \infty$, and $\delta_\omega$ denotes the Dirac measure at $\omega \in \mathbb{R}^d$.

It follows from Eq. (1) that these measures yield covariance functions of the form:

$$k(\tau) = \sum_k \alpha_k \cos\left(2\pi \omega_k^\top \tau\right). \qquad (3)$$

³That is, there exists a partition $\mathbb{R}^d = A \cup B$, $A \cap B = \emptyset$, such that $A$ has null Lebesgue measure and $\psi_s(C) = 0$ for every measurable subset $C \subset B$.

⁴Discrete or pure-point measures are measures supported on a countable set.

When one is concerned with flexibly learning the shape of the covariance function from the data, the Fourier coefficients $\alpha_k$ and frequencies $\omega_k$ need to be inferred directly. For practical purposes we can only work with a finite number of Fourier coefficients. This gives rise to a simple extension of the sparse spectrum kernels introduced by Lazaro-Gredilla et al. (2010). The authors capped the number of spectral components at $K$ and required that the Fourier coefficients be identical:

$$k(\tau) = \frac{\sigma^2}{K} \sum_{k=1}^{K} \cos\left(2\pi \omega_k^\top \tau\right). \qquad (4)$$

However, this family of kernels has three pitfalls. Firstly, it is prone to over-fitting. As an illustration, Lazaro-Gredilla et al. (2010) proved that, when used for GP regression, such kernels are equivalent to Bayesian basis function regression with trigonometric basis functions. As such, the learning machine will aim at inferring the major spectral frequencies evidenced in the training data. This will only lead to appropriate prediction out-of-sample when the underlying latent phenomenon can be appropriately characterized by a finite discrete spectral decomposition that is expected to be the same everywhere on the domain. Secondly, in GP regression, such kernels implicitly postulate that the covariance between the values of the GP at two points does not vanish as the distance between the points becomes arbitrarily large. This imposes a priori the view that the underlying function is highly structured, which might be unrealistic in many real-life non-periodic applications. Thirdly, covariance functions of the form of Eq. (3) yield infinite differentiability in the mean square sense. As noted by Stein (1999), this is unrealistic for modelling several physical processes.

Random Fourier features methods (Rahimi and Recht (2007); Le et al. (2013); Yang et al. (2015)) are closely related to sparse spectrum kernels. They are based on the observation that Eq. (1) may be rewritten as

$$k(\tau) = \sigma^2\, \mathbb{E}_{\omega \sim p}\left[\cos\left(2\pi \omega^\top \tau\right)\right],$$

with $\sigma^2 = \psi(\mathbb{R}^d)$ and $p(d\omega) = \psi(d\omega)/\sigma^2$. It then follows that, for any symmetric probability distribution $p$, if the frequencies $\omega_k$ in Eq. (4) are sampled independently from $p$, then the corresponding sparse spectrum kernel (Eq. (4)) is an unbiased and consistent estimate of $k$. Although random Fourier features methods are scalable, they do not address the need for flexibly learning the spectral measure from the data, and are not applicable to nonstationary kernels.
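As a minimal numerical sketch of this estimator (the squared exponential target kernel, the frequency convention and all parameter values below are illustrative choices, not taken from the paper), sampling frequencies from the Gaussian spectral density of a squared exponential kernel recovers that kernel by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_kernel(tau, ell=1.0):
    # Squared exponential kernel k(tau) = exp(-tau^2 / (2 ell^2)).
    return np.exp(-np.asarray(tau) ** 2 / (2 * ell ** 2))

def sparse_spectrum_estimate(tau, omegas):
    # Equal-weight cosine mixture, in the spirit of Eq. (4) with sigma^2 = 1:
    # k_hat(tau) = (1/K) sum_k cos(omega_k * tau).
    return np.mean(np.cos(np.outer(np.atleast_1d(tau), omegas)), axis=1)

# Under the e^{i omega tau} convention, the SE spectral density is N(0, 1/ell^2),
# so frequencies sampled from it yield an unbiased estimate of the kernel.
ell = 1.0
omegas = rng.normal(0.0, 1.0 / ell, size=20000)

tau = np.linspace(0.0, 3.0, 7)
approx = sparse_spectrum_estimate(tau, omegas)
exact = se_kernel(tau, ell)
print(np.max(np.abs(approx - exact)))  # small Monte Carlo error
```

With 20,000 sampled frequencies the cosine average tracks the exact kernel closely; the estimate at $\tau = 0$ is exact by construction.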

The approach introduced by Wilson and Adams (2013) focuses on the continuous part $\psi_{ac}$ of Lebesgue's decomposition Eq. (2). It follows from the Radon-Nikodym theorem (Halmos (1950)) that $\psi_{ac}$ admits a (positive) density $S$ with respect to Lebesgue's measure. Moreover, it is easy to see from Eq. (1) that $\int_{\mathbb{R}^d} S(\omega)\, d\omega = \psi_{ac}(\mathbb{R}^d) < \infty$. Hence, $S$ (suitably normalized) is a probability density function, which Wilson and Adams (2013) modelled as independent mixtures of Gaussians in each dimension of the spectral domain. The resulting family of spectral mixture kernels reads:

$$k(\tau) = \sum_{q=1}^{Q} \alpha_q\, e^{-2\pi^2 \|\tau \odot \sigma_q\|^2} \cos\left(2\pi \mu_q^\top \tau\right), \qquad (5)$$

where $\alpha_q \geq 0$, $\mu_q, \sigma_q \in \mathbb{R}^d$, and $\tau \odot \sigma_q$ denotes the Hadamard (also known as entrywise) product between the vectors $\tau$ and $\sigma_q$.
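A one-dimensional sketch of a spectral mixture kernel in the spirit of Eq. (5) (parameter names and values are illustrative); because each component corresponds to a symmetric pair of Gaussians in the spectral domain, the resulting Gram matrix is positive semi-definite by Bochner's theorem, which the eigenvalue check below verifies numerically:

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    # 1-d spectral mixture kernel:
    # k(tau) = sum_q w_q * exp(-2 pi^2 v_q tau^2) * cos(2 pi mu_q tau).
    tau = np.asarray(tau)[..., None]
    return np.sum(weights * np.exp(-2 * np.pi ** 2 * variances * tau ** 2)
                  * np.cos(2 * np.pi * means * tau), axis=-1)

# Three illustrative components: spectral weights, means and variances.
w = np.array([1.0, 0.5, 0.2])
mu = np.array([0.0, 1.0, 3.0])
v = np.array([0.5, 1.0, 2.0])

x = np.linspace(-2.0, 2.0, 50)
gram = spectral_mixture_kernel(x[:, None] - x[None, :], w, mu, v)
print(np.linalg.eigvalsh(gram).min() > -1e-8)  # True: valid covariance matrix
```

Note that $k(0) = \sum_q w_q$, so the weights directly control the process variance.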

Although mixtures of Gaussian distributions can be used to approximate any distribution, spectral mixture kernels are limited in that, when used as covariance functions, they yield infinite differentiability in the mean square sense. Such an excessive smoothness assumption might result in poor predictive accuracy. Moreover, a large number of spectral mixture components might be required to account for lower degrees of smoothness evidenced in the data. This would result in inference techniques that are costlier, and less robust to local optima.

When kernels are used as covariance functions, complex patterns in datasets may also be regarded as evidence of nonstationarity, under the (ergodic) assumption that some properties of a single path, for instance the degree of homogeneity, are the same as the corresponding properties considered across random samples of the underlying process. This approach is common in time series analysis. In Bayesian nonparametrics, nonstationarity may also be introduced to express domain knowledge that varies throughout the input space. However, commonly used approaches such as the input-dependent rescaling of stationary covariance functions of Paciorek and Schervish (2004) and the spatial deformation of stationary covariance functions (Sampson and Guttorp (1992); Damian et al. (2001); Schmidt and O'Hagan (2003)) are application-specific, in that they may not approximate arbitrarily well every covariance function.

The primary contribution of this paper is to propose families of spectral kernels we refer to as generalized spectral kernels, that (i) we prove can approximate any (possibly nonstationary) bounded kernel, and (ii) allow inference of the degree of differentiability of the corresponding stochastic process when used as a covariance function, or functions in the RKHS in alternative kernel methods such as support vector machines. We show that the only (to the best of our knowledge) existing families of kernels that can approximate arbitrarily well any stationary kernel, namely the spectral mixture kernels of Wilson and Adams (2013) and the sparse spectrum kernels of Lazaro-Gredilla et al. (2010), are special cases of the families we propose.

The rest of the paper is structured as follows. In section 2 we introduce generalized spectral kernels, and we prove that they can approximate any continuous bounded kernel. We start by providing the intuition and mathematical background underpinning our approach in section 2.1. In section 2.2 we introduce stationary generalized spectral kernels, we prove that they can approximate arbitrarily well any stationary kernel, we show that they extend existing approaches, and we provide examples of stationary generalized spectral kernels that allow the learning of the degree of differentiability of latent functions. In section 2.3 we extend our approach to nonstationary kernels, and we prove that the family of generalized spectral kernels we introduce can approximate arbitrarily well any continuous bounded kernel. We provide empirical evidence that validates our approach in section 3, and we conclude with a discussion in section 4.

2 Generalized Spectral Kernels

2.1 Intuition and Background

The intuition behind our approach is best illustrated with stationary kernels that admit a spectral density. From a practical perspective, in GP models, these are kernels that postulate that the correlation between two GP values vanishes as the distance between the points increases. Considering that spectral measures are finite according to Bochner's theorem, the spectral density of such a kernel is integrable, and hence admits a Fourier transform. In fact, it follows from Bochner's theorem that the spectral density of such a kernel turns out to be its Fourier transform, and vice-versa.⁵

⁵We use the 'real' frequency convention for the Fourier transform: $\hat{f}(\omega) = \int_{\mathbb{R}^d} f(x)\, e^{-2\pi i \omega^\top x}\, dx$.

We are interested in constructing families of integrable functions that can 'approximate' arbitrarily well any such spectral density in the spectral domain, in a sense that is intuitive and can easily be shown to imply approximation of the inverse Fourier transform (i.e. the kernel in the original domain). This will then allow us to conclude that the inverse Fourier transforms of the approximating functions in the spectral domain can approximate any stationary kernel with an absolutely continuous spectral measure. We would also like the family of approximating functions in the original domain to approximate arbitrarily well any stationary kernel whose spectral measure has a non-null singular part in Lebesgue's decomposition (e.g. sparse spectrum kernels). There are two main possible approaches for giving a meaning to the notion of approximating integrable positive-valued functions: one probabilistic and the other deterministic.

Firstly, noting that integrable positive-valued functions can be normalized to become probability density functions, approximation can be thought of in the sense of convergence in distribution of random variables. We recall that convergence in distribution is equivalent to pointwise convergence of cumulative distribution functions at continuity points of the limit, which does not imply convergence of the corresponding probability density functions. Hence, approximating in this sense does not guarantee approximating spectral densities, let alone approximating their inverse Fourier transforms. Stronger notions of convergence of random variables may be used, but the resulting links between approximating in the spectral domain and approximating in the original domain are more involved. This approach is therefore not suitable for our purpose.

The deterministic alternative has several options, two of which are of interest to us. The first notion of approximation is that of the pointwise convergence of functions, according to which a sequence of functions $(f_n)$ converges to a function $f$ if and only if for every $x$, the sequence $(f_n(x))$ converges to $f(x)$. The second notion of approximation is that of the strong topology of convergence in $L^1$, the space of integrable functions considered with its canonical norm:

$$\|f\|_{L^1} = \int_{\mathbb{R}^d} |f(x)|\, dx.$$

More precisely, we say that a sequence of integrable functions $(f_n)$ converges in the $L^1$ sense to an integrable function $f$ if and only if $\|f_n - f\|_{L^1}$ converges to 0; in other words, when the volume between the surfaces of $f_n$ and $f$ goes to zero. We recall that a set $A$ is dense in a set $B$ with respect to some sense of convergence (topology) if any element of $B$ is the limit of some sequence of elements of $A$. If $f$ and $g$ are integrable, denoting $\hat{f}$ and $\hat{g}$ their Fourier transforms, it follows from Jensen's inequality that

$$\sup_{\omega} \left|\hat{f}(\omega) - \hat{g}(\omega)\right| \leq \|f - g\|_{L^1}.$$

Hence, approximating in the spectral domain in the $L^1$ sense implies approximating in the original domain in the pointwise sense. More importantly, if a family of functions is dense in the space of integrable functions (in the spectral domain) with respect to convergence in $L^1$, then the corresponding family of inverse Fourier transforms is also dense in the space of integrable kernels (in the original domain) with respect to the pointwise convergence of functions.
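The sup-norm bound by the $L^1$ distance discussed above can be checked numerically on a discretization (a minimal sketch; the two test densities and the grids are arbitrary choices, and the bound holds exactly for the Riemann sums by the triangle inequality):

```python
import numpy as np

# Discretized check that sup_t |f_hat(t) - g_hat(t)| <= ||f - g||_{L1}.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)          # a Gaussian bump
g = np.exp(-np.abs(x))       # a Laplace bump

freqs = np.linspace(-3.0, 3.0, 301)
# Riemann-sum Fourier transforms F(t) = sum_j f(x_j) exp(-2 pi i t x_j) dx.
phases = np.exp(-2j * np.pi * freqs[:, None] * x[None, :])
F = (f[None, :] * phases).sum(axis=1) * dx
G = (g[None, :] * phases).sum(axis=1) * dx

sup_gap = np.abs(F - G).max()
l1_gap = np.abs(f - g).sum() * dx
print(sup_gap <= l1_gap)  # True
```

The same inequality underlies the density argument: closeness of spectral densities in $L^1$ forces uniform closeness of the corresponding kernels.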

2.2 Stationary Kernels

Conditions for a family of functions to be dense in $L^1$ have been extensively studied in the mathematical analysis literature. The most famous results on the matter are known as Wiener's Tauberian theorems (Wiener (1932); Rudin (1962); Korevaar (2004)). We recall the theorem that is of interest to us below.

Theorem (Wiener's Tauberian theorem). If $f$ is a function in $L^1(\mathbb{R}^d)$, a necessary and sufficient condition for the set of all linear combinations of translations of $f$ to be dense in $L^1(\mathbb{R}^d)$ (in the sense of convergence in $L^1$) is that the Fourier transform of $f$

has no zeros.

Gaussian probability density functions in the spectral domain satisfy the conditions of Wiener's Tauberian theorem, and the corresponding linear combinations of translations give rise to the spectral mixture kernels of Wilson and Adams (2013). Wiener's Tauberian theorem however provides a considerably weaker condition. We use it in the spectral domain to construct a broad range of families of tractable functions in the original domain that are dense in the family of stationary real-valued kernels with respect to the pointwise convergence of functions.

Theorem 2.2. Let $h$ be a real-valued positive semi-definite, continuous, and integrable function such that $h(\tau) > 0$ for all $\tau$. The family of functions

$$k(\tau) = \sum_{k=1}^{K} \alpha_k\, h(\gamma_k \odot \tau) \cos\left(2\pi \omega_k^\top \tau\right), \qquad (6)$$

with $K \in \mathbb{N}^*$, $\alpha_k \in \mathbb{R}$ and $\omega_k, \gamma_k \in \mathbb{R}^d$, is dense in the family of stationary real-valued kernels with respect to the pointwise convergence.

Proof (sketch). The functions of Eq. (6) arise as inverse Fourier transforms of linear combinations of translations of $\hat{h}$, the Fourier transform of $h$. As $h$ is integrable and positive semi-definite, $\hat{h} \in L^1$, and the requirement $h > 0$ makes Wiener's Tauberian theorem applicable to $\hat{h}$ (the Fourier transform of $\hat{h}$, namely $h$, has no zeros). See Appendix A for the full proof.

The assumptions of Th. 2.2 are mostly standard. From a practical perspective, the requirement $h > 0$ is what makes the family of functions approximate arbitrarily well the absolutely continuous part of any spectral measure, whereas the continuity assumption implies $\lim_{\gamma \to 0^+} h(\gamma \odot \tau) = h(0) > 0$, so that pure cosine kernels arise as limits, which allows approximating arbitrarily well the singular part of any spectral measure. The parameters $\gamma_k$ serve as inverse input scales. Noting that each term $h(\gamma_k \odot \tau)\cos(2\pi \omega_k^\top \tau)$ is positive semi-definite, to restrict ourselves to valid kernels we only consider linear combinations with non-negative coefficients. We may also further impose $h(0) = 1$ without loss of generality.

Definition (stationary generalized spectral kernels). Following the notations of Th. 2.2, we denote stationary generalized spectral kernels functions of the form:

$$k(\tau) = \sum_{k=1}^{K} \alpha_k\, h(\gamma_k \odot \tau) \cos\left(2\pi \omega_k^\top \tau\right), \qquad (7)$$

where $K \in \mathbb{N}^*$, $\alpha_k \geq 0$ and $\omega_k, \gamma_k \in \mathbb{R}^d$.

Differentiability: When stationary generalized spectral kernels are used as covariance functions, the following proposition establishes the degree of smoothness they induce.

Proposition 2.2. A mean zero stationary Gaussian process with stationary generalized spectral covariance function (Eq. (7)) is $p$ times continuously differentiable in the mean square sense if and only if a mean zero stationary Gaussian process with covariance function $h$ is.

Proof. See Appendix B.

Examples: Sparse spectrum kernels correspond to the limit case $\gamma_k \to 0$ with equal coefficients $\alpha_k = \sigma^2/K$. Moreover, it follows from Eq. (5) that the spectral mixture kernels of Wilson and Adams (2013) correspond to the special case $h(\tau) = e^{-2\pi^2 \|\tau\|^2}$ (a squared exponential kernel), which satisfies the conditions of Th. 2.2, and yields infinitely differentiable GPs as a result of Prop. 2.2. It is easy to verify that the Matérn kernels

$$h_\nu(\tau) = \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\sqrt{2\nu}\, \|\tau\|\right)^{\nu} K_\nu\left(\sqrt{2\nu}\, \|\tau\|\right),$$

where $\Gamma$ is the gamma function and $K_\nu$ is the modified Bessel function of the second kind, satisfy the conditions of Th. 2.2. Hence, Matérn spectral kernels

$$k(\tau) = \sum_{k=1}^{K} \alpha_k\, h_\nu(\gamma_k \odot \tau) \cos\left(2\pi \omega_k^\top \tau\right),$$

with $\nu > 0$, $\alpha_k \geq 0$ and $\omega_k, \gamma_k \in \mathbb{R}^d$, are also dense in the family of stationary kernels, and allow learning the degree of differentiability of the underlying latent function from the data.
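A minimal sketch of a stationary generalized spectral kernel with a Matérn 3/2 modulating function (all parameter values below are illustrative; the eigenvalue check verifies positive semi-definiteness of the resulting Gram matrix numerically, which is guaranteed because each summand is a product of positive semi-definite functions with a non-negative weight):

```python
import numpy as np

def matern32(tau):
    # Matern 3/2 kernel (unit length scale): (1 + sqrt(3)|tau|) exp(-sqrt(3)|tau|).
    r = np.sqrt(3.0) * np.abs(tau)
    return (1.0 + r) * np.exp(-r)

def stationary_gsk(tau, alphas, gammas, omegas):
    # Stationary generalized spectral kernel:
    # k(tau) = sum_k alpha_k * h(gamma_k * tau) * cos(2 pi omega_k * tau),
    # here with h a Matern 3/2 kernel.
    tau = np.asarray(tau)[..., None]
    return np.sum(alphas * matern32(gammas * tau)
                  * np.cos(2 * np.pi * omegas * tau), axis=-1)

# Illustrative parameters; non-negative weights keep the kernel valid.
alphas = np.array([1.0, 0.3])
gammas = np.array([0.5, 2.0])
omegas = np.array([0.2, 1.5])

x = np.linspace(0.0, 5.0, 60)
gram = stationary_gsk(x[:, None] - x[None, :], alphas, gammas, omegas)
print(np.linalg.eigvalsh(gram).min() > -1e-8)  # True: positive semi-definite
```

Swapping `matern32` for a smoother or rougher modulating function changes the mean square differentiability of the induced GP, in line with Prop. 2.2.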

2.3 Nonstationary Kernels

Bochner's theorem was the cornerstone of the previous section. The spectral characterisation of stationary kernels it provides turned the problem of approximating stationary kernels into that of approximating measures in the spectral domain. Luckily, it turns out that a more general spectral characterisation exists that includes nonstationary kernels (see (Yaglom, 1987, §26.4) and (Loeve, 1994, §37.4) for the univariate case and (Genton, 2002, pp. 308) and (Kakihara, 1985, pp. 149) for a generalization).

Theorem (generalized spectral representation). A complex-valued bounded continuous function $k$ on $\mathbb{R}^d \times \mathbb{R}^d$ is the covariance function of a mean square continuous complex-valued random process on $\mathbb{R}^d$ if and only if it can be represented as

$$k(x, y) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} e^{2\pi i \left(\omega_1^\top x - \omega_2^\top y\right)}\, \psi(d\omega_1, d\omega_2), \qquad (8)$$

where $\psi$ is the Lebesgue-Stieltjes measure associated to some positive semi-definite function with bounded variations. When the spectral measure $\psi$ has mass concentrated along the diagonal $\omega_1 = \omega_2$, we recover Bochner's theorem. We may once again leverage Wiener's Tauberian theorem to construct families of functions in the spectral domain that are dense in $L^1$ with respect to convergence in $L^1$, so that any spectral density can be approximated arbitrarily well. The argument developed in section 2.1 may once again be used, in conjunction with the generalized spectral representation above rather than Bochner's theorem, to demonstrate that this corresponds to approximating arbitrarily well any bounded kernel in the original domain in the sense of the pointwise convergence of functions. We obtain the following result.

Theorem 2.3. Let $h$ be a real-valued positive semi-definite, continuous, and integrable function on $\mathbb{R}^d \times \mathbb{R}^d$ such that $h > 0$. The family of functions

$$k(x, y) = \sum_{i=1}^{K} \sum_{j=1}^{K} a_{ij}\, h(\gamma_i \odot x, \gamma_j \odot y) \cos\left(2\pi\left(\omega_i^\top x - \omega_j^\top y\right)\right),$$

where $a_{ij} \in \mathbb{R}$ and $\omega_i, \gamma_i \in \mathbb{R}^d$, with $K \in \mathbb{N}^*$, is dense in the family of real-valued continuous bounded nonstationary kernels with respect to the pointwise convergence of functions.

Proof. See Appendix C.

The functions $h(\gamma_i \odot x, \gamma_j \odot y)$ are positive semi-definite, as are products of such functions, so that to build a flexible family of expressive nonstationary kernels we may simply require the coefficients to be of the form $a_{ij} = \alpha_i \alpha_j$, which guarantees positive semi-definiteness. We may also impose $h(0, 0) = 1$ without loss of generality.

Definition (generalized spectral kernels). Following the notations of Th. 2.3, we denote generalized spectral kernels functions of the form:

$$k(x, y) = \sum_{i=1}^{K} \sum_{j=1}^{K} \alpha_i \alpha_j\, h(\gamma_i \odot x, \gamma_j \odot y) \cos\left(2\pi\left(\omega_i^\top x - \omega_j^\top y\right)\right), \qquad (9)$$

with $K \in \mathbb{N}^*$, $\alpha_i \in \mathbb{R}$ and $\omega_i, \gamma_i \in \mathbb{R}^d$, and relaxing the integrability condition on $h$.

Remarks: We note that when $h$ is stationary and only the diagonal terms $i = j$ are retained, we recover stationary generalized spectral kernels. The differentiability discussion of the stationary case can be extended. In the general setting, it is sufficient that a generalized spectral covariance function be $2p$ times continuously differentiable for the corresponding mean zero Gaussian process to be $p$ times differentiable in the mean square sense (Adler and Taylor (2011)). Similarly to the stationary case, the degree of smoothness may be learned or set a priori through the function $h$.

Examples of integrable $h$: Silverman (1957) introduced numerous examples of real-valued continuous and integrable covariance functions that are so-called 'locally stationary'; i.e. of the form

$$h(x, y) = K_1\left(\frac{x + y}{2}\right) K_2(x - y),$$

where $K_2$ is a stationary covariance function and $K_1$ is positive-valued. The approaches suggested by the author are flexible enough that $h$ can be constructed to be positive and integrable, and the degree of differentiability may be controlled for instance by taking $K_2$ to be a Matérn kernel. Moreover, with this choice of $h$, nonstationary random Fourier features approximations may easily be constructed by noting that Eq. (8) then admits an expectation form analogous to that of the stationary case, with frequency pairs drawn from a location-scale mixture of the distribution whose density is the Fourier transform of $h$, and with mixing probabilities deduced from Eq. (9).

$h$ may also be chosen to be of the form

$$h(x, y) = g(x)\, g(y), \qquad (10)$$

where $g$ is any continuous, integrable and positive-valued function. In this case, the functions $g(\gamma_i \odot x)\cos(2\pi \omega_i^\top x)$ and $g(\gamma_i \odot x)\sin(2\pi \omega_i^\top x)$ span the RKHS induced by the kernel of Eq. (9). As the dimension of the RKHS is finite, exact inference may be achieved with time complexity and memory requirement linear in $n$ in most kernel methods, where $n$ is the number of samples. Differentiability may once again be controlled by taking $g$ to be a Matérn kernel.
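A sketch of the computational benefit, restricted to the diagonal-coefficient special case for simplicity (all names, the choice of $g$, and parameter values are illustrative): with $h(x, y) = g(x)g(y)$ the kernel admits an explicit feature map of dimension $2K$, so the Gram matrix has rank at most $2K$ and exact regression is linear in the number of samples.

```python
import numpy as np

def features(x, alphas, gammas, omegas, g):
    # Explicit feature map of the degenerate kernel obtained with
    # h(x, y) = g(x) g(y) and diagonal coefficients: one cosine and one
    # sine feature per spectral component, i.e. 2K features in total.
    x = np.atleast_1d(x)[:, None]
    amp = alphas * g(gammas * x)        # (n, K) amplitudes
    phase = 2 * np.pi * omegas * x      # (n, K) phases
    return np.hstack([amp * np.cos(phase), amp * np.sin(phase)])

g = lambda u: np.exp(-u ** 2)           # continuous, integrable, positive-valued

alphas = np.array([1.0, 0.5])
gammas = np.array([0.3, 1.0])
omegas = np.array([0.1, 0.8])

x = np.linspace(-3.0, 3.0, 40)
Phi = features(x, alphas, gammas, omegas, g)   # (40, 4) feature matrix
gram = Phi @ Phi.T                             # Gram matrix of rank <= 2K = 4

# Exact regularized regression in O(n m^2) time, m = 2K features: linear in n.
y = np.sin(x)
w = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(Phi.shape[1]), Phi.T @ y)
fit = Phi @ w                                  # fitted values at training inputs
```

The normal-equations solve involves only an $m \times m$ system, which is the source of the linear-in-$n$ cost claimed above.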

3 Experiments

In this section SE denotes the squared exponential kernel (i.e. $k(\tau) = e^{-\frac{1}{2}\|\tau/\ell\|^2}$), SS denotes the sparse spectrum kernel, MA*2 denotes the Matérn */2 kernel, S-* denotes the stationary generalized spectral kernel with modulating kernel * (i.e. the function $h$ in Eq. (7) taken to be the kernel *), and NS-* denotes the nonstationary generalized spectral kernel of the form Eq. (10) where $g$ is the kernel *. Moreover, unless stated otherwise all spectral kernels have $K = 5$ spectral components.

Option pricing: Firstly, we consider modelling the evolution of the price of a put option on the STOXX Europe 600 Banks index⁶ as a function of time (i.e. the theta of the option), through GP regression.⁷ We use a third of the data for training and the rest for prediction. As evidenced by Tab. 1, the spectral Matérn 3/2 kernel (S-MA32) improves on the predictive accuracy (RMSE) of the spectral mixture kernel (S-SE). Given the density property of S-SE, this suggests that on this dataset the spectral mixture kernel requires more parameters than the spectral MA 3/2 kernel to achieve the same accuracy, thus making the latter kernel faster and more robust to local maxima during inference. Fig. 1 illustrates the learned posterior mean +/- 2 posterior standard deviations for the S-MA32 kernel. Learned kernels are illustrated in Fig. 2.

⁶We use the strike 195, maturity June 2015 put option on the STOXX 600 Banks index. The data originate from Bloomberg (security code SX7P 6 P195).

⁷Kernel hyper-parameters are learned by maximizing the marginal log-likelihood.

Table 1: Fit and predictive accuracy on the option experiment.

SE S-SE S-MA52 S-MA32
Log. Lik. -28.56 -18.61 -19.39 -19.60
RMSE 0.89 0.90 0.76 0.64
Figure 1: Posterior mean in the option experiment with S-MA32 kernel.
Figure 2: Learned kernels in the option experiment.

Air temperature anomalies: Our second experiment is based on the well-studied temperature anomalies dataset of Wood et al. (2002). The dataset consists of monthly readings of air temperature anomalies at various points on the globe in December 1993. The authors defined an air temperature anomaly as the deviation of a monthly temperature at a given location from the average over the period 1950-1979 of the monthly temperatures at the same location. We selected 2/3 of the data at random for training and predicted the left-out temperature anomalies using GP regression. Training and predictive results are summarized in Tab. 2. We found that the spectral Matérn 1/2 kernel outperforms competing kernels, including the spectral mixture kernel (S-SE), which evidences that the latent anomaly function is best modelled as continuous but non-differentiable. Fig. 3 illustrates a map of the posterior mean of the temperature anomaly, and a map of the learned correlation between the temperature anomaly in London and elsewhere on the globe under the spectral Matérn 1/2 kernel.

Table 2: Training log-likelihood and predictive accuracy on the air temperature anomalies dataset.

SE S-SE S-MA12 S-MA32
Log. Lik. -358.58 -341.68 -316.89 -326.10
RMSE 1.34 1.31 1.28 1.29
Figure 3: Posterior mean temperature anomaly and learned correlation function between the temperature anomaly in London and elsewhere on the globe under the spectral Matérn 1/2 kernel.

Approximating nonstationary kernels: Finally, we consider approximating a nonstationary kernel in order to demonstrate the need for nonstationary generalized spectral kernels. The nonstationary kernel of interest is the covariance function of a time-inverted fractional Brownian motion (IFBM):

$$k(s, t) = \frac{1}{2}\left(s^{-2H} + t^{-2H} - \left|\frac{1}{s} - \frac{1}{t}\right|^{2H}\right), \quad s, t > 0,$$

with $H \in (0, 1)$ the Hurst index. Such kernels might be particularly useful to model continuous latent functions with known long range behaviour and uncertain short range behaviour (e.g. the price of an option contract as a function of time, value functions in dynamic programming). Higher values of the Hurst index result in more volatile increments and rougher paths of the corresponding IFBM. We approximate $k$ with generalized spectral kernels for several values of $H$. The parameters of approximating kernels are learned by minimizing the sum of the square errors between $k$ and the approximating spectral kernel, both evaluated on a uniform grid. Tab. 3 illustrates the corresponding root mean square errors normalized by the average value of $k$ on the grid for different values of the Hurst index, and with $K = 5$ spectral components. Fig. 4 illustrates sections of the learned kernels for different Hurst indices. It can be seen that nonstationary spectral kernels considerably outperform stationary alternatives such as the spectral mixture kernel and the sparse spectrum kernel. This comes as no surprise given that the variance $k(t, t)$ of the IFBM varies with $t$, which cannot be modelled by stationary kernels. More importantly, it can be seen at a glance in Fig. 4 that, with only 5 spectral components, nonstationary generalized spectral kernels approximate the IFBM kernel well in absolute terms, which is consistent with the density property discussed in the previous section.
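A sketch of the IFBM covariance, under the assumption (ours, for illustration; the paper's exact convention may differ) that the time inversion is $t \mapsto 1/t$ applied to a standard fractional Brownian motion with covariance $\frac{1}{2}(s^{2H} + t^{2H} - |s - t|^{2H})$:

```python
import numpy as np

def ifbm_kernel(s, t, hurst):
    # Covariance of B_{1/t} for a standard fractional Brownian motion B with
    # Hurst index `hurst` (an assumed convention); defined for s, t > 0.
    u, v = 1.0 / np.asarray(s), 1.0 / np.asarray(t)
    return 0.5 * (u ** (2 * hurst) + v ** (2 * hurst)
                  - np.abs(u - v) ** (2 * hurst))

t = np.linspace(1.0, 2.0, 30)
K = ifbm_kernel(t[:, None], t[None, :], hurst=0.7)
print(np.linalg.eigvalsh(K).min() > -1e-8)     # True: a valid covariance
print(np.allclose(np.diag(K), t ** (-1.4)))    # True: variance varies with t
```

The diagonal $k(t, t) = t^{-2H}$ makes the nonstationarity explicit: no stationary kernel can match an input-dependent variance.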

Table 3: Normalized RMSE of approximations of the time-inverted fractional Brownian motion kernel with Hurst index $H$ by various spectral kernels.

h S-SE SS NS-SE NS-MA12 NS-MA12
0.22 0.26 0.09 0.08 0.08
0.48 0.49 0.09 0.07 0.08
0.37 0.37 0.10 0.09 0.08
Figure 4: Approximations of the IFBM kernel by spectral kernels with $K = 5$ components.

4 Conclusion

We propose families of kernels, which we refer to as generalized spectral kernels, that we prove can approximate arbitrarily well any continuous bounded kernel. As a result, given that loss functions in kernel methods are often continuous in the kernel/Gram matrix, generalized spectral kernels can, out of the box, perform as well as (if not better than) any hand-crafted kernel of practical interest, stationary or not. We show that the only families of kernels previously proposed that can approximate arbitrarily well any stationary kernel (to the best of our knowledge) are special cases of generalized spectral kernels. Critically, our extension improves on competing approaches in that it allows learning the degree of smoothness of the latent function in Gaussian process models, or that of functions in the RKHS in other kernel methods. Moreover, the families of kernels we propose are, to the best of our knowledge, the first that can approximate arbitrarily well any bounded continuous nonstationary kernel. Finally, our nonstationary extension is amenable to scalable inference, either directly or through a nonstationary extension of random Fourier features approximations.

References

  • Vapnik (1995) V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
  • Vapnik (1998) V. Vapnik. Statistical Learning Theory. Springer, New York, 1998.
  • Bach (2008) F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. Advances in Neural Information Processing Systems (NIPS), 2008.
  • Duvenaud et al. (2011) D. Duvenaud, H. Nickisch, and C. E. Rasmussen. Additive Gaussian processes. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 226–234, 2011.
  • Duvenaud et al. (2013) D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. In Proceedings of the 30th International Conference on Machine Learning, pages 1166–1174, June 2013.
  • Lazaro-Gredilla et al. (2010) M. Lazaro-Gredilla, J. Quinonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. The Journal of Machine Learning Research, 11:1866–1881, 2010.
  • Wilson and Adams (2013) A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1067–1075, 2013.
  • Stein (1999) M. L. Stein. Interpolation of Spatial Data. Springer-Verlag, New York, 1999.
  • Rasmussen and Williams (2005) C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005.
  • Rudin (1962) W. Rudin. Fourier analysis on groups. John Wiley, 1962.
  • Halmos (1950) P. R. Halmos. Measure theory. Springer-Verlag, 1950.
  • Hewitt and Stromberg (1975) E. Hewitt and K. Stromberg. Real and abstract analysis. A Modern treatment of the theory of functions of a real variable. Springer-Verlag, 1975.
  • Rahimi and Recht (2007) A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), pages 1177–1184, 2007.
  • Le et al. (2013) Q. Le, T. Sarlós, and A. Smola. Fastfood: approximating kernel expansions in loglinear time. In International Conference on Machine Learning (ICML), 2013.
  • Yang et al. (2015) Z. Yang, A. Smola, L. Song, and A. G. Wilson. A la carte – learning fast kernels. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1098–1106, 2015.
  • Paciorek and Schervish (2004) C. Paciorek and M. Schervish. Nonstationary covariance functions for Gaussian process regression. Advances in neural information processing systems, 16:273–280, 2004.
  • Sampson and Guttorp (1992) P. D. Sampson and P. Guttorp. Nonparametric estimation of nonstationary spatial covariance structure. Journal of the American Statistical Association, 87(417), 1992.
  • Damian et al. (2001) D. Damian, P. D. Sampson, and P. Guttorp. Bayesian estimation of semi-parametric nonstationary spatial covariance structures. Environmetrics, 12(2):161–178, 2001.
  • Schmidt and O’Hagan (2003) A. M. Schmidt and A. O’Hagan. Bayesian inference for nonstationary spatial covariance structure via spatial deformations. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(3):743–758, 2003.
  • Wiener (1932) N. Wiener. Tauberian theorems. Annals of Mathematics, 33:1–100, 1932.
  • Korevaar (2004) J. Korevaar. Tauberian theory: a century of developments. Springer, 2004.
  • Yaglom (1987) A. M. Yaglom. Correlation theory of stationary and related random functions, volume 1. Springer-Verlag, 1987.
  • Loeve (1994) M. Loeve. Probability Theory II (Graduate Texts in Mathematics), volume 2. Springer-Verlag, 1994.
  • Genton (2002) M. G. Genton. Classes of kernels for machine learning: A statistics perspective. The Journal of Machine Learning Research, 2:299–312, March 2002. ISSN 1532-4435.
  • Kakihara (1985) Y. Kakihara. A note on harmonizable and v-bounded processes. Journal of Multivariate Analysis, 16:140–156, 1985.
  • Adler and Taylor (2011) R. J. Adler and J. E. Taylor. Topological Complexity of Smooth Random Functions: École D’Été de Probabilités de Saint-Flour XXXIX-2009. Lecture Notes in Mathematics / École d’Été de Probabilités de Saint-Flour. Springer, 2011.
  • Silverman (1957) R. Silverman. Locally stationary random processes. Information Theory, IRE Transactions on, 3(3):182–187, September 1957.
  • Wood et al. (2002) S. A. Wood, W. Jiang, and M. Tanner. Bayesian mixture of splines for spatially adaptive nonparametric regression. Biometrika, 89(3):pp. 513–528, 2002.
Appendix

In what follows, we use the ‘real’ (as opposed to angular) frequency convention for the Fourier transform. That is, if $f$ is an integrable real-valued function on $\mathbb{R}^d$ (i.e. $\int_{\mathbb{R}^d} |f(x)| \, dx < +\infty$), the Fourier transform of $f$ reads

$$\mathcal{F}[f](s) = \int_{\mathbb{R}^d} f(x) \, e^{-2\pi i \, s^T x} \, dx,$$

and the inverse Fourier transform is obtained as

$$\mathcal{F}^{-1}[g](x) = \int_{\mathbb{R}^d} g(s) \, e^{2\pi i \, s^T x} \, ds.$$

It is a direct consequence of our convention that $\mathcal{F}\big[\mathcal{F}[f]\big](x) = f(-x)$.
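As a quick numerical illustration of this convention (a sketch of our own, not from the paper): under the real-frequency convention the Gaussian $f(x) = e^{-\pi x^2}$ is its own Fourier transform, which we can verify by direct numerical integration.

```python
import numpy as np

# Under F[f](s) = \int f(x) exp(-2*pi*i*s*x) dx, the Gaussian
# f(x) = exp(-pi x^2) satisfies F[f](s) = exp(-pi s^2).
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

def fourier(s):
    """Riemann-sum approximation of F[f](s) with the real-frequency convention."""
    return np.sum(f * np.exp(-2j * np.pi * s * x)) * dx

for s in (0.0, 0.5, 1.3):
    assert abs(fourier(s) - np.exp(-np.pi * s**2)) < 1e-6
```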

Appendix A: Proof of Th. 2.2

We want to prove the following theorem.

Theorem: Let $K : \mathbb{R}^d \to \mathbb{R}$ be a real-valued positive semi-definite, continuous, and integrable function such that $K(x) > 0$ for all $x$. The family of functions

$$k(\tau) = \sum_{k=1}^{M} \sigma_k^2 \, K(\tau \odot \gamma_k) \cos\left(2\pi \, \theta_k^T \tau\right), \qquad (11)$$

with $M \in \mathbb{N}^*$, $\sigma_k \in \mathbb{R}$, and $\gamma_k, \theta_k \in \mathbb{R}^d_+$, is dense in the family of stationary real-valued kernels with respect to the pointwise convergence. {proof} Let us define

and

Firstly, we note that the function $\tau \mapsto K(\tau \odot \gamma_k)\cos(2\pi \, \theta_k^T \tau)$ is integrable (because $K$ is integrable and the cosine function is bounded), and therefore admits a Fourier transform. As it is integrable and even, it admits an even Fourier transform. Denoting $\hat{K}$ the Fourier transform of $K$, and $\oslash$ the element-wise division, by definition of the inverse Fourier transform, and using properties of the Fourier transform, it follows that

which, because $\hat{K}$ is even, can be rewritten as

Let us now consider a stationary real-valued kernel.

Case 1: absolutely continuous spectral measure

When the singular part of the spectral measure of the kernel in Lebesgue’s decomposition theorem is null, the spectral measure admits a density with respect to Lebesgue’s measure and we note

The density is even as the kernel is real-valued. It is also integrable. Let us consider the function $\hat{K}$. It is integrable and its Fourier transform is strictly positive everywhere. Hence, by Wiener’s Tauberian theorem,

where the convergence is to be understood in the $L^1$ sense. As $\hat{K}$ is even, we also have

so that

This proves that the family of spectral density functions

is dense in the family of Fourier transforms of integrable stationary kernels (we recall that Bochner’s theorem implies that the spectral density function of an integrable kernel is its Fourier transform) with respect to the convergence in $L^1$ of functions. Hence, as proved in the paper, the corresponding family of inverse Fourier transforms, namely

is dense in the family of integrable real-valued stationary kernels with respect to the pointwise convergence of functions. This result also holds for the superset by definition of the density property. Moreover, as the cosine function is even, the density property is preserved after imposing the constraint $\theta_k \geq 0$.
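To make the family of Eq. (11) concrete, here is a minimal numerical sketch of our own (hypothetical parameter values) using the Laplacian envelope $K(\tau) = e^{-|\tau|}$, which is continuous, integrable, strictly positive, and positive semi-definite on $\mathbb{R}$; the Gram matrix of the resulting kernel should be numerically positive semi-definite.

```python
import numpy as np

# Stationary generalized spectral kernel in one dimension (illustrative values):
#   k(tau) = sum_k sigma_k^2 * K(gamma_k * tau) * cos(2*pi*theta_k*tau),
# with envelope K(tau) = exp(-|tau|).
def k_stationary(tau, sigmas, gammas, thetas):
    tau = np.asarray(tau, dtype=float)
    out = np.zeros_like(tau)
    for sigma, gamma, theta in zip(sigmas, gammas, thetas):
        out += sigma**2 * np.exp(-np.abs(gamma * tau)) * np.cos(2 * np.pi * theta * tau)
    return out

x = np.linspace(0.0, 5.0, 60)
tau = x[:, None] - x[None, :]  # matrix of pairwise differences
G = k_stationary(tau, sigmas=(1.0, 0.5), gammas=(1.0, 3.0), thetas=(0.0, 0.7))

# A valid kernel yields a symmetric, numerically PSD Gram matrix.
assert np.allclose(G, G.T)
assert np.linalg.eigvalsh(G).min() > -1e-8
```

Each summand is positive semi-definite because multiplying a PSD envelope by a cosine merely translates its spectral density, so the sum is PSD as well.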

Case 2: pure-point spectral measure

We now deal with the case where the continuous part of the spectral measure of the kernel in Lebesgue’s decomposition theorem is null. A refined version of Lebesgue’s decomposition theorem states that the singular measure can be uniquely decomposed as the sum of a discrete (pure-point) measure and a singular continuous measure that is mutually singular with Lebesgue’s measure. The singular continuous measure is not intuitive as it gives null probability mass to any countable set of ‘outcomes’, and yet it gives positive probability mass to some sets of outcomes with null ‘volume’ (Lebesgue’s measure). For those reasons, we believe singular continuous measures to be of limited interest in most statistical inference problems involving stationary kernels, and we will restrict our attention to discrete measures in this section (i.e. we assume the singular continuous part is null).

We recall from Eq. (3) that the stationary covariance functions arising from discrete positive and symmetric spectral measures can be written as:

with $\sigma_k \in \mathbb{R}$ and $\theta_k \in \mathbb{R}^d$. Moreover, as $K$ is positive-valued and positive semi-definite, we have that

which implies

Hence,

As $\sum_k \sigma_k^2 < +\infty$, by the dominated convergence theorem we have that

Hence, any stationary kernel whose spectral measure is a pure-point measure is the limit of kernels of the form of Eq. (11) (as the scale parameters go to $0$ and the number of terms goes to $+\infty$), which concludes the proof in the second case.
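The limiting argument above can be illustrated numerically (our own sketch with arbitrary values): as the scale parameter goes to $0$, the envelope $e^{-|\gamma\tau|}$ tends to $K(0) = 1$ uniformly on compacts, and a generalized spectral term degenerates to a sparse-spectrum term $\sigma^2 \cos(2\pi\theta\tau)$.

```python
import numpy as np

# As gamma -> 0, exp(-|gamma*tau|) -> 1 on any compact set, so
#   sigma^2 * exp(-|gamma*tau|) * cos(2*pi*theta*tau) -> sigma^2 * cos(2*pi*theta*tau),
# recovering the sparse-spectrum kernel in the limit (illustrative values).
tau = np.linspace(-3.0, 3.0, 101)
sigma, theta = 1.3, 0.7
limit = sigma**2 * np.cos(2 * np.pi * theta * tau)
for gamma, tol in ((1e-2, 6e-2), (1e-4, 6e-4), (1e-6, 6e-6)):
    approx = sigma**2 * np.exp(-np.abs(gamma * tau)) * np.cos(2 * np.pi * theta * tau)
    # |approx - limit| <= sigma^2 * (1 - exp(-3*gamma)) <= sigma^2 * 3 * gamma
    assert np.max(np.abs(approx - limit)) <= tol
```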

Case 3: general spectral measure

In the general case, we decompose any covariance function as

with one term having an absolutely continuous spectral measure and the other a pure-point spectral measure. We then use the two cases previously discussed to conclude that any such covariance function is the limit of linear combinations of kernels of the form of Eq. (11).

Appendix B: Proof of Prop. 2.2

We now prove the following proposition.

Proposition: A mean zero stationary Gaussian process with stationary generalized spectral covariance function is $n$ times continuously differentiable in the mean square sense if and only if a mean zero stationary Gaussian process with covariance function $K$ is. {proof} $n$ times differentiability of a stationary GP in the mean square sense is equivalent to $2n$ times differentiability of its covariance function at $0$ (Adler and Taylor (2011)). It is easy to see that if $K$ is $2n$ times differentiable at $0$, then so is the corresponding stationary generalized spectral kernel. Reciprocally, a simple reasoning by contradiction allows us to conclude that if $K$ is not at least $2n$ times differentiable at $0$, then neither is the corresponding stationary generalized spectral kernel.
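As an informal numerical companion to this proposition (a sketch of our own): the Laplacian envelope $e^{-|\tau|}$ has a kink at $\tau = 0$ (one-sided derivatives $-1$ and $+1$), so a GP with the corresponding generalized spectral kernel is not mean square differentiable, whereas the squared-exponential envelope $e^{-\tau^2}$ is smooth at $0$.

```python
import numpy as np

# One-sided finite-difference slopes at tau = 0 for two candidate envelopes.
# Unequal one-sided slopes (a kink) rule out mean square differentiability
# of the associated stationary GP; a smooth envelope does not.
h = 1e-6
laplace = lambda t: np.exp(-abs(t))
gauss = lambda t: np.exp(-t**2)

lap_right = (laplace(h) - laplace(0.0)) / h      # approx -1
lap_left = (laplace(-h) - laplace(0.0)) / (-h)   # approx +1
gau_right = (gauss(h) - gauss(0.0)) / h          # approx 0
gau_left = (gauss(-h) - gauss(0.0)) / (-h)       # approx 0

assert abs(lap_right + 1.0) < 1e-3 and abs(lap_left - 1.0) < 1e-3
assert abs(gau_right) < 1e-3 and abs(gau_left) < 1e-3
```

Since the cosine factor in Eq. (11) is smooth, the differentiability of the kernel at $0$ hinges entirely on the envelope, which is the content of the proposition.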

Appendix C: Proof of Th. 2.3

In this section we prove Th. 2.3, which we recall below.

Theorem: Let $K : \mathbb{R}^d \to \mathbb{R}$ be a real-valued positive semi-definite, continuous, and integrable function such that $K(x) > 0$ for all $x$. The family

$$k(x, y) = \sum_{k=1}^{M} \sum_{l=1}^{M} \sigma_k \sigma_l \, K\big((x \odot \gamma_k) - (y \odot \gamma_l)\big) \cos\big(2\pi (\theta_k^T x - \theta_l^T y)\big),$$

where $M \in \mathbb{N}^*$, with $\sigma_k \in \mathbb{R}$ and $\gamma_k, \theta_k \in \mathbb{R}^d_+$, is dense in the family of real-valued continuous bounded nonstationary kernels with respect to the pointwise convergence of functions.

{proof}

$K$ being integrable, it admits a Fourier transform

$$\hat{K}(s) = \int_{\mathbb{R}^d} K(x) \, e^{-2\pi i \, s^T x} \, dx,$$

and we have $\mathcal{F}[\hat{K}](x) = K(-x) = K(x) > 0$. Hence, the conditions of Wiener’s Tauberian theorem are met, so that any integrable function on $\mathbb{R}^d$ is the limit of linear combinations of translations of $\hat{K}$.

Let us consider a real-valued continuous bounded nonstationary kernel and its Lebesgue–Stieltjes spectral measure. We will start with the case where the spectral measure is absolutely continuous with respect to Lebesgue’s measure. In that case, denoting the corresponding Radon–Nikodym derivative, we have:

Noting that the Radon–Nikodym derivative is integrable, we can define

a sequence of linear combinations of translations of $\hat{K}$ converging to it in the $L^1$ sense. We can always consider such a sequence of symmetric functions. In effect, for any candidate sequence, the symmetrized functions

are symmetric, integrable, linear combinations of translations of $\hat{K}$ (as $\hat{K}$ is symmetric), and converge to the Radon–Nikodym derivative in the $L^1$ sense. As both the derivative and its approximants are symmetric, we also have:

We can therefore rewrite the kernel as

Denoting

it is easy to see, by applying Jensen’s inequality as we did in the stationary case, that the resulting sequence of kernels converges to the target kernel pointwise. Thus, as the target kernel is real-valued, its real part converges to it pointwise too. Simple changes of variables using the expanded expression then give us the following expression for the real part:

(12)

By expanding the cosine functions in Eq. (12), it follows that each of the three sums above can be rewritten in the form

where for the first sum we have , , for the other two sums , for the second sum and for the last sum . This proves that for any real-valued continuous bounded nonstationary kernel with absolutely continuous spectral measure, there exists a sequence of the form

that converges to .

As for pure-point spectral measures, they can be written as

(13)

By symmetry of the spectral measure, Eq. (13) may be rewritten as

and using the same trick as in the integrable case, we get that the corresponding kernels are of the form

where we also have $\theta_k \geq 0$. As both $K$ and the cosine function are bounded ($K$, being continuous and integrable, is bounded), we may once again use the dominated convergence theorem to conclude that

where we have used the continuity of $K$ at $0$. This concludes the proof for the pure-point case. Hybrid cases are dealt with in a similar manner to the proof of Th. 2.2.
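As a numerical sanity check on the nonstationary construction (a sketch of our own: the double-sum form below is one way to instantiate a generalized-spectral-type nonstationary kernel, with the Laplacian envelope and illustrative parameter values), the resulting Gram matrices should be symmetric and positive semi-definite.

```python
import numpy as np

# Nonstationary kernel of 'double sum' generalized-spectral type (illustrative):
#   k(x, y) = sum_{k,l} sigma_k*sigma_l * K(gamma_k*x - gamma_l*y)
#                                       * cos(2*pi*(theta_k*x - theta_l*y)),
# with K(t) = exp(-|t|). Because the gammas differ, k(x, y) depends on more
# than x - y, so the kernel is genuinely nonstationary; it is PSD by construction.
sigmas, gammas, thetas = (1.0, 0.6), (1.0, 2.5), (0.3, 1.1)

def k_nonstationary(x, y):
    val = 0.0
    for sk, gk, tk in zip(sigmas, gammas, thetas):
        for sl, gl, tl in zip(sigmas, gammas, thetas):
            val += sk * sl * np.exp(-abs(gk * x - gl * y)) \
                   * np.cos(2 * np.pi * (tk * x - tl * y))
    return val

xs = np.linspace(0.0, 4.0, 40)
G = np.array([[k_nonstationary(a, b) for b in xs] for a in xs])

assert np.allclose(G, G.T)                  # symmetry
assert np.linalg.eigvalsh(G).min() > -1e-8  # numerically PSD
```

Positive semi-definiteness follows from the fact that the quadratic form factorizes through the PSD function $K$ evaluated at differences of the auxiliary points $\gamma_k x_i$.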

Appendix D: Technical discussion on Th. 2.3

Every real-valued continuous bounded function of the form of Eq. (8) is indeed the covariance function of a real-valued mean square continuous stochastic process. Strictly speaking, for a real-valued continuous bounded function that is also the covariance function of a real-valued mean square continuous stochastic process to be of the form of Eq. (8), it has to be harmonizable (Yaglom (1987); Kakihara (1985)). However, as noted by (Yaglom, 1987, pp. 464), the only continuous bounded covariance functions known to the author that are not harmonizable ‘are rather complicated and have some unusual, even pathological properties’. Moreover, Kakihara (1985) proved that if a real-valued bounded function is the covariance function of a mean square continuous stochastic process $(X_t)$, then, provided that the mapping $t \mapsto X_t$ is (strongly) measurable, it is harmonizable. The foregoing condition will often be verified by stochastic processes of practical interest, which is the reason why, as did Genton (2002), we have ignored this technicality in Th. 2.3.
