Optimal Rates for Spectral-regularized Algorithms with Least-Squares Regression over Hilbert Spaces
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral-regularized algorithms, including ridge regression, principal component analysis, and gradient methods. We prove optimal, high-probability convergence results in terms of variants of norms for the studied algorithms, considering a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.
Keywords: Learning theory, Reproducing kernel Hilbert space, Sampling operator, Regularization scheme, Regression.
Let the input space $H$ be a separable Hilbert space with inner product denoted by $\langle\cdot,\cdot\rangle_H$, and the output space $Y = \mathbb{R}$. Let $\rho$ be an unknown probability measure on $H \times Y$, $\rho_X$ the induced marginal measure on $H$, and $\rho(\cdot|x)$ the conditional probability measure on $Y$ with respect to $x \in H$ and $\rho$. Let the hypothesis space be $\{f : H \to \mathbb{R} \mid \exists\, \omega \in H \text{ such that } f(x) = \langle\omega, x\rangle_H, \ \rho_X\text{-almost surely}\}$. The goal of least-squares regression is to approximately solve the following expected risk minimization,
$$\inf_{\omega \in H} \mathcal{E}(\omega), \qquad \mathcal{E}(\omega) = \int_{H \times Y} (\langle\omega, x\rangle_H - y)^2 \, d\rho(x, y), \tag{1}$$
where the measure $\rho$ is known only through a sample $\mathbf z = \{z_i = (x_i, y_i)\}_{i=1}^n$ of size $n \in \mathbb{N}$, independently and identically distributed according to $\rho$. Let $L^2_{\rho_X}$ be the Hilbert space of square-integrable functions from $H$ to $\mathbb{R}$ with respect to $\rho_X$, with its norm given by $\|f\|_\rho = \big(\int_H |f(x)|^2 \, d\rho_X(x)\big)^{1/2}$. The function that minimizes the expected risk over all measurable functions is the regression function [6, 24], defined as
$$f_\rho(x) = \int_Y y \, d\rho(y|x), \qquad x \in H, \ \rho_X\text{-almost surely}.$$
Throughout this paper, we assume that there exists a constant $\kappa \in [1, \infty)$ such that
$$\langle x, x'\rangle_H \le \kappa^2, \qquad \forall x, x' \in H, \ \rho_X\text{-almost surely}. \tag{3}$$
The above problem was raised in [21, 13] for non-parametric regression with kernel methods [6, 24], and it is closely related to functional regression. A common and classic approach to this problem is based on spectral-regularization algorithms. It amounts to solving an empirical linear equation in which, to avoid over-fitting and to ensure good performance, a filter function for regularization is involved, see e.g., [1, 10]. Such approaches include ridge regression, principal component analysis, gradient methods, and iterated ridge regression.
A large amount of research has been carried out on spectral-regularization algorithms within the setting of learning with kernel methods, see e.g., [23, 5] for Tikhonov regularization, [29, 27] for gradient methods, and [4, 1] for general spectral-regularization algorithms. Statistical results have been developed in these references, but they are still not satisfactory. For example, most of the previous results are restricted either to the universally consistent case (i.e., the induced hypothesis space is dense in $L^2_{\rho_X}$) [23, 27, 4] or to the attainable case (i.e., the expected risk minimization (1) has at least one solution in $H$) [5, 1]. Also, some of these results require the unnatural assumption that the sample size is large enough, and the derived convergence rates tend to be (capacity-dependently) suboptimal in the non-attainable cases. Finally, it is still unclear whether one can derive capacity-dependently optimal convergence rates for spectral-regularization algorithms under a general source assumption.
In this paper, we study statistical results for spectral-regularization algorithms. Considering a capacity assumption on the space [28, 5] and a general source condition on the target function, we show high-probability, optimal convergence results in terms of variants of norms for spectral-regularized algorithms. As a corollary, we obtain almost sure convergence results with optimal rates. The general source condition is used to characterize the regularity/smoothness of the target function in $L^2_{\rho_X}$, rather than in $H$ as in [5, 1]. The derived convergence rates are optimal in a minimax sense. Our results not only resolve the issues mentioned in the last paragraph, but also generalize previous results to convergence results with different norms and a more general source condition.
2 Learning with Kernel Methods and Notations
In this section, we first introduce supervised learning with kernel methods, which is a special instance of the learning setting considered in this paper. We then introduce some useful notations and auxiliary operators.
Learning with Kernel Methods. Let $X$ be a closed subset of a Euclidean space $\mathbb{R}^d$. Let $\mu$ be an unknown but fixed Borel probability measure on $X \times Y$. Assume that $\{(\xi_i, y_i)\}_{i=1}^n$ are i.i.d. from the distribution $\mu$. A reproducing kernel $K$ is a symmetric function $K : X \times X \to \mathbb{R}$ such that $(K(u_i, u_j))_{i,j=1}^\ell$ is positive semidefinite for any finite set of points $\{u_i\}_{i=1}^\ell$ in $X$. The kernel $K$ defines a reproducing kernel Hilbert space (RKHS) $(H_K, \|\cdot\|_K)$ as the completion of the linear span of the set $\{K_u := K(u, \cdot) : u \in X\}$ with respect to the inner product $\langle K_u, K_{u'}\rangle_K := K(u, u')$. For any $f \in H_K$, the reproducing property holds: $f(u) = \langle f, K_u\rangle_K$. In learning with kernel methods, one considers the following minimization problem
$$\inf_{f \in H_K} \int_{X \times Y} (f(\xi) - y)^2 \, d\mu(\xi, y).$$
Since $f(\xi) = \langle f, K_\xi\rangle_K$ by the reproducing property, the above can be rewritten as
$$\inf_{f \in H_K} \int_{X \times Y} (\langle f, K_\xi\rangle_K - y)^2 \, d\mu(\xi, y).$$
Defining another probability measure $\rho$ on $H_K \times Y$ as the measure induced by $\mu$ under the map $(\xi, y) \mapsto (K_\xi, y)$, the above reduces to (1).
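The reduction above can be made concrete for an explicit finite-dimensional feature map: kernel ridge regression with $K(u, v) = \langle\phi(u), \phi(v)\rangle$ coincides with linear ridge regression in the feature (Hilbert) space. The following sketch is an illustration under assumed names (the quadratic map `phi`, the data, and the regularization level are not from the paper); the equality of the two predictions is the standard push-through identity.

```python
import numpy as np

# Illustrative quadratic feature map on R: u -> (1, u, u^2).
def phi(u):
    return np.stack([np.ones_like(u), u, u ** 2], axis=-1)

rng = np.random.default_rng(0)
xi = rng.standard_normal(30)                    # inputs xi_i
y = xi ** 2 + 0.1 * rng.standard_normal(30)     # noisy outputs
lam, n = 0.1, len(xi)

Phi = phi(xi)                                   # n x 3 feature matrix
K = Phi @ Phi.T                                 # kernel Gram matrix K(xi_i, xi_j)

# Kernel ridge regression prediction at a new point u0.
u0 = np.array([0.5])
k0 = Phi @ phi(u0).T                            # kernel vector (K(xi_i, u0))_i
f_krr = (k0.T @ np.linalg.solve(K + n * lam * np.eye(n), y))[0]

# Linear ridge regression in the feature space, i.e., on x_i = phi(xi_i).
w = np.linalg.solve(Phi.T @ Phi / n + lam * np.eye(3), Phi.T @ y / n)
f_ridge = phi(u0) @ w
```

The two predictions agree, which is exactly the statement that kernel learning is linear least-squares regression over the Hilbert space in which $x = K_\xi$ lives.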
Notations and Auxiliary Operators. We next introduce some notations and auxiliary operators which will be useful in what follows. For a given bounded linear operator $A$ between two Hilbert spaces, $\|A\|$ denotes the operator norm of $A$, i.e., $\|A\| = \sup_{\|f\| = 1} \|Af\|$.
Let $S_\rho : H \to L^2_{\rho_X}$ be the linear map $\omega \mapsto \langle\omega, \cdot\rangle_H$, which is bounded by $\kappa$ under Assumption (3). Furthermore, we consider the adjoint operator $S_\rho^* : L^2_{\rho_X} \to H$, the covariance operator $T : H \to H$ given by $T = S_\rho^* S_\rho$, and the operator $\mathcal{L} : L^2_{\rho_X} \to L^2_{\rho_X}$ given by $\mathcal{L} = S_\rho S_\rho^*$. It can be easily proved that $S_\rho^* g = \int_H x\, g(x)\, d\rho_X(x)$ and $T = \int_H \langle\cdot, x\rangle_H\, x\, d\rho_X(x)$. Under Assumption (3), the operators $T$ and $\mathcal{L}$ can be proved to be positive trace class operators (and hence compact):
$$\|T\| = \|\mathcal{L}\| \le \operatorname{tr}(T) = \int_H \|x\|_H^2 \, d\rho_X(x) \le \kappa^2.$$
For any $\omega \in H$, it is easy to prove the following isometry property:
$$\|S_\rho \omega\|_\rho = \|\sqrt{T}\,\omega\|_H. \tag{6}$$
Moreover, according to the spectral theorem, $T = \sum_i t_i \langle\cdot, e_i\rangle_H\, e_i$, where $\{e_i\}_i$ is an orthonormal set of eigenvectors of $T$ with corresponding positive eigenvalues $\{t_i\}_i$.
We define the sampling operator $S_{\mathbf x} : H \to \mathbb{R}^n$ by $(S_{\mathbf x}\omega)_i = \langle\omega, x_i\rangle_H$, $i = 1, \dots, n$, where the norm in $\mathbb{R}^n$ is the Euclidean norm times $1/\sqrt{n}$. Its adjoint operator $S_{\mathbf x}^* : \mathbb{R}^n \to H$, defined by $\langle S_{\mathbf x}^* \mathbf y, \omega\rangle_H = \langle \mathbf y, S_{\mathbf x}\omega\rangle_n$ for $\mathbf y \in \mathbb{R}^n$, is thus given by $S_{\mathbf x}^* \mathbf y = \frac{1}{n}\sum_{i=1}^n y_i x_i$. Moreover, we can define the empirical covariance operator $T_{\mathbf x} : H \to H$ such that $T_{\mathbf x} = S_{\mathbf x}^* S_{\mathbf x}$. Obviously,
$$T_{\mathbf x} = \frac{1}{n}\sum_{i=1}^n \langle\cdot, x_i\rangle_H\, x_i.$$
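For $H = \mathbb{R}^d$ these operators are plain matrix operations, and the adjoint identity can be checked numerically. The sketch below uses illustrative random data; `S`, `S_adj`, and `T_x` are assumed names for $S_{\mathbf x}$, $S_{\mathbf x}^*$, and $T_{\mathbf x}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 4
X = rng.standard_normal((n, d))        # rows are the samples x_i

S = lambda w: X @ w                    # sampling operator S_x: (S_x w)_i = <w, x_i>
S_adj = lambda y: X.T @ y / n          # adjoint S_x*: y -> (1/n) sum_i y_i x_i
T_x = X.T @ X / n                      # empirical covariance T_x = S_x* S_x

# Adjoint identity <S_x* y, w>_H = <y, S_x w>_n, with <u, v>_n = (u . v) / n.
w = rng.standard_normal(d)
y = rng.standard_normal(n)
lhs = S_adj(y) @ w
rhs = (y @ S(w)) / n
```

The $1/n$ factor in `S_adj` is forced by the $1/\sqrt{n}$-scaled Euclidean norm on $\mathbb{R}^n$.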
Then it is easy to see that (1) is equivalent to $\inf_{\omega \in H} \|S_\rho\omega - f_\rho\|_\rho^2$. Using the projection theorem, one can prove that a solution $f_H$ for the above problem is the projection of the regression function $f_\rho$ onto the closure of the range of $S_\rho$ in $L^2_{\rho_X}$, and moreover, for all $\omega \in H$ (see e.g., ),
$$\mathcal{E}(\omega) - \inf_{\omega' \in H} \mathcal{E}(\omega') = \|S_\rho\omega - f_H\|_\rho^2. \tag{9}$$
3 Spectral-regularization Algorithms
In this section, we introduce spectral-regularization algorithms.
The search for an approximate solution in $H$ to Problem (1) is equivalent to the search for an approximate solution in $H$ to the linear equation
$$T\omega = S_\rho^* f_\rho. \tag{10}$$
As the expected risk cannot be computed exactly and can only be approximated through the empirical risk $\mathcal{E}_{\mathbf z}$, defined as
$$\mathcal{E}_{\mathbf z}(\omega) = \frac{1}{n}\sum_{i=1}^n (\langle\omega, x_i\rangle_H - y_i)^2,$$
a first idea to deal with the problem is to replace (10) with its empirical counterpart, which leads to an estimator satisfying the empirical linear equation
$$T_{\mathbf x}\omega = S_{\mathbf x}^* \mathbf y.$$
However, solving the empirical linear equation directly may lead to a solution that fits the sample points very well but has a large expected risk. This is known as the overfitting phenomenon in statistical learning theory. Moreover, the inverse of the empirical covariance operator $T_{\mathbf x}$ does not exist in general. To tackle this issue, a common approach in statistical learning theory and inverse problems is to replace the inverse with an alternative, regularized approximation, which leads to spectral-regularization algorithms [8, 4, 1].
A spectral-regularization algorithm is generated by a specific choice of filter function. Recall that the definition of filter functions is given as follows.
Definition 3.1 (Filter functions).
Let $\Lambda$ be a subset of $(0, \infty)$. A class of functions $\{g_\lambda : [0, \kappa^2] \to [0, \infty) \mid \lambda \in \Lambda\}$ is said to be filter functions with qualification $\tau$ ($\tau \ge 1$) if there exist positive constants $E$ and $F_\tau < \infty$ such that
$$\sup_{\lambda \in \Lambda} \sup_{u \in (0, \kappa^2]} |g_\lambda(u)|\, u \le E,$$
and
$$\sup_{\alpha \in [0, \tau]} \sup_{\lambda \in \Lambda} \sup_{u \in (0, \kappa^2]} |1 - g_\lambda(u)\, u|\, \frac{u^\alpha}{\lambda^\alpha} \le F_\tau.$$
Given a filter function , the spectral-regularization algorithm is defined as follows.
Let $g_\lambda$ be a filter function indexed by $\lambda \in \Lambda$. The spectral-regularization algorithm over the samples $\mathbf z$ is given by
$$\omega_\lambda^{\mathbf z} = g_\lambda(T_{\mathbf x})\, S_{\mathbf x}^* \mathbf y.$$
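For $H = \mathbb{R}^d$, applying $g_\lambda(T_{\mathbf x})$ amounts to applying $g_\lambda$ to the eigenvalues of the empirical covariance. The following sketch implements this under assumed names (`spectral_estimator`, `filter_fn` are illustrative, not from the paper):

```python
import numpy as np

def spectral_estimator(X, y, filter_fn, lam):
    """Compute g_lam(T_x) S_x* y, where T_x = X^T X / n and S_x* y = X^T y / n.

    filter_fn(u, lam) evaluates the filter g_lam on an array of eigenvalues u.
    """
    n = X.shape[0]
    T_x = X.T @ X / n
    b = X.T @ y / n
    evals, evecs = np.linalg.eigh(T_x)              # spectral decomposition of T_x
    g = filter_fn(np.clip(evals, 0.0, None), lam)   # apply the filter spectrally
    return evecs @ (g * (evecs.T @ b))
```

With the filter $g_\lambda(u) = (u + \lambda)^{-1}$, this recovers ridge regression, i.e., the solution of $(T_{\mathbf x} + \lambda I)\,\omega = S_{\mathbf x}^* \mathbf y$.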
Different filter functions correspond to different regularization algorithms. The following examples provide several specific choices of filter functions, which lead to different types of regularization methods, see e.g., [10, 1, 23].
Example 3.1 (Spectral cut-off).
Consider the spectral cut-off, or truncated singular value decomposition (TSVD), defined by
$$g_\lambda(u) = \begin{cases} u^{-1}, & \text{if } u \ge \lambda, \\ 0, & \text{if } u < \lambda. \end{cases}$$
Then the qualification $\tau$ could be any positive number, and $E = F_\tau = 1$.
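The residual of this filter, $1 - g_\lambda(u)u$, is simply the indicator of $\{u < \lambda\}$, which is why the qualification is arbitrary. A small numeric sketch (grid and tolerances are illustrative) checks the two filter-function bounds:

```python
import numpy as np

def tsvd_filter(u, lam):
    # Spectral cut-off: g_lam(u) = 1/u for u >= lam, and 0 otherwise.
    g = np.zeros_like(u)
    keep = u >= lam
    g[keep] = 1.0 / u[keep]
    return g

u = np.linspace(1e-6, 1.0, 10_000)
lam = 0.01
g = tsvd_filter(u, lam)
residual = 1.0 - g * u          # equals 1 on {u < lam}, 0 on {u >= lam}
```

Since $|1 - g_\lambda(u)u|\, u^\alpha = u^\alpha \mathbf{1}\{u < \lambda\} \le \lambda^\alpha$ for every $\alpha \ge 0$, the qualification can be taken arbitrarily large with constant $1$.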
Example 3.2 (Gradient methods).
The choice $g_\lambda(u) = \eta \sum_{k=0}^{t-1} (1 - \eta u)^k$ with $\eta \in (0, \kappa^{-2}]$, where we identify $\lambda = (\eta t)^{-1}$, corresponds to gradient methods, or the Landweber iteration algorithm. The qualification $\tau$ could be any positive number, and $E = 1$.
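That gradient descent on the empirical risk is a spectral algorithm with this filter can be verified directly: $t$ steps of gradient descent started from zero produce exactly $g(T_{\mathbf x}) S_{\mathbf x}^* \mathbf y$ with $g(u) = \eta\sum_{k<t}(1 - \eta u)^k$. The data, step size, and iteration count below are illustrative.

```python
import numpy as np

def landweber_filter(u, t, eta):
    # g(u) = eta * sum_{k=0}^{t-1} (1 - eta u)^k
    g = np.zeros_like(u)
    for k in range(t):
        g += eta * (1.0 - eta * u) ** k
    return g

def gradient_descent(X, y, t, eta):
    # Iterates w <- w - eta (T_x w - S_x* y), starting from w = 0.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(t):
        w -= eta * (X.T @ (X @ w) - X.T @ y) / n
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((60, 4))
y = rng.standard_normal(60)
t, eta = 25, 0.1

w_gd = gradient_descent(X, y, t, eta)
T_x, b = X.T @ X / 60, X.T @ y / 60
evals, evecs = np.linalg.eigh(T_x)
w_filter = evecs @ (landweber_filter(evals, t, eta) * (evecs.T @ b))
```

Here the number of iterations $t$ plays the role of the (inverse) regularization parameter, consistent with the identification $\lambda = (\eta t)^{-1}$.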
Example 3.3 ((Iterated) ridge regression).
Let $l \in \mathbb{N}$. Consider the function
$$g_\lambda(u) = \sum_{i=1}^{l} \lambda^{i-1} (\lambda + u)^{-i}.$$
It is easy to show that $1 - g_\lambda(u)u = \lambda^l (\lambda + u)^{-l}$, that the qualification is $\tau = l$, and that $E = l$ and $F_\tau = 1$. In the case that $l = 1$, the algorithm is ridge regression.
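Iterated ridge regression of order $l$ can also be computed by the recursion $\omega^{(i)} = (T_{\mathbf x} + \lambda I)^{-1}(S_{\mathbf x}^* \mathbf y + \lambda\, \omega^{(i-1)})$, $\omega^{(0)} = 0$, whose unrolling gives exactly the filter above. The sketch below checks this equivalence on illustrative data:

```python
import numpy as np

def iterated_ridge(T_x, b, lam, l):
    # omega^(i) = (T_x + lam I)^{-1} (b + lam * omega^(i-1)), omega^(0) = 0.
    d = T_x.shape[0]
    w = np.zeros(d)
    A = T_x + lam * np.eye(d)
    for _ in range(l):
        w = np.linalg.solve(A, b + lam * w)
    return w

def iterated_ridge_filter(u, lam, l):
    # g_lam(u) = sum_{i=1}^{l} lam^{i-1} (lam + u)^{-i}
    return sum(lam ** (i - 1) / (lam + u) ** i for i in range(1, l + 1))

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)
T_x, b = X.T @ X / 50, X.T @ y / 50
lam, l = 0.05, 3

w_iter = iterated_ridge(T_x, b, lam, l)
evals, evecs = np.linalg.eigh(T_x)
w_filter = evecs @ (iterated_ridge_filter(evals, lam, l) * (evecs.T @ b))
```

Unrolling the recursion gives $\omega^{(l)} = \sum_{i=1}^{l} \lambda^{i-1} (T_{\mathbf x} + \lambda I)^{-i} b$, which is the claimed filter applied to $T_{\mathbf x}$; taking $l = 1$ recovers plain ridge regression.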
The performance of spectral-regularization algorithms can be measured in terms of the excess risk, which is exactly $\|S_\rho \omega_\lambda^{\mathbf z} - f_H\|_\rho^2$ according to (9). Assuming that $f_H$ lies in the range of $S_\rho$, which implies that there exists some $\omega \in H$ such that $S_\rho\omega = f_H$ (in this case, the solution of $S_\rho\omega = f_H$ with minimal $H$-norm is denoted by $\omega^\dagger$), it can also be measured in terms of the $H$-norm $\|\omega_\lambda^{\mathbf z} - \omega^\dagger\|_H$; the excess risk can be rewritten as $\|\sqrt{T}(\omega_\lambda^{\mathbf z} - \omega^\dagger)\|_H^2$ according to (6). In what follows, we will measure the performance of spectral-regularization algorithms in terms of a broader class of norms, $\|\mathcal{L}^{-a}(S_\rho \omega_\lambda^{\mathbf z} - f_H)\|_\rho$, where $a$ is such that $\mathcal{L}^{-a}(S_\rho \omega_\lambda^{\mathbf z} - f_H)$ is well defined. Throughout this paper, we assume that $a \in [0, 1/2]$.
4 Convergence Results
In this section, we first introduce some basic assumptions and then present convergence results for spectral-regularization algorithms.
The first assumption is a moment condition on the output value $y$.
There exist positive constants $Q$ and $M$ such that for all $l \in \mathbb{N}$ with $l \ge 2$,
$$\int_Y |y|^l \, d\rho(y|x) \le \frac{1}{2}\, l!\, M^{l-2} Q^2, \qquad \rho_X\text{-almost surely}.$$
The above assumption is very standard in statistical learning theory. It is satisfied if $y$ is bounded almost surely, or if $y = \langle\omega_*, x\rangle_H + \epsilon$ for some $\omega_* \in H$, where $\epsilon$ is a Gaussian random variable with zero mean that is independent of $x$. Obviously, Assumption 1 implies that the regression function $f_\rho$ is bounded almost surely, as
$$|f_\rho(x)| \le \int_Y |y| \, d\rho(y|x) \le \left(\int_Y |y|^2 \, d\rho(y|x)\right)^{1/2} \le Q.$$
The next assumption relates to the regularity/smoothness of the target function $f_H$. As $f_H$ lies in the closure of the range of $S_\rho$ in $L^2_{\rho_X}$, it is natural to assume a general source condition on $f_H$, as follows.
$f_H$ satisfies
$$\int_H (f_H(x) - f_\rho(x))^2 \, x \otimes x \, d\rho_X(x) \preceq B^2 T, \tag{17}$$
and the following source condition
$$f_H = \phi(\mathcal{L})\, g_0, \qquad \text{with } \|g_0\|_\rho \le R. \tag{18}$$
Here, $B, R \ge 0$, and $\phi : [0, \kappa^2] \to [0, \infty)$ is a non-decreasing index function such that $\phi(0) = 0$ and $\phi(\kappa^2) < \infty$. Moreover, $u^\beta/\phi(u)$ is non-decreasing for some $\beta > 0$, and the qualification of $g_\lambda$ covers the index function $\phi$.
Recall that the notion of the qualification of $g_\lambda$ covering the index function $\phi$ is defined as follows.
We say that the qualification $\tau$ covers the index function $\phi$ if there exists a $c > 0$ such that for all $0 < \lambda \le \kappa^2$,
$$\frac{c\,\lambda^\tau}{\phi(\lambda)} \le \inf_{\lambda \le u \le \kappa^2} \frac{u^\tau}{\phi(u)}.$$
Condition (17) is trivially satisfied if $y$ is bounded almost surely. Moreover, when making a consistency assumption, i.e., $f_H = f_\rho$, as in [23, 4, 5, 25] for kernel-based non-parametric regression, it is satisfied with $B = 0$. Condition (18) is a more general source condition that characterizes the “regularity/smoothness” of the target function $f_H$. It is trivially satisfied with $\phi(u) = 1$, since $f_H \in L^2_{\rho_X}$. In non-parametric regression with kernel methods, one typically considers a Hölder condition, corresponding to $\phi(u) = u^\zeta$ [23, 5, 4]. [1, 17, 20] consider a general source condition, but only with an index function $\phi$ that can be decomposed as $\phi = \vartheta\psi$, where $\psi$ is operator monotone with $\psi(0) = 0$ and $\psi(\kappa^2) < \infty$, and $\vartheta$ is Lipschitz continuous with $\vartheta(0) = 0$. In the latter case, the problem has a solution in $H$, as in [24, 21].
In this paper, we will consider a source assumption with respect to a more general index function, $\phi = \vartheta\psi$, where $\psi$ is operator monotone with $\psi(0) = 0$ and $\psi(\kappa^2) < \infty$, and $\vartheta$ is Lipschitz continuous. Without loss of generality, we assume that the Lipschitz constant of $\vartheta$ is $1$, as one can always rescale both sides of the source condition (18). Recall that a function $\psi$ is called operator monotone on $[0, b]$ if for any pair of self-adjoint operators $U, V$ with spectra in $[0, b]$ such that $U \preceq V$, one has $\psi(U) \preceq \psi(V)$.
Finally, the last assumption relates to the capacity of the hypothesis space (induced by $H$ and $\rho_X$).
For some $\gamma \in [0, 1]$ and $c_\gamma > 0$, $T$ satisfies
$$\mathcal{N}(\lambda) := \operatorname{tr}\big(T (T + \lambda I)^{-1}\big) \le c_\gamma \lambda^{-\gamma}, \qquad \text{for all } \lambda > 0. \tag{21}$$
The left-hand side of (21) is called the effective dimension, or the degrees of freedom. It can be related to covering/entropy number conditions, see for further details. Assumption 3 is always true for $\gamma = 1$ and $c_\gamma = \kappa^2$, since $T$ is a trace class operator, which implies that its eigenvalues $\{t_i\}_i$ satisfy $\sum_i t_i = \operatorname{tr}(T) \le \kappa^2$, and hence $\mathcal{N}(\lambda) \le \lambda^{-1}\operatorname{tr}(T)$. This is referred to as the capacity-independent setting. Assumption 3 with $\gamma < 1$ allows one to derive better rates. It is satisfied, e.g., if the eigenvalues of $T$ satisfy a polynomially decaying condition $t_i \sim i^{-1/\gamma}$, or with $\gamma = 0$ if $T$ is of finite rank.
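As a numeric illustration (not from the paper), the sketch below checks that polynomially decaying eigenvalues $t_i = i^{-1/\gamma}$ give an effective dimension of order $\lambda^{-\gamma}$; the truncation at $2 \times 10^5$ terms and the constants $0.5$ and $2$ are illustrative, since $\sum_i 1/(1 + \lambda i^2) \approx \frac{\pi}{2}\lambda^{-1/2}$ for $\gamma = 1/2$.

```python
import numpy as np

def effective_dimension(eigenvalues, lam):
    # N(lam) = sum_i t_i / (t_i + lam) = tr(T (T + lam I)^{-1}) for diagonal T.
    return np.sum(eigenvalues / (eigenvalues + lam))

gamma = 0.5
i = np.arange(1, 200_000, dtype=float)
t = i ** (-1.0 / gamma)                 # polynomial decay t_i = i^{-2}

Ns = {lam: effective_dimension(t, lam) for lam in (1e-1, 1e-2, 1e-3)}
```

Since $\sum_i t_i < \infty$, the crude bound $\mathcal{N}(\lambda) \le \lambda^{-1}\sum_i t_i$ always holds, but the decay of the eigenvalues yields the much smaller order $\lambda^{-\gamma}$.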
4.2 Main Results
Now we are ready to state our main results as follows.
2) If $\phi = \vartheta\psi$, where $\psi$ is operator monotone with $\psi(0) = 0$ and $\psi(\kappa^2) < \infty$, and $\vartheta$ is Lipschitz continuous with constant $1$ and $\vartheta(0) = 0$, and if furthermore the qualification of $g_\lambda$ covers $\vartheta$, then
Here, the constants are positive, depend only on the parameters appearing in the assumptions, and are independent of $n$, $\lambda$, and $\delta$.
The above theorem provides convergence results with respect to variants of norms in high probability for spectral-regularization algorithms. Balancing the different terms in the upper bounds, one obtains the following results with an optimal, data-dependent choice of regularization parameters. Throughout the rest of this paper, $C$ denotes a positive constant that depends only on the parameters in the assumptions, and it may differ at each appearance.
The error bounds in the above corollary are optimal, as they match the minimax rates from (considering only the corresponding special cases). The assumption that the qualification of $g_\lambda$ covers $\vartheta$ in Part 2) of Corollary 4.3 is also implicitly required in [1, 17, 20], and it is always satisfied for principal component analysis and gradient methods. The condition will be satisfied in most cases when the index function has a Lipschitz continuous part, and moreover, it is trivially satisfied in certain cases, as will be seen from the proof.
As a direct corollary of Theorem 4.2, we have the following results considering Hölder source conditions.
The error bounds in (26) are optimal, as the convergence rates match the minimax rates shown in [5, 3]. The above result asserts that spectral-regularization algorithms with an appropriately chosen regularization parameter converge optimally.
Corollary 4.4 provides convergence results in high probability for the studied algorithms. It implies convergence in expectation and almost sure convergence, as shown in what follows. Moreover, under appropriate conditions, it can be translated into convergence results with respect to related norms.
Under the assumptions of Corollary 4.4, the following holds.
1) For any we have
2) For any ,
3) If then for some almost surely, and with probability at least
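The passage from high-probability bounds to almost sure convergence in Part 3) is the standard Borel–Cantelli argument; the following sketch assumes, consistently with the form of the bounds above, that the error bound scales polylogarithmically in $1/\delta$, with placeholder exponent $\theta > 0$ and constant $C$:

```latex
\text{Assume that for each } n, \ \Pr\!\big[\mathrm{err}_n > C\, n^{-\theta} \log^2(1/\delta)\big] \le \delta
\quad \text{for all } \delta \in (0, 1).
\text{Choosing } \delta_n = n^{-2} \text{ gives }
\sum_{n \ge 1} \Pr\!\big[\mathrm{err}_n > C\, n^{-\theta} \log^2(n^2)\big]
\le \sum_{n \ge 1} n^{-2} < \infty,
\text{so, by the Borel--Cantelli lemma, almost surely only finitely many of these
events occur, and hence } \mathrm{err}_n = O\big(n^{-\theta} \log^2 n\big)
\text{ almost surely.}
```

Any summable choice of $\delta_n$ works; $\delta_n = n^{-2}$ only costs an extra logarithmic factor in the rate.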
The proofs of all the results stated in this subsection are postponed to the next section.
There is a large amount of research on theoretical results for non-parametric regression with kernel methods in the literature, see e.g., [26, 22, 14, 7, 17, 12] and references therein. As noted in Section 2, our results apply to non-parametric regression with kernel methods. In what follows, we will translate some of the results for kernel-based regression into results for regression over a general Hilbert space and compare our results with these results.
We first compare Corollary 4.4 with some results in the literature for spectral-regularization algorithms with Hölder source conditions. Making a source assumption of the form
, and with ,  shows that with probability at least
Condition (29) implies that as and . Thus almost surely.
Note also that  provides the same optimal error bounds as the above, but restricts to the cases and . In comparison, Corollary 4.4 is more general: it provides convergence results with different norms and does not require the universal consistency assumption. The derived error bound in (26) is more meaningful, as it holds with high probability; however, it has an extra logarithmic factor in the upper bound for the case , which is worse than that from . [1, 3] study statistical results for spectral-regularization algorithms under a Hölder source condition, with . Particularly,  shows that if
then with probability at least , with and ,
In comparison, Corollary 4.4 provides optimal convergence rates even in the case that , and it does not require the extra condition (30). Note that we do not pursue an error bound that depends both on and the noise level, as those in [3, 7], but it should be easy to modify our proof to derive such error bounds (at least in the case that ). The only results so far for the non-attainable cases with a general Hölder condition with respect to (rather than ) are from , where convergence rates of order are derived, but only for gradient methods, assuming (30).
We next compare Theorem 4.2 with results from [1, 20] for spectral-regularization algorithms considering general source conditions. Assuming that with (which implies for some ), where is as in Part 2) of Theorem 4.2,  shows that if the qualification of $g_\lambda$ covers and (30) holds, then with probability at least
The error bound is capacity independent, i.e., with . Involving the capacity assumption
As noted in [11, Discussion], these results lead to the following estimates in expectation
In comparison with these results, Theorem 4.2 is more general, considering a general source assumption and covering the general, possibly non-attainable case. Furthermore, it provides convergence results with respect to a broader class of norms, and it does not require condition (30). Finally, it leads to convergence results in expectation with a better rate (without the logarithmic factor) when the index function is , and it yields almost-sure convergence results.
In this section, we prove the results stated in Section 4. We first give some basic lemmas and then prove the main results.
We first introduce the following lemma, which is a generalization of [1, Proposition 7]. For notational simplicity, we denote
Let $\phi$ be a non-decreasing index function such that the qualification of the filter function $g_\lambda$ covers the index function $\phi$ and $u^\beta/\phi(u)$ is non-decreasing for some $\beta > 0$. Then for all
where is from Definition 4.1.
Using the above lemma, we have the following result for the deterministic vector $\omega_\lambda$, defined by
$$\omega_\lambda = g_\lambda(T)\, S_\rho^* f_\rho. \tag{33}$$
Under Assumption 2, we have for all $a \in [0, 1/2]$,
The left-hand side of (34) is often called the true bias.
Following from the definition of $\omega_\lambda$ in (33), we have
Introducing the source condition (18), with the notation above, we get
According to the spectral theorem, with (4), one has
Since both and are non-decreasing and non-negative over , thus is also non-decreasing for any If then
where for the last inequality we used (11) and the fact that $\phi$ is non-decreasing. If , similarly, we have