
# Estimating Learnability in the Sublinear Data Regime†

† This work was supported by NSF CCF-1704417 and ONR N00014-17-1-2562.

Weihao Kong
Stanford University
whkong@stanford.edu
Gregory Valiant
Stanford University
gvaliant@cs.stanford.edu
###### Abstract

We consider the problem of estimating how well a model class is capable of fitting a distribution of labeled data. We show that it is often possible to accurately estimate this “learnability” even when given an amount of data that is too small to reliably learn any accurate model. Our first result applies to the setting where the data is drawn from a $d$-dimensional distribution with isotropic covariance, and the label of each datapoint is an arbitrary noisy function of the datapoint. In this setting, we show that with $O(\sqrt{d})$ samples, one can accurately estimate the fraction of the variance of the label that can be explained via the best linear function of the data. For example, in the setting where the label is a linear function of the datapoint plus independent noise, the variance of the noise can be accurately approximated given $O(\sqrt{d})$ samples, which is information theoretically optimal. We extend these techniques to the setting of binary classification, where we show that in an analogous setting, the prediction error of the best linear classifier can be accurately estimated given $O(\sqrt{d})$ labeled samples. Note that in both the linear regression and binary classification settings, even if there is no noise in the labels, a sample size linear in the dimension, $d$, is required to learn any function correlated with the underlying model. We further extend our estimation approach to the setting where the data distribution has an (unknown) arbitrary covariance matrix, allowing these techniques to be applied to settings where the model class consists of a linear function applied to a nonlinear embedding of the data. Finally, we demonstrate the practical viability of these approaches on synthetic and real data.
This ability to estimate the explanatory value of a set of features (or dataset), even in the regime in which there is too little data to realize that explanatory value, may be relevant to scientific and industrial settings in which data collection is expensive and there are many potentially relevant feature sets that could be collected.

## 1 Introduction

Given too little labeled data to learn a model or classifier, is it possible to determine whether an accurate classifier or predictor exists? For example, consider a setting where you are given $n$ datapoints with real-valued labels drawn from some distribution of interest, $\mathcal{D}$. Suppose you are in the regime in which $n$ is too small to learn an accurate prediction model; might it still be possible to estimate the performance that would likely be obtained if, hypothetically, you were to gather more data, say a much larger dataset, and train a model on that data? We answer this question affirmatively, and show that in the settings of linear regression and binary classification via linear (or logistic) classifiers, it is possible to estimate the likely performance of a (hypothetical) predictor trained on a larger hypothetical dataset, even given an amount of data that is sublinear in the amount that would be required to learn such a predictor.

For concreteness, we begin by describing the flavor of our results in a very basic setting: learning a noisy linear function of high-dimensional data. Suppose we are given access to $n$ independent samples from a $d$-dimensional isotropic Gaussian, and each sample, $x_i$, is labeled according to a noisy linear function $y_i = \langle \beta, x_i \rangle + \epsilon_i$, where $\beta$ is the true model and the noise $\epsilon_i$ is drawn (independently) from a distribution of (unknown) variance $\sigma^2$. One natural goal is to estimate the signal to noise ratio, $\|\beta\|^2/\sigma^2$, namely estimating how much of the variation in the label we could hope to explain. Even in the noiseless setting ($\sigma^2 = 0$), it is information theoretically impossible to learn any function that has even a small constant correlation with the labels unless we are given an amount of data that is linear in the dimension, $d$. Nevertheless, as we show, it is possible to estimate the magnitude of the noise, $\sigma^2$, and the variance of the label, given only $O(\sqrt{d})$ samples.

Our results (summarized in Section 1.2) explore this striking ability to estimate the “learnability” of a distribution over labeled data based on relatively little data. Our most basic result considers the restrictive setting where the data is drawn from an isotropic $d$-dimensional distribution, and the label consists of a linear function plus independent noise. In this setting, we show that $\Theta(\sqrt{d})$ samples are necessary and sufficient to estimate the variance of the noise. We extend this result in several different directions, via natural generalizations of the same techniques. These extensions include 1) the “agnostic” setting, where the label can have an arbitrary joint distribution with the $d$-dimensional datapoint, and the goal is to estimate the fraction of the variance that can be captured by a linear function (sometimes referred to as estimating the explained variance), 2) the setting where the data has an unknown (and non-isotropic) covariance, and the label is a linear function of the datapoint plus independent noise, and 3) a class of natural models for data with binary labels, where the goal is to estimate the prediction error achievable by the best linear/logistic classifier.

Throughout, we focus on linear models and classifiers, and our assumptions on the data generating distribution are very specific for our binary classification results. Because some of our results apply when the covariance matrix of the distribution is non-isotropic (and non-Gaussian), the results extend to the many non-linear models that can be represented as a linear function applied to a non-linear embedding of the data, for example settings where the label is a noisy polynomial function of the features.

Still, our estimation algorithms do not apply to all relevant settings; for example, they do not encompass binary classification settings where the two classes do not occur with equal probabilities. We are optimistic that our techniques may be extended to address that setting, and other practically relevant settings that are not encompassed by the models we consider. We discuss some of these possibilities, and several other shortcomings of this work and potential directions for future work, in Section 1.4.

### 1.1 Motivating Application: Estimating the value of data and dataset selection

In some data-analysis settings, the ultimate goal is to quantify the signal and noise—namely understand how much of the variation in the quantity of interest can be explained via some set of explanatory variables. For example, in some medical settings, the goal is to understand how much disease risk is associated with genomic factors (versus random luck, or environmental factors, etc.). In other settings, the goal is to accurately predict a quantity of interest. The key question then becomes “what data should we collect—what features or variables should we try to measure?” The traditional pipeline is to collect a lot of data, train a model, and then evaluate the value of the data based on the performance (or improvement in performance) of the model.

Our results demonstrate the possibility of evaluating the explanatory utility of additional features, even in the regime in which too few data points have been collected to leverage these data points to learn a model. For example, suppose we wish to build a predictor for whether or not someone will get a certain disease. We could begin by collecting a modest amount of genetic data (e.g. for a few hundred patients, record the presence of genetic abnormalities for each of the ~20k genes), and a modest amount of epigenetic data. Even if we have data for too few patients to learn a good predictor, we can at least evaluate how much the model would improve if we were to collect a lot more genetic data, versus collecting more epigenetic data.

This ability to explore the potential of different features with less data than would be required to exploit those features seems extremely relevant to the many industry and research settings where it is expensive or difficult to gather data.

Alternately, these techniques could be leveraged by data providers in the context of a “verify then buy” model: Suppose I have a large dataset of customer behaviors that I think will be useful for your goal of predicting customer clicks/purchases. Before you purchase access to my dataset, I could give you a tiny sample of the data—too little to be useful to you, but sufficient for you to verify the utility of the dataset.

### 1.2 Summary of Results

Our first result applies to the setting where the data is drawn according to a $d$-dimensional distribution with identity covariance, and the labels are noisy linear functions. Provided we have on the order of $\sqrt{d}$ datapoints, we can accurately determine the magnitude of the noise:

###### Proposition 1.

Suppose we are given $n$ labeled examples, $(x_i, y_i)$, with $x_i$ drawn independently from a $d$-dimensional distribution of mean zero, identity covariance, and fourth moments bounded by a constant $C$. Assume that each label $y_i = \langle \beta, x_i \rangle + \epsilon_i$, where the noise $\epsilon_i$ is drawn independently from an (unknown) distribution with mean $0$ and variance $\sigma^2$, and that the labels have been normalized to have unit variance. There is an estimator $\hat{\sigma}^2$ that, with probability at least $1 - \delta$, approximates $\sigma^2$ with additive error $O(\sqrt{d}/n)$, where the hidden constant depends on $C$ and $\delta$.

The fourth moment condition of the above proposition is formally defined as follows: for all unit vectors $u$, $\mathbb{E}[\langle u, x \rangle^4] \le C$. In the case that the data distribution is an isotropic Gaussian, this fourth moment bound is satisfied with $C = 3$.
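As a quick sanity check, the following sketch (our own illustration, not from the paper) numerically verifies this bound for an isotropic Gaussian:

```python
import numpy as np

# Illustrative check: for x ~ N(0, I_d), the projection <u, x> onto any unit
# vector u is a standard Gaussian, so E[<u, x>^4] equals the Gaussian fourth
# moment, 3. Hence the fourth moment bound holds with C = 3.
rng = np.random.default_rng(0)
d = 20
u = rng.normal(size=d)
u /= np.linalg.norm(u)                 # an arbitrary unit vector

x = rng.normal(size=(200_000, d))      # samples from an isotropic Gaussian
fourth_moment = np.mean((x @ u) ** 4)
print(fourth_moment)                   # concentrates near 3
```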

We stress that in the above setting, it is information theoretically impossible to approximate $\beta$, or accurately predict the $y_i$'s, without a sample size that is linear in the dimension, $d$. The above result is also optimal, to constant factors, in the constant-error regime: no algorithm can distinguish the case that the label is pure noise from the case that the label has a significant signal using $o(\sqrt{d})$ datapoints:

###### Theorem 1.

In the setting of Proposition 1, there is a constant $c$ such that no algorithm can distinguish the case that the label is pure noise (i.e. $\sigma^2 = 1$ and $\beta = 0$) from the case that there is almost no noise (i.e. $\sigma^2 = 0$ and $\beta$ is chosen to be a random vector with $\|\beta\| = 1$) with probability of success greater than $2/3$, using fewer than $c\sqrt{d}$ datapoints.

Our estimation machinery extends beyond the isotropic setting, and we provide an analog of Proposition 1 for the setting where the datapoints, $x_i$, are drawn from a $d$-dimensional distribution with (unknown) non-isotropic covariance. This setting is considerably more challenging than the isotropic setting, since a significant portion of the signal could be accounted for by directions in which the distribution has extremely small variance. Though our results are weaker than in the isotropic setting, we still establish accurate estimation of the unexplained variance in the sublinear regime, though we require a larger (but still sublinear) sample size to obtain an estimate within a desired error.

Our results in the non-isotropic setting apply to the following standard model of non-isotropic distributions: the distribution is specified by an arbitrary real-valued $d \times d$ matrix, $A$, and a univariate random variable $Z$ with mean 0, variance 1, and bounded fourth moment. Each sample is then obtained by computing $x = Az$, where $z$ has entries drawn independently according to $Z$. In this model, the covariance of $x$ will be $\Sigma = AA^\top$. This model is fairly general (by taking $Z$ to be a standard Gaussian, this model can represent any $d$-dimensional Gaussian distribution, and it can also represent any rotated and scaled hypercube, etc.), and is widely considered in the statistics literature (see e.g. [40, 5]). While our theoretical results rely on this modeling assumption, our algorithm is not tailored to this specific model, and likely performs well in more general settings.
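For concreteness, here is a small sketch of sampling from this model (an illustrative instance with a random matrix $A$ and a uniform, i.e. non-Gaussian, choice of $Z$; not code from the paper):

```python
import numpy as np

# Sample x = A z, where z has i.i.d. entries with mean 0, variance 1, and
# bounded fourth moment. Here Z is uniform on [-sqrt(3), sqrt(3)], which has
# variance 1, illustrating that the model is not restricted to Gaussian data.
rng = np.random.default_rng(0)
d = 40
A = rng.normal(size=(d, d))            # an arbitrary real matrix (here random)

def sample(n):
    z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n, d))
    return z @ A.T                     # each row is one sample x = A z

X = sample(100_000)
emp_cov = X.T @ X / len(X)             # empirical covariance of x
max_dev = np.abs(emp_cov - A @ A.T).max()   # population covariance is A A^T
print(max_dev)                         # small relative to the entries of A A^T
```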

###### Theorem 2.

Suppose we are given $n$ labeled examples, $(x_i, y_i)$, with $x_i = A z_i$, where $A$ is an unknown arbitrary $d \times d$ real matrix and each entry of $z_i$ is drawn independently from a one-dimensional distribution with mean zero, variance 1, and constant fourth moment. Assume that each label $y_i = \langle \beta, x_i \rangle + \epsilon_i$, where the noise $\epsilon_i$ is drawn independently from an unknown distribution with mean 0 and variance $\sigma^2$, and that the labels have been normalized to have unit variance. There is an algorithm that takes the $n$ labeled samples, together with parameters $k$ and $\delta$ satisfying a sample-size condition that is sublinear in $d$, and with probability $1 - \delta$ outputs an estimate of $\sigma^2$ whose additive error decays both in $n$ and in $k$, with the dependence on $k$ entering through a function $f(k)$ that depends only on $k$.

Setting the parameter $k$ appropriately, and considering the case when $\delta$ and the fourth moment bound are constants, yields the following corollary:

###### Corollary 1.

In the setting of Theorem 2, with constant failure probability $\delta$ and constant fourth moment bound, the noise $\sigma^2$ can be approximated to error $\epsilon$ with a number of samples sublinear in $d$, where the constant in the big-Oh notation hides a dependence on $\epsilon$.

We note that our proof of Theorem 2 actually establishes a slightly tighter bound, in which the signal-dependent expression in the error of Theorem 2 can be replaced by the quantity $\beta^\top \Sigma_\tau \beta$, where $\Sigma_\tau$ is the covariance matrix, but with all singular values greater than a threshold $\tau$ truncated to be $\tau$, where $\tau$ is determined by the parameters in the statement of Theorem 2.

Finally, we establish the following lower bound, demonstrating that without any assumptions on the covariance $\Sigma$ or on $\|\beta\|$, no sublinear sample estimation is possible.

###### Theorem 3.

Without any assumptions on the covariance of the data distribution, or a bound on $\|\beta\|$, it is impossible to distinguish the case that the labels are linear functions of the data (zero noise) from the case that the labels are pure noise with probability better than $2/3$ using fewer than $cd$ samples, for some constant $c$.

#### 1.2.1 Estimating Unexplained Variance in the Agnostic Setting

Our algorithms and techniques do not rely on the assumption that the labels consist of a linear function plus independent noise, and our results partially extend to the agnostic setting. Formally, assuming that the label, $y$, can have any joint distribution with $x$, we show that our algorithms will accurately estimate the fraction of the variance in $y$ that can be explained via (the best) linear function of $x$, namely the quantity $1 - \min_{\beta} \mathbb{E}[(y - \langle \beta, x \rangle)^2]/\mathrm{Var}[y]$.

The analog of Proposition 1 in the agnostic setting is the following:

###### Theorem 4.

Suppose we are given $n$ labeled examples, $(x_i, y_i)$, drawn independently from a joint distribution in which $x$ is $d$-dimensional with mean zero and identity covariance, $y$ has mean zero and variance 1, and the fourth moments of the joint distribution are bounded by a constant $C$. There is an estimator that, with probability at least $1 - \delta$, approximates the unexplained variance $\min_{\beta} \mathbb{E}[(y - \langle \beta, x \rangle)^2]$ with additive error $O(\sqrt{d}/n)$, where the hidden constant depends on $C$ and $\delta$.

The fourth moment condition of the above theorem is analogous to that of Proposition 1: namely, the fourth moments of the joint distribution are bounded by a constant $C$ if, for all unit vectors $u$, $\mathbb{E}[\langle u, x \rangle^4] \le C$ and $\mathbb{E}[y^4] \le C$. As in Proposition 1, in the case that the data distribution is an isotropic Gaussian, and the label is a linear function of the data plus independent Gaussian noise, this fourth moment bound is satisfied with $C = 3$.

In the setting where the distribution of $x$ is non-isotropic, the algorithm to which Theorem 2 applies still extends to this agnostic setting. While the estimate of the unexplained variance is still accurate in expectation, some additional assumptions on the (joint) distribution of $(x, y)$ would be required to bound the variance of the estimator in the agnostic and non-isotropic setting. Such conditions are likely to be satisfied in many practical settings, though a fully general agnostic and non-isotropic analog of Theorem 2 likely does not hold.

#### 1.2.2 The Binary Classification Setting

Our approaches and techniques for the linear regression setting can also be applied to the important setting of binary classification—namely, estimating the performance of the best linear classifier in the regime in which there is insufficient data to learn any accurate classifier. As an initial step along these lines, we obtain strong results in a restricted model of Gaussian data with labels corresponding to the latent variable interpretation of logistic regression. Specifically, we consider labeled data pairs $(x, y)$ where $x$ is drawn from a Gaussian distribution with arbitrary unknown covariance, and $y$ is a label that takes value $1$ with probability $\sigma(\langle \beta, x \rangle)$ and $-1$ with probability $1 - \sigma(\langle \beta, x \rangle)$, where $\sigma(t) = 1/(1+e^{-t})$ is the sigmoid function, and $\beta$ is the unknown model parameter.

###### Theorem 5.

Suppose we are given $n$ labeled examples, $(x_i, y_i)$, with $x_i$ drawn independently from a Gaussian distribution with mean $0$ and covariance $AA^\top$, where $A$ is an unknown arbitrary $d \times d$ real matrix. Assume that each label $y_i$ takes value $1$ with probability $\sigma(\langle \beta, x_i \rangle)$ and $-1$ with probability $1 - \sigma(\langle \beta, x_i \rangle)$, where $\sigma$ is the sigmoid function. There is an algorithm that takes the $n$ labeled samples, together with parameters $k$ and $\delta$ satisfying a sample-size condition that is sublinear in $d$, and with probability $1 - \delta$ outputs an estimate of $\mathrm{err}$ whose additive error decays both in $n$ and in $k$, where $\mathrm{err}$ is the classification error of the best linear classifier.

In the setting where the distribution of $x$ is an isotropic Gaussian, we obtain the simpler result that the classification error of the best linear classifier can be accurately estimated with $O(\sqrt{d})$ samples. This is information theoretically optimal, as we show in Section E.

###### Corollary 2.

Suppose we are given $n$ labeled examples, $(x_i, y_i)$, with $x_i$ drawn independently from a $d$-dimensional isotropic Gaussian distribution $\mathcal{N}(0, I_d)$. Assume that each label $y_i$ takes value $1$ with probability $\sigma(\langle \beta, x_i \rangle)$ and $-1$ with probability $1 - \sigma(\langle \beta, x_i \rangle)$, where $\sigma$ is the sigmoid function. There is an algorithm that takes the $n$ labeled samples and, with probability $1 - \delta$, outputs an estimate of $\mathrm{err}$ with additive error $O(\sqrt{d}/n)$, where $\mathrm{err}$ is the classification error of the best linear classifier and the constant in the big-Oh notation is absolute.

Despite the strong assumptions on the data-generating distribution in the above theorem and corollary, the algorithm to which they apply seems to perform quite well on real-world data, and is capable of accurately estimating the classification error of the best linear predictor, even in the data regime where it is impossible to learn any good predictor. One partial explanation is that our approach can be easily adapted to a wide class of “link functions,” beyond just the sigmoid function addressed by the above results. Additionally, for many smooth, monotonic functions, the resulting algorithm is almost identical to the algorithm corresponding to the sigmoid link function.

###### Theorem 6.

Without any assumptions on the covariance of the data distribution, or a bound on $\|\beta\|$, it is impossible to distinguish the case that the label is pure noise (meaning the label of each data point is drawn uniformly from $\{-1, +1\}$, independently of the data) from the case of no noise, where there is an underlying hyperplane, represented as a vector $\beta$, such that the label is $\mathrm{sign}(\langle \beta, x \rangle)$, with probability better than $2/3$ using fewer than $cd$ samples, for some constant $c$.

### 1.3 Related Work

There is a huge body of work, spanning information theory, statistics, and computer science, devoted to understanding what can be accurately inferred about a distribution, given access to relatively few samples—too few samples to learn the distribution in question. This area was launched with the early work of R.A. Fisher [24] and Alan Turing and I.J. Good [26] to estimate properties of the unobserved portion of a distribution (e.g. estimating the “missing mass”, namely the probability that a new sample will be a previously unobserved domain element). More recently, there has been a surge of results establishing that many distribution properties, including support size, entropy, distances between distributions, and general classes of functionals of distributions, can be estimated in the sublinear sample regime in which most of the support of the distribution is unobserved (see e.g. [6, 7, 3, 36, 37, 1, 2, 15, 39, 28]). The majority of work in this vein has focused on properties of distributions that are supported on some discrete (and unstructured) alphabet, or structured (e.g. unimodal) distributions over ordered domains (e.g. [13, 14, 8, 9, 17]).

There is also a line of relevant work, mainly from the statistics community, investigating properties of high-dimensional distributions (over $\mathbb{R}^d$). One of the fundamental questions in this domain is to estimate properties of the spectrum (i.e. singular values) of the covariance matrix of a distribution, in the regime in which the covariance cannot be accurately estimated [22, 4, 30, 21, 31, 32]. This line of work includes the very recent work [29] demonstrating that the full spectrum can be estimated given a sample size that is sublinear in the dimensionality of the data—given too little data to accurately estimate any principal components, you can accurately estimate how many directions have large variance, small variance, etc. We leverage some techniques from this work in our analysis of our estimator for the non-isotropic setting.

For the specific question of estimating the signal to noise ratio (equivalently, the “unexplained variance”), there are many classic and more recent estimators that perform well in the linear and super-linear data regimes. These estimators apply to the most restrictive setting we consider, where each label $y_i$ is given as a linear function of $x_i$ plus independent noise of variance $\sigma^2$. Two common estimators for $\sigma^2$ involve first computing the parameter vector $\hat{\beta}$ that minimizes the squared error on the $n$ datapoints. These estimators are 1) the “naive” or “maximum likelihood” estimator: $\hat{\sigma}^2_{\mathrm{naive}} = \frac{1}{n}\|y - X\hat{\beta}\|^2$, and 2) the “unbiased” estimator $\hat{\sigma}^2_{\mathrm{unb}} = \frac{1}{n-d}\|y - X\hat{\beta}\|^2$, where $y$ refers to the vector of labels, and $X$ is the $n \times d$ matrix whose rows represent the datapoints. Verifying that the latter estimator is unbiased is a straightforward exercise. Of course, both of these estimators are zero (or undefined) in the regime where $n \le d$, as the in-sample prediction error is identically zero in this regime. Additionally, the variance of the unbiased estimator increases as $n$ approaches $d$, as is evident in our empirical experiments where we compare our estimators with this unbiased estimator.
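To make the two classical estimators concrete, here is a minimal numerical sketch (our own illustration, with hypothetical parameter choices) in the $n > d$ regime:

```python
import numpy as np

# Compare the naive (MLE) and unbiased estimators of the noise variance
# sigma^2 in the model y = X beta + noise, in the classical regime n > d.
rng = np.random.default_rng(1)
n, d, sigma2 = 2000, 100, 0.5
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
beta *= np.sqrt(0.5) / np.linalg.norm(beta)   # signal variance ||beta||^2 = 0.5
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares fit
rss = np.sum((y - X @ beta_hat) ** 2)             # residual sum of squares
naive = rss / n           # biased: E[naive] = sigma^2 * (n - d) / n
unbiased = rss / (n - d)  # unbiased: E[unbiased] = sigma^2
print(naive, unbiased)
```

As $n$ approaches $d$, the denominator $n - d$ shrinks and the variance of the unbiased estimator blows up, matching the discussion above; for $n \le d$ the residual is identically zero and both estimators break down.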

In the regime where $n \approx d$, variants of these estimators might still be applied, with $\hat{\beta}$ computed as the solution to a regularized regression (see, e.g. [38]); however, such approaches seem unlikely to apply in the sublinear regime where $n \ll d$, as the recovered parameter vector is not significantly correlated with the true $\beta$ in this regime, unless strong assumptions are made on $\beta$.

Indeed, there has been a line of work on estimating the noise level assuming that $\beta$ is sparse [23, 35, 34, 10]. These works give consistent estimates of $\sigma^2$ even in the regime where $n \ll d$. More generally, there is an enormous body of work on the related problem of feature selection. The basis dependent nature of this question (i.e. identifying which features are relevant) and the setting of sparse $\beta$ are quite different from the setting we consider, where the signal $\beta$ may be a dense vector.

There have been recent results on estimating the variance of the noise, without assumptions on $\beta$, in the $n < d$ regime. In the case where $n < d$ but the ratio $n/d$ approaches a constant, Janson et al. proposed the EigenPrism [27] to estimate the noise level. Their results rely on the assumptions that the data is drawn from an isotropic Gaussian distribution, and that the label is a linear function plus independent noise, and the performance bounds become trivial as $n/d \to 0$.

Perhaps the most similar work to our paper is that of Dicker [20], which proposed an estimator of $\sigma^2$ with error rate $O(\sqrt{d}/n)$ in the setting where the data is drawn from an isotropic Gaussian distribution, and the label is a linear function plus independent Gaussian noise. Their estimator is fairly similar to ours in the identity covariance setting and gives the same error rate. However, our result is more general in the following senses: 1) our estimator and analysis do not rely on Gaussianity assumptions; 2) our estimator and results apply beyond the setting where the label is a linear function of $x$ plus independent noise, and estimate the fraction of the variance that can be explained via a linear function (the “agnostic” setting). Finally, our approach and analysis extend to the non-isotropic setting.

Finally, there is a body of work from the theoretical computer science community on “testing” whether a function belongs to a certain class, including work on testing linearity [11, 12] (generally over finite fields rather than $\mathbb{R}$), and testing monotonicity of functions over the Boolean hypercube [25, 16]. Most of this work is in the “query model”, where the algorithm can (adaptively) choose a point, $x$, and obtain its label, $f(x)$. The goal is to determine whether the labeling function belongs to the desired class using as few queries as possible. This ability to query points seems to significantly alter the problem, although it corresponds to the setting of “active learning” when there is an exponential amount of unlabeled data. Also, it is worth mentioning that much of the work in this regime focuses on these testing questions with “one-sided error”: for example, distinguishing whether a function is linear versus far from linear, as opposed to the often more difficult problem of estimating how close the function is to linear.

### 1.4 Future Directions and Shortcomings of Present Work

This work demonstrates—both theoretically and empirically—a surprising ability to estimate the performance of the best model in basic model classes (linear functions, and linear classifiers) in the regime in which there is too little data to learn any such model. That said, there are several significant caveats to the applicability of these results, which we now discuss. Some of these shortcomings seem intrinsically necessary, while others can likely be tackled via extensions of our approaches.

More General Model Classes, and Loss Functions: Perhaps the most obvious direction for future work is to tackle more general model classes, under more general classes of loss functions, in more general settings. While our results on linear regression extend to function classes (such as polynomials) that can be obtained via a linear function applied to a nonlinear embedding of the data, the results are all in terms of estimating unexplained variance, namely squared ($\ell_2$) error. Our techniques do leverage the geometry of this loss, and it is not immediately clear how they could be extended to more general loss functions.

Our results for binary classification are restricted to the specific model of Gaussian data (with arbitrary covariance) with labels assigned according to the latent variable interpretation of logistic regression, namely $1$ with probability $\sigma(\langle \beta, x \rangle)$ and $-1$ with probability $1 - \sigma(\langle \beta, x \rangle)$, where $\beta$ is the vector of hidden parameters, and $\sigma$ is the sigmoid function. Our techniques are not specific to the sigmoid function, and can yield analogous results for other monotonic “link” functions. Similarly, the Gaussian assumption can likely be relaxed. Still, it seems that any strong theoretical results for the binary classification setting would need to rely on fairly stringent assumptions on the structure of the data and labels in question.

Heavy-tailed covariance spectra: One of the practical limitations of our techniques is that they are unable to accurately capture portions of the signal that depend on directions in which the underlying distribution has extremely small variance. As our lower bounds show, this is unavoidable. That said, many real-world distributions exhibit a power-law-like spectrum, with a large number of directions having variance that is orders of magnitude smaller than the directions of larger variance, and a significant amount of signal is often contained in these directions.

From a practical perspective, this issue can be addressed by partially “whitening” the data so as to make the covariance more isotropic. Such a re-projection requires an estimate of the covariance of the distribution, which would require either specialized domain knowledge, or an (unlabeled) dataset of size at least linear in the dimension. In some settings it might be possible to easily collect a surrogate (unlabeled) dataset from which the re-projection matrix could be computed. For example, in NLP settings, a generic language dataset such as the Wikipedia corpus could be used to compute the reprojection.

Data aggregation, federated learning, and secure “proofs of value”: There are many tantalizing directions (both theoretical and empirical) for future work on downstream applications of the approaches explored in this work. The approaches of this work could be re-purposed to explore the extent to which two or more labeled datasets have the same (or similar) labeling function, even in the regime in which there is too little data to learn such a function—for example, by applying these techniques to the aggregate of the datasets versus individually, and checking whether the signal to noise ratio degrades upon aggregation. Such a primitive might have fruitful applications in the realm of “federated learning”, and other settings where there are a large number of heterogeneous entities, each supplying a modest amount of data that might be too small to train an accurate model in isolation. One of the key questions in such settings is how to decide which entities have similar models, and hence which subsets of entities might benefit from training a model on their combined data.

Finally, a more speculative line of future work might explore the possibility of creating secure or privacy preserving “proofs of value” of a dataset. The idea would be to publicly release either a portion of a dataset, or some object derived from the dataset, that would “prove” the value of the dataset while preventing others from exploiting it, or while preserving various notions of security or privacy of the underlying data. The approaches of this work might be a first step in those directions, though such directions would need to begin with a formal specification of the desired security/privacy notions.

## 2 The Estimators, Regression Setting

Before describing our estimators, we first provide an intuition for why it is possible to estimate the “learnability” in the sublinear data regime.

### 2.1 Intuition for Sublinear Estimation

We begin by describing one intuition for why it is possible to estimate the magnitude of the noise using only $O(\sqrt{d})$ samples in the isotropic setting. Suppose we are given data $x_1, \ldots, x_n$ drawn i.i.d. from $\mathcal{N}(0, I_d)$, and let $y_1, \ldots, y_n$ represent the labels, with $y_i = \langle \beta, x_i \rangle + \epsilon_i$ for a random vector $\beta$ and $\epsilon_i$ drawn independently from $\mathcal{N}(0, \sigma^2)$. Fix $\|\beta\| = 1$, and consider partitioning the datapoints into two sets, according to whether the label is positive or negative. In the case where the labels are complete noise (the limit $\sigma^2 \to \infty$), the expected value of a positively labeled point is the same as that of a negatively labeled point, and is the origin. In the case where there is little noise, the expected value of a positive point, $\mu_+$, will be different from that of a negative point, $\mu_-$, and the distance between these points corresponds to the distance between the mean of the ‘top’ half of a Gaussian and the ‘bottom’ half of a Gaussian. Furthermore, this distance between the expected means will smoothly vary between $0$ and this maximum as the variance of the noise, $\sigma^2$, varies between $\infty$ and $0$.

The crux of the intuition for the ability to estimate $\sigma^2$ in the regime where $n \ll d$ is the following observation: while the empirical means of the positive and negative points have high variance in this regime, it is possible to accurately estimate the distance between $\mu_+$ and $\mu_-$ from these empirical means! At a high level, this is because each empirical mean consists of $d$ coordinates, each of which has a significant amount of noise. However, their squared distance is just a single number which is a sum of $d$ quantities, and we can leverage concentration in the amount of noise contributed by these summands to save a factor of $\sqrt{d}$. This closely mirrors the folklore result that it requires $\Theta(d)$ samples to accurately estimate the mean, $\mu$, of an identity covariance Gaussian, though the norm of the mean can be estimated to constant error using only $O(\sqrt{d})$ samples.
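This folklore fact is easy to see concretely: for $x_i \sim \mathcal{N}(\mu, I_d)$, the empirical mean $\bar{x}$ satisfies $\mathbb{E}\|\bar{x}\|^2 = \|\mu\|^2 + d/n$, so subtracting $d/n$ yields an unbiased estimate of $\|\mu\|^2$ that concentrates even when $n \ll d$. A minimal sketch (our own illustration):

```python
import numpy as np

# Estimate ||mu||^2 with far fewer samples than would be needed to estimate
# mu itself: E[||x_bar||^2] = ||mu||^2 + d/n, so ||x_bar||^2 - d/n is unbiased.
rng = np.random.default_rng(2)
d, n = 5000, 1000                       # n << d: cannot learn mu accurately
mu = rng.normal(size=d)
mu /= np.linalg.norm(mu)                # ground truth: ||mu||^2 = 1

X = mu + rng.normal(size=(n, d))        # rows are samples from N(mu, I_d)
x_bar = X.mean(axis=0)
norm_sq_est = x_bar @ x_bar - d / n     # debiased estimate of ||mu||^2
print(norm_sq_est)
```

Note that each coordinate of $\bar{x}$ has standard deviation $1/\sqrt{n}$, far larger than the typical coordinate of $\mu$, yet the single scalar $\|\bar{x}\|^2 - d/n$ concentrates around $\|\mu\|^2$.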

Our actual estimators, even in the isotropic case, do not directly correspond to the intuitive argument sketched in this section. In particular, there is no partitioning of the data according to the sign of the label, and the unbiased estimator that we construct does not rely on any Gaussianity assumption.

### 2.2 The Estimators

The basic idea of our proposed estimator is as follows. Given a joint distribution over (x, y) where x has mean 0 and covariance Σ, the classical least squares estimator which minimizes the unexplained variance takes the form Σ⁻¹E[xy], and the corresponding value of the unexplained variance is E[y²] − E[xy]ᵀΣ⁻¹E[xy]. Notice that the least squares estimator is exactly the model parameter β in the linear model setting, and we use the same notation to denote both. The variance of the labels, E[y²], can be estimated up to additive error ε with O(1/ε²) samples, after which the problem reduces to estimating E[xy]ᵀΣ⁻¹E[xy]. While we do not have an unbiased estimator of this quantity, as we show, we can construct an unbiased estimator of E[xy]ᵀΣ^k E[xy] for any integer k ≥ 0.
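As a numerical sanity check of this identity (with hypothetical dimensions and noise level, and using the known population covariance for simplicity), the plug-in coefficient Σ⁻¹E[xy] recovers the noise variance as E[y²] − E[xy]ᵀΣ⁻¹E[xy]:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 20, 200_000
A = rng.standard_normal((d, d)) / np.sqrt(d)      # covariance factor: Sigma = A A^T
beta_true = rng.standard_normal(d) / np.sqrt(d)

X = rng.standard_normal((n, d)) @ A.T
y = X @ beta_true + 0.3 * rng.standard_normal(n)  # noise variance 0.09

Sigma = A @ A.T
beta = np.linalg.solve(Sigma, (X * y[:, None]).mean(axis=0))  # Sigma^{-1} E[x y]
unexplained = y.var() - beta @ Sigma @ beta
resid = y - X @ beta
print(unexplained, resid.var())   # both approximate the noise variance 0.09
```

Here n is enormous relative to d, so the plug-in quantities are accurate; the point of the paper is precisely that E[xy]ᵀΣ⁻¹E[xy] can still be estimated when this is not the case.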

To see the utility of estimating these “higher moments”, assume for simplicity that Σ is a diagonal matrix with entries λ_1, …, λ_d, and let v = E[xy]. Consider the distribution over the reals consisting of d point masses, with the ith point mass located at λ_i and assigned weight v_i²/λ_i². The problem of estimating vᵀΣ⁻¹v = Σ_i (v_i²/λ_i²)·λ_i is now precisely the problem of approximating the first moment of this distribution, and we are claiming that we can compute unbiased (and low variance) estimates of vᵀΣ^{k−2}v for k ≥ 2, which exactly correspond to the 2nd, 3rd, etc. moments of this distribution of point masses. Our main theorem follows from the following two components: 1) There is an unbiased estimator of the kth (k ≥ 2) moment of the distribution that requires a number of samples sublinear in d. 2) Given accurate estimates of the 2nd, 3rd, …, kth moments, one can approximate the first moment with an error that decays with k. The main technical challenge is the first component, namely constructing and analyzing the unbiased estimators for the higher moments; the second component of our approach amounts to showing that the function f(x) = x can be accurately approximated on an interval by polynomials with no constant or linear term, and is a straightforward exercise in real analysis. The final estimator of vᵀΣ⁻¹v in the non-identity covariance setting will be the linear combination of the unbiased estimates of vᵀΣ^{k−2}v, where the coefficients correspond to those of the polynomial approximation of f(x) = x. The following proposition (proved in the supplementary material) summarizes the quality of this polynomial interpolation:

###### Proposition 2.

For any integer k ≥ 2 and any interval [a, b] with 0 < a < b, there is a degree-k polynomial p_k with no linear or constant terms satisfying |p_k(x) − x| ≤ ε_k for all x in [a, b], where the error ε_k decays geometrically in k for any fixed ratio b/a.

The above proposition follows easily from Theorem 5.5 in [18], and we include the short proof in the appendix.
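The flavor of Proposition 2 can be checked numerically: on an interval bounded away from 0, the identity function is well approximated by polynomials with no constant or linear term. The interval and degree below are arbitrary illustrative choices, and a simple least-squares fit stands in for the formal construction:

```python
import numpy as np

a, b, k = 0.5, 2.0, 8            # illustrative interval and polynomial degree
xs = np.linspace(a, b, 1000)

# Basis x^2, ..., x^k (no constant or linear term); fit f(x) = x by least squares
V = np.stack([xs**j for j in range(2, k + 1)], axis=1)
coef, *_ = np.linalg.lstsq(V, xs, rcond=None)

err = np.max(np.abs(V @ coef - xs))
print(err)   # small, and decays rapidly as k grows
```

Note that no such approximation exists on an interval containing 0, since every polynomial in this basis vanishes to second order at the origin; this is why the algorithm requires bounds on the spectrum.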

Identity Covariance Setting: In the setting where the data distribution has identity covariance, we have E[xy]ᵀΣ⁻¹E[xy] = ‖E[xy]‖² simply because Σ = I_d, and hence we do have a simple unbiased estimator, summarized in the following algorithm for the isotropic setting, to which Proposition 1 applies:

To see why the second term of the output corresponds to an unbiased estimator of ‖E[xy]‖² (and hence of the explained variance in the isotropic case), consider drawing two independent samples (x_1, y_1) and (x_2, y_2) from our linear model plus noise. Indeed, y_1 y_2 ⟨x_1, x_2⟩ is an unbiased estimator of ‖E[xy]‖², because E[y_1 y_2 ⟨x_1, x_2⟩] = ⟨E[y_1 x_1], E[y_2 x_2]⟩ = ‖E[xy]‖². Given n samples, by linearity of expectation, a natural unbiased estimate is hence to compute this quantity for each pair of distinct samples, and take the average of these quantities. This is precisely what Algorithm 1 computes, since Σ_{i≠j} y_i y_j ⟨x_i, x_j⟩ = ‖Σ_i y_i x_i‖² − Σ_i y_i² ‖x_i‖².
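A minimal simulation of this isotropic estimator follows; the dimensions and noise level are illustrative choices, and the code is a sketch of the estimator described above rather than the paper's reference implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5_000, 1_000             # sublinear regime: n << d
sigma2 = 0.5                    # true unexplained (noise) variance

beta = rng.standard_normal(d)
beta *= np.sqrt(0.5) / np.linalg.norm(beta)   # ||beta||^2 = 0.5, so Var(y) = 1

X = rng.standard_normal((n, d))
y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)

# Algorithm-1-style estimate: Var(y) minus the unbiased estimate of ||E[xy]||^2
z = X.T @ y                                   # sum_i y_i x_i
explained = (z @ z - ((X * X).sum(axis=1) * y * y).sum()) / (n * (n - 1))
est_noise = y @ y / n - explained
print(est_noise)   # close to 0.5 despite n << d
```

Note that the pairwise sum is computed in O(nd) time via the identity Σ_{i≠j} y_i y_j ⟨x_i, x_j⟩ = ‖Σ_i y_i x_i‖² − Σ_i y_i² ‖x_i‖², rather than by looping over pairs.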

Given that the estimator in the isotropic case is unbiased, Proposition 1 will follow provided we adequately bound its variance:

###### Proposition 3.

The variance of the estimator of Algorithm 1 is bounded by O(1/n + d/n²), up to a factor depending on the norm of β and on C, where C is the bound on the fourth moments.

###### Proof.

The variance can be expressed as a summation over quadruples of indices (i, j, k, l). We classify each term of this summation into one of three cases, according to the number of distinct values among i, j, k, l:

1. If i, j, k, l all take different values, the term is 0, by the independence of the samples and the unbiasedness of each summand.

2. If i, j, k, l take three distinct values, assume WLOG that i = k. Expanding the resulting term and applying our fourth moment assumption, the expectation is bounded by a quantity depending only on the fourth-moment bound C and the norm of β, yielding an upper bound independent of the dimension.

3. If i, j, k, l take two distinct values, the term involves the same pair of samples twice. First taking the expectation over one of the two samples yields an upper bound; then taking the expectation over the remaining sample and applying the fourth moment condition yields a bound that grows linearly in the dimension d.

The final step is to sum the contributions of these cases. Case 2 contributes O(n³) distinct quadruples (i, j, k, l), and Case 3 contributes O(n²) distinct quadruples. Combining the resulting bounds with the normalization of the estimator, the contribution of Case 2 scales as O(1/n) and that of Case 3 as O(d/n²), and the proposition statement follows. ∎

Having shown that the estimator is unbiased and has variance bounded according to the above proposition, Proposition 1 now follows immediately from Chebyshev’s inequality.

Non-Identity Covariance: Algorithm 2, to which Theorem 2 applies, describes our estimator in the general setting where the data has a non-isotropic covariance matrix.

The general form of the unbiased estimators of E[xy]ᵀΣ^{k−2}E[xy] for k > 2 closely mirrors the k = 2 case discussed above, and the proof that these estimators are unbiased is analogous to that for the isotropic setting. The analysis of the variance, however, becomes quite complicated, as a significant amount of machinery needs to be developed to deal with the combinatorial number of “cases” that are analogous to the 3 cases discussed in the variance bound of Proposition 3. Fortunately, we are able to borrow some of the approaches of [29], which bounds similar-looking moments (with the rather different goal of recovering the covariance spectrum).

The proof of correctness of Algorithm 2, establishing Theorem 2, is given in a self-contained form in the appendix.

## 3 The Binary Classification Setting

In the binary classification setting, we assume that we have n independent labeled samples (x_1, y_1), …, (x_n, y_n), where each x_i is drawn from a Gaussian distribution. There is an underlying link function f which is monotonically increasing and satisfies f(−z) = 1 − f(z), and an underlying weight vector β, such that each label y_i takes value 1 with probability f(⟨β, x_i⟩) and −1 with probability 1 − f(⟨β, x_i⟩). Under this assumption, the goal of our algorithm is to predict the classification error of the best linear classifier. In this setting, the best linear classifier is simply the linear threshold function sign(⟨β, x⟩), whose classification error is E_x[min(f(⟨β, x⟩), 1 − f(⟨β, x⟩))].

The core of the estimators in the binary classification setting is the following observation: given two independent samples (x_1, y_1) and (x_2, y_2) drawn from the linear classification model described above, y_1 y_2 ⟨x_1, x_2⟩ is an unbiased estimator of ‖E[yx]‖², simply because y_i x_i is an unbiased estimator of E[yx], as we show below in Proposition 4. We argue that such an estimator is sufficient for the setting where the covariance is the identity and the link function f is known. To see this, note that taking the square root of the estimate yields an estimate of ‖E[yx]‖, which is monotonically increasing in the norm of β, and hence can be used to determine that norm. The classification error of the best linear classifier can then be calculated as a function of this estimate. The following proposition establishes the unbiasedness of our estimator.
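To illustrate (with hypothetical parameters), the following sketch compares the pairwise estimator y_i y_j ⟨x_i, x_j⟩, averaged over distinct pairs, against the value of ‖E[yx]‖² computed by one-dimensional quadrature; for x ~ N(0, I_d) only the component of x along β contributes to E[yx], which reduces the ground truth to a scalar integral:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, r = 500, 10_000, 3.0          # hypothetical dimension, sample size, ||beta||
beta = rng.standard_normal(d)
beta *= r / np.linalg.norm(beta)

X = rng.standard_normal((n, d))
p = 1.0 / (1.0 + np.exp(-(X @ beta)))      # sigmoid link
y = np.where(rng.random(n) < p, 1.0, -1.0)

# Pairwise estimator: average of y_i y_j <x_i, x_j> over distinct pairs
z = X.T @ y
pairwise = (z @ z - (X * X).sum()) / (n * (n - 1))   # y_i^2 = 1 on the diagonal

# Ground truth ||E[y x]||^2 = (E[(2*sigmoid(r g) - 1) g])^2 for g ~ N(0, 1),
# computed by a simple Riemann sum
g = np.linspace(-8, 8, 4001)
phi = np.exp(-g * g / 2) / np.sqrt(2 * np.pi)
m = ((2 / (1 + np.exp(-r * g)) - 1) * g * phi).sum() * (g[1] - g[0])
print(pairwise, m * m)   # the two agree closely
```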

###### Proof.

First we decompose x_i into the sum of two parts: its projection onto the direction of β, and the component orthogonal to β, where the second part is independent of the label y_i. Since the orthogonal component has expectation 0 conditioned on y_i, we have that E[y_i x_i] = E[yx] is a multiple of β, and the claimed unbiasedness follows.

For the setting where x is drawn from a non-isotropic Gaussian with unknown covariance Σ, we apply a similar approach as in the linear regression case. First, we obtain a series of unbiased estimators of E[yx]ᵀΣ^k E[yx] for k ≥ 0. Then, we find a linear combination of those estimates that yields an estimate of E[yx]ᵀΣ⁻¹E[yx]. This latter quantity can then be used to determine βᵀΣβ, after which the value of the classification error can be determined.

Our general covariance algorithm for estimating the classification error of the best linear predictor, to which Theorem 5 applies, is the following:

For convenience, we restate our main theorem for estimating the classification error of the best linear model:

Theorem 5. Suppose we are given n labeled examples (x_1, y_1), …, (x_n, y_n), with each x_i drawn independently from a Gaussian distribution with mean 0 and covariance Σ = AAᵀ, where A is an unknown arbitrary d-by-d real matrix. Assume that each label y_i takes value 1 with probability f(⟨β, x_i⟩) and −1 with probability 1 − f(⟨β, x_i⟩), where f is the Sigmoid function. There is an algorithm that takes the n labeled samples, a parameter k, and upper and lower bounds on the singular values of A, and, with probability at least 1 − δ, outputs an estimate of the classification error of the best linear classifier to within an additive error that vanishes as n and k grow, with n sublinear in d; the precise bound, involving an absolute constant, is given in Appendix C.

We provide the proof of Theorem 5 in Appendix C. As in the linear regression setting, the main technical challenge is bounding the variance of our estimators for each of the “higher moments”, in this case the estimators of the quantities E[yx]ᵀΣ^k E[yx]. Our proof that these quantities can be accurately estimated in the sublinear data regime does leverage the Gaussianity assumption on x, though it does not rely on the assumption that the “link function” f is the sigmoid. The only portion of our algorithm and proof that leverages the assumption that f is the sigmoid function is the definition and analysis of the mapping used in Algorithm 3, which provides the invertible map between the quantity we estimate directly, E[yx]ᵀΣ⁻¹E[yx], and the classification error of the best predictor. Analogous results to Theorem 5 can likely be obtained for other choices of link function, by characterizing the corresponding mapping.

## 4 Empirical Results

We evaluated the performance of our estimators on several synthetic datasets, and on a natural language processing regression task. In both cases, we explored the performance across a large range of dimensionalities. In both the synthetic and NLP settings, we compared our estimators with the “naive” unbiased estimator discussed in Section 1.3, which is only applicable in the regime where the sample size is at least the dimension. In general, the results seem quite promising, with the estimators of Algorithms 1 and 2 yielding consistently accurate estimates of the proportion of the variance in the label that cannot be explained via a linear model. As expected, the performance becomes more impressive as the dimension of the data increases. All experiments were run in Matlab v2016b on a MacBook Pro laptop, and the code will be available from our websites.

### 4.1 Implementation Details

Algorithms 1 and 2 were implemented as described in Section 2. The only hitherto unspecified portion of the estimators is the choice of the coefficients in the polynomial interpolation portion of Algorithm 2, which is necessary for our “moment”-based approach to the non-isotropic setting. Recall that the algorithm takes, as input, an upper and lower bound on the singular values of the data distribution, and then approximates the linear function f(x) = x on the corresponding interval via a polynomial with no constant or linear terms. The error of this polynomial approximation corresponds to an upper bound on the bias of the resulting estimator, and the variance of the estimator increases with the magnitudes of the coefficients of the polynomial. To compute these coefficients, we proceed via two small linear programs. The variables of the LPs correspond to the coefficients, and the objective function of the first LP corresponds to minimizing the error of approximation, estimated based on a fine discretization of the interval into evenly spaced points. Specifically, the function f(x) = x is represented as a vector of its evaluations at these points, as are the basis functions x², x³, …, x^k. The first LP computes the optimal approximation error given the interval and the number of moments, k. The second LP then computes coefficients that minimize the weighted sum of the magnitudes of the coefficients (with the magnitude of the ith coefficient weighted to account for the higher variance of the estimates of the higher moments), subject to incurring an approximation error that is not too much larger (at most a constant factor larger) than the optimal one computed via the first LP. We did not explore alternate weightings, and the results are similar if this constant factor is varied over a moderate range.
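The two-LP scheme described above can be sketched with scipy.optimize.linprog as follows; the interval, degree, moment weights, and the factor-of-2 error slack are illustrative stand-ins for the paper's actual choices:

```python
import numpy as np
from scipy.optimize import linprog

a, b, k = 0.5, 2.0, 6                 # hypothetical singular-value range, # moments
xs = np.linspace(a, b, 200)
V = np.stack([xs**j for j in range(2, k + 1)], axis=1)   # basis x^2, ..., x^k
m = V.shape[1]

# LP 1: minimize the sup-norm error t of V c approximating f(x) = x.
# Variables z = (c_2, ..., c_k, t); constraints |V c - xs| <= t.
A1 = np.block([[V, -np.ones((len(xs), 1))],
               [-V, -np.ones((len(xs), 1))]])
b1 = np.concatenate([xs, -xs])
res1 = linprog(c=np.r_[np.zeros(m), 1.0], A_ub=A1, b_ub=b1,
               bounds=[(None, None)] * m + [(0, None)])
opt_err = res1.fun

# LP 2: among coefficient vectors with error <= 2 * opt_err, minimize a
# moment-weighted l1 norm. Variables z = (c, u) with |c_j| <= u_j.
w = np.array([2.0**j for j in range(2, k + 1)])   # hypothetical per-moment weights
A2 = np.block([[V, np.zeros((len(xs), m))],
               [-V, np.zeros((len(xs), m))],
               [np.eye(m), -np.eye(m)],
               [-np.eye(m), -np.eye(m)]])
b2 = np.concatenate([xs + 2 * opt_err, -xs + 2 * opt_err, np.zeros(2 * m)])
res2 = linprog(c=np.r_[np.zeros(m), w], A_ub=A2, b_ub=b2,
               bounds=[(None, None)] * m + [(0, None)] * m)
coef = res2.x[:m]
print(opt_err, coef)
```

The first LP minimizes the sup-norm approximation error via an epigraph variable t; the second minimizes a weighted l1 norm of the coefficients subject to a constant-factor slack on that error.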

### 4.2 Synthetic Data Experiments

Isotropic Covariance: Our first experiments evaluate Algorithm 1 on data drawn from an isotropic Gaussian distribution, N(0, I_d). The labels are computed by first selecting a uniformly random vector β of a prescribed norm, and then setting each y_i = ⟨β, x_i⟩ + η_i, where η_i is drawn independently from a Gaussian with variance σ². The y_i’s are then scaled according to their empirical variance (simulating the setting where we do not know, a priori, that the labels have variance 1), and the fraction of this (unit) variance that is unexplainable via a linear model is estimated via Algorithm 1. Figure 2 depicts the mean and standard deviation (over 50 trials) of the estimated value of unexplainable variance, for three choices of the dimension d, 1,000, 10,000, and 50,000, and a range of choices of the sample size n for each d. We compare our estimator with the classic “unbiased” estimator in the settings where n exceeds d. As expected, our estimator demonstrates an ability to accurately recover the unexplainable variance even in the sublinear data regime, whereas the “unbiased” estimator has a variance that increases when n is not much larger than d. Figure 2 portrays a single choice of σ², and the results for other choices of σ² are similar.

Non-Isotropic Covariance: We also evaluated Algorithm 2 on synthetic data that does not have identity covariance. In this experiment, datapoints are drawn from a uniformly randomly rotated Gaussian whose covariance has a prescribed spectrum of singular values. As above, the labels are assigned as y_i = ⟨β, x_i⟩ + η_i, where β is selected uniformly at random subject to a prescribed norm, and the y_i’s are then scaled according to their empirical variance. We then applied Algorithm 2 with a varying number of moments. Figure 3 depicts the mean and standard deviation (over 50 trials) of the recovered estimates of the unexplainable variance for the same parameter settings as in the isotropic case (d = 1,000, 10,000, and 50,000, evaluated for a range of sample sizes n). For clarity, we only plot the results corresponding to using 2 and 3 moments; as expected, the 2-moment estimator is significantly biased, whereas for 3 (and higher) moments, the bias is negligible compared to the variance. Again, the results are more impressive for larger d, and demonstrate the ability of Algorithm 2 to perform well in the sublinear sample setting where n < d.

### 4.3 NLP Experiments

We also evaluated our approach on an amusing natural language processing dataset: predicting the “point score” of a wine (the scale used by Wine Spectator to quantify the quality of a wine), based on a description of the tasting notes of the wine. This data is from Kaggle’s Wine-Reviews dataset, originally scraped from Wine Spectator. The dataset contained data on 150,000 highly-rated wines, each of which had an integral point score. The tasting notes consisted of several sentences, with the entries having a mean and median length of 40.1 and 39 words, respectively; 95% of the tasting notes contained between 20 and 70 words. The following is a typical tasting note (corresponding to a 96 point wine): “Ripe aromas of fig, blackberry and cassis are softened and sweetened by a slathering of oaky chocolate and vanilla. This is full, layered, intense and cushioned on the palate, with rich flavors of chocolaty black fruits and baking spices….”

Our goal was to estimate the ability of a linear model (over various featurizations of the tasting notes) to predict the corresponding point value of the wine. This dataset was well-suited for our setting because 1) the NLP setting presents a variety of natural high-dimensional featurizations, and 2) the 150k datapoints were sufficient to accurately estimate a “ground truth” prediction error, allowing us to approximate the residual variance in the point value that cannot be captured via a linear model over the specified features.

We considered two featurizations of the tasting notes, both based on the publicly available 100-dimensional GloVe word vectors [33]. The first, very naive featurization consisted of concatenating the vectors corresponding to the first 20 words of each tasting note (for comparison, we also measured the variance of held-out point scores explained by using the average of all the word vectors of each note). We also considered a much higher-dimensional embedding, yielded by computing the outer product of the vectors corresponding to each pair of words appearing in a tasting note, and then averaging these outer products, and measured the fraction of the variance in the point scores that it explained. In both settings, we leveraged the (unlabeled) large dataset to partially “whiten” the covariance, by reprojecting the data so as to bound the range of the singular values of the covariance and removing the dimensions with smallest variance, yielding datasets of dimension 1,950 and 9,000, respectively. The results of applying Algorithm 2 to these datasets are depicted in Figure 4. The results are promising, and are consistent with the synthetic experiments. We also note that the classic “unbiased” estimator is significantly biased when n is close to d; this is likely due to a lack of independence between the “noise” in the point score and the tasting note, as would be explained by the presence of sets of datapoints with similar point values and similar tasting notes. Perhaps surprisingly, our estimator did not seem to suffer this bias.

### 4.4 Binary Classification Experiments

We evaluated our estimator for the prediction accuracy of the best linear classifier on 1) synthetic data (with non-isotropic covariance) that was drawn according to the specific model to which our theoretical results apply, and 2) the MNIST hand-written digit image classification dataset. Our algorithm performed well in both settings—perhaps suggesting that the theoretical performance characterization of our algorithm might continue to hold in significantly more general settings beyond those assumed in Theorem 5.

#### 4.4.1 Synthetic Data Experiments

We evaluated Algorithm 3 on synthetic data with non-isotropic covariance. In this experiment, datapoints are drawn from a uniformly randomly rotated Gaussian whose covariance has a prescribed spectrum of singular values. The model parameter β is a d-dimensional vector of prescribed norm that points in a uniformly random direction. Each label y_i is assigned to be 1 with probability f(⟨β, x_i⟩) and −1 with probability 1 − f(⟨β, x_i⟩), where f is the sigmoid function. We then applied Algorithm 3 with a varying number of moments. Figure 5 depicts the mean and standard deviation (over 50 trials) of the recovered estimates of the classification error of the best linear classifier. We considered dimensions 1,000 and 10,000, and evaluated each setting for a range of sample sizes n. For context, we also plotted the test and training accuracy of the logistic regression algorithm with regularization. Again, the performance of our algorithm seems more impressive for larger d, and demonstrates the ability of Algorithm 3 to perform well in the sublinear sample setting where n < d and the (regularized) logistic regression algorithm cannot recover an accurate classifier.

#### 4.4.2 MNIST Image Classification

We also evaluated our algorithm for predicting the classification error on the MNIST dataset. The MNIST dataset of handwritten digits has a training set of 60,000 grey-scale images. Each image is a handwritten digit with 28 × 28 resolution, and each grey-scale pixel is represented as an integer between 0 and 255. Our goal is to estimate the ability of a linear classifier to predict the label (digit) of a given image. Since we only consider binary linear classifiers in this work, we take the digits “0”, “1”, “2”, “3”, “4” as positive examples and “5”, “6”, “7”, “8”, “9” as negative examples, and the task is to determine which group an image belongs to. Since our algorithm requires the dataset to be balanced in terms of positive and negative examples, we subsample from the majority class to obtain a balanced dataset with 58,808 total training examples (29,404 per class). Each image is unrolled into a 784-dimensional real vector (28 × 28 = 784). All the data are centered, and scaled so as to bound the largest singular value of the sample covariance matrix. For comparison, we implemented logistic regression both without regularization and with regularization. We also use a simpler (and more robust) mapping in Algorithm 3, namely a linear approximation to the mapping that corresponds to the sigmoid under the Gaussian distribution. Algorithm 3 is applied with a varying number of moments. For each sample size n, we randomly select n samples from the balanced set of size 58,808. To evaluate the test performance of logistic regression, we use the remaining examples as a “test” set. For each algorithm, we repeat the procedure 50 times, reselecting the samples each time. Figure 6 depicts the mean and standard deviation (over 50 trials) of the recovered estimate of the classification error of the best linear classifier.
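The class-balancing step can be sketched as follows (a generic subsampling routine written for illustration, not the paper's code):

```python
import numpy as np

def balance(X, y, rng):
    """Subsample the majority class so the two classes are equally represented."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == -1)
    k = min(len(pos), len(neg))
    idx = np.concatenate([rng.choice(pos, k, replace=False),
                          rng.choice(neg, k, replace=False)])
    rng.shuffle(idx)                      # avoid all-positives-then-all-negatives
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
y = np.where(rng.random(1000) < 0.7, 1.0, -1.0)   # imbalanced labels
Xb, yb = balance(X, y, rng)
print((yb == 1).sum(), (yb == -1).sum())   # equal class counts
```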

As shown in the plot, even at the largest sample sizes the training error of the unregularized logistic regression is still 0, meaning the data is perfectly separable and the learned classifier does not generalize. Although the conditions of Theorem 5 obviously do not hold for the MNIST dataset, our algorithm still provides a reasonable estimate even with fewer samples than the dimension. One interesting phenomenon of the MNIST experiment, compared to the synthetic datasets, is that the higher-order moments are smaller both in value and in standard deviation; hence using more moments does not introduce much additional variance, while still decreasing the bias. In our experiments we found that the estimates of Algorithm 3 are stable even with 12 moments.

## References

• [1] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, S. Pan, and A. Suresh. Competitive classification and closeness testing. In Conference on Learning Theory (COLT), 2012.
• [2] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, S. Pan, and A. Suresh. A competitive test for uniformity of monotone distributions. In AISTATS, 2013.
• [3] A. Antos and I. Kontoyiannis. Convergence properties of functional estimates for discrete distributions. Random Structures and Algorithms, 19(3–4):163–193, 2001.
• [4] Zhidong Bai, Jiaqi Chen, and Jianfeng Yao. On estimation of the population spectral distribution from a high-dimensional sample covariance matrix. Australian & New Zealand Journal of Statistics, 52(4):423–437, 2010.
• [5] Zhidong D Bai, Yong Q Yin, and Paruchuri R Krishnaiah. On the limiting empirical distribution function of the eigenvalues of a multivariate F matrix. Theory of Probability & Its Applications, 32(3):490–500, 1988.
• [6] T. Batu, E. Fischer, L. Fortnow, R. Kumar, R. Rubinfeld, and P. White. Testing random variables for independence and identity. In IEEE Symposium on Foundations of Computer Science (FOCS), 2001.
• [7] T. Batu, L. Fortnow, R. Rubinfeld, W.D. Smith, and P. White. Testing that distributions are close. In IEEE Symposium on Foundations of Computer Science (FOCS), 2000.
• [8] T. Batu, R. Kumar, and R. Rubinfeld. Sublinear algorithms for testing monotone and unimodal distributions. In Proceedings of the ACM Symposium on Theory of Computing (STOC), 2004.
• [9] T. Batu, R. Kumar, and R. Rubinfeld. Sublinear algorithms for testing monotone and unimodal distributions. In Symposium on Theory of Computing (STOC), 2004.
• [10] Mohsen Bayati, Murat A Erdogdu, and Andrea Montanari. Estimating lasso risk and noise level. In Advances in Neural Information Processing Systems, pages 944–952, 2013.
• [11] Mihir Bellare, Don Coppersmith, Johan Håstad, Marcos Kiwi, and Madhu Sudan. Linearity testing in characteristic two. IEEE Transactions on Information Theory, 42(6):1781–1795, 1996.
• [12] Eli Ben-Sasson, Madhu Sudan, Salil Vadhan, and Avi Wigderson. Randomness-efficient low degree tests and short pcps via epsilon-biased sets. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pages 612–621. ACM, 2003.
• [13] L. Birge. Estimating a density under order restrictions: nonasymptotic minimax risk. Annals of Statistics, 15(3):995–1012, 1987.
• [14] L. Birge. On the risk of histograms for estimating decreasing densities. Annals of Statistics, 15(3):1013–1022, 1987.
• [15] S. Chan, I. Diakonikolas, G. Valiant, and P. Valiant. Optimal algorithms for testing closeness of discrete distributions. In SODA, 2014.
• [16] Xi Chen, Rocco A Servedio, and Li-Yang Tan. New algorithms and lower bounds for monotonicity testing. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 286–295. IEEE, 2014.
• [17] Constantinos Daskalakis, Ilias Diakonikolas, Rocco A Servedio, Gregory Valiant, and Paul Valiant. Testing k-modal distributions: Optimal algorithms via reductions. In Proceedings of the twenty-fourth annual ACM-SIAM symposium on Discrete algorithms, pages 1833–1852. Society for Industrial and Applied Mathematics, 2013.
• [18] Ronald A DeVore and George G Lorentz. Constructive approximation, volume 303. Springer Science & Business Media, 1993.
• [19] Ilias Diakonikolas, Daniel M Kane, and Alistair Stewart. Statistical query lower bounds for robust estimation of high-dimensional gaussians and gaussian mixtures. arXiv preprint arXiv:1611.03473, 2016.
• [20] Lee H Dicker. Variance estimation in high-dimensional linear models. Biometrika, 101(2):269–284, 2014.
• [21] David L Donoho, Matan Gavish, and Iain M Johnstone. Optimal shrinkage of eigenvalues in the spiked covariance model. arXiv preprint arXiv:1311.0851, 2013.
• [22] Noureddine El Karoui. Spectrum estimation for large dimensional covariance