Optimal Regularization Can Mitigate Double Descent

Abstract

Recent empirical and theoretical studies have shown that many learning algorithms, from linear regression to neural networks, can have test performance that is non-monotonic in quantities such as the sample size and model size. This striking phenomenon, often referred to as "double descent", has raised the question of whether we need to rethink our current understanding of generalization. In this work, we study whether the double-descent phenomenon can be avoided by using optimal regularization. Theoretically, we prove that for certain linear regression models with isotropic data distribution, optimally-tuned regularization achieves monotonic test performance as we grow either the sample size or the model size. We also demonstrate empirically that optimally-tuned regularization can mitigate double descent for more general models, including neural networks. Our results suggest that it may also be informative to study the test risk scalings of various algorithms in the context of appropriately tuned regularization.

1 Introduction

Recent works have demonstrated a ubiquitous “double descent” phenomenon present in a range of machine learning models, including decision trees, random features, linear regression, and deep neural networks (Opper, 1995, 2001; Advani and Saxe, 2017; Spigler et al., 2018; Belkin et al., 2018; Geiger et al., 2019b; Nakkiran et al., 2020; Belkin et al., 2019; Hastie et al., 2019; Bartlett et al., 2019; Muthukumar et al., 2019; Bibas et al., 2019; Mitra, 2019; Mei and Montanari, 2019; Liang and Rakhlin, 2018; Liang et al., 2019; Xu and Hsu, 2019; Dereziński et al., 2019; Lampinen and Ganguli, 2018; Deng et al., 2019; Nakkiran, 2019). The phenomenon is that models exhibit a peak of high test risk when they are just barely able to fit the train set, that is, to interpolate. For example, as we increase the size of models, test risk first decreases, then increases to a peak around when effective model size is close to the training data size, and then decreases again in the overparameterized regime. Also surprising is that Nakkiran et al. (2020) observe a double descent as we increase sample size, i.e. for a fixed model, training the model with more data can hurt test performance.

Figure 1: Test Risk vs. Num. Samples for Isotropic Ridge Regression in $d$ dimensions. Unregularized regression ($\lambda = 0$) is non-monotonic in samples, but optimally-regularized regression ($\lambda = \lambda^*$) is monotonic. The sample distribution is $y = \langle x, \beta \rangle + \varepsilon$, where $x \sim \mathcal{N}(0, I_d)$ and $\varepsilon \sim \mathcal{N}(0, \sigma^2)$, for a fixed ground-truth $\beta \in \mathbb{R}^d$ and noise level $\sigma$. For $\lambda \ge 0$, the ridge estimator on $n$ samples $(X, y)$ is $\hat\beta_\lambda = (X^\top X + \lambda I_d)^{-1} X^\top y$. In this setting, the optimal regularizer $\lambda^*$ does not depend on the number of samples (see Section 2), but this is not always true – see Figure 2.

These striking observations highlight a potential gap in our understanding of generalization and an opportunity for improved methods. Ideally, we seek to use learning algorithms which robustly improve performance as the data or model size grow and do not exhibit such unexpected non-monotonic behaviors. In other words, we aim to improve the test performance in situations which would otherwise exhibit high test risk due to double descent. Here, a natural strategy would be to use a regularizer and tune its strength on a validation set.

This motivates the central question of this work:

When does optimally tuned regularization mitigate or remove the double-descent phenomenon?

Another motivation to start this line of inquiry is the observation that the double descent phenomenon is largely observed for unregularized or under-regularized models in practice. As an example, Figure 1 shows a simple linear ridge regression setting in which the unregularized estimator exhibits double descent, but an optimally-tuned regularizer has monotonic test performance.
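To make this example concrete, the following minimal simulation sketch (with illustrative parameters, not the exact values behind Figure 1) compares unregularized least squares against ridge regression whose $\lambda$ is tuned on a grid to minimize the population risk, which is computable here because the ground truth is known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem parameters (not the exact values behind Figure 1).
d, sigma, n_trials = 20, 0.5, 200
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)                       # unit-norm ground truth
lambdas = np.concatenate(([0.0], np.logspace(-3, 3, 25)))

def ridge(X, y, lam):
    """Ridge estimator (X^T X + lam I)^{-1} X^T y; lam = 0 gives min-norm least squares."""
    return np.linalg.pinv(X.T @ X + lam * np.eye(X.shape[1])) @ X.T @ y

for n in range(1, 3 * d + 1, 5):
    # Monte Carlo estimate of the expected test risk E||beta_hat - beta||^2 + sigma^2.
    risks = np.zeros(len(lambdas))
    for _ in range(n_trials):
        X = rng.standard_normal((n, d))
        y = X @ beta + sigma * rng.standard_normal(n)
        for j, lam in enumerate(lambdas):
            b = ridge(X, y, lam)
            risks[j] += np.sum((b - beta) ** 2) + sigma**2
    risks /= n_trials
    print(f"n={n:3d}  unregularized={risks[0]:6.3f}  optimally-tuned={risks.min():6.3f}")
```

With settings like these, the unregularized column typically peaks near $n \approx d$, while the optimally-tuned column decreases with $n$, mirroring the qualitative behavior in Figure 1.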

Our Contributions: We study this question from both a theoretical and empirical perspective. Theoretically, we start with the setting of high-dimensional linear regression. Linear regression is a sensible starting point to study these questions, since it already exhibits many of the qualitative features of double descent in more complex models (e.g. Belkin et al. (2019); Hastie et al. (2019) and further related works in Section 1.1).

This work shows that optimally-tuned ridge regression can achieve both sample-wise monotonicity and model-size-wise monotonicity under certain assumptions. Concretely, we show

  • Sample-wise monotonicity: In the setting of well-specified linear regression with isotropic features/covariates (Figure 1), we prove that optimally-tuned ridge regression yields monotonic test performance with increasing samples. That is, more data never hurts for optimally-tuned ridge regression (see Theorem 2).

  • Model-wise monotonicity: We consider a setting where the input/covariate lives in a high-dimensional ambient space $\mathbb{R}^D$ with isotropic covariance. Given a fixed model size $d$ (which might be much smaller than the ambient dimension $D$), we consider the family of models which first project the input to a random $d$-dimensional subspace, and then compute a linear function in this projected "feature space." (This is nearly identical to models of double descent considered in Hastie et al. (2019, Section 5.1).) We prove that in this setting, as we grow the model size $d$, optimally-tuned ridge regression over the projected features has monotone test performance. That is, with optimal regularization, bigger models are always better or the same. (See Theorem 3).

  • Monotonicity in the real-world: We also demonstrate several richer empirical settings where optimal regularization induces monotonicity, including random feature classifiers and convolutional neural networks. This suggests that the mitigating effect of optimal regularization may hold more generally in broad machine learning contexts. (See Section 5).

A few remarks are in order:

Problem-specific vs Minimax and Bayesian. It is worth noting that our results hold for all linear ground-truths, rather than holding for only the worst-case ground-truth or a random ground-truth. Indeed, the minimax optimal estimator or the Bayes optimal estimator are both trivially sample-wise and model-wise monotonic with respect to the minimax risk or the Bayes risk. However, they do not guarantee monotonicity of the risk itself for a given fixed problem.

Universal vs Asymptotic. We also remark that our analysis is not only non-asymptotic but also works for all possible input dimensions, model sizes, and sample sizes. Prior works on double descent mostly rely on asymptotic assumptions that send the sample size or the model size to infinity in a specific manner. To our knowledge, the results herein are the first non-asymptotic sample-wise and model-wise monotonicity results for linear regression. (See discussion of related works Hastie et al. (2019); Mei and Montanari (2019) for related results in the asymptotic setting).

Finally, we note that our claims are about monotonicity of the actual test risk, instead of the monotonicity of the generalization bounds (e.g., results in (Wei et al., 2019)).

Towards a more general characterization. Our theoretical results crucially rely on the covariance of the data being isotropic. A natural next question is if and when the same results can hold more generally. A full answer to this question is beyond the scope of this paper, though we give the following results:

  1. Sample-wise monotonicity does not hold for optimally-tuned ridge regression for a certain non-Gaussian data distribution with heteroscedastic noise. This can be seen from an example in $d = 2$ dimensions, in which the expected risk increases between $n = 1$ and $n = 2$ samples. (See Section 4.1 for the counterexample and intuitions.)

  2. For non-isotropic Gaussian covariates, we can achieve sample-wise monotonicity with a regularizer that depends on the population covariance matrix of data. This suggests unlabeled data might also help mitigate double descent in some settings, because the population covariance can be estimated from unlabeled data.

  3. For non-isotropic Gaussian covariates, we conjecture that optimally-tuned ridge regression is sample-monotonic even with the standard, unmodified $\ell_2$ regularizer (as in Figure 2). We derive a sufficient condition for this conjecture, which we verify numerically in a variety of cases.

The last result above highlights the importance of the form of the regularizer, which leads to the open question: “How do we design good regularizers which mitigate or remove double descent?” We hope that our results can motivate future work on mitigating the double descent phenomenon, and allow us to train high performance models which do not exhibit unexpected nonmonotonic behaviors.

1.1 Related Works

This work builds on and is inspired by the long line of work on “double descent” phenomena in machine learning. Double descent of test risk as a function of model size was proposed in generality by Belkin et al. (2018). Similar behavior was observed empirically in Advani and Saxe (2017); Geiger et al. (2019a); Spigler et al. (2018); Neal et al. (2018), and even earlier in restricted settings as early as Trunk (1979); Opper (1995, 2001); Skurichina and Duin (2002). Recently Nakkiran et al. (2020) demonstrated a generalized double descent phenomenon on modern deep networks, and highlighted “sample non-monotonicity” as an aspect of double descent.

Following Belkin et al. (2018), a recent stream of theoretical works consider model-wise double descent in simplified settings— often linear models for regression or classification. A partial list includes (Belkin et al., 2019; Hastie et al., 2019; Bartlett et al., 2019; Muthukumar et al., 2019; Bibas et al., 2019; Mitra, 2019; Mei and Montanari, 2019; Liang and Rakhlin, 2018; Liang et al., 2019; Xu and Hsu, 2019; Dereziński et al., 2019; Lampinen and Ganguli, 2018; Deng et al., 2019; Nakkiran, 2019; Mahdaviyeh and Naulet, 2019). Of these, most closely related to our work are Hastie et al. (2019); Mei and Montanari (2019); Nakkiran (2019). Specifically, Hastie et al. (2019) considers the risk of unregularized and regularized linear regression in an asymptotic regime, where dimension and number of samples scale to infinity together, at a constant ratio . In contrast, we show non-asymptotic results, and are able to consider increasing the number of samples for a fixed model, without scaling both together. Mei and Montanari (2019) derive similar results for unregularized and regularized random features, also in an asymptotic limit where the number of features and samples scale to infinity together. The non-asymptotic versions of the settings considered in Hastie et al. (2019) are almost identical to ours— for example, our projection model in Section 3 is nearly identical to the model in Hastie et al. (2019, Section 5.1).

Most of the above works on double descent are concerned with studying test risk as a function of increasing model size. In this work, we also study the recent "sample-wise" perspective on double descent, and consider the test risk of a fixed model for increasing samples. Nakkiran (2019) highlights that unregularized isotropic linear regression exhibits this kind of sample-wise non-monotonicity, and we study this model in the context of optimal regularization. The study of non-monotonicity in learning algorithms also predates double descent, including Duin (1995, 2000); Opper (2001); Loog and Duin (2012). Loog et al. (2019) introduces the same notion of risk monotonicity which we consider, and studies several examples of monotonic and non-monotonic procedures.

2 Sample Monotonicity in Ridge Regression

In this section, we prove that optimally-regularized ridge regression has test risk that is monotonic in samples, for isotropic Gaussian covariates and a linear response. This confirms the behavior empirically observed in Figure 1. We also show that this monotonicity is not "fragile": using larger-than-optimal regularization is still sample-monotonic (consistent with Figure 1).

Formally, we consider the following linear regression problem in $d$ dimensions. The input/covariate is generated from $x \sim \mathcal{N}(0, I_d)$, and the output/response is generated by
$$y = \langle x, \beta \rangle + \varepsilon,$$
with $\varepsilon \sim \mathcal{N}(0, \sigma^2)$ for some unknown parameter $\beta \in \mathbb{R}^d$. We denote the joint distribution of $(x, y)$ by $\mathcal{D}$. We are given $n$ training examples $\{(x_i, y_i)\}_{i=1}^n$ i.i.d. sampled from $\mathcal{D}$. We aim to learn a linear model $\hat\beta \in \mathbb{R}^d$ with small population mean-squared error on the distribution:
$$R(\hat\beta) := \mathbb{E}_{(x, y)\sim\mathcal{D}}\big[(\langle x, \hat\beta\rangle - y)^2\big].$$

For simplicity, let $X \in \mathbb{R}^{n \times d}$ be the data matrix that contains the $x_i$'s as rows, and let $y \in \mathbb{R}^n$ be the column vector that contains the responses $y_i$'s as entries. For any estimator $\hat\beta(X, y)$ as a function of the $n$ samples, define the expected risk of the estimator as:

$$\overline{R}_n(\hat\beta) := \mathbb{E}_{X, y}\Big[R\big(\hat\beta(X, y)\big)\Big]. \qquad (1)$$

We consider the regularized least-squares estimator, also known as the ridge regression estimator. For a given $\lambda \ge 0$, define

$$\hat\beta_\lambda(X, y) := \arg\min_{w \in \mathbb{R}^d} \; \|X w - y\|_2^2 + \lambda \|w\|_2^2 \qquad (2)$$
$$= (X^\top X + \lambda I_d)^{-1} X^\top y. \qquad (3)$$

Here $I_d$ denotes the $d$-dimensional identity matrix. Let $\lambda^*_n$ be the optimal ridge parameter (that achieves the minimum expected risk) given $n$ samples:

$$\lambda^*_n := \arg\min_{\lambda \ge 0} \; \overline{R}_n\big(\hat\beta_\lambda\big). \qquad (4)$$

Let $\hat\beta_{\lambda^*_n}$ be the estimator that corresponds to the optimal ridge parameter, with expected test risk

$$\overline{R}_n\big(\hat\beta_{\lambda^*_n}\big) = \min_{\lambda \ge 0} \; \overline{R}_n\big(\hat\beta_\lambda\big). \qquad (5)$$
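For completeness, a one-line check (standard, and not specific to this paper) that the closed form (3) solves the quadratic problem (2): setting the gradient with respect to $w$ to zero gives
$$\nabla_w\Big[\|Xw - y\|_2^2 + \lambda\|w\|_2^2\Big] = 2X^\top(Xw - y) + 2\lambda w = 0 \;\Longrightarrow\; (X^\top X + \lambda I_d)\,w = X^\top y,$$
so $w = (X^\top X + \lambda I_d)^{-1}X^\top y$ whenever $\lambda > 0$ (for $\lambda = 0$ one takes the minimum-norm solution).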

Our main theorem in this section shows that the expected risk of $\hat\beta_{\lambda^*_n}$ monotonically decreases as $n$ increases.

Theorem 2. In the setting above, the expected test risk of optimally-regularized, well-specified isotropic linear regression is monotonic in samples. That is, for all $n \ge 1$ and all $\beta \in \mathbb{R}^d$, $\sigma \ge 0$,
$$\overline{R}_{n+1}\big(\hat\beta_{\lambda^*_{n+1}}\big) \;\le\; \overline{R}_n\big(\hat\beta_{\lambda^*_n}\big).$$

The above theorem shows a strong form of monotonicity, since it holds for every fixed ground-truth $\beta$, and does not require averaging over any prior on ground-truths. Moreover, it holds non-asymptotically, for every fixed $n$ and $d$. Obtaining such non-asymptotic results is nontrivial, since we cannot rely on concentration properties of the involved random variables.

In particular, evaluating $\overline{R}_n(\hat\beta_{\lambda^*_n})$ as a function of the problem parameters ($\beta$, $\sigma$, $d$, and $n$) is technically challenging. In fact, we suspect that a simple closed-form expression does not exist. The key idea towards proving the theorem is to derive a "partial evaluation": the following lemma shows that we can write $\overline{R}_n(\hat\beta_\lambda)$ as an expectation of a simple function of the singular values of the data matrix $X$. We will then couple the randomness of data matrices obtained by adding a single sample, and use singular value interlacing to compare their singular values.

Lemma. In the setting of Theorem 2, let $s_1 \ge s_2 \ge \dots \ge s_d \ge 0$ be the singular values of the data matrix $X \in \mathbb{R}^{n \times d}$. (If $n < d$, we pad with $s_i = 0$ for $i > n$.) Let $\mathcal{S}_n$ be the distribution of $(s_1, \dots, s_d)$. Then, the expected test risk is
$$\overline{R}_n\big(\hat\beta_\lambda\big) \;=\; \mathbb{E}_{s \sim \mathcal{S}_n}\left[\sum_{i=1}^d \frac{\lambda^2 \|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2 + \lambda)^2}\right] + \sigma^2.$$

From the lemma above, the lemma below follows directly by taking derivatives in $\lambda$ to find the optimal ridge parameter.

Lemma. In the setting of Theorem 2, the optimal ridge parameter is constant for all $n$: $\lambda^*_n = \frac{d\sigma^2}{\|\beta\|^2}$. Moreover, the optimal expected test risk can be written as

$$\overline{R}_n\big(\hat\beta_{\lambda^*_n}\big) \;=\; \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{\sigma^2}{s_i^2 + \lambda^*_n}\right] + \sigma^2. \qquad (6)$$
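The closed form above is easy to sanity-check numerically. The following sketch (with illustrative problem constants) compares a direct Monte Carlo estimate of the expected risk against the singular-value expression above, and checks that $\lambda = d\sigma^2/\|\beta\|^2$ minimizes the risk over a grid; this is a sanity check, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma = 10, 8, 0.7                      # illustrative parameters
beta = rng.standard_normal(d)
beta *= 1.3 / np.linalg.norm(beta)
lam_star = d * sigma**2 / np.sum(beta**2)     # candidate optimal ridge parameter

def risk_direct(lam, trials=2000):
    """Monte Carlo estimate of E||beta_hat_lam - beta||^2 + sigma^2."""
    total = 0.0
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        y = X @ beta + sigma * rng.standard_normal(n)
        b = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
        total += np.sum((b - beta) ** 2) + sigma**2
    return total / trials

def risk_via_singular_values(lam, trials=2000):
    """Same quantity via the singular-value formula above."""
    c = np.sum(beta**2) / d
    total = 0.0
    for _ in range(trials):
        s = np.linalg.svd(rng.standard_normal((n, d)), compute_uv=False)
        s2 = np.zeros(d)
        s2[: len(s)] = s**2                   # pad with zeros when n < d
        total += np.sum((lam**2 * c + sigma**2 * s2) / (s2 + lam) ** 2) + sigma**2
    return total / trials

print(risk_direct(lam_star), risk_via_singular_values(lam_star))  # should roughly agree
grid = np.logspace(-2, 2, 41)
best = grid[np.argmin([risk_via_singular_values(g, 500) for g in grid])]
print("argmin over grid:", best, " closed form:", lam_star)
```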

The proof of the latter lemma is deferred to the Appendix, Section A.1. We now prove the former.

Proof.

For isotropic $x \sim \mathcal{N}(0, I_d)$, the test risk is related to the parameter error as
$$R(\hat\beta_\lambda) = \|\hat\beta_\lambda - \beta\|^2 + \sigma^2.$$

Plugging in the form $\hat\beta_\lambda = (X^\top X + \lambda I_d)^{-1} X^\top (X\beta + \varepsilon)$ and expanding:
$$\hat\beta_\lambda - \beta = -\lambda (X^\top X + \lambda I_d)^{-1}\beta + (X^\top X + \lambda I_d)^{-1} X^\top \varepsilon.$$

Now let $X = U S V^\top$ be the full singular value decomposition of $X$, with $U \in \mathbb{R}^{n\times n}$, $S \in \mathbb{R}^{n\times d}$, and $V \in \mathbb{R}^{d\times d}$. Let $s_1, \dots, s_{\min(n,d)}$ denote the singular values, defining $s_i := 0$ for $i > \min(n, d)$. Then, continuing:

$$\mathbb{E}_{X,\varepsilon}\big[\|\hat\beta_\lambda - \beta\|^2\big] = \mathbb{E}_{X}\Big[\lambda^2\, \beta^\top (X^\top X + \lambda I_d)^{-2}\beta + \sigma^2\,\mathrm{tr}\big(X (X^\top X + \lambda I_d)^{-2} X^\top\big)\Big] \qquad (7)$$
$$= \mathbb{E}_{S}\left[\lambda^2\,\frac{\|\beta\|^2}{d}\sum_{i=1}^d \frac{1}{(s_i^2+\lambda)^2}\right] + \sigma^2\,\mathbb{E}_S\left[\sum_{i=1}^d \frac{s_i^2}{(s_i^2+\lambda)^2}\right] \qquad (8)$$
$$= \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{\lambda^2\|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2+\lambda)^2}\right], \qquad (9)$$
and therefore
$$\overline{R}_n\big(\hat\beta_\lambda\big) = \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{\lambda^2\|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2+\lambda)^2}\right] + \sigma^2. \qquad (10)$$

Line (8) follows because, by symmetry, the right singular matrix $V$ is a uniformly random orthonormal matrix independent of $S$. Thus, $V^\top\beta$ is distributed as a uniformly random point on the sphere of radius $\|\beta\|$, so $\mathbb{E}\big[(V^\top\beta)_i^2\big] = \|\beta\|^2/d$. ∎

Now we are ready to prove Theorem 2.

Proof of Theorem 2.

Let $X_n \in \mathbb{R}^{n\times d}$ and $X_{n+1} \in \mathbb{R}^{(n+1)\times d}$ be any two matrices such that $X_{n+1}$ is obtained from $X_n$ by appending one additional row. By the Cauchy interlacing theorem (Theorem 4.3.4 of Horn et al. (1990); c.f. Lemma 3.4 of Marcus et al. (2014)), the singular values of $X_n$ and $X_{n+1}$ are interlaced: $s_i(X_{n+1}) \ge s_i(X_n) \ge s_{i+1}(X_{n+1})$, where $s_i(\cdot)$ denotes the $i$-th largest singular value. In particular, $s_i(X_{n+1}) \ge s_i(X_n)$ for all $i$.

If we couple $X_n$ and $X_{n+1}$ in this way (appending a fresh row $x_{n+1} \sim \mathcal{N}(0, I_d)$ to $X_n$), it induces a coupling $\pi$ between the distributions $\mathcal{S}_n$ and $\mathcal{S}_{n+1}$ of the singular values of the data matrix for $n$ and $n+1$ samples. This coupling satisfies $s_i^{(n+1)} \ge s_i^{(n)}$ with probability 1, for all $i \in [d]$.

Now, expand the optimal test risk using Equation (6), and observe that each term in the sum of Equation (11) below is monotone decreasing in $s_i$. Recalling that $\lambda^*_{n+1} = \lambda^*_n = \lambda^* = d\sigma^2/\|\beta\|^2$, we have:

$$\overline{R}_{n+1}\big(\hat\beta_{\lambda^*_{n+1}}\big) = \mathbb{E}_{(s^{(n)},\, s^{(n+1)}) \sim \pi}\left[\sum_{i=1}^d \frac{\sigma^2}{\big(s_i^{(n+1)}\big)^2 + \lambda^*}\right] + \sigma^2 \qquad (11)$$
$$\le \mathbb{E}_{(s^{(n)},\, s^{(n+1)}) \sim \pi}\left[\sum_{i=1}^d \frac{\sigma^2}{\big(s_i^{(n)}\big)^2 + \lambda^*}\right] + \sigma^2 \qquad (12)$$
$$= \overline{R}_n\big(\hat\beta_{\lambda^*_n}\big). \qquad (13)$$
∎
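The interlacing property used above is easy to verify empirically; the snippet below (with illustrative sizes) appends a fresh Gaussian row to a data matrix and checks that every singular value weakly increases, which is exactly the property the coupling relies on.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 12, 20                              # illustrative sizes
X_n = rng.standard_normal((n, d))
X_n1 = np.vstack([X_n, rng.standard_normal((1, d))])   # couple by appending one fresh row

# Pad singular values with zeros up to length d, as in the lemma.
s_n = np.zeros(d)
s_n[: min(n, d)] = np.linalg.svd(X_n, compute_uv=False)
s_n1 = np.zeros(d)
s_n1[: min(n + 1, d)] = np.linalg.svd(X_n1, compute_uv=False)

assert np.all(s_n1 >= s_n - 1e-10), "interlacing violated"
print("smallest increase of a squared singular value:", np.min(s_n1**2 - s_n**2))
```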

By similar techniques, we can also prove that over-regularization, that is, using ridge parameters larger than the optimal value, is still monotonic in samples. This matches the behavior empirically observed in Figure 1.

Theorem (over-regularization). In the same setting as Theorem 2, over-regularized ridge regression is also monotonic in samples. That is, for all $n \ge 1$ and all $\lambda \ge \lambda^*$, the following holds:

$$\overline{R}_{n+1}\big(\hat\beta_\lambda\big) \;\le\; \overline{R}_n\big(\hat\beta_\lambda\big),$$

where $\lambda^* = \frac{d\sigma^2}{\|\beta\|^2}$ is the optimal ridge parameter.

Proof.

In Section A.1. ∎

3 Model-wise Monotonicity in Ridge Regression

In this section, we show that for a certain family of linear models, optimal regularization prevents model-wise double descent. That is, for a fixed number of samples, larger models are not worse than smaller models.

We consider the following learning problem. Informally, covariates live in a $D$-dimensional ambient space, and we consider models which first linearly project the input down to a random $d$-dimensional subspace (for some model size $d \le D$), and then perform ridge regression in that subspace.

Formally, the covariate is generated from $x \sim \mathcal{N}(0, I_D)$, and the response is generated by
$$y = \langle x, \beta \rangle + \varepsilon,$$
with $\varepsilon \sim \mathcal{N}(0, \sigma^2)$ for some unknown parameter $\beta \in \mathbb{R}^D$. Next, $n$ examples $\{(x_i, y_i)\}_{i=1}^n$ are sampled i.i.d. from this distribution. For a given model size $d$, we first sample a random orthonormal matrix $P \in \mathbb{R}^{d \times D}$ (with $P P^\top = I_d$) which specifies our model. We then consider models which operate on the projected covariate $\tilde{x} := P x$, where $\tilde{x} \in \mathbb{R}^d$. We denote the joint distribution of $(x, y)$ by $\mathcal{D}$. Here, we emphasize that $D$ is some large ambient dimension and $d$ is the size of the model we learn.

For a fixed $P$, we want to learn a linear model $\hat\beta_P \in \mathbb{R}^d$ for estimating $y$ from the projected covariate $P x$, with small mean squared error on the distribution:
$$R_P(\hat\beta_P) := \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\big(\langle P x, \hat\beta_P\rangle - y\big)^2\Big].$$

For samples $\{(x_i, y_i)\}_{i=1}^n$, let $X \in \mathbb{R}^{n\times D}$ be the data matrix, $\tilde{X} := X P^\top \in \mathbb{R}^{n\times d}$ be the projected data matrix, and $y \in \mathbb{R}^n$ be the responses. For any estimator $\hat\beta_P(\tilde X, y)$ as a function of the observed samples, define the expected risk of the estimator as:

$$\overline{R}_{n,d}\big(\hat\beta_P\big) := \mathbb{E}_{P, X, y}\Big[ R_P\big(\hat\beta_P(\tilde X, y)\big)\Big]. \qquad (14)$$

We consider the regularized least-squares estimator. For a given $\lambda \ge 0$, define

$$\hat\beta_{P,\lambda}(\tilde X, y) := \arg\min_{w \in \mathbb{R}^d} \; \|\tilde X w - y\|_2^2 + \lambda \|w\|_2^2 \qquad (15)$$
$$= (\tilde X^\top \tilde X + \lambda I_d)^{-1} \tilde X^\top y. \qquad (16)$$

Let $\lambda^*_{n,d}$ be the optimal ridge parameter (that achieves the minimum expected risk) for a model of size $d$, with $n$ samples:

$$\lambda^*_{n,d} := \arg\min_{\lambda \ge 0} \; \overline{R}_{n,d}\big(\hat\beta_{P,\lambda}\big). \qquad (17)$$

Let $\hat\beta_{P,\lambda^*_{n,d}}$ be the estimator that corresponds to the optimal ridge parameter, with expected test risk

$$\overline{R}_{n,d}\big(\hat\beta_{P,\lambda^*_{n,d}}\big) = \min_{\lambda \ge 0} \; \overline{R}_{n,d}\big(\hat\beta_{P,\lambda}\big). \qquad (18)$$
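As a concrete illustration of the estimator defined in (15)-(18), the following sketch (with illustrative dimensions and noise level) draws a random orthonormal $P$ via a QR decomposition, runs ridge regression on the projected data $\tilde X = X P^\top$, and estimates the tuned test risk for several model sizes $d$:

```python
import numpy as np

rng = np.random.default_rng(3)
D, n, sigma = 100, 40, 0.5                      # illustrative ambient dim, samples, noise
beta = rng.standard_normal(D)
beta /= np.linalg.norm(beta)

def projected_ridge_risk(d, lam, trials=200):
    """Monte Carlo estimate of the expected test risk of ridge on projected features."""
    total = 0.0
    for _ in range(trials):
        X = rng.standard_normal((n, D))
        y = X @ beta + sigma * rng.standard_normal(n)
        Q, _ = np.linalg.qr(rng.standard_normal((D, d)))
        P = Q.T                                  # P P^T = I_d, a random d x D projection
        Xt = X @ P.T                             # projected data matrix
        w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(d), Xt.T @ y)
        # Population risk of x -> <Px, w>: ||P^T w - beta||^2 + sigma^2 (isotropic x).
        total += np.sum((P.T @ w - beta) ** 2) + sigma**2
    return total / trials

for d in (10, 25, 50, 75, 100):
    risks = [projected_ridge_risk(d, lam) for lam in np.logspace(-2, 3, 12)]
    print(f"d={d:3d}  best tuned risk ~ {min(risks):.3f}")
```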

Now, our main theorem in this setting shows that with optimal regularization, test performance is monotonic in model size.

Theorem 3. In the setting above, the expected test risk of the optimally-regularized model is monotonic in the model size $d$. That is, for all $n \ge 1$ and all $1 \le d < D$, we have
$$\overline{R}_{n,d+1}\big(\hat\beta_{P,\lambda^*_{n,d+1}}\big) \;\le\; \overline{R}_{n,d}\big(\hat\beta_{P,\lambda^*_{n,d}}\big).$$

Proof.

In Section A.2. ∎

The proof closely follows that of Theorem 2, making crucial use of the lemma below.

Lemma. For all $n$, $d \le D$, and $\lambda \ge 0$, let $X \in \mathbb{R}^{n\times D}$ be a matrix with i.i.d. $\mathcal{N}(0,1)$ entries. Let $P \in \mathbb{R}^{d\times D}$ be a random orthonormal matrix. Define $\tilde X := X P^\top$.

Let $s_1 \ge \dots \ge s_d \ge 0$ be the singular values of the data matrix $\tilde X$ (with $s_i = 0$ for $i > \min(n, d)$). Let $\mathcal{S}_{n,d}$ be the distribution of singular values $(s_1, \dots, s_d)$.

Then, the optimal ridge parameter is constant for all $n$:
$$\lambda^*_{n,d} = \frac{D\,\bar\sigma_d^2}{\|\beta\|^2},$$
where we define the effective noise level
$$\bar\sigma_d^2 := \sigma^2 + \Big(1 - \frac{d}{D}\Big)\|\beta\|^2.$$
Moreover, the optimal expected test risk can be written as
$$\overline{R}_{n,d}\big(\hat\beta_{P,\lambda^*_{n,d}}\big) = \mathbb{E}_{s\sim\mathcal{S}_{n,d}}\left[\sum_{i=1}^d \frac{\bar\sigma_d^2}{s_i^2 + \lambda^*_{n,d}}\right] + \bar\sigma_d^2.$$

Proof.

This proof follows exactly analogously to the proofs of the corresponding lemmas in Section A.1: conditioned on $P$, the projected problem is an isotropic regression in $d$ dimensions with ground truth $P\beta$ and effective noise variance $\sigma^2 + \|\beta\|^2 - \|P\beta\|^2$. ∎

4 Counterexamples to Monotonicity

In this section, we show that optimally-regularized ridge regression is not always monotonic in samples. We give a numeric counterexample in $d = 2$ dimensions, with non-Gaussian covariates and heteroscedastic noise. This does not contradict our main theorem in Section 2, since this distribution is not jointly Gaussian with isotropic marginals.

4.1 Counterexample

Here we give an example of a distribution for which the expected error of optimally-regularized ridge regression with $n = 2$ samples is worse than with $n = 1$ sample.

This counterexample is most intuitive to understand when the ridge parameter is allowed to depend on the specific sample instance as well as on $n$. We sketch the intuition for this below.

Consider the following distribution on $(x, y)$ in $d = 2$ dimensions. This distribution has one "clean" coordinate and one "noisy" coordinate: each covariate $x$ falls on one of the two coordinate axes, and the response is

$$y = \langle x, \beta \rangle + \varepsilon,$$

where $\varepsilon$ is uniformly random independent noise whose scale depends on which coordinate $x$ falls on (negligible noise on the "clean" coordinate, large noise on the "noisy" coordinate). This distribution is "well-specified" in that the optimal predictor is linear in $x$: $\mathbb{E}[y \mid x] = \langle x, \beta\rangle$. However, the noise is heteroscedastic.

For $n = 1$ sample, the estimator can decide whether to use small or large $\lambda$ depending on whether the sampled coordinate is the "clean" or the "noisy" one. Specifically, for the sample $(x, y)$: if $x$ falls on the clean coordinate, the optimal ridge parameter is small; if $x$ falls on the noisy coordinate, the optimal ridge parameter is large.

For $n = 2$ samples, with constant probability the two samples will hit both coordinates. In this case, the estimator must choose a single value of $\lambda$ for both coordinates. This leads to a suboptimal tradeoff, since the "noisy" coordinate demands large regularization, but large regularization hurts estimation on the "clean" coordinate.
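This tension can be explored numerically. The sketch below computes the exact expected excess risk of ridge regression for this clean/noisy family by enumerating which coordinates the samples hit. The constants are illustrative placeholders and are not the carefully chosen constants of the formal counterexample below, so for these particular values the tuned risk need not actually increase from $n = 1$ to $n = 2$.

```python
import numpy as np
from math import comb

# Illustrative (NOT the paper's) instance of the clean/noisy family.
beta = np.array([1.0, 1.0])    # ground truth
tau = 3.0                      # std-dev of the zero-mean noise on the "noisy" coordinate
p_clean = 0.5                  # probability that a sample hits the clean coordinate e_1

def expected_excess_risk(lam, n):
    """Exact expected test risk minus the noise floor, for ridge on n samples.

    Enumerates how many of the n samples hit the clean coordinate (k) vs. the
    noisy one (m = n - k); given (k, m) the ridge estimator is coordinate-wise,
    and only the variance tau^2 of the noise matters (not its exact shape).
    """
    total = 0.0
    for k in range(n + 1):
        m = n - k
        prob = comb(n, k) * p_clean**k * (1 - p_clean)**m
        err1 = (lam * beta[0] / (k + lam)) ** 2                       # clean: pure bias
        err2 = (lam * beta[1] / (m + lam)) ** 2 + m * tau**2 / (m + lam) ** 2
        total += prob * (p_clean * err1 + (1 - p_clean) * err2)
    return total

lams = np.logspace(-2, 2, 200)
for n in (1, 2):
    risks = [expected_excess_risk(l, n) for l in lams]
    i = int(np.argmin(risks))
    print(f"n={n}: tuned lambda ~ {lams[i]:.2f}, tuned excess risk ~ {risks[i]:.4f}")
```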

It turns out that a slight modification to the above also serves as a counterexample to monotonicity when the regularization parameter is chosen depending only on $n$ (and not on the instance $(X, y)$).

The modified distribution has the same clean-versus-noisy structure, with appropriately chosen constants.

Theorem. There exists a distribution $\mathcal{D}$ over $(x, y) \in \mathbb{R}^2 \times \mathbb{R}$ with the following properties.

Let $\hat\beta_{\lambda^*_n}$ be the optimally-regularized ridge regression solution for $n$ samples from $\mathcal{D}$. Then:

  1. $\mathcal{D}$ is "well-specified" in that $\mathbb{E}[y \mid x]$ is a linear function of $x$,

  2. The expected test risk increases as a function of $n$, between $n = 1$ and $n = 2$. Specifically,
$$\overline{R}_2\big(\hat\beta_{\lambda^*_2}\big) > \overline{R}_1\big(\hat\beta_{\lambda^*_1}\big).$$

Proof.

For $n = 1$ sample, the expected risk at the optimally-tuned ridge parameter can be evaluated analytically.

For $n = 2$ samples, the expected risk at the optimally-tuned ridge parameter can be evaluated numerically (via Mathematica), and is strictly larger than the $n = 1$ risk. ∎

5 Experiments

We now experimentally demonstrate that optimal regularization can mitigate double descent, in more general settings than Theorems 2 and 3.

5.1 Sample Monotonicity

Here we show various settings where optimal regularization empirically induces sample-monotonic performance.

Nonisotropic Regression. We first consider the setting of Theorem 2, but with non-isotropic covariates. That is, we perform ridge regression on samples $\{(x_i, y_i)\}_{i=1}^n$, where the covariate is generated from $x \sim \mathcal{N}(0, \Sigma)$ for a non-identity covariance $\Sigma$. As before, the response is generated by $y = \langle x, \beta\rangle + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2)$ for some unknown parameter $\beta$.

We consider the same ridge regression estimator,

$$\hat\beta_\lambda(X, y) := (X^\top X + \lambda I_d)^{-1} X^\top y. \qquad (19)$$

Figure 2: Test Risk vs. Num. Samples for Non-Isotropic Ridge Regression in $d$ dimensions. Unregularized regression is non-monotonic in samples, but optimally-regularized regression is monotonic. Note the optimal regularization $\lambda^*_n$ depends on the number of samples $n$. Plotting empirical means of the test risk over independent trials. See Figure 6 for the corresponding train errors.

Figure 2 shows one instance of this, for a particular choice of $\Sigma$ and $\beta$. The covariance $\Sigma$ is diagonal, with a 15-dimensional eigenspace of large eigenvalues and a complementary eigenspace of small eigenvalues. That is, the covariance has one "large" eigenspace and one "small" eigenspace. The ground-truth $\beta$ lies almost entirely within the "small" eigenspace of $\Sigma$. The noise parameter $\sigma$ is a fixed positive constant.

We see that unregularized regression ($\lambda = 0$) actually undergoes "triple descent" in this setting, with the first peak at around $n = 15$ samples due to the 15-dimensional large eigenspace, and the second peak at $n = d$.

In this setting, optimally-regularized ridge regression is empirically monotonic in samples (Figure 2). Unlike the isotropic setting of Section 2, the optimal ridge parameter $\lambda^*_n$ is no longer constant, but varies with the number of samples $n$.

Random ReLU Features. We consider random ReLU features, in the random features framework of Rahimi and Recht (2008). We apply random features to Fashion-MNIST (Xiao et al., 2017), an image classification problem with 10 classes. Input images are normalized and flattened to vectors $x \in \mathbb{R}^{784}$. Class labels are encoded as one-hot vectors $y \in \mathbb{R}^{10}$. For a given number of features $N$ and number of samples $n$, the random feature classifier is obtained by performing regularized linear regression on the embedding

$$\phi(x) := \mathrm{ReLU}(W x),$$

where $W \in \mathbb{R}^{N \times 784}$ is a matrix with each entry sampled i.i.d. from a Gaussian, and the ReLU is applied pointwise. This is equivalent to a 2-layer fully-connected neural network with a frozen (randomly-initialized) first layer, trained with squared loss and weight decay.
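A minimal sketch of this random-features pipeline is below. It assumes Fashion-MNIST arrays (X_train, labels_train, X_test, labels_test) have already been loaded, normalized, and flattened elsewhere; the Gaussian scaling of $W$ and the ridge value in the usage comment are illustrative choices rather than the exact ones used in our experiments.

```python
import numpy as np

def make_relu_embedding(input_dim, n_features, rng):
    """Return phi(x) = ReLU(W x) for a fixed random W (the frozen first layer)."""
    W = rng.standard_normal((n_features, input_dim)) / np.sqrt(input_dim)  # assumed scaling
    return lambda X: np.maximum(X @ W.T, 0.0)

def fit_ridge(Phi, Y, lam):
    """Regularized linear regression of one-hot labels Y (n, 10) on features Phi (n, N)."""
    N = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ Y)

def error(Phi, B, labels):
    """Classification error: predict the argmax coordinate of the regressed outputs."""
    return np.mean(np.argmax(Phi @ B, axis=1) != labels)

# Usage sketch (arrays assumed to be provided elsewhere):
# rng = np.random.default_rng(0)
# phi = make_relu_embedding(784, n_features=4000, rng=rng)
# B = fit_ridge(phi(X_train), np.eye(10)[labels_train], lam=1e-2)
# print(error(phi(X_test), B, labels_test))
```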

Figure 2(a) shows the test error of the random features classifier, for a fixed number of random features $N$ and a varying number of train samples. We see that under-regularized models are non-monotonic, but optimal regularization is monotonic in samples. Moreover, the optimal ridge parameter appears to be constant for all $n$, similar to our results for the isotropic setting in Theorem 2.

(a) Test Classification Error vs. Number of Training Samples.
(b) Test Classification Error vs. Model Size (Number of Random Features).
Figure 3: Double descent for Random ReLU Features. Test classification error as a function of model size and sample size for Random ReLU Features on Fashion-MNIST. Left: varying the number of samples for a fixed number of features. Right: varying the number of features for a fixed number of samples. See Figures 7 and 8 for the corresponding test Mean Squared Error. See Appendix D of Nakkiran et al. (2020) for the performance of these unregularized models plotted across Num. Samples × Model Size simultaneously.

5.2 Model-size Monotonicity

Here we empirically show that optimal regularization can mitigate model-wise double descent.

Random ReLU Features. We consider the same experimental setup as in Section 5.1, but now fix the number of samples $n$, and vary the number of random features $N$. This corresponds to varying the width of the corresponding 2-layer neural network.

Figure 2(b) shows the test error of the random features classifier, for a fixed number of train samples and a varying number of random features. We see that under-regularized models undergo model-wise double descent, but optimal regularization prevents double descent.

Convolutional Neural Networks. We follow the experimental setup of Nakkiran et al. (2020) for model-wise double descent, and add varying amounts of regularization (weight decay). We chose the following setting from Nakkiran et al. (2020), because it exhibits double descent even with no added label noise.

We consider the same family of 5-layer convolutional neural networks (CNNs) from Nakkiran et al. (2020), consisting of 4 convolutional layers of widths $[k, 2k, 4k, 8k]$ for a varying width parameter $k$, followed by a fully-connected layer. This family of CNNs was introduced by Page (2018). We train and test on CIFAR-100 (Krizhevsky and Hinton, 2009), an image classification problem with 100 classes. Inputs are normalized, and we use standard data-augmentation of random horizontal flip and random crop with 4-pixel padding. All models are trained using Stochastic Gradient Descent (SGD) on the cross-entropy loss, with a step size that decays with the step count. We train for a fixed budget of gradient steps, and use weight decay of strength $\lambda$ for varying $\lambda$. Due to optimization instabilities for large $\lambda$, we use the model with the minimum train loss among the last 5K gradient steps.
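For concreteness, the following PyTorch sketch shows one way to instantiate such a width-scaled CNN and train it with SGD plus weight decay. It is a simplified stand-in and is not claimed to match the exact architecture, normalization, or learning-rate schedule of Page (2018).

```python
import torch
import torch.nn as nn

def make_cnn(k: int, num_classes: int = 100) -> nn.Module:
    """Width-scaled CNN: 4 conv layers of widths [k, 2k, 4k, 8k], then one linear layer."""
    def block(c_in, c_out, pool):
        layers = [nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU()]
        if pool:
            layers.append(nn.MaxPool2d(2))
        return layers

    return nn.Sequential(
        *block(3, k, pool=False),         # 32x32
        *block(k, 2 * k, pool=True),      # -> 16x16
        *block(2 * k, 4 * k, pool=True),  # -> 8x8
        *block(4 * k, 8 * k, pool=True),  # -> 4x4
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8 * k, num_classes),
    )

def train(model, loader, steps, lr=0.05, weight_decay=1e-4, device="cpu"):
    """SGD with weight decay on cross-entropy; weight_decay is the regularization strength."""
    model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Usage sketch: sweep the width k (model size) and weight_decay, e.g.
# model = train(make_cnn(k=16), cifar100_train_loader, steps=100_000, weight_decay=5e-4)
```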

Figure 4: Test Error vs. Model Size for 5-layer CNNs on CIFAR-100, with $\ell_2$ regularization (weight decay). Note that the optimal regularization strength varies with the model size. See Figure 5 for the corresponding train errors.

Figure 4 shows the test error of these models on CIFAR-100. Although unregularized and under-regularized models exhibit double descent, the test error of optimally-regularized models is largely monotonic. Note that the optimal regularization varies with the model size: no single regularization value is optimal for all models.

6 Towards Monotonicity with General Covariates

Here we investigate whether monotonicity provably holds in more general models, inspired by the experimental results. As a first step, we consider Gaussian (but not isotropic) covariates and homoscedastic noise. That is, we consider ridge regression in the setting of Section 2, but with $x \sim \mathcal{N}(0, \Sigma)$ for a general covariance $\Sigma$, and $\varepsilon \sim \mathcal{N}(0, \sigma^2)$ as before. In this section, we observe that ridge regression can be made sample-monotonic with a modified regularizer. We also conjecture that ridge regression is sample-monotonic without modifying the regularizer, and we outline a potential proof strategy along with numerical evidence.

6.1 Adaptive Regularization

The results on isotropic regression in Section 2 imply that ridge regression can be made sample-monotonic even for non-isotropic covariates, if an appropriate regularizer is applied. Specifically, the appropriate regularizer depends on the covariance of the inputs: for $x \sim \mathcal{N}(0, \Sigma)$, the following estimator is sample-monotonic for optimally-tuned $\lambda$:

$$\hat\beta_\lambda(X, y) := \arg\min_{w} \; \|X w - y\|_2^2 + \lambda\, w^\top \Sigma\, w \;=\; (X^\top X + \lambda \Sigma)^{-1} X^\top y. \qquad (20)$$

This follows directly from Theorem 2 by applying a change-of-variable; full details of this equivalence are in Section A.3. Note that if the population covariance is not known, it can potentially be estimated from unlabeled data.
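The change of variables is short enough to check in code. The sketch below (with illustrative sizes) implements the covariance-adapted estimator of Equation (20) both directly and via whitening the covariates, and verifies that the two coincide:

```python
import numpy as np

def adaptive_ridge(X, y, Sigma, lam):
    """Ridge with covariance-adapted penalty lam * w^T Sigma w (cf. Equation (20))."""
    return np.linalg.solve(X.T @ X + lam * Sigma, X.T @ y)

def adaptive_ridge_via_whitening(X, y, Sigma, lam):
    """Equivalent form: whiten x -> Sigma^{-1/2} x, run ordinary ridge, map back."""
    evals, V = np.linalg.eigh(Sigma)
    S_half_inv = V @ np.diag(evals**-0.5) @ V.T
    Z = X @ S_half_inv                                   # whitened (isotropic) covariates
    w_tilde = np.linalg.solve(Z.T @ Z + lam * np.eye(X.shape[1]), Z.T @ y)
    return S_half_inv @ w_tilde                          # back to the original coordinates

# Quick equivalence check with illustrative parameters.
rng = np.random.default_rng(4)
d, n = 5, 8
A = rng.standard_normal((d, d))
Sigma = A @ A.T + np.eye(d)                              # positive definite covariance
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
print(np.allclose(adaptive_ridge(X, y, Sigma, 0.7),
                  adaptive_ridge_via_whitening(X, y, Sigma, 0.7)))
```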

6.2 Towards Proving Monotonicity

We conjecture that optimally-regularized ridge regression is sample-monotonic for non-isotropic covariates, even without modifying the regularizer (as suggested by the experiment in Figure 2). We derive a sufficient condition for monotonicity, which we have numerically verified in a variety of instances.

Specifically, we conjecture the following.

Conjecture. For all $\beta \in \mathbb{R}^d$, $\sigma \ge 0$, and all PSD covariances $\Sigma$, consider the distribution on $(x, y)$ where $x \sim \mathcal{N}(0, \Sigma)$, $y = \langle x, \beta\rangle + \varepsilon$, and $\varepsilon \sim \mathcal{N}(0, \sigma^2)$. Then, we conjecture that the expected test risk of the ridge regression estimator

$$\hat\beta_\lambda(X, y) := (X^\top X + \lambda I_d)^{-1} X^\top y, \qquad (21)$$

for optimally-tuned $\lambda = \lambda^*_n$, is monotone non-increasing in the number of samples $n$.
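The conjecture can also be probed empirically. The following sketch (with an illustrative anisotropic covariance and noise level) estimates the optimally-tuned expected risk over a $\lambda$ grid for a range of sample sizes; such a check is of course only suggestive evidence, not a proof.

```python
import numpy as np

rng = np.random.default_rng(5)
d, sigma = 8, 0.5                                         # illustrative parameters
Sigma = np.diag(np.concatenate([np.full(3, 5.0), np.full(d - 3, 0.2)]))  # anisotropic
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)
lams = np.logspace(-3, 3, 30)

def tuned_risk(n, trials=1000):
    """Min over the lambda grid of the Monte Carlo expected risk for n samples."""
    risks = np.zeros(len(lams))
    for _ in range(trials):
        X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
        y = X @ beta + sigma * rng.standard_normal(n)
        XtX, Xty = X.T @ X, X.T @ y
        for j, lam in enumerate(lams):
            e = np.linalg.solve(XtX + lam * np.eye(d), Xty) - beta
            risks[j] += e @ Sigma @ e + sigma**2           # population risk for x ~ N(0, Sigma)
    return (risks / trials).min()

print([round(tuned_risk(n), 3) for n in range(1, 13)])
```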

In order to establish the conjecture above, it is sufficient to prove the following technical conjecture.

Conjecture. For all $n$, $\lambda \ge 0$, and all symmetric positive definite matrices $\Sigma$, the following holds.

Define

where the underlying random matrix is sampled with each entry i.i.d. $\mathcal{N}(0, 1)$. Similarly, define the corresponding quantities for $n + 1$ samples.

Then, we conjecture that

(22)

Proving this technical conjecture presents a number of technical challenges, but we have numerically verified it in a variety of cases. (One can numerically verify the conjecture for a fixed $n$, $\lambda$, and $\Sigma$. Here $\Sigma$ can be assumed to be diagonal w.l.o.g., because the underlying Gaussian matrix is isotropic. The matrices and scalars in Equation (22) can be evaluated by sampling the random matrix. The derivatives with respect to $\lambda$ can be computed by auto-differentiation.)

It can also be shown that the technical conjecture is true when $\Sigma = I_d$, corresponding to isotropic covariates. We show that the second (technical) conjecture implies the first in Section A.3.1 of the Appendix.

7 Discussion and Conclusion

In this work, we study the double descent phenomenon in the context of optimal regularization. We show that, while unregularized or under-regularized models often have non-monotonic behavior, appropriate regularization can eliminate this effect.

Theoretically, we prove that for certain linear regression models with isotropic covariates, optimally-tuned regularization achieves monotonic test performance as we grow either the sample size or the model size. These are the first non-asymptotic monotonicity results we are aware of in linear regression. We also demonstrate empirically that optimally-tuned regularization can mitigate double descent for more general models, including neural networks. We hope that our results can motivate future work on mitigating the double descent phenomenon, and allow us to train high performance models which do not exhibit unexpected nonmonotonic behaviors.

Open Questions. Our work suggests a number of natural open questions. First, it is open to prove (or disprove) that optimal ridge regression is sample-monotonic for non-isotropic Gaussian covariates (the first conjecture in Section 6.2). We conjecture that it is, and outline a potential route to proving this (via the technical conjecture in Section 6.2). The non-isotropic setting presents a number of differences from the isotropic one (e.g., the optimal regularizer depends on the number of samples $n$), and thus a proof of this may yield further insight into the mechanisms of monotonicity.

Second, more broadly, it is open to prove sample-wise or model-wise monotonicity for more general (non-linear) models with appropriate regularizers. Addressing the monotonicity of non-linear models may require us to design new regularizers which improve generalization when the model size is close to the sample size. It is possible that data-dependent regularizers (which depend on certain statistics of the labeled or unlabeled data) can be used to induce sample monotonicity, analogous to the approach in Section 6.1 for linear models. Recent work has introduced data-dependent regularizers for deep models with improved generalization upper bounds (Wei and Ma, 2019a, b); however, a precise characterization of the test risk remains elusive.

Finally, it is open to understand why large neural networks in practice are often sample-monotonic in realistic regimes of sample sizes, even without careful choice of regularization.

Acknowledgements

Work supported in part by the Simons Investigator Awards of Boaz Barak and Madhu Sudan, and NSF Awards under grants CCF 1715187, CCF 1565264 and CNS 1618026. Sham Kakade acknowledges funding from the Washington Research Foundation for Innovation in Data-intensive Discovery, and the NSF Awards CCF-1703574, and CCF-1740551.

The numerical experiments were supported in part by Google Cloud research credits, and a gift from Oracle. The work is also partially supported by SDSI and SAIL at Stanford.

Appendix A Appendix

In Sections A.1 and A.2 we provide the proofs of sample-wise monotonicity and model-size monotonicity. In Section A.4 we include additional and omitted plots.

A.1 Sample Monotonicity Proofs

We first prove the lemma from Section 2 characterizing the optimal ridge parameter and the optimal expected test risk.

Proof.

First, we determine the optimal ridge parameter. Using the expected-risk formula from Section 2, we have
$$\frac{\partial}{\partial\lambda}\,\overline{R}_n\big(\hat\beta_\lambda\big) \;=\; \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{2\, s_i^2\,\big(\lambda \|\beta\|^2/d - \sigma^2\big)}{(s_i^2 + \lambda)^3}\right].$$
Thus, the derivative is negative for $\lambda < \frac{d\sigma^2}{\|\beta\|^2}$ and positive for $\lambda > \frac{d\sigma^2}{\|\beta\|^2}$, and we conclude that $\lambda^*_n = \frac{d\sigma^2}{\|\beta\|^2}$ for every $n$.

For this optimal parameter, the test risk follows from the expected-risk formula as

$$\overline{R}_n\big(\hat\beta_{\lambda^*_n}\big) = \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{(\lambda^*_n)^2 \|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2 + \lambda^*_n)^2}\right] + \sigma^2 \qquad (23)$$
$$= \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{\sigma^2}{s_i^2 + \lambda^*_n}\right] + \sigma^2, \qquad (24)$$

where the last step uses $\lambda^*_n \|\beta\|^2/d = \sigma^2$. ∎

Proof of the over-regularization theorem.

We follow a similar proof strategy as in Theorem 2: we invoke singular value interlacing ($s_i(X_{n+1}) \ge s_i(X_n)$) for the data matrix when adding a single sample. We then apply the expected-risk formula from Section 2 to argue that the test risk varies monotonically with the singular values.

We have
$$\overline{R}_n\big(\hat\beta_\lambda\big) = \mathbb{E}_{s\sim\mathcal{S}_n}\left[\sum_{i=1}^d \frac{\lambda^2\|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2+\lambda)^2}\right] + \sigma^2,$$
and we compute how each term in the sum varies with $s_i^2$:
$$\frac{\partial}{\partial (s_i^2)}\left[\frac{\lambda^2\|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2+\lambda)^2}\right] = \frac{\sigma^2\lambda - \sigma^2 s_i^2 - 2\lambda^2\|\beta\|^2/d}{(s_i^2+\lambda)^3}.$$
Thus, for all $\lambda \ge \lambda^* = \frac{d\sigma^2}{\|\beta\|^2}$ (so that $\lambda^2\|\beta\|^2/d \ge \lambda\sigma^2$), we have

$$\frac{\partial}{\partial (s_i^2)}\left[\frac{\lambda^2\|\beta\|^2/d + \sigma^2 s_i^2}{(s_i^2+\lambda)^2}\right] \;\le\; \frac{-\sigma^2\lambda - \sigma^2 s_i^2}{(s_i^2+\lambda)^3} \;\le\; 0. \qquad (25)$$

By the coupling argument in the proof of Theorem 2, this implies that the test risk is monotonic:

$$\overline{R}_n\big(\hat\beta_\lambda\big) = \mathbb{E}_{(s^{(n)},\,s^{(n+1)})\sim\pi}\left[\sum_{i=1}^d \frac{\lambda^2\|\beta\|^2/d + \sigma^2 \big(s_i^{(n)}\big)^2}{\big(\big(s_i^{(n)}\big)^2+\lambda\big)^2}\right] + \sigma^2 \qquad (26)$$
$$\overline{R}_{n+1}\big(\hat\beta_\lambda\big) = \mathbb{E}_{(s^{(n)},\,s^{(n+1)})\sim\pi}\left[\sum_{i=1}^d \frac{\lambda^2\|\beta\|^2/d + \sigma^2 \big(s_i^{(n+1)}\big)^2}{\big(\big(s_i^{(n+1)}\big)^2+\lambda\big)^2}\right] + \sigma^2 \qquad (27)$$
$$\qquad (28)$$
$$\overline{R}_{n+1}\big(\hat\beta_\lambda\big) \;\le\; \overline{R}_n\big(\hat\beta_\lambda\big), \qquad (29)$$

where $\pi$ is the coupling. Line (29) follows from Equation (25), and the fact that the coupling obeys $s_i^{(n+1)} \ge s_i^{(n)}$ for all $i$. ∎
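The sign computation in (25) is easy to check numerically; the snippet below (with illustrative constants) evaluates the summand on a grid of $s_i^2$ values and confirms it is non-increasing for $\lambda \ge \lambda^*$ but not for an under-regularized $\lambda$.

```python
import numpy as np

# Check the sign claim in (25): for lam >= lam*, the summand is non-increasing in s^2.
d, sigma, beta_norm2 = 10, 0.5, 2.0          # illustrative constants
c = beta_norm2 / d
lam_star = sigma**2 / c                       # = d * sigma^2 / ||beta||^2
s2 = np.linspace(0.0, 50.0, 2001)

def summand(s2, lam):
    return (lam**2 * c + sigma**2 * s2) / (s2 + lam) ** 2

for lam in (lam_star, 2 * lam_star, 10 * lam_star):
    print(lam, "non-increasing:", bool(np.all(np.diff(summand(s2, lam)) <= 1e-12)))

# For an under-regularized lam < lam*, the summand is NOT monotone in s^2,
# consistent with the risk peak of under-regularized regression.
print(0.1 * lam_star, "non-increasing:",
      bool(np.all(np.diff(summand(s2, 0.1 * lam_star)) <= 1e-12)))
```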

A.2 Projection Model Proofs

Lemma. For all $n$, $d \le D$, and $\lambda \ge 0$, let $X \in \mathbb{R}^{n\times D}$ be a matrix with i.i.d. $\mathcal{N}(0,1)$ entries. Let $P \in \mathbb{R}^{d\times D}$ be a random orthonormal matrix. Define $\tilde X := X P^\top$ and $\bar\sigma_d^2 := \sigma^2 + (1 - d/D)\|\beta\|^2$.

Let $s_1 \ge \dots \ge s_d \ge 0$ be the singular values of the data matrix $\tilde X$ (with $s_i = 0$ for $i > \min(n, d)$). Let $\mathcal{S}_{n,d}$ be the distribution of singular values $(s_1, \dots, s_d)$.

Then, the expected test risk is
$$\overline{R}_{n,d}\big(\hat\beta_{P,\lambda}\big) = \mathbb{E}_{s\sim\mathcal{S}_{n,d}}\left[\sum_{i=1}^d \frac{\lambda^2\|\beta\|^2/D + \bar\sigma_d^2\, s_i^2}{(s_i^2+\lambda)^2}\right] + \bar\sigma_d^2.$$