Benefit of Interpolation in Nearest Neighbor Algorithms
Abstract
Overparameterized models attract much attention in the era of data science and deep learning. It is empirically observed that although these models, e.g., deep neural networks, overfit the training data, they can still achieve small testing error, and sometimes even outperform traditional algorithms designed to avoid overfitting. The major goal of this work is to sharply quantify the benefit of data interpolation in the context of nearest neighbors (NN) algorithms. Specifically, we consider a class of interpolated weighting schemes and carefully characterize their asymptotic performance. Our analysis reveals a U-shaped performance curve with respect to the level of data interpolation, and proves that a mild degree of data interpolation strictly improves the prediction accuracy and statistical stability over those of the (un-interpolated) optimal kNN algorithm. This theoretically justifies (predicts) the existence of the second U-shaped curve in the recently discovered double descent phenomenon. Note that our goal in this study is not to promote the use of the interpolated-NN method, but to obtain theoretical insights on data interpolation inspired by the aforementioned phenomenon.
1 Introduction
Classical statistical learning theory holds that overfitting deteriorates prediction performance: when the model complexity is beyond necessity, the testing error can be huge. Therefore, various techniques have been proposed in the literature to avoid overfitting, such as early stopping, dropout and cross validation. However, recent experiments reveal that even with overfitting, many learning algorithms still achieve small generalization error. For instance, Wyner et al. (2017) explored overfitting in AdaBoost and random forest algorithms; Belkin et al. (2019a) discovered a double descent phenomenon in random forests and neural networks: with growing model complexity, testing performance first follows a (conventional) U-shaped curve, and as the level of overfitting increases, a second descent, or even a second U-shaped testing error curve, occurs.
To theoretically understand the effect of overfitting or data interpolation, Du and Lee (2018), Du et al. (2019, 2018), Arora et al. (2018, 2019) and Xie et al. (2017) analyzed how to train neural networks under overparametrization, and why overfitting does not jeopardize testing performance; Belkin et al. (2019) constructed a Nadaraya-Watson kernel regression estimator which perfectly fits the training data but is still minimax rate-optimal; Belkin et al. (2018) and Xing et al. (2018) studied the rate of convergence of the interpolated nearest neighbor algorithm (interpolated-NN); Belkin et al. (2019b) and Bartlett et al. (2019) quantified the prediction MSE of the linear least squares estimator when the data dimension is larger than the sample size and the training loss attains zero. A similar analysis was conducted by Hastie et al. (2019) for two-layer neural network models with a fixed first layer.
In this work, we aim to provide theoretical answers to whether, when and why interpolated-NN performs better than the optimal kNN, via sharp analysis. The classical kNN algorithm, for either regression or classification, is known to be rate-minimax under mild conditions (Chaudhuri and Dasgupta, 2014), say, when k diverges properly with n. However, can such a simple and versatile algorithm still benefit from intentional overfitting? We first demonstrate some empirical evidence below.
Belkin et al. (2018) designed an interpolated weighting scheme as follows:
w_i(x) = r_i^{-γ} / Σ_{j=1}^{k} r_j^{-γ},   γ ≥ 0,   (1)
where X_i is the ith closest neighbor to x, with corresponding label Y_i. The parameter γ ≥ 0 controls the level of interpolation: with a larger γ, the algorithm puts more weight on the closer neighbors. In particular, when γ = 0 or γ → ∞, interpolated-NN reduces to kNN or 1NN, respectively. Belkin et al. (2018) showed that such an interpolated estimator is rate-minimax in the regression setup, but suboptimal in the setting of binary classification. Later, Xing et al. (2018) obtained the minimax rate of classification by adopting a slightly different interpolating kernel. What is more interesting is the preliminary numerical analysis (see Figure 1) conducted in the aforementioned paper, which demonstrates that interpolated-NN can even outperform the rate-minimax kNN in terms of MSE (regression) or misclassification rate (classification). This observation calls for deeper theoretical exploration beyond the rate of convergence. A reasonable conjecture is that interpolated-NN possesses a smaller multiplicative constant in its rate of convergence, which may be used to study the generalization ability within the "overparametrized regime."
In this study, we theoretically compare the minimax-optimal kNN and the interpolated-NN (under (1)) in terms of their multiplicative constants. We show that under proper smoothness conditions, the multiplicative constant of interpolated-NN, as a function of the interpolation level γ, is U-shaped. As a consequence, interpolation indeed leads to more accurate and stable performance when the interpolation level γ lies in (0, γ_d) for some γ_d depending only on the data dimension d. The amount of benefit (i.e., the "performance ratio" defined in Section 2) follows exactly the same asymptotic pattern for both regression and classification tasks. In addition, the gain from interpolation diminishes as the dimension grows to infinity, i.e., high-dimensional data benefit less from data interpolation. We also point out that there exist other "non-interpolating" weighting schemes, such as OWNN, which can achieve an even better performance; see Section 3.4. More subtle results are summarized in the figure below.
Based on Figure 2, we theoretically justify (predict) the existence of the U-shaped curve within the "overfitting regime" of the double descent phenomenon recently discovered by Belkin et al. (2019a, b). As a complement to Belkin et al. (2018) and Xing et al. (2018), we further show in Section F of the appendix that interpolated-NN attains the optimal rate for both regression and classification under more general smoothness conditions.
In the end, we emphasize that our goal here is not to promote the practical use of this interpolation method, given that kNN is more user-friendly. Rather, the interpolated-NN algorithm is used to precisely describe the role of interpolation in generalization ability, so that more solid theoretical arguments can be made for the very interesting double descent phenomenon, especially in the overfitting regime.
2 Interpolation in Nearest Neighbors Algorithm
In this section, we review the interpolated-NN algorithm introduced by Belkin et al. (2018) in more detail. Given x, define R_{k+1}(x) to be the distance between x and its (k+1)th nearest neighbor. Without loss of generality, let X_1, …, X_k denote the (unsorted) k nearest neighbors of x, and let r_1, …, r_k be their distances to x. Based on the argument used in Chaudhuri and Dasgupta (2014) and Belkin et al. (2018), conditional on R_{k+1}(x), X_1 to X_k are i.i.d. variables supported on the ball centered at x with radius R_{k+1}(x); as a consequence, r_1 to r_k are conditionally independent given R_{k+1}(x) as well. When no confusion arises, we write R_{k+1}(x) as R_{k+1}. The weights of the neighbors are then defined as
for i = 1, …, k and some γ ≥ 0.
For regression models, denote η(x) = E(Y | X = x) as the target function, and Y = η(X) + ε, where ε is an independent zero-mean noise with finite variance. The regression estimator at x is thus
For binary classification, denote η(x) = P(Y = 1 | X = x), with g(x) = 1{η(x) > 1/2} as the Bayes classifier. The interpolated-NN classifier is defined as
As discussed previously, the parameter γ controls the level of interpolation: a larger value of γ leads to a higher degree of data interpolation.
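As a concrete sketch, a weighting scheme with the limits described above (kNN at γ = 0, 1NN as γ → ∞) can be obtained by taking weights proportional to r_i^{-γ}. The function below is an illustration under that assumption, not necessarily the paper's exact kernel; the name interpolated_nn and the 0.5 plug-in threshold are ours.

```python
import numpy as np

def interpolated_nn(x, X_train, y_train, k=5, gamma=1.0):
    """Interpolated-NN prediction at a single query point x.

    Weights are proportional to r_i^(-gamma): gamma = 0 gives the
    uniform kNN weights, while gamma -> infinity concentrates all
    weight on the nearest neighbor (1NN).
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    nn_idx = np.argsort(dists)[:k]           # indices of k nearest neighbors
    r = np.maximum(dists[nn_idx], 1e-12)     # guard against zero distances
    w = r ** (-gamma)
    w /= w.sum()                             # normalize weights
    eta_hat = float(np.dot(w, y_train[nn_idx]))  # regression estimate
    label = int(eta_hat > 0.5)               # plug-in classifier
    return eta_hat, label
```

With γ = 0 the estimate is the plain average of the k nearest labels; increasing γ interpolates the training data more aggressively, since the weight of a neighbor at distance r blows up as r → 0.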
We adopt the conventional measures to evaluate the theoretical performance of interpolated-NN given a new test point x:
3 Quantification of Interpolation Effect
3.1 Model Assumptions
Recent works by Belkin et al. (2018) and Xing et al. (2018) confirm the rate optimality of MSE and Regret for interpolated-NN under mild interpolation conditions. Two deeper questions (hinted at by Figure 1) that we would like to address are whether and how interpolation strictly benefits the NN algorithm, and whether interpolation affects regression and classification in the same manner.
To facilitate our theoretical investigation, we impose the following assumptions:

A.1: X is a d-dimensional random variable supported on a compact set with a boundary.

A.2: For classification, the decision boundary {x : η(x) = 1/2} is nonempty.

A.3: … for some constant.

A.4: For classification, η is continuous in some open set containing the decision boundary. The third-order derivative of η is bounded within a small constant distance of the boundary. The gradient of η is nonzero on the boundary and, restricted to the boundary, … if … and ….

A.5: For classification, the density of X, denoted f, is twice differentiable and finite.

A.6: For regression, the third-order derivative of η is bounded over the whole support.

A.7: For regression, the density of X is finite and has a finite first-order derivative on the support.
The above assumptions (except A.3) are mostly adapted from the framework established by Samworth (2012). Note that the additional smoothness required in A.4 and A.6 is needed to facilitate the asymptotic study of the interpolation weighting scheme. We also point out that these assumptions are generally stronger than those used in Chaudhuri and Dasgupta (2014), but necessary for pinning down the multiplicative constant. Further discussion regarding the conditions can be found in Remark 3 in the appendix.
3.2 Main Theorem
The following theorem quantifies how interpolation affects the NN estimate; Corollary 2 then examines the asymptotic performance ratios of MSE and Regret between interpolated-NN and kNN, and discovers that these ratios (under their respective optimal choices of k) converge to a function of γ and d only. In particular, a U-shaped curve is revealed, where the ratio is smaller than 1 when γ ∈ (0, γ_d) for some γ_d > 0.
Theorem 1
For regression, suppose that assumptions A.1, A.3, A.6, and A.7 hold. If k satisfies … for some …, we have¹
¹The notation "…" is understood as "…".
For classification, under A.1 to A.5, the excess risk (Regret) with respect to the Bayes classifier becomes
where the exact forms/values of the constants can be found in Section C of the appendix.
Theorem 1 holds for any k within a properly diverging range, as in Samworth (2012) and Sun et al. (2016). This allows us to define the minimum MSE and Regret over such k as follows:
Corollary 2 asymptotically compares interpolated-NN (γ > 0) and kNN (γ = 0) in terms of the above measures. Interestingly, it turns out that the performance ratio, defined as
is a function of γ and d only, independent of the underlying data distribution. Note that this ratio is just the ratio of the multiplicative constants in front of the minimax rates of interpolated-NN and kNN.
Corollary 2
Under the same conditions as in Theorem 1, for any γ in the admissible range,
Note that the denominator above is the optimal MSE/Regret for kNN.
When k can be chosen adaptively based on γ, we can address the second question: interpolation affects regression and classification in exactly the same manner through this common performance ratio. In particular, the ratio exhibits an interesting U-shape in γ for any fixed d. Specifically, as γ increases from 0, the ratio first decreases below 1 and then increases above 1; see Figures 2 and 3. Therefore, within the range γ ∈ (0, γ_d) for some γ_d depending only on the dimension d, the ratio is strictly smaller than 1; that is, interpolated-NN is strictly better than kNN. Given the imposed condition on γ, some further calculations characterize γ_d for small and for large d.
Remark 1
It is easy to show that the benefit of interpolation vanishes as d → ∞. This indicates that high-dimensional models benefit less from interpolation, or, said differently, high-dimensional models are less affected by data interpolation. This phenomenon can be explained by the fact that, as d increases, the neighbor distances become nearly indistinguishable due to high-dimensional geometry, so the interpolated weights degenerate toward uniform weights.
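This degeneration can be made concrete by a heuristic calculation: assume the density is locally constant, so that, conditional on R_{k+1}, each normalized distance t_i = r_i / R_{k+1} has density d t^{d-1} on [0, 1] (this uniform-ball approximation, and the symbol t_i, are our simplification of the setup in Section 2). Then, for fixed γ < d/2,

```latex
\mathbb{E}\left[t_i^{-\gamma}\right]
  = \int_0^1 t^{-\gamma} \, d\, t^{d-1} \, \mathrm{d}t
  = \frac{d}{d-\gamma},
\qquad
\mathrm{Var}\left(t_i^{-\gamma}\right)
  = \frac{d}{d-2\gamma} - \left(\frac{d}{d-\gamma}\right)^{2}
  \;\longrightarrow\; 0
  \quad \text{as } d \to \infty,
```

so each unnormalized weight t_i^{-γ} concentrates around a common constant, and the interpolated weighting scheme degenerates toward the uniform weights of kNN, consistent with the diminishing gain in high dimensions.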
Remark 2
The optimal k, which leads to the best MSE/Regret, depends on the interpolation level γ; we thus denote it as k(γ). As shown in the appendix, k(γ) and k(0) diverge at the same rate, but k(γ) is slightly larger for γ > 0, i.e., interpolated-NN needs to employ slightly more neighbors to achieve its best performance. Empirical support for this finding can be found in Section A of the appendix. If we insist on using the same k for interpolated-NN and kNN, we can still verify that interpolation improves both MSE and Regret when γ is within a proper range depending on the distribution of X and Y.
3.3 Statistical Stability
In this section, we explore how interpolation affects the statistical stability of nearest neighbor classification algorithms, beyond the generalization results obtained in Section 3.2. In short, if we choose the best k for kNN and apply the same k to interpolated-NN, then kNN will be more stable; however, interpolated-NN will be more stable if its k is allowed to be chosen separately and optimally based on γ.
For a stable classification method, it is expected that, with high probability, the classifier yields the same prediction when trained on different data sets sampled from the same population. To capture this, Sun et al. (2016) introduced a notion of statistical stability, classification instability (CIS), which is different from the algorithmic stability in the literature (Bousquet and Elisseeff, 2002). Denote two i.i.d. training sets with the same sample size n. The CIS is defined as:
Hence, a larger value of CIS indicates that the classifier is less statistically stable. In practice, we need to take into account the misclassification rate and the classification instability at the same time. Therefore, we compare the stability of interpolated-NN and kNN only when the Regrets of both algorithms reach their optima under their respective optimal choices of k.
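The definition above suggests a direct Monte Carlo estimate: train two classifiers on independent samples of the same size and record the fraction of test points on which their predictions disagree. The helper names below (knn_predict, estimate_cis) and the r^{-γ} weighting are our own illustration of the scheme from Section 2.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5, gamma=0.0):
    """Predict 0/1 labels with the (interpolated) kNN rule; gamma = 0
    recovers the standard kNN classifier."""
    preds = np.empty(len(X_test), dtype=int)
    for i, x in enumerate(X_test):
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        r = np.maximum(d[idx], 1e-12)
        w = r ** (-gamma)
        preds[i] = int(np.dot(w / w.sum(), y_train[idx]) > 0.5)
    return preds

def estimate_cis(sample_fn, n, X_test, k=5, gamma=0.0, reps=20, seed=0):
    """Monte Carlo estimate of classification instability (CIS): the
    expected fraction of test points on which classifiers trained on
    two independent same-size samples disagree."""
    rng = np.random.default_rng(seed)
    disagree = []
    for _ in range(reps):
        X1, y1 = sample_fn(n, rng)      # first i.i.d. training set
        X2, y2 = sample_fn(n, rng)      # second i.i.d. training set
        p1 = knn_predict(X1, y1, X_test, k, gamma)
        p2 = knn_predict(X2, y2, X_test, k, gamma)
        disagree.append(np.mean(p1 != p2))
    return float(np.mean(disagree))
```

Comparing estimate_cis at γ = 0 and at a small γ > 0 (with each method's own tuned k) mimics the comparison carried out in Corollary 4 below.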
Theorem 3 below illustrates how CIS is affected by interpolation.
Theorem 3
Under the conditions in Theorem 1, the CIS of interpolated-NN is derived as
Similarly, Corollary 4 asymptotically compares the CIS of interpolated-NN and kNN.
Corollary 4
Following the conditions in Theorem 3, when the same k value is used for kNN and interpolated-NN, then as n → ∞,
On the other hand, if we choose the optimal k's for kNN and interpolated-NN respectively, then when γ ∈ (0, γ_d), we have
Therefore, when γ ∈ (0, γ_d), interpolated-NN with optimal k attains higher accuracy and higher stability than kNN at the same time.
From Corollary 4, interpolated-NN is not as stable as kNN if the same number of neighbors is used in both algorithms. However, this is not the case if the optimal k is chosen separately for each. An intuitive explanation is that, under the same k, kNN has a smaller variance (and is thus more stable) since it puts equal weights on all neighbors; on the other hand, by choosing an optimal (larger) k, interpolated-NN achieves a much smaller bias, which offsets its loss in variance through enlarging k.
3.4 Connection with OWNN and Double Descent Phenomenon
Samworth (2012) first worked out a general form of the regret for rank-based weighting schemes, and proposed the optimally weighted nearest neighbors (OWNN) algorithm. OWNN is the best nearest neighbors algorithm, in terms of minimizing MSE for regression (and Regret for classification), among all weighting schemes that depend only on the ranks of the neighbor distances.
Combining Theorem 1, Corollary 2 with Samworth and others (2012), we can further compare the interpolatedNN against OWNN as follows:
which is always smaller than 1 (by definition). Here the numerator is the MSE/Regret of OWNN under its optimal weights, and the denominator is that of interpolated-NN under its own optimal choice. It is interesting to note from the above ratio that the advantage of OWNN is only reflected at the level of the multiplicative constant, and further that the ratio converges to 1 as d diverges (just as in Remark 1). Thus, in the ultra-high-dimensional setting, the performance differences among kNN, interpolated-NN and OWNN are almost negligible, even at the multiplicative-constant level.
We first describe the framework of the recently discovered double descent phenomenon (e.g., Belkin et al., 2019a, b), and then comment on our contributions to it (summarized in Figure 2) in the context of nearest neighbor algorithms. Specifically, within the "classical regime," where exact data interpolation is impossible, the testing error curve is the usual U-shape with respect to model complexity; once the model complexity passes a critical point, the model enters the "overfitting regime," where the testing error starts to decrease again as the degree of data interpolation increases — the so-called "double descent". (We avoid dashes here:) the testing error starts to decrease again as the degree of data interpolation increases, which is the so-called "double descent".
In the context of nearest neighbors algorithms, different weighting schemes may be viewed as a surrogate for model complexity. Although OWNN allocates more weight to closer neighbors, none of its weights is large enough to interpolate the data; thus, OWNN is never an interpolating weighting scheme. From this perspective, kNN and OWNN both belong to the "classical regime," while interpolated-NN is within the "overfitting regime." In particular, the testing performance of OWNN reaches the minimum point of the U-shaped curve inside the "classical regime"; deviation from this optimal choice of weights increases the MSE/Regret within this regime. After interpolated-NN enters the "overfitting regime" (see Figure 2), the MSE/Regret decreases as the interpolation level increases within (0, γ_d) and ascends again when γ > γ_d (if the dimension allows such γ), forming the second U-shaped curve in Figure 2. Therefore, we obtain an overall W-shaped performance curve with theoretical guarantees, which coincides with the empirical finding of Belkin et al. (2019b) for overparametrized linear models.
4 Numerical Experiments
In this section, we present several simulation studies to corroborate our theoretical findings on the regression, classification, and stability performance of the interpolated-NN algorithm, together with some real data analysis.
4.1 Simulations
We aim to estimate the performance ratio curve by simulation and compare it with the theoretical curve. The second simulation setting of Samworth (2012) is adopted here, i.e., the joint distribution of (X, Y) follows the model specified there. The interpolated-NN regressor and classifier were implemented under different choices of k and γ. For regression, the MSE was estimated over repeated runs; for classification, the Regret was estimated likewise.
The Regret/MSE ratio for different γ is shown in Figure 3, where the Regret ratio and the MSE ratio are the performance ratios defined in Section 3.2. The theoretical and simulated curves are mostly close; the small differences are mainly caused by the smaller-order terms in the asymptotic result and should vanish for larger n. Note that the largest γ shown is outside our theoretical range, but the empirical performance there is still reasonable.
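The ratio estimation can be sketched as follows. Since the exact mixture setting of Samworth (2012) is not reproduced above, the snippet uses a stand-in one-dimensional smooth regression model y = sin(2πx) + noise; the function mse_ratio and this model are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def mse_ratio(n=200, n_test=200, k=10, gamma=1.0, reps=50, seed=0):
    """Estimate MSE(interpolated-NN) / MSE(kNN) on a stand-in smooth
    regression model; a ratio below 1 would indicate a benefit of
    interpolation at this (k, gamma)."""
    rng = np.random.default_rng(seed)

    def predict(X_tr, y_tr, X_te, g):
        out = np.empty(len(X_te))
        for i, x in enumerate(X_te):
            d = np.abs(X_tr - x)
            idx = np.argsort(d)[:k]
            r = np.maximum(d[idx], 1e-12)
            w = r ** (-g)
            out[i] = np.dot(w / w.sum(), y_tr[idx])
        return out

    err_int, err_knn = [], []
    for _ in range(reps):
        X_tr = rng.uniform(0.0, 1.0, n)
        y_tr = np.sin(2 * np.pi * X_tr) + 0.3 * rng.standard_normal(n)
        X_te = rng.uniform(0.0, 1.0, n_test)
        truth = np.sin(2 * np.pi * X_te)      # noiseless target function
        err_int.append(np.mean((predict(X_tr, y_tr, X_te, gamma) - truth) ** 2))
        err_knn.append(np.mean((predict(X_tr, y_tr, X_te, 0.0) - truth) ** 2))
    return float(np.mean(err_int) / np.mean(err_knn))
```

Sweeping gamma over a grid and plotting mse_ratio against it is the empirical analogue of the ratio curves in Figure 3.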
We further estimate CIS by training two classifiers based on two different simulated data sets of 1024 samples. The CIS was estimated by calculating the proportion of testing samples that have different prediction labels, that is
The CIS result is shown in Figure 4. When γ is small, the simulated CIS ratio decreases in a manner similar to the asymptotic value, while the simulated value increases when γ gets larger. This pattern matches the theoretical result predicted in Theorem 3.
An additional experiment, postponed to the appendix, shows how the MSE and the optimal k change with n and γ.
4.2 Real Data Analysis
In the real data experiments, we compare the classification accuracy of interpolated-NN with that of kNN.
Table 1: Average testing error rates of kNN (γ = 0) and interpolated-NN (best γ).

Data     d    Error (γ = 0)   Error (best γ)   best γ
Abalone  7    0.22239         0.22007          0.3
HTRU2    8    0.02315         0.0226           0.2
Credit   23   0.1933          0.19287          0.05
Digit    64   0.01745         0.01543          0.25
MNIST    784  0.04966         0.04656          0.05
Five data sets were considered in this experiment. The data set HTRU2 from Lyon et al. (2016) contains 17,897 samples with 8 continuous attributes, used to classify pulsar candidates. The data set Abalone contains 4,176 samples with 7 attributes; following Wang et al. (2018), we predict whether the number of rings is greater than 10. The data set Credit (Yeh and Lien, 2009) has 30,000 samples with 23 attributes, and the task is to predict whether the payment will default in the next month given the current payment information. The built-in digits data set in sklearn (Pedregosa et al., 2011) contains 1,797 samples of 8x8 images. Since MNIST images are 28x28 (d = 784), we use only part of the data set in our experiment. Both the digits data set and MNIST have ten classes; for binary classification, we group 0 to 4 as the first class and 5 to 9 as the second class.
For each data set, a proportion of the data is used for training and the rest is reserved for testing the trained classifiers. For Abalone, HTRU2, Credit and Digit, we use 25% of the data for training and 75% for testing. For MNIST, we use 2,000 randomly chosen samples for training and 1,000 for testing, which is sufficient for our comparison. The above experiment is repeated 50 times and the average testing error rate is summarized in Table 1. For all data sets, the testing error of interpolated-NN (column "best γ") is always smaller than that of kNN (column "γ = 0"), which verifies that the nearest neighbor algorithm indeed benefits from interpolation.
5 Conclusion
Our work precisely quantifies how data interpolation affects the performance of nearest neighbor algorithms beyond the rate of convergence. We find that for both regression and classification, the asymptotic performance ratios between interpolated-NN and kNN converge to the same value, which depends on γ and d only. More importantly, when the interpolation level is within a reasonable range, interpolated-NN is strictly better than kNN: it has a smaller multiplicative constant in the convergence rate, as well as a more stable prediction performance.
The classical learning framework opposes data interpolation on the belief that overfitting means fitting the random noise rather than the model structure. However, in interpolated-NN, the weight degeneration occurs only on a set of nearly zero measure, so there is only "local overfitting," which does not hurt the overall rate of convergence. Technically, by balancing variance and bias, data interpolation can improve the overall performance, and our work quantifies this bias-variance balance in a precise way. It is of great interest to investigate how our theoretical insights carry over to real deep neural networks, leading to a more complete picture of the double descent phenomenon.
Acknowledgments
Prof. Guang Cheng is a visiting member of Institute for Advanced Study, Princeton (funding provided by Eric and Wendy Schmidt) and visiting Fellow of SAMSI for the Deep Learning Program in the Fall of 2019; he would like to thank both Institutes for their hospitality.
References
Arora, S., Cohen, N., and Hazan, E. (2018). On the optimization of deep networks: implicit acceleration by overparameterization. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80, pp. 244–253.
Arora, S., Du, S. S., Hu, W., Li, Z., and Wang, R. (2019). Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In Proceedings of the 36th International Conference on Machine Learning, PMLR 97, pp. 322–332.
Bartlett, P. L., Long, P. M., Lugosi, G., and Tsigler, A. (2019). Benign overfitting in linear regression. arXiv preprint arXiv:1906.11300.
Belkin, M., Hsu, D., Ma, S., and Mandal, S. (2019a). Reconciling modern machine learning and the bias-variance trade-off. Proceedings of the National Academy of Sciences 116(32), pp. 15849–15854.
Belkin, M., Hsu, D., and Mitra, P. (2018). Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. In Advances in Neural Information Processing Systems 31, pp. 2300–2311.
Belkin, M., Hsu, D., and Xu, J. (2019b). Two models of double descent for weak features. arXiv preprint arXiv:1903.07571.
Belkin, M., Rakhlin, A., and Tsybakov, A. B. (2019). Does data interpolation contradict statistical optimality? In Proceedings of Machine Learning Research, PMLR 89, pp. 1611–1619.
Bousquet, O. and Elisseeff, A. (2002). Stability and generalization. Journal of Machine Learning Research 2, pp. 499–526.
Cannings, T. I., Berrett, T. B., and Samworth, R. J. (2017). Local nearest neighbour classification with applications to semi-supervised learning. arXiv preprint arXiv:1704.00642.
Chaudhuri, K. and Dasgupta, S. (2014). Rates of convergence for nearest neighbor classification. In Advances in Neural Information Processing Systems, pp. 3437–3445.
Du, S. S., Lee, J. D., Li, H., Wang, L., and Zhai, X. (2019). Gradient descent finds global minima of deep neural networks. In Proceedings of the 36th International Conference on Machine Learning, PMLR 97, pp. 1675–1685.
Du, S. S. and Lee, J. D. (2018). On the power of overparametrization in neural networks with quadratic activation. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80, pp. 1329–1338.
Du, S. S., Zhai, X., Poczos, B., and Singh, A. (2018). Gradient descent provably optimizes overparameterized neural networks. arXiv preprint arXiv:1810.02054.
Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. J. (2019). Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560.
Lyon, R. J., Stappers, B. W., Cooper, S., Brooke, J. M., and Knowles, J. D. (2016). Fifty years of pulsar candidate selection: from simple filters to a new principled real-time classification approach. Monthly Notices of the Royal Astronomical Society 459(1), pp. 1104–1123.
Pedregosa, F., et al. (2011). Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830.
Samworth, R. J. (2012). Optimal weighted nearest neighbour classifiers. The Annals of Statistics 40(5), pp. 2733–2763.
Sun, W. W., Qiao, X., and Cheng, G. (2016). Stabilized nearest neighbor classifier and its statistical properties. Journal of the American Statistical Association 111(515), pp. 1254–1265.
Wang, Y., Jha, S., and Chaudhuri, K. (2018). Analyzing the robustness of nearest neighbors to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80, pp. 5133–5142.
Wyner, A. J., Olson, M., Bleich, J., and Mease, D. (2017). Explaining the success of AdaBoost and random forests as interpolating classifiers. Journal of Machine Learning Research 18(1), pp. 1558–1590.
Xie, B., Liang, Y., and Song, L. (2017). Diverse neural network learns true target functions. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54, pp. 1216–1224.
Xing, Y., Song, Q., and Cheng, G. (2018). Statistical optimality of interpolated nearest neighbor algorithms. arXiv preprint arXiv:1810.02814.
Yeh, I.-C. and Lien, C.-H. (2009). The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications 36(2), pp. 2473–2480.
The appendix is organized as follows. In Section A, we present an additional simulation study which empirically shows that interpolated-NN and kNN converge at the same rate, and that interpolated-NN generally requires a larger number of neighbors than kNN. Sections B–E provide the proofs of the main results: Section B collects a preliminary proposition, Section C proves Theorem 1, Section D proves Corollary 2, and Section E proves Theorem 3.
In Section F, we deliver a complementary result on the rate optimality of interpolated-NN in classification. Belkin et al. (2018) obtained the optimal MSE rate for the regression task, but only a suboptimal rate for the classification regret. We adopt some techniques introduced by Samworth (2012) and rigorously show that, under a smoothness condition more general than the one imposed in our main theorems, interpolated-NN achieves the optimal convergence rate for classification as well.
Appendix A Additional Numerical Experiment
In this experiment, instead of fixing the sample size, we vary n to see how the performance ratio and the optimal k change for different γ's. The phenomenon for classification is similar to that for regression, so we only present the regression results. Figure 5 summarizes how the MSE and the optimal choice of k change with respect to different choices of n and γ. The plot for the other setting is quite similar and hence omitted here. The figure shows that as n increases, interpolated-NN converges at the same rate as kNN, and interpolated-NN generally requires a larger k than kNN.
Appendix B Preliminary Proposition
This section provides a useful result for integrating a c.d.f.:
Proposition 5
From Lemma S.1 in Sun et al. (2016), we have, for any distribution function F,
Appendix C Proof of Theorem 1
Define the conditional distributions of X given Y = 0 and Y = 1, with their respective densities, and denote the marginal class probabilities accordingly. The terms in Theorem 1 are defined as
C.1 Regression
Rewrite the interpolated-NN estimate at x, given the neighbor distances and the interpolation level γ, as
where the weighting scheme is defined as
For regression, we decompose the MSE into squared bias and variance, where
in which the squared bias can be rewritten as
and the variance can be approximated as
Following a procedure similar to Step 1 for the classification case, i.e., using a Taylor expansion to approximate the squared bias, we obtain that, for some function, the bias becomes
As a result, the MSE of the interpolated-NN estimate at a given point becomes
Finally, we integrate the MSE over the whole support.
C.2 Classification
The main structure of the proof follows Samworth (2012). As the whole proof is long, we provide a brief summary in Section C.2.1 describing what is done in each step; Section C.2.2 then presents the details.
C.2.1 Brief Summary
Step 1: denote the i.i.d. random variables, where
then the probability of classifying x as 0 becomes
The mean and variance of these variables can be obtained through Taylor expansions of η and of the density function of X:
for some function . The smoothness conditions are assumed in A.4 and A.5.
Note that the denominator involves an expectation. From the later calculation in Corollary 2, the value of this expectation in fact changes little whether or not we condition on R_{k+1}, and it is little affected by γ either.
Step 2: One can rewrite Regret as
From Assumptions A.2 and A.4, the region where a wrong prediction is likely occurs near the decision boundary; thus we use tube theory to transform the integral of the Regret over the d-dimensional space into an integral over a tube around the boundary, i.e.,
This term will be defined in detail in the appendix. Basically, when the relevant quantity is within a suitable range, the integral over the tube does not depend on it asymptotically.
Step 3: given x and R_{k+1}, the nearest neighbors are i.i.d. random variables distributed in the ball; thus we use the non-uniform Berry-Esseen theorem to obtain a Gaussian approximation of the probability of a wrong prediction:
Step 4: take the expectation over the remaining randomness, and integrate the Gaussian probability over the tube to obtain
C.2.2 Details
Denote the Euclidean ball volume parameter as follows:
Define … and …. Denote by … the set on which there exists … such that …; then, for some constant …,
Hence from Claim A.5 in Belkin et al. (2018), there exist and satisfying
Step 1: in this step, we identify the i.i.d. random variables in our problem and calculate their mean and variance given R_{k+1}.
Denote
(2) 
then the dominant part we want to integrate becomes
Therefore, one can adopt the non-uniform Berry-Esseen theorem to approximate the probability using a normal distribution. Unlike Samworth (2012), the i.i.d. terms in the non-uniform Berry-Esseen theorem here are the variables defined in (2), so we now calculate the mean and variance of these terms. Conditionally,
and
Then the mean and variance of can be calculated as