
# Smoothing spline ANOVA for super-large samples: Scalable computation via rounding parameters¹

¹A version of this paper will appear in the upcoming special issue of Statistics and Its Interface on Statistical and Computational Theory and Methodology for Big Data.

Nathaniel E. Helwig (helwig@umn.edu), University of Minnesota; Ping Ma (pingma@uga.edu), Department of Statistics, University of Georgia
July 15, 2015
###### Abstract

In the current era of big data, researchers routinely collect and analyze data of super-large sample sizes. Data-oriented statistical methods have been developed to extract information from super-large data. Smoothing spline ANOVA (SSANOVA) is a promising approach for extracting information from noisy data; however, the heavy computational cost of SSANOVA hinders its wide application. In this paper, we propose a new algorithm for fitting SSANOVA models to super-large sample data. In this algorithm, we introduce rounding parameters to make the computation scalable. To demonstrate the benefits of the rounding parameters, we present a simulation study and a real data example using electroencephalography data. Our results reveal that (using the rounding parameters) a researcher can fit nonparametric regression models to very large samples within a few seconds using a standard laptop or tablet computer.

Keywords: Smoothing spline ANOVA, Rounding parameter, Scalable algorithm

## 1 Introduction

In the current era of big data, it is common for researchers to collect super-large sample data ranging from hundreds of thousands to hundreds of millions of observations. The ambitious BRAIN Initiative of NIH is expected to bring a torrent of data, e.g., 100 terabytes of data per day from a single brain lab. These super-large datasets provide a wealth of information. To effectively extract the information, numerous data-oriented statistical learning methods have been developed. Among these methods, data-driven nonparametric regression models (see Ruppert et al., 2003; Silverman, 1985) have achieved remarkable success in identifying subtle patterns and discovering functional relationships in large noisy data; such models require few assumptions about the observed data, but produce powerful predictions.

For example, smoothing splines (see Silverman, 1985; Wahba, 1990) offer a powerful and flexible framework for nonparametric modeling. Smoothing spline analysis of variance (SSANOVA) models (Gu, 2013) further expand the research horizon of the smoothing spline; SSANOVAs can model multivariate data and provide nice interpretability of the modeling and prediction outcome. Furthermore, assuming that the smoothing parameters are selected via cross-validation, SSANOVA models have been shown to have desirable asymptotic properties (see Gu, 2013; Li, 1987; Wahba, 1990). The main drawback of the SSANOVA approach is its computational expense: the computational complexity of SSANOVA is on the order of $O(n^3)$, where $n$ is the sample size.

Over the years, many efforts have been made to design scalable algorithms for SSANOVA. Generalized additive models (GAMs; Hastie & Tibshirani, 1990; Wood, 2006) provide scalable computation at the price of eliminating or reparameterizing all interaction terms of an SSANOVA model. By collapsing similar subspaces, Helwig and Ma (2015) provide an algorithm for modeling all interactions with affordable computational complexity. However, even using the most efficient SSANOVA approximation (Kim & Gu, 2004; Ma et al., 2015) and algorithm (Helwig & Ma, 2015), the computational burden grows linearly with the sample size, which makes the approach impractical for analyzing super-large datasets.

One possibility is to fit the model to a subset of the observed data. For example, when analyzing ultra large datasets, Ma et al. (2014) suggest fitting regression models to a randomly selected influential sample of the full dataset. This sort of smart-sampling approach works well, as long as a representative sample of observations is selected for analysis; however, the fitted model varies across the randomly selected subsamples. Furthermore, determining the appropriate size of the subsample could be difficult in some situations.

In this paper, we propose a new approach for fitting SSANOVA models to super-large samples. Specifically, we introduce user-tunable rounding parameters in the SSANOVA model, which makes it possible to control the precision of each predictor. As we demonstrate, fitting a nonparametric regression model to the rounded data can result in substantial computational savings without introducing much bias to the resulting estimate. In the following sections, we provide a brief introduction to SSANOVA (Section 2), develop the concept of rounding parameters for nonparametric regression (Section 3), present finite-sample and asymptotic results concerning the quality of the rounded SSANOVA estimator (Section 4), demonstrate the benefits of the rounding parameters with a simulation study (Section 5), and provide an example with real data to reveal the practical potential of the rounding parameters (Section 6).

## 2 Smoothing Splines

### 2.1 Overview

A typical (Gaussian) nonparametric regression model has the form

$$y_i = \eta(\mathbf{x}_i) + e_i \qquad (1)$$

where $y_i$ is the response variable, $\mathbf{x}_i = (x_{i1}, \ldots, x_{ip})'$ is the predictor vector, $\eta$ is the unknown smooth function relating the response and predictors, and $e_i \overset{\mathrm{iid}}{\sim} \mathrm{N}(0, \sigma^2)$ is unknown, normally-distributed measurement error (see Gu, 2013; Ruppert et al., 2003; Wahba, 1990). Typically, $\eta$ is estimated by minimizing the penalized least-squares functional

$$\frac{1}{n}\sum_{i=1}^{n}\big(y_i - \eta(\mathbf{x}_i)\big)^2 + \lambda J(\eta) \qquad (2)$$

where the nonnegative penalty functional $J(\eta)$ quantifies the roughness of $\eta$, and the smoothing parameter $\lambda \in (0, \infty)$ balances the trade-off between fitting the data and smoothing $\eta$.

Given fixed smoothing parameters and a set of selected knots $\{\breve{\mathbf{x}}_h\}_{h=1}^{q}$, the $\eta$ minimizing Equation (2) can be approximated using

$$\eta_\lambda(\mathbf{x}) = \sum_{v=1}^{m} d_v \phi_v(\mathbf{x}) + \sum_{h=1}^{q} c_h \rho_c(\mathbf{x}, \breve{\mathbf{x}}_h) \qquad (3)$$

where $\{\phi_v\}_{v=1}^{m}$ are functions spanning the null space (i.e., $J(\phi_v) = 0$), $\rho_c$ is the reproducing kernel (RK) of the contrast space (i.e., $J(\rho_c) > 0$), and $\{d_v\}_{v=1}^{m}$ and $\{c_h\}_{h=1}^{q}$ are the unknown function coefficients (see Helwig & Ma, 2015; Kim & Gu, 2004; Gu & Wahba, 1991). Note that $\rho_c = \sum_{k=1}^{s}\theta_k \rho_k^{*}$, where $\rho_k^{*}$ denotes the RK of the $k$-th orthogonal contrast space, and $\theta_k$ are additional smoothing parameters with $\theta_k \in (0, \infty)$.

### 2.2 Estimation

Inserting the optimal representation in Equation (3) into the penalized least-squares functional in Equation (2) produces

$$\frac{1}{n}\big\|\mathbf{y} - \mathbf{K}\mathbf{d} - \mathbf{J}_{\theta}\mathbf{c}\big\|^2 + \lambda\,\mathbf{c}'\mathbf{Q}_{\theta}\mathbf{c} \qquad (4)$$

where $\|\cdot\|^2$ denotes the squared Frobenius norm, $\mathbf{y} = (y_1, \ldots, y_n)'$, $\mathbf{K} = \{\phi_v(\mathbf{x}_i)\}_{n \times m}$ for $i \in \{1,\ldots,n\}$ and $v \in \{1,\ldots,m\}$, $\mathbf{J}_{\theta} = \{\rho_c(\mathbf{x}_i, \breve{\mathbf{x}}_h)\}_{n \times q}$ for $i \in \{1,\ldots,n\}$ and $h \in \{1,\ldots,q\}$, and $\mathbf{Q}_{\theta} = \{\rho_c(\breve{\mathbf{x}}_g, \breve{\mathbf{x}}_h)\}_{q \times q}$ for $g, h \in \{1,\ldots,q\}$. Given a choice of $\lambda$ and $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_s)'$, the optimal function coefficients are given by

$$\begin{pmatrix}\hat{\mathbf{d}}\\ \hat{\mathbf{c}}\end{pmatrix} = \begin{pmatrix}\mathbf{K}'\mathbf{K} & \mathbf{K}'\mathbf{J}_{\theta} \\ \mathbf{J}_{\theta}'\mathbf{K} & \mathbf{J}_{\theta}'\mathbf{J}_{\theta} + \lambda n \mathbf{Q}_{\theta}\end{pmatrix}^{\dagger}\begin{pmatrix}\mathbf{K}'\\ \mathbf{J}_{\theta}'\end{pmatrix}\mathbf{y} \qquad (5)$$

where $(\cdot)^{\dagger}$ denotes the Moore–Penrose pseudoinverse.

The fitted values are given by $\hat{\mathbf{y}} = \mathbf{S}_{\lambda}\mathbf{y}$, where

$$\mathbf{S}_{\lambda} = \begin{pmatrix}\mathbf{K} & \mathbf{J}_{\theta}\end{pmatrix}\begin{pmatrix}\mathbf{K}'\mathbf{K} & \mathbf{K}'\mathbf{J}_{\theta} \\ \mathbf{J}_{\theta}'\mathbf{K} & \mathbf{J}_{\theta}'\mathbf{J}_{\theta} + \lambda n \mathbf{Q}_{\theta}\end{pmatrix}^{\dagger}\begin{pmatrix}\mathbf{K}'\\ \mathbf{J}_{\theta}'\end{pmatrix} \qquad (6)$$

is the smoothing matrix, which depends on $\lambda$ and $\boldsymbol{\theta}$. The smoothing parameters are typically selected by minimizing Craven and Wahba's (1979) generalized cross-validation (GCV) score:

$$\mathrm{GCV}(\lambda) = \frac{n\big\|(\mathbf{I}_n - \mathbf{S}_{\lambda})\mathbf{y}\big\|^2}{\big[n - \mathrm{tr}(\mathbf{S}_{\lambda})\big]^2}. \qquad (7)$$

The estimates $\hat{\lambda}$ and $\hat{\boldsymbol{\theta}}$ that minimize the GCV score have desirable properties (see Craven & Wahba, 1979; Gu, 2013; Gu & Wahba, 1991; Li, 1987).
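To make the estimation recipe concrete, the following sketch evaluates Equations (5)–(7) in R for fixed smoothing parameters. It is a minimal illustration with generic basis matrices `K`, `J`, and penalty `Q` (our own variable names, not those of any particular package), and it forms the full $n \times n$ smoothing matrix, so it is only suitable for small $n$:

```r
# Minimal sketch of Equations (5)-(7) for fixed smoothing parameters, using
# generic basis matrices K (n x m), J (n x q), and penalty Q (q x q).
fit_ss <- function(K, J, Q, y, lambda) {
  n  <- length(y)
  X  <- cbind(K, J)
  M  <- rbind(cbind(crossprod(K),    crossprod(K, J)),
              cbind(crossprod(J, K), crossprod(J) + n * lambda * Q))
  Mi <- MASS::ginv(M)                    # Moore-Penrose pseudoinverse
  dc <- Mi %*% crossprod(X, y)           # coefficients, Equation (5)
  S  <- X %*% Mi %*% t(X)                # smoothing matrix, Equation (6)
  gcv <- n * sum((y - S %*% y)^2) / (n - sum(diag(S)))^2   # Equation (7)
  list(coef = dc, fitted = drop(X %*% dc), gcv = gcv)
}
```

In practice one would minimize the returned GCV score over $\lambda$ (e.g., with `optimize`), which is exactly the selection rule described above.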

## 3 Rounding Parameters

### 3.1 Overview

When fitting a nonparametric regression model to ultra large samples, we propose including user-tunable rounding parameters in the model (see Helwig, 2013, for preliminary work). Assuming that all (continuous) predictors have been transformed to the interval [0,1], the rounding parameters $r_j \in (0, 1]$ are used to create locally-smoothed versions of the (continuous) predictor variables, such as

$$z_{ij} = \mathrm{rd}(x_{ij}/r_j)\, r_j \qquad (8)$$

for $i \in \{1,\ldots,n\}$ and $j \in \{1,\ldots,p\}$, where the rounding function $\mathrm{rd}(\cdot)$ rounds the input value to the nearest integer. Note that the $z_{ij}$ scores are formed simply by rounding the original $x_{ij}$ scores to the precision defined by the rounding parameter $r_j$ for the $j$-th predictor variable; e.g., if $r_j = .02$, then each $x_{ij}$ value is rounded to the nearest .02 to form $z_{ij}$.
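As a minimal illustration of Equation (8) (our own helper, not the authors' software), the following R snippet rounds each predictor, assumed to be scaled to [0,1], and counts the unique rounded rows:

```r
# Round each predictor to the precision r[j], per Equation (8).
round_predictors <- function(X, r) {
  for (j in seq_len(ncol(X))) X[, j] <- round(X[, j] / r[j]) * r[j]
  X
}

set.seed(1)
X <- matrix(runif(2e6), ncol = 2)           # n = 10^6, p = 2 predictors on [0,1]
Z <- round_predictors(X, r = c(0.01, 0.02))
nrow(unique(Z))                             # far fewer unique rows than 10^6
```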

Let $\mathbf{z}_i = (z_{i1}, \ldots, z_{ip})'$ with $z_{ij}$ defined according to Equation (8), and let $\{\breve{\mathbf{z}}_h\}_{h=1}^{q}$ denote the rounded knots; then, the penalized least-squares functional in Equation (4) can be approximated as $\frac{1}{n}\|\mathbf{y} - \mathbf{K}_{\star}\mathbf{d} - \mathbf{J}_{\theta\star}\mathbf{c}\|^2 + \lambda\,\mathbf{c}'\mathbf{Q}_{\theta\star}\mathbf{c}$, where $\mathbf{K}_{\star}$, $\mathbf{J}_{\theta\star}$, and $\mathbf{Q}_{\theta\star}$ are defined according to Equation (4) with $\mathbf{z}_i$ replacing $\mathbf{x}_i$ (and $\breve{\mathbf{z}}_h$ replacing $\breve{\mathbf{x}}_h$). Similarly, the optimal basis function coefficients corresponding to the rounded data (i.e., $\hat{\mathbf{d}}_{\star}$ and $\hat{\mathbf{c}}_{\star}$) can be defined according to Equation (5) with the rounded matrices replacing the unrounded ones. Finally, the smoothing matrix corresponding to these coefficients (denoted by $\mathbf{S}_{\lambda,r}$) can be defined according to Equation (6) with the rounded matrices replacing the unrounded ones.

One could calculate the fitted values using $\hat{\mathbf{y}} = \mathbf{S}_{\lambda,r}\mathbf{y}$ (and this is what we recommend for the smoothing parameter estimation); however, this could introduce a small bias to each predicted score. So, when interpreting specific scores, we recommend using the $\hat{\mathbf{d}}_{\star}$ and $\hat{\mathbf{c}}_{\star}$ coefficients with the basis function matrices evaluated at the unrounded predictor variable scores

$$\hat{\mathbf{y}}_{\star} = \mathbf{K}\hat{\mathbf{d}}_{\star} + \mathbf{J}_{\theta}\hat{\mathbf{c}}_{\star} \qquad (9)$$

where $\mathbf{K}$ and $\mathbf{J}_{\theta}$ are defined according to Equation (4).

### 3.2 Computational Benefits

Let $\mathcal{Z} = \{\mathbf{z}_t\}_{t=1}^{u}$ denote the set of unique observed $\mathbf{z}_i$ vectors with $u \leq n$, and note that $u$ has an upper-bound that is determined by the rounding parameters and the predictor variables. For example, suppose that $p = 2$ with $x_{i1} \in [0,1]$ continuous and $x_{i2}$ nominal with $g$ levels; then, defining $z_{i1}$ with $r_1 = .01$, it is evident that $u \leq 101g$, given that $z_{i1}$ can have a maximum of 101 unique values for the first predictor, and $x_{i2}$ a maximum of $g$ unique values for the second predictor. As a second example, suppose that $p = 2$ with $x_{i1}, x_{i2} \in [0,1]$ continuous; then, defining $z_{i1}$ and $z_{i2}$ with $r_1 = r_2 = .01$, it is evident that $u \leq 101^2 = 10{,}201$, given that $z_{ij}$ can have a maximum of 101 unique values for each predictor. Similar reasoning can be used to place an upper bound on $u$ for different combinations of rounding parameters and predictor variable types.
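The counting behind the second example can be checked in one line of R:

```r
# With r = .01 each rounded predictor takes at most 1/.01 + 1 = 101 values,
# so two such predictors give u <= 101^2 = 10201.
r <- c(0.01, 0.01)
prod(round(1 / r) + 1)   # upper bound on u
```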

Note that the inner-portion of $\mathbf{S}_{\lambda,r}$ can be written as

$$\begin{pmatrix}\mathbf{K}_{\star}'\mathbf{K}_{\star} & \mathbf{K}_{\star}'\mathbf{J}_{\theta\star} \\ \mathbf{J}_{\theta\star}'\mathbf{K}_{\star} & \mathbf{J}_{\theta\star}'\mathbf{J}_{\theta\star} + \lambda n \mathbf{Q}_{\theta\star}\end{pmatrix} = \begin{pmatrix}\tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star} & \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{J}}_{\theta\star} \\ (\tilde{\mathbf{J}}_{\theta\star})'\mathbf{W}\tilde{\mathbf{K}}_{\star} & (\tilde{\mathbf{J}}_{\theta\star})'\mathbf{W}\tilde{\mathbf{J}}_{\theta\star} + \lambda n \mathbf{Q}_{\theta\star}\end{pmatrix} \qquad (10)$$

where $\tilde{\mathbf{K}}_{\star} = \{\phi_v(\mathbf{z}_t)\}_{u \times m}$ for $t \in \{1,\ldots,u\}$ and $v \in \{1,\ldots,m\}$, where $\tilde{\mathbf{J}}_{\theta\star} = \{\rho_c(\mathbf{z}_t, \breve{\mathbf{z}}_h)\}_{u \times q}$ for $t \in \{1,\ldots,u\}$ and $h \in \{1,\ldots,q\}$, and $\mathbf{W} = \mathrm{diag}(w_1, \ldots, w_u)$ with $w_t$ denoting the number of $\mathbf{z}_i$ that are equal to $\mathbf{z}_t$ (for $t \in \{1,\ldots,u\}$). Next, define $\tilde{\mathbf{X}} = (\tilde{\mathbf{K}}_{\star},\, \tilde{\mathbf{J}}_{\theta\star})$ and define the reduced smoothing matrix $\tilde{\mathbf{S}}_{\lambda}^{\star}$, such as

$$\tilde{\mathbf{S}}_{\lambda}^{\star} = \tilde{\mathbf{X}}\begin{pmatrix}\tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star} & \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{J}}_{\theta\star} \\ (\tilde{\mathbf{J}}_{\theta\star})'\mathbf{W}\tilde{\mathbf{K}}_{\star} & (\tilde{\mathbf{J}}_{\theta\star})'\mathbf{W}\tilde{\mathbf{J}}_{\theta\star} + \lambda n \mathbf{Q}_{\theta\star}\end{pmatrix}^{\dagger}\tilde{\mathbf{X}}'. \qquad (11)$$

Note that $\tilde{\mathbf{S}}_{\lambda}^{\star}$ is a $u \times u$ matrix, and note that $u < n$ if there are replicate predictor vectors after the rounding (which is guaranteed if $n$ is larger than $u$'s upper bound).

Next, suppose that the $\mathbf{z}_i$ scores are ordered such that the first $w_1$ observations have predictor scores $\mathbf{z}_1$, the next $w_2$ observations have predictor scores $\mathbf{z}_2$, and so on. Then $\mathbf{S}_{\lambda,r}$ can be written in terms of $\tilde{\mathbf{S}}_{\lambda}^{\star}$, such as

$$\mathbf{S}_{\lambda,r} = \begin{pmatrix}(\mathbf{e}_1'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_1)\mathbf{1}_{w_1}\mathbf{1}_{w_1}' & \cdots & (\mathbf{e}_1'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_u)\mathbf{1}_{w_1}\mathbf{1}_{w_u}' \\ \vdots & \ddots & \vdots \\ (\mathbf{e}_u'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_1)\mathbf{1}_{w_u}\mathbf{1}_{w_1}' & \cdots & (\mathbf{e}_u'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_u)\mathbf{1}_{w_u}\mathbf{1}_{w_u}'\end{pmatrix} \qquad (12)$$

where $\mathbf{e}_t$ denotes a $u \times 1$ vector with a one in the $t$-th position and zeros elsewhere, and $\mathbf{1}_{w_t}$ denotes a $w_t \times 1$ vector of ones (for $t \in \{1,\ldots,u\}$). Furthermore, note that the fitted values corresponding to $\mathbf{S}_{\lambda,r}$ can be written as

$$\mathbf{S}_{\lambda,r}\mathbf{y} = \begin{pmatrix}(\mathbf{e}_1'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_1)\mathbf{1}_{w_1} & \cdots & (\mathbf{e}_1'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_u)\mathbf{1}_{w_1} \\ (\mathbf{e}_2'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_1)\mathbf{1}_{w_2} & \cdots & (\mathbf{e}_2'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_u)\mathbf{1}_{w_2} \\ \vdots & \ddots & \vdots \\ (\mathbf{e}_u'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_1)\mathbf{1}_{w_u} & \cdots & (\mathbf{e}_u'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{e}_u)\mathbf{1}_{w_u}\end{pmatrix}\tilde{\mathbf{y}} \qquad (13)$$

where $\tilde{\mathbf{y}} = (\tilde{y}_1, \ldots, \tilde{y}_u)'$ with $\tilde{y}_t = \sum_{i \in I_t} y_i$ and $I_t$ denoting the set of indices such that $\mathbf{z}_i$ is equal to $\mathbf{z}_t$.

Now, let $\hat{y}_t^{\star}$ denote the fitted value corresponding to $\mathbf{z}_t$ (for $t \in \{1,\ldots,u\}$), and note that the numerator of the GCV score in Equation (7) can be written as

$$n\sum_{t=1}^{u}\sum_{i \in I_t}(y_i - \hat{y}_t^{\star})^2 = n\sum_{i=1}^{n}y_i^2 - 2n\sum_{t=1}^{u}\tilde{y}_t\hat{y}_t^{\star} + n\sum_{t=1}^{u}w_t(\hat{y}_t^{\star})^2 = n\Big[\|\mathbf{y}\|^2 - 2\tilde{\mathbf{y}}'\tilde{\mathbf{S}}_{\lambda}^{\star}\tilde{\mathbf{y}} + \tilde{\mathbf{y}}'\tilde{\mathbf{S}}_{\lambda}^{\star}\mathbf{W}\tilde{\mathbf{S}}_{\lambda}^{\star}\tilde{\mathbf{y}}\Big]. \qquad (14)$$

In addition, note that the denominator of the GCV score can be written as $\big[n - \mathrm{tr}(\mathbf{W}\tilde{\mathbf{S}}_{\lambda}^{\star})\big]^2$ using the relation in Equation (12).

The above formulas imply that, after initializing $\|\mathbf{y}\|^2$, $\tilde{\mathbf{y}}$, and $\mathbf{W}$, it is only necessary to calculate the reduced smoothing matrix $\tilde{\mathbf{S}}_{\lambda}^{\star}$ to evaluate the GCV score. Furthermore, note that the optimal function coefficients can be estimated from the reduced quantities using

$$\begin{pmatrix}\hat{\mathbf{d}}_{\star}\\ \hat{\mathbf{c}}_{\star}\end{pmatrix} = \begin{pmatrix}\tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star} & \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{J}}_{\theta\star} \\ (\tilde{\mathbf{J}}_{\theta\star})'\mathbf{W}\tilde{\mathbf{K}}_{\star} & (\tilde{\mathbf{J}}_{\theta\star})'\mathbf{W}\tilde{\mathbf{J}}_{\theta\star} + \lambda n \mathbf{Q}_{\theta\star}\end{pmatrix}^{\dagger}\begin{pmatrix}\tilde{\mathbf{K}}_{\star}'\\ (\tilde{\mathbf{J}}_{\theta\star})'\end{pmatrix}\tilde{\mathbf{y}} \qquad (15)$$

which implies that it is never necessary to construct the full $n \times n$ smoothing matrix to estimate $\eta$ when using the rounding parameters.
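The following R sketch (hypothetical helper, not the bigsplines API) shows how the GCV score can be evaluated from the reduced statistics alone, using Equations (14)–(15); `Xt` denotes the $u \times (m+q)$ design matrix at the unique rounded points:

```r
# Evaluate GCV from reduced statistics: Qt is the (m+q) x (m+q) penalty
# (zero block for the null-space columns), w the replicate counts, ytil the
# summed responses, yss = sum(y^2), and n the full sample size.
gcv_rounded <- function(lambda, Xt, Qt, w, ytil, yss, n) {
  XtW  <- Xt * w                                        # row-scale: W %*% Xt
  cpi  <- MASS::ginv(crossprod(XtW, Xt) + n * lambda * Qt)
  yhat <- drop(Xt %*% (cpi %*% crossprod(Xt, ytil)))    # fits via Equation (15)
  num  <- n * (yss - 2 * sum(ytil * yhat) + sum(w * yhat^2))   # Equation (14)
  den  <- (n - sum(w * rowSums((Xt %*% cpi) * Xt)))^2   # [n - tr(W %*% S)]^2
  num / den
}
```

Every object here has $u$ rows (or is a scalar), so the cost of one GCV evaluation no longer depends on $n$ beyond the one-time initialization pass.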

### 3.3 Choosing Rounding Parameters

In many situations, a rounding parameter can be determined by the measurement precision of the predictor variable. For example, suppose we have one predictor recorded with the precision of two decimals on the interval [0,1], i.e., $x_i \in \{0, .01, .02, \ldots, 1\}$ for $i \in \{1,\ldots,n\}$. In this case, setting $r = .01$ will produce the exact same solution as using the unrounded predictors (i.e., $\mathbf{S}_{\lambda,r} = \mathbf{S}_{\lambda}$) and can immensely reduce the computational burden. Note that even if $n$ is very large, $u \leq 101$, and it is only necessary to evaluate the functions $\phi_v$ and $\rho_c$ at the $u$ unique predictor scores to estimate $\eta$.

Now, for large $n$, note that a cubic smoothing spline is approximately a weighted moving average smoother (see Silverman, 1985, Section 3). In particular, let $s_{ii'}$ denote the entry in the $i$-th row and $i'$-th column of $\mathbf{S}_{\lambda}$, and note that $s_{ii'}$ asymptotically depends on a kernel function whose influence decreases exponentially as $|x_i - x_{i'}|$ increases (see Silverman, 1985, equations 3.1–3.4). Also, note that the rounding parameter proposed in this paper widens the peak of the kernel (see Figure 1).

For relatively smooth functions, the shape of the asymptotic kernel function is stable across small choices of the rounding parameter; however, for more jagged functions, the rounding parameter will need to be set smaller for the rounded kernel function to resemble the true asymptotic kernel (see Figure 1).

## 4 Quality of Rounded Solution

### 4.1 A Taylor Heuristic

Note that the rounded predictor $z_{ij}$ can be written as

$$z_{ij} = x_{ij} + r_j v_{ij} \qquad (16)$$

where $v_{ij} = (z_{ij} - x_{ij})/r_j$ by definition and $|v_{ij}| \leq 1/2$, so that $|z_{ij} - x_{ij}| \leq r_j/2$. This implies $\mathbf{z}_i = \mathbf{x}_i + \mathbf{R}\mathbf{v}_i$, where $\mathbf{R} = \mathrm{diag}(r_1, \ldots, r_p)$ and $\mathbf{v}_i = (v_{i1}, \ldots, v_{ip})'$. Consider the linear approximation of $\eta$ at the point $\mathbf{x}_i$

$$\eta(\mathbf{z}_i) = \eta(\mathbf{x}_i) + [\nabla\eta(\mathbf{x}_i)]'\mathbf{R}\mathbf{v}_i + o(\|\mathbf{R}\mathbf{v}_i\|)$$

where $\nabla\eta$ denotes the gradient of $\eta$. If the gradient of $\eta$ were known, we could approximate the rounding error using

$$n^{-1}\sum_{i=1}^{n}\big[\eta(\mathbf{x}_i) - \eta(\mathbf{z}_i)\big]^2 \approx n^{-1}\sum_{i=1}^{n}\big\{[\nabla\eta(\mathbf{x}_i)]'\mathbf{R}\mathbf{v}_i\big\}^2 \leq n^{-1}\sum_{i=1}^{n}\|\nabla\eta(\mathbf{x}_i)\|^2\|\mathbf{R}\mathbf{v}_i\|^2 \leq (4n)^{-1}\sum_{i=1}^{n}\|\nabla\eta(\mathbf{x}_i)\|^2\|\mathbf{r}\|^2 \qquad (17)$$

where $\mathbf{r} = (r_1, \ldots, r_p)'$; note that the last line is due to the fact that $\|\mathbf{R}\mathbf{v}_i\|^2 \leq \|\mathbf{r}\|^2/4$, given that $|v_{ij}| \leq 1/2$.

For example, using an $m$-th order polynomial smoothing spline with $x \in [0,1]$ (see Craven & Wahba, 1979; Gu, 2013), we have

$$\eta_\lambda(x) = \sum_{v=0}^{m-1}d_v k_v(x) + \sum_{h=1}^{q}c_h\rho_{\breve{x}_h}(x)$$

where $k_v(\cdot)$ are scaled Bernoulli polynomials, $\{\breve{x}_h\}_{h=1}^{q}$ are the selected knots, and

$$\rho_{\breve{x}_h}(x) = k_m(x)k_m(\breve{x}_h) + (-1)^{m-1}k_{2m}(|x - \breve{x}_h|)$$

is the reproducing kernel of the contrast space. Using the properties of Bernoulli polynomials we have

$$\eta_\lambda'(x) = \frac{\partial\eta_\lambda(x)}{\partial x} = \sum_{v=1}^{m-1}d_v k_{v-1}(x) + \sum_{h=1}^{q}c_h\rho_{\breve{x}_h}'(x)$$

where

$$\rho_{\breve{x}_h}'(x) = k_{m-1}(x)k_m(\breve{x}_h) + (-1)^{m-1}s_h k_{2m-1}(|x - \breve{x}_h|)$$

with $s_h = 1$ if $x \geq \breve{x}_h$ and $s_h = -1$ otherwise (see Craven & Wahba, 1979; Gu, 2013).
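For concreteness, in the cubic case ($m = 2$) the relevant scaled Bernoulli polynomials $k_v(x) = B_v(x)/v!$ reduce to (a standard reduction; see Gu, 2013)

$$k_1(x) = x - \tfrac{1}{2}, \qquad k_2(x) = \tfrac{1}{2}\Big(k_1^2(x) - \tfrac{1}{12}\Big), \qquad k_3(x) = \tfrac{1}{6}\Big(k_1^3(x) - \tfrac{1}{4}k_1(x)\Big),$$

so that the kernel derivative above becomes $\rho_{\breve{x}_h}'(x) = k_1(x)k_2(\breve{x}_h) - s_h k_3(|x - \breve{x}_h|)$.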

Consequently, for polynomial splines we can approximate the rounding error using

$$n^{-1}\sum_{i=1}^{n}\big[\eta(x_i) - \eta(z_i)\big]^2 \approx n^{-1}\sum_{i=1}^{n}(r v_i)^2\big[\eta_\lambda'(x_i)\big]^2 \leq r^2(4n)^{-1}\|\mathbf{X}\mathbf{b}\|^2$$

where $\mathbf{X} = (\mathbf{K}_{\partial},\, \mathbf{J}_{\partial})$ with $\mathbf{K}_{\partial} = \{k_{v-1}(x_i)\}_{n \times (m-1)}$ for $v \in \{1,\ldots,m-1\}$ and $\mathbf{J}_{\partial} = \{\rho_{\breve{x}_h}'(x_i)\}_{n \times q}$ for $h \in \{1,\ldots,q\}$, and $\mathbf{b} = (d_1, \ldots, d_{m-1}, c_1, \ldots, c_q)'$. Note that the contrast space reproducing kernel is rather smooth for the classic cubic smoothing spline, and the magnitude of its derivatives is rather small (see Figure 2). This implies that setting $r = .01$ will not introduce much rounding error to the contrast kernel evaluation when using cubic smoothing splines on $x \in [0,1]$.

The rounding error depends on the norm $\|\mathbf{b}\|^2$, so the relative impact of a particular choice of rounding parameters will depend on the (unknown) function coefficients $\mathbf{b}$. For practical use, we can approximate the rounding error relative to the norm of the coefficients, such as

$$\frac{1}{n\|\mathbf{b}\|^2}\sum_{i=1}^{n}\big[\eta(x_i) - \eta(z_i)\big]^2 \approx \frac{1}{n\|\mathbf{b}\|^2}\sum_{i=1}^{n}(r v_i)^2\big[\eta_\lambda'(x_i)\big]^2 \leq r^2(4n)^{-1}\lambda_1^{*}$$

where $\lambda_1^{*}$ is the largest eigenvalue of $\mathbf{X}'\mathbf{X}$; note that we have $\|\mathbf{X}\mathbf{b}\|^2 \leq \lambda_1^{*}\|\mathbf{b}\|^2$ and $v_i^2 \leq 1/4$ by definition. For practical computation, it is possible to estimate $\lambda_1^{*}$ by taking a random sample of observations, and then approximate the relative rounding error as $r^2(4n)^{-1}\hat{\lambda}_1^{*}$. Clearly, this sort of approach can be extended to assess the relative rounding error for tensor product smoothing splines, but the gradient formulas become a bit more complicated.
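As an illustration of this subsample approximation for the cubic case ($m = 2$), the following R sketch (our own helper names) estimates the bound $r^2\lambda_1^{*}/(4n)$ using the kernel-derivative formulas above; note that subsampling cancels the $n$ in the bound, since $\lambda_1^{*}$ scales linearly with the number of rows:

```r
# Scaled Bernoulli polynomials for the cubic spline (m = 2).
k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1/12) / 2
k3 <- function(x) (k1(x)^3 - k1(x)/4) / 6

# Derivative of the cubic contrast kernel rho(x, xh) with respect to x.
drho <- function(x, xh) {
  s <- ifelse(x >= xh, 1, -1)
  k1(x) * k2(xh) - s * k3(abs(x - xh))
}

rel_round_error <- function(x, knots, r, ns = 2000) {
  xs <- sample(x, min(ns, length(x)))        # random subsample
  Xs <- cbind(1, outer(xs, knots, drho))     # derivative design matrix
  lam1 <- max(eigen(crossprod(Xs), symmetric = TRUE,
                    only.values = TRUE)$values)
  r^2 * lam1 / (4 * length(xs))              # plug-in version of the bound
}

set.seed(1)
x <- runif(1e6)
rel_round_error(x, knots = seq(0, 1, length.out = 20), r = 0.01)
```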

### 4.2 Finite Sample Performance

To quantify the finite-sample error introduced by rounding, define the loss function

$$L(\mathbf{r}) = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{\eta}_\lambda(\mathbf{x}_i) - \hat{\eta}_{\lambda,r}(\mathbf{z}_i)\big)^2 = n^{-1}\big\|(\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r})\mathbf{y}\big\|^2 \qquad (18)$$

where $\mathbf{S}_{\lambda}$ and $\mathbf{S}_{\lambda,r}$ are the smoothing matrices corresponding to the unrounded and rounded predictors (i.e., $\mathbf{x}_i$ and $\mathbf{z}_i$, respectively). Denote the risk function as

$$R(\mathbf{r}) = E[L(\mathbf{r})] = n^{-1}\big\|(\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r})\boldsymbol{\eta}\big\|^2 + n^{-1}\sigma^2\,\mathrm{tr}\big\{(\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r})^2\big\} \qquad (19)$$

where $\boldsymbol{\eta} = (\eta(\mathbf{x}_1), \ldots, \eta(\mathbf{x}_n))'$ contains the realizations of the (unknown) true function $\eta$. Note that the first term of $R(\mathbf{r})$ corresponds to the (squared) bias difference between $\hat{\eta}_\lambda$ and $\hat{\eta}_{\lambda,r}$, and the second term is related to (but not equal to) the variance difference. Also, note that we can write

$$R(\mathbf{r}) \leq n^{-1}\big\|\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r}\big\|^2\|\boldsymbol{\eta}\|^2 + n^{-1}\sigma^2\sum_{i=1}^{n}\lambda_{i,r} \leq \lambda_{1,r}\big(n^{-1}\|\boldsymbol{\eta}\|^2 + \sigma^2\big) \qquad (20)$$

where $\lambda_{1,r} \geq \lambda_{2,r} \geq \cdots \geq \lambda_{n,r}$ are the eigenvalues of $(\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r})^2$, and the matrix norm in the first bound is the spectral norm, so that $\|\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r}\|^2 = \lambda_{1,r}$.

The risk depends on the squared norm of the unknown function realizations $\|\boldsymbol{\eta}\|^2$, so the practical relevance of a particular value of $R(\mathbf{r})$ differs depending on the situation, i.e., on the unknown true function. To overcome this practical issue, we can examine the risk relative to the squared norm of the unknown function, such as

$$U(\mathbf{r}) = R(\mathbf{r})\|\boldsymbol{\eta}\|^{-2} \leq n^{-1}\lambda_{1,r}\big(1 + n\sigma^2\|\boldsymbol{\eta}\|^{-2}\big) \qquad (21)$$

where $n\sigma^2\|\boldsymbol{\eta}\|^{-2}$ relates to the noise-to-signal ratio, i.e., the inverse of the signal-to-noise ratio (SNR). Furthermore, for a fixed SNR and a large enough $n$, the second term in the upper-bound of the relative risk is negligible, and we have that $U(\mathbf{r}) \lessapprox n^{-1}\lambda_{1,r}$. Consequently, it is only necessary to know the largest eigenvalue of $(\mathbf{S}_{\lambda} - \mathbf{S}_{\lambda,r})^2$ to understand the expected performance of a given set of rounding parameters for a large sample size $n$.

In practice, calculating $\mathbf{S}_{\lambda}$ and $\mathbf{S}_{\lambda,r}$ for various values of $\mathbf{r}$ is a computational challenge for large $n$. For practical computation, we recommend examining $R(\mathbf{r})$ and/or $U(\mathbf{r})$ using a random sample of $n^{*} \ll n$ observations. Using this approach, the unknown parameters (i.e., $\boldsymbol{\eta}$ and $\sigma^2$) can be estimated using the results of the unrounded solution. For example, the SNR can be estimated as $\|\hat{\boldsymbol{\eta}}\|^2/(n^{*}\hat{\sigma}^2)$, where $\hat{\boldsymbol{\eta}}$ and $\hat{\sigma}^2$ are the estimated function and error variance using the $n^{*}$ observations with unrounded predictors. Or, if the approximate SNR is known, Equation (21) can be used to place an upper-bound on the relative risk $U(\mathbf{r})$.
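The following R sketch illustrates the subsample strategy; `make_S` is a placeholder for any routine that returns the (symmetric) smoothing matrix for a given set of predictor scores — it is not a function from an existing package:

```r
# Estimate the relative-risk bound of Equation (21) from a random subsample;
# snr is the assumed signal-to-noise ratio.
risk_bound <- function(x, r, lambda, make_S, snr, ns = 1000) {
  xs <- sort(sample(x, ns))              # random subsample of predictor scores
  zs <- round(xs / r) * r                # rounded subsample, Equation (8)
  D  <- make_S(xs, lambda) - make_S(zs, lambda)
  lam1 <- max(eigen(crossprod(D), symmetric = TRUE,
                    only.values = TRUE)$values)   # largest eigenvalue of (S - S_r)^2
  (lam1 / ns) * (1 + 1 / snr)            # upper bound on U(r), Equation (21)
}
```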

We demonstrate this approach in Figures 3–4, which plot functions with various degrees of smoothness (Figure 3) and the median estimated rounding risk across five samples of observations (Figure 4). Note that Figure 4 illustrates that the expected difference between the unrounded and rounded solutions increases as the error variance increases. Furthermore, note that Figure 4 affirms that setting $r = .01$ can be expected to introduce minimal rounding error for a variety of functions and SNRs. Finally, Figure 4 reveals that setting $r = .05$ will not introduce much rounding error whenever the underlying function is relatively smooth. For example, for the smoother functions in Figure 3, we should expect a negligible difference between the unrounded and rounded solutions using $r = .05$ for a variety of different SNRs.

### 4.3 Asymptotic Bias and Variance

To establish the asymptotic properties of the proposed estimate, we employ the equivalent kernel approach developed in Nychka (1995). The key idea is that a smoothing spline estimate can be written as a kernel estimate

$$\hat{\eta}_\lambda(x) = \frac{1}{n}\sum_{i=1}^{n}w(x_i, x)\,y_i \qquad (22)$$

where the kernel function $w(\cdot, \cdot)$ can be well approximated by a Green's function. Then the asymptotic properties of $\hat{\eta}_\lambda$ can be established via the analytical properties of the Green's function.

Following Nychka (1995), we establish the asymptotic properties of our rounding estimate for the one-dimensional case. In addition, we assume that we use a full basis where all distinct rounded data are used as knots, i.e., $q = u$. Then our estimate $\hat{\eta}_{\lambda,r}$ is the minimizer of

$$\frac{1}{n}\sum_{i=1}^{n}\big(y_i - \eta(z_i)\big)^2 + \lambda\int_0^1\big(\eta^{(m)}\big)^2\,dx. \qquad (23)$$

Let $F_{n,r}$ denote the empirical distribution function of the rounded predictor scores $z_i$, $i = 1, \ldots, n$, let $F$ be the limiting distribution of the original predictor with a continuous and strictly positive density function $f$ on $[0,1]$, and let

$$D_{n,r} = \sup_{x \in [0,1]}\big|F_{n,r}(x) - F(x)\big|,$$

and $\rho = \lambda^{1/4}$. Then we have the following theorem.

###### Theorem 4.1

Assume that $\hat{\eta}_{\lambda,r}$ is a smoothing spline estimate of (23) with $m = 2$, and that the rounded scores $z_i$ are not equally spaced. Suppose that $\eta \in C^2[0,1]$ and $\eta^{(2)}$ satisfies the Hölder condition $|\eta^{(2)}(s) - \eta^{(2)}(t)| \leq M|s - t|^{\nu}$ for some $M > 0$ and some $\nu \in (0,1]$. Assume that $F$ has a uniformly continuous derivative $f$ and $D_{n,r} \to 0$ as $n \to \infty$. Choose $\lambda \to 0$ and let $D_{n,r}/\rho \to 0$ as $n \to \infty$. Then

$$E[\hat{\eta}_{\lambda,r}(x)] - \eta(x) = -\frac{\lambda}{f(x)}\,\eta^{(2)}(x) + o(\lambda) + O(D_{n,r}/\rho),$$

$$\mathrm{Var}[\hat{\eta}_{\lambda,r}(x)] = \frac{\sigma^2}{8\,n f(x)}\left(\frac{f(x)}{\lambda}\right)^{1/2} + \sigma^2\, O(D_{n,r}/\rho),$$

uniformly for $x$ in any closed subinterval of $(0,1)$ as $n \to \infty$.

The theorem is a direct result of Theorem 2.2 of Nychka (1995). For $m > 2$, a slightly more complicated version of our theorem can be shown using Theorem 2 of Wang et al. (2013).

The theorem states that both the bias and variance of our estimate depend on $D_{n,r}$, which is required to be sufficiently small relative to $\rho$ as $n \to \infty$. Consequently, the theorem reveals that the rounding parameter will have to be set smaller when

• the true function is rougher

• the spline order $m$ is larger

• the predictor distribution is rougher

• the sample size is larger.

These conclusions derive directly from the requirement that $D_{n,r}$ be sufficiently small relative to $\rho$ as $n \to \infty$; an empirical illustration of $D_{n,r}$ follows.
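As a quick numerical check (assuming a uniform limiting distribution $F$, which is our own choice for illustration), the following R snippet computes the Kolmogorov–Smirnov-type distance between the rounded empirical distribution and $F$ for several rounding parameters; the distance shrinks as $r$ decreases:

```r
# D_{n,r} = sup |F_{n,r} - F| for F = uniform on [0,1].
Dnr <- function(x, r) {
  z  <- sort(round(x / r) * r)
  n  <- length(z)
  Fz <- punif(z)
  max(Fz - (seq_len(n) - 1) / n, seq_len(n) / n - Fz)
}

set.seed(1)
x <- runif(1e5)
sapply(c(0.1, 0.05, 0.01, 0.001), function(r) Dnr(x, r))
```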

## 5 Simulation Study

### 5.1 Design and Analyses

We conducted a simulation study to demonstrate the benefits of the rounding parameters. As a part of the simulation, we manipulated two conditions: (a) the function smoothness (8 levels: see Figure 3), and (b) the number of observations $n$ (3 levels). Note that the functions are defined so that the smoothness systematically decreases across the eight levels, i.e., the function smoothness is systematically manipulated. We generated the data by (a) independently sampling the predictor(s) from a uniform distribution, (b) independently sampling the error terms $e_i$ from a standard normal distribution, and (c) defining the observed response as $y_i = \eta(\mathbf{x}_i) + e_i$ for $i \in \{1,\ldots,n\}$, as sketched below.
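```r
# One replicate of the data-generating design, with a stand-in smooth test
# function (the paper's eight functions are those plotted in Figure 3).
set.seed(1)
n   <- 1e5
eta <- function(x) sin(2 * pi * x)   # hypothetical test function
x   <- runif(n)                      # (a) uniform predictor
e   <- rnorm(n)                      # (b) standard normal errors
y   <- eta(x) + e                    # (c) observed response
```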

Then, we fit a nonparametric regression model using six different methods: Method 1 is an SSANOVA using unrounded data (see Helwig & Ma, 2015); Method 2 is an SSANOVA with $r = .01$; Method 3 is an SSANOVA with $r = .02$; Method 4 is an SSANOVA with $r = .05$; Method 5 is a standard GAM implemented through Wood's (2015) gam R function; and Method 6 is a batch-processed GAM implemented through Wood's (2015) bam R function. Methods 1–4 are implemented through Helwig's (2015a) bigspline R function (for one predictor) and bigssa R function (for multiple predictors).

We used more knots to fit the models for the two-dimensional functions than for the one-dimensional functions. For Methods 1–4, we used a bin-sampling approach to select knots spread throughout the covariate domain (Helwig & Ma, in prep); for Methods 5 and 6, we used the default gam and bam knot-selection algorithms (see Wood, 2015). For each method, we used cubic splines and selected the smoothing parameters that minimized the GCV score. Given the optimal smoothing parameters, we calculated the fitted values, and then defined the true mean-squared-error (MSE) as $\mathrm{MSE} = n^{-1}\sum_{i=1}^{n}\big(\eta(\mathbf{x}_i) - \hat{\eta}(\mathbf{x}_i)\big)^2$. Finally, we used 100 replications of the above procedure within each cell of the simulation design.

### 5.2 Results

The true MSE for each combination of simulation conditions is plotted in Figure 5.

First, note that for each method, the true MSE decreased as $n$ increased, which was expected. Next, note that all of the methods recovered the true functions quite well (i.e., all MSEs smaller than 0.01). Comparing Methods 1–4, it is evident that setting $r \leq .02$ introduced minimal bias to the resulting solution. In contrast, setting $r = .05$ produced a more noticeable bias, particularly when analyzing the more jagged functions. However, the bias introduced with $r = .05$ was small relative to the norm of the function, so there is little practical difference between the solutions with $r \in \{.01, .02, .05\}$. Examining the true MSEs of Methods 5 and 6, it is apparent that the standard GAM performed almost identically to the batch-processed GAM throughout the simulation.

Comparing the true MSEs of Methods 1–4 to those of Methods 5 and 6, it is apparent that the SSANOVAs performed similarly to the GAMs in every simulation condition. In the one-dimensional case, the GAMs have slightly smaller true MSEs for some functions, but the difference is trivial compared to the norm of the functions. In the two-dimensional case, the SSANOVAs have slightly smaller true MSEs for some functions. Differences between the SSANOVA and GAM solutions are most pronounced when analyzing the most jagged two-dimensional function; in this case, the median true MSE of the GAM solutions is over 10 times larger than the corresponding median of the SSANOVA solutions. However, the difference is still quite small compared to the norm of the function.

The median analysis runtimes (in seconds) for each simulation condition are displayed in Tables 1 and 2. First, note that for each method, the runtime increased as $n$ increased, which was expected. Next, note that the runtimes for Methods 1, 5, and 6 were substantially larger than the corresponding runtimes of Methods 2–4. When analyzing the one-dimensional functions, the median runtimes for Methods 2–4 were less than one-tenth of a second for all examined $n$, and were anywhere from 40–60 times faster than the median runtimes for Methods 5 and 6. When analyzing the two-dimensional functions, the median runtimes for Methods 3–4 were less than one second for all examined $n$, and were anywhere from 10–20 times faster than the median runtimes for Methods 5 and 6.

## 6 Real Data Example

### 6.1 Data and Analyses

To demonstrate the practical benefits of the rounding parameters when working with real data, we use electroencephalography (EEG) data obtained from Bache and Lichman (2013). Note that EEG data consist of electrical activity recorded from various electrodes on the scalp, and EEG patterns are used to infer information about mental processing. The EEG data used in this example were recorded from both control and alcoholic subjects participating in an experiment at the Henri Begleiter Neurodynamic Lab at SUNY Brooklyn. The data were recorded during a standard visual stimulus event-related potential (ERP) experiment using a 61-channel EEG cap (see Figure 6). The data were recorded at a frequency of 256 Hz for one second following the presentation of the visual stimulus.

For the example, we analyzed data from the Pz electrode of 120 subjects (44 controls and 76 alcoholics), and we used 10 replications of the ERP experiment for each subject. (Data from subjects co2a0000425 and co2c0000391 were excluded from the analysis due to small amounts of data, and we used the first 10 replications for each subject.) This resulted in 307,200 data points (120 subjects × 256 time points × 10 replications). We analyzed the data using a two-way SSANOVA, where the first predictor is the time effect and the second predictor is the group effect (control vs. alcoholic); see the Appendix for an explanation of how the rounding parameter can be applied when working with continuous and nominal predictors. We used a cubic spline for the time effect, a nominal spline for the group effect, and bin-sampled knots. Finally, we fit the model both with the unrounded data and with the time covariate rounded to the nearest .01 second (i.e., $r = .01$ with time transformed to the interval [0,1]); note that setting $r = .01$ for the time covariate results in at most 202 unique covariate vectors, which is substantially less than the original 307,200 data points.
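The arithmetic behind this collapse is worth spelling out:

```r
# Time on [0,1] rounded to the nearest .01 gives at most 101 distinct values,
# and the group factor has 2 levels.
u_max <- (round(1 / 0.01) + 1) * 2   # 202 unique covariate vectors
n     <- 120 * 256 * 10              # 307,200 raw observations
n / u_max                            # roughly a 1500-fold reduction in rows
```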

### 6.2 Results

The predicted ERPs for the unrounded and rounded data are plotted in Figure 7.

Note that there are no practical differences between the two solutions (cf. Figure 7a,b). Furthermore, note that both solutions produced a GCV score of 85.96 and the same variance-accounted-for value, suggesting that the rounded solution fits the data as well as the unrounded solution. It is also worth noting that the unrounded solution took over five times longer to fit compared to the rounded solution; furthermore, the unrounded solution required a substantial amount of RAM, whereas the rounded solution is easily fittable on a standard laptop or tablet.

Comparing the estimated ERPs of the controls and alcoholics, there are obvious differences (see Figure 7). In particular, the alcoholic subjects are missing the P300 component of the ERP waveform (i.e., large positive peak occurring about 300 ms after the stimulus). Note that the P300 component is thought to relate to a subject’s internalization and/or categorization of stimuli, so these results suggest that alcoholic subjects have different information processing patterns for standard visual stimuli. This finding is consistent with previous findings regarding EEG patterns of alcoholic subjects (see Porjesz et al., 1980; Porjesz et al., 1987), and some research suggests that this sort of EEG pattern may predispose individuals to alcoholism (see Porjesz & Begleiter, 1990a, b).

## 7 Discussion

This paper proposes the use of rounding parameters to overcome the computational burden of fitting nonparametric regression models to super-large samples of data. By rounding each (continuous) predictor to a given precision (e.g., .01), it is possible to estimate $\eta$ using only the $u \leq n$ unique rounded predictor variable vectors. We have provided a simple Taylor heuristic that justifies the use of a small rounding parameter (e.g., $r = .01$) when using cubic smoothing splines on $x \in [0,1]$. Furthermore, we have provided methods for assessing the finite-sample and asymptotic performance of the rounded SSANOVA estimator in various situations.

The simulation study and EEG example clearly demonstrate the benefits of the proposed rounding parameters. When fitting nonparametric regression models with large $n$, the simulation results reveal that setting $r \leq .05$ can result in substantial computational savings without introducing much bias to the solution. Furthermore, the EEG data example reveals that there are no practical differences between the unrounded and rounded solutions (using $r = .01$) when analyzing real data. Thus, the rounding parameters offer a fast and stable method for fitting nonparametric regression models to very large samples.

In addition to providing a fast method for smoothing large datasets, the rounding parameters are also quite memory efficient. Because the rounding approach only uses the unique rounded-covariate values, it is never necessary to construct the full model design matrix (or the smoothing matrix). So, using the rounding parameters, it is possible to fit nonparametric regression models to very large samples using a standard laptop or tablet, e.g., all of the rounded SSANOVA models in this paper are easily fittable on a laptop with 4 GB of RAM. As a result, typical researchers now have the ability to discover functional relationships in super-large data sets without needing access to supercomputers or computing clusters.

As a final point, it should be noted that in some cases (e.g., large $p$) the number of unique rounded-covariate values $u$ may be very large. In such cases, forming the model design matrix may require a substantial amount of memory (because $u$ is so large). However, as is noted in Helwig (2013) and Helwig and Ma (2015), fitting an SSANOVA model only depends on various crossproduct vectors and matrices. So, if $u$ is too large to form the full model design matrix, then the needed crossproduct statistics can be formed in a batch-processing manner similar to the approach used by Wood's (2015) bam R function.
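A minimal sketch of this batch-processing strategy (placeholder helper names, not the bam implementation) accumulates the crossproduct statistics block by block, so the full design matrix is never held in memory:

```r
# build_block is a placeholder returning the design rows for one block of
# observations; blocks and y_blocks are lists of covariate/response chunks.
accumulate_crossprods <- function(blocks, y_blocks, build_block) {
  XtX <- NULL
  Xty <- NULL
  for (b in seq_along(blocks)) {
    Xb <- build_block(blocks[[b]])
    if (is.null(XtX)) {
      XtX <- crossprod(Xb)
      Xty <- crossprod(Xb, y_blocks[[b]])
    } else {
      XtX <- XtX + crossprod(Xb)
      Xty <- Xty + crossprod(Xb, y_blocks[[b]])
    }
  }
  list(XtX = XtX, Xty = Xty)   # sufficient statistics for the normal equations
}
```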

## Appendix: Rounding Algorithm

In this section, we provide algorithms for rounding SSANOVA predictors and obtaining the sufficient statistics for the SSANOVA estimation. The first algorithm assumes that all of the covariates are continuous; extensions for nominal covariates will be discussed after the presentation of the initial algorithm.

First, let $r_j$ denote the rounding parameter for the $j$-th predictor, let $\tilde{\mathbf{x}}_j = (x_{1j}, \ldots, x_{nj})'$ denote the vector containing the $j$-th predictor's scores, and let $x_{(i)j}$ denote the $i$-th order statistic of the $j$-th predictor. Next, initialize $\mathbf{g} = \mathbf{0}_{n \times 1}$ and $h = 1$, and then calculate

$$\begin{array}{l}\textbf{for } j \in \{1, \ldots, p\}\\ \quad 1.\ \ \mathbf{g} \leftarrow \mathbf{g} + h\,\mathrm{rd}\big\{(1/r_j)\,(\tilde{\mathbf{x}}_j - x_{(1)j}\mathbf{1}_n)\,/\,(x_{(n)j} - x_{(1)j})\big\}\\ \quad 2.\ \ h \leftarrow \mathrm{rd}(1 + 1/r_j)\,h\\ \textbf{end}\end{array}$$

where the rounding function $\mathrm{rd}(\cdot)$ rounds the input to the nearest integer. After running the for loop, we have an index vector $\mathbf{g} = (g_1, \ldots, g_n)'$, where $g_i$ denotes the $i$-th element of $\mathbf{g}$; two observations receive the same index if and only if they share the same rounded covariate vector, so the unique elements of $\mathbf{g}$ identify the unique rounded covariate vectors needed to form the sufficient statistics of Section 3.2.
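A direct R translation of this algorithm (our own function name, assuming continuous covariates scaled as above) is:

```r
# Map each observation to an integer index g such that two observations share
# an index iff they share a rounded covariate vector; X is n x p and r is a
# length-p vector of rounding parameters.
round_index <- function(X, r) {
  g <- numeric(nrow(X))
  h <- 1
  for (j in seq_len(ncol(X))) {
    xj <- (X[, j] - min(X[, j])) / (max(X[, j]) - min(X[, j]))  # scale to [0,1]
    g  <- g + h * round(xj / r[j])
    h  <- h * round(1 + 1 / r[j])
  }
  g
}

# The index vector immediately yields the sufficient statistics of Section 3.2:
set.seed(1)
X <- matrix(runif(2e5), ncol = 2)
y <- rnorm(1e5)
g <- round_index(X, r = c(0.01, 0.02))
w    <- table(g)            # replicate counts w_t
ytil <- tapply(y, g, sum)   # summed responses per unique rounded vector
```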