Multiple Learning for Regression in Big Data

Xiang Liu¹, Purdue University, xiang35@purdue.edu
Ziyang Tang¹, Purdue University, tang385@purdue.edu
Huyunting Huang, Purdue University, huan1182@purdue.edu
Tonglin Zhang, Purdue University, tlzhang@purdue.edu
Baijian Yang, Purdue University, byang@purdue.edu

¹ indicates equal contribution.
Abstract

Regression problems that have closed-form solutions are well understood and can be easily implemented when the dataset is small enough to be loaded entirely into RAM. Challenges arise when data are too big to be stored in RAM to compute the closed-form solutions. Many techniques were proposed to overcome or alleviate the memory barrier problem, but the solutions are often local optima. In addition, most approaches require loading the raw data into memory again when updating the models. Parallel computing clusters are also often expected in practice if multiple models need to be computed and compared. We propose multiple learning approaches that utilize an array of sufficient statistics (SS) to address these big data challenges. The memory-oblivious approaches break the memory barrier when computing regressions with closed-form solutions, including but not limited to linear regression, weighted linear regression, linear regression with Box-Cox transformation (Box-Cox regression) and ridge regression. The computation and update of the SS array can be handled at the per-row or per-mini-batch level, and updating a model is as simple as matrix addition and subtraction. Furthermore, the proposed approaches enable the computation of multiple models in parallel, because the SS arrays for different models can be computed simultaneously with a single pass of slow disk I/O over the dataset. We implemented our approaches on Spark and evaluated them on simulated datasets. Results show that our approaches achieve the exact solutions of multiple models, and the training time saved compared to the traditional methods is proportional to the number of models that need to be investigated.

Big Data; Linear Regression; Weighted Linear Regression; Ridge Regression; Box-Cox Transformation

I Introduction

Linear regression, weighted linear regression, linear regression with Box-Cox transformation (Box-Cox regression) and ridge regression have served society in many respects by modeling the relationship between a scalar response variable and explanatory variable(s). From housing price prediction to stock price prediction, and from face recognition to marketing analysis, the related applications span a wide spectrum [20, 1, 21]. In the big data era, these regression models remain prevalent in academia and industry. Even though more advanced models, such as XGBoost and deep learning, have seen significant successes lately, the regression models continue to have impact in many fields due to their transparency, reliability and explainability [6, 10]. However, it is not easy to compute these models when the dataset is massive. Closed-form solutions become impossible if the physical memory cannot hold all the data or the intermediate results needed for the computation, and trade-offs must be made between accuracy and time if iterative methods are to be applied. Hence, it is of high value to propose a set of big-data oriented approaches that preserve the benefits of linear, weighted linear, Box-Cox and ridge regression.

For linear regression, academia and industry resort to two major techniques: ordinary least squares (OLS) and the iterative methods. The OLS method is designed to calculate the closed-form solution [13]. By solving the normal equation, OLS can immediately derive the solution from the data. The normal equation involves $X^TX$; if $X^TX$ is singular, the normal equation becomes unsolvable. One solution is to use the generalized inverse [9, 3, 4]. Although OLS is time-efficient in deriving the closed-form solution, it also introduces the memory barrier issue in that the RAM needs to be big enough to store the entire dataset to solve the equation. To overcome the memory barrier, distributed matrix computation could be applied as a remedy [18], but the time cost nevertheless makes this algorithm infeasible, so the applications of this technique are limited. The other technique, the iterative methods, which include gradient descent, Newton's method and quasi-Newton methods, is commonly used to provide approximate solutions [14, 25].

Gradient descent, also known as steepest descent, aims to find the minimum of a function. It approaches the minimum by taking steps along the negative gradient of the function, scaled by a learning rate. It is more universal than OLS because its variations, such as mini-batch gradient descent and stochastic gradient descent, overcome the memory barrier issue by performing the calculation in small batches instead of feeding all the data into memory at once [23]. However, gradient descent oscillates around the minimum region when the algorithm gets close to the minimum, and its asymptotic rate of convergence is inferior to many other iterative methods. If a more direct approach to the minimum or a higher asymptotic rate of convergence is demanded, Newton's method is an alternative.

Newton's method is a root-finding algorithm that utilizes the Taylor series. To find a minimum or maximum, it needs knowledge of the second derivatives. Unlike gradient descent, this enables Newton's method to approach the optimum more directly rather than oscillating around it. Besides, it has been proven that Newton's method has a quadratic asymptotic rate of convergence. However, the algorithm is faster than gradient descent only if the Hessian matrix is known or easy to compute [25]. Unfortunately, the expressions of the second derivatives for large-scale optimization problems are often complicated and intractable.

Quasi-Newton methods, for instance DFP, BFGS and L-BFGS, were proposed as alternatives to Newton's method when the Hessian matrix is unavailable or too expensive to calculate [7, 2, 16]. Instead of inverting the Hessian matrix as in Newton's method, quasi-Newton methods build up an approximation of the inverse matrix to reduce the computational load. With this mechanism, quasi-Newton methods are usually faster than Newton's method for large datasets. In linear regression, L-BFGS, a variation of BFGS, is one of the most widely used quasi-Newton methods [26]. Generally, L-BFGS outperforms gradient descent in linear regression.

Most of the aforementioned approaches require multiple passes through the dataset. Donald Knuth presented an efficient solution that requires only a single pass through the dataset; however, this approach is only applicable to variance computation [15].

Weighted linear regression is a generalized version of linear regression that quantifies the importance of different observations [19]. A weighted version of OLS is designed to obtain the corresponding closed-form solution. The iterative methods, with slight modifications, are also applicable to weighted linear regression [12].

Box-Cox regression is linear regression with the response variable transformed by the Box-Cox transformation [5, 24]. The design philosophy of Box-Cox regression is to handle non-linearity between the response variable and the explanatory variables by applying a power transformation to the response variable. Naturally, approaches for linear regression are applicable to Box-Cox regression.

As linear regression is deficient in handling highly correlated data, ridge regression was proposed [11]. The basic idea of ridge regression is to add a penalty term to the error sum of squares (SSE) cost function of linear regression [11, 17]. A constrained version of OLS can solve this problem, producing a similar closed-form solution. The only difference is that the component $X^TX$ from OLS is substituted by $X^TX + kI$, where $k$ is the penalty coefficient and $I$ is the identity matrix. By means of $X^TX + kI$, the constrained OLS no longer has to deal with the singularity issue, but the memory barrier issue of OLS remains. Gradient descent, Newton's method and quasi-Newton methods can be applied as well [14, 25, 8].

From the above discussion, it can be concluded that research gaps remain in two respects: (i) OLS and its extended versions have difficulty handling the memory barrier issue; and (ii) the iterative methods are time-inefficient and require many iterations to train regression models well. In addition, parameter tuning is inevitable under most conditions. It may take several days or even weeks for large-scale projects to reach the desired performance goals of the models. For Box-Cox regression or ridge regression, the situation gets worse, as a set of power or ridge parameters is usually evaluated to pick the best one, which of course also multiplies the time cost [22].

In order to combine the advantages of the OLS-based approaches, which use closed-form solutions to produce exact results, and the iterative methods, which overcome the memory barrier, we propose multiple learning approaches that utilize sufficient statistics (SS). The main contributions of our algorithms are summarized as follows:

  • We introduce an SS array which can be computed at the per-row or per-mini-batch level for calculating closed-form solutions.

  • Once the closed-form solutions are obtained, the optima are found, i.e., the prediction performance is at least as good as that of OLS.

  • With SS, the dataset stored in large secondary storage, such as HDD or SSD, needs to be loaded into primary storage only once. The time efficiency is therefore greatly improved in contrast to the iterative methods that require multiple slow disk I/Os.

  • Because multiple SS arrays for different models can be computed simultaneously, multiple models can be computed and updated with a single pass over the entire dataset, i.e., one round of slow disk I/O.

II Background Concepts

For regression analysis, not only the estimators of the regression model coefficients are required, but also the estimators of the variance and the variance-covariance matrix should be computed for significance tests. For ease of presentation, the necessary notions and notations closely relevant to linear regression, weighted linear regression, Box-Cox regression and ridge regression are explained below.

II-A Linear Regression

Assume the dataset contains $n$ observations, each of which has $p$ features. Consider a linear regression model

$$\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\epsilon} \qquad (1)$$

where $\mathbf{y}$ is an $n \times 1$ vector of response variables, $X$ is an $n \times p$ matrix of explanatory variables, $\boldsymbol{\beta}$ is a $p \times 1$ vector of regression coefficient parameters, and $\boldsymbol{\epsilon}$ is an $n \times 1$ error vector following the normal distribution $\mathcal{N}(\mathbf{0}, \sigma^2 I)$.

Linear regression is usually solved by maximizing the loglikelihood function (2),

$$\ell(\boldsymbol{\beta}, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\|\mathbf{y} - X\boldsymbol{\beta}\|_2^2 \qquad (2)$$

where $\|\cdot\|_2$ is the $\ell_2$ norm.

The estimators of the model coefficients, variance and variance-covariance matrix are shown in (3).

$$\hat{\boldsymbol{\beta}} = (X^TX)^{-1}X^T\mathbf{y}, \qquad \hat{\sigma}^2 = \frac{1}{n}\|\mathbf{y} - X\hat{\boldsymbol{\beta}}\|_2^2, \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}) = \hat{\sigma}^2 (X^TX)^{-1} \qquad (3)$$

Note that the entries of $X$ are all known observations. This means that transformed values of the explanatory variables (e.g., their squares) can be easily computed and included as additional explanatory variables in equation (1). As a result, this approach can also be used to fit polynomial regression models, in addition to linear regression models.
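To make the closed-form computation in (3) concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' Spark implementation). The function name, the toy data and the use of the pseudoinverse to cover the singular case are assumptions; the variance estimator uses the maximum-likelihood divisor $n$ as in (3).

```python
import numpy as np

def ols_closed_form(X, y):
    """Closed-form OLS estimators from (3): beta_hat, sigma2_hat, Var(beta_hat)."""
    XtX = X.T @ X
    Xty = X.T @ y
    # Pseudoinverse plays the role of the generalized inverse when X^T X is singular.
    XtX_inv = np.linalg.pinv(XtX)
    beta_hat = XtX_inv @ Xty
    resid = y - X @ beta_hat
    sigma2_hat = float(resid @ resid) / len(y)   # MLE divisor n
    var_beta_hat = sigma2_hat * XtX_inv          # variance-covariance matrix
    return beta_hat, sigma2_hat, var_beta_hat

# Illustrative toy data: intercept column plus 3 simulated features.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 3))])
beta_true = np.array([1.0, 2.0, -0.5, 0.3])
y = X @ beta_true + rng.normal(scale=1.0, size=1000)
beta_hat, sigma2_hat, var_beta_hat = ols_closed_form(X, y)
```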

II-B Weighted Linear Regression

Weighted linear regression is similar to linear regression, except that it assumes all the off-diagonal entries of the correlation matrix of the residuals are zero. By minimizing the corresponding SSE cost function in (4), the estimators of the model coefficients, variance and variance-covariance matrix are obtained as shown in (5).

$$\mathrm{SSE}(\boldsymbol{\beta}) = (\mathbf{y} - X\boldsymbol{\beta})^T W (\mathbf{y} - X\boldsymbol{\beta}) \qquad (4)$$

where $W$ is an $n \times n$ diagonal matrix of weights.

$$\hat{\boldsymbol{\beta}} = (X^TWX)^{-1}X^TW\mathbf{y}, \qquad \hat{\sigma}^2 = \frac{1}{n}(\mathbf{y} - X\hat{\boldsymbol{\beta}})^TW(\mathbf{y} - X\hat{\boldsymbol{\beta}}), \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}) = \hat{\sigma}^2 (X^TWX)^{-1} \qquad (5)$$

II-C Box-Cox Regression

The Box-Cox regression model is a linear regression model with an additional power transformation on the response variable, as shown in (6),

$$\mathbf{y}^{(\lambda)} = X\boldsymbol{\beta} + \boldsymbol{\epsilon} \qquad (6)$$

where $\mathbf{y}^{(\lambda)}$ is the element-wise power transformation of the response defined in (7).

$$y_i^{(\lambda)} = \begin{cases} \dfrac{y_i^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\ \log y_i, & \lambda = 0. \end{cases} \qquad (7)$$
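As a small illustration of the element-wise transformation in (7), a sketch in NumPy (function name assumed; the transformation requires positive responses):

```python
import numpy as np

def box_cox_transform(y, lam):
    """Element-wise Box-Cox transformation from (7); requires y > 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (np.power(y, lam) - 1.0) / lam

# e.g., box_cox_transform([1.0, 2.0, 4.0], 0.5)
```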

Normally, a set of power parameters $\Lambda$ is applied to the response variable. In this case, for every $\lambda \in \Lambda$, the profile loglikelihood (8) is evaluated, and the $\lambda$ that maximizes it is chosen as the best power parameter.

$$\ell_p(\lambda) = -\frac{n}{2}\log\hat{\sigma}^2_{\lambda} + (\lambda - 1)\sum_{i=1}^{n}\log y_i \qquad (8)$$

where $\hat{\sigma}^2_{\lambda}$ is defined in (9). The estimators of the model coefficients, variance and variance-covariance matrix for Box-Cox regression are

$$\hat{\boldsymbol{\beta}}_{\lambda} = (X^TX)^{-1}X^T\mathbf{y}^{(\lambda)}, \qquad \hat{\sigma}^2_{\lambda} = \frac{1}{n}\|\mathbf{y}^{(\lambda)} - X\hat{\boldsymbol{\beta}}_{\lambda}\|_2^2, \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{\lambda}) = \hat{\sigma}^2_{\lambda} (X^TX)^{-1} \qquad (9)$$

II-D Ridge Regression

Ridge regression is linear regression with an $\ell_2$ penalty term added. The corresponding SSE cost function is:

$$\mathrm{SSE}_k(\boldsymbol{\beta}) = \|\mathbf{y} - X\boldsymbol{\beta}\|_2^2 + k\|\boldsymbol{\beta}\|_2^2 \qquad (10)$$

where $k$ is a non-negative tuning parameter used to control the penalty magnitude. For any $k$, (10) can be analytically minimized, yielding the estimator of $\boldsymbol{\beta}$ as

$$\hat{\boldsymbol{\beta}}_k = (X^TX + kI)^{-1}X^T\mathbf{y} \qquad (11)$$

III Methodology

The main goal is to find approaches that overcome the memory barrier issue of closed-form solutions and make them as widely applicable as the iterative methods in big data. In pursuit of this goal, the array of sufficient statistics (SS) is formally defined, and SS-based multiple learning algorithms are proposed in this section.

III-A Sufficient Statistics Array

An SS array is an array of sufficient statistics used to calculate the estimators of the models and the loglikelihood function (or SSE cost function) without a second visit to the dataset. It is inspired by the computation-wise row-independence of the equivalent forms of (3) for linear regression [28, 27].

Rewriting (3) as (12), $X^TX = \sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^T$ is computation-wise row-independent, i.e., for any two observations $i$ and $j$, computing the term $\mathbf{x}_i\mathbf{x}_i^T$ in the summation does not depend on $\mathbf{x}_j$. Likewise, $X^T\mathbf{y} = \sum_{i=1}^{n}\mathbf{x}_i y_i$ and $\mathbf{y}^T\mathbf{y} = \sum_{i=1}^{n}y_i^2$ are computation-wise row-independent as well.

$$\hat{\boldsymbol{\beta}} = \Big(\sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^T\Big)^{-1}\sum_{i=1}^{n}\mathbf{x}_i y_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\Big(\sum_{i=1}^{n}y_i^2 - 2\hat{\boldsymbol{\beta}}^T\sum_{i=1}^{n}\mathbf{x}_i y_i + \hat{\boldsymbol{\beta}}^T\Big(\sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^T\Big)\hat{\boldsymbol{\beta}}\Big) \qquad (12)$$

where $\mathbf{x}_i^T$ denotes the $i$-th row of $X$ and $y_i$ the $i$-th response.

Inspired by this thought, the array of SS is formally defined as follows.

Definition 1.

A sufficient statistics (SS) array is an array of sufficient statistics that is computed at the per-row or per-mini-batch level from the dataset and can be used to compute the estimators of the model coefficients $\hat{\boldsymbol{\beta}}$, the variance $\hat{\sigma}^2$, the variance-covariance matrix $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$ and the loglikelihood (or SSE cost function) without revisiting the dataset.
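As a concrete sketch of Definition 1 (our own illustration, not the authors' code), an SS array for linear regression can be represented as a small container whose entries are accumulated per row or per mini-batch. Because the entries are plain sums, two SS arrays can be merged by matrix addition and previously absorbed rows can be retracted by subtraction, which is what makes model updates cheap. All class and method names below are assumed for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SSArray:
    """Sufficient statistics array for linear regression: (y'y, X'y, X'X)."""
    p: int
    s_yy: float = 0.0
    s_xy: np.ndarray = None
    S_xx: np.ndarray = None
    n: int = 0

    def __post_init__(self):
        if self.s_xy is None:
            self.s_xy = np.zeros(self.p)
        if self.S_xx is None:
            self.S_xx = np.zeros((self.p, self.p))

    def absorb(self, X_b, y_b):
        """Add a mini-batch's contribution (per-row updates are the n_b = 1 case)."""
        self.s_yy += float(y_b @ y_b)
        self.s_xy += X_b.T @ y_b
        self.S_xx += X_b.T @ X_b
        self.n += len(y_b)

    def retract(self, X_b, y_b):
        """Remove a previously absorbed mini-batch: model update by subtraction."""
        self.s_yy -= float(y_b @ y_b)
        self.s_xy -= X_b.T @ y_b
        self.S_xx -= X_b.T @ X_b
        self.n -= len(y_b)

    def merge(self, other):
        """Combine SS arrays computed on different data partitions."""
        self.s_yy += other.s_yy
        self.s_xy += other.s_xy
        self.S_xx += other.S_xx
        self.n += other.n
        return self
```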

III-B Linear Regression

Based on (2) and (3), $\mathbf{s} = (s_{yy}, \mathbf{s}_{xy}, S_{xx})$ is presented as an array of SS for linear regression,

$$s_{yy} = \mathbf{y}^T\mathbf{y} = \sum_{i=1}^{n} y_i^2, \qquad \mathbf{s}_{xy} = X^T\mathbf{y} = \sum_{i=1}^{n}\mathbf{x}_i y_i, \qquad S_{xx} = X^TX = \sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^T \qquad (13)$$

where $s_{yy}$ is a scalar, $\mathbf{s}_{xy}$ is a $p \times 1$ vector, and $S_{xx}$ is a $p \times p$ matrix.

By (13), we obtain the following

$$\hat{\boldsymbol{\beta}} = S_{xx}^{-1}\mathbf{s}_{xy}, \qquad \hat{\sigma}^2 = \frac{1}{n}\big(s_{yy} - 2\hat{\boldsymbol{\beta}}^T\mathbf{s}_{xy} + \hat{\boldsymbol{\beta}}^T S_{xx}\hat{\boldsymbol{\beta}}\big), \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}) = \hat{\sigma}^2 S_{xx}^{-1} \qquad (14)$$

Theorem 1.

$\mathbf{s} = (s_{yy}, \mathbf{s}_{xy}, S_{xx})$ is an array of SS for linear regression to derive $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$, and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$.

Proof.

From (13), the loglikelihood can be expressed as a function of $\mathbf{s}$,

$$\ell(\boldsymbol{\beta}, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\big(s_{yy} - 2\boldsymbol{\beta}^T\mathbf{s}_{xy} + \boldsymbol{\beta}^T S_{xx}\boldsymbol{\beta}\big) \qquad (15)$$

which only depends on the SS array for linear regression. ∎

To accelerate the computation, the row-by-row calculation can be replaced by batch-by-batch computation, i.e., $s_{yy}$, $\mathbf{s}_{xy}$ and $S_{xx}$ can be written in batch form:

$$s_{yy} = \sum_{b=1}^{B}\mathbf{y}_b^T\mathbf{y}_b, \qquad \mathbf{s}_{xy} = \sum_{b=1}^{B}X_b^T\mathbf{y}_b, \qquad S_{xx} = \sum_{b=1}^{B}X_b^TX_b \qquad (16)$$

where $B$ denotes the total number of batches and $(\mathbf{y}_b^T\mathbf{y}_b, X_b^T\mathbf{y}_b, X_b^TX_b)$ denotes the SS array of batch $b$. Here $\mathbf{y}_b$ is an $n_b \times 1$ vector, $X_b$ is an $n_b \times p$ matrix, and $n_b$ is the batch size of batch $b$. The multiple learning algorithm for linear regression with mini-batches is shown in Algorithm 1.

Input: batch-by-batch $(X_b, \mathbf{y}_b)$, $b = 1, \ldots, B$, of the entire dataset

Output: $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$

1:  $s_{yy} \leftarrow 0$, $\mathbf{s}_{xy} \leftarrow \mathbf{0}$, $S_{xx} \leftarrow \mathbf{0}$
2:  for $b = 1$ to $B$ do
3:     Compute $\mathbf{y}_b^T\mathbf{y}_b$, $X_b^T\mathbf{y}_b$ and $X_b^TX_b$ based on (16)
4:     $s_{yy} \leftarrow s_{yy} + \mathbf{y}_b^T\mathbf{y}_b$, $\mathbf{s}_{xy} \leftarrow \mathbf{s}_{xy} + X_b^T\mathbf{y}_b$, $S_{xx} \leftarrow S_{xx} + X_b^TX_b$
5:  end for
6:  if $S_{xx}$ is singular then
7:     Compute $S_{xx}^{-}$ using the generalized inverse
8:  else
9:     Compute $S_{xx}^{-1}$
10:  end if
11:  Compute $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$ based on (14)
12:  return $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$
Algorithm 1 Linear Regression with Sufficient Statistics
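A minimal single-node sketch of Algorithm 1 in NumPy (our illustration; in practice the batches would come from disk or Spark partitions, and the function name and the `batches` iterable are assumptions):

```python
import numpy as np

def linear_regression_ss(batches, p):
    """Algorithm 1 sketch: one pass over (X_b, y_b) batches, then closed-form estimators (14)."""
    s_yy, s_xy, S_xx, n = 0.0, np.zeros(p), np.zeros((p, p)), 0
    for X_b, y_b in batches:                 # single pass over the data
        s_yy += float(y_b @ y_b)
        s_xy += X_b.T @ y_b
        S_xx += X_b.T @ X_b
        n += len(y_b)
    S_inv = np.linalg.pinv(S_xx)             # pseudoinverse covers the singular case
    beta = S_inv @ s_xy
    sigma2 = (s_yy - 2.0 * beta @ s_xy + beta @ S_xx @ beta) / n
    var_beta = sigma2 * S_inv
    return beta, sigma2, var_beta
```

In a Spark implementation, each executor could compute the partial sums of (16) on its own partitions (e.g., with mapPartitions) and the driver could add the partial SS arrays together with a reduce, so the data are read from disk only once.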

III-C Weighted Linear Regression

Weighted linear regression uses weights to adjust the importance of different observations. Therefore, the SS array for weighted linear regression is slightly different, as shown in (17),

$$s_{yy}^{w} = \mathbf{y}^TW\mathbf{y}, \qquad \mathbf{s}_{xy}^{w} = X^TW\mathbf{y}, \qquad S_{xx}^{w} = X^TWX \qquad (17)$$

where $s_{yy}^{w}$ is a scalar, $\mathbf{s}_{xy}^{w}$ is a $p \times 1$ vector, and $S_{xx}^{w}$ is a $p \times p$ matrix.

The estimators are re-expressed as follows:

$$\hat{\boldsymbol{\beta}} = (S_{xx}^{w})^{-1}\mathbf{s}_{xy}^{w}, \qquad \hat{\sigma}^2 = \frac{1}{n}\big(s_{yy}^{w} - 2\hat{\boldsymbol{\beta}}^T\mathbf{s}_{xy}^{w} + \hat{\boldsymbol{\beta}}^T S_{xx}^{w}\hat{\boldsymbol{\beta}}\big), \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}) = \hat{\sigma}^2 (S_{xx}^{w})^{-1} \qquad (18)$$

Theorem 2.

$\mathbf{s}^{w} = (s_{yy}^{w}, \mathbf{s}_{xy}^{w}, S_{xx}^{w})$ is an array of SS for weighted linear regression to derive the estimators $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$, and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$.

Proof.

From (17), (4) can be expressed as a function of the SS array,

$$\mathrm{SSE}(\boldsymbol{\beta}) = s_{yy}^{w} - 2\boldsymbol{\beta}^T\mathbf{s}_{xy}^{w} + \boldsymbol{\beta}^T S_{xx}^{w}\boldsymbol{\beta} \qquad (19)$$

which only depends on the SS array for weighted linear regression. ∎

Similar to the multiple learning algorithm for linear regression, calculating the SS array batch by batch is also feasible,

$$s_{yy}^{w} = \sum_{b=1}^{B}\mathbf{y}_b^TW_b\mathbf{y}_b, \qquad \mathbf{s}_{xy}^{w} = \sum_{b=1}^{B}X_b^TW_b\mathbf{y}_b, \qquad S_{xx}^{w} = \sum_{b=1}^{B}X_b^TW_bX_b \qquad (20)$$

where $W_b$ is the $n_b \times n_b$ diagonal weight matrix of batch $b$.

The multiple learning approach for weighted linear regression is shown in Algorithm 2.

Input: batch-by-batch $(X_b, \mathbf{y}_b, W_b)$, $b = 1, \ldots, B$, of the entire dataset

Output: $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$

1:  $s_{yy}^{w} \leftarrow 0$, $\mathbf{s}_{xy}^{w} \leftarrow \mathbf{0}$, $S_{xx}^{w} \leftarrow \mathbf{0}$
2:  for $b = 1$ to $B$ do
3:     Compute $\mathbf{y}_b^TW_b\mathbf{y}_b$, $X_b^TW_b\mathbf{y}_b$ and $X_b^TW_bX_b$ based on (20)
4:     $s_{yy}^{w} \leftarrow s_{yy}^{w} + \mathbf{y}_b^TW_b\mathbf{y}_b$, $\mathbf{s}_{xy}^{w} \leftarrow \mathbf{s}_{xy}^{w} + X_b^TW_b\mathbf{y}_b$, $S_{xx}^{w} \leftarrow S_{xx}^{w} + X_b^TW_bX_b$
5:  end for
6:  if $S_{xx}^{w}$ is singular then
7:     Compute $(S_{xx}^{w})^{-}$ using the generalized inverse
8:  else
9:     Compute $(S_{xx}^{w})^{-1}$
10:  end if
11:  Compute $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$ based on (18)
12:  return $\hat{\boldsymbol{\beta}}$, $\hat{\sigma}^2$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}})$
Algorithm 2 Weighted Linear Regression with Sufficient Statistics
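A corresponding sketch for Algorithm 2 (illustrative only); the weights are assumed to be supplied per batch as a vector w_b holding the diagonal of $W_b$:

```python
import numpy as np

def weighted_linear_regression_ss(batches, p):
    """Algorithm 2 sketch: batches of (X_b, y_b, w_b) with w_b = diag(W_b)."""
    s_yy, s_xy, S_xx, n = 0.0, np.zeros(p), np.zeros((p, p)), 0
    for X_b, y_b, w_b in batches:
        s_yy += float(y_b @ (w_b * y_b))        # y_b' W_b y_b
        s_xy += X_b.T @ (w_b * y_b)             # X_b' W_b y_b
        S_xx += (X_b * w_b[:, None]).T @ X_b    # X_b' W_b X_b
        n += len(y_b)
    S_inv = np.linalg.pinv(S_xx)
    beta = S_inv @ s_xy
    sigma2 = (s_yy - 2.0 * beta @ s_xy + beta @ S_xx @ beta) / n
    var_beta = sigma2 * S_inv
    return beta, sigma2, var_beta
```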

III-D Box-Cox Regression

Box-Cox regression requires a power transformation on the response variable. Commonly, a set of power parameters $\Lambda$ is applied, and the $\lambda$ that maximizes (8) is picked as the best parameter. As the profile loglikelihood is required for parameter selection, $\sum_{i=1}^{n}\log y_i$ is also needed.

The arrays of SS for Box-Cox regression are shown in (21). For every $\lambda \in \Lambda$,

$$s_{yy}^{(\lambda)} = \mathbf{y}^{(\lambda)T}\mathbf{y}^{(\lambda)}, \qquad s_{\log y} = \sum_{i=1}^{n}\log y_i, \qquad \mathbf{s}_{xy}^{(\lambda)} = X^T\mathbf{y}^{(\lambda)}, \qquad S_{xx} = X^TX \qquad (21)$$

where $s_{yy}^{(\lambda)}$ and $s_{\log y}$ are scalars, $\mathbf{s}_{xy}^{(\lambda)}$ is a $p \times 1$ vector, and $S_{xx}$ is a $p \times p$ matrix. Notably, $S_{xx}$ is shared by all the models.

Thus, for every $\lambda \in \Lambda$,

$$\hat{\boldsymbol{\beta}}_{\lambda} = S_{xx}^{-1}\mathbf{s}_{xy}^{(\lambda)}, \qquad \hat{\sigma}^2_{\lambda} = \frac{1}{n}\big(s_{yy}^{(\lambda)} - 2\hat{\boldsymbol{\beta}}_{\lambda}^T\mathbf{s}_{xy}^{(\lambda)} + \hat{\boldsymbol{\beta}}_{\lambda}^T S_{xx}\hat{\boldsymbol{\beta}}_{\lambda}\big), \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{\lambda}) = \hat{\sigma}^2_{\lambda} S_{xx}^{-1} \qquad (22)$$

Theorem 3.

For any $\lambda \in \Lambda$, the corresponding $\mathbf{s}^{(\lambda)} = (s_{yy}^{(\lambda)}, s_{\log y}, \mathbf{s}_{xy}^{(\lambda)}, S_{xx})$ is an array of SS for Box-Cox regression, which can be used to compute $\hat{\boldsymbol{\beta}}_{\lambda}$, $\hat{\sigma}^2_{\lambda}$, and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{\lambda})$.

Proof.

By (22), (8) becomes

$$\ell_p(\lambda) = -\frac{n}{2}\log\hat{\sigma}^2_{\lambda} + (\lambda - 1)\, s_{\log y} \qquad (23)$$

which only depends on the SS array for Box-Cox regression. ∎

The batched version of the SS array for any $\lambda \in \Lambda$ is shown in (24),

$$s_{yy}^{(\lambda)} = \sum_{b=1}^{B}\mathbf{y}_b^{(\lambda)T}\mathbf{y}_b^{(\lambda)}, \qquad s_{\log y} = \sum_{b=1}^{B}\mathbf{1}^T\log\mathbf{y}_b, \qquad \mathbf{s}_{xy}^{(\lambda)} = \sum_{b=1}^{B}X_b^T\mathbf{y}_b^{(\lambda)}, \qquad S_{xx} = \sum_{b=1}^{B}X_b^TX_b \qquad (24)$$

where $\mathbf{y}_b^{(\lambda)}$ is the $n_b \times 1$ transformed response vector of batch $b$ and $\log\mathbf{y}_b$ is applied element-wise.

The SS-based Box-Cox regression algorithm by mini-batch is presented in Algorithm 3.

Input: batch-by-batch $(X_b, \mathbf{y}_b)$, $b = 1, \ldots, B$, of the entire dataset; a set of power parameters $\Lambda$

Output: $\hat{\boldsymbol{\beta}}_{\lambda^*}$, $\hat{\sigma}^2_{\lambda^*}$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{\lambda^*})$

1:  $S_{xx} \leftarrow \mathbf{0}$, $s_{\log y} \leftarrow 0$
2:  for $\lambda \in \Lambda$ do
3:     $s_{yy}^{(\lambda)} \leftarrow 0$, $\mathbf{s}_{xy}^{(\lambda)} \leftarrow \mathbf{0}$
4:  end for
5:  for $b = 1$ to $B$ do
6:     Compute $X_b^TX_b$ and $\mathbf{1}^T\log\mathbf{y}_b$ based on (24)
7:     $S_{xx} \leftarrow S_{xx} + X_b^TX_b$, $s_{\log y} \leftarrow s_{\log y} + \mathbf{1}^T\log\mathbf{y}_b$
8:     for $\lambda \in \Lambda$ do
9:        Compute $\mathbf{y}_b^{(\lambda)T}\mathbf{y}_b^{(\lambda)}$ and $X_b^T\mathbf{y}_b^{(\lambda)}$ based on (24)
10:        $s_{yy}^{(\lambda)} \leftarrow s_{yy}^{(\lambda)} + \mathbf{y}_b^{(\lambda)T}\mathbf{y}_b^{(\lambda)}$, $\mathbf{s}_{xy}^{(\lambda)} \leftarrow \mathbf{s}_{xy}^{(\lambda)} + X_b^T\mathbf{y}_b^{(\lambda)}$
11:     end for
12:  end for
13:  if $S_{xx}$ is singular then
14:     Compute $S_{xx}^{-}$ using the generalized inverse
15:  else
16:     Compute $S_{xx}^{-1}$
17:  end if
18:  for $\lambda \in \Lambda$ do
19:     Compute $\hat{\boldsymbol{\beta}}_{\lambda}$, $\hat{\sigma}^2_{\lambda}$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{\lambda})$ based on (22)
20:     Compute $\ell_p(\lambda)$ based on (23)
21:  end for
22:  return $\hat{\boldsymbol{\beta}}_{\lambda^*}$, $\hat{\sigma}^2_{\lambda^*}$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{\lambda^*})$ for $\lambda^* = \arg\max_{\lambda \in \Lambda}\ell_p(\lambda)$
Algorithm 3 Box-Cox Regression with Sufficient Statistics
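A sketch of Algorithm 3 (illustrative only; function name, data layout and the example grid of power parameters are assumptions). Note that $X_b^TX_b$ and the sum of $\log y$ are computed once per batch, while the $\lambda$-specific statistics are accumulated for every $\lambda$ in the grid:

```python
import numpy as np

def box_cox_regression_ss(batches, p, lambdas):
    """Algorithm 3 sketch: multiple Box-Cox models from a single pass over the data."""
    S_xx, s_logy, n = np.zeros((p, p)), 0.0, 0
    s_yy = {lam: 0.0 for lam in lambdas}
    s_xy = {lam: np.zeros(p) for lam in lambdas}
    for X_b, y_b in batches:                    # y_b must be positive
        S_xx += X_b.T @ X_b                     # shared by all lambda models
        s_logy += float(np.sum(np.log(y_b)))
        n += len(y_b)
        for lam in lambdas:
            y_lam = np.log(y_b) if lam == 0 else (np.power(y_b, lam) - 1.0) / lam
            s_yy[lam] += float(y_lam @ y_lam)
            s_xy[lam] += X_b.T @ y_lam
    S_inv = np.linalg.pinv(S_xx)
    best, results = None, {}
    for lam in lambdas:
        beta = S_inv @ s_xy[lam]
        sigma2 = (s_yy[lam] - 2.0 * beta @ s_xy[lam] + beta @ S_xx @ beta) / n
        loglik = -0.5 * n * np.log(sigma2) + (lam - 1.0) * s_logy   # profile loglikelihood (23)
        results[lam] = (beta, sigma2, sigma2 * S_inv)
        if best is None or loglik > best[1]:
            best = (lam, loglik)
    return best[0], results[best[0]]

# e.g., box_cox_regression_ss(batches, p, lambdas=[-0.5, 0.0, 0.5, 1.0])
```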

III-E Ridge Regression

Although ridge regression requires a set of ridge parameters, the SS array is reusable across all ridge parameters and can be borrowed directly from linear regression.

Let $\mathcal{K}$ be the set of candidate ridge parameters. For every $k \in \mathcal{K}$, the corresponding estimators $\hat{\boldsymbol{\beta}}_k$, $\hat{\sigma}^2_k$, $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_k)$ and the SSE cost function are:

$$\hat{\boldsymbol{\beta}}_k = (S_{xx} + kI)^{-1}\mathbf{s}_{xy}, \qquad \hat{\sigma}^2_k = \frac{1}{n}\big(s_{yy} - 2\hat{\boldsymbol{\beta}}_k^T\mathbf{s}_{xy} + \hat{\boldsymbol{\beta}}_k^T S_{xx}\hat{\boldsymbol{\beta}}_k\big), \qquad \widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_k) = \hat{\sigma}^2_k (S_{xx} + kI)^{-1}S_{xx}(S_{xx} + kI)^{-1} \qquad (25)$$
$$\mathrm{SSE}_k(\hat{\boldsymbol{\beta}}_k) = s_{yy} - 2\hat{\boldsymbol{\beta}}_k^T\mathbf{s}_{xy} + \hat{\boldsymbol{\beta}}_k^T S_{xx}\hat{\boldsymbol{\beta}}_k + k\,\hat{\boldsymbol{\beta}}_k^T\hat{\boldsymbol{\beta}}_k \qquad (26)$$

The best $k$ is selected by the ridge trace method.

Theorem 4.

$\mathbf{s} = (s_{yy}, \mathbf{s}_{xy}, S_{xx})$ is the SS array for ridge regression.

Proof.

From (25), (10) can be expressed as

$$\mathrm{SSE}_k(\boldsymbol{\beta}) = s_{yy} - 2\boldsymbol{\beta}^T\mathbf{s}_{xy} + \boldsymbol{\beta}^T S_{xx}\boldsymbol{\beta} + k\,\boldsymbol{\beta}^T\boldsymbol{\beta} \qquad (27)$$

which only depends on the SS array for ridge regression. ∎

The batched version of the SS array is identical to that of linear regression. The corresponding algorithm is presented in Algorithm 4.

Input: batch-by-batch $(X_b, \mathbf{y}_b)$, $b = 1, \ldots, B$, of the entire dataset; a set of ridge parameters $\mathcal{K}$

Output: $\hat{\boldsymbol{\beta}}_{k^*}$, $\hat{\sigma}^2_{k^*}$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{k^*})$

1:  $s_{yy} \leftarrow 0$, $\mathbf{s}_{xy} \leftarrow \mathbf{0}$, $S_{xx} \leftarrow \mathbf{0}$
2:  for $b = 1$ to $B$ do
3:     Compute $\mathbf{y}_b^T\mathbf{y}_b$, $X_b^T\mathbf{y}_b$ and $X_b^TX_b$ based on (16)
4:     $s_{yy} \leftarrow s_{yy} + \mathbf{y}_b^T\mathbf{y}_b$, $\mathbf{s}_{xy} \leftarrow \mathbf{s}_{xy} + X_b^T\mathbf{y}_b$, $S_{xx} \leftarrow S_{xx} + X_b^TX_b$
5:  end for
6:  for $k \in \mathcal{K}$ do
7:     Compute $(S_{xx} + kI)^{-1}$
8:     Compute $\hat{\boldsymbol{\beta}}_k$, $\hat{\sigma}^2_k$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_k)$ based on (25)
9:     Compute $\mathrm{SSE}_k$ and the ridge trace
10:  end for
11:  return $\hat{\boldsymbol{\beta}}_{k^*}$, $\hat{\sigma}^2_{k^*}$ and $\widehat{\mathrm{Var}}(\hat{\boldsymbol{\beta}}_{k^*})$ for the $k^*$ selected by the ridge trace
Algorithm 4 Ridge Regression with Sufficient Statistics
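A sketch of Algorithm 4 (illustrative only; the function name and the example grid are assumptions). The SS array is exactly the one accumulated in Algorithm 1, so a whole family of ridge models can be fit by varying $k$ without touching the data again:

```python
import numpy as np

def ridge_regression_ss(s_yy, s_xy, S_xx, n, ridge_params):
    """Algorithm 4 sketch: fit one ridge model per k from the SS array of Algorithm 1."""
    p = S_xx.shape[0]
    results = {}
    for k in ridge_params:
        A_inv = np.linalg.inv(S_xx + k * np.eye(p))
        beta = A_inv @ s_xy
        sigma2 = (s_yy - 2.0 * beta @ s_xy + beta @ S_xx @ beta) / n
        var_beta = sigma2 * A_inv @ S_xx @ A_inv      # variance-covariance of the ridge estimator
        sse = s_yy - 2.0 * beta @ s_xy + beta @ S_xx @ beta + k * float(beta @ beta)
        results[k] = (beta, sigma2, var_beta, sse)
    return results   # inspect the coefficient paths (ridge trace) to pick the best k

# e.g., results = ridge_regression_ss(s_yy, s_xy, S_xx, n, ridge_params=[0.1, 0.5, 1.0])
```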

IV Experiments

To evaluate the proposed multiple learning algorithms, extensive experiments were conducted on a four-node Spark cluster. All the algorithms were implemented and tested on Spark.

         Master    Slave1    Slave2        Slave3
CPU      i7-3770   i7-3770   Quad Q8400    Quad Q9400
Memory   16GB      16GB      4GB           4GB
Disk     1TB       1TB       250GB         250GB
TABLE I: Configuration of the cluster nodes

IV-A Setup

The 4-node Spark cluster was configured with 1 master node and 3 worker nodes. The hardware specs of each of the four computers are shown in Table I.

IV-A1 Data Simulation


To understand how massive datasets could impact the computation, we simulated 3 datasets with 0.6 million, 6 million and 60 million observations. The sizes of these datasets are approximately 1GB, 10GB, and 100GB. Generally, the 1GB and 10GB datasets can be loaded into memory easily; however, the 100GB dataset cannot be entirely loaded into memory at one time. Each row of the data has 100 features, all of which are continuous variables of double type. For each response $y_i$, the corresponding error $\epsilon_i$ follows the normal distribution, i.e., $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$. Additionally, another 3 similar datasets were generated with all responses constrained to be positive so that Box-Cox regression is well defined.

IV-A2 Experiment Design


We designed two experiments, one for time performance and the other for prediction quality, to compare the results between the multiple learning algorithms and the traditional ones on Spark.

Model                              Method    1GB       10GB      100GB
LR                                 Spark     41.86     338.27    3266.16
                                   SS 1      19.59     154.16    1505.64
                                   SS 128    15.67     126.33    1267.96
Weighted LR ($W$)                  Spark     42.23     339.54    3263.37
                                   SS 1      19.76     155.47    1528.75
                                   SS 128    16.73     125.35    1289.54
Box-Cox (single $\lambda$)         Spark     42.63     341.31    3264.33
                                   SS 1      19.16     156.41    1532.00
                                   SS 128    15.19     122.49    1200.49
Box-Cox (all $\lambda \in \Lambda$)  Spark     431.29    3429.34   33701.51
                                   SS 1      19.87     160.13    1674.62
                                   SS 128    16.52     122.21    1206.17
Ridge (single $k$)                 Spark     41.58     328.48    3276.10
                                   SS 1      19.87     152.47    1620.46
                                   SS 128    16.10     127.92    1213.64
Ridge (all $k \in \mathcal{K}$)    Spark     423.63    3342.58   32688.28
                                   SS 1      20.56     154.34    1651.33
                                   SS 128    16.80     125.63    1230.45
TABLE II: Time performance comparison (in seconds). "Spark" represents the traditional approaches implemented with Apache Spark; "SS 1" ("SS 128") means the multiple learning approaches with batch size fixed to 1 (128); $W$ denotes the weights of the observations; $\lambda$ and $\Lambda$ denote a single power parameter and the evaluated set of power parameters for Box-Cox regression, respectively; likewise, $k$ and $\mathcal{K}$ denote a single ridge parameter and the evaluated set of ridge parameters with an interval of 0.1.
Model          Method   1GB          10GB         100GB
LR             Spark    1009520.77   993455.96    994025.56
               SS       1009520.77   993455.96    994025.56
Weighted LR    Spark    1009520.77   993455.96    994025.56
               SS       1009520.77   993455.96    994025.56
Box-Cox        Spark    1138432.54   1053491.23   1011557.43
               SS       1138432.54   1053491.23   1011557.43
Ridge          Spark    1009520.77   993455.96    994025.56
               SS       1009520.77   993455.96    994025.56
TABLE III: MSE comparison

Experiment I: Time Performance Comparison

The first experiment is to evaluate the time used for training different models. In this experiment, we compared the time performance of the multiple learning approaches with the traditional approaches. For the multiple learning approaches, we measured the time performance with regard to different batch sizes.

Experiment II: Prediction Quality Comparison

To experimentally support that our algorithms are as accurate as the OLS algorithms while requiring only one pass through the datasets, we compared our algorithms with the traditional ones. In this experiment, we used the 1GB, 10GB, and 100GB datasets as training sets and additional 0.2GB, 2GB and 20GB datasets for testing (the testing sets were generated with the same strategy as the training sets). To compare the prediction quality, the mean squared error (MSE), defined in equation (28), is used as the performance metric.

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2 \qquad (28)$$

where $y_i$ is the real value of observation $i$, $\hat{y}_i$ is the corresponding predicted value, and $N$ is the total number of test observations.
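A direct translation of (28), for illustration only:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error as defined in (28)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))
```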

IV-B Results

Table II and Table III show the results of two experiments.

Experiment I: Time Performance Comparison

Based on the results in Table II, the training time of our methods is roughly half that of the traditional ones on Spark. However, this is mainly attributable to the built-in model summary functionality of Spark, which requires a second visit to the dataset. Excluding this factor, the performance of our algorithms is nearly the same as that of the traditional ones on Spark. But for model training with multiple parameters (e.g., model selection from a set of candidate models), the proposed multiple learning has a great advantage. As shown in Table II, the computation time needed to perform traditional Box-Cox and ridge regression is affected drastically by the number of power parameters and ridge parameters. In contrast, the time overhead of the proposed multiple learning algorithms increases only marginally when computing multiple parameters (i.e., multiple models) simultaneously with multiple SS arrays. In Table II, our approaches are almost 20 times faster than the traditional approaches on Spark when computing 31 Box-Cox models or 20 ridge regression models with batch size 1. The speed-up factor further increases to around 27 if we increase the batch size to 128, i.e., the sufficient statistics arrays are updated every 128 rows. Essentially, the training time saved with the multiple learning approach is proportional to the number of models that need to be trained.

It is also evident in Table II that a bigger batch size decreases the training time. The effect of batch size becomes more significant when the data size is larger. Comparing a batch size of 128 against a batch size of 1, the time reductions for the 1GB, 10GB and 100GB datasets are approximately 16%, 22%, and 30%, respectively. It can be inferred that more time is likely to be saved with bigger batch sizes for larger datasets.

From Experiment I, we conclude that if model selection is needed for a given large-scale dataset, the proposed multiple learning approach can significantly outperform the traditional approaches by reducing the slow disk I/O to a single pass over the data. This feature is highly desirable when multiple models need to be calculated and compared in real-life applications.

Experiment II: Prediction Quality Comparison

Table III shows the prediction quality, measured by MSE, for the multiple learning approaches and the traditional ones on the 1GB, 10GB, and 100GB datasets. As expected, the prediction accuracy of our approaches is identical to that of the built-in Spark algorithms, providing experimental support for the proofs presented in Section III. Given the same accuracy, the proposed approaches outperform the traditional approaches with faster training time, and the larger the datasets, the more advantageous the proposed methods are.

V Conclusion

In this paper, multiple learning approaches for regression are proposed for big data. With only one pass through the dataset, an SS array is computed to derive the closed-form solutions for linear regression, weighted linear regression, Box-Cox regression and ridge regression. Theoretically and experimentally, it is shown that multiple learning is capable of overcoming the memory barrier issue.

Furthermore, multiple SS arrays can be applied to obtain multiple models at once. Unlike the traditional methods that can only learn one model at a time, multiple learning outperforms the traditional techniques as far as time is concerned. Results also show that our approaches are extremely efficient when calculating multiple models as opposed to the traditional methods. Essentially, the training time saved compared to the traditional methods is proportional to the number of models that need to be investigated.

We believe this to be promising for big data for two main reasons. Firstly, the coefficients of the models can be easily obtained as long as the SS arrays are calculated. Secondly, most models require a large amount of training and retraining, tuning and re-tuning to achieve better performance, and multiple learning is able to solve or largely alleviate this time-consuming problem.

Multiple learning approaches can be implemented on a single node as well as on parallel computing frameworks, e.g., Spark. Due to time and resource constraints, our work is currently limited to closed-form solutions. For future work, we would like to conduct more experiments on large-scale datasets from real-world applications and extend multiple learning to models with no closed-form solutions.

References

  • [1] E. Altay and M. H. Satman (2005) Stock market forecasting: artificial neural network and linear regression comparison in an emerging market. Journal of Financial Management & Analysis 18 (2), pp. 18.
  • [2] M. Avriel (2003) Nonlinear programming: analysis and methods. Courier Corporation.
  • [3] J. C. A. Barata and M. S. Hussein (2012) The Moore–Penrose pseudoinverse: a tutorial review of the theory. Brazilian Journal of Physics 42 (1-2), pp. 146–165.
  • [4] A. Ben-Israel and T. N. Greville (2003) Generalized inverses: theory and applications. Vol. 15, Springer Science & Business Media.
  • [5] G. E. Box and D. R. Cox (1964) An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological) 26 (2), pp. 211–243.
  • [6] T. Chen and C. Guestrin (2016) XGBoost: a scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794.
  • [7] W. C. Davidon (1991) Variable metric method for minimization. SIAM Journal on Optimization 1 (1), pp. 1–17.
  • [8] J. E. Dennis and J. J. Moré (1977) Quasi-Newton methods, motivation and theory. SIAM Review 19 (1), pp. 46–89.
  • [9] A. Dresden (1920) The fourteenth western meeting of the American Mathematical Society. Bull. Amer. Math. Soc. 26 (9), pp. 385–396.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [11] A. E. Hoerl and R. W. Kennard (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12 (1), pp. 55–67.
  • [12] P. W. Holland and R. E. Welsch (1977) Robust regression using iteratively reweighted least-squares. Communications in Statistics - Theory and Methods 6 (9), pp. 813–827.
  • [13] J. F. Kenney and E. Keeping (1962) Linear regression and correlation. Mathematics of Statistics 1, pp. 252–285.
  • [14] K. C. Kiwiel (2001) Convergence and efficiency of subgradient methods for quasiconvex minimization. Mathematical Programming 90 (1), pp. 1–25.
  • [15] D. E. Knuth (2014) The Art of Computer Programming, Volume 2: Seminumerical Algorithms. 3rd edition, Addison-Wesley Professional.
  • [16] R. Malouf (2002) A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the 6th Conference on Natural Language Learning, Volume 20, pp. 1–7.
  • [17] D. W. Marquardt (1970) Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation. Technometrics 12 (3), pp. 591–612.
  • [18] C. Moler (1986) Matrix computation on distributed memory multiprocessors. Hypercube Multiprocessors 86 (181-195), pp. 31.
  • [19] R. H. Myers (1990) Classical and modern regression with applications. Vol. 2, Duxbury Press, Belmont, CA.
  • [20] I. Naseem, R. Togneri, and M. Bennamoun (2010) Linear regression for face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (11), pp. 2106–2112.
  • [21] N. Nghiep and C. Al (2001) Predicting housing value: a comparison of multiple regression analysis and artificial neural networks. Journal of Real Estate Research 22 (3), pp. 313–336.
  • [22] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al. (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830.
  • [23] S. Ruder (2016) An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
  • [24] R. Sakia (1992) The Box-Cox transformation technique: a review. Journal of the Royal Statistical Society: Series D (The Statistician) 41 (2), pp. 169–178.
  • [25] R. W. Wedderburn (1974) Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika 61 (3), pp. 439–447.
  • [26] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica (2010) Spark: cluster computing with working sets. HotCloud 10, pp. 95.
  • [27] T. Zhang and B. Yang (2017) An exact approach to ridge regression for big data. Computational Statistics 32 (3), pp. 909–928.
  • [28] T. Zhang and B. Yang (2017) Box–Cox transformation in big data. Technometrics 59 (2), pp. 189–201.