Adaptive Random SubSpace Learning (RSSL) Algorithm for Prediction

Mohamed Elshrif mme4362@rit.edu
Rochester Institute of Technology (RIT), 102 Lomb Memorial Dr., Rochester, NY 14623 USA

Ernest Fokoué epfeqa@rit.edu
Rochester Institute of Technology (RIT), 102 Lomb Memorial Dr., Rochester, NY 14623 USA

Abstract

We present a novel adaptive random subspace learning (RSSL) algorithm for prediction. The new framework is flexible in that it can be paired with any base learning technique; in this paper, we test it on regression and classification problems. In addition, we provide a variety of weighting schemes to increase the robustness of the developed algorithm. These different weighting flavors were evaluated on simulated as well as real-world data sets, considering both the case where the ratio of features (attributes) to instances (samples) is large and the reverse. The framework of the new algorithm consists of several stages: first, calculate the weights of all features in the data set using the correlation coefficient or the F-statistic. Second, randomly draw samples with replacement from the data set, i.e., perform regular bootstrap sampling (bagging). Third, draw without replacement the indices of the chosen variables, where the decision is taken based on the heuristic subspacing scheme. Fourth, call the base learners and build the models. Fifth, use the models for prediction on the test portion of the data. The results show the advantage of the adaptive RSSL algorithm in most cases compared with the corresponding conventional machine learning algorithms.
Given a dataset $\mathcal{D}_n = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$, where $\mathbf{x}_i \in \mathcal{X} \subseteq \mathbb{R}^p$ and $y_i \in \mathcal{Y}$ are realizations of two random variables $X$ and $Y$ respectively, we seek to use the data to build estimators $\hat{f}$ of the underlying function $f$ for predicting the response $Y$ given the vector $X$ of explanatory variables. In keeping with the standard in statistical learning theory, we measure the predictive performance of any given function $f$ using the theoretical risk functional given by

$$R(f) = \mathbb{E}\left[\ell(Y, f(X))\right] = \int_{\mathcal{X} \times \mathcal{Y}} \ell(y, f(\mathbf{x}))\, dP(\mathbf{x}, y), \qquad (1)$$

with the ideal scenario corresponding to the universally best function defined by

$$f^* = \underset{f}{\arg\min}\; R(f) = \underset{f}{\arg\min}\; \mathbb{E}\left[\ell(Y, f(X))\right]. \qquad (2)$$

For classification tasks, the most commonly used loss function is the zero-one loss $\ell(y, f(\mathbf{x})) = \mathbf{1}_{\{y \neq f(\mathbf{x})\}}$, for which the theoretical universal best defined in (2) is the Bayes classifier, given by $f^*(\mathbf{x}) = \arg\max_{y \in \mathcal{Y}} \Pr[Y = y \mid X = \mathbf{x}]$. For regression tasks, the squared loss $\ell(y, f(\mathbf{x})) = (y - f(\mathbf{x}))^2$ is by far the most commonly used, mainly because of the wide variety of statistical, mathematical, and computational benefits it offers. For regression under the squared loss, the universal best defined in (2) is known theoretically to be the conditional expectation of $Y$ given $X$, specifically $f^*(\mathbf{x}) = \mathbb{E}[Y \mid X = \mathbf{x}]$. Unfortunately, these theoretical expressions of the best estimators cannot be realized in practice because the distribution of $(X, Y)$ on $\mathcal{X} \times \mathcal{Y}$ is unknown. To circumvent this learning challenge, one has to do essentially two foundational things, namely: (a) choose a function class $\mathcal{F}$ (approximation) from which to search for the estimator $\hat{f}$ of the true but unknown underlying $f^*$, and (b) specify the empirical version of (1) based on the given sample $\mathcal{D}_n$, and use that empirical risk as the practical objective function. However, in this paper, we do not directly construct our estimating functions from the empirical risk. Instead, we build the estimators using other optimality criteria, and then compare their predictive performances using the average test error $\mathrm{AVTE}(\hat{f})$, namely

$$\mathrm{AVTE}(\hat{f}) = \frac{1}{R} \sum_{r=1}^{R} \left[ \frac{1}{m} \sum_{i=1}^{m} \ell\!\left(y_i^{(r)}, \hat{f}_r(\mathbf{x}_i^{(r)})\right) \right], \qquad (3)$$
where $\hat{f}_r$ is the $r$th realization of the estimator $\hat{f}$, built using the training portion of the split of $\mathcal{D}_n$ into training and test sets, and $(\mathbf{x}_i^{(r)}, y_i^{(r)})$ is the $i$th observation from the test set at the $r$th random replication of the split of $\mathcal{D}_n$. In this paper, we consider both multiclass classification tasks with response space $\mathcal{Y} = \{1, \ldots, G\}$ and regression tasks with $\mathcal{Y} = \mathbb{R}$, and we focus on learning machines from a function class whose members are ensemble learners in the sense of Definition 1. In machine learning, in order to improve the accuracy of a regression or classification function, scholars tend to combine multiple estimators, because it has been proven both theoretically and empirically (Tumer & Ghosh, 1995; Tumer & Oza, 1999) that an appropriate combination of good base learners leads to a reduction in prediction error. This technique is known as ensemble learning (aggregation). Regardless of the underlying algorithm used, the ensemble learning technique most of the time (on average) outperforms the single learning technique, especially for prediction purposes (van Wezel & Potharst, 2007). There are many approaches to performing ensemble learning. Among these, two ensemble learning techniques are especially popular: bagging (Breiman, 1996) and boosting (Freund, 1995). Many variants of these two techniques, such as random forests (Breiman, 2001) and AdaBoost (Freund & Schapire, 1997), have been studied previously and applied to prediction problems. Our proposed method belongs to the subclass of ensemble learning methods known as random subspace learning.
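The average test error of equation (3) can be estimated with a short routine. The sketch below is illustrative only: the function name and the generic `fit`/`predict`/`loss` interface are our own assumptions, not part of the paper.

```python
import numpy as np

def average_test_error(X, y, fit, predict, loss, R=100, test_frac=0.3, seed=0):
    """Estimate eq. (3): repeat a random train/test split R times,
    refit the learner on the training portion, and average the
    per-split mean loss on the held-out test set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_test = int(round(test_frac * n))
    errors = []
    for _ in range(R):
        perm = rng.permutation(n)
        test, train = perm[:n_test], perm[n_test:]
        model = fit(X[train], y[train])
        y_hat = predict(model, X[test])
        errors.append(np.mean(loss(y[test], y_hat)))
    return float(np.mean(errors))
```

Any learner fits this interface; for instance, `fit` could return a fitted linear model and `loss` could be the squared loss for regression or the zero-one loss for classification.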
Definition 1
Given an ensemble $\{\hat{g}_1, \ldots, \hat{g}_L\}$ of base learners, with relative weights $\alpha_l$ (usually $\alpha_l \geq 0$ and $\sum_{l=1}^{L} \alpha_l = 1$ for convex aggregation), the ensemble representation $\hat{f}$ of the underlying function $f$ is given by the aggregation (weighted sum)

$$\hat{f}(\cdot) = \sum_{l=1}^{L} \alpha_l\, \hat{g}_l(\cdot). \qquad (4)$$
A question naturally arises as to how the ensemble is chosen and how the weights are determined. Bootstrap aggregating, also known as bagging (Breiman, 1996), boosting (Freund & Schapire, 1996), random forests (Breiman, 2001), and bagging with subspaces (Panov & Dzeroski, 2007) are all predictive learning methods based on the ensemble learning principle, for which the ensemble is built from the provided data set and the weights are typically taken to be equal. In this paper, we focus on learning tasks involving high dimension low sample size (HDLSS) data, and we further zero in on those data sets for which the number $p$ of explanatory variables is substantially larger than the sample size $n$. As our main contribution in this paper, we introduce, develop, and apply a new adaptation of the theme of random subspace learning (Ho, 1998), using the traditional multiple linear regression (MLR) model as our base learner in regression and the generalized linear model (GLM) as the base learner in classification. Some applications by nature possess few instances (small $n$) and a large number of features (large $p$), such as fMRI (Kuncheva et al., 2010) and DNA microarray (Bertoni et al., 2005) data sets. It is hard for a traditional (conventional) algorithm to build a regression model, or to classify the data set, when the instance-to-feature ratio is very small. The prediction problem becomes even more difficult when these many features are highly correlated, or irrelevant to the task of building the model, as we show later in this paper. Therefore, we harness the power of our proposed adaptive subspace learning technique to guide the choice/selection of good candidate features from the data set, and thereby select the best base learners, and ultimately the ensemble yielding the lowest possible prediction error. In most typical random subspace learning algorithms, the features are selected according to an equally likely scheme.
The question then arises as to whether one can devise a better scheme for choosing the candidate features, one with predictive benefits. It is also interesting to assess the accuracy of our proposed algorithm under different levels of correlation among the features. The answer to this question constitutes one of the central aspects of our proposed method, in the sense that we explore a variety of weighting schemes for choosing the features, most of them based on statistical measures of the relationship between the response variable and each explanatory variable. As the computational section will reveal, the weighting schemes proposed here lead to a substantial improvement in the predictive performance of our method over random forest on all but one data set, arguably because our method leverages the accuracy of the learning algorithm by selecting many good models (the weighting scheme allows good variables to be selected more often, and therefore leads to near-optimal base learners).
Traditionally, in a prediction problem, a single model is built based on the training set, and the prediction is decided based solely on this single fitted model. In bagging, however, bootstrap samples are taken from the data set, a model is fitted to each bootstrap sample, and the prediction is made based on the average of all bagged models. Mathematically, the prediction accuracy of the model constructed using bagging outperforms the traditional model, and in the worst case it has the same performance; it must be said, though, that this depends on the stability of the modeling procedure. It turns out that bagging reduces the variance without affecting the bias, thereby leading to an overall reduction in prediction error, and hence its great appeal. Any set of predictive models can be used as an ensemble in the sense defined earlier. There are many ensemble learning approaches, which can be categorized into four classes: (1) algorithms that use heterogeneous predictive models, such as stacking (Wolpert, 1992); (2) algorithms that manipulate the instances of the data set, such as bagging (Breiman, 1996), boosting (Freund & Schapire, 1996), random forests (Breiman, 2001), and bagging with subspaces (Panov & Dzeroski, 2007); (3) algorithms that manipulate the features of the data set, such as random forests (Breiman, 2001), random subspaces (Ho, 1998), and bagging with subspaces (Panov & Dzeroski, 2007); and (4) algorithms that manipulate the learning algorithm itself, such as random forests (Breiman, 2001), neural network ensembles (Hansen & Salamon, 1990), and extra-trees ensembles (Geurts et al., 2006). Since our proposed algorithm manipulates both the instances and the features of the data set, we focus on the algorithms in the second and third categories (Breiman, 1996; 2001; Panov & Dzeroski, 2007; Ho, 1998).
Bagging (Breiman, 1996), or bootstrap aggregating, is an ensemble learning method that generates multiple predictive models, each built on a separate bootstrap replicate of the learning (training) data set. A bootstrap sample is obtained by randomly (uniformly) sampling with replacement from the instances of the training data set. The decision is made by averaging the predictors in regression tasks and by taking the majority vote in classification tasks. Bagging tends to decrease the variance while keeping the same bias as a single predictor. The accuracy of bagging increases when the base learner is unstable, meaning that small fluctuations in the training data set cause large changes in the fitted model, as with trees (Breiman, 1996). Random forests (Breiman, 2001) is an ensemble learning method that averages the predictions from multiple independent tree models. Like bagging (Breiman, 1996), it builds each predictor on a bootstrap replicate; in addition, at each node of the tree it randomly selects a subset of the attributes and chooses the best attribute from that subset. It is considered an improvement over bagging because it decorrelates the trees. As Denil et al. (2014) mention, when building a random tree there are three issues that should be decided in advance: (1) the leaf splitting method, (2) the type of predictor, and (3) the randomness method. Random subspace learning (Ho, 1998) is an ensemble learning method that constructs base models on different feature subsets: it chooses a subset of the features and then learns each base model depending only on those features. The random subspace method reaches its highest accuracy when both the number of features and the number of instances are large, and it also performs well when there are redundant features in the data set.
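The bootstrap step common to bagging, random forests, and our method can be sketched in a few lines. This is a minimal illustration assuming a generic `fit` callable; the helper name is ours, not from the paper.

```python
import numpy as np

def bag_models(X, y, fit, B=50, seed=0):
    """Fit one base model per bootstrap replicate of (X, y):
    each replicate is n indices drawn uniformly WITH replacement."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # sample with replacement
        models.append(fit(X[idx], y[idx]))
    return models
```

Aggregation of the fitted models (averaging or voting) then proceeds exactly as described in the text.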
Bagging subspaces (Panov & Dzeroski, 2007) is an ensemble learning method that combines the bagging (Breiman, 1996) and random subspaces (Ho, 1998) learning methods. It generates bootstrap replicates of the training data set, in the same way as bagging, and then randomly chooses a subset of the features, in the same manner as random subspaces. It outperforms both bagging and random subspaces, and has been found to yield the same performance as random forests when a decision tree is used as the base learner. In the simulation part of this paper, we aim to answer the following research questions: (1) Is the performance of the adaptive random subspace learning (RSSL) algorithm better than the performance of single classifiers? (2) How does the adaptive RSSL perform compared to the most widely used classifier ensembles? (3) Is there a theoretical explanation as to why adaptive RSSL works well for most of the simulated and real-life data sets? (4) How does adaptive RSSL perform under different parameter settings and with various values of the instance-to-feature ratio (IFR)? (5) How does correlation between features affect the predictive performance of adaptive RSSL?
In this section, we present our adaptive random subspace learning algorithm for the prediction problem. We start with the formulation of the problem, followed by our proposed algorithm for tackling it. A crucial step, assessing the candidate features used for building the models, is explained in detail. Finally, we elucidate the strengths of the new algorithm from a theoretical perspective.
As we said earlier, our proposed method belongs to the category of random subspace learning, where each base learner is constructed using a bootstrap sample and a subset of the original features. The main difference here is that we use base learners that are typically considered not to lead to any improvement when aggregated, and we also select features using weighting schemes inspired by the strength of the relationship between each feature and the response (target). Each base learner is driven by the subset of predictor variables randomly selected to build it, and the subsample drawn with replacement from $\mathcal{D}_n$. For notational convenience, we use vectors of indicator variables to denote these two important quantities. The sample indicator for the $l$th base learner is $\boldsymbol{\delta}^{(l)} = (\delta_1^{(l)}, \ldots, \delta_n^{(l)}) \in \{0, 1\}^n$, where $\delta_i^{(l)} = 1$ if observation $i$ is drawn into the $l$th bootstrap sample, and $0$ otherwise.

The variable indicator is $\boldsymbol{\gamma}^{(l)} = (\gamma_1^{(l)}, \ldots, \gamma_p^{(l)}) \in \{0, 1\}^p$, where $\gamma_j^{(l)} = 1$ if variable $j$ is among those chosen for the $l$th base learner, and $0$ otherwise.

The estimator of the $l$th base learner can therefore be fully and unambiguously denoted by $\hat{g}(\cdot;\, \boldsymbol{\delta}^{(l)}, \boldsymbol{\gamma}^{(l)})$, which we refer to as $\hat{g}_l(\cdot)$ for notational simplicity. Each $\boldsymbol{\gamma}^{(l)}$ is chosen according to one of the weighting schemes. To increase the strength of the developed algorithm, we introduce a weighting scheme procedure to select the important features, which facilitates building a proper model and leverages the prediction accuracy. Our weighting schemes are:
Correlation coefficient: we measure the strength of the association between each feature vector $\mathbf{x}_j$ and the response (target) $\mathbf{y}$, and take the square of the resulting sample correlation, i.e., $w_j \propto \widehat{\mathrm{corr}}(\mathbf{x}_j, \mathbf{y})^2$.

F-statistic: for classification tasks especially, we use the observed F-statistic resulting from the one-way analysis of variance with $\mathbf{x}_j$ as the response and the class label as the treatment group.
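The two weighting schemes can be sketched as follows. This is an illustrative implementation under our own naming: each score vector is normalized into a probability vector, which then drives the without-replacement subspace draw.

```python
import numpy as np

def correlation_weights(X, y):
    """Squared sample correlation between each column of X and y,
    normalized into a probability vector for feature sampling."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r2 = (num / den) ** 2
    return r2 / r2.sum()

def f_statistic_weights(X, labels):
    """One-way ANOVA F-statistic of each feature against the class
    labels, normalized into a probability vector."""
    classes = np.unique(labels)
    n, p = X.shape
    grand = X.mean(axis=0)
    ssb = np.zeros(p)   # between-group sum of squares
    ssw = np.zeros(p)   # within-group sum of squares
    for c in classes:
        Xc = X[labels == c]
        ssb += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ssw += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    f = (ssb / (len(classes) - 1)) / (ssw / (n - len(classes)))
    return f / f.sum()

def draw_subspace(weights, d, seed=0):
    """Draw d feature indices without replacement, favouring
    features with larger weights."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(weights), size=d, replace=False, p=weights)
```

Features strongly related to the response thus receive larger sampling probabilities and appear in more base learners, which is the adaptive element of the method.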
Using the ensemble $\{\hat{g}_l,\, l = 1, \ldots, L\}$, we form the ensemble estimator of class membership as

$$\hat{f}(\mathbf{x}) = \underset{y \in \mathcal{Y}}{\arg\max} \left\{ \sum_{l=1}^{L} \mathbf{1}_{\{\hat{g}_l(\mathbf{x}) = y\}} \right\},$$

and the ensemble estimator of the regression response as

$$\hat{f}(\mathbf{x}) = \frac{1}{L} \sum_{l=1}^{L} \hat{g}_l(\mathbf{x}).$$
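These two ensemble estimators amount to a column-wise average (regression) and a column-wise majority vote (classification) over the base-learner predictions. A minimal sketch, with helper names of our own choosing:

```python
import numpy as np

def ensemble_regression(base_preds):
    """Equally weighted average of the base-learner predictions;
    rows index base learners, columns index test instances."""
    return np.asarray(base_preds).mean(axis=0)

def ensemble_classification(base_preds):
    """Per-instance majority vote over the base-learner predicted
    labels (labels assumed to be non-negative integers here)."""
    preds = np.asarray(base_preds)
    return np.array([np.bincount(col).argmax() for col in preds.T])
```

For example, `ensemble_regression([[1.0, 2.0], [3.0, 4.0]])` averages two base learners' predictions on two test points.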
We used a collection of simulated and real-world data sets for our experiments. In addition, we used real-world data sets from previous papers that aim to solve the same problem, for comparison purposes. We report the mean squared error (MSE) for each individual algorithm and task, i.e., regression or classification.
We designed our artificial data sets around six scenarios based on three factors: the dimensionality of the data (number of features), the sample size (number of instances), and the correlation among the features.
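One standard way to realize such scenarios, and a plausible reading of the design here, is an equicorrelated Gaussian design. The sketch below is our own helper (the coefficient vector is arbitrary); it generates a regression data set for a given sample size $n$, dimension $p$, and pairwise correlation $\rho$.

```python
import numpy as np

def simulate_regression_data(n, p, rho, sigma=1.0, seed=0):
    """Generate an (n, p) Gaussian design whose features share a
    common pairwise correlation rho, plus a linear response with
    additive Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    cov = np.full((p, p), rho)          # equicorrelation matrix
    np.fill_diagonal(cov, 1.0)
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = rng.normal(size=p)           # arbitrary true coefficients
    y = X @ beta + sigma * rng.normal(size=n)
    return X, y
```

Varying `(n, p, rho)` reproduces the HDLSS regimes studied in the tables below, e.g. `n=25, p=200` versus `n=1000, p=50`, at low (0.05) or high (0.5) correlation.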
We benefit from the real-life data sets in the public UCI Machine Learning Repository. For the purposes of consistency and completeness, we chose real data sets that carry different characteristics in terms of the number of instances and the number of features, along with a variety of applications. The real data sets can be grouped by task as follows:
Weighting | n | p | ρ | MLR | Uniform MLR | Adaptive MLR | RF
Correlation | 200 | 25 | 0.05 | 5.69±0.89 | 14.50±2.63 | 4.60±0.706 | 9.81±1.86
Correlation | 200 | 25 | 0.5 | 4.78±0.81 | 11.67±2.55 | 4.77±0.94 | 8.46±1.97
Correlation | 25 | 200 | 0.05 | 974.37±5e3 | 18.35±6.92 | 8.10±3.86 | 18.56±7.24
Correlation | 25 | 200 | 0.5 | 5e3±5e4 | 18.83±8.72 | 8.27±5.24 | 18.18±8.65
Correlation | 50 | 1000 | 0.05 | 2e4±1e5 | 28.36±11.51 | 12.38±5.91 | 27.92±11.78
Correlation | 1000 | 50 | 0.05 | 4.66±0.34 | 16.62±1.37 | 4.33±0.33 | 6.73±0.62
F-statistic | 200 | 25 | 0.05 | 5.04±0.79 | 14.42±2.67 | 4.48±0.74 | 8.75±1.76
F-statistic | 200 | 25 | 0.5 | 4.49±0.76 | 12.06±2.04 | 5.51±1.09 | 8.33±1.59
F-statistic | 25 | 200 | 0.05 | 3e4±2e5 | 17.77±9.15 | 5.81±4.10 | 15.81±8.55
F-statistic | 25 | 200 | 0.5 | 1e4±1e5 | 23.09±16.06 | 12.53±10.27 | 24.11±16.31
F-statistic | 50 | 1000 | 0.05 | 4e5±3e6 | 16.65±5.38 | 7.65±2.83 | 15.54±5.31
F-statistic | 1000 | 50 | 0.05 | 4.19±0.33 | 15.97±1.15 | 3.90±0.30 | 6.24±0.55
Data Set | Weighting | MLR | Uniform MLR | Adaptive MLR | RF
BodyFat | correlation | 17.41±2.69 | 23.59±3.71 | 19.25±3.06 | 19.72±3.18
BodyFat | F-statistic | 17.06±2.50 | 23.07±3.46 | 17.46±2.65 | 19.51±2.99
Attitude | correlation | 74.12±32.06 | 80.35±34.40 | 58.49±20.21 | 88.72±35.97
Attitude | F-statistic | 75.19±36.63 | 74.71±33.17 | 51.84±15.19 | 82.21±35.58
Cement | correlation | 10.76±7.25 | NA | 19.92±15.98 | 75.91±56.05
Cement | F-statistic | 11.07±8.55 | NA | 24.27±18.27 | 62.20±46.53
Diabetes 1 | correlation | 2998.13±322.37 | 3522.30±311.81 | 3165.74±300.86 | 3203.94±311.94
Diabetes 1 | F-statistic | 2988.32±341.20 | 3533.45±375.38 | 3133.60±324.75 | 3214.11±318.69
Diabetes 2 | correlation | 3916.98±782.35 | 4244.00±390.29 | 3016.54±285.89 | 3266.50±324.82
Diabetes 2 | F-statistic | 3889.00±679.55 | 4306.76±419.66 | 3076.77±338.08 | 3326.28±382.37
Longley | correlation | 0.21±0.13 | 0.62±0.36 | 0.49±0.29 | 1.54±0.92
Longley | F-statistic | 0.22±0.13 | 0.66±0.42 | 0.49±0.29 | 1.63±1.04
Prestige | correlation | 66.68±15.31 | 73.32±14.77 | 64.87±13.93 | 55.96±11.83
Prestige | F-statistic | 65.77±15.96 | 72.33±16.64 | 63.27±14.71 | 56.02±12.66

Weighting | n | p | ρ | GLM | Uniform GLM | Adaptive GLM | RF
F-statistic | 200 | 25 | 0.05 | 0.070±0.033 | 0.486±0.172 | 0.071±0.032 | 0.101±0.053
F-statistic | 200 | 25 | 0.5 | 0.140±0.045 | 0.498±0.221 | 0.138±0.043 | 0.136±0.058
F-statistic | 50 | 200 | 0.05 | 0.102±0.093 | 0.673±0.123 | 0.100±0.092 | 0.320±0.103
F-statistic | 50 | 200 | 0.5 | 0.058±0.141 | 0.346±0.346 | 0.049±0.121 | 0.178±0.188
F-statistic | 50 | 1000 | 0.05 | 0.033±0.064 | 0.522±0.158 | 0.034±0.062 | 0.409±0.114
F-statistic | 1000 | 50 | 0.05 | 0.130±0.019 | 0.643±0.028 | 0.130±0.019 | 0.167±0.024
Data Set | Weighting | GLM | Uniform GLM | Adaptive GLM | RF
Diabetes in Pima | F-statistic | 0.274±0.071 | 0.249±0.051 | 0.255±0.051 | 0.269±0.050
Prostate Cancer | F-statistic | 0.425±0.113 | 0.355±0.093 | 0.332±0.094 | 0.343±0.098
Golub Leukemia | F-statistic | 0.427 | 0.023 | 0.021 | 0.023
Diabetes | F-statistic | 0.034±0.031 | 0.068±0.039 | 0.038±0.034 | 0.031±0.029
Lymphoma | F-statistic | 0.248±0.065 | 0.057±0.034 | 0.046±0.029 | 0.082±0.046
Lung Cancer | F-statistic | 0.113±0.051 | 0.038±0.023 | 0.037±0.024 | 0.051±0.030
Colon Cancer | F-statistic | 0.296±0.124 | 0.168±0.095 | 0.124±0.074 | 0.199±0.106
Data set | instances | features | IFR
regression:
Bodyfat | 252 | 14 | 1,800%
Attitude | 30 | 7 | 428.5%
Cement | 13 | 5 | 260%
Diabetes 1 | 442 | 11 | 4,018%
Diabetes 2 | 442 | 65 | 680%
Longley | 16 | 7 | 228.5%
Prestige | 102 | 5 | 2,100%
classification:
Diabetes in Pima | 200 | 8 | 2,500%
Prostate cancer | 79 | 501 | 15.8%
Leukemia | 72 | 3572 | 2.0%
Diabetes | 145 | 4 | 3,625%
Lymphoma | 180 | 662 | 27.2%
To elucidate the performance of our developed model, we compare the accuracy of the adaptive RSSL with random forest and … on the same real data sets used in previous work.
As revealed by our experiments on synthetic data sets, selecting more than 15–20 features (for our particular data sets) yields ensemble classifiers that are highly accurate and stable. The reason is that only when the number of voters is "large enough" does the random process of attribute selection yield a sufficient number of qualitatively different classifiers to ensure high accuracy and stability of the ensemble.
How many bootstrap replications are useful? The evidence, both experimental and theoretical, is that bagging can push a good but unstable procedure a significant step towards optimality. Why was the training set chosen to be large for the real data sets, while for the simulated data sets the test set was large? The bootstrap sampling was repeated 50 times, and the random division of the data was repeated 100 times. Choosing between these two strategies is not an easy task, since it involves a trade-off between bias and estimation variance over the forecast horizon.
Even though our developed adaptive RSSL algorithm outperforms many classifier ensembles, it has limitations. The new algorithm cannot deal directly with data sets that have categorical features; instead, it requires these features to be encoded numerically. Also, the algorithm is not designed to classify data sets with multiple classes. Moreover, the adaptive RSSL algorithm sometimes fails to select the optimal feature subsets.
We presented a detailed quantitative analysis of the adaptive RSSL algorithm for the ensemble prediction problem, and supported this analysis with a theoretical (mathematical) formulation. The key issues for the developed algorithm reside in four fundamental factors: generalization, flexibility, speed, and accuracy; we discussed each of these four factors and presented a rigorous theoretical justification of our proposed algorithm. For now, we choose a fixed number of attributes per subset; however, the algorithm should be evaluated on predictive performance (accuracy) to determine the appropriate number (dimension) of features for the single learners used in the ensemble. In addition, the adaptive RSSL algorithm was tested on relatively small data sets; our next step will be to apply the developed algorithm to big data sets.
We also showed that the adaptive RSSL performs better than widely used ensemble algorithms, even in the presence of dependence among the feature subsets.
References
Bertoni et al. (2005) Bertoni, Alberto, Folgieri, Raffaella, and Valentini, Giorgio. Biomolecular cancer prediction with random subspace ensembles of support vector machines. Neurocomputing, 63:535–539, 2005. ISSN 0925-2312. doi: 10.1016/j.neucom.2004.07.007. New Aspects in Neurocomputing: 11th European Symposium on Artificial Neural Networks.
Breiman (1996) Breiman, Leo. Bagging predictors. Machine Learning, 24(2):123–140, 1996. ISSN 0885-6125. doi: 10.1023/A:1018054314350.
Breiman (2001) Breiman, Leo. Random forests. Machine Learning, 45(1):5–32, October 2001. ISSN 0885-6125. doi: 10.1023/A:1010933404324.
Denil et al. (2014) Denil, Misha, Matheson, David, and de Freitas, Nando. Narrowing the gap: Random forests in theory and in practice. In International Conference on Machine Learning (ICML), 2014. URL http://jmlr.org/proceedings/papers/v32/denil14.html.
Freund (1995) Freund, Y. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995. ISSN 0890-5401. doi: 10.1006/inco.1995.1136.
Freund & Schapire (1996) Freund, Yoav and Schapire, Robert E. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning (ICML), 1996.
Freund & Schapire (1997) Freund, Yoav and Schapire, Robert E. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. ISSN 0022-0000. doi: 10.1006/jcss.1997.1504.
Geurts et al. (2006) Geurts, Pierre, Ernst, Damien, and Wehenkel, Louis. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006. ISSN 0885-6125. doi: 10.1007/s10994-006-6226-1.
Hansen & Salamon (1990) Hansen, L. K. and Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell., 12(10):993–1001, October 1990. ISSN 0162-8828. doi: 10.1109/34.58871.
Ho (1998) Ho, Tin Kam. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell., 20(8):832–844, August 1998. ISSN 0162-8828. doi: 10.1109/34.709601.
Kuncheva et al. (2010) Kuncheva, L. I., Rodriguez, J. J., Plumpton, C. O., Linden, D. E. J., and Johnston, S. J. Random subspace ensembles for fMRI classification. IEEE Transactions on Medical Imaging, 29(2):531–542, February 2010. ISSN 0278-0062. doi: 10.1109/TMI.2009.2037756.
Panov & Dzeroski (2007) Panov, Pance and Dzeroski, Saso. Combining bagging and random subspaces to create better ensembles. In Berthold, Michael R., Shawe-Taylor, John, and Lavrac, Nada (eds.), Advances in Intelligent Data Analysis VII, volume 4723 of Lecture Notes in Computer Science, pp. 118–129. Springer Berlin Heidelberg, 2007. ISBN 978-3-540-74824-3. doi: 10.1007/978-3-540-74825-0_11.
Tumer & Ghosh (1995) Tumer, Kagan and Ghosh, Joydeep. Classifier combining: Analytical results and implications. In Proceedings of the AAAI-96 Workshop on Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms, pp. 126–132. AAAI Press, 1995.
Tumer & Oza (1999) Tumer, Kagan and Oza, Nikunj C. Decimated input ensembles for improved generalization, 1999.
van Wezel & Potharst (2007) van Wezel, Michiel and Potharst, Rob. Improved customer choice predictions using ensemble methods. European Journal of Operational Research, 181(1):436–452, 2007. ISSN 0377-2217. doi: 10.1016/j.ejor.2006.05.029.
Wolpert (1992) Wolpert, David H. Stacked generalization. Neural Networks, 5:241–259, 1992.