Random forests (RF henceforth) is a popular and very efficient algorithm, based on model aggregation ideas, for both classification and regression problems, introduced by Breiman (2001). It belongs to the family of ensemble methods, which appeared in machine learning at the end of the 1990s (see for example Dietterich (1999) and (2000)). Let us briefly recall the statistical framework by considering a learning set made of n i.i.d. observations of a random vector (X, Y). The vector X = (X^1, …, X^p) contains p predictors or explanatory variables, and Y is either a class label or a numerical response. For classification problems, a classifier is a mapping from the predictor space ℝ^p to the set of class labels, while for regression problems we suppose that Y = s(X) + ε, where s is the so-called regression function. For more background on statistical learning, see Hastie et al. (2001). Random forests is a model building strategy providing estimators of either the Bayes classifier or the regression function.
The principle of random forests is to combine many binary decision trees built using several bootstrap samples coming from the learning sample, choosing randomly at each node a subset of explanatory variables. More precisely, with respect to the well-known CART model building strategy (see Breiman et al. (1984)), which performs a growing step followed by a pruning one, two differences can be noted. First, at each node, a given number (denoted by mtry) of input variables are randomly chosen and the best split is calculated only within this subset. Second, no pruning step is performed, so all the trees are maximal trees.
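The node-level randomization can be sketched as follows (an illustrative pure-Python sketch, not the actual randomForest implementation; the function name is ours):

```python
import random

def draw_split_candidates(p, mtry, rng=random):
    """At each node, random forests draw mtry of the p predictors and search
    for the best split only among them; mtry = p recovers (unpruned) bagging."""
    return rng.sample(range(p), mtry)
```

With mtry = p every variable competes at every node, which is exactly the bagging situation.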
In addition to CART, another well-known related tree-based method must be mentioned: bagging (see Breiman (1996)). Indeed, random forests with mtry = p simply reduce to unpruned bagging. The associated R package, randomForest, is presented in Section 2.
The RF algorithm has become more and more popular and appears to be very powerful in a lot of different applications (see for example Díaz-Uriarte and Alvarez de Andrés (2006) for gene expression data analysis), even if it is not clearly elucidated from a mathematical point of view (see the recent paper by Biau et al. (2008) and Bühlmann and Yu (2002) for bagging). Nevertheless, Breiman (2001) sketches an explanation of the good performance of random forests related to the good quality of each tree (at least from the bias point of view) together with the small correlation among the trees of the forest, where the correlation between trees is defined as the ordinary correlation of predictions on so-called out-of-bag (OOB henceforth) samples. The OOB sample, which is the set of observations not used for building the current tree, is used to estimate the prediction error and then to evaluate variable importance.
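The OOB mechanism can be sketched as follows: each tree is grown on a bootstrap sample, and the observations left out (about 37% of them on average, since each one is omitted with probability (1 − 1/n)^n ≈ e^(−1)) form the OOB sample of that tree. An illustrative sketch, with names of our own:

```python
import random

def bootstrap_oob(n, seed=0):
    """Draw a bootstrap sample of indices 0..n-1 (with replacement) and
    return (in_bag, out_of_bag); out_of_bag holds the omitted observations."""
    rng = random.Random(seed)
    in_bag = [rng.randrange(n) for _ in range(n)]
    oob = sorted(set(range(n)) - set(in_bag))
    return in_bag, oob
```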
Tuning method parameters
It is now classical to distinguish two typical situations depending on n, the number of observations, and p, the number of variables: standard (for n ≫ p) and high dimensional (when n ≪ p). The first question when someone tries to use random forests in practice is to get information about sensible values for the two main parameters of the method. Essentially, the studies carried out by Breiman (2001) and Díaz-Uriarte and Alvarez de Andrés (2006) give interesting insights, but Breiman focuses on standard problems while Díaz-Uriarte and Alvarez de Andrés concentrate on high dimensional classification ones.
So the first objective of this paper is to give compact information about selected benchmark datasets and to examine again the choice of the method parameters, addressing more closely the high dimensional case.
RF variable importance
The quantification of variable importance (VI henceforth) is an important issue in many applied problems, complementing variable selection with interpretation issues. In the linear regression framework it is examined, for example, by Grömping (2007), making a distinction between various variance decomposition based indicators: ”dispersion importance”, ”level importance” or ”theoretical importance”, quantifying explained variance or changes in the response for a given change of each regressor. Various ways to define and compute such indicators using R are available (see Grömping (2006)).
In the random forests framework, the most widely used score of importance of a given variable is the mean increase of the error of a tree (MSE for regression and misclassification rate for classification) in the forest when the observed values of this variable are randomly permuted in the OOB samples. Such a random forests VI is often called permutation importance, in contrast with the total decrease of node impurity measures already introduced in the seminal book about CART by Breiman et al. (1984).
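As an illustration, permutation importance for a single fitted predictor can be sketched as follows (a didactic pure-Python sketch, not the per-tree computation performed inside the randomForest package; all names are ours):

```python
import random

def permutation_importance(predict, X, y, oob_idx, j, n_rep=10, seed=0):
    """Mean increase of the OOB misclassification rate when the values of
    variable j are randomly permuted within the OOB sample."""
    rng = random.Random(seed)

    def oob_error(rows):
        return sum(predict(x) != y[i] for i, x in rows) / len(rows)

    base = oob_error([(i, X[i]) for i in oob_idx])
    increases = []
    for _ in range(n_rep):
        permuted = [X[i][j] for i in oob_idx]
        rng.shuffle(permuted)
        rows = []
        for k, i in enumerate(oob_idx):
            x = list(X[i])
            x[j] = permuted[k]  # break the link between variable j and y
            rows.append((i, x))
        increases.append(oob_error(rows) - base)
    return sum(increases) / n_rep
```

A variable the predictor never uses gets an importance of exactly zero, since permuting it leaves the predictions unchanged.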
Even if only little investigation is available about RF variable importance, some interesting facts are collected for classification problems. This index can be based on the average loss of another criterion, like the Gini entropy used for growing classification trees. Let us cite two remarks. The first one is that the RF Gini importance unfairly favors predictor variables with many categories, while the RF permutation importance is a more reliable indicator (see Strobl et al. (2007)). So we restrict our attention to the latter. The second one is that permutation importance seems to overestimate the variable importance of highly correlated variables, for which Strobl et al. (2008) propose a conditional variant. Let us mention that, in this paper, we do not notice such a phenomenon. For classification problems, Ben Ishak and Ghattas (2008) and Díaz-Uriarte and Alvarez de Andrés (2006), for example, use RF variable importance and note that it is stable for correlated predictors, scale invariant and stable with respect to small perturbations of the learning sample. But these preliminary remarks need to be extended, and the recent paper by Archer et al. (2008), focusing more specifically on the VI topic, does not answer some crucial questions about variable importance behavior, like the importance of a group of variables or its behavior in the presence of highly correlated variables. This is the second goal of this paper.
Many variable selection procedures are based on the cooperation of variable importance for ranking and model estimation to evaluate and compare a family of models. Three types of variable selection methods are distinguished (see Kohavi et al. (1997) and Guyon et al. (2003)): ”filter”, for which the score of variable importance does not depend on a given model design method; ”wrapper”, which includes the prediction performance in the score calculation; and finally ”embedded”, which intertwines more closely variable selection and model estimation.
For non-parametric models, only a small number of methods are available, especially for the classification case. Let us briefly mention some of them, which are potentially competing tools. Of course, we must first mention the wrapper methods based on VI coming from CART, see Breiman et al. (1984), and of course random forests, see Breiman (2001). Then some examples of embedded methods: Poggi and Tuleau (2006) propose a method based on CART scores using a stepwise ascending procedure with an elimination step; Guyon et al. (2002) (and Rakotomamonjy (2003)) propose SVM-RFE, a method based on SVM scores using descending elimination. More recently, Ben Ishak et al. (2008) propose a stepwise variant, while Park et al. (2007) propose a ”LARS”-type strategy (see Efron et al. (2004)) for classification problems.
Let us recall that two distinct objectives about variable selection can be identified: (1) to find important variables highly related to the response variable for interpretation purposes; (2) to find a small number of variables sufficient for a good prediction of the response variable. The key tool for task 1 is thresholding variable importance, while the crucial point for task 2 is to combine variable ranking and the stepwise introduction of variables into a prediction model building. The introduction could be ascending, in order to avoid selecting redundant variables, or, for the case n ≪ p, descending first to reach a classical situation and then ascending using the first strategy, see Fan and Lv (2008). We propose in this paper a two-step procedure, the first step being common while the second depends on the objective: interpretation or prediction.
The paper is organized as follows. After this introduction, Section 2 focuses on random forests parameters. Section 3 proposes to study the behavior of the RF variable importance index. Section 4 investigates the two classical issues of variable selection using the random forests based score of importance. Section 5 finally opens discussion about future work.
2 Selecting method parameters
2.1 Experimental framework
The R package about random forests is based on the seminal contribution of Breiman and Cutler and is described in Liaw and Wiener (2002). In this paper, we focus on the randomForest procedure. The two main parameters are mtry, the number of input variables randomly chosen at each split, and ntree, the number of trees in the forest.
A third parameter, denoted by nodesize, allows one to specify the minimum number of observations in a node. We retain the default value (1 for classification and 5 for regression) of this parameter for all of our experiments, since it is close to the maximal tree choice.
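For reference, the default values of the R randomForest package discussed here can be collected in a small helper of ours (the values mtry = ⌊√p⌋ for classification, mtry = ⌊p/3⌋ for regression, ntree = 500 and nodesize = 1 or 5 are the package defaults):

```python
import math

def rf_defaults(p, task):
    """Default parameter values of the R randomForest package for p predictors."""
    if task == "classification":
        return {"mtry": max(1, int(math.sqrt(p))), "ntree": 500, "nodesize": 1}
    if task == "regression":
        return {"mtry": max(1, p // 3), "ntree": 500, "nodesize": 5}
    raise ValueError("task must be 'classification' or 'regression'")
```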
In this section, we concentrate on the prediction performance of RF, focusing on the out-of-bag (OOB) error. We use this kind of prediction error estimate for three reasons: the main one is that we are mainly interested in comparing results instead of assessing models; the second is that it gives a fair estimation compared to the usual alternative test set error, even if it is considered a little optimistic; and the last, but not the least, is that it is a default output of the randomForest procedure. To avoid spurious sampling effects, each OOB error is actually the mean of the OOB errors over several runs.
We have collected information about the data sets considered in this paper: the name, the name of the corresponding data structure (when different), n, p, the number of classes in the multiclass case, a reference, a website or a package. The next two tables contain synthetic information, while details are postponed to the Appendix. We distinguish standard and high dimensional situations and, in addition, the three problems: regression, 2-class classification and multiclass classification.
Table 1 displays some information about standard problems datasets: for classification at the top and for regression at the bottom.
Table 2 displays high dimensional problems datasets: for classification at the top and for regression at the bottom.
| Dataset | n | p | # classes |
| toys data | 100 | 100 to 1000 | 2 |
| Friedman1 | 100 | 100 to 1000 | — |
| Friedman2 | 100 | 100 to 1000 | — |
| Friedman3 | 100 | 100 to 1000 | — |
Regarding regression problems, even if it seems at first inspection that the seminal paper by Breiman (2001) closes the debate about good advice, it remains that the experimental results concern a variant which is not implemented in the universally used R package. Moreover, except for this reference, to our knowledge no such general paper is available, so we develop again Breiman's study both for real and simulated data corresponding to the case n ≫ p, and we provide some additional study of data corresponding to the case n ≪ p (such examples typically come from chemometrics).
We observe that the default value of mtry proposed by the R package is not optimal, and that there is no improvement from using random forests with respect to unpruned bagging (obtained for mtry = p).
Let us briefly examine standard (n ≫ p) regression datasets: in Figure 1 for real ones and in Figure 2 for simulated ones. Each plot gives, for a range of values of ntree, the OOB error for three different values of mtry. The vertical solid line indicates the value mtry = p/3, the default proposed by the R package for regression problems, the vertical dashed line marking a second reference value of mtry.
Three remarks can be formulated. First, the OOB error is maximal for mtry = 1 and then decreases quickly (except for the ozone dataset, for reasons not clearly elucidated); then, as soon as mtry is moderately large, the error remains the same. Second, the choice mtry = p always gives a lower OOB error than mtry = p/3, and the gain can be important. So the default value proposed by the R package often seems not optimal. Lastly, the default value ntree = 500 is convenient, but a much smaller one leads to comparable results.
So, for standard (n ≫ p) regression problems, it seems that there is no improvement from using random forests with respect to unpruned bagging (obtained for mtry = p).
High dimensional problems
Let us start with a simulated data set for the high dimensional case n ≪ p. This example is built by adding extra noisy variables (independent and uniformly distributed on [0, 1]) to the Friedman1 model defined by:

Y = 10 sin(π X^1 X^2) + 20 (X^3 − 0.5)^2 + 10 X^4 + 5 X^5 + ε,

where X^1, …, X^5 are independent and uniformly distributed on [0, 1] and ε ∼ N(0, 1). So we have 5 variables related to the response Y, the others being noise. We set n = 100 and let p vary.
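As a quick sketch (pure Python, function name ours; we use the standard Friedman1 formula y = 10 sin(π x1 x2) + 20 (x3 − 0.5)² + 10 x4 + 5 x5 + ε with ε ∼ N(0, 1)), the simulated design can be generated as:

```python
import math
import random

def friedman1(n, p, seed=0):
    """Simulate n observations of the Friedman1 regression model, padded
    with p - 5 pure-noise uniform variables (requires p >= 5)."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(p)]  # all variables Uniform[0, 1]
        eps = rng.gauss(0, 1)
        y.append(10 * math.sin(math.pi * x[0] * x[1])
                 + 20 * (x[2] - 0.5) ** 2 + 10 * x[3] + 5 * x[4] + eps)
        X.append(x)
    return X, y
```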
Figure 3 contains four plots corresponding to four values of p, increasing the nuisance space dimension. Each plot gives, for ten values of ntree, the OOB error for three different values of mtry. The x-axis is in log scale and the vertical solid line indicates the value mtry = p/3, the default proposed by the R package for regression, the vertical dashed line marking a second reference value of mtry.
Let us give four comments. All curves have the same shape: the OOB error decreases as ntree increases. As p increases, both OOB errors of unpruned bagging (obtained with mtry = p) and of random forests with the default value of mtry increase, but unpruned bagging performs better than RF. The choice mtry = p/3 always gives worse results than those obtained for mtry = p. Finally, the default choice ntree = 500 is convenient, but a much smaller one leads to comparable results.
Figures 4 and 5 show the results of the same study for the Friedman2 and Friedman3 models. The previous comments remain valid. Let us just note that the difference between unpruned bagging and random forests with the default mtry value is even more pronounced for these two problems.
To end, let us now examine the high dimensional real data set PAC. Figure 6 gives, for the same ten values of ntree, the OOB error for four different values of mtry (the x-axis is in log scale). The general behavior is similar except for the shape: as soon as ntree is moderately large, the error remains the same instead of still decreasing. The difference in the shape of the curves between simulated and real datasets can be explained by the fact that, in the simulated datasets we considered, the number of true variables is very small compared to the total number of variables. One may expect that in real datasets the proportion of true variables is larger.
So, for high dimensional (n ≪ p) regression problems, unpruned bagging seems to perform better than random forests, and the difference can be large.
Regarding standard classification problems, we check that Breiman's conclusions remain valid for the considered variant and that the default value of mtry proposed in the R package is good. However, for high dimensional classification problems, we observe that larger values of mtry sometimes give much better results.
For classification problems for which n ≫ p, again the paper by Breiman (2001) is interesting, and we just quickly check its conclusions.
Let us first examine in Figure 7 standard (n ≫ p) classification real data sets. Each plot gives, for a range of values of ntree, the OOB error for three different values of mtry. The vertical solid line indicates the value mtry = √p, the default proposed by the R package for classification.
Three remarks can be formulated. The default value mtry = √p is convenient for all the examples. The default value ntree = 500 is sufficient, while a much smaller one is not convenient and can lead to significantly larger errors. The general shape is the following: the errors for mtry = 1 and for mtry = p (corresponding to unpruned bagging) are of the same ”large” order of magnitude, and the minimum is reached for the value mtry = √p. The gain can be about 30 to 50%.
So, for these 9 examples, the default value mtry = √p proposed by the R package is quite optimal.
Let us now examine in Figure 8 standard (n ≫ p) classification simulated datasets. As can be seen, a moderate value of ntree is sufficient and, except for ringnorm, already pointed out as a somewhat special dataset (see Cutler and Zhao (2001)), the value mtry = √p is good. Here, the general shape of the error curve is quite different compared to real datasets: the error increases with mtry. So for these four examples, the smaller mtry, the better.
High dimensional problems
Let us now consider the case n ≪ p, for which Díaz-Uriarte and Alvarez de Andrés (2006) give numerous pieces of advice. We complete their study by trying larger values of mtry, which give interesting results. One can find in Figure 9 the OOB errors for nine high dimensional real datasets. Each plot gives, for nine values of ntree, the OOB error for four different values of mtry. The x-axis is in log scale. The vertical solid line indicates the default value mtry = √p proposed by the R package.
Again the default value ntree = 500 is sufficient; on the contrary, too small a value of mtry can lead to significantly larger errors. The general shape is the following: the error decreases in general, and the minimum value is obtained for, or is close to, the one reached using mtry = p (corresponding to unpruned bagging). The difference with standard problems is notable. The reason is that when p is large, mtry must be sufficiently large in order to have a high probability of capturing important variables (that is, variables highly related to the response) for defining the splits of the RF. In addition, let us mention that the default value mtry = √p is still reasonable from the OOB error viewpoint and, since √p is small with respect to p, it is a very attractive value from a computational perspective (notice that the trees are not too deep since n is not too large).
Let us examine a simulated dataset for the case n ≪ p, introduced by Weston et al. (2003), called “toys data” in the sequel. It is an equiprobable two-class problem, Y ∈ {−1, 1}, with 6 true variables, the others being some noise. This example is interesting since it constructs two nearly independent groups of 3 significant variables (highly, moderately and weakly correlated with the response Y) and an additional group of noise variables, uncorrelated with Y. A forward reference to the plots on the left side of Figure 11 allows one to see the variable importance picture and to note that the importance of variables 1 to 3 is much higher than that of variables 4 to 6. More precisely, the model is defined through the conditional distribution of the X^j given Y = y:
for 70% of the data, X^j ∼ N(y·j, 1) for j = 1, 2, 3 and X^j ∼ N(0, 1) for j = 4, 5, 6;
for the 30% left, X^j ∼ N(0, 1) for j = 1, 2, 3 and X^j ∼ N(y·(j − 3), 1) for j = 4, 5, 6;
the other variables are noise: X^j ∼ N(0, 1) for j = 7, …, p.
After simulation, the obtained variables are standardized. Let us fix n = 100.
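The toys data generator can be sketched as follows (an illustrative pure-Python sketch of the Weston et al. (2003) model as we read it: for 70% of the observations the first group of three variables is informative, for the remaining 30% the second group; the standardization step is omitted and the names are ours):

```python
import random

def toys(n, p, seed=0):
    """Simulate the 'toys data': 6 true variables, p - 6 noise (requires p >= 6)."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        cls = rng.choice((-1, 1))  # equiprobable two-class response
        if rng.random() < 0.7:
            # first group informative, second group pure noise
            x = ([rng.gauss(cls * j, 1) for j in (1, 2, 3)]
                 + [rng.gauss(0, 1) for _ in (4, 5, 6)])
        else:
            # second group informative, first group pure noise
            x = ([rng.gauss(0, 1) for _ in (1, 2, 3)]
                 + [rng.gauss(cls * (j - 3), 1) for j in (4, 5, 6)])
        x += [rng.gauss(0, 1) for _ in range(p - 6)]  # noise variables
        X.append(x)
        y.append(cls)
    return X, y
```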
The plots of Figure 10 are organized as previously; four values of p are considered, corresponding to increasing nuisance space dimension. For the two smallest values of p, the error decreases hugely until ntree is moderately large and then remains constant, so the default values work well and perform as well as unpruned bagging, even if the true dimension is only 6. For larger values of p, the shape of the curve is close to the one for high dimensional real data sets (the error decreases and the minimum is reached when mtry = p). Hence, the error reached by using random forests with the default mtry is noticeably larger than the error reached by unpruned bagging, which remains small for all the considered values of p.
Finally, for high dimensional classification problems, our conclusion is that it may be worthwhile to choose mtry larger than the default value √p.
After this section focusing on the prediction performance, let us now focus on the second attractive feature of RF: the variable importance index.
3 Variable importance
The quantification of variable importance (abbreviated VI) is a crucial issue, not only for ranking the variables before a stepwise estimation model, but also for interpreting data and understanding underlying phenomena in many applied problems.
In this section, we examine the behavior of RF variable importance according to three different issues. The first one deals with the sensitivity to the sample size n and the number of variables p. The second examines the sensitivity to the method parameters mtry and ntree. The last one deals with the variable importance of a group of variables, highly correlated or poorly correlated, together with the problem of correct identification of irrelevant variables.
As a result, a good choice of parameters of RF can help to better discriminate between important and useless variables. In addition, it can increase the stability of VI scores.
To illustrate this discussion, let us consider the toys data introduced in Section 2.3.2 and compute the variable importance. Recall that only the first 6 variables are of interest and the others are noise.
Let us mention that variable importance is computed conditionally to a given realization, even for simulated datasets. This choice, which is criticizable if the objective is to reach a good estimation of an underlying constant, is consistent with the idea of staying as close as possible to the experimental situation of dealing with a given dataset. In addition, the number of permutations of the observed values in the OOB sample, used to compute the score of importance, is set to its default value.
3.1 Sensitivity to n and p
Figure 11 illustrates the behavior of variable importance for several values of n and p. Parameters ntree and mtry are set to their default values. Boxplots are based on repeated runs of the RF algorithm and, for visibility, we plot the variable importance only for a few variables.
On each row, the first plot is the reference one, for which we observe a convenient picture of the relative importance of the initial variables. Then, when p increases tremendously, we try to check if: (1) the situation between the two groups remains readable; (2) the situation within each group is stable; (3) the importance of the additional dummy variables is close to 0.
The first situation (graphs at the top of the figure) corresponds to an “easy” case, where a lot of data are available, and the second (graphs at the bottom) to a harder one with fewer observations. For each value of n, three values of p are considered. In the first, only the true variables are present. Then two very difficult situations are considered: one with a lot of noisy variables, and one that is even harder. Graphs are truncated after the most important variables for readability (the importance of the remaining noisy variables is of the same order of magnitude as the last plotted).
Let us comment on the graphs of the first row. In the reference situation we obtain concentrated boxplots and the order is clear, two of the true variables having nearly the same importance. When p increases, the order of magnitude of the importance decreases. The order within the two groups of variables (1 to 3 and 4 to 6) remains the same, while the overall order is modified (a variable of one group is now less important than a variable of the other). In addition, variable importance is more unstable for huge values of p. But what is remarkable is that all noisy variables have a zero VI, so one can easily recover the variables of interest.
In the second row (smaller n), we note a greater instability since the number of observations is only moderate, but the variable ranking remains quite the same. What differs is that in the difficult situations (largest values of p) the importance of some noisy variables increases and, for example, the weakest true variables can no longer be highlighted from noise. This is due to the decreasing behavior of VI as p grows, coming from the fact that the algorithm randomly chooses only mtry variables at each split (with the default value): the probability of choosing one of the true variables is then really small, and the less a variable is chosen, the less it can be considered as important.
In addition, let us remark that the variability of VI is large for true variables compared to useless ones. This remark can be used to build some kind of test for VI (see Strobl et al. (2007)), but of course ranking is better suited for variable selection.
We now study how this VI index behaves when changing values of the main method parameters.
3.2 Sensitivity to mtry and ntree
The choice of mtry and ntree can be important for the VI computation. Let us fix n and p. In Figure 12 we plot the variable importance obtained using three values of mtry (the default √p and two larger ones) and two values of ntree (the default 500 and a larger one).
The effect of taking a larger value of mtry is obvious: the magnitude of VI more than doubles when going from the default to a larger value of mtry, and it increases again with the largest one. The effect of ntree is less visible, but taking a larger value leads to better stability. What is interesting in the bottom right graph is that we get the same order for all true variables in every run of the procedure. The mean OOB error rates of the top left and bottom right situations are comparable: the gain in error may not be considered large, but what we get in VI is interesting.
3.3 Sensitivity to highly correlated predictors
Let us address an important issue: how does variable importance behave in the presence of several highly correlated variables? We take as basic framework the previous toys data context. Then we add to the dataset highly correlated replications of some of the 6 true variables. The replicates are inserted between the true variables and the useless ones.
The first graph of Figure 13 is the reference one: the situation is the same as previously. Then, for the three other cases, we simulate an increasing number of variables highly correlated with variable 3 (the most important one). These replications are plotted between the two vertical lines.
The magnitude of importance of the first group (variables 1 to 3) steadily decreases when adding more replications of variable 3. On the other hand, the importance of the second group (variables 4 to 6) is unchanged. Notice that the importance is not divided by the number of replications: in our example, even with the largest number of replications, the maximum importance of the group containing variable 3 (that is, variable 3 and all its replications) is only about three times lower than the initial importance of variable 3. Finally, note that even if some variables in this group have low importance, they cannot be confused with noise.
Let us briefly comment on similar experiments (see Figure 14), but perturbing the basic situation by introducing highly correlated versions not only of the third variable but also of the sixth, that is, replicating the most important variable of each group.
Again, the first graph is the reference one. Then we simulate an increasing number of variables highly correlated with variable 3 and the same number highly correlated with variable 6. Replications of variable 3 are plotted between the first vertical line and the dashed line, and replications of variable 6 between the dashed line and the second vertical line.
The magnitude of importance of each group (variables 1 to 3 with the replications of variable 3, and variables 4 to 6 with the replications of variable 6, respectively) steadily decreases when adding more replications. The relative importance between the two groups is preserved. And the relative importance between the two groups of replications is of the same order as the one between the two initial groups.
3.4 Prostate data variable importance
To end this section, we illustrate the behavior of variable importance on a high dimensional real dataset: the microarray data called Prostate. The global picture is the following: two hugely important variables, about twenty moderately important variables, and the others of small importance. More precisely, Figure 15 compares the VI obtained with parameters set to their default values (graphs of the left column) and those obtained with larger values of mtry and ntree (graphs of the right column).
Let us comment on Figure 15. For the two most important variables (first row), the magnitude of importance obtained with larger mtry and ntree is much larger than the one obtained with default values. In the second row, the increase of magnitude is still noticeable from the 3rd to the 9th most important variables, while from the 10th to the 20th most important variables VI is quite the same for the two parameter choices. In the third row, we get VI closer to zero for the unimportant variables with larger mtry and ntree than with default values. In addition, note that for the less important variables, boxplots are wider for default values, especially for unimportant variables (from the 200th to the 250th).
4 Variable selection
We distinguish two variable selection objectives:
to find important variables highly related to the response variable for interpretation purpose;
to find a small number of variables sufficient for a good prediction of the response variable.
The first is to magnify all the important variables, even with high redundancy, for interpretation purpose and the second is to find a sufficient parsimonious set of important variables for prediction.
Díaz-Uriarte and Alvarez de Andrés propose a strategy based on recursive elimination of variables. More precisely, they first compute the RF variable importance. Then, at each step, they eliminate a fixed fraction of the variables having the smallest importance and build a new forest with the remaining variables. They finally select the set of variables leading to the smallest OOB error rate. The fraction of variables to eliminate is an arbitrary parameter of their method and does not depend on the data.
Ben Ishak and Ghattas choose an ascending strategy based on a sequential introduction of variables. First, they compute some SVM-based variable importance. Then, they build a sequence of SVM models, invoking at the beginning the most important variables one by one. When the number of invoked variables becomes too large, the additional variables are invoked in packets. They finally select the set of variables leading to the model with the smallest error rate. The way variables are introduced is not data-driven, since it is fixed before running the procedure. They also compare their procedure with a similar one using RF instead of SVM.
We propose the following two-step procedure, the first step being common while the second one depends on the objective:
Preliminary elimination and ranking:
Compute the RF scores of importance, cancel the variables of small importance;
Order the remaining variables in decreasing order of importance.
For interpretation: construct the nested collection of RF models involving the k most important variables, for k = 1 to m (the number of variables retained in the preliminary step), and select the variables involved in the model leading to the smallest OOB error;
For prediction: starting from the ordered variables retained for interpretation, construct an ascending sequence of RF models, by invoking and testing the variables stepwise. The variables of the last model are selected.
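The two steps above can be sketched generically as follows (an illustrative sketch, not the exact implementation: `oob_error` stands for any routine fitting a forest on a subset of variables and returning its OOB error; names are ours):

```python
def interpretation_set(ranked_vars, oob_error):
    """Among the nested models built on the k most important variables,
    keep the variables of the one with the smallest OOB error."""
    errs = [oob_error(ranked_vars[:k]) for k in range(1, len(ranked_vars) + 1)]
    best_k = min(range(len(errs)), key=errs.__getitem__) + 1
    return ranked_vars[:best_k]

def prediction_set(ranked_vars, oob_error, threshold):
    """Stepwise introduction: a variable is kept only if it decreases the
    OOB error by more than `threshold`."""
    kept = [ranked_vars[0]]
    err = oob_error(kept)
    for v in ranked_vars[1:]:
        new_err = oob_error(kept + [v])
        if err - new_err > threshold:
            kept.append(v)
            err = new_err
    return kept
```

The prediction step is deliberately conservative: a variable that does not improve the error beyond the threshold is discarded, which avoids keeping redundant variables.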
Of course, this is only a sketch of the procedure, and more details are needed for it to be effective. The next paragraphs answer this point, but we emphasize that we propose a heuristic strategy which is not supported by specific model hypotheses, but based on data-driven thresholds to take decisions.
Since we want to treat all situations in a unified way, we will use, for finding prediction variables, the somewhat crude strategy previously defined. Nevertheless, starting from the set of variables selected for interpretation, a better strategy could be to examine all, or at least a large part, of the possible sub-models and to select the variables of the model minimizing the OOB error. But this strategy quickly becomes unrealistic for high dimensional problems, so we prefer to experiment with a strategy designed for small n and large p which is not conservative and may even lead to selecting fewer variables.
To both illustrate and give more details about this procedure, we apply it on a simulated learning set of size n = 100 from the classification toys data model (see Section 2.3.2) with a large number of noisy variables. The results are summarized in Figure 16. The true variables (1 to 6) are highlighted in the plots. We compute, thanks to the learning set, forests with values of mtry and ntree previously considered as well adapted for VI calculations.
Let us detail the main stages of the procedure together with the results obtained on toys data:
First we rank the variables by sorting the VI in descending order.
The result is drawn on the top left graph for the most important variables (the remaining noisy variables also have an importance very close to zero). Note that true variables are significantly more important than the noisy ones.
We keep this order in mind and plot the corresponding standard deviations of VI. We use this graph to estimate some threshold for importance, and we keep only the variables whose importance exceeds this level. More precisely, we select the threshold as the minimum prediction value given by a CART model fitted to this curve. This rule is, in general, conservative and leads to retaining more variables than necessary, in order to make a careful choice later.
The standard deviations of the VI can be found in the top right graph. We can see that the standard deviation of the true variables is large compared to that of the noisy variables, which is close to zero. The threshold leads to retaining variables.
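The thresholding step above can be sketched as follows. This is an illustrative reconstruction in Python/scikit-learn, not the authors' R code: `vi_runs` (a matrix of VI values over repeated forest runs) and the function name are our own assumptions.

```python
# Sketch (not the authors' code) of the interpretation-step thresholding:
# rank variables by mean variable importance (VI), fit a CART regression
# tree to the curve of sorted VI standard deviations, and keep the
# variables whose mean VI exceeds the minimum value predicted by the tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def interpretation_threshold(vi_runs):
    """vi_runs: (n_runs, p) array of VI values from repeated forests."""
    mean_vi = vi_runs.mean(axis=0)
    sd_vi = vi_runs.std(axis=0)
    order = np.argsort(mean_vi)[::-1]          # rank by decreasing mean VI
    sd_sorted = sd_vi[order]
    # fit a CART model to the sorted standard-deviation curve
    ranks = np.arange(len(sd_sorted)).reshape(-1, 1)
    cart = DecisionTreeRegressor().fit(ranks, sd_sorted)
    threshold = cart.predict(ranks).min()      # minimum predicted value
    kept = order[mean_vi[order] > threshold]   # conservative selection
    return kept, threshold
```

Because the noisy variables have VI standard deviations close to zero, the minimum CART prediction gives a low threshold and the selection errs on the side of keeping too many variables, matching the conservative rule described above.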
Then, we compute the OOB error rates of random forests (using default parameters) for the nested models, starting from the one with only the most important variable and ending with the one involving all the important variables kept previously. The variables of the model leading to the smallest OOB error are selected.
Note that in the bottom left graph the error decreases quickly and reaches its minimum when the first true variables are included in the model; it then remains constant. We select the model containing of the true variables. More precisely, we select the variables involved in the first model whose OOB error is close to this minimum. The actual minimum is reached with 24 variables.
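A minimal sketch of this nested-models step, assuming scikit-learn's random forest as a stand-in for the R randomForest package (the OOB error is read from `oob_score_`):

```python
# Hedged sketch of the nested-models step: starting from the most
# important variable, grow nested models and record the OOB error of a
# default random forest for each of them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def nested_oob_errors(X, y, ranked_vars):
    """ranked_vars: variable indices sorted by decreasing importance."""
    errors = []
    for k in range(1, len(ranked_vars) + 1):
        rf = RandomForestClassifier(oob_score=True, random_state=0)
        rf.fit(X[:, ranked_vars[:k]], y)
        errors.append(1.0 - rf.oob_score_)   # OOB misclassification rate
    return np.array(errors)
```

The interpretation set is then read off this curve: the first nested model whose OOB error is close to the minimum of the curve.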
The expected behavior of the error is to be non-decreasing as soon as all the "true" variables have been selected. It is then difficult to treat in a unified way an error curve that is nearly constant or only slightly increasing. In fact, we propose to use a heuristic rule similar to the 1 SE rule of Breiman et al. (1984) used for selection in the cost-complexity pruning procedure.
We perform a sequential variable introduction with testing: a variable is added only if the error gain exceeds a threshold. The idea is that the error decrease must be significantly greater than the average variation obtained by adding noisy variables.
The bottom right graph shows the result of this step: the final model for prediction purposes involves only variables , and . The threshold is set to the mean of the absolute values of the first-order differences of the errors between the model with variables (the first model after the one selected for interpretation, see the bottom left graph) and the last one.
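The sequential introduction step can be sketched as follows. Here `oob_error` is a hypothetical callback that fits a forest on a candidate variable set and returns its OOB error, and `jump_threshold` implements the data-driven threshold described above; both names are our own.

```python
# Hedged sketch of the prediction step: add variables in decreasing
# importance order, keeping one only if it decreases the error by more
# than a threshold estimated from the noisy tail of the nested-models
# error curve.
import numpy as np

def jump_threshold(nested_errors, start):
    """Mean absolute first-order difference of the nested-models errors
    from index `start` (just after the interpretation model) onwards."""
    return np.abs(np.diff(nested_errors[start:])).mean()

def stepwise_selection(ranked_vars, oob_error, threshold):
    selected = [ranked_vars[0]]
    current = oob_error(selected)
    for var in ranked_vars[1:]:
        err = oob_error(selected + [var])
        if current - err > threshold:      # keep only significant gains
            selected.append(var)
            current = err
    return selected
```

The threshold thus measures the average error variation caused by adding noisy variables, so a variable is retained only when its contribution clearly exceeds that background fluctuation.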
It should be noted that, since ranking and selection are made on the same set of observations, an error evaluation on a test set, or using a cross-validation scheme, should be preferred if one wants to estimate the prediction error. This is taken into account in the next section, where our results are compared to others.
To evaluate the different prediction errors fairly, we prefer here to simulate a test set of the same size as the learning set. The test error rate with all (200) variables is about , while the one with the variables selected for interpretation is about , a little bit smaller. The model with prediction variables , and reaches an error of . Repeating the global procedure times on the same data always gave the same interpretation set of variables and the same prediction set, in the same order.
Highly correlated variables
Let us now apply the procedure to the toys data with replicated variables: a first group of variables highly correlated with variable , and a second one replicated from variable (the most important variable of each group). The situations of interest are the same as those considered to produce Figure 14.
| number of replications | interpretation set | prediction set |
| 1                      | 3 2 6 5            | 3 6 5          |
| 5                      | 3 2 6 5            | 3 6 5          |
| 10                     | 3 2 6 5            | 3 6 5          |
Let us comment on Table 3, where the expression means that variable is a replication of variable .
Interpretation sets do not contain all the variables of interest. In particular, we hardly keep any replications of variable . The reason is that, even before noisy variables enter the nested models, the error rate already increases (or remains constant): when several highly correlated variables are added, the bias remains the same while the variance increases. However, the prediction sets are satisfactory: we always highlight variables and , together with at most one variable correlated with each of them.
Even if not all the variables of interest appear in the interpretation set, they always appear in the first positions of our importance ranking. More precisely, the most important variables in the case of replications are: ( ), and the most important variables in the case of replications are: ( ). Note that the order of the true variables ( ) remains the same in all situations.
We apply the variable selection procedure to the Prostate data. The graphs of Figure 17 are obtained in the same way as those of Figure 16, except that for the RF procedure we use , and that for the bottom left graph we only plot the most important variables, for visibility. The procedure leads to the same picture as previously, except that the OOB rate along the nested models is less regular. The key point is that it selects variables for interpretation, and variables for prediction. The number of selected variables is thus very much smaller than .
In addition, to examine the variability of the interpretation and prediction sets, the global procedure is repeated five times on the entire Prostate dataset. The five prediction sets are very close to each other: the number of prediction variables fluctuates between and , and variables appear in all the sets. Among the five interpretation sets, are identical and made of variables, while the other are made of variables. The variables of the smallest sets are present in all the sets, and the biggest sets (of size ) have variables in common.
So, although the sets of variables are not identical from one run of the procedure to another, they are not completely different. In addition, the most important variables are included in all the sets.
High dimensional classification
We apply the global variable selection procedure to the high dimensional real datasets studied in Section 2.3.2, and we want to get an estimate of the prediction error rates. Since these datasets are of small size, we use 5-fold cross-validation to estimate the error rate: we split the sample into stratified parts, each part is successively used as a test set, and the remainder of the data is used as a learning set. Note that the sets of selected variables vary from one fold to another. Table 4 thus gives the misclassification error rate, estimated by 5-fold cross-validation, for the interpretation and prediction sets of variables respectively. The number in brackets is the average number of selected variables. In addition, one can find the original error, which stands for the misclassification rate estimated by 5-fold cross-validation with random forests using all the variables. This error is calculated using the same partition into 5 parts, and again we use and for all datasets.
| Dataset  | Interpretation | Prediction | Original |
| Colon    | 0.16 (35)      | 0.20 (8)   | 0.14     |
| Leukemia | 0 (1)          | 0 (1)      | 0.02     |
| Lymphoma | 0.08 (77)      | 0.09 (12)  | 0.10     |
| Prostate | 0.085 (33)     | 0.075 (8)  | 0.07     |
The number of interpretation variables is much smaller than : at most tens, compared to thousands. The number of prediction variables is very small (always smaller than ) and the reduction can be very important with respect to the interpretation set size. The errors for the two variable selection procedures are of the same order of magnitude as the original error (although a little bit larger).
We compare these results with those obtained by Ben Ishak and Ghattas (2008) (see tables 9 and 11 in ), who compared their method with 5 competitors (mentioned in the introduction) for classification problems on these four datasets. Error rates are comparable. With the prediction procedure, as already noted in the introductory remark, we always select fewer variables than their procedures (except for their method GLMpath, which selects fewer than 3 variables for all datasets).
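The cross-validation scheme described in this section can be sketched as follows, with `select_variables` standing in (hypothetically) for the full selection procedure rerun inside each fold, so that the selected sets may differ from fold to fold:

```python
# Sketch of the stratified 5-fold cross-validation used to estimate the
# error of the selection procedures on small datasets. Variable
# selection is redone on each training part to avoid selection bias.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def cv_error(X, y, select_variables, n_splits=5):
    errs = []
    for train, test in StratifiedKFold(n_splits=n_splits).split(X, y):
        kept = select_variables(X[train], y[train])  # per-fold selection
        rf = RandomForestClassifier(random_state=0)
        rf.fit(X[train][:, kept], y[train])
        errs.append(np.mean(rf.predict(X[test][:, kept]) != y[test]))
    return np.mean(errs)
```

Running the selection inside each fold is what makes the error estimate fair: selecting variables once on the whole sample and then cross-validating would reuse the test observations for selection.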
A simulated dataset
We now apply the procedure to a simulated regression problem. Starting from the Friedman1 model and adding noisy variables as in Section 2.2.2, we construct a learning set of size with variables. Figure 18 displays the results of the procedure. The true variables of the model (1 to 5) are respectively represented by ().
The graphs are of the same kind as in the classification problems. Note that variable 3 is confused with the noise and is not selected by the procedure. This is explained by the fact that it is only weakly correlated with the response variable. The interpretation procedure selects the true variables except variable 3, together with two noisy variables, while the prediction set contains only the true variables (again except variable 3). Once more, the whole procedure is stable in the sense that several runs give the same set of selected variables.
In addition, we simulate a test set of the same size as the learning set to estimate the prediction error. The test mean squared error with all the variables is about , the one with the variables selected for interpretation is , and the one with the variables selected for prediction is .
Before ending the paper, let us apply the entire procedure to the ozone dataset. It consists of observations of the daily maximum one-hour-average ozone concentration, together with meteorological explanatory variables. Let us first examine, in Figure 19, the VI obtained with the RF procedure using and .
From left to right, the 12 explanatory variables are 1-Month, 2-Day of month, 3-Day of week, 5-Pressure height, 6-Wind speed, 7-Humidity, 8-Temperature (Sandburg), 9-Temperature (El Monte), 10-Inversion base height, 11-Pressure gradient, 12-Inversion base temperature, 13-Visibility.
Three very sensible groups of variables appear, from the most to the least important. First, the two temperatures (8 and 9) and the inversion base temperature (12), known to be the best ozone predictors, together with the month (1), which is an important predictor since ozone concentration exhibits a marked seasonal component. Then a second group of clearly less important meteorological variables: pressure height (5), humidity (7), inversion base height (10), pressure gradient (11) and visibility (13). Finally, three unimportant variables: day of month (2), day of week (3), of course, and, more surprisingly, wind speed (6). This last fact is classical: wind enters the model only when ozone pollution arises; otherwise wind and pollution are uncorrelated (see for example Cheze et al. (2003), highlighting this phenomenon using partial estimators).
Let us now examine the results of the selection procedures.
After the first elimination step, the 2 variables of negative importance are eliminated, as expected.
Therefore we keep 10 variables for the interpretation step; the model with 7 variables is then selected, and it contains all the most important variables: (9 8 12 1 11 7 5).
For the prediction procedure, the model is the same, except that one more variable is eliminated: humidity (7).
In addition, when different values of are considered, the 4 most important variables (9 8 12 1) highlighted by the VI index are always selected and appear in the same order. Variable 5 also always appears, but another variable can appear after or before it.
Of course, one of the main open issues about random forests is to elucidate its exceptionally attractive performance from a mathematical point of view. In fact, only a small number of references deal with this very difficult challenge and, apart from the theoretical examination of bagging by Bühlmann and Yu (2002), only purely random trees, a simple version of random forests, have been considered. Purely random trees were introduced by Cutler and Zhao (2001) for classification problems and then studied by Breiman (2004), but the results are somewhat preliminary. More recently, Biau et al. (2008) obtained the first well-stated consistency-type results.
From a practical perspective, this simplified and essentially not data-driven strategy surprisingly seems to perform well, at least for prediction purposes (see Cutler and Zhao 2001), and can of course be handled theoretically in an easier way. Nevertheless, it would be interesting to check that the same conclusions hold for the variable importance and variable selection tasks.
In addition, it could be interesting to examine some variants of random forests which, on the contrary, try to take more information into account. Let us give two examples. The first is about pruning: why is pruning not used for individual trees? From the computational point of view the answer is of course obvious, and for prediction performance, averaging eliminates the negative effects of individual overfitting. But for the two other previously mentioned statistical problems, variable importance and variable selection, the answer remains unclear. The second remark is about the random feature selection step. The most widely used version of RF randomly selects input variables according to the discrete uniform distribution. Two variants can be suggested: the first is to select random inputs according to a distribution coming from a preliminary ranking given by a pilot estimator; the second is to adaptively update this distribution, taking advantage of the ranking based on the current forest, which becomes more and more accurate.
These different future directions, both theoretical and practical, will be addressed in the next step of the work.
In the sequel, information about datasets retrieved from the R package mlbench can be found in the corresponding
Standard problems, :
Real data sets
Simulated data sets
High dimensional problems, :
Real data sets
Simulated data sets
- thanks: Université Paris Descartes, France
- thanks: Université Nice Sophia-Antipolis, France
- see http://www.r-project.org/
- In all the paper, with stands for
- from the R package mlbench
- see http://ligarto.org/rdiaz/Papers/rfVS/randomForestVarSel.html
- see description in section 2.3.2
- from the R package chemometrics
- Alon U., Barkai N., Notterman D.A., Gish K., Ybarra S., Mack D., and Levine A.J. (1999) Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci USA, Cell Biology, 96(12):6745-6750
- Alizadeh A.A. (2000) Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature, 403:503-511
- Archer K.J. and Kimes R.V. (2008) Empirical characterization of random forest variable importance measures. Computational Statistics & Data Analysis 52:2249-2260
- Ben Ishak A. and Ghattas B. (2008) Sélection de variables en classification binaire : comparaisons et application aux données de biopuces. To appear, Revue SFDS-RSA
- Biau G., Devroye L., and Lugosi G. (2008) Consistency of random forests and other averaging classifiers. Journal of Machine Learning Research, 9:2039-2057
- Breiman L., Friedman J.H., Olshen R.A., Stone C.J. (1984) Classification And Regression Trees. Chapman & Hall
- Breiman, L. (1996) Bagging predictors. Machine Learning, 26(2):123-140
- Breiman L. (2001) Random Forests. Machine Learning, 45:5-32
- Breiman L. (2004) Consistency for a simple model of Random Forests. Technical Report 670, Berkeley
- Breiman L. and Cutler, A. (2005) Random Forests. Berkeley, http://www.stat.berkeley.edu/users/breiman/RandomForests/
- Bühlmann, P. and Yu, B. (2002) Analyzing Bagging. The Annals of Statistics, 30(4):927-961
- Cheze N., Poggi J.M. and Portier B. (2003) Partial and Recombined Estimators for Nonlinear Additive Models. Statistical Inference for Stochastic Processes, Vol. 6, 2, 155-197
- Cutler A. and Zhao G. (2001) Pert - Perfect random tree ensembles. Computing Science and Statistics, 33:490-497
- Díaz-Uriarte R. and Alvarez de Andrés S. (2006) Gene Selection and classification of microarray data using random forest. BMC Bioinformatics, 7:3, 1-13
- Dietterich, T. (1999) An experimental comparison of three methods for constructing ensembles of decision trees : Bagging, Boosting and randomization. Machine Learning, 1-22
- Dietterich, T. (2000) Ensemble Methods in Machine Learning. Lecture Notes in Computer Science, 1857:1-15
- Efron B., Hastie T., Johnstone I., and Tibshirani R. (2004) Least angle regression. Annals of Statistics, 32(2):407-499
- Fan J. and Lv J. (2008) Sure independence screening for ultra-high dimensional feature space. J. Roy. Statist. Soc. Ser. B, 70:849-911
- Guyon I., Weston J., Barnhill S., and Vapnik V.N. (2002) Gene selection for cancer classification using support vector machines. Machine Learning, 46(1-3):389-422
- Guyon I. and Elisseff A. (2003) An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157-1182
- Golub T.R., Slonim D.K, Tamayo P., Huard C., Gaasenbeek M., Mesirov J.P., Coller H., Loh M.L., Downing J.R., Caligiuri M.A., Bloomfield C.D., and Lander E.S. (1999) Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286:531-537
- Grömping U. (2007) Estimators of Relative Importance in Linear Regression Based on Variance Decomposition. The American Statistician 61:139-147
- Grömping U. (2006) Relative Importance for Linear Regression in R: The Package relaimpo. Journal of Statistical Software 17, Issue 1
- Hastie T., Tibshirani R., Friedman J. (2001) The Elements of Statistical Learning. Springer
- Ho, T.K. (1998) The random subspace method for constructing decision forests. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(8):832-844
- Khan J., Wei J.S., Ringner M., Saal L.H., Ladanyi M., Westermann F., Berthold F., Schwab M., Antonescu C.R., Peterson C., Meltzer P.S. (2001) Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med, 7:673-679
- Kohavi R. and John G.H. (1997) Wrappers for Feature Subset Selection. Artificial Intelligence, 97(1-2):273-324
- Liaw A. and Wiener M. (2002). Classification and Regression by randomForest. R News, 2(3):18-22
- Park M.Y. and Hastie T. (2007) An L1 regularization-path algorithm for generalized linear models. J. Roy. Statist. Soc. Ser. B, 69:659-677
- Poggi J.M. and Tuleau C. (2006) Classification supervisée en grande dimension. Application à l’agrément de conduite automobile. Revue de Statistique Appliquée, LIV(4):39-58
- Pomeroy S.L., Tamayo P., Gaasenbeek M., Sturla L.M., Angelo M., McLaughlin M.E., Kim J.Y., Goumnerova L.C., Black P.M., Lau C., Allen J.C., Zagzag D., Olson J.M., Curran T., Wetmore C., Biegel J.A., Poggio T., Mukherjee S., Rifkin R., Califano A., Stolovitzky G., Louis D.N., Mesirov J.P., Lander E.S., Golub T.R. (2002) Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature, 415:436-442
- Rakotomamonjy A. (2003) Variable selection using SVM-based criteria. Journal of Machine Learning Research, 3:1357-1370
- Ramaswamy S., Ross K.N., Lander E.S., Golub T.R. (2003) A molecular signature of metastasis in primary solid tumors. Nature Genetics, 33:49-54
- Ross D.T., Scherf U., Eisen M.B., Perou C.M., Rees C., Spellman P., Iyer V., Jeffrey S.S., de Rijn M.V., Waltham M., Pergamenschikov A., Lee J.C., Lashkari D., Shalon D., Myers T.G., Weinstein J.N., Botstein D., Brown P.O. (2000) Systematic variation in gene expression patterns in human cancer cell lines. Nature Genetics, 24(3):227-235
- Singh D., Febbo P.G., Ross K., Jackson D.G., Manola J., Ladd C., Tamayo P., Renshaw A.A., D’Amico A.V., Richie J.P., Lander E.S., Loda M., Kantoff P.W., Golub T.R., and Sellers W.R. (2002) Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 1:203-209
- Strobl C., Boulesteix A.-L., Zeileis A. and Hothorn T. (2007) Bias in random forest variable importance measures: illustrations, sources and a solution. BMC Bioinformatics, 8:25
- Strobl C., Boulesteix A.-L., Kneib T., Augustin T. and Zeileis A. (2008) Conditional variable importance for Random Forests. BMC Bioinformatics, 9:307
- van’t Veer L.J., Dai H., van de Vijver M.J., He Y.D., Hart A.A.M., Mao M., Peterse H.L., van der Kooy K., Marton M.J., Witteveen A.T., Schreiber G.J., Kerkhoven R.M., Roberts C., Linsley P.S., Bernards R., Friend S.H. (2002) Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415:530-536
- Weston J., Elisseff A., Schoelkopf B., and Tipping M. (2003) Use of the zero norm with linear models and kernel methods. Journal of Machine Learning Research, 3:1439-1461