TreeGrad: Transferring Tree Ensembles to Neural Networks

Chapman Siu
Faculty of Engineering and Information Technology
University of Technology Sydney, Australia
chpmn.siu@gmail.com
Abstract

Gradient Boosting Decision Trees (GBDT) are popular machine learning algorithms with implementations such as LightGBM and in popular machine learning toolkits like Scikit-Learn. Many implementations can only produce trees in an offline and greedy manner. We explore ways to convert existing GBDT implementations to known neural network architectures with minimal performance loss, in order to allow decision splits to be updated in an online manner, and provide extensions that allow split points to be altered as a neural architecture search problem. We provide learning bounds for our neural network.

1 Introduction

Gradient boosting decision tree (GBDT) Friedman2001 is a widely-used machine learning algorithm, and has achieved state-of-the-art performance in many machine learning tasks. The recent rise of deep learning architectures opens the possibility of updating all parameters simultaneously with gradient descent rather than through greedy splitting procedures; furthermore, such architectures promise to scale with mini-batch based learning and GPU acceleration with little effort.

In this paper, we present a neural network architecture, which we call TreeGrad, based on Deep Neural Decision Forests dndf , which enables boosted decision trees to be trained in an online manner, both in the updating of decision split values and in the choice of split candidates. We demonstrate that TreeGrad achieves the learning bounds previously described by cortes17a and demonstrate the efficacy of the TreeGrad approach by presenting competitive benchmarks against leading GBDT implementations.

1.1 Related Work

Deep Neural Decision Forests dndf demonstrates how neural decision tree forests can replace a fully connected layer using stochastic and differentiable decision trees, which assume the node split structure is fixed while the node split values are learned.

TreeGrad is a simple extension of Deep Neural Decision Forests which treats the node split structure as a neural network architecture search problem, whilst using neural network compression approaches to render our decision trees more interpretable by creating axis-parallel splits.

2 Learning Decision Stumps using Automatic Differentiation

Consider a binary classification problem with input and output spaces given by $\mathcal{X}$ and $\mathcal{Y}$, respectively. A decision tree is a tree-structured classifier consisting of decision nodes and prediction (or leaf) nodes. A decision stump is a machine learning model which consists of a single decision (or split) node and the prediction (or leaf) nodes corresponding to the split; the decision node determines how each sample is routed along the tree. A decision stump consists of a decision function $d(x; \theta)$, parameterized by $\theta$, which is responsible for routing the sample to the subsequent nodes.

In this paper we will consider only decision functions which are binary. Typically, in decision tree and tree ensemble models the routing is deterministic; in this paper we approximate deterministic routing through the use of the Concrete distribution (or the Gumbel-Softmax trick) gumbel_softmax1 gumbel_softmax2 , whereby the routing direction is the output of a Concrete distribution. That is, we consider a softmax map indexed by a temperature parameter $\tau > 0$, given by

$\sigma_\tau(z)_i = \frac{\exp(z_i / \tau)}{\sum_j \exp(z_j / \tau)},$

applied to Gumbel-perturbed logits. This approach differs from how decision functions are composed in Deep Neural Decision Forests, which use Bernoulli random variables and probabilistic routing for their decision functions dndf . Once a sample reaches a leaf node $\ell$, the related tree prediction is given by the leaf value $\pi_\ell$, which represents the output of a binary classification problem. In this case, as the routings are not purely deterministic, the leaf predictions will be weighted by the probability of reaching the leaf. The prediction from the decision stump is then parametrized as

$\mathbb{P}[y \mid x, \theta, \pi] = d(x; \theta)\,\pi_{+} + (1 - d(x; \theta))\,\pi_{-},$

where $y \in \{0, 1\}$, representing our binary classification problem, and $\pi = (\pi_{+}, \pi_{-})$, representing the set of leaves corresponding to the binary class predictions.
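As an illustrative sketch (not the reference implementation), the following Python snippet shows how a single differentiable stump could route a sample and produce a probability-weighted prediction; the helper names, the temperature value and the use of the logit pair $(s, -s)$ are assumptions for this example.

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.1, rng=np.random.default_rng(0)):
    """Sample a relaxed one-hot routing vector from the Concrete distribution."""
    u = rng.uniform(1e-10, 1 - 1e-10, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    z = (logits + gumbel) / temperature
    z = z - z.max()                      # subtract max for numerical stability
    return np.exp(z) / np.exp(z).sum()   # softmax over the candidate routes

def stump_predict(x, w, b, leaf_values, temperature=0.1):
    """Probability-weighted prediction of a single differentiable decision stump.

    The split is an oblique linear split s = w.x + b; the two logits (s, -s)
    feed the Gumbel-Softmax, giving soft routing probabilities for the
    positive and negative leaves.
    """
    s = np.dot(w, x) + b
    route = gumbel_softmax(np.array([s, -s]), temperature)
    return np.dot(route, leaf_values)    # weighted average of the two leaf values

# toy usage: 3 features, leaf values hold class-1 probabilities
x = np.array([0.2, 1.5, -0.3])
print(stump_predict(x, w=np.array([0.5, -1.0, 2.0]), b=0.1,
                    leaf_values=np.array([0.9, 0.1])))
```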

2.1 Decision Stumps

In many implementations of decision trees, the decision node is determined using an axis-parallel split, whereby the split is determined by a comparison with a single value murthy1994system . In this paper we are interested in both axis-parallel and oblique splits for the decision function. More specifically, we are interested in the creation of axis-parallel splits from oblique splits.

To create an oblique split, we assume that the decision function is a linear classifier, i.e. $d(x; \theta) = \sigma(w^\top x + b)$, where $d$ is parameterized by the linear coefficients $w$ and the intercept $b$, and $\sigma$ belongs to the class of logistic functions, which includes sigmoid and softmax variants. In a similar way an axis-parallel split is created through a linear function $d(x; \theta) = \sigma(w^\top x + b)$, with the additional constraint that the $L_0$ norm of $w$ (defined as $\|w\|_0 = \sum_i \mathbb{1}[w_i \neq 0]$, where $\mathbb{1}$ is the indicator function) is 1, i.e. $\|w\|_0 = 1$.
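The distinction can be made concrete with a small sketch, assuming a sigmoid link and hypothetical function names; the only difference between the two split types here is the $L_0$ constraint on the coefficient vector.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def oblique_split(x, w, b):
    """Oblique split: the decision depends on a full linear combination of features."""
    return sigmoid(np.dot(w, x) + b)

def axis_parallel_split(x, w, b):
    """Axis-parallel split: same form, but the L0 norm of w must equal 1,
    so the decision reduces to a threshold on a single feature."""
    assert np.count_nonzero(w) == 1, "axis-parallel splits keep exactly one coefficient"
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.2, 1.5, -0.3])
print(oblique_split(x, np.array([0.5, -1.0, 2.0]), 0.1))
print(axis_parallel_split(x, np.array([0.0, -1.0, 0.0]), 0.4))  # threshold on feature 2
```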

2.1.1 Learning Decision Stumps as a Model Selection Problem

If we interpret the decision function to be a model selection process, then model selection approaches can be used to determine the ideal model, and hence the axis-parallel split, for the decision function. A simple approach is to use a stacking model. Stacking models are ensemble models of the form $f(x) = \sum_{k=1}^{K} w_k f_k(x)$, for a set of real weights $w_1, \ldots, w_K$ Wolpert1992 Breiman1996 .

From this formulation, we can either choose the best model and create an axis-parallel split, or keep the stacking model, which results in an oblique split. This demonstrates the ability of our algorithm to convert oblique splits to axis-parallel splits for our decision stumps in an automatically differentiable way, which allows non-greedy decision trees to be created. In the scenario that the best model is preferred, approaches like straight-through Gumbel-Softmax gumbel_softmax1 can be applied: for the forward pass, we sample a one-hot vector using the Gumbel-Max trick, while for the backward pass, we use Gumbel-Softmax to compute the gradient. This approach is analogous to neural network compression algorithms which aim to aggressively prune parameters at a particular threshold, whereby the threshold is chosen to ensure that each decision boundary contains only one single parameter, with all other parameters set to $0$.
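A minimal sketch of the straight-through selection step, assuming plain NumPy (so the backward behaviour is only indicated in comments) and hypothetical candidate logits for one node:

```python
import numpy as np

def st_gumbel_softmax(logits, temperature=0.5, rng=np.random.default_rng(0)):
    """Straight-through Gumbel-Softmax: hard one-hot forward pass, soft backward pass.

    Plain NumPy has no autodiff, so the backward trick is only indicated here:
    autodiff frameworks implement it as hard + (soft - stop_gradient(soft)).
    """
    u = rng.uniform(1e-10, 1 - 1e-10, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    z = (logits + gumbel) / temperature
    soft = np.exp(z - z.max())
    soft = soft / soft.sum()
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0          # one-hot selection of a single candidate split
    return hard, soft                    # gradients would flow through `soft`

# stacking weights over 3 candidate axis-parallel splits for one node
hard, soft = st_gumbel_softmax(np.array([0.2, 1.3, -0.5]))
print(hard, soft)                        # hard picks one candidate; soft keeps the mixture
```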

3 Decision Trees

Extending decision nodes to decision trees has been discussed by dndf . We denote the output of node $i$ as $d_i(x; \theta_i)$, which is then routed along a pre-determined path to the subsequent nodes. When the sample reaches a leaf node $\ell$, the tree prediction is given by a learned value of the leaf node. In some implementations this is the raw number of observations seen in each class (Scikit-Learn); in other implementations it is the estimated log-odds (LightGBM, XGBoost). As the routings are probabilistic in nature, the leaf predictions can be averaged by the probability of reaching the leaf, as done in dndf , or through the usage of the Gumbel straight-through trick gumbel_softmax1 .

To provide an explicit form for routing within a decision tree, we observe that routes in a decision tree are fixed and pre-determined. We introduce a routing matrix $Q$, which is a binary matrix that describes the relationship between the nodes and the leaves. If there are $n$ nodes and $\ell$ leaves, then $Q \in \{0, 1\}^{\ell \times 2n}$, where row $Q_\ell$ indicates which binary decisions of the nodes are present for the corresponding leaf $\ell$.

We define the matrix containing the routing probabilities of all nodes to be $D(x; \Theta)$. We construct this so that, for each node $i$, we concatenate each decision stump's route probabilities, $D(x; \Theta) = \bigoplus_i [d_i^{+}(x; \theta_i), d_i^{-}(x; \theta_i)]$, where $\oplus$ is the concatenation operation and $d_i^{+}, d_i^{-}$ indicate the probability of moving to the positive route and negative route of node $i$ respectively. We can now combine the matrices $Q$ and $D(x; \Theta)$ to express the probability of routing to leaf $\ell$ as follows:

$\mu_\ell(x; \Theta) = \prod_{j} D(x; \Theta)_j^{\,Q_{\ell j}},$

where $Q_\ell$ represents the binary vector for leaf $\ell$. This is interpreted as product pooling over the nodes used to route to a particular leaf $\ell$. Accordingly, the final prediction for sample $x$ from the tree with decision nodes parameterized by $\Theta$ is given by

$\mathbb{P}_T[y \mid x, \Theta, \pi] = \sum_{\ell} \pi_\ell \, \mu_\ell(x; \Theta),$

where $\pi$ represents the parameters denoting the leaf node values, and $\mu_\ell$ is the routing function which provides the probability that the sample will reach leaf $\ell$, i.e. $\sum_\ell \mu_\ell(x; \Theta) = 1$ for all $x \in \mathcal{X}$. The matrix $Q$ is the routing matrix which describes which nodes are used for each leaf in the tree.
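The routing matrix and product pooling can be illustrated with a small sketch for a hypothetical depth-2 tree (3 internal nodes, 4 leaves); the interleaved column ordering of $Q$ and the toy routing probabilities are assumptions for this example.

```python
import numpy as np

# Depth-2 tree: node 0 at the root, node 1 = left child, node 2 = right child.
# Columns of Q alternate (positive route, negative route) per node: an assumed layout.
Q = np.array([
    [1, 0, 1, 0, 0, 0],   # leaf 0: node0+, node1+
    [1, 0, 0, 1, 0, 0],   # leaf 1: node0+, node1-
    [0, 1, 0, 0, 1, 0],   # leaf 2: node0-, node2+
    [0, 1, 0, 0, 0, 1],   # leaf 3: node0-, node2-
])

def leaf_probabilities(node_probs, Q):
    """Product-pool the routing probabilities selected by each row of Q.

    node_probs is the concatenated vector [d0+, d0-, d1+, d1-, d2+, d2-].
    Raising to the power Q_ij keeps entries where Q_ij = 1 and turns the
    rest into 1, so each row product is exactly mu_leaf.
    """
    return np.prod(node_probs[None, :] ** Q, axis=1)

d = np.array([0.9, 0.1, 0.3, 0.7, 0.8, 0.2])   # toy routing probabilities
mu = leaf_probabilities(d, Q)
print(mu, mu.sum())                            # leaf probabilities sum to 1
```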

Figure 1: Left: Iris decision tree produced by Scikit-Learn; Right: corresponding parameters for our neural network. For the sample input shown, using the Gumbel-Softmax as the activation function on our node layer, the corresponding node outputs, leaf routing probabilities and predictions correctly output class 2, in line with the decision tree shown above.
Figure 2: Decision tree as a two-layer neural network. The neural network has two trainable layers: the decision tree nodes and the leaf nodes. The node layer has a Gumbel-Softmax activation function; its outputs are combined with the concatenation operator, then combined through linear operations with the routing matrix, with product pooling applied. The final prediction is then computed through a dot product with the leaf nodes.

3.1 Decision Trees as a Neural Network

Next we demonstrate that a decision tree is a neural network whose layers belong to the family of artificial neural networks defined by cortes17a . The sizes of these layers are based on a predetermined number of nodes $n$ with a corresponding number of leaves $\ell$. Let the input space be $\mathcal{X} \subseteq \mathbb{R}^m$ and, for any $x \in \mathcal{X}$, let $x$ denote the corresponding $m$-dimensional feature vector.

The first layer is the decision node layer. This is defined by trainable parameters $\Theta = \{(w_i, b_i)\}_{i=1}^{n}$, with $w_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}$. Define $d_i^{+}(x)$ and $d_i^{-}(x) = 1 - d_i^{+}(x)$ as the outputs of the Gumbel-Softmax activation at node $i$, which represent the positive and negative routes of each node. Then the output of the first layer is $h_1(x) = D(x; \Theta) \in [0, 1]^{2n}$.

The next layer is the probability routing layer, whose weights are all untrainable and are given by the predetermined binary routing matrix $Q$ as defined in Section 3. We define the activation function to be $\psi(z) = \exp(z)$. Then the output of the second layer is $h_2(x) = \psi(Q \log h_1(x))$. As $\exp(z)$ is a 1-Lipschitz bounded function on the domain $(-\infty, 0]$, and the range of $\log h_1(x)$ lies in $(-\infty, 0]$, then by extension $\psi$ is a 1-Lipschitz bounded function for this layer. As $Q$ is a binary matrix, the output of $h_2$ must also be in the range $[0, 1]$.

The final output layer is the leaf layer; this is fully connected to the previous layer and defined by the parameter $\pi \in \mathbb{R}^{\ell}$, where $\ell$ represents the number of leaves. The activation function is defined to be the sigmoid (or softmax) function $\sigma$. The output of the last layer is then defined to be $h_3(x) = \sigma(\pi^\top h_2(x))$. Since $h_2(x)$ has range $[0, 1]$, $h_3$ is a 1-Lipschitz bounded function, as $\sigma$ is 1-Lipschitz bounded on this domain.

This formulation is equivalent to the above formulation, as the product pooling operator satisfies $\prod_j z_j = \exp\big(\sum_j \log z_j\big)$. As each activation function is a 1-Lipschitz function, our decision tree neural network belongs to the same family of artificial neural networks defined by cortes17a , and thus our decision trees have the corresponding learning bounds related to AdaNet.

The number of trainable parameters in our decision tree implementation, with $n$ nodes, $\ell$ leaves and $m$ input features, is $n(m + 1) + \ell$ for oblique splits. If we include stacking weights in our model, the number of parameters increases by the number of stacking weights per node; if we alter the network to axis-parallel splits, the parameters reduce to $2n + \ell$. More importantly, if our decision tree implementation is sparsified, then the number of trainable parameters does not depend on the number of features in the first feature layer; instead it depends only on the number of nodes in the model.

3.2 Discussion

Our implementation of decision trees is straightforward and can be implemented using auto-differentiation frameworks in as few as ten lines of code. Our approach has been implemented using Autograd as a starting point and in theory can be moved to a GPU-enabled framework.
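For illustration, a minimal sketch of the forward pass in the spirit of such an Autograd implementation is given below, assuming the $\exp(Q \log D)$ formulation above, a deterministic two-class relaxation (Gumbel noise omitted), an interleaved column layout for $Q$ and illustrative function names; it is not the reference implementation.

```python
import autograd.numpy as np   # thinly wrapped NumPy that supports reverse-mode autodiff
from autograd import grad

def tree_forward(params, x, Q, temperature=0.1):
    """Forward pass of one decision tree as a two-layer network (deterministic relaxation).

    params = (W, b, pi): node weights W (n x m), intercepts b (n,), leaf values pi (l,).
    Q is the fixed binary routing matrix of shape (l, 2n), with columns interleaved
    as [node1+, node1-, node2+, node2-, ...].
    """
    W, b, pi = params
    s = np.dot(W, x) + b                                   # node scores
    d_plus = 1.0 / (1.0 + np.exp(-2.0 * s / temperature))  # two-class softmax == sigmoid(2s/T)
    D = np.reshape(np.concatenate([d_plus[:, None], (1.0 - d_plus)[:, None]], axis=1), (-1,))
    mu = np.exp(np.dot(Q, np.log(D + 1e-12)))              # product pooling via exp(Q log D)
    return np.dot(mu, pi)                                  # leaf layer: dot product with pi

loss = lambda params, x, Q, y: (tree_forward(params, x, Q) - y) ** 2
dloss = grad(loss)                                         # gradients w.r.t. all tree parameters

# toy usage with the depth-2 routing matrix from the earlier sketch (3 nodes, 4 leaves)
Q = np.array([[1, 0, 1, 0, 0, 0], [1, 0, 0, 1, 0, 0],
              [0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 1]], dtype=float)
params = (np.zeros((3, 4)), np.zeros(3), np.array([0.1, 0.9, 0.2, 0.8]))
print(tree_forward(params, np.ones(4), Q), dloss(params, np.ones(4), Q, 1.0)[2])
```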

One method to move seamlessly between oblique splits and axis-parallel splits is to introduce the Gumbel trick to the model; one could choose to keep the pruned parameters in the model rather than removing them. The inability to grow or prune nodes is a deficiency of our implementation compared with off-the-shelf decision tree models, which can do this readily. Growing or pruning decision trees would be an architecture selection problem and not necessarily a problem related to the training of weights.

3.3 Extensions to Boosting

The natural extension of building decision trees is boosting decision trees. To that end, the AdaNet algorithm cortes17a can be used to combine and boost multiple decision trees; another approach is to train models in an offline manner, in the same way that boosting algorithms are usually implemented when the models cannot be updated online.
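A sketch of the offline route, for squared loss: each differentiable tree is fitted to the residuals (negative gradient) of the current ensemble, in the usual gradient-boosting fashion. Here `fit_tree` is a hypothetical stand-in for training one TreeGrad tree, and the dummy usage is purely illustrative.

```python
import numpy as np

def boost_trees(X, y, fit_tree, n_trees=10, learning_rate=0.1):
    """Offline boosting of differentiable trees (squared-loss sketch).

    `fit_tree(X, residuals)` is a hypothetical routine that trains one
    TreeGrad tree on the current residuals and returns a predict function.
    """
    prediction = np.zeros(len(y))
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction            # negative gradient of squared loss
        tree_predict = fit_tree(X, residuals)
        trees.append(tree_predict)
        prediction += learning_rate * tree_predict(X)
    return trees, prediction

# dummy stand-in: a "tree" that always predicts the mean residual
dummy_fit = lambda X, r: (lambda Xq: np.full(len(Xq), r.mean()))
trees, pred = boost_trees(np.zeros((5, 2)), np.array([1., 2., 3., 4., 5.]), dummy_fit)
print(pred)
```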

4 Experiments

Our experiments explore three different components used to grow boosted trees as neural networks: stumps, trees and boosted trees. We examine these components over a range of benchmark datasets to demonstrate the level of agreement with competing implementations in Scikit-Learn and LightGBM.

We perform experiments on a combination of benchmark classification datasets from the UCI repository to compare our non-greedy decision tree ensemble using neural networks (TreeGrad) against other popular implementations in LightGBM (LGM) and Scikit-Learn Gradient Boosted Trees (GBT).

Our TreeGrad is based on a two-stage process: first, constructing a tree where the decision boundaries are oblique; next, sparsifying the neural network to axis-parallel boundaries and fine-tuning the decision tree.

In each application of TreeGrad, our models copy only the structure of the decision trees; we always reset the subsequent weights.

4.1 Decision Trees

We consider the usage of an $L_0$ regularizer combined with an $L_2$ regularizer in the manner described by louizos2017learning . We found that pre-emptively sparsifying neural networks using this regularizer enabled minimal loss in performance after the neural networks were compressed to produce axis-parallel splits. All trees were grown with the same hyperparameters, including the same maximum number of leaves. The base architecture chosen for the TreeGrad algorithm was determined by LightGBM, where the results shown below are obtained when all weights are re-initialised to random values.
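A minimal sketch of the hard-concrete gate penalty of louizos2017learning combined with an $L_2$ term is given below for illustration; the constants $\gamma$, $\zeta$, $\beta$ follow that paper, while `lam0`, `lam2` and the per-weight gate parameters are illustrative assumptions rather than the settings used in our experiments.

```python
import numpy as np

# Hard-concrete gates from Louizos et al. (2017); constants follow that paper.
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_gates(log_alpha, rng=np.random.default_rng(0)):
    """Training-time sample of stochastic gates z in [0, 1], one per weight."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def expected_l0(log_alpha):
    """Expected number of active gates, the differentiable L0 penalty."""
    return np.sum(1.0 / (1.0 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA)))))

def penalty(weights, log_alpha, lam0=1e-2, lam2=1e-4):
    """Combined penalty sketch: expected L0 plus an L2 term on the node weights."""
    return lam0 * expected_l0(log_alpha) + lam2 * np.sum(weights ** 2)

w = np.array([0.5, -1.0, 2.0])
log_alpha = np.array([-2.0, 0.0, 3.0])       # per-weight gate parameters
print(sample_gates(log_alpha) * w, penalty(w, log_alpha))
```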

Kendall’s Tau
Dataset Light GBM Scikit-Learn
adult -0.10 -0.33
covtype 0.22 0.33
dna 0.12 0.13
glass 0.34 0.07
madelon 0.54 0.59
soybean 0.08 0.21
yeast 0.47 -0.28
Table 1: Kendall's Tau between TreeGrad and LGM/GBT feature importances (split metric): larger values mean 'more similar'.
Dataset TreeGrad LGM GBT
adult 0.797 0.765 0.759
covtype 0.644 0.731 0.703
dna 0.850 0.541 0.891
glass 0.688 0.422 0.594
madelon 0.789 0.752 0.766
soybean 0.662 0.583 0.892
yeast 0.553 0.364 0.517
Number of wins 4 1 2
Mean Reciprocal Rank 0.762 0.452 0.619
Table 2: Accuracy Performance of TreeGrad against LGM and GBT for single Decision Tree (test dataset)

The models in which TreeGrad had low agreement on the feature importance metrics were the models in which TreeGrad performed best. This suggests that TreeGrad was able to find combinations of features and their interactions that were not recovered when the decision trees were grown in a greedy fashion. It would appear from these results that TreeGrad may have an advantage when training a single tree where there are constraints on the number of leaves or the depth.

4.2 Boosted Trees

To compare performance for boosted trees, we use 100 trees grown using Scikit-Learn and LightGBM, all with a maximum of 32 leaves.

As in the other experiments, we use the $L_0$ and $L_2$ regularizers to sparsify the node layer first, before applying regularization to the rest of the network. TreeGrad trees in this scenario all have the same structure, with the weights randomly initialised.

Kendall’s Tau
Dataset Light GBM Scikit-Learn
adult 0.48 0.47
covtype 0.45 0.44
dna -0.06 0.28
glass 0.17 0.11
madelon 0.05 0.07
soybean 0.05 0.05
yeast 0.47 0.6
Table 3: Kendall's Tau between TreeGrad and LGM/GBT feature importances (split metric) for an ensemble of 100 boosted trees: larger values mean 'more similar'.
Dataset TreeGrad (Sequential) LGM GBT
adult 0.860 0.873 0.874
covtype 0.832 0.835 0.826
dna 0.950 0.949 0.946
glass 0.766 0.813 0.719
madelon 0.882 0.881 0.866
soybean 0.936 0.936 0.917
yeast 0.591 0.573 0.542
Number of wins 4 3 1
Mean Reciprocal Rank 0.762 0.714 0.429
Table 4: Accuracy Performance of TreeGrad against LGM and GBT for Boosted Decision Trees (test dataset)

Again, we observe that the datasets where the feature importances have low correlation are those where TreeGrad outperforms its counterparts, which suggests that the non-greedy approach can find different relationships from its greedy counterparts. The overall results are much less clear cut on this sample of datasets, though TreeGrad does appear to be superior to the GBT models.

4.3 Training Neural Network Decision Tree Sequentially versus End-to-End

In the previous section, TreeGrad was trained in a sequential manner, not in an end-to-end fashion. If we wish to train the neural network in an end-to-end fashion, it will incur a greater computational cost, as all parameters need to be updated simultaneously rather than only part of the network at a time. We repeat the experiment with identical networks: one trained end-to-end and the other trained sequentially only.
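The two schedules can be sketched as follows; `grad_step_single` and `grad_step_all` are hypothetical stand-ins for a gradient update on one tree's parameters and on all trees' parameters respectively.

```python
def train_sequential(trees, grad_step_single, n_rounds=10):
    """Sequential schedule: update one tree at a time while the others stay fixed."""
    for _ in range(n_rounds):
        for k in range(len(trees)):
            trees[k] = grad_step_single(trees, k)   # gradients computed for tree k only
    return trees

def train_end_to_end(trees, grad_step_all, n_rounds=10):
    """End-to-end schedule: update the parameters of every tree simultaneously."""
    for _ in range(n_rounds):
        trees = grad_step_all(trees)                # one larger gradient step over all trees
    return trees
```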

Dataset TreeGrad (Sequential) TreeGrad (End to End)
adult 0.860 0.857
covtype 0.832 0.844
dna 0.950 0.951
glass 0.766 0.766
madelon 0.882 0.868
yeast 0.591 0.557
Table 5: Accuracy Performance of TreeGrad Sequential and with End to End Tuning (test dataset)

When we compare the results, we observe some difference in performance between training all trees concurrently and training them sequentially, though these differences would only make minor changes to the mean reciprocal rank when comparing with the LGM and GBT models.

5 Conclusion

We have demonstrated approaches to unify boosted tree models and neural networks, allowing tree models to be transferred to neural network structures. We have provided an approach to rebuild trees and decision splits in a non-greedy manner in the scenario where weights are reset, and provided learning bounds for this approach. This approach is demonstrated to be competitive with current tree ensemble algorithms, and empirically better than popular frameworks such as Scikit-learn.

References

  • [1] Leo Breiman. Bagging Predictors. Machine Learning, 24(2):123–140, 1996.
  • [2] Corinna Cortes, Xavier Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. AdaNet: Adaptive structural learning of artificial neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 874–883, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
  • [3] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189–1232, 2001.
  • [4] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. In International Conference on Learning Representations, 2017.
  • [5] Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, and Samuel Rota Bulò. Deep neural decision forests. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 4190–4194, 2016.
  • [6] Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through l0 regularization. arXiv preprint arXiv:1712.01312, 2017.
  • [7] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations, 2017.
  • [8] Sreerama K Murthy, Simon Kasif, and Steven Salzberg. A system for induction of oblique decision trees. Journal of artificial intelligence research, 2:1–32, 1994.
  • [9] David H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.