Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines

Abstract

A method for the local and global interpretation of a black-box model is proposed on the basis of the well-known generalized additive models. It can be viewed as an extension or a modification of the algorithm using the neural additive model. The method is based on an ensemble of gradient boosting machines (GBMs) such that each GBM is learned on a single feature and produces a shape function of that feature. The ensemble is composed as a weighted sum of separate GBMs, resulting in a weighted sum of shape functions which forms the generalized additive model. The GBMs are built in parallel using randomized decision trees of depth 1, which provide a very simple architecture. Weights of the GBMs, and hence of the features, are computed in each iteration of boosting by using the Lasso method and are then updated by means of a specific smoothing procedure. In contrast to the neural additive model, the method provides the feature weights in explicit form, and it is simpler to train. Numerous numerical experiments with an algorithm implementing the proposed method on synthetic and real datasets demonstrate its efficiency and its properties for local and global interpretation.

Keywords: interpretable model, XAI, gradient boosting machine, decision tree, ensemble model, Lasso method.

1 Introduction

Machine learning models play an important role in making decisions and inferring predictions in various applications. However, many machine learning models act as black boxes: they cannot easily explain their predictions or decisions in a way that humans could understand. This fact conflicts with the requirement of understanding the results provided by the models. For example, a doctor has to have an explanation of a stated diagnosis and has to understand why a machine learning model provides that diagnosis in order to choose a preferable treatment [26]; the doctor cannot effectively use predictions provided by the models without their interpretation and explanation. To cope with the problem of interpreting black-box models, a lot of interpretable models have been developed to explain predictions of deep classification and regression algorithms, for example, deep neural network predictions [4, 24, 38, 52].

Interpretation methods can be divided into two groups: local and global methods. Local methods try to explain a black-box model locally around a test example, whereas global methods derive interpretations on the whole dataset or a part of it. Despite the importance of and interest in global interpretation methods, most applications aim to understand decisions concerning a particular object, i.e., to answer the question of which features of the analyzed object are responsible for a black-box model prediction. Nevertheless, we consider both groups of interpretation methods, covering local as well as global interpretation.

One of the most popular post-hoc approaches to interpretation is to approximate a black-box model by a linear model. The well-known methods SHAP (SHapley Additive exPlanations) [36, 49] and LIME (Local Interpretable Model-agnostic Explanations) [43] are based on building a linear model around the instance to be explained. The coefficients of the linear model are interpreted as the feature importances. Linear regression for the regression problem or logistic regression for the classification problem allows us to construct the corresponding linear models.

However, the simple linear model cannot adequately approximate a black-box model in some cases. Therefore, its generalization in the form of Generalized Additive Models (GAMs) [25] is used. GAMs are a class of linear models in which the outcome is a linear combination of functions of the features. They aim to provide better flexibility for approximating the black-box model and to determine the feature importance by analyzing how each feature affects the predicted output through its corresponding function [34, 55]. GAMs can be written as follows:

$$g(\mathbb{E}[y]) = w_0 + w_1 f_1(x_1) + \dots + w_m f_m(x_m), \qquad (1)$$

where $\mathbf{x} = (x_1, \dots, x_m)$ is the feature vector; $y$ is the target variable; $g$ is a link function; $f_i$ is a univariate shape function with $i = 1, \dots, m$; $\mathbf{w} = (w_0, \dots, w_m)$ is the vector of coefficients.

The feature contribution to the black-box model output can be understood by looking at the shape functions [40, 10].
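As an illustration, a GAM of the form (with the identity link) can be sketched in a few lines. The shape functions, weights, and intercept below are hypothetical, chosen only to show how a prediction decomposes into per-feature contributions:

```python
import math

# Hypothetical shape functions f_i and coefficients w_i for a 3-feature GAM.
shape_functions = [
    lambda x: 2.0 * x,       # f_1: linear effect
    lambda x: math.sin(x),   # f_2: periodic effect
    lambda x: x * x,         # f_3: quadratic effect
]
weights = [0.5, 1.0, 0.25]   # w_1, w_2, w_3
bias = 1.0                   # intercept w_0

def gam_predict(x):
    """GAM prediction: w_0 + sum_i w_i * f_i(x_i) (identity link)."""
    return bias + sum(w * f(xi) for w, f, xi in zip(weights, shape_functions, x))

def contributions(x):
    """Per-feature contributions w_i * f_i(x_i); these make the model interpretable."""
    return [w * f(xi) for w, f, xi in zip(weights, shape_functions, x)]
```

Looking at `contributions(x)` for a given example shows exactly how much each feature adds to the prediction, which is the interpretability property discussed above.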

One of the interpretation methods using GAMs is the Explainable Boosting Machine (EBM) proposed by Nori et al. [40]. According to the EBM, shape functions are gradient-boosted ensembles of bagged trees, with each tree operating on a single variable. Another interesting class of models, called Neural Additive Models (NAMs), was proposed by Agarwal et al. [2]. NAMs learn a linear combination of neural networks such that a single feature is fed to each network, which implements the corresponding shape function. The networks are trained jointly. The impact of every feature on the prediction is determined by its corresponding shape function.

Similar approaches using neural networks for constructing GAMs and implementing shape functions, called GAMI-Net and Adaptive Explainable Neural Networks (AxNNs), were proposed by Yang et al. [54] and Chen et al. [11], respectively.

One of the popular models for explaining a black-box model by means of GAMs is the well-known gradient boosting machine (GBM) [19, 20]. GBMs have demonstrated their efficiency in solving regression problems. The idea behind using them as interpretation models is that all features are sequentially considered in each iteration of boosting in order to learn the shape function of every feature. For example, it is shown by Lou et al. [34] that all shape functions are set to zero to initialize the algorithm. Then, for each feature, residuals are calculated, a one-dimensional function is learned on the residuals, and it is added to that feature's shape function. This procedure is performed over a predefined number of iterations. It should be noted that regression trees are often used to predict the residuals. They are built in the GBM such that each successive tree predicts the residuals of the preceding trees given an arbitrary differentiable loss function [46].
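The cyclic per-feature fitting procedure described above can be sketched as follows. This is a minimal illustration rather than the implementation of [34]; the best-split stump, the learning rate, and the round count are simplifying assumptions:

```python
def fit_stump(xs, rs):
    """Best-split depth-1 tree (stump) on a single feature; assumes at least
    two distinct feature values."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def fit_gam_boosting(X, y, n_rounds=20, lr=0.5):
    """All shape functions start at zero; in each round, for every feature,
    a stump is fitted to the current residuals and added to that feature's
    shape function (cyclic, per-feature gradient boosting)."""
    n, m = len(X), len(X[0])
    stumps = [[] for _ in range(m)]
    pred = [0.0] * n
    for _ in range(n_rounds):
        for i in range(m):
            res = [y[j] - pred[j] for j in range(n)]        # current residuals
            s = fit_stump([X[j][i] for j in range(n)], res)
            stumps[i].append(s)
            for j in range(n):
                pred[j] += lr * s(X[j][i])
    def shape(i, x):
        """Shape function of feature i: the shrunken sum of its stumps."""
        return lr * sum(s(x) for s in stumps[i])
    return shape, pred
```

The returned `shape(i, x)` is exactly the learned one-dimensional function of feature `i`, which can be plotted to inspect that feature's effect.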

Another modification of the GBM for interpretation is based on the boosting package XGBoost [12]. Chang et al. [10] propose limiting the tree depth to 1 in order to avoid feature interactions and thus to convert XGBoost into a GAM.

In order to extend the set of interpretation methods using GAMs, we propose another method based on applying GBMs with partially randomized decision trees [32]. In these trees, in contrast to the extremely randomized trees proposed by Geurts et al. [23], the cut-point for partitioning each feature is determined randomly from the uniform distribution, but the best feature is selected such that it maximizes the score. The proposed method is based on a weighted sum of separate GBMs such that each GBM depends on a single feature.

In contrast to the NAM [2], the proposed model calculates the coefficients of the GAM in explicit form because the GBMs are built in parallel rather than sequentially. Moreover, the problem of the long training of neural networks is partially solved because the proposed algorithm optimally fits the coefficients in each iteration of gradient boosting by using the Lasso method and a cross-validation procedure. When the weights stop changing, the training process can be stopped, unlike in the EBM, where the stopping criterion is an accuracy measure. In addition, if there exist correlations between features, the proposed model detects them: in this case, the weights continue to grow and do not reach a plateau.

In contrast to the models proposed in [8], the proposed method builds many decision trees for every feature. Moreover, all GBMs are trained simultaneously and dependently. First, the coefficients of the GAM play the role of an adaptive learning rate [32], which allows us to reduce overfitting. Second, if the prediction is a non-linear function, for example, in the case of classification, then the derivatives of the loss function for each GBM depend on the predictions of the other GBMs. It is important to note that the GBMs in the proposed method can be used jointly with neural networks, when one part of the features is processed by trees and another part by networks.

A lot of numerical experiments with an algorithm implementing the proposed method on synthetic and real datasets demonstrate its efficiency and properties for local and global interpretation.

The code of the proposed algorithm can be found at https://github.com/andruekonst/egbm-gam.

The paper is organized as follows. Related work is discussed in Section 2. An introduction to the GBM and brief introductions to LIME and the NAM are provided in Section 3 (Background). A detailed description of the proposed interpretation method and an algorithm implementing it are provided in Section 4. Numerical experiments with synthetic and real data using the global interpretation are given in Section 5. Numerical experiments with the local interpretation are given in Section 6. Concluding remarks can be found in Section 7.

2 Related work

Local and global interpretation methods. Due to the importance of machine learning model interpretation in many applications, a lot of methods have been proposed to explain black-box models locally. One of the first local interpretation methods is Local Interpretable Model-agnostic Explanations (LIME) [43], which uses simple and easily understandable linear models to approximate the predictions of black-box models locally. Following the original LIME [43], a lot of its modifications have been developed, owing to the nice and simple idea underlying the method: constructing a linear approximating model in a local area around a test example. Examples of these modifications are ALIME [47], Anchor LIME [44], LIME-Aleph [42], GraphLIME [27], and SurvLIME [33]. Another explanation method based on linear approximation is SHAP [36, 49], which takes a game-theoretic approach, optimizing a regression loss function based on Shapley values. A comprehensive analysis of LIME, including the study of its applicability to different data types, for example, text and image data, is provided by Garreau and Luxburg [21]. The same analysis for tabular data is proposed by Garreau and Luxburg [22]. An interesting information-theoretic justification of interpretation methods on the basis of the concept of explainable empirical risk minimization is proposed by Jung [29].

An important group of interpretation methods is based on perturbation techniques [17, 18, 41, 51]. The basic idea behind perturbation techniques is that the contribution of a feature can be determined by measuring how the prediction score changes when the feature is altered [14]. Perturbation techniques can be applied to a black-box model without any need to access the internal structure of the model. However, the corresponding methods are computationally expensive when samples are of high dimensionality.

Global interpretation is a more difficult task, and there are only a few papers devoted to solving global interpretation tasks [6, 28, 30, 35, 37, 53]. We also have to point out models which can be used for both local and global interpretation [2, 8, 10, 11, 34, 40, 54].

A lot of interpretation methods, their analysis, and critical review can be found in survey papers [1, 3, 4, 9, 13, 24, 45, 52].

Gradient boosting machines with randomized decision trees. GBMs are an efficient tool for solving regression and classification problems. Moreover, it is pointed out by Lundberg et al. [35] that GBMs with decision trees as base models can be more accurate than neural networks and more interpretable than linear models. Taking into account this advantage of decision trees in GBMs, a modification of the GBM on the basis of deep forests [56], called the multi-layered gradient boosting decision tree model, is proposed by Feng et al. [16]. Another modification is the soft GBM [15]. It turns out that randomized decision trees [23] may significantly improve GBMs with decision trees. As a result, several models have been implemented by using different modifications of randomized decision trees [31]. These methods can be successfully applied to local and global interpretation problems.

3 Background

3.1 A brief introduction to the GBM for regression

The standard regression problem can be stated as follows. Given $n$ training examples $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, in which $\mathbf{x}_i$ belongs to a set $\mathcal{X} \subset \mathbb{R}^m$ and represents a feature vector involving $m$ features, and $y_i \in \mathbb{R}$ represents the observed output or the target value such that $y_i = f(\mathbf{x}_i) + \varepsilon$. Here $\varepsilon$ is the random noise with zero expectation and unknown finite variance. Machine learning aims to construct a regression model or an approximation $\hat{f}$ of the function $f$ that minimizes the expected risk or the expected loss function

$$L(\hat{f}) = \mathbb{E}_{(\mathbf{x}, y) \sim P}\; \ell(y, \hat{f}(\mathbf{x})) \qquad (2)$$

with respect to the function parameters. Here $P$ is a joint probability distribution of $\mathbf{x}$ and $y$; the loss function $\ell(\cdot, \cdot)$ may be represented, for example, as follows:

$$\ell(y, \hat{f}(\mathbf{x})) = \left(y - \hat{f}(\mathbf{x})\right)^2. \qquad (3)$$

There are many powerful machine learning methods for solving the regression problem, including regression random forests [5, 7], support vector regression [48], etc. One such powerful method is the GBM [20], which is briefly considered below.

GBMs iteratively improve the predictions of $y$ from $\mathbf{x}$ with respect to the loss function $\ell$ by adding new weak or base learners that improve upon the previous ones, forming an additive ensemble model of size $T$:

$$\hat{f}_T(\mathbf{x}) = \sum_{t=0}^{T} \gamma_t h_t(\mathbf{x}), \qquad (4)$$

where $t$ is the iteration index; $h_t$ is the $t$-th base model, for example, a decision tree; $\gamma_t$ is the coefficient or the weight of the $t$-th base model.

The algorithm aims to minimize the loss function $\ell$, for example, the squared-error $L_2$-loss, by iteratively computing the gradient in accordance with the standard gradient descent method. If decision trees are used as the base models, then a single decision tree is constructed in each iteration to fit the negative gradients. The function $h_t$ can be defined by parameters $\theta_t$, i.e., $h_t(\mathbf{x}) = h(\mathbf{x}, \theta_t)$. However, we will not use the parametric representation of the function.

The gradient boosting algorithm can be represented by means of Algorithm 1.

0:  Training set $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$; the number of GBM iterations $T$
0:  Predicted function $\hat{f}_T$
1:  Initialize the zero base model $h_0(\mathbf{x})$, for example, with a constant value.
2:  for $t = 1, \dots, T$ do
3:     Calculate the residual $r_i^{(t)}$ as the partial derivative of the expected loss function at every point of the training set, $i = 1, \dots, n$ (the negative gradient):
$$r_i^{(t)} = -\left.\frac{\partial \ell(y_i, z)}{\partial z}\right|_{z = \hat{f}_{t-1}(\mathbf{x}_i)} \qquad (5)$$
4:     Build a new base model (a regression tree) $h_t(\mathbf{x})$ on the dataset $\{(\mathbf{x}_i, r_i^{(t)})\}_{i=1}^{n}$
5:     Find the best gradient descent step-size $\gamma_t$:
$$\gamma_t = \arg\min_{\gamma} \sum_{i=1}^{n} \ell\!\left(y_i, \hat{f}_{t-1}(\mathbf{x}_i) + \gamma h_t(\mathbf{x}_i)\right) \qquad (6)$$
6:     Update the whole model $\hat{f}_t(\mathbf{x}) = \hat{f}_{t-1}(\mathbf{x}) + \gamma_t h_t(\mathbf{x})$;
7:  end for
8:  Output
$$\hat{f}_T(\mathbf{x}) = \sum_{t=0}^{T} \gamma_t h_t(\mathbf{x}) \qquad (7)$$
Algorithm 1 The GBM algorithm

The above algorithm minimizes the expected loss function by using decision trees as base models. Its hyperparameters include the tree depth, the learning rate, and the number of iterations. They are selected to provide high generalization and accuracy depending on a specific task.

The gradient boosting algorithm is a powerful and efficient tool for solving regression problems, which can cope with complex non-linear function dependencies [39].
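A minimal sketch of Algorithm 1 for the L2 loss may clarify the roles of the negative gradient in (5) and the step-size search in (6). The best-split stump learner and the exact line search below are illustrative choices, not a definitive implementation:

```python
def fit_stump(xs, rs):
    """Best-split depth-1 tree (stump) on one feature; assumes at least
    two distinct feature values."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gbm_fit(xs, ys, n_iters=30):
    """Algorithm 1 for the L2 loss: start from a constant model, fit a stump
    h_t to the negative gradient (here the plain residuals), and choose the
    step size gamma_t by exact line search."""
    base = sum(ys) / len(ys)                       # f_0: constant model
    model = [(None, base)]                         # (stump, gamma) pairs
    pred = [base] * len(ys)
    for _ in range(n_iters):
        resid = [y - p for y, p in zip(ys, pred)]  # negative gradient of (y - z)^2 / 2
        h = fit_stump(xs, resid)
        hv = [h(x) for x in xs]
        denom = sum(v * v for v in hv)
        if denom == 0.0:                           # residuals already fitted exactly
            break
        # exact line search: argmin_gamma sum (resid - gamma * h)^2
        gamma = sum(r * v for r, v in zip(resid, hv)) / denom
        model.append((h, gamma))
        pred = [p + gamma * v for p, v in zip(pred, hv)]
    return model, pred
```

For the squared-error loss the line search has the closed form shown above; for other differentiable losses, step 5 of Algorithm 1 would require a one-dimensional numerical minimization instead.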

3.2 Neural Additive Models

The proposed interpretation method can be regarded as a modification of the NAM because it adopts some ideas behind the NAM, namely, the separate training on single features and the summing of the shape functions. Therefore, we consider the NAM in detail. According to [2], a general scheme of the NAM is shown in Fig. 1. The separate networks, each with a single feature as input, are trained jointly using backpropagation. They can realize arbitrary shape functions because there are no restrictions on these networks. However, a difficulty of learning the neural networks may be the possibly small amount of training data in the case of global interpretation. In this case, the networks may be overfitted. On the one hand, to cope with the overfitting problem, the separate networks can be made as small as possible to reduce the number of their training parameters. On the other hand, simple neural networks may have difficulties in realizing the corresponding shape functions.

Figure 1: A scheme of the NAM given in [2]

3.3 LIME

It is important to point out that methods based on GAMs and NAMs [2, 8, 10, 11, 34, 40, 54], as well as the proposed method, can to some extent be regarded as modifications of LIME for local interpretation. In the case of local interpretation, these methods have to use the perturbation technique and to minimize a loss function which measures how close the interpretation is to the prediction. The same elements are used in LIME [43]. Therefore, we briefly consider the original LIME.

LIME aims to approximate a black-box model, denoted as $f$, with a simple interpretable function $g$ in the vicinity of the point of interest $\mathbf{x}$, whose prediction by means of $f$ has to be interpreted, under the condition that the approximation function $g$ belongs to a set of linear functions $G$. According to LIME, a new dataset consisting of perturbed examples around the point $\mathbf{x}$ is generated to construct the function $g$, and predictions corresponding to the perturbed examples are obtained by means of the black-box model, i.e., by computing $f(\mathbf{z})$ for many points $\mathbf{z}$. Weights $\pi_{\mathbf{x}}(\mathbf{z})$ are assigned to the new examples in accordance with their proximity to the point of interest by using a distance metric, for example, the Euclidean distance.

An interpretation (local surrogate) model is trained on the newly generated samples by solving the following optimization problem:

$$\arg\min_{g \in G}\; L(f, g, \pi_{\mathbf{x}}) + \Omega(g). \qquad (8)$$

Here $L$ is a loss function, for example, the mean squared error, which measures how close the model $g$ is to the prediction of the black-box model $f$; $\Omega(g)$ is the regularization term.

The obtained local linear model interprets the prediction by analyzing its coefficients. It should be noted that LIME can also apply GAMs as an extension of the set of approximating functions. However, the shape functions of the features have to be known in order to construct the optimization problem.
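The LIME procedure above can be sketched for a single feature. The black-box function, kernel form, and sampling width below are hypothetical choices made for illustration; a weighted least-squares line plays the role of the surrogate:

```python
import random, math

def black_box(x):
    """Hypothetical black-box model to be explained."""
    return x * x

def lime_1d(x0, n_samples=500, width=0.5, kernel_width=0.5, seed=0):
    """Sketch of LIME for one feature: perturb around x0, query the black
    box, weight the samples by proximity (a Gaussian kernel), and fit a
    weighted linear surrogate g(x) = a * x + b by weighted least squares."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-width, width) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    a = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    b = my - a * mx
    return a, b

# The local slope of x^2 near x0 = 2 should be close to 4.
a, b = lime_1d(2.0)
```

The recovered slope `a` is the coefficient that LIME would report as the local importance of the feature at the point of interest.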

4 The interpretation method and the algorithm for regression

The basic idea behind the proposed interpretation method is to realize a weighted sum of GBMs such that each GBM depends on a single feature. Moreover, weights of features obtained by means of the proposed algorithm can be used for interpretation because their absolute values can be viewed as impacts of the features.

The proposed interpretation method can serve for local interpretation, when a test example is interpreted and a new dataset consisting of perturbed samples in a local area around the test example is generated, as well as for global interpretation, when the interpretation model is built on the entire dataset. Plots of the corresponding shape functions of the features describe the model behavior. From the implementation point of view, the difference between the local and global models lies in the dataset used for training the proposed interpretation model. In particular, in the local case the outcomes of the black-box model are its predictions for examples generated in a local area around the test example. In the case of global interpretation, the outcomes of the black-box model are its predictions for examples from the initial dataset.

The algorithm implementing the proposed interpretation method for regression is based on the iterative use of a GBM ensemble consisting of parallel GBMs such that each GBM deals with a separate feature. The corresponding model is composed as a weighted sum of separate GBMs such that each GBM depends on a single feature. Partially randomized decision trees [32] are used for implementing each GBM. The cut-point for partitioning each feature in these trees is randomly selected from the uniform distribution, but the optimal feature is selected to maximize the score. The tree depth is limited to 1. It is important to point out that arbitrary trees can be used for implementing the GBMs. However, various experiments show that using trees of depth 1 significantly reduces the risk of overfitting. Moreover, the training time is significantly reduced when the tree depth is 1. It is a very interesting and unexpected observation that trees of depth 1 with the random splitting of each feature provide better results than deep trees. This observation again illustrates the strength of the GBM as a machine learning model.
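A partially randomized stump of depth 1, and a single-feature GBM built from such stumps, can be sketched as follows. This is an illustration of the idea rather than the implementation of [32]; the learning rate and iteration count are arbitrary assumptions:

```python
import random

def partially_randomized_stump(xs, residuals, rng):
    """Depth-1 tree on one feature whose cut-point is drawn uniformly from
    the feature's range (random, not optimized); assumes at least two
    distinct feature values."""
    lo, hi = min(xs), max(xs)
    while True:
        t = rng.uniform(lo, hi)
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if left and right:
            break
    lm, rm = sum(left) / len(left), sum(right) / len(right)
    return lambda x: lm if x <= t else rm

def gbm_single_feature(xs, ys, n_iters=200, lr=0.1, seed=0):
    """Single-feature GBM (L2 loss) built from partially randomized stumps;
    the shrunken sum of stump outputs approximates the feature's shape
    function."""
    rng = random.Random(seed)
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(n_iters):
        resid = [y - p for y, p in zip(ys, pred)]  # current residuals
        s = partially_randomized_stump(xs, resid, rng)
        stumps.append(s)
        pred = [p + lr * s(x) for p, x in zip(pred, xs)]
    return stumps, pred
```

Even though each random stump is a weak learner, the boosted sum of many of them fits a smooth one-dimensional function well, which illustrates the observation about depth-1 trees made above.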

Initially, each GBM computes the function $g_i^{(k)}$ of the $i$-th feature in the $k$-th iteration of the whole algorithm by using residuals $r_j^{(k)}$, $j = 1, \dots, n$, obtained for all training examples, and the feature values $x_j^{(i)}$, $j = 1, \dots, n$, as input data. Here $w_i$ is the weight of the $i$-th feature, and the upper index $k$ shows the iteration number of the ensemble. It is assumed that all weights are initially equal to 1, i.e., $w_i^{(0)} = 1$ for $i = 1, \dots, m$. In other words, the input data of the $i$-th GBM is the set of pairs $(x_j^{(i)}, r_j^{(k)})$, $j = 1, \dots, n$. If the residuals are initialized as $r_j^{(0)} = y_j$, then the iteration with $k = 1$ starts with the training set $(x_j^{(i)}, y_j)$, $j = 1, \dots, n$. One should distinguish the residuals obtained inside each GBM (see (5) in Algorithm 1) from the residuals obtained in the ensemble of GBMs. Moreover, in order to avoid misunderstanding, we will consider every GBM based on decision trees and processing a separate feature as a minimal element without detailing its internals.

A general scheme of the algorithm implementing the proposed interpretation method is shown in Fig. 2. Let us denote the $n$-dimensional vector of values of the function $g_i$ computed by the $i$-th GBM for all examples from the dataset as $\mathbf{g}_i$, and the matrix of these vectors as $G = (\mathbf{g}_1, \dots, \mathbf{g}_m)$. The importance of features can be estimated by computing the weights $w_i$, $i = 1, \dots, m$, of every shape function or every vector $\mathbf{g}_i$. In order to assign weights to the vectors $\mathbf{g}_i$, $i = 1, \dots, m$, the Lasso method [50] is used. It should be noted that other methods for assigning the weights could be used, for example, ridge regression or elastic net regression. However, our numerical experiments show that the Lasso method significantly reduces the time (and the number of ensemble iterations) needed for convergence of the weights. Denote the weights computed by the Lasso method at iteration $k$ as $v_i^{(k)}$. In order to smooth the weight changes in each iteration, we propose to update the weights by means of the following rule:

$$w_i^{(k)} = \alpha\, w_i^{(k-1)} + (1 - \alpha)\, v_i^{(k)}. \qquad (9)$$

Here $\alpha \in [0, 1]$ is the smoothing parameter. In particular, if $\alpha = 0$, then the “old” weights obtained in the previous iteration are not used, and the weights are totally determined by the current iteration. This case, however, may not lead to a better convergence of the weights due to their possible large deviations.
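The smoothing step can be written in a few lines. The exact form of the update below is our reading of the rule described above and should be treated as an assumption:

```python
def smooth_weights(prev_w, lasso_w, alpha):
    """Exponentially smooth the Lasso weights between ensemble iterations:
    w_new = alpha * w_old + (1 - alpha) * w_lasso.
    (An assumed reading of the smoothing rule: alpha = 0 discards the
    previous weights entirely, larger alpha damps the weight changes.)"""
    return [alpha * wo + (1 - alpha) * wl for wo, wl in zip(prev_w, lasso_w)]
```

With a moderate `alpha`, large jumps in the Lasso solution between iterations are damped, which is the stabilizing effect the text describes.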

Having the updated weights, new target values $\hat{y}_j^{(k)}$, $j = 1, \dots, n$, are computed by multiplying the matrix $G$ by the vector $\mathbf{w}^{(k)}$. The values $\hat{y}_j^{(k)}$ can be viewed as the current predictions of the model after $k$ iterations. Hence, the residual can be computed as the partial derivative (the negative gradient) of the expected loss function at every point of the training set, $j = 1, \dots, n$,

$$r_j^{(k+1)} = -\left.\frac{\partial \ell(y_j, z)}{\partial z}\right|_{z = \hat{y}_j^{(k)}}. \qquad (10)$$

In contrast to the standard GBM, we have to obtain the residual for every example as well as for every feature because every feature is separately processed by the corresponding GBM. It should be noted that, for the squared-error loss $\ell(y, z) = (y - z)^2 / 2$, there holds

$$-\left.\frac{\partial \ell(y_j, z)}{\partial z}\right|_{z = \hat{y}_j^{(k)}} = y_j - \hat{y}_j^{(k)}. \qquad (11)$$

Hence, we get

$$r_j^{(k+1)} = y_j - \hat{y}_j^{(k)}. \qquad (12)$$

The above implies that the training set for learning the $i$-th GBM at the next iteration is represented by the pairs $(x_j^{(i)}, r_j^{(k+1)})$, $j = 1, \dots, n$. The same residuals are used by all GBMs corresponding to features with indices $i = 1, \dots, m$.

One of the problems is that the obtained weights may be of different scales. In order to overcome this problem and to correct the weights, the coefficients are multiplied by the standard deviation of the corresponding feature computed using all points from the dataset. The obtained corrected weights will be viewed as the feature importances. The same procedure was performed by Chen et al. [11].
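The correction step can be sketched directly. The use of the population standard deviation below is an assumption; a sample standard deviation would serve equally well:

```python
import math

def corrected_importance(weights, X):
    """Scale the absolute weight of each feature by the standard deviation
    of that feature over the dataset, so that importances are comparable
    across features measured on different scales."""
    n, m = len(X), len(X[0])
    importances = []
    for i in range(m):
        col = [row[i] for row in X]
        mean = sum(col) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in col) / n)
        importances.append(abs(weights[i]) * std)
    return importances
```

For instance, a feature measured on a five-times larger scale receives a five-times smaller raw weight from the linear fit, and the correction restores equal importances.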

Several rules for stopping the algorithm can be proposed. The algorithm can stop iterating when the weights do not change or change insignificantly within some relative deviation. However, this rule may lead to an enormous number of iterations when the weights are not stabilized due to overfitting problems. Therefore, we use a predefined number of iterations, which can be tuned in accordance with the weight changes.

Algorithm 2 can be viewed as a formal scheme for computing the weights and shape functions.

0:  Training set $S = \{(\mathbf{x}_j, y_j)\}_{j=1}^{n}$; point of interest $\mathbf{x}$; the number of generated training points $N$; the smoothing parameter $\alpha$; the black-box model $f$ to be explained; the number of iterations $T$
0:  Vector of weights $\mathbf{w}$; shape functions $g_i$ for every feature
1:  Initialize weights $w_i^{(0)} = 1$, $i = 1, \dots, m$
2:  Initialize residuals $r_j^{(0)} = y_j$, $j = 1, \dots, n$
3:  Standardize $\mathbf{x}_j$, $j = 1, \dots, n$
4:  Generate $N$ random nearest points in a local area around $\mathbf{x}$; point $\mathbf{x}_j$ is the $j$-th point
5:  for $k = 1, \dots, T$ do
6:     for $i = 1, \dots, m$ do
7:        Learn the GBM with partially randomized decision trees of depth 1 on the dataset $(x_j^{(i)}, r_j^{(k-1)})$, $j = 1, \dots, n$
8:        Predict the vector of shape function values $\mathbf{g}_i^{(k)}$
9:     end for
10:     Compute weights $v_i^{(k)}$ by using the Lasso method trained on the matrix $G^{(k)}$ as the training feature vectors and on the residuals as the target values
11:     Update the weights $w_i^{(k)}$, $i = 1, \dots, m$, by means of (9)
12:     Compute $r_j^{(k)}$ and $\hat{y}_j^{(k)}$ by using (10) and (12), respectively, $j = 1, \dots, n$
13:  end for
14:  Correct the weights $w_i^{(T)}$, $i = 1, \dots, m$, by using the standard deviations
15:  $w_i = w_i^{(T)}$; $g_i = g_i^{(T)}$, $i = 1, \dots, m$
Algorithm 2 The interpretation algorithm for regression

In fact, we have $m$ gradient boosting algorithms inside the “global” boosting algorithm, which computes the weights $w_i$ and the shape functions $g_i$, $i = 1, \dots, m$, for every feature. An outline of this “global” boosting algorithm is shown in Fig. 3. It can conditionally be represented as a series of ensembles of gradient boosting machines (EGBMs), where every EGBM is the algorithm given in Fig. 2. Each EGBM uses for training in the $k$-th iteration the initial dataset as well as the matrix of the shape function values and the weights computed in the previous iteration.

Figure 2: A scheme of the proposed interpretation algorithm (the EGBM)
Figure 3: An outline scheme of the “global” boosting algorithm

The number of iterations $T$ is chosen to be large enough. It will be illustrated below by means of numerical examples that a lack of weight stabilization means that there is a strong correlation between features, which leads to overfitting of the GBMs.

The interpretation algorithm for classification is similar to the algorithm studied above. However, in contrast to the regression problem, we have the softmax function and the cross-entropy loss for computing the residuals. Moreover, it also makes sense in classification to pre-train each GBM to predict the target value by performing several iterations. The pre-training iterations aim to separate points by a hyperplane at the beginning of the training process; otherwise, all predictions may coincide.
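For the binary case, the residual computation mentioned above can be sketched as follows; the sigmoid, the two-class special case of the softmax, is used for concreteness:

```python
import math

def sigmoid(z):
    """Logistic function, the two-class special case of the softmax."""
    return 1.0 / (1.0 + math.exp(-z))

def classification_residuals(y, scores):
    """Negative gradient of the binary cross-entropy loss with respect to
    the additive score f(x), for labels y in {0, 1}: r = y - sigmoid(f(x)).
    (In the multi-class case, the sigmoid is replaced by the softmax.)"""
    return [yi - sigmoid(s) for yi, s in zip(y, scores)]
```

Because the sigmoid couples the residual to the total score, the residuals fed to each single-feature GBM depend on the predictions of all the other GBMs, as noted in the introduction.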

5 Numerical experiments using the global interpretation

5.1 Numerical experiments with synthetic data

In order to study the proposed interpretation method, we consider several numerical examples for which training examples are randomly generated.

Linear regression function

Let us suppose that the studied regression function is known, and it is

(13)

There are features whose impacts should be estimated. We generate points of $\mathbf{x}$ from the normal distribution with given expectation and standard deviation. It is assumed that the outputs of the black-box model correspond to these points. We expect to get the same relationship between the weights obtained by means of the explanation model and the coefficients of the above linear regression function. Fig. 4 illustrates how the weights change as the number of iterations increases. One can see that the weights tend to converge. This is a very important property, which means that there is no overfitting of the explanation model. Moreover, one can see from Fig. 4 that the weights totally correspond to the coefficients of the linear regression. In particular, the weights of features 5, 6, 7 tend to 0, and the weights of features 1, 2, 3, 4 tend to the corresponding coefficients of (13). Fig. 5 shows the behavior of every feature, where the large points (the wide band) are the generated points of the dataset. They can be regarded as true values. The narrow band in the middle of the wide band corresponds to the predictions of a single GBM in accordance with the feature which is fed to this GBM. The x-axis corresponds to the feature values, the y-axis to the predictions of a single GBM.

The obtained weights are given before their correction; after the correction, we get the feature importance values. It can be seen from the numerical results that the relationships between the interpreted weights and between the coefficients of the regression function are quite the same. This implies that the proposed method provides correct interpretation results for the linear function.

Figure 4: Weights of features as functions of the iteration number for the linear regression
Figure 5: Predictions of single GBMs as functions of the corresponding feature values

Non-linear regression function

Let us return to the previous numerical example, but with the seventh term in (13) replaced by a non-linear one. We again generate points with the same parameters of the normal distribution for $\mathbf{x}$.

The obtained weights are presented before their correction; after the correction, we get the feature importance values. It can be seen from the numerical results that the seventh feature (its shape function) has the largest weight. Moreover, the relationships between the interpreted weights and between the coefficients of the regression function are quite the same. This implies that the proposed method provides correct interpretation results. It is interesting to point out that a linear interpretation model used, for example, in LIME, could not detect the non-linearity in many cases. Fig. 6 shows how the weights change as the number of iterations increases. We again observe the convergence of the weights to the values given above. Fig. 7 is similar to Fig. 5, and we clearly observe how the seventh feature impacts the predictions.

Figure 6: Weights of features as functions of the iteration number for the non-linear regression
Figure 7: Predictions of single GBMs as functions of the corresponding feature values

The chess board

So far, we have considered interpretation of regression models. The next numerical example illustrates the binary classification task. Training examples having two labels and consisting of two features are generated such that the corresponding points depict a small chess board, as shown in Fig. 8. Black and white points belong to different classes. This is a very interesting example which illustrates the inability of the proposed algorithm to correctly classify the given dataset due to overfitting of the GBMs. Indeed, Fig. 9 shows that the weights of the features do not converge and are not stabilized as the iteration number increases. The overfitting problem arises because there is a strong correlation between the features. The same can be seen in Fig. 10, where the outputs of the black-box model are depicted in the form of two lines corresponding to the two class labels. The outcomes of the proposed algorithm are irregularly located because the algorithm suffers from overfitting. It is also interesting to note that the points are scattered such that the sum of the GBM outcomes over all features provides correct values. It should also be pointed out that the standard GBM with decision trees of depth 1 cannot solve this problem either. This implies that the proposed parallel architecture of tree training is better in comparison with the serial architecture. Fig. 11 illustrates the predicted values of the trained explanation model.

The weights of the features are computed both before and after their correction. In this numerical example, the weights do not have a specific meaning, but one can see that the correction method itself provides accurate results because both features are identical and receive equal importance values.

Figure 8: Generated points from two classes forming the chess board
Figure 9: Weights of features as functions of the iteration number for the chess board example
Figure 10: Predictions of GBMs as functions of the corresponding feature values
Figure 11: Predictions of the overfitted model

The polynomial regression with pairwise interactions

The next numerical example uses the following regression function for interpretation:

(14)

It contains pairwise interactions of features. Fig. 12 shows how the weights change as the number of iterations increases. The stabilization of the weights is observed in Fig. 12. This is an interesting fact because many features interact. In spite of this interaction, the proposed algorithm copes with the problem and provides stabilized weights. Fig. 13 shows how the predictions of each GBM depend on the values of the corresponding feature (see the similar Fig. 5 explained above). One can see from Fig. 13 that the first feature significantly impacts the prediction because the narrow band rises significantly with this feature.

Figure 12: Weights of features as functions of the iteration number for the polynomial regression
Figure 13: Predictions of single GBMs as functions of the corresponding feature values

The computed weights of the features before their correction are . The weights after correction are . The weights imply that the first feature has the highest importance. Indeed, it appears in the polynomial twice.
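The weighting scheme behind these experiments can be sketched in a simplified, one-shot form. This is an illustrative re-implementation, not the authors' code: each depth-1 GBM is fitted to a single feature, and the Lasso then weights the per-feature predictions (the stand-in target function and the Lasso penalty below are assumed; the paper re-runs the Lasso inside each boosting iteration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(300, 4))
y = 2.0 * X[:, 0] + X[:, 1] ** 2  # a stand-in regression function

# One GBM of depth-1 trees per feature: each learns a shape function.
shape_preds = np.empty_like(X)
for j in range(X.shape[1]):
    gbm = GradientBoostingRegressor(max_depth=1, n_estimators=100, random_state=0)
    gbm.fit(X[:, [j]], y)
    shape_preds[:, j] = gbm.predict(X[:, [j]])

# Feature weights via the Lasso over the per-feature shape-function outputs
# (alpha is an assumed value).
coef = Lasso(alpha=0.01).fit(shape_preds, y).coef_
weights = np.abs(coef) / np.abs(coef).sum()
```

Because the stand-in target ignores the last two features, their shape functions carry little signal and the Lasso drives their weights toward zero.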

5.2 Numerical experiments with real data

Boston Housing dataset

Let us consider the real dataset called the Boston Housing dataset. It can be obtained from the StatLib archive (http://lib.stat.cmu.edu/datasets/boston). The Boston Housing dataset consists of 506 examples, each described by 13 features.

Figure 14: Importance of all features for the Boston Housing dataset in the case of the global interpretation
Figure 15: Weights of the most important features as functions of the iteration number for the Boston Housing dataset and the global interpretation

The importance values of all features obtained by means of the proposed algorithm are shown in Fig. 14. It can be seen from Fig. 14 that the features CRIM, RM, B, and LSTAT have the highest importance values. Fig. 15 shows how the weights change as the number of iterations increases. Only the four features with the highest importance are shown in Fig. 15 in order to avoid a confusing mixture of many curves.

Figure 16: Shape functions of four important features learned on the Boston Housing dataset (the x-axis indicates values of each feature, the y-axis indicates the feature contribution) for the global interpretation

We also plot individual shape functions only for the four important features: RM, LSTAT, CRIM, and B. They are depicted in Fig. 16. Contributions of the features (y-axis) are shifted to 0 and scaled so that all plots share the same interval from 0 to 1. The shape plot for the most important feature RM (the average number of rooms per dwelling) shows that the contribution of RM rises significantly with the average number of rooms. It is interesting to note that the contribution decreases when the number of rooms is small. The shape plot for the second important feature LSTAT (% lower status of the population) shows that its contribution tends to decrease. A small increase of the contribution for large values of LSTAT may be caused by overfitting. The features CRIM and B have significantly smaller weights. This can be seen from the plots in Fig. 16, where the changes of their contributions are small in comparison with RM and LSTAT.
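The shifting and scaling of the contributions mentioned above is a plain min-max normalization; a minimal sketch (the helper name is ours):

```python
import numpy as np

def normalize_contribution(contrib):
    """Shift a shape function's outputs so the minimum is 0, then scale to [0, 1]."""
    c = np.asarray(contrib, dtype=float)
    c = c - c.min()
    span = c.max()
    return c / span if span > 0 else c

print(normalize_contribution([2.0, 3.0, 4.0]))  # → [0.  0.5 1. ]
```

Normalizing every shape function to the same interval makes the plots directly comparable: a nearly flat curve then visually signals a low-impact feature.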

Breast Cancer dataset

The next real dataset is the Breast Cancer Wisconsin (Diagnostic) dataset. It can be found in the well-known UCI Machine Learning Repository (https://archive.ics.uci.edu). The Breast Cancer dataset contains 569 examples, each described by 30 features. For the breast cancer diagnosis, the malignant and benign cases are assigned classes 0 and 1, respectively. We consider the corresponding model in the framework of regression with outcomes in the form of probabilities from 0 (malignant) to 1 (benign).
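The same dataset also ships with scikit-learn, where the 0/1 encoding already matches the malignant/benign convention used here; a minimal loading sketch:

```python
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
# In scikit-learn's encoding, 0 = malignant and 1 = benign,
# so y can be used directly as the regression target.
print(X.shape)  # → (569, 30)
```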

The importance values of the features obtained by using the proposed algorithm are depicted in Fig. 17. It can be seen from Fig. 17 that the features “worst texture”, “worst perimeter”, “worst concave points”, and “worst smoothness” are of the highest importance. Fig. 18 shows how the weights change as the number of iterations increases. We again consider only the four important features in Fig. 18.

Figure 17: Importance of all features for the Breast Cancer dataset
Figure 18: Weights of the most important features as functions of the iteration number for the Breast Cancer dataset

Individual shape functions are plotted for the four important features: “worst texture”, “worst perimeter”, “worst concave points”, and “worst smoothness”. Fig. 19 illustrates them. The shape plot for the “worst perimeter” shows that the probability of benign drops as the worst perimeter increases and rises again for values of the worst perimeter above 140. The shape plot for the second important feature “worst concave points” shows that the probability of benign decreases with decreasing worst concave points. The features “worst texture” and “worst smoothness” have a significantly smaller impact on the target probability.

Figure 19: Shape functions of four important features learned on the Breast Cancer dataset (the x-axis indicates values of each feature, the y-axis indicates the feature contribution) for the global interpretation

6 Numerical experiments with the local interpretation

So far, we have studied the global interpretation, where we tried to interpret all examples in the training sets. Let us now consider numerical examples with the local interpretation.

6.1 Numerical experiments with synthetic data

First, we return to the chess board example and investigate a boundary between two checkers. A neural network is used as a black-box regression model for interpretation. The network is trained using the quadratic loss function. The black and white checkers are labelled by 1 and 0, respectively. The network consists of 3 layers of 100 units each with the ReLU activation function and is trained with the Adam optimizer. Perturbed points are generated around a point of interest from the normal distribution with a fixed standard deviation. The perturbed points are depicted in Fig. 20 (the right picture). Predictions of the neural network learned on the generated “chess board” training set are depicted in Fig. 20 (the left picture).
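Generating the perturbed points around the point of interest is a one-liner with NumPy. The point, the number of samples, and the standard deviation below are assumed values, since the originals are not restated here:

```python
import numpy as np

rng = np.random.default_rng(0)
x_star = np.array([0.45, 0.55])  # hypothetical point near a checker boundary
n_perturb, sigma = 500, 0.05     # assumed sample size and standard deviation

# Gaussian perturbations centered at the point of interest
points = x_star + rng.normal(loc=0.0, scale=sigma, size=(n_perturb, 2))
```

The black-box model is then queried at these points, and the explanation model is fitted to the resulting (point, prediction) pairs.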

Figure 20: Predictions of the neural network learned on the generated “chess board” training set (left picture) and perturbed points generated around the point of interest (right picture)

Fig. 21 depicts the shape function of the second feature for the chess board example. It follows from Fig. 21 that the probability of the class corresponding to the black checker decreases as the second feature increases. This is also seen in Fig. 20, where the generated points “move” from the black checker to the white checker.

Figure 21: The shape function of the second feature for the chess board example (the x-axis indicates values of the second feature, the y-axis indicates the probability of the black or white checkers)

6.2 Numerical experiments with real data

Boston Housing dataset

Let us consider the Boston Housing dataset for the local interpretation. A point of interest is randomly selected by means of the following procedure. First, two points are randomly selected from the dataset. Then the point for interpretation is determined as the midpoint between them. A neural network consisting of 3 layers of 100 units each with the ReLU activation function and the Adam optimizer is used for training on the Boston Housing dataset. The learning rate and the number of epochs are fixed. Perturbed points are generated around the point of interest from a uniform distribution with fixed bounds. All features are standardized before training the neural network and interpreting the results.
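The selection and perturbation procedure described above can be sketched as follows; the feature matrix, the number of perturbed points, and the half-width of the uniform bounds are assumed stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(506, 13))  # stand-in for the standardized Boston features

# Point of interest: midpoint of two randomly selected training examples
i, j = rng.choice(len(X), size=2, replace=False)
x_star = (X[i] + X[j]) / 2.0

# Uniform perturbations around the point of interest (half-width is assumed)
delta = 0.3
points = x_star + rng.uniform(-delta, delta, size=(1000, X.shape[1]))
```

Taking the midpoint of two random examples tends to place the point of interest inside the data cloud rather than at an observed example, which makes the local explanation less trivial.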

The importance values of all features obtained by means of the proposed method are shown in Fig. 22. It follows from Fig. 22 that the features NOX, AGE, DIS, and RAD have the highest importance. They differ from the important features obtained in the case of the global interpretation (see Fig. 14). Fig. 23 shows how the weights change as the number of iterations increases. Only the four important features are shown in Fig. 23.

Figure 22: Importance of features for the Boston Housing dataset by considering the local interpretation
Figure 23: Weights of the most important features as functions of the iteration number for the Boston Housing dataset and the local interpretation
Figure 24: Shape functions of four important features learned on the Boston Housing dataset (the x-axis indicates values of each feature, the y-axis indicates the feature contribution) for the local interpretation

Shape functions are depicted for the four important features NOX, AGE, DIS, and RAD in Fig. 24. In contrast to the global case (see Fig. 16), these shape functions are computed only over the perturbed points around the point of interest, so their contributions characterize the local behavior of the black-box model.

Breast Cancer dataset and the regression black-box model

To illustrate the local interpretation, we again use the Breast Cancer Wisconsin (Diagnostic) dataset. For the breast cancer diagnosis, the malignant and benign cases are assigned labels 0 and 1, respectively. We consider the corresponding model in the framework of regression with outcomes in the form of probabilities from 0 (malignant) to 1 (benign), i.e., we use a regression black-box model based on the SVM with the RBF kernel. The kernel parameter and the penalty parameter of the SVM are fixed. The point of interest is determined by means of the same procedure as for the Boston Housing dataset. The perturbation procedure also coincides with that used for the Boston Housing dataset. All features are standardized before training the SVM and interpreting the results.
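A regression black box of this kind can be built with scikit-learn's `SVR`; since the kernel and penalty parameters are not restated here, the `gamma` and `C` values below are assumed:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = load_breast_cancer(return_X_y=True)  # y: 0 = malignant, 1 = benign
X = StandardScaler().fit_transform(X)       # standardize features, as in the text

# RBF-kernel SVM regressor trained on the 0/1 labels; its outputs play the
# role of benign "probabilities" (gamma and C are assumed values).
black_box = SVR(kernel="rbf", gamma=0.03, C=1.0).fit(X, y)
pred = black_box.predict(X)
```

Note that `SVR` outputs are not constrained to [0, 1]; they merely approximate the 0/1 labels, which is consistent with treating the task as regression.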

The importance values of the features obtained by using the proposed algorithm are depicted in Fig. 25. It can be seen from Fig. 25 that the features “worst symmetry”, “mean concave points”, “worst concavity”, and “worst concave points” are of the highest importance. They mainly differ from the important features obtained for the global interpretation. Fig. 26 shows how the weights change as the number of iterations increases for the above important features.

Figure 25: Importance of all features for the Breast Cancer dataset in the case of the local interpretation using the regression black-box model
Figure 26: Weights of the most important features as functions of the iteration number for the Breast Cancer dataset and the local interpretation using the regression black-box model

For these important features, individual shape functions are plotted in Fig. 27. The shape plot for the “worst concave points” shows that the probability of benign drops with increase of the worst concave points. The shape plot for the second important feature “worst concavity” also shows that the probability of benign significantly decreases with increase of the worst concavity. Features “worst symmetry” and “mean concave points” have significantly smaller impacts on the target probability.

Figure 27: Shape functions of four important features learned on the Breast Cancer dataset (the x-axis indicates values of each feature, the y-axis indicates the feature contribution) for the local interpretation using the regression black-box model

Breast Cancer dataset and the classification black-box model

We consider the Breast Cancer Wisconsin (Diagnostic) dataset and the corresponding model in the framework of classification with the same outcomes in the form of probabilities from 0 (malignant) to 1 (benign), i.e., we use a classification black-box model based on the SVM. The parameters of the black-box model, the point of interest, and the perturbation procedure do not differ from those used for the Breast Cancer dataset with the regression black-box model. All features are standardized before training the SVM and interpreting the results.

The importance values of the features obtained by using the proposed algorithm are depicted in Fig. 28. It can be seen from Fig. 28 that the same features as in the example with the regression black-box model, namely “worst symmetry”, “mean concave points”, “worst concavity”, and “worst concave points”, are of the highest importance. Fig. 29 shows how the weights change as the number of iterations increases for these important features. Individual shape functions for these features are plotted in Fig. 30. They are similar to those computed for the regression black-box model.

Figure 28: Importance of all features for the Breast Cancer dataset in the case of the local interpretation using the classification black-box model
Figure 29: Weights of all features for the Breast Cancer dataset in the case of the local interpretation using the classification black-box model
Figure 30: Shape functions of four important features learned on the Breast Cancer dataset (the x-axis indicates values of each feature, the y-axis indicates the feature contribution) for the local interpretation using the classification black-box model

7 Conclusion

A new interpretation method based on applying an ensemble of parallel GBMs has been proposed for interpreting black-box models. The interpretation algorithm implementing the method learns a linear combination of GBMs such that each GBM processes a single feature. The GBMs use randomized decision trees of depth 1 as base models. In spite of this simple implementation of the base models, they provide correct results whose intuitive interpretation coincides with the obtained one. Various numerical experiments with synthetic and real data have illustrated the advantages of the proposed method.

The method is based on the parallel and independent use of the GBMs within one iteration; indeed, each GBM deals with a single feature. However, the architecture of the proposed algorithm suggests developing a method that could consider combinations of features. In this case, feature correlation could be taken into account. This is a direction for further research. Another interesting direction is to consider a combination of the NAMs [2] with the proposed algorithm.

The proposed algorithm consists of several components, including the Lasso method and a specific scheme for updating the weights. Every component can be replaced by another implementation, which could lead to better results. The choice of the best configuration of the algorithm is also an important direction for further research.

The proposed method is efficient mainly for tabular data. However, it can also be adapted to image processing, which has some inherent peculiarities. This adaptation is another interesting direction for future research.

References

  1. A. Adadi and M. Berrada. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6:52138–52160, 2018.
  2. R. Agarwal, N. Frosst, X. Zhang, R. Caruana, and G.E. Hinton. Neural additive models: Interpretable machine learning with neural nets. arXiv:2004.13912, April 2020.
  3. A.B. Arrieta, N. Diaz-Rodriguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, and F. Herrera. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. arXiv:1910.10045, October 2019.
  4. V. Belle and I. Papantonis. Principles and practice of explainable machine learning. arXiv:2009.11698, September 2020.
  5. G. Biau and E. Scornet. A random forest guided tour. Test, 25(2):197–227, 2016.
  6. C.D. Blakely and O.C. Granmo. Closed-form expressions for global and local interpretation of Tsetlin machines with applications to explaining high-dimensional data. arXiv:2007.13885, July 2020.
  7. L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
  8. R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1721–1730, 2015.
  9. D.V. Carvalho, E.M. Pereira, and J.S. Cardoso. Machine learning interpretability: A survey on methods and metrics. Electronics, 8(832):1–34, 2019.
  10. C.-H. Chang, S. Tan, B. Lengerich, A. Goldenberg, and R. Caruana. How interpretable and trustworthy are gams? arXiv:2006.06466, June 2020.
  11. J. Chen, J. Vaughan, V.N. Nair, and A. Sudjianto. Adaptive explainable neural networks (AxNNs). arXiv:2004.02353v2, April 2020.
  12. T. Chen and C. Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, New York, NY, 2016. ACM.
  13. A. Das and P. Rad. Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv:2006.11371v2, June 2020.
  14. M. Du, N. Liu, and X. Hu. Techniques for interpretable machine learning. arXiv:1808.00033, May 2019.
  15. J. Feng, Y.X. Xu, Y. Jiang, and Z.-H. Zhou. Soft gradient boosting machine. arXiv:2006.04059, June 2020.
  16. J. Feng, Y. Yu, and Z.-H. Zhou. Multi-layered gradient boosting decision trees. In Advances in Neural Information Processing Systems, pages 3551–3561. Curran Associates, Inc., 2018.
  17. R. Fong and A. Vedaldi. Explanations for attributing deep neural network predictions. In Explainable AI, volume 11700 of LNCS, pages 149–167. Springer, Cham, 2019.
  18. R.C. Fong and A. Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3429–3437. IEEE, 2017.
  19. J.H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232, 2001.
  20. J.H. Friedman. Stochastic gradient boosting. Computational statistics & data analysis, 38(4):367–378, 2002.
  21. D. Garreau and U. von Luxburg. Explaining the explainer: A first theoretical analysis of LIME. arXiv:2001.03447, January 2020.
  22. D. Garreau and U. von Luxburg. Looking deeper into tabular LIME. arXiv:2008.11092, August 2020.
  23. P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine learning, 63:3–42, 2006.
  24. R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi. A survey of methods for explaining black box models. ACM computing surveys, 51(5):93, 2019.
  25. T. Hastie and R. Tibshirani. Generalized additive models, volume 43. CRC press, 1990.
  26. A. Holzinger, G. Langs, H. Denk, K. Zatloukal, and H. Muller. Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9(4):e1312, 2019.
  27. Q. Huang, M. Yamada, Y. Tian, D. Singh, D. Yin, and Y. Chang. GraphLIME: Local interpretable model explanations for graph neural networks. arXiv:2001.06216, January 2020.
  28. T. Huber, K. Weitz, E. Andre, and O. Amir. Local and global explanations of agent behavior: Integrating strategy summaries with saliency maps. arXiv:2005.08874, May 2020.
  29. A. Jung. Explainable empirical risk minimization. arXiv:2009.01492, September 2020.
  30. I. Karlsson, J. Rebane, P. Papapetrou, and A. Gionis. Locally and globally explainable time series tweaking. Knowledge and Information Systems, 62:1671–1700, 2020.
  31. A.V. Konstantinov and L.V. Utkin. A generalized stacking for implementing ensembles of gradient boosting machines. arXiv:2010.06026, October 2020.
  32. A.V. Konstantinov and L.V. Utkin. Gradient boosting machine with partially randomized decision trees. arXiv:2006.11014, June 2020.
  33. M.S. Kovalev, L.V. Utkin, and E.M. Kasimov. SurvLIME: A method for explaining machine learning survival models. Knowledge-Based Systems, 203:106164, 2020.
  34. Y. Lou, R. Caruana, and J. Gehrke. Intelligible models for classification and regression. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 150–158. ACM, August 2012.
  35. S.M. Lundberg, G. Erion, H. Chen, A. DeGrave, J.M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, and S.-I. Lee. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2:56–67, 2020.
  36. S.M. Lundberg and S.-I. Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765–4774, 2017.
  37. A. Mikolajczyk, M. Grochowski, and A. Kwasigroch. Global explanations for discovering bias in data. arXiv:2005.02269, May 2020.
  38. C. Molnar. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Published online, https://christophm.github.io/interpretable-ml-book/, 2019.
  39. A. Natekin and A. Knoll. Gradient boosting machines, a tutorial. Frontiers in neurorobotics, 7(Article 21):1–21, 2013.
  40. H. Nori, S. Jenkins, P. Koch, and R. Caruana. InterpretML: A unified framework for machine learning interpretability. arXiv:1909.09223, September 2019.
  41. V. Petsiuk, A. Das, and K. Saenko. Rise: Randomized input sampling for explanation of black-box models. arXiv:1806.07421, June 2018.
  42. J. Rabold, H. Deininger, M. Siebers, and U. Schmid. Enriching visual with verbal explanations for relational concepts: Combining LIME with Aleph. arXiv:1910.01837v1, October 2019.
  43. M.T. Ribeiro, S. Singh, and C. Guestrin. “Why should I trust you?” Explaining the predictions of any classifier. arXiv:1602.04938v3, August 2016.
  44. M.T. Ribeiro, S. Singh, and C. Guestrin. Anchors: High-precision model-agnostic explanations. In AAAI Conference on Artificial Intelligence, pages 1527–1535, 2018.
  45. C. Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1:206–215, 2019.
  46. O. Sagi and L. Rokach. Ensemble learning: A survey. WIREs Data Mining and Knowledge Discovery, 8(e1249):1–18, 2018.
  47. S.M. Shankaranarayana and D. Runje. ALIME: Autoencoder based approach for local interpretability. arXiv:1909.02437, September 2019.
  48. A.J. Smola and B. Scholkopf. A tutorial on support vector regression. Statistics and Computing, 14:199–222, 2004.
  49. E. Strumbelj and I. Kononenko. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11:1–18, 2010.
  50. R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288, 1996.
  51. M.N. Vu, T.D. Nguyen, N. Phan, R. Gera, and M.T. Thai. Evaluating explainers via perturbation. arXiv:1906.02032v1, June 2019.
  52. N. Xie, G. Ras, M. van Gerven, and D. Doran. Explainable deep learning: A field guide for the uninitiated. arXiv:2004.14545, April 2020.
  53. C. Yang, A. Rangarajan, and S. Ranka. Global model interpretation via recursive partitioning. In 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pages 1563–1570. IEEE, 2018.
  54. Z. Yang, A. Zhang, and A. Sudjianto. GAMI-Net: An explainable neural network based on generalized additive models with structured interactions. arXiv:2003.07132, March 2020.
  55. X. Zhang, S. Tan, P. Koch, Y. Lou, U. Chajewska, and R. Caruana. Axiomatic interpretability for multiclass additive models. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 226–234. ACM, 2019.
  56. Z.-H. Zhou and J. Feng. Deep forest: Towards an alternative to deep neural networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI’17), pages 3553–3559, Melbourne, Australia, 2017. AAAI Press.