Enhancing Explainability of Neural Networks through Architecture Constraints

Zebin Yang, Aijun Zhang and Agus Sudjianto
Department of Statistics and Actuarial Science, The University of Hong Kong,
Pokfulam Road, Hong Kong
Corporate Model Risk, Wells Fargo, USA
Abstract

Prediction accuracy and model explainability are the two most important objectives when developing machine learning algorithms to solve real-world problems. Neural networks are known to possess good prediction performance, but they lack sufficient model interpretability. In this paper, we propose to enhance the explainability of neural networks through the following architecture constraints: a) sparse additive subnetworks; b) projection pursuit with orthogonality constraint; and c) smooth function approximation. This leads to an explainable neural network (xNN) with a superior balance between prediction performance and model interpretability. We derive the necessary and sufficient identifiability conditions for the proposed xNN model. The multiple parameters are simultaneously estimated by a modified mini-batch gradient descent method based on the backpropagation algorithm for calculating the derivatives and the Cayley transform for preserving the projection orthogonality. Through a simulation study under six different scenarios, we compare the proposed method to several benchmarks, including the least absolute shrinkage and selection operator, support vector machine, random forest, extreme learning machine, and multi-layer perceptron. It is shown that the proposed xNN model keeps the flexibility of pursuing high prediction accuracy while attaining improved interpretability. Finally, a real data example is employed as a showcase application.

Keywords: Explainable neural network, additive decomposition, sparsity, orthogonality, smoothness.

1 Introduction

The recent developments of neural network techniques have brought tremendous breakthroughs in machine learning and artificial intelligence (AI). Complex network structures have been designed and have achieved great successes in areas like computer vision and natural language processing. Besides predictive performance, transparency and interpretability are essential aspects of a trustworthy model; however, most neural networks remain black-box models, whose inner decision-making processes cannot be easily understood by human beings. Without sufficient interpretability, their applications in specialized domains such as medicine and finance can be largely limited. For instance, a personal credit scoring model in the banking industry should be not only accurate but also convincing. The terminology “Explainable AI” advocated by the Defense Advanced Research Projects Agency (DARPA) has drawn wide public attention (Gunning, 2017). Recently, a US Federal Reserve governor discussed the regulatory use of AI in financial services (Brainard, 2018) and emphasized the development of explainable AI tools.

There has been a considerable amount of research on interpretable machine learning by post-hoc analysis, including the model-agnostic approach and the model distillation approach. Examples of the former include the partial dependence plot (Friedman, 2001), the individual conditional expectation plot (Goldstein et al., 2015), the locally interpretable model-agnostic explanation method (Ribeiro et al., 2016), and others (Apley, 2016, Liu et al., 2018). Examples of the latter include model compression and distillation (Bucilua et al., 2006, Ba and Caruana, 2014), network distilling (Hinton et al., 2015), network pruning (Wang et al., 2018), and tree-partitioned surrogate modeling (Hu et al., 2018).

It is our goal to design highly predictive models that are intrinsically interpretable. This is a challenging task, as the two objectives (i.e., prediction accuracy and model interpretability) usually conflict with each other (Gunning, 2017). A deep neural network may provide accurate prediction, but it can hardly be understood even by the model developers. In contrast, most statistical models are intrinsically explainable (e.g., linear regression, logistic regression, and decision trees), but these simplified models are known to be less predictive than black-box machine learning algorithms in dealing with large-scale complex data. We find that the additive index model (AIM) is a promising statistical model for balancing prediction accuracy and model interpretability.

The AIM, also known as the projection pursuit regression (PPR; Friedman and Stuetzle, 1981), decomposes a complex function into the linear combination of multiple component functions. Given a dataset with features $\mathbf{x} \in \mathbb{R}^d$ and response $y$, the AIM takes the following form,

$$g\big(\mathbb{E}(y|\mathbf{x})\big) = \mu + \sum_{j=1}^{k} h_j(\mathbf{w}_j^T \mathbf{x}), \qquad (1)$$

where $g$ is a link function, $\mu$ is the intercept, $\mathbf{w}_j \in \mathbb{R}^d$ are the projection indexes for $j = 1, \ldots, k$, and $h_j$ are the corresponding nonparametric ridge functions. Note that the AIM is closely related to neural networks (Hwang et al., 1994). If we fix each ridge function to be a prespecified activation function, it reduces to a single-hidden-layer neural network. Indeed, the AIM is also a universal approximator when $k$ is sufficiently large.

The estimation of the AIM is usually based on alternating optimization between the ridge functions and the projection indexes. When the projection indexes are fixed, the ridge functions are estimated by nonparametric smoothers subject to backfitting; when the ridge functions are fixed, the projection indexes are iteratively estimated by Gauss-Newton algorithms. Such an alternating procedure may not guarantee the global optimum, and it becomes extremely time-consuming for large-sample datasets. To improve the computation, an AIM-based neural network was recently proposed by Vaughan et al. (2018), where each activation function of the hidden layer is extended to be a ridge function as modeled by a feedforward subnetwork. Each subnetwork consists of one input node, multiple hidden layers, and one output node. With such a network architecture, the AIM parameters can be simultaneously estimated by mini-batch gradient descent, which is effective and scalable for large-scale datasets.

In Vaughan et al. (2018), the above AIM-based neural network is called an explainable neural network (xNN), which is essentially an AIM with NN-represented ridge functions. The additive model structure of (1) makes it easy to explain the effect attribution only when the ridge functions are mutually independent. However, such an independence condition is hard to satisfy, especially when the number of components $k$ grows. Actually, as discussed in Chapter 11 of Hastie et al. (2009), the AIM is usually regarded as a non-interpretable model. We argue that the model interpretability should be induced from practical constraints, and an AIM is not explainable unless it meets further interpretability constraints. In this regard, the xNN by Vaughan et al. (2018) achieves only partial explainability, as it is suggested to be equipped with $\ell_1$ shrinkage for the purpose of sparse and parsimonious modeling. We call it a naive version of the explainable neural network (xNN.naive).

There are some potential drawbacks of the naive xNN. First, it was suggested to rewrite each ridge function $h_j(z)$ in the scaled form $\beta_j h_j(z)$ and then apply the $\ell_1$ shrinkage on the scaling coefficients $\beta_j$ to enforce the sparsity. Such split ridge functions become unidentifiable unless appropriate norm constraints are imposed. Second, the projection weights in the original AIM can be correlated, hence violating the independence condition for clear effect attribution; it is desirable that the resulting projection indexes are mutually orthogonal. Third, the ridge functions represented by the neural networks may not be as smooth as those from nonparametric methods (e.g., the smoothing splines). The non-smooth ridge functions would also add difficulty in explaining the functional relationship.

In this paper, we propose to enhance the explainability of neural networks through architecture constraints, including additivity, sparsity, orthogonality, and smoothness. Specifically, the classical fully connected network architecture is re-configured with sparse additive subnetworks subject to $\ell_1$-sparsity constraints on both the projection layer and the output layer, together with an additional normalization step for ensuring ridge function identifiability. The orthogonality constraint is imposed on the projection indexes such that $\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$, which is known as the Stiefel manifold and can be treated by the Cayley transform (Wen and Yin, 2013). For each ridge function, the smoothness constraint is imposed by considering an empirical roughness penalty based on the squared second-order derivatives (Wahba, 1990). Combining these interpretability constraints into the neural network architecture, we obtain an enhanced version of the explainable neural network (xNN.enhanced).

Computationally, the enhanced xNN model is estimated by modern neural network training techniques, including backpropagation, mini-batch gradient descent, batch normalization, and the Adam optimizer. We develop a new SOS-BP algorithm based on the backpropagation technique for calculating the derivatives and the Cayley transform for preserving the projection orthogonality. All the unknown parameters in the enhanced xNN model can be simultaneously optimized by mini-batch gradient descent. Owing to the modern machine learning platform (e.g., the TensorFlow used here) with the automatic differentiation technology, the empirical roughness penalty can be readily evaluated for each projected data point. The involved hyperparameters can be either set to our suggested configurations or optimized by grid search, random search, or other automatic procedures. Moreover, the proposed SOS-BP algorithm based on the mini-batch training strategy is scalable to large-scale datasets, and it can also be implemented on GPU machines.

The enhanced xNN architecture keeps the flexibility of pursuing prediction accuracy while attaining improved interpretability. It can therefore be used as a promising surrogate for complex model approximation. Through a simulation study under six different scenarios, its prediction accuracy is shown to outperform or perform quite close to classical machine learning models, e.g., the support vector machine (SVM) and the multi-layer perceptron (MLP). On the other hand, the enhanced xNN model conveys intrinsic interpretability in terms of the projection weights and the corresponding subnetworks. As a showcase of potential applications, the proposed model is used to study a real data example from LendingClub.

This paper is organized as follows. Section 2 provides the AIM-based formulation of the explainable neural network subject to sparsity, orthogonality, and smoothness constraints. The identifiability conditions for the proposed xNN model are derived. Section 3 discusses the computational method through the new SOS-BP algorithm. Numerical experiments are conducted with both simulation studies and a real data example, as presented in Section 4. Finally, Section 5 concludes the paper with remarks about future study.

2 Explainable Neural Networks

Given the feature vector $\mathbf{x} \in \mathbb{R}^d$ and the response $y$, consider the AIM with the matrix of projection indexes $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_k] \in \mathbb{R}^{d \times k}$, and the corresponding normalized ridge functions $h_j$ of each projected variable or learned feature $z_j = \mathbf{w}_j^T\mathbf{x}$, for $j = 1, \ldots, k$. The proposed xNN model is formulated by:

$$g\big(\mathbb{E}(y|\mathbf{x})\big) = \mu + \sum_{j=1}^{k} \beta_j h_j(\mathbf{w}_j^T\mathbf{x}), \qquad (2)$$
subject to the following interpretability constraints:
$$\|\mathbf{w}_j\|_1 \leq C_1, \qquad (2a)$$
$$\sum_{j=1}^{k} |\beta_j| \leq C_2, \qquad (2b)$$
$$\int \big(h_j''(z)\big)^2 \,\mathrm{d}F_j(z) \leq C_3, \qquad (2c)$$
$$\mathbf{W}^T\mathbf{W} = \mathbf{I}_k, \qquad (2d)$$
$$\mathbb{E}\, h_j(z_j) = 0, \quad \mathbb{E}\, h_j^2(z_j) = 1, \qquad (2e)$$

for $j = 1, \ldots, k$, where $C_1$, $C_2$, $C_3$ are the regularization strengths and each $F_j$ represents the empirical cumulative distribution function of the $j$th projected variable $z_j = \mathbf{w}_j^T\mathbf{x}$. The multiple constraints (2a–2e) are imposed by the interpretability considerations from the sparse, orthogonal, and smooth perspectives. The coefficients $\beta_j$, for $j = 1, \ldots, k$, represent signed scales for each normalized ridge function, based on which we define the corresponding importance ratio (IR) as

$$\mathrm{IR}_j = \frac{|\beta_j|}{\sum_{l=1}^{k} |\beta_l|}, \qquad j = 1, \ldots, k. \qquad (3)$$
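As a concrete illustration, the importance ratios in (3) can be computed directly from the fitted scaling coefficients; below is a minimal NumPy sketch (the function name is ours, not part of any reference implementation).

```python
import numpy as np

def importance_ratios(beta):
    """Importance ratio of each subnetwork: |beta_j| / sum_l |beta_l|."""
    abs_beta = np.abs(np.asarray(beta, dtype=float))
    return abs_beta / abs_beta.sum()

# Example: the first two subnetworks dominate the fitted model.
print(importance_ratios([2.0, -1.5, 0.3, 0.0]))  # [0.526 0.395 0.079 0.   ]
```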

Fig. 1 presents the proposed xNN architecture using neural network notations. In particular, each node represents a linear combination of the inputs from the previous layer through the coefficient arrows, where the arrows are drawn as dashed lines to indicate that the coefficients are subject to sparsity constraints. The shaded area of $\mathbf{W}$ indicates the orthogonality constraint on the projection indexes $\mathbf{w}_1, \ldots, \mathbf{w}_k$. Each subnetwork consists of one input node for the projected data, multiple hidden layers forming a feedforward neural network, one output node subject to normalization, and an additional node for calculating the roughness penalty. All the normalized ridge functions are then linearly combined to form the final output, together with a bias node for capturing the overall mean.

Table 1 lists a summary of notations in both AIM and xNN contexts, as well as the interpretation based on the enhanced xNN model.

Figure 1: The enhanced xNN architecture: the dashed arrows are subject to the sparsity constraints on $\mathbf{w}_j$ and $\beta_j$, respectively; the red shaded area represents the Stiefel manifold constraint on the projection indexes ($\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$), and for each blue shaded subnetwork, a smooth function approximation is performed to capture the normalized ridge function through a normalization node and a roughness penalty node.
Symbol | AIM | xNN | Interpretation based on xNN
$\mu$ | intercept | the bias of the output node | overall mean
$\mathbf{w}_j$ | projection index | weights between the inputs and the $j$th subnetwork | coefficients for constructing the $j$th learned feature
$h_j$ | normalized ridge function | the $j$th subnetwork | the functional relationship between the $j$th learned feature and the response
$\beta_j$ | scaling constant | the weight between the $j$th subnetwork and the output | the signed scale of the $j$th normalized ridge function
Table 1: Summary of notations in the AIM and the xNN contexts.

2.1 Interpretability Constraints

The explainability of the proposed xNN model is induced by the imposed interpretability constraints. We justify them from the following three aspects.

a) Sparse Additive Subnetworks

The normalized ridge functions in (2) are modeled by the additive subnetworks, where each subnetwork corresponds to a feedforward neural network for flexible function approximation. A single-hidden-layer feedforward neural network with an arbitrarily wide hidden layer and appropriate activation functions is known to approximate a continuous function on a compact domain arbitrarily well (Hornik, 1991). Therefore, it is flexible in specifying the number of layers and nodes for each subnetwork, according to the complexity of tasks.
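For illustration, each subnetwork can be built as a small scalar-in, scalar-out feedforward network; the following Keras sketch uses illustrative layer sizes and activations rather than the exact configuration reported later.

```python
import tensorflow as tf

def make_subnetwork(hidden_units=(10, 6), activation="tanh"):
    """One ridge function: a scalar projected input -> hidden layers -> scalar output."""
    inputs = tf.keras.layers.Input(shape=(1,))
    net = inputs
    for units in hidden_units:
        net = tf.keras.layers.Dense(units, activation=activation)(net)
    outputs = tf.keras.layers.Dense(1, activation="linear")(net)
    return tf.keras.Model(inputs=inputs, outputs=outputs)

subnet = make_subnetwork()  # one subnetwork; the full xNN combines k of these additively
```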

Two $\ell_1$-norm constraints are imposed for inducing the sparsity in ridge functions and projection weights, in order to make the model parsimonious. By the constraint (2b), upon a suitable choice of $C_2$, a certain number of ridge functions will have zero scales, i.e., $\beta_j = 0$ for some $j$; such ridge functions and the corresponding projection indexes become inactive. By the constraint (2a), upon a suitable choice of $C_1$, the sparsity in each projection index can be induced such that negligible weights are regularized to zero, similar to the sparse principal component analysis by Zou et al. (2006).

For the sake of subnetwork identifiability, the overall mean and the norm of each ridge function are constrained by the condition (2e). This essentially performs a normalization of each estimated ridge function. It is a critical procedure for simultaneously estimating all the subnetworks while performing shrinkage on the scales of the ridge functions.

b) Orthogonal Projection Pursuit

The constraint (2d) is imposed to enforce the projection indexes to be mutually orthogonal, which restricts $\mathbf{W}$ to the Stiefel manifold $\{\mathbf{W} \in \mathbb{R}^{d \times k}: \mathbf{W}^T\mathbf{W} = \mathbf{I}_k\}$. It is a natural idea to consider mutually orthogonal projection indexes to avoid ambiguity due to multicollinearity; see, e.g., the principal component regression (PCR) by Jolliffe (1982) and the supervised PCA by Bair et al. (2006), Barshan et al. (2011). Without the orthogonality assumption, the AIM often results in correlated or even identical projection indexes, which makes it difficult to distinguish different additive components. The orthogonality constraint can also be understood geometrically, as it provides an orthogonal basis for data rotation. Therefore, the proposed xNN model is more explainable than the AIM and the naive xNN model.

c) Smooth Function Approximation

The functional roughness penalty (2c) is imposed as a constraint in order to enforce the smoothness of each ridge function. Following Wahba (1990), we formulate it by the integrated squared second-order derivatives over the distribution of the projected data $z_j = \mathbf{w}_j^T\mathbf{x}$. Using the empirical distribution, the integral form in (2c) can be rewritten as

$$\Omega(h_j) = \frac{1}{n}\sum_{i=1}^{n} \big(h_j''(\mathbf{w}_j^T\mathbf{x}_i)\big)^2, \qquad j = 1, \ldots, k. \qquad (4)$$

Thus, the subnetworks subject to roughness penalty can be regarded as a neural network alternative to nonparametric smoothers. As a benefit, the neural network training techniques can be directly used for fitting the smooth ridge functions. Compared with the naive xNN model without smoothness regularization, the enhanced xNN model can prevent non-smooth representation and achieve better interpretability.
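As a sketch of how this penalty can be evaluated in practice, the second-order derivatives in (4) are obtainable by nested automatic differentiation; the snippet below assumes `subnet` is a scalar-to-scalar Keras model such as the one sketched earlier, and the variable names are ours.

```python
import tensorflow as tf

def roughness_penalty(subnet, z):
    """Empirical roughness (4): the mean squared second derivative of h over the batch."""
    z = tf.convert_to_tensor(z, dtype=tf.float32)   # projected data, shape (n, 1)
    with tf.GradientTape() as outer:
        outer.watch(z)
        with tf.GradientTape() as inner:
            inner.watch(z)
            h = subnet(z)                            # ridge function values h(z_i)
        dh = inner.gradient(h, z)                    # first derivatives h'(z_i)
    d2h = outer.gradient(dh, z)                      # second derivatives h''(z_i)
    return tf.reduce_mean(tf.square(d2h))
```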

2.2 Identifiability Conditions

A model is not identifiable if it has more than one representation. Such non-uniqueness makes a model hard to interpret. It turns out that the interpretability constraints for the proposed xNN model can actually make the model identifiable under weaker conditions, which in turn enhances the model interpretability. In this section, we derive the identifiability conditions for the enhanced xNN model.

For the AIM, some trivial conditions should be imposed for removing possible ambiguity caused by shifting, scaling, and flipping effects, see Yuan (2011), Chen and Samworth (2016) for details. Then, we have the following lemma for the standard AIM.

Lemma 1

(Theorem 1 of Yuan (2011)). The AIM (1) is identifiable if the following conditions are satisfied:

  • (A1) There is at most one linear ridge function; if $h_j$ is linear, then $\mathbf{w}_j^T\mathbf{w}_l = 0$ for all $l \neq j$;

  • (A2) There is at most one quadratic ridge function;

  • (A3) The projection matrix $\mathbf{W}$ is of full column rank.

After imposing the orthogonality constraint (2d), we prove that the enhanced xNN model enjoys weaker identifiability conditions. Note that splitting the ridge function into the scaled form $\beta_j h_j$ does not affect model identifiability, since each $h_j$ is estimated subject to the normalization constraint (2e). For convenience, we analyze the identifiability conditions based on the unnormalized form of the ridge functions, as stated in the following theorem.

Theorem 1

For the xNN model (2) with $\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$, it is identifiable if the following conditions are satisfied:

  1. (B1) There is at most one linear ridge function;

  2. (B2) If there exist multiple quadratic ridge functions, their quadratic term coefficients should be all distinct.

Comparing Theorem 1 to Lemma 1, we still require at most one linear ridge function, since it is clear that any two linear ridge functions can be combined or represented by two other linear forms. Since we have imposed the orthogonality constraint in the enhanced xNN model, the rest of Condition (A1) is naturally satisfied. Meanwhile, for Condition (A3), the projection matrix is automatically of full column rank. As for the requirement on the quadratic ridge functions in Condition (A2), it can be relaxed to Condition (B2). That is, multiple quadratic functions can be identified, as long as their quadratic term coefficients are all different. The detailed proof of Theorem 1 is available in the Appendix.

Theorem 1 states the sufficient conditions for the xNN model to be identifiable. We have the following observations when examining the necessity of these conditions. First, without further assumptions, the enhanced xNN model is not identifiable if there exist two or more linear ridge functions. For example, a linear term $\mathbf{w}^T\mathbf{x}$ can be expressed by two linear ridge functions $a(\mathbf{w}_1^T\mathbf{x}) + b(\mathbf{w}_2^T\mathbf{x})$, where $\mathbf{w}_1$ and $\mathbf{w}_2$ can be any orthonormal vectors satisfying $a\mathbf{w}_1 + b\mathbf{w}_2 = \mathbf{w}$, so the decomposition is not unique.

Next, we consider the necessity of Condition (B2). Assume there are two quadratic ridge functions with the same coefficient, $a(\mathbf{w}_1^T\mathbf{x})^2$ and $a(\mathbf{w}_2^T\mathbf{x})^2$, with mutually orthogonal projection indexes $\mathbf{w}_1, \mathbf{w}_2$. Writing $\mathbf{W} = [\mathbf{w}_1, \mathbf{w}_2]$, the sum of the two components can be expressed in matrix notation as

$$a(\mathbf{w}_1^T\mathbf{x})^2 + a(\mathbf{w}_2^T\mathbf{x})^2 = \mathbf{x}^T \mathbf{B} \mathbf{x},$$

where $\mathbf{B} = a\mathbf{W}\mathbf{W}^T$ is a real symmetric matrix. For any $2 \times 2$ matrix $\mathbf{Q}$ subject to $\mathbf{Q}^T\mathbf{Q} = \mathbf{I}$ and $\mathbf{Q} \neq \mathbf{I}$, we can construct a different projection matrix $\tilde{\mathbf{W}} = \mathbf{W}\mathbf{Q}$ such that

$$a\tilde{\mathbf{W}}\tilde{\mathbf{W}}^T = a\mathbf{W}\mathbf{Q}\mathbf{Q}^T\mathbf{W}^T = \mathbf{B}.$$

It is obvious that $\tilde{\mathbf{W}}^T\tilde{\mathbf{W}} = \mathbf{I}$ also holds; therefore, the projection indexes are not identifiable.

2.3 Special Cases

There exist a few special cases of the enhanced xNNs which are worth mentioning below.

  1. If each projection index $\mathbf{w}_j$ has exactly one non-zero entry, the xNN model reduces to the generalized additive model (GAM; Hastie and Tibshirani, 1990);

  2. If each projection index has exactly one non-zero entry and all the ridge functions are linear, the xNN model reduces to the generalized linear model (GLM);

  3. If $k = 1$, the xNN model reduces to the single-index model (SIM; Ichimura, 1993).

Given the orthogonality constraint (2d), it is easy to check that for each $j$, $\|\mathbf{w}_j\|_2 = 1$ and $\|\mathbf{w}_j\|_1 \geq \|\mathbf{w}_j\|_2 = 1$. The equality holds only when there is exactly one non-zero element in the vector $\mathbf{w}_j$. Therefore, when $C_1 = 1$ in the sparsity constraint (2a), each projection picks one component, and it corresponds to the GAM. Furthermore, if $C_3 = 0$ in the smoothness constraint (2c), each ridge function is enforced to have zero second-order derivative, which reduces to the linear ridge functions as in the GLM.

The GAM, GLM, and SIM under these special cases are all identifiable and highly explainable. Furthermore, for the GAM (when $C_1 = 1$) subject to the sparsity constraint (2b), we obtain the sparse GAM (Ravikumar et al., 2009). For the SIM (when $k = 1$) subject to the sparsity constraint (2a), we obtain the SIM with sparse projection. These sparse models tend to be more parsimonious and even more explainable, and they are especially useful for high-dimensional problems.

3 Computational Method

In this section, we discuss the computational procedures for estimating the enhanced xNN model (2). Denote the list of unknowns in the proposed model by

$$\boldsymbol{\theta} = \big\{\mu,\ \beta_1, \ldots, \beta_k,\ \mathbf{w}_1, \ldots, \mathbf{w}_k,\ h_1, \ldots, h_k\big\}, \qquad (5)$$

consisting of scalars, vectors, and functions. It is noted that each normalized ridge function $h_j$ is approximated by a feedforward subnetwork with finitely many parameters, but for simplicity we treat the ridge function itself as the unknown. For each feature vector $\mathbf{x}_i$, denote the xNN prediction by $\hat{y}_i$. To measure the prediction accuracy, a convex loss $\ell(y_i, \hat{y}_i)$ can be specified depending on the type of response $y$, e.g., the least squares loss for regression and the cross-entropy loss for binary classification.

By the method of Lagrange multipliers, both the $\ell_1$ penalties (2a, 2b) and the roughness penalty (4) can be formulated as soft regularizers, while the orthogonality constraint (2d) and the zero-mean, unit-norm requirements (2e) are treated as hard constraints. This leads to the following constrained optimization problem,

$$\min_{\boldsymbol{\theta}}\ \frac{1}{n}\sum_{i=1}^{n} \ell\big(y_i, \hat{y}_i\big) + \lambda_1 \sum_{j=1}^{k} \|\mathbf{w}_j\|_1 + \lambda_2 \sum_{j=1}^{k} |\beta_j| + \lambda_3 \sum_{j=1}^{k} \Omega(h_j), \qquad (6)$$

subject to $\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$, $\mathbb{E}\, h_j(z_j) = 0$ and $\mathbb{E}\, h_j^2(z_j) = 1$ for $j = 1, \ldots, k$, where $\lambda_1$, $\lambda_2$, $\lambda_3$ are the regularization hyperparameters corresponding to the soft constraints (2a–2c), respectively.
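For clarity, the soft part of (6) amounts to an ordinary penalized loss; the following sketch assembles it for the least squares case under our notation (the function name, argument names, and default strengths are illustrative, not part of a reference implementation).

```python
import tensorflow as tf

def penalized_loss(y_true, y_pred, W, beta, roughness_terms,
                   lam1=1e-3, lam2=1e-3, lam3=1e-3):
    """Least squares loss plus the three soft regularizers in (6)."""
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    l1_w = tf.reduce_sum(tf.abs(W))           # sparsity of projection weights, cf. (2a)
    l1_beta = tf.reduce_sum(tf.abs(beta))     # sparsity of subnetwork scales, cf. (2b)
    rough = tf.add_n(list(roughness_terms))   # roughness penalties of the k subnetworks, cf. (2c)
    return mse + lam1 * l1_w + lam2 * l1_beta + lam3 * rough
```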

3.1 SOS-BP Algorithm

To deal with the hard constraints in (6), we employ the Cayley transform for preserving projection indexes on the Stiefel manifold and employ batch normalization for each estimated ridge function. The multiple unknown parameters in (5) are simultaneously optimized by mini-batch gradient descent, and such a backpropagation procedure is applied throughout the remaining discussions.

During the updates, the orthogonality of the projection matrix $\mathbf{W}$ should be preserved, for which we employ a fast and effective update scheme proposed by Wen and Yin (2013). It is based on the following Cayley transform along the gradient descent direction on the Stiefel manifold,

$$\mathbf{W}(\eta) = \Big(\mathbf{I} + \frac{\eta}{2}\mathbf{A}\Big)^{-1}\Big(\mathbf{I} - \frac{\eta}{2}\mathbf{A}\Big)\mathbf{W}, \qquad (7)$$

where $\eta$ is a step size and $\mathbf{A} = \mathbf{G}\mathbf{W}^T - \mathbf{W}\mathbf{G}^T$ is a skew-symmetric matrix, with $\mathbf{G}$ being the partial gradient of the objective function with respect to $\mathbf{W}$. It can be verified that $\mathbf{W}(\eta)^T\mathbf{W}(\eta) = \mathbf{W}^T\mathbf{W}$ for any $\eta$, so the orthogonality is preserved as long as the initial $\mathbf{W}$ satisfies $\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$. It can also be justified that when the step size $\eta$ is positive and small enough, the objective function will be improved after the Cayley transform.
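A minimal NumPy sketch of a single Cayley-transform step (7) is given below; the gradient matrix G would come from backpropagation, and here it is replaced by a random matrix just to demonstrate that orthogonality is preserved.

```python
import numpy as np

def cayley_update(W, G, eta):
    """W(eta) = (I + eta/2 * A)^{-1} (I - eta/2 * A) W, with A = G W^T - W G^T."""
    A = G @ W.T - W @ G.T                              # skew-symmetric matrix
    I = np.eye(W.shape[0])
    return np.linalg.solve(I + 0.5 * eta * A, (I - 0.5 * eta * A) @ W)

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((5, 3)))       # d = 5 features, k = 3 projections
G = rng.standard_normal((5, 3))                        # toy gradient in place of dL/dW
W_new = cayley_update(W, G, eta=0.1)
print(np.allclose(W_new.T @ W_new, np.eye(3)))         # True: orthogonality is preserved
```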

The multiple parameters in (5) can be simultaneously optimized by mini-batch gradient descent together with the Cayley transform. It is an iterative procedure with flexibility in specifying the mini-batch size and the number of epochs. The total number of gradient descent iterations is controlled by the number of epochs multiplied by the number of mini-batches per epoch. Within each iteration, the projection matrix $\mathbf{W}$ is updated separately by the Cayley transform, while the remaining parameters (all the parameters except for $\mathbf{W}$) are updated by gradient descent with adaptive learning rates determined by the Adam optimizer (Kingma and Ba, 2014). The new SOS-BP algorithm for estimating the xNN model is presented in Algorithm 1.

Input: training data $\{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, number of subnetworks $k$, sparsity strengths $(\lambda_1, \lambda_2)$, smoothness strength $\lambda_3$, subnetwork structure, learning rate, step size $\eta$ for the Cayley transform, mini-batch size $n_b$, and number of epochs $T$.
Initialize the network with $\mathbf{W}$ satisfying $\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$.
for Epoch $= 1, 2, \ldots, T$ do
     Split the reshuffled data into mini-batches, each with $n_b$ samples.
     for Batch $= 1, 2, \ldots, \lceil n/n_b \rceil$ do
          Select the current mini-batch and evaluate the penalized loss (6) on it.
          Update $\mathbf{W}$ by the Cayley transform (7), using the partial gradient $\mathbf{G}$ and step size $\eta$.
          Update the remaining parameters by gradient descent, with the gradients obtained by backpropagation.
          Perform batch normalization for each subnetwork output $h_j$, $j = 1, \ldots, k$.
          Adjust the learning rates by the Adam optimizer.
     end for
     Stop if the validation score is not improving.
end for
Algorithm 1: The SOS-BP Algorithm

The proposed SOS-BP algorithm adopts the mini-batch gradient descent strategy, and it utilizes some of the latest developments of neural network training techniques. It is capable of handling very large datasets. Next, we discuss several other computational aspects.

  1. For initialization, the projection matrix $\mathbf{W}$ should be generated subject to the orthogonality constraint $\mathbf{W}^T\mathbf{W} = \mathbf{I}_k$, which can be achieved by QR decomposition.

  2. Each subnetwork, modeled by a feedforward neural network, can be parametrized by $[n_1, \ldots, n_L; \text{act-type}]$, where $n_l$ stands for the number of nodes in the $l$th hidden layer and “act-type” is the type of activation function in each subnetwork. A deeper network could be more expressive but is also more expensive to train.

  3. The roughness penalty (4) for each ridge function is evaluated empirically for each mini-batch of data, where the second-order derivatives at each data point are readily obtainable by automatic differentiation. This procedure corresponds to the dashed input arrow of the roughness penalty node within each subnetwork; see Fig. 1.

  4. The three soft regularization terms in (6) are automatically taken into account by the Cayley transform and gradient descent. More specifically, $\lambda_1$ takes effect when computing the partial gradient $\mathbf{G}$ for the Cayley transform, while $\lambda_2$ and $\lambda_3$ take effect when computing the partial gradients of the remaining parameters in Algorithm 1.

  5. To deal with the zero-mean and unit-norm constraint for each ridge function, a normalization procedure is required. We adopt the popular batch normalization strategy with momentum set to zero. Referring to Fig. 1, the batch normalization is performed by the N node within each subnetwork.

The $\ell_1$ regularization in neural networks is based on the weight decay technique, which is used in our SOS-BP algorithm for shrinking the number of ridge functions or subnetworks. Unlike the shrinkage in sparse linear modeling (e.g., LASSO), the weight decay method cannot exactly enforce the weights of small effects to zero. Therefore, a subnetwork pruning procedure is needed to filter out the subnetworks with sufficiently small $|\beta_j|$. As a rule of thumb, we can rank all the subnetworks according to their importance ratios (3), and select the top-ranked subnetworks that accumulate more than 95% or 99% of the importance ratios. The remaining subnetworks are treated as negligible, and their corresponding $\beta_j$ are set to zero. We also apply a one-step refining procedure to re-estimate the selected subnetworks, conducted as follows: a) fix the projection layer; b) fix the regularization strengths; c) fine-tune the remaining network via the Adam optimizer for a certain number of epochs. After fine-tuning finishes, we can recalculate the importance ratios, and visualize the estimated projection indexes and ridge functions/subnetworks.
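The pruning rule described above can be written as a few lines over the fitted scales; a minimal sketch (the threshold and names are illustrative):

```python
import numpy as np

def select_subnetworks(beta, threshold=0.95):
    """Keep the top-ranked subnetworks whose importance ratios accumulate to `threshold`."""
    ir = np.abs(beta) / np.sum(np.abs(beta))     # importance ratios as in (3)
    order = np.argsort(ir)[::-1]                 # rank subnetworks from most to least important
    keep, cum = [], 0.0
    for j in order:
        keep.append(j)
        cum += ir[j]
        if cum >= threshold:
            break
    mask = np.zeros(len(ir), dtype=bool)
    mask[keep] = True
    return mask                                  # beta[~mask] would be set to zero before fine-tuning

print(select_subnetworks(np.array([2.0, -1.5, 0.3, 0.05])))  # [ True  True  True False]
```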

3.2 Hyperparameter Settings

As in typical neural network models, there exist multiple hyperparameters that need to be configured. In this section, we provide a detailed guideline for the hyperparameter configuration for the proposed xNN model.

For parsimonious modeling purposes, a small number of subnetworks is preferable, and a moderate value of $k$ can be used in practice. For the architecture of subnetworks, a network with two hidden layers and nonlinear activations is tested to be sufficient for curve fitting, and we fix the subnetwork structure to two hidden layers with hyperbolic tangent activations.

For gradient descent, the projection weights are initialized to be an orthogonal matrix, and the weights of the remaining layers are initialized with the Xavier normal initializer (Glorot and Bengio, 2010). The performance is tested to be insensitive to the initial learning rate as long as it is sufficiently small, and we fix it to 0.001. It is worth mentioning that an adaptive scheme for the step size of the Cayley transform was proposed by Wen and Yin (2013), while here we fix $\eta$ to be a relatively small value, which makes the network training relatively straightforward. For training data with sample size $n$, the mini-batch size is set accordingly.

The three regularization parameters are fine-tuned, as they are usually sensitive to data and model performance. Below we provide the setting that is tested through the numerical experiments from the next section:

  • The sparsity hyperparameter $\lambda_1$ for the projection weights: set to a small value by default, and it can be tuned at log-scale around the default;

  • The sparsity hyperparameter $\lambda_2$ for the subnetwork scales: set to a small value by default, and it can be tuned at log-scale around the default;

  • The smoothing hyperparameter $\lambda_3$: set to a small value by default, and it can be tuned at log-scale around the default.

These hyperparameters can be selected using the hold-out validation accuracy by grid search, random search, Bayesian optimization, or other tools from the area of automated machine learning. In this paper, we use simple grid search to tune the first two sparsity regularization parameters, while the smoothness regularization strength is fixed at its default value.
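The grid search over the two sparsity parameters can be organized as a plain double loop over log-scale candidates, keeping the pair with the best validation score; the sketch below uses a hypothetical `fit_xnn` trainer and illustrative grids, since the exact tuning ranges are configuration-dependent.

```python
import itertools
import numpy as np

lam1_grid = np.logspace(-5, -2, 4)   # illustrative log-scale candidates for lambda_1
lam2_grid = np.logspace(-5, -2, 4)   # illustrative log-scale candidates for lambda_2

def grid_search(train_data, valid_data, fit_xnn):
    """Return the (lambda_1, lambda_2) pair with the smallest validation loss."""
    best_pair, best_score = None, np.inf
    for lam1, lam2 in itertools.product(lam1_grid, lam2_grid):
        model = fit_xnn(train_data, lam1=lam1, lam2=lam2)   # hypothetical xNN trainer
        score = model.evaluate(valid_data)                   # hold-out validation loss
        if score < best_score:
            best_pair, best_score = (lam1, lam2), score
    return best_pair
```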

4 Numerical Experiments

In this section, we compare the proposed xNN.enhanced model to several benchmarks through simulation studies. It is shown that the intrinsically interpretable xNN.enhanced is flexible enough to approximate complex functions and achieve high prediction accuracy. We also include a real case study based on the LendingClub dataset as a showcase of the xNN.enhanced application.

4.1 Experimental Setting

Several benchmark models are considered for comparison, including the xNN.naive model (Vaughan et al., 2018), multi-layer perceptron (MLP), extreme learning machine (ELM), support vector machine (SVM), random forest (RF), least absolute shrinkage and selection operator (LASSO) and logistic regression (LogR; only for the real data application).

Each dataset is split into training, validation, and test sets, and the validation set is used to control early stopping for the neural network-based models and to select the best hyperparameters. In particular, we use the following settings. For LASSO, the shrinkage parameter is tuned over a prespecified grid. For SVM with the radial basis function (RBF) kernel, the regularization parameter and the kernel parameter are selected by grid search. We use a two-hidden-layer MLP with 100 hyperbolic tangent nodes in the first hidden layer and 60 hyperbolic tangent nodes in the second hidden layer. For ELM, we use the version with generalized radial basis functions and random layers (Huang et al., 2006, Fernández-Navarro et al., 2011), where the number of hidden nodes and the kernel width parameter are tuned by grid search. For RF with 100 trees, the maximum tree depth is selected by grid search. For a fair comparison, the hyperparameters for both the naive and enhanced xNN models are configured in the same way according to the discussion in Section 3.2.

All the experiments are implemented using the Python environment on a server with multiple Intel Xeon 2.60G CPUs. Both the naive and enhanced xNN models are implemented using the TensorFlow 2.0 platform, while all the other benchmark models are implemented using the Scikit-learn and Sklearn-extensions packages.

4.2 Simulation Study

We consider six different scenarios under the regression settings. In all the examples, we generate 10-dimensional features using the following mechanism:

  1. Generate i.i.d. uniform random variables $u_1, \ldots, u_{10}$, one for each feature;

  2. Generate an additional random variable $u$ from the same uniform distribution, shared by all features;

  3. For each $j$, generate the $j$th feature by mixing $u_j$ with the shared variable $u$, where the mixing weight is chosen to be 1 so that all features are pairwise correlated (a sketch of this mechanism is given below).
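A hedged sketch of this generating mechanism follows; the uniform range and the exact mixing formula are our assumptions for illustration, since only the general recipe is described here.

```python
import numpy as np

def generate_features(n, d=10, t=1.0, seed=None):
    """Correlated features: independent uniforms mixed with one shared uniform variable."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1, 1, size=(n, d))    # i.i.d. uniform variables, one per feature
    v = rng.uniform(-1, 1, size=(n, 1))    # shared random variable inducing correlation
    return (u + t * v) / (1.0 + t)         # mixing weight t controls the pairwise correlation

x = generate_features(5000, seed=0)
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])  # about t^2 / (1 + t^2) = 0.5 when t = 1
```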

Then, the response is generated according to different model assumptions as specified below.

S1: Additive model (orthogonal projection)
This is a case that follows the enhanced xNN model assumption. It consists of four additive components with mutually orthogonal projection indexes, i.e.,

(8)

where the weights and ridge functions are given by

Note that in the projection matrix, the last three features are treated as inactive variables.

S2: Additive model (near-orthogonal projection)
In this scenario, the model also takes the additive form, while the projection indexes are not mutually orthogonal,

(9)

where the weights and ridge functions are given by

S3 – S6: Non-additive models
Consider the following four models that all violate the AIM assumption:

(10)
(11)
(12)
(13)

In all six scenarios, i.i.d. white noise $\varepsilon$ is added to the response. Four different sample sizes are considered, including 1000, 2000, 5000, and 10000. Each dataset is further split into two parts, with 80% for training and 20% for validation. Moreover, a test set is generated following the same data generation mechanism.

Name Size xNN(enhanced) xNN(naive) SVR RF MLP ELM LASSO
S1 1000 1.086 1.088 1.274 1.415 1.232 1.476 2.584
S1 2000 1.039 1.043 1.168 1.308 1.121 1.260 2.548
S1 5000 1.007 1.009 1.090 1.251 1.043 1.149 2.561
S1 10000 1.004 1.005 1.069 1.239 1.023 1.091 2.600
S2 1000 1.068 1.065 1.460 1.189 1.134 1.622 2.141
S2 2000 1.030 1.031 1.305 1.119 1.066 1.384 2.131
S2 5000 1.002 1.002 1.169 1.067 1.019 1.254 2.116
S2 10000 1.002 1.000 1.109 1.054 1.011 1.167 2.129
S3 1000 1.100 1.099 1.109 1.127 1.115 1.177 1.370
S3 2000 1.038 1.065 1.060 1.092 1.058 1.108 1.362
S3 5000 1.005 1.019 1.022 1.057 1.018 1.057 1.349
S3 10000 1.003 1.009 1.016 1.057 1.012 1.037 1.361
S4 1000 1.095 1.135 1.196 1.280 1.220 1.356 2.121
S4 2000 1.052 1.068 1.127 1.182 1.140 1.208 2.098
S4 5000 1.032 1.024 1.065 1.110 1.058 1.103 2.087
S4 10000 1.030 1.035 1.045 1.089 1.030 1.060 2.118
S5 1000 1.069 1.062 1.074 1.154 1.069 1.145 1.243
S5 2000 1.045 1.030 1.041 1.101 1.041 1.081 1.237
S5 5000 1.013 1.003 1.015 1.060 1.014 1.029 1.231
S5 10000 1.010 1.001 1.012 1.052 1.011 1.020 1.240
S6 1000 1.105 1.083 1.187 1.168 1.083 1.292 1.300
S6 2000 1.058 1.040 1.141 1.107 1.047 1.203 1.300
S6 5000 1.027 1.003 1.074 1.064 1.018 1.105 1.289
S6 10000 1.027 1.003 1.054 1.050 1.012 1.078 1.287
Table 2: Testing MSE comparison under Scenarios 1–6.

Table 2 summarizes the results of the compared models under Scenarios 1 to 6. The experiments are repeated 10 times, and the mean squared error (MSE) on the test set is reported. In particular, the best results are highlighted in bold. Even though Scenarios 2–6 are generated by functions that do not follow the assumption of (2), the xNN.enhanced model still achieves the best or nearly the best performance in most cases. Therefore, we can claim that the proposed xNN model is competitive regarding prediction accuracy.

The enhanced explainability of the proposed xNN model can be checked from Figs. 2–7. We draw the ridge functions and projection indexes for both the naive and enhanced xNN model estimates, based on one repetition with 10000 samples (note that the estimated results may differ slightly across repetitions). For Scenarios 1 and 2, the left, middle, and right panels of Figs. 2 and 3 represent the ground truth, the xNN.naive estimation, and the xNN.enhanced estimation, respectively. For Scenarios 3–6, the corresponding ground truth plot is not available, since the underlying functions do not follow the form of (1). For each sub-figure, the left panel shows the ridge function, and the right panel presents the corresponding projection index. Moreover, the ridge functions and corresponding projection indexes are sorted in descending order of the importance ratio (IR).

Figure 2: Visualized model fits (vs. the ground truth) for Scenario 1. Panels: (a) Ground Truth; (b) xNN.naive; (c) xNN.enhanced.
Figure 3: Visualized model fits (vs. the ground truth) for Scenario 2. Panels: (a) Ground Truth; (b) xNN.naive; (c) xNN.enhanced.
Figure 4: Visualized model fits for Scenario 3.
Figure 5: Visualized model fits for Scenario 4.
Figure 6: Visualized model fits for Scenario 5.
Figure 7: Visualized model fits for Scenario 6.

In Scenario 1, the sine curve is the dominant component, followed by the exponential curve, the linear curve, and the quadratic term. It can be seen in Fig. 2 that the four ridge functions and the corresponding projections are all successfully recovered by the enhanced xNN, which shows superior performance over the xNN.naive model. More specifically, the xNN.naive model suffers from correlated projection indexes, where the original sine curve is represented by two similar components. With the orthogonality constraint, such a problem can be effectively avoided, which leads to improved model interpretability.

Scenario 2 is designed with mutually near-orthogonal projection indexes. Interestingly, the enhanced xNN model can still find a good approximation to the ground truth. As shown in Fig. 3, the main effect is captured with an almost correct ridge function and projection index. The less important components are also well approximated. Referring to Table 2, we can see that such an approximation generalizes well on the test set as compared to the benchmark models. In contrast, the estimation results by xNN.naive still suffer from identifiability issues. That is, the first component is highly confounded with the third component, where the original linear term is mixed into other nonlinear terms.

Although Scenarios 3–6 are all complex functions that do not have the additive form, we can visualize their approximations for interpretation, as shown in Figs. 4–7. Consistent with our previous findings, the naive xNN model tends to have highly correlated and non-sparse projections, which makes the estimated xNN.naive model hard to interpret. The xNN.enhanced model is free from such problems, and thus it is generally more explainable. For example, in Fig. 4, two groups of quadratic terms are captured by xNN.enhanced. In Fig. 5, similar results are observed, and the relationship between one learned feature and the response looks like a square root function. In Fig. 6, xNN.enhanced reveals two different groups of variables with nonlinear ridge functions, while the remaining variables are instead linearly related to the response. In Fig. 7, it can be seen that the importance ratio of the sine-like curve is greater than 80%, and the remaining effects are further explained by the rest of the components, whose projection indexes are mutually orthogonal.

4.3 Real Data Application

Peer-to-peer (P2P) lending is a method of lending money through online services by matching individual lenders and borrowers, and it has been one of the hottest FinTech applications. The LendingClub dataset is obtained from https://www.lendingclub.com/info/download-data.action, including all the issued loans and the declined loan applications that do not meet the LendingClub credit underwriting policy, from Jan. 2015 to Jun. 2018. Each sample represents a loan application with six features, and a binary flag indicates the approval result (approved or declined). In particular, a data cleaning procedure is implemented to remove samples with missing values or outliers. We delete the cases whose risk score is greater than 850, as that is out of the range of the FICO score. Moreover, samples with a debt-to-income ratio outside the range of 0–200% or with a requested amount greater than 40000 are removed. As a result, the preprocessed dataset has 1,433,570 accepted applications and 5,607,986 declined cases. The dataset is then split into three parts, with 40% for training, 10% for validation, and the remaining 50% for test. Due to the large sample size, the SVM model is not applicable, and we compare the proposed xNN.enhanced model with the LogR, RF, MLP, and xNN.naive models.

The original loan purpose variable is categorical with multiple categories. To reduce complexity, we group these various purposes into five categories, including “Credit Card”, “Debt Consolidation”, “Housing”, “Purchase” and “Others”. A summary of the dataset is given in Table 3. This categorical variable is preprocessed using one-hot-encoding, such that each category corresponds to a different bias term in the subnetwork. These bias terms are then added up to form the subnetwork output, which is then linked to the final output, subject to the sparsity constraint.
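A small pandas sketch of this encoding step, assuming the grouped categories are already stored in a `Loan Purpose` column (the column name follows Table 3; the helper function is ours):

```python
import pandas as pd

def encode_purpose(df):
    """One-hot encode the five grouped loan-purpose categories."""
    categories = ["Credit Card", "Debt Consolidation", "Housing", "Purchase", "Others"]
    purpose = pd.Categorical(df["Loan Purpose"], categories=categories)
    return pd.get_dummies(purpose, prefix="purpose")   # one indicator column per category
```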

The receiver operating characteristic (ROC) curves of different models are plotted in Fig. 8, together with the area under the curve (AUC) scores shown in the bottom right corner. The MLP model performs the best while the LogR model ranks last, and the other models show relatively close performance. However, despite their high predictive performance, the black-box models, including RF, MLP, and ELM, are too complex to interpret; the LogR can be easily interpreted, but it is less accurate. In contrast, the proposed xNN.enhanced model achieves relatively high accuracy, while the estimated model is essentially interpretable.

No. Variable Range
X1 Application Date Jan. 2015 - Jun. 2018
X2 Amount Requested 150 - 40000
X3 Risk Score 300 - 850
X4 Debt-to-Income Ratio 0% - 200%
X5 Employment Length 0 - 10 & 11 (more than 10)
X6 Loan Purpose “Debt Consolidation”, “Credit Card”, “Housing”, “Purchases”, and “Others”
Table 3: Description of the LendingClub dataset.
Figure 8: ROC curve comparison for the LendingClub dataset.

The estimated subnetworks and projection indexes of xNN.enhanced are visualized in Fig. 9. Without any prior knowledge, xNN.enhanced automatically approximates the GAM structure. Compared to marginal approved rates on the left subfigure of Fig. 9, the estimated ridge functions are smooth and can provide a quantitative ranking of different variables. More specifically, we have the following findings.

(a) Marginal plot
(b) xNN.enhanced
Figure 9: The left sub-figure shows the marginal approved rates with respect to both numerical and categorical variables. The right sub-figure shows the estimated xNN.enhanced model.

First, the risk score (X3) is the most important feature for a loan application, with a 45.2% importance ratio. It is found that risk scores around 700 are generally preferred. Second, the employment length (X5) is the second most important feature (with a 21.9% importance ratio). It mainly impacts applicants who have fewer than two years of working experience, and it also has a slight influence on applicants with around five years of working experience. Third, the model suggests that a moderate debt-to-income ratio (X4) is preferable. The application date (X1) is also an influential factor of the application result, as the loan acquisition policy may change over time. Finally, the loan amount (X2) and loan purpose (X6) turn out to be less important, but we can still get hints for successful applications. For instance, loan applications with the purposes of “Credit Card” and “Debt Consolidation” are shown to have higher approval rates than the other three categories.

4.4 Summary of Results

We summarize the above experimental results from the following two perspectives.

  1. The proposed xNN.enhanced model is flexible enough to decompose a complex relationship into several additive components. Compared with popularly used machine learning algorithms, the numerical results show that the xNN.enhanced model is competitive regarding prediction accuracy.

  2. The proposed xNN.enhanced model is inherently interpretable with additivity, sparsity, orthogonality, and smoothness considerations. The estimated projection indexes are sparse and uncorrelated, and the fitted subnetworks are smooth. When the true underlying model is close to the additive assumption, most of the main effects can be nicely captured and well explained; in more complicated settings, the xNN.enhanced model can serve as an interpretable surrogate, which enhances the interpretability of neural networks for modeling complex relationships.

Therefore, the proposed xNN.enhanced method provides an effective approach that balances predictive accuracy and model interpretability, and it is a promising tool for interpretable machine learning.

5 Conclusion

This paper studies the explainable neural networks subject to interpretability constraints in terms of additivity, sparsity, orthogonality, and smoothness. First, a complex function is decomposed into sparse additive subnetworks, which is straightforward for model interpretation. Second, the projection indexes are enforced to be mutually orthogonal such that the resulting subnetworks tend to be less confounded with each other. Lastly, each subnetwork-represented ridge function is estimated subject to the smoothness constraint, so that the functional relationship can be better explained. Numerical experiments based on simulation and real-world datasets demonstrate that the proposed xNN model has competitive prediction performance compared to existing machine learning models. More importantly, such an xNN model is designed to be intrinsically interpretable.

There are some promising topics for future study. First, the xNN architecture can be further improved to take other practical constraints into account. For example, the ridge functions can be constrained to be monotonic (either increasing or decreasing), so that the resulting model can better fit prior experience or domain knowledge. Second, the orthogonality constraint (2d) might be too restrictive for achieving high prediction accuracy, and it is of practical interest to consider near-orthogonality as a relaxed constraint. Third, it is also of our interest to investigate the main effects and interaction effects under the xNN framework. It is our plan to study the sensitivity of these effects in the sense of functional analysis of variance.

Appendix

Proof of Theorem 1

When Condition (B1) holds and the projection matrix $\mathbf{W}$ is orthogonal, we can directly deduce that Conditions (A1) and (A3) hold. According to Yuan (2011), the sum of the quadratic components is identifiable while the individual quadratic components are not identifiable. Here we are in the position to prove that Condition (B2) is sufficient to identify the individual quadratic ridge functions.

Without loss of generality, assume the first $m$ ridge functions are quadratic, written as $h_j(z) = a_j z^2$ with non-zero coefficients $a_j$ for $j = 1, \ldots, m$. Using matrix notation, these quadratic terms can be represented by the matrix $\mathbf{B} = \sum_{j=1}^{m} a_j \mathbf{w}_j \mathbf{w}_j^T$, so that $\sum_{j=1}^{m} a_j (\mathbf{w}_j^T\mathbf{x})^2 = \mathbf{x}^T\mathbf{B}\mathbf{x}$. Since $\mathbf{B}$ is a real symmetric matrix, we have that

$$\mathbf{B}\mathbf{w}_j = a_j \mathbf{w}_j,$$

for $j = 1, \ldots, m$. Construct a diagonal matrix $\mathbf{D} = \mathrm{diag}(a_1, \ldots, a_m, 0, \ldots, 0)$ and an orthonormal matrix $\mathbf{U} = [\mathbf{w}_1, \ldots, \mathbf{w}_m, \mathbf{U}_\perp]$, with $\mathbf{U}_\perp$ being an arbitrary orthogonal complement to $[\mathbf{w}_1, \ldots, \mathbf{w}_m]$. Then, we can write $\mathbf{B} = \mathbf{U}\mathbf{D}\mathbf{U}^T$, which corresponds to the eigen-decomposition of $\mathbf{B}$. The pairs $(a_j, \mathbf{w}_j)$ are the non-zero eigenvalues and corresponding eigenvectors, respectively. Since $a_1, \ldots, a_m$ are all distinct, the corresponding unit eigenvectors are uniquely determined up to sign. Therefore, both the ridge functions and projection indexes of these quadratic components are identifiable.

Acknowledgment

We thank Vijay Nair and Joel Vaughan from Wells Fargo for insightful discussions. This research project was partially supported by the Big Data Project Fund of The University of Hong Kong from Dr Patrick Poon’s donation.

References

  • Apley (2016) Daniel W Apley. Visualizing the effects of predictor variables in black box supervised learning models. arXiv:1612.08468, 2016.
  • Ba and Caruana (2014) Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems (NIPS), pages 2654–2662, 2014.
  • Bair et al. (2006) Eric Bair, Trevor Hastie, Debashis Paul, and Robert Tibshirani. Prediction by supervised principal components. J. Am. Stat. Assoc., 101(473):119–137, 2006.
  • Barshan et al. (2011) Elnaz Barshan, Ali Ghodsi, Zohreh Azimifar, and Mansoor Zolghadri Jahromi. Supervised principal component analysis: Visualization, classification and regression on subspaces and submanifolds. Pattern Recognit., 44(7):1357–1371, 2011.
  • Brainard (2018) Lael Brainard. What are we learning about artificial intelligence in financial services? https://www.federalreserve.gov/newsevents/speech/brainard20181113a.htm, 2018.
  • Bucilua et al. (2006) Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535–541. ACM, 2006.
  • Chen and Samworth (2016) Yining Chen and Richard J Samworth. Generalized additive and index models with shape constraints. J. R. Stat. Soc. Series B Stat. Methodol., 78(4):729–754, 2016.
  • Fernández-Navarro et al. (2011) Francisco Fernández-Navarro, César Hervás-Martínez, Javier Sanchez-Monedero, and Pedro Antonio Gutiérrez. Melm-grbf: a modified version of the extreme learning machine for generalized radial basis function neural networks. Neurocomputing, 74(16):2502–2510, 2011.
  • Friedman (2001) Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Ann. Stat., 29(5):1189–1232, 2001.
  • Friedman and Stuetzle (1981) Jerome H Friedman and Werner Stuetzle. Projection pursuit regression. J. Am. Stat. Assoc., 76(376):817–823, 1981.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
  • Goldstein et al. (2015) Alex Goldstein, Adam Kapelner, Justin Bleich, and Emil Pitkin. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. J. Comput. Graph. Stat., 24(1):44–65, 2015.
  • Gunning (2017) David Gunning. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), 2017.
  • Hastie and Tibshirani (1990) Trevor Hastie and Robert Tibshirani. Generalized Additive Models. Chapman & Hall, London, 1990.
  • Hastie et al. (2009) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, NY: Springer, 2009.
  • Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv:1503.02531, 2015.
  • Hornik (1991) Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
  • Hu et al. (2018) Linwei Hu, Jie Chen, Vijayan N Nair, and Agus Sudjianto. Locally interpretable models and effects based on supervised partitioning (LIME-SUP). arXiv:1806.00663, 2018.
  • Huang et al. (2006) Guang-Bin Huang, Qin-Yu Zhu, and Chee-Kheong Siew. Extreme learning machine: theory and applications. Neurocomputing, 70(1-3):489–501, 2006.
  • Hwang et al. (1994) Jeng-Neng Hwang, Shyh-Rong Lay, Martin Maechler, R Douglas Martin, and Jim Schimert. Regression modeling in back-propagation and projection pursuit learning. IEEE Trans. Neural Netw., 5(3):342–353, 1994.
  • Ichimura (1993) Hidehiko Ichimura. Semiparametric least squares (sls) and weighted sls estimation of single-index models. J. Econom., 58(1-2):71–120, 1993.
  • Jolliffe (1982) Ian T Jolliffe. A note on the use of principal components in regression. Appl. Stat., pages 300–303, 1982.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
  • Liu et al. (2018) Xiaoyu Liu, Jie Chen, Vijayan Nair, and Agus Sudjianto. Model interpretation: A unified derivative-based framework for nonparametric regression and supervised machine learning. arXiv:1808.07216, 2018.
  • Ravikumar et al. (2009) Pradeep Ravikumar, John Lafferty, Han Liu, and Larry Wasserman. Sparse additive models. J. R. Stat. Soc. Series B Stat. Methodol., 71(5):1009–1030, 2009.
  • Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should I trust you? Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM, 2016.
  • Vaughan et al. (2018) Joel Vaughan, Agus Sudjianto, Erind Brahimi, Jie Chen, and Vijayan N Nair. Explainable neural networks based on additive index models. The RMA Journal, pages 40–49, October 2018.
  • Wahba (1990) Grace Wahba. Spline Models for Observational Data, volume 59. SIAM, 1990.
  • Wang et al. (2018) Jian Wang, Chen Xu, Xifeng Yang, and Jacek M Zurada. A novel pruning algorithm for smoothing feedforward neural networks based on group lasso method. IEEE Trans. Neural Netw. Learn. Syst., 29(5):2012–2024, 2018.
  • Wen and Yin (2013) Zaiwen Wen and Wotao Yin. A feasible method for optimization with orthogonality constraints. Math. Program., 142(1-2):397–434, 2013.
  • Yuan (2011) Ming Yuan. On the identifiability of additive index models. Stat. Sin., 21:1901–1911, 2011.
  • Zou et al. (2006) Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse principal component analysis. J. Comput. Graph. Stat., 15(2):265–286, 2006.