A Descriptive Study of Variable Discretization and Cost-Sensitive Logistic Regression on Imbalanced Credit Data

Lili Zhang^a, Herman Ray^b and Soon Tan^c

^a Program in Analytics and Data Science, Kennesaw State University, USA; ^b Analytics and Data Science Institute, Kennesaw State University, USA; ^c Ermas Consulting Inc., USA

Contact: Herman Ray, hray8@kennesaw.edu
Abstract

Training classification models on imbalanced data sets tends to result in bias towards the majority class. In this paper, we demonstrate how variable discretization and Cost-Sensitive Logistic Regression help mitigate this bias on an imbalanced credit scoring data set. 10-fold cross-validation is used as the evaluation method, and the performance measurements are ROC curves and the associated Area Under the Curve. The results show that good variable discretization and Cost-Sensitive Logistic Regression with the best class weight can reduce the model bias and/or variance. It is also shown that effective variable selection helps reduce the model variance. From the algorithm perspective, Cost-Sensitive Logistic Regression is beneficial for increasing the predictive ability of predictors even if they are not in their best forms, while keeping the multivariate effect and the univariate effect of each predictor consistent. From the predictors' perspective, variable discretization performs slightly better than Cost-Sensitive Logistic Regression, provides more reasonable coefficient estimates for predictors that have a nonlinear relationship against their empirical logits, and is robust to penalty weights for misclassifications of events and non-events determined by their proportions.

Article type: Application Note

Keywords: imbalanced learning; variable discretization; cost-sensitive logistic regression; credit scoring

1 Introduction

Imbalanced learning is defined as the knowledge discovery process on severely skewed data sets to support decision making [4]. The tasks include regression, classification, and clustering. For classification, it refers to learning the decision boundary on a data set where the proportion of interesting events in the dependent variable is very low. Effective classification on imbalanced data is key to many real-world problems like anti-money laundering, fraud detection, credit scoring, rare disease diagnosis, spam detection, and cybersecurity. However, classical machine learning algorithms and statistical methods usually perform poorly without any adjustment when the event rate is low [10].

To solve these problems more efficiently, researchers and practitioners have made efforts from various perspectives, such as data sampling and algorithm design, taking concrete problem characteristics into consideration. In this paper, we focus on the credit scoring problem: predicting the probability of a debtor's default or delinquency. Default instances are usually far fewer than non-default instances. We provide a detailed descriptive study of how variable discretization and Cost-Sensitive Logistic Regression help mitigate the bias on an imbalanced credit scoring data set. These two techniques are studied because their high interpretability serves the regulatory requirements of credit scoring.

The paper is structured as follows. In Section 2, related work is reviewed. In Section 3, the data is explored and discretized. In Section 4, the models are developed, evaluated, and compared. In Section 5, conclusions and future work are discussed.

2 Related Work

A comprehensive review of the foundations, algorithms, and applications of imbalanced learning was conducted by He et al. in 2013 [4]. It summarized past research in five categories: sampling methods, cost-sensitive methods, kernel-based learning methods, active learning methods, and one-class learning methods. In 2001, King and Zeng proposed a weighting technique for logistic regression on rare events data, where the weighted log-likelihood in Eq. 2 is maximized instead of the log-likelihood in Eq. 1 during the training phase [7]. Here $\pi_i$ denotes the predicted event probability of observation $i$, $w_1 = \tau / \bar{y}$ and $w_0 = (1 - \tau) / (1 - \bar{y})$, where $\tau$ is the population fraction of events and $\bar{y}$ is the sample proportion of events induced by choice-based sampling.

$$\ln L(\beta) = \sum_{\{i:\, y_i = 1\}} \ln(\pi_i) + \sum_{\{i:\, y_i = 0\}} \ln(1 - \pi_i) \qquad (1)$$
$$\ln L_w(\beta) = w_1 \sum_{\{i:\, y_i = 1\}} \ln(\pi_i) + w_0 \sum_{\{i:\, y_i = 0\}} \ln(1 - \pi_i) \qquad (2)$$

The weighted logistic regression in Eq. 2 is referred to as Class-Dependent Cost-Sensitive Logistic Regression [9]. Bahnsen et al. proposed a different version of Cost-Sensitive Logistic Regression, called Example-Dependent Cost-Sensitive Logistic Regression [1], where the objective cost function depends on a pre-defined misclassification cost for each example/observation and is minimized during the training phase.
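As a concrete illustration, the following sketch evaluates Eq. 1 and Eq. 2 for given 0/1 labels y and predicted probabilities pi; all variable names here are hypothetical, and tau must be supplied from external knowledge of the population event rate.

```python
import numpy as np

def log_likelihood(y, pi):
    """Eq. 1: ordinary log-likelihood of a logistic model."""
    return np.sum(y * np.log(pi) + (1 - y) * np.log(1 - pi))

def weighted_log_likelihood(y, pi, tau):
    """Eq. 2: King-Zeng weighted log-likelihood, where tau is the
    population event fraction and y.mean() the sample event fraction."""
    y_bar = y.mean()
    w1 = tau / y_bar                  # weight on events
    w0 = (1 - tau) / (1 - y_bar)      # weight on non-events
    return np.sum(np.where(y == 1, w1 * np.log(pi), w0 * np.log(1 - pi)))
```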

Past research has rarely considered variable discretization as a technique for the imbalanced classification task, although it has been widely used as a generic technique for creating more powerful and interpretable discretized predictors from continuous ones. Dougherty et al. reviewed existing variable discretization methods, compared three of them (equal-width interval, entropy-based, and purity-based) in depth on 16 data sets, and found that the global entropy-based one performed best on average [2]. For entropy-based discretization methods, the evaluation measures include class information entropy, Gini, dissimilarity, and the Hellinger measure [8].

$$IV = \sum_{i=1}^{k} (P_{0i} - P_{1i}) \cdot WOE_i, \qquad WOE_i = \ln\left(\frac{P_{0i}}{P_{1i}}\right) \qquad (3)$$

To select powerful discretized variables in the credit scoring problem, a common measurement is the information value [3]. The information value of a discretized variable with $k$ levels is defined as in Eq. 3, where $P_{0i}$ is the number of non-events (i.e. non-delinquency) in the $i$-th level of the variable divided by the total number of non-events, $P_{1i}$ is the number of events (i.e. delinquency) in the $i$-th level divided by the total number of events, and $WOE_i = \ln(P_{0i}/P_{1i})$ is referred to as the weight of evidence. Variables with an information value above a chosen threshold should be considered in the model.
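A minimal pandas sketch of Eq. 3 follows, assuming bin labels in a Series and a target coded 0/1; the small floor value that keeps the WOE finite for levels with no events or no non-events is our own adjustment, not stated in the paper.

```python
import numpy as np
import pandas as pd

def information_value(bins: pd.Series, target: pd.Series) -> float:
    """Eq. 3: IV = sum_i (P0_i - P1_i) * WOE_i, WOE_i = ln(P0_i / P1_i)."""
    tab = pd.crosstab(bins, target)                 # rows: levels, cols: 0/1
    p0 = (tab[0] / tab[0].sum()).clip(lower=1e-6)   # non-event share per level
    p1 = (tab[1] / tab[1].sum()).clip(lower=1e-6)   # event share per level
    woe = np.log(p0 / p1)                           # weight of evidence
    return float(((p0 - p1) * woe).sum())
```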

3 Data

Biographical and financial information from clients is available in the data set from the Kaggle competition Give Me Some Credit [6]. The characteristics of the individuals in the data are represented by the variables shown in Table 3: one binary dependent variable and ten interval predictors. The goal is to predict whether a client will experience financial distress in the next two years, indicated by the dependent variable SeriousDlqin2yrs. As shown in Table 3, delinquency instances are far outnumbered by non-delinquency instances; the proportion of delinquency instances is approximately 6%.

Table: Variables for Analysis and Modeling (variable names as in the Kaggle data set [6]).

Variable | Type | Description
SeriousDlqin2yrs | Binary | Person experienced 90 days past due delinquency or worse
MonthlyIncome | Interval | Monthly income
DebtRatio | Interval | Monthly debt payments, alimony, and living costs divided by monthly gross income
age | Interval | Age of borrower in years
NumberOfDependents | Interval | Number of dependents in family excluding themselves (spouse, children, etc.)
NumberOfOpenCreditLinesAndLoans | Interval | Number of open loans (installment, like car loan or mortgage) and lines of credit (e.g. credit cards)
NumberRealEstateLoansOrLines | Interval | Number of mortgage and real estate loans, including home equity lines of credit
RevolvingUtilizationOfUnsecuredLines | Interval | Total balance on credit cards and personal lines of credit (excluding real estate and installment debt like car loans) divided by the sum of credit limits
NumberOfTime30-59DaysPastDueNotWorse | Interval | Number of times borrower has been 30-59 days past due but no worse in the last 2 years
NumberOfTime60-89DaysPastDueNotWorse | Interval | Number of times borrower has been 60-89 days past due but no worse in the last 2 years
NumberOfTimes90DaysLate | Interval | Number of times borrower has been 90 days or more past due

Table: Frequency of Dependent Variable. Columns: SeriousDlqin2yrs, Frequency, Percent (%).
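For reference, a minimal way to load the data with pandas, assuming the Kaggle training file cs-training.csv with its leading index column:

```python
import pandas as pd

# Load the Give Me Some Credit training data [6]; the file name and
# index column follow the standard Kaggle download (an assumption here)
df = pd.read_csv("cs-training.csv", index_col=0)
print(df["SeriousDlqin2yrs"].value_counts(normalize=True))  # event rate
```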

Some observations have missing values in the original variables provided. They are treated as follows.

  1. When building the models with the original variables, these observations are dropped to ensure data accuracy and to support the model training computation.

  2. When building the models with the discretized variables, these observations are kept by grouping the missing values into a separate level of each affected variable.

3.1 Exploratory Analysis

Because the dependent variable is binary and all independent variables are interval, empirical logit plots are used to examine whether the relationship between the dependent variable and an independent variable is linear. If it is linear, we can use the interval form of that independent variable. If it is not, we need to discretize the variable to represent the nonlinearity. Moreover, through the empirical logit plots, we can check whether the univariate effects are positive or negative.

The empirical logit plot is created in the following steps.

  1. For each interval variable, generate percentile ranks.

  2. For each rank of each interval variable, calculate the total number of observations, the number of delinquency observations, and the mean of the interval variable.

  3. For each rank of each interval variable, compute the empirical logit, i.e. the log-odds of the observed delinquency proportion in that rank (with a small adjustment to avoid taking the log of zero).

  4. For each interval variable, plot the empirical logit against the mean in each rank together with their linear regression line. Each point in the plot summarizes all the data points in one rank by their mean.

  5. For each interval variable, plot the empirical logit against the rank together with their linear regression line (a minimal code sketch follows this list). Each point in the plot represents the data points in one rank by their rank index.
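A minimal pandas sketch of steps 1-3, assuming the data sits in a DataFrame with a 0/1 target column; the 0.5 continuity correction is an assumption, since the paper's exact adjustment is not shown.

```python
import numpy as np
import pandas as pd

def empirical_logit_table(df, var, target, n_ranks=100):
    """Steps 1-3: percentile-rank an interval variable and compute
    the per-rank empirical logit of the event rate."""
    d = df[[var, target]].dropna(subset=[var]).copy()
    # Ranks with identical cut points are merged, as noted in the paper
    d["rank"] = pd.qcut(d[var], q=n_ranks, labels=False, duplicates="drop")
    g = d.groupby("rank").agg(
        n=(target, "size"), events=(target, "sum"), mean_var=(var, "mean")
    )
    # 0.5 keeps the logit finite when a rank has 0 or n events (assumption)
    g["emp_logit"] = np.log((g["events"] + 0.5) / (g["n"] - g["events"] + 0.5))
    return g.reset_index()  # plot emp_logit vs. mean_var (step 4) or rank (step 5)
```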

To show how the empirical logit plot works, take one of the interval variables as an example. Its percentile rank information can be found in Table 3.1. Note that some of its ranks are merged together because they have the same cutting points (i.e. min, max). As shown in Figure 1(a), there is a nonlinear relationship between the variable and its empirical logit, mainly caused by extreme values. Note that these extreme values in the empirical logit plot cannot simply be removed, considering that each represents several hundred data points in the data set rather than a few. However, the relationship between the variable's rank and its empirical logit is quite linear in the positive direction, as shown in Figure 1(b). In this case, its rank, the discretized form of its original continuous values, is preferred for the modeling.

Table: Percentile Ranks of the Example Variable. Columns: Rank, Min, Max, Mean, Count, Event.

Figure 1: Empirical logit plotted (a) against the variable and (b) against its rank.

3.2 Variable Discretization

Three variable discretization methods (distance, quantile, and Gini) are compared, and quantile discretization gives the best Area Under the Curve (AUC) after fitting a logistic regression model on the data set partitioned into training and validation data. So each variable is ranked and discretized into at most 20 bins based on quantiles, with the maximum number of bins selected by the same procedure above.

Information value is used to measure the discrimination power of each individual variable after discretization, as shown in Table 3.2. Note that for some variables the resulting number of bins is less than 20, because bins with the same cutting points are merged together. For one variable there is an extra bin, because it contains missing values, which are separated into a bin of their own. A sketch of this discretization is given below.

Table: Information Values. Columns: Variable, Bins, Information Value.
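The discretization described above can be sketched as follows; the helper name and the pandas-based implementation are our assumptions, not the paper's code.

```python
import pandas as pd

def quantile_discretize(x: pd.Series, max_bins: int = 20) -> pd.Series:
    """Quantile-discretize an interval variable into at most max_bins bins;
    bins with identical cut points are merged (duplicates='drop'), and
    missing values are kept as a separate 'missing' level."""
    bins = pd.qcut(x, q=max_bins, labels=False, duplicates="drop")
    return bins.astype("object").fillna("missing")

# Each bin then becomes one dummy variable, as in Models 4 and 5:
# dummies = pd.get_dummies(quantile_discretize(df["MonthlyIncome"]),
#                          prefix="MonthlyIncome")
```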

4 Modeling

Logistic Regression and Class-Dependent Cost-Sensitive Logistic Regression are used as the methodology for their high interpretability. 10-fold cross-validation is used for model evaluation. The performance measurements are the ROC curve and AUC. The mean of the AUCs over the 10 folds is used to measure the model bias, while the standard deviation of the AUCs is used to measure the model variance. These are reasonable measurements, considering that model bias refers to the error introduced by approximating the true model and model variance refers to how much the estimated model would change if a different training data set were used [5]. A minimal example of this evaluation is sketched below.
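A minimal sketch of this evaluation with scikit-learn, assuming X and y hold the predictors and the 0/1 delinquency indicator:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# AUC on each of the 10 folds; the mean tracks model bias and the
# standard deviation tracks model variance, as defined above
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       cv=10, scoring="roc_auc")
print(f"mean AUC = {aucs.mean():.4f}, std = {aucs.std():.4f}")
```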

To evaluate and compare the performance of the variable discretization and Class-Dependent Cost-Sensitive Logistic Regression, the following five models are built.

  • Model 1: Logistic Regression model on all original interval variables provided.

  • Model 2: Logistic Regression model on the original interval variables with an information value above the threshold.

  • Model 3: Class-Dependent Cost-Sensitive Logistic Regression model on the original interval variables with an information value above the threshold, using the best Class 1 Weight selected by the mean of the AUCs of 10-fold cross-validation, as indicated by the gray line in Figure 2.

  • Model 4: Logistic Regression model on the discretized variables with an information value above the threshold, where the discretized variables are transformed by a one-hot encoder with each bin represented by one dummy variable.

  • Model 5: Class-Dependent Cost-Sensitive Logistic Regression model on the discretized variables with an information value above the threshold, using the best Class 1 Weight selected by the mean AUC of 10-fold cross-validation, which can be any value in a wide interval, as indicated by the blue line in Figure 2. The discretized variables are encoded to dummy variables in the same way as in Model 4.

Figure 2: Performance of the Cost-Sensitive Logistic Regression models under different Class 1 Weights (gray line: Model 3; blue line: Model 5).

For Model 3, to avoid requiring the population proportion of events used in Eq. 2, we use a single hyperparameter $w$ to conduct the weighting, as shown in Eq. 4, where $0 < w < 1$. $w$ is referred to as the Class 1 Weight, which penalizes the misclassification of events as non-events. Correspondingly, $1 - w$ is referred to as the Class 0 Weight, which penalizes the misclassification of non-events as events. The larger $w$ is, the more the misclassifications of events as non-events are penalized. As shown by the gray line in Figure 2, as the Class 1 Weight increases, meaning that more weight is put on the misclassification of events as non-events, the mean of the AUCs on the 10-fold cross-validation increases gradually and then decreases sharply as the weight approaches 1. The best value occurs at the peak of the gray line.

$$\ln L_w(\beta) = w \sum_{\{i:\, y_i = 1\}} \ln(\pi_i) + (1 - w) \sum_{\{i:\, y_i = 0\}} \ln(1 - \pi_i) \qquad (4)$$
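A sketch of the Class 1 Weight search under Eq. 4, using scikit-learn's class_weight argument to realize the $w$ and $1 - w$ scaling of the two log-likelihood terms; the grid below is illustrative, not the paper's actual grid.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def best_class1_weight(X, y, grid=np.linspace(0.05, 0.95, 19)):
    """Grid-search the Class 1 Weight w of Eq. 4 by mean 10-fold AUC."""
    results = []
    for w in grid:
        # class_weight scales each class's log-loss term, matching the
        # w / (1 - w) weighting of events and non-events in Eq. 4
        model = LogisticRegression(class_weight={1: w, 0: 1.0 - w},
                                   max_iter=1000)
        aucs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
        results.append((w, aucs.mean(), aucs.std()))
    # Return the weight with the highest mean AUC (the bias criterion)
    return max(results, key=lambda r: r[1])
```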

For Model 5, the same search for the best Class 1 Weight is conducted as for Model 3. The blue line in Figure 2 shows the performance of the Class-Dependent Cost-Sensitive Logistic Regression model on the discretized variables with an information value above the threshold under different Class 1 Weights. As shown, the Class 1 Weight has little influence over a wide interval of values. If we take the Class 1 Weight as 0.5, Model 5 is the same as Model 4, which penalizes the misclassification of events as non-events and of non-events as events on the same scale. Moreover, compared to the performance of the Class-Dependent Cost-Sensitive Logistic Regression model on the original interval variables (Model 3), the Class 1 Weight has much less influence on this model. This implies that good variable discretization is robust to penalty weights determined by the proportions of events and non-events.

The ROC curves of the five models can be found in Figure 3. Because Model 5 ends up the same as Model 4, we only compare Model 4 with the other models. Model 1 and Model 2 have similar AUCs, but the ROC curves of Model 2 are closer to each other, indicating that the variables with an information value below the threshold do not contribute much to the model. The ROC curves of Model 3 and Model 4 are much closer to the upper-left corner than those of Model 2. Moreover, for Model 4, the ROC curves on the 10-fold cross-validation are closer to each other. This is further confirmed by the means and standard deviations of the AUCs on the 10-fold cross-validation for Models 1 through 4, reported in Table 4.

Figure 3: 10-Fold Cross-Validation ROC curves. Panels: (a) Model 1, (b) Model 2, (c) Model 3, (d) Model 4.

The estimated coefficients of the models are also examined. The estimated parameters of Model 2 and Model 3 can be found in Table 4. Their values differ, as does the sign of one variable: it is negative in Model 2 but positive in Model 3. Its empirical logit plot in Figure 4(c) shows a positive relationship, so the positive sign in Model 3 is consistent with its univariate effect. For the other variables, the signs of the estimated parameters are consistent with their univariate effects shown in the empirical logit plots in Figures 4(a), 4(b), and 4(d). The estimated parameters of Model 4 are not presented here, because listing all the dummy variables is space-consuming, but the one-hot encoded discretized variables give more interpretable estimates, considering they are binary indicators.

Table: 10-Fold Cross-Validation AUC of Models. Columns: Model, Mean, Std.; rows: Models 1 through 4.

Table: Estimated Parameters of Model 2 and Model 3. Columns: Parameter, Model 2 Estimate, Model 3 Estimate; first row: Intercept.

Figure 4: Empirical Logit Plots against Ranks (panels (a) through (d), one per selected variable).

In short summary: from Model 1 to Model 2, selecting only the interval variables with an information value above the threshold reduces the model variance. From Model 2 to Model 3, penalizing the misclassifications of events and non-events on different scales by running the Class-Dependent Cost-Sensitive Logistic Regression reduces the model bias, and all multivariate effects become consistent with the univariate effects based on the signs of the estimated parameters. From Model 2 to Model 4, using the discretized form of the variables with an information value above the threshold reduces both the model bias and the model variance, and Model 4 is slightly better than Model 3. From Model 4 to Model 5, running the Class-Dependent Cost-Sensitive Logistic Regression on the discretized variables leaves the model performance unchanged.

5 Discussions and Conclusions

To improve the model performance, two efforts have been made, from the perspective of the predictors and of the modeling algorithm respectively. Based on the ROC curves and AUCs of 10-fold cross-validation, good variable discretization and Class-Dependent Cost-Sensitive Logistic Regression with the best class weight help mitigate the imbalance in the data and reduce the model bias and/or variance. We also observe that effective variable selection can help reduce the model variance. Moreover, Class-Dependent Cost-Sensitive Logistic Regression is beneficial for increasing the predictive power of predictors during the training phase, even if those predictors are not transformed into their best forms, and for keeping the multivariate effect and the univariate effect of each predictor consistent.

On the other hand, the model with well-discretized variables performs slightly better than Class-Dependent Cost-Sensitive Logistic Regression, provides more reasonable coefficient estimates, and is robust to the penalty scales of misclassifications of events and non-events determined by their proportions. This indicates that we should always discretize variables that show a nonlinear relationship against their empirical logits.

In this study, we provide a detailed study of variable discretization and Class-Dependent Cost-Sensitive Logistic Regression on an imbalanced credit data set. In the future, we will consider more data sets, study concretely the relationship between the penalty scales and the proportions of events and non-events, and compare comprehensively with other classification algorithms, such as neural networks, and with sampling methods for imbalanced learning.

References

  • [1] A.C. Bahnsen, D. Aouada, and B. Ottersten, Example-dependent cost-sensitive logistic regression for credit scoring, in Machine Learning and Applications (ICMLA), 2014 13th International Conference on. IEEE, 2014, pp. 263–269.
  • [2] J. Dougherty, R. Kohavi, and M. Sahami, Supervised and unsupervised discretization of continuous features, in Machine Learning Proceedings 1995, Elsevier, 1995, pp. 194–202.
  • [3] D.J. Hand and W.E. Henley, Statistical classification methods in consumer credit scoring: a review, Journal of the Royal Statistical Society: Series A (Statistics in Society) 160 (1997), pp. 523–541.
  • [4] H. He and Y. Ma, Imbalanced learning: foundations, algorithms, and applications, John Wiley & Sons, 2013.
  • [5] G. James, D. Witten, T. Hastie, and R. Tibshirani, An introduction to statistical learning, Vol. 112, Springer, 2013.
  • [6] Kaggle, Give Me Some Credit. Available at https://www.kaggle.com/c/GiveMeSomeCredit/data, Accessed: 2018-02-01.
  • [7] G. King and L. Zeng, Logistic regression in rare events data, Political Analysis 9 (2001), pp. 137–163.
  • [8] S. Kotsiantis and D. Kanellopoulos, Discretization techniques: A recent survey, GESTS International Transactions on Computer Science and Engineering 32 (2006), pp. 47–58.
  • [9] mlr-org, Cost-Sensitive Classification. Available at https://mlr-org.github.io/mlr-tutorial/release/html/cost_sensitive_classif/index.html, Accessed: 2018-04-27.
  • [10] L. Zhang, J. Priestley, and X. Ni, Influence of the event rate on discrimination abilities of bankruptcy prediction models, International Journal of Database Management Systems 10 (2018), pp. 1–14.