Why Is My Classifier Discriminatory?

Irene Chen
MIT
iychen@mit.edu
Fredrik D. Johansson
MIT
fredrikj@mit.edu
David Sontag
MIT
dsontag@csail.mit.edu
Abstract

Recent attempts to achieve fairness in predictive models focus on the balance between fairness and accuracy. In sensitive applications such as healthcare or criminal justice, this trade-off is often undesirable, as any increase in prediction error could have devastating consequences. In this work, we argue that the fairness of predictions should be evaluated in the context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model. We decompose cost-based metrics of discrimination into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Finally, we perform case studies on prediction of income, mortality, and review ratings, confirming the value of this analysis. We find that data collection is often a means to reduce discrimination without sacrificing accuracy.

 

Preprint. Work in progress.

1 Introduction

As machine learning algorithms increasingly affect decision making in society, many have raised concerns about the fairness and biases of these algorithms, especially in applications to healthcare or criminal justice, where human lives are at stake (Angwin et al., 2016; Barocas & Selbst, 2016). It is often hoped that the use of automatic decision support systems trained on observational data will remove human bias and improve accuracy. However, factors such as data quality and model choice may encode unintentional discrimination, resulting in systematic disparate impact.

We study fairness in prediction of outcomes such as recidivism, annual income, or patient mortality. Fairness is evaluated with respect to protected groups of individuals defined by attributes such as gender or ethnicity (Ruggieri et al., 2010). Following previous work, we measure discrimination in terms of differences in prediction cost across protected groups (Calders & Verwer, 2010; Dwork et al., 2012; Feldman et al., 2015). Correcting for issues of data provenance and historical bias in labels is outside of the scope of this work. Much research has been devoted to constraining models to satisfy cost-based fairness in prediction, as we expand on below. The impact of data collection on discrimination has received comparatively little attention.

Fairness in prediction has been encouraged by adjusting models through regularization (Bechavod & Ligett, 2017; Kamishima et al., 2011), constraints (Kamiran et al., 2010; Zafar et al., 2017), and representation learning (Zemel et al., 2013). These attempts can be broadly categorized as model-based approaches to fairness. Others have applied data preprocessing to reduce discrimination (Hajian & Domingo-Ferrer, 2013; Feldman et al., 2015; Calmon et al., 2017). For an empirical comparison, see for example Friedler et al. (2018). Inevitably, however, restricting the model class or perturbing training data to improve fairness may harm predictive accuracy (Corbett-Davies et al., 2017).

A tradeoff of predictive accuracy for fairness is sometimes difficult to motivate when predictions influence high-stakes decisions. In particular, post-hoc correction methods based on randomizing predictions (Hardt et al., 2016; Pleiss et al., 2017) are unjustifiable for ethical reasons in clinical tasks such as severity scoring. Moreover, as pointed out by Woodworth et al. (2017), post-hoc correction may lead to suboptimal predictive accuracy compared to other equally fair classifiers.

Disparate predictive accuracy can often be explained by insufficient or skewed sample sizes or inherent unpredictability of the outcome given the available set of variables. With this in mind, we propose that fairness of predictive models should be analyzed in terms of model bias, model variance, and outcome noise before they are constrained to satisfy fairness criteria. This exposes and separates the adverse impact of inadequate data collection and the choice of the model on fairness. The cost of fairness need not always be one of predictive accuracy, but one of investment in data collection and model development. In high-stakes applications, the benefits often outweigh the costs.

Main contributions

We give a procedure for analyzing discrimination in predictive models with respect to cost-based definitions of group fairness, emphasizing the impact of data collection. First, we propose the use of bias-variance-noise decompositions for separating sources of discrimination. Second, we suggest procedures for estimating the value of collecting additional training samples. Finally, we propose the use of clustering for identifying subpopulations that are discriminated against, to guide additional variable collection. We use these tools to analyze the fairness of common learning algorithms in three tasks: predicting income based on census data, predicting mortality of patients in critical care, and predicting book review ratings from text. We find that the accuracy of predictions of the mortality of cancer patients varies substantially between protected groups. In addition, our experiments confirm that the discrimination level is sensitive to the quality of the training data.

2 Background

We study fairness in prediction of an outcome $Y$. Predictions are based on a set of covariates $X$ and a protected attribute $A$. In mortality prediction, $X$ represents the medical history of a patient in critical care, $A$ the self-reported ethnicity, and $Y$ mortality. A model is considered fair if its errors are distributed similarly across protected groups, as measured by a cost function $\gamma_a$. Predictions $\hat{Y}_D = h_D(X)$ learned from a training set $D$ are issued by some $h_D$ from a class $\mathcal{H}$. The protected attribute is assumed to be binary, $A \in \{0, 1\}$, but our results generalize to the non-binary case. A dataset $D = \{(x_i, a_i, y_i)\}_{i=1}^n$ consists of $n$ samples distributed according to $p(X, A, Y)$. When clear from context, we drop the subscript from $\hat{Y}_D$.

A popular cost-based definition of fairness is the equalized odds criterion, which states that a binary classifier $\hat{Y}$ is fair if its false negative rates (FNR) and false positive rates (FPR) are equal across groups (Hardt et al., 2016). We define FPR and FNR with respect to protected group $a$ by

$\mathrm{FPR}_a(\hat{Y}) := \mathbb{E}_X[\hat{Y} \mid Y = 0, A = a], \qquad \mathrm{FNR}_a(\hat{Y}) := \mathbb{E}_X[1 - \hat{Y} \mid Y = 1, A = a].$

Exact equality, e.g. $\mathrm{FPR}_0(\hat{Y}) = \mathrm{FPR}_1(\hat{Y})$, is often hard to verify or enforce in practice. Instead, we study the degree to which such constraints are violated. More generally, we use differences in cost functions $\gamma_a$ between protected groups to define the level of discrimination $\Gamma$,

$\Gamma^{\gamma}(\hat{Y}) := \left| \gamma_0(\hat{Y}) - \gamma_1(\hat{Y}) \right|. \qquad (1)$

In this work we study cost functions $\gamma_a \in \{\mathrm{FPR}_a, \mathrm{FNR}_a, \mathrm{ZO}_a\}$ in binary classification tasks, with $\mathrm{ZO}_a$ the group-specific zero-one loss. In regression problems, we use the group-specific mean-squared error $\mathrm{MSE}_a$. According to (1), predictions $\hat{Y}$ satisfy equalized odds on $p$ if $\Gamma^{\mathrm{FPR}}(\hat{Y}) = 0$ and $\Gamma^{\mathrm{FNR}}(\hat{Y}) = 0$.
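To make these definitions concrete, the following minimal sketch computes group-specific FPR, FNR, and the corresponding discrimination levels from arrays of binary labels, predictions, and protected attributes. The array names and the toy data are our own illustrative choices, not the paper's code.

```python
import numpy as np

def group_rates(y_true, y_pred, a, group):
    """FPR and FNR restricted to the protected group `group` (binary labels and predictions)."""
    mask = (a == group)
    y, y_hat = y_true[mask], y_pred[mask]
    fpr = np.mean(y_hat[y == 0])        # E[Y_hat | Y = 0, A = group]
    fnr = np.mean(1 - y_hat[y == 1])    # E[1 - Y_hat | Y = 1, A = group]
    return fpr, fnr

def discrimination_levels(y_true, y_pred, a):
    """Absolute FPR and FNR gaps between the two protected groups, as in (1)."""
    fpr0, fnr0 = group_rates(y_true, y_pred, a, 0)
    fpr1, fnr1 = group_rates(y_true, y_pred, a, 1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)

# Toy usage with random labels and predictions.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)
y = rng.integers(0, 2, size=1000)
y_hat = rng.integers(0, 2, size=1000)
gamma_fpr, gamma_fnr = discrimination_levels(y, y_hat, a)
print(f"Gamma_FPR = {gamma_fpr:.3f}, Gamma_FNR = {gamma_fnr:.3f}")
```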

Calibration and impossibility

A score-based classifier is calibrated if the prediction score assigned to a unit equals the fraction of positive outcomes for all units assigned similar scores. It is impossible for a classifier to be calibrated in every protected group and satisfy multiple cost-based fairness criteria at once, unless accuracy is perfect or base rates of outcomes are equal across groups (Chouldechova, 2017). A relaxed version of this result (Kleinberg et al., 2016) applies to the discrimination level $\Gamma$. Inevitably, both constraint-based methods and our approach are faced with a choice of which fairness criteria to satisfy, and at what cost.

3 Sources of perceived discrimination

There are many potential sources of discrimination in predictive models. In particular, the choice of hypothesis class and learning objective has received a lot of attention (Calders & Verwer, 2010; Zemel et al., 2013; Fish et al., 2016). However, data collection (the chosen set of predictive variables $X$, the sampling distribution $p(X, A, Y)$, and the training set size $n$) is an equally integral part of deploying fair machine learning systems in practice, and it should be guided to promote fairness. Below, we tease apart sources of discrimination through bias-variance-noise decompositions of cost-based fairness criteria. In general, we may think of noise in the outcome $Y$ as the effect of a set of unobserved variables $U$, potentially interacting with $X$. Even the optimal achievable error for predictions based on $X$ may be reduced further by observing parts of $U$. In Figure 1, we illustrate three common learning scenarios and study their fairness properties through bias, variance, and noise.

(a) For identically distributed protected groups and unaware outcomes (see below), bias and noise are equal in expectation. Perceived discrimination is only due to variance.
(b) Heteroskedastic noise, i.e. $\bar{N}_0 \neq \bar{N}_1$, may contribute to discrimination even for an optimal model if protected groups are not identically distributed.
(c) One choice of model may be more suited to one protected group, even under negligible noise and variance, resulting in a difference in expected bias, $\bar{B}_0 \neq \bar{B}_1$.
Figure 1: Scenarios illustrating how properties of the training set and model choice affect perceived discrimination in a binary classification task, under the assumption that outcomes and predictions are unaware, i.e. $Y = Y(X)$ and $\hat{Y} = \hat{Y}(X)$ do not depend directly on $A$. Through bias-variance-noise decompositions (see Section 3.1), we can identify which of these dominate in their effect on fairness. We propose procedures for addressing each component in Section 4, and use them in experiments (see Section 5) to mitigate discrimination in income prediction and prediction of ICU mortality.

To account for randomness in the sampling of training sets, we redefine the discrimination level (1) in terms of the expected cost over draws of a random training set $D$.

Definition 1.

The expected discrimination level $\bar{\Gamma}(\hat{Y})$ of a predictive model $\hat{Y}_D$, learned from a random training set $D$, is

$\bar{\Gamma}(\hat{Y}) := \left| \mathbb{E}_D[\gamma_0(\hat{Y}_D)] - \mathbb{E}_D[\gamma_1(\hat{Y}_D)] \right|.$

$\bar{\Gamma}$ is not observed in practice when only a single training set is available. If $n$ is small, it is recommended to estimate $\bar{\Gamma}$ through re-sampling methods such as bootstrapping (Efron, 1992).
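Since $\bar{\Gamma}$ averages over random training sets, one simple way to approximate it from a single dataset is to bootstrap the training data, refit the model, and average group-specific costs over the resamples. The sketch below is a minimal illustration of this idea; the random forest model, the variable names, and the toy data are our own choices rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def expected_discrimination(X_tr, y_tr, X_te, y_te, a_te, n_boot=50, seed=0):
    """Bootstrap estimate of the expected discrimination level (zero-one loss gap)."""
    rng = np.random.default_rng(seed)
    costs = {0: [], 1: []}
    n = len(y_tr)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                      # resampled training set D
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_tr[idx], y_tr[idx])
        y_hat = model.predict(X_te)
        for g in (0, 1):
            m = (a_te == g)
            costs[g].append(np.mean(y_hat[m] != y_te[m]))     # group-specific zero-one loss
    return abs(np.mean(costs[0]) - np.mean(costs[1]))         # |E_D[gamma_0] - E_D[gamma_1]|

# Toy usage with synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5))
a = rng.integers(0, 2, size=600)
y = (X[:, 0] + 0.5 * a * X[:, 1] + rng.normal(size=600) > 0).astype(int)
print(expected_discrimination(X[:400], y[:400], X[400:], y[400:], a[400:]))
```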

3.1 Bias-variance-noise decompositions of discrimination level

An algorithm that learns models $h_D$ from datasets $D$ is given, and the covariates $X$ and size $n$ of the training data are fixed. We assume that $\hat{Y}_D = h_D(X)$ is a deterministic function of $X$ given the training set $D$, e.g. a thresholded scoring function. Following Domingos (2000), we base our analysis on decompositions of loss functions $L(\hat{y}, y)$ evaluated at points $x$. For decompositions of the costs $\mathrm{ZO}_a$, $\mathrm{FPR}_a$ and $\mathrm{FNR}_a$ we let this be the zero-one loss, $L(\hat{y}, y) = \mathbb{1}[\hat{y} \neq y]$, and for $\mathrm{MSE}_a$, the squared loss, $L(\hat{y}, y) = (\hat{y} - y)^2$. We define the main prediction $\bar{y}(x)$ as the average prediction over draws of training sets for the squared loss, and the majority vote for the zero-one loss. The (Bayes) optimal prediction $y^*(x)$ achieves the smallest expected error with respect to the random outcome $Y$. Bias $B$, variance $V$ and noise $N$ at a point $x$ are defined by

$B(x) = L(y^*(x), \bar{y}(x)), \qquad V(x) = \mathbb{E}_D[L(\bar{y}(x), \hat{y}_D(x))], \qquad N(x) = \mathbb{E}_Y[L(y^*(x), Y) \mid X = x], \qquad (2)$

where $\bar{y}(x)$ and $y^*(x)$ are deterministic functions of $x$, while $\hat{y}_D(x)$ is a random variable through $D$.

The bias is the loss incurred by the main prediction relative to the optimal prediction. The variance is the average loss incurred by the predictions learned from different datasets relative to the main prediction. The noise is the remaining loss independent of the learning algorithm, often known as the Bayes error. We use these definitions to decompose $\bar{\Gamma}$ under various definitions of $\gamma_a$.

Theorem 1.

With $\gamma_a$ the group-specific zero-one loss or class-conditional versions (e.g. FNR, FPR), or the mean squared error, the expected cost $\bar{\gamma}_a := \mathbb{E}_D[\gamma_a(\hat{Y}_D)]$ and the discrimination level $\bar{\Gamma}$ admit decompositions of the form

$\bar{\gamma}_a = \bar{B}_a + \bar{V}_a + \bar{N}_a, \qquad \bar{\Gamma} = \left| (\bar{B}_0 - \bar{B}_1) + (\bar{V}_0 - \bar{V}_1) + (\bar{N}_0 - \bar{N}_1) \right|,$

where we leave out the argument $\hat{Y}_D$ in the decomposition of $\bar{\Gamma}$ for brevity. With $B$, $V$ and $N$ defined as in (2), we have

$\bar{B}_a = \mathbb{E}_x[B(x) \mid A = a], \qquad \bar{V}_a = \mathbb{E}_x[c_v(x) V(x) \mid A = a].$

For the zero-one loss, $c_v(x) = 1$ if $\bar{y}(x) = y^*(x)$, otherwise $c_v(x) = -1$. For the squared loss, $c_v(x) = 1$. The noise term for population losses is

$\bar{N}_a = \mathbb{E}_x[c_n(x) N(x) \mid A = a],$

and for class-conditional losses w.r.t. class $y$,

$\bar{N}_a = \mathbb{E}_x[c_n(x) N(x) \mid A = a, Y = y].$

For the zero-one loss and class-conditional variants, $c_n(x) = 2\,\mathbb{P}_D(\hat{y}_D(x) = y^*(x)) - 1$, and for the squared loss, $c_n(x) = 1$.

Proof (sketch). Conditioning and exchanging the order of expectation, the cases of the mean squared error and zero-one losses follow from Domingos (2000). Class-conditional losses follow from a case-by-case analysis of possible errors. See the supplementary material for a full proof. ∎
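For intuition about the terms in (2), the short simulation below estimates per-group bias, variance, and noise for a regression model under the squared loss (where $c_v = c_n = 1$), on a synthetic problem whose noise is known by construction. The data-generating process, the model, and all names are illustrative choices of ours, not the paper's experiment.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def sample(n):
    """Synthetic data: group A=1 is shifted and has noisier outcomes (illustrative choice)."""
    a = rng.integers(0, 2, size=n)
    x = rng.normal(loc=a, scale=1.0, size=n)
    noise_sd = 0.1 + 0.4 * a                      # heteroskedastic noise across groups
    y = x ** 2 + rng.normal(scale=noise_sd)
    return x.reshape(-1, 1), a, y

def bayes_optimal(x):
    return x[:, 0] ** 2                           # E[Y | X = x] for this process

x_eval, a_eval, _ = sample(2000)                  # fixed evaluation points
preds = []
for _ in range(100):                              # repeated draws of training sets D
    x_tr, _, y_tr = sample(500)
    model = DecisionTreeRegressor(max_depth=4).fit(x_tr, y_tr)
    preds.append(model.predict(x_eval))
preds = np.array(preds)

main = preds.mean(axis=0)                         # main prediction (squared loss)
bias = (bayes_optimal(x_eval) - main) ** 2        # B(x)
var = preds.var(axis=0)                           # V(x)
noise = (0.1 + 0.4 * a_eval) ** 2                 # N(x) = Var(Y | x), known by construction

for g in (0, 1):
    m = (a_eval == g)
    print(f"group {g}: bias={bias[m].mean():.3f}, var={var[m].mean():.3f}, noise={noise[m].mean():.3f}")
```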

Theorem 1 points to distinct sources of perceived discrimination. Significant differences in bias indicate that the chosen model class is not flexible enough to fit both protected groups well (see Figure 1(c)). This is typical of (misspecified) linear models, which approximate non-linear functions well only in small regions of the input space. Regularization or post-hoc correction of models effectively increases the bias of one of the groups, and should be considered only if there is reason to believe that the original bias is already minimal.

Differences in variance, $\bar{V}_0 \neq \bar{V}_1$, could be caused by differences in sample sizes $n_0, n_1$ or in group-conditional feature variance $\mathrm{Var}(X \mid A = a)$, combined with a high-capacity model. Targeted collection of training samples may help resolve this issue. Our decomposition does not apply to post-hoc randomization methods (Hardt et al., 2016), but we may treat these in the same way as we do random training sets and interpret them as increasing the variance of one group to improve fairness.

When noise is significantly different between protected groups, discrimination is partially unrelated to model choice and training set size and may only be reduced by measuring additional variables.

Proposition 1.

If $\bar{N}_0 \neq \bar{N}_1$, no model can be 0-discriminatory in expectation without access to additional information or without increasing bias or variance w.r.t. the Bayes optimal classifier.

Proof.

By definition, $\bar{\Gamma} = |(\bar{B}_0 - \bar{B}_1) + (\bar{V}_0 - \bar{V}_1) + (\bar{N}_0 - \bar{N}_1)|$. As the Bayes optimal classifier has neither bias nor variance, $\bar{\Gamma} = |\bar{N}_0 - \bar{N}_1| > 0$, and the result follows immediately. ∎

In line with Proposition 1, most methods for ensuring algorithmic fairness reduce discrimination by trading off a difference in noise for one in bias or variance. However, this trade-off is only motivated if the considered predictive model is close to Bayes optimal and no additional predictive variables may be measured. Moreover, if noise is homoskedastic in regression settings, post-hoc randomization is ill-advised, as the difference in Bayes error is zero, and discrimination is caused only by model bias or variance (see the supplementary material for a proof).

Estimating bias, variance and noise

Group-specific variance may be estimated through sample splitting or bootstrapping (Efron, 1992). In contrast, the noise and bias are difficult to estimate when $X$ is high-dimensional or continuous. In fact, no convergence results for noise estimates may be obtained without further assumptions on the data distribution (Antos et al., 1999). Under some such assumptions, noise may be approximately estimated using distance-based methods (Devijver & Kittler, 1982), nearest-neighbor methods (Fukunaga & Hummels, 1987; Cover & Hart, 1967), or classifier ensembles (Tumer & Ghosh, 1996). When comparing the discrimination level of two different models, noise terms cancel, as they are independent of the model. As a result, differences in bias may be estimated even when the noise is not known (see the supplementary material).
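As one concrete example of a nearest-neighbor noise estimate, the sketch below bounds the group-specific Bayes error using the classical asymptotic relation between the 1-nearest-neighbor error and the Bayes error for binary classification (Cover & Hart, 1967). This is a simplified stand-in for the estimators above, and the cross-validated 1-NN error is only a finite-sample proxy for the asymptotic quantity.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

def bayes_error_bounds(X, y, cv=5):
    """Bounds on the Bayes error E* from the 1-NN error E_NN (binary classification).

    Asymptotically, E* <= E_NN <= 2 E* (1 - E*), hence
    (1 - sqrt(1 - 2 E_NN)) / 2 <= E* <= E_NN."""
    y_hat = cross_val_predict(KNeighborsClassifier(n_neighbors=1), X, y, cv=cv)
    e_nn = np.mean(y_hat != y)
    lower = (1.0 - np.sqrt(max(0.0, 1.0 - 2.0 * e_nn))) / 2.0
    return lower, e_nn

# Applied separately per protected group (X, y, a assumed to be numpy arrays):
# for g in (0, 1):
#     m = (a == g)
#     print(g, bayes_error_bounds(X[m], y[m]))
```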

Testing for significant discrimination

When sample sizes are small, perceived discrimination may not be statistically significant. In the supplementary material, we give statistical tests both for the discrimination level $\hat{\Gamma}$ and for the difference in discrimination level between two models.

4 Reducing discrimination through data collection

In light of the decomposition of Theorem 1, we explore avenues for reducing group differences in bias, variance, and noise without sacrificing predictive accuracy. In practice, predictive accuracy is often artificially limited when data is expensive or impractical to collect. With an investment in training samples or measurement of predictive variables, both accuracy and fairness may be improved.

4.1 Increasing training set size

Standard regularization used to avoid overfitting is not guaranteed to improve or preserve fairness. An alternative route is to collect more training samples and reduce the impact of the bias-variance trade-off. When supplementary data is collected from the same distribution as the existing set, covariate shift may be avoided (Quionero-Candela et al., 2009). This is often achievable: labeled data may be expensive, such as when paying experts to label observations, but given the means to acquire additional labels, they would be drawn from the original distribution. To estimate the value of increasing sample size, we predict the discrimination level $\bar{\Gamma}(n)$ as the training set increases in size.

The curve measuring generalization performance of predictive models as a function of training set size is called a Type II learning curve (Domhan et al., 2015). We call the group-specific cost $\bar{\gamma}_a(n)$, as a function of the training set size $n$, the learning curve with respect to protected group $a$, and define the discrimination learning curve $\bar{\Gamma}(n)$ analogously (see Figure 2(a) for an example). Empirically, learning curves behave asymptotically as inverse power-law curves for diverse algorithms such as deep neural networks, support vector machines, and nearest-neighbor classifiers, even when model capacity is allowed to grow with $n$ (Hestness et al., 2017; Mukherjee et al., 2003). This observation is also supported by theoretical results (Amari, 1993).

Assumption 1 (Learning curves).

The population prediction loss $\bar{\gamma}(n)$ and group-specific losses $\bar{\gamma}_a(n)$, for a fixed learning algorithm, behave asymptotically as inverse power-law curves with parameters $(\alpha_a, \beta_a, \delta_a)$. That is, there exists an $M$ such that for $n \geq M$,

$\bar{\gamma}_a(n) \approx \beta_a n^{-\alpha_a} + \delta_a. \qquad (3)$

The intercepts $\delta_a$ in (3) represent the asymptotic bias and the Bayes error $\bar{N}_a$, with the former vanishing for consistent estimators. Accurately estimating $\delta_a$ from finite samples is often challenging, as the first term tends to dominate the learning curve for practical sample sizes.

In experiments, we find that inverse power-law models fit group-conditional (zero-one loss) and class-conditional (FPR, FNR) errors well, and we use these to extrapolate $\bar{\Gamma}(n)$ based on estimates from subsampled data.
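A minimal sketch of this procedure is fitting the inverse power-law in (3) to group-specific errors measured at several training set sizes and extrapolating the gap to larger $n$. The error values below are made-up numbers used only to make the snippet runnable; in practice they would come from repeated subsampling as described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def pow3(n, delta, beta, alpha):
    """Inverse power-law learning curve: gamma(n) = beta * n**(-alpha) + delta."""
    return beta * np.power(n, -alpha) + delta

# Hypothetical group-specific errors measured on held-out data at several sizes n.
sizes = np.array([500, 1000, 2000, 4000, 8000, 16000], dtype=float)
err_group0 = np.array([0.206, 0.185, 0.171, 0.161, 0.154, 0.150])
err_group1 = np.array([0.138, 0.122, 0.111, 0.104, 0.099, 0.096])

fits = {}
for name, err in [("group 0", err_group0), ("group 1", err_group1)]:
    params, _ = curve_fit(pow3, sizes, err, p0=[0.1, 1.0, 0.5], maxfev=20000)
    fits[name] = params
    print(f"{name}: estimated asymptote delta = {params[0]:.3f}")

# Extrapolated group gap at a larger (hypothetical) sample size.
n_future = 100000.0
gap = abs(pow3(n_future, *fits["group 0"]) - pow3(n_future, *fits["group 1"]))
print(f"extrapolated |gamma_0 - gamma_1| at n = {int(n_future)}: {gap:.3f}")
```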

4.2 Measuring additional variables

When discrimination is dominated by a difference in noise, $\bar{N}_0 \neq \bar{N}_1$, fairness may not be improved through model selection alone without sacrificing accuracy (see Proposition 1). Such a scenario is likely when the available covariates are not equally predictive of the outcome in both groups. We propose identification of clusters of individuals in which discrimination is high as a means to guide further variable collection: if the variance in outcomes within a cluster is not explained by the available feature set, additional variables may be used to further distinguish its members.

Let a random variable $C \in \{1, \dots, k\}$ represent a (possibly stochastic) clustering such that $C = j$ indicates membership in cluster $j$. Then let $\gamma_a^j$ denote the expected prediction cost for units in cluster $j$ with protected attribute $a$. As an example, for the zero-one loss we let

$\gamma_a^j := \mathbb{E}\left[ \mathbb{1}[\hat{Y} \neq Y] \mid A = a, C = j \right],$

and define $\gamma_a^j$ analogously for false positives or false negatives. Clusters $j$ for which $|\gamma_0^j - \gamma_1^j|$ is large identify groups of individuals for which discrimination is worse than average, and can guide targeted collection of additional variables or samples. In our experiments on income prediction, we consider particularly simple clusterings of data defined by subjects with measurements above or below the average value of a single feature. In mortality prediction, we cluster patients using topic modeling. As measuring additional variables is expensive, the utility of a candidate set should be estimated before collecting a large sample (Koepke & Bilenko, 2012).
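The following sketch illustrates this clustering diagnostic: it computes the group-conditional zero-one loss within each cluster and ranks clusters by the gap. The choice of k-means, and all names, are our own illustrative choices; in the paper's experiments the clusters come from single features or from topic models.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_cost_gaps(X, y_true, y_pred, a, n_clusters=10, seed=0):
    """Per-cluster zero-one loss for each protected group, and the absolute gap."""
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    err = (y_pred != y_true).astype(float)
    rows = []
    for j in range(n_clusters):
        costs = []
        for g in (0, 1):
            m = (clusters == j) & (a == g)
            costs.append(err[m].mean() if m.any() else float("nan"))
        gap = abs(costs[0] - costs[1]) if not np.isnan(costs).any() else 0.0
        rows.append((j, costs[0], costs[1], gap))
    # Clusters with the largest gaps are candidates for targeted data collection.
    return sorted(rows, key=lambda r: -r[3])
```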

5 Experiments

We analyze the fairness properties of standard machine learning algorithms in three tasks: prediction of income based on national census data, prediction of patient mortality based on clinical notes, and prediction of book review ratings based on review text. (A synthetic experiment validating group-specific learning curves is left to the supplementary material.) We disentangle sources of discrimination by assessing the level of discrimination for the full data, estimating the value of increasing training set size by fitting Type II learning curves, and using clustering to identify subgroups where discrimination is high. In addition, we estimate the Bayes error through non-parametric techniques.

5.1 Income prediction

(a) Group differences in false positive rates and false negative rates for a random forest classifier decrease with increasing training set size.
Method (reference)  Bayes error bounds  Group
Mahalanobis (Mahalanobis, 1936)  0.29  men
Mahalanobis (Mahalanobis, 1936)  0.13  women
Bhattacharyya (Bhattacharyya, 1943)  0.001–0.040  men
Bhattacharyya (Bhattacharyya, 1943)  0.001–0.027  women
Nearest Neighbors (Cover & Hart, 1967)  0.10–0.19  men
Nearest Neighbors (Cover & Hart, 1967)  0.04–0.07  women
(b) Estimated lower and upper bounds on the Bayes error for the zero-one loss of men and women. The intervals for men and women are non-overlapping for Nearest Neighbors.
Figure 2: Discrimination level and noise estimation in income prediction with the Adult dataset.

Predictions of a person’s salary may be used to help determine an individual’s market worth, but systematic underestimation of the salary of protected groups could harm their competitiveness on the job market. The Adult dataset in the UCI Machine Learning Repository (Lichman, 2013) contains 32,561 observations of yearly income (represented as a binary outcome: over or under $50,000) and twelve categorical or continuous features including education, age, and marital status. Categorical attributes are dichotomized, resulting in a total of 105 features.

We follow Pleiss et al. (2017) and strive to ensure fairness across genders; gender is excluded as a feature from the predictive models. Using an 80/20 train-test split, we learn a random forest predictor, which is well-calibrated for both groups (Brier (1950) scores of 0.13 and 0.06 for men and women). We find a statistically significant difference in zero-one loss between groups at the 95%-confidence level with decision thresholds at 0.5 (details for computing statistically significant discrimination can be found in the supplementary material). At this threshold, both the false negative rates and the false positive rates differ between men and women. We focus on random forest classifiers, although we found similar results for logistic regression and decision trees.

We examine the effect of varying training set size on discrimination. We fit inverse power-law curves to estimates of group-specific FPR and FNR using repeated sample splitting, where at least 20% of the full data is held out for evaluating generalization error at every value of $n$. We tune hyperparameters for each training set size for decision tree classifiers and logistic regression, but tune over the entire dataset for random forests. We include full training details in the supplementary material. Metrics are averaged over 50 trials. See Figure 2(a) for the results for random forests. Both FPR and FNR differences decrease with additional training samples. The discrimination level for false negatives decreases by a striking 40% when increasing the training set size from 1,000 to 10,000. This suggests that trading off accuracy for fairness at small sample sizes may be ill-advised. Based on fitted power-law curves, we estimate how much discrimination would remain even with unlimited training data drawn from the same distribution.

In Figure 2(b), we compare estimated upper and lower bounds on the noise for men and women using the Mahalanobis and Bhattacharyya distances (Devijver & Kittler, 1982), and a k-nearest-neighbor method (Cover & Hart, 1967) with 5-fold cross-validation. Men have consistently higher noise estimates than women, which is consistent with the differences in zero-one loss found using all models. For the nearest-neighbor estimates, the intervals for men and women are non-overlapping, which suggests that noise may contribute substantially to discrimination.

To guide attempts at reducing discrimination further, we identify clusters of individuals for whom false negative predictions are made at different rates between protected groups, with the method described in Section 4.2. We find that for individuals in executive or managerial occupations (12% of the sample), false negatives are more than twice as frequent for women (0.412) as for men (0.157). For individuals in all other occupations, the difference is significantly smaller, 0.543 for women and 0.461 for men, despite the fact that the disparity in outcome base rates in this cluster is large (0.26 for men versus 0.09 for women). A possible reason is that in managerial occupations the available variable set explains a larger portion of the variance in salary for men than for women. If so, further sub-categorization of managerial occupations could help reduce discrimination in prediction.

5.2 Intensive care unit mortality prediction

(a) Using Tukey’s range test, we can find the 95%-significance level for the zero-one loss for each group over 5-fold cross validation.
(b) As training set size increases, zero-one loss over 50 trials decreases over all groups and appears to converge to an asymptote.
(c) Topic modeling reveals subpopulations with high differences in zero-one loss, for example cancer patients and cardiac patients.
Figure 3: Mortality prediction from clinical notes using logistic regression. Best viewed in color.

Unstructured medical data such as clinical notes can reveal insights for questions like mortality prediction; however, disparities in predictive accuracy may result in discrimination of protected groups. Using the MIMIC-III dataset of all clinical notes from 25,879 adult patients from Beth Israel Deaconess Medical Center (Johnson et al., 2016), we predict hospital mortality of patients in critical care. Fairness is studied with respect to five self-reported ethnic groups of the following proportions: Asian (2.2%), Black (8.8%), Hispanic (3.4%), White (70.8%), and Other (14.8%). Notes were collected in the first 48 hours of an intensive care unit (ICU) stay; discharge notes were excluded. We only included patients that stayed in the ICU for more than 48 hours. We use the tf-idf statistics of the 10,000 most frequent words as features. Training a model on 50% of the data, selecting hyper-parameters on 25%, and testing on 25%, we find that logistic regression with L1-regularization achieves an AUC of 0.81. The logistic regression is well-calibrated with Brier scores ranging from 0.06-0.11 across the five groups; we note better calibration is correlated with lower prediction error.

We report cost and discrimination level in terms of the generalized zero-one loss (Pleiss et al., 2017). Using an ANOVA test (Fisher, 1925), we reject the null hypothesis that the loss is the same among all five groups. To map the 95% confidence intervals, we perform pairwise comparisons of means using Tukey's range test (Tukey, 1949) across 5-fold cross-validation. As seen in Figure 3(a), patients in the Other and Hispanic groups have the highest and lowest generalized zero-one loss, respectively, with relatively few overlapping intervals. Notably, the largest ethnic group (White) does not have the best accuracy, whereas smaller ethnic groups tend towards extremes.

To better understand the discrimination induced by our model, we explore the effect of changing training set size. To this end, we repeatedly subsample and split the data, holding out at least 20% of the full data for testing. In Figure 3(b), we show loss averaged over 50 trials of training a logistic regression on increasingly larger training sets; estimated inverse power-law curves show good fits. We see that some pairwise differences in loss decrease with additional training data.

Next, we identify clusters for which the difference in prediction errors between protected groups is large. We learn a topic model with 50 topics generated using Latent Dirichlet Allocation (Blei et al., 2003). Topic proportions are collected into a matrix $\Theta$, where $\Theta_{ik}$ designates the proportion of topic $k$ in note $i$. Following prior work on enrichment of topics in clinical notes (Marlin et al., 2012; Ghassemi et al., 2014), we estimate the probability of patient mortality given topic $k$ by weighting each patient's hospital mortality $y_i$ by $\Theta_{ik}$. We compare relative error rates given protected group $a$ and topic $k$, using the binary predicted mortality $\hat{y}_i$, actual mortality $y_i$, and group $a_i$ for patient $i$, through

$\hat{P}(\hat{Y} \neq Y \mid A = a, k) = \frac{\sum_{i : a_i = a} \Theta_{ik}\, \mathbb{1}[\hat{y}_i \neq y_i]}{\sum_{i : a_i = a} \Theta_{ik}},$

which follows using substitution and conditioning on $A$. These error rates were computed using a logistic regression with L1 regularization using an 80/20 train-test split over 50 trials. While many topics have consistent error rates across groups, some topics (e.g. cardiac patients or cancer patients, as shown in Figure 3(c)) have large differences in error rates across groups. We include more detailed topic descriptions in the supplementary material. Once we have identified a subpopulation with particularly high error, for example cancer patients, we can consider collecting more features or collecting more data from the same data distribution. We find that error rates range from 12.3% to 30.2% across protected groups of cancer patients, and from 3.8% to 20.4% for cardiac patients (see Table 2 in the supplementary material).
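The sketch below implements one reading of the topic-weighted error rates above: each note contributes to topic $k$'s error rate in proportion to its topic weight $\Theta_{ik}$, restricted to notes from a given group. Array names are illustrative, and this is our interpretation of the computation rather than the authors' exact code.

```python
import numpy as np

def topic_error_rates(theta, y_true, y_pred, a):
    """Topic-weighted error rates per protected group.

    theta: (num_notes, num_topics) topic proportions. Returns (groups, rates), where
    rates[g, k] is the weighted error rate for group g and topic k."""
    errors = (y_pred != y_true).astype(float)            # per-note error indicator
    groups = np.unique(a)
    rates = np.zeros((len(groups), theta.shape[1]))
    for gi, g in enumerate(groups):
        w = theta[a == g]                                # topic weights for notes in group g
        e = errors[a == g]
        rates[gi] = (w * e[:, None]).sum(axis=0) / w.sum(axis=0)
    return groups, rates
```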

5.3 Book review ratings

In the supplementary material, we study prediction of book review ratings from review texts (Gnanesh, 2017). The protected attribute was chosen to be the gender of the author, as determined from Wikipedia. In the dataset, the difference in mean-squared error between reviews of male and female authors is statistically significant at the 95% level. Strikingly, our findings suggest that this difference may be completely eliminated by additional targeted sampling of the less represented gender.

6 Discussion

We identify that existing approaches for reducing discrimination induced by prediction errors may be unethical or impractical to apply in settings where predictive accuracy is critical, such as in healthcare or criminal justice. As an alternative, we propose a procedure for analyzing the different sources contributing to discrimination. Decomposing well-known definitions of cost-based fairness criteria in terms of differences in bias, variance, and noise, we suggest methods for reducing each term through model choice or additional training data collection. Case studies on three real-world datasets confirm that collection of additional samples is often sufficient to improve fairness, and that existing post-hoc methods for reducing discrimination may unnecessarily sacrifice predictive accuracy when other solutions are available.

Acknowledgements

The authors would like to thank Yoni Halpern and Hunter Lang for helpful comments, and Zeshan Hussain for clinical guidance.

References

  • Amari (1993) Amari, Shun-Ichi. A universal theorem on learning curves. Neural networks, 6(2):161–166, 1993.
  • Angwin et al. (2016) Angwin, Julia, Larson, Jeff, Mattu, Surya, and Kirchner, Lauren. Machine bias. ProPublica, May, 23, 2016.
  • Antos et al. (1999) Antos, András, Devroye, Luc, and Gyorfi, Laszlo. Lower bounds for bayes error estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(7):643–645, 1999.
  • Barocas & Selbst (2016) Barocas, Solon and Selbst, Andrew D. Big data’s disparate impact. Cal. L. Rev., 104:671, 2016.
  • Bechavod & Ligett (2017) Bechavod, Yahav and Ligett, Katrina. Learning fair classifiers: A regularization-inspired approach. arXiv preprint arXiv:1707.00044, 2017.
  • Bhattacharyya (1943) Bhattacharyya, Anil. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc., 35:99–109, 1943.
  • Blei et al. (2003) Blei, David M, Ng, Andrew Y, and Jordan, Michael I. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022, 2003.
  • Brier (1950) Brier, Glenn W. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
  • Brown et al. (2001) Brown, Lawrence D, Cai, T Tony, and DasGupta, Anirban. Interval estimation for a binomial proportion. Statistical science, pp. 101–117, 2001.
  • Calders & Verwer (2010) Calders, Toon and Verwer, Sicco. Three naive bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2):277–292, 2010.
  • Calmon et al. (2017) Calmon, Flavio, Wei, Dennis, Vinzamuri, Bhanukiran, Ramamurthy, Karthikeyan Natesan, and Varshney, Kush R. Optimized pre-processing for discrimination prevention. In Advances in Neural Information Processing Systems, pp. 3995–4004, 2017.
  • Chouldechova (2017) Chouldechova, Alexandra. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv preprint arXiv:1703.00056, 2017.
  • Corbett-Davies et al. (2017) Corbett-Davies, Sam, Pierson, Emma, Feller, Avi, Goel, Sharad, and Huq, Aziz. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 797–806. ACM, 2017.
  • Cover & Hart (1967) Cover, Thomas and Hart, Peter. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21–27, 1967.
  • Devijver & Kittler (1982) Devijver, Pierre A. and Kittler, Josef. Pattern recognition: a statistical approach. Sung Kang, 1982.
  • Domhan et al. (2015) Domhan, Tobias, Springenberg, Jost Tobias, and Hutter, Frank. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
  • Domingos (2000) Domingos, Pedro. A unified bias-variance decomposition. In Proceedings of 17th International Conference on Machine Learning, pp. 231–238, 2000.
  • Dwork et al. (2012) Dwork, Cynthia, Hardt, Moritz, Pitassi, Toniann, Reingold, Omer, and Zemel, Richard. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226. ACM, 2012.
  • Efron (1992) Efron, Bradley. Bootstrap methods: another look at the jackknife. In Breakthroughs in statistics, pp. 569–593. Springer, 1992.
  • Feldman et al. (2015) Feldman, Michael, Friedler, Sorelle A, Moeller, John, Scheidegger, Carlos, and Venkatasubramanian, Suresh. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259–268. ACM, 2015.
  • Fish et al. (2016) Fish, Benjamin, Kun, Jeremy, and Lelkes, Ádám D. A confidence-based approach for balancing fairness and accuracy. In Proceedings of the 2016 SIAM International Conference on Data Mining, pp. 144–152. SIAM, 2016.
  • Fisher (1925) Fisher, R.A. Statistical methods for research workers. Edinburgh Oliver & Boyd, 1925.
  • Friedler et al. (2018) Friedler, Sorelle A, Scheidegger, Carlos, Venkatasubramanian, Suresh, Choudhary, Sonam, Hamilton, Evan P, and Roth, Derek. A comparative study of fairness-enhancing interventions in machine learning. arXiv preprint arXiv:1802.04422, 2018.
  • Fukunaga & Hummels (1987) Fukunaga, Keinosuke and Hummels, Donald M. Bayes error estimation using parzen and k-nn procedures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):634–643, 1987.
  • Ghassemi et al. (2014) Ghassemi, Marzyeh, Naumann, Tristan, Doshi-Velez, Finale, Brimmer, Nicole, Joshi, Rohit, Rumshisky, Anna, and Szolovits, Peter. Unfolding physiological state: Mortality modelling in intensive care units. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 75–84. ACM, 2014.
  • Gnanesh (2017) Gnanesh. Goodreads book reviews, 2017. URL https://www.kaggle.com/gnanesh/goodreads-book-reviews.
  • Hajian & Domingo-Ferrer (2013) Hajian, Sara and Domingo-Ferrer, Josep. A methodology for direct and indirect discrimination prevention in data mining. IEEE transactions on knowledge and data engineering, 25(7):1445–1459, 2013.
  • Hardt et al. (2016) Hardt, Moritz, Price, Eric, Srebro, Nati, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315–3323, 2016.
  • Hestness et al. (2017) Hestness, Joel, Narang, Sharan, Ardalani, Newsha, Diamos, Gregory, Jun, Heewoo, Kianinejad, Hassan, Patwary, Md Mostofa Ali, Yang, Yang, and Zhou, Yanqi. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.
  • Johnson et al. (2016) Johnson, Alistair EW, Pollard, Tom J, Shen, Lu, Lehman, Li-wei H, Feng, Mengling, Ghassemi, Mohammad, Moody, Benjamin, Szolovits, Peter, Celi, Leo Anthony, and Mark, Roger G. Mimic-iii, a freely accessible critical care database. Scientific data, 3, 2016.
  • Kamiran et al. (2010) Kamiran, Faisal, Calders, Toon, and Pechenizkiy, Mykola. Discrimination aware decision tree learning. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pp. 869–874. IEEE, 2010.
  • Kamishima et al. (2011) Kamishima, Toshihiro, Akaho, Shotaro, and Sakuma, Jun. Fairness-aware learning through regularization approach. In Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on, pp. 643–650. IEEE, 2011.
  • Kleinberg et al. (2016) Kleinberg, Jon, Mullainathan, Sendhil, and Raghavan, Manish. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807, 2016.
  • Koepke & Bilenko (2012) Koepke, Hoyt and Bilenko, Mikhail. Fast prediction of new feature utility. arXiv preprint arXiv:1206.4680, 2012.
  • Lichman (2013) Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
  • Mahalanobis (1936) Mahalanobis, Prasanta Chandra. On the generalized distance in statistics. National Institute of Science of India, 1936.
  • Marlin et al. (2012) Marlin, Benjamin M, Kale, David C, Khemani, Robinder G, and Wetzel, Randall C. Unsupervised pattern discovery in electronic health care data using probabilistic clustering models. In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, pp. 389–398. ACM, 2012.
  • Mukherjee et al. (2003) Mukherjee, Sayan, Tamayo, Pablo, Rogers, Simon, Rifkin, Ryan, Engle, Anna, Campbell, Colin, Golub, Todd R, and Mesirov, Jill P. Estimating dataset size requirements for classifying dna microarray data. Journal of computational biology, 10(2):119–142, 2003.
  • Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • Pleiss et al. (2017) Pleiss, Geoff, Raghavan, Manish, Wu, Felix, Kleinberg, Jon, and Weinberger, Kilian Q. On fairness and calibration. In Advances in Neural Information Processing Systems, pp. 5684–5693, 2017.
  • Quionero-Candela et al. (2009) Quionero-Candela, Joaquin, Sugiyama, Masashi, Schwaighofer, Anton, and Lawrence, Neil D. Dataset shift in machine learning. The MIT Press, 2009.
  • Ruggieri et al. (2010) Ruggieri, Salvatore, Pedreschi, Dino, and Turini, Franco. Data mining for discrimination discovery. ACM Transactions on Knowledge Discovery from Data (TKDD), 4(2):9, 2010.
  • Tukey (1949) Tukey, John W. Comparing individual means in the analysis of variance. Biometrics, pp. 99–114, 1949.
  • Tumer & Ghosh (1996) Tumer, Kagan and Ghosh, Joydeep. Estimating the bayes error rate through classifier combining. In Pattern Recognition, 1996., Proceedings of the 13th International Conference on, volume 2, pp. 695–699. IEEE, 1996.
  • Woodworth et al. (2017) Woodworth, Blake, Gunasekar, Suriya, Ohannessian, Mesrob I, and Srebro, Nathan. Learning non-discriminatory predictors. Conference On Learning Theory, 2017.
  • Zafar et al. (2017) Zafar, Muhammad Bilal, Valera, Isabel, Gomez Rodriguez, Manuel, and Gummadi, Krishna P. Fairness constraints: Mechanisms for fair classification. arXiv preprint arXiv:1507.05259, 2017.
  • Zemel et al. (2013) Zemel, Richard S, Wu, Yu, Swersky, Kevin, Pitassi, Toniann, and Dwork, Cynthia. Learning fair representations. ICML (3), 28:325–333, 2013.

Appendix A Testing for significant discrimination

In general, neither $\Gamma$ nor $\bar{\Gamma}$ can be computed exactly, as the expectations $\gamma_0$ and $\gamma_1$ are known only approximately through a set of samples drawn from the (possibly class-conditional) population $p$. The Monte Carlo estimate,

$\hat{\gamma}_a = \frac{1}{n_a} \sum_{i : a_i = a} L(\hat{y}_i, y_i),$

with $n_a$ the number of samples from group $a$, may be used to form an estimate $\hat{\Gamma} = |\hat{\gamma}_0 - \hat{\gamma}_1|$. By the central limit theorem, for sufficiently large $n_a$, $\hat{\gamma}_a$ is approximately Normal distributed around $\gamma_a$. As a result, the significance of $\hat{\Gamma}$ can be tested with a two-tailed z-test or using the test of Woodworth et al. (2017). If sample sizes are small and the target binary, more appropriate tests are available (Brown et al., 2001). In addition, we will often want to compare the discrimination levels $\hat{\Gamma}$ and $\hat{\Gamma}'$ of two predictors $\hat{Y}$ and $\hat{Y}'$, resulting from different learning algorithms, models, or sets of observed variables. The random variable $\hat{\Gamma} - \hat{\Gamma}'$ is not Normal distributed, but is a difference of folded-normal variables. However, for fixed signs of the group differences $(\hat{\gamma}_0 - \hat{\gamma}_1)$ and $(\hat{\gamma}'_0 - \hat{\gamma}'_1)$, their difference is Normal distributed. Further, by enumerating the possible signs of $(\hat{\gamma}_0 - \hat{\gamma}_1)$ and $(\hat{\gamma}'_0 - \hat{\gamma}'_1)$, the comparison reduces to tests of Normal variables. As a result, to reject the null hypothesis $\Gamma = \Gamma'$, we require that the observed values of both $(\hat{\gamma}_0 - \hat{\gamma}_1)$ and $(\hat{\gamma}'_0 - \hat{\gamma}'_1)$ are unlikely under the null hypothesis at the given significance level.
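As a concrete instance of the two-tailed z-test mentioned above, the sketch below tests whether two group-specific error rates differ, using a pooled two-proportion z-test. This is a standard construction under the Normal approximation, not necessarily identical to the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def discrimination_z_test(err0, err1):
    """Two-tailed z-test for a difference in group error rates.

    err0, err1: arrays of per-sample 0/1 losses (e.g. misclassification indicators)
    for the two protected groups. Returns the p-value for H0: equal error rates."""
    p0, p1 = np.mean(err0), np.mean(err1)
    n0, n1 = len(err0), len(err1)
    pooled = (p0 * n0 + p1 * n1) / (n0 + n1)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n0 + 1 / n1))
    z = (p0 - p1) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Example: reject H0 at the 5% level if discrimination_z_test(err_men, err_women) < 0.05.
```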

Appendix B Additional experimental details

B.1 Datasets

  • Adult Income Dataset (Lichman, 2013). The dataset has 32,561 instances. The target variable indicates whether or not income is larger than 50K dollars, and the sensitive feature is Gender. Each data object is described by 14 attributes which include 8 categorical and 6 numerical attributes. We quantize the categorical attributes into binary features and keep the continuous attributes, which results in 105 features for prediction. We note the label imbalance as 30% of male adults have income over 50K whereas only 10% of female adults have income over 50K. Additionally 24% of all adults have salary over 50K, and the dataset has 33% women and 67% men.

  • Goodreads reviews (Gnanesh, 2017), only included in the supplementary material. The dataset was collected from Oct 12, 2017 to Oct 21, 2017 and has 13,244 reviews. The target variable is the rating of the review, and the sensitive feature is the gender of the author. Genders were gathered by querying Wikipedia and using pronoun inference, and the dataset is a subset of the original Goodreads dataset because it only includes reviews about the top 100 most popular authors. Each datum consists of the review text, vectorized using Tf-Idf. The review scores occurred with counts 578, 2606, 4544, and 5516 for scores 1, 3, 4, and 5 respectively. Books by women authors and men authors had average scores of 4.088 and 4.092 respectively.

  • MIMIC-III dataset (Johnson et al., 2016). The dataset includes 25,879 adult patients admitted to the intensive care unit of the Beth Israel Deaconess Medical Center in downtown Boston. Clinical notes from the first 48 hours are used to predict hospital mortality after 48 hours. Of all adult patients, 13.8% died in the hospital. We are interested in the difference in performance between the five self-reported ethnic groups, with data sizes and hospital mortality rates summarized in Table 1.

    Race  # patients  % of total  Hospital mortality (%)
    Asian 583 2.3 14.2
    Black 2,327 9.0 10.9
    Hispanic 832 3.2 10.3
    Other 3,761 14.5 18.4
    White 18,377 71.0 13.4
    Table 1: Summary statistics of clinical notes dataset

B.2 Synthetic experiments

To illustrate the effect of training set size and model choice, and the validity of the power-law learning curve assumption, we conduct a small synthetic experiment with two protected groups. The outcome is a quadratic function of the covariate with heteroskedastic noise. We fit decision tree, random forest and ridge regressors of the outcome using the default parameters in scikit-learn (Pedregosa et al., 2011), but limiting the depth of the decision tree. The size of the training set is varied exponentially, and at each size, models are fit 200 times. In Figure 4, we show the resulting learning curves as well as fits of Pow3 curves to them. Shown in dotted lines are extrapolations of learning curves from different sample sizes, illustrating the difficulty of estimating the intercepts and the Bayes error with high accuracy.
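A compact sketch of a synthetic experiment in this spirit appears below. The exact distributions and constants used in the paper were lost in extraction, so the data-generating process here is a placeholder with the same qualitative features: a quadratic outcome, heteroskedastic noise across groups, and repeated fits at exponentially spaced training set sizes.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def sample(n):
    # Placeholder data-generating process; the paper's exact constants are not reproduced here.
    a = rng.integers(0, 2, size=n)
    x = rng.uniform(-1, 1, size=n)
    y = x ** 2 + rng.normal(scale=0.05 + 0.25 * a)   # quadratic outcome, heteroskedastic noise
    return x.reshape(-1, 1), a, y

x_te, a_te, y_te = sample(5000)                      # common test set
for n in [32, 128, 512, 2048, 8192]:                 # exponentially spaced training sizes
    mse = {0: [], 1: []}
    for _ in range(50):                              # repeated draws of training sets
        x_tr, _, y_tr = sample(n)
        pred = DecisionTreeRegressor(max_depth=3).fit(x_tr, y_tr).predict(x_te)
        for g in (0, 1):
            m = (a_te == g)
            mse[g].append(np.mean((pred[m] - y_te[m]) ** 2))
    print(n, round(float(np.mean(mse[0])), 4), round(float(np.mean(mse[1])), 4))
```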

(a) Learning curves for the random forest regressor.
(b) Discrimination level for various models.
Figure 4: Inverse power-laws (Pow3) fit to generalization error as a function of training set size on synthetic data. Dotted lines are extrapolations from sample sizes indicated by black stars. This illustrates the difficulty of estimating the Bayes error through extrapolation from limited sample sizes.

B.3 Book review ratings

(a) As training set size increases for the random forest, MSE decreases but maintains a difference between groups. Intercepts from fitted power-laws show no difference in noise.
(b) Holding the number of reviews for male authors steady and varying the number of reviews for female authors, we can achieve higher MSE for one group than with the full dataset.
Figure 5: Goodreads dataset for book rating prediction. Adding training data decreases overall mean squared error (MSE) for both groups, while adding training data to only one group has a much bigger impact on reducing the gap between groups. Increasing the number of features reduces MSE but does not reduce the gap.

Sentiment and rating prediction from text reveal quantitative insights from unstructured data; however, deficiencies in algorithmic prediction may incorrectly represent populations. Using a dataset of 13,244 reviews collected from Goodreads (Gnanesh, 2017) with the authors' genders inferred from Wikipedia, we seek to predict the review rating based on the review text. We use as features the Tf-Idf statistics of the 5,000 most frequent words. Our protected attribute is the gender of the author of the book, and the target attribute is the rating (1-5) of the review. The data is heavily imbalanced, with 18% of reviews about female authors versus 82% about male authors.

We observe statistically significant levels of discrimination with respect to mean squared error (MSE) with linear regression, decision trees, and random forests. Using a random forest, training on 80% of the dataset and testing on 20%, we find a significant difference in MSE between reviews of male and female authors using a difference-in-means statistical test at the 95%-confidence level. Results were found after hyperparameter tuning for each training set size and taking an average over 50 trials. We observe similar patterns with linear regression and decision trees.

To estimate the impact of additional training data, we evaluate the effect of varying training set size on predictive performance and discrimination. Through repeated sample splitting, we train a random forest on increasing training set sizes, reserving at least 20% of the dataset for testing. In Figure 5(a), additional training data lowers the MSE for both groups, fitting an inverse power-law. Based on the intercept terms of the extrapolated power-laws for reviews of male and female authors, we may expect that the remaining gap can be explained more by differences in bias and variance than by noise, since the estimated difference in noise is negligible.

In order to further measure the effect of collecting more samples, we analyze a one-sided increase in training data. Because of the initial skew of author genders in the dataset, we vary the number of reviews for female authors, creating a shift in populations in the training data. We fix the training set size of reviews for male authors at the size of the full data for female authors, reserving 20% of the dataset as test data. We then vary the training data size for female authors such that the ratio of female-author to male-author training reviews varies evenly between 0.1 and 1.0. Using a linear regression in Figure 5(b), we see that as the ratio increases, the MSE for reviews of female authors decreases far below that for male authors, and far below our best reported MSE of the random forest on the full dataset. This suggests that shifting the data ratio and collecting more data for the under-represented group can adapt our model to reduce discrimination.

B.4 Clinical notes

Here we include additional details about topic modeling. Topics were sampled using Markov Chain Monte Carlo after 2,500 iterations. We present the topics with highest and lowest variance in error rates among groups in Table 2. Error rates were computed using a logistic regression with L1 regularization over 10,000 TF-IDF features using 80/20 training and testing data split over 50 trials. Based on the most representative words for each topic, we can infer topic descriptions, for example cancer patients for topic 48 and cardiac patients for topic 45.

Topic Top words Asian Black Hispanic Other White
31 no(t pain present normal edema tube history pulse absent left respiratory monitor 5.9 8.4 17.6 30.8 11.1
17 hospital lymphoma continue s/p unit bmt thrombocytopenia line rash 34.3 13.6 34.9 30.2 26.0
43 bowel abdominal abd abdomen surgery s/p small pain obstruction fluid ngt 16.6 11.8 5.7 26.8 13.2
45 artery carotid aneurysm left identifier numeric vertebral internal clip 5.4 5.3 3.8 20.4 10.0
48 mass cancer metastatic lung tumor patient cell left malignant breast hospital 21.6 25.4 12.3 30.2 18.5
1 neo gtt pain resp neuro wean clear plan insulin good 3.3 1.8 1.6 3.6 2.7
2 assessment insulin mg/dl plan pain meq/l mmhg chest cabg action 0.3 0.6 0.9 3.6 2.2
0 chest reason tube clip left artery s/p pneumothorax cabg pulmonary 3.2 5.5 2.5 5.6 4.0
25 c/o pain clear denies oriented sats plan alert stable monitor 7.3 3.9 5.9 8.2 6.5
47 pacer pacemaker icd s/p paced rhythm ccu amiodarone cardiac 8.2 9.1 8.3 13.8 10.1
Table 2: Top and bottom 5 topics (of 50) based on variance in error rates of groups. Error rates by group and topic are reported in percentages.

We identified patients with notes corresponding to topic 48 (cancer) as a subpopulation with large differences in errors between groups. By varying the training size while saving 20% of the data for testing, we estimate that more data would not be beneficial for decreasing error (see Figure 6(c)). The mean over 50 trials is reported, with hyperparameters chosen for each training size. Instead, we recommend collecting more features (e.g. structured data from lab results, more detailed patient history) as a way of improving error for this subpopulation.

Furthermore, we compute the 95% confidence intervals for false positive and false negative rates for a logistic regression with L1 regularization in Figure 6(a) and Figure 6(b).

(a) The false negative rates for logistic regression with L1 regularization do not differ across the five ethnic groups, shown by the overlapping 95%-confidence intervals, except for Asian patients.
(b) The false positive rates also do not differ much across groups, with many overlapping intervals. Note that Asian patients have a high false positive rate but a low false negative rate.
(c) Increasing training set size does not necessarily reduce error for all groups of cancer patients (topic 48). This may suggest we should focus on collecting more features instead.
Figure 6: Additional clinical notes experiments highlight the differences in false positive and false negative rates. We also examine the effect of training size on cancer patients in the dataset.

Appendix C Exploring model choice

If a difference in bias is the dominating source of discrimination between groups, changing the class of models under consideration could have a large impact on discrimination. Consider for example Figure 1(c), in which the true outcome has higher complexity in regions where one protected group is more densely distributed than the other. Increasing model capacity in such cases, or exploring other model classes of similar capacity, may reduce $\bar{\Gamma}$ as long as the bias-variance trade-off is beneficial. Bias is not identifiable in general, as this requires estimation or bounding of the noise components $\bar{N}_a$, or an assumption that they are equal, $\bar{N}_0 = \bar{N}_1$, or negligible, $\bar{N}_a \approx 0$. However, as noise is independent of model choice, a difference in bias between two different models is identifiable even if the noise is not known, provided that the variance is estimated. With $\hat{\gamma}_a, \hat{V}_a$ and $\hat{\gamma}'_a, \hat{V}'_a$ the estimated group-specific costs and variances of two predictors $\hat{Y}$ and $\hat{Y}'$ under comparison, we may test the hypothesis that their group differences in bias are equal.

Appendix D Regression with homoskedastic noise

By definition of the noise terms $\bar{N}_a$, we can state the following result.

Proposition 2.

Homoskedastic noise, i.e. noise $N(x)$ that is constant in $x$, does not contribute to the discrimination level $\bar{\Gamma}$ under the squared loss.

Proof.

Under the squared loss, $\bar{N}_0 = \bar{N}_1$, as $c_n(x) = 1$ and $N(x)$ is constant across $x$, so the noise terms cancel in $\bar{\Gamma}$. ∎

In contrast, for the zero-one loss and class-specific variants, the expected noise terms do not cancel in general, as they depend on the factor $c_n(x) = 2\,\mathbb{P}_D(\hat{y}_D(x) = y^*(x)) - 1$, which may differ between groups.

Appendix E Bias-variance decomposition. Proof of Theorem 1.

Lemma A1 (Squared loss and zero-one loss).

For a fixed point $x$, the expected loss decomposes as

$\mathbb{E}_{D, Y}[L(\hat{y}_D(x), Y)] = B(x) + c_v(x) V(x) + c_n(x) N(x).$

The claim holds for both:
a) the zero-one loss, with $c_v(x) = 1$ if $\bar{y}(x) = y^*(x)$ and $c_v(x) = -1$ otherwise, and $c_n(x) = 2\,\mathbb{P}_D(\hat{y}_D(x) = y^*(x)) - 1$,
b) the squared loss, with $c_v(x) = c_n(x) = 1$.

Proof.

See Domingos (2000). ∎

Lemma A2 (Class-specific zero-one loss).

With the zero-one loss, an analogous decomposition holds for class-conditional losses, with coefficients $c_v$ and $c_n$ as in Lemma A1.

Proof.

We begin by showing that with .

As the above should be zero for all options, this implies that .

We now show that,

We have that if ,

A similar calculation for the case where yields the claim.

Finally, we have that

which gives us our result. ∎

Since datasets $D$ are drawn independently of the protected attribute $A$, the pointwise decomposition of Lemma A1 may be averaged over $p(x \mid A = a)$ to yield the group-specific decomposition of $\bar{\gamma}_a$; an analogous result holds for class-specific losses, and Theorem 1 follows from Lemmas A1 and A2.

Appendix F Difference between power law curves

Let $f(n) = \delta + \beta n^{-\alpha}$ and $f'(n) = \delta' + \beta' n^{-\alpha'}$ be two learning curves of the form (3), with $\beta, \beta', \alpha, \alpha' > 0$. Then $|f(n) - f'(n)|$ has at most 2 local minima. We see this by re-writing

$g(n) := f(n) - f'(n) = (\delta - \delta') + \beta n^{-\alpha} - \beta' n^{-\alpha'},$

and so

$g'(n) = -\alpha \beta n^{-\alpha - 1} + \alpha' \beta' n^{-\alpha' - 1}.$

Setting the derivative to zero,

$\alpha \beta n^{-\alpha - 1} = \alpha' \beta' n^{-\alpha' - 1},$

which, for $\alpha \neq \alpha'$, has a unique positive root

$n^* = \left( \frac{\alpha \beta}{\alpha' \beta'} \right)^{\frac{1}{\alpha - \alpha'}}.$

Since $g$ has a single critical point (for $n > 0$), $g$ can switch signs at most twice, and the curves $f$ and $f'$ intersect at most twice on $(0, \infty)$. If $\alpha = \alpha'$, $g'$ has no positive root and $g$ has at most a single zero,

$n^* = \left( \frac{\beta - \beta'}{\delta' - \delta} \right)^{\frac{1}{\alpha}},$

which yields at most one local minimum of $|f(n) - f'(n)|$.
