Optimized Data Pre-Processing for Discrimination Prevention

Flavio P. Calmon, Dennis Wei, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney
Data Science Department, IBM Thomas J. Watson Research Center
Contact: {fdcalmon,dwei,knatesa,krvarshn}@us.ibm.com
Abstract

Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective, and apply two instances of the proposed optimization to datasets, including one on real-world criminal recidivism. The results demonstrate that all three criteria can be simultaneously achieved and also reveal interesting patterns of bias in American society.


1 Introduction

Discrimination is the prejudicial treatment of an individual based on membership in a legally protected group such as a race or gender. Direct discrimination occurs when protected attributes are used explicitly in making decisions, which is referred to as disparate treatment in law. More pervasive nowadays is indirect discrimination, in which protected attributes are not used but reliance on variables correlated with them leads to significantly different outcomes for different groups. The latter phenomenon is termed disparate impact. Indirect discrimination may be intentional, as in the historical practice of “redlining” in the U.S. in which home mortgages were denied in zip codes populated primarily by minorities. However, the doctrine of disparate impact applies in many situations regardless of actual intent.

Supervised learning algorithms, increasingly used for decision making in applications of consequence, may at first be presumed to be fair and devoid of inherent bias, but in fact, inherit any bias or discrimination present in the data on which they are trained (Calders & Žliobaitė, 2013). Furthermore, simply removing protected variables from the data is not enough since it does nothing to address indirect discrimination and may in fact conceal it. The need for more sophisticated tools has made discrimination discovery and prevention an important research area (Pedreschi et al., 2008).

Algorithmic discrimination prevention involves modifying one or more of the following to ensure that decisions made by supervised learning methods are less biased: (a) the training data, (b) the learning algorithm, and (c) the ensuing decisions themselves. These are respectively classified as pre-processing (Hajian, 2013), in-processing (Fish et al., 2016; Zafar et al., 2016; Kamishima et al., 2011) and post-processing approaches (Hardt et al., 2016). In this paper, we focus on pre-processing since it is the most flexible in terms of the data science pipeline: it is independent of the modeling algorithm and can be integrated with data release and publishing mechanisms.

Researchers have also studied several notions of discrimination and fairness. Disparate impact is addressed by the principles of statistical parity and group fairness (Feldman et al., 2015), which seek similar outcomes for all groups. In contrast, individual fairness (Dwork et al., 2012) mandates that similar individuals be treated similarly irrespective of group membership. For classifiers and other predictive models, equal error rates for different groups are a desirable property (Hardt et al., 2016), as is calibration or lack of predictive bias in the predictions (Zhang & Neill, 2016). The tension between the last two notions is described by Kleinberg et al. (2017) and Chouldechova (2016); the work of Friedler et al. (2016) is in a similar vein. Corbett-Davies et al. (2017) discuss the cost of satisfying prevailing notions of algorithmic fairness from a public safety standpoint and discuss the trade-offs. Since the present work pertains to pre-processing and not modeling, balanced error rates and predictive bias are less relevant criteria. Instead we focus primarily on achieving group fairness while also accounting for individual fairness through a distortion constraint.

Existing pre-processing approaches include sampling or re-weighting the data to neutralize discriminatory effects (Kamiran & Calders, 2012), changing the individual data records (Hajian & Domingo-Ferrer, 2013), and using $t$-closeness (Li et al., 2007) for discrimination control (Ruggieri, 2014). A common theme is the importance of balancing discrimination control against utility of the processed data. However, this prior work neither presents general and principled optimization frameworks for trading off these two criteria, nor allows connections to be made to the broader statistical learning and information theory literature via probabilistic descriptions. Another shortcoming is that individual distortion or fairness is not made explicit.

Figure 1: The proposed pipeline for predictive learning with discrimination prevention. Learn mode applies with training data and apply mode with novel test data. Note that test data also requires transformation before predictions can be obtained.

In this work, addressing gaps in the pre-processing literature, we introduce a probabilistic framework for discrimination-preventing pre-processing in supervised learning. Our aim in part is to work toward a more unified view of previously proposed concepts and methods, which may help to suggest refinements. We formulate the determination of a pre-processing transformation as an optimization problem that trades off discrimination control, data utility, and individual distortion. (Trade-offs among various fairness notions may be inherent as shown by Kleinberg et al. (2017).) While discrimination and utility are defined at the level of probability distributions, distortion is controlled on a per-sample basis, thereby limiting the effect of the transformation on individuals and ensuring a degree of individual fairness. Figure 1 illustrates the supervised learning pipeline that includes our proposed discrimination-preventing pre-processing.

The work of Zemel et al. (2013) is closest to ours in also presenting a framework with three criteria related to discrimination control (group fairness), individual fairness, and utility. However, the criteria are manifested less directly than in our proposal. In particular, discrimination control is posed in terms of intermediate features rather than outcomes, individual distortion does not take outcomes into account (it is simply a norm between original and transformed features), and utility is specific to a particular classifier. Our formulation more naturally and generally encodes these fairness and utility desiderata.

Given the novelty of our formulation, we devote more effort than usual to discussing its motivations and potential variations. We state natural conditions under which the proposed optimization problem is convex. The resulting transformation is in general a randomized one. The proposed optimization problem assumes as input an estimate of the distribution of the data which, in practice, can be imprecise due to limited sample size. Accordingly, we characterize the possible degradation in discrimination and utility guarantees at test time in terms of the training sample size. As a demonstration of our framework, we apply specific instances of it to a prison recidivism risk score dataset (ProPublica, 2017) and the UCI adult dataset (Lichman, 2013). By solving the optimization problem, we show that discrimination, distortion, and utility loss can be controlled simultaneously with real data. In addition, the resulting transformations reveal intriguing demographic patterns in the data.

2 General Formulation

We are given a dataset consisting of $n$ i.i.d. samples $\{(D_i, X_i, Y_i)\}_{i=1}^{n}$ from a joint distribution $p_{D,X,Y}$ with domain $\mathcal{D} \times \mathcal{X} \times \mathcal{Y}$. Here $D$ denotes one or more discriminatory variables such as gender and race, $X$ denotes other non-protected variables used for decision making, and $Y$ is an outcome random variable. For instance, $Y_i$ could represent a loan approval decision for individual $i$ based on demographic information $D_i$ and credit score $X_i$. We focus in this paper on discrete (or discretized) and finite domains $\mathcal{D}$ and $\mathcal{X}$ and binary outcomes, i.e. $\mathcal{Y} = \{0, 1\}$. There is no restriction on the dimensions of $D$ and $X$.

Our goal is to determine a randomized mapping $p_{\hat{X},\hat{Y}|X,Y,D}$ that (i) transforms the given dataset into a new dataset $\{(D_i, \hat{X}_i, \hat{Y}_i)\}_{i=1}^{n}$, which may be used to train a model, and (ii) similarly transforms data to which the model is applied, i.e. test data. Each $(\hat{X}_i, \hat{Y}_i)$ is drawn independently from the same domain $\mathcal{X} \times \mathcal{Y}$ as $(X_i, Y_i)$ by applying $p_{\hat{X},\hat{Y}|X,Y,D}$ to the corresponding triplet $(D_i, X_i, Y_i)$. Since $D_i$ is retained as-is, we do not include it in the mapping to be determined. Motivation for retaining $D$ is discussed later in Section 3.2. For test samples, $Y$ is not available at the input while $\hat{Y}$ may not be needed at the output. In this case, a reduced mapping $p_{\hat{X}|X,D}$ may be used, which can be obtained from $p_{\hat{X},\hat{Y}|X,Y,D}$ by marginalizing over $\hat{Y}$ and $Y$ after weighting by $p_{Y|X,D}$.

It is assumed that $p_{D,X,Y}$ is known along with its marginals and conditionals. This assumption is often satisfied using the empirical distribution of $\{(D_i, X_i, Y_i)\}_{i=1}^{n}$. In Section 3.5, we state a result ensuring that discrimination and utility loss continue to be controlled if the distribution used to determine $p_{\hat{X},\hat{Y}|X,Y,D}$ differs from the distribution of test samples.
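To make the empirical-distribution assumption concrete, the following minimal Python sketch (illustrative only; the column names and toy records are not from the paper) estimates $p_{D,X,Y}$ and a conditional such as $p_{Y|D}$ from a tabular dataset.

```python
import pandas as pd

# Toy records with hypothetical column names playing the roles of D, X, Y.
df = pd.DataFrame({
    "gender":    ["M", "F", "M", "F", "M", "F"],   # D
    "prior_cnt": [0, 1, 3, 0, 1, 0],               # X
    "recid":     [1, 0, 1, 0, 0, 0],               # Y
})

# Empirical joint distribution p_{D,X,Y}: normalized counts of (d, x, y).
p_dxy = df.groupby(["gender", "prior_cnt", "recid"]).size().div(len(df))

# Example conditional p_{Y|D}, obtained by renormalizing within each d.
p_dy = df.groupby(["gender", "recid"]).size().div(len(df))
p_d = df.groupby("gender").size().div(len(df))
p_y_given_d = p_dy.div(p_d, level="gender")
```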

We propose that the mapping $p_{\hat{X},\hat{Y}|X,Y,D}$ satisfy the properties discussed in the following three subsections.

2.1 Discrimination Control

The first objective is to limit the dependence of the transformed outcome $\hat{Y}$ on the discriminatory variables $D$, as represented by the conditional distribution $p_{\hat{Y}|D}$. We propose two alternative formulations. The first requires $p_{\hat{Y}|D}(y \,|\, d)$ to be close to a target distribution $p_{Y_T}(y)$ for all values of $d$,

$$J\big(p_{\hat{Y}|D}(y \,|\, d),\, p_{Y_T}(y)\big) \le \epsilon_{y,d} \quad \forall\, d \in \mathcal{D},\ y \in \mathcal{Y}, \tag{1}$$

where $J(\cdot, \cdot)$ denotes some distance function. The second formulation constrains $p_{\hat{Y}|D}(y \,|\, d)$ to be similar for any two values of $D$,

$$J\big(p_{\hat{Y}|D}(y \,|\, d_1),\, p_{\hat{Y}|D}(y \,|\, d_2)\big) \le \epsilon_{y,d_1,d_2} \tag{2}$$

for all $d_1, d_2 \in \mathcal{D}$ and $y \in \mathcal{Y}$. The latter (2) does not require a target distribution as reference, but does increase the number of constraints, from one for each value $d$ to one for each pair $(d_1, d_2)$.

The choice of target $p_{Y_T}$ in (1), and of the distance $J$ and thresholds $\epsilon$ in (1) and (2), should be informed by societal considerations. If the application domain has a clear legal definition of disparate impact, for example the “80% rule” (EEOC, 1979), then it can be translated into a mathematical constraint. Otherwise and more generally, the instantiation of (1) should involve consultation with domain experts and stakeholders before being put into practice.
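As an illustration of how such a legal rule of thumb can be turned into a mathematical check, the sketch below computes the disparate-impact ratio behind the “80% rule”; the group labels and outcomes are made up, and only the 0.8 threshold comes from the rule itself.

```python
import numpy as np

def disparate_impact_ratio(y_hat, d, group_a, group_b):
    """P(Yhat = 1 | D = group_a) / P(Yhat = 1 | D = group_b)."""
    y_hat, d = np.asarray(y_hat), np.asarray(d)
    return y_hat[d == group_a].mean() / y_hat[d == group_b].mean()

# EEOC "80% rule": the favorable-outcome rate of a protected group should be
# at least 80% of that of the most favored group.
ratio = disparate_impact_ratio(y_hat=[1, 0, 1, 1, 0, 1],
                               d=["F", "F", "F", "M", "M", "M"],
                               group_a="F", group_b="M")
passes_80_percent_rule = ratio >= 0.8
```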

For this work, we choose $J$ to be the following probability ratio measure:

$$J(p, q) = \left|\frac{p}{q} - 1\right|. \tag{3}$$

The combination of (3) and (1) generalizes the extended lift criterion proposed in the literature (Pedreschi et al., 2012), while the combination of (3) and (2) generalizes selective and contrastive lift. In the numerical results in Section 4, we use both (1) and (2). For (1), we make the straightforward choice of setting $p_{Y_T} = p_Y$, the original marginal distribution of the outcome variable. We recognize however that this choice of target may run the risk of perpetuating bias in the original dataset. On the other hand, how to choose a target distribution that is “fairer” than $p_Y$ is largely an open question; we refer the reader to Žliobaitė et al. (2011) for one such proposal, which is reminiscent of the concept of “balanced error rate” in classification (Zhao et al., 2013).
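The following sketch evaluates the probability ratio measure (3) and checks constraints (1) and (2) for a toy transformed distribution; the arrays and the uniform threshold are illustrative.

```python
import numpy as np

def prob_ratio_distance(p, q):
    """J(p, q) = |p/q - 1|, applied elementwise."""
    return np.abs(p / q - 1.0)

# p_yhat_given_d[d, y]: distribution of the transformed outcome per group.
p_yhat_given_d = np.array([[0.60, 0.40],
                           [0.59, 0.41]])
p_target = np.array([0.595, 0.405])   # target p_{Y_T}, here the marginal p_Y
eps = 0.05                            # uniform threshold, for illustration

# Constraint (1): compare every group against the target distribution.
satisfies_1 = np.all(prob_ratio_distance(p_yhat_given_d, p_target) <= eps)

# Constraint (2): compare every pair of groups against each other.
pairwise = prob_ratio_distance(p_yhat_given_d[:, None, :],
                               p_yhat_given_d[None, :, :])
satisfies_2 = np.all(pairwise <= eps)
```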

In (1) and (2), discrimination control is imposed jointly with respect to all discriminatory variables, e.g. all combinations of gender and race if $D$ consists of those two variables. An alternative is to take the discriminatory variables one at a time, e.g. gender without regard to race and vice-versa. The latter, which we refer to as univariate discrimination control, can be formulated similarly to (1), (2). In this work, we opt for joint discrimination control as it is more stringent than univariate control. We note however that legal formulations tend to be of the univariate type.

Formulations (1) and (2) control discrimination at the level of the overall population in the dataset. It is also possible to control discrimination within segments of the population by conditioning on additional variables $B$ taking values $b$, where $B$ is a subset of the non-protected variables $X$. Constraint (1) would then generalize to

$$J\big(p_{\hat{Y}|D,B}(y \,|\, d, b),\, p_{Y_T|B}(y \,|\, b)\big) \le \epsilon_{y,d,b} \tag{4}$$

for all $d \in \mathcal{D}$, $b \in \mathcal{B}$, and $y \in \mathcal{Y}$. Similar conditioning or “context” for discrimination has been explored before in Hajian & Domingo-Ferrer (2013) in the setting of association rule mining. As one example, $B$ may consist of non-discriminatory variables that are strongly correlated with the outcome $Y$, e.g. education level as it relates to income. One may wish to control for such variables in determining whether discrimination is present and needs to be corrected. At the same time, care must be taken so that the population segments created by conditioning on $B = b$ are large enough for statistically valid inferences to be made. For present purposes, we simply note that conditional discrimination constraints (4) can be accommodated in our framework and defer further investigation to future work.

2.2 Distortion Control

The mapping $p_{\hat{X},\hat{Y}|X,Y,D}$ should satisfy distortion constraints with respect to the domain $\mathcal{X} \times \mathcal{Y}$. These constraints restrict the mapping to reduce or avoid altogether certain large changes (e.g. a very low credit score being mapped to a very high credit score). Given a distortion metric $\delta : (\mathcal{X} \times \mathcal{Y}) \times (\mathcal{X} \times \mathcal{Y}) \to \mathbb{R}_+$, we constrain the conditional expectation of the distortion as follows:

$$\mathbb{E}\big[\delta\big((x, y), (\hat{X}, \hat{Y})\big) \,\big|\, D = d, X = x, Y = y\big] \le c_{d,x,y} \quad \forall\, (d, x, y) \in \mathcal{D} \times \mathcal{X} \times \mathcal{Y}. \tag{5}$$

We assume that $p_{D,X,Y}(d, x, y) > 0$ for all $(d, x, y) \in \mathcal{D} \times \mathcal{X} \times \mathcal{Y}$, so that the conditioning in (5) is well defined.

Constraint (5) is formulated with pointwise conditioning on $(D, X, Y) = (d, x, y)$ in order to promote individual fairness. It ensures that distortion is controlled for every combination of $(d, x, y)$, i.e. every individual in the original dataset, and more importantly, every individual to which a model is later applied. By way of contrast, an average-case measure in which an expectation is also taken over $(D, X, Y)$ may result in high distortion for certain $(d, x, y)$, likely those with low probability. Equation (5) also allows the level of control $c_{d,x,y}$ to depend on $(d, x, y)$ if desired. We also note that (5) is a property of the mapping $p_{\hat{X},\hat{Y}|X,Y,D}$, and does not depend on the assumed distribution $p_{D,X,Y}$.

The expectation over $(\hat{X}, \hat{Y})$ in (5) encompasses several cases depending on the choices of the metric $\delta$ and thresholds $c_{d,x,y}$. If $c_{d,x,y} = 0$, then no mappings with nonzero distortion are allowed for individuals with original values $(d, x, y)$. If $c_{d,x,y} > 0$, then certain mappings may still be disallowed by assigning them infinite distortion. Mappings with finite distortion are permissible subject to the budget $c_{d,x,y}$. Lastly, if $\delta$ is binary-valued (perhaps achieved by thresholding a multi-valued distortion function), it can be seen as classifying mappings into desirable ($\delta = 0$) and undesirable ones ($\delta = 1$). Here, (5) reduces to a bound on the conditional probability of an undesirable mapping, i.e.

$$\Pr\big(\delta\big((x, y), (\hat{X}, \hat{Y})\big) = 1 \,\big|\, D = d, X = x, Y = y\big) \le c_{d,x,y}. \tag{6}$$
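Given a candidate mapping and a distortion matrix, constraints (5) and (6) amount to simple row-wise checks, as in the sketch below (the mapping, distortion values, and budget are made up).

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 6            # rows: (d, x, y) triples; columns: (xhat, yhat)

# Candidate mapping: each row is a conditional pmf over (xhat, yhat).
mapping = rng.random((n_in, n_out))
mapping /= mapping.sum(axis=1, keepdims=True)

# delta[i, j]: distortion of mapping input i to output j; budget c.
delta = rng.integers(0, 3, size=(n_in, n_out)).astype(float)
c = 1.0

# Constraint (5): conditional expected distortion for every (d, x, y).
expected_distortion = (mapping * delta).sum(axis=1)
satisfies_5 = np.all(expected_distortion <= c)

# A binary-valued distortion reduces (5) to the probability bound (6).
undesirable = (delta > 0).astype(float)
prob_undesirable = (mapping * undesirable).sum(axis=1)
satisfies_6 = np.all(prob_undesirable <= c)
```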

2.3 Utility Preservation

In addition to constraints on individual distortions, we also require that the distribution of $(\hat{X}, \hat{Y})$ be statistically close to the distribution of $(X, Y)$. This is to ensure that a model learned from the transformed dataset (when averaged over the discriminatory variables $D$) is not too different from one learned from the original dataset, e.g. a bank’s existing policy for approving loans. For a given dissimilarity measure $\Delta$ between probability distributions (e.g. KL-divergence), we require that $\Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big)$ be small.
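For instance, with KL divergence as the dissimilarity $\Delta$, the utility loss between two (toy) joint distributions can be computed as follows.

```python
import numpy as np
from scipy.stats import entropy

# Flattened joint pmfs over (x, y); values are illustrative.
p_xy     = np.array([0.20, 0.30, 0.25, 0.25])
p_hat_xy = np.array([0.22, 0.28, 0.27, 0.23])

# Delta(p_hat_xy, p_xy) = KL(p_hat_xy || p_xy).
utility_loss = entropy(p_hat_xy, p_xy)
```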

2.4 Optimization Formulation

Putting together the considerations from the three previous subsections, we arrive at the optimization problem below for determining a randomized transformation $p_{\hat{X},\hat{Y}|X,Y,D}$ mapping each sample $(d, x, y)$ to $(\hat{x}, \hat{y})$:

$$\begin{aligned}
\min_{p_{\hat{X},\hat{Y}|X,Y,D}} \quad & \Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big) \\
\text{s.t.} \quad & J\big(p_{\hat{Y}|D}(y \,|\, d),\, p_{Y_T}(y)\big) \le \epsilon_{y,d} \quad \forall\, d \in \mathcal{D},\ y \in \mathcal{Y}, \\
& \mathbb{E}\big[\delta\big((x, y), (\hat{X}, \hat{Y})\big) \,\big|\, D = d, X = x, Y = y\big] \le c_{d,x,y} \quad \forall\, (d, x, y) \in \mathcal{D} \times \mathcal{X} \times \mathcal{Y}, \\
& p_{\hat{X},\hat{Y}|X,Y,D} \ \text{is a valid conditional distribution.}
\end{aligned} \tag{7}$$

We choose to minimize the utility loss subject to constraints on individual distortion (5) and discrimination, where we have used (1) for concreteness, since it is more natural to place bounds on the latter two.

The distortion constraints (5) are an essential component of the problem formulation (7). Without (5) and assuming that $p_{Y_T} = p_Y$, it is possible to achieve perfect utility and non-discrimination simply by sampling $(\hat{X}, \hat{Y})$ from the original distribution $p_{X,Y}$ independently of any inputs, i.e. $p_{\hat{X},\hat{Y}|X,Y,D} = p_{X,Y}$. Then $p_{\hat{X},\hat{Y}} = p_{X,Y}$, $\Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big) = 0$, and $p_{\hat{Y}|D}(y \,|\, d) = p_Y(y) = p_{Y_T}(y)$ for all $d$. This solution however is clearly objectionable from the viewpoint of individual fairness, especially for individuals to whom a subsequent model is applied, since it amounts to discarding an individual’s data and replacing it with a random sample from the population distribution $p_{X,Y}$. Constraint (5) seeks to prevent such gross deviations from occurring.
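The sketch below sets up a small instance of (7) in cvxpy, with KL divergence as $\Delta$, the ratio measure (3) for $J$ with target $p_{Y_T} = p_Y$, and a toy distortion matrix; all sizes, distributions, penalties, and thresholds are made up for illustration and are not the paper’s experimental settings.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
nd, nx, ny = 2, 3, 2                         # |D|, |X|, |Y| (toy sizes)
nin, nout = nd * nx * ny, nx * ny

# Toy input distribution p_{D,X,Y}, flattened in (d, x, y) order (C order).
p_dxy = rng.random((nd, nx, ny)); p_dxy /= p_dxy.sum()
p_xy = p_dxy.sum(axis=0).ravel()             # p_{X,Y}: reference for utility
p_y = p_dxy.sum(axis=(0, 1))                 # target p_{Y_T} = p_Y
p_xy_given_d = p_dxy / p_dxy.sum(axis=(1, 2), keepdims=True)

# Toy distortion delta[(x,y), (xhat,yhat)] and thresholds.
delta = rng.integers(0, 3, size=(nout, nout)).astype(float)
np.fill_diagonal(delta, 0.0)
eps, c = 0.2, 2.0

# Optimization variable: the mapping p_{Xhat,Yhat|X,Y,D}, one row per (d,x,y).
P = cp.Variable((nin, nout), nonneg=True)
constraints = [cp.sum(P, axis=1) == 1]

# p_{Xhat,Yhat} is a linear function of the mapping (weighted by p_{D,X,Y}).
p_hat_xy = p_dxy.ravel() @ P

# Matrix that sums out xhat from a pmf over (xhat, yhat).
sum_out_xhat = np.kron(np.ones((nx, 1)), np.eye(ny))
for d in range(nd):
    rows = slice(d * nx * ny, (d + 1) * nx * ny)
    p_yhat_d = (p_xy_given_d[d].ravel() @ P[rows, :]) @ sum_out_xhat
    # Constraint (1) with J(p, q) = |p/q - 1| <= eps.
    constraints += [p_yhat_d <= (1 + eps) * p_y, p_yhat_d >= (1 - eps) * p_y]

# Constraint (5): expected distortion per original (d, x, y).
constraints += [cp.sum(cp.multiply(P, np.tile(delta, (nd, 1))), axis=1) <= c]

problem = cp.Problem(cp.Minimize(cp.sum(cp.kl_div(p_hat_xy, p_xy))), constraints)
problem.solve()                              # needs an exponential-cone solver
```

The optimized `P.value` (rows indexed by $(d, x, y)$, columns by $(\hat{x}, \hat{y})$) then plays the role of the randomized mapping in the pipeline of Fig. 1.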

3 Theoretical Properties

3.1 Convexity

We first discuss conditions under which (7) is a convex or quasiconvex optimization problem. Considering first the objective function, the distribution $p_{X,Y}$ is a given quantity while

$$p_{\hat{X},\hat{Y}}(\hat{x}, \hat{y}) = \sum_{d, x, y} p_{D,X,Y}(d, x, y)\, p_{\hat{X},\hat{Y}|X,Y,D}(\hat{x}, \hat{y} \,|\, x, y, d)$$

is seen to be a linear function of the mapping $p_{\hat{X},\hat{Y}|X,Y,D}$, i.e. the optimization variable. Hence if the statistical dissimilarity $\Delta$ is convex in its first argument with the second fixed, then $\Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big)$ is a convex function of $p_{\hat{X},\hat{Y}|X,Y,D}$ by the affine composition property (Boyd & Vandenberghe, 2004). This condition is satisfied for example by all $f$-divergences (Csiszár & Shields, 2004), which are jointly convex in both arguments, and by all Bregman divergences (Banerjee et al., 2005). If instead $\Delta$ is only quasiconvex in its first argument, a similar composition property implies that the objective is a quasiconvex function of $p_{\hat{X},\hat{Y}|X,Y,D}$ (Boyd & Vandenberghe, 2004).

For discrimination constraint (1), the target distribution $p_{Y_T}$ is assumed to be given. The conditional distribution $p_{\hat{Y}|D}$ can be related to the mapping as follows:

$$p_{\hat{Y}|D}(\hat{y} \,|\, d) = \sum_{x, y, \hat{x}} p_{X,Y|D}(x, y \,|\, d)\, p_{\hat{X},\hat{Y}|X,Y,D}(\hat{x}, \hat{y} \,|\, x, y, d).$$

Since $p_{X,Y|D}$ is given, $p_{\hat{Y}|D}$ is a linear function of $p_{\hat{X},\hat{Y}|X,Y,D}$. Hence by the same composition property as above, (1) is a convex constraint, i.e. specifies a convex set, if the distance function $J$ is quasiconvex in its first argument.

If constraint (2) is used instead of (1), then both arguments of $J$ are linear functions of $p_{\hat{X},\hat{Y}|X,Y,D}$. Hence (2) is convex if $J$ is jointly quasiconvex in both arguments.

Lastly, the distortion constraint (5) can be expanded explicitly in terms of $p_{\hat{X},\hat{Y}|X,Y,D}$ to yield

$$\sum_{\hat{x}, \hat{y}} p_{\hat{X},\hat{Y}|X,Y,D}(\hat{x}, \hat{y} \,|\, x, y, d)\, \delta\big((x, y), (\hat{x}, \hat{y})\big) \le c_{d,x,y}.$$

Thus (5) is a linear constraint in $p_{\hat{X},\hat{Y}|X,Y,D}$ regardless of the choice of distortion metric $\delta$.

We summarize this subsection with the following proposition.

Proposition 1.

Problem (7) is a (quasi)convex optimization if $\Delta$ is (quasi)convex and $J$ is quasiconvex in their respective first arguments (with the second arguments fixed). If discrimination constraint (2) is used in place of (1), then the condition on $J$ is that it be jointly quasiconvex in both arguments.

3.2 Generalizability of Discrimination Control

We now discuss the generalizability of discrimination guarantees (1) and (2) to unseen individuals, i.e. those to whom a model is applied. Recall from Section 2 that the proposed transformation retains the discriminatory variables $D$. We first consider the case where models trained on the transformed data to predict $\hat{Y}$ are allowed to depend on $D$. While such models may qualify as disparate treatment, the intent and effect is to better mitigate disparate impact resulting from the model. In this respect our proposal shares the same spirit as “fair” affirmative action in Dwork et al. (2012) (fairer on account of distortion constraint (5)). Later in this subsection we consider the case where $D$ is suppressed at classification time.

3.2.1 Maintaining the Discriminatory Variable

Assuming that predictive models for $\hat{Y}$ can depend on $D$, let $\tilde{Y}$ be the output of such a model based on $\hat{X}$ and $D$. To remove the separate issue of model accuracy, suppose for simplicity that the model provides a good approximation to the conditional distribution of $\hat{Y}$: $p_{\tilde{Y}|\hat{X},D} \approx p_{\hat{Y}|\hat{X},D}$. Then for individuals in a protected group $D = d$, the conditional distribution of $\tilde{Y}$ is given by

$$p_{\tilde{Y}|D}(y \,|\, d) = \sum_{\hat{x}} p_{\hat{Y}|\hat{X},D}(y \,|\, \hat{x}, d)\, p_{\hat{X}|D}(\hat{x} \,|\, d) = p_{\hat{Y}|D}(y \,|\, d). \tag{8}$$

Hence the model output $\tilde{Y}$ can also be controlled by (1) or (2).

On the other hand, if $D$ must be suppressed from the transformed data, perhaps to comply with legal requirements regarding its non-use, then a predictive model can depend only on $\hat{X}$ and approximate $p_{\hat{Y}|\hat{X}}$, i.e. $p_{\tilde{Y}|\hat{X}} \approx p_{\hat{Y}|\hat{X}}$. In this case we have

$$p_{\tilde{Y}|D}(y \,|\, d) = \sum_{\hat{x}} p_{\hat{Y}|\hat{X}}(y \,|\, \hat{x})\, p_{\hat{X}|D}(\hat{x} \,|\, d), \tag{9}$$

which in general is not equal to $p_{\hat{Y}|D}(y \,|\, d)$ in (8). The quantity on the right-hand side of (9) is less straightforward to control. We address this issue in the next subsection.

3.2.2 Suppressing the Discriminatory Variable

In many applications the discriminatory variable $D$ cannot be revealed to the classification algorithm. In this case, the train-time discrimination guarantees are preserved at apply time if the Markov relationship $D \to \hat{X} \to \hat{Y}$ (i.e. $p_{\hat{Y}|\hat{X},D} = p_{\hat{Y}|\hat{X}}$) holds since, in this case,

$$\sum_{\hat{x}} p_{\hat{Y}|\hat{X}}(y \,|\, \hat{x})\, p_{\hat{X}|D}(\hat{x} \,|\, d) = \sum_{\hat{x}} p_{\hat{Y}|\hat{X},D}(y \,|\, \hat{x}, d)\, p_{\hat{X}|D}(\hat{x} \,|\, d) = p_{\hat{Y}|D}(y \,|\, d). \tag{10}$$

Thus, given that the distribution $p_{D,X,Y}$ is known, the guarantees provided during training still hold when applied to fresh samples if the additional constraint $p_{\hat{Y}|\hat{X},D} = p_{\hat{Y}|\hat{X}}$ is satisfied. We refer to (7) with this additional constraint as the suppressed optimization formulation (SOF). Alas, since the added constraint is non-convex, the SOF is not a convex program, despite being convex in $p_{\hat{X},\hat{Y}|X,Y,D}$ for a fixed $p_{\hat{Y}|\hat{X}}$ and vice-versa (i.e. it is biconvex). We propose next two strategies for addressing this problem.

  1. The first approach is to restrict $p_{\hat{Y}|\hat{X}} = p_{Y|X}$ and solve (7) for $p_{\hat{X}|X,Y,D}$. If $\Delta$ is an $f$-divergence, then

    $$\Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big) \;\ge\; \Delta\big(p_{\hat{X}},\, p_{X}\big),$$

    where the inequality follows from convexity of $f$. Since the last quantity is achieved by setting $p_{\hat{Y}|\hat{X}} = p_{Y|X}$, this choice is optimal in terms of the objective function. It may, however, render the constraints in (7) infeasible. Assuming feasibility is maintained, this approach has the added benefit that a classifier can be trained using the original (non-perturbed) data, and maintained for classification during apply time.

  2. Alternatively, a solution can be found through alternating minimization: fix $p_{\hat{Y}|\hat{X}}$ and solve the SOF for $p_{\hat{X},\hat{Y}|X,Y,D}$, and then fix $p_{\hat{X},\hat{Y}|X,Y,D}$ at the resulting optimal solution and solve the SOF for $p_{\hat{Y}|\hat{X}}$. The resulting sequence of values of the objective function is non-increasing, but may converge to a local minimum.
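As a numerical companion to the Markov condition above, the following sketch checks whether $p_{\hat{Y}|\hat{X},D} = p_{\hat{Y}|\hat{X}}$ holds for a candidate transformed joint distribution (the array values are made up).

```python
import numpy as np

# Toy joint p_{D, Xhat, Yhat} after a candidate transformation; axes (d, xhat, yhat).
p_dxy = np.array([[[0.10, 0.15], [0.05, 0.20]],
                  [[0.12, 0.18], [0.08, 0.12]]])
p_dxy /= p_dxy.sum()

# p_{Yhat | Xhat, D}: normalize over the yhat axis.
p_y_given_xd = p_dxy / p_dxy.sum(axis=2, keepdims=True)

# p_{Yhat | Xhat}: marginalize out D first, then normalize.
p_xy = p_dxy.sum(axis=0)
p_y_given_x = p_xy / p_xy.sum(axis=1, keepdims=True)

# The Markov relationship D -> Xhat -> Yhat holds when these agree for all d.
max_gap = np.max(np.abs(p_y_given_xd - p_y_given_x[None, :, :]))
markov_holds = bool(max_gap < 1e-9)
```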

3.3 A Note on Estimation and Discrimination

There is a close relationship between estimation and discrimination. If the discriminatory variable $D$ can be reliably estimated from the outcome variable $\hat{Y}$, then it is reasonable to expect that the discrimination control constraint (1) does not hold for small values of $\epsilon_{y,d}$. We make this intuition precise in the next proposition when $J$ is given in (3).

More specifically, we prove that if the advantage of estimating $D$ from $\hat{Y}$ over a random guess is large, then there must exist a value of $d$ and $y$ such that $J\big(p_{\hat{Y}|D}(y \,|\, d),\, p_{Y_T}(y)\big)$ is also large. Thus, standard estimation methods can be used to detect the presence of discrimination: if an estimation algorithm can estimate $D$ from $\hat{Y}$, then discrimination may be present. Alternatively, if discrimination control is successful, then no estimator can significantly improve upon a random guess when estimating $D$ from $\hat{Y}$.

We denote the highest probability of correctly guessing $D$ from an observation of $\hat{Y}$ by $P_c$, where

$$P_c \triangleq \max\ \Pr\big(\hat{D} = D\big), \tag{11}$$

and the maximum is taken across all estimators $\hat{D}$ that satisfy the Markov condition $D \to \hat{Y} \to \hat{D}$. For $D$ and $\hat{Y}$ defined over finite supports, this is achieved by the maximum a posteriori (MAP) estimator and, consequently,

$$P_c = \sum_{y \in \mathcal{Y}} \max_{d \in \mathcal{D}}\ p_{D,\hat{Y}}(d, y). \tag{12}$$

Let $d^*$ be the most likely outcome of $D$, i.e. $d^* \triangleq \arg\max_{d \in \mathcal{D}} p_D(d)$. The (multiplicative) advantage over a random guess is given by

$$\mathsf{Adv} \triangleq \frac{P_c}{p_D(d^*)}. \tag{13}$$
Proposition 2.

For $D$ and $\hat{Y}$ defined over finite support sets, if

(14)

then for any , there exists and such that

(15)
Proof.

We prove the contrapositive of the statement of the proposition. Assume that

(16)

Then

where the inequality follows by noting that (16) implies for all , . Rearranging the terms of the last equality, we arrive at

and the result follows by observing that the left-hand side is the definition of . ∎
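The estimation-based view of Section 3.3 can be applied directly to an empirical joint distribution of $(D, \hat{Y})$, as in the sketch below (the joint pmf is made up); a large advantage flags that $\hat{Y}$ still leaks information about $D$.

```python
import numpy as np

# Toy joint pmf p_{D, Yhat}; rows index d, columns index y.
p_dy = np.array([[0.30, 0.20],
                 [0.15, 0.35]])

# MAP guessing probability (12): sum over y of the largest entry in column y.
p_c = p_dy.max(axis=0).sum()

# A blind guess can only use the marginal p_D and picks its mode d*.
p_d = p_dy.sum(axis=1)
advantage = p_c / p_d.max()        # multiplicative advantage (13)

# An advantage close to 1 means no estimator does much better than guessing
# d*, which is what successful discrimination control should enforce.
```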

3.4 Training and Application Considerations

The proposed optimization framework has two modes of operation (Fig. 1): train and apply. In train mode, the optimization problem (7) is solved in order to determine a mapping $p_{\hat{X},\hat{Y}|X,Y,D}$ for randomizing the training set. The randomized training set, in turn, is used to fit a classification model $p_{\theta}(\hat{y} \,|\, \hat{x}, d)$ that approximates $p_{\hat{Y}|\hat{X},D}$, where $\theta$ are the parameters of the model. At apply time, a new data point $(d, x)$ is received and transformed into $\hat{x}$ through a randomized mapping $p_{\hat{X}|X,D}$. The mapping is given by marginalizing $p_{\hat{X},\hat{Y}|X,Y,D}$ over $Y$ and $\hat{Y}$:

$$p_{\hat{X}|X,D}(\hat{x} \,|\, x, d) = \sum_{y \in \mathcal{Y}} \sum_{\hat{y} \in \mathcal{Y}} p_{Y|X,D}(y \,|\, x, d)\, p_{\hat{X},\hat{Y}|X,Y,D}(\hat{x}, \hat{y} \,|\, x, y, d). \tag{17}$$

Assuming that the variable $D$ is not suppressed, and that the required marginals and conditionals (in particular $p_{Y|X,D}$) are known, the utility and discrimination guarantees set during train time still hold during apply time, as discussed in Section 3.2. However, the distortion control will inevitably change, since the mapping has been marginalized over $Y$. More specifically, the bound on the expected distortion for each apply-time sample $(d, x)$ becomes

$$\mathbb{E}\big[\delta\big((x, Y), (\hat{X}, \hat{Y})\big) \,\big|\, D = d, X = x\big] \le \sum_{y \in \mathcal{Y}} p_{Y|X,D}(y \,|\, x, d)\, c_{d,x,y}. \tag{18}$$

If the distortion control values $c_{d,x,y}$ are independent of $y$, then the upper bound on distortion set during training still holds during apply time. Otherwise, (18) provides a bound on individual distortion at apply time. The same guarantee holds for the case when $D$ is suppressed.
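The apply-time mapping (17) is a simple marginalization of the optimized mapping; a minimal sketch with toy shapes follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_d, n_x, n_y = 2, 3, 2

# Optimized mapping p_{Xhat,Yhat|X,Y,D} with axes (d, x, y, xhat, yhat).
mapping = rng.random((n_d, n_x, n_y, n_x, n_y))
mapping /= mapping.sum(axis=(3, 4), keepdims=True)

# Conditional p_{Y|X,D} with axes (d, x, y).
p_y_given_xd = rng.random((n_d, n_x, n_y))
p_y_given_xd /= p_y_given_xd.sum(axis=2, keepdims=True)

# Eq. (17): weight by p_{Y|X,D}, then sum out y and yhat.
p_xhat_given_xd = np.einsum("dxy,dxyab->dxa", p_y_given_xd, mapping)

# Each (d, x) row is again a valid pmf over xhat.
assert np.allclose(p_xhat_given_xd.sum(axis=2), 1.0)
```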

3.5 Robustness

We may also consider the case where the distribution used to determine the transformation differs from the distribution of test samples. This occurs, for example, when the former is the empirical distribution computed from $n$ i.i.d. samples from an unknown distribution $p_{D,X,Y}$. In this situation, discrimination control and utility are still approximately guaranteed for samples drawn from $p_{D,X,Y}$ and transformed using the mapping obtained by solving (7) with the empirical distribution in place of $p_{D,X,Y}$: the resulting discrimination and utility levels converge to the values imposed in (7) as $n$ grows (the distortion control constraints (5) depend only on the mapping). The next proposition provides an estimate of the rate of this convergence in terms of $n$, assuming the smallest probability $\min_{d,x,y} p_{D,X,Y}(d, x, y)$ is fixed and bounded away from zero. Its proof can be found in the Appendix.

Proposition 3.

Let be the empirical distribution obtained from i.i.d. samples that is used to determine the mapping , and be the true distribution of the data. In addition, denote by the joint distribution after applying to samples from . If for all , we have , , where is given in (3), and

then with probability ,

(19)
(20)

Proposition 3 guarantees that, as long as $n$ is sufficiently large, the utility and discrimination control guarantees will approximately hold when the mapping is applied to fresh samples drawn from $p_{D,X,Y}$. In particular, the utility and discrimination guarantees converge to the ones used as parameters in the optimization at a rate of at least $1/\sqrt{n}$, up to the constants and logarithmic factors discussed below. The distortion control guarantees (5) are a property of the mapping $p_{\hat{X},\hat{Y}|X,Y,D}$, and do not depend on the distribution of the data.

Observe that hidden within the big-O terms in Proposition 3 are constants that depend on the probability of the least likely symbol and on the alphabet size. The exact characterization of these constants can be found in the proof of the proposition in the appendix. The upper bounds become loose if the probability of the least likely symbol can be made arbitrarily small; thus, it is necessary to assume that it is fixed and bounded away from zero. Moreover, if the dimensionality of the support sets of $X$ and $Y$ is large and the number of samples is limited, then a dimensionality reduction step (e.g. clustering) may be necessary in order to ensure that discrimination control and utility are adequately preserved at test time. Proposition 3 and its proof can be used to provide an explicit estimate of the required reduction. Finally, we also note that if there are insufficient samples to reliably estimate the conditional distribution $p_{X,Y|D}(\cdot, \cdot \,|\, d)$ for certain groups $d$, then, for those groups, it is statistically challenging to verify discrimination and thus control may not be meaningful.

4 Applications to Datasets

We apply our proposed data transformation approach to two different datasets to demonstrate its capabilities. We approximate $p_{D,X,Y}$ using the empirical distribution of $(D, X, Y)$ in the datasets, specialize the optimization (7) according to the needs of the application, and solve (7) using a standard convex solver (Diamond & Boyd, 2016).

4.1 ProPublica’s COMPAS Recidivism Data

Recidivism refers to a person’s relapse into criminal behavior. It has been found that about two-thirds of prisoners in the US are re-arrested after release (Durose et al., 2014). It is important therefore to understand the recidivistic tendencies of incarcerated individuals who are considered for release at several points in the criminal justice system (bail hearings, parole, etc.). Automated risk scoring mechanisms have been developed for this purpose and are currently used in courtrooms in the US, in particular the proprietary COMPAS tool by Northpointe (Northpointe Inc.).

Recently, ProPublica published an article that investigates racial bias in the COMPAS algorithm (ProPublica, 2016), releasing an accompanying dataset that includes COMPAS risk scores, recidivism records, and other relevant attributes (ProPublica, 2017). A basic finding is that the COMPAS algorithm tends to assign higher scores to African-American individuals, a reflection of the a priori higher prevalence of recidivism in this group. The article goes on to demonstrate unequal false positive and false negative rates between African-Americans and Caucasian-Americans, which has since been shown by Chouldechova (2016) to be a necessary consequence of the calibration of the model and the difference in a priori prevalence.

In this work, our interest is not in the debate surrounding the COMPAS algorithm but rather in the underlying recidivism data (ProPublica, 2017). Using the proposed data transformation approach, we demonstrate the technical feasibility of mitigating the disparate impact of recidivism records on different demographic groups while also preserving utility and individual fairness. (We make no comment on the associated societal considerations.) From ProPublica’s dataset, we select severity of charge, number of prior crimes, and age category to be the decision variables ($X$). The outcome variable ($Y$) is a binary indicator of whether the individual recidivated (re-offended), and race and gender are set to be the discriminatory variables ($D$). The encoding of the decision and discriminatory variables is described in Table 1. The dataset was processed to contain around 5k records.

Feature | Values | Comments
Recidivism ($Y$) | binary | 1 if re-offended, 0 otherwise
Gender ($D$) | {Male, Female} |
Race ($D$) | {Caucasian, African-American} | Races with small samples removed
Age category ($X$) | {<25, 25–45, >45} | Years of age
Charge degree ($X$) | {Felony, Misdemeanor} | For the current arrest
Prior counts ($X$) | {0, 1–3, >3} | Number of prior crimes
Table 1: ProPublica dataset features.

Specific Form of Optimization. We specialize our general formulation in (7) by setting the utility measure to be the KL divergence $\Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big) = D_{\mathrm{KL}}\big(p_{\hat{X},\hat{Y}} \,\|\, p_{X,Y}\big)$. For discrimination control, we use (2), with $J$ given in (3), while fixing the thresholds to a common value $\epsilon$. For the sake of simplicity, we use the expected distortion constraint in (5) with a uniform threshold $c_{d,x,y} = c$. The distortion function $\delta$ in (5) has the following form. Jumps of more than one category in age and prior counts are heavily discouraged by setting a high distortion penalty for such transformations. We impose the same penalty on increases in recidivism (a change of the outcome from $0$ to $1$). Both these choices are made to promote individual fairness. Furthermore, every jump to the next category for age and prior counts is assessed a penalty, and a similar jump in charge degree incurs a penalty as well. Reduction in recidivism ($1$ to $0$) also carries a penalty. The total distortion for each individual is the sum of squares of the distortions for each attribute of $(X, Y)$. These distortion values were chosen for demonstration purposes to be reasonable in our judgement, and can easily be tuned according to the needs of a practitioner.
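A sketch of a distortion function in the spirit of the description above is given below; the specific penalty values are placeholders chosen for illustration, not the values used in the experiments.

```python
import numpy as np

# Placeholder penalty levels (not the paper's values).
BIG, AGE_PRIOR_STEP, CHARGE_STEP, RECID_DOWN = 1e4, 1.0, 0.5, 2.0

def compas_distortion(orig, new):
    """Distortion delta((x, y), (xhat, yhat)) for the COMPAS features.

    orig/new are dicts with keys 'age', 'priors', 'charge', 'recid'; age and
    priors are ordinal category indices, charge is 0/1, recid is the outcome.
    """
    parts = []
    for key in ("age", "priors"):
        jump = abs(new[key] - orig[key])
        parts.append(BIG if jump > 1 else AGE_PRIOR_STEP * jump)
    parts.append(CHARGE_STEP * abs(new["charge"] - orig["charge"]))
    if new["recid"] > orig["recid"]:      # 0 -> 1: heavily penalized
        parts.append(BIG)
    elif new["recid"] < orig["recid"]:    # 1 -> 0: allowed at a cost
        parts.append(RECID_DOWN)
    else:
        parts.append(0.0)
    # Total distortion: sum of squares of the per-attribute distortions.
    return float(np.sum(np.square(parts)))

# Example: one-step age change with recidivism flipped from 1 to 0.
d = compas_distortion({"age": 0, "priors": 1, "charge": 1, "recid": 1},
                      {"age": 1, "priors": 1, "charge": 1, "recid": 0})
```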

Figure 2: Optimal objective value vs. the discrimination control parameter $\epsilon$ for a fixed distortion constraint $c$.
Figure 3: Conditional mappings $p_{\hat{X},\hat{Y}|X,Y,D}$ for: (left) African-American males under 25 years of age who recidivate ($y = 1$), (middle) African-American males under 25 years of age who do not recidivate ($y = 0$), and (right) Caucasian males under 25 years of age who recidivate ($y = 1$). The original charge degree and prior counts ($x$) are shown on the vertical axis, while the transformed age category, charge degree, prior counts, and recidivism are represented along the horizontal axis. The charge degree F indicates felony and M indicates misdemeanor. Colors indicate mapping probability values. Columns are included only if the sum of their values exceeds a small threshold.
Figure 4: Top row: percentage recidivism rates in the original dataset as a function of charge degree, age, and prior counts, for the overall population (marginalized over $D$) and for different groups $d$. Bottom row: change in these percentages due to the transformation. Values for cohorts of charge degree, age, and prior counts with too few samples are not shown. The discrimination and distortion constraints $\epsilon$ and $c$ are set to the same values used in the rest of this subsection.

Results. We computed the optimal objective value (i.e., the KL divergence) resulting from solving (7) for different values of the discrimination control parameter $\epsilon$, with the expected distortion constraint $c$ held fixed. Below a certain value of $\epsilon$, no feasible solution can be found that also satisfies the distortion constraint. Above a larger value, the discrimination control is loose enough to be satisfied by the original dataset with just an identity mapping, and the objective is zero. In between, the optimal value varies as a smooth function of $\epsilon$ (Fig. 2).

We fix $\epsilon$ and $c$ for the rest of the experiments and record the resulting optimal value of the utility measure (KL divergence). In order to evaluate whether discrimination control was achieved as expected, we examine the dependence of the outcome variable on the discriminatory variable before and after the transformation. Note that to have zero disparate impact, we would like $\hat{Y}$ to be independent of $D$; in practice, this dependence is controlled by the discrimination parameter $\epsilon$. The corresponding conditional distributions $p_{Y|D}$ and $p_{\hat{Y}|D}$ are illustrated in Table 2, where clearly $\hat{Y}$ is less dependent on $D$ than $Y$ is. In particular, since an increase in recidivism is heavily penalized, the net effect of the randomized transformation is to decrease the recidivism risk of males, and particularly African-American males.

The mapping produced by the optimization (7) can reveal important insights on the nature of disparate impact and how to mitigate it. We illustrate this by exploring $p_{\hat{X},\hat{Y}|X,Y,D}$ for the COMPAS dataset next. Fig. 3 displays the conditional mapping restricted to certain socio-demographic groups. First consider young males who are African-American (left-most plot). This group has a high recidivism rate, and hence the most prominent action of the mapping (besides the identity transformation) is to change the recidivism value from $y = 1$ (recidivism) to $\hat{y} = 0$ (no recidivism). The next most prominent action is to change the age category from young to middle aged (25 to 45 years). This effectively reduces the average value of $\hat{Y}$ for young African-Americans, since the mapping for young males who are African-American and do not recidivate (middle plot) is essentially the identity mapping, with the exception of changing the age category to middle aged. This is expected, since increasing recidivism is heavily penalized. For young Caucasian males who recidivate, the action of the proposed transformation is similar to that for young African-American males who recidivate, i.e., the outcome variable is either changed to 0, or the age category is changed to middle aged. However, the probabilities of these transformations are lower since Caucasian males have, according to the dataset, a lower recidivism rate.

We apply this conditional mapping to the dataset (one trial) and present the results in Fig. 4. The original percentage recidivism rates are also shown in the top panel of the plot for comparison. Because of our constraint that effectively disallows changing the outcome from $y = 0$ to $\hat{y} = 1$, a demographic group’s recidivism rate can (indirectly) increase only through changes to the decision variables ($X$). We note that the average percentage change in recidivism rates across all demographics is negative when the discriminatory variables are marginalized out (leftmost column). The maximum decreases in recidivism rates are observed for African-American males since they have the highest value of $p_{Y|D}(1 \,|\, d)$ (cf. Table 2). Contrast this with Caucasian females (middle column), who see virtually no change in their recidivism rates since the original rates are a priori close to the final ones (see Table 2). Another interesting observation is that middle aged Caucasian males with 1 to 3 prior counts see an increase in percentage recidivism. This is consistent with the mapping seen in Fig. 3 (middle), and is an example of the indirect introduction of positive outcome values in a cohort, as discussed above.

$D$ (gender, race) | $p_{Y|D}(0\,|\,d)$ | $p_{Y|D}(1\,|\,d)$ | $p_{\hat{Y}|D}(0\,|\,d)$ | $p_{\hat{Y}|D}(1\,|\,d)$
F, A-A | 0.607 | 0.393 | 0.607 | 0.393
F, C | 0.633 | 0.367 | 0.633 | 0.367
M, A-A | 0.407 | 0.593 | 0.596 | 0.404
M, C | 0.570 | 0.430 | 0.596 | 0.404
Table 2: Dependence of the outcome variable on the discriminatory variables before (left pair of columns) and after (right pair of columns) the proposed transformation. F and M indicate Female and Male, and A-A and C indicate African-American and Caucasian.
Figure 5: Top row: high income percentages in the original dataset as a function of age and education, for the overall population (marginalized over $D$) and for different groups $d$. Bottom row: change in these percentages due to the transformation. Age-education pairs with fewer than 20 samples are not shown.

4.2 UCI Adult Data

We apply our optimization approach to the well-known UCI Adult Dataset (Lichman, 2013) as a second illustration of its capabilities. The features were categorized as discriminatory variables ($D$): Race (White, Minority) and Gender (Male, Female); decision variables ($X$): Age (quantized to decades) and Education (quantized to years); and response variable ($Y$): Income (binary). While the response variable considered here is income, the dataset could be regarded as a simplified proxy for analyzing other financial outcomes such as credit approvals.

Specific Form of Optimization. We use the $\ell_1$-distance (twice the total variation) (Pollard, 2002) to measure utility, $\Delta\big(p_{\hat{X},\hat{Y}},\, p_{X,Y}\big) = \big\| p_{\hat{X},\hat{Y}} - p_{X,Y} \big\|_1$. For discrimination control, we use (1), with $J$ given in (3) and $p_{Y_T} = p_Y$, and set a common threshold $\epsilon$ in (1). For the distortion function $\delta$ in (5), we write $(a, e)$ for an age-education pair and $(\hat{a}, \hat{e})$ for a corresponding transformed pair. The distortion function returns (i) a small nonzero value if income is decreased, age is not changed, and education is increased by at most 1 year, (ii) a larger value if age is changed by a decade and education is increased by at most 1 year, regardless of the change in income, (iii) a still larger value if age is changed by more than a decade, or education is lowered by any amount or increased by more than 1 year, and (iv) 0 in all other cases. We then impose constraints of the form (5) with distortion thresholds and corresponding probabilities chosen so that decreases in income, small changes in age, and small increases in education (events (i), (ii)) are permitted with small probabilities, while larger changes in age and education (event (iii)) are not allowed at all. We note that the parameter settings are selected with the purpose of demonstrating our approach, and would change depending on the practitioner’s requirements or guidelines.
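The event-based distortion for the Adult data can be sketched as follows; the three nonzero levels are placeholders standing in for the values elided above.

```python
SMALL, MEDIUM, PROHIBITIVE = 1.0, 2.0, float("inf")   # placeholder levels

def adult_distortion(age, edu, income, age_t, edu_t, income_t):
    """Distortion between (age, edu, income) and a transformed triple.

    Ages are in decades, education in years, income is binary (1 = high).
    """
    d_age, d_edu = abs(age_t - age), edu_t - edu
    # (iii) age changed by more than a decade, education lowered, or
    # education raised by more than 1 year: effectively not allowed.
    if d_age > 1 or d_edu < 0 or d_edu > 1:
        return PROHIBITIVE
    # (ii) age changed by a decade, education raised by at most 1 year,
    # regardless of the change in income.
    if d_age == 1:
        return MEDIUM
    # (i) income decreased, age unchanged, education raised by at most 1 year.
    if income_t < income:
        return SMALL
    # (iv) all other cases.
    return 0.0
```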

Results. For the remainder of the results presented here, we fix $\epsilon$ and use the corresponding optimal transformation. We apply the conditional mapping, generated as the optimal solution to (7), to transform the age, education, and income values of each sample in the dataset. The result of a single realization of this randomization is given in Fig. 5, where we show percentages of high income individuals as a function of age and education before and after the transformation. The original age and education ($X$) are plotted throughout Fig. 5 for ease of comparison; note that changes in individual percentages may exceed the factor implied by $\epsilon$ because discrimination is not controlled by (1) at the level of age-education cohorts. The top left panel indicates that income is higher for more educated and middle-aged people, as expected. The second column shows that high income percentages are significantly lower for females and are accordingly increased by the transformation, most strongly for educated older women and younger women with only 8 years of education, and less so for other younger women. Conversely, the percentages are decreased for males but by much smaller magnitudes. Minorities receive small percentage increases, but smaller than those for women, in part because they are a more heterogeneous group consisting of both genders.

5 Conclusions

We proposed a flexible, data-driven optimization framework for probabilistically transforming data in order to reduce algorithmic discrimination, and applied it to two datasets. The differences between the original and transformed datasets revealed interesting discrimination patterns, as well as corrective adjustments for controlling discrimination while preserving utility of the data. Despite being programmatically generated, the optimized transformation satisfied properties that are sensible from a socio-demographic standpoint, reducing, for example, recidivism risk for males who are African-American in the recidivism dataset, and increasing income for well-educated females in the UCI adult dataset. The flexibility of the approach allows numerous extensions using different measures and constraints for utility preservation, discrimination, and individual distortion control. Investigating such extensions, developing theoretical characterizations based on the proposed framework, and quantifying the impact of the transformations on specific supervised learning tasks will be pursued in future work.

Appendix A Proof of Proposition 3

The proposition is a consequence of the following elementary lemma.

Lemma 1.

Let , and be three fixed probability mass functions with the same discrete and finite support set , and . Then if

(21)

and for all and

(22)

then for all and

(23)
Proof.

We assume , otherwise and we are done. From (21) and the Data Processing Inequality for KL-divergence, for any

(24)

Let be fixed, and, in order to simplify notation, denote . Assuming, without loss of generality,

then (24) implies

(25)

The Taylor series of around 0 has the form

(26)

where is the Eulerian polynomial, which is positive for and satisfies and . First, assume . Then can be lower-bounded by the first term in its Taylor series expansion since all the terms in the series are non-negative. From (25),

(27)

Consequently,

(28)

Now assume . Then the Taylor series (26) becomes an alternating series, and can be lower-bounded by its first two terms

(29)

The term in the l.h.s. of the first inequality satisfies

(30)

as long as . Since the lhs is larger than 1 when then it is a valid lower-bound for in the entire interval where and as long as

(31)

which holds by assumption in the Lemma. Thus,

(32)

and combining the previous equation with (28)

(33)

Finally, since