Causal Discrimination Discovery Through Propensity Score Analysis

Bilal Qureshi, Information Technology University, Pakistan
Faisal Kamiran, Information Technology University, Pakistan
Asim Karim, Lahore University of Management Sciences (LUMS), Pakistan
Salvatore Ruggieri, Università di Pisa, Italy
mscs13034@itu.edu.pk, faisal.kamiran@itu.edu.pk, akarim@lums.edu.pk, ruggieri@di.unipi.it
Abstract

Social discrimination is considered illegal and unethical in the modern world. Such discrimination is often implicit in datasets of observed decisions, and anti-discrimination organizations seek to discover cases of discrimination and to understand the reasons behind them. Previous work in this direction adopted simple observational data analysis; however, this can produce biased results due to the effect of confounding variables. In this paper, we propose a causal discrimination discovery and understanding approach based on propensity score analysis. The propensity score is an effective statistical tool for filtering out the effect of confounding variables. We employ propensity score weighting to balance the distribution of individuals from the protected and unprotected groups w.r.t. the confounding variables. For each individual in the dataset, we quantify its causal discrimination or favoritism with a neighborhood-based measure calculated on the balanced distributions. Subsequently, the causal discrimination/favoritism patterns are understood by learning a regression tree. Our approach avoids common pitfalls in observational data analysis and makes its results legally admissible. We demonstrate the results of our approach on two discrimination datasets.


1 Introduction

In many countries, it is prohibited by law or considered unethical to make decisions regarding individuals or groups of individuals that are influenced by certain attributes of the individual or group, such as race, gender, religion, or locality. Such attributes define protected social groups, and an unjustified unfavorable decision for members of the protected group is considered discriminatory. Consequently, a key requirement of law enforcement agencies, anti-discrimination organizations, and decision makers is to quantify, discover, and understand discrimination in an objective way, starting from historical decision records.

Recently, there has been much research on discrimination discovery using data mining approaches [20]. However, this research largely ignores the fact that observational approaches in data analysis may produce biased results. For example, in a hiring database a naive analysis may reveal that a larger proportion of males have been offered a job than females. This conclusion will be biased if it is known that a larger fraction of males obtain the necessary qualification for the job than females. Hence, qualification confounds or biases the observed difference in hiring rates between males and females. Qualification is a confounding variable in this analysis, as it can predict both the hiring decision and the gender of an applicant. It can also be thought of as an explanatory attribute that explains the preference for males for the job. In a given analysis, there may be several confounding variables, and it is essential that their effect is minimized while estimating discrimination. In the previous example, there is a need to estimate, instead, the causal effect of gender on the hiring decision.

Discovering and understanding causal influences among variables is a fundamental goal of any data analysis process. In general, randomized experiments are the gold standard for inferring such influences in a process. However, randomized experiments are not possible or not cost-effective in discrimination analysis. An example of a quasi-experimental approach is situation testing [2], which uses pairs of testers who have been matched to be similar on all characteristics that may influence the outcome except race, gender, or other grounds of possible discrimination. The tester pairs are then sent into one or more situations in which discrimination is suspected, for example renting an apartment or applying for a job, and the decision outcome is recorded. The approach is quasi-experimental because the analyst does not have full control of all experimental variables (e.g., the presence of other job applicants). Such approaches have limited applicability and high costs. In the vast majority of cases, one has to rely upon observed data along with domain experts’ knowledge. Nonetheless, causal influences are more likely to be acceptable in a court of law than statistical correlations or statistical tests of hypotheses alone [10].

In this paper, we present an approach for causal discrimination discovery and understanding using propensity score analysis. The propensity score is the probability of membership in the protected group (e.g., females) given the attributes of the individual or group of individuals. Propensity score analysis provides a principled way to ‘filter out’ the explainable effect of confounding variables and to quantify the causal effect of the grouping variable (e.g., gender). We adopt propensity score weighting to balance the distributions of protected and unprotected group instances. Based on the modified distribution, we quantify the degree of discrimination or favoritism of each instance. Subsequently, we build a regression tree to understand the characteristics of instances falling under different discrimination/favoritism bands.

The paper is organized as follows. Section 2 introduces the basic tools that we adopt in the discrimination analysis. Section 3 presents the core approach for causal discrimination against individuals, which is extended to causal discrimination against groups in Section 4. Experiments are reported in Section 5. Related work is discussed in Section 6. Section 7 summarizes the contribution and concludes.

2 Preliminaries

This section presents notation and assumptions on the input dataset for discrimination analysis, on distance measures and the set of neighbors of a tuple, and on discrimination measures over contingency tables.

2.1 Dataset for Discrimination Analysis

We assume a dataset in input for the discrimination analysis in the form of a relation $\mathcal{D}$ of tuples $t_1, \ldots, t_n$, where each tuple $t_i$ is identified by its tuple id $i$. Each tuple represents a fact regarding an individual, such as the result of an application (for a loan, a position, a school admission). Attributes of the tuple include: a decision dec, modeling the outcome of the fact; membership to a social group group, such as gender and race; and the legally relevant features taken into account by the (human or automated) decision maker for taking the decision. For example, in a dataset of bank loans, individuals are loan applicants, the decision value is the deny or grant of the loan, the group value is female or male (or, e.g., majority/minority), and the remaining values regard attributes for deciding whether to grant the loan (occupation, income, age, debts, etc.).

Regarding the decision attribute dec, we restrict to binary decisions: $-$ is the negative decision (deny of some benefit), and $+$ is the positive decision (grant of the benefit). We will write $dec(t)$ to denote the decision value ($-$ or $+$) for the tuple $t$.

Regarding the social group group, it is worth noting that civil rights laws explicitly identify the groups to be protected against discrimination – on the grounds of sex, age, race, religion and other social or cultural traits. We assume then that a specific protected-by-law group is provided as an input to the discrimination analysis, and call it the protected group. Individuals not in the protected group form the unprotected group. Summarizing, $group(t)$ is the social group (protected or unprotected) of the tuple $t$. Given $\mathcal{D}$, we denote by $\mathcal{P} = \{t \in \mathcal{D} \mid group(t) = \text{protected}\}$ the set of tuples in the protected group, and by $\mathcal{U} = \mathcal{D} \setminus \mathcal{P}$ the set of those in the unprotected group.

2.2 Distance and Neighbors

Discrimination refers to different treatment for individuals that have similar characteristics apart from membership to different social groups. Similarity is naturally modeled through a distance function $d(t, u)$ between tuples: $d(t, u)$ is a non-negative real number, close to $0$ when $t$ and $u$ are highly similar or "near" each other, and becoming larger the more they differ. We assume that $d$ is defined in terms of all attributes apart from dec and group. For a tuple $t$, we assign to every other tuple $u$ a rank as a neighbor of $t$, denoted as $rank_t(u)$, on the basis of its distance from $t$ or, for equal distances, on the tuple id. Stated operationally, the rank of a tuple is its position in the list of tuples in $\mathcal{D}$ ordered according to distance from $t$ and tuple id's. The kset for a given tuple $t$ is the neighborhood of $k$ tuples centered around $t$:

$$kset(t) = \{ u \in \mathcal{D} \mid rank_t(u) \le k \}$$

A refined version includes an additional constraint on the maximum allowable distance $d_{max}$:

$$kset(t) = \{ u \in \mathcal{D} \mid rank_t(u) \le k \wedge d(t, u) \le d_{max} \}$$

2.2.1 Contingency Tables

decision        protected   unprotected
   −                a            c
   +                b            d

Figure 1: The 4-fold contingency table for $kset(t)$.

A common tool for statistical analysis is provided by a $2 \times 2$, or 4-fold, contingency table, as shown in Fig. 1. With reference to a given population, the table reports the counts of individuals in the protected group with negative ($a$) and positive ($b$) decision, and similarly for individuals in the unprotected group ($c$ and $d$, respectively). Different outcomes between two groups are measured in terms of the proportion of people in each group with a specific outcome. The proportions of negative decisions are $p_1 = a/(a+b)$ for the protected-by-law group and $p_2 = c/(c+d)$ for the unprotected-by-law group. A general legal principle is then to consider group proportional representation [20] in decision outcomes as a quantitative measure of discrimination against a protected-by-law group. Group proportional representation can be measured as differences or rates of these proportions. We will consider risk difference ($RD = p_1 - p_2$), also called discrimination score [5], which measures the difference in the proportion of negative decisions between the protected and the unprotected group. However, our approach readily applies to other measures defined in terms of $p_1$ and $p_2$, such as risk ratio, odds ratio, etc., and to their tests of statistical significance (see the surveys [20, 25]). Due to a division by zero, discrimination measures may be undefined. This occurs when $a + b = 0$ or $c + d = 0$. Such cases are unlikely for the contingency table of the whole dataset, but they can readily occur for contingency tables regarding (small) subsets of individuals. We adhere to the solution of [22, Section 3.3], which consists of replacing $p_1$ with $p^-$ when $a + b = 0$, and $p_2$ with $p^-$ when $c + d = 0$, where $p^-$ (resp., $p^+$) is the fraction of tuples with negative (resp., positive) decision in the whole dataset $\mathcal{D}$. Such an extension is intuitive: when the proportion of individuals from the protected group with negative decision is undefined (because $a + b = 0$), we simply consider the expected proportion $p^-$.
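For concreteness, here is a minimal Python sketch of this extended risk difference (the function name and parameterization are our own; we use Python for all code sketches below):

    def risk_difference(a, b, c, d, p_neg):
        """RD = p1 - p2 over a 4-fold contingency table.

        a, b: protected group members with negative / positive decision
        c, d: unprotected group members with negative / positive decision
        p_neg: fraction of negative decisions in the whole dataset, used
               as the fallback of [22] when a group is empty.
        """
        p1 = a / (a + b) if a + b > 0 else p_neg
        p2 = c / (c + d) if c + d > 0 else p_neg
        return p1 - p2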

3 Causal Individual Discrimination

Consider an individual $t$ whose decision was negative, i.e., $dec(t) = -$, and the central question: was $t$ discriminated? Similarly, one is also interested in individuals whose decision was positive, i.e., $dec(t) = +$, and the question: was $t$ favored?

Our starting point will be the contingency table for the set of tuples in $kset(t)$. E.g., $a$ in Fig. 1 is the number of tuples in the kset belonging to the protected group and having negative decision. Intuitively, a kset represents a context of analysis: a cluster of individuals around $t$, and the contingency table summarizes the observed distribution of groups and decisions in such a cluster. Discrimination measures then represent the amount of disproportionate distribution of decision values observable for individuals similar to $t$ with regard to the features that should affect the decision. For instance, RD ranges from $-1$, which is the maximum observable favoritism towards neighbors of the protected group (this may be the result, e.g., of affirmative actions [20]), to $+1$, which is the maximum observable discrimination against neighbors of the protected group.

A preliminary answer to the central questions above is as follows. For an individual $t$ of the protected group who experienced a negative decision, namely $group(t) = \text{protected}$ and $dec(t) = -$, the risk difference $RD(t)$ of the contingency table of $kset(t)$ is a measure of the degree of discrimination suffered. If such a degree is above a threshold $\tau$, i.e., $RD(t) \ge \tau$, then the individual was discriminated. Dually, an individual of the unprotected group who experienced a positive decision, namely $group(t) = \text{unprotected}$ and $dec(t) = +$, was favored if $RD(t) \ge \tau$ or, equivalently, if $-RD(t) \le -\tau$. The threshold $\tau$ is a parameter of the analysis. It is up to the law or to a trial in court to establish a threshold such that values higher than it represent prima facie evidence of discrimination, namely cases worth further investigation.

However, a major drawback of directly using a discrimination measure over the contingency table as in Fig. 1 is that it does not account for differences among the individuals in the neighborhood. Although the neighborhood contains tuples that are close to each other, differences in the attribute values of tuples in the protected group and those in the unprotected group may still support different decisions for such groups. For example, in an employee salary dataset several factors may contribute to the high/low salary of an employee (education, work hours, age, etc.), while it is desired that males and females with similar credentials have similar salaries. In addition, some variables, e.g., work hours, can predict both the gender of the employee and the salary (females often work fewer hours than males, and fewer work hours imply lower salaries). Therefore, any attempt to estimate the discrimination degree of the dataset will be biased, as it will be confounded by these variables. We aim, instead, at estimating the causal effect of belonging to the protected group on the negative outcome.

3.1 Propensity Scores

Our problem commonly arises in the statistical analysis of observational data where assignment of individuals to treatment and control groups cannot be assumed to be random [24]. In our context, we consider the protected group as the treatment group, and the unprotected group as the control group.

A method for causal analysis over observational data is provided by propensity scores. Propensity score analysis is a principled way to handle multiple confounding variables and to ‘filter out’ the explainable effect of these variables.

Definition 1

Assume that the dataset $\mathcal{D}$ is a sample over a distribution of covariates $X$, not including dec nor group. The propensity score is the conditional probability of belonging to the protected group given the covariates:

$$ps(x) = P(group = \text{protected} \mid X = x)$$

The estimation of propensity scores requires two key decisions: the model or functional form of $ps(x)$ and the (confounding or explanatory) variables to include in $X$. For binary-valued groups, as in our case, the logistic regression model is commonly adopted [7, 17]. The decision regarding which covariates to include in the logistic regression model is less standardized. In general, the selected variables should simultaneously influence group participation and the decision. It is also advised against selecting too many covariates, and polynomial and interaction terms of these covariates, as this can lead to propensity score estimates of exactly one or zero for some individuals, which violates the overlap condition required for effective propensity score analysis [4]. In any case, domain expert oversight is always recommended for selecting variables, since confounding is a causal concept [18]. Following these guidelines, our estimation procedure involves the steps below:

1) The propensity score is given by a logistic regression model learned over the dataset $\mathcal{D}$, i.e.,

$$ps(x) = \frac{1}{1 + e^{-w^T \phi(x)}}$$

Here, $\phi(x)$ is a linear basis function vector comprising the selected covariates (i.e., each $\phi_i(x)$ returns a selected variable) and $w$ is the corresponding weight vector.

2) The variables that are strongly correlated with both group participation and the decision are candidates for inclusion. The final selection, including the fixing of a correlation threshold, is decided by a domain expert.

3) The variables that are proxies for group membership (e.g., having a correlation value > 95%) are removed. A sketch of this estimation procedure is given below.
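As an illustration of steps 1)-3), the following sketch fits the logistic regression propensity model with scikit-learn; the function and column names are our own, not the paper's:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def estimate_propensity_scores(df, covariates, group_col, protected_value):
        """Fit ps(x) = P(group = protected | covariates) by logistic regression."""
        X = pd.get_dummies(df[covariates])             # binarize categoricals
        y = (df[group_col] == protected_value).astype(int)
        model = LogisticRegression(max_iter=1000).fit(X, y)
        return model.predict_proba(X)[:, 1]            # ps(x) for every tuple

    # The covariates come from steps 2)-3): attributes correlated with
    # both group and decision, with proxies of the group removed
    # (e.g. 'relationship' in the Adult dataset, see Section 5).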

Propensity scores can answer the following counterfactual question: “what kind of outcomes would we have observed had the decision involving unprotected individuals involved protected ones instead?” Let us now introduce weights $w(x)$ such that the weighted distribution of the unprotected group becomes identical to that of the protected group, i.e., such that:

$$P(X = x \mid group = \text{protected}) = w(x) \cdot P(X = x \mid group = \text{unprotected})$$

By simple algebra and the Bayes rule, we obtain:

$$w(x) = c \cdot \frac{ps(x)}{1 - ps(x)} \qquad \text{(1)}$$

where $c = P(group = \text{unprotected}) / P(group = \text{protected})$ is a constant term that will not be relevant in the subsequent analysis. $w(x)$ is the weight that a tuple of the unprotected group should count for in case it would belong to the protected group. This approach is known as propensity score weighting [21]. Rosenbaum and Rubin [21] showed that, conditional on the propensity score, all observed covariates are independent of group assignment and, in large samples, they will not confound estimated treatment effects; i.e., in our context, the decisions on the protected group can be compared with the decisions on the unprotected group once the latter is re-weighted using propensity scores.

3.2 Propensity Score Weighting of Contingency Tables

Let us now apply propensity score weighting in the context of discrimination measures over a contingency table (see Fig. 1). Instead of comparing the average negative decision $p_1$ of the protected group with the average negative decision $p_2$ of the unprotected group, we compare $p_1$ with the average negative decision that would be obtained for individuals of the unprotected group if they were instead in the protected group. We define the weighted average negative decision of the unprotected group as:

$$\tilde{p}_2 = \frac{\sum_{u \in U(t),\, dec(u) = -} w(u)}{\sum_{u \in U(t)} w(u)}$$

where $U(t)$ is the set of tuples in the neighborhood $kset(t)$ that belong to the unprotected group, and $w(u)$ is the weight (1) of tuple $u$. Notice that the constant term $c$ in (1) cancels out in the calculation of the ratio $\tilde{p}_2$.

Definition 2

The causal risk difference is $RD_c = p_1 - \tilde{p}_2$.

The causal risk difference compares the proportion $p_1$ of negative decisions for the tuples in the protected group with the proportion $\tilde{p}_2$ of negative decisions for the tuples of the unprotected group re-weighted to the distribution of the protected group. $RD_c$ estimates the average effect of the treatment on the outcome after balancing for the confounding factors [21]. $RD_c$ close to zero means that differences in the decision outcomes between the protected and unprotected group are explained by confounding factors in the selection of the two groups which are covered by the covariates of the propensity score. If $RD_c$ is not close to zero, then there is a bias in the decision value due to group membership (causal discrimination), or due to covariates that have not been accounted for in the analysis (omitted variable bias).
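Putting Definition 2 together with Eq. (1), the computation for one neighborhood can be sketched as follows (array names are ours; the fallback of Section 2.2.1 for empty groups is omitted for brevity):

    import numpy as np

    def causal_risk_difference(prot, neg, ps):
        """RD_c = p1 - p2~ over one neighborhood kset(t).

        prot: boolean array, True for neighbors in the protected group
        neg:  boolean array, True for neighbors with a negative decision
        ps:   propensity scores of the neighbors
        """
        p1 = neg[prot].mean()                     # negative rate, protected
        w = ps[~prot] / (1.0 - ps[~prot])         # weights of Eq. (1); c cancels
        p2_tilde = w[neg[~prot]].sum() / w.sum()  # weighted rate, unprotected
        return p1 - p2_tilde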

4 Causal Discrimination Discovery

In the previous section, we were able to assign to every tuple of the protected group with negative (resp., positive) decision a causal measure $RD_c$ of discrimination (resp., favoritism) observable when comparing its decision to the one of its neighbors. With dual reasoning, we can assign to tuples of the unprotected group a causal measure of discrimination and favoritism by taking the opposite $-RD_c$. The procedure above is useful for individual discrimination analysis. However, in many cases, a global description of who was discriminated/favored is required. In order to infer such descriptions, we proceed by extracting a regression model from a modified version of the dataset at hand. Consider the problem of characterizing discrimination in the protected group. We restrict to tuples in $\mathcal{P}$ with negative decision. Further, we augment the dataset with a new attribute, the class attribute, equal to the value of $RD_c$ for each tuple in it. Over such a labelled dataset, a regression tree is extracted, which can be used for descriptive purposes. A path in the regression tree ending in a prediction of a high class value describes a context of discrimination for the protected group. E.g., in a loan granting dataset, it could specify a specific region or job type of applicants. Similarly, patterns of favoritism for the protected group can be studied by extracting regression trees starting from tuples in $\mathcal{P}$ with positive decision, and looking at tree paths predicting high values of $-RD_c$.
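Assuming a feature matrix X for the protected tuples with negative decision and an array rdc of their causal risk differences (both names are ours), the descriptive step can be sketched with scikit-learn:

    from sklearn.tree import DecisionTreeRegressor, export_text

    # Fit a shallow regression tree on the RD_c labels; root-to-leaf
    # paths predicting high values describe contexts of discrimination.
    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50)
    tree.fit(X, rdc)
    print(export_text(tree, feature_names=list(X.columns)))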

5 Experiments

We evaluate our causal discrimination discovery approach on two real-world datasets. We compare causal and non-causal risk differences and highlight rules explaining the process underlying discrimination or favoritism in each dataset. We also experiment on tampered versions of one dataset to demonstrate the impact of confounding variables.

5.1 Settings

5.1.1 Datasets.

We perform experiments on two commonly-used datasets in discrimination analysis research. The Crime (Communities and Crime) dataset contains 1,994 records of communities described by 124 socio-economic and demographic factors, including their crime rates. The Adult dataset consists of 48,842 records of individuals over 14 attributes related to the individual’s income. Both datasets are publicly available from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/). In the Adult dataset, we use income as the class (decision) attribute, with income > 50K as the positive decision. We use sex as the sensitive attribute, with females as the protected group. In the Crime dataset, ViolentCrimesPerPop is taken as the class (decision) attribute after discretizing it into two bins, with values ≤ 20% as the positive and values > 20% as the negative decision. Race is taken as the sensitive attribute, with black-majority communities as the protected group.

5.1.2 Propensity Score, Neighborhood, Distance Function.

Propensity scores are estimated by a logistic regression model learned over the dataset for predicting the protected group tuples. The attributes selected for propensity score estimation are those appearing both in the top 50% of attributes correlated with the sensitive attribute and in the top 50% of attributes correlated with the class attribute. Information gain is used as the correlation measure.
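This two-sided selection can be sketched with scikit-learn's mutual information estimator (information gain, for discrete features); the matrix X and the label vectors group and decision are our assumed names:

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def top_half(X, y):
        """Column indices in the top 50% by information gain w.r.t. y."""
        mi = mutual_info_classif(X, y, discrete_features=True)
        return set(np.flatnonzero(mi >= np.median(mi)))

    # Candidate covariates: informative for BOTH the sensitive
    # attribute and the class attribute.
    candidates = sorted(top_half(X, group) & top_half(X, decision))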

In the Adult dataset, the selected attributes include: relationship, marital-status, age, occupation, and hours-per-week. We find that relationship is a proxy for the sensitive attribute; thus, this attribute is discarded. In the Crime dataset, we select 48 out of the 124 attributes that are strongly correlated with both the class and the sensitive attribute. All selected attributes are binarized for use in the logistic regression model.

The neighborhood of a tuple $t$ (namely, $kset(t)$) is defined by its $k$ nearest neighbors. The Euclidean distance, with attributes normalized in the interval [0, 1], is used as the distance function. The code and datasets used in this work will be made available at the authors' website.
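A minimal sketch of the neighborhood computation, assuming an attribute matrix X (excluding dec and group) and a chosen k:

    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import MinMaxScaler

    X01 = MinMaxScaler().fit_transform(X)   # normalize attributes to [0, 1]
    nn = NearestNeighbors(n_neighbors=k, metric="euclidean").fit(X01)
    dist, idx = nn.kneighbors(X01)          # row i holds the kset of tuple i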

5.2 Results

We will contrast causal and non-causal discrimination and favoritism trends and highlight causal rules of discrimination and favoritism. We will also present an analysis of different modifications of the Adult dataset to understand the influence of confounding variables on causal and non-causal risk difference.

5.2.1 Variation of Causal Discrimination/Favoritism with Propensity Score.

We start by contrasting, on a fictitious example, risk difference and causal risk difference at the variation of propensity score weighting. Fix a tuple of the protected group with a negative decision, e.g., a female having income ≤ 50K in the Adult dataset. Suppose the 15 nearest neighbors of this individual contain 7 females, with 4 having a negative decision, and 8 males, with 3 having a negative decision. The (non-causal) risk difference of this individual is $RD = 4/7 - 3/8 \approx 0.196$. Suppose that males have a propensity score of $0.5$. Their weights will be $0.5/0.5 = 1$ and then $\tilde{p}_2 = p_2 = 3/8$, hence $RD_c = RD$. Intuitively, if there is no difference in the distribution of males and females, causal risk difference boils down to risk difference. Assume now that the 5 males with positive decision have a greater propensity for being a female (i.e., they are ‘femalish’ in characteristics). Then their weights will become greater than $1$ and $\tilde{p}_2 < 3/8$, hence $RD_c > RD$. For instance, when the propensity score of the positive males is $0.75$, the weight for the positive males is equal to $0.75/0.25 = 3$, and the causal risk difference of the individual becomes $RD_c = 4/7 - 3/18 \approx 0.40$. That is, the individual faces higher discrimination than the one quantified by risk difference, because similar males are given positive decisions even though they have ‘femalish’ characteristics. The opposite case is also possible, namely $RD_c < RD$. E.g., if the 5 males with positive decision have a propensity score of $0.25$, i.e., they are definitely not following the distribution of females, their weight $1/3$ is less than $1$, and then $\tilde{p}_2 = 9/14$, hence $RD_c \approx -0.07 < RD$. This is a case where the non-causal risk difference is overestimating the degree of individual discrimination. Dual statements can be made regarding favoritism of tuples with a positive decision.
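The arithmetic of this example can be checked with a few lines of Python (the propensity values 0.75 and 0.25 are our illustrative choices for the fictitious scenario):

    import numpy as np

    p1 = 4 / 7                                  # 4 of 7 females negative

    for ps_pos in (0.5, 0.75, 0.25):            # ps of the 5 positive males
        w_neg = np.full(3, 0.5 / (1 - 0.5))     # 3 negative males at ps = 0.5
        w_pos = np.full(5, ps_pos / (1 - ps_pos))
        p2_tilde = w_neg.sum() / (w_neg.sum() + w_pos.sum())
        print(ps_pos, round(p1 - p2_tilde, 3))  # 0.196, 0.405, -0.071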


Figure 2: Positive decision probability vs. propensity score (left); $RD$ and $RD_c$ (discrimination) vs. propensity score (middle); $RD$ and $RD_c$ (favoritism) vs. propensity score (right). Top plots: Adult dataset; bottom plots: Crime dataset

5.2.2 Causal Discrimination/Favoritism Trends.

In this section, we study the trends of discrimination and favoritism in each dataset.

Adult Dataset. Fig. 2 (top left), (top middle), and (top right) show the variation of the average probability of positive decision, the average $RD$ and $RD_c$ for discriminated individuals, and the average $RD$ and $RD_c$ for favored individuals, respectively, in different propensity score intervals. The x-axis in these figures gives the ranges of propensity score over which individuals are binned for averaging. Fig. 2 (top left) shows a rapid decline in positive decision (income > 50K) probability with increase in propensity score. Both males and females exhibit a declining trend, confirming that the propensity for being a female, irrespective of the individual’s actual gender, influences the final outcome. However, it is observed that males have a higher chance of a positive outcome than females.

Fig. 2 (top middle) and (top right) show the discrimination and favoritism trends, respectively, in the Adult dataset. Causal and non-causal risk differences for discrimination/favoritism are shown by solid and dashed lines, respectively, and each point on the lines is the average risk difference of all, only female, and only male discriminated/favored individuals in the respective interval. It is observed that overall causal and non-causal discrimination and favoritism tend to be high when the propensity score is low, lowest when the propensity score is around 0.5, and highest when the propensity score is high. The trend for females facing discrimination and males experiencing favoritism is similar to the corresponding trend for all individuals and is not shown in the figures for clarity of presentation. However, the trend for males facing discrimination and females experiencing favoritism deviates from the overall trend at higher propensity scores. Furthermore, such males and females are much fewer in number than the corresponding overall females and males, thus indicating significant disparity in outcomes at higher propensity scores. It is seen that causal risk difference is lower than non-causal risk difference (except for males facing discrimination and females experiencing favoritism). These observations confirm that females face higher discrimination than males, especially when they exhibit more ‘femalish’ characteristics, but their discrimination is also often over-estimated by non-causal risk difference.

Crime Dataset. Fig. 2 (bottom left), (bottom middle), and (bottom right) show the variation of the average probability of positive decision, the average causal and non-causal risk difference for discrimination, and the average causal and non-causal risk difference for favoritism, respectively, in different propensity score intervals (probability of belonging to the black-majority community). The trends for positive decision probability and causal and non-causal favoritism are similar to those for the Adult dataset, while causal and non-causal risk difference for discrimination tends to increase consistently with increasing propensity score. Unlike in the Adult dataset, the non-causal risk difference under-estimates the causal risk difference for both favoritism and discrimination. In summary, black-majority communities face more discrimination than other communities, but non-causal risk difference can give the impression that their discrimination is not significant.

5.2.3 Causal Discrimination/Favoritism Rules.

A key benefit of our approach is the capability to extract and understand the factors and rules that cause discrimination or favoritism in a given dataset. This is possible from the regression trees built over the dataset for predicting $RD_c$. We start by discussing the Adult dataset.

Adult Dataset. Fig. 3 (top left) shows the average $RD_c$ of selected rules describing females facing high discrimination and high favoritism. The different rules are identified on the x-axis. The Most Favored (MF) rules – MF-1: marital-status ∈ {Married-civ-spouse, Married-AF-spouse}; MF-2: MF-1 + occupation ∈ {Adm-clerical, Exec-managerial, Prof-specialty, Tech-support}; and MF-3: MF-2 + hours-per-week < 34.5 – have favoritism of 1.2%, 8.2%, and 21.1%, respectively ($-RD_c$ as percentages). Females satisfying MF-3 represent an extreme case of favoritism. Similarly, the Most Discriminated (MD) rules – MD-1: marital-status ∈ {Divorced, Married-spouse-absent, Never-married, Separated, Widowed}; MD-2: MD-1 + occupation ∈ {Adm-clerical} – have discrimination of 20.2% and 27.4%, respectively ($RD_c$ as percentages). On deeper analysis, we discover that only those females are favored whose characteristics are highly similar to males. For instance, rule MF-3 is the most favored rule for females, but it has 59% coverage among males and only 41% coverage among females; similarly, MF-2 is another favored rule for females, but it has 85% coverage among males and only 15% coverage among females.


Figure 3: Most favored and most discriminated rules for protected group (left); key discriminatory factor identification (middle); comparison of rules between protected and unprotected groups (right). Top plots: Adult dataset; Bottom plots: Crime dataset

Next, we discuss how certain attributes, when combined with others, change the discrimination significantly. Marital-status is an example of such an attribute in the Adult dataset. Fig. 3 (top middle) shows the biasing patterns when we split the female part of the Adult data on marital status (MS). We observe that females with marital status married-civ-spouse or married-AF-spouse have 1.2% favoritism, while females with other marital status values suffer from 20.2% discrimination. We further analyze this split and compare females of the same occupation (i.e., adm-clerical) from both splits (i.e., married-civ-spouse/married-AF-spouse and others); the biasing patterns are almost the same. On the third split we pick individuals with the same hours.per.week (i.e., < 34.5) from both groups (1: MS = married-civ-spouse, married-AF-spouse & occupation = adm-clerical, and 2: MS = others & occupation = adm-clerical), and again the married-civ-spouse, married-AF-spouse category is favored by 21.9% while the females with other marital statuses are discriminated by 29.3%. These observations from Fig. 3 (top middle) show that females with similar income-determining characteristics (e.g., education) are treated differently just because of different marital status (which intuitively should not affect income). Such factors for low income can be used as evidence of discrimination in a court of law.

Next, we assess the decision imbalance by comparing the same set of rules and their corresponding $RD_c$ for both the protected and unprotected groups. We consider the following common rules from both groups (females and males):

Rule-1: marital-status ∈ {Divorced, Married-spouse-absent, Never-married, Separated, Widowed} + occupation ∈ {Craft-repair, Exec-managerial, Farming-fishing, Handlers-cleaners, Machine-op-inspct, Other-service, Priv-house-serv, Prof-specialty, Protective-serv, Sales, Tech-support, Transport-moving} + marital.status ∈ {Widowed}
Rule-2: marital-status ∈ {Divorced, Married-spouse-absent, Never-married, Separated, Widowed} + occupation ∈ {Adm-clerical}
Rule-3: marital-status ∈ {Married-AF-spouse, Married-civ-spouse} + occupation ∈ {Adm-clerical, Exec-managerial, Prof-specialty, Tech-support} + hours.per.week < 34.5
Rule-4: marital-status ∈ {Married-AF-spouse, Married-civ-spouse} + occupation ∈ {Craft-repair, Farming-fishing, Handlers-cleaners, Machine-op-inspct, Other-service, Priv-house-serv, Protective-serv, Sales, Transport-moving}

Fig. 3 (top right) shows the average $RD_c$ of the above rules in both groups. Although the characteristics of females and males according to the rules are the same, the degree of discrimination and favoritism for each rule is significantly different for males and females. For instance, Rule-1 has low discrimination for males but higher discrimination for females. The females that follow Rule-2 suffer from higher discrimination as compared to the males with the same characteristics. Interestingly, males satisfying Rule-3 get discriminated while females get favored.

Crime Dataset. We conduct the same set of experiments to discover and understand causal rules and factors for discrimination and favoritism from the Crime dataset. Fig. 3 (bottom left) shows the average $RD_c$ values for some of the most discriminated and most favored rules for black-majority communities:

MF-1: PctYoungKids2Par ≥ 0.635
MF-2: MF-1 + PctKids2Par ≥ 0.735
MF-3: MF-2 + OwnOccMedVal ≥ 0.405
MD-1: PctYoungKids2Par < 0.635
MD-2: MD-1 + PctIlleg < 0.635
MD-3: MD-2 + PctSpeakEnglOnly < 0.865
MD-4: MD-3 + HousVacant ≥ 0.615

Fig. 3 (bottom middle) identifies the most biasing attributes and shows their preferential impact on the outcome. Fig. 3 (bottom right) compares the average $RD_c$ of the following rules when applied to black-majority and other communities separately:

Rule-1: PctYoungKids2Par ≥ 0.635 + PctKids2Par ≥ 0.735 + OwnOccMedVal < 0.405
Rule-2: PctYoungKids2Par < 0.635 + PctIlleg < 0.635 + PctSpeakEnglOnly ≥ 0.865
Rule-3: PctYoungKids2Par ≥ 0.635 + PctKids2Par < 0.735 + PctWorkMom ≥ 0.615
Rule-4: PctYoungKids2Par ≥ 0.635 + PctKids2Par ≥ 0.735 + OwnOccMedVal ≥ 0.405

5.2.4 Impact of Confounding Variables.

We study here the influence of confounding variables on the causal and non-causal risk difference using modified versions of the Adult dataset. We generated three new datasets from the original dataset by introducing controlled discrimination via a rule. Specifically, we changed the decision value to negative for a randomly selected 80% of the positive individuals satisfying the rules: (a) marital-status = Divorced, (b) occupation = Adm-clerical, and (c) education = Bachelors. Note that marital status and occupation are confounding variables (i.e., they determine both group membership and class) while education is not. The four datasets are identified as Original, Divorced, Adm-Clerical, and Bachelors.
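Under our naming assumptions (a pandas DataFrame with a 'decision' column encoded as '+'/'-'), the tampering step can be sketched as:

    import pandas as pd

    def inject_discrimination(df, attr, value, frac=0.8, seed=0):
        """Flip the decision to negative for frac of the positive tuples
        satisfying attr = value (e.g. marital-status = Divorced)."""
        hit = df[(df[attr] == value) & (df["decision"] == "+")]
        flip = hit.sample(frac=frac, random_state=seed).index
        out = df.copy()
        out.loc[flip, "decision"] = "-"
        return out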

Fig. 4 shows the variation of the average causal and non-causal risk differences with propensity score on discriminated tuples in the Divorced (left plot), Adm-Clerical (middle plot), and Bachelors (right plot) datasets. In each plot, the causal and non-causal risk differences on the Original dataset are also shown. It is readily observed that, while non-causal discrimination does not change significantly in the Divorced and Adm-Clerical datasets with respect to the Original dataset, the causal discrimination is reduced further after the modification, especially at high propensity scores. This indicates that a considerable proportion of the non-causal discrimination is explained by these confounding variables. On the other hand, the difference between causal and non-causal discrimination remains almost the same after the modification in the Bachelors dataset.


Figure 4: Causal and non-causal discrimination in Divorced (left plot), Adm-Clerical (middle plot), and Bachelors (right plot) datasets

Interestingly, regression trees built on datasets modified via the confounding variable rules do not have a split on sex = Female when a check on marital-status = Divorced (for the Divorced dataset) or occupation = Adm-clerical (for the Adm-Clerical dataset) has already occurred. Thus, our approach correctly identifies data modifications through a confounding variable.

6 Related Work

Discrimination-aware data mining is an established multi-disciplinary area of research focusing on topics like discrimination discovery, discrimination prevention and control, legal aspects of discrimination, and social impact of discrimination [23, 13, 15, 11, 14, 12, 6, 26, 20, 16, 9, 1]. Discrimination discovery, which is the focus of this work, is concerned with quantifying the discrimination or favoritism experienced by individuals or groups of individuals and extracting significant rules and patterns underlying such practices from historical data. Discrimination discovery is done through an observational study of outcomes (decisions) for protected and unprotected groups as recorded in the data.

Earlier studies have primarily relied upon simple statistical analysis involving association or correlation measures [23, 5, 15, 22, 25]. However, such analyses can produce misleading conclusions because they largely ignore the effect of confounding variables – variables that can be used to determine both the outcome and the group. In other words, quantification of discrimination or favoritism through such analyses can distort the causal effect of belonging to the protected or unprotected group.

Propensity score analysis, proposed in [21], provides a principled approach for filtering out the effect of observed confounding variables in observational studies. It has been applied in econometrics [19], medical studies, and the social sciences [8]. Propensity score analysis itself has been studied extensively w.r.t. model selection [7, 17] and confounding variable selection [4, 18]. However, the classical statistical approach is limited to weighting of the contingency table for the whole dataset at hand. Instead, we consider one contingency table for each tuple in the dataset, with the aim of measuring causal individual discrimination, not discrimination at the aggregate level. The data in each contingency table refer to the neighbors of the tuple, hence coupling the statistical approach with the legal one of situation testing [2]. In addition, we extract a global description of the groups with the highest discrimination by learning regression trees over the dataset enriched with an attribute measuring causal individual discrimination.

The discrimination-aware data mining community has recently recognized the importance of causal analysis and the perils of correlation analysis [6, 3], while causation has been recognized as significant in legal circles for a while [10]. In [6], propensity score based stratification is adopted to filter out the effect of confounding variables before learning linear regression models for discrimination-aware predictions. In our work, we employ propensity score weighting and address the problem of causal discrimination discovery and quantification. In [3], the causal structure of attributes is learned from data and then analyzed to identify patterns of discrimination or favoritism. While their work focuses on discovering underlying patterns of discrimination, our work also quantifies the discrimination/favoritism faced by single cases.

7 Conclusions

Our approach for discrimination discovery and understanding uses a causal measure of discrimination or favoritism, called causal risk difference, to identify and understand cases of discrimination or favoritism. Based on propensity score and regression tree analysis, our approach constructs a modified distribution of observed covariates that eliminates the effect of observed confounding factors. The degree of discrimination or favoritism is quantified by the risk difference computed over the modified distribution. Subsequently, a regression tree is built to understand the causal patterns of discrimination or favoritism. We applied the approach on two commonly-used datasets in discrimination research. The results show that causal and non-causal discovery and understanding can be significantly different, thus highlighting the value of causal analysis and the potentially misleading results of non-causal analysis. Causal inference and causal discovery are active topics of research in general, but they are less extensively explored in the data mining field. In particular, the practice of social discrimination analysis can benefit greatly from further research in causality.

References

  • [1] Solon Barocas and Andrew D. Selbst. Big data’s disparate impact. California Law Review, 104, 2016. Available at SSRN: http://ssrn.com/abstract=2477899.
  • [2] Marc Bendick. Situation testing for employment discrimination in the United States of America. Horizons Stratégiques, 3(5):17–39, 2007.
  • [3] Francesco Bonchi, Sara Hajian, Bud Mishra, and Daniele Ramazzotti. Exposing the probabilistic causal structure of discrimination. arXiv preprint arXiv:1510.00552, 2015.
  • [4] Alex Bryson, Richard Dorsett, and Susan Purdon. The use of propensity score matching in the evaluation of active labour market policies. Crown, 2002.
  • [5] T. Calders and S. Verwer. Three naive Bayes approaches for discrimination-free classification. Data Mining & Knowledge Discovery, 21(2):277–292, 2010.
  • [6] Toon Calders, Asim Karim, Faisal Kamiran, Wesam Ali, and Xiangliang Zhang. Controlling attribute effect in linear regression. In Proceedings of the 13th IEEE International Conference on Data Mining (ICDM), pages 71–80. IEEE, 2013.
  • [7] Marco Caliendo and Sabine Kopeinig. Some practical guidance for the implementation of propensity score matching. Journal of Economic Surveys, 22(1):31–72, 2008.
  • [8] R. B. D’Agostino. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17(19):2265–2281, 1998.
  • [9] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the Int. Conference on Knowledge Discovery and Data Mining (KDD 2015), pages 259–268. ACM, 2015.
  • [10] Sheila R. Foster. Causation in antidiscrimination law: Beyond intent versus impact. Houston Law Review, 41(5):1469–1548, 2004.
  • [11] F. Kamiran and T. Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1–33, 2012.
  • [12] F. Kamiran, A. Karim, and X. Zhang. Decision theory for discrimination-aware classification. In Proc. of the Int. Conference on Data Mining (ICDM), pages 924–929. IEEE, 2012.
  • [13] Faisal Kamiran, Toon Calders, and Mykola Pechenizkiy. Discrimination aware decision tree learning. In G. I. Webb, B. Liu, C. Zhang, D. Gunopulos, and X. Wu, editors, Proc. of the IEEE Int. Conf. on Data Mining (ICDM), pages 869–874. IEEE Computer Society, 2010.
  • [14] T. Kamishima, S. Akaho, and J. Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Proc. of the European Conference on Machine Learning and on Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), volume 7524 of LNCS, pages 35–50. Springer, 2012.
  • [15] B.T. Luong, S. Ruggieri, and F. Turini. k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proc. of the ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD), pages 502–510. ACM, 2011.
  • [16] Koray Mancuhan and Chris Clifton. Combating discrimination using bayesian networks. Artificial Intelligence and Law, 22(2):211–238, 2014.
  • [17] Daniel F McCaffrey, Greg Ridgeway, and Andrew R Morral. Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods, 9(4):403, 2004.
  • [18] Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, USA, 2 edition, 2009.
  • [19] Greg Ridgeway. Assessing the effect of race bias in post-traffic stop outcomes using propensity scores. Journal of Quantitative Criminology, 22(1):1–29, 2006.
  • [20] A. Romei and S. Ruggieri. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(5):582–638, 2014.
  • [21] P. R. Rosenbaum and D. B. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
  • [22] S. Ruggieri. Using t-closeness anonymity to control for non-discrimination. Transactions on Data Privacy, 7(2):99–129, 2014.
  • [23] S. Ruggieri, D. Pedreschi, and F. Turini. Data mining for discrimination discovery. ACM Trans. on Knowledge Discovery from Data, 4(2):Article 9, 2010.
  • [24] W. R. Shadish, T. D. Cook, and D. T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton-Mifflin, 2002.
  • [25] Indrė Žliobaitė. A survey on measuring indirect discrimination in machine learning. arXiv preprint arXiv:1511.00148v1, 2015.
  • [26] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In Proc. of the 30th Int. Conf. on Machine Learning, ICML, 2013.