Fair Active Learning

Abstract

Bias in training data and the use of proxy attributes are arguably the main causes of unfair machine learning outcomes. ML models are trained on historical data that reflect inherent societal bias. Moreover, collecting labeled data in societal applications is challenging and costly, so proxy attributes are often used as alternatives to labels. Yet, biased proxies cause model unfairness.

In this paper, we propose fair active learning (FAL) as a resolution. Given a limited labeling budget, FAL carefully selects the data points to be labeled in order to balance model performance and fairness. Our comprehensive experiments on real datasets confirm a significant fairness improvement while maintaining model performance.


1 Introduction

Data-driven decision making plays a significant role in modern societies. Data science and advanced computational methods have enabled us to make wiser decisions and to make societies more just, prosperous, inclusive, and safe. With this unique opportunity, however, comes a great deal of responsibility, as improper development of data science technologies can not only fail to help but actually make matters worse. Judges in US courts, for example, use criminal assessment algorithms based on the background information of individuals when setting bails or sentencing criminals. This is valuable as it can lead to safer societies, but at the same time, if not properly developed, it can have significant consequences for people's lives. For instance, the recidivism scores provided to judges have been highly criticized as discriminatory, since it turns out they assign higher risks to African American individuals Angwin et al. (2016).

Machine learning (ML) is at the center of data-driven decision making, as it provides insightful, unseen information about phenomena based on available observations. Among many other applications, machine learning has been used to evaluate individuals and to make societal decisions. In this context, similar to other applications, ML models try to learn the system (individuals and society) based on some observations (historical data). Blindly applying machine learning without paying attention to its societal impacts, however, can lead to serious issues such as biased decision making, resulting in racism or sexism. The following are two major reasons for this to happen:

  • Bias in training data: ML models use the background information of individuals, which is usually biased due to historical discrimination. For example, redlining is a systematic denial of services used in the past against (mainly) specific racial communities, leaving its footprint in existing data records to this day Jan (2018). Gender bias in data Perez (2019), including in health care Pley and Keeling (Sep. 2019), is yet another example of bias in training data.

  • Proxy attributes: labeled data is the cornerstone of supervised learning. Yet, in societal applications there is often limited labeled data. For example, the aforementioned recidivism scores are meant to show how likely an individual is to commit a crime in the future. Similarly, in the context of college admission, the goal is to admit students who are likely to be the most successful in the future. Training data with such information is either unavailable or exists only in very limited quantities. As a consequence, other available attributes are typically used as proxies for the true labels. For instance, "getting arrested by cops" may be considered a proxy for committing a crime, or GPA for future college success.

ML models rely on data that can be highly biased. Bias in data can cause model unfairness, which can give rise to discrimination in the resulting decisions. For instance, a job platform can rank less qualified male candidates higher than more qualified female candidates Lahoti et al. (2019). As a result, a new paradigm of fairness in machine learning has emerged. Fairness has different definitions and can be measured in various ways. To make these arguments concrete, in the following we provide two running examples of applications that we shall use for clarity throughout the paper.

Example 1.

A company is interested in creating a model for predicting recidivism to help judges make wise decisions when setting bails; that is, they want to find out how likely a person is to commit a crime in the future. Suppose the company has access to the background information of some criminal defendants 1. However, the collected data is not labeled. That is because there is no evidence available at the time of the trial about whether or not an individual will commit a crime in the future. Considering a time window, it is possible to label an individual in the dataset by checking the background of the individual within the time window after being released. Nonetheless, this is not simple, may be associated with a cost, and may require expert efforts for data integration and entity resolution.

Example 2.

A loan consulting company is about to create models that will help financial agencies identify “valuable customers” who will pay off their loans on time. The company has collected a dataset of customers who have received a loan in the past few years. In addition to the demographic information, the dataset includes information such as education and income level of individuals. Unfortunately, at the time of approving loans, it is not known whether customers will pay their debt on time, and hence, the data are not labeled. Nevertheless, the company has hired experts who, given the information of an individual who has received a loan in the past, can verify their background and assess if payments were made on time. Of course, considering the costs associated with a background check, it is not viable to freely label all customers.

Both of the above examples build their models on historical data that, as we observed in our experiments, is biased. For instance, the income attribute in Example 2 is known to include gender bias Jones (1983), and the prior count attribute in Example 1 is racially biased Angwin et al. (2016). Also, in both examples the datasets are unlabeled. A data scientist may decide to use a (problematic) proxy attribute as the true label 2 and train the model. Another alternative (depending on the available budget) is to randomly label a subset of the data and use it for training. A more reasonable solution, however, is to follow an active learning strategy, using an expert oracle to gradually label the data according to the available budget. Active learning Settles (2009) sequentially chooses the unlabeled instances whose labeling is most beneficial to the performance of the underlying model. Different sampling techniques exist, which select instances from the pool of unlabeled data points according to their informativeness Settles (2009). The most commonly used measure of informativeness is classification uncertainty, which targets the instances whose labels the model is most uncertain about. We further discuss the active learning literature in § 2.2.

Unfortunately, despite its importance, to the best of our knowledge none of the existing work in active learning takes the societal impacts of the resulting models, such as fairness, into account; instead, it optimizes solely for model performance. This is our goal in this paper. We aim to design an active learning framework that generates fair outcomes. As we shall further elaborate in § 2.3, we define the notion of fairness with respect to sensitive attributes such as race and gender. Focusing on group fairness Dwork et al. (2012); Li and Cropanzano (2009), we consider a model fair if its outcome does not depend on the sensitive attributes. That is, we adopt demographic parity, one of the popular fairness measures Kusner et al. (2017); Dwork et al. (2012). Although we consider model independence as our measure of fairness, in § 4 we demonstrate how to extend our findings to other notions of fairness based on separation and sufficiency Barocas et al. (2019). In this paper, our focus is on classification models; in the rest of the paper, we refer to them simply as "models".

Summary of contributions. In this paper, we introduce fairness in active learning (FAL) for constructing fair models in the context of limited labeled data. We first provide the necessary theoretical background, definitions, and terms. Next, we propose our fair active learning (FAL) framework to balance model performance and fairness. The proposed framework is flexible enough to incorporate different measures of model performance and fairness. Finally, we conduct comprehensive experiments on real datasets to show that performing active learning under a fairness constraint can significantly improve the fairness of a classifier while not dramatically impacting its performance. As we shall present in § 5, our experimental results across different fairness metrics confirm improvements in fairness of around 50% without major reductions in model performance. In summary, our contributions are as follows:

  • Carefully formalizing terms and background, we present different fairness measures based on model independence.

  • We introduce fairness in active learning, an iterative approach that incorporates the fairness measure in its sample selection unit and constructs a fair predictive model as a result.

  • We propose the expected fairness measure for unlabeled sample points based on the best-known estimate of the function.

  • We conduct comprehensive experiments on real-world data, considering different fairness metrics based on model independence. Our results show an improvement of around 50% in fairness measures while not significantly impacting the model performance.

  • We discuss how to extend our framework for different fairness measures.

In the remainder of this paper, we start with background on active learning and our fairness model in § 2. In § 3, we present the FAL framework. In § 4, we discuss how to extend our proposal to different fairness measures. We provide our comprehensive experiments in § 5, review the related work in § 6, and conclude the paper in § 7.

2 Background

2.1 Data Model

We assume the existence of a (training) dataset $\mathcal{D}$ with $n$ instances, each consisting of $d$ features. For a data point $t \in \mathcal{D}$, we use the notation $x$ for the vector of input features and $x[j]$ to refer to the value of the $j$th feature, $1 \le j \le d$. We also assume each data point is associated with at least one sensitive attribute $S$. As we shall further explain in § 2.3, sensitive attributes such as gender and race are non-ordinal categorical attributes used in the fairness model. We use the notation $s$ to refer to the sensitive attribute(s) of $t$. Without loss of generality and to simplify the explanations, unless explicitly stated, we assume $S$ is a single sensitive attribute. Still, we would like to emphasize that our techniques are not limited in the number of sensitive attributes. Each data point is also associated with a label attribute $y$ with $\ell$ possible values $y_1, \dots, y_\ell$. We assume the labels of the data points are initially unknown. In § 2.2, we will explain how to obtain the label of a data point. At any moment during the training process, the subset of data points for which the labels are known is referred to as the labeled pool $\mathcal{L}$, and the rest is called the unlabeled pool $\mathcal{U}$. We note that every entry in $\mathcal{U}$ is identified by the pair $(x, s)$, while every entry in $\mathcal{L}$ is the triple $(x, s, y)$.
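To make the data model concrete, here is a minimal Python sketch, not taken from the paper, of how the labeled and unlabeled pools might be represented; the synthetic data and the names (`oracle`, `acquire_label`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n points with d features, a binary sensitive attribute S,
# and hidden ground-truth labels that only the oracle can reveal.
n, d = 1000, 5
X = rng.normal(size=(n, d))                     # feature vectors x
S = rng.integers(0, 2, size=n)                  # sensitive attribute (two demographic groups)
y_true = (X[:, 0] + 0.5 * S + rng.normal(scale=0.5, size=n) > 0).astype(int)

unlabeled = set(range(n))                       # unlabeled pool U: entries known as (x, s) pairs
labeled = {}                                    # labeled pool L: index -> label, i.e., (x, s, y) triples

def oracle(i):
    """Labeling oracle: reveals the true label of point i (costly in practice)."""
    return int(y_true[i])

def acquire_label(i):
    """Move point i from the unlabeled pool U to the labeled pool L by querying the oracle."""
    unlabeled.discard(i)
    labeled[i] = oracle(i)
```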

2.2 Learning Model

The goal is to learn a classifier $\theta$ that maps the feature space to the labels. We use $\hat{y}$ to refer to the predicted label of a point $x$. Recall that the data points are initially unlabeled. Pool-based active learning for classification, which was introduced in Lewis and Gale (1994), sequentially selects instances from the unlabeled set $\mathcal{U}$ to form a labeled set $\mathcal{L}$ for training. Active learning assumes the existence of an expert oracle that, given a data point, provides its label. Labeling, however, is costly, and usually there is a limited labeling budget $B$. Using the sampling budget, one can randomly label $B$ data points and use them to train a classifier. The challenge, however, is to wisely exhaust the budget to build the most accurate model.

Different sampling strategies have been proposed in the context of active learning. Uncertainty sampling Lewis and Gale (1994) is probably the most common strategy in active learning for classification. It selects data points for labeling so as to maximally reduce the model's uncertainty; to do so, it chooses the point whose label the current model is least certain about. At every iteration of the process, let $\theta$ be the current model, trained using the labeled dataset $\mathcal{L}$. For every data point with feature vector $x$, let $P_{\theta}(y_i \mid x)$ be the posterior probability that its unknown label is $y_i$, based on $\theta$.

By maximizing the uncertainty, active learning selects the points that are close to the decision boundary, where we are least certain about the class label. Uncertainty can be defined in different ways Settles (2009); in its most common form, uncertainty sampling uses maximum entropy Shannon (1948). Equation 1 denotes the Shannon-entropy-based selection rule for classifying with $\theta$, using the probabilities obtained from the classifier for the unknown label variable $y$.

$x^{*} = \operatorname{argmax}_{x \in \mathcal{U}} \; H_{\theta}(x)$   (1)

where $H_{\theta}(x)$, the entropy of the label of $x$ under $\theta$, is defined as follows:

$H_{\theta}(x) = -\sum_{i=1}^{\ell} P_{\theta}(y_i \mid x)\, \log P_{\theta}(y_i \mid x)$   (2)

Algorithm 1 presents the standard active learning algorithm using Equation 1. Iteratively, the algorithm selects a point from $\mathcal{U}$ to be labeled next. It uses the classifier trained in the previous step to obtain the class probabilities (initially, all probabilities are equal) and to calculate the entropies. The algorithm passes the selected point to the labeling oracle, acquires its label, and adds the point to the labeled dataset $\mathcal{L}$. It then uses $\mathcal{L}$ to train the classifier $\theta_t$, where $t$ denotes the current iteration. This process continues until the labeling budget is exhausted.

1:  for $t = 1$ to $B$ do
2:     $x^{*} \leftarrow \operatorname{argmax}_{x \in \mathcal{U}} H_{\theta_{t-1}}(x)$
3:     $y^{*} \leftarrow$ label $x^{*}$ using the labeling oracle
4:     add $(x^{*}, y^{*})$ to $\mathcal{L}$
5:     train the classifier $\theta_t$ using $\mathcal{L}$
6:  end for
7:  return $\theta_B$
Algorithm 1 Active Learning
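For concreteness, the following is a minimal Python sketch of the standard loop in the spirit of Algorithm 1, assuming a scikit-learn style classifier; the function and argument names are my own and not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy(probs):
    """Shannon entropy of each row of class probabilities (Equation 2)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def active_learning(X, oracle, seed_idx, budget):
    """Pool-based active learning with maximum-entropy (uncertainty) sampling,
    in the spirit of Algorithm 1. Assumes the seed set already covers both classes."""
    labeled = list(seed_idx)
    labels = [oracle(i) for i in labeled]
    unlabeled = [i for i in range(len(X)) if i not in set(labeled)]
    clf = LogisticRegression().fit(X[labeled], labels)
    for _ in range(budget):
        probs = clf.predict_proba(X[unlabeled])            # class probabilities under the current model
        pick = unlabeled[int(np.argmax(entropy(probs)))]   # most uncertain point in U
        labeled.append(pick)
        labels.append(oracle(pick))
        unlabeled.remove(pick)
        clf = LogisticRegression().fit(X[labeled], labels) # retrain on the grown labeled pool L
    return clf
```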
Row 1: $f_{1,1} = MI(\hat{y}, S)$ (mutual information)   $f_{1,2} = |\mathrm{cov}(\hat{y}, S)|$ (covariance)
Row 2 (difference): disparity measured by subtracting the group-conditional acceptance probabilities, e.g., $f_{2,1} = |P(\hat{y}=1 \mid S=s_1) - P(\hat{y}=1 \mid S=s_2)|$
Row 3 (ratio): disparity measured via the ratio between these probabilities, scaled so that zero is maximum fairness and the measure lies in $[0,1]$, e.g., $f_{3,1} = 1 - \min_s P(\hat{y}=1 \mid S=s)\, /\, \max_s P(\hat{y}=1 \mid S=s)$
Table 1: Different metrics for measuring demographic disparity.3

2.3 (Un)Fairness Model

Following much of the existing work, while founding our model on societal norms of fairness Barocas et al. (2017), we develop our fairness model on the notion of model independence ($\hat{y} \perp S$) or demographic disparity Barocas et al. (2019); Žliobaitė (2017); Narayanan (2018); Zafar et al. (2017), also referred to by terms such as group fairness Dwork et al. (2012), statistical parity Dwork et al. (2012); Simoiu et al. (2017), and disparate impact Barocas and Selbst (2016); Feldman et al. (2015); Ayres (2005). Although our main focus in this paper is on fairness based on model independence, in § 4 we show how to extend our framework to other measures based on separation ($\hat{y} \perp S \mid y$) and sufficiency ($y \perp S \mid \hat{y}$) Barocas et al. (2019).

Of course, disparities in the model do not necessarily imply that the designers intentionally wanted them to arise. The problem occurs because these models rely entirely on (biased) historical data for learning a system; therefore, the historical disparities in the data cause the (unintentional) bias in the model. We believe that machine learning practitioners are responsible for intervening in the modeling process (at different learning stages) to mitigate model disparities.

Given a classifier and a random point with predicted label $\hat{y}$, demographic parity holds iff $\hat{y}$ is statistically independent of the sensitive attribute $S$ Barocas et al. (2017, 2019). Consider a binary classifier and think of $\hat{y} = 1$ as "acceptance" (in Example 2, the group that receives a loan). Demographic parity is the condition that requires the acceptance rate to be the same for all groups of $S$, i.e., female or male in this case. Under demographic parity, for a binary classifier and a binary sensitive attribute, the statistical independence of the sensitive attribute from the predicted label implies the following:

  1. $P(\hat{y}=1 \mid S=s_1) = P(\hat{y}=1 \mid S=s_2)$: The probability of acceptance is equal for the members of different demographic groups. For instance, in Example 1, members of different race groups have an equal chance of being classified as low risk.

  2. $P(S=s \mid \hat{y}=1) = P(S=s)$: If the population ratio of a particular group is $p$ (i.e., $P(S=s) = p$), the ratio of this group in the accepted class is also $p$. For instance, in Example 2, let $p$ be the female ratio in the applicants' pool. Under demographic parity, the female ratio in the set of approved loan applications also equals $p$.

  3. $MI(\hat{y}, S) = 0$: Mutual information is a measure of the mutual dependence between two variables. Under demographic parity, $\hat{y}$ and $S$ are independent; hence, their mutual information is zero. That is, the conditional entropy $H(\hat{y} \mid S)$ is equal to $H(\hat{y})$.

  4. $\mathrm{cov}(\hat{y}, S) = 0$: The covariance between the target and sensitive attributes is zero. Under demographic parity, $\hat{y}$ and $S$ are independent; as a result, their covariance is equal to zero.

A disparity (or unfairness) measure can be defined using any of the above quantities. Table 1 summarizes some of the ways the disparity can be measured. As stated in the first row of the table, mutual information and covariance (or correlation) provide two natural measures. Another way of quantifying the disparity is by subtracting the probabilities (Row 2 of Table 1) or by taking the ratio between the probabilities (Row 3 of Table 1). Consistent with Row 2, we define the ratio-based measures such that zero is maximum fairness and the measure is in the range [0,1].
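As an illustration (not the paper's code), the sketch below computes disparity quantities of the kind listed in Table 1 for a binary classifier and a binary sensitive attribute; the exact formulas in Table 1 may differ slightly, and the function and key names are assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def disparity_measures(y_hat, s):
    """Demographic-disparity quantities for binary predictions y_hat and a binary sensitive attribute s."""
    y_hat, s = np.asarray(y_hat), np.asarray(s)
    p1 = y_hat[s == 1].mean()     # P(y_hat = 1 | S = 1): acceptance rate of group 1
    p0 = y_hat[s == 0].mean()     # P(y_hat = 1 | S = 0): acceptance rate of group 0
    return {
        "mutual_information": mutual_info_score(s, y_hat),           # zero under demographic parity
        "covariance": abs(np.cov(y_hat, s)[0, 1]),                    # zero under demographic parity
        "difference": abs(p1 - p0),                                   # difference-style measure (Row 2)
        "ratio": 0.0 if max(p1, p0) == 0 else 1.0 - min(p1, p0) / max(p1, p0),  # ratio-style, in [0, 1] (Row 3)
    }
```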

Finally, we would like to reiterate that we consider societal norms of fairness, which are not always fully aligned with statistical measures. Due to societal discrimination against some minority groups, social data is usually "biased" Olteanu et al. (2019). Hence, actions known as reverse discrimination (such as affirmative action) are taken to increase the presence of underrepresented groups in the outcomes. Such fairness guidance is usually provided by law. This can be viewed as a higher acceptance probability for certain protected groups of the sensitive attributes. In this paper, we do not limit ourselves to any particular (demographic disparity) fairness measure and give the user the freedom to provide a customized measure. In other words, we are agnostic to the choice of fairness measure. It is worth mentioning that although we use a single binary sensitive attribute in the explanations and measures, a measure can be defined over multiple non-binary sensitive attributes with overlapping protected groups. Generalizing our notion of fairness to any measure based on demographic disparity, in the rest of the paper we use the notation $f_S(\theta)$ to refer to the (user-provided) fairness measure of a model $\theta$ with respect to $S$. We simplify the notation to $f(\theta)$ when $S$ is clear from context.

3 Fair Active Learning

By smartly selecting the samples to label, AL has the potential to mitigate algorithmic bias if the fairness measure is incorporated into its sampling process. Conversely, not considering fairness while building models can result in model unfairness. As a naïve resolution, one could decide to drop the sensitive attributes from the training data. This, however, is not enough, as bias in the remaining features can still cause model unfairness Salimi et al. (2019). For example, consider a linear regression model of the form $\hat{y} = \sum_j w_j\, x[j] + b$. Let the covariance between the model outcome $\hat{y}$ and a sensitive attribute $S$ be the measure of fairness. Theorem 1 shows that $\mathrm{cov}(\hat{y}, S)$ depends only on the weights $w_j$ and the covariances $\mathrm{cov}(x[j], S)$. In other words, the underlying covariance between the features and the sensitive attribute can cause model unfairness.

Theorem 1. 4

For a linear classifier of the form $\hat{y} = \sum_j w_j\, x[j] + b$, $\mathrm{cov}(\hat{y}, S) = \sum_j w_j\, \mathrm{cov}(x[j], S)$.

Theorem 1 shows that dropping sensitive attributes is not enough for achieving fairness. An immediate question is whether any of the existing AL strategies will generate fair models. The answer is: not necessarily. We show this in Theorem 2.
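As an informal numeric illustration of Theorem 1 (a synthetic check, not from the paper), the snippet below compares the covariance of a linear model's output with $S$ against the weighted sum of feature-$S$ covariances; the data and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200_000, 3
S = rng.integers(0, 2, size=n).astype(float)
# Features are correlated with S, even though S itself is never given to the model.
X = rng.normal(size=(n, d)) + np.outer(S, [1.0, 0.0, 0.5])
w, b = np.array([0.8, -0.3, 0.4]), 0.1

score = X @ w + b                                              # linear model output without S as a feature
lhs = np.cov(score, S)[0, 1]                                   # cov(y_hat, S)
rhs = sum(w[j] * np.cov(X[:, j], S)[0, 1] for j in range(d))   # sum_j w_j * cov(x_j, S)
print(round(lhs, 4), round(rhs, 4))                            # identical (up to float error): dropping S is not enough
```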

Theorem 2.

An active learning method with any sampling strategy that does not consider fairness may generate unfair models.

Following Theorem 2, AL strategies that do not consider fairness may generate unfair models. On the other hand, according to Theorem 1 and the existing literature Salimi et al. (2019); Buolamwini and Gebru (2018); Zou and Schiebinger (2018); Manrai et al. (2016), bias in labeled training data is a major reason for model unfairness. In other words, the samples selected for labeling in an active learning framework can significantly impact the fairness of the trained model. Hence, a smart sampling strategy that reduces bias in the labeled data can mitigate unfairness.

Yet, by optimizing only for fairness, the model may lose its predictive purpose, while obtaining high performance is the main objective in any ML framework, including active learning. For instance, in Example 1, consider a model that randomly (with equal probabilities) classifies individuals as high risk. This model indeed satisfies demographic parity, since the probability of the outcome is (random and therefore) independent of the sensitive attribute. However, such a model provides zero information about how risky an individual is.

To balance the fairness-accuracy trade-off, we make fairness part of the AL sampling strategy in order to develop fair classifiers for applications with limited labeled data, such as Example 1 and Example 2. The goal is to minimize the misclassification error as well as the unfairness (cf. § 2.3). Similar to standard AL, FAL is an iterative process that selects a sample from the unlabeled pool $\mathcal{U}$ to be added to the labeled pool $\mathcal{L}$. Unlike AL, however, FAL considers both fairness and misclassification error as the optimization objective for the sampling step. That is, to choose the next sample to be labeled, FAL selects the one that contributes most to the reduction of the misclassification error as well as of the model unfairness. More specifically, for a sample point $x$, we consider the Shannon entropy $H_{\theta_t}(x)$ for misclassification error and a demographic-disparity measure $f(\theta_{t+1})$ for unfairness, where $\theta_{t+1}$ is the classifier trained on $\mathcal{L} \cup \{(x, y)\}$ at iteration $t+1$, after labeling the point $x$, and $H_{\theta_t}(x)$ is the entropy of $x$ based on the current model $\theta_t$. One way of formulating the optimization problem for sampling is as follows:

$x^{*} = \operatorname{argmax}_{x \in \mathcal{U}} \; H_{\theta_t}(x) \quad \text{subject to} \quad f(\theta_{t+1}) \le \varepsilon$   (3)

where $\varepsilon$ denotes the restriction imposed by regulations for fairness consideration. Similarly, one could consider the misclassification error as a hard constraint while optimizing for fairness. Both of these models can be reformulated as unconstrained optimizations using Lagrange multipliers Rockafellar (1993). Another alternative is to add fairness to the optimization as a regularization term. The formulation can also be viewed as a multi-objective optimization over fairness and misclassification error. Equation 4 is consistent with all of these views and is therefore adopted in our framework.

$x^{*} = \operatorname{argmax}_{x \in \mathcal{U}} \; \alpha\, H_{\theta_t}(x) + (1-\alpha)\big(f(\theta_t) - f(\theta_{t+1})\big)$   (4)

where the coefficient $\alpha \in [0, 1]$ is a user-provided parameter that determines the trade-off between model fairness and model performance. Values of $\alpha$ closer to 1 put greater emphasis on model performance, while smaller values of $\alpha$ put greater importance on fairness. Our experimental results verify that fairness improves substantially through the FAL optimization while the accuracy level is maintained. As we elaborate in § 5, the entropy and fairness values are standardized to the same scale before being combined in Equation 4.
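A minimal sketch of the resulting selection rule, assuming the two terms have already been computed per point and standardized, is given below; the function and argument names are not from the paper.

```python
import numpy as np

def select_next(entropies, expected_reductions, alpha):
    """Selection rule implied by Equation 4: pick the unlabeled point maximizing
    alpha * entropy + (1 - alpha) * expected unfairness reduction. Both inputs are
    assumed to be per-point arrays over U, already standardized to [0, 1]
    (Equations 7-9), so that the trade-off controlled by alpha is meaningful."""
    scores = alpha * np.asarray(entropies) + (1.0 - alpha) * np.asarray(expected_reductions)
    return int(np.argmax(scores))   # index of the next point to send to the labeling oracle
```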

Having discussed the optimization function, next we will provide the details of our FAL framework.

Figure 1: FAL framework

3.1 Framework

At a high level, FAL is an iterative approach similar to standard active learning. As shown in Figure 1, the central component of FAL is the sample selection unit (SSU), which chooses an unlabeled point from $\mathcal{U}$ and asks the labeling oracle to provide its label. The labeled point is moved to $\mathcal{L}$, the set of labeled points, which is used to train $\theta_t$, the classifier at iteration $t$. In the next iteration, $t+1$, SSU employs $\theta_t$ to select the next point to be labeled. This process continues until the labeling budget is exhausted.

The sample selection unit is in charge of selecting the next point to be labeled. SSU uses Equation 4 to balance the trade-off between fairness and misclassification error for the next sample selection. A problem, however, is that at the time of evaluating the candidate points, we do not yet know their labels, as they belong to $\mathcal{U}$. On the other hand, to evaluate the impact of each point on the model fairness, we need to know what the model parameters would be after adding the point to $\mathcal{L}$. This requires knowing the label beforehand, which contradicts the fact that the point is unlabeled.

To resolve this issue, using a decision-theoretic approach Settles (2009), we consider the Expected Unfairness Reduction: selecting the point that is expected to impart the largest reduction to the current model unfairness after acquiring its label. That is to say, we use Equation 5 to pick a sample. In this way, we approximate the expected future fairness of the model by averaging over all possible labels under the current model. In other words, SSU should select the sample point $x$ which, if labeled and added to $\mathcal{L}$, is expected to result in a new model with reduced unfairness.

Let $f(\theta_t)$ be the unfairness of the current model. Consider a point $x \in \mathcal{U}$. Let $f(\theta_{t+1}^{(x, y_i)})$ be the new unfairness measure after adding $x$ to $\mathcal{L}$ if its true label is $y_i$. Of course, SSU does not know the label in advance. Hence, it must instead calculate the unfairness as an expectation over the possible labels5:

$x^{*} = \operatorname{argmax}_{x \in \mathcal{U}} \; \alpha\, H_{\theta_t}(x) + (1-\alpha)\Big(f(\theta_t) - \mathbb{E}_y\big[f(\theta_{t+1})\big]\Big)$   (5)

Equation 6 denotes the expected unfairness computation used by SSU (as shown in Figure 2):

$\mathbb{E}_y\big[f(\theta_{t+1})\big] = \sum_{i=1}^{\ell} P_{\theta_t}(y_i \mid x)\; f\big(\theta_{t+1}^{(x, y_i)}\big)$   (6)

where $\theta_{t+1}^{(x, y_i)}$ is the model trained using $\mathcal{L} \cup \{(x, y_i)\}$.

Following Figure 2, for every point $x$ in the unlabeled pool, SSU considers the different values $y_1, \dots, y_\ell$ as possible labels for $x$. For every possible label $y_i$, it updates the model parameters to the intermediate model $\theta_{t+1}^{(x, y_i)}$ using $\mathcal{L} \cup \{(x, y_i)\}$.

We note that the set of labeled points $\mathcal{L}$ has a different distribution from the underlying data distribution. That is because the labeled points are carefully selected from $\mathcal{U}$ and hence are not unbiased samples from the underlying data distribution. As a result, even though the model is trained using these data points, $\mathcal{L}$ cannot be used for evaluating the fairness of the model. On the other hand, the points in the unlabeled pool are expected to be drawn according to the underlying distribution. Therefore, to create a dataset for evaluating the fairness of a model, we select a random subset of $\mathcal{U}$ and move it to a verification set $\mathcal{V}$. The verification set is created once and is used across the FAL iterations.

Figure 2: Sample Selection Unit: Evaluation of a point for fairness.

Following standard AL, at every iteration, for every possible outcome $y_i$ for a point $x$, SSU uses the current model $\theta_t$ for calculating $P_{\theta_t}(y_i \mid x)$.

Having the fairness measures and the probabilities for each possible outcome, the expected unfairness is computed by aggregating over the different values of $y_i$ (Equation 6). After computing the expected unfairness for each data point in $\mathcal{U}$, SSU identifies the one that optimizes Equation 5 and passes it to the labeling oracle. Algorithm 2 shows the pseudo-code of FAL. It uses the ExpF function of Algorithm 3 for computing the expected unfairness.

1:  for $t = 1$ to $B$ do
2:     $max \leftarrow -\infty$
3:     for $j = 1$ to $|\mathcal{U}|$ do
4:        $H_j \leftarrow H_{\theta_{t-1}}(x_j)$
5:        $E_j \leftarrow$ ExpF$(x_j, \mathcal{L}, \mathcal{V}, \theta_{t-1})$
6:        $score_j \leftarrow \alpha\, H_j + (1-\alpha)\big(f(\theta_{t-1}) - E_j\big)$
7:        if $score_j > max$ then
8:           $max \leftarrow score_j$;  $x^{*} \leftarrow x_j$
9:        end if
10:    end for
11:    $y^{*} \leftarrow$ label $x^{*}$ using the labeling oracle
12:    add $(x^{*}, y^{*})$ to $\mathcal{L}$
13:    train the classifier $\theta_t$ using $\mathcal{L}$
14: end for
15: return $\theta_B$
Algorithm 2 Fair Active Learning
1:  $E \leftarrow 0$
2:  for $i = 1$ to $\ell$ do
3:     train $\theta'$ using $\mathcal{L} \cup \{(x, y_i)\}$
4:     compute $f(\theta')$ using $\mathcal{V}$
5:     $E \leftarrow E + P_{\theta_{t-1}}(y_i \mid x)\; f(\theta')$
6:  end for
7:  return $E$
Algorithm 3 ExpF
input: point $x$, labeled pool $\mathcal{L}$, verification set $\mathcal{V}$, current model $\theta_{t-1}$
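Below is a minimal Python sketch of the ExpF computation (Algorithm 3 / Equation 6), assuming a scikit-learn style classifier; the function name, signature, and the `unfairness` callback are my own assumptions. The `unfairness` argument could be, for example, the difference measure sketched in § 2.3.

```python
import numpy as np
from sklearn.base import clone

def expected_unfairness(x, clf, X_lab, y_lab, X_ver, s_ver, unfairness):
    """Expected unfairness of the model after labeling x (Algorithm 3 / Equation 6):
    for each candidate label y_i, retrain on L plus (x, y_i), measure unfairness on the
    verification set V, and weight by the current model's probability P(y_i | x)."""
    probs = clf.predict_proba(x.reshape(1, -1))[0]   # label distribution for x under the current model
    exp_f = 0.0
    for p_i, y_i in zip(probs, clf.classes_):
        # Intermediate model trained on L plus (x, y_i).
        clf_i = clone(clf).fit(np.vstack([X_lab, x]), np.append(y_lab, y_i))
        # Unfairness f of the intermediate model, evaluated on V (a random subset of U).
        exp_f += p_i * unfairness(clf_i.predict(X_ver), s_ver)
    return exp_f
```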

4 Extension to Other Fairness Models

So far in this paper, we have considered independence ($\hat{y} \perp S$) for fairness. Here, we discuss how to extend our findings to other measures based on separation and sufficiency Barocas et al. (2019), such as predictive parity, error rate balance, and accuracy equity Narayanan (2018).

FAL balances fairness and misclassification error using Equation 5. Certainly, the entropy term does not depend on the choice of fairness measure. Also, the abstract fairness term is not limited to a specific definition. However, despite this abstraction, computing the expected unfairness based on separation or sufficiency is challenging.

Looking at Figure 2, recall that we use the verification set $\mathcal{V}$ for estimating the fairness of a model. As an unbiased sample set from $\mathcal{U}$, it follows the underlying data distribution and, hence, can be used for measuring demographic disparity. However, this set cannot be used for estimating fairness according to separation or sufficiency, since its instances are not labeled. On the other hand, the pool of labeled data $\mathcal{L}$ is not representative of the underlying data distribution.

In order to extend our results to other fairness measures, it is enough to label $\mathcal{V}$. Once $\mathcal{V}$ is labeled, it is easy to see that our framework performs as-is for any such fairness measure. We understand that, since the labeling budget is limited, labeling $\mathcal{V}$ may reduce the number of instances we can use for training the model. One resolution is to limit $\mathcal{V}$ to a small set and accept the potential error in the estimations. How small $\mathcal{V}$ can be while still providing accurate-enough estimations, along with other resolutions, are interesting questions that we leave for future work.
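For concreteness, a sketch (not from the paper) of one separation-style measure that could plug into the abstract fairness term once the verification set is labeled is shown below; the function name and signature are assumptions.

```python
import numpy as np

def tpr_gap(y_hat, y_true, s):
    """A separation-style disparity: the gap in true-positive rates between two groups.
    Unlike the independence-based measures, it needs true labels, which is why extending
    FAL beyond independence requires labeling the verification set V.
    Assumes both groups contain positive examples."""
    y_hat, y_true, s = map(np.asarray, (y_hat, y_true, s))
    tpr = lambda g: y_hat[(s == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))
```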

Figures 3-10: For each of the four fairness measures used in the main-text experiments, the first plot of each pair shows accuracy versus the fairness measure (the accuracy-fairness trade-off across values of $\alpha$) and the second shows precision versus recall.

5 Experiments

The experiments were performed on a Linux machine with a Core i9 CPU and 128GB of memory. The algorithms were implemented in Python 3.7.

5.1 Datasets

COMPAS6: published by ProPublica Angwin et al. (2016), this dataset contains defendants' information such as marital status, race, age, juvenile felony count, prior convictions, and the charge degree of the current arrest. We normalized the data to zero mean and unit variance. We consider sex and race as sensitive attributes. Defining the default fairness measures on race, we filtered the dataset to black and white defendants; after filtering, the dataset contains 5,875 defendants. Following standard practice Corbett-Davies et al. (2017); Mehrabi et al. (2019); Dressel and Farid (2018); Flores et al. (2016), we use the two-year violent recidivism record as the true label of recidivism: the label is 1 if the two-year recidivism count is greater than zero and 0 otherwise.

5.2 Algorithms Evaluated

We evaluate the performance of the following approaches on the benchmark dataset described in § 5.1. We apply standard logistic regression as the classifier in all cases.

Fair Active Learning (FAL). Algorithm 2, as described in § 3, is our proposal in this paper. As an implementation note, the two components of the optimization objective, entropy and expected unfairness reduction, are on different scales. Consequently, we need to standardize the measures to the same magnitude in order to be able to combine them. For the points in $\mathcal{U}$, let $H_{\min}$, $H_{\max}$, $E_{\min}$, and $E_{\max}$ be the minimum entropy, maximum entropy, minimum expected unfairness reduction, and maximum expected unfairness reduction, respectively. To ensure that the measures are on the same scale and the trade-off between the two components is consistent, we standardize the entropy and the expected unfairness reduction as in Equations 7 and 8, respectively.

$\hat{H}_{\theta_t}(x) = \dfrac{H_{\theta_t}(x) - H_{\min}}{H_{\max} - H_{\min}}$   (7)
$\hat{E}(x) = \dfrac{E(x) - E_{\min}}{E_{\max} - E_{\min}}$   (8)
where $E(x) = f(\theta_t) - \mathbb{E}_y[f(\theta_{t+1})]$ denotes the expected unfairness reduction for $x$.

Consequently, Equation 5 can be rewritten as:

$x^{*} = \operatorname{argmax}_{x \in \mathcal{U}} \; \alpha\, \hat{H}_{\theta_t}(x) + (1-\alpha)\, \hat{E}(x)$   (9)
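A minimal sketch of this min-max standardization, assuming per-point score arrays over the unlabeled pool, follows; it is my own illustration rather than the paper's implementation.

```python
import numpy as np

def standardize(values):
    """Min-max standardization over the unlabeled pool (Equations 7 and 8), bringing
    entropy and expected unfairness reduction onto the same [0, 1] scale."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()
    return np.zeros_like(v) if hi == lo else (v - lo) / (hi - lo)

# Equation 9: combine the standardized terms with the user-provided alpha.
# scores = alpha * standardize(entropies) + (1 - alpha) * standardize(expected_reductions)
```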

Active Learning (AL). In AL (Algorithm 1), samples are selected to minimize the misclassification error through uncertainty sampling. To do so, we calculate the entropy of each unlabeled instance in $\mathcal{U}$ and choose the point with maximum entropy to label next.

Random Labeling (RL). As mentioned briefly in § 1, a baseline approach for the limited labeling context is to randomly label a subset of points in . In RL, we ask the labeling oracle to provide the label of random samples and use them to train the classifier.

5.3 Performance Evaluation

We evaluate the performance of FAL, AL, and RL with the different demographic disparity functions defined in Table 1. We label the fairness measures as $f_{r,c}$, consistent with the cell of Table 1 in which each measure is located. For instance, $f_{1,1}$ refers to mutual information, located in cell [1,1] (first row and first column) of Table 1. We study the trade-off between accuracy and fairness by changing the coefficient $\alpha$ in Equation 5 while considering the different fairness metrics in Table 1. For each scenario, we ran the experiments on 10 different random splits of the data and considered accuracy, precision/recall, and the corresponding demographic disparity function used in the optimization step as evaluation measures.

For the COMPAS dataset, we perform the experiments using 10 random splits of the dataset into training, verification, and testing sets. We study several values of $\alpha$, plus AL and RL. We report the mean and variance over the 10 random splits. We set the maximum labeling budget to 400, where the performance leveled off in our preliminary results. In each FAL and AL scenario, we start with six labeled points and sequentially select points to label until the budget of 400 is exhausted.
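The following sketch outlines this protocol in Python; it is illustrative only, and the split fractions and $\alpha$ grid are hypothetical placeholders since the exact values were lost in extraction.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Sketch of the experimental protocol. The split fractions and the alpha grid are
# hypothetical placeholders: the exact values did not survive extraction.
def make_split(n, seed, ver_frac=0.2, test_frac=0.2):
    idx = np.arange(n)
    rest, test_idx = train_test_split(idx, test_size=test_frac, random_state=seed)
    train_idx, ver_idx = train_test_split(rest, test_size=ver_frac, random_state=seed)
    return train_idx, ver_idx, test_idx    # training pool, verification set V, test set

N_SPLITS = 10          # results are averaged over 10 random splits
BUDGET = 400           # maximum labeling budget for the COMPAS experiments
SEED_LABELED = 6       # each run starts with six labeled points
ALPHAS = [0.2, 0.4, 0.6, 0.8]   # hypothetical grid of alpha values
```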

Summary of results: Figures 3-10 provide the results of the evaluation for each method. Each pair of plots corresponds to one measure of fairness in Table 1. Due to space limitations, we show the results for four of the measures listed in Table 1 (see footnote 7). In each pair, the first plot corresponds to the accuracy-fairness trade-off and the second plot to precision-recall. In summary, the results support the effectiveness of our proposal: FAL significantly improves fairness, reducing the disparities in the different scenarios by around 50%, while the model accuracy (fraction of correct predictions) remains almost unchanged. The results also indicate that AL and RL have lower precision and recall. Precision and recall are better evaluation measures for unbalanced data classification; thereby, FAL outperforms AL and RL in predicting the right class. Below, we explain the different cases in detail.

Figures 3 and 4 correspond to employing mutual information, $f_{1,1}$, in the optimization step. We note that adding the fairness measure to the optimization for candidate selection does not significantly impact the overall accuracy of the model across the different $\alpha$ values (Figure 3). Compared to AL, the accuracy also does not drop dramatically. The demographic disparity of the model, however, dropped substantially on average compared to AL and RL. Figure 4 indicates that AL and RL have lower precision and recall. Precision and recall are better evaluation measures for unbalanced data classification; thereby, FAL outperforms AL and RL in predicting the right class.

Moving to the second fairness measure, Figures 5 and 6 show the results using the covariance metric, $f_{1,2}$, in the optimization. We note that the accuracy of the model across the different scenarios shows a negligible difference. The covariance metric of FAL is, on average, lower than that of AL and RL.

Employing the third measure in the optimization process leads to the results presented in Figures 7 and 8. Only a very small change can be seen in the accuracy score across the different values of $\alpha$, and the same pattern of fairness improvement is observable.

The results provided in Figures 9 and 10 correspond to the fourth metric applied in the optimization process. The results are consistent with our findings in the previous scenarios: the fairness measure is improved by FAL across the different $\alpha$ values, whereas the accuracy does not change remarkably.

6 Related work

Algorithmic Fairness. Algorithmic fairness has been extensively studied in recent years. Barocas et al. (2017); Žliobaitė (2017); Romei and Ruggieri (2014); Mehrabi et al. (2019) provide surveys on discrimination and fairness in algorithmic decision making and machine learning.

Existing works have formulated fairness in classification as a constrained optimization Zafar et al. (2017, 2015); Menon and Williamson (2018); Corbett-Davies et al. (2017); Celis et al. (2019); Hardt et al. (2016); Huang and Vishnoi (2019). A body of work focuses on modifying the classifier, in-process, to build fair classifiers Fish et al. (2016); Goh et al. (2016); Dwork et al. (2012); Komiyama et al. (2018); Corbett-Davies et al. (2017). Others remove disparate impact by pre-processing the training data Luong et al. (2011); Zemel et al. (2013); Kamiran and Calders (2010, 2012); Feldman et al. (2015); Krasanakis et al. (2018); Sun et al. (2019); Asudeh et al. (2019b); Salimi et al. (2019), while a last group post-processes model outcomes to achieve fairness Kim et al. (2018); Hardt et al. (2016); Pleiss et al. (2017); Hébert-Johnson et al. (2017). Fairness has also been studied in specific ML contexts such as reinforcement learning Jabbari et al. (2017), adversarial networks Wadsworth et al. (2018); Xu et al. (2018), and feature acquisition Noriega-Campero et al. (2019). Fairness in data-driven decision making has also been studied in related topics, such as ranking Asudeh et al. (2019a); Guan et al. (2019); Yang et al. (2018); Zehlike et al. (2017) and recommendation systems Burke (2017); Tsintzou et al. (2018); Yao and Huang (2017).

Active Learning. Different active learning scenarios (membership query synthesis, stream-based selective sampling, pool-based active learning) and sampling strategies (uncertainty sampling, query-by-committee, expected model change, variance reduction, etc.) are surveyed in Settles (2009). Active learning has been widely used to train a wide range of classifiers in applications where the labeling process is labor-intensive and costly. Examples include image and speech recognition Hoi et al. (2006); Joshi et al. (2009); Minakawa et al. (2013); Yu et al. (2010); Riccardi and Hakkani-Tur (2005), information retrieval Tian and Lease (2011), text analysis Tong and Koller (2001); Cormack and Grossman (2016); Hu et al. (2016); Davy and Luz (2007), and recommender systems Sun et al. (2013); Resnick and Varian (1997); Houlsby et al. (2014).

7 Conclusion

In this paper, we introduced fairness in active learning. Our framework computes the expected unfairness reduction for each unlabeled sample point and selects the one that maximizes a linear combination of the misclassification error and the expected unfairness reduction. We carried out experiments with different fairness measures across different weights. The results confirmed that our proposed FAL framework builds a fair model without majorly affecting its performance. The comparison with standard active learning shows an improvement in the fairness of the constructed classifier of about 50%.

APPENDIX

Appendix A Proofs

Theorem 1: For a linear classifier of the form $\hat{y} = \sum_j w_j\, x[j] + b$, $\mathrm{cov}(\hat{y}, S) = \sum_j w_j\, \mathrm{cov}(x[j], S)$.

Proof.
$\mathrm{cov}(\hat{y}, S) = \mathrm{cov}\Big(\sum_j w_j\, x[j] + b,\; S\Big) = \sum_j w_j\, \mathrm{cov}(x[j], S) + \mathrm{cov}(b, S) = \sum_j w_j\, \mathrm{cov}(x[j], S)$   (10)
∎

Theorem 2: An active learning method with any sampling strategy that does not consider fairness may generate unfair models.

Figure 11: Illustration of proof by construction for Theorem 2.
Proof.

We provide the proof by construction. In the following, we construct a case where, between a pair of points, the one that worsens the fairness is preferred to be labeled. Consider a toy 2D setting where $x_1$ and $x_2$ are the features used for training a linear classifier. Let color be the binary sensitive attribute with values blue and red. Assume the underlying distribution of the red points is uniform over the ranges of $x_1$ and $x_2$ (the two features are independent for the red points). Now consider an active learning method (AL) that has labeled a set of points $\mathcal{L}$ so far. AL can use any arbitrary sampling strategy, specified by a score function that should be maximized, i.e., points with higher scores are preferred. Suppose that, using the labeled points $\mathcal{L}$, the current weights of the classifier define the decision boundary $d$, shown as the black solid line in Figure 11; a point is classified as positive iff it falls on one specific side of $d$.

Now consider the two points $t_1$ and $t_2$ specified in the figure. Note that AL chooses between $t_1$ and $t_2$ purely based on their scores. Suppose $t_1$ is the point that maximizes the score and, hence, is preferred to be labeled next (the following argument can easily be adjusted if $t_2$ is preferred). After labeling $t_1$ and adding it to $\mathcal{L}$, the decision boundary gets updated to $d_1$ (the dashed line in Figure 11). Suppose the dotted line $d_2$ would be the decision boundary if $t_2$ had been selected instead.

First, assuming that all three lines $d$, $d_1$, and $d_2$ divide the space in half, and given the uniform distribution of the red points in the space, $P(\hat{y}=1 \mid \text{color}=\text{red})$ is the same for all three decision boundaries. Next, let the underlying distribution of the blue points8 be according to the blue ellipse in Figure 11 (its axes are the eigenvectors of the covariance matrix of $x_1$ and $x_2$, while the eigenvalues are as specified by the intersection of the blue ellipse with the two axes). In this setting, if $d_2$ were the decision boundary, $P(\hat{y}=1 \mid \text{color}=\text{blue})$ would equal $P(\hat{y}=1 \mid \text{color}=\text{red})$. However, since $t_1$ has been selected, $d_1$ is the decision boundary and, hence, $P(\hat{y}=1 \mid \text{color}=\text{blue}) \ne P(\hat{y}=1 \mid \text{color}=\text{red})$. In other words, if AL had picked $t_2$, the model would satisfy demographic parity, but it picked $t_1$, which introduced demographic disparity. ∎

Appendix B Additional Experiment Results

Figures 12-15: Accuracy versus the fairness measure, and precision versus recall, for the two remaining fairness measures of Table 1.

Figures 12-15 provide the results of the evaluation for the two remaining measures of Table 1. We note a very small change in the accuracy score across the different values of $\alpha$. Figures 12 and 14 demonstrate that, in both cases, FAL significantly improves the corresponding fairness measure. The results (Figures 13 and 15) also indicate that, compared to FAL, AL and RL have lower precision and recall.

Footnotes

  1. In the US, such information is provided by sheriff offices of the counties. For instance, for the COMPAS dataset, ProPublica used information obtained from the Sheriff Office of the Broward County. https://bit.ly/36CTc2F
  2. In the rest of the paper we refer to true label as label.
  3. In addition to the measures in Table 1, one could use the correlation between $\hat{y}$ and $S$, or the measure of Agarwal et al. (2018), for measuring disparity.
  4. Proofs are provided in the Appendix.
  5. Note that since $f(\theta_t)$ is equal for all candidate points, the second term in Equation 5 maximizes the expected unfairness reduction.
  6. ProPublica, https://bit.ly/35pzGFj
  7. We refer to the appendix for more results.
  8. Note that since AL does not consider demographic distributions for choosing the points, we have the freedom to pick the demographic distributions as we wish.

References

  1. A reductions approach to fair classification. arXiv preprint arXiv:1803.02453. Cited by: footnote 4.
  2. Machine bias: risk assessments in criminal sentencing. ProPublica. External Links: Link Cited by: §1, §1, §5.1.
  3. Designing fair ranking schemes. SIGMOD. Cited by: §6.
  4. Assessing and remedying coverage for a given dataset. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pp. 554–565. Cited by: §6.
  5. Three tests for measuring unjustified disparate impacts in organ transplantation: the problem of” included variable” bias. Perspectives in biology and medicine 48 (1), pp. 68–S87. Cited by: §2.3.
  6. Fairness in machine learning. NIPS Tutorial. Cited by: §2.3, §2.3, §6.
  7. Fairness and machine learning: limitations and opportunities. Note: \urlfairmlbook.org Cited by: §1, §2.3, §2.3, §4.
  8. Big data’s disparate impact. Calif. L. Rev. 104, pp. 671. Cited by: §2.3.
  9. Gender shades: intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pp. 77–91. Cited by: §3.
  10. Multisided fairness for recommendation. arXiv preprint arXiv:1707.00093. Cited by: §6.
  11. Classification with fairness constraints: a meta-algorithm with provable guarantees. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 319–328. Cited by: §6.
  12. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 797–806. Cited by: §5.1, §6.
  13. Scalability of continuous active learning for reliable high-recall text classification. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pp. 1039–1048. Cited by: §6.
  14. Dimensionality reduction for active learning with nearest neighbour classifier in text categorisation problems. In Sixth International Conference on Machine Learning and Applications (ICMLA 2007), pp. 292–297. Cited by: §6.
  15. The accuracy, fairness, and limits of predicting recidivism. Science advances 4 (1), pp. eaao5580. Cited by: §5.1.
  16. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214–226. Cited by: §1, §2.3, §6.
  17. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259–268. Cited by: §2.3, §6.
  18. A confidence-based approach for balancing fairness and accuracy. In Proceedings of the 2016 SIAM International Conference on Data Mining, pp. 144–152. Cited by: §6.
  19. False positives, false negatives, and false analyses: a rejoinder to machine bias: there’s software used across the country to predict future criminals. and it’s biased against blacks. Fed. Probation 80, pp. 38. Cited by: §5.1.
  20. Satisfying real-world goals with dataset constraints. In Advances in Neural Information Processing Systems, pp. 2415–2423. Cited by: §6.
  21. MithraRanking: a system for responsible ranking design. In SIGMOD, Cited by: §6.
  22. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pp. 3315–3323. Cited by: §6.
  23. Calibration for the (computationally-identifiable) masses. arXiv preprint arXiv:1711.08513. Cited by: §6.
  24. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd international conference on Machine learning, pp. 417–424. Cited by: §6.
  25. Cold-start active learning with robust ordinal matrix factorization. In International Conference on Machine Learning, pp. 766–774. Cited by: §6.
  26. Active learning for text classification with reusability. Expert Systems with Applications 45, pp. 438–449. Cited by: §6.
  27. Stable and fair classification. arXiv preprint arXiv:1902.07823. Cited by: §6.
  28. Fairness in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1617–1626. Cited by: §6.
  29. Redlining was banned 50 years ago. it’s still hurting minorities today. Note: Washington Post Cited by: 1st item.
  30. Sources of gender inequality in income: what the australian census says. Social Forces 62 (1), pp. 134–152. Cited by: §1.
  31. Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2372–2379. Cited by: §6.
  32. Classification with no discrimination by preferential sampling. In Proc. 19th Machine Learning Conf. Belgium and The Netherlands, pp. 1–6. Cited by: §6.
  33. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems 33 (1), pp. 1–33. Cited by: §6.
  34. Multiaccuracy: black-box post-processing for fairness in classification. arXiv preprint arXiv:1805.12317. Cited by: §6.
  35. Nonconvex optimization for regression with fairness constraints. In International Conference on Machine Learning, pp. 2742–2751. Cited by: §6.
  36. Adaptive sensitive reweighting to mitigate bias in fairness-aware classification. In Proceedings of the 2018 World Wide Web Conference, pp. 853–862. Cited by: §6.
  37. Counterfactual fairness. In Advances in Neural Information Processing Systems, pp. 4066–4076. Cited by: §1.
  38. Ifair: learning individually fair data representations for algorithmic decision making. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pp. 1334–1345. Cited by: §1.
  39. A sequential algorithm for training text classifiers. In SIGIR’94, pp. 3–12. Cited by: §2.2, §2.2.
  40. Fairness at the group level: justice climate and intraunit justice climate. Journal of management 35 (3), pp. 564–599. Cited by: §1.
  41. K-nn as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 502–510. Cited by: §6.
  42. Genetic misdiagnoses and the potential for health disparities. New England Journal of Medicine 375 (7), pp. 655–665. Cited by: §3.
  43. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635. Cited by: §5.1, §6.
  44. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency, pp. 107–118. Cited by: §6.
  45. Image sequence recognition with active learning using uncertainty sampling. In The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1–6. Cited by: §6.
  46. Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., New York, USA, Cited by: §2.3, §4.
  47. Active fairness in algorithmic decision making. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 77–83. Cited by: §6.
  48. Social data: biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data 2, pp. 13. Cited by: §2.3.
  49. Invisible women: exposing data bias in a world designed for men. Random House. Cited by: 1st item.
  50. On fairness and calibration. In Advances in Neural Information Processing Systems, pp. 5680–5689. Cited by: §6.
  51. Gender bias in health AI - prejudicing health outcomes (or getting it right!). Women in Global Health. Cited by: 1st item.
  52. Recommender systems. Communications of the ACM 40 (3), pp. 56–59. Cited by: §6.
  53. Active learning: theory and applications to automatic speech recognition. IEEE transactions on speech and audio processing 13 (4), pp. 504–511. Cited by: §6.
  54. Lagrange multipliers and optimality. SIAM review 35 (2), pp. 183–238. Cited by: §3.
  55. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review 29 (5), pp. 582–638. Cited by: §6.
  56. Interventional fairness: causal database repair for algorithmic fairness. In Proceedings of the 2019 International Conference on Management of Data, pp. 793–810. Cited by: §3, §3, §6.
  57. Active learning literature survey. Technical report University of Wisconsin-Madison Department of Computer Sciences. Cited by: §1, §2.2, §3.1, §6.
  58. A mathematical theory of communication. Bell system technical journal 27 (3), pp. 379–423. Cited by: §2.2.
  59. The problem of infra-marginality in outcome tests for discrimination. The Annals of Applied Statistics 11 (3), pp. 1193–1216. Cited by: §2.3.
  60. MithraLabel: flexible dataset nutritional labels for responsible data science. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM), Cited by: §6.
  61. Learning multiple-question decision trees for cold-start recommendation. In Proceedings of the sixth ACM international conference on Web search and data mining, pp. 445–454. Cited by: §6.
  62. Active learning to maximize accuracy vs. effort in interactive information retrieval. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pp. 145–154. Cited by: §6.
  63. Support vector machine active learning with applications to text classification. Journal of machine learning research 2 (Nov), pp. 45–66. Cited by: §6.
  64. Bias disparity in recommendation systems. arXiv preprint arXiv:1811.01461. Cited by: §6.
  65. Achieving fairness through adversarial learning: an application to recidivism prediction. arXiv preprint arXiv:1807.00199. Cited by: §6.
  66. Fairgan: fairness-aware generative adversarial networks. In 2018 IEEE International Conference on Big Data (Big Data), pp. 570–575. Cited by: §6.
  67. A nutritional label for rankings. In SIGMOD, pp. 1773–1776. Cited by: §6.
  68. Beyond parity: fairness objectives for collaborative filtering. In Advances in Neural Information Processing Systems, pp. 2921–2930. Cited by: §6.
  69. Active learning and semi-supervised learning for speech recognition: a unified framework using the global entropy reduction maximization criterion. Computer Speech & Language 24 (3), pp. 433–444. Cited by: §6.
  70. Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pp. 1171–1180. Cited by: §2.3, §6.
  71. Fairness constraints: mechanisms for fair classification. arXiv preprint arXiv:1507.05259. Cited by: §6.
  72. Fa* ir: a fair top-k ranking algorithm. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1569–1578. Cited by: §6.
  73. Learning fair representations. In International Conference on Machine Learning, pp. 325–333. Cited by: §6.
  74. Measuring discrimination in algorithmic decision making. Data Mining and Knowledge Discovery 31 (4), pp. 1060–1089. Cited by: §2.3, §6.
  75. AI can be sexist and racist—it’s time to make it fair. Nature Publishing Group. Cited by: §3.