
Enhancing the Accuracy and Fairness of Human Decision Making

Isabel Valera
Max Planck Institute for Intelligent Systems
isabel.valera@tue.mpg.de

Adish Singla
Max Planck Institute for Software Systems
adishs@mpi-sws.org

Manuel Gomez-Rodriguez
Max Planck Institute for Software Systems
manuelgr@mpi-sws.org
Abstract

Societies often rely on human experts to take a wide variety of decisions affecting their members, from jail-or-release decisions taken by judges and stop-and-frisk decisions taken by police officers to accept-or-reject decisions taken by academics. In this context, each decision is taken by an expert who is typically chosen uniformly at random from a pool of experts. However, these decisions may be imperfect due to limited experience, implicit biases, or faulty probabilistic reasoning. Can we improve the accuracy and fairness of the overall decision making process by optimizing the assignment between experts and decisions?

In this paper, we address the above problem from the perspective of sequential decision making and show that, for different fairness notions from the literature, it reduces to a sequence of (constrained) weighted bipartite matchings, which can be solved efficiently using algorithms with approximation guarantees. Moreover, these algorithms also benefit from posterior sampling to actively trade off exploitation—selecting expert assignments which lead to accurate and fair decisions—and exploration—selecting expert assignments to learn about the experts’ preferences and biases. We demonstrate the effectiveness of our algorithms on both synthetic and real-world data and show that they can significantly improve both the accuracy and fairness of the decisions taken by pools of experts.

1 Introduction

In recent years, there have been increasing concerns about the potential for unfairness of algorithmic decision making. Moreover, these concerns have often been supported by empirical studies, which have provided, e.g., evidence of racial discrimination [8, 10]. As a consequence, there has been a flurry of work on developing computational mechanisms to ensure that the machine learning methods that fuel algorithmic decision making are fair [3, 4, 5, 6, 13, 14, 15]. In contrast, to the best of our knowledge, there is a lack of machine learning methods to ensure fairness in human decision making, which is still prevalent in a wide range of critical applications such as jail-or-release decisions by judges, stop-and-frisk decisions by police officers, or accept-or-reject decisions by academics. In this work, we take a first step towards filling this gap.

More specifically, we focus on a problem setting that fits a variety of real-world applications, including the ones mentioned above: binary decisions arrive sequentially over time and each decision needs to be taken by a human decision maker, typically an expert, who is chosen from a pool of experts. For example, in jail-or-release decisions, the expert is a judge who needs to decide whether she grants bail to a defendant; in stop-and-frisk decisions, the expert is a police officer who needs to decide whether she stops (and potentially frisks) a pedestrian; and, in accept-or-reject decisions, the expert is an academic who needs to decide whether a paper is accepted at a conference (or a journal). In this context, our goal is then to find the assignment between human decision makers and decisions that maximizes the accuracy of the overall decision making process while satisfying several popular notions of fairness studied in the literature.

In this paper, we represent (biased) human decision making using threshold decision rules [3] and then show that, if the thresholds used by each judge are known, the above problem can be reduced to a sequence of matching problems, which can be solved efficiently with approximation guarantees. More specifically:

  • Under no fairness constraints, the problem can be cast as a sequence of maximum weighted bipartite matching problems, which can be solved exactly in polynomial (quadratic) time [12].

  • Under (some of the most popular) fairness constraints, the problem can be cast as a sequence of bounded color matching problems, which can be solved using a bi-criteria algorithm based on linear programming techniques with an approximation guarantee [9].

Moreover, if the thresholds used by each judge are unknown, we show that estimating them using posterior sampling lets us effectively trade off exploitation—taking accurate and fair decisions—and exploration—learning about the experts' preferences and biases. More formally, we show that posterior sampling achieves sublinear regret, in contrast to point estimates, which suffer linear regret.

Finally, we experiment on synthetic data and real jail-or-release decisions by judges [8]. The results show that: (i) our algorithms improve the accuracy and fairness of the overall human decision making process with respect to random assignment; (ii) our algorithms are able to ensure fairness more effectively if the pool of experts is diverse, e.g., there exist harsh judges, lenient judges, and judges in between; and, (iii) our algorithms are able to ensure fairness even if a significant percentage of judges (e.g., %) are biased against a group of individuals sharing a certain sensitive attribute value (e.g., race).

2 Preliminaries

In this section, we first define decision rules and their utility and group benefit. Then, we revisit threshold decision rules, a type of decision rule that is optimal in terms of accuracy under several notions of fairness from the literature.

Decision rules, their utilities, and their group benefits. Given an individual with a feature vector x ∈ R^d, a (ground-truth) label y ∈ {0, 1}, and a sensitive attribute z ∈ {0, 1}, a decision rule d(x, z) ∈ {0, 1} controls whether the ground-truth label is realized by means of a binary decision about the individual. As an example, in a pretrial release scenario, the decision rule d(x, z) specifies whether the individual remains in jail, i.e., d = 1 if she remains in jail and d = 0 otherwise; the label y indicates whether a released individual would reoffend, i.e., y = 1 if she would reoffend and y = 0 otherwise; the feature vector x may include the current offense, previous offenses, or times she failed to appear in court; and the sensitive attribute z may be race, e.g., black vs. white.

Further, we define random variables X, Z, and Y that take on values x, z, and y for an individual drawn randomly from the population of interest. Then, we measure the (immediate) utility as the overall profit obtained by the decision maker using the decision rule [3], i.e.,

u(d, c) = E[ d(X, Z) Y ] − c E[ d(X, Z) ],    (1)

where c is a given constant. For example, in a pretrial release scenario, the first term is proportional to the expected number of violent crimes prevented under d, the second term is proportional to the expected number of people detained, and c measures the cost of detention in units of crime prevented. Here, note that the above utility reflects only the proximate costs and benefits of decisions rather than long-term, systematic effects. Finally, we define the (immediate) group benefit as the fraction of beneficial decisions received by a group of individuals sharing a certain value z of the sensitive attribute [15], i.e.,

b_z(d) = E[ f(d(X, Z)) | Z = z ],    (2)

where f(·) specifies what constitutes a beneficial outcome.

For example, in a pretrial release scenario, one may define f(d) = 1 − d, so that the benefit b_z(d) to the group of white individuals is proportional to the expected number of them who are released under d. Remarkably, most of the notions of (un)fairness used in the literature, such as disparate impact [1], equality of opportunity [6], or disparate mistreatment [13], can be expressed in terms of group benefits. Finally, note that, in some applications, the beneficial outcome may correspond to d = 1, in which case one would take f(d) = d.
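To make these definitions concrete, the following sketch estimates the utility in Eq. 1 and the group benefits in Eq. 2 from a finite sample of decisions. The function names, the toy data, and the choice f(d) = 1 − d are illustrative assumptions made here, not part of the paper.

```python
import numpy as np

def empirical_utility(d, p, c):
    """Empirical estimate of u(d, c) in Eq. 1: mean(d * p) - c * mean(d).
    d: binary decisions, p: predicted P(y = 1 | x, z), c: cost constant."""
    d, p = np.asarray(d), np.asarray(p)
    return np.mean(d * p) - c * np.mean(d)

def empirical_group_benefit(d, z, group, f=lambda d: 1 - d):
    """Fraction of beneficial outcomes f(d) received by individuals with
    sensitive attribute value `group` (Eq. 2); f(d) = 1 - d encodes
    "release is beneficial" as in the pretrial example."""
    d, z = np.asarray(d), np.asarray(z)
    return np.mean(f(d[z == group]))

# Toy usage with made-up numbers.
rng = np.random.default_rng(0)
p = rng.uniform(size=100)              # predicted reoffense probabilities
z = rng.integers(0, 2, size=100)       # sensitive attribute
d = (p >= 0.5).astype(int)             # a simple threshold rule
print(empirical_utility(d, p, c=0.3))
print(empirical_group_benefit(d, z, 0), empirical_group_benefit(d, z, 1))
```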

Optimal threshold decision rules. Assume the conditional distribution p_{Y|X,Z}(y | x, z) is given (in practice, it may be approximated using a machine learning model trained on historical data). Then, the optimal decision rules that maximize the utility u(d, c) under the most popular fairness constraints from the literature are threshold decision rules [3, 6]:

  • No fairness constraints: the optimal decision rule under no fairness constraints is given by the following deterministic threshold rule:

    d(x, z) = 1{ p_{Y|X,Z}(1 | x, z) ≥ c }.    (3)
  • Disparate impact, equality of opportunity, and disparate mistreatment: the optimal decision rule which satisfies (avoids) the three most common notions of (un)fairness is given by the following deterministic threshold decision rule:

    d(x, z) = 1{ p_{Y|X,Z}(1 | x, z) ≥ θ_z },    (4)

    where the θ_z are constants that depend only on the sensitive attribute z and the fairness notion of interest. Note that the unconstrained optimum can also be expressed in the above form by taking θ_z = c for all z. A short illustrative sketch of these threshold rules follows below.
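The sketch below implements the threshold rules in Eqs. 3–4, assuming the predicted probability p_{Y|X,Z}(1 | x, z) is already available; the group-dependent threshold values are made up for illustration.

```python
def threshold_rule(p, z, theta):
    """Threshold decision rule: d(x, z) = 1 if p_{Y|X,Z}(1 | x, z) >= theta[z].
    theta maps each sensitive attribute value z to its threshold; using
    theta[z] = c for all z recovers the unconstrained optimum of Eq. 3."""
    return int(p >= theta[z])

c = 0.3                                   # illustrative cost constant
theta_unconstrained = {0: c, 1: c}        # Eq. 3
theta_fair = {0: 0.25, 1: 0.35}           # Eq. 4 (made-up group thresholds)
print(threshold_rule(0.3, 0, theta_unconstrained))   # prints 1
print(threshold_rule(0.3, 1, theta_fair))            # prints 0
```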

3 Problem Formulation

In this section, we first use threshold decision rules to represent biased human decisions and then formally define our sequential human decision making process.

Biased humans as threshold decision rules. Inspired by recent work by Kleinberg et al. [7], we model a biased human decision maker j who has access to p_{Y|X,Z} using the following threshold decision rule:

d_j(x, z) = 1{ p_{Y|X,Z}(1 | x, z) ≥ θ_{j,z} },    (5)

where the θ_{j,z} are constants that depend on the decision maker j and the sensitive attribute z, and they represent human decision makers' biases (or preferences) towards groups of people sharing a certain value of the sensitive attribute z. For example, in a pretrial release scenario, if a judge j is generally more lenient towards white people (z = w) than towards black people (z = b), then θ_{j,w} > θ_{j,b}.

In the above formulation, note that we assume all experts make predictions using the same (true) conditional distribution p_{Y|X,Z}, i.e., all experts have the same prediction ability. It would be very interesting to relax this assumption and account for experts with different prediction abilities; however, this entails a number of non-trivial challenges and is left for future work.

Sequential human decision making problem. A set of human decision makers J need to take decisions about individuals over time. More specifically, at each time t ∈ {1, ..., T}, there are N_t decisions to be taken and each decision i ∈ {1, ..., N_t} is taken by a human decision maker j ∈ J, who applies her threshold decision rule d_j, defined by Eq. 5, to the corresponding feature vector x_i and sensitive attribute z_i. Note that we assume N_t ≤ |J| for all t, so that each decision can be assigned to a distinct expert.

At each time t, our goal is then to find the assignment σ_t of human decision makers to individuals, with σ_t(i) ≠ σ_t(i') for all i ≠ i', that maximizes the expected utility of the sequence of decisions, i.e.,

maximize_{σ_1, ..., σ_T}  E[ Σ_{t=1}^{T} û({d_{σ_t(i)}}, c, t) ],    (6)

where û is an empirical estimate of a straightforward generalization of the utility defined by Eq. 1 to multiple decision rules.

4 Proposed Algorithms

In this section, we formally address the problem defined in the previous section without and with fairness constraints. In both cases, we first consider the setting in which the human decision makers’ thresholds are known and then generalize our algorithms to the setting in which they are unknown and need to be learned over time.

Decisions under no fairness constraints. We can find the assignment of human decision makers with the highest expected utility by solving the following optimization problem:

maximize_{σ_1, ..., σ_T}  E[ Σ_{t=1}^{T} û({d_{σ_t(i)}}, c, t) ]
subject to  σ_t(i) ≠ σ_t(i')  for all i ≠ i' and all t.    (7)

Known thresholds. If the thresholds θ_{j,z} are known for all human decision makers, the above problem decouples into T independent subproblems, one per time t, and each of these subproblems can be cast as a maximum weighted bipartite matching, which can be solved exactly in polynomial (quadratic) time [12]. To do so, for each time t, we build a weighted bipartite graph where each human decision maker j is connected to each individual i with weight w_{ji}, where

w_{ji} = d_j(x_i, z_i) ( p_{Y|X,Z}(1 | x_i, z_i) − c ) = 1{ p_{Y|X,Z}(1 | x_i, z_i) ≥ θ_{j,z_i} } ( p_{Y|X,Z}(1 | x_i, z_i) − c ).

Finally, it is easy to see that the maximum weighted bipartite matching is the optimal assignment, as defined by Eq. 7.
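A minimal sketch of the per-round assignment for known thresholds, using SciPy's Hungarian-algorithm solver for the maximum weighted bipartite matching; the helper name assign_experts and the data layout are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_experts(p, z, thetas, c):
    """One round of the unconstrained assignment (Eq. 7).

    p[i]: predicted P(y = 1 | x_i, z_i) for individual i
    z[i]: sensitive attribute of individual i
    thetas[j][z]: threshold theta_{j,z} of expert j (known here)
    Assumes at least as many experts as individuals. Returns sigma with
    sigma[i] = index of the expert assigned to individual i."""
    n, m = len(p), len(thetas)
    # w[j, i] = expected utility if expert j takes decision i.
    w = np.zeros((m, n))
    for j, theta in enumerate(thetas):
        for i in range(n):
            d_ji = 1.0 if p[i] >= theta[z[i]] else 0.0
            w[j, i] = d_ji * (p[i] - c)
    # Maximum weight matching = minimum cost assignment on -w.
    rows, cols = linear_sum_assignment(-w)
    sigma = np.empty(n, dtype=int)
    sigma[cols] = rows
    return sigma
```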

Unknown thresholds. If the thresholds are unknown, we need to trade off exploration, i.e., learning about the thresholds θ_{j,z}, and exploitation, i.e., maximizing the average utility. To this aim, for every decision maker j and sensitive attribute value z, we assume a Beta prior Beta(α_{j,z}, β_{j,z}) over each threshold θ_{j,z}. Under this assumption, after round t, we can update the (domain of the) distribution of θ_{j,z} as:

(l_{j,z,t}, u_{j,z,t}],    (8)

where

l_{j,z,t} = max( l_{j,z,t−1}, max{ p_{Y|X,Z}(1 | x_i, z_i) : σ_t(i) = j, z_i = z, d_j(x_i, z_i) = 0 } ),
u_{j,z,t} = min( u_{j,z,t−1}, min{ p_{Y|X,Z}(1 | x_i, z_i) : σ_t(i) = j, z_i = z, d_j(x_i, z_i) = 1 } ),

and write the posterior distribution of θ_{j,z} as

P(θ_{j,z} | D_t) ∝ Beta(θ_{j,z}; α_{j,z}, β_{j,z}) · 1{ l_{j,z,t} < θ_{j,z} ≤ u_{j,z,t} },    (9)

where D_t denotes the decisions observed up to round t.
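A minimal sketch of one way to represent the belief over a single threshold θ_{j,z} in the spirit of Eqs. 8–9, i.e., a Beta prior truncated to the interval of values still consistent with the expert's observed decisions; the class name and the rejection-sampling scheme are illustrative assumptions.

```python
import numpy as np

class ThresholdBelief:
    """Belief over one threshold theta_{j,z}: a Beta(alpha, beta) prior
    truncated to the interval (low, high] implied by past decisions."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta
        self.low, self.high = 0.0, 1.0   # feasible domain of theta

    def update(self, p, decision):
        """Observe that the expert took `decision` on an individual with
        predicted probability p. Since d = 1{p >= theta}:
        decision = 1  =>  theta <= p   (shrink the upper end)
        decision = 0  =>  theta >  p   (shrink the lower end)"""
        if decision == 1:
            self.high = min(self.high, p)
        else:
            self.low = max(self.low, p)

    def sample(self, rng):
        """Posterior sample via rejection sampling from the truncated Beta."""
        for _ in range(10000):
            theta = rng.beta(self.alpha, self.beta)
            if self.low < theta <= self.high:
                return theta
        # Fallback if the feasible interval has become very narrow.
        return 0.5 * (self.low + self.high)
```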

Then, at the beginning of round t, one can think of estimating the value of each threshold using point estimates, e.g., its posterior mean, and using the same algorithm as for known thresholds. Unfortunately, if we define the regret as

R(T) = Σ_{t=1}^{T} ( E[ û({d_{σ*_t(i)}}, c, t) ] − E[ û({d_{σ'_t(i)}}, c, t) ] ),    (10)

where σ'_t is the optimal assignment under the point estimates of the thresholds and σ*_t is the optimal assignment under the true thresholds, we can show the following theoretical result (proven in Appendix A):

Proposition 1

The optimal assignments computed with deterministic point estimates of the thresholds suffer linear regret, i.e., R(T) = Ω(T).

The above result is a consequence of insufficient exploration, which we can overcome if we estimate the value of each threshold using posterior sampling, i.e., by sampling θ_{j,z} from its posterior distribution in Eq. 9, as formalized by the following theorem:

Theorem 2

The expected regret of the optimal assignments with posterior samples for the thresholds is Õ(λ √(|A| T)), where A denotes the set of possible assignments of individuals to experts and λ is a problem-dependent parameter (see the proof sketch below).

Proof Sketch. The proof follows by interpreting our problem setting as a reinforcement learning problem and applying the generic results for reinforcement learning via posterior sampling of [11]. In particular, we map our problem to an episodic MDP with horizon one as follows. Each round t is an episode, the actions in the MDP correspond to the possible assignments of individuals to experts (given by σ_t), and the reward is given by the utility û({d_{σ_t(i)}}, c, t) at time t.

Then, it is easy to conclude that the expected regret of the optimal assignments with posterior samples for the thresholds is Õ(λ √(|A| T)), where A denotes the set of possible assignments of individuals to experts and λ is a problem-dependent parameter. Here, λ quantifies the total number of states, i.e., realizations of the feature vectors and sensitive attributes of the individuals—note that λ is bounded only in the setting where feature vectors and sensitive attributes are discrete.

Given that the regret grows only as Õ(√T) (i.e., sublinearly in T), this theorem implies that the algorithm based on optimal assignments with posterior samples converges to the optimal assignments given the true thresholds as T → ∞.
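Putting the pieces together, a sketch of the posterior-sampling variant of the assignment algorithm; it reuses the hypothetical assign_experts and ThresholdBelief helpers sketched above and simulates the experts' decisions with their (hidden) true thresholds.

```python
def run_posterior_sampling(rounds, true_thetas, beliefs, c, rng):
    """rounds: iterable of (p, z) arrays, one pair per time step t.
    true_thetas[j][z]: hidden threshold theta_{j,z}, used only to simulate
    the experts' decisions. beliefs[j][z]: ThresholdBelief for theta_{j,z}."""
    total_utility = 0.0
    for p, z in rounds:
        # Exploration/exploitation: sample thresholds from the posteriors ...
        sampled = [{zv: beliefs[j][zv].sample(rng) for zv in (0, 1)}
                   for j in range(len(true_thetas))]
        # ... and compute the assignment as if the samples were the truth.
        sigma = assign_experts(p, z, sampled, c)
        # Experts decide with their true thresholds; observe and update.
        for i, j in enumerate(sigma):
            d = int(p[i] >= true_thetas[j][z[i]])
            beliefs[j][z[i]].update(p[i], d)
            total_utility += d * (p[i] - c)
    return total_utility
```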

Decisions under fairness constraints. For ease of exposition, we focus on disparate impact; however, a similar reasoning applies to equality of opportunity and disparate mistreatment [6, 13].

To avoid disparate impact, the optimal decision rule, given by Eq. 4, maximizes the utility, as defined by Eq. 1, under the following constraint [3, 13]:

| b_0(d) − b_1(d) | ≤ ε,    (11)

where ε ≥ 0 is a given parameter which controls the amount of disparate impact—the smaller the value of ε, the lower the disparate impact of the corresponding decision rule. Similarly, we can calculate an empirical estimate of the disparate impact of a decision rule at each time t as:

DI_t(d) = | b̂_{0,t}(d) − b̂_{1,t}(d) |,    (12)

where b̂_{z,t}(d) = (1 / N_{z,t}) Σ_{i : z_i = z} f(d(x_i, z_i)), N_{z,t} is the number of individuals with sensitive attribute z at round t, and f defines what is a beneficial outcome. Here, it is easy to see that, for the optimal decision rule under impact parity, b̂_{z,t}(d) converges to b_z(d) as N_{z,t} → ∞ and, consequently, DI_t(d) converges to |b_0(d) − b_1(d)| ≤ ε.
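A minimal sketch of the empirical disparate impact in Eq. 12; as before, the default f(d) = 1 − d encodes the pretrial-release notion of a beneficial outcome and is an illustrative choice.

```python
import numpy as np

def empirical_disparate_impact(d, z, f=lambda d: 1 - d):
    """DI_t = | b_hat_{0,t} - b_hat_{1,t} |: absolute difference between
    the per-group fractions of beneficial outcomes f(d) at one round."""
    d, z = np.asarray(d), np.asarray(z)
    b0 = np.mean(f(d[z == 0]))
    b1 = np.mean(f(d[z == 1]))
    return abs(b0 - b1)
```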

For a fixed ε, assume that, for each sensitive attribute value z, the pool of experts is large and diverse enough—i.e., it contains sufficiently many experts with low thresholds θ_{j,z} and sufficiently many experts with high thresholds θ_{j,z}—so that a feasible assignment exists at every round. Then, we can find the assignment of human decision makers with the highest expected utility and disparate impact of at most ε as:

maximize_{σ_t}  E[ û({d_{σ_t(i)}}, c, t) ]
subject to  DI_t({d_{σ_t(i)}}) ≤ ε,  σ_t(i) ≠ σ_t(i')  for all i ≠ i',    (13)

where DI_t is computed as in Eq. 12 with b̂_{z,t} = (1 / N_{z,t}) Σ_{i : z_i = z} f(d_{σ_t(i)}(x_i, z_i)), N_{z,t} is the number of decisions with sensitive attribute z at round t, and N_t = N_{0,t} + N_{1,t}. Here, the assignment given by the solution to the above optimization problem satisfies DI_t ≤ ε at every round and thus keeps the empirical disparate impact of the overall decision making process below ε.

Known thresholds. If the thresholds are known, the problem decouples into independent subproblems, one per time t, and each of these subproblems can be cast as a constrained maximum weighted bipartite matching. To do so, for each time t, we build a weighted bipartite graph where each human decision maker j is connected to each individual i with weight w_{ji}, where

w_{ji} = d_j(x_i, z_i) ( p_{Y|X,Z}(1 | x_i, z_i) − c ),

and we additionally need to ensure that the matching σ_t satisfies

| (1 / N_{0,t}) Σ_{i : z_i = 0} f(d_{σ_t(i)}(x_i, z_i)) − (1 / N_{1,t}) Σ_{i : z_i = 1} f(d_{σ_t(i)}(x_i, z_i)) | ≤ ε,

where N_{z,t} denotes the number of individuals with sensitive attribute z at round t and the function f depends on what is the beneficial outcome, e.g., in a pretrial release scenario, f(d) = 1 − d. Remarkably, we can reduce the above constrained maximum weighted bipartite matching problem to an instance of the bounded color matching problem [9], which allows for a bi-criteria algorithm based on linear programming techniques with an approximation guarantee. To do so, we just need to rewrite the above constraint as the pair of linear constraints

(1 / N_{0,t}) Σ_{i : z_i = 0} f(d_{σ_t(i)}(x_i, z_i)) − (1 / N_{1,t}) Σ_{i : z_i = 1} f(d_{σ_t(i)}(x_i, z_i)) ≤ ε,    (14)
(1 / N_{1,t}) Σ_{i : z_i = 1} f(d_{σ_t(i)}(x_i, z_i)) − (1 / N_{0,t}) Σ_{i : z_i = 0} f(d_{σ_t(i)}(x_i, z_i)) ≤ ε.    (15)

To see the equivalence between the above constraints and the original one, one needs to realize that we are looking for a perfect matching, so every individual is assigned to exactly one expert and each sum above counts each individual of the corresponding group exactly once. For example, in a pretrial release scenario, f(d) = 1 − d and Σ_{i : z_i = z} f(d_{σ_t(i)}(x_i, z_i)) = N_{z,t} − Σ_{i : z_i = z} d_{σ_t(i)}(x_i, z_i).
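A sketch of the per-round constrained assignment solved as a linear program with the two band constraints of Eqs. 14–15, using scipy.optimize.linprog. This is only the LP relaxation; the bounded-color-matching reduction and the bi-criteria guarantee of [9] are not reproduced here, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def fair_assignment_lp(p, z, thetas, c, eps, f=lambda d: 1 - d):
    """LP relaxation of the constrained matching in Eq. 13.
    Variables x[j, i] in [0, 1] say that expert j takes decision i; each
    individual gets exactly one expert, each expert at most one individual,
    and the empirical group benefits must satisfy |b0 - b1| <= eps."""
    p, z = np.asarray(p), np.asarray(z)
    n, m = len(p), len(thetas)
    d = np.array([[1.0 if p[i] >= thetas[j][z[i]] else 0.0 for i in range(n)]
                  for j in range(m)])                  # d_j(x_i, z_i)
    w = d * (p[None, :] - c)                           # edge weights w_{ji}
    ben = f(d)                                         # beneficial outcomes
    n0, n1 = max(np.sum(z == 0), 1), max(np.sum(z == 1), 1)

    cost = -w.ravel()                                  # linprog minimizes

    # Each individual i is assigned exactly one expert.
    A_eq, b_eq = [], []
    for i in range(n):
        row = np.zeros(m * n); row[i::n] = 1.0
        A_eq.append(row); b_eq.append(1.0)

    # Each expert j takes at most one decision.
    A_ub, b_ub = [], []
    for j in range(m):
        row = np.zeros(m * n); row[j * n:(j + 1) * n] = 1.0
        A_ub.append(row); b_ub.append(1.0)

    # Band constraints (Eqs. 14-15): +/- (b0_hat - b1_hat) <= eps.
    diff = np.array([ben[j, i] / n0 if z[i] == 0 else -ben[j, i] / n1
                     for j in range(m) for i in range(n)])
    A_ub.append(diff);  b_ub.append(eps)
    A_ub.append(-diff); b_ub.append(eps)

    res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq),
                  b_eq=b_eq, bounds=(0.0, 1.0), method="highs")
    return res.x.reshape(m, n)   # fractional assignment; round or repair it
```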

Unknown thresholds. If the thresholds are unknown, we proceed similarly as in the case without fairness constraints, i.e., we again assume Beta priors over each threshold, update their posterior distributions after each time t, and use posterior sampling to set their values at each time.

Finally, for the regret analysis, we focus on an alternative unconstrained problem, which is equivalent to the one defined by Eq. 13 by Lagrangian duality [2]:

maximize_{σ_t}  E[ û({d_{σ_t(i)}}, c, t) ] − μ_+ ( b̂_{0,t} − b̂_{1,t} − ε ) − μ_− ( b̂_{1,t} − b̂_{0,t} − ε )
subject to  σ_t(i) ≠ σ_t(i')  for all i ≠ i',    (16)

where μ_+ ≥ 0 and μ_− ≥ 0 are the Lagrange multipliers for the band constraints in Eqs. 14–15. Then, we can state the following theoretical result (the proof easily follows from the proof of Theorem 2):

Theorem 3

The expected regret of the optimal assignments for the problem defined by Eq. 16 with posterior samples for the thresholds is Õ(λ √(|A| T)).

5 Experiments

[Figure 4]
Figure 4: Performance on synthetic data. Panels (a) and (b) show the trade-off between expected utility and disparate impact: for the utility, the higher the better and, for the disparate impact, the lower the better. Panel (c) shows the regret achieved by our algorithm under unknown experts' thresholds, as defined in Eq. 10. Solid and dashed lines correspond to two different parameter settings.

In this section we empirically evaluate our framework on both synthetic and real data. To this end, we compare the performance, in terms of both utility and fairness, of the following algorithms:

Optimal: Every decision is taken using the optimal decision rule, which is defined by Eq. 3 under no fairness constraints and by Eq. 4 under fairness constraints.

Known: Every decision is taken by a judge following a (potentially biased) decision rule, as given by Eq. 5. The thresholds of each judge are known and the assignment between judges and decisions is found by solving the corresponding matching problem, i.e., Eq. 7 under no fairness constraints and Eq. 13 under fairness constraints.

Unknown: Every decision is taken by a judge following a (potentially biased) decision rule, proceeding similarly as in "Known". However, the thresholds of each judge are unknown and we use posterior sampling to estimate them.

Random: Every decision is taken by a judge following a (potentially biased) decision rule. The assignment between judges and decisions is random.

5.1 Experiments on Synthetic Data

Experimental setup. For every decision, we first sample the sensitive attribute z and then sample the predicted probability p_{Y|X,Z}(1 | x, z) from a distribution that depends on z. For every expert j, we generate her decision thresholds θ_{j,0} and θ_{j,1}. Here, we assume the pool of experts is large and diverse enough to ensure that, for each z, there are enough experts with low thresholds and enough experts with high thresholds, so that a feasible assignment always exists. Finally, we set the constant c and the disparate impact parameter ε to fixed values, and the beneficial outcome for an individual is d = 0, i.e., f(d) = 1 − d.
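A sketch of one way to generate synthetic data in the spirit of this setup. The exact distributions and parameter values used in the paper are not recoverable from the text, so those below (Beta-distributed predicted probabilities, uniform thresholds, a fixed bias offset) are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_round(n, rng):
    """One round of n decisions: sensitive attribute z and predicted
    probability p_{Y|X,Z}(1 | x, z), drawn from a group-dependent
    distribution (Beta distributions are an illustrative choice)."""
    z = rng.integers(0, 2, size=n)
    p = np.where(z == 0, rng.beta(2.0, 3.0, size=n), rng.beta(3.0, 2.0, size=n))
    return p, z

def sample_experts(m, rng, biased_fraction=0.0, bias=0.15):
    """Thresholds theta_{j,z} for m experts; a biased expert applies a lower
    threshold to group z = 0 (again, an illustrative modelling choice)."""
    experts = []
    for _ in range(m):
        base = rng.uniform(0.2, 0.8)
        offset = bias if rng.uniform() < biased_fraction else 0.0
        experts.append({0: max(base - offset, 0.0), 1: base})
    return experts

rounds = [sample_round(n=10, rng=rng) for _ in range(200)]
experts = sample_experts(m=20, rng=rng, biased_fraction=0.25)
```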

Results. Figures 4(a)-(b) show the expected utility and the disparate impact after T units of time for the optimal decision rule and for the group of experts under the assignments provided by our algorithms and under random assignments. We find that the experts chosen by our algorithm provide decisions with higher utility and lower disparate impact than the experts chosen at random, even if the thresholds are unknown. Moreover, if the thresholds are known, the experts chosen by our algorithm closely match the performance of the optimal decision rule both in terms of utility and disparate impact. Finally, we compute the regret as defined by Eq. 10, i.e., the difference between the utilities provided by the algorithm with known and with unknown thresholds over time. Figure 4(c) summarizes the results, which show that, as time progresses, the regret decreases, in agreement with Theorem 2.

5.2 Experiments on Real Data

Experimental setup. We use the COMPAS recidivism prediction dataset compiled by ProPublica [8], which comprises information about all criminal offenders screened through the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) tool in Broward County, Florida during 2013–2014. In particular, for each offender, it contains a set of demographic features (gender, race, age), the offender's criminal history (e.g., the reason why the person was arrested, number of prior offenses), and the risk score assigned to the offender by COMPAS. Moreover, ProPublica also collected whether or not these individuals actually recidivated within two years after the screening.

In our experiments, the sensitive attribute z is the race (white, black), the label y indicates whether the individual recidivated (y = 1) or not (y = 0), and the decision d specifies whether an individual is released from jail (d = 0) or not (d = 1). For each sensitive attribute value z, we approximate p_{Y|X,Z} using a logistic regression classifier, which we train on a subset of the data. Then, we use the remaining data to evaluate our algorithm as follows. Since we do not have information about the identity of the judges who took each decision in the dataset, we create fictitious judges and sample their thresholds from a parametric distribution whose standard deviation controls the diversity (lenient vs. harsh) across judges. Here, we consider two scenarios: (i) all experts are unbiased with respect to race, i.e., θ_{j,0} = θ_{j,1} for every judge j, and (ii) a percentage of the experts are unbiased and the remaining experts are biased, i.e., θ_{j,0} ≠ θ_{j,1}. Finally, we consider a fixed number of decisions per round, which results in a corresponding number of rounds, and we assign decisions to rounds at random.
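A sketch of how p_{Y|X,Z} could be approximated with one logistic regression per racial group, as described above. The file name, column names, feature subset, and 50/50 split are assumptions about the ProPublica data layout rather than the exact choices made in the paper.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("compas-scores-two-years.csv")                 # assumed file
df = df[df["race"].isin(["African-American", "Caucasian"])]
features = ["age", "priors_count", "juv_fel_count"]             # assumed subset
df["z"] = (df["race"] == "Caucasian").astype(int)               # sensitive attribute
df["y"] = df["two_year_recid"]                                  # ground-truth label

train, test = train_test_split(df, test_size=0.5, random_state=0)

# One classifier per sensitive attribute value approximates p_{Y|X,Z}.
models = {zv: LogisticRegression(max_iter=1000).fit(g[features], g["y"])
          for zv, g in train.groupby("z")}

# Predicted reoffense probabilities on the held-out decisions.
test = test.copy()
test["p"] = 0.0
for zv, g in test.groupby("z"):
    test.loc[g.index, "p"] = models[zv].predict_proba(g[features])[:, 1]
```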

[Figure 8]
Figure 8: Performance on COMPAS data. Panels (a) and (b) show the expected utility and the true utility, and panel (c) shows the disparate impact. For the expected and true utility, the higher the better and, for the disparate impact, the lower the better.

Results. Figure 8 shows the expected utility, the true utility, and the disparate impact after T units of time for the optimal decision rule and for the group of unbiased experts (scenario (i)) under the assignments provided by our algorithms and under random assignments. The true utility is simply the utility after T units of time computed using the actual observed labels y rather than the predicted probabilities p_{Y|X,Z}. Similarly to the case of synthetic data, we find that the judges chosen by our algorithm provide higher expected utility and true utility as well as lower disparate impact than the judges chosen at random, even if the thresholds are unknown.

Figure 11 shows the probability that a round does not allow for an assignment between judges and decisions with disparate impact below ε, for different pools of experts of varying diversity and percentage of biased judges. The results show that, on the one hand, our algorithms are able to ensure fairness more effectively if the pool of experts is diverse and, on the other hand, our algorithms are able to ensure fairness even if a significant percentage of judges are biased against a group of individuals sharing a certain sensitive attribute value.

6 Conclusions

In this paper, we have proposed a set of practical algorithms to improve the utility and fairness of a sequential decision making process, where each decision is taken by a human expert who is selected from a pool of experts. Experiments on synthetic data and real jail-or-release decisions by judges show that our algorithms are able to mitigate imperfect human decisions due to limited experience, implicit biases, or faulty probabilistic reasoning. Moreover, they also reveal that our algorithms benefit from higher diversity across the pool of experts and that they are able to ensure fairness even if a significant percentage of judges are biased against a group of individuals sharing a certain sensitive attribute value (e.g., race).

There are many interesting avenues for future work. For example, in our work, we assumed all experts make predictions using the same (true) conditional distribution p_{Y|X,Z} and then apply (potentially) different thresholds. It would be very interesting to relax the first assumption and account for experts with different prediction abilities. Moreover, we have also assumed that experts do not learn from the decisions they take over time, i.e., their prediction model and thresholds are fixed; it would be very interesting to let these evolve over time. In some scenarios, a decision is taken jointly by a group of experts, e.g., faculty recruiting decisions; it would be a natural follow-up to the current work to adapt our algorithms to such a scenario. Finally, in our experiments, we had to generate fictitious judges since we do not have information about the identity of the judges who took each decision. It would be very valuable to gain access to datasets with such information [7].

[Figure 11]
Figure 11: Feasibility in COMPAS data. Probability that a round does not allow for an assignment between judges and decisions with disparate impact below ε, for different pools of experts of varying diversity and percentage of biased judges. Panel (a): known thresholds; panel (b): unknown thresholds.

References

  • [1] S. Barocas and A. D. Selbst. Big data's disparate impact. California Law Review, 2016.
  • [2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  • [3] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. KDD, 2017.
  • [4] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In ITCS, 2012.
  • [5] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and removing disparate impact. In KDD, 2015.
  • [6] M. Hardt, E. Price, N. Srebro, et al. Equality of opportunity in supervised learning. In NIPS, 2016.
  • [7] J. Kleinberg, H. Lakkaraju, J. Leskovec, J. Ludwig, and S. Mullainathan. Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1):237–293, 2017.
  • [8] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. https://github.com/propublica/compas-analysis, 2016.
  • [9] M. Mastrolilli and G. Stamoulis. Constrained matching problems in bipartite graphs. In ISCO, pages 344–355. Springer, 2012.
  • [10] C. Muñoz, M. Smith, and D. Patil. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President, The White House, 2016.
  • [11] I. Osband, D. Russo, and B. Van Roy. (More) efficient reinforcement learning via posterior sampling. In NIPS, pages 3003–3011, 2013.
  • [12] D. B. West et al. Introduction to Graph Theory, volume 2. Prentice Hall, Upper Saddle River, 2001.
  • [13] B. Zafar, I. Valera, M. Gomez-Rodriguez, and K. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In WWW, 2017.
  • [14] B. Zafar, I. Valera, M. Gomez-Rodriguez, and K. Gummadi. Training fair classifiers. AISTATS, 2017.
  • [15] B. Zafar, I. Valera, M. Gomez-Rodriguez, K. Gummadi, and A. Weller. From parity to preference: Learning with cost-effective notions of fairness. In NIPS, 2017.

Appendix A Proof sketch of Proposition 1

Consider a simple setup with two experts and one decision at each round t. Furthermore, we fix the following two things before setting up the problem instance: (i) let g(·) be a deterministic function which computes a point estimate of a distribution (e.g., its mean or MAP); (ii) we assume deterministic tie-breaking by the assignment algorithm and, w.l.o.g., expert 1 is preferred over expert 2 for assignment when both of them have the same edge weight.

For the first expert, we know the exact value of the threshold θ_1. For the second expert, the threshold θ_2 could take any value in [0, 1] and we are given a prior distribution over it. Let us denote its point estimate by θ'_2 = g(P(θ_2)). Now, we construct a problem instance for which the algorithm suffers linear regret, separately for the cases θ'_2 > θ_2 and θ'_2 ≤ θ_2.

Problem instance if θ'_2 > θ_2
We consider a problem instance as follows: θ_1 = 1, so that expert 1 never takes the decision d = 1, c < θ_2, and, for all rounds t, the predicted probability p_t = p_{Y|X,Z}(1 | x_t, z_t) of the single individual is sampled uniformly from the range (θ_2, θ'_2). Under the point estimate, both experts appear to yield zero utility, so the tie-breaking rule makes the algorithm always assign the individual to expert 1, which yields a cumulative expected utility of 0; moreover, since expert 2 is never assigned, the point estimate is never corrected. However, given the true thresholds, the algorithm would have always assigned the individual to expert 2 and would have a cumulative expected utility of Σ_t (p_t − c) > 0. Hence, the algorithm suffers a regret that grows linearly in T.

Problem instance if θ'_2 ≤ θ_2
We consider a problem instance as follows: θ_1 = 0, so that expert 1 always takes the decision d = 1, c > θ'_2, and, for all rounds t, the predicted probability p_t is sampled uniformly from the range (θ'_2, min(θ_2, c)). Under the point estimate, both experts appear to yield the same (negative) utility p_t − c, so the tie-breaking rule makes the algorithm always assign the individual to expert 1, which yields a cumulative expected utility of Σ_t (p_t − c) < 0; again, expert 2 is never assigned and the point estimate is never corrected. However, given the true thresholds, the algorithm would have always assigned the individual to expert 2 (who would not take the decision d = 1) and would have a cumulative expected utility of 0. Hence, the algorithm suffers a regret that grows linearly in T.
