Welfare and Distributional Impacts of Fair Classification


Lily Hu    Yiling Chen
Abstract

Current methodologies in machine learning analyze the effects of various statistical parity notions of fairness primarily in light of their impacts on predictive accuracy and vendor utility loss. In this paper, we propose a new framework for interpreting the effects of fairness criteria by converting the constrained loss minimization problem into a social welfare maximization problem. This translation moves a classifier and its output into utility space where individuals, groups, and society at-large experience different welfare changes due to classification assignments. Under this characterization, predictions and fairness constraints are seen as shaping societal welfare and distribution and revealing individuals’ implied welfare weights in society—weights that may then be interpreted through a fairness lens. The social welfare formulation of the fairness problem brings to the fore concerns of distributive justice that have always had a central albeit more implicit role in standard algorithmic fairness approaches.


1 Introduction

In his 1979 Tanner Lectures, Amartya Sen noted that since nearly all theories of fairness are founded on an equality of some sort, the heart of the issue rests on clarifying the “equality of what?” problem (1980). The field of fair machine learning has not escaped this essential question. Do such tools have an obligation to assure probabilistic equality of outcomes (Feldman et al., 2015; Hardt et al., 2016)? Or do they simply owe an equality of treatment (Dwork et al., 2012)? Does fairness demand that individuals (or groups) be subject to equal mistreatment rates (Zafar et al., 2017; Bechavod & Ligett, 2017)? Or does being fair refer only to avoiding some intolerable level of discrimination? Differential demands of fairness contrast starkly with each other in both their effects on the outcomes that are ultimately issued and in the means by which they may be implemented.

In machine learning, the task of accounting for fairness involves comparing myriad metrics—probability distributions, error likelihoods, classification rates—sliced every way possible to reveal the range of inequalities that may arise before, during, and after the learning process. But as shown in Chouldechova (2017) and Kleinberg et al. (2017), fundamental statistical incompatibilities rule out any solution that satisfies all parity metrics, and we are left with the harsh but unavoidable task of adjudicating between these measures and methods. Past work has been limited in its ability to address these “inherent trade-offs.” For one, the leading approach of constrained loss minimization offers little guidance by itself for choosing among the fairness desiderata, which appear incommensurable and result in different impacts on different individuals and groups. Even when comparisons are made across fairness metrics and methods, current approaches refer only to losses in predictive accuracy or vendor utility as illustrative of the cost of assuring different types of fairness. This methodology flattens multifaceted distributive procedures, which involve many individuals and thus many interests, into a two-dimensional comparison of accuracy vs. “fairness,” and as a result fails to capture the fundamentally social nature of fairness.

This paper proposes a conceptual framework and methodology for conceiving of fairness in machine learning that is based in analysis of the society-wide distributional effects of classifier outputs. Our approach maps the standard empirical risk minimization task in supervised learning into a corresponding social welfare maximization task as it appears in welfare economics’ Planner’s Problem. Social welfare functionals are typically formulated as the sum of weighted utility functions, where an individual’s weight represents the value placed by society on her welfare. Inverting the Planner’s Problem of efficient social welfare maximization generates a question that is more concerned with social equity: “Given a particular allocation, what is the presumptive social weight function that would yield the allocation as optimal?” Mapping a fair learning problem into a social welfare problem lends new insight into the different fairness regimes proposed by admitting comparison of the distributive and welfare effects of the various models and constraints used in prediction. By centering social welfare as the primary object of interest of fair machine learning, we highlight a positive conception of fairness as a societal good rather than as an oppositional force that detracts from a decision-maker’s accuracy and optimality. We believe that this perspective presents a more nuanced and faithful understanding of fairness as a social ideal.

In this paper, we establish this mapping for the task of binary classification using linear SVMs. Starting with standard empirical risk minimization, we present general characteristics of the structure of the implied welfare weight function corresponding to a given learned classifier. We then follow the two main distinct approaches to fairness-adjusted classification, connecting the altered outcomes and margins resulting from each of these new methodologies with shifting weight functions in the social welfare problem. In offering two different perspectives on how the welfare weight distribution may be transformed by fair adjustment, we present novel interpretations of how fairness constraints alter boundary-based classifiers’ treatments of individuals, groups, and the underlying feature space.

The deployment of socially-oriented machine learning inevitably implicates several ethical questions surrounding the tension between shared societal norms and ideals and a decision-maker’s private goals and interests. Most leading methodologies have focused on optimization of utility or welfare to the vendor, limiting our ability to answer questions about how individuals, groups, and society-at-large fare under various distributive allocations. The social welfare perspective directly engages both questions of efficiency, in the task of maximization, and equity, in the design of welfare weights. This perspective is especially enlightening when applied to sectors in which the government, acting as the Planner, maintains a strong interest in issues of distributive fairness and can justifiably make interpersonal comparisons of utility. Financial services, wherein loan and credit approvals are increasingly automated, satisfy both criteria and will be the main application focus of this paper.

2 Problem Formalization

Before we formalize this paper’s objective of connecting loss minimization with social welfare maximization, we provide an overview of the separate perspectives on and methodologies for achieving fairness in optimal predictions or allocations. In this paper, we will center our analysis on binary decision tasks using linear SVM classifiers.

Consider risk minimization within the supervised learning classification setting where the decision-maker seeks the classifier that minimizes the probability of error on a training set of data points $\{(x_i, y_i)\}_{i=1}^{n}$ with $x_i \in \mathbb{R}^d$ and labels $y_i \in \{-1, +1\}$. The risk-minimizing predictor is thus $\hat{h} = \arg\min_{h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} \ell(h(x_i), y_i)$, where $\mathcal{H}$ contains only those classifiers that are linear halfspaces that may be written as $h(x) = \langle \theta, x \rangle + b$ with $\theta \in \mathbb{R}^d$ and $b \in \mathbb{R}$. For binary classification, the ultimate classifications follow $\hat{y}_i = \mathrm{sign}(h(x_i))$. For our considered case of linear SVMs, we will relax the 0-1 loss and replace it with hinge loss $\ell_{\mathrm{hinge}}(h(x), y) = \max\{0,\, 1 - y\, h(x)\}$.
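As a minimal sketch of this setup (not drawn from the paper), the snippet below fits a linear hinge-loss classifier on synthetic data and extracts each individual's signed classification margin; the data, feature interpretation, and regularization strength are hypothetical choices made only to make the formulation above runnable.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical synthetic data: two features (e.g., credit history length,
# debt ratio) and a binary repayment label in {-1, +1}. Purely illustrative.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=n))

# Linear SVM: minimizes L2-regularized hinge loss over halfspace classifiers
# h(x) = <theta, x> + b.
clf = LinearSVC(C=1.0, loss="hinge")
clf.fit(X, y)

# Signed margins m_i = h(x_i); classifications are sign(m_i).
margins = clf.decision_function(X)
y_hat = np.sign(margins)
print("positive classifications:", int((y_hat > 0).sum()))
```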

In the social welfare problem, a Planner is charged with the task of maximizing societal welfare given as an aggregate weighted sum of individuals’ utilities, $W = \sum_{i=1}^{n} w_i\, u_i$. The Planner distributing financial loans solves $\max_{a \in \{0,1\}^n} \sum_{i=1}^{n} w_i\, u(z_i, a_i)$ subject to $\sum_{i=1}^{n} a_i \le B$, where individual $i$’s contribution to society’s overall welfare is a product of her utility $u(z_i, a_i)$, a function of her income $z_i$ and allocation outcome $a_i$, and her societal weight $w_i$. In binary classification, the Planner can either allocate the good to individual $i$ or not ($a_i \in \{0, 1\}$), while in the standard welfare problem, the Planner faces a fixed exogenous budget $B$ for allocations. In our formulation, the budget is set to be equal to the number of positive instances issued by the classifier, $B = |\{i : \hat{h}(x_i) > 0\}|$.
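To make these objects concrete, the sketch below writes down the Planner's budgeted objective for a given weight vector, utility function, and binary allocation; the incomes, loan amount, and log-income utility form are hypothetical illustrative choices rather than the paper's specification.

```python
import numpy as np

def utility(income, a, gain=5000.0):
    """Hypothetical utility u(z, a): log of income plus the loan amount if allocated."""
    return np.log(income + gain * a)

def social_welfare(weights, incomes, alloc):
    """Aggregate welfare W = sum_i w_i * u(z_i, a_i)."""
    return float(np.sum(weights * utility(incomes, alloc)))

# Hypothetical incomes and a budget equal to the classifier's number of positives.
incomes = np.array([20_000.0, 35_000.0, 80_000.0, 150_000.0])
weights = np.ones_like(incomes)      # placeholder welfare weights
budget = 2                           # e.g., number of positive classifications
alloc = np.array([1, 1, 0, 0])       # a feasible allocation: sum(alloc) <= budget
print(social_welfare(weights, incomes, alloc))
```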

Our central question of interest is thus: For a given boundary classifier $\hat{h}$ output by a loss minimization task and a set of income levels $\{z_i\}_{i=1}^{n}$, can we characterize aspects of the functional form of the welfare weights $w_i$ within the Planner’s Problem that would yield an optimal social allocation $a^*$ such that $a^*_i = \mathbb{1}[\hat{h}(x_i) > 0]$ for all $i$? We call such an allocation produced by a learned classifier that is also socially optimal in the welfare sense a matched allocation.

2.1 Preliminary Results

A mapping from loss minimization to social welfare maximization requires that welfare weights be formulated to “track” the classifier’s treatment of individuals. The Planner with a given classifier $\hat{h}$ must prefer individual $i$ to individual $j$ whenever $m_i > m_j$, where $m_i$ denotes $i$’s classification margin; under matched allocations, equivalent margins enforce equivalent weights. The Planner considers both classification margins $m_i$ and incomes $z_i$, and as such, weights must be a function of both, $w_i = w(m_i, z_i)$. Formally, the marginal social gain associated with a positive classification of individual $i$, $w(m_i, z_i)\,\Delta u(z_i)$, where $\Delta u(z_i) = u(z_i, 1) - u(z_i, 0)$ represents the utility gain due to receiving a loan, satisfies

$$w(m_i, z_i)\,\Delta u(z_i) \;\ge\; w(m_j, z_j)\,\Delta u(z_j) \qquad\qquad (1)$$

whenever $m_i \ge m_j$, with equality when $m_i = m_j$. From here, characterizing welfare weights depends on the functional form of $u$.

3 Results

3.1 Unconstrained Loss Minimization

In the simplest case in which $u$ is either linear in income or additively separable in income and allocation, then $\Delta u(z_i) = \Delta u(z_j)$ for all $i, j$, and Eq. (1) reduces to the condition that welfare weights satisfy $w(m_i, z_i) = \phi(m_i)$, where $\phi$ is any positive monotonic transformation of the margin $m_i$. Here, weights do not depend on income at all ($\partial w / \partial z = 0$), and so long as the welfare weights are such that $w(m_i, z_i) \ge w(m_j, z_j)$ whenever $m_i \ge m_j$, the Planner is justified in distributing the matched allocation.
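For instance, with the additively separable form $u(z, a) = \log z + c\,a$ (a hypothetical choice, not the paper's), the gain from allocation is the constant $c$ regardless of income, so any margin-monotone weighting reproduces the classifier's ordering; the short check below makes this explicit.

```python
import numpy as np

c = 1.5  # hypothetical constant utility gain from receiving the loan

def u(z, a):
    # Additively separable utility: income term plus allocation term.
    return np.log(z) + c * a

incomes = np.array([20_000.0, 150_000.0])
gains = u(incomes, 1) - u(incomes, 0)
print(gains)  # identical for both individuals: [1.5 1.5]
```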

3.1.1 $u$ Concave in Income

When utility exhibits the property of diminishing marginal utility of income such that $u$ is concave in $z$, the implied welfare weight function must now explicitly account for $z$. In the binary allocation setting, the standard statement of concavity, $\partial^2 u / \partial z^2 < 0$, becomes equivalent to $\Delta u(z_i) < \Delta u(z_j)$ whenever $z_i > z_j$: the utility gain from receiving the loan shrinks as income grows. Under this assumption of concavity, the welfare weight condition in Eq. (1) may be expanded to

$$\frac{w(m_i, z_i)}{w(m_j, z_j)} \;\ge\; \frac{\Delta u(z_j)}{\Delta u(z_i)} \quad \text{whenever } m_i \ge m_j, \qquad\qquad (2)$$

$$w(m_i, z_i)\,\Delta u(z_i) \;=\; w(m_j, z_j)\,\Delta u(z_j) \quad \text{whenever } m_i = m_j, \qquad\qquad (3)$$

such that concavity enforces a strictly greater lower bound on $w(m_i, z_i)$ compared to the linear case, and changes in classification margin must correspond to larger deviations in welfare weight. Notice that when utility is linear or additively separable, two individuals $i$ and $j$ with identical classification margins would be equally preferred under welfare weights and would receive the matched allocation under the Planner’s Problem even if they were endowed with differing income levels $z_i \ne z_j$. In contrast, under concave $u$, the condition that weights increase with margins is insufficient to achieve the appropriate welfare weights; $w$ must also increase with the concavity of $u$ as given in Eq. (3). As marginal returns to income decrease, the Planner is only justified in allocating the loan to an individual with high income if she inflates the individual’s welfare weight in accordance with her income to “offset” the loss due to concavity. These relations and conditions of the weight function in the Planner’s Problem are summarized in the following Theorem.
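As a hedged numeric illustration of this offset, using the hypothetical log-income utility from the earlier snippet, the code below shows that for two individuals with equal margins but different incomes, the richer individual's weight must be inflated by the ratio of utility gains before the Planner is indifferent, per Eq. (3).

```python
import numpy as np

gain = 5000.0  # hypothetical loan value added to income

def delta_u(z):
    # Utility gain from the loan under concave (log) utility: decreasing in income.
    return np.log(z + gain) - np.log(z)

z_poor, z_rich = 20_000.0, 150_000.0
w_poor = 1.0
# Eq. (3): equal margins require w_rich * delta_u(z_rich) == w_poor * delta_u(z_poor).
w_rich = w_poor * delta_u(z_poor) / delta_u(z_rich)
print(round(w_rich, 2))  # > 1: the richer individual's weight is inflated to offset concavity
```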

Theorem 3.1.

Given classifier margins $\{m_i\}_{i=1}^{n}$ and income levels $\{z_i\}_{i=1}^{n}$, the following welfare maximization problem

$$\max_{a \in \{0,1\}^n} \; \sum_{i=1}^{n} w(m_i, z_i)\, u(z_i, a_i)$$

$$\text{s.t.} \quad \sum_{i=1}^{n} a_i \le B$$

with weights satisfying Eq. (2) and (3) yields the matched optimal allocation $a^*_i = \mathbb{1}[\hat{h}(x_i) > 0]$. Moreover, the welfare weight function is of multiplicative form such that $w(m, z) = c\,\phi(m)\,\psi(z)$ for some constant $c$.

Proof.

Notice that the Planner can reduce her task to a binary knapsack problem (BKP) by considering the maximization $\max_{a \in \{0,1\}^n} \sum_{i} g_i\, a_i$ subject to $\sum_i a_i \le B$, where

$$g_i = w(m_i, z_i)\,\big[u(z_i, 1) - u(z_i, 0)\big] = w(m_i, z_i)\,\Delta u(z_i).$$

When $u$ is concave in $z$, $\Delta u(z_i)$ is strictly decreasing in $z_i$. The optimal solution to BKP may be attained via the greedy algorithm in which the Planner allocates the good in decreasing order starting with the individual with the highest marginal contribution to social welfare, $g_i$, until she depletes her “budget” $B$. This procedure generates the same ordering on individuals as $\hat{h}$ whenever $g_i \ge g_j$ for all pairs with $m_i \ge m_j$, which is the inequality condition given in Eq. (2). When two individuals share the same margin $m_i = m_j$, the greedy algorithm for BKP must also be indifferent between them, such that their marginal gains are equal: $g_i = g_j$, corresponding to Eq. (3). Notice that $\Delta u(z_i)$ generally cannot be a function of $m_i$, and as a result, the functional form of $w$ can be decomposed into functions $\phi(m)$ and $\psi(z)$ scaled by some constant $c$.

The proof follows similarly when $u$ is linear or additively separable, and any weight function that preserves the ordering given by the margins yields the same BKP solution. Since weights are defined up to a constant, this result agrees with the multiplicative decomposition of $w$. ∎
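The greedy reduction in the proof can be sketched directly. The code below uses hypothetical margins and incomes together with the same illustrative log-income utility as before; it allocates in decreasing order of marginal social gain $g_i = w(m_i, z_i)\,\Delta u(z_i)$ and, with weights of the multiplicative form above, reproduces the classifier's positive set.

```python
import numpy as np

gain = 5000.0

def delta_u(z):
    # Utility gain from the loan under concave (log) utility.
    return np.log(z + gain) - np.log(z)

# Hypothetical margins (as if from a trained classifier) and incomes.
margins = np.array([1.2, 0.4, -0.3, 0.9])
incomes = np.array([20_000.0, 150_000.0, 35_000.0, 80_000.0])

# Multiplicative weights w(m, z) = phi(m) * psi(z): margin-monotone phi,
# and psi chosen to offset concavity (one illustrative choice, not unique).
phi = np.exp(margins)
psi = 1.0 / delta_u(incomes)
weights = phi * psi

budget = int((margins > 0).sum())      # budget = classifier's positive count
g = weights * delta_u(incomes)         # marginal social gains g_i
order = np.argsort(-g)                 # greedy: highest gain first
alloc = np.zeros_like(margins, dtype=int)
alloc[order[:budget]] = 1

matched = np.array_equal(alloc, (margins > 0).astype(int))
print(alloc, matched)  # greedy allocation matches the classifier's positive set
```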

The multiplicative form of these underlying social welfare weights highlights two intertwined effects of using boundary-based classifiers in financial distribution decisions. Concavity of utility enforces a term $\psi(z)$ that explicitly incorporates wealth as having a multiplicative impact on welfare weights. Moreover, the wealth effect encoded in $\psi(z)$ is compounded by the classifier score effect in $\phi(m)$ such that differences in individuals’ classification margins also amplify differences in their incomes, and vice versa, in determining an individual’s ultimate social weight. In the binary classification task, intensified disparities in welfare weight magnitudes do not affect the Planner’s optimal allocation, but such differences do have significant repercussions in more general welfare maximization settings in which the Planner distributes continuous or divisible allocations.

3.2 Fairness-constrained Loss Minimization

Having characterized aspects of the implied social welfare weight functions under standard loss minimization, we now move to “fair” formulations of learning and ask how popular parity-based constraints on optimization may be translated into social welfare space where they may be interpreted as redistributive mechanisms that act to shift welfare weight among individuals and groups.

3.2.1 Fair Post-processing

A post-processing approach to fairness proposed by Hardt et al. (2016) adjusts the distribution of outcomes by using sensitive attribute information to construct group-specific thresholds for classification. This approach grants flexibility to practitioners, who can apply the adjustment without needing to access the original dataset or learn a new classifier.

Without a new classifier or new margins, welfare weights must explicitly incorporate group information to handle fairness criteria. The welfare problem adopts the post-processing approach of resolving fairness constraints by transforming the original margins by group-specific threshold factors to achieve various fairness parities. Since the underlying classifier $\hat{h}$ is unchanged, any positive affine transformation applied group-wide, where $t_g$ and $\tilde{t}_g$ represent the old and new group-specific thresholds respectively, that maps $m_i$ to the new margin $\tilde{m}_i$ preserves ordering and interval scales within each group. Then the optimal allocation with weights $w(\tilde{m}_i, z_i)$ will match the post-processed fair allocation $\tilde{a}^*$.
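A sketch of this group-wise margin shift follows; the groups, margins, and new thresholds are hypothetical (they are not the equalized-odds thresholds of Hardt et al., which would be fit to data), and serve only to show that within-group ordering is preserved and that decisions track the shifted margins.

```python
import numpy as np

# Hypothetical margins and group memberships (0 or 1).
margins = np.array([0.6, 0.1, -0.2, 0.8, -0.4, 0.3])
groups  = np.array([0,   0,   0,    1,   1,    1])

# Old threshold is 0 for both groups; hypothetical new group-specific thresholds.
new_threshold = {0: 0.2, 1: -0.1}

# Group-wide shift: the new margin is measured relative to the group's new threshold.
shifted = margins - np.array([new_threshold[g] for g in groups])

# Ordering within each group is preserved by the shift...
for g in (0, 1):
    idx = groups == g
    assert np.array_equal(np.argsort(margins[idx]), np.argsort(shifted[idx]))

# ...and weights built on the shifted margins reproduce the post-processed decisions.
fair_decisions = (shifted > 0).astype(int)
print(fair_decisions)
```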

3.2.2 Learning a Fair Classifier

Recent efforts have incorporated fairness constraints into the learning process itself by way of regularization (Bechavod & Ligett, 2017) or convex proxies for constrained optimization (Zafar et al., 2017) to generate a new classifier $\tilde{h}$. We note that conditions for the implied welfare weights under $\tilde{h}$ may be rederived by direct appeal to Theorem 3.1, but we also present in this section an alternative procedure that, although requiring the Planner to have access to richer information—she must know both the previous and new classifiers, rather than just individuals’ margins—admits derivation of new conditions on $\tilde{w}$ from old conditions on $w$. This technique allows direct comparison of the two functions $w$ and $\tilde{w}$ and thus sheds light on how fairness constraints shift the distribution of social welfare weight.

We derive conditions for $\tilde{w}$ in terms of $w$ by constructing a transformation from the old margins $m_i$ to the new margins $\tilde{m}_i$. Let $T$ be the linear transformation that maps the orthogonal projection of $x_i$ onto $\hat{h}$ to its orthogonal projection onto $\tilde{h}$. Formally, we have $T\,P_{\hat{h}}(x_i) = P_{\tilde{h}}(x_i)$, where $P_{\hat{h}}(x_i)$ gives the orthogonal projection mapping of $x_i$ onto $\hat{h}$. Then following Eq. (2) with $\tilde{m}_i = d(x_i, P_{\tilde{h}}(x_i))$, where $d(\cdot,\cdot)$ gives the Euclidean distance from $x_i$ to its projection on $\tilde{h}$, we have that

$$\frac{\tilde{w}(\tilde{m}_i, z_i)}{\tilde{w}(\tilde{m}_j, z_j)} \;\ge\; \frac{\Delta u(z_j)}{\Delta u(z_i)} \quad \text{whenever } \tilde{m}_i \ge \tilde{m}_j, \qquad\qquad (4)$$

where $\tilde{m}_i = d(x_i, T\,P_{\hat{h}}(x_i))$ and $\tilde{m}_j = d(x_j, T\,P_{\hat{h}}(x_j))$. The total differential of the new margin, which relates changes in $\tilde{m}_i$ to changes in the original margin through $T$, is computed as

(5)

Since $\hat{h}$ and $\tilde{h}$, and hence $T$, are unique for boundary classifiers, and the Euclidean distance is easily computable, all of the new variables and functions in Eq. (5) can be analytically derived. The multiplicative factor appearing in Eq. (5) dictates how the new welfare weights shift; in particular, the inclusion of the vector rows of the matrix $T$ offers a geometric interpretation of the new classifier’s transformation of the feature space. Thus, working from the previous weight function $w$ corresponding to the original unfair classifier and ensuring that Eq. (4) and Eq. (5) hold, the new weights $\tilde{w}$ will justify the fair matched allocation under social welfare maximization.
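A small geometric sketch of the objects involved in this construction (projections onto the two decision boundaries and the resulting margins) is given below; the classifier parameters are hypothetical rather than learned, and the snippet illustrates only the geometry, not the paper's full derivation of the transformed weights.

```python
import numpy as np

def project(x, theta, b):
    # Orthogonal projection of x onto the hyperplane {z : <theta, z> + b = 0}.
    return x - ((x @ theta + b) / (theta @ theta)) * theta

def signed_margin(x, theta, b):
    # Signed Euclidean distance from x to the hyperplane.
    return (x @ theta + b) / np.linalg.norm(theta)

# Hypothetical original and fairness-adjusted linear classifiers.
theta_old, b_old = np.array([1.0, -0.5]), 0.2
theta_new, b_new = np.array([0.8, -0.9]), 0.1

x = np.array([1.5, 0.4])
p_old, p_new = project(x, theta_old, b_old), project(x, theta_new, b_new)
m_old, m_new = signed_margin(x, theta_old, b_old), signed_margin(x, theta_new, b_new)
print(p_old, p_new)   # the two projections that the transformation T maps between
print(m_old, m_new)   # old and new margins whose relation constrains the new weights
```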

4 Discussion

Within the financial services sector, both disparate impact and accuracy loss may be understood in terms of utility gains and losses incurred by different agents. By connecting accuracy-driven loss minimization to social welfare maximization, we make such trade-offs explicit and, in doing so, broaden the scope of the fairness question to include distributive justice. This descriptive mathematical link sets the groundwork for the requisite normative reasoning of fair machine learning. Different loss functions, parity constraints, and fairness-adjustment methodologies all differentially impact “optimal” classifier behavior. Translating these various effects into changes in welfare weights, which may be analyzed at the individual level or summed to reveal group shares of societal welfare, allows practitioners to better interpret and evaluate the distributive impacts of predictions and, as a result, make more informed comparisons among these choices when building models.

This work presents preliminary results on the mapping from boundary-based classification to social welfare maximization, but it is our hope that future work will establish a bidirectional relationship such that insights in welfare maximization may be translated for fair classification. Welfare economics as a field has a rich history of developing principles and methods of analysis centered on problems of fair representation and distribution. Linking the field with machine learning would yield complementary perspectives on fairness that would be both normative and descriptive, theoretical and implementable.

Acknowledgements

This work is supported in part by an NSF Graduate Research Fellowship and NSF Grant #CCF-1718549.

References

  • Bechavod & Ligett (2017) Bechavod, Yahav and Ligett, Katrina. Learning fair classifiers: A regularization-inspired approach. arXiv preprint arXiv:1707.00044, 2017.
  • Chouldechova (2017) Chouldechova, Alexandra. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163, 2017.
  • Dwork et al. (2012) Dwork, Cynthia, Hardt, Moritz, Pitassi, Toniann, Reingold, Omer, and Zemel, Richard. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226. ACM, 2012.
  • Feldman et al. (2015) Feldman, Michael, Friedler, Sorelle A, Moeller, John, Scheidegger, Carlos, and Venkatasubramanian, Suresh. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259–268. ACM, 2015.
  • Hardt et al. (2016) Hardt, Moritz, Price, Eric, Srebro, Nati, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315–3323, 2016.
  • Kleinberg et al. (2017) Kleinberg, Jon, Mullainathan, Sendhil, and Raghavan, Manish. Inherent trade-offs in the fair determination of risk scores. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference, pp. 43:1–43:23. ACM, 2017.
  • Sen (1980) Sen, Amartya. Equality of What? Cambridge University Press, Cambridge, 1980. Reprinted in John Rawls et al., Liberty, Equality and Law (Cambridge: Cambridge University Press, 1987).
  • Zafar et al. (2017) Zafar, Muhammad Bilal, Valera, Isabel, Gomez Rodriguez, Manuel, and Gummadi, Krishna P. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pp. 1171–1180. International World Wide Web Conferences Steering Committee, 2017.