Differentially Private and Fair Classification via Calibrated Functional Mechanism


Abstract

Machine learning is increasingly becoming a powerful tool for decision making in a wide variety of applications, such as medical diagnosis and autonomous driving. Privacy concerns about the training data and unfair behaviors of some decisions with regard to certain attributes (e.g., sex, race) are becoming more critical. Thus, constructing a fair machine learning model while simultaneously providing privacy protection becomes a challenging problem. In this paper, we focus on the design of classification models with fairness and differential privacy guarantees by jointly combining the functional mechanism and decision boundary fairness. In order to enforce $\epsilon$-differential privacy and fairness, we leverage the functional mechanism to add different amounts of Laplace noise regarding different attributes to the polynomial coefficients of the objective function in consideration of the fairness constraint. We further propose a utility-enhancement scheme, called the relaxed functional mechanism, which adds Gaussian noise instead of Laplace noise and hence achieves $(\epsilon, \delta)$-differential privacy. Based on the relaxed functional mechanism, we design an $(\epsilon, \delta)$-differentially private and fair classification model. Moreover, our theoretical analysis and empirical results demonstrate that both of our approaches achieve fairness and differential privacy while preserving good utility, and that they outperform state-of-the-art algorithms.

Introduction

In this big data era, machine learning has become a powerful technique for automated and data-driven decision making in various domains, such as spam filtering, credit ratings, housing allocation, and so on. However, as the success of machine learning mainly relies on a vast amount of individual data (e.g., financial transactions, tax payments), there are growing concerns about the potential for privacy leakage and unfairness in training and deploying machine learning algorithms [13, 3]. Thus, the problem of fairness and privacy in machine learning has attracted considerable attention.

Fairness-aware learning has received growing attention in the machine learning field due to the social inequities and unfair behaviors observed in classification models. For example, an automated job hiring system may be more likely to hire candidates from certain racial or gender groups [14, 23]. Hence, substantial effort has centered on developing algorithmic methods for designing fair classification models and balancing the trade-off between accuracy and fairness, mainly falling into two groups: pre/post-processing methods [6, 12, 16] and in-processing methods [18, 27]. Pre/post-processing methods achieve fairness by directly changing values of the sensitive attributes or class labels in the training data. As pointed out in [27], pre/post-processing methods treat the learning algorithm as a black box, which can result in unpredictable loss of classification utility. Thus, in-processing methods, which introduce fairness constraints or regularization terms into the objective function to remove the discriminatory effect of classifiers, have shown great success.

At the same time, differential privacy [9] has emerged as the de facto standard for measuring the privacy leakage associated with algorithms on sensitive databases, and it has recently received considerable attention from large-scale corporations such as Google [11] and Microsoft [5]. Generally speaking, differential privacy ensures that there is no statistical difference in the output of a randomized algorithm whether a single individual opts in to, or out of, its input. A large class of mechanisms has been proposed to ensure differential privacy. For instance, the Laplace mechanism introduces random noise drawn from the Laplace distribution into the output of queries, so that an adversary cannot confirm with high confidence that a single individual is in the input [8]. To design private machine learning models, more sophisticated perturbation mechanisms have been proposed, such as objective perturbation [2] and the functional mechanism [28], which inject random noise into the objective function rather than the model parameters.

Thus, in this paper, we mainly focus on achieving classification models that simultaneously provide differential privacy and fairness. As pointed out in a recent study [25], achieving both requirements efficiently is quite challenging, due to the different aims of differential privacy and fairness. Differential privacy in a classification model focuses on the individual level, i.e., it guarantees that the model output is independent of whether any individual record is present or absent in the dataset, while fairness in a classification model focuses on the group level, i.e., it guarantees that the model predictions for the protected group (such as the female group) are the same as those for the unprotected group (such as the male group). A number of studies have addressed privacy protection and fairness jointly. Specifically, in [6], Dwork et al. gave a new definition of fairness that is an extended definition of differential privacy. In [15], Hajian et al. imposed fairness and $k$-anonymity via a pattern sanitization method. Moreover, Ekstrand et al. in [10] put forward a set of questions about whether fairness is compatible with privacy. However, only Xu et al. in [25] studied how to meet the requirements of both differential privacy and fairness in classification models, by combining the functional mechanism and decision boundary fairness. Therefore, how to simultaneously meet the requirements of differential privacy and fairness in machine learning algorithms remains underexplored.

In this paper, we propose Purely and Approximately Differentially private and Fair Classification algorithms, called PDFC and ADFC, respectively, by incorporating the functional mechanism and decision boundary covariance, a novel measure of decision boundary fairness. As shown in [17], due to the correlation between input features (attributes), the discrimination of classification still exists even if the protected attribute is removed from the dataset before training. Hence, different from [25], which adds the same scale of noise to each attribute, in PDFC we consider a calibrated functional mechanism, i.e., injecting different amounts of Laplace noise regarding different attributes into the polynomial coefficients of the constrained objective function to ensure $\epsilon$-differential privacy and reduce the effects of discrimination. To further improve the model accuracy, in ADFC we propose a relaxed functional mechanism that inserts Gaussian noise instead of Laplace noise, and leverage it to perturb the coefficients of the polynomial representation of the constrained objective function to enforce $(\epsilon, \delta)$-differential privacy and fairness. Our salient contributions are listed as follows.

  • We propose two approaches, PDFC and ADFC, to learn a logistic regression model with differential privacy and fairness guarantees by applying the functional mechanism to a constrained objective function of logistic regression, in which the decision boundary fairness constraint is treated as a penalty term added to the original objective function.

  • For PDFC, different magnitudes of Laplace noise regarding different attributes are added to the polynomial coefficients of the constrained objective function to enforce $\epsilon$-differential privacy and fairness.

  • For ADFC, we further improve the model accuracy by proposing the relaxed functional mechanism based on the Extended Gaussian mechanism, and we leverage it to introduce Gaussian noise with different scales to perturb the objective function.

  • Using real-world datasets, we show that PDFC and ADFC significantly outperform the baseline algorithms while jointly providing differential privacy and fairness.

The rest of the paper is organized as follows. We first give the problem statement and background on differential privacy and fairness. Next, we present our two approaches, PDFC and ADFC, to achieve differentially private and fair classification. Finally, we give the numerical experiments based on real-world datasets and draw concluding remarks. Due to the space limit, we leave all the proofs to the supplemental materials.

Problem Statement

This paper considers a training dataset $D$ that includes $n$ tuples $t_1, t_2, \ldots, t_n$. We denote each tuple by $t_i = (x_i, y_i)$, where the feature vector $x_i$ contains $d$ attributes, i.e., $x_i = (x_{i1}, x_{i2}, \ldots, x_{id})$, and $y_i$ is the corresponding label. Without loss of generality, we assume $\|x_i\| \le 1$ and $y_i \in \{0, 1\}$ for binary classification tasks. The objective is to construct a binary classification model with model parameter $w$ that, taking $x_i$ as input, can output the prediction $\hat{y}_i$, by minimizing the empirical loss on the training dataset over the parameter space of $w$.

In general, we have the following optimization problem:

$w^{*} = \arg\min_{w} f(D, w) = \arg\min_{w} \sum_{i=1}^{n} f(t_i, w), \qquad (1)$

where $f(t_i, w)$ is the loss function. In this paper, we consider logistic regression as the loss function, i.e., $f(t_i, w) = \log\bigl(1 + \exp(x_i^\top w)\bigr) - y_i\, x_i^\top w$. Thus, the classification model has the form $P(y_i = 1 \mid x_i, w) = \exp(x_i^\top w) / \bigl(1 + \exp(x_i^\top w)\bigr)$.
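For illustration, a minimal Python sketch of the logistic loss and the resulting classifier described above could look as follows (the function and variable names are illustrative, not from the paper):

import numpy as np

def logistic_loss(w, X, y):
    # Empirical loss: sum_i log(1 + exp(x_i^T w)) - y_i * x_i^T w
    scores = X @ w
    return np.sum(np.log1p(np.exp(scores)) - y * scores)

def predict(w, X):
    # Predict y_hat = 1 when the estimated probability exceeds 0.5
    probs = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (probs >= 0.5).astype(int)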

Although there is no need to share the dataset during the training procedure, the risk of information leakage still exists when we release the trained classification model parameter. For example, an adversary may perform a model inversion attack [13] on the released model, together with some background knowledge about the training dataset, to infer sensitive information in the dataset.

Furthermore, if the labels $y$ in the training dataset are associated with a protected attribute $z$ (we denote the remaining attributes $x$ as unprotected attributes), such as gender, the classifier may be biased, i.e., $P(\hat{y} = 1 \mid z = 1) \neq P(\hat{y} = 1 \mid z = 0)$, where we assume the protected attribute $z \in \{0, 1\}$. According to [20], even if the protected attribute is not used to build the classification model, this unfair behavior may happen when the protected attribute is correlated with other unprotected attributes.

Therefore, in this paper, our objective is to learn a binary classification model, which is able to guarantee differential privacy and fairness while preserving good model utility.

Background

In this section, we first introduce some background knowledge on differential privacy, which helps us to build private classification models. Then we present the fairness definitions that help us to enforce classification fairness.

Differential Privacy

Differential privacy is introduced to guarantee that the ability of an adversary to obtain additional information about any individual is independent of whether that individual's record is present or absent in the dataset.

Definition 1 ($\epsilon$-Differential Privacy).

A randomized mechanism $\mathcal{M}$ satisfies $\epsilon$-differential privacy if, for any two neighboring datasets $D$ and $D'$, i.e., datasets differing in at most one single data sample, and for any possible output $O$ in the output space of $\mathcal{M}$, it holds that $\Pr[\mathcal{M}(D) = O] \le e^{\epsilon} \Pr[\mathcal{M}(D') = O]$.

The privacy parameter $\epsilon$ controls the strength of the privacy guarantee: a smaller value indicates stronger privacy protection. Though differential privacy provides a very strong guarantee, in some cases it may be too strong to retain good data utility. We therefore introduce a relaxation, $(\epsilon, \delta)$-differential privacy, which was proposed in [7].

Definition 2 ($(\epsilon, \delta)$-Differential Privacy).

A randomized mechanism $\mathcal{M}$ satisfies $(\epsilon, \delta)$-differential privacy if, for any two neighboring datasets $D$ and $D'$ differing in at most one single data item, and for any possible output $O$ in the output space of $\mathcal{M}$, it holds that $\Pr[\mathcal{M}(D) = O] \le e^{\epsilon} \Pr[\mathcal{M}(D') = O] + \delta$.

The Laplace mechanism [9] and the Extended Gaussian mechanism [21] are common techniques for achieving differential privacy; both add random noise calibrated to the sensitivity of the query function $q$.

Theorem 1 (Laplace Mechanism).

Given any function $q: \mathcal{D} \to \mathbb{R}^{p}$, the Laplace mechanism defined by

$\mathcal{M}(D) = q(D) + (\eta_1, \ldots, \eta_p)$

preserves $\epsilon$-differential privacy, where $\eta_1, \ldots, \eta_p$ are i.i.d. random variables drawn from $\mathrm{Lap}(\Delta_1/\epsilon)$, and the $\ell_1$-sensitivity of the query, $\Delta_1 = \max_{D, D'} \|q(D) - q(D')\|_1$, is taken over all neighboring datasets $D$ and $D'$.
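As a minimal sketch (assuming the $\ell_1$-sensitivity of the query has already been computed), the Laplace mechanism can be implemented as follows; the function name and signature are illustrative:

import numpy as np

def laplace_mechanism(query_output, l1_sensitivity, epsilon, rng=None):
    # Add i.i.d. Laplace(Delta_1 / epsilon) noise to each coordinate of the query output.
    rng = np.random.default_rng() if rng is None else rng
    return query_output + rng.laplace(scale=l1_sensitivity / epsilon, size=np.shape(query_output))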

Theorem 2 (Extended Gaussian Mechanism).

Given any function $q: \mathcal{D} \to \mathbb{R}^{p}$ and for any $\epsilon > 0$, $\delta \in (0, 1)$, the Extended Gaussian mechanism defined by

$\mathcal{M}(D) = q(D) + (\eta_1, \ldots, \eta_p)$

preserves $(\epsilon, \delta)$-differential privacy, where $\eta_1, \ldots, \eta_p$ are i.i.d. random variables drawn from a Gaussian distribution $\mathcal{N}(0, \sigma^2)$, with $\sigma$ calibrated to $\epsilon$, $\delta$, and the $\ell_2$-sensitivity of the query, $\Delta_2 = \max_{D, D'} \|q(D) - q(D')\|_2$, taken over all neighboring datasets $D$ and $D'$.
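For comparison, a sketch of Gaussian perturbation is given below. Since the exact noise calibration of the Extended Gaussian mechanism in [21] is not reproduced here, this sketch uses the classical Gaussian-mechanism calibration $\sigma = \Delta_2 \sqrt{2\ln(1.25/\delta)}/\epsilon$ (valid for $\epsilon \le 1$) purely as a stand-in:

import numpy as np

def gaussian_perturbation(query_output, l2_sensitivity, epsilon, delta, rng=None):
    # Classical calibration; the Extended Gaussian mechanism of [21] uses a different sigma.
    rng = np.random.default_rng() if rng is None else rng
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return query_output + rng.normal(scale=sigma, size=np.shape(query_output))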

Functional Mechanism. The functional mechanism, introduced by [28] as an extension of the Laplace mechanism, is designed for regression analysis. To preserve $\epsilon$-differential privacy, the functional mechanism injects differentially private noise into the objective function and then publishes a noisy model parameter derived from minimizing the perturbed objective function rather than the original one. Since the objective function $f(D, w)$ is a complex function of $w$, in the functional mechanism $f(D, w)$ is represented in polynomial form through Taylor expansion. The model parameter $w$ is a vector consisting of $d$ values $w_1, \ldots, w_d$. We denote by $\phi(w)$ a product of $w_1, \ldots, w_d$, namely, $\phi(w) = w_1^{c_1} w_2^{c_2} \cdots w_d^{c_d}$ for some $c_1, \ldots, c_d \in \mathbb{N}$. We also denote by $\Phi_j$ the set of all products of $w_1, \ldots, w_d$ with degree $j$, i.e., $\Phi_j = \{w_1^{c_1} w_2^{c_2} \cdots w_d^{c_d} \mid \sum_{l=1}^{d} c_l = j\}$.

According to the Stone-Weierstrass Theorem [22], any continuous and differentiable function can always be expressed in polynomial form. Therefore, the objective function can be written as follows:

$f(D, w) = \sum_{j=0}^{J} \sum_{\phi \in \Phi_j} \sum_{t_i \in D} \lambda_{\phi, t_i}\, \phi(w), \qquad (2)$

where $\lambda_{\phi, t_i} \in \mathbb{R}$ represents the coefficient of $\phi(w)$ in the polynomial.

To preserve $\epsilon$-differential privacy, the objective function is perturbed by adding Laplace noise to the polynomial coefficients, i.e., $\bar{\lambda}_{\phi} = \sum_{t_i \in D} \lambda_{\phi, t_i} + \mathrm{Lap}(\Delta/\epsilon)$, where $\Delta$ is the $\ell_1$-sensitivity of the coefficients. The model parameter $\bar{w}$ is then obtained by minimizing the noisy objective function $\bar{f}(D, w)$. The sensitivity of logistic regression is given in the following lemma.
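For reference, the order-2 Taylor approximation of the logistic loss around $w = 0$ used by the functional mechanism in [28] is

$f(t_i, w) = \log\bigl(1 + e^{x_i^\top w}\bigr) - y_i\, x_i^\top w \;\approx\; \log 2 + \sum_{j=1}^{d} \Bigl(\tfrac{1}{2} - y_i\Bigr) x_{ij}\, w_j + \sum_{j=1}^{d} \sum_{l=1}^{d} \tfrac{1}{8}\, x_{ij} x_{il}\, w_j w_l,$

so the degree-1 coefficients are $(\tfrac{1}{2} - y_i)\, x_{ij}$ and the degree-2 coefficients are $\tfrac{1}{8}\, x_{ij} x_{il}$; these are the quantities to which the Laplace noise is added.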

Lemma 1 ($\ell_1$-Sensitivity of Logistic Regression).

Let $f(D, w)$ and $f(D', w)$ be the logistic regression objective functions on two neighboring datasets $D$ and $D'$, respectively, and denote their polynomial representations as in (2). Then we have the following inequality:

where $t$ (respectively $t'$) is an arbitrary tuple in $D$ (respectively $D'$).

Classification Fairness

The goal of classification fairness is to find a classifier that minimizes the empirical loss while guaranteeing certain fairness requirements. Many fairness definitions have been proposed in the literature, including mistreatment parity [26], demographic parity [20], etc.

Demographic parity, the most widely used fairness definition in the classification fairness domain, requires that the decision made by the classifier does not depend on the protected attribute $z$, for instance, sex or race.

Definition 3.

(Demographic Parity in a Classifier) Given a classification model and a labeled dataset $D$, demographic parity in a classifier is defined by $P(\hat{y} = 1 \mid z = 1) = P(\hat{y} = 1 \mid z = 0)$, where $z$ is the protected attribute.

Moreover, demographic parity is quantified in terms of the risk difference (RD) [19], i.e., the difference in the rate of positive decisions between the protected group and the unprotected group. Thus, the risk difference produced by a classifier is defined as $\mathrm{RD} = |P(\hat{y} = 1 \mid z = 1) - P(\hat{y} = 1 \mid z = 0)|$.

One in-processing method to ensure classification fairness, called decision boundary fairness [27], is to find a model parameter that minimizes the loss function under a fairness constraint. Thus, the fair classification problem is formulated as follows:

$\min_{w} f(D, w) \quad \text{s.t.} \quad |C(D, w)| \le \tau, \qquad (3)$

where $C(D, w)$ is the constraint term and $\tau$ is the threshold. For instance, Zafar et al. [27] have proposed to adopt the decision boundary covariance to define the fairness constraint, i.e.,

$C(D, w) = \frac{1}{n} \sum_{i=1}^{n} (z_i - \bar{z})\, d_w(x_i), \qquad (4)$

where $d_w(x_i)$ is the signed distance from $x_i$ to the decision boundary, $\bar{z}$ is the average of the protected attribute, and $z_i \in \{0, 1\}$. For logistic regression classification models, the decision boundary is defined by $w^\top x = 0$, i.e., $d_w(x_i) = w^\top x_i$. The decision boundary covariance (4) then reduces to $C(D, w) = \frac{1}{n} \sum_{i=1}^{n} (z_i - \bar{z})\, w^\top x_i$.
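As an illustration, the decision boundary covariance for logistic regression can be computed with a few lines of Python (illustrative names; X is the n-by-d feature matrix, z the protected-attribute vector, w the parameter vector):

import numpy as np

def boundary_covariance(w, X, z):
    # C(D, w) = (1/n) * sum_i (z_i - z_bar) * (w^T x_i)
    return np.mean((z - z.mean()) * (X @ w))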

Differentially Private and Fair Classification

In this section, we first present our approach PDFC to achieve fair logistic regression with an $\epsilon$-differential privacy guarantee. Then we propose a relaxed functional mechanism that injects Gaussian noise instead of Laplace noise to provide $(\epsilon, \delta)$-differential privacy. By leveraging the relaxed functional mechanism, we will show that our second approach, ADFC, can jointly provide $(\epsilon, \delta)$-differential privacy and fairness.

Purely DP and Fair Classification

In order to meet the requirements of $\epsilon$-differential privacy and fairness, motivated by [25], we consider combining the functional mechanism and decision boundary fairness. We first transform the constrained optimization problem (3) into an unconstrained problem by treating the fairness constraint as a penalty term, i.e., the fairness constraint is shifted into the original objective function $f(D, w)$. We then have the new objective function defined as $\tilde{f}(D, w) = f(D, w) + \lambda\, C(D, w)$, where $\lambda$ is a hyperparameter that controls the trade-off between model utility and fairness. For convenience of discussion, we set $\lambda = 1$; note that our theoretical results still hold if other values of $\lambda$ and $\tau$ are chosen. By equation (4), we have

$\tilde{f}(D, w) = \sum_{i=1}^{n} f(t_i, w) + \frac{1}{n} \sum_{i=1}^{n} (z_i - \bar{z})\, w^\top x_i. \qquad (5)$

To apply the functional mechanism, we first write the approximate objective function based on (2) as follows:

$\hat{f}(D, w) = \sum_{j=0}^{2} \sum_{\phi \in \Phi_j} \sum_{t_i \in D} \lambda_{\phi, t_i}\, \phi(w), \qquad (6)$

where $\lambda_{\phi, t_i}$ denotes the coefficient of $\phi(w)$ in the polynomial representations of $f(t_i, w)$ and the fairness penalty $C(D, w)$.

1:  Input: Dataset $D$; the objective function $f(D, w)$; the fairness constraint $C(D, w)$; the privacy budget $\epsilon_1$ for a particular unprotected attribute $x_k$; the privacy budget $\epsilon_2$ for the other unprotected attributes; the $\ell_1$-sensitivity $\Delta$.
2:  Output: $\bar{w}$, $\epsilon$.
3:  Set the approximate function $\hat{f}(D, w)$ by equation (6).
4:  Set two empty sets $\Omega_1$, $\Omega_2$.
5:  for $j = 0$ to $2$ do
6:        for each $\phi \in \Phi_j$ do
7:              if $\phi$ includes $w_k$ for the particular attribute $x_k$ then
8:                    Put $\lambda_\phi = \sum_{t_i \in D} \lambda_{\phi, t_i}$ into $\Omega_1$.
9:              else
10:                    Put $\lambda_\phi$ into $\Omega_2$.
11:              end if
12:        end for
13:  end for
14:  for $j = 0$ to $2$ do
15:        for each $\phi \in \Phi_j$ do
16:              if $\lambda_\phi \in \Omega_1$ then
17:                    Set $\bar{\lambda}_\phi = \lambda_\phi + \mathrm{Lap}(\Delta/\epsilon_1)$.
18:              else
19:                    Set $\bar{\lambda}_\phi = \lambda_\phi + \mathrm{Lap}(\Delta/\epsilon_2)$.
20:              end if
21:        end for
22:  end for
23:  Let $\bar{f}(D, w) = \sum_{j=0}^{2} \sum_{\phi \in \Phi_j} \bar{\lambda}_\phi\, \phi(w)$.
24:  Compute $\bar{w} = \arg\min_{w} \bar{f}(D, w)$.
25:  Compute the overall privacy budget $\epsilon$ from $\epsilon_1$ and $\epsilon_2$ (see Theorem 3).
26:  return: $\bar{w}$, $\epsilon$.
Algorithm 1 Purely DP and Fair Classification (PDFC)

The attributes in the dataset may not be independent of each other, which means that some unprotected attributes in $x$ are highly correlated with the protected attribute $z$. For instance, the protected attribute gender may be correlated with the attribute marital status. Thus, to reduce the discrimination between the protected attribute $z$ and the labels $y$, it is important to weaken the correlation between these most correlated attributes and the protected attribute $z$. However, it is often impossible to determine the degree of correlation between an unprotected attribute and the protected attribute. Therefore, we randomly select an unprotected attribute $x_k$ and leverage the functional mechanism to add noise with a larger scale to the polynomial coefficients of the monomials involving $w_k$. Interestingly, this approach not only helps to reduce the correlation between $x_k$ and $z$, but also improves the privacy of attribute $x_k$ to prevent model inversion attacks, as shown in [24].

The key steps of PDFC are outlined in Algorithm 1. We first set two different privacy budgets, $\epsilon_1$ and $\epsilon_2$, for the attribute $x_k$ and for the rest of the attributes, respectively. Before injecting noise into the coefficients, all coefficients are separated into two groups, $\Omega_1$ and $\Omega_2$, according to whether $w_k$ is involved in the corresponding monomials (i.e., whether the coefficients are associated with attribute $x_k$). We then add Laplace noise drawn from $\mathrm{Lap}(\Delta/\epsilon_1)$ and $\mathrm{Lap}(\Delta/\epsilon_2)$ to the coefficients in $\Omega_1$ and $\Omega_2$, respectively, to construct the differentially private objective function $\bar{f}(D, w)$, where the sensitivity $\Delta$ can be found in Lemma 6. Finally, the differentially private model parameter $\bar{w}$ is obtained by minimizing $\bar{f}(D, w)$. Note that $\bar{w}$ also ensures classification fairness, since the objective function incorporates the fairness constraint.
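A minimal sketch of the PDFC perturbation step is given below, under the assumptions stated earlier: the degree-1 and degree-2 coefficients of the approximate objective (6) are formed from the order-2 expansion plus the fairness penalty, split according to whether they involve the selected attribute $x_k$, and perturbed with Laplace noise of scale $\Delta/\epsilon_1$ or $\Delta/\epsilon_2$. All names are illustrative, the sensitivity sens is assumed to be supplied (Lemma 6), and noise is added entry-wise for simplicity rather than once per monomial:

import numpy as np
from scipy.optimize import minimize

def pdfc_fit(X, y, z, k, eps1, eps2, sens, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Degree-1 coefficients (logistic approximation plus fairness penalty) and degree-2 coefficients.
    coef1 = X.T @ (0.5 - y) + (X.T @ (z - z.mean())) / n
    coef2 = 0.125 * (X.T @ X)
    noisy1, noisy2 = coef1.copy(), coef2.copy()
    for j in range(d):
        noisy1[j] += rng.laplace(scale=sens / (eps1 if j == k else eps2))
        for l in range(d):
            noisy2[j, l] += rng.laplace(scale=sens / (eps1 if k in (j, l) else eps2))
    # Minimize the perturbed objective to obtain the private and fair parameter
    # (a regularization term may be needed in practice if the noisy quadratic loses convexity).
    objective = lambda w: n * np.log(2.0) + noisy1 @ w + w @ noisy2 @ w
    return minimize(objective, np.zeros(d)).x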

Lemma 2.

Let $D$ and $D'$ be any two neighboring datasets differing in at most one tuple, and let $\hat{f}(D, w)$ and $\hat{f}(D', w)$ be the approximate objective functions on $D$ and $D'$, respectively. Then we have the following inequality:

The following theorem shows the privacy guarantee of PDFC.

Theorem 3.

The output model parameter $\bar{w}$ of PDFC (Algorithm 1) preserves $\epsilon$-differential privacy, where $\epsilon$ is determined by $\epsilon_1$ and $\epsilon_2$.

Approximately DP and Fair Classification

We now focus on using the relaxed version of $\epsilon$-differential privacy, i.e., $(\epsilon, \delta)$-differential privacy, to further improve the utility of differentially private and fair logistic regression. Hence, in order to satisfy $(\epsilon, \delta)$-differential privacy, we propose the relaxed functional mechanism by making use of the Extended Gaussian mechanism. As shown in Theorem 2, before applying the Extended Gaussian mechanism we first need the $\ell_2$-sensitivity of the query function, i.e., of the objective function of logistic regression, given in the following lemma.

Lemma 3 ($\ell_2$-Sensitivity of Logistic Regression).

For the two polynomial representations of logistic regression on neighboring datasets $D$ and $D'$ given in Lemma 1, we have the following inequality:

where we denote by $\{\lambda_{\phi, t}\}$ and $\{\lambda_{\phi, t'}\}$ the sets of polynomial coefficients of $f(D, w)$ and $f(D', w)$, and $t$ (or $t'$) is an arbitrary tuple.

We then perturb the polynomial representation of $f(D, w)$ by injecting Gaussian noise drawn from $\mathcal{N}(0, \sigma^2)$, with $\sigma$ calibrated to $\epsilon$, $\delta$, and the $\ell_2$-sensitivity, into its polynomial coefficients, and we obtain the differentially private model parameter $\bar{w}$ by minimizing the noisy function $\bar{f}(D, w)$, as shown in Algorithm 2. Finally, we provide the privacy guarantee of the proposed relaxed functional mechanism in the following theorem.

Theorem 4.

The relaxed functional mechanism in Algorithm 2 guarantees $(\epsilon, \delta)$-differential privacy.

1:  Input: Dataset $D$; the objective function $f(D, w)$; the privacy parameters $\epsilon$, $\delta$.
2:  Output: $\bar{w}$.
3:  Set the $\ell_2$-sensitivity $\Delta_2$ according to Lemma 7.
4:  for $j = 0$ to $2$ do
5:        for each $\phi \in \Phi_j$ do
6:              Set $\bar{\lambda}_\phi = \sum_{t_i \in D} \lambda_{\phi, t_i} + \eta$, where $\eta \sim \mathcal{N}(0, \sigma^2)$ and $\sigma$ is calibrated to $\epsilon$, $\delta$, and $\Delta_2$ (Theorem 2).
7:        end for
8:  end for
9:  Let $\bar{f}(D, w) = \sum_{j=0}^{2} \sum_{\phi \in \Phi_j} \bar{\lambda}_\phi\, \phi(w)$.
10:  Compute $\bar{w} = \arg\min_{w} \bar{f}(D, w)$.
11:  return: $\bar{w}$.
Algorithm 2 Relaxed Functional Mechanism
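A minimal Python sketch of Algorithm 2 under the same assumptions as the earlier sketches: the polynomial coefficients of the order-2 approximation are perturbed with i.i.d. Gaussian noise whose standard deviation sigma is taken as an input, assumed to be calibrated from $\epsilon$, $\delta$, and the $\ell_2$-sensitivity of Lemma 7 (not reproduced here):

import numpy as np
from scipy.optimize import minimize

def relaxed_functional_mechanism(X, y, sigma, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    coef1 = X.T @ (0.5 - y)            # degree-1 coefficients of the order-2 approximation
    coef2 = 0.125 * (X.T @ X)          # degree-2 coefficients
    noisy1 = coef1 + rng.normal(scale=sigma, size=d)
    noisy2 = coef2 + rng.normal(scale=sigma, size=(d, d))
    objective = lambda w: n * np.log(2.0) + noisy1 @ w + w @ noisy2 @ w
    return minimize(objective, np.zeros(d)).x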

Our second approach, called ADFC, applies the relaxed functional mechanism to the objective function with the decision boundary fairness constraint to enforce $(\epsilon, \delta)$-differential privacy and fairness. As shown in Algorithm 3, we first derive the polynomial representation according to (6) and employ random Gaussian noise to perturb the objective function $\hat{f}(D, w)$, i.e., we inject Gaussian noise into its polynomial coefficients. Furthermore, we allocate different privacy parameters, $(\epsilon_1, \delta_1)$ for a particular unprotected attribute $x_k$ and $(\epsilon_2, \delta_2)$ for the rest of the unprotected attributes, to improve the privacy of attribute $x_k$ and reduce the correlation between $x_k$ and $z$. Hence, we add random noise drawn from $\mathcal{N}(0, \sigma_1^2)$ to the polynomial coefficients in $\Omega_1$. For the polynomial coefficients in $\Omega_2$, we inject noise drawn from $\mathcal{N}(0, \sigma_2^2)$.

Lemma 4.

Let $D$ and $D'$ be any two neighboring datasets differing in at most one tuple, and let $\hat{f}(D, w)$ and $\hat{f}(D', w)$ be the approximate objective functions on $D$ and $D'$, respectively. Then we have the following inequality:

where we denote by $\{\lambda_{\phi, t}\}$ and $\{\lambda_{\phi, t'}\}$ the sets of polynomial coefficients of $\hat{f}(D, w)$ and $\hat{f}(D', w)$, and $t$ (or $t'$) is an arbitrary tuple.

1:  Input: Dataset $D$; the objective function $f(D, w)$; the fairness constraint $C(D, w)$; the privacy parameters $(\epsilon_1, \delta_1)$ for a particular unprotected attribute $x_k$; the privacy parameters $(\epsilon_2, \delta_2)$ for the other unprotected attributes.
2:  Output: $\bar{w}$, $\epsilon$ and $\delta$.
3:  Set the approximate function $\hat{f}(D, w)$ by equation (6).
4:  Set two empty sets $\Omega_1$, $\Omega_2$.
5:  for $j = 0$ to $2$ do
6:        for each $\phi \in \Phi_j$ do
7:              if $\phi$ includes $w_k$ for the particular attribute $x_k$ then
8:                    Put $\lambda_\phi = \sum_{t_i \in D} \lambda_{\phi, t_i}$ into $\Omega_1$.
9:              else
10:                    Put $\lambda_\phi$ into $\Omega_2$.
11:              end if
12:        end for
13:  end for
14:  Set the $\ell_2$-sensitivity $\Delta_2$ by Lemma 8.
15:  for $j = 0$ to $2$ do
16:        for each $\phi \in \Phi_j$ do
17:              if $\lambda_\phi \in \Omega_1$ then
18:                    Set $\bar{\lambda}_\phi = \lambda_\phi + \eta_1$, where $\eta_1 \sim \mathcal{N}(0, \sigma_1^2)$ and $\sigma_1$ is calibrated to $(\epsilon_1, \delta_1)$ and $\Delta_2$.
19:              else
20:                    Set $\bar{\lambda}_\phi = \lambda_\phi + \eta_2$, where $\eta_2 \sim \mathcal{N}(0, \sigma_2^2)$ and $\sigma_2$ is calibrated to $(\epsilon_2, \delta_2)$ and $\Delta_2$.
21:              end if
22:        end for
23:  end for
24:  Let $\bar{f}(D, w) = \sum_{j=0}^{2} \sum_{\phi \in \Phi_j} \bar{\lambda}_\phi\, \phi(w)$.
25:  Compute $\bar{w} = \arg\min_{w} \bar{f}(D, w)$.
26:  Compute the overall privacy parameters $\epsilon$ and $\delta$ from $(\epsilon_1, \delta_1)$ and $(\epsilon_2, \delta_2)$ (see Theorem 5).
27:  return: $\bar{w}$, $\epsilon$ and $\delta$.
Algorithm 3 Approximately DP and Fair Classification (ADFC)
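ADFC combines the coefficient split of Algorithm 1 with the Gaussian perturbation of Algorithm 2. A compact sketch of the perturbation step follows, where sigma1 and sigma2 are assumed to be calibrated from $(\epsilon_1, \delta_1)$ and $(\epsilon_2, \delta_2)$ via Lemma 8 (not reproduced here) and all names are illustrative:

import numpy as np

def adfc_perturb(coef1, coef2, k, sigma1, sigma2, rng=None):
    # Gaussian noise with a larger scale (sigma1) on coefficients involving the selected attribute x_k.
    rng = np.random.default_rng() if rng is None else rng
    d = coef1.shape[0]
    noisy1, noisy2 = coef1.copy(), coef2.copy()
    for j in range(d):
        noisy1[j] += rng.normal(scale=sigma1 if j == k else sigma2)
        for l in range(d):
            noisy2[j, l] += rng.normal(scale=sigma1 if k in (j, l) else sigma2)
    return noisy1, noisy2

The perturbed coefficients can then be plugged into the same quadratic objective as in the PDFC sketch and minimized to obtain the released parameter.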

Finally, by minimizing the differentially private objective function $\bar{f}(D, w)$, we derive the model parameter $\bar{w}$, which achieves differential privacy and fairness at the same time. We now show that ADFC satisfies $(\epsilon, \delta)$-differential privacy in the following theorem.

Theorem 5.

The output model parameter $\bar{w}$ of ADFC (Algorithm 3) guarantees $(\epsilon, \delta)$-differential privacy, where $\epsilon$ and $\delta$ are determined by $(\epsilon_1, \delta_1)$ and $(\epsilon_2, \delta_2)$.

Performance Evaluation

Simulation Setup

Figure 1: Comparison of accuracy under different privacy budgets $\epsilon$.
Figure 2: Comparison of accuracy under different values of $\delta$ on the US dataset.
Figure 3: Comparison of accuracy under different privacy budgets $\epsilon$ on the Adult dataset (panels (a), (b), (c)).

Data preprocessing. We evaluate the performance on two datasets, the Adult dataset and the US dataset. The Adult dataset from the UCI Machine Learning Repository [4] contains information about 13 different features (e.g., work-class, education, race, age, sex, and so on) of 48,842 individuals. The task is to predict whether the annual income of an individual is above 50K or not. The US dataset is from the Integrated Public Use Microdata Series [1] and consists of 370,000 records of census microdata, with features such as age, sex, education, family size, etc. The goal is to predict whether the annual income is over 25K. In both datasets, we consider sex as the binary protected attribute.

Baseline algorithms. In our experiments, we compare our approaches, PDFC and ADFC, against several baseline algorithms, namely LR and PFLR*. LR is a logistic regression model. PFLR* [25] is a differentially private and fair logistic regression model that injects Laplace noise with a shifted mean into the objective function of logistic regression with a fairness constraint. Moreover, we compare our relaxed functional mechanism against the original functional mechanism proposed in [28] and against No-Privacy, i.e., the original functional mechanism without any noise injected into the polynomial coefficients.

Evaluation. The utility of the algorithms is measured by accuracy, defined as

$\text{Accuracy} = \frac{\text{number of correct predictions}}{\text{total number of predictions}},$

which demonstrates the quality of a classifier. The fairness of classification models is quantified by the risk difference (RD),

$\mathrm{RD} = |P(\hat{y} = 1 \mid z = 1) - P(\hat{y} = 1 \mid z = 0)|,$

where $z$ is the protected attribute. We consider a random 80-20 training-testing split and conduct 10 independent runs of each algorithm. We then record the mean and standard deviation of accuracy and RD on the testing dataset. For the differential privacy parameters, we consider a range of values of $\epsilon$ and $\delta$.
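For reference, the two evaluation metrics can be computed as follows (a short sketch with illustrative names):

import numpy as np

def accuracy(y_true, y_pred):
    # Fraction of correct predictions.
    return np.mean(y_true == y_pred)

def risk_difference(y_pred, z):
    # RD = |P(y_hat = 1 | z = 1) - P(y_hat = 1 | z = 0)|
    return abs(y_pred[z == 1].mean() - y_pred[z == 0].mean())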

Results and Analysis

In Figure 1, we show the accuracy of each algorithm, i.e., the functional mechanism, the relaxed functional mechanism, and No-Privacy, as a function of the privacy budget $\epsilon$ with fixed $\delta$. We can see that the accuracy of No-Privacy remains unchanged for all values of $\epsilon$, as it does not provide any differential privacy guarantee. Our relaxed functional mechanism exhibits considerably higher accuracy than the functional mechanism in the high-privacy regime (small $\epsilon$), and its accuracy matches the No-Privacy baseline when $\epsilon$ is large. Figure 2 studies the accuracy of each algorithm under different values of $\delta$ with fixed $\epsilon$. The relaxed functional mechanism incurs lower accuracy as $\delta$ decreases, since a smaller $\delta$ requires a larger scale of noise to be injected into the objective function. However, the accuracy of the functional mechanism remains considerably lower than that of the relaxed functional mechanism in all cases.

Figure 3(a) studies the accuracy comparison among PFLR*, LR, PDFC, and ADFC on the Adult dataset, with marital status chosen as the particular unprotected attribute $x_k$. We can observe that ADFC consistently achieves better accuracy than PFLR* in all privacy regimes, while PDFC only outperforms PFLR* when $\epsilon$ is small. We also evaluate the effect of choosing different attributes as $x_k$ by performing experiments on the Adult dataset. As shown in Figure 3(b) and Figure 3(c), choosing different attributes (marital status, age, relation, and race) has different effects on the accuracy of PDFC and ADFC. However, PDFC and ADFC still outperform PFLR* under varying values of $\epsilon$. As expected, as the value of $\epsilon$ increases, the accuracy of each algorithm becomes higher in all three figures.

Table 1 shows how different privacy budgets affect the risk difference of LR, PFLR*, PDFC, and ADFC on the two datasets. Note that we choose the attribute $x_k$ to be race on the Adult dataset and work on the US dataset. It is clear that PDFC and ADFC produce a smaller risk difference than PFLR* in most cases of $\epsilon$. The key reason is that adding different amounts of noise for different attributes indeed reduces the correlation between the unprotected attributes and the protected attribute.

Table 1: Risk difference of LR, PFLR*, PDFC, and ADFC under different privacy budgets on the Adult and US datasets.

Conclusion

In this paper, we have introduced two approaches, PDFC and ADFC, to address the discrimination and privacy concerns in logistic regression classification. Different from existing techniques, in both approaches we apply the functional mechanism to the objective function with a decision boundary fairness constraint, and add noise of different magnitudes to the coefficients associated with different attributes to further reduce discrimination and improve privacy protection. Moreover, for ADFC, we utilize the proposed relaxed functional mechanism, which is built upon the Extended Gaussian mechanism, to further improve the model accuracy. Through extensive empirical comparisons with state-of-the-art methods for differentially private and fair classification, we demonstrated the effectiveness of the proposed approaches.

Acknowledgments

The work of J. Ding, X. Zhang, and M. Pan was supported in part by the U.S. National Science Foundation under grants US CNS-1350230 (CAREER), CNS-1646607, CNS-1702850, and CNS-1801925. The work of X. Li was supported in part by the Programs of NSFC under Grant 61762030, in part by the Guangxi Natural Science Foundation under Grant 2018GXNSFDA281013, and in part by the Key Science and Technology Project of Guangxi under Grant AA18242021.

Supplemental Material

Lemma 6.

Let $D$ and $D'$ be any two neighboring datasets differing in at most one tuple, and let $\hat{f}(D, w)$ and $\hat{f}(D', w)$ be the approximate objective functions on $D$ and $D'$, respectively. Then we have the following inequality:

Proof.

Assume that $D$ and $D'$ differ in the last tuple, $t_n = (x_n, y_n)$ and $t_n' = (x_n', y_n')$. We have that

where $x_{nj}$ represents the $j$-th element of the feature vector $x_n$. ∎

Theorem 8.

The output model parameter $\bar{w}$ of PDFC (Algorithm 1) guarantees $\epsilon$-differential privacy, where $\epsilon$ is determined by $\epsilon_1$ and $\epsilon_2$.

Proof.

We assume there are two neighboring datasets $D$ and $D'$ that differ in the last tuple, $t_n$ and $t_n'$. As shown in Algorithm 1, all polynomial coefficients are divided into two subsets $\Omega_1$ and $\Omega_2$ according to whether they are associated with the attribute $x_k$ or not. After adding Laplace noise, we have

Then the following inequality holds:

In the second-to-last equality, we directly adopt the result in [24]. ∎

Lemma 7 ($\ell_2$-Sensitivity of Logistic Regression).

For the two polynomial representations of logistic regression on neighboring datasets $D$ and $D'$ given in Lemma 1, we have the following inequality:

where we denote by $\{\lambda_{\phi, t}\}$ and $\{\lambda_{\phi, t'}\}$ the sets of polynomial coefficients of $f(D, w)$ and $f(D', w)$, and $t$ (or $t'$) is an arbitrary tuple.

Proof.

Assume that $D$ and $D'$ differ in the last tuple, $t_n$ and $t_n'$. For logistic regression, we have

where we have