Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving the rest
In this paper we investigate the use of adversarial perturbations for privacy against both human perception and model (machine) based detection. We employ adversarial perturbations to obfuscate certain attributes in raw data while preserving the rest. Existing adversarial perturbation methods are used for data poisoning: the raw data is minimally perturbed so that a machine learning model's performance is adversely impacted, while human vision cannot perceive the difference in the poisoned dataset owing to the minimal nature of the perturbation. We instead apply relatively maximal perturbations of the raw data to conditionally damage the model's classification of one attribute while preserving its performance on another attribute. In addition, the maximal nature of the perturbation impairs human perception in classifying the hidden attribute, beyond its impact on model performance. We validate our results qualitatively by showing the obfuscated dataset, and quantitatively by showing that models trained on clean data cannot predict the hidden attribute from the perturbed dataset while still being able to predict the remaining attributes.
In this paper we investigate the use of adversarial perturbations for privacy against both human and model (machine) based classification of hidden attributes from the perturbed data. With the advent of distributed learning, methods such as federated learning and split learning have become prominent for distributed deep learning. At the same time, privacy-preserving machine learning is a very active area of research. Adversarial approaches that minimally perturb the data to fool a model's performance, while evading human detection of the perturbation, have become popular. We investigate whether an adversarial perturbation can be used to achieve the following goals:
Damage the model's performance in predicting a chosen sensitive attribute while keeping its performance in predicting another attribute intact.
Obfuscate with a maximal perturbation to make it difficult for humans to detect the hidden attribute.
Such perturbations are necessitated in sectors like finance (Bateni et al., 2018; Srinivasan et al., 2019; Chen et al., 2018; Chen, 2018), healthcare (Vepakomma et al., 2018; Chang et al., 2018), government, retail (Zhao et al., 2017; Yao & Huang, 2017) and hiring (Kay et al., 2015; Harrison et al., 2018) due to privacy, fairness, ethical and regulatory issues.
We validate our results qualitatively by presenting the perturbed datasets, and quantitatively by showing that models trained on clean data cannot predict the hidden attribute from the perturbed dataset while still being able to predict the rest of the attributes. We consider these to be useful intermediate experiments and results towards the goal of using adversarial methods to generate perturbations such that, when a model is trained on the perturbed data to predict the hidden attribute, its performance remains under control. We also note that these approaches can motivate a theoretical study of the privacy guarantees of adversarial approaches under worst-case settings.
1.1 Contributions and method
We employ adversarial perturbation methods with a larger ε-ball of perturbations to generate images that are difficult for humans to classify with respect to the hidden attribute. To the best of our knowledge, prior work has only used adversarial perturbations with small ε-ball perturbations. The goal of this work, however, is to obfuscate the data so as to hide certain latent information from both the model and humans; we therefore conduct our experiments over a broad spectrum of epsilon values, encompassing the range that fools machine as well as human. In this study we consider only images, but our technique is generic in design and applicable wherever adversarial perturbations can be performed successfully. To generate adversarial perturbations, we use a VGG-16 (Simonyan & Zisserman, 2014) model with layers pretrained on ImageNet (Deng et al., 2009). We employ the architecture shown in Figure 1, where a fork is created after three blocks of the VGG-16 architecture; each block consists of two convolution layers and one pooling layer. The hidden-attribute fork consists of a few DNN layers for local computation. The rest of the network after the red fork is used to predict the label attribute that is to be preserved. We then train the network. Upon training, we employ the adversarial methods of the fast gradient sign method (FGSM) and projected gradient based perturbation (PG) to perturb the layer preceding the hidden-attribute fork (shown by the grey arrow). We deliberately choose a higher ε-ball of possible perturbations in order to generate the perturbation of this layer with respect to the loss function corresponding to only the hidden label attribute. We show detailed results regarding the quality of our results in the experiments section. In addition, we weight the two loss functions with a scalar weight.
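The large-ε FGSM step described above can be sketched as follows. This is a minimal illustrative sketch, not our exact implementation: the names `model_trunk` (the network up to the fork) and `hidden_head` (the hidden-attribute branch) are assumptions, and the epsilon value is a placeholder.

```python
import torch
import torch.nn.functional as F

def fgsm_obfuscate(model_trunk, hidden_head, x, hidden_labels, epsilon=0.5):
    """One-step FGSM with a deliberately large epsilon: ascend the loss of
    the hidden-attribute head only, so the perturbed input no longer
    reveals that attribute. (Illustrative sketch; module names assumed.)"""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = hidden_head(model_trunk(x_adv))        # predict hidden attribute
    loss = F.cross_entropy(logits, hidden_labels)   # loss w.r.t. hidden label only
    loss.backward()
    # Large-epsilon sign step: maximally damages hidden-attribute classification
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon this reduces to the standard FGSM attack of Goodfellow et al. (2014); the obfuscation behavior comes purely from choosing epsilon large enough that the perturbation is also visible to humans.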
1.2 Related work
In Figure 2 we present the landscape of deep-learning-based approaches for hiding certain attributes in data. We categorize it broadly into 4 approaches: a.) perturbation of raw data, which includes our approach; b.) overlaying a mask on raw data to hide certain parts of the image (Wang et al., 2018); c.) modifying the output of intermediate (encoded) representations (Lample et al., 2017; Vepakomma et al., 2018, 2019); d.) transforming the entire (or partial) natural image into another natural image (Du et al., 2014; Chen et al., 2019; Wu et al., 2018). We present 6 example methods within these categories. Our approach belongs to the category of ‘adversarial attack based perturbations’. As a side note, all the above approaches can be further categorized into sub-approaches that fool humans and/or machines, while our intermediate work focuses on both.
2 Experiments
We detail our experimental setup and results in this section. A condensed version of the architecture in our setup is shown in Figure 1. We perform experiments where VGG16 is the initially trained model that is adversarially perturbed within a large ε-ball of perturbations. We then predict on the perturbed data with clean models trained on unperturbed data, using VGG16 and VGG19 architectures. The two adversarial perturbation methods used with a large choice of ε-ball were the fast gradient sign method (FGSM) (Goodfellow et al., 2014) and the projected gradient adversarial method (PGD) (Athalye et al., 2017). The dataset used was UTKFace, a large-scale face dataset with a long age span. The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variations in pose, facial expression, illumination, occlusion, resolution, etc. This dataset can be used for a variety of tasks, e.g., face detection, age estimation, age progression/regression and landmark localization. In Table 1, we show results of the different methods of PGD and FGSM employed in the large ε-ball setting with different weights for the weighted loss, along with the accuracy of predicting on perturbed data generated by our approach using clean VGG16 and VGG19 models trained on unperturbed data. We note that the race predictions by the clean model after our perturbation belong predominantly to the majority race class. This shows that our method successfully increases the no-information rate in our predictions of the hidden label attribute of race while preserving gender accuracy. A comparable fraction of the ground truth of race belongs to the same class as well. Therefore we reach the required level of obfuscation.
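The projected-gradient variant with the weighted loss can be sketched as below. As with the earlier sketch, this is illustrative only: `model_trunk`, `hidden_head`, `public_head` and the step-size/weight values are assumptions, and the sign of each loss term encodes the goal of damaging the hidden attribute while preserving the public one.

```python
import torch
import torch.nn.functional as F

def pgd_obfuscate(model_trunk, hidden_head, public_head, x,
                  y_hidden, y_public, epsilon=0.3, alpha=0.02,
                  steps=40, w=1.0):
    """Iterative projected-gradient perturbation within a large epsilon-ball:
    ascend the hidden-attribute loss while descending the public-attribute
    loss, weighted by the scalar w. (Illustrative sketch; names assumed.)"""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feats = model_trunk(x_adv)
        loss = (F.cross_entropy(hidden_head(feats), y_hidden)
                - w * F.cross_entropy(public_head(feats), y_public))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball around the original image,
        # then back into the valid pixel range.
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Setting w = 0 recovers a plain PGD attack on the hidden attribute alone; larger w trades obfuscation strength for utility on the public attribute.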
| Epsilon | Race Accuracy | Gender Accuracy | Method | Clean Model |
3 Conclusion and future work
We investigate large ε-ball perturbations obtained via adversarial methods for obfuscating one label attribute while preserving the rest. We show better performance in fooling humans with the projected gradient descent based approach, and better utility in preserving the accuracy of public label attributes with the fast gradient sign method based approach. For future work, we aim to enhance this approach with information-theoretic and other statistical-dependency-minimizing loss functions such as distance correlation, the Hilbert-Schmidt Independence Criterion and Kernel Target Alignment. As our experiments show, the race predictions by the clean model after our perturbation belong predominantly to the majority race class, which increases the no-information rate in predictions of the hidden label attribute of race while preserving gender accuracy; we thereby reach the required baseline of obfuscation. The other goal would therefore be to raise the performance of predicting the public label attribute while maintaining the required obfuscation performance on the hidden label attribute.
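As an illustration of the statistical dependency measures mentioned above for future work, a biased empirical Hilbert-Schmidt Independence Criterion (HSIC) estimator with Gaussian kernels can be written as follows; the kernel bandwidth is an assumed placeholder, and this is a standard estimator rather than part of our current method.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator with Gaussian kernels:
    HSIC(X, Y) = trace(K H L H) / (n - 1)^2, where K and L are kernel
    Gram matrices and H is the centering matrix. Larger values indicate
    stronger statistical dependence between X and Y."""
    n = X.shape[0]

    def gram(Z):
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Minimizing such a term between the perturbed representation and the hidden attribute would penalize any residual dependence, complementing the cross-entropy-based perturbation objective.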
- Athalye et al. (2017) Athalye, Anish, Engstrom, Logan, Ilyas, Andrew, & Kwok, Kevin. 2017. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397.
- Bateni et al. (2018) Bateni, Mohammad Hossein, Chen, Yiwei, Ciocan, Dragos, & Mirrokni, Vahab. 2018. Fair Resource Allocation in A Volatile Marketplace. Available at SSRN 2789380.
- Chang et al. (2018) Chang, Ken, Balachandar, Niranjan, Lam, Carson, Yi, Darvin, Brown, James, Beers, Andrew, Rosen, Bruce, Rubin, Daniel L, & Kalpathy-Cramer, Jayashree. 2018. Distributed deep learning networks among institutions for medical imaging. Journal of the American Medical Informatics Association, 25(8), 945–954.
- Chen et al. (2018) Chen, Chaofan, Lin, Kangcheng, Rudin, Cynthia, Shaposhnik, Yaron, Wang, Sijia, & Wang, Tong. 2018. An interpretable model with globally consistent explanations for credit risk. arXiv preprint arXiv:1811.12615.
- Chen (2018) Chen, Jiahao. 2018. Fair lending needs explainable models for responsible recommendation. arXiv preprint arXiv:1809.04684.
- Chen et al. (2019) Chen, Xiao, Navidi, Thomas, Ermon, Stefano, & Rajagopal, Ram. 2019. Distributed generation of privacy preserving data with user customization. arXiv preprint arXiv:1904.09415.
- Deng et al. (2009) Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, & Fei-Fei, Li. 2009. Imagenet: A large-scale hierarchical image database. Pages 248–255 of: 2009 IEEE conference on computer vision and pattern recognition. IEEE.
- Du et al. (2014) Du, Liang, Yi, Meng, Blasch, Erik, & Ling, Haibin. 2014. GARP-face: Balancing privacy protection and utility preservation in face de-identification. Pages 1–8 of: IEEE International Joint Conference on Biometrics. IEEE.
- Goodfellow et al. (2014) Goodfellow, Ian J, Shlens, Jonathon, & Szegedy, Christian. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- Harrison et al. (2018) Harrison, Galen, Duarte, Natasha, & Hall, Joseph. 2018. Where’s the Bias? Developing Effective Model Governance. In: Proceedings of the Neural Information Processing Systems.
- Kay et al. (2015) Kay, Matthew, Matuszek, Cynthia, & Munson, Sean A. 2015. Unequal representation and gender stereotypes in image search results for occupations. Pages 3819–3828 of: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM.
- Lample et al. (2017) Lample, Guillaume, Zeghidour, Neil, Usunier, Nicolas, Bordes, Antoine, Denoyer, Ludovic, & Ranzato, Marc’Aurelio. 2017. Fader networks: Manipulating images by sliding attributes. Pages 5967–5976 of: Advances in Neural Information Processing Systems.
- Simonyan & Zisserman (2014) Simonyan, Karen, & Zisserman, Andrew. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- Srinivasan et al. (2019) Srinivasan, Ramya, Chander, Ajay, & Pezeshkpour, Pouya. 2019. Generating User-friendly Explanations for Loan Denials using GANs. arXiv preprint arXiv:1906.10244.
- Vepakomma et al. (2018) Vepakomma, Praneeth, Gupta, Otkrist, Swedish, Tristan, & Raskar, Ramesh. 2018. Split learning for health: Distributed deep learning without sharing raw patient data. arXiv preprint arXiv:1812.00564.
- Vepakomma et al. (2019) Vepakomma, Praneeth, Gupta, Otkrist, Dubey, Abhimanyu, & Raskar, Ramesh. 2019. Reducing leakage in distributed deep learning for sensitive health data. arXiv preprint arXiv:1812.00564.
- Wang et al. (2018) Wang, Tianlu, Zhao, Jieyu, Yatskar, Mark, Chang, Kai-Wei, & Ordonez, Vicente. 2018. Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations. arXiv preprint arXiv:1811.08489.
- Wu et al. (2018) Wu, Yifan, Yang, Fan, & Ling, Haibin. 2018. Privacy-protective-gan for face de-identification. arXiv preprint arXiv:1806.08906.
- Yao & Huang (2017) Yao, Sirui, & Huang, Bert. 2017. Beyond parity: Fairness objectives for collaborative filtering. Pages 2921–2930 of: Advances in Neural Information Processing Systems.
- Zhao et al. (2017) Zhao, Jieyu, Wang, Tianlu, Yatskar, Mark, Ordonez, Vicente, & Chang, Kai-Wei. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457.