On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces
Recent studies have found that deep learning systems are vulnerable to adversarial examples; e.g., visually imperceptible perturbations can easily be crafted to cause misclassification. The robustness of neural networks has been studied extensively in the context of adversary detection, which hinges on finding a metric that exhibits strong discriminative power between natural and adversarial examples. In this paper, we propose to characterize adversarial subspaces through the lens of mutual information (MI) approximated by conditional generation methods. We use MI as an information-theoretic metric to strengthen existing defenses and improve the performance of adversary detection. Experimental results on the MagNet defense demonstrate that our proposed MI detector can strengthen its robustness against powerful adversarial attacks.
|Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen and Chia-Mu Yu|
|National Chung Hsing University, Taiwan|
Index Terms— Adversarial example, conditional generation, detection, mutual information
1 Introduction

In recent years, deep learning has demonstrated impressive performance on many tasks in machine learning, such as speech recognition and image classification. However, recent research has shown that well-trained deep neural networks (DNNs) are rather vulnerable to adversarial examples [1, 2, 3, 4]. There have been many efforts on defending against adversarial examples. In order to enhance the robustness of DNNs against adversarial perturbations, several studies aim to characterize adversarial subspaces and develop countermeasures. For example, Ma et al. [5] characterize the dimensional properties of adversarial regions through the use of local intrinsic dimensionality (LID). Nonetheless, very recently Lu et al. [6] demonstrate the limitation of LID. Generally speaking, the essence of adversary detection lies in finding a metric that exhibits strong discriminative power between natural and adversarial examples. More importantly, when mounting the detector to identify adversarial inputs, one should ensure minimal performance degradation on the natural (clean) examples, which suggests a potential trade-off between test accuracy and adversary detectability.
In this paper, we propose to characterize adversarial subspaces by using mutual information (MI). Our approach is novel in the sense that the MI is approximated by a well-trained conditional generator, owing to the recent advances in generative adversarial networks (GANs) [7]. We demonstrate the effectiveness of our approach on MagNet [8], a recent defense method based on data reformation and adversary detection. Experimental results show that when integrating MagNet with our MI detector, the detection capability can be significantly improved against powerful adversarial attacks.
2 Background

Adversarial examples can be categorized into targeted and untargeted attacks based on attack objectives. The former steers the prediction of the targeted DNN model toward a specific (attacker-chosen) output, while the latter simply causes the targeted DNN model to make an incorrect prediction.
2.1 Carlini and Wagner’s Attack (C&W attack)
Carlini and Wagner [9] propose an optimization-based framework for targeted and untargeted attacks that can generate adversarial examples with a small perturbation. They design an $\ell_2$ norm regularized loss function in addition to the model prediction loss defined by the logit layer representations in DNNs. The C&W attack can successfully bypass undefended and several defended DNNs by simply tuning the confidence parameter $\kappa$ in the optimization process of generating adversarial examples [10]. It can also be adapted to generate adversarial examples based on the $\ell_0$ and $\ell_\infty$ distortion metrics.
C&W attack finds an effective adversarial perturbation by solving the following optimization problem:

$$\min_{\delta \in \mathbb{R}^p} \; c \cdot f(x_0 + \delta) + \|\delta\|_2^2 \quad \text{subject to} \quad x_0 + \delta \in [0,1]^p,$$

where $p$ is the data dimension and $[0,1]^p$ denotes the space of valid data examples. For a natural example $x_0$, the C&W attack aims to find a small perturbation $\delta$ (evaluated by $\|\delta\|_2^2$) in order to preserve the visual similarity to $x_0$ while simultaneously deceiving the classifier (evaluated by the term $f$). The hyperparameter $c > 0$ is used to balance these two losses.
Let $x = x_0 + \delta$ denote the perturbed example of $x_0$. The loss $f$ is designed in a way that $f(x) \le 0$ if and only if the classifier assigns $x$ to a wrong class. In particular, for untargeted attacks $f$ takes a hinge loss form defined as

$$f(x) = \max\left\{ [\text{Logit}(x)]_{t_0} - \max_{j \neq t_0} [\text{Logit}(x)]_j, \; -\kappa \right\},$$

where $\text{Logit}(x)$ is the hidden representation of $x$ in the pre-softmax layer (also known as the logits) and $t_0$ is the ground-truth label of $x_0$. A similar loss can be defined for targeted attacks. The parameter $\kappa \ge 0$ is a hyperparameter governing the model confidence of $x$. Setting a higher $\kappa$ gives a stronger adversarial example in terms of classification confidence.
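As a concrete illustration, the untargeted hinge loss can be evaluated directly on a logit vector. The sketch below is ours (function and argument names are not from the paper):

```python
import numpy as np

def cw_untargeted_loss(logits, true_label, kappa=0.0):
    """Untargeted C&W hinge loss: max{[Logit(x)]_t0 - max_{j!=t0}[Logit(x)]_j, -kappa}.
    Positive while the classifier still prefers the true class; saturates at
    -kappa once some wrong class leads by a margin of at least kappa."""
    logits = np.asarray(logits, dtype=float)
    true_logit = logits[true_label]
    # largest logit among the classes other than the ground-truth class
    best_other = np.delete(logits, true_label).max()
    return max(true_logit - best_other, -kappa)
```

For instance, with logits (3.0, 1.0, 0.5) and true label 0 the loss is 2.0, so the input is not yet adversarial; raising $\kappa$ forces the attack to keep perturbing until a wrong class wins by a larger margin.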
2.2 EAD: Elastic-Net Attacks to Deep Neural Networks
Chen et al. [11] propose the EAD attack, which has two decision rules: one is the elastic-net (EN) rule and the other is the $\ell_1$ rule. In the process of attack optimization, the EN decision rule selects the minimally-distorted adversarial example based on the elastic-net loss over all successful adversarial examples in the attack iterations. On the other hand, the $\ell_1$ decision rule selects the final adversarial example based on the minimum $\ell_1$ distortion among successful adversarial examples. In essence, the EAD attack finds an effective adversarial example by solving the following optimization problem:

$$\min_{\delta \in \mathbb{R}^p} \; c \cdot f(x_0 + \delta) + \beta \|\delta\|_1 + \|\delta\|_2^2 \quad \text{subject to} \quad x_0 + \delta \in [0,1]^p.$$
The parameters $c$ and $\beta$ are the regularization parameters for $f$ and the $\ell_1$ distortion, respectively. Notably, the attack formulation of the C&W attack can be viewed as a special case of the EAD attack when the $\ell_1$ penalty coefficient $\beta = 0$, reducing it to a pure $\ell_2$ distortion based attack. In many cases, the EAD attack can generate more effective adversarial examples than the C&W attack by considering the additional $\ell_1$ regularization [11, 12, 6, 13].
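To make the role of the two penalties concrete, the elastic-net objective can be sketched as follows (a minimal NumPy sketch under our own naming; `f_val` stands for the attack loss $f(x_0 + \delta)$ evaluated elsewhere):

```python
import numpy as np

def ead_objective(delta, f_val, c=1.0, beta=1e-2):
    """Elastic-net attack objective: c*f(x0+delta) + beta*||delta||_1 + ||delta||_2^2.
    Setting beta = 0 recovers the pure L2 (C&W-style) objective."""
    delta = np.asarray(delta, dtype=float)
    l1 = np.abs(delta).sum()          # L1 term: promotes sparse perturbations
    l2_sq = float(delta @ delta)      # squared L2 term: keeps the perturbation small
    return c * f_val + beta * l1 + l2_sq
```

The `beta = 0` special case makes the reduction to the C&W formulation explicit: only the attack loss and the squared $\ell_2$ distortion remain.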
2.3 MagNet: Defending Adversarial Examples with Detector and Reformer
Recently, strategies such as feature squeezing [14], manifold projection [8], gradient and representation masking [15, 16], and adversarial training [17] have been proposed as potential defenses against adversarial examples.
In particular, MagNet [8], which is composed of an adversary detector and a data reformer, can not only filter out adversarial examples but also rectify them via manifold projection learned by an auto-encoder. The detector compares the statistical difference between an input example and its reconstruction via an auto-encoder. The reformer, trained as an auto-encoder, reforms the input example and brings it close to the data manifold of the training data. MagNet reports robust defense performance against the C&W attack under different confidence levels in the oblivious attack setting, where an attacker knows the model parameters but is unaware of the MagNet defense. In addition, MagNet can also defend against many attacks such as DeepFool [18], the fast gradient sign method (FGSM) [2], and iterative FGSM [19]. However, Lu et al. [13] demonstrate that despite its success in defending against $\ell_2$ distortion based adversarial examples on MNIST and CIFAR-10, in the same oblivious attack setting MagNet is less effective against $\ell_1$ distortion based adversarial examples crafted by the EAD attack.
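The detector half of this idea can be sketched as a reconstruction-error test (a simplified sketch with our own naming; the actual MagNet detector additionally compares probability divergences between the input and its reconstruction):

```python
import numpy as np

def reconstruction_error(x, autoencoder):
    """MagNet-style detection score: how poorly an auto-encoder trained on
    clean data reconstructs the input. Inputs off the data manifold score high."""
    x = np.asarray(x, dtype=float)
    x_rec = autoencoder(x)                  # AE(x): projection toward the data manifold
    return float(np.abs(x - x_rec).mean())  # mean absolute reconstruction error

def flag_adversarial(x, autoencoder, threshold):
    """Reject the input when its reconstruction error exceeds the threshold."""
    return reconstruction_error(x, autoencoder) > threshold
```

Even a toy stand-in auto-encoder that clips pixels into a typical intensity range illustrates the behavior: in-range inputs reconstruct exactly and pass, while out-of-range inputs reconstruct poorly and are flagged.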
2.4 Conditional Generation
Generative adversarial networks (GANs) [7] have recently been proposed to generate (fake) data examples that are distributionally similar to real ones from a low-dimensional latent code space in an unsupervised manner. Conditional generation, i.e., generating data examples with specific properties, has also been made possible by incorporating side information such as class labels into GAN training. For example, the $\alpha$-GAN [20] combines a variational auto-encoder (VAE) and a GAN for generation. The use of a VAE enables inference of the latent variable from training data, in addition to the realistic generative power of the GAN.
For the purpose of adversary detection via mutual information approximation based on conditional generation, in this paper we adopt the $\alpha$-GAN framework to train a conditional generator using an auto-encoder + GAN architecture, where the class condition is appended to the generation process, as illustrated in Figure 1. Figure 2 shows some generated hand-written digits of our conditional generator trained on MNIST, given an input image and different class (digit) conditions.
3 Proposed Method
Characterizing adversarial subspaces aids in understanding the behaviors of adversarial examples and potentially gaining discriminative power against them [5]. In this paper, we use an auto-encoder to learn the low-dimensional data manifold via reconstruction, and propose to use it for conditional generation and for approximating the mutual information (MI) as a discriminative metric for detecting adversarial inputs. By treating an input example as an instance drawn from an oracle data generation process, our main idea is rooted in the hypothesis that the MI of a natural input before and after projection onto the (natural) data manifold should be maximally preserved, while the MI of an adversarial input should be relatively small due to its deviation from the data manifold. Therefore, the MI can be used as a detector to distinguish adversarial inputs.
For any two discrete random variables $X$ and $Y$, their MI is defined as

$$I(X;Y) = H(X) - H(X|Y), \qquad (1)$$

where $H(X)$ is the entropy of $X$, which is defined as

$$H(X) = -\sum_{x} p(x) \log p(x),$$

and $p(x)$ is the probability of $X = x$. The conditional entropy $H(X|Y)$ is defined as

$$H(X|Y) = -\sum_{x,y} p(x,y) \log p(x|y),$$

where $p(x|y)$ is the probability of $X = x$ conditioned on $Y = y$.
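For discrete distributions these quantities are directly computable. The following NumPy sketch (naming is ours) evaluates $I(X;Y) = H(X) - H(X|Y)$ from a joint probability table:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log p(x), in nats."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # 0 log 0 is taken as 0
    return float(-(nz * np.log(nz)).sum())

def mutual_information(joint):
    """I(X;Y) = H(X) - H(X|Y) computed from a joint pmf table joint[x, y]."""
    joint = np.asarray(joint, dtype=float)
    p_x = joint.sum(axis=1)
    p_y = joint.sum(axis=0)
    # H(X|Y) = -sum_{x,y} p(x,y) log p(x|y), with p(x|y) = p(x,y) / p(y)
    cond = joint / np.where(p_y > 0, p_y, 1.0)
    mask = joint > 0
    h_x_given_y = float(-(joint[mask] * np.log(cond[mask])).sum())
    return entropy(p_x) - h_x_given_y
```

An independent pair has zero MI, while a perfectly dependent pair attains $I(X;Y) = H(X)$, which matches the intuition behind using MI to measure how much information survives the manifold projection.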
Connecting the dots between MI and our adversary detection proposal, let $F$ denote the (DNN) classifier that takes a $p$-dimensional vector input and outputs the prediction results (i.e., probability distributions) over $K$ classes, and let $AE$ denote the auto-encoder trained for reconstruction using training data. For any data input $x$ (either natural or adversarial), we propose to use the MI of $F(x)$ and $F(AE(x))$ as the metric for adversary detection. Specifically, we consider the setting where $X = F(x)$ and $Y = F(AE(x))$ in (2), and we use the Jaccard distance (a properly normalized index between $0$ and $1$)
$$D(X,Y) = 1 - \frac{I(X;Y)}{H(X,Y)} \qquad (2)$$

for detection, where $H(X,Y) = H(X) + H(Y) - I(X;Y)$ is the joint entropy of $X$ and $Y$. Here the Jaccard distance measures the information-theoretic difference between $X$ and $Y$. In the adversary detection setting, a large Jaccard distance means that $X$ and $Y$ are more distinct in distribution and thus indicates that $x$ and $AE(x)$ share less similarity. Therefore, we declare $x$ an adversarial input if $D(X,Y) > t$, where $t$ is a pre-specified threshold that balances adversary detectability against the rejection rate on natural inputs.
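Given the entropies and the MI, the detection rule reduces to a few lines (a sketch with our own naming, expanding the joint entropy as $H(X) + H(Y) - I(X;Y)$):

```python
def jaccard_distance(h_x, h_y, mi):
    """D(X,Y) = 1 - I(X;Y) / H(X,Y), a normalized distance in [0, 1],
    using the identity H(X,Y) = H(X) + H(Y) - I(X;Y)."""
    h_xy = h_x + h_y - mi
    return 1.0 - mi / h_xy

def is_adversarial(h_x, h_y, mi, threshold):
    """Flag x as adversarial when the distance between F(x) and F(AE(x))
    exceeds the pre-specified threshold t."""
    return jaccard_distance(h_x, h_y, mi) > threshold
```

When $F(x)$ and $F(AE(x))$ carry the same information ($I = H(X) = H(Y)$) the distance is 0, and as the MI vanishes the distance approaches 1, matching the manifold-projection hypothesis above.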
We note that while the entropy $H(X)$ can easily be computed, the conditional entropy $H(X|Y)$ is difficult to evaluate when $F$ is a DNN classifier. Here we propose to use the conditional generator introduced in Section 2.4 to approximate the conditional entropy. In particular, the conditional probability $p(x|y)$ is evaluated by the prediction probability of the generated image being classified as class $x$, given the class condition $y$ and the latent code of $x$ as the input to the conditional generator.
Table 1: Classification accuracy (%) of adversarial examples on MNIST under different confidence levels $\kappa$, for MagNet and MI-strengthened MagNet.

| $\kappa$ | MagNet: C&W | MagNet: EAD (EN) | MagNet: EAD ($\ell_1$) | MagNet+MI: C&W | MagNet+MI: EAD (EN) | MagNet+MI: EAD ($\ell_1$) |
|---|---|---|---|---|---|---|
| 5 | 94.6 | 33.5 | 26.6 | 95.8 () | 39.4 () | 37.4 () |
| 10 | 91.5 | 17.9 | 11.7 | 97.8 () | 46.9 () | 44.0 () |
| 15 | 90.0 | 16.2 | 9.7 | 98.0 () | 47.4 () | 41.8 () |
| 20 | 91.4 | 19.6 | 12.1 | 98.2 () | 45.1 () | 36.8 () |
| 25 | 93.9 | 26.1 | 16.8 | 98.4 () | 44.3 () | 35.6 () |
| 30 | 96.2 | 34.5 | 22.5 | 98.5 () | 44.3 () | 32.9 () |
| 35 | 97.7 | 41.1 | 28.6 | 99.0 () | 47.3 () | 35.4 () |
| 40 | 98.5 | 47.8 | 33.1 | 98.9 () | 52.0 () | 37.9 () |
Table 2: Classification accuracy (%) of adversarial examples on CIFAR-10 under different confidence levels $\kappa$, for MagNet and MI-strengthened MagNet.

| $\kappa$ | MagNet: C&W | MagNet: EAD (EN) | MagNet: EAD ($\ell_1$) | MagNet+MI: C&W | MagNet+MI: EAD (EN) | MagNet+MI: EAD ($\ell_1$) |
|---|---|---|---|---|---|---|
| 10 | 50.3 | 26.2 | 26.4 | 51.3 () | 28.4 () | 29.4 () |
| 20 | 48.0 | 26.8 | 26.8 | 51.8 () | 29.4 () | 29.1 () |
| 30 | 62.9 | 37.1 | 38.4 | 64.0 () | 38.6 () | 39.4 () |
| 40 | 72.3 | 48.4 | 45.3 | 73.0 () | 49.3 () | 46.7 () |
| 50 | 81.4 | 61.0 | 60.0 | 81.8 () | 61.2 () | 60.3 () |
4 Performance Evaluation

In this section, we applied the proposed MI detector to the MagNet defense against untargeted C&W and EAD attacks on the MNIST and CIFAR-10 datasets under the oblivious attack setting. For each dataset, we randomly selected 1000 correctly classified images from the test sets to generate adversarial examples with different confidence levels. Under this attack setting, it has been shown in [13] that while MagNet is resilient to the C&W attack, it is more vulnerable to the EAD attack.
4.1 Experiment Setup and Parameter Setting
We followed the oblivious attack setting used in MagNet (https://github.com/Trevillie/MagNet), where the adversarial examples are generated from the same DNN but the adversary is unaware of the deployed defense. The image classifiers on MNIST and CIFAR-10 are trained with the same DNN architecture and training parameters as in [8]. For defending against adversarial inputs, we report the classification accuracy, measured by the percentage of adversarial examples either detected by the detectors or correctly classified after passing through the reformer. Higher classification accuracy means better defense performance.
Similar to MagNet, in each dataset a validation set of 1000 images is used to determine the MI detector threshold $t$, which is set such that the false-positive rate is 0.5%. The $\alpha$-GAN framework [20] was used to train our conditional generator consisting of an auto-encoder + GAN architecture.
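The threshold selection amounts to a quantile computation on the validation scores of clean data (a sketch with hypothetical naming; the 0.5% false-positive rate is the value used in the paper):

```python
import numpy as np

def threshold_at_fpr(natural_scores, fpr=0.005):
    """Choose the detector threshold t so that at most a fraction `fpr` of
    clean validation examples score above t (i.e., are falsely rejected)."""
    scores = np.asarray(natural_scores, dtype=float)
    # keep the (1 - fpr) quantile of detection scores on clean data
    return float(np.quantile(scores, 1.0 - fpr))
```

Any detection score (the Jaccard distance here, or a reconstruction error for the MagNet detectors) can be thresholded this way, which makes the trade-off between detectability and clean-data rejection explicit.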
We report the defense results of untargeted attacks since they are generally more difficult to detect than targeted attacks. For the C&W attack ($\ell_2$ version, https://github.com/carlini/nn_robust_attacks), we used the same parameters as in [9] to generate adversarial examples. For the EAD attack (https://github.com/ysharma1126/EAD-Attack), we used the same parameters as in [11] and generated adversarial examples using both the elastic-net (EN) and $\ell_1$ decision rules. All experiments were conducted using an Intel Xeon E5-2620v4 CPU, 125 GB RAM and an NVIDIA TITAN Xp GPU with 12 GB RAM.
4.2 Performance Evaluation on MNIST
Effect on natural examples. Without MagNet, the original test accuracy is 99.42%. With MagNet, it decreases to 99.13%; with MI-strengthened MagNet, the test accuracy still remains at 98.63%. This slight reduction in test accuracy is traded for enhanced adversary detectability.
Effect on adversarial examples. Table 1 shows the classification accuracy of adversarial examples on MNIST. While MagNet is robust to the C&W attack in the oblivious attack setting, it is shown to be less effective against the EAD attack [13], especially at medium confidence levels. On the other hand, our MI-strengthened MagNet can improve the detection performance by up to 31.2%, highlighting the utility of approximating MI via conditional generation for characterizing adversarial subspaces.
4.3 Performance Evaluation on CIFAR-10
Effect on natural examples. Without MagNet, the test accuracy is 86.91%. With MagNet, it is reduced to 83.33%; with MI-strengthened MagNet, it becomes 83.18%.
Effect on adversarial examples. Compared with MNIST, on CIFAR-10 the MI-strengthened MagNet provides less improvement (up to 3.8%) in adversary detection. One possible explanation is that for most confidence levels the original MagNet already performs better on CIFAR-10 than on MNIST. We also conjecture that the performance of the MI detector is closely associated with the capability of the conditional generator, as we observe that the quality of the generated images on MNIST is much better than on CIFAR-10.
5 Conclusion

In this paper, we propose to utilize the mutual information (MI) and the data manifold of deep neural networks as a novel information-theoretic metric to characterize and distinguish adversarial inputs, where the MI is approximated by a well-trained conditional generator. The experimental results show that our MI detector can effectively strengthen the detection capability of the MagNet defense while causing negligible effect on test accuracy. Our future work includes further exploring the utility of MI and conditional generation for adversarial robustness and extending our detector to other defenses.
References

-  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, “Intriguing properties of neural networks,” ICLR, arXiv preprint arXiv:1312.6199, 2014.
-  Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy, “Explaining and harnessing adversarial examples,” ICLR, arXiv preprint arXiv:1412.6572, 2015.
-  Battista Biggio and Fabio Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” arXiv preprint arXiv:1712.03141, 2017.
-  Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao, “Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models,” ECCV, arXiv preprint arXiv:1808.01688, 2018.
-  Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Michael E Houle, Grant Schoenebeck, Dawn Song, and James Bailey, “Characterizing adversarial subspaces using local intrinsic dimensionality,” ICLR, arXiv preprint arXiv:1801.02613, 2018.
-  Pei-Hsuan Lu, Pin-Yu Chen, and Chia-Mu Yu, “On the limitation of local intrinsic dimensionality for characterizing the subspaces of adversarial examples,” ICLR Workshop, arXiv preprint arXiv:1803.09638, 2018.
-  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
-  Dongyu Meng and Hao Chen, “Magnet: a two-pronged defense against adversarial examples,” in ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 135–147.
-  Nicholas Carlini and David Wagner, “Towards evaluating the robustness of neural networks,” in IEEE Symposium on Security and Privacy, 2017, pp. 39–57.
-  Nicholas Carlini and David Wagner, “Adversarial examples are not easily detected: Bypassing ten detection methods,” in ACM Workshop on Artificial Intelligence and Security, 2017, pp. 3–14.
-  Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh, “EAD: elastic-net attacks to deep neural networks via adversarial examples,” AAAI, arXiv preprint arXiv:1709.04114, 2018.
-  Yash Sharma and Pin-Yu Chen, “Attacking the Madry defense model with $\ell_1$-based adversarial examples,” ICLR Workshop, arXiv preprint arXiv:1710.10733, 2018.
-  Pei-Hsuan Lu, Pin-Yu Chen, Kang-Cheng Chen, and Chia-Mu Yu, “On the limitation of MagNet defense against $\ell_1$-based adversarial examples,” arXiv preprint arXiv:1805.00310, 2018.
-  Weilin Xu, David Evans, and Yanjun Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” arXiv preprint arXiv:1704.01155, 2017.
-  Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in IEEE Symposium on Security and Privacy, 2016, pp. 582–597.
-  John Bradshaw, Alexander G de G Matthews, and Zoubin Ghahramani, “Adversarial examples, uncertainty, and transfer testing robustness in gaussian process hybrid deep networks,” arXiv preprint arXiv:1707.02476, 2017.
-  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, “Towards deep learning models resistant to adversarial attacks,” ICLR, arXiv preprint arXiv:1706.06083, 2018.
-  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574–2582.
-  Alexey Kurakin, Ian Goodfellow, and Samy Bengio, “Adversarial machine learning at scale,” ICLR, arXiv preprint arXiv:1611.01236, 2017.
-  Sebastian Lutz, Konstantinos Amplianitis, and Aljosa Smolic, “AlphaGAN: Generative adversarial networks for natural image matting,” BMVC, arXiv preprint arXiv:1807.10088, 2018.