AdvGAN++ : Harnessing latent layers for adversary generation

Abstract

Adversarial examples are fabricated examples, indistinguishable from the original images, that mislead neural networks and drastically lower their performance. The recently proposed AdvGAN, a GAN-based approach, takes the input image as a prior for generating adversaries to target a model. In this work, we show how latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack success rates than AdvGAN while generating perceptually realistic images on the MNIST and CIFAR-10 datasets.

1 Introduction and Related Work

Deep Neural Networks (DNNs) have become a common ingredient for solving tasks such as classification, object recognition, segmentation, reinforcement learning and speech recognition. However, recent works [18, 4, 15, 13, 19, 6] have shown that DNNs can be easily fooled using carefully fabricated examples that are indistinguishable from the original input. Such fabricated examples, known as adversarial examples, mislead neural networks by drastically changing their latent features, thereby affecting their output.

Adversarial attacks are broadly classified into white-box and black-box attacks. White-box attacks such as FGSM [3], DeepFool [12] and Carlini-Wagner [1] have access to the full target model. In black-box attacks, by contrast, the attacker does not have access to the structure or parameters of the target model and only observes the labels assigned to chosen input images.

Gradient-based attack methods like the Fast Gradient Sign Method (FGSM) obtain an optimal max-norm constrained perturbation of

    η = ε · sign(∇_x J(θ, x, y))    (1)

where J is the cost function and the gradient is computed with respect to the input example.
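A minimal PyTorch sketch of the FGSM step above; the model is assumed to output logits, and the ε value and the [0, 1] pixel clamp are illustrative assumptions rather than part of the original formulation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """One FGSM step: eta = epsilon * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)        # cost function J (model assumed to output logits)
    loss.backward()
    eta = epsilon * x.grad.sign()              # max-norm constrained perturbation
    return (x + eta).clamp(0.0, 1.0).detach()  # illustrative: keep pixels in [0, 1]
```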

Optimization-based methods like Carlini-Wagner [1] optimize the adversarial perturbation subject to several constraints, targeting the L0, L2 and L∞ distance metrics. The optimization objective makes the approach slow, since it can only focus on one perturbation instance at a time.

In contrast, AdvGAN [17] uses a GAN [2] with an encoder-decoder based generator to produce perceptually realistic adversarial examples that lie close to the original distribution. The generator network produces an adversarial perturbation G(x) when an original image instance x is provided as input. The discriminator tries to distinguish the adversarial image x + G(x) from the original instance x. Apart from the standard GAN loss, AdvGAN uses a hinge loss to bound the magnitude of the maximum perturbation and an adversarial loss to guide the generation towards misclassification. Although AdvGAN is able to generate realistic examples, it fails to exploit latent features as priors, which have recently been shown to be more susceptible to adversarial perturbations [14].
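For contrast with AdvGAN++ below, here is a hedged sketch of the AdvGAN objective as described in this paragraph; the module names, the probability-output discriminator, the untargeted adversarial term and the bound c are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def advgan_loss_terms(G, D, target_model, x, y, c=0.3):
    """Sketch of AdvGAN's three loss terms on one batch (untargeted setting)."""
    perturbation = G(x)                               # generator maps the image to a perturbation
    x_adv = torch.clamp(x + perturbation, 0.0, 1.0)   # adversarial image x + G(x)

    # GAN term: D tries to tell x + G(x) apart from x (D assumed to output a probability)
    gan_term = torch.log(D(x) + 1e-8).mean() + torch.log(1.0 - D(x_adv) + 1e-8).mean()

    # adversarial term: push the target model away from the true class y
    adv_term = -F.cross_entropy(target_model(x_adv), y)

    # hinge term: softly bound the L2 norm of the perturbation by c
    hinge_term = torch.clamp(perturbation.flatten(1).norm(p=2, dim=1) - c, min=0.0).mean()

    return gan_term, adv_term, hinge_term
```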

Our Contributions in this work are:

  • We show that latent features serve as a better prior for adversary generation than the whole input image for untargeted attacks, thereby utilizing the observation from [14], and at the same time eliminating the need for an encoder-decoder based generator, thus reducing training/inference overhead.

  • Since GANs are already known to work well in a conditioned setting [7, 11], we show that we can directly make the generator learn the transition from the latent feature space to the adversarial image rather than from the whole input image.

Finally, through quantitative and qualitative evaluation, we show that our examples look perceptually very similar to the real ones and achieve higher attack success rates than AdvGAN.

2 Methodology

2.1 Problem definition

Given a target model M that accurately maps an image x, sampled from a distribution p_data, to its corresponding label y, we train a generator G to generate an adversary x_adv of image x using its feature map f(x) (extracted from a feature extractor f) as a prior. Mathematically:

    x_adv = G(f(x), z)    (2)

such that

    M(x_adv) ≠ y    (3)
    ||x_adv − x|| ≤ ε    (4)

where f represents the feature extractor, z is a noise vector and ε is the maximum magnitude of perturbation allowed.
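As a sanity check, the constraints in Eqs. (3)-(4) can be verified per example; this small PyTorch sketch assumes the perturbation budget is measured in the L∞ norm and that the model outputs logits.

```python
import torch

@torch.no_grad()
def is_valid_adversary(target_model, x, x_adv, y, epsilon):
    """Eq. (3): misclassification; Eq. (4): perturbation within the budget (L-inf assumed)."""
    fooled = target_model(x_adv).argmax(dim=1) != y
    within_budget = (x_adv - x).abs().flatten(1).max(dim=1).values <= epsilon
    return fooled & within_budget
```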

2.2 Harnessing latent features for adversary generation

We now propose our attack, AdvGAN++, which takes the latent feature map of the original image as a prior for adversary generation. Figure 1 shows the architecture of the proposed network. It contains the target model M, a feature extractor f, a generator network G and a discriminator network D. The generator receives the feature f(x) of an image x and a noise vector z (as a concatenated vector) and generates an adversary x_adv corresponding to x. The discriminator distinguishes the distribution of the generator output from the actual data distribution p_data. In order to fool the target model M, the generator minimizes M_y(x_adv), the softmax probability that the adversary belongs to the true class y. To bound the magnitude of the perturbation, we also minimize the L2 loss between the adversary x_adv and x. The final loss function is expressed as:

    L(G, D) = L_GAN + α·L_adv + β·L_pert    (5)

where

    L_GAN = E_x[log D(x)] + E_{x,z}[log(1 − D(G(f(x), z)))]    (6)
    L_adv = E_{x,z}[M_y(G(f(x), z))]    (7)
    L_pert = E_{x,z}[||x − G(f(x), z)||_2]    (8)

Here α and β are hyper-parameters that control the weightage of each objective. The feature f(x) is extracted from one of the intermediate convolutional layers of the target model M. By solving the min-max game min_G max_D L(G, D) we obtain the optimal parameters for G and D. The training procedure thus ensures that we learn to generate adversarial images, close to the input distribution, that harness the susceptibility of latent features to adversarial perturbations. Algorithm 1 summarizes the training procedure of AdvGAN++.

for number of training iterations do
       Sample a mini-batch of m noise samples {z^(1), …, z^(m)} from the noise prior p_z(z);
       Sample a mini-batch of m examples {x^(1), …, x^(m)} from the data-generating distribution p_data(x);
       Extract the latent features {f(x^(1)), …, f(x^(m))};
       Update the discriminator by ascending its stochastic gradient:
              ∇_θD (1/m) Σ_{i=1}^{m} [ log D(x^(i)) + log(1 − D(G(f(x^(i)), z^(i)))) ];
       Sample a mini-batch of m noise samples {z^(1), …, z^(m)} from the noise prior p_z(z);
       Update the generator by descending its stochastic gradient:
              ∇_θG (1/m) Σ_{i=1}^{m} [ log(1 − D(G(f(x^(i)), z^(i)))) + α·M_y(G(f(x^(i)), z^(i))) + β·||x^(i) − G(f(x^(i)), z^(i))||_2 ];
end for
Algorithm 1: AdvGAN++ training
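A hedged PyTorch sketch of the alternating updates in Algorithm 1, with the loss terms of Eqs. (6)-(8) written inline; the module names, the flattened-feature concatenation, the BCE formulation of the GAN loss, and the noise dimensionality are illustrative assumptions rather than the authors' exact code.

```python
import torch

def train_advganpp(G, D, target_model, feature_extractor, loader,
                   opt_G, opt_D, alpha=1.0, beta=1.0, nz=100, device="cpu"):
    """One epoch of alternating D/G updates following Algorithm 1 (illustrative sketch)."""
    bce = torch.nn.BCELoss()  # assumes D ends in a sigmoid
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        z = torch.randn(x.size(0), nz, device=device)     # noise prior p_z(z)
        feats = feature_extractor(x).flatten(1).detach()  # latent features f(x)

        # Discriminator step: ascend the GAN objective of Eq. (6)
        x_adv = G(torch.cat([feats, z], dim=1)).detach()
        d_real, d_fake = D(x), D(x_adv)
        d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # Generator step: descend GAN + adversarial (Eq. 7) + perturbation (Eq. 8) losses
        x_adv = G(torch.cat([feats, z], dim=1))
        d_fake = D(x_adv)
        g_gan = bce(d_fake, torch.ones_like(d_fake))                   # fool the discriminator
        l_adv = torch.softmax(target_model(x_adv), dim=1) \
                     .gather(1, y.unsqueeze(1)).mean()                 # prob. of the true class y
        l_pert = (x_adv - x).flatten(1).norm(p=2, dim=1).mean()        # L2 distance to x
        g_loss = g_gan + alpha * l_adv + beta * l_pert
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```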
Data      Model              Defense              AdvGAN   AdvGAN++
MNIST     LeNet C            FGSM Adv. training   18.7     20.02
MNIST     LeNet C            Iter. FGSM training  13.5     27.31
MNIST     LeNet C            Ensemble training    12.6     28.01
CIFAR-10  Resnet-32          FGSM Adv. training   16.03    29.36
CIFAR-10  Resnet-32          Iter. FGSM training  14.32    32.34
CIFAR-10  Resnet-32          Ensemble training    29.47    34.74
CIFAR-10  Wide-Resnet-34-10  FGSM Adv. training   14.26    26.12
CIFAR-10  Wide-Resnet-34-10  Iter. FGSM training  13.94    43.2
CIFAR-10  Wide-Resnet-34-10  Ensemble training    20.75    23.54
Table 1: Attack success rate (%) of adversarial examples generated by AdvGAN and AdvGAN++ when the target model is under defense.
Figure 1: AdvGAN++ architecture.

3 Experiments

In this section we evaluate the performance of AdvGAN++, both quantitatively and qualitatively. We start by describing the datasets and model architectures, followed by implementation details and results.

Datasets and Model Architectures: We perform experiments on the MNIST [10] and CIFAR-10 [8] datasets, training AdvGAN++ on the training set and evaluating on the test set. For MNIST [10], we use LeNet architecture C from [16] as our target model. For CIFAR-10 [8], we show results on Resnet-32 [5] and Wide-Resnet-34-10 [20].

Figure 2: Adversarial images generated by AdvGAN++ for the MNIST and CIFAR-10 datasets. Row 1: original images; Row 2: generated adversarial examples.

3.1 Implementation details

We use an encoder-based architecture for the discriminator and a decoder-based architecture for the generator. For the feature extractor f we use the last convolutional layer of the target model M. The generator and discriminator are optimized with Adam using a learning rate of 0.01, β1 = 0.5 and β2 = 0.99. We sample the noise vector z from a normal distribution and use label smoothing to stabilize the training procedure.
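A small PyTorch sketch of this setup: pulling features from the target model's last convolutional layer with a forward hook and configuring Adam with the stated hyper-parameters. The layer name, noise size and smoothing value are illustrative placeholders.

```python
import torch

_features = {}

def _save_features(module, inputs, output):
    _features["latent"] = output   # captured activation of the hooked layer

def make_feature_extractor(target_model, layer_name):
    """Return f(x): the activation of `layer_name` (e.g. the last conv layer) for a batch x."""
    dict(target_model.named_modules())[layer_name].register_forward_hook(_save_features)
    def extract(x):
        with torch.no_grad():
            target_model(x)
        return _features["latent"]
    return extract

# Optimizers as described: Adam with lr = 0.01, beta1 = 0.5, beta2 = 0.99
# opt_G = torch.optim.Adam(G.parameters(), lr=0.01, betas=(0.5, 0.99))
# opt_D = torch.optim.Adam(D.parameters(), lr=0.01, betas=(0.5, 0.99))
# z = torch.randn(batch_size, nz)                  # noise from a normal distribution
# real_target = torch.full((batch_size, 1), 0.9)   # one-sided label smoothing (illustrative)
```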

3.2 Results

Attack under no defense. We compare the attack success rate of examples generated by AdvGAN and AdvGAN++ on target models without any defense strategy. The results in Table 2 show that, with much less training/inference overhead, AdvGAN++ performs better than AdvGAN.

Data      Target Model       AdvGAN   AdvGAN++
MNIST     LeNet C            97.9     98.4
CIFAR-10  Resnet-32          94.7     97.2
CIFAR-10  Wide-Resnet-34-10  99.3     99.92
Table 2: Attack success rate (%) of AdvGAN and AdvGAN++ under no defense.

Attack under defense. We compare the attack success rate of AdvGAN++ with AdvGAN when the target model is trained using various defense mechanisms such as FGSM adversarial training [3], iterative FGSM [9] and ensemble adversarial training [16]. We first generate adversarial examples using the original, undefended model as the target and then evaluate the attack success rate of these examples on the same model trained with one of the aforementioned defense strategies. Table 1 shows that AdvGAN++ outperforms AdvGAN under all the defenses considered.
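A minimal sketch of the evaluation protocol just described: adversaries are crafted once against the undefended target and then re-scored on the defended model (function and variable names are illustrative).

```python
import torch

@torch.no_grad()
def attack_success_rate(defended_model, x_adv, y):
    """Fraction of pre-generated adversaries that are misclassified by the (defended) model."""
    preds = defended_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Protocol sketch: x_adv is generated against the undefended target model, then
# attack_success_rate(defended_model, x_adv, y) is reported for each defense.
```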

Visual results. Figure 2 shows adversarial images generated by AdvGAN++ on the MNIST [10] and CIFAR-10 [8] datasets, demonstrating the ability of AdvGAN++ to generate perceptually realistic adversarial images.

Transferability to other models. Table 3 shows the attack success rate of adversarial examples generated by AdvGAN++ when evaluated on a different model trained for the same task. The adversaries produced by AdvGAN++ transfer significantly to other models performing the same task and can therefore also be used to attack a model in a black-box fashion.

Data      Target Model    Other Model      Attack success rate (%)
MNIST     LeNet C         LeNet B [16]     20.24
CIFAR-10  Resnet-32       Wide-Resnet-34   48.22
CIFAR-10  Wide-Resnet-34  Resnet-32        89.4
Table 3: Transferability of adversarial examples generated by AdvGAN++.

4 Conclusion

In this work, we studied the gaps left by AdvGAN [17], focusing mainly on the observation [14] that latent features are more prone to alteration by adversarial noise than the input image. This vulnerability makes latent features a better starting point for adversary generation and allowed us to propose a generator that directly converts latent features into an adversarial image, which not only reduces training time but also increases the attack success rate.

References

  1. N. Carlini and D. Wagner (2017). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
  2. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014). Generative adversarial networks. arXiv:1406.2661.
  3. I. Goodfellow, J. Shlens and C. Szegedy (2015). Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
  4. K. Grosse, N. Papernot, P. Manoharan, M. Backes and P. McDaniel (2017). Adversarial examples for malware detection. In Computer Security – ESORICS 2017, pp. 62–79.
  5. K. He, X. Zhang, S. Ren and J. Sun (2015). Deep residual learning for image recognition. arXiv:1512.03385.
  6. S. H. Huang, N. Papernot, I. J. Goodfellow, Y. Duan and P. Abbeel (2017). Adversarial attacks on neural network policies. arXiv:1702.02284.
  7. P. Isola, J. Zhu, T. Zhou and A. A. Efros (2016). Image-to-image translation with conditional adversarial networks. arXiv:1611.07004.
  8. A. Krizhevsky, V. Nair and G. Hinton. CIFAR-10 (Canadian Institute for Advanced Research).
  9. A. Kurakin, I. J. Goodfellow and S. Bengio (2016). Adversarial examples in the physical world. arXiv:1607.02533.
  10. Y. LeCun and C. Cortes (2010). MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/
  11. M. Mirza and S. Osindero (2014). Conditional generative adversarial nets. arXiv:1411.1784.
  12. S.-M. Moosavi-Dezfooli, A. Fawzi and P. Frossard (2016). DeepFool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  13. M. Sharif, S. Bhagavatula, L. Bauer and M. K. Reiter (2016). Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16), pp. 1528–1540.
  14. M. Singh, A. Sinha, N. Kumari, H. Machiraju, B. Krishnamurthy and V. N. Balasubramanian (2019). Harnessing the vulnerability of latent layers in adversarially trained models. arXiv:1905.05186.
  15. R. Taori, A. Kamsetty, B. Chu and N. Vemuri (2018). Targeted adversarial examples for black box audio systems. arXiv:1805.07820.
  16. F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh and P. McDaniel (2017). Ensemble adversarial training: attacks and defenses. arXiv:1705.07204.
  17. C. Xiao, B. Li, J. Zhu, W. He, M. Liu and D. Song (2018). Generating adversarial examples with adversarial networks. In IJCAI.
  18. C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie and A. Yuille (2017). Adversarial examples for semantic segmentation and object detection. In International Conference on Computer Vision (ICCV).
  19. X. Yuan, P. He and X. A. Li (2018). Adaptive adversarial attack on scene text recognition. arXiv:1807.03326.
  20. S. Zagoruyko and N. Komodakis (2016). Wide residual networks. arXiv:1605.07146.