Saliency Methods for Explaining Adversarial Attacks
In this work, we aim to explain the classifications of adversarial images using saliency methods. Saliency methods explain individual classification decisions of neural networks by creating saliency maps. Existing saliency methods were proposed for explaining correct predictions, and recent research shows that many of them fail even at that task. Notably, Guided Backpropagation (GuidedBP) has been shown to perform essentially (partial) image recovery. In contrast, our numerical analysis shows that the saliency maps created by GuidedBP do contain class-discriminative information. We propose a simple and efficient way to enhance the created saliency maps. The proposed enhanced GuidedBP achieves state-of-the-art performance in explaining the classifications of adversarial images.
The explanations produced by saliency methods reveal the relationship between the inputs and outputs of the underlying model. In image classification, the explanations are generally visualized as saliency maps. A saliency map (SM) is defined by three components: an input x, a model corresponding to a function f, and an output class c.
A saliency method can be formulated as a function E that maps an input, a model, and a class to a saliency map. The saliency map M^i for the classification of the i-th class is defined as

M^i = E(x, f, i),   (1)

where M^i has the same dimensions as the input x. The value of an element in M^i specifies the relevance of the corresponding input feature to the i-th class. Note that the i-th class need be neither the ground-truth class nor the predicted class.
In recent years, a large number of significant saliency methods have been proposed Simonyan et al. (2013); Zeiler and Fergus (2014); Springenberg et al. (2014); Bach et al. (2015); Ribeiro et al. (2016); Sundararajan et al. (2017); Smilkov et al. (2017); Shrikumar et al. (2017); Selvaraju et al. (2017); Ancona et al. (2017); Zintgraf et al. (2017); Dabkowski and Gal (2017); Fong and Vedaldi (2017); Gu et al. (2018). Mahendran and Vedaldi (2016); Adebayo et al. (2018) show that SMs created by Guided Backpropagation (GuidedBP Springenberg et al. (2014)) are neither class-discriminative nor sensitive to model parameters. Nie et al. (2018) prove that Guided Backpropagation is essentially doing (partial) image recovery, which is unrelated to the network decisions. In contrast to their conclusions, our numerical analysis shows that the SMs created by GuidedBP do contain class-discriminative information.
Most existing saliency methods consider only the SM of the ground-truth class, without considering a change of the class in Equation 1. Alvarez-Melis and Jaakkola (2018) show that meaningful explanations should be robust to small local perturbations of the input. However, small perturbations can also lead to misclassifications Szegedy et al. (2014); Goodfellow et al. (2015). In that case, we would not expect the explanations to stay unchanged, since the neural network makes a completely different decision. Hence, saliency methods should be sensitive to adversarial perturbations.
Our contributions are as follows: 1) we identify class-discriminative information in the SMs created by GuidedBP and propose a simple and efficient way to enhance the created SMs; 2) we explain classifications of adversarial images with the proposed enhanced Guided Backpropagation as well as the existing saliency methods. The created explanations are evaluated in qualitative and quantitative experiments.
2 Enhanced Guided Backpropagation
Similar to raw gradient backpropagation, GuidedBP propagates gradients back to the input and takes the received gradients as saliency values. The two methods differ only in how they handle ReLU layers. In GuidedBP, the gradients are propagated through a ReLU layer as G^l = 1(G^{l+1} > 0) · 1(a^l > 0) · G^{l+1}, where G^l denotes the gradients of the l-th layer, a^l the activations before the ReLU layer, and 1(·) the indicator function. Since the indicator functions filter out part of the gradients, the gradients received by some input features can be zero, which we call the filtering effect (FE). The filtering effect of a SM is formally the set of input features whose saliency values are zeroed out, i.e., FE(M^i) = {p : M^i_p = 0}.
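The rule above can be sketched in a few lines of NumPy. This is a minimal illustration (the function names and the boolean-mask representation of the filtering effect are our own, not from the original implementation):

```python
import numpy as np

def guided_relu_backward(grad_out, pre_activations):
    """Guided Backpropagation rule for a ReLU layer: a gradient is
    passed back only where BOTH the incoming gradient and the
    forward pre-ReLU activation are positive."""
    return grad_out * (grad_out > 0) * (pre_activations > 0)

def filtering_effect(saliency_map):
    """Filtering effect as a boolean mask: True where the indicator
    functions zeroed out the saliency of an input feature."""
    return saliency_map == 0
```

Applied layer by layer from the output to the input, `guided_relu_backward` replaces the ordinary ReLU backward pass; the raw-gradient method would drop the `(grad_out > 0)` factor.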
Alvarez-Melis and Jaakkola (2018) provides a theoretical analysis of GuidedBP. They show that the created SMs of different classes have similar filtering effects, which means that GuidedBP is not class-discriminative. In the following, we show that the SMs created by GuidedBP do contain class-discriminative information and propose a simple way to enhance the discriminative information in the corresponding saliency maps.
2.1 Identifying Discriminative Information
M^i and M^j are two saliency maps created by GuidedBP for the i-th and the j-th output class, respectively. As theoretically analyzed in Alvarez-Melis and Jaakkola (2018), they have similar filtering effects. The difference between them, if any, can therefore only lie in their saliency values. However, in all published work, SMs are visualized by normalizing the saliency values of a SM and mapping them to a color map. Any difference in their saliency values is hidden by the normalization.
We take a pre-trained VGG16 Simonyan and Zisserman (2014) model and fine-tune it on the PASCAL VOC2012 Everingham et al. (2010) dataset. Each image in the dataset may contain objects belonging to more than one class. We select images with multiple labels from the validation dataset. For each image, we produce SMs for all ground-truth classes and choose any two of them to form a SM pair (M^i, M^j), i.e., all pairwise combinations of the SMs.
For each pair, we binarize the SMs and compute the similarity between the two binarized maps, defined as the ratio of the number of pixels with the same value to the total number of pixels. The scores for all images from the validation dataset are shown in Figure 1(a). All the scores are close to 1, which means the SMs of different classes have almost the same filtering effect.
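This similarity score can be sketched directly from its definition (binarization threshold and function names are our own illustration):

```python
import numpy as np

def binarize(sm):
    # A pixel is "passed" by the filtering effect iff its saliency is nonzero.
    return (sm != 0).astype(int)

def similarity(sm_a, sm_b):
    """Ratio of pixels on which the two binarized SMs agree."""
    return (binarize(sm_a) == binarize(sm_b)).mean()
```

A score of 1 means the two SMs zero out exactly the same pixels, i.e., identical filtering effects.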
Without modifying the values of the SMs, we also compute their average and maximum saliency values. For each SM pair, we compute the difference of the average values (Avg-Diff) and of the maximal values (Max-Diff) of the two SMs. The scores are visualized in Figure 1(b); they vary from 0 to 0.8. Hence, given a classification, the two SMs M^i and M^j differ in their saliency values instead of their filtering effects.
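The exact normalization behind Avg-Diff and Max-Diff is not fully specified here; one plausible formalization (a hypothetical reading, dividing by the larger statistic so that the scores fall in [0, 1], assuming nonnegative, non-empty SMs) would be:

```python
import numpy as np

def avg_diff(sm_a, sm_b):
    # Hypothetical: relative difference of the mean saliency values.
    ma, mb = sm_a.mean(), sm_b.mean()
    return abs(ma - mb) / max(ma, mb)

def max_diff(sm_a, sm_b):
    # Hypothetical: relative difference of the maximal saliency values.
    xa, xb = sm_a.max(), sm_b.max()
    return abs(xa - xb) / max(xa, xb)
```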
2.2 Enhancing Discriminative Information of Saliency Maps
In this section, we propose a simple and efficient way to extract information about this difference. We argue that the relatively larger saliency values in SMs correspond to the input features that support a specific class. We extract such class-relevant information by normalizing the SMs and subtracting them, as visualized in Figure 2. Figure 2(a) shows the saliency values of two SMs, where the input features are ordered by the saliency values of M^i. The two SMs are zero in the interval [0, A] since both have the same filtering effect. The difference between the two SMs lies in their saliency values in the interval (A, C]. Figure 2(b) shows the normalized saliency values, where the input features in (A, B] are relevant to the j-th class and the ones in (B, C] are relevant to the i-th class.
In classifications of real-world images, the obtained discriminative pixels strongly depend on how the SMs are normalized. The trivial normalization is to divide a SM by its maximum. However, the maximal value of a SM (i.e., the maximal local gradient value in the vanilla Gradient approach) is noisy and often an outlier Szegedy et al. (2014); Smilkov et al. (2017).
One alternative is energy-based normalization, where an individual SM is normalized by the sum of its saliency values (i.e., the energy of the SM). The SMs M^i and M^j are composed of three channels. The discriminative pixels for the i-th class on the R channel are then D_R^i = max(0, M_R^i / E(M^i) − M_R^j / E(M^j)), where E(·) denotes the energy of the whole SM.
Neural networks have different sensitivities to different feature maps and input channels. In a classification, the sensitivity of a channel can differ between output classes. For example, if the energy of M^i is concentrated in channels other than the red one, the discriminative region D_R^i becomes almost empty, and we lose all the information on the red channel. In the contrary case, we might keep too much detailed information without highlighting discriminative features. On other channels, we could similarly lose all the information or keep too much non-discriminative information.
We propose channel-wise energy-based normalization to circumvent this problem. We consider the three channels separately: the discriminative pixels of the R channel are D_R^i = max(0, M_R^i / E(M_R^i) − M_R^j / E(M_R^j)), where E(M_R^i) denotes the energy of the R channel alone. In this way, the discriminative information of each channel is accurately identified. The generalization of the proposed enhancing method to other saliency methods is discussed in Section 4.
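The channel-wise enhancement can be sketched as follows (a minimal illustration assuming nonnegative SMs of shape (H, W, 3); the epsilon guard against empty channels is our own addition):

```python
import numpy as np

def discriminative_pixels(sm_i, sm_j):
    """Channel-wise energy-based normalization: each channel is divided
    by its own energy (the sum of its saliency values) before the
    subtraction; positive remainders are the pixels supporting class i."""
    eps = 1e-12  # guard against channels with zero energy
    energy_i = sm_i.sum(axis=(0, 1), keepdims=True) + eps
    energy_j = sm_j.sum(axis=(0, 1), keepdims=True) + eps
    diff = sm_i / energy_i - sm_j / energy_j
    return np.maximum(diff, 0.0)  # keep only pixels that favor class i
```

Swapping `sm_i` and `sm_j` yields the pixels supporting class j instead; the whole-map variant discussed above would sum over all three channels at once instead of per channel.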
3 Explaining Classifications of Adversarial Images
Inputs with imperceptible perturbations can fool well-trained neural networks. The Fast Gradient Sign Method (FGSM) Goodfellow et al. (2015) perturbs an image to increase the loss of the classifier on the resulting image. The Basic Iterative Method (BIM) Kurakin et al. (2016) extends FGSM by taking multiple small steps instead of one big step. Another superior attack method is the Carlini and Wagner attack (C&W) Carlini and Wagner (2017). Designed to defeat defensive distillation, it creates quasi-imperceptible perturbations by restricting their norms; the L2-norm variant of the attack is used throughout this paper.
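The two gradient-based attacks can be sketched as follows (a simplified illustration: `grad_loss` is an assumed user-supplied callable returning the gradient of the classification loss with respect to the input, and pixel values are assumed to lie in [0, 1]):

```python
import numpy as np

def fgsm(x, grad_loss, eps):
    """Fast Gradient Sign Method: a single step of size eps in the
    direction that increases the classification loss."""
    return np.clip(x + eps * np.sign(grad_loss(x)), 0.0, 1.0)

def bim(x, grad_loss, eps, steps):
    """Basic Iterative Method: several small FGSM-like steps, each
    projected back into the eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + (eps / steps) * np.sign(grad_loss(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
    return np.clip(x_adv, 0.0, 1.0)
```

C&W, in contrast, solves an optimization problem that trades off the perturbation norm against a misclassification objective and is not captured by this one-line sketch.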
For ImageNet validation images, we create adversarial images using the three attack methods described above on a pre-trained VGG16. The SMs of clean images and adversarial images are shown in Figure 3. For all the saliency methods except our enhanced GuidedBP, the SMs created for the predicted classes of a clean image and its adversarial versions are visually the same. One might argue that this is an advantage of the saliency methods: they can still identify the object in the image even when attacked. However, we argue that saliency methods should reflect the strong reaction of deep neural networks to the perturbation. In other words, they should produce different SMs for clean images and adversarial ones.
Since the existing saliency methods create similar SMs for a clean image and its adversarial versions, they cannot be applied to explain classifications misled by adversarial perturbations. Our enhanced GuidedBP can identify the relevant evidence of the decisions. For the classification of the original input (e.g., a Shetland sheepdog), the created SM shows that VGG16 focuses on an important visual feature of the target object (the head), whereas it focuses on class-irrelevant features (background and body parts) when the classifications of the adversarial inputs are explained.
Saliency methods identify the input features that contribute to a classification decision, so we can apply them to the misled classifications of adversarial samples. If we perturb the pixels relevant to the misclassification according to the created SMs, the effectiveness of the attack should decrease, and the performance of the model on the perturbed samples should recover to some extent. Figure 4 shows the performance of the model on adversarial samples (C&W attack) when they are perturbed according to the SMs. We observe that perturbation guided by the SMs of our enhanced GuidedBP recovers the score the most. Rather than claiming that SM-based perturbation is an effective defense method, we aim to show that the SMs created by enhanced GuidedBP better identify the pixels relevant to the classifications. When too many image pixels are perturbed, the visual features of the true target objects are lost, which can also lead to low model performance.
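This evaluation can be sketched roughly as follows (an illustrative protocol: the function name and the replace-with-random-values perturbation are our own assumptions; the exact perturbation scheme used for Figure 4 may differ):

```python
import numpy as np

def perturb_top_salient(x_adv, saliency, ratio, rng=None):
    """Perturb the fraction `ratio` of pixels with the highest saliency
    (here by re-randomizing them) to test whether the SM really points
    at the pixels driving the misclassification."""
    rng = np.random.default_rng(0) if rng is None else rng
    flat = saliency.reshape(-1)
    k = max(1, int(ratio * flat.size))
    idx = np.argsort(flat)[-k:]               # indices of the top-k salient pixels
    out = x_adv.reshape(-1).copy()
    out[idx] = rng.uniform(0.0, 1.0, size=k)  # overwrite with random values
    return out.reshape(x_adv.shape)
```

Re-classifying the returned images and plotting accuracy against `ratio` yields a curve like Figure 4: the better the SM localizes the attack-relevant pixels, the faster the accuracy recovers at small ratios.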
To further analyze how discriminative the SMs created by enhanced GuidedBP are with respect to adversarial perturbations, we divide the created adversarial images into two categories: those that successfully mislead the classification decision (denoted I_succ) and those that fail to attack the neural network (denoted I_fail). For the clean images and the perturbed images in I_fail, the created SMs should identify the class-discriminative parts. Contrarily, for the adversarial images in I_succ, the parts identified in the SMs are irrelevant to the ground-truth label, which means the network focuses on the wrong parts of these adversarial images when making decisions.
In Figure 5, the image in the first row contains a vulture. If the created adversarial image fails to fool the neural network, the corresponding SM focuses on the head of the vulture (see the 1st-3rd columns to the right of the image). If the attack is successful, the created SM for the misclassified class (i.e., kite) focuses on the wings of the vulture. As a comparison, GuidedBP always visualizes all the salient low-level features of the image (e.g., the ski, the persons, and the alp in the image of the second row).
4 Discussion and Conclusion
Why is enhanced GuidedBP better? The pre-softmax scores (logits) are often taken as the output scores to create SMs. Previous attribution methods show that the scores of different classes can be attributed to the same pixels; they explain where the scores themselves come from. Our approach instead explains where the difference between logits comes from, which is the exact reason why the network predicts a higher probability for a class A than for a class B. In the optimization that creates adversarial images, the loss of the neural network is increased, which changes the rank of the logits; this change of rank is the reason for the misclassification. Since our approach finds the evidence for the difference between the scores, i.e., for the rank of the logits, the enhanced GuidedBP can better explain the classification decisions of adversarial images.
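Because gradients are linear in the quantity being differentiated, a gradient-based saliency map of a logit difference is exactly the difference of the per-logit saliency maps. A toy linear model (our own illustration, not the VGG16 setting) makes this concrete: the input-gradient of logit c is simply the c-th weight row.

```python
import numpy as np

# Toy linear "network": logits = W @ x, so the input-gradient of
# logit c is W[c]. By linearity, the gradient of a logit difference
# equals the difference of the per-logit gradients.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)

grad_i = W[0]            # saliency of logit 0
grad_j = W[1]            # saliency of logit 1
grad_diff = W[0] - W[1]  # saliency of (logit 0 - logit 1)

assert np.allclose(grad_diff, grad_i - grad_j)
```

For nonlinear networks with GuidedBP the maps are no longer exact gradients, which is why the normalization-and-subtraction step of Section 2.2 is needed to expose the same kind of difference.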
The generalization of the enhancing method. As analyzed in Sec. 2.1, the important factor behind the success of enhanced GuidedBP is that M^i and M^j have similar filtering effects. When generalizing the enhancing method to other saliency methods, its effectiveness depends on whether their created SMs have similar filtering effects.
In this work, we identify the class-discriminative information in SMs created by GuidedBP and propose a simple way to enhance it. The proposed enhanced GuidedBP can better explain the classification decisions of adversarial images. In future work, we will investigate how to regularize deep neural networks using the captured discriminative information so that the rank of the logits is not easily changed by adversarial perturbations.
References

- Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., and Kim, B. (2018). Sanity checks for saliency maps. In NeurIPS, pp. 9525–9536.
- Alvarez-Melis, D. and Jaakkola, T. S. (2018). On the robustness of interpretability methods. In Workshop on Human Interpretability in Machine Learning (WHI).
- Ancona, M., Ceolini, E., Öztireli, C., and Gross, M. (2017). A unified view of gradient-based attribution methods for deep neural networks. In NIPS 2017 Workshop on Interpreting, Explaining and Visualizing Deep Learning.
- Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10 (7), e0130140.
- Carlini, N. and Wagner, D. (2017). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
- Dabkowski, P. and Gal, Y. (2017). Real time image saliency for black box classifiers. In NeurIPS, pp. 6967–6976.
- Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. (2010). The PASCAL visual object classes (VOC) challenge. IJCV 88 (2), pp. 303–338.
- Fong, R. C. and Vedaldi, A. (2017). Interpretable explanations of black boxes by meaningful perturbation. In ICCV, pp. 3449–3457.
- Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In ICLR.
- Gu, J., Yang, Y., and Tresp, V. (2018). Understanding individual decisions of CNNs via contrastive backpropagation. In ACCV.
- Kurakin, A., Goodfellow, I., and Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
- Mahendran, A. and Vedaldi, A. (2016). Salient deconvolutional networks. In ECCV.
- Nie, W., Zhang, Y., and Patel, A. (2018). A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In 2018 Workshop on Human Interpretability in Machine Learning (WHI).
- Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD, pp. 1135–1144.
- Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In ICCV, pp. 618–626.
- Shrikumar, A., Greenside, P., and Kundaje, A. (2017). Learning important features through propagating activation differences. In ICML.
- Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR.
- Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- Smilkov, D., Thorat, N., Kim, B., Viégas, F., and Wattenberg, M. (2017). SmoothGrad: Removing noise by adding noise. arXiv preprint arXiv:1706.03825.
- Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. In ICLR.
- Sundararajan, M., Taly, A., and Yan, Q. (2017). Axiomatic attribution for deep networks. In ICML.
- Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2014). Intriguing properties of neural networks. In ICLR.
- Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In ECCV, pp. 818–833.
- Zintgraf, L. M., Cohen, T. S., Adel, T., and Welling, M. (2017). Visualizing deep neural network decisions: Prediction difference analysis. In ICLR.