Abstract

We attempt to interpret how adversarially trained convolutional neural networks (AT-CNNs) recognize objects. We design systematic approaches to interpret AT-CNNs in both qualitative and quantitative ways and compare them with normally trained models. Surprisingly, we find that adversarial training alleviates the texture bias of standard CNNs when trained on object recognition tasks, and helps CNNs learn a more shape-biased representation. We validate our hypothesis from two aspects. First, we compare the salience maps of AT-CNNs and standard CNNs on clean images and images under different transformations. The comparison visually shows that the predictions of the two types of CNNs are sensitive to dramatically different types of features. Second, to achieve quantitative verification, we construct additional test datasets that destroy either textures or shapes, such as style-transferred versions of clean data, saturated images and patch-shuffled ones, and then evaluate the classification accuracy of AT-CNNs and normal CNNs on these datasets. Our findings shed some light on why AT-CNNs are more robust than normally trained ones and contribute to a better understanding of adversarial training over CNNs from an interpretation perspective.


 

Interpreting Adversarially Trained Convolutional Neural Networks

 

Tianyuan Zhang, Zhanxing Zhu


Correspondence to: Zhanxing Zhu <zhanxing.zhu@pku.edu.cn>.
Proceedings of the International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).
1. Introduction

Convolutional neural networks (CNNs) have achieved great success in a variety of visual recognition tasks (Krizhevsky et al., 2012; Girshick et al., 2014; Long et al., 2015) with their stacked local connections. A crucial issue is to understand what is being learned after training over thousands or even millions of images. This involves interpreting CNNs.

Along this line, some recent works showed that standard CNNs trained on ImageNet make their predictions rely on local textures rather than on the long-range dependencies encoded in the shape of objects (Geirhos et al., 2019; Brendel & Bethge, 2019; Ballester & de Araújo, 2016). Consequently, this texture bias prevents the trained CNNs from generalizing well on images with distorted textures but maintained shape information. Geirhos et al. (2019) also showed that training on a combination of Stylized-ImageNet and ImageNet can alleviate the texture bias of standard CNNs. This naturally raises an intriguing question:

Are there other trained CNNs that are more biased towards shapes?

Recently, normally trained neural networks were found to be easily fooled by maliciously perturbed examples, i.e., adversarial examples (Goodfellow et al., 2014; Kurakin et al., 2016). To defend against adversarial examples, adversarial training was proposed: instead of minimizing the loss function over clean examples, it minimizes an almost worst-case loss over slightly perturbed examples (Madry et al., 2018). We refer to these adversarially trained networks as AT-CNNs. They have been extensively shown to enhance robustness, i.e., to improve classification accuracy on adversarial examples. A second question then arises:

What is learned by adversarially trained CNNs to make it more robust?

In this work, in order to explore the answers to the above questions, we systematically design various experiments to interpret AT-CNNs and compare them with normally trained models. We find that AT-CNNs are better at capturing long-range correlations such as shapes, and are less biased towards textures than normally trained CNNs on popular object recognition datasets. This finding partially explains why AT-CNNs tend to be more robust than standard CNNs.

We validate our hypothesis from two aspects. First, we compare the salience maps of AT-CNNs and standard CNNs on clean images and those under different transformations. The comparison visually shows that the predictions of the two CNNs are sensitive to dramatically different types of features. Second, we construct additional test datasets that destroy either textures or shapes, such as the style-transferred version of clean data, saturated images and patch-shuffled images, and then evaluate the classification accuracy of AT-CNNs and normal CNNs on these datasets. These carefully designed experiments provide a quantitative comparison between the two kinds of CNNs and demonstrate their biases when making predictions.

To the best of our knowledge, we are the first to conduct a systematic investigation on interpreting adversarially trained CNNs, both visually and quantitatively. Our findings shed some light on why AT-CNNs are more robust than normally trained ones and also contribute to a better understanding of adversarial training over CNNs from an interpretation perspective. Our code is available at https://github.com/PKUAI26/AT-CNN.

The remainder of the paper is structured as follows. We introduce background knowledge on adversarial training and salience methods in Section 2. The methods for interpreting AT-CNNs are described in Section 3. We then present the experimental results supporting our findings in Section 4. Related work and discussions are presented in Section 5, and Section 6 concludes the paper.

2. Background

2.1. Adversarial Training

Adversarial training was first proposed by Goodfellow et al. (2014) and is so far the most successful approach for building models robust to adversarial examples (Madry et al., 2018; Sinha et al., 2018; Athalye et al., 2018; Zhang et al., 2019b; a). It can be formulated as solving a robust optimization problem (Shaham et al., 2015):

$$\min_{w}\; \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \max_{\|\delta\| \le \epsilon} \ell\big(f(x + \delta; w),\, y\big) \right] \qquad (1)$$

where $f(\cdot\,; w)$ represents the neural network parameterized by weights $w$; the input–output pair $(x, y)$ is sampled from the training set $\mathcal{D}$; $\delta$ denotes the adversarial perturbation; and $\ell(\cdot, \cdot)$ is the chosen loss function, e.g., the cross-entropy loss. $\|\cdot\|$ denotes a certain norm constraint, such as $\ell_\infty$ or $\ell_2$.

The inner maximization is approximated by adversarial examples generated by various attack methods. Training against a projected gradient descent (PGD; Madry et al., 2018) adversary leads to state-of-the-art white-box robustness. We use PGD-based adversarial training with bounded $\ell_\infty$ and $\ell_2$ norm constraints. We also investigate FGSM-based (Goodfellow et al., 2014) adversarial training.
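To make the inner maximization of Eq. (1) concrete, the sketch below runs an $\ell_\infty$-bounded PGD attack on a toy logistic model whose gradient is available in closed form. The function name `pgd_attack_linear` and the toy model are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def pgd_attack_linear(w, b, x0, y, eps=0.1, alpha=0.02, steps=10):
    """l_inf-bounded PGD sketch for a toy logistic model (illustrative only).

    Loss: log(1 + exp(-y * (w @ x + b))) with label y in {-1, +1}.
    Each step ascends the loss via the gradient sign, then projects the
    iterate back onto the eps-ball around x0 (the inner max of Eq. (1)).
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        margin = y * (w @ x + b)
        grad_x = -y * w / (1.0 + np.exp(margin))  # d loss / d x
        x = x + alpha * np.sign(grad_x)           # signed gradient ascent
        x = np.clip(x, x0 - eps, x0 + eps)        # project onto l_inf ball
        x = np.clip(x, 0.0, 1.0)                  # stay in valid pixel range
    return x
```

Adversarial training would then minimize the loss on such perturbed inputs instead of clean ones; in practice the same loop is run on minibatches of a deep network with automatic differentiation.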

2.2. Salience Methods

Given a trained neural network, visualizing the salience maps aims at assigning a sensitivity value, sometimes also called an “attribution”, to each pixel of an input image, showing how sensitive the output is to that pixel. Salience methods can mainly be divided (Ancona et al., 2018) into perturbation-based methods (Zeiler & Fergus, 2014; Zintgraf et al., 2017) and gradient-based methods (Erhan et al., 2009; Simonyan et al., 2013; Shrikumar et al., 2017; Sundararajan et al., 2017; Selvaraju et al., 2017; Zhou et al., 2016; Smilkov et al., 2017; Bach et al., 2015). Recently, Adebayo et al. (2018) carried out a systematic test of many gradient-based salience methods, and only variants of Grad and GradCAM (Selvaraju et al., 2017) pass the proposed sanity checks. We thus choose Grad and its smoothed version SmoothGrad (Smilkov et al., 2017) for visualization.

Formally, let $x \in \mathbb{R}^d$ denote the input image; a trained network is a function $f: \mathbb{R}^d \to \mathbb{R}^C$, where $C$ is the total number of classes. Let $S_c$ denote the class activation function for each class $c$. We seek to obtain a salience map $E \in \mathbb{R}^d$. The Grad explanation is the gradient of the class activation with respect to the input image $x$,

$$E = \frac{\partial S_c(x)}{\partial x}. \qquad (2)$$

SmoothGrad (Smilkov et al., 2017) was proposed to alleviate noise in the gradient explanation by averaging the gradients of noisy copies of an input. Thus, for an input $x$, the smoothed variant of Grad, SmoothGrad, can be written as

$$\hat{E} = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial S_c(x + g_i)}{\partial x}, \qquad (3)$$

where $g_1, \dots, g_n$ are noise vectors drawn i.i.d. from a Gaussian distribution $\mathcal{N}(0, \sigma^2)$. The number of samples $n$ and the noise level $\sigma$ are fixed across all our experiments. We choose the class activation $S_c$ based on $p_c$, the probability of class $c$ assigned by the classifier to input $x$.
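As a sketch, SmoothGrad in Eq. (3) amounts to a small averaging loop. Here `grad_fn` stands in for the backpropagated gradient $\partial S_c/\partial x$; in this illustration it is the analytic gradient of a toy score, not a real network:

```python
import numpy as np

def smoothgrad(grad_fn, x, n=50, sigma=0.1, seed=0):
    """SmoothGrad (Eq. 3): average the Grad map over n noisy copies of x.

    grad_fn(x) returns dS_c/dx for the class activation S_c; sigma is the
    standard deviation of the i.i.d. Gaussian noise vectors g_i.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x, dtype=float)
    for _ in range(n):
        g_i = rng.normal(0.0, sigma, size=x.shape)  # g_i ~ N(0, sigma^2)
        acc += grad_fn(x + g_i)
    return acc / n
```

For a quadratic toy score $S(x) = \|x\|^2$ with gradient $2x$, the smoothed map stays close to $2x$ while individual noisy gradients fluctuate around it.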

3. Methods

In this section, we elaborate our method for interpreting adversarially trained CNNs and comparing them with normally trained ones. Three image datasets are considered: Tiny ImageNet (https://tiny-imagenet.herokuapp.com/), Caltech-256 (Griffin et al., 2007) and CIFAR-10.

We first visualize the salience maps of AT-CNNs and normal CNNs to demonstrate that the two models, trained in different ways, are sensitive to different kinds of features. Besides this qualitative comparison, we also test the two kinds of CNNs on differently transformed datasets to distinguish their preferred features.

(a) Original (b) Stylized (c) Saturated 8 (d) Saturated 1024 (e) Patch-shuffle 2 (f) Patch-shuffle 4
Figure 1: Visualization of the three transformations. Original images are from Caltech-256. From left to right: original, stylized, saturation levels 8 and 1024, patch-shuffle 2, patch-shuffle 4.
3.1. Visualizing Salience Maps

A straightforward way of investigating the difference between AT-CNNs and CNNs is to visualize which group of pixels the network outputs are most sensitive to. Salience maps generated by Grad and its smoothed variant SmoothGrad are good candidates to show what features a model is sensitive to. We compare the salience maps of AT-CNNs and CNNs on clean images, and on images under texture-preserving and shape-preserving distortions. Extensive results can be seen in Section 4.

As pointed out by Smilkov et al. (2017), sensitivity maps based on the Grad method are often visually noisy, highlighting some pixels that, to a human eye, seem randomly selected. SmoothGrad in Eq. (3), on the other hand, reduces visual noise by averaging the gradient over Gaussian-perturbed copies of the image. Thus, we mainly report the salience maps produced by SmoothGrad; the Grad visualization results are provided in the appendix. Note that the two visualization methods lead to a consistent conclusion on the difference between the two trained CNNs.

3.2. Generalization on Shape- or Texture-Distorted Data

Besides visual inspection of sensitivity maps, we propose to measure the sensitivity of AT-CNNs and CNNs to different features by evaluating their performance degradation under several distortions that preserve either shapes or textures. Intuitively, if a model relies heavily on textures, its performance should degrade severely if we destroy most of the textures while preserving other information, such as shapes and other features. However, a perfect disentanglement of texture, shape and other feature information is impossible (Gatys et al., 2015). In this work, we construct three kinds of image transformations to achieve shape or texture distortion: style transfer, saturation and patch-shuffling. Some image samples are shown in Figure 1. We also add three Fourier-filtered test sets in the appendix. We now describe each of these transformations and their properties.

Note that we conduct normal training or adversarial training on the original training sets, and then evaluate their generalizability over the transformed data. During the training, we never use the transformed datasets.

Stylizing. Geirhos et al. (2019) utilized style transfer (Huang & Belongie, 2017) to generate images with conflicting shape and texture information to demonstrate the texture bias of ImageNet-trained standard CNNs. Following the same rationale, we utilize style transfer to destroy most of the textures while preserving the global shape structures in images, and build a stylized test dataset. Therefore, with similar generalization error, models capturing shapes better should also perform better on stylized test images than those biased towards textures. The style-transferred image samples are shown in Figure 1(b).
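The core of the AdaIN style transfer used here (Huang & Belongie, 2017) is a per-channel statistics swap. The sketch below applies it directly to numpy arrays of shape (C, H, W) for illustration; in the actual method it is applied to VGG encoder features before decoding:

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """AdaIN: renormalize each content channel to the style channel's
    mean and standard deviation (feature maps of shape (C, H, W))."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

After this swap, the decoded image carries the style's local texture statistics while the content's spatial (shape) arrangement is retained.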

Saturation. Similar to Ding et al. (2019), we denote the saturation operation on an image $x$ with saturation level $p$ by $x^{(p)}$, where $p$ ranges from $0$ to $\infty$. When $p = 2$, the saturation operation does not change the image. When $p > 2$, increasing the saturation level pushes the pixel values towards binarized ones, and $p = \infty$ leads to pure binarization. Specifically, each pixel of image $x$ with value $v \in [0, 1]$ is mapped to the saturated value $\operatorname{sign}(2v - 1)\,|2v - 1|^{2/p}/2 + 1/2$. One can observe from Figure 1(c) and (d) that increasing the saturation level gradually destroys some texture information while preserving most of the contour structure.
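A minimal sketch of this saturation mapping, assuming pixel values in [0, 1] (the function name is ours); it is the identity at $p = 2$, binarizes as $p \to \infty$, and collapses to constant gray as $p \to 0$:

```python
import numpy as np

def saturate(img, p):
    """Saturation operation: identity at p = 2, binarization as p -> inf,
    constant gray (all pixels 1/2) as p -> 0. Pixels assumed in [0, 1]."""
    v = 2.0 * img - 1.0                       # map [0, 1] -> [-1, 1]
    v = np.sign(v) * np.abs(v) ** (2.0 / p)   # sharpen (p > 2) or flatten (p < 2)
    return (v + 1.0) / 2.0                    # map back to [0, 1]
```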

CIFAR10 TinyImageNet Caltech 256
Accuracy Robustness Accuracy Robustness Accuracy Robustness
PGD-inf: 8 86.27 44.81 54.42 14.25 66.41 31.16
PGD-inf: 4 89.17 30.85 61.85 6.87 72.22 20.10
PGD-inf: 2 91.4 39.11 67.06 1.66 76.51 7.51
PGD-inf: 1 93.40 7.53 69.42 0.18 79.11 1.70
PGD-L2: 12 85.79 34.61 53.44 14.80 65.54 31.36
PGD-L2: 8 88.01 26.88 58.21 10.03 69.75 26.19
PGD-L2: 4 90.77 13.19 64.24 3.61 74.12 14.33
FGSM: 8 84.90 34.25 66.21 0.01 70.88 20.02
FGSM: 4 88.13 25.08 63.43 0.13 73.91 15.16
Normal 94.52 0 72.02 0.01 83.32 0
Underfit 86.79 0 60.05 0.01 69.04 0
Table 1: Accuracy and robustness of all the trained models. Robustness is measured against the PGD attack with bounded $\ell_\infty$ norm. Details are listed in the appendix. Note that underfitting CNNs have generalization performance similar to some of the AT-CNNs on clean images.

Patch-Shuffling. To destroy long-range shape information, we split each image into $k \times k$ small patches and randomly rearrange the order of these patches, with $k \in \{2, 4, 8\}$. Favorably, this operation preserves most of the texture information while destroying most of the shape information. Patch-shuffled image samples are shown in Figure 1(e), (f). Note that as $k$ increases, more information of the original image is lost, especially for images with low resolution.
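A sketch of the patch-shuffling operation, assuming the image height and width are divisible by $k$ (the function name and interface are ours):

```python
import numpy as np

def patch_shuffle(img, k, seed=0):
    """Split an image into k x k patches and randomly permute their order,
    destroying long-range shape structure but keeping local textures."""
    h, w = img.shape[0] // k, img.shape[1] // k
    patches = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
               for i in range(k) for j in range(k)]
    order = np.random.default_rng(seed).permutation(k * k)
    rows = [np.concatenate([patches[order[i * k + j]] for j in range(k)], axis=1)
            for i in range(k)]
    return np.concatenate(rows, axis=0)
```

Because the patches are only permuted, every local texture statistic of the original image survives; only their global arrangement changes.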

4. Experiments

4.1. Experimental Setup

We describe the experimental setup for evaluating the performance of AT-CNNs and standard CNNs on data distributions manipulated by the above-mentioned operations. We conduct experiments on three datasets: CIFAR-10, Tiny ImageNet and Caltech-256 (Griffin et al., 2007). Note that we do not create style-transferred and patch-shuffled test sets for CIFAR-10 due to its limited resolution.

When training on CIFAR-10, we use the ResNet-18 model (He et al., 2016a; b); for data augmentation, we perform zero padding of width 4, horizontal flipping and random cropping.

Tiny ImageNet has 200 classes of objects. Each class has 500 training images, 50 validation images, and 50 test images. All images from Tiny ImageNet are of size 64×64. We re-scale them and perform random horizontal flipping and per-image standardization as data augmentation.

(a) Images from Caltech-256 (b) Images from Tiny ImageNet
Figure 2: Sensitivity maps based on SmoothGrad (Smilkov et al., 2017) of three models on images under saturation and stylizing. From top to bottom: original, saturation 1024, and stylized. For each group of images, from left to right: original image, sensitivity maps of the standard CNN, the underfitting CNN and the PGD-$\ell_\infty$ AT-CNN.

Caltech-256 (Griffin et al., 2007) consists of 257 object categories containing a total of 30,607 images. The resolution of images from Caltech-256 is much higher than that of the above two datasets. We manually split off a portion of the images as the test set. We perform re-scaling and random cropping following He et al. (2016a). For both Tiny ImageNet and Caltech-256, we use the ResNet-18 model as the network architecture.

4.2. Training Details

For all three datasets above, we train three types of AT-CNNs, which mainly differ in the way adversarial examples are generated: FGSM, PGD with bounded $\ell_\infty$ norm, and PGD with bounded $\ell_2$ norm; for each attack method we train several models under different attack strengths. Details are listed in the appendix. To understand whether the difference in performance degradation between AT-CNNs and standard CNNs is due to the poor generalization (Schmidt et al., 2018; Tsipras et al., 2018) of adversarial training, we also compare the AT-CNNs with an underfitting CNN (trained on clean data) with generalization performance similar to that of the AT-CNNs. We train 11 models on each dataset. Their generalization performance on clean data and their robustness, measured against the PGD attack, are shown in Table 1.

4.3. Comparison of Sensitivity Maps

To investigate what features of an input image AT-CNNs and normal CNNs are most sensitive to, we generate sensitivity maps using SmoothGrad (Smilkov et al., 2017) on clean images, saturated images, and stylized images. The visualization results are presented in Figure 2.

We can easily observe that the salience maps of AT-CNNs are much sparser and mainly focus on the contours of each object, on all kinds of images: clean, saturated and stylized. In contrast, the sensitivity maps of standard CNNs are noisier and less biased towards the shapes of objects. This is consistent with the findings in (Geirhos et al., 2019).

In particular, in the second row of Figure 2, the sensitivity maps of normal CNNs for the “dog” class are still noisy even when the input saturated image is nearly binarized. On the other hand, after adversarial training, the models successfully capture the shape information of the object, providing a more interpretable prediction.

For the stylized images shown in the third row of Figure 2, even with dramatically changed textures after style transfer, AT-CNNs are still able to focus on the shapes of the original objects, while standard CNNs totally fail.

Due to limited space, we provide more visualization results (including the sensitivity maps generated by the Grad method) in the appendix.

4.4. Generalization on Distorted Test Sets

In this part, we mainly show the generalization performance of AT-CNNs and normal CNNs on distorted image datasets that preserve either shapes or textures. This helps us understand, in a quantitative way, how differently the two types of models are biased.

For all the experimental results below, besides the top-1 accuracy, we also report an “accuracy on correctly classified images”. This accuracy is measured by first selecting the images from the clean test set that are correctly classified, and then measuring the accuracy on the transformed versions of these correctly classified images.
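This metric can be sketched in a few lines (the function name and array-based interface are our own):

```python
import numpy as np

def accuracy_on_correct(pred_clean, pred_transformed, labels):
    """'Accuracy on correctly classified images': restrict to test images
    classified correctly on clean data, then score their transformed versions."""
    pred_clean = np.asarray(pred_clean)
    pred_transformed = np.asarray(pred_transformed)
    labels = np.asarray(labels)
    mask = pred_clean == labels          # correctly classified clean images
    if not mask.any():
        return 0.0
    return float(np.mean(pred_transformed[mask] == labels[mask]))
```

Compared with plain top-1 accuracy on the transformed set, conditioning on clean correctness controls for the differing clean accuracies of the models being compared.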

Figure 3: Visualization of images from style-transferred test set. Applying AdaIn (Huang & Belongie, 2017) style transfer distorts local textures of original images, while the global shape structure is retained. The first row are images from Caltech-256, and the second row are images from Tiny ImageNet.
dataset Caltech-256 Stylized Caltech-256 TinyImageNet Stylized TinyImageNet
Standard 83.32 16.83 72.02 7.25
Underfit 69.04 9.75 60.35 7.16
PGD-$\ell_\infty$: 8 66.41 19.75 54.42 18.81
PGD-$\ell_\infty$: 4 72.22 21.10 61.85 20.51
PGD-$\ell_\infty$: 2 76.51 21.89 67.06 19.25
PGD-$\ell_\infty$: 1 79.11 22.07 69.42 18.31
PGD-$\ell_2$: 12 65.24 20.14 53.44 19.33
PGD-$\ell_2$: 8 69.75 21.62 58.21 20.42
PGD-$\ell_2$: 4 74.12 22.53 64.24 21.05
FGSM: 8 70.88 21.23 66.21 15.07
FGSM: 4 73.91 21.99 63.43 20.22
Table 2: “Accuracy on correctly classified images” for different models on stylized test set. The columns named “Caltech-256” and “TinyImageNet” show the generalization of different models on the clean test set.
Figure 4: Illustration of how varying saturation changes the appearance of the image. From left to right: saturation levels 0.25, 0.5, 1, 2 (original image), 4, 8, 16, 64, 1024. Increasing the saturation level pushes pixels towards 0 or 1, which preserves most of the shape while wiping out most of the textures. Decreasing the saturation level pushes all pixels towards 1/2.
Stylizing

Following Geirhos et al. (2019), we generate stylized versions of the test sets of Caltech-256 and Tiny ImageNet.

We report the “accuracy on correctly classified images” of all the trained models on stylized test set in Table 2. Compared with standard CNNs, though with a lower accuracy on original test images, AT-CNNs achieve higher accuracy on stylized ones with textures being dramatically changed. The comparison quantitatively shows that AT-CNNs tend to be more invariant with respect to local textures.

(a) Caltech-256 (b) Tiny ImageNet
Figure 5: “Accuracy on correctly classified images” for different models on saturated Caltech-256 and Tiny ImageNet with respect to different saturation levels. Note that in the plot, several curves with the same color and line type are shown for each adversarial training method (PGD- and FGSM-based); among these, the models trained with larger perturbation achieve better robustness in most cases. Detailed results are listed in the appendix.
(a) Original Image (b) Patch-Shuffle 2 (c) Patch-Shuffle 4 (d) Patch-Shuffle 8
Figure 6: Visualization of the patch-shuffling transformation. The first row shows the probability of “cake” assigned by different models.
(a) Caltech-256 (b) Tiny ImageNet
Figure 7: “Accuracy on correctly classified images” for different models on patch-shuffled Tiny ImageNet and Caltech-256 with different splitting numbers. Detailed results are listed in the appendix.
Saturation

We use the saturation operation to manipulate the images, and show how increasing saturation levels affect the accuracy of models trained in different ways.

In Figure 4, we visualize images with varying saturation levels. It can easily be observed that increasing the saturation level makes images more “binarized”: some textures are wiped out, but sharper edges are produced and shape information is preserved. Decreasing the saturation level below 2 (the level of the clean image) pushes all pixels towards 1/2, so that nearly all information is lost, leading to a completely gray image with constant pixel value.

We measure the “accuracy on correctly classified images” for all the trained models and show it in Figure 5. We can observe that as the saturation level increases, more texture information is lost. Favorably, adversarially trained models exhibit much less sensitivity to this texture loss, still obtaining high classification accuracy. These results indicate that AT-CNNs are more robust to the “saturation” or “binarizing” operation, which may demonstrate that the prediction capability of AT-CNNs relies less on textures and more on shapes. Results on CIFAR-10 tell the same story; they are presented in the appendix due to limited space.

Additionally, in our experiments, for each adversarial training approach, either PGD- or FGSM-based, AT-CNNs with higher robustness towards the PGD adversary are more invariant to the increasing saturation level and texture loss. On the other hand, adversarial training with higher robustness typically degrades generalization on the clean dataset. This finding also supports the claim that “robustness may be at odds with accuracy” (Tsipras et al., 2018).

When decreasing the saturation level, all models show a similar degree of performance degradation, indicating that AT-CNNs are not robust to all kinds of image distortions; they tend to be more robust to certain fixed types of distortions. We leave further investigation of this issue as future work.

Patch-Shuffling

The stylizing and saturation operations aim at changing or removing the texture information of the original images while preserving the features of shapes and edges. To test the different biases of AT-CNNs and standard CNNs from the opposite direction, we shatter the shape and edge information by splitting images into $k \times k$ patches and then randomly shuffling them. This operation still maintains most of the local textures if $k$ is not too large.

Figure 6 shows an example of patch-shuffled images under different numbers of splits. The first row shows the probabilities assigned by different models to the ground-truth class of the original image. Obviously, after random shuffling, the shape and edge features are destroyed dramatically: the prediction probability of the adversarially trained CNNs drops significantly, while the normal CNNs still maintain high confidence in the ground-truth class. This reveals that AT-CNNs are more biased towards shapes and edges than normally trained ones.

Moreover, Figure 7 depicts the “accuracy on correctly classified images” for all the models, measured on the “Patch-shuffled” test sets with an increasing number of splitting pieces. AT-CNNs, especially those trained against a stronger attack, are more sensitive to the “Patch-shuffling” operation in most of our experiments.

Note that under the “Patch-shuffle 8” operation, all models have a similar “accuracy on correctly classified images”, which is largely due to the severe information loss. Also note that this accuracy on Tiny ImageNet, shown in Figure 7(b), is much lower for all models than that on Caltech-256 in Figure 7(a): under patch-shuffling, the normally trained CNN retains a much higher accuracy on Caltech-256 than on Tiny ImageNet. This mainly originates from the limited resolution of Tiny ImageNet, since the “Patch-Shuffle” operation destroys more useful features in low-resolution images than in higher-resolution ones.

5. Related Work and Discussion

Interpreting AT-CNNs. Recently, some relevant findings have indicated that AT-CNNs learn fundamentally different feature representations than standard classifiers. Tsipras et al. (2018) showed that sensitivity maps of AT-CNNs in the input space align well with human perception. Additionally, by visualizing large-perturbation adversarial examples against AT-CNNs, it can be observed that the adversarial examples capture salient data characteristics of a different class, appearing semantically similar to images of that class. Dong et al. (2017) leveraged adversarial training to produce a more interpretable representation by visualizing active neurons. Compared with Tsipras et al. (2018) and Dong et al. (2017), we have conducted a more systematic investigation for interpreting AT-CNNs. We construct three types of image transformations that can largely change the textures while preserving shape information (i.e., stylizing and saturation), or shatter the shape/edge features while keeping the local textures (i.e., patch-shuffling). Evaluating the generalization of AT-CNNs on these designed datasets provides a quantitative way to verify and interpret their strong shape bias compared with normal CNNs.

Insights for defending against adversarial examples. Based on our investigation of AT-CNNs, we find that robustness towards adversarial examples is correlated with the capability of capturing long-range features like shapes or contours. This naturally raises the question: could other models that capture more global features, or that have more texture invariance, be more robust to adversarial examples, even without adversarial training? This might provide insights for designing new network architectures or new strategies for enhancing the bias towards long-range features. Some recent works partially answer this question. Xie et al. (2018) enhanced standard CNNs with non-local blocks, inspired by (Wang et al., 2018; Vaswani et al., 2017), which capture long-range dependencies in a data-dependent manner; when combined with adversarial training, their networks achieved state-of-the-art adversarial robustness on ImageNet. Luo et al. (2018) destroyed some of the local connections of standard CNNs by randomly selecting a set of neurons and removing them from the network before training, thus forcing the CNNs to focus less on local texture features. With this design, they achieved improved black-box robustness.

Adversarial training with other types of attacks. In this work, we mainly interpret AT-CNNs based on norm-constrained perturbations of the original images. It is worth noting that the difference between normally trained and adversarially trained CNNs may highly depend on the type of adversary. Models trained against a spatially transformed adversary (Xiao et al., 2018), denoted as ST-AT-CNNs, have robustness towards the PGD attack similar to that of standard models, yet their salience maps are still quite different, as shown in Figure 8. Also, the average distance between their salience maps is close to that of the standard CNN, which is much higher than that of the PGD-AT-CNN. There exists a variety of generalized types of attacks, such as spatially transformed (Xiao et al., 2018) and GAN-based adversarial examples (Song et al., 2018). We leave interpreting AT-CNNs based on these generalized types of attacks as future work.

Figure 8: Sensitivity maps based on SmoothGrad (Smilkov et al., 2017) of three models. From left to right: original image, sensitivity maps of the standard CNN, the PGD-AT-CNN and the ST-AT-CNN.
6. Conclusion

From both qualitative and quantitative perspectives, we have conducted a systematic study on interpreting adversarially trained convolutional neural networks. By constructing distorted test sets that preserve either shapes or local textures, we compare the sensitivity maps of AT-CNNs and normal CNNs on clean, stylized and saturated images, which visually demonstrates that AT-CNNs are more biased towards global structures such as shapes and edges. More importantly, we evaluate the generalization performance of the two kinds of models on three constructed datasets: stylized, saturated and patch-shuffled. The results clearly indicate that AT-CNNs are less sensitive to texture distortion and focus more on shape information, while normally trained CNNs behave the other way around.

Understanding what a model has learned is an essential topic in both machine learning and computer vision. The strategies we propose can also be extended to interpret other neural networks, such as models for object detection and semantic segmentation.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61806009), Beijing Natural Science Foundation (No. 4184090), Beijing Academy of Artificial Intelligence (BAAI) and the Intelligent Manufacturing Action Plan of Industrial Solid Foundation Program (No. JCKY2018204C004). We also appreciate insightful discussions with Dinghuai Zhang and Dr. Lei Wu.

References

  • Adebayo et al. (2018) Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., and Kim, B. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems, pp. 9525–9536, 2018.
  • Ancona et al. (2018) Ancona, M., Ceolini, E., Oztireli, C., and Gross, M. Towards better understanding of gradient-based attribution methods for deep neural networks. In 6th International Conference on Learning Representations (ICLR 2018), 2018.
  • Athalye et al. (2018) Athalye, A., Carlini, N., and Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
  • Bach et al. (2015) Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.
  • Ballester & de Araújo (2016) Ballester, P. and de Araújo, R. M. On the performance of googlenet and alexnet applied to sketches. In AAAI, pp. 1124–1128, 2016.
  • Brendel & Bethge (2019) Brendel, W. and Bethge, M. Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. In International Conference on Learning Representations, 2019.
  • Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
  • Ding et al. (2019) Ding, G. W., Lui, K. Y.-C., Jin, X., Wang, L., and Huang, R. On the sensitivity of adversarial robustness to input data distributions. In International Conference on Learning Representations, 2019.
  • Dong et al. (2017) Dong, Y., Su, H., Zhu, J., and Bao, F. Towards interpretable deep neural networks by leveraging adversarial examples. arXiv preprint arXiv:1708.05493, 2017.
  • Erhan et al. (2009) Erhan, D., Bengio, Y., Courville, A., and Vincent, P. Visualizing higher-layer features of a deep network. University of Montreal, 1341(3):1, 2009.
  • Gatys et al. (2015) Gatys, L. A., Ecker, A. S., and Bethge, M. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
  • Geirhos et al. (2019) Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., and Brendel, W. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019.
  • Girshick et al. (2014) Girshick, R., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587, 2014.
  • Goodfellow et al. (2014) Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • Griffin et al. (2007) Griffin, G., Holub, A., and Perona, P. Caltech-256 object category dataset. 2007.
  • He et al. (2016a) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016a.
  • He et al. (2016b) He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016b.
  • Huang & Belongie (2017) Huang, X. and Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1510–1519. IEEE, 2017.
  • Jo & Bengio (2017) Jo, J. and Bengio, Y. Measuring the tendency of cnns to learn surface statistical regularities. arXiv preprint arXiv:1711.11561, 2017.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
  • Kurakin et al. (2016) Kurakin, A., Goodfellow, I., and Bengio, S. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
  • Long et al. (2015) Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440, 2015.
  • Luo et al. (2018) Luo, T., Cai, T., Zhang, M., Chen, S., and Wang, L. Random mask: Towards robust convolutional neural networks. 2018.
  • Madry et al. (2018) Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
  • Paszke et al. (2017) Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. Automatic differentiation in pytorch. 2017.
  • Schmidt et al. (2018) Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., and Madry, A. Adversarially robust generalization requires more data. arXiv preprint arXiv:1804.11285, 2018.
  • Selvaraju et al. (2017) Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 618–626. IEEE, 2017.
  • Shaham et al. (2015) Shaham, U., Yamada, Y., and Negahban, S. Understanding adversarial training: Increasing local stability of neural nets through robust optimization. arXiv preprint arXiv:1511.05432, 2015.
  • Shrikumar et al. (2017) Shrikumar, A., Greenside, P., and Kundaje, A. Learning important features through propagating activation differences. arXiv preprint arXiv:1704.02685, 2017.
  • Simonyan et al. (2013) Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
  • Sinha et al. (2018) Sinha, A., Namkoong, H., and Duchi, J. Certifiable distributional robustness with principled adversarial training. In International Conference on Learning Representations, 2018.
  • Smilkov et al. (2017) Smilkov, D., Thorat, N., Kim, B., Viégas, F., and Wattenberg, M. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
  • Song et al. (2018) Song, Y., Shu, R., Kushman, N., and Ermon, S. Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems, pp. 8322–8333, 2018.
  • Sundararajan et al. (2017) Sundararajan, M., Taly, A., and Yan, Q. Axiomatic attribution for deep networks. arXiv preprint arXiv:1703.01365, 2017.
  • Tsipras et al. (2018) Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. 2018.
  • Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.
  • Wang et al. (2018) Wang, X., Girshick, R., Gupta, A., and He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803, 2018.
  • Xiao et al. (2018) Xiao, C., Zhu, J.-Y., Li, B., He, W., Liu, M., and Song, D. Spatially transformed adversarial examples. In International Conference on Learning Representations, 2018.
  • Xie et al. (2018) Xie, C., Wu, Y., van der Maaten, L., Yuille, A., and He, K. Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411, 2018.
  • Zeiler & Fergus (2014) Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818–833. Springer, 2014.
  • Zhang et al. (2019a) Zhang, D., Zhang, T., Lu, Y., Zhu, Z., and Dong, B. You only propagate once: Painless adversarial training using maximal principle. arXiv preprint arXiv:1905.00877, 2019a.
  • Zhang et al. (2019b) Zhang, H., Yu, Y., Jiao, J., Xing, E. P., Ghaoui, L. E., and Jordan, M. I. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019b.
  • Zhou et al. (2016) Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.
  • Zintgraf et al. (2017) Zintgraf, L. M., Cohen, T. S., Adel, T., and Welling, M. Visualizing deep neural network decisions: Prediction difference analysis. arXiv preprint arXiv:1702.04595, 2017.
Appendix

A. Model Architectures
  • CIFAR-10. We train a standard ResNet-18 (He et al., 2016a) architecture; it has 4 groups of residual layers with filter sizes (64, 128, 256, 512) and 2 residual units each.

  • Caltech-256 & Tiny ImageNet. We use a ResNet-18 architecture from PyTorch (Paszke et al., 2017). Note that for models on Caltech-256 & Tiny ImageNet, we initialize them with the ImageNet (Deng et al., 2009) pre-trained weights provided by PyTorch.

We evaluate the robustness of all our models using a projected gradient descent adversary with perturbation budget ε, step size 2 and 40 iterations.
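As a reference for the evaluation protocol, an ℓ∞-bounded PGD attack can be sketched as follows. This is a minimal NumPy illustration on a logistic-regression loss with a hand-derived gradient, not the code used in the paper; `eps`, `step_size`, and the toy linear model (`w`, `b`) are illustrative placeholders:

```python
import numpy as np

def pgd_linf(x, y, w, b, eps, step_size, n_iter, rng=None):
    """Projected gradient ascent on the logistic loss within an
    l_inf ball of radius eps around the clean input x (pixels in [0, 1])."""
    rng = rng or np.random.default_rng(0)
    # Random start inside the eps-ball, as in Madry et al. (2018).
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    for _ in range(n_iter):
        # Logistic model: p = sigmoid(w.x + b); the gradient of the
        # cross-entropy loss w.r.t. the input is (p - y) * w.
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w
        # Ascend along the gradient sign, then project back onto
        # the eps-ball and the valid pixel range.
        x_adv = x_adv + step_size * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Setting `n_iter = 1` and `step_size = eps` (and skipping the random start) recovers a single-step, FGSM-style sign-gradient adversary.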

B. Adversarial Training Settings

We perform 9 types of adversarial training on each of the datasets. 7 of the 9 kinds of adversarial training are against a projected gradient descent (PGD) adversary (Madry et al., 2018); the other 2 are against an FGSM adversary (Goodfellow et al., 2014).

B.1. PGD Adversaries

We list the value of ε used for adversarial training on each dataset and under each norm. In all settings, PGD runs for 20 iterations.

  • ℓ∞-norm bounded adversary. For all three datasets, pixel values range from 0 to 1. We train 4 adversarially trained CNNs with ε = 1/255, 2/255, 4/255 and 8/255, denoted PGD-inf: 1, 2, 4, 8 respectively, with step sizes of 1/255, 1/255, 2/255 and 4/255.

  • ℓ2-norm bounded adversary. For Caltech-256 & Tiny ImageNet, we train three adversarially trained CNNs, denoted PGD-l2: 4, 8, 12 respectively, with step sizes of 2/255, 4/255 and 6/255. For CIFAR-10, where images are of size 32 × 32, the three adversarially trained CNNs are trained with a different set of ε values, but they are denoted in the same way and use the same step sizes as those for Caltech-256 & Tiny ImageNet.

B.2. FGSM Adversaries

The ε values for these two adversarially trained CNNs are 4/255 and 8/255, and they are denoted FGSM: 4 and FGSM: 8 respectively.

C. Stylized Test Sets

Following Geirhos et al. (2019), we construct stylized test sets for Caltech-256 and Tiny ImageNet by applying AdaIN style transfer (Huang & Belongie, 2017), with a fixed stylization coefficient, to every test image, using the style of a randomly selected painting from Kaggle's Painter by Numbers dataset (https://www.kaggle.com/c/painter-by-numbers/). We used the source code provided by Geirhos et al. (2019).

D. Fourier-Filtered Test Sets

Jo & Bengio (2017) showed that deep neural networks tend to learn surface statistical regularities rather than high-level abstractions. Following them, we test the performance of differently trained CNNs on high-pass and low-pass filtered datasets to reveal these tendencies.

D.1. Construction of Filtered Test Sets

Following Jo & Bengio (2017), we construct three types of Fourier-filtered versions of the test set.

  • The low-frequency filtered version (low-pass filtering). We use a radial mask in the Fourier domain to set the higher-frequency modes to zero.

  • The high-frequency filtered version (high-pass filtering). We use a radial mask in the Fourier domain to preserve only the higher-frequency modes.

  • The randomly filtered version. We use a random mask in the Fourier domain that sets each mode to zero independently with a fixed probability. The random mask is generated on the fly during testing.
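These three filters can be sketched with NumPy's FFT. The cutoff radius `r` and the drop probability `p` are illustrative parameters, not the values used in the paper, and the sketch handles a single channel:

```python
import numpy as np

def fourier_filter(img, r, mode="low"):
    """Filter a 2-D image by keeping Fourier modes inside ("low")
    or outside ("high") a radius-r disk around the zero frequency."""
    h, w = img.shape
    f = np.fft.fftshift(np.fft.fft2(img))  # zero frequency at the center
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= r if mode == "low" else dist > r
    # Back to the spatial domain; the imaginary residue is numerical noise.
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def random_fourier_filter(img, p, rng=None):
    """Zero out each Fourier mode independently with probability p."""
    rng = rng or np.random.default_rng()
    f = np.fft.fft2(img)
    keep = rng.random(f.shape) >= p
    return np.real(np.fft.ifft2(f * keep))
```

Because the low- and high-pass masks are complementary, the two filtered images sum back to the original.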

D.2. Results on Filtered Test Sets

We measure the generalization performance (accuracy on correctly classified images) of each model on these three filtered versions of Caltech-256; results are listed in Table 3. AT-CNNs perform better on the low-pass filtered dataset and worse on the high-pass filtered one. These results indicate that AT-CNNs base their predictions more on low-frequency information. This finding is consistent with our conclusions, since local features such as textures are largely high-frequency information, while shapes and contours are mostly low-frequency.

Model      Low-frequency filtered   High-frequency filtered   Randomly filtered
Standard   15.8                     16.5                      73.5
Underfit   14.5                     17.6                      62.2
PGD        71.1                     3.6                       73.4

Table 3: “Accuracy on correctly classified images” for different models on three Fourier-filtered Caltech-256 test sets.
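For clarity, "accuracy on correctly classified images" evaluates the transformed test set only over the samples a model already classifies correctly on clean data. A sketch of the metric (array names are illustrative):

```python
import numpy as np

def acc_on_correct(clean_pred, transformed_pred, labels):
    """Accuracy on the transformed data, restricted to samples the
    model classifies correctly on the corresponding clean data."""
    correct = clean_pred == labels
    if not correct.any():
        return float("nan")
    return float((transformed_pred[correct] == labels[correct]).mean())
```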
E. Detailed Quantitative Results

We present the detailed results of our quantitative experiments here. Tables 4, 5 and 6 show the results of each model on test sets with different saturation levels. Tables 7 and 8 list the results of each model on test sets after different patch-shuffle operations.

Model / saturation level   0.25   0.5   1   4   8   16   64   1024
Standard      28.62 57.45 85.20 90.13 65.37 42.37 23.45 20.03
Underfit      31.84 63.36 90.96 84.51 57.51 38.58 26.00 23.08
PGD-inf: 8    32.84 53.47 82.72 86.45 70.33 61.09 53.76 51.91
PGD-inf: 4    31.99 57.74 85.18 87.95 70.33 58.38 48.16 45.45
PGD-inf: 2    32.99 60.75 87.75 89.35 68.78 51.99 40.69 37.83
PGD-inf: 1    32.67 61.85 89.36 90.18 69.07 50.05 37.98 34.80
PGD-l2: 12    31.38 53.07 82.10 83.89 67.06 58.51 52.45 50.75
PGD-l2: 8     32.82 56.65 85.01 86.09 68.90 58.75 51.59 49.30
PGD-l2: 4     32.82 58.77 86.30 86.36 67.94 53.68 44.43 41.98
FGSM: 8       29.53 55.46 85.10 86.65 69.01 55.64 45.92 43.42
FGSM: 4       32.68 59.37 87.22 87.90 66.71 51.13 41.66 38.78
Table 4: “Accuracy on correctly classified images” for different models on the saturated Caltech-256 test set. It is easily observed that AT-CNNs are much more robust to increasing saturation levels on Caltech-256.
Model / saturation level   0.25   0.5   1   4   8   16   64   1024
Standard      7.24 25.88 72.52 72.73 25.38 8.24 2.62 1.93
Underfit      7.34 25.44 69.80 60.67 18.01 6.72 3.16 2.65
PGD-inf: 8    11.07 29.08 67.11 74.53 49.8 40.16 35.44 33.96
PGD-inf: 4    12.44 33.53 72.94 75.75 46.38 32.12 24.92 22.65
PGD-inf: 2    12.09 34.85 75.77 76.15 41.35 25.20 16.93 14.52
PGD-inf: 1    11.30 35.03 76.85 78.63 40.48 21.37 12.70 10.81
PGD-l2: 12    11.30 29.48 66.94 75.22 52.26 42.11 37.20 35.85
PGD-l2: 8     12.42 32.78 71.94 75.15 47.92 35.66 29.55 27.90
PGD-l2: 4     12.63 34.10 74.06 77.32 45.00 28.73 20.16 18.04
FGSM: 8       12.59 32.66 70.55 81.53 41.83 17.52 7.29 5.82
FGSM: 4       12.63 34.10 74.06 75.05 42.91 29.09 22.15 20.14
Table 5: “Accuracy on correctly classified images” for different models on the saturated Tiny ImageNet test set. It is easily observed that AT-CNNs are much more robust to increasing saturation levels on Tiny ImageNet.
Model / saturation level   0.25   0.5   1   4   8   16   64   1024
Standard      27.36 55.95 91.03 93.12 69.98 48.30 34.39 31.06
Underfit      21.43 50.28 87.71 89.89 66.09 43.35 29.10 26.13
PGD-inf: 8    26.05 46.96 80.97 89.16 75.46 69.08 58.98 64.64
PGD-inf: 4    27.22 49.81 84.16 89.79 73.89 65.35 59.99 58.47
PGD-inf: 2    28.32 53.12 86.93 91.37 74.02 62.82 55.25 52.60
PGD-inf: 1    27.18 53.59 88.54 91.77 72.67 58.39 47.25 41.75
PGD-l2: 12    25.99 46.92 81.72 88.44 73.92 66.03 60.98 59.41
PGD-l2: 8     27.75 50.29 83.76 80.92 73.17 64.83 58.64 46.94
PGD-l2: 4     27.26 51.17 85.78 90.08 73.12 61.50 52.04 48.79
FGSM: 8       25.50 46.11 81.72 87.67 74.22 67.12 62.51 61.32
FGSM: 4       26.39 58.93 84.30 89.02 73.47 64.43 58.80 56.82
Table 6: “Accuracy on correctly classified images” for different models on the saturated CIFAR-10 test set. It is easily observed that AT-CNNs are much more robust to increasing saturation levels on CIFAR-10.
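The saturation operation behind Tables 4–6 can be sketched as below, assuming the standard power-law formulation in which level 2 is the identity, larger levels push pixels toward 0 or 1 (more "binary"), and smaller levels wash the image out toward 0.5; the exact transform is defined in the main text, so treat this formulation as an assumption:

```python
import numpy as np

def saturate(img, level):
    """Saturate pixel values in [0, 1]. Level 2 leaves the image
    unchanged; levels > 2 push pixels toward {0, 1}; levels < 2
    shrink them toward 0.5 (washed out)."""
    centered = 2.0 * img - 1.0                        # map [0, 1] -> [-1, 1]
    out = np.sign(centered) * np.abs(centered) ** (2.0 / level)
    return (out + 1.0) / 2.0                          # map back to [0, 1]
```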
Model / increasing patch-shuffle level t
Standard      84.76 51.50 10.84
Underfit      75.59 33.41 6.03
PGD-inf: 8    58.13 20.14 7.70
PGD-inf: 4    68.54 26.45 8.18
PGD-inf: 2    74.25 30.77 9.00
PGD-inf: 1    78.11 35.03 8.42
PGD-l2: 12    58.25 21.03 7.85
PGD-l2: 8     63.36 22.19 8.48
PGD-l2: 4     69.65 28.21 7.72
FGSM: 8       64.48 22.94 8.07
FGSM: 4       70.50 28.41 6.03
Table 7: “Accuracy on correctly classified images” for different models on the patch-shuffled Caltech-256 test set. Results indicate that AT-CNNs are more sensitive to patch-shuffle operations on Caltech-256.
Model / increasing patch-shuffle level t
Standard      66.73 24.87 4.48
Underfit      59.22 23.62 4.38
PGD-inf: 8    41.08 16.05 6.83
PGD-inf: 4    49.54 18.23 6.30
PGD-inf: 2    55.96 19.95 5.61
PGD-inf: 1    60.19 23.24 6.08
PGD-l2: 12    42.23 16.95 7.66
PGD-l2: 8     47.67 16.28 6.50
PGD-l2: 4     51.94 17.79 5.89
FGSM: 8       57.42 20.70 4.73
FGSM: 4       50.68 16.84 5.98
Table 8: “Accuracy on correctly classified images” for different models on the patch-shuffled Tiny ImageNet test set. Results indicate that AT-CNNs are more sensitive to patch-shuffle operations on Tiny ImageNet.
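The patch-shuffle operation behind Tables 7 and 8 splits an image into a k × k grid of patches and randomly permutes them, destroying global shape while preserving local texture statistics. A single-channel sketch (k is illustrative and is assumed to divide the image side):

```python
import numpy as np

def patch_shuffle(img, k, rng=None):
    """Split img into a k x k grid of patches and randomly permute
    them, keeping local statistics but destroying global shape."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    ph, pw = h // k, w // k          # assumes k divides both dimensions
    patches = [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(k) for j in range(k)]
    order = rng.permutation(len(patches))
    # Reassemble the permuted patches into rows, then stack the rows.
    rows = [np.concatenate([patches[order[i * k + j]] for j in range(k)], axis=1)
            for i in range(k)]
    return np.concatenate(rows, axis=0)
```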
F. Additional Sensitivity Maps

We show additional sensitivity maps in Figure 9. We also compare the sensitivity maps computed with Grad and SmoothGrad in Figure 10.

Figure 9: Visualization of salience maps generated from SmoothGrad (Smilkov et al., 2017) for all 11 models. From left to right: standard CNN, underfitting CNN, PGD-inf: 8, 4, 2, 1, PGD-l2: 12, 8, 4, and FGSM: 8, 4.
Figure 10: Visualization of salience maps generated from Grad for all 11 models. From left to right: standard CNN, underfitting CNN, PGD-inf: 8, 4, 2, 1, PGD-l2: 12, 8, 4, and FGSM: 8, 4. Sensitivity maps generated from Grad are visibly noisier than those from its smoothed variant SmoothGrad, especially for the standard and underfitting CNNs.
