Enhancing Adversarial Example Transferability with an Intermediate Level Attack
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model. However, adversarial examples are typically overfit to exploit the particular architecture and feature representation of a source model, resulting in sub-optimal black-box transfer attacks to other target models. We introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified layer of the source model, improving upon state-of-the-art methods. We show that we can select a layer of the source model to perturb without any knowledge of the target models while achieving high transferability. Additionally, we provide some explanatory insights regarding our method and the effect of optimizing for adversarial examples using intermediate feature maps.
Adversarial examples are small, imperceptible perturbations of images carefully crafted to fool trained models [30, 8]. Prior studies have shown that Convolutional Neural Networks (CNNs) are particularly vulnerable to such adversarial attacks. The existence of these attacks suggests that our architectures and training procedures produce fundamental blind spots in our models, and that our models are not learning the same features that humans do.
These adversarial attacks are of interest for more than just the theoretical issues they pose; concerns have also been raised over the vulnerability of CNNs to these perturbations in the real world, where they are used for mission-critical applications such as online content filtering systems and self-driving cars [7, 15]. As a result, a great deal of effort has been dedicated to studying adversarial perturbations. Much of the literature focuses on developing new attacks that use different perceptibility metrics [2, 28, 26] or security settings (black box/white box) [23, 1], as well as on increasing attack efficiency. Defending against adversarial attacks is also well studied; in particular, adversarial training, where models are trained on adversarial examples, has been shown to be effective under certain assumptions [18, 27].
Adversarial attacks can be classified into two categories: white-box attacks and black-box attacks. In white-box attacks, information about the model (i.e., its architecture, gradient information, etc.) is accessible, whereas in black-box attacks, attackers have access only to the model's predictions. Black-box attacks are a bigger concern for real-world applications for the obvious reason that such applications typically will not reveal their models publicly, especially when security is a concern (e.g., CNN-based objectionable content filters in social media). Consequently, black-box attacks mostly rely on the transferability of adversarial examples.
Moreover, adversarial examples generated using white-box attacks will sometimes successfully attack an unrelated model. This phenomenon is known as "transferability." However, black-box success rates for an attack are nearly always lower than those of white-box attacks, suggesting that white-box attacks overfit on the source model. Different adversarial attacks transfer at different rates, but most of them do not optimize specifically for transferability. This paper aims to increase the transferability of a given adversarial example. To this end, we propose a novel method, the Intermediate Level Attack (ILA), that fine-tunes a given adversarial example by examining its representations in intermediate feature maps.
Our method draws upon two primary intuitions. First, while we do not expect the direction found by the original adversarial attack to be optimal for transferability, we do expect it to be a reasonable proxy, as it still transfers far better than random noise would. As such, if we are searching for a more transferable attack, we should be willing to stray from the original attack direction in exchange for increasing the norm (perturbations with a higher norm are generally more effective, regardless of layer; this holds true for black-box attacks as well). However, from the ineffectiveness of random noise on neural networks, we see that straying too far from the original direction will cause a decrease in effectiveness, even if we are able to increase the norm by a modest amount. Thus, we must balance staying close to the original direction against increasing the norm. A natural way to do so is to maximize the projection onto the original adversarial perturbation.
Second, we note that although for transferability we would like to sacrifice some direction in exchange for increasing the norm, we are unable to do so in the image space without changing perceptibility, as norm and perceptibility are intrinsically tied (under the standard $\epsilon$-ball constraints). However, if we examine the intermediate feature maps, perceptibility (in image space) is no longer intrinsically tied to the norm in an intermediate feature map, and we may be able to increase the norm of the perturbation in that feature space significantly with no change in perceptibility in the image space. We will investigate the effects of perturbing different intermediate feature maps on transferability and provide insights drawn from empirical observations.
Our contributions are as follows:
We propose a novel method, ILA, that enhances black-box adversarial transferability by increasing the perturbation on a pre-specified layer of a model. We conduct a thorough evaluation that shows our method improves upon state-of-the-art methods on multiple models across multiple datasets. See Sec. 4.
We introduce a procedure, guided by empirical observations, for selecting a layer that maximizes transferability using the source model alone, thus obviating the need to evaluate on transfer models during hyperparameter optimization. See Sec. 4.2.
Additionally, we provide explanatory insights into the effects of optimizing for adversarial examples using intermediate feature maps. See Sec. 5.
2 Background and Related Work
2.1 General Adversarial Attacks
An adversarial example for a given model is generated by augmenting an image so that in the model’s decision space its representation moves into the wrong region. Most prior work in generating adversarial examples for attack focuses on disturbing the softmax output space via the input space [8, 18, 21, 6]. Some representative white-box attacks are the following:
Gradient Based Approaches The Fast Gradient Sign Method (FGSM) generates an adversarial example with the update rule:

$x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(F(x), y))$

It is the linearization of the maximization problem

$\max_{\|x' - x\|_\infty \leq \epsilon} J(F(x'), y)$

where $x$ represents the original image; $x'$ is the adversarial example; $y$ is the ground-truth label; $J$ is the loss function; and $F$ is the model up to the final softmax layer. Its iterative version (I-FGSM) applies FGSM iteratively. Intuitively, this fools the model by increasing its loss, which eventually causes misclassification. In other words, it finds perturbations in the direction of the loss gradient of the last layer (i.e., the softmax layer).
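The iterative update above can be made concrete with a minimal numpy sketch. Here `loss_grad` is a hypothetical stand-in for $\nabla_x J$, which a real implementation would obtain by backpropagation through the model:

```python
import numpy as np

def i_fgsm(x, loss_grad, eps=0.03, lr=0.01, n_iter=20):
    """Iterative FGSM sketch: ascend the sign of the loss gradient,
    projecting back into the eps-ball around the original image x
    and into the valid pixel range after every step."""
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + lr * np.sign(loss_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # l_inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)           # valid pixel range
    return x_adv

# Toy example: a "loss" whose gradient is constant and positive,
# so every pixel is pushed up until it hits the eps-ball boundary.
x = np.full((3, 4, 4), 0.5)
x_adv = i_fgsm(x, lambda z: np.ones_like(z), eps=0.03, lr=0.01, n_iter=20)
```

The two `clip` calls implement the $\|x' - x\|_\infty \leq \epsilon$ constraint from the maximization problem above.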
Decision Boundary Based Approaches Deepfool  produces approximately the closest adversarial example iteratively by stepping towards the nearest decision boundary. Universal Adversarial Perturbation  uses this idea to craft a single image-agnostic perturbation that pushes most of a dataset’s images across a model’s classification boundary.
Model Ensemble Attack The methods mentioned above are designed to yield the best performance only on the model they are tuned to attack; often, the generated adversarial examples do not transfer to other models. In contrast, the Model-based Ensembling Attack transfers better by avoiding dependence on any specific model. It uses $k$ models with softmax outputs, notated as $F_1, \ldots, F_k$, and (writing $\alpha_i$ for the ensemble weights) solves

$\max_{\|x' - x\|_\infty \leq \epsilon} \sum_{i=1}^{k} \alpha_i \, J(F_i(x'), y)$
Using such an approach, the authors showed that the decision boundaries of different CNNs align with each other. Consequently, an adversarial example that fools multiple models is likely to fool other models as well.
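A single ascent step on such an ensemble objective can be sketched as follows; the uniform sign-gradient step and the weighting scheme here are illustrative simplifications, not the original attack's exact formulation:

```python
import numpy as np

def ensemble_attack_step(x_adv, grad_fns, weights, lr=0.01):
    """One sign-gradient ascent step on a weighted ensemble loss
    sum_i alpha_i * J_i. Each entry of grad_fns returns one model's
    loss gradient with respect to the input."""
    g = sum(w * g_fn(x_adv) for w, g_fn in zip(weights, grad_fns))
    return x_adv + lr * np.sign(g)

# Toy check with two constant-gradient "models" pulling in opposite
# directions; the more heavily weighted model wins.
x0 = np.zeros(4)
x1 = ensemble_attack_step(
    x0,
    [lambda z: np.ones_like(z), lambda z: -np.ones_like(z)],
    weights=[0.75, 0.25],
)
```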
2.2 Intermediate-layer Adversarial Attacks
A small number of studies have focused on perturbing mid-layer outputs. These include , which perturbs mid-layer activations by crafting a single universal perturbation that produces as many spurious mid-layer activations as possible. Another is Feature Adversary Attack [32, 25], which performs a targeted attack by minimizing the distance of the representations of two images in internal neural network layers (instead of in the output layer). However, instead of emphasizing adversarial transferability, it focuses more on internal representations. Results in the paper show that even when given a guide image and a dissimilar target image, it is possible to perturb the target image to produce an embedding similar to that of the guide image.
Two other related works [12, 24] focus on perturbing intermediate activation maps for the purpose of increasing adversarial transferability in a method similar to that of [32, 25] except they focus on black-box transferability. Their method does not focus on fine-tuning existing adversarial examples and differs significantly in attack methodology from ours.
Another recent work that examines intermediate layers for the purposes of increasing transferability is TAP . The TAP attack attempts to maximize the norm between the original image and the adversarial example at all layers. In contrast to our approach, they do not attempt to take advantage of a specific layer’s feature representations, instead choosing to maximize the norm of the difference across all layers. In addition, unlike their method which generates an entirely new adversarial example, our method fine-tunes existing adversarial examples, allowing us to leverage existing adversarial attacks.
3 Approach

Based on the motivation presented in the introduction, we propose the Intermediate Level Attack (ILA) framework, shown in Algorithm 2. We propose the following two variants, differing in their definition of the loss function $L$. Note that we define $F_l(x)$ as the output at layer $l$ of a network $F$ given an input $x$.
3.1 Intermediate Level Attack Projection (ILAP) Loss
Given an adversarial example $x'$ generated by attack method $A$ for a natural image $x$, we wish to enhance its transferability by focusing on a layer $l$ of a given network $F$. Although $x' - x$ is not the optimal direction for transferability, we view it as a hint for this direction. We treat the mid-layer disturbance $\Delta y'_l = F_l(x') - F_l(x)$ as a directional guide towards becoming more adversarial, with emphasis on the disturbance at layer $l$. Our attack will attempt to find an $x''$ such that the new disturbance $\Delta y''_l = F_l(x'') - F_l(x)$ matches the direction of $\Delta y'_l$ while maximizing the norm of the disturbance in that direction. The high-level idea is that we want to maximize $\mathrm{proj}_{\Delta y'_l}(\Delta y''_l)$, for the reasons expressed in Section 1. Since this is a maximization, we can disregard constants, and it simply becomes a dot product. The objective we solve is given below, and we term it the ILA projection loss:

$L(y'_l, y''_l) = -\Delta y'_l \cdot \Delta y''_l$
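The projection loss reduces to the negative dot product of two mid-layer disturbances. A small numpy sketch, with `F_l` as any callable standing in for the layer-$l$ feature map (all names here are illustrative):

```python
import numpy as np

def ilap_loss(F_l, x, x_ref_adv, x_new_adv):
    """ILA projection loss sketch: the negative dot product between
    the reference disturbance (from the given adversarial example)
    and the new disturbance, both measured at a layer's feature map.
    Minimizing this maximizes the projection onto the reference."""
    dy_ref = (F_l(x_ref_adv) - F_l(x)).ravel()
    dy_new = (F_l(x_new_adv) - F_l(x)).ravel()
    return -float(dy_ref @ dy_new)

# Toy feature map: the identity. Doubling the disturbance along the
# reference direction doubles the projection, i.e. halves the loss.
F_l = lambda z: z
x = np.zeros(3)
loss_small = ilap_loss(F_l, x, np.array([1.0, 0, 0]), np.array([1.0, 0, 0]))
loss_large = ilap_loss(F_l, x, np.array([1.0, 0, 0]), np.array([2.0, 0, 0]))
```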
3.2 Intermediate Level Attack Flexible (ILAF) Loss
Since $x'$ may not be the optimal direction to optimize towards, we may want to give the above loss greater flexibility. We do this by explicitly balancing norm maximization against fidelity to the adversarial direction at layer $l$; note that in a rough sense, ILAF is optimizing for the same objective as ILAP. We augment the above loss by separating the maintenance of the adversarial direction from the magnitude, and control the trade-off with an additional parameter $\alpha$ to obtain the following loss, termed the ILA flexible loss:

$L(y'_l, y''_l) = -\alpha \cdot \frac{\|\Delta y''_l\|_2}{\|\Delta y'_l\|_2} - \frac{\Delta y''_l}{\|\Delta y''_l\|_2} \cdot \frac{\Delta y'_l}{\|\Delta y'_l\|_2}$
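The flexible variant separates a norm-ratio (magnitude) term from a cosine (direction-maintenance) term, weighted by $\alpha$. A numpy sketch under the same illustrative conventions as before:

```python
import numpy as np

def ilaf_loss(F_l, x, x_ref_adv, x_new_adv, alpha=1.0):
    """ILA flexible loss sketch: alpha weighs the norm-ratio term
    against the cosine-similarity term; both are negated so that
    minimizing the loss maximizes them."""
    dy_ref = (F_l(x_ref_adv) - F_l(x)).ravel()
    dy_new = (F_l(x_new_adv) - F_l(x)).ravel()
    n_ref, n_new = np.linalg.norm(dy_ref), np.linalg.norm(dy_new)
    magnitude = n_new / n_ref                        # norm maximization
    direction = (dy_new @ dy_ref) / (n_new * n_ref)  # cosine fidelity
    return -(alpha * magnitude + direction)

# Toy case: new disturbance is twice the reference, same direction,
# so magnitude = 2 and cosine = 1.
F_l = lambda z: z
x = np.zeros(2)
loss = ilaf_loss(F_l, x, np.array([1.0, 0.0]), np.array([2.0, 0.0]), alpha=1.0)
```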
In practice, we choose either the ILAP or ILAF loss and iterate for $n$ steps to attain an approximate solution to the respective maximization objective. Note that the projection loss has only the layer $l$ as a hyperparameter, whereas the flexible loss also has the loss weight $\alpha$. The above attack assumes that $x'$ is a pre-generated adversarial example; as such, the attack can be viewed as a fine-tuning of $x'$. We fine-tune for a greater norm of the output difference at layer $l$ (which we hope will be conducive to greater transferability) while attempting to preserve the output difference's direction, to avoid destroying the original adversarial structure.
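The fine-tuning loop can be sketched end to end under a deliberately simplified linear feature map $F_l(z) = Wz$, for which the gradient of the projection objective is available in closed form ($W^\top \Delta y'_l$); a real implementation would instead backpropagate through the source CNN up to layer $l$. All names here are illustrative:

```python
import numpy as np

def ila_finetune(x, x_prime, W, eps=0.03, lr=0.01, n_iter=10):
    """ILA fine-tuning sketch with a linear stand-in feature map
    F_l(z) = W @ z. Starting from the reference adversarial example
    x_prime, ascend the projection of the new disturbance onto the
    fixed reference disturbance, staying in the eps-ball around x."""
    dy_ref = W @ x_prime - W @ x        # fixed reference disturbance
    x_adv = x_prime.copy()
    for _ in range(n_iter):
        # Gradient of (W @ x_adv - W @ x) . dy_ref w.r.t. x_adv is W^T dy_ref.
        x_adv = x_adv + lr * np.sign(W.T @ dy_ref)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # l_inf projection
    return x_adv

# Toy run: identity feature map; the reference perturbation direction
# is amplified until it saturates the eps-ball.
W = np.eye(2)
x = np.array([0.5, 0.5])
x_prime = np.array([0.52, 0.48])
x_tuned = ila_finetune(x, x_prime, W, eps=0.03, lr=0.01, n_iter=10)
```

Note that the reference disturbance is computed once and then held fixed, matching the view of ILA as fine-tuning a pre-generated example.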
4 Experiments

We start by showing that ILAP increases transferability against I-FGSM, MI-FGSM and Carlini-Wagner in the context of CIFAR-10 (Sections 4.1 and 4.2). Results for FGSM and Deepfool are shown in Appendix A (we re-implemented all attacks except Deepfool, for which we used the original publicly provided implementation; for C&W, we used a randomized targeted version, since it has better performance). We test on a variety of models, namely: ResNet18, SENet18, DenseNet121 and GoogLeNet. Architecture details are specified in Appendix A; note that in the results sections below, instead of referring to architecture-specific layer names, we refer to layer indices (e.g., the index of the last layer of the first block). Our models are trained on CIFAR-10 with the code and hyperparameters in  to competitive final test accuracies for ResNet18, SENet18, DenseNet121, and GoogLeNet.
For a fair comparison, we use the output of an attack that was run for 20 iterations as a baseline. ILAP runs for 10 iterations from scratch, using the output of the attack after 10 iterations as the reference adversarial example. The learning rate is set to the same value for both I-FGSM and MI-FGSM (tuning the learning rate does not substantially affect transferability, as shown in Appendix G).
In Section 4.2 we also show that we can select a nearly-optimal layer for transferability using only the source model. Moreover, ILAF allows further tuning to improve the performance across layers (Section 4.3).
Finally, we demonstrate that ILAP also improves transferability in the more complex setting of ImageNet, and that it surpasses state-of-the-art attacks focused on increasing transferability, namely the Zhou et al. attack (TAP) and the Xie et al. attack (Section 4.4).
4.1 ILAP Targeted at Different $l$ Values
To confirm the effectiveness of our attack, we fix a single source model and baseline attack method, and then check how ILAP transfers to the other models compared to the baseline attack. Results for ResNet18 as the source model and I-FGSM as the baseline method are shown in Figure 3. Comparing the results of both methods on the other models, we see that ILAP outperforms I-FGSM when targeting any given intermediate layer, and does especially well for the optimal hyperparameter value $l = 4$. Note that the choice of layer is important for performance on both the source model and the target models. Full results are shown in Appendix A.
[Table 1: ImageNet transfer results comparing 20-iteration TAP and DI-FGSM baselines against optimal-layer ILAP; entries where the transfer model is the same as the source model are marked.]
4.2 ILAP with a Pre-Determined $l$ Value
Above we demonstrated that adversarial examples produced by ILAP exhibit the strongest transferability when targeting a specific layer (i.e., choosing that layer as the hyperparameter $l$). We wish to pre-determine this optimal $l$ based on the source model alone, so as to avoid tuning the hyperparameter by evaluating on other models. To do this, we examine the relationship between transferability and the ILAP layer disturbance values for a given ILAP attack. We define the disturbance values of an ILAP attack perturbation as the values of $\|F_m(x'') - F_m(x)\|_2$ for all layer indices $m$ in the source model. For each value of $l$ in ResNet18 (the set of candidate layers is defined for each architecture in Appendix A), we plot the disturbance values of the corresponding ILAP attack in Figure 4. The same figure is given for other models in Appendix B.
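The disturbance values are cheap to compute on the source model alone, since they only require a pair of forward passes. A sketch with a toy stack of callables standing in for a CNN's blocks (names are illustrative):

```python
import numpy as np

def disturbance_curve(layers, x, x_adv):
    """Norm of the feature-map disturbance after each layer index m,
    i.e. ||F_m(x_adv) - F_m(x)||_2, collected along a single pair of
    forward passes through the layer stack."""
    values = []
    h, h_adv = x, x_adv
    for layer in layers:
        h, h_adv = layer(h), layer(h_adv)
        values.append(float(np.linalg.norm(h_adv - h)))
    return values

# Toy "network": a scaling block followed by a bias block. Scaling
# doubles the disturbance; adding a bias leaves it unchanged.
layers = [lambda z: 2.0 * z, lambda z: z + 1.0]
curve = disturbance_curve(layers, np.zeros(2), np.array([0.1, 0.0]))
```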
We notice that the adversarial examples producing the latest peak in the graph are typically the ones with the highest transferability across all transfer models (Table 2). Given this observation, we propose that the latest $l$ that still exhibits a peak is a nearly optimal value of $l$ (in terms of maximizing transferability). For example, according to Figure 4, we would choose $l = 4$ as the expected optimal hyperparameter for ILAP with ResNet18 as the source model. Table 2 supports our claim and shows that selecting this layer gives an optimal or near-optimal attack. We discuss our discovered explanatory insights for this method in Section 5.3.
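One way to operationalize this rule, reading "peak" simply as the argmax position of each candidate's disturbance curve, is sketched below. The paper's criterion is judged visually from plots like Figure 4, so this is an assumption-laden simplification rather than the authors' procedure:

```python
def select_layer(curves):
    """Pick the candidate target layer whose disturbance curve peaks
    latest, breaking ties toward the later layer. `curves` maps each
    candidate layer index l to its list of per-layer disturbances."""
    best_l, best_peak = None, -1
    for l in sorted(curves):
        curve = curves[l]
        peak = max(range(len(curve)), key=curve.__getitem__)
        if peak >= best_peak:
            best_l, best_peak = l, peak
    return best_l

# Layer 4's curve peaks latest (index 3), so it is selected.
chosen = select_layer({
    3: [1.0, 3.0, 2.0, 1.0, 1.0],
    4: [1.0, 2.0, 3.0, 4.0, 2.0],
    5: [5.0, 1.0, 1.0, 1.0, 1.0],
})
```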
Table 2: CIFAR-10 transfer results for MI-FGSM and C&W baselines vs. ILAP (model accuracies on adversarial examples; lower is better; parentheses give the ILAP target layer).

| Source | Transfer | MI-FGSM 20 Itr | 10 Itr ILAP | Opt ILAP | C&W 1000 Itr | 500 Itr ILAP | Opt ILAP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18 | ResNet18* | 5.7% | 11.3% | 2.3% (6) | 7.3% | 5.2% | 2.1% (5) |
| ResNet18 | SENet18 | 33.8% | 30.6% | 30.6% (4) | 85.4% | 41.7% | 41.7% (4) |
| ResNet18 | DenseNet121 | 35.1% | 30.4% | 30.4% (4) | 84.4% | 41.7% | 41.7% (4) |
| ResNet18 | GoogLeNet | 45.1% | 37.7% | 37.7% (4) | 90.6% | 57.3% | 57.3% (4) |
| SENet18 | ResNet18 | 31.0% | 27.5% | 27.5% (4) | 87.5% | 42.7% | 42.7% (4) |
| SENet18 | SENet18* | 3.3% | 10.0% | 2.6% (6) | 6.2% | 7.3% | 3.1% (5) |
| SENet18 | DenseNet121 | 31.6% | 27.3% | 27.3% (4) | 88.5% | 38.5% | 38.5% (4) |
| SENet18 | GoogLeNet | 41.1% | 34.8% | 34.8% (4) | 91.7% | 52.1% | 52.1% (4) |
| DenseNet121 | SENet18 | 33.5% | 27.7% | 27.7% (6) | 86.5% | 34.4% | 34.4% (6) |
| DenseNet121 | GoogLeNet | 36.3% | 30.3% | 30.3% (6) | 90.6% | 45.8% | 45.8% (6) |
| DenseNet121 | DenseNet121* | 1.5% | 1.4% | 0.5% (11) | 4.2% | 0.0% | 0.0% (12) |

\* Same model as the source model.
4.3 ILAF vs. ILAP
We show that ILAF can further improve transferability via the additional tunable hyperparameter $\alpha$. The best ILAF result for each model improves over ILAP, as shown in Table 3. However, note that the optimal $\alpha$ differs for each model and requires substantial hyperparameter tuning to outperform ILAP. Thus, ILAF can be seen as a more model-specific version that requires more tuning, whereas ILAP works well out of the box. Full results are in Appendix C.
[Table 3: best ILAP vs. best ILAF transfer results per model.]
4.4 ILAP on ImageNet
We also tested ILAP on ImageNet, with ResNet18, DenseNet121, SqueezeNet, and AlexNet pretrained on ImageNet (as provided in torchvision ). The learning rates for all attacks (I-FGSM, MI-FGSM, and their ILAP variants) are tuned for best performance. To evaluate transferability, we tested the accuracies of the different models on adversarial examples generated from all ImageNet test images. We observe that ILAP improves over I-FGSM and MI-FGSM on ImageNet. Results for ResNet18 as the source model and I-FGSM as the baseline attack are shown in Figure 14. Full results are in Appendix D.
To show that our approach outperforms pre-existing methods, we tested ILAP against both TAP (code was not made available for this paper, so we reproduced their method to the best of our ability) and Xie et al. (pretrained ImageNet models for Inc-v3, Inc-v4, and IncRes-v2 were obtained from Cadene's GitHub repo ) in an ImageNet setting. The results are shown in Table 1; results indicating that ILAP is also competitive with TAP on CIFAR-10 are in Appendix H.
5 Explaining the Effectiveness of Intermediate Layer Emphasis
At a high level, we motivated projection onto an intermediate feature map disturbance as a way to increase transferability. We saw empirically that it is desirable to target the layer corresponding to the latest peak (see Figure 4) on the source model in order to maximize transferability. In this section, we attempt to explain the factors causing ILAP performance to vary across layers, as well as what they suggest about the optimal layer for ILAP. As we iterate through layer indices, two factors affect our performance: the angle between the original perturbation direction and the best transfer direction (defined below in Section 5.1), and the linearity of the model's decision boundary.
Below, we discuss how the factors change across layers and affect transferability of our attack.
5.1 Angle between the Best Transfer Direction and the Original Perturbation
Motivated by prior work showing that the decision boundaries of models with different architectures often align , we define the Best Transfer Direction (BTD):

Best Transfer Direction: Let $x$ be an image with true label $y$ and $\mathcal{H}$ be a large (but finite) set of distinct CNNs. Find

$x^* = \arg\max_{\|x' - x\|_\infty \leq \epsilon} \; \big|\{F \in \mathcal{H} : F(x') \neq y\}\big|$

Then the Best Transfer Direction of $x$ is $\mathrm{BTD}(x) = x^* - x$.
Since our method uses the original perturbation as an approximation for the BTD, it is intuitive that the better this approximation is in the current feature representation, the better our attack will perform.
We want to investigate how well a chosen source model attack, such as I-FGSM, aligns with the BTD throughout the layers. Here we measure alignment between an I-FGSM perturbation and an empirical estimate of the BTD (a multi-fool perturbation of the four models we evaluate on in the CIFAR-10 setting) using the angle between their feature map disturbances at each layer. As shown in Figure 6, the angle between the perturbation of I-FGSM and that of the BTD decreases as the layer index increases. Therefore, the later the target layer is in the source model, the better it is to use I-FGSM's attack direction as a guide. This is a factor increasing the transfer attack success rate as the layer index increases.
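Measuring this alignment at a given layer amounts to computing the angle between the two perturbations' feature-map disturbances. A sketch, with `F_l` as any callable feature extractor (names are illustrative):

```python
import numpy as np

def feature_angle(F_l, x, x_a, x_b):
    """Angle in degrees between two perturbed images' disturbances
    (relative to the clean image x) at a given feature map F_l."""
    da = (F_l(x_a) - F_l(x)).ravel()
    db = (F_l(x_b) - F_l(x)).ravel()
    cos = (da @ db) / (np.linalg.norm(da) * np.linalg.norm(db))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Orthogonal toy disturbances under an identity "feature map".
angle = feature_angle(lambda z: z, np.zeros(2),
                      np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```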
To test our hypothesis, we propose to eliminate this source of variation in performance by using a multi-fool perturbation as the starting perturbation for ILAP, which is a better approximation for the BTD. As shown in Figure 7, ILAP performs substantially better when using a multi-fool perturbation as a guide rather than an I-FGSM perturbation, thus confirming that using a better approximation of the BTD gives better performance for ILAP. In addition, we see that these results correspond with what we would expect from Figure 6. In the earlier layers, I-FGSM is a worse approximation of the BTD, so passing in a multi-fool perturbation improves performance significantly. In the later layers, I-FGSM is a much better approximation of the BTD, and we see that passing in a multi-fool perturbation does not increase performance much.
5.2 Linearity of Decision Boundary
If we view I-FGSM as optimizing to cross the decision boundary, we can interpret ILAP as optimizing to cross the decision boundary approximated by a hyperplane perpendicular to the I-FGSM perturbation. As the layer index increases, the function from the feature space to the final output of the source model tends to become increasingly linear (there are more nonlinearities between the earlier layers and the final layer than between a later layer and the final layer). In fact, at the final layer the decision boundary is completely linear. Thus, the increasing accuracy of our linear approximation of the decision boundary is one factor improving ILAP performance as we select later layers.
We define the “true decision boundary” as a majority-vote ensemble of a large number of CNNs. Note that for transfer, we care less about how well we are approximating the source model decision boundary than we do about how well we are approximating the true decision boundary. In most feature representations we expect that the true decision boundary is more linear, as ensembling reduces variance. However, note that at least in the final layer, by virtue of the source model decision boundary being exactly linear, the true decision boundary cannot be more linear, and is likely to be less linear.
We hypothesize that this flip is what causes us to perform worse in the final layers. In these layers, the source model decision boundary is more linear than the true decision boundary, so our approximation performs poorly. We test this hypothesis by attacking two variants of ResNet18 augmented with 3 linear layers before the last layer: one variant without activations following the added layers (var1) and one with (var2). As shown in Figure 8, ILAP performance decreases less in the second variant. Also note that these nonlinearities also cause worse ILAP performance earlier in the network.
Thus, we conclude that the extreme linearity of the last several layers is associated with ILAP performing poorly.
5.3 Explanation of the main result
In this section, we tie together all of the above factors to explain the optimal intermediate layer for transferability. Denote:
the decreasing angle difference between I-FGSM’s and BTD’s perturbation direction as Factor 1
the increasing linearity with respect to the decision boundary as we increase layer index as Factor 2, and
the excessive linearity of the source model decision boundary as Factor 3
On the transfer models, as the index of the attacked source model layer increases, Factors 1 and 2 increase the attack success rate, while Factor 3 decreases it. Thus, before some layer, Factors 1 and 2 cause transferability to increase with the layer index; afterward, Factor 3 wins out and causes transferability to decrease. The layer right before the point where this switch happens is therefore the layer that is optimal for transferability.
We note that this explanation would also justify the method presented in Section 4.2. Intuitively, having a peak corresponds with having the linearized decision boundary (from using projection as the objective) be very different from the source model’s decision boundary. If this were not the case, then I-FGSM would presumably have found this improved perturbation already. As such, choosing the last layer that we can get a peak at corresponds with both having as linear of a decision boundary as possible (as late of a layer as possible) while still having enough room to move (the peak).
On the source model, since there is no notion of a "transfer" attack, Factors 1 and 3 do not have any effect. Therefore, Factor 2 causes the performance on the later layers to improve, so much so that at the final layer ILAP's performance on the source model is actually equal to or better than that of all the attacks we used as baselines (see Figure 3). We hypothesize that the improved performance on the source model is the result of a simpler, and thus easier-to-optimize, loss landscape.
6 Conclusion

We introduce a novel attack, which we call ILA, that aims to enhance the transferability of any given adversarial example. It is a framework with the goal of enhancing transferability by increasing projection onto the Best Transfer Direction. Within this framework, we propose two variants, ILAP and ILAF, and analyze their performance. We demonstrate that there exist specific intermediate layers that we can target with ILA to substantially increase transferability with respect to the attack baselines. In addition, we show that a near-optimal target layer can be selected without any knowledge of transfer performance. Finally, we provide some intuition regarding ILA's performance and why it differs across feature spaces.
Potential future works include making use of the interactions between ILA and existing adversarial attacks to explain differences among existing attacks, as well as extending ILA to perturbations produced for different settings (universal or targeted perturbations). In addition, other methods of attacking intermediate feature spaces could be explored, taking advantage of the properties we explored in this paper.
Acknowledgments

We want to thank Pian Pawakapan, Prof. Kavita Bala, and Prof. Bharath Hariharan for helpful discussions. This work is supported in part by a Facebook equipment donation.
References

[1] A. Athalye, N. Carlini, and D. Wagner (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In ICML.
[2] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer (2017) Adversarial patch. CoRR abs/1712.09665.
[3] R. Cadene (2019) Pretrained-models.pytorch. GitHub. https://github.com/Cadene/pretrained-models.pytorch
[4] N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
[5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
[6] Y. Dong et al. (2017) Boosting adversarial attacks with momentum.
[7] I. Evtimov et al. (2017) Robust physical-world attacks on deep learning models.
[8] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. CoRR abs/1412.6572.
[9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
[10] J. Hu, L. Shen, and G. Sun (2017) Squeeze-and-excitation networks. CoRR abs/1709.01507.
[11] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269.
[12] N. Inkawhich, W. Wen, H. Li, and Y. Chen (2019) Feature space perturbations yield more transferable adversarial examples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7066–7074.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In NIPS.
[14] A. Krizhevsky (2009) Learning multiple layers of features from tiny images.
[15] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. CoRR abs/1607.02533.
[16] kuangliu (2018) PyTorch cifar10. GitHub. https://github.com/kuangliu/pytorch-cifar
[17] Y. Liu, X. Chen, C. Liu, and D. Song (2016) Delving into transferable adversarial examples and black-box attacks. CoRR abs/1611.02770.
[18] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017) Towards deep learning models resistant to adversarial attacks. CoRR abs/1706.06083.
[19] S. Marcel and Y. Rodriguez (2010) Torchvision: the machine-vision package of torch. In ACM Multimedia.
[20] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard (2017) Universal adversarial perturbations. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 86–94.
[21] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582.
[22] K. R. Mopuri, A. Ganeshan, and R. V. Babu (2018) Generalizable data-free objective for crafting universal adversarial perturbations. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[23] N. Papernot et al. (2017) Practical black-box attacks against machine learning. In AsiaCCS.
[24] A. Rozsa, M. Günther, and T. E. Boult (2017) LOTS about attacking deep features. In International Joint Conference on Biometrics (IJCB).
[25] S. Sabour, Y. Cao, F. Faghri, and D. J. Fleet (2015) Adversarial manipulation of deep representations. CoRR abs/1511.05122.
[26] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter (2017) Adversarial generative nets: neural network attacks on state-of-the-art face recognition. CoRR abs/1801.00349.
[27] A. Sinha, H. Namkoong, and J. Duchi (2017) Certifying some distributional robustness with principled adversarial training.
[28] J. Su, D. V. Vargas, and K. Sakurai (2017) One pixel attack for fooling deep neural networks. CoRR abs/1710.08864.
[29] C. Szegedy et al. (2015) Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9.
[30] C. Szegedy et al. (2013) Intriguing properties of neural networks. CoRR abs/1312.6199.
[31] C. Xie et al. (2019) Improving transferability of adversarial examples with input diversity. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[32] X. Yuan, P. He, Q. Zhu, and X. Li (2017) Adversarial examples: attacks and defenses for deep learning. CoRR abs/1712.07107.
[33] W. Zhou et al. (2018) Transferable adversarial perturbations. In ECCV.
7 Appendix

The following are provided in the supplementary material for this paper:
A more thorough description of the networks used, the layers selected for attack, and full results on other attacks tested
Complete disturbance graphs across layers for ILAP attacks fine-tuning I-FGSM
A more complete result showing the comparison between ILAP and ILAF
Full results for ILAP’s performance on ImageNet
Visualization of decision boundary
Results for different norm values
Results for ablating the learning rate used in ILAP
Results comparing ILAP to TAP  on CIFAR-10
7.1 ILAP Network Overview and Results for Other Base Attacks
As shown in the main paper, we tested ILAP against MI-FGSM, C&W, and TAP; we also tested I-FGSM, DeepFool, and FGSM. We test on a variety of models, namely ResNet18, SENet18, DenseNet121, and GoogLeNet trained on CIFAR-10. For each source model, each large-block output in the source model, and each attack, we generate adversarial examples for all images in the test set with 20 iterations of the attack as a baseline. We then generate adversarial examples with 10 iterations of the attack as input to ILA, which then runs for a further 10 iterations. The learning rate is set separately for I-FGSM, I-FGSM with momentum, and ILAP, and the same ℓ∞ norm bound is used for all attacks. We then evaluate the transferability of the baseline and ILA adversarial examples over the other models by testing their accuracies, as shown in Figure 11.
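The ILA fine-tuning step above can be sketched in miniature. This is a toy stand-in, not the paper's implementation: the source model's intermediate layer is replaced by a fixed linear map `F(x) = W @ x` so the gradient has a closed form, and the names `matvec` and `ilap_refine` are hypothetical. The sketch shows the core idea of ILAP: starting from a baseline adversarial example, take sign-gradient steps that increase the projection of the new feature-space perturbation onto the baseline attack's feature-space direction, while staying inside the ε-ball.

```python
def matvec(W, x):
    """Apply the toy 'intermediate layer' F(x) = W x (list-of-lists W)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def ilap_refine(x_orig, x_base, W, eps=0.03, lr=0.01, iters=10):
    """Fine-tune a baseline adversarial example x_base by maximizing the
    projection of the perturbation onto the baseline attack's direction
    in the intermediate feature space (ILAP objective, toy version)."""
    f_orig = matvec(W, x_orig)
    # Reference direction: feature-space shift caused by the baseline attack.
    d0 = [b - o for b, o in zip(matvec(W, x_base), f_orig)]
    x = list(x_base)
    for _ in range(iters):
        # Gradient of dot(F(x) - F(x_orig), d0) w.r.t. x is W^T d0.
        # (Constant here because F is linear; a real CNN would need autograd.)
        grad = [sum(W[i][j] * d0[i] for i in range(len(W)))
                for j in range(len(x))]
        # Sign-gradient ascent step, then clamp back into the eps-ball.
        x = [xi + lr * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]
        x = [min(max(xi, oi - eps), oi + eps) for xi, oi in zip(x, x_orig)]
    return x
```

With an identity feature map, the refined example simply pushes the baseline perturbation to the ε-boundary along its own direction, which is the expected behavior of the projection loss when no better feature direction exists.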
Below is the list of layers we picked for each source model, indexed starting from 0 in the experiment results:
ResNet18: conv, bn, layer1, layer2, layer3, layer4, linear (layer1-4 are basic blocks)
GoogLeNet: pre_layers, a3, b3, maxpool, a4, b4, c4, d4, e4, a5, b5, avgpool, linear
DenseNet121: conv1, dense1, trans1, dense2, trans2, dense3, trans3, dense4, bn, linear
SENet18: conv1, bn1, layer1, layer2, layer3, layer4, linear (layer1-4 are pre-activation blocks)
| Source | Transfer | 20 Itr | 10 Itr ILAP | Opt ILAP | 50 Itr | 25 Itr ILAP | Opt ILAP |
|---|---|---|---|---|---|---|---|
| ResNet18 | ResNet18† | 3.3% | 7.6% | 1.8% (5) | 48.7% | 12.9% | 5.4% (5) |
| ResNet18 | SENet18 | 44.4% | 27.5% | 27.5% (4) | 87.4% | 43.7% | 43.7% (4) |
| ResNet18 | DenseNet121 | 45.8% | 27.7% | 27.7% (4) | 89.1% | 43.8% | 43.8% (4) |
| ResNet18 | GoogLeNet | 58.6% | 35.8% | 35.8% (4) | 89.3% | 50.7% | 50.7% (4) |
| SENet18 | ResNet18 | 36.8% | 25.8% | 25.8% (4) | 91.9% | 40.3% | 39.9% (5) |
| SENet18 | SENet18† | 2.4% | 7.9% | 2.3% (6) | 56.8% | 11.4% | 5.1% (6) |
| SENet18 | DenseNet121 | 38.0% | 25.9% | 25.9% (4) | 92.9% | 41.3% | 41.1% (5) |
| SENet18 | GoogLeNet | 48.4% | 33.7% | 33.7% (4) | 92.3% | 48.7% | 48.7% (4) |
| GoogLeNet | SENet18 | 55.6% | 33.1% | 31.8% (3) | 92.1% | 42.9% | 42.9% (9) |
| GoogLeNet | GoogLeNet† | 0.9% | 0.8% | 0.4% (11) | 51.5% | 4.2% | 3.9% (11) |

† Same model as the source model.
7.2 Disturbance graphs
In this experiment, we used the same settings as in our main experiment in Appendix 7.1 to generate adversarial examples, with only I-FGSM used as the reference attack. The average disturbance of each set of adversarial examples is calculated at each layer. We repeated the experiment for all four models described in Appendix 7.1, as shown in Figure 12. Note that the layer index in the legend refers to the layer hyperparameter set in the ILA attack, while the x-axis indicates the layer at which the disturbance values were computed.
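The per-layer disturbance described above can be computed straightforwardly. This is a minimal sketch under the assumption that "disturbance" means the mean ℓ2 norm of the difference between adversarial and clean feature maps, averaged over examples; the function name `layer_disturbance` and the flat-list feature representation are illustrative stand-ins for real CNN activations.

```python
import math

def layer_disturbance(feats_clean, feats_adv):
    """Per-layer disturbance: mean L2 norm of (adversarial - clean)
    feature maps, averaged over examples.
    feats_clean[layer][example] is a flat list of activations
    (a toy stand-in for real intermediate feature maps)."""
    out = []
    for fc_layer, fa_layer in zip(feats_clean, feats_adv):
        norms = [math.sqrt(sum((a - c) ** 2 for c, a in zip(fc, fa)))
                 for fc, fa in zip(fc_layer, fa_layer)]
        out.append(sum(norms) / len(norms))
    return out
```

Plotting the returned list against the layer index reproduces the shape of a disturbance graph: one curve per ILA layer hyperparameter, evaluated at every layer on the x-axis.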
7.3 ILAP vs ILAF Full Result
As described in the main paper, we compared the performance of ILAP and ILAF over a range of hyperparameter values. We used the same settings as in our main experiment in Appendix 7.1 for ILAP and ILAF to generate adversarial examples, with only I-FGSM used as the reference attack. The results are shown in Figure 13.
7.4 ILAP on ImageNet Full Result
We tested ILAP against I-FGSM and I-FGSM with momentum on ImageNet, similarly to the experiment on CIFAR-10. The models we used are ResNet18, DenseNet121, SqueezeNet1.0, and AlexNet. The learning rate is set separately for I-FGSM, ILAP plus I-FGSM, I-FGSM with momentum, and ILAP plus I-FGSM with momentum. To evaluate transferability, we test the accuracies of the different models on adversarial examples generated from all ImageNet test images, as shown in Figure 14.
Below is the list of layers we picked for each source model:
ResNet18: conv1, bn1, layer1, layer2, layer3, layer4, fc
DenseNet121: conv0, denseblock1, transition1, denseblock2, transition2, denseblock3, transition3, denseblock4, norm5, classifier
SqueezeNet1.0: features 0, 3, 4, 5, 7, 8, 9, 10, 12; classifier
AlexNet: features 0, 3, 4, 6, 8, 10; classifiers 1, 4
7.5 Visualization of the Decision Boundary
To gain some understanding of how ILA interacts with the decision boundaries, we visualize, for some examples, the two-dimensional plane spanned by the initial I-FGSM perturbation and the ILA perturbation. Visualization is done on ResNet18 with layer 4, with I-FGSM as the starting perturbation. See Figure 15.
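The plane visualization above amounts to sampling a grid in the 2-D affine subspace spanned by the two perturbation directions and classifying each sampled point. A minimal sketch of the sampling step, with the hypothetical name `plane_grid` and images flattened to lists (the classifier, which would be the source model in practice, is omitted):

```python
def plane_grid(x, d1, d2, steps=5, span=1.0):
    """Sample the 2-D plane x + a*d1 + b*d2 around a base image x,
    where d1 is e.g. the I-FGSM perturbation and d2 the ILA perturbation.
    a and b each range over [-span, span] with `steps` samples (steps >= 2).
    Returns (a, b, point) tuples; classifying each point and coloring
    by predicted label draws the decision regions in this plane."""
    grid = []
    for i in range(steps):
        for j in range(steps):
            a = -span + 2 * span * i / (steps - 1)
            b = -span + 2 * span * j / (steps - 1)
            point = [xi + a * u + b * v for xi, u, v in zip(x, d1, d2)]
            grid.append((a, b, point))
    return grid
```

Note that (a, b) = (1, 0) recovers the I-FGSM example and (0, 1) the ILA example, so both attacks appear as marked points in the same picture.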
7.6 Fooling with Different Norm Bound Values
In this experiment, we use ILAP to generate adversarial examples with an I-FGSM baseline attack on ResNet18 under different norm bound values, while all other settings are kept the same as in Appendix 7.1. We then evaluated their transferability against the I-FGSM baseline on the adversarial examples of the whole test set, as shown in Figure 16.
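Sweeping the norm bound only changes the projection step of the attack: after each update, the example is clamped back into the ℓ∞ ball of the chosen radius around the original image (and into the valid pixel range). A minimal sketch, with the hypothetical name `project_linf` and images flattened to lists of pixel values in [0, 1]:

```python
def project_linf(x_adv, x_orig, eps):
    """Project an adversarial example into the l_inf ball of radius eps
    around the original image, then into the valid pixel range [0, 1].
    This is the only step that changes when sweeping eps values."""
    out = []
    for a, o in zip(x_adv, x_orig):
        a = min(max(a, o - eps), o + eps)  # eps-ball clamp per pixel
        out.append(min(max(a, 0.0), 1.0))  # valid image range
    return out
```

Running the same ILAP attack with this projection at several eps values yields the transferability-versus-eps comparison reported in Figure 16.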
7.7 Learning Rate Ablation
We set the number of iterations to 20 for both I-FGSM and I-FGSM with momentum and experimented with different learning rates on ResNet18. We then evaluate the different models' accuracies on the generated adversarial examples, as shown in Tables 6 and 7.
| learning rate | ResNet18† | SENet18 | DenseNet121 | GoogLeNet |
|---|---|---|---|---|

† Same model as the source model.
7.8 Comparison to TAP