Improving Transferability of Adversarial Examples with Input Diversity

Abstract

Though convolutional neural networks have achieved state-of-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding human-imperceptible perturbations to the original images. Adversarial examples can thus serve as a useful tool for evaluating and selecting the most robust models in safety-critical applications. However, most existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. To further improve transferability, we (1) integrate the recently proposed momentum method into the attack process; and (2) attack an ensemble of networks simultaneously. Evaluated against the top defense submissions and official baselines from the NIPS adversarial competition, this enhanced attack outperforms the average success rate of the top attack submission in the competition by a large margin. We hope that our proposed attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. The code is publicly available at https://github.com/cihangxie/DI-2-FGSM.

Keywords:
Adversarial Examples, Black-Box Attacks

1 Introduction

The recent success of convolutional neural networks (CNNs) has led to dramatic performance improvements on various vision tasks, including image classification [13, 28, 11], object detection [8, 24, 36] and semantic segmentation [18, 3]. However, CNNs are extremely vulnerable to small perturbations of the input images, i.e., human-imperceptible additive perturbations can cause CNNs to make incorrect predictions. These intentionally crafted images are known as adversarial examples [32]. Learning how to generate adversarial examples can help us investigate the robustness of different models [1] and understand the insufficiency of current training algorithms [9, 15, 33].

Figure 1: Success rate comparison of three attacks on four networks. The ground-truth class is walking stick, and is marked in pink in the top confidence distribution plots. The adversarial examples are crafted on Inception-v3 under the maximum perturbation constraint. The first row shows the top confidence distributions of the clean image, indicating that all networks make correct predictions with high confidence. The second and third rows show the top confidence distributions of the adversarial examples generated by the Fast Gradient Sign Method (FGSM) and the Iterative Fast Gradient Sign Method (I-FGSM), respectively. These adversarial examples successfully attack the white-box model Inception-v3, but cannot transfer to all black-box models, e.g., Inception-Resnet-v2. The fourth row shows the top confidence distributions of the adversarial examples generated by our proposed attack method, the Diverse Inputs Iterative Fast Gradient Sign Method (DI2-FGSM), which attacks the white-box model and all black-box models successfully. Although these adversarial examples have different success rates, they are all perceived to be similar to the clean image by human observers

Several methods [9, 32, 14] have been proposed recently to find adversarial examples. In general, these attacks can be categorized into two types according to the number of gradient computation steps: single-step attacks [9] and iterative attacks [32, 14]. Under the white-box setting, where the attackers have perfect knowledge of the network structure and weights, iterative attacks generate adversarial examples with much higher success rates than single-step attacks. However, if these adversarial examples are tested on a different network (different in structure, weights or both), i.e., the black-box setting, single-step attacks achieve higher success rates than iterative attacks. This trade-off arises because iterative attacks tend to overfit the specific network parameters (i.e., they have high white-box success rates), so the generated adversarial examples rarely transfer to other networks (i.e., they have low black-box success rates), while single-step attacks usually underfit the network parameters (i.e., they have low white-box success rates) and thus produce adversarial examples with slightly better transferability. Given this phenomenon, an interesting question is whether we can generate adversarial examples with high success rates under both white-box and black-box settings.

Data augmentation [13, 28, 11] has been shown to be an effective way to prevent networks from overfitting during training. Specifically, a set of label-preserving transformations, e.g., resizing, cropping and rotating, is applied to the images to enlarge the training set. Consequently, the trained networks generalize better to unseen images. Meanwhile, [34, 10] showed that image transformations can defend against adversarial examples under certain situations, which indicates that adversarial examples do not generalize well under different transformations. These transformed adversarial examples are known as hard examples [26, 27] for attackers, and can then serve as good samples from which to produce more transferable adversarial examples.

To this end, we propose the Diverse Input Iterative Fast Gradient Sign Method (DI2-FGSM) to improve the transferability of adversarial examples. At each iteration, unlike traditional methods which maximize the loss function directly w.r.t. the original inputs, we apply random and differentiable transformations to the input images with probability $p$ and maximize the loss function w.r.t. these transformed inputs. In particular, the transformations used here are random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Note that these randomized operations were previously used to defend against adversarial examples [34], while here we incorporate them into the attack process to create hard and diverse input patterns. Figure 1 shows an adversarial example generated by our proposed attack method, DI2-FGSM, and compares its success rates with those of other attack methods under both white-box and black-box settings.

We test the proposed attack method on several networks under both white-box and black-box settings. Compared with traditional iterative attacks, the results on ImageNet (see Section 4.2) show that DI2-FGSM achieves significantly higher success rates on black-box models, and maintains similar success rates on white-box models. To further improve the transferability of adversarial examples, we (1) integrate the momentum term [7] into the attack process; and (2) attack multiple networks simultaneously [17]. Evaluated against the top defense submissions and official baselines from the NIPS adversarial competition, this enhanced attack outperforms the average success rate of the top attack submission in the competition by a large margin. We hope that our proposed attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future.

2 Related Work

2.1 Generating Adversarial Examples

Traditional machine learning algorithms are known to be vulnerable to adversarial examples [5, 12, 2]. Recently, Szegedy et al. [32] pointed out that CNNs are also fragile to adversarial examples, and proposed a box-constrained L-BFGS method to find adversarial examples reliably. Due to the expensive computation in [32], Goodfellow et al. [9] proposed the fast gradient sign method, which generates adversarial examples efficiently by performing a single gradient step. This method was then extended by [14] to an iterative version, which showed that the generated adversarial examples can also exist in the physical world. Dong et al. [7] proposed a broad class of momentum-based iterative algorithms to boost the transferability of adversarial examples. Transferability can also be improved by attacking an ensemble of networks simultaneously [17]. Besides image classification, adversarial examples also exist in object detection [35], semantic segmentation [35, 4], speech recognition [4], deep reinforcement learning [16], etc. Unlike adversarial examples, which humans can still recognize correctly, Nguyen et al. [21] generated fooling images that differ from natural images and are difficult for humans to recognize, yet CNNs classify them as recognizable objects with high confidence.

2.2 Defending Against Adversarial Examples

Conversely, many methods have been proposed recently to defend against adversarial examples. [9, 15] proposed injecting adversarial examples into the training data to increase network robustness. Tramèr et al. [33] pointed out that such adversarially trained models still remain vulnerable to adversarial examples, and proposed ensemble adversarial training, which augments the training data with perturbations transferred from other models, to improve network robustness further. [34, 10] applied randomized image transformations to inputs at inference time to mitigate adversarial effects. Dhillon et al. [6] pruned a random subset of activations according to their magnitude to enhance network robustness. Prakash et al. [23] proposed a framework which combines pixel deflection with soft wavelet denoising to defend against adversarial examples. [20, 29, 25] leveraged generative models to purify adversarial images by moving them back towards the distribution of clean images.

3 Methodology

Let $X$ denote an image, and $y^{\text{true}}$ denote the corresponding ground-truth label. We use $\theta$ to denote the network parameters, and $L(X, y^{\text{true}}; \theta)$ to denote the loss. For adversarial example generation, the goal is to maximize the loss $L(X^{adv}, y^{\text{true}}; \theta)$ for the image $X^{adv}$, under the constraint that the generated adversarial example $X^{adv}$ should look visually similar to the original image $X$ while the corresponding predicted label $y^{adv} \neq y^{\text{true}}$. In this paper, we use the $L_\infty$-norm to measure the perceptibility of adversarial perturbations, i.e., $\|X^{adv} - X\|_\infty \leq \epsilon$. The loss function is defined as

$L(X, y^{\text{true}}; \theta) = -\mathbb{1}_{y^{\text{true}}} \cdot \log(\text{softmax}(l(X)))$  (1)

where $\mathbb{1}_{y^{\text{true}}}$ is the one-hot encoding of the ground-truth label $y^{\text{true}}$, and $l(X)$ is the logits output of the network. Note that all the baseline attacks are implemented in the cleverhans library [22], which can be used directly for our experiments.
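As a concrete reference, the loss in Equation (1) can be sketched in a few lines of NumPy; the helper names below are our own illustration, not the cleverhans implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_loss(logits, label, num_classes):
    # L(X, y; theta) = -1_y . log(softmax(l(X))), with 1_y the one-hot label.
    one_hot = np.eye(num_classes)[label]
    return -np.sum(one_hot * np.log(softmax(logits)))
```

For example, uniform logits over two classes and either label give a loss of $\log 2$, the entropy of a fair coin.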

3.1 Family of Fast Gradient Sign Methods

In this section, we give an overview of the family of fast gradient sign methods:

  • Fast Gradient Sign Method (FGSM): FGSM [9] is the first member of this attack family; it finds an adversarial perturbation in the direction of the sign of the loss gradient $\nabla_X L(X, y^{\text{true}}; \theta)$. The update equation is

    $X^{adv} = X + \epsilon \cdot \text{sign}(\nabla_X L(X, y^{\text{true}}; \theta))$  (2)
  • Iterative Fast Gradient Sign Method (I-FGSM): Kurakin et al. [15] extended FGSM to an iterative version, which can be expressed as

    $X^{adv}_0 = X$  (3)
    $X^{adv}_{n+1} = \text{Clip}^{\epsilon}_{X}\{X^{adv}_n + \alpha \cdot \text{sign}(\nabla_X L(X^{adv}_n, y^{\text{true}}; \theta))\}$  (4)

    where $\text{Clip}^{\epsilon}_{X}\{\cdot\}$ indicates that the resulting image is clipped within the $\epsilon$-ball of the original image $X$, $n$ is the iteration index and $\alpha$ is the step size.

  • Momentum Iterative Fast Gradient Sign Method (MI-FGSM): MI-FGSM [7] integrates a momentum term into the attack process to stabilize update directions and escape poor local maxima. The updating procedure is similar to I-FGSM, with Equation (4) replaced by:

    $g_{n+1} = \mu \cdot g_n + \dfrac{\nabla_X L(X^{adv}_n, y^{\text{true}}; \theta)}{\|\nabla_X L(X^{adv}_n, y^{\text{true}}; \theta)\|_1}$  (5)
    $X^{adv}_{n+1} = \text{Clip}^{\epsilon}_{X}\{X^{adv}_n + \alpha \cdot \text{sign}(g_{n+1})\}$  (6)

    where $\mu$ is the decay factor of the momentum term and $g_n$ is the accumulated gradient at iteration $n$.
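To make the three updates above concrete, here is a minimal NumPy sketch of the family. `grad_fn` is a caller-supplied stand-in for the network's loss gradient $\nabla_X L$, and pixel-range clipping is omitted for brevity.

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    # Single step in the sign of the loss gradient (Eq. 2).
    return x + eps * np.sign(grad_fn(x))

def i_fgsm(x, grad_fn, eps, alpha, num_iter):
    # Iterative variant (Eqs. 3-4): small steps, clipped to the eps-ball of x.
    x_adv = x.copy()
    for _ in range(num_iter):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the L_inf ball
    return x_adv

def mi_fgsm(x, grad_fn, eps, alpha, num_iter, mu=1.0):
    # Momentum variant (Eqs. 5-6): accumulate L1-normalized gradients with decay mu.
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(num_iter):
        grad = grad_fn(x_adv)
        g = mu * g + grad / np.sum(np.abs(grad))
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

In a real attack, `grad_fn` would backpropagate through the network; here any differentiable surrogate works for experimentation.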

3.2 Diverse Inputs Iterative Fast Gradient Sign Method

Overfitting Phenomenon

Let $\hat{\theta}$ denote the unknown network parameters of a black-box model. In general, a strong adversarial example should have high success rates both on white-box models, i.e., models with known parameters $\theta$, and on black-box models, i.e., models with unknown parameters $\hat{\theta}$. On one hand, traditional single-step attacks, e.g., FGSM, tend to underfit the specific network parameters $\theta$ due to the inaccurate linear approximation of the loss $L$, and thus cannot reach high success rates on white-box models. On the other hand, traditional iterative attacks, e.g., I-FGSM, greedily perturb the images in the direction of the sign of the loss gradient at each iteration, and thus easily fall into poor local maxima and overfit the specific network parameters $\theta$. Such overfitted adversarial examples rarely transfer to black-box models. In order to generate adversarial examples with strong transferability, we need a better way to optimize the loss that alleviates this overfitting phenomenon.

Data augmentation [13, 28, 11] has been shown to be an effective way to prevent networks from overfitting during training. Meanwhile, [34, 10] showed that adversarial examples are no longer malicious if simple image transformations are applied, which indicates that these transformed adversarial images can serve as good samples for better optimization.

Our Solution

Based on the analysis above, we propose the Diverse Inputs Iterative Fast Gradient Sign Method (DI2-FGSM), which applies image transformations to the inputs with probability $p$ at each iteration to alleviate the overfitting phenomenon. Specifically, the image transformations applied here are random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner [34]. The transformation probability $p$ controls the trade-off between success rates on white-box models and success rates on black-box models, as can be observed from Figure 3. If $p = 0$, DI2-FGSM degrades to I-FGSM and leads to overfitting. If $p = 1$, i.e., only transformed inputs are used for the attack, the generated adversarial examples tend to have much higher success rates on black-box models but lower success rates on white-box models, since the original inputs are never seen by the attacker.

In general, the updating procedure of DI2-FGSM is similar to I-FGSM, with the replacement of Equation (4) by:

$X^{adv}_{n+1} = \text{Clip}^{\epsilon}_{X}\{X^{adv}_n + \alpha \cdot \text{sign}(\nabla_X L(T(X^{adv}_n; p), y^{\text{true}}; \theta))\}$  (7)

where the stochastic transformation function $T(X^{adv}_n; p)$ is:

$T(X^{adv}_n; p) = \begin{cases} T(X^{adv}_n) & \text{with probability } p \\ X^{adv}_n & \text{with probability } 1 - p \end{cases}$  (8)
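A minimal NumPy sketch of the stochastic transform $T(X; p)$ in Equation (8). The output size, resize range, and nearest-neighbor resize are illustrative stand-ins: the actual attack uses a differentiable resize and the paper's specific size settings, which are not reproduced here.

```python
import numpy as np

def diverse_input(x, out_size=330, p=0.5, rng=None):
    # With probability p: randomly resize x (nearest-neighbor here), then
    # zero-pad it to out_size x out_size at a random offset.
    # Otherwise: return x unchanged, as in Eq. (8).
    rng = rng or np.random.default_rng()
    h, w, c = x.shape
    if rng.random() >= p:
        return x
    rnd = rng.integers(h, out_size)          # random intermediate size
    rows = np.arange(rnd) * h // rnd         # nearest-neighbor row indices
    cols = np.arange(rnd) * w // rnd         # nearest-neighbor col indices
    resized = x[rows][:, cols]
    pad_top = rng.integers(0, out_size - rnd + 1)
    pad_left = rng.integers(0, out_size - rnd + 1)
    out = np.zeros((out_size, out_size, c), dtype=x.dtype)
    out[pad_top:pad_top + rnd, pad_left:pad_left + rnd] = resized
    return out
```

Because both the resize target and the padding offsets are resampled at every iteration, each gradient step sees a different input pattern, which is the source of the diversity.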

3.3 Momentum Diverse Inputs Iterative Fast Gradient Sign Method

Intuitively, momentum and diverse inputs are two complementary ways to alleviate the overfitting phenomenon. We can therefore combine them naturally to form a much stronger attack, the Momentum Diverse Inputs Iterative Fast Gradient Sign Method (M-DI2-FGSM). The overall updating procedure of M-DI2-FGSM is similar to MI-FGSM, with Equation (5) replaced by:

$g_{n+1} = \mu \cdot g_n + \dfrac{\nabla_X L(T(X^{adv}_n; p), y^{\text{true}}; \theta)}{\|\nabla_X L(T(X^{adv}_n; p), y^{\text{true}}; \theta)\|_1}$  (9)
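Putting Equations (8) and (9) together, the M-DI2-FGSM loop can be sketched as follows. `grad_fn` and `transform_fn` are caller-supplied stand-ins for the network gradient and $T(\cdot\,; p)$, and for simplicity we assume the transform preserves the input shape so the gradient maps back directly (the real attack backpropagates through the resize and padding).

```python
import numpy as np

def m_di2_fgsm(x, grad_fn, transform_fn, eps, alpha, num_iter, mu=1.0):
    # At each step, take the gradient w.r.t. the stochastically transformed
    # input (Eq. 9), feed it into the momentum accumulator, then apply the
    # signed, clipped update of Eq. (6).
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(num_iter):
        grad = grad_fn(transform_fn(x_adv))
        g = mu * g + grad / np.sum(np.abs(grad))   # L1-normalized gradient
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

Setting `transform_fn` to the identity recovers MI-FGSM, and setting `mu=0` recovers DI2-FGSM, mirroring the relationships in Section 3.4.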

3.4 Relationships between Different Attacks

The attacks mentioned above all belong to the family of Fast Gradient Sign Methods, and can be related via different parameter settings, as shown in Figure 2. In summary:

  • If the transformation probability $p = 0$, M-DI2-FGSM degrades to MI-FGSM, and DI2-FGSM degrades to I-FGSM;

  • If the decay factor $\mu = 0$, M-DI2-FGSM degrades to DI2-FGSM, and MI-FGSM degrades to I-FGSM;

  • If the total iteration number $N = 1$ (with step size $\alpha = \epsilon$), I-FGSM degrades to FGSM.

Figure 2: Relationships between different attacks

3.5 Attacking an Ensemble of Networks

Liu et al. [17] suggested that attacking an ensemble of multiple networks simultaneously can generate much stronger adversarial examples. The motivation is that if an adversarial image remains adversarial for multiple networks, then it is more likely to transfer to other networks as well. Therefore, we can use this strategy to improve the transferability even further.

We follow the ensemble strategy proposed in [7], which fuses the logit activations of multiple networks to attack them simultaneously. Specifically, to attack an ensemble of $K$ models, the logits are fused as:

$l(X) = \sum_{k=1}^{K} w_k \, l_k(X)$  (10)

where $l_k(X)$ is the logits output of the $k$-th model with parameters $\theta_k$, and $w_k$ is the ensemble weight, with $w_k \geq 0$ and $\sum_{k=1}^{K} w_k = 1$.
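Equation (10) amounts to a weighted sum of the individual models' logits, as in this small sketch:

```python
import numpy as np

def fuse_logits(logits_list, weights=None):
    # Eq. (10): weighted sum of each model's logits, with nonnegative
    # weights summing to one; defaults to equal weights 1/K.
    k = len(logits_list)
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-8
    return np.tensordot(w, np.stack(logits_list), axes=1)
```

The fused logits then feed into the loss of Equation (1), so a single gradient step perturbs the image against all $K$ models at once.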

4 Experiment

4.1 Experiment Setup

Dataset

It is less meaningful to attack images that are already classified wrongly. Therefore, we randomly choose images from the ImageNet validation set that are classified correctly by all the networks we test on to form our test dataset. All these images are resized to a fixed size beforehand.

Networks

We consider four normally trained networks, i.e., Inception-v3 (Inc-v3) [31], Inception-v4 (Inc-v4) [30], Resnet-v2-152 (Res-152) [11] and Inception-Resnet-v2 (IncRes-v2) [30], and three adversarially trained networks [33], i.e., ens3-adv-Inception-v3 (Inc-v3ens3), ens4-adv-Inception-v3 (Inc-v3ens4) and ens-adv-Inception-ResNet-v2 (IncRes-v2ens). All networks are publicly available.

Implementation details

For the parameters of the different attacks, we follow the default settings in [14] for the step size $\alpha$ and the total iteration number $N$. The maximum perturbation $\epsilon$ is set to a level that is still imperceptible to human vision [19]. For the momentum term, the decay factor $\mu$ is set as in [7]. For the stochastic transformation function $T(X; p)$, the probability is set to $p = 0.5$, i.e., the attacker pays equal attention to the original inputs and the transformed inputs. For the transformation operations, each input is first randomly resized and then padded to a fixed size in a random manner.

4.2 Attacking a Single Network

| Model | Attack | Inc-v3 | Inc-v4 | IncRes-v2 | Res-152 | Inc-v3ens3 | Inc-v3ens4 | IncRes-v2ens |
|---|---|---|---|---|---|---|---|---|
| Inc-v3 | FGSM | 64.6% | 23.5% | 21.7% | 21.7% | 8.0% | 7.5% | 3.6% |
| Inc-v3 | I-FGSM | 99.9% | 14.8% | 11.6% | 8.9% | 3.3% | 2.9% | 1.5% |
| Inc-v3 | DI2-FGSM (Ours) | 99.9% | 35.5% | 27.8% | 21.4% | 5.5% | 5.2% | 2.8% |
| Inc-v3 | MI-FGSM | 99.9% | 36.6% | 34.5% | 27.5% | 8.9% | 8.4% | 4.7% |
| Inc-v3 | M-DI2-FGSM (Ours) | 99.9% | 63.9% | 59.4% | 47.9% | 14.3% | 14.0% | 7.0% |
| Inc-v4 | FGSM | 26.4% | 49.6% | 19.7% | 20.4% | 8.4% | 7.7% | 4.1% |
| Inc-v4 | I-FGSM | 22.0% | 99.9% | 13.2% | 10.9% | 3.2% | 3.0% | 1.7% |
| Inc-v4 | DI2-FGSM (Ours) | 43.3% | 99.7% | 28.9% | 23.1% | 5.9% | 5.5% | 3.2% |
| Inc-v4 | MI-FGSM | 51.1% | 99.9% | 39.4% | 33.7% | 11.2% | 10.7% | 5.3% |
| Inc-v4 | M-DI2-FGSM (Ours) | 72.4% | 99.5% | 62.2% | 52.1% | 17.6% | 15.6% | 8.8% |
| IncRes-v2 | FGSM | 24.3% | 19.3% | 39.6% | 19.4% | 8.5% | 7.3% | 4.8% |
| IncRes-v2 | I-FGSM | 22.2% | 17.7% | 97.9% | 12.6% | 4.6% | 3.7% | 2.5% |
| IncRes-v2 | DI2-FGSM (Ours) | 46.5% | 40.5% | 95.8% | 28.6% | 8.2% | 6.6% | 4.8% |
| IncRes-v2 | MI-FGSM | 53.5% | 45.9% | 98.4% | 37.8% | 15.3% | 13.0% | 8.8% |
| IncRes-v2 | M-DI2-FGSM (Ours) | 71.2% | 67.4% | 96.1% | 57.4% | 25.1% | 20.7% | 14.9% |
| Res-152 | FGSM | 34.4% | 28.5% | 27.1% | 75.2% | 12.4% | 11.0% | 6.0% |
| Res-152 | I-FGSM | 20.8% | 17.2% | 14.9% | 99.1% | 5.4% | 4.6% | 2.8% |
| Res-152 | DI2-FGSM (Ours) | 53.8% | 49.0% | 44.8% | 99.2% | 13.0% | 11.1% | 6.9% |
| Res-152 | MI-FGSM | 50.1% | 44.1% | 42.2% | 99.0% | 18.2% | 15.2% | 9.0% |
| Res-152 | M-DI2-FGSM (Ours) | 78.9% | 76.5% | 74.8% | 99.2% | 35.2% | 29.4% | 19.0% |
Table 1: Success rates on seven networks when attacking a single network. The adversarial examples are crafted on the four normally trained networks. Diagonal blocks indicate white-box attacks, while off-diagonal blocks indicate the much more challenging black-box attacks. We observe that M-DI2-FGSM always reaches the highest success rates on all black-box models, beating the other methods by a large margin, while maintaining high success rates on all white-box models

We first perform adversarial attacks on a single network, using FGSM, I-FGSM, DI2-FGSM, MI-FGSM and M-DI2-FGSM, respectively. We craft adversarial examples only on the normally trained networks, and test them on all seven networks. The success rates are shown in Table 1, where diagonal blocks indicate white-box attacks and off-diagonal blocks indicate black-box attacks. We list the networks that we attack in rows, and the networks that we test on in columns.

From Table 1, first and foremost, we observe that M-DI2-FGSM outperforms all other baseline attacks by a large margin on all black-box models, while maintaining high success rates on all white-box models. For example, when adversarial examples are crafted on IncRes-v2, M-DI2-FGSM has success rates of 67.4% on Inc-v4 (a normally trained black-box model) and 25.1% on Inc-v3ens3 (an adversarially trained black-box model), while a strong baseline like MI-FGSM only obtains the corresponding success rates of 45.9% and 15.3%, respectively. This convincingly demonstrates the effectiveness of combining input diversity and momentum for improving the transferability of adversarial examples.

We then compare the success rates of I-FGSM and DI2-FGSM to isolate the effect of diverse input patterns. By generating adversarial examples with input diversity, DI2-FGSM significantly improves the success rates of I-FGSM on challenging black-box models, regardless of whether the model is adversarially trained, while maintaining high success rates on white-box models. For example, when adversarial examples are crafted on Res-152, DI2-FGSM has success rates of 99.2% on Res-152 (the white-box model), 53.8% on Inc-v3 (a normally trained black-box model) and 11.1% on Inc-v3ens4 (an adversarially trained black-box model), while I-FGSM only obtains the corresponding success rates of 99.1%, 20.8% and 4.6%, respectively. Compared with FGSM, DI2-FGSM also reaches much higher success rates on the normally trained black-box models, and comparable performance on the adversarially trained black-box models.

4.3 Attacking an Ensemble of Networks

| Setting | Attack | -Inc-v3 | -Inc-v4 | -IncRes-v2 | -Res-152 | -Inc-v3ens3 | -Inc-v3ens4 | -IncRes-v2ens |
|---|---|---|---|---|---|---|---|---|
| Ensemble | I-FGSM | 96.6% | 96.9% | 98.7% | 96.2% | 97.0% | 97.3% | 94.3% |
| Ensemble | DI2-FGSM (Ours) | 88.9% | 89.6% | 93.2% | 87.7% | 91.7% | 91.7% | 93.2% |
| Ensemble | MI-FGSM | 96.9% | 96.9% | 98.8% | 96.8% | 96.8% | 97.0% | 94.6% |
| Ensemble | M-DI2-FGSM (Ours) | 90.1% | 91.1% | 94.0% | 89.3% | 92.8% | 92.7% | 94.9% |
| Hold-out | I-FGSM | 43.7% | 36.4% | 33.3% | 25.4% | 12.9% | 15.1% | 8.8% |
| Hold-out | DI2-FGSM (Ours) | 69.9% | 67.9% | 64.1% | 51.7% | 36.3% | 35.0% | 30.4% |
| Hold-out | MI-FGSM | 71.4% | 65.9% | 64.6% | 55.6% | 22.8% | 26.1% | 15.8% |
| Hold-out | M-DI2-FGSM (Ours) | 80.7% | 80.6% | 80.7% | 70.9% | 44.6% | 44.5% | 39.4% |
Table 2: The success rates of ensemble attacks. We take all seven networks into consideration. Adversarial examples are generated on an ensemble of six networks, and tested on the ensembled network (white-box setting, top block) and the hold-out network (black-box setting, bottom block). The sign "-" indicates the name of the hold-out network. We observe that M-DI2-FGSM always reaches the highest success rates on all black-box models, beating the other methods by a large margin, while maintaining high success rates (though slightly lower than I-FGSM & MI-FGSM) on all white-box models

Though the results in Table 1 show that momentum and input diversity can significantly improve the transferability of adversarial examples, the resulting attacks are still relatively weak against an adversarially trained network under the black-box setting, e.g., the highest black-box success rate on IncRes-v2ens is only 19.0%. Therefore, we follow the strategy in [17] and attack multiple networks simultaneously in order to further improve transferability. We consider all seven networks here. Adversarial examples are generated on an ensemble of six networks, and tested on the ensembled networks and the hold-out network, using I-FGSM, DI2-FGSM, MI-FGSM and M-DI2-FGSM, respectively. FGSM is ignored here due to its low success rates on white-box models. All ensembled models are assigned equal weights, i.e., $w_k = 1/6$.

The results are summarized in Table 2, where the top block shows the success rates on the ensembled network (white-box setting), and the bottom block shows the success rates on the hold-out network (black-box setting). Under the challenging black-box setting, we observe that M-DI2-FGSM always generates adversarial examples with better transferability than the other methods on all networks. For example, keeping Inc-v3ens3 as the hold-out model, M-DI2-FGSM fools Inc-v3ens3 with a success rate of 44.6%, while I-FGSM, DI2-FGSM and MI-FGSM only reach success rates of 12.9%, 36.3% and 22.8%, respectively. Moreover, compared with MI-FGSM, we observe that using diverse input patterns alone, i.e., DI2-FGSM, reaches a much higher success rate when the hold-out model is an adversarially trained network, and a comparable success rate when the hold-out model is a normally trained network.

Under the white-box setting, DI2-FGSM and M-DI2-FGSM reach slightly lower (but still very high) success rates on the ensembled models compared with I-FGSM and MI-FGSM. This is because attacking multiple networks simultaneously is much harder than attacking a single model. However, the white-box success rates can be improved if we assign a smaller transformation probability $p$, increase the total iteration number $N$, or use a smaller step size $\alpha$ (see Section 4.4).

4.4 Ablation Studies

In this section, we conduct a series of ablation experiments to study the impact of different parameters, e.g., the step size $\alpha$, on DI2-FGSM and M-DI2-FGSM. We only consider attacking an ensemble of networks here, since this is much stronger than attacking a single network and thus provides a more accurate evaluation of network robustness. The maximum perturbation $\epsilon$ is kept fixed for all experiments.

Transformation Probability

Figure 3: The success rates of DI2-FGSM (left) and M-DI2-FGSM (right) w.r.t. the transformation probability $p$. We generate adversarial examples using an ensemble of six networks, and attack both the corresponding ensembled network (white-box setting, dashed lines) and the hold-out network (black-box setting, solid lines). We observe that both attack methods achieve higher black-box success rates but lower white-box success rates as $p$ increases

We first study the influence of the transformation probability $p$ on the success rates under both white-box and black-box settings, with the step size $\alpha$ and the total iteration number $N$ fixed. The transformation probability $p$ is varied from $0$ to $1$. According to the relationships shown in Figure 2, if $p = 0$, M-DI2-FGSM degrades to MI-FGSM and DI2-FGSM degrades to I-FGSM.

We show the success rates on various networks in Figure 3. We observe that both DI2-FGSM and M-DI2-FGSM achieve higher black-box success rates but lower white-box success rates as $p$ increases. Moreover, for both attacks, even when $p$ is small, i.e., only a small fraction of the inputs are transformed, the black-box success rates increase significantly while the white-box success rates drop only slightly. This phenomenon indicates the importance of adding transformed inputs into the attack process.

The trends shown in Figure 3 also provide useful guidance for constructing strong adversarial attacks in practice. For example, if you know the black-box model is a new network totally different from any existing network, you can set $p = 1$ to reach maximum transferability. If the black-box model is a mixture of new networks and existing networks, you can choose a moderate value of $p$ that maximizes the black-box success rates subject to a pre-defined constraint on the white-box success rates, e.g., requiring the white-box success rates to stay above a given threshold.

Total Iteration Number

Figure 4: The success rates of DI2-FGSM (left) and M-DI2-FGSM (right) w.r.t. the total iteration number $N$. We generate adversarial examples using an ensemble of six networks, and attack both the corresponding ensembled network (white-box setting, dashed lines) and the hold-out network (black-box setting, solid lines). We observe that both attack methods benefit from performing more iterations

We next study the influence of the total iteration number $N$ on the success rates under both white-box and black-box settings, with the transformation probability $p$ and the step size $\alpha$ fixed. The results for varying $N$ are plotted in Figure 4. For DI2-FGSM, we see that both the black-box and white-box success rates always increase as the total iteration number $N$ increases. Similar trends are observed for M-DI2-FGSM, except for the black-box success rates on adversarially trained models, i.e., performing more iterations does not bring extra transferability to adversarially trained models. Moreover, we observe that the success-rate gap between M-DI2-FGSM and DI2-FGSM diminishes as $N$ increases.

Step Size

Figure 5: The success rates of DI2-FGSM (left) and M-DI2-FGSM (right) w.r.t. the step size $\alpha$. We generate adversarial examples using an ensemble of six networks, and attack both the corresponding ensembled network (white-box setting, dashed lines) and the hold-out network (black-box setting, solid lines). We observe that both attack methods benefit from a smaller step size

We finally study the influence of the step size α on the success rates under both white-box and black-box settings. We fix the transformation probability p. In order to reach the maximum perturbation ε even for a small step size α, we set the total iteration number N inversely proportional to the step size, i.e., N = ε/α. The results are plotted in Figure 5. We observe that the white-box success rates of both DI2-FGSM and M-DI2-FGSM can be boosted if a smaller step size is provided. Under the black-box setting, the success rates of DI2-FGSM are insensitive to the step size, while the success rates of M-DI2-FGSM can still be improved with a smaller step size.
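The coupling between step size and iteration count can be made explicit with a one-line helper. This is a hypothetical utility, not part of the released code; it simply rounds N = ε/α up so the full budget is always reachable.

```python
import math

def iterations_for_budget(epsilon, alpha):
    """Total iteration number N so that N * alpha covers the full
    L_inf budget epsilon, i.e., N = ceil(epsilon / alpha).
    Hypothetical helper for the step-size experiment."""
    return math.ceil(epsilon / alpha)
```

For example, halving the step size doubles the iteration count, so every configuration in the sweep spends the same total perturbation budget ε.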

4.5 Reproducing NIPS Adversarial Competition

In order to examine the effectiveness of our proposed attack methods in practice, we here reproduce the top defense submissions, which are black-box models to us, and the official baselines from the NIPS 2017 adversarial competition. Due to resource limitations, we only consider the top-3 defense submissions, i.e., TsAIL21, iyswim22 and Anil Thomas23, and three official baselines, i.e., Inc-v3adv, IncRes-v2ens and Inc-v3. The test dataset contains images of a fixed size, and their corresponding labels are consistent with the ImageNet 1000-class labels.

Generating Adversarial Examples

When generating adversarial examples, we follow the procedure24 that: (1) split the dataset equally into batches; (2) for each batch, randomly choose the maximum perturbation ε from a pre-defined set; and (3) generate adversarial examples for each batch under the corresponding perturbation constraint.
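The three steps above can be sketched as a small helper. The function name, the number of batches, and the candidate ε set are illustrative placeholders; the competition rules fix their actual values.

```python
import random

def make_attack_batches(images, n_batches, eps_choices, seed=0):
    """Hypothetical sketch of the competition protocol: split the
    dataset into equal batches, then draw a random maximum
    perturbation for each batch from a pre-defined set."""
    rng = random.Random(seed)
    size = len(images) // n_batches                      # equal batch size
    batches = [images[i * size:(i + 1) * size] for i in range(n_batches)]
    return [(batch, rng.choice(eps_choices)) for batch in batches]
```

Each returned pair is then attacked under its own L_inf constraint ε, so a single submission is evaluated across several perturbation budgets.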

Attacker Configurations

For the attacker configuration, we follow exactly the same settings as in [7], which attacks an ensemble of Inc-v3, Inc-v4, IncRes-v2, Res-152, Inc-v3ens3, Inc-v3ens4, IncRes-v2ens and Inc-v3adv [15]. The ensemble weights are set equally for the first seven models, with a smaller weight for Inc-v3adv. The total iteration number N and the decay factor μ are kept fixed. This configuration for MI-FGSM won the 1st place in the NIPS adversarial attack competition. For DI2-FGSM and M-DI2-FGSM, we choose the transformation probability p according to the trends shown in Figure 3.
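Attacking an ensemble amounts to fusing the member models' logits with the chosen weights before computing the loss. The sketch below is a minimal version of that fusion; the function name and the example weights are assumptions, not the paper's exact values.

```python
import numpy as np

def ensemble_logits(logit_list, weights):
    """Fuse the logits of several white-box models with the given
    ensemble weights (a hedged sketch; the weight values used in the
    actual attack are placeholders here)."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()          # normalise weights to sum to 1
    stacked = np.stack(logit_list)             # shape: (n_models, n_classes)
    return np.tensordot(weights, stacked, axes=1)   # weighted sum over models
```

The attack then backpropagates through this weighted sum, so every member network contributes to the gradient in proportion to its weight.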

| Attack | TsAIL | iyswim | Anil Thomas | Inc-v3adv | IncRes-v2ens | Inc-v3 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| I-FGSM | 14.0% | 35.6% | 30.9% | 98.2% | 96.4% | 99.0% | 62.4% |
| DI2-FGSM (Ours) | 22.7% | 58.4% | 48.0% | 91.5% | 90.7% | 97.3% | 68.1% |
| MI-FGSM | 14.9% | 45.7% | 46.6% | 97.3% | 95.4% | 98.7% | 66.4% |
| MI-FGSM* | 13.6% | 43.2% | 43.9% | 94.4% | 93.0% | 97.3% | 64.2% |
| M-DI2-FGSM (Ours) | 20.0% | 69.8% | 64.4% | 93.3% | 92.4% | 97.9% | 73.0% |
Table 3: The success rates on top defense submissions and official baselines from NIPS 2017 adversarial competition. * indicates the official results reported in the competition. We see that M-DI2-FGSM obtains the highest average success rate, beating other methods by a large margin

Results

The results are summarized in Table 3. We also report the official results of MI-FGSM (denoted MI-FGSM*) as a reference to validate our implementation. The performance difference between MI-FGSM and MI-FGSM* is due to the randomness of the maximum perturbation magnitude introduced in the attack process. Compared with MI-FGSM, DI2-FGSM has higher success rates on the top submissions but slightly lower success rates on the baseline models, which results in these two attack methods having similar average success rates. By integrating both diverse inputs and the momentum term, the enhanced attack, M-DI2-FGSM, reaches an average success rate of 73.0%, which is far better than other methods. For example, the top attack submission in the NIPS competition, MI-FGSM, only obtains an average success rate of 64.2%. We believe the same advantage would hold even if we tested against all defense submissions. These results also indicate that our proposed attack method can be used as a better tool to evaluate the robustness of newly developed networks and defense methods.

4.6 Discussion

We provide a brief discussion of why diverse input patterns help generate adversarial examples with better transferability. One hypothesis is that the decision boundaries of different networks share similar inherent structures because they are trained on the same dataset, e.g., ImageNet. For example, as shown in Figure 1, different networks make similar mistakes in the presence of adversarial examples. By incorporating diverse patterns at each step, the optimization produces adversarial examples that are more robust to small transformations. Such adversarial examples remain malicious within a certain region around the network decision boundary, thus increasing the chance of fooling other networks, i.e., they achieve better black-box success rates than existing methods. In the future, we plan to validate this hypothesis theoretically or empirically.

5 Conclusions

In this paper, we propose to improve the transferability of adversarial examples with input diversity. Specifically, our method applies random transformations to the input images at each iteration of the attack process. Compared with traditional iterative attacks, the results on ImageNet show that our proposed attack method achieves significantly higher success rates on black-box models, and maintains similar success rates on white-box models. We further improve the transferability by integrating the momentum term and attacking multiple networks simultaneously. By evaluating this enhanced attack against the top defense submissions and official baselines from the NIPS 2017 adversarial competition, we show that it reaches an average success rate of 73.0%, which outperforms the top attack submission in the NIPS competition by a large margin of 8.8%. We hope that our proposed attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. The code is publicly available at https://github.com/cihangxie/DI-2-FGSM.

Footnotes

  1. email: {cihangxie306, zhshuai.zhang, zhouyuyiner, alan.l.yuille}@gmail.com
  2. email: wjyouch@gmail.com
  3. email: zhou.ren@snapchat.com
  19. https://github.com/tensorflow/models/tree/master/research/slim
  20. https://github.com/tensorflow/models/tree/master/research/adv_imagenet_models
  21. https://github.com/lfz/Guided-Denoise
  22. https://github.com/cihangxie/NIPS2017_adv_challenge_defense
  23. https://github.com/anlthms/nips-2017/tree/master/mmd
  24. https://www.kaggle.com/c/nips-2017-non-targeted-adversarial-attack

References

  1. Arnab, A., Miksik, O., Torr, P.H.: On the robustness of semantic segmentation models to adversarial attacks. arXiv preprint arXiv:1711.09856 (2017)
  2. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., Roli, F.: Evasion attacks against machine learning at test time. In: Joint European conference on machine learning and knowledge discovery in databases. pp. 387–402. Springer (2013)
  3. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence (2017)
  4. Cisse, M., Adi, Y., Neverova, N., Keshet, J.: Houdini: Fooling deep structured prediction models. arXiv preprint arXiv:1707.05373 (2017)
  5. Dalvi, N., Domingos, P., Sanghai, S., Verma, D., et al.: Adversarial classification. In: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM (2004)
  6. Dhillon, G.S., Azizzadenesheli, K., Bernstein, J.D., Kossaifi, J., Khanna, A., Lipton, Z.C., Anandkumar, A.: Stochastic activation pruning for robust adversarial defense. In: International Conference on Learning Representations (2018)
  7. Dong, Y., Liao, F., Pang, T., Su, H., Hu, X., Li, J., Zhu, J.: Boosting adversarial attacks with momentum. arXiv preprint arXiv:1710.06081 (2017)
  8. Girshick, R.: Fast r-cnn. In: International Conference on Computer Vision. IEEE (2015)
  9. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)
  10. Guo, C., Rana, M., Cissé, M., van der Maaten, L.: Countering adversarial images using input transformations. In: International Conference on Learning Representations (2018)
  11. He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision. Springer (2016)
  12. Huang, L., Joseph, A.D., Nelson, B., Rubinstein, B.I., Tygar, J.: Adversarial machine learning. In: Proceedings of the 4th ACM workshop on Security and artificial intelligence. ACM (2011)
  13. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (2012)
  14. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. In: International Conference on Learning Representations Workshop (2017)
  15. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial machine learning at scale. In: International Conference on Learning Representations (2017)
  16. Lin, Y.C., Hong, Z.W., Liao, Y.H., Shih, M.L., Liu, M.Y., Sun, M.: Tactics of adversarial attack on deep reinforcement learning agents. In: International Joint Conference on Artificial Intelligence. AAAI (2017)
  17. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. In: International Conference on Learning Representations (2017)
  18. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Computer Vision and Pattern Recognition. IEEE (2015)
  19. Luo, Y., Boix, X., Roig, G., Poggio, T., Zhao, Q.: Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292 (2015)
  20. Meng, D., Chen, H.: Magnet: a two-pronged defense against adversarial examples. arXiv preprint arXiv:1705.09064 (2017)
  21. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: Computer Vision and Pattern Recognition. IEEE (2015)
  22. Papernot, N., Goodfellow, I., Sheatsley, R., Feinman, R., McDaniel, P.: cleverhans v1.0.0: an adversarial machine learning library. arXiv preprint arXiv:1610.00768 (2016)
  23. Prakash, A., Moran, N., Garber, S., DiLillo, A., Storer, J.: Deflecting adversarial attacks with pixel deflection. arXiv preprint arXiv:1801.08926 (2018)
  24. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems (2015)
  25. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In: International Conference on Learning Representations (2018)
  26. Shrivastava, A., Gupta, A., Girshick, R.: Training region-based object detectors with online hard example mining. In: Computer Vision and Pattern Recognition. IEEE (2016)
  27. Simo-Serra, E., Trulls, E., Ferraz, L., Kokkinos, I., Fua, P., Moreno-Noguer, F.: Discriminative learning of deep convolutional feature point descriptors. In: International Conference on Computer Vision. IEEE (2015)
  28. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (2015)
  29. Song, Y., Kim, T., Nowozin, S., Ermon, S., Kushman, N.: Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766 (2017)
  30. Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: AAAI (2017)
  31. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Computer Vision and Pattern Recognition. IEEE (2016)
  32. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014)
  33. Tramèr, F., Kurakin, A., Papernot, N., Boneh, D., McDaniel, P.: Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 (2017)
  34. Xie, C., Wang, J., Zhang, Z., Ren, Z., Yuille, A.: Mitigating adversarial effects through randomization. In: International Conference on Learning Representations (2018)
  35. Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., Yuille, A.: Adversarial Examples for Semantic Segmentation and Object Detection. In: International Conference on Computer Vision. IEEE (2017)
  36. Zhang, Z., Qiao, S., Xie, C., Shen, W., Wang, B., Yuille, A.L.: Single-shot object detection with enriched semantics. arXiv preprint arXiv:1712.00433 (2017)