The ProxLogBarrier adversarial attack

A principled approach for generating adversarial images under non-smooth dissimilarity metrics

Aram-Alexandre Pooladian, Chris Finlay, Tim Hoheisel, and Adam Oberman
Department of Mathematics and Statistics, McGill University
{aram-alexandre.pooladian,christopher.finlay}@mail.mcgill.ca, tim.hoheisel@mcgill.ca, adam.oberman@mcgill.ca
Abstract.

Deep neural networks perform well on real world data but are prone to adversarial perturbations: small changes in the input easily lead to misclassification. In this work, we propose an attack methodology not only for cases where the perturbations are measured by $\ell_p$ norms, but in fact any adversarial dissimilarity metric with a closed proximal form. This includes, but is not limited to, $\ell_1$, $\ell_2$, and $\ell_\infty$ perturbations; the $\ell_0$ counting "norm" (i.e., true sparseness); and the total variation seminorm, which is a (non-$\ell_p$) convolutional dissimilarity measuring local pixel changes. Our approach is a natural extension of a recent adversarial attack method, and eliminates the differentiability requirement of the metric. We demonstrate our algorithm, ProxLogBarrier, on the MNIST, CIFAR10, and ImageNet-1k datasets. We consider undefended and defended models, and show that our algorithm transfers easily across datasets. We observe that ProxLogBarrier outperforms a host of modern adversarial attacks specialized for the $\ell_0$ case. Moreover, by altering images in the total variation seminorm, we shed light on a new class of perturbations that exploit neighboring pixel information.

AO supported by AFOSR grant FA9550-18-1-0167

1. Introduction

Deep neural networks (DNNs) have strong classification abilities on training and validation datasets. However, they are vulnerable to adversarial images, which are formally defined as imperceptibly small changes (in a given dissimilarity metric) to model input that lead to misclassification [origin_adversarial, FGSM]. This behavior could mean several things: the model is overfitting on some level; the model is under-regularized; or it is simply due to complex nonlinearities in the model. This has led to several lines of work in the deep learning community: generating adversarial images, defending against adversarial attacks, and determining which dissimilarity metric to consider.

Regarding the latter, it is not obvious what "imperceptibly small" means, and recent work has demonstrated adversarial image generation beyond $\ell_p$ norms by considering deformations instead of perturbations [adef]. There is also the problem of generating "realistic" attacks, such as sparse attacks: for example, small stickers on a road sign that may tamper with autonomous vehicles [eykholt2017robust]. The purpose of this work is adversarial image generation for a broad class of (possibly non-differentiable) dissimilarity metrics, on both undefended and defended networks. We do not pass judgment on which metric is "best"; instead, we are interested in an attack framework that works well for a broad class of metrics.

Adversarial attacks are often broadly categorized into one of two types: white-box attacks, where the full structure of the neural network is provided to the attacker, including gradient information, and black-box attacks, where the attacker is only given the model decision. One of the first proposed adversarial attacks is the Fast Gradient Sign Method (FGSM), which generates an adversarial image with respect to the $\ell_\infty$ norm, along with its iterative form, dubbed Iterative FGSM (IFGSM) [FGSM, ifgsm]. A similar iterative attack has also been developed with respect to the $\ell_2$ norm. In their purest form, the above attacks perform gradient ascent on the training loss function subject to a norm constraint on the perturbation, either with one step in the case of FGSM, or multiple steps in the case of IFGSM and its $\ell_2$ equivalents. Apart from training loss maximization, attacks have been developed using loss functions that directly measure misclassification [cw, deepfool]. Others have considered the $\ell_0$ and $\ell_1$ norms, both of which induce sparsity in the perturbations [sparsefool]. In the black-box setting, adversarial examples are generated using only model decisions, which is a much more expensive endeavor. However, black-box methods often perform better, most notably by avoiding gradient obfuscation, since they take advantage of sampling properties near the decision boundary of the model. Notable examples of black-box (decision-based) attacks are the Boundary Attack [boundaryattack] and the recent HopSkipJumpAttack [bapp].

The development of new and improved adversarial attacks has occurred in parallel with various defensive training regimes to provide robustness against adversarial perturbations. The task of training a robust network is two-fold: models must be resistant to perturbations of a certain magnitude, while also maintaining classification ability on clean data. It has been argued that these two objectives are inherently “at odds” [atodds]. A popular method for training robust networks is adversarial training, where adversarial examples are added to the training data (see for example [madryLinf]).

Contributions

This paper introduces an attack methodology for not just $\ell_p$ norms, but any adversarial dissimilarity metric with a closed proximal form. This includes, but is not limited to, $\ell_1$, $\ell_2$, $\ell_\infty$, the $\ell_0$ counting "norm" (a true measurement of the sparseness of the perturbation), and total variation, a non-$\ell_p$ dissimilarity. Our approach adopts the relaxation structure of the recently proposed LogBarrier attack [logbarrier], which required differentiable metrics. We extend this work to a broad class of non-smooth (non-differentiable) metrics. Our algorithm, ProxLogBarrier, uses the proximal gradient method to generate adversarial perturbations. We demonstrate our attack on the MNIST, CIFAR10, and ImageNet-1k datasets. ProxLogBarrier shows significant improvement over both the LogBarrier attack and the other attacks we considered. In particular, in the $\ell_0$ case, we achieve state-of-the-art results with respect to a suite of attacks typically used for this problem class. Finally, by using the total variation dissimilarity, we shed light on a new class of imperceptible adversaries that incorporates neighboring pixel information, which can be viewed as an adversarial attack measured in a convolutional norm.

2. Background material

2.1. Adversarial attacks

Let $X$ denote the image space and $Y$ the label space (the unit simplex over $K$ classes). An image-label pair is written $(x, y) \in X \times Y$, with the image belonging to one of the $K$ classes. The trained model is a map $f : X \to Y$. An adversarial perturbation $\delta$ should be small with respect to a dissimilarity metric (henceforth simply called the metric) $d(\cdot)$, e.g. $d(\delta) = \|\delta\|_\infty$. Formally, the optimal adversarial perturbation is the minimizer of the following optimization problem:

$$\min_{\delta}\; d(\delta) \quad \text{subject to} \quad \arg\max_i f_i(x+\delta) \neq \arg\max_i y_i, \qquad x + \delta \in X. \tag{1}$$

DNNs might be powerful classifiers, but that does not mean their decision boundaries are well-behaved. Instead, researchers have popularized using the training loss, often the cross-entropy loss $\mathcal{L}$, as a surrogate for the decision boundary: typically a model is trained until the loss is very low, which is often related to good classification performance. Thus, instead of solving (1), one can perform Projected Gradient Descent (PGD) on the cross-entropy loss:

$$\max_{\|\delta\| \leq \varepsilon}\; \mathcal{L}\big(f(x+\delta),\, y\big), \tag{2}$$

where $\|\cdot\|$ is typically taken to be either the $\ell_2$ or $\ell_\infty$ norm, and $\varepsilon$ defines the perturbation threshold of interest.
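As an illustration of (2), the following is a minimal PyTorch sketch of an $\ell_\infty$-constrained PGD loop; the model handle, the threshold `eps`, the step size `alpha`, and the iteration count are placeholders for illustration, not settings used in any experiment here.

```python
import torch
from torch.nn.functional import cross_entropy

def pgd_linf(model, x, y, eps=0.03, alpha=0.01, steps=20):
    """Minimal l-infinity PGD sketch: ascend the cross-entropy loss inside an eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # gradient ascent step
            delta.clamp_(-eps, eps)                   # project onto the l-infinity ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta inside [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()
```

Replacing the sign step and the ball projection gives the corresponding $\ell_2$ variant.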

Some adversarial attack methods try to solve the problem posed in (1) without incorporating the loss function used to train the network. For example, Carlini & Wagner attack the logit layer of a network and solve a different optimization problem, which depends on the choice of norm [cw]. Regarding adversarial defense methods, it has been demonstrated that a significant number of prior defenses fail because of "gradient obfuscation", where gradients are small only in a local neighborhood of the image [obfuscated_cw]. Another metric of adversarial dissimilarity is the $\ell_0$ "norm", which counts the total number of pixels that differ between the adversary and the clean image [sparsefool, jsma]. This is of interest because an adversary might be required to budget the number of pixels it is allowed to perturb, while still remaining "imperceptible" to the human eye. For example, the sticker attack [eykholt2017robust] is a practical attack with real-world consequences, and does not interfere with every single part of the image.

2.2. Proximal gradient method

Our adversarial attack amounts to a proximal gradient method. Proximal algorithms are a driving force in nonsmooth optimization, and are receiving increasing attention in the deep learning community across a myriad of problems [proxquant, admm-prox, learningprox, catalyst]. For a full discussion of this topic, we suggest [beckbook].

We consider the following framework for proximal algorithms, namely the composite minimization problem

$$\min_{x \in \mathbb{E}}\; F(x) := f(x) + g(x), \tag{3}$$

where $\mathbb{E}$ is a Euclidean space. We make the following assumptions:

  • $g$ is a non-degenerate, closed, convex function over $\mathbb{E}$;

  • $f$ is a non-degenerate, closed function, with $\mathrm{dom}(f)$ convex, and has $L$-Lipschitz gradients over the interior of its domain;

  • the solution set, $\arg\min_{x \in \mathbb{E}} F(x)$, is non-empty.

Generating a stationary point of (3) amounts to finding a fixed point of the following sequence:

$$x^{k+1} = \mathrm{prox}_{\lambda g}\big(x^k - \lambda \nabla f(x^k)\big), \tag{4}$$

where $\lambda > 0$ is some step size, and the proximal operator $\mathrm{prox}_{\lambda g}$ is defined as

$$\mathrm{prox}_{\lambda g}(z) := \arg\min_{u \in \mathbb{E}} \Big\{ g(u) + \tfrac{1}{2\lambda}\|u - z\|_2^2 \Big\}.$$

Despite $f$ not being convex, there are still convergence properties we can get from a sequence of iterates generated in this way. The following theorem is a simplified version of what can be found in [beckbook] (Section 10.3, with proof), and is the main motivation for our proposed method.

Theorem 1.

Given the assumptions on (3), let $\{x^k\}_{k \geq 0}$ be the sequence generated by (4) with fixed step size $\lambda = 1/L$. Then,

  (a) the sequence $\{F(x^k)\}$ is non-increasing; in addition, $F(x^{k+1}) < F(x^k)$ if and only if $x^k$ is not a stationary point of (3);

  (b) $\|x^{k+1} - x^k\| \to 0$ as $k \to \infty$;

  (c) all limit points of the sequence $\{x^k\}$ are stationary points of (3).
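To make iteration (4) concrete, here is a small self-contained sketch of the proximal gradient method on a toy composite problem: a least-squares term (the smooth part $f$, which here happens to be convex, stronger than the theorem requires) plus an $\ell_1$ penalty (the non-smooth part $g$), whose proximal operator is soft thresholding. The problem data and parameter values below are invented purely for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(A, b, reg=0.1, steps=500):
    """Minimize 0.5*||Ax - b||^2 + reg*||x||_1 with the fixed-step iteration (4)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    lam = 1.0 / L                        # fixed step size, as in Theorem 1
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)         # gradient of the smooth part f
        x = soft_threshold(x - lam * grad, lam * reg)   # prox step on g
    return x

# toy usage with random data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
x_hat = proximal_gradient(A, b)
```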

3. Our method: ProxLogBarrier

Following the previous theoretical ideas, we reformulate (1) in the following way:

$$\min_{\delta}\; d(\delta) \quad \text{subject to} \quad \max_{i \neq c} F_i(x+\delta) \;\geq\; F_c(x+\delta), \qquad x + \delta \in X. \tag{5}$$

Here, $F$ is the model output before the softmax layer that "projects" onto $Y$, so that $f = \mathrm{softmax} \circ F$ and $\arg\max_i f_i = \arg\max_i F_i$; the index $c = \arg\max_i y_i$ denotes the correct class. In other words, we want to perturb the clean image minimally in such a way that the model misclassifies it. This problem is difficult as the decision boundary has virtually no exploitable structure. Thus the problem can be relaxed using a logarithmic barrier, a technique often used in traditional optimization [nocedal]:

$$\min_{\delta}\; d(\delta) \;-\; \mu \log\Big( \max_{i \neq c} F_i(x+\delta) - F_c(x+\delta) \Big), \tag{6}$$

where $\mu > 0$ is the barrier parameter.

This objective function now includes the constraint that enforces misclassification. In [logbarrier], (6) was originally solved via gradient descent, which necessarily assumes that $d$ is at least differentiable. The assumption of differentiability is not a given, and may be impracticable. For example, consider the subdifferential of the $\ell_\infty$ norm at a point $\delta \neq 0$:

$$\partial \|\delta\|_\infty = \mathrm{conv}\big\{ \operatorname{sign}(\delta_i)\, e_i \;:\; i \in \arg\max_j |\delta_j| \big\},$$

where $e_1, \dots, e_n$ are the standard basis vectors. At each subgradient step, very little information is obtained: when the maximal component is unique, the subgradient is supported on a single coordinate. Indeed, in the original LogBarrier paper, a smooth approximation of this norm was used to get around this issue. We shall see that this issue does not arise with our proposed ProxLogBarrier method.
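The point about uninformative subgradients is easy to verify numerically: automatic differentiation through the $\ell_\infty$ norm returns a single signed basis vector. A small sketch (the tensor values are arbitrary):

```python
import torch

delta = torch.tensor([0.2, -0.7, 0.1], requires_grad=True)
torch.linalg.vector_norm(delta, ord=float("inf")).backward()
print(delta.grad)   # a single signed basis vector: tensor([ 0., -1.,  0.])
```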

For brevity, let $h(\delta) := \max_{i \neq c} F_i(x+\delta) - F_c(x+\delta)$ and $\phi(\delta) := -\mu \log\big(h(\delta)\big)$.

The optimization problem (6) becomes

$$\min_{\delta}\; \phi(\delta) + d(\delta). \tag{7}$$

One can draw several similarities between (7) and (3). As before, we have no guarantees of convexity on $\phi$, which plays the role of $f$ in the composite problem, but it is smooth provided $F$ is smooth (that is, smooth from a computational perspective). Our dissimilarity metric $d$ represents $g$, as it usually has a closed-form proximal operator. Thus, we simply turn to the proximal gradient method to solve the minimization problem in (7).

We iteratively find a minimizer for the problem; the attack is outlined in Algorithm 1. Due to the highly non-convex nature of the decision boundary, we perform a backtracking step to ensure the proposed iterate is in fact adversarial. We remark that the adversarial attack problem is constrained by the image space, and thus requires a further projection step back onto the image space (pixels must lie in the range [0,1]). In traditional non-convex optimization, best practice is to also record the "best iterate", as valleys are likely pervasive throughout the decision boundary. This way, even if at some point the gradient sends the iterate far off and it is unable to return within the remaining iterations, we already have a better candidate on record. The algorithm begins with a misclassified image, and moves the iterates towards the original image by minimizing the dissimilarity metric. Misclassification is maintained by the log barrier function, which prevents the iterates from crossing the decision boundary; refer to Figure 1. Contrast this with PGD-based algorithms, which begin at or near the original image and iterate away from it.

  Input: image-label pair $(x, y)$, trained model $F$, adversarial dissimilarity metric $d$
  Initialize hyperparameters: barrier parameter $\mu$, step size $\lambda$, decay factor $\sigma \in (0,1)$, number of iterations $K$
  Initialize $\delta^0$ so that $x + \delta^0$ is misclassified; set $\delta_{\mathrm{best}} = \delta^0$
  for $k = 0, \dots, K-1$ do
     Every few iterations: decrease the barrier parameter, $\mu \leftarrow \sigma \mu$
     $\delta^{k+1} \leftarrow \mathrm{prox}_{\lambda d}\big(\delta^k - \lambda \nabla \phi(\delta^k)\big)$
     Project $x + \delta^{k+1}$ back onto the image space $[0,1]^n$
     Backtrack along the line between the current and previous iterate until $x + \delta^{k+1}$ is misclassified
     if $d(\delta^{k+1}) < d(\delta_{\mathrm{best}})$ then
        $\delta_{\mathrm{best}} \leftarrow \delta^{k+1}$
     else
        keep $\delta_{\mathrm{best}}$ unchanged
     end if
  end for
  Output: adversarial image $x + \delta_{\mathrm{best}}$
Algorithm 1 ProxLogBarrier (PLB)
Figure 1. Illustration of the ProxLogBarrier attack: the attack is initialized with a misclassified image, which is then moved towards the original image.
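For concreteness, the following is a minimal PyTorch sketch of Algorithm 1 under simplifying assumptions: a single image at a time, a placeholder barrier-decay schedule, a fixed geometric backtracking factor, and generic `prox` and `dist` handles for the chosen metric $d$. All names and hyperparameter values are illustrative; they are not taken from the released implementation.

```python
import torch

def barrier_loss(model, x_adv, true_class, mu):
    """-mu * log of the misclassification margin (largest wrong logit minus true logit)."""
    logits = model(x_adv).squeeze(0)
    wrong = torch.cat([logits[:true_class], logits[true_class + 1:]])
    return -mu * torch.log(wrong.max() - logits[true_class])

def prox_log_barrier(model, x, true_class, prox, dist, x0_adv,
                     steps=1000, lam=0.1, mu=1.0, sigma=0.5, decay_every=100):
    """Sketch of Algorithm 1: proximal gradient descent on the log-barrier objective (7)."""
    delta = (x0_adv - x).detach().clone()        # start from a misclassified image
    best = delta.clone()
    for k in range(steps):
        if k > 0 and k % decay_every == 0:
            mu *= sigma                          # tighten the barrier over time
        delta.requires_grad_(True)
        grad, = torch.autograd.grad(barrier_loss(model, x + delta, true_class, mu), delta)
        with torch.no_grad():
            prev = delta.clone()
            delta = prox(delta - lam * grad, lam)        # proximal gradient step, cf. (4)
            delta = (x + delta).clamp(0, 1) - x          # project back into the image box
            for _ in range(25):                          # backtrack until misclassified
                if model(x + delta).argmax() != true_class:
                    break
                delta = 0.5 * (delta + prev)
            else:
                delta = prev                             # fall back to the last safe iterate
            if dist(delta) < dist(best):                 # keep the best iterate seen so far
                best = delta.clone()
    return x + best
```

Any map `prox(z, lam)` implementing the proximal operator of the chosen metric can be plugged in; concrete examples are sketched below, after the proximal operators are derived.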

Proximal operators for dissimilarities

To complete the algorithm, it remains to compute the proximal operator for various choices of $d$. One can turn to [beckbook] for complete derivations of the proximal operators for the adversarial metrics we are considering, namely the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms, and the cardinality function. Consider measuring the $\ell_\infty$ distance between the clean image and the desired adversarial image, $d(\delta) = \|\delta\|_\infty$.

Due to the Moreau Decomposition Theorem [RWbible], the proximal operator of this function relies on projecting onto the unit $\ell_1$ ball:

$$\mathrm{prox}_{\lambda \|\cdot\|_\infty}(z) = z - \lambda\, P_{B_1}\!\big(z / \lambda\big),$$

where $P_{B_1}$ denotes Euclidean projection onto the unit $\ell_1$ ball. We make use of the algorithm from [duchi] to perform the projection step, implemented over batches of vectors for efficiency. Similarly, one obtains the proximal operators for $\|\cdot\|_1$ and $\|\cdot\|_2$ via the same theorem,

$$\mathrm{prox}_{\lambda \|\cdot\|_1}(z) = \mathcal{T}_\lambda(z), \qquad \mathrm{prox}_{\lambda \|\cdot\|_2}(z) = \Big(1 - \frac{\lambda}{\max\{\|z\|_2, \lambda\}}\Big)\, z,$$

where $\mathcal{T}_\lambda(z) = \operatorname{sign}(z) \odot \max\{|z| - \lambda, 0\}$ is the soft-thresholding operator. In the case that one wants to minimize the number of perturbed pixels in the adversarial image, one can turn to the counting "norm", $\|\cdot\|_0$, which counts the number of non-zero entries in a vector. While this function is non-convex, the proximal operator still has a closed form:

$$\mathrm{prox}_{\lambda \|\cdot\|_0}(z)_i = \mathcal{H}_{\sqrt{2\lambda}}(z_i) = \begin{cases} z_i, & |z_i| > \sqrt{2\lambda}, \\ 0, & \text{otherwise}, \end{cases}$$

where $\mathcal{H}_{\sqrt{2\lambda}}$ is a hard-thresholding operator, which acts component-wise on vector arguments.
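The closed forms above translate directly into code. Below is a sketch of these proximal operators in PyTorch, including a sorting-based projection onto the $\ell_1$ ball in the spirit of [duchi]; tensor shapes, names, and the single-vector (unbatched) treatment are simplifications for illustration.

```python
import torch

def prox_l1(z, lam):
    """Soft thresholding: the proximal operator of lam * ||.||_1."""
    return torch.sign(z) * torch.clamp(z.abs() - lam, min=0.0)

def prox_l2(z, lam):
    """Block soft thresholding: the proximal operator of lam * ||.||_2."""
    norm = torch.clamp(z.norm(p=2), min=lam)
    return (1.0 - lam / norm) * z

def prox_l0(z, lam):
    """Hard thresholding: the proximal operator of lam * ||.||_0."""
    return torch.where(z.abs() > (2.0 * lam) ** 0.5, z, torch.zeros_like(z))

def project_l1_ball(z, radius=1.0):
    """Euclidean projection onto the l1 ball (sorting-based, O(n log n))."""
    v = z.flatten()
    if v.abs().sum() <= radius:
        return z
    u, _ = torch.sort(v.abs(), descending=True)
    cumsum = torch.cumsum(u, dim=0)
    ks = torch.arange(1, v.numel() + 1, device=z.device, dtype=z.dtype)
    rho = torch.nonzero(u * ks > cumsum - radius).max()
    theta = (cumsum[rho] - radius) / (rho + 1).to(z.dtype)
    return (torch.sign(v) * torch.clamp(v.abs() - theta, min=0.0)).view_as(z)

def prox_linf(z, lam):
    """Prox of lam * ||.||_inf via Moreau decomposition: z - lam * P_{B_1}(z / lam)."""
    return z - lam * project_l1_ball(z / lam, radius=1.0)
```

Any of these (for example `prox_linf` or `prox_l0`) can be passed as the `prox` handle in the attack sketch above, with `dist` set to the matching norm.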

Example of a non-$\ell_p$ dissimilarity: total variation

We let $X$ denote the image space and, for the time being, assume the images are grayscale; let $D$ denote the finite-difference operator on the grid defined by the image. Then $Du = (D_1 u, D_2 u)$, where

$$(D_1 u)_{i,j} = u_{i+1,j} - u_{i,j}, \qquad (D_2 u)_{i,j} = u_{i,j+1} - u_{i,j}, \tag{8}$$

and $(i,j)$ are the pixel indices of the image in row-column notation. The anisotropic total variation semi-norm is defined by

$$\|u\|_{TV} := \sum_{i,j} \big\|(Du)_{i,j}\big\|_1 = \sum_{i,j} \big( |u_{i+1,j} - u_{i,j}| + |u_{i,j+1} - u_{i,j}| \big), \tag{9}$$

where the inner $\ell_1$ norm is applied to the pair of finite differences at each pixel. Heuristically, this is a measure of large changes between neighboring pixels. In practice, $D$ can be implemented via a convolution. In the case of color images, we aggregate the total variation over the channels. Total variation (TV) is not a true norm, in that non-zero images can have zero TV. In what follows, we omit the distinction and write TV-norm to mean the total variation seminorm. Traditionally, TV has been used in the context of image denoising [ROF].

What does this mean in the context of adversarial perturbations? The TV-norm of the perturbation will be small when the perturbation has few jumps between pixels. That is, small TV-norm perturbations have locally flat regions. This is primarily because the TV-norm is convolutional in nature: the finite-difference gradient operator incorporates neighboring pixel information. We note that this is not the first instance of TV being used as a dissimilarity metric [spatially]; however, our approach is quite different and is not derived from a flow. An outline for the proximal operator can be found in [beckbook]; we use a standard package for efficient computation [proxtv_1, proxtv_2].
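To illustrate (8)-(9), here is a short sketch that evaluates the anisotropic TV seminorm of an image tensor with forward differences (the convolutional view mentioned above). Only the evaluation is shown; as stated, we rely on an external package for the TV proximal operator itself. Shapes and names are illustrative.

```python
import torch

def anisotropic_tv(u):
    """Anisotropic total variation of an image of shape (C, H, W), summed over channels."""
    d_rows = u[:, 1:, :] - u[:, :-1, :]   # vertical forward differences, cf. (8)
    d_cols = u[:, :, 1:] - u[:, :, :-1]   # horizontal forward differences
    return d_rows.abs().sum() + d_cols.abs().sum()

# a constant (non-zero) image has zero TV, which is why TV is only a seminorm
print(anisotropic_tv(torch.ones(3, 32, 32)))   # tensor(0.)
```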

Attack         | MNIST: % err / % err / median | CIFAR10: % err / % err / median | ImageNet-1k: % err / % err / median
PLB            | 86.30 / 100   /   6           | 44.10 / 68.50 /  39             | 66.00 / 80.20 / 268
SparseFool     | 46.00 / 99.40 /  11           | 15.60 / 22.60 / 307*            | 30.40 / 46.80 / --
JSMA           | 12.73 / 61.38 /  25           | 29.56 / 48.92 /  84             |   --  /   --  / --
Pointwise      |  5.00 / 57.30 /  28           | 13.20 / 50.60 /  80             |   --  /   --  / --
(D) PLB        | 79.8  / 98.90 /   6           | 74.90 / 97.80 /  13             | 38.40 / 70.0  / 691
(D) SparseFool | 20.67 / 75.45 /  20           | 34.23 / 52.15 /  70             | 24.80 / 41.80 / --
(D) JSMA       | 12.63 / 44.51 /  34           | 36.65 / 60.79 /  53             |   --  /   --  / --
(D) Pointwise  | 12.50 / 65.80 /  24           | 23.80 / 43.10 / 102             |   --  /   --  / --

* We believe this is an implementation error on behalf of the repository. To accurately compare, we attacked an 18-layer ResNet for CIFAR10 that achieves slightly worse clean error as reported in [sparsefool]. Our median percent pixels perturbed was 1.4%, and they reported 1.27%.

Table 1. Adversarial robustness statistics, measured in the $\ell_0$ "norm": percent misclassification at two perturbation thresholds, and the median number of perturbed pixels. Entries marked "--" are not reported.
Attack         | MNIST: % err / % err | CIFAR10: % err / % err | ImageNet-1k: % err / % err
PLB            | 10.30 / 100          | 95.00 / 98.60          | 20.40 / 33.80
PGD            | 10.70 / 80.90        | 54.70 / 87.00          | 90.80 / 98.60
DeepFool       |  8.12 / 86.55        | 16.23 / 51.00          | 93.64 / 100
LogBarrier     |  5.89 / 73.90        | 60.60 / 93.10          |  7.60 /  7.70
(D) PLB        |  3.0  / 32.9         | 23.3  / 44.1           | 11.40 / 18.80
(D) PGD        |  2.8  / 23.6         | 22.9  / 46.1           | 49.20 / 96.60
(D) DeepFool   |  2.7  / 10.2         | 23.8  / 44.1           | 43.20 / 97.40
(D) LogBarrier |  2.50 / 11.89        | 17.6  / 28.3           |  9.80 / 10.40

Table 2. Adversarial robustness statistics, measured in the $\ell_\infty$ norm: percent misclassification at two perturbation thresholds.

4. Experimental methodology

Outline

We compare the ProxLogBarrier attack with several other adversarial attacks on MNIST [mnist_dataset], CIFAR10 [cifar10_dataset], and ImageNet-1k [imagenet_dataset]. For MNIST, we use the network described in [jsma]; on CIFAR10, we use a ResNeXt network [resnext]; and for ImageNet-1k, a ResNet50 [resnet50, DAWNBench]. We also consider defended versions of the aforementioned networks. This is to further benchmark the attack capability of ProxLogBarrier, and to reaffirm previous work in the area. For defended models, we consider Madry-style adversarial training for CIFAR10 and MNIST [madryLinf]. On ImageNet-1k, we use the recently proposed scaleable input-gradient regularization for adversarial robustness [finlay2019scaleable]. We randomly select 1000 (test) images to evaluate performance on MNIST and CIFAR10, and 500 (test) images on ImageNet-1k. We consider the same images on their defended counterparts. We note that for ImageNet-1k, we consider the problem of Top5 misclassification, where the log barrier is with respect to the following constraint set:

$$\big\{ \delta \;:\; F_{(5)}(x+\delta) \;\geq\; F_c(x+\delta) \big\},$$

where $F_{(5)}$ denotes the fifth-largest entry of the model output $F$.

We compare the ProxLogBarrier attack with a wide range of attack algorithms available through the FoolBox adversarial attack library [foolbox]. For perturbations in $\ell_0$, we compare against SparseFool [sparsefool], the Jacobian Saliency Map Attack (JSMA) [jsma], and Pointwise [pointwise] (this latter attack is black-box). For $\ell_2$ attacks, we consider the Carlini-Wagner attack (CW) [cw], Projected Gradient Descent (PGD) [ifgsm], DeepFool [deepfool], and the original LogBarrier attack [logbarrier]. Finally, for $\ell_\infty$ norm perturbations, we consider PGD, DeepFool, and LogBarrier. All hyperparameters are left at their implementation defaults, with the exception of SparseFool, where we used the exact parameters indicated in the paper. We omit the One-Pixel attack [onepixel], as [sparsefool] showed that this attack is quite weak on MNIST and CIFAR10, and not tractable on ImageNet-1k.

Attack         | MNIST: % err / % err / median | CIFAR10: % err / % err / median | ImageNet-1k: % err / % err / median
PLB            | 38.60 / 99.40 / 1.35          | 97.70 / 99.80 / --              | 47.60 / 89.40 / --
CW             | 35.10 / 98.30 / 1.41          | 89.94 / 95.97 / --              | 20.06 / 44.26 / 1.16
PGD            | 24.70 / 70.00 / 1.70          | 60.60 / 73.30 / --              | 37.60 / 70.60 / --
DeepFool       | 13.21 / 48.04 / 2.35          | 17.33 / 22.04 / 1.11            | 40.08 / 76.48 / --
LogBarrier     | 37.40 / 98.90 / 1.35          | 69.60 / 84.00 / --              | 43.70 / 88.30 / --
(D) PLB        | 29.50 / 92.90 / 1.54          | 28.7  / 35.4  / --              | 15.80 / 28.20 / 1.74
(D) CW         | 28.24 / 78.59 / 1.72          | 29.6  / 38.7  / --              |   --  /   --  / --
(D) PGD        | 17.20 / 45.70 / 2.44          | 28.30 / 34.70 / --              | 14.60 / 22.60 / 2.20
(D) DeepFool   |  5.22 / 18.07 / 3.73          | 28.0  / 33.3  / --              | 15.60 / 24.40 / 2.14
(D) LogBarrier | 25.00 / 89.60 / 1.65          | 28.0  / 34.6  / --              | 10.00 / 10.20 / 63.17

Table 3. Adversarial robustness statistics, measured in the $\ell_2$ norm: percent misclassification at two perturbation thresholds, and the median $\ell_2$ distance. Entries marked "--" are not reported.

Implementation details for our algorithm

Depending on the perturbation metric, we initialize the adversarial image with either sufficiently large Gaussian noise or uniform noise. Hyperparameters (the barrier parameter, proximal step size, and iteration count) are held fixed across MNIST and CIFAR10. We observed some computational drawbacks for ImageNet-1k: firstly, the proximal operator for the $\ell_0$ norm is far too strict, so we instead use the $\ell_1$ norm to induce sparseness in the adversarial perturbation (adjusting both the prox parameter and the step size accordingly). Other parameter changes for the ImageNet-1k dataset are a different proximal parameter and 2500 algorithm iterations. Finally, we found that using the softmax-layer outputs helps with ImageNet-1k attacks against both the defended and undefended network. For TV-norm perturbations, we set the proximal parameter separately and use far fewer iterations than before.

Reporting

For perturbations in $\ell_2$ and $\ell_\infty$, we report the percent misclassification at threshold levels that are somewhat standard [atodds]. Our choices of distance thresholds are necessarily somewhat arbitrary, so we supplement them with median perturbation distances for all attack norms to mitigate cherry-picking. For attacks that were unable to successfully perturb at least half of the sampled images, we do not report anything. If an attack was able to perturb more than half but not all of the images, we add an asterisk to the median distance. We denote the defended models by "(D)" (recall that for MNIST and CIFAR10 we use Madry-style adversarial training, and scaleable input-gradient regularization for ImageNet-1k).

Perturbations in $\ell_0$

(a) $\ell_0$ attacks on MNIST
(b) $\ell_0$ attacks on CIFAR10
Figure 2. Adversarial images for $\ell_0$ perturbations, generated by our method.

Results for $\ell_0$ perturbations are found in Table 1, with examples available in Figure 2 and Figure 4(b). Across all datasets considered, ProxLogBarrier outperforms all other attack methods, for both defended and undefended networks. It also appears unaffected by Madry-style adversarial training on both MNIST and CIFAR10. This is entirely reasonable, since Madry-style adversarial training targets $\ell_\infty$ attacks. In contrast, on ImageNet-1k, the defended model trained with input-gradient regularization performs significantly better than the undefended model, even though this defence is not aimed at $\ell_0$ attacks. Neither JSMA nor Pointwise scales to networks on ImageNet-1k. Pointwise excels on smaller images, since it takes fewer than 1000 iterations to cycle over every pixel and check whether it can be zeroed out. We remark that SparseFool was unable to adversarially attack all images, whereas ProxLogBarrier always succeeded.

Perturbations in $\ell_\infty$

Results for $\ell_\infty$ perturbations are found in Table 2. Our attack stands out on MNIST, in both the defended and undefended cases. On CIFAR10, our attack is best on the undefended network, and only slightly worse than PGD when the network is adversarially defended. On ImageNet-1k, our method suffers dramatically. This is likely due to very poor decision boundaries with respect to this norm, as our method necessarily does better when the boundaries are not muddled. PGD does not focus on the decision boundary explicitly, and thus has more room to find an adversarial example quickly.

(a) TV-norm attacks on MNIST
(b) TV-norm attacks on CIFAR10
Figure 3. Adversarial images for TV-norm perturbations, generated by our method.

Perturbations in $\ell_2$

Results for perturbations measured in Euclidean distance are found in Table 3. For MNIST and ImageNet-1k, on both defended and undefended networks, our attack performs better than all other methods, both in median distance and at a given perturbation norm threshold. On CIFAR10, we are best on the undefended network but lose to CW in the defended case. However, the CW attack did not scale to ImageNet-1k using the implementation in the FoolBox attack library.

Perturbations in the TV-norm

To our knowledge, there are no other TV-norm attacks against which to compare our method. However, we present the median total variation across the data in question, and a handful of pictures for illustration. On MNIST, adversarial images with minimal total variation are often as expected: near-flat perturbations or very few pixels perturbed (see Figure 3(a)). For CIFAR10 and ImageNet-1k, we have found that adversarial images with small TV-norm have an adversarial "tint": they appear nearly identical to the original, with a small color shift. When the adversary is not a tint, perturbations are highly localized, or localized in several regions. See for example Figures 3(b) and 4(a).

             | median TV-norm | max TV-norm
MNIST        | 2.52           | 11.0
CIFAR10      | 1.36           | 11.0
ImageNet-1k  | 13.4           | 149.6

Table 4. Statistics for perturbations in the TV-norm.

Algorithm runtime

We strove to implement ProxLogBarrier so that it can be run in a reasonable amount of time. For that reason, ProxLogBarrier is implemented to work over a batch of images. Using one consumer-grade GPU, we can comfortably attack several MNIST and CIFAR10 images simultaneously, but only one ImageNet-1k image at a time. We report our algorithm runtimes in Table 5. Algorithms implemented in the FoolBox repository were not written to take advantage of the GPU, hence we omit run-time comparisons with them. Heuristically speaking, PGD is one of the faster algorithms, whereas CW, SparseFool, and DeepFool are slower. We also omit runtimes for the TV-norm attack, since its proximal operator is coded in C rather than Python.

We are not surprised that our attack takes longer in some norms than in others; this is likely due to the backtracking step used to ensure misclassification of each iterate. On ImageNet-1k, the ProxLogBarrier attack in the $\ell_\infty$ metric is quite slow due to the projection step onto the $\ell_1$ ball, which is $O(n \log n)$, where $n$ is the input dimension [duchi].

             | Batch size | Runtime per batch (s), by attack norm
MNIST        | 100        | 8.35 / 6.91 / 6.05
CIFAR10      | 25         | 69.07 / 56.11 / 30.87
ImageNet-1k  | 1          | 35.45 / 29.47 / 75.50

Table 5. ProxLogBarrier attack runtimes (in seconds); the three values per dataset correspond to the three perturbation norms considered.
(a) TV-norm attacks
(b) $\ell_0$ attacks, with fewer than 1000 pixels perturbed
Figure 4. Adversarial images for ImageNet-1k. Note that the $\ell_0$ attacks are only visible when the image is magnified. The TV-norm perturbations are visible as either a tint of the full image, or as a set of local tints.

5. Conclusion

We have presented a concise framework for generating adversarial perturbations by incorporating the proximal gradient method. We have expanded upon the LogBarrier attack, which was originally only effective in the $\ell_2$ and $\ell_\infty$ norms, by addressing the $\ell_0$ case and the total variation seminorm. Thus we have proposed a method unifying all three common perturbation scenarios. Our approach requires fewer hyperparameter tweaks than LogBarrier, and performs significantly better than many of the attack methods we compared against, both on defended and undefended models, and across all norm choices. We highlight that our method is, to our knowledge, the best choice for perturbations measured in $\ell_0$, compared to all other methods available in FoolBox. We also perform better than all other attacks considered on the MNIST network, both in median distance and at commonly reported thresholds. The proximal gradient method points towards new forms of adversarial attacks, such as those measured in the TV-norm, provided the attack's dissimilarity metric has a closed proximal form.

References
