Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification


Xiaoyu Cao, Neil Zhenqiang Gong
ECE Department, Iowa State University
{xiaoyuc, neilgong}@iastate.edu
Abstract.

Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example whose label can be correctly predicted by a DNN classifier. An attacker can add a small, carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label; the crafted testing example is called an adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety- and security-critical applications such as self-driving cars.

In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers perform point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on the MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to state-of-the-art evasion attacks.

Conference: ACSAC'17, San Juan, Puerto Rico. Journal year: 2017.

1. Introduction

Deep neural networks (DNNs) are unprecedentedly effective at solving many challenging artificial intelligence problems such as image recognition (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), natural language processing (Mikolov et al., 2013), and playing games (Silver et al., 2016). For instance, DNNs can recognize images with accuracies that are comparable to humans (Krizhevsky et al., 2012), and they can outperform the best human Go players (Silver et al., 2016).

However, researchers in various communities–such as security, machine learning, and computer vision–have demonstrated that DNNs are vulnerable to attacks at testing time (Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016a; Moosavi-Dezfooli et al., 2016; Liu et al., 2017; Carlini and Wagner, 2017; Papernot et al., 2017). For instance, in image recognition, an attacker can add a small noise to a testing example such that the example is misclassified by a DNN classifier. The testing example with noise is called an adversarial example (Szegedy et al., 2013). In contrast, the original example is called a benign example. Usually, the noises are so small that, to a human, the benign example and the adversarial example still have the same label. Figure 1 shows some adversarial examples for digit recognition in the MNIST dataset. The adversarial examples were generated by the state-of-the-art evasion attacks proposed by Carlini and Wagner (Carlini and Wagner, 2017). We use the same DNN classifier as the one used by them. The examples in the ith row have true label i, while the examples in the jth column are predicted to have label j by the DNN classifier, where i, j ∈ {0, 1, …, 9}.

Figure 1. Adversarial examples generated by an evasion attack proposed by Carlini and Wagner (Carlini and Wagner, 2017).

Evasion attacks limit the use of DNNs in safety- and security-critical applications such as self-driving cars. Adversarial examples can cause self-driving cars to make unwanted decisions. For instance, one basic capability of self-driving cars is to automatically recognize stop signs and traffic lights. Suppose an adversary creates an adversarial stop sign, i.e., the adversary adds several human-unnoticeable dots to a stop sign, such that the self-driving car does not recognize it as a stop sign. As a result, the self-driving car will not stop at the stop sign and may collide with other cars, resulting in severe traffic accidents.

To defend against evasion attacks, Goodfellow et al. (Goodfellow et al., 2014) proposed to train a DNN via augmenting the training dataset with adversarial examples, which is known as adversarial training. Specifically, for each training benign example, the learner generates a training adversarial example using evasion attacks. Then, the learner uses a standard algorithm (e.g., back propagation) to learn a DNN using the original training benign examples and the corresponding adversarial examples. However, adversarial training is not robust to adversarial examples that are unseen during training. Papernot et al. (Papernot et al., 2016b) proposed a distillation based method to train DNNs. A DNN that is trained via distillation can significantly reduce the success rates of the evasion attacks also proposed by Papernot et al. (Papernot et al., 2016a). However, Carlini and Wagner (Carlini and Wagner, 2017) demonstrated that their attacks can still achieve 100% success rates for DNNs trained with distillation. Moreover, the noises added to the benign examples when generating adversarial examples are only slightly higher for distilled DNNs than those for undistilled DNNs. Carlini and Wagner (Carlini and Wagner, 2017) concluded that all defenses should be evaluated against state-of-the-art evasion attacks, i.e., the attacks proposed by them at the time of writing this paper. For simplicity, we call their attacks CW.

Our work:  We propose a new defense method called region-based classification. Our method can reduce success rates of the CW attacks from 100% to less than 16%, while not impacting classification accuracy on testing benign examples. First, we performed a measurement study about the adversarial examples generated by the CW attacks. We trained a 10-class DNN classifier on the standard MNIST dataset to recognize digits in images. The DNN has the same architecture as the one used by Carlini and Wagner (Carlini and Wagner, 2017). Suppose we have a testing digit 0. We use a CW attack to generate an adversarial example for each target label 1, 2, …, 9. Each example is represented as a data point in a high-dimensional space. For each adversarial example, we sample 10,000 data points from a small hypercube centered at the adversarial example in the high-dimensional space. We use the DNN classifier to predict labels for the 10,000 data points. We found that a majority of the 10,000 data points are still predicted to have label 0. Our measurement results indicate that 1) the adversarial examples generated by the CW attacks are close to the classification boundary, and 2) ensembling information in the hypercube around an adversarial example could correctly predict its label.

Second, based on our measurement results, we propose a region-based classification. In our region-based classification, we learn a DNN classifier using standard training algorithms. When predicting the label of a testing example (benign or adversarial), we sample m data points uniformly at random from the hypercube that is centered at the testing example and has a length of r. We use the DNN classifier to predict the label of each sampled data point. Finally, we predict the label of the testing example to be the one that appears most frequently among the sampled data points. To distinguish our region-based classification from traditional DNN classification, we call traditional DNNs point-based classification.

One challenge for our region-based classification is how to determine the length r of the hypercube. The length r is a critical parameter that controls the tradeoff between robustness to adversarial examples and classification accuracy on benign examples. To address the challenge, we propose to learn the length r using a validation dataset consisting of only benign examples. We do not use adversarial examples because the adversarial examples used by the attacker may not be accessible to the defender. Our key idea is to select the maximal length r such that the classification accuracy of our region-based classification on the validation dataset is no smaller than that of the standard point-based DNN classifier. We propose to select the maximal possible length so that an adversarial example needs a larger noise, moving it further away from the classification boundary, in order to evade our region-based classification.

Third, we evaluate our region-based classification using two standard image recognition datasets, MNIST and CIFAR-10. We use the CW attacks to generate adversarial examples. First, our evaluation results demonstrate that our region-based classification achieves the same classification accuracy on testing benign examples as the standard point-based classification. However, adversarial training and distillation sacrifice classification accuracy. Second, for our region-based classification, the CW attacks have less than 16% and 7% success rates on the MNIST and CIFAR-10 datasets, respectively. In contrast, for standard point-based classification, adversarial training, and distillation, the CW attacks achieve 100% success rates on both datasets. Third, we consider an attacker who strategically adapts the CW attacks to our region-based classification. In particular, the attacker adds more noise to an adversarial example generated by a CW attack to move it further away from the classification boundary. Our results demonstrate that our region-based classification can also effectively defend against such adapted attacks. In particular, the largest success rate that the adapted attacks can achieve on the MNIST dataset is 64%, when the attacker doubles the noise added to adversarial examples. We conclude that, in the future, researchers who develop powerful evasion attacks should evaluate their attacks against our region-based classification instead of standard point-based classification.

In summary, our contributions are as follows:

  • We perform a measurement study to characterize the adversarial examples generated by state-of-the-art evasion attacks.

  • We propose a region-based classification to defend against state-of-the-art evasion attacks, while not impacting classification accuracy on benign examples.

  • We evaluate our region-based classification using two image datasets. Our results demonstrate that 1) our method does not impact classification accuracy on benign examples, and 2) our method can significantly reduce success rates of state-of-the-art evasion attacks as well as attacks that are strategically adjusted to our region-based classification.

2. Background and Related Work

2.1. Deep Neural Networks (DNNs)

A deep neural network (DNN) consists of an input layer, several hidden layers, and an output layer. The output layer is often a softmax layer. The neurons in one layer are connected with neurons in the next layer with certain patterns, e.g., fully connected, convolution, or max pooling (Krizhevsky et al., 2012). In the training phase, the weights on the connections are often learnt via back-propagation with a training dataset. In the testing phase, the DNN is used to predict labels for examples that are unseen in the training phase. Specifically, suppose we have K classes, denoted as {1, 2, …, K}. Both the layer before the output layer and the output layer have K neurons. Let x be an unseen example, which is an n-dimensional vector; x_i represents the ith dimension of x. We denote the output of the jth neuron before the output layer as Z_j(x), and we denote the output of the jth neuron in the output layer as F_j(x), where j = 1, 2, …, K. The outputs Z_j(x) are also called logits. Since the output layer is a softmax layer, F_j(x) represents the probability that x has label j, and the outputs sum to 1, i.e., Σ_j F_j(x) = 1. The label of x is predicted to be the one that has the largest probability, i.e., ŷ = argmax_j F_j(x), where ŷ is the predicted label.
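As a concrete illustration of the prediction rule above, the following NumPy sketch computes the softmax outputs F(x) from the logits Z(x) and returns the argmax label. This is a generic sketch of the output layer, not the paper's MNIST architecture; the 3-class logits at the end are made-up values.

```python
import numpy as np

def softmax(z):
    """Convert logits Z(x) into probabilities F(x); the outputs sum to 1."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

def predict_label(logits):
    """Predicted label = the class with the largest probability."""
    return int(np.argmax(softmax(logits)))

# Made-up logits for a 3-class example
z = np.array([2.0, 0.5, -1.0])
probs = softmax(z)        # probabilities summing to 1
label = predict_label(z)  # class 0 has the largest logit, hence largest probability
```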

A classifier essentially can be viewed as a classification boundary that divides the n-dimensional space into K class regions, denoted as R_1, R_2, …, R_K. Any data point in the region R_i will be predicted to have label i by the classifier.

Attack | Noise metric
CW-L0 (Carlini and Wagner, 2017) | L0 norm
CW-L2 (Carlini and Wagner, 2017) | L2 norm
CW-L∞ (Carlini and Wagner, 2017) | L∞ norm
Table 1. State-of-the-art evasion attacks.

2.2. Evasion Attacks

Poisoning attacks and evasion attacks (Huang et al., 2011) are two well-known attacks to machine learning/data mining. A poisoning attack aims to pollute the training dataset such that the learner produces a bad classifier. Various studies have demonstrated poisoning attacks to spam filters (Nelson et al., 2008), support vector machines (Biggio et al., 2012), deep neural networks (Shen et al., 2016), and recommender systems (Li et al., 2016; Yang et al., 2017). In an evasion attack, an attacker adds a small noise to a normal testing example (which we call a benign example) such that a classifier predicts an incorrect label for the example with noise. A testing example with noise is called an adversarial example. From a geometric perspective, an evasion attack moves a testing example from one class region to another.

In this work, we focus on DNNs and evasion attacks. A number of recent studies (Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016a; Moosavi-Dezfooli et al., 2016; Liu et al., 2017; Carlini and Wagner, 2017; Papernot et al., 2017) have demonstrated that DNNs are vulnerable to evasion attacks at the testing phase. We denote by C a DNN classifier; C(x) is the predicted label of a testing example x. Note that we assume each dimension of x is normalized to be in the range [0, 1], like previous studies (Moosavi-Dezfooli et al., 2016; Carlini and Wagner, 2017). An evasion attack adds noise δ to a benign example x such that the adversarial example x' = x + δ is predicted to have a target label by the classifier C. The L0, L2, and L∞ norms are often used as the metric to measure the noise δ. Specifically, the L0 norm is the number of dimensions of x that are changed, i.e., the number of non-zero dimensions of δ; the L2 norm is the standard Euclidean distance between x and x'; and the L∞ norm is the maximum change to any dimension of x, i.e., max_i |δ_i|.
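The three noise metrics can be computed directly. The sketch below, using made-up 4-pixel examples, shows how each norm of the noise δ = x' − x is obtained with NumPy:

```python
import numpy as np

def noise_norms(x, x_adv):
    """L0, L2, and L-infinity norms of the noise delta = x_adv - x."""
    delta = x_adv - x
    l0 = int(np.count_nonzero(delta))    # number of changed dimensions
    l2 = float(np.linalg.norm(delta))    # Euclidean distance between x and x_adv
    linf = float(np.max(np.abs(delta)))  # largest change to any single dimension
    return l0, l2, linf

# Made-up 4-pixel benign and adversarial examples (pixels in [0, 1])
x = np.array([0.1, 0.5, 0.9, 0.0])
x_adv = np.array([0.1, 0.8, 0.5, 0.0])       # two pixels changed
l0, l2, linf = noise_norms(x, x_adv)         # l0 = 2, l2 ≈ 0.5, linf ≈ 0.4
```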

Carlini and Wagner (Carlini and Wagner, 2017) recently proposed a family of targeted evasion attacks, which achieve state-of-the-art attack performance. They concluded that all defense methods should be evaluated against their attacks. Therefore, in this work, we focus on their attacks. They did not name their attacks; for simplicity, we call them Carlini and Wagner (CW) attacks. Table 1 summarizes the different versions of their attacks. Like other existing evasion attacks to DNNs, CW attacks require that the DNN outputs are differentiable with respect to the input example. When attacking a classifier that is non-differentiable or whose parameters are unknown (i.e., the black-box setting), an attacker needs to generate adversarial examples with respect to an auxiliary classifier; the adversarial examples are likely to also evade the target classifier.

CW-L2 attack (Carlini and Wagner, 2017):  CW attacks have three variants that are tailored to the L0, L2, and L∞ norms, respectively. The CW-L2 variant is tailored to find adversarial examples with small noises measured by the L2 norm. Formally, the evasion attack solves the following optimization problem:

(1)   min_w ||(tanh(w) + 1)/2 − x||_2^2 + c · f((tanh(w) + 1)/2)

where f(x') = max(max_{i ≠ t} Z_i(x') − Z_t(x'), −κ); t is the target label the attacker wants for the adversarial example, and κ is a confidence parameter. The adversarial example is x' = (tanh(w) + 1)/2, which automatically constrains each dimension to be in the range [0, 1]. The noise is δ = x' − x. CW-L2 iterates over the parameter c via binary search in a relatively large range of candidate values. For each given c, CW-L2 uses the Adam optimizer (Kingma and Ba, 2014) to solve the optimization problem in Equation 1 to find the noise. The iterative process is halted at the smallest c for which the classifier predicts the target label for the adversarial example.
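The change of variables and the loss f can be sketched as follows. This is a minimal illustration of the objective in Equation 1, assuming some differentiable logits function Z; the toy logits_fn at the end is a made-up stand-in for a DNN, and a real attack would minimize the objective over w with Adam rather than evaluate it once.

```python
import numpy as np

def to_pixel_space(w):
    """Change of variables x' = (tanh(w) + 1)/2 keeps every dimension in [0, 1]."""
    return (np.tanh(w) + 1.0) / 2.0

def cw_f(logits, target, kappa=0.0):
    """f(x') = max(max_{i != t} Z_i(x') - Z_t(x'), -kappa);
    non-positive once the target logit leads every other logit by kappa."""
    other = np.max(np.delete(logits, target))
    return max(other - logits[target], -kappa)

def cw_l2_objective(w, x, logits_fn, target, c, kappa=0.0):
    """||x' - x||_2^2 + c * f(x'): the quantity minimized over w."""
    x_adv = to_pixel_space(w)
    return np.sum((x_adv - x) ** 2) + c * cw_f(logits_fn(x_adv), target, kappa)

# Made-up linear "logits" function standing in for a DNN's Z(x)
logits_fn = lambda xp: np.array([xp.sum(), 1.0 - xp.sum()])
obj = cw_l2_objective(np.zeros(2), x=np.full(2, 0.5),
                      logits_fn=logits_fn, target=1, c=1.0)
```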

CW-L0 attack (Carlini and Wagner, 2017):  This variant is tailored to find adversarial examples with small noises measured by the L0 norm. The attack iteratively identifies the dimensions of x that do not have much impact on the classifier's prediction and fixes them. The set of fixed dimensions grows until the attack has identified a minimal subset of dimensions that can be changed to construct a successful adversarial example. In each iteration, the dimensions to be fixed are identified with the help of the CW-L2 attack. Specifically, in each iteration, CW-L0 calls CW-L2, which can only modify the unfixed dimensions. Suppose δ is the noise CW-L2 finds for the benign example x. CW-L0 computes the gradient g = ∇f(x + δ) and selects the dimension i = argmin_i g_i · δ_i to be fixed. The iterative process is repeated until CW-L2 cannot find a successful adversarial example. Again, the parameter c in CW-L0 is selected via a search process: starting from a very small value, c is doubled whenever CW-L2 fails, until a successful adversarial example is found.

CW-L∞ attack (Carlini and Wagner, 2017):  This variant is tailored to find adversarial examples with small noises measured by the L∞ norm. Formally, the evasion attack aims to solve the following optimization problem to find the adversarial example:

(2)   min_δ c · f(x + δ) + Σ_i max(δ_i − τ, 0)

where f is the same function as in CW-L2; max(δ_i − τ, 0) equals δ_i − τ if δ_i > τ, and 0 otherwise. CW-L∞ iterates over c until finding a successful adversarial example. Specifically, c is iteratively doubled from a small value. For each given c, CW-L∞ further iterates over the threshold τ. In particular, τ is initialized to be 1. For a given τ, CW-L∞ solves the optimization problem in Equation 2. If δ_i < τ for every i, then τ is reduced by a factor of 0.9, and CW-L∞ solves the optimization problem with the updated τ. This process is repeated until a noise vector δ with δ_i < τ for every i can no longer be found.
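The τ-based penalty term in Equation 2 and the shrinking rule for τ can be sketched as follows (a minimal illustration with made-up noise values, not the full attack loop):

```python
import numpy as np

def linf_penalty(delta, tau):
    """sum_i max(delta_i - tau, 0): only dimensions exceeding tau are penalized."""
    return float(np.sum(np.maximum(delta - tau, 0.0)))

def update_tau(delta, tau, factor=0.9):
    """If every noise dimension is below tau, shrink tau by the factor;
    otherwise stop, since some dimension has reached the threshold."""
    if np.all(delta < tau):
        return tau * factor, True   # keep iterating with a smaller tau
    return tau, False               # stop shrinking

# Made-up noise vector: only the 0.2 entry exceeds tau = 0.1
delta = np.array([0.05, 0.2, 0.0])
penalty = linf_penalty(delta, tau=0.1)  # contributes 0.2 - 0.1
```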

Success rate:  An adversarial example is successful if it satisfies two conditions: 1) the adversarial example and the original benign example have the same true label (as determined by a human) and 2) the classifier predicts the target label for the adversarial example. It is unclear how to check the first condition automatically because we do not yet have a way to model human perception. Therefore, existing studies leveraged the L0, L2, or L∞ norms to measure the noises added to the adversarial examples; if the noises are small enough, an adversarial example is assumed to satisfy the first condition. However, how to define "small enough" noises is still an open question. Ideally, we aim to find a noise threshold such that adversarial examples whose noises are below the threshold do not change the true label. Whether such a noise threshold exists and what its value is are still open questions and valuable directions for future work.

In principle, success rate of a targeted evasion attack should be the fraction of its generated adversarial examples that satisfy both conditions. However, due to the challenges of checking the first condition, existing studies approximate success rate of an attack as the fraction of its generated adversarial examples that satisfy the second condition alone. For attacks that add very small noises to adversarial examples, the approximate success rates are close to the real success rates. In this work, we will focus on the approximate success rates and will call them success rates for simplicity.

2.3. Defenses Against Evasion Attacks

Roughly speaking, existing defenses against evasion attacks include designing new methods to train DNNs and defense in depth.

New methods to train DNNs:  Goodfellow et al. (Goodfellow et al., 2014) proposed to train a DNN via augmenting the training dataset with adversarial examples, which is called adversarial training. Specifically, for each training benign example, the learner generates a training adversarial example using evasion attacks. Then, the learner uses a standard algorithm (e.g., back propagation) to learn a DNN using the original training benign examples and the adversarial examples. However, as we will demonstrate in our experiments, adversarial training is not robust to powerful evasion attacks and adversarial examples that are unseen during training. In particular, the CW evasion attacks can still achieve 100% success rates at generating adversarial examples.

Papernot et al. (Papernot et al., 2016b) proposed a distillation based method to train a DNN. The DNN is first trained using a standard method. For each training example, the DNN produces a vector of confidence scores. The confidence scores are treated as the soft label for the training example. Given the soft labels and the training examples, the weights of the DNN are retrained. A parameter named distillation temperature is used in the softmax layer during both training sessions to control the confidence scores. A DNN that is trained via distillation can significantly reduce the success rates of the evasion attacks also proposed by Papernot et al. (Papernot et al., 2016a). However, Carlini and Wagner (Carlini and Wagner, 2017) demonstrated that their CW attacks can still achieve 100% success rates for DNNs trained with distillation. Moreover, the noises added to the benign examples when generating adversarial examples are only slightly higher for distilled DNNs than those for undistilled DNNs. Our experimental results confirm these findings.

Defense in depth:  Meng and Chen proposed MagNet (Meng and Chen, 2017), a defense-in-depth approach to defend against evasion attacks to DNNs. Specifically, given a testing example, they first use a detector to determine whether the testing example is an adversarial example or not. If the testing example is predicted to be an adversarial example, the DNN classifier will not predict its label. If the testing example is not predicted to be an adversarial example, they reform the testing example using a reformer. In the end, the DNN classifier predicts the label of the reformed testing example and treats it as the label of the original testing example. MagNet designs both the detector and the reformer using auto-encoders, which are trained using only benign examples. Meng and Chen demonstrated that MagNet can reduce the success rates of various known evasion attacks. However, MagNet has two key limitations. First, MagNet decreases the classification accuracy on benign testing examples. For instance, on the CIFAR-10 dataset, their trained point-based DNN achieves an accuracy of 90.6%. However, MagNet reduces the accuracy to 86.8% using the same point-based DNN. Second, it is unclear how to handle the testing examples that are predicted to be adversarial examples by the detector. We suspect that those testing examples would eventually require humans to manually label them, i.e., the entire system becomes a human-in-the-loop system, losing the benefits of automated decision making. We note that the second limitation also applies to other approaches (Metzen et al., 2017; Grosse et al., 2017) that aim to detect adversarial examples.

3. Design Goals

We aim to achieve the following two goals:

1) Not sacrificing classification accuracy on testing benign examples. Our first design goal is that the defense method should maintain the high accuracy of the DNN classifier on testing benign examples. Neural networks regained unprecedented attention in the past several years under the banner of "deep learning". The major reason is that neural networks with multiple layers (i.e., DNNs) achieve significantly better classification accuracy than other machine learning methods for a variety of artificial intelligence tasks such as computer vision, speech recognition, and natural language processing. Therefore, our defense method should maintain this advantage of DNNs.

2) Increasing robustness. We aim to design a defense method that is robust to powerful evasion attacks. In particular, our new classifier should have better robustness than conventional DNN classifiers with respect to state-of-the-art evasion attacks, i.e., the CW attacks. Suppose we have a DNN classifier C. After deploying a certain defense method, we obtain another classifier C'. Suppose we have an evasion attack; the success rates (SR) of the attack for the classifiers C and C' are denoted as SR_C and SR_{C'}, respectively. We say that the classifier C' is more robust than the classifier C if SR_{C'} < SR_C. In other words, a defense method is said to be effective with respect to an evasion attack if it decreases the attack's success rate.

We note that our goal is not to completely eliminate adversarial examples. Instead, our goal is to reduce attackers’ success rates without sacrificing classification accuracy on benign examples.

4. Measuring Evasion Attacks

We first show some measurement results on evasion attacks, which motivate the design of our region-based classification method. We performed our measurements on the standard MNIST dataset. In the dataset, our task is to recognize the digit in an image, which is a 10-class classification problem. We normalize each pixel to be in the range [0, 1]. We adopted the same DNN classifier that was used by Carlini and Wagner (Carlini and Wagner, 2017). The classifier essentially divides the digit image space into 10 class regions, denoted as R_0, R_1, …, R_9. Any data point in the class region R_i will be predicted to have label i by the classifier.

We sample a benign testing image of digit 0 uniformly at random. We use the CW-L0, CW-L2, and CW-L∞ attacks to generate adversarial examples based on the sampled benign example. We obtained the open-source implementation of the CW attacks from its authors (Carlini and Wagner, 2017). For each target label t, we use an evasion attack to generate an adversarial example with the target label t based on the benign example, where t = 1, 2, …, 9. We denote the adversarial example with the target label t as x'_t. The DNN classifier predicts label t for the adversarial example x'_t, while its true label is 0.

We denote the hypercube that is centered at x and has a length of r as B(x, r). Formally, B(x, r) = {y | |y_i − x_i| ≤ r, ∀i}, where y_i and x_i are the ith dimensions of y and x, respectively. For each adversarial example x'_t, we sample 10,000 data points from the hypercube B(x'_t, r) uniformly at random (we will explain the setting of r in our experiments). We treat each data point as a testing example and feed it to the DNN classifier, which predicts a label for it. For the 10,000 data points, we obtain a histogram of their labels predicted by the DNN classifier.
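The sampling procedure above can be sketched as follows. The classifier here is a made-up stand-in (label 1 iff the mean pixel exceeds 0.5), not the paper's MNIST DNN, and clipping samples to [0, 1] is our implementation assumption; the point is only the uniform hypercube sampling and the label histogram.

```python
import numpy as np

def label_histogram(classifier, x, r, n_samples=10_000, seed=0):
    """Sample n_samples points uniformly from the hypercube B(x, r),
    clip them to the valid pixel range [0, 1], and count predicted labels."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-r, r, size=(n_samples, x.size))
    points = np.clip(x + noise, 0.0, 1.0)
    labels = [classifier(p) for p in points]
    return np.bincount(labels)

# Made-up stand-in classifier: label 1 iff the mean pixel exceeds 0.5
toy = lambda p: int(p.mean() > 0.5)
hist = label_histogram(toy, x=np.full(4, 0.2), r=0.05)
# every sampled point has mean pixel <= 0.25, so all 10,000 get label 0
```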

Figures (a), (b), and (c) (shown at the end of the paper) show the label histograms for the 10,000 randomly sampled data points from the hypercubes around the benign example and the 9 adversarial examples generated by the CW-L0 attack, CW-L2 attack, and CW-L∞ attack, respectively. For instance, in Figure (a), the first graph in the first row shows the histogram of labels for the 10,000 data points that are sampled from the hypercube centered at the benign example; the second graph (from left to right) in the first row shows the histogram of labels for the 10,000 data points that are sampled from the hypercube centered at the adversarial example with predicted label 1, where the adversarial example is generated by the CW-L0 attack.

For the benign example, almost all the 10,000 randomly sampled data points are predicted to have label 0, which is the true label of the benign example. For most adversarial examples, a majority of the 10,000 randomly sampled data points are predicted to have label 0, which is the true label of the adversarial examples. From these measurement results, we have the following two observations:

  • Observation I: The hypercube B(x, r) centered at a benign example x intersects the most with the class region R_y, where y is the true label of the benign example x. This indicates that we can still correctly predict labels for benign examples by ensembling information in the hypercube.

  • Observation II: For most adversarial examples x', the hypercube B(x', r) intersects the most with the class region R_y, where y is the true label of the adversarial example x'. This indicates that we can also correctly predict labels for adversarial examples by ensembling information in the hypercube.

These measurement results motivate us to design our region-based classification, which we will introduce in the next section.

Figure 2. Illustration of our region-based classification. x is a testing benign example and x' is the corresponding adversarial example. The hypercube centered at x' intersects the most with the class region that has the true label.

5. Our Region-based Classification

We propose a defense method called Region-based Classification (RC). A traditional DNN classifier is point-based, i.e., given a testing example, the DNN classifier predicts its label based on that example alone. Therefore, we call such a classifier Point-based Classification (PC). In our RC classification, given a testing example, we ensemble information in the region around the testing example to predict its label. For any point-based DNN classifier, our method can transform it into a region-based classifier that is more robust to adversarial examples, while maintaining its accuracy on benign examples.

5.1. Region-based Classification

Suppose we have a point-based DNN classifier C. For a testing example x (either a benign example or an adversarial example), we create a hypercube B(x, r) around the testing example. Recall that the DNN classifier essentially divides the input space into K class regions, denoted as R_1, R_2, …, R_K; all data points in the class region R_i are predicted to have label i by the classifier, where i = 1, 2, …, K. In our RC classifier, we predict the label of a testing example x to be the one whose class region intersects the most with the hypercube B(x, r). Formally, we denote our RC classifier as RC_{C,r}, since it relies on the point-based DNN classifier C and the length r. We denote the area of the intersection between B(x, r) and R_i as A_i(x). Then, our RC classifier predicts the label of x to be argmax_i A_i(x). Figure 2 illustrates our region-based classification.

Approximating the areas A_i(x):  One challenge of using our RC classifier is how to compute the areas A_i(x), because the class regions might be very irregular. We address the challenge by sampling m data points from the hypercube B(x, r) uniformly at random and using them to approximate the areas A_i(x). In particular, for each sampled data point, we use the point-based classifier C to predict its label. We denote by m_i the number of sampled data points that are predicted to have label i by the classifier C. Then, our RC classifier predicts the label of x as argmax_i m_i.
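Putting the sampling approximation together, a minimal RC prediction routine might look as follows. The point-based classifier here is a made-up toy function rather than a trained DNN; sampling m points and taking the majority label (argmax of the counts m_i) is the approximation described above.

```python
import numpy as np

def region_based_predict(point_classifier, x, r, m=1000, seed=0):
    """RC prediction: sample m points uniformly from the hypercube B(x, r),
    classify each with the point-based classifier, return the majority label."""
    rng = np.random.default_rng(seed)
    samples = x + rng.uniform(-r, r, size=(m, x.size))
    counts = np.bincount([point_classifier(s) for s in samples])
    return int(np.argmax(counts))  # label whose class region intersects B(x, r) the most

# Made-up point-based classifier: label 1 iff the coordinates sum past 1
toy = lambda p: int(p.sum() > 1.0)
label = region_based_predict(toy, x=np.array([0.1, 0.2]), r=0.1)
# every sample sums to at most 0.5, so the majority (in fact, all) labels are 0
```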

Learning the length r:  Another challenge for our RC classifier is how to determine the length r of the hypercube. The length r is a critical parameter for our method RC (we will show the impact of r on the effectiveness of RC in our experiments). Specifically, r controls the tradeoff between robustness to adversarial examples and classification accuracy on benign examples. Suppose we want to classify an adversarial example x', whose true label is y. On one hand, if the length r of the hypercube is too small, the hypercube will not intersect with the class region R_y, which means that our RC classifier will not be able to correctly classify the adversarial example. On the other hand, if the length r is too large, the hypercube around a benign example will intersect with incorrect class regions, which makes our method predict incorrect labels for benign examples.

To address the challenge, we propose to learn the length r using a validation dataset consisting of only benign examples. We do not use adversarial examples because the adversarial examples used by the attacker may not be accessible to the defender. Our key idea is to select the maximal length r such that the classification accuracy of our classifier RC_{C,r} on the validation dataset is no smaller than that of the point-based classifier C. There are many choices of r with which our classifier RC_{C,r} has a classification accuracy no smaller than the point-based classifier C. We propose to select the maximal one, so that an adversarial example needs a larger noise to move further away from the classification boundary of C in order to evade RC_{C,r}.

Specifically, we learn the length r through a search process. Suppose the point-based DNN classifier achieves a certain classification accuracy on the validation dataset, which we treat as the baseline. We transform the point-based classifier into an RC classifier. Initially, we set r to a small value. For each benign example in the validation dataset, we predict its label using our RC classifier, and we compute the classification accuracy of the RC classifier on the validation dataset. If this classification accuracy is no smaller than the baseline, we increase r by a step size and repeat the process. The search stops once the RC classifier's classification accuracy on the validation dataset falls below the baseline. Algorithm 1 shows the search process.

0:  Validation dataset, point-based DNN classifier, step size s, initial length r0.
0:  Length r.
1:  Initialize r = r0.
2:  acc_point = Accuracy of the point-based classifier on the validation dataset.
3:  acc_RC = Accuracy of the RC classifier with length r on the validation dataset.
4:  while acc_RC ≥ acc_point do
5:     r = r + s.
6:     acc_RC = Accuracy of the RC classifier with length r on the validation dataset.
7:  end while
8:  return r.
Algorithm 1 Learning the Length r by Searching
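A minimal Python sketch of this search follows. The helper names are ours, not the paper's: `rc_accuracy(r)` is an assumed function returning the RC classifier's validation accuracy for hypercube length r, and `validation_set` is an iterable of (example, label) pairs:

```python
def learn_length(validation_set, point_classifier, rc_accuracy, step=0.01, r0=0.0):
    # Baseline: accuracy of the point-based classifier on the validation set.
    acc_point = sum(point_classifier(x) == y for x, y in validation_set) / len(validation_set)
    # Grow r by `step` as long as the RC classifier's validation accuracy
    # stays no smaller than the baseline; return the maximal such r.
    r = r0
    while rc_accuracy(r + step) >= acc_point:
        r += step
    return r
```

With a step size of 0.01 and initial length 0, this reproduces the setting used in the experiments below.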

5.2. Evasion Attacks to Our RC Classifier

We consider a strong attacker who knows all the model parameters of our RC classifier. In particular, the attacker knows the architecture and parameters of the point-based DNN classifier, the length r, and the number of data points sampled to approximate the areas. Our threat model is also known as the white-box setting.

5.2.1. Existing evasion attacks

An attacker can use any attack shown in Table 1 to find adversarial examples to evade our RC classifier. All state-of-the-art evasion attacks to DNN classifiers require the classifier to be differentiable, in order to propagate gradients from the outputs back to the inputs. However, our RC classifier is non-differentiable. Therefore, we consider an attacker who generates adversarial examples based on the point-based classifier, which is the key component of our RC classifier, and uses those adversarial examples to attack the RC classifier. This is also known as transferring adversarial examples from one classifier to another.

We note that Carlini and Wagner (Carlini and Wagner, 2017) proposed to adjust their CW-L2 attack to generate high-confidence transferable adversarial examples, which are more likely to transfer from one classifier to another. However, Meng and Chen (Meng and Chen, 2017) demonstrated that such high-confidence transferable adversarial examples can be easily detected. Therefore, we do not consider high-confidence transferable adversarial examples in our work.

5.2.2. New evasion attacks

An attacker, who knows our region-based classification, can also strategically adjust its attacks. Specifically, since our classifier ensembles information within a region, an attacker can first use an existing evasion attack to find an adversarial example based on the point-based classifier and then strategically add more noise to the adversarial example. The goal is to move the adversarial example further away from the classification boundary such that the hypercube centered at the adversarial example does not intersect or intersects less with the class region that has the true label of the adversarial example.

Specifically, suppose we have a benign example x. The attacker uses an evasion attack shown in Table 1 to find the corresponding adversarial example x'. The added noise is δ = x' − x. Then, the attacker strategically constructs another adversarial example as x'' = x + (1 + α)·δ. Essentially, the attacker moves the adversarial example further along the direction of the current noise. Note that we clip the adversarial example to be in the valid input space. Specifically, for each dimension of x'', we set it to 0 if it is smaller than 0, set it to 1 if it is larger than 1, and keep it unchanged otherwise. The parameter α controls how much further to move the adversarial example away from the classification boundary. For the L2 and L∞ norms, α is the increased fraction of noise. Specifically, suppose an evasion attack shown in Table 1 finds an adversarial example with noise δ. Then, the adapted adversarial example has noise (1 + α)·δ, whose L2 and L∞ norms are (1 + α) times those of δ. A larger α indicates a larger noise (for the L2 and L∞ norms) and a possibly larger success rate.
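The construction above amounts to a few lines of numpy. The function name is ours, and `alpha` names the parameter controlling how much further the example is moved; inputs are assumed normalized to [0, 1]:

```python
import numpy as np

def adapt_adversarial_example(x, x_adv, alpha):
    # Noise added by the original evasion attack.
    delta = x_adv - x
    # Move the adversarial example further along the noise direction; for the
    # L2 and L-infinity norms, alpha is the increased fraction of noise.
    x_adapted = x + (1.0 + alpha) * delta
    # Clip each dimension back to the valid input space [0, 1].
    return np.clip(x_adapted, 0.0, 1.0)
```

For example, alpha = 1 doubles the noise, matching the largest-noise setting discussed in the experiments.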

For convenience, for an evasion attack, we append the suffix -A to the attack's name to indicate the attack that is adapted to our RC classifier. For instance, CW-L2-A means the adapted version of the attack CW-L2. In our experiments, we will explore how the parameter α, which controls how much further the adversarial example is moved, impacts the success rates of the adapted evasion attacks and the noises added to the adversarial examples.

6. Evaluations

          Training  Validation  Testing
MNIST       55,000       5,000   10,000
CIFAR-10    45,000       5,000   10,000
Table 2. Dataset statistics.
                          Classification  Success Rate
                          Accuracy        CW-L0  CW-L2  CW-L∞
Standard point-based DNN  99.4%           100%   100%   100%
Adversarial training DNN  99.3%           100%   100%   100%
Distillation DNN          99.2%           100%   100%   100%
Our region-based DNN      99.4%           16%    0%     0%
Table 3. Classification accuracy on benign examples and robustness to CW attacks on the MNIST dataset.
                          Classification  Success Rate
                          Accuracy        CW-L0  CW-L2  CW-L∞
Standard point-based DNN  90.1%           100%   100%   100%
Adversarial training DNN  88.1%           100%   100%   100%
Distillation DNN          88.3%           100%   100%   100%
Our region-based DNN      90.1%           7%     2%     6%
Table 4. Classification accuracy on benign examples and robustness to CW attacks on the CIFAR-10 dataset.

6.1. Experimental Setup

Datasets:  We perform evaluations on two standard image datasets used to benchmark object recognition methods: MNIST and CIFAR-10. Table 2 shows the statistics of the datasets. For each dataset, we sample 5,000 of the predefined training examples uniformly at random and treat them as the validation dataset used to learn the length r in our RC classifier.

Compared methods:  We compare the following DNN classifiers.

  • Standard point-based DNN. For each dataset, we train a standard point-based DNN classifier. For the MNIST dataset, we adopt the same DNN architecture as Carlini and Wagner (Carlini and Wagner, 2017). For the CIFAR-10 dataset, the DNN architecture adopted by Carlini and Wagner is not state-of-the-art, so we instead use the DNN architecture proposed by He et al. (He et al., 2016). We obtained the implementation from Carlini and Wagner to train the DNN for MNIST, and the implementation from (Code to Train DNN for CIFAR-10, 2017) to train the DNN for CIFAR-10.

  • Adversarial training DNN. For each dataset, we use adversarial training (Goodfellow et al., 2014) to learn a DNN classifier. The DNN classifiers have the same architectures as the standard point-based DNNs. We note that the CW attacks are inefficient at generating adversarial examples. Specifically, among the three evasion attacks, CW-L2 is the most efficient, but it still requires 15-20 days on our machine to generate adversarial examples for all training examples in the MNIST dataset, and the CIFAR-10 dataset would take even longer. Therefore, we use DeepFool (Moosavi-Dezfooli et al., 2016), a less powerful but orders-of-magnitude more efficient evasion attack, to generate an adversarial example for each training example.

  • Distillation DNN. For each standard point-based DNN classifier, we use distillation (Papernot et al., 2016b) to re-train the DNN classifier with a high temperature.

  • Our region-based DNN. For each dataset, we transform the corresponding standard point-based DNN classifier into our region-based DNN classifier. The length r is learnt through Algorithm 1 using the validation dataset. Specifically, we set the initial length and step size in Algorithm 1 to 0 and 0.01, respectively. Figure 3 shows the classification accuracy of our RC classifier on the MNIST validation dataset as we increase the length r in Algorithm 1. We observe that our RC classifier has slightly higher accuracies than the standard point-based classifier when r is small. Moreover, when r is larger than around 0.3, the accuracy of our RC classifier starts to decrease. Therefore, according to Algorithm 1, the length r is set to 0.3 for the MNIST dataset. Moreover, via Algorithm 1, the length r is set to 0.02 for the CIFAR-10 dataset. To estimate the areas of intersection between a hypercube and the class regions, we sample 1,000 data points from the hypercube.

Figure 3. Classification accuracies of the standard point-based DNN and our region-based DNN on the MNIST validation dataset as we increase the length r.

6.2. Results

Classification accuracies:  Table 3 and Table 4 show the classification accuracies of the compared classifiers on the MNIST and CIFAR-10 datasets, respectively. First, our region-based DNN achieves the same classification accuracy on the testing dataset as the standard point-based DNN for both MNIST and CIFAR-10. Second, the adversarial training DNN and distillation DNN achieve lower classification accuracies than the standard point-based DNN, though the differences are smaller for the MNIST dataset.

Robustness to existing state-of-the-art evasion attacks:  Table 3 and Table 4 show the success rates of the state-of-the-art evasion attacks CW-L0, CW-L2, and CW-L∞ against the different DNN classifiers on the MNIST and CIFAR-10 datasets. Since the CW attacks are inefficient, for each dataset we randomly sample 100 testing benign examples that the standard point-based DNN correctly classifies and generate adversarial examples for them. First, we observe that adversarial training and distillation are not effective at mitigating state-of-the-art evasion attacks. Specifically, the CW attacks still achieve 100% success rates against the adversarial training and distillation DNNs on both the MNIST and CIFAR-10 datasets.

Second, our region-based DNN substantially reduces the success rates of the CW attacks. Specifically, for the MNIST dataset, our region-based DNN reduces the success rates of CW-L0, CW-L2, and CW-L∞ to 16%, 0%, and 0%, respectively; for the CIFAR-10 dataset, it reduces the success rates of CW-L0, CW-L2, and CW-L∞ to 7%, 2%, and 6%, respectively. Against our region-based DNN classifier, CW-L0 achieves the largest success rates across the two datasets. We speculate the reason is that CW-L0 modifies only a small number of pixels of a benign example, but can change those pixel values substantially. As a result, the adversarial examples generated by CW-L0 are further away from the classification boundary and thus more likely to evade our region-based DNN.

Robustness to new evasion attacks:  Recall that we discussed adapting the CW attacks to our region-based DNN in Section 5.2. The key idea is to move the adversarial example further away from the classification boundary. The parameter α, the increased fraction of noise for the L2 and L∞ norms, controls the tradeoff between added noise and success rate. Figure 4 shows such tradeoffs.

The adapted CW attacks cannot achieve 100% success rates anymore, no matter how we set the parameter α. Specifically, the success rates first increase and then decrease as α increases. This is because adding too much noise to an adversarial example moves it into another class region, resulting in an unsuccessful targeted evasion attack. Suppose an adversarial example has a certain target label. The original adversarial example generated by a CW attack lies in the class region of the target label. When α is small, the adapted adversarial example generated by an adapted CW attack is still within that class region. However, when α is large, the adapted adversarial example is moved into another class region, which has a different label.

For the MNIST dataset, the largest success rate the best adapted attack can achieve is 64%, when α = 1, i.e., when the average noise added to the adversarial examples is doubled. To achieve a 50% success rate, the adapted attack needs 25% more noise (α = 0.25). For the CIFAR-10 dataset, the best adapted evasion attack achieves success rates of at most 85%, and it likewise needs a larger α, and thus more noise, to achieve even a 50% success rate.

Figure 5 shows the adversarial examples generated by the best adapted evasion attack for the MNIST dataset at its highest success rate, i.e., α = 1. For all these adversarial examples, our region-based DNN classifier predicts the attacker's target label. Recall that Figure 1 shows adversarial examples generated by the existing CW attack. To compare the adversarial examples generated by the existing and adapted attacks, we use the same benign examples in Figure 5 and Figure 1. We observe that some adversarial examples generated by the adapted attack have changed true labels. For instance, the sixth adversarial example in Figure 5 was generated from a benign example with true label 5. However, a human can hardly classify the adversarial example as a digit 5, i.e., the true label has been changed. Similarly, the third, eighth, and ninth adversarial examples almost change the true labels of the corresponding benign examples.

Recall that in Section 2.2, we discussed that a successful adversarial example should satisfy two conditions, and that we approximate the success rate of an attack using the generated adversarial examples that satisfy the second condition only. Our results in Figure 5 show that some adversarial examples that satisfy the second condition do not satisfy the first condition, because too much noise is added. Therefore, the real success rates of the adapted evasion attacks are even lower.

(a) MNIST
(b) CIFAR-10
Figure 4. Tradeoff between success rates and the increased fraction of noise for the adapted CW attacks.
Figure 5. Adversarial examples generated by the best adapted CW evasion attack for the MNIST dataset, where α = 1.

7. Conclusion

In this work, we propose a region-based classification to mitigate evasion attacks to deep neural networks. First, we perform a measurement study about the adversarial examples generated by state-of-the-art evasion attacks. We observe that the adversarial examples are close to the classification boundary and the hypercube around an adversarial example significantly intersects with the class region that has the true label of the adversarial example. Second, based on our measurement study, we propose a region-based DNN classifier, which ensembles information in the hypercube around an example to predict its label. Third, we perform evaluations on the standard MNIST and CIFAR-10 datasets. Our results demonstrate that our region-based DNN classifier can significantly reduce success rates of both state-of-the-art evasion attacks and the evasion attacks that are strategically adapted to our region-based DNN classifier, without sacrificing classification accuracy on benign examples.

We encourage researchers who propose new evasion attacks to evaluate their attacks against our region-based classifier, instead of standard point-based classifier only.

References

  • Biggio et al. (2012) Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. In ICML.
  • Carlini and Wagner (2017) Nicholas Carlini and David Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In IEEE S & P.
  • Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. In arXiv.
  • Grosse et al. (2017) Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel. 2017. On the (statistical) detection of adversarial examples. In arXiv.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In CVPR.
  • Hinton et al. (2012) Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, and others. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29, 6 (2012), 82–97.
  • Huang et al. (2011) Ling Huang, Anthony D Joseph, Blaine Nelson, Benjamin IP Rubinstein, and JD Tygar. 2011. Adversarial machine learning. In ACM AISec.
  • Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In arXiv.
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS.
  • Li et al. (2016) Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. 2016. Data Poisoning Attacks on Factorization-Based Collaborative Filtering. In NIPS.
  • Liu et al. (2017) Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into Transferable Adversarial Examples and Black-box Attacks. In ICLR.
  • Meng and Chen (2017) Dongyu Meng and Hao Chen. 2017. MagNet: a Two-Pronged Defense against Adversarial Examples. In CCS.
  • Metzen et al. (2017) Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischof. 2017. On detecting adversarial perturbations. In International Conference on Learning Representations (ICLR).
  • Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
  • Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: a simple and accurate method to fool deep neural networks. In CVPR.
  • Nelson et al. (2008) B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia. 2008. Exploiting machine learning to subvert your spam filter. In LEET.
  • Papernot et al. (2017) Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In AsiaCCS.
  • Papernot et al. (2016a) Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016a. The Limitations of Deep Learning in Adversarial Settings. In EuroS&P.
  • Papernot et al. (2016b) Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016b. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. In IEEE S & P.
  • Shen et al. (2016) Shiqi Shen, Shruti Tople, and Prateek Saxena. 2016. AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems. In ACSAC.
  • Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, and others. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484–489.
  • Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. In arXiv.
  • Code to Train DNN for CIFAR-10. September 2017. https://goo.gl/mEX7By
  • Yang et al. (2017) Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. 2017. Fake Co-visitation Injection Attacks to Recommender Systems. In NDSS.
(a) CW-L0 attack
(b) CW-L2 attack
(c) CW-L∞ attack
Figure 6. Label histograms of 10,000 random data points in the hypercube around a benign example or its adversarial examples generated by the (a) CW-L0 attack, (b) CW-L2 attack, and (c) CW-L∞ attack. Each histogram corresponds to an example. The benign example has label 0. In each subfigure, the first row (from left to right) shows the benign example and the adversarial examples with target labels 1, 2, 3, and 4; the second row (from left to right) shows the adversarial examples with target labels 5, 6, 7, 8, and 9.