Enhancing Robustness of Machine Learning Systems via Data Transformations


Arjun Nitin Bhagoji
Princeton University
   Daniel Cullina
Princeton University
   Chawin Sitawarin
Princeton University
   Prateek Mittal
Princeton University
Abstract

We propose the use of data transformations as a defense against evasion attacks on ML classifiers. We present and investigate strategies for incorporating a variety of data transformations including dimensionality reduction via Principal Component Analysis and data ‘anti-whitening’ to enhance the resilience of machine learning, targeting both the classification and the training phase. We empirically evaluate and demonstrate the feasibility of linear transformations of data as a defense mechanism against evasion attacks using multiple real-world datasets. Our key findings are that the defense is (i) effective against the best known evasion attacks from the literature, resulting in a two-fold increase in the resources required by a white-box adversary with knowledge of the defense for a successful attack, (ii) applicable across a range of ML classifiers, including Support Vector Machines and Deep Neural Networks, and (iii) generalizable to multiple application domains, including image classification and human activity classification.

I Introduction

We are living in an era of ubiquitous machine learning (ML) and artificial intelligence. Machine learning is being used in a number of essential applications such as image recognition [26], natural language processing [9], spam detection [10], autonomous vehicles [8, 33] and even malware detection [46, 11]. High classification accuracy in these settings [23, 48, 2] has enabled the widespread deployment of ML systems. Given the ubiquity of ML applications, it is increasingly being deployed in adversarial scenarios, where an attacker stands to gain from the failure of a ML system to classify inputs correctly. The question then arises: are ML systems secure in adversarial settings?

Adversarial Machine Learning: Starting in the early 2000s, there has been a considerable body of work [20, 3, 25] exposing the vulnerability of machine learning algorithms to strategic adversaries. For example, poisoning attacks [5] systematically introduce adversarial data during the training phase with the aim of causing the misclassification of data during the test phase. On the other hand, evasion attacks [4, 35, 47] aim to fool existing ML classifiers trained on benign data by adding strategic perturbations to test inputs.

Evasion attacks: In this paper we focus on evasion attacks in which the adversary aims to perturb test inputs to ML classifiers in order to cause misclassification. Evasion attacks have been proposed for a variety of machine learning classifiers such as Support Vector Machines [4, 34], tree-based classifiers [34, 22] such as random forests and boosted trees and more recently for neural networks [16, 47, 35, 24, 7, 32]. These attacks have been used to demonstrate the vulnerability of applications that use machine learning, such as facial recognition [29, 43], voice command recognition [6] and  PDF malware detection [53] in laboratory settings. Recent work also illustrates the possibility of attacks on deployed systems such as the Google video summarization API [19], highlighting the urgent need for defenses. Surprisingly, it has also been shown that the evasion properties of adversarially modified data (for a particular classifier) persist across different ML classifiers [47], which allows an adversary with limited knowledge of the ML system to attack it.

However, very few defenses [39, 22] exist against these attacks, and the applicability of each is limited to only certain known attacks and specific types of ML classifiers (see Section VII for a detailed description of and comparison with previous work).

I-A Contributions

We propose and thoroughly investigate the use of linear transformations of data as a defense against evasion attacks. We consider powerful adversaries with knowledge of our defenses when evaluating their effectiveness and find that they demonstrably reduce the success of evasion attacks. To the best of our knowledge, ours are the only defenses against evasion attacks with the following properties: (1) applicability across multiple ML classifiers (such as SVMs, DNNs), (2) applicability in varied application domains (image and activity classification), and (3) mitigation of multiple attack types, including strategic ones. Further, the tunability of our defense allows a system designer to pick appropriate operating points on the utility-security tradeoff curve depending on the application.

I-A1 Defense

We propose the use of data transformations as a defense mechanism. Specifically, we consider linear dimensionality reduction techniques such as Principal Component Analysis which aim to project high-dimensional data to a lower-dimensional space while preserving the most useful variance of the data [44, 51]. We present and investigate a strategy for incorporating dimensionality reduction and other linear transformations of data to enhance the resilience of machine learning, targeting both the classification and training phases. Data transformations are applied to the training data to enhance the resilience of the trained classifier and they significantly change the learned classifier. Linear data transformations are a generalization of regularization methods. They allow us to access novel and otherwise inaccessible robustness-performance tradeoffs.

I-A2 Empirical Evaluation

We empirically demonstrate the feasibility and effectiveness of our defenses using:

  • multiple ML classifiers, such as Support Vector Machines (SVMs) and Deep Neural Networks (DNNs);

  • several distinct types of evasion attacks, such as an attack on Linear SVMs from Moosavi-Dezfooli et al. [30], and attacks on deep neural networks from Goodfellow et al. [16] and Carlini et al. [7], the latter being the best known attack for neural networks, as well as white-box attacks targeting our defense;

  • a variety of real-world datasets/applications: the MNIST image dataset [27] and the UCI Human Activity Recognition (HAR) dataset [1].

Our key findings are that even in the face of a white-box adversary with complete knowledge of the ML system:

  • Security: the defense leads to significant increases in the degree of modification required for a successful attack and, equivalently, substantially reduces adversarial success rates at fixed levels of perturbation.

  • Generality: the defense can be used for different ML classifiers (and application domains) with minimal modification of the original classifiers, while still being effective at combating adversarial examples.

  • Utility: there is a modest utility loss of about 0.5-2% in the classification success on benign samples in most cases.

We also provide an analysis of the utility-security tradeoffs as well as the computational overheads incurred by our defense. Our results may be reproduced using the open source code available at https://github.com/anonymous (link anonymized for double-blind submission).

We note that our defense does not completely solve the problem of evasion attacks, since it reduces adversarial success rates at fixed budgets, but does not make them negligible in all cases. However, since our defense is classifier and dataset-agnostic it can be used in conjunction with other defenses such as adversarial training in order to close this gap. We will explore the synergy of our defense with other techniques in future work. We hope that our work inspires further research in combating the problem of evasion attacks and securing machine learning based systems.

II Adversarial machine learning

In this section, we present the required background on adversarial machine learning, focusing on evasion attacks that induce misclassification by perturbing the input at test time.

Motivation and Running Example: We use image data from the MNIST dataset [27] for our running examples. Figure 1(a) depicts example test images from the MNIST dataset that are correctly classified by an SVM classifier (see Section IV for details), while Figure 1(b) depicts adversarially crafted test images (perturbed using the evasion attack of Moosavi-Dezfooli et al. [30]), which are misclassified by a linear SVM.

(a) Typical test images from the MNIST dataset. Correctly classified as 7, 2, 1, 0 and 4 respectively.
(b) Corresponding adversarial images obtained using the evasion attack on Linear SVMs [30]. Now, misclassified as 9, 9, 3, 2 and 0 respectively.
Fig. 1: Comparison of benign and adversarial images taken from the MNIST dataset.

Classification using machine learning: We focus on supervised machine learning, in which a classifier is trained on labeled data. For our purposes, a classifier is a function $f$ that takes as input a data point $x$ and produces an output $f(x) \in \mathcal{C}$, where $\mathcal{C}$ is the set of all categories. The classifier succeeds if $f(x)$ matches the true class $y$. For example, for the MNIST dataset, $x$ is a $28 \times 28$ pixel grayscale image of a handwritten digit and $\mathcal{C}$ is the finite set $\{0, 1, \ldots, 9\}$.

II-A Attacks against machine learning systems

In this subsection we lay out the notation for the remainder of the paper, and describe the adversarial model under consideration.

II-A1 Adversarial goals

In each case, the adversary is given an input $x$ with true class $y$ and uses an attack algorithm to produce a modified input $\tilde{x}$.

Attacks are relevant in the case where this input is correctly classified: $f(x) = y$. The attack takes one of two forms.

  • Untargeted attack: $\tilde{x}$ is misclassified ($f(\tilde{x}) \neq y$)

  • Targeted attack: $\tilde{x}$ is classified as a specific target class $t$ ($f(\tilde{x}) = t$, $t \neq y$)

Note that these attacks are equivalent for binary classifiers. Additionally, the adversarial example should be as similar to the benign example as possible. Similarity is quantified using some metric on the space of examples. As in previous work, we focus on example spaces that are vector spaces and use $\ell_p$-norms as the metric [16, 7].

II-A2 Adversarial knowledge

We consider three settings. The first is the white box setting, in which the adversary knows the classification function $f$, all its parameters, and the existence of a defense, if any. This assumption on the adversary’s capabilities is conservative as this setting corresponds to a very powerful adversary. A system with the ability to defend against attacks from an adversary with complete knowledge does not rely on security through obscurity.

In the second setting, we consider somewhat less powerful adversaries, which may be more prevalent. We call this the classifier mismatch setting. Here, the adversary knows the training dataset $X$, the architecture of the original classifier and the hyperparameters (i.e., regularization constants etc.) used in the training of the original classifier without the defense. Thus, the adversary is capable of training a classifier $\hat{f}$ that mimics the true classifier $f$. The adversary generates examples that are adversarial on $\hat{f}$ and submits them to the defended classifier. Note that since the adversary is not aware of the defense measures taken, there is a mismatch between $\hat{f}$ and the deployed classifier, hence the term ‘classifier mismatch’ for this setting.

Further, it has been shown in previous work [34, 47] that adversarial examples transfer across classifiers with different architectures and hyperparameter settings, so adversaries can use their own models trained on similar datasets to construct adversarial samples for the ML system under attack. Thus, we consider a third setting, the architecture mismatch setting (commonly referred to in the literature as a black-box setting; we use a different terminology to highlight the exact nature of the adversary’s lack of knowledge), where the adversary is unaware of the classifier architecture being used, and simply trains some classifier $\hat{f}$ on the portion of the training data available. This is a plausible practical setting, since knowledge about the architecture and the hyperparameters of the network under attack may be difficult for the adversary to obtain.

II-B Evasion attacks on specific classifiers

Attack | Classifier | Constraint | Intuition
Optimal attack on Linear SVMs [30] | Linear SVMs | $\ell_2$ | Move towards classifier boundary
Fast Gradient | Neural networks | $\ell_2$ | First-order approximation to direction of smallest perturbation
Fast Gradient Sign [16] | Neural networks | $\ell_\infty$ | Constant scaling for each pixel models perception better
Carlini’s attack [7] | Neural networks | $\ell_2$ | Iterative optimization over relaxed minimization problem
TABLE I: Summary of attacks on Linear SVMs and neural networks.

We now describe existing attacks from the literature for specific ML classifiers such as linear classifiers and neural networks. These are summarized in Table I.

II-B1 Optimal attacks on linear classifiers

In the multi-class classification setting for Linear SVMs, a classifier $g_i$ is trained for each class $i \in \mathcal{C}$, where

$g_i(x) = w_i^T x + b_i.$   (1)

The final classifier $f$ assigns $x$ to the class $f(x) = \arg\max_i g_i(x)$. Given that the true class is $t$, the objective of an untargeted attack is to find the closest point $\tilde{x}$ such that $f(\tilde{x}) \neq t$.

From [30], we know the optimal attacks on affine multi-class classifiers under the $\ell_2$ metric. This attack finds the $\tilde{x}$ that minimizes $\|\tilde{x} - x\|_2$ subject to the constraint $f(\tilde{x}) \neq t$. For $x$ such that $f(x) = t$ and target class $j$, let $w = w_j - w_t$ and $b = b_j - b_t$. Observe that $g_j(\tilde{x}) \geq g_t(\tilde{x})$ if and only if $w^T \tilde{x} + b \geq 0$. Then, the adversarial example

$\tilde{x} = x - \dfrac{w^T x + b}{\|w\|_2^2}\, w$   (2)

satisfies $g_j(\tilde{x}) = g_t(\tilde{x})$. The minimum modification required to cause a misclassification as $j$ is

$\|\tilde{x} - x\|_2 = \dfrac{|w^T x + b|}{\|w\|_2} = \dfrac{g_t(x) - g_j(x)}{\|w_j - w_t\|_2}.$   (3)

Thus the optimal choice of $j$ for an untargeted attack is $j^{*} = \arg\min_{j \neq t} \dfrac{g_t(x) - g_j(x)}{\|w_j - w_t\|_2}$.
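
As a concrete illustration, the following NumPy sketch implements Eqs. (2) and (3) for a multi-class linear classifier. The weight matrix W and bias vector b are assumed to come from a trained one-versus-rest Linear SVM (for instance, scikit-learn's LinearSVC exposes them as coef_ and intercept_).

```python
import numpy as np

def optimal_linear_svm_attack(x, W, b, true_class):
    """Untargeted optimal L2 attack on an affine multi-class classifier.

    W: (num_classes, d) weight matrix with rows w_i; b: (num_classes,) biases.
    Returns the adversarial example of Eq. (2) for the best target class j*."""
    scores = W @ x + b
    t = true_class
    best_dist, best_pert = np.inf, None
    for j in range(W.shape[0]):
        if j == t:
            continue
        w = W[j] - W[t]                          # w = w_j - w_t
        gap = scores[t] - scores[j]              # g_t(x) - g_j(x) > 0
        dist = gap / np.linalg.norm(w)           # Eq. (3): minimum modification
        if dist < best_dist:
            best_dist = dist
            best_pert = (gap / np.linalg.norm(w) ** 2) * w   # Eq. (2)
    return x + best_pert
```

The returned point lies exactly on the decision boundary between classes $t$ and $j^{*}$, so in practice a small overshoot factor is applied to the perturbation to force the misclassification.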

II-B2 Gradient based attacks on neural networks

The Fast Gradient Sign (FGS) attack is an efficient attack against neural networks introduced in [16]. This attack is designed for the $\ell_\infty$ metric. Adversarial examples are generated by adding adversarial noise proportional to the sign of the gradient of the loss function $\mathcal{L}(x, y; \theta)$. Here, $x$ is the example, $y$ is the true class, and $\theta$ denotes the network weight parameters. Concretely,

$\tilde{x} = x + \epsilon \,\mathrm{sign}\!\left(\nabla_x \mathcal{L}(x, y; \theta)\right).$   (4)

The gradient can be efficiently calculated using backpropagation. The parameter $\epsilon$ controls the magnitude of the adversarial perturbation, similar to the perturbation magnitude for the attack on Linear SVMs:

$\|\tilde{x} - x\|_\infty = \epsilon.$   (5)

See Figure 13 in the Appendix for images modified with a range of $\epsilon$ values.

The FGS attack and the attack on Linear SVMs are constrained according to different norms. To facilitate a comparison of the robustness of classifiers as well as the effectiveness of our defense across them, we propose a modification of the FGS attack which is constrained by the $\ell_2$ norm. Denoting this as the Fast Gradient (FG) attack, we define the adversarial example to be

$\tilde{x} = x + \epsilon \,\dfrac{\nabla_x \mathcal{L}(x, y; \theta)}{\|\nabla_x \mathcal{L}(x, y; \theta)\|_2}.$   (6)

For the FG attack, $\epsilon$ is the $\ell_2$ norm of the perturbation.
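
Both attacks reduce to a one-line update once the loss gradient is available. The sketch below assumes grad holds $\nabla_x \mathcal{L}(x, y; \theta)$, computed by backpropagation in whatever framework is in use (Theano in our experiments); clipping to the valid input range is omitted.

```python
import numpy as np

def fgs_attack(x, grad, eps):
    # Fast Gradient Sign, Eq. (4): every feature is perturbed by +/- eps (l_inf budget)
    return x + eps * np.sign(grad)

def fg_attack(x, grad, eps):
    # Fast Gradient, Eq. (6): perturbation of l_2 norm eps along the gradient direction
    return x + eps * grad / max(np.linalg.norm(grad), 1e-12)
```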

II-B3 Optimization-based attacks on neural networks

The direct optimization-based formulation of adversarial sample generation for a classifier $f$ is

$\min_{\tilde{x}} \; d(x, \tilde{x})$   (7)
s.t. $f(\tilde{x}) \neq y, \;\; \tilde{x} \in \mathcal{X},$

where $d(\cdot, \cdot)$ is an appropriately chosen distance metric and $\mathcal{X}$ is the constraint on the input space. Since the constraint in the above optimization problem is combinatorial, various related forms of this minimization problem have been proposed [47, 7, 30]. We focus on the relaxation studied by Carlini and Wagner [7]:

$\min_{\tilde{x}} \; d(x, \tilde{x}) + c \cdot \ell(\tilde{x})$   (8)
s.t. $\tilde{x} \in \mathcal{X},$

where $\ell$ is a loss function that is minimized when $\tilde{x}$ is misclassified and $c$ is a constant that trades off the two terms.

Carlini and Wagner investigate a variety of loss functions as well as methods to ensure the generated adversarial sample stays within the input space constraints. In our experiments with neural networks, for untargeted attacks we use the loss function $\ell(\tilde{x}) = \max\!\left(Z(\tilde{x})_y - \max_{j \neq y} Z(\tilde{x})_j,\, -\kappa\right)$, where $y$ is the original class of the input, $Z(\cdot)$ represents the output of the neural network before the softmax layer and $\kappa$ represents the confidence parameter. The distance metric used is the $\ell_2$ norm since it is found to perform the best.
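
This loss term is easy to state in code. The sketch below computes only the term for a single example, given the pre-softmax outputs (logits); the full attack iteratively minimizes $\|\tilde{x} - x\|_2^2 + c\,\ell(\tilde{x})$ with a gradient-based optimizer and a change of variables that keeps $\tilde{x}$ inside the input domain.

```python
import numpy as np

def cw_untargeted_loss(logits, y, kappa=0.0):
    """Untargeted loss: max(Z(x)_y - max_{j != y} Z(x)_j, -kappa).

    Becomes negative only once the input is misclassified with margin kappa."""
    z_y = logits[y]
    z_other = np.max(np.delete(logits, y))
    return max(z_y - z_other, -kappa)
```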

We evaluate the effectiveness of our defense against this state-of-the-art attack in Section V.

III Data transformations as a defense

In the previous section, we have seen that ML classifiers are vulnerable to a variety of different evasion attacks. Thus, there is a clear need for a defense mechanism that is effective against a variety of attacks, since a priori, the owner of the system has no knowledge of the range of possible attacks. Further, finding a defense that works across multiple classifiers can direct us to a better understanding of why ML systems are vulnerable in the first place.

In this section, we present a defense that is resilient to attacks from the literature in the mismatch setting, and remains effective even in the presence of a white-box adversary with knowledge of the defense. Further, the defense makes multiple types of ML classifiers operating in different application scenarios more robust as shown by our results in Section V. The defense is based on linear transformations of data, including linear dimensionality reduction.

III-A Overview of defense

The dimension of the data is $d$ and the training data is a $d \times n$ matrix $X$, so each example is a column. We assume the data is centered, i.e. $X\mathbf{1} = \mathbf{0}$, where $\mathbf{1}$ is the vector of all ones and $\mathbf{0}$ is the vector of all zeros. The set of data classes is $\mathcal{C}$ and the classifier in use is $f$.

In our defense, we leverage linear transformations of the data to make the classifier more resilient by modifying the training phase. In the first step of our defense, an algorithm selects a linear transformation $A$, such as a dimensionality reduction, based on properties of the data distribution. Then, the training data is transformed to $AX$ and a new classifier is trained on the transformed training set. In the classification phase, all inputs are transformed in the same way before being provided to the classifier.

Input:  $X$, Train, Select
Output:  classifier $g$
1:  Select the linear transformation $A = \text{Select}(X)$
2:  Compute the transformed training set $X' = AX$
3:  Train classifier $f_A = \text{Train}(X')$
4:  Let $g(x) = f_A(Ax)$
Algorithm 1 LTtrain

The additional inputs to Algorithm 1 are:

  • Select: The algorithm used to select a linear transformation of the data based on some properties of the data. This may be a specialization of a more general algorithm to specific parameters: e.g. a PCA-based projection with a fixed reduced dimension $k$.

  • Train: This is the algorithm used to train classifiers of the desired class. In general, this will be a specialization of a more general training algorithm to specific parameters. For example, Train might produce a neural network via Stochastic Gradient Descent [15] starting from the untrained classifier network using training parameters such as the learning rate and momentum.

At first glance, this approach may seem futile, because the initial layer of many machine learning classifiers, including SVMs and neural networks, is a linear function. However, although these classifiers are already capable of applying any linear transformation to the data that the training procedure finds to be useful, the standard training process does not optimize for adversarial robustness and does not choose to make these transformations, even though they are available. Thus, an explicit linear transformation of the data is capable of significantly changing the learned classifier, and a carefully selected transformation can lead to beneficial changes.
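
The following Python sketch shows one way Algorithm 1 can be instantiated. The particular Select and Train routines (PCA keeping $k$ components, and a Linear SVM) are illustrative choices, and the data is assumed to be centered and arranged with one example per row, as is conventional for scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def lt_train(X, y, select, train):
    """Algorithm 1 (LTtrain): train a classifier on linearly transformed data."""
    A = select(X)                        # step 1: choose the linear transformation
    X_transformed = X @ A.T              # step 2: transform the training set
    clf = train(X_transformed, y)        # step 3: train on the transformed data
    # step 4: the deployed classifier applies A to every input before classifying
    return lambda X_new: clf.predict(X_new @ A.T)

# Illustrative Select and Train routines (parameter values are assumptions):
def pca_select(X, k=80):
    return PCA(n_components=k).fit(X).components_   # (k, d) projection matrix U_k^T

def svm_train(X_transformed, y):
    return LinearSVC(C=1.0).fit(X_transformed, y)

# Usage: g = lt_train(X_train, y_train, pca_select, svm_train); predictions = g(X_test)
```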

III-B Effect on Support Vector Machines

To motivate Algorithm 1, we examine in detail the case where Train produces a linear classifier by learning a support vector machine.

Learning an SVM [41] that can classify data points $x_i$ from two classes, labeled $y_i \in \{-1, +1\}$, involves finding an affine function $g(x) = w^T x + b$ that minimizes the following loss function

$\dfrac{1}{2} w^T w + \sum_i \max\!\left(0,\, 1 - y_i (w^T x_i + b)\right).$   (9)

If we use Algorithm 1 and apply an invertible linear transformation $A$ to the training data, we will learn an alternative function $h(x) = v^T x + b$ that minimizes

$\dfrac{1}{2} v^T v + \sum_i \max\!\left(0,\, 1 - y_i (v^T A x_i + b)\right).$   (10)

Our actual classifier will be $h(Ax) = v^T A x + b$. Let $w = A^T v$ and rewrite (10) as

$\dfrac{1}{2} w^T (A^T A)^{-1} w + \sum_i \max\!\left(0,\, 1 - y_i (w^T x_i + b)\right).$   (11)

Selecting the $v$ that minimizes (10) and composing it with $A$ is equivalent to directly selecting the value of $w$ that minimizes (11). Thus applying an invertible linear transformation to the data is equivalent to modifying the quadratic form that appears in the regularization term of the SVM loss function.

Regularization: A standard generalization of (9) multiplies the $w^T w$ term by a regularization parameter $\lambda$. This corresponds to the simplest possible linear transformation of the data: multiplication by a constant, $A = \lambda^{-1/2} I$. Thus, ordinary regularization of SVMs fits neatly into this framework. However, more general linear transformations provide us with significantly more flexibility to modify the regularization constraint and allow us to access novel and otherwise inaccessible robustness-performance tradeoffs.
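
For concreteness, substituting a scalar transformation into the quadratic form of (11) recovers the usual regularization parameter:

$A = cI \;\Longrightarrow\; (A^T A)^{-1} = \tfrac{1}{c^2} I \;\Longrightarrow\; \tfrac{1}{2} w^T (A^T A)^{-1} w = \tfrac{\lambda}{2}\, w^T w, \quad \text{with } \lambda = \tfrac{1}{c^2}.$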

Singular linear transformations: What happens if $A$ is not invertible? In this case, $w = A^T v$ is a member of $\mathrm{Im}(A^T)$ by definition. (The image $\mathrm{Im}(A^T)$ of the operator $A^T$ is the vector space of linear combinations of the columns of the matrix $A^T$.) Then $w$ minimizes

$\dfrac{1}{2} w^T (A^T A)^{+} w + \sum_i \max\!\left(0,\, 1 - y_i (w^T x_i + b)\right)$

over $\mathrm{Im}(A^T)$, where $(A^T A)^{+}$ is the Moore-Penrose pseudoinverse of $A^T A$. (Because $A^T A$ is a symmetric matrix, it has a spectral decomposition $A^T A = Q \Lambda Q^T$ where $\Lambda$ is diagonal. Then $(A^T A)^{+} = Q \Lambda^{+} Q^T$ where $\Lambda^{+}$ is diagonal, $\Lambda^{+}_{ii} = \Lambda_{ii}^{-1}$ if $\Lambda_{ii} \neq 0$, and $\Lambda^{+}_{ii} = 0$ otherwise [38].) In addition to modifying the costs assigned to each weight vector, applying a non-invertible transformation to the data rules out some choices of $w$ completely. Alternatively, one can think of the regularization term as assigning an infinite cost to each $w \notin \mathrm{Im}(A^T)$, i.e., to each $w$ with a nonzero component in $\ker(A)$. (The kernel $\ker(A)$ of the operator $A$ is the space of vectors $u$ such that $Au = \mathbf{0}$; it is orthogonal to $\mathrm{Im}(A^T)$.)

Expressivity: For a fixed collection of data points, it is possible to find a linear transformation that results in the selection of essentially any hard-decision classifier positively correlated with the true labels. Observe that this is the opposite of the naive fear described in Section III-A that linear transformations should have no effect on the learned classifier: the choice of linear transformation might influence the final classifier structure too much! Because of this, it is essential to select linear transformations in a systematic manner.

III-C Defense using PCA

Several of the choices of Select that we will use in Algorithm 1 are based on Principal Component Analysis.

III-C1 PCA in brief

PCA [44] is a linear transformation of the data that identifies so-called ‘principal axes’ in the space of the data, which are the directions in which the data has maximum variance, and projects the original data along these axes. The dimensionality of the data is reduced by choosing to project it along the top $k$ principal axes. The choice of $k$ depends on what percentage of the original variance is to be retained. Intuitively, PCA identifies the directions in which the ‘signal’, or useful information in the data, is present, and discards the rest as noise.

Concretely, let the data samples be column vectors $x_i \in \mathbb{R}^d$ for $i \in \{1, \ldots, n\}$, and let $X$ be the $d \times n$ matrix of centered data samples. The principal components of $X$ are the normalized eigenvectors of its sample covariance matrix $C = \frac{1}{n} X X^T$. More precisely, because $C$ is positive semidefinite, there is a decomposition $C = U \Lambda U^T$ where $U$ is an orthogonal matrix, $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_d)$, and $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_d \geq 0$. In particular, the columns of $U$ are unit eigenvectors of $C$. The eigenvalue $\lambda_i$ is the variance of $X$ along the $i$-th component.

Each column of $U^T X$ is a data sample represented in the principal component basis. Let $X_k$ be the projection of the sample data onto the $k$-dimensional subspace spanned by the $k$ largest principal components. Thus $X_k = U_k U_k^T X$, where $U_k$ is the $d \times k$ matrix containing the first $k$ columns of $U$. The amount of variance retained is $\sum_{i=1}^{k} \lambda_i$, which is the sum of the $k$ largest eigenvalues.
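
A minimal NumPy sketch of this computation is shown below; it assumes $X$ is already centered and small enough for a dense eigendecomposition (in our experiments we use the scikit-learn PCA module instead, see Section IV-C).

```python
import numpy as np

def principal_components(X):
    """X: d x n matrix of centered samples (columns are examples).
    Returns (U, lam) with the columns of U sorted by decreasing variance lam."""
    n = X.shape[1]
    C = (X @ X.T) / n                    # sample covariance matrix
    lam, U = np.linalg.eigh(C)           # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1]        # reorder to descending variance
    return U[:, order], lam[order]

def project_top_k(X, U, k):
    Uk = U[:, :k]                        # d x k matrix of the top-k principal axes
    return Uk @ (Uk.T @ X)               # X_k = U_k U_k^T X
```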

III-C2 Implementing the defense

There are two choices of a linear transformation $A$ that keep the $k$ largest principal components: $A = U_k^T$, which is a $k \times d$ matrix, and $A = U_k U_k^T$, which is a $d \times d$ matrix. For some choices of Train, including SVMs trained using (9), these choices of $A$ are equivalent, i.e. they will output identical classifiers given the same inputs. The choice $A = U_k^T$ allows for more efficient training because the representation of the data is more compact. The choice $A = U_k U_k^T$ makes it easier to compare the reduced-dimension data to the original data.
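
Continuing the sketch above, both transformations can be read off directly from the matrix of principal axes (the helper below is hypothetical, named only for illustration):

```python
import numpy as np

def pca_defense_transforms(U, k):
    """U: d x d matrix of principal axes, columns sorted by decreasing variance."""
    Uk = U[:, :k]
    A_compact = Uk.T          # k x d: compact representation, cheaper to train on
    A_full = Uk @ Uk.T        # d x d: projected data stays in the original space
    return A_compact, A_full
```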

The complexity analysis for the PCA-based defense is in Section IX-A (cf. Appendix).

III-D Intuition behind the PCA defense

We will give some intuition about why dimensionality reduction should improve resilience for SVMs. We discuss the two-class case for simplicity, but the ideas generalize to the multi-class case. The core of a linear classifier is a function $g(x) = w^T x + b$. Both $w$ and $x$ can be expressed in the principal component basis as $U^T w$ and $U^T x$. We expect many of the principal components with the largest coefficients in the weight vector, i.e. the largest values of $|(U^T w)_i|$, to correspond to small eigenvalues $\lambda_i$.

Fig. 2: Magnitudes of the coefficients of the weight vector of a linear SVM in the principal component basis. On the horizontal axis, we have the variance $\lambda_i$ of the $i$-th principal component. On the vertical axis, the magnitude of the corresponding weight coefficient $|(U^T w)_i|$. The classifier is trained on the original MNIST data.

The reason for this is very simple: in order for different principal components to achieve the same level of influence on the classifier output, $|(U^T w)_i|$ must be proportional to $1/\sqrt{\lambda_i}$. To take advantage of the information in a component with small variation, the classifier must use a large coefficient. Of course, the principal components vary in their usefulness for classification. However, among the components that are useful, we expect a general trend of decreasing coefficients of $U^T w$ as $\lambda_i$ increases.

Figure 2 validates this prediction. Many of the principal components with very low variances have large coefficients in $U^T w$. As variance increases, the coefficients tend to decrease. The exception to the trend is the first principal component, but this is not surprising. The first principal component (on the top right in each plot) is by far the most useful source of classification information because it is strongly aligned with the difference of the class means. Consequently it does not fit the overall trend and actually has the largest coefficient. However, among the other components, there is a mixture of cross-class and in-class variation and the trend holds.

Effect on robustness: Since the optimal attack perturbation for a linear classifier is a multiple of $w$ (see (2)), the principal components with large coefficients are the ones that the attacker takes advantage of. The defense denies this opportunity to the attacker by forcing the classifier to assign no weight to the low-variance components. This significantly changes the resulting $w$ that the classifier learns. The classifier loses access to some information, but accessing that information required large weight coefficients, so the attacker is hurt far more. Thus, by using only high-variance components, the classifier gains significant adversarial robustness for the loss of a small amount of classification performance.

Figure 3 shows the coefficient magnitudes for classifiers trained on data projected onto the top $k$ principal components. Observe that eliminating the low-variance principal components mostly removes the relationship between the variance of a component and the corresponding coefficient of $U^T w$.

(a)
(b)
Fig. 3: The magnitudes of the coefficients of the weight vector of a linear SVM in the principal component basis. On the horizontal axis, we have the variance $\lambda_i$ of the $i$-th principal component. On the vertical axis, the magnitude of the corresponding weight coefficient $|(U^T w)_i|$. The classifiers are trained on the MNIST data projected onto the top $k$ principal components.

III-E Other linear transformations and classifiers

III-E1 Anti-whitening

Now we will discuss another linear transformation based on the principal components of the training data that can confer additional robustness. As before, we have the decomposition $\frac{1}{n} X X^T = U \Lambda U^T$. The linear transformation which we call ‘anti-whitening’ is accomplished by selecting $A = \Lambda^{c/2} U^T$ or $A = U \Lambda^{c/2} U^T$ for some $c \geq 1$. In our experiments, we use the former, but the latter is conceptually easier to work with. Anti-whitening exaggerates the disparity between the variances of the components. In the full-rank case, the quadratic form introduced in the SVM loss is $w^T U \Lambda^{-c} U^T w$. It serves as a softer alternative to completely eliminating low-variance principal components: the hard cutoff is replaced with the gradual penalty $\lambda_i^{-c}$. Low-variance components are still available for use, but the price of accessing them is increased.
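
A sketch of the corresponding Select routine is below; eigenvalues are clipped at zero to guard against small negative values from numerical error, and the matrix returned is $A = \Lambda^{c/2} U^T$, the form we use in our experiments.

```python
import numpy as np

def anti_whitening_transform(X, c=2):
    """Build A = Lambda^(c/2) U^T from centered data X (d x n)."""
    n = X.shape[1]
    lam, U = np.linalg.eigh((X @ X.T) / n)
    order = np.argsort(lam)[::-1]
    lam, U = np.clip(lam[order], 0.0, None), U[:, order]
    return np.diag(lam ** (c / 2.0)) @ U.T   # apply as x -> A x before training
```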

III-E2 Neural networks

Neural networks are more complicated due to the non-uniqueness of local minima in the associated loss function and the larger variety of regularization methods that are employed. At first glance, it may seem that adding a linear layer as the first layer of a neural network may provide the same benefits as PCA-based dimensionality reduction. However, the training process does not optimize for robustness, so in practice the linear layer that is learned does not have the desired effect (in fact, Gu and Rigazio [17] found that even non-linear denoising layers do not add adversarial robustness), unlike in our defense where the linear layer weights are separately specified using PCA. Thus, the intuition for the effectiveness of linear transformations carries over from the case of Linear SVMs, with the adversary losing access to dimensions which aid in the creation of adversarial samples, while the classifier remains largely unaffected since most of the information required for classification is retained. Further, in our empirical results in Section V, it is clear that the average distance to the boundary of the classifier increases when the linear transformation is added, thus leading to robustness.

Finally, we note that for Linear SVMs, standard regularization can also be understood as increasing the price of all components, which reduces the dependence of the classifier on components that only provide marginal classification benefit. However, in the case of neural networks, the usual regularization of the weight matrix does not follow the intuition given here for increasing robustness.

III-F Attacks against the linear transformation defense

In order to evaluate our proposed defense mechanism, we carry out the attacks described in Section II-B against classifiers learned using our defense. Both for linear classifiers and neural networks, the classifier learned using our defense lies in the same family as the classifier that would be learned without the defense. Thus, simple modifications of existing attacks give the white-box versions of attacks against the classifier with the defense.

III-F1 White-box attacks

Due to the inclusion of a linear transformation of the data, the overall classifier is $g(x) = f_A(Ax)$. In the white-box setting, since the adversary is aware of the exact parameters of the classifier produced by the defense, attacks are carried out with respect to the overall classifier. For the optimal attack on Linear SVMs, a similar change is made, where each $w_i$ (the output of the SVM optimization) is replaced by $A^T w_i$, since that is the term which acts on the input $x$. Thus, with $w = A^T (w_j - w_t)$ and $b = b_j - b_t$, the adversarial sample is now

$\tilde{x} = x - \dfrac{w^T x + b}{\|w\|_2^2}\, w.$   (12)

In the case of gradient based attacks on neural networks, the gradient of the loss with respect to the original input is

$\nabla_x \mathcal{L}_g(x, y; \theta) = A^T \nabla_z \mathcal{L}_{f_A}(z, y; \theta)\big|_{z = Ax},$   (13)

where $\mathcal{L}_g$ is the loss function associated with $g$ and $\mathcal{L}_{f_A}$ is associated with $f_A$. The loss of the neural network is computed with respect to its input, which is now $Ax$, but the adversary has to add a perturbation to the input $x$ which causes the largest increase in loss (up to first order). The FG attack on the defended network is then

$\tilde{x} = x + \epsilon\, \dfrac{A^T \nabla_z \mathcal{L}_{f_A}(Ax, y; \theta)}{\left\|A^T \nabla_z \mathcal{L}_{f_A}(Ax, y; \theta)\right\|_2}.$   (14)
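
In our implementation the transformation is added as a linear layer, so the framework's automatic differentiation performs this chain rule for us; the sketch below makes the pullback through $A$ explicit for a single example.

```python
import numpy as np

def fg_attack_on_defended(x, A, grad_at_Ax, eps):
    """FG attack on g(x) = f_A(Ax), Eq. (14).

    grad_at_Ax: gradient of the network loss with respect to its own
    (transformed) input, evaluated at Ax; A: the defense's transformation."""
    grad_x = A.T @ grad_at_Ax                            # Eq. (13): chain rule through A
    return x + eps * grad_x / max(np.linalg.norm(grad_x), 1e-12)
```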

In the case of the optimization based attack on neural networks, the optimization objective remains the same, with a change in the classifier the loss function is computed over:

$\min_{\tilde{x}} \; d(x, \tilde{x}) + c \cdot \ell_g(\tilde{x})$   (15)
s.t. $\tilde{x} \in \mathcal{X},$

where $\ell_g$ is the loss computed with respect to the overall classifier $g(x) = f_A(Ax)$. In our experiments, we first compute the linear transformation matrix, and then add it as a linear layer after the input layer of the neural network.

III-F2 Classifier mismatch attacks

In this setting, the adversary trains a classifier $\hat{f}$ that mimics the original classifier $f$, but is not aware of the defense. We assume the adversary is able to train $\hat{f}$ such that it perfectly matches $f$, the classifier trained on the original data without any linear transformations. The adversarial samples are thus generated with respect to $\hat{f}$, and not with respect to the true (defended) classifier $g$. Equivalently, this setting corresponds to the adversary using Algorithm 1 with the true training examples $X$, the true training function Train, but with a different choice of Select. The adversary does not know the true Select function, so they use a trivial version that always returns the identity transformation.

III-F3 Architecture mismatch attacks

In this setting, the adversary trains a classifier $\hat{f}$ using a choice of Train that does not match the one used to produce $f$. Thus $\hat{f}$ is not only a different function from $f$, but it may come from a different family of classifiers. For example, $\hat{f}$ may be a three-layer neural network and $f$ may be a five-layer neural network. As in the classifier mismatch setting, the adversary is not aware of the defense used (the choice of Select).

The classifier and architecture mismatch attack settings are interesting to consider since the problem of the transferability of adversarial samples [34] is still an open research question. Our results in these settings demonstrate that a defense using linear transformations can mitigate the threat posed by transferability.

III-F4 Goal of the defenses

The goal of our defense is to increase classifier robustness. Specifically, the defenses increase the distance between benign examples and the nearest adversarial examples. Unlike some proposed defenses, we are not attempting to make the process of finding adversarial examples computationally difficult. While it is possible to use known attacks against classifiers produced by our defense, the resulting adversarial examples will be farther away from the benign examples than the adversarial examples for undefended classifiers are. Thus, the fact that these attacks find adversarial examples is not a limitation of our approach.

IV Experimental setup

In this section we provide brief descriptions and implementation details of the datasets, machine learning algorithms, dimensionality reduction algorithms, and metrics used in our experiments.

IV-A Datasets

In our evaluation, we use two datasets. The first is the MNIST image dataset and the second is the UCI Human Activity Recognition dataset. We now describe each of these in detail.

IV-A1 MNIST

This is a dataset of images of handwritten digits [27]. There are 60,000 training examples and 10,000 test examples. Each image belongs to a single class from 0 to 9. The images have a dimension of 28 × 28 pixels (a total of 784) and are grayscale. The digits are size-normalized and centered. This dataset is used commonly as a ‘sanity check’ or first-level benchmark for state-of-the-art classifiers. We use this dataset since it has been extensively studied from the attack perspective by previous work. It is also easy to visualize the effects of our defenses on this dataset.

IV-A2 UCI Human Activity Recognition (HAR) using Smartphones

This is a dataset of measurements obtained from a smartphone’s accelerometer and gyroscope [1] while the participants holding it performed one of six activities. Of the 30 participants, 21 were chosen to provide the training data, and the remaining 9 the test data. There are 7352 training samples and 2947 test samples. Each sample has 561 features, which are various signals obtained from the accelerometer and gyroscope. The six classes of activities are Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing and Laying. We used this dataset to demonstrate that our defenses work across multiple datasets and applications.

IV-B Machine learning algorithms

We have evaluated our defenses across multiple machine learning algorithms including linear Support Vector Machines (SVMs) and a variety of neural networks with different configurations. All experiments were run on a desktop running Ubuntu 14.04, with a 4-core Intel® Core™ i7-6700K CPU running at 4.00 GHz, 24 GB of RAM and a NVIDIA® GeForce® GTX 960 GPU.

IV-B1 Linear SVMs

Ease of training and the interpretability of the separating hyperplane weights have led to the use of Linear SVMs in a wide range of applications [40, 2]. We use the LinearSVC implementation from the Python package scikit-learn [37] for our experiments, which uses the ‘one-versus-rest’ method for multi-class classification by default.

In our experiments, we obtained a classification accuracy of 91.5% for the MNIST dataset and 96.7% for the HAR dataset.

IV-B2 Neural networks

Neural networks can be configured by changing the number of layers, the activation functions of the neurons, the number of neurons in each layer, etc. We performed most of our experiments on a standard neural network used in previous work, for the purposes of comparison. The networks we use are a standard one from [47], which we refer to as FC100-100-10, and a variant of it, FC200-200-200-10. The first neural network has an input layer, followed by 2 hidden layers, each containing 100 neurons, and an output softmax layer containing 10 neurons. Similarly, the second neural network has 3 hidden layers, each containing 200 neurons. Each neuron has a sigmoid activation function and the loss function used for training is the cross-entropy loss. We also ran experiments with a neural network with Rectified Linear Units (ReLU) as the neurons. We omit those results here due to lack of space and since they are very similar to the sigmoid activation results for the datasets we use. Both FC100-100-10 and FC200-200-200-10 are trained with a learning rate of 0.01 and momentum of 0.9 for 500 epochs. The size of each minibatch is 500. On the MNIST test data, we get a classification accuracy of 97.71% for FC100-100-10 and 98.02% for FC200-200-200-10. We use Theano [49], a Python library optimized for mathematical operations with multi-dimensional arrays, and Lasagne [13], a deep learning library that uses Theano, for our neural network experiments.

Our classification accuracy results for both Linear SVMs and fully connected neural networks are comparable to baseline numbers for corresponding architectures on MNIST (http://yann.lecun.com/exdb/mnist/), validating our implementation.

IV-C Linear transformation techniques

We use the PCA module from scikit-learn [37]. Depending on the application, either the number of components to be projected onto, or the percentage of variance to be retained can be specified. After performing PCA on the vectorized MNIST training data to retain 99% of the variance, the reduced dimension is 331, which is the first reduced dimension we use in our experiments on PCA based defenses. For anti-whitening, we use a slight modification of the PCA interface to create the required transformation matrix.
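
As a usage sketch, scikit-learn selects the number of components automatically when a variance fraction is passed to PCA; the variables X_train and X_test below are assumed to hold the vectorized MNIST data.

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=0.99)                  # keep enough components for 99% of the variance
X_train_reduced = pca.fit_transform(X_train)  # X_train: (n_samples, 784) vectorized images
print(pca.n_components_)                      # 331 on the MNIST training data, as reported above
X_test_reduced = pca.transform(X_test)        # the same transformation is applied at test time
```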

IV-D Metrics

We evaluate the relationship between $\epsilon$, which is the allowed distance between the original example and the adversarial example, and the adversarial success rate, which we compute as follows. For each benign input $x$ with true label $y$, we check two conditions: whether $f(\tilde{x}) \neq y$ after perturbation, and whether $f(x) = y$ initially. Thus, the adversary’s attempt is successful if the original classification was correct but the new classification on the adversarial sample is incorrect. In a sense, this count represents the number of samples that are truly adversarial, since it is only the adversarial perturbation that is causing misclassification, and not an inherent difficulty for the classifier in classifying this sample. While reporting adversarial success rates, we divide this count by the total number of benign samples correctly classified after they pass through the entire robust classification pipeline.
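
A minimal sketch of this metric, assuming a scikit-learn-style model with a predict method:

```python
import numpy as np

def adversarial_success_rate(model, X_benign, X_adv, y_true):
    """Fraction of correctly classified benign inputs whose perturbed versions are misclassified."""
    pred_benign = model.predict(X_benign)
    pred_adv = model.predict(X_adv)
    correct = pred_benign == y_true              # benign sample classified correctly
    fooled = correct & (pred_adv != y_true)      # ...and its adversarial version misclassified
    return fooled.sum() / correct.sum()
```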

V Experimental results

In this section we present an overview of our empirical results. The main questions we seek to answer with our evaluations are:

  1. Is the defense effective in the classifier mismatch setting?

  2. Is the defense effective in the white box setting?

  3. Does the defense work for different classifier families?

  4. Does the defense generalize across different datasets?

  5. Which linear transformations are most effective?

Our evaluation results confirm the effectiveness of our defense in a variety of scenarios, each of which has a different combination of datasets, machine learning algorithms, attacks and linear transformation used for the defense. For each set of evaluations, we vary a particular step of the classification pipeline and fix the others. Our results are summarized in Table II.

Baseline configuration: We start by considering a classification pipeline with the MNIST dataset as input data, a Linear SVM as our classification algorithm and PCA as the linear transformation used in our defense. Since we consider the Linear SVM as our classifier, we evaluate its susceptibility to adversarial samples generated using the attack on Linear SVMs described in Section II-B. We evaluate our defenses on adversarial samples created starting from the test set for each dataset. Unless otherwise noted, all security and utility results are for the complete test set. To empirically demonstrate that our defense is resilient not only in this baseline case, but also in various configurations of it, we systematically investigate its effect as each component of the pipeline, as well as the attacks, are changed.

Note that in all of the plots showing the effectiveness of our defense, the legend key ‘None’ denotes adversarial success for a classifier without any defense.

Data set | Classifier | Attack type | Defense type and parameter | Robustness improvement | Accuracy reduction
MNIST | Linear SVM | Classifier mismatch | PCA (80) | | 0.22%
MNIST | Linear SVM | White-box (optimal) | PCA (80) | | 0.22%
MNIST | FC100-100-10 | White-box (FG) | PCA (40) | | 0.76%
MNIST | FC100-100-10 | White-box (FGS) | PCA (40) | | 0.76%
MNIST | FC100-100-10 | White-box (Opt.) | PCA (40) | | 0.76%
MNIST | FC200-200-200-10 | Arch. mismatch (FC100-100-10) | PCA (40) | | 0.85%
MNIST | FC100-100-10 | White-box (FG) | Anti-whiten (2) | | 0.15%
HAR | Linear SVM | White-box (optimal) | PCA (70) | | 2.3%
TABLE II: Robustness improvement at a misclassification rate of 60%. The table also gives the accuracy reduction for different classifiers, attacks and defenses on the MNIST and HAR datasets.
Fig. 4: Effectiveness of the defense in the classifier mismatch setting for the MNIST dataset with Linear SVMs. The adversarial example success on the MNIST dataset is plotted versus the perturbation magnitude $\epsilon$. The attack is performed against the original classifier and the effect of the defense is plotted for each reduced dimension $k$.

V-A Effect of defense on Support Vector Machines

In the baseline case, we begin by answering questions (i) and (ii) for Linear SVMs.

Fig. 5: Effectiveness of the defense for the MNIST dataset against optimal white-box attacks on Linear SVMs.
Fig. 6: Tradeoff between SVM classification performance on benign test data, and adversarial performance. The x-axis represents the average over test samples of the minimum perturbation needed to cause misclassification. The legend 1e+00 denotes the regularization constant used to train the SVM.

V-A1 Defense in the classifier mismatch setting

Figure 4 shows the variation in adversarial success against SVMs in the classifier mismatch setting. The defense significantly reduces adversarial success rates. For example, the defense using PCA reduces adversarial success from 100% to 3.4%, a 96.6% decrease. (We note here that all changes in adversarial success and utility are expressed in terms of the change in percentage points, i.e. a fall indicates the absolute difference in the percentages, and not a relative difference.) At a perturbation magnitude where the adversarial success rate against the undefended classifier is 99%, the defense brings the adversarial success rate down to a small fraction of its original value. Clearly, training with reduced dimension data leads to more robust Linear SVMs, and this can be seen in the effectiveness of the defense.

Further, as we decrease the reduced dimension $k$ used in the projection step of the defense, adversarial success decreases, allowing the system designer to tune the defense according to her needs. For one setting of the perturbation magnitude, an adversarial success rate of 56.7% at a larger reduced dimension drops to 5.9% at a smaller one; it drops further to 4.4% and then to 1.42% under aggressive dimensionality reduction.

In the classifier mismatch setting, the defense also acts like a noise removal process, removing adversarial perturbations and leaving behind the clean input data. This accounts for the added robustness we see in this setting as compared to the white box setting. Further, this mitigates the problem of the transferability of adversarial examples when the attacker is only aware of the classifier used and not of the defense.

V-A2 Defense in the white box setting

Figure 5 shows the variation in adversarial success for the defense against the optimal attack on Linear SVMs. This plot corresponds to the case where the adversary is aware of the dimensionality reduction defense and inputs a sample to the pipeline which is designed to optimally evade the reduced dimension classifier. At a perturbation magnitude of 0.5, where the classifier with no defenses has a misclassification rate of 99.04%, the reduced dimension classifier has a misclassification rate of just 19.75%, which represents an 80.25 percentage point (roughly 5×) decrease in the adversarial success rate. At an adversarial budget of 1.3, the misclassification rate for the classifier with no defenses is 100%, while it is about 66.7% for the classifier with a reduced dimension of 40.

We can also study the effect of our defense on the adversarial budget required to achieve a certain adversarial success rate. A budget of 0.3 is required to achieve an 86.6% misclassification rate without the defense, while the required budget for a classifier with the defense is 1.75, which is nearly a 6× increase. The corresponding numbers to achieve a 98% misclassification rate are 0.5 without the defense and 2.5 with it, which represents a 5× increase. The presence of the defense forces the adversary to add much larger perturbations to achieve the same misclassification rates. Thus, our defense clearly reduces the effectiveness of an attack carried out by a very powerful adversary with full knowledge of the defense and the classifier as well as the ability to carry out optimal attacks.

V-A3 Utility-security tradeoff for defense

Figure 6 shows the tradeoff between performance under ordinary and adversarial conditions. The kink in the tradeoff for this dataset is clearly between 80 and 60. There is very little benefit in classification performance from using more dimensions, and essentially no benefit in robustness from using fewer. At $k = 80$, we see a drop in classification success on the test set from 91.5% without any defenses to 90.64% with the defense. Thus, there is a modest utility loss of about 1.2% at this value of $k$, as compared to a security gain of nearly 6×, since the perturbation needed to cause 50% of the test set to be misclassified increases from 0.16 to 0.95.

With these results, we can conclude that our defense is effective for Linear SVMs in both the classifier-mismatch and white-box settings. Now, we investigate the performance of our defenses on neural networks, to substantiate our claim of the applicability of our defenses across machine learning classifiers.

V-B Effect of defense on neural networks

We now modify the baseline configuration by changing the classifier used to FC100-100-10. We evaluate our defenses on both gradient-based attacks for FC100-100-10: the Fast Gradient (FG) and Fast Gradient Sign (FGS) attacks as well as on the state-of-the-art optimization based attack from Carlini et al. [7]. We continue to use the MNIST dataset and PCA as the linear transformation. With these experiments, we answer question iii), i.e. ‘Do the defenses work for different classifiers?’ and further strengthen our claim that our defense is effective in the white box setting.

Fig. 7: Effectiveness of the defense for the MNIST dataset against FG attacks in the white box setting on FC100-100-10.
Fig. 8: Effectiveness of the defense for the MNIST dataset against FGS attacks in the white box setting on FC100-100-10.
Fig. 9: Effectiveness of the defense for the MNIST dataset against Carlini’s untargeted attack in the white-box setting on FC100-100-10.

V-B1 Defense against Fast Gradient attack in the white box setting

Figure 7 shows the variation in adversarial success for the defense as $\epsilon$, the parameter governing the strength of the FG attack, increases. The defense also reduces adversarial success rates for this attack. For example, at one perturbation magnitude, the defense using PCA reduces adversarial success from 41.42% to 16.7%, a 24.72 percentage point decrease. Again, at a larger perturbation, while the adversarial success rate is 72.42% without any defense, the defense brings the adversarial success rate down substantially. Thus, even for neural networks, the defense causes significant reductions in adversarial success.

Again, we can study the effect of our defense on the adversarial budget required to achieve a certain misclassification percentage. The budget required to achieve a 60% misclassification rate increases substantially, to 2, when the defense is used.

Directly comparing neural networks and Linear SVMs, it appears that neural networks are more robust to constrained attacks. However, it should be noted that while the Linear SVM robustness was evaluated against optimal attacks, the non-convex nature of the classification function in neural networks implies that the FG attack is only a first-order approximation to an optimal attack.

V-B2 Defense against Fast Gradient Sign attack in the white box setting

The FGS attack is constrained in terms of the $\ell_\infty$ norm, so all features with non-zero gradient are perturbed by either $+\epsilon$ or $-\epsilon$. The MNIST dataset has pixel values normalized to lie in $[0, 1]$. Thus if $\epsilon = 0.5$, the image with every pixel equal to 0.5 can be generated from any initial image. We restrict $\epsilon$ to be less than 0.25.

In Figure 8, at a small perturbation, the adversarial success rate falls from 41.64% to 10.14% with the defense, a 31.5 percentage point reduction. Also, at a larger perturbation, the adversarial success rate is 91.59% without the defense and 48.92% with it, a 42.67 percentage point reduction. Further, the perturbation needed to cause 90% misclassification is 0.11 without the defense but increases to 0.23 with the defense, which is roughly a 2× increase.

V-B3 Defense against optimization based attack in the white-box setting

We use Carlini and Wagner’s [7] $\ell_2$-constrained attack to find untargeted adversarial samples, i.e. the closest possible in terms of the $\ell_2$ norm. Since this attack returns the minimal possible perturbation for each sample, in Figure 9 we plot the CDF of the minimal perturbations found by the attack over the test set in order to compare using the same metric as the other results. To see that this attack is indeed more powerful than the Fast Gradient attack (which uses the same distance metric), note that at one perturbation magnitude the adversarial success is around 65% compared to 41.42% for the FG attack, and at a larger one it is 90% compared to 72.42% for the FG attack.

We repeated the attack on neural networks enhanced using our PCA-based defense. The attack was carried out on the composite classifier, thus representing the white box setting. In this case, we see that the adversarial success falls to 29.5% using the defense, a substantial drop. At a larger allowed budget of 1.5, the fall is 26.4 percentage points, to 63.8%. Further, the budget required to achieve a misclassification rate of 90% increases from 1.5 to 2.16, which is a 1.44× increase.

V-B4 Defense against Fast Gradient attack in the architecture mismatch setting

We now consider a setting where the adversary is less powerful. In the architecture mismatch setting, the adversary creates adversarial samples using a different neural network (FC100-100-10) from the one being attacked (FC200-200-200-10). These results are presented in Figure 10. We see a significant drop in adversarial success when our defense is used. For example, the adversarial success falls from 51.2% to 13.2% using the defense, which is a 38.0 percentage point drop. Also, the budget required to achieve a misclassification rate of 40% increases from 1.3 to 2.5, which is nearly a 2× increase. The performance of our defense in this setting, which is commonly referred to as a ‘black-box’ setting, highlights that the defense can mitigate the transferability of adversarial samples to a large extent.

Fig. 10: Effectiveness of the defense for the MNIST dataset for FG samples generated for FC100-100-10 and applied to FC200-200-200-10 (architecture mismatch setting).

With these results for neural networks, we conclude that our defenses are effective against a variety of different attacks, in each of which the nature of the adversarial perturbation is very different. We have also shown that our defense is effective against the state of the art attack for neural networks in the white-box setting, making a case for it to be included as a crucial component of any defensive measures against evasion attacks. These results also demonstrate that our defense can be used in conjunction with different types of classifiers, providing a general method for defending against adversarial inputs.

V-C Applicability for different datasets

Next, we modify the baseline configuration by changing the datasets used. We show results with Linear SVMs as the classifier and PCA as the dimensionality reduction algorithm. We present results for the Human Activity Recognition dataset.

V-C1 Defense for the HAR dataset

In Figure 11, the reduction in adversarial success of a white-box attack due to the defense using PCA on the HAR dataset is shown. At a fixed perturbation magnitude, the adversarial success rate drops from 77.3% without the defense to 48.3% with it, a 29 percentage point drop. In order to achieve a misclassification rate of 90%, the amount of perturbation needed is 0.65 without the defense, which increases to 0.93 with it. Thus, the adversarial budget must increase to achieve the same adversarial success rate. The impact on utility is modest: a drop of 2.3% for $k = 70$, which is small in comparison to the gain in security.

Fig. 11: Effectiveness of the defense for the HAR dataset against optimal white-box attacks on Linear SVMs. Adversarial example success on the HAR dataset versus perturbation magnitude $\epsilon$ for the Linear SVM attack, plotted for each reduced dimension $k$ used in the defense.

V-D Effect of PCA-based defense on utility

Table III presents the effect of our defense on the classification accuracy of benign data. The key takeaway is that the decrease in accuracy for both neural networks and Linear SVMs is at most 4% for moderate amounts of dimensionality reduction. Further, we notice that dimensionality reduction using PCA can actually increase classification accuracy: for an appropriately chosen reduced dimension, the accuracy of FC100-100-10 on the MNIST dataset increases from 97.47% to 97.52%. This effect probably occurs because the dimensionality reduction can prevent over-fitting. More aggressive dimensionality reduction, however, can lead to steep drops in classification accuracy, which is to be expected since much of the information used for classification is lost.

MNIST data:
Reduced dimension $k$ | FC100-100-10 | Linear SVM
No D.R. | 97.47 | 91.52
784 | 97.32 | 91.54
331 | 97.35 | 91.37
100 | 97.36 | 90.89
80 | 97.25 | 90.64
60 | 97.38 | 90.47
40 | 96.71 | 89.03

HAR data:
Reduced dimension $k$ | Linear SVM
No D.R. | 96.67
561 | 96.57
200 | 96.61
90 | 94.60
70 | 94.37
50 | 92.47
30 | 91.11
TABLE III: Utility values for the dimensionality reduction defense. For the MNIST and HAR datasets, the classification accuracy (%) on the benign test set is provided for various values of the reduced dimension $k$ used for the PCA-based defense, as well as the accuracy without the defense (No D.R.).

V-E Defense using anti-whitening

As described in Section III-E, anti-whitening is a soft approximation of PCA where high-variance components are boosted with respect to the low-variance ones, instead of the latter simply being dropped. This is controlled by the parameter $c$ in the anti-whitening transformation $A = \Lambda^{c/2} U^T$. In Figure 12, the effects of the defense using anti-whitening with $c = 1$, 2 and 3 are shown. At one perturbation magnitude, the defense causes the adversarial success to fall from 41.42% to 17.06%, a 24.36 percentage point fall. At a larger perturbation, the corresponding reduction is from 72.42% to 34.42%, a 38 percentage point decrease. The anti-whitening defense thus performs slightly better than the PCA defense with a comparable parameter.
Effect of anti-whitening on utility: For FC100-100-10, the classification rate on benign data is 97.47% without any defense. Using anti-whitening with $c = 1$, 2 and 3, the utility values are 97.45%, 97.32% and 96.83% respectively. This shows that the anti-whitening defense is slightly better with respect to both security and utility as compared to the PCA defense. The increase in utility is likely due to the fact that dimensions are not dropped and are used to achieve better classification performance.

[Plot: white-box FG attack on FC100-100-10 on MNIST with anti-whitening; adversarial success (%, y-axis) versus perturbation magnitude (x-axis, 0 to 2.5), with curves for no defense and parameter values 1, 2 and 3.]
Fig. 12: Effectiveness of the anti-whitening defense for the MNIST dataset against FG attacks in the white box setting on FC100-100-10.
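As an illustration, the sketch below implements one plausible form of the anti-whitening transform: centered data are expressed in the PCA basis and the i-th component is rescaled by sigma_i raised to the power of the parameter c, so high-variance directions are amplified relative to low-variance ones without any being dropped. The exact transformation used in the defense is the one defined in Section III-E; the form below is an assumption for illustration only.

```python
import numpy as np

def anti_whiten(X_train, X, c=2):
    """Assumed anti-whitening transform (see Section III-E for the paper's
    definition): rotate centered data into the PCA basis, scale the i-th
    component by sigma_i**c, and rotate back to the input space."""
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    sigma = np.sqrt(np.clip(eigvals, 0.0, None))      # per-component standard deviations
    Z = (X - mean) @ eigvecs                          # components in the PCA basis
    return (Z * sigma**c) @ eigvecs.T                 # amplify and rotate back

# The transformed data would then be used to train and query the classifier,
# e.g. FC100-100-10, in place of the raw inputs.
```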

These results highlight the broad applicability of our defense across application domains. It is clear that the effectiveness of our defense is not an artifact of the particular structure of data from the MNIST dataset, and that the intuition for its effect holds across different kinds of data.

VI Discussions, Limitations and Future Work

Even though our defense reduces adversarial success rates and increases the amount of perturbation the adversary must add to achieve a fixed level of misclassification in a number of settings, there are two main areas in which it can be improved in conjunction with other defense mechanisms.

VI-1 Further reductions in adversarial success

While our defense causes significant reductions in adversarial success rates in a variety of settings, there are cases where the adversarial success rate remains non-trivial. In such cases, our defense would likely have to be combined with other defenses, such as adversarial training [16] and ensemble methods for detection [45], to create an ML system secure against evasion attacks. Our defense has the advantage that it can be used in conjunction with a variety of ML classifiers and does not interfere with the operation of other defense mechanisms. Since it increases the amount of perturbation needed to achieve a fixed misclassification rate, it may also aid detection-based defenses.

VI-2 Better data transformations

In certain settings, using PCA for dimensionality reduction may have limited applicability. For example, we found that our PCA-based defense offers only a marginal security improvement for the Papernot-CNN (see Section IX-B for details). This effect likely stems from PCA reducing the amount of local information that the convolutional layers of the CNN can use for classification. A key step in addressing this limitation is to use other dimensionality reduction techniques that could reduce adversarial success to negligible levels and combine better with classifiers such as CNNs. This limitation also prevents us from achieving state-of-the-art accuracy on image datasets like MNIST, since the best classifiers for these datasets use convolutional layers. In future work, we plan to explore techniques such as autoencoders and kernel PCA for designing robust classifiers. For certain problems, it may also be feasible to explicitly optimize for the linear transformation achieving the best utility-security tradeoff, which is another direction we plan to explore.

VII Related work

Previous defenses against adversarial examples have largely focused on specific classifier families or application domains. Further, existing defenses provide improved security only against attacks already in the literature, and it is unclear whether they will remain effective against adversaries with knowledge of their existence, i.e., strategic attacks exploiting weaknesses in the defenses. As a case in point, Papernot et al. [36] demonstrated a defense using distillation of neural networks against the Jacobian-based saliency map attack [35]. However, Carlini et al. [7] showed that a modified attack negated the effects of distillation and made the neural network vulnerable again. We now give an overview of existing defenses.

VII-1 Classifier-specific

Russu et al. [39] propose defenses for SVMs by adding various kinds of regularization. Kantchelian et al. [22] propose defenses against optimal attacks designed specifically for tree-based classifiers. Existing defenses for neural networks [17, 42, 55, 28, 21] make a variety of structural modifications to improve resilience to adversarial examples. These defenses do not readily generalize across classifiers and may still be vulnerable to adversarial examples, as shown by Gu and Rigazio [17].

VII-2 Application-specific

Hendrycks and Gimpel [18] study transforming images from the RGB space to the YUV space to enable better detection of adversarial perturbations by humans and to decrease misclassification rates. They also use whitening to make adversarial perturbations in RGB images more visible to the human eye. The effect of JPG compression on adversarial images has also been studied [14, 12], with the conclusion that it has a small beneficial effect when the perturbations are small. These approaches are restricted to combating evasion attacks on image data and do not generalize across applications. Further, it is unclear whether they are effective against white-box attacks.

VII-3 General defenses

An ensemble of classifiers was used by Smutz and Stavrou [45] to detect evasion attacks by checking for disagreement between the various classifiers. However, an ensemble may still be vulnerable, since adversarial examples generalize across classifiers. Further, Goodfellow et al. [16] show that ensemble methods have limited effectiveness against evasion attacks on neural networks. Goodfellow et al. [16], Tramèr et al. [50] and Mądry et al. [31] re-train on adversarial samples of different types to improve the resilience of neural networks. They all find that adversarial training works, but that it needs high-capacity classifiers to be effective, and that its effectiveness decreases once the perturbation exceeds the one used during training. In our experiments, we find that re-training on adversarial samples has an extremely limited effect on the robustness of Linear SVMs (see Figure 14 in the Appendix); this defense therefore may not be applicable across classifiers and does indeed depend on classifier capacity. Wang et al. [52] use random feature nullification to reduce adversarial success rates for evasion attacks on neural networks; the applicability of this idea across classifiers is not studied. Zhang et al. [54] use adversarial feature selection to increase the robustness of SVMs, finding and retaining features that decrease adversarial success rates. This defense may generalize to other classifiers and is an interesting direction for future work.

Due to the classifier- and dataset-agnostic nature of our defense, it may be combined with existing defenses that have different aims, such as adversarial training, for an even larger improvement in robustness. For example, neural networks may be trained on reduced-dimension samples, and the training process can also incorporate the adversarial loss to further increase the robustness of the network. We plan to explore these directions in future work.

VIII Conclusion

In this paper, we considered the novel use of data transformations such as dimensionality reduction as a defense mechanism against evasion attacks on ML classifiers. Our defenses rely on the insight that (a) linear transformations of data allow access to usually inaccessible security-performance tradeoffs, and (b) training classifiers on reduced dimension data leads to enhanced resilience of ML classifiers (by reducing the weights of less informative and low-variance features). Using empirical evaluation on multiple real-world datasets, we demonstrated a 2x reduction in adversarial success rates across a range of attack strategies (including white-box ones), ML classifiers, and applications. Our defenses have a modest impact on the utility of the classifiers (0.5-2% reduction), and are computationally efficient. Our work thus provides an attractive foundation for countering the threat of evasion attacks.

References

IX Appendix

Fig. 13: Adversarial images of the digit ‘7’ (against a neural network with no defense): the images have been modified using the Fast Gradient Sign attack on neural networks with increasing perturbation magnitudes (from left to right). The perturbation begins to be visible at intermediate magnitudes and is very obvious in the images with the largest perturbations. The attack was carried out on a classifier without any dimensionality reduction.

IX-A Complexity Analysis of PCA Defenses

The defense using PCA adds a one-time overhead of O(nd^2 + d^3) for finding the principal components (for n training samples of dimension d), with the first term arising from the covariance matrix computation and the second from the eigenvector decomposition. There is also a one-time overhead for training a new classifier on the reduced-dimension data; this is less than the time needed to train the original classifier, since the dimensionality of the input data has been reduced. Each subsequent input incurs an O(dk) overhead due to the matrix multiplication needed to project it onto the top k principal components.
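For a rough sense of scale (an illustrative calculation, not a measurement from our experiments), instantiating these costs for MNIST with n = 60,000 training samples, input dimension d = 784, and reduced dimension k = 100 gives approximately

```latex
nd^{2} \approx 3.7 \times 10^{10}, \qquad
d^{3} \approx 4.8 \times 10^{8}, \qquad
dk = 78{,}400,
```

so the one-time cost is dominated by the covariance computation, while the per-input projection adds only about 78,400 multiply-accumulate operations.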

[Plot: adversarially trained Linear SVM on MNIST; adversarial success (%, y-axis) versus perturbation magnitude (x-axis, 0 to 2.5), with curves for no adversarial training and training perturbation values 0.1, 0.5, 1.0, 1.5 and 2.0.]
Fig. 14: Effectiveness of adversarial training for the MNIST dataset against optimal white box attacks on Linear SVMs. The Linear SVM was trained using gradient descent with periodically augmented training sets containing adversarial samples with the specified perturbation values.
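For reference, the procedure described in the caption could be implemented roughly as follows; this is a sketch that assumes a binary task for the attack direction and uses scikit-learn's SGDClassifier with hinge loss as a stand-in for our gradient-descent training of the Linear SVM.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def adversarially_train_linear_svm(X, y, eps, rounds=5, epochs_per_round=5):
    """Sketch of adversarial training for a linear SVM (cf. Fig. 14):
    train with gradient descent, then periodically augment the training
    set with adversarial samples at perturbation eps and continue training."""
    clf = SGDClassifier(loss="hinge")
    classes = np.unique(y)
    X_aug, y_aug = X.copy(), y.copy()
    for _ in range(rounds):
        for _ in range(epochs_per_round):
            clf.partial_fit(X_aug, y_aug, classes=classes)
        # Optimal L2 attack on a binary linear SVM: step along the weight
        # vector, in the direction that reduces each sample's margin.
        w = clf.coef_.ravel()
        w_unit = w / np.linalg.norm(w)
        signs = np.where(y == classes[1], -1.0, 1.0)
        X_adv = X + eps * signs[:, None] * w_unit
        X_aug = np.vstack([X, X_adv])
        y_aug = np.concatenate([y, y])
    return clf
```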

IX-B CNNs

We also run our experiments on a Convolutional Neural Network [15] whose architecture we obtain from Papernot et al. [36]. This CNN's architecture is as follows: 2 convolutional layers with 32 filters each, followed by a max-pooling layer, then another 2 convolutional layers with 64 filters each, followed by a second max-pooling layer. Finally, there are two fully connected layers with 200 neurons each, followed by a softmax output layer with 10 neurons (one for each of the 10 MNIST classes). All neurons in the hidden layers are ReLUs. We call this network Papernot-CNN. It is trained for 50 epochs with a learning rate of 0.1 (adjusted to 0.01 for the last 10 epochs) and momentum of 0.9, using a batch size of 500 samples. On the MNIST test data, the Papernot-CNN network achieves a classification accuracy of 98.91%.
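For concreteness, the sketch below reconstructs this architecture in Keras. The convolutional filter size (3x3 here) and padding are assumptions, since the text specifies only the numbers of layers and filters; the commented training setup follows the stated schedule (SGD with momentum 0.9, learning rate 0.1 dropped to 0.01 for the final 10 epochs, batch size 500).

```python
from tensorflow import keras
from tensorflow.keras import layers

def papernot_cnn(input_shape=(28, 28, 1), num_classes=10):
    """Sketch of the Papernot-CNN architecture described above
    (filter size is an assumption)."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(200, activation="relu"),
        layers.Dense(200, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Training setup following the description (learning-rate drop handled manually):
# model = papernot_cnn()
# model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
#               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=500, epochs=40)
# model.optimizer.learning_rate.assign(0.01)
# model.fit(x_train, y_train, batch_size=500, epochs=10)
```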
