Image Transformation can make Neural Networks more robust against Adversarial Examples
Abstract
Neural networks are being applied to many IoT-related tasks with encouraging results. For example, neural networks can precisely detect humans, objects, and animals in surveillance camera footage for security purposes. However, neural networks have recently been found vulnerable to well-designed input samples called adversarial examples, which cause neural networks to misclassify inputs whose perturbations are imperceptible to humans. We found that rotating an adversarial example image can defeat the effect of the adversarial perturbation. Using MNIST digit images as the original images, we first generated adversarial examples against a neural network recognizer, which was completely fooled by the forged examples. We then rotated the adversarial images and fed them to the recognizer, finding that the recognizer regains the correct recognition. Thus, we empirically confirmed that rotating images can protect pattern recognizers based on neural networks from adversarial example attacks.
I. Introduction
Recently, neural networks have achieved very impressive success in a wide range of fields such as computer vision [1] and natural language processing [2]. Neural networks have been used in many tasks at close to human performance, such as image classification [3], sentence classification [4], voice synthesis [5], and object detection [6]. In the Internet of Things (IoT), one key problem is how to reliably process real-world data captured from IoT devices, and neural networks are considered the most promising method for solving it [7]. Despite great success in numerous IoT applications [9], many machine learning applications are raising serious concerns in the field of security and privacy. Recent research has shown that machine learning models are vulnerable to adversarial examples [10]. Adversarial examples are well-designed inputs created by adding adversarial perturbations. Machine learning systems have been developed under the assumption that the environment is benign during both training and testing. Intuitively, the inputs are assumed to be drawn from the same distribution at both training and test time: while test inputs are new and previously unseen during the training process, they at least have the same properties as the inputs used for training. These assumptions are advantageous for building a powerful machine learning model, but they also allow an attacker to alter the distribution at either training time [11] or testing time [12]. Typical training attacks [13] inject adversarial training data into the original training set to wrongly train deep learning models. However, most existing adversarial attacks focus on the testing phase [14]-[16], because such attacks are more reliable: training-phase attacks are harder to implement, since attackers must compromise the machine learning system before executing an attack on it.
For example, an attacker might slightly modify an image [11] to cause it to be recognized incorrectly, or alter the code of an executable file to enable it to bypass a malware detector [17]. To deal with the existence of adversarial samples, many research works have proposed defense mechanisms against adversarial examples. For example, Papernot et al. [18] used the distillation algorithm to defend against adversarial perturbations; however, Carlini et al. [19] pointed out that this method is not effective for improving the robustness of a deep neural network system. Xu et al. [20] proposed feature squeezing for detecting adversarial examples, and there are several other adversarial detection approaches [14]-[16].
In this paper, we study the robustness of neural networks using a very simple but very effective technique: rotation. First, we craft adversarial examples by applying the FGSM algorithm [10] to the MNIST dataset [21]. Afterwards, we apply rotation to those adversarial examples and evaluate our method on a machine learning system [22]. The results show that our method is very effective at making neural networks more robust against adversarial examples.
II. Background and Related Work
In this section, we provide background on adversarial attacks and specify some of the notation used in this paper. We denote by $x$ an original image from a given dataset $X$, and by $y$ the class that $x$ belongs to in the class space $Y$. The ground-truth label is denoted by $y^*$, and $x'$ denotes the adversarial example generated from $x$. Given an input $x$, its feature vector at layer $l$ is $f_l(x)$, its predicted probability for class $y$ is $p(y \mid x)$, and $C(x) = \arg\max_y p(y \mid x)$ is the predicted class of $x$. $J(x, y)$ denotes the loss function of the model given input $x$ and target class $y$.
II-A. Adversarial Attack Methods
The adversarial examples and their genuine counterparts are defined to be indistinguishable to humans. Because it is hard to model human perception, researchers use three popular distance metrics to approximate it, all based on the $L_p$ norm:

$\|x - x'\|_p = \left( \sum_{i=1}^{n} |x_i - x'_i|^p \right)^{1/p}$  (1)
Researchers usually use the $L_0$, $L_2$, and $L_\infty$ metrics to express different aspects of visual significance. $L_0$ counts the number of pixels with different values at corresponding positions in the two images; it describes how many pixels are changed between the two images. $L_2$ measures the Euclidean distance between the two images. $L_\infty$ measures the maximum difference over all pixels at corresponding positions in the two images. There is no agreement on which distance metric is best, so the choice depends on the proposed algorithm.
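As a concrete illustration, the three metrics can be computed over flattened pixel lists in plain Python (the 4-pixel "images" below are made up for the example):

```python
def l0_distance(x, x_adv):
    """Number of pixel positions whose values differ between the two images."""
    return sum(1 for a, b in zip(x, x_adv) if a != b)

def l2_distance(x, x_adv):
    """Euclidean distance between the two flattened images."""
    return sum((a - b) ** 2 for a, b in zip(x, x_adv)) ** 0.5

def linf_distance(x, x_adv):
    """Maximum absolute per-pixel difference."""
    return max(abs(a - b) for a, b in zip(x, x_adv))

# Toy 4-pixel "images": two pixels change, each by 0.1.
x = [0.0, 0.5, 1.0, 0.2]
x_adv = [0.1, 0.5, 0.9, 0.2]
print(l0_distance(x, x_adv))    # 2: two pixels were changed
print(linf_distance(x, x_adv))  # 0.1: the largest single change
```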
Szegedy et al. [23] used a method named L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) to create targeted adversarial examples. This method minimizes the weighted sum of the perturbation size and the loss function while constraining the elements of $x'$ to be valid pixel values.
Goodfellow et al. [10] assumed that adversarial examples can be caused by the cumulative effects of high-dimensional model weights. They proposed a simple attack method, called the Fast Gradient Sign Method (FGSM):

$x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(x, y))$  (2)
where $\epsilon$ denotes the perturbation size for crafting the adversarial example $x'$ from the original input $x$. Given a clean image $x$, this method tries to create a similar image $x'$ in the neighborhood of $x$ that fools the target classifier. It does so by maximizing the loss function $J(x', y)$, the cost of classifying image $x'$ as label $y$. FGSM solves this problem by performing a one-step gradient update from $x$ in the input space with a small perturbation size $\epsilon$. Increasing $\epsilon$ leads to a higher attack success rate, but may also make the adversarial sample more visibly different from the original input. FGSM computes the gradient only once, so it is much more efficient than L-BFGS. The method is very simple, yet fast and powerful for creating adversarial examples, so we use it for the attack phase in this paper. The model used to create adversarial attacks is called the attacking model. When the attacking model is the target model itself, or contains the target model, the resulting attacks are white-box. In this work, we also operate in the white-box manner.
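A minimal sketch of one FGSM step, under a deliberately toy setup: the linear "model" and its hand-computed gradient below are illustrative stand-ins for a real network and automatic differentiation:

```python
import math

def fgsm(x, grad, eps):
    """One FGSM step: x' = x + eps * sign(dJ/dx), element-wise."""
    return [xi + eps * (math.copysign(1.0, g) if g else 0.0)
            for xi, g in zip(x, grad)]

# Toy stand-in for a network: logit = w . x and loss J = -w . x,
# so the gradient dJ/dx is exactly -w (no autodiff needed).
w = [0.5, -1.0, 2.0]
x = [0.2, 0.4, 0.6]
grad = [-wi for wi in w]
x_adv = fgsm(x, grad, eps=0.1)
print(x_adv)  # each pixel moved by exactly eps in the sign direction
```

The single gradient evaluation is what makes FGSM cheap compared to iterative optimizers such as L-BFGS.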
II-B. Defense Methods
Many research works have focused on adversarial training [24, 25] to make a machine learning system resist adversarial attacks. This strategy uses adversarial examples to train a machine learning model in order to make it more robust. Some researchers combine data augmentation with adversarially perturbed data for training [23, 24, 25]. However, this method is more time-consuming than traditional training on only clean images, because extra training data is added to the training set, which clearly takes more time than usual. Another defense strategy is preprocessing-based methods, which try to remove the perturbation noise before feeding data into a machine learning model. Osadchy et al. [26] use filters such as the median filter and the Gaussian low-pass filter to remove the adversarial noise. Meng et al. [27] proposed a two-phase defense model: the first phase detects the adversarial input, and the second reforms the input based on the difference between the manifolds of original and adversarial examples. Another adversarial defense direction is based on gradient masking [25]. This strategy typically results in a model that is very smooth in specific directions and neighborhoods of the training data, which makes it harder for attackers to find gradients indicating good candidate directions in which to perturb the input in a damaging way. Papernot et al. [18] adapt distillation to adversarial defense, using the output of another machine learning model as soft labels to train the target model. Nayebi et al. [29] use saturating networks for robustness to adversarial examples; their loss function is designed to encourage the activations to be in their saturating regime. Gu et al. [30] propose the deep contrastive network, which uses a layer-wise contrastive penalty term to achieve output invariance to input perturbations.
However, with methods based on gradient masking, attackers can train a substitute model: a copy that imitates the defended model by observing the labels that the defended model assigns to inputs chosen carefully by the adversary.
II-C. Neural Network Architecture
In this section, we describe the neural network (NN) architecture used in this paper: LeNet-5 [22], a convolutional network used in our experiments. NNs learn hierarchical representations of high-dimensional inputs to solve machine learning tasks, including classification, detection, and recognition [31]. This network comprises seven layers, not counting the input, all of which contain weights (trainable parameters). The input data is a 32x32-pixel image. As shown in Fig. 1, convolutional layers are labeled Cx, subsampling layers are labeled Sx, and fully connected layers are labeled Fx, where x is the layer index.
Layer C1 is a convolutional layer with six feature maps. Each unit in each feature map is connected to a 5x5 neighborhood in the input. The size of the feature maps is 28x28, which prevents connections from the input from falling off the boundary. Layer C1 contains 122,304 connections. Layer S2 is a subsampling layer with six feature maps of size 14x14; this layer reduces the size of the features in the previous layer. Layer C3 is a convolutional layer with 16 feature maps, in which each unit is connected to several 5x5 neighborhoods at identical locations in a subset of S2's feature maps. This layer contains 151,600 connections. The next layer, S4, is a subsampling layer with 16 feature maps of size 5x5 and has 2,000 connections. Layer C5 is a convolutional layer with 120 feature maps. Each unit in this layer is connected to a 5x5 neighborhood on all 16 of S4's feature maps. This layer contains 48,120 connections. The last layer, F6, has 84 units and is fully connected to the previous layer C5; it contains 10,164 trainable parameters. More formally, the input of each layer is the output of the previous layer multiplied by a set of weights, which are part of the layer's parameters $\theta$. A neural net can thus be viewed as a composition of parameterized functions:
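The feature-map sizes and the C1 connection count quoted above can be checked with a short script (the helper name is ours, assuming 5x5 kernels, stride 1, no padding, and 2x2 subsampling):

```python
def conv_out(size, kernel):
    """Spatial size after a 'valid' (no padding, stride 1) convolution."""
    return size - kernel + 1

# LeNet-5 feature-map sizes, starting from the 32x32 input.
c1 = conv_out(32, 5)  # C1: 6 maps of 28x28
s2 = c1 // 2          # S2: 6 maps of 14x14
c3 = conv_out(s2, 5)  # C3: 16 maps of 10x10
s4 = c3 // 2          # S4: 16 maps of 5x5
c5 = conv_out(s4, 5)  # C5: 120 maps of 1x1

# C1 connections: each of the 6 * 28 * 28 output units connects to a
# 5x5 input neighborhood plus one bias.
c1_connections = 6 * c1 * c1 * (5 * 5 + 1)
print(c1, s2, c3, s4, c5)   # 28 14 10 5 1
print(c1_connections)       # 122304
```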
$f(x) = f_n(\theta_n, f_{n-1}(\theta_{n-1}, \ldots, f_1(\theta_1, x)))$  (3)
where $\theta = \{\theta_1, \ldots, \theta_n\}$ are the parameters learned during the training phase. In the case of classification, the network is given a large collection of known input-label pairs and adjusts its parameters to reduce the label prediction error on these inputs. At test time, the model extrapolates from its training data to make predictions on unseen inputs. Concretely, the FGSM equation (2) described in the previous section can be written as:
$x^* = x + \epsilon \cdot \mathrm{sign}(\nabla_x J_f(x, y))$  (4)
where $f$ is the targeted network, $J_f$ is its cost function, and $y$ is the label of input $x$. An adversarial sample $x^*$ is successfully crafted when misclassified by the convolutional network, i.e., when it satisfies $C(x^*) \neq C(x)$, while its perturbation remains imperceptible to humans.
III. Our System
The goal of adversarial examples is to make a machine learning model misclassify an input by changing the objective function's value based on its gradients along the adversarial direction.
III-A. White-box Targeted Attack
We consider the white-box targeted attack setting, where the attacker has full access to the model type, model architecture, and all trainable parameters, and aims to change the classifier's prediction to a specific target class. To create adversarial samples that are misclassified by the machine learning model, the adversary uses knowledge of the model $f$ and its trainable parameters $\theta$. In this work, we use the FGSM method [10] for crafting adversarial examples. We define a classifier function $f$ that maps image pixel-value vectors to a discrete label, and we assume $f$ has a loss function $J$. For an input image $x$ and target label $y_t$, our system aims to solve the following optimization problem: minimize $\|\delta\|$ subject to $f(x + \delta) = y_t$, where $\delta$ is the perturbation noise added to the original image $x$. We note that this formulation would yield an exact solution in the case of convex losses; however, neural networks are non-convex, so we end up with an approximation.
III-B. Rotation: Affine Transformation
Affine transformations are widely used and play an important role in computer vision [32]. We now define the range of defenses we want to optimize over. For rotation, one kind of affine transformation, we look for an angle $\alpha$ such that rotating the adversarial image by $\alpha$ degrees around its center allows the classifier to escape the adversarial noise. Formally, a pixel at position $(x, y)$ is rotated counterclockwise by multiplying it with the rotation matrix computed from the angle $\alpha$:

$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$  (5)
So the vectors $(x, y)$ and $(x', y')$ have the same magnitude and are separated by the given angle $\alpha$. In our research, we vary the angle $\alpha$ from 0 to 90 degrees, and our experiments show that we can find a best angle that completely defeats the adversarial noise and recovers the correct recognition.
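A minimal sketch of this counterclockwise rotation applied to a single point (the function name is ours; rotating a full image would additionally require resampling and interpolation):

```python
import math

def rotate_point(x, y, angle_deg, cx=0.0, cy=0.0):
    """Rotate the point (x, y) counterclockwise by angle_deg around (cx, cy)."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# Rotating (1, 0) by 90 degrees about the origin gives (0, 1);
# the magnitude of the vector is preserved, as a rotation must.
rx, ry = rotate_point(1.0, 0.0, 90.0)
```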
When we apply the rotation technique to the adversarial examples generated by FGSM, we observe that the adversarial examples fail under rotation. For more intuition, suppose we have original data $x$ and model $f$, so the predicted label is $C(x)$, and the loss function $J(x, y)$ shows us how far the prediction is from the true label $y$. When we apply FGSM to craft adversarial examples, the purpose is to increase the loss function by adding a small adversarial noise to the original input $x$. Recalling the FGSM algorithm, equation (2) turns into the equation below:

$x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(x, C(x)))$  (6)
We aim to solve equation (6) by maximizing the loss function $J(x', C(x))$ with respect to the predicted label $C(x)$ instead of the ground-truth label $y$ in equation (2). The logits $Z(x)$ (the vector of raw predictions) are the output of the neural network before it is fed into the softmax activation function for normalization:

$p(y \mid x) = \mathrm{softmax}(Z(x))_y = \frac{e^{Z_y(x)}}{\sum_{j} e^{Z_j(x)}}$  (7)

$C(x) = \arg\max_y \, p(y \mid x)$  (8)

$J(x, y) = -\log p(y \mid x)$  (9)
By calculating the partial derivative of function (9) with respect to the input, we have:

$\nabla_x J(x, y) = \sum_i \left( \mathrm{softmax}(Z(x))_i - \mathbb{1}[i = y] \right) \frac{\partial Z_i(x)}{\partial x}$  (10)
From equation (10), it is clear that the gradient is influenced by the product of trainable weights and activations. For example, when we have two images with the same label, their activations in any fixed network are similar and the weights of the network are unchanged. Consequently, that product is roughly constant for any given image of the same class. This means the gradient is highly correlated with the true label $y$. Because of this property, when the attacker adds adversarial noise $\delta$, classifying $x + \delta$ becomes a simpler problem than the original problem of classifying $x$, as $x + \delta$ contains extra information from the added noise. However, a small change in the input data (rotating the adversarial image by a particular angle) leads the system to a different decision. In this case, we show that the neural network recognizes the rotated adversarial images as the true label instead of the targeted label. Our system is described in Fig. 1.
IV. Experiments and Results
Based on the understanding of the problem described in the previous sections, we can now apply our approach to defeat adversarial examples and protect a machine learning system. As our experiments below demonstrate, two important points should be noted: a) our approach can remove the effect of adversarial examples; b) our approach keeps the neural network's performance high, and in some cases the network performs equally well or better on rotated adversarial images.
IV-A. Experimental Setup
We use the MNIST dataset [21], a very well-known handwritten digit dataset in both the deep learning and security fields. The dataset includes 50,000 training images, 10,000 validation images, and 10,000 test images. Each 28x28 grayscale image is encoded as a vector of pixel intensities whose real values range from 0 (black) to 1 (white).
IV-B. Results
We randomly select 10 images from the same category and call them the original images. We consider the white-box targeted attack, so we also randomly choose a target label for crafting our targeted adversarial images. In our implementation, the random original input classes are 1 and 6, while the random adversarial target classes are 8 and 9, respectively. In the attacking phase, we run 20 iterations for crafting adversarial examples with a step size of 0.01 (we take gradient steps in the $L_\infty$ norm when using the FGSM algorithm).
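The attack schedule above (20 iterations, step size 0.01, pixels clipped to the valid range) can be sketched as an iterative FGSM loop; `grad_fn` below is a hypothetical stand-in for backpropagation through the target model:

```python
import math

def iterative_fgsm(x, grad_fn, eps=0.01, steps=20, lo=0.0, hi=1.0):
    """Repeat small FGSM steps, clipping pixels to the valid [lo, hi] range.
    grad_fn(x) must return dJ/dx for the attacked model."""
    for _ in range(steps):
        g = grad_fn(x)
        x = [min(hi, max(lo, xi + (eps * math.copysign(1.0, gi) if gi else 0.0)))
             for xi, gi in zip(x, g)]
    return x

# Stand-in gradient that pushes every pixel upward; a real attack would
# backpropagate the loss through the target network instead.
x0 = [0.2, 0.5, 0.95]
x_adv = iterative_fgsm(x0, lambda x: [1.0, 1.0, 1.0])
# 20 steps of size 0.01 move each pixel up by 0.2, clipped at 1.0.
```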
In the first implementation, the 10 random inputs are images of digit 1, and the targeted adversarial label is 8. Fig. 3 shows one adversarial image with target digit 8. The first column shows the adversarial image, and the second describes the classification result after we rotate the adversarial image. The horizontal axis is the angle of rotation, which we vary from 0 to 90 degrees, observing the changing probabilities on the vertical axis. At angle 0, the classification probability for digit 1 is clearly 0. However, this curve rapidly increases with the angle and reaches a peak at a rotation of 39 degrees, where the classification rate for digit 1 is around 99.3%. This result confirms that our approach works well on the adversarial image in this case.
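The angle sweep described above can be sketched as a simple search; `fake_confidence` below is a hypothetical stand-in for rotating the adversarial image and querying the network for the true class's probability:

```python
import math

def best_rotation(confidence_fn, angles=range(0, 91)):
    """Sweep rotation angles 0..90 and return the angle with the highest
    classifier confidence for the defended (true) class."""
    return max(angles, key=confidence_fn)

# Hypothetical confidence curve shaped like the one in Fig. 3:
# near zero at angle 0 and peaking around 39 degrees.
def fake_confidence(angle):
    return math.exp(-((angle - 39) / 15.0) ** 2)

best = best_rotation(fake_confidence)
print(best)  # 39
```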
We also compare the difference in classification accuracy between the rotated adversarial image and the rotated original image. Fig. 2 shows that rotation, as an affine transformation, plays an important role in the image classification problem, as it can drive the classification accuracy to its highest score.
In Fig. 4, the first row shows the original image (digit 1); the classification rate for recognizing it as digit 1 is 99.9% on LeNet-5. The second row shows the adversarial image: the classification rate for digit 1 dropped to 0.22%, while the machine learning system believes the image is digit 8 with probability 97.1%. The last row shows the adversarial image rotated by 39 degrees. In this case, the system recognizes it as digit 1 with 99.3% probability, while the probability for digit 8 dropped from 97.1% to 0.2%. This implementation demonstrates that our approach can completely remove the adversarial example while retaining the system's classification performance.
We also observe the change in classification rate between the original image and the rotated adversarial image. Table I shows the classification accuracy for the original (a), adversarial (b), and rotated adversarial (c) images in our first implementation.
TABLE I: Classification accuracy (%) for the original (a), adversarial (b), and rotated adversarial (c) images; the angle of rotation in degrees is shown in parentheses.

Image index | (a) Original | (b) Adversarial | (c) Rotated (angle) | Change between (a) and (c)
----------- | ------------ | --------------- | ------------------- | --------------------------
7463        | 100          | 96.1            | 99.8 (26)           | 0.2
1773        | 94.2         | 96.9            | 96.2 (20)           | 2.0
9737        | 100          | 96.7            | 99.8 (27)           | 0.2
7738        | 99.5         | 97.2            | 99.5 (38)           | 0
9071        | 92.0         | 97.3            | 48.7 (10)           | 43.3
7399        | 100          | 96.5            | 99.5 (34)           | 0.5
3765        | 100          | 97.6            | 90.4 (15)           | 9.6
6670        | 99.9         | 97.5            | 100 (35)            | 0.1
9896        | 100          | 95.8            | 99.5 (31)           | 0.5
228         | 100          | 96.0            | 99.4 (19)           | 0.6
The first column in Table I is the index of the image in the MNIST dataset, randomly selected for the experiment. We note that when we apply our approach, the neural net recognizes the correct label with only a slight decrease in accuracy compared to the original recognition. In all cases, however, the adversarial examples no longer affect the neural net.
In the second implementation, we randomly choose 10 inputs of digit 6 and set the targeted adversarial label to 9. Fig. 5 shows one of the 10 adversarial images for demonstration. In this case, the rotated adversarial image is also recognized by the system as digit 6, with 100% probability at a rotation angle of 28 degrees.
In Fig. 6, for the given input image of digit 6, the system recognizes it as digit 6 with 100% confidence. In the attacking phase, this classification rate dropped to 0.21% for digit 6, against 95.6% for the targeted digit 9. Surprisingly, after rotating, the classification rate for digit 6 recovers to a perfect 100%. This means our approach completely removes the influence of the adversarial noise and keeps the system at high performance.
V. Conclusion
Adversarial attacks remain a very serious problem for the security and privacy of machine learning systems. Our research provides evidence that neural networks can be made more robust to adversarial attacks. Based on our analysis and experiments, we can now develop a powerful defense method for the adversarial problem. Our experiments on MNIST have not reached the best result in all cases; however, our results already show that our approach leads to a significant increase in the robustness of the neural network. We also believe that these findings will be further explored in our future work.
Acknowledgment
We would like to thank Professor Akira Otsuka for his helpful and valuable comments. This work is supported by Iwasaki Tomomi Scholarship.
References
 [1] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826.
 [2] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in neural information processing systems, 2014, pp. 3104–3112.
 [3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
 [4] Y. Kim, “Convolutional neural networks for sentence classification,” arXiv preprint arXiv:1408.5882, 2014.
 [5] A. Van Den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu, “WaveNet: A generative model for raw audio,” in SSW, 2016, p. 125.
 [6] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
 [7] J. Qiu, Q. Wu, G. Ding, Y. Xu, and S. Feng, “A survey of machine learning for big data processing,” EURASIP Journal on Advances in Signal Processing, vol. 2016, no. 1, p. 67, 2016.
 [8] S. M. Erfani, S. Rajasegarar, S. Karunasekera, and C. Leckie, “High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning,” Pattern Recognition, vol. 58, pp. 121–134, 2016.
 [9] M. S. Mahdavinejad, M. Rezvan, M. Barekatain, P. Adibi, P. Barnaghi, and A. P. Sheth, “Machine learning for internet of things data analysis: A survey,” Digital Communications and Networks, 2017.
 [10] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2015.
 [11] C. Xiao, B. Li, J.Y. Zhu, W. He, M. Liu, and D. Song, “Generating adversarial examples with adversarial networks,” arXiv preprint arXiv:1801.02610, 2018.
 [12] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli, “Evasion attacks against machine learning at test time,” in Joint European conference on machine learning and knowledge discovery in databases. Springer, 2013, pp. 387–402.
 [13] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar, “Adversarial machine learning,” in Proceedings of the 4th ACM workshop on Security and artificial intelligence. ACM, 2011, pp. 43–58.
 [14] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 39–57.
 [15] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint arXiv:1607.02533, 2016.
 [16] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
 [17] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, “Adversarial examples for malware detection,” in European Symposium on Research in Computer Security. Springer, 2017, pp. 62–79.
 [18] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 2016, pp. 582–597.
 [19] N. Carlini and D. Wagner, “Defensive distillation is not robust to adversarial examples,” arXiv preprint arXiv:1607.04311, 2016.
 [20] W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” arXiv preprint arXiv:1704.01155, 2017.
 [21] Y. LeCun, C. Cortes, and C. Burges, “Mnist handwritten digit database,” AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, vol. 2, 2010.
 [22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
 [23] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
 [24] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint arXiv:1611.01236, 2016.
 [25] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” arXiv preprint arXiv:1705.07204, 2017.
 [26] M. Osadchy, J. Hernandez-Castro, S. Gibson, O. Dunkelman, and D. Pérez-Cabo, “No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples, with applications to CAPTCHA generation,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2640–2653, 2017.
 [27] D. Meng and H. Chen, “MagNet: a two-pronged defense against adversarial examples,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 135–147.
 [28] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 2017, pp. 506–519.
 [29] A. Nayebi and S. Ganguli, “Biologically inspired protection of deep networks from adversarial attacks,” arXiv preprint arXiv:1703.09202, 2017.
 [30] S. Gu and L. Rigazio, “Towards deep neural network architectures robust to adversarial examples,” arXiv preprint arXiv:1412.5068, 2014.
 [31] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, p. 436, 2015.
 [32] M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” in Advances in neural information processing systems, 2015, pp. 2017–2025.