Adversarial Examples as an Input-Fault Tolerance Problem

\subfile{./tex/abstract}
\subfile{./tex/intro}
\subfile{./tex/methodology}
\subfile{./tex/evaluation_perturbations}
\subfile{./tex/evaluation_deformations}
\subfile{./tex/conclusion}

\subfile{./tex/appendix_methodology}
\subfile{./tex/appendix_additional_curves}
\subfile{./tex/appendix_arch}
\subfile{./tex/appendix_advex}
\subfile{./tex/appendix_testset_attack}
