Adversarial Examples as an Input-Fault Tolerance Problem

\subfile{./tex/abstract}
\subfile{./tex/intro}
\subfile{./tex/methodology}
\subfile{./tex/evaluation_perturbations}
\subfile{./tex/evaluation_deformations}
\subfile{./tex/conclusion}

\subfile{./tex/appendix_methodology}
\subfile{./tex/appendix_additional_curves}
\subfile{./tex/appendix_arch}
\subfile{./tex/appendix_advex}
\subfile{./tex/appendix_testset_attack}