You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

Dinghuai Zhang, Tianyuan Zhang, Yiping Lu
Peking University
{zhangdinghuai, 1600012888, luyiping9712}@pku.edu.cn
Zhanxing Zhu
School of Mathematical Sciences, Peking University
Center for Data Science, Peking University
Beijing Institute of Big Data Research
zhanxing.zhu@pku.edu.cn
Bin Dong
Beijing International Center for Mathematical Research, Peking University
Center for Data Science, Peking University
Beijing Institute of Big Data Research
dongbin@math.pku.edu.cn
Equal Contribution

Abstract

Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which raises a serious robustness issue for deep networks. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of generating adversarial examples, typically far greater than that of the network training itself. This leads to an unbearable overall computational cost for adversarial training. In this paper, we show that adversarial training can be cast as a discrete time differential game. Through analyzing the Pontryagin's Maximum Principle (PMP) of the problem, we observe that the adversary update is only coupled with the parameters of the first layer of the network. This inspires us to restrict most of the forward and back propagation within the first layer of the network during adversary updates, which effectively reduces the total number of full forward and backward propagations to only one for each group of adversary updates. We therefore refer to this algorithm as YOPO (You Only Propagate Once). Numerical experiments demonstrate that YOPO can achieve comparable defense accuracy with approximately 1/5 to 1/4 of the GPU time of the projected gradient descent (PGD) algorithm kurakin2016adversarial (). Our codes are available at https://github.com/a1600012888/YOPO-You-Only-Propagate-Once


Preprint. Under review.

1 Introduction

Deep neural networks achieve state-of-the-art performance on many tasks lecun2015deep (); goodfellow2016deep (). However, recent works show that deep networks are often sensitive to adversarial perturbations szegedy2013intriguing (); moosavi2016deepfool (); zugner2018adversarial (), i.e., changes to the input that are imperceptible to humans yet cause the neural network to output an incorrect prediction. This poses significant concerns when applying deep neural networks to safety-critical problems such as autonomous driving and medical domains. To effectively defend against adversarial attacks, madry2018towards () proposed adversarial training, which can be formulated as a robust optimization problem wald1939contributions ():

$$\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\|\eta\| \le \epsilon} \ell\big(f(x+\eta;\theta),\, y\big) \right], \qquad (1)$$

where $\theta$ is the network parameter, $\eta$ is the adversarial perturbation, and $(x, y)$ is a pair of data and label drawn from a certain distribution $\mathcal{D}$. The magnitude of the adversarial perturbation is restricted by $\epsilon > 0$. For a given pair $(x, y)$, we refer to the value of the inner maximization of (1), i.e. $\max_{\|\eta\|\le\epsilon} \ell(f(x+\eta;\theta), y)$, as the adversarial loss, which depends on $(x, y)$.

A major issue of current adversarial training methods is their significantly high computational cost. In adversarial training, we need to solve the inner maximization, i.e. obtain the "optimal" adversarial attack on the input, in every iteration. Such an "optimal" adversary is usually obtained using multi-step gradient descent, and thus the total time for learning a model using standard adversarial training is much greater than that of standard training. For example, applying 40 inner iterations of projected gradient descent (PGD kurakin2016adversarial ()) to obtain the adversarial examples makes the computational cost of solving problem (1) about 40 times that of regular training.
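As a toy illustration of this inner maximization, the sketch below runs multi-step PGD against a simple logistic model in NumPy. The model, loss, step size, and sign-gradient ascent step are illustrative assumptions, not the experimental setup of this paper.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.1, steps=40):
    """Multi-step PGD: gradient *ascent* on the loss w.r.t. the input,
    projected back onto the L-infinity eps-ball around the clean input x."""
    eta = np.zeros_like(x)                               # start from the clean input
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ (x + eta) + b)))   # forward pass (sigmoid)
        grad_x = (p - y) * w                             # d(logistic loss)/d(input)
        eta = eta + alpha * np.sign(grad_x)              # ascent step on the input
        eta = np.clip(eta, -eps, eps)                    # project onto the eps-ball
    return x + eta

def logistic_loss(x, y, w, b):
    z = w @ x + b
    return np.log1p(np.exp(-z)) if y == 1.0 else np.log1p(np.exp(z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.1
x, y = rng.normal(size=3), 1.0
x_adv = pgd_attack(x, y, w, b)
print(logistic_loss(x_adv, y, w, b) >= logistic_loss(x, y, w, b))  # True
```

Each of the 40 ascent steps requires one forward and one backward pass through the model, which is exactly the overhead discussed above.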

Figure 1: Our proposed YOPO exploits the structure of the neural network. To alleviate the heavy computational cost, YOPO focuses the calculation of the adversary on the first layer.

The main objective of this paper is to reduce the computational burden of adversarial training by limiting the number of forward and backward propagations without hurting the performance of the trained network. To this end, we exploit the structure of the min-max objective when the model is a deep neural network: we formulate the adversarial training problem (1) as a differential game, from which we derive the Pontryagin's Maximum Principle (PMP) of the problem.

From the PMP, we discover a key fact that the adversarial perturbation is only coupled with the weights of the first layer. This motivates us to propose a novel adversarial training strategy by decoupling the adversary update from the training of the network parameters. This effectively reduces the total number of full forward and backward propagations to only one for each group of adversary updates, significantly lowering the overall computational cost without hampering the performance of the trained network. We name this new adversarial training algorithm YOPO (You Only Propagate Once). Our numerical experiments show that YOPO achieves approximately a 4 to 5 times speedup over the original PGD adversarial training with comparable accuracy on MNIST/CIFAR10. Furthermore, we apply our algorithm to a recently proposed min-max optimization objective, "TRADES" zhang2019theoretically (), and achieve better clean and robust accuracy within half of the time TRADES needs.

1.1 Related Works

Adversarial Defense.

To improve the robustness of neural networks to adversarial examples, many defense strategies and models have been proposed, such as adversarial training madry2018towards (), orthogonal regularization cisse2017parseval (); lin2018defensive (), Bayesian methods ye2018bayesian (), TRADES zhang2019theoretically (), rejecting adversarial examples xu2017feature (), Jacobian regularization jakubovitz2018improving (); qian2018l2 (), generative model based defenses ilyas2017robust (); sun2019enhancing (), pixel defenses song2017pixeldefend (); luo2019random (), the ordinary differential equation (ODE) viewpoint zhang2019towards (), ensembles via an intriguing stochastic differential equation perspective wang2018enresnet (), and feature denoising xie2018feature (); svoboda2018peernets (), etc. Among all these approaches, adversarial training and its variants tend to be the most effective, since they largely avoid the obfuscated gradient problem athalye2018obfuscated (). Therefore, in this paper, we choose adversarial training to achieve model robustness.

Neural ODEs.

Recent works have built up the relationship between ordinary differential equations and neural networks weinan2017proposal (); lu2017beyond (); haber2017stable (); chen2018neural (); zhang2018dynamically (); thorpe2018deep (); sonoda2019transport (). Observing that each residual block of ResNet can be written as $x_{t+1} = x_t + f(x_t, \theta_t)$, i.e. one step of the forward Euler method approximating the ODE $\dot{x} = f(x, \theta)$, li2017maximum (); weinan2019mean () proposed an optimal control framework for deep learning, and chen2018neural (); li2017maximum (); pmlr-v80-li18b () utilized the adjoint equation and the maximum principle to train neural networks.
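This residual-block/forward-Euler correspondence can be checked directly in a few lines; the tanh layer below is an arbitrary illustrative choice of the transformation f.

```python
import numpy as np

def f(x, theta):
    # one layer's nonlinear transformation (illustrative choice)
    return np.tanh(theta @ x)

def resnet_forward(x, thetas):
    # stacked residual blocks: x_{t+1} = x_t + f(x_t, theta_t)
    for theta in thetas:
        x = x + f(x, theta)
    return x

def forward_euler(x, thetas, h=1.0):
    # explicit Euler steps for the ODE dx/dt = f(x, theta), step size h
    for theta in thetas:
        x = x + h * f(x, theta)
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)
thetas = [0.1 * rng.normal(size=(4, 4)) for _ in range(5)]
# with step size h = 1, the two computations coincide exactly
print(np.allclose(resnet_forward(x0, thetas), forward_euler(x0, thetas)))  # True
```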

Decouple Training.

Training neural networks requires forward and backward propagation in a sequential manner. Different ways have been proposed to decouple this sequential process via parallelization, including ADMM taylor2016training (), synthetic gradients jaderberg2017decoupled (), delayed gradients huo2018decoupled (), and lifted machines askari2018lifted (); li2018lifted (); gu2018fenchel (). Our work can also be understood as a decoupling method based on a splitting technique. However, we do not attempt to decouple the gradient w.r.t. the network parameters, but the adversary update instead.

1.2 Contribution

  • To the best of our knowledge, this is the first attempt to design an NN-specific algorithm for adversarial defense. To achieve this, we recast the adversarial training problem as a discrete time differential game. From optimal control theory, we derive an optimality condition, i.e. the Pontryagin's Maximum Principle, for the differential game.

  • Through the PMP, we observe that the adversarial perturbation is only coupled with the first layer of the neural network. The PMP motivates a new adversarial training algorithm, YOPO: we split the adversary computation from the weight updates, and the adversary computation is focused on the first layer. Relations between YOPO and the original PGD algorithm are discussed.

  • We achieve about a 4 to 5 times speedup over the original PGD training with comparable results on MNIST/CIFAR10. Combining YOPO with TRADES zhang2019theoretically (), we achieve both higher clean and robust accuracy within less than half of the time TRADES needs.

1.3 Organization

This paper is organized as follows. In Section 2, we formulate the robust optimization for neural network adversarial training as a differential game and propose the gradient based YOPO. In Section 3, we derive the PMP of the differential game, study the relationship between the PMP and the back-propagation based gradient descent methods, and propose a general version of YOPO. Finally, all the experimental details and results are given in Section 4.

2 Differential Game Formulation and Gradient Based YOPO

2.1 The Optimal Control Perspective and Differential Game

Inspired by the link between deep learning and optimal control pmlr-v80-li18b (), we formulate the robust optimization (1) as a differential game evans2005introduction (). A two-player, zero-sum differential game is a game where each player controls a dynamical system, and one tries to maximize, the other to minimize, a payoff functional. In the context of adversarial training, one player is the neural network, which controls the weights of the network to fit the label, while the other is the adversary that is dedicated to producing a false prediction by modifying the input.

The robust optimization problem (1) can be written as a differential game as follows:

$$\min_{\theta} \max_{\|\eta_i\| \le \epsilon} J(\theta, \eta) := \frac{1}{N} \sum_{i=1}^{N} \ell(x_{i,T},\, y_i) + \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T-1} R_t(x_{i,t};\, \theta_t), \qquad (2)$$

subject to

$$x_{i,1} = f_0(x_{i,0} + \eta_i, \theta_0), \quad x_{i,t+1} = f_t(x_{i,t}, \theta_t), \quad t = 1, \dots, T-1, \quad i = 1, \dots, N.$$

Here, the dynamics $\{f_t\}$ represent a deep neural network, $T$ denotes the number of layers, $\theta_t \in \Theta_t$ denotes the parameters in layer $t$ (denote $\theta = \{\theta_t\}_{t=0}^{T-1}$), the function $f_t: \mathbb{R}^{d_t} \times \Theta_t \to \mathbb{R}^{d_{t+1}}$ is a nonlinear transformation for one layer of the neural network, where $d_t$ is the dimension of the $t$-th feature map, and $\{(x_{i,0}, y_i)\}_{i=1}^{N}$ is the training dataset. The variable $\eta_i$ is the adversarial perturbation and we constrain it in an $\epsilon$-ball. The function $\ell$ is a data fitting loss and $R_t$ is the regularization on the weights $\theta_t$, such as the $\ell_2$-norm. By casting the problem of adversarial training as the differential game (2), we regard $\theta$ and $\eta$ as two competing players, each trying to minimize/maximize the loss function $J(\theta, \eta)$ respectively.

2.2 Gradient Based YOPO

The Pontryagin’s Maximum Principle (PMP) is a fundamental tool in optimal control that characterizes optimal solutions of the corresponding control problem evans2005introduction (). PMP is a rather general framework that inspires a variety of optimization algorithms. In this paper, we will derive the PMP of the differential game (2), which motivates the proposed YOPO in its most general form. However, to better illustrate the essential idea of YOPO and to better address its relations with existing methods such as PGD, we present a special case of YOPO in this section based on gradient descent/ascent. We postpone the introduction of PMP and the general version of YOPO to Section 3.

Let us first rewrite the original robust optimization problem (1) (in a mini-batch form) as

$$\min_{\theta} \sum_{i=1}^{B} \max_{\|\eta_i\| \le \epsilon} \ell\big(g_{\tilde\theta}(f_0(x_i + \eta_i, \theta_0)),\, y_i\big),$$

where $f_0$ denotes the first layer, $g_{\tilde\theta}$ denotes the network without the first layer, and $B$ is the batch size. Here $\tilde\theta$ is defined as $\{\theta_1, \dots, \theta_{T-1}\}$. For simplicity we omit the regularization term $R_t$.

The simplest way to solve the problem is to perform gradient ascent on the input data and gradient descent on the weights of the neural network, as shown below. This alternating optimization algorithm is essentially the popular PGD adversarial training madry2018towards (). We summarize PGD-$r$ (for each update of $\theta$), i.e. performing $r$ iterations of gradient ascent for the inner maximization, as follows:

• For $s = 0, 1, \dots, r-1$, perform
$$\eta_i^{s+1} = \eta_i^{s} + \alpha_1 \nabla_{\eta} \ell\big(g_{\tilde\theta}(f_0(x_i + \eta_i^{s}, \theta_0)),\, y_i\big),$$
where, by the chain rule,
$$\nabla_{\eta} \ell = \nabla_{g_{\tilde\theta}} \ell \cdot \nabla_{f_0} g_{\tilde\theta} \cdot \nabla_{\eta} f_0.$$
• Perform the SGD weight update (momentum SGD can also be used here):
$$\theta \leftarrow \theta - \alpha_2 \nabla_{\theta} \sum_{i=1}^{B} \ell\big(g_{\tilde\theta}(f_0(x_i + \eta_i^{r}, \theta_0)),\, y_i\big).$$

Note that this method conducts $r$ sweeps of forward and backward propagation for each update of $\theta$. This is the main reason why adversarial training using PGD-type algorithms can be very slow.

To reduce the total number of forward and backward propagations, we introduce a slack variable

$$p = \nabla_{g_{\tilde\theta}} \ell\big(g_{\tilde\theta}(f_0(x_i + \eta_i, \theta_0)),\, y_i\big) \cdot \nabla_{f_0} g_{\tilde\theta}\big(f_0(x_i + \eta_i, \theta_0)\big)$$

and freeze it as a constant within the inner loop of the adversary update. The modified algorithm is given below, and we shall refer to it as YOPO-$m$-$n$:

• Initialize $\eta_i^{1,0}$ for each input $x_i$. For $j = 1, \dots, m$:
  – Calculate the slack variable
  $$p = \nabla_{g_{\tilde\theta}} \ell\big(g_{\tilde\theta}(f_0(x_i + \eta_i^{j,0}, \theta_0)),\, y_i\big) \cdot \nabla_{f_0} g_{\tilde\theta}\big(f_0(x_i + \eta_i^{j,0}, \theta_0)\big).$$
  – Update the adversary: for $s = 0, \dots, n-1$ and fixed $p$,
  $$\eta_i^{j,s+1} = \eta_i^{j,s} + \alpha_1\, p \cdot \nabla_{\eta} f_0(x_i + \eta_i^{j,s}, \theta_0).$$
  – Let $\eta_i^{j+1,0} = \eta_i^{j,n}$.
• Calculate the weight update $U = \nabla_{\theta} \sum_{i=1}^{B} \ell\big(g_{\tilde\theta}(f_0(x_i + \eta_i^{m,n}, \theta_0)),\, y_i\big)$ and update the weight $\theta \leftarrow \theta - \alpha_2 U$. (Momentum SGD can also be used here.)

Intuitively, YOPO freezes the values of the derivatives of the network beyond the first layer during the inner loop of the adversary updates. Figure 2 shows a conceptual comparison between YOPO and PGD. YOPO-$m$-$n$ accesses the data $mn$ times while requiring only $m$ full forward and backward propagations. PGD-$r$, on the other hand, propagates the data $r$ times with full forward and backward propagation. As one can see, YOPO-$m$-$n$ has the flexibility of increasing $n$ and reducing $m$ to achieve approximately the same level of attack but with much less computational cost. For example, suppose one applies PGD-10 (i.e. 10 steps of gradient ascent for solving the inner maximization) to calculate the adversary. An alternative is YOPO-5-2, which also accesses the data 10 times but performs only 5 full forward propagations. Empirically, YOPO-$m$-$n$ achieves comparable results while only requiring $mn$ to be a little larger than $r$.
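The decoupling above can be sketched on a tiny two-layer model: the slack variable p is computed once per outer iteration with a full propagation, then frozen while n cheap steps touch only the first layer. The architecture, step sizes, and loop counts (m = 3, n = 5) below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W0 = 0.5 * rng.normal(size=(4, 3))   # first layer f_0(v) = tanh(W0 v)
w1 = rng.normal(size=4)              # "rest of the network" g (one linear layer)
x, y = rng.normal(size=3), 1.0
eps, alpha, m, n = 0.3, 0.05, 3, 5   # squared loss 0.5*(w1.h - y)^2

def first_layer(v):
    return np.tanh(W0 @ v)

eta = np.zeros(3)
for j in range(m):                            # m full propagations in total
    h = first_layer(x + eta)
    # slack variable: gradient of the loss w.r.t. the first layer's output,
    # obtained from one full forward/backward pass, then FROZEN
    p = w1 * (w1 @ h - y)
    for s in range(n):                        # n cheap first-layer-only steps
        h = first_layer(x + eta)
        grad_eta = W0.T @ ((1 - h ** 2) * p)  # (d f_0 / d eta)^T p, no full backprop
        eta = np.clip(eta + alpha * np.sign(grad_eta), -eps, eps)

# the perturbation stays inside the eps-ball by construction
print(bool(np.all(np.abs(eta) <= eps)))  # True
```

This accesses the data mn = 15 times but performs only m = 3 full propagations, versus 15 full propagations for a PGD attack of the same length.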

Another benefit of YOPO is that we take full advantage of every forward and backward propagation to update the weights, i.e. the intermediate perturbations are not wasted as in PGD-$r$. This allows us to perform multiple weight updates per iteration, which potentially drives YOPO to converge faster in terms of the number of epochs. Combining the two factors, YOPO can significantly accelerate the standard PGD adversarial training.

We would like to point out a concurrent paper shafahi2019adversarial () that is related to YOPO. Their proposed method, called "Free-$m$", can also significantly speed up adversarial training. In fact, Free-$m$ is essentially YOPO-$m$-1, except that YOPO-$m$-1 delays the weight update until the whole mini-batch is processed, in order to use momentum properly. (Momentum should be accumulated between mini-batches rather than between different adversarial examples from one mini-batch; otherwise overfitting becomes a serious problem.)

Figure 2: Pipeline of YOPO-$m$-$n$ described in Algorithm 1. The yellow and olive blocks represent feature maps while the orange blocks represent the gradients of the loss w.r.t. the feature maps of each layer.

3 The Pontryagin’s Maximum Principle for Adversarial Training

In this section, we present the PMP of the discrete time differential game (2). From the PMP, we can observe that the adversary update and its associated back-propagation process can be decoupled. Furthermore, back-propagation based gradient descent can be understood as an iterative algorithm solving the PMP, so the version of YOPO presented in the previous section can be viewed as an algorithm solving the PMP. However, the PMP facilitates a much wider class of algorithms than gradient descent li2017maximum (). Therefore, we will present a general version of YOPO based on the PMP for the discrete differential game.

3.1 PMP

The Pontryagin type of maximum principle pontryagin1987mathematical (); boltyanskii1960theory () provides necessary conditions for optimality with a layer-wise maximization requirement on the Hamiltonian function. For each layer $t \in [T] := \{0, 1, \dots, T-1\}$, we define the Hamiltonian function $H_t: \mathbb{R}^{d_t} \times \mathbb{R}^{d_{t+1}} \times \Theta_t \to \mathbb{R}$ as

$$H_t(x, p, \theta_t) = p \cdot f_t(x, \theta_t) - \frac{1}{B} R_t(x; \theta_t).$$

The PMP for continuous time differential game has been well studied in the literature  evans2005introduction (). Here, we present the PMP for our discrete time differential game (2).

Theorem 1.

(PMP for adversarial training) Assume $\ell$ is twice continuously differentiable, and $f_t(\cdot, \theta_t), R_t(\cdot; \theta_t)$ are twice continuously differentiable with respect to $x$; $f_t, R_t$ together with their partial derivatives are uniformly bounded in $t$ and $\theta$, and the sets $\{f_t(x, \theta): \theta \in \Theta_t\}$ and $\{R_t(x; \theta): \theta \in \Theta_t\}$ are convex for every $t$ and $x \in \mathbb{R}^{d_t}$. Denote $\theta^*$ as the solution of problem (2). Then there exist co-state processes $p_i^* := \{p_{i,t}^*: t \in [T]\}$ such that the following holds for all $t \in [T]$ and $i \in [B]$:

$$x_{i,t+1}^* = \nabla_p H_t\big(x_{i,t}^*, p_{i,t+1}^*, \theta_t^*\big), \quad x_{i,0}^* = x_{i,0} + \eta_i^*, \qquad (3)$$
$$p_{i,t}^* = \nabla_x H_t\big(x_{i,t}^*, p_{i,t+1}^*, \theta_t^*\big), \quad p_{i,T}^* = -\frac{1}{B} \nabla \ell\big(x_{i,T}^*, y_i\big). \qquad (4)$$

At the same time, the parameters of the first layer $\theta_0^*$ and the optimal adversarial perturbation $\eta_i^*$ satisfy

$$\sum_{i=1}^{B} H_0\big(x_{i,0} + \eta_i^*, p_{i,1}^*, \theta_0^*\big) \ge \sum_{i=1}^{B} H_0\big(x_{i,0} + \eta_i^*, p_{i,1}^*, \theta_0\big), \quad \forall\, \theta_0 \in \Theta_0, \qquad (5)$$
$$\sum_{i=1}^{B} H_0\big(x_{i,0} + \eta_i^*, p_{i,1}^*, \theta_0^*\big) \le \sum_{i=1}^{B} H_0\big(x_{i,0} + \eta_i, p_{i,1}^*, \theta_0^*\big), \quad \forall\, \|\eta_i\| \le \epsilon, \qquad (6)$$

and the parameters of the other layers maximize the Hamiltonian functions

$$\sum_{i=1}^{B} H_t\big(x_{i,t}^*, p_{i,t+1}^*, \theta_t^*\big) \ge \sum_{i=1}^{B} H_t\big(x_{i,t}^*, p_{i,t+1}^*, \theta_t\big), \quad \forall\, \theta_t \in \Theta_t,\ t = 1, \dots, T-1. \qquad (7)$$
Proof.

Proof is in the supplementary materials. ∎

From the theorem, we can observe that the adversary $\eta$ is only coupled with the parameters of the first layer $\theta_0$. This key observation inspires the design of YOPO.

3.2 PMP and Back-Propagation Based Gradient Descent

The classical back-propagation based gradient descent algorithm lecun1988theoretical () can be viewed as an algorithm attempting to solve the PMP. Without loss of generality, we can let the regularization term $R_t = 0$, since we can simply add an extra dynamic $z_t$ to evaluate the regularization term, i.e.

$$z_{t+1} = z_t + R_t(x_t; \theta_t), \quad z_0 = 0.$$

We append $z_t$ to $x_t$ to study the dynamics of a new $(d_t + 1)$-dimensional vector and change the loss $\ell(x_T, y)$ to $\ell(x_T, y) + z_T$. The relationship between the PMP and the back-propagation based gradient descent method was first observed by Li et al. li2017maximum (). They showed that the forward dynamical system Eq. (3) is the same as the neural network forward propagation, and the backward dynamical system Eq. (4) is the back-propagation, which is formally described by the following lemma.

Lemma 1.

The co-state process defined by Eq. (4) satisfies $p_{i,t}^* = -\frac{1}{B} \nabla_{x_{i,t}} \ell\big(x_{i,T}^*, y_i\big)$, i.e. it is, up to the factor $-1/B$, the gradient of the loss with respect to the $t$-th feature map, which is exactly the quantity computed by back-propagation.

To solve the maximization of the Hamiltonian, a simple way is gradient ascent:

$$\theta_t \leftarrow \theta_t + \alpha \sum_{i=1}^{B} \nabla_{\theta} H_t\big(x_{i,t}^*, p_{i,t+1}^*, \theta_t\big). \qquad (8)$$
Theorem 2.

The update (8) is equivalent to the gradient descent method for training networks li2017maximum (); pmlr-v80-li18b ().
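Theorem 2 can be verified numerically on a small example: running the co-state recursion of Eq. (4) and forming the Hamiltonian's parameter gradient reproduces, up to sign, the gradient of the loss obtained by finite differences. The scalar tanh network and the Hamiltonian H_t(x, p, θ) = p · f_t(x, θ) with no regularizer are simplifying assumptions for illustration.

```python
import numpy as np

# Scalar toy network: x_{t+1} = f_t(x_t, theta_t) = tanh(theta_t * x_t),
# terminal loss l = 0.5*(x_T - y)^2, Hamiltonian H_t(x, p, theta) = p * f_t(x, theta).
thetas = [0.8, -1.3, 0.6]
x0, y = 0.5, 1.0
T = len(thetas)

def forward(ths):
    xs = [x0]
    for th in ths:
        xs.append(np.tanh(th * xs[-1]))
    return xs

def loss(ths):
    return 0.5 * (forward(ths)[-1] - y) ** 2

xs = forward(thetas)

# Co-state recursion (back-propagation in disguise):
# p_T = -dl/dx_T, and p_t = dH_t/dx = p_{t+1} * (1 - x_{t+1}^2) * theta_t.
p = [None] * (T + 1)
p[T] = -(xs[T] - y)
for t in reversed(range(T)):
    p[t] = p[t + 1] * (1 - xs[t + 1] ** 2) * thetas[t]

# dH_t/dtheta_t = p_{t+1} * (1 - x_{t+1}^2) * x_t should equal -dl/dtheta_t,
# so gradient ASCENT on the Hamiltonian is gradient DESCENT on the loss.
ok = True
for t in range(T):
    dH = p[t + 1] * (1 - xs[t + 1] ** 2) * xs[t]
    perturbed = list(thetas)
    perturbed[t] += 1e-6
    dl = (loss(perturbed) - loss(thetas)) / 1e-6   # finite-difference dl/dtheta_t
    ok = ok and abs(dH + dl) < 1e-4
print(bool(ok))  # True
```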

3.3 YOPO from PMP’s View Point

Based on the relationship between back-propagation and the Pontryagin's Maximum Principle, in this section we provide a new understanding of YOPO: solving the PMP of the differential game. Observe that, in the PMP, the adversary $\eta$ is only coupled with the weights of the first layer $\theta_0$. Thus we can update the adversary by minimizing the Hamiltonian function instead of directly attacking the loss function, as described in Algorithm 1.

For YOPO-$m$-$n$, to approximate the exact minimization of the Hamiltonian, we perform $n$ gradient descent steps to update the adversary. Furthermore, in order to make the calculation of the adversary more accurate, we iteratively pass each data point $m$ times. Besides, the network weights are optimized via gradient ascent on the Hamiltonian, resulting in the gradient based YOPO proposed in Section 2.2.

  Randomly initialize the network parameters or use a pre-trained network.
  repeat
     Randomly select a mini-batch $\{(x_1, y_1), \dots, (x_B, y_B)\}$ from the training set.
     Initialize $\eta_i^{1,0}$, $i = 1, \dots, B$, by sampling from a uniform distribution on $[-\epsilon, \epsilon]$
     for $j = 1$ to $m$ do
        $x_{i,1} = f_0(x_i + \eta_i^{j,0}, \theta_0)$; obtain the co-states $p_{i,t}$, $t = T, \dots, 1$, by one full forward and backward propagation via Eq. (4)
        for $s = 0$ to $n - 1$ do
           $\eta_i^{j,s+1} = \eta_i^{j,s} + \alpha_1\, p_{i,1} \cdot \nabla_\eta f_0(x_i + \eta_i^{j,s}, \theta_0)$, projected onto the $\epsilon$-ball, $i = 1, \dots, B$
        end for
        $\eta_i^{j+1,0} = \eta_i^{j,n}$, $i = 1, \dots, B$
        for $t = 0$ to $T - 1$ do
           $U_t \leftarrow U_t + \sum_{i=1}^{B} \nabla_\theta H_t(x_{i,t}, p_{i,t+1}, \theta_t)$
        end for
     end for
     for $t = T - 1$ to $0$ do
        $\theta_t \leftarrow \theta_t + \alpha_2 U_t$ (momentum SGD can also be used here)
     end for
  until Convergence
Algorithm 1 YOPO (You Only Propagate Once)

4 Experiments

4.1 YOPO for Adversarial Training

To demonstrate the effectiveness of YOPO, we conduct experiments on MNIST and CIFAR10. We find that models trained with YOPO have performance comparable to that of PGD adversarial training, but at a much lower computational cost. We also compare our method with the concurrent method "Free" shafahi2019adversarial (), and the results show that our algorithm achieves comparable performance with around 2/3 of the GPU time of their official implementation.

(a) "Small CNN" zhang2019theoretically () results on MNIST
(b) PreAct-Res18 Results on CIFAR10
Figure 3: Performance w.r.t. training time

MNIST.

We achieve comparable results with the best in [5] within 250 seconds, while it takes PGD-40 more than 1250 seconds to reach the same level. The accuracy-time curve is shown in Figure 3(a). Naively reducing the number of back-propagations from PGD-40 to PGD-10 harms the robustness, as can be seen in Table 1. Experiment details are given in the supplementary materials.

Training Methods Clean Data PGD-40 Attack CW Attack
PGD-5 madry2018towards () 99.43% 42.39% 77.04%
PGD-10 madry2018towards () 99.53% 77.00% 82.00%
PGD-40 madry2018towards () 99.49% 96.56% 93.52%
YOPO-5-10 (Ours) 99.46% 96.27% 93.56%
Table 1: Results of MNIST robust training. YOPO-5-10 achieves the same state-of-the-art result as PGD-40. Notice that for every epoch, PGD-5 and YOPO-5-3 have approximately the same computational cost.

CIFAR10.

madry2018towards () performs a 7-step PGD to generate adversaries during training. As a comparison, we test YOPO-3-5 and YOPO-5-3 with a step size of 2/255. We experiment with two different network architectures.

Under PreAct-Res18, YOPO-5-3 achieves robust accuracy comparable to madry2018towards () with around half the computation for every epoch. The accuracy-time curve is shown in Figure 3(b). The quantitative results can be seen in Table 2. Experiment details can be seen in the supplementary materials.

Training Methods Clean Data PGD-20 Attack CW Attack
PGD-3 madry2018towards () 88.19% 32.51% 54.65%
PGD-5 madry2018towards () 86.63% 37.78% 57.71%
PGD-10 madry2018towards () 84.82% 41.61% 58.88%
YOPO-3-5 (Ours) 82.14% 38.18% 55.73%
YOPO-5-3 (Ours) 83.99% 44.72% 59.77%
Table 2: Results of PreAct-Res18 for CIFAR10. Note that for every epoch, PGD-3 and YOPO-3-5 have approximately the same computational cost, and so do PGD-5 and YOPO-5-3.

As for Wide ResNet34, YOPO-5-3 still achieves a similar acceleration over PGD-10, as shown in Table 3. We also test PGD-3/5 to show that naively reducing the number of backward propagations for this min-max problem madry2018towards () cannot produce comparable results within the same computation time as YOPO. Meanwhile, YOPO-3-5 achieves a more aggressive speed-up with only a slight drop in robustness.

Training Methods Clean Data PGD-20 Attack Training Time (mins)
Natural train 95.03% 0.00% 233
PGD-3 madry2018towards () 90.07% 39.18% 1134
PGD-5 madry2018towards () 89.65% 43.85% 1574
PGD-10 madry2018towards () 87.30% 47.04% 2713
Free-8 shafahi2019adversarial () 86.29% 47.00% 667
YOPO-3-5 (Ours) 87.27% 43.04% 299
YOPO-5-3 (Ours) 86.70% 47.98% 476
Table 3: Results of Wide ResNet34 for CIFAR10.

4.2 YOPO for TRADES

TRADES zhang2019theoretically () formulates a new min-max objective for adversarial defense and achieves state-of-the-art adversarial defense results. The details of the algorithm and experiment setup are in the supplementary material, and quantitative results are demonstrated in Table 4.

Training Methods Clean Data PGD-20 Attack CW Attack Training Time (mins)
TRADES-10 zhang2019theoretically () 86.14% 44.50% 58.40% 633
TRADES-YOPO-3-4 (Ours) 87.82% 46.13% 59.48% 259
TRADES-YOPO-2-5 (Ours) 88.15% 42.48% 59.25% 218
Table 4: Results of training PreAct-Res18 for CIFAR10 with TRADES objective

5 Conclusion

In this work, we have developed an efficient strategy for accelerating adversarial training. We recast the adversarial training of deep neural networks as a discrete time differential game and derive a Pontryagin's Maximum Principle (PMP) for it. Based on this maximum principle, we discover that the adversary is only coupled with the weights of the first layer. This motivates us to split the adversary updates from the back-propagation gradient calculation. The proposed algorithm, called YOPO, avoids computing full forward and backward propagation too many times, thus effectively reducing the computational time, as supported by our experiments.

References

  • (2) Armin Askari, Geoffrey Negiar, Rajiv Sambharya, and Laurent El Ghaoui. Lifted neural networks. arXiv preprint arXiv:1805.01532, 2018.
  • (3) Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
  • (4) Vladimir Grigor’evich Boltyanskii, Revaz Valer’yanovich Gamkrelidze, and Lev Semenovich Pontryagin. The theory of optimal processes. i. the maximum principle. Technical report, TRW SPACE TECHNOLOGY LABS LOS ANGELES CALIF, 1960.
  • (5) Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
  • (6) Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pages 6572–6583, 2018.
  • (7) Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 854–863. JMLR. org, 2017.
  • (8) Lawrence C Evans. An introduction to mathematical optimal control theory. Lecture Notes, University of California, Department of Mathematics, Berkeley, 2005.
  • (9) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
  • (10) Fangda Gu, Armin Askari, and Laurent El Ghaoui. Fenchel lifted networks: A lagrange relaxation of neural network training. arXiv preprint arXiv:1811.08039, 2018.
  • (11) Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
  • (12) Zhouyuan Huo, Bin Gu, Qian Yang, and Heng Huang. Decoupled parallel backpropagation with convergence guarantee. arXiv preprint arXiv:1804.10574, 2018.
  • (13) Andrew Ilyas, Ajil Jalal, Eirini Asteri, Constantinos Daskalakis, and Alexandros G Dimakis. The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196, 2017.
  • (14) Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1627–1635. JMLR. org, 2017.
  • (15) Daniel Jakubovitz and Raja Giryes. Improving dnn robustness to adversarial attacks using jacobian regularization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 514–529, 2018.
  • (16) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
  • (17) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436, 2015.
  • (18) Yann LeCun, D Touresky, G Hinton, and T Sejnowski. A theoretical framework for back-propagation. In Proceedings of the 1988 connectionist models summer school, volume 1, pages 21–28. CMU, Pittsburgh, Pa: Morgan Kaufmann, 1988.
  • (19) Jia Li, Cong Fang, and Zhouchen Lin. Lifted proximal operator machines. arXiv preprint arXiv:1811.01501, 2018.
  • (20) Qianxiao Li, Long Chen, Cheng Tai, and E Weinan. Maximum principle based algorithms for deep learning. The Journal of Machine Learning Research, 18(1):5998–6026, 2017.
  • (21) Qianxiao Li and Shuji Hao. An optimal control approach to deep learning and applications to discrete-weight neural networks. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2985–2994, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
  • (22) Ji Lin, Chuang Gan, and Song Han. Defensive quantization: When efficiency meets robustness. In International Conference on Learning Representations, 2019.
  • (23) Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.
  • (24) Tiange Luo, Tianle Cai, Mengxiao Zhang, Siyu Chen, and Liwei Wang. RANDOM MASK: Towards robust convolutional neural networks, 2019.
  • (25) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
  • (26) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
  • (27) Lev Semenovich Pontryagin. Mathematical theory of optimal processes. CRC, 1987.
  • (28) Haifeng Qian and Mark N Wegman. L2-nonexpansive neural networks. arXiv preprint arXiv:1802.07896, 2018.
  • (29) Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Xu Zeng, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! arXiv preprint arXiv:1904.12843, 2019.
  • (30) Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766, 2017.
  • (31) Sho Sonoda and Noboru Murata. Transport analysis of infinitely deep neural network. The Journal of Machine Learning Research, 20(1):31–82, 2019.
  • (32) Ke Sun, Zhanxing Zhu, and Zhouchen Lin. Enhancing the robustness of deep neural networks by boundary conditional gan. arXiv preprint arXiv:1902.11029, 2019.
  • (33) Jan Svoboda, Jonathan Masci, Federico Monti, Michael Bronstein, and Leonidas Guibas. Peernets: Exploiting peer wisdom against adversarial attacks. In International Conference on Learning Representations, 2019.
  • (34) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • (35) Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training neural networks without gradients: A scalable admm approach. In International conference on machine learning, pages 2722–2731, 2016.
  • (36) Matthew Thorpe and Yves van Gennip. Deep limits of residual neural networks. arXiv preprint arXiv:1810.11741, 2018.
  • (37) Abraham Wald. Contributions to the theory of statistical estimation and testing hypotheses. The Annals of Mathematical Statistics, 10(4):299–326, 1939.
  • (38) Bao Wang, Binjie Yuan, Zuoqiang Shi, and Stanley J Osher. Enresnet: Resnet ensemble via the feynman-kac formalism. arXiv preprint arXiv:1811.10745, 2018.
  • (39) E Weinan. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.
  • (40) E Weinan, Jiequn Han, and Qianxiao Li. A mean-field optimal control formulation of deep learning. Research in the Mathematical Sciences, 6(1):10, 2019.
  • (41) Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411, 2018.
  • (42) Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017.
  • (43) Nanyang Ye and Zhanxing Zhu. Bayesian adversarial learning. In Advances in Neural Information Processing Systems, pages 6892–6901, 2018.
  • (44) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.
  • (45) Jingfeng Zhang, Bo Han, Laura Wynter, Kian Hsiang Low, and Mohan Kankanhalli. Towards robust resnet: A small step but a giant leap. arXiv preprint arXiv:1902.10887, 2019.
  • (46) Xiaoshuai Zhang, Yiping Lu, Jiaying Liu, and Bin Dong. Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration. In International Conference on Learning Representations, 2019.
  • (47) Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856. ACM, 2018.

Appendix A Proof of the Theorems

A.1 Proof of Theorem 1

In this section we give the full statement of the maximum principle for adversarial training and present its proof. Let us start with the case of natural training of neural networks.

Theorem.

(PMP for adversarial training) Assume $\ell$ is twice continuously differentiable, and that $f_t(\cdot,\theta)$ and $R_t(\cdot,\theta)$ are twice continuously differentiable with respect to $x$ and, together with their $x$-partial derivatives, are uniformly bounded in $t$ and $\theta$. The sets $\{f_t(x,\theta):\theta\in\Theta_t\}$ and $\{R_t(x,\theta):\theta\in\Theta_t\}$ are convex for every $t$ and $x$. Let $\theta^*$ be the solution of

$$\min_{\theta\in\Theta}\;\max_{\|\eta_i\|_\infty\le\epsilon}\;J(\theta,\eta):=\frac1N\sum_{i=1}^N \ell(x_{i,T})+\frac1N\sum_{i=1}^N\sum_{t=0}^{T-1}R_t(x_{i,t};\theta_t) \qquad (9)$$

subject to

$$x_{i,1}=f_0(x_{i,0}+\eta_i,\theta_0),\qquad i=1,\dots,N, \qquad (10)$$
$$x_{i,t+1}=f_t(x_{i,t},\theta_t),\qquad t=1,\dots,T-1. \qquad (11)$$

Then there exist co-state processes $p_i^*:=\{p_{i,t}^*:t=0,\dots,T\}$ such that the following holds for all $t\in[T]$ and $i\in[N]$:

$$x_{i,t+1}^*=\nabla_p H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t^*),\qquad x_{i,0}^*=x_{i,0}+\eta_i^*, \qquad (12)$$
$$p_{i,t}^*=\nabla_x H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t^*),\qquad p_{i,T}^*=-\frac1N\nabla\ell(x_{i,T}^*). \qquad (13)$$

Here $H_t$ is the per-layer defined Hamiltonian function

$$H_t(x,p,\theta)=p\cdot f_t(x,\theta)-\frac1N R_t(x,\theta).$$

At the same time, the parameter of the first layer $\theta_0^*$ and the best perturbation $\eta^*$ satisfy, for all $\theta_0\in\Theta_0$ and $\|\eta_i\|_\infty\le\epsilon$,

$$\sum_{i=1}^N H_0(x_{i,0}+\eta_i^*,p_{i,1}^*,\theta_0)\;\le\;\sum_{i=1}^N H_0(x_{i,0}+\eta_i^*,p_{i,1}^*,\theta_0^*)\;\le\;\sum_{i=1}^N H_0(x_{i,0}+\eta_i,p_{i,1}^*,\theta_0^*), \qquad (14)$$

while the parameters of the other layers $\theta_t^*$, $t=1,\dots,T-1$, maximize the Hamiltonian functions:

$$\sum_{i=1}^N H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t^*)\;\ge\;\sum_{i=1}^N H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t),\qquad\forall\theta_t\in\Theta_t. \qquad (15)$$
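The decoupling expressed in Eq. (14), namely that the adversary interacts with the network only through the first-layer Hamiltonian $H_0$, is what powers YOPO: once the co-state $p_{i,1}$ is computed by one full forward and backward pass, each adversary update touches only the first layer. A minimal numpy sketch of this idea, assuming a toy linear network $x_1=\theta_0(x_0+\eta)$ followed by a fixed linear "rest of the network" and a squared loss; all names (`theta0`, `rest`, `neg_p1`) are illustrative and not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
theta0 = rng.standard_normal((d, d))      # first layer f_0
rest = rng.standard_normal((d, d))        # stands for layers 1..T-1 composed
x0, y = rng.standard_normal(d), rng.standard_normal(d)
eps, step, m = 0.3, 0.1, 5                # l_inf budget, step size, inner steps

def neg_p1(x1):
    # -p_1 for the loss 0.5 * ||rest @ x1 - y||^2, i.e. the gradient of the
    # loss with respect to the first hidden state x1 (here N = 1).
    return rest.T @ (rest @ x1 - y)

eta = np.zeros(d)
g1 = neg_p1(theta0 @ (x0 + eta))          # ONE full forward/backward fixes p_1
for _ in range(m):
    g_eta = theta0.T @ g1                 # (df_0/d eta)^T (-p_1): first layer only
    eta = np.clip(eta + step * np.sign(g_eta), -eps, eps)
```

The first inner update coincides with a full PGD step; the remaining updates reuse the frozen $p_1$, which mirrors YOPO's inner loop and is the source of its speed-up.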
Proof.

We first state a PMP for general discrete-time dynamical systems; applying it directly then gives the proof of the PMP for adversarial training.

Lemma 2.

(PMP for discrete-time dynamical systems) Assume $\ell$ is twice continuously differentiable, and that $f_t(\cdot,\theta)$ and $R_t(\cdot,\theta)$ are twice continuously differentiable with respect to $x$ and, together with their $x$-partial derivatives, are uniformly bounded in $t$ and $\theta$. The sets $\{f_t(x,\theta):\theta\in\Theta_t\}$ and $\{R_t(x,\theta):\theta\in\Theta_t\}$ are convex for every $t$ and $x$. Let $\theta^*$ be the solution of

$$\min_{\theta\in\Theta}\;J(\theta):=\frac1N\sum_{i=1}^N \ell(x_{i,T})+\frac1N\sum_{i=1}^N\sum_{t=0}^{T-1}R_t(x_{i,t};\theta_t) \qquad (16)$$

subject to

$$x_{i,t+1}=f_t(x_{i,t},\theta_t),\qquad t=0,\dots,T-1. \qquad (17)$$

Then there exist co-state processes $p_i^*:=\{p_{i,t}^*:t=0,\dots,T\}$ such that the following holds for all $t\in[T]$ and $i\in[N]$:

$$x_{i,t+1}^*=\nabla_p H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t^*),\qquad x_{i,0}^*=x_{i,0}, \qquad (18)$$
$$p_{i,t}^*=\nabla_x H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t^*),\qquad p_{i,T}^*=-\frac1N\nabla\ell(x_{i,T}^*). \qquad (19)$$

Here $H_t$ is the per-layer defined Hamiltonian function

$$H_t(x,p,\theta)=p\cdot f_t(x,\theta)-\frac1N R_t(x,\theta).$$

The parameters of the layers maximize the Hamiltonian functions:

$$\sum_{i=1}^N H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t^*)\;\ge\;\sum_{i=1}^N H_t(x_{i,t}^*,p_{i,t+1}^*,\theta_t),\qquad\forall\theta_t\in\Theta_t,\;t=0,\dots,T-1. \qquad (20)$$
Proof.

Without loss of generality, we let $R_t\equiv 0$. The reason is that we can simply add an extra scalar dynamic $z_t$ to accumulate the regularization term $R_t$, i.e.

$$z_{t+1}=z_t+R_t(x_t,\theta_t),\qquad z_0=0.$$

We append $z_t$ to $x_t$ to study the dynamics of the new $(d+1)$-dimensional state $(x_t,z_t)$ and modify the terminal loss $\ell(x_T)$ to $\ell(x_T)+z_T$. Thus we only need to prove the case $R_t\equiv 0$.

For simplicity, we omit the sample subscript $i$ in the following proof. (Concatenating all the $x_i$ into one long vector $x$ can justify this.)
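The state-augmentation trick above can be checked numerically. A small sketch, with an illustrative linear dynamic and an arbitrary smooth regularizer (both hypothetical choices): the augmented system with terminal loss $\ell(x_T)+z_T$ reproduces the original regularized objective exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
T, d = 3, 4
thetas = [rng.standard_normal((d, d)) for _ in range(T)]
x = rng.standard_normal(d)

def R(x, W):                     # illustrative running regularizer R_t(x, theta)
    return 0.01 * np.sum(W @ x) ** 2

def ell(x):                      # illustrative terminal loss
    return 0.5 * np.sum(x ** 2)

# Original objective: ell(x_T) + sum_t R_t(x_t, theta_t).
obj, xt = 0.0, x
for W in thetas:
    obj += R(xt, W)
    xt = W @ xt
obj += ell(xt)

# Augmented dynamics: carry z_{t+1} = z_t + R_t(x_t, theta_t), z_0 = 0,
# with no running cost and terminal loss ell(x_T) + z_T.
xt, z = x, 0.0
for W in thetas:
    z += R(xt, W)
    xt = W @ xt
aug = ell(xt) + z                # equals the original objective by construction
```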

Now we begin the proof. Following the linearization lemma in [1, 21], consider the linearized problem

(21)

The set of states reachable by the linearized dynamical system is denoted as

here denotes the evolution of the dynamical system for under . We also define

The linearization lemma in [1, 21] tells us that and are separated by , i.e.

(22)

Thus setting

we have

(23)

Thus from Eq. (22) and Eq. (23) we get

Setting for we have , which leads to . This finishes the proof of the maximum principle on the weight space.
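The co-state recursion in Eqs. (18) and (19) is exactly back-propagation: for $R_t\equiv 0$ we have $p_t=(\partial f_t/\partial x)^\top p_{t+1}$ with $p_T=-\nabla\ell(x_T)$, so $p_t$ is the negative gradient of the loss with respect to the hidden state $x_t$. A minimal numpy check on a deep linear network; we take $N=1$ so the $1/N$ factor disappears, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 3, 4
thetas = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(T)]
x0, y = rng.standard_normal(d), rng.standard_normal(d)

def forward(x, thetas):
    # x_{t+1} = f_t(x_t, theta_t) = theta_t @ x_t
    xs = [x]
    for W in thetas:
        xs.append(W @ xs[-1])
    return xs

def costates(xs, thetas, y):
    # p_T = -grad ell(x_T) for ell(x) = 0.5*||x - y||^2, then
    # p_t = ∇_x H_t = (∂f_t/∂x)^T p_{t+1} = theta_t^T p_{t+1}.
    p = [None] * (T + 1)
    p[T] = -(xs[T] - y)
    for t in reversed(range(T)):
        p[t] = thetas[t].T @ p[t + 1]
    return p

xs = forward(x0, thetas)
p = costates(xs, thetas, y)

# Back-propagated gradient: for a linear chain, dx_T/dx_0 = theta_{T-1}...theta_0.
J = np.eye(d)
for W in thetas:
    J = W @ J
grad_x0 = J.T @ (xs[T] - y)      # d ell / d x_0, so p_0 should equal -grad_x0
```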

We now return to the proof of the theorem. The maximum principle on the weight space, i.e.

and

can be obtained with the help of Lemma 2: replacing the starting point of the dynamics in Eq. (18) with $x_{i,0}+\eta_i^*$ makes this maximum principle a direct corollary of Lemma 2.

Next, we prove the Hamiltonian condition for the adversary, i.e.

(24)

Assuming $R_t\equiv 0$ as above, we define a new optimal control problem with the same objective function and the previous dynamics, except for the first step:

(25)
subject to (26)
(27)

This time, however, all the layer parameters are fixed and the perturbation $\eta$ is the control. From the above Lemma 2 we get

(28)
(29)

where and . This gives the fact that . Lemma 2 also tells us

(30)

which is

(31)

On the other hand, Lemma 2 gives

Then we have