Invariant backpropagation: how to train
a transformation-invariant neural network
Abstract
In many classification problems a classifier should be robust to small variations in the input vector. This is a desired property not only for particular transformations, such as translation and rotation in image classification problems, but also for all others for which the change is small enough to keep the object perceptually indistinguishable. We propose two extensions of the backpropagation algorithm that train a neural network to be robust to variations in the feature vector. While the first of them enforces robustness of the loss function to all variations, the second method trains the predictions to be robust to a particular variation which changes the loss function the most. The second method demonstrates better results, but is slightly slower. We analytically compare the proposed algorithms with the two most similar approaches (Tangent BP and Adversarial Training), and propose their fast versions. In the experimental part we compare all algorithms in terms of classification accuracy and robustness to noise on the MNIST and CIFAR10 datasets. Additionally, we analyze how the performance of the proposed algorithm depends on the dataset size and data augmentation.
Sergey Demyanov (http://www.demyanov.net)

IBM Research Australia, Melbourne, VIC, Australia 
sergeyde@au1.ibm.com 
James Bailey, Ramamohanarao Kotagiri, Christopher Leckie 

Department of Computing and Information Systems 
The University of Melbourne, Parkville, VIC, Australia, 3010 
{baileyj, kotagiri, caleckie}@unimelb.edu.au 
1 Introduction
Neural networks are widely used in machine learning. For example, they show the best results in image classification (Szegedy et al. (2014); Lee et al. (2014)), image labeling (Karpathy & FeiFei (2014)) and speech recognition. Deep neural networks applied to large datasets can automatically learn a huge number of features, which allows them to represent very complex relations between raw input data and output classes. However, it also means that deep neural networks can suffer from overfitting, so different regularization techniques are crucially important for good performance.
It is often the case that there exist a number of variations of a given object that preserve its label. For example, image labels are usually invariant to small variations in location on the image, size, angle, brightness, etc. In the area of voice recognition the result has to be invariant to the speech tone, speed and accent. Moreover, the predictions should always be robust to random noise. However, standard training does not incorporate this knowledge into the learning process.
In this work we propose two methods of achieving local invariance by extending the standard backpropagation algorithm. The first of them enforces robustness of the loss function to all variations in the input vector. The second trains the predictions to be robust to variation of the input vector in the direction which changes the loss function the most. We refer to them as Loss Invariant BackPropagation (Loss IBP) and Prediction IBP. While one of them is faster, the other one demonstrates better performance. Both methods can be applied to all types of neural networks in combination with any other regularization technique.
1.1 Backpropagation algorithm
We denote by $N$ the number of layers in a neural network and by $a_i, \, i \in \{1, \dots, N\}$ the activation vectors of each layer. The activation of the first layer is the input vector $a_1 = x$. If the input is an image that consists of one or more feature maps, we still consider it as a vector by traversing the maps and concatenating them together. The transformations between layers might be different: convolution, matrix multiplication, nonlinear transformation, etc. We assume that $a_{i+1} = f_i(a_i, w_i)$, where $w_i$ is the set of weights of layer $i$, which may be empty. The computation of the layer activations is the first (forward) pass of the backpropagation algorithm. Moreover, the loss function $L$ can also be considered as a layer of length $1$. The forward pass is thus a calculation of the composition of functions $f_i$, applied to the input vector $x$.
Let us denote the vectors of derivatives of the loss function with respect to layer values as $da_i = \partial L / \partial a_i$. Then, similar to the forward propagating functions $f_i$, we can define backward propagating functions $g_i$ such that $da_i = g_i(da_{i+1})$. We refer to them as reverse functions. According to the chain rule, we can obtain their matrix form:
$da_i = g_i(da_{i+1}) = J_i^T \cdot da_{i+1}$   (1)
where $J_i$ is the Jacobian of $f_i$, i.e., the matrix of the derivatives $\partial a_{i+1} / \partial a_i$. The backward pass is thus a consecutive multiplication by the transposed Jacobians $J_i^T$ of the layer functions, computed at the points $a_i$. Note that the first Jacobian in this chain (the one of the loss layer) is the vector $dy$ of derivatives of the loss function with respect to the predictions $y$. The last vector $da_1 = dx$ contains the derivatives of the loss function with respect to the input vector.
Next, let us also denote the vector of weight gradients as $dw_i = \partial L / \partial w_i$. Then we can write the chain rule for $dw_i$ in a matrix form as $dw_i = (\partial a_{i+1} / \partial w_i)^T \, da_{i+1}$, where $\partial a_{i+1} / \partial w_i$ is the Jacobian matrix of the derivatives of $f_i$ with respect to the weights $w_i$. However, if $f_i$ is a linear function, i.e., $a_{i+1} = w_i \, a_i$, this Jacobian is determined by the vector $a_i$, so
$dw_i = da_{i+1} \cdot a_i^T$   (2)
In this article we consider all layers with weights to be linear.
After the gradients $dw_i$ are computed, the weights are updated: $w_i \leftarrow w_i - \alpha \, dw_i$. Here $\alpha$ is the learning rate, a coefficient that specifies the size of the step in the opposite direction to the derivative, and it usually decreases over time.
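The notation above can be sketched in a few lines. The following is a minimal NumPy example (with made-up layer sizes, not the paper's Matlab implementation) of the forward and backward passes for a linear, relu, linear, softmax network with negative log-likelihood, producing the activations $a_i$, the weight gradients $dw_i$, and the input gradient $dx$ that the rest of the paper builds upon:

```python
import numpy as np

# Minimal forward/backward pass: linear -> relu -> linear -> softmax + NLL.
# Illustrative sketch only; sizes and weights are arbitrary.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 4)) * 0.1
W2 = rng.standard_normal((3, 5)) * 0.1

def forward(x):
    a2 = W1 @ x                        # a_{i+1} = w_i a_i
    a3 = np.maximum(a2, 0.0)           # relu
    a4 = W2 @ a3
    e = np.exp(a4 - a4.max())
    return a2, a3, a4, e / e.sum()     # softmax predictions y

def backward(x, label):
    a2, a3, a4, y = forward(x)
    loss = -np.log(y[label])           # negative log-likelihood
    da4 = y.copy(); da4[label] -= 1.0  # dL/da4 for softmax + NLL
    dW2 = np.outer(da4, a3)            # eq. (2): dw_i = da_{i+1} a_i^T
    da3 = W2.T @ da4                   # eq. (1): da_i = J_i^T da_{i+1}
    da2 = da3 * (a2 > 0)               # relu Jacobian is diagonal
    dW1 = np.outer(da2, x)
    dx = W1.T @ da2                    # gradient w.r.t. the input vector
    return loss, dW1, dW2, dx
```

The vector `dx` returned at the end of the backward pass is the quantity that both IBP versions and Adversarial Training operate on.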
2 Related work
A number of techniques that allow a classifier to achieve robustness to particular variations have been proposed. Convolutional neural networks, which consist of pairs of convolutional and subsampling layers, are the most commonly used. They provide robustness to small shifts and scaling, and also significantly reduce the number of training parameters compared to fully connected perceptrons. However, they are not able to deal with other types of variations. Another popular method is data augmentation, i.e., training on objects artificially generated from the existing training set using transformation functions. Unfortunately, such generation is not always possible. Two other approaches attempt to solve this problem analytically, using the gradients of the loss function with respect to the input. We discuss them below.
2.1 Tangent backpropagation algorithm
The first approach is the Tangent backpropagation algorithm (Simard et al. (2012)), which allows one to train a network robust to a set of predefined transformations. The authors consider an invariant transformation function $s(x, \alpha)$ with $s(x, 0) = x$, which must preserve the predictions within a local neighborhood of $\alpha = 0$. Since the predictions in this neighborhood must be constant, a necessary condition for the network is $\partial y(s(x, \alpha)) / \partial \alpha \,|_{\alpha = 0} = 0$.
To achieve this, the authors add a regularization term $L_t$ to the main loss function $L$:
$L_t = \frac{1}{2} \left\| \frac{\partial y(s(x, \alpha))}{\partial \alpha} \Big|_{\alpha = 0} \right\|^2$   (3)
Using the chain rule we can obtain the following representation: $\frac{\partial y(s(x, \alpha))}{\partial \alpha} \Big|_{\alpha = 0} = \frac{\partial y}{\partial x} \cdot \frac{\partial s(x, \alpha)}{\partial \alpha} \Big|_{\alpha = 0}$
The last term $t = \partial s(x, \alpha) / \partial \alpha \,|_{\alpha = 0}$ depends only on the function $s$ and the input value $x$, and therefore can be computed in advance. The authors refer to the vectors $t$ as tangent vectors. They propose to compute the additional loss term by initializing the network with a tangent vector and propagating it through a linearized network, i.e., consecutively multiplying it by the layer Jacobians. Indeed, $\frac{\partial y}{\partial x} \, t = J_{N-1} \cdots J_1 \, t$.
The main drawback of Tangent BP is its computational complexity. As can be seen from the definition, it depends linearly on the number of transformations the classifier learns to be invariant to. The authors describe an example of training a network for image classification that is robust to five transformations: two translations, two scalings, and rotation. In this case the required learning time is 6 times larger than for standard BP.
The usage of tangent vectors also makes Tangent BP more difficult to implement. To compute them, the authors suggest obtaining a continuous image representation by applying a Gaussian filter, which requires additional preprocessing and one more hyperparameter (filter smoothness). While the basic transformation operators are given by simple Lie operators, other transformations may require additional coding.
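The two ingredients of Tangent BP, a tangent vector and its propagation through the linearized network, can be illustrated with a toy example. Below is a NumPy sketch (our own, on a 4x4 "image" without the Gaussian smoothing the paper applies) that approximates the tangent vector of a horizontal shift by finite differences and computes the directional derivative of the predictions by consecutive multiplication by the layer Jacobians:

```python
import numpy as np

# Tangent vector for a continuous horizontal shift, and its propagation
# through a linearized two-layer relu network. Illustrative sketch only.

rng = np.random.default_rng(1)
W1 = rng.standard_normal((6, 16)) * 0.1
W2 = rng.standard_normal((3, 6)) * 0.1

def predict(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def shift(img, alpha):
    # continuous horizontal shift of a 4x4 image via linear interpolation
    img = img.reshape(4, 4)
    lo = int(np.floor(alpha))
    frac = alpha - lo
    left = np.roll(img, lo, axis=1)
    right = np.roll(img, lo + 1, axis=1)
    return ((1.0 - frac) * left + frac * right).ravel()

x = rng.random(16)
eps = 1e-5
t = (shift(x, eps) - shift(x, 0.0)) / eps   # tangent vector dS/dalpha at 0

# Linearized propagation: dy/dalpha = J_2 . diag(relu') . J_1 . t
a2 = W1 @ x
dy_dalpha = W2 @ ((a2 > 0) * (W1 @ t))
```

The resulting `dy_dalpha` matches a direct finite-difference derivative of the predictions with respect to the shift parameter.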
2.2 Adversarial Training
The second algorithm is the recently proposed Adversarial Training (Goodfellow et al. (2014)). In (Szegedy et al. (2013)) the authors described an interesting phenomenon: it is possible to artificially generate an image, indistinguishable from an image of the dataset, such that a trained network's prediction about it is completely wrong. Of course, people never make such kinds of mistakes. These objects were called adversarial examples. In (Goodfellow et al. (2014)) the authors showed that it is possible to generate adversarial examples by moving in the direction given by the gradient $dx$ of the loss function, i.e.,
$x_{adv} = x + \epsilon \cdot \mathrm{sign}(dx)$   (4)
In a high dimensional space even a small move of this kind may significantly change the loss function $L$. To deal with the problem of adversarial examples, the authors propose the Adversarial Training (AT) algorithm. The idea of the algorithm is to additionally train the network on adversarial examples, which can be quickly generated using the gradients $dx$, obtained at the end of the backward pass. Adversarial Training uses the same label for the new object $x_{adv}$ as for the original object $x$, so the loss function is the same. The updated loss function is thus
$\tilde{L}(x) = L(x) + \beta \, L(x_{adv})$   (5)
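One Adversarial Training step can be written down compactly. The following is a sketch on a toy logistic-regression "network" (our own illustration, with made-up hyperparameter values): an adversarial example is generated with eq. (4) and the weight gradients of both terms of eq. (5) are combined:

```python
import numpy as np

# One Adversarial Training step on logistic regression. Sketch only.

rng = np.random.default_rng(2)
w = rng.standard_normal(8) * 0.5
x = rng.random(8)
label = 1
epsilon, beta, alpha = 0.1, 1.0, 0.05    # illustrative hyperparameters

def loss_and_grad(x, label, w):
    p = 1.0 / (1.0 + np.exp(-w @ x))     # predicted probability of class 1
    loss = -np.log(p if label == 1 else 1.0 - p)
    dx = (p - label) * w                 # gradient w.r.t. the input vector
    dw = (p - label) * x                 # gradient w.r.t. the weights
    return loss, dx, dw

loss, dx, dw = loss_and_grad(x, label, w)
x_adv = x + epsilon * np.sign(dx)        # eq. (4): adversarial example
loss_adv, _, dw_adv = loss_and_grad(x_adv, label, w)

w_new = w - alpha * (dw + beta * dw_adv) # gradient step on eq. (5)
```

Note that `loss_adv` exceeds `loss`, as expected from moving along the gradient direction.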
Adversarial Training is quite similar to the Tangent propagation algorithm, but differs in a couple of aspects. First, Adversarial Training uses the gradients $dx$ of the loss function, while Tangent BP uses tangent vectors $t$. Second, while Adversarial Training propagates the new objects through the original network, Tangent BP propagates the gradients through the linearized network. The proposed Prediction IBP algorithm can also be derived by combining these properties.
3 Invariant backpropagation
In the first part of this section we describe Loss IBP, which makes the main loss function robust to all variations in the input vector. In the second part we describe Prediction IBP, which aims to make the network predictions robust to the variation in the direction specified by $dx$. While both versions use the gradients $dx$, they differ in their loss functions, computational complexity, and experimental results.
3.1 Loss IBP
In many classification problems we have a large number of features. Formally it means that the input vectors come from a high dimensional vector space. In this space every vector can move in a huge number of directions, but most of them should not change the vector’s label. The goal of the algorithm is to make a classifier robust to such variations.
Let us consider an $N$-layer neural network with an input $x$ and predictions $y$. Using the vector of true labels $c$, we compute the loss function $L(y, c)$, and at the end of the backward pass of the backpropagation algorithm we obtain the vector of its gradients $dx = \partial L / \partial x$. This vector defines the direction that changes the loss function $L$ the most, and its length $\|dx\|$ specifies how large this change is. In a small neighborhood we can assume that $L(x + \Delta x) \approx L(x) + dx^T \Delta x$. If $\|dx\|$ is small, then the same change $\Delta x$ will cause a smaller change of $L$. Thus, a smaller vector length $\|dx\|$ corresponds to a more robust classifier, and vice versa. Let us specify the additional loss function
$L^*(dx) = \frac{1}{p} \|dx\|_p^p$   (6)
which is computed at the end of the backward pass. In order to achieve robustness to variations, we need to make it as small as possible. By default we assume $p = 2$.
Note that $L^*$ is very similar to the Frobenius norm of the Jacobian matrix $\partial y / \partial x$, which is used as a regularization term in contractive autoencoders (Rifai et al. (2011)). The minimization of $L^*$ encourages the classifier to be invariant to changes of the input vector in all directions, not only those that are known to be invariant. At the same time, the minimization of the main loss $L$ ensures that the predictions change when we move towards the samples of a different class, so the classifier is not invariant in these directions. The combination of these two loss functions aims to ensure good performance. In order to minimize the joint loss function
$\tilde{L} = L + \beta L^*$   (7)
we need to additionally obtain the derivatives $dw_i^* = \partial L^* / \partial w_i$ of the additional loss function with respect to the weights. In Section 3.3 we discuss how to compute them efficiently, using only one additional forward pass. Once these derivatives are computed, we can update the weights using the new rule
$w_i \leftarrow w_i - \alpha \, (dw_i + \beta \, dw_i^*)$   (8)
Here $\beta$ is the coefficient that controls the strength of regularization, and it plays a crucial role in achieving good performance. Note that when $\beta = 0$, the algorithm is equivalent to standard backpropagation. Since the additional loss function $L^*$ aims to minimize the gradients of the main loss function $L$, we call this algorithm Loss IBP.
3.2 Prediction IBP
While Loss IBP makes the main loss function robust to variations, it does not necessarily imply the robustness of the predictions themselves. Unfortunately, we cannot compute the full matrix of gradients $\partial y / \partial x$ of the predictions with respect to the input vector, as its dimensionality can be very large. However, we can compute the gradients of the predictions in the direction given by $dx$. As was shown in Section 2.2, movement in this direction can generate adversarial examples, whose predictions significantly differ from $y$. We can thus introduce another additional loss function
$L^* = \frac{1}{2} \left\| \frac{\partial y}{\partial x} \cdot dx \right\|^2$   (9)
We call the algorithm with this loss function Prediction IBP. The only difference between Prediction IBP and Tangent BP is the initial vector for the third pass. While Tangent BP uses precomputed tangent vectors, Prediction IBP uses the vector of gradients $dx$, obtained at the end of the backward pass. The weight gradients of the additional loss function can be computed the same way as they are computed in Tangent BP. Therefore, Prediction IBP always requires two times more computation time than standard BP.
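The quantity inside loss (9) can be computed with one backward pass and one linearized forward pass. The following NumPy sketch (our own, not the paper's implementation) does exactly that for a tiny relu network; following Section 5, the softmax gradient is omitted on the additional pass, so the directional derivative is taken of the pre-softmax predictions:

```python
import numpy as np

# Prediction IBP's additional loss (9) for a tiny relu network. Sketch only.

rng = np.random.default_rng(3)
W1 = rng.standard_normal((6, 4)) * 0.5
W2 = rng.standard_normal((3, 6)) * 0.5
x = rng.random(4)
label = 0

a2 = W1 @ x
a3 = np.maximum(a2, 0.0)
a4 = W2 @ a3
e = np.exp(a4 - a4.max())
y = e / e.sum()                       # softmax predictions

# Backward pass: dx = dL/dx for softmax + negative log-likelihood.
da4 = y.copy(); da4[label] -= 1.0
da2 = (W2.T @ da4) * (a2 > 0)
dx = W1.T @ da2

# Third pass: directional derivative of the pre-softmax predictions
# along dx (consecutive multiplication by the layer Jacobians).
jvp = W2 @ ((a2 > 0) * (W1 @ dx))
L_star = 0.5 * np.sum(jvp ** 2)       # additional loss (9)
```

The vector `jvp` agrees with a finite-difference derivative of the pre-softmax predictions in the direction `dx`.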
3.3 Loss IBP implementation
In this section we show how to efficiently compute the weight gradients $dw_i^*$ for the additional loss function (6). To optimize $L^*$, we need to look at the backward pass from another point of view. We may consider the derivatives $da_i$ as the layers of a reverse neural network that has $dx$ as its output. Indeed, all transformation functions $f_i$ have reverse pairs $g_i$ that are used to propagate the derivatives (1). If we consider these pairs as the original transformation functions, they have their own reverse pairs.
Therefore we consider the derivatives $da_i$ as activations and the backward pass as a forward pass of the reverse network. As in standard backpropagation, after such a "forward" pass we compute the loss function $L^*(dx)$. The next step is quite natural: we need to initialize the input vector with the gradients $\partial L^* / \partial (dx)$ and perform another "backward" pass that has the same direction as the original forward pass. At the same time the derivatives with respect to the weights must be computed. Fig. 1 shows the general scheme of the derivative computation. The top part corresponds to the standard backpropagation procedure.
An important subset of transformation functions is linear functions. It includes convolutional layers, fully connected layers, subsampling layers, and other types. In Section A.1 we show that if a function is linear, i.e., $f(x, w) = w \cdot x$, then: 1) the third pass propagates its input in the same way as the forward pass, i.e., by multiplication by the same matrix of weights $w$, and 2) the weight gradients of the additional loss are computed in the same form as (2), with the third pass activations in place of $a_i$. Therefore, in the case of a linear function $f$, we can propagate the third pass activations the same way as we do on the first pass, i.e., multiplying them by the same matrix of weights $w$. This statement remains true for elementwise multiplication, as it can be considered as matrix multiplication as well. The weight derivatives are also computed the same way as in the standard BP algorithm. This fact allows us to easily implement Loss IBP using the same procedures as for standard BP.
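The whole three-pass scheme fits in a short script. Below is a NumPy sketch of Loss IBP with $p = 2$ for a linear, relu, linear network (our own illustration; to keep the numerical check exact, the loss layer is linear in the predictions, so that its gradient $c$ is a constant, in the same spirit as the paper's omission of the softmax gradient on the additional passes):

```python
import numpy as np

# Loss IBP's third pass for a linear -> relu -> linear network, p = 2.

rng = np.random.default_rng(4)
W1 = rng.standard_normal((6, 4)) * 0.5
W2 = rng.standard_normal((3, 6)) * 0.5
x = rng.random(4)
c = np.array([1.0, -1.0, 0.5])     # constant gradient dL/dy at the loss layer

def backward_dx(W1, W2):
    a2 = W1 @ x                     # forward pass
    da2 = (W2.T @ c) * (a2 > 0)     # backward pass through relu
    return a2, da2, W1.T @ da2      # dx: gradient w.r.t. the input

a2, da2, dx = backward_dx(W1, W2)
L_star = 0.5 * np.sum(dx ** 2)      # additional loss (6) with p = 2

# Third pass: forward propagation of h1 = dx with the SAME weights,
# collecting the additional weight gradients along the way; the relu
# reuses the mask from the forward pass.
h1 = dx
dW1_star = np.outer(da2, h1)        # same form as eq. (2), a_i -> h_i
h2 = (W1 @ h1) * (a2 > 0)
dW2_star = np.outer(c, h2)
```

Both `dW1_star` and `dW2_star` match numerical differentiation of `L_star` with respect to the weights.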
Moreover, in Section A.1 we also show that if a function $f$ has a symmetric Jacobian $J$, then the third pass through it coincides with the backward pass. This property is useful for the implementation of nonlinear functions. The summary of the Loss IBP algorithm is given in Algorithm 1.
It is easy to compare the computation time of standard BP and Loss IBP. We know that convolution and matrix multiplication operations occupy almost all the processing time. As we see, Loss IBP needs one more forward pass and one more calculation of weight gradients. If we assume that for each layer the forward pass, backward pass and calculation of derivatives all take approximately the same time, then Loss IBP requires about 2/3 more time to train the network. The experiments have shown that the additional time is lower than this approximation, because both versions contain fixed-time procedures such as batch composition, data augmentation, etc. At the same time, Loss IBP is faster than Prediction IBP, which doubles the training time.
4 Fast versions of Tangent BP and Adversarial Training
4.1 Fast Tangent BP
Let us change the additional loss function $L_t$ in Eq. (3) so that we penalize the sensitivity of the main loss function $L$ instead of the predictions themselves:
$L_t = \frac{1}{2} \left( \frac{\partial L(s(x, \alpha))}{\partial \alpha} \Big|_{\alpha = 0} \right)^2$   (10)
In this case the computations can be simplified. Notice that $\frac{\partial L(s(x, \alpha))}{\partial \alpha} \Big|_{\alpha = 0} = dx^T \cdot t$.
Therefore this term can be directly computed at the end of the backward pass by multiplying the gradient $dx$ by the tangent vector $t$. In Section A.5 we show that this modification of Tangent BP is equivalent to Loss IBP with the additional loss function (10) instead of (6). Therefore, this version of Tangent BP can be implemented using less time than the original Tangent BP. We refer to it as Fast TBP.
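The identity above is easy to verify numerically. The following toy check (our own; a logistic loss and a global brightness transformation $s(x, \alpha) = x + \alpha \cdot \mathbf{1}$, whose tangent vector is the all-ones vector, rather than the paper's network and transformations) confirms that $dx^T t$ equals the derivative of the loss along the transformation:

```python
import numpy as np

# Fast TBP check: dL(s(x, alpha))/dalpha at alpha = 0 equals dx . t.

rng = np.random.default_rng(5)
w = rng.standard_normal(10)
x = rng.random(10)

def loss(v):
    return np.log1p(np.exp(-w @ v))   # logistic loss for a positive sample

dx = -w / (1.0 + np.exp(w @ x))       # gradient of the loss w.r.t. the input
t = np.ones(10)                       # tangent vector of s(x, alpha) = x + alpha
fast_tbp_term = dx @ t                # dL(s(x, alpha))/dalpha at alpha = 0
```

A finite-difference derivative of `loss` along `t` reproduces `fast_tbp_term`.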
4.2 Fast Adversarial Training
Using a Taylor expansion for the loss of the adversarial example $x_{adv} = x + \epsilon \cdot \mathrm{sign}(dx)$, we can get
$L(x_{adv}) \approx L(x) + \epsilon \cdot dx^T \mathrm{sign}(dx) = L(x) + \epsilon \|dx\|_1$   (11)
Combining (5) and (11), we can approximate $\tilde{L}(x)$ as
$\tilde{L}(x) \approx (1 + \beta) L(x) + \beta \epsilon \|dx\|_1$   (12)
It is easy to notice that the usage of $(1 + \beta) L(x)$ instead of just $L(x)$ only scales the hyperparameter $\beta$, which needs to be tuned anyway. At the same time, the calculation of the gradients of $L(x_{adv})$ takes additional computation time. Therefore, the Adversarial Training algorithm can be sped up by avoiding the calculation of $L(x_{adv})$ and using only the gradients $dx$, i.e., by minimizing $L(x) + \beta \epsilon \|dx\|_1$. Compared with the originally proposed loss (5), the optimal parameter must be $(1 + \beta)$ times lower. Similar to Fast TBP, this trick also saves computation time.
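Approximation (11) holds tightly for small $\epsilon$, which is what justifies the fast version. A toy numerical check (our own, on a logistic loss rather than the paper's network):

```python
import numpy as np

# Check of approximation (11): L(x + eps * sign(dx)) ~ L(x) + eps * ||dx||_1.

rng = np.random.default_rng(6)
w = rng.standard_normal(12)
x = rng.random(12)

def loss(v):
    return np.log1p(np.exp(-w @ v))

dx = -w / (1.0 + np.exp(w @ x))                # gradient w.r.t. the input
epsilon = 1e-3
adversarial = loss(x + epsilon * np.sign(dx))  # exact loss of x_adv
approx = loss(x) + epsilon * np.abs(dx).sum()  # right-hand side of (11)
```

The gap between `adversarial` and `approx` is of second order in `epsilon`, while the first-order term itself is strictly positive.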
Now we can see the difference between Loss IBP and Adversarial Training. While Loss IBP minimizes only the first derivative and does not affect higher-order derivatives of the loss function such as curvature, Adversarial Training essentially minimizes all orders of the derivatives with predefined weight coefficients between them. In the case of a highly nonlinear true data distribution this might be a disadvantage. In Section 5 we show that neither of these algorithms outperforms the other in all cases.
5 Experiments
In the experimental part we compared all algorithms and their modifications in different aspects. We performed the experiments on two benchmark image classification datasets: MNIST (LeCun et al. (1998)) and CIFAR10 (Krizhevsky (2009)) using the ConvNet toolbox for Matlab ^{1}^{1}1https://github.com/sdemyanov/ConvNet. In all experiments we used the following parameters: 1) the batch size , 2) initial learning rate , 3) momentum , 4) exponential decrease of the learning rate, i.e., , 5) each convolutional layer was followed by a scaling layer with max aggregation function among the region of size and stride , 6) relu nonlinear functions on the internal layers, 7) final softmax layer combined with the negative loglikelihood loss function. We trained the classifiers for epochs with the coefficient , so the final learning rate was . For the experiments on MNIST we employed a network with two convolutional layers with filters of size (padding ) and filters of size (padding ) and one internal FC layer of length . The experiments on CIFAR were performed on the network with convolutional layers with the filter size (paddings , and ), and one internal FC layer of length .
In all our experiments we used the norm of the additional loss function that we had found to always work better than the other. For the Tangent BP algorithm we used tangent vectors for each image in the training set, corresponding to horizontal and vertical shifts, scaling and rotation. For the Gaussian filter we employed a fixed standard deviation. For numerical stability reasons we omitted multiplication by the softmax gradients on the additional forward and backward passes in the Prediction IBP and Original TBP algorithms.
5.1 Classification accuracy
First we compared the performance of all algorithms and their modifications. We trained the networks on different subsets of MNIST and CIFAR10 datasets of size with different initial weights and shuffling order. Each dataset was first normalized to have pixel values within and then the mean pixel value was subtracted from all images. The results are presented in Table 1.
Table 1: Classification error (%), best regularization parameter, and training time (s) on MNIST and CIFAR10 for Standard BP, Prediction IBP, Loss IBP, Original AT, Fast AT, Original TBP, and Fast TBP.
First, we can see that all algorithms except Fast TBP decrease the classification error compared with standard BP. We suppose that the lack of improvement by Fast TBP can be explained by a weak connection between the behavior of the loss function and the predictions themselves. While the loss function is trained to be robust to predefined transformations, the predictions might remain sensitive to them. Further on we discuss only Original TBP.
Second, we can notice that Original and Fast AT demonstrate identical performance, while Fast AT requires less training time, thus confirming our suggestion about the possibility of speeding up the algorithm. We can also see that the best values of the regularization parameter on the MNIST dataset differ by a factor of 2, which was also predicted by our considerations. Further on we do not differentiate between Original and Fast AT, and refer to both as AT.
Third, we can conclude that Prediction IBP shows better results than Loss IBP on both MNIST and CIFAR, while being slightly slower. Since Prediction IBP can be seen as a modification of Original TBP, while Loss IBP is equivalent to a modification of Fast TBP, the reason might also be a weak connection between the loss function and the predictions.
Fourth, we observe that the algorithms demonstrate different performance on the MNIST and CIFAR10 datasets. The best results on MNIST are achieved by Prediction IBP and AT, while the best result on CIFAR10 is achieved by Tangent BP. Notice that the improvement of Tangent BP on the CIFAR10 dataset is much larger than the next best result of Prediction IBP. At the same time, the AT algorithm could not improve the accuracy at all, achieving its best accuracy with the lowest possible value of the regularization parameter. However, the Tangent BP algorithm works much slower than its competitors.
We suppose that these results can be explained by a high nonlinearity of the decision function. As was shown in Section 4.2, AT minimizes not only the first order of the loss function derivatives, but also all other orders, thus preventing the classifier from learning such nonlinearity. At the same time, Prediction IBP just makes the predictions less sensitive to variations of the input vector in the direction specified by $dx$. In the case of a highly nonlinear decision function this might not be necessary. Unlike both AT and IBP, Tangent BP uses prior knowledge to train invariance in the directions that the predictions must always be invariant to. This allows it to achieve the best performance on CIFAR10.
5.2 Robustness to Adversarial noise
We next measured the sensitivity of all algorithms to adversarial noise. We employed the classifiers trained in Section 5.1 with the parameters that yield the best accuracy, and measured the performance of the classifiers on the test sets corrupted by adversarial noise. Adversarial examples were generated using Eq. (4). The results are presented in Fig. 2, where we show the errors as $\epsilon$ varies. It is important to keep in mind that the performance of the classifiers significantly depends on the value of the regularization parameter.
First, notice that the CIFAR10 classifiers are much more sensitive to adversarial noise than those trained on the MNIST dataset. As expected, the most robust classifier was trained by the Adversarial Training algorithm. It is the only one which constantly remains better than the standard BP classifier. Other classifiers show better results until a certain point, when the level of noise becomes too high. Interestingly, while Tangent BP demonstrates the best results on the CIFAR10 dataset, its performance degrades much faster than the performance of the other classifiers on both MNIST and CIFAR10. Note that although the ratio of the best regularization values for Prediction IBP and Loss IBP is the same in both cases, they demonstrate different behavior.
5.3 Robustness to Gaussian noise
After that we measured the sensitivity of the same classifiers to Gaussian noise. The results are presented in Fig. 3. Surprisingly, the most robust classifier on the MNIST dataset was trained by standard BP. We thus see that robustness to adversarial noise and other predefined transformations can make a classifier more sensitive to Gaussian noise. At the same time, the Tangent BP classifier remains the most sensitive to Gaussian noise as well. On the CIFAR10 dataset it is the only classifier which degrades significantly faster than the others.
5.4 Dataset size and Data augmentation
We have also established how the dataset size and data augmentation affect the Prediction IBP improvement. We performed these experiments on subsets of the MNIST dataset using the same parameters as in Section 5.1. In the data augmentation regime we randomly modified each training object every time it was accessed, according to the following parameters: 1) range of shift from the central position in each dimension, 2) range of scaling in each dimension, 3) range of rotation angle, 4) pixel value if the pixel is out of the original image. In order to decrease the variance we trained the networks for different numbers of epochs with and without data augmentation.
The results are summarized in Fig. 4. We see that without data augmentation smaller datasets require more regularization, i.e., a larger $\beta$. The relative improvement is also higher for smaller training sets. We thus see that the larger the dataset is, the less the network overfits, and the less improvement we can obtain from regularization. With data augmentation the improvement of IBP is smaller, but does not vanish even when the full training set is used. Interestingly, the optimal value of $\beta$ remains approximately on the same level for all dataset sizes. Therefore we can conclude that data augmentation cannot completely substitute for IBP regularization, as the latter enforces robustness to variations which are not represented by the additionally generated objects.
6 Conclusion
We proposed two versions of the Invariant Backpropagation algorithm, which extends standard backpropagation in order to enforce robustness of a classifier to variations in the input vector. While Loss IBP trains the main loss function to be insensitive to any variations, Prediction IBP trains the predictions to be insensitive to variations in the direction of the gradient $dx$. We have demonstrated that the weight gradients for Loss IBP can be efficiently computed using only one additional forward pass, which is identical to the original forward pass for the majority of layer types. We experimentally established that Prediction IBP achieves higher classification accuracy on both MNIST and CIFAR10 datasets, but requires more time than Loss IBP. Additionally, we proposed fast versions of both Tangent BP and Adversarial Training. While the fast version of Tangent BP does not improve classification accuracy, the modified Adversarial Training algorithm demonstrates the same performance as the originally proposed algorithm while being faster.
In the experimental part we compared all algorithms and their modifications in terms of classification accuracy and robustness to noise. We found that none of the algorithms outperforms the others in all cases. While the best results on MNIST are achieved by Prediction IBP and Adversarial Training, Tangent BP significantly outperformed the others on CIFAR10. At the same time, the Tangent BP classifier is the most sensitive to Gaussian and adversarial noise on both datasets. Additionally, we demonstrated that the regularization effect of Prediction IBP remains visible even on the full-size MNIST dataset with data augmentation, so the two methods can be applied together. The choice of a particular regularizer depends on the properties of a dataset.
References
 Bishop (1995) Bishop, Christopher M. Training with noise is equivalent to Tikhonov regularization. Neural computation, 7(1):108–116, 1995.
 Fawzi et al. (2015) Fawzi, Alhussein, Fawzi, Omar, and Frossard, Pascal. Fundamental limits on adversarial robustness. In ICML 2015, 2015.
 Goodfellow et al. (2014) Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
 Karpathy & FeiFei (2014) Karpathy, Andrej and FeiFei, Li. Deep visualsemantic alignments for generating image descriptions. arXiv preprint arXiv:1412.2306, 2014.
 Krizhevsky (2009) Krizhevsky, Alex. Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto, 2009.
 LeCun et al. (1998) LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Lee et al. (2014) Lee, ChenYu, Xie, Saining, Gallagher, Patrick, Zhang, Zhengyou, and Tu, Zhuowen. Deeplysupervised nets. arXiv preprint arXiv:1409.5185, 2014.
 Rifai et al. (2011) Rifai, Salah, Vincent, Pascal, Muller, Xavier, Glorot, Xavier, and Bengio, Yoshua. Contractive autoencoders: Explicit invariance during feature extraction. In ICML 2011, pp. 833–840, 2011.
 Simard et al. (2012) Simard, Patrice Y, LeCun, Yann A, Denker, John S, and Victorri, Bernard. Transformation invariance in pattern recognition–tangent distance and tangent propagation. In Neural networks: tricks of the trade, pp. 235–269. Springer, 2012.
 Szegedy et al. (2013) Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian, and Fergus, Rob. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
 Szegedy et al. (2014) Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Appendix A Supplementary material
A.1 Reverse function theorems
First, let us notice that the forward and backward passes of Loss IBP are performed in the same way as in the standard backpropagation algorithm. Then the additional loss function (6) is computed, and its derivatives with respect to $dx$ are used as input for the propagation on the third pass. As follows from (6), for $p = 2$ the gradients are $\partial L^* / \partial (dx) = dx$, i.e., they coincide with the gradients $dx$. For $p = 1$, they are the signs of $dx$: $\partial L^* / \partial (dx) = \mathrm{sign}(dx)$.
In Section 3.3 we described double reverse functions, i.e., the reverse pairs of the reverse functions $g$. Let us additionally introduce the functions that compute the weight gradients for the original and reverse networks.
Now we can prove the following theorems.
Theorem 1.
Let us assume that $f(x, w) = w \cdot x$ is linear, where matrix multiplication is used. Then:
1) the double reverse function coincides with $f$, i.e., the third pass propagates its input by multiplication by the same matrix of weights $w$, and
2) the weight gradients of the additional loss are computed as $dw^* = da_{i+1} \cdot h_i^T$, i.e., in the same form as (2) with the layer activations $a_i$ replaced by the third pass activations $h_i$.
Proof.
First, notice that the reverse of any function is always linear:
(13) 
In the case of a linear function the reverse function is known:
(14) 
Now let us consider the double reverse functions , such that . Compared with linear , its reverse function multiplies its first argument on the transposed parameter. The same is true for the double reverse function compared with , i.e.:
This proves the first statement.
Next, in the case of linear function we also know the function which computes the weight derivatives (2):
(15) 
Let us again consider the backward pass as the forward pass of the reverse net. Since the function $f^*$ is linear, the formula for the derivative calculation in the reverse net is also (15). However, as follows from (14), the reverse net uses the transposed matrix of weights $w^T$ for forward propagation, so the result of the derivative calculation is also transposed with respect to the matrix $w$. Also note that since $\frac{\partial L}{\partial x_k}$ acts as the activations in the reverse net, we pass it as the first argument, and $\tilde{x}_{k-1}$ as the second. Therefore,
$$g^*\left(\frac{\partial L}{\partial x_k}, \tilde{x}_{k-1}\right) = \left(\tilde{x}_{k-1}\left(\frac{\partial L}{\partial x_k}\right)^T\right)^T = \frac{\partial L}{\partial x_k}\,\tilde{x}_{k-1}^T = g\left(\tilde{x}_{k-1}, \frac{\partial L}{\partial x_k}\right), \quad (16)$$
and this proves part 2. ∎
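To make the statement concrete, here is a small numerical check of part 2 (our own sketch, not from the paper): for a single linear layer $x_1 = W x_0$ with a loss whose gradient $c = \partial L/\partial x_1$ is constant, the third-pass formula $g^*$ reproduces the finite-difference gradient of $\tilde{L} = \frac{1}{2}\|\partial L/\partial x_0\|^2$ with respect to $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
c = rng.normal(size=3)        # constant gradient dL/dx1 of a linear loss

# Backward pass: dL/dx0 = f*(c, W) = W^T c
grad_x0 = W.T @ c

# Third pass (q = 2): seed with grad_x0, propagate with the SAME weights,
# and compute the weight gradient g*(dL/dx1, x0_tilde) = (dL/dx1) x0_tilde^T.
x0_tilde = grad_x0
dW_ibp = np.outer(c, x0_tilde)

# Finite-difference check of d(Ltilde)/dW, Ltilde = 0.5 * ||W^T c||^2
def Ltilde(Wm):
    g = Wm.T @ c
    return 0.5 * g @ g

eps = 1e-6
dW_num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        dW_num[i, j] = (Ltilde(Wp) - Ltilde(Wm)) / (2 * eps)

max_err = np.max(np.abs(dW_ibp - dW_num))  # close to zero
```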
Theorem 2.
If the function $f$ has a symmetric Jacobian $\frac{\partial x_k}{\partial x_{k-1}}$, then $f^{**} = f^*$.
Proof.
According to (13), the reverse function $f^*$ is linear and multiplies its first argument on the transposed Jacobian. By the same argument as in Theorem 1, its reverse $f^{**}$ multiplies its first argument on the Jacobian itself. When the Jacobian is symmetric, these two matrices coincide, so $f^{**} = f^*$. ∎
A.2 Implementation of particular layer types
A fully connected layer
is a standard linear layer, which transforms its input by multiplying it on the matrix of weights: $x_k = w_k x_{k-1} + b_k$, where $b_k$ is the vector of biases. Notice that on the backward pass we do not add any bias to propagate the derivatives, so we do not add it on the third pass either, and do not compute additional bias derivatives. This is the only difference between the first and the third passes. If dropout is used, the third pass should use the same dropout mask as used on the first pass.
Nonlinear activation functions
can be considered as a separate layer, even if they are usually implemented as a part of a layer of another type. They do not contain weights, so we write simply $x_k = f(x_{k-1})$. The most common functions are: (i) sigmoid, $f(x) = (1 + e^{-x})^{-1}$, (ii) rectified linear unit (relu), $f(x) = \max(0, x)$, and (iii) softmax, $f(x)_i = e^{x_i} / \sum_j e^{x_j}$. All of them are differentiable (except relu at $x = 0$, but this does not cause uncertainty in practice) and have a symmetric Jacobian matrix, so according to Theorem 2 the third pass is the same as the backward pass. For example, in the case of the relu function this means that $\tilde{x}_k = \tilde{x}_{k-1} \odot [x_{k-1} > 0]$, where elementwise multiplication is used.
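Both facts can be illustrated with a short numpy sketch (our own variable names): the softmax Jacobian is symmetric, and for relu the third pass applies the same mask as the backward pass:

```python
import numpy as np

# Softmax Jacobian J_ij = y_i * (delta_ij - y_j) is symmetric.
x = np.array([1.5, -0.3, 0.2, -2.0])
y = np.exp(x - x.max()); y /= y.sum()
J = np.diag(y) - np.outer(y, y)
assert np.allclose(J, J.T)

# Relu: backward pass and third pass both multiply by the mask [x > 0].
mask = (x > 0).astype(float)
upstream = np.array([0.1, 0.2, 0.3, 0.4])      # dL/dx_k from the layer above
grad_back = upstream * mask                    # backward pass
x_tilde_in = np.array([-1.0, 2.0, 3.0, 4.0])   # third-pass input
x_tilde_out = x_tilde_in * mask                # third pass: same operation
```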
Convolution layers
perform 2D filtering of the activation maps with the matrices of weights. Since each element of $x_k$ is a linear combination of elements of $x_{k-1}$, convolution is also a linear transformation. Linearity immediately gives that $f^{**} = f$ and $g^*\left(\frac{\partial L}{\partial x_k}, \tilde{x}_{k-1}\right) = g\left(\tilde{x}_{k-1}, \frac{\partial L}{\partial x_k}\right)$. Therefore the third pass of a convolutional layer repeats its first pass, i.e., it is performed by convolving $\tilde{x}_{k-1}$ with the same filters using the same stride and padding. As with the fully connected layers, we do not add biases to the resulting maps and do not compute their derivatives.
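A pure-numpy sketch of this point (helper name `conv_pass` is ours): the same stride-1, padding-free filtering routine serves both the first and the third pass, with the bias simply omitted on the third one:

```python
import numpy as np

def conv_pass(maps, kernel):
    """Stride-1, no-padding 2D cross-correlation, as on the first pass."""
    kh, kw = kernel.shape
    H, W = maps.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(maps[i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(1)
kernel = rng.normal(size=(3, 3))
x = rng.normal(size=(5, 5))           # first-pass input map
x_tilde = rng.normal(size=(5, 5))     # third-pass map of the same shape

y = conv_pass(x, kernel)              # first pass (bias omitted here)
y_tilde = conv_pass(x_tilde, kernel)  # third pass: same filters, no bias
```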
The scaling layer
aggregates the values over a region to a single value. Typical aggregation functions are mean and max. As follows from their definitions, both of them also perform linear transformations, so $f^{**} = f$. Notice that in the case of the max function this means that on the third pass the same elements of $\tilde{x}_{k-1}$ should be chosen for propagation to $\tilde{x}_k$ as were chosen on the first pass, regardless of what values they have.
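The max case can be sketched as follows (pure numpy, helper name ours): the first pass records the argmax positions, and the third pass routes the third-pass values through exactly those positions:

```python
import numpy as np

def maxpool_passes(x, x_tilde):
    """2x2 max pooling: first pass on x; third pass routes x_tilde through
    the SAME argmax positions chosen on the first pass."""
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    out_tilde = np.zeros_like(out)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = x[i:i+2, j:j+2]
            idx = np.unravel_index(np.argmax(block), block.shape)
            out[i // 2, j // 2] = block[idx]
            # same element is propagated, whatever its value
            out_tilde[i // 2, j // 2] = x_tilde[i:i+2, j:j+2][idx]
    return out, out_tilde

x = np.array([[1., 9., 2., 3.],
              [4., 5., 6., 7.],
              [0., 1., 8., 2.],
              [3., 2., 1., 4.]])
x_tilde = -x   # arbitrary third-pass values
out, out_tilde = maxpool_passes(x, x_tilde)
```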
A.3 Regularization properties of Loss IBP
In the case of loss function (6), we can derive some interesting theoretical properties. Using the Cauchy–Schwarz inequality, we can obtain that
$$\left\|\frac{\partial L}{\partial x_0}\right\| = \left\|\left(\frac{\partial \hat{y}}{\partial x_0}\right)^T \frac{\partial L}{\partial \hat{y}}\right\| \le \left\|\frac{\partial \hat{y}}{\partial x_0}\right\| \left\|\frac{\partial L}{\partial \hat{y}}\right\|.$$
The most common loss functions for the predictions $\hat{y}$ and true labels $y$ are the squared loss $L = \frac{1}{2}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$ and the cross-entropy loss $L = -\sum_{i=1}^{N} y_i \ln \hat{y}_i$, applied to the softmax output layer $\hat{y}_i = e^{z_i}/\sum_j e^{z_j}$. Here $N$ is the number of neurons in the output layer (number of classes), and $\sum_i y_i = 1$. In the first case we have $\left\|\frac{\partial L}{\partial \hat{y}}\right\| = \|\hat{y} - y\|$; in the second case we can show that $\left\|\frac{\partial L}{\partial z}\right\| = \|\hat{y} - y\|$, where $z$ is the vector of output-layer inputs. Therefore, the strength of the Loss IBP regularization decreases when the predictions $\hat{y}$ approach the true labels $y$. This property prevents overregularization when the classifier achieves high accuracy. Notice that if a network has no hidden layers, then $\frac{\partial L}{\partial x_0} = w^T(\hat{y} - y)$, i.e., in this case the penalty term can be considered as a weight decay regularizer, multiplied on the prediction error $\|\hat{y} - y\|$.
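The claim that the cross-entropy gradient norm equals the prediction error norm can be checked numerically (a pure-numpy sketch with our own variable names; `z` denotes the inputs of the softmax output layer):

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=5)           # output-layer inputs (pre-softmax)
y = np.zeros(5); y[2] = 1.0      # one-hot true label

def cross_entropy(zv):
    p = np.exp(zv - zv.max()); p /= p.sum()
    return -np.sum(y * np.log(p))

p = np.exp(z - z.max()); p /= p.sum()   # softmax predictions

# Finite-difference gradient of the loss w.r.t. z
eps = 1e-6
g_num = np.array([
    (cross_entropy(z + eps * np.eye(5)[i]) -
     cross_entropy(z - eps * np.eye(5)[i])) / (2 * eps)
    for i in range(5)
])
# dL/dz = softmax(z) - y, hence ||dL/dz|| = ||p - y||,
# which vanishes as the predictions approach the labels.
```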
For the model of a single neuron we can derive another interesting property. In Bishop (1995) it was demonstrated that for a single neuron with the $L_2$-norm loss function, noise injection is equivalent to weight decay regularization. In Section A.4 we show that if the negative log-loss function is used, noise injection becomes equivalent to the Loss IBP regularizer.
A.4 Noise injection
Assuming Gaussian noise $\varepsilon$, such that $E[\varepsilon_i] = 0$ and $E[\varepsilon_i \varepsilon_j] = \sigma^2 \delta_{ij}$, we can approximate an arbitrary loss function as
$$E_{\varepsilon}\left[L(x + \varepsilon)\right] \approx L(x) + \frac{\sigma^2}{2}\operatorname{Tr}(H),$$
where $\operatorname{Tr}(H)$ is the trace of the Hessian matrix $H$, consisting of the second derivatives of $L$ with respect to the elements of $x$. Solving the differential equation
$$\frac{\partial^2 L}{\partial x_i^2} \propto \left|\frac{\partial L}{\partial x_i}\right|$$
for each $x_i$ independently, we can find the following solution:
$$L(x) = \ln\left(1 + e^{-t\,w^T x}\right),$$
where $t \in \{-1, +1\}$ is the class label for the object $x$. Indeed, denoting $\hat{y} = \left(1 + e^{-t\,w^T x}\right)^{-1}$, we obtain the first derivatives:
$$\frac{\partial L}{\partial x_i} = -t\,w_i\,(1 - \hat{y}). \quad (17)$$
Now we can compute the second derivatives:
$$\frac{\partial^2 L}{\partial x_i^2} = w_i^2\,\hat{y}\,(1 - \hat{y}). \quad (18)$$
Notice that the last expression contains the factor $\hat{y}(1-\hat{y})$ instead of $(1-\hat{y})$. However, if $\hat{y} \to 1$, then $\hat{y}(1-\hat{y}) \to (1-\hat{y})$, so the expressions (17) and (18) become equal in magnitude up to the weight factor. Therefore, when the negative log-likelihood function is applied to a single neuron without a nonlinear transfer function, the Gaussian noise, added to the input vector $x$, is equivalent to the IBP regularization term $\left\|\frac{\partial L}{\partial x}\right\|_1$. This result is supported by the discussion in Fawzi et al. (2015), where the authors show that for a linear classifier the robustness to adversarial examples is bounded from below by the robustness to random noise. However, since $\frac{\sigma^2}{2}\operatorname{Tr}(H)$ is only the expected value of the noise term, the quality of the approximation also depends on the number of iterations.
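The second-order expansion above can be sanity-checked by Monte-Carlo simulation for the negative log-likelihood of a linear neuron (a sketch under our notation; `t` is the class label in $\{-1, +1\}$ and the analytic trace $\hat{y}(1-\hat{y})\|w\|^2$ follows from the second derivatives):

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=4)
x = rng.normal(size=4)
t = 1.0            # class label in {-1, +1}
sigma = 0.1        # noise standard deviation

def loss(v):
    # negative log-likelihood of a linear neuron
    return np.log1p(np.exp(-t * (w @ v)))

yhat = 1.0 / (1.0 + np.exp(-t * (w @ x)))
trace_H = yhat * (1.0 - yhat) * (w @ w)   # analytic trace of the Hessian

# Monte-Carlo estimate of E[L(x + eps)] vs. the second-order expansion
eps = sigma * rng.normal(size=(200_000, 4))
mc = np.mean(np.log1p(np.exp(-t * ((x + eps) @ w))))
approx = loss(x) + 0.5 * sigma**2 * trace_H
# mc and approx agree to within Monte-Carlo error
```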
A.5 Equivalence of Loss IBP and Fast TBP
In Section 4.1 we showed that the gradient of the loss along the transformation can be efficiently computed by multiplying the gradient $\frac{\partial L}{\partial x_0}$, obtained at the end of the backward pass, on the tangent vector $T$. We can demonstrate that Loss IBP with the additional loss function $\tilde{L} = T^T \frac{\partial L}{\partial x_0}$ is equivalent to Fast Tangent BP with the additional loss function (10).
In Fast Tangent BP we perform an additional iteration of backpropagation through the linearized network, applied to a tangent vector $T$. Denoting by $J_k = \frac{\partial x_k}{\partial x_{k-1}}$ the Jacobian of layer $k$, the additional forward pass computes the following values:
$$t_0 = T, \qquad t_k = J_k\,t_{k-1} = J_k \cdots J_1\,T.$$
On the additional backward pass the computed gradients are therefore
$$d_n = \frac{\partial L}{\partial x_n}, \qquad d_k = J_{k+1}^T\,d_{k+1} = J_{k+1}^T \cdots J_n^T\,\frac{\partial L}{\partial x_n}.$$
According to (2), the weight gradients are then
$$\frac{\partial \tilde{L}_{TBP}}{\partial w_k} = d_k\,t_{k-1}^T = J_{k+1}^T \cdots J_n^T\,\frac{\partial L}{\partial x_n}\left(J_{k-1} \cdots J_1\,T\right)^T.$$
We thus see that in order to compute the additional weight derivatives $\frac{\partial \tilde{L}_{TBP}}{\partial w_k}$, we need to compute the cumulative Jacobian products from both sides of the network.
Let us now compute the same gradients for Loss IBP with $\tilde{L} = T^T \frac{\partial L}{\partial x_0}$. In this case we initialize the third pass by the tangent vector $T$, i.e., $\tilde{x}_0 = T$. Thus the third pass values are
$$\tilde{x}_k = J_k\,\tilde{x}_{k-1} = J_k \cdots J_1\,T = t_k.$$
According to (16), the gradients are
$$\frac{\partial \tilde{L}}{\partial w_k} = g\left(\tilde{x}_{k-1}, \frac{\partial L}{\partial x_k}\right) = \frac{\partial L}{\partial x_k}\,\tilde{x}_{k-1}^T = J_{k+1}^T \cdots J_n^T\,\frac{\partial L}{\partial x_n}\left(J_{k-1} \cdots J_1\,T\right)^T.$$
Therefore, the weight gradients of both algorithms are the same, so the algorithms are equivalent.
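For a two-layer linear network the equivalence can also be checked directly (our numpy sketch; for linear layers the Jacobians are the weight matrices themselves):

```python
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
d2 = rng.normal(size=2)    # dL/dx2: main-loss gradient at the output
T = rng.normal(size=3)     # tangent vector at the input

# Fast Tangent BP: tangent forward pass, then backward pass seeded with d2.
t1 = W1 @ T                # tangent after layer 1
d1 = W2.T @ d2             # gradient propagated down to layer 1 output
dW2_tbp = np.outer(d2, t1)
dW1_tbp = np.outer(d1, T)

# Loss IBP with Ltilde = T^T (dL/dx0): usual backward pass, then a third
# pass seeded with T and propagated with the same weights (f** = f).
grad_x1 = W2.T @ d2        # backward pass
x0_tilde, x1_tilde = T, W1 @ T   # third pass
dW1_ibp = np.outer(grad_x1, x0_tilde)
dW2_ibp = np.outer(d2, x1_tilde)

same = np.allclose(dW1_tbp, dW1_ibp) and np.allclose(dW2_tbp, dW2_ibp)
```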