
How to Initialize your Network?
Robust Initialization for WeightNorm & ResNets

Devansh Arpit†*, Víctor Campos‡*, Yoshua Bengio§
†Salesforce Research, ‡Barcelona Supercomputing Center,
§Montréal Institute for Learning Algorithms, Université de Montréal, CIFAR Senior Fellow
devansharpit@gmail.com, victor.campos@bsc.es
*Equal contribution. Work done while Víctor Campos was an intern at Salesforce Research.
Abstract

Residual networks (ResNets) and weight normalization play an important role in various deep learning applications. However, parameter initialization strategies have not been studied previously for weight normalized networks and, in practice, initialization methods designed for un-normalized networks are used as a proxy. Similarly, initialization for ResNets has been studied only for un-normalized networks, and often under simplified settings that ignore the shortcut connection. To address these issues, we propose a novel parameter initialization strategy that avoids exploding or vanishing information across layers for weight normalized networks, with and without residual connections. The proposed strategy is based on a theoretical analysis using a mean field approximation. We run over 2,500 experiments and evaluate our proposal on image datasets, showing that the proposed initialization outperforms existing initialization methods in terms of generalization performance, robustness to hyper-parameter values, and variance between seeds, especially as networks get deeper, where existing methods fail to even start training. Finally, we show that using our initialization in conjunction with learning rate warmup is able to reduce the gap between the performance of weight normalized and batch normalized networks.

 

Preprint. Under review.

1 Introduction

Parameter initialization is an important aspect of deep network optimization and plays a crucial role in determining the quality of the final model. In order for deep networks to learn successfully using gradient descent based methods, information must flow smoothly in both the forward and backward directions (Glorot and Bengio, 2010; He et al., 2016; Hanin and Rolnick, 2018; Hanin, 2018). A parameter scale that is too large or too small leads to information exploding or vanishing across hidden layers in both directions, which can leave the loss stuck at initialization or cause it to diverge quickly at the beginning of training. Beyond these characteristics near the point of initialization itself, we argue that the choice of initialization also has an impact on the final generalization performance. This non-trivial relationship between initialization and final performance emerges because good initializations allow the use of larger learning rates, which have been shown in existing literature to correlate with better generalization (Jastrzebski et al., 2017; Smith and Le, 2018; Smith et al., 2018).

Weight normalization Salimans and Kingma (2016) accelerates convergence of stochastic gradient descent optimization by re-parameterizing weight vectors in neural networks. However, previous works have not studied initialization strategies for weight normalization and it is a common practice to use initialization schemes designed for un-normalized networks as a proxy. We study initialization conditions for weight normalized ReLU networks, and propose a new initialization strategy for both plain and residual architectures.

The main contribution of this work is the theoretical derivation of a novel initialization strategy for weight normalized ReLU networks, with and without residual connections, that prevents information flow from exploding/vanishing in forward and backward pass. Extensive experimental evaluation shows that the proposed initialization increases robustness to network depth, choice of hyper-parameters and seed. When combining the proposed initialization with learning rate warmup, we are able to use learning rates as large as the ones used with batch normalization Ioffe and Szegedy (2015) and significantly reduce the generalization gap between weight and batch normalized networks reported in the literature Gitman and Ginsburg (2017); Shang et al. (2017). Further analysis reveals that our proposal initializes networks in regions of the parameter space that have low curvature, thus allowing the use of large learning rates which are known to correlate with better generalization (Jastrzebski et al., 2017; Smith and Le, 2018; Smith et al., 2018).

2 Background and Existing Work

Weight Normalization: previous works have considered re-parameterizations that normalize weights in neural networks as a means to accelerate convergence. In Arpit et al. (2016), the pre- and post-activations are scaled/summed with constants depending on the activation function, ensuring that the hidden activations have 0 mean and unit variance, especially at initialization. However, their work makes assumptions on the distribution of the input and the pre-activations of the hidden layers in order to make these guarantees. Weight normalization Salimans and Kingma (2016) is a simpler alternative, and the authors propose a data-dependent initialization for the introduced re-parameterization. This operation improves the flow of information, but its dependence on statistics computed from a batch of data may make it sensitive to the samples used to estimate the initial values.

Residual Network Architecture: residual networks (ResNets) He et al. (2016) have become a cornerstone of deep learning due to their state-of-the-art performance in various applications. However, ResNets trained with weight normalization instead of batch normalization Ioffe and Szegedy (2015) have been shown to generalize significantly worse. For instance, Gitman and Ginsburg (2017) and Shang et al. (2017) have shown that ResNets with weight normalization suffer from severe over-fitting, and have concluded that batch normalization has an implicit regularization effect.

Initialization strategies: there exists extensive literature studying initialization schemes for un-normalized plain networks (c.f. Glorot and Bengio (2010); He et al. (2015); Saxe et al. (2014); Poole et al. (2016); Pennington et al. (2017, 2018), to name some of the most prominent ones). Similarly, previous works have studied initialization strategies for un-normalized ResNets Hanin and Rolnick (2018); Taki (2017); Tarnowski et al. (2018), but they lack large scale experiments demonstrating the effectiveness of the proposed approaches and consider a simplified ResNet setup where shortcut connections are ignored, even though they play an important role Jastrzębski et al. (2018). Zhang et al. (2019) propose an initialization scheme for un-normalized ResNets which involves initializing the different types of layers individually using carefully designed schemes. They provide large scale experiments on various datasets, and show that the generalization gap between batch normalized ResNets and un-normalized ResNets can be reduced when using their initialization along with additional domain-specific regularization techniques like cutout DeVries and Taylor (2017) and mixup Zhang et al. (2018). All the aforementioned works consider un-normalized networks and, to the best of our knowledge, there has been no formal analysis of initialization strategies for weight normalized networks that allow a smooth flow of information in the forward and backward pass.

3 Weight Normalized ReLU Networks

We derive initialization schemes for weight normalized networks in the asymptotic setting where network width tends to infinity, similarly to previous analyses for un-normalized networks Glorot and Bengio (2010); He et al. (2016). We define an $L$-layer weight normalized ReLU network recursively, where the $l$-th hidden layer's activation is given by,

$$\mathbf{h}^{l} = \max\left(\mathbf{0},\, \mathbf{a}^{l}\right), \qquad \mathbf{a}^{l} = \mathbf{g}^{l} \odot \left(\hat{\mathbf{W}}^{l}\mathbf{h}^{l-1}\right) + \mathbf{b}^{l}, \qquad \mathbf{h}^{0} = \mathbf{x}, \tag{1}$$

where $\mathbf{a}^{l}$ are the pre-activations, $\mathbf{h}^{l}$ are the hidden activations, $\mathbf{x}$ is the input to the network, $\hat{\mathbf{W}}^{l}$ are the weight matrices, $\mathbf{b}^{l}$ are the bias vectors, and $\mathbf{g}^{l}$ is a scale factor (the weight normalization gain). We denote the set of all learnable parameters as $\boldsymbol{\theta}$. The notation $\hat{\mathbf{W}}^{l}$ implies that each row vector of $\hat{\mathbf{W}}^{l}$ has unit norm, i.e.,

$$\left\|\hat{\mathbf{W}}^{l}_{i:}\right\|_2 = 1 \quad \forall\, i, \tag{2}$$

thus $\mathbf{g}^{l}$ controls the norm of each weight vector, whereas $\hat{\mathbf{W}}^{l}$ controls its direction. Finally, we will use the notation $\mathcal{L}(\cdot)$ to represent a differentiable loss function over the output of the network.
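To make the reconstructed parameterization of Eqs. 1-2 concrete, the following minimal numpy sketch implements a single weight normalized ReLU layer; the function and variable names follow the notation above and are illustrative, not code released with the paper.

```python
import numpy as np

def wn_relu_layer(h_prev, W, g, b):
    """One weight normalized ReLU layer (Eqs. 1-2, reconstructed notation).

    W is an arbitrary (fan_out, fan_in) matrix; its rows are normalized to
    unit norm so that the gain g alone controls the scale of each weight
    vector, while the normalized rows control its direction.
    """
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm rows (Eq. 2)
    a = g * (W_hat @ h_prev) + b                          # pre-activations (Eq. 1)
    return np.maximum(0.0, a)                             # ReLU hidden activations
```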

Forward pass: we first study the forward pass and derive an initialization scheme such that, for any given input, the norm of the hidden activations at any layer is asymptotically equal to the norm of the input. Failure to achieve this prevents training from beginning, as studied by Hanin and Rolnick (2018) for vanilla deep feedforward networks. The theorem below shows that a normalized linear transformation followed by a ReLU non-linearity is a norm preserving transform in expectation when proper scaling is used.

Theorem 1

Let $\mathbf{v} = \max(\mathbf{0}, \hat{\mathbf{W}}\mathbf{u})$, where $\hat{\mathbf{W}} \in \mathbb{R}^{n \times m}$ has unit-norm rows and $\mathbf{u} \in \mathbb{R}^{m}$. If each row $\hat{\mathbf{W}}_{i:} \sim \mathcal{D}$, where $\mathcal{D}$ is any isotropic distribution in $\mathbb{R}^{m}$, or alternatively $\hat{\mathbf{W}}$ is a randomly generated matrix with orthogonal rows, then for any fixed vector $\mathbf{u}$, $\mathbb{E}_{\hat{\mathbf{W}}}\!\left[\|\mathbf{v}\|^{2}\right] = \frac{n\,c}{2m}\,\|\mathbf{u}\|^{2}$, where the constant $c$ depends only on $m$ and is given by,

(3)

and $S_{m}$ is the surface area of a unit $m$-dimensional sphere.

The constant $c$ seems hard to evaluate analytically, but remarkably, we empirically find that $c \approx 1$ for all integer values of $m$. Thus, applying the above theorem to each layer in Eq. 1 implies that every hidden layer in a weight normalized ReLU network is norm preserving for an infinitely wide network if the elements of $\mathbf{g}^{l}$ are initialized to $\sqrt{2m/n}$, where $m$ and $n$ are the layer's fan-in and fan-out. Therefore, we can recursively apply the above argument to each layer in a normalized deep ReLU network, starting from the input to the last layer, and conclude that the network output norm is approximately equal to the input norm, i.e. $\|\mathbf{h}^{L}\| \approx \|\mathbf{x}\|$. Figure 1 (top left) shows a synthetic experiment with a 20 layer weight normalized MLP that empirically confirms the above theory. Details for this experiment can be found in the supplementary material.
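As a quick numerical check of this claim (a sketch, assuming the reconstructed gain value $\sqrt{2m/n}$ and rows made orthonormal via a QR decomposition), one can verify in numpy that a weight normalized ReLU layer approximately preserves the squared norm of a fixed input:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 512, 512                      # fan-in, fan-out (wide layer)
u = rng.normal(size=m)               # fixed input vector

ratios = []
for _ in range(200):
    Q, _ = np.linalg.qr(rng.normal(size=(m, n)))  # random orthonormal columns
    W_hat = Q.T                      # n unit-norm, mutually orthogonal rows
    g = np.sqrt(2.0 * m / n)         # reconstructed forward-pass gain
    v = np.maximum(0.0, g * (W_hat @ u))
    ratios.append(v @ v / (u @ u))

print(np.mean(ratios))               # close to 1: squared norm is preserved
```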

Figure 1: Experiments on weight normalized networks using synthetic data to confirm theoretical predictions. Top: feed forward networks. Bottom: residual networks. The proposed initialization prevents explosion/vanishing of the norm of hidden activations (left) and gradients (right) across layers at initialization. For ResNets, norm growth stays bounded by a small constant for an arbitrary depth network. Naive initialization results in vanishing/exploding signals.

Backward pass: the goal of studying the backward pass is to derive conditions under which gradients neither explode nor vanish, which is essential for gradient descent based training. Therefore, we are interested in the value of $\left\|\partial\mathcal{L}/\partial\mathbf{h}^{l}\right\|$ for different layers, indexed by $l$. To prevent exploding/vanishing gradients, the value of this term should be similar for all layers. We begin by writing the recursive relation between the value of this derivative for consecutive layers,

$$\frac{\partial\mathcal{L}}{\partial\mathbf{h}^{l-1}} = \left(\hat{\mathbf{W}}^{l}\right)^{\!T}\!\left(\mathbf{g}^{l} \odot \frac{\partial\mathcal{L}}{\partial\mathbf{a}^{l}}\right) \tag{4}$$
$$\qquad\quad = \left(\hat{\mathbf{W}}^{l}\right)^{\!T}\!\left(\mathbf{g}^{l} \odot \mathbf{1}\!\left(\mathbf{a}^{l} > \mathbf{0}\right) \odot \frac{\partial\mathcal{L}}{\partial\mathbf{h}^{l}}\right) \tag{5}$$

We note that, conditioned on a fixed $\mathbf{h}^{l-1}$, each dimension of $\mathbf{1}(\mathbf{a}^{l} > \mathbf{0})$ in the above equation follows an i.i.d. Bernoulli distribution with probability 0.5 at initialization. This is formalized in Lemma 1 in the supplementary material. We now consider the following theorem,

Theorem 2

Let $\mathbf{v} = \hat{\mathbf{W}}^{T}\!\left(\mathbf{p} \odot \mathbf{u}\right)$, where $\hat{\mathbf{W}} \in \mathbb{R}^{n \times m}$ has unit-norm rows, $\mathbf{p}_{i} \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(0.5)$, and $\mathbf{u} \in \mathbb{R}^{n}$. If each $\hat{\mathbf{W}}_{i:} \sim \mathcal{D}$, where $\mathcal{D}$ is any isotropic distribution in $\mathbb{R}^{m}$, or alternatively $\hat{\mathbf{W}}$ is a randomly generated matrix with orthogonal rows and $n \le m$, then for any fixed vector $\mathbf{u}$, $\mathbb{E}\!\left[\|\mathbf{v}\|^{2}\right] = \frac{1}{2}\|\mathbf{u}\|^{2}$.

In order to apply the above theorem to Eq. 5, we assume that $\partial\mathcal{L}/\partial\mathbf{h}^{l}$ is independent of the other terms, similar to He et al. (2016). This simplifies the analysis by allowing us to treat $\partial\mathcal{L}/\partial\mathbf{h}^{l}$ as fixed and take the expectation w.r.t. the other terms, i.e. over $\hat{\mathbf{W}}^{l}$ and $\mathbf{1}(\mathbf{a}^{l} > \mathbf{0})$. Thus $\mathbb{E}\!\left[\|\partial\mathcal{L}/\partial\mathbf{h}^{l-1}\|^{2}\right] = \|\partial\mathcal{L}/\partial\mathbf{h}^{l}\|^{2}$ if we initialize $\mathbf{g}^{l} = \sqrt{2}\cdot\mathbf{1}$. This also shows that the backward transformation through a layer is a norm preserving transform. Hence, applying this theorem recursively to Eq. 5 for all $l$ yields $\mathbb{E}\!\left[\|\partial\mathcal{L}/\partial\mathbf{h}^{0}\|^{2}\right] \approx \|\partial\mathcal{L}/\partial\mathbf{h}^{L}\|^{2}$, thereby avoiding gradient explosion/vanishing. Note that the above result is strictly better for orthogonal weight matrices compared with other isotropic distributions (see proof). Figure 1 (top right) shows a synthetic experiment with a 20 layer weight normalized MLP to confirm the above theory. The details for this experiment are provided in the supplementary material.
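The backward-pass condition can be checked numerically in the same spirit (a sketch under the reconstructed gain $\sqrt{2}$; the Bernoulli gate stands in for the ReLU derivative, as in Lemma 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 512
delta = rng.normal(size=n)                  # fixed gradient w.r.t. the layer output

ratios = []
for _ in range(200):
    Q, _ = np.linalg.qr(rng.normal(size=(m, n)))
    W_hat = Q.T                             # unit-norm, mutually orthogonal rows
    p = rng.integers(0, 2, size=n)          # Bernoulli(0.5) ReLU gate (Lemma 1)
    g = np.sqrt(2.0)                        # reconstructed backward-pass gain
    v = W_hat.T @ (g * p * delta)           # backward transform of Eq. 5
    ratios.append(v @ v / (delta @ delta))

print(np.mean(ratios))                      # close to 1: gradients neither explode nor vanish
```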

We also point out that the factor of $\frac{1}{2}$ that appears in Theorems 1 and 2 (equivalently, the $\sqrt{2}$ in the gain) is due to the presence of the ReLU activation. In the absence of ReLU, this factor is 1. We will use this fact in the next section with the ResNet architecture.

Implementation details: since there is a discrepancy between the initialization required by the forward and backward pass, we tested both (and combinations of them) in our preliminary experiments and found the one proposed for the forward pass to be superior. We therefore propose to initialize weight matrices to be orthogonal (we note that Saxe et al. (2014) propose to initialize the weights of un-normalized deep ReLU networks to be orthogonal with scale $\sqrt{2}$; our derivation and proposal is for weight normalized ReLU networks, where we study both Gaussian and orthogonal initialization and show the latter is superior), $\mathbf{b} = \mathbf{0}$, and $g_{i} = \sqrt{2\cdot\text{fan-in}/\text{fan-out}}$, where fan-in and fan-out represent the input and output dimensionality of the layer respectively. Our results apply to both fully-connected and convolutional networks (for convolutional layers with kernel size $k$, we define fan-in $= k^{2}c_{\text{in}}$ and fan-out $= k^{2}c_{\text{out}}$, following He et al. (2015)).
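The proposed scheme can be applied to a weight normalized PyTorch layer roughly as follows. This is a sketch rather than the authors' released code: the helper name `proposed_init` is ours, the gain expression is the fan-in/fan-out value reconstructed above, and `weight_g`/`weight_v` are the parameters exposed by `torch.nn.utils.weight_norm`.

```python
import math
import torch
import torch.nn as nn

def proposed_init(layer, relu_follows=True):
    """Initialize a weight normalized Linear/Conv2d layer as described above.

    Assumes the layer has been wrapped with torch.nn.utils.weight_norm, which
    exposes the direction (weight_v) and gain (weight_g) parameters. The gain
    sqrt(2 * fan_in / fan_out) follows the forward-pass condition; the factor
    2 is dropped when no ReLU follows the layer.
    """
    w = layer.weight_v
    fan_out = w.shape[0]
    fan_in = w[0].numel()                       # k*k*c_in for convs, in_features for Linear
    nn.init.orthogonal_(w)                      # orthogonal direction
    gain = (2.0 if relu_follows else 1.0) * fan_in / fan_out
    with torch.no_grad():
        layer.weight_g.fill_(math.sqrt(gain))   # proposed gain value
        if layer.bias is not None:
            layer.bias.zero_()

# usage sketch
conv = torch.nn.utils.weight_norm(nn.Conv2d(64, 128, kernel_size=3, padding=1))
proposed_init(conv, relu_follows=True)
```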

4 Residual Networks

Similar to the previous section, we derive an initialization strategy for ResNets in the infinite width setting. We define a residual network with $n$ residual blocks and parameters $\boldsymbol{\theta}$, whose output is denoted as $f_{\boldsymbol{\theta}}(\mathbf{x})$, and whose hidden states are defined recursively as,

$$\mathbf{h}^{l} = \mathbf{h}^{l-1} + \alpha\, F_{l}\!\left(\mathbf{h}^{l-1}\right), \qquad \mathbf{h}^{0} = \mathbf{x}, \tag{6}$$

where $\mathbf{x}$ is the input, $\mathbf{h}^{l}$ denotes the hidden representation after applying $l$ residual blocks, and $\alpha$ is a scalar that scales the output of the $l$-th residual block $F_{l}(\cdot)$. Each residual block $F_{l}(\cdot)$ is a feed-forward ReLU network. We discuss how to deal with shortcut connections during initialization separately. We use the notation $\langle\cdot,\cdot\rangle$ to denote the dot product between the argument vectors.

Forward pass: here we derive an initialization strategy for residual networks that prevents information in the forward pass from exploding/vanishing independent of the number of residual blocks, assuming that each residual block is initialized such that it preserves information in the forward pass.

Theorem 3

Let $f_{\boldsymbol{\theta}}$ be a residual network with $n$ residual blocks and output $f_{\boldsymbol{\theta}}(\mathbf{x}) = \mathbf{h}^{n}$. Assume that each residual block $F_{l}(\cdot)$, $l \in \{1, \dots, n\}$, is designed such that at initialization, for any input $\mathbf{h}$ to the residual block, $\langle F_{l}(\mathbf{h}), \mathbf{h}\rangle = 0$ and $\|F_{l}(\mathbf{h})\| = \|\mathbf{h}\|$. If we set $\alpha = 1/\sqrt{n}$, then for any input $\mathbf{x}$,

$$\|\mathbf{x}\|^{2} \;\le\; \left\|f_{\boldsymbol{\theta}}(\mathbf{x})\right\|^{2} \;\le\; e\,\|\mathbf{x}\|^{2}. \tag{7}$$

The first assumption is reasonable because, at initialization, $F_{l}(\cdot)$ is a random transformation in a high dimensional space, which will likely rotate a vector to be nearly orthogonal to itself. To understand the rationale behind the second assumption, $\|F_{l}(\mathbf{h})\| = \|\mathbf{h}\|$, recall that $F_{l}(\cdot)$ is essentially a non-residual network. Therefore, we can initialize each such block using the scheme developed in Section 3, which, due to Theorem 1 (see the discussion below it), guarantees that the norm of the output of $F_{l}(\cdot)$ equals the norm of its input. Figure 1 (bottom left) shows a synthetic experiment with a 40 block weight normalized ResNet to confirm the above theory. The ratio of the norms of output to input stays bounded by a small constant independent of the number of residual blocks, exactly as predicted by the theory. The details for this experiment can be found in the supplementary material.
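A small numpy simulation of Eq. 6 illustrates the bounded norm growth (a sketch assuming the reconstructed scaling $\alpha = 1/\sqrt{n}$ and idealized blocks that are norm preserving and output nearly orthogonal vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_growth(num_blocks, dim=256):
    """Output-to-input norm ratio of Eq. 6 with idealized residual blocks."""
    x = rng.normal(size=dim)
    h = x.copy()
    alpha = 1.0 / np.sqrt(num_blocks)          # reconstructed block scaling
    for _ in range(num_blocks):
        Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
        h = h + alpha * (Q @ h)                # ||Qh|| = ||h||, <Qh, h> ~ 0 in high dim
    return np.linalg.norm(h) / np.linalg.norm(x)

for n in (10, 100, 1000):
    print(n, norm_growth(n))                   # stays bounded by a small constant for any depth
```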

Backward pass: we now study the backward pass for residual networks.

Theorem 4

Let $f_{\boldsymbol{\theta}}$ be a residual network with $n$ residual blocks and output $f_{\boldsymbol{\theta}}(\mathbf{x}) = \mathbf{h}^{n}$. Assume that each residual block $F_{l}(\cdot)$, $l \in \{1, \dots, n\}$, is designed such that at initialization, for any fixed vector $\boldsymbol{\delta}$ of appropriate dimensions, $\left\langle \left(\frac{\partial F_{l}(\mathbf{h}^{l-1})}{\partial \mathbf{h}^{l-1}}\right)^{\!T}\boldsymbol{\delta},\, \boldsymbol{\delta} \right\rangle = 0$ and $\left\|\left(\frac{\partial F_{l}(\mathbf{h}^{l-1})}{\partial \mathbf{h}^{l-1}}\right)^{\!T}\boldsymbol{\delta}\right\| = \|\boldsymbol{\delta}\|$. If $\alpha = 1/\sqrt{n}$, then for any differentiable loss $\mathcal{L}$ defined on the network output,

$$\left\|\frac{\partial\mathcal{L}}{\partial\mathbf{h}^{n}}\right\|^{2} \;\le\; \left\|\frac{\partial\mathcal{L}}{\partial\mathbf{x}}\right\|^{2} \;\le\; e\,\left\|\frac{\partial\mathcal{L}}{\partial\mathbf{h}^{n}}\right\|^{2}. \tag{8}$$

The above theorem shows that scaling the output of each residual block with $\alpha = 1/\sqrt{n}$ prevents explosion/vanishing of gradients irrespective of the number of residual blocks. The rationale behind the assumptions is similar to that given for the forward pass above. Figure 1 (bottom right) shows a synthetic experiment with a 40 block weight normalized ResNet to confirm the above theory. Once again, the ratio of the norms of the gradient w.r.t. the input and the gradient w.r.t. the output stays bounded by a small constant independent of the number of residual blocks, exactly as predicted by the theory. The details can be found in the supplementary material.

Shortcut connections: a ResNet often has multiple stages He et al. (2016), where the $i$-th stage is characterized by one shortcut connection and $n_{i}$ residual blocks, leading to a total of $\sum_{i} n_{i}$ blocks. In order to account for shortcut connections, we need to ensure that the input and output of each stage in a ResNet are at the same scale; the same argument applies during the backward pass. To achieve this, we scale the output of the residual blocks in each stage using the total number of residual blocks in that stage, i.e. $\alpha = 1/\sqrt{n_{i}}$ for the $i$-th stage. Theorems 3 and 4 then treat each stage of the network as a ResNet and normalize the flow of information in both directions to be independent of the number of residual blocks.

Implementation details: we consider ResNets with shortcut connections and an architecture design similar to that proposed in He et al. (2016), with the exception that our residual block structure is Conv-ReLU-Conv, similar to the blocks in (Zagoruyko and Komodakis, 2016), as illustrated in the supplementary material (more generally, our residual block design principle is [Conv-ReLU]$\times k$-Conv with $k \ge 1$). Weights of all layers in the network are initialized to be orthogonal and biases are set to zero. The gain parameter of weight normalization is initialized as derived in Section 3, with $g = \sqrt{2\cdot\text{fan-in}/\text{fan-out}}$ when the layer is followed by a ReLU and $g = \sqrt{\text{fan-in}/\text{fan-out}}$ otherwise. For the last convolutional layer of each residual block in the $i$-th stage, we additionally scale the gain by $1/\sqrt{n_{i}}$, thereby absorbing $\alpha$ in Eq. 6 into the gain parameter $g$.
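Putting the pieces together, a residual block could be initialized along the following lines. This is a sketch under stated assumptions: the helper `wn_gain` and the block attributes `conv1`/`conv2` are hypothetical names, the convolutions are assumed to be wrapped with `torch.nn.utils.weight_norm`, and the $1/\sqrt{n_i}$ factor is the reconstructed absorption of $\alpha$ into the gain.

```python
import math
import torch
import torch.nn as nn

def wn_gain(layer, relu_follows):
    """Proposed gain for a weight normalized layer (see Section 3)."""
    w = layer.weight_v
    fan_out, fan_in = w.shape[0], w[0].numel()
    return math.sqrt((2.0 if relu_follows else 1.0) * fan_in / fan_out)

def init_residual_block(block, blocks_in_stage):
    """Initialize one [Conv -> ReLU -> Conv] weight normalized residual block.

    block.conv1 / block.conv2 are assumed to be Conv2d modules already wrapped
    with torch.nn.utils.weight_norm. The gain of the last convolution carries
    an extra 1/sqrt(n_i) factor, absorbing alpha from Eq. 6.
    """
    with torch.no_grad():
        for conv, relu_follows in ((block.conv1, True), (block.conv2, False)):
            nn.init.orthogonal_(conv.weight_v)            # orthogonal direction
            conv.weight_g.fill_(wn_gain(conv, relu_follows))
            if conv.bias is not None:
                conv.bias.zero_()
        block.conv2.weight_g.div_(math.sqrt(blocks_in_stage))
```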

5 Experiments

We study the impact of initialization on weight normalized networks across a wide variety of configurations. Among others, we compare against the data-dependent initialization proposed by Salimans and Kingma (2016), which initializes $\mathbf{g}$ and $\mathbf{b}$ so that all pre-activations in the network have zero mean and unit variance, based on estimates collected from a single minibatch of data.

We refer the reader to the supplementary material for detailed description of the hyperparameter settings for each experiment, as well as for initial reinforcement learning results.

5.1 Robustness Analysis of Initialization Methods: Depth, Hyper-parameters and Seed

The difficulty of training due to exploding and vanishing gradients increases with network depth. In practice, depth often complicates the search for hyperparameters that enable successful optimization, if any exist. This section presents a thorough evaluation of the impact of initialization on different network architectures for increasing depths, as well as their robustness to hyperparameter configurations. We benchmark fully-connected networks on MNIST (Lecun and Cortes), whereas CIFAR-10 (Krizhevsky, 2009) is considered for convolutional and residual networks. We tune hyperparameters individually for each network depth and initialization strategy on a set of held-out examples, and report results on the test set. We refer the reader to the supplementary material for a detailed description of the considered hyperparameters.

Fully-connected networks: results in Figure 2 (left) show that the data-dependent initialization can be used to train networks of up to depth 20, but training diverges for deeper nets even when using very small learning rates. On the other hand, we managed to successfully train very deep networks with up to 200 layers using the proposed initialization. When analyzing all runs in the grid search, we observe that the proposed initialization is more robust to the particular choice of hyperparameters (Figure 2, right). In particular, the proposed initialization allows using considerably larger learning rates for most depths.

Figure 2: Results for MLPs on MNIST. Dashed lines denote train accuracy, and solid lines denote test accuracy. A held-out validation set is used to select the best model for each configuration. Diverged runs are assigned the minimum accuracy. Left: Accuracy as a function of depth. Right: Robustness to hyperparameter configurations.

Convolutional networks: we adopt a similar architecture to that in Xiao et al. (2018), where all layers share the same kernel size and a fixed width. The first two layers use strided convolutions in order to reduce the memory footprint. Results are depicted in Figure 3 (left) and show a similar trend to that observed for fully-connected nets, with the data-dependent initialization failing to optimize very deep networks.

Residual networks: we construct residual networks of varying depths by controlling the number of residual blocks per stage in the wide residual network (WRN) architecture. Training networks with thousands of layers is computationally intensive, so we measure the test accuracy after a single epoch of training (Zhang et al., 2019). We consider two additional baselines for these experiments: (1) the default initialization in PyTorch (https://pytorch.org/docs/stable/_modules/torch/nn/utils/weight_norm.html), which initializes the gain as the norm of the default weight vector, and (2) a modification of the initialization proposed by Hanin and Rolnick (2018) to fairly adapt it to weight normalized multi-stage ResNets. For the $i$-th stage with $n_{i}$ blocks, this stage-wise Hanin scheme initializes the gain of the last convolutional layer of each block as a function of the block number within the stage. All other parameters are initialized identically to our proposal, so that information across the layers within residual blocks remains preserved. We report results over 5 random seeds for each configuration in Figure 3 (right), which shows that the proposed initialization achieves similar accuracy rates across the wide range of evaluated depths. PyTorch's default initialization diverges for most depths, and the data-dependent baseline converges significantly slower for deeper networks due to the small learning rates required to avoid divergence. Although the stage-wise Hanin strategy and the proposed initialization achieve similar accuracy rates, we were able to use learning rates an order of magnitude larger with the latter, which indicates an increased robustness to hyperparameter configurations.

To further evaluate the robustness of each initialization strategy, we train WRN-40-10 networks for 3 epochs with different learning rates, with and without learning rate warmup (Goyal et al., 2017). We repeat each experiment 20 times using different random seeds, and report the percentage of runs that successfully completed all 3 epochs without diverging in Figure 4. We observed that learning rate warmup greatly improved the range of learning rates that work well for all initializations, but the proposed strategy manages to train more robustly across all tested configurations.
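A minimal version of such a warmup schedule might look as follows (a sketch in the spirit of Goyal et al. (2017); the step counts and base learning rate are illustrative, not the values used in these experiments):

```python
def warmup_lr(step, base_lr, warmup_steps):
    """Linearly ramp the learning rate from ~0 to base_lr over warmup_steps,
    then hold it constant (any decay schedule can take over afterwards)."""
    if step < warmup_steps:
        return base_lr * float(step + 1) / warmup_steps
    return base_lr

# usage sketch with a generic optimizer exposing param_groups (e.g. torch.optim.SGD)
# for step, batch in enumerate(loader):
#     for group in optimizer.param_groups:
#         group["lr"] = warmup_lr(step, base_lr=0.1, warmup_steps=1000)
```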

Figure 3: Accuracy as a function of depth on CIFAR-10 for CNNs (left), and WRNs (right). Dashed lines denote train accuracy, and solid lines denote validation accuracy. Note that WRNs are trained for a single epoch due to the computational burden of training extremely deep networks.
Figure 4: Robustness to seed of different initialization schemes on WRN-40-10. We launch 20 training runs for every configuration, and measure the percentage of runs that reach epoch 3 without diverging. Weight normalized ResNets benefit from learning rate warmup, which enables the usage of higher learning rates. The proposed initialization is the most robust scheme across all configurations.

5.2 Comparison with Batch Normalization

Existing literature has pointed towards an implicit regularization effect of batch normalization (Luo et al., 2019), which prevented weight normalized models from matching the final performance of batch normalized ones (Gitman and Ginsburg, 2017). On the other hand, previous works have shown that larger learning rates facilitate finding wider minima which correlate with better generalization performance (Keskar et al., 2016; Jastrzebski et al., 2017; Smith and Le, 2018; Smith et al., 2018), and the proposed initialization and learning rate warmup have proven very effective in stabilizing training for high learning rates. This section aims at evaluating the final performance of weight normalized networks trained with high learning rates, and compare them with batch normalized networks.

We evaluate models on CIFAR-10 and CIFAR-100. Unlike previous works which use the test set for hyperparameter tuning (He et al., 2016; Zagoruyko and Komodakis, 2016), we set aside 10% of the training data for such purpose. Since we do not optimize hyperparameters on the test set and use a smaller training set, the achieved error rates are slightly larger than those reported in the literature. For each architecture we use the default hyperparameters reported in literature for batch normalized networks, and tune only the initial learning rate for weight normalized models.

Results in Table 1 show that the proposed initialization scheme, when combined with learning rate warmup, allows weight normalized residual networks to achieve comparable error rates to their batch normalized counterparts. We note that previous works reported a large generalization gap between weight and batch normalized networks (Shang et al., 2017; Gitman and Ginsburg, 2017). The only architecture for which the batch normalized variant achieves superior performance is WRN-40-10, for which the weight normalized version is not able to completely fit the training set before reaching the epoch limit. This phenomenon is different from the generalization gap reported in previous works, and might be caused by sub-optimal learning rate schedules that were tailored for networks with batch normalization.

Dataset     Architecture  Method                                 Test Error (%)
CIFAR-10    ResNet-56     BN (He et al. (2016))                  6.97
                          WN (proposed init + warmup)            7.2 ± 0.12
            ResNet-110    BN (He et al. (2016))                  6.61 ± 0.16
                          WN (proposed init + warmup)            6.69 ± 0.11
                          WN (Shang et al. (2017))               7.46
            WRN-40-10     BN (orthogonal init + cutout)          3.53 ± 0.38
                          WN (proposed init + warmup + cutout)   4.75 ± 0.08
CIFAR-100   ResNet-164    BN (orthogonal init + cutout)          25.52 ± 0.17
                          WN (proposed init + warmup + cutout)   25.31 ± 0.26
Table 1: Comparison between weight normalization (WN) with the proposed initialization and batch normalization (BN). Results are reported as mean ± std over 5 runs.

5.3 Initialization Method and Generalization Gap

The motivation behind designing a good parameter initialization is mainly better optimization at the beginning of training, and it is not apparent why our initialization is able to reduce the generalization gap between weight normalized and batch normalized networks Gitman and Ginsburg (2017); Shang et al. (2017). On this note, we point out that a number of papers have shown that using stochastic gradient descent (SGD) with larger learning rates facilitates finding wider minima, which correlate with better generalization performance Keskar et al. (2016); Jastrzebski et al. (2017); Smith and Le (2018); Smith et al. (2018). Additionally, it is often not possible to use large learning rates with weight normalization under traditional initializations. Therefore, we believe that the use of the large learning rates allowed by our initialization played an important role in this aspect. In order to understand why our initialization allows using larger learning rates compared with existing ones, we compute the (log) spectral norm of the Hessian at initialization (using the power method) for the various initialization methods considered in our experiments, using a subset of the training samples. The results are shown in Table 2. We find that the local curvature (spectral norm) is smallest for the proposed initialization. These results are complementary to the seed robustness experiment shown in Figure 4.

Dataset     Model       PyTorch default  Data-dependent  Stage-wise Hanin  Proposed
CIFAR-10    WRN-40-10   4.68 ± 0.60      3.01 ± 0.02     7.14 ± 0.72       1.31 ± 0.12
CIFAR-100   ResNet-164  9.56 ± 0.54      2.68 ± 0.09     N/A               1.56 ± 0.18
Table 2: Log (base 10) spectral norm of Hessian at initialization for different initializations. Smaller values imply lower curvature. N/A means that the computation diverged. The proposed strategy initializes at a point with lowest curvature, which explains why larger learning rates can be used.
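The curvature numbers above can be estimated with a short power iteration on Hessian-vector products obtained via double backprop. The sketch below is ours (the paper does not specify its implementation); `loss` and `params` are placeholders for a loss computed on a data batch and the list of model parameters.

```python
import torch

def hessian_spectral_norm(loss, params, iters=20):
    """Estimate the largest absolute Hessian eigenvalue via power iteration.

    Hessian-vector products are computed by differentiating the gradient
    (double backprop), so `loss` must be built with a graph kept alive.
    """
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]                          # normalize the probe vector
        dot = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(dot, params, retain_graph=True)
        v = [h.detach() for h in hv]                       # v <- H v
    return torch.sqrt(sum((u * u).sum() for u in v))       # ~ spectral norm of the Hessian
```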

6 Conclusion and Future Work

Weight normalization (WN) is frequently used in different network architectures due to its simplicity. However, the lack of existing theory on parameter initialization of weight normalized networks has led practitioners to arbitrarily pick existing initializations designed for un-normalized networks. To address this issue, we derived parameter initialization schemes for weight normalized networks, with and without residual connections, that avoid exploding or vanishing information in the forward and backward pass. To the best of our knowledge, no prior work has formally studied this setting. Through a thorough empirical evaluation, we showed that the proposed initialization increases robustness to network depth, choice of hyper-parameters and seed compared to existing initialization methods that are not designed specifically for weight normalized networks. We found that the proposed scheme initializes networks in low curvature regions, which enables the use of large learning rates. By doing so, we were able to significantly reduce the performance gap between batch and weight normalized networks that had previously been reported in the literature. We therefore hope that our proposal replaces the current practice of choosing arbitrary initialization schemes for weight normalized networks.

We believe our proposal can also help in achieving better performance using WN in settings which are not well-suited for batch normalization. One such scenario is the training of recurrent networks with backpropagation through time, which often suffers from exploding/vanishing gradients and where batch statistics are timestep-dependent Cooijmans et al. (2016). The current analysis was done for feedforward networks, and we plan to extend it to the recurrent setting. Another application where batch normalization often fails is reinforcement learning, as good estimates of activation statistics are not available due to the online nature of some of these algorithms. We confirmed the benefits of our proposal in preliminary reinforcement learning experiments, which can be found in the supplementary material.

References

  • Arpit et al. (2016) Devansh Arpit, Yingbo Zhou, Bhargava U Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In ICML, 2016.
  • Bellemare et al. (2013) Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2013.
  • Cooijmans et al. (2016) Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
  • DeVries and Taylor (2017) Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
  • Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable distributed deep-rl with importance weighted actor-learner architectures. In ICML, 2018.
  • Gitman and Ginsburg (2017) Igor Gitman and Boris Ginsburg. Comparison of batch normalization and weight normalization algorithms for the large-scale image classification. arXiv preprint arXiv:1709.08145, 2017.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
  • Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
  • Hanin (2018) Boris Hanin. Which neural net architectures give rise to exploding and vanishing gradients? In NeurIPS, 2018.
  • Hanin and Rolnick (2018) Boris Hanin and David Rolnick. How to start training: The effect of initialization and architecture. In NeurIPS, 2018.
  • He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • Ioffe and Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
  • Jastrzebski et al. (2017) Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in sgd. arXiv preprint arXiv:1711.04623, 2017.
  • Jastrzębski et al. (2018) Stanisław Jastrzębski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, and Yoshua Bengio. Residual connections encourage iterative inference. In ICLR, 2018.
  • Keskar et al. (2016) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
  • Krizhevsky (2009) Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
  • Lecun and Cortes. Yann Lecun and Corinna Cortes. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist/.
  • Luo et al. (2019) Ping Luo, Xinjiang Wang, Wenqi Shao, and Zhanglin Peng. Towards understanding regularization in batch normalization. In ICLR, 2019.
  • Mnih et al. (2016) Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
  • Pennington et al. (2017) Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In NIPS, 2017.
  • Pennington et al. (2018) Jeffrey Pennington, Samuel S Schoenholz, and Surya Ganguli. The emergence of spectral universality in deep networks. In AISTATS, 2018.
  • Poole et al. (2016) Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In NIPS, 2016.
  • Salimans and Kingma (2016) Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, 2016.
  • Saxe et al. (2014) Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR, 2014.
  • Schulman et al. (2015) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, 2015.
  • Shang et al. (2017) Wenling Shang, Justin Chiu, and Kihyuk Sohn. Exploring normalization in deep residual networks with concatenated rectified linear units. In AAAI, 2017.
  • Silver et al. (2017) David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 2017.
  • Smith and Le (2018) Samuel L Smith and Quoc V Le. A bayesian perspective on generalization and stochastic gradient descent. In ICLR, 2018.
  • Smith et al. (2018) Samuel L Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V Le. Don’t decay the learning rate, increase the batch size. In ICLR, 2018.
  • Taki (2017) Masato Taki. Deep residual networks and weight initialization. arXiv preprint arXiv:1709.02956, 2017.
  • Tarnowski et al. (2018) Wojciech Tarnowski, Piotr Warchoł, Stanisław Jastrzębski, Jacek Tabor, and Maciej A Nowak. Dynamical isometry is achieved in residual networks in a universal way for any activation function. arXiv preprint arXiv:1809.08848, 2018.
  • Vinyals et al. (2017) Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, et al. Starcraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782, 2017.
  • Xiao et al. (2018) Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. In ICML, 2018.
  • Zagoruyko and Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
  • Zhang et al. (2018) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.
  • Zhang et al. (2019) Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. In ICLR, 2019.

Appendix A Experimental setup

A.1 Details about Figure 1 (top)

We use a weight normalized 20 layer MLP with 1000 randomly generated input samples. We test three initialization strategies. (1) He initialization [He et al., 2015] for the weight matrices, with the gain parameters of all layers initialized to 1. (2) The proposed initialization, where weights are initialized to be orthogonal and gains are set as described in Section 3. (3) The proposed initialization for the gains, but with weights initialized using He initialization. In all cases biases are set to 0. At initialization itself, we forward propagate the 1000 randomly generated input samples, measure the norm of the hidden activations, and compute the mean and standard deviation of the ratio of the norm of the hidden activation to the norm of the input. This is shown in Figure 1 (top left). In Figure 1 (top right), we similarly record the norm of the hidden activation gradients by backpropagating 1000 random error vectors, and measure the ratio of the norm of the hidden activation gradient to the norm of the error vector. We find that the proposed initialization preserves norms in both directions, while vanilla He initialization fails. This shows the importance of properly initializing the gain parameter of weight normalization.
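For reference, a compact numpy version of this synthetic check is sketched below, using the reconstructed gain values ($\sqrt{2}$ for the proposed scheme when fan-in equals fan-out, and 1 for the naive baseline); the width and number of trials are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 20, 500

def depth_ratio(proposed):
    """Ratio of hidden-activation norm to input norm after `depth` WN-ReLU layers."""
    x = rng.normal(size=width)
    h = x.copy()
    for _ in range(depth):
        Q, _ = np.linalg.qr(rng.normal(size=(width, width)))
        g = np.sqrt(2.0) if proposed else 1.0    # gain; fan-in == fan-out here
        h = np.maximum(0.0, g * (Q @ h))
    return np.linalg.norm(h) / np.linalg.norm(x)

print("proposed gain:", np.mean([depth_ratio(True) for _ in range(20)]))   # ~1
print("gain set to 1:", np.mean([depth_ratio(False) for _ in range(20)]))  # collapses toward 0
```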

A.2 Details about Figure 1 (bottom)

We use a weight normalized ResNet with 40 residual blocks and 1000 randomly generated input samples. The network architecture is exactly as described in Eq. 6, with a residual block composed of two fully connected (FC) layers, i.e. FC1 - ReLU - FC2. The weight normalization layers are inserted after the FC layers. We test three initialization strategies. (1) He initialization [He et al., 2015] for all the weight matrices, and gain parameters set to 1. (2) The proposed initialization, where weights are initialized to be orthogonal and gains are set as described in Section 4 (the gain of FC1 follows the Section 3 rule for ReLU-followed layers, and the gain of FC2 additionally includes the residual scaling). (3) The proposed initialization for the gains, but with weights initialized using He initialization. In all cases biases are set to 0. At initialization itself, we forward propagate the 1000 randomly generated input samples, measure the norm of the hidden activations, and compute the mean and standard deviation of the ratio of the norm of the hidden activation to the norm of the input. This is shown in Figure 1 (bottom left). In Figure 1 (bottom right), we similarly record the norm of the hidden activation gradients by backpropagating 1000 random error vectors and measure the ratio of the norm of the hidden activation gradient to the norm of the error vector. We find that the proposed initialization preserves norms in both directions, while vanilla He initialization fails. This shows the importance of properly initializing the gain parameter of weight normalization.

Parameter Value
Data split 10% of the original train is set aside for validation purposes
Number of hidden layers
Size of hidden layers
Number of epochs
Initial learning rate
Learning rate schedule Decreased by at epochs and
Batch size
Weight decay
Optimizer SGD with
Table 3: Hyperparameters for MNIST experiments. Values between brackets were used in the grid search. Learning rate of was considered for depths and only.
Parameter Value
Data split 10% of the original train is set aside for validation purposes
Architecture
Global Average Pooling
10-d Linear, softmax
Number of hidden layers (N)
Number of epochs
Initial learning rate
Learning rate schedule Decreased by at epoch
Batch size
Weight decay
Optimizer SGD without momentum
Table 4: Hyperparameters for CNN experiments on CIFAR-10. Values between brackets were used in the grid search. Learning rate of was considered for depth only.
Group name  Output size  Block type
conv1       32×32        [Conv2D 3×3, 16k]
conv2       32×32        N residual blocks
conv3       16×16        N residual blocks
conv4       8×8          N residual blocks
out         1×1          [average pooling, 10-d fc, softmax]
Figure 5: Left: Architecture of Wide Residual Networks considered in this work. Downsampling is performed through strided convolutions by the first layers in groups conv3 and conv4. Right: Structure of a residual block. Note that there is no non-linearity after residual connections, unlike He et al. [2016].
Parameter Value
Data split 10% of the original train is set aside for validation purposes
WRN’s (residual blocks per stage)
WRN’s (width factor)
Number of epochs
Initial learning rate
Batch size
Weight decay
Optimizer SGD with
Table 5: Hyperparameters for WRN experiments on CIFAR-10. Values between brackets were used in the grid search. Learning rates smaller than were considered for (10,000 layers) only.

Appendix B Reinforcement Learning experiments

Despite its tremendous success in supervised learning applications, Batch Normalization Ioffe and Szegedy [2015] is seldom used in reinforcement learning (RL), as the online nature of some of the methods and the strong correlation between consecutive batches hinder its performance. These properties suggest the need for normalization techniques like Weight Normalization Salimans and Kingma [2016], which are able to accelerate and stabilize training of neural networks without relying on minibatch statistics.

We consider the Asynchronous Advantage Actor Critic (A3C) algorithm Mnih et al. [2016], which maintains a policy and a value function estimate that are updated asynchronously by different workers collecting experience in parallel. Updates are estimated based on n-step returns from each worker, resulting in highly correlated batches of samples, whose impact is mitigated through the asynchronous nature of the updates. This setup is not well suited for Batch Normalization and, to the best of our knowledge, no prior work has successfully applied it to this type of algorithm.

We evaluate agents on Atari environments in the Arcade Learning Environment Bellemare et al. [2013]. Our initial experiments with the deep residual architecture introduced by Espeholt et al. [2018] show that adding Weight Normalization improves convergence speed and robustness to hyperparameter configurations across different environments. However, we did not observe important differences between initialization schemes for these weight normalized models. Despite being significantly deeper than previous architectures used in RL, this model is still relatively shallow by supervised learning standards, and we observed in our computer vision experiments that performance differences arise for deeper architectures or high learning rates. The latter are known to cause catastrophic performance degradation in deep RL due to excessively large policy updates Schulman et al. [2015], so we opt for building a much deeper residual network with 100 layers. Collecting experience with such a deep policy is a very slow process even when using GPU workers. Given this computational burden, we use hyperparameters tuned in initial experiments for the deep network introduced by Espeholt et al. [2018], and report initial results in one of the simplest environments in Figure 6 (collecting 7M timesteps of experience took approximately 10h on a single GPU shared by 6 workers; even though this amount of experience is enough to solve Pong, A3C usually needs many more interactions to learn competitive policies in more complex environments).

Figure 6: Learning progress in Pong. Shading shows maximum and minimum over 3 random seeds, while dark lines indicate the mean. Weight normalization with the proposed initialization improves convergence speed and reduces variance across seeds. These results highlight the importance of initialization in weight normalized networks, as using the default initialization in PyTorch prevents training to start.

We observe that the weight normalized policy with the proposed initialization manages to solve the task much faster than the un-normalized architecture. Perhaps surprisingly, the weight normalized policy with the sub-optimal initialization is not able to solve the environment in the given timestep budget, and it performs even worse than the un-normalized policy. These results highlight the importance of proper initialization even when using normalization techniques.

The deep network architecture considered in this experiment is excessively complex for the considered task, which can be solved with much smaller networks. However, with the development of ever more complex environments Vinyals et al. [2017] and distributed learning algorithms that can take advantage of massive computational resources Espeholt et al. [2018], recent results have shown that RL can benefit from techniques that have found success in the supervised learning community, such as deeper residual networks Silver et al. [2017], Espeholt et al. [2018]. These findings suggest that RL applications could benefit from techniques that help train very deep networks robustly.

Appendix C Proofs

Theorem 5

Let $\mathbf{v} = \max(\mathbf{0}, \hat{\mathbf{W}}\mathbf{u})$, where $\hat{\mathbf{W}} \in \mathbb{R}^{n \times m}$ has unit-norm rows and $\mathbf{u} \in \mathbb{R}^{m}$. If each row $\hat{\mathbf{W}}_{i:} \sim \mathcal{D}$, where $\mathcal{D}$ is any isotropic distribution in $\mathbb{R}^{m}$, or alternatively $\hat{\mathbf{W}}$ is a randomly generated matrix with orthogonal rows, then for any fixed vector $\mathbf{u}$, $\mathbb{E}_{\hat{\mathbf{W}}}\!\left[\|\mathbf{v}\|^{2}\right] = \frac{n\,c}{2m}\,\|\mathbf{u}\|^{2}$, where the constant $c$ depends only on $m$ and is given by,

(9)

and $S_{m}$ is the surface area of a unit $m$-dimensional sphere.

Proof: During the proof, take note of the distinction between the notations and . Our goal is to compute,

(10)
(11)

Suppose the weights are randomly generated to be orthogonal with uniform probability over all rotations. Due to the linearity of expectation, when taking the expectation for any unit $i$ over the randomly generated orthogonal weight matrix, the expectation marginalizes over all the rows of the weight matrix except the $i$-th row. As a consequence, for each unit $i$, the expectation is over an isotropic random variable, since the orthogonal matrix is generated randomly with uniform probability over all rotations. Therefore, we can equivalently write,

(12)

Note that the above equality would trivially hold if all rows of the weight matrix were sampled i.i.d. from an isotropic distribution. In other words, the above equality holds irrespective of which of the two choices of distribution is used for sampling the weight matrix.

We have,

(13)
(14)

where $p(\cdot)$ denotes the probability distribution of the random variable $\hat{\mathbf{W}}_{i:}$, and $\theta$ is the angle between the vectors $\hat{\mathbf{W}}_{i:}$ and $\mathbf{u}$. Hence the integrand is a function of $\theta$. Since $\hat{\mathbf{W}}_{i:}$ is sampled from an isotropic distribution, its direction and scale are independent. Thus,

(15)
(16)
(17)

Since $\mathcal{D}$ is an isotropic distribution in $\mathbb{R}^{m}$, the likelihood of all directions is uniform. This essentially means that the direction of $\hat{\mathbf{W}}_{i:}$ can be seen as a uniform distribution over the surface of a unit sphere in $\mathbb{R}^{m}$. We can therefore re-parameterize the density in terms of the angle $\theta$ by aggregating the density over all points on this sphere at a fixed angle $\theta$ from the vector $\mathbf{u}$. This is similar to the idea of the Lebesgue integral. To achieve this, we note that all the points on the unit sphere at a constant angle $\theta$ from $\mathbf{u}$ lie on a lower-dimensional sphere of radius $\sin\theta$. Thus the aggregate density at an angle $\theta$ from $\mathbf{u}$ is the ratio of the surface area of the sphere of radius $\sin\theta$ to the surface area of the unit sphere. Therefore,

(18)
(19)
(20)
(21)

Now we use a known result from the existing literature, obtained via integration by parts, for the integral of a power of the sine function, which states,

$$\int \sin^{k}\theta\, d\theta \;=\; -\frac{\sin^{k-1}\theta\,\cos\theta}{k} \;+\; \frac{k-1}{k}\int \sin^{k-2}\theta\, d\theta. \tag{22}$$

Since our integration is between the limits $0$ and $\pi$, the first term on the R.H.S. in the above expression is 0. Recursively expanding the power of the sine term, we can similarly eliminate all such terms until we are left with the integral of $\sin\theta$ or $1$, depending on whether the exponent is odd or even. For the case when the exponent is odd, we get,

(23)
(24)
(25)

For the case when the exponent is even, we similarly get,

(26)
(27)
(28)

Thus,

(29)

Define,

(30)

Then,

(31)

Thus,

(32)
(33)
(34)

which proves the claim.

Lemma 1

If the network weights are sampled i.i.d. from a Gaussian distribution with mean 0 and the biases are 0 at initialization, then conditioned on $\mathbf{h}^{l-1}$, each dimension of $\mathbf{1}(\mathbf{a}^{l} > \mathbf{0})$ follows an i.i.d. Bernoulli distribution with probability 0.5 at initialization.

Proof: Note that at initialization the biases are 0 and the weights are sampled i.i.d. from a zero-mean Gaussian. Therefore, each dimension of $\mathbf{a}^{l}$ is simply a weighted sum of i.i.d. zero-mean Gaussians, which is also a zero-mean Gaussian random variable.

To prove the claim, note that the indicator operator applied to a random variable with zero mean and a symmetric distribution will have equal probability mass on both sides of 0, which is the same as a Bernoulli distributed random variable with probability 0.5. Finally, each dimension of $\mathbf{1}(\mathbf{a}^{l} > \mathbf{0})$ is i.i.d. simply because all the elements of $\hat{\mathbf{W}}^{l}$ are sampled i.i.d., and hence each dimension of $\mathbf{a}^{l}$ is a weighted sum of a different set of i.i.d. random variables.

Theorem 6

Let $\mathbf{v} = \hat{\mathbf{W}}^{T}\!\left(\mathbf{p} \odot \mathbf{u}\right)$, where $\hat{\mathbf{W}} \in \mathbb{R}^{n \times m}$ has unit-norm rows, $\mathbf{p}_{i} \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(0.5)$, and $\mathbf{u} \in \mathbb{R}^{n}$. If each $\hat{\mathbf{W}}_{i:} \sim \mathcal{D}$, where $\mathcal{D}$ is any isotropic distribution in $\mathbb{R}^{m}$, or alternatively $\hat{\mathbf{W}}$ is a randomly generated matrix with orthogonal rows and $n \le m$, then for any fixed vector $\mathbf{u}$, $\mathbb{E}\!\left[\|\mathbf{v}\|^{2}\right] = \frac{1}{2}\|\mathbf{u}\|^{2}$.

Proof: Our goal is to compute,

(35)
(36)
(37)
(38)
(39)
(40)
(41)

where $\theta_{ij}$ is the angle between $\hat{\mathbf{W}}_{i:}$ and $\hat{\mathbf{W}}_{j:}$. For an orthogonal matrix the cross term is always 0, while for a matrix whose rows are sampled i.i.d. from an isotropic distribution $\mathcal{D}$, the cross term is 0 in expectation. Thus for both cases (this also suggests that orthogonal initialization is strictly better than Gaussian initialization, since for orthogonal matrices the result holds exactly rather than only in expectation) we have that,

(42)

which proves the claim.

Theorem 7

Let $f_{\boldsymbol{\theta}}$ be a residual network with $n$ residual blocks and output $f_{\boldsymbol{\theta}}(\mathbf{x}) = \mathbf{h}^{n}$. Assume that each residual block $F_{l}(\cdot)$, $l \in \{1, \dots, n\}$, is designed such that at initialization, for any input $\mathbf{h}$ to the residual block, $\langle F_{l}(\mathbf{h}), \mathbf{h}\rangle = 0$ and $\|F_{l}(\mathbf{h})\| = \|\mathbf{h}\|$. If we set $\alpha = 1/\sqrt{n}$, then for any input $\mathbf{x}$,

$$\|\mathbf{x}\|^{2} \;\le\; \left\|f_{\boldsymbol{\theta}}(\mathbf{x})\right\|^{2} \;\le\; e\,\|\mathbf{x}\|^{2}. \tag{43}$$

Proof: Let $\mathbf{x}$ denote the input of the residual network. Consider the first hidden state $\mathbf{h}^{1}$, given by,

(44)

Then the squared norm of $\mathbf{h}^{1}$ is given by,

(45)
(46)

Since $\|F_{1}(\mathbf{x})\| = \|\mathbf{x}\|$ and $\langle F_{1}(\mathbf{x}), \mathbf{x}\rangle = 0$ due to our assumptions, we have,

(47)

Similarly,

(48)

Thus,

(49)

Then due to our assumptions we get,

(50)

Thus we get,

(51)

Extending such inequalities up to the $n$-th residual block, we get,

(52)

Setting , we get,

(53)

Setting , we get,

(54)

Note that the factor $\left(1 + \frac{1}{n}\right)^{n}$ stays bounded as $n \to \infty$ due to the following well known result,

$$\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{\!n} = e. \tag{55}$$

Since $\left(1 + \frac{1}{n}\right)^{n}$ is increasing in $n$ and bounded above by $e$, the ratio $\left\|f_{\boldsymbol{\theta}}(\mathbf{x})\right\|^{2} / \|\mathbf{x}\|^{2}$ lies in $[1, e]$.

Thus we have proved the claim.

Theorem 8

Let $f_{\boldsymbol{\theta}}$ be a residual network with $n$ residual blocks and output $f_{\boldsymbol{\theta}}(\mathbf{x}) = \mathbf{h}^{n}$. Assume that each residual block $F_{l}(\cdot)$, $l \in \{1, \dots, n\}$, is designed such that at initialization, for any fixed vector $\boldsymbol{\delta}$ of appropriate dimensions, $\left\langle \left(\frac{\partial F_{l}(\mathbf{h}^{l-1})}{\partial \mathbf{h}^{l-1}}\right)^{\!T}\boldsymbol{\delta},\, \boldsymbol{\delta} \right\rangle = 0$ and $\left\|\left(\frac{\partial F_{l}(\mathbf{h}^{l-1})}{\partial \mathbf{h}^{l-1}}\right)^{\!T}\boldsymbol{\delta}\right\| = \|\boldsymbol{\delta}\|$. If $\alpha = 1/\sqrt{n}$, then for any differentiable loss $\mathcal{L}$ defined on the network output,

$$\left\|\frac{\partial\mathcal{L}}{\partial\mathbf{h}^{n}}\right\|^{2} \;\le\; \left\|\frac{\partial\mathcal{L}}{\partial\mathbf{x}}\right\|^{2} \;\le\; e\,\left\|\frac{\partial\mathcal{L}}{\partial\mathbf{h}^{n}}\right\|^{2}. \tag{56}$$

Proof: Recall from Eq. 6 that,

$$\mathbf{h}^{l} = \mathbf{h}^{l-1} + \alpha\, F_{l}\!\left(\mathbf{h}^{l-1}\right). \tag{57}$$

Therefore, taking derivative on both sides,

(58)
(59)

Taking norm on both sides,

(60)

Due to our assumptions, we have,

(61)
(62)

Applying this result to all residual blocks we have that,

(63)

Setting $\alpha = 1/\sqrt{n}$, we get,

(64)

Note that the factor $\left(1 + \frac{1}{n}\right)^{n}$ stays bounded as $n \to \infty$ due to the following well known result,

$$\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{\!n} = e. \tag{65}$$

Since $\left(1 + \frac{1}{n}\right)^{n}$ is increasing in $n$ and bounded above by $e$, the ratio $\left\|\partial\mathcal{L}/\partial\mathbf{x}\right\|^{2} / \left\|\partial\mathcal{L}/\partial\mathbf{h}^{n}\right\|^{2}$ lies in $[1, e]$.

Thus we have proved the claim.
