Improved Bayesian Compression
Marco Federici (University of Amsterdam), Karen Ullrich (University of Amsterdam), Max Welling (University of Amsterdam; Canadian Institute for Advanced Research, CIFAR)
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1 Variational Bayesian Networks for Compression
Compression of neural networks (NNs) has become a highly studied topic in recent years. The main reason is the demand for NNs at industrial scale: deploying them on mobile devices, storing them efficiently, transmitting them over band-limited channels and, most importantly, performing inference at scale. Two proposals show strong results, both using empirical Bayesian priors: (i) Ullrich et al. [2017] achieve impressive compression results with an adaptive mixture-of-Gaussians prior on independent delta-distributed weights. This idea was originally proposed as soft weight-sharing by Nowlan and Hinton [1992], but had never been demonstrated to compress before. (ii) Molchanov et al. [2017] use variational dropout [Kingma et al., 2015] to prune independent Gaussian posterior weights with high uncertainty. To achieve high pruning rates, the authors refined the originally proposed approximation to the KL-divergence and introduced a different parametrization to increase the stability of the training procedure. In this work, we propose to join these two somewhat orthogonal compression schemes, since (ii) prunes more weights but, unlike (i), does not provide a technique for quantization. We find that our method outperforms both of the above.
Given a dataset $\mathcal{D}$ and a model parametrized by a weight vector $w$, the learning goal consists in the maximization of the posterior probability of the parameters given the data, $p(w|\mathcal{D})$. Since this quantity involves the computation of intractable integrals, the original objective is replaced with a lower bound obtained by introducing an approximate parametric posterior distribution $q_\phi(w)$. In the variational inference framework, the objective function is expressed as a variational lower bound:

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(w)}\left[\log p(\mathcal{D}|w)\right] - \mathrm{KL}\!\left(q_\phi(w)\,\|\,p(w)\right) \quad (1)$$
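The structure of this objective can be illustrated with a minimal numerical sketch. For concreteness it uses a standard-normal prior, for which the per-weight KL term has a closed form (the paper itself uses a log-uniform prior, whose KL must be approximated); all function names here are hypothetical.

```python
import numpy as np

def gaussian_kl(mu, sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) per weight, used here
    # only as a tractable stand-in for the prior term of equation 1.
    return 0.5 * (sigma**2 + mu**2 - 1.0) - np.log(sigma)

def elbo(log_likelihood, mu, sigma):
    # Variational lower bound: expected log-likelihood of the model
    # minus the KL-divergence between approximate posterior and prior.
    return log_likelihood - gaussian_kl(mu, sigma).sum()

mu = np.zeros(4)     # posterior means
sigma = np.ones(4)   # posterior std deviations; KL is exactly zero here
print(elbo(-1.7, mu, sigma))  # -1.7
```

With the posterior matching the prior exactly, the regularizer vanishes and the bound reduces to the log-likelihood alone.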
The first term of the equation represents the log-likelihood of the model predictions, while the second is the KL-divergence between the approximate posterior over the weights $q_\phi(w)$ and their prior distribution $p(w)$. This term acts as a regularizer whose effect on the training procedure and tractability depends on the chosen functional forms of the two distributions. In this work we propose the use of a joint distribution over the $D$-dimensional weight vector $w$ and the corresponding centers $\mu$.
| Distribution factorization | Functional forms |
| $q(w,\mu) = \prod_i q(w_i|\mu_i)\, q(\mu_i)$ | $q(w_i|\mu_i) = \mathcal{N}(w_i; \mu_i, \sigma_i^2)$, $\;q(\mu_i) = \delta(\mu_i - m_i)$ |
| $p(w,\mu) = \prod_i p(w_i)\, p(\mu_i)$ | $p(w_i) \propto 1/|w_i|$, $\;p(\mu_i) = \sum_j \pi_j\, \mathcal{N}(\mu_i; \nu_j, \lambda_j^{-1})$ |
Table 1 shows the factorization and functional forms of the prior and posterior distributions. Each conditional posterior $q(w_i|\mu_i)$ is a Gaussian distribution with variance $\sigma_i^2$ around a center $\mu_i$, whose posterior $q(\mu_i)$ is a delta peak determined by the parameter $m_i$.
On the other hand, the joint prior is modeled as a product of independent distributions over $w$ and $\mu$. Each $p(w_i)$ is a log-uniform distribution, while $p(\mu_i)$ is a mixture of Gaussians parametrized by $\theta = \{\pi_j, \nu_j, \lambda_j\}$: the mixing proportions $\pi_j$, the mean $\nu_j$ of each Gaussian component and their respective precisions $\lambda_j$. In this setting, the KL-divergence between the posterior and prior distributions can be expressed as:

$$\mathrm{KL}\!\left(q(w,\mu)\,\|\,p(w,\mu)\right) = \sum_i \mathrm{KL}\!\left(q(w_i|m_i)\,\|\,p(w_i)\right) + \mathrm{KL}\!\left(q(\mu_i)\,\|\,p(\mu_i)\right)$$
The zero-centered heavy-tailed prior distribution on $w$ induces sparsity in the parameter vector, while the adaptive mixture model applied to the weight centers forces a clustering behavior, adjusting the parameters $\theta$ to better match their distribution.
In preliminary experiments, we compare the compression ratio (as proposed by Han et al. [2015]) and accuracy achieved by different Bayesian compression techniques on the well-studied MNIST dataset with dense (LeNet-300-100) and convolutional (LeNet-5) architectures. The details of the training procedure are described in appendix A.
The preliminary results presented in table 2 suggest that the joint procedure presented in this paper yields a dramatic increase in compression ratio, achieving state-of-the-art results for both the dense and the convolutional network architecture. Further work could address the interaction between the soft weight-sharing methodology and structured-sparsity-inducing techniques [Wen et al., 2016, Louizos et al., 2017] to reduce the overhead introduced by the compression format.
- Ullrich et al. [2017] K. Ullrich, E. Meeds, and M. Welling. Soft weight-sharing for neural network compression. ICLR, 2017.
- Nowlan and Hinton [1992] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473–493, July 1992.
- Molchanov et al. [2017] D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsifies deep neural networks. ICML, 2017.
- Kingma et al. [2015] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. arXiv e-prints, June 2015.
- Han et al. [2015] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv e-prints, October 2015.
- Wen et al. [2016] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. CoRR, abs/1608.03665, 2016. URL http://arxiv.org/abs/1608.03665.
- Louizos et al. [2017] C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. NIPS, 2017.
Appendix A Training
The full training objective is given by the variational lower bound in equation 1, where the two KL-divergence terms have been scaled by coefficients $\lambda_w$ and $\lambda_\mu$ respectively:

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(w)}\left[\log p(\mathcal{D}|w)\right] - \lambda_w \sum_i \mathrm{KL}\!\left(q(w_i|m_i)\,\|\,p(w_i)\right) - \lambda_\mu \sum_i \mathrm{KL}\!\left(q(\mu_i)\,\|\,p(\mu_i)\right)$$
Starting from a pre-trained model, in a first warm-up phase we disable the KL term on the centers ($\lambda_\mu = 0$), so that only the sparsity-inducing term on $w$ is active. Note that this part of the training procedure matches the Sparse Variational Dropout methodology [Molchanov et al., 2017]. After reaching convergence (200 epochs in our experiments), we initialize the parameters $\theta$ of the mixture model and set $\lambda_\mu$ to a non-zero value ($\lambda_w$ is kept unchanged) to induce the clustering effect of the soft weight-sharing procedure. This phase usually requires 50–100 epochs to reach convergence.
The mixture model used in our experiments has 17 components, one of which is fixed at zero with a fixed mixing proportion. A gamma hyper-prior has been applied to the precisions of the Gaussian components to ensure numerical stability.
The proposed parametrization stores the weight variances $\sigma_i^2$, the precision $\lambda_j$ of each mixture component and the mixing proportions $\pi_j$ in logarithmic space. All models have been trained using the Adam optimizer; the learning rates and initialization values of the parameters are reported in table 3.
Appendix B Compression
The computation of the maximum compression ratio follows the procedure reported in Han et al. [2015]. The first part of the pipeline differs slightly between the methodologies:
Sparse Variational Dropout (VD)
At the end of the Sparse Variational Dropout training procedure (VD), all parameters whose binary dropout rate exceeds a fixed threshold are set to zero [Molchanov et al., 2017]. Secondly, the weight means are clustered with a 64-component mixture model and collapsed onto the mean of the Gaussian with the highest responsibility.
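A minimal numpy sketch of this prune-then-quantize step, under simplifying assumptions: the threshold value of 3.0 on $\log\alpha$ is illustrative (the paper does not state it here), and a fixed set of cluster centers stands in for the fitted 64-component mixture.

```python
import numpy as np

def prune_and_quantize(means, log_alpha, centers, threshold=3.0):
    # Weights whose dropout rate log(alpha) exceeds the threshold are
    # pruned (set to zero); the survivors are collapsed onto the nearest
    # center, standing in for the mean of the highest-responsibility
    # component of the 64-component mixture model.
    w = np.where(log_alpha > threshold, 0.0, means)
    idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
    return np.where(w == 0.0, 0.0, centers[idx])

means = np.array([0.11, -0.52, 0.02, 0.48])
log_alpha = np.array([0.5, 0.1, 5.0, 0.2])  # third weight gets pruned
centers = np.array([-0.5, 0.0, 0.1, 0.5])
print(prune_and_quantize(means, log_alpha, centers))
# [ 0.1 -0.5  0.   0.5]
```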
Soft Weight Sharing (SWS) and joint approach (SWS+VD)
Once the training procedures involving the mixture-of-Gaussians model have converged to a stable configuration, overlapping components are merged and each weight is collapsed onto the mean of the component with the highest responsibility (see Ullrich et al. [2017] for details). Since the weights assigned to the zero-centered component are automatically set to zero, there is no need to specify a pruning threshold.
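The responsibility-based collapse can be sketched as follows. The component parameters here are toy values, not fitted ones; the function name and the three-component setup are illustrative.

```python
import numpy as np

def collapse_to_components(w, pi, mu, prec):
    # Responsibility of component j for weight i is proportional to
    # pi_j * N(w_i; mu_j, 1/prec_j); each weight is replaced by the mean
    # of its most responsible component.  Weights captured by the
    # zero-centered component become exactly zero, so no pruning
    # threshold is needed.
    log_r = (np.log(pi)[None, :] + 0.5 * np.log(prec)[None, :]
             - 0.5 * prec[None, :] * (w[:, None] - mu[None, :]) ** 2)
    return mu[log_r.argmax(axis=1)]

pi = np.array([0.9, 0.05, 0.05])   # dominant zero-centered component
mu = np.array([0.0, -0.4, 0.4])
prec = np.array([100.0, 100.0, 100.0])
w = np.array([0.01, 0.38, -0.42])
print(collapse_to_components(w, pi, mu, prec))
# [ 0.   0.4 -0.4]
```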
At the end of these initial phases, the models are represented as a sequence of sparse matrices containing discrete assignments. Empty columns and filter banks are removed and the model architecture is adjusted accordingly. The discrete assignments are encoded using the Huffman compression scheme, and the resulting matrices are stored in CSC format using 5 and 8 bits to represent the offset between entries in dense and convolutional layers respectively.
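A rough size estimate for one such stored layer can be sketched as below. As a simplification, the entropy of the assignment distribution is used as a proxy for the average Huffman code length (a lower bound on it); the function name is hypothetical.

```python
import math
from collections import Counter

def csc_size_bits(assignments, offset_bits=5):
    # Estimated size of one sparse layer: each nonzero entry costs
    # `offset_bits` for its CSC offset (5 bits for dense layers, 8 for
    # convolutional ones) plus roughly H(assignments) bits for its
    # cluster index, with the entropy H standing in for the
    # Huffman-coded length.
    nonzero = [a for a in assignments if a != 0]
    n = len(nonzero)
    counts = Counter(nonzero)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    return n * (offset_bits + entropy)

# 6 nonzero assignments drawn uniformly from 2 clusters -> 1 bit each
print(csc_size_bits([1, 0, 2, 1, 0, 2, 1, 2], offset_bits=5))  # 36.0
```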
Appendix C Derivation of the KL-Divergence
In order to compute the KL-divergence between the joint prior and approximate posterior distribution, we use the factorization reported in table 1:

$$\mathrm{KL}\!\left(q(w,\mu)\,\|\,p(w,\mu)\right) = \sum_i \mathbb{E}_{q(\mu_i)}\!\left[\mathrm{KL}\!\left(q(w_i|\mu_i)\,\|\,p(w_i)\right)\right] + \mathrm{KL}\!\left(q(\mu_i)\,\|\,p(\mu_i)\right)$$
By plugging in $q(\mu_i) = \delta(\mu_i - m_i)$ and observing that $\int \delta(\mu_i - m_i)\, f(\mu_i)\, d\mu_i = f(m_i)$, we obtain:

$$\mathrm{KL}\!\left(q(w,\mu)\,\|\,p(w,\mu)\right) = \sum_i \mathrm{KL}\!\left(q(w_i|m_i)\,\|\,p(w_i)\right) + \mathrm{KL}\!\left(q(\mu_i)\,\|\,p(\mu_i)\right)$$
where the first KL-divergence can be approximated according to Molchanov et al. [2017]:

$$-\mathrm{KL}\!\left(q(w_i|m_i)\,\|\,p(w_i)\right) \approx k_1\, \sigma(k_2 + k_3 \log \alpha_i) - \tfrac{1}{2}\log\!\left(1 + \alpha_i^{-1}\right) - k_1, \qquad \alpha_i = \sigma_i^2 / m_i^2$$

with $k_1 = 0.63576$, $k_2 = 1.87320$, $k_3 = 1.48695$.
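This approximation can be implemented directly as a function of $\log\alpha_i$; a minimal sketch (function name assumed):

```python
import numpy as np

def neg_kl_approx(log_alpha):
    # Approximation from Molchanov et al. (2017) of the negative
    # KL-divergence -KL( q(w_i|m_i) || p(w_i) ) for a log-uniform
    # prior, with alpha_i = sigma_i^2 / m_i^2.
    k1, k2, k3 = 0.63576, 1.87320, 1.48695
    sigmoid = 1.0 / (1.0 + np.exp(-(k2 + k3 * log_alpha)))
    return k1 * sigmoid - 0.5 * np.log1p(np.exp(-log_alpha)) - k1

# Large alpha (high uncertainty) drives the penalty toward zero,
# so such weights can be pruned at negligible cost to the bound.
print(neg_kl_approx(np.array([-10.0, 0.0, 10.0])))
```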
The second term can be computed by decomposing the KL-divergence into the negative entropy of the approximate posterior $q(\mu_i)$ and the cross-entropy between posterior and prior, $\mathbb{E}_{q(\mu_i)}[-\log p(\mu_i)] = -\log p(m_i)$. Note that the entropy of a delta distribution does not depend on the parameter $m_i$ and can therefore be treated as constant.