Improved Bayesian Compression
Marco Federici (University of Amsterdam, marco.federici@student.uva.nl) · Karen Ullrich (University of Amsterdam, karen.ullrich@uva.nl) · Max Welling (University of Amsterdam and Canadian Institute for Advanced Research (CIFAR), welling.max@gmail.com)
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1 Variational Bayesian Networks for Compression
Compression of neural networks (NNs) has become a highly studied topic in recent years. The main reason for this is the demand for industrial-scale usage of NNs, such as deploying them on mobile devices, storing them efficiently, transmitting them via band-limited channels and, most importantly, doing inference at scale. Two proposals have shown strong results, both using empirical Bayesian priors: (i) Ullrich et al. [2017] show impressive compression results by use of an adaptive mixture-of-Gaussians prior on independent delta-distributed weights. This idea was initially proposed as soft weight-sharing by Nowlan and Hinton [1992], but had not been demonstrated to compress before. (ii) Equivalently, Molchanov et al. [2017] use variational dropout [Kingma et al., 2015] to prune out independent Gaussian posterior weights with high uncertainties. To achieve high pruning rates, the authors refined the originally proposed approximation to the KL-divergence and introduced a different parametrization to increase the stability of the training procedure. In this work, we propose to join these two somewhat orthogonal compression schemes, since (ii) seems to prune out more weights but does not provide a technique for quantization such as (i). We find our method to outperform both of the above.
2 Method
Given a dataset $\mathcal{D}$ and a model parametrized by a weight vector $\mathbf{w}$, the learning goal consists in the maximization of the posterior probability of the parameters given the data, $p(\mathbf{w}|\mathcal{D})$. Since this quantity involves the computation of intractable integrals, the original objective is replaced with a lower bound obtained by introducing an approximate parametric posterior distribution $q_\phi(\mathbf{w})$. In the variational inference framework, the objective function is expressed as a variational lower bound:
$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(\mathbf{w})}\left[\log p(\mathcal{D}|\mathbf{w})\right] - D_{KL}\left(q_\phi(\mathbf{w}) \,\|\, p(\mathbf{w})\right) \quad (1)$$
The first term of the equation represents the log-likelihood of the model predictions, while the second stands for the KL-divergence between the weights' approximate posterior $q_\phi(\mathbf{w})$ and their prior distribution $p(\mathbf{w})$. This term works as a regularizer whose effect on the training procedure and tractability depends on the chosen functional form for the two distributions. In this work we propose the use of a joint distribution over the $D$-dimensional weight vector $\mathbf{w}$ and the corresponding centers $\mathbf{c}$.
| | Distribution factorization | Functional forms |
| Posterior | $q_\phi(\mathbf{w}, \mathbf{c}) = \prod_i q(w_i|c_i)\, q(c_i)$ | $q(w_i|c_i) = \mathcal{N}(w_i; c_i, \sigma_i^2)$, $\;q(c_i) = \delta(c_i - \mu_i)$ |
| Prior | $p(\mathbf{w}, \mathbf{c}) = \prod_i p(w_i)\, p(c_i)$ | $p(w_i) \propto 1/|w_i|$, $\;p(c_i|\theta) = \sum_j \pi_j\, \mathcal{N}(c_i; \mu_j, \lambda_j^{-1})$ |

Table 1: Factorization and functional forms of the approximate posterior and prior distributions.
Table 1 shows the factorization and functional form of the prior and posterior distributions. Each conditional posterior $q(w_i|c_i)$ is represented with a Gaussian distribution with variance $\sigma_i^2$ around a center $c_i$, which is in turn defined as a delta peak determined by the parameter $\mu_i$.
On the other hand, the joint prior is modeled as a product of independent distributions over $w_i$ and $c_i$. Each $p(w_i)$ represents a log-uniform distribution, while $p(c_i|\theta)$ is a mixture-of-Gaussians distribution parametrized by $\theta = \{\pi_j, \mu_j, \lambda_j\}$, which collects the mixing proportions $\pi_j$, the mean $\mu_j$ of each Gaussian component and their respective precisions $\lambda_j$. In this setting, the KL-divergence between the posterior and prior distributions can be expressed as:
$$D_{KL}\left(q_\phi(\mathbf{w}, \mathbf{c}) \,\|\, p(\mathbf{w}, \mathbf{c})\right) = \sum_i \left[ D_{KL}\left(\mathcal{N}(\mu_i, \sigma_i^2) \,\|\, p(w_i)\right) - \log p(\mu_i|\theta) \right] + \text{const} \quad (2)$$
where $D_{KL}\left(\mathcal{N}(\mu_i, \sigma_i^2) \,\|\, p(w_i)\right)$ can be effectively approximated as described in Molchanov et al. [2017]. A full derivation can be found in appendix C.
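The approximation of Molchanov et al. [2017] depends on each weight only through its dropout rate $\alpha_i = \sigma_i^2/\mu_i^2$. A minimal NumPy sketch of that approximation follows; function and variable names are ours, not taken from any released code:

```python
import numpy as np

# Constants from the KL approximation of Molchanov et al. [2017].
K1, K2, K3 = 0.63576, 1.87320, 1.48695

def kl_gaussian_loguniform(mu, sigma2):
    """Approximate KL(N(mu, sigma2) || log-uniform) per weight.

    `mu`, `sigma2`: arrays of posterior means and variances;
    alpha = sigma2 / mu^2 is the variational dropout rate.
    """
    log_alpha = np.log(sigma2) - 2.0 * np.log(np.abs(mu) + 1e-8)
    sigmoid = 1.0 / (1.0 + np.exp(-(K2 + K3 * log_alpha)))
    # The published formula approximates the *negative* KL.
    neg_kl = K1 * sigmoid - 0.5 * np.log1p(np.exp(-log_alpha)) - K1
    return -neg_kl

# A confident weight (low alpha) pays a larger KL penalty than an
# uncertain one (high alpha), which is what drives pruning.
mu = np.array([1.0, 0.01])
sigma2 = np.array([0.01, 0.01])
kl = kl_gaussian_loguniform(mu, sigma2)
```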
The zero-centered heavy-tailed prior distribution on $\mathbf{w}$ induces sparsity in the parameter vector, while the adaptive mixture model applied to the weight centers $\mathbf{c}$ forces a clustering behavior while adjusting the parameters $\theta$ to better match their distribution.
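The $-\log p(\mu_i|\theta)$ term in equation 2 is simply the negative log-likelihood of the centers under the mixture prior. A small NumPy sketch, evaluated with log-sum-exp for numerical stability (names illustrative):

```python
import numpy as np

def log_mog_prior(c, mix, means, precisions):
    """log p(c_i | theta) under a mixture-of-Gaussians prior.

    `mix`, `means`, `precisions`: arrays of shape (J,) holding the
    mixing proportions, component means and component precisions.
    """
    c = np.asarray(c, dtype=float)[:, None]              # shape (n, 1)
    log_comp = (np.log(mix)
                + 0.5 * (np.log(precisions) - np.log(2 * np.pi))
                - 0.5 * precisions * (c - means) ** 2)   # shape (n, J)
    # Log-sum-exp over components, stabilized by the per-row maximum.
    m = log_comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

# Single standard-Gaussian component: log p(0) = -0.5 * log(2*pi).
lp = log_mog_prior([0.0],
                   mix=np.array([1.0]),
                   means=np.array([0.0]),
                   precisions=np.array([1.0]))
```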
3 Experiments
In preliminary experiments, we compare the compression ratio (CR), computed as proposed by Han et al. [2015], and the accuracy achieved by different Bayesian compression techniques on the well-studied MNIST dataset with dense (LeNet-300-100) and convolutional (LeNet-5) architectures. The details of the training procedure are described in appendix A.
The preliminary results presented in table 2 suggest that the joint procedure presented in this paper results in a dramatic increase of the compression ratio, achieving state-of-the-art results for both the dense and the convolutional network architecture. Further work could address the interaction between the soft weight-sharing methodology and structured-sparsity-inducing techniques [Wen et al., 2016, Louizos et al., 2017] to reduce the overhead introduced by the compression format.
| Architecture | Training | Acc (%) | $|w \neq 0|$ (%) | CR |
| LeNet-300-100 | L2 | 98.39 | 100 | 1 |
| | SWS | 98.16 | 8.6 | 34 |
| | VD | 98.05 | 1.6 | 131 |
| | VD+SWS | 98.24 | 1.5 | 161 |
| LeNet-5 | L2 | 99.13 | 100 | 1 |
| | SWS | 99.01 | 3.6 | 84 |
| | VD | 99.09 | 0.7 | 349 |
| | VD+SWS | 99.14 | 0.5 | 482 |

Table 2: Accuracy, fraction of non-zero weights and compression ratio (CR) on MNIST.
References
- K. Ullrich, E. Meeds, and M. Welling. Soft weight-sharing for neural network compression. ICLR, 2017.
- S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473–493, 1992. doi: 10.1162/neco.1992.4.4.473.
- D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsifies deep neural networks. ICML, 2017.
- D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. arXiv e-prints, 2015.
- S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv e-prints, 2015.
- W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. CoRR, abs/1608.03665, 2016. URL http://arxiv.org/abs/1608.03665.
- C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. NIPS, 2017.
Appendix A Training
The full training objective is given by the variational lower bound in equation 1, where the two KL-divergence terms have been scaled according to two coefficients, $\tau_w$ and $\tau_c$ respectively:

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(\mathbf{w}, \mathbf{c})}\left[\log p(\mathcal{D}|\mathbf{w})\right] - \tau_w \sum_i D_{KL}\left(\mathcal{N}(\mu_i, \sigma_i^2) \,\|\, p(w_i)\right) + \tau_c \sum_i \log p(\mu_i|\theta)$$
Starting from a pre-trained model, in a first warm-up phase we set $\tau_w = 1$ and $\tau_c = 0$. Note that this part of the training procedure matches the sparse variational dropout methodology [Molchanov et al., 2017]. After reaching convergence (200 epochs in our experiments), we initialize the parameters $\theta$ of the mixture model, and the coefficient $\tau_c$ is set to a positive value ($\tau_w$ is kept fixed) to induce the clustering effect of the soft weight-sharing procedure. This phase usually requires 50–100 epochs to reach convergence.
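The two-phase schedule above amounts to switching the coefficient of the mixture term on after warm-up. A minimal sketch (the post-warm-up value of the second coefficient is illustrative, as it is configuration-dependent):

```python
def kl_coefficients(epoch, warmup_epochs=200, tau_c_final=1.0):
    """Schedule for the two KL coefficients across training.

    During warm-up only the variational-dropout KL term is active
    (tau_w = 1, tau_c = 0), matching sparse variational dropout;
    afterwards the mixture-prior term is switched on.
    `tau_c_final` is a placeholder value, not the paper's setting.
    """
    tau_w = 1.0
    tau_c = 0.0 if epoch < warmup_epochs else tau_c_final
    return tau_w, tau_c
```

The per-epoch loss would then be assembled as `nll + tau_w * kl_w - tau_c * log_prior_c`, mirroring the scaled objective above.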
The mixture model used for our experiments has 17 components, one of which is fixed at zero with a fixed mixing proportion. A Gamma hyper-prior has been applied to the precisions of the Gaussian components to ensure numerical stability.
The proposed parametrization stores the weight variances $\sigma_i^2$, the precision $\lambda_j$ of each mixture component and the mixing proportions $\pi_j$ in logarithmic space. All the models have been trained using the Adam optimizer; the learning rates and initialization values for the parameters are reported in table 3.
| | Initialization | Learning rate |

Table 3: Initialization values and learning rates for the model parameters.
Appendix B Compression
The computation of the maximum compression ratio has been done according to the procedure reported in Han et al. [2015]. The first part of the pipeline differs slightly between the methodologies:

Sparse Variational Dropout (VD)
At the end of the sparse variational dropout training procedure (VD), all the parameters with a corresponding binary dropout rate greater than a fixed threshold are set to zero [Molchanov et al., 2017]. Secondly, the weight means are clustered with a 64-component mixture model and each is collapsed into the mean of the Gaussian with the highest responsibility.
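The pruning rule can be stated directly in terms of $\log \alpha_i$; the following NumPy sketch uses an illustrative threshold, since the source does not preserve the exact value:

```python
import numpy as np

def prune_by_dropout_rate(mu, sigma2, log_alpha_threshold=3.0):
    """Zero out weights whose log dropout rate exceeds the threshold.

    `mu`, `sigma2`: posterior means and variances per weight.
    `log_alpha_threshold` is an illustrative default, not the
    paper's setting.
    """
    log_alpha = np.log(sigma2) - 2.0 * np.log(np.abs(mu) + 1e-8)
    keep = log_alpha < log_alpha_threshold
    return mu * keep, keep

# A weight whose variance dwarfs its mean gets pruned.
pruned, keep = prune_by_dropout_rate(np.array([1.0, 1e-4]),
                                     np.array([1e-4, 1e-4]))
```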

Soft Weight Sharing (SWS) and joint approach (SWS+VD)
Once the training procedures involving the mixture-of-Gaussians model have converged to a stable configuration, the overlapping components are merged and each weight is collapsed into the mean of the component with the highest responsibility (see Ullrich et al. [2017] for details). Since the weights assigned to the zero-centered component are automatically set to zero, there is no need to specify a pruning threshold.
At the end of these initial phases, the models are represented as a sequence of sparse matrices containing discrete assignments. Empty columns and filter banks are removed and the model architecture is adjusted accordingly. The discrete assignments are encoded using the Huffman compression scheme and the resulting matrices are stored in CSC format, using 5 and 8 bits to represent the offset between entries in dense and convolutional layers respectively.
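A rough estimate of the storage cost of this format (Huffman-coded cluster assignments plus a fixed-width offset per non-zero entry) can be sketched as follows; the helper names are ours:

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Code length (bits) per symbol of a Huffman code built from
    the empirical symbol frequencies."""
    counts = Counter(symbols)
    if len(counts) == 1:
        return {next(iter(counts)): 1}
    # Heap entries: (total count, tiebreak id, {symbol: depth}).
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, d1 = heapq.heappop(heap)
        n2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

def csc_huffman_bits(assignments, offset_bits=5):
    """Estimated bits: Huffman-coded cluster indices of the non-zero
    entries plus a fixed-width CSC offset per entry (5 bits for dense
    layers, 8 for convolutional ones)."""
    nonzero = [a for a in assignments if a != 0]
    lengths = huffman_code_lengths(nonzero)
    return sum(lengths[a] for a in nonzero) + offset_bits * len(nonzero)
```

For example, `csc_huffman_bits([1, 1, 1, 1, 2, 2, 3, 0, 0])` charges one bit to the frequent cluster and two bits to each rare one, plus 5 offset bits for each of the 7 surviving entries.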
Appendix C Derivation of the KLDivergence
In order to compute the KL-divergence between the joint prior and approximate posterior distributions, we use the factorization reported in table 1:

$$D_{KL}\left(q_\phi(\mathbf{w}, \mathbf{c}) \,\|\, p(\mathbf{w}, \mathbf{c})\right) = \sum_i \mathbb{E}_{q(w_i, c_i)}\left[\log \frac{q(w_i|c_i)\, q(c_i)}{p(w_i)\, p(c_i)}\right]$$
By plugging in $q(c_i) = \delta(c_i - \mu_i)$ and observing that the expectation over a delta peak evaluates its argument at $c_i = \mu_i$, we obtain:
$$D_{KL}\left(q_\phi(\mathbf{w}, \mathbf{c}) \,\|\, p(\mathbf{w}, \mathbf{c})\right) = \sum_i \left[ D_{KL}\left(\mathcal{N}(\mu_i, \sigma_i^2) \,\|\, p(w_i)\right) + D_{KL}\left(q(c_i) \,\|\, p(c_i)\right) \right] \quad (3)$$
where the first KL-divergence can be approximated according to Molchanov et al. [2017]:
$$-D_{KL}\left(\mathcal{N}(\mu_i, \sigma_i^2) \,\|\, p(w_i)\right) \approx k_1\, \sigma(k_2 + k_3 \log \alpha_i) - \frac{1}{2} \log\left(1 + \alpha_i^{-1}\right) - k_1 \quad (4)$$

with $\alpha_i = \sigma_i^2 / \mu_i^2$, $k_1 = 0.63576$, $k_2 = 1.87320$ and $k_3 = 1.48695$.
The second term can be computed by decomposing the KL-divergence into the negative entropy of the approximate posterior $q(c_i)$ and the cross-entropy between the posterior and prior distributions. Note that the entropy of a delta distribution does not depend on the parameter $\mu_i$; therefore it can be considered constant.
$$D_{KL}\left(q(c_i) \,\|\, p(c_i)\right) = -H\left(q(c_i)\right) - \mathbb{E}_{q(c_i)}\left[\log p(c_i|\theta)\right] = \text{const} - \log p(\mu_i|\theta) \quad (5)$$
Plugging the results from equations 4 and 5 into equation 3, we obtain the full expression for the KL-divergence reported in equation 2.