Improved Bayesian Compression

Marco Federici
University of Amsterdam
marco.federici@student.uva.nl

Karen Ullrich
University of Amsterdam
karen.ullrich@uva.nl

Max Welling
University of Amsterdam
Canadian Institute for Advanced Research (CIFAR)
welling.max@gmail.com

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Variational Bayesian Networks for Compression

Compression of neural networks (NNs) has become a highly studied topic in recent years. The main reason for this is the demand for industrial-scale usage of NNs, such as deploying them on mobile devices, storing them efficiently, transmitting them via band-limited channels and, most importantly, doing inference at scale. Two proposals have shown strong results, both using empirical Bayesian priors: (i) Ullrich et al. [2017] show impressive compression results by using an adaptive mixture-of-Gaussians prior on independent delta-distributed weights. This idea was initially proposed as soft weight-sharing by Nowlan and Hinton [1992] but had not been demonstrated to compress networks before. (ii) Similarly, Molchanov et al. [2017] use Variational Dropout [Kingma et al., 2015] to prune out independent Gaussian posterior weights with high uncertainty. To achieve high pruning rates, the authors refined the originally proposed approximation to the KL-divergence and introduced a different parametrization to increase the stability of the training procedure. In this work, we propose to join these two somewhat orthogonal compression schemes, since (ii) prunes out more weights but, unlike (i), does not provide a technique for quantization. We find our method to outperform both of the above.

2 Method

Given a dataset $\mathcal{D}$ and a model parametrized by a weight vector $\mathbf{w}$, the learning goal consists in the maximization of the posterior probability of the parameters given the data, $p(\mathbf{w} \mid \mathcal{D})$. Since this quantity involves the computation of intractable integrals, the original objective is replaced with a lower bound obtained by introducing an approximate parametric posterior distribution $q_\phi(\mathbf{w})$. In the Variational Inference framework, the objective function is expressed as a Variational Lower Bound:

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(\mathbf{w})}\big[\log p(\mathcal{D} \mid \mathbf{w})\big] - KL\big(q_\phi(\mathbf{w}) \,\|\, p(\mathbf{w})\big) \qquad (1)$$

The first term of the equation represents the expected log-likelihood of the model predictions, while the second term is the KL-divergence between the approximate posterior over the weights $q_\phi(\mathbf{w})$ and their prior distribution $p(\mathbf{w})$. This term works as a regularizer whose effect on the training procedure and tractability depends on the chosen functional forms of the two distributions. In this work we propose the use of a joint distribution over the $N$-dimensional weight vector $\mathbf{w}$ and the corresponding centers $\mathbf{m}$.

Approximate posterior: factorization $q_\phi(\mathbf{w}, \mathbf{m}) = \prod_{i=1}^{N} q(w_i \mid m_i)\, q(m_i)$; functional forms $q(w_i \mid m_i) = \mathcal{N}(w_i;\, m_i,\, \sigma_i^2)$, with $q(m_i)$ a delta peak located at the learned center value.
Prior: factorization $p(\mathbf{w}, \mathbf{m}) = \prod_{i=1}^{N} p(w_i)\, p(m_i)$; functional forms $p(w_i) \propto 1/|w_i|$ (log-uniform) and $p(m_i) = \text{MoG}(m_i;\, \theta)$.
Table 1: Choice of probability distributions for the approximate posterior $q_\phi(\mathbf{w}, \mathbf{m})$ and the prior $p(\mathbf{w}, \mathbf{m})$. $\text{MoG}(\cdot;\,\theta)$ denotes a Gaussian mixture model over the centers $\mathbf{m}$, parametrized with $\theta = \{\pi_j, \mu_j, \lambda_j\}$, which collects the mixing proportions, means and precisions of the components.

Table 1 shows the factorization and functional form of the prior and posterior distributions. Each conditional posterior $q(w_i \mid m_i)$ is represented by a Gaussian distribution with variance $\sigma_i^2$ around a center defined as a delta peak determined by the parameter $m_i$.

On the other hand, the joint prior $p(\mathbf{w}, \mathbf{m})$ is modeled as a product of independent distributions over $w_i$ and $m_i$. Each $p(w_i)$ is a log-uniform distribution, while $p(m_i)$ is a mixture of Gaussians parametrized by $\theta$, which collects the mixing proportions $\pi_j$, the mean $\mu_j$ of each Gaussian component and their respective precisions $\lambda_j$. In this setting, the KL-divergence between the posterior and prior distributions can be expressed as:

$$KL\big(q_\phi(\mathbf{w},\mathbf{m})\,\|\,p(\mathbf{w},\mathbf{m})\big) = \sum_{i=1}^{N}\Big[KL\big(q(w_i \mid m_i)\,\|\,p(w_i)\big) - \log \text{MoG}(m_i;\,\theta)\Big] + C \qquad (2)$$

where the first term, $KL\big(q(w_i \mid m_i)\,\|\,p(w_i)\big)$, can be effectively approximated as described in Molchanov et al. [2017], and $C$ is a constant that does not depend on the parameters. A full derivation can be found in Appendix C.
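As an illustration, the two contributions to Equation 2 can be computed with a few lines of NumPy. This is a minimal sketch rather than the implementation used in the experiments; the function and argument names (m, log_sigma2, log_pi, mu_j, log_lambda_j) are ours, while the constants k1, k2, k3 are the coefficients of the sigmoid approximation reported by Molchanov et al. [2017].

```python
import numpy as np
from scipy.special import logsumexp

def kl_vd_approx(m, log_sigma2):
    """Per-weight KL(q(w_i | m_i) || log-uniform prior), using the sigmoid-based
    approximation of Molchanov et al. [2017] with alpha_i = sigma_i^2 / m_i^2."""
    k1, k2, k3 = 0.63576, 1.87320, 1.48695
    log_alpha = log_sigma2 - np.log(m ** 2 + 1e-8)            # small epsilon for stability
    sigmoid = 1.0 / (1.0 + np.exp(-(k2 + k3 * log_alpha)))
    neg_kl = k1 * sigmoid - 0.5 * np.log1p(np.exp(-log_alpha)) - k1
    return -neg_kl

def mog_log_density(m, log_pi, mu_j, log_lambda_j):
    """log MoG(m_i; theta): log-density of each center under the Gaussian mixture prior.
    log_pi are the (normalized) log mixing proportions, mu_j the component means
    and log_lambda_j the log precisions."""
    lam = np.exp(log_lambda_j)
    log_comp = 0.5 * (log_lambda_j - np.log(2.0 * np.pi)) - 0.5 * lam * (m[:, None] - mu_j) ** 2
    return logsumexp(log_pi + log_comp, axis=1)

def kl_joint(m, log_sigma2, log_pi, mu_j, log_lambda_j):
    """Equation 2 up to the additive constant C."""
    return np.sum(kl_vd_approx(m, log_sigma2) - mog_log_density(m, log_pi, mu_j, log_lambda_j))
```

The mixture log-density is evaluated with a log-sum-exp over components, which is numerically stable and fits the log-space parametrization described in Appendix A.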

The zero-centered heavy-tailed prior distribution on the weights $\mathbf{w}$ induces sparsity in the parameter vector, while the adaptive mixture model applied to the weight centers $\mathbf{m}$ forces a clustering behavior while adjusting the mixture parameters $\theta$ to better match their distribution.

3 Experiments

In preliminary experiments, we compare the compression ratio, computed as proposed by Han et al. [2015], and the accuracy achieved by different Bayesian compression techniques on the well-studied MNIST dataset with dense (LeNet300-100) and convolutional (LeNet5) architectures. The details of the training procedure are described in Appendix A.

The preliminary results presented in Table 2 suggest that the joint procedure presented in this paper results in a dramatic increase of the compression ratio, achieving state-of-the-art results for both the dense and the convolutional network architectures. Further work could address the interaction between the soft weight-sharing methodology and structured-sparsity-inducing techniques [Wen et al., 2016, Louizos et al., 2017] to reduce the overhead introduced by the compression format.

Architecture     Training   Acc (%)   Non-zero weights (%)   CR
LeNet300-100     L2         98.39     100                    1
                 SWS        98.16     8.6                    34
                 VD         98.05     1.6                    131
                 VD+SWS     98.24     1.5                    161
LeNet5           L2         99.13     100                    1
                 SWS        99.01     3.6                    84
                 VD         99.09     0.7                    349
                 VD+SWS     99.14     0.5                    482
Table 2: Test accuracy (Acc), percentage of non-zero weights and compression ratio (CR) evaluated on the MNIST classification task. Soft weight sharing (SWS) [Ullrich et al., 2017] and Variational Dropout (VD) [Molchanov et al., 2017] are compared with ridge regression (L2) and the combined approach proposed in this work (VD+SWS). All accuracies are evaluated on the compressed models (details on the compression scheme in Appendix B).
Figure: best accuracies and compression ratios obtained by the three different techniques on the LeNet5 architecture for different hyper-parameter values.

References

  • Ullrich et al. [2017] K. Ullrich, E. Meeds, and M. Welling. Soft weight-sharing for neural network compression. ICLR, 2017.
  • Nowlan and Hinton [1992] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473–493, 1992. URL http://dx.doi.org/10.1162/neco.1992.4.4.473.
  • Molchanov et al. [2017] D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsifies deep neural networks. ICML, 2017.
  • Kingma et al. [2015] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. arXiv e-prints, 2015.
  • Han et al. [2015] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv e-prints, 2015.
  • Wen et al. [2016] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. CoRR, abs/1608.03665, 2016. URL http://arxiv.org/abs/1608.03665.
  • Louizos et al. [2017] C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. NIPS, 2017.

Appendix A Training

The full training objective is given by the variational lower bound in Equation 1, where the two KL-divergence terms of Equation 2 have been scaled by two coefficients, $\tau_1$ and $\tau_2$ respectively:

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(\mathbf{w},\mathbf{m})}\big[\log p(\mathcal{D}\mid\mathbf{w})\big] - \tau_1 \sum_{i=1}^{N} KL\big(q(w_i \mid m_i)\,\|\,p(w_i)\big) + \tau_2 \sum_{i=1}^{N} \log \text{MoG}(m_i;\,\theta)$$

Starting from a pre-trained model, in a first warm-up phase we set $\tau_1 = 1$ and $\tau_2 = 0$. Note that this part of the training procedure matches the Sparse Variational Dropout methodology [Molchanov et al., 2017]. After reaching convergence (200 epochs in our experiments), we initialize the parameters $\theta$ of the mixture model and set $\tau_2$ to a non-zero value ($\tau_1$ is kept at 1) to induce the clustering effect of the soft weight-sharing procedure. This phase usually requires 50-100 epochs to reach convergence.
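The two-phase schedule can be summarized in a few lines; train_epoch and init_mixture_from_weights are hypothetical placeholders for the actual optimization step and mixture initialization, and the value used for tau2 in the second phase is only illustrative.

```python
def train_epoch(tau1, tau2):
    """Placeholder: one epoch of maximizing the scaled variational lower bound."""
    pass

def init_mixture_from_weights():
    """Placeholder: initialize the MoG parameters theta from the current weight centers."""
    pass

# Phase 1: warm-up, equivalent to Sparse Variational Dropout (mixture term disabled).
for _ in range(200):
    train_epoch(tau1=1.0, tau2=0.0)

# Phase 2: switch on the mixture (soft weight-sharing) term to induce clustering.
init_mixture_from_weights()
for _ in range(100):
    train_epoch(tau1=1.0, tau2=1.0)  # tau2 = 1.0 is an illustrative choice
```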

The mixture model used in our experiments has 17 components, one of which is fixed at zero with a fixed mixing proportion $\pi_0$. A Gamma hyper-prior has been applied to the precisions of the Gaussian components to ensure numerical stability.
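As a sketch, the hyper-prior contributes a Gamma log-density term over the component precisions to the objective; the shape and rate values below are placeholders, since the specific hyper-parameters are not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def gamma_log_hyperprior(log_lambda_j, shape=1.0, rate=1.0):
    """Sum of log Gamma(shape, rate) densities evaluated at the component precisions.
    shape and rate are placeholder values used only for illustration."""
    lam = np.exp(log_lambda_j)
    return np.sum(shape * np.log(rate) - gammaln(shape)
                  + (shape - 1.0) * np.log(lam) - rate * lam)
```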

The proposed parametrization stores the weight variances $\sigma_i^2$, the precision $\lambda_j$ of each mixture component and the mixing proportions $\pi_j$ in logarithmic space. All models have been trained with the Adam optimizer; the learning rates and initialization values of the parameters are reported in Table 3.

Initialization: weight centers $\mathbf{m} \leftarrow \mathbf{w}^*$, log-variances $\log\sigma_i^2 \leftarrow -10$, mixture means spaced by $\Delta\mu$.
Table 3: Learning rates and initialization values corresponding to the model parameters. $\mathbf{w}^*$ represents the weight vector of a pre-trained model, while $\Delta\mu$ represents the distance between the means of the mixture components and is obtained by dividing two times the standard deviation of the pre-trained weight distribution by the number of mixture components $K$.
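The following sketch makes the initialization concrete; the variable names and the symmetric placement of the non-zero component means around the fixed zero component are assumptions of this illustration, not the exact scheme used in the experiments.

```python
import numpy as np

def init_variational_parameters(w_pretrained, num_components=17):
    """Sketch of the initialization summarized in Table 3 (names are ours).
    Centers start at the pre-trained weights, log-variances at -10, and the
    mixture-component means are spaced by delta = 2 * std(w) / K."""
    m = np.array(w_pretrained, dtype=float)                   # weight centers
    log_sigma2 = np.full_like(m, -10.0)                       # log-variances of q(w_i | m_i)
    K = num_components
    delta = 2.0 * np.std(m) / K                               # spacing between component means
    offsets = np.concatenate((np.arange(-(K - 1) // 2, 0), np.arange(1, (K - 1) // 2 + 1)))
    mu_j = np.concatenate(([0.0], delta * offsets))           # zero spike + spread means
    return m, log_sigma2, mu_j
```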

Appendix B Compression

The computation of the maximum compression ratio follows the procedure reported in Han et al. [2015]. The first part of the pipeline differs slightly between the methodologies:

  1. Sparse Variational Dropout (VD)

    At the end of the Sparse Variational Dropout training procedure (VD), all the parameters whose binary dropout rate is greater than a fixed threshold are set to zero [Molchanov et al., 2017]. Secondly, the weight means are clustered with a 64-component mixture model and each is collapsed into the mean of the Gaussian with the highest responsibility (a code sketch of both post-processing steps follows this list).

  2. Soft Weight Sharing (SWS) and joint approach (SWS+VD)

    Once the training procedures involving the mixture of Gaussians have converged to a stable configuration, the overlapping components are merged and each weight is collapsed into the mean of the component with the highest responsibility (see Ullrich et al. [2017] for details). Since the weights assigned to the zero-centered component are automatically set to zero, there is no need to specify a pruning threshold.
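A rough sketch of both post-processing steps, assuming per-weight log dropout-rate values log_alpha and a fitted mixture described by its component means and responsibilities; all names and the threshold value are illustrative.

```python
import numpy as np

def prune_by_dropout_rate(m, log_alpha, threshold=3.0):
    """VD step: set to zero every weight whose log dropout rate exceeds the threshold.
    The value 3.0 is a commonly used cut-off, shown here only for illustration."""
    return np.where(log_alpha > threshold, 0.0, m)

def collapse_to_components(component_means, responsibilities):
    """Quantization step: replace each weight by the mean of the mixture component
    with the highest responsibility and return the discrete assignments."""
    assignments = np.argmax(responsibilities, axis=1)
    return component_means[assignments], assignments
```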

At the end of this initial phase, the models are represented as a sequence of sparse matrices containing discrete assignments. Empty columns and filter banks are removed and the model architecture is adjusted accordingly. The discrete assignments are encoded using the Huffman compression scheme and the resulting matrices are stored in the CSC format, using 5 and 8 bits to represent the offset between the entries in dense and convolutional layers respectively.
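To make the storage format concrete, the sketch below estimates the compressed size of one layer from its discrete assignments: each non-zero entry costs a fixed-width CSC-style offset (5 bits for dense layers, 8 for convolutional ones) plus a Huffman code for its cluster index, approximated here by the empirical entropy of the assignments. This is a simplified estimate (it ignores the codebook and any filler entries needed when an offset overflows), not the exact pipeline of Han et al. [2015].

```python
import numpy as np

def estimate_layer_bits(assignments, offset_bits=5):
    """Rough compressed-size estimate for one layer.
    `assignments`: integer cluster index per weight, with 0 denoting a pruned weight.
    The average Huffman code length is approximated by the assignment entropy."""
    nonzero = assignments[assignments != 0]
    if nonzero.size == 0:
        return 0.0
    counts = np.bincount(nonzero)
    probs = counts[counts > 0] / nonzero.size
    entropy = -np.sum(probs * np.log2(probs))                 # ~ average Huffman code length
    return nonzero.size * (offset_bits + entropy)
```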

Appendix C Derivation of the KL-Divergence

In order to compute the KL-divergence between the joint prior and the approximate posterior distribution, we use the factorization reported in Table 1:

$$KL\big(q_\phi(\mathbf{w},\mathbf{m})\,\|\,p(\mathbf{w},\mathbf{m})\big) = \mathbb{E}_{q_\phi(\mathbf{w},\mathbf{m})}\left[\sum_{i=1}^{N} \log \frac{q(w_i \mid m_i)\, q(m_i)}{p(w_i)\, p(m_i)}\right]$$

By plugging in the factorized forms and observing that the expectation with respect to the delta distribution $q(m_i)$ reduces to an evaluation at its peak, we obtain:

$$KL\big(q_\phi(\mathbf{w},\mathbf{m})\,\|\,p(\mathbf{w},\mathbf{m})\big) = \sum_{i=1}^{N} \Big[ KL\big(q(w_i \mid m_i)\,\|\,p(w_i)\big) + KL\big(q(m_i)\,\|\,p(m_i)\big) \Big] \qquad (3)$$

where the first KL-divergence can be approximated according to Molchanov et al. [2017]:

$$KL\big(q(w_i \mid m_i)\,\|\,p(w_i)\big) \approx k_1 - k_1\,\sigma\!\big(k_2 + k_3 \log \alpha_i\big) + \tfrac{1}{2}\log\big(1 + \alpha_i^{-1}\big) \qquad (4)$$

with $\alpha_i = \sigma_i^2 / m_i^2$, $\sigma(\cdot)$ the sigmoid function, and $k_1 = 0.63576$, $k_2 = 1.87320$, $k_3 = 1.48695$.

The second term can be computed by decomposing the KL-divergence into the negative entropy of the approximate posterior $q(m_i)$ and the cross-entropy $H\big(q(m_i), p(m_i)\big)$ between the posterior and the prior. Note that the entropy of a delta distribution does not depend on its location parameter, therefore it can be considered constant.

$$KL\big(q(m_i)\,\|\,p(m_i)\big) = -H\big(q(m_i)\big) - \mathbb{E}_{q(m_i)}\big[\log p(m_i)\big] = C - \log \text{MoG}(m_i;\,\theta) \qquad (5)$$

Plugging the results of Equations 4 and 5 into Equation 3, we obtain the full expression for the KL-divergence reported in Equation 2.
