The Description Length of Deep Learning Models


Léonard Blier
École Normale Supérieure
Paris, France
leonard.blier@ens.fr
Yann Ollivier
Facebook Artificial Intelligence Research
Paris, France
yol@fb.com
Abstract

Solomonoff’s general theory of inference (Solomonoff, 1964) and the Minimum Description Length principle (Grünwald, 2007) formalize Occam’s razor, and hold that a good model of data is a model that is good at losslessly compressing the data, including the cost of describing the model itself. Deep neural networks might seem to go against this principle given the large number of parameters to be encoded.

We demonstrate experimentally the ability of deep neural networks to compress the training data even when accounting for parameter encoding. The compression viewpoint originally motivated the use of variational methods in neural networks (Hinton and Van Camp, 1993, Schmidhuber, 1997). Surprisingly, we found that these variational methods provide very poor compression bounds, despite being explicitly built to minimize such bounds. This might explain the relatively poor practical performance of variational methods in deep learning. On the other hand, simple incremental encoding methods yield excellent compression values on deep networks, vindicating Solomonoff’s approach.

 

Preprint. Work in progress.

1 Introduction

Deep learning has achieved remarkable results in many different areas (LeCun et al., 2015). Still, the ability of deep models not to overfit despite their large number of parameters is not well understood. To quantify the complexity of these models in light of their generalization ability, several metrics beyond parameter-counting have been measured, such as the number of degrees of freedom of models (Gao and Jojic, 2016), or their intrinsic dimension (Li et al., 2018). These works concluded that deep learning models are significantly simpler than their numbers of parameters might suggest.

In information theory and Minimum Description Length (MDL), learning a good model of the data is recast as using the model to losslessly transmit the data in as few bits as possible. More complex models will compress the data more, but the model must be transmitted as well. The overall codelength can be understood as a combination of quality-of-fit of the model (compressed data length), together with the cost of encoding (transmitting) the model itself. For neural networks, the MDL viewpoint goes back as far as (Hinton and Van Camp, 1993), which used a variational technique to estimate the joint compressed length of data and parameters in a neural network model.

Compression is strongly related to generalization and practical performance. Standard sample complexity bounds (VC-dimension, PAC-Bayes…) are related to the compressed length of the data in a model, and any compression scheme leads to generalization bounds (Blum and Langford, 2003). Specifically for deep learning, (Arora et al., 2018) showed that compression leads to generalization bounds (see also (Dziugaite and Roy, 2017)). Several other deep learning methods have been inspired by information theory and the compression viewpoint. In unsupervised learning, autoencoders and especially variational autoencoders (Kingma and Welling, 2013) are compression methods of the data (Ollivier, 2014). In supervised learning, the information bottleneck method studies how the hidden representations in a neural network compress the inputs while preserving the mutual information between inputs and outputs (Tishby and Zaslavsky, 2015, Shwartz-Ziv and Tishby, 2017, Achille and Soatto, 2017).

MDL is based on Occam’s razor, and on Chaitin’s hypothesis that “comprehension is compression” (Chaitin, 2007): any regularity in the data can be exploited both to compress it and to make predictions. This is ultimately rooted in Solomonoff’s general theory of inference (Solomonoff, 1964) (see also, e.g., (Hutter, 2007, Schmidhuber, 1997)), whose principle is to favor models that correspond to the “shortest program” to produce the training data, based on its Kolmogorov complexity (Li and Vitányi, 2008). If no structure is present in the data, no compression to a shorter program is possible.

The problem of overfitting fake labels is a nice illustration: convolutional neural networks commonly used for image classification are able to fit random labels perfectly on the training set (Zhang et al., 2017). However, measuring the associated compression bound (Fig. 1) immediately reveals that these models do not compress fake labels (and indeed, theoretically, they cannot, see Appendix A), that no information is present in the model parameters, and that no learning has occurred.

Figure 1: Fake labels cannot be compressed. Measuring codelength while training a deep model on MNIST with true and fake labels. The model is an MLP with 3 hidden layers of size 200, with ReLU units. With ordinary SGD training, the model is able to overfit random labels. The plot shows the effect of using variational learning instead, and reports the variational objective (encoding cost of the training data, see Section 3.3), on true and fake labels. We also isolated the contribution from parameter encoding in the total loss (the KL term in (2)). With true labels, the encoding cost is below the uniform encoding, and half of the description length is information contained in the weights. With fake labels, on the contrary, the encoding cost converges to that of a uniform random model, with no information contained in the weights: there is no mutual information between inputs and outputs.

In this work we explicitly measure how much current deep models actually compress data. (We introduce no new architectures or learning procedures.) As seen above, this may clarify several issues around generalization and measures of model complexity. Our contributions are:

  • We show that the traditional method to estimate MDL codelengths in deep learning, variational inference (Hinton and Van Camp, 1993), yields surprisingly inefficient codelengths for deep models, despite explicitly minimizing this criterion. This might explain why variational inference as a regularization method often does not reach optimal test performance.

  • We introduce new practical ways to compute tight compression bounds in deep learning models, based on the MDL toolbox (Grünwald, 2007). We show that prequential coding on top of standard learning yields much better codelengths than variational inference, correlating better with test set performance. Thus, despite their many parameters, deep learning models do compress the data well, even when accounting for the cost of describing the model.

2 Probabilistic Models, Compression, and Information Theory

Imagine that Alice wants to efficiently transmit some information to Bob. Alice has a dataset $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, where the $x_i$ are inputs and the $y_i$ are labels. We do not assume that these data come from a “true” probability distribution. Bob also has the inputs $x_{1:n}$, but he does not have the labels. This describes a supervised learning situation in which the inputs may be publicly available, and a prediction of the labels is needed. How can deep learning models help with data encoding? One key problem is that Bob does not necessarily know the precise, trained model that Alice is using. So some explicit or implicit transmission of the model itself is required.

We study, in turn, various methods to encode the labels $y_{1:n}$, with or without a deep learning model. Encoding the labels knowing the inputs amounts to estimating the mutual information between inputs and outputs (Section 2.4); this is distinct from the problem of practical network compression (Section 3.2) or from using neural networks for lossy data compression. Our running example will be image classification on the MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky, 2009) datasets.

2.1 Definitions and notation

Let $\mathcal{X}$ be the input space and $\mathcal{Y}$ the output (label) space. In this work, we only consider classification tasks, so $\mathcal{Y} = \{1, \ldots, K\}$. The dataset is $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$. We denote $y_{1:k} := (y_1, \ldots, y_k)$ and likewise for $x_{1:k}$. We define a model for the supervised learning problem as a conditional probability distribution $p(y \mid x)$, namely, a function $p \colon \mathcal{Y} \times \mathcal{X} \to [0, 1]$ such that for each $x$, $\sum_{y \in \mathcal{Y}} p(y \mid x) = 1$. A model class, or architecture, is a set of models depending on some parameter $\theta$: $\mathcal{M} = \{p_\theta,\ \theta \in \Theta\}$. The Kullback–Leibler divergence between two distributions $\mu$ and $\nu$ is $\mathrm{KL}(\mu \,\|\, \nu) = \mathbb{E}_{x \sim \mu}\!\left[\log_2 \frac{\mu(x)}{\nu(x)}\right]$.

2.2 Models and codelengths

We recall a basic result of compression theory (Shannon, 1948).

Proposition 1 (Shannon–Huffman code).

Suppose that Alice and Bob have agreed in advance on a model $p$, and that both know the inputs $x_{1:n}$. Then there exists a code to transmit the labels $y_{1:n}$ losslessly with codelength (up to at most one bit on the whole sequence)

$$ L_p(y_{1:n} \mid x_{1:n}) = -\sum_{i=1}^{n} \log_2 p(y_i \mid x_i) \qquad (1) $$

This bound is known to be optimal if the data are independent and coming from the model $p$ (Mackay, 2003). The one additional bit in the Shannon–Huffman code is incurred only once for the whole dataset (Mackay, 2003). With large datasets this is negligible. Thus, from now on we will systematically omit this additional bit, as well as admit non-integer codelengths (Grünwald, 2007). We will use the terms codelength or compression bound interchangeably.

This bound is exactly the categorical cross-entropy loss evaluated on the model . Hence, trying to minimize the description length of the outputs over the parameters of a model class is equivalent to minimizing the usual classification loss.
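As a concrete illustration (a minimal numpy sketch, not code from the paper), the codelength of Proposition 1 is simply the base-2 cross-entropy of the model's predicted probabilities on the true labels:

```python
import numpy as np

def codelength_bits(probs, labels):
    """Codelength of Eq. (1): -sum_i log2 p(y_i | x_i), i.e. the base-2 cross-entropy."""
    n = len(labels)
    return -np.sum(np.log2(probs[np.arange(n), labels]))

# Toy example: 3 samples, 10 classes, the model puts probability 0.9 on the true class.
p = np.full((3, 10), 0.1 / 9)
p[np.arange(3), [2, 7, 0]] = 0.9
print(codelength_bits(p, np.array([2, 7, 0])))  # about 0.46 bits in total
```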

Here we do not consider the practical implementation of compression algorithms: we only care about the theoretical bit length of their associated encodings. We are interested in measuring the amount of information contained in the data, the mutual information between input and output, and how it is captured by the model. Thus, we will directly work with codelength functions.

An obvious limitation of the bound (1) is that Alice and Bob both have to know the model in advance. This is problematic if the model must be learned from the data.

2.3 Uniform encoding

The uniform distribution $p^{\mathrm{unif}}(y \mid x) = \frac{1}{K}$ over the classes does not require any learning from the data, thus no additional information has to be transmitted. Using (1) yields a codelength

$$ L^{\mathrm{unif}}(y_{1:n} \mid x_{1:n}) = n \log_2 K \qquad (2) $$

This uniform encoding will be a sanity check against which to compare the other encodings in this text. For MNIST ($n = 60{,}000$, $K = 10$), the uniform encoding cost is about 199 kbits. For CIFAR10 ($n = 50{,}000$, $K = 10$), the uniform encoding cost is about 166 kbits.
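The arithmetic behind these two figures, assuming the standard training-set sizes of 60,000 (MNIST) and 50,000 (CIFAR10) images with K = 10 classes:

```python
import math

# Uniform codelength n * log2(K) from Eq. (2).
for name, n in [("MNIST", 60_000), ("CIFAR10", 50_000)]:
    print(name, n * math.log2(10) / 1000, "kbits")  # ~199 and ~166 kbits
```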

2.4 Mutual information between inputs and outputs

Intuitively, the only way to beat a trivial encoding of the outputs is to use the mutual information (in a loose sense) between the inputs and outputs.

This can be formalized as follows. Assume that the pairs $(x_i, y_i)$ follow a “true” joint distribution $q(x, y)$. Then any transmission method with codelength $L$ satisfies (Mackay, 2003)

$$ \mathbb{E}_q\!\left[L(y_{1:n} \mid x_{1:n})\right] \;\geq\; n\, H(y \mid x) \qquad (3) $$

where $H(y \mid x)$ is the conditional entropy of $y$ given $x$ under $q$.

Therefore, the gain (per data point) between the codelength $L$ and the trivial codelength $L^{\mathrm{unif}}$ is at most

$$ \frac{1}{n}\left(L^{\mathrm{unif}}(y_{1:n} \mid x_{1:n}) - \mathbb{E}_q\!\left[L(y_{1:n} \mid x_{1:n})\right]\right) \;\leq\; \log_2 K - H(y \mid x) \qquad (4) $$

which, for balanced classes ($H(y) = \log_2 K$), is exactly $H(y) - H(y \mid x) = I(y; x)$, the mutual information between inputs and outputs (Mackay, 2003).

Thus, the gain of any codelength compared to the uniform code is limited by the amount of mutual information between input and output. (This bound is reached with the true model $q(y \mid x)$.) Any successful compression of the labels is, at the same time, a direct estimation of the mutual information between input and output. The latter is the central quantity in the Information Bottleneck approach to deep learning models (Shwartz-Ziv and Tishby, 2017).

Note that this still makes sense without assuming a true underlying probabilistic model, by replacing the mutual information with the “absolute” mutual information based on Kolmogorov complexity (Li and Vitányi, 2008).

3 Compression Bounds via Deep Learning

| Code | MNIST codelength (kbits) | MNIST comp. ratio | MNIST test acc. | CIFAR10 codelength (kbits) | CIFAR10 comp. ratio | CIFAR10 test acc. |
| --- | --- | --- | --- | --- | --- | --- |
| Uniform | 199 | 1.00 | 10% | 166 | 1.00 | 10% |
| float32 2-part | – | – | 98.4% | – | – | 92.9% |
| Network compr. | – | – | 98.4% | – | – | 93.3% |
| Intrinsic dim. | 9.28 | 0.05 | 90% | 92.8 | 0.56 | 70% |
| Variational | 22.2 | 0.11 | 98.2% | 89.0 | 0.54 | 66.5% |
| Prequential | 4.10 | 0.02 | 99.5% | 45.3 | 0.27 | 93.3% |
Table 1: Compression bounds via Deep Learning. Compression bounds given by different codes on two datasets, MNIST and CIFAR10. The Codelength is the number of bits necessary to send the labels to someone who already has the inputs. This codelength includes the description length of the model. The compression ratio for a given code is the ratio between its codelength and the codelength of the uniform code. The test accuracy of a model is the accuracy of its predictions on the test set. For 2-part and network compression codes, we report results from (Han et al., 2015a) and (Xu et al., 2017), and for the intrinsic dimension code, results from (Li et al., 2018). The values in the table for these codelengths and compression ratio are lower bounds, only taking into account the codelength of the weights, and not the codelength of the data encoded with the model (the final loss is not always available in these publications). For variational and prequential codes, we selected the model and hyperparameters providing the best compression bound.

Various compression methods from the MDL toolbox can be used on deep learning models. (Note that a given model can be stored or encoded in several ways, some of which may have large codelengths. A good model in the MDL sense is one that admits at least one good encoding.)

3.1 Two-Part Encodings

Alice and Bob can first agree on a model class (such as “neural networks with two layers and 1,000 neurons per layer”). However, Bob does not have access to the labels, so Bob cannot train the parameters of the model. Therefore, if Alice wants to use such a parametric model, the parameters themselves have to be transmitted. Such codes, in which Alice first transmits the parameters of a model and then encodes the data using these parameters, have been called two-part codes (Grünwald, 2007).

Definition 1 (Two-part codes).

Assume that Alice and Bob have first agreed on a model class $\{p_\theta,\ \theta \in \Theta\}$. Let $L_{\mathrm{param}}(\theta)$ be any encoding scheme for parameters $\theta \in \Theta$. Let $\theta$ be any parameter. The corresponding two-part codelength is

$$ L^{\mathrm{2\text{-}part}}_{\theta}(y_{1:n} \mid x_{1:n}) = L_{\mathrm{param}}(\theta) - \sum_{i=1}^{n} \log_2 p_\theta(y_i \mid x_i) \qquad (1) $$

An obvious possible code for $\theta$ is the standard float32 binary encoding, for which $L_{\mathrm{param}}(\theta) = 32\,d$ with $d$ the number of parameters. In deep learning, such two-part codes are widely inefficient and much worse than the uniform encoding (Graves, 2011). For a model with 1 million parameters, the two-part code with float32 binary encoding will amount to 32 Mbits for the parameters alone, or about 200 times the uniform encoding on CIFAR10.
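A hedged sketch of this computation (the function below is illustrative; `nll_bits` stands for the data term of Eq. (1) and is not part of the paper's code):

```python
# Two-part codelength of Definition 1 with the naive float32 parameter code
# (32 bits per parameter); `nll_bits` is the data term -sum_i log2 p_theta(y_i|x_i).
def two_part_codelength_bits(num_params, nll_bits, bits_per_param=32):
    return bits_per_param * num_params + nll_bits

# Parameter cost alone for a 1M-parameter network: 32 Mbits,
# i.e. roughly 200 times the ~166 kbit uniform code on CIFAR10.
print(two_part_codelength_bits(1_000_000, nll_bits=0.0) / 1e6, "Mbits")
```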

3.2 Network Compression

The practical encoding of trained models is a well-developed research topic, e.g., for use on small devices such as cell phones. Such encodings can be seen as two-part codes using a cleverer code for $\theta$ instead of encoding every parameter on 32 bits. Possible strategies include training a smaller student network to approximate a well-trained network (Ba and Caruana, 2014, Romero et al., 2015), or pipelines involving retraining, pruning, and quantization of the model weights (Han et al., 2015a, b, Simonyan and Zisserman, 2014, Louizos et al., 2017, See et al., 2016, Ullrich et al., 2017).

Still, the resulting codelengths (for compressing the labels given the data) are way above the uniform compression bound for image classification (Table 1).

Another scheme for network compression, less used in practice but very informative, is to sample a random low-dimensional affine subspace in parameter space and to optimize in this subspace (Li et al., 2018). The number of parameters is thus reduced to the dimension of the subspace and we can use the associated two-part encoding. (The random subspace can be transmitted via a pseudorandom seed.) Our methodology to derive compression bounds from (Li et al., 2018) is detailed in Appendix B.

3.3 Variational and Bayesian Codes

Another strategy for encoding weights with limited precision is to represent these weights by random variables: the uncertainty on $\theta$ represents the precision with which $\theta$ is transmitted. The variational code turns this into an explicit encoding scheme, thanks to the bits-back argument (Honkela and Valpola, 2004). Initially a way to compute codelength bounds with neural networks (Hinton and Van Camp, 1993), this is now often seen as a regularization technique (Blundell et al., 2015). This method yields the following codelength.

Definition 2 (Variational code).

Assume that Alice and Bob have agreed on a model class $\{p_\theta,\ \theta \in \Theta\}$ and a prior $\alpha$ over $\Theta$. Then for any distribution $\beta$ over $\Theta$, there exists an encoding with codelength

$$ L^{\mathrm{var}}_{\beta}(y_{1:n} \mid x_{1:n}) = \mathrm{KL}(\beta \,\|\, \alpha) + \mathbb{E}_{\theta \sim \beta}\!\left[-\sum_{i=1}^{n} \log_2 p_\theta(y_i \mid x_i)\right] \qquad (2) $$

This can be minimized over $\beta$, by choosing a parametric family $(\beta_\phi)$ and minimizing (2) over $\phi$. A common model class for $\beta$ is the set of multivariate Gaussian distributions $\beta_{\mu, \Sigma} = \mathcal{N}(\mu, \Sigma)$, and $\mu$ and $\Sigma$ can be optimized with a stochastic gradient descent algorithm (Graves, 2011, Kucukelbir et al., 2017). $\Sigma$ can be interpreted as the precision with which the parameters are encoded.
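For this Gaussian case, the variational codelength of Definition 2 can be evaluated as follows (a minimal numpy sketch under the stated Gaussian assumptions; `nll_bits`, the data term of the model for a given parameter sample, is a user-supplied placeholder rather than the paper's code):

```python
import numpy as np

def kl_diag_gauss_bits(mu, sigma, sigma0):
    """KL( N(mu, diag(sigma^2)) || N(0, sigma0^2 I) ), converted to bits."""
    kl_nats = np.sum(np.log(sigma0 / sigma) + (sigma**2 + mu**2) / (2 * sigma0**2) - 0.5)
    return kl_nats / np.log(2)

def variational_codelength_bits(mu, sigma, sigma0, nll_bits, n_samples=8, seed=0):
    """Definition 2: KL(beta || alpha) + E_{theta~beta}[-sum_i log2 p_theta(y_i|x_i)],
    the expectation being estimated by Monte Carlo sampling of theta."""
    rng = np.random.default_rng(seed)
    mc = np.mean([nll_bits(mu + sigma * rng.standard_normal(mu.shape))
                  for _ in range(n_samples)])
    return kl_diag_gauss_bits(mu, sigma, sigma0) + mc

# Tiny usage example with a dummy data term (quadratic in theta, for illustration only):
mu, sigma = np.zeros(4), 0.1 * np.ones(4)
print(variational_codelength_bits(mu, sigma, sigma0=1.0,
                                  nll_bits=lambda th: float(np.sum(th**2))))
```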

The variational bound (2) is an upper bound on the description length achieved by the Bayesian model built from the model class $(p_\theta)$ and the prior $\alpha$. Considering the Bayesian marginal distribution of $y_{1:n}$,

$$ p^{\mathrm{Bayes}}(y_{1:n} \mid x_{1:n}) = \int_{\theta} \alpha(\theta) \prod_{i=1}^{n} p_\theta(y_i \mid x_i)\, \mathrm{d}\theta \qquad (3) $$

Proposition 1 provides an associated code via (1) with model $p^{\mathrm{Bayes}}$. Then, for any $\beta$ we have (Graves, 2011)

$$ L^{\mathrm{Bayes}}(y_{1:n} \mid x_{1:n}) := -\log_2 p^{\mathrm{Bayes}}(y_{1:n} \mid x_{1:n}) \;\leq\; L^{\mathrm{var}}_{\beta}(y_{1:n} \mid x_{1:n}) \qquad (4) $$

with equality if and only if $\beta$ is equal to the Bayesian posterior $p^{\mathrm{Bayes}}(\theta \mid x_{1:n}, y_{1:n})$. Variational methods can thus be used as approximate Bayesian inference when the exact Bayesian posterior is intractable.

We computed practical compression bounds with variational methods on MNIST and CIFAR10. Neural networks that give the best variational compression bounds appear to be smaller than networks trained the usual way. We tested various fully connected networks and convolutional networks (Appendix C): the models that gave the best variational compression bounds were small LeNet-like networks. To test the link between compression and test accuracy, in Table 1 we report the best model based on compression, not test accuracy. This results in a drop of test accuracy with respect to other settings.

On MNIST, this provides a codelength of the labels (knowing the inputs) of 22.2 kbits, i.e., a compression ratio of 0.11. The corresponding model achieved 98.2% accuracy on the test set.

On CIFAR, we obtained a codelength of 89.0 kbits, i.e., a compression ratio of 0.54. The corresponding model achieved 66.5% classification accuracy on the test set.

We can make two observations. First, choosing the model class which minimizes variational codelength selects smaller deep learning models than would cross-validation. Second, the model with best variational codelength has low classification accuracy on the test set on MNIST and CIFAR, compared to models trained in a non-variational way. This aligns with a common criticism of Bayesian methods as too conservative for model selection compared with cross-validation (Rissanen et al., 1992, Foster and George, 1994, Barron and Yang, 1999, Grünwald, 2007).

3.4 Prequential or Online Code

The next coding procedure shows that deep neural models which generalize well also compress well.

The prequential (or online) code is a way to encode both the model and the labels without directly encoding the weights, based on the prequential approach to statistics (Dawid, 1984), by using prediction strategies. Intuitively, a model with default values is used to encode the first few data; then the model is trained on these few encoded data; this partially trained model is used to encode the next data; then the model is retrained on all data encoded so far; and so on.

Precisely, we call $p$ a prediction strategy for predicting the labels in $\mathcal{Y}$ knowing the inputs in $\mathcal{X}$ if, for all $k$, $p(y_{k+1} \mid x_{1:k+1}, y_{1:k})$ is a conditional model; namely, any strategy for predicting the $(k{+}1)$-th label after already having seen $k$ input-output pairs. In particular, such a model may learn from the first $k$ data samples. Any prediction strategy defines a model on the whole dataset:

$$ p^{\mathrm{preq}}(y_{1:n} \mid x_{1:n}) = \prod_{k=0}^{n-1} p(y_{k+1} \mid x_{1:k+1}, y_{1:k}) \qquad (5) $$

Let $(p_\theta)$ be a deep learning model. We assume that we have a learning algorithm which computes, from any number of data samples $(x, y)_{1:k}$, a trained parameter vector $\hat\theta(x_{1:k}, y_{1:k})$. Then the data are encoded in an incremental way: at each step $k$, the model $p_{\hat\theta(x_{1:k}, y_{1:k})}$ is used to predict $y_{k+1}$.

In practice, the learning procedure may only reset and retrain the network at certain timesteps. We choose timesteps $t_1 < t_2 < \cdots < t_S = n$, and we encode the data by blocks $y_{t_s+1:t_{s+1}}$, always using the model learned from the already transmitted data (Algorithm 2 in Appendix D). A uniform encoding is used for the first $t_1$ labels. (Even though the encoding procedure is called “online”, it does not mean that only the most recent sample is used to update the parameter $\hat\theta$: the optimization procedure can be any predefined technique using all the previous samples $(x, y)_{1:t_s}$, only requiring that the algorithm has an explicit stopping criterion.) This yields the following description length:

Definition 3 (Prequential code).

Given a model class $(p_\theta)$, a learning algorithm $\hat\theta \colon (x, y)_{1:k} \mapsto \hat\theta(x_{1:k}, y_{1:k})$, and retraining timesteps $t_1 < \cdots < t_S = n$, the prequential codelength is

$$ L^{\mathrm{preq}}(y_{1:n} \mid x_{1:n}) = t_1 \log_2 K \;-\; \sum_{s=1}^{S-1} \log_2 p_{\hat\theta_s}\!\left(y_{t_s+1:t_{s+1}} \,\middle|\, x_{t_s+1:t_{s+1}}\right) \qquad (6) $$

where for each $s$, $\hat\theta_s = \hat\theta(x_{1:t_s}, y_{1:t_s})$ is the parameter learned on data samples $1$ to $t_s$, and $p_\theta(y_{t_s+1:t_{s+1}} \mid x_{t_s+1:t_{s+1}}) = \prod_{i=t_s+1}^{t_{s+1}} p_\theta(y_i \mid x_i)$.

The model parameters are never encoded explicitly in this method. The difference between the prequential codelength (6) and the log-loss $-\sum_{i=1}^{n} \log_2 p_{\hat\theta(x_{1:n}, y_{1:n})}(y_i \mid x_i)$ of the final trained model can be interpreted as the amount of information that the trained parameters contain about the data: the former is the codelength of the data if Bob does not know the parameters, while the latter is the codelength of the same data knowing the parameters.

Prequential codes depend on the performance of the underlying training algorithm, and take advantage of the model’s generalization ability from the previous data to the next. In particular, the model training should yield good generalization performance from data $(x, y)_{1:t_s}$ to data $(x, y)_{t_s+1:t_{s+1}}$.

In practice, optimization procedures for neural networks may be stochastic (initial values, dropout, data augmentation…), and Alice and Bob need to make all the same random choices in order to obtain the same final model. One possibility is to agree on a random seed $\omega$ (or pseudorandom numbers) beforehand, so that the random optimization procedure is deterministic given $\omega$. Hyperparameters may also be transmitted first (the cost of sending a few numbers is small).
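The following schematic Python function summarizes the prequential computation of Definition 3 (see also Algorithm 2 in Appendix D); `train` and `nll_bits` are placeholders for the learning algorithm and for the block log-loss in bits, i.e., assumptions rather than the paper's training code:

```python
import numpy as np

def prequential_codelength_bits(x, y, n_classes, timesteps, train, nll_bits):
    """Schematic version of Eq. (6); timesteps = [t_1, ..., t_S] with t_S = len(y)."""
    total = timesteps[0] * np.log2(n_classes)          # first block: uniform code
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        theta = train(x[:t_cur], y[:t_cur])            # learned on samples 1..t_s
        total += nll_bits(theta, x[t_cur:t_next], y[t_cur:t_next])
    return total

# Typical call (MNIST-like): timesteps = [8, 16, 32, ..., 32768, len(y)]
```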

Prequential coding with deep models provides excellent compression bounds. On MNIST, we computed the description length of the labels with different networks (Appendix D). The best compression bound was given by a convolutional network of depth 8. It achieved a description length of 4.10 kbits, i.e., a compression ratio of 0.02, with 99.5% test set accuracy (Table 1). This codelength is more than 5 times smaller than the variational codelength.

On CIFAR, we tested a simple multilayer perceptron, a shallow network, a small convolutional network, and a VGG convolutional network (Simonyan and Zisserman, 2014), first without data augmentation or batch normalization (VGGa) (Ioffe and Szegedy, 2015), then with both of them (VGGb) (Appendix D). The results are in Figure 2. The best compression bound was obtained with VGGb, achieving a codelength of 45.3 kbits, i.e., a compression ratio of 0.27, and 93.3% test set accuracy (Table 1). This codelength is about half the variational codelength. The difference between VGGa and VGGb also shows the impact of the training procedure on codelengths for a given architecture.

Figure 2: Prequential code results on CIFAR. Results of prequential encoding on CIFAR with 5 different models: a small multilayer perceptron (MLP), a shallow network, a small convolutional network (tinyCNN), a VGG-like network without data augmentation or batch normalization (VGGa), and the same VGG-like architecture with both (VGGb) (see Appendix D). Performance is reported during online training, as a function of the number of samples seen so far. Top left: codelength per sample (log loss) on each pack of data, given the previously transmitted data. Bottom left: accuracy on each pack of data for the model trained on the previously transmitted data. Top right: difference between the cumulated prequential codelength and the uniform encoding. Bottom right: compression ratio of the prequential code on the data seen so far.

Model Switching.

A weakness of prequential codes is the catch-up phenomenon (Van Erven et al., 2012). Large architectures might overfit during the first steps of the prequential encoding, when the model is trained on few data samples. Thus the encoding cost of the first packs of data might be worse than with the uniform code. Even after the encoding cost on current labels becomes lower, the cumulated codelength may need a lot of time to “catch up” on its initial lag. This can be observed in practice with neural networks: in Fig. 2, the VGGb model needs 5,000 samples on CIFAR before its cumulative compression ratio drops below 1, even though its encoding cost per label drops below the uniform cost after just 1,000 samples. This is efficiently solved by switching (Van Erven et al., 2012) between models (see Appendix E). Switching further improves the practical compression bounds, even when just switching between copies of the same model with different SGD stopping times (Fig. 3, Table 2).

4 Discussion

Too Many Parameters in Deep Learning Models?

From an information theory perspective, the goal of a model is to extract as much mutual information between the labels and inputs as possible, or equivalently (Section 2.4), to compress the labels. This cannot be achieved with 2-part codes or practical network compression. With the variational code, the models do compress the data, but with a worse prediction performance: one could conclude that deep learning models that achieve the best prediction performance cannot compress the data.

Thanks to the prequential code, we have seen that deep learning models, even with a large number of parameters, compress the data well: from an information theory point of view, the number of parameters is not an obstacle to compression. This is consistent with Chaitin’s hypothesis that “comprehension is compression”, contrary to previous observations with the variational code.

Prequential Code and Generalization.

The prequential encoding shows that a model that generalizes well for every dataset size will compress well. The efficiency of the prequential code is directly due to the generalization ability of the model at each time step.

Theoretically, three of the codes (two-part, Bayesian, and prequential based on a maximum likelihood or MAP estimator) are known to be asymptotically equivalent under strong assumptions (a $d$-dimensional identifiable model, data coming from the model, a suitable Bayesian prior, and technical assumptions ensuring the effective dimension of the trained model is not lower than $d$): in that case, these three methods yield a codelength $-\log_2 p_{\hat\theta}(y_{1:n} \mid x_{1:n}) + \frac{d}{2}\log_2 n + O(1)$ (Grünwald, 2007). This corresponds to the BIC criterion for model selection. Hence there was no obvious reason for the prequential code to be an order of magnitude better than the others.

However, deep learning models do not usually satisfy any of these hypotheses. Moreover, our prequential codes are not based on the maximum likelihood estimator at each step, but on standard deep learning methods (so training is regularized at least by dropout and early stopping).

Inefficiency of Variational Models for Deep Networks.

The objective of variational methods is equivalent to minimizing a description length. Thus, on our image classification tasks, variational methods do not have good results even for their own objective, compared to prequential codes. This makes their relatively poor results at test time less surprising.

Understanding this observed inefficiency of variational methods is an open problem. As stated in (4), the variational codelength is an upper bound on the Bayesian codelength. More precisely,

$$ L^{\mathrm{var}}_{\beta}(y_{1:n} \mid x_{1:n}) = L^{\mathrm{Bayes}}(y_{1:n} \mid x_{1:n}) + \mathrm{KL}\!\left(\beta \,\|\, p^{\mathrm{Bayes}}(\theta \mid x_{1:n}, y_{1:n})\right) \qquad (1) $$

with notation as above, and with $p^{\mathrm{Bayes}}(\theta \mid x_{1:n}, y_{1:n})$ the Bayesian posterior on $\theta$ given the data. Empirically, on MNIST and CIFAR, we observe that $L^{\mathrm{preq}} \ll L^{\mathrm{var}}_{\beta}$.

Several phenomena could contribute to this gap. First, the optimization of the parameters $(\mu, \Sigma)$ of the approximate Bayesian posterior might be imperfect. Second, even the optimal distribution in the variational class might not approximate the posterior well, leading to a large KL term in (1); this would be a problem with the choice of the variational posterior class. On the other hand, we do not expect the choice of Bayesian prior to be a key factor: we tested Gaussian priors with various variances as well as a conjugate Gaussian prior, with similar results. Moreover, Gaussian initializations and L2 weight decay (acting like a Gaussian prior) are common in deep learning. Finally, the (intractable) Bayesian codelength based on the exact posterior might itself be larger than the prequential codelength. This would be a problem of underfitting with parametric Bayesian inference, perhaps related to the catch-up phenomenon or to the known conservatism of Bayesian model selection (end of Section 3.3).

5 Conclusion

Deep learning models can represent the data together with the model in fewer bits than a naive encoding, despite their many parameters. However, we were surprised to observe that variational inference, though explicitly designed to minimize such codelengths, provides much poorer codelengths than a simple incremental coding scheme. Understanding this limitation of variational inference is a topic for future research.


References

  • Achille and Soatto (2017) A. Achille and S. Soatto. On the Emergence of Invariance and Disentangling in Deep Representations. arXiv preprint arXiv:1706.01350, jun 2017. URL http://arxiv.org/abs/1706.01350.
  • Arora et al. (2018) S. Arora, R. Ge, B. Neyshabur, and Y. Zhang. Stronger generalization bounds for deep nets via a compression approach. arXiv preprint arXiv:1802.05296, 2018.
  • Ba and Caruana (2014) L. J. Ba and R. Caruana. Do Deep Nets Really Need to be Deep? In Advances in Neural Information Processing Systems, pages 2654–2662, 2014.
  • Barron and Yang (1999) A. Barron and Y. Yang. Information-theoretic determination of minimax rates of convergence. The Annals of Statistics, 27(5):1564–1599, 1999.
  • Blum and Langford (2003) A. Blum and J. Langford. PAC-MDL Bounds. In B. Schölkopf and M. K. Warmuth, editors, Learning Theory and Kernel Machines, pages 344–357, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg. ISBN 978-3-540-45167-9.
  • Blundell et al. (2015) C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight Uncertainty in Neural Networks. In International Conference on Machine Learning, pages 1613–1622, 2015.
  • Chaitin (2007) G. J. Chaitin. On the intelligibility of the universe and the notions of simplicity, complexity and irreducibility. In Thinking about Godel and Turing: Essays on Complexity, 1970-2007. World scientific, 2007.
  • Dawid (1984) A. P. Dawid. Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach. Journal of the Royal Statistical Society. Series A (General), 147(2):278, 1984.
  • Dziugaite and Roy (2017) G. K. Dziugaite and D. M. Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, Sydney, 2017.
  • Foster and George (1994) D. P. Foster and E. I. George. The Risk Inflation Criterion for Multiple Regression. The Annals of Statistics, 22(4):1947–1975, dec 1994.
  • Gao and Jojic (2016) T. Gao and V. Jojic. Degrees of Freedom in Deep Neural Networks. arXiv preprint arXiv:1603.09260, mar 2016.
  • Graves (2011) A. Graves. Practical Variational Inference for Neural Networks. In Neural Information Processing Systems, 2011.
  • Grünwald (2007) P. D. Grünwald. The Minimum Description Length principle. MIT press, 2007.
  • Han et al. (2015a) S. Han, H. Mao, and W. J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. arXiv preprint arXiv:1510.00149, 2015a.
  • Han et al. (2015b) S. Han, J. Pool, J. Tran, and W. J. Dally. Learning both Weights and Connections for Efficient Neural Networks. In Advances in Neural Information Processing Systems, 2015b.
  • Hinton and Van Camp (1993) G. E. Hinton and D. Van Camp. Keeping Neural Networks Simple by Minimizing the Description Length of the Weights. In Proceedings of the sixth annual conference on Computational learning theory. ACM, 1993.
  • Honkela and Valpola (2004) A. Honkela and H. Valpola. Variational Learning and Bits-Back Coding: An Information-Theoretic View to Bayesian Learning. IEEE transactions on Neural Networks, 15(4), 2004.
  • Hutter (2007) M. Hutter. On Universal Prediction and Bayesian Confirmation. Theoretical Computer Science, 384(1), sep 2007.
  • Ioffe and Szegedy (2015) S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning, pages 448–456, 2015.
  • Kingma and Welling (2013) D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
  • Krizhevsky (2009) A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009.
  • Kucukelbir et al. (2017) A. Kucukelbir, D. Tran, R. Ranganath, A. Gelman, and D. M. Blei. Automatic Differentiation Variational Inference. Journal of Machine Learning Research, 18:1–45, 2017.
  • LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11), 1998.
  • LeCun et al. (2015) Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • Li et al. (2018) C. Li, H. Farkhoor, R. Liu, and J. Yosinski. Measuring the Intrinsic Dimension of Objective Landscapes. arXiv preprint arXiv:1804.08838, apr 2018.
  • Li and Vitányi (2008) M. Li and P. Vitányi. An introduction to Kolmogorov complexity. Springer, 2008.
  • Louizos et al. (2017) C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3290–3300, 2017.
  • Mackay (2003) D. J. C. Mackay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, cambridge edition, 2003.
  • Ollivier (2014) Y. Ollivier. Auto-encoders: reconstruction versus compression. arXiv preprint arXiv:1403.7752, mar 2014. URL http://arxiv.org/abs/1403.7752.
  • Rissanen et al. (1992) J. Rissanen, T. Speed, and B. Yu. Density estimation by stochastic complexity. IEEE Transactions on Information Theory, 38(2):315–323, 1992.
  • Romero et al. (2015) A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In Proceedings of the International Conference on Learning Representations, 2015.
  • Schmidhuber (1997) J. Schmidhuber. Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability. Neural Networks, 10(5):857–873, jul 1997.
  • See et al. (2016) A. See, M.-T. Luong, and C. D. Manning. Compression of Neural Machine Translation Models via Pruning. arXiv preprint arXiv:1606.09274, 2016.
  • Shannon (1948) C. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27, 1948.
  • Shwartz-Ziv and Tishby (2017) R. Shwartz-Ziv and N. Tishby. Opening the Black Box of Deep Neural Networks via Information. arXiv preprint arXiv:1703.00810, 2017.
  • Simonyan and Zisserman (2014) K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556, sep 2014.
  • Solomonoff (1964) R. Solomonoff. A formal theory of inductive inference. Information and control, 1964.
  • Tallec and Blier (2018) C. Tallec and L. Blier. Pyvarinf : Variational Inference for PyTorch, 2018. URL https://github.com/ctallec/pyvarinf.
  • Tishby and Zaslavsky (2015) N. Tishby and N. Zaslavsky. Deep Learning and the Information Bottleneck Principle. In Information Theory Workshop, pages 1–5. IEEE, 2015.
  • Ullrich et al. (2017) K. Ullrich, E. Meeds, and M. Welling. Soft Weight-Sharing for Neural Network Compression. arXiv preprint arXiv:1702.04008, 2017.
  • Van Erven et al. (2012) T. Van Erven, P. Grünwald, and S. De Rooij. Catching Up Faster by Switching Sooner: A predictive approach to adaptive estimation with an application to the AIC-BIC Dilemma. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):361–417, 2012.
  • Xu et al. (2017) T.-B. Xu, P. Yang, X.-Y. Zhang, and C.-L. Liu. Margin-Aware Binarized Weight Networks for Image Classification. In International Conference on Image and Graphics, pages 590–601. Springer, Cham, sep 2017.
  • Zagoruyko (2015) S. Zagoruyko. 92.45% on CIFAR-10 in Torch, 2015. URL http://torch.ch/blog/2015/07/30/cifar.html.
  • Zhang et al. (2017) C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In Proceedings of the International Conference on Learning Representations, 2017.

Appendix A Fake labels are not compressible

In the introduction, we stated that fake labels cannot be compressed: the optimal codelength for such labels is essentially the uniform one. This can be formalized as follows. We define a code for $y_{1:n}$ as any program (on a reference Turing machine) that outputs $y_{1:n}$, and denote by $|c|$ the length in bits of such a program $c$; programs may use $x_{1:n}$ as their input.

Proposition 2.

Assume that $x_{1:n}$ are any inputs, and that $y_{1:n}$ are iid random labels sampled uniformly in $\{1, \ldots, K\}$. Then for any integer $c \geq 0$, with probability at least $1 - 2^{-c}$, the sampled values $y_{1:n}$ satisfy that for any possible coding procedure (even depending on the values of $x_{1:n}$), the codelength of $y_{1:n}$ is at least

$$ L(y_{1:n} \mid x_{1:n}) \;\geq\; \lfloor n \log_2 K \rfloor - c \qquad (1) $$
$$ \phantom{L(y_{1:n} \mid x_{1:n})} \;\geq\; L^{\mathrm{unif}}(y_{1:n} \mid x_{1:n}) - c - 1 \qquad (2) $$

We insist that this does not require any assumptions on the coding procedure used, so this result holds for all possible models. Moreover, this is really a property of the sampled values $y_{1:n}$: most values of $y_{1:n}$ simply cannot be compressed by any algorithm.

Proof.

This proposition is a standard counting argument, and an immediate consequence of Theorem 2.2.1 in (Li and Vitányi, 2008). Let $A = \{1, \ldots, K\}^n$ be the set of all possible outcomes for the sequence of random labels; we have $\#A = K^n$. Let $k$ be an integer with $k \leq n \log_2 K$; we want to know how many elements of $A$ can be encoded in fewer than $k$ bits. On a given Turing machine, the number of programs of length less than $k$ is at most

$$ \sum_{l=0}^{k-1} 2^{l} \;=\; 2^{k} - 1 \qquad (3) $$
$$ \phantom{\sum_{l=0}^{k-1} 2^{l}} \;<\; 2^{k} \qquad (4) $$

Therefore, the number of elements of $A$ which can be described in fewer than $k$ bits is less than $2^k$. We can deduce from this that the number of elements of $A$ which cannot be described by any program in fewer than $k$ bits is at least $K^n - 2^k$. Equivalently, for any coding scheme, the proportion of elements $y_{1:n}$ of $A$ with $L(y_{1:n}) < k$ is less than $2^{k - n \log_2 K}$. Since the random labels are uniformly distributed on $A$, taking $k = \lfloor n \log_2 K \rfloor - c$ gives the result. ∎
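A small numeric illustration of the proposition, for an MNIST-sized fake-label problem (this arithmetic is ours, not taken from the paper):

```python
import math

# The fraction of label sequences compressible to fewer than floor(n*log2 K) - c
# bits is at most 2^(-c), whatever the compressor.
n, K = 60_000, 10
print(math.floor(n * math.log2(K)))   # uniform cost: 199,315 bits
print(2.0 ** (-1000))                 # chance of saving even 1,000 bits: ~9e-302
```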

Appendix B Technical details on compression bounds with random affine subspaces

We describe in Algorithm 1 the detailed procedure which allows one to compute compression bounds with the random affine subspace method (Li et al., 2018). To compute the numerical results in Table 1, we took the intrinsic dimension computed in the original paper, and considered that each coordinate of the low-dimensional parameter is sent with a precision of 32 bits, following the authors’ suggestion. The description length of the model itself is then 32 bits times the intrinsic dimension. This does not take into account the description length of the labels given the model, which is non-negligible (to take this quantity into account, we would need the training loss of the model, which is not specified in the original paper). Thus we only get a lower bound.

  Alice transmits a parametric model class $\{p_\theta,\ \theta \in \mathbb{R}^D\}$.
  Alice transmits the random seed $\omega$ (if using stochastic optimization), and a dimension $d$.
  Alice and Bob both sample a random affine subspace of dimension $d$, with the seed $\omega$. This means that they sample an origin $\theta_0$ and a matrix $P$ of dimension $D \times d$, where $D$ is the dimension of $\theta$. It defines a new parametric model $\tilde{p}_{\tilde\theta} := p_{\theta_0 + P\tilde\theta}$ with $\tilde\theta \in \mathbb{R}^d$.
  Alice optimizes the parameter $\tilde\theta$ with a gradient descent algorithm in order to minimize $-\sum_i \log_2 \tilde{p}_{\tilde\theta}(y_i \mid x_i)$.
  Alice sends $\tilde\theta$ with a precision $\varepsilon$ to Bob. With the 32-bit float encoding used here, this costs $32\, d$ bits.
  Alice sends the labels with the model $\tilde{p}_{\tilde\theta}$. This costs $-\sum_i \log_2 \tilde{p}_{\tilde\theta}(y_i \mid x_i)$ bits.
Algorithm 1 Encoding with random affine subspaces
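The reparametrization at the heart of Algorithm 1 can be sketched as follows (a numpy illustration; `loss`, the training loss on the full parameter vector, and the choice of the random origin are placeholders, not the exact construction of Li et al., 2018):

```python
import numpy as np

def make_subspace(full_dim, d, seed):
    rng = np.random.default_rng(seed)              # seed shared by Alice and Bob
    theta0 = 0.01 * rng.standard_normal(full_dim)  # stands in for the usual random init
    P = rng.standard_normal((full_dim, d)) / np.sqrt(d)
    return theta0, P

def subspace_loss(theta_tilde, theta0, P, loss):
    # Only the d coordinates of theta_tilde are optimized; theta = theta0 + P @ theta_tilde.
    return loss(theta0 + P @ theta_tilde)

# Two-part cost with float32 coordinates: 32 * d bits for theta_tilde,
# plus the data term -sum_i log2 p(y_i | x_i) of the resulting model.
```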

For MNIST, the model with the smallest intrinsic dimension is LeNet, which has an intrinsic dimension of 290 for 90% accuracy (the threshold at which (Li et al., 2018) stop by definition, hence the test performance reported in Table 1). This leads to a description length for the model of $32 \times 290 = 9{,}280$ bits, which corresponds to a compression ratio of about $0.05$, without taking into account the description length of the labels given the model.

For CIFAR, again with the LeNet architecture, the intrinsic dimension is 2,900. This leads to a description length for the model of $32 \times 2{,}900 = 92{,}800$ bits, which corresponds to a compression ratio of about $0.56$, without taking into account the description length of the labels given the model.

These bounds could be improved by optimizing the precision with which the low-dimensional parameter is sent. Indeed, reducing the precision makes the model less accurate and increases the encoding cost of the labels given the model, but it decreases the encoding cost of the parameters. Therefore, we could find an optimal precision to improve the compression bound. This would be a topic for future work.

Appendix C Technical Details on Variational Learning for Section 3.3

Variational learning was performed using the library Pyvarinf (Tallec and Blier, 2018).

We used a Gaussian prior $\alpha = \mathcal{N}(0, \sigma^2 \mathrm{Id})$, with the variance $\sigma^2$ chosen to optimize the compression bounds.

The chosen class of posteriors $\beta_{\mu,\sigma}$ was the class of multivariate Gaussian distributions with diagonal covariance matrix, parametrized by the mean vector $\mu$ and the vector $\sigma$ of standard deviations, the covariance matrix being the diagonal matrix with diagonal values $\sigma_i^2$.

We optimize the bound (2) as a function of $(\mu, \sigma)$ with a gradient descent method, and estimate its value and gradient with a Monte Carlo method. Since the prior and the posteriors are Gaussian, we have an explicit formula for the first part of the variational loss, the term $\mathrm{KL}(\beta_{\mu,\sigma} \,\|\, \alpha)$ (Hinton and Van Camp, 1993); therefore, we can easily compute its value and gradients. For the second part,

$$ \mathbb{E}_{\theta \sim \beta_{\mu,\sigma}}\!\left[-\sum_{i} \log_2 p_\theta(y_i \mid x_i)\right] \qquad (1) $$

we can use the following proposition (Graves, 2011). For any (sufficiently smooth) function $f$, we have

$$ \nabla_{\mu}\, \mathbb{E}_{\theta \sim \mathcal{N}(\mu, \Sigma)}\!\left[f(\theta)\right] = \mathbb{E}_{\theta \sim \mathcal{N}(\mu, \Sigma)}\!\left[\nabla_\theta f(\theta)\right] \qquad (2) $$
$$ \nabla_{\Sigma}\, \mathbb{E}_{\theta \sim \mathcal{N}(\mu, \Sigma)}\!\left[f(\theta)\right] = \tfrac{1}{2}\, \mathbb{E}_{\theta \sim \mathcal{N}(\mu, \Sigma)}\!\left[\nabla^2_\theta f(\theta)\right] \qquad (3) $$

Therefore, we can estimate the value and gradients of (1) with a Monte Carlo algorithm:

$$ \mathbb{E}_{\theta \sim \beta_{\mu,\sigma}}\!\left[f(\theta)\right] \;\approx\; \frac{1}{N} \sum_{k=1}^{N} f(\theta_k) \qquad (4) $$
$$ \nabla_{\mu}\, \mathbb{E}_{\theta \sim \beta_{\mu,\sigma}}\!\left[f(\theta)\right] \;\approx\; \frac{1}{N} \sum_{k=1}^{N} \nabla_\theta f(\theta_k) \qquad (5) $$

where $\theta_1, \ldots, \theta_N$ are sampled from $\beta_{\mu,\sigma}$ and $f(\theta) = -\sum_i \log_2 p_\theta(y_i \mid x_i)$. In practice, we used the same Monte Carlo samples both for the computation of the variational loss and of its gradients.
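A quick numerical sanity check of the Monte Carlo gradient estimator (5), on a toy function for which the exact gradient is known (this example is ours, not from the paper):

```python
import numpy as np

# For f(theta) = ||theta||^2, grad_mu E_{theta ~ N(mu, diag(sigma^2))}[f(theta)] = 2*mu,
# and grad_theta f(theta) = 2*theta, so the estimator should converge to 2*mu.
rng = np.random.default_rng(0)
mu, sigma = np.array([1.0, -2.0]), np.array([0.5, 0.3])
samples = mu + sigma * rng.standard_normal((100_000, 2))
print(np.mean(2 * samples, axis=0))   # Monte Carlo estimate, close to...
print(2 * mu)                         # ...the exact gradient [2., -4.]
```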

We used both convolutional and fully connected architectures; in our experiments, the small convolutional models gave the best variational compression bounds (Section 3.3). For CIFAR and MNIST, the fully connected networks we tested had two hidden layers of width 256 and were trained with SGD with mini-batches of size 128.

For CIFAR and MNIST, we also used a LeNet-like network with 2 convolutional layers with 6 and 16 filters, both with kernels of size 5, followed by 3 fully connected layers. Each convolutional layer is followed by a ReLU activation and a max-pooling layer. The first and second fully connected layers have dimensions 120 and 84 and are followed by ReLU activations; the last one is followed by a softmax activation layer. The code for all models will be publicly available.

During the test phase, we sampled parameters $\theta$ from the learned distribution $\beta_{\mu,\sigma}$ and used the model $p_\theta$ for prediction. This explains why our test accuracy on MNIST is lower than other published results (Blundell et al., 2015), which use for prediction the averaged model with parameters $\mu$. Our goal was not to get the best prediction score, but to evaluate on the test set the very model that was used for compression.

Appendix D Technical details on prequential learning

Prequential Learning on MNIST.

On MNIST, we used three different models:

  1. The uniform probability over the labels.

  2. A fully connected network or Multilayer Perceptron (MLP) with two hidden layers of dimension 256.

  3. A VGG-like convolutional network with 8 convolutional layers with 32, 32, 64, 64, 128, 128, 256 and 256 filters respectively and max pooling operators every two convolutional layers, followed by two fully connected layers of size 256.

For the two neural networks we used dropout between the fully connected layers, and optimized the networks with the Adam algorithm.

The successive timesteps for the prequential learning are 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384 and 32768.

For the prequential code results in Table 1, we selected the best model, which was the VGG-like network.

Prequential Learning on CIFAR.

On CIFAR, we used five different models:

  1. The uniform probability over the labels.

  2. A fully connected network or Multilayer Perceptron (MLP) with two hidden layers of dimension 512.

  3. A shallow network, with one hidden layer and width 5000.

  4. A convolutional network (tinyCNN) with four convolutional layers with 32 filters, and a max-pooling operator after every two convolutional layers, followed by two fully connected layers of dimension 256. We used dropout between the fully connected layers.

  5. A VGG-like network with 13 convolutional layers from (Zagoruyko, 2015). We trained this architecture with two learning procedures. The first one (VGGa) without batch-normalization and data augmentation, and the second one (VGGb) with both of them, as introduced in (Zagoruyko, 2015). In both of them, we used dropout regularization with parameter 0.5.

We optimized the network with the Adam algorithm with learning rate 0.001.

For prequential learning, the timesteps were: 10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480, 40960. The training results can be seen in Figure 2.

For the prequential code, all the results are in Figure 2. For the results in Table 1, we selected the best model for the prequential code, which was VGGb.

  Input: data $(x, y)_{1:n}$; timesteps $t_1 < t_2 < \cdots < t_S = n$
  Alice transmits the random seed $\omega$ (if using stochastic optimization).
  Alice encodes $y_{1:t_1}$ with the uniform code. This costs $t_1 \log_2 K$ bits. Bob decodes $y_{1:t_1}$.
  for $s = 1$ to $S - 1$ do
     Alice and Bob both compute $\hat\theta_s = \hat\theta(x_{1:t_s}, y_{1:t_s})$.
     Alice encodes $y_{t_s+1:t_{s+1}}$ with the model $p_{\hat\theta_s}$. This costs $-\sum_{i=t_s+1}^{t_{s+1}} \log_2 p_{\hat\theta_s}(y_i \mid x_i)$ bits.
     Bob decodes $y_{t_s+1:t_{s+1}}$.
  end for
Algorithm 2 Prequential encoding

Appendix E Switching between models against the catch-up phenomenon

| Code | MNIST codelength (kbits) | MNIST comp. ratio | MNIST test acc. | CIFAR10 codelength (kbits) | CIFAR10 comp. ratio | CIFAR10 test acc. |
| --- | --- | --- | --- | --- | --- | --- |
| Uniform | 199 | 1.00 | 10% | 166 | 1.00 | 10% |
| Variational | 24.1 | 0.12 | 95.5% | 89.0 | 0.54 | 61.6% |
| Prequential | 4.10 | 0.02 | 99.5% | 45.3 | 0.27 | 93.3% |
| Switch | 4.05 | 0.02 | 99.5% | 34.6 | 0.21 | 93.3% |
| Self-Switch | 4.05 | 0.02 | 99.5% | 34.9 | 0.21 | 93.3% |
Table 2: Compression bounds by switching between models. Compression bounds given by different codes on two datasets, MNIST and CIFAR10. The Codelength is the number of bits necessary to send the labels to someone who already has the inputs. This codelength includes the description length of the model. The compression ratio for a given code is the ratio between its codelength and the codelength of the uniform code. The test accuracy of a model is the accuracy of its predictions on the test set. For variational and prequential codes, we selected the model and hyperparameters providing the best compression bound.

E.1 Switching between model classes

The solution introduced by (Van Erven et al., 2012) against the catch-up phenomenon described in Section 3.4 is to switch between models, so as to always encode a data block with the best model at that point. That way, the encoding adapts itself to the number of data samples seen. The switching pattern itself has to be encoded.

Assume that Alice and Bob have agreed on a set of prediction strategies $\{p^1, \ldots, p^M\}$. We define the set of switch sequences

$$ \Xi = \left\{ \big( (t_1, k_1), \ldots, (t_L, k_L) \big),\ 1 = t_1 < t_2 < \cdots < t_L \leq n,\ k_l \in \{1, \ldots, M\} \right\}. $$

Let $\xi = \big((t_1, k_1), \ldots, (t_L, k_L)\big)$ be a switch sequence. The associated prediction strategy $p^\xi$ uses model $p^{k_l}$ on the time interval $[t_l, t_{l+1})$, namely

$$ p^{\xi}(y_{i+1} \mid x_{1:i+1}, y_{1:i}) = p^{k_l}(y_{i+1} \mid x_{1:i+1}, y_{1:i}) \qquad (1) $$

where $l$ is such that $t_l \leq i + 1 < t_{l+1}$ (with the convention $t_{L+1} = n + 1$). Fix a prior distribution $\pi$ over switching sequences (see (Van Erven et al., 2012) for typical examples).

Definition 4 (Switch code).

Assume that Alice and Bob have agreed on a set of prediction strategies $\{p^1, \ldots, p^M\}$ and a prior $\pi$ over the set of switch sequences $\Xi$. The switch code first encodes a switch sequence $\xi$, then uses the prequential code with the prediction strategy $p^\xi$:

$$ L^{\mathrm{switch}}_{\xi}(y_{1:n} \mid x_{1:n}) = -\log_2 \pi(\xi) \;-\; \sum_{i=0}^{n-1} \log_2 p^{\xi}(y_{i+1} \mid x_{1:i+1}, y_{1:i}) \qquad (2) $$

where $p^{\xi}$ uses, at each time $i$, the model designated by the switch sequence $\xi$ as in (1).

We then choose the switch sequence $\xi$ which minimizes $L^{\mathrm{switch}}_{\xi}$. We tested switching between the uniform model, a small convolutional network (tinyCNN), and a VGG-like network with two training methods (VGGa, VGGb) (Appendix D). On MNIST, switching between models does not make much difference. On CIFAR10, switching by taking the best model on each interval saves more than 10 kbits, reaching a codelength of 34.6 kbits and a compression ratio of 0.21. The cost of encoding the switch is negligible (see Table 2).
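A simplified sketch of the switch-code computation (note the simplification: we charge a flat log2(M) bits per block to name the chosen model, instead of using the prior of Van Erven et al., 2012; the block costs are placeholders):

```python
import numpy as np

def switch_codelength_bits(block_costs):
    """block_costs[s, m]: prequential cost in bits of block s under model m,
    each model being trained on the data transmitted before block s."""
    S, M = block_costs.shape
    return S * np.log2(M) + block_costs.min(axis=1).sum()

# Example: 3 blocks, 2 models; the second model only becomes better later on.
costs = np.array([[100., 400.], [90., 120.], [80., 20.]])
print(switch_codelength_bits(costs))   # 100 + 90 + 20 + 3*log2(2) = 213 bits
```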

E.2 Self-Switch: Switching between variants of a model or hyperparameters

Figure 3: Compression with the self-switch method. Results of the self-switch code on CIFAR with 2 different models: the shallow network, and the VGG-like network trained with data augmentation and batch normalization (VGGb). Performance is reported during online training, as a function of the number of samples seen so far. Top: accuracy on each pack of data for the model trained on the previously transmitted data. Second: codelength per sample (log loss) on each pack of data, given the previously transmitted data. Third: difference between the cumulated prequential codelength and the uniform encoding. Bottom: compression ratio of the prequential code on the data seen so far. The catch-up phenomenon is clearly visible for both models: even if models with and without the self-switch have similar performance after training on the entire dataset, the standard model performs worse than the uniform model at the start (on the first 1,280 labels for the VGGb network, and on the first 10,000 labels for the shallow network), and encoding these first labels is very expensive. The self-switch method solves this problem.

With the standard switch, it may be cumbersome to work with different models in parallel. Instead, for models learned by gradient descent, we may use the same architecture but with different parameter values obtained at different gradient descent stopping times. This is a form of regularization via early stopping.

Let $\{p_\theta\}$ be a model class. Let $\hat\theta^{(E)}_{t}$ be the parameter obtained by some optimization procedure after $E$ epochs of training on data $(x, y)_{1:t}$. For instance, $E = 0$ would correspond to using an untrained model (usually close to the uniform model).

We call self-switch code the switch code obtained by switching among the family of models with different gradient descent stopping times $E$ (all based on the same parametric family $\{p_\theta\}$). In practice, this means that at each step of the prequential encoding, after having seen data $(x, y)_{1:t_s}$, we train the model on those data and record, at each epoch $E$, the loss obtained on the next data block. We then switch optimally between those. We incur the small additional cost of encoding the best number of epochs to be used at each step (which was bounded by a fixed maximum value).

The catch-up phenomenon and the beneficial effect of the self-switch code can be seen in Figure 3.

The self-switch code achieves compression bounds similar to the switch code, while requiring storage of only one network. On MNIST, there is no observable difference. On CIFAR, the self-switch code is only 300 bits (0.006 bits per label) worse than the full switch between architectures.
