Compressing Deep Neural Networks: A New Hashing Pipeline Using Kac’s Random Walk Matrices

Jack Parker-Holder                        Sam Gass

Columbia University                        Columbia University

Abstract

The popularity of deep learning continues to grow. However, despite recent advances in hardware, deep neural networks remain computationally intensive. Recent work has shown that, by preserving the angular distance between vectors, random feature maps can reduce dimensionality without introducing bias into the estimator. We test a variety of established hashing pipelines as well as a new approach using Kac’s random walk matrices. We demonstrate that this method achieves accuracy comparable to existing pipelines.

1 Introduction

After decades of research, deep learning methods burst onto the scene with the success of AlexNet in 2012 [1], the first convolutional neural network to win the ImageNet Large Scale Visual Recognition Challenge [2]. Since then there have been a series of breakthroughs, such as DeepMind’s AlphaGo defeating the world’s best Go players, and the subsequent version of the model learning entirely from self-play without human game data [3]. Artificial intelligence has gone mainstream and deep learning now attracts enormous attention. However, training neural networks remains computationally intensive.

Over the past few years, work on pseudo-random matrices (i.e. matrices in which some entries are fully random while others are derived from them and thus not independent) has become more prominent. Such matrices are of particular interest because they can be stored far more compactly than fully random ones. Work by Yu et al. [4] and Yi et al. [5] showed that circulant and Toeplitz Gaussian matrices can be used in place of fully random projections, while Choromanska, Choromanski et al. [6] went further and proved theoretically that such methods produce unbiased estimators.

In this paper, we explore several hashing pipelines which compress the data by applying a random projection followed by the non-linear sign function. We show that these methods preserve the angular distance between observations well enough to maintain high classification accuracy. We also test a new pipeline using Kac’s random walk matrix, based on the work of Marc Kac [7] in the 1950s. Recent theoretical results [8] indicate that this matrix, which produces a random rotation, can be constructed in $O(n \log n)$ steps, where $n$ is the dimensionality of the data.

The structure of the paper is as follows: the next section gives an overview of the existing literature, we then explain the hashing pipelines, and we conclude with experimental results on the MNIST [9] dataset of handwritten digits.

2 Related work

Early work on applying random projections to learning problems was done by Dasgupta [10] in the late 1990s, who subsequently applied them to real datasets [11]. More recently, such methods have been applied to deep learning architectures (for a review see Saxe et al. [12]), which is the focus of our work.

Recently, Chen et al. [13] introduced a neural network architecture based on hashing, which they call HashedNets, and showed that such a network achieves a significant reduction in model size without loss of accuracy.

In recent years there has been increasing interest in using pseudo-random projections to compress neural networks. Studies by Yu et al. [4] and Yi et al. [5] focused on the efficacy of the circulant Gaussian matrix and found significant gains in storage and efficiency with a minimal increase in the error rate compared to a regular neural network or an unstructured projection.

Another recent paper, by Choromanska, Choromanski et al. [6], introduces two hashing pipelines (which we explain later in this paper) and provides theoretical guarantees on the concentration of the estimators around their means. They demonstrate the accuracy of the hashing pipelines on the MNIST [9] dataset, finding only a small decrease in accuracy with as much as an eightfold reduction in dimensionality. They tested several structured Gaussian matrices, such as the circulant and Toeplitz Gaussian matrices, and showed that these projections maintain high levels of accuracy. We seek to reproduce their results for random Gaussian matrices, as well as for two structured Gaussian matrices (circulant and Toeplitz).

In addition to the transformations discussed in [6], we note exciting new work from Pillai and Smith [8] on Kac’s random walk matrices, which showed that matrices constructed using the random walk suggested by Marc Kac [7] reach a steady state in $O(n \log n)$ steps, where $n$ is the dimensionality of the data. We show that this matrix can also be included in the hashing pipeline without a reduction in accuracy.

More details on the structure of the matrices and the processing pipelines follow in the next section.

3 Random Feature Maps

3.1 The JLT Lemma

The goal of the transformations used in this paper is to take high-dimensional input vectors from $\mathbb{R}^n$ and compress each into a lower-dimensional vector in $\mathbb{R}^k$, with $k \ll n$, while approximately preserving the pairwise Euclidean distances between the vectors. The mathematical theory behind these transformations is the Johnson-Lindenstrauss Lemma [14]:

Theorem 1 (Johnson-Lindenstrauss Lemma): Let $X$ be a set of points in $\mathbb{R}^n$ and let $0 < \epsilon < 1$. Let $f(x) = \frac{1}{\sqrt{k}} A x$, where $A \in \mathbb{R}^{k \times n}$ has i.i.d. entries drawn from $\mathcal{N}(0, 1)$. Then, for any pair $u, v \in X$, with probability at least

$1 - 2\exp\!\left(-k\left(\tfrac{\epsilon^2}{4} - \tfrac{\epsilon^3}{6}\right)\right)$

we have

$(1 - \epsilon)\,\|u - v\|^2 \;\le\; \|f(u) - f(v)\|^2 \;\le\; (1 + \epsilon)\,\|u - v\|^2.$

As $k \to \infty$, the probability above approaches 1 [15]. This discovery has led to an entire field devoted to dimensionality reduction through random projections. These transformations allow most data analyses based on Euclidean distances among the points to be carried out in the reduced space at a fraction of the cost. Since its inception the technique has progressed rapidly, leading to faster machine learning algorithms in multiple fields, including nearest neighbor search and clustering.
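As an illustration, the following minimal numpy sketch projects a few random points with the map $f(x) = Ax/\sqrt{k}$ and compares pairwise squared distances before and after the projection; the dimensions below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 1000, 100, 20          # original dim, reduced dim, number of points (example values)
X = rng.standard_normal((m, n))  # m points in R^n

A = rng.standard_normal((k, n))                # Gaussian random matrix
f = lambda x: (A @ x) / np.sqrt(k)             # the map f(x) = Ax / sqrt(k)
Y = np.array([f(x) for x in X])

# Compare pairwise squared distances before and after the projection.
for i in range(3):
    for j in range(i + 1, 4):
        d_orig = np.sum((X[i] - X[j]) ** 2)
        d_proj = np.sum((Y[i] - Y[j]) ** 2)
        print(f"pair ({i},{j}): ratio = {d_proj / d_orig:.3f}")  # close to 1 for large k
```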

3.2 The hashing pipeline: unstructured matrices

For the unstructured matrix we take a simple hashing pipeline consisting of a matrix multiplication followed by the element-wise sign function:

$h(x) = \mathrm{sign}(G x),$

where $G \in \mathbb{R}^{k \times n}$ is a Gaussian random matrix.

3.2.1 Gaussian random matrix

The Gaussian random matrix is simply a matrix in which every entry is drawn independently from the unit Gaussian distribution $\mathcal{N}(0, 1)$. The matrix is of the form:

$G = \begin{pmatrix} g_{1,1} & \cdots & g_{1,n} \\ \vdots & \ddots & \vdots \\ g_{k,1} & \cdots & g_{k,n} \end{pmatrix}, \qquad g_{i,j} \sim \mathcal{N}(0, 1).$
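A minimal numpy sketch of this unstructured pipeline (the dimensions are example values taken from our experimental set-up, and the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def unstructured_hash(x, G):
    """Hash a vector x with a random projection followed by the sign function."""
    return np.sign(G @ x)

n, k = 1568, 392                     # example: input dimension and hash size
G = rng.standard_normal((k, n))      # fully random Gaussian matrix
x = rng.standard_normal(n)

h = unstructured_hash(x, G)          # k-dimensional vector of +/-1 entries
print(h.shape, np.unique(h))
```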

3.3 The hashing pipeline: structured matrices

For structured matrices, we test two pipelines recently introduced by Choromanska, Choromanski et al. [6]. They propose two pipelines, which they call extended $\Psi$-regular hashing and short $\Psi$-regular hashing, each of which consists of a pre-processing step followed by a hashing step.

3.3.1 Extended $\Psi$-regular hashing

The first pipeline is as follows:

$h(x) = \mathrm{sign}(P\, D_2\, H\, D_1\, x),$

where $D_1$ and $D_2$ are independent copies of a random diagonal matrix whose diagonal entries are taken from the set $\{-1, +1\}$, each with probability $\frac{1}{2}$. $H$ is the $L_2$-normalized Hadamard matrix and $P$ is the projection matrix (in this case either the circulant matrix $C$ or the Toeplitz matrix $T$).

3.3.2 Short $\Psi$-regular hashing

The second pipeline avoids applying the first random diagonal matrix as well as the Hadamard matrix:

$h(x) = \mathrm{sign}(P\, D_2\, x).$
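A minimal numpy sketch of both pipelines, under the reconstruction above, follows. The Hadamard matrix requires the dimension to be a power of two, and we simply take the first $k$ rows of an $n \times n$ circulant matrix as the projection $P$; both of these are implementation choices made for the example rather than details specified above.

```python
import numpy as np
from scipy.linalg import hadamard, circulant

rng = np.random.default_rng(0)

n, k = 1024, 256                      # example dims; n must be a power of two for hadamard()
H = hadamard(n) / np.sqrt(n)          # normalized Hadamard matrix
D1 = np.diag(rng.choice([-1.0, 1.0], size=n))   # random +/-1 diagonal matrices
D2 = np.diag(rng.choice([-1.0, 1.0], size=n))
P = circulant(rng.standard_normal(n))[:k, :]    # structured projection (first k rows)

def extended_hash(x):
    # Extended pipeline: sign(P D2 H D1 x)
    return np.sign(P @ (D2 @ (H @ (D1 @ x))))

def short_hash(x):
    # Short pipeline: sign(P D2 x) -- skips D1 and the Hadamard matrix
    return np.sign(P @ (D2 @ x))

x = rng.standard_normal(n)
print(extended_hash(x)[:8], short_hash(x)[:8])
```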

3.3.3 Hashing with Kac’s random walk matrix

The third pipeline we test is an adaptation of the extended $\Psi$-regular hashing pipeline, in which we replace the Hadamard-based block with Kac’s random walk matrix $K$, of the form:

$K = R_T R_{T-1} \cdots R_1,$

where each $R_t$ is a random Givens rotation,

$R_t = R(i_t, j_t, \theta_t),$

with $i_t \neq j_t$ drawn uniformly at random from $\{1, \dots, n\}$ and $\theta_t$ drawn uniformly at random from $[0, 2\pi)$. The rotation $R(i, j, \theta)$ acts as the identity on all coordinates except $i$ and $j$, on which it acts as the two-dimensional rotation

$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$

where $n$ is the dimensionality of the dataset and $T$ is the number of steps of the random walk.

Recent results from Pillai and Smith [8] showed that in order for $K$ to behave like a truly random rotation, it suffices to set the number of steps $T$ to $O(n \log n)$. This result makes Kac’s random walk matrices an appealing component of a hash, given that we seek to compress the data as efficiently as possible while preserving the angular distance between the vectors.

Although we provide no theoretical guarantees for this pipeline, the intuition is sound: each $R_t$ is a truly random two-dimensional rotation, and the results of Pillai and Smith show that $O(n \log n)$ steps suffice for the walk to reach its steady state.
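A minimal numpy sketch of Kac’s random walk, applied directly to a vector as a sequence of random Givens rotations rather than materializing $K$; the $\lceil n \log n \rceil$ step count follows the guidance above, while the function name and example dimension are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def kac_walk(x, num_steps=None):
    """Apply Kac's random walk (a product of random Givens rotations) to a vector x."""
    x = x.astype(float).copy()
    n = x.shape[0]
    if num_steps is None:
        num_steps = int(np.ceil(n * np.log(n)))   # O(n log n) steps (Pillai & Smith)
    for _ in range(num_steps):
        i, j = rng.choice(n, size=2, replace=False)   # random coordinate pair
        theta = rng.uniform(0.0, 2.0 * np.pi)         # random rotation angle
        xi, xj = x[i], x[j]
        x[i] = np.cos(theta) * xi - np.sin(theta) * xj
        x[j] = np.sin(theta) * xi + np.cos(theta) * xj
    return x

x = rng.standard_normal(128)
y = kac_walk(x)
print(np.linalg.norm(x), np.linalg.norm(y))   # rotations preserve the Euclidean norm
```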

3.3.4 Structured Gaussian matrices

In our hashing pipeline we use two structured matrices, the circulant and Toeplitz Gaussian matrices. These are defined below.

3.3.5 Circulant Gaussian matrix

A circulant Gaussian matrix is a structured random matrix in which each row is a cyclic rotation of a single random vector $c = (c_0, c_1, \dots, c_{n-1})^\top$, with each $c_i$ drawn from $\mathcal{N}(0, 1)$. The final result, $C$, is a matrix of the form:

$C = \begin{pmatrix} c_0 & c_{n-1} & \cdots & c_1 \\ c_1 & c_0 & \cdots & c_2 \\ \vdots & \vdots & \ddots & \vdots \\ c_{n-1} & c_{n-2} & \cdots & c_0 \end{pmatrix}.$

3.3.6 Toeplitz Gaussian matrix

A Toeplitz Gaussian matrix is a structured random matrix that is constant along each of its descending diagonals, with each diagonal value drawn independently from $\mathcal{N}(0, 1)$. The resulting matrix, $T$, is of the form:

$T = \begin{pmatrix} t_0 & t_{-1} & \cdots & t_{-(n-1)} \\ t_1 & t_0 & \cdots & t_{-(n-2)} \\ \vdots & \vdots & \ddots & \vdots \\ t_{n-1} & t_{n-2} & \cdots & t_0 \end{pmatrix}, \qquad t_i \sim \mathcal{N}(0, 1).$
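Both matrices are fully determined by one or two random vectors, which is what makes them cheap to store. A minimal scipy sketch of their construction (truncating to the first $k$ rows to obtain a $k \times n$ projection is our choice for the example):

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

rng = np.random.default_rng(0)
n, k = 1568, 392   # example dimensions

# Circulant Gaussian matrix: fully determined by one length-n Gaussian vector.
c = rng.standard_normal(n)
C = circulant(c)[:k, :]

# Toeplitz Gaussian matrix: determined by its first column and first row.
col = rng.standard_normal(n)
row = np.concatenate(([col[0]], rng.standard_normal(n - 1)))
T = toeplitz(col, row)[:k, :]

x = rng.standard_normal(n)
print(np.sign(C @ x)[:8])   # circulant hash
print(np.sign(T @ x)[:8])   # Toeplitz hash
```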

4 Experimental results

4.1 The dataset

For the purpose of our experiments, we use the MNIST dataset, which has been used extensively by researchers in the field. The dataset has 60,000 training images and 10,000 test images of handwritten digits between 0 and 9, each of size 28x28 pixels. Given that we are working with images, we use a convolutional neural network for classification, which we describe in the next section.

4.2 Neural network architecture

The network is set up with two convolutional layers, each with filter size 3x3 and stride 1. Each convolutional layer is followed by a max pooling layer with filters of size 2x2 applied with a stride of 2. These layers are followed by two fully connected layers: the first has an arbitrarily chosen number of neurons (in this case 50), and the second is the output layer, with ten neurons representing the digits 0 to 9.

Figure 1: The neural network set-up for our experiment, where $n$ is the number of dimensions prior to the hash, $k$ is the size of the hash, and $c$ is the number of output classes (here $c = 10$).

To compress the network, we implement our hashing pipeline before the first fully connected layer, thus keeping the image intact for the two convolutional layers while reducing the dimensionality of the input to the subsequent dense layer. In convolutional neural networks, the fully connected layers are often the bottleneck in terms of efficiency, sometimes contributing over 90% of the storage (for example in AlexNet [16]).

Figure 1 shows a visual representation of the network, where the input to the hash has $n$ dimensions (in this case $n = 1568$) and we reduce this to $k$ dimensions with the hashing step, which is represented by the red arrows. For the purpose of our experiments, we tested dimensionality reduction factors ($n/k$) of 2, 4, 8, 16 and 32.

When training the model, we train for ten epochs using the Adam optimizer [17]. The network was built using Keras with the TensorFlow backend.
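A minimal Keras sketch of this set-up is given below. The filter counts (32 per convolutional layer) and "same" padding are assumptions chosen so that the flattened feature dimension matches $n = 1568$ from Table 1, and the hash is implemented as a fixed, non-trainable Gaussian projection followed by the sign function inside a Lambda layer.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n, k = 1568, 392   # feature dimension before the hash and hash size (n/k = 4)

# Fixed (non-trainable) Gaussian projection used by the hashing step.
rng = np.random.default_rng(0)
G = tf.constant(rng.standard_normal((n, k)).astype("float32"))

def hashing_step(x):
    # sign(xG): random projection followed by the sign non-linearity.
    # Note: tf.sign has zero gradient, so the convolutional layers below
    # receive no gradient signal through this layer.
    return tf.sign(tf.matmul(x, G))

model = models.Sequential([
    layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu",
                  input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2), strides=2),
    layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2), strides=2),
    layers.Flatten(),                # 7 * 7 * 32 = 1568 features
    layers.Lambda(hashing_step),     # hashing step: n -> k dimensions
    layers.Dense(50, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```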

4.3 Results

Figure 2: Results for all three hashing pipelines with each of the two structured projection matrices, circulant and Toeplitz.

We present our results in a table and in two charts. For each of the three hashing pipelines described in Section 3, we use both $C$, the circulant matrix described in Section 3.3.5, and $T$, the Toeplitz Gaussian matrix described in Section 3.3.6. For comparison purposes, we show the result of a "baseline" model, which is the same neural network described in Section 4.2 trained without a hashing step. We also present the results of the hash using the fully random Gaussian matrix described in Section 3.2.1, which we refer to as "Random" in Table 1 and Figure 2.

In Table 1 and Figure 2 we show the size of the hash $k$, the corresponding reduction factor $n/k$, and the testing accuracy, where accuracy refers to top-1 accuracy (i.e. the percentage of the time the model predicts the correct category). As can be seen in both Figure 2 and Table 1, all three pipelines preserve the angular distance between the data points well enough to achieve high levels of accuracy. In particular, all three pipelines achieve close to 95% accuracy for a hash size of $k = 392$, which corresponds to a 75% compression.

Looking at the results for the hashes using the circulant matrix, the extended pipeline performed best with both the Hadamard-based block and the Kac matrix, achieving 96.22% and 96.15% accuracy respectively for $n/k = 2$, the smallest compression we tested. For the largest reduction in dimensionality, however, the short $\Psi$-regular hashing pipeline actually did best, with over 78% accuracy.

For the Toeplitz Gaussian matrix, all three pipelines performed well for the smallest reduction in dimensionality, achieving over 96% accuracy. However, for the largest compression, the extended $\Psi$-regular hashing pipeline was the most robust, as the other two approaches saw large declines in accuracy to just over 70%.

k / (n/k)    Random    Circulant                       Toeplitz
                       Extended   Short    Kac         Extended   Short    Kac
784 / 2      0.9575    0.9622     0.9555   0.9615      0.9609     0.9624   0.9623
392 / 4      0.9501    0.9523     0.9526   0.9477      0.9485     0.9534   0.9457
196 / 8      0.9268    0.9293     0.9150   0.9245      0.9308     0.9094   0.9239
98 / 16      0.8714    0.8597     0.8833   0.8936      0.8771     0.8552   0.8690
49 / 32      0.7744    0.7542     0.7852   0.7753      0.7922     0.7110   0.7093

Table 1: Experimental results (top-1 test accuracy), where k is the size of the hash and n/k is the reduction factor.

The results we see for both the short and extended $\Psi$-regular hashing pipelines are consistent with the findings of Choromanska, Choromanski et al. [6], as well as others. This work once again confirms that structured matrices, which can be stored in linear space, are almost as effective as fully random matrices in preserving the angular distance between data points. As we see here, the fully random matrix does achieve strong results, but it is not obvious that the improvements justify the additional storage and computational cost that unstructured matrices require.

The key result of this paper is that the pipeline which uses Kac’s random walk matrix instead of the Hadamard-based block achieves comparable results, in particular for the smaller reductions in dimensionality. This is the result we hoped to see, and it could prove useful for future applications given the recent theoretical results from Pillai and Smith.

5 Conclusion

Our results show that all three hashing pipelines are able to reduce dimensionality while preserving the angular distance between input data instances. In particular, we show that a convolutional neural network with a hashing step before the fully connected layers compares favorably both with a baseline model with no hash and with a hash using a fully random Gaussian matrix when classifying images from the MNIST dataset.

We also showed that Kac’s random walk matrix can be used in place of the Hadamard-based block in the hashing pipeline to achieve comparable accuracy even for significant reductions in dimensionality. The results are consistent whether the circulant or the Toeplitz Gaussian matrix is used in the pipeline.

This is an important result, as it offers potential efficiency gains which can boost the performance of practical implementations of deep neural networks, such as in robotics. Given the extent to which fully connected layers contribute to the space requirements of convolutional neural networks, it is likely that optimized versions of our proposed hashing pipeline (for example using the Fast Fourier Transform) can drastically reduce both the time and space required to train and test such networks.

As deep learning continues to grow in popularity, approaches such as this could prove critical when using neural networks in practice.

5.1 Acknowledgements

We could not have done any of this work without the inspiration and guidance of Krzysztof Choromanski, who introduced us to the beauty of random feature maps. We thank him for his time and patience.

References

  • [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105, 2012.
  • [2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
  • [3] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550:354 EP –, 10 2017.
  • [4] Felix X. Yu, Sanjiv Kumar, Yunchao Gong, and Shih-Fu Chang. Circulant binary embedding. CoRR, abs/1405.3162, 2014.
  • [5] Xinyang Yi, Constantine Caramanis, and Eric Price. Binary embedding: Fundamental limits and fast algorithm. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2162–2170, 07–09 Jul 2015.
  • [6] Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, and Yann LeCun. Binary embeddings with structured hashed projections. CoRR, abs/1511.05212, 2015.
  • [7] M. Kac. Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 3: Contributions to Astronomy and Physics, pages 171–197, 1956.
  • [8] N. S. Pillai and A. Smith. Kac’s walk on the n-sphere mixes in n log n steps. ArXiv e-prints, July 2015.
  • [9] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.
  • [10] Sanjoy Dasgupta. Learning mixtures of Gaussians. Technical report, EECS Department, University of California, Berkeley, May 1999.
  • [11] Sanjoy Dasgupta. Experiments with random projection. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 143–151, 2000.
  • [12] Andrew M. Saxe, Pang Wei Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, and Andrew Y. Ng. On random weights and unsupervised feature learning. In ICML, pages 1089–1096, 2011.
  • [13] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, volume 37, pages 2285–2294, 2015.
  • [14] William Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), volume 26, pages 189–206. American Mathematical Society, 1984.
  • [15] Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, and Shih-Fu Chang. On binary embedding using circulant matrices. CoRR, abs/1511.06480, 2015.
  • [16] Yu Cheng, Felix X. Yu, Rogerio S. Feris, Sanjiv Kumar, Alok Choudhary, and Shih-Fu Chang. An exploration of parameter redundancy in deep networks with circulant projections. In ICCV, 2015.
  • [17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.