Pushing the limits of RNN Compression

Urmish Thakker, Igor Fedorov, Jesse Beu, Dibakar Gope, Chu Zhou,
Ganesh Dasika*, Matthew Mattina
Arm ML Research Lab
*Currently at AMD Research
Abstract

Recurrent Neural Networks (RNN) can be difficult to deploy on resource-constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task accuracy. This paper introduces a method to compress RNNs for resource-constrained environments using the Kronecker product (KP). KPs can compress RNN layers by 16x or more with minimal accuracy loss. We show that KP can beat the task accuracy achieved by other state-of-the-art compression techniques across 4 benchmarks spanning 3 different applications, while simultaneously improving inference run-time.

1 Introduction

Recurrent Neural Networks (RNNs) achieve state-of-the-art (SOTA) accuracy for many applications that use time-series data. As a result, RNNs can benefit important Internet-of-Things (IoT) applications like wake-word detection zhang2017 , human activity recognition hammerla2016deep ; opp , and predictive maintenance. IoT applications typically run on highly constrained devices. Due to their energy, power, and cost constraints, IoT devices frequently use low-bandwidth memory technologies and smaller caches compared to desktop and server processors. For example, some IoT devices have 2 KB of RAM and 32 KB of flash memory. The size of typical RNN layers can prohibit their deployment on IoT devices or reduce execution efficiency urmtha01RNN . Thus, there is a need for a compression technique that can drastically compress RNN layers without sacrificing task accuracy.

First, we study the efficacy of traditional compression techniques like pruning suyog and low-rank matrix factorization (LMF) DBLP:journals/corr/KuchaievG17 ; lmf-good1 . We set a compression target of 16x or more and observe that neither pruning nor LMF can achieve this target without significant loss in accuracy. We then investigate why traditional techniques fail, focusing on their influence on the rank and condition number of the compressed RNN matrices. We observe that pruning and LMF tend to either decrease matrix rank or lead to ill-conditioned matrices and matrices with large singular values.

To remedy the drawbacks of existing compression methods, we propose to use Kronecker Products (KPs) to compress RNN layers. We refer to the resulting models as KPRNNs. We show that our approach achieves SOTA compression on IoT-targeted benchmarks without sacrificing wall-clock inference time or accuracy.

2 Related work

KPs have been used in the deep learning community in the past kron1 ; kron2 . For example, kron2 use KPs to compress fully connected (FC) layers in AlexNet. We deviate from kron2 by using KPs to compress RNNs and, instead of learning the decomposition for fixed RNN layers, we learn the KP factors directly. Additionally, kron2 does not examine the impact of compression on inference run-time. In kron1 , KPs are used to stabilize RNN training through a unitary constraint. A detailed discussion of how the present work differs from kron1 can be found in Section 3.

The research in neural network (NN) compression can be roughly categorized into 4 topics: pruning suyog , structured matrix based techniques circular1 , quantization Quant-hubara ; dib1 and tensor decomposition DBLP:journals/corr/KuchaievG17 ; hmd . Compression using structured matrices translates into inference speed-up only for sufficiently large matrices NIPS2018_8119 on CPUs or when using specialized hardware circular1 . As such, we restrict our comparisons to pruning and tensor decomposition.

3 Kronecker Product Recurrent Neural Networks

3.1 Background

Let $B \in \mathbb{R}^{m_1 \times n_1}$, $C \in \mathbb{R}^{m_2 \times n_2}$, and $A \in \mathbb{R}^{m \times n}$, with $m = m_1 m_2$ and $n = n_1 n_2$. Then, the KP between $B$ and $C$ is given by

$A = B \otimes C = \begin{bmatrix} b_{1,1} C & \cdots & b_{1,n_1} C \\ \vdots & \ddots & \vdots \\ b_{m_1,1} C & \cdots & b_{m_1,n_1} C \end{bmatrix}$    (1)

where $b_{i,j}$ is the element of $B$ in row $i$ and column $j$, and each block $b_{i,j} C$ is the elementwise (Hadamard-style) scaling of $C$ by the scalar $b_{i,j}$. The variables $B$ and $C$ are referred to as the Kronecker factors of $A$. The number of such Kronecker factors can be 2 or more. If the number of factors is more than 2, we can use (1) recursively to calculate the resultant larger matrix. For example, in the equation

$W = W_1 \otimes W_2 \otimes W_3$    (2)

$W$ can be evaluated by first evaluating $W_2 \otimes W_3$ to a partial result, say $W_{23}$, and then evaluating $W_1 \otimes W_{23}$.

Expressing a large matrix $A$ as a KP of two or more smaller Kronecker factors can lead to significant compression. For example, $A \in \mathbb{R}^{m_1 m_2 \times n_1 n_2}$ can be decomposed into Kronecker factors $B \in \mathbb{R}^{m_1 \times n_1}$ and $C \in \mathbb{R}^{m_2 \times n_2}$. The result is a reduction from $m_1 m_2 n_1 n_2$ to $m_1 n_1 + m_2 n_2$ in the number of parameters required to store $A$. Of course, compression can lead to accuracy degradation, which motivates the present work.
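To make this parameter-count arithmetic concrete, here is a minimal NumPy sketch (our own illustration; the sizes and the use of np.kron are assumptions, not taken from the paper's experiments) that expands two Kronecker factors and compares storage costs:

```python
import numpy as np

# Two Kronecker factors; sizes chosen only for illustration.
m1, n1, m2, n2 = 16, 16, 16, 16
B = np.random.randn(m1, n1)
C = np.random.randn(m2, n2)

# A = B kron C has shape (m1*m2, n1*n2) = (256, 256).
A = np.kron(B, C)
assert A.shape == (m1 * m2, n1 * n2)

# Storage cost of the factors vs. the expanded matrix.
params_factors = B.size + C.size   # m1*n1 + m2*n2 = 512
params_full = A.size               # m1*m2*n1*n2  = 65,536
print(f"compression factor: {params_full / params_factors:.0f}x")  # 128x
```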

3.2 Prior work on using KP to stabilize RNN training flow

Jose et al. kron1 used KPs to stabilize the training of vanilla RNNs. An RNN layer has two sets of weight matrices: input-hidden and hidden-hidden (also known as recurrent). Jose et al. kron1 use Kronecker factors of size $2 \times 2$ to replace the hidden-hidden matrices of every RNN layer. Thus a traditional RNN cell, represented by

$h_t = \sigma(W \cdot x_t + U \cdot h_{t-1} + b)$    (3)

is replaced by

$h_t = \sigma(W \cdot x_t + (U_1 \otimes U_2 \otimes \cdots \otimes U_8) \cdot h_{t-1} + b)$    (4)

where $W$ (input-hidden matrix) $\in \mathbb{R}^{h \times d}$, $U$ (hidden-hidden or recurrent matrix) $\in \mathbb{R}^{h \times h}$, $U_i \in \mathbb{R}^{2 \times 2}$ for $i \in \{1, \ldots, 8\}$, $x_t \in \mathbb{R}^{d}$, $h_t \in \mathbb{R}^{h}$, and $\sigma$ is a non-linearity such as tanh. Thus a $256 \times 256$ hidden-hidden matrix (for hidden size $h = 256$) is expressed as a KP of 8 matrices of size $2 \times 2$. For an RNN layer with input and hidden vectors of size 256, this can potentially lead to roughly $2\times$ compression of the layer (as we only compress the $U$ matrix). The aim of Jose et al. kron1 was to stabilize RNN training to avoid vanishing and exploding gradients. They add a unitary constraint to these matrices, stabilizing RNN training. However, in order to regain baseline accuracy, they needed to increase the size of the RNN layers significantly, leading to more parameters being accumulated in the $W$ matrix in (4). Thus, while they achieve their objective of stabilizing vanilla RNN training, they achieve only minor compression. In this paper, we show how to use KPs to compress both the input-hidden and hidden-hidden matrices of vanilla RNN, LSTM and GRU cells and achieve significant compression. We show how to choose the size and the number of Kronecker factor matrices to ensure high compression rates, minor impact on accuracy, and inference speed-up over the baseline on an embedded CPU.
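As a rough illustration of this factorization (a sketch under our own assumptions: random 2x2 factors, input and hidden size 256, NumPy instead of the authors' training code), the recurrent matrix of (4) can be expanded as follows:

```python
import numpy as np
from functools import reduce

# Hidden-hidden matrix of a 256-unit RNN expressed as a chain of eight
# 2x2 Kronecker factors, mirroring the setup of Jose et al. (kron1).
factors = [np.random.randn(2, 2) for _ in range(8)]

# Applying (1) recursively: U = U1 kron U2 kron ... kron U8.
U = reduce(np.kron, factors)
assert U.shape == (256, 256)

# Only the recurrent matrix is compressed: 8 * 4 = 32 parameters replace
# 256 * 256 = 65,536. With the input-hidden matrix W left dense, the
# whole layer shrinks by roughly 2x when input and hidden sizes are equal.
print(sum(f.size for f in factors), U.size)
```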

3.3 KPRNN Layer

Choosing the number of Kronecker factors:

A matrix expressed as a KP of multiple Kronecker factors can lead to significant compression. However, deciding the number of factors is not obvious. We started by exploring the framework of kron1 . We used a chain of small Kronecker factor matrices for the hidden-hidden/recurrent matrices of the LSTM layers of the key-word spotting network zhang2017 . This resulted in a large reduction in the number of parameters, but the accuracy dropped by 4% relative to the baseline. When we examined the matrices, we observed that, during training, the values of some of the matrices hardly changed after initialization. This behavior may be explained by the fact that the gradient flowing back into a Kronecker factor vanishes as it is multiplied with the chain of remaining factors during back-propagation. In general, our observations indicated that as the number of Kronecker factors increased, training became harder, leading to significant accuracy loss when compared to the baseline.

Input: Matrices $B$ of dimension $m_1 \times n_1$, $C$ of dimension $m_2 \times n_2$ and vector $x$ of dimension $n \times 1$, where $m = m_1 m_2$, $n = n_1 n_2$
Output: Vector $y = (B \otimes C) \cdot x$ of dimension $m \times 1$

1:  X = reshape(x, n2, n1) {reshapes the x vector to a matrix of dimension n2 x n1, stacking columns}
2:  T = C * X
3:  Y = T * B^T
4:  y = reshape(Y, m, 1) {reshapes the m2 x m1 matrix Y back to a vector of dimension m x 1, stacking columns}
Algorithm 1 Implementation of the matrix-vector product, when the matrix is expressed as a KP of two matrices

Additionally, using a chain of Kronecker factors leads to significant slow-down during inference on a CPU. For inference on IoT devices, it is safe to assume that the batch size will be one. When the batch size is one, the RNN cells compute matrix-vector products during inference. To calculate the matrix-vector product, we need to multiply and expand all of the Kronecker factors into the resultant larger matrix before executing the matrix-vector multiplication. Referring to (4), we need to multiply $U_1 \otimes U_2 \otimes \cdots \otimes U_8$ to create $U$ before executing the operation $U \cdot h_{t-1}$. The process of expanding the Kronecker factors to a larger matrix, followed by matrix-vector products, leads to slower inference than the original uncompressed baseline. Thus, inference for RNNs represented using (3) is faster than for the compressed RNN represented using (4). The same observation applies whenever the number of Kronecker factors is greater than two, and the slowdown with respect to the baseline grows with the number of factors.

However, if the number of Kronecker factors is restricted to two, we can avoid expanding the Kronecker factors into the larger matrix and achieve speed-up during inference. Algorithm 1 shows how to calculate the matrix vector product when the matrix is expressed as a KP of two Kronecker factors. The derivation of this algorithm can be found in kpmv .
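Below is a NumPy sketch of Algorithm 1 (the helper name kp_matvec is ours; the paper's measured implementation is written in C++ with Eigen), checked against the naive expand-then-multiply approach:

```python
import numpy as np

def kp_matvec(B, C, x):
    """Compute y = (B kron C) @ x without forming the large matrix.

    Uses the identity (B kron C) vec(X) = vec(C @ X @ B.T), where vec()
    stacks columns and X is the n2 x n1 matrix satisfying vec(X) = x.
    """
    m1, n1 = B.shape
    m2, n2 = C.shape
    X = x.reshape(n1, n2).T        # column-major reshape of x into n2 x n1
    Y = C @ X @ B.T                # m2 x m1 intermediate result
    return Y.T.reshape(m1 * m2)    # column-major flatten back to a vector

# Sanity check against expanding the Kronecker product explicitly.
rng = np.random.default_rng(0)
B, C = rng.standard_normal((8, 4)), rng.standard_normal((16, 32))
x = rng.standard_normal(4 * 32)
assert np.allclose(kp_matvec(B, C, x), np.kron(B, C) @ x)
```

The two small matrix-matrix products cost roughly O(n1·m2·(n2 + m1)) operations, versus O(m1·m2·n1·n2) for the expanded matrix-vector product, which is where the inference speed-up comes from.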

Input: list1 is the sorted (ascending) list of prime factors of m; list2 is the sorted (ascending) list of prime factors of n
Output: [m1, n1] - dimensions of the first Kronecker factor; [m2, n2] - dimensions of the second Kronecker factor

1:  function reduceList (inputList)
2:     temp1 = inputList[0]
3:     inputList.del(0) //Delete the element at position zero
4:     inputList[0] = inputList[0]*temp1
5:     inputList.sort('ascending')
6:     return inputList
7:  repeat list1 = reduceList(list1) and list2 = reduceList(list2) until each list has exactly two elements
8:  list1.sort('descending') //list2 stays ascending so that large factors of m pair with small factors of n
9:  [m1, n1], [m2, n2] = [list1[0], list2[0]], [list1[1], list2[1]]
Algorithm 2 Finding the dimensions of the Kronecker factors for a matrix of dimension m x n

Choosing the dimensions of Kronecker factors:

A matrix can be expressed as a KP of two Kronecker factors of varying sizes, and the compression factor is a function of the sizes of those factors. For example, a $256 \times 256$ matrix can be expressed as a KP of two $16 \times 16$ matrices, leading to a $128\times$ reduction in the number of parameters used to store the matrix, whereas Kronecker factors of size $2 \times 2$ and $128 \times 128$ only achieve a compression factor of approximately $4\times$. In this paper, we choose the dimensions of the factors to achieve maximum compression using Algorithm 2.
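A Python sketch of this dimension-selection step (our own port of Algorithm 2; the helper names prime_factors and kp_dims are hypothetical, and the guard for prime dimensions is our addition) could look like:

```python
def prime_factors(n):
    """Prime factors of n in ascending order, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def kp_dims(m, n):
    """Choose shapes (m1, n1) and (m2, n2) for the two Kronecker factors
    of an m x n matrix, following the spirit of Algorithm 2."""
    def reduce_to_two(lst):
        if len(lst) < 2:
            lst = [1] + lst                  # guard for prime dimensions
        while len(lst) > 2:                  # merge the two smallest factors
            lst = sorted([lst[0] * lst[1]] + lst[2:])
        return lst
    l1 = sorted(reduce_to_two(prime_factors(m)), reverse=True)  # descending
    l2 = reduce_to_two(prime_factors(n))                        # ascending
    # Pair the larger part of m with the smaller part of n (and vice versa)
    # to keep m1*n1 + m2*n2, the number of stored parameters, small.
    return (l1[0], l2[0]), (l1[1], l2[1])

print(kp_dims(256, 256))   # ((16, 16), (16, 16)) -> 128x compression
```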

Compressing LSTMs, GRUs and RNNs using the KP:

KPRNN cells are RNN, LSTM and GRU cells with all of their matrices compressed by replacing them with KPs of two smaller matrices. For example, the RNN cell depicted in (3) is replaced by:

$h_t = \sigma\big((W_1 \otimes W_2) \cdot x_t + (U_1 \otimes U_2) \cdot h_{t-1} + b\big)$    (5)

where $W_1 \otimes W_2 \in \mathbb{R}^{h \times d}$ replaces the input-hidden matrix $W$, $U_1 \otimes U_2 \in \mathbb{R}^{h \times h}$ replaces the hidden-hidden matrix $U$, and the dimensions of the factors $W_1$, $W_2$, $U_1$ and $U_2$ are chosen using Algorithm 2. LSTM, GRU and FastRNN cells are compressed in a similar fashion. Instead of starting with a trained network and decomposing its matrices into Kronecker factors, we replace the RNN/LSTM/GRU cells in a NN with their KP equivalents and train the entire model from the beginning.
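For completeness, here is a minimal forward-step sketch of the KPRNN cell in (5) (NumPy only, tanh as the non-linearity, function names ours; the actual models in the paper are built and trained end-to-end in TensorFlow as described in Section 4):

```python
import numpy as np

def kp_matvec(B, C, x):
    # y = (B kron C) @ x via the reshape trick of Algorithm 1.
    (m1, n1), (m2, n2) = B.shape, C.shape
    return (C @ x.reshape(n1, n2).T @ B.T).T.reshape(m1 * m2)

def kprnn_step(W1, W2, U1, U2, b, x_t, h_prev):
    """One step of the KP-compressed vanilla RNN cell in (5)."""
    return np.tanh(kp_matvec(W1, W2, x_t) + kp_matvec(U1, U2, h_prev) + b)

# Example with input and hidden size 256 and 16x16 factors (Algorithm 2).
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 16)), rng.standard_normal((16, 16))
U1, U2 = rng.standard_normal((16, 16)), rng.standard_normal((16, 16))
b = np.zeros(256)
h = kprnn_step(W1, W2, U1, U2, b, rng.standard_normal(256), np.zeros(256))
print(h.shape)  # (256,)
```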

4 Results

Benchmark        | MNIST-LSTM | USPS-FastRNN | KWS-LSTM  | HAR1-BiLSTM
Reference Paper  | msr        | msr          | zhang2017 | hammerla2016deep
Cell Type        | LSTM       | FastRNN      | LSTM      | Bi-LSTM
Dataset          | mnist      | usps         | warden    | opp
Table 1: Benchmarks evaluated in this paper. These benchmarks represent some of the key applications in the IoT domain - image classification, key-word spotting, and human activity recognition. We cover a wide variety of applications and RNN cell types.
Benchmark     | Parameter measured  | Baseline | Small Baseline | Magnitude Pruning | LMF   | KP
MNIST-LSTM    | Model Size (KB)¹    | 44.73    | 4.51           | 4.19              | 4.9   | 4.05
              | Accuracy (%)        | 99.40    | 87.50          | 96.49             | 97.40 | 98.44
              | Compression factor² | 1        | -              | -                 | -     | 17.6
              | Runtime (ms)        | 6.3      | 0.7            | 0.66              | 1.8   | 4.6
HAR1-BiLSTM   | Model Size (KB)¹    | 1462.3   | 75.9           | 75.55             | 76.39 | 74.90
              | Accuracy (%)        | 91.90    | 88.84          | 82.97             | 89.94 | 91.14
              | Compression factor² | 1        | 19.8           | 28.6              | 28.1  | 29.7
              | Runtime (ms)        | 470      | 29.92          | 98.2              | 64.12 | 157
KWS-LSTM      | Model Size (KB)¹    | 243.4    | 16.3           | 15.56             | 16.79 | 15.30
              | Accuracy (%)        | 92.5     | 89.70          | 84.91             | 89.13 | 91.2
              | Compression factor² | 1        | 15.8           | 23.81             | 21.2  | 24.47
              | Runtime (ms)        | 26.8     | 2.01           | 5.9               | 4.1   | 17.5
USPS-FastRNN  | Model Size (KB)¹    | 7.25     | 1.98           | 1.92              | 2.04  | 1.63
              | Accuracy (%)        | 93.77    | 91.23          | 88.52             | 89.56 | 93.20
              | Compression factor² | 1        | 4.4            | 8.94              | 8     | 16
              | Runtime (ms)        | 1.17     | 0.4            | 0.375             | 0.28  | 0.6

  • ¹ Model size is calculated assuming 32-bit weights. Further opportunities exist to compress the network via quantization and compressing the fully connected softmax layer.

  • ² We measure the amount of compression of the LSTM/FastRNN layer of the network.
Table 2: Model accuracy and runtime for our benchmarks before and after compression. The baseline networks are compared to networks whose RNN layers are compressed using KPs, magnitude pruning, LMF, or by scaling down the network size (Small Baseline). Each compressed network has fewer RNN parameters than the baseline (sizes indicated in the table). The KP-based networks are consistently the most accurate compressed alternative while still providing inference speed-up over the baseline.

Other compression techniques evaluated:

We compare networks compressed using KPs against three alternatives: magnitude pruning, LMF, and a Small Baseline (the baseline network scaled down to a comparable number of parameters).

Training platform, infrastructure, and inference run-time measurement:

We use Tensorflow 1.12 as the training platform and 4 Nvidia RTX 2080 GPUs to train our benchmarks. To measure the inference run-time, we implement the baseline and the compressed cells in C++ using the Eigen library and run them on the Arm Cortex-A73 core of a Hikey 960 development board.

Dataset and benchmarks:

We evaluate the impact of compression using the techniques discussed in Section 3 on a wide variety of benchmarks. Table 1 shows the benchmarks used in this work.

4.1 KPRNN networks

Table 2 shows the results of applying the KP compression technique across a wide variety of applications and RNN cells. As mentioned in Section 3, we target the point of maximum compression using two matrix factors.

4.2 Possible explanation for the accuracy difference between KPRNN, pruning, and LMF

In general, the poor accuracy of LMF can be attributed to a significant reduction in the rank of the matrix. KPs, on the other hand, produce a full-rank matrix if the Kronecker factors are full-rank laub2005matrix . We observe that the Kronecker factors of all the compressed benchmarks are full-rank. A full-rank matrix can still lead to poor accuracy if it is ill-conditioned; however, the condition numbers of the matrices of the best-performing KP-compressed networks discussed in this paper remain small. To prune a network to the same compression factor as KP, networks need to be pruned to 94% sparsity or above. Pruning FastRNN cells to the required compression factor leads to an ill-conditioned matrix, which may explain the poor accuracy of sparse FastRNN networks. For the other pruned networks, however, the resultant sparse matrices are full-rank and have modest condition numbers, so condition number does not explain the loss in accuracy for these benchmarks.

To further understand the loss in accuracy of pruned LSTM networks, we looked at the singular values of the resultant sparse matrices in the KWS-LSTM network. Let $y = W x$. The largest singular value of $W$ upper-bounds $\|y\|_2 / \|x\|_2$, i.e. the amplification applied by $W$. Thus, a matrix with a larger largest singular value can lead to an output with a larger norm linalgbook . Since RNNs execute a matrix-vector product followed by a non-linear sigmoid or tanh layer, the output will saturate if this value is large. The matrix in the LSTM layer of the best-performing pruned KWS-LSTM network has a considerably larger largest singular value than the LSTM layer matrices learned by both the baseline KWS-LSTM network and the Kronecker-product-compressed KWS-LSTM network. This might explain the especially poor results achieved after pruning this benchmark. Similar observations can be made for the pruned HAR1 network.
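The quantities used in this analysis (rank, condition number, largest singular value) can be computed directly from the learned weight matrices; the sketch below uses random placeholder matrices, so the printed numbers are purely illustrative and are not the paper's measurements:

```python
import numpy as np

def diagnostics(M, name):
    # Singular values in descending order; cond = sigma_max / sigma_min.
    s = np.linalg.svd(M, compute_uv=False)
    print(f"{name}: rank={np.linalg.matrix_rank(M)}, "
          f"cond={s[0] / s[-1]:.1f}, sigma_max={s[0]:.2f}")

rng = np.random.default_rng(0)
n = 256

dense = rng.standard_normal((n, n)) * 0.05   # stand-in for a baseline weight matrix
diagnostics(dense, "dense")

# Magnitude pruning to 94% sparsity, the level needed to match KP compression.
threshold = np.quantile(np.abs(dense), 0.94)
pruned = np.where(np.abs(dense) > threshold, dense, 0.0)
diagnostics(pruned, "pruned (94% sparse)")

# Matrix formed as a KP of two full-rank 16x16 factors.
kp = np.kron(rng.standard_normal((16, 16)), rng.standard_normal((16, 16))) * 0.05
diagnostics(kp, "Kronecker product")
```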

5 Conclusion

We show how to compress RNN cells by 16x or more using Kronecker products and call the resulting cells KPRNNs. KPRNNs can act as a drop-in replacement for most RNN layers and provide the benefit of significant compression with marginal impact on accuracy. None of the other compression techniques (pruning, LMF) match the accuracy of the Kronecker-compressed networks. We show that this compression technique works across 4 benchmarks that represent key applications in the IoT domain.

References
