Adaptive Quantization for Deep Neural Network


Yiren Zhou1, Seyed-Mohsen Moosavi-Dezfooli2, Ngai-Man Cheung1, Pascal Frossard2
1Singapore University of Technology and Design (SUTD)
2École Polytechnique Fédérale de Lausanne (EPFL)
yiren_zhou@mymail.sutd.edu.sg, ngaiman_cheung@sutd.edu.sg
{seyed.moosavi, pascal.frossard}@epfl.ch
Abstract

In recent years, Deep Neural Networks (DNNs) have been rapidly developed for various applications, with increasingly complex architectures. The performance gains of these DNNs generally come at high computational cost and large memory consumption, which may not be affordable for mobile platforms. Deep model quantization can be used to reduce the computation and memory costs of DNNs, and to deploy complex DNNs on mobile equipment. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement to find the optimal quantization bit-width for each layer. This is the first work that theoretically analyses the relationship between parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, and achieves a 20-40% higher compression rate than equal bit-width quantization at the same model prediction accuracy.


Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Deep neural networks (DNNs) have achieved significant success in various machine learning applications, including image classification (???), image retrieval (??), and natural language processing (?). These achievements come with increasing computational and memory cost, as the neural networks are becoming deeper (?), and contain more filters per single layer (?).

While DNNs are powerful for various tasks, their increasing computational and memory costs make them difficult to deploy on mobile platforms, considering the limited storage space, computation power, and energy supply of mobile devices (?), as well as the real-time processing requirements of mobile applications. There is clearly a need to reduce the computational resource requirements of DNN models so that they can be deployed on mobile devices (?).

In order to reduce the resource requirement of DNN models, one approach relies on model pruning. By pruning some parameters in the model (?), or skipping some operations during the evaluation (?), the storage space and/or the computational cost of DNN models can be reduced. Another approach consists in parameter quantization (?). By applying quantization on model parameters, these parameters can be stored and computed under lower bit-width. The model size can be reduced, and the computation becomes more efficient under hardware support (?). It is worth noting that model pruning and parameter quantization can be applied at the same time, without interfering with each other (?); we can apply both approaches to achieve higher compression rates.

Many deep model compression works have also considered using parameter quantization (???) together with other compression techniques, and achieved good results. However, these works usually assign the same quantization bit-width to all layers of the deep network. In DNN models, the layers have different structures, which lead to different properties with respect to quantization. Applying the same quantization bit-width to all layers can therefore be sub-optimal; assigning a different bit-width to each layer makes it possible to achieve a better quantization result (?).

In this work, we propose an accurate and efficient method to find the optimal bit-width for coefficient quantization in each DNN layer. Inspired by the analysis in (?), we propose a method to measure the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, by combining the effects caused by all layers, the optimal bit-width is decided for each layer. With this method we avoid an exhaustive search for the optimal bit-width of each layer, and make the quantization process more efficient. We apply this method to quantize different models that have been pre-trained on the ImageNet dataset and achieve good quantization results on all models. Our method consistently outperforms the recent state-of-the-art SQNR-based method (?) on different models, and achieves a 20-40% higher compression rate compared to equal bit-width quantization. Furthermore, we give a theoretical analysis of how quantization of individual layers affects DNN accuracy. To the best of our knowledge, this is the first work that theoretically analyses the relationship between the coefficient quantization effect of individual layers and DNN accuracy.

Related works

Parameter quantization has been widely used for DNN model compression (???). The work in (?) limits the bit-width of DNN models for both training and testing, and proposes a stochastic rounding scheme to improve model training performance under low bit-width. The authors in (?) use k-means to train quantization centroids, and use these centroids to quantize the parameters. The authors in (?) separate the parameter vectors into sub-vectors, and find a sub-codebook for each sub-vector for quantization. In these works, all (or a majority of) layers are quantized with the same bit-width. However, as the layers in a DNN have various structures, they may have different properties with respect to quantization. It is possible to achieve better compression results by optimizing the quantization bit-width for each layer.

Previous works have optimized quantization bit-widths for DNN models (????). The authors in (?) propose an exhaustive search approach to find the optimal bit-width for a fully-connected network. In (?), the authors first use exhaustive search to find the optimal bit-width for uniform or non-uniform quantization; two schemes are then proposed to reduce the memory consumption during model testing. The exhaustive search approach only works for relatively small networks with few layers and is not practical for deep networks, as the complexity of exhaustive search increases exponentially with the number of layers. The authors in (?) use the mean square quantization error (MSQE) (ℓ2 error) on layer weights to measure the sensitivity of DNN layers to quantization, and manually set the quantization bit-width for each layer. The work in (?) uses the signal-to-quantization-noise ratio (SQNR) on layer weights to measure the effect of quantization error in each layer. MSQE and SQNR are good metrics for measuring the quantization loss on model weights. However, there is no theoretical analysis showing how these measurements relate to the accuracy of the DNN model; only empirical results are given. The MSQE-based approach in (?) minimizes the error on the quantized weights, implicitly assuming that the error in each layer has an equal effect on model accuracy. Similarly, in (?), the authors maximize the overall SQNR, and suggest that quantization of different layers contributes equally to the overall SQNR, and thus has an equal effect on model accuracy. Both works ignore that the different structures and positions of layers may lead to different robustness to quantization, which renders the two approaches suboptimal.
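For reference, both metrics can be written in a few lines. The definitions below follow their standard forms (mean squared quantization error, and signal-to-quantization-noise ratio in dB); the function names and inputs are illustrative, not taken from the cited works.

    import numpy as np

    def msqe(w, w_q):
        # Mean square quantization error (L2 error) between original and quantized weights
        return float(np.mean((w - w_q) ** 2))

    def sqnr_db(w, w_q):
        # Signal-to-quantization-noise ratio in dB: signal power over quantization-noise power
        return float(10.0 * np.log10(np.sum(w ** 2) / np.sum((w - w_q) ** 2)))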

In this work, we follow the analysis in (?) and propose a method to measure the effect of quantization error in each DNN layer. Different from (??), which use empirical results to show the relationship between the measurement and DNN accuracy, we conduct a theoretical analysis of how our proposed measurement relates to model accuracy. Furthermore, we show that our bit-width optimization method is more general than the method in (?), which makes our optimization more accurate.

There are also works (??) that use knowledge distillation to train a smaller network from the original complex model. It is also possible to combine our quantization framework with knowledge distillation to achieve even better compression results.

Measuring the effect of quantization noise

In this section, we analyse the effect of quantization on the accuracy of a DNN model. Parameter quantization results in quantization noise that affects the performance of the model. Previous works have analysed the effect of input noise on DNN models (?); here we use this idea to analyse the effect of noise in the intermediate feature maps of the DNN model.

Quantization optimization

The goal of our paper is to find an optimal quantization for compressing a DNN model: after quantization, under a controlled accuracy penalty, the model size should be as small as possible. Suppose that we have a DNN with layers, where each layer has parameters, and we apply bit-width quantization to the parameters of each layer to obtain a quantized model . Our optimization objective is:

(1)

where is the accuracy of the model , and is the maximum accuracy degradation. Note that it requires enormous computation to calculate the accuracy of the model for all quantization cases. To solve the problem more efficiently, we propose a method to estimate the value of the performance penalty given by .

Quantization noise

Value quantization is a simple yet effective way to compress a model (?). Here we evaluate the effect of using value quantization on model parameters.

Assume that conducting quantization on a value is equivalent to adding noise to the value:

(2)

Here is the original value, , with the set of weights in a layer. Then, is the quantized value, and is the quantization noise. Assume that we use a uniform quantizer, and that the stepsize of the quantized interval is fixed. Following the uniform quantization analysis in (?), if we consider as the quantization noise on all weights in , we have the expectation of as

(3)

where , is the number of weights in , and  (?). A detailed analysis can be found in the Supplementary Material. Eq. (3) indicates that every time we reduce the bit-width by 1 bit, increases by a factor of 4. This is equivalent to the quantization efficiency of 6 dB/bit in (?).
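To make the factor-of-4-per-bit (approximately 6 dB/bit) behaviour concrete, the following Python sketch quantizes a synthetic weight tensor with a uniform quantizer at decreasing bit-widths and measures the total squared noise. The Gaussian weights, the min/max-based step size, and all variable names are illustrative assumptions rather than the paper's exact setup.

    import numpy as np

    def uniform_quantize(w, bits):
        # Uniform quantizer with a fixed step size over the weight range
        w_min, w_max = w.min(), w.max()
        step = (w_max - w_min) / (2 ** bits)
        q = np.round((w - w_min) / step) * step + w_min
        return np.clip(q, w_min, w_max)

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=100_000)   # stand-in for one layer's weights

    prev = None
    for bits in range(10, 2, -1):
        e2 = float(np.sum((uniform_quantize(w, bits) - w) ** 2))  # total squared quantization noise
        msg = f"{bits:2d} bits: ||eps||^2 = {e2:.3e}"
        if prev is not None:
            msg += f"  ({e2 / prev:.2f}x the previous bit-width)"
        prev = e2
        print(msg)
    # Each 1-bit reduction roughly quadruples the squared noise, i.e. ~6 dB per bit.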

Measurement for quantization noise

Figure 1: Simple DNN model architecture.

From weight domain to feature domain

Eq. (3) shows the quantization noise in the weight domain; here we show how the noise in the weight domain relates to the noise in the feature domain.

A simplified DNN classifier architecture is shown in Fig. 1.

Here we define as the weights of layer in the DNN model . And is the last feature map (vector) of the DNN model . As we quantize , the quantization noise is , and there would be a resulting noise on the last feature map . Here we define as the noise on last feature map that is caused by the quantization only on a single layer .

As the value of is proportional to the value of , similar to Eq. (3), the expectation of resulting noise on is:

(4)

This is proved in later sections; empirical results are shown in Fig. 4.

The effect of quantization noise

Similarly to the analysis in (?), we can see that the softmax classifier has a linear decision boundary in the last feature vectors  (?) in Fig. 1. The analysis can be found in the Supplementary Material. Then we apply the result of (?) to bound the robustness of the classifier with respect to manipulation of weights in different layers.

We define to be the adversarial noise, which represents the minimum noise that causes misclassification. For a certain input vector , where is the number of elements in z, is the distance from the datapoint to the decision boundary, which is a fixed value. We define the sorted vector of z as , where the maximum value is and the second maximum value is . The result of the softmax classifier (or max classifier) can be expressed as , i.e., picking the maximum value in the vector z.

As the adversarial noise is the minimum noise that can change the result of a classifier, we can express the adversarial noise for the softmax classifier as ; the norm square of the adversarial noise is then .
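As an illustration of this quantity, the sketch below computes the minimal ℓ2 perturbation that flips a max classifier's decision by moving the two largest entries of z toward each other by half their gap; under this standard geometric argument the norm square equals (z_(1) − z_(2))²/2. The function and variable names are ours, not the paper's.

    import numpy as np

    def adversarial_noise(z):
        # Minimal L2 noise that ties the top two entries of z: shift each by half the gap
        order = np.argsort(z)[::-1]
        top, second = order[0], order[1]
        delta = (z[top] - z[second]) / 2.0
        r = np.zeros_like(z, dtype=float)
        r[top], r[second] = -delta, delta
        return r

    z = np.array([4.1, 2.8, 0.3, -1.0])                  # a toy last-layer feature vector
    r = adversarial_noise(z)
    print(np.argmax(z), np.argmax(z + r * (1 + 1e-6)))   # 0 -> 1: decision flips just past the boundary
    print(np.sum(r ** 2), (z[0] - z[1]) ** 2 / 2)        # norm square equals (z_(1) - z_(2))^2 / 2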

Here we define as the noise that we directly add to the last feature map . We can consider as the collective effect of all the noises caused by the quantization of all layers , where is the number of layers.

As mentioned in (?), if we apply random noise rather than adversarial noise to the input vector z of a softmax classifier , a higher norm is required for the random noise to cause a prediction error with the same probability as the adversarial noise .

The following result shows the relationship between the random noise and adversarial noise , under softmax classifier with a number of classes equal to :

Lemma 1. Let . The following inequalities hold between the norm square of the random noise and the adversarial noise:

(5)

with probability exceeding .

The proof of Lemma 1 can be found in the Supplementary Material. The lemma states that if the norm of the random noise is , it does not change the classifier decision with high probability.

Based on Lemma 1, we can rewrite our optimization problem. Assume that we have a model with accuracy . After adding random noise on the last feature map , the model accuracy drops by . If we have

(6)

we have the relation between accuracy degradation and noise as:

(7)

The detailed analysis can be found in the Supplementary Material. Eq. (7) bounds the noise on the last feature map . However, adding quantization noise to different layers may have different effects on model accuracy. Suppose we have a model to quantize. By adding noise to the weights of a layer , we induce a noise on the last feature map . When quantizing earlier layers, the noise needs to pass through more layers to reach , which results in a low-rank noise . For example, quantizing the first layer results in , and ; quantizing the last layer results in , and . In order for to have an equivalent effect on model accuracy as , should be larger than .

By considering the different effects of caused by quantization in different layers, we rewrite Eq. (7) in a more precise form:

(8)

Here is the robustness parameter of layer under accuracy degradation .

Eq. (8) shows a precise relationship between and . If we add quantization noise to layer of model , and get noise on last feature map , then the model accuracy decreases by .

We consider the layer in model as , where is the feature map after layer . Here we consider that the noise is transferred through the layers under an almost linear transformation (to be discussed in later sections). If we add random noise to the weights of layer , the rank of the resulting noise on the last feature map is given as:

(9)

Based on Eq. (9), we have:

(10)

Eq. (10) suggests that, as the noise on earlier layers of the DNN needs to pass through more layers to affect the last feature map , the noise on has lower rank, resulting in a lower value of .

From Eq. (8), we can see in particular that when

(11)

the quantization on layer and layer have the same effect on model accuracy. Based on Eq. (11), can be a good measurement for estimating the accuracy degradation caused by quantization noise, regardless of which layer is quantized. Considering an input in dataset , we have the corresponding feature vector in the last feature map . By quantizing layer in model , we get noise on z. We define the accuracy measurement on layer as:

(12)

The way to calculate is given by:

(13)

The detailed method to calculate will be discussed in the experiment section. Note that, based on the optimization result in Eq. (22), the selected value of does not matter for the optimization result, as long as the value of is almost independent w.r.t. , which holds according to Fig. 3. Choosing a different value of therefore does not change the optimization result. In later sections, we use instead of for simplicity.

From Eq. (12), based on the linearity and additivity of the proposed estimation method (shown in later sections), the measurement of the effect of quantization error in all the layers of the DNN model is shown in Eq. (20).

After we define the accuracy measurement for each layer of the model, based on Eq. (8), we can rewrite the optimization in Eq. (1) as

(14)

where is the accuracy measurement for all layers, and is a constant related to model accuracy degradation , with higher indicating higher .

Linearity of the measurements

In this section we show that the DNN model is locally linear with respect to the quantization noise measurement , under the assumption that the quantization noise is much smaller than the original value: . That is, if a quantization noise on a layer leads to a noise on the last feature vector , then scaling the quantization noise scales the resulting noise on by the same factor.

For linear layers like convolutional layers and fully connected layers in the DNN model, the linearity for noise is obvious. Here we mainly focus on the non-linear layers in the DNN model, such as ReLU and Max-pooling layers.

ReLU layers

The ReLU layer is widely used to provide nonlinear activation in DNNs. Given the input to a ReLU layer, the output value is calculated as:

(15)

From Eq. (15) we can see that the ReLU layer is linear with respect to noise in most cases. The non-linearity appears only when the noise crosses the zero point, which has small probability when the noise is sufficiently small.

Max-pooling layers

Max-pooling is a nonlinear downsampling layer that reduces the input dimension and controls overfitting. We can consider that max-pooling acts as a filter to the feature maps.

Similarly to the ReLU layer, which can be described as , the max-pooling layer can be described as , where , with the kernel size for max-pooling. The linearity with respect to noise holds when the noise is sufficiently small and does not alter the order of the values within each pooling window.

Other layers

For other non-linear layers like Sigmoid and PReLU, the linearity for small noise still holds under the assumptions that the function is smooth over most of the input range, and that the noise has very low probability of crossing the non-linear region.
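These local-linearity claims can be checked numerically. The sketch below perturbs the input of a ReLU and of a 1-D max-pooling operation with small and large noise, and compares the exact output change against the linearized response (noise passed through where the input is positive, or at the pre-noise pooling winner). The shapes, noise scales, and 1-D pooling are illustrative simplifications, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda v: np.maximum(v, 0.0)

    x = rng.normal(size=10_000)
    k = 2                                             # max-pooling kernel/stride
    xp = x.reshape(-1, k)                             # 1-D pooling windows
    win = np.argmax(xp, axis=1)                       # winner of each window before noise

    for scale in (1e-3, 1.0):                         # small vs. large perturbation
        eps = rng.normal(scale=scale, size=x.shape)
        # ReLU: exact change vs. linearized response (noise passes where x > 0)
        d_relu = relu(x + eps) - relu(x)
        lin_relu = eps * (x > 0)
        # Max-pooling: exact change vs. linearized response (noise of the old winner passes)
        d_pool = (xp + eps.reshape(-1, k)).max(axis=1) - xp.max(axis=1)
        lin_pool = eps.reshape(-1, k)[np.arange(len(win)), win]
        print(f"scale {scale:g}: ReLU dev "
              f"{np.linalg.norm(d_relu - lin_relu) / np.linalg.norm(lin_relu):.3f}, "
              f"max-pool dev {np.linalg.norm(d_pool - lin_pool) / np.linalg.norm(lin_pool):.3f}")
    # Deviations are near zero for small noise and become significant only when the noise
    # crosses ReLU's zero point or changes the pooling winner.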

Based on the linearity assumption, as we model the quantization noise on the weights as in Eq. (3), the resulting noise on the last feature vector can be modeled as:

(16)

Additivity of the measurements

Noise on single multiplication

Pairwise multiplication is a basic operation in the convolutional and fully connected layers of a DNN. Given one value in the input and one value in the weight matrix , we have the pairwise multiplication . If we consider noise on both the input and the weight , we have the noisy values and , and finally .

Noise on one layer

Given a convolutional layer input with size , conv kernel with size , and stride size , we have the output feature map with size . Here and are the height and width of input, is the number of channels of input. and are the height and width of the conv kernel, is the depth of output feature map.

The analysis on fully connected layers will be similar to the analysis on convolutional layers. It can be considered as a special case of convolutional layers when , , , and are equal to 1. For a single value , the noise term of can be expressed as:

(17)

The calculation details can be found in the Supplementary Material. Note that the term can be ignored under the assumption that and are quantized with the same bit-width. The term can be ignored under the assumption that and .

From Eq. (17) we can see that: 1) adding noise to the input feature maps and to the weights separately and independently is equivalent to adding noise to both the input feature maps and the weights; 2) regarding the output feature map , adding noise to the input feature maps and weights and then performing the layer operation (pairwise product) is equivalent to adding the noise directly to the output feature map. We will use these two properties in later sections.
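A small numerical check of these two properties, treating the convolutional layer as a plain matrix multiplication for simplicity; the layer shapes and noise scales below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 256))                  # input feature maps, flattened per position
    W = rng.normal(scale=0.05, size=(256, 128))     # layer weights (conv treated as matmul)
    eps_x = rng.normal(scale=1e-3, size=x.shape)    # small noise on the input
    eps_w = rng.normal(scale=1e-4, size=W.shape)    # small noise on the weights

    y = x @ W
    # Property 1: noise injected separately vs. jointly
    n_sep = (x + eps_x) @ W - y + x @ (W + eps_w) - y      # separate injections, summed
    n_joint = (x + eps_x) @ (W + eps_w) - y                # joint injection
    print(np.linalg.norm(n_joint - n_sep) / np.linalg.norm(n_joint))  # small: the cross term eps_x @ eps_w is negligible

    # Property 2: noise injected before the layer vs. added directly to the output
    n_out = x @ eps_w + eps_x @ W                          # equivalent output-domain noise
    print(np.linalg.norm(((x + eps_x) @ (W + eps_w) - y) - n_out) / np.linalg.norm(n_out))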

Adding noise to multiple layers

Figure 2: Effect of adding noise to multiple layers.

Fig. 2 shows a 2-layer module inside a DNN model. Given the input feature map , an intermediate feature map is generated after the first conv layer, and the output feature map is generated after the second conv layer. The first two panels of Fig. 2 show the effect of noise on layers 1 and 2, respectively, and the last panel shows the effect of noise on both layers 1 and 2. By analysing the additivity of , we have:

(18)

Detailed analysis can be found in the Supplementary Material. Eq. (18) holds under the assumption that and are independent. This is reasonable in our case, as and are caused by and , which are two independent quantization noises. This independence between and is also important for our proposed estimation method.

We can extend Eq. (18) to the situation of layers:

(19)

Considering the linearity and additivity of the proposed measurement, from Eq. (12) and Eq. (19), as well as the independence of the measurement among different layers, we have the measurement of the effect of quantization errors in all layers of the DNN model:

(20)

Eq. (20) suggests that the effect of adding noise to each layer separately and independently is equivalent to the effect of adding noise to all layers simultaneously. We use Eq. (12) as the measurement of the noise effect on layer , and the effect of adding noise to all layers can then be predicted using Eq. (20).
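The additivity in Eq. (19)-(20) can likewise be checked on a toy two-layer module: independent noise is added to each layer's weights separately and then to both simultaneously, and the squared norms of the resulting output noise are compared. The architecture and noise scales below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    relu = lambda v: np.maximum(v, 0.0)

    # Toy 2-layer module: x -> W1 -> ReLU -> W2 -> z (last feature vector)
    x = rng.normal(size=(128, 64))
    W1 = rng.normal(scale=0.1, size=(64, 64))
    W2 = rng.normal(scale=0.1, size=(64, 32))
    forward = lambda A, B: relu(x @ A) @ B

    z = forward(W1, W2)
    e1 = rng.normal(scale=1e-4, size=W1.shape)     # independent "quantization" noise, layer 1
    e2 = rng.normal(scale=1e-4, size=W2.shape)     # independent "quantization" noise, layer 2

    n1 = forward(W1 + e1, W2) - z                  # output noise from layer 1 alone
    n2 = forward(W1, W2 + e2) - z                  # output noise from layer 2 alone
    n12 = forward(W1 + e1, W2 + e2) - z            # output noise from both layers

    lhs = np.sum(n12 ** 2)
    rhs = np.sum(n1 ** 2) + np.sum(n2 ** 2)
    print(lhs, rhs, abs(lhs - rhs) / rhs)          # close for small, independent noise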

Layer-wise bit-width optimization

In this section we show the approach for optimizing the layer-wise bit-width quantization to achieve an optimal compression ratio under certain accuracy loss.

Following the discussion of the optimization problem in Eq. (14), our goal is to constrain Eq. (20) to a small value while minimizing the model size.

Adaptive quantization on multiple layers

Based on Eq. (16) and (20), the optimization Eq. (14) can be expressed as:

(21)

The optimal value of Eq. (21) can be reached when:

(22)

The detailed analysis can be found in Supplementary Material.

Optimal bit-width for each layer

From Eq. (22) we can directly find the optimal bit-width for each layer using the following procedure:

  • Calculate  (Eq. (13)):

    • First, calculate the mean value of adversarial noise for the dataset: .

    • Then, fix value. For example, . Note that the selection of value does not affect the optimization result.

    • For each layer , change the amount of noise added to the weights , until the accuracy degradation equals . Then, record the mean value of the noise on the last feature map : .

    • The value can be calculated as: .

    • The details for the calculation of can be found in Fig. 3.

  • Calculate :

    • First, for each layer , fix value. For example, use .

    • Then, record the mean value of noise on the last feature map : .

    • The value can be calculated using Eq. (16): .

  • Calculate :

    • Fix the bit-width for the first layer , for example . Then the bit-width for each other layer can be calculated using Eq. (22).

The detailed algorithm for the above procedure can be found in the Supplementary Material; a sketch is also given below. Note that, by selecting different values of , we achieve different quantization results: a lower value of results in a higher compression rate, as well as a higher accuracy degradation.
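The sketch below illustrates one possible reading of this procedure. It assumes that each layer's measured output noise scales as a per-layer constant times 4^(−b) (as in Eq. (16)), that each layer's contribution to accuracy loss is weighted by its robustness parameter, and that model size is the per-layer parameter count times the bit-width; minimizing size under a fixed noise budget then equalizes a weighted noise term across layers. This is a hedged sketch under those assumptions, not the paper's exact Eq. (22), and all numbers below are hypothetical.

    import numpy as np

    def allocate_bitwidths(phi, rho, n_params, b_ref=8, layer_ref=0):
        # phi[i]: measured output noise of layer i at the reference bit-width (Eq. (16)-style)
        # rho[i]: robustness parameter of layer i; n_params[i]: number of weights in layer i
        # Assumed optimality condition: (phi[i] / (rho[i] * n_params[i])) * 4**(-b_i) equal across layers.
        phi, rho, n_params = map(np.asarray, (phi, rho, n_params))
        score = phi / (rho * n_params)
        # Offset relative to the reference layer (factor 0.5*log2 because noise scales as 4**(-b))
        offset = 0.5 * np.log2(score / score[layer_ref])
        b = b_ref + offset
        return np.clip(np.round(b), 1, 16).astype(int)   # round to integer bit-widths

    # Hypothetical per-layer statistics, measured as in the procedure above
    phi = [3.0e-2, 1.1e-2, 8.0e-3, 2.5e-3]
    rho = [0.6, 0.8, 0.9, 1.0]
    n_params = [35e3, 307e3, 885e3, 4.1e6]
    print(allocate_bitwidths(phi, rho, n_params, b_ref=8))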

Comparison with SQNR-based approach

Based on the SQNR-based approach (?), the optimal bit-width is reached when:

(23)

The proof can be found in the Supplementary Material. Note that, compared with our result in Eq. (22), the parameters and are missing. This is consistent with the assumption of the SQNR-based approach that two layers quantized with the same bit-width have the same SQNR value, and hence equal effects on accuracy. This makes the SQNR-based approach a special case of our approach, in which all layers in the DNN model have an equal effect on model accuracy under the same bit-width.

Experimental results

In this section we show empirical results that validate our assumptions in previous sections, and evaluate the proposed bit-width optimization approach.

All code is implemented using MatConvNet (?). All experiments are conducted on a Dell workstation with an E5-2630 CPU and a Titan X Pascal GPU.

Empirical results about measurements

To validate the effectiveness of the proposed accuracy estimation method, we conduct several experiments. These experiments validate the relationship between the measurement and model accuracy, as well as the linearity and additivity of the measurement.

Here we use Alexnet (?), VGG-16 (?), GoogleNet (?), and Resnet (?) as the models for quantization. Each layer of a model is quantized separately using uniform quantization, possibly with a different bit-width per layer. The quantized model is then tested on the validation set of Imagenet (?), which contains 50,000 images in 1000 classes.

Calculate

As Eq. (12) is proposed to measure the robustness of each layer, we conduct an experiment to find value. We use Alexnet as an example.

First, we calculate the adversarial noise for Alexnet on the last feature vector . The calculation is based on Eq. (13). The mean value of for Alexnet is . The distribution of for Alexnet on Imagenet validation set can be found in Supplementary Material.

After finding value, the value of is calculated based on Fig. 3 and Eq. (13). We set the accuracy degradation to be roughly half of original accuracy (57%), which is . Based on the values in Fig. 3, to are equal to , , and .

Here we show, for Alexnet, an example of how to calculate the value. Note that for other networks like VGG-16, GoogleNet, and Resnet, we also observe that only the values for the last one or two layers are obviously different from the other values. During the calculation, we can thus focus on the values for the last several layers. Furthermore, in Fig. 3, we find the relationship for different amounts of noise, which requires many evaluations. In practice, we use binary search to find appropriate points under the same accuracy degradation, which makes the calculation of fast and efficient. Typically, for a deep model with layers and a dataset of a given size, we require forward passes to calculate the accuracy, where is the number of trials over one layer. We can reduce this to (with ) by only calculating the values for the last layers.

In our experiments, the calculation of is the most time-consuming part of our algorithm. It takes around 15 minutes to calculate the values for Alexnet (30 seconds for a forward pass over the whole dataset), and around 6 hours for Resnet-50 (2 minutes for a forward pass over the whole dataset). This time can be reduced if we only calculate the values for the last few layers.
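The binary search mentioned above can be sketched as follows. Here evaluate_accuracy and add_weight_noise are hypothetical placeholders for framework-specific routines, and the accuracy drop is assumed to be monotone in the noise scale, consistent with the trend in Fig. 3.

    def find_noise_scale(layer, target_drop, base_acc, evaluate_accuracy, add_weight_noise,
                         lo=1e-6, hi=1.0, tol=0.005, max_iter=20):
        # Binary-search the weight-noise scale whose accuracy drop matches target_drop.
        # evaluate_accuracy(model) -> accuracy on the validation set
        # add_weight_noise(layer, scale) -> model with noise of the given scale on that layer
        for _ in range(max_iter):
            mid = (lo + hi) / 2.0
            drop = base_acc - evaluate_accuracy(add_weight_noise(layer, mid))
            if abs(drop - target_drop) < tol:
                break
            if drop < target_drop:
                lo = mid          # noise too weak: increase the scale
            else:
                hi = mid          # noise too strong: decrease the scale
        return mid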

Figure 3: The relationship between different and model accuracy.

Linearity of measurements

Figure 4: The relationship between and .

Fig. 4 shows the relationship between the norm square of the noise on the quantized weights and for different layers. When the quantization noise on the weights is small, we observe linear relationships. Interestingly, when the quantization noise is large, the curves do not follow exact linearity, and the curves for earlier layers are less linear than those for later layers. One possible explanation is that earlier layers in a DNN model are affected by more non-linear layers, such as ReLU and max-pooling layers. When the noise is large enough to reach the non-linear part of the layer functions (e.g. the zero point of the ReLU function), the curves become non-linear. It is worth noting that, by the time the non-linearity appears in most layers, the accuracy of the model is already heavily affected (near zero), so this non-linearity does not affect our quantization optimization process.

Additivity of measurements

Figure 5: The value of when quantizing each layer separately, compared to when quantizing all layers simultaneously.

Fig. 5 shows the relationship between when we quantize each layer separately and the value when we quantize all layers together. We can see that when the quantization noise is small, the results closely follow our analysis; this validates the additivity of . When the quantization noise is large, the additivity of is no longer accurate. This fits our assumption in Eq. (17), where the additivity holds under the stated condition for all layers in the DNN model. When the noise is high enough for the additivity to become inaccurate, the model accuracy is already heavily degraded (near zero); hence this does not affect the quantization optimization process, which operates in the low-noise regime.

Optimal bit-width for models

After validating the proposed measurement, we conduct experiments to show the results of adaptive quantization. Here we use Alexnet (?), VGG-16 (?), GoogleNet (?), and Resnet-50 (?) to test our bit-width optimization approach. As in the previous experiments, the validation set of Imagenet is used. As the SQNR-based method (?) only works for convolutional layers, we keep the fully connected layers at 16 bits.

Figure 6: Model size after quantization vs. accuracy. To compare with the SQNR-based method (?), only convolutional layers are quantized.

Fig. 6 shows the quantization results using our method, the SQNR-based method (?), and equal bit-width quantization. Equal bit-width quantization means that the number of quantization intervals is the same in all layers. For all three methods, we use uniform quantization for each layer. We can see that for all networks, our proposed method outperforms the SQNR-based method and achieves a smaller model size for the same accuracy degradation. It is interesting to see that the SQNR-based method does not obviously outperform equal quantization on the Resnet-50 model. One possible reason is that Resnet-50 contains convolutional layers in its "bottleneck" structure that are similar to fully connected layers, and as the authors state in (?), the SQNR-based method does not work for fully connected layers. Note that our method generates more datapoints in the figure, because the optimal bit-widths for different layers may be non-integer; by rounding the optimal bit-widths in different ways, we can generate more bit-width combinations than the SQNR-based method.

The results for quantization on all layers are shown in the Supplementary Material. Our method achieves a smaller model size under the same accuracy degradation for all four models, with larger gains for Alexnet and VGG-16 than for GoogleNet and Resnet-50. These results indicate that our proposed quantization method works better for models with more diverse layer sizes and structures, like Alexnet and VGG.

Conclusions

Parameter quantization is an important process to reduce the computation and memory costs of DNNs, and to deploy complex DNNs on mobile devices. In this work, we propose an efficient approach to optimize the layer-wise bit-width for parameter quantization. We propose a measurement that relates quantization to model accuracy, and analyse it theoretically. We show that the proposed approach is more general and accurate than previous quantization optimization approaches. Experimental results show that our method outperforms previous works, achieving higher compression rates than the SQNR-based method and equal bit-width quantization. For future work, we will consider combining our method with fine-tuning and other model compression methods to achieve better compression results.

References

  • [Anwar, Hwang, and Sung 2015] Anwar, S.; Hwang, K.; and Sung, W. 2015. Fixed point optimization of deep convolutional neural networks for object recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1131–1135.
  • [Deng, Hinton, and Kingsbury 2013] Deng, L.; Hinton, G.; and Kingsbury, B. 2013. New types of deep neural network learning for speech recognition and related applications: An overview. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, 8599–8603.
  • [Do, Doan, and Cheung 2016] Do, T.-T.; Doan, A.-D.; and Cheung, N.-M. 2016. Learning to hash with binary deep neural network. In European Conference on Computer Vision (ECCV), 219–234. Springer.
  • [Fawzi, Moosavi-Dezfooli, and Frossard 2016] Fawzi, A.; Moosavi-Dezfooli, S.-M.; and Frossard, P. 2016. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems (NIPS). 1632–1640.
  • [Figurnov et al. 2016] Figurnov, M.; Ibraimova, A.; Vetrov, D. P.; and Kohli, P. 2016. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In Advances in Neural Information Processing Systems (NIPS), 947–955.
  • [Gray and Neuhoff 2006] Gray, R. M., and Neuhoff, D. L. 2006. Quantization. IEEE Transactions on Information Theory (TIT) 44(6):2325–2383.
  • [Gupta et al. 2015] Gupta, S.; Agrawal, A.; Gopalakrishnan, K.; and Narayanan, P. 2015. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 1737–1746.
  • [Han et al. 2016] Han, S.; Liu, X.; Mao, H.; Pu, J.; Pedram, A.; Horowitz, M. A.; and Dally, W. J. 2016. EIE: Efficient inference engine on compressed deep neural network. In Proceedings of the IEEE International Symposium on Computer Architecture (ISCA), 243–254.
  • [Han, Mao, and Dally 2015] Han, S.; Mao, H.; and Dally, W. J. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149.
  • [He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 770–778.
  • [Hinton, Vinyals, and Dean 2015] Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • [Hoang et al. 2017] Hoang, T.; Do, T.-T.; Tan, D.-K. L.; and Cheung, N.-M. 2017. Selective deep convolutional features for image retrieval. arXiv preprint arXiv:1707.00809.
  • [Hwang and Sung 2014] Hwang, K., and Sung, W. 2014. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In 2014 IEEE Workshop on Signal Processing Systems (SiPS), 1–6.
  • [Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (NIPS), 1097–1105.
  • [Lin, Talathi, and Annapureddy 2016] Lin, D.; Talathi, S.; and Annapureddy, S. 2016. Fixed point quantization of deep convolutional networks. In International Conference on Machine Learning (ICML), 2849–2858.
  • [Pang, Du, and Zhu 2017] Pang, T.; Du, C.; and Zhu, J. 2017. Robust deep learning via reverse cross-entropy training and thresholding test. arXiv preprint arXiv:1706.00633.
  • [Romero et al. 2014] Romero, A.; Ballas, N.; Kahou, S. E.; Chassang, A.; Gatta, C.; and Bengio, Y. 2014. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550.
  • [Simonyan and Zisserman 2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.
  • [Sun, Lin, and Wang 2016] Sun, F.; Lin, J.; and Wang, Z. 2016. Intra-layer nonuniform quantization of convolutional neural network. In 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP), 1–5.
  • [Szegedy et al. 2015] Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2015. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 1–9.
  • [Vedaldi and Lenc 2015] Vedaldi, A., and Lenc, K. 2015. MatConvNet – Convolutional neural networks for MATLAB. In Proceedings of the ACM Int. Conf. on Multimedia.
  • [Wu et al. 2016] Wu, J.; Leng, C.; Wang, Y.; Hu, Q.; and Cheng, J. 2016. Quantized convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4820–4828.
  • [You 2010] You, Y. 2010. Audio Coding: Theory and Applications. Springer Science & Business Media.
  • [Zeiler and Fergus 2014] Zeiler, M. D., and Fergus, R. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision (ECCV), 818–833. Springer.
  • [Zhou et al. 2016] Zhou, Y.; Do, T. T.; Zheng, H.; Cheung, N. M.; and Fang, L. 2016. Computation and memory efficient image segmentation. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) PP(99):1–1.

Supplementary Material: Adaptive Quantization for Deep Neural Network

Measuring the effect of quantization noise

Quantization noise

Assume that conducting quantization on a value is equivalent to adding noise to the value:

(2 revisited)

Here is the original value, , is the set of all weights in a layer; is the quantized value, and is the quantization noise. Assume we use a uniform quantizer whose stepsize (the quantization interval) is fixed. Then the quantization noise follows a uniform distribution over the range , where is the quantization interval. Based on this, has zero mean, and the variance of the noise is the squared quantization interval divided by 12. Then we have .

Following the uniform quantization analysis in (?), given the weights in a layer, . If we quantize the weights with bits, the total number of intervals is , and the quantization interval is . If we consider as the quantization noise on all weights in , the expectation of the squared noise is:

(3 revisited)

where , is the number of weights in , and . Eq. (3) indicates that every time we reduce the bit-width by 1 bit, increases by a factor of 4. This is equivalent to the quantization efficiency of 6 dB/bit mentioned in (?).

The property of softmax classifier

Figure 1: Simple DNN model architecture.

Similar to the analysis in (?), we analyse the property of softmax classifier.

A DNN classifier can be expressed as a mapping function , where is the input variable, is the parameters, and denotes the number of classes.

From Fig. 1, here we divide the DNN into two parts. In the first part, we have a mapping function , which maps input variables into the feature vectors for the last layer of DNN. In the second part, we have the softmax function as , , where .

The final classification result can be calculated by picking the maximum of the softmax values: , . Note that this is equivalent to picking the maximum value of the feature vector z: , . So we can see that the softmax classifier has a linear decision boundary in the feature vectors  (?).
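Since the exponential is strictly increasing, the softmax cannot change which entry of z is largest; a short numerical check with an arbitrary random feature vector:

    import numpy as np

    z = np.random.default_rng(0).normal(size=1000)      # a last-layer feature vector
    s = np.exp(z - z.max())
    s /= s.sum()                                         # numerically stable softmax
    assert np.argmax(s) == np.argmax(z)                  # same decision: the boundary is linear in z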

Proof of Lemma 1

Lemma 1.
Proof.

Based on Theorem 1 in (?), for an -class classifier, the norm of a random noise to fool the classifier can be bounded from below by

(24)

with a probability exceeding , where . For the softmax layer, , therefore one can write

(25)

Furthermore, , where is largest element of . Put , hence

(26)

On the other hand,

(27)

Therefore,

(28)

which concludes the proof. ∎

Relationship between accuracy and noise

The original quantization optimization problem:

(1 revisited)

Lemma 1 states that if the norm of the random noise is , it does not change the classifier decision with high probability. In particular, from Lemma 1 (specifically Eq. (28)), the probability of misclassification can be expressed as:

(29)

Eq. (29) suggests that if we limit the noise to be less than , the probability of misclassification should be less than .

Based on Lemma 1 and Eq. (29), we formulate the relationship between the noise and model accuracy. Assume that we have a model with accuracy . After adding random noise on the last feature map , the model accuracy drops by . If we assume that the accuracy degradation is caused by the noise , we can see that the value in Eq. (29) is closely related to :

(30)

If we have:

(6 revisited)

From Eq. (29), we have:

(31)

Eq. (31) indicates that by limiting the noise to be less than , we can approximately assume that the model accuracy drops by less than . As is strictly decreasing, is strictly increasing w.r.t. . So as the model has a higher accuracy degradation , the noise limit also increases.

Based on Eq. (31), we have the relation between accuracy degradation and noise as:

(7 revisited)

Calculation of noise on convolutional layer

Given a convolutional layer input with size , conv kernel with size , and stride size , we have the output feature map with size . Here and are the height and width of input, is the number of channel of input. and are the height and width of conv kernel, is the depth of output feature map.

The analysis on fully connected layers will be similar to the analysis on convolutional layers. It can be considered as a special case of convolutional layers when , , , and are equal to 1.

Based on the definition of convolutional operation, for a single value , the value is calculated as:

(32)

where is the weight, and is the bias. and .

As we consider noise on both input feature maps and weights, the Eq. (32) will become:

(33)

Then the noise term of can be expressed as: