
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

Abstract

We introduce a method to train Quantized Neural Networks (QNNs) — neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv and Yoshua Bengio

Keywords: Deep Learning, Neural Networks Compression, Energy Efficient Neural Networks, Computer Vision, Language Models.

1 Introduction

Deep Neural Networks (DNNs) have substantially pushed Artificial Intelligence (AI) limits in a wide range of tasks, including but not limited to object recognition from images (Krizhevsky-2012-small; Szegedy-et-al-arxiv2014), speech recognition (Hinton-et-al-2012; Sainath-et-al-ICASSP2013), statistical machine translation (Devlin-et-al-ACL2014; Sutskever-et-al-NIPS2014; Bahdanau-et-al-ICLR2015-small), Atari and Go games (Mnih-et-al-2015; Silver-et-al-2016), and even computer generation of abstract art (Mordvintsev-et-al-2015).

Training or even just using neural network (NN) algorithms on conventional general-purpose digital hardware (Von Neumann architecture) has been found highly inefficient due to the massive amount of multiply-accumulate operations (MACs) required to compute the weighted sums of the neurons’ inputs. Today, DNNs are almost exclusively trained on one or many very fast and power-hungry Graphic Processing Units (GPUs) (Coates-et-al-2013). As a result, it is often a challenge to run DNNs on target low-power devices, and substantial research efforts are invested in speeding up DNNs at run-time on both general-purpose (Vanhoucke-et-al-2011; Gong-et-al-2014; Romero-et-al-2014; Han-et-al-2015) and specialized computer hardware (Farabet-et-al-2011-a; Farabet-et-al-2011-b; Pham-et-al-2012; Chen-et-al-ACM2014; Chen-et-al-IEEE2014; Esser-et-al-2015).

The most common approach is to compress a trained (full precision) network. HashedNets (chen2015compressing) reduce model sizes by using a hash function to randomly group connection weights and force them to share a single parameter value. Gong-et-al-2014 compressed deep convnets using vector quantization, which resulted in only a minor accuracy loss. However, both methods focused only on the fully connected layers. A recent work by Han2015 successfully pruned several state-of-the-art large scale networks and showed that the number of parameters could be reduced by an order of magnitude.

Recent works have shown that more computationally efficient DNNs can be constructed by quantizing some of the parameters during the training phase. In most cases, DNNs are trained by minimizing some error function using Back-Propagation (BP) or related gradient descent methods. However, such an approach cannot be directly applied if the weights are restricted to binary values. Soudry-et-al-NIPS2014-small used a variational Bayesian approach with Mean-Field and Central Limit approximations to calculate the posterior distribution of the weights (the probability of each weight being +1 or -1). During the inference stage (test phase), their method samples one binary network from this distribution and uses it to predict the targets of the test set (more than one binary network can also be used). Courbariaux2015 similarly used two sets of weights, real-valued and binary. They, however, updated the real-valued version of the weights by using gradients computed by applying forward and backward propagation with the set of binary weights (which was obtained by quantizing the real-valued weights to +1 and -1).

This study proposes a more advanced technique, referred to as Quantized Neural Network (QNN), for quantizing the neurons and weights during inference and training. In such networks, all MAC operations can be replaced with XNOR and popcount (i.e., counting the number of ones in a binary word) operations. This is especially useful in QNNs with extremely low precision — for example, when only 1-bit is used per weight and activation, leading to a Binarized Neural Network (BNN). The proposed method is particularly beneficial for implementing large convolutional networks whose neuron-to-weight ratio is very large.

This paper makes the following contributions:

  • We introduce a method to train Quantized-Neural-Networks (QNNs), neural networks with low precision weights and activations, at run-time, and when computing the parameter gradients at train-time. In the extreme case QNNs use only 1-bit per weight and activation (i.e., a Binarized NN; see Section 2).

  • We conduct two sets of experiments, each implemented on a different framework, namely Torch7 and Theano, which show that it is possible to train BNNs on MNIST, CIFAR-10 and SVHN and achieve near state-of-the-art results (see Section 4). Moreover, we report results on the challenging ImageNet dataset using binary weights/activations as well as quantized versions of them (more than 1-bit).

  • We present preliminary results on quantized gradients and show that it is possible to use only 6 bits with only a small accuracy degradation.

  • We present results for the Penn Treebank dataset using language models (vanilla RNNs and LSTMs) and show that with 4-bit weights and activations Recurrent QNNs achieve accuracies similar to those of their 32-bit floating point counterparts.

  • We show that during the forward pass (both at run-time and train-time), QNNs drastically reduce memory consumption (size and number of accesses), and replace most arithmetic operations with bit-wise operations. A substantial increase in power efficiency is expected as a result (see Section 5). Moreover, a binarized CNN can lead to binary convolution kernel repetitions; we argue that dedicated hardware could reduce the time complexity by 60%.

  • Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy (see Section 6).

  • The code for training and applying our BNNs is available on-line (in both the Theano framework 1 and the Torch framework 2).

2 Binarized Neural Networks

In this section, we detail our binarization function, show how we use it to compute the parameter gradients, and how we backpropagate through it.

2.1 Deterministic vs Stochastic Binarization

When training a BNN, we constrain both the weights and the activations to either +1 or -1. Those two values are very advantageous from a hardware perspective, as we explain in Section 6. In order to transform the real-valued variables into those two values, we use two different binarization functions, as proposed by Courbariaux-et-al-2015. The first binarization function is deterministic:

x^b = Sign(x) = +1 if x ≥ 0, and -1 otherwise,    (1)

where x^b is the binarized variable (weight or activation) and x the real-valued variable. It is very straightforward to implement and works quite well in practice. The second binarization function is stochastic:

x^b = +1 with probability p = σ(x), and -1 with probability 1 − p,    (2)

where σ is the “hard sigmoid” function:

σ(x) = clip((x + 1)/2, 0, 1) = max(0, min(1, (x + 1)/2)).    (3)

This stochastic binarization is more appealing theoretically (see Section 4) than the sign function, but somewhat harder to implement as it requires the hardware to generate random bits when quantizing (torii2016asic). As a result, we mostly use the deterministic binarization function (i.e., the sign function), with the exception of activations at train-time in some of our experiments.
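For concreteness, here is a minimal NumPy sketch of the two binarization functions and the hard sigmoid; the function names are ours and are not taken from the released code.

import numpy as np

def hard_sigmoid(x):
    # sigma(x) = clip((x + 1) / 2, 0, 1), as in Eq. (3).
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def binarize_deterministic(x):
    # x^b = Sign(x): +1 if x >= 0, -1 otherwise, as in Eq. (1).
    return np.where(x >= 0, 1.0, -1.0)

def binarize_stochastic(x, rng=None):
    # x^b = +1 with probability p = sigma(x), -1 with probability 1 - p, as in Eq. (2).
    rng = np.random.default_rng() if rng is None else rng
    p = hard_sigmoid(x)
    return np.where(rng.random(np.shape(x)) < p, 1.0, -1.0)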

2.2 Gradient Computation and Accumulation

Although our BNN training method utilizes binary weights and activations to compute the parameter gradients, the real-valued gradients of the weights are accumulated in real-valued variables, as per Algorithm 1. Real-valued weights are likely required for Stochastic Gradient Descent (SGD) to work at all. SGD explores the space of parameters in small and noisy steps, and that noise is averaged out by the stochastic gradient contributions accumulated in each weight. Therefore, it is important to maintain sufficient resolution for these accumulators, which at first glance suggests that high precision is absolutely required.

Moreover, adding noise to weights and activations when computing the parameter gradients provides a form of regularization that can help the network generalize better, as previously shown with variational weight noise (Graves-2011-practical), Dropout (Srivastava14) and DropConnect (Wan+al-ICML2013-small). Our method of training BNNs can be seen as a variant of Dropout, in which instead of randomly setting half of the activations to zero when computing the parameter gradients, we binarize both the activations and the weights.

2.3 Propagating Gradients Through Discretization

The derivative of the sign function is zero almost everywhere, making it apparently incompatible with back-propagation, since the exact gradients of the cost with respect to the quantities before the discretization (pre-activations or weights) are zero. Note that this limitation remains even if stochastic quantization is used. Bengio-arxiv2013 studied the question of estimating or propagating gradients through stochastic discrete neurons. He found that the fastest training was obtained when using the “straight-through estimator,” previously introduced in Hinton’s lectures (Hinton-Coursera2012). We follow a similar approach but use the version of the straight-through estimator that takes into account the saturation effect, and uses deterministic rather than stochastic sampling of the bit. Consider the sign function quantization

q = Sign(r),

and assume that an estimator g_q of the gradient ∂C/∂q has been obtained (with the straight-through estimator when needed). Then, our straight-through estimator of ∂C/∂r is simply

g_r = g_q 1_{|r| ≤ 1}.    (4)

Note that this preserves the gradient information and cancels the gradient when |r| is too large. Not cancelling the gradient when |r| is too large significantly worsens performance. To better understand why the straight-through estimator works well, consider the stochastic binarization scheme in Eq. (2) and rewrite σ(x) = (HT(x) + 1)/2, where HT is the well-known “hard tanh”,

HT(x) = Clip(x, -1, 1) = max(-1, min(1, x)).    (5)

In this case the input to the next layer has the following form,

W^b x^b = W^b (HT(x) + n) = W^b HT(x) + W^b n,

where we use the fact that HT(x) = E[x^b] is the expectation of x^b over the stochastic binarization (see Eqs. (2) and (5)), and define n = x^b − HT(x) as binarization noise with mean equal to zero. When the layer is wide, we expect the deterministic mean term to dominate, because the noise term is a summation over many independent binarizations from all the neurons in the previous layer. Thus, we argue that the binarization noise n can be ignored when performing differentiation in the backward propagation stage. Therefore, we replace ∂x^b/∂x (which cannot be computed) with

∂E[x^b]/∂x = ∂HT(x)/∂x = 1_{|x| ≤ 1},    (6)

which is exactly the straight-through estimator defined in Eq (4). The use of this straight-through estimator is illustrated in Algorithm 1.
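As an illustration, the straight-through estimator can be written as a custom autograd function. The sketch below uses PyTorch purely for exposition (the paper's released code is in Theano and Torch7), and the class name is ours.

import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign in the forward pass; straight-through estimator with saturation in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # q = Sign(x): map every entry to {-1, +1}
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # g_r = g_q * 1_{|r| <= 1} (Eq. (4)): pass the gradient through, cancel it where |x| > 1.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage inside a layer's forward pass: y = BinarizeSTE.apply(x)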

A similar binarization process is applied to the weights, for which we combine two ingredients:

  • Project each real-valued weight to [-1,1], i.e., clip the weights during training, as per Algorithm 1. The real-valued weights would otherwise grow very large without any impact on the binary weights.

  • When using a weight w_r, quantize it using w^b = Sign(w_r).

Projecting the weights to [-1,1] is consistent with cancelling the gradient when |w_r| > 1, according to Eq. (4).
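Putting the two ingredients together, here is a minimal NumPy sketch of how one weight tensor could be handled at each training step; plain SGD is used for brevity, whereas Algorithm 1 below uses ADAM or the shift-based AdaMax.

import numpy as np

def weight_step(w_real, grad_wb, lr):
    # grad_wb is the gradient computed with the binary weights w^b = Sign(w_real).
    w_real = w_real - lr * grad_wb            # accumulate updates in the real-valued weights
    w_real = np.clip(w_real, -1.0, 1.0)       # project to [-1, 1] so they cannot grow without effect
    w_bin = np.where(w_real >= 0, 1.0, -1.0)  # binary weights used in the next forward/backward pass
    return w_real, w_bin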

Require: a minibatch of inputs and targets (a_0, a*), previous weights W, previous BatchNorm parameters θ, weight initialization coefficients γ from (GlorotAISTATS2010-small), and previous learning rate η.
Ensure: updated weights W^{t+1}, updated BatchNorm parameters θ^{t+1} and updated learning rate η^{t+1}.
  {1. Computing the parameter gradients:}
  {1.1. Forward propagation:}
  for k = 1 to L do
     W_k^b ← Binarize(W_k)
     s_k ← a_{k-1}^b W_k^b
     a_k ← BatchNorm(s_k, θ_k)
     if k < L then
        a_k^b ← Binarize(a_k)
     end if
  end for
  {1.2. Backward propagation:}
  {Note that the gradients are not binary.}
  Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*
  for k = L to 1 do
     if k < L then
        g_{a_k} ← g_{a_k^b} ∘ 1_{|a_k| ≤ 1}
     end if
     (g_{s_k}, g_{θ_k}) ← BackBatchNorm(g_{a_k}, s_k, θ_k)
     g_{a_{k-1}^b} ← g_{s_k} W_k^b
     g_{W_k^b} ← g_{s_k}^T a_{k-1}^b
  end for
  {2. Accumulating the parameter gradients:}
  for k = 1 to L do
     θ_k^{t+1} ← Update(θ_k, η, g_{θ_k})
     W_k^{t+1} ← Clip(Update(W_k, γ_k η, g_{W_k^b}), -1, 1)
     η^{t+1} ← λη
  end for
Algorithm 1: Training a BNN. C is the cost function for the minibatch, λ the learning rate decay factor, and L the number of layers. ∘ stands for element-wise multiplication. The function Binarize() specifies how to (stochastically or deterministically) binarize the activations and weights, and Clip() how to clip the weights. BatchNorm() specifies how to batch-normalize the activations, using either batch normalization (Ioffe+Szegedy-2015) or the shift-based variant we describe in Algorithm 2. BackBatchNorm() specifies how to backpropagate through the normalization. Update() specifies how to update the parameters when their gradients are known, using either ADAM (kingma2014adam) or the shift-based AdaMax we describe in Algorithm 3.
Require: Values of x over a mini-batch: B = {x_1, ..., x_m}; parameters to be learned: γ, β.
Ensure: {y_i = BN(x_i, γ, β)}
   μ_B ← (1/m) Σ_{i=1}^{m} x_i   {mini-batch mean}
   C(x_i) ← x_i − μ_B   {centered input}
   σ_B^2 ← (1/m) Σ_{i=1}^{m} (C(x_i) <<>> AP2(C(x_i)))   {apx variance}
   x̂_i ← C(x_i) <<>> AP2((√(σ_B^2 + ε))^{-1})   {normalize}
   y_i ← AP2(γ) <<>> x̂_i   {scale and shift}
Algorithm 2: Shift-based Batch Normalizing Transform, applied to activation x over a mini-batch. AP2(z) is the approximate power-of-2 of z (see footnote 3), and <<>> stands for both left and right binary shift.
Require: Previous parameters θ_{t-1}, their gradient g_t, and learning rate α.
Ensure: Updated parameters θ_t
  {Biased 1st and 2nd raw moment estimates:}
  m_t ← β_1 · m_{t-1} + (1 − β_1) · g_t
  v_t ← max(β_2 · v_{t-1}, |g_t|)
  {Updated parameters:}
  θ_t ← θ_{t-1} − (α <<>> (1 − β_1^t)) · m_t <<>> v_t^{-1}
Algorithm 3: Shift-based AdaMax learning rule (kingma2014adam). g_t^2 indicates the element-wise square g_t ∘ g_t. Good default settings are α = 2^{-10}, 1 − β_1 = 2^{-3}, 1 − β_2 = 2^{-10}. All operations on vectors are element-wise. With β_1^t and β_2^t we denote β_1 and β_2 to the power t.
Require: 8-bit input vector a_0, binary weights W^b, and BatchNorm parameters θ.
Ensure: the MLP output a_L.
  {1. First layer:}
  a_1 ← 0
  for n = 1 to 8 do
     a_1 ← a_1 + 2^{n-1} × XnorDotProduct(a_0^n, W_1^b)
  end for
  a_1^b ← Sign(BatchNorm(a_1, θ_1))
  {2. Remaining hidden layers:}
  for k = 2 to L − 1 do
     a_k ← XnorDotProduct(a_{k-1}^b, W_k^b)
     a_k^b ← Sign(BatchNorm(a_k, θ_k))
  end for
  {3. Output layer:}
  a_L ← XnorDotProduct(a_{L-1}^b, W_L^b)
  a_L ← BatchNorm(a_L, θ_L)
Algorithm 4: Running a BNN with L layers.

2.4 Shift-based Batch Normalization

Batch Normalization (BN) (Ioffe+Szegedy-2015) accelerates the training and reduces the overall impact of the weight scale (Courbariaux-et-al-2015). The normalization procedure may also help to regularize the model. However, at train-time, BN requires many multiplications (calculating the standard deviation and dividing by it, namely, dividing by the running variance, which is the weighted mean of the training set activation variance). Although the number of scaling calculations is the same as the number of neurons, in the case of ConvNets this number is quite large. For example, in the CIFAR-10 dataset (using our architecture), the output of the first convolution layer, which consists of only a few small filter masks, is almost two orders of magnitude larger than the number of weights in that layer (87.1 times larger, to be exact). To achieve the results that BN would obtain, we use a shift-based batch normalization (SBN) technique, presented in Algorithm 2. SBN approximates BN almost without multiplications. Define AP2(z) as the approximate power-of-2 of z (i.e., the power of two whose exponent is the index of the most significant bit (MSB)), and <<>> as both left and right binary shift. SBN replaces almost all multiplications with power-of-2 approximations and shift operations:

x × y ≈ x <<>> AP2(y).    (7)

The only operation which is not a binary shift or an add is the inverse square root (see the normalization operation in Algorithm 2). From the early work of lomont2003fast we know that the inverse square root operation could be applied with approximately the same complexity as multiplication. There are also faster methods, which involve lookup table tricks that typically obtain lower accuracy (this may not be an issue, since our procedure already adds a lot of noise). However, the number of values on which we apply the inverse square root operation is rather small, since it is done after calculating the variance, i.e., after averaging (for a more precise calculation, see the BN analysis in Lin2015). Furthermore, the size of the standard deviation vectors is relatively small. For example, these values make up only a tiny fraction of the network size (i.e., the number of learnable parameters) in the CIFAR-10 network we used in our experiments.

In the experiment we observed no loss in accuracy when using the shift-based BN algorithm instead of the vanilla BN algorithm.
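To make the AP2 idea concrete, here is a small NumPy sketch of the approximate power-of-2 operator and of a multiplication replaced by it. The rounding convention (nearest power of two) is our assumption based on the MSB description above; the released code may differ.

import numpy as np

def ap2(x):
    # Approximate power-of-2: keep the sign and round the magnitude to a power of two
    # (in hardware this amounts to reading the index of the most significant bit).
    x = np.asarray(x, dtype=np.float64)
    return np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x) + 1e-300))

def shift_mul(x, y):
    # x * AP2(y): multiplying by a power of two is a left/right binary shift in hardware;
    # here we simply emulate it with an ordinary multiplication.
    return x * ap2(y)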

2.5 Shift Based AdaMax

The ADAM learning method (kingma2014adam) also reduces the impact of the weight scale. Since ADAM requires many multiplications, we suggest using instead the shift-based AdaMax we outline in Algorithm 3. In the experiments we conducted, we observed no loss in accuracy when using the shift-based AdaMax algorithm instead of the vanilla ADAM algorithm.
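As an illustration only, the following NumPy sketch performs one AdaMax step in which the two scalings are forced to powers of two (so that in hardware they reduce to binary shifts), reusing the ap2 helper sketched above. The default constants follow the shift-friendly settings mentioned in Algorithm 3; the exact placement of the shifts is our assumption, not the released implementation.

import numpy as np

def shift_adamax_step(theta, g, m, v, t,
                      alpha=2.0 ** -10, beta1=1 - 2.0 ** -3, beta2=1 - 2.0 ** -10):
    # Biased first moment estimate and infinity-norm second moment (AdaMax).
    m = beta1 * m + (1 - beta1) * g
    v = np.maximum(beta2 * v, np.abs(g))
    # Restrict the two scalings to powers of two so that they become shifts in hardware.
    step = ap2(alpha / (1 - beta1 ** t))
    theta = theta - step * m * ap2(1.0 / (v + 1e-8))
    return theta, m, v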

2.6 First Layer

In a BNN, only the binarized values of the weights and activations are used in all calculations. As the output of one layer is the input of the next, the inputs of all the layers are binary, with the exception of the first layer. However, we do not believe this to be a major issue. First, in computer vision, the input representation typically has far fewer channels (e.g., red, green and blue) than internal representations (e.g., 512). Consequently, the first layer of a ConvNet is often the smallest convolution layer, both in terms of parameters and computations (Szegedy-et-al-arxiv2014). Second, it is relatively easy to handle continuous-valued inputs as fixed point numbers with m bits of precision. For example, in the common case of 8-bit fixed point inputs:

s = x · w^b = Σ_{n=1}^{8} 2^{n-1} (x^n · w^b),    (8)

where x is a vector of 1024 8-bit inputs, x^8_1 is the most significant bit of the first input, w^b is a vector of 1024 1-bit weights, and s is the resulting weighted sum. This method is used in Algorithm 4.
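To illustrate Eq. (8), here is a small NumPy sketch that processes an 8-bit input as eight binary bit-planes, each passed through the same binary dot product and weighted by its power of two. The helper names are ours; a real binary kernel would realize binary_dot with bitwise operations instead of a floating point dot product.

import numpy as np

def binary_dot(x_bits, w_bin):
    # x_bits in {0, 1}, w_bin in {-1, +1}; written as an ordinary dot product here.
    return x_bits @ w_bin

def first_layer(x_uint8, w_bin):
    # s = sum_{n=1}^{8} 2^(n-1) * (x^n . w^b), Eq. (8).
    # The Python loop runs n = 0..7, so 2.0 ** n plays the role of 2^(n-1) above.
    s = np.zeros(w_bin.shape[1])
    for n in range(8):
        bit_plane = (x_uint8 >> n) & 1  # n-th bit-plane of every 8-bit input
        s += (2.0 ** n) * binary_dot(bit_plane, w_bin)
    return s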

3 Quantized Neural Networks - More than 1-bit

Observing Eq. (8), we can see that using 2-bit activations simply doubles the number of times we need to run our XnorPopCount kernel (i.e., the cost is directly proportional to the activation bitwidth). This idea was recently proposed by zhou2016dorefa (DoReFa net) and miyashita2016convolutional (published on arXiv shortly after our preliminary technical report appeared there). However, in contrast to zhou2016dorefa, we did not find it useful to initialize the network with weights obtained by training the network with full precision weights. Moreover, the zhou2016dorefa network did not quantize the weights of the first convolutional layer and the last fully-connected layer, whereas we binarized both. We followed the quantization schemes suggested by miyashita2016convolutional, namely, linear quantization:

LinearQuant(x, bitwidth) = Clip(round(x / step) × step, minV, maxV),  with step = (maxV − minV) / (2^bitwidth − 1),    (9)

and logarithmic quantization:

LogQuant(x, bitwidth) = Clip(AP2(x), minV, maxV),    (10)

where minV and maxV are the minimum and maximum of the scale range, respectively, and AP2(x) is the approximate power-of-2 of x as described in Section 2.4. In our experiments (detailed in Section 4) we applied the above quantization schemes to the weights, activations and gradients and tested them on the more challenging ImageNet dataset.
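The sketch below shows one possible reading of the two quantizers: linear quantization rounds onto a uniform grid determined by the bit width, and logarithmic quantization keeps only the sign and an approximate power of two, both clipped to [minV, maxV]. The exact grid convention is an assumption on our part and may differ from the released code.

import numpy as np

def linear_quant(x, bits, min_v=-1.0, max_v=1.0):
    # Round onto a uniform grid with 2**bits levels spanning [min_v, max_v], then clip.
    step = (max_v - min_v) / (2 ** bits - 1)
    return np.clip(np.round(x / step) * step, min_v, max_v)

def log_quant(x, min_v=2.0 ** -8, max_v=1.0):
    # Keep only the sign and the nearest power of two of the magnitude, then clip it.
    mag = 2.0 ** np.round(np.log2(np.abs(x) + 1e-300))
    return np.sign(x) * np.clip(mag, min_v, max_v)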

4 Benchmark Results

4.1 Results on MNIST, SVHN, and CIFAR-10

Data set MNIST SVHN CIFAR-10
Binarized activations+weights, during training and test
BNN (Torch7) 1.40% 2.53% 10.15%
BNN (Theano) 0.96% 2.80% 11.40%
Committee Machines’ Array Baldassi2015 1.35% - -
Binarized weights, during training and test
BinaryConnect Courbariaux-et-al-2015 1.29 ± 0.08% 2.30% 9.90%
Binarized activations+weights, during test
EBP Cheng-et-al-2015 2.2 ± 0.1% - -
Bitwise DNNs Kim-et-al-2016 1.33% - -
Ternary weights, binary activations, during test
hwang-et-al-2014 1.45% - -
No binarization (standard results)
No reg 1.3 ± 0.2% 2.44% 10.94%
Maxout Networks Goodfellow2013a 0.94% 2.47% 11.68%
Gated pooling lee-et-al-2015 - 1.69% 7.62%
Table 1: Classification test error rates of DNNs trained on MNIST (fully connected architecture), CIFAR-10 and SVHN (convnet). No unsupervised pre-training or data augmentation was used.
Figure 1: Training curves for different methods on the CIFAR-10 dataset. The dotted lines represent the training costs (square hinge losses) and the continuous lines the corresponding validation error rates. Although BNNs are slower to train, they are nearly as accurate as 32-bit float DNNs.

We performed two sets of experiments, each based on a different framework, namely Torch7 and Theano. Other than the framework, the two sets of experiments are very similar:

  • In both sets of experiments, we obtain near state-of-the-art results with BNNs on MNIST, CIFAR-10 and the SVHN benchmark datasets.

  • In our Torch7 experiments, the activations are stochastically binarized at train-time, whereas in our Theano experiments they are deterministically binarized.

  • In our Torch7 experiments, we use the shift-based BN and AdaMax variants, which are detailed in Algorithms 2 and 3, whereas in our Theano experiments, we use vanilla BN and ADAM.

Results are reported in Table 1. Implementation details are reported in Appendix A.

MNIST

MNIST is an image classification benchmark dataset (LeCun+98). It consists of a training set of 60K and a test set of 10K 28 × 28 gray-scale images representing digits ranging from 0 to 9. The Multi-Layer-Perceptron (MLP) we train on MNIST consists of 3 hidden layers. In our Theano implementation we used hidden layers of size 4096, whereas in our Torch implementation we used a much smaller size of 2048. This difference explains the accuracy gap between the two implementations.

CIFAR-10

CIFAR-10 is an image classification benchmark dataset. It consists of a training set of size 50K and a test set of size 10K, where instances are 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. Both implementations share the same structure as reported in Appendix A. Since the Torch implementation uses stochastic binarization, it achieved slightly better results.

SVHN

Street View House Numbers (SVHN) is also an image classification benchmark dataset. It consists of a training set of 604K examples and a test set of 26K examples, where instances are 32 × 32 color images representing digits ranging from 0 to 9. Here again we obtained a small improvement in performance by using the stochastic binarization scheme.

4.2 Results on ImageNet

To test the strength of our method, we applied it to the challenging ImageNet classification task, which is probably the most important classification benchmark dataset. It consists of a training set of 1.2M samples and a test set of 50K samples. Each instance is labeled with one of 1000 categories including objects, animals, scenes, and even some abstract shapes. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-k error rate is the fraction of test images for which the correct label is not among the k labels considered most probable by the model. Considerable research has been concerned with compressing ImageNet architectures while preserving high accuracy. Previous approaches include pruning near-zero weights (Gong-et-al-2014; han2015deep), using matrix factorization techniques (zhang2015efficient), quantizing the weights (gupta2015deep), using shared weights (chen2015compressing) and applying Huffman codes (han2015deep), among others.

To the best of our knowledge, before the first revision of this paper was published on arXiv, no one had reported on successfully quantizing the network’s activations. On the contrary, a recent work (han2015deep) showed that accuracy significantly deteriorates when trying to quantize convolutional layers’ weights below 4 bits (FC layers are more robust to quantization and can operate quite well with only 2 bits). In the present work we attempted to tackle the difficult task of binarizing both weights and activations. Employing the well-known AlexNet and GoogleNet architectures, we applied our techniques and achieved 41.8% top-1 and 67.1% top-5 accuracy using AlexNet and 47.1% top-1 and 69.1% top-5 accuracy using GoogleNet (see Tables 2 and 3). While these performance results leave room for improvement (relative to full precision nets), they are by far better than all previous attempts to compress ImageNet architectures using less than 4-bit precision for the weights. Moreover, this advantage is achieved while also binarizing neuron activations.

4.3 Relaxing “hard tanh” boundaries

We discovered that after training the network it is useful to widen the “hard tanh” boundaries and retrain the network. As explained in Section 2.3, the straight-through estimator (which can be written as a “hard tanh”) cancels gradients coming from neurons with absolute values higher than 1. Hence, towards the last training iterations most of the gradient values are zero and the weight values cease to update. By relaxing the “hard tanh” boundaries we allow more gradients to flow in the back-propagation phase and improve top-1 accuracy on the AlexNet topology using vanilla BNNs.

4.4 2-bit activations

While training BNNs on the ImageNet dataset we noticed that we could not force the training set error rate to converge to zero. In fact the training error rate stayed fairly close to the validation error rate. This observation led us to investigate a more relaxed activation quantization (more than 1-bit). As can be seen in Table 2, the results are quite impressive and illustrate a drop of only about 5.6% in performance (top-1 accuracy) relative to the 32-bit floating point representation, using only 1-bit weights and 2-bit activations. Following miyashita2016convolutional, we also tried quantizing the gradients and discovered that only logarithmic quantization works. With 6-bit gradients we observed only a small additional degradation. Those results are presently state-of-the-art, surpassing those obtained by the DoReFa net (zhou2016dorefa). As opposed to DoReFa, we utilized a deterministic quantization process rather than a stochastic one. Moreover, it is important to note that while quantizing the gradients, DoReFa assigns to each instance in a mini-batch its own scaling factor, which increases the number of MAC operations.

While AlexNet can be compressed rather easily, compressing GoogleNet is much harder due to its small number of parameters. When using vanilla BNNs, we observed a large degradation in the top-1 results. However, by using QNNs with 4-bit weights and activations, we were able to achieve 66.5% top-1 accuracy (only a 5.1% drop in performance compared to the 32-bit floating point architecture), which is the current state-of-the-art compression result for GoogleNet. Moreover, by using QNNs with 6-bit weights, activations and gradients we achieved 66.4% top-1 accuracy. Full implementation details of our experiments are reported in Appendix A.6.

Model Top-1 Top-5
Binarized activations+weights, during training and test
BNN 41.8% 67.1%
Xnor-Nets (first and last layers were not binarized, i.e., 32-bit precision weights and activations) (rastegari2016xnor) 44.2% 69.2%
Binary weights and Quantize activations during training and test
QNN 2-bit activation 51.03% 73.67%
DoReFaNet 2-bit activation (first and last layers not binarized) (zhou2016dorefa) 50.7% 72.57%
Quantize weights, during test
Deep Compression 4/2-bit (conv/FC layer) (han2015deep) 55.34% 77.67%
(gysel2016hardware) 2-bit 0.01% -
No Quantization (standard results)
AlexNet - our implementation 56.6% 80.2%
Table 2: Classification test accuracies (top-1 and top-5) of the AlexNet model trained on the ImageNet 1000 classification task. No unsupervised pre-training or data augmentation was used.
Model Top-1 Top-5
Binarized activations+weights, during training and test
BNN 47.1% 69.1%
Quantize weights and activations during training and test
QNN 4-bit 66.5% 83.4%
Quantize activation,weights and gradients during training and test
QNN 6-bit 66.4% 83.1%
No Quantization (standard results)
GoogleNet - our implementation 71.6% 91.2%
Table 3: Classification test accuracies (top-1 and top-5) of the GoogleNet model trained on the ImageNet 1000 classification task. No unsupervised pre-training or data augmentation was used.

4.5 Language Models

Recurrent neural networks (RNNs) are very demanding in memory and computational power in comparison to feed-forward networks. There is a large variety of recurrent models, with the Long Short Term Memory networks (LSTMs) introduced by hochreiter1997long being the most popular. LSTMs are a special kind of RNN, capable of learning long-term dependencies using unique gating mechanisms. Recently, ott2016recurrent tried to quantize the RNN weight matrices using techniques similar to those described in Section 2. They observed that the weight binarization methods do not work with RNNs. However, by using 2 bits, they were able to achieve similar and even higher accuracy on several datasets. Here we report on the first attempt to quantize both weights and activations, by evaluating the accuracy of quantized recurrent models trained on the Penn Treebank dataset. The Penn Treebank Corpus (marcus1993building) contains 10K unique words. We followed the same setting as in (mikolov2012context), which resulted in 18.55K words for the training set, and 14.5K and 16K words in the validation and test sets respectively. We experimented with both vanilla RNNs and LSTMs. For our vanilla RNN model we used one hidden layer of size 2048 and ReLU as the activation function. For our LSTM model we used one hidden layer of size 300. Our RNN implementation was constructed to predict the next character, hence performance was measured using the bits-per-character (BPC) metric. In the LSTM model we tried to predict the next word, so performance was measured using the perplexity per word (PPW) metric. Similar to ott2016recurrent, our preliminary results indicate that binarization of the weight matrices leads to large accuracy degradation. However, as can be seen in Table 4, with 4-bit weights and activations we can achieve accuracies similar to those of their 32-bit floating point counterparts.

Model Layers Hidden Units bits (weights) bits (activation) Result
RNN 1 2048 3 3 1.81 BPC
RNN 1 2048 2 4 1.67 BPC
RNN 1 2048 3 4 1.11 BPC
RNN 1 2048 4 4 1.05 BPC
RNN 1 2048 FP FP 1.05 BPC
LSTM 1 300 2 3 220 PPW
LSTM 1 300 3 4 110 PPW
LSTM 1 300 4 4 100 PPW
LSTM 1 900 4 4 97 PPW
LSTM 1 300 FP FP 97 PPW

Table 4: Language model results on the Penn Treebank dataset. FP stands for 32-bit floating point.

5 High Power Efficiency during the Forward Pass

Operation MUL ADD
8-bit Integer 0.2pJ 0.03pJ
32-bit Integer 3.1pJ 0.1pJ
16-bit Floating Point 1.1pJ 0.4pJ
32-bit Floating Point 3.7pJ 0.9pJ
Table 5: Energy consumption of multiply-accumulations; see Horowitz2014
Memory size 64-bit memory access
8K 10pJ
32K 20pJ
1M 100pJ
DRAM 1.3-2.6nJ
Table 6: Energy consumption of memory accesses; see Horowitz2014

Computer hardware, be it general-purpose or specialized, is composed of memories, arithmetic operators and control logic. During the forward pass (both at run-time and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which might lead to vastly improved power-efficiency. Moreover, a binarized CNN can lead to binary convolution kernel repetitions, and we argue that dedicated hardware could reduce the time complexity by 60%.

Figure 2: Binary weight filters, sampled from the first convolution layer. Since we have only 2^(k^2) unique 2D filters (where k is the filter size), filter replication is very common. For instance, on our CIFAR-10 ConvNet, only 42% of the filters are unique.

Memory Size and Accesses

Improving computing performance has always been and remains a challenge. Over the last decade, power has been the main constraint on performance (Horowitz2014). This is why considerable research efforts have been devoted to reducing the energy consumption of neural networks. Horowitz2014 provides rough numbers for the energy consumed by the computation (the given numbers are for 45nm technology), as summarized in Tables 5 and 6. Importantly, we can see that memory accesses typically consume more energy than arithmetic operations, and memory access cost increases with memory size. In comparison with 32-bit DNNs, BNNs require 32 times smaller memory size and 32 times fewer memory accesses. This is expected to reduce energy consumption drastically (i.e., by a factor larger than 32).

XNOR-Count

Applying a DNN mainly involves convolutions and matrix multiplications. The key arithmetic operation of deep learning is thus the multiply-accumulate operation. Artificial neurons are basically multiply-accumulators computing weighted sums of their inputs. In BNNs, both the activations and the weights are constrained to either +1 or -1. As a result, most of the 32-bit floating point multiply-accumulations are replaced by 1-bit XNOR-count operations. This could have a big impact on dedicated deep learning hardware. For instance, a 32-bit floating point multiplier costs about 200 Xilinx FPGA slices (Govindu-et-al-2004; Beauchamp-et-al-2006), whereas a 1-bit XNOR gate only costs a single slice.

When using a ConvNet architecture with binary weights, the number of unique filters is bounded by the filter size. For example, in our implementation we use filters of size 3 × 3, so the maximum number of unique 2D filters is 2^9 = 512. However, this should not prevent expanding the number of feature maps beyond this number, since the actual filter is a 3D matrix. Assuming we have M filters in a convolutional layer with M_prev input feature maps, we have to store a 4D weight matrix of size M × M_prev × k × k; the number of unique 2D filters it contains is still at most 2^(k^2). When necessary, we apply each filter on the map and perform the required multiply-accumulate (MAC) operations (in our case, using XNOR and popcount operations). Since we now have binary filters, many 2D filters of size k × k repeat themselves. By using dedicated hardware/software, we can apply only the unique 2D filters on each feature map and sum the results to receive each 3D filter’s convolutional result. Note that an inverse filter (i.e., [-1,1,-1] is the inverse of [1,-1,1]) can also be treated as a repetition; it is merely a multiplication of the original filter by -1. For example, in our ConvNet architecture trained on the CIFAR-10 benchmark, there are only 42% unique filters per layer on average. Hence we can reduce the number of XNOR-popcount operations by a factor of 3.
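As an illustration of the repetition argument, here is a small Python sketch that counts unique 2D binary filters while treating a filter and its sign-flipped version as one pattern; the counting scheme is ours and is not taken from the released code.

import numpy as np

def count_unique_2d_filters(w_bin):
    # w_bin: binary weights of shape (out_channels, in_channels, k, k) with values in {-1, +1}.
    k2 = w_bin.shape[-2] * w_bin.shape[-1]
    flat = w_bin.reshape(-1, k2)
    patterns = set()
    for f in flat:
        key = tuple(int(v) for v in f)
        neg = tuple(-v for v in key)
        # A filter and its negation share one XNOR-popcount computation (just negate the result).
        patterns.add(min(key, neg))
    return len(patterns)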

QNN complexity scales up linearly with the number of bits per weight/activation, since this requires applying the XNOR kernel several times (see Section 3). As of now, QNNs still provide the best compression-to-accuracy ratio. Moreover, quantizing the gradients allows us to use the XNOR kernel for the backward pass as well, leading to fully fixed-point layers with low bitwidth. By accelerating the training phase, QNNs can play an important role in future power-demanding tasks.

6 Seven Times Faster on GPU at Run-Time

It is possible to speed up GPU implementations of QNNs, by using a method sometimes called SIMD (single instruction, multiple data) within a register (SWAR). The basic idea of SWAR is to concatenate groups of 32 binary variables into 32-bit registers, and thus obtain a 32-times speed-up on bitwise operations (e.g., XNOR). Using SWAR, it is possible to evaluate 32 connections with only 3 instructions:

a1 += popcount(xnor(a0_32b, w1_32b)),    (11)

where a1 is the resulting weighted sum, and a0_32b and w1_32b are the concatenated inputs and weights. Those 3 instructions (accumulation, popcount, xnor) take 1 + 4 + 1 = 6 clock cycles on recent Nvidia GPUs (and if they were to become a fused instruction, it would only take a single clock cycle). Consequently, we obtain a theoretical Nvidia GPU speed-up factor of 32/6 ≈ 5.3. In practice, this speed-up is quite easy to obtain as the memory bandwidth to computation ratio is also increased 6 times.
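The following pure-Python sketch mimics the SWAR trick of Eq. (11): 32 binary values are packed into one integer word, and the ±1 dot product is recovered from an XNOR followed by a popcount. Here bin(...).count("1") stands in for the GPU's population-count instruction.

def xnor_popcount_dot(x_word, w_word, n=32):
    # x_word, w_word: n-bit integers whose k-th bit encodes +1 (bit = 1) or -1 (bit = 0).
    mask = (1 << n) - 1
    agree = ~(x_word ^ w_word) & mask   # bit k is 1 exactly where the two signs agree (XNOR)
    matches = bin(agree).count("1")     # popcount
    return 2 * matches - n              # dot product of n values in {-1, +1}

Accumulating xnor_popcount_dot over the packed words of one input row and one weight column reproduces the weighted sum a1 of Eq. (11).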

In order to validate those theoretical results, we programmed two GPU kernels:

  • An unoptimized matrix multiplication kernel that serves as our baseline.

  • The XNOR kernel, which is nearly identical to the baseline, except that it uses the SWAR method, as in Equation (11).

The two GPU kernels return identical outputs when their inputs are constrained to -1 or +1 (but not otherwise). The XNOR kernel is about 23 times faster than the baseline kernel and 3.4 times faster than cuBLAS, as shown in Figure 3. Last but not least, the MLP from Section 4 runs 7 times faster with the XNOR kernel than with the baseline kernel, without suffering any loss in classification accuracy (see Figure 3). As MNIST’s images are not binary, the first layer’s computations are always performed by the baseline kernel. The last three columns of Figure 3 show that the MLP accuracy does not depend on which kernel is used.

Figure 3: The first 3 columns show the time it takes to perform a (binary) matrix multiplication on a GTX750 Nvidia GPU, depending on which kernel is used. The next three columns show the time it takes to run the MLP from Section 4 on the full MNIST test set. The last three columns show that the MLP accuracy does not depend on which kernel is used.

7 Discussion and Related Work

Until recently, the use of extremely low-precision networks (binary in the extreme case) was believed to substantially degrade the network performance (courbariaux+al-TR2014). Soudry-et-al-NIPS2014-small and Cheng-et-al-2015 proved the contrary by showing that good performance could be achieved even if all neurons and weights are binarized to ±1. This was done using Expectation BackPropagation (EBP), a variational Bayesian approach, which infers networks with binary weights and neurons by updating the posterior distributions over the weights. These distributions are updated by differentiating their parameters (e.g., mean values) via the back propagation (BP) algorithm. Esser-et-al-2015 implemented a fully binary network at run time using a very similar approach to EBP, showing significant improvement in energy efficiency. The drawback of EBP is that the binarized parameters are only used during inference.

The probabilistic idea behind EBP was extended in the BinaryConnect algorithm of Courbariaux-et-al-2015. In BinaryConnect, the real-valued version of the weights is saved and used as a key reference for the binarization process. The binarization noise is independent between different weights, either by construction (by using stochastic quantization) or by assumption (a common simplification; see spang1962reduction). The noise would have little effect on the next neuron’s input because the input is a summation over many weighted neurons. Thus, the real-valued version could be updated using the back propagated error by simply ignoring the binarization noise in the update. With this method, Courbariaux-et-al-2015 were the first to binarize weights in CNNs and achieved near state-of-the-art performance on several datasets. They also argued that noisy weights provide a form of regularization, which could help to improve generalization, as previously shown by Wan+al-ICML2013-small. This method binarized weights while still maintaining full precision neurons.

Lin-et-al-2015 carried over the work of Courbariaux-et-al-2015 to the back-propagation process by quantizing the representations at each layer of the network, converting some of the remaining multiplications into binary shifts by restricting the neurons’ values to power-of-two integers. Lin-et-al-2015’s work and ours seem to share similar characteristics. However, their approach continues to use full precision weights during the test phase. Moreover, Lin-et-al-2015 quantize the neurons only during the back propagation process, and not during forward propagation.

Other research (Baldassi2015) showed that full binary training and testing is possible in an array of committee machines with randomized input, where only one weight layer is being adjusted. Gong-et-al-2014 aimed to compress a fully trained high precision network by using quantization or matrix factorization methods. These methods required training the network with full precision weights and neurons, thus requiring numerous MAC operations (which the proposed QNN algorithm avoids). hwang-et-al-2014 focused on a fixed-point neural network design and achieved performance almost identical to that of the floating-point architecture. Kim-et-al-2016 retrained neural networks with binary weights and activations.

As far as we know, before the first revision of this paper was published on arXiv, no work had succeeded in binarizing weights and neurons both at the inference phase and during the entire training phase of a deep network. This was achieved in the present work. We relied on the idea that binarization can be done stochastically, or be approximated as random noise. This was previously done for the weights by Courbariaux-et-al-2015, but our BNNs extend it to the activations. Note that the binary activations are especially important for ConvNets, where there are typically many more neurons than free weights. This allows highly efficient operation of the binarized DNN at run-time, and at the forward-propagation phase during training. Moreover, our training method has almost no multiplications, and therefore might be implemented efficiently in dedicated hardware. However, we still have to save the values of the full precision weights. This remains a computational bottleneck during training, since storing and accessing them is an energy-consuming operation.

Shortly after the first version of this paper was posted on arXiv, several papers tried to improve and extend it. rastegari2016xnor made a small modification to our algorithm (namely, multiplying the binary weights and inputs by their norm) and published promising results on the ImageNet dataset. Note that their method, named Xnor-Net, requires an additional multiplication by a different scaling factor for each patch in each sample (see rastegari2016xnor, Section 3.2, Eq. 10 and Figure 2). This, in itself, requires many multiplications and prevents an efficient implementation of Xnor-Net on known hardware designs. Moreover, rastegari2016xnor did not quantize the first and last layers; therefore, Xnor-Nets are only partially binarized NNs. miyashita2016convolutional suggested a more relaxed quantization (more than 1-bit) for both the weights and activations. Their idea was to quantize both and use shift operations as in our Eq. (4). They proposed to quantize the parameters in a non-uniform, base-2 logarithmic representation. This idea was inspired by the fact that the weights and activations in a trained network naturally have non-uniform distributions. They moreover showed that they can quantize the gradients as well to 6 bits without significant losses in performance (on the CIFAR-10 dataset). zhou2016dorefa applied similar ideas to the ImageNet dataset and showed that by using 1-bit weights, 2-bit activations and 6-bit gradients they can achieve good top-1 accuracies using the AlexNet architecture. They named this method DoReFa net. Here we outperform DoReFa net, both with a 1-2-6 bit quantization scheme (weights-activations-gradients) and with a 1-2-32 scheme (51.03% top-1; see Table 2). These results confirm that we can achieve comparable results even on a large dataset by applying the Xnor kernel several times. merolla2016deep showed that DNNs can be robust to more than just weight binarization. They applied several different distortions to the weights, including additive and multiplicative noise, and a class of non-linear projections. This was shown to improve robustness to other distortions and even boost results. zhengbinarized tried to apply our binarization scheme to recurrent neural networks for language modeling and achieved comparable results as well. andri2016yodann even created a hardware implementation to speed up BNNs.

Conclusion

We have introduced BNNs, which binarize deep neural networks and can lead to dramatic improvements in both power consumption and computation speed. During the forward pass (both at run-time and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. Our estimates indicate that power efficiency can be improved by more than one order of magnitude (see Section 5). In terms of speed, we programmed a binary matrix multiplication GPU kernel that enables running an MLP over the MNIST dataset 7 times faster (than with an unoptimized GPU kernel) without any loss of accuracy (see Section 6).

We have shown that BNNs can handle MNIST, CIFAR-10 and SVHN while achieving nearly state-of-the-art accuracy. While our results for the challenging ImageNet are not on par with the best results achievable with full precision networks, they significantly improve all previous attempts to compress ImageNet-capable architectures. Moreover, by quantizing the weights and activations to more than 1-bit (i.e., QNNs), we have been able to achieve comparable results to the 32-bit floating point architectures (see Section 4.4 and supplementary material - Appendix B). A major open research avenue would be to further improve our results on ImageNet. Substantial progress in this direction might go a long way towards facilitating DNN usability in low power instruments such as mobile phones.

Acknowledgments

We would like to express our appreciation to Elad Hoffer for his technical assistance and constructive comments. We thank our fellow MILA lab members who took the time to read the article and give us some feedback. We thank the developers of Torch (Torch-2011), a Lua-based environment, and Theano (bergstra+al:2010-scipy; Bastien-Theano-2012), a Python library that allowed us to easily develop fast and optimized code for GPUs. We also thank the developers of Pylearn2 (pylearn2_arxiv_2013) and Lasagne (dieleman-et-al-2015), two deep learning libraries built on top of Theano. We thank Yuxin Wu for helping us compare our GPU kernels with cuBLAS. We are also grateful for funding from NSERC, the Canada Research Chairs, Compute Canada and CIFAR, as well as from IBM and Samsung. This research was supported by The Israel Science Foundation (grant No. 1890/14).

Appendix A Implementation Details

In this section we give full implementation details for our MNIST, SVHN, CIFAR-10 and ImageNet experiments.

A.1 MLP on MNIST (Theano)

MNIST is an image classification benchmark dataset (LeCun+98). It consists of a training set of 60K and a test set of 10K 28 × 28 gray-scale images representing digits ranging from 0 to 9. In order for this benchmark to remain a challenge, we did not use any convolution, data-augmentation, preprocessing or unsupervised learning. The Multi-Layer-Perceptron (MLP) we train on MNIST consists of 3 hidden layers of 4096 binary units and an L2-SVM output layer; L2-SVM has been shown to perform better than Softmax on several classification benchmarks (Tang-wkshp-2013; Lee-et-al-2014). We regularize the model with Dropout (Srivastava14). The square hinge loss is minimized with the ADAM adaptive learning rate method (kingma2014adam). We use an exponentially decaying global learning rate, as per Algorithm 1, and also scale the learning rates of the weights with their initialization coefficients from (GlorotAISTATS2010-small), as suggested by Courbariaux-et-al-2015. We use Batch Normalization with a minibatch of size 100 to speed up the training. As is typical, we use the last 10K samples of the training set as a validation set for early stopping and model selection. We report the test error rate associated with the best validation error rate after 1000 epochs (we do not retrain on the validation set).

A.2 MLP on MNIST (Torch7)

We use a similar architecture as in our Theano experiments, without dropout, and with 2048 binary units per layer instead of 4096. Additionally, we use the shift-based AdaMax and BN (with a minibatch of size 100) instead of the vanilla implementations, to reduce the number of multiplications. Likewise, we decay the learning rate by using a 1-bit right shift every 10 epochs.

A.3 ConvNet on CIFAR-10 (Theano)

CIFAR-10 is an image classification benchmark dataset. It consists of a training set of size 50K and a test set of size 10K, where instances are 32 × 32 color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. We do not use data-augmentation (which can really be a game changer for this dataset; see Graham-2014). The architecture of our ConvNet is identical to that used by Courbariaux2015 except for the binarization of the activations. The Courbariaux-et-al-2015 architecture is itself mainly inspired by VGG (Simonyan2015). The square hinge loss is minimized with ADAM. We use an exponentially decaying learning rate, as we did for MNIST. We scale the learning rates of the weights with their initialization coefficients from (GlorotAISTATS2010-small). We use Batch Normalization with a minibatch of size 50 to speed up the training. We use the last 5000 samples of the training set as a validation set. We report the test error rate associated with the best validation error rate after 500 training epochs (we do not retrain on the validation set).

CIFAR-10 ConvNet architecture
Input: 32 × 32 - RGB image
3 × 3 - 128 convolution layer
BatchNorm and Binarization layers
3 × 3 - 128 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 512 convolution layer
BatchNorm and Binarization layers
3 × 3 - 512 convolution and 2 × 2 max-pooling layers
BatchNorm and Binarization layers
1024 fully connected layer
BatchNorm and Binarization layers
1024 fully connected layer
BatchNorm and Binarization layers
10 fully connected layer
BatchNorm layer (no binarization)
Cost: Mean square hinge loss
Table 7: Architecture of our CIFAR-10 ConvNet. We only use “same” convolutions as in VGG (Simonyan2015).

A.4 ConvNet on CIFAR-10 (Torch7)

We use the same architecture as in our Theano experiments. We apply shift-based AdaMax and BN (with a minibatch of size 200) instead of the vanilla implementations to reduce the number of multiplications. Likewise, we decay the learning rate by using a 1-bit right shift every 50 epochs.

A.5 ConvNet on SVHN

SVHN is also an image classification benchmark dataset. It consists of a training set of 604K examples and a test set of 26K examples, where instances are 32 × 32 color images representing digits ranging from 0 to 9. In both sets of experiments, we follow the same procedure used for the CIFAR-10 experiments, with a few notable exceptions: we use half the number of units in the convolution layers, and we train for 200 epochs instead of 500 (because SVHN is a much larger dataset than CIFAR-10).

A.6 ConvNet on ImageNet

The ImageNet classification task consists of a training set of 1.2M samples and a test set of 50K samples. Each instance is labeled with one of 1000 categories including objects, animals, scenes, and even some abstract shapes.

AlexNet:

Our AlexNet implementation consists of 5 convolution layers followed by 3 fully connected layers (see Table 8). Additionally, we use Adam as our optimization method and batch-normalization layers (with a minibatch of size 512). Likewise, we decay the learning rate by 0.1 every 20 epochs.

GoogleNet:

Our GoogleNet implementation consists of 2 convolution layers followed by 10 inception layers, spatial average pooling and a fully connected classifier. We also used the 2 auxiliary classifiers. Additionally, we use Adam (Kingma2015) as our optimization method and batch-normalization layers (with a minibatch of size 64). Likewise, we decay the learning rate by 0.1 every 10 epochs.

AlexNet ConvNet architecture
Input: 224 × 224 - RGB image
11 × 11 - 64 convolution and 3 × 3 max-pooling layers
BatchNorm and Binarization layers
5 × 5 - 192 convolution and 3 × 3 max-pooling layers
BatchNorm and Binarization layers
3 × 3 - 384 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
3 × 3 - 256 convolution layer
BatchNorm and Binarization layers
4096 fully connected layer
BatchNorm and Binarization layers
4096 fully connected layer
BatchNorm and Binarization layers
1000 fully connected layer
BatchNorm layer (no binarization)
SoftMax layer (no binarization)
Cost: Negative log likelihood
Table 8: Our AlexNet Architecture.

References

Footnotes

  1. https://github.com/MatthieuCourbariaux/BinaryNet
  2. https://github.com/itayhubara/BinaryNet
  3. Hardware implementation of AP2 is as simple as extracting the index of the most significant bit from the number’s binary representation.