Training Compact Neural Networks with Binary Weights and Low Precision Activations
In this paper, we propose to train a network with binary weights and low-bitwidth activations, designed especially for mobile devices with limited power budgets. Most previous works on quantizing CNNs uncritically assume the same architecture as the full-precision model, only with reduced precision. We instead take the view that, for best performance, a different architecture may be better suited to low-precision weights and activations. Specifically, we propose a “network expansion” strategy in which we aggregate a set of homogeneous low-precision branches to implicitly reconstruct the full-precision intermediate feature maps. Moreover, we propose a group-wise feature approximation strategy that is both flexible and highly accurate. Experiments on ImageNet classification demonstrate the superior performance of the proposed model, named Group-Net, over various popular architectures. In particular, with binary weights and activations, we outperform the previous best binary neural network in accuracy while reducing the computational complexity by more than 5 times on ImageNet with ResNet-18 and ResNet-50.
- 1 Introduction
- 2 Related Work
- 3 Method
- 4 Experiment
- 5 Conclusion
- A Layer-wise vs. group-wise on AlexNet
- B Summary of the training algorithm
Designing deeper and wider convolutional neural networks has led to significant breakthroughs in many machine learning tasks, such as image classification [20, 9], object detection [30, 31] and object segmentation. However, accuracy is roughly proportional to log(FLOPs) [3, 10], and training accurate CNNs requires billions of FLOPs, whereas running real-time applications on mobile platforms demands low energy consumption alongside high accuracy. To bridge this gap, many existing works [11, 8, 42, 18, 4, 14] focus on network pruning, low-bit quantization and efficient architecture design. In this paper, we aim to design a highly efficient low-precision neural network architecture from the quantization perspective.
Binary neural networks were first proposed in [15, 29] to accelerate inference and save memory. To improve the balance between accuracy and complexity, several works [7, 22, 23] employ tensor expansion to approximate the filters or activations while still retaining the advantage of binary operations. In particular, Guo et al. [7] recursively perform residual quantization on pretrained full-precision weights and convolve with each binary weight base. Similarly, Li et al. [22] propose to expand the input feature maps into binary bases in the same manner, and Lin et al. [23] further expand both weights and activations using a simple linear approach. However, these weight and activation approximations only minimize the local reconstruction error rather than the final loss. As a result, the quantization error accumulates during propagation, causing a noticeable accuracy drop, especially on large-scale datasets (e.g., ImageNet). Moreover, they have to solve a linear regression problem for each layer during forward propagation and may suffer from rank deficiency if the bases are too correlated. In contrast, the proposed model is directly learnt to optimize the final objective while still implicitly taking into account the feature reconstruction in intermediate layers.
Interestingly, we are also motivated by energy-efficient architecture design approaches [17, 14, 38]. The objective of all these approaches is to replace the traditional expensive convolution with computationally efficient convolutional operations (e.g., depthwise separable convolution, 1x1 convolution). In contrast, we propose to design extremely low-precision network architectures for dedicated hardware from the quantization view. Most previous quantization works directly quantize the full-precision architecture. At this point we do not yet learn the architecture, but we do begin to explore alternative architectures that we show are better suited to low-precision weights and activations. Specifically, we partition the full-precision model into groups and decompose each group into a set of low-precision bases while still preserving the original network's properties. That is, with a little more computation and memory than directly quantizing the model, we can obtain nearly lossless quantization on the ImageNet dataset. Moreover, the group-wise decomposition strategy (Fig. 2) is highly accurate, converges quickly, and is flexible enough to be applied to any network structure in the literature (e.g., VGGNet, ResNet).
We evaluate our models on the challenging CIFAR-100 and ImageNet datasets based on various architectures including AlexNet, ResNet-18 and ResNet-50. Extensive experiments show the effectiveness of the proposed method and its superior performance over previous state-of-the-art quantization approaches. We expect that Group-Net will also generalize well to other recognition tasks.
2 Related Work
Network quantization: The recent increasing demand for implementing fixed-point deep neural networks on embedded devices motivates the study of network quantization. Several works quantize only the parameters for highly accurate compression [21, 41, 39, 5]. For example, Courbariaux et al. [5] propose to binarize the weights to replace multiply-accumulate operations by simple accumulations. Zhou et al. propose three interdependent operations, namely weight partition, group-wise quantization and re-training, to achieve lossless weight quantization. Further quantizing the activations has also been extensively explored in the literature [2, 40, 15, 29, 23, 42]. BNNs [15] and XNOR-Net [29] propose to constrain both weights and activations to binary values (i.e., +1 and -1), so that multiply-accumulations can be replaced by bitwise operations. To trade off accuracy against complexity, [42, 40, 16, 6] experiment with different combinations of bitwidth for weights and activations and achieve improved accuracy compared to binary neural networks.
Efficient architecture design: There has been rising interest in designing efficient architectures in the recent literature. Efficient model designs like GoogLeNet and SqueezeNet [17] propose to replace 3x3 convolutional kernels with 1x1 kernels to reduce the complexity while increasing depth and accuracy. Additionally, separable convolutions have proved effective in the Inception approaches [36, 34]. This idea is further generalized as depthwise separable convolutions by Xception [4], MobileNet [14] and ShuffleNet [38] to generate energy-efficient network structures. Recently, neural architecture search [43, 28, 44, 24, 25] using reinforcement learning has been explored for automatic model design. In particular, ENAS greatly reduces the GPU-hours while still preserving the performance.
3 Method
The objective of this paper is to binarize the weights and quantize the activations to low precision. To aid the description, we adopt a terminology in which an architecture comprises layers, blocks and groups. A layer is a standard single parameterized layer in a network, such as a dense or convolutional layer, except with binary weights. A block is a collection of layers whose last layer's output is connected to the input of the next block (e.g., a residual block). A group is a collection of blocks. We explore two different architecture changes, which we call layer-wise and group-wise. These are illustrated in Fig. 1 (in (b) and (c), respectively) along with a baseline architecture in Fig. 1 (a) that simply adopts the same architecture as its “parent” but with binarized weights and quantized activations. In Sec. 3.1, we describe our quantization of weights and activations, respectively. We then describe the layer-wise approach in Sec. 3.2 and extend it to the flexible group-wise structure in Sec. 3.3.
3.1 Quantization function
For a convolutional layer, we denote the input by $\mathbf{x}$, the weight filter by $\mathbf{w}$, and the output by $\mathbf{y}$, respectively.
Quantization of weights: Following [29], we approximate the floating-point weight $\mathbf{w}$ by a binary weight filter $\mathbf{b}$ and a scaling factor $\alpha$ such that $\mathbf{w} \approx \alpha \mathbf{b}$, where $\mathbf{b} = \mathrm{sign}(\mathbf{w})$ and $\alpha$ is the mean of the absolute values of $\mathbf{w}$. In general, the quantization function is non-differentiable and we adopt the straight-through estimator [1] (STE) to approximate the gradient calculation. Formally, the forward and backward processes are given by

$$\text{Forward: } \mathbf{w}_b = \alpha \cdot \mathrm{sign}(\mathbf{w}), \qquad \text{Backward: } \frac{\partial \ell}{\partial \mathbf{w}} = \frac{\partial \ell}{\partial \mathbf{w}_b},$$

where $\ell$ is the loss. In practice, we find this binarization scheme quite stable.
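As an illustration, the weight binarization and its straight-through gradient can be sketched as follows (a minimal NumPy sketch; the function names are ours, and the backward rule simply passes the incoming gradient through, per the STE):

```python
import numpy as np

def binarize_weights(w):
    """Approximate w by alpha * sign(w), where alpha is the mean of
    the absolute values of w (XNOR-Net-style binarization)."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w), alpha

def ste_grad(grad_wrt_binary):
    """Straight-through estimator: the gradient w.r.t. the real-valued
    weights is taken to be the gradient w.r.t. the binarized weights."""
    return grad_wrt_binary

w = np.array([0.3, -0.5, 0.1, -0.1])
wb, alpha = binarize_weights(w)
# alpha = (0.3 + 0.5 + 0.1 + 0.1) / 4 = 0.25, so wb = [0.25, -0.25, 0.25, -0.25]
```

During training, the real-valued weights are retained and updated with the passed-through gradients; only the forward pass uses the binarized copies.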
Quantization of activations: As the output of the ReLU function is unbounded, quantizing after ReLU requires a high dynamic range, which causes large quantization errors especially at low bit-precision. To alleviate this problem, similar to [40, 16], we use a clip function to limit the range of activations to $[0, \beta]$, where $\beta$ (not learned) is fixed during training. The truncated activation output $y$ is then linearly quantized to $k$ bits ($k \geq 2$) and we still use the STE to estimate the gradient:

$$y_q = \frac{\beta}{2^k - 1} \cdot \mathrm{round}\!\left(\frac{(2^k - 1)\, y}{\beta}\right), \qquad \frac{\partial \ell}{\partial y} = \frac{\partial \ell}{\partial y_q}.$$
Note that when $k = 1$, we follow the quantization scheme in XNOR-Net [29] by introducing scale factors for both weights and activations during binarization to preserve accuracy.
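The forward pass of this activation quantizer can be sketched as follows (NumPy; taking $\beta = 1$ as a default is our assumption for illustration):

```python
import numpy as np

def quantize_activations(x, k=2, beta=1.0):
    """Clip activations to [0, beta], then linearly quantize them onto
    2^k - 1 uniform steps (forward pass only; the backward pass uses
    the straight-through estimator)."""
    levels = 2 ** k - 1
    x = np.clip(x, 0.0, beta)
    return np.round(x * levels / beta) * (beta / levels)

x = np.array([-0.2, 0.4, 0.7, 1.5])
xq = quantize_activations(x, k=2)   # -> [0, 1/3, 2/3, 1]
```

Note how the clip bounds the quantization range: the out-of-range inputs -0.2 and 1.5 are saturated to 0 and $\beta$ before being snapped to the nearest level.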
3.2 Layer-wise feature reconstruction
Fig. 1 (b) illustrates the layer-wise feature reconstruction for a single block. At each layer, we aim to reconstruct the full-precision output feature map $\mathbf{y}$ given the input 3-D tensor $\mathbf{x}$ using a set of quantized homogeneous branches:

$$\mathbf{y} \approx \sum_{i=1}^{K} \lambda_i \, (\mathbf{b}_i \oplus \mathbf{x}), \qquad (3)$$

where $\oplus$ is the convolutional operation, $K$ is the number of branches and $\lambda_i$ is a scale factor. Note that when activations are quantized to more than one bit, $\oplus$ reduces to simple fixed-point accumulations, similar to BinaryConnect [5]. When activations are also constrained to binary values (i.e., -1 or +1), $\oplus$ becomes the bitwise operations xnor and bitcount [29]. Note that the convolution with each binary filter can be computed in parallel. We explore both effects in Sec. 4.3.1.
All branches in Eq. 3 have the same convolution hyperparameters as the original floating-point counterpart. In this way, each low-precision branch gives a rough transformation, and all the transformations are aggregated to approximate the original full-precision output feature map. We can expect that increasing $K$ yields a more accurate approximation through more complex transformations. A special case is $K = 1$, which corresponds to directly quantizing the full-precision network.
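To make the aggregation in Eq. 3 concrete, here is a minimal sketch in which a matrix product stands in for the convolution (our simplification); each branch binarizes its own weights and the branch outputs are combined with per-branch scale factors:

```python
import numpy as np

def binarize(w):
    """alpha * sign(w), with alpha the mean absolute value of w."""
    return np.mean(np.abs(w)) * np.sign(w)

def layerwise_output(weights, lambdas, x):
    """Eq. 3 sketch: y ~= sum_i lambda_i * (b_i (*) x), where each of
    the K branches shares the shape of the original full-precision
    layer and a matrix product stands in for convolution."""
    return sum(lam * (binarize(w) @ x) for w, lam in zip(weights, lambdas))

rng = np.random.default_rng(0)
K = 5
weights = [rng.standard_normal((8, 4)) for _ in range(K)]  # K homogeneous branches
lambdas = np.full(K, 1.0 / K)                              # scale factors
x = rng.standard_normal(4)
y = layerwise_output(weights, lambdas, x)                  # shape (8,)
```

In the paper the scale factors (and the real-valued weights behind each binary branch) are learned end-to-end; here they are fixed only to keep the sketch short.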
However, this strategy has an apparent limitation. The quantization function in each branch introduces a certain amount of error. Furthermore, as the estimation error from previous layers propagates into the current multi-branch convolutional layer, it is enlarged by the quantization function and the final aggregation process. As a result, large quantization errors may arise, especially in deeper layers, introducing large deviations into the gradients during backpropagation. To address this problem, we propose a flexible group-wise approximation approach in Sec. 3.3.
Complexity: We consider the binary convolution case here ($k = 1$), where operations are xnor and bitcount. One floating-point operation roughly equals 64 binary operations within one clock cycle. For a layer with $N$ multiply-accumulate operations and $N_{\mathrm{out}}$ output elements, we need to calculate $K$ binary convolutions and $K$ full-precision additions for the aggregation, thus the speed-up ratio can be calculated as

$$\sigma = \frac{N}{K N / 64 + K N_{\mathrm{out}}} \approx \frac{64}{K} \quad (\text{when } N \gg 64\, N_{\mathrm{out}}).$$
This equation is valid for VGG-style architectures that repeatedly stack layers with the same shape, or for a ResNet bottleneck block (except the subsampling layers). For small non-binary activation bitwidths (e.g., $k = 2$), the complexity increases accordingly, but fixed-point addition remains very efficient on digital chips. The choice of $k$ depends on the practical demands.
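Under the cost model above (one float op roughly equals 64 binary ops, plus K scaled float additions per output element for the aggregation; the concrete layer size below is our own example), the speed-up can be estimated as:

```python
def speedup(n_macs, n_out, K):
    """Estimated speed-up of K binary branches over one float conv:
    float cost = n_macs ops; binarized cost = K * n_macs / 64
    binary-op equivalents plus K * n_out float additions."""
    return n_macs / (K * n_macs / 64 + K * n_out)

# Example: a 3x3 conv, 256 -> 256 channels, on a 14x14 feature map.
n_macs = 3 * 3 * 256 * 256 * 14 * 14   # multiply-accumulates
n_out = 256 * 14 * 14                  # output elements
print(round(speedup(n_macs, n_out, K=5), 1))  # ~12.5x, approaching 64/5
```

As the aggregation cost is small relative to the convolutions, the estimate stays close to the 64/K upper bound.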
3.3 Group-wise feature approximation
In the layer-wise approach, we approximate each layer separately. In this section, we instead explore approximating an entire group. The motivation is as follows. As explained in Sec. 3.2, there is a trade-off between how frequently we reconstruct the intermediate activations and how well we suppress the quantization error. In analogy to the extreme “low-level” layer-wise case, we also show the extreme “high-level” case in Fig. 2 (e), where we directly ensemble a set of low-precision networks. In the ensemble model, the output quantization error of each branch can be very large even though we only aggregate the outputs once. Clearly, these two extreme cases may not be optimal, which motivates us to explore the “mid-level” cases. More specifically, we propose a group-wise approximation strategy that approximates residual blocks or several consecutive layers as a whole. In other words, each group consists of one or multiple residual blocks, or even a VGG-style plain structure that stacks layers without skip connections. In this paper, we analyze the residual structure for convenience.
We first consider the simplest case where each group consists of only one block (i.e., the group comprises one block of Fig. 2 (a)). Then the above layer-wise approximation method can be easily extended to the group-wise structure. The most typical block structure is the bottleneck architecture [9], and we can extend Eq. 3 as

$$\mathbf{y} \approx \sum_{i=1}^{K} \lambda_i \, \mathcal{B}_i(\mathbf{x}), \qquad (5)$$

where $\mathcal{B}_i$ is a low-precision residual bottleneck [9] and $\lambda_i$ is the scale factor. In Eq. 5, we use a linear combination of homogeneous low-precision bases to approximate one group, where each base has one quantized block (QB in Fig. 2). We illustrate such a group in Fig. 1 (c) and a framework consisting of these groups in Fig. 2 (b). In this way, we effectively keep the original residual structure in each base to preserve the network capacity, and we balance suppressing quantization error accumulation against feature reconstruction. Moreover, compared to the layer-wise strategy, both the number of parameters and the complexity decrease, since there is no need to apply full-precision tensor aggregation within the group. Interestingly, the multi-branch group-wise design is parallelizable and hardware friendly, which can bring great speed-ups during test-time inference.
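A toy sketch of Eq. 5 follows (NumPy; the two-layer residual “block” with ReLU is our simplification, omitting convolutions, batch normalization and activation quantization):

```python
import numpy as np

def binarize(w):
    """alpha * sign(w), with alpha the mean absolute value of w."""
    return np.mean(np.abs(w)) * np.sign(w)

def quantized_block(x, w1, w2):
    """One quantized base (QB): a residual block whose weights are
    binarized, keeping the skip connection of the original structure."""
    h = np.maximum(binarize(w1) @ x, 0.0)   # binarized layer + ReLU
    return x + binarize(w2) @ h             # residual connection

def group_output(x, bases, lambdas):
    """Eq. 5: y ~= sum_i lambda_i * B_i(x), each B_i a quantized block."""
    return sum(lam * quantized_block(x, w1, w2)
               for (w1, w2), lam in zip(bases, lambdas))

rng = np.random.default_rng(1)
d, K = 6, 5
bases = [(rng.standard_normal((d, d)), rng.standard_normal((d, d)))
         for _ in range(K)]
y = group_output(rng.standard_normal(d), bases, np.full(K, 1.0 / K))
```

The key structural point is that the skip connection lives inside each base, so no full-precision aggregation is needed between the layers of a group.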
Furthermore, the group-wise approximation is flexible. We now analyze the case where groups may contain different numbers of blocks. Suppose we partition the network into $P$ groups, following the simple rule that each group must include one or multiple complete residual building blocks. Let $\{s_1, \ldots, s_P\}$ be the block indexes at which we approximate the output feature maps. For the $p$-th group, we consider the blocks $s_{p-1}+1, \ldots, s_p$, where the index $s_0 = 0$. Then we can extend Eq. 5 to the multi-block format:

$$\mathbf{y} \approx \sum_{i=1}^{K} \lambda_i \left( \mathcal{B}_i^{s_p} \circ \cdots \circ \mathcal{B}_i^{s_{p-1}+1} \right)(\mathbf{x}), \qquad (6)$$

where $\mathcal{B}_i^{j}$ is the residual function of the $j$-th block, except with binary weights and quantized activations. Based on Eq. 6, we can efficiently construct a network by stacking these groups, where each group may consist of one or multiple blocks. Different from Eq. 5, we expose a new dimension on each base, namely the number of blocks. This greatly increases the flexibility of the framework, and the optimal structure can be effectively searched with reinforcement learning. We illustrate several possible connections in Fig. 2 and provide detailed discussions in Sec. 3.4.
Relation to ResNeXt: The homogeneous multi-branch architecture design shares some of the spirit of ResNeXt and enjoys the advantage of introducing a “cardinality” dimension. However, our objectives are totally different. ResNeXt aims to increase capacity while maintaining complexity: it divides the input channels into groups, performs efficient group convolutions, and aggregates all group outputs to approximate the original feature map. In contrast, we divide the network into groups and directly replicate the floating-point structure in each branch while quantizing both weights and activations. In this way, we reconstruct the full-precision outputs by aggregating a set of low-precision transformations, reducing complexity on energy-efficient hardware. Furthermore, our transformations are not restricted to a single block as in ResNeXt.
Relation to ShuffleNet: Based on ResNeXt, ShuffleNet proposes to strengthen the relations between groups by replacing the traditional group convolution with pointwise group convolution (GConv) and channel shuffle. One can draw an analogy between a convolution group there and a branch $\mathcal{B}_i$ here, with the difference that each of our branches operates on all input channels. The last GConv layer in the ShuffleNet unit applies a different scale to each output channel and then concatenates them, whereas the output channels of each of our branches share the same scale $\lambda_i$ and we simply add all branches' outputs to form the final representation of the layer. The number of parameters of our group module increases, but all the filters are binary and the activations are quantized, which still suits small storage and fast inference.
Relation to tensor expansion approaches [7, 22, 23]: In [7, 23], binary weight bases are directly obtained from the full-precision weights without being learned, and a linear regression problem has to be solved for each layer during forward propagation. In contrast, we do not directly approximate the full-precision weights. Instead, the binary weights are optimized end-to-end to minimize the final objective while still implicitly reconstructing the intermediate output feature maps. In [22], the input tensor of each layer is decomposed into binary residual tensors and convolved with shared binary weights. However, this cannot guarantee a good approximation of the layer's output and introduces additional full-precision tensor additions at the beginning of each layer. In contrast to tensor expansion, we propose to approximate the full-precision network via “network expansion”. Furthermore, our network structure is quite flexible and highly accurate.
The group-wise approximation approach can be efficiently integrated with Neural Architecture Search (NAS) frameworks [43, 28, 44, 24, 25] to explore the optimal architecture. In our setting, the architecture hyperparameters to generate are the number of groups and how the blocks are partitioned into these groups. Suppose the network has $B$ blocks; partitioning them into contiguous groups amounts to deciding, for each of the $B - 1$ boundaries between consecutive blocks, whether it separates two groups, so the search space has size $2^{B-1}$. For practical networks, $B$ is usually not very large, so our search space is much smaller than those that generate each layer's hyperparameters (e.g., for all layers in ENAS). The proposed approach can also be combined with the knowledge distillation strategy, as in [42, 32]: a target network is trained alongside a guidance network, with an additional regularizer minimizing the difference between the student's and teacher's intermediate feature representations for higher accuracy. However, in this paper we focus only on designing efficient low-precision networks and leave these extensions for future work. We summarize the training algorithm in Sec. S2 in the supplementary material.
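The size of this search space can be checked directly (a small stdlib sketch; the boundary-counting argument follows from the rule that each group must contain complete consecutive blocks):

```python
from math import comb

def num_partitions(B, P=None):
    """Ways to split B sequential blocks into contiguous groups.
    Each of the B - 1 boundaries between consecutive blocks either
    starts a new group or not, giving 2^(B-1) partitions in total,
    and C(B-1, P-1) when the number of groups P is fixed."""
    return comb(B - 1, P - 1) if P is not None else 2 ** (B - 1)

# e.g. a network with 8 residual blocks:
print(num_partitions(8))      # 128 candidate groupings in total
print(num_partitions(8, 4))   # 35 groupings with exactly 4 groups
```

Summing the fixed-P counts over all P recovers the total, confirming the two formulas agree.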
4 Experiment
The proposed method is evaluated on the CIFAR-100 [19] and ImageNet (ILSVRC2012) datasets. CIFAR-100 is an image classification benchmark containing 32x32 images, with a training set of 50,000 and a test set of 10,000. ImageNet is a large-scale dataset with 1.2M training images from 1K categories and 50K validation images. The evaluation metrics are top-1 and top-5 classification accuracy. Several representative networks are tested: AlexNet [20] and ResNet [9]. Our implementation is based on PyTorch.
To investigate the performance of the proposed methods, we analyze the effects of the number of bases, different group architectures and the difference between group-wise approximation and layer-wise approximation strategies. We define several methods for comparison as follows:
Layerwise: It implements the layer-wise feature approximation strategy described in Sec. 3.2.
Group-Net v1: We implement the group-wise feature approximation strategy, where each base consists of one block. It corresponds to the approach described in Eq. 5 and is illustrated in Fig. 2 (b).
Group-Net v2: Similar to Group-Net v1, the only difference is that each group base has two blocks. It is illustrated in Fig. 2 (c) and is explained in Eq. 6.
Group-Net v3: It is an extreme case where each base is a whole network, which can be treated as an ensemble of a set of low-precision networks. This case is shown in Fig. 2 (e).
4.1 Implementation details
As in [29, 2, 40, 42], we quantize the weights and activations of all layers except the first and last, which remain full-precision. In all ImageNet experiments, images are resized to 256x256, and a 224x224 (227x227 for AlexNet) crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted. We do not use any further data augmentation. Batch normalization is applied before each quantization layer as in [2, 40, 42]. We use simple single-crop testing for standard evaluation. No bias term is used. We optimize with Nesterov momentum SGD. For training all low-bitwidth networks, the mini-batch size and weight decay are set to 128 and 0.0001, respectively, and the momentum is 0.9. For training ResNet with non-binary activations, the learning rate starts at 0.05 and is divided by 10 when the accuracy saturates. We train from scratch, since we empirically observe that fine-tuning brings no further benefit. For training ResNet with binary activations, however, we decrease the learning rate to 0.001 to avoid frequent sign changes and pretrain the full-precision counterpart for initialization to preserve accuracy. We remove the nonlinear function before the classification layer in the ResNet-based cases. For AlexNet, we train from scratch with an initial learning rate of 0.01. Following [2, 42], no dropout is used, since quantization itself can be treated as a regularizer.
4.2 Evaluation on ImageNet
In Table 5, we compare our approach with the state-of-the-art quantization approaches BNN [15], XNOR-Net [29], DOREFA-Net, HWGQ-Net [2], EL-Net, SYQ [6] and ABC-Net [23] on the ImageNet image classification task. We consider AlexNet, ResNet-18 and ResNet-50 in this section. In all cases, our model uses 5 group bases with binary weights and 2-bit activations. The results for HWGQ-Net are also based on binary weights and 2-bit activations. DOREFA-Net and EL-Net use 2-bit weights and 2-bit activations. SYQ employs binary weights and 8-bit activations. ABC-Net uses 5 binary weight bases and 5 binary activation bases, which requires computing 25 binary convolutions and 6 floating-point tensor accumulations in each layer. All comparison results are cited directly from the corresponding papers (except DOREFA-Net, which is based on our own implementation). We also report the full-precision accuracy of all compared models from our implementation. For ResNet, we use Group-Net v1 for comparison. For AlexNet, we treat two subsequent convolutional or dense layers as a group and employ the group-wise quantization strategy. From Table 5, we observe that the proposed method outperforms all previous state-of-the-art approaches by a large margin, which shows that the proposed approach can learn to approximate the full-precision network effectively. With binary weights and 2-bit activations, the proposed approach performs stably across all compared popular network structures. With more sophisticated quantization methods [13, 12] and optimization [42, 41], we expect that our performance may be further improved for more practical applications.
4.3 Ablation study
The core idea of Group-Net is the group-wise feature reconstruction strategy, which we evaluate comprehensively in this subsection on the ImageNet dataset with ResNet-18 and ResNet-50.
4.3.1 Bitwidth impact
Table 2: Impact of weight (W) and activation (A) bitwidths; gaps are relative to the full-precision counterparts.

| Model | W | A | Top-1 | Top-5 | Top-1 gap | Top-5 gap |
|---|---|---|---|---|---|---|
| ResNet-18 Group-Net v1 | 1 | 1 | 65.2% | 85.6% | 4.5% | 3.8% |
| ResNet-18 Group-Net v1 | 1 | 2 | 67.6% | 87.8% | 2.1% | 1.6% |
| ResNet-18 Group-Net v1 | 1 | 4 | 69.2% | 88.5% | 0.5% | 0.9% |
| ResNet-18 Group-Net v1 | 1 | 32 | 69.6% | 89.1% | 0.1% | 0.3% |
| ResNet-18 Group-Net v2 | 1 | 4 | 68.3% | 87.9% | 1.4% | 1.5% |
| ResNet-18 Group-Net v3 | 1 | 4 | 64.5% | 85.0% | 5.2% | 4.4% |
| ResNet-50 Group-Net v1 | 1 | 1 | 70.4% | 89.0% | 5.6% | 3.9% |
| ResNet-50 Group-Net v1 | 1 | 2 | 73.4% | 90.8% | 2.6% | 2.1% |
| ResNet-50 Group-Net v1 | 1 | 4 | 75.2% | 91.7% | 0.8% | 1.2% |
This set of experiments assesses the influence of activation precision on the final accuracy. We take Group-Net v1 with ResNet-18 and ResNet-50 on ImageNet for analysis, still using 5 group bases. The results are provided in Table 2. With binary weights and 4-bit or full-precision activations, we achieve nearly lossless accuracy. For example, with binary weights and 4-bit activations, the top-1 accuracy drops are only 0.5% and 0.8% for ResNet-18 and ResNet-50, respectively. Interestingly, with binary activations, where convolutional operations are all xnor and bitcount, we achieve performance comparable to ABC-Net, even though that method has considerably higher complexity than our group-wise design.
4.3.2 Effect of the number of bases
Table 3 (header only; the rows were not recovered): Model | Bases | Top-1 | Top-5 | Top-1 gap | Top-5 gap
We further explore the influence of the number of group bases on the final performance in Table 3 and Fig. 3 (a). When the number of bases is set to 1, the model corresponds to directly quantizing the original full-precision network, and we observe an apparent accuracy drop compared to its full-precision counterpart. With more bases employed, the performance steadily increases. This can be attributed to a better approximation of the output feature maps, trading complexity for accuracy. We can expect that with enough bases, the network has the capacity to approximate the full-precision network precisely. With the multi-branch group-wise design, we achieve high accuracy while still significantly reducing inference time and power consumption. Interestingly, each base can be implemented with modest resources, and the parallel structure is quite friendly to FPGAs.
4.3.3 Group space exploration
We are also interested in the influence of the number of blocks in each group base. We present the results in Table 2 and Fig. 3 (b). We observe that approximating the output feature maps of each block yields the best performance on ResNet-18, indicating that approximating appropriate intermediate layers improves classification accuracy. However, this configuration may not be optimal; we expect to further boost performance by integrating the NAS approaches discussed in Sec. 3.4.
4.3.4 Layer-wise vs. group-wise
We explore the difference between the layer-wise and group-wise design strategies in Table 2 and Fig. 3 (b). Comparing the results, we find a significant 9.1% performance gain of Group-Net v1 over Layerwise under the same bitwidth. Note that the Layerwise approach is similar to the tensor approximation methods in [7, 22, 23]; the differences are described in Sec. 3.3. This strongly shows the necessity of the group-wise design strategy for obtaining promising results, and it demonstrates the importance of suppressing the cumulative quantization error while accurately approximating the output tensors. Moreover, we speculate that this significant gain is partly due to the preserved block structure within the group bases. Interestingly, the group-wise approaches also converge more stably than the layer-wise one. We further provide a comparison on a network without residual connections (i.e., AlexNet) in Sec. S1 of the supplementary material.
4.4 Evaluation on CIFAR-100
We also report our results with AlexNet on the CIFAR-100 dataset in Table 4, using the same group-wise strategy described in Sec. 4.2. DOREFA-Net and EL-Net again use 2-bit weights and 2-bit activations. Our result outperforms all competing approaches on this small dataset, which demonstrates the robustness of the group-wise feature reconstruction strategy and the generalization ability of the proposed approach.
5 Conclusion
In this paper, we have explored highly efficient and accurate CNN architectures with binary weights and low-precision activations. Specifically, we have proposed to decompose the full-precision network into multiple groups, each approximated using a set of low-precision bases that can be optimized in an end-to-end manner. This is much more flexible than previous layer-wise approaches and can be integrated with neural architecture search to explore the optimal structure. The low-precision multi-branch group-wise structure can also be executed in parallel, bringing great benefits for accelerating test-time inference on specialized hardware. Experimental results have demonstrated the robustness of the proposed approach on the ImageNet classification task. We expect to generalize this work to other computer vision tasks.
Appendix A Layer-wise vs. group-wise on AlexNet
In this section, we empirically analyze the difference between the layer-wise and group-wise design strategies on the plain network AlexNet. We perform experiments on the ImageNet dataset. As described in Sec. 4.2 of the paper, we treat two subsequent convolutional or dense layers as a group, which we call Group-S1. We also provide the results of the Layerwise approach described in Sec. 4 of the paper. From Table 5, we observe that the group-wise quantization strategy outperforms the Layerwise approach by significant margins. This shows that the group-wise feature approximation approach is not only effective for residual architectures but also works well on plain network architectures without skip connections. We also illustrate the convergence curves of the two approaches in Fig. 4.
Appendix B Summary of the training algorithm
-  Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
-  Z. Cai, X. He, J. Sun, and N. Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5918–5926, 2017.
-  A. Canziani, A. Paszke, and E. Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016.
-  F. Chollet. Xception: Deep learning with depthwise separable convolutions. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1251–1258, 2017.
-  M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Proc. Adv. Neural Inf. Process. Syst., pages 3123–3131, 2015.
-  J. Faraone, N. Fraser, M. Blott, and P. H. Leong. Syq: Learning symmetric quantization for efficient deep neural networks. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2018.
-  Y. Guo, A. Yao, H. Zhao, and Y. Chen. Network sketching: Exploiting binary structure in deep cnns. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5955–5963, 2017.
-  S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Proc. Adv. Neural Inf. Process. Syst., pages 1135–1143, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 770–778, 2016.
-  Y. He and S. Han. Adc: Automated deep compression and acceleration with reinforcement learning. arXiv preprint arXiv:1802.03494, 2018.
-  Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In Proc. IEEE Int. Conf. Comp. Vis., volume 2, page 6, 2017.
-  L. Hou and J. T. Kwok. Loss-aware weight quantization of deep networks. In Proc. Int. Conf. Learn. Repren., 2018.
-  L. Hou, Q. Yao, and J. T. Kwok. Loss-aware binarization of deep networks. In Proc. Int. Conf. Learn. Repren., 2017.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In Proc. Adv. Neural Inf. Process. Syst., pages 4107–4115, 2016.
-  I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
-  F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.
-  B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2018.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. Adv. Neural Inf. Process. Syst., pages 1097–1105, 2012.
-  F. Li, B. Zhang, and B. Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
-  Z. Li, B. Ni, W. Zhang, X. Yang, and W. Gao. Performance guaranteed network acceleration via high-order residual quantization. In Proc. IEEE Int. Conf. Comp. Vis., pages 2584–2592, 2017.
-  X. Lin, C. Zhao, and W. Pan. Towards accurate binary convolutional neural network. In Proc. Adv. Neural Inf. Process. Syst., pages 344–352, 2017.
-  C. Liu, B. Zoph, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. arXiv preprint arXiv:1712.00559, 2017.
-  H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. In Proc. Int. Conf. Learn. Repren., 2018.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3431–3440, 2015.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In Proc. Adv. Neural Inf. Process. Syst. Workshops, 2017.
-  H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
-  M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In Proc. Eur. Conf. Comp. Vis., pages 525–542, 2016.
-  J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 779–788, 2016.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proc. Adv. Neural Inf. Process. Syst., pages 91–99, 2015.
-  A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In Proc. Int. Conf. Learn. Repren., 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. Int. J. Comp. Vis., 115(3):211–252, 2015.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proc. AAAI Conf. on Arti. Intel., volume 4, page 12, 2017.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1–9, 2015.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2818–2826, 2016.
-  S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5987–5995, 2017.
-  X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
-  A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. In Proc. Int. Conf. Learn. Repren., 2017.
-  S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
-  C. Zhu, S. Han, H. Mao, and W. J. Dally. Trained ternary quantization. In Proc. Int. Conf. Learn. Repren., 2017.
-  B. Zhuang, C. Shen, M. Tan, L. Liu, and I. Reid. Towards effective low-bitwidth convolutional neural networks. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2018.
-  B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In Proc. Int. Conf. Learn. Repren., 2017.
-  B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2018.