Binarized Neural Architecture Search


Neural architecture search (NAS) can have a significant impact on computer vision by automatically designing optimal neural network architectures for various tasks. A variant, binarized neural architecture search (BNAS), with a search space of binarized convolutions, can produce extremely compressed models. Unfortunately, this area remains largely unexplored. BNAS is more challenging than NAS due to the learning inefficiency caused by the binarization-related optimization requirements and the huge architecture space. To address these issues, we introduce channel sampling and operation space reduction into a differentiable NAS to significantly reduce the cost of searching. This is accomplished through a performance-based strategy that abandons operations with less potential. Two optimization methods for binarized neural networks are used to validate the effectiveness of our BNAS. Extensive experiments demonstrate that the proposed BNAS achieves a performance comparable to NAS on both the CIFAR and ImageNet databases. On CIFAR-10, BNAS achieves accuracy comparable to full-precision NAS, but with a significantly compressed model and a faster search than the state-of-the-art PC-DARTS.


Neural architecture search (NAS) has attracted great attention due to its remarkable performance in various deep learning tasks. Impressive results have been shown for reinforcement learning (RL) based methods [37, 36], for example, which train and evaluate a large number of neural networks across many GPUs over several days. Recent methods like differentiable architecture search (DARTS) reduce the search time by formulating the task in a differentiable manner [19]. DARTS relaxes the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent, which provides a fast solution for effective network architecture search. To reduce the redundancy in the network space, partially-connected DARTS (PC-DARTS) was recently introduced to perform a more efficient search without compromising the performance of DARTS [31].

Although the network optimized by DARTS or its variants has a smaller model size than traditional light models, the searched network still suffers from an inefficient inference process due to the complicated architectures generated by multiple stacked full-precision convolution operations. Consequently, the adaptation of the searched network to an embedded device is still computationally expensive and inefficient. Clearly the problem requires further exploration to overcome these challenges.

One way to address these challenges is to transfer NAS to a binarized neural architecture search (BNAS) by exploiting the advantages of binarized neural networks (BNNs) in memory saving and computational cost reduction [27]. Binarized filters have been used in traditional convolutional neural networks (CNNs) to compress deep models [23, 7, 6, 30], showing up to 58× speedup and 32× memory saving. In [23], the XNOR network is presented, where both the weights and inputs of the convolution are approximated with binary values. This results in an efficient implementation of convolutional operations by reconstructing the unbinarized filters with a single scaling factor. In [9], a projection convolutional neural network (PCNN) is proposed to realize BNNs based on a simple back propagation algorithm. In our BNAS framework, we re-implement XNOR and PCNN to validate its effectiveness. We show that the BNNs obtained by BNAS can outperform conventional models by a large margin. This is a significant contribution to the field of BNNs, considering that the performance of conventional BNNs is not yet comparable with that of their corresponding full-precision models in terms of accuracy.
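To make the XNOR-style approximation concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code): each full-precision filter is replaced by its sign multiplied by a single scaling factor, the mean absolute value of the filter.

```python
import numpy as np

def xnor_binarize(w):
    """XNOR-style kernel binarization sketch: approximate a
    full-precision kernel w by alpha * sign(w), where the scaling
    factor alpha is the mean absolute value of w (one scalar per
    filter; here w is a single flattened filter)."""
    alpha = np.abs(w).mean()          # single scaling factor
    b = np.where(w >= 0, 1.0, -1.0)   # binarized weights in {-1, +1}
    return alpha * b

w = np.array([0.5, -0.25, 0.75, -1.0])
w_bin = xnor_binarize(w)
# alpha = mean(|w|) = (0.5 + 0.25 + 0.75 + 1.0) / 4 = 0.625
```

Storing only the signs plus one scalar per filter is what yields the roughly 32× memory saving mentioned above.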

Figure 1: The main steps of our BNAS: (1) Search an architecture using PC-DARTS. (2) For each edge, select the half of the operations with less potential. (3) Select an architecture by sampling (without replacement) one operation from this reduced set for every edge, and then train the selected architecture. (4) Update the operation selection likelihoods based on the accuracy obtained by the selected architecture on the validation data. (5) Abandon the operation with the minimal selection likelihood from the search space of every edge.

The search process of our BNAS consists of two steps. The first is operation potential ordering based on partially-connected DARTS (PC-DARTS) [31], which serves as a baseline for our BNAS. It is further sped up by a second operation reduction step guided by a performance-based strategy. In the operation reduction step, we prune one operation at each iteration from the half of the operations with less potential as calculated by PC-DARTS. As such, the optimization of the two steps becomes progressively faster because the search space shrinks with the operation pruning. We take advantage of the differentiable framework of DARTS, where the search and the performance evaluation share the same setting. We also enrich the search strategy of DARTS: not only is the gradient used to determine which operation is better, but the proposed performance evaluation is also included to further reduce the search space. In this way, BNAS is both fast and effective. The contributions of our paper include:

  • BNAS is developed based on a new search algorithm which solves BNN optimization and architecture search in a unified framework.

  • The search space is greatly reduced through a performance-based strategy that abandons operations with less potential, which significantly improves the search efficiency.

  • Extensive experiments demonstrate that the proposed algorithm achieves much better performance than other light models on CIFAR-10 and ImageNet.

Related Work

Thanks to the rapid development of deep learning, significant gains in performance have been realized in a wide range of computer vision tasks, most of which rely on manually designed network architectures [16, 28, 11, 14]. Recently, a new approach called neural architecture search (NAS) has been attracting increased attention. The goal is to design neural architectures automatically, replacing conventional hand-crafted ones. Existing NAS approaches need to explore a very large search space and can be roughly divided into three types: evolution-based, reinforcement-learning-based, and one-shot-based.

In order to implement the architecture search within a short period of time, researchers try to reduce the cost of evaluating each searched candidate. Early efforts include sharing weights between searched and newly generated networks [2]. Later, this method was generalized into a more elegant framework named one-shot architecture search [1, 4, 19, 22, 29, 35, 34]. In these approaches, an over-parameterized network or super network covering all candidate operations is trained only once, and the final architecture is obtained by sampling from this super network. For example, Brock et al. [1] trained the over-parameterized network using a HyperNet [10], and Pham et al. [22] proposed to share parameters among child models to avoid retraining each candidate from scratch. DARTS [19] introduces a differentiable framework, and thus combines the search and evaluation stages into one. Despite its simplicity, researchers have found some drawbacks in DARTS and proposed a few improved approaches [4, 29, 5].

Unlike previous methods, we study BNAS based on efficient operation reduction. We prune one operation at each iteration from the half of the operations with smaller weights as calculated by PC-DARTS, so the search becomes progressively faster during optimization.

Binarized Neural Architecture Search

In this section, we first describe the search space in a general form, where the computation procedure for an architecture (or a cell in it) is represented as a directed acyclic graph. We then review the baseline PC-DARTS [31], which improves memory efficiency but is still insufficient for BNAS. Finally, an operation sampling scheme and a performance-based search strategy are proposed to effectively reduce the search space. Our BNAS framework is shown in Fig. 1, and additional details are described in the rest of this section.

Search Space

Following Zoph et al. (2018), Real et al. (2018), and Liu et al. (2018a;b), we search for a computation cell as the building block of the final architecture. A network consists of a pre-defined number of cells [36], which can be either normal cells or reduction cells. Each cell takes the outputs of the two previous cells as input. A cell is a fully-connected directed acyclic graph (DAG) of nodes, as illustrated in Fig. 2(a). Each node takes its dependent nodes as input and generates an output through a sum operation. Here each node is a specific tensor (e.g., a feature map in convolutional neural networks), and each directed edge between two nodes denotes an operation sampled from the operation set. Note that an ordering constraint on the edges ensures there are no cycles in a cell. Each cell takes the outputs of two dependent cells as input, and we define the two input nodes of a cell accordingly for simplicity. Following [19], the operation set consists of eight operations: max pooling, no connection (zero), average pooling, skip connection (identity), two dilated convolutions with different rates, and two depth-wise separable convolutions with different kernel sizes, as illustrated in Fig. 2(b). The search space of a cell is constructed from the operations of all the edges.
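As an illustration of the cell computation, the following sketch (with hypothetical scalar `ops` standing in for convolutions) computes the intermediate nodes of a DARTS-style cell as a DAG, where every node sums the operations applied to its predecessors:

```python
def cell_forward(inputs, ops):
    """Compute a DARTS-style cell as a DAG. `inputs` are the two input
    nodes; ops[(i, j)] is the operation on the edge i -> j (simple
    scalar functions stand in for convolutions here). Each
    intermediate node sums the operations applied to all of its
    predecessors (only edges with i < j exist, so the graph is
    acyclic); the cell output concatenates the intermediate nodes."""
    nodes = list(inputs)                      # nodes 0 and 1 are inputs
    n_intermediate = 4
    for j in range(2, 2 + n_intermediate):
        nodes.append(sum(ops[(i, j)](nodes[i]) for i in range(j)))
    return nodes[2:]                          # concatenated output

# Toy example: the identity operation on every edge.
ops = {(i, j): (lambda x: x) for j in range(2, 6) for i in range(j)}
out = cell_forward([1.0, 1.0], ops)
```

With identity on every edge, each intermediate node is the sum of all previous nodes, so the outputs double at each step.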

Unlike conventional convolutions, our BNAS is achieved by transforming all the convolutions in the search space into binarized convolutions, replacing each full-precision kernel by a binarized kernel, as shown in Fig. 2(b). To build BNAS, one key step is how to binarize the kernels, which can be implemented based on state-of-the-art BNNs such as XNOR [24] or PCNN [9]. As we know, the optimization of BNNs is more challenging than that of conventional CNNs [9, 24], which adds an additional burden to NAS. To solve this, we introduce channel sampling and operation space reduction into differentiable NAS to significantly reduce the cost in GPU hours, leading to an efficient BNAS.

(a) Cell
(b) Operation Set
Figure 2: (a) A cell contains 7 nodes: two input nodes, four intermediate nodes that apply sampled operations to the input nodes and preceding intermediate nodes, and an output node that concatenates the outputs of the four intermediate nodes. (b) The set of candidate operations on an edge, including binarized convolutions.


The core idea of PC-DARTS is to take advantage of partial channel connections to improve memory efficiency. Taking the connection between two nodes for example, this involves defining a channel sampling mask, which assigns 1 to selected channels and 0 to masked ones. The selected channels are sent to a mixed computation of operations, while the masked ones bypass these operations and are directly copied to the output, which is formulated as:


where the two terms correspond to the selected and masked channels, respectively, and each operation on the edge is weighted by its architecture parameter.
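The partial channel connection can be sketched as follows. This simplified NumPy version uses a fixed mask over the first 1/K of the channels (real PC-DARTS samples the channels randomly and shuffles them), with a hypothetical `mixed_op` standing in for the weighted sum of candidate operations:

```python
import numpy as np

def partial_channel_forward(x, mixed_op, k):
    """PC-DARTS-style partial channel connection (sketch). Only the
    first 1/k of the channels are sent through the mixed operation;
    the remaining channels bypass it and are copied directly to the
    output. x is a (channels, ...) array."""
    c = x.shape[0]
    n_sel = c // k
    mask = np.zeros(c, dtype=bool)
    mask[:n_sel] = True                     # channel sampling mask
    out = x.copy()
    out[mask] = mixed_op(x[mask])           # selected channels
    return out                              # masked channels copied

x = np.arange(8.0)                          # 8 channels
y = partial_channel_forward(x, lambda v: 2 * v, k=4)
```

With k=4, only 2 of the 8 channels pass through the mixed operation, which is where the memory saving comes from.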

PC-DARTS sets the proportion of selected channels to 1/K by regarding K as a hyper-parameter, in which case the computation cost is also reduced by a factor of K. However, the size of the whole search space is 2 × |O|^{|E|}, where O is the operation set, E is the set of possible edges in the fully-connected DAG of intermediate nodes, and the factor 2 comes from the two types of cells. In our case with four intermediate nodes, together with the two input nodes, this is an extremely large space in which to search for a binarized neural architecture, which needs more training time than a full-precision one. Therefore, efficient optimization strategies for BNAS are required.
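To get a feel for the scale, the search-space size can be computed under the standard DARTS setting (assumed here: 8 candidate operations, 4 intermediate nodes, and 2 input nodes per cell):

```python
# Search-space size under the standard DARTS setting (an assumption
# here): 8 candidate operations, 4 intermediate nodes, 2 input nodes.
n_ops = 8
# Edges into intermediate node j come from the 2 inputs plus all
# earlier intermediate nodes: 2 + 3 + 4 + 5 = 14 edges per cell.
n_edges = sum(2 + j for j in range(4))
# Two cell types (normal and reduction) are searched jointly.
space = 2 * n_ops ** n_edges   # about 8.8 trillion cell structures
```

Even before binarization, exhaustive search over such a space is clearly infeasible, which motivates the reduction strategy below.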

Sampling for BNAS

For BNAS, PC-DARTS is still time and memory consuming because of the large search space, although it is already faster than most existing NAS methods. We introduce another way to increase efficiency by reducing the search space. Based on the architecture parameters learned by PC-DARTS, we select the half of the operations with less potential for each edge, resulting in a reduced operation set. We then sample an operation from this reduced set for each edge, guided by the performance-based strategy proposed in the next section, in order to reduce the search space. We follow the rule of sampling without replacement: after one operation is sampled randomly from the reduced set of an edge, this operation is removed from that set. For convenience of description, the operations in each edge are represented as a one-hot indicator vector. In other words, we sample only one operation per edge at a time, which effectively reduces the memory cost compared with PC-DARTS [31].
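Sampling without replacement can be sketched as follows (the operation names are hypothetical). Each edge's candidate pool shrinks by one per round until it is exhausted, so every operation is trained and scored exactly once:

```python
import random

def sample_architectures(edge_ops, rng):
    """Sample architectures by drawing (without replacement) one
    operation per edge until the per-edge candidate pools are empty.
    edge_ops maps each edge to its reduced operation set; across the
    returned rounds, every operation of every edge appears exactly
    once."""
    pools = {e: list(ops) for e, ops in edge_ops.items()}
    for p in pools.values():
        rng.shuffle(p)                 # random sampling order
    rounds = []
    while any(pools.values()):
        # one architecture: one operation popped from each pool
        arch = {e: pools[e].pop() for e in pools if pools[e]}
        rounds.append(arch)
    return rounds

rng = random.Random(0)
edge_ops = {"e1": ["conv3", "skip", "pool"], "e2": ["conv5", "zero", "skip"]}
rounds = sample_architectures(edge_ops, rng)
```

Each round corresponds to one sampled architecture that is trained for an epoch in the performance-based strategy below.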

Performance-based Strategy for BNAS

Reinforcement learning is inefficient for architecture search due to the delayed rewards in network training, i.e., the evaluation of a structure is usually done after the network training converges. On the other hand, we can evaluate a cell while training the network. Inspired by [32], we use a performance-based strategy to boost the search efficiency by a large margin. Ying et al. [32] carried out a series of experiments showing that in the early stage of training, the validation accuracy ranking of different network architectures is not a reliable indicator of the final architecture quality. However, we observe that their results actually suggest a useful property: if an architecture performs badly at the beginning of training, there is little hope that it can be part of the final optimal model. As training progresses, this observation becomes increasingly reliable. Based on it, we derive a simple yet effective operation abandoning process: as the number of training epochs increases, we progressively abandon the worst-performing operation in each edge.

To this end, we randomly sample one operation from the reduced set of every edge, obtain the validation accuracy by training the sampled network for one epoch, and finally assign this accuracy to all the sampled operations. These three steps are performed repeatedly, sampling without replacement, so that each operation of every edge is assigned exactly one accuracy per pass.

We repeat this procedure T times, so that each operation of every edge accumulates T accuracies. We then define the selection likelihood of the k-th operation in the reduced set of each edge as a softmax over its average accuracy:

p_k = exp(ā_k) / Σ_m exp(ā_m),

where ā_k denotes the average of the T accuracies assigned to the k-th operation. The selection likelihoods of the other operations not in the reduced set are estimated by Eq. 3 as a value balanced between the maximum and the average of the likelihoods in Eq. 2; the ceiling function (the smallest integer no less than its argument) appears there because the number of remaining operations can be odd during the iterations of the proposed Algorithm 1. Then the selection likelihoods are updated by Eq. 4 using a mask, which is 1 for the operations in the reduced set and 0 for the others.

Finally, we abandon the operation with the minimal selection likelihood for each edge, so that the search space size is significantly reduced. Formally, we have:


The optimal structure is obtained when only one operation is left in each edge. Our performance-based search algorithm is presented in Algorithm 1. Note that in line 1, PC-DARTS is performed for several epochs as a warm-up to find an initial architecture, and line 14 updates the architecture parameters for all the edges due to the reduction of the search space.
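A minimal sketch of the performance-based strategy is given below (our own simplification: a softmax over mean validation accuracies, which may differ from the paper's exact normalization). It scores each operation from its accumulated accuracies and abandons the worst one on an edge:

```python
import math

def selection_likelihood(acc):
    """Softmax over each operation's mean validation accuracy
    (a sketch of the performance-based selection likelihood; the
    exact normalization in the paper may differ). acc maps each
    operation name to its list of accumulated accuracies."""
    means = {op: sum(a) / len(a) for op, a in acc.items()}
    z = sum(math.exp(m) for m in means.values())
    return {op: math.exp(m) / z for op, m in means.items()}

def abandon_worst(ops, acc):
    """Remove the operation with the minimal selection likelihood,
    shrinking the per-edge search space by one."""
    p = selection_likelihood(acc)
    worst = min(p, key=p.get)
    return [op for op in ops if op != worst]

# Hypothetical accuracies collected over two sampling passes.
acc = {"conv3": [0.90, 0.92], "skip": [0.80, 0.78], "pool": [0.85, 0.86]}
remaining = abandon_worst(["conv3", "skip", "pool"], acc)
```

Since the softmax is monotone in the mean accuracy, abandoning the minimum-likelihood operation is equivalent to dropping the operation with the lowest average validation accuracy.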

Input: Training data, Validation data, Searching hyper-graph: , , for all edges;
Output: Optimal structure ;
1 Search an architecture for epochs based on using PC-DARTS;
2 while  do
3        Select the subset consisting of the operations with the smallest selection likelihoods from the operation set of every edge;
4        for  epoch do
5               ;
6               for  epoch do
7                      Select an architecture by sampling (without replacement) one operation from for every edge;
8                      Train the selected architecture and get the accuracy on the validation data;
9                      Assign this accuracy to all the sampled operations;
11               end for
13        end for
14       Update the selection likelihoods using Eq. 4;
15        Update the search space using Eq. 5;
16        Search the architecture for epochs based on using PC-DARTS;
17        ;
19 end while
Algorithm 1 Performance-Based Search
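Putting the pieces of Algorithm 1 together, the overall loop can be sketched as below. Here `evaluate` is a hypothetical stand-in for training a sampled architecture for one epoch and returning its validation accuracy, and the PC-DARTS warm-up and architecture-parameter updates are omitted:

```python
import random

def performance_based_search(edge_ops, evaluate, rng):
    """Skeleton of the performance-based search: while more than one
    operation remains on any edge, score each operation by the
    validation accuracy of the sampled architectures it appears in,
    then abandon the worst-scoring operation on every edge."""
    while any(len(ops) > 1 for ops in edge_ops.values()):
        scores = {e: {op: [] for op in ops} for e, ops in edge_ops.items()}
        # Sample each operation exactly once per edge (no replacement).
        pools = {e: rng.sample(ops, len(ops)) for e, ops in edge_ops.items()}
        while any(pools.values()):
            arch = {e: pools[e].pop() for e in pools if pools[e]}
            a = evaluate(arch)            # train one epoch, validate
            for e, op in arch.items():
                scores[e][op].append(a)   # assign accuracy to all ops
        for e, ops in edge_ops.items():
            if len(ops) > 1:              # abandon the worst operation
                worst = min(ops, key=lambda op: sum(scores[e][op]))
                ops.remove(worst)
    return edge_ops

# Toy run: accuracy depends only on the (hypothetical) op on edge e1.
rng = random.Random(1)
quality = {"a": 0.9, "b": 0.5}
edge_ops = {"e1": ["a", "b"], "e2": ["c", "d"]}
result = performance_based_search(
    edge_ops, evaluate=lambda arch: quality[arch["e1"]], rng=rng)
```

Because each iteration removes one operation per edge, the loop terminates after at most |O| − 1 iterations, with progressively cheaper iterations as the pools shrink.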

Optimization for BNAS

In this paper, the binarized kernel weights are computed based on XNOR [24] or PCNN [9]. Both methods are easily implemented in our BNAS framework, and the source code will be publicly available soon.

Binarizing CNNs, to the best of our knowledge, shares a common implementation framework. Without loss of generality, at each layer, a full-precision kernel is decomposed into its direction and a shared amplitude, and the corresponding binarized kernel is obtained as the element-wise multiplication of the binarized direction and the amplitude. We then employ an amplitude loss function to reconstruct the full-precision kernels as:


The element-wise multiplication combines the binarized kernels and the amplitude matrices to approximate the full-precision kernels. The amplitudes are solved differently in different BNNs, such as PCNN [9] and XNOR [24]. The complete loss function for BNAS is defined as:


where the conventional loss function (e.g., cross-entropy) is combined with the amplitude loss.
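The loss construction can be sketched numerically as follows (a hedged illustration: the decomposition into sign and mean-absolute amplitude, and the balancing weight `lam`, are assumptions rather than the paper's exact formulation):

```python
import numpy as np

def amplitude_loss(w, alpha, b):
    """Amplitude (reconstruction) loss sketch: the squared error
    between the full-precision kernel w and its binarized
    approximation alpha * b (element-wise product), used to keep
    binarized kernels close to their full-precision counterparts."""
    return 0.5 * np.sum((w - alpha * b) ** 2)

def total_loss(ce_loss, w, alpha, b, lam=1e-4):
    """Complete BNAS loss sketch: conventional loss (e.g. cross-
    entropy) plus a weighted amplitude loss; `lam` is a hypothetical
    balancing hyper-parameter, not a value from the paper."""
    return ce_loss + lam * amplitude_loss(w, alpha, b)

w = np.array([0.5, -0.25])
b = np.sign(w)                  # binarized direction in {-1, +1}
alpha = np.abs(w).mean()        # shared amplitude = 0.375
```

Minimizing the amplitude term drives the scaled binary kernels toward the full-precision ones, while the conventional term drives task accuracy.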


Experiments

In this section, we compare our BNAS with state-of-the-art NAS methods, and also compare the BNNs obtained by our BNAS based on XNOR [24] and PCNN [9].

Experiment Protocol

In these experiments, we first search neural architectures with an over-parameterized network on CIFAR-10, and then evaluate the best architecture with a stacked, deeper network on the same dataset. We also perform experiments that search architectures directly on ImageNet. We run the experiments multiple times and find that the resulting architectures show only slight variation in performance, which demonstrates the stability of the proposed method.

Architecture Test Error # Params Search Cost Search
(%) (M) (GPU days) Method
ResNet-18 [11] 3.53 11.1 (32 bits) - Manual
WRN-22 [33] 4.25 4.33 (32 bits) - Manual
DenseNet [14] 4.77 1.0 (32 bits) - Manual
SENet [13] 4.05 11.2 (32 bits) - Manual
ResNet-18 (XNOR) 6.69 11.17 (1 bit) - Manual
ResNet-18 (PCNN) 5.63 11.17 (1 bit) - Manual
WRN-22 (PCNN) [9] 5.69 4.29 (1 bit) - Manual
Network in [20] 6.13 4.30 (1 bit) - Manual
NASNet-A [37] 2.65 3.3 (32 bits) 1800 RL
AmoebaNet-A [25] 3.34 3.2 (32 bits) 3150 Evolution
PNAS [17] 3.41 3.2 (32 bits) 225 SMBO
ENAS [22] 2.89 4.6 (32 bits) 0.5 RL
Path-level NAS [3] 3.64 3.2 (32 bits) 8.3 RL
DARTS(first order) [19] 2.94 3.1 (32 bits) 1.5 Gradient-based
DARTS(second order) [19] 2.83 3.4 (32 bits) 4 Gradient-based
PC-DARTS 2.78 3.5 (32 bits) 0.15 Gradient-based
BNAS (full-precision) 2.84 3.3 (32 bits) 0.08 Performance-based
BNAS (XNOR) 5.71 2.3 (1 bit) 0.104 Performance-based
BNAS (XNOR, larger) 4.88 3.5 (1 bit) 0.104 Performance-based
BNAS (PCNN) 3.94 2.6 (1 bit) 0.09375 Performance-based
BNAS (PCNN, larger) 3.47 4.6 (1 bit) 0.09375 Performance-based
Table 1: Test error rates for human-designed full-precision networks, human-designed binarized networks, full-precision networks obtained by NAS, and networks obtained by our BNAS on CIFAR-10. Note that the parameters are 1 bit in binarized networks and 32 bits in full-precision networks. For a fair comparison, we select architectures found by NAS with similar numbers of parameters. In addition, we also train an optimal architecture in a larger setting, i.e., with more initial channels, for both XNOR and PCNN.

We use the same datasets and evaluation metrics as existing NAS works [19, 3, 37, 17]. First, most experiments are conducted on CIFAR-10 [15], which has 50K training images and 10K testing images of resolution 32×32 from 10 classes. The color intensities of all images are normalized. During the architecture search, the 50K training samples of CIFAR-10 are divided into two subsets of equal size, one for training the network weights and the other for finding the architecture hyper-parameters. When reducing the search space, we randomly select a validation set from the training images (used in line 8 of Algorithm 1). To further evaluate the generalization capability, we stack the optimal cells discovered on CIFAR-10 into a deeper network, and then evaluate the classification accuracy on ILSVRC 2012 ImageNet [26], which consists of 1,000 classes with 1.28M training images and 50K validation images.

In the search process, we consider a network of stacked cells, where the reduction cells are inserted at the second and fourth layers and the others are normal cells. There are four intermediate nodes in each cell. Our experiments follow PC-DARTS: the channel-sampling hyper-parameter is set for CIFAR-10 so that only a fraction of the features is sampled for each edge during the warm-up search based on the full operation set (line 1 in Algorithm 1). Note that a larger number of warm-up epochs has little effect on the final performance but costs more search time. We freeze the architecture hyper-parameters and only allow the network parameters, such as filter weights, to be tuned in the first epochs; in the next 2 epochs, we train both the architecture hyper-parameters and the network parameters. This provides an initialization for the network parameters and thus alleviates the drawback of parameterized operations compared with parameter-free operations. We also set the iteration numbers in lines 4 and 14 of Algorithm 1 so that the network is trained for relatively few epochs, with a larger batch size (due to the few operation samplings) while reducing the search space. We use SGD with momentum to optimize the network weights, with an initial learning rate annealed down to zero following a cosine schedule, a momentum of 0.9, and weight decay. A separate learning rate is used for finding the architecture hyper-parameters.

After the search, in the architecture evaluation step, our experimental setting is similar to [19, 37, 22]. A larger network of stacked cells (normal cells and reduction cells) is trained on CIFAR-10 with the additional regularization of cutout [8]. We use the SGD optimizer with an initial learning rate annealed down to zero following a cosine schedule without restart, momentum, weight decay, and gradient clipping. When stacking the cells for evaluation on ImageNet, the evaluation stage follows that of DARTS, which starts with three convolution layers of stride 2 to reduce the input image resolution from 224×224 to 28×28. The cells (normal and reduction) are stacked after these three layers, and the network is trained from scratch using the SGD optimizer with momentum, an initial learning rate decayed down to zero following a cosine schedule, and weight decay. Additional enhancements, including label smoothing and an auxiliary loss tower, are adopted during training. All the experiments and models are implemented in PyTorch [21].

Results on CIFAR-10

We compare our method with both manually designed networks and networks found by NAS. The manually designed networks include ResNet [11], Wide ResNet (WRN) [33], DenseNet [14] and SENet [13]. We classify the networks obtained by NAS according to their search methods: RL (NASNet [37], ENAS [22], and Path-level NAS [3]), evolutionary algorithms (AmoebaNet [25]), sequential model-based optimization (SMBO) (PNAS [17]), and gradient-based methods (DARTS [19] and PC-DARTS [31]).

The results for the different architectures on CIFAR-10 are summarized in Tab. 1. Using BNAS, we search for two binarized networks based on XNOR [24] and PCNN [9], respectively. In addition, we also train larger XNOR and PCNN variants with more initial channels. We can see that the test errors of the binarized networks obtained by our BNAS are comparable to or smaller than those of the full-precision human-designed networks, and are significantly smaller than those of the other binarized networks.

Architecture Accuracy (%) Params Search Cost Search
Top1 Top5 (M) (GPU days) Method
ResNet-18 [9] 69.3 89.2 11.17 (32 bits) - Manual
MobileNetV1 [12] 70.6 89.5 4.2 (32 bits) - Manual
ResNet-18 (PCNN) [9] 63.5 85.1 11.17 (1 bit) - Manual
NASNet-A [37] 74.0 91.6 5.3 (32 bits) 1800 RL
AmoebaNet-A [25] 74.5 92.0 5.1 (32 bits) 3150 Evolution
AmoebaNet-C [25] 75.7 92.4 6.4 (32 bits) 3150 Evolution
PNAS [17] 74.2 91.9 5.1 (32 bits) 225 SMBO
DARTS [19] 73.1 91.0 4.9 (32 bits) 4 Gradient-based
PC-DARTS [31] 75.8 92.7 5.3 (32 bits) 3.8 Gradient-based
BNAS (PCNN) 71.3 90.3 6.2 (1 bit) 2.6 Performance-based
Table 2: Comparison with state-of-the-art image classification methods on ImageNet. The BNAS and PC-DARTS architectures are searched directly on ImageNet; the others are searched on CIFAR-10 and then transferred to ImageNet.

Compared with the full-precision networks obtained by other NAS methods, the binarized networks found by our BNAS have comparable test errors but much more compressed models. Note that the numbers of parameters of all these searched networks are less than 5M, but the binarized networks need only 1 bit to store each parameter, while the full-precision networks need 32 bits. In terms of search efficiency, our BNAS is faster than the previously fastest method, PC-DARTS, as tested on our platform (an NVIDIA GTX TITAN Xp). We attribute these superior results to the proposed scheme of search space reduction.

Our BNAS method can also be used to search for full-precision networks. In Tab. 1, BNAS (full-precision) and PC-DARTS perform equally well, but BNAS is faster. Both the binarized methods XNOR and PCNN perform well in our BNAS, which shows the generalization ability of BNAS. Fig. 3 and Fig. 4 show the best cells searched by BNAS based on XNOR and PCNN, respectively.

We also use PC-DARTS to perform a binarized architecture search based on PCNN on CIFAR-10, resulting in a network denoted as PC-DARTS (PCNN). Compared with PC-DARTS (PCNN), BNAS (PCNN) achieves a better test accuracy with less search time. The reason may be that the performance-based strategy can help find better operations for recognition.

(a) Normal Cell
(b) Reduction Cell
Figure 3: Detailed structures of the best cells discovered on CIFAR-10 using BNAS based on XNOR. In the normal cell, the stride of the operations on the input nodes is 1, and in the reduction cell, the stride is 2.

Results on ImageNet

We further compare with state-of-the-art image classification methods on ImageNet. Here the BNAS network is searched directly on ImageNet by stacking the cells, with the binarization based on PCNN. From the results in Tab. 2, we make the following observations: (1) BNAS (PCNN) performs better than the human-designed binarized network (71.3% vs. 63.5%) and has far fewer parameters (6.2M vs. 11.17M). (2) BNAS (PCNN) performs similarly to the human-designed full-precision networks (71.3% vs. 70.6% for MobileNetV1), with a much more highly compressed model. (3) Compared with the full-precision networks obtained by other NAS methods, BNAS (PCNN) shows only a small performance drop, but is the fastest in terms of search efficiency (2.6 vs. 3.8 GPU days for PC-DARTS) and yields a much more highly compressed model due to the binarization of the network. The above results show the excellent transferability of our BNAS method.

(a) Normal Cell
(b) Reduction Cell
Figure 4: Detailed structures of the best cells discovered on CIFAR-10 using BNAS based on PCNN. In the normal cell, the stride of the operations on input nodes is 1, and in the reduction cell, the stride is 2.


Conclusion

In this paper, we have proposed BNAS, the first binarized neural architecture search algorithm, which effectively reduces the search time by pruning the search space in the early training stages. It is faster than the previous most efficient search method, PC-DARTS. The binarized networks searched by BNAS achieve excellent accuracies on CIFAR-10 and ImageNet, performing comparably to the full-precision networks obtained by other NAS methods, but with much more compressed models.


Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61672079, 61473086, 61773117, and 614730867, and by Shenzhen Science and Technology Program KQTD2016112515134654. Baochang Zhang is also with the Shenzhen Academy of Aerospace Technology, Shenzhen, China.


References

  1. A. Brock, T. Lim, J. M. Ritchie and N. Weston (2017) SMASH: one-shot model architecture search through hypernetworks. arXiv. Cited by: Related Work.
  2. H. Cai, T. Chen, W. Zhang, Y. Yu and J. Wang (2018) Efficient architecture search by network transformation. In Proc. of AAAI, Cited by: Related Work.
  3. H. Cai, J. Yang, W. Zhang, S. Han and Y. Yu (2018) Path-level network transformation for efficient architecture search. arXiv. Cited by: Experiment Protocol, Results on CIFAR-10, Table 1.
  4. H. Cai, L. Zhu and S. Han (2018) ProxylessNAS: direct neural architecture search on target task and hardware. arXiv. Cited by: Related Work.
  5. X. Chen, L. Xie, J. Wu and Q. Tian (2019) Progressive differentiable architecture search: bridging the depth gap between search and evaluation. arXiv. Cited by: Related Work.
  6. M. Courbariaux, Y. Bengio and J. David (2015) Binaryconnect: training deep neural networks with binary weights during propagations. In Proc. of NIPS, Cited by: Introduction.
  7. M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv and Y. Bengio (2016) Binarized neural networks: training deep neural networks with weights and activations constrained to +1 or −1. arXiv. Cited by: Introduction.
  8. T. DeVries and G. W. Taylor (2017) Improved regularization of convolutional neural networks with cutout. arXiv. Cited by: Experiment Protocol.
  9. J. Gu, C. Li, B. Zhang, J. Han, X. Cao, J. Liu and D. Doermann (2019) Projection convolutional neural networks for 1-bit cnns via discrete back propagation. In Proc. of AAAI, Cited by: Introduction, Search Space, Optimization for BNAS, Optimization for BNAS, Results on CIFAR-10, Table 1, Table 2, Experiments.
  10. D. Ha, A. Dai and Q. V. Le (2016) Hypernetworks. arXiv. Cited by: Related Work.
  11. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In Proc. of CVPR, Cited by: Related Work, Results on CIFAR-10, Table 1.
  12. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv. Cited by: Table 2.
  13. J. Hu, L. Shen and G. Sun (2018) Squeeze-and-excitation networks. In Proc. of CVPR, Cited by: Results on CIFAR-10, Table 1.
  14. G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proc. of CVPR, Cited by: Related Work, Results on CIFAR-10, Table 1.
  15. A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: Experiment Protocol.
  16. A. Krizhevsky, I. Sutskever and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Proc. of NIPS, Cited by: Related Work.
  17. C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L. Li, L. Fei-Fei, A. Yuille, J. Huang and K. Murphy (2018) Progressive neural architecture search. In Proc. of ECCV, Cited by: Experiment Protocol, Results on CIFAR-10, Table 1, Table 2.
  18. H. Liu, K. Simonyan, O. Vinyals, C. Fernando and K. Kavukcuoglu (2017) Hierarchical representations for efficient architecture search. arXiv. Cited by: Related Work.
  19. H. Liu, K. Simonyan and Y. Yang (2018) Darts: differentiable architecture search. arXiv. Cited by: Introduction, Related Work, Search Space, Experiment Protocol, Experiment Protocol, Results on CIFAR-10, Table 1, Table 2.
  20. M. D. McDonnell (2018) Training wide residual networks for deployment using a single bit for each weight. arXiv. Cited by: Table 1.
  21. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga and A. Lerer (2017) Automatic differentiation in pytorch. In Proc. of NIPS, Cited by: Experiment Protocol.
  22. H. Pham, M. Y. Guan, B. Zoph, Q. V. Le and J. Dean (2018) Efficient neural architecture search via parameter sharing. arXiv. Cited by: Related Work, Experiment Protocol, Results on CIFAR-10, Table 1.
  23. M. Rastegari, V. Ordonez, J. Redmon and A. Farhadi (2016) XNOR-net: imagenet classification using binary convolutional neural networks. In Proc. of ECCV, Cited by: Introduction.
  24. M. Rastegari, V. Ordonez, J. Redmon and A. Farhadi (2016) XNOR-net: imagenet classification using binary convolutional neural networks. In Proc. of ECCV, Cited by: Search Space, Optimization for BNAS, Optimization for BNAS, Results on CIFAR-10, Experiments.
  25. E. Real, A. Aggarwal, Y. Huang and Q. V. Le (2018) Regularized evolution for image classifier architecture search. arXiv. Cited by: Results on CIFAR-10, Table 1, Table 2.
  26. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla and M. Bernstein (2015) Imagenet large scale visual recognition challenge. International Journal of Computer Vision. Cited by: Experiment Protocol.
  27. M. Shen, K. Han, C. Xu and Y. Wang (2019) Searching for accurate binary neural architectures. In Proc. of ICCV Workshops, Cited by: Introduction.
  28. K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv. Cited by: Related Work.
  29. S. Xie, H. Zheng, C. Liu and L. Lin (2018) SNAS: stochastic neural architecture search. arXiv. Cited by: Related Work.
  30. J. F. Xu, V. N. Boddeti and M. Savvides (2016) Local binary convolutional neural networks. In Proc. of CVPR, Cited by: Introduction.
  31. Y. Xu, L. Xie, X. Zhang, X. Chen, G. Qi, Q. Tian and H. Xiong (2019) Partial channel connections for memory-efficient differentiable architecture search. arXiv. Cited by: Introduction, Introduction, Sampling for BNAS, Binarized Neural Architecture Search, Results on CIFAR-10, Table 2.
  32. C. Ying, A. Klein, E. Real, E. Christiansen, K. Murphy and F. Hutter (2019) NAS-bench-101: towards reproducible neural architecture search. arXiv. Cited by: Performance-based Strategy for BNAS.
  33. S. Zagoruyko and N. Komodakis (2016) Wide residual networks. In Proc. of BMVC, Cited by: Results on CIFAR-10, Table 1.
  34. X. Zheng, R. Ji, L. Tang, Y. Wan, B. Zhang, Y. Wu, Y. Wu and L. Shao (2019) Dynamic distribution pruning for efficient network architecture search. arXiv. Cited by: Related Work.
  35. X. Zheng, R. Ji, L. Tang, B. Zhang, J. Liu and Q. Tian (2019) Multinomial distribution learning for effective neural architecture search. Cited by: Related Work.
  36. B. Zoph and Q. V. Le (2016) Neural architecture search with reinforcement learning. arXiv. Cited by: Introduction, Search Space.
  37. B. Zoph, V. Vasudevan, J. Shlens and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In Proc. of CVPR, Cited by: Introduction, Experiment Protocol, Experiment Protocol, Results on CIFAR-10, Table 1, Table 2.