Plug-in, Trainable Gate for Streamlining Arbitrary Neural Networks


Jaedeok Kim, Chiyoun Park¹, Hyun-Joo Jung¹, Yoonsuck Choe
Artificial Intelligence Center, Samsung Research, Samsung Electronics Co.
56 Seongchon-gil, Seocho-gu, Seoul, Korea, 06765
Department of Computer Science and Engineering, Texas A&M University
College Station, TX, 77843, USA
¹ Equal contribution.

Architecture optimization, a technique for finding an efficient neural network that meets certain requirements, generally reduces to a set of multiple-choice selection problems among alternative sub-structures or parameters. The discrete nature of the selection problem, however, makes this optimization difficult. To tackle this problem we introduce a novel concept of a trainable gate function. The trainable gate function, which confers a differentiable property on discrete-valued variables, allows us to directly optimize loss functions that include non-differentiable discrete values such as 0-1 selections. The proposed trainable gate can be applied to pruning. Pruning can be carried out simply by appending the proposed trainable gate functions to each intermediate output tensor and then fine-tuning the overall model using any gradient-based training method. The proposed method can thus jointly optimize the selection of the pruned channels while fine-tuning the weights of the pruned model at the same time. Our experimental results demonstrate that the proposed method efficiently optimizes arbitrary neural networks in various tasks such as image classification, style transfer, optical flow estimation, and neural machine translation.

Introduction
Deep neural networks have been widely used in many applications such as image classification, image generation, and machine translation. However, in order to increase the accuracy of the models, the neural networks have to be made larger and require a huge amount of computation [7, 22]. Because it is often not feasible to load and execute such a large model on an on-device platform such as a mobile phone or an IoT device, various architecture optimization methods have been proposed for finding an efficient neural network that meets certain design requirements. In particular, pruning methods can reduce both model size and computational cost effectively, but the discrete nature of the underlying binary selection problems makes such methods difficult and inefficient [9, 20].

Gradient descent methods can solve a continuous optimization problem efficiently by minimizing the loss function, but such methods are not directly applicable to discrete optimization problems because those problems are not differentiable. While many alternative solutions such as simulated annealing [14] have been proposed to handle discrete optimization problems, they are too cost-inefficient in deep learning because each candidate choice must be trained before its accuracy can be evaluated.

In this paper, we introduce a novel concept of a trainable gate function (TGF) that confers a differentiable property to discrete-valued variables. It allows us to directly optimize, through gradient descent, loss functions that include discrete choices that are non-differentiable. By applying TGFs each of which connects a continuous latent parameter to a discrete choice, a discrete optimization problem can be relaxed to a continuous optimization problem.

Pruning a neural network is a problem that decides which channels (or weights) are to be retained. In order to obtain an optimal pruning result for an individual model, one needs to compare the performance of the model induced by all combinations of retained channels. While specialized structures or searching algorithms have been proposed for pruning, they have complex structures or their internal parameters need to be set manually [9]. The key problem of channel pruning is that there are discrete choices in the combination of channels, which makes the problem of channel selections non-differentiable. Using the proposed TGF allows us to reformulate discrete choices as a simple differentiable learning problem, so that a general gradient descent procedure can be applied, end-to-end.

Our main contributions in this paper are threefold.

  • We introduce the concept of a TGF which makes a discrete selection problem solvable by a conventional gradient-based learning procedure.

  • We propose a pruning method with which a neural network can be directly optimized in terms of the number of parameters or FLOPs. The proposed method can prune and train a neural network simultaneously, so that no further fine-tuning step is needed.

  • Our proposed method is task-agnostic so that it can be easily applied to many different tasks.

By simply appending TGFs, we achieve competitive results in compressing neural networks with minimal degradation in accuracy. For instance, our proposed method compresses ResNet-56 [7] on the CIFAR-10 dataset [15] by half in terms of the number of FLOPs with negligible accuracy drop. In a style transfer task, we achieve an extremely compressed network that is more than 35 times smaller and 3 times faster than the original network. Moreover, our pruning method has been effectively applied to other practical tasks such as optical flow estimation and neural machine translation.

By connecting the discrete and continuous domains through the concept of a TGF, we are able to obtain competitive results on various applications in a simple way. More than just a continuous relaxation, it directly connects a deterministic decision to a continuous and differentiable domain. By doing so, the proposed method could help solve more practical problems that are difficult because of discrete components in the architecture.

Related Work

Architecture optimization can be considered a combinatorial optimization problem. The most important factor is determining which channels should be pruned within each layer in order to minimize the loss of acquired knowledge.

This can be addressed as the problem of finding the best combination of retained channels, but an exhaustive search requires extremely heavy computation. As an alternative, heuristic approaches have been proposed to select the channels to be pruned [10, 16]. Although these approaches provide rich intuition about neural networks and can be easily adopted to compress a neural network quickly, such methods tend to be sub-optimal for a given task in practice.

The problem of finding the best combination can be formulated as a reinforcement learning (RL) problem and then solved by learning a policy network. Bello et al. [1] proposed a method to solve combinatorial optimization problems, including the traveling salesman and knapsack problems, by training a policy network. Zoph and Le [27] proposed an RL based method to find the most suitable architecture. The same approach can be applied to find the best set of compression ratios for each layer that satisfies the overall compression and performance targets, as proposed in [9, 26]. However, RL based methods still require extremely heavy computation.

To tackle the scalability issue, differentiable approaches based on continuous relaxation have been considered in various studies [17, 18, 19]. To relax a discrete problem to be differentiable, Liu et al. [17] proposed a method that places a mixture of candidate operations by using softmax. Luo and Wu [20] proposed a type of self-attention module with a scaled sigmoid function as an activation function to retain channels from a probabilistic decision. However, in these methods it is essential to carefully initialize and control the parameters of the attention layers and the scaled sigmoid function. While a differentiable approach is scalable to a large search space, existing approaches determine the set of selected channels in a probabilistic way, so they require an additional step to decide whether to prune each channel or not.

The method we propose here allows us to find the set of channels deterministically by directly optimizing an objective function that confers a differentiable property on discrete-valued variables, thus bypassing the additional step required in probabilistic approaches. The proposed optimization can be performed simply by appending TGFs to a target layer and training them using gradient descent. The proposed method does not depend on additional parameters, so it needs neither careful initialization nor a specialized annealing process for stabilization.

Differentially Trainable Gate Function

Figure 1: How to shape a gate function to be trainable. The original gate function has zero derivative, as shown on the left. We add to the original gate function the gradient shaping function multiplied by a desirable derivative shape, which changes the derivative of the gate function. The resulting function is a TGF that has the desired derivative.

Consider a combinatorial optimization in a selection problem:

$$\min_{\mathbf{b},\, \theta} L(\mathbf{b}; \theta) \quad \text{subject to} \quad \mathbf{b} \in \{0, 1\}^n, \tag{1}$$

where $\mathbf{b}$ is a vector of binary selections and $L$ is an objective function parameterized by $\theta$. The optimization problem (1) is a generalized form of a selection problem that can cover a parameterized loss function such as a neural network pruning problem. In the case of a pure selection problem we set the domain of $\theta$ to be a singleton set.

To make the problem differentiable, we consider each $b_i$ as the output of a binary gate function $g$ parameterized by an auxiliary variable $w_i \in \mathbb{R}$. We will let $g$ be a step function for convenience. (Although we only consider a step function as the gate function $g$, the same argument can be easily applied to any almost everywhere differentiable binary gate function.) Then the optimization problem (1) is equivalent to the following:

$$\min_{\mathbf{w},\, \theta} L(g(\mathbf{w}); \theta), \quad \mathbf{w} \in \mathbb{R}^n, \tag{2}$$

where $g$ is applied elementwise, so the problem is defined in a continuous domain. That is, if $(\mathbf{w}^*, \theta^*)$ is a global minimum of (2), then $(g(\mathbf{w}^*), \theta^*)$ is a global minimum of (1).

While the continuous relaxation (2) enables the optimization problem (1) to be solved by gradient descent, a gate function $g$ has derivative of zero wherever it is differentiable, and consequently

$$\frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial b_i}\, g'(w_i) = 0. \tag{3}$$

So, a gradient descent optimization does not work for such a function.

In order to resolve this issue, we consider a new type of gate function which has non-zero gradient and is differentiable almost everywhere. Motivated by [6], we first define a gradient shaping function by

$$\rho_M(x) = \frac{Mx - \lfloor Mx \rfloor}{M}, \tag{4}$$

where $M$ is a large positive integer and $\lfloor y \rfloor$ is the greatest integer less than or equal to $y$. Note that this function has near-zero value for all $x$, and its derivative is always one wherever it is differentiable. Using (4) we define a trainable gate as follows (see Figure 1).

Definition 1

A function $\hat{g}^s_M$ is said to be a trainable gate of a gate function $g$ with respect to a gradient shape $s$ if

$$\hat{g}^s_M(w) = g(w) + \rho_M(w)\, s(w). \tag{5}$$
Then a trainable gate satisfies the following proposition.

Proposition 1

For any bounded derivative shape $s$ whose derivative is also bounded, $\hat{g}^s_M$ uniformly converges to $g$ as $M \to \infty$. Moreover, $(\hat{g}^s_M)'$ uniformly converges to $s$.

Proof. By definition (5), it holds for all $w$ that

$$\left| \hat{g}^s_M(w) - g(w) \right| = \left| \rho_M(w)\, s(w) \right| \le \frac{\sup_x |s(x)|}{M} \to 0$$

as $M \to \infty$. Also, wherever $\hat{g}^s_M$ is differentiable we have $(\hat{g}^s_M)'(w) = g'(w) + \rho'_M(w)\, s(w) + \rho_M(w)\, s'(w) = s(w) + \rho_M(w)\, s'(w)$, using $g' = 0$ and $\rho'_M = 1$, which yields for all such $w$

$$\left| (\hat{g}^s_M)'(w) - s(w) \right| = \left| \rho_M(w)\, s'(w) \right| \le \frac{\sup_x |s'(x)|}{M} \to 0$$

as $M \to \infty$. ∎

Proposition 1 guarantees that the trainable gate $\hat{g}^s_M$ can approximate the original gate function $g$, while its derivative still approximates the desired derivative shape $s$. It is now possible to control the derivative of the gate function as we want, and hence a gradient descent optimization is applicable to the TGF $\hat{g}^s_M$. For convenience, we will drop the superscript $s$ and the subscript $M$ and simply write $\hat{g}$ unless there is an ambiguity.
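To make the construction concrete, the following is a minimal NumPy sketch of a trainable gate built from a step gate and a constant derivative shape; the value M = 10**4 and all helper names are our own illustration rather than details from the paper.

```python
import numpy as np

M = 10**4  # large positive integer; larger M gives a tighter approximation

def step(w):
    """Original gate g: 1 if w >= 0, else 0. Its derivative is zero a.e."""
    return (w >= 0).astype(float)

def rho(w):
    """Gradient shaping function rho_M(w) = (M*w - floor(M*w)) / M.
    Its value is at most 1/M, but its derivative is 1 wherever it exists."""
    return (M * w - np.floor(M * w)) / M

def trainable_gate(w, s=lambda x: np.ones_like(x)):
    """Trainable gate g_hat(w) = g(w) + rho_M(w) * s(w), as in Definition 1."""
    return step(w) + rho(w) * s(w)

w = np.array([-0.47, 0.31])
out = trainable_gate(w)  # within 1/M of step(w) = [0., 1.]

# Forward values track the step function, but a finite difference taken
# inside one tooth of rho recovers the shaped derivative s = 1, not 0.
h = 1e-6
d = (trainable_gate(np.array([0.12345 + h]))
     - trainable_gate(np.array([0.12345 - h]))) / (2 * h)
```

In an actual training loop the sawtooth term lets gradients flow through the gate while the forward pass stays effectively binary.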

Difference between Probabilistic and Deterministic Decisions

The proposed TGF directly learns a deterministic decision during the training phase, unlike existing methods. While probabilistic decisions have been considered in existing differentiable methods [12, 17, 18, 19], they do not yield a clear-cut selection and hence need further decision steps. Due to the on-off nature of our TGF's decision, we can include more decisive objectives, such as the number of channels, FLOPs, or parameters, without approximating them by an expectation over a distribution or smoothing them into non-discrete values.

Figure 2: Difference between probabilistic and deterministic decisions. Each bar in the top graph indicates the decision weight of a hidden node learnt by a probabilistic method; the $i$th label on the $x$-axis corresponds to the learnt weights of the $i$th hidden node. The bottom graph shows the results from a TGF, where there is no redundant hidden node.

We performed a synthetic experiment in order to see the difference between probabilistic and deterministic approaches. To this end, we generated a training dataset to learn the sine function $y = \sin x$. Consider a neural network with a single fully connected layer having 20 hidden nodes, each of which adopts the sine function as its activation. Since a training sample in our synthetic dataset is of the form $(x, \sin x)$, it is enough to use only one hidden node to express the relation between the input $x$ and the output $y$.

We consider a selection layer consisting of a continuous relaxation function over the 20 hidden nodes, which learns whether to retain each corresponding hidden node or not. Two different types of relaxation function are addressed: a softmax function for a probabilistic decision [17] and the proposed TGFs for a deterministic decision.

As we can see in Figure 2, the probabilistic method found a solution that uses more than one node. In particular, the top 5 hidden nodes by decision weight have similar weight values. In the training phase the probabilistic decision uses a linear combination of options, so errors can cancel each other out, and as a result the selections become redundant. On the other hand, the deterministic decision (our proposed TGF) selects exactly one node. Due to the on-off nature of the deterministic decision, the TGF learns while incorporating the knowledge of the selections already made, so it can choose effectively without redundancy.

While we have considered the binary selection problem so far, it is straightforward to extend the concept of a trainable gate to an $n$-ary case by using an $n$-simplex. However, in order to show the practical usefulness of the proposed concept, we focus on the pruning problem, an important application of a TGF, in which it is enough to use a binary gate function.

Differentiable Pruning Method

In this section we develop a method to efficiently and automatically prune a neural network as an important application of the proposed TGF. To this end, using the concept of a TGF we propose a trainable gate layer (TGL) which learns how to prune channels from its previous layer.

Design of a Trainable Gate Layer

Figure 3: The overview of the proposed TGL. The figure shows an example of a TGL appended to a convolution layer. The TGL consists of a set of gate functions, each of which determines whether to prune the corresponding channel or not. Each gate function outputs a value of 0 or 1, which is multiplied with the corresponding filter of the convolution kernel of the target layer.

The overall framework of our proposed TGL is illustrated in Figure 3. Channel pruning can be formulated by a function that zeros out certain channels of an output tensor in a convolutional neural network and keeps the rest of the values. We thus design a TGL as a set of TGFs whose elements correspond to the output channels of a target layer. Let the $l$th target layer map an input tensor $X_l$ to an output tensor $Y_l$ that has $c_l$ channels using a kernel $K_l$:

$$Y_l^i = K_l^i * X_l$$

for $i = 1, \ldots, c_l$, where $Y_l^i$ denotes the $i$th channel of $Y_l$ and $K_l^i$ the corresponding filter. A fully connected layer uses a matrix multiplication instead of the convolution operation $*$, but we here simply use $*$ to represent both cases.

Let a TGL prune the $l$th target layer, where the TGL consists of trainable weights $w_l = (w_l^1, \ldots, w_l^{c_l})$ and a function $\hat{g}$. The weight $w_l^i$, $i = 1, \ldots, c_l$, is used to learn whether the corresponding channel should be masked by zero or not. The TGL masks the output tensor $Y_l$ of the $l$th target layer as

$$\tilde{Y}_l^i = \hat{g}(w_l^i)\, Y_l^i, \tag{6}$$

where $\tilde{Y}_l$ is the output tensor pruned by $\hat{g}$. Since we have $Y_l^i = K_l^i * X_l$, (6) can be rewritten as

$$\tilde{Y}_l^i = \left( \hat{g}(w_l^i)\, K_l^i \right) * X_l.$$

So multiplying $\hat{g}(w_l^i)$ with $K_l^i$ masks the $i$th channel from the kernel $K_l$. While $\hat{g}(w_l^i)$ might not be exactly zero due to the gradient shaping, its effect can be made negligible by letting the value of $M$ be large enough.

From (6), $\hat{g}(w_l^i) = 0$ yields $\tilde{Y}_l^i = 0$. So the value of the weight $w_l^i$ can control the $i$th channel of the output tensor $Y_l$. If a step function is used as the gate function in the TGL, $w_l^i < 0$ implies that the TGL zeros out the $i$th channel of the $l$th layer. Otherwise, the channel remains identical. Hence, by updating the weights $w_l$, we can make the TGL learn the best combination of channels to prune.

The proposed channel pruning method can be extended to weight pruning or layer pruning in a straightforward way, by applying trainable gate functions to each element of the kernel or to each layer, respectively.
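The channel masking in (6) can be sketched as follows; the feature-map shapes, the per-channel weights, and M = 10**4 are hypothetical values for illustration, and the constant derivative shape is assumed.

```python
import numpy as np

M = 10**4

def trainable_gate(w):
    # step gate plus the gradient-shaping term with constant derivative shape
    return (w >= 0).astype(float) + (M * w - np.floor(M * w)) / M

# hypothetical output of a conv layer: 4 channels of 8x8 feature maps
Y = np.random.randn(4, 8, 8)
w = np.array([0.7, -0.2, 0.1, -0.9])  # one trainable weight per channel

gate = trainable_gate(w)            # approximately [1, 0, 1, 0]
Y_pruned = gate[:, None, None] * Y  # channels with w < 0 are (near-)zeroed
```

Because the same scalar multiplies every element of a channel, masking the output is equivalent to masking the corresponding filter of the kernel, as in the rewritten form of (6).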

Compression Ratio Control

The purpose of channel pruning is to reduce neural network size or computational cost. However, simply adding the above TGLs without any regularization does not ensure that channels are pruned out as much as we need. Unless there is a significant amount of redundancy in the network, having more filters is often advantageous for obtaining higher accuracy, so channels will not be pruned. Taking this issue into account, we add a regularization factor to the loss function that controls the compression ratio of the neural network as desired.

Let $r$ be the target compression ratio. In the case of reducing the number of FLOPs, the target compression ratio is defined by the ratio between the number of remaining FLOPs and the total number of FLOPs of the neural network, $F_{\text{total}}$. The weight values $\mathbf{w}$ of the TGLs determine the remaining number of FLOPs, denoted by $F(\mathbf{w})$. We want to reduce the FLOPs of the pruned model by the factor of $r$, that is, we want it to satisfy $F(\mathbf{w}) = r \cdot F_{\text{total}}$.

Let $L(W, \mathbf{w})$ be the original loss function to be minimized, where $W$ denotes the weights of the layers in the neural network except the TGLs. We add a regularization term to the loss function as follows in order to control the compression ratio:

$$\tilde{L}(W, \mathbf{w}) = L(W, \mathbf{w}) + \lambda \left\| \frac{F(\mathbf{w})}{F_{\text{total}}} - r \right\|_2, \tag{7}$$

where $\|\cdot\|_2$ denotes the $\ell_2$-norm and $\lambda$ is a regularization parameter. The added regularization ensures that the number of channels is reduced to meet our desired compression ratio.

Note that minimization of the loss function (7) updates not only the weights of the TGLs but also those of the normal layers. A training procedure therefore jointly optimizes the selection of the pruned channels while fine-tuning the weights of the pruned model at the same time. In traditional approaches, where the channel pruning procedure is followed by a separate fine-tuning stage, the importance of each channel may change during the fine-tuning stage, which leads to sub-optimal compression. Our proposed method does not fall into such a problem since each TGL automatically takes into account the importance of each channel while adjusting the weights of the original model based on the pruned channels. The loss function (7) also indicates that there is a trade-off between the compression ratio and accuracy.

While we have considered the number of FLOPs in this subsection, we can easily extend to other objectives such as the number of weight parameters or channels by replacing the regularization target $F(\mathbf{w})$ in (7).
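The compression-ratio regularizer in (7) can be sketched as follows; the per-channel FLOP costs, gate weights, λ, and r are made-up values for illustration, and the gate follows the earlier definition with a constant derivative shape.

```python
import numpy as np

M = 10**4

def trainable_gate(w):
    # step gate plus the gradient-shaping term with constant derivative shape
    return (w >= 0).astype(float) + (M * w - np.floor(M * w)) / M

# hypothetical per-channel FLOP costs of one layer and its TGL weights
flops_per_channel = np.array([100.0, 100.0, 100.0, 100.0])
w = np.array([0.4, -0.3, 0.2, -0.1])

r = 0.5    # target compression ratio: keep half of the FLOPs
lam = 1.0  # regularization weight lambda

F_total = flops_per_channel.sum()
F_w = (trainable_gate(w) * flops_per_channel).sum()  # remaining FLOPs F(w)

def regularized_loss(task_loss):
    # task loss plus the penalty |F(w) / F_total - r| from (7)
    return task_loss + lam * np.abs(F_w / F_total - r)
```

Since F(w) is built from the gate outputs, its gradient with respect to w flows through the same shaped derivative, so the ratio constraint can be trained jointly with the task loss.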

Experimental Results

In this section, we demonstrate the effectiveness of the proposed TGF through various applications in the image and language domains. We implemented our experiments using Keras [2] unless mentioned otherwise. In order to shape the derivative of the gate function, a constant derivative shape and a simple random weight initialization are used. In all experiments, only convolution or fully connected layers are considered in calculating the number of FLOPs of a model, since the other types of layers, e.g., batch normalization, require a relatively negligible amount of computation.

Image Classification

We used the CIFAR-10 and ImageNet datasets for our image classification experiments. For ResNet-56 on CIFAR-10 we used pre-trained weights that we trained from scratch with the usual data augmentations (normalization and random cropping). For VGG-16 on ImageNet we used the pre-trained weights published in Keras. Although we found that the accuracy of each model in our experimental setup differs slightly from the reported value, the original pre-trained weights were used in our experiments without modification, since we wanted to investigate the performance of our proposed method in terms of the accuracy drop.

(a) ResNet-56 (CIFAR-10)
(b) VGG-16 (ImageNet)
Figure 4: Pruning results without fine-tuning. (a) Channel pruning of ResNet-56 and (b) weight pruning of VGG-16. The $x$-axis represents the compression ratio, i.e., the ratio of the remaining FLOPs or parameters of a pruned model over the original number of FLOPs or parameters.

Pruning without Fine-tuning In order to show the effect of the TGFs, we first considered a pure selection problem. We kept the pre-trained weights of a model and only pruned the channels or weights without fine-tuning, by appending TGLs to the convolution and fully connected layers. That is, we fix the weights $\theta$ and optimize only $\mathbf{w}$ in (2). Figure 4(a) shows the results of channel pruning in ResNet-56 [8] on CIFAR-10. It can be observed that the number of FLOPs can be reduced by half without noticeable change in accuracy even when we do not apply fine-tuning, which implies that the TGFs work as expected.

We also applied the weight pruning mentioned in the previous section to VGG-16 on ImageNet (Figure 4(b)). Even though the model has a huge number of weight parameters, each of which may be retained or not, simply plugging TGLs into the model allows us to find a selection of redundant weight parameters. Note that the accuracy of VGG-16 even increases to 89.1% from 88.24% when only 10% of the parameters are retained. This phenomenon is due to the fact that reducing the number of non-gated parameters is equivalent to applying regularization to the neural network, so adding the gating function to each parameter further improves the generalization property of the model.

Model                  Method     Δacc (acc) (%)   Ratio
ResNet-56 (CIFAR-10)   FP         0.02 (93.06)     0.72
                       VCP        -0.8 (92.26)     0.79
                       Proposed   0.04 (92.7)      0.71
                       CP         -1.0 (91.8)      0.5
                       AMC        -0.9 (91.9)      0.5
                       Proposed   -0.3 (92.36)     0.51
VGG-16 (ImageNet)      ADMM       0.0 (88.7)       1/19
                       Proposed   0.76 (89.0)      1/20
                       PWP        -0.5 (88.2)      1/34
                       Proposed   -0.06 (88.18)    1/50
Table 1: Channel pruning of ResNet-56 on CIFAR-10 and weight pruning of VGG-16 on ImageNet. acc and Δacc denote the accuracy after pruning and the accuracy drop, respectively. The ratio column reports the number of remaining FLOPs over the total FLOPs for ResNet-56, and the number of remaining parameters over the total parameters for VGG-16; a smaller value means more pruning. Methods: FP [16], CP [10], VCP [25], AMC [9], ADMM and PWP [24].

Pruning with Fine-tuning In the next example, we jointly optimized the weights and selection at the same time in order to incorporate fine-tuning to the selections. Like the previous example we appended TGLs to a model, but we jointly trained both the TGLs and the weight parameters of the model. Table 1 summarizes the pruning results. As shown in the table, our results are competitive with existing methods. For example, in ResNet-56, the number of FLOPs is reduced by half while maintaining the original accuracy. It is also noticeable that we achieve higher accuracy on the compressed VGG-16 model, even if the accuracy of our initial model was worse.

Figure 5: Comparing the effect of gradient shaping in TGFs. sigmoid’ denotes the derivative of the sigmoid function and tanh’ denotes the derivative of the hyperbolic tangent function.

While in our experiments we used a constant function to shape the derivative within the TGFs, the proposed method can adopt any derivative shape by changing $s$ in (5). Figure 5 compares the effect of different shaping functions. It shows that the derivative shape does not affect the results critically, which implies that our proposed method is stable over the choice of $s$. It can be concluded that a simple constant derivative shape can be adopted without significant loss of accuracy.
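Different derivative shapes can be dropped into (5) without changing the forward behaviour; below is a small sketch comparing the constant shape with the sigmoid and tanh derivatives referenced in Figure 5 (the shape formulas are standard calculus, everything else is illustrative).

```python
import numpy as np

M = 10**4

def step(w):
    return (w >= 0).astype(float)

def rho(w):
    return (M * w - np.floor(M * w)) / M

# candidate derivative shapes s(x) for the trainable gate in (5)
shapes = {
    "constant": lambda x: np.ones_like(x),
    "sigmoid'": lambda x: np.exp(-x) / (1 + np.exp(-x)) ** 2,
    "tanh'":    lambda x: 1 - np.tanh(x) ** 2,
}

w = np.array([-0.7, 0.3])
gates = {name: step(w) + rho(w) * s(w) for name, s in shapes.items()}
# every shape yields the same forward decision, up to O(1/M)
```

Only the gradients seen by the optimizer differ between shapes, which is why the final accuracy is largely insensitive to this choice.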

Image Generation

We further applied our proposed pruning method to style transfer and optical flow estimation models which are the most popular applications in image generation.

Style Transfer Style transfer [4, 5] generates a new image by synthesizing the contents of a given content image with the styles of a given style image. Because a style transfer network is heavily dependent on the selection of which styles are used, it is not easy to obtain proper pre-trained weights. So, we started from the $N$-style transfer network [4] as the original architecture, with randomly initialized weights. It is of course possible to start from pre-trained weights if available.

Model        File size (MB)   FLOPs (G)   Params (M)   Time (ms)
Original     6.9              10.079      1.674        549
Compressed   1.8              2.470       0.405        414
Compressed   0.2              0.227       0.020        160
Table 2: A summary of style transfer model compression results. We measured the size of model files saved in binary format (.pb). The inference time was averaged over 10 frames on the CPU of a Galaxy S10. The frame size is 256×256 pixels.

Figure 6: Results of the style transfer task. The rightmost column shows the content image and the top row shows the 5 style images. Second row: transferred images produced by the original model. Third row: transferred images produced by the moderately compressed model. Fourth row: transferred images produced by the most compressed model.

To select which channels are retained, a TGL is plugged into the output of each convolution layer in the original architecture except the last one, in order to preserve its generation performance. In the training phase, we used ImageNet as the set of content images and manually chose 5 style images, as shown in Figure 6. We trained both the original and the compressed models from scratch for 20K iterations with a batch size of 16. The number of pruned channels is used as a regularization factor.

The most compressed model is 34.5 times smaller than the original model in terms of file size (Table 2). In order to see the actual inference time, we measured it on the CPU of a Galaxy S10. The compressed model is more than 3 times faster in terms of inference time, as shown in Table 2, while the generation quality is preserved, as shown in Figure 6.

Figure 7: The number of retained channels in compressed models. Layers that should match the dimension of outputs share a single TGL (conv3, res1-2, res2-2, res3-2, res4-2, and res5-2).

Figure 7 shows the number of retained channels in each layer. The TGL does not select the number of pruned channels uniformly; it automatically selects which channels are pruned according to the objective function. Without further pruning steps, our proposed method can simultaneously train and prune the model under a target compression ratio, as mentioned in the previous section.

Optical Flow Estimation We next consider a task that learns the optical flow between images [3, 11]. In this experiment, we used FlowNetSimple [3], which is the same as FlowNetS in [11]. FlowNetS stacks two consecutive input images together and feeds them into the network to extract motion information between these images.

Model        File size (MB)   EPE    Time (ms)
Original     148              3.15   292.7
Compressed   54               2.91   177.8
Compressed   30               3.13   155.9
Table 3: A summary of the optical flow estimation results. We measured the size of model files saved in binary format (.pb). For inference time, we ran both models on an Intel Core i7-7700@3.60GHz CPU. The image size is 384×512 pixels. EPE was averaged over the validation data of the Flying Chairs dataset.

Starting from the pre-trained model, a TGL is plugged into the output of every convolution and deconvolution layer except the last one, in order to preserve its generation performance. We trained the model with TGLs for 1.2M iterations with a batch size of 8. The Adam optimizer [13] was used with an initial learning rate of 0.0001, which was halved every 200K iterations after the first 400K iterations. As in the style transfer task, the number of pruned channels is used as a regularization factor. We used the Flying Chairs dataset [3] for training and testing. The performance of the model is measured in average end-point error (EPE) over the validation data.

Table 3 shows the compression results. As we can see in the table, the most compressed model is 4.93 times smaller than the original model in terms of file size and more than 1.88 times faster in terms of inference time. Note that the EPE of this compressed model (3.13) is almost the same as that of the original model (3.15), and only slightly worse than the EPE reported in the paper [3] (2.71).

Our experimental results demonstrate that the model pruned by TGLs automatically finds which channels to retain in order to reduce model file size, inference time, and FLOPs, while minimizing performance degradation.

Neural Machine Translation

Model      En/De/En-De   ΔBLEU (BLEU)
Original   48/48/48       –    (27.32)
Pruned     30/41/48      -0.06 (27.26)
Pruned     11/20/25      -0.18 (27.14)
Table 4: Results of pruning attention heads of a transformer model. The BLEU score is measured on English-to-German newstest2014. En denotes the total number of retained self-attention heads in the encoder; De and En-De are defined analogously for the decoder self-attention and the encoder-decoder attention. ΔBLEU denotes the change in BLEU score from the original model.

While we have considered various applications, all of them are in the image domain. As the last application, we therefore applied our pruning method to a neural machine translation task in the language domain. We compressed the transformer model [23], which is the most widely used.

The transformer model consists of an encoder and a decoder. Each layer of the encoder has multiple self-attention heads, whereas each layer of the decoder has multiple self-attention heads and multiple encoder-decoder attention heads. To make each layer compact, we append TGFs each of which masks the corresponding attention head. Note that, unlike in the previous tasks, our pruning method here prunes at the block level (an attention head), not just at the level of a single weight or channel.

We used the WMT 2014 English-to-German translation task as our benchmark and implemented our method on fairseq [21]. We trained the model for 472,000 iterations from scratch. As we can see in Table 4, the BLEU score of a pruned model does not degrade much. In particular, although only 38% of the attention heads are retained, the BLEU score degrades by only 0.18. Our pruning method improved computational efficiency in the language domain as well, from which we can conclude that the proposed pruning method is task-agnostic.

Conclusion

In this paper, we introduced the concept of a TGF and a differentiable pruning method as an application of the proposed TGF. The introduction of a TGF allows us to directly optimize loss functions based on the number of parameters or FLOPs, which are non-differentiable discrete values. Our proposed pruning method can be easily implemented by appending TGLs to the target layers, and the TGLs do not need additional internal parameters that require careful tuning. Despite its simplicity, our experiments show that the proposed method achieves competitive compression results on various deep learning models, and that it is task-agnostic, covering image classification, image generation, and neural machine translation. We expect that the TGF can be applied to many more applications where we need to train discrete choices, turning them into differentiable training problems.

Acknowledgments

We would like to thank Sunghyun Choi and Haebin Shin for their support on our machine translation experiments. Kibeom Lee and Jungmin Kwon supported the mobile phone experiments for the style transfer task.

References

  • [1] I. Bello, H. Pham, Q. V. Le, M. Norouzi, and S. Bengio (2017) Neural combinatorial optimization with reinforcement learning. In International Conference on Learning Representations.
  • [2] F. Chollet et al. (2015) Keras.
  • [3] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox (2015) FlowNet: learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2758–2766.
  • [4] V. Dumoulin, J. Shlens, and M. Kudlur (2017) A learned representation for artistic style. In International Conference on Learning Representations.
  • [5] L. A. Gatys, A. S. Ecker, and M. Bethge (2015) A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576.
  • [6] S. Hahn and H. Choi (2018) Gradient acceleration in activation functions. arXiv preprint arXiv:1806.09783.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016) Identity mappings in deep residual networks. In European Conference on Computer Vision (ECCV), pp. 630–645.
  • [9] Y. He, J. Lin, Z. Liu, H. Wang, L. Li, and S. Han (2018) AMC: AutoML for model compression and acceleration on mobile devices. In European Conference on Computer Vision (ECCV), pp. 784–800.
  • [10] Y. He, X. Zhang, and J. Sun (2017) Channel pruning for accelerating very deep neural networks. In IEEE International Conference on Computer Vision (ICCV).
  • [11] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) FlowNet 2.0: evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2462–2470.
  • [12] E. Jang, S. Gu, and B. Poole (2017) Categorical reparameterization with Gumbel-softmax. In International Conference on Learning Representations.
  • [13] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations.
  • [14] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi (1983) Optimization by simulated annealing. Science 220(4598), pp. 671–680.
  • [15] A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report, Citeseer.
  • [16] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf (2017) Pruning filters for efficient convnets. In International Conference on Learning Representations.
  • [17] H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations.
  • [18] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang (2017) Learning efficient convolutional networks through network slimming. In IEEE International Conference on Computer Vision (ICCV), pp. 2755–2763.
  • [19] C. Louizos, K. Ullrich, and M. Welling (2017) Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pp. 3288–3298.
  • [20] J. Luo and J. Wu (2018) AutoPruner: an end-to-end trainable filter pruning method for efficient deep model inference. arXiv preprint arXiv:1805.08941.
  • [21] M. Ott, S. Edunov, A. Baevski, A. Fan, S. Gross, N. Ng, D. Grangier, and M. Auli (2019) Fairseq: a fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
  • [22] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998–6008.
  • [24] S. Ye, T. Zhang, K. Zhang, J. Li, K. Xu, Y. Yang, F. Yu, J. Tang, M. Fardad, S. Liu, X. Chen, X. Lin, and Y. Wang (2018) Progressive weight pruning of deep neural networks using ADMM. arXiv preprint arXiv:1810.07378.
  • [25] C. Zhao, B. Ni, J. Zhang, Q. Zhao, W. Zhang, and Q. Tian (2019) Variational convolutional neural network pruning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [26] J. Zhong, G. Ding, Y. Guo, J. Han, and B. Wang (2018) Where to prune: using LSTM to guide end-to-end pruning. In IJCAI, pp. 3205–3211.
  • [27] B. Zoph and Q. V. Le (2016) Neural architecture search with reinforcement learning. In International Conference on Learning Representations.