Efficient Sparse-Winograd Convolutional Neural Networks
Abstract
Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices. Their energy consumption is dominated by the number of multiplies needed to perform the convolutions. Winograd's minimal filtering algorithm (Lavin, 2015) and network pruning (Han et al., 2015) can both reduce the operation count, but the two methods cannot be directly combined: applying the Winograd transform fills in the sparsity of both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations. Second, we prune the weights in the Winograd domain to exploit static weight sparsity. For models on the CIFAR-10, CIFAR-100 and ImageNet datasets, our method substantially reduces the number of multiplications with negligible loss of accuracy, outperforming previous baselines. We also show that moving ReLU to the Winograd domain allows more aggressive pruning.
Xingyu Liu¹, Jeff Pool², Song Han³⁴, William J. Dally¹²
¹Stanford University, ²NVIDIA, ³Massachusetts Institute of Technology, ⁴Google Brain
{xyl, dally}@stanford.edu 
1 Introduction
Deep Convolutional Neural Networks (CNNs) have shown significant improvement in many machine learning applications. However, CNNs are compute-limited: their performance is dominated by the number of multiplies needed to perform the convolutions. Moreover, the computational workload of CNNs continues to grow over time. LeCun et al. (1998) proposed a CNN model for handwritten digit classification that required comparatively few multiplies. Later, Krizhevsky et al. (2012) developed AlexNet, an ImageNet-winning CNN requiring orders of magnitude more multiplies. In 2014, the ImageNet-winning and runner-up CNNs increased the number of multiplies further still (Szegedy et al., 2015; Simonyan & Zisserman, 2015). Despite the powerful representational ability of large-scale CNNs, their computational workload prohibits deployment on mobile devices.
Two research directions have been explored to address the problem. Lavin (2015) proposed using Winograd's minimal filtering algorithm (Winograd, 1980) to reduce the number of multiplies needed to perform 3×3 kernel convolutions. Separately, pruning the model (Han et al., 2015; 2016b) and exploiting the dynamic sparsity of activations due to ReLU also reduce the required multiplies.
Unfortunately, the above two directions are not compatible: the Winograd transformation fills in the zeros in both the weights and the activations (Figure 1(a)), eliminating the gain from exploiting sparsity. Thus, for a pruned network, Winograd's algorithm actually increases the number of multiplies; the loss of sparsity more than offsets the reduced operation count.
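The fill-in effect is easy to demonstrate. The sketch below (NumPy, using the standard F(2×2, 3×3) transform matrices from Lavin (2015); the particular sparse kernel and patch values are illustrative assumptions) transforms a pruned 3×3 kernel and a ReLU-ed 4×4 activation patch and counts nonzeros before and after:

```python
import numpy as np

# Winograd F(2x2, 3x3) transform matrices (Lavin, 2015).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])

# A pruned 3x3 kernel with only 3 of 9 weights remaining ...
g = np.zeros((3, 3)); g[0, 0], g[1, 1], g[2, 2] = 1.0, 0.5, -2.0
# ... and a ReLU-ed 4x4 activation patch with only 5 of 16 nonzeros.
d = np.zeros((4, 4)); d[0, 1] = d[1, 1] = d[2, 0] = d[2, 3] = d[3, 0] = 1.0

U = G @ g @ G.T      # Winograd-domain weights:     G g G^T
V = BT @ d @ BT.T    # Winograd-domain activations: B^T d B

print(np.count_nonzero(g), "->", np.count_nonzero(U))  # kernel sparsity is lost
print(np.count_nonzero(d), "->", np.count_nonzero(V))  # activation sparsity is lost
```

For these inputs, the 3-nonzero kernel becomes a 14-nonzero transformed kernel and the 5-nonzero patch becomes a 15-nonzero transformed patch, so the element-wise product in the Winograd domain is essentially dense.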
In this paper, we introduce two modifications to the original Winograd-based convolution algorithm to eliminate this problem. First, we move the ReLU operation to be after the Winograd transform, making the activations sparse at the point where the multiplies are performed. Second, we prune the weights after (rather than before) they are transformed, so the weights are sparse when the element-wise multiply is performed, reducing the operation count. Together, these two modifications enable the gains of Winograd's algorithm and of exploiting sparsity to be combined. We open-source our code and models at https://github.com/xingyul/Sparse-Winograd-CNN.
2 Related Work
Linear algebra properties of convolution: Previous research proposes using linear algebra properties of convolution to reduce the number of multiplies by trading additions for multiplies. Cong & Xiao (2014) convert convolution into matrix multiplies and exploit the linear algebra property at the sub-matrix block level. This approach achieves a 47% saving in multiplies. Lavin (2015) exploits the element-level linear algebra property of convolution, i.e. Winograd's minimal filtering algorithm (Winograd, 1980). This approach reduces the number of multiplies by 2.25× to 4×, depending on the image patch size used in the algorithm. Winograd's algorithm is also used in a state-of-the-art deep learning library, cuDNN (Chetlur et al., 2014), to improve computational efficiency.
Model compression: Model compression reduces the number of multiplies in CNNs by pruning network parameters (LeCun et al., 1990; Hassibi et al., 1993) and exploiting weight sparsity. Han et al. (2015; 2016b) proposed learning the sparsity pattern of network weights by eliminating weights whose absolute value is less than an empirical threshold. This approach can prune the convolutional layers of a model to a small fraction of their original size and correspondingly reduce the number of multiplies required. Liu et al. (2017) first proposed pruning and retraining the weights in the Winograd domain for conventional Winograd convolution. Li et al. (2017) later showed promising results on large datasets, reporting high sparsity in the Winograd-domain parameters of AlexNet with negligible accuracy loss.
Dynamic activation sparsity: The ReLU nonlinearity sets negative activations to zero, causing dynamic sparsity in the activations. Model compression can work in tandem with dynamic activation sparsity to reduce the multiplication workload. Han et al. (2015) showed that exploiting the sparsity of both weights and activations substantially reduces the number of multiplies. Huan et al. (2016) further proposed manually setting a small positive ReLU threshold at test time to exploit greater sparsity in the activations without losing test accuracy. Research on novel architectures has also produced deep learning accelerators that exploit activation sparsity. Han et al. (2016a) proposed using a Leading Nonzero Detection unit (LNZD) in their fully-connected layer accelerator to efficiently skip zeros in input activations. Albericio et al. (2016) proposed a similar mechanism for a convolution layer accelerator.
3 Sparse Winograd Convolution
We first introduce the conventional Winograd convolution and show how sparsity of weights or activations is lost during the dataflow of the algorithm. We then present the novel Winograd-ReLU CNN architecture. It preserves sparsity in both weights and activations before the multiplies are performed and significantly reduces the computational workload.
3.1 Sparsity in Conventional Spatial and Winograd CNN
The basic block of the conventional Winograd convolution algorithm works on a p×p patch (denoted d) extracted with a stride of (p−2) from an input feature map. With "valid" padding, the patch is convolved with a 3×3 kernel (denoted g) to produce a (p−2)×(p−2) output patch (denoted S). The output patches are assembled into an output feature map.
Input activation patch d and kernel g (spatial-domain activations and weights) are transformed using matrices B and G into B^T d B and G g G^T (Winograd-domain activations and weights) respectively, both with shape p×p. After an element-wise product in the Winograd domain, the output activation S is obtained using matrix A (equation (1)). Matrices B, G and A are specific to p. When p = 4, B and A consist only of 1, −1 and 0, so multiplication with B and A requires only additions and subtractions. The algorithm reduces the number of multiplies per 2×2 output tile from 2×2×3×3 = 36 to 4×4 = 16, a 2.25× reduction. Lavin (2015) gives details of the algorithm.
S = A^T [ (G g G^T) ⊙ (B^T d B) ] A        (1)
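The transform can be checked numerically. The following sketch (NumPy; the F(2×2, 3×3) matrices are the standard ones from Lavin (2015)) computes one output tile both ways: 16 multiplies in the Winograd domain versus 36 for direct "valid" correlation:

```python
import numpy as np

# F(2x2, 3x3) transform matrices from Lavin (2015).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_tile(d, g):
    """2x2 output tile: S = A^T [(G g G^T) ⊙ (B^T d B)] A  -- 16 multiplies."""
    U = G @ g @ G.T        # Winograd-domain weights, 4x4
    V = BT @ d @ BT.T      # Winograd-domain activations, 4x4
    return AT @ (U * V) @ AT.T

def direct_tile(d, g):
    """Reference: 'valid' correlation of a 4x4 patch with a 3x3 kernel -- 36 multiplies."""
    S = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            S[i, j] = np.sum(d[i:i+3, j:j+3] * g)
    return S
```

For any patch d and kernel g, the two routines return the same 2×2 tile up to floating-point rounding.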
Spatial Baseline Network: When using a "vanilla" pruned network, as introduced by Han et al. (2015), a ReLU nonlinear operation is performed on the spatial-domain input by the previous layer, and the spatial-domain weights are pruned. The output activation patch is obtained from equation (2). This is illustrated in Figure 1(a) for p = 4. Though ReLU(d) and Prune(g) may both be sparse, due to ReLU and pruning respectively, the element-wise multiply is dense because the G(·)G^T and B^T(·)B transformations fill in the spatial-domain zeros. Sparsity therefore does not reduce the number of multiplies in Winograd's algorithm.
S = A^T [ (G · Prune(g) · G^T) ⊙ (B^T · ReLU(d) · B) ] A        (2)
Winograd Native Pruned Network: When using the Winograd-domain pruned network introduced by Liu et al. (2017) and Li et al. (2017), the spatial-domain input is ReLU-ed by the previous layer while the Winograd-domain weights G g G^T are pruned. The output activation patch is obtained from equation (3). The algorithm for p = 4 is illustrated in Figure 1(b). Though the Winograd-domain weights are sparse due to pruning, the Winograd-domain activations are still dense because of the B^T(·)B transform. The sparsity in spatial activations due to ReLU does not reduce the number of multiplies.
S = A^T [ Prune(G g G^T) ⊙ (B^T · ReLU(d) · B) ] A        (3)
3.2 Winograd-ReLU CNN
To address the above problems, we introduce the Winograd-ReLU Network. Instead of applying ReLU to the activations in the spatial domain, we apply ReLU to the activations in the Winograd domain, as in equation (4) and Figure 1(c). The ReLU operation zeros all negative transformed activations, reducing the number of multiplies in the Winograd domain.
S = A^T [ Prune(G g G^T) ⊙ ReLU(B^T d B) ] A        (4)
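As a concrete illustration of equation (4), the sketch below (NumPy, using the standard F(2×2, 3×3) matrices from Lavin (2015); the tile values and the pruned kernel are illustrative assumptions) applies ReLU in the Winograd domain and counts the element-wise multiplies that survive when both operands are nonzero:

```python
import numpy as np

BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_relu_tile(d, U):
    """Eq. (4): S = A^T [ U ⊙ ReLU(B^T d B) ] A, with U an already-pruned
    Winograd-domain kernel. Returns the output tile and the number of
    element-wise multiplies a sparsity-aware engine would actually perform."""
    V = np.maximum(BT @ d @ BT.T, 0.0)                # ReLU in the Winograd domain
    useful = np.count_nonzero((U != 0) & (V != 0))    # skip zero-operand products
    return AT @ (U * V) @ AT.T, useful

d = np.arange(16, dtype=float).reshape(4, 4)  # arbitrary smooth input patch
U = np.ones((4, 4)); U[1, 1] = 0.0            # Winograd kernel with one weight pruned
S, useful = winograd_relu_tile(d, U)
print(useful)  # far fewer than the 16 multiplies of the dense algorithm
```

Because the Winograd-domain ReLU zeros many transformed activations, and pruning zeros transformed weights, the number of products that must actually be computed drops well below 16 per tile.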
In the Winograd-ReLU CNN, we eliminate the spatial-domain kernel entirely. Because this ReLU is really associated with the previous layer, we perform this transformed ReLU starting with the second layer. We point out that the proposed new CNN architecture is not mathematically equivalent to either the vanilla CNN or the conventional Winograd CNN. Because of this change in network architecture, the training and pruning procedures must also change. Our method operates in three phases: dense training, pruning, and retraining.
Dense training: we train a dense kernel directly in the transform domain. The transformed kernel is initialized and trained directly by back-propagation through the inverse transform, eliminating the need to maintain a kernel in the spatial domain or to transform a spatial kernel.
Pruning: we prune the transformed kernel by computing the threshold t required to achieve a desired pruning rate and setting all weights whose absolute value is less than t to zero. In our experiments, we used the same pruning rate for all Winograd-ReLU layers. Because sensitivity varies from layer to layer, we expect that better performance could be achieved by varying the pruning rate for each layer.
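A minimal sketch of this pruning step (NumPy; the tensor shape and density value are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def prune_winograd_weights(w, density):
    """Magnitude pruning: keep the largest-|w| fraction `density`, zero the rest.

    The threshold t is the (1 - density) quantile of |w|. The boolean mask is
    returned as well, since retraining reuses it to keep pruned weights at zero.
    """
    t = np.quantile(np.abs(w), 1.0 - density)
    mask = np.abs(w) > t
    return w * mask, mask

# Hypothetical Winograd-domain kernel stack: 4x4 tiles, 8 input, 16 output channels.
w = np.random.default_rng(0).standard_normal((4, 4, 8, 16))
pruned, mask = prune_winograd_weights(w, density=0.25)
print(mask.mean())  # fraction of weights kept, close to the requested density
```

A per-layer variant would simply call `prune_winograd_weights` with a different `density` for each layer, matching the layer-sensitivity remark above.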
Retraining: we retrain the model using a "sparsity mask" to force the weights that were pruned to remain zero. The sparsity mask is computed during the pruning step and is kept constant during retraining. The gradient of the network's loss L with respect to the input activations and the Winograd-domain weights can be derived using the chain rule. Equation (5) shows the calculation of the input-activation gradient and the Winograd-weight gradient from the loss gradient ∂L/∂S passed down from upstream layers.
∂L/∂(G g G^T) = ReLU(B^T d B) ⊙ (A · (∂L/∂S) · A^T)        (5)
∂L/∂d = B [ Prune(G g G^T) ⊙ (A · (∂L/∂S) · A^T) ⊙ 1(B^T d B > 0) ] B^T
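These chain-rule gradients can be validated with finite differences. The sketch below (NumPy; the F(2×2, 3×3) matrices from Lavin (2015), a toy loss L = ΣS, and random tiles and mask are assumptions for illustration) checks both the masked Winograd-weight gradient and the input-activation gradient:

```python
import numpy as np

BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=float)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=float)

rng = np.random.default_rng(1)
d = rng.standard_normal((4, 4))           # input activation patch
U = rng.standard_normal((4, 4))           # Winograd-domain weights
mask = rng.random((4, 4)) > 0.5           # sparsity mask from the pruning step
U *= mask

def loss(U_, d_):
    """Toy loss: L = sum of the output tile S from equation (4)."""
    V_ = np.maximum(BT @ d_ @ BT.T, 0.0)
    return (AT @ (U_ * V_) @ AT.T).sum()

# Analytic gradients by the chain rule (dL/dS = ones for this loss).
V = BT @ d @ BT.T
dM = AT.T @ np.ones((2, 2)) @ AT          # dL/d(U ⊙ ReLU(V)) = A (dL/dS) A^T
dU = np.maximum(V, 0.0) * dM * mask       # masked Winograd-weight gradient
dd = BT.T @ (U * dM * (V > 0)) @ BT       # input-activation gradient

# Numerical check by central finite differences.
eps = 1e-6
num_dU = np.zeros((4, 4))
num_dd = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        e = np.zeros((4, 4)); e[i, j] = eps
        num_dU[i, j] = (loss(U + e, d) - loss(U - e, d)) / (2 * eps)
        num_dd[i, j] = (loss(U, d + e) - loss(U, d - e)) / (2 * eps)

assert np.allclose(dU, num_dU * mask, atol=1e-5)  # mask also gates the update
assert np.allclose(dd, num_dd, atol=1e-5)
```

Masking the weight gradient (the `* mask` factor) is exactly the retraining rule described above: pruned weights receive no updates and stay at zero.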
4 Experiments
We applied the methodology described above to several different CNNs on different datasets. The original network models were chosen such that the majority of the convolution layers have 3×3 kernels. This ensures that the largest portion of layers can be converted to Winograd convolution layers, with ReLU placed in the Winograd domain. We used image classification datasets of different scales: CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and ImageNet 2012 (Russakovsky et al., 2015). For network architectures, we chose VGG-nagadomi (Nagadomi, 2014), the ConvPool-CNN-C model (Springenberg et al., 2015) and a variation of ResNet-18 (He et al., 2016a), respectively, on the three datasets. Using the TensorFlow (Abadi et al., 2016) framework, we trained the spatial baseline CNN, the corresponding conventional Winograd CNN, and the Winograd-ReLU CNN models from scratch. The three models were then iteratively pruned and retrained. For each dataset, we used the same data augmentation when training all models.
4.1 CIFAR-10
We used VGG-nagadomi (Nagadomi, 2014) on the CIFAR-10 dataset. VGG-nagadomi is a lightweight version of VGGNet (Simonyan & Zisserman, 2015); it contains 8 convolution layers with 3×3 kernels, and Nagadomi (2014) reports its best validation-set accuracy on CIFAR-10. We trained three models from scratch; the corresponding conventional Winograd CNN and Winograd-ReLU CNN models achieve validation accuracy comparable to the spatial baseline. The first convolution layer is the most sensitive to pruning, so we kept its density constant. We iteratively pruned and retrained the other convolution layers, gradually lowering their density.
Figure 2 shows test accuracy as a function of weight density for the three models. The two baseline models lose significant accuracy when pruned beyond a moderate density; our Winograd-ReLU CNN model can be pruned to a much lower density before falling to the same accuracy.
Table 1: Weight density, input-activation density, and workload for each pruned convolution layer (conv0–conv7), the convolution-layer total, and the overall network, for the spatial-baseline, conventional Winograd, and Winograd-ReLU models on CIFAR-10.
Table 1 shows the input activation density and compares the workloads for each pruned convolution layer in the three models. Pruning the two baseline models reduces the convolution-layer workload; pruning the Winograd-ReLU model reduces it substantially further, improving on both baselines in both convolution-layer and overall network workload. (All Winograd CNN workload-reduction results include the intrinsic 2.25× reduction of the Winograd algorithm itself.)
4.2 CIFAR-100
We used the ConvPool-CNN-C model (Springenberg et al., 2015) on the CIFAR-100 dataset. ConvPool-CNN-C contains 9 convolution layers, of which 7 have 3×3 kernels. We trained three models from scratch; the spatial baseline CNN, conventional Winograd CNN and Winograd-ReLU CNN models achieve comparable single-model validation accuracy. We pruned the first convolution layer to a constant density, and iteratively pruned and retrained the other layers, gradually lowering their density.
Figure 3 shows accuracy as a function of density for the spatial-baseline and Winograd-ReLU models. The spatial-baseline and Winograd-ReLU models can be pruned to a much lower density than the conventional Winograd CNN model without significant loss of accuracy. At a given density, the Winograd-ReLU model has the highest accuracy.
Table 2: Weight density, input-activation density, and workload for each pruned convolution layer (conv0–conv6), the convolution-layer total, and the overall network, for the spatial-baseline, conventional Winograd, and Winograd-ReLU models on CIFAR-100.
Table 2 shows the input activation density and compares the workloads for each pruned convolution layer in the three models. Pruning the two baseline models reduces the convolution-layer workload; pruning the Winograd-ReLU model reduces it further, improving on both baselines in both convolution-layer and overall network workload.
4.3 ImageNet
We used a variation of the full pre-activation version (He et al., 2016b) of ResNet-18 (He et al., 2016a) on the ImageNet 2012 dataset. We used this version because it performs best among the various ResNet versions and its structure suits our Winograd-ReLU approach: its ReLU units are located before the convolutions in the residual modules. The variation differs from the original ResNet-18 in that all strided convolution layers are replaced with a max-pooling layer followed by a stride-1 convolution layer. This ensures that most convolution layers can be converted to Winograd convolution layers. Another difference is that it omits the last max-pooling layer, so the last group of residual modules keeps an even spatial size rather than an odd one. This suits Winograd convolution with m = 2 best, since an even spatial size is required for even m values.
We trained three models from scratch. With a single model and a single central crop, the spatial baseline CNN, conventional Winograd CNN and Winograd-ReLU CNN models achieve comparable top-1/top-5 validation accuracy. We kept the first convolution layer intact and iteratively pruned the other convolution layers, gradually lowering their density.
Figure 4 shows accuracy as a function of density for the three models. The spatial baseline CNN and conventional Winograd CNN models lose significant top-1 or top-5 accuracy when pruned beyond a moderate density. The Winograd-ReLU model can be pruned much further before suffering a comparable loss of top-1/top-5 accuracy.
Table 3: Weight density, input-activation density, and workload for each pruned convolution layer (res2a_2a through res5b_2b), the convolution-layer total, and the overall network, for the spatial-baseline, conventional Winograd, and Winograd-ReLU models on ImageNet.
Table 3 shows the input activation density and compares the workloads for each pruned convolution layer in the three models. Pruning the two baseline models reduces the convolution-layer workload; pruning the Winograd-ReLU model reduces it further, improving on both baselines in both convolution-layer and overall network workload.
5 Discussion
In this section, we summarize the experimental results and compare the three models in terms of a) weight and activation dimensions and b) the dynamic density of activations. We then visualize the kernels to illustrate the pattern of the proposed Winograd-ReLU model kernels.
5.1 Weight and Activation Dimension
In a convolutional neural network, a convolution-ReLU pair acts as a classifier on a spatial patch of an input feature. The dimension of the space being classified is the total number of elements passing through the ReLU layer. The decision boundaries of the classifier are determined by the weights. Having too few nonzero weights or too few activations results in too simple a decision boundary and causes accuracy loss.
Experimental results have shown that the Winograd-ReLU CNN can reach the same accuracy as both the vanilla spatial baseline CNN and the conventional Winograd CNN without pruning, and that the Winograd-ReLU CNN is more robust to aggressive pruning. In this subsection we explain the latter observation in terms of activation and weight dimensions. Table 4 summarizes these dimensions.
Table 4: Weight dimension and ReLU dimension of a convolution-ReLU pair for the vanilla CNN, conventional Winograd CNN, and Winograd-ReLU CNN.
Weight Dimension Increase: Compared to a vanilla CNN with 3×3 (9-element) spatial kernels, a conventional Winograd CNN uses 4×4 (16-element) Winograd-domain kernels. Training the kernels directly in the Winograd domain allows all 16 dimensions to be used, rather than only the 9 effective dimensions spanned by transformed spatial kernels; a Winograd-ReLU CNN shares this characteristic.
ReLU Dimension Increase: A major difference between our Winograd-ReLU CNN and the conventional Winograd CNN is that the ReLU layers in the Winograd-ReLU CNN have higher dimension. The increase comes from the Winograd transformation extracting 4×4 feature patches with stride 2 from the activations: for an H×W feature map, the total number of Winograd-domain activations is (H/2)·(W/2)·16 = 4HW, a 4× increase over the spatial domain's HW.
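The bookkeeping above is easy to reproduce. In this sketch (the feature-map size is an arbitrary assumption), a 4×4 patch is extracted every 2 pixels, so each spatial activation corresponds to four Winograd-domain activations:

```python
# Dimension bookkeeping for an H x W x C feature map with p = 4 (m = 2).
H, W, C = 56, 56, 64                          # arbitrary example layer size
spatial_relu_dim = H * W * C                  # elements through a spatial ReLU
tiles = (H // 2) * (W // 2)                   # 4x4 patches extracted with stride 2
winograd_relu_dim = tiles * 16 * C            # 16 transformed elements per patch
print(winograd_relu_dim // spatial_relu_dim)  # -> 4
```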
We can see that our Winograd-ReLU architecture has an advantage in the dimensions of both weights and activations over the other two models. This means Winograd-ReLU CNNs classify in a higher-dimensional space with more complex decision boundaries, giving stronger representational ability in the high-dimensional image feature space.
5.2 Dynamic Activation Density
As shown in the ImageNet results of the previous section, the dynamic activation density of the spatial baseline CNN model varies significantly among layers: layers at earlier stages typically have higher activation density than those at later stages. In the Winograd-ReLU CNN model, the dynamic activation densities vary little among layers and are all close to 50%.
An explanation is that the nature of image convolution ensures activations are spatially smooth. Thus, due to the structure of the B^T(·)B transform (Lavin, 2015), 15 of the 16 elements of a Winograd-domain activation patch have a mean close to zero. This benefits classification within a patch, since a ReLU layer is most powerful when half of the activations are positive.
5.3 Kernel Visualization
We visualize the kernels of the proposed Winograd-ReLU model. We selected the first 6 input and output channels of layer res2a_2a of ResNet-18 at three different pruning densities. Unlike spatial-domain kernels, Winograd-ReLU kernels do not show clear physical meanings such as edge or corner detectors. However, we observe that the values of the (2,2) elements (from the top-left, 1-based indices) are typically distinct within a kernel and are most likely to be kept during aggressive pruning. A possible reason is that the (2,2) elements of a Winograd-domain activation patch are special: interested readers can compute B^T d B symbolically and will see that the (2,2) element is the only one formed as a linear combination of patch elements using only additions and no subtractions. In a spatially smooth activation patch, this means the (2,2) elements are the only ones with a nonzero mean.
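This claim is easy to verify numerically: for a perfectly smooth (constant) patch, every Winograd-domain element except the (2,2) one cancels. A sketch using the standard F(2×2, 3×3) B^T matrix (Lavin, 2015):

```python
import numpy as np

BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)

d = np.ones((4, 4))        # constant patch: the extreme of spatial smoothness
V = BT @ d @ BT.T          # Winograd-domain activations
print(V)
# Only V[1, 1] (the (2,2) element, 1-based) is nonzero: its transform row
# [0, 1, 1, 0] uses additions only, while every other row mixes additions and
# subtractions, which cancel on a constant patch.
```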
6 Conclusion and Future Work
We have shown that we can combine the computational savings of sparse weights and activations with the savings of the Winograd transform by making two modifications to conventional CNNs. To make the weights sparse at the point of multiplication, we train and prune the weights in the transform domain. On its own, however, this does not reduce the workload relative to spatial pruning, so we also move the ReLU nonlinear operation after the Winograd transform to make the activations sparse at the point of multiplication. Moving ReLU to the Winograd domain also allows the weights to be pruned more aggressively without losing accuracy. With a 2×2 output patch (m = 2), the net result is a substantial reduction in computation on three datasets: CIFAR-10, CIFAR-100 and ImageNet.
We plan to extend this work in the following directions. First, we expect that even greater computational savings can be realized by using larger patch sizes (e.g., 6×6), and there may be benefit in exploring different Winograd transformation matrices (B, G and A). Second, we expect that using different pruning rates for each network layer will help maintain accuracy and improve overall workload reduction. Finally, we expect that combining our Winograd-ReLU network with other network simplification techniques, e.g. quantization of weights and/or activations (Courbariaux et al., 2015; Lin et al., 2016; Rastegari et al., 2016), will reduce the energy of computation even further.
References
 Abadi et al. (2016) Martín Abadi et al. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI), 2016.
 Albericio et al. (2016) Jorge Albericio, Patrick Judd, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Cnvlutin: Ineffectual-Neuron-Free Deep Neural Network Computing. In Proceedings of the 43rd International Symposium on Computer Architecture (ISCA), 2016.
 Chetlur et al. (2014) Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. CoRR, abs/1410.0759, 2014. URL http://arxiv.org/abs/1410.0759.
 Cong & Xiao (2014) Jason Cong and Bingjun Xiao. Minimizing computation in convolutional neural networks. In International Conference on Artificial Neural Networks (ICANN), pp. 281–290. Springer, 2014.
 Courbariaux et al. (2015) Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems (NIPS), 2015.
 Han et al. (2015) Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient neural networks. In Advances in neural information processing systems (NIPS), 2015.
 Han et al. (2016a) Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. In Proceedings of the 43rd International Symposium on Computer Architecture (ISCA), 2016a.
 Han et al. (2016b) Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In International Conference on Learning Representations (ICLR), 2016b.
 Hassibi et al. (1993) Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in Neural Information Processing Systems (NIPS), pp. 164–164, 1993.
 He et al. (2016a) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016a.
 He et al. (2016b) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision (ECCV), pp. 630–645. Springer, 2016b.
 Huan et al. (2016) Yuxiang Huan, Yifan Qin, Yantian You, Lirong Zheng, and Zhuo Zou. A multiplication reduction technique with near-zero approximation for embedded learning in IoT devices. In System-on-Chip Conference (SOCC), 29th IEEE International, pp. 102–107. IEEE, 2016.
 Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (NIPS), 2012.
 Lavin (2015) Andrew Lavin. Fast algorithms for convolutional neural networks. CoRR, abs/1509.09308, 2015. URL http://arxiv.org/abs/1509.09308.
 LeCun et al. (1990) Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems (NIPS), pp. 598–605, 1990.
 LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Li et al. (2017) Sheng R. Li, Jongsoo Park, and Ping Tak Peter Tang. Enabling sparse Winograd convolution by native pruning. CoRR, abs/1702.08597, 2017. URL http://arxiv.org/abs/1702.08597.
 Lin et al. (2016) Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. In Proceedings of International Conference on Learning Representations (ICLR), 2016.
 Liu et al. (2017) Xingyu Liu, Song Han, Huizi Mao, and William J. Dally. Efficient sparse-winograd convolutional neural networks. International Conference on Learning Representations (ICLR) Workshop, 2017.
 Nagadomi (2014) Nagadomi. Code for kaggle-cifar10 competition. 5th place. https://github.com/nagadomi/kaggle-cifar10-torch7, 2014.
 Rastegari et al. (2016) Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision (ECCV), 2016.
 Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li FeiFei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
 Simonyan & Zisserman (2015) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for largescale image recognition. In Proceedings of International Conference on Learning Representations (ICLR), 2015.
 Springenberg et al. (2015) Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. International Conference on Learning Representations (ICLR) Workshop, 2015.
 Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, June 2015.
 Winograd (1980) Shmuel Winograd. Arithmetic complexity of computations, volume 33. SIAM, 1980.