About Pyramid Structure in
Convolutional Neural Networks
Deep convolutional neural networks (CNNs) have undoubtedly revolutionized a range of challenging tasks, mainly in computer vision. However, their design still requires attention: the number of learnable parameters should be reduced without a meaningful loss in performance. In this paper we investigate to what extent CNNs may take advantage of the pyramid structure typical of biological neurons. A generalized statement over the convolutional layers, from the input up to the fully connected layer, is introduced that helps in understanding and designing a successful deep network. It reduces ambiguity, the number of parameters, and their size on disk without degrading overall accuracy. Performance is shown on state-of-the-art models for the MNIST, Cifar-10, Cifar-100, and ImageNet-12 datasets. Despite a substantial reduction in parameters for Caffe_LENET, competitive results are obtained. Further, despite a reduction in training data along with a reduction in parameters for the Alexnet model and its variations, competitive results are achieved when compared to similar well-engineered deeper architectures.
Convolutional Neural Networks (CNNs) have played a significant role in the rise of deep learning. However, all these models use the same concept of producing feature maps in a convolutional layer followed by a pooling layer to reduce the dimension of the maps. Models such as Alexnet , GoogleNet , VGG  and many others [4, 5, 6, 7] assert that the deeper the network, the better it performs. In addition, as they went deeper, these models slightly changed the concept to avoid vanishing gradients, by using class inference between consecutive convolution and max pooling layers, or by using a layer-wise softmax layer that re-boosts the vanishing gradients [2, 3, 9]. Others used new activation functions, weight-update regularization methods, class inference, or layer-wise pre-training in supervised approaches, which showed very promising results [5, 8]. Increasing the number of layers means increasing the number of parameters in a network. With the introduction of the Network in Network (NIN) model , the issue of reducing parameters was highlighted once again; however, it tends toward greater computation. As most parameters reside in the fully connected (FC) layers, C. Szegedy et al.  use complex sparsity-reduction methodologies to refine trained models. S. Han et al.  tried to learn the connections in each layer instead of the weights, and then retrained the network to reduce the number of parameters. Unfortunately, these aforementioned models are not suitable for real-world mobile devices: since they enormously raise the number of computational operations, the number of parameters is unarguably a substantial issue in the application space, and it increases the memory cost owing to the large size of trained models on disk.
Biological studies led to the idea of image pyramids (IPs) . IPs have been shown to be an efficient data and processing structure for digital images in a variety of vision applications, e.g. object and digit recognition . IPs, like neural networks, are massively parallel in structure. At their simplest, pyramids are a stack of filtered images with exponentially reduced dimensions. Pyramids have a long history in machine learning and computer vision, and several models have been proposed based on them, e.g. the Neocognitron, early LENET, pyramidal neural networks, spatial pyramids, and several others [13, 14, 15]. In some recent works, the pyramid structure is used in only one layer, as in Spatial Pyramid Matching (SPM)  for the pooling layer. H. Fan et al.  used a pyramid structure in the last conv layer of a CNN for face recognition on LFW and reported 97.5% accuracy. P. Wang et al.  utilize the pyramid structure more fully than the aforementioned work: they applied temporal pyramid pooling to exploit the temporal structure of videos, just as the spatial pyramids in  incorporate weak spatial information of local features. Their pyramidal temporal pooling method showed better results than the state-of-the-art two-stream model  on the HMDB51 dataset. On the other hand, models like PyraNet , I-PyraNet , LIPNET  and their extensions emphasize a strict pyramidal structure from input to output; their objective was to show that following a strict pyramidal structure can enhance performance compared to unrestricted models.
We have explored, and propose basic answers to, the main questions regarding CNNs for the image classification problem. The questions we identified are: what is the impact of reversing the number of kernels in a convolutional layer? Does reversing a model work in every case? What is the impact of a strict pyramidal structure? And how can the number of parameters be reduced without complex rules and without loss of accuracy? To answer these questions we used some well-known state-of-the-art models, e.g. LENET , AlexNet and its modified reference model BVLC_Reference_Caffe . We show that the same number of filters can be used in a different but pyramidal order without affecting the performance of the base network.
The rest of the paper is organized in four sections. Section II provides background on CNNs and pyramidal models. Section III introduces the strictly pyramidal CNN structure (SPyr_CNN), while Section IV answers the questions about architecture and performance with the experimental results we achieved for the reference CNNs and SPyr_CNNs. Finally, Section V presents a conclusion with some future directions.
There are four key ideas behind CNNs that take advantage of the properties of an image: local connections, shared weights, pooling and the use of many layers. Deep models, and specifically CNNs, exploit the compositional hierarchy of images, in which higher-level features are obtained by composing lower-level ones: local combinations of edges form motifs, motifs assemble into parts, and parts form objects. The role of the convolutional layer is to detect local conjunctions of features from the previous layer, e.g. edges and motifs, whereas pooling layers not only reduce dimensionality but also merge semantically similar features into one; this provides invariance to small shifts and distortions. Y. LeCun's  early CNN followed a strict pyramidal approach; in addition, its weight-sharing concept helped neural networks reduce the burden of a large number of trainable parameters. However, recent CNNs do not follow a strict structure: although they reduce the feature-map size at each higher layer, they also increase the number of maps, which in turn increases the total number of trainable parameters. Some well-known models, such as AlexNet , which won ILSVRC with 60M parameters, the DeepFace model with 120M parameters that achieved the best face recognition accuracy, or a recent very deep model  with 133-144M parameters reaching about 24-30% top-1 error on the ILSVRC-12 dataset, start from a small number of feature maps and end with 512 or more maps at the highest layers.
Therefore, some recent works like  and  have addressed the issue of memory usage in deep networks by reducing the number of parameters. M. Lin et al.  proposed a nonlinear local patch modeling in CNNs by replacing the linear convolution in each layer with a micro-network filter convolved over the input; this micro filter works like a multi-layer perceptron. The technique was extended and used in the Inception model  as micro-network modules, where it acts as dimensionality reduction to remove computational bottlenecks and reduce storage costs. Collins and Kohli  used sparsity-inducing regularizers during training to encourage zero-weight connections in the conv and FC layers. T. Sainath et al.  exploited low-rank matrix factorization to reduce the network parameters in FC layers. M. Denil et al.  tried to predict parameters from other parameters.
Strict pyramidal models [14, 25, 26] take a large amount of data as input and refine it layer by layer, in order to discard unwanted features and to base the final decision on a small number of most likely, unambiguous features. This underlies many feature extraction and selection processes in computer vision and machine learning. In computer vision, SPM with the SIFT+FV technique dominated the ILSVRC classification and detection competitions until AlexNet arrived . Lazebnik et al.  introduced the spatial pyramid technique, which was widely used for object recognition before deep CNNs. J. Masci et al.  used a multi-scale pyramid pooling layer to obtain a fixed-size input feature vector. Recently, a much deeper model was introduced, taking inspiration from both  and . This spatial pyramid pooling (SPP)  approach provides multiple fixed-size inputs to the FC layer with the help of pyramid pooling, and it showed better results than standard fixed-size input models, e.g. Overfeat and AlexNet.
III Pyramidal Structure in CNNs
Despite the success of CNNs, there is no principled way of finding well-performing CNN architectures other than naive exploration of the architecture space. This is a question many newcomers ask: how should the filters in each layer be arranged to model a good network? Little reasoning has been given beyond the heuristic that increasing the number of maps as the network goes deeper gives good results. Therefore, we introduce a simple rule of thumb for designing a CNN architecture, based on the study of pyramidal neurons in the brain, highlighting this main question:
"If a deep CNN architecture with a specific number of layers and filters works on a specific task, and the number of layers is kept the same while the number of filters is reversed, provided that the result forms a pyramidal structure, will the resultant pyramidal architecture give the same or better performance?"
This question highlights two main aspects: we ask whether a network still works if reversed, and whether it works with a pyramidal structure. Beyond the pyramidal structure itself, a hint for this question is taken from a trick in the PCA approach, where simply changing the order of the matrices when calculating the covariance matrix not only reduces the time complexity but also avoids memory overflow . Answering this question thus helped in proposing a general rule for modeling a network, reducing the number of filters by a constant factor from each layer to the next, as well as in better understanding network behavior. We have therefore investigated a strict pyramidal network (SPyr_CNN) architecture, as shown in Fig. 1. It starts from a big input/first layer and then refines the features at each higher layer, until it reaches a reduced, most discriminative set of features, as is done in [14, 20, 21]. Imposing a strict pyramidal structure should not only retain or improve accuracy, but also reduce the number of parameters, which results in less memory space on disk. This makes CNNs more feasible for applications where memory is scarce. To support this goal, we present experimental evidence based on comparison with state-of-the-art models and datasets in the following section.
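The pyramidal constraint itself is easy to state mechanically. As a minimal sketch (our own helper, not from the paper's code), a filter schedule is strictly pyramidal when the kernel counts never increase from input toward output:

```python
def is_strictly_pyramidal(kernels):
    """True if kernel counts never increase from input toward output."""
    return all(a >= b for a, b in zip(kernels, kernels[1:]))

# Reversing Caffe_LENET's 20-50 schedule yields a pyramidal 50-20 one,
# while reversing BVLC_Ref's 96-256-384-384-256 does not: the reversed
# list 256-384-384-256-96 still grows between its first two layers.
print(is_strictly_pyramidal([50, 20]))
print(is_strictly_pyramidal([256, 384, 384, 256, 96]))
```

This simple monotonicity test is what distinguishes, in the experiments below, models that learn (SPyr variants) from reversed models that do not.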
IV Experimental Results
We provide empirical evidence that a SPyr_CNN on small-to-large benchmark datasets (MNIST, CIFAR-10, CIFAR-100 and ImageNet-12) performs similarly to or better than complex state-of-the-art deep CNN models despite the reduction in parameters. MNIST is a well-known 10-class handwritten digit recognition dataset , consisting of 60000 training and 10000 testing images. Cifar-10 is an object recognition dataset  of 60000 colour images with 6000 images per class; in each class, 5000 images are used for training and the remaining 1000 for testing. We used these datasets for an in-depth study of different models based on comparison with the state of the art, since CNNs can be easily trained and tested on them at reasonable computing and time cost. Cifar-100 is also an object recognition dataset, with 100 classes but the same total number of images as Cifar-10 ; because fewer images per class means less training data, it is considered a medium-size, challenging dataset. The ImageNet-2012 dataset consists of 1000 classes , comprising 1.2 million training images and 50000 validation images (used as the testing set). Results are measured by top-1 and top-5 error rates; we only used the provided data for training, and all results were evaluated on the validation set. The top-5 error rate is the metric officially used to rank methods in the classification challenge. We performed two kinds of experiments to evaluate the power of pyramidal networks, i.e. with full training sets and with reduced training sets; for the latter, we removed 10% and 20% of randomly selected images from the Cifar-100 and ILSVRC-12 training sets. To generalize a common rule for selecting the number of filters in each layer of a deep network, we used Cifar-10 and ILSVRC-12 to examine the best decrement factor for a successful model. These models are explored in the upcoming sections. Strict pyramidal models are denoted by the prefix SPyr.
We implemented these SPyr_CNN models on the widely deployed Caffe  framework. The training hyper-parameters are all the same as in the Alexnet model and its variant, the BVLC_Reference_Caffe (BVLC_Ref) model in Caffe , i.e. weight decay of 0.0005, momentum of 0.9, and dropout of 0.5 in the FC layers. No sophisticated pre-processing was done; we only used the off-the-shelf normalization method available with Caffe. However, the learning parameters vary according to the dataset and the respective state-of-the-art method used in the literature. To calculate the number of parameters in each layer we adopted the formula r x c x m x n, where r and c are the row and column sizes of the kernel, m is the number of channels in that layer, and n is the number of kernels for the upper layer, resulting in n output feature maps.
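The per-layer count can be computed directly. The sketch below (our own helper, following the paper's convention of counting r x c x m x n weights and ignoring bias terms) applies the formula to the two conv layers of Caffe_LENET, which uses 5x5 kernels:

```python
def conv_params(r, c, m, n):
    """Weights of a conv layer: r x c kernel, m input channels, n kernels."""
    return r * c * m * n

# Caffe_LENET: 5x5 kernels on a 1-channel input with 20 kernels in C1,
# then 20 input channels and 50 kernels in C2.
c1 = conv_params(5, 5, 1, 20)
c2 = conv_params(5, 5, 20, 50)
print(c1, c2)  # 500 and 25000 weights
```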
IV-A Impact of Pyramidal Structure
A few standard CNN architectures are examined for the impact of reversing them and imposing a pyramidal structure. The reference results for MNIST and Cifar-10 are shown by Caffe_LENET and C10, respectively, in Table I. Caffe_LENET has 20 filters in the first convolutional layer (C1) and 50 filters in the second convolutional layer (C2). If we reverse it, i.e. 50 filters in C1 and 20 in C2, we call it SPyr_Rev_LENET, and it yields competitive accuracy, as shown in Table I. The same holds for Cifar-10, i.e. SPyr_Rev_C10. Cifar-100 has 500 training images per class, for 50000 training images in total; this makes it really challenging, and we faced a 0.37% loss in overall accuracy, as shown by SPyr_Rev_C100 in Table I. BVLC_Ref is similar to Alexnet but with a slight change in the order of the pooling and normalization layers . The BVLC_Ref model in reverse does not give a pyramidal structure, i.e. Rev_BVLC_Ref has 256, 384, 384, 256, and 96 kernels at C1, C2, C3, C4, and C5, respectively; as expected, it does not learn. Therefore, we rearranged it, aiming both to provide a pyramidal structure and to maintain the same total number of filters. Hence, SPyr_Rev_BVLC in Table I gets 352 kernels in C1 and C2, followed by 256 kernels in C3 and C4; finally, C5 gets 160 kernels, taking 32 kernels from each of C1 and C2. As shown, the reversed SPyr models helped each standard CNN model retain or increase overall performance. However, with big networks we learned one important point: if the reverse of a model does not result in a pyramidal structure, the performance drops or the network does not learn at all, as in the case of the Rev_BVLC_Ref model.
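The rearrangement can be checked mechanically. A short sketch (our own code, using the kernel counts reported above) verifies that SPyr_Rev_BVLC keeps BVLC_Ref's total filter budget while restoring a monotone, pyramid-like shape that the plain reversal lacks:

```python
bvlc_ref      = [96, 256, 384, 384, 256]   # C1..C5 of BVLC_Ref
rev_bvlc      = bvlc_ref[::-1]             # plain reversal: 256-384-384-256-96
spyr_rev_bvlc = [352, 352, 256, 256, 160]  # rearranged as in the paper

# The plain reversal is not monotone non-increasing, the rearrangement is,
# and both use exactly the same total number of kernels as the original.
monotone = lambda ks: all(a >= b for a, b in zip(ks, ks[1:]))
print(monotone(rev_bvlc))                   # the reversal grows at C2
print(monotone(spyr_rev_bvlc))              # the rearrangement is pyramidal
print(sum(bvlc_ref) == sum(spyr_rev_bvlc))  # 1376 kernels either way
```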
The question of how many kernels one should have in the first layer, or in each layer, is still not theoretically settled and remains an open research problem. However, in our experiments with SPyr_CNNs we observed that the number of kernels in the last conv (C5) layer has a high impact on overall accuracy. We therefore conclude that there should be enough filters in the first layer (C1) that, after the reduction over several layers, the last convolutional layer still receives roughly 40-60% of the filters in C1. Otherwise, the error rate increases by 1-3%, or in the worst case the network does not learn at all, e.g. Rev_BVLC_Ref. This is achieved by reducing the number of kernels by a constant factor from each layer to the next, as shown in Table III; it ensures a pure pyramid structure, unlike SPyr_Rev_BVLC, where two pairs of layers had the same number of kernels. We also tested the model on Cifar-10 and ImageNet. On Cifar-10, a 10-15% reduction in kernel numbers gives almost the same performance, whereas a 20% reduction decreases accuracy by 0.86%. In big models like ImageNet, our SPyr_BVLC_Ref** with a 10% reduction in each layer, including the FC layers, improves accuracy by 0.59% and 0.61% compared to our SPyr_BVLC_Ref and BVLC_Ref, respectively (see Table III), whereas models with a 20% reduction got bad results or did not learn owing to too few kernels in C5. Further, BVLC_Ref  reaches its maximum accuracy at iteration 313000, whereas we report ours at 393000; by the end of the 450000 iterations our SPyr_BVLC_Ref** had gained 0.61% accuracy.
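The 10% rule can be sketched as follows (our own helper; we assume, from the kernel counts 384-346-308-270-232 reported for SPyr_BVLC_Ref**, that the decrement is a fixed 10% of the first layer's kernel count subtracted at each step):

```python
def pyramidal_schedule(first, layers, fraction=0.10):
    """Kernel counts obtained by subtracting fraction*first at each layer."""
    step = round(fraction * first)
    return [first - i * step for i in range(layers)]

# Five conv layers starting from 384 kernels with a 10% decrement
# reproduce the conv part of SPyr_BVLC_Ref**: 384-346-308-270-232.
print(pyramidal_schedule(384, 5))
# The 40-60% guideline holds: C5 keeps 232/384, about 60% of C1's kernels.
```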
IV-B Parameter Reduction & Size on Disk
Another big impact of the SPyr approach is its reduction in parameters, which results in a smaller trained model on disk, as can be seen in Table II. We performed a number of experiments with MNIST to see how far the parameters can be reduced. At first glance, SPyr_Rev_LENET reduced the parameters by 55.5% and improved performance by 0.02%; this reduction in parameters translated into less disk space, the reversed model taking only 0.749MB. We pushed further and designed SPyr_LENET with a larger reduction in parameters, which still gave a 0.03% improvement over Caffe_LENET. Moreover, when we removed more than 90% of the parameters from the convolutional and FC layers, the model retained the same accuracy, as shown by SPyr_LENET* in Table II. SPyr_LENET** shows that when accuracy matters more than memory, SPyr models can perform much better. Similarly, for Cifar-10 and Cifar-100, the reversed models in Table II not only reduce the parameters but also give competitive results.
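The 55.5% figure for SPyr_Rev_LENET can be sanity-checked with a back-of-the-envelope count. The sketch below (our own code) assumes the standard Caffe LeNet shapes: 5x5 kernels, two 2x2 poolings taking 28x28 inputs down to 4x4 maps before a 500-unit FC layer, and, as in the paper's formula, it ignores bias terms:

```python
def lenet_params(c1, c2, fc=500, classes=10):
    """Approximate weight count of Caffe LeNet with c1/c2 conv kernels."""
    conv = 5 * 5 * 1 * c1 + 5 * 5 * c1 * c2   # two conv layers
    dense = (c2 * 4 * 4) * fc + fc * classes  # FC layers on 4x4 output maps
    return conv + dense

orig = lenet_params(20, 50)  # Caffe_LENET
rev = lenet_params(50, 20)   # SPyr_Rev_LENET: same kernels, reversed order
print(orig, rev, 1 - rev / orig)  # reduction close to the reported 55.5%
```

Note that most of the saving comes from the FC layer: reversing shrinks the last conv layer from 50 maps to 20, so far fewer activations feed the dense weights.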
In the case of ImageNet, our trained SPyr_BVLC_Ref and SPyr_Rev_BVLC models are considerably smaller on disk than BVLC_Ref, with better and slightly lower accuracy, respectively. This is due to proper model selection, with millions of fewer parameters, resulting in reduced ambiguity. The reduction comes not only from the conv layers but also from the FC layers, since fewer maps are connected to the neurons. These results show that this approach can make a significant difference in real-world applications where memory storage is an issue.
S. Han et al.  reduced parameters with a time-consuming three-step process: rather than training weights directly, they first learn which connections are most important, then prune the unimportant connections, and finally retrain the remaining connections and their weights. They achieved good accuracy with a large reduction in parameters, but training took longer than normal, as shown for Caffe_LENET in Table II; in addition, they applied complex regularization and loss functions to find the optimal results. In contrast, SPyr_LENET** achieved its accuracy with a comparable reduction in parameters and almost the same training time as Caffe_LENET (on an NVIDIA Quadro K4200 GPU, with other simulations running on the same computer); see Table II. Further, our reduction is made not only in the FC layers but also in the conv layers, unlike LeNet-5 Pruned, Table II (F) , which reduced parameters mainly in the FC layer. Alexnet_Pruned  achieved the same result with fewer parameters, but their model took far more iterations, whereas we achieved the same results with simple techniques, a reduction in parameters, and the same number of iterations. However, owing to more maps in the initial layers, our model takes slightly more time than BVLC_Ref. In Fig. 3, D100_Model_1 shows one of our pyramidal models, which reduced the parameters by 30.9623% with only 0.0142% degradation in performance compared to the reference model. Table IV summarizes the best models in terms of parameters and accuracy for MNIST, Cifar-10, Cifar-100 and ILSVRC-12.
(D) SPyr_BVLC_Ref** = 384-346-308-270-232-4096-3687-1000 (kernels per conv layer C1-C5, then FC layer sizes)
IV-C Performance of Pyramidal Models with Less Data
One of the criticisms raised against CNNs is that they do not work when only small datasets are available. Therefore, we tested SPyr_CNNs with reduced data. We divided Cifar-100 and ILSVRC-12 into two new random training sets, i.e. 90% and 80% of the original one. Despite the removal of 10% and 20% of randomly selected images for test1 and test2, respectively, we observed only a gradual degradation in overall accuracy (see Table V). So, if we properly understand CNNs, we can model architectures that give near-optimal results even with smaller training datasets. Our top-1 accuracy even with reduced data is better than that reported in the referenced work. However, when we reduced the data beyond this point, the performance dropped markedly.
The performance of our SPyr_CNN_C100 with reduced training data is shown in Fig. 2 (with 100%, 90% and 80% of the training data). Not only the model with 100% of the training data, but also the models with 90% and 80% learn smoothly. Similarly, Fig. 4 shows the learning behavior of ILSVRC-12 during training with reduced data, and Fig. 3 shows its performance throughout the training process. The behavior of SPyr_BVLC_Ref with 100% (D100_percent), 90% (D90_percent) and 80% (D80_percent) of the data is almost smooth, without any big falls or degradation, and the curves show a very similar nature despite the reduced training data.
IV-D Comparison with State-of-the-art
We compared the pyramidal structure mainly with the Caffe reference models, i.e. AlexNet and BVLC_Ref, as well as some state-of-the-art models, as can be seen in Table VI. Our SPyr_LENET model for MNIST outperformed its base model, as discussed in Section IV-B. We achieved comparable results for Cifar-10 and Cifar-100 with respect to the base models, i.e. 81.65% vs 83.34% and 58.64% vs 58.27%. However, compared to models like [32, 10], our results are less accurate; unlike ours, those results come from sophisticated and complex models with many more parameters, whereas we achieved ours simply by reversing the filters and imposing an SPyr structure. Hence, if we neglect memory and increase the parameters by enlarging the SPyr network, the performance improves. On the large-scale dataset the results were quite promising in terms of accuracy, although training took slightly more time because of the additional maps in the initial layers; this limitation can be avoided by proper model selection, since SPyr_LENET* showed that SPyr models can give better or competitive results even with fewer kernels. In terms of parameter reduction, the recent models in Table VI (F)  and (G)  reduced far more parameters than our models in Table VI (C) and (D), but they achieved lower top-1 and top-5 error on ILSVRC-12.
Some researchers try to avoid fully connected layers, since the majority of parameters come from those layers. However, it should be noted that these layers have a great impact on overall accuracy and depend strongly on the preceding 2D or 3D layers. Therefore, one solution is to reduce the size of the last convolutional layer as well as the number of neurons in a specific order; the pyramidal structure is quite suitable for this scenario. We used the pyramid structure in the FC layers, as can be seen in Table II (N), (O) and (Q): (N) shows a better result by following the SPyr structure, (O) falls only slightly behind, and (Q) gives the best result by reducing kernels and neurons by a constant factor of 10. To visualize and understand the output of our trained models, we show the output maps produced by the first layer of the trained BVLC_Ref model and of our SPyr_BVLC_Ref in Figs. 5 and 6, respectively. The maps produced by our model are clearer, smoother and more interpretable than those of the reference model, showing how better, more fine-grained information is extracted from the real input image. Still, a more detailed analysis is needed to precisely assess the effect of SPyr architectures on new deep models and ImageNet datasets; such a comprehensive quantitative study over multiple networks is extremely time-demanding and thus out of the scope of this paper.
Model (Cifar-100) | Error %
Deeply Supervised Networks  | 34.57
Table legend: (A) Alexnet , (B) BVLC_Ref , (F) Sparse_MemoryBounded , (G) Alexnet_Pruned 
We have demonstrated empirically that giving a pyramidal structure to CNNs can scale down the number of parameters, as well as the solver's memory consumption on disk, while producing competitive results. Our experimental analysis was carried out on four standard datasets and showed the effectiveness of the pyramidal structure. Training with reduced training data showed similarly smooth learning, with only a slight decrease in overall accuracy for each reduction. A suggestion for selecting the number of kernels in each layer, especially the first and last convolutional layers, is given. In a sense, this makes it even more surprising that a simple, strict pyramidal model outperforms many existing sophisticated approaches.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in NIPS, 2012, pp. 1–9.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in CVPR, USA, June 7-12, 2015, pp. 1–9.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014. [Online]. Available: http://arxiv.org/abs/1409.1556
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in ACM International Conference on Multimedia, 2014, pp. 675–678.
-  L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus, “Regularization of neural networks using dropconnect,” in ICML, vol. 28, no. 3, May 2013, pp. 1058–1066.
-  J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid, “Convolutional kernel networks,” in NIPS, 2014, pp. 2627–2635.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 9, pp. 1904–1916, 2015.
-  H. Kaiming, Z. Xiangyu, R. Shaoqing, and S. Jian, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in The IEEE International Conference on Computer Vision (ICCV), December 2015.
-  M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in ECCV, 2014, pp. 818–833. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-10590-1_53
-  M. Lin, Q. Chen, and S. Yan, “Network in network,” CoRR, vol. abs/1312.4400, 2013. [Online]. Available: http://arxiv.org/abs/1312.4400
-  S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both weights and connections for efficient neural networks,” CoRR, vol. abs/1506.02626, 2015. [Online]. Available: http://arxiv.org/abs/1506.02626
-  H. Bischof and W. G. Kropatsch, “Neural networks versus image pyramids,” in Artificial Neural Nets and Genetic Algorithms, 1993, pp. 145–153. [Online]. Available: http://dx.doi.org/10.1007/978-3-7091-7533-0_23
-  Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” in Proceedings of the IEEE, 1998, pp. 2278–2324.
-  S. L. Phung and A. Bouzerdoum, “A pyramidal neural network for visual pattern recognition,” Trans. Neur. Netw., vol. 18, no. 2, pp. 329–343, Mar. 2007.
-  S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in CVPR, USA, 2006, pp. 2169–2178.
-  H. Fan, Z. Cao, Y. Jiang, Q. Yin, and C. Doudou, “Learning deep face representation,” CoRR, vol. abs/1403.2802, 2014. [Online]. Available: http://arxiv.org/abs/1403.2802
-  P. Wang, Y. Cao, C. Shen, L. Liu, and H. T. Shen, “Temporal pyramid pooling based convolutional neural networks for action recognition,” CoRR, vol. abs/1503.01224, 2015. [Online]. Available: http://arxiv.org/abs/1503.01224
-  M. D. Zeiler and R. Fergus, “Stochastic pooling for regularization of deep convolutional neural networks,” CoRR, vol. abs/1301.3557, 2013. [Online]. Available: http://arxiv.org/abs/1301.3557
-  K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in NIPS 27. Curran Associates, Inc., 2014, pp. 568–576.
-  B. Fernandes, G. Cavalcanti, and T. Ren, “Classification and segmentation of visual patterns based on receptive and inhibitory fields,” in HIS, Sept 2008, pp. 126–131.
-  B. Fernandes, G. Cavalcanti, and T. I. Ren, “Lateral inhibition pyramidal neural network for image classification,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 2082–2092, Dec 2013.
-  M. D. Collins and P. Kohli, “Memory bounded deep convolutional networks,” CoRR, vol. abs/1412.1442, 2014. [Online]. Available: http://arxiv.org/abs/1412.1442
-  T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran, “Low-rank matrix factorization for deep neural network training with high-dimensional output targets.” in ICASSP. IEEE, 2013, pp. 6655–6659. [Online]. Available: http://dblp.uni-trier.de/db/conf/icassp/icassp2013.html#SainathKSAR13
-  M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. D. Freitas, “Predicting parameters in deep learning,” in Advances in Neural Information Processing Systems 26, C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, Eds., 2013, pp. 2148–2156. [Online]. Available: http://media.nips.cc/nipsbooks/nipspapers/paper_files/nips26/1053.pdf
-  K. Fukushima, “Neocognitron: A hierarchical neural network capable of visual pattern recognition,” Neural Networks, vol. 1, no. 2, pp. 119 – 130, 1988.
-  D. H. Hubel, “The visual cortex of the brain,” Sci. Amer., vol. 209, pp. 54–62, 1963.
-  J. Masci, U. Meier, G. Fricout, and J. Schmidhuber, “Object recognition with multi-scale pyramidal pooling networks,” arXiv, Jul. 2012.
-  M. Turk and A. Pentland, “Face recognition using eigenfaces,” in CVPR, Jun 1991, pp. 586–591.
-  Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist/
-  A. Krizhevsky, “Learning multiple layers of features from tiny images,” Tech. Rep., 2009.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.
-  C. Lee, S. Xie, P. W. Gallagher, Z. Zhang, and Z. Tu, “Deeply-supervised nets,” in AISTATS, 2015.