ResFeats: Residual Network Based Features for Image Classification
Deep residual networks have recently emerged as the state-of-the-art architecture in image segmentation and object detection. In this paper, we propose new image features (called ResFeats) extracted from the last convolutional layer of deep residual networks pre-trained on ImageNet. We propose to use ResFeats for diverse image classification tasks, namely object classification, scene classification and coral classification, and show that ResFeats consistently perform better than their CNN counterparts on these tasks. Since ResFeats are large feature vectors, we propose to use PCA for dimensionality reduction. Experimental results show the effectiveness of ResFeats, with state-of-the-art classification accuracies on the Caltech-101, Caltech-256 and MLC datasets and a significant performance improvement on the MIT-67 dataset compared to the widely used CNN features.
1 Introduction
Deep convolutional neural networks (CNNs) have shown outstanding results on challenging image classification and detection datasets since the seminal work of Krizhevsky et al. Off-the-shelf image representations learned by these deep networks are powerful and generic, and have been used to solve numerous visual recognition problems [23, 7]. Given their promising performance, off-the-shelf CNN features have become the first choice for solving most computer vision problems.
Training a deep network from scratch is not feasible when solving a classification problem with a small number of labelled training examples. Recent evidence [30, 13, 7] suggests that off-the-shelf CNN features outperform previous handcrafted features on datasets with a limited amount of training data. These features are domain independent and can be transferred to a specific target task without compromising performance. Network width, depth and optimization parameters, along with the network layer from which the features are extracted, play a key role in the effectiveness of transfer learning. This paper attempts to answer the following question: what are the criteria for selecting an initial deep network (pre-trained on ImageNet) from which to extract generic features, so as to maximize performance and transferability across domains? We hypothesise that a better-optimized, higher-performing deep network on ImageNet should yield more powerful and generic image representations. One such network is the deep residual network (ResNet) of He et al.
ResNets are easier to train than other CNN architectures such as VGGnet. For example, a 152-layer ResNet, although 8 times deeper than VGGnet, is still less complex and trains faster. A 34-layer ResNet performs 3.6 billion multiply-add operations, whereas a 19-layer VGGnet requires 19.6 billion; the ResNet thus needs less than 20% of the operations. Very deep networks are prone to overfitting and saturation in accuracy, but residual learning and identity mappings (shortcut connections) in ResNets have been shown to overcome these problems. This enables ResNets to achieve outstanding results in image detection, localization and segmentation tasks. In this paper, we explore the discriminative power of the image representations extracted from pre-trained ResNets. We name these off-the-shelf ResNet features ResFeats. Fig. 1 depicts the evolution of traditional classification pipelines.
The main contributions of this paper are listed below:
We introduce ResFeats, which are image features extracted from pre-trained ResNets and test them on diverse image classification tasks including objects, scenes and corals.
We analyse the performance of ResFeats extracted from the outputs of different convolutional layers of ResNet-50  for image classification. We also compare the performance of ResFeats extracted from ResNet-50 with those extracted from a deeper 152-layer ResNet.
We propose a compact 2048-dimensional generic feature vector obtained after dimensionality reduction which is half of the size of the traditional CNN based feature vector (4096 dimensions).
We show that ResFeats achieve a superior classification accuracy compared to off-the-shelf CNN features. We also provide experimental evidence that our proposed method achieves state-of-the-art performance on three out of the four popular and challenging image classification datasets.
The rest of the paper is organized as follows: we briefly discuss related work in the next section. In Sec. 3.1, we review deep residual networks; in Sec. 3.2, we introduce ResFeats and explain their extraction from ResNets; and in Sec. 3.3, we describe the dimensionality reduction and classification approaches. Sec. 4 reports the experimental results and Sec. 5 concludes the paper.
2 Related Work
Recent success stories [18, 25, 7, 9] have established deep CNNs as the first choice for solving challenging computer vision tasks. However, training a network from scratch requires a large amount of training data, time and GPUs. Donahue et al. and Zeiler and Fergus provided evidence that the generic image representations learned by pre-trained CNNs outperform previous state-of-the-art hand-crafted features, although they did not experiment on a large number of computer vision datasets. Razavian et al. built on the concept of generic CNN features and showed that off-the-shelf CNN features outperform existing methods. They experimented with more than 10 datasets on tasks such as image classification, object detection, fine-grained recognition, attribute detection and visual instance retrieval, using OverFeat as the source CNN.
Chatfield et al. evaluated the performance of CNN-based methods for image classification and compared them with previous feature encoding methods. Their findings established that deeper CNNs performed better than shallower models of the same network trained on augmented data. VGGnet was used as the source CNN in their work, and they improved the classification accuracies on popular datasets such as VOC, Caltech-101 and Caltech-256. He et al. used spatial pyramid pooling of CNN features to further improve the classification accuracy on the Caltech datasets and reported state-of-the-art object classification results.
Scene classification is quite different from object classification due to the presence of multiple objects in a single scene. These object instances can vary in size and pose, and can be located at different positions in a number of possible layouts in the test image. Consequently, the state-of-the-art performance on scene datasets such as MIT-67 (81%) is comparatively lower than the performance on object classification datasets (93.4% for Caltech-101). For indoor scene classification, a bag-of-features approach was proposed that performs VLAD pooling of CNN features. Another example is the spatial layout and scale invariant convolutional activations introduced by Hayat et al. to increase the robustness of CNN features. Cimpoi et al. proposed Fisher Vector (FV) pooling of a deep CNN filter bank (FV-CNN) for texture and material classification, achieving an accuracy of 81% on the MIT-67 dataset (an improvement of 10% over the previous state-of-the-art).
Coral classification is a target task that is very different from the source dataset on which deep networks are pre-trained (ImageNet in this case). Despite this dissimilarity, off-the-shelf CNN features have improved upon existing methods of coral classification [20, 17, 21], demonstrating their strength for transfer learning. The baseline performance on the MLC dataset was first reported by Beijbom et al. Mahmood et al. proposed a hybrid (hand-crafted + CNN) feature vector to improve the classification accuracy on this dataset. Khan et al. used feature vectors extracted from VGGnet alongside cost-sensitive learning to address the class imbalance problem of the MLC dataset.
3 Proposed Method
The following subsections describe the steps involved in our proposed method; a block diagram is shown in Fig. 2.
3.1 Deep Residual Networks
Deep residual networks are made up of residual units. Each residual unit can be expressed as:

$$y_l = h(x_l) + \mathcal{F}(x_l, W_l), \qquad x_{l+1} = f(y_l)$$

where $\mathcal{F}$ is a residual function, $f$ is a ReLU function, $W_l$ is the weight matrix, and $x_l$ and $y_l$ are the input and output of the $l$-th layer. The function $h$ is an identity mapping given by:

$$h(x_l) = x_l$$

The residual function $\mathcal{F}$ is defined as:

$$\mathcal{F}(x) = W_2 \cdot \sigma(B(W_1 \cdot x))$$

where $B$ is batch normalization, "$\cdot$" denotes convolution and $\sigma(x) = \max(0, x)$. The essential idea behind residual learning is the branching of the paths for gradient propagation. For CNNs, this idea was first introduced in the form of parallel paths in the inception models of Szegedy et al. Residual networks share a few similarities with highway networks, such as residual blocks and shortcut connections. However, the output of each path in a highway network is controlled by a gating function that is learned during training.
The residual units in ResNets are not simply stacked together as convolutional layers are in a conventional CNN. Instead, shortcut connections are introduced from the input of each convolutional layer to its output. Using identity mappings as shortcut connections decreases the complexity of residual networks, resulting in deep networks that are faster to train. ResNets can be seen as an ensemble of many paths rather than as a single very deep architecture. These network paths are not all of the same length; only one path goes through all of the residual units. Moreover, not all of these signal paths propagate the gradient, which accounts for the faster optimization and training of ResNets. ResNets as deep as 1001 layers have been proposed to achieve superior performance on the CIFAR datasets. In this paper, however, we use only ResNet-50 and ResNet-152, whose architectures are described in detail by He et al.
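The identity shortcut described above can be sketched in a few lines of NumPy. Note that this is an illustrative toy, not the actual ResNet unit: fully-connected weight matrices stand in for the convolution, batch-normalization and bottleneck structure of a real residual block, and all names are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_unit(x, w1, w2):
    """Simplified residual unit: x_{l+1} = relu(x + F(x)).

    F(x) = w2 @ relu(w1 @ x) stands in for the conv/BN stack of a
    real ResNet block; the identity shortcut h(x) = x adds the input
    back unchanged, so gradients have a direct path through the unit.
    """
    f = w2 @ relu(w1 @ x)      # residual branch F(x, W)
    y = x + f                  # identity shortcut: h(x) = x
    return relu(y)             # x_{l+1} = f(y), with f = ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
out = residual_unit(x, w1, w2)
```

With both weight matrices zeroed the residual branch vanishes and the unit reduces to `relu(x)`, which is exactly why very deep stacks of such units are easy to optimize: a unit can always fall back to (near-)identity.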
3.2 ResFeats
This section introduces ResFeats and elaborates on the process of extracting them from deep residual networks. Generally, the image representations extracted from the deeper layers of a CNN capture higher-level features and increase classification performance. A typical residual unit in a ResNet consists of a block of three convolutional layers. ResFeats are the outputs of residual units, unlike conventional CNN features, which are usually the activations of the fully connected layers. The activations of the fully connected layers capture the overall shape of the object contained in the region of interest, but the local spatial information is lost when the outputs of the convolutional layer are max-pooled to obtain a 4096-dimensional vector for the activation of the FC layer. The output of a convolutional layer, in contrast, is rich in spatial information.
ResFeats can be viewed as the output of a deep filter bank. This output is an array of the form $W \times H \times C$, where $W$ and $H$ are the width and height of the resulting feature map and $C$ is the number of channels in the convolutional layer. ResFeats can thus be considered as $W \times H$ arrays of $C$-dimensional local features. The local spatial information of this feature vector would be lost if it were propagated to the fully connected layer; therefore, we do not use the activations of the FC layer of ResNet as a feature vector.
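The two views of a ResFeat (a grid of local descriptors vs. one long vector) can be illustrated as follows, assuming a Res5c-sized output of 7 x 7 x 2048 (the shape is an assumption for illustration; the array here is random, not a real network activation):

```python
import numpy as np

# Hypothetical Res5c-like output for one image: W x H x C = 7 x 7 x 2048.
rng = np.random.default_rng(0)
res5c = rng.standard_normal((7, 7, 2048))

# View 1: a W*H array of C-dimensional local features
# (49 local descriptors, each of length 2048, one per spatial position).
local_feats = res5c.reshape(-1, 2048)

# View 2: a single flattened feature vector of more than 100k elements.
resfeat = res5c.ravel()
```

The flattened view is what motivates the dimensionality-reduction step discussed later: 7 * 7 * 2048 = 100,352 elements per image is far larger than a traditional 4096-dimensional CNN feature.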
Fig. 3 shows the architecture of the ResNet-50 deep network which we use for feature extraction. We initialize the network with weights pre-trained on ImageNet. The learned weights of the deeper layers are usually more class specific, e.g. the fully connected layer of ResNet-50 (of which there is only one). We are interested in the classification performance of the output vectors of the preceding convolutional layers: if used appropriately, the convolutional layers of a deep network form very powerful features. We therefore extract the outputs of the last residual unit of convolutional layers 3, 4 and 5 and use them as feature vectors, denoted Res3d, Res4f and Res5c respectively (the letters d, f and c indicate that these layers contain 4, 6 and 3 residual blocks, the last of which is taken). Features extracted from the 3rd layer have a lower dimension than those extracted from the 5th layer, and we expect an increase in performance as we use deeper features. We also extract these intermediate features from a deeper version of ResNet, ResNet-152, which has shown a lower error on the ImageNet classification challenge than ResNet-50. Res5c features extracted from the 152-layer ResNet tend to perform better than their ResNet-50 counterparts. The classification results of these features are reported in Sec. 4.
3.3 Dimensionality Reduction and Classification
The outputs of the convolutional layers are much larger than the traditional 4096-dimensional CNN-based features; for example, the Res5c feature vector is $7 \times 7 \times 2048$ (more than 100k elements). To reduce the computational cost of manipulating such large feature vectors, we propose two methods for dimensionality reduction. The first method implements a shallow CNN with one convolutional layer, one max-pooling layer and two fully-connected (FC) layers; we refer to this network as sCNN in the rest of the paper. Its convolutional layer consists of small ($1 \times 1$) filters along 512 channels, which reduces Res5c to $7 \times 7 \times 512$, the same size as the output of the last convolutional layer of VGGnet. The stride is set to 1 and the padding to zero for this convolutional layer. It is followed by a max-pooling layer, two FC layers and a soft-max layer for classification. The resulting shallow CNN is very similar to the FC portion of VGGnet (configuration D). The sCNN is initialized with random weights and is then trained for each dataset specifically. Fig. 4 (a) shows the architecture of sCNN along with the layer dimensions used for Res5c.
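The channel-reduction step performed by the sCNN's first layer can be sketched as a 1 x 1 convolution in NumPy (a sketch under the assumption of 1 x 1 filters and a 7 x 7 x 2048 Res5c input; the weights here are random, whereas in the paper the sCNN is trained per dataset):

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: an independent linear map across channels
    at every spatial position.

    x: (W, H, C_in) feature map; w: (C_in, C_out) filter bank.
    A 1x1 kernel with stride 1 and no padding preserves W and H,
    so only the channel dimension changes.
    """
    return np.einsum('whc,cd->whd', x, w)

rng = np.random.default_rng(0)
res5c = rng.standard_normal((7, 7, 2048))        # Res5c-sized input
w = rng.standard_normal((2048, 512)) * 0.01      # 512 one-by-one filters
reduced = conv1x1(res5c, w)                      # -> (7, 7, 512)
```

The reduced map has the same 7 x 7 x 512 shape as the last convolutional output of VGGnet, which is why the remaining FC layers of the sCNN can mirror VGGnet's FC portion.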
The second proposed method for dimensionality reduction uses Principal Component Analysis (PCA) to reduce the Res5c feature vector to an $N$-dimensional vector, where $N$ is the number of channels in the convolutional layer from which the ResFeats are extracted. A validation set from each dataset is used to calculate the optimal $N$; the maximum validation accuracy is achieved when $N$ equals the number of channels in the corresponding ResFeat. For example, Res5c ($7 \times 7 \times 2048$) is reduced to a 2048-dimensional vector by PCA. The resulting feature vectors are then classified using a linear support vector machine (SVM). We were motivated to use the PCA-SVM classification pipeline by its popularity for classifying off-the-shelf CNN features [1, 6, 23]. Fig. 4 (b) shows the PCA-SVM pipeline for Res5c. A comparison of the performance of the two methods is given in Sec. 4; our results show that the dimensionality of ResFeats can be reduced significantly without a considerable performance drop.
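The PCA-SVM pipeline can be sketched with scikit-learn. The dimensions are scaled down for the example (the real pipeline maps 100,352-dimensional Res5c vectors to 2048 dimensions) and all data is synthetic, so this shows the structure of the pipeline, not the paper's actual experiment:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Synthetic stand-ins for flattened ResFeats and class labels.
rng = np.random.default_rng(0)
n_train, feat_dim, n_reduced = 60, 512, 32   # paper: 100352 -> 2048
X = rng.standard_normal((n_train, feat_dim))
y = rng.integers(0, 3, size=n_train)         # 3 hypothetical classes

# Step 1: PCA projects each feature vector down to n_reduced dimensions.
pca = PCA(n_components=n_reduced).fit(X)
X_red = pca.transform(X)

# Step 2: a linear SVM is trained on the reduced vectors.
clf = LinearSVC(C=1.0).fit(X_red, y)

# At test time, features pass through the same PCA before the SVM.
pred = clf.predict(pca.transform(X[:5]))
```

Fitting PCA on the training features and reusing the same projection at test time mirrors Fig. 4 (b): reduction and classification are a single fixed pipeline once trained.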
4 Experiments and Results
4.1 Datasets
Object Classification: Caltech-101 contains 9,144 images divided into 102 categories (101 object classes plus a background class). The number of images per category varies between 31 and 800. In our experiments, we used 30 images from each class for training and the remaining images for testing. Caltech-101 is a very popular dataset for object classification.
Object Classification: Caltech-256 contains 30,607 images divided into 257 classes (256 object classes plus one background class). Each category has at least 80 images. This dataset is less popular but more challenging than Caltech-101. Following the standard protocol, we used 30 and 60 images from each class for training and the rest for testing.
Scene Classification: MIT-67 is a very challenging and popular dataset for indoor scene classification. It consists of 15,620 images belonging to 67 classes, with between 101 and 738 images per class. We followed the standard protocol, which uses a subset of 6,700 images (100 per class): 80 images per class for training and the remaining 20 per class for testing. We also tested on an augmented version of this dataset, produced by adding cropped and rotated samples, which we refer to as 'MIT-67aug' in our results.
Coral Classification: Moorea Labelled Corals (MLC) contains 2,055 images collected over three years: 2008, 2009 and 2010. It provides random point annotations (x, y, label) for the nine most abundant labels: four non-coral and five coral classes. We used 87,428 point-centred patches from the year 2008 for training and the remaining 43,832 patches from the same year for testing. This is a challenging dataset, since each class exhibits large variability in shape, color and scale.
4.2 Experimental Settings
We use two deep ResNets to learn our proposed image representations. The network architecture of the first, ResNet-50, is shown in Fig. 3. The architecture of the much deeper ResNet-152 is similar and is described in detail by He et al. We use the pre-trained models of these two networks, which are publicly available. We implemented our proposed method and the sCNN classifier network in MatConvNet, and used LibSVM to train the support vector machines used for classification. k-fold cross-validation was used to find the best SVM parameters. Note that PCA-SVM was only tested for the highest-performing ResFeats, i.e. ResFeats-152.
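The cross-validated SVM parameter search can be sketched with scikit-learn's grid search. Note that the fold count and the candidate values of C below are purely illustrative assumptions (the paper's exact search settings are not reproduced here), and the data is synthetic:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Synthetic stand-in for PCA-reduced ResFeats and labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 32))
y = rng.integers(0, 3, size=60)

# k-fold cross-validation over a hypothetical grid of C values;
# each candidate C is scored on held-out folds and the best is kept.
grid = GridSearchCV(LinearSVC(), {'C': [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)
best_C = grid.best_params_['C']
```

The selected `best_C` would then be used to train the final SVM on the full training set.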
The classification accuracies reported in Sec. 4.3 and 4.4 were achieved by using the sCNN for dimensionality reduction and classification. A performance comparison between sCNN and PCA-SVM module is given in Sec. 4.5 for ResFeats extracted from ResNet-152.
4.3 Performance Analysis: ResFeats
In Table 1, we present the classification accuracies of ResFeats extracted from the output of the 3rd, 4th and 5th convolutional layers on our test datasets. ResFeats from the 5th convolutional layer (Res5c) outperform others for all datasets except the MLC. The difference in the classification accuracy of the ResFeats extracted from different layers tends to follow a pattern that can be associated with the number of classes in the dataset. When the number of classes increases, the difference in the accuracies of Res5c, Res4f and Res3d also increases. For Caltech-256 (257 classes), the difference in the accuracy of Res5c and Res3d ranges between 30-35%. This difference is negligible for MLC dataset which only has nine classes. We conclude that high level features (i.e. Res5c) show the best performance on all datasets except MLC. The same pattern was observed for the corresponding features extracted from ResNet-152.
Table 1: Classification accuracy (%) of ResFeats extracted from different convolutional layers.
|Dataset||# Classes||Res5c||Res4f||Res3d|
|Caltech 101 (30)||102||91.8||89.4||77.2|
|Caltech 256 (30)||257||75.4||45.2||46.0|
|Caltech 256 (60)||257||79.3||53.4||44.1|
4.4 Performance Analysis: CNN features vs ResFeats
Table 2 compares the performance of ResFeats with their CNN counterparts on each dataset, using overall classification accuracy as the evaluation measure. To keep the comparison fair, standard train-test splits are used for all datasets and we only consider methods that use CNN features without any post-processing. We compare the CNN features with ResFeats extracted from a 50-layer ResNet and from a deeper 152-layer ResNet. ResFeats-50 consistently outperform the CNN features by a margin of at least 4%, and Table 2 also shows that ResFeats-152 further improves the classification accuracy by 1-2%. We conclude that ResFeats perform significantly better than the corresponding CNN-based features, and that ResFeats extracted from a deeper ResNet perform better than those extracted from shallower ResNets.
Table 2: Classification accuracy (%) of CNN features vs. ResFeats.
|Dataset||CNN features||ResFeats-50||ResFeats-152|
|Caltech 101 (30)||86.5||91.8||92.6|
|Caltech 256 (30)||70.6||75.4||78.0|
|Caltech 256 (60)||74.2||79.3||81.9|
4.5 Image Classification Results
The experiments above compare our ResNet based feature representation with off-the-shelf CNN features. In this section, we compare the performance of ResFeats with other state-of-the-art methods for each dataset.
Caltech-101: We randomly select 30 images per class for training and compare our results with existing methods in Table 3. ResFeats with a PCA-SVM classifier beat the current state-of-the-art (He et al.) by 1.3%. It is worth mentioning that He et al. used a spatial pyramid pooling layer in their network to achieve 93.4% accuracy; we achieve state-of-the-art accuracy without adding any post-processing modules to ResFeats, which demonstrates the superior classification power of ResFeats.
Table 3: Classification accuracy (%) on Caltech-101 (30 training images per class).
|Method||Accuracy|
|Bo et al.||–|
|Zeiler & Fergus||86.5|
|Chatfield et al.||88.4|
|He et al.||93.4|
|ResFeats-50 + sCNN||91.8|
|ResFeats-152 + sCNN||92.6|
|ResFeats-152 + PCA-SVM||94.7|
Caltech-256: We randomly select 30 and 60 images per class for training and report the classification accuracies in Table 4. Our method (with both classification modules) outperforms the current state-of-the-art in both experiments. Table 4 reports an absolute gain of 8.9% and 4.5% over previous state-of-the-art methods on Caltech-256 with 30 and 60 training samples per class, respectively.
MIT-67: We report results on the standard split (80 train, 20 test per class) of MIT-67 and on its augmented version (MIT-67aug) in Table 5. We use 16 augmentations of each image: five crops, two rotations, and mirrored versions of these. The data augmentation used in our experiments is consistent with previous work. Table 5 shows that ResFeats perform better than all previous methods on the non-augmented dataset except that of Cimpoi et al., the best performing method on MIT-67, which extracts deep filter banks from VGGnet at multiple scales followed by Fisher Vector (FV) encoding. However, applying FV encoding to ResFeats is computationally expensive because of their large size (Res5c has more than 100k elements). Moreover, that method extracts features from the last convolutional layer of VGGnet using multiple sizes of each training image, whereas we use only a fixed input size to extract ResFeats. For MIT-67aug, our method beats the previous best performance by a margin of 8.1%.
MLC: We use the same experimental protocol for the MLC dataset as Beijbom et al. Table 6 shows the classification accuracies achieved by previous methods on this dataset. Our proposed method achieves an accuracy gain of 6.8% over the baseline performance of Beijbom et al. Off-the-shelf ResFeats outperform the cost-sensitive CNN of Khan et al. and the multi-scale hybrid feature (CNN + hand-crafted) approach of Mahmood et al.
Table 4: Classification accuracy (%) on Caltech-256.
|Method||Cal-256 (30)||Cal-256 (60)|
|Sohn et al.||–||–|
|Bo et al.||48.0||55.2|
|Zeiler & Fergus||70.6||74.2|
|Chatfield et al.||–||77.6|
|ResFeats-50 + sCNN||75.4||79.3|
|ResFeats-152 + sCNN||78.0||81.9|
|ResFeats-152 + PCA-SVM||79.5||82.1|
Table 5: Classification accuracy (%) on MIT-67.
|Method||MIT-67||MIT-67aug|
|Razavian et al.||–||–|
|Gong et al.||68.9||–|
|Khan et al.||70.9||–|
|Zhou et al.||70.8||–|
|Azizpour et al.||71.3||–|
|Liu et al.||71.5||–|
|Hayat et al.||74.4||–|
|Cimpoi et al.||81.0||–|
|ResFeats-50 + sCNN||71.1||73.0|
|ResFeats-152 + sCNN||73.7||74.9|
|ResFeats-152 + PCA-SVM||75.6||77.1|
Table 6: Classification accuracy (%) on the MLC dataset.
|Method||Accuracy|
|Beijbom et al.||–|
|Khan et al.||75.2|
|Mahmood et al.||77.9|
|ResFeats-50 + sCNN||78.8|
|ResFeats-152 + sCNN||80.0|
|ResFeats-152 + PCA-SVM||80.8|
Fig. 5 compares the classification accuracy of off-the-shelf CNN representations, ResFeats and current state-of-the-art methods on all the datasets used in our experiments. ResFeats consistently outperformed the CNN features by a large margin. Note that for CNN features, only results that do not use any additional post-processing module are reported in Fig. 5. ResFeats with PCA-SVM achieved state-of-the-art classification performance on all datasets except MIT-67.
5 Conclusion
In this paper, we used features extracted off-the-shelf from deep ResNets to address three image classification tasks: object, scene and coral classification. We investigated the transferability of ResFeats and showed that ResFeats extracted from the deeper layers of a ResNet perform better than shallower ResFeats. We experimentally confirmed that our proposed features are powerful, achieving higher classification accuracy than off-the-shelf CNN features, and we improved the state-of-the-art accuracy on the Caltech-101, Caltech-256 and MLC datasets. The prospective applications of ResFeats to computer vision tasks such as object localization, image segmentation, instance retrieval and attribute detection are worth further investigation.
Acknowledgments
We gratefully acknowledge Nvidia for providing a Titan-X GPU for the experiments involved in this research.
References
-  H. Azizpour, A. Sharif Razavian, J. Sullivan, A. Maki, and S. Carlsson. From generic to specific deep representations for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 36–45, 2015.
-  O. Beijbom, P. J. Edmunds, D. Kline, B. G. Mitchell, D. Kriegman, et al. Automated annotation of coral reef survey images. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1170–1177. IEEE, 2012.
-  L. Bo, X. Ren, and D. Fox. Multipath sparse coding using hierarchical matching pursuit. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 660–667, 2013.
-  C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
-  K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531, 2014.
-  M. Cimpoi, S. Maji, and A. Vedaldi. Deep filter banks for texture recognition and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3828–3836, 2015.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655, 2014.
-  L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580–587. IEEE, 2014.
-  Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Computer Vision–ECCV 2014, pages 392–407. Springer, 2014.
-  G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007.
-  M. Hayat, S. H. Khan, M. Bennamoun, and S. An. A spatial layout and scale invariant feature representation for indoor scene classification. IEEE Transactions on Image Processing, 25(10):4829–4841, Oct 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In Computer Vision–ECCV 2014, pages 346–361. Springer, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
-  H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3304–3311. IEEE, 2010.
-  S. H. Khan, M. Bennamoun, F. Sohel, and R. Togneri. Cost sensitive learning of deep feature representations from imbalanced data. arXiv preprint arXiv:1508.03422, 2015.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  L. Liu, C. Shen, and A. van den Hengel. The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4749–4757, 2015.
-  A. Mahmood, M. Bennamoun, S. An, F. Sohel, F. Boussaid, R. Hovey, G. Kendrick, and R. Fisher. Coral classification with hybrid feature representations. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 519–523. IEEE, 2016.
-  A. Mahmood, M. Bennamoun, S. An, F. Sohel, F. Boussaid, R. Hovey, G. Kendrick, and R. Fisher. Coral classification with hybrid feature representations. In OCEANS. IEEE, 2016.
-  A. Quattoni and A. Torralba. Recognizing indoor scenes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 413–420. IEEE, 2009.
-  A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 512–519. IEEE, 2014.
-  P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  K. Sohn, D. Y. Jung, H. Lee, and A. O. Hero. Efficient learning of sparse, distributed, convolutional feature representations for object recognition. In 2011 International Conference on Computer Vision, pages 2643–2650. IEEE, 2011.
-  R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. ICML Workshop, 2015.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
-  A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In Proceeding of the ACM Int. Conf. on Multimedia, 2015.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
-  B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in neural information processing systems, pages 487–495, 2014.