Factors of Transferability for a Generic ConvNet Representation
Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward activations of the trained network, at a certain layer, are used as a generic representation of an input image for a task with a relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. They include parameters of the training of the source ConvNet, such as its architecture and the distribution of the training data, as well as parameters of feature extraction, such as the layer of the trained ConvNet from which the features are taken and dimensionality reduction. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered by their distance from the source task, such that a correlation is observed between the tasks' distance from the source task and both their performance and the optimal settings of the proposed factors.
Convolutional networks (ConvNets) trace back to the early works on digit and character recognition [11, 23]. Prior to 2012, though, in the computer vision field neural networks were more renowned for their propensity to overfit than for solving difficult visual recognition problems. And within the computer vision community it would have been considered unreasonable, given the overfitting problem, to think that they could be used to train image representations for transfer learning.
However, these perceptions have had to be radically altered by the experimental findings of the last three years. First, deep networks [22, 13], trained using large labelled datasets such as ImageNet, produce by a huge margin the best results on the most challenging image classification and detection datasets. Second, these deep ConvNets learn powerful generic image representations [36, 8, 28, 48] which can be used off-the-shelf to solve many visual recognition problems. In fact, the performance of these representations is so good that at this juncture in computer vision, a deep ConvNet image representation combined with a simple classifier [36, 13] should be the first alternative to try for solving a visual recognition task.
An elaborate classification model built on a generic ConvNet representation has sometimes been shown to improve over a simple classifier [41, 49] and at other times not significantly [19, 14]. In any case, the field has observed that a better ConvNet representation (e.g. VGGNet or GoogleNet instead of AlexNet) usually gives a larger boost in final performance than a more elaborately designed classification model [21, 13, 26].
Following these observations, a relevant question is: How can I then maximize the
performance of the ConvNet representation for my particular
target task? The question becomes especially pertinent if you
only have a limited amount of labelled training data, time and
computational resources because training a specialized deep ConvNet
from scratch is not an option. The question rephrased in more
technical terminology is: how should a deep ConvNet representation be
learned and adjusted to allow better transfer learning from a source
task producing a generic representation to a specific target task? In
this paper we identify the relevant factors and demonstrate, from
experimental evidence, how they should be set given the
categorization of the target task.
The first set of factors that affect the transferability of a ConvNet representation are those defining the architecture and training of the initial deep ConvNet. These include the source task (encoded in the labelled training data), the network width and depth, the distribution of the training data, and the optimization parameters. The next set, applied after learning the “raw” representation, are what we term post-learning parameters. These include whether you fine-tune the network using labelled data from the target task, the network layer from which the representation is extracted, and whether the representation should be post-processed by spatial pooling and dimensionality reduction.
Figure 2 gives a graphical overview of how we transfer a ConvNet representation trained for a source task to a target task, the factors we consider that affect its transferability, and the stage in the process at which each factor is applied. Figure 1 shows how big a difference an optimal configuration of these factors can make for 17 different target tasks.
How should you set these factors? Excitingly, we observe that there is often a pattern: their optimal settings are correlated with the distance of the target task from the source task. When occasionally there is an exception to the general pattern, there is a plausible explanation. Table I lists some of our findings, driven by our quantitative results; it shows the best settings for the factors we consider and illustrates the correlations we mention.
To summarize, deep ConvNet representations are very amenable to transfer learning. The concrete evidence we present for this assessment is that on 16 out of 17 diverse standard computer vision databases, the approach just described, based on a deep ConvNet representation trained with ImageNet and optimal settings of the transferability factors, outperforms all published non-ConvNet based methods; see Table VIII.
Outline of the paper
We show these settings follow an interesting pattern which is correlated with the distance between the source and target task (Figures 3-7 in Section III).
By optimizing the transferability factors we significantly improve (up to 50% error reduction) the state-of-the-art on 16 popular visual recognition datasets (Table VIII), using a linear SVM for classification tasks and Euclidean distance for instance retrieval.
| Image Classification | Attribute Detection | Fine-grained Recognition | Compositional | Instance Retrieval |
|---|---|---|---|---|
| PASCAL VOC Object | H3D human attributes | Cat&Dog breeds | VOC Human Action | Holiday scenes |
| MIT 67 Indoor Scenes | Object attributes | Bird subordinate | Stanford 40 Actions | Paris buildings |
| SUN 397 Scene | SUN scene attributes | 102 Flowers | Visual Phrases | Sculptures |
The concept of learning from related tasks using neural networks and
ConvNets has appeared earlier in the literature;
see [32, 3, 15, 24] for a few
examples. We describe two very recent papers which are the most relevant to our
findings in this paper.
In  the authors investigate issues related to the training of ConvNets for the tasks of image
classification (SUN image classification dataset) and object detection
(PASCAL VOC 2007 & 2012). The results of two of their investigations are especially relevant to us. The first is that they show that fine-tuning a network, pre-trained with the ImageNet dataset, towards a target task (image classification or object detection) has a positive effect
and this effect increases when more data is used for fine-tuning. They
also show that when training a network with ImageNet one should not
perform early stopping even if one intends to transfer the resulting
representation to a new task. These findings are consistent with a
subset of ours though our conclusions are supported by a larger and
wider set of experiments including more factors.
Yosinski et al. show that the transferability of a network trained on one source task to another task is correlated with the distance between the source and target tasks. Yosinski et al.'s source and target tasks are defined as the classification of different subsets of the object categories in ImageNet. Their definition of transferability comes from their training set-up. First a ConvNet is trained to solve the source task. Then the weights from the first layers of this source network are transferred to a new ConvNet that will be trained to solve the target task. The rest of the target ConvNet's weights are initialized randomly, and only these random weights are updated via fine-tuning while the transferred weights are kept fixed. They show that as more layers are transferred and kept fixed, the final target ConvNet, learned in this fashion, performs worse, and the drop in performance is bigger for the target tasks most distant from the source task. This result corresponds to our finding that the performance of the layer used for the ConvNet representation is correlated with the distance between the source and target task. Yosinski et al. also re-confirm that there are performance gains to be made by fine-tuning a pre-trained network towards a target task. However, our results are drawn from a wide range of target tasks commonly used in computer vision. Furthermore, we have investigated more factors in addition to the representation layer, as listed in Table I.
II Range of target tasks examined
To evaluate the transferability of the ConvNet representation we use a wide range of 17 visual recognition tasks. The tasks are chosen from 5 different subfields of visual recognition: object/scene image classification, visual attribute detection, fine-grained classification, compositional semantic recognition, instance retrieval (see Table II). There are multiple ways one could order these target tasks based on their similarity to the source task of object image classification as defined by ILSVRC12. Table II gives our ordering and we now give the rationale for the ranking.
The group of tasks we consider furthest from the source task is instance retrieval. Each task in this set has no explicit category information and is solved by explicit matching to exemplars, while all the other groups of tasks involve classification problems and require an explicit learning phase.
We place attribute detection earlier than fine-grained recognition
because these visual attributes
Next comes perhaps the most interesting and challenging set of category tasks – the compositional recognition tasks. These tasks include classes where the type of interaction between objects is the key indicator, and they thus require more sophistication to recognize than the other category recognition tasks.
There are other elements which determine the closeness of a target task to the source task. One is the distribution of the semantic classes and images used within each category. For example, the Pet dataset is the closest of the fine-grained tasks because the ILSVRC classes include many different dog breeds, while sometimes the task just boils down to the co-occurrence of multiple ILSVRC classes, as for the MIT indoor scenes. However, compositional recognition tasks usually encode higher-level semantic concepts to be inferred from the object interactions; for instance, a person holding a violin is not considered a positive sample for playing the violin in , nor is a person standing beside a horse considered as the action “riding horse”.
Now, we analyze the effect of each individual factor on the transferability of the learnt representation. We divide the factors into those which should be considered before learning a representation (learning factors) and those which should be considered when using an off-the-shelf network model (post-learning factors).
III-A Learning Factors
The ConvNet AlexNet, the first very large
network successfully applied to the ImageNet challenge, has around 60
million parameters consisting of 5 million parameters in the
convolution layers and 55 million parameters in its fully
connected layers. Although this appears to be an unfeasibly large
parameter space, the network was successfully trained using the
ImageNet dataset of 1.2 million images labelled with 1000 semantic
classes. More recently, networks larger than AlexNet have
been trained, in particular OverFeat. Which of these
networks produces the best generic image representation and how
important is its size to its performance?
Here we examine the impact of the network’s size (keeping its depth fixed) on different tasks including the original ImageNet image-level object classification. We trained 3 networks of different sizes using the ILSVRC 2012 dataset and also included the OverFeat network in our experiments as the large network. Each network has roughly twice as many parameters as we progress from the smallest to the largest network. For all the networks we kept the number of units in the 6th layer, the first fully connected layer, at 4096. It is this layer that we use for the experiments where we directly compare networks. The number of parameters is changed mainly by halving the number of kernels and the number of fully connected neurons (except the fixed layer).
Figure 3 displays the effect of changing the
network size on different visual recognition tasks/datasets. The
largest network works best for Pascal VOC object image classification,
MIT 67 indoor scene image classification, UIUC
object attribute, and Oxford pets dataset.
On the other hand, for all the retrieval tasks the performance of the
over-parametrized OverFeat network consistently suffers because it
appears the generality of its representation is less than that of the smaller networks. Another interesting observation is that, if computational efficiency at test time is critical, one can decrease the number of network parameters by factors of 2 (Small or Tiny network) for different tasks while the degradation of the final performance is sublinear in some cases.
Increasing the network width (number of parameters at each layer) is not the only way of over-parameterizing a ConvNet. In fact, [39, 37] have shown that deeper convolutional networks with more layers achieve better performance on the ILSVRC14 challenge. In a similar spirit, we over-parametrize the network by increasing the number of convolutional layers before the fully connected layer from which we extract the representation. Figure 4 shows the results as the number of convolutional layers is incrementally increased from 5 to 13 (the architectures of these networks are described in Table IV). As this number is increased, the performance on nearly all the datasets increases.
The only tasks for which the results degrade are the retrieval tasks of UKB and Holidays. Interestingly, these two tasks involve measuring the visual similarity between specific instances of classes strongly represented in ImageNet (e.g. a specific book, bottle or musical instrument in UKB, and wine bottles or Japanese food in the Holidays dataset). It is thus expected that the representation becomes more invariant to instance-level differences as we increase the complexity of the representation with more layers.
If we compare the effect of increasing network depth to network width
on the final representation’s performance, we clearly see that
increasing depth is a much more stable over-parametrization of the
network. Both increasing width and depth improve the performance on
tasks close to the source task. However, increasing the width seems to
harm the transferability of features to distant target tasks more than
increasing depth does. This could be attributed to the fact that
increasing depth is more efficient than increasing width in terms of the number of parameters needed to represent more complex patterns; the next section studies this in a separate experiment. Finally, more
layers means more sequential processing which hurts the
parallelization. We have observed the computational complexity for
learning and using deep ConvNets increases super-linearly with the
number of layers. So, learning a very wide network is computationally
cheaper than learning a very deep network. These issues mean the practitioner must decide on the trade-off between training speed and the performance of the final representation.
Width vs Depth
It is more indicative to directly compare the effect of increasing width and depth on the generality of the learned representation. For that purpose we train deep networks of various depths. In particular, we train networks of depth 16 with widths similar to the Tiny, Small and Medium networks of the previous section. Table IV lists the deep networks and their architectures. We compare the results of 10 different networks on four target tasks in Figure 5. Connecting two networks with a solid (dashed) directed edge indicates moving to a deeper (wider) network. It can be observed that increasing the parameters of the network by increasing its depth is a more efficient over-parametrization than increasing its width; that is, the slopes of the solid segments are consistently higher.
It should be noted that most of the parameters of a convolutional network usually lie in the first fully connected layer. Thus, the number of outputs of the last convolutional layer (which depends on the preceding subsampling layers) is the major factor in the network size. For example, going from network H to I and then to J only slightly increases the number of parameters but considerably increases the performance on the target tasks.
Training the deeper networks is tricky and needs to be done in stages. For networks with more than 5 convolutional layers (Medium) we increased the number of convolutional layers by 3 at a time. We then trained the network for a few epochs with a fixed learning rate and initialized the first convolutional layers of the next network with those of the shallower network. The new convolutional layers and the fully connected layers were initialized with random Gaussian noise.
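As a rough illustration of this staged scheme, here is a minimal PyTorch sketch (PyTorch stands in for the Caffe setup actually used; the function name and structure are hypothetical) that copies the trained convolutional layers of a shallower network into a deeper one, leaving the newly added layers at their random initialization.

```python
import torch.nn as nn

def init_from_shallower(deep_net: nn.Module, shallow_net: nn.Module, num_shared: int) -> None:
    """Copy the weights of the first `num_shared` convolutional layers of a trained
    shallower network into a deeper one; the remaining (new) convolutional and fully
    connected layers keep their random Gaussian initialization.
    Assumes the shared layers have identical shapes in both networks."""
    deep_convs = [m for m in deep_net.modules() if isinstance(m, nn.Conv2d)]
    shallow_convs = [m for m in shallow_net.modules() if isinstance(m, nn.Conv2d)]
    for dst, src in zip(deep_convs[:num_shared], shallow_convs[:num_shared]):
        dst.weight.data.copy_(src.weight.data)
        if src.bias is not None and dst.bias is not None:
            dst.bias.data.copy_(src.bias.data)
```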
Early stopping is used as a way of controlling the generalization of a
model. It is enforced by stopping the learning before it has converged
to a local minimum, as measured by monitoring the validation loss. This
approach has also been used to improve the generalization of
over-parametrized networks. It is plausible to expect
that the transferability increases with generalization. Therefore, we
investigate the effect of early stopping on the transferability of
learnt representation. Figure 6 shows the
evolution of the performance for various target tasks at different
training iterations. The performance of all tasks saturates at 200K
iterations for all the layers and even earlier for some
tasks. Surprisingly, it can be seen that early stopping does not
improve the transferability of the features whatsoever. However, in these experiments the training does not show strong symptoms of over-fitting. We have observed that if the training of the source ConvNet exhibits over-fitting (such as when fine-tuning with a landmark dataset to improve instance retrieval) early stopping can help to learn more transferable features.
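For concreteness, early stopping here simply means halting source training when the monitored validation loss stops improving; a generic sketch (with hypothetical `train_one_epoch` and `validation_loss` callables) looks like this:

```python
def train_with_early_stopping(train_one_epoch, validation_loss, max_epochs=90, patience=5):
    """Stop source-task training once the validation loss has not improved
    for `patience` consecutive epochs; returns the number of epochs run."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        current = validation_loss()
        if current < best:
            best, epochs_without_improvement = current, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch
    return max_epochs
```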
It is natural to expect that one of the most important factors for a learnt representation to be generic is the nature of the source task (disregarding the number of images in the source dataset). The recent development of another large-scale dataset, the Places dataset, labelled with scene classes, enabled us to analyze this factor. Table V shows the results for the different source tasks of ImageNet, Places, and a hybrid network. The hybrid network is made by combining the ImageNet images with those of the Places dataset; the label set is increased accordingly. It can be observed that results for the tasks very close to the source task are improved with the corresponding models (MIT and SUN for the Places network). Another interesting observation is that the ImageNet features seem to achieve a higher level of generalization for further-away tasks. One explanation is that the ImageNet label set is more diverse. Since the number of images in ImageNet is smaller, this shows the importance of the diversity of labels, as opposed to the number of annotated images, when the objective is to achieve a more transferable representation. More concrete experiments on this phenomenon are conducted in the next section.
The Hybrid model boosts the transferability of the Places network but still falls behind the ImageNet network for more distant tasks. This could be again due to the fact that the number of images from the Places dataset dominates those of the ImageNet dataset in training the Hybrid model and as a consequence it is more biased toward the Places Network.
In order to avoid this bias, in another experiment, we
combined the features obtained from the ImageNet network and the
Places network, as opposed to using the Hybrid network, and interestingly this late fusion works better than the Hybrid model (a Hybrid model whose representation dimensionality is increased to 8192 works worse). In fact, it achieves the best results on all tasks except for the subcategory recognition tasks, for which scenes are irrelevant and probably just add noise to the descriptors.
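A minimal sketch of this late fusion, assuming hypothetical per-image descriptors from the two networks: each descriptor is L2-normalized and the two are concatenated before the target-task classifier (giving, e.g., an 8192-d vector from two 4096-d FC features).

```python
import numpy as np

def late_fusion(imagenet_feat: np.ndarray, places_feat: np.ndarray) -> np.ndarray:
    """Concatenate the ImageNet-network and Places-network descriptors of one image;
    each part is L2-normalized first so neither network dominates the fused vector."""
    a = imagenet_feat / (np.linalg.norm(imagenet_feat) + 1e-12)
    b = places_feat / (np.linalg.norm(places_feat) + 1e-12)
    return np.concatenate([a, b])
```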
[Table V residue: per-source-task results (ImageNet, Places, Hybrid) across the target tasks VOC07, MIT, SUN, H3D, UIUC, Pet, CUB, Flower, Stanford Act40, Oxford, Sculptures and UKB; the numeric entries were not recovered.]
Diversity and Density of Training Data
We saw in the previous section that the distribution of training data for the source task affects the transferability of the learned representation. Annotating millions of images with the various labels used for learning a generic representation is expensive and time-consuming. Thus, choosing how many images to label and what set of labels to include is a crucial question. In addition to the source tasks in the previous section, we now examine the influence of statistical properties of the training data, namely its density and diversity. In this experiment we assume that the ImageNet classes indicate different modes of the training data distribution, so that by changing the number of images per class we control the density of the distribution, while the diversity of the training data can be increased by including additional classes.
In order to compare the effect of diversity and density of training data on the transferability of the learned representation we assume a situation where there is a fixed budget for the number of images to be annotated. Specifically, we experiment with training sets of 10%, 20% and 50% of the 1.3 million ILSVRC12 images. The dataset is then constructed either by stratified sampling from all classes (reduced density with the same diversity) or by random sampling of the classes with all of their samples (reduced diversity with the same density).
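The two sampling schemes can be sketched as follows (a simplified illustration over a list of (image, class) pairs; the exact sampling protocol used in the paper may differ):

```python
import random
from collections import defaultdict

def reduce_density(samples, fraction, seed=0):
    """Keep roughly `fraction` of the images of every class
    (same diversity, lower density)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for image, label in samples:
        by_class[label].append(image)
    kept = []
    for label, images in by_class.items():
        n = max(1, int(round(fraction * len(images))))
        kept.extend((img, label) for img in rng.sample(images, n))
    return kept

def reduce_diversity(samples, fraction, seed=0):
    """Keep all images of a random `fraction` of the classes
    (same density within the kept classes, lower diversity)."""
    rng = random.Random(seed)
    labels = sorted({label for _, label in samples})
    n = max(1, int(round(fraction * len(labels))))
    kept_labels = set(rng.sample(labels, n))
    return [(img, label) for img, label in samples if label in kept_labels]
```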
Figure 8 plots the results for various tasks when decreasing the density (Figure 7(a)) or the diversity (Figure 7(b)) of the source training set from that of ILSVRC12. It can be seen that increasing both diversity and density consistently helps all the tasks and there is still no indication of saturation at the full set of 1.3 million images. Thus there is still room for annotating more images beyond ILSVRC, which would most probably increase the performance of the learned representation on all the target tasks. Furthermore, no clear correlation can be observed between the degradation of the performances and the distance of the target task. Most importantly, decreasing the diversity of the source task seems to hurt the performance on the target tasks more significantly (the slopes in the right plot are higher than in the left plot). In fact, a point-to-point comparison of the two plots reveals that, given a fixed budget of images, increasing diversity is crucially more effective than increasing density. This could be because of the higher level of feature sharing that occurs at higher diversity, which helps the generalization of the learned representation and is thus more beneficial for transfer learning.
The network architecture used for this experiment is Medium (AlexNet), so the number of parameters remains the same for all of the experiments. However, training the network on the smaller datasets required heavier regularization, achieved by increasing the weight decay and the dropout ratio at the fully connected layers. Without heavy regularization, training a Medium network on 10% or 20% of the data exhibits signs of over-fitting in the early stages of training.
III-B Post-learning Factors
Different layers of a ConvNet potentially encode different levels of abstraction. The first convolutional layer is usually a collection of Gabor-like gray-scale and RGB filters. On the other hand, the output layer is directly activated by the semantic labels used for training. It is expected that the intermediate layers span the levels of abstraction between these two extremes. Therefore, we used the output of different layers as the representation for our tasks’ training/testing procedures. The performance of different layers of the pre-trained ConvNet (size: Medium) on ImageNet is shown in Figure 7 for multiple tasks.
We observe the same pattern as for the effect of network width. The last layer (1000-way output) is only effective for the PASCAL VOC classification task; in the VOC task the semantic labels are a subset of those in ILSVRC12, and the same is true for the Pet dataset classes. The second fully connected layer (Layer 7) is most effective for the UIUC attributes (disjoint groups of ILSVRC12 classes) and MIT indoor scenes (simple compositions of ILSVRC12 classes). The first fully connected layer (Layer 6) works best for the rest of the datasets, which have semantic labels further away from those used for optimizing the ConvNet representation. An interesting observation is that the first fully connected layer offers a good trade-off when the final task is unknown, and thus is the most generic layer within the scope of our tasks/datasets.
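A minimal sketch of extracting such layer-wise representations, using torchvision's pretrained AlexNet as a stand-in for the Medium network (the paper's exact architecture and 227x227 input differ slightly; a recent torchvision is assumed): a forward hook captures the activations of the chosen layer.

```python
import torch
from torchvision import models

def extract_layer(model: torch.nn.Module, images: torch.Tensor, layer: torch.nn.Module) -> torch.Tensor:
    """Return the activations of `layer` for a batch of preprocessed images,
    flattened to one vector per image, for use as a fixed representation."""
    captured = {}

    def hook(module, inputs, output):
        captured["features"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()
    return captured["features"].flatten(1)

# Example: FC6 (first fully connected layer, 4096-d) of a pretrained AlexNet.
alexnet = models.alexnet(weights="IMAGENET1K_V1").eval()
fc6 = alexnet.classifier[1]
batch = torch.randn(2, 3, 224, 224)                   # placeholder for preprocessed images
representation = extract_layer(alexnet, batch, fc6)   # shape (2, 4096)
```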
Although the last layer units act as probabilities for the ImageNet classes, note that results using the last layer with 1000 outputs are surprisingly effective for almost all the tasks. This shows that a high degree of image-level information persists even into the very last layer of the network. It should be mentioned that obtaining instance retrieval results on the convolutional layers is computationally prohibitive and thus they are not included. However, in a simplified scenario, the retrieval results showed a drastic decrease from layer 6 to 5.
In the last subsection, we observed that the best representation for
retrieval tasks is the first fully connected layer by a significant
margin. We further examined using the last convolutional layer in its
original form as the representation for retrieval in a simplified
scenario but achieved relatively poor results. In order to make the
convolutional layer suitable, in this experiment spatial pooling is applied to the last convolutional layer's output. We use max pooling. Spatial pooling with the coarsest grid is equivalent to a soft bag-of-words representation over the whole image, where the words are the convolutional kernels. Figure 9 shows the results of different pooling grids for all the retrieval tasks. For the retrieval tasks where the shapes are more complicated, such as sculptures and historical buildings, a higher resolution of pooling is necessary.
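A minimal sketch of this spatial max pooling, assuming a (N, C, H, W) feature map from the last convolutional layer: pooling over an n x n grid and flattening yields an n*n*C descriptor, with the 1x1 grid corresponding to the global soft bag-of-words case.

```python
import torch
import torch.nn.functional as F

def spatial_max_pool(conv_map: torch.Tensor, grid: int) -> torch.Tensor:
    """Max-pool a (N, C, H, W) convolutional feature map over a grid x grid
    partition and flatten it into a (N, grid*grid*C) descriptor."""
    pooled = F.adaptive_max_pool2d(conv_map, output_size=grid)
    return pooled.flatten(1)

# e.g. the conv5 output of an AlexNet-like network: (N, 256, 13, 13)
feature_map = torch.randn(4, 256, 13, 13)
global_desc = spatial_max_pool(feature_map, 1)  # 256-d, the soft bag-of-words case
fine_desc = spatial_max_pool(feature_map, 3)    # 2304-d, higher spatial resolution
```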
We use principal component analysis (PCA) to reduce the dimensionality
of the transferred representation for each task. We observed that
dimensionality reduction helps all the instance retrieval tasks (though most of the time insignificantly). The main difference between the retrieval tasks and the others is that in retrieval we are interested in the Euclidean distances between samples in the ConvNet representational space. In that respect, PCA can alleviate the curse of dimensionality for distance computations. One could also expect dimensionality reduction to decrease the level of noise (and avoid potential over-fitting to irrelevant features for each specific task), but our experiments show that this is not the case when using PCA to reduce the dimensions. Figure 9(a) shows the results for different tasks as we reduce the dimensionality of the ConvNet representations. The results show that the relative performance boost gained by additional dimensions is correlated with the distance of the target task from the source task. We see that saturation appears earlier for the tasks closer to ImageNet. It is striking that the effective dimensionality of the ConvNet representations (with 4096 dimensions) used in these experiments is at most 500 for all the visual recognition tasks from the different domains. Another interesting observation is that many of the tasks work reasonably well with a very low number of dimensions (5-50). Remember that these features are obtained by a linear transformation of the original ConvNet representation. This indicates the capability of ConvNets to linearly factorize the underlying generating factors of semantic visual concepts.
Frequently the goal is to maximize the performance of a recognition system for a specific task or a set of tasks. In this case intuitively specializing the ConvNet to solve the task of interest would be the most sensible path to take. Here we focus on the issue of fine-tuning the ConvNet’s representation with labelled data similar to those we expect to see at test time.
[13, 7] have shown that fine-tuning the network
on a target task helps the performance. Fine-tuning is done by
initializing a network with weights optimized for ILSVRC12. Then,
using the target task training set, the weights are updated. The
learning rate used for fine-tuning is typically set to be less than
the initial learning rate used to optimize the ConvNet for ILSVRC12. This
ensures that the features learnt from the larger dataset are not
forgotten. The step used to shrink the learning rate schedule is also
decreased to avoid over-fitting. We have conducted fine-tuning on the
tasks for which labels are mutually exclusive. The table
in Figure VI shows the results. The gains made by fine-tuning
increase as we move further away from the original image-level object
classification task. Fine-tuning on a relatively small target dataset
is a fast procedure. With careful selection of parameters it
is always at least marginally helpful.
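A minimal fine-tuning sketch in PyTorch, standing in for the Caffe setup actually used: the output layer is replaced for the target label set, the base learning rate is set below the one used for ILSVRC training, and the step schedule is shortened. All hyper-parameter values shown are illustrative, not the ones used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 37  # e.g. the Pet dataset; any mutually exclusive label set

# Start from the ILSVRC12-pretrained weights and replace the output layer.
model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, num_target_classes)

# Base learning rate well below the one used for source training, and a shorter
# step schedule, so the source features are adjusted rather than forgotten.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
criterion = nn.CrossEntropyLoss()

def finetune_one_epoch(loader):
    """`loader` yields (images, labels) batches from the target-task training set."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```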
Increasing training data
Zhu et al. suggest that increasing data is less effective
than increasing the complexity of models or richness of representation
and the former is prone to early performance saturation. Those
observations are made using HOG features to perform object
detection. Here, we want to investigate whether we are close to the saturation point with ConvNet representations.
Increasing data for target task. To measure the effect of adding more data to learn the representation we consider the challenging task of PASCAL VOC 2007 object
detection. We follow the procedure of Girshick et al. 
by fine-tuning the AlexNet network using samples from the Oxford Pet
and Caltech-UCSD birds datasets. We show that although there exists a
large number of samples for those classes in ImageNet (more than
100,000 dogs) adding around 3000 dogs from the Oxford Pet
dataset helps the detection performance significantly. The same improvement is observed for cats and birds; see the table in Figure VI. This further adds to the evidence that
specializing a ConvNet representation by fine-tuning, even when the
original task contained the same labels, is helpful.
Increasing data for source task. Furthermore, we investigate how important it is to increase the training data for the original ConvNet training. We train two networks, one using SUN397 with 130K images and the other using the Places dataset with 2.5M images. We then test the representations on the MIT Indoor Scenes dataset. The representation trained from SUN397 (62.6%) works significantly worse than that of the Places dataset (69.3%). The same trend is observed for other datasets (refer to Table VII). Since a ConvNet can model increasingly rich representations as its number of parameters grows, we believe we are still far from saturating its richness.
[Table residue: best representation layer per task, ranging from the last layer for the tasks closest to the source (image classification) through the 2nd and 3rd last layers to the 4th last layer for the most distant tasks (instance retrieval); the full table was not recovered.]
IV Optimized Results
In the previous section, we listed a set of factors which can affect the efficacy of the representation transferred from a generic ConvNet. We studied how the best values of these factors relate to the distance of the target task from the ConvNet source task. Using the know-how obtained from these studies, we now transfer the ConvNet representations using "Optimized" factors and compare them with the "Standard" ConvNet representation used in the field. The "Standard" ConvNet representation refers to a ConvNet of medium size and depth 8 (AlexNet) trained on 1.3M images of ImageNet, with the representation taken from the first fully connected layer (FC6). As can be seen in Table VIII, the remaining error of the "Standard" representation can be decreased by a factor of up to 50% by optimizing its transferability factors.
V Implementation details
The Caffe software is used to train our ConvNets. Liblinear is used to train the SVMs we use for classification tasks. Retrieval results are based on the Euclidean distance between whitened ConvNet representations. All parameters were selected using 4-fold cross-validation.
Learning choices are the same as in . In particular, the pipeline for classification tasks is as follows: we first construct the feature vector by averaging the ConvNet feature vectors of 12 jittered samples of the original image. The jitters are the crops at the 4 corners of the original image, the center crop, the whole image resized to the size needed by the network (227x227), and their mirrored versions. We then normalize the ConvNet feature vector and raise the absolute value of each feature dimension to the power of 0.5 while keeping its sign. We use linear SVMs trained with a one-versus-all approach for multi-label tasks (e.g. PASCAL VOC image classification) and linear SVMs trained with a one-versus-one approach and voting for single-label tasks (e.g. MIT Indoor Scenes).
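A minimal sketch of this post-processing and classification step, assuming a (12, D) matrix of ConvNet features for the jittered versions of one image; scikit-learn's LinearSVC (a liblinear wrapper) stands in for the Liblinear setup, and the one-versus-all case is shown.

```python
import numpy as np
from sklearn.svm import LinearSVC

def image_descriptor(jitter_features: np.ndarray) -> np.ndarray:
    """jitter_features: (12, D) ConvNet vectors of the jittered crops/mirrors of one
    image. Returns the averaged, L2-normalized, signed-square-rooted descriptor."""
    f = jitter_features.mean(axis=0)
    f = f / (np.linalg.norm(f) + 1e-12)
    return np.sign(f) * np.sqrt(np.abs(f))   # |x|^0.5 with the sign kept

def train_one_vs_all(descriptors: np.ndarray, labels: np.ndarray, C: float = 1.0) -> dict:
    """One binary linear SVM per class (LinearSVC wraps liblinear)."""
    classifiers = {}
    for cls in np.unique(labels):
        clf = LinearSVC(C=C)
        clf.fit(descriptors, (labels == cls).astype(int))
        classifiers[cls] = clf
    return classifiers
```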
The pipeline for the retrieval tasks is as follows. Following , the feature vectors are first normalized, then the dimensionality is reduced using PCA to a smaller whitened dimension, and the resulting feature is renormalized to unit length. Since the buildings (Oxford and Paris) and sculptures datasets include partial images, or the object of interest can appear in a small part of the whole image (zoomed in or out), we use a spatial search that matches windows from each pair of images. We use 1 sub-patch covering 100% of the image, 4 sub-patches each covering 4/9 of the image, 9 sub-patches each covering 4/16, and 16 sub-patches each covering 4/25 of the image (30 sub-patches in total). The minimum distance over all sub-patch pairs is taken as the distance between the two images.
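A minimal sketch of the retrieval-side processing, assuming each image is described by a matrix of sub-patch ConvNet features (30 per image for the buildings and sculptures datasets): PCA whitening with renormalization, followed by the minimum Euclidean distance over all sub-patch pairs.

```python
import numpy as np
from sklearn.decomposition import PCA

def whiten_and_normalize(features: np.ndarray, pca: PCA) -> np.ndarray:
    """L2-normalize, project with whitened PCA, and renormalize to unit length."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    f = pca.transform(f)
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)

def image_distance(patches_a: np.ndarray, patches_b: np.ndarray) -> float:
    """Spatial search: minimum Euclidean distance over all sub-patch pairs of
    two images (patches_*: (num_patches, dim) whitened descriptors)."""
    diffs = patches_a[:, None, :] - patches_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())
```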
VI Closing Discussion
ConvNet representations trained on ImageNet are becoming the standard image representation. In this paper we presented a systematic study, lacking until now, of how to effectively transfer such representations to new tasks. The most important elements of our study are: We identify and define several factors whose settings affect transferability. Our experiments investigate how relevant each of these factors is to transferability for many visual recognition tasks. We define a categorical grouping of these tasks and order them according to their distance from image classification.
Our systematic experiments have allowed us to achieve the following. First, by optimizing the identified factors we improve the state-of-the-art performance on a very diverse set of standard computer vision databases, see Table VIII. Second, we observe and present empirical evidence that the effectiveness of a factor is highly correlated with the distance of the target task from the source task of the trained ConvNet. Finally, we empirically verify that our categorical grouping and ordering of visual recognition tasks is meaningful, as the optimal settings of the factors remain constant within each group and vary in a consistent manner across our ordering. Of course, there are exceptions to the general trend. In these few cases we provide simple explanations.
We think the insights generated by our paper can be used to learn more generic features (our ultimate goal). We believe a generic visual representation must encode different levels of visual information (global, local and visual relations) and invariances. Although these levels of information and invariances are interconnected, a task can be analyzed based on which level of information it requires. This allows us to explain the distance of visual recognition tasks from that of ImageNet and then, crucially, to identify orthogonal training tasks that should be combined when training a generic representation, because when we optimize a representation for only one type of invariance and/or visual information we cannot expect it to optimally encode the others.
During ConvNet training it is the loss function, besides the semantic labels, that controls the learnt representation. For example, for image classification we want different semantic classes to occupy non-overlapping volumes of the representation space; the cross-entropy loss function promotes this behaviour. In contrast, if we want to learn a representation that measures visual similarity we must use a different loss function, as we also need the representations of images with the same label to occupy a small volume.
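To make the contrast concrete, a minimal sketch of a contrastive-style pairwise loss (not a loss used in this paper) that, unlike cross-entropy, explicitly pulls same-label representations together and pushes different-label pairs apart by a margin:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feat_a: torch.Tensor, feat_b: torch.Tensor,
                     same_label: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """feat_a, feat_b: (N, D) representations of image pairs; same_label: (N,)
    with 1.0 for pairs sharing a label and 0.0 otherwise."""
    d = F.pairwise_distance(feat_a, feat_b)
    pull = same_label * d.pow(2)                           # same label: small volume
    push = (1.0 - same_label) * F.relu(margin - d).pow(2)  # different: at least `margin` apart
    return (pull + push).mean()
```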
Therefore, in future work, we plan to investigate how to best apply multi-task learning with ConvNets to learn generic representations. We will focus on how to choose the training tasks and loss functions that will force the ConvNet representation to learn many different levels of visual information incorporating different levels of invariances.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs to this research.
- As an aside, depending on the definition of an attribute, the placement of an attribute detection task could be anywhere in the spectrum. For instance, one could define a fine-grained, local and compositional attribute which would then fall furthest from all other tasks (e.g. “wearing glasses” in H3D dataset).
- footnotetext: Note: "Deep Optimized" results in this table are not always the optimal choices of factors studied in the paper. For instance, one would expect a very deep network trained using the hybrid model to improve results on MIT and SUN, or a deep and large network to perform better on VOC image classification. Another example is that we could do fine-tuning with the optimal choices of parameters for nearly all tasks. Obviously, producing all of these results would be highly computationally expensive. We will update the next versions of the paper with further optimized choices of parameters.
- Imagenet large scale visual recognition challenge 2013 (ilsvrc2013). http://www.image-net.org/challenges/LSVRC/2013/.
- P. Agrawal, R. B. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In ECCV, 2014.
- A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, pages 41–48, 2006.
- R. Arandjelović and A. Zisserman. Smooth object retrieval using a bag of boundaries. In ICCV, 2011.
- Y. Bengio. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the Trade (2nd ed.), pages 437–478, 2012.
- L. D. Bourdev, S. Maji, and J. Malik. Describing people: A poselet-based approach to attribute classification. In ICCV, 2011.
- K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. arxiv:1405.3531 [cs.CV], 2014.
- J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
- M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
- A. Farhadi, I. Endres, D. Hoiem, and D. A. Forsyth. Describing objects by their attributes. In CVPR, 2009.
- K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
- E. Gavves, B. Fernando, C. G. M. Snoek, A. W. M. Smeulders, and T. Tuytelaars. Fine-grained categorization by alignments. In ICCV, pages 1713–1720, 2013.
- R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
- R. B. Girshick, F. N. Iandola, T. Darrell, and J. Malik. Deformable part models are convolutional neural networks. In CVPR, 2015.
- S. Gutstein, O. Fuentes, and E. Freudenthal. Knowledge transfer in deep convolutional neural nets. IJAIT, 17(3):555–567, 2008.
- H. Jégou and O. Chum. Negative evidences and co-occurences in image retrieval: The benefit of pca and whitening. In ECCV, pages 774–787, 2012.
- H. Jégou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, 2008.
- Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. 2014.
- A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and F. Li. Large-scale video classification with convolutional neural networks. In CVPR, pages 1725–1732, 2014.
- P. Koniusz, F. Yan, P.-H. Gosselin, and K. Mikolajczyk. Higher-order Occurrence Pooling on Mid- and Low-level Features: Visual Concept Detection. Technical report, 2013.
- J. Krause, H. Jin, J. Yang, and L. Fei-Fei. Fine-grained recognition without part annotations. In CVPR, 2015.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
- Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. of IEEE, 86(11):2278–2324, 1998.
- L.-J. Li, H. Su, E. P. Xing, and F.-F. Li. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In NIPS, 2010.
- D. Lin, C. Lu, R. Liao, and J. Jia. Learning important spatial pooling regions for scene classification. In CVPR, pages 3726–3733, 2014.
- J. Y. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
- M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008.
- M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, 2014.
- O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawahar. Cats and dogs. In CVPR, 2012.
- G. Patterson and J. Hays. Sun attribute database: Discovering, annotating, and recognizing scene attributes. In Proceedings of the 25th Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
- J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In CVPR, 2008.
- L. Y. Pratt. Discriminability-based transfer between neural networks. In NIPS, 1992.
- A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
- M. A. Sadeghi and A. Farhadi. Recognition using visual phrases. In CVPR, 2011.
- P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
- A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding baseline for visual recognition. In CVPR workshop of DeepVision, 2014.
- K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
- Z. Song, Q. Chen, Z. Huang, Y. Hua, and S. Yan. Contextualizing object detection and classification. In CVPR, 2011.
- C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
- G. Tolias, Y. S. Avrithis, and H. Jégou. To aggregate or not to aggregate: Selective match kernels for image search. In ICCV, pages 1401–1408, 2013.
- A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In CVPR, 2014.
- G. Tsagkatakis and A. E. Savakis. Sparse representations and distance learning for attribute based category recognition. In ECCV Workshops (1), pages 29–42, 2010.
- P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
- J. Xiao, K. A. Ehinger, J. Hays, A. Oliva, and A. Torralba. Sun database: Exploring a large collection of scene categories. In IJCV, 2014.
- J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, pages 3485–3492, 2010.
- B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. J. Guibas, and L. Fei-Fei. Action recognition by learning bases of action attributes and parts. In International Conference on Computer Vision (ICCV), Barcelona, Spain, November 2011.
- J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? arXiv:1411.1792 [cs.LG], 2014.
- M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013.
- N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based r-cnns for fine-grained category detection. In ECCV, 2014.
- N. Zhang, R. Farrell, F. Iandola, and T. Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In ICCV, 2013.
- W.-L. Zhao, H. Jégou, G. Gravier, et al. Oriented pooling for dense and non-dense rotation-invariant features. In BMVC, 2013.
- B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning Deep Features for Scene Recognition using Places Database. NIPS, 2014.
- X. Zhu, C. Vondrick, D. Ramanan, and C. Fowlkes. Do we need more training data or better models for object detection?. In BMVC, pages 1–11, 2012.