Selecting Relevant Features from a Universal Representation for Few-shot Classification
Popular approaches for few-shot classification consist of first learning a generic data representation based on a large annotated dataset, before adapting the representation to new classes given only a few labeled samples. In this work, we propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches. First, we obtain a universal representation by training a set of semantically different feature extractors. Then, given a few-shot learning task, we use our universal feature bank to automatically select the most relevant representations. We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training, which leads to state-of-the-art results on MetaDataset and improved accuracy on mini-ImageNet.
Keywords: Image Recognition, Few-shot Learning, Feature Selection
Convolutional neural networks (CNNs) have become a classical tool for modeling visual data and are commonly used in many computer vision tasks such as image classification, object detection [10, 27, 40], or semantic segmentation [10, 28, 42]. One key to the success of these approaches is the availability of massively labeled datasets such as ImageNet or COCO. Unfortunately, annotating data at this scale is expensive and not always feasible, depending on the task at hand. Improving the generalization capabilities of deep neural networks and removing the need for huge sets of annotations is thus of utmost importance.
This ambitious challenge may be addressed from different perspectives, such as large-scale unsupervised learning, self-supervised learning [8, 16], or by developing regularization techniques dedicated to deep networks [2, 54]. An alternative solution is to use data that has been previously annotated for a different task than the one considered, for which only a few annotated samples may be available. This approach is particularly useful if the additional data is related to the new task [55, 57], which is unfortunately not known beforehand. How to effectively use this additional data is then an important subject of ongoing research [11, 50, 57]. In this paper, we propose to use a universal image representation, i.e., an exhaustive set of semantically different features. Then, by automatically selecting only relevant feature subsets from the universal representation, we show how to successfully solve a large variety of target tasks.
Specifically, we are interested in few-shot classification, where a visual model is first trained from scratch, i.e. starting from randomly initialized weights, using a large annotated corpus. Then, we evaluate its ability to transfer the knowledge to new classes, for which only very few annotated samples are provided. Simply fine-tuning a convolutional neural network on a new classification task has been shown to perform poorly . This has motivated the community to develop dedicated techniques, allowing effective adaptation with few samples.
Few-shot classification methods typically operate in two stages, consisting of first pre-training a general feature extractor and then building an adaptation mechanism. A common way to proceed is based on meta-learning [11, 37, 46, 48, 49, 51], which is a principle to learn how to adapt to new learning problems. That is, a parametric adaptation module (typically, a neural network) is trained to produce a classifier for new categories, given only a few annotated samples [15, 36, 44]. To achieve this goal, the large training corpus is split into smaller few-shot classification problems [11, 37, 51], that are used to train the adaptation mechanism. Training in such episodic fashion is advocated to alleviate overfitting and improve generalization [11, 37, 51]. However, a more recent work  demonstrates that using a large training set to train the adapter is not necessary, i.e. a linear classifier with similar accuracy can be trained directly from new samples, on top of a fixed feature extractor. Finally, it has been shown that adaptation is not necessary at all ; using a non-parametric prototypical classifier  combined with a properly regularized feature extractor can achieve better accuracy than recent meta-learning baselines [9, 45]. These results suggest that on standard few-shot learning benchmarks [37, 39], the little amount of samples in a few-shot task is not enough to learn a meaningful adaptation strategy.
To address the shortcomings of existing few-shot benchmarks, the authors of  have proposed MetaDataset, which evaluates the ability to learn across different visual domains and to generalize to new data distributions at test time, given few annotated samples. While methods based solely on pre-trained feature extractors  can achieve good results only on test datasets that are similar to the training ones, the adaptation technique  performs well across test domains. The method not only predicts a new few-shot classifier but also adapts the filters of a feature extractor, given an input task. The results thus suggest that feature adaptation may be in fact useful to achieve better generalization.
In contrast to these earlier approaches, we show that feature adaptation can be replaced by a simple feature selection mechanism, leading to better results in the cross-domain setup of . More precisely, we propose to leverage a universal representation – a large set of semantically different features that captures different modalities of a training set. Rather than adapting existing features to a new few-shot task, we propose to select features from the universal representation. We call our approach SUR which stands for Selecting from Universal Representations. In contrast to standard adaptation modules [34, 41, 50] learned on the training set, selection is performed directly on new few-shot tasks using gradient descent. Approaching few-shot learning with SUR has several advantages over classical adaptation techniques. First, it is simple by nature, i.e. selecting features from a fixed set is an easier problem than learning a feature transformation, especially when few annotated images are available. Second, learning an adaptation module on the meta-training set is likely to generalize only to similar domains. In contrast, the selection step in our approach is decoupled from meta-training, thus, it works equally well for any new domain. Finally, we show that our approach achieves better results than current state-of-the-art methods on popular few-shot learning benchmarks. In summary, this work makes the following contributions:
We propose to tackle few-shot classification by selecting relevant features from a universal representation. While universal representations can be built by training several feature extractors or using a single neural network, the selection procedure is implemented with gradient descent.
The official implementation is available at github.com/dvornikita/SUR.
2 Related Work
In this section, we present previous work on few-shot classification and universal representations, which is a term first introduced in .
Typical few-shot classification problems consist of two parts called meta-training and meta-testing . During the meta-training stage, one is given a large-enough annotated dataset, which is used to train a predictive model. During meta-testing, novel categories are provided along with few annotated examples. The goal is to evaluate the ability of the predictive model to adapt and perform well on these new classes.
Typical few-shot learning algorithms [14, 15, 36] first pre-train the feature extractor by supervised learning on the meta-training set. Then, they use meta-learning [46, 49] to train an adaptation mechanism. For example, in [15, 36], adaptation consists of predicting the weights of a classifier for new categories, given a small few-shot training set. The work of  goes beyond the adaptation of a single layer on top of a fixed feature extractor, and additionally generates FiLM  layers that modify convolutional layers. Alternatively, the work of  proposes to train a linear classifier on top of the features directly from few samples from new categories. In the same line of work,  performs implanting, i.e. learning new convolutional filters within the existing CNN layers.
Other methods do not perform adaptation at all. It has been shown in [9, 25, 45] that training a regularized CNN for classification on the meta-training set and using these features directly with a nearest centroid classifier produces state-of-the-art few-shot accuracy. To obtain a robust feature extractor,  distills an ensemble of networks into a single extractor to obtain low-variance features.
Finally, the methods [41, 45] are the most relevant to our work as they also tackle the problem of cross-domain few-shot classification . In , the authors propose to adapt each hidden layer of a feature extractor to a new task. They first obtain a task embedding and use a conditional neural process  to generate the parameters of FiLM  modulation layers, as well as the weights of a classifier for new categories. An adaptation-free method  instead trains a CNN on ImageNet, while optimizing for high validation accuracy on other datasets using hyper-parameter search. When tested on domains similar to ImageNet, this method demonstrates the highest accuracy; however, it is outperformed by adaptation-based methods when tested on other data distributions .
The term “universal representation” was coined by  and refers to an image representation that works equally well for a large number of visual domains. The simplest way to obtain a universal representation is to train a separate feature extractor for each visual domain and use only the appropriate one at test time. To reduce the computational footprint,  investigates whether a single CNN can be used to perform image classification on very different domains. To achieve this goal, the authors propose to share most of the parameters between domains during training and keep a small set of domain-specific parameters. Such adaptive feature sharing is implemented using conditional batch normalization , i.e. there is a separate set of batch-norm parameters for every domain. The work of  extends the idea of domain-specific computations in a single network and proposes universal parametric network families, which consist of two parts: 1) a CNN feature extractor with universal parameters shared across all domains, and 2) domain-specific modules trained on top of the universal weights to maximize the performance on each domain. It has been found important  to adapt both shallow and deep layers of a neural network in order to successfully handle multiple visual domains. We use this method in our work when training a parametric network family to produce a universal representation; in contrast, instead of parallel adapters, we use much simpler FiLM layers for domain-specific computations. Importantly, parametric network families  and FiLM  adapters only provide a way to efficiently compute a universal representation; they are not directly useful for few-shot learning. However, applying our SUR strategy to this representation produces a useful set of features, leading to state-of-the-art results in few-shot learning.
3 Proposed Approach
We now present our approach for few-shot learning, starting with preliminaries.
3.1 Few-Shot Classification with Nearest Centroid Classifier
The end-goal of few-shot classification is to produce a model which, given a new learning episode and a few labeled examples, is able to generalize to unseen examples for that episode. In other words, the model learns from a small training set $S$, called a support set, and is evaluated on a held-out test set $Q$, called a query set. Both sets consist of image-label pairs $(x_i, y_i)$, while the pair $(S, Q)$ represents the learning episode, also called a few-shot task. To fulfill this objective, the problem is addressed in two steps. During the meta-training stage, a learning algorithm receives a large dataset with base categories $\mathcal{C}_{\text{train}}$, where it must learn a general feature extractor $f$. During the meta-testing stage, one is given a target dataset with categories $\mathcal{C}_{\text{test}}$, used to repeatedly sample few-shot tasks $(S, Q)$. Importantly, meta-training and meta-testing datasets have no categories in common, i.e. $\mathcal{C}_{\text{train}} \cap \mathcal{C}_{\text{test}} = \emptyset$.
During the meta-testing stage, we use the feature representation $f$ to build a nearest centroid classifier (NCC). Given a support set $S$, for each category $j$ present in this set, we build a class centroid $c_j$ by averaging the representations of images belonging to this category:

$$c_j = \frac{1}{|S_j|} \sum_{(x_i, y_i) \in S_j} f(x_i), \qquad S_j = \{(x_i, y_i) \in S : y_i = j\}.$$
To classify a new sample $x$, we choose a distance function $d$, such as the Euclidean distance or the negative cosine similarity, and assign the sample to the class of the closest centroid, i.e. $\hat{y} = \arg\min_j d(f(x), c_j)$. It has been empirically observed [6, 16] that the negative cosine similarity works better for few-shot learning; thus, we use it in all our experiments.
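As a concrete illustration, the centroid construction and nearest-centroid rule described above can be sketched in a few lines of NumPy. This is a toy sketch with hand-made feature arrays, not the paper's implementation:

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    """L2-normalize the rows of a 2-D array."""
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + eps)

def build_centroids(support_feats, support_labels):
    """Average the support features of each class to get one centroid per class."""
    classes = np.unique(support_labels)
    return classes, np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )

def ncc_predict(query_feats, classes, centroids):
    """Assign each query to the centroid with the highest cosine similarity
    (equivalently, the smallest negative cosine similarity)."""
    q = l2_normalize(query_feats)
    c = l2_normalize(centroids)
    sims = q @ c.T  # cosine similarities, shape (n_query, n_class)
    return classes[np.argmax(sims, axis=1)]

# Toy 2-way, 2-shot task with hypothetical 3-D "features".
support = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0], [0.1, 0.9, 0.0]])
labels = np.array([0, 0, 1, 1])
classes, centroids = build_centroids(support, labels)
pred = ncc_predict(np.array([[0.8, 0.2, 0.0]]), classes, centroids)  # → class 0
```

Note that no parameters are learned here: the classifier is fully determined by the support set, which is what makes NCC attractive in the low-data regime.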
With the NCC classifier defined above, we may now formally define the concept of universal representation and the procedure for selecting relevant features. In our approach, the meta-training set is used to train a set of feature extractors $\{f_1, \dots, f_K\}$ that form a universal set of features. Each feature extractor $f_i$ maps an input image $x$ into a $d_i$-dimensional representation $f_i(x)$. These features should capture different types of semantics and can be obtained in various manners, as detailed in Section 3.3.
Parametric Universal Representations.
One way to transform a universal set of features into a vectorized universal representation is, for example, to concatenate all image representations from this set (with or without $\ell_2$-normalization). As we show in the experimental section, directly using such a concatenation for classification with NCC does not work well, as the representation contains many features that are irrelevant for a new task. Therefore, we are interested in implementing a selection mechanism. In order to do so, given a vector $\lambda$ in $[0, 1]^K$, we define a selection operation as follows:

$$f_\lambda(x) = \left[\lambda_1 \tilde{f}_1(x); \; \dots; \; \lambda_K \tilde{f}_K(x)\right],$$

where $\tilde{f}_i$ is simply $f_i$ after $\ell_2$-normalization. We call $f_\lambda$ a parametrized universal representation, as it contains information from the whole universal set, but the exact representation depends on the selection parameters $\lambda$. Using this mechanism, it is possible to select various combinations of features from the universal representation by setting more than one $\lambda_i$ to a non-zero value.
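The selection operation above amounts to block-wise scaling before concatenation; a minimal NumPy sketch (with hypothetical feature vectors standing in for extractor outputs) is:

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    """L2-normalize along the last axis."""
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def select(features, lam):
    """Parametrized universal representation f_lambda(x): scale each
    l2-normalized block f_i(x) by its selection weight lambda_i,
    then concatenate the blocks."""
    return np.concatenate(
        [l * l2_normalize(f) for f, l in zip(features, lam)], axis=-1
    )

# Two hypothetical extractors producing 4-D and 3-D features for one image.
rng = np.random.default_rng(0)
feats = [rng.standard_normal(4), rng.standard_normal(3)]
rep = select(feats, lam=[1.0, 0.0])  # keeps only the first extractor's block
```

Setting a weight to zero removes the corresponding block from every distance computation, which is why optimizing $\lambda$ acts as a feature-selection mechanism.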
Finding optimal selection parameters.
Feature selection is performed during meta-testing by optimizing a probabilistic model, leading to optimal parameters $\lambda^\star$, given the support set of a new task.
Specifically, we consider the NCC classifier from Section 3.1, using $f_\lambda$ instead of $f$, and introduce the likelihood function

$$p(y = j \mid x) = \frac{\exp\left(-d(f_\lambda(x), c_j)\right)}{\sum_{k} \exp\left(-d(f_\lambda(x), c_k)\right)}.$$
Our goal is then to find optimal parameters $\lambda^\star$ that maximize the likelihood on the support set, which is equivalent to minimizing the negative log-likelihood:

$$\lambda^\star = \arg\min_{\lambda \in [0,1]^K} \; -\sum_{(x_i, y_i) \in S} \log p(y = y_i \mid x_i).$$
In practice, we optimize the objective by performing several steps of gradient descent. A solution to this problem encourages large $\lambda$ values to be assigned to representations for which the intra-class similarity is high while the inter-class similarity is low. This becomes clearer when writing the cosine similarity between $\lambda$-parametrized universal representations explicitly:

$$\mathrm{sim}\left(f_\lambda(x), f_\lambda(x')\right) = \frac{\sum_{i=1}^{K} \lambda_i^2 \, \langle \tilde{f}_i(x), \tilde{f}_i(x') \rangle}{\|f_\lambda(x)\| \, \|f_\lambda(x')\|},$$

i.e. the similarity decomposes into a $\lambda$-weighted sum of per-extractor similarities.
The proposed procedure is what we call Selecting from Universal Representations (SUR).
It is worth noting that the nearest centroid classifier is a simple non-parametric model with limited capacity, which only stores a single vector to describe a class. Such limited capacity becomes an advantage when only a few annotated samples are available, as it effectively prevents overfitting. When training and testing across similar domains, SUR is able to select from the universal representation the features optimized for each visual domain. When the target domain distribution does not match any of the training distributions, it is nevertheless able to adapt a few parameters to the target distribution. In this sense, our method performs a limited form of adaptation, with few parameters only, which is reasonable given that the target task has only a small number of annotated samples.
Sparsity in selection.
As said above, our selection algorithm is a form of weak adaptation, where the parameters $\lambda$ are adjusted to optimize the image representation, given an input task. Selection parameters are constrained to be in $[0, 1]$. However, as we show in the experimental section, the resulting vector $\lambda$ is sparse in practice, with many entries equal to 0 or 1. This empirical behavior suggests that our algorithm indeed performs selection of relevant features – a simple and robust form of adaptation. To promote further sparsity, one may use an additional sparsity-inducing penalty such as the $\ell_1$-norm during the optimization; however, our experiments show that doing so is not necessary to achieve good results.
3.3 Obtaining Universal Representations
In this section, we explain how to obtain a universal set of feature extractors when one or multiple domains are available for training. Three variants are used in this paper, which are illustrated in Figure 2.
Multiple training domains.
In the case of multiple training domains, we assume that $K$ different datasets are available for building a universal representation. We start with a straightforward solution and train a feature extractor for each visual domain independently. This results in the desired universal set $\{f_1, \dots, f_K\}$. A universal representation is then computed by concatenating the outputs of this set, as illustrated in Figure 2(a).
To compute universal representations with a single network, we use the parametric network families proposed in . We follow the original paper and first train a general feature extractor – a ResNet – using the provided training split of ImageNet, and then freeze the network’s weights. For each of the remaining datasets, task-specific parameters are learned by minimizing the corresponding training loss. We use FiLM  layers as domain-specific modules and insert them after each batch-norm layer in the ResNet. We choose FiLM layers over the originally proposed parallel adapters  because FiLM is much simpler, i.e. it performs a channel-wise affine transformation and contains far fewer parameters. This helps to avoid overfitting on small datasets. In summary, each of the datasets has its own set of FiLM layers, and the base ResNet together with a set of domain-specific FiLM layers constitutes the universal set of extractors. To compute the features of the $i$-th domain, we forward the input image through the ResNet, where all intermediate activations are modulated with the FiLM layers trained on this domain, as illustrated in Figure 2(b). Using parametric network families instead of separate networks to obtain a universal representation reduces the number of stored parameters roughly by a factor of 8. However, to be actually useful for few-shot learning, such a representation must be processed by our SUR approach, as described in Section 3.2.
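For reference, a FiLM module of this kind amounts to a per-channel affine transformation. A minimal PyTorch sketch could look as follows; the identity initialization and exact placement are assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Channel-wise affine modulation: y = gamma * x + beta, with one
    (gamma, beta) pair per channel, inserted after a batch-norm layer."""
    def __init__(self, num_channels):
        super().__init__()
        # Initialized to the identity, so training starts from the
        # behavior of the frozen base network (an assumed choice).
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):
        # x: (N, C, H, W); broadcast gamma/beta over batch and spatial dims.
        return self.gamma.view(1, -1, 1, 1) * x + self.beta.view(1, -1, 1, 1)
```

Each training domain would own one such set of `FiLM` layers while all convolutional and batch-norm weights of the base ResNet stay frozen and shared.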
Single training domain.
Most of the current few-shot classification benchmarks [1, 23, 37, 52] include a single visual domain, i.e. training and testing splits are formed from different categories of the same dataset. Training a set of different feature extractors on the same domain with the strategies proposed above is not possible. Instead, we propose to use a single feature extractor and obtain a set of universal features from its intermediate activations, as illustrated in Figure 2(c). Different intermediate layers extract different features, and some of them may be more useful than others for a new few-shot task.
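One way to collect such intermediate activations is with forward hooks; the sketch below uses a tiny hypothetical CNN as a stand-in for the actual backbone, and global average pooling to turn each activation map into a feature vector:

```python
import torch
import torch.nn as nn

def collect_intermediate_features(model, layers, x):
    """Run one forward pass and return globally-pooled activations of the
    requested sub-modules, forming a universal feature set from one CNN."""
    feats = {}
    hooks = [
        layer.register_forward_hook(
            # Default arg binds the current name; pool 4-D maps to (N, C).
            lambda m, inp, out, name=name: feats.__setitem__(
                name, out.mean(dim=(2, 3)) if out.dim() == 4 else out)
        )
        for name, layer in layers.items()
    ]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return feats

# Hypothetical tiny CNN standing in for the paper's ResNet12.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
feats = collect_intermediate_features(
    net, {"block1": net[1], "block2": net[3]}, torch.randn(2, 3, 8, 8))
```

The resulting dictionary of per-layer feature vectors plays the role of the universal set $\{f_1, \dots, f_K\}$ in the single-domain setting.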
We now present the experiments to analyze the performance of our selection strategy, starting with implementation details.
4.1 Datasets and Experiments Details
We use mini-ImageNet  and Meta-Dataset  to evaluate the proposed approach. The mini-ImageNet  dataset consists of 100 categories (64 for training, 16 for validation, 20 for testing) from the original ImageNet  dataset, with 600 images per class. Since all the categories come from the same dataset, we use mini-ImageNet to evaluate our feature selection strategy in single-domain few-shot learning. During testing on mini-ImageNet, we measure performance over tasks where only 1 or 5 images (shots) per category are given for adaptation, and the number of classes in a task is fixed to 5, i.e. 5-way classification. All images are resized to the resolution originally suggested by .
Meta-Dataset  is much larger than previous few-shot learning benchmarks; it is in fact a collection of multiple datasets with different data distributions. It includes ImageNet , Omniglot , Aircraft , CU-Birds , Describable Textures , Quick Draw , Fungi , VGG-Flower , Traffic Sign  and MSCOCO . A short description of each dataset is given in the Appendix. The Traffic Sign and MSCOCO datasets are reserved for testing only, while all other datasets have their corresponding train, val and test splits. To better study out-of-training-domain behavior, we follow  and add 3 more testing datasets, namely MNIST , CIFAR10 , and CIFAR100 . In contrast to mini-ImageNet, the number of shots and ways is not fixed here and varies from one few-shot task to another. All images are resized to the resolution suggested by the authors of the dataset .
When experimenting with Meta-Dataset, we follow  and use ResNet18  as feature extractor. The training details for each dataset are described in Appendix. To report test results on Meta-Dataset, we perform an independent evaluation for each of the 10 provided datasets, plus for 3 extra datasets as suggested by . We follow  and sample 600 tasks for evaluation on each dataset within Meta-Dataset.
When experimenting with mini-ImageNet, we follow popular works [14, 25, 34] and use ResNet12  as a feature extractor. During testing, we use mini-ImageNet’s test set to sample 1000 5-way classification tasks. We evaluate scenarios where only 1 or 5 examples (shots) of each category are provided for training and 15 for evaluation.
On both datasets, during meta-training, we use a cosine classifier with a learnable softmax temperature. During testing, classes and the corresponding train/test examples are sampled at random. For all our experiments, we report the mean accuracy (in %) over all test tasks, together with its confidence interval.
To perform feature selection from the universal representation, we optimize the selection parameters $\lambda$ (defined in Eq. 2) to minimize the NCC classification loss (Eq. 4) on the support set. Each individual scalar weight is kept between 0 and 1 using the sigmoid function, i.e. $\lambda_i = \sigma(\alpha_i)$, and all $\alpha_i$ are initialized with zeros. We optimize the parameters using gradient descent for 40 iterations. At each iteration, we use the whole support set to build a nearest centroid classifier, and then we use the same set of examples to compute the loss given by Eq. 4. Then, we compute gradients w.r.t. $\alpha$ and use the Adadelta  optimizer to perform parameter updates.
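Putting these details together, the selection procedure can be sketched in PyTorch as below. This is a simplified sketch: the learning rate and the use of cosine-similarity logits without a temperature are assumptions, and `support_feats` is a hypothetical list of per-extractor feature matrices:

```python
import torch
import torch.nn.functional as F

def sur_select(support_feats, support_labels, n_iter=40):
    """Optimize lambda = sigmoid(alpha) on the support set: build an NCC
    from the support features, compute the NLL on the same examples, and
    update alpha by gradient descent with Adadelta."""
    alpha = torch.zeros(len(support_feats), requires_grad=True)  # lambda_i = 0.5 at init
    opt = torch.optim.Adadelta([alpha], lr=1.0)  # lr is an assumption
    classes = support_labels.unique()
    for _ in range(n_iter):
        lam = torch.sigmoid(alpha)
        # Lambda-weighted concatenation of l2-normalized feature blocks.
        rep = torch.cat([l * F.normalize(f, dim=1)
                         for f, l in zip(support_feats, lam)], dim=1)
        centroids = torch.stack([rep[support_labels == c].mean(0) for c in classes])
        # Cosine similarity to each centroid serves as the logit.
        logits = F.normalize(rep, dim=1) @ F.normalize(centroids, dim=1).t()
        loss = F.cross_entropy(logits, support_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(alpha).detach()
```

Because the centroids are rebuilt from the reweighted features at every step, gradients flow from the loss through both the queries and the centroids back to $\alpha$.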
4.2 Cross-Domain Few-shot Classification
In this section, we evaluate the ability of SUR to handle different visual domains in MetaDataset . First, we motivate the use of universal representations and show the importance of feature selection. Then, we evaluate the proposed strategy against important baselines and state-of-the-art few-shot algorithms.
Evaluating domain-specific feature extractors.
MetaDataset includes 8 datasets for training, i.e. ImageNet, Omniglot, Aircraft, CU-Birds, Textures, Quick Draw, Fungi, and VGG-Flower. We treat each dataset as a separate visual domain and obtain a universal set of features by training 8 domain-specific feature extractors, i.e. a separate ResNet18 for each dataset. Each feature extractor is trained independently from the other models, with its own training schedule specified in the Appendix. We test the performance of each feature extractor (with NCC) on every test dataset specified in Section 4.1, and report the results in Table 1. Among the 8 datasets seen during training, 5 datasets are better solved with their own features, while 3 other datasets benefit more from ImageNet features. In general, ImageNet features suit 8 out of 13 test datasets best, while the 5 others require a different feature extractor for better accuracy. Such results suggest that none of the domain-specific feature extractors alone can perform equally well on all the datasets simultaneously. However, using the whole universal feature set to select appropriate representations should lead to superior accuracy.
[Table 1: per-dataset NCC accuracy of the 8 feature extractors trained on ImageNet, Omniglot, Aircraft, Birds, Textures, Quick Draw, Fungi, and VGG Flower, evaluated on every test dataset.]
Evaluating feature selection.
We now employ SUR – our strategy for feature selection – as described in Sec. 3.2. A parametrized universal representation is obtained from the universal feature set by concatenation. It is then multiplied by the selection parameters $\lambda$, which are optimized for each new few-shot task, following Section 4.1. We run this procedure on all 13 testing datasets and report the results in Figure 3 (a). We compare our method with the following baselines: a) using a single feature extractor pre-trained on the ImageNet split of MetaDataset (denoted “ImageNet-F”), b) using a single feature extractor pre-trained on the union of the 8 training splits of MetaDataset (denoted “Union-F”), and c) manually setting all $\lambda_i$ to 1, which corresponds to simple concatenation (denoted “Concat-F”). It is clear from the figure that the features provided by SUR yield much better overall performance than any of the baselines on both seen and unseen domains.
Comparison to other approaches.
We now compare SUR against state-of-the-art few-shot methods and report the results in Table 2. The results on the MNIST, CIFAR 10 and CIFAR 100 datasets are missing for most of the approaches because those numbers were not reported in the corresponding original papers. The comparison to the best-performing methods on common datasets is summarized in Figure 3 (b). We see that SUR demonstrates state-of-the-art results on 9 out of 13 datasets. BOHB-E  outperforms our approach on the Birds, Textures and VGG Flowers datasets. This is not surprising since these are the only datasets that benefit from ImageNet features more than from their own (see Table 1), and BOHB-E  is essentially an ensemble of multiple ImageNet-pretrained networks. When tested outside the training domain, SUR consistently outperforms CNAPs  – the state-of-the-art adaptation-based method. Moreover, SUR shows the best results on all 5 datasets never seen during training.
[Table 2: per-dataset accuracy of ProtoNet, MAML, Proto-MAML, CNAPs, BOHB-E, SUR (ours), and SUR-pf (ours).]
Universal representations with parametric network family.
While it is clear that SUR outperforms the other approaches, one may raise the concern that the improvement is due to the increased number of parameters, that is, we use 8 times more parameters than a single ResNet18. To address this concern, we use a parametric network family , which has only marginally more parameters than a single ResNet18. As described in Section 3.3.1, the parametric network family uses ResNet18 as a base feature extractor and FiLM  layers for feature modulation; the additional parameters, represented by all domain-specific FiLM layers, constitute only a small fraction of the ResNet18 parameters. For comparison, the CNAPs adaptation mechanism is larger than ResNet18 itself. To train the parametric network family, we first train a base CNN feature extractor on ImageNet. Then, for each remaining training dataset, we learn a set of FiLM layers, as detailed in Section 3.3.1. To obtain the universal feature set for an image, we run inference 8 times, each time with the set of FiLM layers corresponding to a different domain, as described in 3.3.1. Once the universal feature set is built, our selection mechanism is applied to it as described before (see Section 4.1). The results of using SUR with a parametric network family are presented in Table 2 as “SUR-pf”. The table shows that accuracy improves on datasets similar to ImageNet, which suggests that parameter sharing is beneficial in this case and confirms the original findings of . However, the opposite is true for very different visual domains such as Fungi and Quick Draw. This implies that, to do well on significantly different datasets, the base CNN filters must be learned on those datasets from scratch; simple feature modulation is not competitive.
| Extractor | Features | 5-shot | 1-shot |
|---|---|---|---|
| Cls | last | 76.28 ± 0.41 | 60.09 ± 0.61 |
| Cls | concat | 75.67 ± 0.41 | 57.15 ± 0.61 |
| Cls | SUR | 79.25 ± 0.41 | 60.79 ± 0.62 |
| DenseCls | last | 78.25 ± 0.43 | 62.61 ± 0.61 |
| DenseCls | concat | 79.59 ± 0.42 | 62.74 ± 0.61 |
| DenseCls | SUR | 80.04 ± 0.41 | 63.13 ± 0.62 |
| Robust20-dist | last | 81.06 ± 0.41 | 64.14 ± 0.62 |
| Robust20-dist | concat | 80.79 ± 0.41 | 63.22 ± 0.63 |
| Robust20-dist | SUR | 81.19 ± 0.41 | 63.93 ± 0.63 |
4.3 Single-domain Few-shot Classification
In this section, we demonstrate the benefits of applying SUR to few-shot
classification, when training and testing classes come from the same dataset. More
specifically, we show how to use our feature selection strategy in order to improve
existing adaptation-free methods.
To test SUR in the single-domain scenario, we use the mini-ImageNet benchmark and solve 1-shot and 5-shot classification tasks, as described in Section 4.1. When only one domain is available, we obtain a universal set of features from the activations of the network’s intermediate layers, as described in Section 3.3.2.
We experiment with 3 adaptation-free methods. They all use the last layer of ResNet12 as image features and build an NCC on top; however, they differ in the way the feature extractor is trained. The method we call “Cls” simply trains ResNet12 for classification on the meta-training set. The work of  performs dense classification instead (dubbed “DenseCls”). Finally, the “Robust20-dist” feature extractor  is obtained by ensemble distillation. For all methods, the universal feature set is formed from the activations of the last 6 layers of the network; the remaining intermediate layers do not contain information useful for the final task, as we show in the Appendix.
Here, we explore different ways of exploiting such a universal set of features for few-shot classification and report the results in Table 3. We can see that using SUR to select the layers appropriate for classification usually works better than using only the penultimate layer (dubbed “last”) or concatenating all the features together (denoted as “concat”). For Robust20-dist, we observe only an incremental improvement for 5-shot classification and a slight degradation in the 1-shot scenario. We attribute this to the fact that the penultimate layer in this network is probably the most useful for new problems and, if not selected, may hinder the final accuracy.
4.4 Analysis of Feature Selection
In this section, we analyze the optimized selection parameters $\lambda$ when applying SUR on MetaDataset. Specifically, we perform the experiments from Section 4.2, where we select appropriate representations from the set generated by 8 independent networks. For each test dataset, we then average the selection vectors $\lambda$ (after being optimized for 40 SGD steps) over 600 test tasks and present them in Figure 4. First, we can see that the resulting vectors are sparse, confirming that most of the time SUR actually selects a few relevant features rather than taking all features with similar weights. Second, for a given test domain, SUR tends to select feature extractors trained on similar visual domains. Interestingly, for datasets coming from exactly the same distribution, i.e. CIFAR 10 and CIFAR 100, the averaged selection parameters are almost identical. All of the above suggests that the selection parameters can be interpreted as encoding the importance of the features’ visual domains for the test domain.
This work was funded in part by the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute) and reference ANR-19-P3IA-0003 (3IA MIAI@Grenoble Alpes), and was supported by the ERC grant number 714381 (SOLARIS) and a gift from Intel.
Appendix A Implementation and Datasets details
a.1 Full MetaDataset Description
The MetaDataset includes ImageNet  (1000 categories of natural images), Omniglot  (1623 categories of black-and-white hand-written characters from different alphabets), Aircraft  (100 classes of aircraft types), CU-Birds  (200 different bird species), Describable Textures  (43 categories of textures), Quick Draw  (345 different categories of black-and-white sketches), Fungi  (1500 mushroom types), VGG-Flower  (102 flower species), Traffic Sign  (43 classes of traffic signs) and MSCOCO  (80 categories of day-to-day objects). For testing, we additionally employ MNIST  (10 hand-written digits), CIFAR10  (10 classes of common objects), and CIFAR100  (100 classes of common objects). Figure A1 illustrates random samples drawn from each dataset.
a.2 MetaDataset training details
When using multiple ResNet18 networks on MetaDataset (a single ResNet per dataset) to build a universal representation, we train the networks according to the following procedure. For optimization, we use SGD with momentum and adjust the learning rate using cosine annealing . The starting learning rate, the maximum number of training iterations (“Max iter.”) and the annealing frequency (“annealing freq.”) are set individually for each dataset. To regularize training, we use data augmentation, such as random crops and random color augmentations, and a constant weight decay (see Table A1). For each dataset, we run a grid search over batch sizes in [8, 16, 32, 64] and pick the one that maximizes accuracy on the validation set. The hyper-parameters maximizing the validation accuracy are given in Table A1.
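The cosine-annealing schedule (with warm restarts, as in SGDR) can be sketched as follows; `cosine_annealed_lr` is our own illustrative name, and `annealing_freq` stands in for the per-dataset annealing frequency from Table A1.

```python
import math

def cosine_annealed_lr(step, base_lr, annealing_freq):
    """Cosine annealing with warm restarts: within each cycle of
    `annealing_freq` steps, the learning rate decays from base_lr
    towards 0 along a half-cosine, then restarts at base_lr."""
    t = step % annealing_freq
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t / annealing_freq))
```

For example, with `base_lr=0.1` and `annealing_freq=100`, the rate is 0.1 at step 0, 0.05 halfway through a cycle, and jumps back to 0.1 when a new cycle begins.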
When training a parametric network family for building universal representations, we start by adopting a ResNet18 already trained on ImageNet, which we keep fixed for the rest of the training procedure. For each new dataset, we then train a set of domain-specific FiLM layers, modulating intermediate ResNet layers, as described in Section 3.3.1. Here, we also use cosine annealing as the learning rate policy, and employ weight decay and data augmentation, as specified above. In Table A2, we report the training hyper-parameters for each of the datasets.
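A FiLM layer itself is just a per-channel affine modulation of intermediate activations; below is a minimal NumPy sketch (our own `FiLM` class, with the frozen backbone convolutions omitted). Only `gamma` and `beta` are trained per domain.

```python
import numpy as np

class FiLM:
    """Feature-wise Linear Modulation: y = gamma * x + beta, applied
    per channel. At initialization (gamma=1, beta=0) it is the identity,
    so the pre-trained backbone's behavior is preserved."""
    def __init__(self, num_channels):
        self.gamma = np.ones(num_channels)
        self.beta = np.zeros(num_channels)

    def __call__(self, x):
        # x: (batch, channels, height, width); broadcast over space.
        return (self.gamma[None, :, None, None] * x
                + self.beta[None, :, None, None])
```

Training only these per-domain parameters keeps the number of domain-specific weights small compared to retraining a full ResNet per dataset.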
a.3 mini-ImageNet training details
All the methods we evaluate on mini-ImageNet use ResNet12  as a feature extractor. It is trained with batch size 200 for 48 epochs. For optimization, we use the Adam optimizer  with an initial learning rate of 0.1, which is kept constant for the first 36 epochs. Between epochs 36 and 48, the learning rate is exponentially decreased from 0.1 to its final value, i.e. by dividing the learning rate by a constant factor after each epoch. As regularization, we use weight decay and data augmentation such as random crops, flips and color transformations.
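This schedule can be sketched as below; since the final learning-rate value is not restated here, it is kept as a parameter (`lr_final`), and `exp_decayed_lr` is our own illustrative name.

```python
def exp_decayed_lr(epoch, lr0, lr_final, decay_start, decay_end):
    """Constant lr0 for epochs up to decay_start, then exponential decay:
    the rate is divided by a fixed factor each epoch so that it reaches
    lr_final exactly at decay_end."""
    if epoch <= decay_start:
        return lr0
    span = decay_end - decay_start
    frac = min(epoch - decay_start, span) / span
    return lr0 * (lr_final / lr0) ** frac
```

Because the exponent grows linearly with the epoch index, consecutive epochs differ by the constant multiplicative factor `(lr_final / lr0) ** (1 / span)`.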
[Table A1: per-dataset training hyper-parameters — learning rate, weight decay, Max iter., annealing freq., batch size.]
[Table A2: per-dataset training hyper-parameters for the parametric network family — learning rate, weight decay, Max iter., annealing freq., batch size.]
Appendix B Additional Experiments and Ablation Study
b.1 Additional results on MetaDataset
Here we elaborate on using SUR with a universal set of representations obtained from independent feature extractors (see Section 3.2), present an ablation study on varying the number of extractors in the universal set, and report detailed results corresponding to Figure 3. Specifically, we use 8 domain-specific ResNet18 feature extractors to build a universal representation and evaluate SUR against the baselines. The results are reported in Table A3, which corresponds to Figure 3.
In the following experiment, we remove feature extractors trained on Birds, Textures and VGG Flower from the universal feature set and test the performance of SUR on the set of 5 remaining feature extractors. We chose to remove these feature extractors as none of them gives the best performance on any of the test sets; hence, they probably do not add new knowledge to the universal set of features. The results are reported in Table A3 (a) as “SUR (5/8)”. As we can see, selecting from the truncated set of features may bring marginal improvements in some categories, which suggests that even the simplest form of adaptation – selection – may overfit when very few samples are available. On the other hand, for the test-only dataset Traffic Sign, selecting from all features is beneficial. This result is not surprising: one generally does not know which features will be useful for tasks not known beforehand, and thus removing seemingly useless features may result in a performance drop.
[Table A3: accuracy per test dataset for ImageNet-F, Union-F, Concat-F, SUR, and SUR (5/8).]
|Cls||last||76.28 ± 0.41||60.09 ± 0.61|
|✓||select||77.39 ± 0.42||61.02 ± 0.62|
|✓||✓||select||79.25 ± 0.41||60.79 ± 0.62|
|✓||✓||✓||select||78.92 ± 0.41||60.71 ± 0.64|
|✓||✓||✓||✓||select||78.80 ± 0.43||60.55 ± 0.62|
|✓||concat||78.43 ± 0.42||60.41 ± 0.62|
|✓||✓||concat||75.67 ± 0.41||57.15 ± 0.61|
|✓||✓||✓||concat||70.90 ± 0.40||53.53 ± 0.61|
|✓||✓||✓||✓||concat||69.40 ± 0.40||51.21 ± 0.60|
|DenseCls||last||78.25 ± 0.43||62.61 ± 0.61|
|✓||select||79.34 ± 0.42||62.46 ± 0.62|
|✓||✓||select||80.04 ± 0.41||63.13 ± 0.62|
|✓||✓||✓||select||79.84 ± 0.42||62.95 ± 0.62|
|✓||✓||✓||✓||select||79.49 ± 0.43||62.58 ± 0.63|
|✓||concat||79.12 ± 0.41||62.51 ± 0.62|
|✓||✓||concat||79.59 ± 0.42||62.74 ± 0.61|
|✓||✓||✓||concat||77.63 ± 0.42||60.14 ± 0.61|
|✓||✓||✓||✓||concat||76.07 ± 0.41||57.78 ± 0.61|
|DivCoop||last||81.06 ± 0.41||64.14 ± 0.62|
|✓||select||81.23 ± 0.42||63.83 ± 0.62|
|✓||✓||select||81.19 ± 0.41||63.93 ± 0.63|
|✓||✓||✓||select||81.11 ± 0.42||63.85 ± 0.62|
|✓||✓||✓||✓||select||81.08 ± 0.42||63.71 ± 0.62|
|✓||concat||81.12 ± 0.42||63.92 ± 0.62|
|✓||✓||concat||80.79 ± 0.41||63.22 ± 0.63|
|✓||✓||✓||concat||80.52 ± 0.42||62.48 ± 0.61|
|✓||✓||✓||✓||concat||80.36 ± 0.42||61.30 ± 0.61|
b.2 Analysis of Feature Selection on MetaDataset
Here, we repeat the experiment from Section 4.4, i.e. studying average values of selection parameters depending on the test dataset. Figure A2 reports the average selection parameters with corresponding confidence intervals. This is in contrast to Figure 4 that reports the average values only, without confidence intervals.
b.3 Importance of Intermediate Layers on mini-ImageNet
We clarify the findings in Section 4.3 and provide an ablation study on the importance of intermediate-layer activations for meta-testing performance. For all experiments on mini-ImageNet, we use ResNet12 as a feature extractor and construct a universal feature set from activations of intermediate layers. In Table A4, we experiment with adding the outputs of different layers to the universal set. The universal set is then used to construct the final image representation either through concatenation (“concat”) or using SUR. The table suggests that adding the first 6 layers negatively influences performance on the target task. While our SUR approach can still select relevant features from the full set of layers, the negative impact is especially pronounced for the “concat” baseline. This suggests that the first 6 layers do not contain information useful for the test task. For this reason, we do not include them in the universal feature set when reporting the results in Section 4.3.
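Building such a universal set from a single network amounts to collecting a pooled feature vector after each layer. A minimal sketch, with illustrative names (`collect_layer_features`, and toy `layers` standing in for the ResNet12 blocks):

```python
import numpy as np

def collect_layer_features(x, layers):
    """Forward x (batch, channels, height, width) through a list of layer
    functions, global-average-pooling every intermediate activation into a
    (batch, channels) feature vector. Returns one feature set per layer,
    which together form the universal set to select from."""
    feats = []
    for layer in layers:
        x = layer(x)
        feats.append(x.mean(axis=(2, 3)))  # global average pooling
    return feats
```

In this view, dropping the first 6 layers simply means slicing off the first 6 entries of the returned list before feature selection.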
We further analyze the selection coefficients assigned to different layers in Figure A3. We can see that for all methods, SUR picks from the last 6 layers most of the time. However, some of the earlier layers are occasionally selected too. According to Table A4, these cases lead to a decrease in performance and suggest that SUR may overfit when the number of samples is very low.
- Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
- Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
- Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
- (2018) Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136. Cited by: §3.3.2.
- (2019) A kernel perspective for regularizing deep neural networks. In International Conference on Machine Learning (ICML), Cited by: §1.
- (2017) Universal representations: the missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275. Cited by: §2.0.2, §2.
- (2010) MNIST handwritten digit database. Note: http://yann.lecun.com/exdb/mnist Cited by: §A.1, §4.1.1.
- (2018) Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1.
- (2019) A closer look at few-shot classification. In International Conference on Learning Representations (ICLR), Cited by: §1, §2.0.1, §2.0.1, §3.1, §4.1.2.
- (2014) Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §A.1, §4.1.1.
- (2017) Multi-task self-supervised visual learning. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1.
- (2019) Diversity with cooperation: ensemble methods for few-shot classification. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1, §2.0.1, §4.3.
- (2017) BlitzNet: a real-time deep network for scene understanding. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1.
- (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML), Cited by: §1, §1, §1, Table 2.
- (2016) The Quick, Draw! dataset. Note: quickdraw.withgoogle.com Cited by: §A.1, §4.1.1.
- (2018) Conditional neural processes. arXiv preprint arXiv:1807.01613. Cited by: §2.0.1.
- (2019) Boosting few-shot visual learning with self-supervision. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.0.1, §4.1.2.
- (2018) Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.0.1.
- (2018) Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), Cited by: §1, §3.1.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.2.
- (2013) Detection of traffic signs in real-world images: the german traffic sign detection benchmark. In International Joint Conference on Neural Networks (IJCNN), Cited by: §A.1, §4.1.1.
- (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. Cited by: §2.0.2.
- (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §A.3.
- (2009) Learning multiple layers of features from tiny images. Master’s Thesis, University of Toronto. Cited by: §A.1, §4.1.1.
- (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- (2015) Human-level concept learning through probabilistic program induction. Science. Cited by: §A.1, §3.3.2, §4.1.1.
- (1989) Backpropagation applied to handwritten zip code recognition. Neural computation. Cited by: §1.
- (2019) Dense classification and implanting for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.0.1, §2.0.1, §4.1.2, §4.3.
- (2014) Microsoft COCO: common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §A.1, §1, §4.1.1.
- (2016) SSD: single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1.
- (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
- (2016) SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Cited by: §A.2.
- (2014) Sparse modeling for image and vision processing. Foundations and Trends® in Computer Graphics and Vision. Cited by: §3.2.3.
- (2013) Fine-grained visual classification of aircraft. Technical report Cited by: §A.1, §4.1.1.
- (2013) Distance-based image classification: generalizing to new classes at near-zero cost. IEEE transactions on Pattern Analysis and Machine Intelligence (PAMI). Cited by: §1.
- (2008) Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, Cited by: §A.1, §4.1.1.
- (2018) TADAM: task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §A.3, §1, §2.0.1, §4.1.2.
- (2018) FiLM: visual reasoning with a general conditioning layer. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §2.0.1, §2.0.1, §2.0.2, Figure 2, §3.3.1, §4.2.4.
- (2018) Few-shot image recognition by predicting parameters from activations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.0.1.
- (2017) Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), Cited by: 2nd item, §1, §3.3.2, §4.1.1.
- (2018) Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.0.2, Figure 2, §3.3.1, §4.2.4.
- (2018) Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676. Cited by: §1.
- (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- (2019) Fast and flexible multi-task classification using conditional neural adaptive processes. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, §1, §2.0.1, §4.1.1, §4.1.2, §4.2.3, Table 2.
- (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, Cited by: §1.
- (2015) ImageNet large scale visual recognition challenge. Proceedings of the International Conference on Computer Vision (ICCV). Cited by: §A.1, §1, §4.1.1, §4.1.1.
- (2018) Meta-learning with latent embedding optimization. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- (2020) Optimized generic feature learning for few-shot classification across domains. arXiv preprint arXiv:2001.07926. Cited by: §1, §1, §2.0.1, §2.0.1, §4.2.3, Table 2.
- (1997) Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning. Cited by: §1, §2.0.1.
- (2018) FGVCx fungi classification challenge 2018. Note: github.com/visipedia/fgvcx_fungi_comp Cited by: §A.1, §4.1.1.
- (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, Table 2.
- (1998) Lifelong learning algorithms. In Learning to learn, pp. 181–209. Cited by: §1, §2.0.1.
- (2019) Meta-dataset: a dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096. Cited by: 2nd item, §1, §1, §1, §2.0.1, §4.1.1, §4.1.1, §4.1.2, §4.2, Table 2.
- (2016) Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- (2011) The Caltech-UCSD Birds-200-2011 Dataset. Technical report Technical Report CNS-TR-2011-001, California Institute of Technology. Cited by: §3.3.2.
- (2011) The caltech-ucsd birds-200-2011 dataset. Cited by: §A.1, §4.1.1.
- (2017) Spectral norm regularization for improving the generalizability of deep learning. arXiv preprint arXiv:1705.10941. Cited by: §1.
- (2014) How transferable are features in deep neural networks?. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- (2012) Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Cited by: §4.1.3.
- (2019) The visual task adaptation benchmark. arXiv preprint arXiv:1910.04867. Cited by: §1.