Selecting Relevant Features from a Universal Representation for Few-shot Classification



Popular approaches for few-shot classification consist of first learning a generic data representation based on a large annotated dataset, before adapting the representation to new classes given only a few labeled samples. In this work, we propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches. First, we obtain a universal representation by training a set of semantically different feature extractors. Then, given a few-shot learning task, we use our universal feature bank to automatically select the most relevant representations. We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training, which leads to state-of-the-art results on Meta-Dataset and improved accuracy on mini-ImageNet.

Image Recognition, Few-shot Learning, Feature Selection


1 Introduction

Convolutional neural networks [24] (CNNs) have become a classical tool for modeling visual data and are commonly used in many computer vision tasks such as image classification [22], object detection [10, 27, 40], or semantic segmentation [10, 28, 42]. One key to the success of these approaches is the availability of massive labeled datasets such as ImageNet [43] or COCO [26]. Unfortunately, annotating data at this scale is expensive and not always feasible, depending on the task at hand. Improving the generalization capabilities of deep neural networks and removing the need for huge sets of annotations is thus of utmost importance.

This ambitious challenge may be addressed from different perspectives, such as large-scale unsupervised learning [5], self-supervised learning [8, 16], or by developing regularization techniques dedicated to deep networks [2, 54]. An alternative solution is to use data that has been previously annotated for a different task than the one considered, for which only a few annotated samples may be available. This approach is particularly useful if the additional data is related to the new task [55, 57], which is unfortunately not known beforehand. How to effectively use this additional data is then an important subject of ongoing research [11, 50, 57]. In this paper, we propose to use a universal image representation, i.e., an exhaustive set of semantically different features. Then, by automatically selecting only relevant feature subsets from the universal representation, we show how to successfully solve a large variety of target tasks.

Figure 1: Illustration of our approach. (Left) First, we obtain a universal feature representation, consisting of feature blocks with different semantics. (Right) Given a few-shot task, we select only the relevant feature blocks from the universal representation, by optimizing masking parameters on the support set.

Specifically, we are interested in few-shot classification, where a visual model is first trained from scratch, i.e. starting from randomly initialized weights, using a large annotated corpus. Then, we evaluate its ability to transfer the knowledge to new classes, for which only very few annotated samples are provided. Simply fine-tuning a convolutional neural network on a new classification task has been shown to perform poorly [11]. This has motivated the community to develop dedicated techniques, allowing effective adaptation with few samples.

Few-shot classification methods typically operate in two stages, consisting of first pre-training a general feature extractor and then building an adaptation mechanism. A common way to proceed is based on meta-learning [11, 37, 46, 48, 49, 51], which is a principle to learn how to adapt to new learning problems. That is, a parametric adaptation module (typically, a neural network) is trained to produce a classifier for new categories, given only a few annotated samples [15, 36, 44]. To achieve this goal, the large training corpus is split into smaller few-shot classification problems [11, 37, 51], which are used to train the adaptation mechanism. Training in such an episodic fashion is advocated to alleviate overfitting and improve generalization [11, 37, 51]. However, more recent work [6] demonstrates that using a large training set to train the adapter is not necessary, i.e. a linear classifier with similar accuracy can be trained directly on new samples, on top of a fixed feature extractor. Finally, it has been shown that adaptation is not necessary at all [9]: using a non-parametric prototypical classifier [32] combined with a properly regularized feature extractor can achieve better accuracy than recent meta-learning baselines [9, 45]. These results suggest that on standard few-shot learning benchmarks [37, 39], the small number of samples in a few-shot task is not enough to learn a meaningful adaptation strategy.

To address the shortcomings of existing few-shot benchmarks, the authors of [50] have proposed Meta-Dataset, which evaluates the ability to learn across different visual domains and to generalize to new data distributions at test time, given few annotated samples. While methods based solely on pre-trained feature extractors [45] achieve good results only on test datasets that are similar to the training ones, the adaptation technique of [41] performs well across test domains. The method not only predicts a new few-shot classifier but also adapts the filters of a feature extractor, given an input task. These results suggest that feature adaptation may in fact be useful to achieve better generalization.

In contrast to these earlier approaches, we show that feature adaptation can be replaced by a simple feature selection mechanism, leading to better results in the cross-domain setup of [50]. More precisely, we propose to leverage a universal representation – a large set of semantically different features that captures different modalities of a training set. Rather than adapting existing features to a new few-shot task, we propose to select features from the universal representation. We call our approach SUR, which stands for Selecting from Universal Representations. In contrast to standard adaptation modules [34, 41, 50] learned on the training set, selection is performed directly on new few-shot tasks using gradient descent. Approaching few-shot learning with SUR has several advantages over classical adaptation techniques. First, it is simple by nature: selecting features from a fixed set is an easier problem than learning a feature transformation, especially when few annotated images are available. Second, an adaptation module learned on the meta-training set is likely to generalize only to similar domains; in contrast, the selection step in our approach is decoupled from meta-training and thus works equally well for any new domain. Finally, we show that our approach achieves better results than current state-of-the-art methods on popular few-shot learning benchmarks. In summary, this work makes the following contributions:

  • We propose to tackle few-shot classification by selecting relevant features from a universal representation. Universal representations can be built either by training several feature extractors or by using a single neural network, and the selection procedure is implemented with gradient descent.

  • We show that our method outperforms existing approaches in both in-domain and cross-domain few-shot learning, setting a new state of the art on Meta-Dataset [50] and improving accuracy on mini-ImageNet [37].

The official implementation is available at

2 Related Work

In this section, we present previous work on few-shot classification and universal representations, which is a term first introduced in [3].

Few-shot classification.

Typical few-shot classification problems consist of two parts called meta-training and meta-testing [6]. During the meta-training stage, one is given a large-enough annotated dataset, which is used to train a predictive model. During meta-testing, novel categories are provided along with few annotated examples. The goal is to evaluate the ability of the predictive model to adapt and perform well on these new classes.

Typical few-shot learning algorithms [14, 15, 36] first pre-train the feature extractor by supervised learning on the meta-training set. Then, they use meta-learning [46, 49] to train an adaptation mechanism. For example, in [15, 36], adaptation consists of predicting the weights of a classifier for new categories, given a small few-shot training set. The work of [34] goes beyond the adaptation of a single layer on top of a fixed feature extractor, and additionally generates FiLM [35] layers that modify convolutional layers. Alternatively, the work of [6] proposes to train a linear classifier on top of the features directly from few samples from new categories. In the same line of work, [25] performs implanting, i.e. learning new convolutional filters within the existing CNN layers.

Other methods do not perform adaptation at all. It has been shown in [9, 25, 45] that training a regularized CNN for classification on the meta-training set and using these features directly with a nearest centroid classifier produces state-of-the-art few-shot accuracy. To obtain a robust feature extractor, [9] distills an ensemble of networks into a single extractor to obtain low-variance features.

Finally, the methods [41, 45] are the most relevant to our work as they also tackle the problem of cross-domain few-shot classification [50]. In [41], the authors propose to adapt each hidden layer of a feature extractor to a new task. They first obtain a task embedding and use a conditional neural process [13] to generate the parameters of modulating FiLM [35] layers, as well as the weights of a classifier for new categories. An adaptation-free method [45] instead trains a CNN on ImageNet, while optimizing for high validation accuracy on other datasets using hyper-parameter search. When tested on domains similar to ImageNet, the method demonstrates the highest accuracy; however, it is outperformed by adaptation-based methods on other data distributions [45].

Universal Representations.

The term “universal representation” was coined by [3] and refers to an image representation that works equally well for a large number of visual domains. The simplest way to obtain a universal representation is to train a separate feature extractor for each visual domain and use only the appropriate one at test time. To reduce the computational footprint, [3] investigates whether a single CNN can be used to perform image classification on very different domains. To achieve this goal, the authors propose to share most of the parameters between domains during training and keep a small set of domain-specific parameters. Such adaptive feature sharing is implemented using conditional batch normalization [19], i.e. there is a separate set of batch-norm parameters for every domain. The work of [38] extends the idea of domain-specific computations in a single network and proposes parametric network families, which consist of two parts: 1) a CNN feature extractor with universal parameters shared across all domains, and 2) domain-specific modules trained on top of the universal weights to maximize the performance on each domain. It has been found important [38] to adapt both shallow and deep layers of a neural network in order to successfully handle multiple visual domains. We use this method in our work when training a parametric network family to produce a universal representation; however, instead of the originally proposed parallel adapters, we use much simpler FiLM layers for domain-specific computations. Importantly, parametric network families [38] and FiLM [35] adapters only provide a way to efficiently compute a universal representation; they are not directly useful for few-shot learning. However, applying our SUR strategy to this representation produces a useful set of features, leading to state-of-the-art results in few-shot learning.

3 Proposed Approach

We now present our approach for few-shot learning, starting with preliminaries.

3.1 Few-Shot Classification with Nearest Centroid Classifier

The end-goal of few-shot classification is to produce a model which, given a new learning episode and a few labeled examples, is able to generalize to unseen examples for that episode. In other words, the model learns from a small training set $S = \{(x_i, y_i)\}$, called a support set, and is evaluated on a held-out test set $Q$, called a query set. The $(x_i, y_i)$'s represent image-label pairs, while the pair $(S, Q)$ represents the learning episode, also called a few-shot task. To fulfill this objective, the problem is addressed in two steps. During the meta-training stage, a learning algorithm receives a large dataset $D_{\text{train}}$ with base categories $C_{\text{train}}$, where it must learn a general feature extractor $f$. During the meta-testing stage, one is given a target dataset $D_{\text{test}}$ with categories $C_{\text{test}}$, used to repeatedly sample few-shot tasks $(S, Q)$. Importantly, meta-training and meta-testing datasets have no categories in common, i.e. $C_{\text{train}} \cap C_{\text{test}} = \emptyset$.

During the meta-testing stage, we use the feature representation $f$ to build a nearest centroid classifier (NCC). Given a support set $S$, for each category $k$ present in this set, we build a class centroid $c_k$ by averaging the image representations belonging to this category:

$$c_k = \frac{1}{|S_k|} \sum_{x_i \in S_k} f(x_i), \qquad S_k = \{\, x_i \mid (x_i, y_i) \in S,\ y_i = k \,\}. \tag{1}$$
To classify a new sample $x$, we choose a distance function $d$, such as the Euclidean distance or the negative cosine similarity, and assign the sample to the closest centroid: $\hat{y} = \arg\min_k d(f(x), c_k)$. It has been empirically observed [6, 16] that the negative cosine similarity works better for few-shot learning; thus we use it in all our experiments.
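For concreteness, the NCC decision rule can be sketched in a few lines of NumPy. This is a minimal illustration; the function name and array layout below are our own, not part of the original implementation.

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Nearest centroid classifier (NCC) with negative cosine similarity.

    support_feats: (n, d) features of the support set
    support_labels: (n,) integer class labels
    query_feats: (m, d) features of the query set
    Returns (m,) predicted labels.
    """
    classes = np.unique(support_labels)
    # Build one centroid per class by averaging support features (Eq. 1).
    centroids = np.stack([support_feats[support_labels == k].mean(axis=0)
                          for k in classes])
    # Cosine similarity between every query and every centroid.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = q @ c.T  # (m, num_classes)
    # Assign each query to the closest centroid (highest cosine similarity).
    return classes[sims.argmax(axis=1)]
```

Note that maximizing cosine similarity is equivalent to minimizing the negative cosine similarity used as the distance $d$ above.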

3.2 Method

With the NCC classifier defined above, we may now formally define the concept of universal representation and the procedure for selecting relevant features. In our approach, the meta-training set is used to train a set of feature extractors $\{f_1, \dots, f_K\}$ that form a universal set of features. Each feature extractor $f_i$ maps an input image $x$ into a $d_i$-dimensional representation $f_i(x)$. These features should capture different types of semantics and can be obtained in various manners, as detailed in Section 3.3.

Parametric Universal Representations.

One way to transform a universal set of features into a vectorized universal representation is, for example, to concatenate all image representations from this set (with or without $\ell_2$-normalization). As we show in the experimental section, directly using such a representation for classification with NCC does not work well, as many features irrelevant for a new task are present in the representation. Therefore, we are interested in implementing a selection mechanism. In order to do so, we define a selection operation as follows, given a vector $\lambda = [\lambda_1, \dots, \lambda_K]$ in $[0, 1]^K$:

$$f_\lambda(x) = \left[\, \lambda_1 \bar{f}_1(x);\ \lambda_2 \bar{f}_2(x);\ \dots;\ \lambda_K \bar{f}_K(x) \,\right], \tag{2}$$

where $\bar{f}_i(x)$ is simply $f_i(x)$ after $\ell_2$-normalization. We call $f_\lambda(x)$ a parametrized universal representation, as it contains information from the whole universal set but the exact representation depends on the selection parameters $\lambda$. Using this mechanism, it is possible to select various combinations of features from the universal representation by setting more than one $\lambda_i$ to a non-zero value.
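The selection operation admits a very short sketch, assuming each extractor's output is given as a plain NumPy vector (the function name is illustrative):

```python
import numpy as np

def select_features(feature_blocks, lam):
    """Parametrized universal representation f_lambda(x).

    feature_blocks: list of K arrays of shape (d_i,), one per extractor
    lam: array of K selection weights in [0, 1]
    Each block is l2-normalized, scaled by its weight, and concatenated.
    """
    normed = [f / np.linalg.norm(f) for f in feature_blocks]
    return np.concatenate([l * f for l, f in zip(lam, normed)])
```

For instance, setting the weight vector to all zeros except a single 1 keeps exactly one extractor's (normalized) features and suppresses the rest.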

Finding optimal selection parameters.

Feature selection is performed during meta-testing by optimizing a probabilistic model, leading to optimal parameters $\lambda^\star$, given the support set $S$ of a new task.

Specifically, we consider the NCC classifier from Section 3.1, using $f_\lambda$ instead of $f$ (the class centroids $c_k$ are also computed from $f_\lambda$), and introduce the likelihood function

$$p(y = k \mid x) = \frac{\exp\left(-d\left(f_\lambda(x), c_k\right)\right)}{\sum_{j} \exp\left(-d\left(f_\lambda(x), c_j\right)\right)}. \tag{3}$$
Our goal is then to find optimal parameters $\lambda^\star$ that maximize the likelihood on the support set, which is equivalent to minimizing the negative log-likelihood:

$$\lambda^\star \in \operatorname*{arg\,min}_{\lambda \in [0,1]^K}\ -\sum_{(x_i, y_i) \in S} \log p(y = y_i \mid x_i). \tag{4}$$
In practice, we optimize the objective by performing several steps of gradient descent. A solution to this problem encourages large $\lambda$ values to be assigned to representations where the intra-class similarity is high while the inter-class similarity is low. This becomes clearer when writing the cosine similarity between $\lambda$-parametrized universal representations explicitly:

$$\cos\left(f_\lambda(x), f_\lambda(x')\right) = \frac{\sum_{i=1}^{K} \lambda_i^2 \cos\left(f_i(x), f_i(x')\right)}{\sum_{i=1}^{K} \lambda_i^2}, \tag{5}$$

which is a convex combination of the per-extractor cosine similarities, weighted by $\lambda_i^2$.
The proposed procedure is what we call Selecting from Universal Representations (SUR).
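The whole procedure can be sketched end to end. The sketch below is a simplified illustration, not the actual implementation: it assumes each feature block is already row-wise $\ell_2$-normalized, replaces backpropagation with finite-difference gradients, and uses plain projected gradient descent; all names are our own.

```python
import numpy as np

def sur_select(blocks, labels, steps=40, lr=0.5, eps=1e-4):
    """Sketch of SUR: optimize selection weights lam on the support set.

    blocks: list of K arrays; blocks[i] has shape (n, d_i) and holds the i-th
            extractor's (row-wise l2-normalized) features of the n support images
    labels: (n,) integer labels
    Returns lam in [0, 1]^K.
    """
    K = len(blocks)
    classes = np.unique(labels)

    def nll(lam):
        # Concatenate weighted blocks: f_lambda for every support image.
        feats = np.concatenate([l * b for l, b in zip(lam, blocks)], axis=1)
        cents = np.stack([feats[labels == k].mean(0) for k in classes])
        fn = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
        cn = cents / (np.linalg.norm(cents, axis=1, keepdims=True) + 1e-12)
        logits = fn @ cn.T  # cosine similarities, used as logits
        logits -= logits.max(1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(1, keepdims=True)
        idx = np.searchsorted(classes, labels)
        return -np.log(p[np.arange(len(labels)), idx] + 1e-12).mean()

    lam = np.full(K, 0.5)  # start from equal weighting of all extractors
    for _ in range(steps):
        # Central finite differences in place of autodiff (illustration only).
        grad = np.array([(nll(lam + eps * np.eye(K)[i]) -
                          nll(lam - eps * np.eye(K)[i])) / (2 * eps)
                         for i in range(K)])
        lam = np.clip(lam - lr * grad, 0.0, 1.0)  # project back onto [0, 1]^K
    return lam
```

On a task where one block separates the classes and another carries no class information, the optimization drives the weight of the informative block up and the weight of the uninformative one down.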

It is worth noting that the nearest centroid classifier is a simple non-parametric model with limited capacity, which stores only a single vector to describe a class. Such limited capacity becomes an advantage when only a few annotated samples are available, as it effectively prevents overfitting. When training and testing across similar domains, SUR is able to select from the universal representation the features optimized for each visual domain. When the target domain distribution does not match any of the training distributions, it is nevertheless able to adapt a few parameters to the target distribution. In this sense, our method performs a limited form of adaptation, with few parameters only, which is reasonable given that the target task has only a small number of annotated samples.

Sparsity in selection.

As said above, our selection algorithm is a form of weak adaptation, where the parameters $\lambda$ are adjusted to optimize the image representation, given an input task. The selection parameters are constrained to lie in $[0, 1]$. However, as we show in the experimental section, the resulting vector $\lambda^\star$ is sparse in practice, with many entries equal to 0 or 1. This empirical behavior suggests that our algorithm indeed performs selection of relevant features – a simple and robust form of adaptation. To promote further sparsity, one may use an additional sparsity-inducing penalty such as the $\ell_1$-norm during the optimization [30]; however, our experiments show that doing so is not necessary to achieve good results.

3.3 Obtaining Universal Representations

In this section, we explain how to obtain a universal set of feature extractors when one or multiple domains are available for training. Three variants are used in this paper, which are illustrated in Figure 2.

Figure 2: Different ways of obtaining a universal representation. (a) A single image is embedded with multiple domain-specific networks. (b) Using a parametric network family [38] to obtain universal representations. Here, the gray blocks correspond to shared computations and colored blocks correspond to domain-specific FiLM [35] layers. (c) Using intermediate layer activations form a single CNN to form a universal feature set. Gray blocks and arrows indicate shared layers and computation flow respectively, while domain specific ones are shown in corresponding colors. Best viewed in color.

Multiple training domains.

In the case of multiple training domains, we assume that $K$ different datasets are available for building a universal representation. We start with a straightforward solution and train a feature extractor for each visual domain independently. This results in the desired universal set $\{f_1, \dots, f_K\}$. A universal representation is then computed by concatenating the outputs of this set, as illustrated in Figure 2(a).

To compute universal representations with a single network, we use the parametric network families proposed in [38]. We follow the original paper and first train a general feature extractor – a ResNet – using the provided training split of ImageNet, and then freeze the network’s weights. For each of the remaining datasets, task-specific parameters are learned by minimizing the corresponding training loss. We use FiLM [35] layers as domain-specific modules and insert them after each batch-norm layer in the ResNet. We choose FiLM layers over the originally proposed parallel adapters [38] because FiLM is much simpler, i.e. it performs a channel-wise affine transformation and contains far fewer parameters. This helps to avoid overfitting on small datasets. In summary, each of the datasets has its own set of FiLM layers, and the base ResNet together with a set of domain-specific FiLM layers constitutes one extractor of the universal set $\{f_1, \dots, f_K\}$. To compute the features of the $i$-th domain, $f_i(x)$, we forward the input image through the ResNet, where all the intermediate activations are modulated with the FiLM layers trained on this domain, as illustrated in Figure 2(b). Using a parametric network family instead of separate networks to obtain a universal representation reduces the number of stored parameters roughly by a factor of 8, the number of training domains. However, to be actually useful for few-shot learning, such a representation must be processed by our SUR approach, as described in Section 3.2.
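For concreteness, a FiLM layer is nothing more than a per-channel affine transformation of a feature map. A minimal NumPy sketch (class name and layout are illustrative, not the actual implementation) could look as follows:

```python
import numpy as np

class FiLM:
    """Channel-wise affine modulation (FiLM): y = gamma * x + beta.

    One (gamma, beta) pair per channel and per domain; such layers are
    inserted after each batch-norm layer of the shared ResNet.
    """
    def __init__(self, num_channels):
        self.gamma = np.ones(num_channels)   # identity initialization
        self.beta = np.zeros(num_channels)

    def __call__(self, x):
        # x: feature map of shape (batch, channels, height, width)
        return (self.gamma[None, :, None, None] * x
                + self.beta[None, :, None, None])
```

With the identity initialization, the layer leaves features unchanged; each domain then learns its own (gamma, beta) pairs while the convolutional weights stay frozen, which is why the parameter overhead per domain is so small.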

Single training domain.

Most of the current few-shot classification benchmarks [1, 23, 37, 52] include a single visual domain, i.e. training and testing splits are formed from different categories of the same dataset. Training a set of different feature extractors on the same domain with the strategies proposed above is not possible. Instead, we propose to use a single feature extractor and to obtain the set of universal features from its intermediate activations, as illustrated in Figure 2(c). Different intermediate layers extract different features, and some of them may be more useful than others for a new few-shot task.
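This single-network variant can be sketched as follows, assuming the network is given as an ordered list of callable blocks (the helper below is illustrative, not the actual implementation):

```python
import numpy as np

def universal_set_from_layers(layers, x, num_last=6):
    """Build a universal feature set from one network's intermediate layers.

    layers: ordered list of callables (the network's blocks)
    x: input array
    Returns the (spatially pooled, l2-normalized) outputs of the last
    `num_last` layers -- each one acts as one feature extractor f_i.
    """
    acts = []
    for layer in layers:
        x = layer(x)
        acts.append(x)
    feats = []
    for a in acts[-num_last:]:
        # Global average pooling over spatial dimensions, if present.
        v = a.mean(axis=(-2, -1)) if a.ndim > 1 else a
        v = np.ravel(v)
        feats.append(v / (np.linalg.norm(v) + 1e-12))
    return feats
```

The resulting list of vectors plays the same role as the set of domain-specific extractors in the multi-domain case, and SUR is applied to it unchanged.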

4 Experiments

We now present the experiments to analyze the performance of our selection strategy, starting with implementation details.

4.1 Datasets and Experiments Details


We use mini-ImageNet [37] and Meta-Dataset [50] to evaluate the proposed approach. The mini-ImageNet [37] dataset consists of 100 categories (64 for training, 16 for validation, 20 for testing) from the original ImageNet [43] dataset, with 600 images per class. Since all the categories come from the same dataset, we use mini-ImageNet to evaluate our feature selection strategy in single-domain few-shot learning. During testing on mini-ImageNet, we measure performance over tasks where only 1 or 5 images (shots) per category are given for adaptation, and the number of classes in a task is fixed to 5, i.e. 5-way classification. All images are resized to $84 \times 84$, as suggested originally by [37].

Meta-Dataset [50] is much larger than previous few-shot learning benchmarks; it is in fact a collection of multiple datasets with different data distributions. It includes ImageNet [43], Omniglot [23], Aircraft [31], CU-Birds [53], Describable Textures [7], Quick Draw [12], Fungi [47], VGG-Flower [33], Traffic Sign [18] and MSCOCO [26]. A short description of each dataset is given in Appendix. The Traffic Sign and MSCOCO datasets are reserved for testing only, while all other datasets have their corresponding train, val and test splits. To better study out-of-training-domain behavior, we follow [41] and add 3 more testing datasets, namely MNIST [4], CIFAR10 [21] and CIFAR100 [21]. In contrast to mini-ImageNet, the number of shots and ways is not fixed here and varies from one few-shot task to another. All images are resized to $84 \times 84$ resolution, as suggested by the authors of the dataset [50].

Implementation Details.

When experimenting with Meta-Dataset, we follow [41] and use ResNet18 [17] as the feature extractor. The training details for each dataset are described in Appendix. To report test results on Meta-Dataset, we perform an independent evaluation for each of the 10 provided datasets, plus the 3 extra datasets suggested by [41]. We follow [50] and sample 600 tasks for evaluation on each dataset within Meta-Dataset.

When experimenting with mini-ImageNet, we follow popular works [14, 25, 34] and use ResNet12 [34] as a feature extractor. During testing, we use mini-ImageNet’s test set to sample 1000 5-way classification tasks. We evaluate scenarios where only 1 or 5 examples (shots) of each category are provided for training and 15 for evaluation.
On both datasets, during meta-training, we use a cosine classifier with a learnable softmax temperature [6]. During testing, classes and the corresponding train/test examples are sampled at random. For all our experiments, we report the mean accuracy (in %) over all test tasks, with confidence intervals.

Feature Selection.

To perform feature selection from the universal representation, we optimize the selection parameters $\lambda$ (defined in Eq. 2) to minimize the NCC classification loss (Eq. 4) on the support set. Each individual scalar weight $\lambda_i$ is kept between 0 and 1 using the sigmoid function, i.e. $\lambda_i = \sigma(\alpha_i)$ for an unconstrained scalar $\alpha_i$. All $\alpha_i$ are initialized with zeros. We optimize the parameters using gradient descent for 40 iterations. At each iteration, we use the whole support set to build a nearest centroid classifier, and then we use the same set of examples to compute the loss given by Eq. 4. Then, we compute gradients w.r.t. the $\alpha_i$ and use the Adadelta [56] optimizer with a fixed learning rate to perform parameter updates.

4.2 Cross-Domain Few-shot Classification

In this section, we evaluate the ability of SUR to handle different visual domains in Meta-Dataset [50]. First, we motivate the use of universal representations and show the importance of feature selection. Then, we evaluate the proposed strategy against important baselines and state-of-the-art few-shot algorithms.

Evaluating domain-specific feature extractors.

Meta-Dataset includes 8 datasets for training, i.e. ImageNet, Omniglot, Aircraft, CU-Birds, Textures, Quick Draw, Fungi and VGG-Flower. We treat each dataset as a separate visual domain and obtain a universal set of features by training 8 domain-specific feature extractors, i.e. a separate ResNet18 for each dataset. Each feature extractor is trained independently from the other models, with its own training schedule specified in Appendix. We test the performance of each feature extractor (with NCC) on every test dataset specified in Section 4.1 and report the results in Table 1. Among the 8 datasets seen during training, 5 datasets are better solved with their own features, while the 3 others benefit more from ImageNet features. In general, ImageNet features suit 8 out of 13 test datasets best, while the 5 others require a different feature extractor for better accuracy. These results suggest that none of the domain-specific feature extractors alone can perform equally well on all the datasets simultaneously; however, selecting appropriate representations from the whole universal feature set could lead to superior accuracy.

Features trained on:
Dataset ImageNet Omniglot Aircraft Birds Textures Quick Draw Fungi VGG Flower
ImageNet 56.3±1.0 18.5±0.7 21.5±0.8 23.9±0.8 26.1±0.8 23.1±0.8 31.2±0.8 24.3±0.8
Omniglot 67.5±1.2 92.4±0.5 55.2±1.3 59.5±1.3 48.4±1.3 80.0±0.9 59.7±1.2 54.2±1.4
Aircraft 50.4±0.9 17.0±0.5 85.4±0.5 30.9±0.7 23.9±0.6 25.2±0.6 33.7±0.8 25.1±0.6
Birds 71.7±0.8 13.7±0.6 18.0±0.7 64.7±0.9 20.2±0.7 17.9±0.6 40.7±0.9 24.5±0.8
Textures 70.2±0.7 30.6±0.6 33.1±0.6 37.1±0.6 57.3±0.7 38.5±0.7 50.4±0.7 45.4±0.8
Quick Draw 52.4±1.0 50.3±1.0 36.0±1.0 38.8±1.0 35.7±0.9 80.7±0.6 35.4±1.0 39.4±1.0
Fungi 39.1±1.0 10.5±0.5 14.0±0.6 21.2±0.7 15.5±0.7 13.0±0.6 62.7±0.9 22.6±0.8
VGG Flower 84.3±0.7 24.8±0.7 44.6±0.8 57.2±0.8 42.3±0.8 36.9±0.8 76.1±0.8 77.1±0.7
Traffic Sign 63.1±0.8 44.0±0.9 57.7±0.8 61.7±0.8 55.2±0.8 50.2±0.8 53.5±0.8 57.9±0.8
MSCOCO 52.8±1.0 15.1±0.7 21.2±0.8 22.5±0.8 25.8±0.9 19.9±0.7 29.3±0.9 27.3±0.9
MNIST 77.2±0.7 90.9±0.5 69.5±0.7 74.2±0.7 55.9±0.8 86.2±0.6 69.4±0.7 66.9±0.7
CIFAR 10 66.3±0.8 33.0±0.7 37.8±0.7 39.3±0.7 39.2±0.7 36.1±0.7 33.6±0.7 38.2±0.7
CIFAR 100 55.7±1.0 14.9±0.7 22.5±0.8 25.6±0.8 24.1±0.8 21.4±0.7 22.2±0.8 26.5±0.9
Table 1: Performance of feature extractors trained on different datasets of Meta-Dataset. The first column indicates the dataset within Meta-Dataset used for testing; the first row gives the name of the dataset used to pre-train the feature extractor. The rest of the table gives the few-shot classification accuracy of each feature extractor when applied with NCC. The average accuracy and confidence intervals are computed over 600 few-shot tasks. The numbers in bold indicate the best accuracy on the corresponding dataset.

Evaluating feature selection.

We now employ SUR – our strategy for feature selection – as described in Sec. 3.2. A parametrized universal representation is obtained from the universal feature set by concatenation. It is then multiplied by the selection parameters $\lambda$, which are optimized for each new few-shot task, as detailed in Section 4.1. We run this procedure on all 13 testing datasets and report the results in Figure 3 (a). We compare our method with the following baselines: a) using a single feature extractor pre-trained on the ImageNet split of Meta-Dataset (denoted “ImageNet-F”), b) using a single feature extractor pre-trained on the union of the 8 training splits of Meta-Dataset (denoted “Union-F”), and c) manually setting all $\lambda_i = 1$, which corresponds to simple concatenation (denoted “Concat-F”). It is clear from the figure that the features provided by SUR have much better overall performance than any of the baselines, on both seen and unseen domains.

Figure 3: Performance of different few-shot methods on MetaDataset. (a) Comparison of our selection strategy to baselines. The chart is generated from the table in Appendix. (b) Comparing our selection strategy to state-of-the-art few-shot learning methods. The chart is generated from Table 2. The axes indicate the accuracy of methods on a particular dataset. The legend specifies colors, corresponding to different methods.

Comparison to other approaches.

We now compare SUR against state-of-the-art few-shot methods and report the results in Table 2. The results on the MNIST, CIFAR 10 and CIFAR 100 datasets are missing for most of the approaches because those numbers were not reported in the corresponding original papers. Comparison to the best-performing methods on common datasets is summarized in Figure 3 (b). We see that SUR demonstrates state-of-the-art results on 9 out of 13 datasets. BOHB-E [45] outperforms our approach on the Birds, Textures and VGG Flower datasets. This is not surprising, since these are the only datasets that benefit more from ImageNet features than from their own (see Table 1), and BOHB-E [45] is essentially an ensemble of multiple ImageNet-pretrained networks. When tested outside the training domain, SUR consistently outperforms CNAPs [41] – the state-of-the-art adaptation-based method. Moreover, SUR shows the best results on all 5 datasets never seen during training.

Test Dataset ProtoNet  [48] MAML  [11] Proto-MAML  [50] CNAPs  [41] BOHB-E [45] SUR (ours) SUR-pf (ours)
ImageNet 44.5±1.1 32.4±1.0 47.9±1.1 52.3±1.0 55.4±1.1 56.3±1.1 56.4±1.2
Omniglot 79.6±1.1 71.9±1.2 82.9±0.9 88.4±0.7 77.5±1.1 93.1±0.5 88.5±0.8
Aircraft 71.1±0.9 52.8±0.9 74.2±0.8 80.5±0.6 60.9±0.9 85.4±0.7 79.5±0.8
Birds 67.0±1.0 47.2±1.1 70.0±1.0 72.2±0.9 73.6±0.8 71.4±1.0 76.4±0.9
Textures 65.2±0.8 56.7±0.7 67.9±0.8 58.3±0.7 72.8±0.7 71.5±0.8 73.1±0.7
Quick Draw 65.9±0.9 50.5±1.2 66.6±0.9 72.5±0.8 61.2±0.9 81.3±0.6 75.7±0.7
Fungi 40.3±1.1 21.0±1.0 42.0±1.1 47.4±1.0 44.5±1.1 63.1±1.0 48.2±0.9
VGG Flower 86.9±0.7 70.9±1.0 88.5±1.0 86.0±0.5 90.6±0.6 82.8±0.7 90.6±0.5
Traffic Sign 46.5±1.0 34.2±1.3 34.2±1.3 60.2±0.9 57.5±1.0 70.4±0.8 65.1±0.8
MSCOCO 39.9±1.1 24.1±1.1 24.1±1.1 42.6±1.1 51.9±1.0 52.4±1.1 52.1±1.0
MNIST - - - 92.7±0.4 - 94.3±0.4 93.2±0.4
CIFAR 10 - - - 61.5±0.7 - 66.8±0.9 66.4±0.8
CIFAR 100 - - - 50.1±1.0 - 56.6±1.0 57.1±1.0

Table 2: Comparison to existing methods on Meta-Dataset. The first column indicates the name of the dataset used for testing. The first row gives the name of a few-shot algorithm. Here, SUR stands for our method, i.e. selection from a set of different feature extractors, and SUR-pf for selection from a parametric network family. The body of the table contains the average accuracy and confidence intervals computed over 600 few-shot tasks. The numbers in bold have confidence intervals intersecting with that of the most accurate method.

Universal representations with parametric network family.

While it is clear that SUR outperforms other approaches, one may raise a concern that the improvement is due to the increased number of parameters, that is, we use 8 times more parameters than a single ResNet18. To address this concern, we use a parametric network family [38] that has only marginally more parameters than a single ResNet18. As described in Section 3.3, the parametric network family uses ResNet18 as a base feature extractor and FiLM [35] layers for feature modulation. The total number of additional parameters, represented by all domain-specific FiLM layers, is a small fraction of the ResNet18 parameters. For comparison, the CNAPs adaptation mechanism is larger than ResNet18 itself. To train the parametric network family, we first train a base CNN feature extractor on ImageNet. Then, for each remaining training dataset, we learn a set of FiLM layers, as detailed in Section 3.3. To obtain a universal feature set for an image, we run inference 8 times, each time with the set of FiLM layers corresponding to a different domain. Once the universal feature set is built, our selection mechanism is applied to it as described before (see Section 4.1). The results of using SUR with a parametric network family are presented in Table 2 as “SUR-pf”. The table shows that the accuracy on datasets similar to ImageNet improves, which suggests that parameter sharing is beneficial in this case and confirms the original findings of [38]. However, the opposite is true for very different visual domains such as Fungi and Quick Draw. This implies that to do well on significantly different datasets, the base CNN filters must be learned on those datasets from scratch, and simple feature modulation is not competitive.

Method          Aggregation   5-shot          1-shot
Cls             last          76.28 ± 0.41    60.09 ± 0.61
                concat        75.67 ± 0.41    57.15 ± 0.61
                SUR           79.25 ± 0.41    60.79 ± 0.62
DenseCls        last          78.25 ± 0.43    62.61 ± 0.61
                concat        79.59 ± 0.42    62.74 ± 0.61
                SUR           80.04 ± 0.41    63.13 ± 0.62
Robust20-dist   last          81.06 ± 0.41    64.14 ± 0.62
                concat        80.79 ± 0.41    63.22 ± 0.63
                SUR           81.19 ± 0.41    63.93 ± 0.63
Table 3: Comparison to other methods on 1- and 5-shot mini-ImageNet. The first column specifies the way the feature extractor is trained, while the second column reflects how the final image representation is constructed. The last two columns display the accuracy on 1- and 5-shot learning tasks. To evaluate our methods, we performed independent experiments on mini-ImageNet-test and report the average accuracy and confidence intervals. The best accuracy is in bold.

4.3 Single-domain Few-shot Classification

In this section, we demonstrate the benefits of applying SUR to few-shot classification, when training and testing classes come from the same dataset. More specifically, we show how to use our feature selection strategy in order to improve existing adaptation-free methods.
To test SUR in the single-domain scenario, we use the mini-ImageNet benchmark and solve 1-shot and 5-shot classification tasks, as described in Section 4.1. When only one domain is available, we obtain a universal set of features from the activations of the network’s intermediate layers, as described in Section 3.3.2.
We experiment with 3 adaptation-free methods. They all use the last layer of ResNet12 as image features and build an NCC on top; however, they differ in how the feature extractor is trained. The method we call “Cls” simply trains ResNet12 for classification on the meta-training set. The work of [25] performs dense classification instead (dubbed “DenseCls”). Finally, the “Robust20-dist” feature extractor [9] is obtained by ensemble distillation. For every method, the universal feature set is formed from the activations of the last 6 layers of the network; as we show in the Appendix, the remaining intermediate layers do not contain information useful for the final task.
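For reference, the NCC built on top of frozen features reduces to computing one centroid per class from the support set and assigning each query to the nearest centroid. A minimal NumPy sketch, using cosine similarity (function and variable names are ours):

```python
import numpy as np

def ncc_predict(support, support_labels, query):
    """Nearest-centroid classifier (NCC) on top of frozen features.

    support: (n_support, d) feature vectors of the labeled few-shot samples.
    query:   (n_query, d) feature vectors to classify.
    Returns the predicted class index for each query sample.
    """
    classes = np.unique(support_labels)
    centroids = np.stack([support[support_labels == c].mean(axis=0)
                          for c in classes])
    # Cosine similarity between normalized queries and class centroids.
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    cen = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return classes[np.argmax(q @ cen.T, axis=1)]
```

Being non-parametric, this classifier requires no training beyond computing the centroids, which is what makes it attractive in the few-shot regime.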
Here, we explore different ways of exploiting such a universal set of features for few-shot classification and report the results in Table 3. We can see that using SUR to select the layers appropriate for classification usually works better than using only the penultimate layer (dubbed “last”) or concatenating all the features together (denoted “concat”). For Robust20-dist, we observe only incremental improvements for 5-shot classification and a slight decrease in the 1-shot scenario. We attribute this to the fact that the penultimate layer of this network is probably the most useful for new problems and, if not selected, the final accuracy may suffer.
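To make the selection step concrete, the sketch below implements a deliberately simplified hard-selection variant: instead of optimizing soft selection weights by SGD as SUR does, it scores each candidate representation by its leave-one-out NCC accuracy on the support set and keeps the best one. All names are ours, and leave-one-out scoring assumes at least two shots per class:

```python
import numpy as np

def _ncc(support, labels, query):
    # Minimal nearest-centroid prediction with cosine similarity.
    classes = np.unique(labels)
    cen = np.stack([support[labels == c].mean(0) for c in classes])
    cen = cen / np.linalg.norm(cen, axis=1, keepdims=True)
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    return classes[np.argmax(q @ cen.T, axis=1)]

def select_block(blocks, labels):
    """Return the index of the feature block (candidate representation)
    with the best leave-one-out NCC accuracy on the support set.

    blocks: list of (n_support, d_k) arrays, one per feature extractor.
    """
    labels = np.asarray(labels)
    scores = []
    for feats in blocks:
        hits = 0
        for i in range(len(feats)):
            mask = np.arange(len(feats)) != i
            hits += _ncc(feats[mask], labels[mask], feats[i:i + 1])[0] == labels[i]
        scores.append(hits / len(feats))
    return int(np.argmax(scores))
```

This hard variant illustrates the principle only; soft weighting, as in SUR, lets gradients combine several representations when no single one suffices.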

Figure 4: Frequency of selected features depending on the test domain in MetaDataset. The top row indicates the testing dataset; the leftmost column indicates the dataset the feature extractor has been trained on. The cell at location (i, j) reflects the average value of the selection parameter assigned to the i-th feature extractor when tested on the j-th dataset. The values are averaged over 600 few-shot test tasks for each dataset.

4.4 Analysis of Feature Selection

In this section, we analyze the optimized selection parameters when applying SUR on MetaDataset. Specifically, we perform the experiments from Section 4.2, where we select appropriate representations from a set generated by 8 independent networks. For each test dataset, we then average the selection vectors (after being optimized for 40 SGD steps) over 600 test tasks and present them in Figure 4. First, we can see that the resulting selection vectors are sparse, confirming that most of the time SUR actually selects a few relevant features rather than taking all features with similar weights. Second, for a given test domain, SUR tends to select feature extractors trained on similar visual domains. Interestingly, for datasets coming from exactly the same distribution, i.e., CIFAR 10 and CIFAR 100, the averaged selection parameters are almost identical. All of the above suggests that the selection parameters can be interpreted as encoding the importance of the features’ visual domains for the test domain.

5 Acknowledgements

This work was funded in part by the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute) and reference ANR-19-P3IA-0003 (3IA MIAI@Grenoble Alpes), and was supported by the ERC grant number 714381 (SOLARIS) and a gift from Intel.

Appendix A Implementation and Datasets details

a.1 Full MetaDataset Description

The MetaDataset includes ImageNet [43] (1000 categories of natural images), Omniglot [23] (1623 categories of black-and-white hand-written characters from different alphabets), Aircraft [31] (100 classes of aircraft types), CU-Birds [53] (200 different bird species), Describable Textures [7] (43 texture categories), Quick Draw [12] (345 different categories of black-and-white sketches), Fungi [47] (1500 mushroom types), VGG-Flower [33] (102 flower species), Traffic Sign [18] (43 classes of traffic signs) and MSCOCO [26] (80 categories of day-to-day objects). For testing, we additionally employ MNIST [4] (10 hand-written digits), CIFAR10 [21] (10 classes of common objects), and CIFAR100 [21] (100 classes of common objects). Figure A1 illustrates random samples drawn from each dataset.

Figure A1: Samples from all MetaDataset datasets. Each line gives 8 random samples from the dataset specified above it.

a.2 MetaDataset training details

When using multiple ResNet18 networks on MetaDataset (a single ResNet per dataset) to build a universal representation, we train the networks according to the following procedure. For optimization, we use SGD with momentum and adjust the learning rate using cosine annealing [29]. The starting learning rate, the maximum number of training iterations (“Max iter.”) and the annealing frequency (“annealing freq.”) are set individually for each dataset. To regularize training, we use data augmentation, such as random crops and random color augmentations, together with a constant weight decay. For each dataset, we run a grid search over the batch size in {8, 16, 32, 64} and pick the value that maximizes accuracy on the validation set. The hyper-parameters maximizing the validation accuracy are given in Table A1.
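The cosine annealing schedule with warm restarts [29] can be sketched as follows; the base learning rate and annealing frequency used below are placeholder values, since the actual ones are set per dataset (Table A1):

```python
import math

def cosine_annealed_lr(step, base_lr, anneal_freq):
    """SGDR-style cosine annealing with warm restarts: within each cycle of
    `anneal_freq` steps, the learning rate follows a half cosine from
    `base_lr` down to 0, then restarts at `base_lr`."""
    t = step % anneal_freq
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t / anneal_freq))

# Placeholder values for illustration only.
schedule = [cosine_annealed_lr(s, base_lr=0.1, anneal_freq=1000)
            for s in range(3000)]
```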

When training a parametric network family for building universal representations, we start from a ResNet18 already trained on ImageNet, which we keep fixed for the rest of the training procedure. For each new dataset, we then train a set of domain-specific FiLM layers modulating intermediate ResNet layers, as described in Section 3.3.1. Here, we also use cosine annealing as the learning rate policy and employ weight decay and data augmentation, as specified above. In Table A2, we report the training hyper-parameters for each of the datasets.

a.3 mini-ImageNet training details

All the methods we evaluate on mini-ImageNet use ResNet12 [34] as a feature extractor. It is trained with batch size 200 for 48 epochs. For optimization, we use the Adam optimizer [20] with an initial learning rate of 0.1, which is kept constant for the first 36 epochs. Between epochs 36 and 48, the learning rate is exponentially decreased, i.e., divided by a constant factor after each epoch. As regularization, we use weight decay and data augmentation such as random crops, flips and color transformations.
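As a sketch, the resulting schedule looks like this; the per-epoch decay factor is not specified above, so the value 0.5 below is a placeholder of our own:

```python
def lr_at_epoch(epoch, lr_init=0.1, decay=0.5, decay_start=36):
    """Learning-rate schedule sketch for mini-ImageNet training: constant
    for the first `decay_start` epochs, then multiplied by `decay` after
    every subsequent epoch. `decay=0.5` is a placeholder value."""
    if epoch < decay_start:
        return lr_init
    return lr_init * decay ** (epoch - decay_start)
```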

Dataset learning rate weight decay Max iter. annealing freq. batch size
ImageNet 480,000 48,000 64
Omniglot 50,000 3,000 16
Aircraft 50,000 3,000 8
Birds 50,000 3,000 16
Textures 50,000 1,500 32
Quick Draw 480,000 48,000 64
Fungi 480,000 15,000 32
VGG Flower 50,000 1,500 8
Table A1: Training hyper-parameters of individual feature networks on MetaDataset. The first column indicates the dataset used for training. The first row gives the name of the hyper-parameter. The body of the table contains the hyper-parameters that produced the most accurate model on the validation set.
Dataset learning rate weight decay Max iter. annealing freq. batch size
Omniglot 40,000 3,000 16
Aircraft 30,000 1,500 32
Birds 30,000 1,500 16
Textures 40,000 1,500 16
Quick Draw 400,000 15,000 32
Fungi 400,000 15,000 32
VGG Flower 30,000 3,000 16
Table A2: Training hyper-parameters of the parametric network family on MetaDataset. The first column indicates the dataset used for training. The first row gives the name of the hyper-parameter. The body of the table contains the hyper-parameters that produced the most accurate model on the validation set.

Appendix B Additional Experiments and Ablation Study

b.1 Additional results on MetaDataset

Here, we elaborate on using SUR with a universal set of representations obtained from independent feature extractors (see Section 3.2), report an ablation study on varying the number of extractors in the universal set, and give the detailed results corresponding to Figure 3. Specifically, we use 8 domain-specific ResNet18 feature extractors to build a universal representation and evaluate SUR against the baselines. The results are reported in Table A3.

In the following experiment, we remove the feature extractors trained on Birds, Textures and VGG Flower from the universal feature set and test the performance of SUR on the remaining 5 feature extractors. We chose to remove these extractors because none of them gives the best performance on any of the test sets; hence, they probably do not add new knowledge to the universal set of features. The results are reported in Table A3 as “SUR (5/8)”. As we can see, selecting from the truncated set of features may bring marginal improvements on some datasets, which suggests that even the simplest form of adaptation – selection – may overfit when very few samples are available. On the other hand, for the test-only dataset Traffic Sign, selecting from all features is beneficial. This result is not surprising: one generally does not know which features will be useful for tasks not known beforehand, so removing seemingly useless features may result in a performance drop.

Test Dataset   ImageNet-F   Union-F      Concat-F     SUR          SUR (5/8)
ImageNet       56.3 ± 1.0   44.6 ± 0.7   19.5 ± 1.6   56.3 ± 1.1   56.4 ± 1.1
Omniglot       67.5 ± 1.2   86.1 ± 0.9   91.5 ± 0.5   93.1 ± 0.5   93.3 ± 0.5
Aircraft       50.4 ± 0.9   82.2 ± 0.6   33.7 ± 1.4   85.4 ± 0.7   85.8 ± 0.7
Birds          71.7 ± 0.8   72.1 ± 1.1   18.8 ± 1.3   71.4 ± 1.0   71.2 ± 0.9
Textures       70.2 ± 0.7   62.7 ± 1.0   34.5 ± 0.9   71.5 ± 0.8   70.6 ± 0.8
Quick Draw     52.3 ± 1.0   70.7 ± 0.9   51.2 ± 0.9   81.3 ± 0.6   81.4 ± 0.6
Fungi          39.1 ± 1.0   56.2 ± 0.8   12.6 ± 0.4   63.1 ± 1.0   63.3 ± 1.0
VGG Flower     84.3 ± 0.7   82.5 ± 0.8   40.3 ± 1.2   82.8 ± 0.7   83.0 ± 0.7
Traffic Sign   63.1 ± 0.8   63.8 ± 0.9   48.2 ± 0.6   70.4 ± 0.8   69.5 ± 0.8
MSCOCO         52.8 ± 1.0   42.3 ± 1.0   17.8 ± 0.4   52.4 ± 1.1   52.9 ± 1.0
MNIST          77.2 ± 0.7   84.8 ± 0.6   89.6 ± 0.7   94.3 ± 0.4   94.3 ± 0.4
CIFAR 10       66.3 ± 0.8   51.4 ± 0.8   34.7 ± 0.8   66.8 ± 0.9   67.3 ± 0.8
CIFAR 100      55.7 ± 1.0   39.5 ± 1.0   18.9 ± 0.6   56.6 ± 1.0   56.7 ± 1.0

Table A3: Motivation for feature selection. The table shows the accuracy of different feature combinations on the Meta-Dataset test splits. The first column indicates the dataset the algorithms are tested on; the first row gives the name of each few-shot algorithm. The body of the table contains average accuracy and confidence intervals computed over 600 few-shot tasks. The numbers in bold have intersecting confidence intervals with the most accurate method.
Method          1-3  4-6  7-9  10-12  Aggregation   5-shot          1-shot
Cls                                   last          76.28 ± 0.41    60.09 ± 0.61
                                      select        77.39 ± 0.42    61.02 ± 0.62
                                      select        79.25 ± 0.41    60.79 ± 0.62
                                      select        78.92 ± 0.41    60.71 ± 0.64
                                      select        78.80 ± 0.43    60.55 ± 0.62
                                      concat        78.43 ± 0.42    60.41 ± 0.62
                                      concat        75.67 ± 0.41    57.15 ± 0.61
                                      concat        70.90 ± 0.40    53.53 ± 0.61
                                      concat        69.40 ± 0.40    51.21 ± 0.60
DenseCls                              last          78.25 ± 0.43    62.61 ± 0.61
                                      select        79.34 ± 0.42    62.46 ± 0.62
                                      select        80.04 ± 0.41    63.13 ± 0.62
                                      select        79.84 ± 0.42    62.95 ± 0.62
                                      select        79.49 ± 0.43    62.58 ± 0.63
                                      concat        79.12 ± 0.41    62.51 ± 0.62
                                      concat        79.59 ± 0.42    62.74 ± 0.61
                                      concat        77.63 ± 0.42    60.14 ± 0.61
                                      concat        76.07 ± 0.41    57.78 ± 0.61
DivCoop                               last          81.06 ± 0.41    64.14 ± 0.62
                                      select        81.23 ± 0.42    63.83 ± 0.62
                                      select        81.19 ± 0.41    63.93 ± 0.63
                                      select        81.11 ± 0.42    63.85 ± 0.62
                                      select        81.08 ± 0.42    63.71 ± 0.62
                                      concat        81.12 ± 0.42    63.92 ± 0.62
                                      concat        80.79 ± 0.41    63.22 ± 0.63
                                      concat        80.52 ± 0.42    62.48 ± 0.61
                                      concat        80.36 ± 0.42    61.30 ± 0.61
Table A4: Comparison to other methods on 1- and 5-shot mini-ImageNet. The first column gives the name of the feature extractor. Columns 2-5 indicate whether the corresponding layers of ResNet12 were added to the universal set of representations. The column “Aggregation” specifies how the universal set was used to obtain a vector image representation. The last two columns display the accuracy on 1- and 5-shot learning tasks. To evaluate our methods, we performed independent experiments on mini-ImageNet-test and report the average accuracy and confidence intervals. The best accuracy is in bold.

b.2 Analysis of Feature Selection on MetaDataset

Here, we repeat the experiment from Section 4.4, i.e., we study the average values of the selection parameters depending on the test dataset. Figure A2 reports the average selection parameters with the corresponding confidence intervals, in contrast to Figure 4, which reports the average values only.

Figure A2: Frequency of selected features depending on the test domain in MetaDataset. The top row indicates the testing dataset; the leftmost column indicates the dataset the feature extractor has been trained on. The cell at location (i, j) reflects the average value of the selection parameter assigned to the i-th feature extractor when tested on the j-th dataset, with the corresponding confidence interval. The values are averaged over 600 few-shot test tasks for each dataset.

b.3 Importance of Intermediate Layers on mini-ImageNet

We clarify the findings of Section 4.3 and provide an ablation study on the importance of intermediate layer activations for meta-testing performance. For all experiments on mini-ImageNet, we use ResNet12 as a feature extractor and construct a universal feature set from activations of intermediate layers. In Table A4, we experiment with adding different layers’ outputs to the universal set. The universal set is then used to construct the final image representation either through concatenation (“concat”) or using SUR. The table suggests that adding the first 6 layers negatively influences performance on the target task. While our SUR approach can still select relevant features from the full set of layers, the negative impact is especially pronounced for the “concat” baseline. This suggests that the first 6 layers do not contain information useful for the test task. For this reason, we do not include them in the universal feature set when reporting the results in Section 4.3.

We further analyze the selection coefficients assigned to different layers in Figure A3. We can see that, for all methods, SUR picks from the last 6 layers most of the time. However, some of the earlier layers are occasionally selected too. According to Table A4, these cases lead to a decrease in performance and suggest that SUR may overfit when the number of samples is very low.

Figure A3: Frequency of selecting intermediate layers’ activations on mini-ImageNet for 5-shot classification. The top row indicates the intermediate layer. The leftmost column gives the name of the method used to pre-train the feature extractor. Each cell reflects the average value of the selection parameter assigned to the corresponding intermediate layer, with confidence intervals. The values are averaged over 1000 few-shot test tasks.


  1. Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France


  1. L. Bertinetto, J. F. Henriques, P. H. Torr and A. Vedaldi (2018) Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136. Cited by: §3.3.2.
  2. A. Bietti, G. Mialon, D. Chen and J. Mairal (2019) A kernel perspective for regularizing deep neural networks. In International Conference on Machine Learning (ICML), Cited by: §1.
  3. H. Bilen and A. Vedaldi (2017) Universal representations: the missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275. Cited by: §2.0.2, §2.
  4. C. Burges (2010) MNIST handwritten digit database. Cited by: §A.1, §4.1.1.
  5. M. Caron, P. Bojanowski, A. Joulin and M. Douze (2018) Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1.
  6. W. Chen, Y. Liu, Z. Kira, Y. Wang and J. Huang (2019) A closer look at few-shot classification. In International Conference on Learning Representations (ICLR), Cited by: §1, §2.0.1, §2.0.1, §3.1, §4.1.2.
  7. M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed and A. Vedaldi (2014) Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §A.1, §4.1.1.
  8. C. Doersch and A. Zisserman (2017) Multi-task self-supervised visual learning. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1.
  9. N. Dvornik, C. Schmid and J. Mairal (2019) Diversity with cooperation: ensemble methods for few-shot classification. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1, §2.0.1, §4.3.
  10. N. Dvornik, K. Shmelkov, J. Mairal and C. Schmid (2017) BlitzNet: a real-time deep network for scene understanding. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1.
  11. C. Finn, P. Abbeel and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML), Cited by: §1, §1, §1, Table 2.
  12. N. Fox-Gieg (2016) The Quick, Draw! dataset. Cited by: §A.1, §4.1.1.
  13. M. Garnelo, D. Rosenbaum, C. J. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. J. Rezende and S. Eslami (2018) Conditional neural processes. arXiv preprint arXiv:1807.01613. Cited by: §2.0.1.
  14. S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez and M. Cord (2019) Boosting few-shot visual learning with self-supervision. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.0.1, §4.1.2.
  15. S. Gidaris and N. Komodakis (2018) Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.0.1.
  16. S. Gidaris, P. Singh and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), Cited by: §1, §3.1.
  17. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.2.
  18. S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing and C. Igel (2013) Detection of traffic signs in real-world images: the german traffic sign detection benchmark. In International Joint Conference on Neural Networks (IJCNN), Cited by: §A.1, §4.1.1.
  19. S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), Cited by: §2.0.2.
  20. D. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §A.3.
  21. A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Master’s Thesis, University of Toronto. Cited by: §A.1, §4.1.1.
  22. A. Krizhevsky, I. Sutskever and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
  23. B. M. Lake, R. Salakhutdinov and J. B. Tenenbaum (2015) Human-level concept learning through probabilistic program induction. Science. Cited by: §A.1, §3.3.2, §4.1.1.
  24. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel (1989) Backpropagation applied to handwritten zip code recognition. Neural computation. Cited by: §1.
  25. Y. Lifchitz, Y. Avrithis, S. Picard and A. Bursuc (2019) Dense classification and implanting for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.0.1, §2.0.1, §4.1.2, §4.3.
  26. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §A.1, §1, §4.1.1.
  27. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu and A. C. Berg (2016) SSD: single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1.
  28. J. Long, E. Shelhamer and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  29. I. Loshchilov and F. Hutter (2016) Sgdr: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Cited by: §A.2.
  30. J. Mairal, F. Bach and J. Ponce (2014) Sparse modeling for image and vision processing. Foundations and Trends® in Computer Graphics and Vision. Cited by: §3.2.3.
  31. S. Maji, J. Kannala, E. Rahtu, M. Blaschko and A. Vedaldi (2013) Fine-grained visual classification of aircraft. Technical report Cited by: §A.1, §4.1.1.
  32. T. Mensink, J. Verbeek, F. Perronnin and G. Csurka (2013) Distance-based image classification: generalizing to new classes at near-zero cost. IEEE transactions on Pattern Analysis and Machine Intelligence (PAMI). Cited by: §1.
  33. M. Nilsback and A. Zisserman (2008) Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, Cited by: §A.1, §4.1.1.
  34. B. Oreshkin, P. R. López and A. Lacoste (2018) TADAM: task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §A.3, §1, §2.0.1, §4.1.2.
  35. E. Perez, F. Strub, H. De Vries, V. Dumoulin and A. Courville (2018) Film: visual reasoning with a general conditioning layer. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §2.0.1, §2.0.1, §2.0.2, Figure 2, §3.3.1, §4.2.4.
  36. S. Qiao, C. Liu, W. Shen and A. L. Yuille (2018) Few-shot image recognition by predicting parameters from activations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.0.1.
  37. S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), Cited by: 2nd item, §1, §3.3.2, §4.1.1.
  38. S. Rebuffi, H. Bilen and A. Vedaldi (2018) Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.0.2, Figure 2, §3.3.1, §4.2.4.
  39. M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle and R. S. Zemel (2018) Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676. Cited by: §1.
  40. S. Ren, K. He, R. Girshick and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
  41. J. Requeima, J. Gordon, J. Bronskill, S. Nowozin and R. E. Turner (2019) Fast and flexible multi-task classification using conditional neural adaptive processes. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, §1, §2.0.1, §4.1.1, §4.1.2, §4.2.3, Table 2.
  42. O. Ronneberger, P. Fischer and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, Cited by: §1.
  43. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg and L. Fei-Fei (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV). Cited by: §A.1, §1, §4.1.1, §4.1.1.
  44. A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero and R. Hadsell (2018) Meta-learning with latent embedding optimization. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
  45. T. Saikia, T. Brox and C. Schmid (2020) Optimized generic feature learning for few-shot classification across domains. arXiv preprint arXiv:2001.07926. Cited by: §1, §1, §2.0.1, §2.0.1, §4.2.3, Table 2.
  46. J. Schmidhuber, J. Zhao and M. Wiering (1997) Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning. Cited by: §1, §2.0.1.
  47. B. Schroeder and Y. Cui (2018) FGVCx fungi classification challenge 2018. Cited by: §A.1, §4.1.1.
  48. J. Snell, K. Swersky and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, Table 2.
  49. S. Thrun (1998) Lifelong learning algorithms. In Learning to learn, pp. 181–209. Cited by: §1, §2.0.1.
  50. E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P. Manzagol and H. Larochelle (2019) Meta-dataset: a dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096. Cited by: 2nd item, §1, §1, §1, §2.0.1, §4.1.1, §4.1.1, §4.1.2, §4.2, Table 2.
  51. O. Vinyals, C. Blundell, T. Lillicrap and D. Wierstra (2016) Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
  52. C. Wah, S. Branson, P. Welinder, P. Perona and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 Dataset. Technical report Technical Report CNS-TR-2011-001, California Institute of Technology. Cited by: §3.3.2.
  53. C. Wah, S. Branson, P. Welinder, P. Perona and S. Belongie (2011) The caltech-ucsd birds-200-2011 dataset. Cited by: §A.1, §4.1.1.
  54. Y. Yoshida and T. Miyato (2017) Spectral norm regularization for improving the generalizability of deep learning. arXiv preprint arXiv:1705.10941. Cited by: §1.
  55. J. Yosinski, J. Clune, Y. Bengio and H. Lipson (2014) How transferable are features in deep neural networks?. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
  56. M. D. Zeiler (2012) Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Cited by: §4.1.3.
  57. X. Zhai, J. Puigcerver, A. Kolesnikov, P. Ruyssen, C. Riquelme, M. Lucic, J. Djolonga, A. S. Pinto, M. Neumann and A. Dosovitskiy (2019) The visual task adaptation benchmark. arXiv preprint arXiv:1910.04867. Cited by: §1.