L2AE-D: Learning to Aggregate Embeddings for Few-shot Learning with Meta-level Dropout

Heda Song¹, Mercedes Torres Torres², Ender Özcan¹, Isaac Triguero¹
¹ Automated Scheduling, Optimisation and Planning (ASAP) Research Group
² Computer Vision Laboratory
University of Nottingham, Nottingham NG8 1BB, UK
{heda.song,mercedes.torrestorres,ender.ozcan,isaac.triguero}@nottingham.ac.uk
Abstract

Few-shot learning focuses on learning a new visual concept from very limited labelled examples. A successful approach to this problem is to compare the similarity between examples in a learned metric space based on convolutional neural networks. However, existing methods typically suffer from meta-level overfitting due to the limited number of training tasks, and they do not normally consider the importance of the convolutional features of different examples within the same channel. To address these limitations, we make the following two contributions: (a) We propose a novel meta-learning approach for aggregating useful convolutional features and suppressing noisy ones, based on a channel-wise attention mechanism, to improve class representations. The proposed model does not require fine-tuning and can be trained in an end-to-end manner. The main novelty lies in incorporating a shared weight-generation module that learns to assign different weights to the feature maps of different examples within the same channel. (b) We also introduce a simple meta-level dropout technique that reduces meta-level overfitting in several few-shot learning approaches. In our experiments, this simple technique significantly improves the performance of the proposed method as well as various state-of-the-art meta-learning algorithms. Applying our method to few-shot image recognition on the Omniglot and miniImageNet datasets shows that it delivers state-of-the-art classification performance.

Keywords:
Few-shot learning Meta-learning Metric-learning Embedding aggregation Attention mechanism Meta-level dropout

1 Introduction

In recent years, deep learning techniques have developed dramatically, achieving high classification accuracy in visual recognition systems [12, 13]. These techniques usually require a large amount of labelled data to learn an appropriate model, and they struggle when provided with very little data. However, in many real-world visual recognition tasks, such as recognising images of new species or rare diseases, it is impractical to collect much labelled data. This severely restricts the successful application of deep learning. In addition, this learning style is typically not consistent with the human visual system, which can generalise a new visual concept after seeing only a few images, based on previous experience. To address these issues, the computer vision community has turned its attention to the challenge of learning from very few data, also known as few-shot learning [6, 18].

Few-shot learning typically aims to learn a new visual concept from a limited number of labelled examples. Overfitting can easily occur with conventional machine learning algorithms in such a few-shot regime. To avoid this, we need a learning approach with high generalisation ability. Inspired by the way humans quickly learn from accumulated experience, many meta-learning approaches for few-shot learning have recently been proposed. In general, these methods learn a meta-learner that extracts meta-knowledge from a distribution of few-shot learning tasks and then uses it to assist unseen tasks. The extracted meta-knowledge can be represented by different algorithm components, such as a general feature extractor [17, 32, 29], a distance metric [31], promising initial model parameters [7, 2, 8, 19], optimisation strategies [26], a model parameter predictor [23, 25, 11], an example generator [5, 36, 9], scale and/or shift vectors for activation adaptation [24], or label propagation [22, 10, 20].

Although these approaches achieve good performance, they still suffer from several issues. Some methods need to fine-tune the base model when executing target tasks [7, 2, 8, 19, 26]. Others introduce complex model architectures or external memory, which require more computing resources [10, 22, 20, 23, 11]. Generative model based approaches learn to generate additional artificial examples, but they may create non-informative examples when provided with noisy training examples [5, 36, 9]. Metric learning based approaches are straightforward and efficient [29]. They use Convolutional Neural Networks (CNNs) to extract the embeddings of examples, represented as sets of feature maps, and make predictions by comparing the similarities between embeddings. However, they seldom consider outliers in a class or borrow useful features from other classes [17, 32, 29, 31]. Because of the limited amount of data in few-shot learning, as illustrated in Fig. 1, the training examples may inherently contain uncertainties, such as the outliers shown in Fig. 1(a). If we simply use the mean of each class's embeddings as the class representative, as done in [29], possible outliers may force the representative to deviate from the class centre in the embedding space. It is therefore necessary to handle the effect of outliers appropriately. At the same time, an outlier may still contain some useful features, which could help strengthen part of the class representative. Similarly, even examples of different classes may share similar features, as shown in Fig. 1(b), which could help distinguish them from other classes in multi-class classification. Therefore, our goal is to reduce the impact of outliers while using as much useful information as possible in few-shot learning.
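To make the outlier effect concrete, here is a toy numeric sketch (the 2-D "embeddings", the outlier and the attention weights are all made up for illustration): a single outlier drags the plain mean away from the class centre, while a down-weighted aggregation stays close to it.

```python
import numpy as np

# Four embeddings near the class centre (1, 1) and one outlier at (4, 4).
embeddings = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.1], [4.0, 4.0]])
print(embeddings.mean(axis=0))  # [1.6  1.62] -- the mean is pulled towards the outlier

# Hypothetical attention weights (summing to 1) that down-weight the outlier.
weights = np.array([0.24, 0.24, 0.24, 0.24, 0.04])
print(weights @ embeddings)     # [1.12  1.144] -- stays close to the class centre
```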

In addition, these meta-learners may also suffer from overfitting. Although meta-learners are trained on different few-shot learning tasks, these tasks may contain overlapping classes, because there is a limited number of classes in the meta-training dataset and some of them are similar. For example, there are only 100 classes of objects in miniImageNet [26], and some of them are different breeds of dogs. Thus, meta-learners could perform well on meta-training tasks yet fail to generalise to meta-testing tasks composed of unseen classes.

Figure 1: Illustration of our motivation. Each embedding (rounded rectangle) consists of three feature maps (coloured squares), with outliers shown with dashed borders. (a) Binary classification with five training examples per class. We show the real class centres in the embedding space (solid circles) and the mean of each class's embeddings (hollow circles). (b) 4-class classification with one training example per class. Dashed arrows link similar feature maps in the embeddings of different classes.

To tackle the above issues, we propose L2AE-D (Learning to Aggregate Embeddings with Meta-level Dropout), a novel meta-learning approach for few-shot learning that learns to aggregate embeddings with meta-level dropout. L2AE-D learns a CNN-based feature extractor and a channel-wise attention mechanism in an end-to-end manner. The feature extractor transforms the input images into discriminative embeddings. The channel-wise attention mechanism learns to assign larger weights to useful feature maps and smaller weights to noisy ones among different embeddings within the same channel. We propose different learning strategies for one-shot and few-shot tasks, aiming to effectively exploit the few training embeddings. We also introduce a meta-level dropout technique into the meta-training process to prevent meta-level overfitting. We test this technique on several representative meta-learning approaches, and it significantly improves their performance. We evaluate the proposed method on the Omniglot [18] and miniImageNet [26] datasets, on which it achieves state-of-the-art performance.

2 Related Work and Motivation

Our method falls into the research field of few-shot learning. Section 2.1 reviews recent progress in this field and explains our motivation. In addition, L2AE-D builds on an attention mechanism and dropout; we briefly review how these techniques are used in image classification in Sections 2.2 and 2.3, respectively.

2.1 Few-shot Learning approaches

Most recent works tackle few-shot learning by meta-learning due to its high generalisation ability. In general, they learn a meta-learner to extract meta-knowledge from a number of few-shot learning tasks and use it to assist in unseen ones. Depending on the type of meta-knowledge, these methods can be broadly classified into three categories.

Fast parametrisation based approaches: Approaches in this class aim to learn a fast parametrisation strategy for quickly fine-tuning the base learner to adapt to new few-shot learning tasks. The most representative method, Model-Agnostic Meta-Learning (MAML) [7], learns initial model parameters that can be adapted into task-specific parameters by a few gradient descent steps on a handful of examples. MAML has been extended in various ways, such as introducing a first-order gradient to reduce the computational burden [2], learning initial model parameters together with optimisation strategies to further accelerate fine-tuning (Meta-learner-LSTM [26]), choosing a subset of model parameters to fine-tune in order to make the model more task-specific [19], and modelling a distribution of prior model parameters to handle the inherent uncertainty of few-shot learning [8]. Rather than fine-tuning the base model, other methods learn a meta-learner that directly predicts the parameters of the base model [23, 11, 25]; one of them learns to predict the parameters of the fully-connected layer from the activations (Activations2Weights [25]). These methods can be fast, but some of them rely on external memory, which requires additional resources to store historical information. Instead, our approach executes target few-shot tasks in a feed-forward manner without external memory, which is quick and does not require additional resources.

Generative model based approaches: These approaches learn to generate artificial examples to compensate for the lack of training data. The Neural Statistician approach learns to produce statistics of a dataset, such as the mean or variance, which specify a Gaussian distribution for generating data [5]. Other methods introduce generative adversarial networks to learn sharper decision boundaries (MetaGAN [36]) or to model the latent distribution of novel classes [9]. These meta-learners generate artificial examples to assist few-shot learning tasks. However, such examples can be non-informative when the few training examples are not representative. In contrast, our method learns to aggregate useful information and suppress noisy information, which is more stable.

Metric learning approaches: The approaches in this class learn to compare the similarity between examples in a learned metric space. Most approaches learn a general feature extractor, usually a CNN [17, 32, 29, 31, 10, 22, 20], to transform examples into embeddings and then compute the similarity between each pair of training and query embeddings based on a weighted L1 distance (Siamese Nets [17]), cosine distance (Matching Nets [32]), Euclidean distance (Prototypical Networks (ProtoNets) [29]) or a learned distance metric (Relation Network (RN) [31]). The queries are then classified by a linear [17, 31], a k-nearest-neighbours [29] or a weighted k-nearest-neighbours [32] classifier. Other approaches in this branch propagate label information from training examples to unlabelled query examples based on similarity [10, 22, 20]. Specifically, Transductive Propagation Network (TPN) [20] and Graph Neural Networks (GNNs) [10] learn a graph construction module and propagate labels within the graph, while another approach combines temporal convolutions and soft attention to propagate label information [22]. These methods also aggregate embeddings, but they treat each embedding as a whole. Instead, we aggregate feature maps in each channel, which can exploit more information, even from an outlier or from examples of different classes.

2.2 Attention Mechanisms

An attention mechanism tells a machine learner where to focus, inspired by the human perception system. It has been extensively studied in recent years and applied to various machine learning tasks, such as machine translation [3] and image captioning [34]. Recently, a few works have introduced attention mechanisms into CNNs for image classification [15, 33, 13]. Our method is related to [33, 13], which both learn a channel attention module to assign different weights to different feature maps in each convolutional layer. Their aim is to emphasise useful features and suppress irrelevant ones for each example in large-scale image classification. Our goal, however, is to handle uncertainty and fully use the few training examples in few-shot learning, so we apply attention along a different dimension. Specifically, they learn a multi-layer perceptron to assign weights to the feature maps of a single embedding, whereas we learn CNNs to assign weights to the feature maps of different embeddings in the same channel. Moreover, our attention module is shared across the channels of all layers, whereas they learn a separate attention module for each layer. It is noteworthy that several meta-learning approaches also introduce attention mechanisms to tackle few-shot learning problems [32, 22, 11], but in different ways and for different purposes. The approaches in [32, 22] use attention to propagate labels based on the similarities between a query and the training examples. The method in [11] uses attention to generate a classifier's weights for unseen classes. In contrast, our attention mechanism assigns different weights to the feature maps of different examples, aiming to handle uncertainty and fully use the few training examples.

2.3 Dropout

Dropout is a simple way to prevent neural networks from overfitting [30]. However, it is seldom applied to the convolutional layers of CNNs, because the shared-filter architecture dramatically reduces the number of model parameters, which in turn reduces the model's capacity to overfit [30]. Still, the experimental results in [30] show that performing dropout in convolutional layers can prevent overfitting and further improve performance on image recognition tasks. Unlike dropout applied to conventional machine learning tasks, we perform dropout at the meta-level to tackle meta-level overfitting: during meta-training, the dropped model is used for both the training and testing examples of each task.

3 L2AE-D: Learning to Aggregate Embeddings with Meta-level Dropout

In this section, we describe the proposed learning to aggregate embeddings with meta-level dropout (L2AE-D) method. We define the problem of few-shot learning in Section 3.1. Then, Section 3.2 describes our model consisting of an embedding, attention and distance module. The specific model architecture is discussed in Section 3.3. Finally, Section 3.4 presents how we perform meta-level dropout.

3.1 Problem Set-Up

Few-shot classification problems [6] aim to classify testing examples into one of $C$ unique classes based on $K$ labelled training examples for each class, which is called $C$-way $K$-shot classification. For each $C$-way $K$-shot classification task, the training set $\mathcal{D}_{train}$ contains $C \times K$ training examples and the testing set $\mathcal{D}_{test}$ contains testing examples that share the same label space with $\mathcal{D}_{train}$. In conventional machine learning, we could train a learner to predict the label of each testing example in $\mathcal{D}_{test}$ based on $\mathcal{D}_{train}$. However, a learner cannot be trained effectively on so few training examples.

A number of approaches, including our method, tackle this problem by meta-learning. Typically, we have three meta-sets: a meta-training set $\mathcal{D}_{meta\text{-}train}$, a meta-validation set $\mathcal{D}_{meta\text{-}val}$ and a meta-testing set $\mathcal{D}_{meta\text{-}test}$, whose label spaces are disjoint from each other. The meta-training set is used for training a meta-learner that generalises well across a distribution of few-shot learning tasks. $\mathcal{D}_{meta\text{-}val}$ is used to select suitable hyper-parameters of the meta-learner, and the meta-learner is finally evaluated on $\mathcal{D}_{meta\text{-}test}$.

Since $\mathcal{D}_{meta\text{-}train}$ includes a large number of different few-shot classification tasks, it is best to train the meta-learner in an episode-based manner, as proposed in [32]. In each meta-training iteration, a single few-shot classification task $\mathcal{T}$ is sampled, and the meta-learner is trained based on its performance on that task's testing set $\mathcal{D}_{test}$. We can also introduce the strategy of batch meta-training, as done in [7]: in each meta-training iteration, we sample a batch of few-shot classification tasks to train the meta-learner.
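As a concrete illustration of episodic sampling, the sketch below draws one $C$-way $K$-shot task from a class-indexed image collection. This is a minimal NumPy example under our own naming; `sample_episode` and `images_by_class` are illustrative, not taken from the released code:

```python
import numpy as np

def sample_episode(images_by_class, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample one C-way K-shot task: a training (support) set and a query set."""
    rng = np.random.default_rng() if rng is None else rng
    classes = rng.choice(len(images_by_class), size=n_way, replace=False)
    train_set, query_set = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(images_by_class[c]))[:k_shot + n_query]
        train_set += [(images_by_class[c][i], label) for i in idx[:k_shot]]
        query_set += [(images_by_class[c][i], label) for i in idx[k_shot:]]
    return train_set, query_set
```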

3.2 Model

L2AE-D can be divided into three modules: an embedding module $f_\varphi$, an attention module $g_\phi$ and a distance module, as shown in Figs. 2 and 3. The attention module differs between the 1-shot and $K$-shot ($K > 1$) cases: Fig. 2 shows our strategy for $C$-way 1-shot classification and Fig. 3 depicts our strategy for $C$-way $K$-shot classification.

Figure 2: 5-way 1-shot classification with L2AE-D: (1) Training samples are transformed by $f_\varphi$ into embeddings (sets of feature maps shown as coloured squares); (2) To strengthen the first feature map of the first class, we put it in the first channel and the other classes' feature maps in the remaining channels, then feed the concatenated 5-channel feature maps into $g_\phi$ to generate aggregation weights; (3) The 5 feature maps are aggregated based on the generated weights; (4) To make predictions, we feed a query into $f_\varphi$ and compare its embedding with the aggregated training embeddings in the distance module, which outputs a one-hot vector representing the predicted label of the query.

Embedding module: This module extracts features from each input image and transforms it into an embedding. For the $i$-th input example $x_i^c$ belonging to the $c$-th class, we feed it into the embedding module to generate an embedding $e_i^c = f_\varphi(x_i^c)$, which comprises $M$ feature maps of size $D \times D$. The training embeddings are then fed into the attention module.

Figure 3: C-way 5-shot classification with our approach. L2AE-D aggregates embeddings for each class: (1) The training examples are transformed by $f_\varphi$ into embeddings, each represented by a set of feature maps; (2) For each channel, we collect the feature maps and feed them into the attention module; (3) The feature maps are concatenated in depth and fed into $g_\phi$ to generate aggregation weights; (4) The feature maps are then aggregated based on the generated weights to form a representative feature map for this class.

Attention module: This module generates aggregation weights for the feature maps in a channel-wise manner, as shown in Fig. 4, and is shared among different channels. We use two different aggregation strategies for $K$-shot and 1-shot tasks, as shown in Figs. 3 and 2, respectively. We write $e_{i,j}^c$ for the $j$-th feature map of embedding $e_i^c$. For $C$-way $K$-shot tasks, we aggregate the feature maps of the training embeddings of the same class into class-representative feature maps: for the $j$-th channel, we concatenate the corresponding feature maps of the $K$ training embeddings of the $c$-th class as $E_j^c = [e_{1,j}^c; \ldots; e_{K,j}^c]$. For $C$-way 1-shot tasks, we instead aggregate the feature maps of training embeddings from different classes, since there is only one training embedding per class. To generate aggregation weights for the $c$-th class in the $j$-th channel, we concatenate the corresponding feature maps of the training embeddings from all $C$ classes, placing $e_{1,j}^c$ in the first channel and the other classes' feature maps behind it in random order.

Next, the concatenated feature maps are fed into the CNN-based attention network $g_\phi$, which produces the aggregation weights $w_j^c \in \mathbb{R}^K$ for $K$-shot tasks or $w_j^c \in \mathbb{R}^C$ for 1-shot tasks. We then aggregate the feature maps based on these weights; the aggregated feature map of the $c$-th class in the $j$-th channel is $\tilde{e}_j^c = \sum_i w_{i,j}^c \, e_{i,j}^c$, where the sum runs over the $K$ embeddings of class $c$ in the $K$-shot case, or over the single embeddings of the $C$ classes in the 1-shot case. Finally, we concatenate the aggregated feature maps of all channels to obtain a new embedding $\tilde{e}^c$ for the $c$-th class. The new training embedding set is $\{\tilde{e}^1, \ldots, \tilde{e}^C\}$, in which each $\tilde{e}^c$ can be seen as a class representative.
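The following NumPy sketch mirrors this channel-wise aggregation for the $K$-shot case. The attention network is abstracted as a callable `attn`; all names are illustrative stand-ins rather than the paper's exact implementation:

```python
import numpy as np

def aggregate_class(embeddings, attn):
    """embeddings: (K, M, D, D) feature maps of one class's K training examples.
    attn: maps a (K, D, D) stack of same-channel feature maps to K weights
    (softmaxed, so they are positive and sum to 1).
    Returns the aggregated (M, D, D) class representative."""
    K, M, D, _ = embeddings.shape
    out = np.empty((M, D, D))
    for j in range(M):                            # one aggregation per channel
        maps_j = embeddings[:, j]                 # (K, D, D): channel j across K examples
        w = attn(maps_j)                          # (K,) aggregation weights
        out[j] = np.tensordot(w, maps_j, axes=1)  # weighted sum of the K maps
    return out
```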

Figure 4: The architecture of the attention module.

Distance module: This module measures the distance between the embeddings of query examples, $f_\varphi(\hat{x})$, and the aggregated embeddings $\tilde{e}^c$. Following [29], we choose the Euclidean distance as the distance function $d(\cdot, \cdot)$; thus, the distance between a query embedding and $\tilde{e}^c$ is computed as $d(f_\varphi(\hat{x}), \tilde{e}^c) = \lVert f_\varphi(\hat{x}) - \tilde{e}^c \rVert_2$.

Loss function: We use the cross-entropy loss to train our model. First, the softmax function is applied over the negative distances between a query embedding and the aggregated training embeddings:

$$p(y = c \mid \hat{x}) = \frac{\exp\!\left(-d(f_\varphi(\hat{x}), \tilde{e}^c)\right)}{\sum_{c'=1}^{C} \exp\!\left(-d(f_\varphi(\hat{x}), \tilde{e}^{c'})\right)}.$$

The loss function can then be formulated as

$$\mathcal{L} = -\frac{1}{Q} \sum_{q=1}^{Q} \log p\left(y = y_q \mid \hat{x}_q\right),$$

where $Q$ is the number of query examples in each training episode.
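A worked sketch of the distance module and loss follows, using the squared Euclidean distance as in ProtoNets [29] (squaring does not change the ranking of classes). Embeddings are assumed flattened to vectors; all names are illustrative:

```python
import numpy as np

def episode_loss(query_emb, class_emb, labels):
    """query_emb: (Q, E) flattened query embeddings; class_emb: (C, E) flattened
    aggregated class representatives; labels: (Q,) integer class indices."""
    d = ((query_emb[:, None, :] - class_emb[None, :, :]) ** 2).sum(axis=-1)  # (Q, C)
    logits = -d                                     # softmax over negative distances
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable log-softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()  # mean cross-entropy
```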

3.3 Model Architecture

L2AE-D follows the same architecture for the embedding module as prior approaches [32, 29], which contains 4 convolutional blocks. Each block is composed of a convolution with 64 filters, followed by batch normalisation (BN) [14], a ReLU nonlinearity and max-pooling. For Omniglot, due to the small size of the input images, we remove the max-pooling layer from the last convolutional block before feeding the embeddings into the CNN-based attention module. The architecture of our attention module, shown in Fig. 4, consists of 2 convolutional blocks and a fully connected (FC) layer. Each convolutional block in this module comprises a convolution with 32 filters, followed by batch normalisation and a ReLU nonlinearity. The FC layer produces one aggregation weight per concatenated feature map, i.e., a $K$-dimensional output for $K$-shot tasks and a $C$-dimensional output for 1-shot tasks. For $K$-shot tasks, we apply the softmax function after the FC layer, since we aim to assign positive weights, summing to 1, to the embeddings of the same class.
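A hedged tf.keras sketch of these two networks is given below. The 3×3 kernels and 2×2 pooling follow the common setup of [29, 31] and are assumptions here, as is the use of the TF2/Keras API (Section 4.1 only states that the experiments use TensorFlow):

```python
import tensorflow as tf

def conv_block(filters, pool=True):
    # Convolution -> batch normalisation -> ReLU (-> max-pooling), as described above.
    layers = [tf.keras.layers.Conv2D(filters, 3, padding='same'),
              tf.keras.layers.BatchNormalization(),
              tf.keras.layers.ReLU()]
    if pool:
        layers.append(tf.keras.layers.MaxPooling2D(2))
    return tf.keras.Sequential(layers)

# Embedding module: 4 convolutional blocks with 64 filters each.
embedding_net = tf.keras.Sequential([conv_block(64) for _ in range(4)])

def make_attention_net(n_maps, use_softmax=True):
    """n_maps: number of concatenated feature maps (K for K-shot, C for 1-shot).
    Softmax is applied for K-shot tasks so the weights are positive and sum to 1."""
    return tf.keras.Sequential([
        conv_block(32, pool=False),
        conv_block(32, pool=False),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_maps, activation='softmax' if use_softmax else None),
    ])
```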

3.4 Meta-level Dropout

Most meta-learning approaches use multi-layer CNNs to extract features for few-shot learning. As discussed before, we apply dropout at the meta-level to tackle meta-level overfitting. Specifically, during meta-training we randomly drop part of the CNN units for each few-shot learning task, and the dropped model is used to extract features from both the training and testing examples of that task. During meta-testing, we use the full trained CNN to extract features from both training and testing examples. Note that dropout in convolutional layers works differently from dropout in fully connected layers: because the kernel weights are shared across units at different spatial positions, the weights are still updated by backpropagation even if some units are dropped. The practical effect of dropout in convolutional layers is akin to scaling the learning rate, which can also help prevent overfitting. According to the experimental results in Section 4.4, this technique significantly improves several meta-learning approaches.
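The sketch below shows one possible reading of this in tf.keras: a single forward pass per task, with a dropout mask broadcast over the batch (via `noise_shape`) so that every example of a task sees the same dropped channels, and dropout disabled at meta-test time. The mask-sharing detail is our assumption, not necessarily the authors' exact implementation:

```python
import tensorflow as tf

# One mask per forward pass, broadcast over batch and spatial dims (keep prob. 0.5).
drop = tf.keras.layers.Dropout(rate=0.5, noise_shape=(1, 1, 1, 64))

def embed_task(net_front, net_back, support, query, meta_training):
    """net_front / net_back: the CNN split around the dropped layer (placeholders)."""
    x = tf.concat([support, query], axis=0)   # one pass: same dropped model for both
    h = net_front(x, training=meta_training)
    h = drop(h, training=meta_training)       # active only during meta-training
    h = net_back(h, training=meta_training)
    k = tf.shape(support)[0]
    return h[:k], h[k:]                       # training / query embeddings
```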

4 Experiments

This section evaluates our method on the widely studied Omniglot [18] and miniImageNet [26] datasets. The experimental setup is provided in Section 4.1. Sections 4.2 and 4.3 analyse the results on Omniglot and miniImageNet, respectively. We also introduce the meta-level dropout technique into several promising few-shot learning approaches and test its behaviour on miniImageNet in Section 4.4. Section 4.5 visualises the working of L2AE-D based on t-distributed Stochastic Neighbor Embedding (t-SNE) [21]. The code for L2AE-D is available online (github.com/Heda-Song/L2AE-D).

4.1 Experimental setup

This section introduces the details of the two datasets used and the configurations followed to test the behaviour of L2AE-D against the state-of-the-art.

  • Omniglot consists of 1,623 handwritten characters collected from 50 alphabets, with 20 examples of each character drawn by different people. We augment the dataset with rotations in multiples of 90 degrees, as proposed by [28], yielding 6,492 classes (a small sketch of this augmentation follows this list). Following [7], we randomly select 1,200 classes (4,800 after augmentation) for meta-training, 100 classes (400 after augmentation) for meta-validation, and the remaining 323 (1,292 after augmentation) for meta-testing. All input images are resized to 28×28, as suggested by [32], to obtain a suitably sized embedding.

  • miniImageNet, proposed by [32], is derived from the original ILSVRC-12 dataset [27]. It comprises 100 classes of colour images with 600 examples each (60,000 in total). In our experiments, we use the widely adopted splits proposed by [26], which divide the 100 classes into 64 for meta-training, 16 for meta-validation and 20 for meta-testing. All input images are resized to 84×84, as done by most few-shot learning approaches [26, 29, 7]. Note that existing approaches use different tools to resize the images in miniImageNet; we use the library provided by OpenCV [4], following [20].
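The class-level rotation augmentation for Omniglot can be sketched as follows: each 90-degree rotation of a character class is treated as a new class, turning 1,623 classes into 6,492. `np.rot90` is used for illustration:

```python
import numpy as np

def augment_with_rotations(images_by_class):
    """images_by_class: list of (n, H, W) arrays, one per character class."""
    augmented = []
    for imgs in images_by_class:
        for k in range(4):  # rotations by 0, 90, 180 and 270 degrees
            augmented.append(np.rot90(imgs, k=k, axes=(1, 2)))
    return augmented        # 4x as many classes as the input
```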

To allow for fair comparisons with the current state-of-the-art, we maintain the different experimental setups reported on Omniglot (20-way 5-shot, 20-way 1-shot, 5-way 5-shot, 5-way 1-shot) and miniImageNet (5-way 5-shot, 5-way 1-shot). All experiments are performed using TensorFlow [1] on a Titan V GPU.

  • Meta-training: Following most existing methods [29, 7, 31], we train our model in an episode-based manner with a meta-batch size of 4, meaning that in each episode we randomly sample 4 $C$-way $K$-shot classification tasks to train the model. For each few-shot task, besides the training examples, we randomly sample 5 (Omniglot) or 15 (miniImageNet) query examples per class to compute the loss. We train our model end-to-end with Adam [16] using an initial learning rate of $10^{-3}$ [29, 31]. We halve the learning rate every 20,000 episodes to stabilise training and use the meta-validation set to choose the best-performing model for meta-testing. It is noteworthy that existing methods conduct BN in different ways. As pointed out in [26], using the global BN statistics accumulated from the meta-training set to normalise batches of examples in the meta-testing set harms performance, since there is no overlap between the classes of these two sets. Thus, we perform BN on each batch of examples, following [7, 31]. Specifically, for each task during both meta-training and meta-testing, we use each batch's statistics to normalise the training or query examples, which can be seen as a transductive setting.

  • Meta-testing: To be consistent with existing few-shot learning approaches, we evaluate our model on 1,000 (Omniglot) or 600 (miniImageNet) randomly sampled $C$-way $K$-shot classification tasks, each consisting of $K$ training examples and 5 or 15 query examples per class, respectively. We report the average accuracy over these tasks with 95% confidence intervals. However, we find that most previous methods use only a single seed to randomly sample a batch of testing tasks and report the average accuracy. Since there is a large number of possible meta-testing tasks, different seeds may sample a large proportion of easy-to-classify or difficult-to-classify tasks, leading to results with high variance. To obtain a more reliable result, we use 10 different seeds to sample 10 different batches of testing tasks and report the best, worst and average accuracy (a sketch of this protocol follows this list). Note that the existing methods are not strictly comparable, since their experimental settings are not consistent with each other.
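A sketch of this evaluation protocol: for each of 10 seeds, sample a batch of testing tasks, compute the mean accuracy with a 95% confidence interval (1.96 times the standard error of the mean), and report the worst, average and best of the 10 means. `evaluate_task` is a placeholder for running the trained model on one sampled task:

```python
import numpy as np

def evaluate(evaluate_task, n_tasks=600, seeds=range(10)):
    """evaluate_task(rng) -> accuracy of the trained model on one sampled task."""
    means = []
    for s in seeds:
        rng = np.random.default_rng(s)
        accs = np.array([evaluate_task(rng) for _ in range(n_tasks)])
        ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(n_tasks)  # 95% confidence interval
        print(f"seed {s}: {accs.mean():.4f} +/- {ci95:.4f}")
        means.append(accs.mean())
    return min(means), float(np.mean(means)), max(means)   # worst, average, best
```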

4.2 Analysis of the Results on Omniglot

We compare our approach against state-of-the-art methods from each family of few-shot learning approaches that report experimental results on Omniglot: MAML [7] from fast parametrisation based approaches; Neural Statistician [5] and MetaGAN [36] from generative model based approaches; and Siamese Nets [17], Matching Nets [32], ProtoNets [29], GNN [10] and RN [31] from metric learning approaches. Their reported results and ours are shown in Table 1. In general, all methods perform worse on 20-way tasks than on 5-way tasks, which shows that 20-way tasks are more difficult. L2AE-D achieves state-of-the-art performance on 20-way tasks even in the worst case, and competitive results on 5-way tasks. Moreover, our results are very stable: the differences between the best and worst accuracies across all tasks are no more than 0.2%. On 5-way 5-shot and 20-way 1-shot tasks, L2AE-D mostly obtains the best performance on different batches of tasks (using different seeds), since the average and best accuracies coincide. MetaGAN performs better on 5-way tasks by generating additional examples to assist RN, although its improvement over RN is marginal.

Model | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot
Siamese Nets [17] | 96.7% | 98.4% | 88.0% | 96.5%
Matching Nets [32] | 98.1% | 98.9% | 93.8% | 98.5%
Neural Statistician [5] | 98.1% | 99.5% | 93.2% | 98.1%
ProtoNets [29] | 98.8% | 99.7% | 96.0% | 98.9%
GNN [10] | 99.2% | 99.7% | 97.4% | 99.0%
MAML [7] | 98.7±0.4% | 99.9±0.1% | 95.8±0.3% | 98.9±0.2%
RN [31] | 99.6±0.2% | 99.8±0.1% | 97.6±0.2% | 99.1±0.1%
MetaGAN [36]+RN [31] | 99.67±0.18% | 99.86±0.11% | 97.64±0.17% | 99.21±0.1%
L2AE-D (worst) | 99.2±0.2% | 99.7±0.1% | 97.7±0.2% | 99.2±0.1%
L2AE-D (average) | 99.3±0.2% | 99.8±0.1% | 97.8±0.2% | 99.2±0.1%
L2AE-D (best) | 99.4±0.2% | 99.8±0.1% | 97.8±0.2% | 99.3±0.1%
Table 1: Few-shot classification results on Omniglot averaged over 1,000 testing tasks. The ± shows 95% confidence intervals over tasks. The best-performing results are highlighted in bold. All results are rounded to 1 decimal place except MetaGAN's, which are reported with 2 decimal places.

4.3 Analysis of the Results on miniImageNet

The existing few-shot learning approaches typically use two types of models to extract features: 4-layer CNNs [7, 29, 31] and deep residual networks [22, 9, 24]. A deep residual network [12] is a neural network with skip connections and more hidden layers, which has a more complex architecture but better representation capability than 4-layer CNNs. For a fair comparison, we compare our method with prior approaches based on the same type of model, 4-layer CNNs. As before, we choose state-of-the-art methods from each family that report experimental results on miniImageNet: Meta-learner-LSTM [26], MAML [7] and Activations2Weights [25] from fast parametrisation based approaches; MetaGAN [36] from generative model based approaches; and Matching Nets [32], ProtoNets [29], GNN [10], RN [31] and TPN [20] from metric learning approaches. Their reported results and ours are shown in Table 2. L2AE-D achieves state-of-the-art performance on 5-way 5-shot classification even in the worst case. On 5-way 1-shot classification, L2AE-D (average) provides the second-best result, slightly behind Activations2Weights. However, the feature extractor of Activations2Weights is trained with more classes (higher ways) and more queries in each meta-training episode, whereas our model is trained on 5-way classification with 15 queries per episode, consistent with the setting of most existing approaches. Besides, TPN obtains very competitive results on 1-shot and 5-shot classification. However, TPN is a transductive method that requires unlabelled data to propagate labels, and its performance is affected by the number of query examples. Although we use query-batch statistics to normalise the query examples in a transductive way, we can easily switch to an inductive mode by using training-batch statistics to normalise the query data, without much loss of performance.

Model | FT | 5-way 1-shot | 5-way 5-shot
Matching Nets [32] | N | 43.56±0.84% | 55.31±0.73%
Meta-Learner-LSTM [26] | N | 43.44±0.77% | 60.60±0.71%
MAML (1 query) [7] | Y | 48.70±1.84% | 63.11±0.92%
ProtoNets [29] | N | 49.42±0.78% | 68.20±0.66%
GNN [10] | N | 50.33±0.36% | 66.41±0.63%
RN [31] | N | 50.44±0.82% | 65.32±0.70%
MetaGAN [36]+RN [31] | N | 52.71±0.64% | 68.63±0.67%
TPN [20] | N | 53.75±0.86% | 69.43±0.67%
Activations2Weights [25] | N | 54.53±0.40% | 67.87±0.70%
L2AE-D (worst) | N | 53.03±0.84% | 69.53±0.65%
L2AE-D (average) | N | 53.85±0.85% | 70.16±0.65%
L2AE-D (best) | N | 54.26±0.87% | 70.76±0.67%
Table 2: Few-shot classification results on miniImageNet averaged over 600 testing tasks, based on 4-layer CNNs. The ± shows 95% confidence intervals over tasks. FT stands for fine-tuning. The best-performing results are highlighted in bold.

4.4 Analysis of the effect of Meta-level Dropout

Since the augmented Omniglot dataset includes many more classes (4,800) in the meta-training set than miniImageNet (64), meta-learners do not suffer much from meta-level overfitting on Omniglot. Therefore, we focus on miniImageNet to analyse the effect of meta-level dropout on 5-way 1-shot tasks. We introduce meta-level dropout into several representative meta-learning approaches, including MAML [7], ProtoNets [29] and RN [31]. Specifically, we use their published code and add dropout to the middle two convolutional layers, before max-pooling, with a keep probability of 0.5, because there is more co-adaptation of features in the middle layers [35] and we find this setting achieves better results. We compare the results with dropout to the reported results of the chosen methods, except for MAML, which was originally tested with 1 query per class using 32 filters in its CNNs; we modified its setting to use 64 filters and test on 15 queries per class for consistency with the other methods. We evaluate these methods on 5-way 1-shot classification in the same way as in Section 4.3. The experimental results in Table 3 show that adding meta-level dropout significantly improves several promising meta-learning approaches, as well as ours. It can also be seen that, even without dropout, L2AE outperforms those representative few-shot learning approaches, including ProtoNets, which we improve upon.

Model | without dropout | with dropout
MAML (64 filters, 15 queries) [7] | 47.71±0.84% | 50.43±0.87%
ProtoNets [29] | 49.42±0.78% | 52.08±0.81%
RN [31] | 50.44±0.82% | 52.40±0.85%
L2AE | 51.55±0.82% | 53.85±0.85% (L2AE-D)
Table 3: 5-way 1-shot classification results on miniImageNet with and without dropout, averaged over 600 testing tasks. The ± shows 95% confidence intervals over tasks.

4.5 Visualisation of the working of L2AE-D

To further show how our approach works, we visualise the aggregated embeddings for unseen few-shot classification tasks from the meta-testing set using t-SNE [21], a dimensionality-reduction technique particularly well suited to visualising high-dimensional datasets. Fig. 5(a) shows the aggregated embeddings for an unseen 5-way 1-shot classification task on Omniglot. The embeddings aggregated from different classes tend to move away from their own cluster and lie farther from the clusters of the other classes.

Figure 5: t-SNE visualisation of the aggregated embeddings of unseen classes for a 5-way 1-shot classification task on Omniglot (a) and a 5-way 5-shot task on miniImageNet (b). The embeddings of training samples are shown as points, the aggregated embeddings as triangles, the embeddings of regular examples as crosses, and the means of the training embeddings as diamonds.

Fig. 5(b) shows the aggregated embeddings for an unseen 5-way 5-shot classification task on miniImageNet. When the training set contains unrepresentative examples, i.e., examples far from the cluster of their class, the mean of the training embeddings [29] deviates from a position that represents the class well in the embedding space. In contrast, our aggregated embeddings stay at a representative position in the embedding space and are much more stable regardless of unrepresentative examples.
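A panel like Fig. 5 could be produced with scikit-learn's t-SNE as sketched below; the marker shapes follow the Fig. 5 legend, and all names are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(train_emb, aggregated_emb, mean_emb):
    """Project flattened embeddings to 2-D and plot them with Fig. 5's markers."""
    X = np.vstack([train_emb, aggregated_emb, mean_emb])
    Y = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)
    n, m = len(train_emb), len(aggregated_emb)
    plt.scatter(*Y[:n].T, marker='.', label='training embeddings')
    plt.scatter(*Y[n:n + m].T, marker='^', label='aggregated embeddings')
    plt.scatter(*Y[n + m:].T, marker='D', label='means of training embeddings')
    plt.legend()
    plt.show()
```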

5 Conclusions

In this paper, we propose a novel meta-learning approach for aggregating useful convolutional features and suppressing noisy ones based on a channel-wise attention mechanism. We propose two different learning strategies for one-shot and few-shot tasks, aiming to fully and effectively use the few training examples. Our model does not require any fine-tuning and can be trained in an end-to-end manner. In addition, we tackle the problem of meta-level overfitting by introducing a meta-level dropout technique, which significantly improves several well-known meta-learning approaches as well as ours. Furthermore, we achieve state-of-the-art performance on 20-way classification tasks on Omniglot and 5-way tasks on miniImageNet, which demonstrates the effectiveness and competitiveness of our method.

References

  • [1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M.: Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 (2016)
  • [2] Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 (2018)
  • [3] Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: International Conference on Learning Representations (2015)
  • [4] Bradski, G.: The OpenCV Library. Dr. Dobb’s Journal of Software Tools (2000)
  • [5] Edwards, H., Storkey, A.: Towards a neural statistician. In: International Conference on Learning Representations (2017)
  • [6] Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence 28(4), 594–611 (2006)
  • [7] Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning-Volume 70. pp. 1126–1135 (2017)
  • [8] Finn, C., Xu, K., Levine, S.: Probabilistic model-agnostic meta-learning. In: Advances in Neural Information Processing Systems. pp. 9537–9548 (2018)
  • [9] Gao, H., Shou, Z., Zareian, A., Zhang, H., Chang, S.F.: Low-shot learning via covariance-preserving adversarial augmentation networks. In: Advances in Neural Information Processing Systems. pp. 983–993 (2018)
  • [10] Garcia, V., Bruna, J.: Few-shot learning with graph neural networks. In: International Conference on Learning Representations (2018)
  • [11] Gidaris, S., Komodakis, N.: Dynamic few-shot visual learning without forgetting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4367–4375 (2018)
  • [12] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
  • [13] Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7132–7141 (2018)
  • [14] Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning. pp. 448–456 (2015)
  • [15] Jetley, S., Lord, N.A., Lee, N., Torr, P.H.: Learn to pay attention. In: International Conference on Learning Representations (2018)
  • [16] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2015)
  • [17] Koch, G., Zemel, R., Salakhutdinov, R.: Siamese neural networks for one-shot image recognition. In: ICML Deep Learning Workshop (2015)
  • [18] Lake, B., Salakhutdinov, R., Gross, J., Tenenbaum, J.: One shot learning of simple visual concepts. In: Proceedings of the Annual Meeting of the Cognitive Science Society 33(33) (2011)
  • [19] Lee, Y., Choi, S.: Gradient-based meta-learning with learned layerwise metric and subspace. In: International Conference on Machine Learning. pp. 2933–2942 (2018)
  • [20] Liu, Y., Lee, J., Park, M., Kim, S., Yang, E., Hwang, S.J., Yang, Y.: Learning to propagate labels: Transductive propagation network for few-shot learning. In: International Conference on Learning Representations (2019)
  • [21] Maaten, L.v.d., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov), 2579–2605 (2008)
  • [22] Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. In: International Conference on Learning Representations (2018)
  • [23] Munkhdalai, T., Yu, H.: Meta networks. In: Proceedings of the 34th International Conference on Machine Learning-Volume 70. pp. 2554–2563 (2017)
  • [24] Oreshkin, B., López, P.R., Lacoste, A.: Tadam: Task dependent adaptive metric for improved few-shot learning. In: Advances in Neural Information Processing Systems. pp. 719–729 (2018)
  • [25] Qiao, S., Liu, C., Shen, W., Yuille, A.L.: Few-shot image recognition by predicting parameters from activations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7229–7238 (2018)
  • [26] Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: International Conference on Learning Representations (2017)
  • [27] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3), 211–252 (2015)
  • [28] Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., Lillicrap, T.: Meta-learning with memory-augmented neural networks. In: International conference on machine learning. pp. 1842–1850 (2016)
  • [29] Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. In: Advances in Neural Information Processing Systems. pp. 4077–4087 (2017)
  • [30] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1), 1929–1958 (2014)
  • [31] Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1199–1208 (2018)
  • [32] Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: Advances in neural information processing systems. pp. 3630–3638 (2016)
  • [33] Woo, S., Park, J., Lee, J.Y., So Kweon, I.: CBAM: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision. pp. 3–19 (2018)
  • [34] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: International conference on machine learning. pp. 2048–2057 (2015)
  • [35] Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: Advances in neural information processing systems. pp. 3320–3328 (2014)
  • [36] Zhang, R., Che, T., Ghahramani, Z., Bengio, Y., Song, Y.: Metagan: An adversarial approach to few-shot learning. In: Advances in Neural Information Processing Systems. pp. 2371–2380 (2018)