Cross-Domain Few-Shot Learning by Representation Fusion

Abstract

In order to quickly adapt to new data, few-shot learning aims at learning from few examples, often by using already acquired knowledge. The new data often differs from the previously seen data due to a domain shift, that is, a change of the input-target distribution. While several methods perform well on small domain shifts like new target classes with similar inputs, larger domain shifts are still challenging. Large domain shifts may result in high-level concepts that are not shared between the original and the new domain. However, low-level concepts like edges in images might still be shared and useful. For cross-domain few-shot learning, we suggest representation fusion to unify different abstraction levels of a deep neural network into one representation. We propose Cross-domain Hebbian Ensemble Few-shot learning (CHEF), which achieves representation fusion by an ensemble of Hebbian learners acting on different layers of a deep neural network that was trained on the original domain. On the few-shot datasets miniImagenet and tieredImagenet, where the domain shift is small, CHEF is competitive with state-of-the-art methods. On cross-domain few-shot benchmark challenges with larger domain shifts, CHEF establishes novel state-of-the-art results in all categories. We further apply CHEF to a real-world cross-domain application in drug discovery. We consider a domain shift from bioactive molecules to environmental chemicals and drugs with twelve associated toxicity prediction tasks. On these tasks, which are highly relevant for computational drug discovery, CHEF significantly outperforms all its competitors.

1 Introduction

Currently, deep learning is criticized because it is data hungry, has limited capacity for transfer, insufficiently integrates prior knowledge, and presumes a largely stable world (Marcus, 2018). In particular, these problems appear after a domain shift, that is, a change of the input-target distribution. A domain shift forces deep learning models to adapt. The goal is to exploit models that were trained on the typically rich original data for solving tasks from the new domain with much less data. Examples of domain shifts are new users or customers, new products and product lines, new diseases (e.g. adapting from SARS to COVID-19), new images from another field (e.g. from cats to dogs or from cats to bicycles), new social behaviors after societal change (e.g. introduction of cell phones, pandemic), self-driving cars in new cities or countries (e.g. from European countries to Arabic countries), and robot manipulation of new objects.

Domain shifts are often tackled by meta-learning (Schmidhuber, 1987; Bengio et al., 1990; Hochreiter et al., 2001), since it exploits already acquired knowledge to adapt to new data. One prominent application of meta-learning dealing with domain shifts is few-shot learning since, typically, much less data is available from the new domain than from the original domain. Meta-learning methods perform well on small domain shifts like new target classes with similar inputs. However, larger domain shifts are still challenging for current approaches. Large domain shifts lead to inputs that are considerably different from the original inputs and possess different high-level concepts. Nonetheless, low-level concepts are often still shared between the inputs of the original domain and the inputs of the new domain. For images, such shared low-level concepts can be edges, textures, small shapes, etc. One way of obtaining low-level concepts is to train a new deep learning model from scratch on the new data merged with the original data. However, although models of the original domain are often available, the original data, which the models were trained on, often is not. This can have several reasons: the data owner no longer grants access to the data, the General Data Protection Regulation (GDPR) no longer allows access to the data, IP restrictions prevent access to the data, sensitive data items must not be touched anymore (e.g. phase III drug candidates), or the data is difficult to extract again. We therefore suggest exploiting models of the original data directly, by accessing not only high-level but also low-level abstractions. In this context, we propose a cross-domain few-shot learning method that extracts information from different levels of abstraction in a deep neural network.

Representation fusion. Deep learning constructs neural network models that represent the data at multiple levels of abstraction (LeCun et al., 2015). We introduce representation fusion, which is the concept of unifying and merging information from different levels of abstraction. Representation fusion uses a fast and adaptive system for detecting relevant information at different abstraction levels of a deep neural network, which, as we will show, allows solving versatile and complex cross-domain tasks.

CHEF. We propose Cross-domain Hebbian Ensemble Few-shot learning (CHEF), which achieves representation fusion by an ensemble of Hebbian learners built upon a trained network. CHEF naturally addresses the problem of domain shifts, which occur in a wide range of real-world applications. Furthermore, since CHEF only builds on representation fusion, it can adapt to new characteristics of tasks like imbalanced datasets, classes with few examples, a change of the measurement method, new measurements in unseen ranges, new kinds of labeling errors, and more. The use of simple Hebbian learners allows applying CHEF without backpropagating information through the backbone network.

The main contributions of this paper are:

  • We introduce representation fusion as the concept of unifying and merging information from different layers of abstraction.

  • We introduce CHEF as our new cross-domain few-shot learning method that builds on representation fusion. We show that using different layers of abstraction allows one to successfully tackle various few-shot learning tasks across a wide range of different domains. CHEF does not need to backpropagate information through the backbone network.

  • We apply CHEF to various cross-domain few-shot tasks and obtain several state-of-the-art results. We further apply CHEF to cross-domain real-world applications from drug discovery, where we outperform all competitors.

Related work. Representation fusion builds on learning a meaningful representation (Bengio et al., 2013; Girshick et al., 2014) at multiple levels of abstraction (LeCun et al., 2015; Schmidhuber, 2015). The concept of using representations from different layers of abstraction has been used in CNN architectures (LeCun et al., 1998) such as Huang et al. (2017); Rumetshofer et al. (2018); Hofmarcher et al. (2019), in CNNs for semantic segmentation in the form of multi-scale context pooling (Yu and Koltun, 2015; Chen et al., 2018), and in the form of context capturing and symmetric upsampling (Ronneberger et al., 2015). Work on domain shifts discusses the problem that new inputs are considerably different from the original inputs (Kouw and Loog, 2019; Wouter, 2018; Webb et al., 2018; Gama et al., 2014; Widmer and Kubat, 1996). Domain adaptation (Pan and Yang, 2009; Ben-David et al., 2010) overcomes this problem by e.g. reweighting the original samples (Jiayuan et al., 2007) or learning a classifier in the new domain. Domain adaptation where only little data is available in the new domain (Ben-David et al., 2010; Lu et al., 2020) is called cross-domain few-shot learning (Guo et al., 2019; Lu et al., 2020; Tseng et al., 2020), which is an instance of the general few-shot learning setting (Fei-Fei et al., 2006). Few-shot learning can be roughly divided into three approaches (Lu et al., 2020; Hospedales et al., 2020): (i) augmentation, (ii) metric learning, and (iii) meta-learning. For (i), where the idea is to learn an augmentation to produce more than the few samples available, supervised (Dixit et al., 2017; Kwitt et al., 2016) and unsupervised (Hariharan and Girshick, 2017; Pahde et al., 2019; Gao et al., 2018) methods are considered. For (ii), approaches aim to learn a pairwise similarity metric under which similar samples obtain high similarity scores (Koch et al., 2015; Ye and Guo, 2018; Hertz et al., 2006).
For (iii), methods comprise embedding and nearest-neighbor approaches (Snell et al., 2017b; Sung et al., 2018; Vinyals et al., 2016), finetuning approaches (Finn et al., 2017; Rajeswaran et al., 2019; Ravi and Larochelle, 2017; Andrychowicz et al., 2016), and parametrized approaches (Gidaris and Komodakis, 2018; Ye et al., 2020; Lee et al., 2019; Yoon et al., 2019; Mishra et al., 2018; Hou et al., 2019; Rusu et al., 2018). Few-shot classification under domain shifts for metric-based methods has been discussed in Tseng et al. (2020). Ensemble methods for few-shot learning have been applied in Dvornik et al. (2019), where an ensemble of distance-based classifiers is designed from different networks. In contrast, our method builds an ensemble of different layers from the same network. Hebbian learning as part of a few-shot learning method has been implemented in Munkhdalai and Trischler (2018), where fast weights that are used for binding labels to representations are generated by a Hebbian learning rule.

2 Cross-domain few-shot learning

Domain shifts. We assume to have data z = (x, y), where x is the input data and y is the target data. A domain is a distribution over X × Y assigning each pair (x, y) a probability p(x, y). A domain shift is a change from p(x, y) to p̃(x, y). We measure the magnitude of the domain shift by a distance between the distributions p(x, y) and p̃(x, y). We consider four types of domain shifts (Kouw and Loog, 2019; Wouter, 2018; Webb et al., 2018; Gama et al., 2014; Widmer and Kubat, 1996):

  • Prior shift (small domain shift): p(y) is changed to p̃(y), while p(x | y) stays the same. For example, when new classes are considered (typical case in few-shot learning): p(x, y) = p(x | y) p(y) and p̃(x, y) = p(x | y) p̃(y).

  • Covariate shift (large domain shift): p(x) is changed to p̃(x), while p(y | x) stays the same. For example, when new inputs are considered, which occurs when going from color to grayscale images, using a new measurement device, or looking at traffic data from different continents: p(x, y) = p(y | x) p(x) and p̃(x, y) = p(y | x) p̃(x).

  • Concept shift: p(y | x) is changed to p̃(y | x), while p(x) stays the same. For example, when including new aspects changes the decision boundaries: p(x, y) = p(y | x) p(x) and p̃(x, y) = p̃(y | x) p(x).

  • General domain shift: domain shift between p(x, y) and p̃(x, y). For example, going from Imagenet data to grayscale X-ray images (typical case in cross-domain datasets).

Domain shift for images. We consider the special case that the input x is an image. In general, domain shifts can be measured on the raw image distributions, e.g. by using the ℋ-divergence (Ben-David et al., 2010). However, distances between raw image distributions were shown to be less meaningful in computer vision tasks than abstract representations of deep neural networks (Heusel et al., 2017; Salimans et al., 2016). We approximate the distance between the joint distributions p(x, y) and p̃(x, y) by the distance between the marginals p(x) and p̃(x), which is exact in the case of the covariate shift for certain choices of the distance measure, e.g. the Jensen-Shannon divergence. For the distance between the marginals we use the Fréchet Inception Distance (FID; Heusel et al., 2017), which has proven reliable for measuring the performance of Generative Adversarial Networks (Goodfellow et al., 2014).
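Concretely, the FID is the Fréchet distance between two Gaussians fitted to Inception features of the two image sets. A minimal numpy/scipy sketch of that distance is given below; the Inception feature extraction itself is omitted, and mu/sigma stand for the feature means and covariances of the two domains:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical feature statistics give distance 0; shifted means increase it.
assert abs(frechet_distance(np.zeros(4), np.eye(4), np.zeros(4), np.eye(4))) < 1e-6
```

Larger FID values, as in Table 1, then indicate a larger shift of the marginal input distribution.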

Cross-domain few-shot learning. Large domain shifts lead to inputs that are considerably different from the original inputs. As a result, the model trained on the original domain no longer works on the new domain. To overcome this problem, domain adaptation techniques are applied (Pan and Yang, 2009; Ben-David et al., 2010). Domain adaptation can be achieved in several ways, e.g. by reweighting the original samples (Jiayuan et al., 2007). Another possibility is to learn a classifier in the new domain. Domain adaptation where only little data is available in the new domain for learning (Ben-David et al., 2010) is called cross-domain few-shot learning (Guo et al., 2019; Lu et al., 2020; Tseng et al., 2020). In a K-shot N-way few-shot learning setting, the training set (in meta-learning also called one episode) consists of K samples for each of the N classes.

3 Cross-domain Hebbian Ensemble Few-shot learning (CHEF)

We propose a new cross-domain few-shot learning method, CHEF, that consists of an ensemble of Hebbian learners built on representation fusion. Figure 1 sketches our CHEF approach. In principle, any learning algorithm can be used for representation fusion. We choose a Hebbian learning rule because it is simple and fast while being robust and reliable.

Figure 1: Working principle of CHEF. An ensemble of Hebbian learners is applied to the upper layers of a trained neural network. Distilling information from different layers of abstraction is called representation fusion. Each Hebbian learner is iteratively optimized and the results are combined.

Hebbian few-shot learning built on representation fusion. CHEF builds its ensemble of Hebbian learners using representation fusion. Deep learning models (LeCun et al., 2015) provide hierarchical representations that allow fusing information from different layers of abstraction. In contrast to many other methods, CHEF does not require backpropagation of error signals through the entire backbone network. Only the parameters of the Hebbian learners that are obtained for the uppermost layers need adjustment. This makes CHEF extremely fast and versatile.

Obtaining one Hebbian Learner. We consider a K-shot N-way few-shot learning setting. Let a ∈ ℝᵐ be a feature vector obtained from activating a pre-trained backbone network with a sample x up to a certain layer, where m is the number of units in that layer. We combine the NK feature vectors into a matrix A ∈ ℝ^{NK×m} and initialize a weight matrix W ∈ ℝ^{N×m}. In accordance with Hebb (2005); Frégnac (2002), we use the Hebbian learning rule

W ← W + η Sᵀ A (1)

for a given number of steps, where η is a Hebbian learning rate and S ∈ ℝ^{NK×N} is the matrix of postsynaptic responses s. We design the number of steps for which to run equation 1 as a hyperparameter of our method. Given a loss function L and few-shot labels Y, we choose the postsynaptic response

S = −∂L(A Wᵀ, Y) / ∂(A Wᵀ) (2)

We initialize the weight matrix with zeros. In principle, any other initialization is possible. However, the learning rule is designed for rather strong updates and, therefore, the initialization scheme is of minor importance.
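For intuition, a single Hebbian learner can be sketched in a few lines of numpy. The cross-entropy loss is used here as one concrete choice for the loss function L; with it, the postsynaptic response of equation 2 reduces to the difference between the one-hot labels and the softmax of the logits. Learning rate and step count below are illustrative defaults, not the tuned values from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def hebb_rule(A, Y, eta=0.1, steps=50):
    """Sketch of one Hebbian learner. A: (N*K, m) feature matrix of one layer,
    Y: (N*K, N) one-hot few-shot labels. For cross-entropy, the postsynaptic
    response is S = Y - softmax(A W^T); the update is W <- W + eta * S^T A."""
    W = np.zeros((Y.shape[1], A.shape[1]))  # zero initialization, as in the text
    for _ in range(steps):
        S = Y - softmax(A @ W.T)
        W = W + eta * S.T @ A
    return W
```

On a toy episode with well-separated class prototypes, a few dozen such updates typically suffice to classify the support set correctly.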

Combining several Hebbian Learners. The closer a layer is to the network output the more specific are its features. Conversely, the closer a layer is to the input of the network, the more general are the features. In cross-domain few-shot learning, it is not a priori clear how specific or general the features should be because this depends on how close the target domain is to the training domain. Therefore, we design our few-shot learning algorithm such that it can flexibly choose the specificity of the features depending on the current episode. We achieve this by representation fusion, where the Hebbian learning rule is applied to several layers at different levels of the backbone network in parallel. This yields a separate prediction for each level of abstraction. The final classification result is then obtained from the sum of logits arising from the respective Hebbian learners. A schematic view of CHEF is shown in Alg. 1.

function HebbRule(A, Y, L, η, T)
     W ← 0
     for t = 1, …, T do
          S ← −∂L(A Wᵀ, Y) / ∂(A Wᵀ)
          W ← W + η Sᵀ A
     end for
     return W
end function
function Ensemble(X, Y, I, L, η, T)
     Z ← Σ_{i ∈ I} BB(X, i) HebbRule(BB(X, i), Y, L, η, T)ᵀ
     return Z
end function
Algorithm 1 CHEF algorithm. The data matrix X consists of the input vectors x and the label matrix Y consists of the corresponding label vectors y. The function BB activates the backbone network up to a certain layer specified by an index i. I is the set of indices specifying the layers used in the ensemble, L is the loss function of the few-shot learning task at hand. The function HebbRule executes T steps of the Hebbian learning rule and yields a weight matrix W that maps the feature vectors in A to vectors of length N, which are then used for N-fold classification. η is the Hebbian learning rate.
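The ensemble step of Alg. 1 can be sketched as follows: one Hebbian learner is fitted per selected layer and the per-layer logits are summed. This is a self-contained numpy sketch with a cross-entropy postsynaptic response; function names and hyperparameters are illustrative, and the precomputed feature matrices play the role of BB(X, i):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def hebb_rule(A, Y, eta=0.1, steps=50):
    """T steps of W <- W + eta * S^T A with S = Y - softmax(A W^T)."""
    W = np.zeros((Y.shape[1], A.shape[1]))
    for _ in range(steps):
        W += eta * (Y - softmax(A @ W.T)).T @ A
    return W

def chef_fit(features_per_layer, Y, eta=0.1, steps=50):
    """One Hebbian learner per layer; features_per_layer[i] stands for BB(X, i)."""
    return [hebb_rule(A, Y, eta, steps) for A in features_per_layer]

def chef_predict(features_per_layer, weights):
    """Final classification from the sum of the per-layer logits."""
    logits = sum(A @ W.T for A, W in zip(features_per_layer, weights))
    return logits.argmax(axis=1)
```

Because each learner only sees the features of its layer, no gradients ever flow through the backbone; a layer whose features are uninformative for the current episode contributes weakly saturated logits and is effectively down-weighted in the sum.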

4 Experiments

We apply CHEF to four cross-domain few-shot challenges, where we obtain state-of-the-art results in all categories. The four cross-domain few-shot challenges are characterized by domain shifts of different size, which we measure using the Fréchet-Inception-Distance (FID). We conduct ablation studies showing the influence of the different layer representations on the results. Further, we test CHEF on two standardized image-based few-shot classification benchmark datasets established in the field, which are characterized by a prior domain shift: miniImagenet (Vinyals et al., 2016) and tieredImagenet (Ren et al., 2018). Finally, we illustrate the impact of our CHEF approach on two real-world applications in the field of drug discovery, which are characterized first by a small domain shift and second by a large domain shift.

4.1 Cross-domain few-shot learning

Dataset and evaluation. The cross-domain few-shot learning challenge (Guo et al., 2019) uses miniImagenet as training domain and then evaluates the trained models on four different test domains with increasing distance to the training domain: 1) CropDisease (Mohanty et al., 2016) consisting of plant disease images, 2) EuroSAT (Helber et al., 2019), a collection of satellite images, 3) ISIC2018 (Tschandl et al., 2018; Codella et al., 2019) containing dermoscopic images of skin lesions, and 4) ChestX (Wang et al., 2017) containing a set of X-ray images. For evaluation, we measure the accuracy by drawing 800 tasks (five test instances per class) from the cross-domain test set. Following prior work, we focus on 5-way/5-shot, 5-way/20-shot, and 5-way/50-shot tasks. We report the average accuracy and a 95 % confidence interval across all test images and tasks.
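The task-sampling protocol above can be sketched as follows. This is a hypothetical helper, not the benchmark's reference implementation; the five query instances per class mirror the five test instances mentioned in the text:

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=5, n_query=5, rng=None):
    """Draw one n_way/k_shot task: per sampled class, k_shot support indices
    and n_query disjoint query indices from the given label array."""
    if rng is None:
        rng = np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:k_shot + n_query])
    return np.array(support), np.array(query)
```

Repeating this 800 times and averaging the query accuracies yields the reported numbers with their confidence intervals.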

Measuring the domain shifts via FID. In Guo et al. (2019), the four datasets of the new domain are characterized by their distance to the original domain using three criteria: whether images contain perspective distortion, the semantic content of images, and color depth. In Table 1, we provide measurements of the domain shift of these four datasets with respect to the original miniImagenet dataset using the FID. The FID measurements confirm the characterization in Guo et al. (2019), except that the EuroSAT dataset is closer to the original domain than the CropDisease dataset. The difference in both FID measurements is mostly driven by the mean terms. This can be explained by the fact that the FID does not measure perspective distortion and satellite images might have a higher variety of shapes and colors than plant images.

Dataset Conceptual difference to original domain (miniImagenet) FID
CropDisease None 257.58
EuroSAT No perspective distortion 151.64
ISIC2018 No perspective distortion, unnatural content 294.05
ChestX No perspective distortion, unnatural content, different color depth 312.52
Table 1: Conceptual difference and domain shift between miniImagenet and the four cross-domain datasets CropDisease, EuroSAT, ISIC2018, and ChestX. The domain shift is measured using the FID.

CHEF implementation. We perform pre-training on the miniImagenet dataset similar, but not identical, to Ye et al. (2020). We utilize a softmax output layer with as many units as there are classes in the meta-training and meta-validation sets combined. We make a validation split on the combination of these two sets for supervised learning, i.e. instead of separating whole classes into the validation set (vertical split), we move a randomly selected fraction of samples of each class into the validation set (horizontal split), as is standard in supervised learning. We evaluate CHEF using the same ResNet-10 backbone architecture as in Guo et al. (2019). For better representation fusion, we place two fully connected layers after the last convolutional layer. We perform model selection during training using the cross-entropy loss on the horizontal data split, and hyperparameter selection for CHEF on the vertical data split.

Results and ablation study. CHEF achieves state-of-the-art performance in all 12 categories. Results are provided in Table 2. To further study the influence and power of representation fusion, we use a pre-trained PyTorch (Paszke et al., 2019) ResNet-18 network. 5-way 5-shot and 50-shot results are reported in Fig. 2 (5-way 20-shot results can be found in the appendix). Results are obtained by applying our Hebbian learning rule to the logits of the output layer and to the pre-activations of blocks 4 through 8 individually, and we also examine an ensemble of them. The results are considerably better than the ResNet-10 results reported above, which presumably arises from the fact that the ResNet-18 network is pre-trained on the whole Imagenet dataset and thus offers more powerful representations to fuse. This illustrates the benefit CHEF draws from better feature abstractions. Another interesting insight is that for the ChestX dataset, the dataset with the largest domain shift, the lower-level features gain importance. In general, the farther the new domain is from the original domain, the more important features from lower layers become, i.e. features that are less specific to the original domain. Since CHEF combines features of different specificity to the training domain, it is particularly powerful in cross-domain settings.

CropDiseases 5-way EuroSAT 5-way
Method 5-shot 20-shot 50-shot 5-shot 20-shot 50-shot
MatchingNet 66.39 ± 0.78 76.38 ± 0.67 58.53 ± 0.73 64.45 ± 0.63 77.10 ± 0.57 54.44 ± 0.67
MatchingNet+FWT 62.74 ± 0.90 74.90 ± 0.71 75.68 ± 0.78 56.04 ± 0.65 63.38 ± 0.69 62.75 ± 0.76
MAML 78.05 ± 0.68 89.75 ± 0.42 - 71.70 ± 0.72 81.95 ± 0.55 -
ProtoNet 79.72 ± 0.67 88.15 ± 0.51 90.81 ± 0.43 73.29 ± 0.71 82.27 ± 0.57 80.48 ± 0.57
ProtoNet+FWT 72.72 ± 0.70 85.82 ± 0.51 87.17 ± 0.50 67.34 ± 0.76 75.74 ± 0.70 78.64 ± 0.57
RelationNet 68.99 ± 0.75 80.45 ± 0.64 85.08 ± 0.53 61.31 ± 0.72 74.43 ± 0.66 74.91 ± 0.58
RelationNet+FWT 64.91 ± 0.79 78.43 ± 0.59 81.14 ± 0.56 61.16 ± 0.70 69.40 ± 0.64 73.84 ± 0.60
MetaOpt 68.41 ± 0.73 82.89 ± 0.54 91.76 ± 0.38 64.44 ± 0.73 79.19 ± 0.62 83.62 ± 0.58
CHEF (Ours) 86.87 ± 0.27 94.78 ± 0.12 96.77 ± 0.08 74.15 ± 0.27 83.31 ± 0.14 86.55 ± 0.15
ISIC 5-way ChestX 5-way
Method 5-shot 20-shot 50-shot 5-shot 20-shot 50-shot
MatchingNet 36.74 ± 0.53 45.72 ± 0.53 54.58 ± 0.65 22.40 ± 0.7 23.61 ± 0.86 22.12 ± 0.88
MatchingNet+FWT 30.40 ± 0.48 32.01 ± 0.48 33.17 ± 0.43 21.26 ± 0.31 23.23 ± 0.37 23.01 ± 0.34
MAML 40.13 ± 0.58 52.36 ± 0.57 - 23.48 ± 0.96 27.53 ± 0.43 -
ProtoNet 39.57 ± 0.57 49.50 ± 0.55 51.99 ± 0.52 24.05 ± 1.01 28.21 ± 1.15 29.32 ± 1.12
ProtoNet+FWT 38.87 ± 0.52 43.78 ± 0.47 49.84 ± 0.51 23.77 ± 0.42 26.87 ± 0.43 30.12 ± 0.46
RelationNet 39.41 ± 0.58 41.77 ± 0.49 49.32 ± 0.51 22.96 ± 0.88 26.63 ± 0.92 28.45 ± 1.20
RelationNet+FWT 35.54 ± 0.55 43.31 ± 0.51 46.38 ± 0.53 22.74 ± 0.40 26.75 ± 0.41 27.56 ± 0.40
MetaOpt 36.28 ± 0.50 49.42 ± 0.60 54.80 ± 0.54 22.53 ± 0.91 25.53 ± 1.02 29.35 ± 0.99
CHEF (Ours) 41.26 ± 0.34 54.30 ± 0.34 60.86 ± 0.18 24.72 ± 0.14 29.71 ± 0.27 31.25 ± 0.20
Results reported in Guo et al. (2019)
Table 2: Comparative results of few-shot learning methods on the four proposed cross-domain few-shot challenges CropDiseases, EuroSAT, ISIC, and ChestX. The average 5-way few-shot classification accuracies (%, top-1) along with confidence intervals are reported on the test split of each dataset.
Figure 2: 5-shot and 50-shot top-1 accuracies (along with confidence intervals) of different residual blocks and the output layer of an Imagenet-pretrained ResNet-18 and the ensemble result (orange, “ens”) on the four different datasets of the cross-domain few-shot learning benchmark. For comparison, also the ResNet-10 ensemble results (green) are included.

4.2 miniImagenet and tieredImagenet

Datasets and evaluation. The miniImagenet dataset (Vinyals et al., 2016) consists of 100 randomly chosen classes from the ILSVRC-2012 dataset (Russakovsky et al., 2015). We use the commonly-used class split proposed in Ravi and Larochelle (2017). The tieredImagenet dataset (Ren et al., 2018) is a subset of ILSVRC-2012 (Russakovsky et al., 2015), composed of 608 classes grouped in 34 high-level categories. For evaluation, we measure the accuracy by drawing 800 tasks (five test instances per class) from the meta-test set. Following prior work, we focus on 5-way/1-shot and 5-way/5-shot tasks. We report the average accuracy and a 95 % confidence interval across all test images and tasks.

CHEF implementation and results. We perform pre-training of the respective backbone networks on the miniImagenet and the tieredImagenet dataset in the same way as described in Sec. 4.1. We evaluate CHEF using two different backbone architectures: a Conv-4 and a ResNet-12 network. We use the Conv-4 network described by Vinyals et al. (2016). Following Lee et al. (2019), we configure the ResNet-12 backbone as 4 residual blocks, which contain a max-pooling and a batch-norm layer and are regularized by DropBlock (Ghiasi et al., 2018). Again, model selection and hyperparameter tuning are performed as described in Sec. 4.1. CHEF achieves state-of-the-art performance in 5 out of 8 categories. Results are provided in Table 3. An ablation study of the miniImagenet and tieredImagenet results can be found in the appendix.

miniImagenet 5-way tieredImagenet 5-way
Method Backbone 1-shot 5-shot 1-shot 5-shot
MatchingNet (Vinyals et al., 2016) Conv-4 - -
Meta-LSTM (Ravi and Larochelle, 2017) Conv-4 - -
MAML (Finn et al., 2017) Conv-4
ProtoNets (Snell et al., 2017a) Conv-4
Reptile (Nichol et al., 2018) Conv-4
RelationNet (Sung et al., 2018) Conv-4
IMP (Allen et al., 2019) Conv-4 - -
FEAT (Ye et al., 2020) Conv-4 - -
Dynamic FS (Gidaris and Komodakis, 2018) Conv-4 - -
CHEF (Ours) Conv-4
SNAIL (Mishra et al., 2018) ResNet-12 - -
TADAM (Oreshkin et al., 2018) ResNet-12 - -
MTL (Sun et al., 2019) ResNet-12 - -
VariationalFSL (Zhang et al., 2019) ResNet-12 - -
TapNet (Yoon et al., 2019) ResNet-12
MetaOptNet (Lee et al., 2019) ResNet-12
CTM (Li et al., 2019) ResNet-12
CAN (Hou et al., 2019) ResNet-12
FEAT (Ye et al., 2020) ResNet-12
Dynamic FS (Gidaris and Komodakis, 2018) ResNet-12 - -
CHEF (Ours) ResNet-12
Results reported in (Liu et al., 2019)
Table 3: Comparative results of few-shot learning methods on the two benchmark datasets miniImagenet and tieredImagenet. The average 5-way few-shot classification accuracies (%, top-1) along with confidence intervals are reported on the test split of each dataset.

4.3 Example Application: Drug Discovery

In drug discovery, it is essential to know properties of drug candidates, such as biological activities or toxicity (Sturm et al., 2020; Mayr et al., 2018). Since the measurements of these properties require time- and cost-intensive laboratory experiments, machine learning models are used to substitute such measurements (Hochreiter et al., 2018). However, due to the high experimental effort often only few high-quality measurements are available for training. Thus, few-shot learning is highly relevant for computational drug discovery.

Problem setting. We consider a 50-shot cross-domain few-shot learning setting in the field of toxicity prediction, utilizing the Tox21 Data Challenge dataset (Tox21) with twelve different toxic effects (Huang and Xia, 2017; Mayr et al., 2016). Around 50 available measurements is a typical scenario when introducing a new high-quality assay in drug design. So far, the standard approach to dealing with so few data points has been to use machine learning methods like Support Vector Machines (SVMs; Cortes and Vapnik, 1995) or Random Forests (RFs; Breiman, 2001). However, these methods do not exploit the rich data available, like the ChEMBL20 drug discovery benchmark (ChEMBL20) (Mayr et al., 2018; Gaulton et al., 2017). Viewing the Tox21 data as a domain shift of ChEMBL20 allows the application of cross-domain few-shot learning methods. In this setting, a domain shift can be observed both in the input data and in the target data. The molecules (input domain) are strongly shifted towards a specialized chemical space, with a Jaccard index of between the two datasets, and the biological effects (output domain) are shifted towards toxicity without any overlap in this domain. A further shift is in the distribution of the target labels, which are much more imbalanced than in ChEMBL20. In order to mirror this distribution shift correctly, the number of toxic vs. non-toxic molecules in the training sets for each of the twelve few-shot tasks (twelve different toxic effects) is sub-sampled accordingly. For example, the 50-shot scenario (50-50 toxic/non-toxic) is adjusted to a 10-90 scenario. For the twelve few-shot learning tasks, training samples are drawn from the training set and test samples from the test set of the Tox21 data, respectively. We sample individually for each of the twelve tasks.
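The sub-sampling described above can be sketched as follows. This is a hypothetical helper, not the paper's code; pos_frac=0.1 encodes the 10-90 toxic/non-toxic scenario:

```python
import numpy as np

def sample_imbalanced_shots(labels, n_total=50, pos_frac=0.1, rng=None):
    """Draw an n_total-shot training set with the given toxic (label 1) fraction,
    mirroring the shift towards imbalanced target labels."""
    if rng is None:
        rng = np.random.default_rng()
    n_pos = int(round(n_total * pos_frac))
    pos = rng.choice(np.flatnonzero(labels == 1), size=n_pos, replace=False)
    neg = rng.choice(np.flatnonzero(labels == 0), size=n_total - n_pos, replace=False)
    return np.concatenate([pos, neg])
```

Drawing such a training set per task and per repetition yields the 100 resampled training/test splits used in the evaluation.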

CHEF implementation for molecules. We first train a fully-connected deep neural network (FCN) for the prediction of bioactivities from the ChEMBL20 database (original domain). The network is trained in a massive multi-task setting, where 1,830 tasks are predicted at once, such that the network is forced to learn proficient representations that can be shared for multiple tasks (Ma et al., 2015; Unterthiner et al., 2014). The total number of 892,480 features of the ChEMBL20 database was reduced by a sparseness criterion on the molecules to 1,866 features. The neurons in the input layer of the FCN represent one of 1,866 ECFP6 (Rogers and Hahn, 2010) features, which are used as a feature representation for describing the raw structure of the molecules. Each neuron of the output layer represents one of the 1,830 prediction tasks. We use the pre-trained network and apply CHEF by representation fusion of the three bottleneck layers of the network for predicting the twelve different toxic effects of the new domain of the Tox21 Data Challenge.

Method ROC-AUC
CHEF 0.76 ± 0.02
SVM 0.66 ± 0.03
RF 0.64 ± 0.03
Table 4: ROC-AUC performance for few-shot drug discovery. CHEF is compared to conventional methods (SVM, RF) for the prediction of toxic effects. Mean and standard deviation are computed across twelve different effects and across 100 differently sampled training and test sets.

Experimental evaluation. We evaluate the performance of CHEF on the twelve tasks of the Tox21 Data Challenge and compare it to conventional methods, like SVMs and RFs, that are used in drug design when little data is available. We use SVMs with a MinMax kernel, since it previously yielded the best results (Mayr et al., 2018). For CHEF, only the 1,866 ECFP input features of the ChEMBL20 pre-training network are used, where features with only few occurrences in the training set are discarded, since they do not give enough learning signal for neural network training. For SVMs and RFs, all encountered ECFP features are used. ROC-AUC values are computed across the twelve tasks of the Tox21 Data Challenge and across 100 differently sampled training and test sets. CHEF achieves significantly better ROC-AUC values than SVMs and RFs. Table 4 shows the results (p-value  for both SVM and RF when using a paired Wilcoxon test). Results for the twelve individual tasks and a more detailed description are given in the appendix. CHEF significantly outperforms traditional methods in drug design, which demonstrates the great potential of cross-domain few-shot learning in this field.
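The significance test pairs the per-run ROC-AUC values of two methods over the 100 resampled training/test sets. A scipy sketch follows; the AUC values are synthetic stand-ins, not the paper's results:

```python
import numpy as np
from scipy import stats

# Synthetic per-run ROC-AUCs for two methods on the same 100 resamples.
rng = np.random.default_rng(0)
auc_method_a = 0.76 + 0.02 * rng.normal(size=100)
auc_method_b = 0.66 + 0.03 * rng.normal(size=100)

# Paired Wilcoxon signed-rank test on the per-run differences.
stat, p_value = stats.wilcoxon(auc_method_a, auc_method_b)
assert p_value < 0.01  # the synthetic samples are clearly separated
```

Pairing by resample is what makes the test sensitive: it removes the variance that the random training/test splits induce in both methods alike.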

5 Conclusion

We have introduced CHEF as a new cross-domain few-shot learning method. CHEF builds on the concept of representation fusion, which unifies information from different levels of abstraction. Representation fusion allows one to successfully tackle various few-shot learning problems with large domain shifts across a wide range of different tasks. CHEF obtains new state-of-the-art results in all categories of the cross-domain few-shot learning benchmark. Finally, we have tested the performance of CHEF on a real-world cross-domain application in drug discovery, namely toxicity prediction under a domain shift. CHEF significantly outperforms all traditional approaches, demonstrating great potential for applications in computational drug discovery.

Acknowledgments

The ELLIS Unit Linz, the LIT AI Lab, the Institute for Machine Learning, are supported by the Federal State Upper Austria. IARAI is supported by Here Technologies. We thank the projects AI-MOTION (LIT-2018-6-YOU-212), DeepToxGen (LIT-2017-3-YOU-003), AI-SNN (LIT-2018-6-YOU-214), DeepFlood (LIT-2019-8-YOU-213), Medical Cognitive Computing Center (MC3), PRIMAL (FFG-873979), S3AI (FFG-872172), DL for granular flow (FFG-871302), ELISE (H2020-ICT-2019-3 ID: 951847), AIDD (MSCA-ITN-2020 ID: 956832). We thank Janssen Pharmaceutica, UCB Biopharma SRL, Merck Healthcare KGaA, Audi.JKU Deep Learning Center, TGW LOGISTICS GROUP GMBH, Silicon Austria Labs (SAL), FILL Gesellschaft mbH, Anyline GmbH, Google Brain, ZF Friedrichshafen AG, Robert Bosch GmbH, TÜV Austria, and the NVIDIA Corporation.

Appendix A Appendix

A.1 Experimental setup

In the following, we give further details on our experimental setups.

Cross-domain few-shot learning

We utilize a ResNet-10 backbone architecture as proposed in Guo et al. (2019). The residual blocks have 64, 128, 256, and 512 units, followed by two fully connected ReLU layers with 4000 and 1000 units. During pre-training, we use a learning rate of 0.1, a momentum term of 0.9, an L2 weight decay term, a batch size of 256, and a dropout rate of 0.5. These values were tuned on the horizontal validation set of miniImagenet. For few-shot learning, we choose a Hebbian learning rate and a number of Hebb rule steps that were tuned on the vertical validation set of miniImagenet.
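As an illustration, the pre-training optimizer configuration above can be sketched in PyTorch. The tiny stand-in model and the concrete weight-decay value are placeholders, not the paper's ResNet-10 setup:

```python
# Sketch of the SGD pre-training configuration described above:
# learning rate 0.1, momentum 0.9, dropout 0.5, batch size 256.
# The model below and the weight-decay value 1e-4 are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 4000), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(4000, 1000), nn.ReLU(),
)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,              # tuned on the horizontal validation set
    momentum=0.9,
    weight_decay=1e-4,   # placeholder value
)

# One dummy optimization step to show the training-loop shape.
x = torch.randn(256, 512)        # batch size 256
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
print(optimizer.defaults["lr"], optimizer.defaults["momentum"])
```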

Figures 2 and 3 show the performance of the pre-trained PyTorch ResNet-18 network, where pre-training is performed on the entire Imagenet dataset. Additionally, the individual performances of the ResNet-18 layers are depicted. The miniImagenet pre-trained ResNet-10 is shown for comparison. The plots show the general tendency that, on domains farther away from the training domain, the ensemble relies more heavily on features from lower layers, i.e. features that are less specific to the original domain.

CropDiseases 5-way | EuroSAT 5-way
Method | 5-shot | 20-shot | 50-shot | 5-shot | 20-shot | 50-shot
CHEF (ResNet-10) | 86.87 ± 0.27 | 94.78 ± 0.12 | 96.77 ± 0.08 | 74.15 ± 0.27 | 83.31 ± 0.14 | 86.55 ± 0.15
CHEF (ResNet-18) | 91.34 ± 0.16 | 96.99 ± 0.07 | 98.07 ± 0.09 | 83.44 ± 0.28 | 91.62 ± 0.13 | 93.65 ± 0.11
ISIC 5-way | ChestX 5-way
Method | 5-shot | 20-shot | 50-shot | 5-shot | 20-shot | 50-shot
CHEF (ResNet-10) | 41.26 ± 0.34 | 54.30 ± 0.34 | 60.86 ± 0.18 | 24.72 ± 0.14 | 29.71 ± 0.27 | 31.25 ± 0.20
CHEF (ResNet-18) | 46.29 ± 0.16 | 58.85 ± 0.26 | 65.01 ± 0.17 | 26.11 ± 0.16 | 31.83 ± 0.35 | 36.47 ± 0.26
Table 5: Results of our few-shot learning method CHEF on the four proposed cross-domain few-shot challenges CropDiseases, EuroSAT, ISIC, and ChestX. We compare the ResNet-10 architecture pre-trained on miniImagenet to the ResNet-18 architecture pre-trained on Imagenet. The average 5-way few-shot classification accuracies (%, top-1) along with confidence intervals are reported on the test split of each dataset.
Figure 3: 20-shot top-1 accuracies (along with confidence intervals) of different residual blocks and the output layer of an Imagenet-pretrained ResNet-18, and the ensemble result (orange, “ens”), on the four different datasets of the cross-domain few-shot learning benchmark. For comparison, the ResNet-10 ensemble results (green) are also included.

miniImagenet and tieredImagenet

Backbone pre-training. For the miniImagenet and tieredImagenet experiments, we utilize Conv-4 and ResNet-12 architectures as backbone networks. The Conv-4 network is described in detail by Vinyals et al. (2016). It is a stack of 4 modules, each of which consists of a convolutional layer with 64 units, a batch normalization layer (Ioffe and Szegedy, 2015), a ReLU activation, and a max-pooling layer. On top, we place two fully connected ReLU layers with 400 and 100 units, respectively. The ResNet-12 is described in Lee et al. (2019). We configure the backbone as 4 residual blocks with 64, 160, 320, and 640 units, followed by two ReLU-activated fully connected layers with 4000 and 1000 units. The residual blocks contain a max-pooling and a batch-norm layer and are regularized by DropBlock (Ghiasi et al., 2018), with one block size for the first two blocks and another for the latter two blocks.
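The Conv-4 backbone described above can be sketched in PyTorch as follows. The 84×84 input resolution (the usual miniImagenet size) and the 3×3 kernels with padding 1 are assumptions here, not stated in the text:

```python
# Sketch of the Conv-4 backbone (Vinyals et al., 2016): four
# conv-BN-ReLU-maxpool modules with 64 channels each, followed by two
# fully connected ReLU layers with 400 and 100 units.
import torch
import torch.nn as nn

def conv_module(in_ch, out_ch=64):
    """One Conv-4 module: conv, batch norm, ReLU, 2x2 max-pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

backbone = nn.Sequential(
    conv_module(3), conv_module(64), conv_module(64), conv_module(64),
    nn.Flatten(),
    # 84 -> 42 -> 21 -> 10 -> 5 after four poolings, so 64 * 5 * 5 features
    nn.Linear(64 * 5 * 5, 400), nn.ReLU(),
    nn.Linear(400, 100), nn.ReLU(),
)

out = backbone(torch.randn(2, 3, 84, 84))
print(out.shape)  # torch.Size([2, 100])
```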

We pre-train these backbone models for 500 epochs with three different learning rates. For this, we use the PyTorch SGD module for stochastic gradient descent with a momentum term, an L2 weight decay factor, a mini-batch size, and a dropout probability. This pre-training is performed on the horizontal training sets of the miniImagenet and tieredImagenet datasets, resulting in 3 trained models per dataset. We apply early stopping by evaluating the model after each epoch and selecting the model with the lowest loss on the horizontal validation set.
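The early-stopping rule above amounts to keeping the epoch with the lowest validation loss. A minimal sketch, with synthetic per-epoch losses standing in for the real validation curve:

```python
# Minimal early-stopping sketch: after each epoch, evaluate on the
# validation set and keep the model with the lowest validation loss.
# The losses below are synthetic placeholders.
val_losses = [0.9, 0.7, 0.65, 0.68, 0.72]  # hypothetical per-epoch losses

best_loss = float("inf")
best_epoch = None
for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_epoch = loss, epoch  # a checkpoint would be saved here

print(best_epoch, best_loss)  # → 2 0.65
```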

Few-shot learning. For few-shot learning, we perform a grid search to determine the best hyper-parameter setting for each of the datasets and each of the 1-shot and 5-shot settings, using the loss on the vertical validation set. We treat the 3 backbone models that were pre-trained with different learning rates, as described in the previous paragraph, as a hyper-parameter. The hyper-parameters used for this grid search are listed in Table 6.

After determining the best hyper-parameter setting following this procedure, we perform 1-shot and 5-shot learning on the vertical test sets of miniImagenet and tieredImagenet using different random seeds. The results are listed in Table 3.

parameter | values
learning rate of pre-trained model |
dropout probability |
Hebbian learning rate |
number of Hebb rule steps |
Table 6: Hyper-parameter search space for 1-shot and 5-shot learning on miniImagenet and tieredImagenet using Conv-4 and ResNet-12 backbone models. Best hyper-parameters were evaluated using a grid-search and the loss on the vertical validation set of miniImagenet or tieredImagenet.
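The grid search described above can be sketched schematically: evaluate every combination of hyper-parameters on the vertical validation set and keep the best one. The candidate values and the validation function below are placeholders (the actual values in Table 6 did not survive extraction):

```python
# Schematic grid search over a hyper-parameter space; all values and the
# validation objective are illustrative placeholders.
from itertools import product

grid = {
    "pretrain_lr": [0.1, 0.01, 0.001],  # hypothetical values
    "dropout": [0.0, 0.5],              # hypothetical values
    "hebb_lr": [1e-3, 1e-4],            # hypothetical values
}

def validation_loss(cfg):
    # Placeholder: stands in for running few-shot learning with cfg and
    # measuring the loss on the vertical validation set.
    return abs(cfg["pretrain_lr"] - 0.01) + 0.1 * cfg["dropout"] + cfg["hebb_lr"]

best_cfg = min(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=validation_loss,
)
print(best_cfg)
```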

Ensemble learning and performance of individual layers. To evaluate the performance of Hebbian learning using only individual layers versus using the ensemble of layers, we additionally perform few-shot learning on the vertical test sets with only individual layers as input to the Hebbian learning. As shown in Figures 4 and 5, the performance of individual layers varies strongly across the 1-shot and 5-shot settings and across the miniImagenet and tieredImagenet datasets. This indicates that the usefulness of the representations provided by the individual layers strongly depends on the data and task setting. In contrast, the ensemble of layers reliably achieves either the best or second-best performance across all settings.
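The layer-ensemble idea can be illustrated as follows: each layer-wise learner produces class scores for the query examples, and the ensemble prediction combines them. Summing the per-layer scores is one simple fusion scheme used purely for illustration (the paper's exact combination rule may differ), and the scores are synthetic:

```python
# Illustration of representation fusion as an ensemble over layer-wise
# class scores. Scores are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_queries, n_classes = 4, 10, 5
layer_scores = rng.standard_normal((n_layers, n_queries, n_classes))

# Predictions from each individual layer-wise learner.
per_layer_pred = layer_scores.argmax(axis=-1)             # (n_layers, n_queries)
# Ensemble prediction: sum scores across layers, then take the argmax.
ensemble_pred = layer_scores.sum(axis=0).argmax(axis=-1)  # (n_queries,)
print(ensemble_pred.shape)
```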

Figure 4: Ablation study of the Conv-4 architecture on the miniImagenet and tieredImagenet datasets for 1-shot and 5-shot learning. The plots show the individual performances of Hebbian learners acting on single layers and their ensemble performance, along with 95% confidence intervals. The labels on the x-axis indicate how far the respective layer is from the output layer.
Figure 5: Ablation study of the ResNet-12 architecture on the miniImagenet and tieredImagenet datasets for 1-shot and 5-shot learning. The plots show the individual performances of Hebbian learners acting on single layers and their ensemble performance, along with 95% confidence intervals. The labels on the x-axis indicate how far the respective layer is from the output layer.

Example Application: Drug Discovery

Details on pre-training on the ChEMBL20 database. For training a fully connected deep neural network (FCN) on the ChEMBL20 database, the number of features is reduced from 892,480 to 1,866 by a sparseness criterion on the molecules. The FCN is trained on 1.1 million molecules for 1,000 epochs, minimizing binary cross-entropy and masking out missing values in the objective, as described in Mayr et al. (2018).
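The sparseness criterion above can be sketched as discarding features that occur in fewer than a minimum number of training molecules. The threshold and the binary feature matrix below are illustrative, not the actual values that reduce 892,480 features to 1,866:

```python
# Sketch of a sparseness filter on a binary feature matrix: keep only
# columns (features) that occur in at least `min_count` molecules.
# Data and threshold are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((1000, 50)) < 0.05).astype(np.int8)  # sparse binary features
min_count = 30                                        # hypothetical threshold

counts = X.sum(axis=0)       # per-feature occurrence counts
keep = counts >= min_count   # boolean mask of retained features
X_filtered = X[:, keep]
print(X.shape, "->", X_filtered.shape)
```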

Details on the compared methods. We use ECFP6 features as the raw molecule representation. Note that the number of possible distinct ECFP6 features is not predefined: a new molecule may be structurally different from all previously seen ones and might therefore contain new, unseen ECFP6 features. For SVMs, a MinMax kernel (Mayr et al., 2016) is used, which operates directly on counts of ECFP6 features; we use LIBSVM (Chang and Lin, 2011) as provided by scikit-learn (Pedregosa et al., 2011). For RFs, we use the scikit-learn implementation with 1,000 trees and default values for the other hyperparameters.
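The MinMax kernel on non-negative count vectors has the standard form K(x, y) = Σᵢ min(xᵢ, yᵢ) / Σᵢ max(xᵢ, yᵢ). A direct transcription, with toy count vectors for illustration:

```python
# MinMax kernel between two non-negative ECFP count vectors; the toy
# vectors are illustrative only.
import numpy as np

def minmax_kernel(x, y):
    """K(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i)."""
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

x = np.array([2, 0, 1, 3])
y = np.array([1, 1, 1, 3])
print(minmax_kernel(x, y))  # min-sum 5 over max-sum 7
```

The kernel equals 1 exactly when the count vectors are identical, and decreases toward 0 as they diverge.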

Detailed results on the Tox21 dataset in a few-shot setup. Table 7 lists detailed results and p-values for all twelve few-shot tasks of the Tox21 Data Challenge. For calculating p-values, a paired Wilcoxon test is used.
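The paired Wilcoxon test can be sketched as follows; the per-split ROC-AUC arrays are synthetic stand-ins for the 100 sampled training/test sets (not the paper's results), used only to show the mechanics of the one-sided paired test:

```python
# Paired one-sided Wilcoxon signed-rank test on per-split ROC-AUC
# values of two methods. The AUC arrays are synthetic placeholders.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_runs = 100
auc_chef = 0.80 + 0.03 * rng.standard_normal(n_runs)  # hypothetical CHEF AUCs
auc_svm = 0.72 + 0.04 * rng.standard_normal(n_runs)   # hypothetical SVM AUCs

# Null hypothesis: the second method performs at least as well.
stat, p_value = wilcoxon(auc_chef, auc_svm, alternative="greater")
print(f"p-value: {p_value:.3e}")
```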

Task | CHEF | SVM | RF | p-value (SVM) | p-value (RF)
NR.AhR | 0.86 ± 0.07 | 0.79 ± 0.07 | 0.75 ± 0.07 | 2.90e-12 | 1.19e-17
NR.AR | 0.79 ± 0.09 | 0.60 ± 0.11 | 0.61 ± 0.11 | 1.20e-17 | 5.25e-18
NR.AR.LBD | 0.84 ± 0.05 | 0.47 ± 0.11 | 0.52 ± 0.10 | 1.94e-18 | 1.95e-18
NR.Aromatase | 0.74 ± 0.08 | 0.68 ± 0.09 | 0.64 ± 0.09 | 3.77e-09 | 1.12e-13
NR.ER | 0.73 ± 0.08 | 0.70 ± 0.08 | 0.65 ± 0.09 | 1.39e-03 | 4.25e-11
NR.ER.LBD | 0.71 ± 0.08 | 0.68 ± 0.09 | 0.65 ± 0.10 | 1.96e-03 | 2.40e-06
NR.PPAR.gamma | 0.66 ± 0.07 | 0.61 ± 0.10 | 0.60 ± 0.11 | 8.04e-06 | 3.05e-06
SR.ARE | 0.76 ± 0.08 | 0.66 ± 0.08 | 0.61 ± 0.09 | 2.43e-14 | 2.51e-17
SR.ATAD5 | 0.68 ± 0.07 | 0.62 ± 0.10 | 0.61 ± 0.10 | 2.23e-07 | 7.65e-10
SR.HSE | 0.74 ± 0.06 | 0.62 ± 0.10 | 0.60 ± 0.10 | 3.42e-16 | 1.40e-16
SR.MMP | 0.89 ± 0.05 | 0.81 ± 0.08 | 0.79 ± 0.09 | 7.36e-15 | 4.41e-16
SR.p53 | 0.77 ± 0.08 | 0.67 ± 0.10 | 0.63 ± 0.10 | 3.50e-13 | 8.86e-17
Table 7: ROC-AUC performances for the twelve individual few-shot tasks (rows) of the Tox21 Data Challenge. CHEF is compared to conventional methods (SVM, RF). Averages and standard deviations are computed across 100 differently sampled training and test sets. The last two columns show the results of paired Wilcoxon tests with the null hypotheses that SVM and RF, respectively, perform better than CHEF.

Footnotes

  1. Our implementation is available at github.com/ml-jku/chef.

References

  1. Infinite mixture prototypes for few-shot learning. In Proceedings of the 36th International Conference on Machine Learning, pp. 232–241.
  2. Learning to learn by gradient descent by gradient descent. In Proc. Adv. Neural Inf. Process. Syst. (NIPS), pp. 3981–3989.
  3. A theory of learning from different domains. Machine Learning 79, pp. 151–175.
  4. Learning a synaptic learning rule. Citeseer.
  5. Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828.
  6. Random forests. Machine Learning 45 (1), pp. 5–32.
  7. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2 (3), pp. 1–27.
  8. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818.
  9. Skin lesion analysis toward melanoma detection 2018: a challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv preprint arXiv:1902.03368.
  10. Support-vector networks. Machine Learning 20 (3), pp. 273–297.
  11. AGA: attribute-guided augmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7455–7463.
  12. Diversity with cooperation: ensemble methods for few-shot classification. In Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3723–3731.
  13. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (4), pp. 594–611.
  14. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135.
  15. Hebbian synaptic plasticity, comparative and developmental aspects. In The Handbook of Brain Theory and Neural Networks, M. Arbib (Ed.), pp. 515–521.
  16. A survey on concept drift adaptation. ACM Computing Surveys 46 (4).
  17. Low-shot learning via covariance-preserving adversarial augmentation networks. In Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), pp. 975–985.
  18. The ChEMBL database in 2017. Nucleic Acids Research 45 (D1), pp. D945–D954.
  19. DropBlock: a regularization method for convolutional networks. In Advances in Neural Information Processing Systems 31, pp. 10727–10737.
  20. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375.
  21. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587.
  22. Generative adversarial nets. In Proc. Adv. Neural Inf. Process. Syst. (NIPS), pp. 2672–2680.
  23. A new benchmark for evaluation of cross-domain few-shot learning. arXiv preprint arXiv:1912.07200.
  24. Low-shot visual recognition by shrinking and hallucinating features. In Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 3018–3027.
  25. The organization of behavior: a neuropsychological theory. Psychology Press.
  26. EuroSAT: a novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12 (7), pp. 2217–2226.
  27. Learning a kernel function for classification with small training samples. In Proc. Int. Conf. Mach. Learn. (ICML), pp. 401–408.
  28. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637.
  29. Machine learning in drug discovery. ACS Publications.
  30. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94.
  31. Accurate prediction of biological assays with high-throughput microscopy images and convolutional networks. Journal of Chemical Information and Modeling 59 (3), pp. 1163–1171.
  32. Meta-learning in neural networks: a survey. arXiv preprint arXiv:2004.05439.
  33. Cross attention network for few-shot classification. In Advances in Neural Information Processing Systems, pp. 4005–4016.
  34. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
  35. Editorial: Tox21 challenge to build predictive models of nuclear receptor and stress response pathways as mediated by exposure to environmental toxicants and drugs. Frontiers in Environmental Science 5 (3), pp. 5.
  36. Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Vol. 37, pp. 448–456.
  37. Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems 19, pp. 601–608.
  38. Siamese neural networks for one-shot image recognition. In Proc. Int. Conf. Mach. Learn. (ICML) Deep Learn. Workshop, Vol. 2.
  39. A review of domain adaptation without target labels. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  40. One-shot learning of scene locations via feature trajectory transfer. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 78–86.
  41. Deep learning. Nature 521 (7553), pp. 436–444.
  42. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
  43. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10657–10665.
  44. Finding task-relevant features for few-shot learning by category traversal. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–10.
  45. Learning to propagate labels: transductive propagation network for few-shot learning. In International Conference on Learning Representations.
  46. Learning from very few samples: a survey. arXiv preprint arXiv:2009.02653.
  47. Deep neural nets as a method for quantitative structure–activity relationships. Journal of Chemical Information and Modeling 55 (2), pp. 263–274.
  48. Deep learning: a critical appraisal. arXiv preprint arXiv:1801.00631.
  49. DeepTox: toxicity prediction using deep learning. Frontiers in Environmental Science 3, pp. 80.
  50. Large-scale comparison of machine learning methods for drug target prediction on ChEMBL. Chemical Science 9 (24), pp. 5441–5451.
  51. A simple neural attentive meta-learner. In International Conference on Learning Representations.
  52. Using deep learning for image-based plant disease detection. Frontiers in Plant Science 7, pp. 1419.
  53. Metalearning with Hebbian fast weights. arXiv preprint arXiv:1807.05076.
  54. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.
  55. TADAM: task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, pp. 721–731.
  56. Low-shot learning from imaginary 3D model. In Proc. IEEE Winter Conf. Applica. Comput. Vis. (WACV), pp. 978–985.
  57. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359.
  58. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8026–8037.
  59. Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830.
  60. Meta-learning with implicit gradients. In Advances in Neural Information Processing Systems 32, pp. 113–124.
  61. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations (ICLR 2017).
  62. Meta-learning for semi-supervised few-shot classification. In International Conference on Learning Representations.
  63. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling 50 (5), pp. 742–754.
  64. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
  65. Human-level protein localization with convolutional neural networks. In International Conference on Learning Representations.
  66. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
  67. Meta-learning with latent embedding optimization. In International Conference on Learning Representations.
  68. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, pp. 2234–2242.
  69. Deep learning in neural networks: an overview. Neural Networks 61, pp. 85–117.
  70. Evolutionary principles in self-referential learning. On learning how to learn: the meta-meta-meta…-hook. Diploma thesis, Technische Universität München, Germany.
  71. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087.
  72. Industry-scale application and evaluation of deep learning for drug target prediction. Journal of Cheminformatics 12, pp. 1–13.
  73. Meta-transfer learning for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 403–412.
  74. Learning to compare: relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199–1208.
  75. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data 5, pp. 180161.
  76. Cross-domain few-shot classification via learned feature-wise transformation. In Proc. Int. Conf. Learn. Represent. (ICLR).
  77. Multi-task deep networks for drug target prediction. In Neural Information Processing Systems, Vol. 2014, pp. 1–4.
  78. Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NIPS).
  79. ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2106.
  80. Analyzing concept drift and shift from sample data. Data Mining and Knowledge Discovery 32, pp. 1179–1199.
  81. Learning in the presence of concept drift and hidden contexts. Machine Learning 23 (1), pp. 69–101.
  82. An introduction to domain adaptation and transfer learning. arXiv preprint arXiv:1812.11806.
  83. Few-shot learning via embedding adaptation with set-to-set functions. In Computer Vision and Pattern Recognition (CVPR).
  84. Deep triplet ranking networks for one-shot recognition. arXiv preprint arXiv:1804.07275.
  85. TapNet: neural network augmented with task-adaptive projection for few-shot learning. In International Conference on Machine Learning (ICML).
  86. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.
  87. Variational few-shot learning. In The IEEE International Conference on Computer Vision (ICCV).