Few-shot Learning for Time-series Forecasting
Abstract
Time-series forecasting is important for many applications. Forecasting models are usually trained using time-series data from a specific target task. However, sufficient data in the target task might be unavailable, which leads to performance degradation. In this paper, we propose a few-shot learning method that forecasts a future value of a time series in a target task given only a few time series from that task. Our model is trained using time-series data from multiple training tasks that are different from target tasks. It uses the few given time series to build a forecasting function based on a recurrent neural network with an attention mechanism. With the attention mechanism, we can retrieve patterns from the small number of time series that are useful for the current situation. Our model is trained by minimizing an expected test error of forecasting next-timestep values. We demonstrate the effectiveness of the proposed method using 90 time-series datasets.
1 Introduction
Time-series forecasting is important for many applications, including financial markets [4, 23, 8], energy management [11, 46], traffic systems [29, 20, 31, 55], and environmental engineering [27]. Recently, deep learning methods, such as Long Short-Term Memory (LSTM) [17], have been widely used for time-series forecasting models due to their high performance [3, 35, 29].
Forecasting models are usually trained using time-series data from a specific target task, in which we want to forecast future values. For example, to train traffic congestion forecasting models, we use traffic congestion time series collected at many locations. However, sufficient data in the target task might be unavailable, which leads to performance degradation.
In this paper, we propose a few-shot learning method that forecasts time series in a target task given a few time series, where time series in the target task are not given in the training phase. The proposed method trains our model using time-series data in multiple training tasks that are different from the target task. Figure 1 illustrates our problem formulation. Time series in other tasks might have dynamics similar to those in the target task. For example, many time series include a trend that shows the long-term tendency of the time series to increase or decrease. Also, time series related to human activity, such as traffic volume and electric power consumption, exhibit daily and/or weekly cyclic dynamics. By using knowledge learned from various time-series data, we can improve the forecasting performance on the target task.
Given a few time series, which are called a support set, our model outputs the value at the next timestep of a time series, which is called a query. In particular, we first obtain representations of the support set with a bidirectional LSTM. Then, we forecast future values of the query considering the support representations through an attention mechanism, as well as the query's own pattern through an LSTM. With the attention mechanism, we can retrieve patterns in the support set that are useful for forecasting in the current situation. In addition, the attention mechanism lets us handle support sets with different numbers of time series of different lengths. Given a target task, our model forecasts future values that are tailored to the target task without retraining. Our model is trained by minimizing an expected test error of forecasting next-timestep values given a support set, which is calculated using data in multiple training tasks.
The main contributions of this paper are:

Our method is the first few-shot learning method for time-series forecasting that does not require retraining for target tasks.

Our model can handle support sets of different sizes and time series of different lengths with an attention mechanism and LSTMs.

We demonstrate the effectiveness of the proposed method using 90 time-series datasets.
The remainder of this paper is organized as follows. In Section 2, we briefly review related work. In Section 3, we propose our model and its training procedure for few-shot time-series forecasting. In Section 4, we show that the proposed method outperforms existing methods. Finally, we present concluding remarks and future work in Section 5.
2 Related work
To transfer knowledge from source tasks to target tasks, many transfer learning, domain adaptation, and multi-task learning methods have been proposed [49, 33, 21, 19, 26]. However, these methods require a relatively large number of time series from the target tasks. To reduce the required number of target examples, few-shot learning, or meta-learning, has attracted considerable attention recently [45, 6, 40, 2, 51, 47, 5, 13, 32, 24, 14, 44, 54, 12, 15, 22, 16, 7, 41, 42, 50, 34, 53, 28]. There are some applications of few-shot learning to time-series forecasting [18, 43, 30, 38, 48, 1]. Existing few-shot time-series forecasting methods can be categorized into two groups: fine-tune-based and meta-feature-based. Fine-tune-based methods [18, 43] train models using training tasks, and fine-tune the models given target tasks. In contrast, the proposed method does not need to retrain the model given target tasks. Meta-feature-based methods [30, 38, 48, 1] use meta features of time series, such as the standard deviation and length, to select forecasting models. In contrast, the proposed method does not require meta features to be specified; it extracts latent representations of time series with LSTMs. Neural-network-based time-series forecasting models [36], which can be considered a meta-learning method, have been used for zero-shot time-series forecasting [37]. Since they consider zero-shot learning, where no target examples are given, they cannot use given target time-series data. Recurrent attentive neural processes [39] use recurrent neural networks with an attention mechanism for meta-learning, where attention is applied to past sequences to extend neural processes [22, 52, 15]. Therefore, they cannot use different time series given as a support set. In contrast, the proposed method applies attention to the time series in a support set to use them for improving the forecasting performance.
3 Proposed method
We describe our model that uses a support set to build a forecasting function in Section 3.1. In Section 3.2, we present the training procedure for our model given sets of time series in multiple tasks. Then, we describe the test phase in Section 3.3.
3.1 Model
Let $\mathcal{S} = \{\mathbf{s}_n\}_{n=1}^{N}$ be a support set, where $\mathbf{s}_n = (s_{n1}, \dots, s_{nT_n})$ is the $n$th time series, $s_{nt} \in \mathbb{R}$ is a scalar continuous value at timestep $t$, $T_n$ is its length, and $N$ is the number of time series in the support set. Our model uses support set $\mathcal{S}$ to build a forecasting function that outputs predictive value $\hat{q}_{T+1}$ at the next timestep given query time series $\mathbf{q} = (q_1, \dots, q_T)$ in the same task as the support set. Figure 2 illustrates our model.
First, we obtain representations of each timestep of each time series in support set $\mathcal{S}$ using a bidirectional LSTM in the form of hidden states:

$\overrightarrow{\mathbf{h}}_{nt} = \overrightarrow{\mathrm{LSTM}}(s_{nt}, \overrightarrow{\mathbf{h}}_{n,t-1}), \quad \overleftarrow{\mathbf{h}}_{nt} = \overleftarrow{\mathrm{LSTM}}(s_{nt}, \overleftarrow{\mathbf{h}}_{n,t+1}),$  (1)

where $\overrightarrow{\mathrm{LSTM}}$ and $\overleftarrow{\mathrm{LSTM}}$ are forward and backward LSTMs, and $\overrightarrow{\mathbf{h}}_{nt}$ and $\overleftarrow{\mathbf{h}}_{nt}$ are the forward and backward hidden states of the $n$th support time series at timestep $t$. The forward (backward) hidden state $\overrightarrow{\mathbf{h}}_{nt}$ ($\overleftarrow{\mathbf{h}}_{nt}$) contains information about the time series before (after) timestep $t$. We use the concatenated vector of the forward and backward hidden states, $\mathbf{h}_{nt} = [\overrightarrow{\mathbf{h}}_{nt}; \overleftarrow{\mathbf{h}}_{nt}]$, as the representation of the $n$th time series at timestep $t$, where $[\cdot;\cdot]$ represents the concatenation of vectors. With the bidirectional LSTM, we can encode both past and future information in representation $\mathbf{h}_{nt}$, which is important for forecasting. In addition, LSTMs enable us to handle time series of different lengths.
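To make this encoding step concrete, the following is a minimal Python (PyTorch) sketch of Eq. (1); it is not the authors' implementation, and the hidden size and function names are illustrative assumptions.

import torch
import torch.nn as nn

hidden_size = 32  # assumed width; the exact value is not specified here

# Bidirectional LSTM encoder for the support set (Eq. (1)).
support_encoder = nn.LSTM(input_size=1, hidden_size=hidden_size,
                          bidirectional=True, batch_first=True)

def encode_support(support):
    """support: list of 1-D tensors, one per support time series (lengths may differ).
    Returns an (M, 2 * hidden_size) matrix stacking the concatenated forward and
    backward hidden states h_{nt} over all timesteps of all support series."""
    reps = []
    for series in support:
        x = series.view(1, -1, 1)        # (batch=1, T_n, 1): one scalar value per timestep
        h, _ = support_encoder(x)        # (1, T_n, 2 * hidden_size)
        reps.append(h.squeeze(0))
    return torch.cat(reps, dim=0)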
Second, we obtain a representation of query time series $\mathbf{q}$ with LSTM $\mathrm{LSTM}^{\mathrm{q}}$:

$\mathbf{z}_{t} = \mathrm{LSTM}^{\mathrm{q}}(q_{t}, \mathbf{z}_{t-1}),$  (2)

where $\mathbf{z}_{t}$ is the hidden state at timestep $t$. We use the hidden state at the last timestep as the query's representation $\mathbf{z} = \mathbf{z}_{T}$.
Third, we extract knowledge from the support set that is useful for forecasting using an attention mechanism:
$\mathbf{a} = \sum_{n=1}^{N} \sum_{t=1}^{T_n} \frac{\exp\left((\mathbf{W}_{\mathrm{K}} \mathbf{h}_{nt})^{\top} \mathbf{W}_{\mathrm{Q}} \mathbf{z}\right)}{\sum_{n'=1}^{N} \sum_{t'=1}^{T_{n'}} \exp\left((\mathbf{W}_{\mathrm{K}} \mathbf{h}_{n't'})^{\top} \mathbf{W}_{\mathrm{Q}} \mathbf{z}\right)} \mathbf{W}_{\mathrm{V}} \mathbf{h}_{nt},$  (3)

where $\mathbf{W}_{\mathrm{K}}$, $\mathbf{W}_{\mathrm{Q}}$, and $\mathbf{W}_{\mathrm{V}}$ are linear projection matrices. When there are support time series that have locally similar patterns to the query, the attention mechanism retrieves the information at those points, $\mathbf{W}_{\mathrm{V}} \mathbf{h}_{nt}$. The similarity is calculated by the inner product between the linearly transformed support representations, $\mathbf{W}_{\mathrm{K}} \mathbf{h}_{nt}$, and the linearly transformed query representation, $\mathbf{W}_{\mathrm{Q}} \mathbf{z}$. By training our model to minimize the expected forecasting error as described in Section 3.2, the attention mechanism learns to retrieve information that is effective for improving the forecasting performance. Since the parameters of the attention mechanism, $\mathbf{W}_{\mathrm{K}}$, $\mathbf{W}_{\mathrm{Q}}$, and $\mathbf{W}_{\mathrm{V}}$, do not depend on the number of time series in the support set, we can deal with support sets of different sizes.
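Continuing the sketch above, the attention step of Eq. (3) could look like the following; the projection width d_attn is an assumed value, not one reported in the paper.

d_attn = 32
W_K = nn.Linear(2 * hidden_size, d_attn, bias=False)  # key projection of support representations
W_Q = nn.Linear(hidden_size, d_attn, bias=False)       # query projection of the query representation
W_V = nn.Linear(2 * hidden_size, d_attn, bias=False)  # value projection of support representations

def attend(support_reps, query_rep):
    """support_reps: (M, 2 * hidden_size) from encode_support;
    query_rep: (hidden_size,) last hidden state of the query LSTM (Eq. (2))."""
    scores = W_K(support_reps) @ W_Q(query_rep)   # (M,) inner products in Eq. (3)
    weights = torch.softmax(scores, dim=0)        # normalize over all support timesteps
    return weights @ W_V(support_reps)            # (d_attn,) attended support information a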
Then, we forecast a value at the next timestep using both attention output $\mathbf{a}$ and query representation $\mathbf{z}$:

$\hat{q}_{T+1} = f([\mathbf{a}; \mathbf{z}]),$  (4)

where $f$ is a feedforward neural network. We denote by $\boldsymbol{\Theta}$ the parameters of our model, which consist of the parameters of bidirectional LSTMs $\overrightarrow{\mathrm{LSTM}}$ and $\overleftarrow{\mathrm{LSTM}}$, LSTM $\mathrm{LSTM}^{\mathrm{q}}$, and feedforward neural network $f$, as well as the linear projection matrices in the attention mechanism, $\mathbf{W}_{\mathrm{K}}$, $\mathbf{W}_{\mathrm{Q}}$, and $\mathbf{W}_{\mathrm{V}}$. By including query representation $\mathbf{z}$ in the input of the neural network, we can forecast using the query's own past values even if there is no useful information in the support set.
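The remaining pieces, the query encoder of Eq. (2) and the forecasting head of Eq. (4), can be sketched as follows, again with assumed layer sizes; this continues the snippets above and is not the released implementation.

query_encoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
head = nn.Sequential(                      # feedforward network f in Eq. (4); sizes assumed
    nn.Linear(d_attn + hidden_size, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1))

def forecast(support, query):
    """Forecast the value at the next timestep of `query` given the support set."""
    support_reps = encode_support(support)          # Eq. (1)
    h, _ = query_encoder(query.view(1, -1, 1))      # Eq. (2)
    z = h[0, -1]                                    # hidden state at the last timestep
    a = attend(support_reps, z)                     # Eq. (3)
    return head(torch.cat([a, z])).squeeze(-1)      # Eq. (4): scalar forecast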
3.2 Training
In the training phase, we are given sets of one-dimensional time series in $D$ tasks, $\{\mathcal{X}_d\}_{d=1}^{D}$, where $\mathcal{X}_d = \{\mathbf{x}_{dn}\}_{n=1}^{N_d}$ is the set of time series in task $d$, $\mathbf{x}_{dn} = (x_{dn1}, \dots, x_{dnT_{dn}})$ is the $n$th time series in task $d$, $x_{dnt} \in \mathbb{R}$ is a scalar continuous value at timestep $t$, $T_{dn}$ is its length, and $N_d$ is the number of time series in task $d$.
We estimate model parameters $\boldsymbol{\Theta}$ by minimizing the expected loss on a query set given a support set using an episodic training framework, where support and query sets are randomly generated from the training datasets to simulate target tasks:

$\hat{\boldsymbol{\Theta}} = \arg\min_{\boldsymbol{\Theta}} \mathbb{E}_{d} \left[ \mathbb{E}_{(\mathcal{S}, \mathcal{Q}) \sim \mathcal{X}_d} \left[ L(\mathcal{Q} \mid \mathcal{S}; \boldsymbol{\Theta}) \right] \right],$  (5)

where $\mathbb{E}$ represents an expectation,

$L(\mathcal{Q} \mid \mathcal{S}; \boldsymbol{\Theta}) = \frac{1}{N_{\mathcal{Q}}} \sum_{i=1}^{N_{\mathcal{Q}}} \frac{1}{T_i - 1} \sum_{t=1}^{T_i - 1} \left( q_{i,t+1} - \hat{q}_{i,t+1}(\mathbf{q}_{i,1:t}, \mathcal{S}; \boldsymbol{\Theta}) \right)^2$  (6)

is the mean squared error of the predictions of the next-timestep values in query set $\mathcal{Q}$ given support set $\mathcal{S}$, $N_{\mathcal{Q}}$ is the number of time series in the query set, $T_i$ is the length of the $i$th time series in the query set, $q_{i,t+1}$ is the value of the $i$th query time series at timestep $t+1$, $\mathbf{q}_{i,1:t}$ is that time series up to timestep $t$, and $\hat{q}_{i,t+1}$ is the forecast of our model given $\mathbf{q}_{i,1:t}$ and $\mathcal{S}$.
The training procedure of our model is shown in Algorithm 1. In each iteration, we randomly generate support and query sets (Lines 3–5) from a randomly selected training task. Given the support and query sets, we calculate the loss (Line 6) by (6). We then update model parameters $\boldsymbol{\Theta}$ using a stochastic gradient descent method (Line 7).
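Below is a hedged sketch of this episodic training loop, reusing the functions defined in the snippets of Section 3.1; the learning rate, support size, and query size are assumptions and do not reproduce the paper's exact settings.

import random

params = (list(support_encoder.parameters()) + list(query_encoder.parameters())
          + list(W_K.parameters()) + list(W_Q.parameters()) + list(W_V.parameters())
          + list(head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)             # assumed learning rate

def train_step(train_tasks, n_support=3, n_query=3):      # assumed support/query sizes
    task = random.choice(train_tasks)                      # randomly select a training task
    series = random.sample(task, n_support + n_query)      # randomly generate support and query sets
    support, queries = series[:n_support], series[n_support:]
    loss = 0.0
    for q in queries:                                      # squared error of every next-step forecast (Eq. (6))
        for t in range(1, len(q)):
            loss = loss + (forecast(support, q[:t]) - q[t]) ** 2
    loss = loss / sum(len(q) - 1 for q in queries)
    optimizer.zero_grad()
    loss.backward()                                        # stochastic gradient step on the loss (5)
    optimizer.step()
    return loss.item()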
3.3 Test
In the test phase, we are given a few time series from a new task $d^{*}$ as a support set $\mathcal{S}^{*}$. We then obtain a model that forecasts the value at the next timestep given a query time series in task $d^{*}$, without any retraining.
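In code, the test phase amounts to calling the trained model with the new task's support set and a query, with no parameter update; the tensors below are toy placeholders, not real data.

new_task_support = [torch.randn(100), torch.randn(100), torch.randn(100)]  # support set of task d*
query = torch.randn(80)                                                    # query time series observed so far
with torch.no_grad():                              # no retraining or gradient computation at test time
    next_value = forecast(new_task_support, query)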
4 Experiments
4.1 Data
We evaluated the proposed method using time-series datasets obtained from the UCR Time Series Classification Archive [10, 9], which originally contains time-series data in 128 tasks. We omitted tasks that contained missing values, time series shorter than 100 timesteps, or fewer than 50 time series, which left time-series data in 90 tasks. We used the values at the first 100 timesteps of each time series. We randomly split the tasks into 55 training, 10 validation, and 25 target tasks, where each task contains 50 time series. We normalized the values of each task to zero mean and unit variance.
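As an illustration of this preprocessing, the sketch below filters, truncates, normalizes, and splits the tasks as described above; it assumes raw_tasks maps a task name to a list of 1-D tensors, and the random seed and the choice of the first 50 series per task are arbitrary assumptions.

import random
import torch

def prepare(raw_tasks, seed=0):
    tasks = {}
    for name, series_list in raw_tasks.items():
        if any(torch.isnan(s).any() for s in series_list):
            continue                                   # omit tasks that contain missing values
        series_list = [s[:100] for s in series_list if len(s) >= 100]
        if len(series_list) < 50:
            continue                                   # omit tasks with fewer than 50 time series
        series_list = series_list[:50]                 # keep 50 time series per task
        values = torch.cat(series_list)
        tasks[name] = [(s - values.mean()) / values.std() for s in series_list]  # zero mean, unit variance per task
    names = sorted(tasks)
    random.Random(seed).shuffle(names)
    train, val, target = names[:55], names[55:65], names[65:90]  # 55 / 10 / 25 task split
    return ([tasks[n] for n in train],
            [tasks[n] for n in val],
            [tasks[n] for n in target])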
4.2 Our model setting
We used a bidirectional LSTM with hidden units for encoding support sets, and an LSTM with hidden units for encoding query sets. In the attention mechanism, we used and . For the neural network that outputs forecast value $\hat{q}_{T+1}$, we used a three-layered feedforward neural network with 32 hidden units. The activation function in the neural networks was the rectified linear unit, $\mathrm{ReLU}(x) = \max(0, x)$. Optimization was performed using Adam [25] with learning rate and dropout rate . The maximum number of training epochs was 500, and the validation datasets were used for early stopping. We set the support set size at , and the query set size at .
4.3 Comparing methods
We compared the proposed method with three types of training frameworks: model-agnostic meta-learning (MAML), domain-independent learning (DI), and domain-specific learning (DS). With MAML, initial model parameters are optimized so that they perform well when fine-tuned with a support set. For the fine-tuning, Adam with learning rate and five epochs was used. With DI, a model was trained by minimizing the error on all training tasks. With DS, a model was trained by minimizing the error on the support set of the target task. For MAML, DI, and DS, we used three types of models: LSTM, neural networks (NN), and linear models (Linear). With LSTM, we used an LSTM with 32 hidden units; for forecasting values at the next timestep, we used a three-layered feedforward neural network with 32 hidden units that takes the output of the LSTM. With NN, we used three-layered feedforward neural networks with 32 hidden units that take the values at the previous timestep as input. With Linear, we used linear regression models that take the values at the previous timestep as input. We also compared with a method that outputs the value at the previous timestep as the forecast (Pre).
4.4 Results
Table 1: RMSE of next-timestep forecasting for each target task (first half). Ours is the proposed method; the LSTM, NN, and Linear models are each trained with MAML, DI, and DS; Pre outputs the value at the previous timestep.

Ours  LSTM-MAML  LSTM-DI  LSTM-DS  NN-MAML  NN-DI  NN-DS  Linear-MAML  Linear-DI  Linear-DS  Pre
ACSF1  1.007  1.006  1.016  1.035  1.175  1.279  1.066  1.309  1.364  1.037  1.556 
Adiac  0.034  0.041  0.034  0.063  0.064  0.063  0.110  0.109  0.122  0.197  0.076 
ArrowHead  0.048  0.055  0.049  0.060  0.073  0.072  0.105  0.104  0.110  0.177  0.075 
BME  0.087  0.093  0.088  0.137  0.119  0.106  0.145  0.170  0.170  0.277  0.119 
Beef  0.047  0.070  0.039  0.080  0.070  0.067  0.121  0.108  0.119  0.371  0.073 
CBF  0.607  0.582  0.622  0.562  0.664  0.675  0.571  0.585  0.588  0.597  0.661 
Car  0.025  0.039  0.028  0.050  0.036  0.046  0.059  0.046  0.085  0.152  0.038 
ChlorineConcentration  0.551  0.497  0.571  0.513  0.520  0.554  0.533  0.529  0.562  0.553  0.556 
CinCECGTorso  0.152  0.152  0.154  0.231  0.197  0.156  0.175  0.177  0.166  0.284  0.138 
Coffee  0.062  0.064  0.070  0.077  0.084  0.091  0.125  0.111  0.129  0.193  0.075 
Computers  0.398  0.515  0.407  0.751  0.663  0.451  0.596  0.473  0.477  0.544  0.553 
CricketX  0.395  0.423  0.407  0.459  0.439  0.424  0.456  0.486  0.459  0.492  0.455 
CricketY  0.400  0.394  0.392  0.412  0.405  0.431  0.422  0.453  0.431  0.479  0.441 
CricketZ  0.443  0.469  0.457  0.508  0.488  0.471  0.494  0.532  0.506  0.542  0.514 
DiatomSizeReduction  0.024  0.033  0.028  0.039  0.038  0.046  0.048  0.046  0.068  0.143  0.039 
ECG5000  0.197  0.192  0.222  0.180  0.279  0.231  0.242  0.335  0.309  0.376  0.216 
ECGFiveDays  0.387  0.372  0.483  0.414  0.582  0.579  0.452  0.615  0.647  0.624  0.547 
EOGHorizontalSignal  0.160  0.160  0.159  0.164  0.161  0.162  0.183  0.185  0.180  0.236  0.153 
EOGVerticalSignal  0.145  0.143  0.155  0.140  0.146  0.147  0.145  0.159  0.151  0.194  0.139 
Earthquakes  1.054  1.016  1.087  1.002  1.033  1.108  1.007  1.168  1.215  0.997  1.397 
EthanolLevel  0.033  0.076  0.036  0.187  0.053  0.055  0.082  0.041  0.071  0.251  0.024 
FaceAll  0.406  0.413  0.423  0.450  0.649  0.630  0.576  0.671  0.752  0.625  0.638 
FaceFour  0.357  0.333  0.353  0.363  0.337  0.361  0.358  0.378  0.393  0.456  0.346 
FacesUCR  0.442  0.401  0.458  0.516  0.802  0.661  0.548  0.710  0.779  0.631  0.647 
FiftyWords  0.051  0.059  0.056  0.181  0.095  0.101  0.164  0.167  0.179  0.258  0.120 
Fish  0.021  0.032  0.026  0.059  0.040  0.038  0.055  0.042  0.077  0.144  0.036 
FordA  0.107  0.111  0.135  0.200  0.273  0.292  0.336  0.429  0.420  0.511  0.316 
FordB  0.102  0.109  0.130  0.160  0.293  0.277  0.315  0.448  0.436  0.521  0.326 
FreezerRegularTrain  0.224  0.227  0.226  0.236  0.229  0.260  0.247  0.240  0.245  0.295  0.224 
FreezerSmallTrain  0.227  0.227  0.226  0.253  0.228  0.255  0.240  0.237  0.238  0.292  0.224 
Fungi  0.121  0.150  0.139  0.535  0.191  0.218  0.285  0.246  0.249  0.321  0.176 
GunPoint  0.064  0.072  0.065  0.123  0.072  0.071  0.109  0.109  0.122  0.191  0.088 
GunPointAgeSpan  0.067  0.070  0.067  0.068  0.071  0.085  0.099  0.083  0.106  0.177  0.067 
GunPointMaleVersusFemale  0.056  0.064  0.055  0.067  0.118  0.065  0.099  0.090  0.102  0.244  0.056 
GunPointOldVersusYoung  0.043  0.049  0.047  0.061  0.067  0.073  0.116  0.115  0.124  0.203  0.074 
Ham  0.127  0.123  0.131  0.301  0.255  0.226  0.302  0.326  0.311  0.426  0.221 
HandOutlines  0.031  0.052  0.032  0.101  0.046  0.058  0.101  0.059  0.104  0.224  0.041 
Haptics  0.372  0.307  0.373  0.404  0.478  0.412  0.378  0.486  0.497  0.554  0.363 
Herring  0.023  0.031  0.030  0.037  0.044  0.046  0.074  0.062  0.077  0.148  0.046 
HouseTwenty  0.388  0.392  0.401  0.383  0.387  0.400  0.379  0.404  0.369  0.416  0.389 
InlineSkate  0.058  0.066  0.064  0.092  0.075  0.073  0.082  0.055  0.087  0.182  0.043 
InsectEPGRegularTrain  0.032  0.246  0.018  0.814  0.167  0.033  0.510  0.061  0.071  0.621  0.004 
InsectEPGSmallTrain  0.030  0.093  0.022  0.685  0.516  0.036  0.484  0.025  0.064  0.545  0.002 
InsectWingbeatSound  0.090  0.101  0.103  0.159  0.146  0.185  0.191  0.286  0.262  0.354  0.196 
LargeKitchenAppliances  0.846  0.919  0.876  0.862  1.176  1.212  0.955  1.133  1.114  1.040  1.325 
Lightning2  0.042  0.056  0.044  0.079  0.054  0.055  0.059  0.064  0.084  0.185  0.026 
Lightning7  0.255  0.259  0.264  0.290  0.276  0.268  0.260  0.247  0.252  0.290  0.258 
Mallat  0.016  0.033  0.023  0.040  0.046  0.050  0.139  0.061  0.110  0.174  0.038 
Meat  0.049  0.055  0.054  0.167  0.093  0.084  0.151  0.178  0.182  0.276  0.122 
MedicalImages  0.173  0.183  0.209  0.327  0.250  0.248  0.363  0.370  0.351  0.403  0.251 
MixedShapesRegularTrain  0.047  0.051  0.047  0.081  0.068  0.064  0.070  0.098  0.091  0.188  0.051 
MixedShapesSmallTrain  0.031  0.039  0.034  0.041  0.054  0.049  0.050  0.086  0.089  0.179  0.039 
NonInvasiveFetalECGThorax1  0.045  0.048  0.046  0.072  0.084  0.088  0.114  0.168  0.146  0.210  0.109 
NonInvasiveFetalECGThorax2  0.046  0.050  0.044  0.057  0.069  0.075  0.107  0.171  0.135  0.205  0.105 
OSULeaf  0.052  0.060  0.057  0.123  0.085  0.086  0.123  0.119  0.132  0.193  0.084 
OliveOil  0.032  0.043  0.035  0.053  0.055  0.056  0.118  0.113  0.124  0.180  0.072 
Phoneme  0.401  0.433  0.382  0.468  0.646  0.442  0.496  0.510  0.484  0.581  0.452 
PigAirwayPressure  0.025  0.038  0.032  0.071  0.098  0.070  0.067  0.057  0.076  0.196  0.019 
PigArtPressure  0.031  0.043  0.033  0.079  0.047  0.056  0.100  0.067  0.096  0.173  0.045 
PigCVP  0.078  0.084  0.076  0.101  0.077  0.084  0.082  0.084  0.105  0.174  0.070 
Plane  0.105  0.111  0.105  0.160  0.138  0.135  0.200  0.294  0.245  0.316  0.189 
PowerCons  0.339  0.319  0.346  0.404  0.360  0.378  0.405  0.447  0.396  0.461  0.352 
RefrigerationDevices  0.577  0.579  0.589  0.606  0.591  0.648  0.612  0.593  0.613  0.656  0.606 
Rock  0.048  0.086  0.022  0.804  0.844  0.051  0.641  0.054  0.068  0.243  0.010 
ScreenType  0.281  0.352  0.284  0.422  0.291  0.288  0.312  0.294  0.284  0.336  0.304 
SemgHandGenderCh2  0.852  0.870  0.847  0.834  0.950  0.959  0.887  0.919  0.946  0.899  1.100 
SemgHandMovementCh2  0.911  0.914  0.910  0.909  0.964  0.993  0.955  0.969  1.003  0.945  1.161 
SemgHandSubjectCh2  0.826  0.833  0.825  0.827  0.900  0.934  0.859  0.898  0.924  0.876  1.072 
ShapeletSim  1.027  1.019  1.052  1.003  1.070  1.181  1.015  1.148  1.202  1.004  1.382 
ShapesAll  0.032  0.044  0.035  0.078  0.069  0.050  0.091  0.079  0.081  0.191  0.039 
Table 2: RMSE of next-timestep forecasting for each target task (second half), continued from Table 1.

Ours  LSTM-MAML  LSTM-DI  LSTM-DS  NN-MAML  NN-DI  NN-DS  Linear-MAML  Linear-DI  Linear-DS  Pre
SmallKitchenAppliances  1.082  1.077  1.079  1.086  1.162  1.096  1.111  1.271  1.243  1.207  1.277 
StarLightCurves  0.027  0.052  0.024  0.256  0.094  0.036  0.068  0.038  0.072  0.464  0.012 
Strawberry  0.065  0.064  0.074  0.134  0.088  0.086  0.140  0.154  0.161  0.234  0.116 
SwedishLeaf  0.132  0.132  0.129  0.207  0.154  0.159  0.220  0.259  0.241  0.314  0.187 
Symbols  0.029  0.047  0.031  0.072  0.066  0.050  0.083  0.065  0.088  0.192  0.039 
ToeSegmentation1  0.150  0.156  0.152  0.197  0.185  0.186  0.222  0.255  0.250  0.318  0.186 
ToeSegmentation2  0.146  0.157  0.149  0.192  0.169  0.174  0.190  0.200  0.203  0.270  0.166 
Trace  0.123  0.114  0.124  0.160  0.160  0.164  0.195  0.227  0.210  0.285  0.162 
TwoPatterns  0.603  0.577  0.609  0.578  0.616  0.626  0.578  0.602  0.590  0.625  0.600 
UMD  0.078  0.080  0.076  0.153  0.132  0.113  0.133  0.151  0.155  0.261  0.104 
UWaveGestureLibraryAll  0.085  0.097  0.089  0.134  0.137  0.083  0.117  0.112  0.123  0.433  0.084 
UWaveGestureLibraryX  0.041  0.064  0.036  0.109  0.114  0.067  0.094  0.100  0.110  0.423  0.053 
UWaveGestureLibraryY  0.030  0.060  0.029  0.088  0.040  0.036  0.123  0.067  0.093  0.197  0.043 
UWaveGestureLibraryZ  0.030  0.047  0.034  0.074  0.058  0.048  0.085  0.074  0.094  0.215  0.048 
Wafer  0.424  0.427  0.450  0.419  0.437  0.440  0.438  0.497  0.469  0.511  0.491 
Wine  0.126  0.112  0.121  0.233  0.198  0.215  0.256  0.334  0.302  0.353  0.227 
WordSynonyms  0.075  0.091  0.078  0.211  0.129  0.116  0.173  0.184  0.190  0.281  0.136 
Worms  0.068  0.086  0.068  0.117  0.091  0.075  0.108  0.098  0.107  0.196  0.072 
WormsTwoClass  0.067  0.095  0.068  0.119  0.089  0.083  0.124  0.090  0.110  0.196  0.072 
Yoga  0.032  0.041  0.037  0.111  0.053  0.057  0.089  0.083  0.101  0.176  0.055 
Average  0.224  0.235  0.231  0.295  0.293  0.272  0.299  0.305  0.312  0.387  0.285 
#Best  62  43  39  15  5  6  4  1  2  1  17 
Tables 1 and 2 show the root mean squared error of next-timestep forecasting for each target task, averaged over 30 experiments with different training, validation, and target splits. The proposed method achieved performance that was not different from that of the best method on 62 of the 90 target tasks, which was the most among the compared methods. In general, LSTM was better than NN, and NN was better than Linear. This result indicates that LSTM-based recurrent neural networks are appropriate for forecasting time series. LSTM-MAML was worse than the proposed method. The reason is that time-series dynamics are very different across tasks, and it is difficult to fine-tune well from a single initial parameter setting for such diverse tasks. In contrast, the proposed method flexibly adapts to target tasks through the attention mechanism given the support set. LSTM-DI performed similarly to LSTM-MAML. Although LSTM-DI does not use support sets of target tasks, it can give task-specific forecasts by taking the query time series as input to the LSTM. Figure 3 shows examples of true and forecasted values by the proposed method, LSTM-MAML, and LSTM-DI. The proposed method forecasted appropriately under the different dynamics of the target tasks.
Figure 4(a) shows the average mean squared error with different numbers of training tasks for the proposed method, LSTM-MAML, and LSTM-DI. All the methods decreased the error as the number of training tasks increased. Figure 4(b) shows the average mean squared error with different test support sizes for the proposed method and LSTM-MAML, where the training support size was three. Even when the test support size differed from that used in training, the proposed method and LSTM-MAML decreased the error as the test support size increased. Table 3 shows the average computational time in seconds for training with all training tasks, and the test time for each target task, on computers with 2.30 GHz CPUs with five cores. The proposed method had slightly shorter training and test times than LSTM-MAML.
[Figure 3: True and forecasted values by the proposed method, LSTM-MAML, and LSTM-DI on (a) Beef, (b) Chlorine, (c) EOG, (d) FaceFour, (e) InlineSkate, (f) PigCVP, (g) Rock, (h) WordSynonyms, and (i) Worms.]
[Figure 4: Average mean squared error with (a) different numbers of training tasks and (b) different test support sizes.]
Table 3: Average computational time in seconds for training with all training tasks and for testing on each target task.

      Ours   LSTM-MAML  LSTM-DI
Train  40,203  47,978  5,297
Test  15  20  2
5 Conclusion
In this paper, we proposed a meta-learning method for time-series forecasting, where our model is trained with many time-series datasets. Our model can forecast future values that are specific to a target task using only a few time series from the target task, based on recurrent neural networks with an attention mechanism. For future work, we plan to apply the proposed method to multivariate time-series datasets.
References
A. R. Ali, B. Gabrys, and M. Budka. Cross-domain meta-learning for time-series forecasting. Procedia Computer Science, 126:9–18, 2018.
 M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989, 2016.
M. Assaad, R. Boné, and H. Cardot. A new boosting algorithm for improved time-series forecasting with recurrent neural networks. Information Fusion, 9(1):41–55, 2008.
 E. M. Azoff. Neural Network Time Series Forecasting of Financial Markets. John Wiley & Sons, Inc., 1994.
 S. Bartunov and D. Vetrov. Fewshot generative modelling with generative matching networks. In International Conference on Artificial Intelligence and Statistics, pages 670–678, 2018.
 Y. Bengio, S. Bengio, and J. Cloutier. Learning a synaptic learning rule. In International Joint Conference on Neural Networks, 1991.
 J. Bornschein, A. Mnih, D. Zoran, and D. J. Rezende. Variational memory addressing in generative models. In Advances in Neural Information Processing Systems, pages 3920–3929, 2017.
 L.J. Cao and F. E. H. Tay. Support vector machine with adaptive parameters in financial time series forecasting. IEEE Transactions on Neural Networks, 14(6):1506–1518, 2003.
 H. A. Dau, A. Bagnall, K. Kamgar, C.C. M. Yeh, Y. Zhu, S. Gharghabi, C. A. Ratanamahatana, and E. Keogh. The UCR time series archive. IEEE/CAA Journal of Automatica Sinica, 6(6):1293–1305, 2019.
 H. A. Dau, E. Keogh, K. Kamgar, C.C. M. Yeh, Y. Zhu, S. Gharghabi, C. A. Ratanamahatana, Yanping, B. Hu, N. Begum, A. Bagnall, A. Mueen, G. Batista, and HexagonML. The UCR time series classification archive, October 2018. https://www.cs.ucr.edu/~eamonn/time_series_data_2018/.
 C. Deb, F. Zhang, J. Yang, S. E. Lee, and K. W. Shah. A review on time series forecasting techniques for building energy consumption. Renewable and Sustainable Energy Reviews, 74:902–924, 2017.
 H. Edwards and A. Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016.
C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pages 1126–1135, 2017.
C. Finn, K. Xu, and S. Levine. Probabilistic model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pages 9516–9527, 2018.
 M. Garnelo, D. Rosenbaum, C. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. Rezende, and S. A. Eslami. Conditional neural processes. In International Conference on Machine Learning, pages 1690–1699, 2018.
 L. B. Hewitt, M. I. Nye, A. Gane, T. Jaakkola, and J. B. Tenenbaum. The variational homoencoder: Learning to learn high capacity generative models from few examples. arXiv preprint arXiv:1807.08919, 2018.
 S. Hochreiter and J. Schmidhuber. Long shortterm memory. Neural Computation, 9(8):1735–1780, 1997.
 A. Hooshmand and R. Sharma. Energy predictive models with limited data using transfer learning. In Proceedings of the Tenth ACM International Conference on Future Energy Systems, pages 12–16, 2019.
 Y. Jia, Y. Zhang, R. Weiss, Q. Wang, J. Shen, F. Ren, P. Nguyen, R. Pang, I. L. Moreno, Y. Wu, et al. Transfer learning from speaker verification to multispeaker texttospeech synthesis. In Advances in Neural Information Processing Systems, pages 4480–4490, 2018.
 T. A. Jilani, S. A. Burney, and C. Ardil. Multivariate high order fuzzy time series forecasting for car road accidents. International Journal of Computational Intelligence, 4(1):15–20, 2007.
 T. W. Killian, S. Daulton, G. Konidaris, and F. DoshiVelez. Robust and efficient transfer learning with hidden parameter markov decision processes. In Advances in Neural Information Processing Systems, pages 6250–6261, 2017.
 H. Kim, A. Mnih, J. Schwarz, M. Garnelo, A. Eslami, D. Rosenbaum, O. Vinyals, and Y. W. Teh. Attentive neural processes. arXiv preprint arXiv:1901.05761, 2019.
K.-j. Kim. Financial time series forecasting using support vector machines. Neurocomputing, 55(1–2):307–319, 2003.
 T. Kim, J. Yoon, O. Dia, S. Kim, Y. Bengio, and S. Ahn. Bayesian modelagnostic metalearning. In Advances in Neural Information Processing Systems, 2018.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
 A. Kumagai, T. Iwata, and Y. Fujiwara. Transfer anomaly detection by inferring latent domain representations. In Advances in Neural Information Processing Systems, pages 2467–2477, 2019.
 G. Lachtermacher and J. D. Fuller. Backpropagation in hydrological time series forecasting. In Stochastic and Statistical Methods in Hydrology and Environmental Engineering, pages 229–242. Springer, 1994.
 B. M. Lake. Compositional generalization through meta sequencetosequence learning. In Advances in Neural Information Processing Systems, pages 9788–9798, 2019.
N. Laptev, J. Yosinski, L. E. Li, and S. Smyl. Time-series extreme event forecasting with neural networks at Uber. In International Conference on Machine Learning, volume 34, pages 1–5, 2017.
C. Lemke and B. Gabrys. Meta-learning for time series forecasting and forecast combination. Neurocomputing, 73(10–12):2006–2016, 2010.
 Y. Li, R. Yu, C. Shahabi, and Y. Liu. Diffusion convolutional recurrent neural network: Datadriven traffic forecasting. arXiv preprint arXiv:1707.01926, 2017.
Z. Li, F. Zhou, F. Chen, and H. Li. Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017.
 M. Long, H. Zhu, J. Wang, and M. I. Jordan. Deep transfer learning with joint adaptation networks. In Proceedings of the 34th International Conference on Machine Learning, pages 2208–2217, 2017.
J. Narwariya, P. Malhotra, L. Vig, G. Shroff, and T. Vishnu. Meta-learning for few-shot time series classification. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pages 28–36, 2020.
 O. Ogunmolu, X. Gu, S. Jiang, and N. Gans. Nonlinear systems identification using deep dynamic neural networks. arXiv preprint arXiv:1610.01439, 2016.
B. N. Oreshkin, D. Carpov, N. Chapados, and Y. Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. arXiv preprint arXiv:1905.10437, 2019.
B. N. Oreshkin, D. Carpov, N. Chapados, and Y. Bengio. Meta-learning framework with applications to zero-shot time-series forecasting. arXiv preprint arXiv:2002.02887, 2020.
R. B. Prudêncio and T. B. Ludermir. Meta-learning approaches to selecting time series models. Neurocomputing, 61:121–137, 2004.
 S. Qin, J. Zhu, J. Qin, W. Wang, and D. Zhao. Recurrent attentive neural process for sequential data. arXiv preprint arXiv:1910.09323, 2019.
S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017.
S. Reed, Y. Chen, T. Paine, A. v. d. Oord, S. Eslami, D. Rezende, O. Vinyals, and N. de Freitas. Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprint arXiv:1710.10304, 2017.
D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in deep generative models. In Proceedings of the 33rd International Conference on Machine Learning, pages 1521–1529, 2016.
M. Ribeiro, K. Grolinger, H. F. ElYamany, W. A. Higashino, and M. A. Capretz. Transfer learning with seasonal and trend adjustment for cross-building energy forecasting. Energy and Buildings, 165:352–363, 2018.
A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019.
J. Schmidhuber. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook. Master's thesis, Technische Universität München, Germany, 1987.
 A. Sfetsos. A comparison of various forecasting techniques applied to mean hourly wind speed time series. Renewable Energy, 21(1):23–35, 2000.
J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087, 2017.
T. S. Talagala, R. J. Hyndman, G. Athanasopoulos, et al. Meta-learning how to forecast time series. Monash Econometrics and Business Statistics Working Papers, 6:18, 2018.
 C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu. A survey on deep transfer learning. In International Conference on Artificial Neural Networks, pages 270–279. Springer, 2018.
W. Tang, L. Liu, and G. Long. Few-shot time-series classification with dual interpretability. In ICML Time Series Workshop, 2019.
O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
 T. Willi, J. Masci, J. Schmidhuber, and C. Osendorfer. Recurrent neural processes. arXiv preprint arXiv:1906.05915, 2019.
 Y. Xie, H. Jiang, F. Liu, T. Zhao, and H. Zha. Meta learning with relational information for short sequences. In Advances in Neural Information Processing Systems, pages 9901–9912, 2019.
 H. Yao, Y. Wei, J. Huang, and Z. Li. Hierarchically structured metalearning. In International Conference on Machine Learning, pages 7045–7054, 2019.
 B. Yu, H. Yin, and Z. Zhu. Spatiotemporal graph convolutional neural network: A deep learning framework for traffic forecasting.