Self-Net: Lifelong Learning via Continual Self-Modeling

Blake Camp, Jaya Krishna Mandivarapu, and Rolando Estrada (ORCID 0000-0003-1607-2618)
Department of Computer Science, Georgia State University, Atlanta, GA 30303, USA
{bcamp2,jmandivarapu1}@student.gsu.edu, restrada1@gsu.edu
The first two authors contributed equally.
Abstract

Learning a set of tasks over time, also known as continual learning (CL), is one of the most challenging problems in artificial intelligence. While recent approaches achieve some degree of CL in deep neural networks, they either (1) grow the network parameters linearly with the number of tasks, (2) require storing training data from previous tasks, or (3) restrict the network’s ability to learn new tasks. To address these issues, we propose a novel framework, Self-Net, that uses an autoencoder to learn a set of low-dimensional representations of the weights learned for different tasks. We demonstrate that these low-dimensional vectors can then be used to generate high-fidelity recollections of the original weights. Self-Net can incorporate new tasks over time with little retraining and with minimal loss in performance for older tasks. Our system does not require storing prior training data and its parameters grow only logarithmically with the number of tasks. We show that our technique outperforms current state-of-the-art approaches on numerous datasets—including continual versions of MNIST, CIFAR10, CIFAR100, and Atari—and we demonstrate that our method can achieve over 10X storage compression in a continual fashion. To the best of our knowledge, we are the first to use autoencoders to sequentially encode sets of network weights to enable continual learning.

Keywords:
Continual learning · Deep learning · Autoencoders.

1 Introduction

Lifelong or continual learning (CL) is one of the most challenging problems in machine learning, and it remains a significant hurdle in the quest for artificial general intelligence (AGI) [2, 5]. In this paradigm, a single system must learn to solve new tasks without forgetting previously learned information. Different tasks might require different data (e.g., images vs. text) or they might process the same data in different ways (e.g., classifying an object in an image vs. segmenting it). Crucially, in CL there is no point at which a system stops learning; it must always be able to update its representation of its problem domain(s).

Figure 1: Framework overview: Our proposed system has a set of reusable task-specific networks (TN), a Buffer for storing the latest tasks, and a lifelong autoencoder (AE) for long-term storage. Given new tasks $t_{k+1}, \ldots, t_{k+b}$, where $k$ is the number of tasks previously encountered and $b$ is the Buffer size, we first train task-networks to learn optimal parameters for these tasks. These networks are temporarily stored in the Buffer. When the Buffer fills up, we incorporate the new networks into our long-term representation by retraining the AE on both its approximations of previously learned networks and the new batch of networks. When an old network is needed (e.g., when a task is revisited), we reconstruct its weights and load them onto the corresponding TN (solid arrow). Even when the latent representation is asymptotically smaller than the original parameter vector, the reconstructed network closely approximates the performance of the original.

CL is particularly challenging for deep neural networks because they are trained end-to-end. In standard deep learning we tune all of the network’s parameters based on training data, usually via backpropagation [20]. While this paradigm has proven highly successful for individual tasks, it is not suitable for continual learning because it overwrites existing weights (a phenomenon evocatively dubbed catastrophic forgetting [19]). For example, if we first train a network on task A and then on task B, the latter training will modify the weights learned for A, thus likely reducing the network’s performance on this task.

There are several approaches that can achieve some degree of continual learning in deep networks. However, existing methods suffer from at least one of three limitations: they either (1) restrict the network’s ability to learn new tasks by penalizing changes to existing weights [7, 27, 22, 14]; (2) expand the model size linearly as the number of tasks grows [21, 10] (or dynamically define task-specific sub-networks [26, 11], which is asymptotically equivalent); or (3) retrain on old tasks. In the latter case, methods either (a) store some of the old training data directly [12, 17, 14], thus increasing storage requirements linearly (and at a faster rate than growing the network, since data tends to be higher dimensional), or (b) use compressed data [16, 6, 23, 25], which complicates training.

In this paper, we propose a novel approach, Self-Net, that overcomes the aforementioned limitations by decoupling how it learns a new task from how it stores it. Figure 1 provides an overview of our proposed framework. Our system grows only logarithmically with the number of tasks, while retaining excellent performance across all learned tasks. Our approach is loosely inspired by the role that the hippocampus is purported to play in memory consolidation [24]. As noted in [15], during learning the brain forms an initial neural representation in cortical regions; the hippocampus then consolidates this representation into a form that is optimized for storage and retrieval. These complementary biological mechanisms enable continual learning by efficiently consolidating knowledge and prior experiences. In this spirit, we propose a system that consists of three components: (1) a set of reusable task-networks (TNs), (2) a Buffer in which we store the latest learned weights exactly, and (3) a lifelong autoencoder (AE) with which we can encode an arbitrary number of older tasks. The AE learns a low-dimensional representation for each of the high-dimensional parameter vectors that define the weights of the TNs. Thus, our system self-models its own behavior, allowing it to approximate previously learned parameters instead of storing them directly. In short, when our system learns a new task, it first trains an appropriate TN using standard deep learning and then stores a copy of the weights in the Buffer. When the Buffer fills up, the AE learns a set of compact, latent vectors for the weights in the Buffer. The Self-Net then discards the original weights, freeing up the Buffer to store new tasks. If our system needs to solve a previously learned task, it generates an approximation of the original weights by feeding the corresponding latent vector through the AE and then loading the reconstructed weights onto a TN.

Our approach leverages the flexibility of conventional neural networks while avoiding their inability to remember old tasks. More specifically, a TN is free to modify its parameters as needed to learn a new task, since previously learned weights are encoded by the AE. Our AE does not simply memorize old weights; our experiments show that an AE can encode a very large number of networks while retaining excellent performance on all tasks (Section 4.3). Our framework can even incorporate fine-tuning by initializing a TN with the weights from a previous, related task. Below, we first overview existing CL methods for deep networks and then detail our approach in Section 3.

2 Prior work

Several methods have recently emerged for continual learning in deep networks, although, as noted above, existing approaches either (1) restrict new learning, (2) grow the number of parameters linearly, or (3) require old training data. Notable examples of the first type include Elastic Weight Consolidation (EWC) [7], Synaptic Intelligence [27], Variational Continual Learning [14] (which also reuses old data), and Progress & Compress [22]. These approaches reuse the same network for each new task, but they apply a regularization method to restrict changes in weights over time. Hence, they typically use constant space (although, as noted in [4], standard EWC stores a set of Fisher weights for each task, so it actually grows linearly; the modified version proposed in [4] does use constant space). EWC, in particular, uses the diagonal of the Fisher information matrix to penalize deviations of the weights learned for the new task from those learned for the old tasks. Like our proposed approach, Progress & Compress also uses both a task-network and a long-term storage network; however, it uses EWC to update the weights of the latter, so its performance is very similar to that of EWC.

The second category includes Progressive Networks [21], Dynamically Expandable Networks [26], and Context-Dependent Gating [11]. These methods achieve excellent performance, but they grow the network linearly with the number of tasks, which is asymptotically the same as using independent networks. Thus, they cannot scale to large numbers of tasks. Their advantage is in facilitating transfer learning, i.e., using previous learning to speed up new learning.

Finally, some methods store a fraction of the old training data and use it to retrain the network on previously learned tasks. Key approaches include Experience Replay [12], iCaRL [17], Variational Continual Learning [14], and Learning without Forgetting [10]. Unfortunately, this paradigm combines the drawbacks of the previous two. First, most of these methods use a single network, so they cannot continually learn a large number of tasks well. Second, their storage requirements grow linearly in the number of tasks because they have to store old training data. Moreover, data usually takes up orders of magnitude more space than the network itself because a trained network is effectively a compressed representation of the training set [1]. A few methods reduce this storage requirement by storing a compressed representation of the data. Methods of this type include Lifelong Generative Modeling [16], FearNet [6], and Deep Generative Replay [23]. Our proposed approach uses a similar idea but instead stores the networks themselves, rather than the data. Our scheme has two advantages over compressing the data. First, networks are much smaller, so we can encode them more quickly, using less space. Second, by reconstructing the networks directly, we do not need to retrain task-networks on data from previous tasks.

3 Methodology

Figure 1 provides a high-level overview of our proposed approach. Our Self-Net system uses a set of reusable task-networks (TNs), a Buffer for storing newly learned tasks, and a lifelong autoencoder (AE) for storing older tasks. In addition, we store a low-dimensional latent vector for each task. Each TN is just a standard neural network, which can learn regression, classification, or reinforcement learning tasks (or some combination of the three). For ease of discussion, we will focus on the case where there is a single TN and the Buffer can hold only one network; the extension to multiple networks and larger Buffers is trivial. The AE is made up of an encoder that compresses an input vector into a lower-dimensional latent vector and a decoder that maps this latent vector back to the higher-dimensional space. Our system can produce high-fidelity recollections of the learned weights, despite this intermediate compression. In our experiments, we used a contractive autoencoder (CAE) [18] due to its ability to quickly incorporate new values into its latent space.
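To make the data flow concrete, the following is a minimal PyTorch sketch (our assumption; the paper's implementation is not reproduced here) of the two operations the Self-Net relies on: flattening a trained task-network into a single parameter vector, and a small fully connected autoencoder over such vectors. The layer sizes mirror the MNIST configuration in Section 4; the contractive penalty of the CAE is omitted for brevity.

import torch
import torch.nn as nn

def flatten_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all parameters of a task-network into one 1-D vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def load_params(model: nn.Module, flat: torch.Tensor) -> None:
    """Copy a (reconstructed) flat vector back onto a task-network, in place."""
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(flat[offset:offset + n].view_as(p))
        offset += n

class WeightAutoencoder(nn.Module):
    """Fully connected AE over flattened weight vectors (sizes are illustrative)."""
    def __init__(self, input_dim=21840, hidden_dim=2000, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, input_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z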

In CL, we must learn $T$ different tasks sequentially. To learn these tasks independently, one would need to train and save $T$ networks, with $N$ parameters each, for a total of $O(TN)$ space. In contrast, we propose using our AE to encode each of these networks as an $s$-dimensional latent vector, with $s \ll N$. Thus, our method uses only $O(sT + N)$ space, where the $O(N)$ term accounts for the TNs and the fixed-size Buffer. Despite this compression, our experiments show that we can obtain a high-quality approximation of previously learned weights, even when the number of tasks exceeds the number of parameters in the AE (Sec. 4.3). Below, we first describe how to encode a single task-network before discussing how to encode multiple tasks in a continual fashion.

3.1 Single-network encoding

Let $t$ be a task (e.g., recognizing faces) and let $\theta_t$ be the $N$-dimensional vector of parameters of a network trained to solve $t$. That is, using a task-network with parameters $\theta_t$, we can achieve performance $p_t$ on $t$ (e.g., a classification accuracy of 95%). Now, let $\hat{\theta}_t$ be the approximate reconstruction of $\theta_t$ by our autoencoder and let $\hat{p}_t$ be the performance that we obtain by using these reconstructed weights for task $t$. Our goal is to minimize any performance loss w.r.t. the original weights. If the performance of the reconstructed weights is acceptable, then we can simply store the latent vector $e_t$, instead of the original vector $\theta_t$.

If we had access to the test data for $t$, we could assess this difference in performance directly and train our AE until we achieve an acceptable margin $\epsilon$:

$p_t - \hat{p}_t \leq \epsilon$   (1)

For example, for a classification task we could stop training our AE if the drop in accuracy is less than $\epsilon$.

In a continual learning setting, though, the above scheme requires storing validation data for each old task. Instead, we measure a distance between the original and reconstructed weights and stop training when we achieve a suitably close approximation. Empirically, we determined that the cosine similarity,

$\cos(\theta_t, \hat{\theta}_t) = \dfrac{\theta_t \cdot \hat{\theta}_t}{\|\theta_t\| \, \|\hat{\theta}_t\|}$   (2)

is an excellent proxy for a network’s performance. Unlike the mean-squared error, this distance metric is scale-invariant, so it is equally suitable for weights of different scales. As detailed in Section 4, a cosine similarity of 0.997 or higher yielded excellent performance for a wide variety of tasks and architectures.
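As an illustration, a short Python check of this stopping criterion might look as follows (a sketch under our assumptions, not the authors' code); the 0.997 threshold is the value reported in Section 4.

import torch
import torch.nn.functional as F

def close_enough(theta: torch.Tensor, theta_hat: torch.Tensor,
                 threshold: float = 0.997) -> bool:
    """Return True once the reconstruction is aligned closely enough (Eq. 2)."""
    cos = F.cosine_similarity(theta, theta_hat, dim=0).item()
    return cos >= threshold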

In addition, one can improve the efficacy with which the AE learns a new task by encouraging the parameters of all task-networks to remain in the same general neighborhood. This can be accomplished by fine-tuning all networks from a common source and penalizing large deviations from this initial configuration with a regularization term. Formally, let $\theta_s$ be the source parameters, ideally optimized for some highly related task. Without loss of generality, we can define the loss function of the task-network for task $t$ as:

$\mathcal{L}_t(\theta_t) = \mathcal{L}_{task}(\theta_t) + \lambda \, \|\theta_t - \theta_s\|^2$   (3)

where $\lambda$ is the regularization coefficient determining the importance of remaining close to the source parameters vs. optimizing for the current task. By encouraging the parameters of all task-networks to remain close to one another, we make it easier for the AE to learn a low-dimensional representation of the original space.
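A hedged sketch of this regularized loss (Eq. 3) is shown below; task_loss_fn, lam, and source_flat are illustrative names, and the default value of lam is a placeholder rather than the setting used in the experiments.

import torch
import torch.nn as nn

def regularized_loss(task_loss_fn, model: nn.Module, inputs, targets,
                     source_flat: torch.Tensor, lam: float = 1e-3):
    """Task loss plus an L2 penalty toward the shared source parameters (Eq. 3)."""
    loss_task = task_loss_fn(model(inputs), targets)
    # Flatten current parameters without detaching, so the penalty is differentiable.
    theta = torch.cat([p.reshape(-1) for p in model.parameters()])
    penalty = torch.sum((theta - source_flat) ** 2)
    return loss_task + lam * penalty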

3.2 Continual encoding

We will now detail how to use our Self-Net to encode a sequence of trained networks in a continual fashion. Let $b$ be the size of the Buffer, and let $k$ be the number of tasks which have been previously encountered. As noted above, we train each of these task-networks using conventional backpropagation, one per task. Now, assume that our AE has already learned to encode the first $k$ task-networks. We will now show how to encode the most recent batch of $b$ task-networks, corresponding to tasks $t_{k+1}, \ldots, t_{k+b}$, into compressed representations while still remembering all previously trained networks.

1: Let T be the set of all tasks encountered during the lifetime of the system
2: Let b be the size of the Buffer
3: E = []  (latent codes of all encoded networks)
4: Initialize AE
5: Set cosine_threshold
6: for idx, curr_task in enumerate(T) do
7:     if Buffer is not full then
8:         Initialize TN
9:         Train the TN for curr_task until optimized
10:        Buffer.append(TN)
11:        if Buffer is full then
12:            R = []
13:            for encoded_network in E do
14:                r = AE.Decoder(encoded_network)
15:                R.append(r)
16:            for network in Buffer do
17:                flat_network = extract and flatten parameters from network
18:                R.append(flat_network)
19:            average_cosine_similarity = 0
20:            E = []
21:            while average_cosine_similarity < cosine_threshold do
22:                for r_idx, r in enumerate(R) do
23:                    calculate AE_loss for r using the CAE loss [18]
24:                    back-propagate the AE w.r.t. r
25:                    update average_cosine_similarity using cos(r, AE(r))
26:                    E[r_idx] = AE.Encoder(r)
27:            empty the Buffer
Algorithm 1: Lifelong Learning via Continual Self-Modeling

Let $E$ be the set of latent vectors for the first $k$ networks. In order to integrate new networks into the latent space, we first recollect all previously trained networks by feeding each latent vector in $E$ as input to the decoder of the AE. We thus generate a set $R$ of recollections, or approximations, of the original networks (see Fig. 1). We then append each network in the Buffer to $R$ and retrain the AE on all $k + b$ networks until it can reconstruct them, i.e., until the average of their respective cosine similarities is above the predefined threshold. Algorithm 1 summarizes our continual learning strategy.
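A condensed Python sketch of this consolidation step, under the same assumptions as the earlier snippets (it reuses flatten_params and the WeightAutoencoder defined above), is given below. For brevity it substitutes a simple cosine-based reconstruction objective for the full CAE loss, and it re-encodes the latent vectors once after retraining rather than on every pass.

import torch
import torch.nn.functional as F

def consolidate(ae, buffer, latent_codes, optimizer,
                threshold=0.997, max_epochs=1000):
    """Retrain the AE on recollections of old networks plus the new Buffer."""
    with torch.no_grad():
        recollections = [ae.decoder(z) for z in latent_codes]        # old tasks
    targets = recollections + [flatten_params(tn) for tn in buffer]  # + new tasks
    for _ in range(max_epochs):
        sims = []
        for x in targets:
            recon, _ = ae(x)
            cos = F.cosine_similarity(recon, x, dim=0)
            loss = 1.0 - cos
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            sims.append(cos.item())
        if sum(sims) / len(sims) >= threshold:
            break
    with torch.no_grad():
        new_codes = [ae.encoder(x) for x in targets]  # refresh all latent codes
    buffer.clear()
    return new_codes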

As we show in our experiments, our compressed network representations still achieve excellent performance compared to the original parameters. Since each recollection is simply a vector of network parameters, it can easily be loaded back onto a task-network with the correct architecture. This allows us to discard the original networks and store $T$ networks using only $O(sT + N)$ space. In addition, our framework can efficiently encode many different types and sizes of networks in a continual fashion. In particular, we can encode a network with an arbitrary number of parameters $M$ using a constant-size AE (that takes inputs of size $I$) by splitting the input network into $q = \lceil M/I \rceil$ subvectors (we pad with zeros whenever $M$ is not a multiple of $I$). As we verify in Section 4, we can effectively reconstruct a large network from its subvectors and still achieve a suitable performance threshold.
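For illustration, a short sketch of this splitting scheme follows (variable names are ours); it pads the flattened parameter vector with zeros to a multiple of the AE's input size and remembers the original length so the padding can be dropped when the network is reloaded.

import torch

def split_into_subvectors(flat: torch.Tensor, input_dim: int):
    """Split a flattened network into zero-padded chunks of the AE's input size."""
    n = flat.numel()
    pad = (-n) % input_dim                        # zeros needed to reach a multiple
    padded = torch.cat([flat, flat.new_zeros(pad)])
    return padded.view(-1, input_dim), n          # keep n to drop padding later

def merge_subvectors(chunks: torch.Tensor, original_len: int) -> torch.Tensor:
    """Reassemble the (reconstructed) chunks and discard the zero padding."""
    return chunks.reshape(-1)[:original_len]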

As Fig. 2 illustrates, we empirically found a strong correlation between a reconstructed network’s performance and its cosine similarity w.r.t. the original network. Intuitively, this implies that vectors of network parameters that have a cosine similarity approaching 1 will exhibit near-identical performance on the underlying task. Thus, the cosine similarity can be used as a terminating condition during retraining of the AE. That is, there exists a cosine similarity threshold above which the performance of the reconstructed network can be expected to be sufficiently similar to that of the original. In practice, we found a threshold of 0.997 to be sufficient for most experiments. Below, we offer empirical results which demonstrate the efficacy and flexibility of our approach.

Figure 2: Robustness analysis of network performance as a function of cosine similarity: Each dot represents the accuracy of a reconstructed network, and the dotted lines are the baseline performances of the original networks. The values for three datasets (Permuted MNIST, in pink; MNIST, in cyan; and CIFAR-10, in blue) show that cosine similarity values above 0.997 guarantee nearly optimal performance.

4 Experimental Results

In order to evaluate the continual-learning performance of Self-Net, we carried out a range of experiments on a variety of datasets, in both supervised and reinforcement-learning (RL) settings. We first performed a robustness analysis to establish the degree to which an approximation of a network can deviate from the original and still retain comparable performance on the underlying task (Section 4.1). Then, we evaluated the performance of our approach on the following continual-learning datasets: Permuted MNIST [7], Split MNIST [14], Split CIFAR-10 [27], Split CIFAR-100 [27], and successive Atari games [12] (we describe each dataset below). As our experiments show, Self-Net can effectively encode each of these different types of networks in sequential fashion, effectively achieving continual learning and outperforming several competing techniques. Finally, we also analyzed our system’s performance under three additional scenarios: (1) very large numbers of tasks, (2) different sizes of AEs, and (3) different task-network architectures. We detail each experiment below.

4.1 Robustness analysis

Our approach relies upon approximations of previously learned networks, and we assume no access to validation data for previously learned tasks. Thus, we require a method for estimating the performance of a reconstructed network which does not rely upon explicit testing on a validation set.

Figure 2 shows the relationship between performance and deviations from the original parameters as measured by cosine similarity, for three datasets. There is a clear correlation between the amount of parameter dissimilarity and the probability of a decrease in performance. That is, given an approximate network that deviates from the original by some amount, the potential still exists that such a network will retain comparable performance. However, as the degree of deviation increases, the probability that the performance remains high falls steadily. Thus, in order to assume, with reasonable confidence, that the performance of a reconstructed network will be sufficiently high, the AE must minimize the degree of deviation as much as possible.

Empirically, we established a cosine similarity threshold above which the probability of high task-performance stabilizes, as seen in Figure 2. This threshold can be used as a terminating condition during retraining of the AE, and it allows the performance of a reconstructed network to be approximated without access to any validation data. In our experiments, a common threshold yields good performance across a variety of different types and sizes of networks.

4.2 Experiments on CL datasets

Permuted MNIST: As an initial evaluation of Self-Net’s CL performance, we trained convolutional feed-forward neural networks with 21,840 parameters on successive tasks, each defined by a distinct permutation of the MNIST dataset [9], for 10-digit classification. We used networks with 2 convolution layers (kernels of size 5x5 and stride 1x1), 1 hidden layer (320x50), and 1 output layer (50x10). Our CAE had three fully connected layers with 21,840, 2000, and 20 parameters, resp. Thus, our latent vectors were of size 20. For this experiment, we used a Buffer of size 1. Each task-network was encoded by our lifelong AE in sequential fashion, and the accuracies of all reconstructed networks were examined at the end of each learning stage (i.e., after learning a new task). Figure 3 (top) shows the mean performance after each stage. Our technique almost perfectly matched the performances achieved by independently trained networks, and it dramatically outperformed other state-of-the-art approaches, including EWC [7], Online EWC (the correction to EWC proposed in [4]), and Progress & Compress [22]. As baselines, we also show the results for SGD (no regularization), L2-based regularization in which we compare new weights to all the previous weights, and Online L2, which only measures deviations from the weights learned in the previous iteration. Not only does our technique allow for superior knowledge retention, but it does not inhibit the knowledge acquisition necessary for new tasks. The result is minimal degradation in performance as the number of tasks grows.
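For concreteness, a plausible PyTorch reconstruction of this task-network is sketched below. The convolutional channel counts (10 and 20) are our assumption, chosen because, together with the 320x50 and 50x10 layers, they reproduce the stated total of 21,840 parameters.

import torch.nn as nn
import torch.nn.functional as F

class MNISTTaskNet(nn.Module):
    """Two 5x5 conv layers, a 320x50 hidden layer, and a 50x10 output layer."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5, stride=1)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5, stride=1)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):                            # x: (B, 1, 28, 28)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(x.size(0), -1)                    # -> (B, 320)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

# sum(p.numel() for p in MNISTTaskNet().parameters()) == 21840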

Split MNIST: We performed a similar continual learning task but with different binary classification objectives on subsets of the MNIST dataset (Split MNIST) [14]. Our task-networks, CAE, and Buffer size were the same as for Permuted MNIST (except that the outputs of the task-networks were binary, instead of 10 classes). Tasks were defined by tuples comprised of the positive and negative digit class(es), e.g., ([pos={1}, neg={6,7,8,9}], [pos={6}, neg={1,2,3,4}], etc.). Here, the training and test sets consisted of approximately 40% positive examples and 60% negative examples. In this domain, too, our technique dramatically outperformed competing approaches, as seen in Figure 3 (middle).

Figure 3: CL performance comparisons with average test set accuracy on all observed tasks at each stage for (top) Permuted MNIST, (middle) Split MNIST, and (bottom) Split CIFAR-10.

Split CIFAR-10: We then verified that our proposed approach could reconstruct larger, more sophisticated networks. Similar to the Split MNIST experiments above, we divided the CIFAR-10 dataset [8] into multiple training and test sets, yielding 10 binary classification tasks (one per class). We then trained a task-specific network on each class. Here, each TN consisted of 2 convolutional layers, followed by 3 fully connected hidden layers and a final layer with 2 output units; in all, these task-networks had more than 60K parameters. Again, for this experiment we used a Buffer of size 1. Our CAE had three fully connected layers with 20442, 1000, and 50 parameters, resp. The individual task-networks achieved accuracies ranging from 78% to 84%, with a mean accuracy of approximately 81%. Importantly, we encoded these larger networks using almost the same CAE architecture as the one used in the MNIST experiments; as noted in Section 3, this was achieved by splitting each 60K parameter vector into three subvectors, since splitting a larger input vector into smaller subvectors lets us encode networks of arbitrary sizes. As seen in Figure 3 (bottom), the accuracies of the reconstructed CIFAR networks also nearly matched the performances of their original counterparts, while also outperforming all other techniques.

Figure 4: CL performance comparisons with average test set accuracy on all observed tasks at each stage for CIFAR-100.

Split CIFAR-100: We applied the same learning approach to the CIFAR-100 dataset [8]. We split the dataset into 10 distinct batches comprised of 10 classes of images each. This resulted in 10 separate datasets, each defining a 10-class classification task. We used the same task-network architecture and Buffer size as in our CIFAR-10 experiments, modified slightly to accommodate a 10-class classification objective. The trained networks achieved accuracies ranging from 46% to 59%. We then encoded these networks using the same CAE architecture described in the previous experiments, again accounting for the input-size discrepancy by splitting the task-networks into smaller subvectors. As seen in Figure 4, our technique almost perfectly matched the performances achieved by independently trained networks.

Incremental Atari: To evaluate the CL performance of Self-Net in the challenging context of reinforcement learning, we used the code available at [3] to implement a modified Asynchronous Advantage Actor-Critic (A3C) framework, originally introduced in [13], and attempted to learn successive Atari games while retaining good performance across all games. A3C simultaneously learns a policy and a value function for estimating expected future rewards. Specifically, the model we used was comprised of 4 convolutional layers (kernels of size 3x3 and strides of size 2x2), a GRU layer (800x256), and two output layers: an Actor (256xNum_Actions) and a Critic (256x1), resulting in a complex model architecture with over 800K parameters. Critically, this entire model can be flattened and encoded by the single AE in our Self-Net framework, which has three fully connected layers with 76863, 2000, and 200 parameters, resp. For these experiments we also used a Buffer of size 1.

Similar to previous experiments, we trained our system on successive tasks, specifically the following Atari games: Boxing, Star Gunner, Kangaroo, Pong, and Space Invaders. Figure 5 shows the near-perfect retention of performance on each of the 5 games over the lifetime of the system. This was accomplished by training on each game only once, never revisiting the game for training purposes. The dashed, vertical lines demarcate the different stages of continual learning. That is, each stage indicates that a new network was trained for a new game, over 40M frames. Afterwards, the mean (dashed, horizontal black lines) and standard-deviation (solid, horizontal black lines) of the network’s performance were computed by allowing it to play the game, unrestricted, for 80 episodes. After each stage, the performances of all reconstructed networks were examined by re-playing each game with the appropriate reconstructed network. As Figure 5 shows, the cumulative means and SD’s of the reconstructed networks closely mimic those achieved by their original counterparts.

Figure 5: CL on five Atari games with Self-Net: To evaluate the reconstruction score at each stage, we ran the reconstructed networks for 80 full game episodes. The cumulative mean score is nearly identical to the original TN at each stage.

4.3 Performance and storage scalability

In CL, there is a trade-off between storage and performance. Using $T$ different networks for $T$ tasks yields optimal performance but uses $O(TN)$ space, while regularized methods such as Online EWC only require $O(N)$ space but suffer a steep drop in performance as the number of tasks grows. Our experiments on CL datasets show that our approach achieves much better performance retention than existing approaches by using slightly more space, $O(sT + N)$. More precisely, we can quantify performance with respect to implicit storage compression. For example, by the tenth task, Online EWC [4] has essentially performed 10x compression because it uses 1/10th of the overall storage required by ten different networks; however, its performance by this point is very poor. In contrast, our system achieves 10X compression when the combined size of the stored latent vectors grows to one-tenth of the combined size of the original networks. In the following experiments, we verified that our method retains excellent performance even when reaching 10X compression, thus confirming that our AE is not simply memorizing previously learned weights.

Figure 6: 10X Compression for Split-MNIST: Orange lines denote the average accuracy achieved by individual networks, one per task. Green lines denote the average accuracy when training the AE to encode all networks as a single batch. Blue lines indicate the average accuracy obtained by Self-Net at each CL stage. Top: 50 tasks with latent vectors of size 5 and a Buffer of size 5. Middle: 100 tasks with latent vectors of size 10 and a Buffer of size 10. The x-axis (top and middle) denotes the compression factor achieved at each learning stage. Bottom: the training epochs required by the 5-dimensional AE to incorporate new networks decreases rapidly over time.

The top two plots of Fig. 6 show the mean performance for 50 and 100 Split-MNIST tasks, with latent vectors of size 5 and 10, resp. As before, the AE had 21432 input parameters. For comparison, we also plotted the original networks’ performance and the performance of the reconstructions when the AE learns all the tasks in a single batch. The line with dots represents the CL system, where each dot indicates the point where the AE had to encode a new set of networks because the Buffer had filled up. For these experiments, we used a Buffer size of 5 and 10, resp.; these values were chosen so that each new batch of networks yielded an integer compression ratio, e.g., after encoding 15 networks with a latent vector of size 5, the Self-Net achieved 3X compression. Here, we fine-tuned all networks from the mean of the initial set of networks and penalized deviations from this source vector, as described in Section 3. This regularization allowed the AE to incorporate subsequent networks with very little additional training, as seen in stages 4-10 (bottom image of Fig. 6).
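A hedged sketch of this initialization strategy follows (variable names are ours; flatten_params, load_params, and the regularized loss of Eq. (3) are defined in the earlier snippets): the source vector is the mean of the first batch of flattened networks, and every subsequent task-network starts from it before being fine-tuned on its own task.

import torch
import torch.nn as nn

def source_from_initial_batch(initial_networks) -> torch.Tensor:
    """Mean of the flattened parameters of the first batch of task-networks."""
    return torch.stack([flatten_params(tn) for tn in initial_networks]).mean(dim=0)

def init_new_task_network(template: nn.Module, source_flat: torch.Tensor) -> nn.Module:
    """Start a new task-network from the common source before fine-tuning it."""
    load_params(template, source_flat)
    return template

# The new network is then trained with regularized_loss(...), which penalizes
# deviations from source_flat.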

For 10X compression, the Self-Net with a latent vector of size 5 retained 95.7% average performance across 50 Split-MNIST tasks, while the Self-Net with 10-dimensional latent vectors retained 95.2% across 100 tasks. This represents a relative change of only 3.3% compared to the original performance of 99%. In contrast, existing methods dropped to 50% performance for 10X compression on this dataset (Fig. 3).

4.4 Splitting networks and using multiple architectures

Splitting larger networks into smaller sub-vectors allows us to use a smaller AE. As an additional analysis, we verified that the smaller AE can be trained in substantially less time than a larger one. Figure 7 (left) shows the training rate of an AE with 20,000 input units (blue line), trained to reconstruct 3 sub-vectors of length 20,000, compared to that of a larger AE with 61,000 input units (yellow line), trained on a 60K CIFAR-10 network. Clearly, splitting a large network into multiple inputs for a smaller AE enables us to encode it more quickly. Finally, we also validated that the same AE can be used to encode trained networks of different sizes and architectures. Figure 7 (right) shows that the same AE can simultaneously reconstruct 5 MNIST networks and 1 CIFAR network so that all approach their original baseline accuracies.

Figure 7: Additional analyses: Left: the AE training efficiency is improved when large networks are split into smaller subvectors. Right: a single AE can encode networks of different architectures and sizes.

5 Conclusions and future work

In this paper, we introduced a scalable approach for multi-context continual learning that decouples learning a set of parameters from storing them for future use. Our proposed framework makes use of state-of-the-art autoencoders to facilitate lifelong learning via continual self-modeling. Our empirical results confirm that our method can efficiently acquire and retain knowledge in a continual fashion, even for very large numbers of tasks. In future work, we plan to improve the efficiency with which the autoencoder can continually model vast numbers of task networks. Furthermore, we will explore how to use the latent space to extrapolate to new tasks based on existing learned tasks, with little or no training data. We also intend to compress the latent space even further, e.g., by using fewer latent vectors than tasks. Promising approaches include clustering the latent vectors into sets of closely related tasks and using sparse latent representations. Finally, we will also investigate how to infer the current task automatically, without a task label.

References

  • [1] Doersch, C.: Tutorial on Variational Autoencoders. ArXiv e-prints (Jun 2016)
  • [2] Goodfellow, I.J., Mirza, M., Xiao, D., Courville, A., Bengio, Y.: An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211 (2013)
  • [3] Greydanus, S.: baby-a3c. https://github.com/greydanus/baby-a3c (2017)
  • [4] Huszár, F.: Note on the quadratic penalties in elastic weight consolidation. Proceedings of the National Academy of Sciences 115(11), E2496–E2497 (2018). https://doi.org/10.1073/pnas.1717042115, http://www.pnas.org/content/115/11/E2496
  • [5] Kemker, R., Abitino, A., McClure, M., Kanan, C.: Measuring catastrophic forgetting in neural networks. CoRR abs/1708.02072 (2018)
  • [6] Kemker, R., Kanan, C.: Fearnet: Brain-inspired model for incremental learning. In: International Conference on Learning Representations (2018), https://openreview.net/forum?id=SJ1Xmf-Rb
  • [7] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., Hadsell, R.: Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114(13), 3521–3526 (2017). https://doi.org/10.1073/pnas.1611835114, http://www.pnas.org/content/114/13/3521
  • [8] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
  • [9] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (Nov 1998). https://doi.org/10.1109/5.726791
  • [10] Li, Z., Hoiem, D.: Learning without forgetting. In: ECCV (2016)
  • [11] Masse, N.Y., Grant, G.D., Freedman, D.J.: Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. CoRR abs/1802.01569 (2018), http://arxiv.org/abs/1802.01569
  • [12] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with Deep Reinforcement Learning. ArXiv e-prints (Dec 2013)
  • [13] Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T.P., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous methods for deep reinforcement learning. CoRR abs/1602.01783 (2016), http://arxiv.org/abs/1602.01783
  • [14] Nguyen, C.V., Li, Y., Bui, T.D., Turner, R.E.: Variational continual learning. In: International Conference on Learning Representations (2018), https://openreview.net/forum?id=BkQqq0gRb
  • [15] Preston, A., Eichenbaum, H.: Interplay of hippocampus and prefrontal cortex in memory. Current Biology 23(17), R764 – R773 (2013). https://doi.org/https://doi.org/10.1016/j.cub.2013.05.041, http://www.sciencedirect.com/science/article/pii/S0960982213006362
  • [16] Ramapuram, J., Gregorova, M., Kalousis, A.: Lifelong Generative Modeling. ArXiv e-prints (May 2017)
  • [17] Rebuffi, S., Kolesnikov, A., Lampert, C.H.: icarl: Incremental classifier and representation learning. CoRR abs/1611.07725 (2016), http://arxiv.org/abs/1611.07725
  • [18] Rifai, S., Vincent, P., Muller, X., Glorot, X., Bengio, Y.: Contractive auto-encoders: Explicit invariance during feature extraction. In: Proceedings of the 28th International Conference on International Conference on Machine Learning. pp. 833–840. ICML’11, Omnipress, USA (2011), http://dl.acm.org/citation.cfm?id=3104482.3104587
  • [19] Robins, A.: Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science 7(2), 123–146 (1995)
  • [20] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (Oct 1986), http://dx.doi.org/10.1038/323533a0
  • [21] Rusu, A.A., Rabinowitz, N.C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., Hadsell, R.: Progressive Neural Networks. ArXiv e-prints (Jun 2016)
  • [22] Schwarz, J., Luketina, J., Czarnecki, W.M., Grabska-Barwinska, A., Whye Teh, Y., Pascanu, R., Hadsell, R.: Progress & Compress: A scalable framework for continual learning. ArXiv e-prints (May 2018)
  • [23] Shin, H., Lee, J.K., Kim, J., Kim, J.: Continual learning with deep generative replay. CoRR abs/1705.08690 (2017), http://arxiv.org/abs/1705.08690
  • [24] Teyler, T.J., DiScenna, P.: The hippocampal memory indexing theory. Behavioral Neuroscience 100(2), 147–154 (1986). https://doi.org/10.1037/0735-7044.100.2.147
  • [25] Triki, A.R., Aljundi, R., Blaschko, M.B., Tuytelaars, T.: Encoder based lifelong learning. CoRR abs/1704.01920 (2017), http://arxiv.org/abs/1704.01920
  • [26] Yoon, J., Yang, E., Lee, J., Hwang, S.J.: Lifelong learning with dynamically expandable networks. In: International Conference on Learning Representations (2018), https://openreview.net/forum?id=Sk7KsfW0-
  • [27] Zenke, F., Poole, B., Ganguli, S.: Improved multitask learning through synaptic intelligence. CoRR abs/1703.04200 (2017), http://arxiv.org/abs/1703.04200