Incremental multi-domain learning with network latent tensor factorization

Adrian Bulat*   Jean Kossaifi*   Georgios Tzimiropoulos   Maja Pantic
Samsung AI Center, Cambridge
United Kingdom
{adrian.bulat, j.kossaifi, georgios.t, maja.pantic}@samsung.com
* Equal contribution.
Abstract

The prominence of deep learning, the availability of large amounts of annotated data and increasingly powerful hardware have made it possible to reach remarkable performance on supervised classification tasks, in many cases saturating the training sets. However, adapting the learned classification to new domains remains a hard problem, due to at least three reasons: (1) the domains and the tasks might be drastically different; (2) there might be a very limited amount of annotated data for the new domain; and (3) full training of a new model for each new task is prohibitive in terms of memory, due to the sheer number of parameters of deep networks. Instead, new tasks should be learned incrementally, building on prior knowledge from already learned tasks, and without catastrophic forgetting, i.e. without hurting performance on prior tasks. To our knowledge, this paper presents the first method for multi-domain/task learning without catastrophic forgetting using a fully tensorized architecture. Our main contribution is a method for multi-domain learning which models groups of identically structured blocks within a CNN as a high-order tensor. We show that this joint modelling naturally leverages correlations across different layers and results in more compact representations for each new task/domain than previous methods, which have focused on adapting each layer separately. We apply the proposed method to the 10 datasets of the Visual Decathlon Challenge and show that our method offers a significant reduction in the number of parameters and superior performance in terms of both classification accuracy and Decathlon score. In particular, our method outperforms all prior work on the Visual Decathlon Challenge.

1 Introduction

It is now commonly accepted that supervised learning with deep neural networks can provide satisfactory solutions for a wide range of problems. If the aim is to focus on a single task only, then a deep neural network can be trained to obtain satisfactory performance, given the availability of a sufficient amount of labelled training data and computational resources. This is the setting under which Convolutional Neural Networks (CNNs) have been employed to provide state-of-the-art solutions for a wide range of Computer Vision problems such as recognition [16, 36, 9], detection [31], semantic segmentation [20, 8] and human pose estimation [25], to name a few.

However, visual perception is not just concerned with being able to learn a single task at a time, assuming an abundance of labelled data, memory and computing capacity. A more desirable property is to be able to learn a set of tasks, possibly over multiple different domains, under limited memory and finite computing power. This setting is a very general one and many instances of it have been studied in Computer Vision and Machine Learning under various names. The main difference comes from whether we vary the task to be performed (classification or regression) or the domain, which broadly speaking refers to the distribution of the data or the labels for the considered task. These settings can be classified into the following main categories:

Multi-task learning:

most commonly this refers to learning different classification (or regression) tasks (typically) jointly from a single domain. For example, given a facial image one may want to train a CNN to estimate the bounding box, facial landmarks, facial attributes, facial expressions and identity [28].

Transfer learning:

this refers to transferring knowledge from one learned task to another (possibly very different) one, typically via fine-tuning [10]. For example, a model pre-trained on Imagenet can be fine-tuned on another dataset for face detection. Transfer learning results in a different model for the new task.

Domain adaptation:

this setting most commonly refers to learning the same task over a different domain for which training data is available but typically there is little labelled data for the new domain (e.g. [34, 40]). For example, one may learn a model for semantic segmentation using synthetic data (where pixel labels are readily available) and try to convert this model into a new one that works well for the domain of real images [35].

Multi-domain learning:

this refers to learning a single model to perform different tasks over different domains (e.g. [13, 3]). For example, one might want to learn a single model, where most of the parameters are shared, to classify facial expressions and MNIST digits. Note that this setting is much more challenging than transfer learning, which yields a different model for each task.

Multi-domain incremental learning:

this is the same as above but training data are not initially available for all tasks (e.g. [29, 30, 32]). For example, initially a model can be trained on Imagenet, and then new training data become available for facial expressions. In this case, one wants to learn a single model to handle Imagenet classification and facial expressions.

Our paper is concerned with this last problem: multi-domain incremental learning. A key aspect of this setting is that the new task should be learned without harming the classification accuracy and representational power of the original model. This is called learning without catastrophic forgetting [7, 19]. Another important aspect is to keep the newly introduced memory requirements low: a newly learned model should re-use as much as possible the knowledge from already learned tasks, i.e., from a practical perspective, it should re-use or adapt the weights of a network already trained on a different task.

The aforementioned setting has only recently attracted the attention of the neural network community. Notably, the authors of [29] introduced the Visual Decathlon Challenge, which is concerned with incrementally converting an Imagenet classification model to new ones for 9 additional, different domains/tasks.

To our knowledge, only a few methods have been proposed recently to solve it [29, 30, 32, 22]. These works all have in common that incremental learning is achieved with layer-specific adapting modules (simply called adapters) applied to each CNN layer separately. Although the adapters have only a small number of parameters, because they are layer specific, the total number of parameters introduced by the adaptation process scales linearly with the number of layers, and in practice an adaptation network requires about 10% extra parameters (see also [30]). Our main contribution is to propose a tensor method for multi-domain incremental learning that requires significantly fewer new parameters for each new task. In summary, our contributions are:

  • We propose the first fully-tensorized method for multi-domain learning without catastrophic forgetting. Our method differs from previously proposed layer-wise adaptation methods (and their straightforward layer-wise extensions) by grouping all identically structured blocks of a CNN within a single high-order tensor.

  • Our proposed method outperforms all previous works on the Visual Decathlon Challenge, both in terms of average accuracy and challenge score.

  • We perform a thorough evaluation of our model on the 10 datasets of the Visual Decathlon Challenge and show that our method offers a large reduction in model parameters compared with training a new network from scratch, and superior performance over the state-of-the-art in terms of compression rate, classification accuracy and Decathlon points.

  • We show both theoretically and empirically that this joint modelling naturally leverages correlations across different layers and results in learning more compact representations for each new task/domain.

Intuitively, our method first learns, on the source domain, a task-agnostic core tensor. This represents a shared, domain-agnostic, latent subspace. For each new domain, this core is specialized by learning a set of task-specific factors defining the multi-linear mapping from the shared subspace to the parameter space of that domain.

2 Closely Related Work

In this section, we review the related work on incremental multi-domain learning and tensor methods.

Incremental Multi-Domain Learning is the focus of only a few methods, at least for vision-related classification problems. The works of [32] and [29] introduce the concept of layer adapters. These convert each layer of a pre-trained CNN (typically trained on Imagenet) to adapt to a new classification task for which new training data become available (the last layer typically requires retraining because the number of classes will in general be different). Because the layers of the pre-trained CNN remain fixed, such approaches avoid the problem of catastrophic forgetting [7, 19], so that performance on the original task is preserved. The method of [32] achieves this by computing new weights for each layer as a linear combination of the old weights, where the combination is learned in an end-to-end manner for all layers via back-propagation on the new task. The work in [29] achieves the same goal by introducing small residual blocks, composed of batch-norm followed by convolutional layers, after each convolution of the original pre-trained network. Similarly, the newly introduced parameters are learned via back-propagation. The same work introduced the Visual Decathlon Challenge, which is concerned with incrementally adapting an Imagenet classification model to new and completely different domains and tasks. More recently, [30] extends [29] by making the adapters work in parallel with the convolutional layers.

Although the adapters have only a small number of parameters each, they are layer specific, and hence the total number of parameters introduced by the adaptation process grows linearly with the number of layers. In practice, an adaptation network requires about 10% extra parameters (see also [30]). Finally, the work of [22] learns to adapt to a new task by learning how to mask individual weights of a pre-trained network.

Our method significantly differs from these works in that it models groups of identically structured blocks within a CNN with a single high-order tensor. This results in much more compact representations for each new task/domain, with a latent subspace shared between domains. Only a set of factors, representing a very small fraction of the parameters, needs to be learnt for each new task or domain.

Tensor methods. A detailed review of tensor methods falls outside the scope of this section. Herein, we focus on methods which have been used to re-parametrize individual convolutional layers, mainly to speed up computation or to reduce the number of parameters. The authors of [18] propose to decompose each of the 4D tensors representing the convolutional layers of a pretrained network into a sum of rank-1 tensors using a CP decomposition. [12] propose a similar approach but use a Tucker decomposition instead of CP. [1] also use a CP decomposition, but optimize it using the tensor power method. The work of [5] shares parameters within a ResNeXt-like block [41] by applying a Generalized Block Decomposition to a 4th-order tensor. As we show, a straightforward extension of existing multi-domain adaptation methods (e.g. [32]) using tensors results in an adaptation model with a large number of parameters. To improve on this, we propose to model groups of identically structured blocks within a CNN with a single high-order tensor.

Figure 1: Overview of our method. First, a task-agnostic core is learned jointly with the domain-specific factors on the source domain/task (bottom). For a new target domain/task, the same core is specialized for the new task by training a new set of factors (left), and similarly for any further new task (right). Intuitively, the core represents a task-agnostic subspace, while the task-specific factors define the multi-linear mapping from that subspace to the parameter space of each of the domains. Note that here, we represent the higher-order tensors in 3D for clarity.

3 Method

In this section, we introduce our method (depicted in Figure 1) for incremental multi-domain learning, starting with the notation used throughout (Sec. 3.1). Considering a source domain $\mathcal{X}_0$ and output space $\mathcal{Y}_0$, we aim to learn a function $f_0 \colon \mathcal{X}_0 \to \mathcal{Y}_0$ (here, a ResNet-based architecture) parametrized by a tensor $\mathcal{W}_0$. The model and its tensor parametrization are introduced in detail in Section 3.2. The main idea is to then learn a task-agnostic latent manifold, represented by a core tensor $\mathcal{G}$, on the source domain. The parameter tensor $\mathcal{W}_0$ is obtained from $\mathcal{G}$ with task-specific factors $\mathbf{U}_0^{(1)}, \ldots, \mathbf{U}_0^{(6)}$. Given a new target task $t$, we then adapt and learn a new parameter tensor $\mathcal{W}_t$ by specialising $\mathcal{G}$ with a new set of task-specific factors $\mathbf{U}_t^{(1)}, \ldots, \mathbf{U}_t^{(6)}$. This learning process is detailed in Section 3.3. In practice, most of the parameters are shared in $\mathcal{G}$, while the factors contain only a small fraction of the parameters, which leads to large savings in the total number of parameters. We offer an in-depth analysis of these space savings in Section 3.4.

3.1 Notation

In this paper, we denote vectors (1st-order tensors) as $\mathbf{v}$, matrices (2nd-order tensors) as $\mathbf{M}$ and tensors, which generalize the concept of matrices for orders (number of dimensions) higher than 2, as $\mathcal{T}$. $\mathbf{I}$ denotes the identity matrix. Tensor contraction with a matrix, also called the n-mode product, is defined, for a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and a matrix $\mathbf{M} \in \mathbb{R}^{R \times I_n}$, as the tensor $\mathcal{X} \times_n \mathbf{M} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times R \times I_{n+1} \times \cdots \times I_N}$, with:

$(\mathcal{X} \times_n \mathbf{M})_{i_1, \ldots, i_{n-1}, j, i_{n+1}, \ldots, i_N} = \sum_{i_n=1}^{I_n} M_{j, i_n}\, X_{i_1, \ldots, i_{n-1}, i_n, i_{n+1}, \ldots, i_N}$
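To make the n-mode product concrete, the short snippet below illustrates it with TensorLy (the library used for all tensor operations in this work, see Section 4) on a random tensor; the shapes are arbitrary and chosen purely for illustration.

```python
import torch
import tensorly as tl
from tensorly import tenalg

tl.set_backend('pytorch')  # use PyTorch tensors inside TensorLy

# A random 3rd-order tensor X of shape (4, 5, 6) and a matrix M of shape (7, 5).
X = torch.randn(4, 5, 6)
M = torch.randn(7, 5)

# The n-mode product X x_1 M contracts mode 1 of X (size 5) with the columns of M,
# giving a tensor of shape (4, 7, 6).
Y = tenalg.mode_dot(X, M, mode=1)
print(Y.shape)  # torch.Size([4, 7, 6])

# Equivalent "by hand": unfold along mode 1, multiply, fold back.
Y_manual = tl.fold(M @ tl.unfold(X, mode=1), mode=1, shape=(4, 7, 6))
assert torch.allclose(Y, Y_manual, atol=1e-5)
```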

3.2 Latent network parametrization

Figure 2: Overall network architecture. The network consists of 3 macro-modules, colour-coded as follows: orange (1st), dark-orange (2nd) and red (3rd). Each macro-module consists of 4 basic residual blocks [9] composed of 2 convolutional layers with a kernel size of 3×3. A 1×1 conv layer, placed between macro-modules, projects the features whenever the number of channels changes.

We propose to group all the parameters of a neural network into a set of high-order tensors. We do so by collecting all the weights of the neural network into parameter tensors of order 6, one per group of identically structured layers. While the proposed method is not architecture specific, to allow for a fair comparison in terms of overall representation power, we follow [29, 30, 32] and use a modified ResNet-26 [9]. The network consists of 3 macro-modules, each consisting of 4 basic residual blocks [9] (see Fig. 2 for an overview). Each of these blocks contains two convolutional layers with 3×3 filters. Following [29], the macro-modules output 64, 128, and 256 channels respectively. Throughout the network, the resolution is dropped multiple times: first, at the beginning of each macro-module, using a convolutional layer with a stride of 2. A final drop in resolution is done at the end of the network, before the classification layer, using an adaptive average pooling layer that reduces the spatial dimensions to a resolution of 1×1 px.

In order to facilitate the proposed grouped tensorization process, we moved the feature projection layer (a convolutional layer with 1×1 filters), required each time the number of features changes between blocks, outside of the macro-modules (i.e. we place a convolutional layer with a 1×1 kernel before the second and third macro-modules). The overall architecture is depicted in Fig. 2.
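As a concrete illustration, the snippet below sketches this backbone layout in PyTorch. It is only a schematic sketch under simplifying assumptions (residual connections are omitted inside the blocks and the stride-2 downsampling is folded into the 1×1 projections), not the authors' exact implementation.

```python
import torch
import torch.nn as nn

def basic_block(channels):
    # Two 3x3 convolutions with a constant channel count. The residual (skip)
    # connection of the real architecture is omitted here for brevity.
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
    )

def macro_module(channels, num_blocks=4):
    # All 3x3 convolutions inside a macro-module share the same 4D weight shape,
    # which is what allows grouping them into a single 6th-order tensor (Sec. 3.2).
    return nn.Sequential(*[basic_block(channels) for _ in range(num_blocks)])

backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1, bias=False),    # stem (assumed)
    macro_module(64),                               # 1st macro-module
    nn.Conv2d(64, 128, 1, stride=2, bias=False),    # 1x1 projection, outside the module
    macro_module(128),                              # 2nd macro-module
    nn.Conv2d(128, 256, 1, stride=2, bias=False),   # 1x1 projection, outside the module
    macro_module(256),                              # 3rd macro-module
    nn.AdaptiveAvgPool2d(1),                        # global average pooling
    nn.Flatten(),
    nn.Linear(256, 1000),                           # task-specific classifier
)

print(backbone(torch.randn(1, 3, 64, 64)).shape)    # torch.Size([1, 1000])
```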

We closely align our tensor re-parametrization to the network structure by grouping together all the convolutional layers within the same macro-module. For each macro-module $m \in \{1, 2, 3\}$, we construct a 6th-order tensor $\mathcal{W}^{(m)}$ collecting the weights in that group:

$\mathcal{W}^{(m)} \in \mathbb{R}^{C_{\text{out}} \times C_{\text{in}} \times k_h \times k_w \times N_{\text{conv}} \times N_{\text{block}}}$    (1)

where $\mathcal{W}^{(m)}$ is the tensor for the $m$-th macro-module. The 6 dimensions of the tensor are obtained as follows: $(C_{\text{out}}, C_{\text{in}}, k_h, k_w)$ corresponds to the shape of the weights of a particular convolutional layer and represents the number of output channels, number of input channels, kernel height and kernel width, respectively. The 5th mode, $N_{\text{conv}}$, corresponds to the number of convolutional layers within each basic residual block (2 in this case) and, finally, $N_{\text{block}}$ corresponds to the number of residual blocks present in each macro-module (4 for the specific architecture used).
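For illustration, the short PyTorch snippet below builds the eight 3×3 convolutions of one hypothetical 64-channel macro-module and stacks their weights into the 6th-order tensor of Eq. (1); the layer objects are placeholders, the grouping pattern is the point.

```python
import torch
import torch.nn as nn

channels, n_conv, n_block = 64, 2, 4   # N_conv = 2 convs per block, N_block = 4 blocks

# Placeholder convolutions standing in for one macro-module of the backbone.
convs = [[nn.Conv2d(channels, channels, 3, padding=1, bias=False)
          for _ in range(n_conv)] for _ in range(n_block)]

# Stack the per-layer 4D weights into a single 6th-order tensor
# of shape (C_out, C_in, k_h, k_w, N_conv, N_block).
W = torch.stack(
    [torch.stack([conv.weight for conv in block], dim=-1) for block in convs],
    dim=-1,
)
print(W.shape)   # torch.Size([64, 64, 3, 3, 2, 4])
```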

Our model should be compared with previous methods for incremental multi-domain adaptation such as [32] (the method of [29] can be expressed in a similar way), which learn a linear transformation per layer. In particular, [32] learns a 2D adaptation matrix per convolutional layer. Moreover, prior work on tensors (e.g. [12]) has focused on standard layer-wise modelling with a 4th-order tensor of shape $C_{\text{out}} \times C_{\text{in}} \times k_h \times k_w$. In contrast, our model has two additional dimensions and, in general, can accommodate an arbitrary number of dimensions depending on the architecture used.

3.3 Multi-Domain Tensorized Learning

We now consider that we have $T$ tasks, from potentially very different domains. The traditional approach would consist in learning as many models, one for each task. In our framework, this would be equivalent to learning one parameter tensor $\mathcal{W}_t$ independently for each task $t \in \{1, \ldots, T\}$. Instead, we propose that each of the parameter tensors is obtained from a latent subspace, modelled by a task-agnostic tensor $\mathcal{G}$. The (multi-linear) mapping between this task-agnostic core and the parameter tensor is then given by a set of task-specific factors that specialize the task-agnostic subspace for the source domain. Since the reasoning is the same for each of the macro-modules, for clarity, and without loss of generality, we omit the macro-module index $m$ in the notation.

Specifically, we write, for the source domain ($t = 0$):

$\mathcal{W}_0 = \mathcal{G} \times_1 \mathbf{U}_0^{(1)} \times_2 \mathbf{U}_0^{(2)} \times_3 \mathbf{U}_0^{(3)} \times_4 \mathbf{U}_0^{(4)} \times_5 \mathbf{U}_0^{(5)} \times_6 \mathbf{U}_0^{(6)}$    (2)

where $\mathcal{G}$ is a task-agnostic, full-rank core shared between all domains and $\mathbf{U}_0^{(1)}, \ldots, \mathbf{U}_0^{(6)}$ is a set of task-specific (for domain 0) projection factors. We assume here that the task used to train the shared core is a general one with many classes and a large amount of training data (e.g. Imagenet classification). Moreover, a key observation to make at this point is that the number of parameters of the factors is orders of magnitude smaller than the number of parameters of the core.
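A minimal TensorLy sketch of this parametrization, assuming the grouped 6th-order weight tensor of a single macro-module (a random tensor stands in for the real pre-trained weights): a full-rank Tucker decomposition yields the shared core and six small factor matrices, and the factors indeed hold only a tiny fraction of the parameters.

```python
import torch
import tensorly as tl
from tensorly import tenalg
from tensorly.decomposition import tucker

tl.set_backend('pytorch')

# Grouped weights of one macro-module: (C_out, C_in, k_h, k_w, N_conv, N_block).
W0 = torch.randn(64, 64, 3, 3, 2, 4)

# Full-rank Tucker decomposition: task-agnostic core G + six factor matrices U^(k).
core, factors = tucker(W0, rank=list(W0.shape))

# Reconstruction: W0 ~= G x_1 U^(1) x_2 ... x_6 U^(6), cf. Eq. (2).
W0_rec = tenalg.multi_mode_dot(core, factors)
print(torch.dist(W0, W0_rec))                               # close to 0 (full rank)

# The task-specific factors are tiny compared with the shared core.
print(core.numel(), sum(u.numel() for u in factors))        # 294912 vs 8230
```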

For each new target domain $t$, we form a new parameter tensor $\mathcal{W}_t$ obtained from the same latent subspace $\mathcal{G}$. This is done by learning a new set of factors $\mathbf{U}_t^{(1)}, \ldots, \mathbf{U}_t^{(6)}$ to specialize $\mathcal{G}$ for the new task:

$\mathcal{W}_t = \mathcal{G} \times_1 \mathbf{U}_t^{(1)} \times_2 \mathbf{U}_t^{(2)} \times_3 \mathbf{U}_t^{(3)} \times_4 \mathbf{U}_t^{(4)} \times_5 \mathbf{U}_t^{(5)} \times_6 \mathbf{U}_t^{(6)}$    (3)

Note that the new factors represent only a small fraction of the total number of parameters, the majority of which are contained within the shared latent subspace. By expressing the new weight tensor $\mathcal{W}_t$ as a function of the factors $\mathbf{U}_t^{(1)}, \ldots, \mathbf{U}_t^{(6)}$, one can learn them end-to-end via back-propagation on the new task, given that labelled data are available. This allows us to efficiently adapt the domain-agnostic subspace to new domains while retaining the performance on the original task, and training only a small number of additional parameters. Fig. 1 shows a graphical representation of our method, where the weight tensors have been simplified to 3D for clarity.
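The module below sketches how this could look in PyTorch: the core is stored frozen, only the per-task factors are trainable, and the convolution weights are reconstructed on the fly in the forward pass. This is a simplified sketch (batch normalization, strides and residual connections are omitted) rather than the authors' implementation; the identity-initialized factors are placeholders.

```python
import torch
import torch.nn as nn
import tensorly as tl
from tensorly import tenalg

tl.set_backend('pytorch')

class LatentGroupedConv(nn.Module):
    """One tensorized macro-module: frozen task-agnostic core + trainable per-task factors."""

    def __init__(self, core, init_factors):
        super().__init__()
        self.core = nn.Parameter(core, requires_grad=False)        # shared latent subspace
        self.factors = nn.ParameterList(                           # task-specific factors
            [nn.Parameter(f.clone()) for f in init_factors])

    def weights(self):
        # W_t = G x_1 U_t^(1) x_2 ... x_6 U_t^(6), cf. Eq. (3).
        return tenalg.multi_mode_dot(self.core, list(self.factors))

    def forward(self, x):
        W = self.weights()                         # (C_out, C_in, k_h, k_w, N_conv, N_block)
        for b in range(W.shape[-1]):               # 4 basic blocks
            for c in range(W.shape[-2]):           # 2 convolutions per block
                x = torch.relu(nn.functional.conv2d(x, W[..., c, b], padding=1))
        return x

core = torch.randn(64, 64, 3, 3, 2, 4)
factors = [torch.eye(s) for s in core.shape]        # placeholder (identity) factors
block = LatentGroupedConv(core, factors)
print(block(torch.randn(1, 64, 32, 32)).shape)      # torch.Size([1, 64, 32, 32])
print(sum(p.numel() for p in block.parameters() if p.requires_grad))   # 8230: factors only
```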

Auxiliary loss function:

To prevent degenerate solutions and facilitate learning, we additionally explore orthogonality constraints on the task-specific factors. This type of constraint has been shown to act as a regularizer, improving the overall convergence stability and final accuracy [4, 2]. In addition, by adding such a constraint, we aim to enforce the factors of the decomposition to be full column-rank, which ensures that the core of the decomposition preserves essential properties of the full weight tensor, such as the Kruskal rank [11]. In practice, rather than a hard constraint, we add a loss to the objective function:

$\mathcal{L}_{\text{ortho}} = \sum_{k=1}^{6} \big\| \mathbf{U}_t^{(k)\top} \mathbf{U}_t^{(k)} - \mathbf{I} \big\|_F^2$    (4)

The regularization parameter $\lambda$, weighting this penalty in the total objective, was validated on a small validation set.
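A possible implementation of this penalty in PyTorch, assuming a standard squared-Frobenius soft-orthogonality form for Eq. (4); the regularization weight used here is an arbitrary placeholder, not the validated value:

```python
import torch

def orthogonality_loss(factors, weight=1e-3):
    # Soft orthogonality penalty: sum_k || U_k^T U_k - I ||_F^2, scaled by `weight`.
    # `factors` is the list of task-specific factor matrices U_t^(k).
    loss = 0.0
    for U in factors:
        gram = U.t() @ U
        eye = torch.eye(gram.shape[0], device=U.device, dtype=U.dtype)
        loss = loss + torch.norm(gram - eye, p='fro') ** 2
    return weight * loss

# Example: nearly-orthogonal factors give a small penalty.
print(orthogonality_loss([torch.eye(64) + 0.01 * torch.randn(64, 64)]))
```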

3.4 Complexity Analysis

In terms of unique, task-specific parameters learned, our grouping strategy is significantly more efficient than a layer-wise parametrization. For a given group of convolutional layers, in this work defined by the macro-module structure of the ResNet architecture, we can express the total number of task-specific parameters for a layer-wise Tucker case (not proposed in this work, but included for comparison) as follows, with $N$ the number of convolutional layers in the group and $(R_1, R_2, R_3, R_4)$ the ranks of the decomposition:

$N_{\text{layer-wise}} = N \left( C_{\text{out}} R_1 + C_{\text{in}} R_2 + k_h R_3 + k_w R_4 \right)$

In particular, in the case of a full-rank decomposition, we get:

$N_{\text{layer-wise}} = N \left( C_{\text{out}}^2 + C_{\text{in}}^2 + k_h^2 + k_w^2 \right)$    (5)

where $N$ is the number of re-parametrized layers in a given group.

For the linear case [32], only the output-channel mode is adapted (a single $C_{\text{out}} \times C_{\text{out}}$ matrix per layer), and the number of parameters simplifies to $N_{\text{linear}} = N \, C_{\text{out}}^2$.

As opposed to this, for our proposed method, by grouping the parameters together into a single high-order tensor, the total number of task-specific parameters is:

$N_{\text{ours}} = C_{\text{out}} R_1 + C_{\text{in}} R_2 + k_h R_3 + k_w R_4 + N_{\text{conv}} R_5 + N_{\text{block}} R_6$    (6)

For the full-rank case ($R_1 = C_{\text{out}}$, $R_2 = C_{\text{in}}$, $R_3 = k_h$, $R_4 = k_w$, $R_5 = N_{\text{conv}}$, $R_6 = N_{\text{block}}$), this simplifies to:

$N_{\text{ours}} = C_{\text{out}}^2 + C_{\text{in}}^2 + k_h^2 + k_w^2 + N_{\text{conv}}^2 + N_{\text{block}}^2$    (7)

Note that here, $N = N_{\text{conv}} \cdot N_{\text{block}}$, and so the last two terms, $N_{\text{conv}}^2 + N_{\text{block}}^2$, are negligible compared with the first four.

Because in practice $N_{\text{conv}}^2 + N_{\text{block}}^2 \ll C_{\text{out}}^2 + C_{\text{in}}^2 + k_h^2 + k_w^2$, by using the proposed method, we achieve approximately $N = N_{\text{conv}} \cdot N_{\text{block}}$ times fewer task-specific parameters than the layer-wise Tucker parametrization.

Substituting the variables in Eq. (5) and Eq. (7) with the numerical values specific to the architecture used in this work ($C_{\text{out}} = C_{\text{in}} \in \{64, 128, 256\}$ for the three groups, $k_h = k_w = 3$, $N_{\text{conv}} = 2$, $N_{\text{block}} = 4$), the layer-wise parametrization requires roughly $8\times$ more task-specific parameters per group than our grouped parametrization, thus verifying $N_{\text{ours}} \ll N_{\text{layer-wise}}$.

Making the same assumptions as for the linear case, given that we use square convolutional kernels (i.e. $k_h = k_w = k$) and $C_{\text{out}} = C_{\text{in}} = C$, Eq. (7) becomes $2C^2 + 2k^2 + N_{\text{conv}}^2 + N_{\text{block}}^2 \approx 2C^2$, resulting in approximately $N/2$ times fewer parameters than in the linear case ($\approx 4\times$ for the model used).
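The following short script reproduces these counts under the assumptions above (full rank, square 3×3 kernels, 2 convolutions per block, 4 blocks per macro-module); the per-method formulas reflect our reading of Eqs. (5)-(7):

```python
# Task-specific parameters per macro-module group, full-rank case.
def layerwise_tucker(c, k=3, n_conv=2, n_block=4):
    n = n_conv * n_block                               # number of layers in the group
    return n * (2 * c**2 + 2 * k**2)                   # one set of 4 factors per layer

def layerwise_linear(c, n_conv=2, n_block=4):
    return n_conv * n_block * c**2                     # one c x c matrix per layer [32]

def grouped(c, k=3, n_conv=2, n_block=4):
    return 2 * c**2 + 2 * k**2 + n_conv**2 + n_block**2   # one set of 6 factors per group

for c in (64, 128, 256):                               # the three macro-modules
    print(c, layerwise_tucker(c), layerwise_linear(c), grouped(c),
          round(layerwise_tucker(c) / grouped(c), 1),
          round(layerwise_linear(c) / grouped(c), 1))
# Ratios come out to ~8x (vs. layer-wise Tucker) and ~4x (vs. the linear case) per group.
```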

Conclusion:

Our proposed approach uses approximately $N = N_{\text{conv}} \cdot N_{\text{block}}$ times fewer parameters per group than the layer-wise Tucker decomposition and approximately $N/2$ times fewer parameters than the layer-wise linear parametrization. For the ResNet-26 architecture used in this work, $N = 8$.

| Model | #param | ImNet | Airc. | C100 | DPed | DTD | GTSR | Flwr | OGlt | SVHN | UCF | mean | Score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #images | - | 1.3M | 7k | 50k | 30k | 4k | 40k | 2k | 26k | 70k | 9k | - | - |
| Rebuffi et al. [29] | 2 | 59.23 | 63.73 | 81.31 | 93.30 | 57.02 | 97.47 | 83.43 | 89.82 | 96.17 | 50.28 | 77.17 | 2643 |
| Rosenfeld et al. [32] | – | 57.74 | 64.11 | 80.07 | 91.29 | 56.54 | 98.46 | 86.05 | 89.67 | 96.77 | 49.38 | 77.01 | 2851 |
| Mallya et al. [22] | – | 57.69 | 65.29 | 79.87 | 96.99 | 57.45 | 97.27 | 79.09 | 87.63 | 97.24 | 47.48 | 76.60 | 2838 |
| Series Adap. [30] | – | 60.32 | 61.87 | 81.22 | 93.88 | 57.13 | 99.27 | 81.67 | 89.62 | 96.57 | 50.12 | 77.17 | 3159 |
| Parallel Adap. [30] | – | 60.32 | 64.21 | 81.91 | 94.73 | 58.83 | 99.38 | 84.68 | 89.21 | 96.54 | 50.94 | 78.07 | 3412 |
| Parallel SVD [30] | – | 60.32 | 66.04 | 81.86 | 94.23 | 57.82 | 99.24 | 85.74 | 89.25 | 96.62 | 52.50 | 78.36 | 3398 |
| Ours | – | 61.48 | 67.36 | 80.84 | 93.22 | 59.10 | 99.64 | 88.99 | 88.91 | 96.95 | 47.90 | 78.43 | 3585 |

Table 1: Comparison to the state-of-the-art: Top-1 classification accuracy (%) and overall Decathlon scores on all datasets from the Visual Decathlon Challenge. Our method achieves the best performance on both metrics.

4 Experimental setting

In this section, we detail the experimental setting, metrics used and implementation details.

Datasets:

We evaluate our method on the 10 datasets, from very different visual domains, that compose the Visual Decathlon Challenge [29]. This challenge explicitly assesses methods designed to solve the incremental multi-domain learning problem defined in Section 1, i.e. learning without catastrophic forgetting. Imagenet [33] contains 1.3 million images distributed across 1,000 classes. Following [29, 30, 32], this was used as the source domain to train the shared latent subspace of our model, as detailed in Eq. (2). The FGVC-Aircraft Benchmark (Airc.) [21] contains 10,000 aircraft images across 100 different classes; CIFAR100 (C100) [15] is composed of 50,000 small 32×32 images in 100 classes; the Daimler Mono Pedestrian Classification Benchmark (DPed) [23] is a dataset for pedestrian detection (binary classification) composed of 50,000 images; the Describable Texture Dataset (DTD) [6] contains 5,640 images for 47 texture categories; the German Traffic Sign Recognition (GTSR) Benchmark [38] is a dataset of traffic sign images spanning 43 categories; Flowers102 (Flwr) [26] contains 102 flower categories with between 40 and 258 images per class; Omniglot (OGlt) [17] is a dataset of images of handwritten characters from 50 different alphabets; the Street View House Numbers (SVHN) dataset [24] is a digit recognition dataset with images in 10 classes. Finally, UCF101 (UCF) [37] is an action recognition dataset composed of 13,320 images representing 101 action classes.

Metrics:

We follow the evaluation protocol of the Decathlon Challenge and report results in terms of mean accuracy and the decathlon score $S$, computed as follows:

$S = \sum_{d=1}^{10} \alpha_d \max\{0,\, E_d^{\max} - E_d\}^{\gamma_d}$    (8)

where $E_d$ is the test error on domain $d$, $E_d^{\max}$ is considered to be the upper error limit allowed for a given task in order to receive points, $\gamma_d$ is an exponent that controls the reward proportionality, and $\alpha_d$ is a scalar that enforces the limit of 1000 points per task. $E_d^{\max} = 2 E_d^{\text{base}}$, where $E_d^{\text{base}}$ is the error of the strong baseline from [29].
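As an illustration, here is a minimal sketch of this scoring scheme with the commonly used exponent $\gamma_d = 2$; the error values below are placeholders, not the official baseline errors:

```python
def decathlon_score(errors, baseline_errors, gamma=2.0):
    # errors / baseline_errors: per-domain test errors of the evaluated model and
    # of the strong baseline of [29]. Each domain can contribute at most 1000 points.
    score = 0.0
    for err, base in zip(errors, baseline_errors):
        err_max = 2.0 * base                     # upper error limit to receive any points
        alpha = 1000.0 / (err_max ** gamma)      # enforces the 1000-point cap per task
        score += alpha * max(0.0, err_max - err) ** gamma
    return score

# Placeholder example with two domains.
print(decathlon_score(errors=[0.40, 0.20], baseline_errors=[0.45, 0.25]))
```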

Implementation details:

We first train our adapted ResNet-26 model (Fig. 2) on ImageNet for 90 epochs using SGD with momentum, with a learning rate that is decreased in steps every 30 epochs. To avoid over-fitting, we use weight decay. During training, we follow best practices and randomly apply scale jittering, random cropping and flipping. We initialize our weights from a normal distribution, before decomposing them using a Tucker decomposition (Section 3). Finally, we train the obtained core and factors (via back-propagation) by reconstructing the weights on the fly.

For the remaining domains, we load the task-independent core and the factors trained on Imagenet, freeze the core weights and only fine-tune the factors, the batch-norm layers and the two projection layers, which together account for a small fraction of the total number of parameters. The linear layer at the end of the network is trained from scratch for each task and is initialized from a uniform distribution. Depending on the size of the dataset, we adjust the weight decay to avoid overfitting, using a smaller value for the larger datasets and a larger one for the smaller ones (e.g. Flowers102).
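A sketch of how this adaptation stage could be set up in PyTorch; the attribute names on `model` and the hyper-parameter values are illustrative assumptions rather than the authors' actual code:

```python
import torch

def configure_for_new_domain(model, lr=0.01, weight_decay=5e-4):
    # Freeze the shared, task-agnostic cores learned on the source domain.
    for core in model.cores:
        core.requires_grad_(False)

    # Only the task-specific factors, batch-norm layers, the two 1x1 projection
    # layers and the freshly initialized classifier are optimized.
    trainable = (
        [p for factor in model.factors for p in factor.parameters()]
        + [p for bn in model.batchnorms for p in bn.parameters()]
        + list(model.projections.parameters())
        + list(model.classifier.parameters())
    )
    return torch.optim.SGD(trainable, lr=lr, momentum=0.9, weight_decay=weight_decay)
```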

PyTorch [27] was used to implement and train the models; TensorLy [14] was used for all tensor operations.

5 Results

Here, we assess the performance of the proposed approach by i) comparing it to existing state-of-the-art methods on the challenging Visual Decathlon Challenge [29] (Section 5.1) and ii) conducting a thorough study of the method, including the constraints imposed on the core and the factors of the model (Sections 5.2-5.5).

5.1 Comparison with state-of-the-art

Herein, we compare against the current state-of-the-art methods for multi-domain transfer learning [29, 30, 32, 22] on the Decathlon datasets. We train our core subspace on ImageNet and incrementally adapt to all other domains. We report, for all methods, the relative increase in the number of parameters (per domain), the top-1 accuracy on each of the domains, as well as the average accuracy and the overall challenge score (Table 1).

Our approach outperforms all other methods, both in terms of decathlon score (3585 vs. 3412 for the next best method) and mean accuracy (78.43% vs. 78.36%), despite requiring significantly fewer task-dependent parameters. Furthermore, in terms of efficiency, our approach outperforms even the joint compression method of [30] (denoted as "Parallel SVD"), which takes advantage of the data redundancy between tasks.

Figure 3: Impact of rank regularization on ImageNet when training from scratch, for ranks achieving different compression ratios: full rank, reducing the rank by one along the number-of-blocks dimension, and decreasing the rank of the #input and #output channel dimensions.

5.2 Inter-class transfer learning

Most recent work on multi-domain incremental learning attempts to transfer the knowledge from a model pre-trained on a large-scale dataset such as ImageNet to other, easier datasets and/or tasks. In this work, we go one step further and explore the efficiency of our transfer learning approach when such a source dataset (or the computational resources to train on it) is not available, by starting from a model pre-trained on a much smaller dataset. Table 3 shows the results for a network pre-trained on Cifar100. Notice that on some datasets (i.e. GTSRB, OGlt) such a model can match and even marginally surpass the performance of its Imagenet counterpart. On the other hand, on some of the more challenging datasets (i.e. DTD, Aircraft) there is still a large gap. This suggests that the features learned by the Cifar-trained model are less generic and diverse, which we attribute both to the lower quantity of available samples and to overfitting on the (easier) original dataset. A potential solution may be to enforce a diversity loss; however, we leave the exploration of this direction for future work.

Figure 4: Top-1 classification accuracy (%) on DPed, DTD, GTSRB and UCF as a function of the amount of training data. Our method is compared with a network for which both the cores and the factors are fine-tuned on these datasets, trained with the same amount of data.
Dataset
DTD 52.2% 51.3% 51.0%
vgg-flowers 80.9% 83.8% 82.2%
Table 2: Effect of enforcing the weight orthogonality constraint for various values of the regularization parameter λ on two datasets, DTD and vgg-flowers. Results are reported in terms of Top-1 classification accuracy (%) on the validation sets of these datasets.
| Model | Pretrained on | Airc. | C100 | DPed | DTD | GTSR | Flwr | OGlt | SVHN | UCF |
|---|---|---|---|---|---|---|---|---|---|---|
| Ours | ImageNet | 55.6 | 80.7 | 99.67 | 52.2 | 99.96 | 83.8 | 88.18 | 95.66 | 78.6 |
| Ours | Cifar100 | 41.7 | 74.5 | 99.82 | 37.55 | 99.98 | 70.9 | 88.35 | 95.43 | 72.1 |
Table 3: Mean Top-1 accuracy (%) on the validation set, reported for two settings: (a) a model trained on Imagenet and adapted for the rest of the datasets (same as the one used for the decathlon setting, first row), and (b) a more challenging scenario where we train a model on Cifar100 and adapt it for the other datasets (second row). Notice that our method produces satisfactory results even for setting (b), marginally outperforming the Imagenet model on some datasets. This illustrates the representational power of the learned model and the generalization capabilities of the proposed method.
Figure 5: Orthogonality loss on the factors on the DTD dataset. We show the orthogonality loss defined in Eq. (4) for three increasing values of the regularization parameter λ (first, second and third rows). Unsurprisingly, the orthogonality constraint is mostly violated for small values of λ. Interestingly, we observe that the factors are almost all orthogonal, except for those corresponding to the number of input and output channels of the weight tensor, confirming that these require the most adaptation from the task-agnostic subspace.

5.3 Varying the amount of training data

An interesting aspect of incremental multi-domain learning not addressed thus far is the performance on new domains/tasks when only a limited amount of labelled data is available for them. Although not all of the 9 remaining Decathlon tasks come with abundant training data, in this section we systematically assess this by varying the amount of training data for 4 tasks, namely DPed, DTD, GTSRB and UCF. Fig. 4 shows the classification accuracy on these datasets as a function of the amount of training data. In the same figure, we also report the performance of a network for which both the cores and the factors are fine-tuned on these datasets, trained with the same amount of data. In general, we observe that our method is at least as good as the fine-tuned network, which should be considered a very strong baseline requiring as many parameters as the original Imagenet-trained model. This validates the robustness of our model when training with limited amounts of data.

5.4 Rank regularization

It is well known that low-rank structures act as a regularization mechanism [39]. By jointly modelling the parameters of our model as a high-order tensor, our parametrization naturally allows such a constraint, effectively regularizing the whole network and thus preventing over-fitting. It also potentially allows for more efficient representations, by leveraging the redundancy in the multi-linear structure of the network, enabling large compression ratios without a decrease in performance.

Therefore, in this section, we investigate this possibility. To this end, we first attempted to train our Imagenet model by imposing a low-rank constraint on the weight tensor. However, as Fig. 3 shows, doing so already causes a significant drop in performance on the base Imagenet task; hence we did not pursue rank regularization further. We attribute this to the already small number of parameters in the ResNet model.

5.5 Effect of orthogonality regularization

To prevent degenerate solutions and facilitate learning, we added orthogonality constraints on the task-specific factors. This type of constraint has been shown to act as a regularizer, improving the overall convergence stability and final accuracy [4, 2]. In addition, by adding such constraints, we aim to enforce the factors of the decomposition to be full column-rank, which ensures that the core of the decomposition preserves essential properties of the full weight tensor, such as the Kruskal rank [11]. This orthogonality constraint was enforced using a regularization term rather than a hard constraint. See Table 2 for results on two selected small datasets, namely DTD and vgg-flowers.

6 Conclusions

We proposed a novel method for incremental multi-domain learning using tensors. By modelling groups of identically structured blocks within a CNN as a high-order tensor, we are able to express the parameter space of a deep neural network as a (multi-linear) function of a task-agnostic subspace. This task-agnostic core is then specialized by learning a small set of task-specific factors for each new domain. We show that this joint modelling naturally leverages correlations across different layers and results in more compact representations for each new task/domain than previous methods, which have focused on adapting each layer separately. We test the proposed method on the 10 datasets of the Visual Decathlon Challenge and show that our method offers a significant reduction in model parameters and outperforms existing work, both in terms of classification accuracy and Decathlon points.

References

  • [1] M. Astrid and S. Lee. Cp-decomposition with tensor power method for convolutional neural networks compression. CoRR, abs/1701.07148, 2017.
  • [2] N. Bansal, X. Chen, and Z. Wang. Can we gain more from orthogonality regularizations in training deep cnns? arXiv preprint arXiv:1810.09102, 2018.
  • [3] H. Bilen and A. Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
  • [4] A. Brock, T. Lim, J. M. Ritchie, and N. Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
  • [5] Y. Chen, X. Jin, B. Kang, J. Feng, and S. Yan. Sharing residual units through collective tensor factorization in deep neural networks. 2017.
  • [6] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In CVPR, 2014.
  • [7] R. M. French. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128–135, 1999.
  • [8] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In ICCV, 2017.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [10] M. Huh, P. Agrawal, and A. A. Efros. What makes imagenet good for transfer learning? arXiv preprint arXiv:1608.08614, 2016.
  • [11] B. Jiang, F. Yang, and S. Zhang. Tensor and its tucker core: The invariance relationships. Numerical Linear Algebra with Applications, 24(3):e2086.
  • [12] Y. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. CoRR, 05 2016.
  • [13] I. Kokkinos. Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In CVPR, 2017.
  • [14] J. Kossaifi, Y. Panagakis, A. Anandkumar, and M. Pantic. Tensorly: Tensor learning in python. CoRR, abs/1610.09555, 2018.
  • [15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [17] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
  • [18] V. Lebedev, Y. Ganin, M. Rakhuba, I. V. Oseledets, and V. S. Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. CoRR, abs/1412.6553, 2014.
  • [19] D. Li, Y. Yang, Y.-Z. Song, and T. M. Hospedales. Learning to generalize: Meta-learning for domain generalization. arXiv preprint arXiv:1710.03463, 2017.
  • [20] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [21] S. Maji, E. Rahtu, J. Kannala, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
  • [22] A. Mallya, D. Davis, and S. Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In ECCV, 2018.
  • [23] S. Munder and D. M. Gavrila. An experimental study on pedestrian classification. IEEE TPAMI, 28(11):1863–1868, 2006.
  • [24] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshops, volume 2011, 2011.
  • [25] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
  • [26] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008.
  • [27] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
  • [28] R. Ranjan, V. M. Patel, and R. Chellappa. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE TPAMI, 2017.
  • [29] S.-A. Rebuffi, H. Bilen, and A. Vedaldi. Learning multiple visual domains with residual adapters. In NIPS, 2017.
  • [30] S.-A. Rebuffi, H. Bilen, and A. Vedaldi. Efficient parametrization of multi-domain deep neural networks. In CVPR, 2018.
  • [31] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  • [32] A. Rosenfeld and J. K. Tsotsos. Incremental learning through deep adaptation. arXiv preprint arXiv:1705.04228, 2017.
  • [33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • [34] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
  • [35] S. Sankaranarayanan, Y. Balaji, A. Jain, S. N. Lim, and R. Chellappa. Learning from synthetic data: Addressing domain shift for semantic segmentation. In CVPR, 2018.
  • [36] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv, 2014.
  • [37] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
  • [38] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural networks, 32, 2012.
  • [39] C. Tai, T. Xiao, X. Wang, and W. E. Convolutional neural networks with low-rank regularization. CoRR, abs/1511.06067, 2015.
  • [40] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In CVPR, 2017.
  • [41] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.