Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains

Elliot Meyerson
Cognizant
elliot.meyerson@cognizant.com

Risto Miikkulainen
Cognizant
The University of Texas at Austin
risto@cs.utexas.edu
Abstract

As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future.

 

Preprint. Work in progress.

1 Introduction

Deep learning methods and applications continue to become more diverse. They now solve problems that deal with fundamentally different kinds of data, including those of human behavior, such as vision, language, and speech, as well as those of natural phenomena, such as biological, geological, and astronomical processes.

Across these domains, deep learning architectures are painstakingly customized to different problems. However, despite this extreme customization, a crucial amount of functionality is shared across solutions. For one, architectures are all made of the same ingredients: some creative composition and concatenation of high-dimensional linear maps and elementwise nonlinearities. They also share a common set of training techniques, including popular initialization schemes and gradient-based optimization methods. The fact that the same small toolset is successfully applied to all these problems implies that the problems have a lot in common. Sharing these tools across problems exploits some of these commonalities by setting a strong prior on the kinds of methods that will work. Such sharing is methodological, with humans determining what is shared.

This observation raises the question: Are there commonalities across these domains that methodological sharing cannot capture? Note that this question is different from that addressed by previous work in deep multi-task learning (DMTL), where the idea is to share knowledge across tasks in the same domain or modality, such as within vision [5, 30, 33, 38, 54, 58] or language [9, 13, 16, 31, 34]. In contrast, this question is fundamental to general problem solving: Can it be beneficial to share learned functionality across a diverse set of tasks, such as a 2D convolutional vision network, an LSTM model for natural language, and a 1D convolutional model for genomics? Specifically, this paper considers the following problem: Given an arbitrary set of (architecture,task) pairs, can learned functionality be shared across architectures to improve performance in each task?

Drawing on existing approaches to DMTL, a first approach to this problem is developed in this paper, showing that such effective sharing is indeed possible. The approach is based on decomposing the general multi-task learning problem into several fine-grained and equally-sized subproblems, or pseudo-tasks. Training a set of (architecture,task) pairs then corresponds to solving a set of related pseudo-tasks, whose relationships can be exploited by shared functional modules. To make this framework practical, an efficient search algorithm is introduced for optimizing the mapping between pseudo-tasks and the modules that solve them, while simultaneously training the modules themselves. The approach, modular universal reparameterization (MUiR), is validated in a synthetic MTL benchmark problem, and then applied to large-scale sharing between the disparate modalities of vision, NLP, and genomics. It leads to improved performance on each task and to highly structured, architecture-dependent sharing dynamics, in which the modules that are shared more widely exhibit greater generality. These results show that MUiR makes it possible to share knowledge across diverse domains, thus establishing a key ingredient for building general problem solving systems in the future.

2 Problem Statement and Related Work

This paper is concerned with the following question: Given an arbitrary set of (architecture,task) pairs, can learned functionality be shared across architectures to improve performance in each task? Any method that answers this question must satisfy two requirements: (1) It must support any given set of architectures, and (2) it must align parameters across the given architectures.

Parameters in two architectures are aligned if they have some learnable tensor in common. An alignment across architectures encodes how, and how strongly, tasks are assumed to be related. The goal of DMTL is to improve performance across tasks through joint training of aligned architectures, exploiting inter-task regularities. In recent years, DMTL has been applied within areas such as vision [5, 30, 33, 38, 54, 58], natural language [9, 13, 16, 31, 34], speech [19, 43, 52], and reinforcement learning [11, 20, 48]. The rest of this section reviews existing DMTL methods, showing that none of them satisfies both conditions (1) and (2).

The classical approach to DMTL considers a joint model across tasks in which some aligned layers are shared completely across tasks, while the remaining layers are kept task-specific [7]. In practice, the most common approach is to share all layers except for the final classification layers [11, 13, 18, 19, 20, 31, 41, 52, 58]. A more flexible approach is to not share parameters exactly across shared layers, but to factorize layer parameters into shared and task-specific factors [3, 23, 28, 32, 53, 54]. Such approaches work for any set of architectures that have a known set of aligned layers. However, these methods only apply when such alignment is known a priori. That is, they do not meet condition (2).

One approach to overcome the alignment problem is to design an entirely new architecture that integrates information from different tasks and is maximally shared across tasks [5, 16, 22]. Such an approach can even be used to share knowledge across disparate modalities [22]. However, by disregarding task-specific architectures, this approach does not meet condition (1). Related approaches attempt to learn how to assemble a set of shared modules in different ways to solve different tasks, whether by gradient descent [36], reinforcement learning [42], or evolutionary architecture search [30]. These methods also construct new architectures, so they do not meet condition (1); however, they have shown that including a small number of location-specific parameters is crucial to sharing functionality across diverse locations.

Drawing on the methods above, this paper introduces a first approach that meets both conditions. First, a simple decomposition is introduced that applies to any set of architectures and supports automatic alignment. This decomposition is extended to include a small number of location-specific parameters, which are integrated in a manner mirroring factorization approaches. Then, an efficient alignment method is developed that draws on automatic assembly methods. These methods combine to make it possible to share effectively across diverse architectures and modalities.

3 Modular Universal Reparameterization

This section presents a framework for decomposing sets of (architecture,task) pairs into equally-sized subproblems (i.e., pseudo-tasks), sharing functionality across aligned subproblems via a simple factorization, and optimizing this alignment with an efficient stochastic algorithm.

3.1 Decomposition into linear pseudo-tasks

Consider a set of tasks $\{\mathcal{T}_t\}_{t=1}^{T}$ with corresponding model architectures $\{\mathcal{M}_t\}_{t=1}^{T}$, each parameterized by a set of trainable tensors $\theta_t$. In MTL, these sets have non-trivial pairwise intersections, and are trained in a joint model to find optimal parameters for each task:

$$\{\theta_t^*\}_{t=1}^{T} \;=\; \operatorname*{arg\,min}_{\{\theta_t\}_{t=1}^{T}} \;\sum_{t=1}^{T} \frac{1}{N_t} \sum_{i=1}^{N_t} \mathcal{L}_t\big(\hat{y}_{ti},\, y_{ti}\big), \qquad \hat{y}_{ti} = \mathcal{M}_t(x_{ti};\, \theta_t), \tag{1}$$

where $\hat{y}_{ti}$ is a prediction and $\mathcal{L}_t$ is a sample-wise loss function for the $t$th task. Given fixed task architectures, the key question in designing an MTL model is how the $\theta_t$ should be aligned. The following decomposition provides a generic way to frame this question.

Suppose each tensor in each $\theta_t$ can be decomposed into equally-sized parameter blocks of size $m \times n$, and there are $B$ such blocks in total across all $\theta_t$. Then, the parameterization of the entire joint model can be rewritten as:

$$\bigcup_{t=1}^{T} \theta_t \;=\; \{W_b\}_{b=1}^{B}, \qquad W_b \in \mathbb{R}^{m \times n}. \tag{2}$$

That is, the entire joint parameter set can be regarded as a single tensor $W \in \mathbb{R}^{B \times m \times n}$. The vast majority of parameter tensors in practice can be decomposed in this way such that each $W_b$ defines a linear map. For one, the weight matrix of a dense layer whose input and output dimensions are divisible by $n$ and $m$, respectively, can be broken into blocks of size $m \times n$, where each block defines a map between a contiguous group of $n$ units of the input space and a contiguous group of $m$ units of the output space. This approach can be extended to convolutional layers by separately decomposing each matrix corresponding to a single location in the receptive field. Similarly, the parameters of an LSTM layer are contained in four matrices, each of which can be separately decomposed. When $m$ and $n$ are relatively small, the requirement that they divide their respective dimensions is a minor constraint; layer sizes can be adjusted without noticeable effect, or overflowing parameters from edge blocks can be discarded.
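To make the block decomposition concrete, the following sketch (an illustration, not code from the paper; the block size and layer shape are arbitrary choices here) splits a dense weight matrix into equally-sized blocks and reassembles it:

```python
import torch

def decompose_dense(W, m, n):
    """Split an (out_dim x in_dim) weight matrix into m x n blocks.

    Assumes out_dim is divisible by m and in_dim by n; returns a tensor
    of shape (num_blocks, m, n), i.e., a slice of the joint block tensor.
    """
    out_dim, in_dim = W.shape
    assert out_dim % m == 0 and in_dim % n == 0
    return (W.reshape(out_dim // m, m, in_dim // n, n)
             .permute(0, 2, 1, 3)      # (block rows, block cols, m, n)
             .reshape(-1, m, n))       # flatten into a list of blocks

def recompose_dense(blocks, out_dim, in_dim):
    """Inverse of decompose_dense: rebuild the full weight matrix."""
    m, n = blocks.shape[1], blocks.shape[2]
    return (blocks.reshape(out_dim // m, in_dim // n, m, n)
                  .permute(0, 2, 1, 3)
                  .reshape(out_dim, in_dim))

# Example: a 256x256 dense layer broken into 16x16 blocks -> 256 blocks.
W = torch.randn(256, 256)
blocks = decompose_dense(W, 16, 16)
assert blocks.shape == (256, 16, 16)
assert torch.equal(recompose_dense(blocks, 256, 256), W)
```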

Figure 1: Pseudo-task decomposition. Architecture $\mathcal{M}_t$, for task $\mathcal{T}_t$, induces a pseudo-task solved by a function $f$. $\mathcal{E}$ is an encoder that provides input to $f$, and $\mathcal{D}$ is a decoder that uses the output of $f$ to produce the final prediction. If $f$ is effective for many [task, encoder, decoder] combinations, then it shows generic functionality.

Now, if each $W_b$ defines a linear map, then training $\{\mathcal{M}_t\}_{t=1}^{T}$ corresponds to solving $B$ linear pseudo-tasks [37] that define subproblems within the joint model. Suppose $W_b$ defines a linear map in $\mathcal{M}_t$. Then, the $b$th pseudo-task is solved by completing the computational graph of $\mathcal{M}_t$ with the subgraph corresponding to $W_b$ removed. The $b$th pseudo-task is denoted by a five-tuple

$$\big(\mathcal{E}_b,\; \mathcal{D}_b,\; \{x_{ti}\}_{i=1}^{N_t},\; \{y_{ti}\}_{i=1}^{N_t},\; \mathcal{L}_t\big), \tag{3}$$

where $\mathcal{E}_b$ is the encoder that maps each $x_{ti}$ to the input of a function solving the pseudo-task, and $\mathcal{D}_b$ takes the output of that function (and possibly $x_{ti}$) to the prediction $\hat{y}_{ti}$. The parameters $\theta_{\mathcal{E}_b}$ and $\theta_{\mathcal{D}_b}$ characterize $\mathcal{E}_b$ and $\mathcal{D}_b$, respectively.

In general, given a pseudo-task, the model for the $t$th task is completed by a differentiable function $f$ that connects the pseudo-task's inputs to its outputs. The goal in solving this pseudo-task is to find a function $f$ that minimizes the loss of the underlying task. The completed model is given by

$$\hat{y}_{ti} = \mathcal{D}_b\big(f(\mathcal{E}_b(x_{ti})),\; x_{ti}\big). \tag{4}$$

This formulation is depicted in Figure 1. Since all pseudo-tasks induced by Eq. 2 have the same input-output specification, if a function $f$ solves one of them, it can be applied to any of them in a modular way.

Since all pseudo-tasks are derived from the same universe of tasks and architectures, sharing modules across them can be valuable. Indeed, sharing across related parameter blocks is a common tool to improve generalization in deep learning. For example, a convolutional layer can be viewed as a dense layer with parameter blocks shared across space, and a recurrent layer as a sequential network of dense layers with parameter blocks shared across depth, i.e., across time steps. Similarly, the standard DMTL approach is to design a joint architecture with some parameter blocks shared across related tasks. This paper extends DMTL to sharing factors across related pseudo-tasks.

3.2 Reparameterization by hypermodules

Assuming an effective alignment of related pseudo-tasks exists, how should parameters be shared across them? Reusing modules at qualitatively different locations in a network has been successful when a small number of location-specific parameters are included to increase flexibility [30, 36], and has been detrimental when such parameters are not included [42]. To include such parameters in a simple and flexible way, and to avoid additional assumptions about the kind of sharing that can occur, each block $W_b$ is generated by a hypermodule, the module-specific analog of a hypernetwork [15, 45].

Associate with the $b$th pseudo-task a context vector $z_b \in \mathbb{R}^{c}$. Suppose there is also a collection of $K$ hypermodules $\{H_k\}_{k=1}^{K}$, with each $H_k \in \mathbb{R}^{c \times m \times n}$, and let $\phi: \{1, \ldots, B\} \to \{1, \ldots, K\}$ be an alignment function that indicates which hypermodule solves the $b$th pseudo-task. Then, the parameters of the underlying architectures are generated by

$$W_b \;=\; H_{\phi(b)} \,\bar{\times}_1\, z_b, \tag{5}$$

where $\bar{\times}_1$ denotes the 1-mode (vector) product of a tensor and a vector [25]. In other words, the value at $W_b[i, j]$ is the dot product between $z_b$ and the mode-1 fiber of $H_{\phi(b)}$ associated with the $(i, j)$th element of $W_b$. With the additional goal of optimizing $\phi$, the block decomposition (Eq. 2) can now be written as

$$\bigcup_{t=1}^{T} \theta_t \;=\; \{H_{\phi(b)} \,\bar{\times}_1\, z_b\}_{b=1}^{B}. \tag{6}$$

To accurately apply Eq. 6 to a set of architectures, the parameter initialization scheme must be preserved. Say the parameters of a layer are initialized i.i.d. with variance $\sigma^2$ and mean 0, and each $W_b$ is initialized with a distinct hypermodule $H_{\phi(b)}$. When $c > 1$, each element of $W_b$ is a sum of $c$ random variables, so it is impossible to initialize $H_{\phi(b)}$ and $z_b$ i.i.d. such that $W_b$ is initialized from a uniform distribution. However, it is possible to initialize $W_b$ from a normal distribution, by initializing $H_{\phi(b)}$ from a normal distribution and initializing $z_b$ with constant magnitude $\alpha$:

$$H_{\phi(b)}[l, i, j] \sim \mathcal{N}(0, \sigma_H^2), \qquad |z_b[l]| = \alpha \;\;\forall l, \qquad \text{with } c\,\alpha^2\sigma_H^2 = \sigma^2. \tag{7}$$

In this paper, $\sigma^2$ and $\sigma_H^2$ are determined by He normal initialization [17], which implies a unique $\alpha$. Although each element of $z_b$ could be initialized uniformly from $\{-\alpha, \alpha\}$, it is instead initialized to the constant $\alpha$, to encourage compatibility of hypermodules across contexts. Similarly, the fact that all contexts have the same initialization makes it easier for hypermodules to capture functionality that applies across pseudo-tasks.
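The reparameterization and initialization in Eqs. 5–7 can be sketched as follows; the context size, hypermodule standard deviation, and block shape below are illustrative choices, with the context magnitude set so that the generated block matches a target He-style variance:

```python
import math
import torch

def init_hypermodule(c, m, n, sigma_target, sigma_H=0.01):
    """Initialize a hypermodule H in R^{c x m x n} and the context magnitude alpha.

    Each generated weight is W[i, j] = sum_l z[l] * H[l, i, j]. With H ~ N(0, sigma_H^2)
    and |z[l]| = alpha, the generated weights have variance c * alpha^2 * sigma_H^2,
    so alpha is chosen to match the target variance sigma_target^2 (Eq. 7).
    """
    H = torch.randn(c, m, n) * sigma_H
    alpha = sigma_target / (sigma_H * math.sqrt(c))
    return H, alpha

def generate_block(H, z):
    """1-mode (vector) product of hypermodule H (c x m x n) with context z (c,)."""
    return torch.einsum('lmn,l->mn', H, z)

# Example: generate one 16x16 block whose entries have He-normal-like variance
# for a layer with fan_in = 256 (sigma^2 = 2 / fan_in).
c, m, n, fan_in = 4, 16, 16, 256
sigma_target = math.sqrt(2.0 / fan_in)
H, alpha = init_hypermodule(c, m, n, sigma_target)
z = torch.full((c,), alpha)      # contexts start at the constant +alpha
W_block = generate_block(H, z)   # shape (16, 16)
```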

Although it is pessimistic to initialize each pseudo-task with its own hypermodule, parsimonious models can be achieved through optimization of $\phi$. Using the same hypermodule for many pseudo-tasks has the side-benefit of reducing the size of the joint model. The original model in Eq. 2 has $Bmn$ trainable parameters, while Eq. 6 has $Kcmn + Bc$, which is more parsimonious only when $Kcmn + Bc < Bmn$, i.e., roughly when each hypermodule is used for more than $c$ pseudo-tasks on average. However, after training, any hypermodule used fewer than roughly $c$ times can be replaced with the parameters it generates, so the model complexity at inference is never greater than that of the original model: the pseudo-tasks parameterized by such rarely-used hypermodules simply revert to ordinary parameter blocks. An algorithm that improves parsimony in this way while exploiting related pseudo-tasks is introduced next.

3.3 Interleaved optimization of pseudo-task alignment

1:  Create D initial solutions φ_1, …, φ_D, each of length B/D
2:  while any φ_d is suboptimal do
3:     for d = 1 to D do
4:        for i = 1 to λ do
5:           φ′_di ← φ_d
6:           for j = 1 to B/D do
7:              With probability 1/(B/D), φ′_di(j) ← a hypermodule index sampled from {1, …, K}
8:     for d = 1 to D do
9:        φ_d ← argmax over {φ_d, φ′_d1, …, φ′_dλ} of f_d
Algorithm 1 Decomposed K-valued (1 + λ)-EA
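A minimal runnable sketch of Algorithm 1 on a toy separable objective is given below; the problem sizes and the stand-in evaluation functions are arbitrary illustrative choices (in the full method, the per-pseudo-task scores come from the learned softmax beliefs described later in this section):

```python
import random

def decomposed_one_plus_lambda_ea(D, length, K, lam, f, generations=2000):
    """Decomposed K-valued (1+lambda)-EA.

    D submappings of the given length are optimized in parallel; each element takes
    values in {0, ..., K-1}; f(d, phi_d) scores the d-th submapping. Each generation,
    lam mutants of each submapping are created by resampling every element with
    probability 1/length, and the best of parent and mutants is kept.
    """
    phis = [[random.randrange(K) for _ in range(length)] for _ in range(D)]
    for _ in range(generations):
        for d in range(D):
            candidates = [phis[d]]
            for _ in range(lam):
                mutant = [random.randrange(K) if random.random() < 1.0 / length else v
                          for v in phis[d]]
                candidates.append(mutant)
            phis[d] = max(candidates, key=lambda phi: f(d, phi))
    return phis

# Toy linear evaluation: each element scores 1 when it matches a hidden target.
D, length, K, lam = 3, 8, 5, 2
targets = [[random.randrange(K) for _ in range(length)] for _ in range(D)]
f = lambda d, phi: sum(int(a == b) for a, b in zip(phi, targets[d]))
solutions = decomposed_one_plus_lambda_ea(D, length, K, lam, f)
print([f(d, solutions[d]) for d in range(D)])  # approaches `length` for each d
```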

Given the above decomposition and reparameterization, the goal is to find an optimal alignment $\phi$, given by a fixed-length mapping $\phi: \{1, \ldots, B\} \to \{1, \ldots, K\}$, with $K$ possible choices for each element. Let $f$ be a scoring function that returns the performance of a mapping via training and evaluation of the joint model. In order to avoid training the model from scratch each iteration, existing DMTL approaches that include nondifferentiable optimization interleave this optimization with gradient-based updates [8, 30, 33, 37, 42]. These methods take advantage of the fact that at every iteration there are $T$ scores, one for each task. These scores can be optimized in parallel, and faster convergence is achieved, by effectively decomposing the problem into $T$ subproblems. This section shows that such problem decomposition can be greatly expanded, leading to practical optimization of $\phi$.

In general, $\phi$ may be decomposed into $D$ submappings $\phi_1, \ldots, \phi_D$, each with a distinct evaluation function $f_d$. For simplicity, let each submapping be optimized with an instance of the (1+$\lambda$)-EA, a Markovian algorithm that is robust to noise, dynamic environments, and local optima [12, 39, 46], and is a component of existing DMTL methods [30, 37]. The algorithm generates new solutions by resampling elements of the best solution with an optimal fixed probability. Algorithm 1 extends the (1+$\lambda$)-EA to optimizing $D$ submappings in parallel. Assume each $\phi_d$ has length $B/D$, and that each $f_d$ is linear, i.e., $f_d(\phi_d) = \sum_{j=1}^{B/D} w_{dj}\,\mathbb{1}[\phi_d(j) = \phi_d^*(j)]$, where the $w_{dj}$ are positive scalars, $\mathbb{1}$ is the indicator function, and $\phi_d^*$ is a unique optimal mapping. The runtime of this algorithm (number of iterations through the while loop) is summarized by the following result (proof in S.1):

Theorem 3.1.

The expected time of the decomposed -valued (1+1)-EA is when all are linear.

Resulting runtimes for key values of $D$ are given in Table 1.

Decomposition Level None (Multi-task) Per-task (Single-task) Per-block (Pseudo-task)
Expected Convergence Time
Table 1: Complexity of pseudo-task alignment. This table gives the expected times of Algorithm 1 for finding the optimal mapping of pseudo-tasks to hypermodules, in a model with $T$ tasks. The runtime of pseudo-task-level optimization scales logarithmically with the size of the model.

As expected, setting $D = T$ gives a substantial speed-up over $D = 1$. However, when $T$ is small relative to $B$, e.g., when sharing across a small number of complex models, the factor of $B/T$ in the numerator is a bottleneck. Setting $D = B$ overcomes this issue, and corresponds to having a distinct evaluation function for each pseudo-task.

The pessimistic initialization suggested in Section 3.2 avoids initial detrimental sharing, but introduces another bottleneck: large $K$. This bottleneck can be overcome by sampling hypermodules in Line 7 proportionally to their usage in $\phi$. Such proportional sampling encodes a prior that biases search towards modules that already show generality, and yields the following result (proof in S.2):

Theorem 3.2.

The expected time of the decomposed K-valued (1+1)-EA with pessimistic initialization and proportional sampling is , when , and all are linear.

Again, this fast convergence requires a pseudo-task-level evaluation function $f_b$. The solution adopted in this paper is to have the model indicate its hypermodule preference directly through backpropagation, by learning a softmax distribution over modules at each location. Similar distributions over modules have been learned in previous work [30, 36, 44]. In Algorithm 1, at a given time there are $1 + \lambda$ active mapping functions: the current best mapping and its $\lambda$ offspring, denoted here by $\phi_1, \ldots, \phi_{1+\lambda}$. Through backpropagation, the modules for each location can compete by generalizing Eq. 5 to include a soft-merge operation:

$$W_b \;=\; \sum_{i=1}^{1+\lambda} \operatorname{softmax}(s_b)[i]\,\big(H_{\phi_i(b)} \,\bar{\times}_1\, z_b\big), \tag{8}$$

where $s_b$ is a vector of weights that induces a probability distribution over the candidate hypermodules at location $b$. Through training, the learned probability of $H_{\phi_i(b)}$ is the model's belief that $H_{\phi_i(b)}$ is the best option for location $b$ out of the $1 + \lambda$ candidates. Using this belief function, Algorithm 1 can optimize $\phi$ while simultaneously learning the model parameters. Each iteration, the algorithm trains the model via Eq. 8 with backpropagation for a fixed number of steps, and $f_b$ returns the learned probabilities of the candidates at location $b$, accounting for duplicates. In contrast to existing model-design methods, task performance does not guide search; this avoids overfitting to the validation set over many generations. Validation performance is only used for early stopping. Pseudocode for the end-to-end algorithm, along with additional training considerations, is given in S.3. The algorithm is evaluated experimentally in the next section.
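A minimal sketch of how the soft-merge of Eq. 8 might be implemented at a single reparameterized location is shown below; the class name and shapes are illustrative, and in the full method the hypermodules and context are shared parameters of the joint model, while the logits play the role of the location's soft weights $s_b$ (trained with the larger learning rate described in S.3):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMergeBlock(nn.Module):
    """Generates one m x n parameter block as a softmax-weighted mixture of the
    blocks produced by the candidate hypermodules at this location (Eq. 8)."""

    def __init__(self, candidate_hypermodules, context):
        super().__init__()
        # The hypermodules and context are owned and registered by the joint
        # model; only the per-location logits are created here.
        self.candidates = candidate_hypermodules      # list of (c, m, n) tensors
        self.context = context                        # (c,) context vector z_b
        self.logits = nn.Parameter(torch.zeros(len(candidate_hypermodules)))

    def forward(self):
        probs = F.softmax(self.logits, dim=0)         # belief over candidates
        blocks = torch.stack([torch.einsum('lmn,l->mn', H, self.context)
                              for H in self.candidates])
        return (probs[:, None, None] * blocks).sum(dim=0)

# Example with two candidate hypermodules at one location.
c, m, n = 4, 16, 16
H_options = [torch.randn(c, m, n) * 0.01 for _ in range(2)]
z = torch.full((c,), 0.5)
W_block = SoftMergeBlock(H_options, z)()              # (16, 16) generated block
```

After training, the argmax of the learned probabilities is committed to the alignment in Algorithm 1.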

4 Experiments

This section evaluates the approach developed in Section 3. First, the dynamics of the approach are validated in a synthetic MTL benchmark. Second, the approach is applied to a scale-up problem of sharing across diverse architectures and modalities. See S.4 for additional experimental details.

4.1 Validating framework dynamics on a synthetic dataset

This section considers an MTL problem where the ground truth alignment is known. The dataset contains three groups of ten linear regression tasks with input dimension 20, but only 15 training samples per task [23]. The ground truth parameter vectors for tasks within a group differ only by a scalar factor. Tasks cannot be solved without exploiting this regularity. Two versions of the problem were considered, one with Gaussian noise added to sample outputs, and one with no noise. As in previous work, each task model is linear, consisting of a single weight vector. In the single-task (STL) case, these vectors are trained independently. In the MTL case (MUiR), each weight vector constitutes a single block, which is reparameterized by a single hypermodule. So, Algorithm 1 is initialized with 30 hypermodules, and should converge to using only three, i.e., one for each group. For comparison, a Random search setup is included (i.e., replacing the argmax in Algorithm 1 with a random choice), as well as an Oracle setup, in which $\phi$ is fixed to the true group alignment. Unlike in previous work, five training samples for each task were withheld as validation data, making the setup more difficult.

MUiR quickly converges to the true underlying grouping in the noiseless case (Figure 2),

Figure 2: Visualizing convergence. These images show the convergence of $\phi$ on the synthetic dataset. Each color corresponds to a distinct hypermodule. The color shown at each location is the hypermodule currently in use for that task. After generation 59 the model remains at the optimal solution indefinitely, demonstrating the efficient convergence of MUiR.

and yields optimal test loss (Table 2).

Method Clean Noisy
STL [23] - 0.97
MTL-FEAT [3] - 0.48
DG-MTL [23] - 0.42
GO-MTL [28] - 0.35
STL (ours)
MUiR + Random
MUiR + Oracle
MUiR + Optimization
Table 2: Synthetic results. MUiR achieves perfect test RMSE in the clean case, even outperforming the Oracle, which can sometimes overfit. MUiR similarly outperforms baselines in the noisy case. Since a linear model is optimal for this dataset, MUiR cannot improve over the best linear method, but it achieves comparable results despite differences in the setup that make it more difficult: withholding data for validation and absence of additional regularization. Also, in contrast to the other methods, MUiR learns the number of groups automatically.

In the noisy case, MUiR results in a similar improvement over the baselines. Since a linear model is optimal for this dataset, MUiR cannot improve over the best linear method, but it achieves comparable results, despite differences in the setup that make generalization more difficult: withholding data for validation and absence of additional regularization. These results show that the softmax evaluation function effectively determines the value of hypermodules at each location. The next section shows that the algorithm scales to more complex problems.

4.2 Sharing across diverse architectures and modalities

This experiment applies MUiR in its intended setting: sharing across diverse architectures and modalities. The hypermodules generate small linear maps and have a small fixed context size, as in previous work on hypernetworks [15]. The joint model shares across a vision problem, an NLP problem, and a genomics problem (see S.5 for additional dataset and architecture details).

The first task is CIFAR-10, the classic image classification benchmark of 60K images [26]. As in previous work on hypernetworks, WideResNet-40-1 (WRN) is the underlying model [15, 55], yielding 2268 blocks to parameterize with hypermodules. The second task is the WikiText-2 language modeling benchmark, with over 2M tokens [35]. The underlying model is the standard stacked LSTM model with two LSTM layers of 256 units each [56], yielding 4096 blocks. The third task is CRISPR binding prediction, where the goal is to predict the propensity of a CRISPR protein complex to bind to (and cut) unintended locations in the genome [21]. The dataset contains binding affinities for over 30M base pairs. The underlying model, DeepBind-256, is from the DeepBind family of 1D-convolutional models designed for protein binding problems [2, 57], yielding 6400 blocks.
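As a sanity check on these block counts, the short computation below reproduces two of them; the 16×16 block size is an assumption here (it is the block size that matches the reported 4096 and 6400 counts), and the layer shapes follow the descriptions in S.5:

```python
def num_blocks(out_dim, in_dim, m=16, n=16, positions=1):
    """Number of m x n blocks in a weight kernel, counted separately per
    receptive-field position (1 for dense layers, kernel size for convolutions)."""
    assert out_dim % m == 0 and in_dim % n == 0
    return positions * (out_dim // m) * (in_dim // n)

# Stacked LSTM: 2 layers of 256 units; each layer has 4 input-hidden and
# 4 hidden-hidden 256x256 gate matrices (assuming a 256-dimensional embedding).
lstm_blocks = 2 * 8 * num_blocks(256, 256)                                    # = 4096

# DeepBind-256: a 1D convolution (kernel size 24, 256 in/out channels) plus a
# 256x256 fully-connected layer; the first and last layers are adapters.
deepbind_blocks = num_blocks(256, 256, positions=24) + num_blocks(256, 256)   # = 6400

print(lstm_blocks, deepbind_blocks)
```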

For each of these three task-architecture pairs, a chain of comparisons was run, with increasing generality: a Baseline that trained the original architecture; an Intratask setup that applied MUiR optimization within a single task model; cross-modal optimization for each pair of tasks; and a cross-modal run across all three tasks.

Modality Architecture Baseline Intratask W+S W+D S+D W+S+D L+S L+D L+S+D
Vision WRN-40-1 (W) 8.48 8.50 8.69 9.20 - 9.02 - - -
Text Stacked LSTM (S) 134.41 132.06 130.63 - 132.62 128.10 129.73 - 130.77
DNA DeepBind-256 (D) 0.1540 0.1466 - 0.1461 0.1469 0.1464 - 0.1469 0.1464
Vision LeNet (L) 21.08 20.67 - - - - 21.02 19.59 20.23
Table 3: Cross-modal results. This table shows the performance of each architecture across a chain of comparisons. Baseline trains the underlying model; Intratask uses MUiR with a single task architecture; the remaining setups indicate multiple architectures trained jointly with MUiR. Lower scores are better: classification error for vision, perplexity for text, and MSE for DNA. For each architecture, the top two setups are in bold. The LSTM, DeepBind, and LeNet models all benefit from cross-modal sharing; in all 16 cases where MUiR is applied to them, it improves their performance over Baseline. Although the text and DNA models both benefit from sharing with WRN, the effect is not reciprocated. The fact that LeNet improves suggests that the problem is not in transferring across modalities, but that WRN has an architecture that is easier to share from than to share to. Overall, the ability of MUiR to improve performance, even in the intratask case, indicates that it can exploit pseudo-task regularities.

The main result is that the text and genomics models always improve when they are trained with MUiR, and improve the most when they are trained jointly with the WRN model (Table 3). This result raises a key question: Does the (WRN,vision) pair behave differently because of WRN or because of vision? To answer this question, an additional set of experiments was run using LeNet [29] as the vision model. This model does indeed always improve with MUiR, and improves the most with cross-modal sharing (Table 3), while similarly improving the text and genomics models. The improvements for all three tasks are significant (S.4). Overall, the results confirm that MUiR can improve performance by sharing across diverse modalities. A likely reason that the benefit of WRN is one-directional is that the modules in WRN are highly specialized to work together as a deep stack. They provide useful diversity in the search for general modules, but they are hard to improve using such modules. This result is important because it both illustrates where the power of MUiR comes from (diversity) and identifies a key challenge for future methods.

To understand the discovery process of MUiR, Figure 3a


Figure 3: (a) Module sharing over time. The number of modules shared exclusively by each subset of tasks is shown for a MUiR run. The differences across subsets show that MUiR optimizes alignment in an architecture-dependent way. For example, the number of modules used only by the WRN and LSTM models always stays small, and the number used only by the DeepBind model eventually shrinks to almost zero, suggesting that the genomics model plays a central role in sharing. As a side-benefit of this optimization, the number of parameters in the model decreases (blue line). (b) Layer-level sharing. To measure sharing across pairs of layers, for each pair in an L+S+D run, this heatmap shows how many times more likely pairs of pseudo-tasks from those layers are to use the same module than they would by chance. Sharing is highly architecture-dependent, with the 1D-convolutional model playing a central role between the 2D-convolutional and 1D-LSTM models.

shows the number of modules used exclusively by each subset of tasks over time in a W+S+D run. The relative size of each subset stabilizes as $\phi$ is optimized, and is consistent over independent runs, showing that MUiR shares in an architecture-dependent way. In particular, the number of modules used only by the W and S models remains small, and the number used only by D shrinks to near zero, suggesting that the genomics model plays a central role in sharing. Analyzed at the layer level in the L+S+D setup, the bulk of sharing does indeed involve D (Figure 3b). D and L are both convolutional, while D and S process 1-dimensional input, which may make it easier for L and S to share with D than directly with each other.

A side-benefit of MUiR is that the number of model parameters decreases over time (Figure 3a), which is helpful when models need to be small, e.g., on mobile devices. Such shrinkage is achieved when the optimized model has many modules that are each used for many pseudo-tasks. Hypermodules are considered generic if they are used more than a threshold number of times in the joint model, and specific otherwise. Similarly, pseudo-tasks are considered generic if they use generic modules and specific otherwise, along with their contexts and generated linear maps. Sets of generic and specific tensors were compared based on statistical properties of their learned parameters. The generic tensors had significantly smaller average standard deviation, L2-norm, and max value (Table 4).

Parameter Group Stdev Mean Norm Max
Hypermodules 7e-4 3e-1 8e-4 6e-3
Contexts 1e-43 1e-143 4e-138 5e-126
Linear Maps 3e-153 5e-2 5e-153 4e-146
Table 4: Generic vs. specific modules. For a W+S+D run of MUiR, this table gives two-tailed $p$-values (Mann-Whitney) comparing generic vs. specific weight tensors over four statistics for each parameter group: modules, contexts, and the linear maps they generate. The generic tensors tend to have a much tighter distribution of parameters, indicative of better generalization: They must be applied in many situations with minimal disruption to overall network behavior.

Such a tighter distribution of parameters indicates greater generality [4, 27].

5 Discussion and Future Work

Given a set of deep learning problems defined by potentially disparate (architecture,task) pairs, MUiR shows that learned functionality can be effectively shared between them. As the first solution to this problem, MUiR takes advantage of existing DMTL approaches, but it is possible to improve it with more sophisticated and insightful methods in the future. Hypermodules are able to capture general functionality, but more sophisticated factorizations could make it easier to exploit pseudo-task relationships [32, 54]. Similarly, the (1+$\lambda$)-EA is simple and amenable to analysis, but more sophisticated optimization schemes [10, 44, 51] may be critical in scaling to more open-ended settings. In particular, the modularity of MUiR makes extensions to lifelong learning [1, 6, 49] especially promising: It should be possible to collect and refine a compact set of modules that are assembled in new ways to solve future tasks as they appear, seamlessly integrating new architectural methodologies. Such functionality is fundamental to general problem solving, providing a foundation for integrating and extending knowledge across all behaviors during the lifetime of an intelligent agent.

6 Conclusion

To go beyond methodological sharing in deep learning, this paper introduced an approach to learning sharable functionality from a diverse set of problems. Training a set of (architecture,task) pairs is viewed as solving a set of related pseudo-tasks, whose relatedness can be exploited by optimizing a mapping between hypermodules and the pseudo-tasks they solve. By integrating knowledge in a modular fashion across diverse domains, the approach establishes a key ingredient for general problem solving systems in the future.

References

  • [1] D. Abel, D. Arumugam, L. Lehnert, and M. Littman. State abstractions for lifelong reinforcement learning. In Proc. of ICML, pages 10–19, 2018.
  • [2] B. Alipanahi, A. Delong, M. T. Weirauch, and B. J. Frey. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nature Biotechnology, 33(8):831, 2015.
  • [3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
  • [4] P. L. Bartlett. For valid generalization the size of the weights is more important than the size of the network. In NIPS, pages 134–140, 1997.
  • [5] H. Bilen and A. Vedaldi. Integrated perception with recurrent multi-task neural networks. In NIPS, pages 235–243. 2016.
  • [6] E. Brunskill and L. Li. Pac-inspired option discovery in lifelong reinforcement learning. In Proc. of ICML, pages 316–324, 2014.
  • [7] R. Caruana. Multitask learning. In Learning to learn, pages 95–133. Springer US, 1998.
  • [8] Z. Chen, V. Badrinarayanan, C.-Y. Lee, and A. Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proc. of ICML 2018, 2018.
  • [9] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML, pages 160–167, 2008.
  • [10] K. Deb and C. Myburgh. Breaking the billion-variable barrier in real-world optimization using a customized evolutionary algorithm. In Proc. of GECCO, pages 653–660, 2016.
  • [11] C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural network policies for multi-task and multi-robot transfer. In Proc. of ICRA, pages 2169–2176, 2017.
  • [12] B. Doerr, T. Jansen, and C. Klein. Comparing global and local mutations on bit strings. In Proc. of GECCO, pages 929–936, 2008.
  • [13] D. Dong, H. Wu, W. He, D. Yu, and H. Wang. Multi-task learning for multiple language translation. In Proc. of ACL, pages 1723–1732, 2015.
  • [14] B. Eisenberg. On the expectation of the maximum of iid geometric random variables. Statistics & Probability Letters, 78(2):135–143, 2008.
  • [15] D. Ha, A. M. Dai, and Q. V. Le. Hypernetworks. In Proc. of ICLR, 2017.
  • [16] K. Hashimoto, C. Xiong, Y. Tsuruoka, and R. Socher. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proc. of EMNLP, pages 1923–1933, 2017.
  • [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. of CVPR, pages 770–778, 2016.
  • [18] J. T. Huang, J. Li, D. Yu, L. Deng, and Y. Gong. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In Proc. of ICASSP, pages 7304–7308, 2013.
  • [19] Z. Huang, J. Li, S. M. Siniscalchi, I.-F. Chen, J. Wu, and C.-H. Lee. Rapid adaptation for deep neural networks through multi-task learning. In Proc. of Interspeech, 2015.
  • [20] M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Proc. of ICLR, 2017.
  • [21] C. Jung, J. A. Hawkins, S. K. Jones, et al. Massively parallel biophysical analysis of crispr-cas complexes on next generation sequencing chips. Cell, 170(1):35–47, 2017.
  • [22] L. Kaiser, A. N. Gomez, N. Shazeer, A. Vaswani, N. Parmar, L. Jones, and J. Uszkoreit. One model to learn them all. CoRR, abs/1706.05137, 2017.
  • [23] Z. Kang, K. Grauman, and F. Sha. Learning with whom to share in multi-task feature learning. In Proc. of ICML, pages 521–528, 2011.
  • [24] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [25] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51:455–500, 2009.
  • [26] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009.
  • [27] A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In NIPS, pages 950–957, 1992.
  • [28] A. Kumar and H. Daumé, III. Learning task grouping and overlap in multi-task learning. In Proc. of ICML, pages 1723–1730, 2012.
  • [29] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2324, 1998.
  • [30] J. Liang, E. Meyerson, and R. Miikkulainen. Evolutionary architecture search for deep multitask networks. In Proc. of GECCO, 2018.
  • [31] X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y. Y. Wang. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proc. of NAACL, pages 912–921, 2015.
  • [32] M. Long, Z. Cao, J. Wang, and P. S. Yu. Learning multiple tasks with multilinear relationship networks. In NIPS, pages 1593–1602. 2017.
  • [33] Y. Lu, A. Kumar, S. Zhai, Y. Cheng, T. Javidi, and R. S. Feris. Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification. Proc. of CVPR, 2017.
  • [34] M. T. Luong, Q. V. Le, I. Sutskever, O. Vinyals, and L. Kaiser. Multi-task sequence to sequence learning. In Proc. of ICLR, 2016.
  • [35] S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models. CoRR, abs/1609.07843, 2016.
  • [36] E. Meyerson and R. Miikkulainen. Beyond shared hierarchies: Deep multitask learning through soft layer ordering. In Proc. of ICLR, 2018.
  • [37] E. Meyerson and R. Miikkulainen. Pseudo-task augmentation: From deep multitask learning to intratask sharing—and back. In Proc. of ICML, 2018.
  • [38] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multi-task learning. In Proc. of CVPR, 2016.
  • [39] F. Neumann and C. Witt. On the runtime of randomized local search and simple evolutionary algorithms for dynamic makespan scheduling. In Proc. of IJCAI, pages 3742–3748, 2015.
  • [40] A. Paszke et al. Automatic differentiation in PyTorch. 2017.
  • [41] R. Ranjan, V. M. Patel, and R. Chellappa. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. CoRR, abs/1603.01249, 2016.
  • [42] C. Rosenbaum, T. Klinger, and M. Riemer. Routing networks: Adaptive selection of non-linear functions for multi-task learning. In Proc. of ICLR, 2018.
  • [43] M. L. Seltzer and J. Droppo. Multi-task learning in deep neural networks for improved phoneme recognition. In Proc. of ICASSP, pages 6965–6969, 2013.
  • [44] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In Proc. of ICLR, 2017.
  • [45] K. O. Stanley, D. B. D’Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15:185–212, 2009.
  • [46] D. Sudholt. On the robustness of evolutionary algorithms to noise: Refined results and an example where noise helps. In Proc. of GECCO, pages 1523–1530, 2018.
  • [47] R. S. Sutton and A. G. Barto. Introduction to reinforcement learning. MIT Press, 1998.
  • [48] Y. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu. Distral: Robust multitask reinforcement learning. In NIPS, pages 4499–4509. 2017.
  • [49] S. Thrun and L. Pratt. Learning to Learn. 2012.
  • [50] C. Witt. Tight bounds on the optimization time of a randomized search heuristic on linear functions. Combinatorics, Probability and Computing, 22(2):294–318, 2013.
  • [51] L. A. Wolsey and G. L. Nemhauser. Integer and combinatorial optimization. John Wiley & Sons, 2014.
  • [52] Z. Wu, C. Valentini-Botinhao, O. Watts, and S. King. Deep neural networks employing multi-task learning and stacked bottleneck features for speech synthesis. In Proc. of ICASSP, pages 4460–4464, 2015.
  • [53] Y. Yang and T. Hospedales. A unified perspective on multi-domain and multi-task learning. In Proceedings of ICLR, 2015.
  • [54] Y. Yang and T. Hospedales. Deep multi-task representation learning: A tensor factorisation approach. In Proc. of ICLR, 2017.
  • [55] S. Zagoruyko and N. Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016.
  • [56] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014.
  • [57] H. Zeng, M. D. Edwards, G. Liu, and D. K. Gifford. Convolutional neural network architectures for predicting DNA-protein binding. Bioinformatics, 32(12):i121–i127, 2016.
  • [58] Z. Zhang, L. Ping, L. C. Chen, and T. Xiaoou. Facial landmark detection by deep multi-task learning. In Proc. of ECCV, pages 94–108, 2014.

Appendix S Supplemental Material

S.1 Proof of Theorem 3.1

The expected time of the decomposed -valued (1+1)-EA is for linear .

Proof.

The proof is a direct extension of the result for the non-decomposed binary-valued algorithm [50], which converges in iterations with high probability. Following that proof exactly, but replacing binary variables to -valued variables increases the convergence time to . Then, each subproblem in the decomposed version converges in time with high probability, that is, the CDF of the convergence of each instance is dominated by an exponential random variable with mean . The maximum of i.i.d. exponential random variables with mean is , where is the th harmonic number [14]. So, the expected convergence time of the entire algorithm is . ∎

S.2 Proof of Theorem 3.2

The expected time of the decomposed K-valued (1+1)-EA with pessimistic initialization and proportional sampling is , when , and all are linear.

Proof.

Let be a variable tracking the number of locations whose module is wrong at iteration . , since the first location is initialized correctly. Let be the expected number of locations whose module is incorrect at time given that are incorrect at time . Then,

(9)

which yields a closed form for :

(10)

If at most 1 location is incorrect, optimizing this location takes constant time. The goal is to find such that :

(11)

Since the expected time to get from to is one iteration, and convergence is faster when is lower, is an upper bound on the expected runtime of the algorithm. ∎

S.3 Additional algorithm details

For the model to learn its hypermodule preferences efficiently, a special learning rate is assigned to the soft weights in Eq. 8. In the experiments, setting this rate to one or two orders of magnitude larger than that of the rest of the model yields reliable results.
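One way to realize this, assuming a PyTorch setup like the one described in S.4 (the parameter lists and the specific rates are illustrative placeholders):

```python
import torch
import torch.nn as nn

# Illustrative placeholders: the base model's parameters and the soft weights
# (the logits s_b from Eq. 8), which in the full method live at each location.
model_params = [nn.Parameter(torch.randn(16, 16)) for _ in range(4)]
soft_weights = [nn.Parameter(torch.zeros(2)) for _ in range(4)]

optimizer = torch.optim.Adam([
    {"params": model_params, "lr": 1e-3},
    {"params": soft_weights, "lr": 1e-1},  # one to two orders of magnitude larger
])
```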

The complete end-to-end algorithm is given in Algorithm 2.

1:  Initialize any non-sharable model parameters .
2:  Initialize , , and with .
3:  Train via Eq. 5 for backprop steps.
4:  for  generations do
5:     for  do
6:        
7:        
8:        for  do
9:           
10:      -element random subset of
11:     for  do
12:        for  do
13:            randomHypermodule()
14:     Train via Eq. 8 for backprop steps.
15:     Evaluate using the validation set for each task.
16:     for  do
17:        
18:  Revert to the state with best validation performance.
19:  Train via Eq. 5 for backprop steps.
Algorithm 2 Interleaved optimization of module alignment

The algorithm interleaves model training with optimization of $\phi$. Interleaving makes the algorithm efficient, because the model need not be trained from scratch each generation. Instead, hypermodule options are sampled each generation for only a subset of pseudo-task locations. Although in theory sampling options for every location yields the fastest convergence, restricting sampling to a subset improves the stability of training, reducing the noise that comes from shocking pseudo-tasks with new modules. In the experiments, this was found to yield reliable results. Training can also be made smoother by training for a number of backprop steps before optimizing $\phi$, and by initializing the probability of the current best hypermodule at each location to be close to one, i.e., $1 - \epsilon'$ for some small $\epsilon'$; with the soft weights of newly sampled candidates initialized to 0, this determines the initialization of the best candidate's soft weight:

(12)

However, in this paper, the soft weights are initialized uniformly in all experiments, so that there is no initial bias towards previously selected hypermodules.

Note that the choice of $\lambda$ is limited by scalability concerns. The cost of one gradient update is approximately $1 + \lambda$ times that of the original model. This pressure towards small $\lambda$ is why $\lambda = 1$ was used in Section 4.2. This scalability pressure also makes it crucial that the results in Section 3.3 apply in the case of $\lambda = 1$.

As required by Theorem 3.2, new hypermodules for a pseudo-task are selected with probability proportional to their current usage. When a hypermodule is no longer used anywhere, it has effectively been deleted. When the number of active hypermodules is less than the initial number $K$, for theoretical robustness, a small probability $\epsilon$ of creating a new hypermodule is always included, similar to the $\epsilon$-greedy approach in reinforcement learning [47]. In this paper, $\epsilon$ is manually set to a small constant in all experiments. The distribution for sampling existing hypermodules is then

$$P(H_k) \;=\; (1 - \epsilon)\,\frac{|\{b : \phi(b) = k\}|}{B}, \qquad P(\text{new hypermodule}) \;=\; \epsilon. \tag{13}$$
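A sketch of this usage-proportional sampling is given below; the function name and the epsilon value are illustrative, and the creation of new hypermodules is simplified to drawing a fresh id:

```python
import random
from collections import Counter

def sample_hypermodule(phi, initial_K, epsilon=1e-3):
    """Sample a hypermodule for a pseudo-task location.

    phi: current alignment, mapping location index -> hypermodule id.
    Existing hypermodules are sampled proportionally to their usage in phi;
    when fewer than initial_K hypermodules remain active, a new one is
    created with small probability epsilon.
    """
    usage = Counter(phi.values())
    active = list(usage.keys())
    if len(active) < initial_K and random.random() < epsilon:
        return max(active) + 1  # id of a freshly created hypermodule
    weights = [usage[k] for k in active]
    return random.choices(active, weights=weights, k=1)[0]

# Example: 6 locations currently mapped to 3 active hypermodules.
phi = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
print(sample_hypermodule(phi, initial_K=6))
```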

In practice, there may be some parameters that are not naturally decomposable via Eq. 2. In particular, the initial layer that transforms raw input and the output layer that produces predictions are modality-specific. They are useful as unshared adapters that learn permutations and scaling to translate between specific and generic representations. For example, for each task in S.5, the first and last layers of its architecture are reserved as adapters.

S.4 Additional experiment details

All models were implemented in PyTorch [40]. Each run was performed using a single NVIDIA GTX 1080 Ti GPU with 12GB RAM.

All models were trained using Adam with default parameters [24]. When learned parameters are reset each generation, their corresponding auxiliary state in Adam is reset as well, to prevent stale optimizer statistics from being applied to the new parameters.
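A sketch of one way to perform this reset in PyTorch (the parameter selection is left abstract; dropping the per-parameter state lets Adam re-initialize it lazily):

```python
import torch
import torch.nn as nn

def reset_adam_state(optimizer, params_to_reset):
    """Drop Adam's per-parameter state (step, exp_avg, exp_avg_sq) for the given
    parameters; Adam re-initializes it lazily on the next update, so stale moment
    estimates are not applied to freshly reset parameters."""
    for p in params_to_reset:
        optimizer.state.pop(p, None)

# Example: reset one parameter and its optimizer state between generations.
w = nn.Parameter(torch.randn(16, 16))
opt = torch.optim.Adam([w], lr=1e-3)
w.sum().backward(); opt.step()          # populates Adam state for w
with torch.no_grad():
    w.copy_(torch.randn(16, 16))        # parameter is re-initialized
reset_adam_state(opt, [w])              # its Adam state is cleared as well
```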

The synthetic dataset in Section 4.1 contains 30 linear regression tasks, each with the same 20-dimensional input space and 1-dimensional output [23]. Each task was generated from a random parameter vector, by multiplying random inputs by this vector to generate 15 training samples and 50 test samples. The goal is to minimize RMSE averaged over all tasks. The tasks are grouped into three groups of ten tasks each. The parameter vectors for tasks within a group differ only by a scalar factor. Tasks cannot be solved reliably without exploiting this regularity. The linear models in these experiments use a batch size of 10 in training.
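A sketch of how such a dataset can be generated, following the description above (the scalar range, the noise level, and the random seed are illustrative choices):

```python
import numpy as np

def make_synthetic_mtl(groups=3, tasks_per_group=10, dim=20,
                       n_train=15, n_test=50, noise_std=0.0, seed=0):
    """Generate grouped linear-regression tasks: tasks in the same group share a
    ground-truth direction and differ only by a scalar factor."""
    rng = np.random.RandomState(seed)
    tasks = []
    for _ in range(groups):
        direction = rng.randn(dim)
        for _ in range(tasks_per_group):
            w = rng.uniform(0.5, 2.0) * direction     # scalar multiple per task
            X = rng.randn(n_train + n_test, dim)
            y = X @ w + noise_std * rng.randn(n_train + n_test)
            tasks.append((X[:n_train], y[:n_train], X[n_train:], y[n_train:]))
    return tasks

tasks = make_synthetic_mtl(noise_std=0.0)  # 30 tasks; set noise_std > 0 for the noisy case
```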

For the results in Table 2, each setup was run ten times. Mean and standard error are reported. Surprisingly, in the clean case, the MUiR + Oracle setup performs worse than MUiR + Optimization. This result is due to the fact that the Oracle setup is still able to occasionally overfit to one of the thirty tasks, because there is so little data, and there are no other forms of regularization. In particular, note that the median RMSE for both MUiR + Oracle and MUiR + Optimization was 0.00. In the noisy case, the noise itself provides sufficient regularization for the Oracle to overcome this issue. However, the improvement of Optimization over Oracle in the clean case illustrates a strength of MUiR that is also captured in Table 4: Since each module is trained in many locations over the course of optimization, it is forced to learn generalizable functionality.

In Figure 2, the first 10 tasks correspond to the first ground truth group, the second 10 to the second group, and the third 10 to the third group. The “Score” at each generation is a coarse measure of how close $\phi$ is to the optimal mapping: Each task adds 1 if the module it uses is shared and only used by tasks in its true group, adds 0 if the module is unshared, and adds -1 if the module is shared by tasks outside of its true group.

In the experiments in Section 4.1, 99 iterations of random search were performed for the noisy case over the hyperparameter ranges , , , and . The setting with the best validation loss was , , , and . This setting was then used across ten runs in both the clean and the noisy case to collect the results in Table 2. Since the linear models learn quickly, was not needed and set to 0.

To scale up to the experiments in Section 4.2, the hyperparameter settings above were copied exactly, except for four values, which were manually adapted from those in Section 4.1: $\lambda$ was set to 1 for maximum computational efficiency; the learning rate for the soft weights was increased so that locations could quickly ignore clearly low-performing modules; the number of backprop steps per generation was increased to 1000 to handle the larger problem size; and the number of initial training steps was set so that the model could initially stabilize before alignment optimization.

In Section 4.2, one run was performed for each of the setups in Table 3, i.e., five to seven runs were performed for each architecture. To confirm the significance of the results, twenty additional runs were performed for the baselines L, D, and S, as well as for the cross-domain setup L+S+D. The means are shown in Table 3. The mean for the baselines was 21.08 (L), 0.1540 (D), and 134.41 (S), while for L+S+D the means were 20.23, 0.1464, and 130.77, respectively. All three of these improvements are significant (Welch's t-test).

In the results in Table 4 there were 666 generic modules, 4344 specific; and 4363 generic pseudo-tasks (i.e., contexts and linear maps) and 8401 specific. Notably, the differences between generic and specific tensors appear for both hypermodules, which are trained for a variable number of pseudo-tasks, and contexts, which are each trained for only one pseudo-task.

S.5 Dataset and architecture details

CIFAR-10. This image classification benchmark has 50,000 training images and 10,000 test images [26]. Of the training images, 5,000 are randomly withheld for validation. As in previous work on hypernetworks, WideResNet-40-1 (WRN) is the underlying model, and standard data augmentation is used [15]. The first and last layers of the model are reserved as adapter layers. All remaining convolutional layers are reparameterized by hypermodules, yielding a total of 2268 blocks. WideResNet defines a family of vision models, each defined by a depth parameter $d$ and a width parameter $w$. WideResNet-40-1 has $d = 40$ and $w = 1$. This model is the smallest (in terms of parameters) high-performing model in the standard WideResNet family. For the additional set of experiments using LeNet [29] as the vision model, all layer sizes were increased to the nearest multiple of 16. This model is sequential with five layers, of which the middle three are reparameterized. Both CIFAR-10 models use a batch size of 128 for training.

WikiText-2. This language modeling benchmark has 2,088,628 training tokens, 217,646 validation tokens, and 245,569 test tokens, with a vocab size of 33,278 [35]. The goal is to minimize perplexity. The underlying model is the standard stacked LSTM model with two LSTM layers each with 256 units, and preprocessing is performed as in previous work [56]. The LSTM layers are reparameterized by hypermodules, yielding a total of 4096 blocks. This standard model has one main parameter, LSTM size. In general, increasing the size improves performance. Common LSTM sizes are 200, 650, and 1000. To simplify the setup by making the LSTM weight kernels divisible by the output dimension of hypermodules, the experiments in Section 4.2 use an LSTM size of 256. The model begins with a word embedding layer, and ends with a dense layer mapping its output to a softmax over the vocabulary. This model uses a batch size of 20 for training.

CRISPR Binding Prediction. The goal of this dataset is to predict the propensity of a CRISPR protein complex to bind to (and cut) unintended locations in the genome [21]. This is an important personalized medicine problem, since it indicates the risk of the technology for a particular genome. When using the technology, there is one particular (target) location that is intended to be cut out by the CRISPR complex, so that this location can be edited. If the complex makes other (off-target) cuts, there may be unintended consequences. Predicting the binding affinity at off-target locations gives an assessment of the risk of the procedure. The dataset contains binding affinities for over 30 million base pairs (bp). Input consists of 201bp windows of one-hot-encoded nucleobases centered around each location. The data is randomly split into non-overlapping training, validation, and test sets, with approximately one million samples withheld for validation and one million for testing. The underlying model, DeepBind-256, is from the DeepBind family of 1D-convolutional models designed for protein binding problems [2, 57]. The first layer embeds the input into 256 channels. The second layer is a 1D convolution with kernel size 24 and 256 output channels, followed by global max pooling. The third layer is fully-connected with 256 hidden units. The final layer is fully-connected with a single output that indicates the predicted binding affinity. The loss is MSE. The middle two layers are reparameterized by hypermodules, yielding 6400 blocks. This model uses a batch size of 256 for training.
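A sketch of this architecture as described above (the ReLU activations and the 1×1-convolution embedding are assumptions, since they are not specified here):

```python
import torch
import torch.nn as nn

class DeepBind256(nn.Module):
    """1D-convolutional model for 201bp one-hot windows (4 channels), as described:
    embed -> conv1d(kernel 24, 256 channels) -> global max pool -> fc(256) -> fc(1)."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Conv1d(4, 256, kernel_size=1)    # adapter: 4 bases -> 256 channels
        self.conv = nn.Conv1d(256, 256, kernel_size=24)  # reparameterized by hypermodules
        self.fc = nn.Linear(256, 256)                    # reparameterized by hypermodules
        self.out = nn.Linear(256, 1)                     # adapter: predicted binding affinity
        self.act = nn.ReLU()

    def forward(self, x):                                # x: (batch, 4, 201)
        h = self.act(self.embed(x))
        h = self.act(self.conv(h))                       # (batch, 256, 178)
        h = h.max(dim=2).values                          # global max pooling
        h = self.act(self.fc(h))
        return self.out(h).squeeze(1)

model = DeepBind256()
x = torch.randn(8, 4, 201)                               # a batch of one-hot-like windows
loss = nn.MSELoss()(model(x), torch.randn(8))
```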
