On the Transformer Growth for Progressive BERT Training
As the excessive pre-training cost arouses the need to improve efficiency, considerable efforts have been made to train BERT progressively: starting from an inferior but low-cost model and gradually increasing the computational complexity.
Our objective is to help advance the understanding of such Transformer growth and discover principles that guide progressive training.
First, we find that similar to network architecture selection, Transformer growth also favors compound scaling.
Specifically, while existing methods only conduct network growth in a single dimension, we observe that it is beneficial to use compound growth operators and balance multiple dimensions (e.g., depth, width, and input length of the model).
Moreover, we explore alternative growth operators in each dimension via controlled comparison to give practical guidance for operator selection.
In light of our analyses, the proposed method CompoundGrow speeds up BERT pre-training by 73.6% and 82.2% for the base and large models respectively while achieving comparable performance.
Thanks to increasing computational power, pre-trained language models have been breaking the glass ceiling for natural language processing tasks (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020). However, with great power come great challenges: the excessive computational consumption required by large-scale model training significantly impedes the efficient iteration of both research exploration and industrial applications. To lower the training cost, many attempts have been made to conduct progressive training, which starts from training an inferior but low-cost model and gradually increases its resource consumption (Gong et al., 2019; Devlin et al., 2019). Typically, two components are needed for designing such progressive training algorithms: the growth scheduler and the growth operator (Dong et al., 2020). The former controls when to conduct network growth, and the latter controls how to perform it. Here, our objectives are to better understand growth operators for Transformer models and, specifically, to help design better progressive algorithms for BERT (Devlin et al., 2019) pre-training.
First, we discuss the importance of using compound growth operators, which balance different model dimensions (e.g., number of layers, hidden dimension, and input sequence length). Despite previous efforts on Transformer growth, existing methods mainly focus on one single model dimension: either the length (Devlin et al., 2019) or the depth dimension (Gong et al., 2019). In this work, however, we find that the compound effect plays a vital role in growing a model to different capacities, just as it does in deciding network architectures under certain budgets (Tan and Le, 2019). Specifically, we show that growing a Transformer along both dimensions leads to better performance at less training cost, which verifies our intuition and shows the potential of compound growth operators in progressive BERT training.
Moreover, we further explore the potential choices of growth operators on each dimension. We conduct controlled experiments and comprehensive analyses to compare various available solutions. These analyses further guide the design of effective compound growth operators. Specifically, we observe that, on the length dimension, embedding pooling is more effective than directly truncating sentences. On the width dimension, parameter sharing outperforms low-rank approximation.
Guided by our analyses, we propose CompoundGrow by combining the most effective growth operator on each dimension. In our experiments, we show how CompoundGrow trains standard BERT models at substantially lower cost without sacrificing final performance, speeding up overall pre-training by 73.6% on BERT-base and 82.2% on BERT-large.
Our main contributions are two-fold:
We conduct comprehensive analyses on Transformer growth. Specifically, we first recognize and verify the importance of balancing different model dimensions during the growth, and then explore and evaluate potential growth operators to provide practical guidance.
Guided by our analyses, we propose CompoundGrow, which progressively grows a Transformer from its length, width and depth dimensions. Controlled experiments demonstrate its effectiveness in reducing the training cost of both BERT-base and BERT-large models without sacrificing performance.
2 Related Work
Progressive training was originally proposed to improve training stability: it starts from an efficient, small model and gradually increases the model capacity (Simonyan and Zisserman, 2014). Recent studies leverage this paradigm to accelerate model training. For example, the multi-level residual network (Chang et al., 2018) explores augmenting network depth from a dynamical systems view and transforms each layer into two subsequent layers. AutoGrow (Wen et al., 2019) attempts to automate the discovery of the proper depth to achieve near-optimal performance on different datasets. LipGrow (Dong et al., 2020) proposes a learning algorithm with an automatic growing scheduler for convolutional nets. Meanwhile, many studies have focused on model growth operators. Network Morphism (Wei et al., 2016; 2017) grows a layer into multiple layers while keeping the represented function intact. Net2Net (Chen et al., 2015) successfully transfers knowledge to a wider network with function-preserving initialization. Similar ideas appear in many other settings, including progressive growing of GANs (Karras et al., 2017) and adaptive computation time (Graves, 2016; Jernite et al., 2016).
As large-scale pre-training keeps advancing the state of the art (Devlin et al., 2019; Radford, 2018), its overwhelming computational consumption becomes the major obstacle to developing more powerful models (Brown et al., 2020). Preliminary applications of progressive training have been made to Transformer pre-training. Devlin et al. (2019) design a two-stage training regime with a reduced sequence length for the first 90% of updates. Gong et al. (2019) stack the trained weights of a shallow model to initialize a deeper one, growing BERT-base along the depth dimension and achieving 25% shorter training time.
Consider a Transformer model with $L$ layers (Vaswani et al., 2017). As in Figure 1, each Transformer layer consists of a self-attention layer and a feedforward layer. Here, we refer to the number of tokens in a sentence as $n$, the embedding dimension as $d$, and the hidden dimension of the feedforward layer as $d_f$. Also, we mark the layer input as $X$ with shape $(n \times d)$.
Feedforward Layer. Transformers use two-layer perceptrons as feedforward networks, defined as $\mathrm{FFN}(X) = \sigma(X W_1 + b_1) W_2 + b_2$, where $\sigma$ is the non-linear activation (i.e., GELU), and $W_1 \in \mathbb{R}^{d \times d_f}$, $W_2 \in \mathbb{R}^{d_f \times d}$, $b_1$, $b_2$ are parameters. The feedforward layer requires $O(n d\, d_f)$ Mult-Add operations and $O(d\, d_f)$ parameters.
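As a concrete reference, the feedforward block above can be sketched in a few lines of NumPy (a toy-sized illustration, not the paper's implementation; the tanh-based GELU approximation is one common variant used in BERT):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, as used in BERT implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn(X, W1, b1, W2, b2):
    """Two-layer feedforward block: sigma(X W1 + b1) W2 + b2."""
    return gelu(X @ W1 + b1) @ W2 + b2

n, d, d_f = 8, 16, 64  # sequence length, model dim, hidden dim (toy sizes)
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
W1, b1 = rng.standard_normal((d, d_f)), np.zeros(d_f)
W2, b2 = rng.standard_normal((d_f, d)), np.zeros(d)
assert ffn(X, W1, b1, W2, b2).shape == (n, d)  # output keeps the input shape
```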
Self-Attention Layer. In Transformer models, attention layers are designed as multi-head attention layers, which allow the network to have multiple focuses in a single layer and are crucial for model performance (Chen et al., 2018). With $H$ heads, the layer is defined as
$$\mathrm{Attn}(X) = \sum_{i=1}^{H} \mathrm{softmax}\Big(\frac{X W_i^Q (X W_i^K)^{T}}{\sqrt{d_k}}\Big)\, X W_i^V W_i^O,$$
where $\mathrm{softmax}$ is the row-wise softmax function and $W_i^Q$, $W_i^K$, $W_i^V$, $W_i^O$ are parameters. $W_i^Q$ and $W_i^K$ are matrices with shape $(d \times d_k)$, and $W_i^V$ and $W_i^O$ are matrices with shape $(d \times d_k)$ and $(d_k \times d)$ respectively. Parameters without a head subscript refer to the concatenation of all $H$ heads' parameters, e.g., $W^Q = [W_1^Q, \dots, W_H^Q]$. The self-attention layer requires $O(n^2 d + n d^2)$ Mult-Adds and $O(d^2)$ parameters.
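For illustration, a minimal NumPy sketch of multi-head self-attention in the per-head sum form above (toy dimensions; biases, masking, and dropout omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, WQ, WK, WV, WO):
    """X: (n, d); WQ/WK/WV: (H, d, d_k); WO: (H, d_k, d)."""
    H, _, d_k = WQ.shape
    out = 0.0
    for i in range(H):
        Q, K, V = X @ WQ[i], X @ WK[i], X @ WV[i]  # each (n, d_k)
        A = softmax(Q @ K.T / np.sqrt(d_k))        # (n, n) attention weights
        out = out + A @ V @ WO[i]                  # sum head outputs into (n, d)
    return out

n, d, H = 8, 16, 4
d_k = d // H
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
WQ, WK, WV = (rng.standard_normal((H, d, d_k)) for _ in range(3))
WO = rng.standard_normal((H, d_k, d))
assert multi_head_attention(X, WQ, WK, WV, WO).shape == (n, d)
```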
In summary, the computational complexity of an $L$-layer Transformer is $O\big(L (n^2 d + n d^2 + n d\, d_f)\big)$. Progressive training methods aim to start from small models with less computational cost and gradually grow such models to the full model during the training stages.
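The dominant terms of this cost can be tallied with a rough Mult-Add counter (a back-of-the-envelope sketch; constant factors are simplified and biases/normalization ignored):

```python
def transformer_mult_adds(L, n, d, d_f):
    """Rough per-sequence Mult-Adds for an L-layer Transformer."""
    proj = 4 * n * d * d    # Q, K, V, and output projections
    attn = 2 * n * n * d    # QK^T scores and attention-weighted values
    ffn = 2 * n * d * d_f   # two feedforward matrix multiplies
    return L * (proj + attn + ffn)

# Shorter inputs (n) and shallower models (L) both cut cost, which is
# exactly what progressive-training stages exploit.
base = transformer_mult_adds(L=12, n=512, d=768, d_f=3072)
short = transformer_mult_adds(L=12, n=128, d=768, d_f=3072)
assert short < base
```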
4 Compound Transformer Growth
In this section, we formulate the task of progressive model growth, introduce our compound growth hypothesis, and conduct empirical verification.
4.1 Progressive Training
Algorithm 1 presents a generic setup for progressive training. In each training stage $t$, the corresponding growth operator $g_t$ grows the model: $f_t = g_t(f_{t-1})$. Then, $f_t$ is updated by the optimizer before entering the next training stage. Correspondingly, our goal is to maximize the final model performance after all $T$ training stages, which can be formulated as:
$$\min_{g_1, \dots, g_T} \; \mathcal{L}\Big(\mathrm{opt}\big(g_T( \cdots\, \mathrm{opt}(g_1(f_0)) \cdots )\big)\Big), \qquad (1)$$
where $\mathcal{L}$ is the empirical loss function, and $\mathrm{opt}(\cdot)$ refers to the parameter updates of one whole epoch.
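The staged procedure behind this objective can be sketched as a simple loop (a schematic only: `init_small_model`, `grow_ops`, and `train_one_stage` are hypothetical callables standing in for $f_0$, the operators $g_t$, and the optimizer):

```python
def progressive_train(init_small_model, grow_ops, train_one_stage):
    """Generic progressive training: grow, then optimize, once per stage."""
    model = init_small_model()          # low-cost starting model f_0
    for grow in grow_ops:               # one growth operator g_t per stage
        model = grow(model)             # f_t = g_t(f_{t-1})
        model = train_one_stage(model)  # parameter updates for this stage
    return model

# Toy stand-in: "models" are integers, growth doubles, training adds one.
final = progressive_train(lambda: 1,
                          [lambda m: m * 2, lambda m: m * 2],
                          lambda m: m + 1)
assert final == 7  # ((1*2)+1)*2 + 1
```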
4.2 Compound Growth
Note that our objective (Equation 1) is close to the objective of EfficientNet (Tan and Le, 2019), which aims to find the optimal network architecture by maximizing the model accuracy for a given resource budget:
$$\max_{d_c,\, w_c,\, r_c} \; \mathrm{Accuracy}\big(\mathcal{N}(d_c, w_c, r_c)\big) \quad \text{s.t.} \quad \mathrm{FLOPs}\big(\mathcal{N}(d_c, w_c, r_c)\big) \le \text{budget},$$
where $\mathcal{N}$ is a CNN and $d_c$, $w_c$, $r_c$ are coefficients that scale its depth, width, and resolution. The success of EfficientNet demonstrates the importance of balancing different dimensions. Here, we likewise recognize such a balance as the key to developing effective progressive training algorithms.
Intuitively, growing the network along more than one dimension creates a much larger space in which to find better performance at less cost: restricting the growth operator to a single dimension shrinks the feasible set of Equation 1 and can therefore only yield an equal or worse optimum. Because of the gap between this theoretical optimum and the strategies accessible in practice, we proceed to empirically verify the following claim:
Claim 1. —
With the same tuning budget, growth operators that balance different model dimensions usually perform better than operators restricted to a single dimension.
4.3 Empirical Verification
We use two growth dimensions (i.e., length and depth) to verify our intuitions. To reduce the training cost to roughly the same fraction of the original value, we can either train on data with a reduced sequence length, or use a small model with fewer Transformer layers; alternatively, we can jointly apply milder reductions to both the sequence length and the number of layers. We split the total training steps into low-cost steps for the low-cost model and final steps for the full model after the one-time model growth. We then use the same hyperparameter setting to evaluate the fine-tuning performance of the various full models.
Across different settings (columns) and metrics (rows), the compound operator consistently outperforms, or at least achieves comparable results with, single-dimensional operators. This observation meets our intuition: to achieve the same speedup, the compound method can distribute the required reduction in model size across different dimensions and thereby achieve better performance.
5 Explore Growth Operators for Transformers
[Table 1: controlled comparison of growth operators on the length and width dimensions, reporting MNLI, SQuAD v1.1, and SQuAD v2.0 scores; rows include the FFN parameter-sharing operator.]
Knowing the importance of compound growth, we still face many possible design choices when building a low-cost Transformer model and growing it to a larger one along each dimension. Since the literature lacks discussion and analysis of conversions on the length dimension, we first empirically explore operators for this dimension. Then, we extend the choice of growth operators to a third dimension, the width of intermediate outputs.
5.1 Length Dimension
Data Truncation first limits the maximum length of input sequences by truncating the training sentences to a shorter length, and then trains the model on full-length data. Note that shorter input sequences usually come with fewer masked tokens to predict in each sentence. For instance, Devlin et al. (2019) first use sentences of at most 128 tokens (with 20 masked tokens) before training on data of 512 tokens (with 76 masked tokens). The major issue of this operator is the incomplete update of position embeddings: the model needs to learn embeddings for the extra positions from scratch in the last stage.
Embedding Pooling. Inspired by the idea of multigrid training in the vision domain (Wu et al., 2020), we train the model with "low-resolution text" through embedding pooling. Its major potential benefit over data truncation is that it leaves the training data intact and can update all position embeddings.
Since position information is carried by position embeddings, a Transformer can process tokens regardless of their input order. Thus, we first reorder the input token embeddings to separate masked tokens from unmasked ones in pre-training, and apply pooling only to the unmasked tokens. In this way, all masked tokens preserve their unique representations for the masked language modeling task. Moreover, since the output length of a self-attention module is determined by the length of its query vectors, we conduct pooling only on the query vectors and keep the key/value vectors intact.
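A minimal single-head NumPy sketch of this query-only pooling (illustrative only; the actual model uses learned projections per head, attention masking, and the masked-token reordering described above):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_pool_rows(X, size=2):
    """Average every `size` consecutive rows (token embeddings)."""
    n, d = X.shape
    return X[: n - n % size].reshape(-1, size, d).mean(axis=1)

def attention_query_pooled(X, WQ, WK, WV, pool=2):
    """Single-head attention where only queries are pooled: the output
    length follows the (shorter) query sequence, while keys/values keep
    the full-resolution sequence."""
    Q = mean_pool_rows(X, pool) @ WQ           # (n/pool, d_k)
    K, V = X @ WK, X @ WV                      # (n, d_k) each
    A = softmax(Q @ K.T / np.sqrt(WQ.shape[1]))
    return A @ V                               # (n/pool, d_k)

n, d, d_k = 8, 16, 16
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
WQ, WK, WV = (rng.standard_normal((d, d_k)) for _ in range(3))
assert attention_query_pooled(X, WQ, WK, WV).shape == (n // 2, d_k)
```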
As shown in the first group of Table 1, data truncation and mean pooling have similar performance on MNLI and SQuAD v1.1, while mean pooling outperforms data truncation on SQuAD v2.0.
5.2 Width Dimension
On the width dimension, we aim to reduce the intra-block cost of the feedforward network (FFN).
Matrix Factorization. A straightforward method is to decompose the original weight matrix $W$ into the product of two smaller matrices $A$ and $B$ in the early stage, i.e., $W \approx AB$. In the late stage of training, we recover $W$ as $AB$ and train the full matrix to unleash its full potential.
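A NumPy sketch of this two-stage scheme with toy, hypothetical shapes (the rank and dimensions are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_f, r = 16, 64, 4              # toy dims; r is the low rank

# Early stage: train the two small factors instead of the full matrix W.
A = rng.standard_normal((d, r))    # (d x r)
B = rng.standard_normal((r, d_f))  # (r x d_f)

# Growth step: materialize W = A @ B and continue training it directly.
W = A @ B
assert W.shape == (d, d_f)
assert np.linalg.matrix_rank(W) <= r  # the grown W starts at rank <= r
```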
Parameter Sharing. Since the parameters in the FFN come in pairs, we can directly share parameters instead of decomposing them. Specifically, we split the original weight matrices $W_1$ and $W_2$ into $k$ pairs of smaller matrices along the hidden dimension $d_f$, i.e., $W_1 = [W_1^{(1)}, \dots, W_1^{(k)}]$ and $W_2 = [W_2^{(1)}; \dots; W_2^{(k)}]$, so that the feedforward output can be calculated as $\mathrm{FFN}(X) = \sum_{i=1}^{k} \sigma(X W_1^{(i)}) W_2^{(i)}$ (biases omitted for brevity). Therefore, in the early stage we train only a single shared pair $(W_1^{(1)}, W_2^{(1)})$. Then, at the growth step, we duplicate $W_1^{(1)}$ horizontally $k$ times as the new $W_1$, and stack $W_2^{(1)}$, scaled by $1/k$, vertically $k$ times as the new $W_2$. Formally, for a given input $X$,
$$\mathrm{FFN}(X) = \sum_{i=1}^{k} \sigma\big(X W_1^{(1)}\big)\, \frac{W_2^{(1)}}{k} = \sigma\big(X W_1^{(1)}\big)\, W_2^{(1)},$$
which preserves the output after the growth as well.
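The function-preserving claim can be checked numerically with a NumPy sketch of the growth step (a toy illustration, not the exact training code; rescaling the duplicated chunk of $W_2$ by $1/k$ is the convention assumed here so the $k$ identical summands reproduce the small model's output):

```python
import numpy as np

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
n, d, d_f, k = 4, 8, 32, 4
x = rng.standard_normal((n, d))

# Early stage: one shared pair of small matrices (hidden width d_f / k).
W1_small = rng.standard_normal((d, d_f // k))
W2_small = rng.standard_normal((d_f // k, d))
small_out = gelu(x @ W1_small) @ W2_small

# Growth step: tile W1 horizontally k times, and stack W2 / k vertically
# k times, so the k identical chunks sum back to the small-model output.
W1_full = np.tile(W1_small, (1, k))        # (d, d_f)
W2_full = np.tile(W2_small / k, (k, 1))    # (d_f, d)
full_out = gelu(x @ W1_full) @ W2_full

assert np.allclose(small_out, full_out)    # function-preserving growth
```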
Experiment Setups. All our models are implemented based on the official TensorFlow model garden implementation.
We first train the original BERT model, where each batch contains 256 input sequences, each consisting of at most 512 tokens. For all other progressively trained models, we require the model to finally grow to the original BERT model at the last growing stage, so their final performances are directly comparable. We control the total training steps of all models to be 1M.
The original BERT models use the AdamW (Loshchilov and Hutter, 2017) optimizer with learning rate decay from 0.0001 to 0 and 10K steps of warmup. At the start of each progressive training stage, the learning rate is reset to 0.0001 and keeps decaying following the original schedule.
Compared Method. Previous studies have rarely focused on progressive Transformer growth for BERT training; to the best of our knowledge, progressive Transformer stacking (Gong et al., 2019) is the only directly comparable method. We apply their method to the official BERT model with the same training setting, learning rate schedule, and hardware as our method, and achieve better performance than the numbers reported in the original paper. To further unleash the potential of the compared method, we adjust their original training schedule to 300K steps with 1/4 of the layers, 400K steps with 1/2 of the layers, and 300K steps with the full model. The new training schedule is much faster than the reported one (speedup improved from the reported +25% to +64.9%) and still gives better final performance than the original paper. This is the fastest stacking model we can obtain without a performance drop.
Our Method. For CompoundGrow, we apply (1) mean embedding pooling with size 2 on the length dimension; (2) parameter sharing on FFN modules on the width dimension; and (3) stacking on the depth dimension. Following the setting of the progressive stacking baseline, we also try to distribute the 1M training steps evenly. We start with the model treatments applied on all three dimensions: we train the model with 1/4 of the layers and then 1/2 of the layers for 200K steps each, and then stack it to the full number of layers, keeping the treatments on the width and length dimensions, for another 300K steps. In the last stage, we train the full model for 300K steps, just like the compared method.
Table 2 shows the speedup of different models.
We estimate the inference FLOPs of the compared models and obtain their real training time from the TensorFlow profiler.
Table 3 shows the test performance on the GLUE benchmark. Both compared methods achieve at least the same performance as the original BERT model. While CompoundGrow saves more training time, it achieves the same performance as stacking on the large model. On the base model, stacking is better in terms of average GLUE score, mainly due to its advantage on the CoLA dataset. Such an unusual gap on CoLA might be caused by its relatively small volume and the corresponding random variance (Dodge et al., 2020). On the larger and more robust MNLI dataset, the compared methods achieve almost the same score.
To better understand the compared methods, we study their speed-performance trade-off by adjusting the training schedule. Specifically, each time we remove 200K low-cost training steps from both models and compare their validation F1 scores on SQuAD v2.0. As Figure 4 shows, CompoundGrow has a clear performance advantage given comparable training budgets, which further verifies our hypothesis.
In this work, we empirically verify the compound effect in Transformer growth. Different from previous works, we propose to grow a low-cost Transformer model along more than one dimension. We show that the compound growth method achieves better performance than single-dimensional growth methods with a comparable training budget. We use controlled comparisons of available growth operators on different dimensions to provide practical guidance for operator selection. Our final model speeds up the training of BERT-base and BERT-large by 73.6% and 82.2% in wall time respectively while achieving comparable performance. Meanwhile, the study of compound growth leaves substantial space for future improvement, especially in the design of growth operators on different dimensions. From another perspective, it remains an open research direction to study the relationships between different operators and to explore effective schedulers that coordinate the stages of progressive training.
- Code will be released for reproduction and future studies.
- Language models are few-shot learners. arXiv abs/2005.14165.
- Multi-level residual networks from dynamical systems view. In International Conference on Learning Representations.
- TensorFlow official model garden.
- The best of both worlds: combining recent advances in neural machine translation. In ACL.
- Net2Net: accelerating learning via knowledge transfer.
- Funnel-Transformer: filtering out sequential redundancy for efficient language processing. arXiv preprint arXiv:2006.03236.
- BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186.
- Fine-tuning pretrained language models: weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305.
- Towards adaptive residual network training: a neural-ODE perspective. In Proceedings of the Thirty-seventh International Conference on Machine Learning (ICML 2020).
- Efficient training of BERT by progressively stacking. In ICML.
- Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983.
- Variable computation in recurrent neural networks. arXiv preprint arXiv:1611.06188.
- Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196.
- RoBERTa: a robustly optimized BERT pretraining approach. arXiv abs/1907.11692.
- Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
- Deep contextualized word representations. arXiv abs/1802.05365.
- Improving language understanding by generative pre-training.
- Know what you don't know: unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.
- Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- EfficientNet: rethinking model scaling for convolutional neural networks. arXiv abs/1905.11946.
- Attention is all you need. arXiv abs/1706.03762.
- GLUE: a multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
- Modularized morphing of neural networks. arXiv preprint arXiv:1701.03281.
- Network morphism. In International Conference on Machine Learning, pp. 564–572.
- AutoGrow: automatic layer growing in deep convolutional networks.
- A multigrid method for efficiently training video models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 153–162.