BERT-of-Theseus: Compressing BERT by Progressive Module Replacing

Abstract

In this paper, we propose a novel model compression approach to effectively compress BERT by progressive module replacing. Our approach first divides the original BERT into several modules and builds their compact substitutes. Then, we randomly replace the original modules with their substitutes to train the compact modules to mimic the behavior of the original modules. We progressively increase the probability of replacement throughout training. In this way, our approach brings a deeper level of interaction between the original and compact models and smooths the training process. Compared to previous knowledge distillation approaches for BERT compression, our approach leverages only one loss function and one hyper-parameter, liberating human effort from hyper-parameter tuning. Our approach outperforms existing knowledge distillation approaches on the GLUE benchmark, showing a new perspective of model compression.2

1 Introduction

With the prevalence of deep learning, many huge neural models have been proposed and achieve state-of-the-art performance in various fields [12, 38]. Specifically, in Natural Language Processing (NLP), pretraining and fine-tuning have become the new norm for most tasks. Transformer-based pretrained models [4, 21, 42, 31, 6] have dominated the fields of both Natural Language Understanding (NLU) and Natural Language Generation (NLG). These models benefit from their “overparameterized” nature [24] and contain millions or even billions of parameters, making them computationally expensive and inefficient in terms of both memory consumption and latency. This drawback enormously hinders the application of these models in production.

To resolve this problem, many techniques have been proposed to compress a neural network. Generally, these techniques can be categorized into Quantization [10], Weights Pruning [11] and Knowledge Distillation (KD) [14]. Among them, KD has received much attention for compressing pretrained language models. KD exploits a large teacher model to “teach” a compact student model to mimic the teacher’s behavior. In this way, the knowledge embedded in the teacher model can be transferred into the smaller model. However, the retained performance of the student model relies on a well-designed distillation loss function that forces the student model to behave like the teacher. Recent studies on KD [33, 15] even leverage more sophisticated model-specific distillation loss functions for better performance.

Different from previous KD studies which explicitly exploit a distillation loss to minimize the distance between the teacher model and the student model, we propose a new genre of model compression. Inspired by the famous thought experiment “Ship of Theseus3” in Philosophy, where all components of a ship are gradually replaced by new ones until no original component exists, we propose Theseus Compression for BERT (BERT-of-Theseus), which progressively substitutes modules of BERT with modules of fewer parameters. We call the original model and compressed model predecessor and successor, in correspondence to the concepts of teacher and student in KD, respectively. As shown in Figure 1, we first specify a substitute (successor module) for each predecessor module (i.e., modules in the predecessor model). Then, we randomly replace each predecessor module with its corresponding successor module by a probability and make them work together in the training phase. After convergence, we combine all successor modules to be the successor model for inference. In this way, the large predecessor model can be compressed into the compact successor model.

Theseus Compression shares a similar idea with KD, which encourages the compressed model to behave like the original, but holds many merits. First, we only use the task-specific loss function in the compression process, whereas KD-based methods use the task-specific loss together with one or multiple distillation losses as the optimization objective. The use of only one loss function throughout the whole compression process allows us to unify the different phases and keep the compression fully end-to-end. Also, selecting various loss functions and balancing the weights of each loss for different tasks and datasets are always laborious [33, 28]. Second, different from recent work [15], Theseus Compression does not use Transformer-specific features for compression and thus has the potential to compress a wide spectrum of models. Third, instead of using the original model only for inference as in KD, our approach allows the predecessor model to work in association with the compressed successor model, enabling a deeper gradient-level interaction and a smoother training process. Moreover, the different module permutations mixing both predecessor and successor modules add extra regularization, similar to Dropout [32]. With a Curriculum Learning [1] driven replacement scheduler, our approach achieves strong performance when compressing BERT [4], a large pretrained Transformer model.

To summarize, our contribution is two-fold: (1) We propose a novel approach, Theseus Compression, revealing a new pathway to model compression, with only one loss function and one hyper-parameter. (2) Our compressed BERT model runs approximately twice as fast while retaining more than 98% of the original model’s performance, outperforming other KD-based compression baselines.

(a) Compression Training
(b) Successor Fine-tuning and Inference
Figure 1: The workflow of BERT-of-Theseus. In this example, we compress a 6-layer predecessor P into a 3-layer successor S; each predecessor module prd_i contains two layers and each successor module scc_i contains one layer. (a) During module replacing training, each predecessor module is replaced with its corresponding successor module with probability p. (b) During successor fine-tuning and inference, all successor modules are combined for computation.

2 Related Work

Model Compression. Model compression aims to reduce the size and computational cost of a large model while retaining as much performance as possible. Conventional explanations [3, 43] claim that a large number of weights is necessary for training a deep neural network but a high degree of redundancy exists after training. Recent work [8] proposes The Lottery Ticket Hypothesis, claiming that dense, randomly-initialized, feed-forward networks contain subnetworks that can be identified and trained to reach test accuracy comparable to the original network. Quantization [10] reduces the number of bits used to represent a number in a model. Weights Pruning [11, 13] conducts a binary classification to decide which weights to trim from the model. Knowledge Distillation (KD) [14] aims to train a compact model that behaves like the original one. FitNets [27] demonstrates that “hints” learned by the large model can benefit the distillation process. Born-Again Neural Network [9] reveals that ensembling multiple identically parameterized students can outperform a teacher model. LIT [17] introduces block-wise intermediate representation training. Liu et al. [20] distilled knowledge from ensemble models to improve the performance of a single model on NLU tasks. Tan et al. [35] exploited KD for multilingual machine translation. Different from KD-based methods, our proposed Theseus Compression is the first approach to mix the original model and the compact model for training. Also, no additional loss is used throughout the whole compression procedure, which eliminates the tricky hyper-parameter tuning for various losses.

Faster BERT. Very recently, many attempts have been made to speed up large pretrained language models (e.g., BERT [4]). Michel et al. [22] reduced the parameters of a BERT model by pruning unnecessary heads in the Transformer. Shen et al. [29] quantized BERT to 2-bit using Hessian information. Also, substantial modifications have been made to the Transformer architecture. Fan et al. [7] exploited a structure dropping mechanism to train a BERT-like model that is resilient to pruning. ALBERT [18] leverages matrix decomposition and parameter sharing. However, these models cannot exploit ready-made model weights and require full retraining. Tang et al. [36] used a BiLSTM architecture to extract task-specific knowledge from BERT. DistilBERT [28] applies a naive Knowledge Distillation on the same corpus used to pretrain BERT. Patient Knowledge Distillation (PKD) [33] designs multiple distillation losses between the module hidden states of the teacher and student models. Pretrained Distillation [37] first pretrains the student model with a self-supervised masked LM objective on a large corpus, then performs a standard KD on supervised tasks. TinyBERT [15] conducts the Knowledge Distillation twice with data augmentation. MobileBERT [34] devises a more computationally efficient architecture and applies knowledge distillation with a bottom-to-top layer training procedure.

3 BERT-of-Theseus

In this section, we introduce module replacing, the technique proposed for BERT-of-Theseus. Further, we introduce a Curriculum Learning driven scheduler to obtain better performance. The workflow is shown in Figure 1.

3.1 Module Replacing

The basic idea of Theseus Compression is very similar to KD: we want the successor model to act like the predecessor model. KD explicitly defines a loss to measure the similarity between the teacher and the student. However, the performance greatly relies on the design of the loss function [14, 33, 15], which also needs to be combined with a task-specific loss [33, 17]. Different from KD, Theseus Compression only requires one task-specific loss function (e.g., Cross Entropy), which closely resembles a fine-tuning procedure. Inspired by Dropout [32], we propose module replacing, a novel technique for model compression. We call the original model and the target model predecessor and successor, respectively. First, we specify a successor module for each module in the predecessor. For example, in the context of BERT compression, we let one Transformer layer be the successor module for two Transformer layers. Consider a predecessor model P with n modules and a successor model S with n predefined modules. Let prd_i and scc_i denote the i-th predecessor module and its corresponding substitute, respectively, and let y_i denote the output vector of the i-th module. Thus, the forward operation can be described in the form of:

y_{i+1} = prd_i(y_i)    (1)

During compression, we apply module replacing. First, for the (i+1)-th module, r_{i+1} is an independent Bernoulli random variable which takes value 1 with probability p and value 0 with probability (1 − p):

r_{i+1} ~ Bernoulli(p)    (2)

Then, the output of the (i+1)-th module is calculated as:

y_{i+1} = r_{i+1} ⊙ scc_i(y_i) + (1 − r_{i+1}) ⊙ prd_i(y_i)    (3)

where ⊙ denotes element-wise multiplication and r_{i+1} ∈ {0, 1}. In this way, the predecessor modules and successor modules work together during training. Since the module permutation of the hybrid model is random, it adds extra noise as a regularization for the training of the successor, similar to Dropout [32].

During training, similar to a fine-tuning process, we optimize a regular task-specific loss, e.g., Cross Entropy:

L = − Σ_j Σ_{c ∈ C} 1[y_j = c] · log P(y_j = c | x_j)    (4)

where x_j is the j-th training sample and y_j is its corresponding ground-truth label; c and C denote a class label and the set of class labels, respectively. For back-propagation, the weights of all predecessor modules are frozen and only the weights of the successor modules are updated. Both the embedding layer and the output layer of the predecessor model are weight-frozen and directly adopted by the successor model in this training phase. In this way, the gradient can be calculated across both predecessor and successor modules, allowing interaction on a deeper level.
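To make the procedure above concrete, here is a minimal PyTorch-style sketch of module replacing (Equations 2–3), not the released implementation: the class name `TheseusEncoder` and the assumption that layers have already been grouped into predecessor/successor modules are illustrative, and BERT-specific details such as attention masks are omitted.

```python
import torch
import torch.nn as nn


class TheseusEncoder(nn.Module):
    """Sketch of module replacing: predecessor_modules[i] wraps the frozen layers
    prd_i, successor_modules[i] wraps the trainable substitute scc_i."""

    def __init__(self, predecessor_modules, successor_modules, replace_rate=0.5):
        super().__init__()
        assert len(predecessor_modules) == len(successor_modules)
        self.prd = nn.ModuleList(predecessor_modules)
        self.scc = nn.ModuleList(successor_modules)
        self.replace_rate = replace_rate
        # Only successor weights receive gradients; predecessor modules are frozen.
        for param in self.prd.parameters():
            param.requires_grad = False

    def forward(self, hidden_states):
        for prd_i, scc_i in zip(self.prd, self.scc):
            if self.training:
                # r_{i+1} ~ Bernoulli(p): since r is 0 or 1, Eq. 3 reduces to picking
                # either the successor or the predecessor module for this batch.
                use_successor = torch.bernoulli(torch.tensor(self.replace_rate)).bool()
                hidden_states = scc_i(hidden_states) if use_successor else prd_i(hidden_states)
            else:
                # After compression, only successor modules are used for inference.
                hidden_states = scc_i(hidden_states)
        return hidden_states
```

Because the sampling happens once per forward pass, each replacement variable is drawn once per training batch per module, as described in Section 4.3.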

3.2 Successor Fine-tuning and Inference

Since all successor modules have not yet been combined for training together, we further carry out a post-replacement fine-tuning phase. After the replacing compression converges, we collect all successor modules and combine them into the successor model S:

S = {scc_1, scc_2, ..., scc_n},  i.e.,  y_{i+1} = scc_i(y_i)    (5)

Since each scc_i is smaller than prd_i in size, the predecessor model P is in essence compressed into a smaller model S. Then, we fine-tune the successor model by optimizing the same loss as in Equation 4. The whole procedure, including module replacing and successor fine-tuning, is illustrated in Figure 2(a). Finally, we use the fine-tuned successor for inference, as in Equation 5.
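Continuing the sketch above, assembling the compact successor for fine-tuning and inference might look as follows; again, this is illustrative rather than the released code, and `embeddings` and `classifier` stand in for the predecessor’s reused embedding and output layers.

```python
import copy
import torch.nn as nn


def build_successor(theseus_encoder, embeddings, classifier):
    """Assemble the compact successor model (Eq. 5) from the trained successor
    modules; attention masks and other BERT plumbing are omitted in this sketch."""
    successor_body = nn.Sequential(*(copy.deepcopy(m) for m in theseus_encoder.scc))
    successor = nn.Sequential(embeddings, successor_body, classifier)
    # Unfreeze all parameters for the post-replacement fine-tuning phase.
    for param in successor.parameters():
        param.requires_grad = True
    return successor
```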

3.3 Curriculum Replacement

 

Figure 2: Replacing rate curves for (a) a constant replacing rate and (b) a replacement scheduler. We use different shades of gray to mark the two phases of Theseus Compression: (1) module replacing; (2) successor fine-tuning.

Although setting a constant replacement rate can meet the need for compressing a model, we further introduce a Curriculum Learning [1] driven replacement scheduler, which helps progressively substitute the modules in a model. Similar to Curriculum Dropout [23], we devise a replacement scheduler to dynamically tune the replacement rate p_d.

Here, we leverage a simple linear scheduler θ(t) to output the dynamic replacement rate p_d at step t:

p_d = min(1, θ(t)) = min(1, k·t + b)    (6)

where k > 0 is the coefficient and b is the basic replacement rate. The replacing rate curve with a replacement scheduler is illustrated in Figure 2(b).
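A minimal sketch of this linear scheduler follows; the particular values (b = 0.3, ramping to 1 within 10,000 steps) are only one point of the grid described in Section 4.3, and the function name is illustrative.

```python
def replacement_rate(step, k, b):
    """Linear curriculum scheduler from Eq. 6: p_d = min(1, k * t + b)."""
    return min(1.0, k * step + b)


# Example: with b = 0.3 and k chosen so that p_d reaches 1 after 10,000 steps,
# k = (1 - 0.3) / 10_000 = 7e-5.
for t in (0, 5_000, 10_000, 20_000):
    print(t, replacement_rate(t, k=7e-5, b=0.3))  # 0.3, 0.65, 1.0, 1.0
```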

In this way, we unify the two previously separated training stages and encourage an end-to-end, easy-to-hard learning process. First, with more predecessor modules present, the model is more likely to predict correctly and thus has a relatively small cross-entropy loss, which helps smooth the learning process. Then, later in the compression, more successor modules are present together, encouraging the model to gradually learn to predict with less guidance from the predecessor and to transition steadily to the successor fine-tuning stage.

Second, at the beginning of the compression, when p_d < 1, considering the average learning rate over all successor modules, the expected number of replaced modules is n·p_d and the expected average learning rate is:

lr' = (n·p_d / n) · lr = (k·t + b) · lr    (7)

where lr is the constant learning rate set for the compression and lr' is the equivalent learning rate considering all successor modules. Thus, when applying a replacement scheduler, a warm-up mechanism [25] is essentially adopted at the same time, which helps the training of a Transformer.
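To make the warm-up effect concrete, the following sketch (reusing the illustrative scheduler values from above, with lr = 2e-5 taken from the grid in Section 4.3) shows that the equivalent learning rate lr' = p_d · lr ramps up linearly before settling at the constant learning rate.

```python
def equivalent_lr(step, lr, k, b):
    """Expected learning rate over all successor modules (Eq. 7): lr' = p_d * lr."""
    p_d = min(1.0, k * step + b)
    return p_d * lr


# With lr = 2e-5, b = 0.3 and k = 7e-5 (p_d reaches 1 at step 10,000),
# the equivalent learning rate warms up from 6e-6 to 2e-5.
for t in (0, 5_000, 10_000):
    print(t, equivalent_lr(t, lr=2e-5, k=7e-5, b=0.3))  # 6e-06, 1.3e-05, 2e-05
```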

4 Experiments

In this section, we describe our experiments applying Theseus Compression to BERT [4]. We compare BERT-of-Theseus with other compression methods and conduct further experiments to analyze the results.

Method # Layer # Param. Loss Function External Data Used? Model-Agnostic?
BERT-base [4] 12 110M CE-MLM - -
Fine-tuning 6 66M CE-task ✗ ✓
Vanilla KD [14] 6 66M CE-KD + CE-task ✗ ✓
BERT-PKD [33] 6 66M CE-KD + PT-KD + CE-task ✗ ✓
DistilBERT [28] 6 66M CE-KD + Cos-KD + CE-MLM ✓ (unlabeled) ✓
PD-BERT [37] 6 66M CE-MLM + CE-KD + CE-task ✓ (unlabeled) ✓
TinyBERT [15] 4 15M MSE-attn + MSE-hidn + MSE-embd + CE-KD ✓ (unlabeled + labeled) ✗ (Transformer only)
MobileBERT [34] 24 25M FMT + AT + PKT + CE-KD + CE-MLM ✓ (unlabeled) ✗ (BERT only)
BERT-of-Theseus (Ours) 6 66M CE-task ✗ ✓
Table 1: Comparison of different BERT compression approaches. “CE” and “MSE” stand for Cross Entropy and Mean Square Error, respectively; the “KD” suffix indicates a Knowledge Distillation loss. “CE-task” and “CE-MLM” indicate Cross Entropy calculated on downstream tasks and on Masked LM pretraining, respectively. Other loss functions are described in their corresponding papers.

4.1 Datasets

We evaluate our proposed approach on GLUE benchmark [39]. Specifically, we test on Microsoft Research Paraphrase Matching (MRPC) [5], Quora Question Pairs (QQP)4 and STS-B [2] for Paraphrase Similarity Matching; Stanford Sentiment Treebank (SST-2) [30] for Sentiment Classification; Multi-Genre Natural Language Inference Matched (MNLI-m), Multi-Genre Natural Language Inference Mismatched (MNLI-mm) [41], Question Natural Language Inference (QNLI) [26] and Recognizing Textual Entailment (RTE) [39] for the Natural Language Inference (NLI) task; The Corpus of Linguistic Acceptability (CoLA) [40] for Linguistic Acceptability. Note that we exclude WNLI [19] from GLUE since the original BERT [4] paper excludes this task as well for convergence problems.

Accuracy is used as the metric for SST-2, MNLI-m, MNLI-mm, QNLI and RTE. F1 and accuracy are used for MRPC and QQP. Pearson and Spearman correlations are used for STS-B, and Matthew’s correlation is used for CoLA. The results reported for the GLUE test set follow the same format as the official leaderboard. For comparison with [28], on the GLUE dev set, the MNLI result is the average over MNLI-m and MNLI-mm; the MRPC and QQP results are the average of F1 and accuracy; and the STS-B result is the average of the Pearson and Spearman correlations.

4.2 Experimental Settings

We test our approach under a task-specific compression setting [33, 37] instead of a pretraining compression setting [28, 34]. That is to say, we use no external unlabeled corpus but only the training set of each task in GLUE to compress the model. We test our model under this setting because we intend to straightforwardly verify the effectiveness of our generic compression approach. The fast training process of task-specific compression (no more than roughly 20 GPU hours for any task of GLUE; see the training times in Section 4.3) computationally enables us to conduct more analytical experiments. For comparison, DistilBERT [28] takes several hundred GPU hours to train. Plus, in real-world applications, this setting provides more flexibility when selecting from different pretrained LMs (e.g., BERT, RoBERTa [21]) for various downstream tasks, and it is easy to adopt a newly released model without a time-consuming pretraining compression.

On the other hand, we acknowledge that a general-purpose compressed BERT can better facilitate downstream applications in the community, since it requires fewer computational resources to simply fine-tune a small model than to compress a large one. Thus, we release a general-purpose compressed BERT as well.

Formally, we define the task of compression as retaining as much performance as possible when compressing the officially released BERT-base (uncased)5 to a 6-layer compact model with the same hidden size, following the settings in [28, 33, 37]. Under this setting, the compressed model has 24M parameters for the token embedding (identical to the original model) and 42M parameters for the Transformer layers, and obtains an approximately 2× speed-up for inference.
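As a quick sanity check of these parameter counts, the sketch below reproduces the rough 24M/42M split quoted above, assuming the standard BERT-base sizes (hidden size 768, feed-forward size 3072, vocabulary 30522, 512 positions) and ignoring biases and LayerNorm parameters for brevity.

```python
# Back-of-the-envelope parameter counts for BERT-base-sized models.
hidden, ffn = 768, 3072
vocab, max_pos, type_vocab = 30522, 512, 2

embedding = (vocab + max_pos + type_vocab) * hidden   # shared by both models
attention = 4 * hidden * hidden                       # Q, K, V and output projections
feed_forward = 2 * hidden * ffn                       # two FFN projections
per_layer = attention + feed_forward                  # ~7.1M per Transformer layer

print(f"embedding: ~{embedding / 1e6:.0f}M")          # ~24M
print(f"6 layers:  ~{6 * per_layer / 1e6:.0f}M")      # ~42M
print(f"12 layers: ~{12 * per_layer / 1e6:.0f}M")     # ~85M
```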

4.3 Training Details

We fine-tune BERT-base as the predecessor model for each task with a standard fine-tuning recipe (a fixed batch size, learning rate, and number of epochs). As a result, we obtain predecessor models whose performance is comparable to that reported in previous studies [28, 33, 15].

Afterward, for training successor models, following [28, 33], we use the first 6 layers of BERT-base to initialize the successor model, since the over-parameterized nature of the Transformer [38] could prevent the model from converging when training on small datasets. During module replacing, we fix the batch size at 32 for all evaluated tasks to reduce the search space. All replacement variables are sampled only once per training batch. The maximum sequence length is set to 256 on QNLI and 128 for the other tasks. We perform grid search over the learning rate in {1e-5, 2e-5}, the basic replacing rate b in {0.1, 0.3}, and the scheduler coefficient k set so that the dynamic replacing rate increases to 1 within the first {1000, 5000, 10000, 30000} training steps. We apply an early stopping mechanism and select the model with the best performance on the dev set. We conduct our experiments on a single Nvidia V100 16GB GPU. The peak memory usage is identical to fine-tuning a BERT-base, since at most 12 layers are training at the same time. The training time for each task varies depending on the size of the training set. For example, it takes 20 hours to train on MNLI but less than 30 minutes on MRPC.
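The search space just described can be enumerated as follows; this is only a rough illustration (the helper is not from the released code), with the coefficient k derived from the ramp-up length via Equation 6.

```python
from itertools import product

# Hyper-parameter grid described above: k = (1 - b) / ramp_up_steps (Eq. 6).
learning_rates = [1e-5, 2e-5]
basic_rates = [0.1, 0.3]                      # b
ramp_up_steps = [1000, 5000, 10000, 30000]

configs = [
    {"lr": lr, "batch_size": 32, "b": b, "k": (1.0 - b) / steps}
    for lr, b, steps in product(learning_rates, basic_rates, ramp_up_steps)
]
print(len(configs))  # 16 configurations per task; keep the best dev-set checkpoint
```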

Method CoLA MNLI MRPC QNLI QQP RTE SST-2 STS-B Macro
(8.5K) (393K) (3.7K) (105K) (364K) (2.5K) (67K) (5.7K) Score
Teacher / Predecessor Model
BERT-base [4] 54.3 83.5 89.5 91.2 89.8 71.1 91.5 88.9 82.5
Models Compressed with External Data
DistilBERT [28] 43.6 79.0 87.5 85.3 84.9 59.9 90.7 81.2 76.5
PD-BERT [37] - 83.0 87.2 89.0 89.1 66.7 91.1 - -
Models Compressed without External Data
Fine-tuning 43.4 80.1 86.0 86.9 87.8 62.1 89.6 81.9 77.2
Vanilla KD [14] 45.1 80.1 86.2 88.0 88.1 64.9 90.5 84.9 78.5
BERT-PKD [33] 45.5 81.3 85.7 88.4 88.4 66.5 91.3 86.2 79.2
BERT-of-Theseus 51.1 82.3 89.0 89.5 89.6 68.2 91.5 88.7 81.2
Table 2: Experimental results on the dev set of GLUE. The numbers under each dataset indicate the number of training samples.
Method CoLA MNLI-m/mm MRPC QNLI QQP RTE SST-2 STS-B Macro
(8.5K) (393K) (3.7K) (105K) (364K) (2.5K) (67K) (5.7K) Score
Teacher / Predecessor Model
BERT-base [4] 52.1 84.6 / 83.4 88.9 / 84.8 90.5 71.2 / 89.2 66.4 93.5 87.1 / 85.8 80.0
Model Compressed with External Data
PD-BERT [37] - 82.8 / 82.2 86.8 / 81.7 88.9 70.4 / 88.9 65.3 91.8 - -
Models Compressed without External Data
Fine-tuning 41.5 80.4 / 79.7 85.9 / 80.2 86.7 69.2 / 88.2 63.6 90.7 82.1 / 80.0 75.6
Vanilla KD [14] 42.9 80.2 / 79.8 86.2 / 80.6 88.3 70.1 / 88.8 64.7 91.5 82.1 / 80.3 76.4
BERT-PKD [33] 43.5 81.5 / 81.0 85.0 / 79.9 89.0 70.7 / 88.9 65.5 92.0 83.4 / 81.6 77.0
BERT-of-Theseus 47.8 82.4 / 82.1 87.6 / 83.2 89.6 71.6 / 89.3 66.2 92.2 85.6 / 84.1 78.6
Table 3: Experimental results on the test set from GLUE server. The numbers under each dataset indicate the number of training samples.

4.4 Baselines

As shown in Table 1, we compare the layer number, parameter number, loss function, external data usage and model agnosticism of our proposed approach against existing methods. We set up a baseline of vanilla Knowledge Distillation [14] as in [33]. Additionally, we directly fine-tune a 6-layer BERT model (initialized in the same way as our successor) on GLUE tasks to obtain a natural fine-tuning baseline. Under the setting of compressing 12-layer BERT-base to a 6-layer compact model, we choose BERT-PKD [33], PD-BERT [37], and DistilBERT [28] as strong baselines. Note that DistilBERT [28] is not directly comparable here since it uses a pretraining compression setting. Both PD-BERT and DistilBERT use an external unlabeled corpus. We do not include TinyBERT [15] since it has a different size setting, conducts distillation twice, and leverages extra augmented data for GLUE tasks. We also exclude MobileBERT [34], due to its redesigned Transformer block and different model size. Besides, in these two studies, the loss functions are not architecture-agnostic, which limits their application to other models.

Method MNLI MRPC QNLI QQP RTE SST-2 STS-B
BERT-base [4] 83.5 89.5 91.2 89.8 71.1 91.5 88.9
DistilBERT [28] 79.0 87.5 85.3 84.9 59.9 90.7 81.2
BERT-of-Theseus (MNLI) 82.1 87.5 88.8 88.8 70.1 91.8 87.8
Table 4: Experimental results of our general-purpose model on GLUE-dev.

4.5 Experimental Results

We report the experimental results on the dev set of GLUE in Table 2, and we submit our predictions to the GLUE test server and obtain the results from the official leaderboard, as shown in Table 3. Note that DistilBERT does not report results on the test set. The BERT-base performance reported on the GLUE dev set is from the predecessor fine-tuned by us. The results of BERT-PKD on the dev set are reproduced by us using the official implementation. The original BERT-PKD paper does not report CoLA and STS-B results on the test set, so we reproduce these two results. The Fine-tuning and Vanilla KD baselines are both implemented by us. All other results are from the original papers. The macro scores here are calculated in the same way as the official leaderboard but are not directly comparable with the GLUE leaderboard since we exclude WNLI from the calculation.

Overall, our BERT-of-Theseus retains 98.4% and 98.3% of the BERT-base macro score on the GLUE dev set and test set, respectively. On every task of GLUE, our model dramatically outperforms the fine-tuning baseline, indicating that with the same loss function, our proposed approach can effectively transfer knowledge from the predecessor to the successor. Also, our model clearly outperforms vanilla KD [14] and Patient Knowledge Distillation (PKD) [33], demonstrating its advantage over KD-based compression approaches. On MNLI, our model performs better than BERT-PKD but slightly worse than PD-BERT [37]; however, PD-BERT exploits an additional corpus which provides many more samples for knowledge transfer. Also, we would like to highlight that on RTE our model achieves nearly identical performance to BERT-base, and on QQP our model even outperforms BERT-base. One possible explanation is that a moderate model size may help the model generalize and prevent overfitting on downstream tasks. Notably, on both large datasets with more than 350K samples (e.g., MNLI and QQP) and small datasets with fewer than 4K samples (e.g., MRPC and RTE), our model consistently achieves good performance, verifying the robustness of our approach.

4.6 General-purpose Model

Although our approach achieves good performance under a task-specific setting, it requires more memory to fine-tune a full-size predecessor than to fine-tune a compact BERT (e.g., DistilBERT [28]). Liu et al. [21] found that a model fine-tuned on MNLI can successfully transfer to other sentence classification tasks. Thus, we release the model compressed on MNLI as a general-purpose compact BERT to facilitate downstream applications. After compression, we fine-tune the successor model on other sentence classification tasks and compare the results with DistilBERT [28] in Table 4. Our general-purpose model achieves identical performance to DistilBERT on MRPC and remarkably outperforms it on the other sentence-level tasks.

Replacement QNLI MNLI QQP
Predecessor 91.87 84.54 89.48
scc_1 88.50 (-3.37) 81.89 (-2.65) 88.58 (-0.90)
scc_2 90.54 (-1.33) 83.33 (-1.21) 88.43 (-1.05)
scc_3 90.76 (-1.11) 83.27 (-1.27) 88.86 (-0.62)
scc_4 90.46 (-1.41) 83.34 (-1.20) 88.86 (-0.62)
scc_5 90.74 (-1.13) 84.16 (-0.38) 89.09 (-0.39)
scc_6 90.57 (-1.30) 84.09 (-0.45) 89.06 (-0.42)

Table 5: Impact of the replacement of different modules on GLUE-dev. scc_i indicates replacing the i-th module of the predecessor with its successor module.

Figure 3: Average performance drop when replacing the i-th predecessor module with its successor module, averaged over QNLI, MNLI and QQP (dev set).

5 Analysis

In this section, we conduct extensive experiments to analyze our BERT-of-Theseus.

5.1 Impact of Module Replacement

As pointed out in previous work [7], different layers of a Transformer play imbalanced roles at inference time. To explore the effect of replacing different modules, we iteratively use one compressed successor module (trained with a constant replacing rate, without successor fine-tuning) to replace its corresponding predecessor module on QNLI, MNLI and QQP, as shown in Table 5. We illustrate the average performance drop over the three tasks in Figure 3. Surprisingly, different from a similar study on the importance of different Transformer layers in [7], which essentially yields a U-curve, our results show that the replacement of the last two modules has only a trivial influence on the overall performance while the replacement of the first module significantly harms the performance. A possible explanation is that low-level linguistic features are mainly extracted by the first few layers, so reduced representation capability there becomes a bottleneck for the following layers. Conversely, when high-quality low-level features are available, they can help the following layers, and the reduced module size then has only a limited influence on the final results.

5.2 Impact of Replacing Rate

We attempt to adopt different replacing rates on GLUE tasks. First, we fix the batch size to 32 and the learning rate, and conduct compression on each task. On the other hand, as analyzed in Section 3.3, the equivalent learning rate is affected by the replacing rate. To further eliminate the influence of the learning rate, we also fix the equivalent learning rate lr' and adjust the actual learning rate for each replacing rate p by lr = lr' / p, following Equation 7.
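A minimal sketch of this adjustment follows; the particular equivalent learning rate and the helper name are illustrative.

```python
def adjusted_learning_rate(equivalent_lr, replacing_rate):
    """Scale the actual learning rate so that the equivalent learning rate
    lr' = p * lr (Eq. 7) stays constant across constant replacing rates p."""
    return equivalent_lr / replacing_rate


# A smaller constant replacing rate requires a proportionally larger
# actual learning rate to keep lr' fixed.
for p in (0.25, 0.5, 1.0):
    print(p, adjusted_learning_rate(1e-5, p))  # 4e-05, 2e-05, 1e-05
```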

We illustrate the results with different replacing rates on two representative tasks (MRPC and RTE) in Figure 4. The small gap between the two curves in both figures indicates that the effect of different replacing rates on the equivalent learning rate is not the main factor behind the performance differences. Generally speaking, BERT-of-Theseus is not very sensitive to the replacing rate: a moderate replacing rate (roughly between 0.5 and 0.9) always leads to satisfying performance on all GLUE tasks. However, a significant performance drop is observed on all tasks when the replacing rate is too small. On the other hand, the best replacing rate differs across tasks.

(a) MRPC
(b) RTE
Figure 4: Performance with different replacing rates on MRPC and RTE. “LR” and “ELR” denote that the learning rate and the equivalent learning rate are fixed, respectively.
Strategy CoLA MNLI MRPC QNLI QQP RTE SST-2 STS-B
Constant Replacing Rate 44.4 81.9 87.1 88.5 88.6 66.4 90.6 88.4
Anti-curriculum 42.8 (-1.6) 79.8 (-2.1) 85.6 (-1.5) 87.8 (-0.7) 87.6 (-1.0) 62.4 (-4.0) 88.8 (-1.8) 85.4 (-3.0)
Curriculum Scheduler 51.1 (+6.7) 82.3 (+0.4) 89.0 (+1.9) 89.5 (+1.0) 89.6 (+2.0) 68.2 (+1.8) 91.5 (+0.9) 88.7 (+0.3)
Table 6: Comparison of models compressed with a constant replacing rate, a curriculum replacement scheduler and its corresponding anti-curriculum scheduler on GLUE-dev.

5.3 Impact of Replacement Scheduler

To study the impact of our curriculum replacement strategy, we compare the results of BERT-of-Theseus compressed with a constant replacing rate and with a replacement scheduler. The constant replacing rate for the baseline is searched over {0.5, 0.7, 0.9}. Additionally, we implement an “anti-curriculum” baseline, similar to the one in [23]. For each task, we adopt the same coefficient and basic replacing rate to calculate the dynamic replacing rate p_d as in Equation 6 for both curriculum replacement and anti-curriculum; however, for the anti-curriculum baseline we use 1 − p_d as the dynamic replacing rate. Thus, we can determine whether the improvement of curriculum replacement is simply due to an inconstant replacing rate or to the easy-to-hard curriculum design.

As shown in Table 6, our model compressed with curriculum scheduler consistently outperforms a model compressed with a constant replacing rate. On the contrary, a substantial performance drop is observed on the model compressed with an anti-curriculum scheduler, which further verifies the effectiveness of the curriculum replacement strategy.

6 Discussion and Conclusion

In this paper, we propose Theseus Compression, a novel model compression approach. We use this approach to compress BERT to a compact model, which outperforms other models compressed by Knowledge Distillation, with only one hyper-parameter, one loss function and no external data. Our work highlights a new genre of model compression and reveals a new path towards model compression.

A known limitation of our approach is that a successor module must have the same input and output sizes as the predecessor module it replaces. First, even with this restriction, we can still perform depth reduction (i.e., reduce the number of layers). As analyzed in [28], for a fixed parameter budget, the hidden dimension has a smaller impact on computational efficiency than the depth. Second, many in-place substitutes have been developed (e.g., the ShuffleNet unit [44] for a ResBlock [12], the Reformer layer [16] for a Transformer layer [38]), which can be directly adopted as successor modules. Third, it is possible to use a feed-forward neural network to map features between hidden spaces of different sizes [15].

Although our approach achieves good performance compressing BERT, it would be interesting to explore its possible applications to other neural models. As summarized in Table 1, our approach does not rely on any model-specific features to compress BERT. Therefore, Theseus Compression can potentially be applied to other large models (e.g., ResNet [12] in Computer Vision). In future work, we would like to apply Theseus Compression to Convolutional Neural Networks and Graph Neural Networks.

Footnotes

  2. Code and pretrained model are available at https://github.com/JetRunner/BERT-of-Theseus
  3. https://en.wikipedia.org/wiki/Ship_of_Theseus
  4. https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs
  5. https://github.com/google-research/bert

References

  1. Y. Bengio, J. Louradour, R. Collobert and J. Weston (2009) Curriculum learning. In ICML, Cited by: §1, §3.3.
  2. A. Conneau and D. Kiela (2018) SentEval: an evaluation toolkit for universal sentence representations. In LREC, Cited by: §4.1.
  3. M. Denil, B. Shakibi, L. Dinh, M. Ranzato and N. de Freitas (2013) Predicting parameters in deep learning. In NIPS, Cited by: §2.
  4. J. Devlin, M. Chang, K. Lee and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, Cited by: §1, §1, §2, §4.1, Table 1, Table 2, Table 3, Table 4, §4.
  5. W. B. Dolan and C. Brockett (2005) Automatically constructing a corpus of sentential paraphrases. In IWP@IJCNLP, Cited by: §4.1.
  6. L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, M. Zhou and H. Hon (2019) Unified language model pre-training for natural language understanding and generation. In NeurIPS, Cited by: §1.
  7. A. Fan, E. Grave and A. Joulin (2020) Reducing transformer depth on demand with structured dropout. In ICLR, Cited by: §2, §5.1.
  8. J. Frankle and M. Carbin (2019) The lottery ticket hypothesis: finding sparse, trainable neural networks. In ICLR, Cited by: §2.
  9. T. Furlanello, Z. C. Lipton, M. Tschannen, L. Itti and A. Anandkumar (2018) Born-again neural networks. In ICML, Proceedings of Machine Learning Research, Vol. 80, pp. 1602–1611. Cited by: §2.
  10. Y. Gong, L. Liu, M. Yang and L. Bourdev (2014) Compressing deep convolutional networks using vector quantization. External Links: 1412.6115 Cited by: §1, §2.
  11. S. Han, H. Mao and W. J. Dally (2016) Deep compression: compressing deep neural network with pruning, trained quantization and huffman coding. In ICLR, Cited by: §1, §2.
  12. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §6, §6.
  13. Y. He, X. Zhang and J. Sun (2017) Channel pruning for accelerating very deep neural networks. In ICCV, Cited by: §2.
  14. G. Hinton, O. Vinyals and J. Dean (2015) Distilling the knowledge in a neural network. External Links: 1503.02531 Cited by: §1, §2, §3.1, §4.4, §4.5, Table 1, Table 2, Table 3.
  15. X. Jiao, Y. Yin, L. Shang, X. Jiang, X. Chen, L. Li, F. Wang and Q. Liu (2019) TinyBERT: distilling bert for natural language understanding. External Links: 1909.10351 Cited by: §1, §1, §2, §3.1, §4.3, §4.4, Table 1, §6.
  16. N. Kitaev, L. Kaiser and A. Levskaya (2020) Reformer: the efficient transformer. In ICLR, Cited by: §6.
  17. A. Koratana, D. Kang, P. Bailis and M. Zaharia (2019) LIT: learned intermediate representation training for model compression. In ICML, Cited by: §2, §3.1.
  18. Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma and R. Soricut (2020) ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In ICLR, Cited by: §2.
  19. H. J. Levesque (2011) The winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, Cited by: §4.1.
  20. X. Liu, P. He, W. Chen and J. Gao (2019) Improving multi-task deep neural networks via knowledge distillation for natural language understanding. External Links: 1904.09482 Cited by: §2.
  21. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer and V. Stoyanov (2019) RoBERTa: a robustly optimized bert pretraining approach. External Links: 1907.11692 Cited by: §1, §4.2, §4.6.
  22. P. Michel, O. Levy and G. Neubig (2019) Are sixteen heads really better than one?. In NeurIPS, Cited by: §2.
  23. P. Morerio, J. Cavazza, R. Volpi, R. Vidal and V. Murino (2017) Curriculum dropout. In ICCV, pp. 3564–3572. Cited by: §3.3, §5.3.
  24. P. Nakkiran, G. Kaplun, Y. Bansal, T. Yang, B. Barak and I. Sutskever (2019) Deep double descent: where bigger models and more data hurt. External Links: 1912.02292 Cited by: §1.
  25. M. Popel and O. Bojar (2018-04) Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics 110 (1), pp. 43–70. External Links: ISSN 1804-0462, Document Cited by: §3.3.
  26. P. Rajpurkar, J. Zhang, K. Lopyrev and P. Liang (2016) SQuAD: 100, 000+ questions for machine comprehension of text. In EMNLP, pp. 2383–2392. Cited by: §4.1.
  27. A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta and Y. Bengio (2015) FitNets: hints for thin deep nets. In ICLR, Cited by: §2.
  28. V. Sanh, L. Debut, J. Chaumond and T. Wolf (2019) DistilBERT, a distilled version of bert: smaller, faster, cheaper and lighter. External Links: 1910.01108 Cited by: §1, §2, §4.1, §4.2, §4.2, §4.3, §4.3, §4.4, §4.6, Table 1, Table 2, Table 4, §6.
  29. S. Shen, Z. Dong, J. Ye, L. Ma, Z. Yao, A. Gholami, M. W. Mahoney and K. Keutzer (2019) Q-bert: hessian based ultra low precision quantization of bert. External Links: 1909.05840 Cited by: §2.
  30. R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng and C. Potts (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, Cited by: §4.1.
  31. K. Song, X. Tan, T. Qin, J. Lu and T. Liu (2019) MASS: masked sequence to sequence pre-training for language generation. In ICML, Cited by: §1.
  32. N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15 (1), pp. 1929–1958. Cited by: §1, §3.1, §3.1.
  33. S. Sun, Y. Cheng, Z. Gan and J. Liu (2019) Patient knowledge distillation for BERT model compression. In EMNLP/IJCNLP, pp. 4322–4331. Cited by: §1, §1, §2, §3.1, §4.2, §4.2, §4.3, §4.3, §4.4, §4.5, Table 1, Table 2, Table 3.
  34. Z. Sun, H. Yu, X. Song, R. Liu, Y. Yang and D. Zhou (2019) MobileBERT: task-agnostic compression of bert by progressive knowledge transfer. Cited by: §2, §4.2, §4.4, Table 1.
  35. X. Tan, Y. Ren, D. He, T. Qin, Z. Zhao and T. Liu (2019) Multilingual neural machine translation with knowledge distillation. In ICLR, Cited by: §2.
  36. R. Tang, Y. Lu, L. Liu, L. Mou, O. Vechtomova and J. Lin (2019) Distilling task-specific knowledge from bert into simple neural networks. External Links: 1903.12136 Cited by: §2.
  37. I. Turc, M. Chang, K. Lee and K. Toutanova (2019) Well-read students learn better: on the importance of pre-training compact models. External Links: 1908.08962 Cited by: §2, §4.2, §4.2, §4.4, §4.5, Table 1, Table 2, Table 3.
  38. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin (2017) Attention is all you need. In NIPS, Cited by: §1, §4.3, §6.
  39. A. Wang, A. Singh, J. Michael, F. Hill, O. Levy and S. R. Bowman (2019) GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, Cited by: §4.1.
  40. A. Warstadt, A. Singh and S. R. Bowman (2019) Neural network acceptability judgments. TACL 7, pp. 625–641. Cited by: §4.1.
  41. A. Williams, N. Nangia and S. R. Bowman (2018) A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, Cited by: §4.1.
  42. Z. Yang, Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov and Q. V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. In NeurIPS, pp. 5754–5764. Cited by: §1.
  43. S. Zhai, Y. Cheng, Z. Zhang and W. Lu (2016) Doubly convolutional neural networks. In NIPS, Cited by: §2.
  44. X. Zhang, X. Zhou, M. Lin and J. Sun (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In CVPR, Cited by: §6.