BERTScore: Evaluating Text Generation with BERT

Abstract

We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTScore is more robust to challenging examples when compared to existing metrics.

1 Introduction

Automatic evaluation of natural language generation, for example in machine translation and caption generation, requires comparing candidate sentences to annotated references. The goal is to evaluate semantic equivalence. However, commonly used methods rely on surface-form similarity only. For example, Bleu (Papineni et al., 2002), the most common machine translation metric, simply counts n-gram overlap between the candidate and the reference. While this provides a simple and general measure, it fails to account for meaning-preserving lexical and compositional diversity.

In this paper, we introduce BERTScore, a language generation evaluation metric based on pre-trained BERT contextual embeddings (Devlin et al., 2019). BERTScore computes the similarity of two sentences as a sum of cosine similarities between their tokens’ embeddings.

BERTScore addresses two common pitfalls in n-gram-based metrics (Banerjee and Lavie, 2005). First, such methods often fail to robustly match paraphrases. For example, given the reference people like foreign cars, Bleu and Meteor (Banerjee and Lavie, 2005) incorrectly give a higher score to people like visiting places abroad compared to consumers prefer imported cars. This leads to performance underestimation when semantically-correct phrases are penalized because they differ from the surface form of the reference. In contrast to string matching (e.g., in Bleu) or matching heuristics (e.g., in Meteor), we compute similarity using contextualized token embeddings, which have been shown to be effective for paraphrase detection (Devlin et al., 2019). Second, n-gram models fail to capture distant dependencies and penalize semantically-critical ordering changes (Isozaki et al., 2010). For example, given a small window of size two, Bleu will only mildly penalize swapping of cause and effect clauses (e.g. A because B instead of B because A), especially when the arguments A and B are long phrases. In contrast, contextualized embeddings are trained to effectively capture distant dependencies and ordering.

We experiment with BERTScore on machine translation and image captioning tasks using the outputs of 363 systems by correlating BERTScore and related metrics to available human judgments. Our experiments demonstrate that BERTScore correlates highly with human evaluations. In machine translation, BERTScore shows stronger system-level and segment-level correlations with human judgments than existing metrics on multiple common benchmarks and demonstrates strong model selection performance compared to Bleu. We also show that BERTScore is well-correlated with human annotators for image captioning, surpassing Spice, a popular task-specific metric (Anderson et al., 2016). Finally, we test the robustness of BERTScore on the adversarial paraphrase dataset PAWS (Zhang et al., 2019), and show that it is more robust to adversarial examples than other metrics. The code for BERTScore is available at https://github.com/Tiiiger/bert_score.

2 Problem Statement and Prior Metrics

Natural language text generation is commonly evaluated using annotated reference sentences. Given a reference sentence $x = \langle x_1, \dots, x_k \rangle$ tokenized to $k$ tokens and a candidate $\hat{x} = \langle \hat{x}_1, \dots, \hat{x}_l \rangle$ tokenized to $l$ tokens, a generation evaluation metric is a function $f(x, \hat{x}) \in \mathbb{R}$. Better metrics have a higher correlation with human judgments. Existing metrics can be broadly categorized into using n-gram matching, edit distance, embedding matching, or learned functions.

2.1 n-gram Matching Approaches

The most commonly used metrics for generation count the number of n-grams that occur in both the reference $x$ and the candidate $\hat{x}$. The higher $n$ is, the more the metric is able to capture word order, but it also becomes more restrictive and constrained to the exact form of the reference.

Formally, let $S_x^n$ and $S_{\hat{x}}^n$ be the lists of token $n$-grams in the reference and candidate sentences. The number of matched $n$-grams is $\sum_{w \in S_{\hat{x}}^n} \mathbb{I}[w \in S_x^n]$, where $\mathbb{I}[\cdot]$ is an indicator function. The exact match precision ($\mathrm{Exact\text{-}P}_n$) and recall ($\mathrm{Exact\text{-}R}_n$) scores are:

$$\mathrm{Exact\text{-}P}_n = \frac{\sum_{w \in S_{\hat{x}}^n} \mathbb{I}[w \in S_x^n]}{|S_{\hat{x}}^n|} \quad \text{and} \quad \mathrm{Exact\text{-}R}_n = \frac{\sum_{w \in S_x^n} \mathbb{I}[w \in S_{\hat{x}}^n]}{|S_x^n|}.$$

Several popular metrics build upon one or both of these exact matching scores.
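To make the exact-match scores concrete, here is a small illustrative sketch in Python (the helper names are ours and not part of any metric implementation):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of token n-grams in a sentence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def exact_match_pr(reference, candidate, n=1):
    """Exact-match precision and recall over n-grams, as defined above."""
    ref_counts = Counter(ngrams(reference, n))
    cand_list = ngrams(candidate, n)
    # Precision: fraction of candidate n-grams that appear in the reference.
    precision = sum(1 for g in cand_list if ref_counts[g] > 0) / max(len(cand_list), 1)
    # Recall: fraction of reference n-grams that appear in the candidate.
    cand_counts = Counter(cand_list)
    ref_list = ngrams(reference, n)
    recall = sum(1 for g in ref_list if cand_counts[g] > 0) / max(len(ref_list), 1)
    return precision, recall

# Example: unigram scores for a paraphrased candidate; only "cars" matches.
ref = "people like foreign cars".split()
cand = "consumers prefer imported cars".split()
print(exact_match_pr(ref, cand, n=1))  # -> (0.25, 0.25)
```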

Bleu

The most widely used metric in machine translation is Bleu (Papineni et al., 2002), which includes three modifications to $\mathrm{Exact\text{-}P}_n$. First, each n-gram in the reference can be matched at most once. Second, the number of exact matches is accumulated for all reference-candidate pairs in the corpus and divided by the total number of n-grams in all candidate sentences. Finally, very short candidates are discouraged using a brevity penalty. Typically, Bleu is computed for multiple values of n (e.g., n = 1, 2, 3, 4) and the scores are averaged geometrically. A smoothed variant, SentBleu (Koehn et al., 2007), is computed at the sentence level. In contrast to Bleu, BERTScore is not restricted to a maximum n-gram length, but instead relies on contextualized embeddings that are able to capture dependencies of potentially unbounded length.

Meteor

Meteor (Banerjee and Lavie, 2005) computes $\mathrm{Exact\text{-}P}_1$ and $\mathrm{Exact\text{-}R}_1$ while allowing backing-off from exact unigram matching to matching word stems, synonyms, and paraphrases. For example, running may match run if no exact match is possible. Non-exact matching uses an external stemmer, a synonym lexicon, and a paraphrase table. Meteor 1.5 (Denkowski and Lavie, 2014) weighs content and function words differently, and also applies importance weighting to different matching types. The more recent Meteor++ 2.0 (Guo and Hu, 2019) further incorporates a learned external paraphrase resource. Because Meteor requires external resources, only five languages are supported with the full feature set, and eleven are partially supported. Similar to Meteor, BERTScore allows relaxed matches, but relies on BERT embeddings that are trained on large amounts of raw text and are currently available for 104 languages. BERTScore also supports importance weighting, which we estimate with simple corpus statistics.

Other Related Metrics

NIST (Doddington, 2002) is a revised version of Bleu that weighs each n-gram differently and uses an alternative brevity penalty. ΔBleu (Galley et al., 2015) modifies multi-reference Bleu by including human-annotated negative reference sentences. chrF (Popović, 2015) compares character n-grams in the reference and candidate sentences. chrF++ (Popović, 2017) extends chrF to include word bigram matching. Rouge (Lin, 2004) is a commonly used metric for summarization evaluation. Rouge-$n$ (Lin, 2004) computes $\mathrm{Exact\text{-}R}_n$ (usually n = 1, 2), while Rouge-L is a variant of $\mathrm{Exact\text{-}R}_1$ with the numerator replaced by the length of the longest common subsequence. CIDEr (Vedantam et al., 2015) is an image captioning metric that computes cosine similarity between tf-idf-weighted n-grams. We adopt a similar approach to weigh tokens differently. Finally, Chaganty et al. (2018) and Hashimoto et al. (2019) combine automatic metrics with human judgments for text generation evaluation.

2.2 Edit-distance-based Metrics

Several methods use word edit distance or word error rate (Levenshtein, 1966), which quantify similarity using the number of edit operations required to get from the candidate to the reference. TER (Snover et al., 2006) normalizes edit distance by the number of reference words, and ITER (Panja and Naskar, 2018) adds stem matching and better normalization. PER (Tillmann et al., 1997) computes a position-independent error rate, and CDER (Leusch et al., 2006) models block reordering as an edit operation. CharacTer (Wang et al., 2016) and EED (Stanchev et al., 2019) operate on the character level and achieve higher correlation with human judgments on some languages.

2.3 Embedding-based Metrics

Word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Grave et al., 2018; Nguyen et al., 2017; Athiwaratkun et al., 2018) are learned dense token representations. Meant 2.0 (Lo, 2017) uses word embeddings and shallow semantic parses to compute lexical and structural similarity. Yisi-1 (Lo et al., 2018) is similar to Meant 2.0, but makes the use of semantic parses optional. Both methods use a relatively simple similarity computation, which inspires our approach, including using greedy matching (Corley and Mihalcea, 2005) and experimenting with a similar importance weighting to Yisi-1. However, we use contextual embeddings, which capture the specific use of a token in a sentence, and potentially capture sequence information. We do not use external tools to generate linguistic structures, which makes our approach relatively simple and portable to new languages. Instead of greedy matching, WMD (Kusner et al., 2015), WMDO (Chow et al., 2019), and SMS (Clark et al., 2019) propose to use optimal matching based on earth mover’s distance (Rubner et al., 1998). The tradeoff between greedy and optimal matching was studied by Rus and Lintean (2012). Sharma et al. (2018) compute similarity with sentence-level representations. In contrast, our token-level computation allows us to weigh tokens differently according to their importance.

2.4 Learned Metrics

Various metrics are trained to optimize correlation with human judgments. Beer (Stanojević and Sima’an, 2014) uses a regression model based on character n-grams and word bigrams. Blend (Ma et al., 2017) uses regression to combine 29 existing metrics. Ruse (Shimanaka et al., 2018) combines three pre-trained sentence embedding models. All these methods require costly human judgments as supervision for each dataset, and risk poor generalization to new domains, even within a known language and task (Chaganty et al., 2018). Cui et al. (2018) and Lowe et al. (2017) train a neural model to predict if the input text is human-generated. This approach also has the risk of being optimized to existing data and generalizing poorly to new data. In contrast, the model underlying BERTScore is not optimized for any specific evaluation task.

3 BERTScore

Given a reference sentence and a candidate sentence , we use contextual embeddings to represent the tokens, and compute matching using cosine similarity, optionally weighted with inverse document frequency scores. Figure 1 illustrates the computation.

Figure 1: Illustration of the computation of the recall metric $R_{\text{BERT}}$. Given the reference $x$ and candidate $\hat{x}$, we compute BERT embeddings and pairwise cosine similarity. We highlight the greedy matching in red, and include the optional idf importance weighting.

Token Representation

We use contextual embeddings to represent the tokens in the input sentences $x$ and $\hat{x}$. In contrast to prior word embeddings (Mikolov et al., 2013; Pennington et al., 2014), contextual embeddings, such as BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018), can generate different vector representations for the same word in different sentences depending on the surrounding words, which form the context of the target word. The models used to generate these embeddings are most commonly trained using various language modeling objectives, such as masked word prediction (Devlin et al., 2019).

We experiment with different models (Section 4), using the tokenizer provided with each model. Given a tokenized reference sentence $x = \langle x_1, \dots, x_k \rangle$, the embedding model generates a sequence of vectors $\langle \mathbf{x}_1, \dots, \mathbf{x}_k \rangle$. Similarly, the tokenized candidate $\hat{x} = \langle \hat{x}_1, \dots, \hat{x}_l \rangle$ is mapped to $\langle \hat{\mathbf{x}}_1, \dots, \hat{\mathbf{x}}_l \rangle$. The main model we use is BERT, which tokenizes the input text into a sequence of word pieces (Wu et al., 2016), where unknown words are split into several commonly observed sequences of characters. The representation for each word piece is computed with a Transformer encoder (Vaswani et al., 2017) by repeatedly applying self-attention and nonlinear transformations in an alternating fashion. BERT embeddings have been shown to benefit various NLP tasks (Devlin et al., 2019; Liu, 2019; Huang et al., 2019; Yang et al., 2019a).
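To make the token representation step concrete, the following sketch extracts per-word-piece vectors with the Hugging Face transformers library; the model name and the layer index (taken from Table 8 in Appendix B) are illustrative, and exact API details may vary across library versions:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def embed(sentence: str, layer: int = 9) -> torch.Tensor:
    """Return one contextual vector per word piece from the chosen encoder layer."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple with num_layers + 1 entries of shape [1, seq_len, dim];
    # entry 0 is the word-piece embedding layer, entry i is encoder layer i.
    return outputs.hidden_states[layer].squeeze(0)

ref_vectors = embed("people like foreign cars")
cand_vectors = embed("consumers prefer imported cars")
```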

Similarity Measure

The vector representation allows for a soft measure of similarity instead of exact-string (Papineni et al., 2002) or heuristic (Banerjee and Lavie, 2005) matching. The cosine similarity of a reference token $x_i$ and a candidate token $\hat{x}_j$ is $\frac{\mathbf{x}_i^\top \hat{\mathbf{x}}_j}{\|\mathbf{x}_i\| \, \|\hat{\mathbf{x}}_j\|}$. We use pre-normalized vectors, which reduces this calculation to the inner product $\mathbf{x}_i^\top \hat{\mathbf{x}}_j$. While this measure considers tokens in isolation, the contextual embeddings contain information from the rest of the sentence.

BERTScore

The complete score matches each token in $x$ to a token in $\hat{x}$ to compute recall, and each token in $\hat{x}$ to a token in $x$ to compute precision. We use greedy matching to maximize the matching similarity score, where each token is matched to the most similar token in the other sentence. We combine precision and recall to compute an F1 measure. For a reference $x$ and candidate $\hat{x}$, the recall, precision, and F1 scores are:

$$R_{\text{BERT}} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} \mathbf{x}_i^\top \hat{\mathbf{x}}_j, \qquad P_{\text{BERT}} = \frac{1}{|\hat{x}|} \sum_{\hat{x}_j \in \hat{x}} \max_{x_i \in x} \mathbf{x}_i^\top \hat{\mathbf{x}}_j, \qquad F_{\text{BERT}} = 2\,\frac{P_{\text{BERT}} \cdot R_{\text{BERT}}}{P_{\text{BERT}} + R_{\text{BERT}}}.$$
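The greedy matching reduces to row-wise and column-wise maxima over a pairwise similarity matrix. A minimal sketch, assuming the token vectors have already been computed (e.g., as in the earlier snippet) and ignoring special tokens and idf weighting:

```python
import torch

def greedy_bertscore(ref_vectors: torch.Tensor, cand_vectors: torch.Tensor):
    """Greedy-matched precision, recall, and F1 over token embeddings.

    ref_vectors:  [k, d], one row per reference word piece.
    cand_vectors: [l, d], one row per candidate word piece.
    """
    # Normalize so that the inner product equals cosine similarity.
    ref = torch.nn.functional.normalize(ref_vectors, dim=-1)
    cand = torch.nn.functional.normalize(cand_vectors, dim=-1)
    sim = ref @ cand.T                          # [k, l] pairwise cosine similarities
    recall = sim.max(dim=1).values.mean()       # best candidate match per reference token
    precision = sim.max(dim=0).values.mean()    # best reference match per candidate token
    f1 = 2 * precision * recall / (precision + recall)
    return precision.item(), recall.item(), f1.item()

precision, recall, f1 = greedy_bertscore(ref_vectors, cand_vectors)
```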

Importance Weighting

Previous work on similarity measures demonstrated that rare words can be more indicative of sentence similarity than common words (Banerjee and Lavie, 2005; Vedantam et al., 2015). BERTScore enables us to easily incorporate importance weighting. We experiment with inverse document frequency (idf) scores computed from the test corpus. Given $M$ reference sentences $\{x^{(i)}\}_{i=1}^{M}$, the idf score of a word-piece token $w$ is

$$\mathrm{idf}(w) = -\log \frac{1}{M} \sum_{i=1}^{M} \mathbb{I}[w \in x^{(i)}],$$

where $\mathbb{I}[\cdot]$ is an indicator function. We do not use the full tf-idf measure because we process single sentences, where the term frequency (tf) is likely 1. For example, recall with idf weighting is

$$R_{\text{BERT}} = \frac{\sum_{x_i \in x} \mathrm{idf}(x_i) \max_{\hat{x}_j \in \hat{x}} \mathbf{x}_i^\top \hat{\mathbf{x}}_j}{\sum_{x_i \in x} \mathrm{idf}(x_i)}.$$

Because we use the reference sentences to compute idf, the idf scores remain the same for all systems evaluated on a specific test set. We apply plus-one smoothing to handle unknown word pieces.
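A sketch of the idf weights and the idf-weighted recall; the exact form of the plus-one smoothing is our assumption, since the text above only states that unknown word pieces are smoothed:

```python
import math
from collections import Counter

def compute_idf(reference_corpus):
    """reference_corpus: list of token lists (all references in the test set).

    Implements idf(w) = -log((1/M) * #{references containing w}), with a
    plus-one smoothed variant so unseen word pieces get a finite weight.
    """
    M = len(reference_corpus)
    doc_freq = Counter()
    for ref in reference_corpus:
        doc_freq.update(set(ref))              # count each token once per reference
    return {w: -math.log((doc_freq[w] + 1) / (M + 1)) for w in doc_freq}

def idf_weighted_recall(ref_tokens, best_sims, idf, unknown_weight):
    """best_sims[i] is the highest cosine similarity of reference token i against
    any candidate token (the row-wise max from the greedy matching above)."""
    weights = [idf.get(t, unknown_weight) for t in ref_tokens]
    return sum(w * s for w, s in zip(weights, best_sims)) / sum(weights)
```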

Baseline Rescaling

Because we use pre-normalized vectors, our computed scores have the same numerical range as cosine similarity (between -1 and 1). However, in practice we observe scores in a more limited range, potentially because of the learned geometry of contextual embeddings. While this characteristic does not impact BERTScore’s capability to rank text generation systems, it makes the actual score less readable. We address this by rescaling BERTScore with respect to its empirical lower bound $b$ as a baseline. We compute $b$ using Common Crawl monolingual datasets. For each language and contextual embedding model, we create 1M candidate-reference pairs by grouping two random sentences. Because of the random pairing and the corpus diversity, each pair has very low lexical and semantic overlap. We compute $b$ by averaging BERTScore computed on these sentence pairs. Equipped with the baseline $b$, we rescale BERTScore linearly. For example, the rescaled value $\hat{R}_{\text{BERT}}$ of $R_{\text{BERT}}$ is:

$$\hat{R}_{\text{BERT}} = \frac{R_{\text{BERT}} - b}{1 - b}.$$

After this operation, $\hat{R}_{\text{BERT}}$ is typically between 0 and 1. We apply the same rescaling procedure to $P_{\text{BERT}}$ and $F_{\text{BERT}}$. This method does not affect the ranking ability and human correlation of BERTScore, and is intended solely to increase the score readability.
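The rescaling itself is a single linear map; a sketch with an illustrative (not official) baseline value:

```python
def rescale(score: float, baseline: float) -> float:
    """Linearly map the empirically observed range [baseline, 1] to roughly [0, 1]."""
    return (score - baseline) / (1 - baseline)

print(rescale(0.92, baseline=0.85))  # -> ~0.47
```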

4 Experimental Setup

We evaluate our approach on machine translation and image captioning.

Contextual Embedding Models

We evaluate twelve pre-trained contextual embedding models, including variants of BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), XLNet (Yang et al., 2019b), and XLM (Lample and Conneau, 2019). We present the best-performing models in Section 5. We use the 24-layer model for English tasks, the 12-layer Chinese model for Chinese tasks, and the 12-layer cased multilingual model for other languages. We show the performance of all other models in Appendix F. Contextual embedding models generate embedding representations at every layer in the encoder network. Past work has shown that intermediate layers produce more effective representations for semantic tasks (Liu et al., 2019a). We use the WMT16 dataset (Bojar et al., 2016) as a validation set to select the best layer of each model (Appendix B).

Machine Translation

Our main evaluation corpus is the WMT18 metric evaluation dataset (Ma et al., 2018), which contains predictions of 149 translation systems across 14 language pairs, gold references, and two types of human judgment scores. Segment-level human judgments assign a score to each reference-candidate pair. System-level human judgments associate each system with a single score based on all pairs in the test set. WMT18 includes translations from English to Czech, German, Estonian, Finnish, Russian, and Turkish, and from the same set of languages to English. We follow the WMT18 standard practice and use absolute Pearson correlation $|\rho|$ and Kendall rank correlation $\tau$ to evaluate metric quality, and compute significance with the Williams test (Williams, 1959) for $|\rho|$ and bootstrap re-sampling for $\tau$, as suggested by Graham and Baldwin (2014). We compute system-level scores by averaging BERTScore for every reference-candidate pair. We also experiment with hybrid systems by randomly sampling one candidate sentence from one of the available systems for each reference sentence (Graham and Liu, 2016). This enables system-level experiments with a higher number of systems. Human judgments of each hybrid system are created by averaging the WMT18 segment-level human judgments for the corresponding sentences in the sampled data. We compare BERTScores to one canonical metric for each category introduced in Section 2, and include the comparison with all other participating metrics from WMT18 in Appendix F.
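A sketch of the system-level evaluation loop described above: average the segment-level BERTScores per system and correlate the averages with the system-level human judgments (the Williams significance test and the bootstrap re-sampling are omitted):

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

def system_level_correlation(segment_scores, human_scores):
    """segment_scores: dict mapping system name -> list of per-segment F_BERT values.
    human_scores:   dict mapping system name -> system-level human judgment."""
    systems = sorted(segment_scores)
    metric = [float(np.mean(segment_scores[s])) for s in systems]  # average over segments
    human = [human_scores[s] for s in systems]
    abs_pearson = abs(pearsonr(metric, human)[0])  # WMT reports absolute Pearson correlation
    kendall = kendalltau(metric, human)[0]
    return abs_pearson, kendall
```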

In addition to the standard evaluation, we design model selection experiments. We use 10K hybrid systems super-sampled from WMT18. We randomly select 100 out of 10K hybrid systems, and rank them using the automatic metrics. We repeat this process 100K times. We report the percentage of the metric ranking agreeing with the human ranking on the best system (Hits@1). In Tables 23-28, we include two additional measures to the model selection study: (a) the mean reciprocal rank of the top metric-rated system according to the human ranking, and (b) the difference between the human score of the top human-rated system and that of the top metric-rated system.
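A sketch of the three model-selection measures for a single sample of hybrid systems; in the experiments this is repeated 100K times and the results are averaged:

```python
import numpy as np

def model_selection_measures(metric_scores: np.ndarray, human_scores: np.ndarray):
    """metric_scores, human_scores: per-system scores for one sampled set of systems."""
    best_by_metric = int(np.argmax(metric_scores))
    best_by_human = int(np.argmax(human_scores))
    hits_at_1 = float(best_by_metric == best_by_human)
    # Rank (1 = best) of the metric's top pick within the human ranking.
    human_order = np.argsort(-human_scores)
    rank_of_pick = int(np.where(human_order == best_by_metric)[0][0]) + 1
    mrr = 1.0 / rank_of_pick
    diff = float(human_scores[best_by_human] - human_scores[best_by_metric])
    return hits_at_1, mrr, diff
```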

Additionally, we report the same study on the WMT17 (Bojar et al., 2017) and the WMT16 (Bojar et al., 2016) datasets in Appendix F. This adds 202 systems to our evaluation.

Image Captioning

We use the human judgments of twelve submission entries from the COCO 2015 Captioning Challenge. Each participating system generates a caption for each image in the COCO validation set (Lin et al., 2014), and each image has approximately five reference captions. Following Cui et al. (2018), we compute the Pearson correlation with two system-level metrics: the percentage of captions that are evaluated as better or equal to human captions (M1) and the percentage of captions that are indistinguishable from human captions (M2). We compute BERTScore with multiple references by scoring the candidate with each available reference and returning the highest score. We compare with eight task-agnostic metrics: Bleu (Papineni et al., 2002), Meteor (Banerjee and Lavie, 2005), Rouge-L (Lin, 2004), CIDEr (Vedantam et al., 2015), BEER (Stanojević and Sima’an, 2014), EED (Stanchev et al., 2019), chrF++ (Popović, 2017), and CharacTER (Wang et al., 2016). We also compare with two task-specific metrics: Spice (Anderson et al., 2016) and Leic (Cui et al., 2018). Spice is computed using the similarity of scene graphs parsed from the reference and candidate captions. Leic is trained to predict if a caption is written by a human given the image.
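The multi-reference scoring reduces to a maximum over single-reference scores; a one-line sketch, with bert_score_f1 standing in for the $F_{\text{BERT}}$ computation from Section 3:

```python
def multi_reference_score(candidate, references, bert_score_f1):
    """Score the candidate caption against each reference and keep the best match."""
    return max(bert_score_f1(candidate, reference) for reference in references)
```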

Metric en↔cs en↔de en↔et en↔fi en↔ru en↔tr en↔zh
(5/5) (16/16) (14/14) (9/12) (8/9) (5/8) (14/14)
BLEU .970/.995 .971/.981 .986/.975 .973/.962 .979/.983 .657/.826 .978/.947
ITER .975/.915 .990/.984 .975/.981 .996/.973 .937/.975 .861/.865 .980/.0--
RUSE .981/.0-- .997/.0-- .990/.0-- .991/.0-- .988/.0-- .853/.0-- .981/.0--
YiSi-1 .950/.987 .992/.985 .979/.979 .973/.940 .991/.992 .958/.976 .951/.963
$P_{\text{BERT}}$ .980/.994 .998/.988 .990/.981 .995/.957 .982/.990 .791/.935 .981/.954
$R_{\text{BERT}}$ .998/.997 .997/.990 .986/.980 .997/.980 .995/.989 .054/.879 .990/.976
$F_{\text{BERT}}$ .990/.997 .999/.989 .990/.982 .998/.972 .990/.990 .499/.908 .988/.967
$F_{\text{BERT}}$ (idf) .985/.995 .999/.990 .992/.981 .992/.972 .991/.991 .826/.941 .989/.973
Table 1: Absolute Pearson correlations with system-level human judgments on WMT18. For each language pair, the left number is the to-English correlation, and the right is the from-English. We bold correlations of metrics not significantly outperformed by any other metric under the Williams test for that language pair and direction. The numbers in parentheses are the number of systems used for each language pair and direction.
Metric en↔cs en↔de en↔et en↔fi en↔ru en↔tr en↔zh
BLEU .956/.993 .969/.977 .981/.971 .962/.958 .972/.977 .586/.796 .968/.941
ITER .966/.865 .990/.978 .975/.982 .989/.966 .943/.965 .742/.872 .978/.0--
RUSE .974/.0-- .996/.0-- .988/.0-- .983/.0-- .982/.0-- .780/.0-- .973/.0--
YiSi-1 .942/.985 .991/.983 .976/.976 .964/.938 .985/.989 .881/.942 .943/.957
$P_{\text{BERT}}$ .965/.989 .995/.983 .990/.970 .976/.951 .976/.988 .846/.936 .975/.950
$R_{\text{BERT}}$ .989/.995 .997/.991 .982/.979 .989/.977 .988/.989 .540/.872 .981/.980
$F_{\text{BERT}}$ .978/.993 .998/.988 .989/.978 .983/.969 .985/.989 .760/.910 .981/.969
$F_{\text{BERT}}$ (idf) .982/.995 .998/.988 .988/.979 .989/.969 .983/.987 .453/.877 .980/.963
Table 2: Absolute Pearson correlations with system-level human judgments on WMT18. We use 10K hybrid super-sampled systems for each language pair and direction. For each language pair, the left number is the to-English correlation, and the right is the from-English. Bolding criteria is the same as in Table 1.
Metric en↔cs en↔de en↔et en↔fi en↔ru en↔tr en↔zh
BLEU .134/.151 .803/.610 .756/.618 .461/.088 .228/.519 .095/.029 .658/.515
ITER .154/.000 .814/.692 .742/.733 .475/.111 .234/.532 .102/.030 .673/.0--
RUSE .214/.0-- .823/.0-- .785/.0-- .487/.0-- .248/.0-- .109/.0-- .670/.0--
YiSi-1 .159/.178 .809/.671 .749/.671 .467/.230 .248/.544 .108/.398 .613/.594
$P_{\text{BERT}}$ .173/.180 .706/.663 .764/.771 .498/.078 .255/.545 .140/.372 .661/.551
$R_{\text{BERT}}$ .163/.184 .804/.730 .770/.722 .494/.148 .260/.542 .005/.030 .677/.657
$F_{\text{BERT}}$ .175/.184 .824/.703 .769/.763 .501/.082 .262/.544 .142/.031 .673/.629
$F_{\text{BERT}}$ (idf) .179/.178 .824/.722 .760/.764 .503/.082 .265/.539 .004/.030 .678/.595
Table 3: Model selection accuracies (Hits@1) on WMT18 hybrid systems. We report the average of 100K samples and the 0.95 confidence intervals are below . We bold the highest numbers for each language pair and direction.
Metric en↔cs en↔de en↔et en↔fi en↔ru en↔tr en↔zh
(5k/5k) (78k/ 20k) (57k/32k) (16k/10k) (10k/22k) (9k/1k) (33k/29k)
BLEU .233/.389 .415/.620 .285/.414 .154/.355 .228/.330 .145/.261 .178/.311
ITER .198/.333 .396/.610 .235/.392 .128/.311 .139/.291 -.029/.236 .144/.0--
RUSE .347/.0-- .498/.0-- .368/.0-- .273/.0-- .311/.0-- .259/.0-- .218/.0--
YiSi-1 .319/.496 .488/.691 .351/.546 .231/.504 .300/.407 .234/.418 .211/.323
$P_{\text{BERT}}$ .387/.541 .541/.715 .389/.549 .283/.486 .345/.414 .280/.328 .248/.337
$R_{\text{BERT}}$ .388/.570 .546/.728 .391/.594 .304/.565 .343/.420 .290/.411 .255/.367
$F_{\text{BERT}}$ .404/.562 .550/.728 .397/.586 .296/.546 .353/.423 .292/.399 .264/.364
$F_{\text{BERT}}$ (idf) .408/.553 .550/.721 .395/.585 .293/.537 .346/.425 .296/.406 .260/.366
Table 4: Kendall correlations with segment-level human judgments on WMT18. For each language pair, the left number is the to-English correlation, and the right is the from-English. We bold correlations of metrics not significantly outperformed by any other metric under bootstrap sampling for that language pair and direction. The numbers in parentheses are the number of candidate-reference sentence pairs for each language pair and direction.

5 Results

Machine Translation

Tables 1--3 show system-level correlation to human judgments, correlations on hybrid systems, and model selection performance. We observe that BERTScore is consistently a top performer. In to-English results, RUSE (Shimanaka et al., 2018) shows competitive performance. However, RUSE is a supervised method trained on WMT16 and WMT15 human judgment data. In cases where RUSE models were not made available, such as for our from-English experiments, it is not possible to use RUSE without additional data and training. Table 4 shows segment-level correlations. We see that BERTScore exhibits significantly higher performance compared to the other metrics. The large improvement over Bleu stands out, making BERTScore particularly suitable for analyzing specific examples, where SentBleu is less reliable. In Appendix A, we provide qualitative examples to illustrate the segment-level performance difference between SentBleu and BERTScore. At the segment level, BERTScore even significantly outperforms RUSE. Overall, we find that applying importance weighting using idf at times provides a small benefit, but in other cases does not help. Understanding better when such importance weighting is likely to help is an important direction for future work, and likely depends on the domain of the text and the available test data. We continue without idf weighting for the rest of our experiments. While recall $R_{\text{BERT}}$, precision $P_{\text{BERT}}$, and F1 $F_{\text{BERT}}$ alternate as the best measure in different settings, $F_{\text{BERT}}$ performs reliably well across all the different settings. Our overall recommendation is therefore to use $F_{\text{BERT}}$. We present additional results using the full set of 351 systems and evaluation metrics in Tables 12--28 in the appendix, including for experiments with importance weighting, different contextual embedding models, and model selection.

Image Captioning

Table 5 shows correlation results for the COCO Captioning Challenge. BERTScore outperforms all task-agnostic baselines by large margins. Image captioning presents a challenging evaluation scenario, and metrics based on strict n-gram matching, including Bleu and Rouge, show weak correlations with human judgments. idf importance weighting shows significant benefit for this task, suggesting people attribute higher importance to content words. Finally, Leic (Cui et al., 2018), a trained metric that takes images as additional inputs and is optimized specifically for the COCO data and this set of systems, outperforms all other methods.

Metric M1 M2
Bleu -0.019 -0.005
Meteor 0.606 0.594
Rouge-L 0.090 0.096
CIDEr 0.438 0.440
Spice 0.759 0.750
Leic 0.939 0.949
BEER 0.491 0.562
EED 0.545 0.599
chrF++ 0.702 0.729
CharacTER 0.800 0.801
$P_{\text{BERT}}$ -0.105 -0.041
$R_{\text{BERT}}$ 0.888 0.863
$F_{\text{BERT}}$ 0.322 0.350
$R_{\text{BERT}}$ (idf) 0.917 0.889
Table 5: Pearson correlation on the 2015 COCO Captioning Challenge. The M1 and M2 measures are described in Section 4. Leic uses images as additional inputs. Some numbers are cited from Cui et al. (2018). We bold the highest correlations of task-specific and task-agnostic metrics.
Type Method QQP PAWS-QQP
Trained on QQP (supervised)
DecAtt 0.939 0.263
DIIN 0.952 0.324
BERT 0.963 0.351
Trained on QQP + PAWS-QQP (supervised)
DecAtt - 0.511
DIIN - 0.778
BERT - 0.831
Metric (not trained on QQP or PAWS-QQP)
Bleu 0.707 0.527
Meteor 0.755 0.532
Rouge-L 0.740 0.536
chrF++ 0.577 0.608
BEER 0.741 0.564
EED 0.743 0.611
CharacTER 0.698 0.650
$P_{\text{BERT}}$ 0.757 0.687
$R_{\text{BERT}}$ 0.744 0.685
$F_{\text{BERT}}$ 0.761 0.685
$F_{\text{BERT}}$ (idf) 0.777 0.693
Table 6: Area under ROC curve (AUC) on the QQP and PAWS-QQP datasets. The scores of the trained DecAtt (Parikh et al., 2016), DIIN (Gong et al., 2018), and fine-tuned BERT models are reported by Zhang et al. (2019). Their QQP scores are measured on the held-out test set of QQP. We bold the highest AUC for the supervised models and for the metrics.

Speed

Despite the use of a large pre-trained model, computing BERTScore is relatively fast. We are able to process 192.5 candidate-reference pairs/second using a GTX-1080Ti GPU. The complete WMT18 en-de test set, which includes 2,998 sentences, takes 15.6sec to process, compared to 5.4sec with SacreBLEU (Post, 2018), a common Bleu implementation. Given the sizes of commonly used test and validation sets, the increase in processing time is relatively marginal, and BERTScore is a good fit for use during validation (e.g., for stopping criteria) and testing, especially when compared to the time costs of other development stages.
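For completeness, a minimal usage sketch of the released package (pip install bert-score); the function name and arguments reflect the public interface at the time of writing, but exact signatures and defaults may differ across versions, so treat the details as an assumption:

```python
from bert_score import score

candidates = ["consumers prefer imported cars"]
references = ["people like foreign cars"]

# Returns per-sentence precision, recall, and F1 tensors.
P, R, F1 = score(candidates, references, lang="en", rescale_with_baseline=True)
print(F1.mean().item())
```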

6 Robustness Analysis

We test the robustness of BERTScore using adversarial paraphrase classification. We use the Quora Question Pair corpus (QQP; Iyer et al., 2017) and the adversarial paraphrases from the Paraphrase Adversaries from Word Scrambling dataset (PAWS; Zhang et al., 2019). Both datasets contain pairs of sentences labeled to indicate whether they are paraphrases or not. Positive examples in QQP are real duplicate questions, while negative examples are related, but different questions. Sentence pairs in PAWS are generated through word swapping. For example, in PAWS, Flights from New York to Florida may be changed to Flights from Florida to New York, and a good classifier should identify that these two sentences are not paraphrases. PAWS includes two parts: PAWS-QQP, which is based on the QQP data, and PAWS-Wiki. We use the PAWS-QQP development set, which contains 667 sentences. For the automatic metrics, we use no paraphrase detection training data. We expect that pairs with higher scores are more likely to be paraphrases. To evaluate the automatic metrics on QQP, we use the first 5,000 sentences in the training set instead of the test set because the test labels are not available. We treat the first sentence as the reference and the second sentence as the candidate.

Table 6 reports the area under ROC curve (AUC) for existing models and automatic metrics. We observe that supervised classifiers trained on QQP perform worse than random guessing on PAWS-QQP, which shows that these models predict the adversarial examples are more likely to be paraphrases. When adversarial examples are provided in training, state-of-the-art models like DIIN (Gong et al., 2018) and fine-tuned BERT are able to identify the adversarial examples, but their performance still decreases significantly from their performance on QQP. Most metrics have decent performance on QQP, but show a significant performance drop on PAWS-QQP, almost down to chance performance. This suggests these metrics fail to distinguish the harder adversarial examples. In contrast, the performance of BERTScore drops only slightly, showing more robustness than the other metrics.

7 Discussion

We propose BERTScore, a new metric for evaluating generated text against gold standard references. BERTScore is purposely designed to be simple, task agnostic, and easy to use. Our analysis illustrates how BERTScore resolves some of the limitations of commonly used metrics, especially on challenging adversarial examples. We conduct extensive experiments with various configuration choices for BERTScore, including the contextual embedding model used and the use of importance weighting. Overall, our extensive experiments, including the ones in the appendix, show that BERTScore achieves better correlation than common metrics, and is effective for model selection. However, there is no one configuration of BERTScore that clearly outperforms all others. While the differences between the top configurations are often small, it is important for the user to be aware of the different trade-offs, and to consider the domain and languages when selecting the exact configuration to use. In general, for machine translation evaluation, we suggest using $F_{\text{BERT}}$, which we find the most reliable. For evaluating text generation in English, we recommend using the 24-layer RoBERTa large model to compute BERTScore. For non-English languages, the multilingual BERT model is a suitable choice, although BERTScore computed with this model has less stable performance on low-resource languages. We report the optimal hyperparameters for all models we experimented with in Appendix B.

Briefly following our initial preprint publication, Zhao et al. (2019) published a concurrently developed method related to ours, but with a focus on integrating contextual word embeddings with earth mover’s distance (EMD; Rubner et al., 1998) rather than our simple matching process. They also propose various improvements compared to our use of contextualized embeddings. We study these improvements in Appendix C and show that integrating them into BERTScore makes it equivalent or better than the EMD-based approach. Largely though, the effect of the different improvements on BERTScore is more modest compared to their method. Shortly after our initial publication, YiSi-1 was updated to use BERT embeddings, showing improved performance (Lo, 2019). This further corroborates our findings. Other recent related work includes training a model on top of BERT to maximize the correlation with human judgments (Mathur et al., 2019) and evaluating generation with a BERT model fine-tuned on paraphrasing (Yoshimura et al., 2019). More recent work shows the potential of using BERTScore for training a summarization system (Li et al., 2019) and for domain-specific evaluation using SciBERT (Beltagy et al., 2019) to evaluate abstractive text summarization (Gabriel et al., 2019).

In future work, we look forward to designing new task-specific metrics that use BERTScore as a subroutine and accommodate task-specific needs, similar to how Wieting et al. (2019) suggests to use semantic similarity for machine translation training. Because BERTScore is fully differentiable, it also can be incorporated into a training procedure to compute a learning loss that reduces the mismatch between optimization and evaluation objectives.

Acknowledgement

This research is supported in part by grants from the National Science Foundation (III-1618134, III-1526012, IIS-1149882, IIS-1724282, TRIPODS-1740822, CAREER-1750499), the Office of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation, SAP, Zillow, Workday, and Facebook Research. We thank Graham Neubig and David Grangier for their insightful comments. We thank the Cornell NLP community including but not limited to Claire Cardie, Tianze Shi, Alexandra Schofield, Gregory Yauney, and Rishi Bommasani. We thank Yin Cui and Guandao Yang for their help with the COCO 2015 dataset.

Appendix A Qualitative Analysis

Case No. Reference and Candidate Pairs Human $F_{\text{BERT}}$ Bleu

$F_{\text{BERT}}$ ranks higher than Bleu

1. $x$: At the same time Kingfisher is closing 60 B&Q outlets across the country 38 125 530
$\hat{x}$: At the same time, Kingfisher will close 60 B & Q stores nationwide
2. $x$: Hewlett-Packard to cut up to 30,000 jobs 119 39 441
$\hat{x}$: Hewlett-Packard will reduce jobs up to 30.000
3. $x$: According to opinion in Hungary, Serbia is ‘‘a safe third country". 23 96 465
$\hat{x}$: According to Hungarian view, Serbia is a ‘‘safe third country."
4. $x$: Experts believe November’s Black Friday could be holding back spending. 73 147 492
$\hat{x}$: Experts believe that the Black Friday in November has put the brakes on spending
5. $x$: And it’s from this perspective that I will watch him die. 37 111 414
$\hat{x}$: And from this perspective, I will see him die.

Bleu ranks higher than $F_{\text{BERT}}$

6. $x$: In their view the human dignity of the man had been violated. 500 470 115
$\hat{x}$: Look at the human dignity of the man injured.
7. $x$: A good prank is funny, but takes moments to reverse. 495 424 152
$\hat{x}$: A good prank is funny, but it takes only moments before he becomes a boomerang.
8. $x$: For example when he steered a shot from Ideye over the crossbar in the 56th minute. 516 524 185
$\hat{x}$: So, for example, when he steered a shot of Ideye over the latte (56th).
9. $x$: I will put the pressure on them and onus on them to make a decision. 507 471 220
$\hat{x}$: I will exert the pressure on it and her urge to make a decision.
10. $x$: Transport for London is not amused by this flyposting "vandalism." 527 527 246
$\hat{x}$: Transport for London is the Plaka animal "vandalism" is not funny.

$F_{\text{BERT}}$ ranks higher than Human

11. $x$: One big obstacle to access to the jobs market is the lack of knowledge of the German language. 558 131 313
$\hat{x}$: A major hurdle for access to the labour market are a lack of knowledge of English.
12. $x$: On Monday night Hungary closed its 175 km long border with Serbia. 413 135 55
$\hat{x}$: Hungary had in the night of Tuesday closed its 175 km long border with Serbia.
13. $x$: They got nothing, but they were allowed to keep the clothes. 428 174 318
$\hat{x}$: You got nothing, but could keep the clothes.
14. $x$: A majority of Republicans don’t see Trump’s temperament as a problem. 290 34 134
$\hat{x}$: A majority of Republicans see Trump’s temperament is not a problem.
15. $x$: His car was still running in the driveway. 299 49 71
$\hat{x}$: His car was still in the driveway.

Human ranks higher than $F_{\text{BERT}}$

16. $x$: Currently the majority of staff are men. 77 525 553
$\hat{x}$: At the moment the men predominate among the staff.
17. $x$: There are, indeed, multiple variables at play. 30 446 552
$\hat{x}$: In fact, several variables play a role.
18. $x$: One was a man of about 5ft 11in tall. 124 551 528
$\hat{x}$: One of the men was about 1,80 metres in size.
19. $x$: All that stuff sure does take a toll. 90 454 547
$\hat{x}$: All of this certainly exacts its toll.
20. $x$: Wage gains have shown signs of picking up. 140 464 514
$\hat{x}$: Increases of wages showed signs of a recovery.
Table 7: Example sentences where the similarity ranks assigned by Human, $F_{\text{BERT}}$, and Bleu differ significantly on the WMT16 German-to-English evaluation task. $x$: gold reference, $\hat{x}$: candidate output of an MT system. Rankings assigned by Human, $F_{\text{BERT}}$, and Bleu are shown in the right three columns. The sentences are ranked by similarity, i.e., rank 1 is the pair judged most similar by that score. An ideal metric should rank similarly to humans.

We study BERTScore and SentBleu using WMT16 German-to-English (Bojar et al., 2016). We rank all 560 candidate-reference pairs by human score, BERTScore, or SentBleu from most similar to least similar. Ideally, the ranking assigned by BERTScore and SentBleu should be similar to the ranking assigned by the human score.

Table 7 first shows examples where BERTScore and SentBleu scores disagree about the ranking for a candidate-reference pair by a large number. We observe that BERTScore is effectively able to capture synonyms and changes in word order. For example, the reference and candidate sentences in pair 3 are almost identical except that the candidate replaces opinion in Hungary with Hungarian view and switches the order of the quotation mark (‘‘) and a. While BERTScore ranks the pair relatively high, SentBleu judges the pair as dissimilar, because it cannot match synonyms and is sensitive to the small word order changes. Pair 2 shows a set of changes that preserve the semantic meaning: replacing to cut with will reduce and swapping the order of 30,000 and jobs. BERTScore ranks the candidate translation similar to the human judgment, whereas SentBleu ranks it much lower. We also see that SentBleu potentially over-rewards n-gram overlap, even when phrases are used very differently. In pair 6, both the candidate and the reference contain the human dignity of the man. Yet the two sentences convey very different meaning. BERTScore agrees with the human judgment and ranks the pair low. In contrast, SentBleu considers the pair as relatively similar because of the significant word overlap.

The bottom half of Table 7 shows examples where BERTScore and human judgments disagree about the ranking. We observe that BERTScore finds it difficult to detect factual errors. For example, BERTScore assigns high similarity to pair 11, where the translation replaces German language with English, and to pair 12, where the translation incorrectly outputs Tuesday when it is supposed to generate Monday. BERTScore also fails to identify that 5ft 11in is equivalent to 1,80 metres in pair 18, and as a result ranks this pair much lower than the human judgment does. SentBleu also suffers from these limitations.

Figure 2 visualizes the BERTScore matching of two pairs of candidate and reference sentences. The figure illustrates how BERTScore matches synonymous phrases, such as imported cars and foreign cars. We also see that the greedy matching effectively pairs words even given a high ordering distortion, for example the token people in the figure.

Figure 2: BERTScore visualization. The cosine similarities of the greedy word matches are color-coded.

Appendix B Representation Choice

As suggested by previous work (Peters et al., 2018; Reimers and Gurevych, 2019), selecting a good layer or a good combination of layers from the BERT model is important. In designing BERTScore, we use the WMT16 segment-level human judgment data as a development set to facilitate our representation choice. For Chinese models, we tune with the WMT17 ‘‘en-zh’’ data because the language pair ‘‘en-zh’’ is not available in WMT16. In Figure 3, we plot the change of human correlation of $F_{\text{BERT}}$ over different layers of BERT, RoBERTa, XLNet and XLM models. Based on results from different models, we identify a common trend that $F_{\text{BERT}}$ computed with intermediate representations tends to work better. We tune the layer to use for a range of publicly available models. Table 8 shows the results of our hyperparameter search.
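A sketch of this layer-selection procedure: score the validation pairs with each candidate layer and keep the layer whose $F_{\text{BERT}}$ correlates best with the human judgments (the data loading and the scoring helper from Section 3 are assumed):

```python
import numpy as np
from scipy.stats import pearsonr

def select_best_layer(pairs, human_scores, score_with_layer, num_layers):
    """pairs: list of (reference, candidate) sentence pairs from the validation set.
    score_with_layer(reference, candidate, layer) -> F_BERT using that encoder layer."""
    correlations = []
    for layer in range(1, num_layers + 1):          # layer 0 is the BPE embedding layer
        metric = [score_with_layer(ref, cand, layer) for ref, cand in pairs]
        correlations.append(pearsonr(metric, human_scores)[0])
    best_layer = int(np.argmax(correlations)) + 1
    return best_layer, correlations
```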

Model Total Number of Layers Best Layer
bert-base-uncased 12 9
bert-large-uncased 24 18
bert-base-cased-finetuned-mrpc 12 9
bert-base-multilingual-cased 12 9
bert-base-chinese 12 8
roberta-base 12 10
roberta-large 24 17
roberta-large-mnli 24 19
xlnet-base-cased 12 5
xlnet-large-cased 24 7
xlm-mlm-en-2048 12 7
xlm-mlm-100-1280 16 11
Table 8: Recommended layer of representation to use for BERTScore. The layers are chosen based on a held-out validation set (WMT16).
Figure 3: Pearson correlation of $F_{\text{BERT}}$ computed with different models, across different layers, with segment-level human judgments on the WMT16 to-English machine translation task. The WMT17 English-Chinese data is used for the BERT Chinese model. Layer 0 corresponds to using BPE embeddings. Consistently, correlation drops significantly in the final layers.

Appendix C Ablation Study of MoverScore

Word Mover’s Distance (WMD; Kusner et al., 2015) is a semantic similarity metric that relies on word embeddings and optimal transport. MoverScore (Zhao et al., 2019) combines contextual embeddings and WMD for text generation evaluation. In contrast, BERTScore adopts a greedy approach to aggregate token-level information. In addition to using WMD for generation evaluation, Zhao et al. (2019) also introduce various other improvements. We do a detailed ablation study to understand the benefit of each improvement, and to investigate whether it can be applied to BERTScore. We use a 12-layer uncased BERT model on the WMT17 to-English segment-level data, the same setting as Zhao et al. (2019).

We identify several differences between MoverScore and BERTScore by analyzing the released source code. We isolate each difference, and mark it with a bracketed tag for our ablation study:

  1. [MNLI] Use a BERT model fine-tuned on MNLI (Williams et al., 2018).

  2. [PMEANS] Apply power means (Rücklé et al., 2018) to aggregate the information of different layers.

  3. [IDF-L] For reference sentences, instead of computing the idf scores on the 560 sentences in the segment-level data ([IDF-S]), compute the idf scores on the 3,005 sentences in the system-level data.

  4. [SEP] For candidate sentences, recompute the idf scores on the candidate sentences. The idf weights of reference tokens are kept the same as in [IDF-S].

  5. [RM] Exclude punctuation marks and sub-word tokens except the first sub-word in each word from the matching.

We follow the setup of Zhao et al. (2019) and use their released fine-tuned BERT model to conduct the experiments. Table 9 shows the results of our ablation study. We report correlations for the two variants of WMD that Zhao et al. (2019) study: unigrams (WMD1) and bigrams (WMD2). Our $F_{\text{BERT}}$ corresponds to the vanilla setting and the idf-weighted variant corresponds to the [IDF-S] setting. The complete MoverScore metric corresponds to [IDF-S]+[SEP]+[PMEANS]+[MNLI]+[RM]. We make several observations. First, for all language pairs except fi-en and lv-en, we can replicate the reported performance. For these two language pairs, Zhao et al. (2019) did not release their implementations at the time of publication. Second, we confirm the effectiveness of [PMEANS] and [MNLI]. In Appendix F, we study more pre-trained models and further corroborate this conclusion. However, the contribution of other techniques, including [RM] and [SEP], seems less stable. Third, replacing greedy matching with WMD does not lead to consistent improvement. In fact, oftentimes BERTScore is the better metric when given the same setup. In general, for any given language pair, BERTScore is always among the best performing ones. Given the current results, it is not clear that WMD is better than greedy matching for text generation evaluation.

Ablation Metric cs-en de-en fi-en lv-en ru-en tr-en zh-en
Vanilla WMD1 0.628 0.655 0.795 0.692 0.701 0.715 0.699
WMD2 0.638 0.661 0.797 0.695 0.700 0.728 0.714
$F_{\text{BERT}}$ 0.659 0.680 0.817 0.702 0.719 0.727 0.717
IDF-S WMD1 0.636 0.662 0.824 0.709 0.716 0.728 0.713
WMD2 0.643 0.662 0.821 0.708 0.712 0.732 0.715
$F_{\text{BERT}}$ 0.657 0.681 0.823 0.713 0.725 0.718 0.711
IDF-L WMD1 0.633 0.659 0.825 0.708 0.716 0.727 0.715
WMD2 0.641 0.661 0.822 0.708 0.713 0.730 0.716
$F_{\text{BERT}}$ 0.655 0.682 0.823 0.713 0.726 0.718 0.712
IDF-L + SEP WMD1 0.651 0.660 0.819 0.703 0.714 0.724 0.715
WMD2 0.659 0.662 0.816 0.702 0.712 0.729 0.715
$F_{\text{BERT}}$ 0.664 0.681 0.818 0.709 0.724 0.716 0.710
IDF-L + SEP + RM WMD1 0.651 0.686 0.803 0.681 0.730 0.730 0.720
WMD2 0.664 0.687 0.797 0.679 0.728 0.735 0.718
$F_{\text{BERT}}$ 0.659 0.695 0.800 0.683 0.734 0.722 0.712
IDF-L + SEP + PMEANS WMD1 0.658 0.663 0.820 0.707 0.717 0.725 0.712
WMD2 0.667 0.665 0.817 0.707 0.717 0.727 0.712
$F_{\text{BERT}}$ 0.671 0.682 0.819 0.708 0.725 0.715 0.704
IDF-L + SEP + MNLI WMD1 0.659 0.679 0.822 0.732 0.718 0.746 0.725
WMD2 0.664 0.682 0.819 0.731 0.715 0.748 0.722
$F_{\text{BERT}}$ 0.668 0.701 0.825 0.737 0.727 0.744 0.725
IDF-L + SEP + PMEANS + MNLI WMD1 0.672 0.686 0.831 0.738 0.725 0.753 0.737
WMD2 0.677 0.690 0.828 0.736 0.722 0.755 0.735
$F_{\text{BERT}}$ 0.682 0.707 0.836 0.741 0.732 0.751 0.736
IDF-L + SEP + PMEANS + MNLI + RM WMD1 0.670 0.708 0.821 0.717 0.738 0.762 0.744
WMD2 0.679 0.709 0.814 0.716 0.736 0.762 0.738
$F_{\text{BERT}}$ 0.676 0.717 0.824 0.719 0.740 0.757 0.738
Table 9: Ablation Study of MoverScore and BERTScore using Pearson correlations on the WMT17 to-English segment-level data. Correlations that are not outperformed by others for that language pair under Williams Test are bolded. We observe that using WMD does not consistently improve BERTScore.

Appendix D Additional Experiments on Abstractive Text Compression

We use the human judgments provided with the MSR Abstractive Text Compression Dataset (Toutanova et al., 2016) to illustrate the applicability of BERTScore to abstractive text compression evaluation. The data includes three types of human scores: (a) meaning: how well a compressed text preserves the meaning of the original text; (b) grammar: how grammatically correct a compressed text is; and (c) combined: the average of the meaning and the grammar scores. We follow the experimental setup of Toutanova et al. (2016) and report Pearson correlation between BERTScore and the three types of human scores. Table 10 shows that $R_{\text{BERT}}$ has the highest correlation with human meaning judgments, and $P_{\text{BERT}}$ correlates highly with human grammar judgments. $F_{\text{BERT}}$ provides a balance between the two aspects.

Type Metric Meaning Grammar Combined
BERTScore $P_{\text{BERT}}$ 0.36 0.47 0.46
$R_{\text{BERT}}$ 0.64 0.29 0.52
$F_{\text{BERT}}$ 0.58 0.41 0.56
Common metrics Bleu 0.46 0.13 0.33
Meteor 0.53 0.11 0.36
ROUGE-L 0.51 0.16 0.38
SARI 0.50 0.15 0.37
Best metrics according to Toutanova et al. (2016) SKIP-2+Recall+MULT-PROB 0.59 N/A 0.51
PARSE-2+Recall+MULT-MAX N/A 0.35 0.52
PARSE-2+Recall+MULT-PROB 0.57 0.35 0.52
Table 10: Pearson correlations with human judgments on the MSR Abstractive Text Compression Dataset.

Appendix E BERTScore of Recent MT Models

Task Model Bleu $\hat{P}_{\text{BERT}}$ $\hat{R}_{\text{BERT}}$ $\hat{F}_{\text{BERT}}$ $P_{\text{BERT}}$ $R_{\text{BERT}}$ $F_{\text{BERT}}$
WMT14 En-De ConvS2S (Auli et al., 2017) 0.266 0.6099 0.6055 0.6075 0.8499 0.8482 0.8488
Transformer-big (Ott et al., 2018) 0.298 0.6587 0.6528 0.6558 0.8687 0.8664 0.8674
DynamicConv (Wu et al., 2019) 0.297 0.6526 0.6464 0.6495 0.8664 0.8640 0.8650
WMT14 En-Fr ConvS2S (Auli et al., 2017) 0.408 0.6998 0.6821 0.6908 0.8876 0.8810 0.8841
Transformer-big (Ott et al., 2018) 0.432 0.7148 0.6978 0.7061 0.8932 0.8869 0.8899
DynamicConv (Wu et al., 2019) 0.432 0.7156 0.6989 0.7071 0.8936 0.8873 0.8902
IWSLT14 De-En Transformer-iwslt (Ott et al., 2019) 0.350 0.6749 0.6590 0.6672 0.9452 0.9425 0.9438
LightConv  (Wu et al., 2019) 0.348 0.6737 0.6542 0.6642 0.9450 0.9417 0.9433
DynamicConv (Wu et al., 2019) 0.352 0.6770 0.6586 0.6681 0.9456 0.9425 0.9440
Table 11: Bleu scores and BERTScores of publicly available pre-trained MT models in fairseq (Ott et al., 2019). We show both rescaled scores (marked with a hat) and raw BERTScores. The models were trained on different data: an unconfirmed WMT data version, WMT16 + ParaCrawl, WMT16 only, or trained by us using fairseq.

Table 11 shows the Bleu scores and the BERTScores of pre-trained machine translation models on the WMT14 English-to-German, WMT14 English-to-French, and IWSLT14 German-to-English tasks. We used publicly available pre-trained models from fairseq (Ott et al., 2019). Because a pre-trained Transformer model on IWSLT is not released, we trained our own using the fairseq library. We use the cased multilingual BERT model for English-to-German and English-to-French pairs, and the uncased English BERT model for German-to-English pairs. Interestingly, the gap between a DynamicConv (Wu et al., 2019) trained on only WMT16 and a Transformer (Ott et al., 2018) trained on WMT16 and ParaCrawl (about 30× more training data) becomes larger when evaluated with BERTScore rather than Bleu.

Appendix F Additional Results

In this section, we present additional experimental results:

  1. Segment-level and system-level correlation studies on three years of WMT metric evaluation task (WMT16--18)

  2. Model selection study on WMT18 10K hybrid systems

  3. System-level correlation study on 2015 COCO captioning challenge

  4. Robustness study on PAWS-QQP.

Following BERT (Devlin et al., 2019), a variety of Transformer-based (Vaswani et al., 2017) pre-trained contextual embeddings have been proposed and released. We conduct additional experiments with four types of pre-trained embeddings: BERT, XLM (Lample and Conneau, 2019), XLNet (Yang et al., 2019b), and RoBERTa (Liu et al., 2019b). XLM (Cross-lingual Language Model) is a Transformer pre-trained on a translation language modeling task of predicting masked tokens from a pair of sentences in two different languages, as well as on masked language modeling, using multilingual training data. Yang et al. (2019b) modify the Transformer architecture and pre-train it on a permutation language modeling task, resulting in some improvement over the original BERT when fine-tuned on several downstream tasks. Liu et al. (2019b) introduce RoBERTa (Robustly optimized BERT approach) and demonstrate that an optimized BERT model is comparable to or sometimes outperforms an XLNet on downstream tasks.

We perform a comprehensive study with the following pre-trained contextual embedding models:

  • BERT models: bert-base-uncased, bert-large-uncased, bert-base-chinese, bert-base-multilingual-cased, and bert-base-cased-finetuned-mrpc

  • RoBERTa models: roberta-base, roberta-large, and roberta-large-mnli

  • XLNet models: xlnet-base-cased and xlnet-large-cased

  • XLM models: xlm-mlm-en-2048 and xlm-mlm-100-1280

f.1 WMT Correlation Study

Experimental setup

Because of missing data in the released WMT16 dataset (Bojar et al., 2016), we are only able to experiment with to-English segment-level data, which contains the outputs of 50 different systems on 6 language pairs. We use this data as the validation set for hyperparameter tuning (Appendix B). Table 12 shows the Pearson correlations of all participating metrics and BERTScores computed with different pre-trained models. Significance testing for this dataset does not include the baseline metrics because the released dataset does not contain the original outputs from the baseline metrics. We conduct significance testing between BERTScore results only.

The WMT17 dataset (Bojar et al., 2017) contains outputs of 152 different translation systems on 14 language pairs. We experiment on the segment-level and system-level data on both to-English and from-English language pairs. We exclude fi-en data from the segment-level experiment due to an error in the released data. We compare our results to all participating metrics and perform standard significance testing as done by Bojar et al. (2017). Tables 13--16 show the results.

The WMT18 dataset (Ma et al., 2018) contains outputs of 159 translation systems on 14 language pairs. In addition to the results in Tables 1--4, we complement the study with the correlations of all participating metrics in WMT18 and results from using different contextual models for BERTScore.

Results

Tables 12--22 collectively showcase the effectiveness of BERTScore in correlating with human judgments. The improvement of BERTScore is more pronounced on the segment level than on the system level. We also see that more optimized or larger pre-trained models can produce better contextual representations. In contrast, the smaller XLNet performs better than the large one. Based on the evidence in Figure 8 and Tables 12--22, we hypothesize that the permutation language modeling task, though leading to a good set of model weights for fine-tuning on downstream tasks, does not necessarily produce informative pre-trained embeddings for generation evaluation. We also observe that fine-tuning pre-trained models on a related task, such as natural language inference (Williams et al., 2018), can lead to better human correlation in evaluating text generation. Therefore, for evaluating English sentences, we recommend computing BERTScore with a 24-layer RoBERTa model fine-tuned on the MNLI dataset. For evaluating non-English sentences, both the multilingual BERT model and the XLM model trained on 100 languages are suitable candidates. We also recommend using domain- or language-specific contextual embeddings when possible, such as using BERT Chinese models for evaluating Chinese tasks. In general, we advise users to consider the target domain and languages when selecting the exact configuration to use.

f.2 Model Selection Study

Experimental setup

Similar to Section 4, we use the 10K hybrid systems super-sampled from WMT18. We randomly select 100 out of 10K hybrid systems, rank them using automatic metrics, and repeat this process 100K times. We add to the results in the main paper (Table 3) performance of all participating metrics in WMT18 and results from using different contextual embedding models for BERTScore. We reuse the hybrid configuration and metric outputs released in WMT18. In addition to the Hits@1 measure, we evaluate the metrics using (a) mean reciprocal rank (MRR) of the top metric-rated system in human rankings, and (b) the absolute human score difference (Diff) between the top metric- and human-rated systems. Hits@1 captures a metric’s ability to select the best system. The other two measures quantify the amount of error a metric makes in the selection process. Tables 23--28 show the results from these experiments.

Results

The additional results further support our conclusion from Table 3: BERTScore demonstrates better model selection performance. We also observe that the supervised metric RUSE displays strong model selection ability.

f.3 Image Captioning on COCO

We follow the experimental setup described in Section 4. Table 29 shows the correlations of several pre-trained contextual embeddings. We observe that precision-based methods such as Bleu and $P_{\text{BERT}}$ are weakly correlated with human judgments on image captioning tasks. We hypothesize that this is because human judges prefer captions that capture the main objects in a picture. In general, $R_{\text{BERT}}$ has a high correlation, even surpassing the task-specific metric Spice (Anderson et al., 2016). While the fine-tuned RoBERTa-Large model does not result in the highest correlation, it is one of the best metrics.

f.4 Robustness Analysis on PAWS-QQP

We present the full results of the robustness study described in Section 6 in Table 30. In general, we observe that BERTScore is more robust than other commonly used metrics. BERTScore computed with the 24-layer RoBERTa model performs the best. Fine-tuning RoBERTa-Large on MNLI (Williams et al., 2018) can significantly improve the robustness against adversarial sentences. However, a BERT model fine-tuned on MRPC (the Microsoft Research Paraphrase Corpus; Dolan and Brockett, 2005) performs worse than its counterpart.

Setting Metric cs-en de-en fi-en ro-en ru-en tr-en
560 560 560 560 560 560
Unsupervised DPMFcomb 0.713 0.584 0.598 0.627 0.615 0.663
metrics-f 0.696 0.601 0.557 0.662 0.618 0.649
cobalt-f. 0.671 0.591 0.554 0.639 0.618 0.627
upf-coba. 0.652 0.550 0.490 0.616 0.556 0.626
MPEDA 0.644 0.538 0.513 0.587 0.545 0.616
chrF2 0.658 0.457 0.469 0.581 0.534 0.556
chrF3 0.660 0.455 0.472 0.582 0.535 0.555
chrF1 0.644 0.454 0.452 0.570 0.522 0.551
UoW-ReVaL 0.577 0.528 0.471 0.547 0.528 0.531
wordF3 0.599 0.447 0.473 0.525 0.504 0.536
wordF2 0.596 0.445 0.471 0.522 0.503 0.537
wordF1 0.585 0.435 0.464 0.508 0.497 0.535
SentBLEU 0.557 0.448 0.484 0.499 0.502 0.532
DTED 0.394 0.254 0.361 0.329 0.375 0.267
Supervised BEER 0.661 0.462 0.471 0.551 0.533 0.545
Pre-Trained 0.729 0.617 0.719 0.651 0.684 0.678
0.741 0.639 0.616 0.693 0.660 0.660
0.747 0.640 0.661 0.723 0.672 0.688
(no idf) 0.723 0.638 0.662 0.700 0.633 0.696
(no idf) 0.745 0.656 0.638 0.697 0.653 0.674
(no idf) 0.747 0.663 0.666 0.714 0.662 0.703
0.697 0.618 0.614 0.676 0.62 0.695
0.723 0.636 0.587 0.667 0.648 0.664
0.725 0.644 0.617 0.691 0.654 0.702
(idf) 0.713 0.613 0.630 0.693 0.635 0.691
(idf) 0.727 0.631 0.573 0.666 0.642 0.662
(idf) 0.735 0.637 0.620 0.700 0.658 0.697
0.756 0.671 0.701 0.723 0.678 0.706
0.768 0.684 0.677 0.720 0.686 0.699
0.774 0.693 0.705 0.736 0.701 0.717
(idf) 0.758 0.653 0.704 0.734 0.685 0.705
(idf) 0.771 0.680 0.661 0.718 0.687 0.692
(idf) 0.774 0.678 0.700 0.740 0.701 0.711
0.738 0.642 0.671 0.712 0.669 0.671
0.745 0.669 0.645 0.698 0.682 0.653
0.761 0.674 0.686 0.732 0.697 0.689
(idf) 0.751 0.626 0.678 0.723 0.685 0.668
(idf) 0.744 0.652 0.638 0.699 0.685 0.657
(idf) 0.767 0.653 0.688 0.737 0.705 0.685
0.757 0.702 0.709 0.735 0.721 0.676
0.765 0.713 0.686 0.718 0.714 0.676
0.780 0.724 0.728 0.753 0.738 0.709
(idf) 0.771 0.682 0.705 0.727 0.714 0.681
(idf) 0.762 0.695 0.683 0.711 0.708 0.678
(idf) 0.786 0.704 0.727 0.747 0.732 0.711
0.777 0.718 0.733 0.744 0.729 0.747
0.790 0.731 0.702 0.741 0.727 0.732
0.795 0.736 0.733 0.757 0.744 0.756
(idf) 0.794 0.695 0.731 0.752 0.732 0.747
(idf) 0.792 0.706 0.694 0.737 0.724 0.733
(idf) 0.804 0.710 0.729 0.760 0.742 0.754
0.708 0.612 0.639 0.650 0.606 0.690
0.728 0.630 0.617 0.645 0.621 0.675
0.727 0.631 0.640 0.659 0.626 0.695
(idf) 0.726 0.618 0.655 0.678 0.629 0.700
(idf) 0.734 0.633 0.618 0.66 0.635 0.682
(idf) 0.739 0.633 0.649 0.681 0.643 0.702
0.710 0.577 0.643 0.647 0.616 0.684
0.732 0.600 0.610 0.636 0.627 0.668
0.733 0.600 0.643 0.655 0.637 0.691
(idf) 0.728 0.574 0.652 0.669 0.633 0.681
(idf) 0.735 0.592 0.597 0.642 0.629 0.662
(idf) 0.742 0.592 0.643 0.670 0.645 0.685
0.688 0.569 0.613 0.645 0.583 0.659
0.715 0.603 0.577 0.645 0.609 0.644
0.713 0.597 0.610 0.657 0.610 0.668
(idf) 0.728 0.576 0.649 0.681 0.604 0.683
(idf) 0.730 0.597 0.591 0.659 0.622 0.669
(idf) 0.739 0.594 0.636 0.682 0.626 0.691
Table 12: Pearson correlations with segment-level human judgments on WMT16 to-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples.
Setting Metric cs-en de-en fi-en lv-en ru-en tr-en zh-en
(# examples) 560 560 560 560 560 560 560
Unsupervised chrF 0.514 0.531 0.671 0.525 0.599 0.607 0.591
chrF++ 0.523 0.534 0.678 0.520 0.588 0.614 0.593
MEANT 2.0 0.578 0.565 0.687 0.586 0.607 0.596 0.639
MEANT 2.0-nosrl 0.566 0.564 0.682 0.573 0.591 0.582 0.630
SentBLEU 0.435 0.432 0.571 0.393 0.484 0.538 0.512
TreeAggreg 0.486 0.526 0.638 0.446 0.555 0.571 0.535
UHH_TSKM 0.507 0.479 0.600 0.394 0.465 0.478 0.477
Supervised AutoDA 0.499 0.543 0.673 0.533 0.584 0.625 0.583
BEER 0.511 0.530 0.681 0.515 0.577 0.600 0.582
BLEND 0.594 0.571 0.733 0.577 0.622 0.671 0.661
BLEU2VEC 0.439 0.429 0.590 0.386 0.489 0.529 0.526
NGRAM2VEC 0.436 0.435 0.582 0.383 0.490 0.538 0.520
Pre-Trained 0.625 0.659 0.808 0.688 0.698 0.713 0.675
0.653 0.645 0.782 0.662 0.678 0.716 0.715
0.654 0.671 0.811 0.692 0.707 0.731 0.714
(idf) 0.626 0.668 0.819 0.708 0.719 0.702 0.667
(idf) 0.652 0.658 0.789 0.678 0.696 0.703 0.712
(idf) 0.657 0.680 0.823 0.712 0.725 0.718 0.711
0.599 0.630 0.788 0.657 0.659 0.710 0.681
0.613 0.620 0.754 0.616 0.650 0.685 0.705
0.627 0.647 0.792 0.656 0.676 0.717 0.712
(idf) 0.609 0.630 0.801 0.680 0.676 0.712 0.682
(idf) 0.611 0.628 0.759 0.633 0.665 0.687 0.703
(idf) 0.633 0.649 0.803 0.678 0.690 0.719 0.713
0.638 0.685 0.816 0.717 0.719 0.746 0.693
0.661 0.676 0.782 0.693 0.705 0.744 0.730
0.666 0.701 0.814 0.723 0.730 0.760 0.731
(idf) 0.644 0.692 0.827 0.728 0.729 0.734 0.689
(idf) 0.665 0.686 0.796 0.712 0.729 0.733 0.730
(idf) 0.671 0.707 0.829 0.738 0.745 0.746 0.729
0.639 0.663 0.801 0.689 0.688 0.700 0.704
0.648 0.652 0.768 0.651 0.669 0.684 0.734
0.675 0.683 0.818 0.693 0.707 0.718 0.740
(idf) 0.629 0.655 0.804 0.702 0.711 0.707 0.700
(idf) 0.652 0.646 0.773 0.667 0.676 0.689 0.734
(idf) 0.673 0.673 0.823 0.708 0.719 0.721 0.739
0.658 0.724 0.811 0.743 0.727 0.720 0.744
0.685 0.714 0.778 0.711 0.718 0.713 0.759
0.710 0.745 0.833 0.756 0.746 0.751 0.775
(idf) 0.644 0.721 0.815 0.740 0.734 0.736 0.734
(idf) 0.683 0.705 0.783 0.718 0.720 0.726 0.751
(idf) 0.703 0.737 0.838 0.761 0.752 0.764 0.767
0.694 0.736 0.822 0.764 0.741 0.754 0.737
0.706 0.725 0.785 0.732 0.741 0.750 0.760
0.722 0.747 0.822 0.764 0.758 0.767 0.765
(idf) 0.686 0.733 0.836 0.772 0.760 0.767 0.738
(idf) 0.697 0.717 0.796 0.741 0.753 0.757 0.762
(idf) 0.714 0.740 0.835 0.774 0.773 0.776 0.767
0.595 0.579 0.779 0.632 0.626 0.688 0.646
0.603 0.560 0.746 0.617 0.624 0.689 0.677
0.610 0.580 0.775 0.636 0.639 0.700 0.675
(idf) 0.616 0.603 0.795 0.665 0.659 0.693 0.649
(idf) 0.614 0.583 0.765 0.640 0.648 0.697 0.688
(idf) 0.627 0.603 0.795 0.663 0.665 0.707 0.684
0.620 0.622 0.796 0.648 0.648 0.694 0.660
0.622 0.601 0.758 0.628 0.645 0.684 0.701
0.635 0.627 0.794 0.654 0.664 0.705 0.698
(idf) 0.635 0.633 0.808 0.673 0.672 0.688 0.649
(idf) 0.626 0.611 0.770 0.646 0.661 0.682 0.700
(idf) 0.646 0.636 0.809 0.675 0.682 0.700 0.695
0.565 0.594 0.769 0.631 0.649 0.672 0.643
0.592 0.586 0.734 0.618 0.647 0.673 0.686
0.595 0.605 0.768 0.641 0.664 0.686 0.683
(idf) 0.599 0.618 0.795 0.670 0.686 0.690 0.657
(idf) 0.624 0.605 0.768 0.652 0.680 0.684 0.698
(idf) 0.630 0.624 0.798 0.676 0.698 0.698 0.694
Table 13: Absolute Pearson correlations with segment-level human judgments on WMT17 to-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples.
Setting Metric en-cs en-de en-fi en-lv en-ru en-tr en-zh
(# examples) 32K 3K 3K 3K 560 247 560
Unsupervised AutoDA 0.041 0.099 0.204 0.130 0.511 0.409 0.609
AutoDA-TECTO 0.336 - - - - - -
chrF 0.376 0.336 0.503 0.420 0.605 0.466 0.608
chrF+ 0.377 0.325 0.514 0.421 0.609 0.474 -
chrF++ 0.368 0.328 0.484 0.417 0.604 0.466 0.602
MEANT 2.0 - 0.350 - - - - 0.727
MEANT 2.0-nosrl 0.395 0.324 0.565 0.425 0.636 0.482 0.705
SentBLEU 0.274 0.269 0.446 0.259 0.468 0.377 0.642
TreeAggreg 0.361 0.305 0.509 0.383 0.535 0.441 0.566
Supervised BEER 0.398 0.336 0.557 0.420 0.569 0.490 0.622
BLEND - - - - 0.578 - -
BLEU2VEC 0.305 0.313 0.503 0.315 0.472 0.425 -
NGRAM2VEC - - 0.486 0.317 - - -
Pre-Trained 0.412 0.364 0.561 0.435 0.606 0.579 0.759
0.443 0.430 0.587 0.480 0.663 0.571 0.804
0.440 0.404 0.587 0.466 0.653 0.587 0.806
(idf) 0.411 0.328 0.568 0.444 0.616 0.555 0.741
(idf) 0.449 0.416 0.591 0.479 0.665 0.579 0.796
(idf) 0.447 0.379 0.588 0.470 0.657 0.571 0.793
0.406 0.383 0.553 0.423 0.562 0.611 0.722
0.446 0.436 0.587 0.458 0.626 0.652 0.779
0.444 0.424 0.577 0.456 0.613 0.628 0.778
(idf) 0.419 0.367 0.557 0.427 0.571 0.595 0.719
(idf) 0.450 0.424 0.592 0.464 0.632 0.644 0.770
(idf) 0.448 0.419 0.580 0.459 0.617 0.644 0.771
Table 14: Absolute Pearson correlation ($|r|$) and Kendall correlation ($\tau$) with segment-level human judgments on WMT17 from-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples.
Setting Metric cs-en de-en fi-en lv-en ru-en tr-en zh-en
(# systems) 4 11 6 9 9 10 16
Unsupervised BLEU 0.971 0.923 0.903 0.979 0.912 0.976 0.864
CDER 0.989 0.930 0.927 0.985 0.922 0.973 0.904
CharacTER 0.972 0.974 0.946 0.932 0.958 0.949 0.799
chrF 0.939 0.968 0.938 0.968 0.952 0.944 0.859
chrF++ 0.940 0.965 0.927 0.973 0.945 0.960 0.880
MEANT 2.0 0.926 0.950 0.941 0.970 0.962 0.932 0.838
MEANT 2.0-nosrl 0.902 0.936 0.933 0.963 0.960 0.896 0.800
NIST 1.000 0.931 0.931 0.960 0.912 0.971 0.849
PER 0.968 0.951 0.896 0.962 0.911 0.932 0.877
TER 0.989 0.906 0.952 0.971 0.912 0.954 0.847
TreeAggreg 0.983 0.920 0.977 0.986 0.918 0.987 0.861
UHH_TSKM 0.996 0.937 0.921 0.990 0.914 0.987 0.902
WER 0.987 0.896 0.948 0.969 0.907 0.925 0.839
Supervised AutoDA 0.438 0.959 0.925 0.973 0.907 0.916 0.734
BEER 0.972 0.960 0.955 0.978 0.936 0.972 0.902
BLEND 0.968 0.976 0.958 0.979 0.964 0.984 0.894
BLEU2VEC 0.989 0.936 0.888 0.966 0.907 0.961 0.886
NGRAM2VEC 0.984 0.935 0.890 0.963 0.907 0.955 0.880
Pre-Trained 0.975 0.936 0.991 0.993 0.918 0.981 0.892
0.995 0.975 0.944 0.978 0.953 0.991 0.975
0.987 0.961 0.979 0.991 0.937 0.991 0.953
(idf) 0.983 0.937 0.998 0.992 0.939 0.985 0.878
(idf) 0.997 0.981 0.962 0.968 0.977 0.985 0.949
(idf) 0.992 0.967 0.995 0.992 0.960 0.996 0.951
0.982 0.926 0.990 0.987 0.916 0.970 0.899
0.999 0.979 0.950 0.982 0.957 0.977 0.985
0.994 0.957 0.986 0.994 0.938 0.980 0.960
(idf) 0.989 0.936 0.992 0.979 0.931 0.976 0.892
(idf) 0.999 0.987 0.962 0.980 0.975 0.979 0.973
(idf) 0.997 0.968 0.995 0.997 0.956 0.989 0.963
0.981 0.937 0.991 0.996 0.921 0.987 0.905
0.996 0.975 0.953 0.985 0.954 0.992 0.977
0.990 0.960 0.981 0.995 0.938 0.992 0.957
(idf) 0.986 0.938 0.998 0.995 0.939 0.994 0.897
(idf) 0.997 0.982 0.967 0.979 0.974 0.992 0.966
(idf) 0.994 0.965 0.993 0.995 0.958 0.998 0.959
0.987 0.930 0.984 0.966 0.916 0.963 0.955
0.999 0.982 0.947 0.979 0.956 0.986 0.984
0.996 0.961 0.993 0.993 0.937 0.983 0.982
(idf) 0.990 0.938 0.980 0.956 0.929 0.967 0.962
(idf) 0.998 0.987 0.963 0.979 0.971 0.986 0.974
(idf) 0.996 0.970 0.999 0.994 0.952 0.989 0.982
0.989 0.948 0.984 0.949 0.927 0.960 0.967
0.998 0.988 0.957 0.983 0.969 0.982 0.984
0.996 0.973 0.997 0.991 0.949 0.984 0.987
(idf) 0.989 0.959 0.975 0.935 0.944 0.968 0.974
(idf) 0.995 0.991 0.962 0.979 0.981 0.981 0.970
(idf) 0.996 0.982 0.998 0.991 0.965 0.991 0.984
0.994 0.963 0.995 0.990 0.944 0.981 0.974
0.995 0.991 0.962 0.981 0.973 0.985 0.984
0.999 0.982 0.992 0.996 0.961 0.988 0.989
(idf) 0.995 0.970 0.997 0.985 0.955 0.988 0.979
(idf) 0.994 0.992 0.967 0.977 0.983 0.988 0.972
(idf) 0.999 0.989 0.996 0.997 0.972 0.994 0.987
0.988 0.938 0.993 0.993 0.914 0.974 0.960
0.999 0.978 0.956 0.977 0.946 0.981 0.980
0.996 0.963 0.986 0.991 0.932 0.981 0.978
(idf) 0.992 0.951 0.998 0.996 0.930 0.982 0.939
(idf) 0.999 0.986 0.968 0.973 0.964 0.987 0.955
(idf) 0.998 0.974 0.996 0.994 0.950 0.990 0.970
0.991 0.944 0.996 0.995 0.924 0.982 0.943
0.996 0.981 0.945 0.971 0.961 0.986 0.958
0.999 0.969 0.986 0.992 0.945 0.992 0.961
(idf) 0.995 0.955 0.999 0.996 0.941 0.985 0.937
(idf) 0.993 0.985 0.951 0.960 0.975 0.974 0.910
(idf) 1.000 0.978 0.994 0.993 0.962 0.994 0.954
0.983 0.933 0.994 0.989 0.918 0.973 0.928
0.998 0.978 0.949 0.983 0.957 0.985 0.972
0.994 0.960 0.985 0.995 0.938 0.984 0.964
(idf) 0.986 0.940 0.997 0.992 0.939 0.979 0.916
(idf) 0.999 0.983 0.966 0.980 0.975 0.991 0.952
(idf) 0.995 0.967 0.996 0.998 0.959 0.993 0.958
Table 15: Absolute Pearson correlations with system-level human judgments on WMT17 to-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems.
Setting Metric en-cs en-de en-lv en-ru en-tr en-zh
(# systems) 14 16 17 9 8 11
Unsupervised BLEU 0.956 0.804 0.866 0.898 0.924 --
CDER 0.968 0.813 0.930 0.924 0.957 --
CharacTER 0.981 0.938 0.897 0.939 0.975 0.933
chrF 0.976 0.863 0.955 0.950 0.991 0.976
chrF++ 0.974 0.852 0.956 0.945 0.986 0.976
MEANT 2.0 -- 0.858 -- -- -- 0.956
MEANT 2.0-nosrl 0.976 0.770 0.959 0.957 0.991 0.943
NIST 0.962 0.769 0.935 0.920 0.986 --
PER 0.954 0.687 0.851 0.887 0.963 --
TER 0.955 0.796 0.909 0.933 0.967 --
TreeAggreg 0.947 0.773 0.927 0.921 0.983 0.938
UHH_TSKM -- -- -- -- -- --
WER 0.954 0.802 0.906 0.934 0.956 --
Supervised AutoDA 0.975 0.603 0.729 0.850 0.601 0.976
BEER 0.970 0.842 0.930 0.944 0.980 0.914
BLEND -- -- -- 0.953 -- --
BLEU2VEC 0.963 0.810 0.859 0.903 0.911 --
NGRAM2VEC -- -- 0.862 -- -- --
Pre-Trained 0.959 0.798 0.960 0.946 0.981 0.970
0.982 0.909 0.957 0.980 0.979 0.994
0.976 0.859 0.959 0.966 0.980 0.992
(idf) 0.963 0.760 0.960 0.947 0.984 0.971
(idf) 0.985 0.907 0.955 0.981 0.984 0.982
(idf) 0.979 0.841 0.958 0.968 0.984 0.991
0.967 0.825 0.965 0.953 0.974 0.977
0.980 0.902 0.965 0.982 0.977 0.979
0.979 0.868 0.969 0.971 0.976 0.986
(idf) 0.968 0.809 0.965 0.955 0.980 0.975
(idf) 0.981 0.894 0.964 0.984 0.983 0.968
(idf) 0.979 0.856 0.966 0.973 0.982 0.979
Table 16: Absolute Pearson correlations with system-level human judgments on WMT17 from-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems.
Setting Metric cs-en de-en et-en fi-en ru-en tr-en zh-en
(# examples) 5K 78K 57K 16K 10K 9K 33K
Unsupervised CharacTER 0.256 0.450 0.286 0.185 0.244 0.172 0.202
ITER 0.198 0.396 0.235 0.128 0.139 -0.029 0.144
Meteor++ 0.270 0.457 0.329 0.207 0.253 0.204 0.179
SentBLEU 0.233 0.415 0.285 0.154 0.228 0.145 0.178
UHH_TSKM 0.274 0.436 0.300 0.168 0.235 0.154 0.151
YiSi-0 0.301 0.474 0.330 0.225 0.294 0.215 0.205
YiSi-1 0.319 0.488 0.351 0.231 0.300 0.234 0.211
YiSi-1 srl 0.317 0.483 0.345 0.237 0.306 0.233 0.209
Supervised BEER 0.295 0.481 0.341 0.232 0.288 0.229 0.214
BLEND 0.322 0.492 0.354 0.226 0.290 0.232 0.217
RUSE 0.347 0.498 0.368 0.273 0.311 0.259 0.218
Pre-Trained 0.349 0.522 0.373 0.264 0.325 0.264 0.232
0.370 0.528 0.378 0.291 0.333 0.257 0.244
0.373 0.531 0.385 0.287 0.341 0.266 0.243
(idf) 0.352 0.524 0.382 0.27 0.326