Exploring the Robustness of NMT Systems to Nonsensical Inputs
Neural machine translation (NMT) systems have been shown to produce undesirable translations when a small change is made to the source sentence. In this paper, we study the behaviour of NMT systems when multiple changes are made to the source sentence. In particular, we ask the following question: “Is it possible for an NMT system to predict the same translation even when multiple words in the source sentence have been replaced?”. To this end, we propose a soft-attention based technique to make the aforementioned word replacements. The experiments are conducted on two language pairs, English-German (en-de) and English-French (en-fr), and two state-of-the-art NMT systems: a BLSTM-based encoder-decoder with attention and the Transformer. The proposed soft-attention based technique achieves a high success rate and outperforms existing methods such as HotFlip by a significant margin in all the conducted experiments. The results demonstrate that state-of-the-art NMT systems are unable to capture the semantics of the source language. The proposed soft-attention based technique is an invariance-based adversarial attack on NMT systems. To better evaluate such attacks, we propose an alternate metric and argue its benefits in comparison with success rate.
Neural machine translation (NMT) systems, with the advent of Transformers, have achieved remarkable success in the past few years. Unlike recurrent architectures, Transformers are composed solely of attention layers, which allows for training at a lower cost (FLOPs). Moreover, Transformers have been shown to outperform recurrent architectures. Recently, BERT-based embeddings have been used on a variety of natural language processing (NLP) tasks such as question answering and natural language inference, achieving state-of-the-art results. In the field of computer vision, CNN-based systems, despite achieving impressive performance, have been shown to have blind spots that make them vulnerable to adversarial attacks. Given such widespread usage of the Transformer architecture, a natural question arises: “Are Transformers robust to noise in the input text, or do they have blind spots as well?”.
In this regard, several studies have investigated the robustness of NLP systems. Feng et al. show that question answering models predict the same answer even when most of the words are removed from the question. To fool a text classifier, Ebrahimi et al. introduce the HotFlip technique, which shows that a character-level NLP model changes its prediction when a few characters are flipped. In the ideal scenario, we want a character-level NLP model to be invariant to character flips (especially when the number of flips is small); hence, it is undesirable if the prediction of a model changes when a few characters in the input text are flipped. Belinkov and Bisk show that NMT systems can be fooled via synthetic and natural character-level noise. The HotFlip technique has also been extended to neural machine translation (NMT) systems; however, that work studies the robustness of a CNN-based recurrent architecture rather than the Transformer. With regard to the robustness of the Transformer, recent studies [8, 9] have shown that Transformers predict different translations for two semantically similar source sentences.
|src||Not a single body should remain undiscovered or unidentified .|
|adv-src||unaware topic single body should remain undsubmitted covered Within uniunclear|
|pred||Kein einziger Körper sollte unbehandelt oder geklärt bleiben .|
In this paper, we explore the following question: “Is it possible for an NMT system to predict the same translation even when multiple words in the source sentence have been replaced?”. This is different from the work of Cheng et al. [8, 9], where one expects the NMT system to predict the same translation for semantically similar sentences. We perform the experiments on subword-level NMT systems, namely a BLSTM-based encoder-decoder with attention and the Transformer. Given a source sentence, our goal is to replace multiple words with new words while ensuring that the predicted translation remains unchanged. To achieve this, we propose a soft-attention based technique. Table I shows one such example of the proposed technique. Since the NMT model is subword-level, in our experiments we replace a subword with a word. For example, in Table I, the word “undiscovered” is broken into three subwords: und, is and covered. The subword “is” is replaced by the word “submitted”, leading to the phrase “undsubmitted covered” in the adversarial source sentence. In contrast to character flips, we want the NMT system to be sensitive (i.e. not invariant) to word flips, especially when multiple words are replaced; such word-level invariances captured by the model are undesirable. The proposed technique might generate sentences which are semantically incorrect, as shown in Table I. Even in such cases, the NMT system is expected to give different translations, since assigning the same translation to two completely different sentences may lead to a lack of trust of the end user in the NMT system. This is in line with the work of He and Glass, where a dialogue generator is expected to never output egregious sentences regardless of the semantic correctness of the input sentence.
I-A Related work
Several attempts have been made to study the robustness of NMT systems to noise in the input text [5, 6, 8, 9, 12]. Ebrahimi et al. propose HotFlip to attack a character-level NMT system. The HotFlip technique encodes a character flip as a vector and chooses the optimal character flip based on the directional derivatives of the loss, computed from the gradient with respect to the one-hot input, along the flip vectors. These directional derivatives give a first-order estimate of the change in loss when a character is flipped. The technique can also be used to flip words instead of characters. Cheng et al. show that replacing words with their synonyms leads to erroneous translations and propose an adversarial learning framework to ensure that the two source sentences, the original and its noisy counterpart, are given similar representations by the encoder of the NMT system. Cheng et al. show that NMT systems output different translations for two semantically similar source sentences; the authors propose a training framework which uses the original training data along with the noisy data to enhance the robustness of NMT systems to such noise. Liu et al. show that NMT systems are extremely sensitive to homophone noise and propose a joint embedding of the textual and phonetic information of a word to improve robustness to homophone noise.
In this paper, we study the robustness of NMT systems from a different perspective: we replace multiple words in the source sentence while trying to ensure that the predicted translation is unchanged. An NMT system is expected to output a different translation when the replacement of multiple words completely changes the semantics of the original source sentence. This is different from prior works, where one expects the NMT system to output the same (or a similar) translation for the noisy source sentence. To replace multiple words, we propose an invariance-based adversarial attack. The task of multiple replacements can be broken down into two subtasks: (i) traversing the position indices for replacement (i.e. the order in which words are replaced) and (ii) replacing the word at a given position. We propose novel strategies for each of the two subtasks and show that they outperform baseline methods. The experiments are conducted on two language pairs, English-German (en-de) and English-French (en-fr), and two state-of-the-art NMT systems: a BLSTM-based encoder-decoder with attention and the Transformer.
I-B Contributions of this work
The main contributions of this paper are summarized below:
We show that the current state-of-the-art NMT systems are indeed not robust to multiple word replacements, which indicates that they are unable to capture the semantics of the source language.
We propose a novel technique, based on the norm of the gradient of the loss with respect to the input embeddings, to traverse the position indices for replacement. The results show that the proposed technique outperforms the random baseline.
Given the traversal of position indices, we propose a soft-attention based technique for choosing a replacement word. The results show that the proposed technique outperforms HotFlip in all experimental settings by a significant margin.
We propose a BLEU-based metric to evaluate the effectiveness of an invariance-based attack and show the merits of the proposed metric in comparison to success rate.
In this section, we describe the proposed method in detail. In Section II-A, we outline the vocabulary pruning method, a pre-processing step of the proposed method. In Section II-B, we describe the proposed technique for position index traversal, and in Section II-C, the proposed technique for word replacement. Finally, in Section II-D, we describe how the two techniques are combined to make multiple replacements over the source sentence.
II-A Vocabulary Pruning
The NMT models in the present work use a shared vocabulary for the source and target languages. Let $\mathcal{V}$ denote this shared vocabulary set. We use the training corpus in the source language to find the set of unique source-language words, denoted $\mathcal{U}$. We consider the set intersection $\mathcal{V} \cap \mathcal{U}$, which is the set of proper words of the source language present in the vocabulary of the NMT model. Let $s$ denote the original sentence in the source language. Given $s$, we remove the words present in the original sentence from this set, i.e. we form the pruned vocabulary $(\mathcal{V} \cap \mathcal{U}) \setminus \{w : w \in s\}$. We use this pruned vocabulary to select new words for replacement.
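The pruning step above is plain set arithmetic; the following minimal sketch illustrates it with a toy vocabulary and sentence (the actual method operates on the NMT model's shared subword vocabulary and the full training corpus):

```python
def prune_vocabulary(shared_vocab, source_corpus_words, source_sentence):
    """Return candidate replacement words: proper source-language words that
    appear in the model vocabulary but not in the original sentence."""
    proper_words = shared_vocab & source_corpus_words   # set intersection
    return proper_words - set(source_sentence.split())  # drop words already in the sentence

# Toy example (hypothetical vocabulary and sentence).
vocab = {"body", "single", "remain", "topic", "unaware", "Körper"}
corpus_words = {"body", "single", "remain", "topic", "unaware"}
candidates = prune_vocabulary(vocab, corpus_words, "Not a single body should remain")
print(sorted(candidates))  # → ['topic', 'unaware']
```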
II-B Position Indices Traversal
Let $s = (w_1, w_2, \ldots, w_n)$ denote a sentence in the source language and let $x$ denote the one-hot representation of the sentence, i.e. $x_{ij}$ is $1$ if word $j$ of the vocabulary is present at position $i$ and $0$ otherwise. Let $(e_1, e_2, \ldots, e_n)$ denote the embedded version of the input, where the $e_i$'s are $d$-dimensional, and let $\hat{y}$ denote the predicted translation of the NMT model for the original source sentence. We consider the standard negative log-likelihood loss given by

$L(x) = -\sum_{t} \log p(\hat{y}_t \mid \hat{y}_{<t}, x)$

where $p(\hat{y}_t \mid \hat{y}_{<t}, x)$ denotes the probability assigned to the word $\hat{y}_t$ by the NMT model and $x$ is the one-hot representation of the source sentence. Let $\mathcal{T}$ denote the set of position indices which have already been traversed. We choose the position for replacement, $i^*$, using the following equation:

$i^* = \operatorname{arg\,min}_{i \notin \mathcal{T}} \lVert \nabla_{e_i} L \rVert_2$

where $\lVert \cdot \rVert_2$ is the $\ell_2$-norm, $e_i$ is the embedding at position $i$ and $\nabla_{e_i} L$ is the gradient of the loss function with respect to $e_i$. The rationale behind choosing the replacement position in this way is that $\lVert \nabla_{e_i} L \rVert_2$ measures the sensitivity of the loss function to the embedding $e_i$; hence, changing the word at the position with the minimum $\ell_2$-norm should not have a large impact on the predicted translation. We refer to this technique as Min-Grad. We summarize the Min-Grad method in Algorithm 1.
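The position-selection rule can be sketched as follows, assuming the per-position gradients of the loss with respect to the input embeddings have already been computed (here they are a toy array of shape [sentence_length, embedding_dim], not gradients from an actual NMT model):

```python
import numpy as np

def min_grad_position(grad_wrt_embeddings, traversed):
    """Pick the untraversed position whose gradient has the smallest L2 norm."""
    norms = np.linalg.norm(grad_wrt_embeddings, axis=1)  # L2 norm per position
    norms[list(traversed)] = np.inf                      # exclude traversed positions
    return int(np.argmin(norms))

grads = np.array([[0.5, 0.5], [0.1, 0.1], [1.0, 0.0]])  # toy gradients, 3 positions
assert min_grad_position(grads, traversed=set()) == 1    # smallest norm overall
assert min_grad_position(grads, traversed={1}) == 0      # next smallest once 1 is excluded
```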
II-C Word Replacement
Let $i^*$ denote the position chosen for word replacement. We replace the one-hot vector at position $i^*$ with a probability distribution $p = (p_1, p_2, \ldots, p_{|\mathcal{V}|})$ over the vocabulary, where $p_j$ is set to $0$ if word $j$ does not belong to the pruned vocabulary set; all the other $p_j$'s are set to be equal initially. Let $\tilde{x}$ denote the modified input. We modify the non-zero $p_j$'s using gradient descent in order to minimize $L(\tilde{x})$; note that only the non-zero $p_j$'s are modified. We update the $p_j$'s until either the maximum number of iterations is reached or a particular word is assigned a probability greater than a threshold for a fixed number of consecutive iterations. Finally, for the position $i^*$, we choose the word $j^* = \operatorname{arg\,max}_j p_j$. Since this technique picks a word using soft attention over the vocabulary set, we refer to it as Soft-Att. We summarize the Soft-Att method in Algorithm 2.
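The Soft-Att update loop can be sketched as below. A toy differentiable loss stands in for the NMT loss, and the hyperparameter names `lr`, `max_iters`, `tau` and `k` are hypothetical (the paper's actual values are not recoverable here); the sketch only illustrates the mechanics of relaxing the word choice to a distribution and stopping once one word dominates:

```python
import numpy as np

def soft_att_replace(vocab, allowed, loss_grad, lr=0.5, max_iters=200, tau=0.9, k=3):
    # Uniform distribution over the allowed (pruned-vocabulary) words, zero elsewhere.
    p = np.array([1.0 / len(allowed) if w in allowed else 0.0 for w in vocab])
    mask = p > 0
    streak = 0
    for _ in range(max_iters):
        p[mask] -= lr * loss_grad(p)[mask]      # gradient step on the non-zero entries only
        p[mask] = np.clip(p[mask], 1e-8, None)  # keep a valid distribution
        p[mask] /= p[mask].sum()
        streak = streak + 1 if p.max() > tau else 0
        if streak >= k:                         # one word dominated for k iterations
            break
    return vocab[int(np.argmax(p))]

vocab = ["topic", "unaware", "submitted"]
target = 2                                      # toy loss prefers "submitted"

def toy_grad(p):                                # gradient of -log p[target]
    g = np.zeros_like(p)
    g[target] = -1.0 / max(p[target], 1e-8)
    return g

chosen = soft_att_replace(vocab, allowed={"topic", "unaware", "submitted"}, loss_grad=toy_grad)
print(chosen)  # → submitted
```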
II-D Proposed method
In order to make multiple replacements over the original source sentence, , we use the two methods (Min-Grad and Soft-Att) iteratively. We name the proposed method Min-Grad + Soft-Att.
The proposed method makes at most a fixed number of sweeps over the source sentence. Within a particular sweep, we choose the position of replacement using the Min-Grad method, followed by the Soft-Att method to identify the new word to replace with at that position. Note that Soft-Att always picks a word from the pruned vocabulary set. Whether the replacement actually takes place depends on a minimum-loss criterion. We initially set the minimum loss, $L_{min}$, to a very high value, which ensures that at least one replacement always takes place. If, in a previous sweep, a replacement has already taken place at the position identified by Min-Grad, then we compare the loss obtained from the Soft-Att method with the loss of the current sentence; if the Soft-Att loss is lower, the replacement is made and $L_{min}$ is updated accordingly. The logic behind this step is to ensure that the new source sentence is better than the old one in terms of the loss. If, on the other hand, no replacement has taken place so far at the position identified by Min-Grad, then we compare the loss obtained from the Soft-Att method with $L_{min}$; if the Soft-Att loss is lower than $L_{min}$, the replacement is made and $L_{min}$ is updated accordingly. We update $L_{min}$ as the maximum of the loss obtained from the Soft-Att method and the original loss. Capping the minimum loss at the original loss allows us to make more replacements while ensuring an optimal solution at the same time. We stop the algorithm if no replacement takes place in a particular sweep. For ease of understanding, we summarize the proposed method in Algorithm 3.
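The control flow of the combined procedure can be sketched as below, with simplified stand-ins: `loss_of` replaces the NMT negative log-likelihood, and `select_position` / `propose_replacement` stand in for Min-Grad and Soft-Att. The bookkeeping (the traversed set, the minimum-loss criterion, the cap at the original loss, and the early stop on an unchanged sweep) mirrors the description above:

```python
import math

def attack(sentence, loss_of, select_position, propose_replacement, max_sweeps=5):
    min_loss = math.inf                       # very high initial value: first replacement always allowed
    original_loss = loss_of(sentence)
    replaced = set()                          # positions where a replacement has taken place
    for _ in range(max_sweeps):
        changed = False
        traversed = set()
        while len(traversed) < len(sentence):
            i = select_position(sentence, traversed)
            traversed.add(i)
            candidate, cand_loss = propose_replacement(sentence, i)
            # Compare against the current-sentence loss if this position was
            # already replaced in an earlier sweep, else against min_loss.
            reference = loss_of(sentence) if i in replaced else min_loss
            if cand_loss < reference:
                sentence = sentence[:i] + [candidate] + sentence[i + 1:]
                replaced.add(i)
                min_loss = max(cand_loss, original_loss)  # cap min loss at the original loss
                changed = True
        if not changed:                       # stop when a full sweep makes no replacement
            break
    return sentence

# Toy stand-ins: loss = total character count; the proposal always suggests "a".
loss_of = lambda s: sum(len(w) for w in s)
select_position = lambda s, t: min(set(range(len(s))) - t)
def propose_replacement(s, i):
    return "a", loss_of(s[:i] + ["a"] + s[i + 1:])

result = attack(["foo", "bar"], loss_of, select_position, propose_replacement)
print(result)  # → ['a', 'a']
```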
Apart from the proposed method, we also study three baseline methods, namely random + Soft-Att, Min-Grad + HotFlip and random + HotFlip. The random baselines refer to methods where the traversal of position indices is done randomly instead of via Min-Grad, and the HotFlip baselines refer to methods where word replacement is done via HotFlip instead of Soft-Att. Note that other methods such as [8, 9, 12] study the robustness of NMT systems in a different framework and hence are not applicable for comparison with the method presented here. HotFlip, being a general method for word/character replacement, is relevant to our setting and hence comparable to the proposed method.
III Implementation Details
We perform experiments on two language pairs from the TED talks dataset: (i) English-German (en-de) and (ii) English-French (en-fr). The dataset statistics for the two language pairs are given in Table II. We train the BLSTM-based encoder-decoder with attention translation model using OpenNMT-py for the two language pairs, following the standard implementation provided in the repository.
We use the Transformer base model configuration for both language pairs and closely follow the implementation provided by Sachan and Neubig for training the Transformer models. Both NMT models, the BLSTM-based encoder-decoder with attention and the Transformer, use byte pair encoding, and both use beam search during prediction. Table III shows the BLEU scores for the trained NMT models on the test set of the TED dataset. The BLEU scores for the Transformer are similar to the results reported by Sachan and Neubig. As expected, the Transformer achieves a higher BLEU score than the BLSTM-based encoder-decoder with attention for both language pairs.
|Model||Method||en-de Success Rate||en-de NOR (mean, median)||en-fr Success Rate||en-fr NOR (mean, median)|
|BLSTM||random + HotFlip||25.4%||0.23, 0.21||28.2%||0.21, 0.18|
|Min-Grad + HotFlip||31.8%||0.22, 0.19||40.2%||0.19, 0.17|
|random + Soft-Att||61.2%||0.58, 0.62||64.6%||0.62, 0.67|
|Min-Grad + Soft-Att||67.8%||0.58, 0.61||70.8%||0.61, 0.66|
|Transformer||random + HotFlip||35.0%||0.26, 0.24||40.6%||0.24, 0.21|
|Min-Grad + HotFlip||45.0%||0.26, 0.24||44.0%||0.23, 0.21|
|random + Soft-Att||50.2%||0.40, 0.39||59.0%||0.37, 0.35|
|Min-Grad + Soft-Att||61.6%||0.41, 0.42||64.8%||0.36, 0.34|
To study the proposed attack, we randomly select sentences from the test set of TED dataset. The values of the different hyperparameters are as follows: and . The size of the vocabulary set (i.e. the set of proper words in the source language) for English-German and English-French are and respectively. The code for the proposed attack will be made publicly available.
In this section, we discuss the results of the proposed method in comparison with the baseline methods. In Section IV-A, we look at the success rate and number of replacements of the different methods across NMT models. In Section IV-B, we evaluate the effectiveness of the various methods based on a BLEU-based metric and argue why this metric is more appropriate than success rate for measuring the effectiveness of invariance-based attacks. Note that we use BLSTM as shorthand for the BLSTM-based encoder-decoder with attention in this section. Finally, we analyze the nature of replacements in successful adversarial examples; Section IV-C presents our observations.
|Pair||Method||src||model under attack||other-pair model||counterpart of attacked||counterpart of other|
|en-de||random + HotFlip||51.04||80.49||47.53||36.42||43.66|
|Min-Grad + HotFlip||53.23||83.13||49.15||36.51||44.76|
|random + Soft-Att||32.01||84.79||29.72||20.62||27.85|
|Min-Grad + Soft-Att||31.17||88.55||31.09||20.63||27.43|
|en-fr||random + HotFlip||55.51||85.18||40.35||52.00||36.18|
|Min-Grad + HotFlip||57.92||88.40||41.98||54.39||37.68|
|random + Soft-Att||33.61||89.77||21.59||32.37||19.09|
|Min-Grad + Soft-Att||35.40||91.99||23.28||34.32||20.29|
|Pair||Method||src||model under attack||other-pair model||counterpart of attacked||counterpart of other|
|en-de||random + HotFlip||57.09||71.35||48.84||43.90||49.42|
|Min-Grad + HotFlip||59.28||75.55||50.38||45.96||52.26|
|random + Soft-Att||13.77||87.14||19.20||18.36||21.62|
|Min-Grad + Soft-Att||14.49||89.86||19.74||18.51||21.98|
|en-fr||random + HotFlip||60.87||79.62||39.28||58.60||41.73|
|Min-Grad + HotFlip||63.97||84.87||41.16||61.80||44.94|
|random + Soft-Att||12.99||92.44||10.62||28.34||12.12|
|Min-Grad + Soft-Att||12.66||93.87||9.95||27.21||11.92|
IV-A Success rate
Table IV shows the success rate and the mean and median of the number of replacements (normalized by the length of the original sentence) for the different methods. For a particular NMT model, we define the success rate of a method as the percentage of adversarial sentences which are assigned the same translation as the original source sentence by the NMT model. We report both the success rate and the number of replacements since, for two attacks with similar success rates, the one with more replacements is better; furthermore, the meaning of the sentence is more likely to have changed if the number of replacements is higher.
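The success-rate definition above amounts to an exact-match count over translation pairs; a minimal sketch, with toy translations in place of real model outputs:

```python
def success_rate(original_translations, adversarial_translations):
    """Percentage of adversarial sentences whose translation is identical
    to the translation of the corresponding original sentence."""
    pairs = list(zip(original_translations, adversarial_translations))
    hits = sum(1 for orig, adv in pairs if orig == adv)
    return 100.0 * hits / len(pairs)

# Toy example: one identical translation pair out of two.
orig = ["Kein einziger Körper sollte unbehandelt bleiben .", "Geht es um das Licht ?"]
adv = ["Kein einziger Körper sollte unbehandelt bleiben .", "Ist es um das Licht ?"]
print(success_rate(orig, adv))  # → 50.0
```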
1: Comparing Min-Grad and random: As we can see from Table IV, for both HotFlip and Soft-Att, the Min-Grad method gives a significant improvement in success rate in comparison with the random baseline across all the NMT models, while the number of replacements for Min-Grad is comparable with random. This shows that the improvement in success rate is meaningful, since otherwise an attack method could achieve a higher success rate simply by making fewer replacements.
2: Comparing Soft-Att and HotFlip: From Table IV, across all the NMT models, we can see that Soft-Att significantly outperforms HotFlip both in terms of success rate and number of replacements.
3: Comparing BLSTM and Transformer: Table IV shows that the Transformer is more robust to our proposed method than the BLSTM, as the proposed method makes fewer replacements and has a lower success rate for the Transformer than for the BLSTM on both language pairs. Interestingly, HotFlip has a higher success rate and a similar number of replacements for the Transformer than for the BLSTM.
Overall, as is evident from Table IV, our proposed method (Min-Grad + Soft-Att) achieves the highest success rate across the NMT models.
|Model||Method||en-de||en-fr|
|BLSTM||random + HotFlip||45.58||44.17|
|Min-Grad + HotFlip||46.46||45.40|
|random + Soft-Att||17.16||14.33|
|Min-Grad + Soft-Att||16.97||13.57|
|Transformer||random + HotFlip||39.63||39.77|
|Min-Grad + HotFlip||40.10||40.71|
|random + Soft-Att||25.08||23.38|
|Min-Grad + Soft-Att||24.35||24.26|
IV-B BLEU-based metric
While success rate is the most straightforward metric to measure the efficiency of an invariance-based attack on an NMT system, it has some disadvantages. As mentioned earlier, an attack method can achieve a higher success rate by making fewer replacements; hence, comparing the success rate and the number of replacements simultaneously is a better approach. However, there are still a few issues to address: (a) although a larger number of replacements does increase the chances of the meaning of the original sentence being changed, one can think of pathological examples where many replacements are made without a significant change in meaning; (b) it is possible that the original and adversarial sentences are assigned the same translation by the NMT model due to a property of the target language rather than a deficiency in the NMT model. As an example, if the target language has neither gender markers nor a continuous tense, then the two sentences “He is playing guitar.” and “She plays guitar.” will have the same translation.
To address these issues, we propose a BLEU-based metric to evaluate the efficiency of an invariance-based attack. In the present work, we have four NMT models, two for each language pair. Consider the case where Min-Grad + Soft-Att is used to attack the en-de Transformer, resulting in pairs of original/adversarial source sentences. To address issue (a), we can translate the original and adversarial source sentences into French using the en-fr Transformer: if the meaning has not changed significantly, we can expect the BLEU score between the French translations to be high. To address issue (b), we can translate the original and adversarial source sentences into German using the en-de BLSTM (since the target language is German): if the translations of the Transformer were similar due to a property of the target language, we can expect the BLEU score between the German translations by the BLSTM to be high as well.
To summarize, an effective invariance-based attack is expected to give pairs of original/adversarial source sentences whose corresponding translations by the model under attack have high BLEU scores and whose corresponding translations by the other NMT models have low BLEU scores.
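The cross-model evaluation protocol above can be sketched as a small wiring function. The translation models and the BLEU scorer are passed in as callables so that any implementation can be plugged in; the names below and the toy exact-match "scorer" are hypothetical stand-ins, not the paper's actual setup:

```python
def cross_model_bleu(src_pairs, models, bleu):
    """For each NMT model, compute BLEU between its translations of the
    original and the adversarial source sentences."""
    scores = {}
    for name, translate in models.items():
        originals = [translate(orig) for orig, _ in src_pairs]
        adversarials = [translate(adv) for _, adv in src_pairs]
        scores[name] = bleu(originals, adversarials)
    return scores

# Toy usage: one sentence pair, one stand-in "model", exact match as "BLEU".
pairs = [("not a single body", "unaware topic single body")]
models = {"en-de": lambda s: s.upper()}          # stand-in for an NMT system
exact = lambda a, b: 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)
scores = cross_model_bleu(pairs, models, exact)
print(scores)  # → {'en-de': 0.0}
```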
Table V shows the BLEU scores between the original and adversarial sentences (src) and between their respective translations by the four NMT models when a Transformer is under attack. In Table V, $M$ denotes the Transformer model under attack (e.g. en-de), $M'$ denotes the other Transformer model (e.g. en-fr), and $M_c$ and $M'_c$ are the BLSTM counterparts of $M$ and $M'$. Similarly, Table VI shows the BLEU scores between the original and adversarial sentences (src) and between their respective translations by the four NMT models when a BLSTM is under attack; in Table VI, $M$ denotes the BLSTM model under attack, $M'$ the other BLSTM model, and $M_c$, $M'_c$ their Transformer counterparts. For an attack to be effective, the BLEU score for the model under attack should be high and the other four BLEU scores should be low. Note that the BLEU score for src is related to the number of replacements reported in Table IV: the two metrics are inversely related, as more replacements imply a lower BLEU score for src.
From Tables V and VI, we can see that Soft-Att achieves a higher BLEU score for the model under attack than HotFlip in all experimental settings; moreover, the other four BLEU scores are lower for Soft-Att than for HotFlip. This result showcases the efficiency of the proposed method, since it outperforms HotFlip in terms of success rate, number of replacements and BLEU scores. The fact that, for the proposed method, the BLEU scores are low for the other NMT models also shows that the adversarial sentences are not transferable in nature; in other words, the pairs of original/adversarial sentences are specific to the NMT model under attack.
In a general setting, let there be $K$ NMT models denoted by $M_1, M_2, \ldots, M_K$, where $M_1$ is the NMT model under attack. Using the BLEU-based metric, we propose a composite score to evaluate the efficiency of an attack method: it combines the BLEU score for src with the BLEU scores for the NMT models, such that a lower composite score corresponds to a more effective attack. Table VII shows the composite scores for the different methods across NMT models. This table summarizes the results presented in Tables V and VI. The scores demonstrate that state-of-the-art NMT systems are unable to capture the semantics of the adversarial examples generated by the proposed method, Min-Grad + Soft-Att.
|en-de||src||And because God loves her , I did get married .|
|adv-src||plus because God loves them kilograms me been abused married .|
|pred||Und weil Gott sie sie liebt , wurde ich verheiratet .|
|src||I want to know the people behind my dinner choices .|
|adv-src||I want ordinarily know the humans behind my dinner flog arguments|
|pred||Ich möchte die Menschen hinter meinen Abendessen kennen .|
|en-fr||src||I was clearly more nervous than he was .|
|adv-src||adaptations was clearly more nervous label he was .|
|pred||J’étais clairement plus nerveux qu’il était .|
|src||A dome , one of these ten-foot domes .|
|adv-src||An dome pale an of Those exes 3 foot domEvelyn tat|
|pred||Un dôme , un de ces dômes de 3 mètres .|
|en-de||src||Is it something about the light ?|
|adv-src||Is Bald passage about the light ?|
|pred||Geht es um das Licht ?|
|src||So the whole is literally more than the sum of its parts .|
|adv-src||Small the whole is bucks more than number sum Von His parts rank|
|pred||Das Ganze ist mehr als die Summe seiner Teile .|
|en-fr||src||They look like the stuff we walk around with .|
|adv-src||Hudson look like the ping we walk fishes with .|
|pred||Ils ressemblent à ce que nous marchons avec .|
|src||There are many , many problems out there .|
|adv-src||look numerous supported stays behold problems hundred there .|
|pred||Il y a de nombreux problèmes .|
IV-C A Comment on Types of Words Replaced
To understand what types of words are replaced when generating successful adversarial examples, we examined the replacements and observed no clear trend: both highly frequent words (e.g. stop words) and thematic words get replaced. The model under attack remains invariant to the replacement of highly thematic as well as frequent words by semantically very different words; invariance is observed even when named entities (NEs) are introduced. We also checked whether specific parts of speech (POS) are particularly vulnerable, but no clear tendency emerged. These observations are illustrated by the examples given in Tables VIII (for the BLSTM-based encoder-decoder with attention model) and IX (for the Transformer-based translation model), all generated by the Min-Grad + Soft-Att method.
V Conclusion and Future Work
The proposed study exposes word-level undesirable invariances captured by NMT systems. We define an undesirable invariance as the scenario in which the predicted translation remains unchanged when multiple words in the source sentence are replaced, changing the semantics of the input sentence. Two language pairs, English-German (en-de) and English-French (en-fr), are considered to investigate the behaviour of two state-of-the-art NMT systems: a BLSTM-based encoder-decoder with attention and the Transformer. We break the problem of replacing words down into two sub-problems: traversing the position indices and replacing the word at a given position. Two techniques, Min-Grad and Soft-Att, are proposed for the two sub-problems. The results show that the proposed techniques significantly outperform the HotFlip and random baselines. We also propose an alternate BLEU-based metric to evaluate the effectiveness of an invariance-based attack and argue its benefits in comparison to success rate.
This study was motivated by the question of how robust NMT systems are to nonsensical inputs. Our results demonstrate that although state-of-the-art NMT systems achieve high BLEU scores, they do not adequately capture the semantics of the source sentence, which shows the need to enhance the robustness of NMT systems to nonsensical inputs. In [8, 9, 12], the authors modify the training algorithms to improve the robustness of NMT systems to the particular type of noise under consideration; a similar approach has been followed to develop robust image classifiers. However, for the proposed attack, developing a noise-aware training algorithm is challenging due to the lack of gold translations for the adversarial examples obtained via the proposed attack, which was not the case in the previous studies. Hence, developing robust training strategies to counter invariance-based attacks is a possible area of future research.
- A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 5998–6008.
- J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 4171–4186.
- S. Feng, E. Wallace, A. Grissom II, M. Iyyer, P. Rodriguez, and J. Boyd-Graber, “Pathologies of neural models make interpretations difficult,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018, pp. 3719–3728.
- J. Ebrahimi, A. Rao, D. Lowd, and D. Dou, “Hotflip: White-box adversarial examples for text classification,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, 2018, pp. 31–36.
- Y. Belinkov and Y. Bisk, “Synthetic and natural noise both break neural machine translation,” in International Conference on Learning Representations, 2018.
- J. Ebrahimi, D. Lowd, and D. Dou, “On adversarial examples for character-level neural machine translation,” in Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics, 2018, pp. 653–663.
- M. R. Costa-jussà and J. A. R. Fonollosa, “Character-based neural machine translation,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 357–361.
- Y. Cheng, Z. Tu, F. Meng, J. Zhai, and Y. Liu, “Towards robust neural machine translation,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Melbourne, Australia: Association for Computational Linguistics, Jul. 2018, pp. 1756–1766.
- Y. Cheng, L. Jiang, and W. Macherey, “Robust neural machine translation with doubly adversarial inputs,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, Jul. 2019, pp. 4324–4333.
- T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2015, pp. 1412–1421.
- T. He and J. Glass, “Detecting egregious responses in neural sequence-to-sequence models,” in International Conference on Learning Representations, 2019.
- H. Liu, M. Ma, L. Huang, H. Xiong, and Z. He, “Robust neural machine translation with joint textual and phonetic embedding,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, Jul. 2019, pp. 3044–3049.
- Y. Qi, D. Sachan, M. Felix, S. Padmanabhan, and G. Neubig, “When and why are pre-trained word embeddings useful for neural machine translation?” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). New Orleans, Louisiana: Association for Computational Linguistics, Jun. 2018, pp. 529–535.
- D. Sachan and G. Neubig, “Parameter sharing methods for multilingual self-attentional translation models,” in Proceedings of the Third Conference on Machine Translation. Association for Computational Linguistics, 2018.
- R. Sennrich, B. Haddow, and A. Birch, “Neural machine translation of rare words with subword units,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 1715–1725.
- K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002.
- A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.