
# A Constrained Sequence-to-Sequence Neural Model for Sentence Simplification

Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, Rui Yan
Institute of Computer Science and Technology, Peking University, China
zhangyaoyuan, yezhenxu, fengyansong, zhaody, ruiyan@pku.edu.cn
###### Abstract
† Equal contribution.

Sentence simplification reduces semantic complexity to benefit people with language impairments. Previous simplification studies at the sentence level and the word level have achieved promising results but still face great challenges. In sentence-level studies, sentences after simplification are fluent but sometimes not actually simplified. In word-level studies, words are simplified but the results may contain grammatical errors, because a word and its simpler synonym can differ in usage. In this paper, we propose a two-step simplification framework that combines word-level and sentence-level simplification, making use of their corresponding advantages. Based on this two-step framework, we implement a novel constrained neural generation model that simplifies sentences given simplified words. The final results on aligned Wikipedia and Simple Wikipedia datasets indicate that our method yields better performance than various baselines.


## 1 Introduction

Sentence simplification is a standard NLP task of reducing reading complexity for people with limited linguistic skills. In particular, children, non-native speakers and individuals with language impairments such as dyslexia Rello et al. (2013), aphasia Carroll et al. (1999) and autism Evans et al. (2014) would benefit from the task, which makes sentences easier to understand. The task has two categories: lexical simplification and sentence simplification. Both enable a paraphrasing process, turning a normal input into a simpler output while maintaining the same or similar semantics between the input and the output.

Inspired by the great achievements of machine translation, several studies treat sentence simplification as a monolingual translation task and achieve promising results Specia (2010); Zhu et al. (2010); Coster and Kauchak (2011); Wubben et al. (2012); Xu et al. (2016). These studies apply the phrase-based statistical machine translation (PB-SMT) model or the syntax-based translation model (SB-SMT). Both PB-SMT and SB-SMT require high-level features or even rules chosen empirically by humans. Recently, neural machine translation (NMT) based on the sequence-to-sequence (seq2seq) model Cho et al. (2014); Sutskever et al. (2014); Bahdanau et al. (2014) has shown more powerful capabilities than traditional SMT systems. NMT applies deep learning regimes and extracts features automatically without human supervision.

We observe that in sentence simplification the simple output sentence is usually derived from the normal input sentence with only parts of it changed, as shown in Table 1. Due to this intrinsic characteristic, applying machine translation methods directly, whether standard SMT or NMT, is likely to generate a sentence completely identical to the input. Although MT methods have indeed advanced research on sentence simplification, there is still plenty of room for improvement. To the best of our knowledge, there are so far few competitive simplification models built on translation models, even with neural network architectures.

Besides sentence-level simplification, there is lexical simplification, which substitutes long and infrequent words with their shorter and more frequent synonyms through complex word identification, substitution generation, substitution selection and other processes. Recent lexical simplification models by Horn et al. (2014), Glavaš and Štajner (2015), Paetzold and Specia (2016) and Pavlick and Callison-Burch (2016) have accumulated substantial numbers of synonymous word pairs. This makes it possible to simplify complex words before simplifying the whole sentence. However, even though synonyms have similar semantic meanings, they might have different usages. Replacing complex words with their simpler synonyms is an intuitive way to simplify sentences, but it does not always work due to potential grammatical errors after the switching (shown in Table 1). Moreover, lexical substitution is just one way to simplify sentences; we can also simplify sentences by splitting, deletion, reordering and so on.

For sentence-level simplification, we generally obtain output with few grammatical errors, although it is not guaranteed to be simplified; for lexical-level simplification, we can simplify the complicated parts of a sentence, but grammatical fluency is not always guaranteed. It is an intuitive and exciting idea to combine both methods and make use of their corresponding advantages, so that we can obtain simplified sentences with good readability and fluency. To be more specific, the simplification of an input sentence is conducted in two steps. 1) We first identify complex word(s) and replace them with their simpler synonyms according to a pre-constructed knowledge base¹. 2) We then generate a legitimate sentence containing the simplified word(s), with appropriate syntactic structure and grammar. Another key issue for the second step is that we need to maintain the same or similar semantic meaning as the input sentence. To this end, we still stick to the translation paradigm by translating the complex sentence into a simple sentence.

¹ A knowledge base such as PPDB contains millions of paraphrasing word pairs that map between complex and simple words Pavlick and Callison-Burch (2016).

In this paper, our contributions are as follows:

We propose a two-step simplification framework to combine the advantages of both word-level simplification and sentence-level simplification to make the generated sentences fluent, readable and simplified.

We implement a novel constrained seq2seq model which fits our task scenario: certain word(s) are required to exist in the seq2seq process. We start from the constraint of one given word and extend the constraints to multiple given words.

We evaluate the proposed method and neural model on English Wikipedia and Simple English Wikipedia datasets. The experimental results indicate that our model achieves better results than a series of baseline algorithms in terms of iBLEU scores, Flesch readability and human judgments.

This paper is organized as follows. In Section 2 we review related work; in Section 3 we describe our proposed method and model. In Section 4, we describe the experimental setups and present results. In Section 5, we conclude and discuss future work.

## 2 Related Work

In previous studies, researchers on sentence-level simplification mostly address the simplification task as a monolingual machine translation problem. Specia (2010) uses the standard PB-SMT implemented in the Moses toolkit Koehn et al. (2007) to translate the original sentences into simplified ones. Similarly, Coster and Kauchak (2011) extend the PB-SMT model by adding phrase deletion. Wubben et al. (2012) make a further effort by reranking Moses' n-best output based on its dissimilarity to the input. Most recently, Xu et al. (2016) have proposed an SB-SMT model, achieving better performance than Wubben's system. In general, sentence-level simplification maintains the semantic meaning and fluency but does not always guarantee literal simplification.

As for word-level simplification, there are impressive results as well. Horn et al. (2014) extract over 30,000 paraphrase rules for lexical simplification by identifying aligned words in English Wikipedia and Simple English Wikipedia. Glavaš and Štajner (2015) employ GloVe Pennington et al. (2014) to generate synonyms for complex words; instead of using parallel datasets, their approach only requires a single corpus. Paetzold and Specia (2016) propose a new word embedding model to address the limitation that traditional models do not accommodate ambiguous lexical semantics. Pavlick and Callison-Burch (2016) release about 4,500,000 simple paraphrase rules by extracting normal paraphrase rules from a bilingual corpus and reranking these rules by simplicity scores with a supervised model. Thanks to their efforts, there is a large number of effective methods for identifying complex words, finding corresponding simple synonyms and selecting qualified substitutions. However, directly replacing complicated words with simple synonyms sometimes violates grammar rules and usages.

Recent progress in deep learning with neural networks brings great opportunities for the development of stronger NLP systems such as neural machine translation (NMT). Deep learning is heavily driven by data, requiring little human effort to create high-level features. Specifically, the sequence-to-sequence RNN model Cho et al. (2014); Sutskever et al. (2014); Bahdanau et al. (2014); Mou et al. (2015, 2016) has a remarkable ability to characterize word sequences and generate natural language sentences. However, the seq2seq NMT model still shares a problem with other MT-based methods: a frequent lack of literal simplification.

Overall, sentence-level and word-level sentence simplification both have their strengths and weaknesses. In this paper, we propose a two-step method fusing their corresponding advantages.

## 3 Neural Generation Model

We generate simplified sentences using a sequence-to-sequence model trained on a parallel corpus, namely English Wikipedia and Simple English Wikipedia. We have the simplified word(s) identified in the first step, but the standard sequence-to-sequence model cannot guarantee the existence of such word(s). Therefore, we propose a constrained sequence-to-sequence (Constrained Seq2Seq) model that takes the given simplified word(s) as constraints during sentence generation.

### 3.1 Methodology

Since there have been many efforts on establishing word simplification pairs Horn et al. (2014); Glavaš and Štajner (2015); Paetzold and Specia (2016); Pavlick and Callison-Burch (2016), we do not focus on identifying which words require simplification or on selecting which simpler words to switch in. Instead, we change words according to these knowledge bases and proceed to the neural sentence generation model, assuming the word substitutions are correct based on previous studies on synonym alignment. To be more specific, given an input sentence, the simplification process is conducted in two steps:

Step 1. According to previous studies on lexical substitution, we first identify complex words in the input sentence and then substitute them with their simpler synonyms;

Step 2. Given the simplified words from the first step as constraints, we propose a constrained seq2seq model which encodes the input sentence as a vector (encoder) and decodes the vector into a simplified sentence (decoder). The generation process is conditioned on the simplified word(s) and consists of both backward and forward generation.

We proceed to introduce the proposed constrained seq2seq model in the next section.
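The two-step process above can be sketched as follows. This is a minimal illustration, not the actual implementation: the toy knowledge base `SIMPLE_SYNONYMS` and the `generate_with_constraint` argument are hypothetical stand-ins for resources such as Simple PPDB and for the constrained seq2seq model of Section 3.2.

```python
# Toy knowledge base mapping complex words to simpler synonyms (assumed;
# the paper relies on resources such as Simple PPDB for this step).
SIMPLE_SYNONYMS = {"hub": "center", "key": "important"}

def substitute_complex_words(tokens):
    """Step 1: replace complex words with simpler synonyms, collecting them."""
    out, constraints = [], []
    for tok in tokens:
        if tok in SIMPLE_SYNONYMS:
            out.append(SIMPLE_SYNONYMS[tok])
            constraints.append(SIMPLE_SYNONYMS[tok])
        else:
            out.append(tok)
    return out, constraints

def simplify(tokens, generate_with_constraint):
    """Step 2: regenerate the sentence once per constraint word."""
    tokens, constraints = substitute_complex_words(tokens)
    for word in constraints:
        tokens = generate_with_constraint(tokens, word)
    return tokens
```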

### 3.2 Constrained Seq2Seq Model

Given an input sequence $x=(x_1,\dots,x_n)$ and a switching word pair of a complex word and its simpler synonym $y_s$, we aim to generate a simplified sentence $y=(y_1,\dots,y_m)$ as the output where $y_s$ is contained in $y$, i.e., $y_s\in y$. There could be multiple constraint words, and we start from the simplest situation with only one constraint word. Here $n$ and $m$ denote the lengths of the source and target sentences respectively, and $|V|$ is the vocabulary size of the source and target sentences.

The simpler word $y_s$ splits the output sentence into two sequences: the backward sequence $y^b=(y_{s-1},\dots,y_1)$ and the forward sequence $y^f=(y_{s+1},\dots,y_m)$. The joint probabilities of $y^b$ and $y^f$ are:

$$p(y^b)=\prod_{i=1}^{s-1}p(y_{s-i}\mid y_s,\dots,y_{s-i+1},x)$$
$$p(y^f)=\prod_{i=1}^{m-s}p(y_{s+i}\mid y_1,\dots,y_s,\dots,y_{s+i-1},x) \quad (1)$$

As shown in Figure 1, to generate a sequence with a constraint $y_s$, we first generate the backward sequence from $y_{s-1}$ to $y_1$. Then, we generate the forward sequence conditioned on the generated sequence, from $y_{s+1}$ to $y_m$. In this way, the output sequence is $y=(\overline{y^b},y_s,y^f)$, where $\overline{y^b}$ is the reverse of $y^b$. In our paper, we apply the bi-directional recurrent neural network (BiRNN) Schuster and Paliwal (1997) with gated recurrent units (GRUs) Cho et al. (2014) for both the backward and forward generation processes. We encode the input sequence as follows:

$$\begin{aligned}
z_t&=\sigma(W_z e_t+U_z\overrightarrow{h}_{t-1})\\
r_t&=\sigma(W_r e_t+U_r\overrightarrow{h}_{t-1})\\
\tilde{h}_t&=\tanh(W_h e_t+U_h[r_t\circ\overrightarrow{h}_{t-1}])\\
\overrightarrow{h}_t&=(1-z_t)\circ\overrightarrow{h}_{t-1}+z_t\circ\tilde{h}_t
\end{aligned} \quad (2)$$

where $e_t$ is the embedding vector of word $x_t$ and $d$ denotes the word embedding dimensionality; $W_z,W_r,W_h$ and $U_z,U_r,U_h$ are weight matrices² and $q$ denotes the number of hidden states; $\overrightarrow{h}_t$ is the hidden state of the BiRNN's forward direction at time $t$. Likewise, the hidden state of the BiRNN's backward direction is denoted as $\overleftarrow{h}_t$.

² Bias terms are omitted for simplicity.

Finally, we concatenate the bidirectional hidden states as the sentence embedding:

$$h_t=[\overrightarrow{h}_t^\top;\overleftarrow{h}_t^\top]^\top \quad (3)$$
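As a concrete illustration, one forward-direction GRU step of Eq. (2) can be written in NumPy as below. This is a minimal sketch with biases omitted (as in the paper); the argument names and shapes are our assumptions.

```python
import numpy as np

def gru_step(e_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One forward-direction GRU step, mirroring Eq. (2); biases omitted."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    z_t = sigmoid(Wz @ e_t + Uz @ h_prev)              # update gate
    r_t = sigmoid(Wr @ e_t + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ e_t + Uh @ (r_t * h_prev))  # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_tilde        # new hidden state
```

The backward-direction state is computed by the same recurrence run over the reversed input.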

RNN Decoder. We adopt gated recurrent units with the attention mechanism in the decoder, where the hidden state $s_t$ is computed by:

$$\begin{aligned}
z'_t&=\sigma(W'_z e'_{t-1}+U'_z s_{t-1}+C_z c_t)\\
r'_t&=\sigma(W'_r e'_{t-1}+U'_r s_{t-1}+C_r c_t)\\
\tilde{s}_t&=\tanh(W_s e'_{t-1}+U'_s[r'_t\circ s_{t-1}]+C_s c_t)\\
s_t&=(1-z'_t)\circ s_{t-1}+z'_t\circ\tilde{s}_t
\end{aligned} \quad (4)$$

where $e'_{t-1}$ is the embedding vector of word $y_{t-1}$ and $d$ denotes the word embedding dimensionality; $W'_z,W'_r,W_s$, $U'_z,U'_r,U'_s$ and $C_z,C_r,C_s$ are weight matrices and $q$ denotes the number of hidden states. The initial hidden state $s_0$ is computed from $\bar{h}$, the mean of all hidden states of the encoder.

The context vector $c_t$ is recomputed at each step by an alignment model:

$$c_t=\sum_{j=1}^{n}\alpha_{tj}h_j \quad (5)$$
$$\alpha_{tj}=\frac{\exp(e_{tj})}{\sum_{k=1}^{n}\exp(e_{tk})}$$
$$e_{tj}=a(s_{t-1},h_j)$$

where $\alpha_{tj}$ is the alignment weight implemented by the function $a(\cdot)$, an attention mechanism that aligns the input token $x_j$ with the output token $y_t$. The more tightly they match each other, the higher the score.
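The alignment model of Eq. (5) can be sketched as follows. The paper only names the scoring function $a$; the additive feed-forward scorer below is the standard Bahdanau-style choice and is our assumption, as are the weight names `v`, `Wa`, `Ua`.

```python
import numpy as np

def attention_context(s_prev, H, v, Wa, Ua):
    """Compute the context vector c_t and weights alpha over encoder states H."""
    # e_tj = a(s_{t-1}, h_j): a small feed-forward scorer (assumed form).
    scores = np.array([v @ np.tanh(Wa @ s_prev + Ua @ h) for h in H])
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()            # softmax over source positions
    c_t = alpha @ H                        # weighted sum of encoder states
    return c_t, alpha
```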

With the decoder state $s_t$ and the context vector $c_t$, we approximately compute each conditional probability in either the backward sequence or the forward sequence as:

$$p(y_t\mid y_1,\dots,y_{t-1},x)=p(y_t\mid y_{t-1},s_t,c_t) \quad (6)$$

According to Eq. (1) and Eq. (6), we in turn obtain the backward sequence $y^b=(y_{s-1},\dots,y_1)$ and the forward sequence $y^f=(y_{s+1},\dots,y_m)$ with the maximal estimated probability by beam search. Finally, we concatenate the reversed backward sequence $\overline{y^b}$, the simpler word $y_s$ and the forward sequence $y^f$ to output the entire sentence $y$. Notice that $y_s$ can appear at any position in $y$.
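The backward-then-forward decoding can be sketched as below. `backward_step` and `forward_step` are hypothetical stubs standing in for the trained decoders, and this sketch is greedy for brevity, whereas the paper uses beam search.

```python
def generate_constrained(y_s, backward_step, forward_step, max_len=50):
    """Return y = reverse(y_b) + [y_s] + y_f around the constraint word y_s."""
    # Backward generation: emit y_{s-1}, ..., y_1 until the start symbol.
    y_b = []
    while len(y_b) < max_len:
        tok = backward_step([y_s] + y_b)   # context grows away from y_s
        if tok is None:                    # None stands in for the <s> symbol
            break
        y_b.append(tok)
    # Forward generation: condition on the already-generated prefix.
    prefix = list(reversed(y_b)) + [y_s]
    y_f = []
    while len(y_f) < max_len:
        tok = forward_step(prefix + y_f)
        if tok is None:                    # None stands in for the </s> symbol
            break
        y_f.append(tok)
    return prefix + y_f
```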

### 3.3 Multi-constrained Seq2Seq

We have just illustrated how to put a single constraint word into the sequence-to-sequence generation process, while in fact there can be more than one constraint word simplified before sentence generation, as shown in Table 1. We extend the single constraint to multiple constraints with a Multi-Constrained Seq2Seq model. Without loss of generality, we define the multiple constraint words as $\{w_1,\dots,w_K\}$ and illustrate Multi-Constrained Seq2Seq in Figure 2.

We first illustrate the situation with two simplified words, i.e., $K=2$, namely $w_1$ and $w_2$. We generally take the word replacing the complex word with the least term frequency as the first constraint word $w_1$ and use the same method as in Section 3.2 to generate the first output sentence. In the second round of generation, we take the first output sentence as the input and generate the second output sentence with $w_2$ as the constraint. Compared with the single-pass generation under a single constraint word, after a two-pass generation we have an output sentence containing both constraint words. The relative positions of $w_1$ and $w_2$ depend on the input sentence $x$.

When there are more than two constraint words, i.e., $K>2$, the system architecture remains the same, with more repeated passes of generation (shown in Figure 2). To decode the $k$-th output sentence, we encode the $(k-1)$-th output sentence as the input of the next pass of the multi-constrained seq2seq model.

Note that after each pass of generation, other constraint words may already have been deleted or simplified. If there are still complex word(s) that need to be simplified, the system repeats the simplification process.
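The multi-pass procedure of this section can be sketched as follows: one constrained decoding pass per simplified word, rarest first, skipping constraint words that an earlier pass has already removed. `term_frequency` and `constrained_seq2seq` are hypothetical stand-ins for a frequency lookup and the model of Section 3.2.

```python
def multi_constrained_simplify(sentence, constraints, term_frequency,
                               constrained_seq2seq):
    """Run one constrained generation pass per constraint word."""
    # Order constraints by ascending term frequency (rarest handled first).
    for word in sorted(constraints, key=term_frequency):
        if word not in sentence:   # may have been dropped in an earlier pass
            continue
        sentence = constrained_seq2seq(sentence, word)
    return sentence
```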

## 4 Experiment

### 4.1 Dataset and Setups

We evaluate our proposed approach on the parallel corpus of English Wikipedia and Simple English Wikipedia³. We randomly split the corpus into 123,626 sentence pairs (each pair being a normal sentence and its simplification) for training, 5,000 sentence pairs for validation and 600 for testing. There can be noise in the dataset, so we filter out test samples whose output is identical to the input without any simplification. We also lowercase all samples. Our vocabulary size is 60,000, and out-of-vocabulary words are mapped to the token "unk".

³ This dataset is available at http://www.cs.pomona.edu/~dkauchak/simplification.
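The token-level preprocessing described above amounts to the following sketch (the function name is ours; the 60,000-type vocabulary cap is applied when `vocab` is built):

```python
def preprocess(tokens, vocab):
    """Lowercase each token and map out-of-vocabulary tokens to "unk"."""
    return [t.lower() if t.lower() in vocab else "unk" for t in tokens]
```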

The RNN encoder and decoder of our model both have 1,000 hidden units; the word embedding dimensionality is 620. We use Adadelta Zeiler (2012) to optimize all parameters.

### 4.2 Comparison Methods

In this paper, we conduct the experiments on the English Wikipedia and Simple English Wikipedia datasets to compare our proposed method against several representative algorithms.

Moses. A standard phrase-based machine translation model Koehn et al. (2007).

SBMT. A syntax-based machine translation model Xu et al. (2016), implemented on the open-source Joshua toolkit Post et al. (2013). The simplification model is optimized for the SARI metric and leverages the PPDB dataset Pavlick and Callison-Burch (2016) as a rich source of simplification operations.

Lexical Substitution. This method only substitutes the complex words with the simplified word(s) that we use as constraint word(s) in our model, leaving the other words of the input sentence unchanged. This model shares the same hypothesis as our model.

Seq2Seq. The sequence-to-sequence model is the state-of-the-art neural machine translation model Cho et al. (2014) with the attention mechanism applied Bahdanau et al. (2014).

Constrained Seq2Seq. Our proposed neural sentence generation model based on the sequence-to-sequence paradigm with one constraint word. We use Multi-Constrained Seq2Seq to denote the scenario with more than one constraint word.

### 4.3 Evaluation Metrics


Automatic Evaluation. To evaluate the performance of different methods on the simplification task, we leverage four automatic evaluation metrics⁴: Flesch-Kincaid grade level (FK) Kincaid et al. (1975), SARI Xu et al. (2016), BLEU Papineni et al. (2002) and iBLEU Sun and Zhou (2012). FK is widely used for readability. SARI evaluates simplicity by explicitly measuring the goodness of words that are added, deleted and kept. BLEU was originally designed for MT and evaluates the output by n-gram matching between the output and the reference. Several studies indicate that BLEU alone is not really suitable for the simplification task Zhu et al. (2010); Štajner et al. (2015); Xu et al. (2016). In many cases of sentence simplification, the output sequence looks similar to the input sequence with only part of it simplified. This exposes a prominent insufficiency of the standard BLEU metric: even if the output performs no simplification operations on the input, it is still likely to obtain a high BLEU score. It is necessary to penalize an output sentence that is too similar to the input sentence. The iBLEU metric is therefore more suitable for simplification, as it balances similarity and simplicity. Given an output sentence $O$, a reference sentence $R$ and an input sentence $I$, iBLEU⁵ is defined as:

⁴ The highest n-gram order of all correlation-related metrics is set to 4 in our experiments.
⁵ $\alpha$ is set to 0.9 as suggested by Sun and Zhou (2012).

$$\text{iBLEU}=\alpha\times\text{BLEU}(O,R)-(1-\alpha)\times\text{BLEU}(O,I) \quad (7)$$
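Eq. (7) is straightforward to compute given any sentence-level BLEU implementation; the `bleu` argument below is a stand-in for one (e.g. NLTK's).

```python
def ibleu(bleu, output, reference, source, alpha=0.9):
    """iBLEU of Eq. (7): reward closeness to the reference while
    penalizing closeness to the input sentence."""
    return alpha * bleu(output, reference) - (1 - alpha) * bleu(output, source)
```

An output identical to the input is thus penalized by its full BLEU(O, I) term, which is exactly the failure mode of plain BLEU discussed above.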

Human Evaluation. Human judgment is the ultimate evaluation metric for natural language processing tasks. We randomly select 120 source sentences from our test dataset and invite 20 graduate students (including native speakers) to evaluate the simplified sentences of all systems against the source sentence. For fairness, we conduct a blind review: the evaluators are not aware of which methods produce which simplification results. Following earlier studies Wubben et al. (2012); Xu et al. (2016), we ask participants to rate Grammaticality (the extent to which the simplified sentence is grammatically correct and fluently readable), Meaning (the extent to which the simplified sentence preserves the meaning of the input) and Simplicity (the extent to which the simplified sentence is simpler than the input). All three human evaluation metrics are on a 5-point scale from 0 (lowest) to 4 (highest). Note that if a generated sentence is identical to the source sentence, we rate Grammaticality with 4 points, Meaning with 4 points and Simplicity with 0 points for this target sentence.

### 4.4 Overall Performance

The automatic evaluation results are listed in Table 2. Moses has the worst performance: it obtains a fair BLEU(O, R) score of 28.28, but its BLEU(O, I) score is 99.62, indicating that Moses fails to simplify most of the sentences. As a result of this failure, its FK, iBLEU and SARI scores are all quite low. SBMT performs similarly to Moses, neither simplifying the output sentences nor improving readability. The overall results of the Seq2Seq system are better than Moses and SBMT. Though its BLEU(O, R) score is a little lower than Moses and SBMT, its output sentences are mostly not identical to the input sentences, as its BLEU(O, I) score is only 66.94. It also achieves better FK (12.74), iBLEU (16.27) and SARI (33.16) scores than Moses and SBMT. Lexical Substitution only substitutes the complex words, so it obtains the highest BLEU(O, R) and SARI scores, but it gets the worst FK readability. In general, both Constrained Seq2Seq and Multi-Constrained Seq2Seq under our proposed framework outperform the baselines. They have higher similarity to the reference and lower similarity to the input than the other systems, so the iBLEU scores of our two systems are higher than the baselines, at 20.26 and 19.87 respectively. The SARI scores of our two systems are also quite high, and in terms of FK readability our two systems achieve the best results.

The human evaluation results are displayed in Table 3. Moses generates 116 sentences that are completely identical to the input sentences. As identical sentences are rated with 4 points for Grammaticality, 4 points for Meaning and 0 points for Simplicity, Moses gets the highest score (3.99) in both Grammaticality and Meaning but the lowest score (0.02) in Simplicity. Similarly, SBMT generates 99 sentences that are not really simplified, so it obtains results close to those of Moses. Seq2Seq outperforms the Moses and SBMT systems in overall performance, obtaining 3.28 in Grammaticality, 3.45 in Meaning and 0.96 in Simplicity. The Meaning and Simplicity scores of Lexical Substitution are rather high, but as its Grammaticality score shows, the sentences generated by Lexical Substitution contain many grammatical errors, which is not surprising. Our Constrained Seq2Seq and Multi-Constrained Seq2Seq outperform the baselines in Simplicity. The Meaning scores of our systems are 2.81 and 2.65; Simple English Wikipedia has a quite similar score of 2.83, which indicates that, to some extent, both our systems and Simple English Wikipedia lose some semantics when simplifying sentences. As for Grammaticality, Constrained Seq2Seq is better than Lexical Substitution. Multi-Constrained Seq2Seq performs worse than Constrained Seq2Seq in Grammaticality but better in Simplicity.

Judged by the overall performance, Constrained Seq2Seq and Multi-Constrained Seq2Seq outperform the other off-the-shelf sentence-level simplification methods, as their generated sentences are both literally simplified and legitimate.

### 4.5 Analysis and Case Studies

In Table 4, we show some typical examples from all systems. Among them, Moses, SBMT and the Seq2Seq model generate a sentence completely identical to the input, as they do in most cases. Lexical Substitution paraphrases the complex words "key", "hub" and "a great deal of" with the simple words "important", "center" and "many". As seen, the article before the word "important" should be changed from "a" to "an", but Lexical Substitution fails to handle such errors. Our proposed model generates output sentences conditioned on the simplified word "center" and deletes the complex phrase "a great deal of". Taking the sentence generated by the Constrained Seq2Seq model as input, the Multi-Constrained Seq2Seq model substitutes the less frequent word "key" with the word "important". It also replaces the adverbial clause "serving as … until the 1980s" with a simpler sentence structure "it became … until the 1980s", which shows that our models are more flexible and more effective than the baseline systems.

## 5 Conclusion

In this paper, we propose a new two-step method for sentence simplification by combining word-level simplification and sentence-level simplification. We run experiments on the parallel datasets of Wikipedia and Simple Wikipedia and the results show that our methods outperform various baselines with better readability, flexibility and simplicity achieved. In the future, we plan to take more factors (e.g., sentence length or grammar rules) into account and formulate them as constraints into our proposed model.

## References

• Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate .
• Carroll et al. (1999) John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplifying text for language-impaired readers. In Proceedings of EACL. pages 269–270.
• Cho et al. (2014) Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation .
• Coster and Kauchak (2011) William Coster and David Kauchak. 2011. Learning to simplify sentences using wikipedia. In Proceedings of the workshop on monolingual text-to-text generation. Association for Computational Linguistics, pages 1–9.
• Evans et al. (2014) Richard Evans, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR). pages 121–140.
• Glavaš and Štajner (2015) Goran Glavaš and Sanja Štajner. 2015. Simplifying lexical simplification: Do we need simplified corpora. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. pages 63–68.
• Horn et al. (2014) Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a lexical simplifier using wikipedia. In ACL (2). pages 458–463.
• Kincaid et al. (1975) J. Peter Kincaid, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Naval Technical Training Command Millington TN Research Branch.
• Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. The Association for Computational Linguistics.
• Mou et al. (2016) Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In arXiv preprint arXiv:1607.00970.
• Mou et al. (2015) Lili Mou, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2015. Backward and forward language modeling for constrained sentence generation. In arXiv preprint arXiv:1512.06612.
• Paetzold and Specia (2016) Gustavo H. Paetzold and Lucia Specia. 2016. Unsupervised lexical simplification for non-native speakers. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, pages 3761–3767.
• Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. pages 311–318.
• Pavlick and Callison-Burch (2016) Ellie Pavlick and Chris Callison-Burch. 2016. Simple ppdb: A paraphrase database for simplification. In The 54th Annual Meeting of the Association for Computational Linguistics. page 143.
• Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532–1543.
• Post et al. (2013) Matt Post, Juri Ganitkevitch, Luke Orland, Jonathan Weese, Yuan Cao, and Chris Callison-Burch. 2013. Joshua 5.0: Sparser, better, faster, server. In Proceedings of the Eighth Workshop on Statistical Machine Translation. pages 206–212.
• Rello et al. (2013) Luz Rello, Ricardo Baeza-Yates, and Horacio Saggion. 2013. The impact of lexical simplification by verbal paraphrases for people with and without dyslexia. In International Conference on Intelligent Text Processing and Computational Linguistics. Springer Berlin Heidelberg, pages 501–512.
• Schuster and Paliwal (1997) Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681.
• Specia (2010) Lucia Specia. 2010. Translating from complex to simplified sentences. In International Conference on Computational Processing of the Portuguese Language. Springer Berlin Heidelberg, pages 30–39.
• Sun and Zhou (2012) Hong Sun and Ming Zhou. 2012. Joint learning of a dual smt system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers. Association for Computational Linguistics, volume 2.
• Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112.
• Štajner et al. (2015) Sanja Štajner, Hannah Béchara, and Horacio Saggion. 2015. A deeper exploration of the standard pb-smt approach to text simplification and its evaluation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL).
• Wubben et al. (2012) Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers. volume 1, pages 1015–1024.
• Xu et al. (2016) Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. In Transactions of the Association for Computational Linguistics 4. pages 401–415.
• Zeiler (2012) Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
• Zhu et al. (2010) Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd international conference on computational linguistics. Association for Computational Linguistics, pages 1353–1361.