UnibucKernel: A kernel-based learning method for complex word identification


Abstract

In this paper, we present a kernel-based learning approach for the 2018 Complex Word Identification (CWI) Shared Task. Our approach is based on combining multiple low-level features, such as character n-grams, with high-level semantic features that are either automatically learned using word embeddings or extracted from a lexical knowledge base, namely WordNet. After feature extraction, we employ a kernel method for the learning phase. The feature matrix is first transformed into a normalized kernel matrix. For the binary classification task (simple versus complex), we employ Support Vector Machines. For the regression task, in which we have to predict the complexity level of a word (a word is more complex if it is labeled as complex by more annotators), we employ $\nu$-Support Vector Regression. We applied our approach only to the three English data sets containing documents from the Wikipedia, WikiNews and News domains. Our best result during the competition was the third place on the English Wikipedia data set. However, in this paper, we also report better post-competition results.


1 Introduction

A key role in reading comprehension by non-native speakers is played by lexical complexity. To date, researchers in the Natural Language Processing (NLP) community have developed several systems to simplify texts for non-native speakers [Petersen and Ostendorf(2007)], as well as for native speakers with reading disabilities [Rello et al.(2013)Rello, Baeza-Yates, Bott, and Saggion] or low literacy levels [Specia(2010)]. The first task that needs to be addressed by text simplification methods is to identify which words are likely to be considered complex. The complex word identification (CWI) task has attracted a lot of attention in the NLP community, as it has been addressed as a stand-alone task by some researchers [Davoodi et al.(2017)Davoodi, Kosseim, and Mongrain]. More recently, researchers have even organized shared tasks on CWI [Paetzold and Specia(2016a), Štajner et al.(2018)Štajner, Biemann, Malmasi, Paetzold, Specia, Tack, Yimam, and Zampieri]. The goal of the 2018 CWI Shared Task [Štajner et al.(2018)Štajner, Biemann, Malmasi, Paetzold, Specia, Tack, Yimam, and Zampieri] is to predict which words can be difficult for a non-native speaker, based on annotations collected from a mixture of native and non-native speakers. Although the task features a multilingual data set, we participated only in the English monolingual track, due to time constraints.

In this paper, we describe the approach of our team, UnibucKernel, for the English monolingual track of the 2018 CWI Shared Task [Štajner et al.(2018)Štajner, Biemann, Malmasi, Paetzold, Specia, Tack, Yimam, and Zampieri]. We present results for both the classification (predicting if a word is simple or complex) and the regression (predicting the complexity level of a word) tasks. Our approach is based on a standard machine learning pipeline that consists of two phases: feature extraction and classification/regression. In the first phase, we combine multiple low-level features, such as character n-grams, with high-level semantic features that are either automatically learned using word embeddings [Mikolov et al.(2013)Mikolov, Sutskever, Chen, Corrado, and Dean] or extracted from a lexical knowledge base, namely WordNet [Miller(1995), Fellbaum(1998)]. After feature extraction, we employ a kernel method for the learning phase. The feature matrix is first transformed into a normalized kernel matrix, using either the inner product between pairs of samples (computed by the linear kernel function) or an exponential transformation of the inner product (computed by the Gaussian kernel function). For the binary classification task, we employ Support Vector Machines (SVM) [Cortes and Vapnik(1995)], while for the regression task, we employ $\nu$-Support Vector Regression ($\nu$-SVR) [Chang and Lin(2002)]. We applied our approach only to the three English monolingual data sets containing documents from the Wikipedia, WikiNews and News domains. Our best result during the competition was the third place on the English Wikipedia data set. Nonetheless, in this paper, we also report better post-competition results.

The rest of this paper is organized as follows. Related work on complex word identification is presented in Section 2. Our method is presented in Section 3. Our experiments and results are presented in Section 4. Finally, we draw our conclusions and discuss future work in Section 5.

2 Related Work

Although text simplification methods were proposed more than a decade ago [Petersen and Ostendorf(2007)], complex word identification was not studied as a stand-alone task until recently [Shardlow(2013)], with the first shared task on CWI organized in 2016 [Paetzold and Specia(2016a)]. With some exceptions [Davoodi et al.(2017)Davoodi, Kosseim, and Mongrain], most of the related works are actually the system description papers of the 2016 CWI Shared Task participants. Among the top participants, the most popular classifier is Random Forest [Brooke et al.(2016)Brooke, Uitdenbogerd, and Baldwin, Mukherjee et al.(2016)Mukherjee, Patra, Das, and Bandyopadhyay, Ronzano et al.(2016)Ronzano, Abura’ed, Espinosa Anke, and Saggion, Zampieri et al.(2016)Zampieri, Tan, and van Genabith], while the most common types of features are lexical and semantic features [Brooke et al.(2016)Brooke, Uitdenbogerd, and Baldwin, Mukherjee et al.(2016)Mukherjee, Patra, Das, and Bandyopadhyay, Paetzold and Specia(2016b), Quijada and Medero(2016), Ronzano et al.(2016)Ronzano, Abura’ed, Espinosa Anke, and Saggion]. Some works used Naive Bayes [Mukherjee et al.(2016)Mukherjee, Patra, Das, and Bandyopadhyay] or SVM [Zampieri et al.(2016)Zampieri, Tan, and van Genabith] along with the Random Forest classifier, while others used different classification methods altogether, e.g. Decision Trees [Quijada and Medero(2016)], Nearest Centroid [Palakurthi and Mamidi(2016)] or Maximum Entropy [Konkol(2016)]. Along with the lexical and semantic features, many have used morphological [Mukherjee et al.(2016)Mukherjee, Patra, Das, and Bandyopadhyay, Paetzold and Specia(2016b), Palakurthi and Mamidi(2016), Ronzano et al.(2016)Ronzano, Abura’ed, Espinosa Anke, and Saggion] and syntactic [Mukherjee et al.(2016)Mukherjee, Patra, Das, and Bandyopadhyay, Quijada and Medero(2016), Ronzano et al.(2016)Ronzano, Abura’ed, Espinosa Anke, and Saggion] features.

Paetzold and Specia (2016b) proposed two ensemble methods by applying either hard voting or soft voting on machine learning classifiers trained on morphological, lexical, and semantic features. Their systems ranked on the first and the second places in the 2016 CWI Shared Task. Ronzano et al. (2016) employed Random Forests based on lexical, morphological, semantic and syntactic features, ranking on the third place in the 2016 CWI Shared Task. Konkol (2016) trained Maximum Entropy classifiers on word occurrence counts in Wikipedia documents, ranking on the fourth place, after Ronzano et al. (2016). Wróbel (2016) ranked on the fifth place using a simple rule-based approach that considers a single feature, namely the number of documents from Simple English Wikipedia in which the target word occurs. Mukherjee et al. (2016) employed the Random Forest and the Naive Bayes classifiers based on semantic, lexicon-based, morphological and syntactic features. Their Naive Bayes system ranked on the sixth place in the 2016 CWI Shared Task. After the 2016 CWI Shared Task, Zampieri et al. (2017) combined the submitted systems using an ensemble method based on plurality voting. They also proposed an oracle ensemble that provides a theoretical upper bound of the performance. The oracle selects the correct label for a given word if at least one of the participants predicted the correct label. The results reported by Zampieri et al. (2017) indicate that there is a significant performance gap to be filled by automatic systems.

Compared to the related works, we propose the use of some novel semantic features. One set of features is inspired by the work of Butnaru et al. (2017) in word sense disambiguation, while another set of features is inspired by the spatial pyramid approach [Lazebnik et al.(2006)Lazebnik, Schmid, and Ponce], commonly used in computer vision to improve the performance of the bag-of-visual-words model [Ionescu et al.(2013)Ionescu, Popescu, and Grozea, Ionescu and Popescu(2015)].

3 Method

The method that we employ for identifying complex words is based on a series of features extracted from the word itself, as well as from the context in which the word is used. Upon having the features extracted, we compute a kernel matrix using one of two standard kernel functions, namely the linear kernel or the Gaussian kernel. We then apply either the SVM classifier to identify if a word is complex or not, or the $\nu$-SVR predictor to determine the complexity level of a word.

3.1 Feature Extraction

As stated before, we extract features from both the target word and the context in which the word appears. From the target word, we quantify a series of features based on its characters. More specifically, we count the number of characters, vowels and consonants, as well as the percentage of vowels and consonants from the total number of characters in the word. Along with these features, we also quantify the number of consecutively repeating characters, e.g. double consonants. For example, in the word “innovation”, we can find the double consonant “nn”. We also extract n-grams of 1, 2, 3 and 4 characters, based on the intuition that complex words tend to be formed of a different set of n-grams than simple words.
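To make the character-level features concrete, the following Python sketch computes the counts, percentages, repeated-character count and character n-grams described above. The feature names and the dictionary representation are our own illustrative choices, not the authors' exact implementation.

```python
VOWELS = set("aeiou")

def char_features(word):
    """Character-level features for a single target word (illustrative sketch)."""
    word = word.lower()
    n_chars = len(word)
    n_vowels = sum(c in VOWELS for c in word)
    n_consonants = sum(c.isalpha() and c not in VOWELS for c in word)
    # consecutively repeating characters, e.g. the double consonant "nn" in "innovation"
    n_repeats = sum(1 for a, b in zip(word, word[1:]) if a == b)
    features = {
        "n_chars": n_chars,
        "n_vowels": n_vowels,
        "n_consonants": n_consonants,
        "pct_vowels": n_vowels / n_chars if n_chars else 0.0,
        "pct_consonants": n_consonants / n_chars if n_chars else 0.0,
        "n_repeats": n_repeats,
    }
    # character n-grams of length 1 to 4, represented as counts
    for n in range(1, 5):
        for i in range(n_chars - n + 1):
            key = "ngram_" + word[i:i + n]
            features[key] = features.get(key, 0) + 1
    return features
```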

Other features extracted from the target word are the part-of-speech and the number of senses listed for the respective word in the WordNet knowledge base [Miller(1995), Fellbaum(1998)]. If the target word is actually composed of multiple words, i.e. it is a multi-word expression, we generate the features for each word in the target and sum the corresponding values to obtain the features for the target multi-word expression.
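A possible implementation of these two features uses NLTK's WordNet interface and part-of-speech tagger (an assumption on our side; the paper does not name a specific toolkit). Note how the sense counts are summed over the words of a multi-word target, as described above.

```python
import nltk
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet data to be installed

def wordnet_features(target):
    """Part-of-speech and WordNet sense count for a (possibly multi-word) target."""
    words = target.split()
    # number of WordNet senses, summed over the words of a multi-word expression
    n_senses = sum(len(wn.synsets(w)) for w in words)
    # part-of-speech tag of the first word, obtained here with the NLTK tagger
    pos_tag = nltk.pos_tag(words)[0][1]
    return {"n_senses": n_senses, "pos": pos_tag}
```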

In the NLP community, word embeddings [Bengio et al.(2003)Bengio, Ducharme, Vincent, and Jauvin, Karlen et al.(2008)Karlen, Weston, Erkan, and Collobert] are used in many tasks, and have become more popular due to the word2vec [Mikolov et al.(2013)Mikolov, Sutskever, Chen, Corrado, and Dean] framework. Word embedding methods have the capacity to build a vectorial representation of words by assigning a low-dimensional real-valued vector to each word, with the property that semantically related words are projected in the same vicinity of the generated space. Word embeddings are in fact a learned representation of words, where each dimension represents a hidden feature of the word [Turian et al.(2010)Turian, Ratinov, and Bengio]. We devise additional features for the CWI task with the help of pre-trained word embeddings.

The first set of features based on word embeddings takes into account the word’s context. More precisely, we record the minimum, the maximum and the mean value of the cosine similarity between the target word and each other word from the sentence in which the target word occurs. The intuition for using this set of features is that a word can be complex if it is semantically different from the other context words, and this difference should be reflected in the embedding space.

Having the same goal in mind, namely to identify if the target word is an outlier with respect to the other words in the sentence, we employ a simple approach to compute sense embeddings using the semantic relations between WordNet synsets. We note that this approach was previously used for unsupervised word sense disambiguation in [Butnaru et al.(2017)Butnaru, Ionescu, and Hristea]. To compute the sense embedding for a word sense, we first build a disambiguation vocabulary or sense bag. Based on WordNet, we form the sense bag for a given synset by collecting the words found in the gloss of the synset (examples included), as well as the words found in the glosses of semantically related synsets. The semantic relations are chosen based on the part-of-speech of the target word, as described in [Butnaru et al.(2017)Butnaru, Ionescu, and Hristea]. To derive the sense embedding, we embed the collected words in an embedding space and compute the median of the resulting word vectors. For each sense embedding of the target word, we compute the cosine similarity with each and every sense embedding computed for each other word in the sentence, in order to find the minimum, the maximum and the mean value.
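The context-similarity features can be sketched as follows, assuming the pre-trained embeddings are available as a dict-like object mapping words to NumPy vectors (e.g. loaded from word2vec); the function and variable names are ours.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def context_similarity_features(target, sentence_words, embeddings):
    """Min, max and mean cosine similarity between the target and its context words."""
    if target not in embeddings:
        return {"sim_min": 0.0, "sim_max": 0.0, "sim_mean": 0.0}
    t = embeddings[target]
    sims = [cosine(t, embeddings[w])
            for w in sentence_words
            if w != target and w in embeddings]
    if not sims:
        return {"sim_min": 0.0, "sim_max": 0.0, "sim_mean": 0.0}
    return {"sim_min": min(sims), "sim_max": max(sims),
            "sim_mean": sum(sims) / len(sims)}
```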

Figure 1: A set of word vectors represented in a 2D space generated by applying PCA on 300-dimensional word embeddings.
Figure 2: A grid applied on the 2D embedding space. For example, the word “valedictorian” is located in one particular bin. Consequently, the corresponding one-hot vector contains a non-zero value at the index of that bin.

Using word embeddings, we also define a set of useful features based on the location of the target word in the embedding space. For this last set of features, we first process the word vectors in order to reduce the dimensionality of the vector space from 300 components to only 2 components, by applying Principal Component Analysis (PCA) [Hotelling(1933)]. Figure 1 illustrates a couple of semantically related words that are projected in the same area of the 2-dimensional (2D) embedding space generated by PCA. In the newly generated space, we apply a grid to divide the space into multiple equal regions, named bins. This process is inspired by the spatial pyramids [Lazebnik et al.(2006)Lazebnik, Schmid, and Ponce] used in computer vision to recover spatial information in the bag-of-visual-words [Ionescu et al.(2013)Ionescu, Popescu, and Grozea, Ionescu and Popescu(2015)]. After we determine the bins, we index the bins and encode the index of the bin that contains the target word as a one-hot vector. Various grid sizes can provide a more specific or a more general location of a word in the generated space. For this reason, we use multiple grid sizes, ranging from coarse to fine divisions. In Figure 2, we show an example with a grid that divides the space illustrated in Figure 1 into bins; the corresponding one-hot vector for the word “valedictorian”, containing a single non-zero value at the index of its bin, is also illustrated in Figure 2. The intuition behind this one-hot representation is that complex words tend to reside alone in the semantic space generated by the word embedding framework.
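A minimal sketch of this grid-location feature follows; the grid size n and the min-max scaling of the PCA space are our own assumptions about details not fully specified above.

```python
import numpy as np
from sklearn.decomposition import PCA

def grid_one_hot(all_vectors, target_index, n=4):
    """One-hot encode the bin of an n x n grid that contains the target word.

    all_vectors: array of shape (N, 300) with the embeddings of all words.
    target_index: row index of the target word in all_vectors.
    """
    points = PCA(n_components=2).fit_transform(all_vectors)  # (N, 2)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # scale the target's 2D coordinates to [0, 1], then map them to bin indices
    scaled = (points[target_index] - mins) / (maxs - mins + 1e-12)
    col, row = np.minimum((scaled * n).astype(int), n - 1)
    one_hot = np.zeros(n * n)
    one_hot[row * n + col] = 1.0
    return one_hot
```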

3.2 Kernel Representation

Kernel-based learning algorithms work by embedding the data into a Hilbert space and by searching for linear relations in that space, using a learning algorithm. The embedding is performed implicitly, that is by specifying the inner product between each pair of points rather than by giving their coordinates explicitly. The power of kernel methods [Ionescu and Popescu(2016), Shawe-Taylor and Cristianini(2004)] lies in the implicit use of a Reproducing Kernel Hilbert Space induced by a positive semi-definite kernel function. Despite the fact that the mathematical meaning of a kernel is the inner product in a Hilbert space, another interpretation of a kernel is the pairwise similarity between samples.

The kernel function gives kernel methods the power to naturally handle input data that is not in the form of numerical vectors, such as strings, images, or even video and audio files. The kernel function captures the intuitive notion of similarity between objects in a specific domain and can be any function defined on the respective domain that is symmetric and positive definite. In our approach, we experiment with two commonly-used kernel functions, namely the linear kernel and the Radial Basis Function (RBF) kernel. The linear kernel is easily obtained by computing the inner product of two feature vectors $x$ and $y$:
$$k(x, y) = \langle x, y \rangle,$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product. In a similar manner, the RBF kernel (also known as the Gaussian kernel) between two feature vectors $x$ and $y$ can be computed as follows:
$$k(x, y) = \exp\left(-\frac{\lVert x - y \rVert^2}{2 \sigma^2}\right).$$

In the experiments, we replace $\frac{1}{2\sigma^2}$ with a constant value $\gamma$, and tune the parameter $\gamma$ instead of $\sigma$.
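Both kernels can be computed over a feature matrix in a few lines of NumPy. This is a generic sketch, with gamma playing the role of $\frac{1}{2\sigma^2}$ as described above.

```python
import numpy as np

def linear_kernel(X, Y):
    """Linear kernel matrix between the rows of X and the rows of Y."""
    return X @ Y.T

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of X and the rows of Y."""
    # squared distances via the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0))
```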

A technique that improves machine learning performance for many applications is data normalization. Because the range of raw data can vary significantly, the objective functions optimized by the classifiers will not work properly without normalization. The normalization step gives each feature an approximately equal contribution to the similarity between two samples. The normalization of a pairwise kernel matrix $K$ containing similarities between samples is obtained by dividing each component by the square root of the product of the two corresponding diagonal elements:
$$\hat{K}_{ij} = \frac{K_{ij}}{\sqrt{K_{ii} \cdot K_{jj}}}.$$
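In code, the normalization amounts to dividing the kernel matrix by the outer product of the square roots of its diagonal. The sketch below handles a square train-versus-train kernel matrix; for a test-versus-train matrix, the two diagonals would come from the test and train self-similarities, respectively.

```python
import numpy as np

def normalize_kernel(K):
    """Normalize K so that K_hat[i, j] = K[i, j] / sqrt(K[i, i] * K[j, j])."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)
```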

3.3 Classification and Regression

In the case of binary classification problems, kernel-based learning algorithms look for a discriminant function, a function that assigns $+1$ to examples that belong to one class and $-1$ to examples that belong to the other class. This function will be a linear function in the Hilbert space, which means it will have the form:
$$f(x) = \textrm{sign}\left(\langle w, x \rangle + b\right),$$

for some weight vector $w$ and some bias term $b$. The kernel can be employed whenever the weight vector $w$ can be expressed as a linear combination of the training points, $w = \sum_{i=1}^{n} \alpha_i x_i$, implying that $f$ can be expressed as follows:
$$f(x) = \textrm{sign}\left(\sum_{i=1}^{n} \alpha_i k(x_i, x) + b\right),$$

where $n$ is the number of training samples and $k$ is a kernel function.

Various kernel methods differ in the way they find the vector $w$ (or equivalently the dual vector $\alpha$). Support Vector Machines [Cortes and Vapnik(1995)] try to find the vector $w$ that defines the hyperplane that maximally separates the images in the Hilbert space of the training examples belonging to the two classes. Mathematically, the SVM classifier chooses the weights $w$ and the bias term $b$ that satisfy the following optimization criterion:
$$\min_{w, b} \ \frac{1}{2} \lVert w \rVert^2 + C \sum_{i=1}^{n} \left[1 - y_i \left(\langle w, x_i \rangle + b\right)\right]_{+},$$

where $y_i$ is the label ($+1$/$-1$) of the training example $x_i$, $C$ is a regularization parameter and $[x]_{+} = \max\{x, 0\}$. We use the SVM classifier for the binary classification of words into simple versus complex classes. On the other hand, we employ $\nu$-Support Vector Regression ($\nu$-SVR) in order to predict the complexity level of a word (a word is more complex if it is labeled as complex by more annotators). The $\nu$-Support Vector Machines [Chang and Lin(2002)] can handle both classification and regression. The model introduces a new parameter $\nu$ that can be used to control the number of support vectors in the resulting model. The parameter $\nu$ is introduced directly into the optimization problem formulation, replacing the $\varepsilon$ parameter of standard $\varepsilon$-SVR, which is instead estimated automatically during training.
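As a hedged illustration, the two learning steps could be reproduced with scikit-learn's SVM wrappers over a precomputed (normalized) kernel matrix; the paper itself uses LibSVM directly, so this is an equivalent sketch rather than the authors' code.

```python
from sklearn.svm import SVC, NuSVR

def train_models(K_train, y_class, y_level, C=1.0, nu=0.5):
    """Fit an SVM classifier and a nu-SVR regressor on a precomputed kernel.

    K_train: normalized kernel matrix of shape (n_train, n_train).
    y_class: binary labels (simple vs. complex); y_level: complexity levels.
    """
    clf = SVC(C=C, kernel="precomputed").fit(K_train, y_class)
    reg = NuSVR(C=C, nu=nu, kernel="precomputed").fit(K_train, y_level)
    return clf, reg

# Prediction uses a test-versus-train kernel matrix K_test of shape (n_test, n_train):
#   labels = clf.predict(K_test); levels = reg.predict(K_test)
```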

4 Experiments

4.1 Data Sets

Data Set Train Validation Test
English News
English WikiNews
English Wikipedia
Table 1: A summary with the number of samples in each data set of the English monolingual track of the 2018 CWI Shared Task.

The data sets used in the English monolingual track of the 2018 CWI Shared Task [Štajner et al.(2018)Štajner, Biemann, Malmasi, Paetzold, Specia, Tack, Yimam, and Zampieri] are described in [Yimam et al.(2017)Yimam, Štajner, Riedl, and Biemann]. Each data set covers one of three distinct genres (News, WikiNews and Wikipedia), and the samples are annotated by both native and non-native English speakers. Table 1 presents the number of samples in the training, the validation (development) and the test sets, for each of the three genres.

4.2 Classification Results

Data Set Kernel Accuracy $F_1$-score Competition Rank Post-Competition Rank
English News linear
English News RBF
English WikiNews linear
English WikiNews RBF
English Wikipedia linear
English Wikipedia RBF
Table 2: Classification results on the three data sets of the English monolingual track of the 2018 CWI Shared Task. The methods are evaluated in terms of the classification accuracy and the $F_1$-score. The results marked with an asterisk are obtained during the competition. The other results are obtained after the competition.

Parameter Tuning. For the classification task, we used the SVM implementation provided by LibSVM [Chang and Lin(2011)]. The parameters that require tuning are the parameter $\gamma$ of the RBF kernel and the regularization parameter $C$ of the SVM. We tune these parameters using grid search on each of the three validation sets included in the data sets prepared for the English monolingual track, selecting the values of $\gamma$ and $C$ from fixed candidate sets. Interestingly, we obtain the best results with the same parameter choices on all three validation sets. We use these optimal values of $\gamma$ and $C$ in all our subsequent classification experiments.
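The grid search could be reproduced as in the sketch below, reusing the rbf_kernel function from Section 3.2; the candidate value grids passed in as gammas and Cs are placeholders, not the exact sets used in the paper, and the kernel normalization step is omitted for brevity.

```python
from sklearn.metrics import f1_score
from sklearn.svm import SVC

def tune_classifier(X_train, y_train, X_val, y_val, gammas, Cs):
    """Grid search over gamma and C, scored by F1 on an explicit validation set."""
    best = (None, None, -1.0)  # (gamma, C, f1)
    for gamma in gammas:
        K_train = rbf_kernel(X_train, X_train, gamma)
        K_val = rbf_kernel(X_val, X_train, gamma)
        for C in Cs:
            clf = SVC(C=C, kernel="precomputed").fit(K_train, y_train)
            score = f1_score(y_val, clf.predict(K_val))
            if score > best[2]:
                best = (gamma, C, score)
    return best
```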

Data Set Kernel Mean Absolute Error Post-Competition Rank
English News linear
English News RBF
English WikiNews linear
English WikiNews RBF
English Wikipedia linear
English Wikipedia RBF
Table 3: Regression results on the three data sets of the English monolingual track of the 2018 CWI Shared Task. The methods are evaluated in terms of the mean absolute error (MAE). The reported results are obtained after the competition.

Results. Our results for the classification task on the three data sets included in the English monolingual track are presented in Table 2. We would like to note that, before the competition ended, we observed a bug in the code that was used in most of our submissions. In the feature extraction stage, the code produced NaN (not a number) values for some features. In order to make the submissions in time, we had to eliminate the samples containing NaN values in the feature vector. Consequently, most of our results during the competition were lower than expected. However, we managed to fix this bug and recompute the features in time to resubmit new results, but only for the RBF kernel on the English Wikipedia data set. The rest of the results presented in Table 2 are produced after the bug fixing and after the submission deadline. Nevertheless, for a fair comparison with the other systems, we include our rankings during the competition as well as the post-competition rankings.

The results reported in Table 2 indicate that the RBF kernel is more suitable for the CWI task than the linear kernel. Our best $F_1$-score on the English News data set is slightly lower than that of the top scoring system during the competition. On the English WikiNews data set, our best $F_1$-score is once again slightly lower than that of the top scoring system during the competition. On the English Wikipedia data set, our best $F_1$-score ranked us as the third team; only two systems obtained better $F_1$-scores on this data set. Overall, our system performed quite well, but it can surely benefit from the addition of more features.

4.3 Regression Results

Although we did not submit results for the regression task, we present post-competition regression results in this section.

Parameter Tuning. For the regression task, the parameters that require tuning are the parameter $\gamma$ of the RBF kernel and the $\nu$-SVR parameters $C$ and $\nu$. As in the classification task, we tune these parameters using grid search on the validation sets provided with the three data sets included in the English monolingual track, selecting the values of $\gamma$ and $C$ from fixed candidate sets. The preliminary results on the validation sets indicate the best parameter choices for each data set: the English News data set requires one choice of $\gamma$ and $C$, while the English WikiNews and English Wikipedia data sets share another. For the parameter $\nu$, we leave the default value of $0.5$ provided by LibSVM [Chang and Lin(2011)].
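An analogous sketch for the regression tuning follows, with $\nu$ fixed to LibSVM's default of 0.5 and model selection by validation MAE; as before, the value grids are placeholders and rbf_kernel is the function sketched in Section 3.2.

```python
import numpy as np
from sklearn.svm import NuSVR

def tune_regressor(X_train, y_train, X_val, y_val, gammas, Cs, nu=0.5):
    """Grid search over gamma and C for nu-SVR, scored by validation MAE."""
    best = (None, None, np.inf)  # (gamma, C, mae)
    for gamma in gammas:
        K_train = rbf_kernel(X_train, X_train, gamma)
        K_val = rbf_kernel(X_val, X_train, gamma)
        for C in Cs:
            reg = NuSVR(C=C, nu=nu, kernel="precomputed").fit(K_train, y_train)
            mae = np.mean(np.abs(reg.predict(K_val) - y_val))
            if mae < best[2]:
                best = (gamma, C, mae)
    return best
```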

Results. The regression results on the three data sets included in the English monolingual track are presented in Table 3. The systems are evaluated in terms of the mean absolute error (MAE). As in the classification task, we observe that the RBF kernel generally provides better results than the linear kernel. On two data sets, English News and English WikiNews, we obtain lower MAE values than all the systems that participated in the competition; with the RBF kernel, we surpass the best MAE reported during the competition on both data sets. On the third data set, English Wikipedia, we attain the second-best MAE, after the top system in the competition. Compared to the classification task, we report better post-competition rankings in the regression task. This can be explained by two factors. First of all, the number of participants in the regression task was considerably lower. Second of all, we believe that $\nu$-SVR is a very good regressor that is not commonly used, surpassing alternative regression methods in other tasks as well, e.g. image difficulty prediction [Ionescu et al.(2016)Ionescu, Alexe, Leordeanu, Popescu, Papadopoulos, and Ferrari].

5 Conclusion

In this paper, we described the system developed by our team, UnibucKernel, for the 2018 CWI Shared Task. The system is based on extracting lexical, syntactic and semantic features and on training a kernel method for the prediction (classification and regression) tasks. We participated only in the English monolingual track. Our best result during the competition was the third place on the English Wikipedia data set. We also reported better post-competition results.

In this work, we treated each English data set independently, due to the memory constraints of our machine. Nevertheless, we believe that joining the training sets provided in the English News, the English WikiNews and the English Wikipedia data sets into a single and larger training set can provide better performance, as the model’s generalization capacity could improve by learning from an extended set of samples. We leave this idea for future work. Another direction that could be explored in future work is the addition of more features, as our current feature set is definitely far from being exhaustive.

References

  1. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155.
  2. Julian Brooke, Alexandra Uitdenbogerd, and Timothy Baldwin. 2016. Melbourne at SemEval 2016 Task 11: Classifying Type-level Word Complexity using Random Forests with Corpus and Word List Features. In Proceedings of SemEval, pages 975–981, San Diego, California. Association for Computational Linguistics.
  3. Andrei M. Butnaru, Radu Tudor Ionescu, and Florentina Hristea. 2017. ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing. In Proceedings of EACL, pages 916–926.
  4. Chih-Chung Chang and Chih-Jen Lin. 2002. Training ν-Support Vector Regression: Theory and Algorithms. Neural Computation, 14:1959–1977.
  5. Chih-Chung Chang and Chih-Jen Lin. 2011. LibSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
  6. Corinna Cortes and Vladimir Vapnik. 1995. Support-Vector Networks. Machine Learning, 20(3):273–297.
  7. Elnaz Davoodi, Leila Kosseim, and Matthew Mongrain. 2017. A Context-Aware Approach for the Identification of Complex Words in Natural Language Texts. In Proceedings of ICSC, pages 97–100.
  8. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.
  9. Harold Hotelling. 1933. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417.
  10. Radu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim Papadopoulos, and Vittorio Ferrari. 2016. How hard can it be? Estimating the difficulty of visual search in an image. In Proceedings of CVPR, pages 2157–2166.
  11. Radu Tudor Ionescu and Marius Popescu. 2015. PQ kernel: a rank correlation kernel for visual word histograms. Pattern Recognition Letters, 55:51–57.
  12. Radu Tudor Ionescu and Marius Popescu. 2016. Knowledge Transfer between Computer Vision and Text Mining. Advances in Computer Vision and Pattern Recognition. Springer International Publishing.
  13. Radu Tudor Ionescu, Marius Popescu, and Cristian Grozea. 2013. Local Learning to Improve Bag of Visual Words Model for Facial Expression Recognition. Workshop on Challenges in Representation Learning, ICML.
  14. Michael Karlen, Jason Weston, Ayse Erkan, and Ronan Collobert. 2008. Large scale manifold transduction. In Proceedings of ICML, pages 448–455. ACM.
  15. Michal Konkol. 2016. UWB at SemEval-2016 Task 11: Exploring Features for Complex Word Identification. In Proceedings of SemEval, pages 1038–1041, San Diego, California. Association for Computational Linguistics.
  16. Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. 2006. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. Proceedings of CVPR, 2:2169–2178.
  17. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. Proceedings of NIPS, pages 3111–3119.
  18. George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38(11):39–41.
  19. Niloy Mukherjee, Braja Gopal Patra, Dipankar Das, and Sivaji Bandyopadhyay. 2016. JU_NLP at SemEval-2016 Task 11: Identifying Complex Words in a Sentence. In Proceedings of SemEval, pages 986–990, San Diego, California. Association for Computational Linguistics.
  20. Gustavo Paetzold and Lucia Specia. 2016a. SemEval 2016 Task 11: Complex Word Identification. In Proceedings of SemEval, pages 560–569, San Diego, California. Association for Computational Linguistics.
  21. Gustavo Paetzold and Lucia Specia. 2016b. SV000gg at SemEval-2016 Task 11: Heavy Gauge Complex Word Identification with System Voting. In Proceedings of SemEval, pages 969–974, San Diego, California. Association for Computational Linguistics.
  22. Ashish Palakurthi and Radhika Mamidi. 2016. IIIT at SemEval-2016 Task 11: Complex Word Identification using Nearest Centroid Classification. In Proceedings of SemEval, pages 1017–1021, San Diego, California. Association for Computational Linguistics.
  23. Sarah E. Petersen and Mari Ostendorf. 2007. Text Simplification for Language Learners: A Corpus Analysis. In Proceedings of SLaTE.
  24. Maury Quijada and Julie Medero. 2016. HMC at SemEval-2016 Task 11: Identifying Complex Words Using Depth-limited Decision Trees. In Proceedings of SemEval, pages 1034–1037, San Diego, California. Association for Computational Linguistics.
  25. Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Horacio Saggion. 2013. Simplify or Help?: Text Simplification Strategies for People with Dyslexia. In Proceedings of W4A, pages 15:1–15:10.
  26. Francesco Ronzano, Ahmed Abura’ed, Luis Espinosa Anke, and Horacio Saggion. 2016. TALN at SemEval-2016 Task 11: Modelling Complex Words by Contextual, Lexical and Semantic Features. In Proceedings of SemEval, pages 1011–1016, San Diego, California. Association for Computational Linguistics.
  27. Matthew Shardlow. 2013. A Comparison of Techniques to Automatically Identify Complex Words. In Proceedings of the ACL Student Research Workshop, pages 103–109.
  28. John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press.
  29. Lucia Specia. 2010. Translating from complex to simplified sentences. In Proceedings of PROPOR, pages 30–39.
  30. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394. Association for Computational Linguistics.
  31. Sanja Štajner, Chris Biemann, Shervin Malmasi, Gustavo Paetzold, Lucia Specia, Anaïs Tack, Seid Muhie Yimam, and Marcos Zampieri. 2018. A Report on the Complex Word Identification Shared Task 2018. In Proceedings of BEA-13, New Orleans, United States. Association for Computational Linguistics.
  32. Krzysztof Wróbel. 2016. PLUJAGH at SemEval-2016 Task 11: Simple System for Complex Word Identification. In Proceedings of SemEval, pages 953–957, San Diego, California. Association for Computational Linguistics.
  33. Seid Muhie Yimam, Sanja Štajner, Martin Riedl, and Chris Biemann. 2017. CWIG3G2 - Complex Word Identification Task across Three Text Genres and Two User Groups. In Proceedings of IJCNLP (Volume 2: Short Papers), pages 401–407, Taipei, Taiwan.
  34. Marcos Zampieri, Shervin Malmasi, Gustavo Paetzold, and Lucia Specia. 2017. Complex Word Identification: Challenges in Data Annotation and System Performance. In Proceedings of NLPTEA, pages 59–63, Taipei, Taiwan. Asian Federation of Natural Language Processing.
  35. Marcos Zampieri, Liling Tan, and Josef van Genabith. 2016. MacSaar at SemEval-2016 Task 11: Zipfian and Character Features for ComplexWord Identification. In Proceedings of SemEval, pages 1001–1005, San Diego, California. Association for Computational Linguistics.