Sequence Labeling Parsing by Learning Across Representations

Michalina Strzyz   David Vilares   Carlos Gómez-Rodríguez
Universidade da Coruña, CITIC
FASTPARSE Lab, LyS Research Group, Departamento de Computación
Campus de Elviña, s/n, 15071 A Coruña, Spain
{michalina.strzyz,david.vilares,carlos.gomez}@udc.es
Abstract

We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (mtl). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves performance on the other paradigm. Second, we explore an mtl sequence labeling model that parses both representations, at almost no cost in terms of performance and speed. The results across the board show that, on average, mtl models with auxiliary losses for constituency parsing outperform single-task ones by 1.05 F1 points, and for dependency parsing by 0.62 uas points.

1 Introduction

Constituency (Chomsky, 1956) and dependency grammars (Mel’cuk, 1988; Kübler et al., 2009) are the two main abstractions for representing the syntactic structure of a given sentence, and each of them has its own particularities (Kahane and Mazziotta, 2015). While in constituency parsing the structure of a sentence is abstracted as a phrase-structure tree (see Figure 1a), in dependency parsing the tree encodes binary syntactic relations between pairs of words (see Figure 1b).

When it comes to developing natural language processing (nlp) parsers, these two tasks are usually treated as disjoint, and improvements have therefore been obtained separately (Charniak, 2000; Nivre, 2003; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Ma et al., 2018; Kitaev and Klein, 2018).

Despite the potential benefits of learning across representations, there have been few attempts in the literature to do this. Klein and Manning (2003) considered a factored model that provides separate methods for phrase-structure and lexical dependency trees and combined them to obtain optimal parses. With a similar aim, Ren et al. (2013) first compute the n best constituency trees using a probabilistic context-free grammar, convert those into dependency trees using a dependency model, compute a probability score for each of them, and finally rerank the most plausible trees based on both scores. However, these methods are complex and intended for statistical parsers. Instead, we propose an extremely simple framework to learn across constituency and dependency representations.

Contribution

(i) We use sequence labeling for constituency (Gómez-Rodríguez and Vilares, 2018) and dependency parsing (Strzyz et al., 2019) combined with multi-task learning (mtl) (Caruana, 1997) to learn across syntactic representations. To do so, we take one parsing paradigm (constituency or dependency parsing) as an auxiliary task to help train a model for the other parsing representation, a simple technique that translates into consistent improvements across the board. (ii) We also show that a single mtl model following this strategy can robustly produce both constituency and dependency trees, obtaining performance and speed comparable with previous sequence labeling models for (either) constituency or dependency parsing. The source code is available at https://github.com/mstrise/seq2label-crossrep.

2 Parsing as Sequence Labeling

Notation

We use $w = [w_1, w_2, \dots, w_{|w|}]$ to denote an input sentence. We use bold style lower-cased and math style upper-cased characters to refer to vectors and matrices (e.g. $\mathbf{x}$ and $W$).

Sequence labeling is a structured prediction task where each token in the input sentence is mapped to a label (Rei and Søgaard, 2018). Many nlp tasks suit this setup, including part-of-speech tagging, named-entity recognition or chunking (Sang and Buchholz, 2000; Toutanova and Manning, 2000; Tjong Kim Sang and De Meulder, 2003). More recently, syntactic tasks such as constituency parsing and dependency parsing have been successfully reduced to sequence labeling (Spoustová and Spousta, 2010; Li et al., 2018; Gómez-Rodríguez and Vilares, 2018; Strzyz et al., 2019). Such models compute a tree representation of an input sentence using tagging actions.

We will also cast parsing as sequence labeling, to then learn across representations using multi-task learning. This approach has two main advantages: (i) it does not require an explicit parsing algorithm nor explicit parsing structures, and (ii) it massively simplifies joint syntactic modeling. We now describe parsing as sequence labeling and the architecture used in this work.

Constituency parsing as tagging

Gómez-Rodríguez and Vilares (2018) define a linearization method to transform a phrase-structure tree into a discrete sequence of labels of the same length as the input sentence. Each label l_i is a 3-tuple (n_i, c_i, u_i) where: n_i is an integer that encodes the number of ancestors in the tree shared between a word w_i and its next one w_{i+1} (computed as the relative variation with respect to n_{i-1}); c_i is the non-terminal symbol shared at the lowest level in common between said pair of words; and u_i (optional) is a leaf unary chain that connects c_i to w_i. Figure 1a illustrates the encoding with an example. (In this work we do not use the dual encoding by Vilares et al. (2019), which combines the relative encoding with a top-down absolute scale to represent certain relations.)
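To make the relative n_i component concrete, the following is a minimal sketch (the function name and interface are ours, not from the authors' code) that turns absolute counts of shared ancestors into the relative values used by the encoding:

```python
# Hypothetical helper illustrating the relative n_i component of the
# encoding of Gómez-Rodríguez and Vilares (2018): given the absolute
# number of ancestors shared by each pair (w_i, w_{i+1}), emit the
# relative variation with respect to the previous count.
def to_relative(shared_ancestors):
    """shared_ancestors[i] = #ancestors common to w_i and w_{i+1}."""
    rel, prev = [], 0
    for n in shared_ancestors:
        rel.append(n - prev)  # n_i is the change w.r.t. n_{i-1}
        prev = n
    return rel

# For "He has good control ." in Figure 1a, the absolute shared-ancestor
# counts 1, 2, 3, 1 yield the relative values 1, 1, 1, -2 seen in the labels.
print(to_relative([1, 2, 3, 1]))  # [1, 1, 1, -2]
```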

Dependency parsing as tagging

Strzyz et al. (2019) also propose a linearization method to transform a dependency tree into a discrete sequence of labels. Each label is also represented as a 3-tuple (o_i, p_i, d_i). If o_i > 0, w_i's head is the o_i-th closest word with PoS tag p_i to the right of w_i. If o_i < 0, the head is the -o_i-th closest word to the left of w_i that has p_i as a PoS tag. The element d_i represents the syntactic relation between the head and the dependent terms. Figure 1b depicts it with an example.
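As an illustration, this sketch (our own hypothetical helper, not the authors' decoder) recovers head indices from such labels, assuming well-formed label sequences:

```python
# Sketch of decoding (o_i, p_i, d_i) labels from Strzyz et al. (2019)
# back into head indices; assumes the labels are well formed.
def decode_heads(pos_tags, labels):
    """pos_tags[i] is the PoS of word i (index 0 = ROOT); labels[i-1] = (o, p, d)."""
    heads = {}
    for i, (o, p, d) in enumerate(labels, start=1):
        # Candidate positions to the right (o > 0) or left (o < 0) of w_i.
        candidates = (range(i + 1, len(pos_tags)) if o > 0
                      else range(i - 1, -1, -1))
        matches = [j for j in candidates if pos_tags[j] == p]
        heads[i] = (matches[abs(o) - 1], d)  # the |o|-th word with PoS tag p
    return heads

# Figure 1b: "He has good control ." with PoS tags N V J N .
tags = ["ROOT", "N", "V", "J", "N", "."]
labels = [(+1, "V", "nsubj"), (-1, "ROOT", "root"), (+1, "N", "amod"),
          (-1, "V", "dobj"), (-1, "V", "punct")]
print(decode_heads(tags, labels))
# {1: (2, 'nsubj'), 2: (0, 'root'), 3: (4, 'amod'), 4: (2, 'dobj'), 5: (2, 'punct')}
```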

[Figure omitted: a constituency tree for "He has good control .", linearized as the label sequence (1,S,NP) (1,VP,∅) (1,NP,∅) (-2,S,∅), with the final token receiving an empty label.]

(a) A constituency tree
[Figure omitted: a dependency tree for "ROOT He has good control ." with PoS tags N V J N . and labels He=(+1,V,nsubj), has=(-1,ROOT,root), good=(+1,N,amod), control=(-1,V,dobj), .=(-1,V,punct); arcs: has→He (nsubj), ROOT→has (root), control→good (amod), has→control (dobj), has→. (punct).]

(b) A dependency tree
Figure 1: An example of constituency and dependency trees with their encodings.

Tagging with lstms

We use bidirectional lstms (bilstms) to train our models (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997). Briefly, let $\overrightarrow{LSTM}(x)$ be an abstraction of an lstm that processes the input from left to right, and let $\overleftarrow{LSTM}(x)$ be another lstm processing the input in the opposite direction; the output $h_i$ of a bilstm at a timestep $i$ is computed as: $h_i = \overrightarrow{LSTM}(x_{1:i}) \circ \overleftarrow{LSTM}(x_{i:|x|})$. Then, $h_i$ is further processed by a feed-forward layer to compute the output label, i.e. $P(y_i|h_i) = \mathrm{softmax}(W h_i + b)$. To optimize the model, we minimize the categorical cross-entropy loss, i.e. $\mathcal{L} = -\sum \log P(y_i|h_i)$. In Appendix A we detail additional hyperparameters of the network. In this work we use NCRF++ (Yang and Zhang, 2018) as our sequence labeling framework.
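The following is a minimal PyTorch sketch of such a bilstm tagger; it is ours and illustrative only, not the authors' NCRF++ configuration (dimensions, layer counts, and names are placeholders):

```python
# Minimal BiLSTM tagger sketch: embeddings -> stacked BiLSTM -> linear
# layer producing per-token label scores, trained with cross-entropy.
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_labels):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM: concatenates the left-to-right and
        # right-to-left hidden states at each timestep.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                              bidirectional=True, batch_first=True)
        # Feed-forward layer mapping h_i to label scores.
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        h, _ = self.bilstm(self.emb(tokens))  # (batch, seq_len, 2*hidden)
        return self.out(h)                    # unnormalized label scores

model = BiLSTMTagger(vocab_size=10000, emb_dim=100,
                     hidden_dim=400, num_labels=200)
loss_fn = nn.CrossEntropyLoss()  # categorical cross-entropy over labels
```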

3 Learning across representations

To learn across representations we cast the problem as multi-task learning. mtl enables learning many tasks jointly, encapsulating them in a single model and leveraging their shared representation (Caruana, 1997; Ruder, 2017). In particular, we will use a hard-sharing architecture: the sentence is first processed by stacked bilstms shared across all tasks, with a task-dependent feed-forward network on top to compute each task's outputs. To benefit from a specific parsing abstraction, we will be using the concept of auxiliary tasks (Plank et al., 2016; Bingel and Søgaard, 2017; Coavoux and Crabbé, 2017), where tasks are learned together with the main task in the mtl setup even if they are not of actual interest by themselves, as they might help to find hidden patterns in the data and lead to better generalization of the model (auxiliary losses are usually given less importance during the training process). For instance, Hershcovich et al. (2018) have shown that semantic parsing benefits from this approach. A minimal sketch of this shared architecture is shown below.
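This PyTorch sketch (ours, with illustrative names, not the NCRF++ implementation) shows the hard-sharing pattern: one shared stacked bilstm encoder with a separate feed-forward head per task:

```python
# Hard-sharing MTL sketch: a shared encoder feeds one output layer per
# task (main or auxiliary); each head produces its own label scores.
import torch.nn as nn

class HardSharingModel(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_sizes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Stacked BiLSTMs shared across all tasks.
        self.shared = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                              bidirectional=True, batch_first=True)
        # One task-dependent feed-forward layer per task, e.g.
        # {"n": 60, "c": 30, "u": 20} for 3-task constituency parsing.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_sizes.items()})

    def forward(self, tokens):
        h, _ = self.shared(self.emb(tokens))
        return {task: head(h) for task, head in self.heads.items()}
```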

The input is the same for both types of parsing, and the same number of timesteps is required to compute a tree (equal to the length of the sentence), which simplifies the joint modeling. In this work, we focus on parallel data (we train on the same sentences labeled for both constituency and dependency abstractions). In the future, we plan to explore the idea of exploiting joint training over disjoint treebanks (Barrett et al., 2018).

3.1 Baselines and models

We test different sequence labeling parsers to determine whether there are any benefits in learning across representations. We compare: (i) a single-task model for constituency parsing and another one for dependency parsing, (ii) a multi-task model for constituency parsing (and another for dependency parsing) where each element of the 3-tuple is predicted as a partial label in a separate subtask instead of as a whole, (iii) different mtl models where the partial labels from a specific parsing abstraction are used as auxiliary tasks for the other one, and (iv) an mtl model that learns to produce both abstractions as main tasks.

Single-paradigm, single-task models (s-s)

For constituency parsing, we use the single-task model by Gómez-Rodríguez and Vilares (2018). The input is the raw sentence and the output for each token w_i is a single label of the form l_i = (n_i, c_i, u_i). For dependency parsing we use the model by Strzyz et al. (2019) to predict a single dependency label of the form l_i = (o_i, p_i, d_i) for each token.

Single-paradigm, multi-task models (s-mtl)

For constituency parsing, instead of predicting a single label output of the form (n_i, c_i, u_i), we generate three partial and separate labels n_i, c_i and u_i through three task-dependent feed-forward networks on top of the stacked bilstms. This is similar to Vilares et al. (2019). For dependency parsing, we propose an mtl version too. We observed in preliminary experiments, as shown in Table 1, that casting the problem as 3-task learning led to worse results. Instead, we cast it as a 2-task learning problem, where the first task consists in predicting the head of a word w_i, i.e. predicting the tuple (o_i, p_i), and the second task consists in predicting the type of the relation d_i. The loss is here computed as $\mathcal{L} = \sum_t \mathcal{L}_t$, where $\mathcal{L}_t$ is the partial loss coming from the subtask t. A code sketch of this label split is given after Table 1.

Model       UAS    LAS
s-s         93.81  91.59
s-mtl(2)    94.03  91.78
s-mtl(3)    93.66  91.47

Table 1: Comparison of the single-paradigm models for dependency parsing evaluated on the PTB dev set, where each label is learned as a single task or as 2 or 3 subtasks.
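The 2-task split described above might be implemented as follows (a sketch with hypothetical helper names; the string encoding of the head tuple is our own choice):

```python
# Illustrative split of a full dependency label (o, p, d) into the two
# subtasks used by the 2-task setup: the head tuple (o, p) and the
# relation d, each trained by its own feed-forward head.
def split_dependency_labels(labels):
    head_task = [f"{o}_{p}" for (o, p, d) in labels]  # task 1: (o_i, p_i)
    rel_task = [d for (o, p, d) in labels]            # task 2: d_i
    return head_task, rel_task

heads, rels = split_dependency_labels([(+1, "V", "nsubj"), (-1, "ROOT", "root")])
# heads = ['1_V', '-1_ROOT'], rels = ['nsubj', 'root']
```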

Double-paradigm, multi-task models with auxiliary losses (d-mtl-aux)

We predict the partial labels from one of the parsing abstractions as main tasks. The partial labels from the other parsing paradigm are used as auxiliary tasks. The loss is computed as $\mathcal{L} = \sum_t \mathcal{L}_t + \sum_a \beta_a \mathcal{L}_a$, where $\mathcal{L}_a$ is an auxiliary loss and $\beta_a$ its specific weighting factor. Figure 2 shows the architecture used in this and the following multi-paradigm model.
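As an illustration, the weighted combination might be computed as follows (a sketch under our assumptions; the beta values follow our reading of Appendix A):

```python
# Sketch (ours) of combining main-task and auxiliary losses as
# L = sum_t L_t + sum_a beta_a * L_a. In our reading of Appendix A,
# beta is 0.2 when constituency labels are auxiliary and 0.1 when
# dependency labels are auxiliary.
def total_loss(main_losses, aux_losses, beta):
    return sum(main_losses) + sum(beta * loss for loss in aux_losses)
```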

Double-paradigm, multi-task models (d-mtl)

All tasks are learned as main tasks instead.


Figure 2: Architecture of our double-paradigm, mtl model with 3-task learning for constituency parsing and 2-task learning for dependency parsing.
              Dependency parsing   Constituency parsing
Model         UAS      LAS         F1
English (PTB)
  s-s         93.60    91.74       90.14
  s-mtl       93.84    91.83       90.32
  d-mtl-aux   94.05    92.01       90.39
  d-mtl       93.96    91.90       89.81
Basque
  s-s         86.20    81.70       89.54
  s-mtl       87.42    81.71       90.86
  d-mtl-aux   87.19    81.73       91.12
  d-mtl       87.09    81.77       90.76
French
  s-s         89.13    85.03       80.68
  s-mtl       89.54    84.89       81.34
  d-mtl-aux   89.52    84.97       81.33
  d-mtl       89.45    85.07       81.19
German
  s-s         91.24    88.79       84.19
  s-mtl       91.54    88.75       84.46
  d-mtl-aux   91.58    88.80       84.38
  d-mtl       91.45    88.67       84.28
Hebrew
  s-s         82.74    75.08       88.85
  s-mtl       83.42    74.91       91.91
  d-mtl-aux   83.90    75.89       91.83
  d-mtl       82.60    73.73       91.10
Hungarian
  s-s         88.24    84.54       90.42
  s-mtl       88.69    84.54       90.76
  d-mtl-aux   88.99    84.95       90.69
  d-mtl       88.89    84.89       90.93
Korean
  s-s         86.47    84.12       83.33
  s-mtl       86.78    84.39       83.51
  d-mtl-aux   87.00    84.60       83.39
  d-mtl       86.64    84.34       83.08
Polish
  s-s         91.17    85.64       92.59
  s-mtl       91.58    85.04       93.17
  d-mtl-aux   91.37    85.20       93.36
  d-mtl       92.00    85.92       93.52
Swedish
  s-s         86.49    80.60       83.81
  s-mtl       87.22    80.61       86.23
  d-mtl-aux   87.24    80.34       86.53
  d-mtl       87.15    80.71       86.44
Average
  s-s         88.36    84.13       87.06
  s-mtl       88.89    84.07       88.06
  d-mtl-aux   88.98    84.28       88.11
  d-mtl       88.80    84.11       87.90

Table 2: Results on the PTB and SPMRL test sets.
                                   Dependency parsing   Constituency parsing
Model                              UAS      LAS          F1
Chen and Manning (2014)            91.80    89.60        –
Kiperwasser and Goldberg (2016)    93.90    91.90        –
Dozat and Manning (2017)           95.74    94.08        –
Ma et al. (2018)                   95.87    94.19        –
Fernández-G and Gómez-R (2019)     96.04    94.43        –
Vinyals et al. (2015)              –        –            88.30
Zhu et al. (2013)                  –        –            90.40
Vilares et al. (2019)              –        –            90.60
Dyer et al. (2016)                 –        –            91.20
Kitaev and Klein (2018)            –        –            95.13
d-mtl-aux                          94.05    92.01        90.39

Table 3: Comparison of existing models against the d-mtl-aux model on the PTB test set.
Model                                 Basque  French  German  Hebrew  Hungarian  Korean  Polish  Swedish  Average
Nivre et al. (2007)                   70.11   77.98   77.81   69.97   70.15      82.06   75.63   73.21    74.62
Ballesteros (2013)                    78.58   79.00   82.75   73.01   79.63      82.65   79.89   75.82    78.92
Ballesteros et al. (2015) (char+POS)  78.61   81.08   84.49   72.26   76.34      86.21   78.24   74.47    78.96
De La Clergerie (2013)                77.55   82.06   84.80   73.63   75.58      81.02   82.56   77.54    79.34
Björkelund et al. (2013) (ensemble)   85.14   85.24   89.65   80.89   86.13      86.62   87.07   82.13    85.36
d-mtl-aux                             84.02   83.85   88.18   74.94   80.26      85.93   85.86   79.77    82.85

Table 4: Dependency parsing: existing models evaluated with LAS scores on the SPMRL test set.
Model                                  Basque  French  German  Hebrew  Hungarian  Korean  Polish  Swedish  Average
Fernández-González and Martins (2015)  85.90   78.75   78.66   88.97   88.16      79.28   91.20   82.80    84.22
Coavoux and Crabbé (2016)              86.24   79.91   80.15   88.69   90.51      85.10   92.96   81.74    85.66
Björkelund et al. (2013) (ensemble)    87.86   81.83   81.27   89.46   91.85      84.27   87.55   83.99    86.01
Coavoux and Crabbé (2017)              88.81   82.49   85.34   89.87   92.34      86.04   93.64   84.00    87.82
Vilares et al. (2019)                  91.18   81.37   84.88   92.03   90.65      84.01   93.93   86.71    88.10
Kitaev and Klein (2018)                89.71   84.06   87.69   90.35   92.69      86.59   93.69   84.35    88.64
d-mtl-aux                              91.12   81.33   84.38   91.83   90.69      83.39   93.36   86.53    87.83

Table 5: Constituency parsing: existing models evaluated with F1 score on the SPMRL test set.
[Table values not recoverable; models compared: s-s, s-mtl, d-mtl-aux, d-mtl, each for dependency and constituency parsing.]

Table 6: Sentences/second on the PTB test set.

4 Experiments

4.1 Data

In the following experiments we use two parallel datasets that provide syntactic analyses for both dependency and constituency parsing.

PTB

For the evaluation on English we use the English Penn Treebank (Marcus et al., 1993), transformed into Stanford dependencies (De Marneffe et al., 2006), with the predicted PoS tags as in Dyer et al. (2016).

SPMRL

We also use the SPMRL datasets, a collection of parallel dependency and constituency treebanks for morphologically rich languages (Seddah et al., 2014). In this case, we use the predicted PoS tags provided by the organizers. We observed some differences between the constituency and dependency predicted input features provided with the corpora. For experiments where dependency parsing is the main task, we use the input from the dependency file, and the converse for constituency, for comparability with other work. d-mtl models were trained twice (once for each input), and dependency and constituency scores are reported on the model trained on the corresponding input.

Metrics

We use the bracketing F-score from the original evalb and eval_spmrl official scripts to evaluate constituency trees. For dependency parsing, we rely on las and uas scores, excluding punctuation in order to provide a homogeneous setup for PTB and SPMRL.

4.2 Results

Table 2 compares single-paradigm models against their double-paradigm mtl versions. On average, mtl models with auxiliary losses achieve the best performance for both parsing abstractions. They gain 1.05 F1 points on average in comparison with the single model for constituency parsing, and 0.62 uas and 0.15 las points for dependency parsing. In comparison to the single-paradigm mtl models, the average gain is smaller: 0.05 F1 points for constituency parsing, and 0.09 uas and 0.21 las points for dependency parsing.

mtl models that use auxiliary tasks (d-mtl-aux) consistently outperform the single-task models (s-s) in all datasets, both for constituency parsing and for dependency parsing in terms of uas. However, this does not extend to las. This different behavior between uas and las seems to originate from the fact that the 2-task dependency parsing models, which are the basis for the corresponding auxiliary-task and mtl models, improve uas but not las with respect to single-task dependency parsing models. The reason might be that the single-task setup excludes unlikely combinations of dependency labels with PoS tags or dependency directions that are not found in the training set, while in the 2-task setup both components are treated separately, which may have a negative influence on dependency labeling accuracy.

In general, one can observe different ranges of gains of the models across languages. In terms of uas, the differences between single-task and mtl models span between 1.22 (Basque) and -0.14 (Hebrew); for las, between 0.81 and -1.35 (both for Hebrew); and for F1, between 3.06 (Hebrew) and -0.25 (Korean). Since the sequence labeling encoding used for dependency parsing heavily relies on PoS tags, the result for a given language can depend on the degree of granularity of its PoS tags.

In addition, Table 3 provides a comparison of the d-mtl-aux models for dependency and constituency parsing against existing models on the PTB test set. Tables 4 and 5 show the results for various existing models on the SPMRL test sets. (Note that we provide these SPMRL results for merely informative purposes: while they are the best existing results to our knowledge on these datasets, not all are directly comparable to ours, since not all of them use the same kinds of information, e.g. some models do not use morphological features. Also, there are not many recent results for dependency parsing on the SPMRL datasets, probably due to the popularity of UD corpora. For comparison, we have included punctuation for this evaluation.)

Table 6 shows the speeds (sentences/second) on a single core of a CPU (Intel Core i7-7700, 4.2 GHz). The d-mtl setup comes at almost no added computational cost, so the very good speed-accuracy tradeoff already provided by the single-task models is improved.

5 Conclusion

We have described a framework to leverage the complementary nature of constituency and dependency parsing. It combines multi-task learning, auxiliary tasks, and sequence labeling parsing, so that constituency and dependency parsers can benefit each other through learning across their representations. We have shown that mtl models with auxiliary losses outperform single-task models, and that mtl models treating both constituency and dependency parsing as main tasks obtain strong results, at almost no cost in terms of speed.

Acknowledgments

This work has received funding from the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01).

References

  • M. Ballesteros, C. Dyer, and N. A. Smith (2015) Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 349–359.
  • M. Ballesteros (2013) Effective morphological feature selection with MaltOptimizer at the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pp. 63–70.
  • M. Barrett, J. Bingel, N. Hollenstein, M. Rei, and A. Søgaard (2018) Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 302–312.
  • J. Bingel and A. Søgaard (2017) Identifying beneficial task relations for multi-task learning in deep neural networks. CoRR abs/1702.08303.
  • A. Björkelund, O. Cetinoglu, R. Farkas, T. Mueller, and W. Seeker (2013) (Re)ranking meets morphosyntax: state-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pp. 135–145.
  • R. Caruana (1997) Multitask learning. Machine Learning 28 (1), pp. 41–75.
  • E. Charniak (2000) A maximum-entropy-inspired parser. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, pp. 132–139.
  • D. Chen and C. D. Manning (2014) A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, pp. 740–750.
  • N. Chomsky (1956) Three models for the description of language. IRE Transactions on Information Theory 2 (3), pp. 113–124.
  • M. Coavoux and B. Crabbé (2016) Neural greedy constituent parsing with dynamic oracles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 172–182.
  • M. Coavoux and B. Crabbé (2017) Multilingual lexicalized constituency parsing with word-level auxiliary tasks. In 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pp. 331–336.
  • E. De La Clergerie (2013) Exploring beam-based shift-reduce dependency parsing with DyALog: results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pp. 53–62.
  • M. De Marneffe, B. MacCartney, and C. D. Manning (2006) Generating typed dependency parses from phrase structure parses. In LREC, pp. 449–454.
  • T. Dozat and C. D. Manning (2017) Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017.
  • C. Dyer, A. Kuncoro, M. Ballesteros, and N. A. Smith (2016) Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 199–209.
  • D. Fernández-González and C. Gómez-Rodríguez (2019) Left-to-right dependency parsing with pointer networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota, USA, pp. to appear.
  • D. Fernández-González and A. F. T. Martins (2015) Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Beijing, China, pp. 1523–1533.
  • C. Gómez-Rodríguez and D. Vilares (2018) Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1314–1324.
  • D. Hershcovich, O. Abend, and A. Rappoport (2018) Multitask parsing across semantic representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 373–385.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
  • S. Kahane and N. Mazziotta (2015) Syntactic polygraphs. A formalism extending both constituency and dependency. In Mathematics of Language.
  • E. Kiperwasser and Y. Goldberg (2016) Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics 4, pp. 313–327.
  • N. Kitaev and D. Klein (2018) Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2676–2686.
  • D. Klein and C. D. Manning (2003) Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems.
  • S. Kübler, R. McDonald, and J. Nivre (2009) Dependency parsing. Synthesis Lectures on Human Language Technologies 1 (1), pp. 1–127.
  • Z. Li, J. Cai, S. He, and H. Zhao (2018) Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA, pp. 3203–3214.
  • X. Ma, Z. Hu, J. Liu, N. Peng, G. Neubig, and E. Hovy (2018) Stack-pointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1403–1414.
  • M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini (1993) Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics 19 (2), pp. 313–330.
  • I. A. Mel’cuk (1988) Dependency syntax: theory and practice. SUNY Press.
  • J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. Kübler, S. Marinov, and E. Marsi (2007) MaltParser: a language-independent system for data-driven dependency parsing. Natural Language Engineering 13 (2), pp. 95–135.
  • J. Nivre (2003) An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Workshop on Parsing Technologies (IWPT), Nancy, France, pp. 149–160.
  • B. Plank, A. Søgaard, and Y. Goldberg (2016) Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 412–418.
  • M. Rei and A. Søgaard (2018) Zero-shot sequence labeling: transferring knowledge from sentences to tokens. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 293–302.
  • X. Ren, X. Chen, and C. Kit (2013) Combine constituent and dependency parsing via reranking. In Twenty-Third International Joint Conference on Artificial Intelligence.
  • S. Ruder (2017) An overview of multi-task learning in deep neural networks. CoRR abs/1706.05098.
  • E. F. T. K. Sang and S. Buchholz (2000) Introduction to the CoNLL-2000 shared task: chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop.
  • M. Schuster and K. K. Paliwal (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45 (11), pp. 2673–2681.
  • D. Seddah, S. Kübler, and R. Tsarfaty (2014) Introducing the SPMRL 2014 shared task on parsing morphologically-rich languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pp. 103–109.
  • D. Spoustová and M. Spousta (2010) Dependency parsing as a sequence labeling task. The Prague Bulletin of Mathematical Linguistics 94 (1), pp. 7–14.
  • M. Strzyz, D. Vilares, and C. Gómez-Rodríguez (2019) Viable dependency parsing as sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), Minneapolis, Minnesota, pp. 717–723.
  • E. F. Tjong Kim Sang and F. De Meulder (2003) Introduction to the CoNLL-2003 shared task: language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pp. 142–147.
  • K. Toutanova and C. D. Manning (2000) Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pp. 63–70.
  • D. Vilares, M. Abdou, and A. Søgaard (2019) Better, faster, stronger sequence tagging constituent parsers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), Minneapolis, Minnesota, USA, pp. to appear.
  • O. Vinyals, Ł. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton (2015) Grammar as a foreign language. In Advances in Neural Information Processing Systems, pp. 2773–2781.
  • J. Yang and Y. Zhang (2018) NCRF++: an open-source neural sequence labeling toolkit. In Proceedings of ACL 2018, System Demonstrations, pp. 74–79.
  • M. Zhu, Y. Zhang, W. Chen, M. Zhang, and J. Zhu (2013) Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 434–443.

Appendix A Model parameters

The models were trained for up to 150 iterations and optimized with Stochastic Gradient Descent (SGD) with a batch size of 8. The best model for constituency parsing was chosen based on the highest F1 score achieved on the development set during training, and for dependency parsing based on the highest las score. The best double-paradigm, multi-task model was chosen based on the highest harmonic mean of the las and F1 scores.

Table 7 shows model hyperparameters.

Initial learning rate             0.02
Time-based learning rate decay    0.05
Momentum                          0.9
Dropout                           0.5

Dimensions
  Word embedding                  100
  Character embedding             30
  Self-defined features           20 (models trained on the PTB used a PoS tag embedding size of 25, to ensure the same setup as previously reported results)
  Word hidden vector              800
  Character hidden vector         50

Weighting factor for each task, by type of mtl model
  2-task                          1 for each task
  3-task                          1 for each task
  d-mtl-aux (dependency main)     main tasks 1, auxiliary tasks 0.2
  d-mtl-aux (constituency main)   main tasks 1, auxiliary tasks 0.1
  d-mtl                           1 for each task

Table 7: Model hyperparameters.