ConvAMR: Abstract meaning representation parsing

Lai Dac Viet, Nguyen Le Minh, Ken Satoh
{vietld, nguyenml}@jaist.ac.jp, ksatoh@nii.ac.jp
Abstract

Convolutional neural networks (CNN) have recently achieved remarkable performance in a wide range of applications. In this research, we equip a convolutional sequence-to-sequence (seq2seq) model with an efficient graph linearization technique for abstract meaning representation (AMR) parsing. Our linearization method signals the turns of the graph traversal more clearly than the prior method. In addition, the convolutional seq2seq model is more appropriate for this task and considerably faster than recurrent neural network models. Our method outperforms previous methods by a large margin on both the standard dataset LDC2014T12 and our new legal-domain test set. Our results indicate that future work still has room to improve parsing models using the graph linearization approach.

1 Introduction

Abstract Meaning Representation (AMR) is a rooted, directed, acyclic graph that represents the content of a sentence. All nodes and edges of the AMR graph are labeled according to the senses of the words in the sentence. AMR parsing is the task of converting a given sentence into the corresponding graph. AMRs have been applied in several applications such as event extraction [13, 7], text summarization [6, 11] and text generation [15, 14]. However, AMR annotation requires substantial human effort, which limits the outcome of data-driven approaches, among them neural network based methods [10, 3]. Therefore, a highly accurate parser is necessary in order to strengthen the applications that are built on AMR.

Three different formats are widely used to represent AMR graphs. First, the conjunction form is used to measure the similarity between two AMR graphs and in some logic applications. Second, the PENMAN notation is used on occasions that involve human reading and writing, such as annotation and data inspection. Third, computer programs commonly store AMRs as graph structures in memory. Figure 1 illustrates the three representations. In an AMR graph, each node is identified by a unique ID called a variable. The content of a node is expressed by a semantic concept, which can be an English word (e.g. dog), a PropBank frameset (e.g. want-01), or a special keyword (e.g. the "-" sign). The edge between two vertices is labeled with one of more than 100 relations, including semantic relations (e.g. :location, :name) and frameset argument indices (e.g. :ARG0, :ARG1). AMR also provides an inverse form for each relation (e.g. :location vs. :location-of).

To compare two semantic graphs, Cai and Knight [4] introduced the SMATCH score, which measures the degree of structural overlap between two graphs. The SMATCH score has been widely used to measure the accuracy of AMR parsers.

Figure 1: Three AMR formats of the same sentence "The dog wants to eat a bone". The conjunction form, the PENMAN notation and the graph are located on the top left, the bottom left and the right, respectively.

Transition-based parsers have achieved notable results in structured parsing tasks such as dependency parsing [5], and AMR parsers are now benefiting from the power of this approach. Motivated by the analogy between dependency trees and AMR graphs, Wang et al. [18] proposed the first transition system for parsing AMR graphs. Figure 2 illustrates the dependency tree and the AMR graph of the sentence "The domicile of a juridical person shall be at the location of its principal office". The two structures share some nodes (e.g. domicile, person, juridical) and some node interrelations (e.g. person - juridical). Wang et al. defined a two-stage process for their system: (1) parsing a sentence into a dependency tree using existing parsers such as the Stanford parser and the Charniak parser; (2) converting the obtained tree into an AMR graph with an eight-action transition system. Their later work investigated a richer feature set including co-reference, semantic role labeling, and word clusters [17], as well as rich named entity tags and the ISI verbalization list [16].

Figure 2: Dependency tree and AMR graph

NeuralAMR [10] has succeeded at both AMR parsing and sentence generation as a result of a bootstrapping training strategy on a 20-million-sentence unsupervised dataset. An efficient adaptation of machine translation to AMR parsing by Barzdins and Gosko [2] indicates that character-based features are better than word-based features. To address the sparsity of AMR graph data, Peng et al. [12] limit the AMR vocabulary to 2,000 tokens. Ballesteros and Al-Onaizan [1] combined a recurrent neural network with a transition system into a deep transition model; in their method, the information is encoded in the LSTM hidden state using embedding vectors and syntactic features instead of gathering the large number of hand-crafted features used in conventional transition methods.

Although recent studies have utilized Long Short-Term Memory (LSTM) networks in AMR parsing [10, 1], employing LSTM has several disadvantages compared to CNN. First, LSTM models long-range dependencies, which may introduce noise when generating a linearized graph, whereas CNN captures shorter-range dependencies, which is advantageous for generating a graph traversal. Second, LSTM requires sequential computation, which restricts parallelization; in contrast, CNN computes over all positions simultaneously. In this paper, we present the first successful application of convolutional seq2seq models to AMR parsing. The main contributions of this research are:

  • An outstanding performance, with a 5-point SMATCH score improvement, resulting from the proposed AMR parsing model that combines depth-first-search graph linearization and a convolutional seq2seq network.

  • A new public AMR test set of legal documents (https://github.com/nguyenlab/crest).

  • The first study of AMR parsing in the legal domain.

2 Method

In this section, we first present the formalization of the AMR parsing task. We then describe in detail the two main parts of our model: the graph conversion, including linearization and de-linearization, presented in section 2.1, and the convolutional seq2seq model, presented in section 2.2.

Given the training dataset $(S, G)$, where $S$ and $G$ stand for the set of sentences and the set of corresponding AMR graphs, our supervised learning model with parameter set $\theta$ maximizes the following objective:

$\theta^{*} = \arg\max_{\theta} \sum_{(s, g) \in (S, G)} \log P(g \mid s; \theta)$

2.1 Graph linearization and de-linearization

A seq2seq model requires a sequential representation of features and labels; therefore, the AMR graph must be presented as a sequence. However, the raw AMR text is not an appropriate format due to its imbalance of tokens: it contains many round brackets and variables, which carry less semantic information than other components such as concepts, constants, and English words. Unlike the prior work [10], in our model the graphs pass through a much simpler pre-processing pipeline consisting of variable removal, graph linearization, and infrequent word replacement. For stripping the AMR text, we modified the depth-first-search traversal of Konstas et al. [10] in how it marks the end of a path: the left parentheses are dropped, and each right parenthesis is replaced by repeating the concept of the node whose subtree it closes.

The process of recovering the graph from the stripped text is called de-linearization. A graph that contains multiple nodes with a single concept might not be perfectly recovered, because those nodes have been collapsed into one. We show the level of information loss for each dataset in table 2. Table 1 demonstrates the conversion process.

Original sentence:  No abuse of rights is permitted.
AMR text:           (p / permit-01 :polarity -
                       :ARG1 (a / abuse-01
                          :ARG1 (r / right-05)))
Variable removal:   (permit-01 :polarity -
                       :ARG1 (abuse-01
                          :ARG1 (right-05)))
Linearization:      permit-01 :polarity - - :ARG1 abuse-01 :ARG1 right-05 right-05 abuse-01 permit-01
Table 1: Graph conversion process in detail
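
The conversion in Table 1 can be reproduced in a few lines of code. The following is a minimal sketch, not the authors' implementation: it assumes an AMR node is encoded as a (concept, children) pair, where children is a list of (relation, node) pairs, and the function names are our own.

def linearize(node):
    # Depth-first traversal: drop "(" entirely and, in place of ")",
    # repeat the concept of the node whose subtree just ended.
    concept, children = node
    tokens = [concept]
    for relation, child in children:
        tokens.append(relation)
        tokens.extend(linearize(child))
    tokens.append(concept)  # end-of-path marker
    return tokens

def delinearize(tokens):
    # Recover the nested structure: a token starting with ":" is a
    # relation, a token equal to the top open concept closes that node,
    # and any other token opens a new node.
    stack, pending, root = [], None, None
    for tok in tokens:
        if tok.startswith(":"):
            pending = tok
        elif pending is not None or not stack:
            node = (tok, [])
            if stack:
                stack[-1][1].append((pending, node))
            else:
                root = node
            stack.append(node)
            pending = None
        elif tok == stack[-1][0]:
            stack.pop()
    return root

# "No abuse of rights is permitted." after variable removal:
amr = ("permit-01", [
    (":polarity", ("-", [])),
    (":ARG1", ("abuse-01", [(":ARG1", ("right-05", []))])),
])
tokens = linearize(amr)
print(" ".join(tokens))
# permit-01 :polarity - - :ARG1 abuse-01 :ARG1 right-05 right-05 abuse-01 permit-01
assert delinearize(tokens) == amr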

We measured the information loss to validate the efficiency of this graph conversion method. All graphs G in the official AMR corpus were passed through the full linearization process, and the resulting sequences were then fed to the recovery process to obtain the AMR graph set G'. The information loss is calculated from the SMATCH score by equation 1. The result of the test is presented in table 2.

$\text{InfoLoss}(G, G') = 1 - \text{SMATCH}(G, G')$   (1)
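
As an illustration, equation 1 can be computed with the reference smatch package (https://github.com/snowblink14/smatch); the get_amr_match and compute_f helpers below are that package's API as we understand it, and the two PENMAN strings are assumed to be the original and recovered graphs.

import smatch

def information_loss(original_penman, recovered_penman):
    # SMATCH F-score between the recovered and the original graph,
    # turned into a loss as in equation 1.
    match, test, gold = smatch.get_amr_match(recovered_penman, original_penman)
    _, _, f_score = smatch.compute_f(match, test, gold)
    return 1.0 - f_score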

2.2 Convolutional sequence to sequence model

We utilize three different seq2seq models that have shown their strength in machine translation: an LSTM encoder-decoder model, the combination of a convolutional encoder and an LSTM decoder, and the fully convolutional seq2seq model. The first model uses a multilayer bi-directional LSTM encoder to produce hidden states from the input; the decoder consumes the hidden states and generates the output with an attention mechanism. We made a further modification by adding a dropout layer between each pair of consecutive LSTM layers. The second model is based on the work of Gehring et al. [8], where the bi-directional LSTM encoder is replaced by a convolutional encoder. The final model applies convolutional neural networks throughout, together with an attention mechanism [9].
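
The following is a minimal PyTorch sketch of the second variant, a convolutional encoder feeding an attentional LSTM decoder. The layer sizes, the GLU activation, and the dot-product attention are illustrative assumptions rather than the paper's exact configuration; the dropout argument of nn.LSTM places dropout between consecutive LSTM layers, mirroring the modification described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    def __init__(self, vocab_size, dim=256, layers=4, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Each conv doubles the channels so that GLU can gate them back to dim.
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, 2 * dim, kernel, padding=kernel // 2)
            for _ in range(layers))

    def forward(self, src):                       # src: (batch, src_len)
        x = self.embed(src).transpose(1, 2)       # (batch, dim, src_len)
        for conv in self.convs:
            x = F.glu(conv(x), dim=1) + x         # gated conv + residual
        return x.transpose(1, 2)                  # (batch, src_len, dim)

class AttnLSTMDecoder(nn.Module):
    def __init__(self, vocab_size, dim=256, layers=2, dropout=0.3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # dropout= inserts dropout between consecutive LSTM layers.
        self.lstm = nn.LSTM(dim, dim, layers, dropout=dropout,
                            batch_first=True)
        self.out = nn.Linear(2 * dim, vocab_size)

    def forward(self, tgt, enc_states, state=None):
        h, state = self.lstm(self.embed(tgt), state)          # (batch, tgt_len, dim)
        # Dot-product attention over the encoder states.
        weights = F.softmax(torch.bmm(h, enc_states.transpose(1, 2)), dim=-1)
        context = torch.bmm(weights, enc_states)              # (batch, tgt_len, dim)
        return self.out(torch.cat([h, context], dim=-1)), state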

2.3 Data annotation

The SemEval competitions allowed participants to access several manually annotated AMR corpora, but no large corpus has been made accessible to the public. In particular, there is no open AMR resource for any specific domain such as legal or scientific documents. Therefore, we manually annotated a corpus for the English version of the Japan Civil Code. The code is organized in multiple levels: chapter, part, article, paragraph, and sentence.

The pre-processing consists of the following steps: gathering articles, removing all article prefixes and article IDs, and then splitting each article into sentences. We labeled each sentence with an ID containing the article name, the paragraph index, and the sentence index. To annotate the sentences, we used the web-based editor provided by the ISI group (https://amr.isi.edu/editor.html), which offers a combination of a command line and a graphical interface. The PropBank corpus is integrated into its search engine, which minimizes the time needed to choose the proper sense of a word. Two annotators were given the list of article sentences and annotated the corpus independently. After finishing their individual work, the annotators discussed and aggregated their outcomes into a single result. We call this dataset JCivilCode-1.0. The statistics of this corpus are presented in table 2.

3 Experiments & Results

We conducted experiments on two datasets from different domains. The first configuration uses the official dataset LDC2014T12 for training, validation, and testing. The second configuration trains on the combined training and validation sets of LDC2014T12 and uses JCivilCode-1.0 as the test set. We train and test on two different domains because JCivilCode-1.0 does not yet contain enough sentence-graph pairs for training. The performance of the proposed approaches is assessed using the SMATCH score. To compare our model with others, we collected the performance results on LDC2014T12 from the original papers, and we also ran the best publicly available pre-trained models on JCivilCode-1.0.

                  LDC2014T12   JCivilCode-1.0
Train set         10,312       0
Valid set         1,368        0
Test set          1,371        157
Domain            News         Legal
Information loss  0.21         0.20
Table 2: Dataset characteristics.

Table 3 shows that our proposed models outperform both the transition-based methods and the neural methods on LDC2014T12, even though our methods are much simpler than the prior work. NeuralAMR relies on intensive preprocessing with graph simplification and strong named-entity anonymization. The stack-LSTM model gathers many types of syntactic features, including named entities, part-of-speech tags, and dependency trees. CAMR relies on rich features over single nodes, node pairs, paths, distances, and actions [18], as well as semantic role labeling [17]. In contrast, our models employ only word embeddings as features, after the three-step preprocessing described in section 2.1.

Although the linearization method significantly increases the accuracy of the neural models, it creates two foreseeable issues. First, entity redundancy occurs when the graph contains multiple nodes that share an identical concept. Second, the output may contain syntax errors, because the neural network does not guarantee that the output follows the PENMAN notation; a simple well-formedness check is sketched below. Table 4 shows samples from JCivilCode-1.0 together with the output of our model; the <<unk>> tokens and the truncated structure mark the errors our model generated.
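
As an illustration of the second issue, a decoder output can be screened with a simple well-formedness check before de-linearization. The checks below (balanced parentheses, and every node introduced as "variable / concept") are our own rough approximation of PENMAN validity, not part of the proposed model.

import re

def is_wellformed_penman(text):
    # Parentheses must balance and never close more nodes than are open.
    depth = 0
    for ch in text:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return False
    if depth != 0:
        return False  # unclosed nodes, as in the truncated output of Table 4
    # Every "(" must introduce a node of the form "variable / concept".
    return all(re.match(r"\s*\S+\s*/", chunk) for chunk in text.split("(")[1:])

print(is_wellformed_penman("(p / permit-01 :polarity -)"))  # True
print(is_wellformed_penman("(e0 / enjoy-01 :ARG1 (x0"))     # False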

Method                        LDC2014T12   JCivilCode-1.0
NeuralAMR [10]                0.62         -
Stack-LSTM [1]                0.64         -
CAMR [17]                     0.66         -
CAMR [16]                     0.66         0.46
Conv encoder, LSTM decoder    0.69         0.59
Fully conv seq2seq            0.71         0.60
Table 3: SMATCH scores on the two datasets.
[Node collision] A person who has become subject to the ruling of commencement of guardianship shall be an adult ward, and a guardian of an adult shall be appointed for him/her.

Gold standard:
(a / and
   :op1 (a2 / adult
      :domain (p / person
         :ARG1-of (s / subject-01
            :ARG2 (c / commence-01
               :ARG1 (g / guard-01)))))
   :op2 (a3 / appoint-01
      :ARG1 (p2 / person)
      :ARG2 (g2 / guardian
         :poss p)))

System:
(a0 / and
   :op1 (x0 / <<unk>>
      :domain (p0 / person
         :ARG1-of (s0 / subject-01
            :ARG2 (c0 / commence-01
               :ARG1 (g0 / guard-01)))))
   :op2 (a1 / appoint-01
      :ARG1 p0
         :ARG2 x0
            :poss p0))

[Syntax error] Unless otherwise provided by applicable laws, regulations or treaties, foreign nationals shall enjoy private rights.

Gold standard:
(e / enjoy-01
   :ARG0 (n / national
      :mod (f / foreign))
   :ARG1 (r / right-05
      :ARG1-of (p / private-02))
   :condition (p2 / provide-01 :polarity -
      :OR (o / or
         :op1 (l / law
            :mod (a / applicable))
         :op2 (r2 / regulate-01)
         :op3 (t / treaty))))

System:
(e0 / enjoy-01
   :ARG0 (n0 / national
      :mod (f0 / foreign))
   :ARG1 (x0 / <<unk>>)
   :ARG1-of x0
   :condition (p0 / provide-01
      :polarity (x1 / -)
      x0
         (o0 / or
            :op1 (l0 / law
               :mod x0)
Table 4: Two types of structural error.

4 Conclusion

We published the first release of a test set of the Japan Civil Code for AMR parsing, and we presented the efficiency of the convolutional seq2seq model for Abstract Meaning Representation parsing. By using a simple but effective graph linearization method, our model achieved a competitive accuracy. The results indicate the possibility of higher performance in the many applications that are based on AMR. However, the method revealed two technical issues that we plan to investigate in future research.

5 Acknowledgement

This work was supported by JST CREST Grant Number JPMJCR1513, Japan. The authors would like to thank our colleagues and reviewers for their insightful comments and suggestions.

References

  • [1] Ballesteros, M., Al-Onaizan, Y.: AMR parsing using stack-LSTMs. In: EMNLP (2017)
  • [2] Barzdins, G., Gosko, D.: RIGA at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on AMR parsing accuracy. CoRR abs/1604.01278 (2016), http://arxiv.org/abs/1604.01278
  • [3] Buys, J., Blunsom, P.: Oxford at semeval-2017 task 9: Neural amr parsing with pointer-augmented attention. In: SemEval-2017. pp. 914–919 (August 2017)
  • [4] Cai, S., Knight, K.: Smatch: an evaluation metric for semantic feature structures. In: ACL (2). pp. 748–752 (2013)
  • [5] Chen, D., Manning, C.: A fast and accurate dependency parser using neural networks. In: EMNLP. pp. 740–750 (2014)
  • [6] Dohare, S., Karnick, H.: Text summarization using abstract meaning representation. arXiv preprint arXiv:1706.01678 (2017)
  • [7] Garg, S., Galstyan, A., Hermjakob, U., Marcu, D.: Extracting biomolecular interactions using semantic parsing of biomedical text. In: AAAI. pp. 2718–2726 (2016)
  • [8] Gehring, J., Auli, M., Grangier, D., Dauphin, Y.N.: A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344 (2016)
  • [9] Gehring, J., Auli, M., Grangier, D., Yarats, D., Dauphin, Y.N.: Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122 (2017)
  • [10] Konstas, I., Iyer, S., Yatskar, M., Choi, Y., Zettlemoyer, L.: Neural AMR: sequence-to-sequence models for parsing and generation. CoRR (2017)
  • [11] Liu, F., Flanigan, J., Thomson, S., Sadeh, N., Smith, N.A.: Toward abstractive summarization using semantic representations. In: NAACL. pp. 1077–1086 (2015)
  • [12] Peng, X., Wang, C., Gildea, D., Xue, N.: Addressing the data sparsity issue in neural amr parsing. EACL (2017)
  • [13] Rao, S., Marcu, D., Knight, K., Daumé III, H.: Biomedical event extraction using abstract meaning representation. BioNLP 2017 pp. 126–135 (2017)
  • [14] Song, L., Zhang, Y., Peng, X., Wang, Z., Gildea, D.: Amr-to-text generation as a traveling salesman problem. In: EMNLP (2016)
  • [15] Takase, S., Suzuki, J., Okazaki, N., Hirao, T., Nagata, M.: Neural headline generation on abstract meaning representation. In: EMNLP. pp. 1054–1059 (2016)
  • [16] Wang, C., Pradhan, S., Pan, X., Ji, H., Xue, N.: Camr at semeval-2016 task 8: An extended transition-based amr parser. In: SemEval-2016. pp. 1173–1178 (June 2016)
  • [17] Wang, C., Xue, N., Pradhan, S.: Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In: ACL-IJCNLP (Vol 2). pp. 857–862 (July 2015)
  • [18] Wang, C., Xue, N., Pradhan, S.: A transition-based algorithm for amr parsing. In: NAACL:HLT. pp. 366–375 (May–June 2015)