A Survey on Semantic Parsing from the Perspective of Compositionality


Abstract

Different from previous surveys on semantic parsing (Kamath and Das, 2018) and knowledge base question answering (KBQA) (Chakraborty et al., 2019; Zhu et al., 2019; Höffner et al., 2017), we take a different perspective on the study of semantic parsing. Specifically, we focus on (a) meaning composition from syntactic structure (Partee, 1975), and (b) the ability of semantic parsers to handle lexical variation given the context of a knowledge base (KB). After an introduction to the field of semantic parsing and its use in KBQA, we describe meaning representation using the grammar formalism CCG (Steedman, 1996). We discuss semantic composition using formal languages in Section 2. In Section 3 we consider systems that use formal languages, e.g. λ-calculus (Steedman, 1996) and λ-DCS (Liang, 2013). Sections 4 and 5 consider semantic parsers that use structured languages for the logical form. Section 6 covers benchmark datasets such as ComplexQuestions (Bao et al., 2016) and GraphQuestions (Su et al., 2016) that can be used to evaluate semantic parsers on their ability to answer complex questions that are highly compositional in nature.


1 Introduction

One of the main challenges in Knowledge Base Question Answering (KBQA) is semantic parsing: the construction of a complete, formal, symbolic meaning representation (MR) of a sentence (Wong and Mooney, 2006). Most commonly used formal frameworks combine λ-calculus and first-order logic (FOL), e.g. CCG (Zettlemoyer and Collins, 2005) and λ-DCS (Liang, 2013). In KBQA, the logical expression further needs to be grounded in a knowledge base (KB), and the challenge here is lexical variation. The two main challenges that any KBQA system has to tackle are therefore language compositionality, evident in the choice of the formal language and the construction mechanism, and lexical variation, i.e. grounding words and phrases to the appropriate KB entities and relations.

Language Compositionality:

According to Frege’s principle of compositionality, “the meaning of a whole is a function of the meaning of the parts.” The study of formal semantics for natural language, i.e. how to appropriately represent sentence meaning, has a long history (Szabó, 2017). Most semantic formalisms follow the tradition of Montague grammar (Partee, 1975), i.e. there is a one-to-one correspondence between syntax and semantics, e.g. CCG (Zettlemoyer and Collins, 2005). We will not delve deeply into the theory of semantic representation in formal languages in this survey.

Lexical Variation:

Lexical variation in human language is huge. Differences between the surface form of words in natural language and the label of the corresponding entity/relation in the KB are mainly due to polysemy. For example, attend may be referred to by the label Education (Berant et al., 2013). Similarly, paraphrases of a sentence may use different phrases to mean the same thing, e.g. “What is your profession?”, “What do you do for a living?”, “What is your source of earning?” may all point to the label Profession (Berant and Liang, 2014).

The two challenges are elegantly summed up in a function by Mitchell and Lapata (2010): the combined meaning p of two symbols u and v is a function of the two lexical items, the syntactic relation R that connects them, and the context (background knowledge) K, i.e. p = f(u, v, R, K). We propose that the context K be the knowledge base (KB). KBQA provides an appropriate ground for testing different semantic parsing approaches empirically. Some KBQA systems use a semantic parser as a module in their pipeline (Reddy et al., 2014; Cheng et al., 2017), where the purview of semantic parsing is to produce the logical expression, and a downstream process takes up lexicon grounding or disambiguation. Other systems do not have such a separation, e.g. SEMPRE (Berant et al., 2013). However, both types of systems resort to some formal language or intermediate logical form. We exclude KBQA systems that use non-symbolic representations (Cohen et al., 2020).

We describe here some terminology commonly used in the study of semantic parsing (Diefenbach et al., 2018; Kamath and Das, 2018).

1. Intermediate logical form: represents the complete meaning of the natural language using a formal language, e.g. λ-calculus (Zettlemoyer and Collins, 2005; Hakimov et al., 2015), λ-DCS (Liang, 2013), or a structured language (Yih et al., 2015; Hu et al., 2018b). This is the main output of semantic parsing.
2. Phrase mapping: mapping phrases in the question to their corresponding resources in the KB is required to provide a real-world context to the intermediate logical form. This process is also called grounding of the logical form, and it yields a grounded logical form.
3. Disambiguation: of the many resources obtained in the phrase-mapping process, only a few will be right according to the semantics of the natural language.
4. Query construction: querying the KB endpoint requires translation of the grounded logical form into a query language, e.g. SPARQL. Translation from the grounded logical form to the query language is a deterministic process.

A toy walk-through of these four stages is sketched below.
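As a rough illustration of the four stages above, the following sketch (entirely ours; the predicate and entity identifiers such as dbo:birthPlace and dbr:Seattle are only placeholders, not a prescribed vocabulary) walks a toy question through intermediate logical form, phrase mapping, disambiguation and query construction.

    # A toy, end-to-end illustration of the four stages described above.
    # All names (predicates, entities, the question) are illustrative only.

    question = "Who was born in Seattle?"

    # 1. Intermediate (ungrounded) logical form, written here as a simple string.
    ungrounded = "lambda x. birth_place(x, 'Seattle')"

    # 2. Phrase mapping: look up candidate KB resources for each phrase.
    candidates = {
        "born in": ["dbo:birthPlace", "dbo:deathPlace"],    # relation candidates
        "Seattle": ["dbr:Seattle", "dbr:Seattle_Sounders"]  # entity candidates
    }

    # 3. Disambiguation: keep the candidates consistent with the question semantics
    #    (a real system would score them; here we take a hand-picked choice).
    grounded = {"relation": "dbo:birthPlace", "entity": "dbr:Seattle"}

    # 4. Query construction: deterministic translation into SPARQL.
    sparql = "SELECT ?x WHERE { ?x %s %s . }" % (grounded["relation"], grounded["entity"])
    print(sparql)   # SELECT ?x WHERE { ?x dbo:birthPlace dbr:Seattle . }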

2 Compositionality in Formal Semantics

Compositionality in natural language means that the meaning of a whole sentence is constructed from the meanings of its parts. According to Pelletier (2011) this is compositionality in the “functional sense”: something is compositional if it is a complex thing with some property that can be defined in terms of a function of the same property of its parts (with due consideration to the way the parts are combined). In formal semantics the complex thing is a syntactically complex sentence and the property of interest is meaning; while combining the parts, due consideration has to be given to how those parts are syntactically arranged in the complex sentence. The choice of a formal language to represent the meaning of a complex sentence greatly affects the capacity (expressiveness) of a system that tries to parse a natural language sentence into a functional composition of meaningful parts. Such a system is known as a semantic parser.

2.1 Formal Language

First-order predicate logic (FOPL) can be used to represent the meaning of a natural language sentence; however, it fails to represent some concepts in natural language, e.g. “How many primes are less than 10?” (Liang, 2016): FOPL does not have a function to count the number of elements of a set. A formal semantics can instead use a higher-order language such as λ-calculus, in which a higher-order function count exists that returns the number of elements of a set, so the question above can be represented as a count over the set of primes less than 10. Without going into further details of the formal language, consider an example showing the compositional use of λ-calculus to represent the meaning of a complex sentence, e.g. the sentence “Those who had children born in Seattle.” (Liang, 2013); a reconstruction of both logical forms is shown below.
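The concrete logical forms are missing from the text above; plausible reconstructions in the style of Liang (2013; 2016), with predicate names (prime, less, hasChild, placeOfBirth) chosen only for illustration, are:

    count(λx. prime(x) ∧ less(x, 10))                       "How many primes are less than 10?"
    λx. ∃y. hasChild(x, y) ∧ placeOfBirth(y, Seattle)       "Those who had children born in Seattle."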

Take another example showing coordination in λ-calculus: “square blue or round yellow pillow” (Artzi et al., 2013), whose representation is reconstructed below.
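The λ-expression itself did not survive extraction; a plausible reconstruction (predicate names ours) is:

    λx. pillow(x) ∧ ((square(x) ∧ blue(x)) ∨ (round(x) ∧ yellow(x)))

The disjunction scopes over the two modifier conjunctions while the head noun applies to the whole coordination, which is exactly the compositional behaviour the example is meant to illustrate.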

Many semantic parsing systems use only a subset of the operators available in λ-calculus; they are thus limited in expressiveness by their choice of operators, not by the choice of formal language.

2.2 Structured language

A graph-structured logical form or a tree-structured logical form can also be used to represent the meaning of a natural language sentence.

Graph-structured language

For example, the Semantic Query Graph (SQG) of Hu et al. (2018a) has nodes representing constants/values and edges representing relations. The edges can be seen as analogous to the binary relations of logical formalisms. This definition of SQG directly corresponds to many knowledge graphs such as DBpedia (Auer et al., 2007) and Freebase (Bollacker et al., 2008). With four primitive operations to manipulate a graph structure, connect and merge, which operate on a pair of nodes, and expand and fold, which operate on a single node (Hu et al., 2018b), and with higher-order functions attached to nodes (Yih et al., 2015), a graph structure makes a good candidate for the logical form of a natural language sentence, e.g. Figure 1. A minimal sketch of such a graph structure and its primitive operations is given after the figure.

Figure 1: logical form for “How many primes are less than 10?” as graph-structure
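The following is a minimal sketch (ours, not the actual data structures of Hu et al. (2018b)) of a semantic query graph equipped with the four primitive operations named above.

    # Minimal sketch of a semantic query graph (SQG) with the four primitive
    # operations mentioned above: connect, merge (pairs of nodes) and
    # expand, fold (single nodes). Simplified for illustration.

    class SQG:
        def __init__(self):
            self.nodes = {}        # node id -> label (entity/class/variable)
            self.edges = {}        # (node id, node id) -> relation label

        def add_node(self, nid, label):
            self.nodes[nid] = label

        def connect(self, a, b, relation):
            """Add a relation edge between two existing nodes."""
            self.edges[(a, b)] = relation

        def merge(self, a, b):
            """Collapse node b into node a (they refer to the same thing)."""
            self.edges = {
                (a if u == b else u, a if v == b else v): r
                for (u, v), r in self.edges.items()
            }
            self.nodes.pop(b, None)

        def expand(self, a, new_id, label, relation):
            """Grow the graph from node a by attaching a fresh node."""
            self.add_node(new_id, label)
            self.connect(a, new_id, relation)

        def fold(self, a):
            """Remove a leaf node together with its incident edges."""
            self.edges = {e: r for e, r in self.edges.items() if a not in e}
            self.nodes.pop(a, None)

    # "How many primes are less than 10?" as a tiny SQG:
    g = SQG()
    g.add_node("x", "prime")                # variable node typed as prime
    g.expand("x", "n", "10", "less_than")   # constraint edge to the constant 10
    # a count aggregation would be attached to node "x" in the style of Yih et al. (2015)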

Tree-structured language

Logical languages with a tree hierarchy can represent the corresponding hierarchy of natural language as well. FunQL (Cheng et al., 2017; Zelle and Mooney, 1996a; Kate et al., 2005) is a variable-free functional language encoding a tree hierarchy. It has a predicate-argument form and a recursive tree structure, where the non-terminals are predicates and the terminal nodes of the tree are arguments. For example, the sentence “which states do not border texas?” is represented by the FunQL expression reconstructed below.
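The FunQL expression itself is missing above; in GeoQuery-style FunQL it would look roughly like the following (the exact predicate names, e.g. exclude and next_to, vary between FunQL variants):

    answer(exclude(state(all), next_to(stateid('texas'))))

Here exclude removes from the set of all states those that border Texas, and answer marks the root of the tree.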

Dependency-based Compositional Semantics (DCS) (Liang et al., 2013) is another tree-structured logical form, where the logical form, a tree, is called a DCS tree. In its basic version DCS proposes only two operations, join and aggregate, and the full version comes with higher-order functions like argmax etc.; readers are referred to Liang et al. (2013). An example of a DCS tree as logical form is shown in Figure 2.

Figure 2: DCS tree for sentence “Major city in California”

The DCS tree aims to reduce the complexity of compositionally creating the logical form of a sentence; however, being a tree-structured logical form brings some limitations, for example it cannot be used to represent bound anaphora as in the sentence “those who had a child who influenced them” (Liang, 2013). A toy illustration of the two basic DCS operations is sketched below.
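As a toy illustration of the two basic DCS operations, the sketch below (ours; the tiny database and column conventions are invented, and aggregate is reduced to counting) evaluates a join of unary and binary predicates and an aggregate over the result.

    # Toy evaluation of DCS-style join and aggregate over a tiny database.
    # Unary predicates denote sets; binary predicates denote sets of pairs.

    DB = {
        "city":  {"seattle", "fresno", "san_francisco"},
        "major": {"seattle", "san_francisco"},                      # invented
        "loc":   {("seattle", "WA"), ("fresno", "CA"),
                  ("san_francisco", "CA")},                         # (city, state)
    }

    def join_unary(u1, u2):
        """Join on the single column: intersection of two unary denotations."""
        return DB[u1] & DB[u2]

    def join_binary(binary, objects):
        """x such that (x, y) is in the binary relation and y is in `objects`."""
        return {x for (x, y) in DB[binary] if y in objects}

    def aggregate(denotation):
        """Stand-in for the DCS aggregate operation, reduced here to counting."""
        return len(denotation)

    # "major city in California": major ∩ city ∩ loc.CA (informally)
    major_cities_in_ca = join_unary("major", "city") & join_binary("loc", {"CA"})
    print(major_cities_in_ca)             # {'san_francisco'}
    print(aggregate(major_cities_in_ca))  # 1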

3 Logic Based Formalism

Many semantic parsing systems use higher-order formal logic to represent the meaning of natural language sentences, e.g. λ-calculus (Zettlemoyer and Collins, 2005) and λ-DCS (Berant et al., 2013; Berant and Liang, 2014). First-order predicate logic can only express simple natural language sentences of the yes/no type or ones that seek the set of elements fulfilling a logical expression. To operate on sets of elements the formal languages are augmented with higher-order functions, e.g. count(A), which returns the cardinality of set A. λ-calculus (Carpenter, 1997) is a higher-order functional language; it is more expressive and can represent natural language constructs like counting, superlatives, etc. λ-DCS (Liang, 2013), which does not use explicit existential variables, borrows heavily from λ-calculus. The logical formalism introduced by Liang et al. (2013) as the DCS tree will be discussed in Section 5.

3.1 Statistical Models

Zettlemoyer and Collins (2005) used λ-calculus as the intermediate logical form to represent the meaning of a natural language sentence. The construction mechanism requires an initial set of CCG lexical entries, where each lexical item is also assigned a semantic meaning using λ-abstractions, e.g. for the sentence “Utah borders Idaho”. Given a sentence and its logical expression, a rule-based function creates lexical entries specific to that sentence. Together with the initial lexicon, this new set forms the search space for the probabilistic CCG parser. The parser uses beam search to come up with a high-probability parse of the sentence. A sequence of derivation steps is used by the parser to reach the logical form; however, these are treated as hidden variables while learning the parameters of the parser. The set of lexical entries swells when trained over larger datasets, even though the algorithm only adds lexical entries that are part of the final logical expression. This work was state of the art at the time in terms of precision and recall on the Geo880 dataset. A reconstruction of the kind of CCG lexical entries used for the example sentence is given below.
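The lexical entries themselves are omitted above; for this example they have roughly the following form (word := syntactic category : λ-term), following the notation used by Zettlemoyer and Collins (2005):

    Utah    := NP : utah
    Idaho   := NP : idaho
    borders := (S\NP)/NP : λx.λy.borders(y, x)

Forward application to Idaho and backward application to Utah then derive S : borders(utah, idaho) for the whole sentence.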

Berant et al. (2013) was perhaps the first work using a simplified λ-DCS as the intermediate logical form; their system is called SEMPRE. The logical form is built recursively in a bottom-up fashion, starting from a lexicon and leading up to the final logical form. The lexicon maps natural language phrases to logical predicates in the knowledge graph (KG). The process of determining the lexicon involves creating typed NL phrases by aligning a text corpus with the KG (Lin and Etzioni, 2012); the set of supporting entity pairs is then determined, and a logical predicate is paired with a phrase as a lexical entry if they share sufficiently many supporting entity pairs. Using typed phrases also helps in tackling polysemy, e.g. “born in” could mean PlaceOfBirth or DateOfBirth, which can be distinguished using the typed phrases “born in”[Person, Location] and “born in”[Person, Date] respectively. For composing lexical entries into a logical form there is a small set of rules: two logical forms can undergo join, aggregation, intersection or bridging. The last operation, bridging, is used when the relation between two entities is weakly implied or implicit: if the two unaries have types, bridging introduces all binary predicates whose type signature matches those types, and the chosen binary connects the two logical forms. During the process of generating the logical form, a feature vector takes shape as well, which is later used to score the candidate logical forms with a log-linear model. At evaluation time the most probable logical form is used. SEMPRE is evaluated with F1 on WebQuestions, which Berant et al. (2013) introduced. The composition operations are summarized schematically below.
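Schematically, in (simplified) λ-DCS notation, the composition operations described above can be written as follows, where z, z1 and z2 are logical forms, p is a binary predicate, and b is a bridging binary whose type signature matches the two unaries (the concrete predicate names are ours, for illustration only):

    join:          p.z          e.g. PlaceOfBirth.Seattle
    intersection:  z1 ⊓ z2      e.g. Profession.Politician ⊓ PlaceOfBirth.Seattle
    aggregation:   count(z)
    bridging:      z1 ⊓ b.z2    where b is injected because the relation is only implicit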

Discussion

Of the two systems described above, Zettlemoyer and Collins (2005) use λ-calculus with additional operators (count, argmax and the definite operator) besides the universal and existential quantifiers. Berant et al. (2013), on the other hand, simplify the verbosity of the expressions by not using existential quantifiers, treating them as implicit, and borrow other operators of λ-calculus when necessary. Qualitatively, both logical forms are equally expressive. The difference between the two approaches lies in the way they carry out the learning of the parser: Zettlemoyer and Collins use a fully supervised approach with a fully annotated dataset of sentence and logical-expression pairs, while Berant et al. use weak supervision with a dataset of question-answer pairs. We cannot compare these two models empirically here because they have been applied to two different datasets.

3.2 Neural Encoder-Decoder Models

A neural encoder-decoder architecture can learn to generate a formal logical expression representing the meaning of a sentence. There are many works employing the encoder-decoder architecture in the context of semantic parsing, e.g. Dong and Lapata (2016); Jia and Liang (2016); Dong and Lapata (2018). Unlike the statistical models described above, which require feature engineering and a high-quality lexicon (Zettlemoyer and Collins, 2005), neural network based models can be trained to learn the features required for semantic parsing by themselves.

Dong and Lapata (2016) consider the problem of semantic parsing as a sequence transduction task, converting a sequence of words in natural language into a sequence of terms of the logical form. The encoder-decoder architecture is made of an LSTM encoder and an LSTM decoder. Each word index in the NL sentence is first converted to a vector using an embedding matrix and then passed to the encoder’s input layer, one by one in sequence, until the encoder has seen all the words in the input sentence. The following time steps belong to the decoder. The input to the decoder at its first time step is the hidden state of the encoder and the index of the start-of-sequence symbol. At each subsequent time step the decoder takes at its input layer the word vector corresponding to the previously predicted token and outputs a probability distribution over the output vocabulary of logical tokens/predicates. At inference time the next token in the output sequence is obtained by a greedy (first-best) search over the output vocabulary using the conditional probability returned by the model. Their seq2tree architecture additionally considers the hierarchical nature of the logical form: it generates a tree-structured logical form recursively (Figure 3). They evaluate their systems in a closed domain on the Geo880 dataset, where the seq2tree model performs best among their variants. A minimal sketch of such a sequence-to-sequence model is given after the figure.

Figure 3: tree-structure for logical form AB(C)
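A minimal sketch of the sequence-to-sequence model described above is given below (ours, using PyTorch; the vocabulary sizes, dimensions and greedy decoding loop are illustrative, and the attention mechanism that Dong and Lapata (2016) also use is omitted).

    import torch
    import torch.nn as nn

    class Seq2SeqParser(nn.Module):
        """LSTM encoder-decoder mapping a word sequence to a logical-token sequence."""
        def __init__(self, src_vocab, tgt_vocab, dim=128):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, dim)
            self.tgt_emb = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.LSTM(dim, dim, batch_first=True)
            self.decoder = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, src_ids, max_len=50, sos_id=1, eos_id=2):
            # Encode the whole NL sentence; the final hidden state seeds the decoder.
            _, state = self.encoder(self.src_emb(src_ids))
            token = torch.full((src_ids.size(0), 1), sos_id, dtype=torch.long)
            outputs = []
            for _ in range(max_len):
                dec_out, state = self.decoder(self.tgt_emb(token), state)
                logits = self.out(dec_out[:, -1])            # distribution over logical tokens
                token = logits.argmax(dim=-1, keepdim=True)  # greedy (first-best) choice
                outputs.append(token)
                if (token == eos_id).all():
                    break
            return torch.cat(outputs, dim=1)

    # usage sketch: parser = Seq2SeqParser(src_vocab=5000, tgt_vocab=300)
    #               logical_tokens = parser(torch.tensor([[4, 17, 23, 8]]))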

Jia and Liang (2016) also employ an encoder-decoder architecture and train their model on an augmented dataset. The augmented dataset is obtained using production rules of a context-free grammar, e.g. replacing an entity by its type, or replacing a word of a given type by a whole phrase whose type checks out. A toy sketch of this kind of data recombination is given below.
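The following toy sketch (ours; the grammar and examples are invented) illustrates the entity-abstraction flavour of this data recombination: an entity in an existing (question, logical form) pair is abstracted to its type and then re-instantiated with another entity of the same type.

    import random

    # One annotated example and a few same-typed entities to swap in.
    example = ("what states border texas ?",
               "answer(state(next_to(stateid('texas'))))")
    states = ["texas", "ohio", "iowa"]

    def recombine(question, logical_form, entity, replacements):
        """Abstract `entity` to a typed placeholder, then re-instantiate it."""
        q_template = question.replace(entity, "STATEID")
        lf_template = logical_form.replace(entity, "STATEID")
        new_entity = random.choice(replacements)
        return (q_template.replace("STATEID", new_entity),
                lf_template.replace("STATEID", new_entity))

    aug_q, aug_lf = recombine(*example, entity="texas", replacements=states)
    print(aug_q)   # e.g. "what states border iowa ?"
    print(aug_lf)  # e.g. "answer(state(next_to(stateid('iowa'))))"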

Dong and Lapata (2018) improve slightly upon their previous work by introducing an intermediate decoder called the sketch decoder. The network thus contains an encoder for the input sentence, a decoder that produces a coarse sketch, an encoder for that sketch, and a final decoder that produces the full logical form. The idea is to gloss over low-level information such as variable names and their values to create a coarse representation of the input sentence, which can then guide the output decoder into generating better-formed logical expressions.

Discussion:

LSTMs are convenient engineering tools for sequence transduction, but they do not learn the rules of a grammar (Sennhauser and Berwick, 2018). Sennhauser and Berwick (2018) used 4 production rules to generate a Dyck-2 language, i.e. strings of matched brackets ‘[’ ‘]’ and ‘{’ ‘}’, e.g. the string “{[{}[]]}”. They showed that LSTMs do not generalise well: the error rate on out-of-sample test data is 8-14 times higher than the in-sample error rate of 0.3% for an LSTM with 50 hidden units. They observe that LSTMs learn sequential statistical correlations and do not store irrelevant information, but fail to learn the four rules of the CFG used to generate the language. They conclude that novel architectures may be required to learn grammar rules, and there are works in that direction, e.g. dynamic network architectures (Looks et al., 2017) or hardwired structural constraints (Kiperwasser and Goldberg, 2016; Joulin and Mikolov, 2015). A small generator for such Dyck-2 strings is sketched below.
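For concreteness, the sketch below (ours) generates Dyck-2 strings from the four production rules S → [S], S → {S}, S → S S, S → ε used in this style of experiment.

    import random

    # Four CFG production rules for the Dyck-2 language of matched brackets:
    #   S -> [S]   |   {S}   |   S S   |   epsilon
    def generate(depth=0, max_depth=6):
        rule = random.choice(["square", "curly", "concat", "empty"])
        if depth >= max_depth or rule == "empty":
            return ""
        if rule == "square":
            return "[" + generate(depth + 1, max_depth) + "]"
        if rule == "curly":
            return "{" + generate(depth + 1, max_depth) + "}"
        return generate(depth + 1, max_depth) + generate(depth + 1, max_depth)

    print(generate())   # e.g. "{[{}[]]}" -- a well-nested Dyck-2 string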

4 Graph based formalism

Semantic parsers using a graph-based formalism resort to a labeled graph, also called a Semantic Query Graph (SQG), as the semantic representation of the natural language (NL) sentence. Graphs as an abstraction are widely used, and labeled graphs are a good way to represent the meaning of an NL sentence, where the nodes represent concepts or entities and the edges represent their inter-dependencies. Besides, the graph is also the language of many knowledge graphs (KG). There are many works using an SQG as the logical formalism, such as Reddy et al. (2014); Yih et al. (2015); Bao et al. (2016); Hu et al. (2018a, b).

Semantic Query Graph:

A semantic query graph (SQG), as defined by Hu et al. (2018a), is a graph in which each vertex is associated with an entity phrase, class phrase or wild-card in the natural language (NL) sentence, and each edge is associated with a relation phrase in the NL sentence.

The semantic parsing pipeline used by Reddy et al. (2014) takes a natural language (NL) sentence and uses the syntactic CCG parser of Clark and Curran (2004). A CCG grammar provides a one-to-one correspondence between syntax and semantics, the syntax being the word category and the semantics being represented by a λ-expression; together they are referred to as the CCG lexicon. Further in the pipeline, a graph parser parses the λ-expression into a labeled graph. The nodes in the graph may be attached to mathematical functions like unique, count, etc. as required by the semantics of the NL sentence. The labeled graph is not yet contextualized against a world/KG and is therefore also called an ungrounded graph. The contextualization of the nodes and edges of the ungrounded graph against a KG is carried out using beam search: given an ungrounded graph u, the labels on its nodes and edges are run through the KG to find possible candidate entities and relations respectively, obtaining a set of possible grounded graphs {g}. A structured perceptron is then used to wade through the candidate graphs by scoring their feature vectors as a dot product with the model parameters, as sketched below.
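Schematically (the notation here is ours, not that of Reddy et al. (2014)), the structured perceptron scores a grounded graph g for an ungrounded graph u and sentence s as a dot product with the parameters θ, and updates θ when the highest-scoring graph does not yield the correct denotation:

    score(g; u, s) = θ · φ(u, g, s)
    ĝ = argmax over candidate grounded graphs g of score(g; u, s)
    θ ← θ + φ(u, g⁺, s) − φ(u, ĝ, s)   if ĝ does not retrieve the correct answer,
                                        where g⁺ is a candidate graph that does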

Yih et al. (2015) try to bring part of λ-calculus into a graphical representation by introducing variable nodes (corresponding to existential variables) and an answer node (corresponding to the bound variable of the λ-expression), as well as aggregation functions such as argmin, argmax and count, into the semantic query graph. The generation of a query graph is formulated as a transition over states, each state being a subgraph of the KG and thus always grounded. Starting with an empty query graph, the process moves to a state with only an entity node, then adds the core path to the answer node, and finally adds constraints to obtain the full query graph. The transition from one state to another is defined in terms of a well-defined set of actions, i.e. adding entity nodes, path nodes, constraint nodes and aggregations respectively. Candidate states that contain only a single entity node are decided using an entity linking system (Yang and Chang, 2016). Extending to the core-path state requires making 1-hop/2-hop expansions and scoring all candidate paths for a possible predicate by comparing its similarity with the question pattern (obtained from the question after replacing the entity with the placeholder <e>) using two CNN models. The construction of the final SQG requires adding constraints (entity/class) or mathematical functions, which is guided by heuristics, e.g. of all the resource nodes in the KG attached to variable nodes, select one if it is an entity occurring in the question. The candidate SQGs thus obtained are ranked by the F1 score of the answers they retrieve when executed on the KG. A schematic of this staged generation is sketched below.
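The sketch below (ours; the state representation, action names and scoring function are simplified placeholders rather than the components of Yih et al. (2015)) shows the shape of this staged, always-grounded generation with a small beam.

    # Simplified sketch of staged query graph generation: states are partial,
    # grounded query graphs and a small set of actions extends them stage by
    # stage. All identifiers and the scoring function are placeholders.

    def link_entities(question):          # stage 1: candidate topic entities
        return [{"entity": e} for e in ["m.texas", "m.texas_movie"]]

    def add_core_paths(state):            # stage 2: 1-hop / 2-hop predicate paths
        return [dict(state, path=p) for p in ["borders", "location..contains"]]

    def add_constraints(state):           # stage 3: constraint / aggregation nodes
        return [state, dict(state, constraint="type.state")]

    def score(state, question):           # stand-in for the CNN-based matchers
        return -len(str(state))           # placeholder: prefer simpler graphs

    def generate(question, beam_size=3):
        beam = link_entities(question)
        for expand in (add_core_paths, add_constraints):
            candidates = [s for state in beam for s in expand(state)]
            beam = sorted(candidates, key=lambda s: score(s, question),
                          reverse=True)[:beam_size]
        return beam    # candidate query graphs, to be ranked by answer F1

    print(generate("which states border texas ?"))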

Hu et al. introduce two frameworks, RFF (Relation (edge)-First Framework) and NFF (Node-First Framework), for semantic parsing built upon the semantic query graph (SQG). The construction mechanism in both frameworks uses the dependency parse of the sentence. In RFF, a relation mention is a subtree of the dependency parse; the relation mention should have a matching relation in a predefined relation set, and a matching relation must have all its words in that subtree. The nodes/arguments associated with a relation edge are determined based on the POS tags emerging from the subtree. Further, the relation mentions and the node/argument phrases, which are in their surface form, are mapped to predicates/predicate paths and entities/classes respectively in the KG. The mapping from relation mentions to predicates/predicate paths uses a relation mention dictionary that utilizes relation mentions and their supporting entities, as in PATTY (Nakashole et al., 2012). Similarly, the mapping of node/argument phrases uses an entity mention dictionary, CrossWikis, by Spitkovsky and Chang (2012).

The second framework, NFF, is claimed to be more robust to errors in dependency parsing than RFF. RFF uses the dependency tree to determine relation mentions, dependency structure and POS tags, whereas NFF uses the dependency tree only to decide whether there should be an edge between two entity nodes. Another advantage of NFF is that it allows unlabelled edges between entities to represent implicit relations, e.g. an unlabelled edge between the nodes Chinese and actor; the implicit relations are resolved during query evaluation. NFF first extracts all entity mentions using a dictionary-based approach (Deng et al., 2015), then uses the dependency tree to introduce edges between nodes: wherever two entity mentions are adjacent in the dependency tree, the relation edge in the SQG is kept unlabelled (implicit relation), and where they are far apart, the words on the dependency path make up the label of the relation edge in the SQG. The query evaluation in both frameworks is similar, i.e. finding the top-k matching subgraphs in the KG corresponding to an SQG. Each node and edge in the SQG comes with a candidate list, and each candidate gets a tf-idf score when retrieved from the dictionaries; the only difference is that an SQG in NFF may have a few edges left unmatched. Hu et al. (2018a) also propose a bottom-up approach of forming the SQG and finding a correct match: starting with a single node of the SQG and finding a match in the KG, then expanding the node to a partial SQG and scoring its corresponding match.

Sun et al. (2020) allude to the fact that current dependency parsers err on longer and more complex sentences, and propose to first do a coarse (skeleton) parsing of the complex sentence into auxiliary clauses before passing it to NFF (Hu et al., 2018a) for fine-grained semantic parsing. The skeleton parsing is modeled as a set of four steps: 1. identifying whether the sentence can be split into a main clause and an auxiliary clause, 2. identifying the text span in the sentence that makes up the auxiliary clause when the first step is true, 3. identifying the headword in the sentence that governs the text span, and 4. identifying the dependency relation between the headword and the text span. The four steps are modelled using BERT (Devlin et al., 2019): four models are fine-tuned for single-sentence classification (SSC), question answering (QA), question answering and sentence-pair classification (SPC) respectively for steps 1, 2, 3 and 4. Different from NFF, which uses tf-idf scores for ranking candidates and the matched query graph, Sun et al. use a sentence-level scorer and a word-level scorer. The sentence-level scorer scores the similarity of a test sentence against a training sentence, favoring training sentences whose underlying query graph, when transferred to the test sentence and executed on the KG, retrieves a non-empty result; it is a BERT model fine-tuned for SPC. The word-level scorer scores the bag of words (BOW) of the sentence (after removing entities and stop words) against the BOW of the query graph, which mainly consists of predicates, thus scoring the appropriateness of the predicates used in the SQG.

Discussion:

We summarized four semantic parsing systems above which use a graph as the intermediate logical form. They differ mainly in the way they generate the SQG: while Reddy et al. (2014) and Hu et al. (2018a) use a syntactic parser (CCG and dependency trees respectively), Yih et al. (2015) do not use any syntactic parser; they use a state-transition based approach to generate the SQG, keeping partial states always grounded. On WebQuestions, STAGG (Yih et al., 2015) achieved state-of-the-art F1 at the time of publication. A state-transition based approach with a different set of actions (connect, merge, expand and fold) is used in a later work by Sen Hu (Hu et al., 2018b), which reports comparable F1 on WebQuestions.

5 Tree structured logical form

Cheng et al. (2017) use a nested, tree-structured logical form known as FunQL. FunQL has a predicate-argument structure, where the predicate is a non-terminal (NT) and the arguments may be another sub-tree or a terminal node. The s-expression in FunQL, besides telling how the semantics are composed, also tells how it can be derived by nesting words of the natural language taken either as terminal or non-terminal nodes of the tree. The semantic parsing system of Cheng et al. (2017) separates the generation of the logical form and the mapping of the lexicon to knowledge base entities and relations into two different stages. The logical form generation is done in a task-independent fashion, and the logical form obtained is called the ungrounded logical form. The model generates the ungrounded logical form by applying a sequence of actions from an action set to a stack of s-expressions, choosing words from an input buffer (used to store the words of the sentence). The model is trained to learn a distribution over the possible actions and over the word to be chosen from the input buffer, conditioned on the sentence and the state of the stack of s-expressions. The stack is represented using a stack-LSTM (Dyer et al., 2015); the state of the input buffer is encoded using a Bi-LSTM (Hochreiter and Schmidhuber, 1997) and is adaptively weighted at each time step using the stack state. Schematically, the two probabilities have the form

p(a_t | context) ∝ exp(W_a [s_t ; b_t])   and   p(u_t | context) ∝ exp(W_u [s_t ; b_t]),

where [s_t ; b_t] is the concatenation of the stack-LSTM state s_t and the adaptively weighted buffer representation b_t, and W_a, W_u are weight matrices. The mapping of the lexicon to database entities and predicates is done using a bi-linear neural network.

The training objective is to maximize the likelihood of the grounded logical form; the ungrounded logical form is therefore treated as a latent variable and marginalised over. A toy version of the stack-based derivation is sketched below.
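The sketch below (ours; the action names NT, TER and RED and the example derivation are illustrative rather than the exact transition system of Cheng et al. (2017)) shows how a sequence of stack actions can build an ungrounded FunQL-style s-expression.

    # Toy transition system building an ungrounded s-expression on a stack.
    # Actions: NT(x) pushes an open non-terminal, TER(w) pushes a terminal
    # taken from the input buffer, RED closes the most recent non-terminal.

    def execute(actions):
        stack = []
        for act, arg in actions:
            if act == "NT":
                stack.append([arg])                # open predicate node
            elif act == "TER":
                stack.append(arg)                  # terminal word/entity
            elif act == "RED":
                # pop completed children back to the most recent open non-terminal
                children = []
                while not isinstance(stack[-1], list):
                    children.append(stack.pop())
                node = stack.pop()
                expr = "%s(%s)" % (node[0], ", ".join(reversed(children)))
                stack.append(expr)
        return stack[0]

    derivation = [("NT", "answer"), ("NT", "state"), ("NT", "next_to"),
                  ("TER", "texas"), ("RED", None), ("RED", None), ("RED", None)]
    print(execute(derivation))   # answer(state(next_to(texas)))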

Discussion

The transition-based approach with just a few tree operations can generate all tree-structured candidate logical forms. This makes a good case for its adoption where the inherent structure of the target language is tree-like.

6 Benchmark Datasets

The benchmark datasets used to evaluate the systems discussed in this survey are described in Table 1. Benchmark datasets have grown in complexity as well as in size over time, partly driven by the demands of larger and larger machine learning models, which require large amounts of data to train, and partly by the search for better semantic parsers able to parse complex and varied sets of questions.

Geo880:

Introduced by Zelle and Mooney (1996b), this is a closed-domain dataset of questions related to US geography. Semantic parsing systems based on the encoder-decoder architecture seem to favour closed-domain datasets, since these purely evaluate the ability of the semantic parser to generate the correct logical form.

WebQuestions

Berant et al. (2013) introduced this dataset, collected from the Google Suggest API. It consists of question-denotation pairs grounded in Freebase (Bollacker et al., 2008). Yih et al. (2016) later annotated the data with logical forms, to show that such annotation helps learning.

GraphQuestions

Su et al. (2016) introduced this dataset, which was obtained by presenting Amazon Mechanical Turk workers with 500 Freebase graph queries and asking them to verbalise them into natural language.

ComplexWebQuestions

ComplexWebQuestions v1.1 (Talmor and Berant, 2018) is a dataset of complex questions split into train, test and dev sets. Each question comes with a SPARQL query that can be executed against Freebase (Bollacker et al., 2008), as well as a set of web snippets (366.8 snippets per question on average) that can be used by a reading comprehension model to find the answer to the question. The dataset was created using seeds from WebQuestionsSP (Yih et al., 2016): a seed SPARQL query is taken from WebQuestionsSP and combined with another fact from Freebase according to a set of rules. The SPARQL query thus formed is complex; it is then automatically translated into natural language using templates. The question so formed is not grammatically correct but is understandable, so Amazon Mechanical Turk workers are asked to paraphrase it into a grammatically correct form.

ComQA

The questions in ComQA (Abujabal et al., 2018) come from the WikiAnswers community QA platform. The 11,214 questions are divided into 4,834 clusters of paraphrases with the help of crowd-sourcing. The questions are real and not based on templates. There is a variety of question types, such as simple, temporal, compositional (requiring answers to simple parts first before the final answer is possible), comparison (comparative, superlative and ordinal), telegraphic (keyword queries), tuple (connected entities form an answer) and empty (questions with no answers).

Dataset | Source | Pairs | Train-Test-Dev
Geo880 (Zelle and Mooney, 1996b) | - | Q-LF | 600, -, 280
WebQuestions (Berant et al., 2013) | Google Suggest | Q-Ans | 5810, -, -
GraphQuestions (Su et al., 2016) | Freebase | Q-LF | 5166, -, -
ComplexWebQuestions (Talmor and Berant, 2018) | WebQuestionsSP | Q-SPARQL | 34689, -, -
ComQA (Abujabal et al., 2018) | WikiAnswers | Q-LF | 11214, -, -
Table 1: Benchmark datasets

References

  1. Comqa: a community-sourced dataset for complex factoid question answering with paraphrase clusters. arXiv preprint arXiv:1809.09528. Cited by: §6, Table 1.
  2. Semantic parsing with combinatory categorial grammars.. ACL (Tutorial Abstracts) 3. Cited by: §2.1.
  3. Dbpedia: a nucleus for a web of open data. In The semantic web, pp. 722–735. Cited by: §2.2.
  4. Constraint-based question answering with knowledge graph. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 2503–2514. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality, §4.
  5. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1533–1544. Cited by: §1, §1, §3.1, §3.1, §3, §6, Table 1.
  6. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1415–1425. Cited by: §1, §3.
  7. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1247–1250. Cited by: §2.2, §6, §6.
  8. Type-logical semantics. MIT press. Cited by: §3.
  9. Introduction to neural network based approaches for question answering over knowledge graphs. arXiv preprint arXiv:1907.09361. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality.
  10. Learning structured natural language representations for semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, pp. 44–55. External Links: Link, Document Cited by: §1, §2.2, §5.
  11. Parsing the WSJ using CCG and log-linear models. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), Barcelona, Spain, pp. 103–110. External Links: Link, Document Cited by: §4.
  12. Scalable neural methods for reasoning with a symbolic knowledge base. arXiv preprint arXiv:2002.06115. Cited by: §1.
  13. A unified framework for approximate dictionary-based entity extraction. The VLDB Journal 24 (1), pp. 143–167. Cited by: §4.
  14. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. External Links: Link, Document Cited by: §4.
  15. Core techniques of question answering systems over knowledge bases: a survey. Knowledge and Information systems 55 (3), pp. 529–569. Cited by: §1.
  16. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 33–43. Cited by: §3.2, §3.2.
  17. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 731–742. Cited by: §3.2, §3.2.
  18. Transition-based dependency parsing with stack long short-term memory. arXiv preprint arXiv:1505.08075. Cited by: §5.
  19. Applying semantic parsing to question answering over linked data: addressing the lexical gap. In Natural Language Processing and Information Systems: 20th International Conference on Applications of Natural Language to Information Systems, NLDB 2015, Passau, Germany, June 17-19, 2015, Proceedings, Vol. 9103. Cited by: §1.
  20. Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §5.
  21. Survey on challenges of question answering in the semantic web. Semantic Web 8 (6), pp. 895–920. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality.
  22. Answering natural language questions by subgraph matching over knowledge graphs. IEEE Transactions on Knowledge and Data Engineering 30 (5), pp. 824–837. Cited by: §2.2, §4, §4, §4, §4, §4.
  23. A state-transition framework to answer complex questions over knowledge base. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2098–2108. Cited by: §1, §2.2, §4, §4.
  24. Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622. Cited by: §3.2, §3.2.
  25. Inferring algorithmic patterns with stack-augmented recurrent nets. External Links: 1503.01007 Cited by: §3.2.
  26. A survey on semantic parsing. arXiv preprint arXiv:1812.00978. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality, §1.
  27. Learning to transform natural to formal languages. In AAAI, pp. 1062–1068. Cited by: §2.2.
  28. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics 4, pp. 313–327. External Links: ISSN 2307-387X, Link, Document Cited by: §3.2.
  29. Learning dependency-based compositional semantics. Computational Linguistics 39 (2), pp. 389–446. Cited by: §2.2, §3.
  30. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality, §1, §1, §2.1, §2.2, §3.
  31. Learning executable semantic parsers for natural language understanding. Commun. ACM 59 (9), pp. 68–76. External Links: ISSN 0001-0782, Link, Document Cited by: §2.1.
  32. Entity linking at web scale. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), pp. 84–88. Cited by: §3.1.
  33. Deep learning with dynamic computation graphs. External Links: 1702.02181 Cited by: §3.2.
  34. Composition in distributional models of semantics. Cognitive science 34 (8), pp. 1388–1429. Cited by: §1.
  35. PATTY: a taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 1135–1145. Cited by: §4.
  36. Montague grammar and transformational grammar. Linguistic Inquiry 6 (2), pp. 203–300. External Links: ISSN 00243892, 15309150, Link Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality, §1.
  37. Compositionality. External Links: Document, Link Cited by: §2.
  38. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics 2, pp. 377–392. External Links: Link, Document Cited by: §1, §4, §4, §4.
  39. Evaluating the ability of lstms to learn context-free grammars. arXiv preprint arXiv:1811.02611. Cited by: §3.2.
  40. A cross-lingual dictionary for english wikipedia concepts. Cited by: §4.
  41. Surface structure and interpretation. MIT press. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality.
  42. On generating characteristic-rich question sets for qa evaluation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 562–572. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality, §6, Table 1.
  43. SPARQA: skeleton-based semantic parsing for complex questions over knowledge bases.. In AAAI, pp. 8952–8959. Cited by: §4.
  44. Compositionality. In The Stanford Encyclopedia of Philosophy, E. N. Zalta (Ed.), Note: \urlhttps://plato.stanford.edu/archives/sum2017/entries/compositionality/ Cited by: §1.
  45. Repartitioning of the complexwebquestions dataset. arXiv preprint arXiv:1807.09623. Cited by: §6, Table 1.
  46. Learning for semantic parsing with statistical machine translation. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pp. 439–446. Cited by: §1.
  47. S-mart: novel tree-based structured learning algorithms applied to tweet entity linking. arXiv preprint arXiv:1609.08075. Cited by: §4.
  48. Semantic parsing via staged query graph generation: question answering with knowledge base. Cited by: §1, §2.2, §4, §4, §4.
  49. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 201–206. Cited by: §6, §6.
  50. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pp. 1050–1055. Cited by: §2.2.
  51. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pp. 1050–1055. Cited by: §6, Table 1.
  52. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI’05, Arlington, Virginia, USA, pp. 658–666. External Links: ISBN 0974903914 Cited by: §1, §1, §1, §3.1, §3.1, §3.2, §3.
  53. Statistical learning for semantic parsing: a survey. Big Data Mining and Analytics 2 (4), pp. 217–239. Cited by: A Survey on Semantic Parsing from the Perspective of Compositionality.