Compositional Operators in Distributional Semantics1

Abstract

This survey presents in some detail the main advances that have recently taken place in Computational Linguistics towards unifying two prominent semantic paradigms: the compositional formal semantics view and the distributional models of meaning based on vector spaces. After an introduction to these two approaches, I review the most important models that aim to provide compositionality in distributional semantics. I then present in more detail a particular framework (Coecke et al. 2010) based on the abstract mathematical setting of category theory, as a more complete example capable of demonstrating the diversity of techniques and scientific disciplines that this kind of research can draw from. The paper concludes with a discussion of important open issues that researchers will need to address in the future.

Keywords: natural language processing; distributional semantics; compositionality;

vector space models; formal semantics; category theory; compact closed categories

1 Introduction

Recent developments in the syntactic and morphological analysis of natural language text constitute the first step towards a more ambitious goal: assigning a proper form of meaning to arbitrary text compounds. Indeed, for truly “intelligent” applications, such as machine translation, question answering, paraphrase detection, or automatic essay scoring, to name just a few, there will always be a gap between raw linguistic information (part-of-speech labels, for example) and the knowledge of the real world that is needed to complete the task satisfactorily. Semantic analysis has exactly this role: it aims to close (or reduce as much as possible) this gap by linking the linguistic information with semantic representations that embody this elusive real-world knowledge.

The traditional way of adding semantics to sentences is a syntax-driven compositional approach: every word in the sentence is associated with a primitive symbol or a predicate, and these are combined into larger and larger logical forms based on the syntactic rules of the grammar. At the end of the syntactic analysis, the logical representation of the whole sentence is a complex formula that can be fed to a theorem prover for further processing. Although such an approach seems intuitive, it has been shown to be rather inefficient for practical applications (Bos and Markert (2006), for example, report very low recall scores for a textual entailment task). Even more importantly, the meaning of the atomic units (the words) is captured in an axiomatic way, namely by ad hoc, unexplained primitives that say nothing about the real semantic value of the specific words.

On the other hand, distributional models of meaning work by building co-occurrence vectors for every word in a corpus based on its context, following Firth’s intuition that “you shall know a word by the company it keeps” (Firth 1957). These models have proven useful in many natural language tasks (see Section 4) and can provide concrete information about the words of a sentence, but they do not scale up to larger text constituents, such as phrases or sentences. Given the complementary nature of these two approaches, it is no surprise that the compositional abilities of distributional models have been the subject of much discussion and research in recent years. Towards this purpose, researchers have exploited a wide variety of techniques, ranging from simple mathematical operations like addition and multiplication to neural networks and even category theory. The purpose of this paper is to provide a concise survey of the developments that have taken place towards the goal of equipping distributional models of meaning with compositional abilities.

The plan is the following: In Sections 2 and 3 I provide an introduction to compositional and distributional models of meaning, respectively, explaining the basic principles and assumptions on which they rely. I then review the most important methods aiming at their unification (Section 4). As a more complete example of such a method (and as a demonstration of the multidisciplinarity of Computational Linguistics), Section 5 describes the framework of Coecke et al. (2010), based on the abstract setting of category theory. Section 6 provides a closer look at the form of a sentence space, and at how our sentence-producing functions (i.e. the verbs) can be built from a large corpus. Finally, Section 7 discusses important philosophical and practical open questions and issues that form part of current and future research.

2 Compositional semantics

Compositionality in semantics offers an elegant way to address the inherent property of natural language to produce infinite structures (phrases and sentences) from finite resources (words). The principle of compositionality states that the meaning of a complex expression is determined by the meanings of its constituents and the rules used for combining them. The idea is quite old, and glimpses of it can be spotted even in the works of Plato. In his dialogue Sophist, Plato argues that a sentence consists of a noun and a verb, and that the sentence is true if the verb denotes the action that the noun is currently performing. In other words, Plato argues that (a) a sentence has a structure; (b) the parts of the sentence have different functions; and (c) the meaning of the sentence is determined by the function of its parts. Nowadays this intuitive idea is often attributed to Gottlob Frege, who expresses similar views in “The Foundations of Arithmetic”, originally published in 1884. In an undated letter to Philip Jourdain, included in “Philosophical and Mathematical Correspondence” (Frege 1980), Frege explains why the idea seems so intuitive:

“The possibility of our understanding propositions which we have never heard before rests evidently on this, that we can construct the sense of a proposition out of parts that correspond to words.”

This forms the basis of the productivity argument, often used as evidence for the validity of the principle: humans know only the meanings of words and the rules for combining them into larger constructs; yet, equipped with this knowledge, we are able to produce new sentences that we have never uttered or heard before. Indeed, this task seems natural even for a three-year-old child; however, formalizing it in a way reproducible by a computer has proven anything but trivial. Modern compositional models owe a lot to the seminal work of Richard Montague (1930-1971), who presented a systematic way of processing fragments of the English language in order to get semantic representations capturing their “meaning” (Montague 1970a; 1970b; 1973). In his “Universal Grammar” (Montague 1970b), Montague states:

“There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians.”

Montague supports this claim by detailing a systematization of natural language, an approach which became known as Montague grammar. To use Montague’s method, one needs two things: first, a resource that provides the logical form of each word (a lexicon); and second, a way to determine the correct order in which the elements of the sentence should be combined in order to end up with a valid semantic representation. A natural way to address the latter, and the one traditionally used in computational linguistics, is to let the syntactic structure drive the semantic derivation (an approach called syntax-driven semantic analysis). In other words, we assume that there exists a mapping from syntactic to semantic types, and that composition at the syntax level implies a parallel composition at the semantic level. This is known as the rule-to-rule hypothesis (Bach 1976).

In order to provide an example, I will use the sentence ‘Every man walks’. We begin from the lexicon, whose job is to provide a grammar type and a logical form for every word in the sentence:

(1)    a. every   (Det):      λP.λQ.∀x[P(x) → Q(x)]
       b. man     (N):        λy.man(y)
       c. walks   (Verb_IN):  λz.walks(z)

The above use of formal logic (especially higher-order logic) in conjunction with λ-calculus was first introduced by Montague, and since then it has constituted the standard way of providing logical forms to compositional models. In the above lexicon, predicates of the form man(y) and walks(z) are true if the individuals denoted by y and z carry the property (or, respectively, perform the action) indicated by the predicate. From an extensional perspective, the semantic value of a predicate can be seen as the set of all individuals that carry a specific property: walks(z) will be true if the individual z belongs to the set of all individuals who perform the action of walking. Furthermore, λ-bound variables like y or z have the role of placeholders that remain to be filled. The logical form λy.man(y), for example, reflects the fact that the entity which is going to be tested for the property of manhood is still unknown and will be specified later based on the syntactic combinatorics. Finally, the form in (1a) reflects the traditional way of representing a universal quantifier in natural language, where the still unknown parts are the predicates P and Q acting over a range of entities.
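To make the extensional reading concrete, here is a minimal Python sketch (my own illustration, not part of the original formalism) that encodes the three lexicon entries in (1) as functions over a toy domain; the domain, the sets MEN and WALKERS, and all individual names are invented for the example:

    # Toy extensional model: predicates are membership tests over sets of
    # individuals; the domain and extensions below are invented examples.
    DOMAIN = {"alice", "bob", "carol"}
    MEN = {"bob", "carol"}      # extension of the predicate man(.)
    WALKERS = {"bob"}           # extension of the predicate walks(.)

    # man = λy.man(y): true iff y belongs to the set of men.
    man = lambda y: y in MEN

    # walks = λz.walks(z): true iff z belongs to the set of walkers.
    walks = lambda z: z in WALKERS

    # every = λP.λQ.∀x[P(x) → Q(x)], with the quantifier ranging over
    # DOMAIN; the implication P(x) → Q(x) is rendered as (not P(x)) or Q(x).
    every = lambda P: lambda Q: all((not P(x)) or Q(x) for x in DOMAIN)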

In λ-calculus, function application is achieved via the process of β-reduction: given two logical forms λx.t and s, the application of the former to the latter produces a version of t in which all the free occurrences of x have been replaced by s. More formally:

(2)    (λx.t)(s) →β t[s/x]

where t[s/x] denotes the result of substituting s for every free occurrence of x in t.
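As an illustration of the schema in (2), the following toy sketch (illustrative only; the term encoding is my own) implements a single step of β-reduction over a tiny term representation. For simplicity it ignores variable capture, which a full implementation would handle by renaming bound variables:

    # Terms: a variable is a str; ("lam", var, body) is an abstraction;
    # ("app", fun, arg) is an application.

    def substitute(term, var, value):
        """Return term with free occurrences of `var` replaced by `value`.
        (No capture-avoiding renaming: kept simple for illustration.)"""
        if isinstance(term, str):
            return value if term == var else term
        if term[0] == "lam":
            _, v, body = term
            if v == var:          # var is re-bound here, so not free below
                return term
            return ("lam", v, substitute(body, var, value))
        _, f, a = term            # an application node
        return ("app", substitute(f, var, value), substitute(a, var, value))

    def beta_reduce(term):
        """One outermost beta step: (λx.t)(s) ~> t[s/x]."""
        if term[0] == "app" and isinstance(term[1], tuple) and term[1][0] == "lam":
            _, (_, var, body), arg = term
            return substitute(body, var, arg)
        return term

    # (λx.x)(y) reduces to y
    print(beta_reduce(("app", ("lam", "x", "x"), "y")))   # -> y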

Let us see how we can apply the principle of compositionality to get a logical form for the above example sentence, by repeatedly applying β-reduction between the semantic forms of text constituents, following the grammar rules. The parse tree in (3) below provides a syntactic analysis:

(3)    [S [NP [Det Every] [N man]] [Verb_IN walks]]

Our simple context-free grammar (CFG) consists of two rules:

(4)    a. NP → Det N        (a noun phrase consists of a determiner and a noun)
       b. S → NP Verb_IN    (a sentence consists of a noun phrase and an intransitive verb)

These rules will essentially drive the semantic derivation. Interpreted from a semantic perspective, the first rule states that the logical form of a noun phrase is derived by applying the logical form of the determiner to the logical form of the noun. In other words, P in (1a) will be substituted by the logical form for man in (1b). The details of this reduction are presented below:

(5)    (λP.λQ.∀x[P(x) → Q(x)])(λy.man(y))
         →β  λQ.∀x[(λy.man(y))(x) → Q(x)]
         →β  λQ.∀x[man(x) → Q(x)]

Similarly, the second rule signifies that the logical form of the whole sentence is derived by combining the logical form of the noun phrase, as computed in (5) above, with the logical form of the intransitive verb in (1c):

(6)    (λQ.∀x[man(x) → Q(x)])(λz.walks(z))
         →β  ∀x[man(x) → (λz.walks(z))(x)]
         →β  ∀x[man(x) → walks(x)]

Thus we have arrived at the logical form ∀x[man(x) → walks(x)], which can be seen as a semantic representation of the whole sentence. The tree below provides a concise picture of the complete semantic derivation:

(7)    [S : ∀x[man(x) → walks(x)]
          [NP : λQ.∀x[man(x) → Q(x)]
             [Det Every : λP.λQ.∀x[P(x) → Q(x)]]
             [N man : λy.man(y)]]
          [Verb_IN walks : λz.walks(z)]]
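To make the derivation tangible, the toy Python lexicon sketched after (1) can replay it (again, purely illustrative, relying on the invented model from that sketch): the two grammar rules in (4) become ordinary function applications, mirroring the β-reductions in (5) and (6):

    # Rule (4a), NP -> Det N: apply the determiner to the noun.
    every_man = every(man)        # plays the role of λQ.∀x[man(x) → Q(x)]

    # Rule (4b), S -> NP Verb_IN: apply the noun phrase to the verb.
    sentence = every_man(walks)   # plays the role of ∀x[man(x) → walks(x)]

    # In the invented model, carol is a man who does not walk, so the
    # sentence 'Every man walks' is evaluated as false.
    print(sentence)               # -> False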

Footnotes

  1. Accepted for publication in Springer Science Reviews journal. The final version will be available at link.springer.com.