Visually Grounded Neural Syntax Acquisition

Haoyue Shi    Jiayuan Mao    Kevin Gimpel    Karen Livescu
Toyota Technological Institute at Chicago, IL, USA
ITCS, Institute for Interdisciplinary Information Sciences, Tsinghua University, China
{freda, kgimpel, klivescu}@ttic.edu, mjy14@mails.tsinghua.edu.cn
  HS and JM contributed equally to the work.
Abstract

We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for learning syntactic representations and structures without explicit supervision. The model learns by looking at natural images and reading paired captions. VG-NSL generates constituency parse trees of texts, recursively composes representations for constituents, and matches them with images. We define the concreteness of constituents by their matching scores with images, and use it to guide the parsing of text. Experiments on the MSCOCO data set show that VG-NSL outperforms various unsupervised parsing approaches that do not use visual grounding, in terms of F1 scores against gold parse trees. We find that VG-NSL is much more stable with respect to the choice of random initialization and the amount of training data. We also find that the concreteness acquired by VG-NSL correlates well with a similar measure defined by linguists. Finally, we also apply VG-NSL to multiple languages in the Multi30K data set, showing that our model consistently outperforms prior unsupervised approaches. (Project page: https://ttic.uchicago.edu/~freda/project/vgnsl)

Figure 1: We propose to use image-caption pairs to extract constituents from text, based on the assumption that similar spans should be matched to similar visual objects and these concrete spans form constituents.

1 Introduction

We study the problem of visually grounded syntax acquisition. Consider the images in Figure 1, paired with descriptive texts (captions) in English. Given no prior knowledge of English and a sufficient number of such pairs, one can infer the correspondence between certain words and visual attributes (e.g., recognizing that “a cat” refers to the objects in the blue boxes). One can further extract constituents by assuming that concrete spans of words should be processed as a whole, and thus form constituents. The same process applies to verb and prepositional phrases.

This intuition motivates the use of image-text pairs to facilitate automated language learning, including both syntax and semantics. In this paper we focus on learning syntactic structures, and propose the Visually Grounded Neural Syntax Learner (VG-NSL, shown in Figure 2). VG-NSL acquires syntax, in the form of constituency parsing, by looking at images and reading captions.

At a high level, VG-NSL builds latent constituency trees of word sequences and recursively composes representations for constituents. Next, it matches the visual and textual representations. The training procedure is built on the hypothesis that a better syntactic structure contributes to a better representation of constituents, which then leads to better alignment between vision and language. We use no human-labeled constituency trees or other syntactic labeling (such as part-of-speech tags). Instead, we define a concreteness score of constituents based on their matching with images, and use it to guide the parsing of sentences. At test time, no images paired with the text are needed.

We compare VG-NSL with prior approaches to unsupervised language learning, most of which do not use visual grounding. Our first finding is that VG-NSL improves over the best previous approaches to unsupervised constituency parsing in terms of F1 scores against gold parse trees. We also find that many existing approaches are quite unstable with respect to the choice of random initialization, whereas VG-NSL exhibits consistent parsing results across multiple training runs. Third, we analyze the performance of different models on different types of constituents, and find that our model shows substantial improvement on noun phrases and prepositional phrases, which are common in captions. Fourth, VG-NSL is much more data-efficient than prior work based purely on text, achieving comparable performance to other approaches using only 20% of the training captions. In addition, the concreteness score, which emerges during the matching between constituents and images, correlates well with a similar measure defined by linguists. Finally, VG-NSL can be easily extended to multiple languages, which we evaluate on the Multi30K data set (elliott2016multi30k; elliott2017findings) consisting of German and French image captions.

Figure 2: VG-NSL consists of two modules: a textual module for inferring structures and representations for captions, and a visual-semantic module for matching constituents with images. VG-NSL induces constituency parse trees of captions by looking at images and reading paired captions.

2 Related Work

Linguistic structure induction from text.

Recent work has proposed several approaches for inducing latent syntactic structures, including constituency trees (choi2018learning; yogatama2016learning; maillard2018latent; havrylov2019cooperative; kim2019unsupervised; drozdov2019unsupervised) and dependency trees (shi2019learning), from the distant supervision of downstream tasks. However, most of the methods are not able to produce linguistically sound structures, or even consistent ones with fixed data and hyperparameters but different random initializations (williams2018latent).

A related line of research is to induce latent syntactic structure via language modeling. This approach has achieved remarkable performance on unsupervised constituency parsing shen2017neural; shen2019ordered, especially in identifying the boundaries of higher-level (i.e., larger) constituents. To our knowledge, the Parsing-Reading-Predict Network (PRPN; shen2017neural) and the Ordered Neuron LSTM (ON-LSTM; shen2019ordered) currently produce the best fully unsupervised constituency parsing results. One issue with PRPN, however, is that it tends to produce meaningless parses for lower-level (smaller) constituents htut2018grammar.

Over the last two decades, there has been extensive study targeting unsupervised constituency parsing (klein2002generative; klein2004corpus; klein2005natural; bod2006all; bod2006unsupervised; ponvert2011simple) and dependency parsing (klein2004corpus; smith-eisner:2006:COLACL; spitkovsky-alshawi-jurafsky:2010:NAACLHLT; han2017dependency). However, all of these approaches are based on linguistic annotations. Specifically, they operate on the part-of-speech tags of words instead of word tokens. One exception is spitkovsky2011unsupervised, which produces dependency parse trees based on automatically induced pseudo tags.

In contrast to these existing approaches, we focus on inducing constituency parse trees with visual grounding. We use parallel data from another modality (i.e., paired images and captions), instead of linguistic annotations such as POS tags. We include a detailed comparison between some related works in the supplementary material.

There has been some prior work on improving unsupervised parsing by leveraging extra signals, such as parallel text (snyder-naseem-barzilay:2009:ACLIJCNLP), annotated data in another language with parallel text (ganchev-gillenwater-taskar:2009:ACLIJCNLP), annotated data in other languages without parallel text (cohen-das-smith:2011:EMNLP), or non-parallel text from multiple languages (cohen-smith:2009:NAACLHLT09). We leave the integration of other grounding signals as future work.

Grounded language acquisition.

Grounded language acquisition has been studied for image-caption data (christie-EtAl:2016:EMNLP2016), video-caption data (siddharth2014seeing; yu2015compositional), and visual reasoning (mao2019neurosymbolic). However, existing approaches rely on human labels or rules for classifying visual attributes or actions. Instead, our model induces syntax structures with no human-defined labels or rules.

Meanwhile, learning visual-semantic representations in a joint embedding space (ngiam2011multimodal) is a widely studied approach, and has achieved remarkable results on image-caption retrieval (kiros2014unifying; faghri2017vse++; shi2018learning), image caption generation (kiros2014unifying; karpathy2015deep; ma2015multimodal), and visual question answering (malinowski2015ask). In this work, we borrow this idea to match visual and textual representations.

Concreteness estimation.

turney2011literal define concrete words as those referring to things, events, and properties that we can perceive directly with our senses. Subsequent work has studied word-level concreteness estimation based on text (turney2011literal; hill2013concreteness), human judgments (silberer2012grounded; hill2014concreteness; brysbaert2014concreteness), and multi-modal data (hill2014learning; hill2014multi; kiela2014improving; young2014image; hessel2018quantifying; silberer2017visually; bhaskar2017exploring). As with hessel2018quantifying and kiela2014improving, our model uses multi-modal data to estimate concreteness. Compared with them, we define concreteness for spans instead of words, and use it to induce linguistic structures.

3 Visually Grounded Neural Syntax Learner

Given a set of paired images and captions, our goal is to learn representations and structures for words and constituents. Toward this goal, we propose the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for the grounded acquisition of syntax of natural language. VG-NSL is inspired by the idea of semantic bootstrapping (pinker1984language), which suggests that children acquire syntax by first understanding the meaning of words and phrases, and linking them with the syntax of words.

At a high level (Figure 2), VG-NSL consists of two modules. First, given an input caption (i.e., a sentence or a smaller constituent) as a sequence of tokens, VG-NSL builds a latent constituency parse tree and recursively composes representations for every constituent. Next, it matches these textual representations with the visual input, i.e., the paired image. Both modules are jointly optimized from natural supervision: the model acquires constituency structures, composes textual representations, and links them with visual scenes, by looking at images and reading paired captions.

3.1 Textual Representations and Structures

VG-NSL starts by composing a binary constituency structure of the text, using an easy-first bottom-up parser (goldberg2010efficient). The composition of the tree from a caption of length $n$ consists of $n-1$ steps. Let $\mathbf{X}^{(t)} = \big[\mathbf{x}^{(t)}_1, \mathbf{x}^{(t)}_2, \ldots\big]$ denote the textual representations of the sequence of constituents after step $t$, where $t \in \{0, 1, \ldots, n-1\}$. For simplicity, we use $\mathbf{X}^{(0)}$ to denote the word embeddings of all tokens (the initial representations).

At step $t$, a score function $f(\cdot, \cdot; \Theta)$, parameterized by $\Theta$, is evaluated on all pairs of consecutive constituents, resulting in a vector of scores of length $\big|\mathbf{X}^{(t-1)}\big| - 1$:

$$\mathrm{score}^{(t)}_i = f\big(\mathbf{x}^{(t-1)}_i, \mathbf{x}^{(t-1)}_{i+1}; \Theta\big).$$

We implement $f$ as a two-layer feed-forward network.

A pair of constituents $\big(\mathbf{x}^{(t-1)}_j, \mathbf{x}^{(t-1)}_{j+1}\big)$ is sampled from all pairs of consecutive constituents, with respect to the distribution produced by a softmax over the scores (at test time, we instead take the argmax):

$$\Pr[j] = \frac{\exp\big(\mathrm{score}^{(t)}_j\big)}{\sum_{k} \exp\big(\mathrm{score}^{(t)}_k\big)}.$$

The selected pair is combined to form a single new constituent, so after step $t$ the number of constituents is decreased by 1. The textual representation of the new constituent is the L2-normalized sum of the representations of its two components:

$$\mathbf{x}_{\mathrm{new}} = \frac{\mathbf{x}^{(t-1)}_j + \mathbf{x}^{(t-1)}_{j+1}}{\big\|\mathbf{x}^{(t-1)}_j + \mathbf{x}^{(t-1)}_{j+1}\big\|_2}.$$

We find that using a more complex encoder for constituents, such as GRUs, causes the representations to be highly biased towards a few salient words in the sentence (e.g., the encoder encodes only the word “cat” while ignoring the rest of the caption; shi2018learning; wu2019unified). This significantly degrades the performance of linguistic structure induction.

We repeat this score-sample-combine process for $n-1$ steps, until all words in the input text have been combined into a single constituent (Figure 3). This completes the inference of the constituency parse tree. Since each step combines two consecutive constituents, the derived tree contains $2n-1$ constituents in total (including the words themselves).
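For concreteness, the following is a minimal NumPy sketch of this score-sample-combine loop. The scorer is passed in as a black-box function, and all names (`combine`, `parse_caption`, `score_fn`) are our own illustrative stand-ins rather than the released implementation.

```python
import numpy as np

def combine(x, y):
    """Compose two constituents as the L2-normalized sum of their vectors."""
    s = x + y
    return s / np.linalg.norm(s)

def parse_caption(word_vecs, score_fn, rng=None, greedy=False):
    """Easy-first score-sample-combine loop over one caption.

    word_vecs: list of n word embeddings (the initial constituents).
    score_fn:  maps two adjacent constituent vectors to a scalar score.
    Returns all 2n-1 constituent vectors and the sequence of merge positions.
    """
    rng = rng or np.random.default_rng()
    constituents = list(word_vecs)
    all_constituents = list(word_vecs)
    merges = []
    while len(constituents) > 1:
        # Score every pair of adjacent constituents.
        scores = np.array([score_fn(constituents[i], constituents[i + 1])
                           for i in range(len(constituents) - 1)])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # Sample a pair during training; take the argmax at test time.
        j = int(np.argmax(scores)) if greedy else int(rng.choice(len(probs), p=probs))
        new = combine(constituents[j], constituents[j + 1])
        constituents[j:j + 2] = [new]
        all_constituents.append(new)
        merges.append(j)
    return all_constituents, merges
```

The $2n-1$ constituent vectors collected here are exactly the textual representations that are matched against the image in Section 3.2.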

Figure 3: An illustration of how VG-NSL composes a constituency parse tree. At each step, the score function is evaluated on all pairs of consecutive constituents (dashed lines). Next, a pair of constituents is sampled from all pairs w.r.t. a distribution computed by the softmax of all predicted scores. The selected pair of constituents is combined into a larger one, while the other constituents remain unchanged (solid lines).

3.2 Visual-Semantic Embeddings

We follow an approach similar to that of kiros2014unifying to define the visual-semantic embedding (VSE) space for paired images and text constituents. Let $\mathbf{v}^{(i)}$ denote the vector representation of an image $i$, and $\mathbf{c}^{(i)}_k$ denote the vector representation of the $k$-th constituent of its corresponding caption. During the matching with images, we ignore the tree structure and simply index the constituents as a flat list. A function $m(\cdot, \cdot)$ is defined as the matching score between constituents and images:

$$m\big(\mathbf{c}^{(i)}_k, \mathbf{v}^{(i)}\big) = \cos\big(\mathbf{c}^{(i)}_k, \Phi \cdot \mathbf{v}^{(i)}\big),$$

where the parameters $\Phi$ align the visual and textual representations into a joint space.
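As an illustration only (not the released code), the matching score can be sketched as a cosine similarity after projecting the 2048-D image feature into the 512-D joint space; the projection matrix `W` below is a hypothetical stand-in for the learned parameters $\Phi$.

```python
import numpy as np

def match_score(constituent_vec, image_feat, W):
    """Cosine similarity between a constituent vector and a projected image feature.

    W: hypothetical 512 x 2048 projection into the joint embedding space.
    """
    v = W @ image_feat
    v = v / np.linalg.norm(v)
    c = constituent_vec / np.linalg.norm(constituent_vec)
    return float(c @ v)
```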

3.3 Training

We optimize the visual-semantic representations ($\Phi$) and the constituency structures ($\Theta$) in an alternating fashion. At each iteration, given the constituency parsing results for the captions, $\Phi$ is optimized to match the visual and the textual representations. Next, given the visual grounding of constituents, $\Theta$ is optimized to produce constituents that can be better matched with images. Specifically, we optimize the textual representations and the visual-semantic embedding space using a hinge-based triplet ranking loss:

$$\mathcal{L}(\Phi; \mathcal{C}, \mathcal{V}) = \sum_{i,\, j \neq i} \sum_{p,\, q} \Big( \big[\delta - m\big(\mathbf{c}^{(i)}_p, \mathbf{v}^{(i)}\big) + m\big(\mathbf{c}^{(i)}_p, \mathbf{v}^{(j)}\big)\big]_+ + \big[\delta - m\big(\mathbf{c}^{(i)}_p, \mathbf{v}^{(i)}\big) + m\big(\mathbf{c}^{(j)}_q, \mathbf{v}^{(i)}\big)\big]_+ \Big),$$

where $i$ and $j$ index over all image-caption pairs in the data set, while $p$ and $q$ enumerate all constituents of a specific caption ($c^{(i)}$ and $c^{(j)}$, respectively), $\mathcal{V}$ is the set of image representations, $\mathcal{C}$ is the set of textual representations of all constituents, $\delta$ is a constant margin, and $[\cdot]_+$ denotes $\max(0, \cdot)$. The loss extends the image-caption retrieval loss of kiros2014unifying by introducing alignments between images and sub-sentence constituents.
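A minimal sketch of this constituent-level ranking loss, assuming matching scores have been precomputed; the array layout and names here are ours, not the paper's code, and the default margin value is a placeholder.

```python
import numpy as np

def ranking_loss(scores, owner, delta=0.2):
    """Hinge-based triplet ranking loss aligning constituents with images.

    scores: (K, N) array; scores[k, i] is the matching score m(c_k, v_i).
    owner:  length-K array; owner[k] is the index of the image whose caption
            contains constituent k.
    delta:  constant margin (0.2 is an assumed placeholder value).
    """
    K, N = scores.shape
    pos = scores[np.arange(K), owner]   # score of each constituent with its own image
    loss = 0.0
    for k in range(K):
        for i in range(N):              # contrastive images for constituent k
            if i != owner[k]:
                loss += max(0.0, delta - pos[k] + scores[k, i])
        for k2 in range(K):             # contrastive constituents for image owner[k]
            if owner[k2] != owner[k]:
                loss += max(0.0, delta - pos[k] + scores[k2, owner[k]])
    return loss
```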

We optimize the textual structures via distant supervision: they are optimized for a better alignment between the derived constituents and the images. Intuitively, the following objective encourages adjectives to be associated (combined) with the corresponding nouns, and verbs/prepositions to be associated (combined) with the corresponding subjects and objects. Specifically, we use REINFORCE (williams1992simple) as the gradient estimator for $\Theta$. Consider the parsing process of a specific caption $c^{(i)}$ with corresponding image embedding $\mathbf{v}^{(i)}$. For a constituent $\mathbf{z}$ of $c^{(i)}$, we define its (visual) concreteness as:

$$\texttt{concrete}\big(\mathbf{z}, \mathbf{v}^{(i)}\big) = \sum_{j \neq i} \big[ m\big(\mathbf{z}, \mathbf{v}^{(i)}\big) - m\big(\mathbf{z}, \mathbf{v}^{(j)}\big) - \delta' \big]_+ + \sum_{\mathbf{z}' \notin c^{(i)}} \big[ m\big(\mathbf{z}, \mathbf{v}^{(i)}\big) - m\big(\mathbf{z}', \mathbf{v}^{(i)}\big) - \delta' \big]_+, \quad (1)$$

where $\delta'$ is a fixed margin. At step $t$, we define the reward function for a combination of a pair of constituents $(\mathbf{a}, \mathbf{b})$ as:

$$r^{(t)}(\mathbf{a}, \mathbf{b}) = \texttt{concrete}\big(\mathbf{z}, \mathbf{v}^{(i)}\big), \quad (2)$$

where $\mathbf{z} = \texttt{combine}(\mathbf{a}, \mathbf{b})$. In plain words, at each step we encourage the model to compose a constituent that maximizes the alignment between the new constituent and the corresponding image. During training, we sample constituency parse trees of captions and reinforce each composition step using Equation 2. At test time, no images paired with the text are needed.
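A sketch of the reward computation under the contrastive reading of Equation 1 above; `match(z, v)` is a matching-score function (e.g., the earlier cosine sketch with the projection fixed), `combine` is the composition function from Section 3.1, and the negative images and constituents would come from the rest of the batch. This is our illustration, not the released implementation.

```python
def concreteness(z, v_own, neg_images, neg_constituents, match, delta=0.2):
    """Visual concreteness of constituent z paired with image v_own (Eq. 1 sketch):
    how much better z matches its own image than contrastive images, plus how much
    better it matches its own image than contrastive constituents do."""
    pos = match(z, v_own)
    score = sum(max(0.0, pos - match(z, v) - delta) for v in neg_images)
    score += sum(max(0.0, pos - match(z2, v_own) - delta) for z2 in neg_constituents)
    return score

def merge_reward(left, right, v_own, neg_images, neg_constituents, match, combine, delta=0.2):
    """Reward for combining two adjacent constituents (Eq. 2 sketch): the
    concreteness of the newly composed constituent."""
    new = combine(left, right)
    return concreteness(new, v_own, neg_images, neg_constituents, match, delta)

# REINFORCE update (sketch): each sampled merge j at step t contributes the
# gradient  reward * d(log Pr[j]) / d(Theta),  where Pr is the softmax over
# adjacent pairs from which the parser sampled the merge.
```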

3.4 The Head-Initial Inductive Bias

English and many other Indo-European languages are usually head-initial (baker2001atoms). For example, in verb phrases and prepositional phrases, the verb (or the preposition) precedes its complements (e.g., the object of the verb). Consider the simple caption “a white cat on the lawn.” While the association between the adjective (white) and the corresponding noun (cat) can be induced from the visual grounding of phrases, whether the preposition (on) should be associated with “a white cat” or “the lawn” is more challenging to induce. Thus, we impose an inductive bias to guide the learner to correctly associate prepositions with their complements, determiners with the corresponding noun phrases, and complementizers with the corresponding relative clauses. Specifically, we discourage abstract constituents (i.e., constituents that cannot be grounded in the image) from being combined with a preceding constituent, by modifying the original reward definition (Equation 2) as:

$$r'^{(t)}(\mathbf{a}, \mathbf{b}) = \frac{r^{(t)}(\mathbf{a}, \mathbf{b})}{1 + \lambda \cdot \texttt{abstract}\big(\mathbf{b}, \mathbf{v}^{(i)}\big)}, \quad (3)$$

where $\lambda$ is a scalar hyperparameter, $\mathbf{v}^{(i)}$ is the image embedding corresponding to the caption being parsed, and $\texttt{abstract}(\mathbf{b}, \mathbf{v}^{(i)})$ denotes the abstractness of the span $\mathbf{b}$, defined analogously to concreteness (Equation 1):

$$\texttt{abstract}\big(\mathbf{b}, \mathbf{v}^{(i)}\big) = \sum_{j \neq i} \big[ m\big(\mathbf{b}, \mathbf{v}^{(j)}\big) - m\big(\mathbf{b}, \mathbf{v}^{(i)}\big) + \delta' \big]_+ + \sum_{\mathbf{z}' \notin c^{(i)}} \big[ m\big(\mathbf{z}', \mathbf{v}^{(i)}\big) - m\big(\mathbf{b}, \mathbf{v}^{(i)}\big) + \delta' \big]_+.$$

The intuition here is that the initial heads of prepositional phrases (e.g., on) and relative clauses (e.g., which, where) are usually abstract words. During training, we encourage the model to associate these abstract words with the succeeding constituents instead of the preceding ones. It is worth noting that such an inductive bias is language-specific and cannot be applied to head-final languages such as Japanese (baker2001atoms). We leave the design of head-directionality inductive biases for other languages as future work.
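Under this reading of Equation 3, the head-initial variant is a one-line change on top of the reward sketch above; `abstractness_fn` is assumed to mirror `concreteness` with the hinge direction reversed.

```python
def merge_reward_head_initial(left, right, reward_fn, abstractness_fn, lam=1.0):
    """Eq. 3 sketch: discount the merge reward when the right-hand constituent is
    abstract, so abstract tokens (e.g., prepositions, complementizers) prefer to
    attach to what follows them rather than to a preceding constituent."""
    return reward_fn(left, right) / (1.0 + lam * abstractness_fn(right))
```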

4 Experiments

We evaluate VG-NSL for unsupervised parsing in a few ways: F1 score against gold trees, self-consistency across different choices of random initialization, performance on different types of constituents, and data efficiency. In addition, we find that the concreteness score acquired by VG-NSL is consistent with a similar measure defined by linguists. We focus on English for the main experiments, but also extend our study to German and French.

4.1 Data Sets and Metrics

We use the standard split of the MSCOCO data set (lin2014microsoft), following karpathy2015deep. It contains 82,783 images for training, 1,000 for development, and another 1,000 for testing. Each image is associated with 5 captions.

For the evaluation of constituency parsing, the Penn Treebank (PTB; marcus1993building) is a widely used, manually annotated data set. However, PTB consists of sentences from abstract domains, e.g., the Wall Street Journal (WSJ), which are not visually grounded and whose linguistic structures can hardly be induced by VG-NSL. Here we evaluate models on the MSCOCO test set, which is well matched to the training domain; we leave the extension of our work to more abstract domains to future work. We apply Benepar (kitaev2018constituency; https://pypi.org/project/benepar), an off-the-shelf constituency parser with state-of-the-art performance (95.52 F1 score) on the WSJ test set, to parse the captions in the MSCOCO test set as gold constituency parse trees. We also manually label the constituency parse trees for 50 captions randomly sampled from the MSCOCO test split, on which Benepar achieves an F1 score of 95.65 against the manual labels; details can be found in the supplementary material. We evaluate all of the investigated models using the F1 score against these gold parse trees. Following convention (sekine1997evalb), we report the F1 score computed across all constituents in the data set, instead of the average of sentence-level F1 scores.

4.2 Baselines

Model NP VP PP ADJP Avg. F1 Self-F1
Random 47.3 10.5 17.3 33.5 27.1 32.4
Left 51.4 1.8 0.2 16.0 N/A
Right 32.2 23.4 18.7 14.4 N/A
PMI 54.2 16.0 14.3 39.2 N/A
PRPN (shen2017neural)    72.8    33.0    61.6    35.4    52.5 60.3
ON-LSTM (shen2019ordered) 74.4 11.8 41.3 44.0 45.5 69.3
Gumbel (choi2018learning) 50.4 8.7 15.5 34.8 27.9 40.1
VG-NSL (ours) 79.6 26.2 42.0 22.0 50.4 87.1
VG-NSL+HI (ours) 74.6 32.5 66.5 21.7 53.3 90.2
VG-NSL+HI+FastText (ours)* 78.8 24.4 65.6 22.0 54.4 89.8
Concreteness estimation–based models
turney2011literal* 65.5 30.8 35.3 30.4 42.5 N/A
turney2011literal+HI* 74.5 26.2 47.6 25.6 48.9 N/A
brysbaert2014concreteness* 54.1 27.8 27.0 33.1 34.1 N/A
brysbaert2014concreteness+HI* 73.4 23.9 50.0 26.1 47.9 N/A
hessel2018quantifying 50.9 21.7 32.8 27.5 33.2 N/A
hessel2018quantifying+HI 72.5 34.4 65.8 26.2 52.9 N/A
Table 1: Recall of specific types of phrases and the overall F1 score, evaluated on the MSCOCO test split, averaged over 5 runs with different random initializations. We also include the self-agreement F1 score (williams2018latent) across the 5 runs. ± denotes standard deviation. * denotes models requiring extra labels and/or corpora; multimodal models additionally require a pre-trained visual feature extractor. We highlight the best number in each column among all models that do not require extra data other than paired image-caption data, as well as the overall best number. The Left, Right, PMI, and concreteness estimation–based models are deterministic given the training and/or testing data, and therefore have no standard deviation or self-F1 (shown as N/A).

We compare VG-NSL with various baselines for unsupervised tree structure modeling of texts. We can categorize the baselines by their training objective or supervision.

Trivial tree structures.

Similarly to recent work on latent tree structures williams2018latent; htut2018grammar; shi2018tree, we include three types of trivial baselines without linguistic information: random binary trees, left-branching binary trees, and right-branching binary trees.

Syntax acquisition by language modeling and statistics.

shen2017neural proposes the Parsing-Reading-Predict Network (PRPN), which predicts syntactic distances (shen2018straight) between adjacent words, and composes a binary tree based on the syntactic distances to improve language modeling. The learned distances can be mapped into a binary constituency parse tree, by recursively splitting the sentence between the two consecutive words with the largest syntactic distance.

Ordered neurons (ON-LSTM; shen2019ordered) is a recurrent unit based on the LSTM cell (hochreiter1997long) that explicitly regularizes different neurons in a cell to represent short-term or long-term information. After being trained on the language modeling task, shen2019ordered suggest that the gate values in ON-LSTM cells can be viewed as syntactic distances shen2018straight between adjacent words to induce latent tree structures. ON-LSTM has the state-of-the-art unsupervised constituency parsing performance on the WSJ test set. We train both PRPN and ON-LSTM on all captions in the MSCOCO training set and use the models as baselines.

Inspired by the syntactic distance–based approaches (shen2017neural; shen2019ordered), we also introduce another baseline, PMI, which uses negative pointwise mutual information ChurchHanks90 between adjacent words as the syntactic distance. We compose constituency parse trees based on the distances in the same way as PRPN and ON-LSTM.

Syntax acquisition from downstream tasks.

choi2018learning propose to compose binary constituency parse trees directly from downstream tasks using the Gumbel softmax trick (jang2016categorical). We integrate a Gumbel tree-based caption encoder into the visual-semantic embedding approach (kiros2014unifying). The model is trained on the downstream task of image-caption retrieval.

Syntax acquisition from concreteness estimation.

Since we apply concreteness information to train VG-NSL, it is worth comparing against unsupervised constituency parsing based on previous approaches for predicting word concreteness. This set of baselines includes semi-supervised estimation turney2011literal, crowdsourced labeling brysbaert2014concreteness, and multimodal estimation hessel2018quantifying. Note that none of these approaches has been applied to unsupervised constituency parsing. Implementation details can be found in the supplementary material.

Based on the concreteness score of words, we introduce another baseline similar to VG-NSL. Specifically, we recursively combine two consecutive constituents with the largest average concreteness, and use the average concreteness as the score for the composed constituent. The algorithm generates binary constituency parse trees of captions. For a fair comparison, we implement a variant of this algorithm that also uses a head-initial inductive bias and include the details in the appendix.

4.3 Implementation Details

Across all experiments and all models (including baselines such as PRPN, ON-LSTM, and Gumbel), the embedding dimension for words and constituents is 512. For VG-NSL, we use a ResNet-101 (he2016deep) pre-trained on ImageNet (russakovsky2015imagenet) to extract vector embeddings for images; the visual-semantic parameters $\Phi$ thus map a 2048-D image embedding space to a 512-D visual-semantic embedding space. For the score function used in constituency parsing, we use a hidden dimension of 128 and ReLU activation. All VG-NSL models are trained for 30 epochs with the Adam optimizer (kingma2015adam); the learning rate is re-initialized after 15 epochs. We tune all other hyperparameters of VG-NSL on the development set using the self-agreement F1 score (williams2018latent) over 5 runs with different choices of random initialization.

4.4 Results: Unsupervised Constituency Parsing

We evaluate the induced constituency parse trees via the overall F1 score, as well as the recall of four types of constituents: noun phrases (NP), verb phrases (VP), prepositional phrases (PP), and adjective phrases (ADJP) (Table 1). We also evaluate the robustness of models trained with fixed data and hyperparameters but different random initializations, in two ways: via the standard deviation of performance across multiple runs, and via the self-agreement F1 score (williams2018latent), which is the average F1 taken over pairs of different runs.

Among all of the models that do not require extra labels, VG-NSL with the head-initial inductive bias (VG-NSL+HI) achieves the best F1 score. PRPN (shen2017neural) and a concreteness estimation–based baseline (hessel2018quantifying) both produce competitive results. It is worth noting that the PRPN baseline reaches this performance without any information from images. However, the performance of PRPN is less stable than that of VG-NSL across random initializations. In contrast to its state-of-the-art performance on the full WSJ set (shen2019ordered), we observe that ON-LSTM does not perform well on the MSCOCO caption data set. However, it remains the best model for adjective phrases, which is consistent with the results reported by shen2019ordered.

In addition to having the best overall F1 scores, VG-NSL+HI achieves competitive recall across most phrase types (NP, VP, and PP). Our models (VG-NSL and VG-NSL+HI) perform the best on NP and PP, which are the most common visually grounded phrases in the MSCOCO data set. Moreover, our models produce much higher self-F1 than the baselines (shen2017neural; shen2019ordered; choi2018learning), showing that they reliably produce reasonable constituency parse trees under different initializations.

We also test the effectiveness of using pre-trained word embeddings. Specifically, for VG-NSL+HI+FastText, we use a pre-trained FastText embedding (300-D; joulin2016fasttext), concatenated with a 212-D trainable embedding, as the word embedding. Using pre-trained word embeddings further improves performance to an average F1 of 54.4 while keeping a comparable self-F1.

4.5 Results: Data Efficiency

Figure 4: (a) F1 score and (b) self-F1 score with respect to the amount of training data. All numbers are averaged over 5 runs with different random initializations.

We compare the data efficiency of PRPN (the strongest baseline method), ON-LSTM, VG-NSL, and VG-NSL+HI. We train the models using 1%, 2%, 5%, 10%, 20%, 50%, and 100% of the MSCOCO training set, and report the overall F1 and self-F1 scores on the test set (Figure 4).

Compared to PRPN trained on the full training set, VG-NSL and VG-NSL+HI reach comparable performance using only 20% of the data (i.e., 8K images with 40K captions). VG-NSL also quickly becomes more stable (in terms of self-F1) as the amount of data increases, while PRPN and ON-LSTM remain less stable.

4.6 Analysis: Consistency with Linguistic Concreteness

Model/method VG-NSL VG-NSL+HI
turney2011literal 0.74 0.72
brysbaert2014concreteness 0.71 0.71
hessel2018quantifying 0.84 0.85
Table 2: Agreement between our concreteness estimates and existing models or labels, evaluated via the Pearson correlation coefficient computed over the most frequent 100 words in the MSCOCO test set, averaged over 5 runs with different random initialization.

During training, VG-NSL acquires concreteness estimates for constituents via Equation 1. Here, we evaluate the consistency between word-level concreteness estimates induced by VG-NSL and those produced by other methods (turney2011literal; brysbaert2014concreteness; hessel2018quantifying). Specifically, we measure the correlation between the concreteness estimated by VG-NSL on the MSCOCO test set and existing linguistic concreteness definitions (Table 2). For a word $w$, we estimate its concreteness by averaging $\texttt{concrete}(\mathbf{x}_w, \mathbf{v})$ over all images $\mathbf{v}$ associated with $w$ (i.e., images whose captions contain $w$), where $\mathbf{x}_w$ is the representation of $w$. The high correlation between VG-NSL and the concreteness scores produced by turney2011literal and brysbaert2014concreteness supports the argument that the linguistic concept of concreteness can be acquired in an unsupervised way. Our model also achieves a high correlation with hessel2018quantifying, which also estimates word concreteness based on visual-domain information.
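As a small illustration (names are ours), the word-level estimate just described is simply an average of the constituent-level concreteness over the images a word co-occurs with:

```python
def word_concreteness(word, image_caption_pairs, concreteness_fn):
    """Average concreteness of `word` over all images whose caption contains it.

    image_caption_pairs: iterable of (caption_tokens, image_embedding) pairs.
    concreteness_fn:     maps (word, image_embedding) to the Equation 1 score.
    """
    values = [concreteness_fn(word, img)
              for tokens, img in image_caption_pairs if word in tokens]
    return sum(values) / len(values) if values else 0.0
```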

4.7 Analysis: Self-Agreement Score as the Criterion for Model Selection

Model Criterion Avg. F1 Self-F1
VG-NSL Self 50.4 87.1
VG-NSL R@1 47.7 83.4
VG-NSL+HI Self 53.3 90.2
VG-NSL+HI R@1 53.1 88.7
Table 3: Average F1 and self-F1 scores of VG-NSL and VG-NSL+HI with different model selection methods. R@1 denotes using recall at 1 (kiros2014unifying) as the model selection criterion. All hyperparameters are tuned with respect to the self-agreement F1 score. The numbers are comparable to those in Table 1.

We introduce a novel hyperparameter tuning and model selection method based on the self-agreement F1 score.

Let $\mathbb{M}^{(j)}_i$ denote the $j$-th checkpoint of the $i$-th model trained with hyperparameters $H$, where models with different $i$ differ in their random initialization. The hyperparameters $H$ are tuned to maximize

$$\frac{1}{N(N-1)} \sum_{i \neq i'} \operatorname*{avg}_{|j - j'| \leq \kappa} F_1\big(\mathbb{M}^{(j)}_i, \mathbb{M}^{(j')}_{i'}\big),$$

where $F_1(\cdot, \cdot)$ denotes the F1 score between the trees generated by two models, $N$ the number of different runs, and $\kappa$ the margin used to ensure that only nearby checkpoints are compared; the same fixed $\kappa$ is used in all of our experiments.

After finding the best hyperparameters $H^*$, we train the model for another $N$ times with different random initializations, and select the best model as the checkpoint whose parses have the highest average F1 agreement with the checkpoints of the other runs.

We compare the performance of VG-NSL selected by the self-F1 score and that selected by recall at 1 in image-to-text retrieval (R@1 in Table 3; kiros2014unifying). As a model selection criterion, self-F1 consistently outperforms R@1 (avg. F1: 50.4 vs. 47.7 and 53.3 vs. 53.1 for VG-NSL and VG-NSL+HI, respectively). Meanwhile, it is worth noting that even when VG-NSL is selected by R@1, it shows better stability than PRPN and ON-LSTM (Table 1), in terms of both the F1 variance across different random initializations and the self-F1: the variance of avg. F1 is always less than 0.6, while the self-F1 is greater than 80.

Note that the PRPN and ON-LSTM models are not tuned using self-F1, since these models are usually trained for hundreds or thousands of epochs and it is therefore computationally expensive to evaluate self-F1. We leave the efficient tuning of these baselines via self-F1 as future work.
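For reference, the self-F1 agreement used throughout this section can be computed from bracket sets as in the sketch below (our own minimal version, not the evaluation script). It represents each binary parse as nested 2-tuples and, for simplicity, averages sentence-level F1, whereas the evaluation in Section 4.1 follows sekine1997evalb and aggregates counts over all constituents in the corpus.

```python
def brackets(tree, start=0):
    """Collect the constituent spans (i, j) of a binary tree given as nested
    2-tuples of tokens, e.g. (('a', 'cat'), ('on', ('the', 'lawn')))."""
    if isinstance(tree, str):
        return set(), 1
    left, right = tree
    lb, ln = brackets(left, start)
    rb, rn = brackets(right, start + ln)
    return lb | rb | {(start, start + ln + rn)}, ln + rn

def bracket_f1(tree_a, tree_b):
    """F1 over constituent spans of two parses of the same sentence."""
    a, _ = brackets(tree_a)
    b, _ = brackets(tree_b)
    if not a or not b:
        return 1.0 if a == b else 0.0
    tp = len(a & b)
    prec, rec = tp / len(b), tp / len(a)
    return 0.0 if tp == 0 else 2 * prec * rec / (prec + rec)

def self_f1(parses_by_run):
    """Average pairwise F1 across runs; parses_by_run[r][s] is run r's tree for sentence s."""
    runs = len(parses_by_run)
    total, pairs = 0.0, 0
    for i in range(runs):
        for j in range(i + 1, runs):
            total += sum(bracket_f1(a, b)
                         for a, b in zip(parses_by_run[i], parses_by_run[j]))
            pairs += len(parses_by_run[i])
    return total / pairs
```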

4.8 Extension to Multiple Languages

Model EN DE FR
PRPN   30.8   31.5   27.5
ON-LSTM   38.7   34.9   27.7
VG-NSL 33.5 36.3 34.3
VG-NSL+HI 38.7 38.3 38.1
Table 4: F1 scores on the Multi30K test split (young2014image; elliott2016multi30k; elliott2017findings), averaged over 5 runs with different random initializations. ± denotes the standard deviation.

We extend our experiments to the Multi30K data set, which is built on the Flickr30K data set young2014image and consists of English, German elliott2016multi30k, and French elliott2017findings captions. For Multi30K, there are 29,000 images in the training set, 1,014 in the development set and 1,000 in the test set. Each image is associated with one caption in each language.

We compare our models to PRPN and ON-LSTM in terms of the overall F1 score (Table 4). VG-NSL with the head-initial inductive bias consistently performs the best across the three languages, all of which are highly head-initial (baker2001atoms). Note that the F1 scores here are not comparable to those in Table 1, since Multi30K (English) has 13x fewer captions than MSCOCO.

5 Discussion

We have proposed a simple but effective model, the Visually Grounded Neural Syntax Learner, for visually grounded language structure acquisition. VG-NSL jointly learns parse trees and visually grounded textual representations. In our experiments, we find that this approach to grounded language learning produces parsing models that are both accurate and stable, and that the learning is much more data-efficient than a state-of-the-art text-only approach. Along the way, the model acquires estimates of word concreteness.

The results suggest multiple future research directions. First, VG-NSL matches text embeddings directly with embeddings of entire images. Its performance may be boosted by considering structured representations of both images (e.g., lu2016visual; wu2019unified) and texts (steedman2000syntactic). Second, thus far we have used a shared representation for both syntax and semantics, but it may be useful to disentangle their representations (steedman2000syntactic). Third, our best model is based on the head-initial inductive bias. Automatically acquiring such inductive biases from data remains challenging (kemp2006learning; gauthier2018word). Finally, it may be possible to extend our approach to other linguistic tasks such as dependency parsing (christie2016resolving), coreference resolution (kottur2018visual), and learning pragmatics beyond semantics (andreas2016reasoning).

There are also limitations to the idea of grounded language acquisition. In particular, the current approach has thus far been applied to understanding grounded texts in a single domain (static visual scenes for VG-NSL). Its applicability could be extended by learning shared representations across multiple modalities (castrejon2016learning) or integrating with pure text-domain models (such as PRPN, shen2017neural).

Acknowledgement

We thank Allyson Ettinger for helpful suggestions on this work, and the anonymous reviewers for their valuable feedback.

References


Supplementary Material
The supplementary material is organized as follows. First, in Section A, we summarize and compare existing models for constituency parsing without explicit syntactic supervision. Next, in Section B, we present more implementation details of VG-NSL. Third, in Section C, we present the implementation details for all of our baseline models. Fourth, in Section D, we present the evaluation details of Benepar kitaev2018constituency on the MSCOCO data set. Fifth, in Section E, we qualitatively and quantitatively compare the concreteness scores estimated or labeled by different methods. Finally, in Section F, we show sample trees generated by VG-NSL on the MSCOCO test set.

Appendix A Overview of Models for Constituency Parsing without Explicit Syntactic Supervision

Model  Objective  Extra Label  Multimodal  Stochastic  Extra Corpus
CCM (klein2002generative)*  MAP  POS
DMV-CCM (klein2005natural)*  MAP  POS
U-DOP (bod2006unsupervised)*  Probability Estimation  POS
UML-DOP (bod2006all)*  MAP  POS
PMI  N/A
Random  N/A
Left  N/A
Right  N/A
PRPN (shen2017neural)  LM
ON-LSTM (shen2019ordered)  LM
Gumbel softmax (choi2018learning)  Cross-modal Retrieval
VG-NSL (ours)  Cross-modal Retrieval
VG-NSL+HI (ours)  Cross-modal Retrieval
Concreteness estimation–based models
turney2011literal*  N/A  Concreteness (Partial)
turney2011literal+HI*  N/A  Concreteness (Partial)
brysbaert2014concreteness*  N/A  Concreteness (Full)
brysbaert2014concreteness+HI*  N/A  Concreteness (Full)
hessel2018quantifying  N/A
hessel2018quantifying+HI  N/A
Table 5: Comparison of models for constituency parsing without explicit syntactic supervision. * denotes models requiring extra labels, such as POS tags or manually labeled concreteness scores. All multimodal methods listed in the table require a pretrained visual feature extractor (i.e., ResNet-101; he2016deep). A model is labeled as stochastic if, for fixed training data and hyperparameters, it may produce different results (e.g., due to different choices of random initialization). To the best of our knowledge, prior results on concreteness estimation (turney2011literal; brysbaert2014concreteness; hessel2018quantifying) have not been applied to unsupervised parsing so far.
{parsetree}

( .. (.. (.. (.. (.. ‘A’ ‘cat’ ) ‘is’ ) ‘on’ ) ‘the’ ) ‘ground’ )

(a) Left-branching tree.
{parsetree}

(.. ‘A’ (.. ‘cat’(.. ‘is’ (.. ‘on’ ( .. ‘the’ ‘ground’ ) ) ) ) )

(b) Right-branching tree.
Figure 5: Examples of some trivial tree structures.

As shown in Table 5, we compare existing models for constituency parsing without explicit syntactic supervision with respect to their learning objectives, dependence on extra labels or extra corpora, and other features. The table also includes previous work on parsing sentences from gold part-of-speech tags.

Appendix B Implementation Details for VG-NSL

We adopt the code released by faghri2017vse++ (https://github.com/fartashf/vsepp) as the visual-semantic embedding module for VG-NSL. Following them, we keep their default margin. We also use the vocabulary provided by faghri2017vse++ (http://www.cs.toronto.edu/~faghri/vsepp/vocab.tar), which contains 10,000 frequent words in the MSCOCO data set. Out-of-vocabulary words are treated as unseen words. For both VG-NSL and the baselines, we use the same vocabulary where applicable.

Hyperparameter tuning.

As stated in the main text, we use the self-agreement F1 score (williams2018latent) as an unsupervised signal for tuning all hyperparameters. Besides the learning rate and other conventional hyperparameters, we also tune $\lambda$, the hyperparameter of the head-initial bias model, which controls the weight of the penalty for abstract right-hand constituents. We choose $\lambda$ from a small set of candidate values and pick the one that gives the best self-agreement F1 score.

Appendix C Implementation Details for Baselines

Trivial tree structures.

We show examples of left-branching and right-branching binary trees in Figure 5. For random binary trees, we iteratively combine two randomly selected adjacent constituents. This procedure is similar to the one shown in Algorithm 2.

Parsing-Reading-Predict Network (PRPN).

We use the code released by shen2017neural (https://github.com/yikangshen/PRPN) to train PRPN. We tune the hyperparameters with respect to language modeling perplexity (jelinek1977perplexity). For a fair comparison, we fix the hidden dimension of all hidden layers of PRPN to 512. We use the Adam optimizer (kingma2015adam) to optimize the parameters, and tune the number of layers (1, 2, 3) and the learning rate. The models are trained for 100 epochs on the MSCOCO data set and 1,000 epochs on the Multi30K data set, with early stopping based on language model perplexity.

Ordered Neurons (ON-LSTM).

We use the code released by shen2019ordered (https://github.com/yikangshen/Ordered-Neurons) to train ON-LSTM. We tune the hyperparameters with respect to language modeling perplexity (jelinek1977perplexity), and use perplexity as the early stopping criterion. For a fair comparison, the hidden dimension of all hidden layers is set to 512, and the chunk size is changed to 16 to fit the hidden layer size. Following the original paper (shen2019ordered), we set the number of layers to 3 and report the constituency parse trees induced from the gate values output by the second layer of ON-LSTM. In order to obtain a better perplexity, we explore both Adam (kingma2015adam) and SGD as the optimizer, and tune the learning rate for each. The models are trained for 100 epochs on the MSCOCO data set and 1,000 epochs on the Multi30K data set, with early stopping based on language model perplexity.

PMI based constituency parsing.

We estimate the pointwise mutual information (PMI; ChurchHanks90) between two adjacent words using all captions in the MSCOCO training set. We use the negative PMI as the syntactic distance (shen2018straight) and generate a binary constituency parse tree recursively. The method for constituency parsing given a list of syntactic distances is shown in Algorithm 1.

Input: text length $n$, list of syntactic distances $d = (d_1, \ldots, d_{n-1})$
Output: boundaries of constituents $B$
$B$ = parse($d$, 1, $n$)
 
Function parse($d$, left, right)
if left $\geq$ right then
       return EmptySet
end if
$k \leftarrow \operatorname{arg\,max}_{\,\mathrm{left} \leq k' \leq \mathrm{right}-1}\, d_{k'}$
boundaries $\leftarrow$ union(
   {(left, right)},
  parse ($d$, left, $k$),
  parse ($d$, $k+1$, right)
)
return boundaries
Algorithm 1 Constituency parsing based on given syntactic distance.
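A runnable Python counterpart of Algorithm 1 (function and variable names are ours), which splits recursively at the largest syntactic distance; with negative PMI as the distance this reproduces the PMI baseline, while PRPN and ON-LSTM supply distances from their trained models.

```python
def distance_parse(distances, left=0, right=None):
    """Recover constituent boundaries from syntactic distances.

    distances[k] is the distance between word k and word k+1 (0-indexed), so a
    sentence of n words has n-1 distances. Returns a set of (left, right) spans
    over word indices, with `right` exclusive.
    """
    if right is None:
        right = len(distances) + 1
    if right - left <= 1:
        return set()
    # Split between the two adjacent words with the largest distance.
    split = max(range(left, right - 1), key=lambda k: distances[k]) + 1
    return ({(left, right)}
            | distance_parse(distances, left, split)
            | distance_parse(distances, split, right))

# Example: 4 words, largest distance between words 1 and 2:
# distance_parse([0.1, 0.9, 0.2]) == {(0, 4), (0, 2), (2, 4)}
```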

Gumbel-softmax based latent tree.

We integrate the Gumbel-softmax latent-tree text encoder of choi2018learning (https://github.com/jihunchoi/unsupervised-treelstm) into the visual-semantic embedding framework (faghri2017vse++), and use the tree structures it produces as a baseline.

Method  turney2011literal  brysbaert2014concreteness  hessel2018quantifying  VG-NSL+HI
turney2011literal 1.00 0.84 0.58 0.72
brysbaert2014concreteness 0.84 1.00 0.55 0.71
hessel2018quantifying 0.58 0.55 1.00 0.85
VG-NSL+HI 0.72 0.71 0.85 1.00
Table 6: Pearson correlation coefficients between existing concreteness estimation methods, including the baselines and VG-NSL+HI. To make a fair comparison, the correlation coefficients are evaluated on the 100 most frequent words in the MSCOCO test set.
Figure 6: Normalized concreteness scores of example words.

Concreteness estimation.

For the semi-supervised concreteness estimation, we reproduce the experiments of turney2011literal, applying the manually labeled concreteness scores for 4,295 words from the MRC Psycholinguistic Database Machine Usable Dictionary (coltheart1981mrc; http://ota.oucs.ox.ac.uk/headers/1054.xml) as supervision, and use English Wikipedia pages (https://dumps.wikimedia.org/other/static_html_dumps/April_2007/en/) to estimate the PMI between words. The PMI is then used to compute similarity between seen and unseen words, which in turn provides the weights for estimating the concreteness of unseen words. For the concreteness scores from crowdsourcing, we use the released data set of brysbaert2014concreteness (http://crr.ugent.be/archives/1330). Similarly to VG-NSL, the multimodal concreteness scores (hessel2018quantifying) are also estimated on the MSCOCO training set, using an open-source implementation (https://github.com/victorssilva/concreteness).

Constituency parsing with concreteness scores.

Let $s_w$ denote the concreteness score estimated by a model for the word $w$. Given the sequence of concreteness scores of the caption tokens, we aim to produce a binary constituency parse tree. We first normalize the concreteness scores to the range $[0, 1]$. (For the concreteness scores estimated by hessel2018quantifying, an additional transformation is applied before normalizing, as the original scores are not in this range.)

We treat unseen words (i.e., out-of-vocabulary words) in the same way as in VG-NSL, assigning them a concreteness of 0, under the assumption that unseen words are the most abstract ones.

We compose constituency parse trees using the normalized concreteness scores by iteratively combining consecutive constituents. At each step, we select two adjacent constituents (initially, words) with the highest average concreteness score and combine them into a larger constituent, of which the concreteness is the average of its children. We repeat the above procedure until there is only one constituent left.

As for the head-initial inductive bias, we weight the concreteness of the right constituent by a hyperparameter $\lambda$ when ranking all pairs of consecutive constituents during selection, while the concreteness of the composed constituent remains the average of its two components. To stay consistent with VG-NSL, we use the same value of $\lambda$ in all of our experiments.

The procedure is summarized in Algorithm 2.

Input: list of normalized concreteness scores $s_1, \ldots, s_n$, hyperparameter $\lambda$
Output: boundaries of constituents $B$
for $i \leftarrow 1$ to $n$ do
       $l_i \leftarrow i$;  $r_i \leftarrow i$;  $c_i \leftarrow s_i$
end for
while the number of constituents $> 1$ do
       $k \leftarrow \operatorname{arg\,max}_{k'} \big(c_{k'} + \lambda \cdot c_{k'+1}\big) / 2$
       add $(l_k, r_{k+1})$ to $B$
       $c_k \leftarrow (c_k + c_{k+1}) / 2$
       $r_k \leftarrow r_{k+1}$
       remove constituent $k+1$ from the list
end while
Algorithm 2 Constituency parsing based on concreteness estimation.
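A compact Python version of Algorithm 2 under the same assumptions (names ours): greedily merge the adjacent pair with the highest $\lambda$-weighted average concreteness, and give the merged constituent the plain average of its children.

```python
def concreteness_parse(scores, lam=1.0):
    """Greedy constituency parsing from per-token concreteness scores.

    scores: list of normalized concreteness values, one per token.
    lam:    weight on the right constituent's concreteness when ranking pairs;
            lam > 1 penalizes attaching an abstract right-hand constituent to a
            preceding one (head-initial variant); lam = 1 recovers the base model.
    Returns the set of (left, right) constituent spans, with `right` exclusive.
    """
    # Each constituent is (start, end, concreteness); words start as singletons.
    items = [(i, i + 1, s) for i, s in enumerate(scores)]
    spans = set()
    while len(items) > 1:
        # Pick the adjacent pair with the highest weighted average concreteness.
        k = max(range(len(items) - 1),
                key=lambda i: (items[i][2] + lam * items[i + 1][2]) / 2)
        (l, _, ca), (_, r, cb) = items[k], items[k + 1]
        # The merged constituent's concreteness is the average of its children.
        items[k:k + 2] = [(l, r, (ca + cb) / 2)]
        spans.add((l, r))
    return spans
```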
{parsetree}

(.. (.. ‘Three’ ‘white’ ‘sinks’ ) (.. ‘in’ (.. (.. ‘a’ ‘bathroom’ ) (.. ‘under ’ ‘mirrors’ ) ) ) )

(a) Constituency parse tree labeled by Benepar kitaev2018constituency.
{parsetree}

(.. (.. ‘Three’ ‘white’ ‘sinks’ ) (.. ‘in’ (.. ‘a’ ‘bathroom’ ) ) (.. ‘under’ ‘mirrors’ ) )

(b) Manually labeled constituency parse tree.
Figure 7: A failure example by Benepar, where it fails to parse the noun phrase “three white sinks in a bathroom under mirrors” – according to human commonsense, it is much more common for sinks, rather than a bathroom, to be under mirrors. However, most of the constituents (e.g., “three white sinks” and “under mirrors”) are still successfully extracted by Benepar.

Appendix D Details of Manual Ground Truth Evaluation

It is important to confirm that the constituency parse trees of the MSCOCO captions produced by Benepar (kitaev2018constituency) are of high enough quality to serve as reliable ground truth for the evaluation of other models. To verify this, we randomly sample 50 captions from the MSCOCO test split and manually label their constituency parse trees, without reference to either Benepar or the paired images, following the principles of bies1995bracketing as much as possible. (The manually labeled constituency parse trees are publicly available at https://ttic.uchicago.edu/~freda/vgnsl/manually_labeled_trees.txt.) Note that we only label the tree structures, without constituent labels (e.g., NP and PP). Most failure cases of Benepar are related to human commonsense in resolving parsing ambiguities, e.g., prepositional phrase attachments (Figure 7).

We compare the manually labeled trees with those produced by Benepar (kitaev2018constituency), and find that the F1 score between them is 95.65.

Appendix E Concreteness by Different Models

E.1 Correlation between Different Concreteness Estimations

We report the correlation between different methods for concreteness estimation in Table 6. The concreteness scores given by turney2011literal and brysbaert2014concreteness correlate highly with each other. The concreteness scores estimated from multi-modal data (hessel2018quantifying) also correlate moderately with these two methods (turney2011literal; brysbaert2014concreteness). Compared to the concreteness estimated by hessel2018quantifying, the one estimated by our model has a stronger correlation with the scores estimated from linguistic data (turney2011literal; brysbaert2014concreteness).

E.2 Concreteness Scores of Sample Words by Different Methods

We present the concreteness scores estimated or labeled by different methods in Figure 6, which qualitatively shows that the different methods correlate well with each other.

Appendix F Sample Trees Generated by VG-NSL

Figure 8 shows sample trees generated by VG-NSL with the head-initial inductive bias (VG-NSL+HI). All captions are chosen from the MSCOCO test set.

{parsetree}

( .. ( .. ‘a’ ‘kitchen’ ) ( .. ‘with’ ( .. ( .. ‘two’ ‘windows’ ) ( .. ‘and’ ( .. ‘two’ ( .. ‘metal’ ‘sinks’ ) ) ) ) ) )

(a) a kitchen with two windows and two metal sinks
{parsetree}

( .. ( .. ‘a’ ( .. ‘blue’ ( .. ‘small’ ‘plane’ ) ) ) ( .. ‘standing’ ( .. ‘at’ ( .. ‘the’ ‘airstrip’ ) ) ) )

(b) a blue small plane standing at the airstrip
{parsetree}

( .. ( .. ( .. ‘young’ ‘boy’ ) ‘sitting’ ) ( .. ‘on’ ( .. ‘top’ ( .. ‘of’ ( .. ‘a’ ‘briefcase’ ) ) ) ) )

(c) young boy sitting on top of a briefcase
{parsetree}

( .. ( .. ( .. ‘a’ ( .. ‘small’ ‘dog’ ) ) ‘eating’ ) ( .. ( .. ‘a’ ‘plate’ ) ( .. ‘of’ ‘broccoli’ ) ) )

(d) a small dog eating a plate of broccoli
{parsetree}

( .. ( .. ( .. ( .. ‘a’ ‘building’ ) ( .. ‘with’ ( .. ‘a’ ( .. ‘bunch’ ( .. ‘of’ ‘people’ ) ) ) ) ) ( .. ‘standing’ ‘around’ ) ) ‘it’ )

(i) a building with a bunch of people standing around it
{parsetree}

( .. ( .. ( .. ‘a’ ‘horse’ ) ‘walking’ ) ( .. ‘by’ ( .. ( .. ‘a’ ‘tree’ ) ( .. ‘in’ ( .. ‘the’ ‘woods’ ) ) ) ) )

(j) a horse walking by a tree in the woods
{parsetree}

( .. ( .. ( .. ‘the’ ( .. ‘golden’ ‘waffle’ ) ) ( .. ‘has’ ( .. ‘a’ ‘banana’ ) ) ) ( .. ‘in’ ‘it’ ) )

(k) the golden waffle has a banana in it .
{parsetree}

( .. ( .. ( .. ‘a’ ‘bowl’ ) ( .. ‘full’ ( .. ‘of’ ‘oranges’ ) ) ) ( .. ‘that’ ( .. ‘still’ ( .. ‘have’ ‘stems’ ) ) ) )

(l) a bowl full of oranges that still have stems
{parsetree}

( .. ( .. ‘there’ ( .. ‘is’ ( .. ‘a’ ‘person’ ) ) ) ( .. ‘that’ ( .. ‘is’ ( .. ‘sitting’ ( .. ( .. ‘in’ ( .. ‘the’ ‘boat’ ) ) ( .. ‘on’ ( .. ‘the’ ‘water’ ) ) ) ) ) ) )

(u) there is a person that is sitting in the boat on the water
{parsetree}

( .. ( .. ( .. ‘a’ ‘sandwich’ ) ( .. ‘and’ ‘soup’ ) ) ( .. ‘sit’ ( .. ‘on’ ( .. ‘a’ ‘table’ ) ) ) )

(v) a sandwich and soup sit on a table
{parsetree}

( .. ( .. ‘a’ ( .. ‘big’ ‘umbrella’ ) ) ( .. ‘sitting’ ( .. ‘on’ ( .. ‘the’ ‘beach’ ) ) ) )

(w) a big umbrella sitting on the beach
Figure 8: Examples of parse trees generated by VG-NSL.