Compound Probabilistic Context-Free Grammars
for Grammar Induction
Abstract
We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our context-free rule probabilities are modulated by a per-sentence continuous latent variable, which induces marginal dependencies beyond the traditional context-free assumptions. Inference in this grammar is performed by collapsed variational inference, in which an amortized variational posterior is placed on the continuous variable and the latent trees are marginalized out with dynamic programming. Experiments on English and Chinese show the effectiveness of our approach compared to recent state-of-the-art methods for grammar induction.
1 Introduction
[†] Code: https://github.com/harvardnlp/compound-pcfg

Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules) and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFGs) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm Lari and Young (1990); Carroll and Charniak (1992). While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives Klein and Manning (2002), priors or non-parametric models Kurihara and Sato (2006); Johnson et al. (2007); Liang et al. (2007); Wang and Blunsom (2013), and manually-engineered features Huang et al. (2012); Golland et al. (2012) to encourage the desired structures to emerge.
We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG's rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models Arora et al. (2018); Xu et al. (2018); Du et al. (2019), and we indeed find that this neural PCFG is significantly easier to optimize than the traditional PCFG. Second, this factored parameterization makes it straightforward to incorporate side information into rule probabilities through a sentence-level continuous latent vector, which effectively allows different contexts in a derivation to coordinate. In this compound PCFG (a continuous mixture of PCFGs) the context-free assumptions hold conditioned on the latent vector but not unconditionally, thereby obtaining longer-range dependencies within a tree-based generative process.
To utilize this approach, we need to efficiently optimize the log marginal likelihood of observed sentences. While compound PCFGs break efficient inference, if the latent vector is known the distribution over trees reduces to a standard PCFG. This property allows us to perform grammar induction using a collapsed approach where the latent trees are marginalized out exactly with dynamic programming. To handle the latent vector, we employ standard amortized inference using reparameterized samples from a variational posterior produced by an inference network Kingma and Welling (2014); Rezende et al. (2014).
2 Probabilistic Context-Free Grammars
We consider context-free grammars (CFG) consisting of a 5-tuple $\mathcal{G} = (S, \mathcal{N}, \mathcal{P}, \Sigma, \mathcal{R})$ where $S$ is the distinguished start symbol, $\mathcal{N}$ is a finite set of nonterminals, $\mathcal{P}$ is a finite set of preterminals,[1] $\Sigma$ is a finite set of terminal symbols, and $\mathcal{R}$ is a finite set of rules of the form

$S \to A$,  $A \in \mathcal{N}$
$A \to B\,C$,  $A \in \mathcal{N}$, $B, C \in \mathcal{N} \cup \mathcal{P}$
$T \to w$,  $T \in \mathcal{P}$, $w \in \Sigma$.

[1] Since we will be inducing a grammar directly from words, $\mathcal{P}$ is roughly the set of part-of-speech tags and $\mathcal{N}$ is the set of constituent labels. However, to avoid issues of label alignment, evaluation is only on the tree topology.
A probabilistic context-free grammar (PCFG) consists of a grammar $\mathcal{G}$ and rule probabilities $\pi = \{\pi_r\}_{r \in \mathcal{R}}$ such that $\pi_r$ is the probability of the rule $r$. Letting $\mathcal{T}_{\mathcal{G}}$ be the set of all parse trees of $\mathcal{G}$, a PCFG defines a probability distribution over trees $t \in \mathcal{T}_{\mathcal{G}}$ via $p_\pi(t) = \prod_{r \in t_{\mathcal{R}}} \pi_r$, where $t_{\mathcal{R}}$ is the set of rules used in the derivation of $t$. It also defines a distribution over strings of terminals $x \in \Sigma^*$ via

$p_\pi(x) = \sum_{t \in \mathcal{T}_{\mathcal{G}}(x)} p_\pi(t)$,

where $\mathcal{T}_{\mathcal{G}}(x) = \{t \mid \mathrm{yield}(t) = x\}$, i.e. the set of trees $t$ such that $t$'s leaves are $x$. We will slightly abuse notation and use

$p_\pi(t \mid x) \triangleq \dfrac{p_\pi(t)\,\mathbb{1}[\mathrm{yield}(t) = x]}{p_\pi(x)}$

to denote the posterior distribution over the unobserved latent trees given the observed sentence $x$, where $\mathbb{1}[\cdot]$ is the indicator function.
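As a concrete illustration of how $p_\pi(x)$ is computed, the following is a minimal sketch of the inside algorithm on a toy CNF grammar; the symbols and rule probabilities are invented for illustration and are not part of the paper's induced grammars.

```python
from collections import defaultdict

# Toy CNF grammar in the paper's form (S -> A, A -> B C, T -> w).
# Rule probabilities here are illustrative, not induced.
start_rules = {"NP": 1.0}                                  # S -> A
binary_rules = {("NP", ("T_DT", "T_NN")): 1.0}             # A -> B C
lexical_rules = {("T_DT", "the"): 1.0,                     # T -> w
                 ("T_NN", "movie"): 0.6, ("T_NN", "film"): 0.4}

def inside(sentence):
    """p(x) = sum over all trees yielding x, via the O(n^3) inside algorithm."""
    n = len(sentence)
    # beta[i][j][A] = total probability that symbol A yields words i..j
    beta = [[defaultdict(float) for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(sentence):
        for (T, word), p in lexical_rules.items():
            if word == w:
                beta[i][i][T] += p
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            for k in range(i, j):                          # split point
                for (A, (B, C)), p in binary_rules.items():
                    beta[i][j][A] += p * beta[i][k][B] * beta[k + 1][j][C]
    return sum(start_rules.get(A, 0.0) * p for A, p in beta[0][n - 1].items())

print(inside(["the", "movie"]))  # 0.6
print(inside(["the", "film"]))   # 0.4
```

The same chart, with sum replaced by max, gives the Viterbi parse used later for MAP inference.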
Parameterization
The standard way to parameterize a PCFG is to simply associate a scalar to each rule, with the constraint that for each left-hand-side symbol the scalars form a valid probability distribution, i.e. each nonterminal is associated with a fully-parameterized categorical distribution over its rules. This direct parameterization is algorithmically convenient since the M-step in the EM algorithm Dempster et al. (1977) has a closed form. However, there is a long history of work showing that it is difficult to learn meaningful grammars from natural language data with this parameterization Carroll and Charniak (1992).[2] Successful approaches to unsupervised parsing have therefore modified the model/learning objective by guiding potentially unrelated rules to behave similarly.

[2] In preliminary experiments we were indeed unable to learn linguistically meaningful grammars with this PCFG.
Recognizing that sharing among rule types is beneficial, we propose a neural parameterization where rule probabilities are based on distributed representations. We associate embeddings with each symbol, introducing input embeddings $\mathbf{w}_N$ for each symbol $N$ on the left side of a rule (i.e. $N \in \{S\} \cup \mathcal{N} \cup \mathcal{P}$). For each rule type $r$, $\pi_r$ is parameterized as follows:

$\pi_{S \to A} = \dfrac{\exp(\mathbf{u}_A^\top f_1(\mathbf{w}_S))}{\sum_{A' \in \mathcal{N}} \exp(\mathbf{u}_{A'}^\top f_1(\mathbf{w}_S))}$,
$\pi_{A \to BC} = \dfrac{\exp(\mathbf{u}_{BC}^\top \mathbf{w}_A)}{\sum_{B'C' \in \mathcal{M}} \exp(\mathbf{u}_{B'C'}^\top \mathbf{w}_A)}$,
$\pi_{T \to w} = \dfrac{\exp(\mathbf{u}_w^\top f_2(\mathbf{w}_T))}{\sum_{w' \in \Sigma} \exp(\mathbf{u}_{w'}^\top f_2(\mathbf{w}_T))}$,

where $\mathcal{M}$ is the product space $(\mathcal{N} \cup \mathcal{P}) \times (\mathcal{N} \cup \mathcal{P})$, and $f_1, f_2$ are MLPs with two residual layers (see section A.1 for the full parameterization). We will use $E_{\mathcal{G}} = \{\mathbf{w}_N \mid N \in \{S\} \cup \mathcal{N} \cup \mathcal{P}\}$ to denote the set of input symbol embeddings for a grammar $\mathcal{G}$, and $\lambda$ to refer to the parameters of the neural network used to obtain the rule probabilities. A graphical model-like illustration of the neural PCFG is shown in Figure 1 (left).
It is clear that the neural parameterization does not change the underlying probabilistic assumptions. The difference between the two is analogous to the difference between count-based vs. feed-forward neural language models, where feed-forward neural language models make the same Markov assumptions as the count-based models but are able to take advantage of shared, distributed representations.
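To make the shared-embedding idea concrete, here is a minimal sketch (not the paper's implementation) of computing the binary-rule distribution $\pi_{A \to BC}$ as a softmax of inner products between an input embedding $\mathbf{w}_A$ and output embeddings $\mathbf{u}_{BC}$; the residual MLPs used for root and terminal rules are omitted, and all dimensions and embeddings are invented.

```python
import math, random

random.seed(0)
d = 8
nonterminals = ["NT1", "NT2"]
preterminals = ["T1", "T2"]
symbols = nonterminals + preterminals

def vec():
    return [random.gauss(0, 1) for _ in range(d)]

# Input embedding w_A for each left-hand-side nonterminal, and an output
# embedding u_{BC} for every child pair (B, C) in (N ∪ P) x (N ∪ P).
w = {A: vec() for A in nonterminals}
u = {(B, C): vec() for B in symbols for C in symbols}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def binary_rule_probs(A):
    """pi_{A -> B C} via a softmax over all child pairs; because every rule's
    score reuses the same embeddings, related rules share parameters."""
    scores = {bc: dot(u[bc], w[A]) for bc in u}
    m = max(scores.values())
    exps = {bc: math.exp(s - m) for bc, s in scores.items()}
    Z = sum(exps.values())
    return {bc: e / Z for bc, e in exps.items()}

probs = binary_rule_probs("NT1")
print(abs(sum(probs.values()) - 1.0) < 1e-9)  # True: a valid distribution
```

The softmax guarantees a valid categorical distribution per left-hand side, so no explicit normalization constraint is needed during optimization.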
3 Compound PCFGs
A compound probability distribution Robbins (1951) is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case; a classic example is factor analysis, which assumes the following generative process:

$\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$,  $\mathbf{x} \sim \mathcal{N}(\mathbf{W}\mathbf{z}, \boldsymbol{\Sigma})$.
Compound distributions provide the ability to model rich generative processes, but marginalizing over the latent parameter can be computationally intractable unless conjugacy can be exploited.
In this work, we study compound probabilistic contextfree grammars whose distribution over trees arises from the following generative process: we first obtain rule probabilities via
where is a prior with parameters (spherical Gaussian in this paper), and is a neural network that concatenates the input symbol embeddings with and outputs the sentencelevel rule probabilities ,
where denotes vector concatenation. Then a tree/sentence is sampled from a PCFG with rule probabilities given by ,
This can be viewed as a continuous mixture of PCFGs, or alternatively, a Bayesian PCFG with a prior on sentence-level rule probabilities parameterized by $\mathbf{z}$, $\lambda$, and $E_{\mathcal{G}}$.[3] Importantly, under this generative model the context-free assumptions hold conditioned on $\mathbf{z}$, but they do not hold unconditionally. This is shown in Figure 1 (right), where there is a dependence path through $\mathbf{z}$ if it is not conditioned upon. Compound PCFGs give rise to a marginal distribution over parse trees $t$ via

$p_\theta(t) = \int p(t \mid \mathbf{z})\, p_\gamma(\mathbf{z})\, d\mathbf{z}$,  where  $p(t \mid \mathbf{z}) = \prod_{r \in t_{\mathcal{R}}} \pi_{\mathbf{z}, r}$.

The subscript in $\pi_{\mathbf{z}, r}$ denotes the fact that the rule probabilities depend on $\mathbf{z}$. Compound PCFGs are clearly more expressive than PCFGs, as each sentence has its own set of rule probabilities. However, the model still assumes a tree-based generative process, making it possible to learn latent tree structures.

[3] Under the Bayesian PCFG view, $p_\gamma(\mathbf{z})$ is a distribution over $\pi_{\mathbf{z}}$ (a subset of the prior), and $\gamma$ is thus a hyperprior.
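The $\mathbf{z}$-dependence can be sketched as below, assuming illustrative dimensions and random embeddings: concatenating a per-sentence draw $\mathbf{z}$ with the input embedding changes the rule distribution from sentence to sentence, which is exactly why the context-free assumptions hold only conditioned on $\mathbf{z}$.

```python
import math, random

random.seed(0)
d, dz = 8, 4
symbols = ["NT1", "NT2", "T1", "T2"]

def vec(n):
    return [random.gauss(0, 1) for _ in range(n)]

w_A = vec(d)                                   # input embedding for nonterminal A
u = {(B, C): vec(d + dz) for B in symbols for C in symbols}

def rule_probs(z):
    """pi_{z, A -> B C} ∝ exp(u_{BC}^T [w_A; z]): per-sentence probabilities."""
    wz = w_A + z                               # vector concatenation [w_A; z]
    scores = {bc: sum(a * b for a, b in zip(u[bc], wz)) for bc in u}
    m = max(scores.values())
    exps = {bc: math.exp(s - m) for bc, s in scores.items()}
    Z = sum(exps.values())
    return {bc: e / Z for bc, e in exps.items()}

z1 = vec(dz)          # z ~ N(0, I), one draw per sentence
z2 = vec(dz)
p1, p2 = rule_probs(z1), rule_probs(z2)
# Different sentence-level z give different rule distributions, so
# marginally the grammar is a continuous mixture of PCFGs.
print(any(abs(p1[bc] - p2[bc]) > 1e-6 for bc in u))  # True
```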
Our motivation for the compound PCFG is based on the observation that for grammar induction, context-free assumptions are generally made not because they represent an adequate model of natural language, but because they allow for tractable training.[4] We can in principle model richer dependencies through vertical/horizontal Markovization Johnson (1998); Klein and Manning (2003) and lexicalization Collins (1997). However, such dependencies complicate training due to the rapid increase in the number of rules. Under this view, we can interpret the compound PCFG as a restricted version of some lexicalized, higher-order PCFG where a child can depend on structural and lexical context through a shared latent vector.[5] We hypothesize that this dependence among siblings is especially useful in grammar induction from words, where (for example) if we know that watched is used as a verb then the following noun phrase is likely to be a movie.

[4] A piece of evidence for the misspecification of first-order PCFGs as a statistical model of natural language is that if one pretrains a first-order PCFG on supervised data and continues training with the unsupervised objective (i.e. log marginal likelihood), the resulting grammar deviates significantly from the supervised initial grammar while the log marginal likelihood improves Johnson et al. (2007). Similar observations have been made for part-of-speech induction with Hidden Markov Models Merialdo (1994).
[5] Another interpretation of the compound PCFG is to view it as a vectorized version of indexed grammars Aho (1968), which extend CFGs by augmenting nonterminals with additional index strings that may be inherited or modified during derivation. Compound PCFGs instead equip nonterminals with a continuous vector that is always inherited.
In contrast to the usual Bayesian treatment of PCFGs, which places priors on global rule probabilities Kurihara and Sato (2006); Johnson et al. (2007); Wang and Blunsom (2013), the compound PCFG assumes a prior on local, sentence-level rule probabilities. It is therefore closely related to the Bayesian grammars studied by Cohen et al. (2009) and Cohen and Smith (2009), who also sample local rule probabilities from a logistic normal prior for training dependency models with valence (DMV) Klein and Manning (2004).
Inference in Compound PCFGs
The expressivity of compound PCFGs comes at a significant challenge in learning and inference. Letting $\theta$ be the parameters of the generative model, we would like to maximize the log marginal likelihood of the observed sentence, $\log p_\theta(x)$. In the neural PCFG the log marginal likelihood $\log p_\theta(x) = \log \sum_{t \in \mathcal{T}_{\mathcal{G}}(x)} p_\theta(t)$ can be obtained by summing out the latent tree structure using the inside algorithm Baker (1979), which is differentiable and thus amenable to gradient-based optimization.[6] In the compound PCFG, the log marginal likelihood is given by

$\log p_\theta(x) = \log \int \Big( \sum_{t \in \mathcal{T}_{\mathcal{G}}(x)} p_\theta(t \mid \mathbf{z}) \Big)\, p_\gamma(\mathbf{z})\, d\mathbf{z}$.

[6] In the context of the EM algorithm, directly performing gradient ascent on the log marginal likelihood is equivalent to performing an exact E-step (with the inside-outside algorithm) followed by a gradient-based M-step Salakhutdinov et al. (2003); Berg-Kirkpatrick et al. (2010); Eisner (2016).
Notice that while the integral over $\mathbf{z}$ makes this quantity intractable, when we condition on $\mathbf{z}$ we can tractably perform the inner summation to obtain $p_\theta(x \mid \mathbf{z}) = \sum_{t \in \mathcal{T}_{\mathcal{G}}(x)} p_\theta(t \mid \mathbf{z})$ using the inside algorithm. We therefore resort to collapsed amortized variational inference: we first obtain a sample $\mathbf{z}$ from a variational posterior distribution (given by an amortized inference network), then perform the inner marginalization conditioned on this sample. The evidence lower bound is then

$\mathrm{ELBO}(\theta, \phi; x) = \mathbb{E}_{q_\phi(\mathbf{z} \mid x)}[\log p_\theta(x \mid \mathbf{z})] - \mathrm{KL}[q_\phi(\mathbf{z} \mid x) \,\|\, p_\gamma(\mathbf{z})]$,

and we can calculate $p_\theta(x \mid \mathbf{z})$ given a sample $\mathbf{z}$ from the variational posterior $q_\phi(\mathbf{z} \mid x)$. For the variational family we use a diagonal Gaussian where the mean/log-variance vectors are given by an affine layer over max-pooled hidden states from an LSTM over $x$. We can obtain low-variance gradient estimators by using the reparameterization trick for the expected reconstruction likelihood and the analytical expression for the KL term Kingma and Welling (2014).
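The two quantities needed to estimate the ELBO can be sketched as follows, assuming a diagonal Gaussian posterior and a standard Gaussian prior; the reconstruction term $\log p_\theta(x \mid \mathbf{z})$ itself would come from the inside algorithm run with rule probabilities $\pi_{\mathbf{z}}$ and is not computed here.

```python
import math, random

random.seed(0)

def elbo_parts(mu, logvar):
    """One reparameterized sample z = mu + sigma * eps with eps ~ N(0, I), and
    the closed-form KL between N(mu, diag(sigma^2)) and the N(0, I) prior."""
    eps = [random.gauss(0, 1) for _ in mu]
    z = [m + math.exp(0.5 * lv) * e for m, lv, e in zip(mu, logvar, eps)]
    kl = -0.5 * sum(1 + lv - m * m - math.exp(lv) for m, lv in zip(mu, logvar))
    return z, kl

mu, logvar = [0.5, -0.2], [0.0, 0.0]
z, kl = elbo_parts(mu, logvar)
# ELBO = E_q[log p(x | z)] - KL; gradients flow through z because the
# sample is a deterministic function of (mu, logvar) and the noise eps.
print(kl)  # 0.5 * (0.5^2 + 0.2^2) ≈ 0.145
```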
We remark that under the Bayesian PCFG view, since the parameters of the prior (i.e. $\gamma$) are estimated from the data, our approach can be seen as an instance of empirical Bayes Robbins (1956).[7]

[7] See Berger (1985) (chapter 4), Zhang (2003), and Cohen (2016) (chapter 3) for further discussion on compound models and empirical Bayes.
MAP Inference
After training, we are interested in comparing the learned trees against an annotated treebank. This requires inferring the most likely tree given a sentence, i.e. $\operatorname{argmax}_t p(t \mid x)$. For the neural PCFG we can obtain the most likely tree by using the Viterbi version of the inside algorithm (the CKY algorithm). For the compound PCFG, the argmax is intractable to obtain exactly, and hence we estimate it with the following approximation:

$\operatorname{argmax}_t \int p_\theta(t \mid x, \mathbf{z})\, p_\theta(\mathbf{z} \mid x)\, d\mathbf{z} \;\approx\; \operatorname{argmax}_t p_\theta(t \mid x, \boldsymbol{\mu}_\phi(x))$,

where $\boldsymbol{\mu}_\phi(x)$ is the mean vector from the inference network. The above approximates the true posterior $p_\theta(\mathbf{z} \mid x)$ with $\delta(\mathbf{z} - \boldsymbol{\mu}_\phi(x))$, the Dirac delta function at the mode of the variational posterior.[8] This quantity is tractable, as in the PCFG case. Other approximations are possible: for example, we could use $q_\phi(\mathbf{z} \mid x)$ as an importance sampling distribution to estimate the first integral. However, we found the above approximation to be efficient and effective in practice.

[8] Since $p_\theta(t \mid x, \mathbf{z})$ is continuous with respect to $\mathbf{z}$, we have $\int p_\theta(t \mid x, \mathbf{z})\, \delta(\mathbf{z} - \boldsymbol{\mu}_\phi(x))\, d\mathbf{z} = p_\theta(t \mid x, \boldsymbol{\mu}_\phi(x))$.
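For the Viterbi step, the following toy sketch replaces the inside algorithm's sum with a max and keeps backpointers (the CKY algorithm); in the compound PCFG the rule probabilities below would be $\pi_{\mathbf{z}}$ evaluated at $\mathbf{z} = \boldsymbol{\mu}_\phi(x)$. The grammar and probabilities are invented for illustration.

```python
from collections import defaultdict

# Toy CNF rules with illustrative probabilities.
binary_rules = {("S", ("A", "B")): 0.7, ("S", ("B", "A")): 0.3}
lexical_rules = {("A", "a"): 1.0, ("B", "b"): 1.0}

def viterbi(sentence, root="S"):
    """CKY: return the highest-probability tree as a nested bracketing."""
    n = len(sentence)
    best = [[defaultdict(float) for _ in range(n)] for _ in range(n)]
    back = [[{} for _ in range(n)] for _ in range(n)]
    for i, word in enumerate(sentence):
        for (T, w), p in lexical_rules.items():
            if w == word and p > best[i][i][T]:
                best[i][i][T] = p
                back[i][i][T] = word
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            for k in range(i, j):                      # split point
                for (A, (B, C)), p in binary_rules.items():
                    score = p * best[i][k][B] * best[k + 1][j][C]
                    if score > best[i][j][A]:
                        best[i][j][A] = score
                        back[i][j][A] = (k, B, C)
    def build(i, j, A):
        if i == j:
            return (A, back[i][i][A])
        k, B, C = back[i][j][A]
        return (A, build(i, k, B), build(k + 1, j, C))
    return build(0, n - 1, root), best[0][n - 1][root]

tree, score = viterbi(["a", "b"])
print(tree, score)  # ('S', ('A', 'a'), ('B', 'b')) 0.7
```

Running this chart with $\pi_{\boldsymbol{\mu}_\phi(x)}$ yields the approximate MAP tree described above.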
4 Experimental Setup
Data
We test our approach on the Penn Treebank (PTB) Marcus et al. (1993) with the standard splits (sections 2-21 for training, 22 for validation, 23 for test) and the same preprocessing as in recent works Shen et al. (2018, 2019), where we discard punctuation, lowercase all tokens, and take the top 10K most frequent words as the vocabulary. This setup is more challenging than traditional setups, which usually experiment on shorter sentences and use gold part-of-speech tags. We also experiment on Chinese with the Chinese Treebank (CTB), preprocessed in the same manner.
Hyperparameters
Our PCFG uses 30 nonterminals and 60 preterminals, with 256-dimensional symbol embeddings. The compound PCFG uses 64-dimensional latent vectors. The bidirectional LSTM inference network has a single layer with 512 dimensions, and the mean and the log variance vector for $q_\phi(\mathbf{z} \mid x)$ are given by max-pooling the hidden states of the LSTM and passing them through an affine layer. Model parameters are initialized with Xavier uniform initialization. For training we use Adam Kingma and Ba (2015) with $\beta_1 = 0.75$ and a learning rate of 0.001, with a maximum gradient norm limit of 3. We train for 10 epochs with batch size equal to 4. We employ a curriculum learning strategy Bengio et al. (2009) where we train only on sentences of length up to 30 in the first epoch, and increase this length limit by 1 each epoch. Similar curriculum-based strategies have been used in the past for grammar induction Spitkovsky et al. (2012). During training we perform early stopping based on validation perplexity.[9] Finally, to mitigate overfitting to PTB, experiments on CTB utilize the same hyperparameters from PTB.

[9] However, we used F1 against validation trees on PTB to select some hyperparameters (e.g. grammar size), as is sometimes done in grammar induction. Hence our PTB results are arguably not fully unsupervised in the strictest sense of the term. The hyperparameters of the PRPN/ON baselines are also tuned using validation F1 for fair comparison.
Baselines and Evaluation
We observe that even on PTB, there is enough variation in setups across prior work on grammar induction to render a meaningful comparison difficult. Some important dimensions along which prior works vary include: (1) lexicalization: earlier work on grammar induction generally assumed gold (or induced) part-of-speech tags Klein and Manning (2004); Smith and Eisner (2004); Bod (2006); Snyder et al. (2009), while more recent works induce grammars directly from words Spitkovsky et al. (2013); Shen et al. (2018); (2) use of punctuation: even within papers that induce a grammar directly from words, some employ heuristics based on punctuation, as punctuation is usually a strong signal for the start/end of constituents Seginer (2007); Ponvert et al. (2011); Spitkovsky et al. (2013), some train with punctuation Jin et al. (2018); Drozdov et al. (2019); Kim et al. (2019), while others discard punctuation altogether for training Shen et al. (2018, 2019); (3) train/test data: some works do not explicitly separate out train/test sets Reichart and Rappoport (2010); Golland et al. (2012) while some do Huang et al. (2012); Parikh et al. (2014); Htut et al. (2018). Maintaining train/test splits is less of an issue for unsupervised structure learning; in this work we follow the latter and separate train/test data; (4) evaluation: for unlabeled F1, almost all works ignore punctuation (even approaches that use punctuation during training typically ignore it during evaluation), but there is some variance in discarding trivial spans (width-one and sentence-level spans) and in using corpus-level versus sentence-level F1.[10] In this paper we discard trivial spans and evaluate on sentence-level F1 per recent work Shen et al. (2018, 2019).

[10] Corpus-level F1 calculates precision/recall at the corpus level to obtain F1, while sentence-level F1 calculates F1 for each sentence and averages across the corpus.
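The evaluation protocol described above (discard trivial spans, average F1 per sentence) can be sketched as follows; the span sets below are invented examples, not model output.

```python
def sentence_f1(pred_spans, gold_spans, sent_len):
    """Unlabeled F1 for one sentence, discarding trivial spans (width-one
    spans and the whole-sentence span). Spans are (start, end) inclusive."""
    def nontrivial(spans):
        return {(i, j) for (i, j) in spans
                if j > i and (i, j) != (0, sent_len - 1)}
    pred, gold = nontrivial(pred_spans), nontrivial(gold_spans)
    if not pred and not gold:
        return 1.0
    tp = len(pred & gold)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0

# Sentence-level F1 averages per-sentence scores; corpus-level F1 would
# instead pool true-positive/predicted/gold counts before computing F1.
preds = [{(0, 4), (0, 1), (3, 4)}, {(0, 2), (1, 2)}]
golds = [{(0, 4), (0, 1), (2, 4)}, {(0, 2), (1, 2)}]
lens = [5, 3]
scores = [sentence_f1(p, g, n) for p, g, n in zip(preds, golds, lens)]
print(sum(scores) / len(scores))  # (0.5 + 1.0) / 2 = 0.75
```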
Given the above, we mainly compare our approach against two recent, strong baselines with open source code: the Parsing-Reading-Predict Network (PRPN)[11] Shen et al. (2018) and Ordered Neurons (ON)[12] Shen et al. (2019). These approaches train a neural language model with gated attention-like mechanisms to induce binary trees, and achieve strong unsupervised parsing performance even when trained on corpora where punctuation is removed. Since the original results were on both language modeling and grammar induction, their hyperparameters were presumably tuned to do well on both and thus may not be optimal for unsupervised parsing alone. We therefore tune the hyperparameters of these baselines for unsupervised parsing only (i.e. on validation F1).

[11] https://github.com/yikangshen/PRPN
[12] https://github.com/yikangshen/Ordered-Neurons
Table 1: Unlabeled sentence-level F1 on PTB and CTB. Deterministic baselines have a single value per treebank; blank cells were not reported.

                                  PTB            CTB
Model                          Mean   Max     Mean   Max
PRPN Shen et al. (2018)        37.4   38.1      -      -
ON Shen et al. (2019)          47.7   49.4      -      -
URNNG Kim et al. (2019)          -    45.4      -      -
DIORA Drozdov et al. (2019)      -    58.9      -      -
Left Branching                  8.7    8.7     9.7    9.7
Right Branching                39.5   39.5    20.0   20.0
Random Trees                   19.2   19.5    15.7   16.0
PRPN (tuned)                   47.3   47.9    30.4   31.5
ON (tuned)                     48.1   50.0    25.4   25.7
Scalar PCFG                    15.0     -       -      -
Neural PCFG                    50.8   52.6    25.7   29.5
Compound PCFG                  55.2   60.1    36.0   39.8
Oracle Trees                   84.3   84.3    81.1   81.1
5 Results and Discussion
Table 1 shows the unlabeled F1 scores for our models and various baselines. All models soundly outperform the right-branching baseline, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular, the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search.[13] See section A.2 for the full results (including corpus-level F1) broken down by sentence length.

[13] Training perplexity was much higher than in the neural case, indicating significant optimization issues. However, we did not experiment with online EM Liang and Klein (2009), and it is possible that such methods would yield better results.
Table 2 analyzes the learned tree structures. We compare similarity as measured by F1 against gold, left-branching, right-branching, and "self" trees (top), where self F1 is calculated by averaging over all 6 pairs obtained from 4 different runs. We find that PRPN is particularly consistent across multiple runs. We also observe that different models are better at identifying different constituent labels, as measured by label recall (Table 2, bottom). While left as future work, this naturally suggests an ensemble approach wherein the empirical probabilities of constituents (obtained by averaging the predicted binary constituent labels from the different models) are used either to supervise another model or directly as potentials in a CRF constituency parser. Finally, all models seem to have some difficulty in identifying SBAR/VP constituents, which typically span more words than NP constituents.
Induced Trees for Downstream Tasks
While the compound PCFG has fewer independence assumptions than the neural PCFG, it is still a more constrained model of language than standard neural language models (NLMs) and thus not competitive in terms of perplexity: the compound PCFG obtains a perplexity of 196.3, while an LSTM language model (LM) obtains 86.2 (Table 3).[14] In contrast, both PRPN and ON perform as well as an LSTM LM while maintaining good unsupervised parsing performance.

[14] We did manage to almost match the perplexity of an NLM by additionally conditioning the terminal probabilities on previous history, i.e. on the hidden state from an LSTM over the preceding words. However, the unsupervised parsing performance was far worse (around 25 F1 on the PTB).
Table 2: (Top) F1 against gold, left-branching, right-branching, and self trees. (Bottom) Label recall.

                 PRPN     ON    Neural PCFG   Compound PCFG
F1 vs. Gold      47.3   48.1       50.8           55.2
F1 vs. Left       1.5   14.1       11.8           13.0
F1 vs. Right     39.9   31.0       27.7           28.4
F1 vs. Self      82.3   71.3       65.2           66.8
SBAR recall     50.0%  51.2%      52.5%          56.1%
NP recall       59.2%  64.5%      71.2%          74.7%
VP recall       46.7%  41.0%      33.8%          41.7%
PP recall       57.2%  54.4%      58.8%          68.8%
ADJP recall     44.3%  38.1%      32.5%          40.4%
ADVP recall     32.8%  31.6%      45.5%          52.5%
We thus experiment to see if it is possible to use the induced trees to supervise a more flexible generative model that can make use of tree structures, namely recurrent neural network grammars (RNNG) Dyer et al. (2016). RNNGs are generative models of language that jointly model syntax and surface structure by incrementally generating a syntax tree and sentence. As with NLMs, RNNGs make no independence assumptions, and have been shown to outperform NLMs in terms of perplexity and grammaticality judgment when trained on gold trees Kuncoro et al. (2018); Wilcox et al. (2019). We take the best run from each model, parse the training set (the train/test F1 was similar for all models), and use the induced trees to supervise an RNNG for each model using the parameterization from Kim et al. (2019) (https://github.com/harvardnlp/urnng). We are also interested in syntactic evaluation of our models, and for this we utilize the framework and dataset from Marvin and Linzen (2018), where a model is presented with two minimally different sentences such as:
the senators near the assistant are old  
*the senators near the assistant is old 
and must assign higher probability to the grammatical sentence.
Table 3: Perplexity, syntactic evaluation accuracy, and unlabeled F1.

Model                     PPL    Syntactic Eval.    F1
LSTM LM                  86.2        60.9%           -
PRPN                     87.1        62.2%         47.9
  Induced RNNG           95.3        60.1%         47.8
  Induced URNNG          90.1        61.8%         51.6
ON                       87.2        61.6%         50.0
  Induced RNNG           95.2        61.7%         50.6
  Induced URNNG          89.9        61.9%         55.1
Neural PCFG             252.6        49.2%         52.6
  Induced RNNG           95.8        68.1%         51.4
  Induced URNNG          86.0        69.1%         58.7
Compound PCFG           196.3        50.7%         60.1
  Induced RNNG           89.8        70.0%         58.1
  Induced URNNG          83.7        76.1%         66.9
RNNG on Oracle Trees     80.6        70.4%         71.9
  + URNNG Fine-tuning    78.3        76.1%         72.8
Additionally, Kim et al. (2019) report perplexity improvements from fine-tuning an RNNG trained on gold trees with the unsupervised RNNG (URNNG): whereas the RNNG is trained to maximize the joint log likelihood $\log p(x, t)$, the URNNG maximizes a lower bound on the log marginal likelihood $\log p(x)$ with a structured inference network that approximates the true posterior over trees. We experiment with a similar approach where we fine-tune RNNGs trained on induced trees with URNNGs. We perform early stopping for both RNNG and URNNG based on validation perplexity. See section A.3 for the full experimental setup.
The results are shown in Table 3. For perplexity, RNNGs trained on induced trees (Induced RNNG in Table 3) are unable to improve upon an LSTM LM, in contrast to the supervised RNNG, which does outperform the LSTM language model (Table 3, bottom). For grammaticality judgment, however, the RNNG trained with compound PCFG trees outperforms the LSTM LM despite obtaining worse perplexity,[17] and performs on par with the RNNG trained on binarized gold trees. Fine-tuning with the URNNG results in improvements in perplexity and grammaticality judgment across the board (Induced URNNG in Table 3). We also obtain large improvements on unsupervised parsing as measured by F1, with the fine-tuned URNNGs outperforming the respective original models.[18] This is potentially due to an ensembling effect between the original model and the URNNG's structured inference network, which is parameterized as a neural CRF constituency parser Durrett and Klein (2015); Liu et al. (2018).[19] Also note that the F1 scores for the URNNGs in Table 3 are optimistic, since we selected the best-performing runs of the original models based on validation F1 to parse the training set. Finally, as noted by Kim et al. (2019), a URNNG trained from scratch fails to outperform a right-branching baseline on this version of PTB, where punctuation is removed.

[17] Kuncoro et al. (2018, 2019) also observe that models that achieve lower perplexity do not necessarily perform better on syntactic evaluation tasks.
[18] Li et al. (2019) similarly obtain improvements by refining a model trained on induced trees on classification tasks.
[19] While left as future work, it is possible to use the compound PCFG itself as an inference network.
Table 4: Nearest neighbors of sentences based on the mean of the variational posterior.

he retired as senior vice president finance and administration and chief financial officer of the company oct. N 
kenneth j. unk who was named president of this thrift holding company in august resigned citing personal reasons 
the former president and chief executive eric w. unk resigned in june 
unk ’s president and chief executive officer john unk said the loss stems from several factors 
mr. unk is executive vice president and chief financial officer of unk and will continue in those roles 
charles j. lawson jr. N who had been acting chief executive since june N will continue as chairman 
unk corp. received an N million army contract for helicopter engines 
boeing co. received a N million air force contract for developing cable systems for the unk missile 
general dynamics corp. received a N million air force contract for unk training sets 
grumman corp. received an N million navy contract to upgrade aircraft electronics 
thomson missile products with about half british aerospace ’s annual revenue include the unk unk missile family 
already british aerospace and french unk unk unk on a british missile contract and on an airtraffic control radar system 
meanwhile during the the s&p trading halt s&p futures sell orders began unk up while stocks in new york kept falling sharply 
but the unk of s&p futures sell orders weighed on the market and the link with stocks began to fray again 
on friday some market makers were selling again traders said 
futures traders say the s&p was unk that the dow could fall as much as N points 
meanwhile two initial public offerings unk the unk market in their unk day of national overthecounter trading friday 
traders said most of their major institutional investors on the other hand sat tight 
Model Analysis
We analyze our best compound PCFG model in more detail. Since we induce a full set of nonterminals in our grammar, we can analyze the learned nonterminals to see if they can be aligned with linguistic constituent labels. Figure 2 visualizes the alignment between induced and gold labels, where for each nonterminal we show the empirical probability that a predicted constituent of this type will correspond to a particular linguistic constituent in the test set, conditioned on its being a correct constituent (for reference we also show the precision). We observe that some of the induced nonterminals clearly align to linguistic nonterminals. Further results, including preterminal alignments to part-of-speech tags,[20] are shown in section A.4.

[20] As a POS induction system, the many-to-one performance of the compound PCFG using the preterminals is 68.0. A similarly-parameterized compound HMM with 60 hidden states (an HMM is a particular type of PCFG) obtains 63.2. This is still quite a bit lower than the state-of-the-art Tran et al. (2016); He et al. (2018); Stratos (2019), though comparison is confounded by various factors such as preprocessing (e.g. we drop punctuation). A neural PCFG/HMM obtains 68.2 and 63.4 respectively.
We next analyze the continuous latent space. Table 4 shows nearest neighbors of some sentences using the mean of the variational posterior as the continuous representation of each sentence. We qualitatively observe that the latent space seems to capture topical information. We are also interested in the variation in the leaves due to $\mathbf{z}$ when the variation due to the tree structure is held constant. To investigate this, we use the parsed dataset to obtain pairs of the form $(\boldsymbol{\mu}_\phi(x^{(n)}), t_j^{(n)})$, where $t_j^{(n)}$ is the $j$-th subtree of the (approximate) MAP tree $t^{(n)}$ for the $n$-th sentence. Therefore each mean vector is associated with multiple subtrees. Our definition of subtree here ignores terminals, and thus each subtree is associated with many mean vectors. For a frequently occurring subtree, we perform PCA on the set of mean vectors that are associated with the subtree to obtain the top principal component. We then show the constituents that had the 5 most positive/negative values for this top principal component in Table 5. For example, a particularly common subtree, associated with 180 unique constituents, is given by
(NT04 (T13 ) (NT12 (NT20 (NT20 (NT07 (T05 )  
The top 5 constituents with the most negative/positive values are shown in the top left part of Table 5. We find that the leaves, which form a 6-word constituent, vary in a regular manner as $\mathbf{z}$ is varied. We also observe that the root of this subtree (NT04) aligns to prepositional phrases (PP) in Figure 2, and the leaves in Table 5 (top left) are indeed mostly PP. However, the model fails to identify ((T40 ) (T22 )) as a constituent in this case (as well as in the bottom right example). See appendix A.5 for more examples. It is possible that the model is utilizing the subtrees to capture broad template-like structures and then using $\mathbf{z}$ to fill them in, similar to recent works that also train models to separate "what to say" from "how to say it" Wiseman et al. (2018); Peng et al. (2019); Chen et al. (2019a, b).
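The per-subtree PCA step can be sketched as follows, assuming a simple power-iteration routine in place of a full PCA implementation and synthetic stand-ins for the mean vectors.

```python
import random

random.seed(0)

def top_component(vectors, iters=200):
    """Projections of each vector onto the top principal component of the
    mean-centered data, found by power iteration (sign is arbitrary)."""
    d = len(vectors[0])
    mean = [sum(v[k] for v in vectors) / len(vectors) for k in range(d)]
    centered = [[v[k] - mean[k] for k in range(d)] for v in vectors]
    u = [random.gauss(0, 1) for _ in range(d)]
    for _ in range(iters):
        # Apply the covariance implicitly: C u ∝ X^T (X u)
        proj = [sum(c[k] * u[k] for k in range(d)) for c in centered]
        u = [sum(p * c[k] for p, c in zip(proj, centered)) for k in range(d)]
        norm = sum(x * x for x in u) ** 0.5
        u = [x / norm for x in u]
    return [(sum((v[k] - mean[k]) * u[k] for k in range(d)), i)
            for i, v in enumerate(vectors)]

# Synthetic stand-ins for the mean vectors mu_phi(x) associated with one
# frequent subtree; sorting by projection surfaces the constituents with
# the most negative/positive values at either end.
vecs = [[i + random.gauss(0, 0.1), 2.0 * i] for i in range(6)]
scored = sorted(top_component(vecs))
print([i for _, i in scored])  # monotone along the dominant axis (sign arbitrary)
```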
Limitations
We report on some negative results as well as important limitations of our work. While distributed representations promote parameter sharing, we were unable to obtain improvements through more factorized parameterizations that promote even greater parameter sharing. In particular, for rules of the type $A \to B\,C$, we tried having the output embeddings be a function of the input embeddings (e.g. $\mathbf{u}_{BC} = g([\mathbf{w}_B; \mathbf{w}_C])$, where $g$ is an MLP), but obtained worse results. For rules of the type $T \to w$, we tried using a character-level CNN dos Santos and Zadrozny (2014); Kim et al. (2016) to obtain the output word embeddings Jozefowicz et al. (2016); Tran et al. (2016), but found the performance to be similar to the word-level case.[21] We were also unable to obtain improvements through normalizing flows Rezende and Mohamed (2015); Kingma et al. (2016). However, given that we did not exhaustively explore the full space of possible parameterizations, the above modifications could eventually lead to improvements with the right setup.

[21] It is also possible to take advantage of pretrained word embeddings by using them to initialize the output word embeddings or by directly working with continuous emission distributions Lin et al. (2015); He et al. (2018).
Relatedly, the models were quite sensitive to parameterization (e.g. it was important to use residual layers for $f_1$ and $f_2$), grammar size, and optimization method. We also noticed some variance in results across random seeds, as shown in Table 2. Finally, despite vectorized GPU implementations, training was significantly more expensive (both in terms of time and memory) than NLM-based grammar induction systems due to the dynamic program, which makes our approach potentially difficult to scale.
6 Related Work
Grammar induction has a long and rich history in natural language processing. Early work on grammar induction with pure unsupervised learning was mostly negative Lari and Young (1990); Carroll and Charniak (1992); Charniak (1993), though Pereira and Schabes (1992) reported some success on partially bracketed data. Clark (2001) and Klein and Manning (2002) presented some of the first successful statistical approaches to grammar induction. In particular, the constituent-context model (CCM) of Klein and Manning (2002), which explicitly models both constituents and distituents, was the basis for much subsequent work Klein and Manning (2004); Huang et al. (2012); Golland et al. (2012). Other works have explored imposing inductive biases through Bayesian priors Johnson et al. (2007); Liang et al. (2007); Wang and Blunsom (2013), modified objectives Smith and Eisner (2004), and additional constraints on recursion depth Noji et al. (2016); Jin et al. (2018).
While the framework of specifying the structure of a grammar and learning the parameters is common, other methods exist. Bod (2006) considers a nonparametric-style approach to unsupervised parsing that uses random subsets of training subtrees to parse new sentences. Seginer (2007) utilizes an incremental algorithm for unsupervised parsing which makes local decisions to create constituents based on a complex set of heuristics. Ponvert et al. (2011) induce parse trees through cascaded applications of finite-state models.
More recently, neural network-based approaches to grammar induction have shown promising results on inducing parse trees directly from words. Shen et al. (2018, 2019) learn tree structures through soft gating layers within neural language models, while Drozdov et al. (2019) combine recursive autoencoders with the inside-outside algorithm. Kim et al. (2019) train unsupervised recurrent neural network grammars with a structured inference network to induce latent trees, and Shi et al. (2019) utilize image captions to identify and ground constituents.
Our work is also related to latent-variable PCFGs Matsuzaki et al. (2005); Petrov et al. (2006); Cohen et al. (2012), which extend PCFGs to the latent variable setting by splitting nonterminal symbols into latent subsymbols. In particular, latent vector grammars Zhao et al. (2018) and compositional vector grammars Socher et al. (2013) also employ continuous vectors within their grammars. However, these approaches have been employed to learn supervised parsers on annotated treebanks, in contrast to the unsupervised setting of the current work.
7 Conclusion
This work explores grammar induction with compound PCFGs, which modulate rule probabilities with per-sentence continuous latent vectors. The latent vector induces marginal dependencies beyond the traditional first-order context-free assumptions within a tree-based generative process, leading to improved performance. The collapsed amortized variational inference approach is general and can be used for generative models which admit tractable inference through partial conditioning. Learning deep generative models which exhibit such conditional Markov properties is an interesting direction for future work.
Acknowledgments
We thank Phil Blunsom for initial discussions which seeded many of the core ideas in the present work. We also thank Yonatan Belinkov and Shay Cohen for helpful feedback, and Andrew Drozdov for providing the parsed dataset from their DIORA model. YK is supported by a Google Fellowship. AMR acknowledges the support of NSF 1704834, 1845664, AWS, and Oracle.
References
 Aho (1968) Alfred Aho. 1968. Indexed Grammars—An Extension of Context-Free Grammars. Journal of the ACM, 15(4):647–671.
 Arora et al. (2018) Sanjeev Arora, Nadav Cohen, and Elad Hazan. 2018. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. In Proceedings of ICML.
 Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. In Proceedings of NIPS.
 Baker (1979) James K. Baker. 1979. Trainable Grammars for Speech Recognition. In Proceedings of the Spring Conference of the Acoustical Society of America.
 Bengio et al. (2009) Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum Learning. In Proceedings of ICML.
 Berg-Kirkpatrick et al. (2010) Taylor Berg-Kirkpatrick, Alexandre Bouchard-Côté, John DeNero, and Dan Klein. 2010. Painless Unsupervised Learning with Features. In Proceedings of NAACL.
 Berger (1985) James O. Berger. 1985. Statistical Decision Theory and Bayesian Analysis. Springer.
 Bod (2006) Rens Bod. 2006. An All-Subtrees Approach to Unsupervised Parsing. In Proceedings of ACL.
 Carroll and Charniak (1992) Glenn Carroll and Eugene Charniak. 1992. Two Experiments on Learning Probabilistic Dependency Grammars from Corpora. In AAAI Workshop on Statistically-Based NLP Techniques.
 Charniak (1993) Eugene Charniak. 1993. Statistical Language Learning. MIT Press.
 Chen and Manning (2014) Danqi Chen and Christopher D. Manning. 2014. A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of EMNLP.
 Chen et al. (2019a) Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019a. Controllable Paraphrase Generation with a Syntactic Exemplar. In Proceedings of ACL.
 Chen et al. (2019b) Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019b. A Multitask Approach for Disentangling Syntax and Semantics in Sentence Representations. In Proceedings of NAACL.
 Clark (2001) Alexander Clark. 2001. Unsupervised Induction of Stochastic Context-Free Grammars Using Distributional Clustering. In Proceedings of CoNLL.
 Cohen (2016) Shay B. Cohen. 2016. Bayesian Analysis in Natural Language Processing. Morgan and Claypool.
 Cohen et al. (2009) Shay B. Cohen, Kevin Gimpel, and Noah A. Smith. 2009. Logistic Normal Priors for Unsupervised Probabilistic Grammar Induction. In Proceedings of NIPS.
 Cohen and Smith (2009) Shay B. Cohen and Noah A. Smith. 2009. Shared Logistic Normal Distributions for Soft Parameter Tying in Unsupervised Grammar Induction. In Proceedings of NAACL.
 Cohen et al. (2012) Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2012. Spectral Learning of Latent-Variable PCFGs. In Proceedings of ACL.
 Collins (1997) Michael Collins. 1997. Three Generative, Lexicalised Models for Statistical Parsing. In Proceedings of ACL.
 Dempster et al. (1977) Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38.
 Drozdov et al. (2019) Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders. In Proceedings of NAACL.
 Du et al. (2019) Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. 2019. Gradient Descent Provably Optimizes Over-parameterized Neural Networks. In Proceedings of ICLR.
 Durrett and Klein (2015) Greg Durrett and Dan Klein. 2015. Neural CRF Parsing. In Proceedings of ACL.
 Dyer et al. (2016) Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent Neural Network Grammars. In Proceedings of NAACL.
 Eisner (2016) Jason Eisner. 2016. Inside-Outside and Forward-Backward Algorithms Are Just Backprop (Tutorial Paper). In Proceedings of the Workshop on Structured Prediction for NLP.
 Golland et al. (2012) Dave Golland, John DeNero, and Jakob Uszkoreit. 2012. A Feature-Rich Constituent Context Model for Grammar Induction. In Proceedings of ACL.
 He et al. (2018) Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Unsupervised Learning of Syntactic Structure with Invertible Neural Projections. In Proceedings of EMNLP.
 Htut et al. (2018) Phu Mon Htut, Kyunghyun Cho, and Samuel R. Bowman. 2018. Grammar Induction with Neural Language Models: An Unusual Replication. In Proceedings of EMNLP.
 Huang et al. (2012) Yun Huang, Min Zhang, and Chew Lim Tan. 2012. Improved Constituent Context Model with Features. In Proceedings of PACLIC.
 Jin et al. (2018) Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018. Unsupervised Grammar Induction with Depth-bounded PCFG. In Proceedings of TACL.
 Johnson (1998) Mark Johnson. 1998. PCFG Models of Linguistic Tree Representations. Computational Linguistics, 24:613–632.
 Johnson et al. (2007) Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Bayesian Inference for PCFGs via Markov chain Monte Carlo. In Proceedings of NAACL.
 Jozefowicz et al. (2016) Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv:1602.02410.
 Kim et al. (2016) Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. CharacterAware Neural Language Models. In Proceedings of AAAI.
 Kim et al. (2019) Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. 2019. Unsupervised Recurrent Neural Network Grammars. In Proceedings of NAACL.
 Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of ICLR.
 Kingma et al. (2016) Diederik P. Kingma, Tim Salimans, and Max Welling. 2016. Improving Variational Inference with Inverse Autoregressive Flow. arXiv:1606.04934.
 Kingma and Welling (2014) Diederik P. Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. In Proceedings of ICLR.
 Kitaev and Klein (2018) Nikita Kitaev and Dan Klein. 2018. Constituency Parsing with a Self-Attentive Encoder. In Proceedings of ACL.
 Klein and Manning (2002) Dan Klein and Christopher Manning. 2002. A Generative Constituent-Context Model for Improved Grammar Induction. In Proceedings of ACL.
 Klein and Manning (2004) Dan Klein and Christopher Manning. 2004. Corpus-based Induction of Syntactic Structure: Models of Dependency and Constituency. In Proceedings of ACL.
 Klein and Manning (2003) Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of ACL.
 Kuncoro et al. (2018) Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better. In Proceedings of ACL.
 Kuncoro et al. (2019) Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable Syntax-Aware Language Models Using Knowledge Distillation. In Proceedings of ACL.
 Kurihara and Sato (2006) Kenichi Kurihara and Taisuke Sato. 2006. Variational Bayesian Grammar Induction for Natural Language. In Proceedings of International Colloquium on Grammatical Inference.
 Lari and Young (1990) Karim Lari and Steve Young. 1990. The Estimation of Stochastic Context-Free Grammars Using the Inside-Outside Algorithm. Computer Speech and Language, 4:35–56.
 Li et al. (2019) Bowen Li, Lili Mou, and Frank Keller. 2019. An Imitation Learning Approach to Unsupervised Parsing. In Proceedings of ACL.
 Liang and Klein (2009) Percy Liang and Dan Klein. 2009. Online EM for Unsupervised Models. In Proceedings of NAACL.
 Liang et al. (2007) Percy Liang, Slav Petrov, Michael I. Jordan, and Dan Klein. 2007. The Infinite PCFG using Hierarchical Dirichlet Processes. In Proceedings of EMNLP.
 Lin et al. (2015) Chu-Cheng Lin, Waleed Ammar, Chris Dyer, and Lori Levin. 2015. Unsupervised POS Induction with Word Embeddings. In Proceedings of NAACL.
 Liu et al. (2018) Yang Liu, Matt Gardner, and Mirella Lapata. 2018. Structured Alignment Networks for Matching Sentences. In Proceedings of EMNLP.
 Marcus et al. (1993) Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330.
 Marvin and Linzen (2018) Rebecca Marvin and Tal Linzen. 2018. Targeted Syntactic Evaluation of Language Models. In Proceedings of EMNLP.
 Matsuzaki et al. (2005) Takuya Matsuzaki, Yusuke Miyao, and Junichi Tsujii. 2005. Probabilistic CFG with Latent Annotations. In Proceedings of ACL.
 Merialdo (1994) Bernard Merialdo. 1994. Tagging English Text with a Probabilistic Model. Computational Linguistics, 20(2):155–171.
 Noji et al. (2016) Hiroshi Noji, Yusuke Miyao, and Mark Johnson. 2016. Using Left-corner Parsing to Encode Universal Structural Constraints in Grammar Induction. In Proceedings of EMNLP.
 Parikh et al. (2014) Ankur P. Parikh, Shay B. Cohen, and Eric P. Xing. 2014. Spectral Unsupervised Parsing with Additive Tree Metrics. In Proceedings of ACL.
 Peng et al. (2019) Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text Generation with Exemplar-based Adaptive Decoding. In Proceedings of NAACL.
 Pereira and Schabes (1992) Fernando Pereira and Yves Schabes. 1992. Inside-Outside Reestimation from Partially Bracketed Corpora. In Proceedings of ACL.
 Petrov et al. (2006) Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of ACL.
 Ponvert et al. (2011) Elias Ponvert, Jason Baldridge, and Katrin Erk. 2011. Simple Unsupervised Grammar Induction from Raw Text with Cascaded Finite State Models. In Proceedings of ACL.
 Press and Wolf (2016) Ofir Press and Lior Wolf. 2016. Using the Output Embedding to Improve Language Models. In Proceedings of EACL.
 Reichart and Rappoport (2010) Roi Reichart and Ari Rappoport. 2010. Improved Fully Unsupervised Parsing with Zoomed Learning. In Proceedings of EMNLP.
 Rezende and Mohamed (2015) Danilo J. Rezende and Shakir Mohamed. 2015. Variational Inference with Normalizing Flows. In Proceedings of ICML.
 Rezende et al. (2014) Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of ICML.
 Robbins (1951) Herbert Robbins. 1951. Asymptotically Subminimax Solutions of Compound Statistical Decision Problems. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 131–149. Berkeley: University of California Press.
 Robbins (1956) Herbert Robbins. 1956. An Empirical Bayes Approach to Statistics. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, pages 157–163. Berkeley: University of California Press.
 Salakhutdinov et al. (2003) Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. 2003. Optimization with EM and Expectation-Conjugate-Gradient. In Proceedings of ICML.
 dos Santos and Zadrozny (2014) Cícero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning Character-level Representations for Part-of-Speech Tagging. In Proceedings of ICML.
 Seginer (2007) Yoav Seginer. 2007. Fast Unsupervised Incremental Parsing. In Proceedings of ACL.
 Shen et al. (2018) Yikang Shen, Zhouhan Lin, ChinWei Huang, and Aaron Courville. 2018. Neural Language Modeling by Jointly Learning Syntax and Lexicon. In Proceedings of ICLR.
 Shen et al. (2019) Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks. In Proceedings of ICLR.
 Shi et al. (2019) Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually Grounded Neural Syntax Acquisition. In Proceedings of ACL.
 Smith and Eisner (2004) Noah A. Smith and Jason Eisner. 2004. Annealing Techniques for Unsupervised Statistical Language Learning. In Proceedings of ACL.
 Snyder et al. (2009) Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised Multilingual Grammar Induction. In Proceedings of ACL.
 Socher et al. (2013) Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with Compositional Vector Grammars. In Proceedings of ACL.
 Spitkovsky et al. (2012) Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2012. Three Dependency-and-Boundary Models for Grammar Induction. In Proceedings of EMNLP-CoNLL.
 Spitkovsky et al. (2013) Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking Out of Local Optima with Count Transforms and Model Recombination: A Study in Grammar Induction. In Proceedings of EMNLP.
 Stern et al. (2017) Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A Minimal Span-Based Neural Constituency Parser. In Proceedings of ACL.
 Stratos (2019) Karl Stratos. 2019. Mutual Information Maximization for Simple and Accurate Part-of-Speech Induction. In Proceedings of NAACL.
 Tai et al. (2015) Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of ACL.
 Tran et al. (2016) Ke Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, and Kevin Knight. 2016. Unsupervised Neural Hidden Markov Models. In Proceedings of the Workshop on Structured Prediction for NLP.
 Wang and Blunsom (2013) Pengyu Wang and Phil Blunsom. 2013. Collapsed Variational Bayesian Inference for PCFGs. In Proceedings of CoNLL.
 Wang and Chang (2016) Wenhui Wang and Baobao Chang. 2016. Graph-based Dependency Parsing with Bidirectional LSTM. In Proceedings of ACL.
 Wilcox et al. (2019) Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019. Structural Supervision Improves Learning of Non-Local Grammatical Dependencies. In Proceedings of NAACL.
 Wiseman et al. (2018) Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning Neural Templates for Text Generation. In Proceedings of EMNLP.
 Xu et al. (2018) Ji Xu, Daniel Hsu, and Arian Maleki. 2018. Benefits of Over-Parameterization with EM. In Proceedings of NeurIPS.
 Xue et al. (2005) Naiwen Xue, Fei Xia, Fu-dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase Structure Annotation of a Large Corpus. Natural Language Engineering, 11:207–238.
 Zhang (2003) CunHui Zhang. 2003. Compound Decision Theory and Empirical Bayes Methods. The Annals of Statistics, 31:379–390.
 Zhao et al. (2018) Yanpeng Zhao, Liwen Zhang, and Kewei Tu. 2018. Gaussian Mixture Latent Vector Grammars. In Proceedings of ACL.
 Zhu et al. (2015) Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long Short-Term Memory Over Tree Structures. In Proceedings of ICML.
Appendix A Appendix
A.1 Model Parameterization
Neural PCFG
We associate an input embedding $\mathbf{w}_N$ with each symbol $N$ on the left side of a rule (i.e. $N \in \{S\} \cup \mathcal{N} \cup \mathcal{P}$) and run a neural network over $\mathbf{w}_N$ to obtain the rule probabilities. Concretely, each rule type $\pi_r$ is parameterized as follows,

$\pi_{S \rightarrow A} \propto \exp(\mathbf{u}_A^\top f_1(\mathbf{w}_S))$,  $\pi_{A \rightarrow B C} \propto \exp(\mathbf{u}_{BC}^\top \mathbf{w}_A)$,  $\pi_{T \rightarrow w} \propto \exp(\mathbf{u}_w^\top f_2(\mathbf{w}_T))$,

where $BC$ ranges over the product space $(\mathcal{N} \cup \mathcal{P}) \times (\mathcal{N} \cup \mathcal{P})$, and $f_1, f_2$ are MLPs with two residual layers,

$f_i(\mathbf{x}) = g_{i,1}(g_{i,2}(\mathbf{W}_i \mathbf{x}))$,  $g_{i,j}(\mathbf{y}) = \mathrm{ReLU}(\mathbf{V}_{i,j}\,\mathrm{ReLU}(\mathbf{U}_{i,j}\,\mathbf{y})) + \mathbf{y}$.

The bias terms for the above expressions (including for the rule probabilities) are omitted for notational brevity. In Figure 1 we use $\boldsymbol{\pi}_N = \{\pi_r : r \in \mathcal{R}_N\}$ to refer to the rule probabilities of the different rule types, where $\mathcal{R}_N$ denotes the set of rules with $N$ on the left hand side.
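The parameterization above can be sketched concretely. The following is a minimal numpy toy with random weights and illustrative sizes (the actual model is a trained implementation; names like `ResidualMLP` are ours), showing how each rule distribution arises as a softmax of dot products between output embeddings and (transformed) input embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ResidualMLP:
    """Toy MLP with two residual layers g(y) = ReLU(V ReLU(U y)) + y."""
    def __init__(self, d):
        self.W = rng.normal(scale=0.1, size=(d, d))
        self.layers = [(rng.normal(scale=0.1, size=(d, d)),
                        rng.normal(scale=0.1, size=(d, d))) for _ in range(2)]

    def __call__(self, x):
        y = self.W @ x
        for U, V in self.layers:
            y = relu(V @ relu(U @ y)) + y
        return y

d, n_nt, n_pt = 16, 5, 8  # toy sizes: embedding dim, nonterminals, preterminals
w_S = rng.normal(size=d)                         # root input embedding
w_A = rng.normal(size=(n_nt, d))                 # nonterminal input embeddings
u_A = rng.normal(size=(n_nt, d))                 # output embeddings for S -> A
u_BC = rng.normal(size=((n_nt + n_pt) ** 2, d))  # output embeddings for A -> B C

f1 = ResidualMLP(d)

pi_root = softmax(u_A @ f1(w_S))    # pi_{S -> A}: distribution over A
pi_binary = softmax(u_BC @ w_A[0])  # pi_{A -> B C}: distribution over (B, C)
```

Each softmax normalizes over all rules with the same left-hand side, so every row of the grammar is a proper distribution.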
Compound PCFG
The compound PCFG rule probabilities $\pi_{\mathbf{z},r}$ given a latent vector $\mathbf{z}$ are,

$\pi_{S \rightarrow A} \propto \exp(\mathbf{u}_A^\top f_1([\mathbf{w}_S; \mathbf{z}]))$,  $\pi_{A \rightarrow B C} \propto \exp(\mathbf{u}_{BC}^\top [\mathbf{w}_A; \mathbf{z}])$,  $\pi_{T \rightarrow w} \propto \exp(\mathbf{u}_w^\top f_2([\mathbf{w}_T; \mathbf{z}]))$.

Again the bias terms are omitted for brevity, and $f_1, f_2$ are as before, where the first layer's input dimensions are appropriately changed to account for concatenation with $\mathbf{z}$.
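A toy sketch of the compound parameterization for binary rules, showing that different draws of the latent vector z induce different rule distributions for the same nonterminal (numpy with random weights; sizes and names are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d, dz, n_pairs = 16, 4, 36
w_A = rng.normal(size=d)                   # nonterminal input embedding
u_BC = rng.normal(size=(n_pairs, d + dz))  # output embeddings sized for [w_A; z]

def rule_probs(z):
    # pi_{z, A -> B C} proportional to exp(u_BC^T [w_A; z])
    return softmax(u_BC @ np.concatenate([w_A, z]))

z1, z2 = rng.normal(size=dz), rng.normal(size=dz)
p1, p2 = rule_probs(z1), rule_probs(z2)  # distinct z give distinct distributions
```

Marginalizing over z therefore couples rule applications across the tree, which is exactly the dependency structure a vanilla PCFG cannot express.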
Sentence-level F1  
WSJ-10  WSJ-20  WSJ-30  WSJ-40  WSJ-Full  
Left Branching  17.4  12.9  9.9  8.6  8.7 
Right Branching  58.5  49.8  44.4  41.6  39.5 
Random Trees  31.8  25.2  21.5  19.7  19.2 
PRPN (tuned)  58.4  54.3  50.9  48.5  47.3 
ON (tuned)  63.9  57.5  53.2  50.5  48.1 
Neural PCFG  64.6  58.1  54.6  52.6  50.8 
Compound PCFG  70.5  63.4  58.9  56.6  55.2 
Oracle  82.1  84.1  84.2  84.3  84.3 
Corpus-level F1  
WSJ-10  WSJ-20  WSJ-30  WSJ-40  WSJ-Full  
Left Branching  16.5  11.7  8.5  7.2  6.0 
Right Branching  58.9  48.3  42.5  39.4  36.1 
Random Trees  31.9  23.9  20.0  18.1  16.4 
PRPN (tuned)  59.3  53.6  49.7  46.9  44.5 
ON (tuned)  64.7  56.3  51.5  48.3  45.6 
Neural PCFG  63.5  56.8  53.1  51.0  48.7 
Compound PCFG  70.6  62.0  57.1  54.6  52.4 
Oracle  83.5  85.2  84.9  84.9  84.7 
A.2 Corpus/Sentence F1 by Sentence Length
For completeness we show the corpus-level and sentence-level F1 broken down by sentence length in Table 6, averaged across 4 different runs of each model.
Label  S  SBAR  NP  VP  PP  ADJP  ADVP  Other  Freq.  Acc. 
NT01  0.0%  0.0%  81.8%  1.1%  0.0%  5.9%  0.0%  11.2%  2.9%  13.8% 
NT02  2.2%  0.9%  90.8%  1.7%  0.9%  0.0%  1.3%  2.2%  1.1%  44.0% 
NT03  1.0%  0.0%  2.3%  96.8%  0.0%  0.0%  0.0%  0.0%  1.8%  37.1% 
NT04  0.3%  2.2%  0.5%  2.0%  93.9%  0.2%  0.6%  0.3%  11.0%  64.9% 
NT05  0.2%  0.0%  36.4%  56.9%  0.0%  0.0%  0.2%  6.2%  3.1%  57.1% 
NT06  0.0%  0.0%  99.1%  0.0%  0.1%  0.0%  0.2%  0.6%  5.2%  89.0% 
NT07  0.0%  0.0%  99.7%  0.0%  0.3%  0.0%  0.0%  0.0%  1.3%  59.3% 
NT08  0.5%  2.2%  23.3%  35.6%  11.3%  23.6%  1.7%  1.7%  2.0%  44.3% 
NT09  6.3%  5.6%  40.2%  4.3%  32.6%  1.2%  7.0%  2.8%  2.6%  52.1% 
NT10  0.1%  0.1%  1.4%  58.8%  38.6%  0.0%  0.8%  0.1%  3.0%  50.5% 
NT11  0.9%  0.0%  96.5%  0.9%  0.9%  0.0%  0.0%  0.9%  1.1%  42.9% 
NT12  0.5%  0.2%  94.4%  2.4%  0.2%  0.1%  0.2%  2.0%  8.9%  74.9% 
NT13  1.6%  0.1%  0.2%  97.7%  0.2%  0.1%  0.1%  0.1%  6.2%  46.0% 
NT14  0.0%  0.0%  0.0%  98.6%  0.0%  0.0%  0.0%  1.4%  0.9%  54.1% 
NT15  0.0%  0.0%  99.7%  0.0%  0.3%  0.0%  0.0%  0.0%  2.0%  76.9% 
NT16  0.0%  0.0%  0.0%  100.0%  0.0%  0.0%  0.0%  0.0%  0.3%  29.9% 
NT17  96.4%  2.9%  0.0%  0.7%  0.0%  0.0%  0.0%  0.0%  1.2%  24.4% 
NT18  0.3%  0.0%  88.7%  2.8%  0.3%  0.0%  0.0%  7.9%  3.0%  28.3% 
NT19  3.9%  1.0%  86.6%  2.4%  2.6%  0.4%  1.3%  1.8%  4.5%  53.4% 
NT20  0.0%  0.0%  99.0%  0.0%  0.0%  0.3%  0.2%  0.5%  7.4%  17.5% 
NT21  94.4%  1.7%  2.0%  1.4%  0.3%  0.1%  0.0%  0.1%  6.2%  34.7% 
NT22  0.1%  0.0%  98.4%  1.1%  0.1%  0.0%  0.2%  0.2%  3.5%  77.6% 
NT23  0.4%  0.9%  14.0%  53.1%  8.2%  18.5%  4.3%  0.7%  2.4%  49.1% 
NT24  0.0%  0.2%  1.5%  98.3%  0.0%  0.0%  0.0%  0.0%  2.3%  47.3% 
NT25  0.3%  0.0%  1.4%  98.3%  0.0%  0.0%  0.0%  0.0%  2.2%  34.6% 
NT26  0.4%  60.7%  18.4%  3.0%  15.4%  0.4%  0.4%  1.3%  2.1%  23.4% 
NT27  0.0%  0.0%  48.7%  0.5%  0.7%  13.1%  3.2%  33.8%  2.0%  59.7% 
NT28  88.2%  0.3%  3.8%  0.9%  0.1%  0.0%  0.0%  6.9%  6.7%  76.5% 
NT29  0.0%  1.7%  95.8%  1.0%  0.7%  0.0%  0.0%  0.7%  1.0%  62.8% 
NT30  1.6%  94.5%  0.6%  1.2%  1.2%  0.0%  0.4%  0.4%  2.1%  49.4% 
NT01  0.0%  0.0%  0.0%  99.2%  0.0%  0.0%  0.0%  0.8%  2.6%  41.1% 
NT02  0.0%  0.3%  0.3%  99.2%  0.0%  0.0%  0.0%  0.3%  5.3%  15.4% 
NT03  88.2%  0.3%  3.6%  1.0%  0.1%  0.0%  0.0%  6.9%  7.2%  71.4% 
NT04  0.0%  0.0%  100.0%  0.0%  0.0%  0.0%  0.0%  0.0%  0.5%  2.4% 
NT05  0.0%  0.0%  0.0%  96.6%  0.0%  0.0%  0.0%  3.4%  5.0%  1.2% 
NT06  0.0%  0.4%  0.4%  98.8%  0.0%  0.0%  0.0%  0.4%  1.2%  43.7% 
NT07  0.2%  0.0%  95.3%  0.9%  0.0%  1.6%  0.1%  1.9%  2.8%  60.6% 
NT08  1.0%  0.4%  95.3%  2.3%  0.4%  0.2%  0.3%  0.2%  9.4%  63.0% 
NT09  0.6%  0.0%  87.4%  1.9%  0.0%  0.0%  0.0%  10.1%  1.0%  33.8% 
NT10  78.3%  17.9%  3.0%  0.5%  0.0%  0.0%  0.0%  0.3%  1.9%  42.0% 
NT11  0.3%  0.0%  99.0%  0.3%  0.0%  0.3%  0.0%  0.0%  0.9%  70.3% 
NT12  0.0%  8.8%  76.5%  2.9%  5.9%  0.0%  0.0%  5.9%  2.0%  3.6% 
NT13  0.5%  2.0%  1.0%  96.6%  0.0%  0.0%  0.0%  0.0%  1.7%  50.7% 
NT14  0.0%  0.0%  99.1%  0.0%  0.0%  0.6%  0.0%  0.4%  7.7%  14.8% 
NT15  2.9%  0.5%  0.4%  95.5%  0.4%  0.0%  0.0%  0.2%  4.4%  45.2% 
NT16  0.4%  0.4%  17.9%  5.6%  64.1%  0.4%  6.8%  4.4%  1.4%  38.1% 
NT17  0.1%  0.0%  98.2%  0.5%  0.1%  0.1%  0.1%  0.9%  9.6%  85.4% 
NT18  0.1%  0.0%  95.7%  1.6%  0.0%  0.1%  0.2%  2.3%  4.7%  56.2% 
NT19  0.0%  0.0%  98.9%  0.0%  0.4%  0.0%  0.0%  0.7%  1.3%  72.6% 
NT20  2.0%  22.7%  3.0%  4.8%  63.9%  0.6%  2.3%  0.6%  6.8%  59.0% 
NT21  0.0%  0.0%  14.3%  42.9%  0.0%  0.0%  42.9%  0.0%  2.2%  0.7% 
NT22  1.4%  0.0%  11.0%  86.3%  0.0%  0.0%  0.0%  1.4%  1.0%  15.2% 
NT23  0.1%  0.0%  58.3%  0.8%  0.4%  5.0%  1.7%  33.7%  2.8%  62.7% 
NT24  0.0%  0.0%  100.0%  0.0%  0.0%  0.0%  0.0%  0.0%  0.6%  70.2% 
NT25  2.2%  0.0%  76.1%  4.3%  0.0%  2.2%  0.0%  15.2%  0.4%  23.5% 
NT26  0.0%  0.0%  2.3%  94.2%  3.5%  0.0%  0.0%  0.0%  0.8%  24.0% 
NT27  96.6%  0.2%  1.5%  1.1%  0.3%  0.2%  0.0%  0.2%  4.3%  32.2% 
NT28  1.2%  3.7%  1.5%  5.8%  85.7%  0.9%  0.9%  0.3%  7.6%  64.9% 
NT29  3.0%  82.0%  1.5%  13.5%  0.0%  0.0%  0.0%  0.0%  0.6%  45.4% 
NT30  0.0%  0.0%  1.0%  60.2%  19.4%  1.9%  4.9%  12.6%  2.1%  10.4% 
Gold  15.0%  4.8%  38.5%  21.7%  14.6%  1.7%  0.8%  2.9% 
A.3 Experiments with RNNGs
For experiments on supervising RNNGs with induced trees, we use the parameterization and hyperparameters from Kim et al. (2019), which uses a 2-layer 650-dimensional stack LSTM (with dropout of 0.5) and a 650-dimensional tree LSTM Tai et al. (2015); Zhu et al. (2015) as the composition function.
Concretely, the generative story is as follows: first, the stack representation is used to predict the next action (shift or reduce) via an affine transformation followed by a sigmoid. If shift is chosen, we obtain a distribution over the vocabulary via another affine transformation over the stack representation followed by a softmax. Then we sample the next word from this distribution and shift the generated word onto the stack using the stack LSTM. If reduce is chosen, we pop the last two elements off the stack and use the tree LSTM to obtain a new representation. This new representation is shifted onto the stack via the stack LSTM. Note that this RNNG parameterization is slightly different than the original from Dyer et al. (2016), which does not ignore constituent labels and utilizes a bidirectional LSTM as the composition function instead of a tree LSTM. As our RNNG parameterization only works with binary trees, we binarize the gold trees with right binarization for the RNNG trained on gold trees (trees from the unsupervised methods explored in this paper are already binary). The RNNG also trains a discriminative parser alongside the generative model for evaluation with importance sampling. We use a CRF parser whose span score parameterization is similar to recent works Wang and Chang (2016); Stern et al. (2017); Kitaev and Klein (2018): position embeddings are added to word embeddings, and a bidirectional LSTM with 256 hidden dimensions is run over the input representations to obtain the forward and backward hidden states. The score $s_{ij}$ for a constituent spanning the $i$-th and $j$-th word is given by,

$s_{ij} = \mathrm{MLP}([\overrightarrow{\mathbf{h}}_{j+1} - \overrightarrow{\mathbf{h}}_i ; \overleftarrow{\mathbf{h}}_i - \overleftarrow{\mathbf{h}}_{j+1}])$,

where the MLP has a single hidden layer with ReLU nonlinearity followed by layer normalization Ba et al. (2016).
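The span score can be sketched as follows. This is a numpy toy that substitutes random vectors for the BiLSTM hidden states; shapes and names (`span_rep`, `W1`, `w2`) are illustrative assumptions, not the trained parser:

```python
import numpy as np

rng = np.random.default_rng(2)
T, h = 6, 8  # toy sentence length and hidden size

# stand-ins for BiLSTM forward/backward hidden states h_0 .. h_T
fwd = rng.normal(size=(T + 1, h))
bwd = rng.normal(size=(T + 1, h))

def span_rep(i, j):
    # subtractive span features built from the boundary hidden states
    return np.concatenate([fwd[j + 1] - fwd[i], bwd[i] - bwd[j + 1]])

W1 = rng.normal(size=(h, 2 * h))  # single hidden layer of the MLP
w2 = rng.normal(size=h)           # final linear scoring layer

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def score(i, j):
    hidden = layer_norm(np.maximum(W1 @ span_rep(i, j), 0.0))  # ReLU + LN
    return float(w2 @ hidden)

s = score(1, 4)  # scalar score for the constituent spanning words 1..4
```

The subtractive features mean each span score only needs the four boundary hidden states, so all O(T^2) spans can be scored cheaply from one BiLSTM pass.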
For experiments on fine-tuning the RNNG with the unsupervised RNNG, we take the discriminative parser (which is also pretrained alongside the RNNG on induced trees) to be the structured inference network for optimizing the evidence lower bound. We refer the reader to Kim et al. (2019) and their open source implementation (https://github.com/harvardnlp/urnng) for additional details. We also observe that, as noted by Kim et al. (2019), a URNNG trained from scratch on this version of PTB without punctuation failed to outperform a right-branching baseline.
The LSTM language model baseline is the same size as the stack LSTM (i.e. 2 layers, 650 hidden units, dropout of 0.5), and is therefore equivalent to an RNNG with completely right-branching trees. The PRPN/ON baselines for perplexity/syntactic evaluation in Table 3 also have 2 layers with 650 hidden units and 0.5 dropout. Therefore all models considered in Table 3 have roughly the same capacity. For all models we share input/output word embeddings Press and Wolf (2016). Perplexity estimation for the RNNGs and the compound PCFG uses 1000 importance-weighted samples.
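The importance-weighted estimate of the log marginal likelihood underlying these perplexity numbers can be sketched as follows (a generic IWAE-style estimator; the function name and toy numbers are ours):

```python
import numpy as np

def iw_log_marginal(log_joint, log_q):
    """Estimate log p(x) from K samples t_k ~ q(t|x):
    log p(x) ~= logsumexp_k(log p(x, t_k) - log q(t_k|x)) - log K."""
    log_w = np.asarray(log_joint, dtype=float) - np.asarray(log_q, dtype=float)
    m = log_w.max()  # subtract the max for numerical stability
    return float(m + np.log(np.exp(log_w - m).sum()) - np.log(log_w.size))

# sanity check: if q is the exact posterior, every importance weight equals
# p(x), so the estimate is exact even with K = 2 samples
lp = iw_log_marginal([-10.0, -10.0], [-2.0, -2.0])  # both log weights are -8
# token-level perplexity would then be exp(-lp / num_tokens)
```

The estimator is a lower bound in expectation that tightens as the number of samples K grows, which is why a large K (e.g. 1000) is used for evaluation.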
For grammaticality judgment, we modify the publicly available dataset from Marvin and Linzen (2018) (https://github.com/BeckyMarvin/LM_syneval) to only keep sentence pairs that do not have any unknown words with respect to our PTB vocabulary of 10K words. This results in 33K sentence pairs for evaluation.
A.4 Nonterminal/Preterminal Alignments
A.5 Subtree Analysis
Table 8 lists more examples of constituents within each subtree as the top principal component is varied. Due to data sparsity, the subtree analysis is performed on the full dataset. See section 5 for more details.
(NT13 (T12 ) (NT25 (T39 ) (T58 )))  
would be irresponsible  has been growing 
could be delayed  ’ve been neglected 
can be held  had been made 
can be proven  had been canceled 
could be used  have been wary 
(NT04 (T13 ) (NT12 (T60 ) (NT18 (T60 ) (T21 ))))  
of federally subsidized loans  in fairly thin trading 
of criminal racketeering charges  in quiet expiration trading 
for individual retirement accounts  in big technology stocks 
without prior congressional approval  from small price discrepancies 
between the two concerns  by futuresrelated program buying 
(NT04 (T13 ) (NT12 (T05 ) (NT01 (T18 ) (T25 ))))  
by the supreme court  in a stockindex arbitrage 
of the bankruptcy code  as a hedging tool 
to the bankruptcy court  of the bond market 
in a foreign court  leaving the stock market 
for the supreme court  after the new york 
(NT12 (NT20 (NT20 (T05 ) (T40 )) (T40 )) (T22 ))  
a syrian troop pullout  the frankfurt stock exchange 
a conventional soviet attack  the late sell programs 
the housepassed capitalgains provision  a great buying opportunity 
the official creditors committee  the most active stocks 
a syrian troop withdrawal  a major brokerage firm 
(NT21 (NT22 (NT20 (T05 ) (T40 )) (T22 )) (NT13 (T30 ) (T58 )))  
the frankfurt market was mixed  the grammrudman targets are met 
the u.s. unit edged lower  a private meeting is scheduled 
a news release was prepared  the key assumption is valid 
the stock market closed wednesday  the budget scorekeeping is completed 
the stock market remains fragile  the tax bill is enacted 
(NT03 (T07 ) (NT19 (NT20 (NT20 (T05 ) (T40 )) (T40 )) (T22 )))  
have a high default risk  rejected a reagan administration plan 
have a lower default risk  approved a shortterm spending bill 
has a strong practical aspect  has an emergency relief program 
have a good strong credit  writes the hud spending bill 
have one big marketing edge  adopted the underlying transportation measure 
(NT13 (T12 ) (NT25 (T39 ) (NT23 (T58 ) (NT04 (T13 ) (T43 )))))  
has been operating in paris  will be used for expansion 
has been taken in colombia  might be room for flexibility 
has been vacant since july  may be built in britain 
have been dismal for years  will be supported by advertising 
has been improving since then  could be used as weapons 
(NT04 (T13 ) (NT12 (NT06 (NT20 (T05 ) (T40 )) (T22 )) (NT04 (T13 ) (NT12 (T18 ) (T53 )))))  
for a health center in south carolina  with an opposite trade in stockindex futures 
by a federal jury in new york  from the recent volatility in financial markets 
of the appeals court in new york  of another steep plunge in stock prices 
of the further thaw in u.s.soviet relations  over the past decade as pension funds 
of the service corps of retired executives  by a modest recovery in share prices 
(NT10 (T55 ) (NT05 (T02 ) (NT19 (NT06 (T05 ) (T41 )) (NT04 (T13 ) (NT12 (T60 ) (T21 ))))))  
to integrate the products into their operations  to defend the company in such proceedings 
to offset the problems at radio shack  to dismiss an indictment against her claiming 
to purchase one share of common stock  to death some N of his troops 
to tighten their hold on their business  to drop their inquiry into his activities 
to use the microprocessor in future products  to block the maneuver on procedural grounds 
(NT13 (T12 ) (NT25 (T39 ) (NT23 (T58 ) (NT04 (T13 ) (NT12 (NT20 (T05 ) (T40 )) (T22 ))))))  
has been mentioned as a takeover candidate  would be run by the joint chiefs 
has been stuck in a trading range  would be made into a separate bill 
had left announced to the trading mob  would be included in the final bill 
only become active during the closing minutes  would be costly given the financial arrangement 
will get settled in the short term  would be restricted by a new bill 
(NT10 (T55 ) (NT05 (T02 ) (NT19 (NT06 (T05 ) (T41 )) (NT04 (T13 ) (NT12 (T60 ) (NT18 (T18 ) (T53 )))))))  
to supply that country with other defense systems  to enjoy a loyalty among junk bond investors 
to transfer its skill at designing military equipment  to transfer their business to other clearing firms 
to improve the availability of quality legal service  to soften the blow of declining stock prices 
to unveil a family of highend personal computers  to keep a lid on shortterm interest rates 
to arrange an acceleration of planned tariff cuts  to urge the fed toward lower interest rates 
(NT21 (NT22 (T60 ) (NT18 (T60 ) (T21 ))) (NT13 (T07 ) (NT02 (NT27 (T47 ) (T50 )) (NT10 (T55 ) (NT05 (T47 ) (T50 ))))))  
unconsolidated pretax profit increased N % to N billion  amex short interest climbed N % to N shares 
its total revenue rose N % to N billion  its pretax profit rose N % to N million 
total operating revenue grew N % to N billion  its pretax profit rose N % to N billion 
its group sales rose N % to N billion  fiscal firsthalf sales slipped N % to N million 
total operating expenses increased N % to N billion  total operating expenses increased N % to N billion 