Halo: Learning Semantics-Aware Representations for Cross-Lingual Information Extraction

Hongyuan Mei*,  Sheng Zhang*,  Kevin Duh,  Benjamin Van Durme
Center for Language and Speech Processing, Johns Hopkins University
* Equal contribution

Cross-lingual information extraction (CLIE) is an important and challenging task, especially in low-resource scenarios. To tackle this challenge, we propose a training method, called Halo, which enforces the local region around each hidden state of a neural model to generate only target tokens with the same semantic structure tag. This simple but powerful technique enables a neural model to learn semantics-aware representations that are robust to noise, without introducing any extra parameters, thus yielding better generalization in both high- and low-resource settings.

1 Introduction

Cross-lingual information extraction (CLIE) is the task of distilling and representing factual information in a target language from textual input in a source language Sudo et al. (2004); Zhang et al. (2017c). For example, Fig. 1 illustrates an input Chinese sentence paired with its English predicate-argument information,¹ where predicate and argument are widely used semantic structure tags.

¹ The predicate-argument information is usually denoted by relation tuples. In this work, we adopt the tree-structured representation generated by PredPatt White et al. (2016), a lightweight tool available at https://github.com/hltcoe/PredPatt.

Solving this task is important because it offers a viable way to extract information from text in languages with little or no existing information extraction tooling. Neural models have proven empirically successful at this task Zhang et al. (2017c, b), but remain unsatisfactory in low-resource (i.e., small training set) settings. These neural models learn to summarize a given source sentence and target prefix into a hidden state, which is then passed through an output layer to generate the correct next target token. Since each member of the target vocabulary is essentially either a predicate or an argument, a random perturbation of the hidden state should still yield a token with the same semantic structure tag. This inductive bias motivates an extra term in the training objective, as shown in Fig. 2, which enforces that the surroundings of any learned hidden state generate tokens with the same semantic structure tag (either predicate or argument) as the centroid. We call this technique Halo, because each hidden state taking up its surroundings is analogous to how a halo forms around the sun. The method helps the model generalize better by learning more semantics-aware and noise-insensitive hidden states, without introducing extra parameters.

Figure 1: Example of cross-lingual information extraction: Chinese input text (a) and linearized English PredPatt output (b), where ‘:p’ and blue stand for predicate while ‘:a’ and purple denote argument.
Figure 2: Visualization of the Halo method. While a neural model learns to summarize the current known information into a hidden state and predict the next target token, the surroundings of this hidden state in the same space (two-dimensional in this example) are supervised to generate tokens with the same semantic structure tag. For example, at the last shown step, the centroid of the purple area is the summarized hidden state and learns to predict ‘mortars:a’, while a randomly sampled neighbor is enforced to generate an argument, though not necessarily ‘mortars’ (hence the ‘?’). Similar remarks apply to the blue regions.

2 The Problem

We are interested in learning a probabilistic model that directly maps an input sentence $x$ of the source language into an output sequence $y$ of the target language, where the source can be any human natural language (e.g., Chinese) and the target is linearized English PredPatt White et al. (2016). In the target vocabulary, each type is tagged as either predicate or argument: those ending with “:p” are predicates while those ending with “:a” are arguments.

For any distribution $p$ in our proposed family, the log-likelihood of the model given any pair $(x, y)$ is:

$$\log p(y \mid x) = \sum_{t=1}^{T} \log p\big(y_t \mid y_0, y_1, \ldots, y_{t-1}, x\big)$$

where $y_0$ is a special beginning-of-sequence token.

Throughout the paper, we denote vectors by bold lowercase Roman letters such as $\mathbf{h}$, and matrices by bold capital Roman letters such as $\mathbf{W}$. Subscripted bold letters denote distinct vectors or matrices (e.g., $\mathbf{h}_t$). Scalar quantities, including vector and matrix elements such as $\lambda$ and $h_{t,d}$, are written without bold. Capitalized scalars represent upper limits on lowercase scalars, e.g., $1 \le t \le T$. Function symbols are notated like their return type. All functions are extended to apply elementwise to vectors and matrices.

3 The Method

In this section, we first briefly review how baseline neural encoder-decoder models work on this task, and then introduce our training method, Halo.

3.1 Baseline Neural Models

Previous neural models on this task Zhang et al. (2017c, b) all adopt an encoder-decoder architecture with recurrent neural networks, particularly LSTMs Hochreiter and Schmidhuber (1997). At each step $t$ in decoding, the models summarize the input $x$ and output prefix $y_{<t}$ into a hidden state $\mathbf{h}_t \in \mathbb{R}^D$, and then project it with a transformation matrix $\mathbf{W}$ to a distribution $\mathbf{p}_t$ over the target English PredPatt vocabulary $V$:

$$\mathbf{s}_t = \mathbf{W}\mathbf{h}_t \qquad \mathbf{p}_t = \frac{\exp(\mathbf{s}_t)}{\mathbf{1}^\top \exp(\mathbf{s}_t)}$$

where $\mathbf{1}$ is a $|V|$-dimensional one vector such that $\mathbf{p}_t$ is a valid distribution.

Suppose the ground-truth target token at this step is $y_t$. The probability of generating $y_t$ under the current model is $p_t[y_t]$, obtained by accessing the $y_t$-th element of the vector $\mathbf{p}_t$. The log-likelihood is then $\mathcal{L}_{\text{token}} = \sum_t \log p_t[y_t]$, and the model is trained by maximizing this objective over all training pairs.
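As a concrete illustration, the per-step computation above can be sketched in NumPy (all names here are illustrative, not from the original implementation):

```python
import numpy as np

def softmax(s):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(s - s.max())
    return e / e.sum()

def decoder_step(h_t, W, target_index):
    """Project hidden state h_t (shape D) with W (shape |V| x D) to a
    distribution over the vocabulary, and return that distribution
    together with the log-likelihood of the ground-truth token."""
    p_t = softmax(W @ h_t)
    return p_t, np.log(p_t[target_index])

# Toy dimensions: D = 3, |V| = 4.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
h_t = np.tanh(rng.normal(size=3))  # LSTM hidden states lie in (-1, 1)^D
p_t, token_ll = decoder_step(h_t, W, target_index=2)
```

Summing `token_ll` over all steps of all training pairs gives the baseline objective to maximize.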

3.2 Halo

Our method exploits a property of this task: the vocabulary $V$ is partitioned into $V_p$, the set of predicates ending with “:p”, and $V_a$, the set of arguments ending with “:a”. Since a neural model summarizes everything known up to step $t$ into $\mathbf{h}_t$, would a perturbation $\tilde{\mathbf{h}}_t$ around $\mathbf{h}_t$ still generate the same token $y_t$? This bias seems too strong, but we can still reasonably assume that $\tilde{\mathbf{h}}_t$ would generate a token with the same semantic structure tag (i.e., predicate or argument). That is, the prediction made by $\tilde{\mathbf{h}}_t$ should end with “:p” if $y_t$ is a predicate, and with “:a” otherwise.

This inductive bias provides another level of supervision. Suppose that at step $t$, a neighbor $\tilde{\mathbf{h}}_t$ is randomly sampled around $\mathbf{h}_t$ and is used to generate a distribution $\tilde{\mathbf{p}}_t$ with the same projection and softmax as in Section 3.1. We can then obtain a distribution $\tilde{\mathbf{q}}_t$ over the tag set $\{p, a\}$ by summing the probabilities of all predicates and those of all arguments:

$$\tilde{q}_t[p] = \sum_{v \in V_p} \tilde{p}_t[v] \qquad \tilde{q}_t[a] = \sum_{v \in V_a} \tilde{p}_t[v]$$

This aggregation is shown in Fig. 3. The extra objective is then $\mathcal{L}_{\text{halo}} = \sum_t \log \tilde{q}_t[z_t]$, where $z_t = p$ if the target token $y_t$ is a predicate (i.e., ending with “:p”) and $z_t = a$ otherwise.

Figure 3: Visualization of how $\tilde{\mathbf{q}}_t$ (a distribution over the tags) is obtained by aggregating $\tilde{\mathbf{p}}_t$ (a distribution over the vocabulary $V$).
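The aggregation in Fig. 3 amounts to summing the probability mass of each tag's members; a minimal NumPy sketch (the toy vocabulary here is hypothetical):

```python
import numpy as np

# Hypothetical toy vocabulary; ":p" marks predicates, ":a" arguments.
vocab = ["fired:p", "attacked:p", "mortars:a", "insurgents:a"]
is_pred = np.array([v.endswith(":p") for v in vocab])

def tag_distribution(p_tilde):
    """Collapse a distribution over the vocabulary into a distribution
    over the two semantic structure tags {predicate, argument}."""
    return np.array([p_tilde[is_pred].sum(), p_tilde[~is_pred].sum()])

p_tilde = np.array([0.5, 0.2, 0.2, 0.1])  # e.g. softmax output at a neighbor
q_tilde = tag_distribution(p_tilde)
```

Because the two tag sets partition the vocabulary, `q_tilde` is itself a valid distribution.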

Therefore, we obtain the joint objective to maximize by adding $\mathcal{L}_{\text{token}}$ and $\mathcal{L}_{\text{halo}}$:

$$\mathcal{L} = \mathcal{L}_{\text{token}} + \mathcal{L}_{\text{halo}}$$

which enables the model to learn more semantics-aware and noise-insensitive hidden states by enforcing the hidden states within a region to share the same semantic structure tag.²

² One can also sample multiple neighbors, rather than one, per hidden state and average their $\tilde{\mathbf{q}}_t$. In our experiments, we sampled only one to limit computational cost and found it effective enough.
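Under these definitions, a single step of the joint objective can be sketched as follows (names are illustrative; `p_t` is the model's distribution at the true hidden state, and `p_tilde_t` the distribution at a sampled neighbor):

```python
import numpy as np

def joint_step_objective(p_t, p_tilde_t, is_pred, target_index):
    """Per-step joint objective: token log-likelihood at the true hidden
    state, plus the tag log-likelihood at a sampled neighbor."""
    token_ll = np.log(p_t[target_index])
    if is_pred[target_index]:
        tag_mass = p_tilde_t[is_pred].sum()
    else:
        tag_mass = p_tilde_t[~is_pred].sum()
    return token_ll + np.log(tag_mass)

# Toy 4-word vocabulary: the first two types are predicates.
is_pred = np.array([True, True, False, False])
p_t = np.array([0.6, 0.1, 0.2, 0.1])
p_tilde_t = np.array([0.4, 0.3, 0.2, 0.1])
obj = joint_step_objective(p_t, p_tilde_t, is_pred, target_index=0)
```

Summing this quantity over all steps and training pairs gives the full objective to maximize.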

3.2.1 Sampling Neighbors

Sampling a neighbor around $\mathbf{h}_t$ is essentially equivalent to adding noise to it. Note that $\mathbf{h}_t \in (-1, 1)^D$ in the LSTM decoders that previous work used, because $\mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t)$, where $\mathbf{o}_t \in (0, 1)^D$ and $\tanh(\mathbf{c}_t) \in (-1, 1)^D$. Therefore, extra work is needed to ensure $\tilde{\mathbf{h}}_t \in (-1, 1)^D$. For this purpose, we follow this recipe:³

³ Alternatives do exist. For example, one can transform $\mathbf{h}_t$ from $(-1, 1)^D$ to $\mathbb{R}^D$, add random (e.g., Gaussian) noise in that space, and transform back to $(-1, 1)^D$. Such tricks are valid as long as they find neighbors within the same space as $\mathbf{h}_t$.


  • Sample $\mathbf{u}$ by independently drawing each entry from a uniform distribution over $(-1, 1)$;

  • Sample a scalar $\lambda$ from a Beta distribution $\mathrm{Beta}(a, b)$, where $a$ and $b$ are hyperparameters to be tuned;

  • Compute $\tilde{\mathbf{h}}_t = (1 - \lambda)\,\mathbf{h}_t + \lambda\,\mathbf{u}$, such that $\tilde{\mathbf{h}}_t$ lies on the line segment between $\mathbf{h}_t$ and $\mathbf{u}$.

Note that the sampled hidden state $\tilde{\mathbf{h}}_t$ is only used to compute $\tilde{\mathbf{q}}_t$, not to update the LSTM state, i.e., $\mathbf{h}_{t+1}$ is independent of $\tilde{\mathbf{h}}_t$.
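The three sampling steps above can be sketched as (function and variable names are illustrative):

```python
import numpy as np

def sample_neighbor(h_t, a, b, rng):
    """Sample a neighbor of h_t inside (-1, 1)^D: draw u uniformly from
    (-1, 1)^D, draw lambda ~ Beta(a, b), and interpolate so the neighbor
    lies on the line segment between h_t and u."""
    u = rng.uniform(-1.0, 1.0, size=h_t.shape)
    lam = rng.beta(a, b)
    return (1.0 - lam) * h_t + lam * u

rng = np.random.default_rng(0)
h_t = np.tanh(rng.normal(size=5))  # an LSTM hidden state lies in (-1, 1)^D
h_tilde = sample_neighbor(h_t, a=1.0, b=9.0, rng=rng)
```

Because both endpoints lie in $(-1, 1)^D$ and the interpolation weight is in $[0, 1]$, the sampled neighbor stays within the same space as the hidden state.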

3.2.2 Roles of Hyperparameters

The Halo technique adds an inductive bias into the model, and its strength is controlled by the interpolation scalar $\lambda$:


  • $\lambda \in [0, 1]$ ensures $\tilde{\mathbf{h}}_t \in (-1, 1)^D$;

  • $\lambda = 0$ makes $\tilde{\mathbf{h}}_t = \mathbf{h}_t$, thus providing no extra supervision on the model;

  • $\lambda = 1$ makes $\tilde{\mathbf{h}}_t$ uniformly sampled in the entire $(-1, 1)^D$, causing underfitting just as when an $\ell_2$ regularization coefficient goes to infinity.

We sample a valid $\lambda \in (0, 1)$ from a Beta distribution $\mathrm{Beta}(a, b)$, and the magnitudes of $a$ and $b$ can be tuned on the development set:


  • When $a \to 0$ and $b$ is finite, or $a$ is finite and $b \to \infty$, we have $\lambda \to 0$;

  • When $b \to 0$ and $a$ is finite, or $b$ is finite and $a \to \infty$, we have $\lambda \to 1$;

  • Larger $a$ and $b$ (with $a/(a+b)$ fixed) yield smaller variance of $\lambda$, and setting $\lambda$ to a constant is the limiting case where $a, b \to \infty$ with $a/(a+b)$ fixed.

Besides $a$ and $b$, the way the vocabulary $V$ is partitioned also serves as a knob for tuning the bias strength. Although on this task the predicate and argument tags naturally partition the vocabulary, we can still explore other possibilities. For example, one extreme is to partition $V$ into $|V|$ singletons, meaning that $\tilde{\mathbf{q}}_t = \tilde{\mathbf{p}}_t$: a perturbation around $\mathbf{h}_t$ should still predict the same token. This extreme case does not work well in our experiments, verifying the importance of the semantic structure tags on this task.

Table 1: Statistics of each dataset (number of train/dev/test pairs, source and target vocabulary sizes, token/type ratio).
Method        Chinese                 Uzbek                   Turkish                 Somali
              BLEU  F1-Pred F1-Arg    BLEU  F1-Pred F1-Arg    BLEU  F1-Pred F1-Arg    BLEU  F1-Pred F1-Arg
ModelZ        22.07  30.06   39.06    10.76  12.46   24.08     7.47   6.49   17.76    13.06  13.91   25.38
ModelP        22.10  30.04   39.83    12.50  18.81   25.93     9.04  12.90   21.13    13.22  16.71   26.83
ModelP-Halo   23.18  30.85   41.23    12.95  19.23   27.63    10.21  12.55   22.57    14.26  17.06   27.73
Table 2: BLEU and F1 scores of different models on all datasets, where F1-Pred is F1 on predicates and F1-Arg is F1 on arguments.

4 Related Work

Cross-lingual information extraction has drawn a great deal of attention from researchers. Some work Sudo et al. (2004); Parton et al. (2009); Ji (2009); Snover et al. (2011); Ji and Nothman (2016) operated in closed domains, i.e., on a predefined set of events and/or entities. Zhang et al. (2017c) explored this problem in the open domain, and their attentional encoder-decoder model significantly outperformed a baseline system that performs translation and parsing in a pipeline. Zhang et al. (2017b) further improved the results with a hierarchical architecture that learns to first predict the next semantic structure tag and then select a tag-dependent decoder for token generation. Orthogonal to these efforts, Halo aims to help all neural models on this task, rather than being tied to any specific architecture.

Halo can be understood as a data augmentation technique Chapelle et al. (2001); Van der Maaten et al. (2013); Srivastava et al. (2014); Szegedy et al. (2016); Gal and Ghahramani (2016). Such techniques have been used to train neural networks for better generalization in applications like image classification Simard et al. (2000); Simonyan and Zisserman (2015); Arpit et al. (2017); Zhang et al. (2017a) and speech recognition Graves et al. (2013); Amodei et al. (2016). Halo differs from these methods in that 1) it exploits task-specific information, namely that the vocabulary is partitioned by semantic structure tags; and 2) it encodes the prior belief that hidden representations of tokens with the same semantic structure tag should stay close to each other.

5 Experiments

We evaluate our method on several real-world CLIE datasets, measured by BLEU Papineni et al. (2002) and F1, as proposed by Zhang et al. (2017c). Given the generated linearized PredPatt outputs and their references, the former metric measures their n-gram similarity,⁴ and the latter measures their token-level overlap. F1 is computed separately for predicates and arguments, denoted F1 Pred and F1 Arg respectively.

⁴ The MOSES implementation Koehn et al. (2007) was used, as in all previous work on this task.

5.1 Datasets

Multiple datasets were used to demonstrate the effectiveness of our proposed method; each sample in a dataset is a source-language sentence paired with its linearized English PredPatt output. These datasets were first introduced as the DARPA LORELEI Language Packs Strassel and Tracey (2016), and then used for this task by Zhang et al. (2017c, b). As shown in Table 1, the Chinese dataset has almost one million training samples and a high token/type ratio, while the others are low-resource, with far fewer samples and lower token/type ratios.

5.2 Model Implementation

Before applying our Halo technique, we first improved the current state-of-the-art neural model of Zhang et al. (2017b) with residual connections He et al. (2016) and multiplicative attention Luong et al. (2015), which effectively improved performance. We refer to the model of Zhang et al. (2017b) as ModelZ and to our improved version as ModelP.⁵

⁵ Z stands for Zhang and P for Plus.

5.3 Experimental Details

In our experiments, instead of using the full vocabularies shown in Table 1, we set a minimum count threshold for each dataset and replaced rare words with a special out-of-vocabulary symbol. These thresholds were tuned on the dev sets.

The Beta distribution is very flexible. In general, its variance is a decreasing function of $a + b$, and when $a + b$ is fixed, its mean $a/(a+b)$ is an increasing function of $a$. In our experiments, we fixed $b$ and only lightly tuned $a$ on the dev sets; the optimal values of $a$ stayed close to the same value across datasets.
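These properties of the Beta distribution are easy to check empirically; below is a small Monte Carlo sanity check with hypothetical values of $a$ and $b$ (not the tuned values from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 8.0

# The mean of Beta(a, b) is a / (a + b).
lam = rng.beta(a, b, size=200_000)

# Scaling a and b by the same factor keeps the mean fixed but shrinks the
# variance, i.e. variance decreases as a + b grows with a / (a + b) fixed.
lam_big = rng.beta(10 * a, 10 * b, size=200_000)
```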

5.4 Results

As shown in Table 2, ModelP outperforms ModelZ on all datasets under all metrics, except F1 Pred on the Chinese dataset. Our Halo technique further consistently boosts ModelP's performance, except for F1 Pred on Turkish.

Additionally, experiments were also conducted on two other low-resource datasets, Amharic and Yoruba, that Zhang et al. (2017b) included, and $\lambda \to 0$ in Halo was found optimal on the dev sets. In such cases, this regularization was not helpful, so no comparison need be made on the held-out test sets.

6 Conclusion and Future Work

We present Halo, a simple and effective training technique for cross-lingual information extraction. Our method enforces the local surroundings of each hidden state of a neural model to generate only tokens with the same semantic structure tag, making the learned hidden states more semantics-aware and robust to random noise. Our method achieves new state-of-the-art results on several benchmark cross-lingual information extraction datasets, covering both high- and low-resource scenarios.

As future work, we plan to extend this technique to similar tasks such as POS tagging and semantic role labeling. One straightforward approach is to define the vocabulary as the set of ‘word-type:POS-tag’ or ‘word-type:semantic-role’ types, so that the tags partition the vocabulary and our method is directly applicable. It would also be interesting to apply Halo more widely to other tasks as a general regularization technique.


This work was supported in part by the JHU Human Language Technology Center of Excellence (HLTCOE), and DARPA LORELEI. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.


  • Amodei et al. (2016) Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. 2016. Deep speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning. pages 173–182.
  • Arpit et al. (2017) Devansh Arpit, Stanislaw Jastrzkebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. 2017. A closer look at memorization in deep networks. In International Conference on Machine Learning. pages 233–242.
  • Chapelle et al. (2001) Olivier Chapelle, Jason Weston, Léon Bottou, and Vladimir Vapnik. 2001. Vicinal risk minimization. In Advances in neural information processing systems. pages 416–422.
  • Gal and Ghahramani (2016) Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems. pages 1019–1027.
  • Graves et al. (2013) Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pages 6645–6649.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • Ji (2009) Heng Ji. 2009. Cross-lingual predicate cluster acquisition to improve bilingual event extraction by inductive learning. In Proceedings of the Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics. Association for Computational Linguistics, Boulder, Colorado, USA, pages 27–35. http://www.aclweb.org/anthology/W09-1704.
  • Ji and Nothman (2016) Heng Ji and Joel Nothman. 2016. Overview of TAC-KBP2016 Tri-lingual EDL and its impact on end-to-end cold-start KBP. In Proceedings of the Text Analysis Conference (TAC).
  • Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions. Association for Computational Linguistics, pages 177–180.
  • Luong et al. (2015) Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318.
  • Parton et al. (2009) Kristen Parton, Kathleen R. McKeown, Bob Coyne, Mona T. Diab, Ralph Grishman, Dilek Hakkani-Tür, Mary Harper, Heng Ji, Wei Yun Ma, Adam Meyers, Sara Stolbach, Ang Sun, Gokhan Tur, Wei Xu, and Sibel Yaman. 2009. Who, what, when, where, why? comparing multiple approaches to the cross-lingual 5w task. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, Suntec, Singapore, pages 423–431. http://www.aclweb.org/anthology/P/P09/P09-1048.
  • Simard et al. (2000) Patrice Y Simard, Yann A Le Cun, John S Denker, and Bernard Victorri. 2000. Transformation invariance in pattern recognition: Tangent distance and propagation. International Journal of Imaging Systems and Technology 11(3):181–197.
  • Simonyan and Zisserman (2015) Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations.
  • Snover et al. (2011) Matthew Snover, Xiang Li, Wen-Pin Lin, Zheng Chen, Suzanne Tamang, Mingmin Ge, Adam Lee, Qi Li, Hao Li, Sam Anzaroot, and Heng Ji. 2011. Cross-lingual slot filling from comparable corpora. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web. Association for Computational Linguistics, Stroudsburg, PA, USA, BUCC ’11, pages 110–119. http://dl.acm.org/citation.cfm?id=2024236.2024256.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958.
  • Strassel and Tracey (2016) Stephanie Strassel and Jennifer Tracey. 2016. Lorelei language packs: Data, tools, and resources for technology development in low resource languages. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Paris, France.
  • Sudo et al. (2004) Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2004. Cross-lingual information extraction system evaluation. In Proceedings of the 20th international Conference on Computational Linguistics. Association for Computational Linguistics, page 882.
  • Szegedy et al. (2016) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 2818–2826.
  • Van der Maaten et al. (2013) Laurens Van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Weinberger. 2013. Learning with marginalized corrupted features. In Proceedings of The 30th International Conference on Machine Learning.
  • White et al. (2016) Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1713–1723. https://aclweb.org/anthology/D16-1177.
  • Zhang et al. (2017a) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2017a. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 .
  • Zhang et al. (2017b) Sheng Zhang, Kevin Duh, and Ben Van Durme. 2017b. Selective decoding for cross-lingual open information extraction. In Proceedings of the 8th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Taipei, Taiwan.
  • Zhang et al. (2017c) Sheng Zhang, Kevin Duh, and Benjamin Van Durme. 2017c. MT/IE: Cross-lingual open information extraction with neural sequence-to-sequence models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, Valencia, Spain, pages 64–70. http://www.aclweb.org/anthology/E17-2011.