Self-organized Hierarchical Softmax

Yikang Shen
University of Montreal
yi-kang.shen@umontreal.ca
   Shawn Tan
University of Montreal
tanjings@iro.umontreal.ca
   Christopher Pal
Polytechnique Montreal
christopher.pal@polymtl.ca
   Aaron Courville
University of Montreal
aaron.courville@gmail.com
Abstract

We propose a new self-organizing hierarchical softmax formulation for neural-network-based language models over large vocabularies. Instead of using a predefined hierarchical structure, our approach is capable of learning word clusters with clear syntactical and semantic meaning during the language model training process. We provide experiments on standard benchmarks for language modeling and sentence compression tasks. We find that this approach is as fast as other efficient softmax approximations, while achieving comparable or even better performance relative to similar full softmax models.

1 Introduction

The softmax function and its variants are an essential part of neural network based models for natural language tasks, such as language modeling, sentence summarization, machine translation and language generation.

Given a hidden vector, the softmax can assign probability mass to each word in a vocabulary. The hidden vector could be generated from the preceding context, a source sentence, a dialogue context, or just random variables. The model decides how the context is converted into the hidden vector, and there are several choices for this, including recurrent neural networks Hochreiter and Schmidhuber (1997); Mikolov et al. (2010), feed-forward neural networks Bengio et al. (2003a), or log-bilinear models Mnih and Hinton (2009). In our experiments here, we use a long short-term memory (LSTM) model for language modeling, and a sequence-to-sequence model with an attention mechanism for sentence compression. Both models are simple but have been shown capable of achieving state-of-the-art results. Our focus is to demonstrate that with a well designed structure, the hierarchical softmax approach can perform as accurately as the full softmax, while maintaining improvements in efficiency.

For word-level models, the size of the vocabulary is very important for higher recall and a more accurate understanding of the input. However, the training speed for models with softmax output layers quickly decreases as the vocabulary size grows. This is due to the linear increase of parameter size and computation cost with respect to the vocabulary size.

Many approaches have been proposed to reduce the computational complexity of large softmax layers Mikolov et al. (2011a); Chen et al. (2016); Grave et al. (2017). These methods can largely be divided into two categories:

  1. Approaches that can compute a normalized distribution over the entire vocabulary with a lower computational cost Chen et al. (2016); Grave et al. (2017). Normalized probabilities can be useful for sentence generation tasks, such as machine translation and summarization.

  2. Methods that provide unnormalized values Bengio et al. (2003b); Mikolov et al. (2013b). These methods are usually more efficient in the training process, but less accurate.

In this paper, we propose a self-organized hierarchical softmax, which belongs in the first category. In contrast to previous hierarchical softmax methods which have used predefined clusters, we conjecture here that a hierarchical structure learned from the corpus may improve model performance. Instead of using term frequencies as clustering criteria Mikolov et al. (2011a); Grave et al. (2017), we want to explore the probability of clustering words together considering their preceding context. The main contributions of this paper are as follows:

  • We propose an algorithm to learn a hierarchical structure during the language model learning process. The goal of this algorithm is to maximize the probability of a word belonging to its cluster considering its preceding context.

  • We conduct experiments for two different tasks: language modeling and sentence summarization. Results show that our learned hierarchical softmax can achieve comparable accuracy for language modeling, and even better performance for summarization when compared to a standard softmax. We also provide clustering results, which indicate a clear semantic relevance between words in the same cluster.

  • Empirical results show that our approach provides a speed-up of more than 3× compared to the standard softmax.

2 Related Work

Representing probability distributions over large vocabularies is computationally challenging. In neural language modeling, the standard approach is to use a softmax function that outputs a probability vector over the entire vocabulary. Many methods have been proposed to approximate the softmax with lower computational cost Mikolov et al. (2011a); Chen et al. (2016); Bengio et al. (2003b); Grave et al. (2017). We briefly review the most popular methods below.

2.1 Softmax-based approaches

Hierarchical Softmax (HSM):

Goodman (2001) and its variants are the most popular approximations. In general, this approach organizes the output vocabulary into a tree where the leaves are words and intermediate nodes are latent variables, or classes. The tree structure can have many levels, and there is a unique path from the root to each word. The probability of a word is the product of the probabilities of each node along its path. In practice, we can use a tree with two layers, where words are organized into simple clusters. In this case, the computational complexity is reduced from $\mathcal{O}(|V|)$ to $\mathcal{O}(\sqrt{|V|})$. If we use a deeper structure such as a Huffman tree, the computational complexity can be reduced to $\mathcal{O}(\log |V|)$. In general, the hierarchical structure is built on frequency binning Mikolov et al. (2011b); Grave et al. (2017) or word similarities Chelba et al. (2013); Le et al. (2011); Chen et al. (2016). In this paper, we propose another word-similarity-based hierarchical structure. But, instead of performing k-means over pre-learned word embeddings, we propose a new approach that learns the hierarchical structure from the model's historical predictions during the language model learning process.

Differentiated softmax (D-softmax):

Chen et al. (2016) is based on the intuition that not all words require the same number of parameters: the many occurrences of frequent words allow us to fit many parameters to them, while extremely rare words only allow us to fit relatively few. D-softmax assigns vectors of different dimensions to words according to their frequency, to speed up training and save memory. Adaptive softmax Grave et al. (2017) can be seen as a combination of frequency-binning HSM and D-softmax.

CNN-softmax:

Jozefowicz et al. (2016) is inspired by the idea that a convolutional network over characters can produce word embeddings. Aside from a large reduction in the number of parameters and the incorporation of morphological knowledge, this method can also easily deal with out-of-vocabulary words, and allows parallel training over corpora with different vocabulary sizes. However, it does not decrease the computational complexity compared to the standard full softmax Jozefowicz et al. (2016).

2.2 Sampling-based approaches

Sampling based approaches approximate the normalization in the denominator of the softmax with some other loss that is cheap to compute. However, sampling based approaches are only useful at training time. During inference, the full softmax still needs to be computed to obtain a normalized probability. These approaches have been successfully applied to language modeling Bengio and Senécal (2008), machine translation Jean et al. (2015), and computer vision Joulin et al. (2016).

Importance sampling (IS):

Bengio et al. (2003b); Bengio and Senécal (2008) select a subset of the vocabulary as negative samples to approximate the softmax normalization. Originally, the unigram or bigram distribution of words over the entire corpus was used to sample negative examples Bengio et al. (2003b), but researchers found that sampling from a more carefully designed distribution helps achieve better accuracy. Two variants of n-gram distributions have been proposed:

  1. an interpolation of the unigram and bigram distributions Bengio and Senécal (2008),

  2. a power-raised unigram distribution Mikolov et al. (2013a).

Noise Contrastive Estimation (NCE):

Noise Contrastive Estimation (NCE) is proposed in Gutmann and Hyvärinen (2010); Mnih and Kavukcuoglu (2013) as a more stable sampling method than IS. NCE does not try to estimate the probability of a word directly. Instead, it uses an auxiliary loss that works to distinguish the original distribution from a noisy one. Mnih and Teh (2012) showed that good performance can be achieved even without computing the softmax normalization.

3 Self-organized Hierarchical Softmax

3.1 Cluster-based Hierarchical Softmax

We employ a modified 2-layer hierarchical softmax to compute the distribution of the next word in a sentence. Given a vocabulary $V$ and the pre-softmax hidden state $h_t$, we first project $h_t$ into a cluster vector $h^c_t$ and a word vector $h^w_t$,

$h^c_t = W^c h_t + b^c, \quad h^w_t = W^w h_t + b^w$   (1)

where $W^c, W^w$ are projection matrices and $b^c, b^w$ are biases. The cluster distribution can be expressed as

$p(c \mid h_t) = \dfrac{\exp(h^{c\top}_t V_c)}{\sum_{c' \in \mathcal{C}} \exp(h^{c\top}_t V_{c'})}$   (2)

where $\mathcal{C}$ is the set of clusters and $V_c$ is the vector representation of cluster $c$. The in-cluster probability function is

$p(w \mid h_t, \mathcal{C}(w)) = \dfrac{\exp(h^{w\top}_t U_w)}{\sum_{w' \in \mathcal{C}(w)} \exp(h^{w\top}_t U_{w'})}$   (3)

where $U_w$ is the vector representation of word $w$ and $\mathcal{C}(w)$ is the cluster assigned to $w$. Thus, the final probability function is

$p(w \mid h_t) = p(\mathcal{C}(w) \mid h_t)\, p(w \mid h_t, \mathcal{C}(w))$   (4)

If the number of clusters is in $\mathcal{O}(\sqrt{|V|})$ and the maximum number of words per cluster is in $\mathcal{O}(\sqrt{|V|})$, then the computational cost of normalization at each layer is only $\mathcal{O}(\sqrt{|V|})$, as opposed to $\mathcal{O}(|V|)$ for the standard softmax. Thus one large matrix product is transformed into two small matrix products, which are very efficient on a GPU Grave et al. (2017).
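To make the two-level factorization concrete, the following NumPy sketch computes $p(w \mid h_t)$ for a single hidden state under the assumptions above. It is a minimal sketch, not the authors' implementation: the names (W_c, W_w, V_clusters, U_words, cluster_of, words_in_cluster) are illustrative placeholders, and the projection in Equation 1 is shown as a plain affine map.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hsm_word_prob(h, w, W_c, b_c, W_w, b_w, V_clusters, U_words,
                  cluster_of, words_in_cluster):
    """p(w | h) under a 2-layer hierarchical softmax (illustrative sketch).

    h                 : (d,) pre-softmax hidden state
    W_c, b_c          : projection to the cluster vector h_c   (Eq. 1)
    W_w, b_w          : projection to the word vector h_w      (Eq. 1)
    V_clusters        : (n_clusters, d) cluster embeddings
    U_words           : (|V|, d) word embeddings
    cluster_of        : (|V|,) array mapping word id -> cluster id
    words_in_cluster  : list mapping cluster id -> array of member word ids
    """
    h_c = W_c @ h + b_c                          # cluster branch of Eq. (1)
    h_w = W_w @ h + b_w                          # word branch of Eq. (1)

    c = cluster_of[w]
    p_cluster = softmax(V_clusters @ h_c)[c]     # Eq. (2): p(c | h)

    members = words_in_cluster[c]                # normalize only over cluster c
    in_cluster_logits = U_words[members] @ h_w
    w_pos = int(np.where(members == w)[0][0])
    p_in = softmax(in_cluster_logits)[w_pos]     # Eq. (3): p(w | h, c)

    return p_cluster * p_in                      # Eq. (4)
```

With roughly $\sqrt{|V|}$ clusters of at most $\sqrt{|V|}$ words, both softmax normalizations above run over $\mathcal{O}(\sqrt{|V|})$ terms, which is the source of the cost reduction discussed above.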

3.2 Cluster Perplexity

In order to evaluate the quality of a clustering over words, we propose the cluster perplexity:

$pp_{cluster} = \exp\left(-\frac{1}{T} \sum_{t=1}^{T} \log p(\mathcal{C}(w_t) \mid s_t)\right)$   (5)

where $T$ is the number of words in the dataset, $s_t$ is the context preceding $w_t$, and $p(\mathcal{C}(w_t) \mid s_t)$ is the probability that a word in cluster $\mathcal{C}(w_t)$ appears after $s_t$. Given a word clustering $\mathcal{C}$ and context $s_t$, this metric evaluates how difficult it is to choose the correct cluster. If words that share similar contexts have been successfully grouped together, $pp_{cluster}$ should be small.

In addition to $pp_{cluster}$, we also propose the in-cluster perplexity:

$pp_{in} = \exp\left(-\frac{1}{T} \sum_{t=1}^{T} \log p(w_t \mid s_t, \mathcal{C}(w_t))\right)$   (6)

where $p(w_t \mid s_t, \mathcal{C}(w_t))$ is the probability of word $w_t$ appearing after $s_t$ given the subset of the vocabulary $\mathcal{C}(w_t)$. If $\mathcal{C}(w_t)$ contains words that share the same context as $w_t$, $pp_{in}$ should be large.
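Both metrics are exponentiated average negative log-likelihoods, so they are easy to compute from per-token probabilities collected during evaluation. A minimal sketch, assuming the two probability arrays have already been gathered (names are illustrative):

```python
import numpy as np

def cluster_perplexities(p_cluster, p_in_cluster):
    """Compute Eq. (5) and Eq. (6) from per-token probabilities.

    p_cluster[t]    = p(C(w_t) | s_t)        for each token t
    p_in_cluster[t] = p(w_t | s_t, C(w_t))
    """
    p_cluster = np.asarray(p_cluster, dtype=np.float64)
    p_in_cluster = np.asarray(p_in_cluster, dtype=np.float64)
    pp_c = np.exp(-np.mean(np.log(p_cluster)))      # cluster perplexity, Eq. (5)
    pp_in = np.exp(-np.mean(np.log(p_in_cluster)))  # in-cluster perplexity, Eq. (6)
    return pp_c, pp_in
```

Note that, because Equation 4 factorizes each token's probability into the two terms, the ordinary word perplexity over the same tokens is exactly the product of these two quantities.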

3.3 Optimizing Cluster Perplexity

With the definitions in Equations 5 and 6 established, our goal is to minimize $pp_{cluster}$:

$\mathcal{C}^{*} = \arg\max_{\mathcal{C}} \sum_{w \in V} \mathrm{tf}(w)\, \bar{p}(\mathcal{C}(w) \mid w)$   (7)

where $\mathrm{tf}(w)$ is the term frequency of word $w$ in the corpus, and

$\bar{p}(c \mid w) = \frac{1}{\mathrm{tf}(w)} \sum_{t:\, w_t = w} p(c \mid s_t)$   (8)

is the average of $p(c \mid s_t)$ over the different preceding contexts $s_t$ that are followed by word $w$.

According to Equation 7, we need a cluster assignment $\mathcal{C}$ that maximizes the weighted sum of $\bar{p}(\mathcal{C}(w) \mid w)$. While directly computing $\bar{p}(c \mid w)$ is intractable, the output of Equation 2 at training time can be considered a sample of $p(c \mid s_t)$. We propose to use exponential smoothing to estimate $\bar{p}(c \mid w)$:

$\hat{p}(c \mid w) \leftarrow (1 - \lambda_w)\, \hat{p}(c \mid w) + \lambda_w\, p(c \mid h_t)$   (9)

$\hat{p}(c \mid w)$ is a weighted sum over all historical samples of $p(c \mid h_t)$, under the different contexts and parameters previously seen in training. The smoothing factor $\lambda_w$ is defined as

$\lambda_w = \frac{1}{\mathrm{count}(w)}$   (10)

where $\mathrm{count}(w)$ is the raw count of $w$ in the entire dataset.
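The smoothing step in Equation 9 can be sketched as follows. This is an illustrative sketch, not the authors' code: the array names are placeholders, and the inverse-count smoothing factor mirrors the reconstruction of Equation 10 above rather than a value taken from the source.

```python
import numpy as np

def update_cluster_posterior(p_bar, counts, w, p_c_given_h):
    """One smoothing step for the running estimate of p(c | w) in Eq. (9).

    p_bar       : (|V|, n_clusters) running estimates, one row per word
    counts      : (|V|,) raw count of each word in the corpus
    w           : index of the current target word
    p_c_given_h : (n_clusters,) output of Eq. (2) for the current context
    """
    lam = 1.0 / counts[w]   # assumed inverse-count smoothing factor (Eq. 10)
    p_bar[w] = (1.0 - lam) * p_bar[w] + lam * np.asarray(p_c_given_h)
    return p_bar
```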

3.4 Greedy Word Clustering

In practice, we impose two constraints on each cluster:

  1. The number of words in each cluster cannot exceed a fixed maximum, set as a hyperparameter; and

  2. The sum of term frequencies of the words in each cluster should be smaller than a frequency budget. This is known as the frequency-budget trick Chen et al. (2016).

These constraints prevent clusters that are either too large, which would make computing the in-cluster distribution very expensive, or too unbalanced in frequency, which would bias the word cluster distribution.

Data: smoothed cluster distributions $\hat{p}(c \mid w)$, term frequencies $\mathrm{tf}(w)$
Result: new clusters
Generate empty clusters
for each word $w$, in descending order of $\mathrm{tf}(w)$ do
       for each cluster $c$, in descending order of $\hat{p}(c \mid w)$ do
             if $c$ is not full and adding $w$ keeps $c$ within the frequency budget then
                    add $w$ into $c$
                    break
             end if
       end for
end for
return clusters
Algorithm 1: Greedy Word Cluster Assignment

The greedy Algorithm 1 is proposed to optimize the cluster perplexity. As each cluster has a limited number of positions for words, some words cannot be assigned to their best cluster. If we assign words in a fixed order, then words at the tail end of that order are less likely to be assigned to their best cluster. In the algorithm, we therefore assign words to clusters in descending order of their term frequency $\mathrm{tf}(w)$. In this scheme, high-frequency words have priority in choosing clusters, because they carry higher weight in Equation 7. A Python sketch of this procedure is given below.
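The sketch below renders Algorithm 1 under the two constraints of Section 3.4. It is illustrative rather than the authors' implementation: the variable names are placeholders, and the fallback for a word that fits no cluster is an assumption (the paper does not specify one).

```python
import numpy as np

def greedy_cluster_assignment(p_bar, tf, n_clusters, max_words, freq_budget):
    """Greedy word-to-cluster assignment in the spirit of Algorithm 1.

    p_bar       : (|V|, n_clusters) smoothed estimates of p(c | w)
    tf          : (|V|,) term frequency of each word
    max_words   : maximum number of words allowed per cluster
    freq_budget : maximum total term frequency allowed per cluster
    Returns cluster_of, an array mapping word id -> cluster id.
    """
    p_bar = np.asarray(p_bar)
    tf = np.asarray(tf)
    V = len(tf)
    cluster_of = np.full(V, -1, dtype=np.int64)
    sizes = np.zeros(n_clusters, dtype=np.int64)
    freq_sums = np.zeros(n_clusters)

    # high-frequency words choose first: they carry more weight in Eq. (7)
    for w in np.argsort(-tf):
        for c in np.argsort(-p_bar[w]):          # this word's preferred clusters first
            if sizes[c] < max_words and freq_sums[c] + tf[w] <= freq_budget:
                cluster_of[w] = c
                break
        if cluster_of[w] == -1:
            # fallback (assumed, not specified in the paper): least-loaded cluster
            c = int(np.argmin(freq_sums))
            cluster_of[w] = c
        sizes[c] += 1
        freq_sums[c] += tf[w]
    return cluster_of
```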

3.5 Training Language Model with Self-organized HSM

In the training phase, we start from a randomly initialized word clustering and update the parameters with gradient-descent-based optimization, re-running the cluster assignment every fixed number of iterations; this update period is a hyperparameter chosen based on the dataset size and vocabulary size. The learning process can also be viewed as an EM algorithm: in the E-step, we update the clusters; in the M-step, we update the parameters based on the new clusters. A sketch of this alternation is given below.
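The following sketch shows the alternation only in outline. It reuses the earlier sketches update_cluster_posterior and greedy_cluster_assignment; model, model.train_step, model.cluster_of, and the update period K are hypothetical names standing in for the actual LSTM + HSM implementation and its hyperparameters.

```python
def train_self_organized_hsm(model, batches, p_bar, counts, tf,
                             n_clusters, max_words, freq_budget, K):
    """EM-style alternation (illustrative): gradient updates with periodic
    cluster reassignment. `model.train_step` is assumed to perform one
    gradient step and return, for each target word in the batch, the
    cluster distribution p(c | h_t) produced by Eq. (2)."""
    for step, batch in enumerate(batches):
        # M-step: update LSTM + HSM parameters on one mini-batch
        cluster_probs, targets = model.train_step(batch)

        # accumulate smoothed estimates of p(c | w) (Eq. 9)
        for w, p_c in zip(targets, cluster_probs):
            update_cluster_posterior(p_bar, counts, w, p_c)

        # E-step: every K mini-batches, rebuild the clusters (Algorithm 1)
        if (step + 1) % K == 0:
            model.cluster_of = greedy_cluster_assignment(
                p_bar, tf, n_clusters, max_words, freq_budget)
```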

4 Language Modeling Experiment

Language Modeling (LM) is a central task in NLP. The goal of LM is to learn a probability distribution over sequences of tokens from a given vocabulary $V$. The joint distribution is defined as a product of the conditional distributions of tokens given their preceding context. Given a sequence of words $w_1, \dots, w_T$, the probability distribution can be defined as:

$p(w_1, \dots, w_T) = \prod_{t=1}^{T} p(w_t \mid w_1, \dots, w_{t-1})$   (11)

To address this problem, much work has been done on both parametric and non-parametric approaches. In recent years, parametric models based on neural networks have become the standard method. In our experiments, we use a standard word-level Long Short-Term Memory (LSTM) model, since multiple works have shown that it can obtain state-of-the-art performance on different datasets Jozefowicz et al. (2016); Grave et al. (2017).

4.1 Dataset

We evaluate our method on the text8 dataset (http://mattmahoney.net/dc/textdata), using perplexity (ppl) as the evaluation metric. We also report training times for the full softmax and our approach. Text8 is a standard compression dataset containing a pre-processed version of the first 100 million characters of English Wikipedia. It has recently been used for language modeling (Mikolov et al., 2014) and has a vocabulary of 44k words. The dataset is partitioned into a training set (first 99M characters) and a development set (last 1M characters) that is used to report performance Mikolov et al. (2014).

4.2 Implementation

In our experiments, we use the same setting as the one reported in Grave et al. (2017). A one-layer LSTM model is used, and both the hidden state dimension and the input word embedding dimension are set to 512. LSTM parameters are regularized with weight decay. The batch size is set to 128. We use Adagrad Duchi et al. (2011) with learning rate 0.1, the norm of the gradients is clipped to 0.25, and gradients are truncated to 20 steps.

For our model, the number of clusters, the maximum number of words per cluster, and the frequency budget are fixed hyperparameters, with the frequency budget set as in Chen et al. (2016). We update the word clusters every fixed number of mini-batches.

4.3 Baseline Methods

We compare the proposed approach with (1) full softmax, (2) importance sampling Bengio et al. (2003b), (3) hierarchical softmax (HSM) with frequency binning Mikolov et al. (2011a), (4) differentiated softmax Chen et al. (2016), and (5) adaptive softmax Grave et al. (2017). As we use the same implementation settings as Grave et al. (2017), we take their reported results for the baseline methods. Instead of Torch, we use Theano Bergstra et al. (2010) to implement our approach; thus, in order to compare computation time, we also implement a full softmax language model in Theano. Our full softmax reaches the same perplexity on the development set as the one reported in Grave et al. (2017).

4.4 Experimental results

Method                    ppl
Full softmax              144
Importance sampling       166
HSM (frequency binning)   166
D-softmax                 195
Adaptive softmax          147
Our full softmax          144.17
Self-organized HSM        144.77
Table 1: Results on the text8 development set (Adagrad, 5 epochs, learning rate 0.1)
Method                    Training time
Our full softmax          77 min
Self-organized HSM        20 min
Table 2: Training time on the text8 training set
Figure 1: Perplexity, cluster perplexity, and in-cluster perplexity on the development set while training on the text8 dataset. The "Class change rate" line shows the percentage of words that changed cluster after the cluster update algorithm. The "Freq sum of changed words" line shows the term frequency sum of the words that changed cluster.

Table 1 shows results on the text8 dataset. Our approach achieves the best perplexity among all approximation approaches, performing nearly as well as the full softmax. Table 2 shows that our approach is almost 4 times faster than a standard softmax, and the speed-up continues to grow as the vocabulary size increases.

Figure 1 monitors the learning process of our approach. At the beginning of training, we observe a high cluster perplexity and a very low in-cluster perplexity: because the clusters are initialized randomly, the model has difficulty predicting the cluster of the target word given its preceding context. As training continues, the cluster update algorithm reassigns words to clusters based on the distribution given by Equation 9, and the cluster perplexity decreases rapidly. In contrast, the in-cluster perplexity first increases and then decreases slowly. Because our approach places similar words, which are difficult to distinguish, into the same cluster, the model has to explicitly learn the small differences between words that share a similar context. In the end, the model reaches a balance between cluster and in-cluster perplexity.

Cluster Words in same cluster
1 be have use do include make become support take show play change run
2 transmitted acclaimed pressed shipped stolen swept marketed contested blamed judged
3 delivering selecting issuing stealing imposing lowering asserting supplying regulating
4 kn mw dy volts cubes volt kcal mev ohm bhp ounces megabytes
5 empire catholic romans byzantine rulers emperors conquest catholics kingdoms catholicism
6 actor author writer singer actress director composer poet musician artist politician bishop
7 zeus achilles venus leto heracles saul ptolemy hera aphrodite beowulf ajax athena caligula
8 iraq texas afghanistan sweden boston hungary brazil iran wales michigan denmark virginia
Table 3: Word clustering examples. Each line is a subset of the words belonging to one cluster learned from the text8 corpus.

Table 3 shows some examples of words that belong to the same cluster. We observe a strong syntactic similarity and semantic closeness between these words. These examples show that our cluster update algorithm is capable of placing words with similar contexts into the same cluster. It is interesting to see that this unsupervised approach can learn word clusters with clear meaning.

5 Abstractive Sentence Summarization Experiment

Summarization is an important challenge in natural language understanding. The aim is to produce a condensed representation of an input text that captures the core meaning of the original.

Given a vocabulary $V$ and a sequence of $M$ words $x = (x_1, \dots, x_M)$, a summarizer takes $x$ as input and outputs a shortened sentence $y$ of length $N < M$. Assuming that the words in the output sentence also belong to $V$, we can express the output as $y = (y_1, \dots, y_N)$. The output sentence is then called a summary of the input sentence. Thus, the probability distribution of the summary can be defined as:

$p(y_1, \dots, y_N \mid x) = \prod_{t=1}^{N} p(y_t \mid y_1, \dots, y_{t-1}, x)$   (12)

For extractive summarization, the probability distribution is over the set of input words, while for abstractive summarization the distribution is over the entire vocabulary. In this experiment, we focus on the abstractive summarization task, which is more difficult and computationally expensive.

5.1 Dataset

We trained our model on the Gigaword5 dataset Napoles et al. (2012). This dataset was generated by pairing the headline of each article with its first sentence to create a source-compression pair. Rush et al. (2015) provided scripts to filter out outliers, resulting in roughly 3.8M training pairs, a 400K validation set, and a 400K test set. We use the 69k most frequent words in the titles as the input and output vocabulary, which corresponds to the decoder vocabulary size used in Rush et al. (2015). Out-of-vocabulary words are represented with a special symbol.

We evaluate our method on both the standard DUC-2004 dataset and the single-reference Gigaword5 test set. The DUC-2004 corpus (http://duc.nist.gov/duc2004/tasks.html) consists of 500 documents, each with 4 human-generated reference titles. Evaluation on this dataset uses the limited-length Rouge recall at 75 bytes on the DUC validation and test sets. In our work, we simply run the models trained on the Gigaword corpus as they are, without tuning them on the DUC validation set. The only change we make to the decoder is to suppress the model from emitting the end-of-summary tag, forcing it to emit exactly 30 words for every summary. Rush et al. (2015) provide a randomly sampled set of 2000 title-headline pairs as a test set. We acquired the exact test samples used by them to make a precise comparison of our models with theirs. Like Nallapati et al. (2016) and Chopra et al. (2016), we use the full-length F1 variant of Rouge (http://www.berouge.com/Pages/default.aspx) to evaluate our system.

5.2 Implementation

In this experiment, we use the standard encoder-decoder with attention architecture. Both the encoder and the decoder consist of a single-layer unidirectional LSTM; the decoder uses an attention mechanism over the source hidden states and a softmax layer to output a probability distribution over the output vocabulary. The hidden state dimension and the input embedding dimension are both set to 512. All parameters are regularized with weight decay. The batch size is 128. We use Adam Kingma and Ba (2014) with a learning rate of 0.001. No dropout or gradient clipping is used. At decode time, we use beam search of size 5 to generate the summary; the maximum length of the output summary is limited to 30.
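Since the two-level softmax yields normalized probabilities, it plugs directly into standard beam search at decode time. Below is a minimal, generic sketch of a length-limited beam search with beam size 5 and the 30-word limit used here; it is not the authors' decoder, and decoder_step, bos_id, and eos_id are hypothetical placeholders for a function returning next-word log-probabilities (e.g. the log of Equation 4 over the vocabulary) and the sentence boundary ids.

```python
import numpy as np

def beam_search(decoder_step, bos_id, eos_id, beam_size=5, max_len=30):
    """Length-limited beam search (illustrative sketch).

    decoder_step(prefix) -> (|V|,) next-word log-probabilities, given the
    partial summary `prefix`; the encoded source is assumed to be captured
    in the closure.
    """
    beams = [([bos_id], 0.0)]                     # (prefix, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            log_p = decoder_step(prefix)
            # expand each prefix with its top beam_size continuations
            for w in np.argsort(-log_p)[:beam_size]:
                candidates.append((prefix + [int(w)], score + float(log_p[w])))
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:
            if prefix[-1] == eos_id:
                finished.append((prefix, score))  # hypothesis ended early
            else:
                beams.append((prefix, score))
        if not beams:
            break
    finished.extend(beams)                        # hypotheses cut at max_len
    return max(finished, key=lambda x: x[1])[0]
```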

For our approach, we learn word clusters by training a language model on the titles in our training dataset. We then use these clusters as the fixed hierarchical softmax structure in the summarization model.

5.3 Baseline methods

We compare the performance of our model with state-of-the-art models trained with teacher forcing and a cross-entropy loss, including: (1) TOPIARY Zajic et al. (2004), (2) ABS+ Rush et al. (2015), (3) RAS-Elman Chopra et al. (2016), and (4) words-1vk5k-1sent Nallapati et al. (2016). We also include our implementation of the standard softmax as a baseline. Some recently proposed summarization models use different types of loss functions, including a reconstruction loss Miao and Blunsom (2016) and the minimum risk training (MRT) loss Shen et al. (2016). We do not compare with these methods, since this experiment focuses on evaluating our approach against other softmax-based approaches under similar implementations and learning settings.

5.4 Experiment Results

Model                  ROUGE-1   ROUGE-2   ROUGE-L
TOPIARY                25.16     6.46      20.12
ABS+                   28.18     8.49      23.81
RAS-Elman              28.97     8.26      24.06
words-1vk5k-1sent      28.61     9.42      25.24
Full softmax           27.77     9.01      24.36
Self-organized HSM     29.20     9.62      25.65
Table 4: Experiment results on the DUC-2004 test set
Model                  ROUGE-1   ROUGE-2   ROUGE-L
ABS+                   29.76     11.88     26.42
RAS-Elman              33.78     15.97     31.15
words-1vk5k-1sent      33.17     16.02     30.98
Full softmax           33.54     15.49     31.52
Self-organized HSM     34.07     16.00     31.95
Table 5: Experiment results on the Gigaword test set

I(1): china is expected to become the world ’s largest market for motorcycles by #### , with annual demand topping ## million units , according to a report published wednesday .

G: china to become world ’s largest motorcycle market

S: china to become world ’s largest motorcycles

H: china to become world ’s largest motorcycle market

I(2): eleven opposition parties went to tanzania ’s high court wednesday to seek a ruling nullifying the east african nation ’s tanzania ’s first multi-party elections .

G: opposition seeks nullification of elections

S: ## opposition parties go to tanzania court

H: tanzanian opposition seeks ruling on tanzanian elections

I(3): india has secured contracts from egypt and syria to supply spare parts for mig fighters , breaking russia ’s virtual monopoly , officials here said wednesday .

G: india beats russia to supply mig spares to egypt syria

S: india secures contracts from egypt syria

H: india to supply parts for mig fighters

I(4): sri lankan security forces wednesday geared up for the “ bloodiest fighting ” yet in their bid to capture the tamil tiger headquarters at jaffna town , amid intense resistance from the rebels , military officials said .

G: sri lanka braces for bloodiest battle tigers deny fleeing by amal jayasinghe

S: sri lanka gears up for jaffna fighting

H: sri lankan security forces gear up for fighting

I(5): japanese prime minister tomiichi murayama told us defense secretary william perry here wednesday he would uphold bilateral security ties despite strong opposition to the us bases in okinawa .

G: murayama vows to uphold security ties with us

S: murayama vows to defend bilateral security ties

H: murayama vows to uphold security ties with us

I(6): russian defence minister pavel grachev left for moscow on wednesday after winding up a three-day official visit to greece during which he signed a military cooperation accord with his counterpart gerassimos arsenis .

G: grachev ends three-day visit to greece

S: russian minister leaves for moscow

H: grachev leaves greece for moscow

I(7): an explosion in iraq’s restive northeastern province of diyala killed two us soldiers and wounded two more , the military reported monday .

G: two us soldiers killed in iraq blast december toll ###

S: two us soldiers killed in iraq

H: two us soldiers killed two wounded in iraq

Figure 2: Example sentence summaries produced on Gigaword. I is the input, G is the true headline, S is the full softmax, and H is the self-organized HSM.
Method                 Training time
Full softmax           210 min
Self-organized HSM     63 min
Table 6: Training time on the Gigaword5 training set

Tables 4 and 5 show the results of our approach compared to the different baseline methods. Our approach not only outperforms the full softmax, but also outperforms state-of-the-art methods on most evaluation metrics. Table 6 also shows that our approach is more than 3 times faster than the standard full softmax, which is widely used across different summarization models. Figure 2 presents examples of summaries generated by the self-organized HSM in comparison with the true headlines and the full softmax outputs.

As the word clusters learned on the Gigaword corpus show similar in-cluster syntactic similarity and semantic closeness, we suggest that this hierarchical structure decomposes the difficult word generation task into two easier tasks: first, deciding the correct syntactic role of the next word; and second, finding the semantically correct word within a subset of the vocabulary.

6 Conclusion

In this paper we have proposed a new self-organizing variant of the hierarchical softmax. We observe that this approach can achieve the same performance as a full softmax for language modeling, and even better performance for sentence summarization. Our approximation is also as efficient as other hierarchical softmax approximation techniques; in particular, in our experiments we observe that the self-organized HSM is at least 3 times faster than a full softmax.

Our approach yields self-organized word clusters which are influenced by the contexts of words. Examining the word clusters produced by our approach reveals that our method groups words according to their syntactic role and semantic similarity. These results are appealing in that we have obtained a certain level of understanding of grammatical structure without explicit part-of-speech tagged input. We therefore think that this approach shows promise for other NLP tasks, including machine translation and natural language generation.

References

  • Bengio et al. (2003a) Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003a. A neural probabilistic language model. Journal of machine learning research 3(Feb):1137–1155.
  • Bengio and Senécal (2008) Yoshua Bengio and Jean-Sébastien Senécal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks 19(4):713–722.
  • Bengio et al. (2003b) Yoshua Bengio, Jean-Sébastien Senécal, et al. 2003b. Quick training of probabilistic neural nets by importance sampling. In AISTATS.
  • Bergstra et al. (2010) James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: A cpu and gpu math compiler in python. In Proc. 9th Python in Science Conf. pages 1–7.
  • Chelba et al. (2013) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005 .
  • Chen et al. (2016) Welin Chen, David Grangier, and Michael Auli. 2016. Strategies for training large vocabulary neural language models. Proceedings of ACL 2016 .
  • Chopra et al. (2016) Sumit Chopra, Michael Auli, Alexander M Rush, and SEAS Harvard. 2016. Abstractive sentence summarization with attentive recurrent neural networks. Proceedings of NAACL-HLT16 pages 93–98.
  • Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159.
  • Goodman (2001) Joshua Goodman. 2001. Classes for fast maximum entropy training. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'01). IEEE, volume 1, pages 561–564.
  • Grave et al. (2017) Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. 2017. Efficient softmax approximation for gpus. Proceedings of ICLR 2017 .
  • Gutmann and Hyvärinen (2010) Michael Gutmann and Aapo Hyvärinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS. volume 1, page 6.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • Jean et al. (2015) Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. Proceeding of ACL 2015 .
  • Joulin et al. (2016) Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. 2016. Learning visual features from large weakly supervised data. In European Conference on Computer Vision. Springer, pages 67–84.
  • Jozefowicz et al. (2016) Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. Proceedings of ICLR 2016 .
  • Kingma and Ba (2014) Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
  • Le et al. (2011) Hai-Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and François Yvon. 2011. Structured output layer neural network language model. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pages 5524–5527.
  • Miao and Blunsom (2016) Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. Proceedings of EMNLP 2016 .
  • Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. Proceeding of ICLR 2013 .
  • Mikolov et al. (2011a) Tomáš Mikolov, Anoop Deoras, Daniel Povey, Lukáš Burget, and Jan Černockỳ. 2011a. Strategies for training large scale neural network language models. In 2011 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, pages 196–201.
  • Mikolov et al. (2014) Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc’Aurelio Ranzato. 2014. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753 .
  • Mikolov et al. (2010) Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. volume 2, page 3.
  • Mikolov et al. (2011b) Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. 2011b. Extensions of recurrent neural network language model. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pages 5528–5531.
  • Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119.
  • Mnih and Hinton (2009) Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Advances in neural information processing systems. pages 1081–1088.
  • Mnih and Kavukcuoglu (2013) Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in neural information processing systems. pages 2265–2273.
  • Mnih and Teh (2012) Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. Proceeding of ICML 2012 .
  • Nallapati et al. (2016) Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. Proceedings of CoNLL .
  • Napoles et al. (2012) Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Association for Computational Linguistics, pages 95–100.
  • Rush et al. (2015) Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. Proceedings of ACL 2015 .
  • Shen et al. (2016) Shiqi Shen, Yu Zhao, Zhiyuan Liu, Maosong Sun, et al. 2016. Neural headline generation with sentence-wise optimization. arXiv preprint arXiv:1604.01904 .
  • Zajic et al. (2004) David Zajic, Bonnie Dorr, and Richard Schwartz. 2004. Bbn/umd at duc-2004: Topiary. In Proceedings of the HLT-NAACL 2004 Document Understanding Workshop, Boston. pages 112–119.