Efficient Contextual Representation Learning Without Softmax Layer

Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang
Peking University
University of California, Los Angeles
liliunian@pku.edu.cn, patrickchen@g.ucla.edu
{chohsieh, kwchang}@cs.ucla.edu

Abstract

Contextual representation models have achieved great success in improving various downstream tasks. However, these language-model-based encoders are difficult to train due to their large parameter size and high computational complexity. By carefully examining the training procedure, we find that the softmax layer (the output layer) causes significant inefficiency due to the large vocabulary size. Therefore, we redesign the learning objective and propose an efficient framework for training contextual representation models. Specifically, the proposed approach bypasses the softmax layer by performing language modeling with dimension reduction, and allows the models to leverage pre-trained word embeddings. Our framework reduces the time spent on the output layer to a negligible level, eliminates almost all the trainable parameters of the softmax layer, and performs language modeling without truncating the vocabulary. When applied to ELMo, our method achieves a 4x speedup and eliminates 80% of the trainable parameters while achieving competitive performance on downstream tasks.


1 Introduction

In recent years, text representation learning approaches, such as ELMo (Peters et al., 2018a), GPT-1 (Radford et al., 2018), BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019), have been developed to provide generic contextual representations of natural language, which have led to large improvements on various downstream tasks. The key idea underlying these approaches is to train a contextual encoder with a language model objective on a large unannotated text corpus. During training, part of the text is masked and the goal is to encode the remaining context and predict the missing part. Specifically, given a corpus with vocabulary size $V$, the model consists of two parts: 1) an encoder is learned to embed the context and output an $m$-dimensional context vector, and 2) the output vector goes through a softmax layer, where it is multiplied by an embedding matrix $W$ and fed into a softmax function, to produce a conditional distribution over the missing word. Encoders trained in such a way are able to capture generic contextual information of the input text and have been used in a variety of benchmark tasks to establish state-of-the-art results.
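As a toy sketch of this two-part design (all sizes and numbers below are illustrative, not the models' actual dimensions): an $m$-dimensional context vector is scored against every row of a $V \times m$ embedding matrix and the scores are normalized with a softmax, so the cost of one prediction grows with the vocabulary size.

```python
import math

def softmax_output_layer(context_vec, embedding_matrix):
    """Score an m-dimensional context vector against every row of the
    V x m embedding matrix, then normalize: O(V * m) per prediction."""
    logits = [sum(c * e for c, e in zip(context_vec, row))
              for row in embedding_matrix]      # one logit per vocabulary word
    mx = max(logits)
    exps = [math.exp(x - mx) for x in logits]   # subtract max for stability
    z = sum(exps)
    return [e / z for e in exps]                # p(word | context)

# toy example: vocabulary of V = 3 words, context dimension m = 2
W = [[1.0, 0.0],   # word 0
     [0.0, 1.0],   # word 1
     [1.0, 1.0]]   # word 2
probs = softmax_output_layer([2.0, -1.0], W)
```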

However, training contextual representations is known to be a resource-hungry process. For example, ELMo was reported to take about two weeks to train on a one-billion-token corpus with a vocabulary of 800,000 words using three GPUs (https://github.com/allenai/bilm-tf/issues/55). This slow training procedure hinders the development cycle, prevents fine-grained parameter tuning, and makes training contextual representations inaccessible to the broader community. More importantly, the success of these models stems from the large amount of data they are trained on. It is challenging, if not impossible, to train a contextual representation model on a larger corpus with tens or hundreds of billions of tokens.

In this work, we explore how to accelerate contextual representation learning. We target the softmax layer as the major cause of inefficiency. This component takes up a huge portion of all trainable parameters (80% for ELMo) and consumes a huge amount of training time. However, this layer is discarded in the final model, as the goal of contextual representation learning is to build a generic encoder. Therefore, it is rather a waste to allocate extensive computational resources to the softmax layer to obtain the best prediction of masked words. As mentioned, language modeling can be viewed as predicting the missing words based on a context vector generated by the contextual encoder and an embedding matrix $W$ for target words (Press and Wolf, 2017; Inan et al., 2016). Learning the contextual encoder is difficult, while learning word representations has been extensively studied (Mikolov et al., 2013). It is therefore natural to use a pre-trained word embedding to replace $W$ and thus decouple learning contexts from learning words.

In this paper, we propose an efficient framework to learn the contextual encoder by leveraging pre-trained word embeddings (the code and models will be released in the near future). Instead of using a softmax layer to predict the distribution of the missing word, we utilize and extend the Semfit layer (Kumar and Tsvetkov, 2018) to predict the embedding of the missing word. During training, the contextual encoder is learned by minimizing the distance between its output and a pre-trained target word embedding. The Semfit layer, with constant time complexity and a small memory footprint, perfectly serves our goal of decoupling the learning of contexts and words and devoting most computational resources to the contextual encoder. Our contributions are as follows:

  • We introduce the Semfit layer into contextual representation learning and further improve it with open-vocabulary word embeddings. The resulting model is computationally efficient and can be trained with an untruncated vocabulary. (Section 3)

  • We discuss the global objective of the Semfit layer and draw a connection to dimension reduction. We show that the Semfit layer is particularly suitable for contextual representation learning. (Section 4)

  • We empirically show that our approach significantly reduces the training time of ELMo, while maintaining competitive performance on most of the end tasks. (Section 5)

  • We conduct a thorough analysis of our approach, discussing different modeling choices and computational efficiency. We also analyze the subword language model, a strong baseline used in GPT and BERT to circumvent the problem of large vocabulary. (Section 6)

2 Related Work

Contextual Representation

We review recently proposed contextual representation models from two aspects: how they are trained and how these pre-trained models are used in downstream tasks.

CoVe (McCann et al., 2017) trained a machine translation model and used the source language encoder as a contextual representation model for other downstream tasks. As large in-domain parallel corpora are hard to obtain, the potential of CoVe is limited. In contrast, several recent approaches learn contextual encoders on unannotated corpora with language model objectives. ELMo (Peters et al., 2018a) concatenated a forward and a backward LSTM-based language model, while GPT-1 (Radford et al., 2018) and GPT-2 (Radford et al., 2019) used unidirectional transformer-based language models. BERT (Devlin et al., 2018) introduced masked language models and provided deep bidirectional representations.

There are mainly two existing strategies for applying pre-trained contextual representations to downstream tasks: 1) feature-based and 2) fine-tuning. For the feature-based approach (e.g., ELMo, CoVe), fixed features are extracted from the contextual encoder and inserted into task-specific architectures. In the fine-tuning approach (e.g., BERT, GPT-1), the contextual encoder is designed as a part of the network architecture for downstream tasks, and its parameters are fine-tuned with downstream task data. BERT was designed for the fine-tuning approach but it was also evaluated with the feature-based approach. GPT-2 is a scaled-up version of GPT-1 and exhibits strong performance under zero-shot settings.

The Large Vocabulary Issue

The large and ever-growing vocabulary has been considered an obstacle to scaling up language models. We review existing solutions to this issue from both the language modeling and contextual representation literature.

Most studies on language modeling focus on directly reducing the complexity of the softmax layer. Following Kumar and Tsvetkov (2018), we roughly group them into two categories: sampling-based approximations and structural approximations. Sampling-based approximations include the sampled softmax (Bengio et al., 2003) and NCE (Mnih and Teh, 2012): the sampled softmax approximates the normalization term of the softmax by sampling a subset of negative targets, while NCE replaces the softmax with a binary classifier. On the other hand, structural approximations, such as the hierarchical softmax (Morin and Bengio, 2005) and the adaptive softmax (Grave et al., 2016), form a structural hierarchy to avoid the expensive normalization. The adaptive softmax, in particular, groups words in the vocabulary into a short-list of frequent words and clusters of rare words. For frequent words, a softmax over the short-list suffices, which reduces computation and memory usage significantly. The adaptive softmax has been shown to achieve results close to those of the full softmax while maintaining high GPU efficiency (Merity et al., 2018).

Regarding contextual representation models, ELMo used the sampled softmax, while GPT and BERT resorted to subword methods. Specifically, they use WordPiece (Wu et al., 2016) or BPE (Sennrich et al., 2016) to split words into subwords, and the language models are trained to consume and predict these subwords. This method is efficient and scalable, as the subword vocabulary can be kept small. Language models trained this way are neither strictly word-level nor character-level, so we categorize them as subword-level language models. One potential drawback, however, is that these models produce representations for fragments of words, and it takes extra effort to generate word-level representations from them. In this paper, we focus on word-level language models; we discuss subword-level language models in Section 6.2.
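To make the subword idea concrete, here is a toy greedy longest-match segmenter in the spirit of WordPiece/BPE (the real algorithms learn their vocabularies from corpus statistics; the five-piece vocabulary below is made up): a small subword inventory can cover an open word vocabulary, but the model then represents fragments rather than whole words.

```python
# Toy greedy longest-match segmentation (illustrative only; real WordPiece/BPE
# vocabularies are learned from data).
def segment(word, subword_vocab):
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest piece first
            piece = word[i:j]
            if piece in subword_vocab:
                pieces.append(piece)
                i = j
                break
        else:
            pieces.append(word[i])          # fall back to a single character
            i += 1
    return pieces

vocab = {"un", "break", "able", "b", "re"}
pieces = segment("unbreakable", vocab)
```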

3 Approach

In this section, we illustrate our approach to accelerating the training process of contextual representation models, where the goal is to build a generic contextual encoder.

We first review the procedure for learning a contextual encoder. Given a sequence of words $(w_1, w_2, \ldots, w_n)$, the encoder seeks to form a rich contextual representation for every word based on its surrounding words, i.e., the context. An input layer, which is usually a word embedding or a character-CNN (Kim et al., 2016), produces a context-insensitive representation $x_t$ for each word $w_t$. Then $x_t$ goes through a $K$-layer contextualizing structure, such as an LSTM (Hochreiter and Schmidhuber, 1997), Gated CNN (Dauphin et al., 2017), or Transformer (Vaswani et al., 2017). Each layer outputs a context-dependent vector $h_t^k$ (for example, the hidden states if an LSTM is used as the contextualizing structure). The top layer's output $h_t^K$ is the final output of the encoder. Notice that when using this contextual encoder in downstream tasks, recent methods go beyond simply using the top layer's output: they either combine different layers' outputs to obtain a hierarchical representation (ELMo) or fine-tune the whole encoder (GPT). To avoid confusion, we refer to the contextual information captured by the encoder as the contextual representation, denote the top layer's output vector as $c_t = h_t^K$, and refer to it as the context vector.

Training the contextual encoder involves embedding a context into a context vector and using it to predict a missing word. (In unidirectional language models, $c_t$ depends only on $w_1, \ldots, w_t$ or $w_t, \ldots, w_n$, and we train $c_t$ to predict $w_{t+1}$ or $w_{t-1}$, respectively. For the masked language model in BERT, part of the text is masked and the task is to predict the masked words: if word $w_t$ is masked, $c_t$ depends on the whole masked sentence and we train $c_t$ to predict $w_t$.) The conventional way is to attach a softmax layer to the contextual encoder. The softmax layer multiplies $c$ with an embedding matrix $W$ and then uses a softmax function to produce a conditional distribution over the vocabulary. Suppose the target word is $w$ and the context is $c$; the learning objective for each instance is then to minimize the negative log-likelihood of $w$ given $c$:

$-\log p(w \mid c) \;=\; -\,w^\top c \;+\; \log \sum_{w'} \exp\left(w'^\top c\right) \qquad (1)$

Here, $w$ is the row of $W$ corresponding to the target word, and the second term sums over the vocabulary. $V$ is the size of the vocabulary, while $m$ is the dimension of the context vector. The overall learning objective is

$\sum_{(c, w)} \#(c, w)\, \bigl(-\log p(w \mid c)\bigr) \qquad (2)$

where $\#(c, w)$ is the number of occurrences of the pair $(c, w)$ in the corpus. Note that the size of $W$ and the computational complexity of the normalization term (the second term in Eq. (1)) scale linearly with the vocabulary size. Therefore, when the vocabulary is large, the vanilla softmax layer becomes the bottleneck. Many approaches (Section 2) have been proposed to address this issue, but we believe there is still room for improvement.

We accelerate the training by replacing the softmax layer with a Semfit layer. Instead of maximizing the log-likelihood, after embedding the context into $c$, we directly minimize the distance between the context vector $c$ and a target word embedding $w$:

$l(c, w) \;=\; d(c, w) \qquad (3)$

Notice that the target word embedding $w$ is pre-trained and fixed. The distance function $d$ could be the L2 distance, the cosine distance, or a probabilistic distance metric (see the discussion in Section 4). In the rest of this section, we discuss the advantages of the proposed approach for contextual representation learning.
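As a minimal sketch of this training signal (vectors and words are made up, and the L2 distance is used for simplicity): the encoder output is regressed onto the frozen, pre-trained embedding of the target word, so one prediction only touches two $m$-dimensional vectors and never the rest of the vocabulary.

```python
# Minimal Semfit-style loss sketch (illustrative vectors, L2 distance):
# the target embedding table is pre-trained and frozen, never updated.
def l2_semfit_loss(context_vec, target_embedding):
    return sum((c - t) ** 2 for c, t in zip(context_vec, target_embedding))

pretrained = {"cat": [0.9, 0.1], "dog": [0.8, 0.2]}   # frozen lookup
context_vec = [0.85, 0.12]                            # encoder output for "the ___ sat"
loss = l2_semfit_loss(context_vec, pretrained["cat"])
```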

3.1 Computational Efficiency

We analyze the computational efficiency of the proposed approach against existing acceleration approaches for the softmax layer. In particular, we discuss the sampled softmax and the adaptive softmax, both being popular choices for speeding up softmax (Jozefowicz et al., 2016; Grave et al., 2016; Merity et al., 2018).

Computational Time Complexity

In the Semfit layer, we only need to calculate the distance between two $m$-dimensional vectors. Without the normalization term or the need to sample words, the Semfit layer has $O(m)$ time complexity, which grants scalability with respect to the vocabulary size: the time spent on the Semfit layer remains constant and negligible regardless of how large the vocabulary grows. In comparison, the time complexity of the sampled softmax is proportional to the number of negative samples per batch, and when the vocabulary is huge, a large number of negative samples are needed (Jozefowicz et al., 2016). For the adaptive softmax, the time complexity is determined by the capacities of the short-list and the rare-word clusters, which grow sub-linearly with the vocabulary size.
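The gap can be made concrete with a back-of-the-envelope multiply count per prediction (the vocabulary size matches the 800,000-word figure quoted earlier for ELMo's corpus; the context dimension of 512 is an assumed placeholder):

```python
# Rough multiply counts for one prediction (toy accounting, constants ignored):
# a full softmax scores all V words against the context vector, while the
# Semfit layer compares just two m-dimensional vectors.
def softmax_mults(V, m):
    return V * m        # V dot products of length m

def semfit_mults(m):
    return m            # one distance between two m-vectors

V, m = 800_000, 512     # vocabulary size from the text; m is an assumed value
ratio = softmax_mults(V, m) / semfit_mults(m)
```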

Trainable Parameter Size

The softmax layer takes up a huge part of the parameters of a language model. Here, we list the parameter sizes of models reported in the literature trained on the One Billion Word Benchmark (Chelba et al., 2013), which is also the corpus ELMo was trained on. For ELMo, the character-CNN and LSTM have about 100 million parameters, while the softmax layer has 400 million parameters. For the bigLSTM in Jozefowicz et al. (2016), the softmax takes up 840 million parameters, while all other parts amount to 182 million. These models use the sampled softmax, which only accelerates the calculation of the normalization term and does not reduce the number of trainable parameters. The adaptive softmax instead reduces the embedding dimension for rare words; the trainable parameter size is effectively reduced but still remains sizable. For a model trained on the same corpus (Grave et al., 2016), the LSTM has around 50 million parameters while the adaptive softmax still amounts to 240 million. Our approach, on the other hand, uses a pre-trained word embedding, reducing the trainable parameters of the output layer from hundreds of millions to almost zero.

GPU Memory Efficiency

Our approach exhibits exceptional GPU memory efficiency, due to the reduced computational time complexity (fewer intermediate results to keep) and the reduced trainable parameter size (fewer parameters to store). (We keep the pre-trained embedding in main memory instead of loading it into GPU memory, since it does not need to be updated. This comes at a cost, as we must move the embeddings needed for the words in each batch from CPU to GPU at each time step. When GPU memory is abundant, we could keep the word embeddings on the GPU to avoid this communication cost; but on mainstream hardware, GPU memory is comparatively limited, and this trick proved beneficial in our preliminary experiments.) As a result, compared to the adaptive softmax, models with our technique are 2 to 5 times more memory efficient (Section 6.3). The memory efficiency further compounds the speed advantage, because loading more words in each batch allows better parallelism on GPUs.
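The batching trick described here can be sketched without any framework: keep the frozen table in host memory and copy only the rows needed by the current batch to the device. The dictionary below stands in for the $V \times m$ table, and `gather_batch_targets` is a hypothetical helper name, not part of any real library.

```python
# Framework-free sketch of the CPU-to-GPU gather step: the full frozen
# embedding table stays in main memory; only rows for words that actually
# occur in the batch are shipped to the device.
pretrained = {                 # stand-in for a V x m table in CPU memory
    "the": [0.1, 0.2],
    "cat": [0.9, 0.1],
    "sat": [0.3, 0.7],
    "dog": [0.8, 0.2],
}

def gather_batch_targets(batch_words, table):
    unique = sorted(set(batch_words))              # each needed row copied once
    device_table = {w: table[w] for w in unique}   # this is what would move to GPU
    return device_table

batch = ["the", "cat", "sat", "the"]
device_table = gather_batch_targets(batch, pretrained)
```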

Scalability Across GPUs

To scale up modern deep learning systems, frameworks are designed to train models with synchronous SGD with very large batch size and consequently a large number of GPUs (Goyal et al., 2017; You et al., 2018). However, the overhead of running on multiple GPUs is the communication cost spent on synchronizing the parameters and their gradients across machines, which is proportional to the size of parameters that need to be updated. For the sampled softmax, due to the use of the sparse gradient, the communication cost is proportional to the number of the sampled words. For the adaptive softmax, since full softmax is still used within the short-list and each cluster, the sparse gradient trick is not available and the communication cost is proportional to the trainable parameter size. The Semfit layer, on the other hand, incurs little communication cost, making it more efficient to train models on multiple GPUs.

Easily-accessible Word Embedding

With all these computational advantages, the only prerequisite for the Semfit layer is a pre-trained word embedding. Fortunately, learning word embeddings is much cheaper than learning contextual representations. For example, training a FastText embedding (Bojanowski et al., 2017) on the One Billion Word Benchmark took merely two hours on an average CPU machine. Moreover, several existing word embeddings trained on large corpora in different domains are publicly available.

3.2 Open-vocabulary Word Embedding

Theoretically, we can use any pre-trained word representation as the target word embedding for the Semfit layer. We exploit a particular kind of word representation, the open-vocabulary word embedding, such as the FastText embedding and the mimick model (Pinter et al., 2017). These embeddings utilize character or subword information to provide embedding for out-of-vocabulary (OOV) words. Combining the Semfit layer with open-vocabulary embedding, we can train contextual encoders with untruncated vocabulary while making substantial simplifications to the input layer.
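A toy sketch of how a FastText-style open-vocabulary embedding covers OOV words: the word vector is assembled from character n-gram vectors, so even an unseen word gets a representation. (Real FastText hashes n-grams into buckets and uses several n-gram lengths; the tiny 3-gram table here is made up.)

```python
# Toy open-vocabulary embedding from character 3-grams (illustrative only).
def char_ngrams(word, n=3):
    padded = "<" + word + ">"        # boundary markers, as in FastText
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def oov_embedding(word, ngram_vectors, dim=2):
    grams = [g for g in char_ngrams(word) if g in ngram_vectors]
    if not grams:
        return [0.0] * dim
    summed = [sum(ngram_vectors[g][k] for g in grams) for k in range(dim)]
    return [s / len(grams) for s in summed]   # average the known n-gram vectors

ngram_vectors = {"<ca": [1.0, 0.0], "cat": [0.0, 1.0], "at>": [1.0, 1.0]}
vec = oov_embedding("cat", ngram_vectors)
```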

Scalable Word Representation

In the vanilla softmax, a simple $V$-by-$m$ matrix is used to provide word representations. However, when we further scale up current language models, the parameter size of this matrix could become intractable. The adaptive softmax reduces the embedding dimension for rare words so that the parameter size grows only sub-linearly. Here, we take another route and propose to utilize an open-vocabulary word embedding, which can represent an unlimited number of words with a fixed number of parameters, providing the much-needed scalability.

Learning with An Untruncated Vocabulary

Combining the $O(m)$ time complexity and the scalable word representation, we can conduct training with an untruncated vocabulary. Softmax-based methods must keep a vocabulary to calculate the normalization term. With the Semfit layer, we only need the target word embedding and the context vector to conduct training; there is no need to truncate the vocabulary, or even to keep one, because we can use multiple worker processes to prepare the needed embeddings dynamically.

According to Jozefowicz et al. (2016), the ability to model rare words is an essential advantage of the neural models against N-gram models. Now with an untruncated vocabulary, we possess the power to model unlimited rare words. This feature is especially attractive if we are training on domains or languages with a long tail, for example, the biomedical domain where truncating the vocabulary may not be acceptable.

Open-vocabulary Input Layer

To be easily applied in various tasks, the contextual encoder usually has an open-vocabulary input layer. ELMo used a character-CNN, but it is relatively slow. We therefore propose to reuse the pre-trained open-vocabulary word embedding as the input layer of the contextual encoder, reducing the input layer to a simple embedding lookup. This also aligns with the main spirit of our approach: spend computational resources on the most important part, the contextualizing structure such as the LSTM.

4 The Semfit Layer

Though the Semfit layer is intuitive, its properties are less well known. In this section, we provide an analysis of the Semfit layer, extending the work of Kumar and Tsvetkov (2018). We investigate the global objectives of different distance functions and link the Semfit layer to probabilistic language modeling and dimension reduction, which further justifies the intuition behind the Semfit layer. Finally, we discuss why the Semfit layer is particularly suited to contextual representation learning.

4.1 Global Objective

Recall that the Semfit layer minimizes the distance between the context vector $c$ and the target word embedding $w$, i.e., $l(c, w) = d(c, w)$. There are several choices for the distance function $d$. We investigate losses with different distance functions from Kumar and Tsvetkov (2018) (for the L2 loss, $w$ is the unnormalized word embedding, while for the cosine and NLLvMF losses, $w$ is the normalized word embedding; for simplicity, we abuse notation and write $w$ uniformly):

  • L2: $l(c, w) = \frac{1}{2}\, \|c - w\|_2^2$

  • Cosine: $l(c, w) = -\dfrac{c^\top w}{\|c\|_2}$

  • NLLvMF: $l(c, w) = -\log C_m(\|c\|_2) - c^\top w$, where $C_m(\cdot)$ is the normalization constant of the von Mises-Fisher distribution
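Two of these losses can be sketched directly (NLLvMF is omitted because its normalizer $C_m$ involves modified Bessel functions); the vectors below are made up:

```python
import math

# Sketches of the L2 and cosine losses (illustrative vectors).
def l2_loss(c, w):
    return sum((ci - wi) ** 2 for ci, wi in zip(c, w))

def cosine_loss(c, w):
    dot = sum(ci * wi for ci, wi in zip(c, w))
    norm_c = math.sqrt(sum(ci * ci for ci in c))
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    return -dot / (norm_c * norm_w)   # -1 when perfectly aligned

c = [0.6, 0.8]          # context vector
target = [0.6, 0.8]     # embedding of the true target word
other = [-0.8, 0.6]     # embedding of an unrelated word (orthogonal here)
```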

Although the above losses have different interpretations, we find that they behave similarly. Specifically, the global objective of each loss can be rewritten in a uniform way (up to loss-specific normalization of $c$ and $w$):

$\sum_{c} \#(c) \left( -c^\top \bar{w}_c + \lambda(c) \right),$

where $\#(c)$ is the number of occurrences of context $c$, $\bar{w}_c = \sum_{w} p(w \mid c)\, w$ is the conditionally weighted average of the target embeddings, and $\lambda(c)$ is a loss-specific term that constrains the norm of $c$. The first term encourages a large inner product between the contextual encoder output and the average target embedding, while the second term constrains the encoder output's norm. Although the norm of the optimal $c$ differs across losses, the direction of the optimal $c$ is always that of $\bar{w}_c$. Simply put, the Semfit layer models the weighted average word embedding under a context.

4.2 Dimension Reduction

We seek to justify the very idea of the Semfit layer. We want to answer why minimizing the distance between the encoder output and the target word embedding should work.

We first show that the Semfit layer essentially performs language modeling after dimension reduction. Following Levy and Goldberg (2014) and Yang et al. (2017), language modeling can be viewed as modeling a conditional probability matrix $A \in \mathbb{R}^{N \times V}$, where $N$ is the number of all possible contexts and $V$ is the vocabulary size. Each row of $A$ is the conditional distribution over words under a certain context. Softmax-based methods seek a context matrix $C \in \mathbb{R}^{N \times m}$ and an embedding matrix $W \in \mathbb{R}^{V \times m}$ such that $\mathrm{softmax}(C W^\top)$ best approximates $A$. With the Semfit layer, we instead model $\bar{w}_c = \sum_w p(w \mid c)\, w$, which in matrix form translates to $C = A W$. Since $W$ is pre-trained and fixed, we are essentially conducting "multivariate regression" on $A$ after dimension reduction, with $W$ as the projection matrix.
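This matrix view can be checked on a tiny made-up example: the Semfit target for each context, the weighted average embedding $\sum_w p(w \mid c)\, w$, is exactly the corresponding row of the product of the conditional probability matrix and the embedding matrix.

```python
# Tiny check of the matrix view (numbers made up):
# 2 contexts, 3 words, embedding dimension m = 2.
A = [[0.5, 0.3, 0.2],        # p(word | context) for each context (rows sum to 1)
     [0.1, 0.1, 0.8]]
W = [[1.0, 0.0],             # one embedding row per word
     [0.0, 1.0],
     [1.0, 1.0]]

def weighted_average(probs, embeddings):
    """sum_w p(w|c) * w, computed directly."""
    m = len(embeddings[0])
    return [sum(p * e[k] for p, e in zip(probs, embeddings)) for k in range(m)]

def matmul_row(probs, embeddings):
    """The same quantity as one row of A @ W."""
    m = len(embeddings[0])
    return [sum(probs[j] * embeddings[j][k] for j in range(len(probs)))
            for k in range(m)]

targets = [weighted_average(row, W) for row in A]
```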

So the question becomes: what matrix serves as a good projection matrix, and why would a pre-trained word embedding be a good choice? When doing dimension reduction, we either strive to preserve the most variance (PCA) (Jolliffe, 2011) or to achieve the least reconstruction error (SVD) (Horn and Johnson, 1990). Suppose we had access to $A$; we could then easily perform PCA or SVD. Concretely, one could perform SVD on $A$ to get $A = U \Sigma V^\top$, then take the rank-$m$ truncation $A_m = U_m \Sigma_m V_m^\top$, where $U_m$, $\Sigma_m$, and $V_m$ keep only the top $m$ singular directions. In PCA, $A$ is centered first, while in SVD, $A$ is the raw matrix. $A_m$ is the matrix of rank $m$ that best approximates the original matrix $A$, and $V_m$ is the optimal projection matrix in either case. In practice, although we cannot obtain the full $A$, we can use a simplified definition of context and get an approximated $A$.
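A tiny numpy illustration of this truncation (the 3 x 4 "conditional probability" matrix is made up; its third row is the average of the first two, so the matrix has rank 2 and the rank-2 truncation recovers it exactly):

```python
import numpy as np

# Rank-m truncation via SVD on a made-up conditional probability matrix:
# rows are contexts, columns are words.
A = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.4, 0.4, 0.1, 0.1]])   # third row = average of the first two

U, S, Vt = np.linalg.svd(A, full_matrices=False)
m = 2
A_m = U[:, :m] @ np.diag(S[:m]) @ Vt[:m, :]   # best rank-2 approximation of A
projected = A @ Vt[:m, :].T                    # m-dim target per context (A @ V_m)
```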

Interestingly, using a simplified definition of context is common practice in learning word embeddings (Levy and Goldberg, 2014). SVD is also a recognized method for constructing word embeddings: when $A$ is the PMI matrix or the conditional probability matrix, $V_m \Sigma_m^{\alpha}$ serves as a good word embedding (Levy and Goldberg, 2014), where $\alpha$ is a tunable parameter.

This finding shows that there exists a natural link between the Semfit layer and SVD word embeddings. Moreover, we can intuitively see why other kinds of word embeddings might also serve well as the projection matrix $W$. Suppose we want to preserve the variance of $A$ in the spirit of PCA. If two contexts have very different conditional distributions, then the weighted average embeddings $\bar{w}_c$ for these two contexts should point in very different directions, assuming that dissimilar words have dissimilar word vectors. Thus, a pre-trained word embedding may help us preserve much of the variance.

4.3 Decoding Algorithm

The above analysis strengthens our argument that the Semfit layer is especially suited to contextual representation learning. To illustrate, note that the Semfit layer may not excel at predicting the next word, which in theory is determined by its ability to approximate $p(w \mid c)$. The Semfit layer models $\bar{w}_c$, and unless $W$ is the optimal projection matrix, it is hard to recover an approximated $A$ from $C = AW$; therefore, we cannot compute perplexity from the Semfit layer. This could cause problems for tasks like machine translation or text generation, where the ability to predict the next word is essential. For contextual representation, no such problem exists, as we do not seek to induce $p(w \mid c)$.

As a way to predict the next word with the Semfit layer, Kumar and Tsvetkov (2018) proposed to search for the nearest neighbor of the context vector $c$ in the target embedding space. This decoding algorithm is sub-optimal unless the distribution $p(w \mid c)$ is sharp. Suppose training reached the global optimum, so that $c = \bar{w}_c = \sum_{w'} p(w' \mid c)\, w'$; finding the nearest neighbor of $c$ is then equivalent to finding the word vector with the largest inner product with $c$ (this holds for the cosine and NLLvMF distances, on which we focus here; for the L2 distance the analysis is more complicated, but the decoding algorithm is still flawed). For a specific word $w$, its inner product

$w^\top c \;=\; p(w \mid c)\, \|w\|_2^2 \;+\; \sum_{w' \neq w} p(w' \mid c)\, w^\top w' \qquad (4)$

is influenced not only by $w$'s conditional probability (the first term), but also by its inner products with other word vectors, weighted by their conditional probabilities (the second term). As a result, the most probable word does not always get chosen. Rewriting this in matrix form, decoding essentially computes $C W^\top = A W W^\top$, and unless $W$ comes from the SVD of $A$, the resulting matrix is not an approximation of $A$. Note that in some applications, such as machine translation, the conditional distribution is often sharp. In that case, the first term in Eq. (4) dominates the second term and the approximation is valid.
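A worked instance of this failure mode, with made-up embeddings: two near-synonyms share a direction, their probability mass adds up in the context vector, and nearest-neighbor decoding picks one of them instead of the individually most probable word.

```python
# Failure mode of nearest-neighbor decoding (illustrative numbers):
# "river" and "stream" are given identical embeddings as an extreme
# synonym pair, so their probability mass combines.
words = ["bank", "river", "stream"]
p = [0.4, 0.3, 0.3]          # p(word | context); "bank" is the argmax
E = [[1.0, 0.0],             # "bank"
     [0.0, 1.0],             # "river"
     [0.0, 1.0]]             # "stream"

# At the global optimum the context vector is the weighted average embedding.
c = [sum(p[j] * E[j][k] for j in range(3)) for k in range(2)]   # -> [0.4, 0.6]

scores = [sum(ck * ek for ck, ek in zip(c, e)) for e in E]      # inner products
decoded = words[scores.index(max(scores))]
most_probable = words[p.index(max(p))]
```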

5 Experiment

The proposed approach is generic and can be applied to accelerating the training process of word-level contextual representation models. In this section, we take ELMo as an example and demonstrate that our approach significantly speeds up ELMo, while largely maintaining its performance.

5.1 Setup

ELMo consists of a forward and a backward language model, trained on the One Billion Word Benchmark for 10 epochs. For a fair comparison, all models we introduce are trained on the same corpus for 10 epochs. All experiments are conducted on a workstation with 8 GeForce GTX 1080Ti GPUs, 40 Intel Xeon E5 CPUs, and 128G main memory. The training code is written in PyTorch such that we could evaluate on most downstream tasks with AllenNLP (Gardner et al., 2018).

Models

Model Input Contextualizing Output
ELMo CNN LSTM Sampled Softmax
ELMo-S FastText LSTM w/ LN Semfit w/ FastText
ELMo-A FastText LSTM w/ LN Adaptive Softmax
ELMo-S FastText LSTM w/ LN Semfit w/ FastText
ELMo-S Trained CNN LSTM w/ LN Semfit w/ Trained CNN
ELMo-Sub Subword LSTM w/ LN Softmax
Table 1: Specifications of ELMo models we introduce in Section 5 and Section 6.

To verify the efficacy of the proposed method, we introduce an ELMo model trained with our acceleration approach (ELMo-S). We also include the original ELMo (ELMo) for comparison. Our main proposal is to use the Semfit layer as the output layer but there are substantial differences between ELMo and ELMo-S besides the output layer. To make a fair comparison, we introduce an ELMo model with the adaptive softmax (ELMo-A). ELMo-A is designed to differ from ELMo-S only in the output layer so that we could study the effect of the Semfit layer in isolation.

In the following, we describe the details of these models from three aspects: 1) the input layer, 2) the contextualizing structure, and 3) the output layer. A brief summary of these models is available in Table 1.

  • ELMo: The input layer is a character-CNN, and the contextualizing structure is an LSTM with projection (Sak et al., 2014). The output layer is a sampled softmax with 8192 negative samples per batch. This model is provided in AllenNLP by Peters et al. (2018a).

  • ELMo-S: The input layer is the FastText embedding trained on Common Crawl (Mikolov et al., 2017), denoted as FastText. (There are two ways to use the FastText embedding: for words in a pre-defined vocabulary, one could use the conventional word embedding; for OOV words, one could compute the word embedding from subword embeddings. For consistency, we always use the subword embeddings.) The contextualizing structure is an LSTM with projection of the same size as the one in ELMo, but we add layer norm (Ba et al., 2016) after the projection layer, as we find it beneficial. The output layer is the Semfit layer with the FastText embedding as the target embedding. We use the cosine distance as the distance function of the Semfit layer because it is free of hyper-parameters and gives satisfying, stable performance in preliminary experiments. The learning rate schedule from Chen et al. (2018) is used to aid large-batch training.

  • ELMo-A: The input layer, the contextualizing structure, and the training recipe of ELMo-A are kept the same as for ELMo-S. The only difference is that the output layer of ELMo-A is an adaptive softmax with 120 million parameters, half the size of the one reported in Grave et al. (2016) on the same corpus. ELMo-A achieves a perplexity of 35.8, 3.9 points lower than ELMo's 39.7, showing that it is an efficient yet strong baseline.

We also list the performance of the following models for reference. ELMo and Base are results reported in Peters et al. (2018a), but they were tested under different configurations. FastText is the non-contextual word embedding trained on the Common Crawl corpus with 600 billion tokens, which serves as a non-contextual baseline.

Model  | Time (days x GPUs) | Speedup | Batch | Params
ELMo   | 14 x 3             | 1x      | 128   | 499M
ELMo-A | 5.7 x 4            | 1.8x    | 256   | 196M
ELMo-S | 2.5 x 4            | 4.2x    | 768   | 76M

Table 2: Model efficiency. Time is in days x number of GPUs. Batch is the maximal batch size per GPU. Params is the total number of trainable parameters in millions.
Task  | ELMo  | Base  | FastText     | ELMo         | ELMo-A       | ELMo-S       | ELMo-S       | ELMo-S
SNLI  | 88.7  | 88.0  | 87.7         | 88.5         | 88.9         | 88.8         | 88.4         | 88.2
Coref | NA    | NA    | 68.90        | 72.9         | 72.9         | 72.9         | 73.0         | 72.9
SST-5 | 54.7  | 51.4  | 51.30 ± 0.77 | 52.96 ± 2.26 | 53.58 ± 0.77 | 53.80 ± 0.73 | 53.86 ± 4.02 | 53.38 ± 0.68
NER   | 92.22 | 90.15 | 90.97 ± 0.43 | 92.51 ± 0.28 | 92.28 ± 0.20 | 92.24 ± 0.10 | 92.03 ± 0.47 | 92.24 ± 0.36
SRL   | 84.6  | 81.4  | 80.2         | 83.4         | 82.7         | 82.4         | 82.2         | 82.8

Table 3: Performance of the main competing models and two ablation models on five NLP benchmarks. Due to the small test sizes for NER and SST-5, we report the mean and standard deviation across three runs.

Downstream Tasks

We follow ELMo and use the feature-based approach to evaluate contextual representations on downstream benchmarks. ELMo was evaluated on six benchmarks; we conduct evaluations on five of them. SQuAD (Rajpurkar et al., 2016) is omitted for implementation reasons: the SQuAD experiment in Peters et al. (2018a) was conducted with code written in TensorFlow, and the experiment setting is neither currently available in AllenNLP (https://github.com/allenai/allennlp/pull/1626) nor easily replicated in PyTorch. We briefly review the benchmarks and task-specific models below; for detailed descriptions, please refer to Peters et al. (2018a).

  • SNLI: The textual entailment task seeks to determine whether a “hypothesis” can be entailed from a “premise”. The dataset is the SNLI dataset (Bowman et al., 2015) and the model is ESIM (Chen et al., 2017).

  • Coref: Coreference resolution is the task of clustering mentions in text that refer to the same underlying entities. The dataset is from CoNLL 2012 shared task (Pradhan et al., 2012) and the model is from Lee et al. (2018).

  • SST-5: SST-5 (Socher et al., 2013) involves selecting one of five labels to describe a sentence from a movie review. The model is the BCN model from McCann et al. (2017).

  • NER: The CoNLL 2003 NER task (Sang and De Meulder, 2003) consists of newswire from the Reuters RCV1 corpus tagged with four different entity types. The model is a biLSTM-CRF from Peters et al. (2018a), similar to Collobert et al. (2011).

  • SRL: Semantic role labeling (SRL) models the predicate-argument structure of a sentence, and is often described as answering “Who did what to whom”. The model is from He et al. (2017) and the dataset is from Pradhan et al. (2013).

For SNLI, SST-5, NER, and SRL, we use the same downstream models as in Peters et al. (2018a), re-implemented in AllenNLP. For Coref, Peters et al. (2018a) used the model from Lee et al. (2017); its authors later achieved superior scores with an updated model (Lee et al., 2018), so we adopt the improved model in our Coref experiments. For all tasks, we use the default configuration with modest tuning, and all models are tested under the same configurations. Note that the hyper-parameters were tuned to maximize the performance of the original ELMo and may not be optimal for other models. (For example, the number of epochs was tuned for ELMo, and some models may need more epochs to train. In addition, for SRL, the score reported by AllenNLP is lower than the score from the official CoNLL script.) Since all models are tested under the same hyper-parameters and this setting favors the baseline ELMo model, the results still reflect the performance of our approach.

5.2 Model Efficiency

In Table 2, we report the computational efficiency of the models. (The statistics of ELMo are those reported by its authors, so the number of GPUs differs for ELMo; we tested ELMo-A and ELMo-S with the same kind of GPU, so their numbers are directly comparable. This setting actually favors ELMo, as the communication cost on three cards is smaller than that on four cards.) Overall, our simplifications to the input and output layers of ELMo bring significant gains in computational efficiency: ELMo-S is 4.2x faster and 6x more memory-efficient than ELMo.

To give a clear view of the speedup the Semfit layer brings, we compare ELMo-S with ELMo-A. ELMo-A differs from ELMo-S only in the output layer by using an adaptive softmax half the size of the one reported in Grave et al. (2016). Still, ELMo-S has a 2.3x speed advantage and is 3 times more memory efficient.
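The memory gap follows directly from the output-layer parameter counts. A back-of-the-envelope sketch (the 512-dimensional output size is an illustrative assumption, not a number from our experiments):

```python
def softmax_output_params(vocab_size, dim):
    """Trainable parameters added by a full softmax output layer: a
    vocab_size x dim output embedding plus one bias per word. The Semfit
    layer adds none of these, since its target embeddings are pre-trained
    and frozen."""
    return vocab_size * dim + vocab_size

# A full softmax over an 800K-word vocabulary with 512-d outputs would add
# softmax_output_params(800_000, 512) = 410,400,000 trainable parameters.
# The adaptive softmax shrinks this substantially; the Semfit layer
# removes it entirely.
```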

Moreover, the contextualizing structure used in these three models is an LSTM with projection implemented without cuDNN acceleration (there is currently no readily-available cuDNN-accelerated implementation of LSTM with projection in PyTorch), which is slower than a cuDNN LSTM (Chetlur et al., 2014) and much slower than faster structures such as QRNN (Bradbury et al., 2016), SRU (Lei et al., 2018), Gated CNN (Dauphin et al., 2017), or the Transformer (Vaswani et al., 2017). Peters et al. (2018b) showed that such faster structures deliver comparable performance with a 3-5x speedup. Thus our approach might achieve an even better speedup with faster structures.

5.3 Performance on Downstream Tasks

Table 3 reports the downstream task performance of each representation model. We focus on the three main competing models (ELMo, ELMo-A, ELMo-S) in the middle columns.

Our approach (ELMo-S) works especially well on semantic-centric tasks such as SNLI, Coref, and SST-5, where it shows competitive or even better performance than ELMo and ELMo-A. However, on tasks that require a certain level of syntactic information, such as NER and SRL (He et al., 2018), ELMo-S suffers slight performance degradation, though it still holds a large advantage over the pure word embedding model, FastText. We suspect that the degradation is due to the pre-trained embedding we use; we conduct further analyses and discuss the results in Section 6.1.

6 Analysis

We conduct further analyses of certain modeling choices of our approach and of subword-level language models, and provide a more detailed study of computational efficiency.

6.1 Modeling Choices

Sensitivity to the Pre-trained Embedding

Our previous experiments are based on the word embedding trained on Common Crawl. In this experiment, we analyze how sensitive our approach is to the pre-trained word embedding. We train a FastText embedding on the One Billion Word Benchmark and denote the ELMo-S model trained with this embedding as the input and target embedding by ELMo-S_one. Note that the code we used to train the one-billion-word embedding lacks a few new features compared to the Common Crawl embedding provided by Mikolov et al. (2017), so the one-billion-word embedding, and consequently ELMo-S_one, might be further improved with those features. Comparing ELMo-S_one to ELMo-S (Table 3), we find that this model holds up surprisingly well, with only a minor performance decrease. ELMo-S_one is competitive with ELMo on SNLI, Coref, and SST-5; it is inferior on NER and SRL but still better than FastText.

We especially note that ELMo-S_one does not enjoy any additional resources, since its pre-trained embedding is trained on the same corpus as ELMo and training the embedding took only two hours. The performance of this model, especially on SNLI, Coref, and SST-5, is attractive given the substantial simplifications we have made to the model.

Semantic Versus Syntactic

In Section 5, we observed that models with the FastText embedding uniformly perform worse than ELMo on SRL, which relies heavily on syntactic information. We suspect that the FastText embedding is weaker at capturing syntactic information, while Peters et al. (2018b) revealed that the CNN layer in ELMo is strong at capturing it. This motivates us to explore whether a syntactic-rich embedding yields better results on SRL. We find that the trained CNN layer from ELMo, surprisingly, serves as a kind of syntactic-rich word embedding. When we use that CNN layer as a pre-trained word embedding to train an ELMo-S model (denoted as ELMo-S_cnn), we observe a notable performance increase on SRL (Table 3).

ELMo-S_cnn is also a natural extension of an intriguing idea, the CNN softmax from Jozefowicz et al. (2016), which uses a CNN to provide word representations for the softmax layer. Its attraction lies in that a CNN is very parameter-efficient and also possesses the open-vocabulary feature. However, Jozefowicz et al. (2016) pointed out that the CNN softmax sometimes cannot differentiate between words with similar spellings but different meanings, which could explain why ELMo-S_cnn is inferior on certain semantic tasks (SNLI and SST-5).

6.2 Subword-level Language Models

Models      SNLI  Coref  SST-5       NER         SentEval-Avg  SST-5  SST-2  TREC   MR     SUBJ
ELMo-Sub    87.1  72.4   53.02±2.08  92.17±0.56  80.30         45.25  84.42  92.13  78.40  93.83
ELMo-S_one  88.4  73.0   53.86±4.02  92.03±0.47  80.31         44.99  83.21  91.60  78.85  93.84
ELMo-S      88.8  72.9   53.80±0.73  92.24±0.10  80.96         45.22  85.45  92.87  80.05  94.11
FastText    87.7  68.90  51.30±0.77  90.97±0.43  71.10         38.40  80.12  71.80  74.30  89.06
Table 4: Performance of models including ELMo-Sub on downstream tasks. SentEval-Avg is the average score over the ten classification tasks from SentEval; the last five columns showcase five of those tasks.
Model     Vocab   TimeBatch16  TimeOneCard  TimeFourCard  Params  Batch
Semfit    -       0.03s        13.04s       4.61s         20m     5120
Adaptive  40K     0.05s        24.78s       9.68s         106m    2048
Adaptive  800K    0.06s        29.99s       16.14s        265m    1024
Adaptive  2000K   0.10s        54.77s       92.66s        420m    160
Table 5: Statistics on the computational efficiency of the Semfit layer and the adaptive softmax. Time is in seconds. Params: number of trainable parameters of the whole model in millions. Batch: maximal batch size per card.

Next, we discuss the advantages and potential disadvantages of subword-level language models. By splitting the original words into smaller fragments (subwords), these models have a small vocabulary and can deal with arbitrary words, essentially circumventing the large vocabulary problem. They also possess the open-vocabulary feature in the input layer as they can split unseen words into seen subwords.

However, these models produce contextual representations for subwords rather than words. Concretely, consider a word ABC consisting of three subwords A, B, and C. Under the subword method, we get a contextual representation for each of A, B, and C, whereas in some scenarios we want exactly one representation vector for the word ABC. BERT addresses this by using the representation of A as the representation of the whole word ABC, which seems like an ad-hoc workaround rather than a principled solution. We are concerned that this could be unsuitable in scenarios where a precise word-level representation is required; for example, it might not be optimal to use subword-level contextual representations with the feature-based method on word-level tasks such as Coref.
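The workaround described above can be made concrete with a small sketch (the function name and the span format are hypothetical, for illustration only):

```python
def first_subword_pool(subword_vecs, word_spans):
    """Recover one vector per word from subword-level contextual vectors by
    taking the first subword of each word, as BERT does. word_spans lists
    (start, end) subword indices for each original word; the information in
    a word's remaining subword vectors is simply dropped."""
    return [subword_vecs[start] for start, end in word_spans]

# For a word ABC split into subwords A, B, C occupying positions 0-2,
# only the vector at position 0 stands in for the whole word.
```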

We conduct a controlled experiment to verify this concern. We introduce a subword-level ELMo using BPE segmentation with 30,000 merges, denoted as ELMo-Sub. ELMo-Sub differs from ELMo-S in the input and output layers: the input layer is a vanilla lookup-table-style subword embedding, and the output layer is a full softmax layer. The input and softmax embeddings are tied but trained from scratch; all other settings are kept the same as those of ELMo-S. To make sure that ELMo-Sub is a fair and properly-implemented baseline, we additionally include sentence-level tasks with simple architectures that ELMo-Sub should be good at. Concretely, we include the classification tasks in SentEval (Conneau and Kiela, 2018), which attaches a simple softmax classifier on top of sentence embeddings to train models for sentence-level tasks. We follow Perone et al. (2018) to obtain sentence embeddings from contextual representations.

We find that ELMo-Sub delivers inconsistent performance (Table 4). On SentEval, SST-5, and NER, it performs on par with ELMo-S, but it lags behind on Coref and performs drastically worse on SNLI, where it even fails to outperform the non-contextual baseline FastText. These results are consistent with the observation in a recent work (Kitaev and Klein, 2018), which finds that a special design has to be adopted to apply BERT to constituency parsing because of the subword segmentation.

We note, however, that the scope of this experiment is limited. It is likely that the aforementioned issue is alleviated when the model is scaled up or used with the fine-tuning method, as evidenced by the performance of GPT and BERT. It is hard to judge whether subword-level language models will work well under a specific setting, and we leave this to future work.

It is noteworthy that our approach still holds a speed advantage over subword-level language models: those models still need a softmax layer to normalize over a relatively small but non-negligible vocabulary, and splitting words into subwords increases the length of a sentence. In our experiments, training ELMo-Sub takes 3.9 days on four cards, 1.56x slower than training ELMo-S.

6.3 Computational Efficiency

In this section, we provide a detailed study of the computational efficiency of the Semfit layer. We follow the setting in Grave et al. (2016) to compare with the adaptive softmax: we use a uni-directional LSTM with 2048 units, fix the input embedding as in our previous experiments, vary the vocabulary size, and report the results in Table 5.

We first explain some of the statistics we report:

  • TimeBatch16: Time needed to finish a single batch with 16 examples. This reflects the computational time complexity of models.

  • TimeOneCard: Time needed to process one million words on one GPU when using the maximal batch size. This reflects the actual running time of models on one GPU card and is affected by both the computational time complexity and GPU memory efficiency.

  • TimeFourCard: Time needed to process one million words on a machine with four GPUs when using the maximal batch size. This reflects the actual running time of models on four GPU cards and is included to study the communication cost across GPUs.
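These statistics could be collected with a harness along the following lines (a sketch; `step_fn`, standing in for one forward/backward pass over a batch, is hypothetical):

```python
import time

def time_per_batch(step_fn, n_batches=5, warmup=1):
    """Average wall-clock time per training step, in the spirit of the
    TimeBatch16 statistic: run a few warmup steps first (to exclude
    one-time setup costs), then time the remaining steps."""
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(n_batches):
        step_fn()
    return (time.perf_counter() - start) / n_batches
```

When timing GPU code, one would additionally synchronize the device (e.g., torch.cuda.synchronize in PyTorch) before reading the clock, since kernel launches are asynchronous.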

We have the following observations:

Speedup and Batch Size

When the vocabulary size is small (e.g., 40K), the speed gain from replacing softmax with Semfit for each batch is small (reflected by TimeBatch16). However, Semfit still benefits from its memory efficiency. Therefore, by using a larger batch size, the overall speedup (TimeOneCard) is large. On the other hand, when the vocabulary size is larger, Adaptive becomes slower, while the complexity of Semfit remains constant.

Multi-GPU Scalability

The speed superiority of the Semfit layer is magnified when we move to multiple GPUs. The speedup of the Semfit layer on TimeFourCard is consistently higher than that on TimeOneCard. This will be very useful when we scale to dozens or hundreds of machines.

Super Large Vocabulary

The Semfit layer has a great advantage when the vocabulary is super large. In the 2,000K-vocabulary test, the speedup on one card is already substantial, and using four cards for the adaptive softmax is even counter-productive, as the communication cost exceeds the benefit of the additional GPUs. In this experiment, the hyper-parameters of the adaptive softmax are the same as in the 800K-vocabulary test; this could be suboptimal for the adaptive softmax, as one could trade accuracy for efficiency by allocating even less computation to the rare words. Note that a 2,000K vocabulary is not an impractical setting: it is created from a tokenized 250-billion-word Common Crawl corpus (Panchenko et al., 2017) and only covers words that appear more than 397 times.

7 Conclusion and Future Work

We introduced an efficient framework for learning contextual representations without a softmax layer. Experiments with ELMo show that we significantly accelerate the training of current models while maintaining competitive performance on various downstream tasks, and we provide a theoretical explanation of the Semfit layer. For future work, we are interested in extending our approach to other contextual representation models such as BERT, and in scaling these models to larger datasets.

Acknowledgements

We would like to thank Muhao Chen for helpful comments. We also thank Yulia Tsvetkov and Sachin Kumar for help with implementing the Semfit layer as well as Jieyu Zhao, Kenton Lee, and Nelson Liu for providing reproducible code for experiments.

References

  • Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
  • Bengio et al. (2003) Yoshua Bengio, Jean-Sébastien Senécal, et al. 2003. Quick training of probabilistic neural nets by importance sampling. In AISTATS.
  • Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL.
  • Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP.
  • Bradbury et al. (2016) James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576.
  • Chelba et al. (2013) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
  • Chen et al. (2018) Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In ACL.
  • Chen et al. (2017) Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In ACL.
  • Chetlur et al. (2014) Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. 2014. cudnn: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759.
  • Collobert et al. (2011) Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research.
  • Conneau and Kiela (2018) Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. In LREC.
  • Dauphin et al. (2017) Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In ICML.
  • Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Gardner et al. (2018) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640.
  • Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677.
  • Grave et al. (2016) Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. 2016. Efficient softmax approximation for gpus. arXiv preprint arXiv:1609.04309.
  • He et al. (2017) Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In ACL.
  • He et al. (2018) Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In ACL.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
  • Horn and Johnson (1990) Roger A Horn and Charles R Johnson. 1990. Matrix Analysis. Cambridge University Press.
  • Inan et al. (2016) Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462.
  • Jolliffe (2011) Ian Jolliffe. 2011. Principal component analysis. In International Encyclopedia of Statistical Science. Springer Berlin Heidelberg.
  • Jozefowicz et al. (2016) Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.
  • Kim et al. (2016) Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In AAAI.
  • Kitaev and Klein (2018) Nikita Kitaev and Dan Klein. 2018. Multilingual constituency parsing with self-attention and pre-training. arXiv preprint arXiv:1812.11760.
  • Kumar and Tsvetkov (2018) Sachin Kumar and Yulia Tsvetkov. 2018. Von mises-fisher loss for training sequence to sequence models with continuous outputs. arXiv preprint arXiv:1812.04616.
  • Lee et al. (2017) Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP.
  • Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In NAACL-HLT.
  • Lei et al. (2018) Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In EMNLP.
  • Levy and Goldberg (2014) Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems.
  • McCann et al. (2017) Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems.
  • Merity et al. (2018) Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240.
  • Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • Mikolov et al. (2017) Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2017. Advances in pre-training distributed word representations. arXiv preprint arXiv:1712.09405.
  • Mnih and Teh (2012) A Mnih and YW Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In ICML.
  • Morin and Bengio (2005) Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. AISTATS 2005.
  • Panchenko et al. (2017) Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, and Chris Biemann. 2017. Building a web-scale dependency-parsed corpus from commoncrawl. arXiv preprint arXiv:1710.01779.
  • Perone et al. (2018) Christian S. Perone, Roberto Silveira, and Thomas S. Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259.
  • Peters et al. (2018a) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In NAACL-HLT.
  • Peters et al. (2018b) Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In EMNLP.
  • Pinter et al. (2017) Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword rnns. In EMNLP.
  • Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, and Nianwen Xue. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In EMNLP-CoNLL.
  • Pradhan et al. (2013) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In CoNLL.
  • Press and Wolf (2017) Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. EACL.
  • Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
  • Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
  • Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In EMNLP.
  • Sak et al. (2014) Haşim Sak, Andrew Senior, and Françoise Beaufays. 2014. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128.
  • Sang and De Meulder (2003) Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050.
  • Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL.
  • Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
  • Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
  • Yang et al. (2017) Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2017. Breaking the softmax bottleneck: A high-rank RNN language model. arXiv preprint arXiv:1711.03953.
  • You et al. (2018) Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. 2018. Imagenet training in minutes. In Proceedings of the 47th International Conference on Parallel Processing, ICPP 2018.