Automatic Generation of Chinese Short Product Titles for Mobile Display


Yu Gong*, Xusheng Luo*†, Kenny Q. Zhu‡, Shichen Liu, Wenwu Ou
(* Equal contribution. † Work performed while interning at Alibaba Group. ‡ Corresponding author.)
Search Algorithm Team, Alibaba Group
Shanghai Jiao Tong University
{gongyu.gy, lxs140564, shichen.lsc}@alibaba-inc.com,
kzhu@cs.sjtu.edu.cn, santong.oww@taobao.com
Abstract

This paper studies the problem of automatically extracting a short title from a manually written, longer description of E-commerce products for display on mobile devices. It is a new extractive summarization problem on short text inputs, for which we propose a feature-enriched network model that combines three different categories of features in parallel. Experimental results show that our framework outperforms several strong baselines, with a relative gain of 4.5% in ROUGE-1 F1. Moreover, we produce an extractive summarization dataset for E-commerce short texts and will release it to the research community.


1 Introduction

Mobile Internet is fast becoming the primary venue for E-commerce. People have got used to browsing through collections of products and making transactions on the relatively small mobile phone screens. All major E-commerce giants such as Amazon, eBay and Taobao offer mobile apps that are poised to supersede the conventional websites.

Figure 1: A cut-off long title on an E-commerce mobile app, vs. a corresponding short title.

When a product is featured on an E-commerce website or mobile app, it is often associated with a textual title which describes the key characteristics of the product. These titles, written by merchants, often contain every conceivable detail, so as to maximize the chances of being retrieved by user search queries. Therefore, such titles are often verbose, over-informative, and hardly readable. While this is acceptable for display in a computer's web browser, it becomes a problem when such long titles are displayed on mobile apps. Take Figure 1 as an example. The title for a red sweater on an E-commerce mobile app is “ONE MORE文墨2017夏装新款印花连帽上衣长袖短款喇叭袖百搭卫衣女”. Due to the limited display space on mobile phones, original long titles (usually more than 20 characters) are cut off, leaving only the first several characters “ONE MORE文墨2017夏装… (ONE MORE 2017 summer woman…)” on the screen, which is nearly incomprehensible unless the user clicks on the product and loads the detailed product page.

Thus, in order to properly display a product listing on a mobile screen, one has to significantly simplify (e.g., to under 10 characters) the long titles while keeping the most important information. This way, users only have to glance through the search result page to quickly decide whether they want to click into a particular product. Figure 1 also shows, for comparison, an alternative display with a shortened title for the same product. The short title in the left snapshot is “印花连帽短款喇叭袖卫衣”, which means “printed hooded short flare-sleeve sweater”.

In this paper, we attempt to extract short titles from their longer, more verbose counterparts for E-commerce products. To the best of our knowledge, this is the first attempt that attacks the E-commerce product short title extraction problem.

This problem is related to text summarization, which generates a summary by either extracting or abstracting words or sentences from the input text. Existing summarization methods have primarily been applied to news or other long documents, which may contain irrelevant information. Thus, the goal of traditional summarization is to identify the most essential information in the input and condense it into something as fluent and readable as possible.

We attack this problem with an extractive summarization approach, rather than an abstractive one, for the following reasons. First, our input title is relatively short (27 characters on average; see Table 1) and contains little noise. Some words in the long title may not be important, but they are all relevant to the product. Thus, it is sufficient to decide whether each word should or should not stay in the summary. Second, the number of words in the output is strictly constrained in our problem due to the size of the display. Generative (abstractive) approaches do not perform as well under such a constraint. Finally, for E-commerce, it is better for the words in the summary to come from the original title. Using different words may change the original intention of the merchant.

State-of-the-art neural summarization models [\citeauthoryearCheng and Lapata2016, \citeauthoryearNarayan et al.2017] are generally based on attentional RNN frameworks and have been applied to news or wiki-like articles. However, in E-commerce, customers are not so sensitive to the order of the words in a product title. Besides using a deep RNN with an attention mechanism to encode the word sequence, we believe other word-level semantic features such as NER tags and TF-IDF scores will be just as useful and should be given more weight in the model. In this paper, we propose a feature-enriched neural network model, which is not only deep but also wide, aiming to effectively shorten original long titles.

The contributions of this paper are summarized below:

  • We collect and will open source a product title summary dataset (Section 2).

  • We present a novel feature-enriched network model, combining three different types of word-level features (Section 4), and the results show the model outperforms several strong baseline methods, with a ROUGE-1 F1 score of 0.725 (Section 5.4).

  • By deploying the framework on an E-commerce mobile app, we witnessed improved online sales and better turnover conversion rate in the popular 11/11 shopping season (Section 5.5).

2 Data Collection

Figure 2: Procedure of data collection in Youhaohuo.

Publicly available large-scale summarization datasets are rare. Existing document summarization datasets include DUC (http://duc.nist.gov/data.html), TAC (http://www.nist.gov/tac/2015/KBP/) and TREC (http://trec.nist.gov/) for English, and LCSTS (http://icrc.hitsz.edu.cn/Article/show/139.html) for Chinese. In this work, we create a dataset on short title extraction for E-commerce products. This dataset comes from a module in Taobao named “有好货” (Youhaohuo, https://h5.m.taobao.com/lanlan/index.html). Youhaohuo is a collection of high-quality products on Taobao. If you click a product in Youhaohuo, you will be redirected to the detailed product page (including the product title). What is different from ordinary Taobao products is that online merchants are required to submit a short title for each Youhaohuo product. This short title, written by humans, is readable and describes the key properties of the product. Furthermore, most of these short titles are directly extracted from the original product titles. Thus, we believe Youhaohuo is a good data source of extractive summarization for product descriptions.

Figure 2 shows how we collected the data. On the left is a web page in Youhaohuo displaying several products, each of which contains an image and a short title below. When clicking on the bottom-right dress, we jump to the detailed page on the right. The title next to the picture in the red box is the manually written short title, which says “MIUCO针织马甲假两件收腰连衣裙” (MIUCO tight dress with knit vest). This short title is extracted from the long title below in the blue box. Notice that all the characters in the short title are directly extracted from the long title (red boxes inside the blue box). In addition to the characters in the short title, the long title also contains extra information such as “女装2017冬新” (woman’s wear brand new in winter 2017). In this work, we segment the original long titles and short titles into Chinese words with jieba (https://pypi.python.org/pypi/jieba/).
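As a concrete illustration of this segmentation and labeling step, the following sketch uses jieba to segment a long/short title pair and derive per-word labels. The title strings below are our own reconstruction based on the Figure 2 example, and the label rule is only one plausible way to align the pair; it is not the exact script used to build the dataset.

```python
# A minimal sketch of segmentation and label derivation, assuming
# hypothetical title strings based on the Figure 2 example.
import jieba

long_title = "MIUCO女装2017冬新针织马甲假两件收腰连衣裙"   # illustrative long title
short_title = "MIUCO针织马甲假两件收腰连衣裙"             # merchant-written short title

long_words = jieba.lcut(long_title)     # segmented word list
short_words = set(jieba.lcut(short_title))

# A word in the long title gets label 1 if it also appears in the short title.
labels = [1 if w in short_words else 0 for w in long_words]
print(list(zip(long_words, labels)))
```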

The dataset consists of 6,481,623 pairs of original and short product titles, which is the largest short text summarization dataset to date. We call it the large extractive summary dataset for E-commerce (LESD4EC), whose statistics are shown in Table 1. We believe this dataset will contribute to future research on short text summarization. (The dataset will be published after the paper is accepted. Note that all the original data can be crawled online.)

No. of summaries 6,481,623
No. of words per text 12
No. of chars per text 27
No. of words per summary 5
No. of chars per summary 11
Table 1: Statistics of LESD4EC dataset.

3 Problem Definition

In this section we formally define the problem of short title extraction. A char is a single Chinese or English character. A segmented word (or term) is a sequence of several chars such as “Nike” or “牛仔裤” (jeans). A product title, denoted as $T$, is a sequence of words $T = (w_1, w_2, \ldots, w_n)$. Let $Y = (y_1, y_2, \ldots, y_n)$ be a sequence of labels over $T$, where $y_i \in \{0, 1\}$. The corresponding short title $S$ is a subsequence of $T$, denoted as $S = (w_{i_1}, w_{i_2}, \ldots, w_{i_m})$, where $y_{i_j} = 1$ for every selected word and $m \le n$.

We regard the short title extraction task as a sequence classification problem. Each word is visited sequentially in the original product title order and a binary decision is made. We do this by scoring each word $w_i$ within $T$ and predicting a label $y_i \in \{0, 1\}$, indicating whether the word should or should not be included in the short title $S$. As we apply supervised training, the objective is to maximize the likelihood of all word labels $Y$, given the input product title $T$ and model parameters $\theta$:

$$\log P(Y \mid T; \theta) = \sum_{i=1}^{n} \log P(y_i \mid T; \theta) \qquad (1)$$
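The objective in Equation (1) amounts to a per-word binary cross-entropy loss. The sketch below shows this in PyTorch for illustration only; the paper does not specify a framework, and the tensor names are our own.

```python
# A minimal sketch of the training objective in Equation (1), assuming
# per-word scores produced by some scoring model.
import torch
import torch.nn.functional as F

def sequence_labeling_loss(word_scores: torch.Tensor,
                           labels: torch.Tensor) -> torch.Tensor:
    """word_scores: (n,) unnormalized scores for the n words of a title.
    labels: (n,) binary ground-truth labels y_i (1 = keep in short title).
    Returns the negative log-likelihood of Equation (1)."""
    return F.binary_cross_entropy_with_logits(word_scores, labels.float(),
                                              reduction="sum")

# Example: a 5-word title where the 3rd and 4th words belong to the short title.
scores = torch.randn(5)                    # stand-in model outputs
labels = torch.tensor([0, 0, 1, 1, 0])
loss = sequence_labeling_loss(scores, labels)
```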

4 Feature-Enriched Neural Extractive Model

In this section, we describe our extractive model for product short title extraction. The overall architecture of our neural network based extractive model is shown in Figure 3. Basically, we use a Recurrent Neural Network (RNN) as the main building block of the sequential classifier. However, unlike traditional RNN-based sequence labeling models used in NER or POS tagging, where all word-level features are fed into the RNN cell, we divide the features into three parts, namely Content, Attention and Semantic. Finally, we combine all three features in an ensemble.

Figure 3: Architecture of Feature-Enriched Neural Extractive Model.

4.1 Content Feature

To encode the product title, we first look up an embedding matrix $W_e \in \mathbb{R}^{d \times |V|}$ to obtain the word embeddings $(x_1, x_2, \ldots, x_n)$. Here, $d$ denotes the dimension of the embeddings and $|V|$ denotes the vocabulary size of natural language words. The embeddings are then fed into a bidirectional LSTM network, which yields two hidden state sequences, $(\overrightarrow{h}_1, \ldots, \overrightarrow{h}_n)$ from the forward pass and $(\overleftarrow{h}_1, \ldots, \overleftarrow{h}_n)$ from the backward pass. We concatenate the forward hidden state of each word with the corresponding backward hidden state, resulting in a representation $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$. At this point, we obtain the representation of the product title $(h_1, h_2, \ldots, h_n)$.

The content feature of the current word $w_i$ is then calculated as:

$$f^{\,cont}_i = \tanh(W_c\, h_i + b_c) \qquad (2)$$

where $W_c$ and $b_c$ are model parameters.
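A sketch of this content encoder is given below, again in PyTorch for illustration; the layer sizes, number of layers and module names are assumptions rather than the paper's actual configuration.

```python
# A minimal sketch of the content feature (Section 4.1).
import torch
import torch.nn as nn

class ContentFeature(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM encoder over the word sequence of one title.
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        # Equation (2): f_i = tanh(W_c h_i + b_c), one score per word.
        self.proj = nn.Linear(2 * hidden, 1)

    def forward(self, word_ids: torch.Tensor):
        # word_ids: (batch, seq_len) integer word indices.
        h, _ = self.bilstm(self.embed(word_ids))   # (batch, seq_len, 2*hidden)
        f_content = torch.tanh(self.proj(h))       # (batch, seq_len, 1)
        return h, f_content.squeeze(-1)
```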

4.2 Attention Feature

In order to measure the importance of each word relative to the whole product title, we borrow the idea of the attention mechanism [\citeauthoryearBahdanau et al.2014, \citeauthoryearLuong et al.2015] to calculate a relevance score between the hidden vector of the current word and the representation of the entire title sequence.

The representation of the entire product title is modeled as a non-linear transformation of the average pooling of the concatenated hidden states of the BiLSTM:

$$h_T = \tanh\!\left(W_t \cdot \frac{1}{n}\sum_{i=1}^{n} h_i + b_t\right) \qquad (3)$$

Therefore, the attention feature of the current word is calculated by a bilinear combination function:

$$f^{\,att}_i = h_i^{\top} W_a\, h_T \qquad (4)$$

where $W_a$ is a parameter matrix.
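The following sketch illustrates Equations (3)-(4); the dimension value and parameter initialization are illustrative assumptions, and the hidden size is meant to match the concatenated BiLSTM state size from the content encoder above.

```python
# A minimal sketch of the attention feature (Section 4.2).
import torch
import torch.nn as nn

class AttentionFeature(nn.Module):
    def __init__(self, hidden: int = 256):   # = 2 * LSTM hidden size
        super().__init__()
        self.pool_proj = nn.Linear(hidden, hidden)                        # W_t, b_t in Eq. (3)
        self.bilinear = nn.Parameter(torch.randn(hidden, hidden) * 0.01)  # W_a in Eq. (4)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden) BiLSTM states from the content encoder.
        h_title = torch.tanh(self.pool_proj(h.mean(dim=1)))   # Eq. (3), average pooling
        # Eq. (4): bilinear relevance between each word state and the title vector.
        return torch.einsum("bih,hk,bk->bi", h, self.bilinear, h_title)
```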

4.3 Semantic Feature

Apart from the two hidden features calculated with the RNN encoder, we design two other kinds of features, TF-IDF and NER tags, to capture the deeper semantics of each word in a product title.

4.3.1 TF-IDF

Tf-idf, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a corpus or a sentence in a document.

A simple choice to calculate the term frequency of the current word $w_i$ is to use the number of its occurrences (count) in the title $T$:

$$tf_i = \mathrm{count}(w_i, T) \qquad (5)$$

The inverse document frequency is calculated as:

$$idf_i = \log \frac{N}{\left|\{T' : w_i \in T'\}\right|} \qquad (6)$$

where $N$ is the number of product titles in the corpus and $\left|\{T' : w_i \in T'\}\right|$ is the number of titles containing the word $w_i$.

Combining the above two, the tf-idf score of word $w_i$ in a product title $T$, denoted as $tfidf_i$, is calculated as the product of $tf_i$ and $idf_i$.

We then build a feature vector containing three values, the tf score, the idf score and the tf-idf score, and use it to calculate a third feature:

$$s_i = [tf_i;\; idf_i;\; tfidf_i] \qquad (7)$$
$$f^{\,tfidf}_i = \tanh(W_s\, s_i + b_s) \qquad (8)$$
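The statistics in Equations (5)-(7) can be computed offline over the title corpus. The sketch below does so over a toy corpus; the corpus contents are made up for illustration only.

```python
# A minimal sketch of the tf/idf/tf-idf statistics in Equations (5)-(7).
import math
from collections import Counter

corpus = [["Nike", "红色", "运动裤", "包邮"],
          ["红色", "连衣裙"],
          ["Nike", "运动鞋"]]

doc_freq = Counter(w for title in corpus for w in set(title))
N = len(corpus)

def tfidf_features(title):
    counts = Counter(title)                       # Eq. (5): raw counts in this title
    feats = []
    for w in title:
        tf = counts[w]
        idf = math.log(N / doc_freq[w])           # Eq. (6)
        feats.append((tf, idf, tf * idf))         # the 3-dim vector of Eq. (7)
    return feats

print(tfidf_features(corpus[0]))
```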

4.3.2 NER

We use a specialized NER tool for E-commerce to label entities in a product title. The entity types are those of common interest in the E-commerce scenario, such as “颜色” (color), “风格” (style) and “尺寸规格” (size). For example, in the segmented product title “包邮 Nike 品牌 的 红色 运动裤” (Nike red sweatpants with free shipping), “包邮” (free shipping) is labeled as Marketing_Service, “Nike” is labeled as Brand, “红色” (red) is labeled as Color and “运动裤” (sweatpants) is labeled as Category. We use a one-hot representation $e_i$ to encode the NER tag of each word and integrate it into the model through a fourth feature:

$$f^{\,ner}_i = \tanh(W_n\, e_i + b_n) \qquad (9)$$
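A sketch of the one-hot encoding step follows. The tag inventory below is only a hypothetical subset; the paper relies on a proprietary E-commerce NER tool with its own tag set.

```python
# A minimal sketch of the one-hot NER feature input e_i in Equation (9).
import torch

NER_TAGS = ["Brand", "Category", "Color", "Style", "Size",
            "Marketing_Service", "Other"]
TAG2ID = {t: i for i, t in enumerate(NER_TAGS)}

def one_hot_ner(tags):
    """tags: list of NER tag strings, one per word in the title."""
    ids = torch.tensor([TAG2ID.get(t, TAG2ID["Other"]) for t in tags])
    return torch.nn.functional.one_hot(ids, num_classes=len(NER_TAGS)).float()

# Example for the segmented title "包邮 Nike 品牌 的 红色 运动裤".
e = one_hot_ner(["Marketing_Service", "Brand", "Other", "Other",
                 "Color", "Category"])      # shape: (6, 7)
```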

4.4 Ensemble

We combine all the features above into one final score for word $w_i$:

$$p_i = \sigma\!\left(W_o\, [f^{\,cont}_i;\; f^{\,att}_i;\; f^{\,tfidf}_i;\; f^{\,ner}_i] + b_o\right) \qquad (10)$$

where $\sigma$ is the sigmoid (logistic) function, which constrains the score to lie between 0 and 1. Based on that, we set a threshold to decide whether to keep word $w_i$ in the short title.
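The sketch below illustrates the ensemble of Equation (10): the per-word feature vectors from Sections 4.1-4.3 are concatenated and mapped to a keep/drop probability. The feature dimension and the example threshold are assumptions, not values from the paper.

```python
# A minimal sketch of the feature ensemble (Section 4.4).
import torch
import torch.nn as nn

class Ensemble(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.out = nn.Linear(feat_dim, 1)   # W_o, b_o in Eq. (10)

    def forward(self, *features):
        # Each feature tensor: (seq_len, d_k); concatenate along the last axis.
        feats = torch.cat(features, dim=-1)
        return torch.sigmoid(self.out(feats)).squeeze(-1)   # p_i in (0, 1)

# Offline inference keeps word i whenever p_i exceeds a tuned threshold,
# e.g. keep = probs > 0.4   (0.4 is a placeholder, not the paper's value).
```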

Our model is very much like the Wide & Deep Model architecture [\citeauthoryearCheng et al.2016]. While the content and attention features are deep since they rely on deep RNN structure, the semantic features are relatively wide and linear.

5 Experiments

In this section, we first introduce the experimental setup and the previous state-of-the-art systems as comparison to our own model known as Feature-Enriched-Net. We then show the implementation details and the evaluation results before giving some discussions on the results.

5.1 Training and Testing Data

We randomly select 500,000 product titles as our training data, and another 50,000 for testing. Each product title is annotated with a sequence of binary labels, i.e., each word is labeled with 1 (included in the short title) or 0 (not included in the short title). Readers may refer to Section 2 for details on how we collect the product titles and their corresponding short titles.

5.2 Baseline Systems

Since there is no previous work that directly solves short title extraction for E-commerce products, we select our baselines from three categories. The first is traditional methods. We choose a keyword extraction framework known as TextRank [\citeauthoryearMihalcea and Tarau2004]. It first infers an importance score for each word within the long title by an algorithm similar to PageRank, then decides whether each word should or should not be kept in the short title according to the scores.

The second category is standard sequence labeling systems. We choose the system of Huang et al. [\citeauthoryearHuang et al.2015], in which a multi-layer BiLSTM is used. Compared to our system, it does not exploit the attention mechanism or any side feature information. We substitute the Conditional Random Field (CRF) layer with Logistic Regression to make it compatible with our binary labeling problem. We call this system BiLSTM-Net.

The last category of methods is attention-based frameworks, which use encoder-decoder architecture with attention mechanism. We choose Pointer Network [\citeauthoryearVinyals et al.2015] as a comparison and call it Pointer-Net. During decoding, it looks at the whole sentence, calculates the attentional distribution and then makes decisions based on the attentional probabilities.

5.3 Implementation Details

We pre-train the word embeddings used in our model on the whole product title data plus an extra corpus called “E-commerce Product Recommended Reason”, which is written by online merchants and is also extracted from Youhaohuo (Section 2). We use the Word2vec [\citeauthoryearMikolov et al.2013a, \citeauthoryearMikolov et al.2013b] CBOW model; the context window size, negative sampling size, number of iteration steps, hierarchical softmax setting and embedding dimension are fixed hyperparameters. For Out-Of-Vocabulary (OOV) words, embeddings are initialized to zero. All embeddings are updated during training.
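As one possible realization of this pre-training step, the sketch below uses gensim's Word2Vec; the paper does not name a toolkit, and all hyperparameter values shown are placeholders, not the paper's actual settings.

```python
# A minimal sketch of CBOW embedding pre-training with gensim (>= 4.0 API).
from gensim.models import Word2Vec

# `titles` would be the segmented product titles plus the
# "E-commerce Product Recommended Reason" corpus, one word list per entry.
titles = [["ONE", "MORE", "印花", "连帽", "卫衣"],
          ["MIUCO", "针织", "马甲", "连衣裙"]]

w2v = Word2Vec(sentences=titles, vector_size=128, window=5,
               negative=5, sg=0, hs=0, epochs=5, min_count=1)
embedding = w2v.wv["连衣裙"]   # 128-dim vector for one word
```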

For the recurrent neural network component of our system, we use a two-layer LSTM network with a fixed hidden unit size. All product titles are padded to a fixed maximum sentence length.

We perform mini-batch training with a cross-entropy loss for a fixed number of epochs; we use the Adam optimizer, and the batch size and initial learning rate are fixed hyperparameters.

5.4 Offline Evaluation

To evaluate the quality of automatically extracted short titles, we use ROUGE [\citeauthoryearLin and Hovy2003] to compare model-generated short titles against manually written short titles. In this paper, we only report ROUGE-1, mainly because linguistic fluency and word order are not of concern in this task. Unlike previous works in which ROUGE is used as a recall-oriented metric, we jointly consider precision, recall and F1, since recall only captures the ratio of the number of extracted words included in the ground-truth short title over the total number of words in the ground-truth short title. Due to the limited display space on mobile phones, the number of words (or characters) in the extracted short title itself should be constrained as well, so precision is also measured in our experiments, and F1 is used as the comprehensive evaluation metric:

$$P = \frac{|S_{model} \cap S_{human}|}{|S_{model}|}, \quad R = \frac{|S_{model} \cap S_{human}|}{|S_{human}|}, \quad F_1 = \frac{2 P R}{P + R}$$

where $S_{human}$ is the manually written short title, $S_{model}$ is the short title generated by the model, and $|S_{model} \cap S_{human}|$ is the number of overlapping words between them.
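The word-overlap precision/recall/F1 above can be computed as in the sketch below; this is our own simplified re-implementation for illustration, not the official ROUGE toolkit.

```python
# A minimal sketch of ROUGE-1 style precision/recall/F1 over segmented words.
from collections import Counter

def rouge1_prf(model_words, human_words):
    overlap = sum((Counter(model_words) & Counter(human_words)).values())
    p = overlap / len(model_words) if model_words else 0.0
    r = overlap / len(human_words) if human_words else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# Example with the Table 3 case (segmented words).
model = ["泡泡", "系列", "一字领", "掉", "袖", "连衣裙"]
human = ["一字领", "掉", "袖", "连衣裙"]
print(rouge1_prf(model, human))   # -> (0.666..., 1.0, 0.8)
```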

Final results on the test set are shown in Table 2. Our method (Feature-Enriched-Net) outperforms the baselines on both precision and recall, and achieves the best F1 score, improving over the strongest baseline by a relative gain of 4.5%. Pointer-Net achieves a higher precision than BiLSTM-Net thanks to its ability to attend to the whole sentence and select the most important words. Our method combines long short-term memories, the attention mechanism and other rich semantic features; as a result, it is able to extract the most essential words from the original long titles and make the short titles more accurate and comprehensive.

We also tune the threshold used in BiLSTM-Net and our Feature-Enriched-Net. This threshold indicates how large the predicted likelihood must be for a word to be included in the final short title. We report the results in Figure 4, in which the threshold is varied over several values. From the figures, we conclude that our model consistently performs better than BiLSTM-Net.

Models                 Precision  Recall  F1
TextRank               0.430      0.219   0.290
BiLSTM-Net             0.637      0.751   0.689
Pointer-Net            0.648      0.746   0.694
Feature-Enriched-Net   0.675      0.783   0.725
Table 2: Final results on the test set. We report ROUGE-1 precision, recall and the corresponding F1. We use the tuned threshold (see Figure 4) for BiLSTM-Net and Feature-Enriched-Net. The best ROUGE score in each column is highlighted in boldface.
Figure 4: Offline results of BiLSTM-Net and Feature-Enriched-Net under different thresholds; online A/B testing of sales volume and turnover conversion rate.

5.5 Online A/B Testing

This subsection presents the results of online evaluation in the search result page scenario of an E-commerce mobile app with a standard A/B testing configuration. Due to the limited display space on mobile phones, only a fixed number of characters (12 in this app) can be shown, and the excess is cut off. Therefore, unlike the previously mentioned inference approach (thresholding, Section 4.4), we cast inference as a classic 0-1 Knapsack Problem. Each word $w_i$ in the product title is an item with weight $c_i$ and value $p_i$, where $c_i$ is the character length of word $w_i$ and $p_i$ is the likelihood predicted by our model. The maximum weight capacity of the knapsack (i.e., the character length limit) is 12. The target is then:

$$\max_{y_1, \ldots, y_n} \sum_{i=1}^{n} y_i\, p_i \quad \text{s.t.} \quad \sum_{i=1}^{n} y_i\, c_i \le 12, \;\; y_i \in \{0, 1\}$$

where $y_i = 1$ means $w_i$ should be reserved in the short title. Similar to the standard solution of the 0-1 Knapsack Problem, we use a Dynamic Programming (DP) algorithm, sketched below.
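The following sketch shows one way to implement this DP, assuming per-word probabilities from the model; the example words and scores are hypothetical and are only meant to illustrate the selection under the 12-character limit.

```python
# A minimal sketch of the 0-1 knapsack DP used for online inference.
def select_words(words, probs, char_limit=12):
    n = len(words)
    costs = [len(w) for w in words]
    # dp[c] = (best total value, chosen word indices) using capacity c.
    dp = [(0.0, []) for _ in range(char_limit + 1)]
    for i in range(n):
        for c in range(char_limit, costs[i] - 1, -1):  # capacity, descending
            cand_val = dp[c - costs[i]][0] + probs[i]
            if cand_val > dp[c][0]:
                dp[c] = (cand_val, dp[c - costs[i]][1] + [i])
    best = max(dp, key=lambda x: x[0])
    # Keep original title order so the short title stays readable.
    return [words[i] for i in sorted(best[1])]

# Hypothetical example: words with made-up model scores.
print(select_words(["品牌", "泡泡", "系列", "一字领", "掉", "袖", "连衣裙", "预订"],
                   [0.2, 0.4, 0.3, 0.9, 0.7, 0.7, 0.95, 0.1]))
# should print ['泡泡', '系列', '一字领', '掉', '袖', '连衣裙'] (12 chars)
```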

In our online A/B testing evaluation, 3% of the users were randomly selected as the testing group (about 3.4 million user views (UV)), in which we substituted the original cut-off long titles displayed to users with short titles extracted by our model with DP inference. We claim that after showing the short titles with the most important keywords, users have a much better idea of what the product is about on the "search result" page and can thus find the products they want more easily. During the popular Double 11 shopping season, we deployed A/B testing for 5 days (from 2017-11-02 to 2017-11-06) and achieved on average 2.31% and 1.22% improvements in sales volume and turnover conversion rate, respectively (see Figure 4 for each day). This clearly shows that better short product titles are more user-friendly and hence improve sales substantially.

ORIGINAL xunruo 熏 若 双生 设计师 品牌 泡泡 系列 一 字领 掉 袖 连衣裙 预订 款
(Bookable XunRuo twin designer brand bubble series dress with boat neckline and off sleeves.)
HUMAN 一字领 掉 袖 连衣裙
(Dress with boat neckline and off sleeves.)
Feature-Enriched-Net 泡泡 系列 一字领 掉 袖 连衣裙
(Bubble series dress with boat neckline and off sleeves.)
BiLSTM-Net 熏 若 品牌 掉 袖 连衣裙 预订
(Bookable XunRuo brand dress with off sleeves.)
Table 3: A real experimental case with a 12-character length limit. Pointer-Net is not included since its encoder-decoder architecture cannot directly adapt to the character length limit.

5.6 Discussions

In Table 3, we show a real case of an original long title, along with the short titles annotated by humans and predicted by BiLSTM-Net and Feature-Enriched-Net, respectively. From the human-annotated short title, we find that a proper short title should contain the most important elements of the product, such as the category (“dress”) and descriptions of properties (“boat neckline and off sleeves”), while other elements such as brand terms (“XunRuo twin designer brand”) or service terms (“open for reservation”) should not be kept in the short title. Our Feature-Enriched-Net is able to generate a satisfying short title, while the baseline model tends to miss some essential information.

However, there is still room for improvement. Terms with similar meaning may co-occur in the short title generated by our model when they all happen to be important terms, such as a category. For example, “皮衣” and “皮夹克” both mean “jacket” in a long title, and the model tends to keep both of them. However, one of them is enough, and the space saved could be used to display other useful information to customers. We will explore intra-attention [\citeauthoryearPaulus et al.2017] as an extra feature in our future work.

6 Related Work

The task of summarization aims at generating short summaries for long documents or sentences. This research problem has drawn a lot of attention in recent years, and existing methods fall mainly into two categories.

Extractive summarization methods [\citeauthoryearErkan and Radev2004, \citeauthoryearMcDonald2007, \citeauthoryearWong et al.2008] produce summaries by concatenating several sentences or words found directly in the original texts. Several methods have been used to select the summary-worthy sentences, including binary classifiers [\citeauthoryearKupiec et al.1995], Markov models [\citeauthoryearConroy and O’leary2001], graphical models [\citeauthoryearErkan and Radev2004, \citeauthoryearMihalcea2005] and integer linear programming (ILP) [\citeauthoryearWoodsend and Lapata2010]. Compared to traditional methods which rely heavily on human-engineered features, neural network based approaches [\citeauthoryearKågebäck et al.2014, \citeauthoryearCheng and Lapata2016, \citeauthoryearNallapati et al.2017, \citeauthoryearNarayan et al.2017] have rapidly gained popularity. The general idea of these methods is to regard extractive summarization as a sequence classification problem and adopt RNN-like networks. Moreover, attention-based frameworks can perform better by attending to the whole document when extracting a word (or sentence) [\citeauthoryearCheng and Lapata2016]. Our work is based on an extractive framework as well. Besides a deep attentional RNN network, we also employ explicit semantic features such as TF-IDF scores and NER tags, making our model more informative.

On the other hand, abstractive summarization methods [\citeauthoryearChen et al.2016, \citeauthoryearNallapati et al.2016, \citeauthoryearSee et al.2017], which have the ability to generate text beyond the original input, can in most cases produce more coherent and concise summaries. These approaches are mostly centered on the attention mechanism and augmented with recurrent decoders [\citeauthoryearChopra et al.2016], Abstract Meaning Representations [\citeauthoryearTakase et al.], hierarchical networks [\citeauthoryearNallapati et al.2016] and pointer networks [\citeauthoryearSee et al.2017]. However, it is not necessary to use abstractive methods for the short title extraction problem in this paper, as the input is a product title that already contains the required informative words.

Our work can be comfortably set in the area of short text summarization. This line of research is in fact essentially sentence compression working on short text inputs such as tweets, microblogs or single sentences. Recent advances typically contribute to improving seq-to-seq learning, or attentional RNN encoder-decoder structures [\citeauthoryearChopra et al.2016, \citeauthoryearNallapati et al.2016]. While these methods are mostly abstractive, we use an extractive framework, combining a deep attentional RNN network with explicit semantic features, due to the different nature of our summarization scenario. In addition, while existing summarization systems focus on news articles or other documents, we are the first to do summarization in a mobile E-commerce setting.

7 Conclusion

To the best of our knowledge, this is the first piece of work that focuses on extractive summarization for E-commerce product titles. We propose a deep and wide model, combining an attentional RNN framework with rich semantic features such as TF-IDF scores and NER tags. Our model outperforms several popular summarization models, achieving a ROUGE-1 F1 score of 0.725. In addition, online A/B testing shows substantial benefits of our model in a real online shopping scenario. Possible future work includes handling similar terms that co-occur in the short titles generated by our model.

References

  • [\citeauthoryearBahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
  • [\citeauthoryearChen et al.2016] Qian Chen, Xiao-Dan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. Distraction-based neural networks for modeling document. In IJCAI, pages 2754–2760, 2016.
  • [\citeauthoryearCheng and Lapata2016] Jianpeng Cheng and Mirella Lapata. Neural summarization by extracting sentences and words. arXiv preprint arXiv:1603.07252, 2016.
  • [\citeauthoryearCheng et al.2016] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7–10. ACM, 2016.
  • [\citeauthoryearChopra et al.2016] Sumit Chopra, Michael Auli, and Alexander M Rush. Abstractive sentence summarization with attentive recurrent neural networks. In NAACL-HLT, pages 93–98, 2016.
  • [\citeauthoryearConroy and O’leary2001] John M Conroy and Dianne P O’leary. Text summarization via hidden markov models. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 406–407. ACM, 2001.
  • [\citeauthoryearErkan and Radev2004] Günes Erkan and Dragomir R Radev. Lexpagerank: Prestige in multi-document text summarization. In EMNLP, volume 4, pages 365–371, 2004.
  • [\citeauthoryearHuang et al.2015] Zhiheng Huang, Wei Xu, and Kai Yu. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991, 2015.
  • [\citeauthoryearKågebäck et al.2014] Mikael Kågebäck, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. Extractive summarization using continuous vector space models. In Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)@ EACL, pages 31–39, 2014.
  • [\citeauthoryearKupiec et al.1995] Julian Kupiec, Jan Pedersen, and Francine Chen. A trainable document summarizer. In Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval, pages 68–73. ACM, 1995.
  • [\citeauthoryearLin and Hovy2003] Chin-Yew Lin and Eduard Hovy. Automatic evaluation of summaries using n-gram co-occurrence statistics. In NAACL-HLT-Volume 1, pages 71–78. Association for Computational Linguistics, 2003.
  • [\citeauthoryearLuong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
  • [\citeauthoryearMcDonald2007] Ryan McDonald. A study of global inference algorithms in multi-document summarization. Advances in Information Retrieval, pages 557–564, 2007.
  • [\citeauthoryearMihalcea and Tarau2004] Rada Mihalcea and Paul Tarau. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, 2004.
  • [\citeauthoryearMihalcea2005] Rada Mihalcea. Language independent extractive summarization. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, pages 49–52. Association for Computational Linguistics, 2005.
  • [\citeauthoryearMikolov et al.2013a] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
  • [\citeauthoryearMikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
  • [\citeauthoryearNallapati et al.2016] Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023, 2016.
  • [\citeauthoryearNallapati et al.2017] Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI, 2017.
  • [\citeauthoryearNarayan et al.2017] Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, and Shay B Cohen. Neural extractive summarization with side information. arXiv preprint arXiv:1704.04530, 2017.
  • [\citeauthoryearPaulus et al.2017] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.
  • [\citeauthoryearSee et al.2017] Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368, 2017.
  • [\citeauthoryearTakase et al.] Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. Neural headline generation on abstract meaning representation. In EMNLP, 2016.
  • [\citeauthoryearVinyals et al.2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700, 2015.
  • [\citeauthoryearWong et al.2008] Kam-Fai Wong, Mingli Wu, and Wenjie Li. Extractive summarization using supervised and semi-supervised learning. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 985–992. Association for Computational Linguistics, 2008.
  • [\citeauthoryearWoodsend and Lapata2010] Kristian Woodsend and Mirella Lapata. Automatic generation of story highlights. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 565–574. Association for Computational Linguistics, 2010.