A Multi-Modal Chinese Poetry Generation Model
Recent studies in sequence-to-sequence learning demonstrate that the RNN encoder-decoder structure can successfully generate Chinese poetry. However, existing methods can only generate poetry from a given first line or a user's intended theme. In this paper, we propose a three-stage multi-modal Chinese poetry generation approach. Given a picture, the first line, the title, and the other lines of the poem are generated successively in three stages. Based on the characteristics of Chinese poems, we propose a hierarchy-attention seq2seq model which can effectively capture character, phrase, and sentence information between contexts and improve the symmetry delivered in poems. In addition, a Latent Dirichlet Allocation (LDA) model is utilized for title generation, improving the relevance between the whole poem and its title. Experimental results using both machine evaluations and human judgments demonstrate the effectiveness of our approach compared with strong baselines.
China is known as “the kingdom of poetry”. This is not only because of the long history of Chinese poetry, but also the number of poets and works, which has a special and significant place in Chinese social life and cultural development.
Poetry is a carrier of language and one of the most original and authentic arts. It is engraved with human reason and emotion, wisdom and thought, imagination and expression. There are several writing formats for Chinese Tang poetry, among which the quatrain is perhaps the best known; it imposes strict rules on words, rhyming, tone and antithesis. We illustrate an example of a famous quatrain in Figure 1.
1) Words. The quatrain consists of 4 lines, and the length of each line is fixed to 5 or 7 characters.
2) Rhyming. The syllables of Chinese characters are composed of initials and finals, and rhyming words must have the same finals. Poetry pays attention to the beauty of melody and rhythm; therefore, Chinese poetry must rhyme. Rhyming means placing words with the same rhyme in the same position, usually at the end of a line (the underlined characters in Figure 1).
3) Tone. Tone is a characteristic of Chinese: the height, the rise and fall, and the length of speech form the tones of the language. The four ancient tones were: level tone, rising tone, falling tone, and entering tone. The relationship between the four tones and rhyme is very close: words in different tones usually cannot rhyme. Poets divided the four tones into two broad categories: Ping (level tone) and Ze (downward tone). Ping and Ze alternate in the verses according to certain rules, so that the tone is varied rather than monotonous.
Writing a good poem is difficult: poets need to master profound literary skills, which are hard for ordinary people to acquire. In recent years, automatic Chinese poetry generation has made great progress, and several different kinds of approaches have been proposed. One of the most promising takes the generation of Chinese poem lines as a sequence-to-sequence learning problem [1, 2]. The RNN encoder-decoder model with attention mechanism [3, 4] is employed to generate Chinese poems, and it has been shown that this sequence-to-sequence (seq2seq) neural model can successfully generate Chinese poems [5, 6].
However, there are still some defects in these existing approaches: (i) These methods can only generate a poem from a given first line or the user's intent, and often produce poems that are non-thematic or whose overall theme is inconsistent with the user's intent. (ii) Some properties of quatrains, such as symmetry, are not considered. (iii) They cannot generate titles for the generated poems.
To address these issues, we propose a multi-modal three-stage approach to generate a Chinese quatrain and its relevant title from a given picture or theme: 1) We first obtain a theme-related phrase from an external knowledge base called ShiXueHanYing. If the input is a picture, it is mapped into a specific theme by an image recognition module, a GoogleNet [7, 8] fine-tuned on our manually built dataset. To enhance the relevance between the poem and the theme, the Backward and Forward Language Model [9, 10] (B/F-LM) with GRU cells is employed to generate the first line of the poem, which explicitly incorporates the theme-related phrase. 2) After the first line is generated, we utilize an LDA model to find a suitable theme-related phrase from ShiXueHanYing as the title. This title then guides the generation of the other lines to make the whole poem more relevant to it. 3) We propose a hierarchy-attention seq2seq model (HieAS2S) to generate the remaining poem line by line. This model can effectively capture character, phrase, and sentence information between contexts and improve the symmetry delivered in poems.
For machine evaluation, we modify the BLEU evaluation used in [5, 12] to find more suitable references. Furthermore, we use whether the generated poems satisfy the rhyme and tone rules as an additional evaluation. Our experimental studies indicate that the proposed HieAS2S model outperforms several variants of the seq2seq model, and that the proposed three-stage method performs better than strong baselines in both machine and human evaluation. In addition, our title generation method clearly outperforms a standard seq2seq model with attention mechanism. Finally, we developed a web application for users to use and evaluate our approach; most users give satisfactory evaluations.
II Related Work
As poetry is one of the most significant and popular literary forms all over the world, poem generation has attracted many researchers over the past decades, and several different kinds of approaches have been explored. The first kind is based on templates and rules, for instance leveraging semantic and grammar templates, WordNet and parts of speech, word association norms, genetic algorithms [17, 18], and text summarization. In these works, templates are employed to construct poems according to a set of constraints such as rhyme, meter, stress and word frequency. The second kind of method is based on statistical machine translation (SMT). For Chinese couplets (a pair of poetry lines which adhere to certain rules), a phrase-based SMT approach translates the first line into the second line, and this method was later expanded to generate four-line Chinese quatrains.
With the development of deep learning for natural language generation, neural networks have been applied to poetry generation. An early model for Chinese poetry generation based on recurrent neural networks works as follows: given some input keywords, a character-based RNN language model generates the first line, and the other lines are generated sequentially by a variant RNN. An LSTM-based seq2seq model with attention mechanism was used to generate Song Iambics and later extended to generate Chinese quatrains. Furthermore, an RNN-based model with attention mechanism and a polishing schema was proposed to generate Chinese poems and Chinese couplets. To ensure that the generated poem is coherent and semantically consistent with the user's intent, a two-stage method first plans the sub-topics of the poem and then generates each line using a modified RNN encoder-decoder model. Another line of work takes the generation of poem lines as a sequence-to-sequence learning problem, building three poem-line generation blocks based on the RNN encoder-decoder (word-to-line, line-to-line and context-to-line) to generate a whole quatrain. More recently, a simple memory-augmented neural model was proposed to generate innovative poems, and the Conditional Variational AutoEncoder (CVAE) [29, 30, 31] has been employed for Chinese poetry generation.
Our approach is closely related to the deep learning works mentioned above. However, several important differences make our approach novel: 1) The above methods cannot generate titles for the generated poems; we propose a three-stage approach that generates high-quality quatrains together with relevant titles. 2) We introduce a multi-modal way of generating Chinese quatrains, extending generation to support pictures as input. 3) According to the characteristics of Chinese poems, we are the first to incorporate phrase features into the poetry generation model and propose the HieAS2S model, which can effectively capture character, phrase, and sentence information between contexts and improve the symmetry delivered in poems.
In this section, we introduce the algorithm of our approach step by step: 1) generating the first line, which explicitly incorporates the theme-related phrase, 2) generating the title with the LDA model, and 3) generating the other lines with the HieAS2S model. The framework of our generation approach is shown in Figure 2.
III-A First Line Generation
Existing approaches usually generate poems from users' given intent themes. We extend this setting to support generation from pictures. The algorithm of first line generation is shown on the left side of Figure 2. Given a picture, we first map it into a specific ShiXueHanYing theme with the image recognition module. ShiXueHanYing is a poetic phrase taxonomy organized by Wenwei Liu in 1735 which consists of 1,016 manually picked themes, each containing dozens of related phrases; it contains a large number of phrases in total, each a few Chinese characters long.
For the image recognition module, we retrain the final layer of a GoogleNet that has been fully trained on ImageNet. The dataset used for this retraining is an image dataset manually built according to the themes of ShiXueHanYing. Because many themes are fine-grained or abstract and therefore difficult to classify, we manually cluster and filter the themes into 40 classes. We then build a large picture dataset labeled with these themes, called PoetryImage, which has more than 40,000 pictures in total, with about 1,000 pictures per class as the training set and 100 per class as the test set. The top-1 error rate of our image recognition module on the test set is 7.8%, while the top-3 error rate is 4.7%.
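The retraining step above is ordinary transfer learning: the convolutional trunk stays frozen and only the final classification layer is retrained on the new labels. The following toy sketch illustrates that idea with synthetic scalar "pictures" and a hand-rolled logistic final layer (all features, labels, and hyper-parameters here are made up; the paper of course uses GoogleNet activations and the 40 PoetryImage classes, trained with a deep learning framework):

```python
import math
import random

random.seed(0)

def frozen_trunk(x):
    # Stand-in for the frozen GoogleNet layers: any fixed feature map works.
    return [x, x * x]

# Synthetic "pictures": one scalar per image, two classes split at 0.5.
data = [(frozen_trunk(v), 0 if v < 0.5 else 1)
        for v in (random.random() for _ in range(200))]

# The only trainable parameters: the final layer's weights and bias.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(200):  # plain SGD on the logistic loss
    for feats, y in data:
        z = sum(wi * fi for wi, fi in zip(w, feats)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        w = [wi - lr * (p - y) * fi for wi, fi in zip(w, feats)]
        b -= lr * (p - y)

accuracy = sum(
    ((sum(wi * fi for wi, fi in zip(w, feats)) + b) > 0) == (y == 1)
    for feats, y in data
) / len(data)
```

Because only the last layer is trained, far less labeled data is needed than training the whole network, which is what makes the roughly 1,000-images-per-class PoetryImage dataset sufficient.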
After mapping the given picture into a theme, we retrieve all related phrases and randomly pick one. Then we employ the B/F-LM to generate the first line, which explicitly incorporates this theme-related phrase. As shown in Figure 2, the B/F-LM consists of a backward RNN language model and a forward RNN language model, both with GRU cells. Since we know a priori that the theme-related phrase should appear in the sentence, we reverse the phrase and start from it to generate the backward sequence using the backward RNN. Then we feed the result to the forward RNN to generate the whole line.
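The decoding order of the B/F-LM can be sketched as follows. This is a minimal illustration in which a toy deterministic next-character function stands in for both trained GRU language models; the phrase, filler characters, and line length are arbitrary:

```python
def toy_next_char(position):
    # Stand-in for sampling the next character from a trained language model.
    return "月風花雪夜"[position % 5]

def generate_first_line(phrase, line_len=7, chars_before=2):
    # Backward pass: start from the reversed theme phrase and generate the
    # characters that will precede it (also emitted in reverse order).
    rev = list(phrase[::-1])
    for _ in range(chars_before):
        rev.append(toy_next_char(len(rev)))
    # Forward pass: flip the partial line, then complete it left to right.
    line = rev[::-1]
    while len(line) < line_len:
        line.append(toy_next_char(len(line)))
    return "".join(line)

line = generate_first_line("春曉")
```

The key property is that the theme phrase appears verbatim inside the generated line, at a position determined by how many characters the backward model emits.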
III-B Title Generation
Although the seq2seq model with attention mechanism has achieved good results on abstractive summarization [33, 34], we find it performs poorly for Chinese poetry title generation. Our analysis suggests two main reasons:
1) The titles in the training datasets contain a lot of noise. For example, many poets titled their poems after their surroundings while writing, so these titles often contain specific names of persons or landscapes.
2) Generating a title end-to-end from a poem, which is itself written in highly concise language, is a difficult task.
The model can easily overfit and generate unsuitable titles which may contain unrelated person and place names. Because of the first reason above, we also rule out matching-based methods that would retrieve the title of a human-written poem from the corpus for a generated poem. Instead, we generate the title indirectly: rather than generating the title after the whole poem has been produced, we employ an LDA model to find a suitable phrase from ShiXueHanYing as the title after the first line is generated. This title then guides the generation of the other lines to make the whole poem more relevant to it.
As topics have long been investigated as significant latent aspects of terms, we use a large corpus including Chinese poems, Song Iambics and ancient Chinese proses to train a 100-topic LDA model. After training, we obtain for each phrase $p_i$ a probability distribution vector $T_i$ over the topics, i.e., $T_i = (P(z_1 \mid p_i), \ldots, P(z_{100} \mid p_i))$, where $z_k$ denotes the $k$-th topic.
We then define a relevance coefficient between two phrases $p_i$ and $p_j$ in terms of the similarity of their topic distribution vectors $T_i$ and $T_j$.
After the first line is generated, it is segmented into several phrases $q_1, \ldots, q_m$. Then, from all theme-related phrases, we select as the title the phrase that does not appear in the first line and is most relevant to the phrases of the first line.
Since not all phrases are suitable as titles, we use POS tagging to restrict in advance which phrases of ShiXueHanYing can serve as candidate titles.
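The title-selection step can be sketched as below. This is a hedged toy version: the topic vectors are hand-made 3-dimensional stand-ins for the 100-dimensional LDA distributions, and cosine similarity is used as the relevance coefficient (the paper's exact formula may differ):

```python
import math

topic_vec = {
    # Toy P(topic | phrase) vectors; a real model gives 100-dim vectors
    # from LDA trained on poems, Song Iambics and ancient proses.
    "明月": [0.8, 0.1, 0.1],
    "孤舟": [0.1, 0.8, 0.1],
    "秋水": [0.2, 0.7, 0.1],
    "邊塞": [0.1, 0.1, 0.8],
}

def relevance(p, q):
    # Cosine similarity of topic vectors as the relevance coefficient.
    a, b = topic_vec[p], topic_vec[q]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pick_title(first_line_phrases, candidates):
    # Exclude phrases already in the first line, then take the candidate
    # most relevant on average to the first line's phrases.
    pool = [c for c in candidates if c not in first_line_phrases]
    return max(pool, key=lambda c: sum(relevance(c, q)
                                       for q in first_line_phrases))

title = pick_title(["孤舟", "秋水"], ["明月", "秋水", "邊塞"])
```

Here the candidate list plays the role of the POS-filtered ShiXueHanYing phrases, and the water-and-moon phrase wins because its topic distribution is closest to those of the first line.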
III-C The hierarchy-attention seq2seq model
According to the characteristics of Chinese poetry such as symmetry, we propose a hierarchy-attention seq2seq model, called HieAS2S, for generating the other lines. Compared with the standard seq2seq attention model, this model can effectively capture contextual information at hierarchical scales, i.e., the character, phrase and sentence levels.
After generating the first line and the title, the other lines are generated successively. Given the previous $i-1$ generated lines $L_1, \ldots, L_{i-1}$, the HieAS2S model models the probability of the $i$-th line $L_i$. For simplicity, we use $c_{i,1:j}$ to denote the first $j$ characters of the $i$-th line. By the chain rule of probability, we have:

$P(L_i \mid L_1, \ldots, L_{i-1}) = \prod_{j=1}^{n_i} P(c_{i,j} \mid c_{i,1:j-1}, L_1, \ldots, L_{i-1})$

Here $c_{i,j}$ is the $j$-th character of the $i$-th line and $n_i$ is the length of sentence $L_i$.
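The factorization above can be illustrated numerically: the probability of a line is the product of per-character conditionals, and working in the log domain avoids numerical underflow for long sequences (the probabilities below are made up):

```python
import math

# Made-up per-character conditionals P(c_j | c_<j, previous lines).
cond_probs = [0.5, 0.4, 0.25, 0.2, 0.1]

# Product form of the chain rule.
line_prob = math.prod(cond_probs)

# Equivalent log-domain sum, as typically used in practice.
log_prob = sum(math.log(p) for p in cond_probs)
```

For these toy values the line probability is 0.001, and exponentiating the log-domain sum recovers the same number.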
Hierarchy Memory. The architecture of the HieAS2S model is shown on the right of Figure 2. We first introduce the encoder. The one-hot character vectors of the currently generated lines are individually mapped into a $d$-dimensional vector space, giving character embeddings $x_1, \ldots, x_n$; we use pre-trained character embeddings trained on a large external corpus. Then a bi-directional RNN with GRU cells converts these vectors into two sequences of $d$-dimensional vectors, whose concatenation forms the states $h_t$ that capture sentence information.
To consider phrase information, similar to prior work on hierarchical co-attention, we apply 1-D convolution with three different filter window sizes (unigram, bigram and trigram) to the character embedding vectors to obtain phrase features. At each location $t$, we compute the inner product of the character vectors with the filters of each window size $n$:

$\hat{p}_{t,n} = W_n \, x_{t:t+n-1}, \quad n \in \{1, 2, 3\}$

where $W_n$ is the filter weight of window size $n$ and $x_{t:t+n-1}$ consists of the character embeddings starting from location $t$. Then we apply max-pooling across the different n-gram convolution results at each location $t$:

$p_t = \max_{n \in \{1, 2, 3\}} \hat{p}_{t,n}$

This 1-D convolution and max-pooling learn to adaptively select different gram features at each location and preserve the original sequence length and order. After that, we obtain the phrase vectors $p_1, \ldots, p_n$.
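The convolution-then-max-pool step can be made concrete with a toy example. Here the 2-dimensional embeddings and the filter weights are hand-set rather than learned, and each filter produces a scalar response (a real model would learn $d$-dimensional filter banks):

```python
# Toy 2-d character embeddings for a 4-character line.
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]

# One hand-set filter per window size n (unigram/bigram/trigram),
# flattened over the n * 2 values of the window.
filters = {1: [1.0, 1.0],
           2: [0.5, 0.5, 0.5, 0.5],
           3: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25]}

def conv_at(t, n):
    # Inner product of the window-n filter with the n-gram starting at t
    # (zero-padded past the end so every location yields a value).
    window = []
    for k in range(n):
        window += emb[t + k] if t + k < len(emb) else [0.0, 0.0]
    return sum(w * x for w, x in zip(filters[n], window))

# Max-pool across n-gram sizes at each location: one phrase feature per
# position, preserving sequence length and order.
phrase_feats = [max(conv_at(t, n) for n in (1, 2, 3))
                for t in range(len(emb))]
```

Note that the pooled output has exactly one feature per input position, which is what lets the phrase memory be attended over step-by-step alongside the character and sentence memories.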
Multiple Attention. For the decoder part, we employ a GRU RNN with attention mechanism to generate the next line. We take the character embeddings $x$, the phrase vectors $p$, and the bi-directional states $h$ as three kinds of hierarchy attention memories and calculate attended context vectors. Since the dimension of $h$ is twice that of $x$ and $p$, we design two variants to make the dimensions equal: 1) We concatenate $x_t$ and $p_t$ at each time step, and then concatenate the resulting vectors with $h$ across time steps as the hierarchy attention memory. 2) We tile each $x_t$ and each $p_t$ twice individually, and then concatenate the tiled $x$, the tiled $p$, and $h$ across time steps as the hierarchy attention memory.
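The two dimension-matching variants amount to simple shape manipulations, sketched below with toy $d = 2$ vectors (all values are arbitrary; real memories are learned activations):

```python
d = 2
x = [[1, 2], [3, 4]]               # character embeddings, d-dim per step
p = [[5, 6], [7, 8]]               # phrase vectors, d-dim per step
h = [[1, 1, 2, 2], [3, 3, 4, 4]]   # bi-directional states, 2d-dim per step

# Variant 1 (concat): pair x_t and p_t into one 2d vector per time step,
# then append those vectors to h along the time axis.
H_concat = [xt + pt for xt, pt in zip(x, p)] + h

# Variant 2 (tile): repeat each x_t and each p_t twice to reach 2d, then
# append both tiled sequences and h along the time axis.
H_tile = [xt * 2 for xt in x] + [pt * 2 for pt in p] + h
```

Every memory slot in both variants ends up $2d$-dimensional, so a single attention mechanism can score all of them uniformly; the tile variant keeps character and phrase slots separate, which is the property credited later for its better results.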
The $t$-th GRU hidden state $s_t$ of the decoder is calculated as:

$s_t = \mathrm{GRU}(s_{t-1}, g_t)$

Here $g_t$ is a linear combination of the attended context vector $c_t$ and the embedding $e(y_{t-1})$ of the $(t-1)$-th character:

$g_t = W_g [c_t; e(y_{t-1})]$

The attended context vector $c_t$ is computed as a weighted sum over the hierarchy attention memory $H = \{H_1, \ldots, H_K\}$:

$c_t = \sum_{k=1}^{K} \alpha_{t,k} H_k$

And the weight $\alpha_{t,k}$ of each $H_k$ is calculated as:

$\alpha_{t,k} = \dfrac{\exp(\eta(s_{t-1}, H_k))}{\sum_{k'=1}^{K} \exp(\eta(s_{t-1}, H_{k'}))}$

where $\eta$ is a feed-forward alignment function. We define each conditional probability as a softmax over the vocabulary given the decoder state:

$P(y_t \mid y_{1:t-1}, L_1, \ldots, L_{i-1}) = \mathrm{softmax}(W_o s_t)$
Reranking. Given the previously generated lines, we implement beam search to generate $B$ candidate $i$-th lines, where $B$ is the beam width. To make the whole poem more relevant to the title, we rerank all candidate lines by a pre-defined score. The score of each candidate sentence of the $i$-th line combines two terms: the first is based on the perplexity (PPL) of the candidate, and the second measures the correlation between the candidate and the title, where the title is segmented into a set of phrases. PPL is an important metric in Natural Language Processing (NLP); the PPL of a sentence $S = w_1 w_2 \cdots w_N$ is defined as follows:

$\mathrm{PPL}(S) = P(w_1 w_2 \cdots w_N)^{-1/N} = \exp\!\Big(-\frac{1}{N}\sum_{n=1}^{N} \log P(w_n \mid w_{1:n-1})\Big)$

where $N$ is the length of sentence $S$ and $w_n$ is the $n$-th token. A lower PPL indicates a more fluent sentence, so the score rewards low perplexity and high title correlation.
We take the highest-scoring candidate sentence as the $i$-th line of the generated poem.
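The reranking step can be sketched as follows. This is a hedged toy version: the candidate log-probabilities and relevance values are synthetic, and combining negated perplexity with a weighted relevance bonus is an assumption about the scoring form, not the paper's exact formula:

```python
import math

def ppl(log_probs):
    # PPL(S) = exp(-(1/N) * sum_n log P(w_n | w_<n))
    return math.exp(-sum(log_probs) / len(log_probs))

candidates = [
    # (per-character log-probabilities, relevance of the line to the title)
    ([-0.2, -0.3, -0.2, -0.3, -0.2], 0.1),  # fluent but off-topic
    ([-0.5, -0.6, -0.4, -0.5, -0.5], 0.9),  # less fluent but on-topic
]

def score(log_probs, title_relevance, lam=2.0):
    # Low perplexity and high title relevance both raise the score;
    # lam trades fluency against topicality.
    return -ppl(log_probs) + lam * title_relevance

best = max(range(len(candidates)), key=lambda i: score(*candidates[i]))
```

With this weighting the second, more title-relevant candidate wins even though its perplexity is higher, which is exactly the behavior the reranking step is meant to produce.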
Our experiments revolve around the following questions. Q1: As we introduce phrase features into the HieAS2S model, do these features help, and which configuration is the most effective? Q2: Judging from human evaluations, how does the proposed three-stage approach compare with the strong baseline? Q3: Can our method generate suitable titles for the generated poems?
We built a large poetry corpus called corpus-P which contains 149,524 traditional Chinese poems in various genres; most of them are quatrains or regulated verses. We randomly chose 3,000 quatrains for validation and 3,000 quatrains for testing. After removing low-frequency characters, the vocabulary size is 5,295. This poetry corpus was used to train the B/F-LM and HieAS2S models. Another external large corpus (corpus-M), including 18,657 Chinese Song Iambics, 17,000K characters from ancient Chinese proses, and the poetry corpus, was used to train the LDA model and pre-train character embeddings. For the image recognition module, we manually filtered and clustered the themes of ShiXueHanYing into 40 classes and built an image dataset including over 40,000 pictures; 1,000 pictures per class were randomly chosen as the training set and 100 per class as the test set.
For LDA model training, the Jieba Chinese text segmentation toolkit (https://github.com/fxsjy/jieba, a Python-based Chinese word segmentation module) is employed for segmentation and for building the LDA dictionary. In particular, we added all theme-related phrases of ShiXueHanYing to this dictionary. We then used gensim (https://radimrehurek.com/gensim/, a free Python library for NLP) to train a 100-topic LDA model on corpus-M. Our experiments indicate that using corpus-M instead of corpus-P to train the LDA model improves its performance in terms of PPL.
We used the noise-contrastive estimation (NCE) method to pre-train 512-dimensional character embeddings with a skip-gram model. These character embeddings, trained on corpus-M, bring prior knowledge into the model. Since some characters of ShiXueHanYing rarely appear in corpus-P, the pre-trained character embeddings help the models learn them better.
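The skip-gram pre-training consumes (center, context) character pairs drawn from a sliding window over the corpus; the NCE objective itself is omitted here. A minimal sketch of the pair extraction (window size and example text are arbitrary):

```python
def skipgram_pairs(chars, window=2):
    # Emit one (center, context) pair for every context character within
    # `window` positions of each center character.
    pairs = []
    for i, center in enumerate(chars):
        for j in range(max(0, i - window), min(len(chars), i + window + 1)):
            if j != i:
                pairs.append((center, chars[j]))
    return pairs

pairs = skipgram_pairs(list("床前明月光"), window=1)
```

Characters that are rare in corpus-P but present in corpus-M still receive many such training pairs, which is why the corpus-M embeddings help with ShiXueHanYing's rarer characters.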
To train the B/F-LM model, we further pre-processed the training data. For each poem, if the first line contains a phrase in ShiXueHanYing, we used it as the split word and reversed the first half of the line to train the backward LM; otherwise, we randomly picked one word as the split word. For HieAS2S training, we followed the hybrid-style training strategy (training 5-char poems and 7-char poems with the same model and a type indicator) to improve the model. We used the Adam optimization method with a mini-batch size of 128. The learning rate was set to 0.001, which can be a constant or dynamically set [41, 42]. Several techniques were investigated to train and improve the model, including RNN-dropout, gradient clipping and weight decay. The hyper-parameters were chosen empirically and adjusted on the validation set. It is worth mentioning that our models equipped with GRU cells performed slightly better than with LSTM cells in our experiments.
IV-C The Ablation Study (Q1)
In the first experiment, we aimed to test the effectiveness of the proposed model by evaluating and comparing the HieAS2S model with several variants. Four different models were tested:
1) The standard seq2seq model with attention mechanism (baseline), whose attention memory is the bi-directional hidden states $h$.

2) The seq2seq model, called AS2S, whose attention memory consists of $h$ together with the character embeddings $x$.

3) The proposed HieAS2S model with the concatenation-based attention memory (variant 1), called HieAS2S-concat.

4) The proposed HieAS2S model with the tiling-based attention memory (variant 2), called HieAS2S-tile.
For fairness and to reduce the impact of first line generation, we did not use our first line generation method here. Instead, we randomly picked 1,000 poems from the test set and took their first lines as inputs for the above models to generate whole poems.
For evaluation, referring to [5, 12, 6], we used the BLEU-2 score as a cheap evaluation metric for these 4,000 generated poems, with a slight modification for constructing references. Since each poem was generated from a given first line, we constructed the reference set as follows. For each ShiXueHanYing theme, we first counted, for each poem in the dataset, the number of words it shares with the theme's related phrases, and retained the top 20 poems with the largest co-occurrence counts as the reference set for that theme. We used the same method to judge the themes of the poems whose first lines were used for generation. Then, for each generated poem, we retrieved the theme of the original poem of its first line and took that theme's reference set as the reference set of the generated poem. To verify the effectiveness of this modified BLEU method, we conducted a comparative experiment with positive and negative examples: for the 1,000 poems whose first lines were used to generate poetry, we first calculated their BLEU scores against their own themes' reference sets (the positive-groundtruth scores); then, for each of these poems, we replaced its theme with a random ShiXueHanYing theme and calculated its BLEU score as the negative-groundtruth score.
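The reference-set construction can be sketched as follows. The corpus and theme phrases below are toy stand-ins, co-occurrence is simplified to substring matching, and the top-k cutoff is 2 here rather than the paper's 20:

```python
def build_reference_set(poems, theme_phrases, k=2):
    # Rank poems by how many of the theme's related phrases they contain,
    # then keep the top k as BLEU references for that theme.
    def cooccurrence(poem):
        return sum(1 for phrase in theme_phrases if phrase in poem)
    ranked = sorted(poems, key=cooccurrence, reverse=True)
    return ranked[:k]

poems = ["明月松間照",
         "大漠孤煙直",
         "明月出天山 秋風吹不盡",
         "紅豆生南國"]
theme_phrases = ["明月", "秋風", "松"]
refs = build_reference_set(poems, theme_phrases, k=2)
```

Each generated poem is then scored with BLEU-2 against the reference set of its first line's theme, so a poem that drifts off-theme shares few n-grams with its references and scores low, which is exactly what the positive/negative groundtruth comparison checks.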
Because the above BLEU method cannot evaluate the rule-consistency of generated poems, we followed prior work and used the RHYTHM score for further evaluation. The RHYTHM score measures whether a generated poem meets the tonal and rhyme constraints: for each poem line $l$ of length $|l|$, it checks whether the characters of $l$ meet the tonal constraints and whether the line meets the rhyme constraints, and these checks are aggregated over all lines of the poem.
The results are shown in Table I. From the last two rows of the table, we can see that the BLEU-2 scores of the positive-groundtruth are much higher than those of the negative-groundtruth, which shows that the modified BLEU score is effective. As seen from the first four rows, AS2S performs better than the baseline, and both of the proposed models, HieAS2S-tile and HieAS2S-concat, outperform the other models in terms of both BLEU-2 and RHYTHM scores. In addition, HieAS2S-tile performs better than HieAS2S-concat: by keeping character features and phrase features separate, the tile variant can better capture phrase information. These results show that the phrase features are helpful and demonstrate the effectiveness of our proposed models.
IV-D Human evaluation (Q2)
In the second experiment, we compared the proposed three-stage approach with the strong baseline AS2S by human evaluation, using HieAS2S-tile rather than HieAS2S-concat for our approach. Since human evaluation is time-consuming and laborious, we mainly compared the proposed method with one of the most popular approaches, which achieved state-of-the-art performance, to reduce human effort. The AS2S model has been fully compared with most previous poetry generation approaches, such as SMT, Seq2Seq, the LSTM language model, and RNNPG, and has been shown to perform better than the rest, so we did not compare our method with those approaches. In addition, both our proposed method and AS2S can generate poetry from given ShiXueHanYing themes.
For each method, we selected 30 ShiXueHanYing themes and generated 60 quatrains with beam size 10. For further comparison, we also included 40 lesser-known human-written quatrains in the evaluation. We invited 10 human experts to evaluate these 160 poems. Following [21, 12, 26], we set four evaluation standards for the human evaluators to judge the poems: Poeticness, Fluency, Coherence and Meaning. The score of each aspect ranges from 1 to 5, with higher scores being better. The detailed criteria are listed below:
(a) Poeticness: Does the poem follow the rhyme and tone requirements?
(b) Fluency: Does the poem read smoothly and fluently?
(c) Meaning: Does the poem have a certain meaning and artistic conception?
(d) Coherence: Is the poem coherent across lines?
Table II presents the results. Our method performs better than AS2S on all four metrics, which shows its effectiveness. Comparing our method with human-written poems, we found that the Poeticness and Fluency scores of our method are only slightly lower, while the Meaning and Coherence scores of human-written poems are still much higher than those of poems generated by our method. In addition, we developed a web application for users to use and evaluate our approach, and most users gave satisfactory evaluations. Figure 3 shows an example of a 5-char quatrain generated on our web application from a user-uploaded picture and another 7-char quatrain generated from a given theme.
IV-E Title evaluation (Q3)
In this experiment, we evaluated our title generation method against a standard seq2seq model, which has achieved good results on abstractive summarization [33, 34]. We filtered out the poems in corpus-P whose titles are longer than 15 characters or contain low-frequency characters. The remaining poem-title pairs were used to train a GRU seq2seq model with attention mechanism.
For evaluation, we first used our three-stage method to generate 100 poems (including titles) with random ShiXueHanYing themes. Second, these 100 poems (without their titles) were fed into the seq2seq title generation model to generate another 100 titles. Finally, we conducted a pairwise comparison experiment: given each generated poem and its two titles generated by the two methods, we asked the experts to decide which title is more appropriate.
The result of our method vs. the seq2seq model is 83:17, showing that our method significantly outperforms the seq2seq model. We found that the seq2seq model tends to generate very general titles such as "Early Spring", "Send to a Friend", and "Departure". It also often generates titles containing specific geographical or landscape names unrelated to the poem. Figure 4 shows an example from the test containing a poem and a pair of titles.
V Conclusion and Future Work
In this paper, we have made three contributions:
We propose a three-stage approach to generate Chinese quatrains. This approach is able to generate high-quality quatrains with relevant titles. Our experiments demonstrate that the proposed methods of title generation and poetry generation both outperform the strong baselines.
We introduce a multi-modal way for Chinese quatrains generation. We extend the generation way to support poetry generation through pictures. Furthermore, we manually build a large image-to-theme dataset.
We incorporate phrase features into the poetry generation model and propose the HieAS2S model. Our experiments show that this phrase information is helpful and that HieAS2S performs better than several variants and a strong baseline.
In the future, we will explore the following further work:
Extending our approach to generate Song Iambics, which pose bigger challenges.
Focusing on combining semantic image segmentation to further strengthen the relationship between images and poetry.
This work was supported by the National Science Foundation of China (Grant No. 61625204) and partially supported by the State Key Program of the National Science Foundation of China (Grant Nos. 61432012 and 61432014).
-  I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in neural information processing systems, 2014, pp. 3104–3112.
-  K. Cho, B. Van Merriënboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” arXiv preprint arXiv:1409.1259, 2014.
-  D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
-  M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” arXiv preprint arXiv:1508.04025, 2015.
-  Q. Wang, T. Luo, and D. Wang, Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test. Springer International Publishing, 2016.
-  X. Yi, R. Li, and M. Sun, Generating Chinese Classical Poems with RNN Encoder-Decoder, 2017.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” arXiv preprint arXiv:1512.00567, 2015.
-  L. Mou, R. Yan, G. Li, L. Zhang, and Z. Jin, “Backward and forward language modeling for constrained sentence generation,” Computer Science, vol. 4, no. 6, pp. 473–482, 2016.
-  L. Mou, Y. Song, R. Yan, G. Li, L. Zhang, and Z. Jin, “Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation,” 2016.
-  K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using rnn encoder-decoder for statistical machine translation,” Computer Science, 2014.
-  X. Zhang and M. Lapata, “Chinese poetry generation with recurrent neural networks.” in EMNLP, 2014, pp. 670–680.
-  H. G. Oliveira, “Poetryme: a versatile platform for poetry generation,” Computational Creativity, Concept Invention, and General Intelligence, vol. 1, p. 21, 2012.
-  C. Fellbaum et al., “Wordnet: An electronic database,” 1998.
-  M. Agirrezabal, B. Arrieta, A. Astigarraga, and M. Hulden, “Pos-tag based poetry generation with wordnet,” in Proceedings of the 14th European Workshop on Natural Language Generation, 2013, pp. 162–166.
-  Y. Netzer, D. Gabay, Y. Goldberg, and M. Elhadad, “Gaiku: Generating haiku with word associations norms,” in Proceedings of the Workshop on Computational Approaches to Linguistic Creativity. Association for Computational Linguistics, 2009, pp. 32–39.
-  R. Manurung, G. Ritchie, and H. Thompson, “Using genetic algorithms to create meaningful poetic text,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 24, no. 1, pp. 43–64, 2012.
-  H. Manurung, “An evolutionary algorithm approach to poetry generation,” 2004.
-  R. Yan, H. Jiang, M. Lapata, S.-D. Lin, X. Lv, and X. Li, “i, poet: Automatic chinese poetry composition through a generative summarization framework under constrained optimization.” in IJCAI, 2013.
-  L. Jiang and M. Zhou, “Generating chinese couplets using a statistical mt approach,” in Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1. Association for Computational Linguistics, 2008, pp. 377–384.
-  J. He, M. Zhou, and L. Jiang, “Generating chinese classical poems with statistical machine translation models.” in AAAI, 2012.
-  T. Mikolov, M. Karafiát, L. Burget, J. Cernockỳ, and S. Khudanpur, “Recurrent neural network based language model.” in Interspeech, vol. 2, 2010, p. 3.
-  Q. Wang, T. Luo, D. Wang, and C. Xing, “Chinese song iambics generation with neural attention-based model,” arXiv preprint arXiv:1604.06274, 2016.
-  R. Yan, “i, poet: Automatic poetry composition through recurrent neural networks with iterative polishing schema.” IJCAI, 2016.
-  R. Yan, C.-T. Li, X. Hu, and M. Zhang, “Chinese couplet generation with neural network structures.”
-  Z. Wang, W. He, H. Wu, H. Wu, W. Li, H. Wang, and E. Chen, “Chinese poetry generation with planning based neural network,” arXiv preprint arXiv:1610.09889, 2016.
-  J. Zhang, Y. Feng, D. Wang, Y. Wang, A. Abel, S. Zhang, and A. Zhang, “Flexible and creative chinese poetry generation using neural memory,” pp. 1364–1373, 2017.
-  X. Yang, X. Lin, S. Suo, and M. Li, “Generating thematic chinese poetry with conditional variational autoencoder,” 2017.
-  O. Fabius and J. R. V. Amersfoort, “Variational recurrent auto-encoders,” Computer Science, 2014.
-  X. Yan, J. Yang, K. Sohn, and H. Lee, “Attribute2image: Conditional image generation from visual attributes,” vol. 10, no. 2, pp. 776–791, 2015.
-  S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio, “Generating sentences from a continuous space,” Computer Science, 2015.
-  J. Deng, W. Dong, R. Socher, and L. J. Li, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, 2009, pp. 248–255.
-  A. M. Rush, S. Chopra, and J. Weston, “A neural attention model for abstractive sentence summarization,” Computer Science, 2015.
-  S. Chopra, M. Auli, and A. M. Rush, “Abstractive sentence summarization with attentive recurrent neural networks,” in Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, pp. 93–98.
-  M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.
-  J. Lu, J. Yang, D. Batra, and D. Parikh, “Hierarchical question-image co-attention for visual question answering,” 2016.
-  F. Jelinek, R. L. Mercer, L. R. Bahl, and J. K. Baker, “Perplexity: a measure of the difficulty of speech recognition tasks,” The Journal of the Acoustical Society of America, vol. 62, no. S1, pp. S63–S63, 1977.
-  A. Mnih and K. Kavukcuoglu, “Learning word embeddings efficiently with noise-contrastive estimation,” in Advances in Neural Information Processing Systems, 2013, pp. 2265–2273.
-  T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in neural information processing systems, 2013, pp. 3111–3119.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  J. C. Lv, Y. Zhang, and T. Kok Kiong, “Global convergence of gha learning algorithm with nonzero-approaching learning rates,” IEEE Transactions on Neural Networks (TNN), vol. 18, no. 6, pp. 1557–1571, 2007.
-  J. C. Lv, T. Kok Kiong, Y. Zhang, and S. Huang, “Convergence analysis of a class of hyvärinen–oja’s ica learning algorithms with constant learning rates,” IEEE Transactions on Signal Processing (TSP), vol. 57, no. 5, pp. 1811–1824, 2009.
-  Y. Gal and Z. Ghahramani, “A theoretically grounded application of dropout in recurrent neural networks,” Statistics, pp. 285–290, 2015.
-  M. Sundermeyer, R. Schlüter, and H. Ney, “Lstm neural networks for language modeling.” in Interspeech, 2012, pp. 194–197.