# Recurrent Point Processes for Dynamic Review Models

## Abstract

Recent progress in recommender system research has shown the importance of including temporal representations to improve interpretability and performance. Here, we incorporate temporal representations in continuous time via a recurrent point process to build a dynamical model of reviews. Our goal is to characterize how changes in perception, user interest and seasonal effects shape review text.

## Introduction

Customer reviews provide a rich and natural source of unstructured data which can be leveraged to improve the performance of interactive and conversational recommender systems [12]. Reviews are effectively a form of recommendation. Although causal and temporal relations have been known to improve the performance of recommender systems [18], recent natural language processing (NLP) methodologies for ratings and reviews [20] lag behind at incorporating temporal structure into language representations. In the present work, we exploit recurrent neural network (RNN) models for point processes and include neural representations of text to characterize customer reviews. Our goal is to capture how the taste for and importance of items change over time, and how such changes are reflected in the text produced by different users.

Research on reviews has sought to characterize the usefulness and generation of reviews [8, 15] and to provide better representations for rating prediction [7]. The need to interact with customers has led to question-answering solutions [2, 19]. Deep neural network models for rating prediction use embedding representations as well as convolutional neural networks [1]. Dynamic models of text, however, have shown more success from the Bayesian perspective within topic models [16, 17]. Self-exciting point processes have allowed for the clustering of document streams [6, 9]. Different from these works, we focus on the temporal aspects of the text of each review.

## Recurrent Point Review Model (RPRM)

Consider an item (e.g. a business, service or movie) and assume that, since its opening to the public, it has received a collection of reviews $\{(t_i, \mathbf{w}_i)\}_{i=1}^{N}$, where $t_i$ labels the creation time of review $i$ and $\mathbf{w}_i$ corresponds to its text.^{1}

We start by transforming the text of each review into a bag-of-words (BoW) representation $\mathbf{w}_i \in \mathbb{N}^{V}$, where $V$ is the vocabulary size [10].
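As an illustration, a BoW vector over a fixed vocabulary can be built in a few lines of plain Python (the whitespace-and-punctuation tokenizer and the toy vocabulary below are our own simplifying assumptions, not part of the model):

```python
import re
from collections import Counter

def bag_of_words(text, vocab):
    """Map a review's text to a vector of word counts over a fixed
    vocabulary; out-of-vocabulary words are simply dropped."""
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    vec = [0] * len(vocab)
    for word, count in counts.items():
        if word in vocab:
            vec[vocab[word]] = count
    return vec

# Toy vocabulary of size V = 4 (value = column index in the BoW vector).
vocab = {"great": 0, "food": 1, "slow": 2, "service": 3}
print(bag_of_words("Great food, but slow slow service.", vocab))  # [1, 1, 2, 1]
```

In practice the vocabulary is built from the training corpus and the resulting count vectors serve as the marks $\mathbf{w}_i$ of the point process.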

Recurrent Point Process (RPP):
Let us consider a point process with compact support $[0, T]$. Formally, we write the likelihood of a new arrival (i.e. a new review) as an inhomogeneous Poisson process between reviews, conditioned on the history^{2} $\mathcal{H}_{t_i}$:

$$p(t_{i+1} \mid \mathcal{H}_{t_i}) = \lambda(t_{i+1}) \exp\left( -\int_{t_i}^{t_{i+1}} \lambda(s)\, \mathrm{d}s \right), \tag{1}$$

where $\lambda(t)$ is (locally) integrable and is known as the intensity function of the point process. Following [13, 5], we define the functional dependence of the intensity function to be given by an RNN with hidden state $\mathbf{h}_i$, where an exponential function guarantees that the intensity is non-negative:

$$\lambda(t) = \exp\left( \mathbf{v}^{\top} \mathbf{h}_i + b\,(t - t_i) + c \right). \tag{2}$$

Here the vector $\mathbf{v}$ and the scalars $b$ and $c$ are trainable variables. The update equation for the hidden variables of the recurrent network can be written as a general non-linear function

$$\mathbf{h}_i = f_{\theta}\left( \mathbf{h}_{i-1}, t_i, \mathbf{w}_i \right), \tag{3}$$

where $t_i$ and $\mathbf{w}_i$ label the creation time and the text's BoW representation of review $i$, respectively, and $\theta$ denotes the network's parameters. We thus use the BoW representation of the review text as marks in the recurrent marked temporal point process [4]. Inserting Eq. (2) into Eq. (1) and integrating over time immediately yields the likelihood as a function of the model parameters.
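Because the intensity in Eq. (2) is the exponential of an affine function of $t$, the integral in Eq. (1) has a closed form. A minimal numerical sketch (the hidden state and parameter values below are arbitrary placeholders; in the model they come from the trained RNN):

```python
import numpy as np

def log_density_next_review(h, t_prev, t_next, v, b, c):
    """Log-density of the next review time under the intensity
    lambda(t) = exp(v.h + b*(t - t_prev) + c): the log-intensity at
    t_next minus the closed-form integral of lambda over (t_prev, t_next)."""
    base = float(v @ h) + c                    # v^T h_i + c
    log_lam = base + b * (t_next - t_prev)     # log-intensity at t_next
    # Integrating exp(base + b*(s - t_prev)) from t_prev to t_next:
    integral = (np.exp(log_lam) - np.exp(base)) / b
    return log_lam - integral

h = np.array([0.5, -0.2])   # placeholder hidden state h_i
v = np.array([1.0, 0.3])    # trainable vector of Eq. (2)
b, c = -0.1, 0.2            # trainable scalars of Eq. (2)
print(log_density_next_review(h, 0.0, 2.0, v, b, c))
```

Maximum-likelihood training sums such terms over all observed inter-review intervals (plus the text terms below) and backpropagates through the RNN.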

Dynamic Neural Text Model: To model the text component of reviews, we assume the words in review $i$ are generated independently, conditioned on $\mathbf{h}_{i-1}$, the temporal representation of the RPP above. Specifically, we follow [14] and write the conditional probability of generating the $j$th word of the $i$th review as

$$p(x_{ij} \mid \mathbf{h}_{i-1}) = \frac{\exp\left( -E(x_{ij}; \mathbf{h}_{i-1}) \right)}{\sum_{k=1}^{V} \exp\left( -E(\mathbf{e}_k; \mathbf{h}_{i-1}) \right)}, \tag{4}$$

$$E(x_{ij}; \mathbf{h}_{i-1}) = -\mathbf{h}_{i-1}^{\top} R\, \mathbf{x}_{ij} - b_{x_{ij}}, \tag{5}$$

where $R$ and $\mathbf{b}$ are trainable parameters, and $\mathbf{x}_{ij}$ is the one-hot representation of the word at position $j$. The complete log-likelihood of the RPRM model can then be written as

$$\mathcal{L} = \sum_{i=1}^{N} \left[ \log p\left( \tau_i \mid \mathcal{H}_{t_{i-1}} \right) + \log \mathrm{Mult}\left( \mathbf{w}_i \mid \mathbf{h}_{i-1} \right) \right], \tag{6}$$

where $\tau_i = t_i - t_{i-1}$ denotes the inter-review time for the item, and $\mathrm{Mult}$ is a multinomial distribution over the word probabilities of Eq. (4) and the word counts $\mathbf{w}_i$.
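The word model of Eqs. (4)–(5) is a softmax over the vocabulary, and the text term of Eq. (6) scores the observed BoW counts under it. A sketch with placeholder shapes (the random initialisation and the log-sum-exp stabilisation are ours; `R` and `b` play the role of the trainable parameters above):

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log-softmax over the vocabulary axis."""
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def text_log_likelihood(h, R, b, counts):
    """Multinomial log-likelihood of BoW counts, up to the constant
    combinatorial factor: sum_w counts[w] * log p(w | h), with logits
    h^T R + b as in Eqs. (4)-(5)."""
    return float(counts @ log_softmax(h @ R + b))

rng = np.random.default_rng(0)
h = rng.normal(size=3)              # placeholder hidden state
R = rng.normal(size=(3, 5))         # hidden_dim x V projection
b = np.zeros(5)                     # per-word bias
counts = np.array([2, 0, 1, 0, 0])  # toy BoW counts, V = 5
print(text_log_likelihood(h, R, b, counts))
```

The constant multinomial coefficient drops out of gradient-based training, so it can safely be omitted from the objective.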

Baseline model: In order to test our model we define LSTM-BoW, which models the inter-review time with an exponential distribution whose mean is $\mu(\mathbf{h}_i)$, and the probability over words as $g(\mathbf{h}_i)$. The functions $\mu$ and $g$ are given by neural networks with parameters $\phi$, and $\mathbf{h}_i$ is the hidden state of an LSTM network [11]. We also consider additional LSTM and RPP models which only take $t_i$ as input, so as to check whether the BoW representation helps in the prediction of the inter-review times.
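The time component of this baseline has a simple log-likelihood. A sketch (in the baseline the mean is produced by a neural network from the LSTM state; here it is just a scalar argument):

```python
import math

def exp_time_log_likelihood(tau, mu):
    """Log-density of inter-review time tau under an exponential
    distribution parameterised by its mean mu: log(1/mu) - tau/mu."""
    return -math.log(mu) - tau / mu

print(exp_time_log_likelihood(1.0, 1.0))  # -1.0
```

Note that, unlike Eq. (2), this assumes a constant hazard between consecutive reviews, which is what the comparison with the RPP models probes.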

## Experiments and Results

We test our models on the Yelp19 dataset.^{3}

We trained all models by maximum likelihood and use two evaluation metrics: root-mean-squared error (RMSE) on the inter-review times and predictive perplexity on the review text. The latter is defined as $\exp\left( -\frac{1}{D_t} \sum_{i=1}^{D_t} \frac{\log p(\mathbf{w}_i)}{N_i} \right)$ [17], where $N_i$ is the number of words in review $i$ and $D_t$ is the number of reviews at time $t$. Our results are presented in Table 1 and show that the best model in both metrics is the Recurrent Point Review Model. Note also that the models that leverage the information encoded in the text (through $\mathbf{w}_i$) improve the RMSE on the inter-review times over the models which do not see $\mathbf{w}_i$.

Table 1: Evaluation results.

| Model | RMSE | | Pred. Perplexity |
|---|---|---|---|
| LSTM | 96.8813 | 0.1788 | - |
| RPP | 96.3794 | 0.1873 | - |
| LSTM-BoW | 95.3414 | 0.2046 | 519.90 |
| RPRM | 92.3850 | 0.2533 | 511.32 |
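Both metrics are straightforward to compute from model outputs. The sketch below assumes the per-review word-count normalisation in the perplexity definition above (our reading of [17]):

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-squared error over predicted inter-review times."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

def predictive_perplexity(review_log_probs, review_lengths):
    """exp(-mean_i log p(w_i) / N_i) over the reviews being scored,
    where N_i is the number of words in review i."""
    lp = np.asarray(review_log_probs, dtype=float)
    n = np.asarray(review_lengths, dtype=float)
    return float(np.exp(-np.mean(lp / n)))

print(rmse([1.0, 2.0], [1.5, 2.5]))                     # 0.5
print(predictive_perplexity([10 * np.log(0.1)], [10]))  # approximately 10
```

As a sanity check, a model that assigns each word probability $1/V$ attains perplexity $V$, so values below the vocabulary size indicate the text model is informative.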

## Conclusion and Future Work

In this work we incorporated a bag-of-words language model as the marks of a recurrent temporal point process. This yields a model which characterizes temporal and causal representations of text, allowing for a richer representation of customer reviews. We showed that this improves predictive performance on review times, and it opens the door to text prediction. We will extend this methodology to rating prediction as well as to more complex models of text.^{4}

### Footnotes

- In what follows we shall drop the index over items.
- a.k.a. filtration.
- https://www.yelp.com/dataset
- One part of this research has been funded by the Federal Ministry of Education and Research of Germany as part of the competence center for machine learning ML2R (01IS18038A).

### References

- [1] (2017) TransNets: learning to transform for recommendation. In Proceedings of the Eleventh ACM Conference on Recommender Systems, pp. 288–296.
- [2] (2019) Driven answer generation for product-related questions in e-commerce. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 411–419.
- [3] (2007) An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure. Springer Science & Business Media.
- [4] (2016) Recurrent marked temporal point processes: embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1555–1564.
- [5] (2016) Recurrent marked temporal point processes: embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1555–1564.
- [6] (2015) Dirichlet-Hawkes processes with applications to clustering continuous-time document streams. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 219–228.
- [7] (2019) Structured neural topic models for reviews. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3429–3439.
- [8] (2019) Product-aware helpfulness prediction of online reviews. In The World Wide Web Conference, pp. 2715–2721.
- [9] (2015) HawkesTopic: a joint model for network inference and topic modeling from text-based cascades. In International Conference on Machine Learning, pp. 871–880.
- [10] (2009) Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems 22, pp. 1607–1614.
- [11] (1997) Long short-term memory. Neural Computation 9, pp. 1735–1780.
- [12] (2019) DAML: dual attention mutual learning between ratings and reviews for item recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 344–352.
- [13] (2017) The neural Hawkes process: a neurally self-modulating multivariate point process. In Advances in Neural Information Processing Systems, pp. 6738–6748.
- [14] (2016) Neural variational inference for text processing. In International Conference on Machine Learning, pp. 1727–1736.
- [15] (2019) Generating product descriptions from user reviews. In The World Wide Web Conference, pp. 1354–1364.
- [16] (2018) Dynamic embeddings for language evolution. In Proceedings of the 2018 World Wide Web Conference, pp. 1003–1011.
- [17] (2012) Continuous time dynamic topic models. arXiv preprint arXiv:1206.3298.
- [18] (2017) Recurrent recommender networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pp. 495–503.
- [19] (2018) Aware answer prediction for product-related questions incorporating aspects. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 691–699.
- [20] (2017) Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pp. 425–434.