Uncertainty on Asynchronous Time Event Prediction

Marin Biloš*, Bertrand Charpentier*, Stephan Günnemann
Technical University of Munich, Germany
{bilos, charpent, guennemann}@in.tum.de
* Equal contribution
Abstract

Asynchronous event sequences are the basis of many applications throughout different industries. In this work, we tackle the task of predicting the next event (given a history), and how this prediction changes with the passage of time. Since at some time points (e.g. predictions far into the future) we might not be able to predict anything with confidence, capturing uncertainty in the predictions is crucial. We present two new architectures, WGP-LN and FD-Dir, modelling the evolution of the distribution on the probability simplex with time-dependent logistic normal and Dirichlet distributions. In both cases, the combination of RNNs with either Gaussian processes or function decomposition allows us to express rich temporal evolution of the distribution parameters, and naturally captures uncertainty. Experiments on class prediction, time prediction and anomaly detection demonstrate the strong performance of our models on various datasets compared to other approaches.

1 Introduction

Discrete events, occurring irregularly over time, are a common data type generated naturally in our everyday interactions with the environment (see Fig. 1(a) for an illustration). Examples include messages in social networks, medical histories of patients in healthcare, and integrated information from multiple sensors in complex systems like cars. The problem we are solving in this work is: given a (past) sequence of asynchronous events, what will happen next? Answering this question enables us to predict, e.g., what action an internet user will likely perform or which part of a car might fail.

While many recurrent models for asynchronous sequences have been proposed in the past [18, 5], they are ill-suited for this task since they output a single prediction (e.g. the most likely next event) only. In an asynchronous setting, however, such a single prediction is not enough since the most likely event can change with the passage of time – even if no other events happen. Consider a car approaching another vehicle in front of it. Assuming nothing happens in the meantime, we can expect different events at different times in the future. When forecasting a short time ahead, one expects the driver to start overtaking; after a longer time one would expect braking; in the long term, one would expect a collision. Thus, the expected behavior changes depending on the time we forecast, assuming no events occurred in the meantime. Fig. 1(a) illustrates this schematically: having observed a square and a pentagon, it is likely to observe a square after a short time, while a circle after a longer time. Clearly, if some event occurs, e.g. braking/square, the event at the (then) observed time will be taken into account, updating the temporal prediction.

An ad-hoc solution to this problem would be to discretize time. However, if the events are near each other, a high sampling frequency is required, giving us very high computational cost. Besides, since there can be intervals without events, an artificial ‘no event’ class is required.

In this work, we solve these problems by directly predicting the entire evolution of the events over (continuous) time. Given a past asynchronous sequence as input, we can predict and evaluate for any future timepoint what the next event will likely be (under the assumption that no other event happens in between which would lead to an update of our model). Crucially, the likelihood of the events might change and one event can be more likely than others multiple times in the future. This periodicity exists in many event sequences. For instance, given that a person is currently at home, a smart home would predict a high probability that the kitchen will be used at lunch and/or dinner time (see Fig. 1(a) for an illustration). We require that our model captures such multimodality.

Figure 1: (a) An event can be expected multiple times in the future. (b) At some times we should be uncertain in the prediction. Yellow denotes higher probability density.

While Fig. 1(a) illustrates the evolution of the categorical distribution (corresponding to the probability of a specific event class to happen), an issue still arises outside of the observed data distribution. E.g. in some time intervals we can be certain that two classes are equiprobable, having observed many similar examples. However, if the model has not seen any examples at specific time intervals during training, we do not want to give a confident prediction. Thus, we incorporate uncertainty in a prediction directly in our model. In places where we expect events, the confidence will be higher, and outside of these areas the uncertainty in a prediction will grow as illustrated in Fig. 1(b). Technically, instead of modeling the evolution of a categorical distribution, we model the evolution of a distribution on the probability simplex. Overall, our model enables us to operate with the asynchronous discrete event data from the past as input to perform continuous-time predictions into the future incorporating the predictions' uncertainty. This is in contrast to existing works such as [5, 17].

2 Model Description

We consider a sequence of events $e_i = (c_i, t_i)$, where $c_i \in \{1, \ldots, C\}$ denotes the class of the $i$-th event and $t_i \in \mathbb{R}^+$ is its time of occurrence. We assume the events arrive over time, i.e. $t_i > t_{i-1}$, and we introduce $\tau_i = t_i - t_{i-1}$ as the observed time gap between the $i$-th and the $(i{-}1)$-th event. The history preceding the $i$-th event is denoted by $\mathcal{H}_i$. Let $\Delta^{C-1}$ denote the set of probability vectors that form the $(C{-}1)$-dimensional simplex, and $\mathbb{P}(\boldsymbol\theta)$ be a family of probability distributions on this simplex parametrized by parameters $\boldsymbol\theta$. Every sample $\mathbf{p} \sim \mathbb{P}(\boldsymbol\theta)$ corresponds to a (categorical) class distribution.

Given $e_i$ and $\mathcal{H}_i$, our goal is to model the evolution of the class probabilities, and their uncertainty, of the next event over time. Technically, we model the parameters $\boldsymbol\theta(\tau)$, leading to a distribution $\mathbb{P}(\boldsymbol\theta(\tau))$ over the class probabilities for all $\tau \ge 0$. Thus, we can estimate the most likely class after a time gap $\tau$ by calculating $\operatorname{argmax}_c \bar{p}_c(\tau)$, where $\bar{\mathbf{p}}(\tau) = \mathbb{E}_{\mathbf{p} \sim \mathbb{P}(\boldsymbol\theta(\tau))}[\mathbf{p}]$ is the expected probability vector. Even more, since we do not consider a point estimate, we can get the amount of certainty in a prediction. For this, we estimate the probability of class $c$ being more likely than the other classes, given by $q_c(\tau) = \mathbb{P}(p_c > p_{c'} \ \forall c' \ne c)$. This tells us how certain we are that one class is the most probable (i.e. 'how often' $c$ is the argmax when sampling from $\mathbb{P}(\boldsymbol\theta(\tau))$).
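To make these quantities concrete, the following sketch (our own illustrative code, assuming a Dirichlet parametrization of $\mathbb{P}(\boldsymbol\theta)$ at one fixed time gap) estimates the expected probability vector and the certainty $q_c$ by sampling:

import numpy as np

def predicted_class_and_certainty(alpha, n_samples=10000, rng=None):
    # alpha: concentration parameters theta(tau) of a Dirichlet at a fixed time gap tau
    rng = np.random.default_rng(rng)
    samples = rng.dirichlet(alpha, size=n_samples)   # draws p ~ P(theta(tau))
    p_bar = samples.mean(axis=0)                     # expected probability vector
    c_hat = int(np.argmax(p_bar))                    # most likely class
    # certainty: how often class c_hat is the argmax when sampling from P(theta(tau))
    q = np.mean(np.argmax(samples, axis=1) == c_hat)
    return c_hat, p_bar, q

# example: a confident versus an uncertain prediction for three classes
print(predicted_class_and_certainty(np.array([20., 2., 2.])))
print(predicted_class_and_certainty(np.array([1., 1., 1.])))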

Two expressive and well-established choices for the family are the Dirichlet distribution and the logistic-normal distribution (Appendix A). Based on a common modeling idea, we present two models that exploit the specificities of these distributions: the WGP-LN (Sec. 2.1) and the FD-Dir (Sec. 2.2). We also introduce a novel loss to train these models in Sec. 2.3.

Independent of the chosen model, we have to tackle two core challenges: (1) Expressiveness. Since the time dependence of $\boldsymbol\theta(\tau)$ may be of different forms, we need to capture complex behavior. (2) Locality. For regions out of the observed data we want to have a higher uncertainty in our predictions. Specifically for $\tau \to \infty$, i.e. far into the future, the distribution should have a high uncertainty.

2.1 Logistic-Normal via a Weighted Gaussian Process (WGP-LN)

Figure 2: The model framework. (a) During training we use event sequences. (b) Given a new sequence of events the model generates pseudo points that describe $\boldsymbol\theta(\tau)$, i.e. the temporal evolution of the distribution on the simplex. These pseudo points are based on the data that was observed in the training examples and weighted accordingly. We also have a measure of certainty in our prediction.

We start by describing our model for the case when $\mathbb{P}(\boldsymbol\theta)$ is the family of logistic-normal (LN) distributions. How to model a compact yet expressive evolution of the LN distribution? Our core idea is to exploit the fact that the LN distribution corresponds to a multivariate random variable whose logits follow a normal distribution – and a natural way to model the evolution of a normal distribution is a Gaussian Process. Given this insight, the core idea of our model is illustrated in Fig. 2: (1) we generate pseudo points based on a hidden state of an RNN whose input is a sequence, (2) we fit a Gaussian Process to the pseudo points, thus capturing the temporal evolution, and (3) we use the learned GP to estimate the parameters $\mu_c(\tau)$ and $\sigma_c^2(\tau)$ of the final LN distribution at any specific time $\tau$. Thus, by generating a small number of points we characterize the full distribution.

Classic GP. To keep the complexity low, we train one GP per class $c$. That is, our model generates $M$ pseudo points $(x_j^{(c)}, y_j^{(c)})$ per class $c$, where $y_j^{(c)}$ represents a logit. Note that the first coordinate of each pseudo point corresponds to time, leading to the temporal evolution when fitting the GP. Essentially we perform a non-parametric regression from the time domain to the logit space. Indeed, using a classic GP along with the pseudo points, the parameters of the logistic-normal distribution, $\mu_c(\tau)$ and $\sigma_c^2(\tau)$, can be easily computed for any time $\tau$ in closed form:

$\mu_c(\tau) = \mathbf{k}_c^\top K_c^{-1} \mathbf{y}_c, \qquad \sigma_c^2(\tau) = k(\tau, \tau) - \mathbf{k}_c^\top K_c^{-1} \mathbf{k}_c$   (1)

where $K_c$ is the Gram matrix w.r.t. the pseudo points of class $c$ based on a kernel $k$ (e.g. the squared exponential $k(x, x') = \exp(-\frac{(x - x')^2}{2 l^2})$), i.e. $(K_c)_{j j'} = k(x_j^{(c)}, x_{j'}^{(c)})$. Vector $\mathbf{k}_c$ contains at position $j$ the value $k(\tau, x_j^{(c)})$, and $\mathbf{y}_c$ the value $y_j^{(c)}$. At every time point $\tau$ the logits then follow a multivariate normal distribution with mean $\boldsymbol\mu(\tau) = (\mu_1(\tau), \ldots, \mu_C(\tau))$ and covariance $\Sigma(\tau) = \mathrm{diag}(\sigma_1^2(\tau), \ldots, \sigma_C^2(\tau))$.
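A minimal numpy sketch of the computation in Eq. 1, assuming a squared-exponential kernel and noise-free pseudo points (the function names and the small jitter term are our own additions, not part of the model specification):

import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # k(x, x') = exp(-(x - x')^2 / (2 l^2))
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * lengthscale ** 2))

def gp_logit_posterior(tau, x_c, y_c, lengthscale=1.0, jitter=1e-6):
    # x_c, y_c: time coordinates and logit values of the M pseudo points of class c
    K = rbf_kernel(x_c, x_c, lengthscale) + jitter * np.eye(len(x_c))
    k_star = rbf_kernel(np.atleast_1d(tau), x_c, lengthscale)[0]      # k(tau, x_j)
    K_inv = np.linalg.inv(K)
    mu = k_star @ K_inv @ y_c                                          # mean of the logit
    var = rbf_kernel(np.atleast_1d(tau), np.atleast_1d(tau))[0, 0] - k_star @ K_inv @ k_star
    return mu, max(var, 0.0)

# pseudo points for one class: logits high around tau = 1 and tau = 3
mu, var = gp_logit_posterior(2.0, np.array([1.0, 3.0, 5.0]), np.array([2.0, 1.5, -1.0]))
print(mu, var)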

Using a GP enables us to describe complex functions. Furthermore, since a GP models uncertainty in the prediction depending on the pseudo points, uncertainty is higher in areas far away from the pseudo points. Specifically, this holds for the distant future ($\tau \to \infty$); thus, matching the idea of locality. However, uncertainty is always low around the pseudo points. Thus $M$ should be carefully picked since there is a trade-off between having high certainty at (too) many time points and the ability to capture complex behavior. Thus, in the following we present an extended version solving this problem.

Figure 3: WGP on toy data with different weights. (a) All weights are 1 – classic GP. (b) Zero weights discard points. (c) Mixed weight assignment.

Weighted GP. We would like to pick $M$ large enough to express rich multimodal functions and allow the model to discard unnecessary points. To do this we generate an additional weight vector $\mathbf{w}^{(c)}$ that assigns the weight $w_j^{(c)}$ to a point $(x_j^{(c)}, y_j^{(c)})$. Giving a zero weight to a point should discard it, and giving weight $1$ will return the same result as with a classic GP. To achieve this goal, we introduce a new kernel function:

$k'\big((x_i, w_i), (x_j, w_j)\big) = g(w_i, w_j) \cdot k(x_i, x_j)$   (2)

Figure 4: Model diagram

where $k$ is the same as above. The function $g$ weights the kernel according to the weights $w_i$ and $w_j$ of $x_i$ and $x_j$. We require $g$ to have the following properties: (1) $g$ should be a valid kernel over the weights, since then the function $k'$ is a valid kernel as well; (2) the importance of pseudo points should not increase, giving $g(w_i, w_j) \le \min(w_i, w_j)$; this fact implies that a point with zero weight will be discarded since $g(0, w_j) = 0$, as desired. The function $g(w_i, w_j) = \min(w_i, w_j)$ is a simple choice that fulfills these properties. In Fig. 3 we show the effect of different weights when fitting a GP (see Appendix B for a more detailed discussion of the behavior of the kernel).

To predict $\mu_c(\tau)$ and $\sigma_c^2(\tau)$ for a new time $\tau$, we can now simply apply Eq. 1 based on the new kernel $k'$, where the weight for the query point is $1$.
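The weighted kernel of Eq. 2 with $g(w_i, w_j) = \min(w_i, w_j)$ can be sketched as follows; the helper names are ours, and the example illustrates how a zero weight effectively removes a pseudo point from the regression:

import numpy as np

def weighted_kernel(x1, w1, x2, w2, lengthscale=1.0):
    # k'((x1, w1), (x2, w2)) = min(w1, w2) * k(x1, x2), with a squared-exponential k
    k = np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (2 * lengthscale ** 2))
    return np.minimum(w1[:, None], w2[None, :]) * k

def wgp_posterior(tau, x_c, y_c, w_c, jitter=1e-6):
    # the query point (tau, w*) gets weight w* = 1
    K = weighted_kernel(x_c, w_c, x_c, w_c) + jitter * np.eye(len(x_c))
    k_star = weighted_kernel(np.atleast_1d(tau), np.ones(1), x_c, w_c)[0]
    K_inv = np.linalg.inv(K)
    mu = k_star @ K_inv @ y_c
    var = 1.0 - k_star @ K_inv @ k_star      # k'((tau,1),(tau,1)) = 1 for the RBF kernel
    return mu, max(var, 0.0)

x = np.array([1.0, 3.0, 5.0]); y = np.array([2.0, 1.5, -1.0])
print(wgp_posterior(3.0, x, y, np.array([1.0, 1.0, 1.0])))   # all weights 1: classic GP
print(wgp_posterior(3.0, x, y, np.array([1.0, 0.0, 1.0])))   # middle point discarded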

To summarize: from a hidden state $\mathbf{h}_i$ of the RNN we use a neural network to generate $M$ weighted pseudo points $(x_j^{(c)}, y_j^{(c)}, w_j^{(c)})$ per class $c$. Fitting a weighted GP to these points enables us to model the temporal evolution of the normal distribution over the logits and, thus, accordingly of the logistic-normal distribution. Fig. 4 shows an illustration of this model.

Note that the cubic complexity of a GP, due to the matrix inversion, is not an issue since the number of pseudo points $M$ is usually small, while still allowing us to represent rich multimodal functions. Crucially, given the loss defined in Sec. 2.3, our model is fully differentiable, enabling efficient training.

2.2 Dirichlet via a Function Decomposition (FD-Dir)

Next, we consider the Dirichlet distribution to model the uncertainty in the predictions. The goal is to model the evolution of the concentration parameters of the Dirichlet over time. Since, unlike for the logistic-normal, we cannot draw the connection to a GP, we propose to decompose the parameters of the Dirichlet distribution into expressive (local) functions in order to allow complex dependence on time.

Since the concentration parameters need to be positive, we propose the following decomposition of $\alpha_c(\tau)$ in the log-space

$\log \alpha_c(\tau) = \sum_{m=1}^{M} w_m^{(c)} \, \mathcal{N}\big(\tau \mid \mu_m^{(c)}, \sigma_m^{(c)2}\big) + \nu_c$   (3)

where the real-valued scalar $\nu_c$ is a constant prior on $\log \alpha_c(\tau)$ which takes over in regions where the Gaussian basis functions are close to $0$.

The decomposition into a sum of Gaussians is beneficial for various reasons:

First note that the concentration parameter $\alpha_c$ can be viewed as the effective number of observations of class $c$. Accordingly, the larger $\alpha_c$, the more certain the prediction becomes. Thus, the functions can describe time regions where we observed data and, thus, should be more certain; i.e. regions around the means $\mu_m^{(c)}$ where the 'width' is controlled by $\sigma_m^{(c)}$.

Since most of the functions' mass is centered around their mean, the locality property is fulfilled. Put differently: in regions where we did not observe data (i.e. where the basis functions are close to $0$), the value of $\log \alpha_c(\tau)$ is close to the prior value $\nu_c$. In the experiments, we use $\nu_c = 0$, thus $\alpha_c(\tau) \approx 1$ in the out-of-observed-data regions; a common (uninformative) prior value for the Dirichlet parameters. Specifically for $\tau \to \infty$ the resulting predictions have a high uncertainty.

Lastly, a linear combination of translated Gaussians is able to approximate a wide family of functions [3]. And similar to the weighted GP, the coefficients $w_m^{(c)}$ allow discarding unnecessary basis functions.

The basis function parameters $(w_m^{(c)}, \mu_m^{(c)}, \sigma_m^{(c)})$ are the output of the neural network, and can also be interpreted as weighted pseudo points that determine the regression of the Dirichlet parameters $\alpha_c(\tau)$, i.e. of $\log \alpha_c(\tau)$, over time (Fig. 2 & Fig. 4). The concentration parameters themselves also have a natural interpretation: they can be viewed as the rate of events of class $c$ after a time gap $\tau$.
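For illustration, a small sketch of the decomposition in Eq. 3 (variable names are ours; with $\nu_c = 0$, $\alpha_c(\tau)$ falls back to $1$ away from the basis functions):

import numpy as np

def alpha_c(tau, w, mu, sigma, nu=0.0):
    # log alpha_c(tau) = sum_m w_m * N(tau | mu_m, sigma_m^2) + nu_c
    tau = np.atleast_1d(tau)[:, None]
    basis = np.exp(-(tau - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return np.exp(basis @ w + nu)

# one class with basis functions around tau = 1 and tau = 4
taus = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
print(alpha_c(taus, w=np.array([3.0, 2.0]), mu=np.array([1.0, 4.0]), sigma=np.array([0.3, 0.5])))
# far from the observed regions (tau = 10) alpha_c falls back to exp(nu) = 1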

2.3 Model Training with the Distributional Uncertainty Loss

The core feature of our models is to perform predictions in the future with uncertainty. The classical cross-entropy loss, however, is not well suited to learn uncertainty on the categorical distribution since it is only based on a single (point estimate) of the class distribution. That is, the standard cross-entropy loss for the event $e_i$ between the true categorical distribution $\mathbf{p}_i^*$ and the predicted (mean) categorical distribution $\bar{\mathbf{p}}(\tau_i)$ is $\mathrm{CE}\big(\mathbf{p}_i^*, \bar{\mathbf{p}}(\tau_i)\big) = -\sum_c p_{i,c}^* \log \bar{p}_c(\tau_i)$. Due to the point estimate $\bar{\mathbf{p}}(\tau_i)$, the uncertainty on the class probabilities is completely neglected.

Instead, we propose the uncertainty cross-entropy which takes into account uncertainty:

$\mathcal{L}_i^{\mathrm{UCE}} = \mathbb{E}_{\mathbf{p} \sim \mathbb{P}(\boldsymbol\theta(\tau_i))}\big[\mathrm{CE}(\mathbf{p}_i^*, \mathbf{p})\big] = -\sum_c p_{i,c}^* \, \mathbb{E}\big[\log p_c\big]$   (4)

Remark that the uncertainty cross-entropy does not use the compound distribution but considers the expected cross-entropy. Based on Jensen's inequality, it holds: $\mathrm{CE}\big(\mathbf{p}_i^*, \bar{\mathbf{p}}(\tau_i)\big) \le \mathcal{L}_i^{\mathrm{UCE}}$. Consequently, a low value of the uncertainty cross-entropy guarantees a low value of the classic cross-entropy loss, while additionally taking the variation in the class probabilities into account. A comparison between the classic cross-entropy and the uncertainty cross-entropy on a simple classification task and on anomaly detection in the asynchronous event setting is presented in Appendix F.

In practice the true distribution $\mathbf{p}_i^*$ is often a one-hot encoded representation of the observed class $c_i$, which simplifies the computations. During training, the models compute $\boldsymbol\theta(\tau)$ and evaluate it at the true time of the next event, i.e. at $\tau_i$, given the past event $e_{i-1}$ and the history $\mathcal{H}_{i-1}$. The final loss for a sequence of $N$ events is simply obtained by summing up the losses of all events, $\mathcal{L} = \sum_i \mathcal{L}_i^{\mathrm{UCE}}$.

Fast computation. In order to have an efficient computation of the uncertainty cross-entropy, we propose closed-form expressions. (1) Closed-form loss for Dirichlet. Given that the observed class $c_i$ is one-hot encoded by $\mathbf{p}_i^*$, the uncertainty cross-entropy can be computed in closed form for the Dirichlet:

$\mathcal{L}_i^{\mathrm{UCE}} = \psi\big(\alpha_0(\tau_i)\big) - \psi\big(\alpha_{c_i}(\tau_i)\big)$   (5)

where $\psi$ denotes the digamma function and $\alpha_0(\tau) = \sum_c \alpha_c(\tau)$. (2) Loss approximation for GP. For the WGP-LN, we approximate $\mathcal{L}_i^{\mathrm{UCE}}$ based on a second order series expansion (Appendix C):

$\mathcal{L}_i^{\mathrm{UCE}} \approx -\mu_{c_i}(\tau_i) + \log \sum_c e^{\mu_c(\tau_i) + \sigma_c^2(\tau_i)/2} - \frac{\sum_c \big(e^{\sigma_c^2(\tau_i)} - 1\big)\, e^{2\mu_c(\tau_i) + \sigma_c^2(\tau_i)}}{2 \big(\sum_c e^{\mu_c(\tau_i) + \sigma_c^2(\tau_i)/2}\big)^2}$   (6)

Note that we can now fully backpropagate through our loss (and through the models as well), enabling us to train our methods efficiently with automatic differentiation frameworks and, e.g., gradient descent.
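For concreteness, both loss computations can be sketched as follows, assuming a one-hot observed class and the approximation derived in Appendix C (helper names are ours):

import numpy as np
from scipy.special import digamma

def uce_dirichlet(alpha, true_class):
    # Eq. 5: E[-log p_c] under Dir(alpha) equals psi(alpha_0) - psi(alpha_c)
    return digamma(alpha.sum()) - digamma(alpha[true_class])

def uce_wgp_ln_approx(mu, var, true_class):
    # second-order approximation of E[-log softmax(x)_c] for x ~ N(mu, diag(var))
    m = np.exp(mu + var / 2)                         # means of the log-normals e^{x_c}
    v = (np.exp(var) - 1) * np.exp(2 * mu + var)     # variances of e^{x_c}
    return -mu[true_class] + np.log(m.sum()) - v.sum() / (2 * m.sum() ** 2)

print(uce_dirichlet(np.array([5.0, 1.0, 1.0]), true_class=0))
print(uce_wgp_ln_approx(np.array([2.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.1]), true_class=0))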

Regularization. While the above loss incorporates uncertainty much better, it is still possible to generate pseudo points with high weight values outside of the observed data regime, giving us predictions with high confidence. To eliminate this behaviour we introduce a regularization term $r_i$:

$r_i = \lambda_1 \int_0^T \big(\boldsymbol\mu(\tau) - \hat{\boldsymbol\mu}\big)^2 \, d\tau + \lambda_2 \int_0^T \big(\boldsymbol\sigma^2(\tau) - \hat{\boldsymbol\sigma}^2\big)^2 \, d\tau$   (7)

For the WGP-LN, $\boldsymbol\mu$ and $\boldsymbol\sigma^2$ correspond to the mean and the variance of the class logits, which are pushed towards the prior values $\hat{\boldsymbol\mu} = 0$ and $\hat{\boldsymbol\sigma}^2 = 1$. For the FD-Dir, $\boldsymbol\mu$ and $\boldsymbol\sigma^2$ correspond to the mean and the variance of the class probabilities, where the regularizer on the mean can actually be neglected because of the prior introduced in the function decomposition (Eq. 3). In the experiments, the target variance $\hat{\boldsymbol\sigma}^2$ is set to $1$ for the WGP-LN and, for the FD-Dir, to the variance of the classic Dirichlet prior with concentration parameters equal to $1$. For both models, this regularizer forces high uncertainty on the interval $[0, T]$. In practice, the integrals can be estimated with Monte-Carlo sampling, whereas the regularization weights and the horizon $T$ are hyperparameters which are tuned on a validation set.
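The integrals in the regularizer can be estimated by Monte-Carlo sampling of time points on $[0, T]$. A sketch for the FD-Dir variance term, under the assumption that the penalty is the squared deviation of the per-class variance from the prior variance (helper names are ours):

import numpy as np

def dirichlet_variance(alpha):
    # per-class variance of Dir(alpha)
    a0 = alpha.sum()
    return alpha * (a0 - alpha) / (a0 ** 2 * (a0 + 1))

def mc_variance_regularizer(alpha_fn, T, target_var, n_samples=64, rng=None):
    # Monte-Carlo estimate of the integral over [0, T]; penalizes deviations of the
    # predicted per-class variance from the prior variance (assumed squared penalty)
    rng = np.random.default_rng(rng)
    taus = rng.uniform(0.0, T, size=n_samples)
    penalties = [np.sum((dirichlet_variance(alpha_fn(t)) - target_var) ** 2) for t in taus]
    return T * np.mean(penalties)

# example with a toy alpha(tau) that is confident only around tau = 1;
# target_var = 2 / 36 is the per-class variance of Dir(1, 1, 1)
alpha_fn = lambda t: np.array([1.0 + 20.0 * np.exp(-(t - 1.0) ** 2), 1.0, 1.0])
print(mc_variance_regularizer(alpha_fn, T=5.0, target_var=2.0 / 36.0))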

In [15], training models capable of uncertain predictions requires another dataset or a generative model to access out-of-distribution samples. In contrast, our regularizer suggests a simple way to consider out-of-distribution data which does not require another model or dataset.

3 Point Process Framework

Our models FD-Dir and WGP-LN predict $\mathbb{P}(\boldsymbol\theta(\tau))$, enabling us to evaluate, e.g., $\bar{\mathbf{p}}(\tau)$ after a specific time gap $\tau$. This corresponds to a conditional distribution over the classes, $p(c \mid \tau)$. In this section, we introduce a point process framework to generalize FD-Dir to also predict the time distribution $p(\tau)$. This enables us to predict, e.g., the most likely time the next event is expected, or to evaluate the joint distribution $p(c, \tau) = p(c \mid \tau)\, p(\tau)$. We call the model FD-Dir-PP.

We modify the model so that each class $c$ is modelled using an inhomogeneous Poisson point process with positive, locally integrable intensity function $\lambda_c(\tau)$. Instead of generating concentration parameters by function decomposition, FD-Dir-PP generates intensity parameters over time: $\log \lambda_c(\tau) = \sum_{m=1}^{M} w_m^{(c)} \mathcal{N}\big(\tau \mid \mu_m^{(c)}, \sigma_m^{(c)2}\big) + \nu_c$. The main advantage of such a general decomposition is its potential to describe complex multimodal intensity functions, contrary to other models like RMTPP [5] (Appendix D). Since the concentration parameter $\alpha_c(\tau)$ and the intensity parameter $\lambda_c(\tau)$ both relate to the number of events of class $c$ around time $\tau$, it is natural to convert one to the other.

Given this $C$-variate point process, the probability of the next class given the time and the probability of the next event time are $p(c \mid \tau) = \frac{\lambda_c(\tau)}{\lambda(\tau)}$ and $p(\tau) = \lambda(\tau) \exp\big(-\int_0^\tau \lambda(s)\, ds\big)$, where $\lambda(\tau) = \sum_c \lambda_c(\tau)$. Since the classes are now modelled via a point process, the log-likelihood of the event $e_i = (c_i, \tau_i)$ is:

$\log p(c_i, \tau_i) = \underbrace{\log \tfrac{\lambda_{c_i}(\tau_i)}{\sum_c \lambda_c(\tau_i)}}_{\text{(i)}} + \underbrace{\log \textstyle\sum_c \lambda_c(\tau_i)}_{\text{(ii)}} - \underbrace{\int_0^{\tau_i} \textstyle\sum_c \lambda_c(s)\, ds}_{\text{(iii)}}$   (8)

The terms (ii) and (iii) act like a regularizer on the intensities by penalizing a large cumulative intensity on the time interval where no events occurred. The term (i) is the standard cross-entropy loss at time $\tau_i$. Or equivalently, by modeling the distribution $\mathrm{Dir}\big(\boldsymbol\alpha(\tau)\big)$ with $\alpha_c(\tau) = \lambda_c(\tau)$, we see that term (i) is equal to the negative classic cross-entropy (see Section 2.3). Using this insight, we obtain our final FD-Dir-PP model: we achieve uncertainty on the class prediction by modeling the $\lambda_c(\tau)$ as concentration parameters of a Dirichlet distribution and train the model with the loss of Eq. 8, replacing term (i) by the (negative) uncertainty cross-entropy $-\mathcal{L}_i^{\mathrm{UCE}}$. As becomes apparent, FD-Dir-PP differs from FD-Dir only in the regularization of the loss function, enabling it to be interpreted as a point process.
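Given intensities $\lambda_c(\tau)$ evaluated on a time grid, the quantities above and the three terms of Eq. 8 can be evaluated numerically; a sketch with trapezoidal integration (the discretization and names are our own):

import numpy as np

def pp_quantities(lambdas, taus, observed_class, observed_idx):
    # lambdas: array of shape (C, len(taus)) with intensities lambda_c on the grid taus
    lam_total = lambdas.sum(axis=0)
    p_class_given_time = lambdas / lam_total                 # p(c | tau)
    cum = np.array([np.trapz(lam_total[: i + 1], taus[: i + 1]) for i in range(len(taus))])
    p_time = lam_total * np.exp(-cum)                        # p(tau)
    i = observed_idx
    log_lik = (np.log(p_class_given_time[observed_class, i])  # term (i)
               + np.log(lam_total[i])                         # term (ii)
               - cum[i])                                      # term (iii)
    return p_class_given_time, p_time, log_lik

taus = np.linspace(0.0, 5.0, 501)
lambdas = np.stack([1.0 + np.sin(taus) ** 2, 0.5 * np.ones_like(taus)])
print(pp_quantities(lambdas, taus, observed_class=0, observed_idx=100)[2])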

4 Related Work

Predictions based on discrete sequences of events, regardless of time, can be modelled by Markov models [1] or RNNs, usually with more advanced variants like LSTMs [10] and GRUs [4]. To exploit the time information, some models [14, 18] additionally take time as an input but still output a single prediction for the entire future. In contrast, the temporal point process framework defines an intensity function that describes the rate of events occurring over time.

RMTPP [5] uses an RNN to encode the event history into a vector that defines an exponential intensity function. Hence, it is able to capture complex past dependencies and model distributions resulting from simple point processes, such as Hawkes [9] or self-correcting [11], but not e.g. multimodal distributions. On the other hand, Neural Hawkes Process [17] uses continuous-time LSTM which allows specifying more complex intensity functions. Now the likelihood evaluation is not in closed-form anymore, but requires Monte Carlo integration. However, these approaches, unlike our models, do not provide any uncertainty in the predictions. In addition, WGP-LN and FD-Dir can be extended with a point process framework while having the expressive power to represent complex time evolutions.

Uncertainty in machine learning has attracted great interest [8, 7, 13]. For example, uncertainty can be imposed by introducing distributions over the weights [2, 16, 19]. Simpler approaches introduce uncertainty directly on the class prediction by using a Dirichlet distribution, independently of time [15, 20]. In contrast, the FD-Dir model captures a complex temporal evolution of the Dirichlet distribution via function decomposition, which can be adapted to have a point process interpretation.

Other methods introduce uncertainty in time series prediction by learning state-space models with Gaussian processes [6, 23]. Alternatively, an RNN architecture has been used to model the probability density function over time [25]. Compared to these models, the WGP-LN model uses both Gaussian processes and an RNN to model uncertainty and time. Our models are based on pseudo points. Pseudo points in a GP have been used to reduce the computational complexity [21]. Our goal is not to speed up the computation, since we control the number of points that are generated, but to give them different importance. In [24] a weighted GP has been considered by rescaling points; in contrast, our model uses a custom kernel to discard (pseudo) points.

5 Experiments

We evaluate our models on large-scale synthetic and real world data. We compare to neural point process models: RMTPP [5] and the Neural Hawkes Process [17]. Additionally, we use various RNN models with knowledge of the time of the next event. We measure the accuracy of class prediction, the accuracy of time prediction, and evaluate on an anomaly detection task to show prediction uncertainty.

We split the data into train, validation and test set (60%–20%–20%) and tune all models on the validation set using grid search over learning rate, hidden state dimension and regularization. After running the models multiple times on all datasets we report the mean and standard deviation of the test set accuracy. Details on model selection can be found in Appendix H.1. The code and further supplementary material are available online at https://www.kdd.in.tum.de/uncertainty-event-prediction.

We use the following data (more details in Appendix G): (1) Graph. We generate data from a directed Erdős–Rényi graph where nodes represent the states and edges the weighted transitions between them. The time it takes to cross an edge is modelled with one normal distribution per edge. By randomly walking along this graph we created a large set of asynchronous events. (2) Stack Exchange (https://archive.org/details/stackexchange). Sequences contain rewards as events that users get for participation on a question answering website. After preprocessing according to [5] we have 40 classes and over 480K events spread over 2 years of activity of around 6700 users. The goal is to predict the next reward a user will receive. (3) Smart Home [22] (https://sites.google.com/site/tim0306/datasets). We use a recorded sequence from a smart house with 14 classes. Events correspond to the usage of different appliances. The next event depends on the time of the day, the history of usage and the other appliances. (4) Car Indicators. We obtained a sequence of events from a car's indicators with 12 unique classes. The sequence is highly asynchronous, with time gaps ranging from milliseconds to minutes.

Visualization. To analyze the behaviour of the models, we propose visualizations of the evolutions of the parameters predicted by FD-Dir and WGP-LN.

Set-up: We use two toy datasets where the probability of an event depends only on time. The first one (3-G) has three classes occurring at three distinct times; it mimics the car example from Sec. 1 (see Appendix G). The second one (Multi-G) consists of two classes where one of them has two modes and corresponds to Fig. 1(a). We use these datasets to showcase the importance of time when predicting the next event. In Fig. 5, the four top plots show the evolution of the categorical distribution for the FD-Dir and of the logits for the WGP-LN. The four bottom plots describe the certainty of the models in the probability prediction by plotting the probability $q_c(\tau)$ that the probability of class $c$ is higher than that of the other classes, as introduced in Sec. 2. Additionally, the evolution of the Dirichlet distribution over the probability simplex is presented in Appendix E.

Figure 5: Visualization of the prediction evolution for FD-Dir and WGP-LN on the 3-G and Multi-G datasets. The red line indicates the true time of the next event for an example sequence. Here, both models predict the orange class, which is correct, and capture the variation of the class distributions over time. Generated points from the WGP-LN are plotted with a size corresponding to their weight. For predictions in the far future, both models give high uncertainty.

Results. Both models learn meaningful evolutions of the distribution on the simplex. For the 3-G data, we can distinguish four areas: the first three correspond to the three classes; after that the prediction is uncertain. The Multi-G data shows that both models are able to approximate multimodal evolutions.

Class prediction accuracy. The aim of this experiment is to assess whether our models can correctly predict the class of the next event, given the time at which it occurs. For this purpose, we compare our models against Hawkes and RMTPP and evaluate the prediction accuracy on the test set.

Results. We can see (Fig. 6) that our models consistently outperform the other methods on all datasets. Results of the other baselines can be found in Appendix H.2.

Figure 6: Class accuracy (top; higher is better) and Time-Error (bottom; lower is better).

Time-Error evaluation. Next, we aim to assess the quality of the time intervals at which we have confidence in one class. Even though WGP-LN and the FD-Dir do not model a distribution on time, they still have intervals at which we are certain in a class prediction, making the conditional probability a good indicator of the time occurrence of the event.

Set-up. While models predicting a single time for the next event often use the MSE score, in our case the MSE is not suitable since one event can occur at multiple time points. In the conventional least-squares approach, the mean of the true distribution is an optimal prediction; here, however, it is almost always wrong. Therefore, we use another metric which is better suited for multimodal distributions. Assume that a model returns a score function $f_c(\tau)$ for each class $c$ regarding the next event, where a large value means the class is likely to occur after a time gap $\tau$. We define $\text{Time-Error}_i = \int \mathbb{1}\big[f_{c_i}(\tau) > f_{c_i}(\tau_i)\big]\, d\tau$. That is, the Time-Error computes the size of the time intervals where the predicted score is larger than the score at the observed time $\tau_i$. Hence, a performant model achieves a low Time-Error if its score function is high at time $\tau_i$. As the score function in our models, we use the corresponding class probability $\bar{p}_{c_i}(\tau)$.
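A sketch of the Time-Error computation on a discretized time grid, using a generic score function (the discretization is our own illustration):

import numpy as np

def time_error(score, taus, tau_observed):
    # score: score function f_{c_i}(tau) evaluated on the grid taus for the true class c_i;
    # measures the total length of the intervals where the score exceeds the score at tau_i
    s_observed = np.interp(tau_observed, taus, score)
    return np.trapz((score > s_observed).astype(float), taus)

taus = np.linspace(0.0, 10.0, 1001)
score = np.exp(-(taus - 2.0) ** 2)                   # model very confident around tau = 2
print(time_error(score, taus, tau_observed=2.1))     # small: event fell where the score is high
print(time_error(score, taus, tau_observed=6.0))     # large: event fell in a low-score region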

Results. We can see that our models clearly obtain the best results on all datasets. The point process version of FD-Dir does not improve the performance. Thus, taking also into account the class prediction performance, we recommend to use our other two models. In Appendix H.3 we compare FD-Dir-PP with other neural point process models on time prediction using the MSE score and achieve similar results.

Anomaly detection & Uncertainty. The goal of this experiment is twofold: (1) it assesses the ability of the models to detect anomalies in asynchronous sequences, (2) it evaluates the quality of the predicted uncertainty on the categorical distribution. For this, we use a similar set-up as [15].

Set-up: The experiments consist in introducing anomalies into the datasets by changing the occurrence time of a fraction of the events (at random, after the time transformation described in Appendix G). Hence, the anomalies form out-of-distribution data, whereas the unchanged events represent in-distribution data. The performance of the anomaly detection is assessed using the Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR). We use two approaches: (i) We consider the categorical uncertainty, i.e., to detect anomalies we use the predicted probability of the true event, $\bar{p}_{c_i}(\tau_i)$, as the anomaly score. (ii) We use the distributional uncertainty at the observed occurrence time provided by our models. For the WGP-LN, we can evaluate $q_{c_i}(\tau_i)$ directly (difference of two normal distributions). For the FD-Dir, this probability does not have a closed-form solution, so instead we use the concentration parameters (e.g. the total concentration $\alpha_0(\tau_i)$), which are also indicators of out-of-distribution events. For all scores, i.e. $\bar{p}_{c_i}(\tau_i)$, $q_{c_i}(\tau_i)$ and $\alpha_0(\tau_i)$, a low value indicates a potential anomaly around time $\tau_i$.

Figure 7: AUROC and AUPR comparison across datasets on anomaly detection. The orange and blue bars use the categorical uncertainty score whereas the green bars use the distributional uncertainty.

Results. As seen in Fig. 7, the FD-Dir and the WGP-LN show particularly good performance. We observe that the FD-Dir gives better results, especially with distributional uncertainty. This might be due to the power of the concentration parameters, which can be viewed as the number of similar events around a given time.

6 Conclusion

We proposed two new methods to predict the evolution of the probability of the next event in asynchronous sequences, including the distributions’ uncertainty. Both methods follow a common framework consisting in generating pseudo points able to describe rich multimodal time-dependent parameters for the distribution over the probability simplex. The complex evolution is captured via a Gaussian Process or a function decomposition, respectively; still enabling easy training. We also provided an extension and interpretation within a point process framework. In the experiments, WGP-LN and FD-Dir have clearly outperformed state-of-the-art models based on point processes; for event and time prediction as well as for anomaly detection.

Acknowledgement

This research was supported by the German Federal Ministry of Education and Research (BMBF), grant no. 01IS18036B, and by the BMW AG. The authors would like to thank Bernhard Schlegel for helpful discussion and comments. The authors of this work take full responsibilities for its content.

References

  • [1] R. Begleiter, R. El-Yaniv and G. Yona (2004). On prediction using variable order Markov models. Journal of Artificial Intelligence Research 22(1), pp. 385–421.
  • [2] C. Blundell, J. Cornebise, K. Kavukcuoglu and D. Wierstra (2015). Weight uncertainty in neural network. In PMLR, Vol. 37, pp. 1613–1622.
  • [3] C. Calcaterra and A. Boldt (2008). Approximating with Gaussians. arXiv preprint arXiv:0805.3795.
  • [4] J. Chung, C. Gulcehre, K. Cho and Y. Bengio (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • [5] N. Du, H. Dai, R. Trivedi, U. Upadhyay, M. Gomez-Rodriguez and L. Song (2016). Recurrent marked temporal point processes: embedding event history to vector. In KDD.
  • [6] S. Eleftheriadis, T. Nicholson, M. Deisenroth and J. Hensman (2017). Identification of Gaussian process state space models. In NIPS.
  • [7] D. Eswaran, S. Günnemann and C. Faloutsos (2017). The power of certainty: a Dirichlet-multinomial model for belief propagation. In SDM, pp. 144–152.
  • [8] M. Fortunato, C. Blundell and O. Vinyals (2017). Bayesian recurrent neural networks. arXiv preprint arXiv:1704.02798.
  • [9] A. G. Hawkes (1971). Spectra of some self-exciting and mutually exciting point processes. Biometrika 58(1), pp. 83–90.
  • [10] S. Hochreiter and J. Schmidhuber (1997). Long short-term memory. Neural Computation 9(8), pp. 1735–1780.
  • [11] V. Isham and M. Westcott (1979). A self-correcting point process. Stochastic Processes and their Applications 8(3), pp. 335–347.
  • [12] D. P. Kingma and J. Ba (2014). Adam: a method for stochastic optimization. In ICLR.
  • [13] B. Lakshminarayanan, A. Pritzel and C. Blundell (2015). Simple and scalable predictive uncertainty estimation using deep ensembles. In NIPS.
  • [14] Y. Li, N. Du and S. Bengio (2018). Time-dependent representation for neural event sequence prediction. In ICLR Workshop.
  • [15] A. Malinin and M. Gales (2018). Predictive uncertainty estimation via prior networks. In NIPS.
  • [16] P. L. McDermott and C. K. Wikle (2019). Bayesian recurrent neural network models for forecasting and quantifying uncertainty in spatial-temporal data. Entropy 21(2): 184.
  • [17] H. Mei and J. M. Eisner (2017). The neural Hawkes process: a neurally self-modulating multivariate point process. In NIPS.
  • [18] D. Neil, M. Pfeiffer and S. Liu (2016). Phased LSTM: accelerating recurrent network training for long or event-based sequences. In NIPS.
  • [19] H. Ritter, A. Botev and D. Barber (2018). A scalable Laplace approximation for neural networks. In ICLR.
  • [20] P. Sadowski and P. Baldi (2019). Neural network regression with beta, Dirichlet, and Dirichlet-multinomial outputs.
  • [21] E. Snelson and Z. Ghahramani (2006). Sparse Gaussian processes using pseudo-inputs. In NIPS.
  • [22] T.L.M. van Kasteren, G. Englebienne and B. Kröse (2010). Activity recognition in pervasive intelligent environments. Atlantis Ambient and Pervasive Intelligence series, Atlantis Press, L. Chen (Ed.).
  • [23] R. Turner, M. Deisenroth and C. Rasmussen (2010). State-space inference and learning with Gaussian processes. In AISTATS.
  • [24] J. Wen, N. Hassanpour and R. Greiner (2018). Weighted Gaussian process for estimating treatment effect.
  • [25] K. Yeo, I. Melnyk, N. Nguyen and E. K. Lee (2018). Learning temporal evolution of probability distribution with recurrent neural network.

Supplementary Materials: Uncertainty on Asynchronous Time Event Prediction

Appendix A Distributions

For reference, we give here the definition of the Dirichlet and Logistic-normal distribution.

A.1 Dirichlet distribution

The Dirichlet distribution with concentration parameters $\boldsymbol\alpha = (\alpha_1, \ldots, \alpha_C)$, where $\alpha_c > 0$, has the probability density function:

$f(\mathbf{p}; \boldsymbol\alpha) = \frac{\Gamma\big(\sum_{c=1}^{C} \alpha_c\big)}{\prod_{c=1}^{C} \Gamma(\alpha_c)} \prod_{c=1}^{C} p_c^{\alpha_c - 1}$   (A.1)

where $\Gamma$ is the gamma function: $\Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\, dx$.

A.2 Logistic-normal distribution (LN)

The logistic-normal distribution is a generalization of the logit-normal distribution to the multidimensional case. If the vector of logits $\mathbf{x} = (x_1, \ldots, x_C)$ follows a normal distribution, $\mathbf{x} \sim \mathcal{N}(\boldsymbol\mu, \Sigma)$, then

$\mathbf{p} = \mathrm{softmax}(\mathbf{x}), \qquad p_c = \frac{e^{x_c}}{\sum_{c'} e^{x_{c'}}}$

follows a logistic-normal distribution.
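For reference, sampling from the logistic-normal simply amounts to applying the softmax to a Gaussian sample; a minimal sketch:

import numpy as np

def sample_logistic_normal(mu, cov, n_samples=5, rng=None):
    rng = np.random.default_rng(rng)
    x = rng.multivariate_normal(mu, cov, size=n_samples)     # logits ~ N(mu, cov)
    z = np.exp(x - x.max(axis=1, keepdims=True))              # numerically stable softmax
    return z / z.sum(axis=1, keepdims=True)                   # points on the simplex

print(sample_logistic_normal(np.array([1.0, 0.0, -1.0]), np.eye(3) * 0.5))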

Appendix B Behavior of the min kernel

The desired behavior of the min kernel function can easily be illustrated by considering the Gram matrix $K'$ and the vector $\mathbf{k}'$, which are required to estimate $\mu(\tau)$ and $\sigma^2(\tau)$ for a new time point $\tau$. W.l.o.g. consider pseudo points such that the first point has weight $w_1 = 0$. Since the new query point is observed we assign it weight $1$. It follows:

$K' = \begin{pmatrix} 0 & \mathbf{0}^\top \\ \mathbf{0} & \tilde{K} \end{pmatrix}, \qquad \mathbf{k}' = \begin{pmatrix} 0 \\ \tilde{\mathbf{k}} \end{pmatrix}$   (B.2)

where $\tilde{K}$ and $\tilde{\mathbf{k}}$ are the Gram matrix and kernel vector of the remaining points. Thus, assuming $w_1 = 0$ zeroes out the first row and column of $K'$ and the first entry of $\mathbf{k}'$. Plugging them back into Equation 1 we can see that the point is discarded, as desired. In practice, the weights have values from the interval $[0, 1]$ which in turn gives us the ability to softly discard points. This is shown in Fig. 3, where we can see that the mean line does not have to cross through the points with small weights and the variance can remain higher around them.

Appendix C Computation of the approximation for the uncertainty cross-entropy of WGP-LN

Given the true categorical distribution $\mathbf{p}^*$ and the predicted distribution $\mathbb{P}(\boldsymbol\theta(\tau))$, the uncertainty cross-entropy can be calculated as in Eq. 4. For the WGP-LN model $p_c = \frac{e^{x_c}}{\sum_{c'} e^{x_{c'}}}$, where the logits $x_c$ come from a Gaussian process and follow a normal distribution $\mathcal{N}(\mu_c(\tau), \sigma_c^2(\tau))$. Therefore, $e^{x_c}$ follows a log-normal distribution. We will use this to derive an approximation of the loss. From now on, we omit $\tau$ from the equations. The mean and variance of $e^{x_c}$ are then:

$\mathbb{E}\big[e^{x_c}\big] = e^{\mu_c + \sigma_c^2/2}, \qquad \mathrm{Var}\big[e^{x_c}\big] = \big(e^{\sigma_c^2} - 1\big)\, e^{2\mu_c + \sigma_c^2}$   (C.3)

The expectation of the cross-entropy loss, given that the logits follow a normal distribution, is

$\mathbb{E}\big[\mathrm{CE}(\mathbf{p}^*, \mathbf{p})\big] = -\sum_c p_c^* \, \mathbb{E}\Big[\log \tfrac{e^{x_c}}{\sum_{c'} e^{x_{c'}}}\Big] = -\sum_c p_c^* \Big(\mu_c - \mathbb{E}\big[\log \textstyle\sum_{c'} e^{x_{c'}}\big]\Big)$   (C.4)

In general, given a random variable $X$, we can approximate the expectation of $f(X)$ by performing a second order Taylor expansion around the mean $\mathbb{E}[X]$:

$\mathbb{E}\big[f(X)\big] \approx f\big(\mathbb{E}[X]\big) + \frac{f''\big(\mathbb{E}[X]\big)}{2}\, \mathrm{Var}(X), \quad \text{i.e. for } f = \log: \quad \mathbb{E}\big[\log X\big] \approx \log \mathbb{E}[X] - \frac{\mathrm{Var}(X)}{2\, \mathbb{E}[X]^2}$   (C.5)

Using C.5 together with C.3 and plugging into C.4, we get a closed-form approximation of the loss for event $i$ with one-hot encoded true class $c_i$:

$\mathcal{L}_i \approx -\mu_{c_i} + \log \sum_c e^{\mu_c + \sigma_c^2/2} - \frac{\sum_c \big(e^{\sigma_c^2} - 1\big)\, e^{2\mu_c + \sigma_c^2}}{2 \big(\sum_c e^{\mu_c + \sigma_c^2/2}\big)^2}$   (C.6)
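The quality of this second-order approximation of $\mathbb{E}[\log \sum_c e^{x_c}]$ can be checked against a Monte-Carlo estimate; a small numerical sketch (assuming independent logits, as in our diagonal-covariance setting):

import numpy as np

def e_log_sum_exp_taylor(mu, var):
    # E[log sum_c e^{x_c}] approx log E[S] - Var(S) / (2 E[S]^2), with S = sum_c e^{x_c}
    m = np.exp(mu + var / 2)
    v = (np.exp(var) - 1) * np.exp(2 * mu + var)
    return np.log(m.sum()) - v.sum() / (2 * m.sum() ** 2)

def e_log_sum_exp_mc(mu, var, n=200000, rng=None):
    rng = np.random.default_rng(rng)
    x = rng.normal(mu, np.sqrt(var), size=(n, len(mu)))
    return np.mean(np.log(np.exp(x).sum(axis=1)))

mu, var = np.array([1.0, 0.0, -0.5]), np.array([0.2, 0.3, 0.1])
print(e_log_sum_exp_taylor(mu, var), e_log_sum_exp_mc(mu, var))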

Appendix D Non Expressiveness of RMTPP intensities

The intensity function has the following form in the RMTPP model [5]:

$\lambda(t) = \exp\big(\mathbf{v}^\top \mathbf{h}_i + w\, (t - t_i) + b\big)$   (D.7)

The variables $\mathbf{v}$, $w$ and $b$ are learned parameters and $\mathbf{h}_i$ is given by the hidden state of an RNN. The only dependence on $t$ is through the term $w\,(t - t_i)$. RMTPP is thus limited to intensity functions that are monotonic with respect to time.
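The monotonicity is easy to verify numerically: for a fixed hidden state the intensity in Eq. D.7 is the exponential of an affine function of $t$ (the parameter values below are made up for illustration):

import numpy as np

def rmtpp_intensity(t, t_last, vh_plus_b=0.2, w=-0.5):
    # lambda(t) = exp(v^T h_i + w (t - t_last) + b); only w (t - t_last) depends on t
    return np.exp(vh_plus_b + w * (t - t_last))

t = np.linspace(0.0, 5.0, 6)
print(rmtpp_intensity(t, t_last=0.0))   # strictly monotone in t, so no multimodal intensities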

Appendix E Dirichlet Evolution

Our goal is to model the evolution of a distribution on a probability simplex. Fig. 1(b) shows this for two classes. In general, we can do the same for multiple classes. Fig. 8 shows an example of the Dirichlet distribution for three classes, and how it changes over time. This evolution is the output of the FD-Dir model trained on the 3-G dataset, created to simulate the car example from Sec. 1 (see also Appendix G). The three classes, overtaking, braking and collision, occur independently of each other at three different times and correspond to the corners of the triangle in Fig. 8.

We can distinguish three cases: (a) at first we are certain that the most likely class is overtaking; (b) as time passes, the most likely class becomes braking, (c) and finally collision. After that, we are in the area where we have not seen any data and do not have a confident prediction (d).

Figure 8: Dirichlet distribution over the probability simplex at four different times (a)–(d) for the 3-G dataset.

Appendix F Comparison of the classical cross-entropy and the uncertainty cross-entropy

F.1 Simple classification task

In this section, we do not consider temporal data. The goal of this experiment is to show the benefit of the uncertainty cross-entropy compared with the classical cross-entropy loss on a simple classification task. As a consequence, we do not consider an RNN in this section. We use a simple two-layer neural network to predict the concentration parameters of a Dirichlet distribution from the input vector.

Set-up. The set-up is similar to [15] and consists of two datasets of 1500 instances divided into three equidistant 2-D Gaussians. One dataset contains non-overlapping classes (NOG) whereas the other contains overlapping classes (OG). Given one input $\mathbf{x}$, we train simple two-layer neural networks to predict the concentration parameters $\boldsymbol\alpha(\mathbf{x})$ of a Dirichlet distribution which models the uncertainty on the categorical distribution. On each dataset, we train two neural networks. One neural network is trained with the classic cross-entropy loss, which uses only the mean prediction $\bar{\mathbf{p}}(\mathbf{x})$. The second neural network is trained with the uncertainty cross-entropy loss plus a simple regularizer:

(F.8)

where $\mathbf{x}$ is the input 2-D vector and the regularizer is evaluated on its Euclidean neighbourhood. The neighbourhood size is set separately for the non-overlapping and the overlapping Gaussians.

Results. The categorical entropy is a good indicator of how certain the categorical distribution is at a point $\mathbf{x}$; a high entropy means that the categorical distribution is uncertain. For the non-overlapping Gaussians (Fig. 9(a) and 9(b)), we remark that both losses learn an uncertain categorical distribution only on thin borders. However, for the overlapping Gaussians (Fig. 9(c) and 9(d)), the uncertainty cross-entropy loss learns more uncertain categorical distributions, visible as thicker borders.

Another interesting result is the concentration parameters learned by the two models (Fig. 10, Fig. 11). The classic cross-entropy loss learns very high values for the concentration parameters, which do not match the true distribution of the data. In contrast, the uncertainty cross-entropy learns meaningful values for both datasets (delimiting the in-distribution areas and centred around the respective classes).

Figure 9: Entropy of the categorical distribution ('Cat. Ent.') learned on the classification tasks. Panels: (a) NOG - CE, (b) NOG - UCE, (c) OG - CE, (d) OG - UCE. Panels (a) and (b) show the categorical entropy learned with the classic cross-entropy and with the uncertainty cross-entropy on the three non-overlapping Gaussians; panels (c) and (d) show the same for the three overlapping Gaussians.
Figure 10: Concentration parameters of the Dirichlet distribution on the classification task with three non-overlapping Gaussians. Panels (a)-(d) show the concentration parameters learned with the classic cross-entropy (CE); panels (e)-(h) show those learned with the uncertainty cross-entropy (UCE).
Figure 11: Concentration parameters of the Dirichlet distribution on the classification task with three overlapping Gaussians. Panels (a)-(d) show the concentration parameters learned with the classic cross-entropy (CE); panels (e)-(h) show those learned with the uncertainty cross-entropy (UCE).

F.2 Asynchronous Event Prediction

In this section, we consider temporal data. The goal of this experiment is again to show the benefit of the uncertainty cross-entropy compared to the classical cross-entropy in the case of asynchronous event prediction.

Set-up. For this purpose, we use the same set-up described in the Anomaly detection & Uncertainty experiment. We train the FD-Dir model with three different types of losses: (1) the classical cross-entropy (CE), (2) the classical cross-entropy with the regularization described in Section 2.3 (CE + reg), and (3) the uncertainty cross-entropy with the regularization described in Section 2.3 (UCE + reg).

Figure 12: Loss comparison in anomaly detection

Results. The results are shown in Fig. 12. The loss UCE + reg consistently improves the anomaly detection based on the distribution uncertainty.

Appendix G Datasets

In this section we describe the datasets in more detail. The time gap $\tau$ between two events is first log-transformed and then min-max normalized; a sketch of this preprocessing is given below.
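A minimal version of this transformation (assuming the minimum and maximum are computed on the training split):

import numpy as np

def normalize_time_gaps(tau, tau_min=None, tau_max=None):
    # log-transform followed by min-max normalization to [0, 1]
    log_tau = np.log(tau)
    tau_min = np.min(log_tau) if tau_min is None else tau_min
    tau_max = np.max(log_tau) if tau_max is None else tau_max
    return (log_tau - tau_min) / (tau_max - tau_min), tau_min, tau_max

gaps = np.array([0.001, 0.1, 3.0, 60.0])
print(normalize_time_gaps(gaps)[0])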

3-G.

We use three classes and draw the event time of class $k \in \{0, 1, 2\}$ from a normal distribution $\mathcal{N}(k + 1, 1)$, rejecting non-positive times. This dataset imitates the car setting explained in Sec. 1. We generate 1000 events. The probability density is shown in panel (b) of the figure below. Models that do not take time into account cannot solve this problem. Below is the code; we create the Multi-G dataset similarly (a possible analogous generator is sketched after the code).

(a) Car example explained in Section 1, where the probabilities of events change over time. (b) Probability density of events in the 3-G (K-Gaussians) dataset; the classes are independent of the history.
import numpy as np

def generate():
    # 3-G dataset: three classes whose event times follow N(class + 1, 1)
    data = np.zeros((1000, 2))
    for i in range(1000):
        i_class = np.random.choice(3, 1)[0]
        time = np.random.normal(i_class + 1, 1.)
        while time <= 0:               # reject non-positive times
            time = np.random.normal(i_class + 1, 1.)
        data[i, 0] = i_class           # event class
        data[i, 1] = time              # event time
    return data
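A possible analogous generator for the Multi-G dataset (two classes, one of them bimodal; the concrete means are our assumption, not the values used in the paper):

import numpy as np

def generate_multi_g(n=1000):
    data = np.zeros((n, 2))
    for i in range(n):
        i_class = np.random.choice(2, 1)[0]
        if i_class == 0:
            # bimodal class: the event occurs either early or late
            mean = np.random.choice([1.0, 4.0])
        else:
            mean = 2.5
        time = np.random.normal(mean, 1.)
        while time <= 0:               # reject non-positive times
            time = np.random.normal(mean, 1.)
        data[i, 0] = i_class
        data[i, 1] = time
    return data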

Car Indicators.

A sequence contains signals from a single car during one ride. We remove signals that are perfectly correlated, giving 6 unique classes in the end. The top 3 classes make up 33%, 32%, and 16% of the total, respectively. From Figure 13 we can see that the setting is again asynchronous.

Figure 13: Probability density of events in Car Indicators dataset for 2 selected classes. Time is log-transformed.

Graph.

We generate a directed graph and assign a mean and a standard deviation to each transition (edge) between events (nodes). The time it takes to make a transition between two nodes is drawn from the normal distribution with the corresponding parameters. By performing a random walk on the graph we create thousands of events. This dataset is similar to K-Gaussians, with the difference that a model needs to learn the relationship between events together with the time dependency. Parts of the trace are shown in Figure 14.

Figure 14: Trace of events for random graph. Different colors represent different classes and width of a single column represents the time that passed.

Appendix H Details of experiments

We test our models (WGP-LN, FD-Dir and FD-Dir-PP) against neural point process models (RMTPP and Hawkes) and simple baselines (RNN and LSTM – getting only the history as an input; F-RNN and F-LSTM – also having the real time of the next event as an additional input, which gives them a strong advantage). We test on real world (Stack Exchange, Car Indicators and Smart Home) and synthetic datasets (Graph). We show that our models consistently outperform all the other models when evaluated with class prediction accuracy and Time-Error.

H.1 Model selection

We apply the same tuning technique to all models. We split all datasets into train–validation–test sets (60%–20%–20%), use the validation set to select a model and the test set to get the final scores. For the Stack Exchange dataset we split on users; in all other datasets we split the trace based on time. We search over the dimension of the hidden state, the batch size and the regularization parameter. We use the same learning rate for all models and an Adam optimizer [12], run each of them multiple times for a fixed maximum number of epochs with early stopping after several consecutive epochs without improvement in the validation loss. The number of pseudo points is kept small for both WGP-LN and FD-Dir. WGP-LN and FD-Dir have an additional regularization term (Eq. 7); its hyperparameters are fixed to the same values for both models. The model with the highest mean accuracy on the validation set is selected. We use a GRU cell [4] for both of our models. We trained all models on GPUs (1TB SSD).

H.2 Results

Tables 1 and 2, together with Fig. 15 show test results for all models on all datasets for Class accuracy and Time-Error.

Figure 15: Class accuracy (top) and Time-Error (bottom) comparison across datasets
Model       Car Indicators    Graph             Smart Home        Stack Exchange
FD-Dir      0.909 ± 0.005     0.701 ± 0.002     0.522 ± 0.013     0.522 ± 0.001
FD-Dir-PP   0.912 ± 0.006     0.691 ± 0.006     0.415 ± 0.054     0.515 ± 0.002
WGP-LN      0.877 ± 0.010     0.685 ± 0.005     0.500 ± 0.017     0.519 ± 0.003
Hawkes      0.834 ± 0.022     0.585 ± 0.008     0.435 ± 0.017     0.513 ± 0.001
RMTPP       0.858 ± 0.004     0.257 ± 0.005     0.472 ± 0.016     0.492 ± 0.000
F-LSTM      0.855 ± 0.006     0.657 ± 0.002     0.411 ± 0.029     -
F-RNN       0.849 ± 0.013     0.615 ± 0.011     0.472 ± 0.035     -
LSTM        0.858 ± 0.010     0.251 ± 0.008     0.375 ± 0.026     -
RNN         0.838 ± 0.016     0.258 ± 0.008     0.437 ± 0.017     -
Table 1: Class accuracy comparison for all models on all datasets (mean ± standard deviation)
Model       Car Indicators    Graph             Smart Home        Stack Exchange
FD-Dir      0.115 ± 0.040     0.101 ± 0.001     0.111 ± 0.011     0.289 ± 0.019
WGP-LN      0.184 ± 0.047     0.120 ± 0.008     0.127 ± 0.010     0.077 ± 0.016
FD-Dir-PP   0.132 ± 0.031     0.106 ± 0.004     0.143 ± 0.022     0.375 ± 0.007
Hawkes      0.412 ± 0.091     0.158 ± 0.005     0.170 ± 0.035     0.507 ± 0.003
RMTPP       0.860 ± 0.004     0.257 ± 0.005     0.474 ± 0.016     0.721 ± 0.001
F-LSTM      0.277 ± 0.118     0.141 ± 0.002     0.209 ± 0.023     -
F-RNN       0.516 ± 0.105     0.146 ± 0.004     0.186 ± 0.011     -
LSTM        0.860 ± 0.010     0.251 ± 0.008     0.376 ± 0.026     -
RNN         0.841 ± 0.016     0.258 ± 0.008     0.439 ± 0.017     -
Table 2: Time-Error comparison for all models on all datasets (mean ± standard deviation)

H.3 Time Prediction with Point Processes

The benefit of the point process framework is the ability to get the point estimate for the time of the next event:

$\hat{\tau} = \mathbb{E}[\tau] = \int_0^\infty \tau \, p(\tau)\, d\tau$   (H.9)

where

$p(\tau) = \lambda(\tau) \exp\Big(-\int_0^\tau \lambda(s)\, ds\Big), \qquad \lambda(\tau) = \sum_c \lambda_c(\tau)$   (H.10)
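A numerical sketch of Eq. H.9 and H.10 on a truncated time grid (trapezoidal integration; the discretization is ours):

import numpy as np

def expected_next_event_time(lam_total, taus):
    # p(tau) = lambda(tau) exp(-int_0^tau lambda(s) ds); tau_hat = int tau p(tau) d tau
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (lam_total[1:] + lam_total[:-1]) * np.diff(taus))])
    p_tau = lam_total * np.exp(-cum)
    return np.trapz(taus * p_tau, taus)

taus = np.linspace(0.0, 20.0, 2001)
print(expected_next_event_time(np.full_like(taus, 0.5), taus))   # about 2 for constant intensity 0.5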

The usual way to evaluate the quality of this prediction is using an MSE score. As we have already discussed in Sec. 5, this is not optimal for our use case. Nevertheless, we did preliminary experiments comparing our neural point process model FD-Dir-PP to others. We use RMTPP [5] since it achieves the best results. On Car Indicators dataset our model has mean MSE score of 0.4783 while RMTPP achieves 0.4736. At the same time FD-Dir-PP outperforms RMTPP on other tasks (see Sec. 5).
