Learning Latent Events from Network Message Logs
We consider the problem of separating error messages generated in large distributed data center networks into error events. In such networks, each error event leads to a stream of messages generated by hardware and software components affected by the event. These messages are stored in a giant message log. We consider the unsupervised learning problem of identifying the signatures of events that generated these messages; here, the signature of an error event refers to the mixture of messages generated by the event. One of the main contributions of the paper is a novel mapping of our problem which transforms it into a problem of topic discovery in documents. Events in our problem correspond to topics and messages in our problem correspond to words in the topic discovery problem. However, there is no direct analog of documents. Therefore, we use a non-parametric change-point detection algorithm, which has linear computational complexity in the number of messages, to divide the message log into smaller subsets called episodes, which serve as the equivalents of documents. After this mapping has been done, we use a well-known algorithm for topic discovery, called LDA, to solve our problem. We theoretically analyze the change-point detection algorithm, and show that it is consistent and has low sample complexity. We also demonstrate the scalability of our algorithm on a real data set consisting of millions of messages collected over a period of days, from a distributed data center network which supports the operations of a large wireless service provider.
The delivery of modern data and web-based services requires the execution of a chain of network functions at different elements in distributed data-centers. This is true for video-based services, gaming services, cellular data/voice services, etc., each of which requires processing from multiple coupled networked entities hosting different network functions. For example, modern wireless networks rely on servers and virtual machines (VM) residing in distributed data centers to establish voice calls or data sessions, authenticate users, check user compliance with monthly voice/data limits, verify if users have paid their monthly bills, add to users’ bills for extra services, etc., all of which are done before completing a call. Efficient management and operations of these services are of paramount importance as networks grow increasingly complex with the advent of technologies like virtualization and 5G. An integral component of network management is the ability to identify and understand error events, when failures occur in the hardware and/or software components of the network. However, the complex interdependence between coupled networking functions poses a significant challenge in characterizing an error event, because error messages can be generated in network elements beyond the actual source of the error. In this paper, we are interested in the problem of mining latent error event information from messages generated by servers, VMs, base stations, routers, and links in large-scale distributed data center networks. The mined events are useful for troubleshooting purposes. Also, the correlations captured through each learned event could be subsequently used for on-line detection of potential errors.
While our methodology is broadly applicable to any type of data center network, we validate our algorithms by applying them to a large data set provided by a major wireless network service provider, so we will occasionally use terminology specific to this application to motivate our problem and solution methodology.
In most operational networks, all messages and alarms from distributed network elements are logged with time stamps into message logs. The logs from different network elements could be pooled together in a central database for subsequent analysis. While mining error logs has been studied extensively in different contexts (see [1, 2] for excellent surveys; also see Section I-B), there are some fundamental differences in our setting. Modern data center and communication networks consist of components bought from different vendors, and each component is designed to generate an error message when it cannot execute a job. This poses a challenge in mining messages because there is no common model or standard that dictates the content and format of these error messages. Another challenge stems from the fact that each end-to-end service consists of multiple network functions, each of which generates diverse error messages when failures happen. The following example provides an illustration.
Motivating Example: Suppose Alice makes a cellphone call to Bob. This call is first routed through a base station which is attached to a data center verifying the caller credentials. If Alice is not at her home location, a VM at this data center must contact a database at her home location to verify her credentials. Once the credentials are verified, the caller’s cellular base station connects to the base station near Bob through a complicated network spanning many geographical locations. Consider two potential error scenarios: (i) an error occurs at a router in the path from Bob to Alice’s base station, (ii) an error occurs at a router connecting the data centers verifying the caller’s credentials. In either scenario, the call will fail to be established, leading to the generation of error messages not only at the failed routers but also at network functions (implemented in a cluster of VMs) responsible for call establishment. Furthermore, if the error leads to additional call failures, then the respective base stations could send alarms indicating higher than normal call failures. Additionally, depending on the vendor of a given network element, the timing and content of the error messages could be different.
Indeed, the source, timing, and message-components of the error are all latent. In this paper, we are interested in extracting patterns from messages generated by common faults/errors (also referred to as events). Specifically, the goal of this paper is to mine event signatures (i.e., the distribution of messages for each event) and event occurrences (i.e., the begin and end times of each event) from the message log. Based on the motivating example, we now note the following fundamental characteristics which make our error event mining problem challenging:
In our setting, the source of an error is usually not known. There could be error messages due to network-component-level failures or due to network service-level failures. In the case of a service-level failure, error messages could be generated by a component that is itself functional. For example, when the link between an authentication server and the network core fails, this could lead to call establishment failures which are logged by network functions responsible for call establishment. Furthermore, the same type of error log-message could be generated due to many different errors. From a data modeling point of view, each (latent) event can be viewed as a probabilistic mixture of multiple log-messages at different elements; also, the sets of log messages generated by different events could have non-zero intersection.
Each error event can produce a sequence of messages, including the same type of message multiple times, and the temporal order between distinct messages from the same event could vary based on the latency between network elements, network-load, co-occurrence of other uncorrelated events, etc. Thus, the temporal pattern of messages may also contain useful information for our purpose. In our model, the message occurrence times are modeled as a stochastic process.
These messages could correspond to multiple simultaneous events without any further information on the start-time and end-time of each event.
An additional challenge arises from the fact that network topology information is unknown, because modern networks are very complicated and are constantly evolving due to the churn (addition or deletion) of routers and switches. Third-party vendor software and hardware have no way of providing information to localize and understand the errors. Thus, topological information cannot be used for event mining purposes.
The practical novelty of our work comes from modeling all of the above factors and proposing scalable algorithms that learn the latent event signatures (the notion of signature will be made precise later) along with their occurrence times.
We note that, in different works on event mining (see Section I-B), the concept of an event differs depending on the problem context. It could mean a semantic event, a message template, or a cluster of such templates; in some cases, an event is itself equivalent to a message (where tagged event streams are available) or a transaction/system-event. In our work, an event simply refers to a real-world occurrence of a fault/incident somewhere in the distributed/networking system, such that each event leads to the generation of error messages at multiple network elements.
We model each event as a probabilistic mixture of messages from different sources. (It is more precise to use the terminology event-class to refer to a specific fault-type; each occurrence can be referred to as an instance of some event class. However, for simplicity, we refer to an event-class as an event and simply say occurrence of the event to mean an instance of this class.) In other words, the probability distribution over messages characterizes an event, and thus acts as the signature of the event. Each occurrence of an event also has a start/end time, and several messages can be generated during the occurrence of an event. We only observe the messages and their time-stamps, while the event signatures and duration windows are unknown; also, there could be multiple simultaneous events occurring in the network. Given this setting, we study the following unsupervised learning problem: given a collection of time-stamped log-messages, learn the latent event signatures and event start/end times.
The main contributions of the paper are as follows:
Novel algorithmic framework: We present a novel way of decomposing the problem into simpler sub-problems. Our method, which we call CD-LDA, decomposes the problem into two parts: the first part consists of a change-point detection algorithm which identifies time instants at which either a new event starts in the network or an existing event comes to an end, and the second part of the algorithm uses Latent Dirichlet Allocation (LDA) to classify messages into events. This observation that one can use change-point detection, followed by LDA, for event classification is one of the novel ideas in the paper.
Scalable change-point detection: While the details of the LDA algorithm itself are standard, non-parametric change-point detection as we have used in this paper is not as well studied. We adapt an idea from prior work on hierarchical change-point detection to design an algorithm whose computational complexity is linear in the number of messages $n$ in the message log. Our change-detection algorithm uses an easy-to-compute total-variation (TV) distance. We analyze the sample complexity (i.e., the number of samples required to detect change points with a high degree of accuracy) of our change-point detection algorithm using the method of types and Pinsker’s inequality from information theory. To the best of our knowledge, no such sample complexity results exist for the algorithm that we adapt.
Experimental validation: We use two different real-world data sets from a large operational network to perform the following validation of our approach. First, we compare our algorithm to two existing approaches adapted to our setting: a Bayesian inference-based algorithm and a graph-based clustering algorithm. We show the benefits of our approach compared to these methods in terms of scalability and performance, by applying it to small samples extracted from a large data set consisting of millions of messages. Second, we validate our method against two real-world events by comparing the event signature learned by our method with a domain-expert-validated event signature for a smaller data set consisting of thousands of messages. (Note that manual inference of event signatures is not scalable; we did this for the purpose of validation.) Finally, we also show results that indicate the scalability of our method by applying it to the entire multi-million-message data set.
We note that this paper is an extended version of an earlier paper that appeared in a workshop.
I-B Context and Related Work
Data-driven techniques have been shown to be very useful in extracting meaningful information out of system-logs and alarms for large and complex systems. The primary goal of this “knowledge” extraction is to assist in diagnosing the underlying problems responsible for log-messages and events. Two excellent resources for the large body of work done in the area are [1, 2]. Next, we outline some of the key challenges in this knowledge extraction, associated research in the area, and our problem in the context of existing work.
Mining and clustering unstructured logs: Log-messages are unstructured textual data without any annotation for the underlying fault. A significant amount of research has focused on converting unstructured logs to common semantic events. Note that the notion of semantic events is different from the actual real-world events responsible for generating the messages; nevertheless, such a conversion helps in providing a canonical description of the log-messages that enables subsequent correlation analysis. These works exploit the structural similarity among different messages to either compute an intelligent log-parser or cluster the messages based on message texts [6, 7, 8, 2]. Each cluster can be viewed as a semantic event which can help in diagnosing the underlying root-cause. One work closely related to ours mines network log messages to first extract templates and then learn pairwise implication rules between template-pairs. Our setting and objective are somewhat different: we model events as message-distributions from different elements, with each event occurrence having certain start and end times; the messages belonging to an event and the associated occurrence time-windows are hidden (to be learned). A more recent work develops algorithms to mine the underlying structural-event as a work-flow graph. The main differences are that each transaction there is a fixed sequence of messages, unlike our setting where each message could be generated multiple times based on some hidden stochastic process; furthermore, in our setting, there could be multiple events manifested in the centralized log-server.
Mining temporal patterns: Log-messages are time-series data and thus the temporal patterns contain useful information. A considerable amount of research has gone into learning latent patterns, trends and relationships between events based on timing information in the messages [11, 12, 13]. We refer to [14, 2, 15] for surveys of these approaches. Extracted event-patterns could be used to construct event correlation graphs that could be mined using techniques such as graph-clustering. Specifically, these approaches are useful when event-streams are available as time-series. We are interested in scenarios where each event is manifested in terms of a time-series of unstructured messages and, furthermore, the same message could arise from multiple events. Nevertheless, certain techniques developed for temporal event mining could be adapted to our setting, as we describe in Section IV-A2; our results indicate that such an adaptation works well under certain conditions. Note that our goal is to also learn the event-occurrence times.
Event-summarization: In large dynamic systems, messages could be generated from multiple components due to reasons ranging from software bugs, system faults, and operational activities to security alerts. Thus it is very useful to have a global summarized snapshot of messages based on logs. Most works in this area exploit the inter-arrival distribution and co-occurrence of events [16, 17, 18, 19, 2] to produce summarized correlations between events. These methods are useful when the event-stream is available and the possible event-types are known in advance. This limits their applicability to large systems like ours, where event types are unknown along with their generation time-windows.
The body of work closest to our work is event-summarization. However, there are some fundamental differences in our system: (i) we do not have a readily available event-stream; instead, our observables are log-messages, (ii) the event-types are latent variables not known in advance and all we observe are message streams, (iii) the time-boundaries of the different latent events are a learning objective, and (iv) since we are dealing with a large system with multiple components where different fault-types are correlated, the same message could be generated for different root-causes (real-world events).
Apart from the above, a recent paper that uses deep learning models for anomaly detection in message logs, by modeling logs as a natural language sequence, is also worth a mention.
II Problem Statement and Preliminaries
Before we describe our problem statement, we first explain the notion of messages in the context of our work.
Message: In our work, messages generated by different network elements are of two types: syslog texts in the form of raw text, and alarms.
Syslog texts: These are raw textual messages sent by software components from different elements to a logging server. Raw syslog data fields include timestamp, source, and message text. Since the number of distinct messages is very large and many of them have common patterns, it is often useful [6, 7, 8, 2] to decompose the message text into two parts: an invariant part called the template, and parameters associated with the template. For example, a syslog message “service wqffv failed due to connection failure to IP address a.b.c.d using port 8231” would reduce to the template “service wqffv failed due to connection failure to IP address * using port *.” There are many existing methods to extract such templates [1, 2], ranging from tree-based methods to NLP-based methods. In our work, we have a template-extraction pre-processing step before applying our methods. We also say message to simply mean the extracted template.
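Such template extraction can be sketched with a simple regular-expression pass. The rules below (masking IPv4 addresses and standalone numbers with “*”) are illustrative stand-ins for the tree-based or NLP-based extractors cited above, not the method used in our pipeline.

```python
import re

def extract_template(message: str) -> str:
    """Reduce a raw syslog message to its invariant template by masking
    variable parameters (here: IP addresses and numbers) with '*'."""
    # Mask IPv4 addresses first so their octets are not caught by the number rule.
    message = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "*", message)
    # Mask remaining standalone numbers (port numbers, counters, ids).
    message = re.sub(r"\b\d+\b", "*", message)
    return message

msg = ("service wqffv failed due to connection failure "
       "to IP address 10.1.2.3 using port 8231")
print(extract_template(msg))
# service wqffv failed due to connection failure to IP address * using port *
```

In practice the extractor must also handle hostnames, hex identifiers, and vendor-specific formats, which is why the dedicated methods cited above are used instead.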
Alarms: Network alarms are indications of faults, and each alarm type refers to a specific fault condition in a network element. Each alarm has a unique name and its occurrences are tagged with timestamps. In this work, we view each alarm as a message. Note that, since each alarm has a unique name/id associated with it, we do not pre-process alarms before applying our methods. Examples of alarms are mmscRunTImeError and mmscEAIFUnavailable, sent from a network service named MMSC.
Problem Statement: We are given a data set consisting of messages generated by error events in a large distributed data-center network. We assume that the messages are generated in the time interval $[0, T]$. The set of messages in the data set comes from a discrete and finite set $\mathcal{M}$.
We use the term message to mean either a template extracted from a message or an alarm-id. Each message has a timestamp associated with it, which indicates when the message was generated. Suppose that an event $E$ started occurring at time $s$ and finished at time $f$. In the interval of time $[s, f]$, event $E$ will generate a mixture of messages from a subset of $\mathcal{M}$, which we will denote by $\mathcal{M}_E$. In general, an event can occur multiple times in the data set. If an event occurs multiple times in the data set, then each occurrence of the event will have start and finish times associated with it.
As noted before, for simplicity, we will say event to mean an event-class and occurrence of an event to mean an instance from the class. An event $E$ is characterized by its message set $\mathcal{M}_E$ and the probability distribution with which messages are chosen from $\mathcal{M}_E$, which we will denote by $p_E$; i.e., $p_E(m)$ denotes the probability that event $E$ will generate a message $m \in \mathcal{M}_E$. For compactness of notation, one can simply think of $p_E$ as being defined over the entire set of messages $\mathcal{M}$, with $p_E(m) = 0$ if $m \notin \mathcal{M}_E$. Thus, $p_E$ fully characterizes the event and can be viewed as the signature of the event. We assume that the support sets of messages for two different events are not identical.
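As a minimal illustration of this generative view, the sketch below builds a hypothetical event signature $p_E$ as a distribution over three invented message templates and draws messages from it; all names and probabilities here are illustrative, not from any real data set.

```python
import random

# Hypothetical signature p_E of an event E: a probability distribution over
# its message set M_E (templates invented for illustration).
p_E = {
    "link down on interface *": 0.5,
    "call setup failure at VM *": 0.3,
    "auth timeout to server *": 0.2,
}
assert abs(sum(p_E.values()) - 1.0) < 1e-9  # p_E is a valid distribution

# During one occurrence of E, observed messages are modeled as draws from p_E.
rng = random.Random(0)
msgs = rng.choices(list(p_E), weights=list(p_E.values()), k=5)
print(msgs)  # five messages, each one of the three templates above
```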
It is important to note that the data set simply consists of messages from the set $\mathcal{M}$; there is no explicit information about the events in the data set, i.e., the event information is latent. The goal of the paper is to solve the following inference problem: from the given data set, identify the set of events that generated the messages in the data set, and for each instance of an event, identify when it started and finished. In other words, the output of the inference algorithm should contain the following information:
The number of events which generated the data set.
The signatures of these events, i.e., the distributions $p_E$.
For each event $E$, the number of times it occurred in the data set and, for each occurrence, its start and finish times.
Notations: We use the notation $m_j$ for the $j$th message. Also, let $t_j$ be the timestamp associated with the $j$th message. Thus, the data set can be characterized by $n$ tuples $(m_j, t_j)$ of data points.
Machine-learning pipeline: In Figure 1, we show the machine-learning pipeline for completeness. This paper focuses on the module “Latent Event Learner,” which has a data-processing step followed by the key proposed algorithm in the paper, namely the CD-LDA algorithm, which we describe in Section III. Syslog texts require more pre-processing while alarms do not. We have shown the two types of messages in the pipeline figure, but for the purposes of developing an algorithm, in the rest of the paper, we only refer to messages without distinguishing between them.
III Algorithm CD-LDA
We now present our solution to this problem which we call CD-LDA (Change-point Detection-Latent Dirichlet Allocation). The key novelty in the paper is the connection that we identify between event identification in our problem and topic modeling in large document data sets, a problem that has been widely studied in the natural language processing literature. In particular, we process our data set into a form that allows us to use a widely-used algorithm called LDA to solve our problem. In standard LDA, we are given multiple documents, with many words in each document. The goal is to identify the mixture of latent topics that generated the documents, where each topic is identified with a collection of words and a probability distribution over the words. Our data set has similar features: we have events (which are the equivalents of topics) and messages (which are the equivalents of words) which are generated by the events. However, we do not have a concept of documents. A key idea in our paper is to divide the data set into smaller data sets, each of which will be called an episode. The episodes will be the equivalents of documents in our problem. We do this using a technique called non-parametric change-point detection.
Now we describe the concept of an episode. An episode is an interval of time over which the same set of events occurs, i.e., there is no event churn, and at time instants on either side of the interval, the set of events that occur is different from the set of events in the episode. Thus, we can divide our data set into episodes such that no two consecutive episodes have the same set of events. We present an example to clarify the concept of an episode. Suppose the message data set spans the interval $[0, T]$. Suppose event $A$ occurred from time $0$ to time $t_2$, event $B$ occurred from time $t_1$ to time $t_3$, and event $C$ occurred from time $t_2$ to time $T$, where $0 < t_1 < t_2 < t_3 < T$. Then there are four episodes in this data set: one in the time interval $[0, t_1]$ where only event $A$ occurs, one in the time interval $[t_1, t_2]$ where events $A$ and $B$ occur, one in the time interval $[t_2, t_3]$ where events $B$ and $C$ occur, and finally one in $[t_3, T]$ where only event $C$ occurs. We assume that between successive episodes, at most one new event starts or one existing event ends.
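The episode boundaries in such an example can be computed mechanically once the event intervals are known. The sketch below does this for hypothetical events A, B, and C with illustrative times; it is a conceptual aid only, since CD-LDA must of course recover the episodes without knowing the event intervals.

```python
def episodes(event_intervals, T):
    """Split [0, T] into episodes: maximal intervals over which the set of
    active events stays constant. event_intervals maps event -> (start, end)."""
    # Candidate boundaries are 0, T, and every event start/end time.
    cuts = sorted({0, T, *(t for s, e in event_intervals.values() for t in (s, e))})
    out = []
    for lo, hi in zip(cuts, cuts[1:]):
        # An event is active on (lo, hi) if its interval overlaps it.
        active = {ev for ev, (s, e) in event_intervals.items() if s < hi and lo < e}
        out.append(((lo, hi), active))
    return out

# Hypothetical times: A on [0, 5], B on [2, 8], C on [5, 10], over [0, 10].
for interval, active in episodes({"A": (0, 5), "B": (2, 8), "C": (5, 10)}, 10):
    print(interval, sorted(active))  # four episodes: {A}, {A,B}, {B,C}, {C}
```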
We use change-point detection to identify episodes. To understand how the change-point detection algorithm works, we first summarize the characteristics of an episode:
An episode consists of a mixture of events, and each event consists of a mixture of messages.
Since neighboring episodes consist of different mixtures of events, neighboring episodes also contain different mixtures of messages (due to our assumption that different events do not generate the same set of messages).
Thus, successive episodes contain different message distributions and therefore, the time instances where these distributions change are the episode boundaries, which we will call change points.
In our data set, the messages contain time stamps. In general, the inter-arrival time distributions of messages are different in successive episodes, due to the fact that the episodes represent different mixtures of events. This fact can be further exploited to improve the identification of change points.
Based on our discussion so far in this section, CD-LDA has two-phases as follows:
Change-point detection: In this phase, we detect the start and end time of each episode. In other words, we identify the time-points where a new event started or an existing event ended. This phase is described in detail in Section III-A.
Applying LDA: In this phase, we show that, once episodes are known, LDA based techniques can be used to solve the problem of computing message distribution for each event. Subsequently, we can also infer the occurrence times for each event. This phase along with the complete algorithm is described in Section III-B.
III-A Change-point Detection
Suppose we have $n$ data points and a known number of change points $k$. The data points between two consecutive change points are drawn i.i.d. from the same distribution. (The i.i.d. assumption is not always true in practice, as messages could be sparser in time in the beginning of an event. Indeed, the algorithms developed in this work do not rely on the i.i.d. assumption; however, the assumption allows us to prove useful theoretical guarantees.) In the inference problem, each data point could be a possible change point. A naive exhaustive search to find the best $k$ locations would have a computational complexity of $O(n^k)$. Nonparametric approaches to change-point detection aim to solve this problem with much lower complexity, even when the number of change points is unknown and there are few assumptions on the family of distributions.
The change-point detection algorithm we use is hierarchical in nature and is inspired by earlier work on hierarchical change-point detection; nevertheless, our algorithm has certain key differences, as discussed in Section III-C1. It is easier to understand the algorithm in the setting of only one change point, i.e., two episodes. Suppose that $i$ is a candidate change point among the $n$ points. The idea is to measure the change in distribution between the points to the left and right of $i$. We use the TV distance between the empirical distributions estimated from the points to the left and right of the candidate change point $i$. In our context, the TV distance between two probability mass functions $p$ and $q$ is given by one half the $\ell_1$ distance: $\mathrm{TV}(p, q) = \frac{1}{2} \sum_m |p(m) - q(m)|$. This is maximized over all values of $i$ to estimate the location of the change point. If the distributions are sufficiently different in the two episodes, the TV distance between the empirical distributions is expected to be highest for the correct location of the change point in comparison to any other candidate point (we rigorously prove this in the proofs of Theorems 1 and 2).
Further, we also have different inter-arrival times for messages in different episodes. Hence, we use a combination of the TV distance and the mean inter-arrival time as the metric to differentiate the two distributions. (One can potentially use a weighted combination of the TV distance and mean inter-arrival time as a metric, with the weight being a hyperparameter. While the unweighted metric performs well on our real-life data sets, it is an interesting future direction of research to understand how to optimally choose a weighted combination in general.) We denote this metric by $D(i)$:

$D(i) = \mathrm{TV}\big(\hat{P}_L(i), \hat{P}_R(i)\big) + \big|\hat{\Delta}_L(i) - \hat{\Delta}_R(i)\big|, \qquad (1)$
where $\hat{P}_L(i)$ and $\hat{P}_R(i)$ are empirical estimates of the message distributions to the left and right of $i$, and $\hat{\Delta}_L(i)$ and $\hat{\Delta}_R(i)$ are empirical estimates of the mean inter-arrival time to the left and right of $i$, respectively. The empirical distributions $\hat{P}_L(i)$ and $\hat{P}_R(i)$ have $|\mathcal{M}|$ components. For each message $m \in \mathcal{M}$, we can write

$\hat{P}_L(i)[m] = \frac{1}{i} \sum_{j=1}^{i} \mathbb{1}\{m_j = m\}, \qquad \hat{P}_R(i)[m] = \frac{1}{n - i} \sum_{j=i+1}^{n} \mathbb{1}\{m_j = m\}.$
The mean inter-arrival times $\hat{\Delta}_L(i)$ and $\hat{\Delta}_R(i)$ are defined as

$\hat{\Delta}_L(i) = \frac{t_i - t_1}{i - 1}, \qquad \hat{\Delta}_R(i) = \frac{t_n - t_{i+1}}{n - i - 1}.$
We sometimes write $D(i)$ as $D(\alpha)$, where the argument $\alpha = i/n$. The symbol $\alpha$ denotes the index as a fraction of $n$, and it can take discrete values between $0$ and $1$. The indicator $\mathbb{1}\{m_j = m\}$ takes value $1$ only when the event $\{m_j = m\}$ occurs and $0$ otherwise.
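A minimal sketch of this single-change-point detector, using the unweighted combination described above (empirical TV distance plus the absolute difference of mean inter-arrival times), is shown below. A production version would update the counts incrementally rather than recount from scratch at every candidate point, which is what makes a linear-time implementation possible.

```python
from collections import Counter

def metric_D(msgs, times, i):
    """Metric (1) at candidate change point i: TV distance between the
    empirical message distributions to the left and right of i, plus the
    absolute difference of the empirical mean inter-arrival times."""
    left, right = msgs[:i], msgs[i:]
    cl, cr = Counter(left), Counter(right)
    tv = 0.5 * sum(abs(cl[m] / len(left) - cr[m] / len(right))
                   for m in set(msgs))
    delta_l = (times[i - 1] - times[0]) / (i - 1)            # mean inter-arrival, left
    delta_r = (times[-1] - times[i]) / (len(times) - i - 1)  # mean inter-arrival, right
    return tv + abs(delta_l - delta_r)

def detect_single_change_point(msgs, times):
    """Estimate the change point as the interior index maximizing D(i)."""
    n = len(msgs)
    return max(range(2, n - 1), key=lambda i: metric_D(msgs, times, i))

# Toy example: the message type changes at index 200; inter-arrivals constant.
msgs = ["a"] * 200 + ["b"] * 200
times = list(range(400))
print(detect_single_change_point(msgs, times))  # 200
```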
Algorithm 1 describes the procedure in the one change point case. To make the algorithm more robust, we declare a change point only when the episode length is at least $h n$ and the maximum value of the metric (1) is at least a threshold $\delta$.
Let us consider a simple example to illustrate the idea of change-point detection with one change point. Suppose we have a sequence of messages with unequal inter-arrival times as shown in Fig. 2. All the messages are the same, but the first half of the messages arrive at a rate higher than the second half of the messages. In this scenario, the TV term vanishes and our metric reduces to the difference in the mean inter-arrival times between the two episodes, i.e., $D(i) = |\hat{\Delta}_L(i) - \hat{\Delta}_R(i)|$. The function $D$ in terms of the data point index for this example is shown in Fig. 2. As we show later in Section III-C, the shape of $D$ will be close to the following when the number of samples is large: $D$ will be increasing to the left of the change point, attain its maximum at the change point, and decrease to the right.
The above algorithm tries to detect a single change point first, and if such a change point is found, it divides the data set into two parts, one consisting of messages to the left of the change point and the other consisting of messages to the right of the change point. The single change-point detection algorithm is now applied to each of the two smaller datasets. This is repeated recursively till no more change points are detected.
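The recursion can be sketched as follows; for brevity this sketch scores splits with the TV term only, and min_len and threshold stand in for the minimum episode length and metric threshold used to make Algorithm 1 robust.

```python
from collections import Counter

def tv(left, right):
    """Empirical TV distance between two message lists."""
    cl, cr = Counter(left), Counter(right)
    return 0.5 * sum(abs(cl[m] / len(left) - cr[m] / len(right))
                     for m in set(left) | set(right))

def find_change_points(msgs, min_len=20, threshold=0.3):
    """Hierarchical (recursive binary) segmentation: find the best single
    split; if it is significant, recurse on both halves."""
    n = len(msgs)
    if n < 2 * min_len:
        return []
    best = max(range(min_len, n - min_len + 1),
               key=lambda i: tv(msgs[:i], msgs[i:]))
    if tv(msgs[:best], msgs[best:]) < threshold:
        return []  # no significant change point in this segment
    left = find_change_points(msgs[:best], min_len, threshold)
    right = find_change_points(msgs[best:], min_len, threshold)
    return left + [best] + [best + c for c in right]

print(find_change_points(["a"] * 100 + ["b"] * 100 + ["c"] * 100))  # [100, 200]
```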
III-A1 Discussion: What metric for change-point detection?
We have used the TV distance between two distributions to estimate the change point in metric (1). One can also use other distance measures like the $\ell_2$ distance, the Jensen-Shannon (J-S) distance, the Hellinger distance, or the metric used in prior work on graph-based change-point detection; the latter metric is shown to be an unbiased estimator of the $\ell_2$ distance for categorical data in Appendix K of the supplementary material. We argue that for our data set, all of the above distances give similar performance. Our data set has 97 million points and 39330 types of messages. In the regime where the number of data points is much larger than the dimension of the distribution, estimating a change point through any of the above metrics gives order-wise similar error rates. We show this through synthetic data experiments, since we do not know the ground truth needed to compute the error in estimating the change point in the real data set.
We present one such experiment with a synthetic data set here. Consider two distributions $p$ and $q$ whose support set consists of $d$ points. We assume that $p$ is the uniform distribution, while $q$ differs from $p$ on part of the support. There are $N$ data points: the first half of the data points are independently drawn from $p$ and the second half of the data points are drawn from $q$. Table I shows the absolute error in estimating the change point at $N/2$ to be of the same order for all the distance metrics.
Table I compares the absolute estimation error for the TV, unbiased $\ell_2$, J-S, and Hellinger distance metrics.
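A simplified stand-in for this experiment can be run as follows; the skewed choice of $q$, the support size, and $N$ here are illustrative values rather than the settings behind Table I, and only the TV metric is shown.

```python
import random
from collections import Counter

# Synthetic data: p uniform over 5 symbols; q puts extra mass on symbol 0.
# The first N/2 draws come from p, the rest from q (change point at N // 2).
rng = random.Random(42)
N, support = 2_000, list(range(5))
data = ([rng.choice(support) for _ in range(N // 2)] +
        rng.choices(support, weights=[4, 1, 1, 1, 1], k=N // 2))

def tv_at(i):
    """Empirical TV distance between the segments left and right of i."""
    cl, cr = Counter(data[:i]), Counter(data[i:])
    return 0.5 * sum(abs(cl[s] / i - cr[s] / (N - i)) for s in support)

est = max(range(50, N - 50), key=tv_at)
print(abs(est - N // 2))  # absolute estimation error; small relative to N
```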
We test the TV distance metric on real data and show in Section IV-B that it is satisfactory. Since we do not know the ground truth, we take a small part of the real data set where we can visually identify the approximate locations of the major change points. The change-point algorithm with this metric correctly estimates these locations.
A graph-based change-point detection algorithm from the literature can be adapted to our problem such that the metric computation is linear in the number of messages. We can do this if we consider a graph with nodes as the messages and edges connecting messages of the same type. However, one can show that the metric is not consistent under this adaptation.
III-B Latent Dirichlet Allocation
In the problem considered in this paper, each episode can be thought of as a document and each message can be thought of as a word. Like in the LDA model where each topic is latent, in our problem, each event is latent and can be thought of as a distribution over messages. Unlike LDA-based document modeling, we have time-stamps associated with messages, which we have already used to extract episodes from our data set. Additionally, this temporal information can also be used in a Bayesian inference formulation to extract events and their signatures. However, to make the algorithm simple and computationally tractable, as in the original LDA model, we assume that there is no temporal ordering to the episodes or messages within the episodes. Our experiments suggest that this choice is reasonable and leads to very good practical results. However, one can potentially use the temporal information too as in [24, 25], and this is left for future work.
If we apply the LDA algorithm to our episodes, the output will be the event signatures and episode signatures , where an episode signature is a probability distribution of the events in the episode. In other words, LDA assumes that each message in an episode is generated by first picking an event within an episode from the episode signature and then picking a message from the event based on the event signature.
For our event mining problem, we are interested in event signatures and finding the start and finish times of each occurrence of an event. Therefore, the final step (which we describe next) is to extract the start and finish times from the episode signatures.
Putting it all together: In order to detect all the episodes in which the event occurs prominently, we proceed as follows. We collect all episodes for which the event occurrence probability is greater than a certain threshold . We declare the start and finish times of the collected episodes as the start and finish times of the various occurrences of the event . If an event spans many contiguous episodes, then the start time of the first episode and the end time of the last contiguous episode can be used as the start and finish times of this occurrence of the event. For simplicity, however, this straightforward step is not presented in the detailed description of the algorithm in Algorithm 3.
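The episode-collection step just described can be sketched as follows. This is a minimal illustration; the boundary and signature data formats are our assumptions, not taken from the paper.

```python
def event_occurrences(boundaries, episode_signatures, event_k, eta=0.3):
    """boundaries[i] = (start_time, end_time) of episode i;
    episode_signatures[i][k] = probability of event k in episode i.
    Returns (start, finish) windows for event_k, merging contiguous
    episodes into a single occurrence."""
    hits = [i for i, sig in enumerate(episode_signatures)
            if sig[event_k] >= eta]
    occurrences = []
    for i in hits:
        start, end = boundaries[i]
        if occurrences and occurrences[-1][1] == start:
            # Contiguous with the previous hit: extend that occurrence.
            occurrences[-1] = (occurrences[-1][0], end)
        else:
            occurrences.append((start, end))
    return occurrences
```

For example, episodes covering (0, 10), (10, 20) with high event probability merge into one occurrence (0, 20), while a later isolated episode is reported separately.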
The CD-LDA algorithm works without knowledge of the topology graph of the message-generating elements. If the topology graph is known, then the algorithm can be improved as follows. We can run the change-detection phase separately on the messages restricted to each element and its graph neighbors (either single-hop or two-hop neighbors). The union of the change points can then be used in the subsequent LDA phase. Since the impact of an event is usually restricted to a few hops within the topology, such an approach detects change points better by eliminating many messages generated far from the event source.
Note that the LDA algorithm requires the number of events  as an input. However, one can run LDA for different values of and choose the one with maximum likelihood ; hence need not be assumed to be an input to CD-LDA. One can also use the Hierarchical Dirichlet Process (HDP) algorithm , an extension of LDA which infers the number of topics from the data. In our experiments, we use the maximum-likelihood approach to estimate the number of events. This is explained in section IV-C1.
III-C Analysis of CD
As mentioned earlier, the novelty in the CD-LDA algorithm lies in the connection we make to topic modeling in document analysis. In this context, our key contribution is an efficient algorithm to divide the data set of messages into episodes (documents). Once this is done, the application of LDA to episodes (documents) consisting of messages (words) generated by events (topics) is standard. Therefore, the correctness and efficiency of the CD part of the algorithm determine the correctness and efficiency of CD-LDA as a whole. We focus on analyzing the CD part of the algorithm in this section. Due to space limitations, we only present the main results here; the proofs can be found in the supplementary material.
Section III-C1 shows that the computational complexity of CD algorithm is linear in the number of data points. Section III-C2 contains the asymptotic analysis of the CD algorithm while section III-C3 has the finite sample results.
III-C1 Computational complexity of CD
In this section we discuss the computational complexities of Algorithm 1 and Algorithm 2. We will first discuss the computational complexity of detecting a change point in case of one change point. Algorithm 1 requires us to compute for . From the definition of in (1), we only need to compute the empirical probability estimates , , and the empirical mean of the inter arrival time , for every value of between to .
We focus on the computation of , . Consider any message in the distribution. For each we can compute , in for every value of by using neighbouring values of , .
The computation of for every value of from to is similar.
Performing the above computations for all messages results in a computational complexity of . In the case of change points, it is straightforward to see that we require computations. In much of our discussion, we assume and are constants, and we therefore present the computational complexity results in terms of only.
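The key to the linear scan described above is that the empirical probabilities for every split point can be maintained with a running count, one O(1) update per message. A minimal pure-Python sketch for a single message type (illustrative names, not the paper's implementation):

```python
def left_probabilities(msgs, m):
    """Empirical probability of message type m among the first t messages,
    for every split point t, computed in one linear pass via a running
    count (each value reuses the neighbouring value's count)."""
    probs, count = [], 0
    for t, x in enumerate(msgs, start=1):
        count += (x == m)      # O(1) update from the previous split point
        probs.append(count / t)
    return probs
```

The right-side probabilities and the empirical means of the inter-arrival times can be maintained the same way, scanning from the other end.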
Related work: Algorithm 2 executes the process of determining change points hierarchically. This idea was inspired by the work in . However, the metric we use to detect change points is different from that of . The change in metric necessitates a new analysis of the consistency of the CD algorithm which we present in the next subsection. Further, for our metric, we are also able to derive sample complexity results which are presented in a later subsection.
III-C2 The consistency of change-point detection
In this section we discuss the consistency of the change-point detection algorithm, i.e., when the number of data points goes to infinity one can accurately detect the location of the change points. In both this subsection and the next, we assume that the inter-arrival times of messages within each episode are i.i.d., and are independent (with possibly different distributions) across episodes.
Theorem 1: For , is well-defined and attains its maximum at one of the change points if there is at least one change point.
The proof of Theorem 1 for the single change-point case is relatively easy, but the proof for the case of multiple change points is rather involved. Due to space limitations, we therefore only provide a proof for the single change-point case and refer the interested reader to Appendix C in the supplementary material for the proof of the multiple change-point case.
Proof for the single change-point case: Let the change point be at index . The location of the change point is estimated by the maximizer of over . We will show that when is large, this maximizer converges to the change point .
Suppose all the points to the left of the change point are drawn i.i.d. from a distribution and all the points to the right of are drawn from a distribution , where . Also, suppose the inter-arrival times are drawn i.i.d. from distributions and to the left and right of the change point , respectively.
Let , be the index of any data point and , the index of the change point.
Case 1: Suppose we consider a value of to the left of the actual change point, i.e., or . All the data points to the left of are drawn from the distribution , so is the empirical estimate of . On the other hand, the data points to the right of come from a mixture of the distributions and : a fraction of the samples come from and a fraction come from . Figure 3 below depicts this pictorially.
So and defined in (3) converges to
Similarly, we can say that the empirical mean estimates and converge to
Note that from the definition of , .
Case 2 : Proceeding in a similar way to Case 1, we can show
From Case 1 and Case 2, we have
Equation (11) shows that the maximum of is obtained at .
III-C3 The sample complexity of change-point detection
In the previous subsection, we studied the CD algorithm in the limit as . In this section, we analyze the algorithm when there are only a finite number of samples. For this purpose, we assume that the inter-arrival distributions of messages have sub-Gaussian tails.
We say that Algorithm CD is correct if the following conditions are satisfied (Definition 1). Let be the desired accuracy in the estimation of the change points.
Given , Algorithm CD is correct if
there are change points and the algorithm gives such that .
there is no change point and .
Now we can state the correctness theorem for Algorithm 2. The sample complexity is shown to scale logarithmically with the number of change points.
The proof of this theorem uses the method of types and Pinsker’s inequality. We present here the proof for the single change point case. Due to space constraints, we move the proof for multiple change points to Appendix E in the supplementary material.
We first characterize the single change point case in finite sample setting. In order to get the sample complexity, we prove the correctness for Algorithm 1 as per Definition 1 with high probability. Before we go into the proof, we state the assumptions on under which the proof is valid.
Suppose a change point exists at index and the metric converges to at the change point. Then can only be chosen in the following region: must be less than the value of the metric at the change point, ; must be less than the minimum episode length, .
If a change point exists at index , must be chosen to be less than the minimum episode length minus , .
The threshold .
Given a change point exists at ,
occurs. Say the event denotes .
Given a change point does not exist,
When a change point does not exist we write . Say the event denotes
We analyze each part in (12) separately.
Case 1: Suppose no change point exists, and say all the data points are drawn from the same multinomial distribution and all inter-arrival times are generated i.i.d. from a distribution . Given event , if are all less than , then . So . Now, we can use Sanov's theorem followed by Pinsker's inequality to upper bound each of the above terms as
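For reference, the two standard tools invoked here can be stated as follows, in generic notation (the paper's specific symbols are elided in this version): for the empirical distribution $\hat{P}_n$ of $n$ i.i.d. samples from $P$ over a finite alphabet $\mathcal{X}$,

```latex
% Sanov's theorem (method-of-types form), for a set E of distributions:
\Pr\bigl(\hat{P}_n \in E\bigr) \;\le\; (n+1)^{|\mathcal{X}|}
    \exp\Bigl(-\,n \inf_{Q \in E} D_{\mathrm{KL}}(Q \,\|\, P)\Bigr)

% Pinsker's inequality, bounding total variation by KL divergence:
d_{TV}(Q, P) \;\le\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(Q \,\|\, P)}
```

Combining the two, the event that the empirical TV distance exceeds a threshold has probability decaying exponentially in $n$, which is what drives the bound above.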
Case 2: Next, we look at the case when a change point exists at . Say the messages are drawn from a distribution to the left of the change point and to the right of the change point. Also, suppose the inter-arrival time distribution to the left of the change point is and the inter-arrival time distribution to the right is . According to our assumptions, is chosen such that . Hence
Given the assumption on , . The rest of the proof deals with upper bounding and .
In Lemmas 1-3 we develop the characteristics of and when a change point exists at . Lemmas 1-3 are proved in Appendices F and G of the supplementary material. First, we analyze the concentration of for any value of in Lemma 1.
w.p. at least for all values of when is defined.
Lemma 1 shows that the empirical estimate is very close to the asymptotic value with high probability. Recall that is the maximizer of . We next show in Lemma 2 that the value of the metric at is very close to its value at the change point .
Finally, in Lemma 3 we show that is close to the change point with high probability.
Also, using Lemma 2 and assuming that is chosen such that ,
IV Evaluation with Real Datasets
We now present our experimental results with real data sets from a large operational network. The purpose of the experiments is three-fold. First, we wish to compare our proposed CD-LDA algorithm with other techniques proposed in the literature (adapted to our setting). Second, we want to validate our results against a manually derived, expert-provided event signature for a prominent event. Third, we want to understand the scalability of our method with respect to very large data sets.
Datasets used: We use two data sets: one from a legacy network of physical elements like routers, switches, etc., and another from a recently deployed virtual network function (VNF). The VNF dataset is used to validate our algorithm against expert knowledge. The other is used to show that our algorithm is scalable, i.e., it can handle large data sets, and that it is not overly sensitive to the hyper-parameters.
Dataset-1: This data set consists of around 97 million raw syslog messages collected from 3500 distinct physical network elements (mostly routers) from a nationwide operational network over a 15-day period in 2017. There are 39330 types of messages.
Dataset-2: The second data set consists of around messages collected from distinct physical/virtual network elements over a 3 month period from a newly deployed virtual network function (VNF) which is implemented on a data-center using multiple VMs.
We implemented the machine-learning pipeline shown in Figure 1. The main algorithmic component in the figure is the CD-LDA algorithm; however, for the purpose of comparison, we also implemented two additional algorithms described shortly. Before the data is fed to any of the algorithms, there are two steps, namely, template extraction (in the case of textual syslog data) and pre-processing (for both syslogs and alarms). These steps are described in Appendix J in the supplementary material.
IV-A Benchmark Algorithms
We compare CD-LDA with the following algorithms.
IV-A1 Algorithm B: A Bayesian inference based algorithm
We consider a fully Bayesian inference algorithm to solve the problem. A Bayesian inference algorithm requires some assumptions on the statistical generative model by which the messages are generated. Our model here is inspired by topic modelling across documents generated over multiple eras. Suppose that there are events which generated our data set, and event has a signature as mentioned earlier. The generative model for generating each message is assumed to be as follows.
To generate a message, we first assume that an event is chosen with probability
Next, a message is chosen with probability
Finally, a timestamp is associated with the message which is chosen according to a beta distribution where the parameters of the beta distribution are distinct for different events.
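A minimal sketch of this assumed generative model, using only Python's standard library. All names, parameters, and the toy event signatures below are illustrative assumptions, not taken from the paper.

```python
import random

def sample_message(event_probs, signatures, beta_params, rng):
    """One draw from the assumed generative model: pick an event, then a
    message from that event's signature, then a timestamp from the
    event-specific beta distribution (normalized to [0, 1] time)."""
    events = list(event_probs)
    k = rng.choices(events, weights=[event_probs[e] for e in events])[0]
    msgs = list(signatures[k])
    m = rng.choices(msgs, weights=[signatures[k][x] for x in msgs])[0]
    a, b = beta_params[k]                 # per-event beta parameters
    t = rng.betavariate(a, b)             # timestamp of the message
    return k, m, t
```

In the Bayesian formulation described next, the event probabilities, signatures, and beta parameters are the unknowns to be inferred from the observed (message, timestamp) pairs.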
The parameters of the generative model are unknown. As is standard in such models, we assume a prior on some of these parameters. Here, as in , we assume that there is a prior distribution over the space of all possible and a prior over the space of all possible . The prior is assumed to be independent of . Given these priors, the Bayesian inference problem becomes a maximum likelihood estimation problem, i.e.,
We use Gibbs sampling to solve the above maximization problem. There are two key differences between Algorithm B and the proposed CD-LDA: CD-LDA first breaks up the dataset into smaller episodes, whereas Algorithm B uses prior distributions (the beta distributions) to model the fact that different events happen at different times. We show that such an algorithm works, but the inference procedure is dramatically slower due to the additional parameters to be inferred.
IV-A2 Algorithm C: A Graph-clustering based algorithm
For the purposes of comparison, we also consider a very simple graph-based clustering algorithm to identify events, inspired by the graph-based clustering of event-log data in . The basic idea behind the algorithm is as follows: we construct a graph whose nodes are the messages in the set . We divide the continuous time interval into timeslots, where each timeslot is of duration . For simplicity, we assume that is divisible by . We draw an edge between a pair of nodes (messages) and label the edge with a distance metric between the messages, which roughly indicates the likelihood that the two messages are generated by the same event. Then, any standard distance-based clustering algorithm on the graph will cluster the messages into clusters, and one can interpret each cluster as an event. Clearly, the algorithm has the following major limitation: it can detect for an event but not . In some applications, this may be sufficient. Therefore, we consider this simple algorithm as a candidate algorithm for our real data set.
We now describe how the similarity metric is computed for two nodes and . Let be the number of timeslots during which message occurs, and let be the number of timeslots during which both and appear in the same timeslot. Then, the distance metric between nodes and is defined as
Thus, a smaller indicates that and co-occur frequently. The idea behind choosing this metric is as follows: messages generated by the same event are likely to occur closer together in time. Thus, being small indicates that the messages are more likely to have been generated by the same event, and thus are closer together in distance.
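The exact formula for the distance is elided in this version of the text, so the sketch below uses a Jaccard-style co-occurrence distance as a stand-in with the stated property: messages that frequently appear in the same timeslot are closer.

```python
from collections import defaultdict

def cooccurrence_distance(slots):
    """slots: list of sets, each the message types seen in one timeslot.
    Returns d(a, b) = 1 - (#slots with both a and b) / (#slots with a or b);
    smaller means the pair co-occurs more often (a stand-in metric, not
    necessarily the one used in the paper)."""
    appear = defaultdict(int)   # slots in which each message appears
    both = defaultdict(int)     # slots in which an (a, b) pair co-occurs
    for s in slots:
        for a in s:
            appear[a] += 1
        for a in s:
            for b in s:
                if a < b:
                    both[(a, b)] += 1
    def d(a, b):
        key = (min(a, b), max(a, b))
        union = appear[a] + appear[b] - both[key]
        return 1.0 - both[key] / union if union else 1.0
    return d
```

For example, two messages that share two of their three timeslots end up closer than a pair that shares only one.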
IV-B Results: Comparison with Benchmark Algorithms
For the purposes of this section only, we consider a smaller slice of data from Dataset-1. Instead of considering all the 97 million messages, we take a small slice of 10,000 messages over a 3 hour duration from 135 distinct routers. Let us call this data set . There are two reasons for considering this smaller slice. Firstly, it is easier to visually observe the ground truth in this small data set and verify visually if CD-LDA is giving us the ground truth. We can also compare the results from different methods with this smaller data set. Secondly, as we show later in this section, the Bayesian inference Algorithm-B is dramatically slow and so running it over the full dataset is not feasible. Nevertheless, the smaller dataset allows us to validate the key premise behind our main algorithm, i.e., the decomposition of the algorithm into the CD and LDA parts.
Applying CD-LDA to this dataset slice: Figure 4a shows the data points on the x-axis and the message IDs on the y-axis. Figure 4b shows the episodes after the CD part of CD-LDA, where we chose and . For the LDA part, instead of specifying the number of events, we used maximum likelihood to find the optimal number of events; based on this, the number of events was found to be .
We next compare event signatures produced by CD-LDA with Algorithm B and Algorithm C.
CD-LDA versus Algorithm B: For all unknown distributions, we assume a uniform prior in Algorithm B. Algorithm B is run with the number of events set to . It turns out that, with events, the algorithm converges to a solution which has maximum likelihood. However, upon clustering the event signatures based on the TV distance between them, we find only two events. The maximum TV distance between the event signatures found by the two algorithms is . Hence, we conclude that the event signatures found by both algorithms are very similar.
Although Algorithm B uses fewer hyper-parameters, it is not fast enough to run on large data sets. Figure 5a shows the time taken by CD-LDA and Algorithm B as we increase the size of the data set from to points. With data points and events as input, Algorithm B takes 3 hours, whereas CD-LDA takes only 26.57 seconds. Clearly, we cannot practically run Algorithm B on large data sets with millions of points.
CD-LDA versus Algorithm C: In this section we compare CD-LDA with Algorithm C on data set . Algorithm C can produce the same major event clusters as CD-LDA, but it does not provide the start and end times of the events. We form the co-occurrence graph for Algorithm C with edge weights as described in section IV-A2, with nodes being the messages that occur at least times in data set . All edges with weight more than are discarded, and we run a clique-detection algorithm on the resulting graph.
We quantitatively compare the event signature of the top two cliques found by Algorithm C with those found by CD-LDA. Suppose that message sets identified by Algorithm C for the two events are and respectively. Message sets (messages with probability more than ) identified by CD-LDA for the two events are denoted by and . We can now compute the Jaccard Index between the two sets.
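The Jaccard index between two message sets is simply the size of their intersection divided by the size of their union; a minimal sketch:

```python
def jaccard(s1, s2):
    """Jaccard index between two message sets: |intersection| / |union|.
    Returns 1.0 for two empty sets by convention."""
    s1, s2 = set(s1), set(s2)
    return len(s1 & s2) / len(s1 | s2) if (s1 or s2) else 1.0
```

A value of 1 means the two algorithms identified identical message sets for an event; values well below 1 indicate messages found by one algorithm but missed by the other.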
Since the full Bayesian inference (Algorithm B) agrees closely with CD-LDA, we can conclude that Algorithm C gets a large fraction of the messages associated with the event correctly. However, it also misses a significant fraction of the messages, and it does not provide any information about the start and end times of the events. Moreover, the events found are sensitive to the threshold used for choosing the graph edges, which we carefully tuned for this small data set.
IV-C Results: Comparison with Expert Knowledge and Scalability
Validation by comparing with a manual event signature: The intended use-case of our methodology is learning events when the scale of the data and system does not allow for manual identification of event signatures. However, we still wanted to validate our output against a handful of event signatures inferred manually by domain experts. For this section, we ran CD-LDA on Dataset-2, which comes from an operational VNF. For this data set, an expert had identified that a known service issue occurred on two dates: 11-Oct and 26-Nov, 2017. This event generated messages with IDs Ping_vm, SNMP_AgentCheck, SNMP_ntpd, SNMP_sshd, SNMP_crond, SNMP_Swap, SNMP_CPU, SNMP_Mem, and SNMP_Filespace.
We ran CD-LDA on this data set with parameters and . We chose events for the LDA phase by examining the likelihood computed via cross-validation for different numbers of topics; see section IV-C1 for details of the maximum-likelihood approach. Table II shows the events detected by CD-LDA in decreasing order of probability, with the top messages listed for each event. Indeed, we note that Event resembles the expert-provided event. CD-LDA detected this event as having occurred from 2017-10-08 17:35 to 2017-10-17 15:55 and from 2017-11-25 13:45 to 2017-11-26 03:10. The longer-than-usual detection window for 11-Oct is due to the fact that other events were occurring simultaneously in the network, and the Event contributed only a small fraction of the messages generated during this window. Finally, as shown in Table II, our method also discovered several event signatures not previously known.
Scalability and sensitivity: To understand the scalability of CD-LDA with data size, we ran it on Dataset-1 with about million data points. CD-LDA was run with the following input: , and the number of events equal to . The CD part of the algorithm detects change points. The sensitivity of this output with respect to is discussed next. The event signatures are quite robust to these parameter choices, but, as expected, the accuracy of the start and finish time estimates of the events is poorer for large values of and . Overall, CD-LDA takes about hours to run, which is quite reasonable for a dataset of this size. Reducing the running time by using other methods for implementing LDA, such as variational inference, is a topic for future work.
Parameter specifies the minimum duration of an episode that can be detected in the change detection. Increasing restricts detection to the sharper change points (those across which the change in distribution is large), while decreasing helps us detect softer change points as well; thus controls the granularity of the change-point detection algorithm. Parameter is a user-defined threshold for detecting the episodes in which a particular event occurs. We demonstrate the sensitivity of CD-LDA to and . We run CD-LDA with on Dataset-2 and compare the results with those obtained with parameters . Table V shows the first two events for each parameter setting. CD-LDA detects 57 change points with , whereas it detects only change points with . Despite this, Figure 6 and Table V show that the signatures of the first two events are almost the same. However, since the episodes are longer in duration with , the start and end times of the first two events are less accurate than with . In particular, event 2 is shown to occur from 2-10 05:00 to 2-14 00:00 with in Table V, whereas it is broken into two episodes, 2-10 05:00 to 2-10 13:33 and 2-10 15:27 to 2-14 00:00, with .
|TV dist in||TV dist in|
|2017-02-14 00:00 to 2017-02-15 23:59||2017-02-06 19:29 to 2017-02-07 16:42|
|2017-02-08 00:00 to 2017-02-08 06:25|
|2017-02-08 23:59 to 2017-02-10 04:07|
|2017-02-10 05:00 to 2017-02-14 00:00|
|2017-02-14 00:00 to 2017-02-15 23:59||2017-02-05 06:21 to 2017-02-07 16:42|
|2017-02-08 00:00 to 2017-02-10 00:00|
|2017-02-10 03:07 to 2017-02-10 04:07|
|2017-02-10 05:00 to 2017-02-10 13:33|
|2017-02-10 15:27 to 2017-02-14 00:00|
IV-C1 Selection of the number of topics in LDA
For Dataset-1, we perform 10-fold cross-validation. We randomly group the 58 documents found by change detection into 10 sets. We compute the likelihood on one group with a model trained on the documents in the remaining 9 groups. Figure 7 plots the average likelihood versus the number of topics. There is a decrease in likelihood around , and hence we choose the number of topics to be .
For Dataset-2, we perform -fold cross-validation and choose the number of topics as from Figure 8 below. In this case, we create the groups of documents in the following way: out of documents, group has document number , group has documents , etc. Subsampling in this fashion respects the ordering of the documents.
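The strided grouping just described can be sketched as follows (an illustrative helper, not the paper's code): fold g takes documents g, g+F, g+2F, ..., so every fold samples the whole time range while preserving document order within the fold.

```python
def cv_groups(n_docs, n_folds):
    """Assign document indices 0..n_docs-1 to folds by striding, so each
    fold spans the full time range and preserves document order."""
    return [list(range(g, n_docs, n_folds)) for g in range(n_folds)]
```

For example, with 58 documents and 10 folds, fold 0 holds documents 0, 10, 20, 30, 40, 50.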
V Conclusions and future work
In this paper, we consider the problem of detecting events in an error log generated by a distributed data center network. The error log consists of error messages with timestamps. Our goal is to detect the latent events which generate these messages and to find the distribution of messages for each event. We solve this problem by relating it to the topic modelling problem in documents. We introduce a notion of episodes in the time-series data, which serve as the equivalent of documents, and we propose a linear-time change-detection algorithm to detect these episodes. We present consistency and sample-complexity results for this change-detection algorithm. Further, we demonstrate the performance of our algorithm on a real dataset by comparing it with two benchmark algorithms from the literature. We believe our approach is generic enough to be applied to other problem settings where the data has characteristics similar to network logs.
-  T. Li, L. Shwartz, and G. Y. Grabarnik, “System event mining: Algorithms and applications,” KDD 2017 Tutorial, 2017. [Online]. Available: https://users.cs.fiu.edu/~taoli/event-mining/
-  T. Li, C. Zeng, Y. Jiang, W. Zhou, L. Tang, Z. Liu, and Y. Huang, “Data-driven techniques in computing system management,” ACM Comput. Surv., vol. 50, no. 3, pp. 45:1–45:43, Jul. 2015. [Online]. Available: http://doi.acm.org/10.1145/3092697
-  D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent dirichlet allocation,” J. Mach. Learn. Res., vol. 3, pp. 993–1022, Mar. 2003. [Online]. Available: http://dl.acm.org/citation.cfm?id=944919.944937
-  D. S. Matteson and N. A. James, “A nonparametric approach for multiple change point analysis of multivariate data,” vol. 109, 06 2013.
-  S. Satpathi, S. Deb, R. Srikant, and H. Yan, “Learning latent events from network message logs,” Workshop on mining and learning from time series, 2018. [Online]. Available: https://milets18.github.io/papers/milets18_paper_13.pdf
-  A. A. Makanju, A. N. Zincir-Heywood, and E. E. Milios, “Clustering event logs using iterative partitioning,” in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’09. New York, NY, USA: ACM, 2009, pp. 1255–1264. [Online]. Available: http://doi.acm.org/10.1145/1557019.1557154
-  T. Li, F. Liang, S. Ma, and W. Peng, “An integrated framework on mining logs files for computing system management,” in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, ser. KDD ’05. New York, NY, USA: ACM, 2005, pp. 776–781. [Online]. Available: http://doi.acm.org/10.1145/1081870.1081972
-  L. Tang and T. Li, “Logtree: A framework for generating system events from raw textual logs,” in Data Mining (ICDM), 2010 IEEE 10th International Conference on. IEEE, 2010, pp. 491–500.
-  T. Qiu, Z. Ge, D. Pei, J. Wang, and J. Xu, “What happened in my network: Mining network events from router syslogs,” in Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, ser. IMC ’10. New York, NY, USA: ACM, 2010, pp. 472–484. [Online]. Available: http://doi.acm.org/10.1145/1879141.1879202
-  F. Wu, P. Anchuri, and Z. Li, “Structural event detection from log messages,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’17. New York, NY, USA: ACM, 2017, pp. 1175–1184. [Online]. Available: http://doi.acm.org/10.1145/3097983.3098124
-  R. Agrawal and R. Srikant, “Mining sequential patterns,” in Proceedings of the Eleventh International Conference on Data Engineering, ser. ICDE ’95. Washington, DC, USA: IEEE Computer Society, 1995, pp. 3–14. [Online]. Available: http://dl.acm.org/citation.cfm?id=645480.655281
-  D. Cheng, M. T. Bahadori, and Y. Liu, “Fblg: A simple and effective approach for temporal dependence discovery from time series data,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’14. New York, NY, USA: ACM, 2014, pp. 382–391. [Online]. Available: http://doi.acm.org/10.1145/2623330.2623709
-  C. Zeng, Q. Wang, W. Wang, T. Li, and L. Shwartz, “Online inference for time-varying temporal dependency discovery from time series,” in Big Data (Big Data), 2016 IEEE International Conference on. IEEE, 2016, pp. 1281–1290.
-  C. H. Mooney and J. F. Roddick, “Sequential pattern mining – approaches and algorithms,” ACM Comput. Surv., vol. 45, no. 2, pp. 19:1–19:39, Mar. 2013. [Online]. Available: http://doi.acm.org/10.1145/2431211.2431218
-  J. A. Silva, E. R. Faria, R. C. Barros, E. R. Hruschka, A. C. P. L. F. d. Carvalho, and J. a. Gama, “Data stream clustering: A survey,” ACM Comput. Surv., vol. 46, no. 1, pp. 13:1–13:31, Jul. 2013. [Online]. Available: http://doi.acm.org/10.1145/2522968.2522981
-  Y. Jiang, C.-S. Perng, and T. Li, “Natural event summarization,” in Proceedings of the 20th ACM International Conference on Information and Knowledge Management, ser. CIKM ’11. New York, NY, USA: ACM, 2011, pp. 765–774. [Online]. Available: http://doi.acm.org/10.1145/2063576.2063688
-  P. Wang, H. Wang, M. Liu, and W. Wang, “An algorithmic approach to event summarization,” in Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, ser. SIGMOD ’10. New York, NY, USA: ACM, 2010, pp. 183–194. [Online]. Available: http://doi.acm.org/10.1145/1807167.1807189
-  W. Peng, C. Perng, T. Li, and H. Wang, “Event summarization for system management,” in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’07. New York, NY, USA: ACM, 2007, pp. 1028–1032. [Online]. Available: http://doi.acm.org/10.1145/1281192.1281305
-  N. Tatti and J. Vreeken, “The long and the short of it: Summarising event sequences with serial episodes,” in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’12. New York, NY, USA: ACM, 2012, pp. 462–470. [Online]. Available: http://doi.acm.org/10.1145/2339530.2339606
-  M. Du, F. Li, G. Zheng, and V. Srikumar, “Deeplog: Anomaly detection and diagnosis from system logs through deep learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’17. New York, NY, USA: ACM, 2017, pp. 1285–1298. [Online]. Available: http://doi.acm.org/10.1145/3133956.3134015
-  Y. Kawahara and M. Sugiyama, “Sequential change-point detection based on direct density-ratio estimation,” Statistical Analysis and Data Mining: The ASA Data Science Journal, vol. 5, no. 2, pp. 114–127, 2012.
-  A. Lung-Yut-Fong, C. Lévy-Leduc, and O. Cappé, “Homogeneity and change-point detection tests for multivariate data using rank statistics,” arXiv preprint arXiv:1107.1971, 2011.
-  H. Chen and N. Zhang, “Graph-based change-point detection,” Ann. Statist., vol. 43, no. 1, pp. 139–176, Feb. 2015. [Online]. Available: https://doi.org/10.1214/14-AOS1269
-  X. Wang and A. McCallum, “Topics over time: A non-markov continuous-time model of topical trends,” in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’06. New York, NY, USA: ACM, 2006, pp. 424–433. [Online]. Available: http://doi.acm.org/10.1145/1150402.1150450
-  C. Wang, D. Blei, and D. Heckerman, “Continuous time dynamic topic models,” arXiv preprint arXiv:1206.3298, 2012.
-  T. L. Griffiths and M. Steyvers, “Finding scientific topics,” Proceedings of the National Academy of Sciences, vol. 101, no. suppl 1, pp. 5228–5235, 2004.
-  M. Hoffman, F. R. Bach, and D. M. Blei, “Online learning for latent Dirichlet allocation,” in Advances in Neural Information Processing Systems, 2010, pp. 856–864.
-  M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley, “Stochastic variational inference,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 1303–1347, 2013.
-  A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, “Tensor decompositions for learning latent variable models,” Journal of Machine Learning Research, vol. 15, pp. 2773–2832, 2014. [Online]. Available: http://jmlr.org/papers/v15/anandkumar14b.html
-  T. Bansal, C. Bhattacharyya, and R. Kannan, “A provable svd-based algorithm for learning topics in dominant admixture corpus,” in Advances in Neural Information Processing Systems 27, 2014, pp. 1997–2005.
-  C. Wang, J. Paisley, and D. Blei, “Online variational inference for the hierarchical dirichlet process,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 752–760.
-  E. Sy, S. A. Jacobs, A. Dagnino, and Y. Ding, “Graph-based clustering for detecting frequent patterns in event log data,” in 2016 IEEE International Conference on Automation Science and Engineering (CASE), Aug 2016, pp. 972–977.
Appendix A Supplementary Material
This is the supplementary material for the paper ‘Learning Latent Events from Network Message Logs’.
Appendix B Which inference algorithm for the LDA model?
Many inference techniques exist for performing this inference of event and episode signatures using topic modeling: Gibbs sampling on the LDA model, variational inference, online variational inference, and stochastic variational inference. There are also provable inference methods based on spectral techniques, such as the tensor decomposition method and the SVD-based method. We use a popular Python package based on Gibbs-sampling inference for the real-data experiments; one can also choose other, more recent inference methods mentioned above. We work in the regime where the number of messages is much larger than the number of message types. In this regime, we show through a synthetic data experiment that most of the inference algorithms perform comparably on our problem.
We therefore compare three different inference algorithms, namely, Gibbs sampling on the LDA model, online variational inference, and the tensor decomposition method. We build an example with a fixed number of message types and a stream of messages, generated as follows. There are two events, each with its own message distribution. Episode 1 spans the first block of messages and contains only event 1; episode 2 spans the next block and contains half of event 1 and half of event 2; episode 3 begins thereafter and continues until the end of the stream, with only a single event occurring in it. We run change detection based on our metric, followed by each of the three topic-modeling inference algorithms on the resulting episodes. We compare the inferred event-message distributions to the true ones by computing the norm of their difference, maximized over all events. Table VI summarizes the results: the errors in estimating the event-message distributions are of the same order of magnitude for all three algorithms.
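The comparison pipeline above can be sketched as follows. This is a minimal, illustrative sketch: the number of message types, the event signatures, the episode sizes, and the choice of the L1 norm are placeholder assumptions of ours (the paper's exact settings are not reproduced in this text), and scikit-learn's variational LDA stands in for the inference packages compared in Table VI.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Two ground-truth event signatures (message-type distributions); values are illustrative.
true_events = np.array([
    [0.6, 0.3, 0.1, 0.0, 0.0],   # event 1
    [0.0, 0.0, 0.1, 0.3, 0.6],   # event 2
])

# Three episodes, mirroring the construction in the text:
# episode 1: only event 1; episode 2: half/half; episode 3: only event 2 (assumed).
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
docs = np.vstack([rng.multinomial(5000, w @ true_events) for w in weights])

# Treat each episode as a document of message counts and infer 2 topics (events).
lda = LatentDirichletAllocation(n_components=2, random_state=0, max_iter=50)
lda.fit(docs)
est = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Match estimated topics to true events (2 topics: try both orderings) and
# report the error maximized over all events, as in the comparison above.
err = min(
    max(np.abs(est[p[0]] - true_events[0]).sum(),
        np.abs(est[p[1]] - true_events[1]).sum())
    for p in [(0, 1), (1, 0)]
)
print(round(err, 3))
```

Swapping in a Gibbs-sampling or tensor-decomposition implementation only changes the `lda.fit` step; the episode construction and the error computation are unchanged.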
Appendix C Proof of Theorem 1 for the multiple change-point case
To study the case of multiple change points, the prior work cited above exploits the fact that its change-point metric is convex between change points. The TV distance we use, however, is not convex between two change points. We work around this in the proof of Theorem 1 by showing that the statistic is increasing to the left of the first change point, is unimodal, increasing, or decreasing between any two consecutive change points, and is decreasing to the right of the last change point. Hence, any global maximum of the statistic over candidate split points is located at a change point.
Suppose we have more than one change point. We show that the statistic is increasing to the left of the first change point, unimodal, increasing, or decreasing between any two consecutive change points, and decreasing to the right of the last change point. If this holds, we can conclude that one of the global maxima of the statistic occurs at a change point. Using techniques similar to the single-change-point case, it is straightforward to show that the statistic is increasing to the left of the first change point and decreasing to the right of the last change point; the argument parallels the single-change-point case and is omitted. Hence, it remains to show that the statistic is unimodal, increasing, or decreasing between two consecutive change points. Lemma 4 proves this result; its proof is given in Appendix D.
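For concreteness, the TV-distance statistic that the proof reasons about can be sketched as follows. The function names and the brute-force scan over split points are illustrative assumptions of ours, not the paper's implementation; as the paper notes, maintaining running counts makes the scan linear in the number of messages.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def empirical_dist(msgs, n_types):
    """Empirical message-type distribution of a window of messages."""
    counts = np.bincount(msgs, minlength=n_types)
    return counts / counts.sum()

def change_point(msgs, n_types, min_gap=1):
    """Return the split index maximizing the TV distance between the
    empirical distributions to its left and to its right."""
    best_i, best_d = None, -1.0
    for i in range(min_gap, len(msgs) - min_gap):
        d = tv_distance(empirical_dist(msgs[:i], n_types),
                        empirical_dist(msgs[i:], n_types))
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

On a stream whose message-type distribution changes once, the maximizing index recovers the change point, in line with the consistency result discussed above; with multiple change points, the unimodality argument of Lemma 4 is what places a global maximizer at one of them.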
Lemma 4: The statistic is unimodal, increasing, or decreasing between two consecutive change points when there is more than one change point.
When we say the statistic is unimodal between two consecutive change points, we mean that there exists a point between them such that the statistic is non-decreasing up to that point and non-increasing after it.
Appendix D Proof of Lemma 4
Consider any two consecutive change points. Suppose the data points between them are drawn i.i.d. from a common distribution. The data points to the left of the first change point are possibly drawn independently from more than one distribution; however, for the asymptotic analysis we can treat them as drawn i.i.d. from a mixture of those distributions. Call this the left mixture distribution. Similarly, the data points to the right of the second change point can be treated as drawn i.i.d. from a right mixture distribution. Let the inter-arrival times be drawn from one distribution to the left of the first change point, from a second distribution between the two change points, and from a third distribution to the right of the second change point.
Now consider a candidate split point in the region between the two change points. The empirical distribution to the left of the split is a mixture of a fraction of samples from the left mixture distribution and a fraction from the in-between distribution; the empirical distribution to the right is a mixture of a fraction from the in-between distribution and a fraction from the right mixture distribution. Hence,
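Under assumed notation (ours, not necessarily the paper's original symbols), the mixtures described above can be written out explicitly: let $c_1 < c_2$ be the two consecutive change points, $f$ the distribution between them, $f_\ell$ and $f_r$ the left and right mixture distributions, $n$ the total number of samples, and $i \in (c_1, c_2)$ the candidate split point. Then, asymptotically,

```latex
\hat{f}_L(i) \to \frac{c_1}{i}\, f_\ell + \frac{i - c_1}{i}\, f,
\qquad
\hat{f}_R(i) \to \frac{c_2 - i}{n - i}\, f + \frac{n - c_2}{n - i}\, f_r,
```

so the statistic between the two change points is the TV distance between these two mixtures, and its behavior as a function of $i$ is what Lemma 4 characterizes.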