Identifying and Alleviating Concept Drift in Streaming Tensor Decomposition


Ravdeep Pasricha, Ekta Gujral, and Evangelos E. Papalexakis
Department of Computer Science and Engineering, University of California Riverside, CA, USA
{rpasr001, egujr001}@ucr.edu, epapalex@cs.ucr.edu
Abstract

Tensor decompositions are used in various data mining applications, from social networks to medicine, and are extremely useful in discovering latent structures or concepts in the data. Many real-world applications are dynamic in nature, and so are their data. To deal with this dynamic nature of data, there exist a variety of online tensor decomposition algorithms. A central assumption in all of those algorithms is that the number of latent concepts remains fixed throughout the entire stream. However, this need not be the case. Every incoming batch in the stream may have a different number of latent concepts, and the difference in latent concepts from one tensor batch to another can provide insights into how our findings in a particular application behave and deviate over time. In this paper, we define "concept" and "concept drift" in the context of streaming tensor decomposition, as the manifestation of the variability of latent concepts throughout the stream. Furthermore, we introduce SeekAndDestroy, an algorithm that detects concept drift in streaming tensor decomposition and is able to produce results robust to that drift. To the best of our knowledge, this is the first work that investigates concept drift in streaming tensor decomposition. We extensively evaluate SeekAndDestroy on synthetic datasets which exhibit a wide variety of realistic drift. Our experiments demonstrate the effectiveness of SeekAndDestroy, both in the detection of concept drift and in the alleviation of its effects, producing results of similar quality to decomposing the entire tensor in one shot. Additionally, on real datasets, SeekAndDestroy outperforms other streaming baselines while discovering novel useful components.

Keywords:
Tensor analysis, streaming, concept drift, unsupervised learning

1 Introduction

Data comes in many shapes and sizes. Many real-world applications deal with data that is multi-aspect (or multi-dimensional) in nature. An example of multi-aspect data is the set of interactions between different users in a social network over a period of time: who messages whom, who likes whose posts, or who shares (re-tweets) whose posts. This can be modeled as a three-mode tensor, with user-user interactions forming two modes of the tensor and time being the third mode, where each data point represents an interaction between two users.

Tensor decomposition has been used in many data mining applications and is an extremely useful tool for finding latent structures in tensors in an unsupervised fashion. There exists a wide variety of tensor decomposition models and algorithms; interested readers can refer to [9, 13] for details. In this paper, our main focus is on the CP/PARAFAC decomposition [7] (henceforth referred to as CP for brevity), which decomposes a tensor into a sum of rank-one tensors, each one being a latent factor (or concept) in the data. CP has been widely used in many applications, due to its ability to uniquely uncover latent components in a variety of unsupervised multi-aspect data mining applications [13].

In today's world, data is not static; it keeps evolving over time. In real-world applications like stock markets and e-commerce websites, hundreds of transactions (if not thousands) take place every second, and in applications like social media, thousands of new interactions occur every second, forming new communities of users who interact with each other. In this example, we consider each community of people within the graph as a concept.

There has been a considerable amount of work in dealing with online or streaming CP decomposition [16, 6, 11], where the goal is to absorb the updates to the tensor in the already computed decomposition, as they arrive, and avoid recomputing the decomposition every time new data arrives. However, despite the already existing work in the literature, a central issue has been left, to the best of our knowledge, entirely unexplored. All of the existing online/streaming tensor decomposition literature assumes that the number of concepts in the data (which is equal to the rank of the decomposition) remains fixed throughout the lifetime of the application. What happens if the number of components changes? What if a new component is introduced, or an existing component splits into two or more new components? This is an instance of concept drift in unsupervised tensor analysis, and this paper is a look at this problem from first principles.

Our contributions in this paper are the following:

  • Characterizing concept drift in streaming tensors: We define concept and concept drift in time evolving tensors and provide a quantitative method to measure the concept drift.

  • Algorithm for detecting and alleviating concept drift in streaming tensor decomposition: We provide an algorithm which detects drift in the streaming data and also updates the previous decomposition without any assumption on the rank of the tensor.

  • Experimental evaluation on real & synthetic data: We extensively evaluate our method on both synthetic and real datasets; it outperforms state-of-the-art methods in cases where the rank is not known a priori, and performs on par in the other cases.

  • Reproducibility: Our implementation is made publicly available at https://github.com/ravdeep003/conceptDrift for reproducibility of the experiments.

2 Problem Formulation

2.1 Tensor Definition and Notations

A tensor $\underline{\mathbf{X}}$ is a collection of stacked matrices ($\mathbf{X}_1, \dots, \mathbf{X}_K$) with dimension $I \times J \times K$, where $I$ and $J$ represent the rows and columns of each matrix and $K$ represents the number of views. In other words, a tensor is a higher-order abstraction of a matrix. For simplicity, we refer to each "dimension" as a "mode" of the tensor, where the "modes" are the indices used to address the tensor. The rank of a tensor, $\text{rank}(\underline{\mathbf{X}}) = R$, is defined as the minimum number of rank-1 tensors, computed from its latent components, which are required to reproduce $\underline{\mathbf{X}}$ as their sum. Table 1 presents the notations used throughout the paper.

Symbol                                                       Definition
$\underline{\mathbf{X}}$, $\mathbf{X}$, $\mathbf{x}$, $x$    Tensor, matrix, column vector, scalar
$\mathbb{R}$                                                 Set of real numbers
$\circ$                                                      Outer product
$\|\cdot\|_F$, $\|\cdot\|_2$                                 Frobenius norm, $\ell_2$ norm
$\mathbf{A}(:,r)$                                            $r$-th column of $\mathbf{A}$
$\odot$                                                      Khatri-Rao product (column-wise Kronecker product [13])
Table 1: Table of symbols and their description

Tensor Batch: A batch is an $(N-1)$-mode partition of a tensor $\underline{\mathbf{X}}$, whose size varies in only one mode while the other modes remain unchanged. Here, an incoming batch $\underline{\mathbf{X}}_{new}$ is of dimension $I \times J \times K_{new}$ and the existing tensor $\underline{\mathbf{X}}_{old}$ is of dimension $I \times J \times K_{old}$. The full tensor $\underline{\mathbf{X}}$ has temporal mode $K = K_{old} + K_{new}$. A tensor can be partitioned into horizontal $\underline{\mathbf{X}}(I,:,:)$, lateral $\underline{\mathbf{X}}(:,J,:)$, and frontal $\underline{\mathbf{X}}(:,:,K)$ slices.
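To make the batching concrete, the following is a minimal plain-MATLAB sketch of partitioning a 3-mode tensor into frontal-slice batches along the temporal mode; all sizes and variable names here are illustrative assumptions, not values from the paper.

    % Minimal sketch: split a 3-mode tensor into batches along the third
    % (temporal) mode. Sizes are toy values chosen for illustration.
    I = 100; J = 100; K = 50; batchSize = 10;
    X = randn(I, J, K);                  % stand-in for the full stream
    numBatches = K / batchSize;
    batches = cell(numBatches, 1);
    for b = 1:numBatches
        idx = (b-1)*batchSize + 1 : b*batchSize;
        batches{b} = X(:, :, idx);       % an I x J x batchSize tensor batch
    end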

CP decomposition: The most popular and extensively used tensor decomposition is the Canonical Polyadic or CANDECOMP/PARAFAC decomposition, referred to as CP henceforth. Given a 3-mode tensor $\underline{\mathbf{X}}$ of dimension $I \times J \times K$, its decomposition of rank at most $R$ can be written as

$\underline{\mathbf{X}} \approx \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r$,

where $\mathbf{a}_r \in \mathbb{R}^{I}$, $\mathbf{b}_r \in \mathbb{R}^{J}$, $\mathbf{c}_r \in \mathbb{R}^{K}$, and the factor matrices are $\mathbf{A} = [\mathbf{a}_1 \cdots \mathbf{a}_R]$, $\mathbf{B} = [\mathbf{b}_1 \cdots \mathbf{b}_R]$, and $\mathbf{C} = [\mathbf{c}_1 \cdots \mathbf{c}_R]$. For tensor approximation, we adopt the least-squares criterion, minimizing $\|\underline{\mathbf{X}} - \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r\|_F^2$, where $\|\cdot\|_F^2$ is the sum of squares of all elements and $\|\cdot\|_F$ is the Frobenius norm. The CP model is nonconvex in $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$. We refer interested readers to popular surveys [9, 13] on tensor decompositions and their applications for more details.
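As a minimal sketch (assuming the MATLAB Tensor Toolbox used in Section 4.1 is on the path), a rank-R CP decomposition and its least-squares fit can be computed as follows; the toy sizes and the rank are our own assumptions.

    % Minimal sketch, assuming the Tensor Toolbox is available.
    X = tensor(randn(50, 40, 30));           % toy 3-mode tensor
    R = 5;                                   % assumed rank
    M = cp_als(X, R);                        % ktensor: weights M.lambda, factors M.U{1..3}
    A = M.U{1}; B = M.U{2}; C = M.U{3};
    relErr = norm(X - full(M)) / norm(X);    % least-squares fit criterion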

2.2 Problem Definition

Let us consider a social media network like Facebook, where a very large number of users update information every single minute, or Twitter, where a vast number of users tweet every minute (https://mashable.com/2012/06/22/data-created-every-minute/). Here, we have interactions arriving continuously at high velocity, where each interaction consists of User Id, Tag Ids, Device, Location information, etc. How can we capture such dynamic user interactions? How do we identify concepts which can signify a potential newly emerging community, the complete disappearance of interactions, or the merging of one or more communities into a single one? When using tensors to represent such dynamically evolving data, our problem falls under "streaming" or "online" tensor analysis. Decomposing streaming or online tensors is a challenging task, and concept drift in the incoming data makes the problem significantly more difficult, especially in applications where we care about characterizing the concepts in the data, in addition to merely approximating the streaming tensor adequately.

Before we conceptualize the problem that our paper deals with, we define certain terms which are necessary to set up the problem. Let $\underline{\mathbf{X}}_{old}$ and $\underline{\mathbf{X}}_{new}$ be two incremental batches of a streaming tensor, of rank $R$ and $F$ respectively. Let $\underline{\mathbf{X}}_{old}$ be the initial tensor at time $t$ and $\underline{\mathbf{X}}_{new}$ be the batch of the streaming tensor which arrives at time $t+1$, such that $t+1 > t$. The CP decompositions of these two tensors are given as follows:

$\underline{\mathbf{X}}_{old} \approx \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r$   (1)

$\underline{\mathbf{X}}_{new} \approx \sum_{f=1}^{F} \mathbf{a}_f \circ \mathbf{b}_f \circ \mathbf{c}_f$   (2)

Concept: In the case of tensors, we define a concept as one latent component; a sum of such components makes up the tensor. In the above equations, tensors $\underline{\mathbf{X}}_{old}$ and $\underline{\mathbf{X}}_{new}$ have $R$ and $F$ concepts respectively.
Concept Overlap: We define concept overlap as the set of latent concepts that are common or shared between two consecutive streaming CP decompositions. Consider Figure 1, where $R$ and $F$ are both equal to three, which means both tensors $\underline{\mathbf{X}}_{old}$ and $\underline{\mathbf{X}}_{new}$ have three concepts, and each concept of $\underline{\mathbf{X}}_{old}$ corresponds to a concept of $\underline{\mathbf{X}}_{new}$. This means that there are three concepts that overlap between $\underline{\mathbf{X}}_{old}$ and $\underline{\mathbf{X}}_{new}$. The minimum and maximum numbers of overlapping concepts between two tensors are zero and $\min(R, F)$ respectively; thus, the amount of concept overlap lies between 0 and $\min(R, F)$. In Section 3 we propose an algorithm for detecting such overlap.

Figure 1: Complete overlap of concepts

New Concept: If there exists a set of concepts which are not similar to any of the concepts already present in the most recent prior tensor batch, we call all concepts in that set new concepts. Consider Figure 2, where $\underline{\mathbf{X}}_{old}$ has two concepts and $\underline{\mathbf{X}}_{new}$ has three. We see that at time $t+1$ the tensor batch has three concepts, out of which two match the concepts of $\underline{\mathbf{X}}_{old}$ and one (namely concept 3) does not match any concept of $\underline{\mathbf{X}}_{old}$. In this scenario we say that concepts 1 and 2 are overlapping concepts and concept 3 is a new concept.

Figure 2: (a) Concept appears    (b) Concept disappears

Missing Concept: If there exists a set of concepts which were present at time $t$ but are missing at a future time $t+1$, we call the concepts in that set missing concepts. For example, consider Figure 2: at time $t$, the CP decomposition of $\underline{\mathbf{X}}_{old}$ has three concepts, and at time $t+1$ the CP decomposition of $\underline{\mathbf{X}}_{new}$ has two. Two concepts of $\underline{\mathbf{X}}_{new}$ match with concepts of $\underline{\mathbf{X}}_{old}$, and one concept, present at $t$, is missing at $t+1$; we label that concept a missing concept.
Running Rank: The running rank (runningRank) at time $t$ is defined as the total number of unique concepts (or latent components) seen until time $t$. Running rank is different from the tensor rank of a batch; it may or may not equal the rank of the current tensor batch. Consider Figure 1: the runningRank at time $t+1$ is three, since the total number of unique concepts seen until $t+1$ is three. Similarly, the runningRank of Figure 2 at time $t+1$ is three, even though the rank of $\underline{\mathbf{X}}_{new}$ is two, since the number of unique concepts seen until $t+1$ is three.

Let us assume the rank of the initial tensor batch at time $t$ is $R$ and the rank of the next subsequent tensor batch at time $t+1$ is $F$. Then runningRank at time $t+1$ is the sum of the running rank at $t$ and the number of new concepts discovered from $t$ to $t+1$. At time $t$, the running rank is equal to the initial rank of the tensor batch, in this case $R$:

$runningRank_{t+1} = runningRank_{t} + \text{numNewConcepts}(t \rightarrow t+1)$   (4)

Concept Drift: Concept drift is usually defined in terms of supervised learning [3, 14, 15]. In [14], the authors define concept drift in unsupervised learning as the change in the probability distribution of a random variable over time. We define concept drift in the context of latent concepts, which is based on the rank of the tensor batch. We first give an intuitive description of concept drift in terms of running rank, and then define it.
Intuition: Let the running rank at time $t$ be $runningRank_t$ and the running rank at time $t+1$ be $runningRank_{t+1}$. If $runningRank_t \neq runningRank_{t+1}$, then there is concept drift, i.e., either a new concept has appeared or a concept has disappeared. However, this criterion does not capture every single case: even if $runningRank_t = runningRank_{t+1}$, there is no drift only when there is a complete overlap. Concept drift may still be present when the running ranks are equal, since a concept might disappear while runningRank remains the same.
Definition: Whenever a new concept appears, a concept disappears, or both, from time $t$ to $t+1$, this phenomenon is defined as concept drift.

In a streaming tensor application, a tensor batch arrives at regular intervals of time. Before we decompose a tensor batch to get its latent concepts, we need to know the rank of the batch. Finding the tensor rank is a hard problem [8] and is beyond the scope of this paper. There has been a considerable amount of work on approximating the rank of a tensor [12, 10]. In this paper we employ AutoTen [12] to compute a low rank of a tensor. As new advances in tensor rank estimation happen, our proposed method will also benefit from them.


Problem 1

Given (a) a tensor $\underline{\mathbf{X}}_{old}$ of dimensions $I \times J \times K_{old}$ and rank $R$ at time $t$, and (b) a tensor $\underline{\mathbf{X}}_{new}$ of dimensions $I \times J \times K_{new}$ and rank $F$ at time $t+1$, as shown in Figure 3, compute $\underline{\mathbf{X}}$ of dimension $I \times J \times (K_{old} + K_{new})$ and of rank equal to runningRank at time $t+1$, as shown in Equation (5), using the factor matrices of $\underline{\mathbf{X}}_{old}$ and $\underline{\mathbf{X}}_{new}$.

$\underline{\mathbf{X}} \approx \sum_{r=1}^{runningRank} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r$   (5)
Figure 3: Problem formulation

3 Proposed Method

Consider a social media application where thousands of connections are formed every second, such as who follows whom or who interacts with whom. These connections can be viewed as forming communities. Over a period of time, communities disappear, new communities appear, or some communities re-appear after some time. The number of communities at any given point in time is dynamic, and there is no way of knowing which communities will appear or disappear in the future. When this data stream is captured as a tensor, communities correspond to latent concepts, and the appearance and disappearance of communities over a period of time is referred to as concept drift. Hence, we need a dynamic way of determining the number of communities in a tensor batch, rather than assuming a constant number of communities across all tensor batches.

To the best of our knowledge, there is no algorithmic approach that detects concept drift in streaming tensor decomposition. As we mentioned in Section 1, there has been a considerable amount of work [6, 16, 11] which deals with streaming tensor data by applying batch decomposition on incoming slices and combining the results. However, these methods do not take changes of rank into consideration, which could reveal new latent concepts in the data. Even if we know the rank (number of latent concepts) of the complete tensor, the batches of that tensor might not have the same rank as the complete tensor.

In this paper we propose SeekAndDestroy, a streaming CP decomposition algorithm that makes no assumption about the rank of the tensor. SeekAndDestroy detects the rank of every incoming batch in order to decompose it and, finally, updates the existing decomposition after detecting and alleviating concept drift, as defined in Section 2.

An integral part of SeekAndDestroy is detecting different concepts and identifying concept drift in a streaming tensor. In order to do this successfully, we need to solve the following problems:

  • Finding the rank of a tensor batch.

  • Finding new concepts, concept overlap, and missing concepts between two consecutive tensor batch decompositions.

  • Updating the factor matrices to incorporate the new and missing concepts along with concept overlaps.

Finding the number of latent concepts: Finding the rank of a tensor is a hard problem [8] and is not in the scope of this work. There has been work, such as [12] and [10], which tries to approximately estimate the rank of a tensor. Our algorithm employs AutoTen [12], an automatic and unsupervised algorithm for finding the rank of a tensor. As this part is not our contribution, in Section 4 we perform our experiments on synthetic data where we know the rank (and use that information as given to us by an "oracle") and repeat those experiments using AutoTen, comparing the error between them; the gap in quality signifies room for improvement that SeekAndDestroy will reap if rank estimation is solved more accurately in the future.
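Purely as an illustration of the interface such a rank estimator exposes (this is a naive stand-in, not the AutoTen algorithm), one could sweep candidate ranks and stop once the fit gain becomes negligible:

    % Naive rank-sweep heuristic (NOT AutoTen): keep the smallest rank whose
    % marginal fit improvement drops below tol. X is a Tensor Toolbox tensor.
    function R = naiveRankEstimate(X, maxRank, tol)
        prevErr = inf; R = 1;
        for r = 1:maxRank
            M = cp_als(X, r, 'printitn', 0);
            err = norm(X - full(M)) / norm(X);
            if prevErr - err < tol           % diminishing returns: stop growing r
                break;
            end
            R = r; prevErr = err;
        end
    end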

Finding concept overlap: Given the rank of a tensor batch, we compute its latent components using CP decomposition. Consider Figure 3 as an example. At time $t+1$, the number of latent concepts we computed is batchRank, and we already had runningRank components before the new batch arrived. In this scenario, there are three possible cases: (1) batchRank < runningRank, (2) batchRank = runningRank, and (3) batchRank > runningRank.

In each of the cases mentioned above, new concepts may appear at $t+1$, concepts may disappear from $t$ to $t+1$, or concepts may be shared between the two decompositions. In Figure 3 we see that, even though batchRank is equal to runningRank, we have one new concept, one missing concept, and two shared/overlapping concepts. Now, at time $t+1$, we have four unique concepts, which means our runningRank at $t+1$ is four.

In order to discover which concepts are shared, new, or missing, we use the Cauchy-Schwarz inequality, which states that for two vectors $\mathbf{a}$ and $\mathbf{b}$ we have $\mathbf{a}^T\mathbf{b} \leq \|\mathbf{a}\|_2 \|\mathbf{b}\|_2$. Algorithm 2 provides the general outline of the technique used for finding concepts. It takes column-normalized matrices $\mathbf{A}_{old}$ and $\mathbf{A}_{new}$, of size $I \times runningRank$ and $I \times batchRank$ respectively, as input, and computes the dot product for all permutations of columns between the two matrices, $\mathbf{A}_{old}(:,i)^T \mathbf{A}_{new}(:,j)$, where $\mathbf{A}_{old}(:,i)$ and $\mathbf{A}_{new}(:,j)$ are the respective columns. If the computed dot product is higher than a threshold value, the two concepts match, and we consider them shared/overlapping between $\underline{\mathbf{X}}_{old}$ and $\underline{\mathbf{X}}_{new}$. If the dot products between a column in $\mathbf{A}_{new}$ and all the columns in $\mathbf{A}_{old}$ are all below the threshold, we consider it a new concept. This solves the second of the three problems listed above. In the experimental evaluation, we demonstrate the behavior of SeekAndDestroy with respect to that threshold.
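A minimal sketch of this matching step follows; for brevity it greedily takes the best match per column instead of the full permutation search of Algorithm 2, and all variable names are our own.

    % Greedy sketch of concept matching between column-normalized factors.
    function [newIdx, overlapNew, overlapOld] = matchConcepts(Aold, Anew, thresh)
        Aold = Aold ./ vecnorm(Aold);        % unit-normalize columns
        Anew = Anew ./ vecnorm(Anew);
        D = abs(Aold' * Anew);               % |dot products| <= 1 (Cauchy-Schwarz)
        [bestVal, bestOld] = max(D, [], 1);  % best old match for each new column
        matched = bestVal >= thresh;
        newIdx     = find(~matched);         % new concepts: no match above threshold
        overlapNew = find(matched);          % overlapping columns in the new batch
        overlapOld = bestOld(matched);       % their counterparts in the old factors
    end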

SeekAndDestroy: This is our overall proposed algorithm, which detects concept drift between two consecutive tensor batch decompositions, as illustrated in Algorithm 1, and updates the decomposition in a fashion robust to the drift. SeekAndDestroy takes the factor matrices ($\mathbf{A}_{old}$, $\mathbf{B}_{old}$, $\mathbf{C}_{old}$) of the previous tensor batch (say at time $t$), the running rank at $t$ (runningRank), and the new tensor batch $\underline{\mathbf{X}}_{new}$ (say at time $t+1$) as inputs. Subsequently, SeekAndDestroy computes the tensor rank for the batch (batchRank) using AutoTen. Using the estimated batchRank, SeekAndDestroy computes the CP decomposition of $\underline{\mathbf{X}}_{new}$, which returns factor matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$. We normalize their columns to unit norm and store the normalized matrices in normMatA, normMatB, and normMatC, as shown in lines 3-4 of Algorithm 1. Both $\mathbf{A}_{old}$ and normMatA are passed to the concept-overlap detection function (Algorithm 2) described above, which returns the indices of new concepts and the indices of overlapping concepts in both matrices. Those indices inform SeekAndDestroy where to append the overlapping concepts while updating the factor matrices. If there are new concepts, we update the $\mathbf{A}$ and $\mathbf{B}$ factor matrices simply by appending the new columns from the normalized factor matrices of $\underline{\mathbf{X}}_{new}$, as shown in lines 9-10 of Algorithm 1. Furthermore, we update the running rank by adding the number of new concepts discovered to the previous running rank. If there are only overlapping concepts and no new concepts, the $\mathbf{A}$ and $\mathbf{B}$ factor matrices do not change.

0:  Tensor $\underline{\mathbf{X}}_{new}$ of size $I \times J \times K_{new}$; factor matrices $\mathbf{A}_{old}$, $\mathbf{B}_{old}$, $\mathbf{C}_{old}$ of size $I \times runningRank$, $J \times runningRank$, and $K_{old} \times runningRank$ respectively; runningRank; mode.
0:  Factor matrices $\mathbf{A}_{up}$, $\mathbf{B}_{up}$, $\mathbf{C}_{up}$ of size $I \times runningRank'$, $J \times runningRank'$, and $(K_{old}+K_{new}) \times runningRank'$; $\lambda$; updated runningRank'.
1:  batchRank ← AutoTen($\underline{\mathbf{X}}_{new}$)
2:  [$\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$] ← CP($\underline{\mathbf{X}}_{new}$, batchRank)
3:  normMatA, normMatB, normMatC ← compute column normalization of $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$
4:  Absorb the column weights and normalize normMatA, normMatB, normMatC
5:  $\lambda_{new}$ ← element-wise product of the absorbed column weights
6:  [newConcept, conceptOverlap, overlapConceptOld] ← FindConceptOverlap($\mathbf{A}_{old}$, normMatA)
7:  if newConcept then
8:     runningRank' ← runningRank + number of new concepts
9:     $\mathbf{A}_{up}$ ← [$\mathbf{A}_{old}$, normMatA(:, newConcept)]
10:    $\mathbf{B}_{up}$ ← [$\mathbf{B}_{old}$, normMatB(:, newConcept)]
11:    $\mathbf{C}_{up}$, $\lambda$ ← update depending on the New Concept, Concept Overlap, overlapConceptOld indices and runningRank'
12: else
13:    $\mathbf{A}_{up}$ ← $\mathbf{A}_{old}$
14:    $\mathbf{B}_{up}$ ← $\mathbf{B}_{old}$
15:    $\mathbf{C}_{up}$, $\lambda$ ← update depending on the Concept Overlap, overlapConceptOld indices and runningRank
16: end if
17: Update $\lambda$ depending on the New Concept and Concept Overlap indices
18: if newConcept or missingConcept then
19:    Concept Drift Detected
20: end if
Algorithm 1 SeekAndDestroy for Detecting & Alleviating Concept Drift
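A sketch of the update in lines 9-10 of Algorithm 1, using the index vectors produced by the matching step (variable names as in our earlier sketch):

    % Overlapping concepts keep their existing columns; new-concept columns
    % from the normalized batch factors are appended, growing the running rank.
    Aup = [Aold, normMatA(:, newIdx)];
    Bup = [Bold, normMatB(:, newIdx)];
    runningRank = runningRank + numel(newIdx);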

Updating Factor Matrix C: In this paper, for simplicity of exposition, we focus on streaming data that grow in only one mode. However, our proposed method readily generalizes to cases where more than one mode grows over time.

In order to update the "evolving" factor matrix ($\mathbf{C}$ in our case), we use a different technique from the one used to update $\mathbf{A}$ and $\mathbf{B}$. If there is a new concept discovered in normMatC, then

$\mathbf{C}_{up} = \begin{bmatrix} \mathbf{C}_{old} & \mathbf{0} \\ & \mathbf{C}' \end{bmatrix}$

where $\mathbf{C}_{old}$ is of size $K_{old} \times runningRank$, $\mathbf{0}$ is of size $K_{old} \times (\text{number of new concepts})$, and $\mathbf{C}'$ is of size $K_{new} \times runningRank'$, holding the columns of normMatC placed at the positions of their matched concepts.
If there are only overlapping concepts, we update $\mathbf{C}$ accordingly by appending the matched columns of normMatC below the corresponding columns of $\mathbf{C}_{old}$; in this case $\mathbf{C}_{up}$ is again of size $(K_{old}+K_{new}) \times runningRank$.
If there are missing concepts, we append an all-zeros matrix (column vector) at those indices.
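A sketch of this C update, under the same assumed variable names; rows of past batches are zero-padded for new concepts, and missing concepts implicitly receive zero rows for the current batch:

    % Grow the temporal factor: zero-pad old rows for new concepts, then
    % scatter the batch's rows into matched columns (zeros where missing).
    Kold = size(Cold, 1);
    Cup = [Cold, zeros(Kold, numel(newIdx))];            % K_old x runningRank'
    rowNew = zeros(size(CnewNorm, 1), size(Cup, 2));     % K_new x runningRank'
    rowNew(:, overlapOld) = CnewNorm(:, overlapNew);     % shared concepts
    rowNew(:, end-numel(newIdx)+1:end) = CnewNorm(:, newIdx);  % new concepts
    Cup = [Cup; rowNew];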

The scaling factor $\lambda$: When we reconstruct the tensor from the updated (normalized) factor matrices, we need a way to re-scale the columns of those factor matrices. In our approach, we compute the element-wise product of the norms of the columns of the factor matrices ($\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$) of $\underline{\mathbf{X}}_{new}$, as shown in line 5 of Algorithm 1. We use the same technique as the one used for updating the $\mathbf{C}$ matrix in order to match the values between two consecutive intervals, and we add this value to the previously accumulated values; for a missing concept, we simply add zero. While reconstructing the tensor, we take the average of the $\lambda$ vector over the number of batches received, and we re-scale the components as

$\underline{\mathbf{X}} \approx \sum_{r=1}^{runningRank} \bar{\lambda}_r \, \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r$.
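A sketch of this bookkeeping, with the same assumed names; here lambdaNew holds the element-wise products of the column norms of the batch factors:

    % Accumulate per-concept weights across batches; zeros stand in for
    % concepts missing from the current batch. Average at reconstruction time.
    lambdaSum = [lambdaSum, zeros(1, numel(newIdx))];    % grow to runningRank'
    contrib = zeros(1, numel(lambdaSum));
    contrib(overlapOld) = lambdaNew(overlapNew);
    contrib(end-numel(newIdx)+1:end) = lambdaNew(newIdx);
    lambdaSum = lambdaSum + contrib;
    lambdaAvg = lambdaSum / numBatchesSeen;              % used when re-scaling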

0:  Factor matrices $\mathbf{A}_{old}$, $\mathbf{A}_{new}$ of size $I \times R$ (i.e., $I \times runningRank$) and $I \times batchRank$ respectively, both column-normalized; matching threshold.
0:  newConcept, conceptOverlap, overlapConceptOld
1:  Compare $R$ with batchRank
2:  if $R = batchRank$ then
3:     Generate all the permutations of [1:R]
4:     for each permutation, compute the dot products of the corresponding columns of $\mathbf{A}_{old}$ and $\mathbf{A}_{new}$
5:  else if $R > batchRank$ then
6:     Generate all the permutations (1:R, batchRank)
7:     for each permutation, compute the dot products of the corresponding columns
8:  else if $R < batchRank$ then
9:     Generate all the permutations (1:batchRank, R)
10:    for each permutation, compute the dot products of the corresponding columns
11: end if
12: Select the best permutation based on the maximum sum of dot products.
13: If the dot product value of a column is less than the threshold, it is a New Concept.
14: If the dot product value of a column is more than the threshold, it is a Concept Overlap.
15: Return the column indices of New Concepts and Concept Overlaps for both matrices.
Algorithm 2 Find Concept Overlap

4 Experimental Evaluation

We evaluate our algorithm on the following criteria:
Q1: Approximation quality: We compare SeekAndDestroy’s approximation accuracy against state-of-the-art streaming baselines, in data that we generate synthetically so that we observe different instances of concept drift. In cases where SeekAndDestroy outperforms the baselines, we argue that this is due to the detection and alleviation of concept drift.
Q2: Concept Drift detection accuracy: We evaluate how effectively SeekAndDestroy is able to detect concept drift in synthetic cases, where we control the drift patterns.
Q3: Sensitivity analysis: As shown in Section 3, SeekAndDestroy expects the matching threshold as a user input. Furthermore, its performance may depend on the selection of the batch size. Here, we experimentally evaluate SeekAndDestroy’s sensitivity along those axes.
Q4: Effectiveness on real data: In addition to measuring SeekAndDestroy’s performance in real data, we also evaluate its ability to identify useful and interpretable latent concepts in real data, which elude other streaming baselines.

4.1 Experimental Setup

We implemented our algorithm in MATLAB using the Tensor Toolbox [2], and we evaluate it on both synthetic and real data. We use AutoTen [12], a method available in the literature, to find the rank of each incoming batch.

In order to have full control over the drift phenomena, we generate synthetic tensors with a different rank for every tensor batch; we control the batch rank of the tensor through the factor matrix $\mathbf{C}$. Table 2 shows the specifications of the datasets created. For instance, dataset SDS2 has an initial tensor batch of rank 2 and a last tensor batch of rank 10 (the full rank); the batches in between can have any rank between the initial and the final rank (in this case, 2-10). The reason we assign the final batch the full rank is to make sure the tensor created is not rank deficient. We make the synthetic tensor generator available as part of our code release.
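In the same spirit, a hedged sketch of such a generator (ours, not the released one) controls per-batch rank by activating only a subset of the columns of C in each batch:

    % Sketch: per-batch rank is set by which columns (concepts) of C are
    % non-zero in that batch; the final batch is forced to full rank.
    I = 100; J = 100; K = 100; fullRank = 5; batchSize = 10;
    A = randn(I, fullRank); B = randn(J, fullRank); C = zeros(K, fullRank);
    numBatches = K / batchSize;
    for b = 1:numBatches
        rows = (b-1)*batchSize + 1 : b*batchSize;
        if b == numBatches
            active = 1:fullRank;                          % avoid rank deficiency
        else
            active = randperm(fullRank, randi([2 fullRank]));
        end
        C(rows, active) = rand(batchSize, numel(active));
    end
    X = zeros(I, J, K);
    for k = 1:K
        X(:, :, k) = A * diag(C(k, :)) * B';              % CP model, frontal slice
    end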

DataSet   Dimension         Initial Rank   Full Rank   Batch Size   Matching Threshold
SDS1      100 × 100 × 100   2              5           10           0.6
SDS2      100 × 100 × 100   2              10          10           0.6
SDS3      300 × 300 × 300   2              5           50           0.6
SDS4      300 × 300 × 300   2              10          50           0.6
SDS5      500 × 500 × 500   2              5           100          0.6
SDS6      500 × 500 × 500   2              10          100          0.6
Table 2: Table of datasets analyzed

In order to obtain robust estimates of performance, we require all experiments to either 1) run for 1000 iterations, or 2) run until the standard deviation converges to a second significant digit, whichever occurs first. For all reported results, we use the median and the standard deviation.

4.2 Evaluation Metrics

We evaluate SeekAndDestroy and the baseline methods using the relative error, which measures the quality of the computed tensor with respect to the original tensor and is defined as follows:

$\text{Relative Error} = \dfrac{\|\underline{\mathbf{X}}_{original} - \underline{\mathbf{X}}_{computed}\|_F}{\|\underline{\mathbf{X}}_{original}\|_F}$   (6)

The lower the relative error, the better the approximation of the tensor.
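On dense MATLAB arrays, Equation (6) amounts to a one-liner; here Xhat denotes the reconstruction from the updated factors (an assumed variable):

    % Relative error of Equation (6): Frobenius norm via vectorization,
    % where, e.g., Xhat(:,:,k) = Aup * diag(lambdaAvg .* Cup(k,:)) * Bup'.
    relativeError = norm(X(:) - Xhat(:)) / norm(X(:));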

4.3 Baselines for Comparison

To evaluate our method, we compare SeekAndDestroy with two state-of-the-art streaming baselines: OnlineCP [16] and SamBaTen [6]. Both baselines assume that the rank remains fixed throughout the entire stream. When we evaluate the approximation accuracy of the baselines, we run two different versions of each method, with different input ranks: 1) Initial Rank, which is the rank of the initial batch, same as the one that SeekAndDestroy uses, and 2) Full Rank, which is the “oracle” rank of the full tensor, if we assume we could compute that in the beginning of the stream. Clearly, Full Rank offers a great advantage to the baselines since it provides information from the future.

4.4 Q1: Approximation quality

The first dimension that we evaluate is the approximation quality. More specifically, we evaluate whether SeekAndDestroy is able to achieve a good approximation of the original tensor (in the form of low error) in cases where concept drift occurs in the stream. Table 3 contains the general results of SeekAndDestroy's accuracy, as compared to the baselines. We observe that SeekAndDestroy outperforms the two baselines in the pragmatic scenario where they are given the same starting rank as SeekAndDestroy (Initial Rank). In the "oracle" case, OnlineCP performs better than SamBaTen and SeekAndDestroy; however, this case is not realistic, and it can be seen as a very advantageous lower bound on the error for OnlineCP.

DataSet   OnlineCP (Initial Rank)   OnlineCP (Full Rank)   SamBaTen (Initial Rank)   SamBaTen (Full Rank)   SeekAndDestroy
SDS1      0.2782 ± 0.0221           0.197 ± 0.086          0.261 ± 0.048             0.317 ± 0.058          0.283 ± 0.075
SDS2      0.2537 ± 0.0125           0.168 ± 0.507          0.244 ± 0.028             0.480 ± 0.051          0.253 ± 0.0412
SDS3      0.2731 ± 0.0207           0.205 ± 0.164          0.385 ± 0.021             0.445 ± 0.164          0.266 ± 0.081
SDS4      0.245 ± 0.013             0.171 ± 0.537          0.299 ± 0.045             0.402 ± 0.049          0.221 ± 0.0423
SDS5      0.2719 ± 0.0198           0.206 ± 0.022          0.559 ± 0.046             0.519 ± 0.0219         0.256 ± 0.105
SDS6      0.238 ± 0.013             0.171 ± 0.374          0.510 ± 0.036             0.547 ± 0.0276         0.208 ± 0.0433
Table 3: Approximation error for SeekAndDestroy and the baselines. SeekAndDestroy outperforms the baselines in the realistic case where all methods start with the same rank.

Through extensive experimentation we made the following interesting observation: in cases where most of the concepts in the stream appear at the beginning of the stream (e.g., in batches 2 and 3), SeekAndDestroy was able to further outperform the baselines. This is due to the fact that, if SeekAndDestroy has already "seen" most of the possible concepts early on in the stream, it is more likely to correctly match concepts in later batches, since there already exists an almost-complete set of concepts to compare against. Indicatively, in such a case SeekAndDestroy achieved a markedly lower approximation error than OnlineCP.

4.5 Q2: Concept drift detection accuracy

The second dimension along which we evaluate SeekAndDestroy is its ability to successfully detect concept drift. Figure 4 shows the rank discovered by SeekAndDestroy at every point of the stream, plotted against the actual rank. We observe that SeekAndDestroy is able to successfully identify changes in rank, which, as we have already argued, signify concept drift. Furthermore, Table 4(b) shows three example runs that demonstrate the concept drift detection accuracy.

Figure 4: SeekAndDestroy is able to successfully detect concept drift, which is manifested as changes in the rank throughout the stream.

4.6 Q3: Sensitivity analysis

The results we have presented so far for SeekAndDestroy have used a matching threshold of 0.6. This threshold was chosen because it is intuitively larger than a 50% match, which is a reasonable matching requirement. In this experiment, we investigate the sensitivity of SeekAndDestroy to the matching threshold parameter. Table 4(a) shows exemplary approximation errors for thresholds of 0.4, 0.6, and 0.8. We observe that 1) the choice of threshold is fairly robust for values around 50%, and 2) the higher the threshold, the better the approximation, with a threshold of 0.8 achieving the best performance.

Threshold   SDS2            SDS4
0.4         0.253 ± 0.041   0.221 ± 0.042
0.6         0.253 ± 0.041   0.221 ± 0.042
0.8         0.101 ± 0.040   0.033 ± 0.011

Running Rank   Actual Rank             Predicted Rank          Error (Actual Rank)   Error (Predicted Rank)
6              [2,4,3,4,3,3,5,3,3,5]   [2,4,3,4,3,3,5,3,3,6]   0.185                 0.194
6              [2,4,3,4,3,3,5,3,3,5]   [2,4,3,4,3,3,5,3,3,6]   0.185                 0.197
7              [2,4,3,4,3,3,5,3,3,5]   [2,4,3,5,3,3,6,3,3,6]   0.185                 0.278

Table 4: (a) Approximation error of the incoming batch for different matching threshold values. Datasets SDS2 and SDS4 are of dimension 100 × 100 × 100 and 300 × 300 × 300, respectively. The threshold is fairly robust around 0.5, and a threshold of 0.8 achieves the highest accuracy. (b) Experimental results on SDS1 for the approximation error of incoming slices with known (actual) and predicted rank.

4.7 Q4: Effectiveness on real data

To evaluate the effectiveness of our method on real data, we use the Enron time-evolving communication graph dataset [1]. Our hypothesis is that, in such complex real data, there exists concept drift in the streaming tensor decomposition. In order to validate this hypothesis, we compare the approximation error incurred by SeekAndDestroy against that of the baselines, shown in Table 5. We observe that the approximation error of SeekAndDestroy is lower than that of both baselines. Since the main difference between SeekAndDestroy and the baselines is that SeekAndDestroy takes concept drift into consideration and strives to alleviate its effects, this result 1) provides further evidence that there exists concept drift in the Enron data, and 2) demonstrates SeekAndDestroy's effectiveness on real data.

The final rank for Enron as computed by SeekAndDestroy was 7, indicating the existence of 7 time-evolving communities in the dataset. This number of communities is higher than what previous tensor-based analysis has uncovered [1, 5]. However, analyzing the (static) graph using a highly-cited community detection method [4], we were also able to detect 7 communities; therefore, SeekAndDestroy may be discovering subtle communities that have eluded previous tensor analysis. In order to verify that, we delved deeper into the communities and plotted their temporal evolution (taken from factor matrix $\mathbf{C}$) along with their annotations (obtained by inspecting the top-5 senders and receivers within each community). Indeed, a subset of the communities discovered matches the ones already known in the literature [1, 5]. Additionally, SeekAndDestroy was able to discover community #3, which refers to a group of executives, including the CEO. This community appears to be active up until the point that the CEO transition begins, after which it dies out. This behavior is indicative of concept drift, and SeekAndDestroy was able to successfully discover and extract it.

Running Rank   Predicted Full Rank   Batch Size   SeekAndDestroy   SamBaTen        OnlineCP
7 ± 0.88       4 ± 0.57              22           0.68 ± 0.002     0.759 ± 0.059   0.941 ± 0.001

Table 5: Evaluation on the real dataset (approximation error)
Figure 5: Timeline of concepts discovered in Enron.

5 Related Work

In this section, we provide a review of the literature related to our method. Broadly, related work falls into the following main categories:
Tensor decomposition: Tensor decomposition techniques are widely used for static data. With the explosion of big data, data grows at a rapid pace, and extensive study is required on the online tensor decomposition problem. Nion and Sidiropoulos [11] introduced two well-known PARAFAC-based methods, namely RLST (recursive least squares) and SDT (simultaneous diagonalization tracking), to address online 3-mode tensor decomposition. Zhou et al. [16] proposed OnlineCP for accelerating online factorization, which can track the decompositions when new updates arrive for N-mode tensors. Gujral et al. [6] proposed SamBaTen, a sampling-based batch incremental tensor decomposition algorithm, which updates the online computation of CP and performs all computations in a reduced summary space. However, none of these methods is directly applicable to concept drift settings.
Concept drift: The survey paper [14] provides qualitative definitions for characterizing drift in data stream models. The authors formally define drift based on the subject, frequency, transition, re-occurrence, and magnitude of the data. Furthermore, they evaluate various scenarios for supervised and unsupervised pure class and variance drift with given magnitude, with very promising results. To the best of our knowledge, ours is the first work to discuss concept drift in tensor decomposition.

6 Conclusions

In this paper we defined "concept" and "concept drift" in the context of streaming tensors, and we provided an algorithm, SeekAndDestroy, which detects drift and alleviates it without making any assumptions about the rank of the tensor. We demonstrated the effectiveness of our algorithm against other state-of-the-art methods, outperforming them when the rank of the tensor is unknown. Furthermore, we demonstrated SeekAndDestroy's effectiveness in detecting concept drift. Finally, we applied SeekAndDestroy to a real time-evolving dataset, discovering novel drifting concepts.

References

  • [1] Bader, B., Harshman, R., Kolda, T.: Analysis of latent relationships in semantic graphs using DEDICOM. In: Workshop for Algorithms on Modern Massive Data Sets (2006)
  • [2] Bader, B.W., Kolda, T.G., et al.: MATLAB Tensor Toolbox version 2.6. Available online (February 2015), http://www.sandia.gov/~tgkolda/TensorToolbox/
  • [3] Bifet, A., Gama, J., Pechenizkiy, M., Zliobaite, I.: Handling concept drift: Importance, challenges and solutions. PAKDD-2011 Tutorial, Shenzhen, China (2011)
  • [4] Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment 2008(10), P10008 (2008)
  • [5] Papalexakis, E.E., Faloutsos, C., Sidiropoulos, N.D.: ParCube: Sparse parallelizable tensor decompositions. In: ECML-PKDD (2012)
  • [6] Gujral, E., Pasricha, R., Papalexakis, E.E.: SamBaTen: Sampling-based batch incremental tensor decomposition. arXiv preprint arXiv:1709.00668 (2017)
  • [7] Harshman, R.: Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis (1970)
  • [8] Håstad, J.: Tensor rank is NP-complete. Journal of Algorithms 11(4), 644–654 (1990)
  • [9] Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Review 51(3), 455–500 (2009)
  • [10] Mørup, M., Hansen, L.K.: Automatic relevance determination for multi-way models. Journal of Chemometrics 23(7-8), 352–363 (2009)
  • [11] Nion, D., Sidiropoulos, N.: Adaptive algorithms to track the PARAFAC decomposition of a third-order tensor. IEEE Transactions on Signal Processing (2009)
  • [12] Papalexakis, E.E.: Automatic unsupervised tensor mining with quality assessment. In: Proceedings of the 2016 SIAM International Conference on Data Mining. pp. 711–719. SIAM (2016)
  • [13] Papalexakis, E.E., Faloutsos, C., Sidiropoulos, N.D.: Tensors for data mining and data fusion: Models, applications, and scalable algorithms. ACM Transactions on Intelligent Systems and Technology (TIST) 8(2), 16 (2017)
  • [14] Webb, G.I., Hyde, R., Cao, H., Nguyen, H.L., Petitjean, F.: Characterizing concept drift. Data Mining and Knowledge Discovery 30(4), 964–994 (Jul 2016)
  • [15] Webb, G.I., Lee, L.K., Petitjean, F., Goethals, B.: Understanding concept drift. CoRR abs/1704.00362 (2017), http://arxiv.org/abs/1704.00362
  • [16] Zhou, S., Vinh, N.X., Bailey, J., Jia, Y., Davidson, I.: Accelerating online CP decompositions for higher order tensors. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1375–1384. ACM (2016)