Online Social Media Recommendation over Streams

This paper appears at ICDE 2019.

Xiangmin Zhou, Dong Qin, Xiaolu Lu, Lei Chen, Yanchun Zhang
 RMIT University, Melbourne, Australia
 {xiangmin.zhou,dong.qin,xiaolu.lu}@rmit.edu.au
 Hong Kong University of Science and Technology, Hong Kong, China
 leichen@cse.ust.hk
 Victoria University, Melbourne, Australia
 yanchun.zhang@vu.edu.au
Abstract

As one of the most popular services over online communities, social recommendation has attracted increasing research efforts recently. Among all the recommendation tasks, an important one is social item recommendation over high-speed social media streams. Existing streaming recommendation techniques are not effective for handling social users with diverse interests. Meanwhile, approaches for recommending items to a particular user are not efficient when applied to a huge number of users over high-speed streams. In this paper, we propose a novel framework for social recommendation over streaming environments. Specifically, we first propose a novel Bi-Layer Hidden Markov Model (BiHMM) that adaptively captures the behaviors of social users and their interactions with influential official accounts to predict their long-term and short-term interests. Then, we design a new probabilistic entity matching scheme for effectively identifying the relevance score of a streaming item to a user. Following that, we propose a novel indexing scheme called CPPse-index for improving the efficiency of our solution. Extensive experiments are conducted to demonstrate the high performance of our approach in terms of recommendation quality and time cost.

User interests, Bi-Layer HMM, Social stream.

I Introduction

With the explosive growth of online service platforms, an increasing number of people and enterprises are undertaking personal and professional tasks online. Recent statistics show there are now 15 million active Australians on Facebook, which is 60% of the Australian population 3. The digital universe is doubling in size every two years, and by 2020 the data users create and copy annually will reach 44 trillion gigabytes 1. In order for organizations, governments, and individuals to understand their users and promote their products or services, it is necessary for them to analyse these social data and recommend media or online services in real time. A large volume of social media proliferates in the form of streams, which has raised the demand for online media stream recommendation. Recommending streaming items over social communities is very important for many applications such as entertainment, online product promotion, and news broadcasting. For instance, fans can enjoy their idols' performances as soon as they are available online by continuously receiving recommendations from the system over dynamically changing social networks such as YouTube. An online company may accelerate the propagation of its digital commercials to potential customers via stream recommender systems to boost the sales of its products. For news broadcasting, users can be notified in time of what is happening moment by moment, and take prompt action in crises. Practically, these applications are time-critical, which demands the development of efficient stream recommendation approaches.

We study the problem of continuous recommendation over social communities. Given a new incoming social item and a relevance function on social items and users, we aim to deliver the item to the top users with the highest relevance scores. For example, a clip on a new KFC dessert can be broadcast to the top interested users immediately after uploading, which directly increases product purchases and brand recall. For stream recommendation, three key issues need to be addressed. First, we need to construct a robust model that effectively predicts the short-term and long-term interests of different social users. While users' long-term interests remain relatively stable, their short-term interests can change rapidly due to frequent social activities. Users' behaviors can be affected by their previous activities as well as by the media producers they interact with. For instance, a user interested in football games may become interested in music after watching a broadcast from a producer on the family of David Beckham and Victoria Beckham. A good model should capture users' temporal involvement over their own activities and their media producers to reflect users' current preferences for high-quality recommendation. Second, we need to design a novel solution for matching streaming items with social users. As a large number of near-duplicate items may appear in media streams, it is unreasonable to recommend them to a target user repeatedly. For example, John watched a video of Rafael Nadal in the Australian Open 2018. He may get bored after watching Nadal's videos repeatedly. Probably, he is interested in videos on other tennis players as well, such as Roger Federer and Maria Sharapova. A good item-user matching approach should be able to recommend diverse items to an interested user. Finally, we need to design an efficient index scheme for searching for the interested users with respect to an incoming item.
According to statistics from Hootsuite 2, YouTube has more than 1.5 billion users in 2018, and the number is increasing annually. Obviously, sequentially matching each incoming item against this huge number of users is infeasible for efficient recommendation.

Based on the evaluation objectives of recommendation, previous social recommendation approaches can be classified into two categories, relevance-based 40, 41, 5, 20, 33 and diversity-based 25, 14, 19, 7, 36. Relevance-based approaches identify the most similar items matched with a user's predefined profile based on the present content and context features, producing a list of items relevant to the ones viewed by this user in the past. With these approaches, near-duplicate social items can be repeatedly recommended to a certain user. Diversity-based approaches aim at mining a broad range of items that belong to categories as diverse as possible while remaining interesting to the target user. However, existing diversity-based approaches treat user preferences as static, which ignores the temporal evolution of social users' preferences. Recent recommendation approaches have been proposed to capture user preferences over streams 21, 8, 9, 17. They mainly focus on extending traditional recommendation techniques such as matrix factorization 15 to streaming environments by applying them to media data with the support of efficient stream processing. These approaches can efficiently conduct stream recommendation as they do not need to consider the whole user viewing history, but they ignore the long-term interests of users and the requirement of broad item coverage. However, long-term interests reflect users' inherent characters and their stable preferences over life, which greatly affect users' behaviors in social activities. For example, John regularly enjoys movies online after work. Recently, affected by the war in Syria, John has browsed some videos related to this war. However, when the war ends, John would get back to his regular activity of watching movies in his spare time, and would still hope to receive recommendations on movies.
Meanwhile, redundant items are added to user profiles, which hinders the representation ability and visibility of their preferences.

In this paper, we propose a graphical model-based framework for effective and efficient social item recommendation over streams. Specifically, we first propose a novel Bi-Layer Hidden Markov Model (BiHMM) to capture each user's media browsing history and interest patterns over a set of media producers for predicting the next media category of interest. To measure the relevance between a user and an item, we design an entity-based item-user ranking function, which considers the short-term and long-term interests, and the diversity of the recommended items. Finally, we generate recommendations over streams based on the relevance between an incoming item and each user, and accelerate this process using a novel signature-tree-based index scheme called CPPse-index. The main contributions of this work are summarized as follows:

  • We propose a novel graphical model called Bi-Layer Hidden Markov Model (BiHMM) to predict users’ long-term and short-term interests. BiHMM well captures users’ interest dependency over various media producers.

  • We propose a novel item-user matching scheme that embeds the users’ long-term and short-term interests, and the item descriptions over their expanded entities. The new matching scheme takes into account the diversity issue of recommendation.

  • We design a new CPPse-index scheme to improve the recommendation efficiency, which is guaranteed by a novel upper-bound-based candidate pruning. The test results prove the effectiveness and efficiency of our approach.

The remainder of this paper is organised as follows. Section II reviews the related work on streaming recommendation. Section III formulates our social media recommendation over streams. Section IV presents our BiHMM model for user interest prediction and our proposed matching scheme between items in a media stream and social users, followed by our index scheme in Section V. We report the experimental evaluation results in Section VI, and conclude the whole work in Section VII.

II Related Work

We review existing literature on two topics closely related to our work, including the recommendation over streams and the diversity-based recommendation.

II-A Recommendation over streams

Approaches have been proposed for recommendation over social streams 8, 11, 42, 10, 21, 16, 28, 9. Most stream recommender systems focus on adapting traditional approaches to stream settings. Chandramouli et al. 8 designed the StreamRec system, where the user-item interaction matrix for collaborative filtering is only updated when a subscription list is changed, which greatly reduces the time cost. Matrix factorization (MF) is the most popular technique in CF-based recommendation. However, it cannot be directly applied to stream-based recommendation due to the high cost of computing the stochastic gradient descent (SGD). To solve the problem, Zhuang et al. 42 proposed a parallel SGD which greatly speeds up the SGD calculation. Diaz-Aviles et al. 11 consider collaborative filtering as an online ranking problem and present Stream Ranking Matrix Factorization (RMFX) for optimizing the personalized ranking of topics. Chen et al. 10 model users and items using competitive matrix factorization for temporal stream recommendation. Lommatzsch and Albayrak 21 apply traditional collaborative filtering to the user interaction patterns within the recent time window. However, this technique is only applicable for items with strong temporal patterns, such as news articles. Huang et al. 16 conduct collaborative filtering over Apache Storm, which achieves high efficiency in stream recommendation. Subbian et al. 28 proposed a probabilistic neighbourhood-based algorithm for performing recommendations in real time. The similarity between a given item and each of all other items is computed. The rating of a user for a particular item is predicted by calculating the weighted average of the ratings of its most similar items in this user's profile. Chang et al. 9 model the user-item relationship with temporal dynamics, incorporating both hidden topic evolution and new user/item introduction.
These collaborative filtering-based approaches rely heavily on user ratings, which are not reliable over streams; thus the effectiveness of recommendation cannot be guaranteed. In this work, we aim to solve the stream recommendation problem by predicting users' long-term and short-term interests, constructing robust user models over them, and generating the recommendation results by matching user profiles against each incoming item.

II-B Diversity-based recommendation

Traditional diversity-based recommender systems exploit the item-item relationship for achieving as diverse results as possible. Typical diversity-based recommendation can be classified into two categories: (1) recommendation candidate re-ranking-based 37, 30, 14, 19; and (2) candidate filtering-based. Recommendation candidate re-ranking-based methods generate a list of recommendation candidates re-ranked based on the similarity between each other, such that diverse results appear at the top ranked positions. Zhang et al. 37 maintain a list of recommendation candidates that is updated iteratively based on the PageRank scores of the candidates and the new items in the data collection. Tong et al. 30 and He et al. 14 use greedy algorithms to select diverse items such that the distance between the currently selected item and its predecessor is maximized. Hurley 19 ranks the items based on each of their attributes, and the overall ranks of these items are obtained by integrating the weighted pairwise rank difference. The core of re-ranking-based methods is to adjust the order of the resulting list. Thus the diversity of recommendation is limited to a small scope.

Candidate filtering-based approaches directly exclude items in the data collection that are close to those already in the resulting list during recommendation generation, to achieve high diversity of the recommendation. In 35, diversity is introduced by measuring the dissimilarity between items and the preference of the target user with respect to each item, selecting items that are far from each other but well match the user's preferences. In 18, the trade-off between diversity and matching quality is formulated as a binary optimization problem, and the diversity level can be explicitly tuned. In 25, the recommendation is treated as a multi-objective problem that combines several recommendation methods in a way that maximizes diversity. Puthiya Parambath et al. 23 represent the items as a similarity graph, and conduct recommendation by finding a small set of unrated items that best covers a subset of items positively rated by the user. These approaches do not consider the diversity within items themselves, which could provide more candidates in recommendation generation. The notations used in this paper are listed in Table I.

Notation Definition
Pre-defined categories of social items.
The activity pattern of a social user
Producer, the user that creates the item.
Consumer, the user who browsed the item.
A set of extracted entities.
A social item.
The long-term interest list of .
The short-term interest window of .
TABLE I: Notation Table.

III Framework of Our Solution

Fig. 1: Framework of the stream recommendation

In this work, we propose a social stream and item stream Recommendation framework (ssRec), as shown in Fig. 1. Our framework includes two major components, the user interest prediction and the user-item matching. In addition, we design the CPPse-index to optimize the efficiency. The user interest prediction component predicts users' interests based on the Bi-Layer HMM (BiHMM) model, as shown in Fig. 1(a). The user-item matching provides a ranking function between a stream item and a social user based on the predicted interests. Given a stream item, we encode it as an item vector as shown in Fig. 1(b), which is further used for matching the item against user profiles. We propose a novel index structure, CPPse-index, to facilitate the recommendation process as shown in Fig. 1(c). Users with the same interest are grouped together. We output the top ranked users by searching the CPPse-index. We will detail these modules in Sections IV-V.

IV Bi-Layer HMM-Based Recommendation Model

We will present our Bi-Layer HMM (BiHMM) that predicts the category which a user may browse immediately after the current time, and a probability-based item-user ranking.

Fig. 2: The application scenario, where a user browses content created by BBC News, which is a producer.

IV-A The Bi-Layer HMM Model

An important feature of social platforms is the user engagement. Users can create new social items instead of consuming media only. Thus, in this work, we consider a user in two modes: (i) the producer and (ii) the consumer:

Definition 1.

A user creating social items is a producer (), and a user browsing social items is a consumer ().

Note that a user can be either a producer or a consumer. Users who are only in producer mode like BBC News are regarded as data sources, and do not receive any recommendations.

Consider a real scenario shown in Fig. 2. A user's behavioral trajectory may follow the categorical pattern "music, sports and military". Such a pattern may also exist in the social item creation process. For example, BBC News may create social items following the categorical pattern "military, world and politics". Assuming a consumer's behavior is independent of the producer may be too strong an assumption for real production systems. As shown in the example, when a bursting event happens and is captured by a producer that a user is following, the regular behavioral trajectory of the user is highly likely to be interrupted. To capture these dependencies, we propose a Bi-Layer HMM as shown in Fig. 3.

Unlike a single-layer HMM that considers consumers' behavior only, our model has two layers: a-HMM and b-HMM. The a-HMM layer captures the patterns of the set of producers that a consumer is interested in, while the b-HMM layer models the consumer's browsing trajectories. Each dashed box represents one producer, circles are the current hidden states, and the gray rectangles represent observed behaviors. We use arrows to show the relation between two states: if there is an arrow from one state to another, the latter is decided by the former. Let one variable be the hidden state of a producer at a given time and another be the hidden state of a consumer at that time. As a user's next state may be correlated with any of the producer states, the hidden states in both layers are dependent. For example, if a consumer browsed an item in a category under a given state, and the item is created by a producer under its own hidden state, then under our model setting, the next state of the consumer is decided by both that producer state and the consumer's current state. We discuss the elements of the BiHMM model below.

Fig. 3: The BiHMM model. is the -th Hidden state of the influential user , is -th Hidden state of the producer , and is the item category.

The a-HMM Layer for Modelling Producers.  We first build the a-HMM layer to model users that create social items. Assume that the activity of a user creating a social item is independent of other users. Then, we can apply the classic HMM technique to model the social item creation process for all producers. In the modelling process, three components need to be estimated: (i) the hidden states , where is the number of hidden states; (ii) the state transition probability matrix ; and (iii) the observation matrix . Each element in is computed using , and each element in is computed using . Note that , where is the number of observations. Suppose that the initial state probability distribution is , where , and . Based on our previous analysis, the parametrization of a-HMM is . We use the Baum-Welch algorithm 32 to learn all three parameters. In the prediction, given an observed category , its associated hidden state is obtained using the Viterbi algorithm 12.
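To make the decoding step concrete, the following is a minimal sketch of the Viterbi algorithm used to recover the most likely hidden-state sequence from observed categories. The toy parameters (two hidden states, three observable categories) are purely illustrative, not values learned from data:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for an observation sequence.

    pi: (N,) initial state distribution, A: (N, N) transition matrix,
    B: (N, M) emission probabilities, obs: list of observation indices.
    """
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))           # best log-prob ending in each state
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    with np.errstate(divide="ignore"):
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack from the best final state.
    states = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        states.append(int(psi[t][states[-1]]))
    return states[::-1]

# Toy a-HMM: 2 hidden producer states, 3 observable item categories.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
path = viterbi([0, 1, 2], pi, A, B)  # -> [0, 0, 1]
```

In practice the parameters pi, A, B would come from Baum-Welch training rather than being fixed by hand.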

The b-HMM Layer for Modelling Consumers.  As emphasized, the interaction of a consumer with a social item depends on both the historical trajectory of the user and that of the producers that interest this consumer. Thus, we build the b-HMM by considering both the trajectory of a user's historical activities and that of the user's interactions with different producers.

Just like a-HMM, b-HMM has three major components: hidden states, a state transition probability matrix and an observation matrix. Let be the number of hidden states in b-HMM, and the -th hidden state. Each entry in the state transition matrix is then computed as , where is the hidden state from producers. Similarly, each entry in the observation probability matrix can be obtained via . Note that is the observed social item category. Like a-HMM, the parametrized representation of b-HMM is , where is the initial state probability distribution and .

The classic parameter estimation approach for HMM cannot be directly applied to the estimation in b-HMM due to the dependency of its states on the a-HMM. Thus we reformulate the representation of b-HMM by integrating the states of the two layers in the BiHMM. Consider the next hidden state in b-HMM, which is determined by the current states of both b-HMM and a-HMM. We can denote the new state of b-HMM as the pair of these two states. Accordingly, the state transition probability matrix can be converted into . The observation probability matrix becomes after transformation. The b-HMM is represented as . Based on our new representation, we can train the b-HMM in the same way as the a-HMM.
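The reformulation above can be sketched by enumerating the product of b-HMM and a-HMM state sets, so that each (consumer state, producer state) pair becomes one flat state and the standard HMM machinery applies unchanged. The state counts below are placeholders, not the paper's trained values:

```python
from itertools import product

def build_joint_states(n_b, n_a):
    """Map each (b-HMM state, a-HMM state) pair to a flat index.

    Treating the pair as a single state lets standard Baum-Welch and
    Viterbi routines train and decode the reformulated b-HMM directly.
    """
    pairs = list(product(range(n_b), range(n_a)))
    index = {pair: i for i, pair in enumerate(pairs)}
    return pairs, index

# 3 consumer hidden states combined with 2 producer hidden states.
pairs, index = build_joint_states(3, 2)  # 6 joint states in total
```

The transition and observation matrices are then re-indexed over these joint states before training.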

With the learned b-HMM, the next observation is predicted as follows. Given a series of observations , we first predict the series of hidden states, , which have the highest probabilities of generating these observations. Then, we exploit the Viterbi algorithm 12 to predict the top- categories of interest to a user.

IV-B Modelling User Profiles and Stream Data

Stream Data Models.  In our recommendation scenario, two types of data streams should be considered: the social item data stream and the user-item interaction data stream. The social item stream is generated by high-velocity media data uploading. Let be the social item stream over a time period . We need to construct a model that well captures the items' content and contexts within the time window. Meanwhile, as items are uploaded and users interact with them, the temporally frequent user-item interactions form a user-item interaction stream. A good data model should capture the user-item interactions in a streaming mode, rather than the static ones appearing in traditional recommender systems 15, 34. In addition to the social property (the producer of the item), we also consider the item itself, and specifically, entities in the item description. Given an item , we describe it as a triplet , where c is its category, its producer, and the set of entities extracted from it. Consider a video description as below:

Australian Open 2017 Men’s Final Roger Federer vs Rafael Nadal Full Match.

We can represent it as the set of its entities {"Australian Open", "Roger Federer", "Rafael Nadal", "Match"}. Clearly, if a user's long-term interest list contains one or more of these entities multiple times in the category , then it is highly likely that the user would like the current social item . However, considering only the entities in an item may generate less diversified recommendation results. To solve this problem, we apply expansion techniques on each entity. Expansion entity sets are extracted from item descriptions based on the proximity heuristics 29. If two entities often co-occur closely in the same category, we believe they are strongly related. Given two entities, the expansion weight between them is calculated by their proximity. Since we consider all the entities in media descriptions, the location information on videos appearing as entities is embedded in the model.
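One simple way to realize proximity-based expansion weights is to count how often two entities appear near each other in item descriptions and normalize the counts. This is a sketch of the idea only; the window size and normalization are illustrative assumptions, not the paper's exact heuristic:

```python
from collections import Counter
from itertools import combinations

def expansion_weights(descriptions, window=3):
    """Co-occurrence-based expansion weights between entity pairs.

    descriptions: lists of entities per item (assumed pre-extracted).
    Two entities within `window` positions of each other count as one
    co-occurrence; weights are normalized by the maximum count.
    """
    counts = Counter()
    for ents in descriptions:
        for i, j in combinations(range(len(ents)), 2):
            if j - i <= window:
                pair = tuple(sorted((ents[i], ents[j])))
                counts[pair] += 1
    top = max(counts.values(), default=1)
    return {pair: c / top for pair, c in counts.items()}

docs = [["australian open", "roger federer", "rafael nadal"],
        ["roger federer", "rafael nadal", "wimbledon"]]
w = expansion_weights(docs)
# The Federer-Nadal pair co-occurs in both items, so it gets weight 1.0.
```

Entities that frequently co-occur with those in a user's history can then be added to the candidate set with these weights.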

User Models.  Similar to all classic recommender systems, we consider users' long-term interests, which can be inferred from users' historical interaction logs. On the other hand, due to the effect of some external events, users' interests may change over a short time period. For example, users who usually watch sports news only may start following some political news as a poll commences. Thus, we consider the short-term interests of users as well. Both long-term and short-term interests are important. If we only consider the long-term interests, the recommendation results lose recency. Conversely, considering only users' recent activities leads to user interest drift.

For each user, we construct a user profile based on the long-term interest list and short-term interest window. Instead of tracking fine-grained social items, we consider their categories only, as this is enough for us to infer users' interest patterns. The short-term interest window of a user has a fixed size and keeps his latest interaction records, while his long-term interest list includes all the remaining records in his whole browsing history. Let be the long-term interest list of a user , which is a social item sequence in temporal order. If we consider each item using a category-producer pair, then . We maintain users' short-term interests within a fixed-size recent time window in a similar way. When the short-term interest window is full, its contents will be flushed to the long-term list. As such, each user profile is modelled as a pair of category-producer sequences (CPPse), which describes the long-term and short-term user interaction patterns.
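The profile maintenance described above can be sketched as a small data structure: a bounded short-term window of (category, producer) pairs whose contents are moved to the long-term list when the window fills. The window size and flushing policy here are illustrative assumptions:

```python
from collections import deque

class CPPseProfile:
    """User profile as category-producer sequences: a fixed-size
    short-term window plus a long-term list in temporal order.
    The window size is a hypothetical parameter."""

    def __init__(self, window_size=5):
        self.window_size = window_size
        self.short_term = deque()
        self.long_term = []  # (category, producer) pairs, oldest first

    def record(self, category, producer):
        if len(self.short_term) == self.window_size:
            # Window full: flush its contents into the long-term list.
            self.long_term.extend(self.short_term)
            self.short_term.clear()
        self.short_term.append((category, producer))

p = CPPseProfile(window_size=2)
for event in [("sports", "espn"), ("music", "vevo"), ("news", "bbc")]:
    p.record(*event)
# The first two events are flushed to long_term when the third arrives.
```

Keeping only category-producer pairs, rather than full items, keeps the profile compact while still supporting interest-pattern inference.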

Iv-C Entity-Based Item-User Matching

Using our BiHMM model, we can compute the probability of a media consumer browsing a specific category . However, our ultimate goal is to identify the relevance score between a user and an item. Thus, we need to design a user-item matching scheme based on the output of BiHMM.

We calculate the relevance score of an item to a user over his long-term interests by estimating the probability of matching , denoted as , as below:

(1)

in which is the probability output by the BiHMM that describes the likelihood of a user browsing an item in category ; is the expansion weight if the entity is from the expansion set , otherwise . In our solution, we apply proximity heuristics to compute the expansion weights, considering the co-occurrences of entity pairs. Both and are estimated using Maximum Likelihood Estimation (MLE), indicating the probability of a user being interested in the item given the producer and the description respectively. For computational convenience, we consider the log-likelihood score and reformulate the long-term-based recommendation score computation as:

(2)

It is known that the entities and producer of the current item may have never appeared in the user's long-term history. In this situation, a zero probability will be given by the MLE estimation. This may hamper diversification and serendipity. To prevent the zero probability, we apply the Dirichlet smoothing technique to both producers and entities. The final recommendation score is parametrized by , integrating long-term and short-term scores:

(3)

in which is computed using Equation 1, and is the score computed using the same function but based on users’ short-term interests:

(4)

Note that for the short-term interest, we only consider the prediction probability output by the BiHMM model. This is because we only maintain a window of recent items, and MLE estimation over a few social items leads to imprecise estimates of the user interests.
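The two scoring ingredients above can be sketched in a few lines: a Dirichlet-smoothed probability estimate that avoids zeros for unseen producers or entities, and the linear interpolation of long-term and short-term scores in the spirit of Equation 3. The smoothing parameter mu and the trade-off weight lam are placeholder values, not the paper's tuned settings:

```python
def dirichlet_smoothed(count, total, prior_prob, mu=100.0):
    """Dirichlet-smoothed estimate of P(term | user history).

    count/total is the raw MLE; prior_prob is a background (collection)
    probability. Unseen producers/entities get a non-zero probability
    from the prior instead of zero. mu is a hypothetical parameter.
    """
    return (count + mu * prior_prob) / (total + mu)

def final_score(long_term, short_term, lam=0.6):
    """Interpolate long-term and short-term relevance scores;
    lam is a tunable trade-off weight, 0.6 is only a placeholder."""
    return lam * long_term + (1 - lam) * short_term

p_unseen = dirichlet_smoothed(count=0, total=50, prior_prob=0.01)
p_seen = dirichlet_smoothed(count=3, total=50, prior_prob=0.01)
score = final_score(0.8, 0.4)  # -> 0.64
```

Because p_unseen stays above zero, items whose producer or entities are new to a user can still be recommended, which is what enables serendipity.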

V Recommendation Generation Optimization

We present our recommendation generation in detail. Given an incoming social item , a collection of social users , and a relevance function , our recommendation finds a list of social users with the best relevance to , i.e., for any and , holds. To perform the recommendation, a naive method is to compute the similarity between and each of the social users. Given a set of users, this naive method requires relevance calculations, which is infeasible over high-speed streams. High-dimensional indexes, like R-tree variants 6, and B-tree based indexes 39, 38, are not designed for online processing, and are thus inapplicable to our problem either. An efficient index scheme is demanded for the online environment.
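For reference, the naive baseline that the index is designed to avoid is a linear scan that scores every user against the incoming item and keeps the top-k. The user names and scores below are fabricated purely for illustration:

```python
import heapq

def top_k_users(user_scores, k):
    """Naive linear-scan matching: score every user and keep the k best.

    user_scores: hypothetical dict mapping user -> precomputed relevance
    score for the incoming item. This costs O(|users|) per item, which
    is what makes it infeasible over high-speed streams.
    """
    return heapq.nlargest(k, user_scores.items(), key=lambda kv: kv[1])

scores = {"alice": 0.58, "bob": 0.44, "carol": 0.50}
top = top_k_users(scores, 2)  # alice and carol, in that order
```

With millions of users, this per-item scan dominates the latency budget, motivating the pruning-capable CPPse-index described next.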

V-A The CPPse-index structure

Fig. 4: CPPse-index structure. Shaded rectangles represent pointers, and at the bottom represents user profile records. Note that both IEntry (IE) and LEntry (LE) contain pointers.

We propose the CPPse-index to improve recommendation efficiency. Our index includes two core parts: (1) a chained hash table that maps each online item to its extended signature-trees; (2) a number of extended signature-trees, each of which stores user profiles over a particular category in a user block. User blocks are generated by one pass clustering 27 based on each user’s long-term categorical interests and cosine similarity. We construct an extended signature tree for each category of a user block. As such, the number of entities covered by a signature is greatly reduced, and the signature representation is highly compact, leading to a compact signature tree. Fig. 4 shows our CPPse-index structure.
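The user-blocking step can be sketched as a single-pass clustering over users' long-term categorical interest vectors: each user joins the first block whose centroid is similar enough under cosine similarity, otherwise a new block is started. The similarity threshold and centroid update rule are illustrative assumptions, not the exact settings of the cited one-pass algorithm:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a.get(k, 0.0) * v for k, v in b.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def one_pass_blocks(users, threshold=0.5):
    """Single scan: assign each user to the first sufficiently similar
    block, folding the user's vector into that block's centroid;
    otherwise open a new block. threshold is a hypothetical value."""
    blocks = []  # list of (centroid, member list)
    for uid, vec in users:
        for centroid, members in blocks:
            if cosine(centroid, vec) >= threshold:
                members.append(uid)
                for k, v in vec.items():
                    centroid[k] = centroid.get(k, 0.0) + v
                break
        else:
            blocks.append((dict(vec), [uid]))
    return [members for _, members in blocks]

users = [("u1", {"sports": 1.0}),
         ("u2", {"sports": 0.9, "news": 0.1}),
         ("u3", {"music": 1.0})]
blocks = one_pass_blocks(users)  # u1 and u2 share a block; u3 is alone
```

Grouping similar users this way is what shrinks each block's entity vocabulary and keeps the per-block signature trees compact.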

We use chained hash tables to organize the category-entity pairs due to their simplicity. Like Zhou et al. 40, we select the class of shift-add-xor string hashing functions for mapping category-entity pairs to hashcodes, considering their important properties such as uniformity, universality, applicability and efficiency 24. Let be a string of characters, a seed and an intermediate hash value after examination of characters. The components in the class of shift-add-xor are defined as:

(5)

Here, denotes the left-shift of value by bits, and the right-shift of value by bits. Given a pair of item category and entity that forms a phrase, we first generate an initial hash code using Equation 5(a), then recursively compute the intermediate hash code over its first characters using Equation 5(b), and finally obtain the modulo value of the hash code over all its characters. Given a set of category-entity names, we organize it as a chained hash table with a number of hash buckets. Each element of the hash table is a triad denoted as , where key is the hash value, sptr the set of pointers to the extended signature-trees under this category, and nextptr the pointer to the next category-entity pair with the same hash code. Given a category-entity pair, its hash bucket is located based on its hash code, and its triad is inserted into this bucket. A chained hash table is constructed by inserting the triads of all the category-entity pairs in the media set. Each category-entity pair can be covered by at most user blocks, so at most sptr are needed, where is the total number of user blocks. If an extended signature-tree does not contain this category-entity pair, the corresponding pointer will point to null.
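A shift-add-xor hash of this family can be sketched as follows; the seed, shift amounts, and bucket count are typical choices for this hash class, not necessarily the paper's exact constants:

```python
def sax_hash(key, seed=31, left=5, right=2, buckets=1 << 16):
    """Shift-add-xor string hash: per character, h <- h XOR
    ((h << L) + (h >> R) + c), then reduce modulo the bucket count.
    Seed and shift amounts are common choices, assumed here."""
    h = seed
    for ch in key:
        h ^= ((h << left) + (h >> right) + ord(ch)) & 0xFFFFFFFF
        h &= 0xFFFFFFFF  # emulate 32-bit overflow semantics
    return h % buckets

# Hash a category-entity phrase to locate its bucket in the chain table.
code = sax_hash("sports|rafael nadal")
```

Each category-entity phrase is hashed once; collisions are resolved by chaining through the nextptr field of the triad.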

The signature-tree is a high-dimensional index structure derived from the R-tree family, with more efficient querying and updating. Such improvement is obtained by generating a bitmap encoding and then performing boolean conjunction queries. However, it is not directly applicable in our setting, as we need to consider the quantification of social consumer activities while encoding the signatures. To handle this problem, we extend the signature-trees by designing a new encoding scheme: an impact encoding for maintaining user profiles and a frequency-based encoding for queries. We construct two types of entries in the extended signature-tree: an internal entry (IEntry) that summarizes statistics of its children and a leaf entry (LEntry) that represents a user's profile. Given a user in block under a category , the leaf entry contains the user's long-term interest list and short-term interest window. Its long-term interest list is described as a tuple , where is the probability of the user browsing an item in , and and are the total numbers of producers and entities in the user's history, respectively. The and are two impact lists, storing lists of and , respectively. A user's short-term interest representation is constructed from the most recent item sequence, stored in a fixed-length window. In addition to the user profile statistics, each LEntry is also attached with a pointer to its user profile record.

An IEntry is created during the construction of a tree. An IEntry is a virtual “user” whose interests cover all of its children. Like an LEntry, the data statistics on the long-term and short-term interests of this virtual “user” are extracted as the signature of the IEntry, computed by aggregating the corresponding signature components over all children. Similarly, an IEntry is attached with a pointer to its child subtree.
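The two entry types and the way an IEntry summarizes its children can be sketched as follows. The element-wise maximum used here is one plausible reading of the child aggregation, and all class and field names are assumptions:

```python
# Illustrative sketch of leaf and internal entries in the extended
# signature-tree. The aggregation uses element-wise maxima over the
# children, one plausible reading of the summarization step; all names
# are assumptions, not the authors' implementation.

class LEntry:
    def __init__(self, long_term, short_window, profile_ptr):
        self.long_term = long_term        # e.g. entity-impact signature
        self.short_window = short_window  # most recent items (fixed length)
        self.profile_ptr = profile_ptr    # pointer to the full user profile

class IEntry:
    """A virtual 'user' whose signature covers all of its children."""
    def __init__(self, children):
        self.children = children
        dims = len(children[0].long_term)
        # component-wise maximum gives an upper bound for every child
        self.long_term = [max(c.long_term[i] for c in children)
                          for i in range(dims)]

u1 = LEntry([0.2, 0.0, 0.5], [], None)
u2 = LEntry([0.1, 0.4, 0.3], [], None)
node = IEntry([u1, u2])
```

Because each IEntry component dominates the corresponding component of every child, the signature of an internal node can never under-estimate a descendant, which is exactly what the pruning bounds below rely on.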

User block num 1 10 20 30 40 50
Max entity num 4000 475 257 155 123 101
Max producer num 98 53 40 39 32 25
TABLE II: The factors relevant to user profile signature size

Table II shows the statistics over our YouTube dataset. As we can see, applying user blocking reduces the entry size in a tree by a large margin. Without blocking, 4,000 entities would have to be considered for each entry, which is infeasible for a memory-based index. The entity set over a single user block is much smaller, which greatly reduces the memory cost of the index.

V-B KNN query

We perform the item-user matching as a top-k query over the CPPse-index. Before we proceed to the detailed KNN query algorithm, we need to decide how to generate a pseudo-query given an item and an extended signature-tree, and how to measure the relevance score of an item to an IEntry.

As introduced previously, each incoming item is described as a triplet of its category, its creator, and the set of entities extracted from it. However, the triplet representation cannot be directly used as a query. To conduct a KNN query over the CPPse-index, we need to generate a signature for the item. Given an item, its pseudo-query is generated by converting its producer into a one-hot encoding over a block's producers, and its entity set into a frequency vector over the entity set of each user block that contains its category and any of its entities. We also keep a vector recording the weight of each entity. Note that if several subtrees constructed from different user blocks are identified based on the category and the entity set of the item, a different encoding is generated for each of them. The following example illustrates the query generation process.
Example 1. Suppose we have an incoming item in category sports with a collection of entities. Let there be 3 user blocks containing category sports and some elements of this entity set, where one block has the producer set {weSpeakFootball, Wrzzer, SirMan, bundesteam} and the entity set {Beckham, football, worldcup, FIFA, Brazil, Messi}. Suppose the item's producer is Wrzzer and its entities are Beckham, worldcup, worldcup. After expansion, the entity set of the item becomes {Beckham, Messi, worldcup, FIFA}, with a corresponding entity weight vector. We encode all these elements, generating the signature of the producer over the block's producer set and the signature of the entities over the block's entity set, together with the weight vector. The two signatures and the entity weight vector are concatenated to form a complete signature of the item over this block. In the same way, we can generate the item's signatures over the other two blocks.
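The encoding in Example 1 can be sketched for a single user block as follows; the function name and the flat layout (producer one-hot followed by entity frequencies) are illustrative assumptions:

```python
# Sketch of pseudo-query generation for one user block, following
# Example 1: a one-hot vector over the block's producers concatenated
# with a frequency vector over the block's entities. Illustrative names.

def encode_query(producer, entities, block_producers, block_entities):
    producer_sig = [1 if p == producer else 0 for p in block_producers]
    entity_sig = [entities.count(e) for e in block_entities]
    return producer_sig + entity_sig

block_producers = ["weSpeakFootball", "Wrzzer", "SirMan", "bundesteam"]
block_entities = ["Beckham", "football", "worldcup", "FIFA", "Brazil", "Messi"]
item_entities = ["Beckham", "Messi", "worldcup", "FIFA", "worldcup", "FIFA"]

sig = encode_query("Wrzzer", item_entities, block_producers, block_entities)
```

Here the producer part of the signature is [0, 1, 0, 0] (Wrzzer is the second producer of the block), followed by the frequency of each block entity in the expanded item.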

Given an item and an IEntry, we define their relevance function as the Recommendation Upper Bound of the relevance measure between the item and any LEntry below this IEntry.

Definition 2.

Consider an internal entry and an item. After encoding the item, the relevance between them can be computed by plugging the entry's statistics into Equation 3, which then becomes:

(6)

where the two maximal BiHMM probabilities are taken over all the I-Node's children, for the long-term and short-term interests respectively, and the expansion weight vector corresponds to the set of entities kept in the current tree. According to Definition 2, we have the following lemma:

Lemma 1.

Given an internal entry IEntry and an item , for any internal entry IEntry’ in the subtree of IEntry, the following inequality holds:

Proof. From the construction of the CPPse-index, every statistic kept in IEntry is no smaller than the corresponding statistic kept in IEntry’. Suppose the item is encoded as its signature. We then have:

Thus, the long-term component of the bound of IEntry is no smaller than that of IEntry’. As the logarithm is strictly monotone increasing, the same holds after taking logarithms. In the same way, the short-term component of the bound of IEntry is no smaller than that of IEntry’. Thus, the inequality holds.

Lemma 2.

Given an internal entry IEntry and an item , for any user covered by I-node, the following inequality holds:

Proof. Suppose the user is represented by a leaf entry covered by the IEntry. We consider two cases.

  • If IEntry is the direct parent of the user’s leaf entry, the inequality follows directly from the definition of the Recommendation Upper Bound.

  • If IEntry is not the direct parent, we can find a branch in our CPPse-index from IEntry down to the user’s leaf entry. By Lemma 1, the bound of IEntry is no smaller than that of every internal entry on this branch. By the first case, the bound of the last internal entry is no smaller than the user’s relevance score. Thus, the inequality holds. ∎

1:Input: CPPse-index and the social item
2:Output: a ranked list of users,
3: is a size max-heap
4:for each in  do
5:     
6:      ptr is a pointer to current tree
7:      encoding w.r.t tree
8:     
9: curr_p is a priority queue
10:for  from 0 to  do
11:     for all entry in node that ptr points to do
12:         Enqueue      
13:while curr_p is non-empty do
14:     
15:     if  then
16:         if entry is LEntry then
17:               Update heap
18:         else
19:              for all c_entry in node that entry points to do
20:                  
21:                  if  then
22:                       Enqueue                                                
23:return
Algorithm 1 KNN Query Processing

Lemmas 1-2 guarantee that no false pruning can happen in the query processing. Algorithm 1 illustrates the general framework for the KNN query over the CPPse-index. Given an incoming social item, our algorithm performs the KNN query in three important steps: (1) compute the hash values based on the entity-category pairs contained in the item, by which a set of extended signature-trees is located (Lines 5-6); (2) generate a pseudo-query based on the item and each located extended signature-tree (Line 7); (3) select and rank the top-k relevant users (Lines 13-22). We maintain a max-heap of size k as our output ranked list. In the ranking process, a priority queue curr_p is maintained, holding the recommendation score of an entry, the current entry and the generated query. The recommendation score is used as the comparison key to decide whether the priority queue should be updated. The queue is updated if the current recommendation score is greater than the lowest score kept in the resulting heap. In the entire process, we only consider the entries whose scores are larger than the lower bound LB. When the current entry is an IEntry, its children are put into the priority queue (Lines 19-22) if their scores exceed the LB of the heap; when it is an LEntry, we update the heap (Line 17).
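The pruning logic of Algorithm 1 can be sketched as a generic best-first top-k search, where an internal entry's bound plays the role of the Recommendation Upper Bound. The toy tree and all names here are assumptions, not the paper's data structures:

```python
# Best-first top-k search over a bounding tree, mirroring Algorithm 1:
# internal entries carry an upper bound on any descendant's score, so a
# branch is pruned when its bound cannot beat the current k-th best.
import heapq

def knn_query(root, k, score, bound):
    """score(leaf) -> relevance; bound(entry) >= score of any leaf below."""
    topk = []                                  # min-heap of (score, user)
    pq = [(-bound(root), id(root), root)]      # max-priority by bound
    while pq:
        neg_b, _, entry = heapq.heappop(pq)
        lb = topk[0][0] if len(topk) == k else float("-inf")
        if -neg_b <= lb:
            continue                           # cannot beat current k-th best
        if entry.get("children") is None:      # LEntry: a real user
            heapq.heappush(topk, (score(entry), entry["user"]))
            if len(topk) > k:
                heapq.heappop(topk)            # drop the lowest-scored user
        else:                                  # IEntry: expand its children
            for c in entry["children"]:
                heapq.heappush(pq, (-bound(c), id(c), c))
    return sorted(topk, reverse=True)

leaves = [{"user": u, "s": s, "children": None}
          for u, s in [("u1", 0.9), ("u2", 0.4), ("u3", 0.7)]]
root = {"children": leaves, "s": 0.9}
result = knn_query(root, 2,
                   score=lambda e: e["s"],
                   bound=lambda e: e["s"] if e.get("children") is None
                   else max(c["s"] for c in e["children"]))
# result == [(0.9, "u1"), (0.7, "u3")]
```

Since the bound of an internal entry dominates every descendant's score (Lemmas 1-2), skipping an entry whose bound is at most the current k-th best score can never discard a correct answer.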

V-C Dynamic Maintenance

This section discusses the dynamic maintenance of our CPPse-index. In social communities, user information is highly dynamic due to frequent user activities. On the one hand, when users browse media data, their interest patterns change. Users may browse media containing existing entities, which changes the entity frequencies in their user profile signatures. Users may also browse media covering newly arriving entities, which expands their signatures and adds new category-entity pairs to be kept in the hash table. On the other hand, new users may join the social community, which adds new profiles to be maintained. We need to maintain the short-term interest window, and update the user profile representations and all their ancestor internal entries in the CPPse-index to reflect the recent updates in the social community.

1:Input: CPPse-index and , user profiles to update
2:return updated CPPse-index
3:for each in {do
4:     
5:     for all pairs in history do
6:         
7:          is a set of new entity category pairs      
8:      get all extended trees
9:     Insert to hash table if it is non-empty.
10:     for each ptr in  do
11:         
12:         if find LE then update LE and its ancestors
13:         else                
14:return CPPse-index
Algorithm 2 User Profile Update Maintenance

We maintain the CPPse-index periodically by checking the activities of social users. Algorithm 2 shows the detailed process of handling the social updates. Given a set of updated user profiles, our algorithm performs the maintenance in three main steps: (1) update the user profile representation and locate the extended signature-trees of each user by hash mapping over the category-entity pairs in his long-term interest list (Lines 4-8); (2) update the hash table if necessary (Line 9); (3) find the current user profile in the identified extended signature-trees (Line 11), and update the extended signature-tree containing the current user profile (Lines 12-13). We search the chained hash table to find the category-entity pairs that match those in an updated user profile. If a category-entity pair from the current user profile cannot be found in the hash table, a new entity has arrived and needs to be inserted into the hash index. The signatures of the tree containing the current user are expanded to include the unseen entity. To accommodate unseen entities, following the classic technique for memory management in database systems, we reserve space in each entry and fill it with zeros; hence, we simply increase the counters of all affected entries until no further updating is required. For the leaf entry of the current user profile, we execute two different update operations, depending on whether the short-term window in the entry is full or not. If the short-term window is not full, we only append the newly arrived social item to the window. Otherwise, we compute the frequency counters of all items in the window, write the items in the window into the user profile record, and put the new item into the emptied window. If a user profile can be found in an extended signature-tree, it belongs to an existing user with new activities, and its signature is reconstructed by counting the frequencies of its entities and of the producers in his browsing list. All the signatures of its ancestor entries are reconstructed based on its new signature. If a user profile does not appear in any signature-tree, it belongs to a new user. We locate its block and then the signature-tree into which it should be inserted. As such, the user profiles are well maintained to reflect their recent social activity patterns.
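The short-term window handling described above can be sketched as follows, assuming a hypothetical Profile class; flushing a full window into a long-term frequency counter mirrors the two update operations on a leaf entry:

```python
# Sketch of the short-term window maintenance: new items go into a
# fixed-length window; when it is full, the window's entity frequencies
# are flushed into the long-term profile record. Names are illustrative.
from collections import Counter

class Profile:
    def __init__(self, window_size=5):
        self.window_size = window_size
        self.window = []                 # most recent items (short-term)
        self.long_term = Counter()       # accumulated entity frequencies

    def browse(self, item_entities):
        if len(self.window) >= self.window_size:
            # flush: count the window's entities into the long-term record
            for ents in self.window:
                self.long_term.update(ents)
            self.window = []
        self.window.append(item_entities)

p = Profile(window_size=2)
p.browse(["Beckham"])
p.browse(["Messi"])
p.browse(["Beckham"])   # window was full, so the first two are flushed
```

After the third browse, the first two items have been written into the long-term record and the window holds only the newest item.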

VI Experimental Evaluation

This section evaluates the effectiveness and efficiency of our proposed social stream recommendation.

VI-A Experimental Setup

We conduct the experiments on four datasets: (1) A real-world dataset YTube, constructed by crawling the YouTube website using the 20 most popular queries 41. YTube consists of the media data of 787,010 videos uploaded to YouTube from 2012 to 2016. For each video, we crawled its title, description, uploader and the information of its interacting users in the ranked results. Producers and consumers of videos are identified according to Definition 1. (2) A real-world MovieLens dataset, MLens 13, which is publicly available and consists of 20 million user-movie interactions between 138,493 users and 27,278 movies from 09/01/1995 to 31/03/2015. Since no categories or producers are available in MLens, we generate them based on our observation on the YTube dataset that producers often create social items of a single category. We generate the category information by clustering all MLens movies based on their ratings, and regard the users who create social items for only one category and have frequent interactions as producers. (3) A synthetic dataset SynYTube created using synthpop 22 based on YTube. (4) A synthetic dataset SynMLens created using synthpop 22 based on MLens. The details of these datasets are shown in Table III, including the numbers of producers, consumers, entities, categories, interactions and social items.

Dataset    #Producers  #Consumers  #Entities  #Categories  #Interactions  #Items
YTube      3,146       8.41M       54,327     19           49M            787,010
SynYTube   3,146       8.41M       54,327     19           52M            787,010
MLens      586         138,221     28,195     15           20M            27,278
SynMLens   593         138,198     28,195     15           21M            27,278
TABLE III: Overview of datasets.

VI-B Evaluation Methodology

We evaluate our proposed ranking method, social stream and user stream Recommendation (ssRec), in terms of effectiveness and efficiency. First, we evaluate the effect of the short-term interest window in terms of both the window size and the balance parameter in Equation 3. We show their sensitivity and the optimal values of the two parameters. Then we evaluate the impact of using the expansion techniques in the recommendation, and that of user profile updates. Finally, the effectiveness and efficiency of our recommendation approach are evaluated using the optimal parameter settings.

We follow Wang et al. 31 to set up the stream simulation environment. We first order all interactions by timestamp, and then evenly split them into six partitions, the first two of which are the training sets while the other four are reserved for testing. When the current partition is used for training, its immediate next partition is used for testing. All effectiveness values are reported only on the partitions used for testing.
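The partitioning scheme can be sketched as follows, with illustrative names and a toy timestamped interaction list:

```python
# Sketch of the stream simulation setup: interactions ordered by
# timestamp, split evenly into six partitions, and each test partition
# paired with the partition immediately before it as its training data.
# Function names and the toy data are illustrative assumptions.

def make_partitions(interactions, n_parts=6):
    ordered = sorted(interactions, key=lambda x: x["ts"])
    size = len(ordered) // n_parts
    return [ordered[i * size:(i + 1) * size] for i in range(n_parts)]

def train_test_pairs(partitions):
    # partitions 0-1 bootstrap training; each of the four remaining
    # partitions is tested using the partition right before it
    return [(partitions[i], partitions[i + 1])
            for i in range(1, len(partitions) - 1)]

data = [{"ts": t} for t in range(12)]
parts = make_partitions(data)      # six partitions of two items each
pairs = train_test_pairs(parts)    # four train/test pairs
```

This yields exactly four test rounds, matching the four reserved test partitions in the setup above.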

We compare our ssRec to two state-of-the-art baselines, CTT 17 and UCD 36. CTT fuses collaborative filtering with type and temporal factors to generate recommendations over streams. UCD is a diversity-based method, where user profiles are expanded with their neighbours.

The effectiveness of all methods in the experiments is evaluated using precision at k (P@k), where k is the cutoff in the ranked user list, #Hit is the number of correct recommendations, and the score is normalized over the number of social items in the test data partitions. We evaluate the efficiency of different approaches based on the average response time for an item on the stream. All experiments are conducted on a server with an Intel Xeon E5 CPU and 25 GB RAM, running RHEL v6.3 Linux. We use TagMe 26 for entity extraction and implement the recommendation process over Apache Storm.
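A minimal sketch of one plausible P@k computation, assuming the hit count is normalized over the k recommendation slots of all test items; the function name is illustrative:

```python
# One plausible reading of the P@k metric: the fraction of correct
# recommendations among the k recommendations made per test item.
# This normalization is an assumption standing in for the paper's formula.

def precision_at_k(hits, k, n_items):
    """hits: number of correct recommendations across all test items."""
    return hits / (k * n_items)

# e.g. 30 correct recommendations across 100 items with k = 3 slots each
score = precision_at_k(30, 3, 100)
```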

VI-C Effectiveness Evaluation

We first compare our BiHMM model with the traditional HMM to verify the dependency of user interests on the media producers. Then, we evaluate the effect of the window size and the balance parameter by conducting the recommendation to find their optimal values. Finally, we compare our proposed approach with the state-of-the-art stream recommendation approaches, and evaluate the effect of user profile updates in our approach.

VI-C1 Comparing BiHMM and HMM

We prove the superiority of our proposed BiHMM by comparing it with HMM. The optimal number of hidden states is tuned based on the user browsing history. We divide the browsing history of users, based on their media browsing time, into two parts: the first 80% of the historical data in the profile for training and the latter 20% for testing. Note that here we consider users’ interaction information in both producer and consumer modes. Our model is evaluated by Accuracy, the percentage of correct predictions of a user’s next interest category among all predictions. For each user, we decide the optimal number of hidden states of HMM by testing the Accuracy of the model at different state numbers, from 1 up to a number at which the Accuracy reaches its peak. The optimal number of hidden states is the one achieving the highest Accuracy. Using the optimal hidden state number of each consumer, we train our BiHMM model by embedding the hidden states of the producers appearing in each browsing history, and obtain the optimal parameters for BiHMM, including the initial state probability distribution, the state transition probability matrix and the emission probability matrix.

(a) YTube
(b) SynYTube
(c) MLens
(d) SynMLens
Fig. 5: Effectiveness comparison between BiHMM and HMM.

We test the prediction accuracy of BiHMM and HMM for each consumer, put the users with the same optimal hidden state number into the same group, and report the prediction results for each group. Fig. 5 shows the prediction results of the two models for groups with optimal hidden state numbers from 1 to 8. From the figure, we can observe the same trend across the four datasets and for all user groups: BiHMM outperforms HMM. The results further verify our hypothesis that consumers’ interests also depend on the producers, which is not captured by HMM.

VI-C2 Effect of the Short-Term Window Size

We evaluate the impact of the short-term interest window size over our simulated stream data to find its optimal value. We test the prediction precision (P@k) of the recommendation by varying the window size from 1 to 10, where the window size is the number of recently browsed items in a window. At each value, we measure the prediction precision of the recommendation by changing the weight of the short-term interest measure from 0.1 to 1 with step 0.1, and report the optimal precision value. The prediction precisions for one partition are calculated based on whether the recommendations are accepted by the users in its next partition. For example, if we tune on the first partition, then we evaluate the prediction precision using the data in the second partition and only keep the #Hit in it, until we complete the tests over all partitions. After all partitions are used in the test, we compute P@k over all testing partitions, and the results are reported in Fig. 6.

(a) YTube
(b) SynYTube
(c) MLens
(d) SynMLens
Fig. 6: Effect of short-term interest window size

Clearly, when a small window is adopted, the user short-term interests are not accurately predicted due to interest drift. On the other hand, if a large window is employed, the short-term interest may fall back to the long-term interest. The optimal effectiveness is always achieved at a window size of 5. Thus we set the default window size to 5 and use it in all following tests.

VI-C3 Effect of the Balance Parameter

(a) YTube
(b) SynYTube
(c) MLens
(d) SynMLens
Fig. 7: Effect of the short-term interest weight

As our final recommendation score consists of both short-term and long-term components, the balance parameter in Equation 3 may be important. We apply the same parameter tuning settings over the simulated item streams as previously discussed, with the short-term interest window size fixed to 5. The results on the test sets are reported in Fig. 7. As we can observe, for each of the four datasets the recommendation effectiveness increases with the weight, reaches an optimal point, and then decreases. The optimal values are 0.4 and 0.3 for YTube and MLens, respectively. Since the two synthetic datasets are generated according to the original data distributions, they have the same optimal settings as their original datasets. The larger optimal weight on YTube also suggests that users’ interests are less stable on YouTube than on the MovieLens website, which is intuitive as items on YouTube are created more quickly than on MovieLens.

Vi-C4 Recommendation Effectiveness Comparison

We use the optimal settings obtained from our previous experiments and evaluate our final recommendation effectiveness by comparing with the existing competitors, CTT and UCD. As described in Section IV-C, entity-based expansion is applied to introduce diversity into the recommendation. To better show the effectiveness gain of the expansion techniques, we report the results of our alternative ssRec-ne, which omits the entity expansion, as a reference. We recommend the streaming items to the top-k users, where k is set to 5, 10, 20 and 30, respectively. Fig. 8 shows the comparison results.

(a) YTube
(b) SynYTube
(c) MLens
(d) SynMLens
Fig. 8: Effectiveness comparison

As we can see, our social stream and user stream recommendation approach (ssRec) achieves much better performance on all four datasets compared with the alternative without entity expansion (ssRec-ne). This is because the expansions exploit more entities closely related to a user’s interests, which reveal the user’s potential interests. Without entity expansion, the system only recommends items based on exactly matched entities, which limits the user’s interests to a narrow scope, resulting in low recommendation precision. Compared with the existing competitors, our ssRec approach performs best at all settings among all considered methods, and the improvement is consistent across all four datasets. This is because we consider both the short-term and long-term interests of users in terms of their social properties (producer-consumer dependencies) and item contents (entity expansion) in the interest prediction, which provides a complete representation of a user’s preferences. CTT performs worst because it ignores the user’s short-term interest and the diversity of item-user interactions. Thus, the items users have recently been interested in cannot be recommended. Meanwhile, ignoring the diversity of recommendation leads to a result list containing almost identical items, which does not reflect the complete view of user interests. Although UCD exploits diversity-based user profiles to find more diverse items for users, it neglects the significance of short-term interest just as CTT does, leading to a lower precision. All these results confirm that our proposed method is superior to the competitors in terms of effectiveness.

VI-C5 Effect of User Profile Updates

We test the effect of user profile updates on the effectiveness of recommendation over the four test collections. For each collection, we consider two settings: (1) a stream setting in which the model is updated from the previous partition (ssRec); and (2) a static setting in which the model is trained on the training set and the update operations are ignored (ssRec-nu). We measure the effectiveness of our recommender system under the two settings on the four test partitions. Fig. 9 shows how the effectiveness changes with respect to different top-k target users. As we can observe, with user profile updates we obtain a big effectiveness gain in P@k. This is because with the updates, the users’ long-term and short-term interests are well captured. Without updates, users’ profiles do not reflect their recent activity patterns. The improvement of ssRec over ssRec-nu confirms the importance of dynamic maintenance.

(a) YTube
(b) SynYTube
(c) MLens
(d) SynMLens
Fig. 9: Effect of user profile updates
(a) YTube
(b) SynYTube
(c) MLens
(d) SynMLens
Fig. 10: Efficiency comparison

VI-D Efficiency Evaluation

We evaluate the efficiency of our proposed CPPse-index in terms of its recommendation and update costs. Our CPPse-index is implemented over Apache Storm, a real-time fault-tolerant distributed data processing system 4. A bolt in Apache Storm is responsible for receiving inputs and performing the processing. We configure the number of bolts over Apache Storm to equal the number of categories of each dataset.

VI-D1 Recommendation Efficiency Comparison

We compare our proposed method with the state-of-the-art methods CTT and UCD in terms of the average response time per item on the stream. Here, k is set to 30. Fig. 10 shows the time cost of recommendation, where the number of partitions indicates the dataset size in the simulation setting and the time cost is accumulated over the four test partitions. Clearly, our approach is much faster than both CTT and UCD, especially when a large number of items need to be recommended. More importantly, the average recommendation cost of our proposed method is barely affected by the number of items, while the cost of both CTT and UCD grows almost exponentially with it. This is because the CPPse-index prunes the false-alarm candidates in the user-item matching process, while the other two methods process all candidates sequentially. Moreover, UCD performs worse than CTT due to the extra time cost of its diversity-based matching.

VI-D2 Efficiency of Media Updates

We test the cost of media updates over our CPPse-index by changing the size of the updates. The time costs over different context update sizes are reported in Fig. 11. Clearly, the cost increases steadily as the update size increases. This is because our CPPse-index processes the media updates with the support of the hash scheme and the user blocking techniques, which quickly locate the positions of the entries with user activity updates. This proves that our CPPse-index can be updated efficiently when user profile updates happen.

Fig. 11: Efficiency of social updates

VII Conclusion

This paper studies the problem of media stream recommendation. We first propose a novel Bi-Layer HMM model for predicting users’ long-term interest patterns. Then, we model both user profiles and media data as streams, and propose a novel probability-based item-user matching approach. Finally, we propose an indexing scheme that optimizes the time cost of stream recommendation. The experimental results demonstrate the high effectiveness and efficiency of our proposed stream recommendation approach.

References

  • 1 https://www.emc.com/leadership/digital-universe/2014iview/executive-summary.htm.
  • 2 https://blog.hootsuite.com/youtube-stats-marketers/.
  • 3 https://www.socialmedianews.com.au/social-media-statistics-australia-december-2017/.
  • 4 http://storm.apache.org/.
  • Balakrishnan et al. 2018 A. Balakrishnan, D. Bouneffouf, N. Mattei, and F. Rossi. Using contextual bandits with behavioral constraints for constrained online movie recommendation. In IJCAI, pages 5802–5804, 2018.
  • Beckmann et al. 1990 N. Beckmann, H.-P. Kriegel, R. Schneider, and B. Seeger. The r*-tree: An efficient and robust access method for points and rectangles. In SIGMOD, pages 322–331, 1990.
  • Castells et al. 2015 P. Castells, N. J. Hurley, and S. Vargas. Novelty and diversity in recommender systems. In Recommender Systems Handbook, pages 881–918. Springer, 2015.
  • Chandramouli et al. 2011 B. Chandramouli, J. J. Levandoski, A. Eldawy, and M. F. Mokbel. Streamrec: a real-time recommender system. In SIGMOD, pages 1243–1246, 2011.
  • Chang et al. 2017 S. Chang, Y. Zhang, J. Tang, D. Yin, Y. Chang, M. A. Hasegawa-Johnson, and T. S. Huang. Streaming recommender systems. In WWW, pages 381–389, 2017.
  • Chen et al. 2013 C. Chen, H. Yin, J. Yao, and B. Cui. Terec: A temporal recommender system over tweet stream. VLDB, 6(12):1254–1257, 2013.
  • Diaz-Aviles et al. 2012 E. Diaz-Aviles, L. Drumond, L. Schmidt-Thieme, and W. Nejdl. Real-time top-n recommendation in social streams. In RecSys, pages 59–66, 2012.
  • Forney 1973 G. D. Forney. The Viterbi algorithm. Proc. IEEE, 61(3):268–278, 1973.
  • Harper and Konstan 2016 F. M. Harper and J. A. Konstan. The movielens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4):19, 2016.
  • He et al. 2012 J. He, H. Tong, Q. Mei, and B. Szymanski. Gender: A generic diversified ranking algorithm. In NIPS, pages 1142–1150, 2012.
  • He et al. 2016 X. He, H. Zhang, M.-Y. Kan, and T.-S. Chua. Fast matrix factorization for online recommendation with implicit feedback. In SIGIR, pages 549–558, 2016.
  • Huang et al. 2015 Y. Huang, B. Cui, W. Zhang, J. Jiang, and Y. Xu. Tencentrec: Real-time stream recommendation in practice. In SIGMOD, pages 227–238, 2015.
  • Huang et al. 2016 Y. Huang, B. Cui, J. Jiang, K. Hong, W. Zhang, and Y. Xie. Real-time video recommendation exploration. In SIGMOD, pages 35–46, 2016.
  • Hurley and Zhang 2011 N. Hurley and M. Zhang. Novelty and diversity in top-n recommendation -analysis and evaluation. ACM Trans. Internet Technol., 10(4):14, 2011.
  • Hurley 2013 N. J. Hurley. Personalised ranking with diversity. In RecSys, pages 379–382, 2013.
  • Li et al. 2018 P. Li, G. Zhang, L. Chao, and Z. Xie. Personalized recommendation system for offline shopping. In ICALIP, pages 445–449, 2018.
  • Lommatzsch and Albayrak 2015 A. Lommatzsch and S. Albayrak. Real-time recommendations for user-item streams. In Proc SAC, pages 1039–1046, 2015.
  • Nowok et al. 2016 B. Nowok, G. Raab, and C. Dibben. synthpop: Bespoke creation of synthetic data in R. J. Stat. Softw., 74(11):1–26, 2016.
  • Puthiya Parambath et al. 2016 S. A. Puthiya Parambath, N. Usunier, and Y. Grandvalet. A coverage-based approach to recommendation diversity on similarity graph. In RecSys, pages 15–22, 2016.
  • Ramakrishna and Zobel 1997 M. V. Ramakrishna and J. Zobel. Performance in practice of string hashing functions. In DASFAA, pages 215–224, 1997.
  • Ribeiro et al. 2015 M. T. Ribeiro, N. Ziviani, E. S. D. Moura, I. Hata, A. Lacerda, and A. Veloso. Multiobjective pareto-efficient approaches for recommender systems. ACM Trans. Intell. Syst. Technol., 5(4):53, 2015.
  • Scaiella et al. 2012 U. Scaiella, P. Ferragina, A. Marino, and M. Ciaramita. Topical clustering of search results. In WSDM, pages 223–232, 2012.
  • Schweikardt 2009 N. Schweikardt. One-Pass Algorithm, pages 1948–1949. Springer US, Boston, MA, 2009.
  • Subbian et al. 2016 K. Subbian, C. Aggarwal, and K. Hegde. Recommendations for streaming data. In CIKM, pages 2185–2190, 2016.
  • Tao and Zhai 2007 T. Tao and C. Zhai. An exploration of proximity measures in information retrieval. In SIGIR, pages 295–302. ACM, 2007.
  • Tong et al. 2011 H. Tong, J. He, Z. Wen, R. Konuru, and C.-Y. Lin. Diversified ranking on large graphs: an optimization viewpoint. In SIGKDD, pages 1028–1036, 2011.
  • Wang et al. 2018 Q. Wang, H. Yin, Z. Hu, D. Lian, H. Wang, and Z. Huang. Neural memory streaming recommender networks with adversarial training. In SIGKDD, pages 2467–2475, 2018.
  • Welch 2003 L. R. Welch. Hidden markov models and the baum-welch algorithm. IEEE Inf. Theory Newslett., 53(4):10–13, 2003.
  • Yang et al. 2017 X. Yang, C. Liang, M. Zhao, H. Wang, H. Ding, Y. Liu, Y. Li, and J. Zhang. Collaborative filtering-based recommendation of online social voting. IEEE Trans. Comput. Social Syst., 4(1):1–13, 2017.
  • Yao et al. 2015 L. Yao, Q. Z. Sheng, A. H. Ngu, J. Yu, and A. Segev. Unified collaborative and content-based web service recommendation. IEEE Trans. Serv. Comput., 8(3):453–466, 2015.
  • Yu et al. 2017 H. Yu, Y. Wang, Y. Fan, S. Meng, and R. Huang. Accuracy is not enough: Serendipity should be considered more. In IMIS, pages 231–241, 2017.
  • Zanitti et al. 2018 M. Zanitti, S. Kosta, and J. Sørensen. A user-centric diversity by design recommender system for the movie application domain. In Companion of WWW, pages 1381–1389, 2018.
  • Zhang et al. 2005 B. Zhang, H. Li, Y. Liu, L. Ji, W. Xi, W. Fan, Z. Chen, and W.-Y. Ma. Improving web search results using affinity graph. In SIGIR, pages 504–511, 2005.
  • Zhou et al. 2012 X. Zhou, X. Zhou, L. Chen, and A. Bouguettaya. Efficient subsequence matching over large video databases. VLDB J., 21(4):489–508, 2012.
  • Zhou et al. 2010 X. Zhou, X. Zhou, L. Chen, Y. Shu, A. Bouguettaya, and J. A. Taylor. Adaptive subspace symbolization for content-based video detection. IEEE Trans. Knowl. Data Eng., 22(10):1372–1387, 2010.
  • Zhou et al. 2015 X. Zhou, L. Chen, Y. Zhang, L. Cao, G. Huang, and C. Wang. Online video recommendation in sharing community. In SIGMOD, pages 1645–1656, 2015.
  • Zhou et al. 2017 X. Zhou, L. Chen, Y. Zhang, D. Qin, L. Cao, G. Huang, and C. Wang. Enhancing online video recommendation using social user interactions. VLDB J., 26(5):637–656, 2017.
  • Zhuang et al. 2013 Y. Zhuang, W.-S. Chin, Y.-C. Juan, and C.-J. Lin. A fast parallel sgd for matrix factorization in shared memory systems. In RecSys, pages 249–256, 2013.