Spotting Collusive Behaviour of Online Fraud Groups in Customer Reviews


Sarthika Dhawan, Siva Charan Reddy Gangireddy, Shiv Kumar, Tanmoy Chakraborty
IIIT-Delhi, India; NSUT, Delhi, India
{sarthika15170, sivag}@iiitd.ac.in, shivk.it.16@nsit.net.in, tanmoy@iiitd.ac.in
Abstract

Online reviews play a crucial role in assessing the quality of a product before purchase. Unfortunately, spammers often take advantage of online review forums by writing fraudulent reviews to promote or demote certain products. The problem becomes more detrimental when such spammers collude and collectively inject spam reviews, as they can take complete control of users’ sentiment due to the sheer volume of fraudulent reviews they inject. Group spam detection is thus more challenging than individual-level fraud detection due to the lack of a clear definition of a group, the variation of inter-group dynamics, the scarcity of labeled group-level spam data, etc. Here, we propose DeFrauder, an unsupervised method to detect online fraud reviewer groups. It first detects candidate fraud groups by leveraging the underlying product review graph and incorporating several behavioral signals which model multi-faceted collaboration among reviewers. It then maps reviewers into an embedding space and assigns a spam score to each group such that groups comprising spammers with highly similar behavioral traits achieve a high spam score. Compared with five baselines on four real-world datasets (two of which were curated by us), DeFrauder shows superior performance, outperforming the best baseline with 17.64% higher NDCG@50 (on average) across datasets.


1 Introduction

Nowadays, online reviews have become highly important for customers making purchase-related decisions. Driven by the immense financial profits from product reviews, several blackmarket syndicates facilitate the posting of deceptive reviews to promote or demote certain products. Groups of such fraud reviewers are hired to take complete control of the sentiment about products. Collective behaviour is therefore more subtle than individual behaviour: at the individual level, activities might look normal, yet at the group level they might differ substantially from normal behavior. Moreover, it is not possible to understand the actual dynamics of a group by simply aggregating the behaviour of its members, due to the complicated, multi-faceted, and evolving nature of inter-personal dynamics. Spammers in such collusive groups also adopt intelligent strategies (such as paraphrasing each other's reviews, or devoting only a subset of their resources to any one product) to evade detection.

Previous studies mostly focused on individual-level fraud detection (Lim et al., 2010; Fei et al., 2013; Akoglu et al., 2013). The few studies that recognized the detrimental effect of such collective activities detected groups simply based on Frequent Itemset Mining (FIM) (Mukherjee et al., 2012; Xu et al., 2013; Allahbakhsh et al., 2013). They thus focused more on ranking fraud groups, paying less attention to judging the quality of the detected groups.

(Wang et al., 2016) pointed out several limitations of FIM for group detection – high computational complexity at low minimum support, the absence of temporal information, the inability to capture overlapping groups, a tendency to detect only small and tight groups, etc.

In this paper, we propose DeFrauder (Detecting Fraud Reviewer Groups; code available in the Supplementary (Dhawan et al.)), a novel architecture for fraud reviewer group detection. DeFrauder contributes equally to (i) the detection of potential fraud groups by incorporating several coherent behavioral signals of reviewers, and (ii) the ranking of groups based on their degree of spamicity through a novel ranking strategy. Experiments on four real-world labeled datasets (two of them prepared by us) show that DeFrauder significantly outperforms five baselines – it beats the best baseline by 11.92% higher accuracy for detecting groups and 17.64% higher NDCG@50 for ranking groups (averaged over all datasets).

In short, our contributions are fourfold: (i) two novel datasets, (ii) a novel method for reviewer group detection, (iii) a novel method for ranking groups, and (iv) a comprehensive evaluation to show the superiority of DeFrauder.

2 Related Work

Due to the abundance of literature on online fraud detection (Kou et al., 2004), we restrict our discussion to fraud user detection on product review sites, which we deem pertinent to this paper.

In user-level fraud detection, notable studies include (Lim et al., 2010), which proposed scoring methods to measure the degree of spamicity of a reviewer based on rating behaviors; (Wang et al., 2011), which developed a graph-based fraud reviewer detection model; and (Fei et al., 2013), which exploited burstiness in reviews to detect fake reviewers. SpEagle (Akoglu et al., 2013) utilizes clues from all metadata as well as relational data and harnesses them collectively under a unified framework. ASM (Mukherjee et al., 2013a) models spamicity as a latent factor and exploits various behavioral footprints of reviewers. (Wang et al., 2012) argued that dynamic behavioral signals can be captured through a heterogeneous reviewer graph by considering reviewers, reviews, and products together. FraudEagle (Akoglu et al., 2013) spots fraudsters and fake reviews simultaneously. (Dutta et al., 2018; Chetan et al., 2019) discussed collusion in social media.

A few studies have attempted to detect fraud reviewer groups. GSRank (Mukherjee et al., 2012) was the first method of this kind; it identifies spam reviewer groups using FIM and ranks groups based on the relationships among groups, products, and individual reviewers. (Mukherjee et al., 2012) argued that, due to the scarcity of labeled data, unsupervised methods should be used to tackle this problem. Other studies largely focused on improving the ranking algorithm, ignoring the performance of the group detection method (they also used FIM for detecting groups) (Xu et al., 2013; Allahbakhsh et al., 2013; Mukherjee et al., 2013b; Ott et al., 2011). (Xu and Zhang, 2015) proposed an Expectation-Maximization algorithm to compute the collusive score of each group detected using FIM. (Wang et al., 2016) argued that FIM tends to detect small and tight groups; they proposed GSBP, a divide-and-conquer algorithm which emphasizes the topological structure of the reviewer graph. GGSpam (Wang et al., 2018b) directly generates spammer groups consisting of group members, target products, and spam reviews.

3 Proposed DeFrauder Framework

In this section, we describe DeFrauder, a three-stage algorithm – detecting candidate fraud groups, measuring different fraud indicators of the candidate groups, and ranking groups based on their group spam score.

3.1 Group Fraud Indicators

Here we present six fraud indicators which measure the spamicity of a group (Mukherjee et al., 2012; Wang et al., 2018b, 2016) by taking into account linguistic, behavioral, structural, and temporal signals. All indicators are normalized to $[0,1]$ – a larger value indicates more spamming activity. Let $R$ and $P$ be the entire sets of reviewers and products, respectively. Let $R_g$ be the set of reviewers in a group $g$, and $P_g$ be the set of target products reviewed by the reviewers in $R_g$. Each reviewer $r$ reviewed a set of products $P_r$.

To reduce the contingency of small groups, we use a penalty function which is a logistic function (Wang et al., 2016): $L(g) = \frac{1}{1 + e^{-(|R_g| + |P_g| - 3)}}$. We subtract $3$ from the sum since we consider the minimum number of reviewers and products within a group to be $2$ and $1$, respectively; therefore $|R_g| + |P_g| \geq 3$ (Wang et al., 2018a).

(i) Review Tightness (RT): It is the ratio of the number of reviews in $g$ (assuming each reviewer is allowed to write only one review per product) to the product of the sizes of the reviewer set and product set of $g$:

$$RT(g) = \frac{|V_g|}{|R_g|\,|P_g|}\; L(g) \qquad (1)$$

where $V_g$ is the set of reviews written by the members of $g$ on products in $P_g$. If a significant proportion of people co-reviewed several products, it may imply a spam group activity.

(ii) Neighbor Tightness (NT): It is defined as the average Jaccard similarity (JS) of the product sets of each pair of reviewers in $g$:

$$NT(g) = \frac{\sum_{r_i, r_j \in R_g,\, i < j} JS(P_{r_i}, P_{r_j})}{\binom{|R_g|}{2}}\; L(g) \qquad (2)$$

If the product sets are highly similar, the two reviewers are highly likely to be together in a collusive group.

(iii) Product Tightness (PT): It is the ratio of the number of products commonly reviewed by all members of $g$ to the number of products reviewed by any member of $g$ (Wang et al., 2016):

$$PT(g) = \frac{\big|\bigcap_{r \in R_g} P_r\big|}{\big|\bigcup_{r \in R_g} P_r\big|} \qquad (3)$$

Group members who review a fixed set of products and no other products are more likely to be indulging in fraudulent activities.

(iv) Rating Variance (RV): Group spammers tend to give similar ratings while reviewing any product. Let $\sigma^2(p, g)$ be the variance of the rating scores given to product $p$ by the reviewers in $g$. We take the average variance over all target products and convert it into a spam score in $[0,1]$:

$$RV(g) = 2\, L(g) \left(1 - \frac{1}{1 + e^{-\frac{1}{|P_g|}\sum_{p \in P_g} \sigma^2(p, g)}}\right) \qquad (4)$$

(v) Product Reviewer Ratio (RR): It is defined as the average, over the target products, of the ratio of the number of reviewers in $g$ who reviewed product $p$ (denoted $R_g^p$) to the number of all reviewers of $p$ (denoted $R^p$):

$$RR(g) = \frac{1}{|P_g|} \sum_{p \in P_g} \frac{|R_g^p|}{|R^p|} \qquad (5)$$

If a product is mainly reviewed by the reviewers in $g$, then the group is highly likely to be a spammer group.

(vi) Time Window (TW): Fraudsters in a group are likely to post fake reviews within a short time interval. Given a group $g$ and a product $p \in P_g$, we define the time-window based spamicity as

$$TW(g) = \frac{L(g)}{|P_g|} \sum_{p \in P_g} TW(g, p), \qquad TW(g, p) = \begin{cases} 1 - \frac{L_{g,p} - F_p}{\tau}, & \text{if } L_{g,p} - F_p \leq \tau \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

where $F_p$ is the date when the first review of product $p$ was posted, $L_{g,p}$ is the latest date of a review of $p$ posted by any reviewer in group $g$, and $\tau$ is a time threshold (set to 30 days in our experiments, as suggested in (Wang et al., 2018b)).
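To make the six indicators concrete, the following minimal Python sketch computes them for one candidate group. It is an illustration assuming the reconstructed formulas above; the Review record, the total_reviewers_of and first_review_day lookups, and the exact forms of RV and TW are our assumptions rather than the authors' reference implementation.

import itertools
import math
from collections import namedtuple

# Hypothetical flat review record for a candidate group (field names are illustrative).
Review = namedtuple("Review", ["reviewer", "product", "rating", "day"])

def penalty(n_reviewers, n_products):
    # Logistic penalty L(g); damps the indicators of very small groups.
    return 1.0 / (1.0 + math.exp(-(n_reviewers + n_products - 3)))

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def group_indicators(reviews, total_reviewers_of, first_review_day, tau=30):
    """reviews: all reviews written by the group's members on the target products.
    total_reviewers_of[p]: number of reviewers of product p on the whole platform.
    first_review_day[p]: day on which the first review of p was posted."""
    R = {v.reviewer for v in reviews}
    P = {v.product for v in reviews}
    prods = {r: {v.product for v in reviews if v.reviewer == r} for r in R}
    L = penalty(len(R), len(P))

    rt = len(reviews) / (len(R) * len(P)) * L                      # (1) Review Tightness

    pairs = list(itertools.combinations(R, 2))
    nt = (sum(jaccard(prods[a], prods[b]) for a, b in pairs) / len(pairs)) * L if pairs else 0.0  # (2)

    pt = len(set.intersection(*prods.values())) / len(P)           # (3) Product Tightness

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    avg_var = sum(variance([v.rating for v in reviews if v.product == p]) for p in P) / len(P)
    rv = 2 * L * (1 - 1 / (1 + math.exp(-avg_var)))                # (4) Rating Variance

    rr = sum(len({v.reviewer for v in reviews if v.product == p}) / total_reviewers_of[p]
             for p in P) / len(P)                                  # (5) Product Reviewer Ratio

    def tw(p):
        span = max(v.day for v in reviews if v.product == p) - first_review_day[p]
        return 1 - span / tau if span <= tau else 0.0
    tw_score = L * sum(tw(p) for p in P) / len(P)                  # (6) Time Window

    return {"RT": rt, "NT": nt, "PT": pt, "RV": rv, "RR": rr, "TW": tw_score}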

1:  Initialize:
2:      C ← ∅                                   ▷ set of candidate groups
3:      G ← attributed product-product graph (Sec. 3.2)
4:      M ← ∅                                   ▷ potential merged groups
5:
6:  Iterate:
7:      M ← ∅
8:      C ← C ∪ GroupDetector(G)
9:      G ← attributed line graph of G          ▷ edges become vertices and vice versa
10:     t ← t + 1
11:     until no edges remain in G; return C
12: function GroupDetector(G)
13:     for each isolated node v in G do
14:         M ← M ∪ {A(v)} and remove v from G
15:     for each pair of edges (e, e′) in G do
16:         if JS of the product sets reviewed by A(e) and A(e′) > threshold then
17:             if A(e) and A(e′) can be merged then
18:                 merge A(e) and A(e′)
19:             else
20:                 if the reviewers dropped while forming the edges pass the JS check then
21:                     retain them in M as a potential group
22:     for each merged edge e in G do
23:         g = A(e)
24:         M.add(g) and remove e from G
25:     for each connected component cc of G with more than two nodes do
26:         M ← M ∪ {reviewers of cc} and remove cc from G
27:     for each group g ∈ M do
28:         if spam(g) > threshold then
29:             C ← C ∪ {g}
30:     return C
31: end function
Algorithm 1 ExtractGroups

3.2 Detection of Candidate Fraud Groups

We propose a novel graph-based candidate group detection method based on the “coherence principle”.

Hypothesis 1 (Coherence Principle)

Fraudsters in a group are coherent in terms of – (a) the products they review, (b) ratings they give to the products, and (c) the time of reviewing the products.

We show that each component of this hypothesis is statistically significant (see Sec. 5). We incorporate these three factors into a novel attributed product-product graph $G = (V, E)$, such that each node $v \in V$ indicates a product-rating pair $(p, x)$, and its attribute $A(v)$ consists of the set of reviewers who rated $p$ with $x$. An edge $e = (v_i, v_j)$ indicates the co-reviewing and co-rating patterns of two products $p_i$ and $p_j$ with ratings $x_i$ and $x_j$, respectively. The edge attribute $A(e)$ indicates the set of co-reviewers who reviewed both $p_i$ and $p_j$ and gave the same ratings $x_i$ and $x_j$, respectively, within the same time window (defined in Sec. 3.3). Note that an edge connecting the same product with different ratings will not exist in $G$, as we assume that a reviewer is not allowed to give multiple reviews/ratings to a single product.

We then propose ExtractGroups (pseudocode in Algorithm 1; a toy example is in the Supplementary (Dhawan et al.)), a novel group detection algorithm that takes $G$ as input and executes a series of operations through GroupDetector() (Line 12) – isolated nodes are first removed (Lines 13-14); edge attributes are then merged, and the corresponding edges removed, if the Jaccard similarity (JS) of the product sets reviewed by the corresponding reviewers exceeds a threshold. Any group of reviewers eliminated because only common reviewers are considered during edge creation is also checked through JS, in order to avoid losing any potential candidate groups (Lines 15-24). Before proceeding to the next iteration, connected components containing more than two nodes are removed (Lines 25-26). We define $spam(g)$ as the average of the six group-level indicators defined in Sec. 3.1, and consider as potential groups those whose $spam(g)$ exceeds a threshold (Lines 27-29; see Sec. 5.2).

ExtractGroups then converts the remaining structure of $G$ into an attributed line graph $G'$ (edges converted into vertices and vice versa) as follows: each vertex of $G'$ corresponds to an edge $e$ of $G$ and inherits its attribute $A(e)$; an edge of $G'$ connects two such vertices whose underlying edges share a product in $G$, thereby representing the co-reviewing and co-rating patterns of the three products involved; its attribute is the set of reviewers common to the two underlying edges. $G'$ is then fed into GroupDetector in the next iteration. Essentially, in each iteration we keep clubbing together reviewers who exhibit coherent reviewing patterns. The iteration continues until no edges remain in the resultant graph, and the set of candidate fraud groups is returned.

The worst-case time complexity of ExtractGroups depends on the number of iterations and the number of edges in $G$ (see the Supplementary (Dhawan et al.) for the detailed analysis).
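A compact sketch of the graph construction and the iterative line-graph loop is shown below, assuming networkx; the group_detector argument stands in for Algorithm 1's GroupDetector (whose inner merging rules are abstracted away here), and the flat (reviewer, product, rating, day) input format is our assumption.

import itertools
import networkx as nx

def build_product_rating_graph(reviews, time_window=20):
    """Attributed product-product graph of Sec. 3.2: nodes are (product, rating)
    pairs with attribute 'revs' (reviewers who gave that rating); an edge holds the
    common reviewers who rated both products that way within `time_window` days."""
    G, when = nx.Graph(), {}
    for r, p, rating, day in reviews:
        G.add_node((p, rating))
        G.nodes[(p, rating)].setdefault("revs", set()).add(r)
        when[(r, p)] = day
    for u, v in itertools.combinations(G.nodes, 2):
        if u[0] == v[0]:            # same product, different rating: no edge
            continue
        common = {r for r in G.nodes[u]["revs"] & G.nodes[v]["revs"]
                  if abs(when[(r, u[0])] - when[(r, v[0])]) <= time_window}
        if common:
            G.add_edge(u, v, revs=common)
    return G

def extract_groups(G, group_detector):
    """High-level loop of Algorithm 1: alternate GroupDetector with the
    attributed line-graph transformation until no edges remain."""
    candidates = []
    while G.number_of_edges() > 0:
        # GroupDetector is also expected to prune G (isolated nodes, merged edges,
        # large connected components), which is what guarantees termination.
        candidates.extend(group_detector(G))
        L = nx.Graph()              # line graph: every surviving edge becomes a node
        for e in G.edges:
            L.add_node(e, revs=G.edges[e]["revs"])
        for e1, e2 in itertools.combinations(G.edges, 2):
            if set(e1) & set(e2):   # the two edges share an endpoint in G
                common = G.edges[e1]["revs"] & G.edges[e2]["revs"]
                if common:
                    L.add_edge(e1, e2, revs=common)
        G = L
    return candidates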

Theorem 3.1 (Theorem of convergence)

ExtractGroups will converge within a finite number of iterations.

See Supplementary (Dhawan et al., ) for the proof.

3.3 Ranking of Candidate Fraud Groups

Once candidate fraud groups are detected, DeFrauder ranks these groups based on their spamicity. It involves two steps – mapping reviewers into an embedding space based on their co-reviewing patterns, and ranking groups based on how close the constituent reviewers are in the embedding space.

Reviewer2Vec: Embedding Reviewers

Our proposed embedding method, Reviewer2Vec, is motivated by (Wang et al., 2018b). Given two reviewers $r_i$ and $r_j$ co-reviewing a product $p$ by writing reviews $v_i$ and $v_j$ with ratings $x_i$ and $x_j$ at times $t_i$ and $t_j$ respectively, we define the collusive spamicity of $r_i$ and $r_j$ w.r.t. $p$ as:

$$\phi_p(r_i, r_j) = \begin{cases} \omega(p)\Big[\alpha\big(1 - \frac{|t_i - t_j|}{\tau}\big) + \beta\big(1 - \frac{|x_i - x_j|}{x_{max} - x_{min}}\big) + \gamma\, sim(v_i, v_j)\Big], & \text{if } |t_i - t_j| \leq \tau \text{ and } |x_i - x_j| \leq \delta\,(x_{max} - x_{min}) \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

where $sim(v_i, v_j)$ is the content similarity of the two reviews, and $\omega(p)$ is the degree of suspicion of product $p$. Coefficients $\alpha$, $\beta$ and $\gamma$ control the importance of time, rating and the similarity of review content, respectively. We set $\gamma$ to be the largest, as identical review content signifies more collusion than coherence in ratings and time (Wang et al., 2018b).

$\tau$, $\delta$ and $\eta$ are the parameters of the model. If the posting time difference between reviewers $r_i$ and $r_j$ on $p$ is beyond $\tau$, or their ratings on $p$ deviate beyond $\delta\,(x_{max} - x_{min})$ (where $x_{max}$ and $x_{min}$ are the maximum and minimum ratings that a product can achieve, respectively), we do not consider this co-reviewing pattern. ExtractGroups achieves its best results with the parameter settings reported in Sec. 5.3.

$\phi_p(r_i, r_j)$ measures the collusiveness between two reviewers w.r.t. a product $p$ they co-reviewed; however, there might be other reviewers who reviewed $p$ as well. The fewer the reviewers of $p$, the higher the probability that $r_i$ and $r_j$ colluded. This factor is handled by $\omega(p)$, which is obtained by passing the number of reviewers of $p$ through a sigmoid function; $\eta$ is a normalizing factor ranging in $[0,1]$ (set following (Wang et al., 2018b)). We take the cosine similarity of two reviews $v_i$ and $v_j$, after mapping them into an embedding space using Word2Vec (Mikolov et al., 2013), as $sim(v_i, v_j)$.

Combining the collusive spamicity of a pair of reviewers across all the products they co-reviewed, we obtain the overall collusive spamicity between two reviewers:

$$\Phi(r_i, r_j) = \sum_{p \in P_{r_i} \cap P_{r_j}} \phi_p(r_i, r_j)$$

We then create a reviewer-reviewer spammer graph (Wang et al., 2018b), which is a bi-connected and weighted graph $G_R = (V_R, E_R, W)$, where $V_R$ corresponds to the set of reviewers, and two reviewers $r_i$ and $r_j$ are connected by an edge with weight $W(r_i, r_j) = \Phi(r_i, r_j)$. Once $G_R$ is created, we use a state-of-the-art node embedding method to generate node (reviewer) embeddings (see Sec. 5.3).
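The sketch below illustrates one way to realize Reviewer2Vec's pairwise scoring and the reviewer-reviewer spammer graph, based on our reconstruction of Eq. (7). The parameter names and default values (alpha, beta, gamma, tau, delta, eta), the 5-star rating range, and the shape of the product-suspicion term are illustrative assumptions, not the paper's exact settings.

import math
import networkx as nx

def pair_spamicity(reviews_i, reviews_j, sim, n_reviewers_of,
                   alpha=0.3, beta=0.3, gamma=0.4, tau=20, delta=0.5, eta=0.5):
    """Collusive spamicity of two reviewers aggregated over co-reviewed products.
    reviews_*: {product: (rating, day, review_text)}; sim(a, b): content similarity
    (e.g., cosine similarity of averaged Word2Vec vectors); n_reviewers_of[p]: total
    number of reviewers of p."""
    x_max, x_min = 5, 1                         # assumed 5-star rating scale
    total = 0.0
    for p in set(reviews_i) & set(reviews_j):
        (x_i, t_i, v_i), (x_j, t_j, v_j) = reviews_i[p], reviews_j[p]
        if abs(t_i - t_j) > tau or abs(x_i - x_j) > delta * (x_max - x_min):
            continue                            # too far apart in time or rating
        coherence = (alpha * (1 - abs(t_i - t_j) / tau)
                     + beta * (1 - abs(x_i - x_j) / (x_max - x_min))
                     + gamma * sim(v_i, v_j))
        # Fewer reviewers of p => co-reviewing it is stronger evidence of collusion.
        suspicion = 1 - 1 / (1 + math.exp(-eta * n_reviewers_of[p]))
        total += suspicion * coherence
    return total

def build_reviewer_graph(reviewer_reviews, sim, n_reviewers_of):
    """Weighted reviewer-reviewer spammer graph; edge weight = overall collusive spamicity."""
    G_R = nx.Graph()
    G_R.add_nodes_from(reviewer_reviews)
    ids = list(reviewer_reviews)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            w = pair_spamicity(reviewer_reviews[ids[i]], reviewer_reviews[ids[j]],
                               sim, n_reviewers_of)
            if w > 0:
                G_R.add_edge(ids[i], ids[j], weight=w)
    return G_R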

Ranking Groups

For each detected group, we calculate the density of the group based on the (Euclidean) distance of each reviewer in the group from the group's centroid in the embedding space. The average of these distances is taken as a measure of the spamicity of the group. Let $\mathbf{e}_r$ be the vector representation of reviewer $r$ in the embedding space. The group spam score of group $g$ is measured as:

$$Spam(g) = \frac{1}{|R_g|} \sum_{r \in R_g} \big\lVert \mathbf{e}_r - \mathbf{c}_g \big\rVert_2, \qquad \mathbf{c}_g = \frac{1}{|R_g|} \sum_{r \in R_g} \mathbf{e}_r \qquad (8)$$
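Given node embeddings for the reviewers, this group score reduces to a few lines of numpy. The sketch follows the centroid-distance formulation reconstructed above; how the score is finally oriented and normalized for ranking may differ in the authors' implementation.

import numpy as np

def group_spam_score(group, embeddings):
    """Average Euclidean distance of each member of the group to the group centroid
    in the embedding space; a tight cluster of colluding reviewers yields a small value."""
    X = np.stack([embeddings[r] for r in group])   # |R_g| x d matrix
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())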

4 Datasets

We collected four real-world datasets – YelpNYC: hotel/restaurant reviews of New York City (Rayana and Akoglu, 2015); YelpZip: an aggregation of reviews on restaurants/hotels from a number of areas with contiguous zip codes starting from New York City (Rayana and Akoglu, 2015); Amazon: reviews on musical instruments (He and McAuley, 2016); and Playstore: reviews of different applications available on Google Playstore. Fake reviews and spammers are already marked in both Yelp datasets (Rayana and Akoglu, 2015). For the remaining two datasets, we employed three human experts (social media experts aged between 25 and 35) to label spammers based on the instructions mentioned in (Shojaee et al., 2015; Mukherjee et al., 2013a; Hill, 2018); they were also given full liberty to apply their own experience. Inter-rater agreement was measured using Fleiss' multi-rater kappa for Amazon and Playstore. Table 1 shows the statistics of the datasets.

Dataset # Reviews # Reviewers # Products
YelpNYC 359052 160225 923
YelpZIP 608598 260227 5044
Amazon 10261 1429 900
Playstore 325424 321436 192
Table 1: Statistics of four datasets.
Method     |       YelpNYC        |       YelpZIP        |        Amazon        |      Playstore
           | #G    GS   RCS  ND   | #G    GS   RCS  ND   | #G    GS   RCS  ND   | #G    GS   RCS  ND
GGSpam     | 1218  0.57 0.47 0.56 | 1167  0.61 0.47 0.56 | 144   0.13 0.21 0.23 | 1213  0.74 0.40 0.46
GSBP       | 809   0.56 0.33 0.52 | 807   0.47 0.37 0.52 | 115   0.41 0.21 0.68 | 250   0.74 0.38 0.47
GSRank     | 998   0.1  0.4  0.56 | 1223  0.13 0.42 0.70 | 2922  0.29 0.26 0.14 | 994   0.57 0.37 0.47
DeFrauderR | 4399  0.12 0.35 -    | 6815  0.13 0.03 -    | 197   0.23 0.23 -    | 385   0.37 0.11 -
DeFrauderT | 152   0.23 0.31 -    | 3666  0.64 0.20 -    | 807   0.7  0.47 -    | 200   0.45 0.12 -
DeFrauder  | 1118  0.73 0.50 0.6  | 4574  0.67 0.48 0.6  | 713   0.71 0.5  0.76 | 940   0.84 0.37 0.78
Table 2: Performance of the competing methods: GGSpam (Wang et al., 2018b), GSBP (Wang et al., 2016), GSRank (Mukherjee et al., 2012), DeFrauderR, DeFrauderT, and DeFrauder. The number of detected groups (#G) is reported after removing groups of size less than 2. Group-detection quality is reported as the EMD of the GS and RCS distributions (the higher, the better), and ranking quality as NDCG@50 (ND). DeFrauderR and DeFrauderT are used only for group detection. The ranking methods of all baselines are run on the groups detected by DeFrauder.

5 Experimental Results

Statistical Validation: To measure the statistical significance of each component (say, the products reviewed by the group members) of Hypothesis 1 (Sec. 3.2), we randomly generate pairs of reviewers (irrespective of the groups they belong to) and measure how their co-reviewing patterns (cumulative distribution) differ from those of pairs of reviewers who co-occur in the same group. Our hypothesis is that the two distributions are different (the null hypothesis being that they are the same). We observe that the difference is statistically significant according to the Kolmogorov-Smirnov test. See the Supplementary (Dhawan et al.) for more discussion. We then perform the evaluation in two stages – the quality of the detected groups, and the quality of the ranking algorithm.
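A two-sample Kolmogorov-Smirnov test of this kind can be run with scipy; the arrays below are placeholders standing in for the actual co-reviewing statistics of random pairs versus same-group pairs.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
random_pairs = rng.poisson(1.0, size=5000)       # placeholder: co-reviewing counts of random pairs
same_group_pairs = rng.poisson(4.0, size=5000)   # placeholder: counts for pairs from the same group

stat, p_value = ks_2samp(random_pairs, same_group_pairs)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
# A small p-value rejects the null hypothesis that the two samples come from the
# same distribution, supporting the coherence principle.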

5.1 Baseline Methods

We consider three existing methods (see Sec. 2 for their details) as baselines for both evaluations – (i) GSBP (Wang et al., 2016), (ii) GGSpam (Wang et al., 2018b), and (iii) GSRank (Mukherjee et al., 2012).

5.2 Evaluating Candidate Groups

Along with the three baselines mentioned above, we also consider two variations of DeFrauder as baselines for group detection: DeFrauderR constructs the attributed product-product graph based only on co-rating, without considering the time of reviewing; and DeFrauderT constructs it based only on co-reviewing time, without considering co-rating. This also helps justify why both time and rating are important in constructing the graph for group detection. (Mukherjee et al., 2012) suggested using the cumulative distribution (CDF) of group size (GS) and review content similarity (RCS) to evaluate the quality of the detected spam groups. Group size (GS) favors large fraud groups, as large groups are more damaging than smaller ones; here we discard groups of size less than 2 (cf. Table 2). Review Content Similarity (RCS) is defined as a linear combination of two factors: (a) Group Content Similarity (GCS), which captures inter-reviewer review content similarity (as spammers copy each other's reviews), and (b) Group Member Content Similarity (GMCS), which captures intra-reviewer review similarity (as spammers may copy/modify their own previous reviews). Finally, RCS(g) is computed as a linear combination of GCS(g) and GMCS(g).

The larger the deviation of each distribution from the vertical axis (measured in terms of the Earth Mover's Distance (EMD)), the better the quality of the detection method (Mukherjee et al., 2012).
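One plausible way to compute this deviation is the one-dimensional Earth Mover's Distance between the empirical distribution of a metric (GS or RCS over the detected groups) and a point mass at zero, which scipy exposes as wasserstein_distance; for non-negative scores this reduces to their mean. The function and example scores below are illustrative.

import numpy as np
from scipy.stats import wasserstein_distance

def emd_from_vertical_axis(scores):
    """EMD between the empirical distribution of a group-quality metric and a
    point mass at 0 (i.e., the deviation of its CDF from the vertical axis)."""
    scores = np.asarray(scores, dtype=float)
    return wasserstein_distance(scores, np.zeros_like(scores))

scores = np.array([0.2, 0.5, 0.9])               # e.g., RCS of three detected groups
assert np.isclose(emd_from_vertical_axis(scores), scores.mean())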

Comparison: We choose default parameter values based on our parameter selection strategy (Fig. 3), with the co-reviewing time threshold set to 20 days. The number of groups obtained from the different datasets is reported in Table 2. Fig. 1 shows the CDFs of GS and RCS for all the competing methods on YelpNYC (results are similar on the other datasets and are thus omitted), which are further summarized quantitatively (in terms of EMD) in Table 2. DeFrauder outperforms the best baseline by 11.92% and 1.84% higher relative EMD (averaged over the four datasets) for GS and RCS, respectively. We also notice that DeFrauderT performs better than DeFrauderR, indicating that temporal coherence is more important than rating coherence in detecting potential groups.

Figure 1: CDFs of (a) GS and (b) RCS for YelpNYC. The farther a method's CDF lies from the y-axis, the better the method's performance.

5.3 Evaluating Ranking Algorithm

We use NDCG@k, a standard graded-relevance evaluation metric. Since each reviewer is labeled as fraud/genuine, we set the graded relevance value (the ground-truth relevance score used in Normalized Discounted Cumulative Gain (NDCG) (Manning et al., 2010)) of each group to the fraction of its reviewers who are marked as fraud. The candidate groups are then ranked by each competing method, and the top k groups are judged using NDCG.
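For reference, a minimal NDCG@k computation over the graded relevance described above (fraction of labelled fraud reviewers per group, in the order a method ranks the groups) looks as follows; the example relevance values are illustrative.

import numpy as np

def ndcg_at_k(relevances, k):
    """relevances: graded relevance of the groups in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([0.8, 0.5, 0.1, 0.9], k=3))      # groups with 80%, 50%, 10%, 90% fraud members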

Figure 2: Performance on YelpNYC. Baselines are run with their own detected groups as well as with the groups detected by DeFrauder (+D; same naming convention as in Table 2).

Comparison: We choose default parameter values based on our parameter selection strategy (Fig. 3). We use Node2Vec (Grover and Leskovec, 2016) for embedding (we also tried DeepWalk (Perozzi et al., 2014) and LINE (Tang et al., 2015) and obtained worse results than with Node2Vec). Fig. 2 shows the performance of the competing methods for different values of k (top k groups returned by each method). Since DeFrauder produces better groups (Sec. 5.2), we also check how the ranking method of each baseline performs on the groups detected by DeFrauder. Fig. 2 shows that with DeFrauder's group structure, GGSpam and GSBP perform better than DeFrauder for small values of k; after that, DeFrauder dominates the others. However, all the baselines perform poorly with their own detected groups. This result also indicates the effectiveness of our group detection method. Table 2 reports that DeFrauder beats the other methods across all the datasets except YelpZIP, on which GSRank performs better with DeFrauder's detected groups. Interestingly, no single baseline turns out to be the best across datasets. Nevertheless, DeFrauder outperforms the best baseline (which varies across datasets) by 17.64% higher relative NDCG@50 (averaged over all the datasets).
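As a usage sketch, the third-party node2vec package (an implementation of Grover and Leskovec's method) can be applied to the weighted reviewer-reviewer graph roughly as follows; the placeholder graph and the hyper-parameter values shown here are illustrative, not the settings used in the paper.

import networkx as nx
from node2vec import Node2Vec     # pip install node2vec

G_R = nx.karate_club_graph()      # placeholder for the reviewer-reviewer graph of Sec. 3.3
nx.set_edge_attributes(G_R, 1.0, "weight")

n2v = Node2Vec(G_R, dimensions=64, walk_length=30, num_walks=100,
               weight_key="weight", workers=2)
model = n2v.fit(window=10, min_count=1)          # gensim Word2Vec under the hood
embeddings = {node: model.wv[str(node)] for node in G_R.nodes()}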

Figure 3: Parameter selection on YelpNYC. We vary each parameter while keeping the others at their default values. Since this setting produces the best results for group detection, we keep it for ranking as well.

6 Conclusion

In this paper, we studied the problem of fraud reviewer group detection in customer reviews. We established the principle of coherence among fraud reviewers in a group in terms of their co-reviewing patterns. This paper contributed in four directions: Datasets: We collected and annotated two new datasets which would be useful to the cybersecurity community; Characterization: We explored several group-level behavioral traits to model inter-personal collusive dynamics in a group; Method: We proposed DeFrauder, a novel method to detect and rank fraud reviewer groups; Evaluation: Exhaustive experiments were performed on four datasets to show the superiority of DeFrauder compared to five baselines.

Acknowledgement

The work was partially funded by Google Pvt. Ltd., DST (ECR/2017/001691), and the Ramanujan Fellowship. The authors also acknowledge the support of the Infosys Centre of AI, IIIT-Delhi, India.

References

  • Akoglu et al. [2013] Leman Akoglu, Rishi Chandy, and Christos Faloutsos. Opinion fraud detection in online reviews by network effects. In ICWSM, pages 985–994, 2013.
  • Allahbakhsh et al. [2013] Mohammad Allahbakhsh, Aleksandar Ignjatovic, Boualem Benatallah, Elisa Bertino, Norman Foo, et al. Collusion detection in online rating systems. In Asia-Pacific Web Conference, pages 196–207. Springer, 2013.
  • Chetan et al. [2019] Aditya Chetan, Brihi Joshi, Hridoy Sankar Dutta, and Tanmoy Chakraborty. Corerank: Ranking to detect users involved in blackmarket-based collusive retweeting activities. In WSDM, pages 330–338, 2019.
  • Dhawan et al. Sarthika Dhawan, Siva Charan Reddy Gangireddy, Shiv Kumar, and Tanmoy Chakraborty. DeFrauder: Supplementary. https://github.com/LCS2-IIITD/DeFrauder.
  • Dutta et al. [2018] Hridoy Sankar Dutta, Aditya Chetan, Brihi Joshi, and Tanmoy Chakraborty. Retweet us, we will retweet you: Spotting collusive retweeters involved in blackmarket services. In ASONAM, pages 242–249, 2018.
  • Fei et al. [2013] Geli Fei, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. Exploiting burstiness in reviews for review spammer detection. In ICWSM, 2013.
  • Grover and Leskovec [2016] Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In SIGKDD, pages 855–864, 2016.
  • He and McAuley [2016] Ruining He and Julian McAuley. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW, pages 507–517, 2016.
  • Hill [2018] Catey Hill. 10 secrets to uncovering which online reviews are fake. https://www.marketwatch.com/story/10-secrets-to-uncovering-which-online-reviews-are-fake-2018-09-21, 2018.
  • Kou et al. [2004] Yufeng Kou, Chang-Tien Lu, Sirirat Sirwongwattana, and Yo-Ping Huang. Survey of fraud detection techniques. In ICNSC, pages 749–754, 2004.
  • Lim et al. [2010] Ee-Peng Lim, Viet-An Nguyen, Nitin Jindal, Bing Liu, and Hady Wirawan Lauw. Detecting product review spammers using rating behaviors. In CIKM, pages 939–948. ACM, 2010.
  • Manning et al. [2010] Christopher Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to information retrieval. Natural Language Engineering, 16(1):100–103, 2010.
  • Mikolov et al. [2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119, 2013.
  • Mukherjee et al. [2012] Arjun Mukherjee, Bing Liu, and Natalie Glance. Spotting fake reviewer groups in consumer reviews. In WWW, pages 191–200, New York, April 2012.
  • Mukherjee et al. [2013a] Arjun Mukherjee, Abhinav Kumar, Junhui Liu, Bing Wang, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. Spotting opinion spammers using behavioral footprints. In SIGKDD, Chicago, USA, August 2013.
  • Mukherjee et al. [2013b] Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Natalie Glance. What yelp fake review filter might be doing? In ICWSM, pages 1–12, 2013.
  • Ott et al. [2011] Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T Hancock. Finding deceptive opinion spam by any stretch of the imagination. In ACL-HLT, pages 309–319, 2011.
  • Perozzi et al. [2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In SIGKDD, pages 701–710, 2014.
  • Rayana and Akoglu [2015] Shebuti Rayana and Leman Akoglu. Collective opinion spam detection: Bridging review networks and metadata. In SIGKDD, pages 985–994, Sydney, NSW, Australia, August 2015.
  • Shojaee et al. [2015] Somayeh Shojaee, Azreen Azman, Masrah Murad, Nurfadhlina Sharef, and Nasir Sulaiman. A framework for fake review annotation. In UKSIM-AMSS, pages 153–158, 2015.
  • Tang et al. [2015] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In WWW, pages 1067–1077, 2015.
  • Wang et al. [2011] Guan Wang, Sihong Xie, Bing Liu, and S Yu Philip. Review graph based online store review spammer detection. In ICDM, pages 1242–1247. IEEE, 2011.
  • Wang et al. [2012] Guan Wang, Sihong Xie, Bing Liu, and Philip S Yu. Identify online store review spammers via social review graph. ACM TIST, 3(4):61:1–61:21, 2012.
  • Wang et al. [2016] Zhuo Wang, Tingting Hou, Dawei Song, Zhun Li, and Tianqi Kong. Detecting review spammer groups via bipartite graph projection. The Computer Journal, 59(6):861–874, 2016.
  • Wang et al. [2018a] Zhuo Wang, Songmin Gu, and Xiaowei Xu. Gslda: Lda-based group spamming detection in product reviews. Applied Intelligence, 48(9):3094–3107, 2018.
  • Wang et al. [2018b] Zhuo Wang, Songmin Gu, Xiangnan Zhao, and Xiaowei Xu. Graph-based review spammer group detection. Knowledge and Information Systems (KAIS), 55(3):571–597, Jun 2018.
  • Xu and Zhang [2015] Chang Xu and Jie Zhang. Towards collusive fraud detection in online reviews. In ICDM, pages 1051–1056, 2015.
  • Xu et al. [2013] Chang Xu, Jie Zhang, Kuiyu Chang, and Chong Long. Uncovering collusive spammers in chinese review websites. In CIKM, pages 979–988, 2013.