Explicit Feedbacks Meet with Implicit Feedbacks : A Combined Approach for Recommendation System

Supriyo Mandal Department of Computer Science and Engineering,
Indian Institute of Technology Patna,
Patna, Bihar, India 801103,
supriyo.pcs17@iitp.ac.in.
   Abyayananda Maiti Department of Computer Science and Engineering,
Indian Institute of Technology Patna,
Patna, Bihar, India 801103,
abyaym@iitp.ac.in.
Abstract

Recommender systems recommend items more accurately by analyzing users' potential interest in different brands' items. In conjunction with users' rating similarity, the presence of users' implicit feedback, such as clicking items, viewing item specifications, and watching videos, has proved helpful for learning user embeddings, which in turn improves rating prediction. Most existing recommender systems focus on modeling ratings and implicit feedback while ignoring users' explicit feedback. Explicit feedback can be used to validate the reliability of particular users and to learn users' characteristics, i.e., what type of reviewers they are. In this paper, we explore three models for more accurate recommendation that focus on users' explicit and implicit feedback. The first, RHC-PMF, predicts users' ratings based on three explicit feedbacks (rating, helpfulness score and centrality); the second, RV-PMF, considers a user's implicit feedback (view relationship); the last, RHCV-PMF, considers both types of feedback. In this model, similarity of users' explicit feedback indicates the similarity of their reliability and characteristics, while similarity of implicit feedback indicates their preference similarity. Extensive experiments on a real-world dataset, the Amazon.com online review dataset, show that our models outperform baseline models in terms of rating prediction. The RHCV-PMF model also performs better rating prediction than baseline models for cold start users and cold start items.

Keywords: Recommendation System, Probabilistic Matrix Factorization, Review Network, Explicit Feedback, Amazon.com review data.

1 Introduction

In various domains such as e-commerce platforms, online news, and online movie sites, recommender systems play an important role in alleviating information overload and have been widely adopted. Based on demographic profiles and previous preferences of users, recommender systems [11] predict users' ratings or purchasing decisions and recommend the right items, the right news, or suitable friends to interested users. Most existing recommendation approaches can be mainly categorized into the Content-Based approach [9] and the Collaborative Filtering (CF) approach [12]. Memory-based methods [15] and model-based methods [5] are the two categories of CF.

To learn high-quality user-item embeddings, researchers consider various kinds of side information related to users' implicit feedback, such as user-item interactions [2], product reviews [4, 14] and product images [10], that lead to better rating prediction. A user purchases an item based on her preference or influenced by others. After purchasing the item, the user gives a positive (good) or negative (bad) rating based on her satisfaction level. There are different types of reviewers on online merchandise sites, such as positive reviewers, critical reviewers, and reliable reviewers. For example, if a user gives a bad rating to a purchased item and many users click her review as "helpful yes", she is not only a critical reviewer but also a reliable one, because many users support her review and rating. In another case, if many users click her review as "helpful no", she is only a critical reviewer but not a reliable one.

In our method, a high helpfulness score and a high centrality score with a positive rating (the company should fix a threshold value for ratings) mean the user is not only a positive reviewer but also a reliable one, whereas a low helpfulness score and a low centrality score with a positive rating mean the user is only a positive reviewer. Similarly, a high helpfulness score and a high centrality score with a negative rating mean the user is not only a critical reviewer but also a reliable one, and low scores with a negative rating mean the user is only a negative or critical reviewer. The company should fix the threshold values that define the high and low score ranges and the positive and negative rating ranges. The authors of [2, 4, 10, 14] focus only on implicit feedback such as user-item interactions, users' view similarity, and product images. Such feedback leads to better prediction of a user's preferences but not of her ratings, because rating prediction depends on what type of reviewer she is: positive, critical, or reliable. This type of characteristic can easily be predicted from the similarity of users' explicit feedback, which those works ignore.

In our research, we consider three explicit feedbacks, namely ratings, helpfulness score and centrality score, and one implicit feedback, the view relationship between users and items. From the implicit feedback, we can predict a user's preference areas more accurately, and from the explicit feedbacks we can predict the user's rating on her preferred items more accurately. An online merchandise company should recommend items that a user will not only purchase based on her preference but also rate well, because a user's negative rating or review always affects the financial health of the company.

If a user's helpfulness and centrality scores are similar to those of users whose scores are high and whose ratings are positive, the user is a positive and reliable reviewer, and her rating activity will resemble that of other positive and reliable reviewers. Similarly, if a user's helpfulness and centrality scores are similar to those of users whose scores are high and whose ratings are negative, the user is a critical and reliable reviewer, and her rating activity will resemble that of other critical and reliable reviewers. In another case, if a user's helpfulness and centrality scores are similar to those of users whose scores are low, with positive or negative ratings, the user is only a positive or a critical reviewer, respectively. So, based on explicit feedback similarity, we can predict a user's characteristics more accurately, which helps us predict her rating activity, while implicit feedback similarity helps us predict her preference areas. Considering both explicit and implicit feedback similarity, we can predict both a user's characteristics and her preference areas more accurately, which in turn helps us predict her rating activity more precisely.

The rest of the paper is organized as follows. In Section 2, we describe our methodology, which contains a probabilistic model (RHC-PMF) on explicit feedbacks, a probabilistic model (RV-PMF) on implicit feedback, and the fusion of explicit and implicit feedbacks (RHCV-PMF). In Section 3, we show the performance of our proposed models, and in Section 4 we give the conclusion and future work.

2 Methodology

Explicit Feedback: In this paper, we consider three explicit feedbacks: a user's rating, helpfulness score and centrality score.

i) Rating Scores: On an online merchandise site, a user gives a rating to her purchased item. We denote $r_{ij}$ as the rating of user $u_i$ for item $v_j$. For example, Amazon.com ratings span from 1 to 5, and in this paper we follow the same rating scale. Ratings 1 and 2 are considered negative ratings, and 3 to 5 are considered positive ratings.

ii) Helpfulness Score: Before purchasing an item, users are expected to read the previous reviews regarding the particular item. Most merchandise sites ask after each review: "Was this review helpful to you? (Answer Yes/No)". A "Yes" answer indicates that the review is helpful to the user; a "No" answer indicates that the review is not proper or not truthful. This helpfulness data can be used to validate the reliability of a particular user's item review.

We define a new formula to evaluate the helpfulness score $h_{ij}$ of the review given by user $u_i$ for item $v_j$ as follows:

$$h_{ij} = (-1)^{s_{ij}} \left( \frac{Y_{ij}}{T_{ij}} \right)^{2} \qquad (1)$$

where $Y_{ij}$ is the number of users who marked the review given by user $u_i$ for item $v_j$ as helpful and $T_{ij}$ is the total number of users who have answered that question (the total count of yes and no). If $r_{ij}$ is in the range $[+3, +5]$, $s_{ij}$ is equal to 2, otherwise 1. The above equation is quadratic in nature because we want to give more priority to users who have more "yes" clicks. A helpfulness score with a positive sign indicates that the user is a positive reviewer, and a negative sign indicates that the user is a critical reviewer. It is very difficult to obtain exact information about users who not only marked the review as helpful but also purchased the item, so we do not consider this scenario; we assume that users who marked the review as helpful are interested in purchasing the item. In the real world, many users read the reviews of previous users but do not click helpful "yes" or "no". Our experimental dataset does not provide such information, so it is outside our consideration.
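As a concrete illustration, Eq. 1 (as reconstructed above) can be computed as follows; this is a minimal Python sketch, and the function and argument names are illustrative rather than part of the dataset schema:

```python
def helpfulness_score(yes_clicks, total_clicks, rating):
    """Helpfulness score h_ij of one review (Eq. 1 as reconstructed above)."""
    if total_clicks == 0:
        return 0.0                        # nobody answered; no evidence either way
    s = 2 if 3 <= rating <= 5 else 1      # s_ij = 2 for positive ratings, else 1
    fraction = yes_clicks / total_clicks  # Y_ij / T_ij
    # Quadratic in the helpful fraction, so reviews with more "yes" clicks get
    # more priority; (-1)**s gives + for positive and - for critical reviewers.
    return ((-1) ** s) * fraction ** 2

print(helpfulness_score(8, 10, 5))   #  0.64: positive, well-supported reviewer
print(helpfulness_score(8, 10, 1))   # -0.64: critical, well-supported reviewer
```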

iii) Review Network and Centrality Score :

Figure 1: Bipartite network between items and users. An edge denotes a review is written by a user on an item.

When we are going to purchase an item from an online merchandise site, we read previous customers' reviews related to that particular item. We can build a network of reviewers based on their purchased items and timestamps. We name this network the Review Network [16].

Fig. 1 depicts a bipartite network representing the review data set. The two sets of nodes in the bipartite network are the item set and the user set. If user $u_i$ writes a review on item $v_j$, there is an edge between them; essentially, each edge represents a distinct review. Notice that each review has a time stamp of its creation. We identify each edge by a unique number that indicates the logical time stamp of edge creation. Please note that we have not specified original time stamps in this figure; for depiction purposes, we have assumed a time direction in Fig. 1. If two edges incident on the same item carry time stamps $t_a$ and $t_b$ with $t_a < t_b$, the user on the later edge posts her rating after the user on the earlier edge, and according to the review posting time we assume that she also purchases the item after that user. In Fig. 1, $I_1$ and $I_2$ are items and $U_1, \ldots, U_6$ are users who have written reviews on one or two items.
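A review network of this kind can be assembled with a standard graph library. The following is a small sketch using networkx with an invented, hypothetical review log; the node labels and time stamps are purely illustrative:

```python
import networkx as nx

# Hypothetical timestamped reviews: (user, item, logical time stamp).
reviews = [("U1", "I1", 1), ("U2", "I1", 2), ("U3", "I1", 3), ("U4", "I2", 4)]

G = nx.Graph()
G.add_nodes_from({u for u, i, t in reviews}, bipartite="users")
G.add_nodes_from({i for u, i, t in reviews}, bipartite="items")
for u, i, t in reviews:
    G.add_edge(u, i, time=t)     # each edge is one distinct review

print(G.degree("I1"))            # 3 reviews were written on item I1
```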

In this paper, we use degree centrality to evaluate the centrality score of each user for a particular item in the review network. We evaluate a user's centrality score based on how many other users read her review. Our dataset does not provide exact information about which users read others' reviews for a particular item. When a user wants to read the previous reviews of other users for a particular item, online merchandise sites offer two options: a) most recent reviews, b) top ranking reviews. Based on these two realistic assumptions, the centrality score of each user is evaluated.

Most recent reviews are selected based on the current time stamp and the time stamps at which users post their reviews. Based on most recent reviews, the centrality score $C^{MR}_{ij}$ of user $u_i$ for item $v_j$ is evaluated from our proposed equation:

$$C^{MR}_{ij} = \sum_{k=1}^{N_j - m_i} \frac{1}{k} \qquad (2)$$

where $N_j$ is the total number of users who purchase item $v_j$ and $m_i$ is the purchase position of user $u_i$; $m_i$ starts from 1, meaning $m_i = 1$ for the first user who purchases item $v_j$. For example, in Fig. 1, $U_1$ is the first user who writes a review regarding item $I_1$. Before purchasing the same item, $U_2$ reads $U_1$'s review as the most recent review, and for $U_2$, $U_1$'s centrality score $= 1$. When $U_3$ purchases the same item, she may read $U_1$'s review as the second most recent review, and for $U_3$, $U_1$'s centrality score $= 1/2$. The total centrality score of user $U_1$ for item $I_1$ is then $1 + 1/2 + \cdots$.
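Under this reconstruction, the most-recent-review centrality is a partial harmonic sum. A minimal Python sketch follows; the function and variable names are ours:

```python
def centrality_most_recent(position, total_purchasers):
    """C^MR of a user's review for one item (Eq. 2 as reconstructed above).

    position         -- m_i: 1 for the first purchaser of the item
    total_purchasers -- N_j: total number of users who purchased the item
    """
    # The k-th later purchaser sees this review as the k-th most recent
    # one and is assumed to read it with weight 1/k.
    return sum(1.0 / k for k in range(1, total_purchasers - position + 1))

print(centrality_most_recent(1, 3))  # U1 with two later purchasers: 1 + 1/2 = 1.5
```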

Now we consider the centrality score of users based on top ranking reviews. We rank reviews by their helpfulness score, separately for all positive reviewers and all critical reviewers, and evaluate the centrality score from our proposed equation:

$$C^{TR}_{ij} = \frac{n_{ij}}{\rho_{ij}} \qquad (3)$$

where $C^{TR}_{ij}$ is the centrality score of user $u_i$ for item $v_j$ based on top ranking reviews, $\rho_{ij}$ is the ranking of user $u_i$'s review (a higher helpfulness score means a higher rank), and $n_{ij}$ indicates how many users purchase the same item after user $u_i$.

The total centrality score $c_{ij}$ is calculated based on our proposed technique as follows:

$$c_{ij} = (-1)^{s_{ij}} \left[ w \, C^{MR}_{ij} + (1 - w) \, C^{TR}_{ij} \right] \qquad (4)$$

where $w$ is the weightage value; the company's management would decide the exact weightage. Here we set $w = 0.5$. If $r_{ij}$ is in the range $[+3, +5]$, $s_{ij}$ equals 2, otherwise 1. A centrality score with a negative sign indicates that the user is a critical reviewer, and a positive sign indicates a positive reviewer.
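The two centrality views can then be combined as in Eqs. 3 and 4. A short sketch under the same reconstruction, with illustrative names:

```python
def centrality_top_ranking(rank, later_purchasers):
    """C^TR of a review (Eq. 3 as reconstructed): later purchasers are assumed
    to read the top-ranked reviews, so a review of rank rho contributes n/rho."""
    return later_purchasers / rank        # rank 1 = highest helpfulness score

def total_centrality(c_recent, c_top, rating, w=0.5):
    """Signed weighted fusion of the two centrality views (Eq. 4)."""
    s = 2 if 3 <= rating <= 5 else 1      # same sign convention as Eq. 1
    return ((-1) ** s) * (w * c_recent + (1.0 - w) * c_top)

print(total_centrality(1.5, 3.0, rating=4))   # +2.25 for a positive reviewer
```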

Implicit Feedback: In this paper, we consider the view relationship between users and items as implicit feedback, used to predict a user's preference areas more accurately.

i) View Relationship: Before purchasing an item, users typically view different items in the same category based on their preference areas. Online merchandise sites keep the view history of registered users, which helps us understand a user's preference areas. Based on this information, the recommender system recommends items matching users' interests.

Probabilistic Model with Explicit Feedbacks (RHC-PMF) :

In this section, we first describe how ratings are approximated by helpfulness scores and centrality scores, and we introduce the user-item rating matrix. Typically there are three types of objects, namely users, ratings and items. Suppose $\mathcal{U} = \{u_1, u_2, \ldots, u_M\}$ is the set of users, $\mathcal{V} = \{v_1, v_2, \ldots, v_N\}$ is the set of items, $\mathcal{R} = \{r_1, r_2, \ldots, r_K\}$ is the set of ratings, $\mathcal{H} = \{h_1, h_2, \ldots, h_K\}$ is the set of helpfulness scores and $\mathcal{C} = \{c_1, c_2, \ldots, c_K\}$ is the set of centrality scores, where $M$ and $N$ are the numbers of users and items respectively and $K$ is the number of ratings, helpfulness scores and centrality scores individually.

We use the matrix $R = [r_{ij}]_{M \times N}$ to indicate the user-item rating matrix produced by the users who give ratings on different purchased items, where $r_{ij}$ is the rating score given by user $u_i$ on item $v_j$. Similarly, the matrix $H = [h_{ij}]_{M \times N}$ indicates the user-item helpfulness score matrix gained by the users on different purchased items, where $h_{ij}$ is the helpfulness score of user $u_i$ on item $v_j$, and the matrix $C = [c_{ij}]_{M \times N}$ indicates the user-item centrality score matrix, where $c_{ij}$ is the centrality score of user $u_i$ on item $v_j$.
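As an illustration of the data layout, the following sketch assembles the three explicit-feedback matrices from hypothetical review records; the record format, values and sizes are invented for the example. A sparse layout matters here, since Table 1 reports sparsity above 99.98% for every subset:

```python
from scipy.sparse import csr_matrix

# Hypothetical records: (user index, item index, r_ij, h_ij, c_ij)
records = [(0, 0, 5, 0.64, 1.50),
           (1, 0, 2, -0.25, 0.50),
           (0, 1, 4, 0.36, 0.00)]
M, N = 2, 2                                   # numbers of users and items

rows = [rec[0] for rec in records]
cols = [rec[1] for rec in records]
# One sparse M x N matrix per explicit-feedback channel.
R = csr_matrix(([rec[2] for rec in records], (rows, cols)), shape=(M, N))
H = csr_matrix(([rec[3] for rec in records], (rows, cols)), shape=(M, N))
C = csr_matrix(([rec[4] for rec in records], (rows, cols)), shape=(M, N))
```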

Our method tries to factorize the rating matrix $R$ into two matrices $U \in \mathbb{R}^{D \times M}$ and $V \in \mathbb{R}^{D \times N}$, where $D$ is the dimensionality of the latent space. $U$ is the user latent factor matrix, with each column $U_i$ being user $u_i$'s latent feature vector, which indicates how user $u_i$'s taste is similar to other users based on ratings; $V$ is the item latent factor matrix, with each column $V_j$ being the latent feature vector of item $v_j$, which indicates how other users rate item $v_j$. The helpfulness score matrix $H$ is likewise factorized into two matrices $U^{(h)}$ and $V^{(h)}$, where $U^{(h)}$ is the user latent factor matrix, with each column $U^{(h)}_i$ being user $u_i$'s latent feature vector indicating how her helpfulness score is similar to other users, and $V^{(h)}$ is the item latent factor matrix, with each column $V^{(h)}_j$ indicating how other users gain helpfulness scores on item $v_j$. Similarly, the centrality score matrix $C$ is factorized into two matrices $U^{(c)}$ and $V^{(c)}$. Here we consider users' explicit feedbacks, namely ratings, helpfulness scores and centrality scores, and try to represent each user and item by a low-dimensional vector: MF maps both users and items to a joint low-dimensional latent factor space such that user-item interactions are modeled as inner products in that space.

We define the conditional distribution  [8] over the observed ratings as:

$$p\big(R \mid U, V, \sigma_R^2\big) = \prod_{i=1}^{M} \prod_{j=1}^{N} \Big[ \mathcal{N}\big(r_{ij} \mid \tanh(U_i^{\top} V_j), \sigma_R^2\big) \Big]^{I^{R}_{ij}} \qquad (5)$$

where $\mathcal{N}(x \mid \mu, \sigma^2)$ denotes the probability density function of a Gaussian distribution with mean $\mu$ and variance $\sigma^2$, and $I^{R}_{ij}$ is equal to 1 if user $u_i$ rated item $v_j$, and 0 otherwise. The tanh function is used to map the range of $U_i^{\top} V_j$ within $[-1, +1]$, and we map $r_{ij}$ into the same range $[-1, +1]$ using the following Eq. 6:

$$x' = c + \frac{(x - a)(d - c)}{b - a} \qquad (6)$$

where $x$ is a value in an interval $[a, b]$ that we have to map into an interval $[c, d]$, and $x'$ is the scaled value of $x$ in the interval $[c, d]$.
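Eq. 6 is plain linear rescaling; a one-line Python helper (the name is ours) makes the mapping concrete:

```python
def rescale(x, a, b, c, d):
    """Linearly map a value x from the interval [a, b] onto [c, d] (Eq. 6)."""
    return c + (x - a) * (d - c) / (b - a)

print(rescale(4, 1, 5, -1, 1))   # a 4-star rating becomes 0.5 on [-1, +1]
```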

For each hidden variable, we place zero-mean spherical Gaussian priors [1] as follows:

$$p\big(U \mid \sigma_U^2\big) = \prod_{i=1}^{M} \mathcal{N}\Big(U_i \,\Big|\, 0, \frac{\sigma_U^2}{n_i} \mathbf{I}\Big), \qquad p\big(V \mid \sigma_V^2\big) = \prod_{j=1}^{N} \mathcal{N}\Big(V_j \,\Big|\, 0, \frac{\sigma_V^2}{n_j} \mathbf{I}\Big) \qquad (7)$$

Please note that we do not consider a uniform variance for all users, as shown in Eq. 7; we try to be more reasonable by characterizing different users with different prior variances for better recommendation. Here $n_i$ is the number of ratings given by user $u_i$, meaning we adjust user $u_i$'s prior variance according to $n_i$: more ratings given by user $u_i$ contribute more accuracy to learning her rating activity and, consequently, smaller uncertainty. This means that the prior variance of user $u_i$ is inversely proportional to $n_i$. Similarly, $n_j$ is the number of users who rate item $v_j$.

Similar to Eqs. 5-7, we define another conditional distribution over the observed helpfulness scores, $p\big(H \mid U^{(h)}, V^{(h)}, \sigma_H^2\big)$, where $U^{(h)}$ and $V^{(h)}$, the user and item features based on helpfulness score, are given zero-mean spherical Gaussian priors. Due to space limitations, the expressions are omitted. Here $h_{ij}$ is also mapped into the range $[-1, +1]$ using Eq. 6, and again we do not consider a uniform variance. A further conditional distribution over the observed centrality scores is $p\big(C \mid U^{(c)}, V^{(c)}, \sigma_C^2\big)$, where $U^{(c)}$ and $V^{(c)}$, the user and item features based on centrality score, are likewise given zero-mean spherical Gaussian priors.

In our model, the user vector based on ratings approximates the user vector based on helpfulness scores. So, we define a conditional distribution of $U$ given $U^{(h)}$ as follows:

$$p\big(U \mid U^{(h)}, \sigma_{UH}^2\big) = \prod_{i=1}^{M} \mathcal{N}\big(U_i \mid U^{(h)}_i, \sigma_{UH}^2 \mathbf{I}\big) \qquad (8)$$

where the variance $\sigma_{UH}^2$ controls the extent to which $U_i$ approximates $U^{(h)}_i$. Similarly, we define another conditional distribution of $U$ given $U^{(c)}$, that is $p\big(U \mid U^{(c)}, \sigma_{UC}^2\big)$, where the variance $\sigma_{UC}^2$ controls the extent to which the user vector based on ratings approximates the user vector based on centrality scores.

Now, we can compute the posterior distribution over the hidden variables considering three explicit feedbacks:

$$p\big(U, V, U^{(h)}, V^{(h)}, U^{(c)}, V^{(c)} \mid R, H, C\big) \;\propto\; p\big(R \mid U, V, \sigma_R^2\big)\, p\big(H \mid U^{(h)}, V^{(h)}, \sigma_H^2\big)\, p\big(C \mid U^{(c)}, V^{(c)}, \sigma_C^2\big)\, p\big(U \mid U^{(h)}, \sigma_{UH}^2\big)\, p\big(U \mid U^{(c)}, \sigma_{UC}^2\big)\, p\big(U \mid \sigma_U^2\big)\, p\big(V \mid \sigma_V^2\big)\, p\big(U^{(h)} \mid \sigma_{U^{(h)}}^2\big)\, p\big(V^{(h)} \mid \sigma_{V^{(h)}}^2\big)\, p\big(U^{(c)} \mid \sigma_{U^{(c)}}^2\big)\, p\big(V^{(c)} \mid \sigma_{V^{(c)}}^2\big) \qquad (9)$$

Probabilistic Model with Implicit Feedback (RV-PMF) :

In this section, we describe how ratings are approximated by the view relationship between users and items. Typically there are three types of objects, namely users, items and view scores. Suppose $\mathcal{U} = \{u_1, u_2, \ldots, u_M\}$ is the set of users, $\mathcal{V} = \{v_1, v_2, \ldots, v_N\}$ is the set of items and $\mathcal{W} = \{w_1, w_2, \ldots\}$ is the set of view scores. We use the matrix $W = [w_{ij}]_{M \times N}$ to indicate the user-item view matrix produced by the users who view different items, where the view score $w_{ij}$ equals 1 if user $u_i$ views item $v_j$ based on her interest, and 0 otherwise.
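A binary view matrix can be built directly from a view log. A tiny sketch with an invented, hypothetical log follows:

```python
import numpy as np

# Hypothetical view log: (user index, item index) pairs from browsing history.
views = [(0, 1), (0, 2), (1, 0)]
M, N = 2, 3

W = np.zeros((M, N))
for u, v in views:
    W[u, v] = 1.0     # w_ij = 1 if user u_i viewed item v_j, 0 otherwise
```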

Our method tries to factorize the view matrix $W$ into two matrices $U^{(w)}$ and $V^{(w)}$. We define the conditional distribution over the observed viewed items of different users, $p\big(W \mid U^{(w)}, V^{(w)}, \sigma_W^2\big)$, where $U^{(w)}$ and $V^{(w)}$, the user and item features, are given zero-mean spherical Gaussian priors. Here $w_{ij}$ is also mapped onto the $[-1, +1]$ scale.

In our model, the user vector based on ratings approximates the user vector based on her view activity over different items. So, we define a conditional distribution of $U$ given $U^{(w)}$, that is $p\big(U \mid U^{(w)}, \sigma_{UW}^2\big)$, where the variance $\sigma_{UW}^2$ controls the extent to which $U_i$ approximates $U^{(w)}_i$. We can then compute the posterior distribution over the hidden variables based on the implicit feedback:

$$p\big(U, V, U^{(w)}, V^{(w)} \mid R, W\big) \;\propto\; p\big(R \mid U, V, \sigma_R^2\big)\, p\big(W \mid U^{(w)}, V^{(w)}, \sigma_W^2\big)\, p\big(U \mid U^{(w)}, \sigma_{UW}^2\big)\, p\big(U \mid \sigma_U^2\big)\, p\big(V \mid \sigma_V^2\big)\, p\big(U^{(w)} \mid \sigma_{U^{(w)}}^2\big)\, p\big(V^{(w)} \mid \sigma_{V^{(w)}}^2\big) \qquad (10)$$

RHCV-PMF Model—Fusion of Explicit Feedback and Implicit Feedback :

In this section, we fuse both models (RHC-PMF and RV-PMF) and design a model named RHCV-PMF, where a user's ratings are associated with her explicit feedbacks (helpfulness score and centrality) and her implicit feedback (view activity).

The posterior distribution over the features of users and items based on the explicit feedbacks and the implicit feedback is given by:

$$p\big(U, V, U^{(h)}, V^{(h)}, U^{(c)}, V^{(c)}, U^{(w)}, V^{(w)} \mid R, H, C, W\big) \;\propto\; p\big(R \mid U, V, \sigma_R^2\big)\, p\big(H \mid U^{(h)}, V^{(h)}, \sigma_H^2\big)\, p\big(C \mid U^{(c)}, V^{(c)}, \sigma_C^2\big)\, p\big(W \mid U^{(w)}, V^{(w)}, \sigma_W^2\big)\, p\big(U \mid U^{(h)}, \sigma_{UH}^2\big)\, p\big(U \mid U^{(c)}, \sigma_{UC}^2\big)\, p\big(U \mid U^{(w)}, \sigma_{UW}^2\big)\, p\big(U \mid \sigma_U^2\big)\, p\big(V \mid \sigma_V^2\big) \cdots p\big(V^{(w)} \mid \sigma_{V^{(w)}}^2\big) \qquad (11)$$

To calculate the maximum a posteriori estimate, we take the log of the above posterior distribution [3]; the derivation of the log-posterior of Eq. 11 is omitted due to space limitations. Maximizing the log-posterior with fixed hyper-parameters (i.e., the observation noise variances and prior variances) is equivalent to minimizing the objective function of Eq. 12, where $\lambda_H = \sigma_R^2 / \sigma_H^2$, $\lambda_C = \sigma_R^2 / \sigma_C^2$, $\lambda_W = \sigma_R^2 / \sigma_W^2$, $\alpha = \sigma_R^2 / \sigma_{UH}^2$, $\beta = \sigma_R^2 / \sigma_{UC}^2$, $\gamma = \sigma_R^2 / \sigma_{UW}^2$, $\lambda_U = \sigma_R^2 / \sigma_U^2$ and $\lambda_V = \sigma_R^2 / \sigma_V^2$. $\|\cdot\|_F$ denotes the Frobenius norm. $I_{ij}$ is the indicator function that is equal to 1 if user $u_i$ has the corresponding information regarding item $v_j$, and 0 otherwise; for example, $I^{H}_{ij}$ is equal to 1 if user $u_i$ gains a helpfulness score on item $v_j$, otherwise $I^{H}_{ij} = 0$. $n^{h}_i$ is the total number of ratings for which user $u_i$ gains a helpfulness score and $n^{h}_j$ is the total number of users who gain a helpfulness score for a rating on item $v_j$; $n^{c}_i$ is the total number of ratings for which $u_i$ gains a centrality score and $n^{c}_j$ is the total number of users who gain a centrality score for a rating on $v_j$; $n^{w}_i$ is the total number of items viewed by user $u_i$ and $n^{w}_j$ is the total number of users who view item $v_j$.

The objective function is as follows:

$$\begin{aligned}
\mathcal{L} ={}& \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{N} I^{R}_{ij}\big(r_{ij} - \tanh(U_i^{\top} V_j)\big)^2 + \frac{\lambda_H}{2}\sum_{i=1}^{M}\sum_{j=1}^{N} I^{H}_{ij}\big(h_{ij} - \tanh(U^{(h)\top}_i V^{(h)}_j)\big)^2 \\
&+ \frac{\lambda_C}{2}\sum_{i=1}^{M}\sum_{j=1}^{N} I^{C}_{ij}\big(c_{ij} - \tanh(U^{(c)\top}_i V^{(c)}_j)\big)^2 + \frac{\lambda_W}{2}\sum_{i=1}^{M}\sum_{j=1}^{N} I^{W}_{ij}\big(w_{ij} - \tanh(U^{(w)\top}_i V^{(w)}_j)\big)^2 \\
&+ \frac{\alpha}{2}\sum_{i=1}^{M}\big\|U_i - U^{(h)}_i\big\|_F^2 + \frac{\beta}{2}\sum_{i=1}^{M}\big\|U_i - U^{(c)}_i\big\|_F^2 + \frac{\gamma}{2}\sum_{i=1}^{M}\big\|U_i - U^{(w)}_i\big\|_F^2 \\
&+ \frac{\lambda_U}{2}\sum_{i=1}^{M} n_i \|U_i\|_F^2 + \frac{\lambda_V}{2}\sum_{j=1}^{N} n_j \|V_j\|_F^2 + \cdots \qquad (12)
\end{aligned}$$

where the ellipsis abbreviates the analogous weighted regularization terms on $U^{(h)}$, $V^{(h)}$, $U^{(c)}$, $V^{(c)}$, $U^{(w)}$ and $V^{(w)}$, weighted by $n^{h}_i$, $n^{h}_j$, $n^{c}_i$, $n^{c}_j$, $n^{w}_i$ and $n^{w}_j$ respectively.
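The following sketch evaluates this objective on small dense matrices, under our reconstruction of Eq. 12; np.nan marks unobserved entries, and only the regularizers on $U$ and $V$ are written out (the remaining ones are analogous):

```python
import numpy as np

def objective(R, H, C, W, U, V, Uh, Vh, Uc, Vc, Uw, Vw,
              lam_H, lam_C, lam_W, alpha, beta, gamma, lam_U, lam_V):
    """RHCV-PMF objective of Eq. 12 (as reconstructed) on dense arrays.

    R, H, C, W hold the [-1, +1]-scaled observations with np.nan for
    missing entries; U, V, ... are D x M (users) and D x N (items) factors.
    """
    def fit(X, A, B, lam):
        mask = ~np.isnan(X)                       # indicator I_ij
        err = np.where(mask, X - np.tanh(A.T @ B), 0.0)
        return 0.5 * lam * np.sum(err ** 2)

    loss  = fit(R, U, V, 1.0)
    loss += fit(H, Uh, Vh, lam_H) + fit(C, Uc, Vc, lam_C) + fit(W, Uw, Vw, lam_W)
    # Coupling terms tying the rating-based user factors to the other views.
    loss += 0.5 * alpha * np.sum((U - Uh) ** 2)
    loss += 0.5 * beta  * np.sum((U - Uc) ** 2)
    loss += 0.5 * gamma * np.sum((U - Uw) ** 2)
    # Per-user / per-item weighted regularization from the priors of Eq. 7.
    n_i = np.sum(~np.isnan(R), axis=1)            # ratings per user
    n_j = np.sum(~np.isnan(R), axis=0)            # ratings per item
    loss += 0.5 * lam_U * np.sum(n_i * np.sum(U ** 2, axis=0))
    loss += 0.5 * lam_V * np.sum(n_j * np.sum(V ** 2, axis=0))
    return loss
```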

Training of the Model: The gradient descent algorithm is used to train our model, i.e., to minimize the above objective function. The gradients of $\mathcal{L}$ (Eq. 12) with respect to $U_i$, $V_j$, $U^{(h)}_i$, $V^{(h)}_j$, $U^{(c)}_i$, $V^{(c)}_j$, $U^{(w)}_i$ and $V^{(w)}_j$ are presented as follows:

$$\frac{\partial \mathcal{L}}{\partial U_i} = -\sum_{j=1}^{N} I^{R}_{ij}\big(r_{ij} - \tanh(U_i^{\top} V_j)\big)\, \tanh'(U_i^{\top} V_j)\, V_j + \alpha\big(U_i - U^{(h)}_i\big) + \beta\big(U_i - U^{(c)}_i\big) + \gamma\big(U_i - U^{(w)}_i\big) + \lambda_U n_i U_i \qquad (13)$$

where $\tanh'(\cdot)$ is the derivative of the tanh function, and $U_i$ is updated as

$$U_i \leftarrow U_i - \eta\, \frac{\partial \mathcal{L}}{\partial U_i} \qquad (14)$$

where $\eta$ is the learning rate. Similarly, we evaluate $\partial \mathcal{L} / \partial V_j$, $\partial \mathcal{L} / \partial U^{(h)}_i$, $\partial \mathcal{L} / \partial V^{(h)}_j$, $\partial \mathcal{L} / \partial U^{(c)}_i$, $\partial \mathcal{L} / \partial V^{(c)}_j$, $\partial \mathcal{L} / \partial U^{(w)}_i$ and $\partial \mathcal{L} / \partial V^{(w)}_j$ and update the parameters; the respective equations are omitted due to space limitations.

Rating Prediction: While $\mathcal{L}$ has not converged, we compute the gradients and update the parameters. Finally, we evaluate the predicted rating $\hat{r}_{ij} = \tanh(U_i^{\top} V_j)$. The range of $\hat{r}_{ij}$ is the $[-1, +1]$ scale, and it is mapped onto the $[+1, +5]$ scale using Eq. 6.
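Putting Eqs. 12-14 together for the rating term alone, a minimal gradient-descent loop on toy data might look as follows; the coupling terms of Eq. 13 are replaced by a plain L2 regularizer in this single-channel sketch, and the sizes, learning rate and regularization strength are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, N = 5, 4, 6                        # dimensionality and toy sizes
R = np.full((M, N), np.nan)              # ratings already scaled to [-1, +1]
R[0, 1], R[2, 3], R[1, 4] = 0.5, -1.0, 1.0

U = 0.1 * rng.standard_normal((D, M))    # user latent factors
V = 0.1 * rng.standard_normal((D, N))    # item latent factors
eta, lam = 0.05, 0.02                    # learning rate, regularization

for step in range(200):
    pred = np.tanh(U.T @ V)                       # M x N predictions
    err = np.where(np.isnan(R), 0.0, R - pred)    # only observed entries
    delta = err * (1.0 - pred ** 2)               # tanh'(x) = 1 - tanh(x)^2
    grad_U = -V @ delta.T + lam * U               # rating-term analogue of Eq. 13
    grad_V = -U @ delta + lam * V
    U -= eta * grad_U                             # Eq. 14 update
    V -= eta * grad_V

r_hat = np.tanh(U.T @ V)   # predictions in [-1, +1]; Eq. 6 maps them to [1, 5]
```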

3 Experiments

Data Statistics: Our models are applied to the Amazon.com online review dataset collected by [7], which contains different subsets on electronics, books, music, etc. The statistics of the dataset are shown in Table 1. The Amazon.com dataset is extremely sparse; this sparseness would clearly deteriorate the results of most existing recommender systems, but our proposed models overcome it.

Dataset | # users | # items | # reviews/ratings | sparsity
Electronics | 811,034 | 82,067 | 1,241,778 | 99.998%
Books | 2,588,991 | 929,264 | 12,886,488 | 99.999%
Music | 1,134,684 | 556,814 | 6,396,350 | 99.998%
Movies and TV | 1,224,267 | 212,836 | 7,850,072 | 99.996%
Home and Kitchen | 644,509 | 79,006 | 991,794 | 99.999%
Amazon Instant Video | 312,930 | 22,204 | 717,651 | 99.989%
Table 1: Statistics of the dataset.

Dataset | MSE at four increasing settings of D (leftmost: D = 5)
Electronics | 0.914 | 0.918 | 0.922 | 0.925
Books | 0.912 | 0.822 | 0.811 | 0.811
Music | 0.701 | 0.701 | 0.699 | 0.698
Movies and TV | 0.792 | 0.791 | 0.767 | 0.733
Table 2: Comparison of MSE results for the Amazon.com online reviews dataset on the RHCV-PMF model with different (not large) settings of dimensionality D.

Evaluation Metric and Baseline Methods: One widely used evaluation metric, the mean square error (MSE), is used to evaluate the performance of our models. Six previously proposed models, Matrix Factorization (MF) [8], Latent Dirichlet Allocation using MF (LDAMF) [7], Collaborative Topic Regression (CTR) [13], Hidden Factors as Topics (HFT) [7], Ratings Meet Reviews (RMR) [6] and modeling on product images and "also-viewed" product information (VMCF) [10], are considered as baseline models. All these models are applied to the Amazon.com dataset to predict the rating of a user for a particular item. In Table 3, the 2nd to 7th columns show the results of the baseline models.

Parameters: For our RHCV-PMF model, we perform experiments with the trade-off and regularization parameters chosen from {0.1, 0.3, 0.5}, [0.1, 1] and [0.1, 0.5], while the dimensionality $D$ is fixed to 5. As a result, setting two of the trade-off parameters to 0.2 and another to 0.1 manifests the best performance for all experimental datasets; for the remaining parameter, a value of 0.2 yields the best performance for the Electronics and Music datasets, and 0.3 yields the best performance for the Books and the Movies and TV datasets. Details of the observations with different parameters are omitted due to space limitations.

Evaluation: We randomly select 80% of the user ratings for training, and the remaining 20% is used for testing. Random sampling is conducted independently five times, and we run the baseline models and our models on each split; a minimal sketch of this protocol appears after Table 3. Table 3 compares the MSE results of the baseline models and our models on the Amazon.com online review dataset; we use $D = 5$ for all models. In this table, the 1st column indicates the dataset, the 2nd to 7th columns show the performance of the baseline models, and the last three columns show the performance of our models. For each dataset, our models perform significantly better. The RHCV-PMF model, the fusion of RHC-PMF and RV-PMF, performs better than RHC-PMF and RV-PMF alone.

Dataset | MF | LDAMF | CTR | HFT | RMR | VMCF | RHC-PMF | RV-PMF | RHCV-PMF
Electronics | 1.828 | 1.823 | 1.764 | 1.722 | 1.722 | 1.521 | 1.233 | 1.523 | 0.914
Books | 1.107 | 1.109 | 1.106 | 1.138 | 1.113 | 1.021 | 0.987 | 1.034 | 0.912
Music | 0.956 | 0.958 | 0.959 | 0.980 | 0.959 | 0.950 | 0.899 | 0.951 | 0.701
Movies and TV | 1.119 | 1.117 | 1.114 | 1.119 | 1.120 | 1.028 | 0.917 | 1.029 | 0.792
Home and Kitchen | 1.628 | 1.610 | 1.577 | 1.531 | 1.501 | 1.373 | 1.133 | 1.377 | 1.191
Amazon Instant Video | 1.330 | 1.328 | 1.291 | 1.260 | 1.270 | 1.269 | 1.145 | 1.271 | 1.102
Table 3: Comparison of MSE results between the baseline models (2nd to 7th columns) and our models (8th to 10th columns) on the Amazon.com online reviews dataset, where the rating scale is 1 to 5, D = 5, and 80% of the ratings are used for training.
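The evaluation protocol above (MSE over five independent random 80/20 splits) can be sketched as follows; the rating count is taken from Table 1 and the helper names are ours:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error, the evaluation metric used throughout."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def split_80_20(n_ratings, seed):
    """One random 80/20 train/test split over rating indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_ratings)
    cut = int(0.8 * n_ratings)
    return idx[:cut], idx[cut:]

# Five independent random samplings, e.g. for the Electronics subset.
splits = [split_80_20(1_241_778, seed) for seed in range(5)]
```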

Different Settings of Dimensionality: Increasing D (to not-too-large values) should add more flexibility to a model and, as a result, should improve performance. But in Table 2 we notice that incrementing D improved the results for Books, Music, and Movies and TV, yet did not improve the result for the Electronics dataset. For this contradictory result, our opinion is that the Electronics dataset is smaller than the other three datasets, and increasing D leads to more parameters in the model, which leads to overfitting.

Dataset | Cold start items | | | Cold start users | |
 | ItemCF | VMCF | RHCV-PMF | ItemCF | VMCF | RHCV-PMF
Electronics | 1.957 | 1.547 | 1.112 | 1.833 | 1.534 | 1.217
Books | 1.277 | 1.212 | 1.133 | 1.298 | 1.227 | 1.109
Music | 1.134 | 1.116 | 0.981 | 1.155 | 1.234 | 1.107
Movies and TV | 1.533 | 1.487 | 1.177 | 1.567 | 1.503 | 1.113
Table 4: Comparison of MSE results on cold-start items and users between the baseline models ItemCF [12] and VMCF [10] and our RHCV-PMF model on the Amazon.com online reviews dataset, where the rating scale is 1 to 5 and D = 5.

Performance on Cold Start Users and Items: We also evaluate the performance of our RHCV-PMF model for cold-start users and cold-start items, as shown in Table 4. For cold start items, we compare our performance with the two baseline models ItemCF [12] and VMCF [10], shown in the 2nd and 3rd columns, and for cold start users we compare with the same two baseline models, shown in the 5th and 6th columns. The baseline models and our model are applied to the Amazon.com dataset. We consider users who have expressed fewer than four ratings as cold start users, and items that have received fewer than four ratings in the training dataset as cold start items. We observe that 50% of users and 40% of items are cold start users and items, respectively. Our model performs better than the baseline models for cold start users and items as well, due to the consideration of both implicit and explicit feedback.
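Identifying the cold-start sets from the training split is a simple counting pass; a short sketch, with illustrative names and data, follows:

```python
from collections import Counter

def cold_start_sets(train_ratings, threshold=4):
    """Users/items with fewer than four ratings in the training split."""
    user_counts = Counter(u for u, v in train_ratings)
    item_counts = Counter(v for u, v in train_ratings)
    cold_users = {u for u, n in user_counts.items() if n < threshold}
    cold_items = {v for v, n in item_counts.items() if n < threshold}
    return cold_users, cold_items

# Example: user 7 has one training rating, so she is a cold start user.
print(cold_start_sets([(7, 101), (8, 101), (8, 102), (8, 103), (8, 104)]))
```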

4 Conclusion and Future Work

In our research work, we investigate Probabilistic Matrix Factorization (PMF) based models for recommendation systems that consider both explicit feedbacks and implicit feedbacks. Our experiments show that the fusion of explicit and implicit feedbacks is effective, and that our RHCV-PMF model is also effective for cold start users and items.

In the future, we would like to apply other models to our dataset and to experiment on other online merchandise companies' datasets. Several online merchandise companies connect with social media, where users share their reviews; the social networks available on these platforms are a suitable source for recommendation. We would like to investigate whether social networks can be utilized to learn users' preference areas and item rating activity.

References

  • [1] Dueck, D., Frey, B., Dueck, D., Frey, B.J.: Probabilistic sparse matrix factorization. University of Toronto technical report PSI-2004-23 (2004)
  • [2] He, X., Liao, L., Zhang, H., Nie, L., Hu, X., Chua, T.S.: Neural collaborative filtering. In: Proceedings of the 26th International Conference on World Wide Web, pp. 173–182. International World Wide Web Conferences Steering Committee (2017)
  • [3] Jamali, M., Ester, M.: A matrix factorization technique with trust propagation for recommendation in social networks. In: Proceedings of the fourth ACM conference on Recommender systems, pp. 135–142. ACM (2010)
  • [4] Kim, D., Park, C., Oh, J., Lee, S., Yu, H.: Convolutional matrix factorization for document context-aware recommendation. In: Proceedings of the 10th ACM Conference on Recommender Systems, pp. 233–240. ACM (2016)
  • [5] Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer 42(8) (2009)
  • [6] Ling, G., Lyu, M.R., King, I.: Ratings meet reviews, a combined approach to recommend. In: Proceedings of the 8th ACM Conference on Recommender systems, pp. 105–112. ACM (2014)
  • [7] McAuley, J., Leskovec, J.: Hidden factors and hidden topics: understanding rating dimensions with review text. In: Proceedings of the 7th ACM conference on Recommender systems, pp. 165–172. ACM (2013)
  • [8] Mnih, A., Salakhutdinov, R.R.: Probabilistic matrix factorization. In: Advances in neural information processing systems, pp. 1257–1264 (2008)
  • [9] Mooney, R.J., Roy, L.: Content-based book recommending using learning for text categorization. In: Proceedings of the fifth ACM conference on Digital libraries, pp. 195–204. ACM (2000)
  • [10] Park, C., Kim, D., Oh, J., Yu, H.: Do also-viewed products help user rating prediction? In: Proceedings of the 26th International Conference on World Wide Web, pp. 1113–1122. International World Wide Web Conferences Steering Committee (2017)
  • [11] Resnick, P., Varian, H.R.: Recommender systems. Communications of the ACM 40(3), 56–58 (1997)
  • [12] Sarwar, B., Karypis, G., Konstan, J., Riedl, J.: Item-based collaborative filtering recommendation algorithms. In: Proceedings of the 10th international conference on World Wide Web, pp. 285–295. ACM (2001)
  • [13] Wang, C., Blei, D.M.: Collaborative topic modeling for recommending scientific articles. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 448–456. ACM (2011)
  • [14] Wang, H., Wang, N., Yeung, D.Y.: Collaborative deep learning for recommender systems. In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235–1244. ACM (2015)
  • [15] Wang, J., De Vries, A.P., Reinders, M.J.: Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 501–508. ACM (2006)
  • [16] Mandal, S., Maiti, A.: Social Promoter Score (SPS) and Review Network: A method and a tool for predicting financial health of an online shopping brand. arXiv preprint arXiv:1804.04464 (2018)