Sequential Embedding Induced Text Clustering, a Non-parametric Bayesian Approach


Tiehang Duan, Department of Computer Science and Engineering, State University of New York at Buffalo, NY 14226, United States
Qi Lou, Department of Computer Science, University of California, Irvine, CA 92617, United States
Sargur N. Srihari, Department of Computer Science and Engineering, State University of New York at Buffalo, NY 14226, United States
Xiaohui Xie, Department of Computer Science, University of California, Irvine, CA 92617, United States
Abstract

Current state-of-the-art nonparametric Bayesian text clustering methods model documents through multinomial distributions over bags of words. Although these methods can effectively utilize the word burstiness representation of documents and achieve decent performance, they do not exploit the sequential information of text or the relationships among synonyms. In this paper, documents are modeled jointly by bags of words, sequential features and word embeddings. We propose the Sequential Embedding induced Dirichlet Process Mixture Model (SiDPMM) to effectively exploit this joint document representation in text clustering. The sequential features are extracted by an encoder-decoder component, and word embeddings produced by the continuous-bag-of-words (CBOW) model are introduced to handle synonyms. Experimental results demonstrate the benefits of our model in two major aspects: 1) improved performance across multiple diverse text datasets in terms of normalized mutual information (NMI); 2) more accurate inference of the ground-truth cluster numbers, thanks to a regularization effect on tiny outlier clusters.


1 Introduction

The goal of text clustering is to group documents based on their content and topics. It has wide applications in news classification and summarization, document organization, trend analysis and content recommendation on social websites [14, 19]. While text clustering shares the challenges of general clustering problems, including the high dimensionality of data, scalability to large datasets and prior estimation of the cluster number [1], it also has its own characteristics: 1) text data is inherently sequential, and the order of words matters in interpreting a document's meaning. For example, the sentence “people eating vegetables” has a totally different meaning from the sentence “vegetables eating people”, although the two sentences share the same bag-of-words representation. 2) Many English words have synonyms. Clustering methods that take synonyms into account are likely to be more effective at identifying documents with similar meanings.
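For illustration, a minimal sketch (using scikit-learn's CountVectorizer, not part of the original experiments) of how the two example sentences collapse to the same bag-of-words vector:

```python
# The two sentences above map to identical bag-of-words vectors,
# so any purely count-based representation cannot distinguish them.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["people eating vegetables", "vegetables eating people"]
vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(docs).toarray()

print(vectorizer.get_feature_names_out())  # ['eating' 'people' 'vegetables']
print(bow[0], bow[1])                      # [1 1 1] [1 1 1]
print((bow[0] == bow[1]).all())            # True -- word order is lost
```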

Pioneering works in text clustering have addressed the general challenges of clustering. Among them, nonparametric Bayesian text clustering uses the Dirichlet process to model the mixture distribution of text clusters and eliminates the need to pre-specify the number of clusters. Current methods model documents with bag-of-words representations only. In this work, as shown in Fig. 1, the Bayesian nonparametric model is extended to utilize knowledge extracted from an encoder-decoder model and word2vec embeddings, and documents are jointly modeled by bags of words, sequential features and word embeddings. We derive an efficient collapsed Gibbs sampling algorithm for performing inference under the new model.

Figure 1: Illustration of the proposed sequential embedding induced Dirichlet process mixture model (SiDPMM).

1.0.1 Our Contributions.

1) The proposed SiDPMM is able to incorporate rich feature representations. To the best of our knowledge, this is the first work to utilize sequential features in nonparametric Bayesian text clustering; the features are extracted with an encoder-decoder model. It also takes synonyms into account by including CBOW word embeddings as text features, based on the observation that documents formed with synonymous words are more likely to belong to the same cluster. 2) We derive a collapsed Gibbs sampling algorithm for the proposed model, which enables efficient inference. 3) Experimental results show that our model outperforms current state-of-the-art methods across multiple datasets and infers the number of clusters more accurately, owing to its desirable regularization effect on tiny outlier clusters.

2 Related Work

Traditional clustering algorithms such as K-means, Hierarchical Clustering, Singular Value Decomposition and Affinity Propagation have been successfully applied to text clustering (see [27] for a comparison of these methods on short text clustering). Algorithms utilizing spectral graph analysis [4], sparse matrix factorization [31] and probabilistic models [28, 26, 21] were proposed to improve performance. As text is usually represented by a huge sparse vector, previous works have shown that feature selection [15] and dimension reduction [9] are also crucial to the task.

Most classic methods require prior knowledge of the number of clusters, which is not always available in real-world scenarios. The Dirichlet Process Mixture Model (DPMM) has achieved state-of-the-art performance in text clustering thanks to its capability to model an arbitrary number of clusters [33, 32]; the number of clusters is automatically selected during posterior inference. Variational inference [2, 16] and Gibbs sampling [24, 7] can be applied to infer cluster assignments in these models.

A field closely related to text clustering is topic modeling. Instead of clustering documents, topic modeling aims to discover latent topics in document collections [3]. Recent works have shown that the performance of topic modeling can be significantly improved by integrating word embeddings into the model [34, 18, 12, 5].

The encoder-decoder model was recently introduced in natural language processing and computer vision to model sequential data such as phrases [11, 10, 30, 29] and videos [13]. It has shown strong performance on a number of tasks including machine translation [6], question answering [25] and video description [13]. These applications demonstrate its strength in extracting sequential features.

3 Description of SiDPMM

Our text clustering model is based on the Dirichlet process mixture model (DPMM), the limiting form of the finite Dirichlet mixture model (DMM). When DPMM is applied to clustering, the sizes of clusters are characterized by the stick-breaking process, and the prior on the cluster assignment of each sample is characterized by the Chinese restaurant process. The Dirichlet process can model an arbitrary number of clusters, which is typically inferred via collapsed Gibbs sampling or variational inference. We refer readers to [24, 2] for more details about DPMM.

We tailor DPMM to our task by learning clusters with multiple distinct information sources for documents, i.e., bag-of-words representations, word embeddings and sequential embeddings, which requires specifically designed likelihood, priors, and inference mechanism.

Notation: Meaning
$d_i$: the $i$-th document
$D_{k,\neg i}$: documents belonging to cluster $k$ excluding $d_i$
$K$: total number of clusters
$z_i$: cluster assignment of $d_i$
$\mathbf{z}_{\neg i}$: cluster assignments excluding document $i$
$\theta_k$: parameters of cluster $k$
$m_k$: number of documents in cluster $k$
$N_{d_i}$: number of words in document $d_i$
$N_{d_i}^{w}$: occurrences of word $w$ in document $d_i$
$N_{k,\neg i}$: number of words in cluster $k$ excluding $d_i$
$N_{k,\neg i}^{w}$: occurrences of word $w$ in cluster $k$ excluding $d_i$
$\mathbf{w}_i$: the bag of words of $d_i$
$\mathbf{s}_i$: sequential information embedding of $d_i$
$\mathbf{e}_i$: word embedding of $d_i$
$V$: vocabulary size
$\boldsymbol{\xi}^{s}$: set of hyper-parameters of the sequential embedding component
$\boldsymbol{\xi}^{e}$: set of hyper-parameters of the word embedding component
$\alpha$: parameter of the Chinese restaurant process
$\beta$: hyper-parameter of the multinomial modeling of bags of words
$P$: dimensionality of the sequential embedding vector
$\boldsymbol{\phi}_k$: parameter of the multinomial distribution for the $k$-th cluster
Table 1: Notations

We first introduce the likelihood of a document $d_i$ under cluster $k$:

$$p(d_i \mid \theta_k) = \text{Mult}(\mathbf{w}_i \mid \boldsymbol{\phi}_k)\,\mathcal{N}(\mathbf{e}_i \mid \boldsymbol{\mu}_k^{e}, \boldsymbol{\Sigma}_k^{e})\,\mathcal{N}(\mathbf{s}_i \mid \boldsymbol{\mu}_k^{s}, \boldsymbol{\Sigma}_k^{s}) \quad (1)$$

where $\theta_k = \{\boldsymbol{\phi}_k, \boldsymbol{\mu}_k^{e}, \boldsymbol{\Sigma}_k^{e}, \boldsymbol{\mu}_k^{s}, \boldsymbol{\Sigma}_k^{s}\}$, with $\boldsymbol{\phi}_k$ the multinomial parameter of cluster $k$ and $(\boldsymbol{\mu}_k^{e}, \boldsymbol{\Sigma}_k^{e})$, $(\boldsymbol{\mu}_k^{s}, \boldsymbol{\Sigma}_k^{s})$ the Normal parameters of its embedding components. $\mathbf{e}_i$ is the word embedding and $\mathbf{s}_i$ is the encoded sequential vector of $d_i$. The multinomial component captures the distribution of bags of words; the Normal components over $\mathbf{e}_i$ and $\mathbf{s}_i$ measure similarities of word and sequential embeddings. This model is general enough to describe the characteristics of any text and specific enough to capture the key information of each document, including word embeddings and sequential embeddings.
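For concreteness, a minimal sketch (not the authors' code; variable names mirror the notation above and are otherwise assumptions) of evaluating the likelihood (1) for one document under a given set of cluster parameters:

```python
import numpy as np
from scipy.stats import multinomial, multivariate_normal

def doc_likelihood(word_counts, e_i, s_i, phi_k, mu_e, cov_e, mu_s, cov_s):
    """Likelihood of document d_i under cluster k as in (1): a multinomial
    over the word counts times two Normals over the word embedding e_i
    and the sequential embedding s_i."""
    p_words = multinomial.pmf(word_counts, n=word_counts.sum(), p=phi_k)
    p_word_emb = multivariate_normal.pdf(e_i, mean=mu_e, cov=cov_e)
    p_seq_emb = multivariate_normal.pdf(s_i, mean=mu_s, cov=cov_s)
    return p_words * p_word_emb * p_seq_emb

# toy example: 3-word vocabulary, 2-dimensional embeddings
counts = np.array([2, 1, 0])
print(doc_likelihood(counts, np.zeros(2), np.zeros(2),
                     np.array([0.5, 0.3, 0.2]),
                     np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)))
```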

The prior is chosen to be conjugate to the likelihood so that the cluster parameters can be integrated out during inference. As the Dirichlet distribution is the conjugate prior of the multinomial distribution and the Normal-inverse-Wishart (NiW) distribution is the conjugate prior of the Normal distribution, we use the product of a Dirichlet distribution and two NiW distributions as the conjugate prior:

$$p(\theta_k \mid \beta, \boldsymbol{\xi}^{e}, \boldsymbol{\xi}^{s}) = \text{Diri}(\boldsymbol{\phi}_k \mid \beta)\,\text{NiW}(\boldsymbol{\mu}_k^{e}, \boldsymbol{\Sigma}_k^{e} \mid \boldsymbol{\xi}^{e})\,\text{NiW}(\boldsymbol{\mu}_k^{s}, \boldsymbol{\Sigma}_k^{s} \mid \boldsymbol{\xi}^{s}) \quad (2)$$

where Diri denotes the Dirichlet distribution and NiW denotes the Normal-inverse-Wishart distribution. $\boldsymbol{\xi}^{s}$ denotes the hyper-parameters for the encoder-decoder component and $\boldsymbol{\xi}^{e}$ denotes the hyper-parameters for the CBOW word embedding component.

4 Inference via Collapsed Gibbs Sampling

We adopt collapsed Gibbs sampling for inference due to its efficiency. It reduces the dimensionality of the sampling space by integrating out cluster parameters, which leads to faster convergence.

The cluster assignment $z_i$ for document $d_i$ is sampled from the posterior distribution over clusters, which can be written as the product of a cluster prior and a document likelihood:

$$p(z_i = k \mid \mathbf{z}_{\neg i}, \mathcal{D}) \propto p(z_i = k \mid \mathbf{z}_{\neg i})\, p(d_i \mid D_{k,\neg i}) \quad (3)$$

where $\mathcal{D}$ denotes the whole corpus.

Based on the Chinese restaurant process representation of DPMM, we have

$$p(z_i = k \mid \mathbf{z}_{\neg i}) = \begin{cases} \dfrac{m_{k,\neg i}}{N - 1 + \alpha}, & \text{for an existing cluster } k,\\[4pt] \dfrac{\alpha}{N - 1 + \alpha}, & \text{for a new cluster,} \end{cases} \quad (4)$$

where $m_{k,\neg i}$ is the number of documents in cluster $k$ excluding $d_i$, and $N - 1$ is the total number of documents in the corpus excluding the current document $d_i$.
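A small sketch of the prior (4) (function and variable names are assumptions):

```python
import numpy as np

def crp_prior(cluster_sizes, alpha):
    """Prior probability of assigning the current document to each existing
    cluster or to a new cluster, as in (4). `cluster_sizes` holds the number
    of documents per cluster with the current document removed."""
    m = np.asarray(cluster_sizes, dtype=float)
    denom = m.sum() + alpha            # (N - 1) + alpha
    p_existing = m / denom             # proportional to cluster size
    p_new = alpha / denom              # probability of opening a new cluster
    return np.append(p_existing, p_new)

print(crp_prior([5, 3, 2], alpha=1.0))  # sums to 1
```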

Given the number of variables introduced in the model, direct sampling from the joint distribution is not practical. We therefore assume conditional independence among the three views of a document, which allows the second term in (3) to factorize as

$$p(d_i \mid D_{k,\neg i}) = p(\mathbf{w}_i \mid D_{k,\neg i})\, p(\mathbf{s}_i \mid D_{k,\neg i})\, p(\mathbf{e}_i \mid D_{k,\neg i}). \quad (5)$$

The three factors $p(\mathbf{w}_i \mid D_{k,\neg i})$, $p(\mathbf{s}_i \mid D_{k,\neg i})$ and $p(\mathbf{e}_i \mid D_{k,\neg i})$ are derived below. For the bag-of-words factor, the cluster parameter $\boldsymbol{\phi}_k$ is integrated out:

$$p(\mathbf{w}_i \mid D_{k,\neg i}) = \int p(\mathbf{w}_i \mid \boldsymbol{\phi}_k)\, p(\boldsymbol{\phi}_k \mid D_{k,\neg i}, \beta)\, d\boldsymbol{\phi}_k \quad (6)$$

where the first term in the above integral is

$$p(\mathbf{w}_i \mid \boldsymbol{\phi}_k) = \prod_{w \in d_i} \phi_{k,w}^{N_{d_i}^{w}}, \quad (7)$$

in which $\phi_{k,w}$ is the probability of word $w$ bursting in cluster $k$ and $N_{d_i}^{w}$ is the count of word $w$ in document $d_i$. The second term in (6) is the posterior of $\boldsymbol{\phi}_k$ given the other documents in cluster $k$:

$$p(\boldsymbol{\phi}_k \mid D_{k,\neg i}, \beta) \propto p(D_{k,\neg i} \mid \boldsymbol{\phi}_k)\, p(\boldsymbol{\phi}_k \mid \beta). \quad (8)$$

By defining $\Delta(\mathbf{x}) = \prod_{w=1}^{V}\Gamma(x_w) \big/ \Gamma\!\big(\sum_{w=1}^{V} x_w\big)$ similar to [32], we have

$$p(\boldsymbol{\phi}_k \mid D_{k,\neg i}, \beta) = \frac{1}{\Delta(\mathbf{N}_{k,\neg i} + \beta)} \prod_{w=1}^{V} \phi_{k,w}^{N_{k,\neg i}^{w} + \beta - 1}. \quad (9)$$

Based on (7) and (9), (6) becomes

$$p(\mathbf{w}_i \mid D_{k,\neg i}) = \frac{\prod_{w \in d_i} \prod_{j=1}^{N_{d_i}^{w}} \big(N_{k,\neg i}^{w} + \beta + j - 1\big)}{\prod_{j=1}^{N_{d_i}} \big(N_{k,\neg i} + V\beta + j - 1\big)}. \quad (10)$$
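In practice (10) is best evaluated in log space to avoid underflow for long documents; a sketch (assumed data structures, not the authors' code):

```python
import numpy as np

def log_multinomial_predictive(doc_word_counts, cluster_word_counts,
                               cluster_total, beta, vocab_size):
    """log p(w_i | D_{k,-i}) as in (10): one factor per word occurrence in
    the numerator, one factor per word position in the denominator."""
    log_p = 0.0
    n_doc = 0
    for w, n_w in doc_word_counts.items():        # word w occurs n_w times in d_i
        n_kw = cluster_word_counts.get(w, 0)      # occurrences of w in cluster k
        for j in range(1, n_w + 1):
            log_p += np.log(n_kw + beta + j - 1)
        n_doc += n_w
    for j in range(1, n_doc + 1):                 # all word positions in d_i
        log_p -= np.log(cluster_total + vocab_size * beta + j - 1)
    return log_p

print(log_multinomial_predictive({"cat": 2, "dog": 1}, {"cat": 10, "dog": 4},
                                 cluster_total=30, beta=0.1, vocab_size=1000))
```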

As we can see from (10), the high-dimensionality challenge of text clustering is naturally circumvented by multiplying one dimension of the vector space at a time. The embedding factors $p(\mathbf{s}_i \mid D_{k,\neg i})$ and $p(\mathbf{e}_i \mid D_{k,\neg i})$ in (5) are derived based on properties of the NiW distribution. For the sequential embedding, the posterior NiW parameters of cluster $k$ excluding $d_i$ are

$$\boldsymbol{\mu}_{k,\neg i} = \frac{\kappa_0 \boldsymbol{\mu}_0 + m_{k,\neg i}\,\bar{\mathbf{s}}_{k,\neg i}}{\kappa_0 + m_{k,\neg i}}, \quad \kappa_{k,\neg i} = \kappa_0 + m_{k,\neg i}, \quad \nu_{k,\neg i} = \nu_0 + m_{k,\neg i},$$
$$\boldsymbol{\Lambda}_{k,\neg i} = \boldsymbol{\Lambda}_0 + \mathbf{S}_{k,\neg i} + \frac{\kappa_0\, m_{k,\neg i}}{\kappa_0 + m_{k,\neg i}} (\bar{\mathbf{s}}_{k,\neg i} - \boldsymbol{\mu}_0)(\bar{\mathbf{s}}_{k,\neg i} - \boldsymbol{\mu}_0)^{\top}, \quad (11)$$

where $\bar{\mathbf{s}}_{k,\neg i}$ and $\mathbf{S}_{k,\neg i}$ are the mean and variance (scatter) of the sequential embeddings in cluster $k$ excluding $d_i$, and $\boldsymbol{\xi}^{s}$ includes $(\boldsymbol{\mu}_0, \kappa_0, \nu_0, \boldsymbol{\Lambda}_0)$, the hyper-parameters of the NiW distribution of cluster $k$.

We define the normalization constant of the NiW distribution as

$$Z_{\text{NiW}}(\kappa, \nu, \boldsymbol{\Lambda}) = 2^{\nu P/2}\, \Gamma_P(\nu/2)\, \Big(\frac{2\pi}{\kappa}\Big)^{P/2} |\boldsymbol{\Lambda}|^{-\nu/2}, \quad (12)$$

where $P$ is the dimensionality of the sequential embedding vector and $\Gamma_P$ is the multivariate gamma function. Therefore

$$p(\mathbf{s}_i \mid D_{k,\neg i}) = (2\pi)^{-P/2}\, \frac{Z_{\text{NiW}}(\kappa_{k}, \nu_{k}, \boldsymbol{\Lambda}_{k})}{Z_{\text{NiW}}(\kappa_{k,\neg i}, \nu_{k,\neg i}, \boldsymbol{\Lambda}_{k,\neg i})}, \quad (13)$$

where $(\kappa_k, \nu_k, \boldsymbol{\Lambda}_k)$ are the posterior parameters obtained after also including $\mathbf{s}_i$ in cluster $k$.

As the ratio of normalization constants reduces to a multivariate Student-t density, we have

$$p(\mathbf{s}_i \mid D_{k,\neg i}) = t_{\nu_{k,\neg i} - P + 1}\Big(\mathbf{s}_i \,\Big|\, \boldsymbol{\mu}_{k,\neg i},\ \frac{\boldsymbol{\Lambda}_{k,\neg i}(\kappa_{k,\neg i} + 1)}{\kappa_{k,\neg i}(\nu_{k,\neg i} - P + 1)}\Big). \quad (14)$$

The derivation of $p(\mathbf{e}_i \mid D_{k,\neg i})$ is analogous to that of $p(\mathbf{s}_i \mid D_{k,\neg i})$, as the two components follow the same form of distribution; thus,

$$p(\mathbf{e}_i \mid D_{k,\neg i}) = t_{\nu'_{k,\neg i} - P' + 1}\Big(\mathbf{e}_i \,\Big|\, \boldsymbol{\mu}'_{k,\neg i},\ \frac{\boldsymbol{\Lambda}'_{k,\neg i}(\kappa'_{k,\neg i} + 1)}{\kappa'_{k,\neg i}(\nu'_{k,\neg i} - P' + 1)}\Big), \quad (15)$$

where the primed quantities are the corresponding posterior NiW parameters and dimensionality of the word embedding component.
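A sketch of evaluating the posterior predictive (14) with SciPy's multivariate Student-t (parameter names follow the NiW updates above and are otherwise assumptions):

```python
import numpy as np
from scipy.stats import multivariate_t

def embedding_predictive(x, mu_k, kappa_k, nu_k, Lambda_k):
    """p(x | D_{k,-i}) for an embedding x under a Normal likelihood with a
    NiW prior whose posterior parameters are (mu_k, kappa_k, nu_k, Lambda_k):
    a multivariate Student-t with nu_k - P + 1 degrees of freedom."""
    P = len(mu_k)
    df = nu_k - P + 1
    shape = Lambda_k * (kappa_k + 1) / (kappa_k * df)
    return multivariate_t.pdf(x, loc=mu_k, shape=shape, df=df)

print(embedding_predictive(np.zeros(2), mu_k=np.zeros(2),
                           kappa_k=5.0, nu_k=10.0, Lambda_k=np.eye(2)))
```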

The complete inference procedure iterates over the documents, sampling each cluster assignment from (3) with the prior (4) and the factorized likelihood (5), and updating the cluster statistics after each assignment.

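As a rough sketch (not the authors' implementation; the `stats` helper object and its methods are assumptions), one sweep of the collapsed Gibbs sampler can be organized as follows:

```python
import numpy as np

def gibbs_sweep(docs, assignments, stats, alpha, rng):
    """One sweep of collapsed Gibbs sampling: remove each document from its
    cluster, score every existing cluster plus a new one with the prior (4)
    times the factorized likelihood (5), and resample the assignment."""
    for i, doc in enumerate(docs):
        stats.remove(doc, assignments[i])             # exclude d_i from its cluster
        sizes = stats.cluster_sizes()                 # documents per remaining cluster
        log_p = np.log(np.append(sizes, alpha))       # CRP prior incl. a new cluster
        for k in range(len(sizes) + 1):
            log_p[k] += stats.log_likelihood(doc, k)  # (5): words, seq. and word emb.
        p = np.exp(log_p - log_p.max())               # normalize in log space
        p /= p.sum()
        assignments[i] = rng.choice(len(p), p=p)      # sample a new cluster for d_i
        stats.add(doc, assignments[i])                # put d_i back
    return assignments
```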

5 Extraction of Sequential Feature and Synonyms Embedding

In this section, we describe how to extract sequential embeddings with an encoder-decoder component and synonyms embeddings with the CBOW model.

The encoder-decoder component is formed by two LSTM stacks [8]: one maps the sequential input to a fixed-length vector, and the other decodes that vector into a sequential output. To learn embeddings, we set the input and output sequences to be identical. An illustration of the encoder-decoder mechanism is shown in Fig. 2(a). The last output of the encoder LSTM stack contains information about the whole phrase; in machine translation, this representation has been shown to be rich enough to decode the original phrase into another language [20].

Figure 2: (a) The encoder-decoder component. It is formed by two LSTM stacks: one maps a sequential input to a fixed-length vector, and the other decodes the vector into a sequential output. (b) Word embeddings of the Google News Title Set. Words describing the same topic have similar embeddings and are clustered together.

Current state-of-the-art text clustering methods adopt one-hot encoding for word representation, which neglects the semantic relationships between similar words. Recently, researchers have shown that multiple degrees of similarity among words can be revealed with word embedding techniques [23]. Utilizing such embeddings allows us to cluster documents based on the meaning of words rather than the words themselves. As shown in Fig. 2(b), words describing the same topic have similar embeddings and are clustered together. The CBOW model is used to learn word embeddings by predicting each word from its context (weighted nearby surrounding words). The document embedding vector $\mathbf{e}_i$ is the average of the word embeddings in $d_i$. Readers are referred to [22] for details about the CBOW model.
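A small sketch (assumed inputs) of forming the document-level embedding by averaging the CBOW vectors of the words a document contains:

```python
import numpy as np

def document_embedding(tokens, word_vectors, dim=40):
    """Average the CBOW vectors of the words in a document; words without a
    learned vector are skipped, and empty documents get a zero vector."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

word_vectors = {"soccer": np.random.randn(40), "football": np.random.randn(40)}
e_i = document_embedding("soccer football match tonight".split(), word_vectors)
print(e_i.shape)  # (40,)
```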

6 Experiments

In this section, we will demonstrate the effectiveness of our approach through a series of experiments. The detailed experimental settings are as follows:

6.0.1 Datasets

We run experiments on four diverse datasets: 20 News Group (20NG, http://qwone.com/~jason/20Newsgroups/), Tweet Set (http://trec.nist.gov/data/microblog.html), and two datasets from [32]: Google News Title Set (T-Set) and Google News Snippet Set (S-Set). The 20NG dataset contains long documents with an average length of 138 words, while the documents in T-Set and Tweet Set are short, with an average length of fewer than 10 words. Phrase structures are sparse in T-Set but rich in 20NG and S-Set; the Tweet Set contains moderate phrase structure.

6.0.2 Baselines

We compare SiDPMM against two classic clustering methods, K-means and latent Dirichlet allocation (LDA), and two recent methods, GSDMM and GSDPMM, which represent the state of the art in nonparametric Bayesian text clustering.

6.0.3 Metrics

We use normalized mutual information (NMI) as the main evaluation metric in our experiments, since it is widely used in this field. NMI scores range from 0 to 1: a perfect labeling scores 1, while random assignments tend to score close to 0.
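NMI can be computed directly from predicted and ground-truth labels, e.g. with scikit-learn:

```python
from sklearn.metrics import normalized_mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [1, 1, 0, 0, 2, 2]      # same partition, permuted cluster ids
print(normalized_mutual_info_score(true_labels, pred_labels))    # 1.0

shuffled = [0, 1, 2, 0, 1, 2]         # a poorly matched partition scores lower
print(normalized_mutual_info_score(true_labels, shuffled))
```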

Dataset   K     SiDPMM       SiDPMM-sf    SiDPMM-we    K-means      LDA          GSDMM        GSDPMM
20NG      10    .689±.006    .686±.005    .680±.006    .235±.008    .585±.013    .613±.007    .667±.004
          20    .689±.006    .686±.005    .680±.006    .321±.006    .602±.012    .642±.004    .667±.004
          30    .689±.006    .686±.005    .680±.006    .336±.005    .611±.012    .649±.005    .667±.004
          50    .689±.006    .686±.005    .680±.006    .348±.006    .617±.013    .656±.002    .667±.004
T-Set     100   .878±.003    .872±.003    .877±.005    .687±.005    .769±.012    .830±.004    .873±.002
          150   .878±.003    .872±.003    .877±.005    .721±.009    .784±.015    .852±.009    .873±.002
          152   .878±.003    .872±.003    .877±.005    .720±.007    .786±.014    .853±.009    .873±.002
          200   .878±.003    .872±.003    .877±.005    .730±.008    .806±.013    .868±.006    .873±.002
S-Set     100   .916±.004    .910±.005    .902±.003    .739±.006    .848±.005    .854±.004    .891±.004
          150   .916±.004    .910±.005    .902±.003    .756±.006    .850±.006    .867±.008    .891±.004
          152   .916±.004    .910±.005    .902±.003    .757±.007    .852±.005    .867±.009    .891±.004
          200   .916±.004    .910±.005    .902±.003    .768±.007    .862±.004    .885±.005    .891±.004
Tweet     50    .894±.007    .887±.006    .884±.005    .696±.008    .775±.012    .844±.006    .875±.005
          90    .894±.007    .887±.006    .884±.005    .725±.007    .797±.011    .862±.008    .875±.005
          110   .894±.007    .887±.006    .884±.005    .732±.006    .806±.010    .867±.006    .875±.005
          150   .894±.007    .887±.006    .884±.005    .742±.006    .811±.012    .871±.004    .875±.005
Table 2: NMI scores (mean±std over 20 independent runs) for various dataset-parameter settings. K is the prior number of clusters for K-means, LDA and GSDMM, set to four different values including the ground truth for each dataset; K is not used by SiDPMM and GSDPMM. SiDPMM-sf denotes SiDPMM with only sequential features; SiDPMM-we denotes SiDPMM with only word embeddings.

6.0.4 Encoder-decoder component

We truncate the character sequence length to 48 for the Tweet Set and the Google News datasets and to 240 for the 20NG dataset; documents shorter than this length are zero-padded. The encoder-decoder model is trained for 10 iterations. The length of the hidden vectors is set to 40, and the length of the input vector is 67 (the number of distinct characters). Weights in the LSTM stacks are uniformly initialized to 0.01. The Adam optimizer [17] is used to train the network with a learning rate of 0.01.
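A minimal sketch (PyTorch, not the authors' implementation) of a character-level LSTM autoencoder consistent with the configuration above: 67-dimensional one-hot inputs, hidden size 40, and the final encoder state used as the sequential embedding:

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Character-level LSTM encoder-decoder; the last encoder hidden state
    serves as the fixed-length sequential embedding of a document."""
    def __init__(self, n_chars=67, hidden=40):
        super().__init__()
        self.encoder = nn.LSTM(n_chars, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, x):                    # x: (batch, seq_len, n_chars) one-hot
        _, (h, c) = self.encoder(x)
        emb = h[-1]                          # (batch, hidden) sequential embedding
        # feed the embedding at every step so the decoder reconstructs the input
        dec_in = emb.unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.out(dec_out), emb

model = SeqAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
x = torch.zeros(8, 48, 67)                   # a zero-padded toy batch
logits, emb = model(x)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), x.argmax(dim=2))
loss.backward()
opt.step()
print(emb.shape)                             # torch.Size([8, 40])
```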

6.0.5 Word embedding component

The vocabulary size is set to 100,000, which is enough to accommodate most of the words present in the datasets. We set the embedding vector length to 40. To facilitate training on small datasets such as the Tweet Set, we augment each dataset with a well-known large-scale text corpus (http://mattmahoney.net/dc/text8.zip) during training. The window size is set to 1, meaning only the immediate neighbors of the target word are used as its context. We apply stochastic gradient descent for optimization with a total of 100,000 descent steps.
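A sketch of the word embedding training with gensim (an assumed library choice, not the paper's implementation), matching the stated configuration of 40-dimensional CBOW vectors and a window size of 1:

```python
from gensim.models import Word2Vec

# tokenized documents; here a toy corpus, whereas the paper augments small
# datasets with a large external text corpus before training
sentences = [["people", "eating", "vegetables"],
             ["vegetables", "eating", "people"],
             ["soccer", "match", "tonight"]]

model = Word2Vec(sentences,
                 vector_size=40,           # embedding length used in the paper
                 window=1,                 # only immediately adjacent context words
                 sg=0,                     # CBOW (sg=1 would be skip-gram)
                 min_count=1,
                 max_final_vocab=100_000)  # cap the vocabulary size

print(model.wv["soccer"].shape)            # (40,)
```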

6.0.6 Priors

The hyper-parameter $\alpha$ of the Dirichlet process is set based on $N$, the number of documents in the dataset. Fixed values are used for the hyper-parameter $\beta$ of the multinomial bag-of-words component and for the parameters of the prior NiW distributions of the word and sequential embeddings.

6.1 Empirical Results

Table 2 reports the mean and standard deviation of the NMI scores across the various settings. We observe that SiDPMM outperforms K-means, LDA and GSDMM in all settings by significant margins. GSDPMM is comparable to SiDPMM on T-Set, while SiDPMM performs better on the other three datasets; we note that the average document length of T-Set is short and phrase structures are scarce in its documents. To unveil the influence of each component on model performance, we also include two ablations in the comparison: SiDPMM with only sequential features (SiDPMM-sf) and SiDPMM with only word embeddings (SiDPMM-we). We find that the contribution of the sequential embedding is significant on 20NG and S-Set and moderate on the Tweet Set.

(a) 20NG
(b) Tweet-Set
(c) S-Set
(d) T-Set
Figure 3: Number of clusters with size above a given threshold found in each iteration by SiDPMM and GSDPMM; clusters smaller than the threshold are not counted. Plots (a)-(d) correspond to 20NG, Tweet-Set, S-Set and T-Set respectively.

SiDPMM and GSDPMM automatically determine the number of clusters. Figure 4 shows that the numbers of clusters inferred by SiDPMM are much more accurate than those from GSDPMM across all datasets. We observe that GSDPMM tends to create more clusters than SiDPMM; as illustrated in Fig. 3, many of the clusters created by GSDPMM are quite small, while in contrast SiDPMM tends to suppress tiny clusters and is thus more robust to outliers. The sequential and word embedding components in SiDPMM are responsible for this regularization effect on the number of clusters.

The hyper-parameter $\alpha$ of the Dirichlet process determines the prior probability of creating a new cluster (see Eq. (4)). We explore the influence of different $\alpha$ values on our model. Fig. 5 shows that the number of clusters typically grows with $\alpha$, as observed for the Tweet Set, T-Set and S-Set, but not for the 20NG dataset. This reveals the relative strength of the prior (compared to the likelihood) in determining the posterior cluster distribution. The documents in 20NG have a large average length (137.5 words per document); in the sampling process their likelihood dominates the posterior distribution, so the small differences caused by different $\alpha$ values in the prior are negligible. For documents with a small average length, the differences in likelihood are less pronounced, and the prior has a larger influence on the posterior distribution.

                    Number of Clusters              Diff. Ratio
Dataset   Ground Truth   GSDPMM   SiDPMM      GSDPMM   SiDPMM
20NG      20             52       31          160%     55%
T-Set     152            323      171         113%     13%
S-Set     152            246      126         62%      17%
Tweet     110            161      99          46%      10%
Figure 4: Inferred number of clusters by SiDPMM and GSDPMM. Other baseline methods are not included because they require a pre-specified number of clusters.
Figure 5: Number of clusters found by SiDPMM with different $\alpha$ values, revealing the relative strength of the prior (compared to the likelihood) in determining the posterior distribution.

7 Conclusion

In this paper, we propose a nonparametric Bayesian text clustering method (SiDPMM) that models documents jointly with bags of words, word embeddings and sequential features. The approach is motivated by the observation that sequential information plays a key role in the interpretation of phrases and that word embeddings are effective for measuring similarity between synonyms. The sequential features are extracted with an encoder-decoder component and the word embeddings with the CBOW model. A detailed collapsed Gibbs sampling algorithm is derived for posterior inference. Experimental results show that our approach outperforms current state-of-the-art methods and infers the number of clusters more accurately, thanks to its desirable regularization effect on tiny scattered clusters.

References

  • [1] Berkhin, P.: A Survey of Clustering Data Mining Techniques, pp. 25–71. Springer Berlin Heidelberg, Berlin, Heidelberg (2006)
  • [2] Blei, D.M., Jordan, M.I.: Variational inference for dirichlet process mixtures. Bayesian Anal. 1(1), 121–143 (03 2006), https://doi.org/10.1214/06-BA104
  • [3] Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (Mar 2003)
  • [4] Cai, D., He, X., Han, J.: Srda: An efficient algorithm for large-scale discriminant analysis. IEEE Transactions on Knowledge and Data Engineering 20(1), 1–12 (Jan 2008)
  • [5] Cha, M., Gwon, Y., Kung, H.T.: Language modeling by clustering with word embeddings for text readability assessment. In: CIKM’17. pp. 2003–2006. ACM, New York, NY, USA (2017), http://doi.acm.org/10.1145/3132847.3133104
  • [6] Cho, K., et al.: Learning phrase representations using rnn encoder–decoder for statistical machine translation. In: EMNLP’14. pp. 1724–1734. Association for Computational Linguistics, Doha, Qatar (Oct 2014), http://www.aclweb.org/anthology/D14-1179
  • [7] Duan, T., Pinto, J.P., Xie, X.: Parallel clustering of single cell transcriptomic data with split-merge sampling on dirichlet process mixtures. Bioinformatics p. bty702 (2018), http://dx.doi.org/10.1093/bioinformatics/bty702
  • [8] Duan, T., Srihari, S.N.: Layerwise interweaving convolutional lstm. In: Canadian Conference on AI (2017)
  • [9] Gomez, J.C., Moens, M.F.: Pca document reconstruction for email classification. Computational Statistics and Data Analysis 56(3), 741 – 751 (2012)
  • [10] Gu, Y., Chen, S., Marsic, I.: Deep multimodal learning for emotion recognition in spoken language. CoRR abs/1802.08332 (2018), http://arxiv.org/abs/1802.08332
  • [11] Gu, Y., Li, X., Chen, S., Zhang, J., Marsic, I.: Speech intention classification with multimodal deep learning. In: Canadian Conference on AI (2017)
  • [12] Xun, G., Li, Y., Zhao, W.X., Gao, J., Zhang, A.: A correlated topic model using word embeddings. In: IJCAI-17. pp. 4207–4213 (2017)
  • [13] Hori, C., Hori, T., Lee, T., Sumi, K., Hershey, J.R., Marks, T.K.: Attention-based multimodal fusion for video description. CoRR abs/1701.03126 (2017)
  • [14] Hotho, A., Staab, S., Maedche, A.: Ontology-based text clustering. In: In Proceedings of the IJCAI2001 Workshop Text Learning: Beyond Supervision (2001)
  • [15] Huang, R., Yu, G., Wang, Z.: Dirichlet process mixture model for document clustering with feature partition. IEEE Trans. on Knowl. and Data Eng. 25(8), 1748–1759 (Aug 2013)
  • [16] Ji, G., Hughes, M.C., Sudderth, E.B.: From patches to images: A nonparametric generative model. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017. pp. 1675–1683 (2017), http://proceedings.mlr.press/v70/ji17a.html
  • [17] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014), http://arxiv.org/abs/1412.6980
  • [18] Li, Y., Xiao, H., Qin, Z., Miao, C., Su, L., Gao, J., Ren, K., Ding, B.: Towards differentially private truth discovery for crowd sensing systems. CoRR abs/1810.04760 (2018)
  • [19] Liu, M., Chen, L., Liu, B., Wang, X.: Vrca: A clustering algorithm for massive amount of texts. In: IJCAI’15. pp. 2355–2361. AAAI Press (2015), http://dl.acm.org/citation.cfm?id=2832415.2832576
  • [20] Luong, M., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. CoRR abs/1508.04025 (2015)
  • [21] Madsen, R.E., Kauchak, D., Elkan, C.: Modeling word burstiness using the dirichlet distribution. In: ICML ’05. pp. 545–552. ACM, New York, NY, USA (2005)
  • [22] Mikolov, T., et al.: Distributed representations of words and phrases and their compositionality. In: Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (eds.) NIPS, pp. 3111–3119. Curran Associates, Inc. (2013)
  • [23] Mikolov, T., Yih, W.t., Zweig, G.: Linguistic regularities in continuous space word representations. In: HLT-NAACL. pp. 746–751 (2013)
  • [24] Neal, R.M.: Markov chain sampling methods for dirichlet process mixture models. Journal of Computational and Graphical Statistics 9(2), 249–265 (2000)
  • [25] Nie, Y.p., Han, Y., Huang, J.m., Jiao, B., Li, A.p.: Attention-based encoder-decoder model for answer selection in question answering. Frontiers of Information Technology & Electronic Engineering 18(4), 535–544 (Apr 2017)
  • [26] Nigam, K., McCallum, A.K., Thrun, S., Mitchell, T.: Text classification from labeled and unlabeled documents using em. Mach. Learn. 39(2-3), 103–134 (May 2000)
  • [27] Rangrej, A., Kulkarni, S., Tendulkar, A.V.: Comparative study of clustering techniques for short text documents. In: Proceedings of the 20th International Conference Companion on World Wide Web. pp. 111–112. WWW ’11, ACM, New York, NY, USA (2011)
  • [28] Shafiei, M.M., Milios, E.E.: Latent dirichlet co-clustering. In: Sixth International Conference on Data Mining (ICDM’06). pp. 542–551 (Dec 2006)
  • [29] Suo, Q., Ma, F., Canino, G., Gao, J., Zhang, A., Gnasso, A., Tradigo, G., Veltri, P.: An attention-based recurrent neural networks framework for health data analysis. In: Proceedings of the 26th Italian Symposium on Advanced Database Systems, Castellaneta Marina (Taranto), Italy, June 24-27, 2018. (2018), http://ceur-ws.org/Vol-2161/paper13.pdf
  • [30] Suo, Q., Ma, F., Canino, G., Gao, J., Zhang, A., Veltri, P., Agostino, G.: A multi-task framework for monitoring health conditions via attention-based recurrent neural networks. AMIA … Annual Symposium proceedings. AMIA Symposium 2017, 1665–1674 (04 2018)
  • [31] Wang, F., Zhang, C., Li, T.: Regularized clustering for documents. In: SIGIR ’07. pp. 95–102. ACM, New York, NY, USA (2007)
  • [32] Yin, J., Wang, J.: A model-based approach for text clustering with outlier detection. In: 2016 IEEE 32nd International Conference on Data Engineering (ICDE). pp. 625–636 (May 2016)
  • [33] Yu, G., Huang, R., Wang, Z.: Document clustering via dirichlet process mixture model with feature selection. In: KDD ’10. pp. 763–772. ACM, New York, NY, USA (2010)
  • [34] Zhang, H., Li, Y., Ma, F., Gao, J., Su, L.: Texttruth: An unsupervised approach to discover trustworthy information from multi-sourced text data. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 2729–2737. KDD ’18, ACM, New York, NY, USA (2018), http://doi.acm.org/10.1145/3219819.3219977