Improving Document Classification with Multi-Sense Embeddings

Vivek Gupta (University of Utah, vgupta@cs.utah.edu)    Ankit Kumar (IIT Kharagpur, ankit.kgpian@gmail.com)    Pegah Nokhiz (University of Utah, pnokhiz@cs.utah.edu)    Harshit Gupta (IIT Guwahati, harshitgupta@iitg.ac.in)    Partha Talukdar (IISc Bangalore, ppt@iisc.ac.in)
Abstract

Efficient representation of text documents is an important building block in many NLP tasks. Research on long text categorization has shown that simple weighted averaging of word vectors for sentence representation often outperforms more sophisticated neural models. The recently proposed Sparse Composite Document Vector (SCDV) [mekala2017scdv] extends this approach from sentences to documents using soft clustering over word vectors. However, SCDV disregards the multi-sense nature of words and suffers from the curse of high dimensionality. In this work, we address these shortcomings and propose SCDV-MS. SCDV-MS utilizes multi-sense word embeddings and learns a lower-dimensional manifold. Through extensive experiments on multiple real-world datasets, we show that SCDV-MS embeddings outperform previous state-of-the-art embeddings on multi-class and multi-label text categorization tasks. Furthermore, SCDV-MS embeddings are more efficient than SCDV in terms of time and space complexity on textual classification tasks. We have released the SCDV-MS source code with the paper (https://github.com/vgupta123/SCDV-MS).


1 Introduction

Distributed word embeddings such as word2vec [mikolov2013linguistic] are effective in capturing the semantic meaning of words by representing them in a lower-dimensional continuous space. A smooth inverse frequency-based word vector averaging technique for sentence embeddings was proposed by [arora2016simple]. However, because the final representation lies in the same space as the word vectors, these methods are only capable of capturing the meaning of a single sentence. Thus, embedding a large text document in a dense, low-dimensional space remains a challenging task.

[mekala2017scdv] attempted to resolve this problem by proposing a clustering-based word vector averaging approach (SCDV) for embedding larger text documents. SCDV embeds each document by cluster-based averaging, thus representing each document in a more expressive space than the original word vectors. The model combines word embedding models with a latent topic model, where the topic space is learned efficiently using a soft clustering technique over embeddings. The final document vectors are also made sparse to reduce time and space complexity in several downstream tasks.

SCDV has several shortcomings. Applying thresholding-based sparsity on the document representations can be unreliable, since it is highly sensitive to the number of documents in the corpus. SCDV does not utilize word contexts to disambiguate word senses for learning sense-aware document representations, and ignoring the multi-sense nature of words during representation leads to cluster ambiguity. SCDV also neglects the negative additive effect of common words such as ‘and’, ‘the’, etc. in the final document representations. Lastly, documents represented by SCDV suffer from the curse of high dimensionality and cannot be utilized for deep learning applications which require low-dimensional continuous representations. To overcome these challenges, we propose a novel document representation technique, SCDV-MS, which mitigates the above shortcomings through the following contributions.

  1. To overcome the problem of cluster ambiguity, SCDV-MS replaces single-sense word vector representations with multi-sense, context-sensitive word representations that disambiguate word senses.

  2. SCDV-MS removes noise in the final representation by applying threshold-based sparsity directly on the fuzzy word-cluster assignments instead of the document representations. Sparser representations result in better performance and lower time and space complexities.

  3. To overcome the noisy negative additive effect of common words such as ‘and’, ‘the’, etc., SCDV-MS learns and uses Doc2VecC-initialized [chen2017efficient] robust word vectors, which zero out common, high-frequency words.

  4. Lastly, we show that the sparse word-topic vectors can be projected onto a non-linear, local-neighborhood-preserving manifold to learn continuous distributed representations much more effectively and efficiently than SCDV, which proves useful for deep learning applications.

Overall, we show that disambiguating the multiple senses of words based on their context words (adjacent words) can lead to better document representations, that sparsity in representations is helpful for effective and efficient lower-dimensional manifold representation learning, and that representation noise at the word level has a significant impact on downstream tasks.

In section 1, we provided a brief introduction to the problem. In section 2, we discuss related work on document representations. In section 3, we describe the proposed algorithm, and in section 4 we discuss the proposed modifications relative to SCDV. We present experiments in section 5, followed by conclusions in section 6.

2 Related Work

For document representation, unweighted averaging of word vectors was proposed by [smolensky1990tensor, mitchell-lapata-2008-vector, mitchell2010composition, mikolov2013distributed]. [pranjal2015weighted] extended this simple averaging model with tf-idf weighting of word vectors to form document vectors. [le2014distributed] proposed paragraph models (PV-DM and PV-DBOW), similar to the word vector models (CBoW and SGNS), by treating each paragraph as a pseudoword. [socher2013recursive] used a parse tree to train a Recursive Neural Network with supervision. In addition, several neural network models, such as sequence models: Recurrent Neural Networks (RNN) [mikolov2010recurrent] and Long Short Term Memory networks (LSTM) [gers2002learning], and a hierarchical model: Convolutional Neural Networks (CNN) [kim2014convolutional, kalchbrenner2014convolutional], were proposed to capture syntax while representing documents. [wieting2015paraphrase] used supervised learning over the Paraphrase Database (PPDB) to learn Paraphrastic Sentence embeddings (PSL). Later, [wieting2015towards] also proposed an optimization of word embeddings based on a neural network and a cosine similarity measure.

Several models, such as WTM [wtm], TWE [liu2015learning], NTSG [liu2015learning], LTSG [ltsg], w2v-LDA [wtvlda], TV+MeanWV [tvMeanWV], Topic2Vec [topic2vec], Gaussian-LDA [gaussianlda], Lda2vec [lda2vec], ETM [dieng2019topic], D-ETM [dieng2019dynamic], and MvTM [mvtm], combine topic modeling [Blei:2003] with word vectors to learn better word and sentence representations. [kiros2015] cast the distributional hypothesis to the sentence level by proposing skip-thought document vectors. Recently, two deep contextual word embeddings, namely ELMo [Peters:2018] and BERT [devlin2018bert], were proposed. These contextual embeddings achieve state-of-the-art performance on multiple NLP tasks since they are very effective at capturing the surrounding context. Interestingly, [li2015multi] studies the effect of using multi-sense embeddings on various NLP tasks. However, our goal is different: we aim to effectively use multi-sense word embeddings to learn better document representations. A hard clustering-based averaging of word vectors was proposed by [vivek, hadi2019vector] to form document vectors. [gupta2019unsupervised] extended the approach with a better partitioning technique and applied it to other natural language tasks. [mekala2017scdv] further improved the state of the art with SCDV by using fuzzy clustering and tf-idf weighted word averaging. Their method outperformed earlier models on several NLP tasks.

3 Proposed Algorithm SCDV-MS

In this section, we describe our proposed algorithm (Algorithm 1) in detail. The algorithm is similar to SCDV [mekala2017scdv] but includes important modifications, and it has three main components, as described below:

Data: Documents D_n, n = 1 ... N
Result: Document vectors dv_{D_n}, n = 1 ... N
/* Word Sense Disambiguation */
1  Use AdaGram for word sense disambiguation;
2  Annotate each word with a sense according to the neighboring context words;
3  Obtain word vectors (wv_i) on the annotated corpus using Doc2VecC;
4  for each word w_i in vocabulary V do
5      obtain idf values, idf(w_i);
6  end for
   /* Word Vector Clustering */
7  Cluster the word vectors wv_i into K fuzzy clusters;
8  For each word w_i and cluster c_k, obtain P(c_k | w_i);
9  SP(c_k | w_i) = make-sparse(P(c_k | w_i));
   /* Word Topic Vectors */
10 for each word w_i in V do
11     for each cluster c_k do
12         wcv_{ik} = wv_i × SP(c_k | w_i);
13     end for
14     wtv_i = idf(w_i) × (wcv_{i1} ⊕ ... ⊕ wcv_{iK});   /* ⊕ is concatenation */
       /* Optional: Manifold Learning */
15     rwtv_i = manifold-proj(wtv_i);
16 end for
   /* SCDV-MS Representation */
17 for each document D_n do
       /* initialize document vectors */
18     dv_{D_n} = 0;
19     for each word w_i in D_n do
20         dv_{D_n} += wtv_i;
21     end for
22 end for
Algorithm 1: SCDV-MS

3.1 Word Sense Disambiguation (Algo 1: Lines 1-6):

We employed the widely used AdaGram [bartunov2016breaking] algorithm to disambiguate the multi-sense words in our corpora. We chose AdaGram because it is a nonparametric Bayesian extension of Skip-gram which automatically learns the number of senses of multi-sense words and their representations (one could also replace AdaGram with [dai2017mixture] or [athiwaratkun2018probabilistic]). We first trained the AdaGram algorithm (https://github.com/sbos/AdaGram.jl) on the training corpora and used the trained model to annotate the words with their corresponding senses in all train and test examples. We then trained the Doc2VecC algorithm on the annotated corpus to obtain the final multi-sense word vectors. Lastly, we obtained the idf values of the vocabulary words, which we use for weighting rare words (Lines 4-6, Algo 1).
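To make the annotation step concrete, the following is a minimal sketch of how a corpus could be tagged with sense ids before Doc2VecC training. It assumes a hypothetical disambiguate(word, context) callback wrapping a trained AdaGram model and an optional multi_sense_words set; neither is part of the released AdaGram package, and the 'word#sense' suffix format is only illustrative.

```python
# Sketch of the sense-annotation step (Algo 1, Lines 1-2).
# `disambiguate(word, context)` is a hypothetical wrapper around a trained
# AdaGram model returning a sense id (0, 1, ...) for `word` given its context.

def annotate_corpus(documents, disambiguate, window=5, multi_sense_words=None):
    """Suffix each multi-sense word with its predicted sense id, e.g. 'subject' -> 'subject#1'."""
    annotated = []
    for doc in documents:
        tokens = doc.split()
        tagged = []
        for i, tok in enumerate(tokens):
            if multi_sense_words is None or tok in multi_sense_words:
                # neighboring context words on both sides of the target word
                context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                tagged.append(f"{tok}#{disambiguate(tok, context)}")
            else:
                tagged.append(tok)
        annotated.append(" ".join(tagged))
    return annotated
```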

3.2 Word Vector Clustering (Algo 1: Lines 7-9)

Similar to the SCDV approach, we clustered the word embeddings using Gaussian Mixture Models (GMM), a soft clustering technique, and obtained the word-cluster assignment probabilities P(c_k | w_i). Additionally, we exploited the fact that GMM assignments have an irrelevant noisy tail and made the cluster probability assignments manually sparse by zeroing most of the values of P(c_k | w_i). Retaining only the top-t maximum probabilities for each word and zeroing the rest results in a sparse word-cluster assignment vector per word. Here, K denotes the total number of clusters and t is the sparsity constant (t < K). One could use a different value of t for each word w_i depending on the values of P(c_k | w_i); however, in our experiments we did not observe a significant performance difference when t is varied across words.

SP(c_k | w_i) = P(c_k | w_i), if k ∈ top-t(P(· | w_i)); SP(c_k | w_i) = 0, otherwise.    (1)

Here, top-t(·) outputs the indices of the top-t maximum cluster assignment probabilities for word w_i.
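A minimal sketch of this clustering and sparsification step (Eq. 1) using scikit-learn's GaussianMixture is shown below; the number of clusters and the sparsity constant t used here are illustrative placeholders rather than the tuned values from our experiments.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sparse_cluster_assignments(word_vectors, n_clusters=60, top_t=3):
    """Fit a GMM over the vocabulary's word vectors and keep only the top-t
    cluster probabilities per word (Eq. 1), zeroing the noisy tail."""
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full")
    gmm.fit(word_vectors)
    probs = gmm.predict_proba(word_vectors)                        # P(c_k | w_i), shape (|V|, K)
    top_idx = np.argpartition(probs, -top_t, axis=1)[:, -top_t:]   # top-t(P(. | w_i))
    sparse_probs = np.zeros_like(probs)                            # SP(c_k | w_i)
    rows = np.arange(probs.shape[0])[:, None]
    sparse_probs[rows, top_idx] = probs[rows, top_idx]
    return sparse_probs
```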

3.3 Word Topic Vectors (Algo 1: Lines 10-22)

Similar to SCDV, for each word w_i we created K different word-cluster vectors wcv_{ik} of d dimensions by weighting the word's embedding with the sparse probability for the k-th cluster, i.e., wcv_{ik} = wv_i × SP(c_k | w_i). Next, we concatenated the K word-cluster vectors into a single (K × d)-dimensional embedding and weighted it with the inverse document frequency (idf) of w_i to obtain the word-topic vector wtv_i. For all words appearing in a given document D_n, we computed the average of the corresponding word-topic vectors to obtain the document vector dv_{D_n}. Furthermore, one can optionally project the word-topic vectors (wtv_i) into lower-dimensional continuous representations, called reduced word-topic vectors (rwtv_i), using manifold learning algorithms, namely Random Projection [achlioptas2003database], PCA [abdi2010principal], and Denoising Autoencoders [vincent2010stacked], and use them instead of wtv_i for document representation. We call this reduction-based representation method R-SCDV-MS. Refer to section 4.4 on manifold learning for more details.

wcv_{ik} = wv_i × SP(c_k | w_i)    (2)
wtv_i = idf(w_i) × (wcv_{i1} ⊕ ... ⊕ wcv_{iK})    (3)
rwtv_i = manifold-proj(wtv_i)    (4)

Here, ⊕ denotes concatenation and manifold-proj is the manifold learning algorithm we utilized. Figures 2 and 3 show the high-level flow of our proposed SCDV-MS embedding.
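Computationally, Eqs. (2)-(3) amount to an outer product between each word vector and its sparse cluster assignments followed by idf scaling, and the document vector is an average of the resulting word-topic vectors. A dense NumPy sketch follows (variable names are ours; a practical implementation would keep wtv sparse, as discussed in section 5.3):

```python
import numpy as np

def build_word_topic_vectors(word_vectors, sparse_probs, idf):
    """wtv_i = idf(w_i) * (wcv_i1 (+) ... (+) wcv_iK), with wcv_ik = wv_i * SP(c_k | w_i).
    word_vectors: (|V|, d), sparse_probs: (|V|, K), idf: (|V|,). Returns (|V|, K*d)."""
    wcv = sparse_probs[:, :, None] * word_vectors[:, None, :]   # (|V|, K, d)
    wtv = wcv.reshape(word_vectors.shape[0], -1)                # concatenate cluster blocks
    return idf[:, None] * wtv

def document_vector(doc_word_ids, wtv):
    """dv_D: average of the word-topic vectors of the words appearing in document D."""
    return wtv[doc_word_ids].mean(axis=0)
```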

4 Discussion on Proposed Modifications

In this section, we describe the modifications applied to the SCDV embeddings in detail.

4.1 Word Representation: Single Sense vs Multi Sense

SCDV-MS uses a multi-sense approach instead of single-sense word embeddings because SCDV does not disambiguate word senses based on the context words used in the documents. SCDV-MS performs automatic word sense disambiguation using multi-sense word embeddings, according to the context determined by the neighboring words, to resolve cluster ambiguity for polysemous words. Table 1 shows examples of multi-sense words along with their fitting contexts and the prominent words of the assigned clusters.

Figure 1: Effect of sense disambiguation on word cluster assignment probabilities

Figure 1 shows the effect of sense disambiguation on fuzzy GMM clustering on the 20NewsGroup dataset. The same word is assigned to different clusters depending on its context, which helps resolve word-cluster ambiguities. For example, without sense disambiguation, the word ‘Subject’ is split between two clusters with non-negligible probability in each. After sense disambiguation, we acquire two embeddings for the word ‘Subject’, i.e., ‘Subject#1’ and ‘Subject#2’, each of which belongs to a single cluster with high probability. So, depending on the context in which ‘Subject’ is used, the algorithm assigns it to a single cluster based on its sense; thus, word-cluster ambiguity is resolved. We observe similar disambiguation effects for other polysemous words in the corpus.

Multi-Sense Word | Sentence (Context Words) | Prominent Cluster Words
Subject | The math subject is a nightmare for many students | physics, chemistry, math, science
Subject | In anxiety, he sent an email without a subject | mail, letter, email, gmail
Break | After promotion, he went to Maldives for spring break | vacation, holiday, trip, spring
Break | Breaking government websites is common for hackers | encryption, cipher, security, privacy
Break | Use break to stop growing recursion loops | if, elseif, endif, loop, continue
Unit | The S.I. unit of distance is meter | calculation, distance, mass, length
Unit | Multimeter shows a unit of 5V | electronics, KWH, digital, signal
Interest | His interest lies in astrophysics | information, enthusiasm, question
Interest | Bank’s interest rates are controlled by RBI | bank, market, finance, investment
Table 1: Examples of multi-sense words along with their context words and the corresponding prominent cluster words

4.2 Thresholding Word Cluster Assignments

In SCDV, the thresholding is applied to the final document vectors. However, applying hard thresholding at an earlier stage, on the word-cluster assignments P(c_k | w_i), removes the heavy-tail noise more effectively and yields more robust representations. Thus, SCDV-MS applies hard thresholding directly on the word-cluster assignments instead of the final document representations. Furthermore, applying sparsity over the |V| × K word-cluster assignment matrix instead of over N document vectors of K × d dimensions is more efficient; here, N is the number of documents, V is the vocabulary, K is the number of clusters, and d is the word vector dimension. Empirically, on 20NewsGroup we observe that the vast majority of entries in the word-cluster assignments are close to zero: for each word, on average, most of the K cluster assignment probabilities P(c_k | w_i) are negligible. Thus, applying thresholding at the word-cluster assignment level is reasonable.

4.3 Doc2VecC vs SGNS Representations

In the SCDV approach, the SGNS word vectors represent common words (mainly stop words) as non-zero vectors. This makes the clusters redundant and generates heavy-tail noise. SCDV-MS addresses this issue by using Doc2VecC [chen2017efficient], which introduces corruption during training so that robust word vector representations are learned and the common words of the corpus are forced toward zero vectors. We observed that using Doc2VecC-trained word vectors results in non-redundant, diverse clusters. Thus, using Doc2VecC-trained word vectors not only improves performance but also reduces the feature size. There is no running-time overhead for Doc2VecC compared to SGNS.

4.4 Low Dimensional Manifold Learning

SCDV [mekala2017scdv] represents documents as high-dimensional sparse vectors. The SCDV approach showed that such vectors are useful for linear classification models. However, many downstream applications (especially those using deep learning models) require a continuous, low-dimensional document representation similar to word vectors. To overcome this issue, SCDV-MS projects the sparse word-topic vectors into a lower-dimensional manifold which preserves the local neighborhood, using simple techniques such as random projection. Furthermore, the manifold learning is applied over the word vocabulary instead of millions of documents, which is more efficient: dimensionality reduction for SCDV-MS is roughly a factor of N/|V| faster than for SCDV, where N is the number of documents and |V| is the size of the vocabulary.
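As an illustration, the random-projection variant of this step can be sketched with scikit-learn's SparseRandomProjection applied to the vocabulary-sized word-topic matrix; the target dimension below is an assumption chosen to match the reduced setting reported later, not a prescribed value.

```python
from sklearn.random_projection import SparseRandomProjection

def reduce_word_topic_vectors(wtv, target_dim=2000, seed=0):
    """Project the (|V|, K*d) sparse word-topic matrix down to `target_dim`
    dimensions; the projection is learned over the vocabulary, not over documents."""
    projector = SparseRandomProjection(n_components=target_dim, random_state=seed)
    return projector.fit_transform(wtv)
```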

Figure 2: Flowchart representing modified computation.
Figure 3: Flowchart representing final document vector computation.

5 Experimental Results

Document embeddings obtained using SCDV-MS can be used as direct features for downstream supervised tasks. We experimented with text classification tasks (see Table 2) to show the effectiveness of our embeddings. We investigated the following questions through our experiments:

Task Dataset #Classes #train / #test
Multi-class 20NewsGroup 20 11K / 8K
Multi-label Reuters-21578 444 13K / 6K
Table 2: Text classification datasets overview.
  1. Does disambiguating word-cluster assignments using multi-sense embeddings improve classification performance?

  2. Does hard thresholding over word-cluster assignments improve performance and reduce space and time complexities?

  3. Is representational noise reduction using Doc2VecC initialization effective?

  4. Can an effective lower-dimensional manifold be learned from the sparse, high-dimensional word topic vectors?

Baselines: We considered the following baselines: Bag-of-Words (BoW) [harris54], Bag of Word Vectors (BoWV) [vivek] (https://bit.ly/2X0XfBH), Sparse Composite Document Vectors (SCDV) [mekala2017scdv] (https://bit.ly/36NxGZh), paragraph vectors [le2014distributed], pmeans [pmeans], ELMo [elmo], Topical Word Embeddings (TWE-1) [AAAI159314], the Neural Tensor Skip-Gram Model (NTSG-1 to NTSG-3) [liu2015learning], tf-idf weighted average word vectors [pranjal2015weighted], weighted Bag of Concepts (weight-BoC) [boc], and BERT [devlin2018bert]. In BoC, we built topic-document vectors by counting the member words in each topic. For BERT, we reported results on the unsupervised pre-trained (pr) model for a fair comparison with our approach, which is also unsupervised. In Doc2VecC [chen2017efficient], averaging and training of the vectors were done jointly with corruption. In SIF [arora2016simple], we used inverse frequency weights while averaging word vectors and finally removed the common components from the average. The results of our proposed embeddings are denoted by SCDV-MS in Tables 3 and 5. We also compared our representation with the topic modeling-based embedding methods described in the related work.

The Experimental Setting: We learned the word embeddings with Skip-Gram Negative Sampling (SGNS) using commonly used settings for the negative sample size, minimum word frequency, and window size. We applied the usual data cleansing, such as stop-word removal, lemmatization, and stemming, for all baselines. In addition, we used simple models, a LinearSVM for multi-class classification and logistic regression in a OneVsRest setting for the multi-label classification tasks, so that we can directly compare our results with the previous approaches, which use the same classifiers. Similar to SCDV, we tuned the hyperparameters with 5-fold cross-validation on the F1 score. We also used the Doc2VecC model [chen2017efficient] to initialize the word embeddings on the annotated corpora for improved performance. To ensure a fair comparison with SCDV, we fixed the same number of clusters and used full covariance for GMM clustering in all experiments, based on our best empirical results with cross-validation. We tuned the hard-threshold sparsity constant with cross-validation to select the best hyperparameter for making the word-cluster assignments sparse. Moreover, we used AdaGram [bartunov2016breaking] for disambiguating the senses of multi-sense words using a neighborhood of context words on both sides. We first ranked the words based on their tf-idf scores and selected a practicable number of top words as candidates for polysemic words; we then selected the true polysemic words by applying AdaGram on the candidates (https://bit.ly/2Jv6wxX). The best parameter settings were used to generate the baseline results, with the dimensionalities and topic counts for the tf-idf weighted word-vector model, the paragraph vector model, TWE/NTSG/LTSG, and SCDV/BoWV following the respective prior work. We report the average over multiple runs; our results were robust across runs, with very small variance.

5.1 Text Classification Results

We evaluated the classifier's performance on multi-class classification using several metrics: accuracy, macro-averaged precision, recall, and macro F1-score. Table 3 shows a comparison with multiple state-of-the-art document representations (the first 7, except BERT/ELMo, are clustering-based; the next 11 are topic-word embedding based; the next 6 are simple averaging or topic modeling methods) on the 20NewsGroup dataset. We also report the class-wise results (micro F1) on 20NewsGroup in Table 4. Furthermore, we evaluated the multi-label classification performance using several metrics: Precision@K, nDCG@K [bhatia2015sparse], coverage error, label ranking average precision score (LRAPS, see https://goo.gl/4GrR3M), and macro F1-score. Table 5 shows the results on the Reuters dataset.
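For reference, a sketch of how these multi-label metrics can be computed is given below: Precision@k is the fraction of relevant labels among the top-k scored labels per document, while coverage error and LRAPS are available in scikit-learn (variable names here are ours):

```python
import numpy as np
from sklearn.metrics import coverage_error, label_ranking_average_precision_score

def precision_at_k(y_true, y_score, k):
    """y_true: binary label matrix (n_docs, n_labels); y_score: predicted scores."""
    top_k = np.argsort(-y_score, axis=1)[:, :k]           # indices of the k highest-scored labels
    hits = np.take_along_axis(y_true, top_k, axis=1)      # 1 where a top-k label is relevant
    return hits.mean()                                    # averaged over labels and documents

# cov_err = coverage_error(y_true, y_score)
# lraps = label_ranking_average_precision_score(y_true, y_score)
```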

Model Accuracy Precision Recall F-measure
SCDV-MS 86.19 86.20 86.18 86.16
R-SCDV-MS 84.9 84.9 84.9 84.9
BERT (pr)[devlin2018bert] 84.9 84.9 85.0 85.0
SCDV [mekala2017scdv] 84.6 84.6 84.5 84.6
RandBin 83.9 83.99 83.9 83.76
BoWV [vivek] 81.6 81.1 81.1 80.9
pmeans [pmeans] 81.9 81.9 81.9 81.5
Doc2VecC [chen2017efficient] 84.0 84.1 84.1 84.0
BoE [boe] 83.1 83.1 83.1 83.1
NTSG-2 [liu2015learning] 82.5 83.7 82.8 82.4
LTSG [ltsg] 82.8 82.4 81.8 81.8
WTM [wtm] 80.9 80.3 80.3 80.0
ELMo [elmo] 74.1 74.0 74.1 73.9
w2v-LDA [wtvlda] 77.7 77.4 77.2 76.9
TV+MeanWV [tvMeanWV] 72.2 71.8 71.5 71.6
MvTM [mvtm] 72.2 71.8 71.5 71.6
TWE-1 [AAAI159314] 81.5 81.2 80.6 80.6
lda2Vec [lda2vec] 81.3 81.4 80.4 80.5
lda [Blei:2003] 72.2 70.8 70.7 70.0
weight-AvgVec [pranjal2015weighted] 81.9 81.7 81.9 81.7
BoW [pranjal2015weighted] 79.7 79.5 79.0 79.0
weight-BOC [pranjal2015weighted] 71.8 71.3 71.8 71.4
PV-DBoW [le2014distributed] 75.4 74.9 74.3 74.3
PV-DM [le2014distributed] 72.4 72.1 71.5 71.5
Table 3: Performance on multi-class classification. Values in bold show the best performance using the SCDV-MS embeddings.
Class Name SCDV SCDV-MS R-SCDV-MS
alt.atheism 80.14 81.35 80.39
comp.graphics 78.99 76.84 76.95
comp.os.ms-windows.misc 75.65 77.65 78.28
comp.sys.ibm.pc.hardware 72.08 73.43 68.38
comp.sys.mac.hardware 82.15 86.82 80.16
comp.windows.x 81.8 82.97 83.27
misc.forsale 82.8 85.13 84.99
rec.autos 89.06 92.53 91.77
rec.motorcycles 94.27 96.11 94.27
rec.sport.baseball 93.57 96.47 93.68
rec.sport.hockey 97.27 96.78 96.41
sci.crypt 93.1 92.82 93.5
sci.electronics 77.38 77.45 74.25
sci.med 88.58 92.30 91.57
sci.space 90.33 91.40 90.71
soc.religion.christian 89.56 89.97 89.76
talk.politics.guns 80.69 84.18 83.05
talk.politics.mideast 95.96 95.95 96.1
talk.politics.misc 69.33 73.49 73.67
talk.religion.misc 65.53 65.54 60.48
Table 4: Class-wise F1-Score on the 20newsgroup dataset with different document representations.
Model Prec@1 Prec@5 nDCG@5 Cover. Error LRAPS F1-Score
SCDV-MS 95.06 37.56 50.20 5.87 94.21 82.71
R-SCDV-MS 93.56 37.00 49.47 6.74 92.96 81.94
BERT (pr) 93.8 37 49.6 6.3 93.1 81.9
SCDV 94.00 37.05 49.6 6.65 93.34 81.77
Doc2VecC 93.45 36.86 49.28 6.83 92.66 81.29
pmeans 93.29 36.65 48.95 7.66 91.72 77.81
BoWV 92.90 36.14 48.55 8.16 91.46 79.16
TWE-1 90.91 35.49 47.54 8.16 91.46 79.16
PV-DM 87.54 33.24 44.21 13.15 86.21 70.24
PV-DBoW 88.78 34.51 46.42 11.28 87.43 73.68
tfidf AvgVec 89.33 35.04 46.83 9.42 87.90 71.97
Table 5: Performance on various metrics for multi-label classification on the Reuters dataset. Values in bold show the best performance using the SCDV-MS algorithm.

Datasets: We evaluated our approach by running multi-class classification experiments on the 20NewsGroup dataset (http://qwone.com/~jason/20Newsgroups/) and multi-label classification experiments on the Reuters-21578 dataset (https://goo.gl/NrOfu). For more details on dataset statistics, refer to Table 2. We used a standard script for dataset preprocessing (https://bit.ly/2PXDdXj).

Ablation (w/o) 20NewsGroup Reuters
Sparsity 85.28 ± 0.002 82.17 ± 0.001
Doc2VecC 85.41 ± 0.001 82.08 ± 0.002
MultiSense 85.16 ± 0.001 82.43 ± 0.001
All 84.61 ± 0.004 81.77 ± 0.003
None 86.16 ± 0.002 82.71 ± 0.002
Table 6: Ablation study reporting F1 scores. In a ± σ, σ is the variance across several runs.
Dimension Random Projection PCA (SubSpace) Autoencoder
200 78.97 80.62 81.44
500 82.19 83.14 83.83
1000 83.75 83.80 84.31
2000 84.47 84.34 84.80
3000 84.94 84.86 84.90
Table 7: Performance in terms of accuracy with various dimensionality reduction methods on the 20NewsGroup dataset. Similar results were acquired for precision, recall, and F1 score.
Figure 4: Percentage loss in F1-Score (%RL) after Random Projection-based dimensionality reduction on 20NewsGroup.
Figure 5: Percentage loss in F1-Score (%RL) after Autoencoder-based dimensionality reduction on 20NewsGroup.
Figure 6: Percentage loss in F1-Score (%RL) after PCA (Subspace)-based dimensionality reduction on 20NewsGroup.
Embedding Dimen Accuracy Precision Recall F1-Score
Word2Vec 200 82.07 ± 0.003 82.22 ± 0.005 81.9 ± 0.004 81.9 ± 0.005
Reduced Word Topic Vector 200 82.13 ± 0.002 82.36 ± 0.003 82.06 ± 0.003 82.05 ± 0.003
Word2Vec 2000 82.31 ± 0.004 82.87 ± 0.005 82.31 ± 0.004 82.38 ± 0.005
Reduced Word Topic Vector 2000 82.85 ± 0.001 83.18 ± 0.004 82.64 ± 0.002 82.68 ± 0.003
Table 8: Performance of a Convolutional Neural Network (CNN) for multi-class text classification on 20NewsGroup with the original word embeddings and the reduced word topic vectors at matching dimensions. In a ± σ, σ is the variance across several runs.
Method Vocab Dim Sparsity wtv (%) Sparsity dv (%) Cluster (sec) Feature Predict Training (min) Model Size (KB) Space (MB)
SCDV 15591 12000 1 81 242 2.56 119 82 1900 748
SCDV-MS 25466 12000 98 74 569 0.06 111 79 1900 71
R-SCDV-MS 25466 2000 0 0 576 0.86 14 66 333 203
Table 9: Time and space complexity analysis of the embedding methods. Bold values represent the best results.
Red-Dim Subspace Rank Reduction Criteria
rank reduce to else the original rank
rank reduce to else the original rank
rank reduce to else the original rank
rank reduce to else the original rank
Table 10: PCA-based subspace rank-reduction criteria.
Method Dimen Prec@1 Prec@5 nDCG@5 Cover. Error LRAPS F1-Score
500 92.14 36.53 48.74 8.02 91.34 79.96
Auto 1000 92.95 36.82 49.17 7.14 92.32 81.05
Encoder 2000 93.39 36.95 49.39 6.87 92.75 81.65
3000 93.56 37 49.47 6.74 92.96 81.94
500 91.98 36.26 48.47 7.41 91 79.03
Random 1000 92.59 36.62 48.91 6.98 91.84 80.39
Projection 2000 93.39 36.83 49.26 6.95 92.59 81.12
3000 93.32 36.91 49.33 6.78 92.75 81.39
500 90.73 36.03 48.06 8.40 90.11 78.06
PCA 1000 92.15 36.55 48.78 7.44 91.48 80
(SubSpace) 2000 92.95 36.87 49.22 6.86 92.30 81.2
3000 93.38 36.97 49.4 6.69 92.8 81.48
Table 11: Performance on reduced word topic vectors using several reduction techniques on various dimensions for multi-label classification – the Reuters dataset.
Method Random Projection PCA (SubSpace) Autoencoder
Time (sec) 35 66 608
Table 12: Time complexity of dimensionality reduction of the word topic vectors to a dense lower-dimensional representation using various reduction techniques on 20NewsGroup.

Results and Analysis: We observed that our modified embeddings (SCDV-MS), with Doc2VecC-initialized word vectors, direct thresholding on word-cluster assignments, and multi-sense disambiguation using AdaGram, outperform all earlier embeddings on both the 20NewsGroup and the Reuters datasets. From the class-wise results in Table 4, we notice a consistent performance improvement: we outperform SCDV in 18 out of 20 classes. It should be noted that the improvement on Reuters is not as large as on the 20NewsGroup dataset, because the number of unique polysemic words in Reuters is significantly smaller than in 20NewsGroup; thus most words are assigned to only one cluster. Therefore, the use of AdaGram for sense disambiguation and the sparsity operation over the word-cluster assignments do not improve the performance by a large margin. We verified this claim in the ablation study below. We can conclude that our modifications yield notable improvements when the dataset has more multi-sense words.

Ablation Study: To understand the contribution of each of the three modifications, we compared five different versions of our embeddings. In the first version, we ablated the sparsity of the word-cluster assignments and applied sparsity directly on the document vectors, similar to SCDV, while keeping the Doc2VecC multi-sense word embeddings and the sense-annotated corpus intact. In the second version, we replaced the Doc2VecC embeddings with normal SGNS embeddings while keeping the word topic vector sparsity and the sense-annotated corpus intact. In the third version, we ablated the multi-sense embeddings and the annotation of the corpus while keeping the Doc2VecC word training and the word topic vector sparsity intact. We also compared our results with an all-ablation approach, i.e., the SCDV baseline, and a no-ablation approach, i.e., our new SCDV-MS embeddings. Table 6 shows the results obtained with ablation on the 20NewsGroup and Reuters datasets. We obtained the best performance with the no-ablation approach, i.e., SCDV-MS. Thus, we can conclude that all three modifications are needed to yield the best performance. Multi-sense disambiguation is the most pivotal modification for 20NewsGroup, since ablating it produced the lowest performance among the three single-modification ablations. On the Reuters dataset, in contrast, multi-sense disambiguation was the least important modification because of the smaller number of multi-sense words; there, noise removal at the word-representation level was the most important.

Comparison with Contextual Embeddings: SCDV-MS is much simpler than unsupervised contextual embeddings like BERT (pr) and ELMo, yet it outperformed them. We presume the reason is that SCDV-MS concentrates on capturing semantics (local and global) in sparse, high-dimensional representations, instead of capturing both semantics and syntax in a single lower-dimensional continuous representation (as BERT does), and understanding syntax is not as influential as semantics in our classification and similarity tasks.

5.2 Lower-Dimensional Manifold Learning of Word Topic Vectors

We tried three popular lower-dimensional manifold learning algorithms to reduce the high-dimensional sparse word topic vectors to lower-dimensional word embeddings, namely Random Projection [achlioptas2003database], PCA (Subspace) [abdi2010principal], and Autoencoders [vincent2010stacked]. For PCA (Subspace), we observed that not all cluster subspaces of the word topic vectors have full rank; most of the subspaces had ranks much smaller than the word vector dimension d. Therefore, we applied PCA on each d-dimensional subspace separately and concatenated the subspace-reduced vectors. We refer to Table 10 for the criteria used for PCA-based subspace rank reduction. For autoencoders, we used a standard architecture to reduce the word topic vectors through an intermediate hidden layer (see Figure 7), trained with mean squared error minimization, non-linear activations, and the Adam optimization routine.
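A simplified sketch of the per-subspace PCA reduction is shown below, assuming the word-topic matrix is laid out as K contiguous d-dimensional cluster blocks; the rank handling is a simplification of the criteria in Table 10, and the rank computation itself is only for illustration (it is expensive on large vocabularies).

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_pca_reduce(wtv, n_clusters, word_dim, rank_per_subspace):
    """Apply PCA separately to each d-dimensional cluster subspace of the
    (|V|, K*d) word-topic matrix and concatenate the reduced blocks."""
    pieces = []
    for k in range(n_clusters):
        block = wtv[:, k * word_dim:(k + 1) * word_dim]
        # many subspaces have rank well below d, so cap the target accordingly
        r = max(1, min(rank_per_subspace, np.linalg.matrix_rank(block)))
        pieces.append(PCA(n_components=r).fit_transform(block))
    return np.concatenate(pieces, axis=1)
```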

Figure 7: The autoencoder architecture used to reduce the word topic vectors to a lower dimension.

Results: Table 7 shows the performance of the dimensionality reduction techniques, Random Projection, PCA (Subspace), and Autoencoders, on the 20NewsGroup dataset. We observed that autoencoders outperform the other reduction methods because they can easily fit non-linear functions. We also observed only a small decrease in performance after reduction with most methods; this decrease is associated with the information loss inherent in data compression. Thus, word topic vectors can be efficiently projected into a lower-dimensional manifold without a significant loss in performance. We compared the percentage relative loss in F1-Score (%RL) on text classification after dimensionality reduction for both SCDV and SCDV-MS on 20NewsGroup. In Figures 4, 5, and 6, we observe that the loss in SCDV-MS's F1-Score is distinctly smaller than SCDV's for all reduction methods. Furthermore, the reduction time for SCDV-MS was shorter than for SCDV, particularly for random projection, because SCDV-MS's word topic vectors are sparser than SCDV's. We also tried a direct reduction of the final document representations, which yielded poor performance and took a longer reduction time for both embeddings. Overall, reducing the SCDV-MS word topic vectors is much more effective than reducing those of SCDV or reducing document vectors. We made similar observations for the multi-label classification task on the Reuters dataset; see Table 11.

Application to Deep Learning: One significant benefit of the reduced word topic vectors is that they can be used directly as word embeddings in popular deep learning architectures, such as CNNs, on downstream classification tasks. We used the same CNN architecture for both embeddings (see the supplementary material, Figure 8: https://bit.ly/33473wk). Employing a CNN, the reduced word topic vectors outperformed the original word embeddings of the same dimension on the 20NewsGroup classification task, as shown in Table 8.
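As a sketch of how the reduced vectors can be plugged into such a model, one could initialize the embedding layer of a CNN text classifier from them, e.g., in PyTorch; this is only an illustration and not the CNN configuration from the supplementary material.

```python
import torch
import torch.nn as nn

def make_embedding_layer(reduced_wtv, freeze=True):
    """Build an embedding layer from the reduced word-topic vectors.
    reduced_wtv: NumPy array of shape (|V|, reduced_dim)."""
    weights = torch.tensor(reduced_wtv, dtype=torch.float32)
    return nn.Embedding.from_pretrained(weights, freeze=freeze)
```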

5.3 Time and Space Complexity

Table 9 presents empirical results for the time and space complexity of SCDV (12000 dimensions), SCDV-MS (12000 dimensions), and the reduced R-SCDV-MS (2000 dimensions).

Feature Formation Time: Due to the direct thresholding of word-cluster assignments in SCDV-MS, the word topic vectors (wtv) are extremely sparse, with only a small fraction of active attributes, whereas SCDV's word topic vectors are almost fully dense (see the sparsity columns in Table 9). Therefore, we can use efficient sparse operations (sparse addition and multiplication) over these vectors to speed up feature formation. We observed that adding sparsity reduces the feature formation time by a significant factor (see the Feature column in Table 9).
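The gain from sparsity can be realized by forming all document vectors with a single sparse matrix product instead of per-word dense additions; a SciPy sketch follows, under the assumption that both the document-term matrix and the word-topic matrix are stored as CSR matrices (names are ours):

```python
import numpy as np
from scipy import sparse

def document_vectors_sparse(doc_term_matrix, wtv_sparse):
    """doc_term_matrix: CSR (n_docs, |V|) of term counts (or tf-idf weights);
    wtv_sparse: CSR (|V|, K*d) of sparse word-topic vectors."""
    sums = doc_term_matrix @ wtv_sparse                          # sparse-sparse product
    lengths = np.asarray(doc_term_matrix.sum(axis=1)).ravel()    # words per document
    inv_len = sparse.diags(1.0 / np.maximum(lengths, 1))
    return inv_len @ sums                                        # average word-topic vector per document
```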

Overall Prediction Time: Due to faster feature formation and reduced loading time, SCDV-MS predicts faster than the original SCDV. However, we observed an insignificant difference in the prediction time and model size, because SCDV sparsifies the final document vectors (both are roughly equally sparse). Furthermore, after reducing SCDV-MS to dense 2000-dimensional features using autoencoders, we observed a distinct reduction in the prediction time. One can directly store the reduced word topic vectors for the complete vocabulary instead of the reduction model; refer to Table 12 for the SCDV-MS reduction timing. Alternatively, one can reduce only the words appearing in the documents, i.e., use the reduction model in real time during prediction.

Vector Space Complexity, Training Time, and Model Size: We require only a small fraction of the original space to store the sparse word topic vectors (71 MB vs. 748 MB in Table 9), despite the vocabulary being larger than the original due to multi-sense word embeddings. The projected R-SCDV-MS representation takes more space than the sparse word topic vectors of SCDV-MS, but it is still several times smaller than that of SCDV. SCDV's document vectors are marginally sparser than those of SCDV-MS due to the manual thresholding of document vectors. SCDV-MS also has a slightly faster training time than the original SCDV. Finally, the classification model trained on the reduced vectors (R-SCDV-MS) is several times smaller than that of SCDV, and the training process of our classifier is faster than with SCDV (see Table 9).

6 Conclusion

In this paper, we proposed several novel modifications to overcome the shortcomings of SCDV, a state-of-the-art document embedding method. Our proposed SCDV-MS outperformed previous embeddings (including SCDV and BERT (pr)) on downstream text classification tasks. Overall, we have shown that disambiguating multi-sense words based on their context words (adjacent words) can lead to better document representations, that sparsity in representations is helpful for effective and efficient lower-dimensional manifold representation learning, and that representation noise at the word level can have a significant impact on downstream tasks.

References
