Domain Adapted Word Embeddings for Improved Sentiment Classification

Prathusha K Sarma, Yingyu Liang, William A Sethares
University of Wisconsin-Madison
{kameswarasar,sethares}@wisc.edu, yliang@cs.wisc.edu

Abstract

Generic word embeddings are trained on large-scale generic corpora; Domain Specific (DS) word embeddings are trained only on data from a domain of interest. This paper proposes a method to combine the breadth of generic embeddings with the specificity of domain specific embeddings. The resulting embeddings, called Domain Adapted (DA) word embeddings, are formed by aligning corresponding word vectors using Canonical Correlation Analysis (CCA) or the related nonlinear Kernel CCA. Evaluation results on sentiment classification tasks show that the DA embeddings substantially outperform both generic and DS embeddings when used as input features to standard or state-of-the-art sentence encoding algorithms for classification.


1 Introduction

Generic word embeddings such as GloVe and word2vec (Pennington et al., 2014; Mikolov et al., 2013), which are pre-trained on large sets of raw text, have demonstrated remarkable success when used as features to a supervised learner in various applications such as the sentiment classification of text documents. There are, however, many applications with domain specific vocabularies and relatively small amounts of data. The performance of generic word embeddings in such applications is limited, since word embeddings pre-trained on generic corpora do not capture domain specific semantics/knowledge, while embeddings learned on small data sets are of low quality.

A concrete example of a small-sized domain specific corpus is the Substance Use Disorders (SUDs) data set (Quanbeck et al., 2014; Litvin et al., 2013), which contains messages on discussion forums for people with substance addictions. These forums are part of a mobile health intervention treatment that encourages participants to engage in sobriety-related discussions. The goal of such treatments is to analyze participants' digital content and provide human intervention via machine learning algorithms. This data is both domain specific and limited in size. Other examples include customer support tickets reporting issues with taxi-cab services, product reviews, reviews of restaurants and movies, discussions by special interest groups and political surveys. In general, such data sets are common in domains where words carry a different sentiment from what they would have elsewhere.

Such data sets present significant challenges for word embedding learning algorithms. First, words in data on specific topics have a different distribution than words from generic corpora. Hence, using generic word embeddings obtained from algorithms trained on a corpus such as Wikipedia may introduce considerable errors in performance metrics on specific downstream tasks such as sentiment classification. For example, in SUDs, discussions are focused on topics related to recovery and addiction; the sentiment behind the word ‘party’ may be very different in a dating context than in a substance abuse context. Thus domain specific vocabularies and word semantics may be a problem for pre-trained sentiment classification models (Blitzer et al., 2007). Second, there is insufficient data to completely retrain a new set of word embeddings. The SUD data set comes from only a few hundred people, and only a fraction of these are active (Firth et al., 2017; Naslund et al., 2015). This results in a small data set of text messages available for analysis. Furthermore, content is generated spontaneously on a day-to-day basis, and language use is informal and unstructured. Fine-tuning the generic word embeddings also leads to noisy outputs due to the highly non-convex training objective and the small amount of data. Since such data sets are common, a simple and effective method to adapt word embedding approaches is highly valuable. While existing work (Yin and Schütze, 2016; Luo et al., 2014; Mehrkanoon and Suykens, 2017; Anoop et al., 2015; Blitzer et al., 2011) combines word embeddings from different algorithms to improve upon intrinsic tasks such as similarities, analogies, etc., there does not exist a concrete method for combining multiple embeddings to perform domain adaptation or to improve on extrinsic tasks.

This paper proposes a method for obtaining high quality word embeddings that capture domain specific semantics and are suitable for tasks on the specific domain. The new Domain Adapted (DA) embeddings are obtained by combining generic embeddings and Domain Specific (DS) embeddings via CCA/KCCA. Generic embeddings are trained on large corpora and do not capture domain specific semantics, while DS embeddings are obtained from the domain specific data set via algorithms such as Latent Semantic Analysis (LSA) or other embedding methods. The two sets of embeddings are combined using a linear CCA (Hotelling, 1936) or a nonlinear kernel CCA (KCCA) (Hardoon et al., 2004). They are projected along the directions of maximum correlation, and a new (DA) embedding is formed by averaging the projections of the generic embeddings and DS embeddings. The DA embeddings are then evaluated in a sentiment classification setting. Empirically, it is shown that the CCA/KCCA combined DA embeddings improve substantially over the generic embeddings, DS embeddings and a concatenation-SVD (concSVD) based baseline.

The remainder of this paper is organized as follows. Section 2 briefly introduces the CCA/KCCA and details the procedure used to obtain the DA embeddings. Section 3 describes the experimental set up. Section 4 discusses the results from sentiment classification tasks on benchmark data sets using standard classification as well as using a sophisticated neural network based sentence encoding algorithm. Section 5 concludes this work.

2 Domain Adapted Word Embeddings

Training word embeddings directly on small data sets leads to noisy outputs, while embeddings from generic corpora fail to capture specific local meanings within the domain. Here we combine DS and generic embeddings using CCA or kernel CCA (KCCA), which projects corresponding word vectors along the directions of maximum correlation.

Let $\mathbf{W}_{DS} \in \mathbb{R}^{d_1 \times |V_{DS}|}$ be the matrix whose columns are the domain specific word embeddings (obtained by, e.g., running the LSA algorithm on the domain specific data set), where $V_{DS}$ is its vocabulary and $d_1$ is the dimension of the embeddings. Similarly, let $\mathbf{W}_{G} \in \mathbb{R}^{d_2 \times |V_{G}|}$ be the matrix of generic word embeddings (obtained by, e.g., running the GloVe algorithm on the Common Crawl data), where $V_{G}$ is the vocabulary and $d_2$ is the dimension of the embeddings. Let $V = V_{DS} \cap V_{G}$. Let $\mathbf{w}_{i,DS} \in \mathbb{R}^{d_1}$ be the domain specific embedding of the word $w_i \in V$, and $\mathbf{w}_{i,G} \in \mathbb{R}^{d_2}$ be its generic embedding. For one dimensional CCA, let $\phi_{DS} \in \mathbb{R}^{d_1}$ and $\phi_{G} \in \mathbb{R}^{d_2}$ be the projection directions of $\mathbf{w}_{i,DS}$ and $\mathbf{w}_{i,G}$ respectively. Then the projected values are,

$$\bar{w}_{i,DS} = \phi_{DS}^{\top}\mathbf{w}_{i,DS}, \qquad \bar{w}_{i,G} = \phi_{G}^{\top}\mathbf{w}_{i,G}. \tag{1}$$

CCA maximizes the correlation $\rho$ between $\bar{w}_{i,DS}$ and $\bar{w}_{i,G}$ to obtain $\phi_{DS}$ and $\phi_{G}$ such that

$$\rho(\phi_{DS}, \phi_{G}) = \max_{\phi_{DS}, \phi_{G}} \frac{\mathbb{E}\big[\bar{w}_{i,DS}\,\bar{w}_{i,G}\big]}{\sqrt{\mathbb{E}\big[\bar{w}_{i,DS}^{2}\big]\,\mathbb{E}\big[\bar{w}_{i,G}^{2}\big]}} \tag{2}$$

where $\rho$ is the correlation between the projected word embeddings and $\mathbb{E}$ is the expectation over all words $w_i \in V$.

The $d$-dimensional CCA with $d > 1$ can be defined recursively. Suppose the first $d-1$ pairs of canonical variables are defined. Then the $d$-th pair is defined by seeking vectors maximizing the same correlation function subject to the constraint that they be uncorrelated with the first $d-1$ pairs. Equivalently, matrices of projection vectors $\boldsymbol{\Phi}_{DS} \in \mathbb{R}^{d_1 \times d}$ and $\boldsymbol{\Phi}_{G} \in \mathbb{R}^{d_2 \times d}$ are obtained for all vectors in $\mathbf{W}_{DS}$ and $\mathbf{W}_{G}$, where $d \leq \min(d_1, d_2)$. The embeddings $\bar{\mathbf{w}}_{i,DS} = \boldsymbol{\Phi}_{DS}^{\top}\mathbf{w}_{i,DS}$ and $\bar{\mathbf{w}}_{i,G} = \boldsymbol{\Phi}_{G}^{\top}\mathbf{w}_{i,G}$ are projections along the directions of maximum correlation.

The final domain adapted embedding for word $w_i \in V$ is given by $\hat{\mathbf{w}}_{i,DA} = \alpha\,\bar{\mathbf{w}}_{i,DS} + \beta\,\bar{\mathbf{w}}_{i,G}$, where the parameters $\alpha$ and $\beta$ can be obtained by solving the following optimization,

$$\min_{\alpha, \beta}\; \sum_{i=1}^{|V|} \left\lVert \hat{\mathbf{w}}_{i,DA} - \alpha\,\bar{\mathbf{w}}_{i,DS} \right\rVert_{2}^{2} + \left\lVert \hat{\mathbf{w}}_{i,DA} - \beta\,\bar{\mathbf{w}}_{i,G} \right\rVert_{2}^{2}. \tag{3}$$

Solving (3) gives a weighted combination with $\alpha = \beta = \tfrac{1}{2}$, i.e., the new vector is equal to the average of the two projections:

$$\hat{\mathbf{w}}_{i,DA} = \tfrac{1}{2}\,\bar{\mathbf{w}}_{i,DS} + \tfrac{1}{2}\,\bar{\mathbf{w}}_{i,G}. \tag{4}$$
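For illustration, the sketch below builds DA embeddings along these lines using scikit-learn's linear CCA; the variable names, the use of scikit-learn, and the toy dimensions are assumptions made for the example, not details prescribed by the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def domain_adapted_embeddings(W_ds, W_g, d):
    """Combine aligned domain-specific and generic embeddings with linear CCA.

    W_ds: (|V|, d1) domain specific embeddings, row i = word i
    W_g:  (|V|, d2) generic embeddings, row i = the same word i
    d:    number of canonical directions, d <= min(d1, d2)
    """
    cca = CCA(n_components=d)
    # Project both views onto their directions of maximum correlation.
    W_ds_bar, W_g_bar = cca.fit_transform(W_ds, W_g)
    # Eq. (4): the DA embedding is the average of the two projections.
    return 0.5 * W_ds_bar + 0.5 * W_g_bar

# Toy usage with random matrices standing in for real embeddings.
rng = np.random.RandomState(0)
W_ds = rng.randn(2049, 50)    # e.g., LSA vectors for a small vocabulary
W_g = rng.randn(2049, 300)    # e.g., generic GloVe vectors for the same words
W_da = domain_adapted_embeddings(W_ds, W_g, d=25)
print(W_da.shape)             # (2049, 25)
```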

Because of its linear structure, the CCA in (2) may not always capture the best relationships between the two matrices. To account for nonlinearities, a kernel function, which implicitly maps the data into a high dimensional feature space, can be applied. For example, given a vector $\mathbf{z} \in \mathbb{R}^{d}$, a kernel function $K$ is written in the form of a feature map $\varphi$ defined by $\varphi : \mathbf{z} = (z_1, \ldots, z_d) \mapsto \varphi(\mathbf{z}) = (\varphi_1(\mathbf{z}), \ldots, \varphi_N(\mathbf{z}))$ such that, given $\mathbf{z}_a$ and $\mathbf{z}_b$,

$$K(\mathbf{z}_a, \mathbf{z}_b) = \langle \varphi(\mathbf{z}_a), \varphi(\mathbf{z}_b) \rangle.$$

In kernel CCA, data is first projected onto a high dimensional feature space before performing CCA. In this work the kernel function used is a Gaussian kernel, i.e.,

$$K(\mathbf{z}_a, \mathbf{z}_b) = \exp\!\left(-\frac{\lVert \mathbf{z}_a - \mathbf{z}_b \rVert^{2}}{2\sigma^{2}}\right).$$
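For concreteness, the Gram matrix of this Gaussian kernel over a set of embedding vectors can be computed as in the short sketch below; the function name and the bandwidth parameter `sigma` are illustrative choices, not specifics taken from the paper.

```python
import numpy as np

def gaussian_gram(Z, sigma=1.0):
    """Gram matrix K[a, b] = exp(-||z_a - z_b||^2 / (2 sigma^2)) for rows of Z."""
    sq_norms = np.sum(Z ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * Z @ Z.T
    np.maximum(sq_dists, 0.0, out=sq_dists)  # guard against tiny negative values
    return np.exp(-sq_dists / (2.0 * sigma ** 2))
```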

The implementation of kernel CCA follows the standard algorithm described in several texts such as (Hardoon et al., 2004); see reference for details.

Data Set | Embedding | Avg Precision | Avg F-score | Avg AUC
Yelp | KCCA(Glv, LSA) | 85.36 ± 2.8 | 81.89 ± 2.8 | 82.57 ± 1.3
 | CCA(Glv, LSA) | 83.69 ± 4.7 | 79.48 ± 2.4 | 80.33 ± 2.9
 | KCCA(w2v, LSA) | 87.45 ± 1.2 | 83.36 ± 1.2 | 84.10 ± 0.9
 | CCA(w2v, LSA) | 84.52 ± 2.3 | 80.02 ± 2.6 | 81.04 ± 2.1
 | KCCA(GlvCC, LSA) | 88.11 ± 3.0 | 85.35 ± 2.7 | 85.80 ± 2.4
 | CCA(GlvCC, LSA) | 83.69 ± 3.5 | 78.99 ± 4.2 | 80.03 ± 3.7
 | KCCA(w2v, DSw2v) | 78.09 ± 1.7 | 76.04 ± 1.7 | 76.66 ± 1.5
 | CCA(w2v, DSw2v) | 86.22 ± 3.5 | 84.35 ± 2.4 | 84.65 ± 2.2
 | concSVD(Glv, LSA) | 80.14 ± 2.6 | 78.50 ± 3.0 | 78.92 ± 2.7
 | concSVD(w2v, LSA) | 85.11 ± 2.3 | 83.51 ± 2.2 | 83.80 ± 2.0
 | concSVD(GlvCC, LSA) | 84.20 ± 3.7 | 80.39 ± 3.7 | 80.83 ± 3.9
 | GloVe | 77.13 ± 4.2 | 72.32 ± 7.9 | 74.17 ± 5.0
 | GloVe-CC | 82.10 ± 3.5 | 76.74 ± 3.4 | 78.17 ± 2.7
 | word2vec | 82.80 ± 3.5 | 78.28 ± 3.5 | 79.35 ± 3.1
 | LSA | 75.36 ± 5.4 | 71.17 ± 4.3 | 72.57 ± 4.3
 | word2vec | 73.08 ± 2.2 | 70.97 ± 2.4 | 71.76 ± 2.1
Amazon | KCCA(Glv, LSA) | 86.30 ± 1.9 | 83.00 ± 2.9 | 83.39 ± 3.2
 | CCA(Glv, LSA) | 84.68 ± 2.4 | 82.27 ± 2.2 | 82.78 ± 1.7
 | KCCA(w2v, LSA) | 87.09 ± 1.8 | 82.63 ± 2.6 | 83.50 ± 2.0
 | CCA(w2v, LSA) | 84.80 ± 1.5 | 81.42 ± 1.9 | 82.12 ± 1.3
 | KCCA(GlvCC, LSA) | 89.73 ± 2.4 | 85.47 ± 2.4 | 85.56 ± 2.6
 | CCA(GlvCC, LSA) | 85.67 ± 2.3 | 83.83 ± 2.3 | 84.21 ± 2.1
 | KCCA(w2v, DSw2v) | 85.68 ± 3.2 | 81.23 ± 3.2 | 82.20 ± 2.9
 | CCA(w2v, DSw2v) | 83.50 ± 3.4 | 81.31 ± 4.0 | 81.86 ± 3.7
 | concSVD(Glv, LSA) | 82.36 ± 2.0 | 81.30 ± 3.5 | 81.51 ± 2.5
 | concSVD(w2v, LSA) | 87.28 ± 2.9 | 86.17 ± 2.5 | 86.42 ± 2.0
 | concSVD(GlvCC, LSA) | 84.93 ± 1.6 | 77.81 ± 2.3 | 79.52 ± 1.7
 | GloVe | 81.58 ± 2.5 | 77.62 ± 2.7 | 78.72 ± 2.7
 | GloVe-CC | 79.91 ± 2.7 | 81.63 ± 2.8 | 81.46 ± 2.6
 | word2vec | 84.55 ± 1.9 | 80.52 ± 2.5 | 81.45 ± 2.0
 | LSA | 82.65 ± 4.4 | 73.92 ± 3.8 | 76.40 ± 3.2
 | word2vec | 74.20 ± 5.8 | 72.49 ± 5.0 | 73.11 ± 4.8
IMDB | KCCA(Glv, LSA) | 73.84 ± 1.3 | 73.07 ± 3.6 | 73.17 ± 2.4
 | CCA(Glv, LSA) | 73.35 ± 2.0 | 73.00 ± 3.2 | 73.06 ± 2.0
 | KCCA(w2v, LSA) | 82.36 ± 4.4 | 78.95 ± 2.7 | 79.66 ± 2.6
 | CCA(w2v, LSA) | 80.66 ± 4.5 | 75.95 ± 4.5 | 77.23 ± 3.8
 | KCCA(GlvCC, LSA) | 54.50 ± 2.5 | 54.42 ± 2.9 | 53.91 ± 2.0
 | CCA(GlvCC, LSA) | 54.08 ± 2.0 | 53.03 ± 3.5 | 54.90 ± 2.1
 | KCCA(w2v, DSw2v) | 60.65 ± 3.5 | 58.95 ± 3.2 | 58.95 ± 3.7
 | CCA(w2v, DSw2v) | 58.47 ± 2.7 | 57.62 ± 3.0 | 58.03 ± 3.9
 | concSVD(Glv, LSA) | 73.25 ± 3.7 | 74.55 ± 3.2 | 73.02 ± 4.7
 | concSVD(w2v, LSA) | 53.87 ± 2.2 | 51.77 ± 5.8 | 53.54 ± 1.9
 | concSVD(GlvCC, LSA) | 78.28 ± 3.2 | 77.67 ± 3.7 | 74.55 ± 2.9
 | GloVe | 64.44 ± 2.6 | 65.18 ± 3.5 | 64.62 ± 2.6
 | GloVe-CC | 50.53 ± 1.8 | 62.39 ± 3.5 | 49.96 ± 2.3
 | word2vec | 78.92 ± 3.7 | 74.88 ± 3.1 | 75.60 ± 2.4
 | LSA | 67.92 ± 1.7 | 69.79 ± 5.3 | 69.71 ± 3.8
 | word2vec | 56.87 ± 3.6 | 56.04 ± 3.1 | 59.53 ± 8.9
A-CHESS | KCCA(Glv, LSA) | 32.07 ± 1.3 | 39.32 ± 2.5 | 65.96 ± 1.3
 | CCA(Glv, LSA) | 32.70 ± 1.5 | 35.48 ± 4.2 | 62.15 ± 2.9
 | KCCA(w2v, LSA) | 33.45 ± 1.3 | 39.81 ± 1.0 | 65.92 ± 0.6
 | CCA(w2v, LSA) | 33.06 ± 3.2 | 34.02 ± 1.1 | 60.91 ± 0.9
 | KCCA(GlvCC, LSA) | 36.38 ± 1.2 | 34.71 ± 4.8 | 61.36 ± 2.6
 | CCA(GlvCC, LSA) | 32.11 ± 2.9 | 36.85 ± 4.4 | 62.99 ± 3.1
 | KCCA(w2v, DSw2v) | 25.59 ± 1.2 | 28.27 ± 3.1 | 57.25 ± 1.7
 | CCA(w2v, DSw2v) | 24.88 ± 1.4 | 29.17 ± 3.1 | 57.76 ± 2.0
 | concSVD(Glv, LSA) | 27.27 ± 2.9 | 34.45 ± 3.0 | 61.59 ± 2.3
 | concSVD(w2v, LSA) | 29.84 ± 2.3 | 36.32 ± 3.3 | 62.94 ± 1.1
 | concSVD(GlvCC, LSA) | 28.09 ± 1.9 | 35.06 ± 1.4 | 62.13 ± 2.6
 | GloVe | 30.82 ± 2.0 | 33.67 ± 3.4 | 60.80 ± 2.3
 | GloVe-CC | 38.13 ± 0.8 | 27.45 ± 3.1 | 57.49 ± 1.2
 | word2vec | 32.67 ± 2.9 | 31.72 ± 1.6 | 59.64 ± 0.5
 | LSA | 27.42 ± 1.6 | 34.38 ± 2.3 | 61.56 ± 1.9
 | word2vec | 24.48 ± 0.8 | 27.97 ± 3.7 | 57.08 ± 2.5

Table 1: This table shows results from the classification task using sentence embeddings obtained from weighted averaging of word embeddings. Metrics reported are average Precision, F-score and AUC and the corresponding standard deviations (STD). Best results are attained by KCCA(GlvCC, LSA).
Data Set | Embedding | Avg Precision | Avg F-score | Avg AUC
Yelp | GlvCC | 86.47 ± 1.9 | 83.51 ± 2.6 | 83.83 ± 2.2
 | KCCA(GlvCC, LSA) | 91.06 ± 0.8 | 88.66 ± 2.4 | 88.76 ± 2.4
 | CCA(GlvCC, LSA) | 86.26 ± 1.4 | 82.61 ± 1.1 | 83.99 ± 0.8
 | concSVD(GlvCC, LSA) | 85.53 ± 2.1 | 84.90 ± 1.7 | 84.96 ± 1.5
 | RNTN | 83.11 ± 1.1 | - | -
Amazon | GlvCC | 87.93 ± 2.7 | 82.41 ± 3.3 | 83.24 ± 2.8
 | KCCA(GlvCC, LSA) | 90.56 ± 2.1 | 86.52 ± 2.0 | 86.74 ± 1.9
 | CCA(GlvCC, LSA) | 87.12 ± 2.6 | 83.18 ± 2.2 | 83.78 ± 2.1
 | concSVD(GlvCC, LSA) | 85.73 ± 1.9 | 85.19 ± 2.4 | 85.17 ± 2.6
 | RNTN | 82.84 ± 0.6 | - | -
IMDB | GlvCC | 54.02 ± 3.2 | 53.03 ± 5.2 | 53.01 ± 2.0
 | KCCA(GlvCC, LSA) | 59.76 ± 7.3 | 53.26 ± 6.1 | 56.46 ± 3.4
 | CCA(GlvCC, LSA) | 53.62 ± 1.6 | 50.62 ± 5.1 | 58.75 ± 3.7
 | concSVD(GlvCC, LSA) | 52.75 ± 2.3 | 53.05 ± 6.0 | 53.54 ± 2.5
 | RNTN | 80.88 ± 0.7 | - | -
A-CHESS | GlvCC | 52.21 ± 5.1 | 55.26 ± 5.6 | 74.28 ± 3.6
 | KCCA(GlvCC, LSA) | 55.37 ± 5.5 | 50.67 ± 5.0 | 69.89 ± 3.1
 | CCA(GlvCC, LSA) | 54.34 ± 3.6 | 48.76 ± 2.9 | 68.78 ± 2.4
 | concSVD(GlvCC, LSA) | 40.41 ± 4.2 | 44.75 ± 5.2 | 68.13 ± 3.8
 | RNTN | - | - | -

Table 2: This table shows results obtained by using sentence embeddings from the InferSent encoder in the sentiment classification task. Metrics reported are average Precision, F-score and AUC along with the corresponding standard deviations (STD). Best results are obtained by KCCA(GlvCC, LSA).

3 Experimental Evaluation

This section evaluates DA embeddings in binary sentiment classification tasks on four standard data sets. Document embeddings are obtained via (i) a standard framework, i.e., document embeddings are a weighted combination of their constituent word embeddings, and (ii) initializing a state-of-the-art sentence encoding algorithm, InferSent (Conneau et al., 2017), with word embeddings to obtain sentence embeddings. Encoded sentences are then classified using a logistic regression classifier.
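As an illustration of setting (i), the sketch below builds a document vector as a weighted average of its word vectors and classifies it with logistic regression; the inverse-frequency weighting scheme shown here is an assumption made for the example (the paper does not fix a particular weighting), and `embeddings`, `docs` and `labels` are hypothetical inputs.

```python
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

def doc_embedding(tokens, embeddings, word_weight):
    """Weighted average of the word vectors of a tokenized document."""
    vecs = [word_weight[t] * embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        dim = len(next(iter(embeddings.values())))
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def featurize(docs, embeddings, a=1e-3):
    """docs: list of token lists. Weights follow a smooth inverse-frequency scheme."""
    counts = Counter(t for doc in docs for t in doc)
    total = sum(counts.values())
    word_weight = {t: a / (a + counts[t] / total) for t in counts}
    return np.vstack([doc_embedding(doc, embeddings, word_weight) for doc in docs])

# Hypothetical usage: `embeddings` maps word -> DA vector, `docs`/`labels` are the reviews.
# X = featurize(docs, embeddings)
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```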

3.1 Datasets

The following balanced and imbalanced data sets are used for experimentation:

  • Yelp: This is a balanced data set consisting of 1000 restaurant reviews obtained from Yelp. Each review is labeled as either ‘Positive’ or ‘Negative’. There are a total of 2049 distinct word tokens in this data set.

  • Amazon: In this balanced data set there are 1000 product reviews obtained from Amazon. Each product review is labeled either ‘Positive’ or ‘Negative’. There are a total of 1865 distinct word tokens in this data set.

  • IMDB: This is a balanced data set consisting of 1000 reviews for movies on IMDB. Each movie review is labeled either ‘Positive’ or ‘Negative’. There are a total of 3075 distinct word tokens in this data set.

  • A-CHESS: This is a proprietary data set (from the Center for Health Enhancement System Services at UW-Madison) obtained from a study involving users with alcohol addiction. Text data is obtained from a discussion forum in the A-CHESS mobile app (Quanbeck et al., 2014). There are a total of 2500 text messages, with 8% of the messages indicative of relapse risk. Since this data set is part of a clinical trial, an exact text message cannot be provided as an example. However, the following messages illustrate typical messages in this data set: “I’ve been clean for about 7 months but even now I still feel like maybe I won’t make it.” Such a message is marked as ‘threat’ by a human moderator. On the other hand, there are benign messages that are marked ‘not threat’, such as “30 days sober and counting, I feel like I am getting my life back.” The aim is to eventually automate this process, since human moderation involves considerable effort and time. This is an unbalanced data set (8% of the messages are marked ‘threat’) with a total of 3400 distinct word tokens.

The first three data sets are obtained from (Kotzias et al., 2015).

3.2 Word embeddings and baselines

This section briefly describes the various generic and DS embeddings used. We also compare against a baseline method for combining generic and DS embeddings, both in the standard framework and when initializing the neural network baseline.

  • Generic word embeddings: The generic word embeddings used are GloVe (https://nlp.stanford.edu/projects/glove/) vectors trained on Wikipedia and on Common Crawl, and the word2vec (Skip-gram) embeddings (https://code.google.com/archive/p/word2vec/). These generic embeddings are denoted Glv, GlvCC and w2v respectively.

  • DS word embeddings: DS embeddings are obtained via Latent Semantic Analysis (LSA) and by retraining word2vec on the test data sets using the implementation in gensim (https://radimrehurek.com/gensim/). DS embeddings obtained via LSA are denoted by LSA and DS embeddings obtained via word2vec are denoted by DSw2v.

  • concatenation-SVD baseline: Generic and DS embeddings are concatenated to form a single embedding matrix. SVD is performed on this matrix and the projections onto the top singular directions form the resulting word embeddings (see the sketch below). These meta-embeddings, proposed by (Yin and Schütze, 2016), have demonstrated considerable success in intrinsic tasks such as similarities, analogies, etc.
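A minimal sketch of this concSVD construction is given below; the variable names and the target dimension `k` are illustrative assumptions, not specifics from the paper.

```python
import numpy as np

def conc_svd(W_generic, W_ds, k):
    """Meta-embeddings in the spirit of Yin and Schuetze (2016): concatenate the
    two aligned embedding matrices and keep the top-k singular directions.

    W_generic: (|V|, d2) generic embeddings, W_ds: (|V|, d1) DS embeddings,
    with row i of both matrices corresponding to the same word.
    """
    W_cat = np.hstack([W_generic, W_ds])               # (|V|, d1 + d2)
    U, S, Vt = np.linalg.svd(W_cat, full_matrices=False)
    return U[:, :k] * S[:k]                            # rank-k word representations
```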

Details about dimensions of the word embeddings and kernel hyperparameter tuning are found in the supplemental material.

The following neural network baselines are used in this work:

  • InferSent: This is a bidirectional LSTM based sentence encoder (Conneau et al., 2017) that learns sentence encodings in a supervised fashion on a natural language inference (NLI) data set. The aim is to use the sentence encoder trained on the NLI data set to produce generic sentence encodings for use in transfer learning applications.

  • RNTN: The Recursive Neural Tensor Network (Socher et al., 2013) baseline is a neural network that composes sentiment over the parse tree of a sentence. Since the data sets considered in our experiments have binary sentiment labels, we compare against this baseline as well.

Note that InferSent is fine-tuned with GloVe Common Crawl embeddings in combination with either the DA embeddings or the concSVD embeddings. The choice of GloVe Common Crawl embeddings is in keeping with the experimental conditions of the authors of InferSent. Since the data sets at hand do not contain all the tokens required to retrain InferSent, we replace the vectors of word tokens that are common to our test data sets and the InferSent training data with the DA or concSVD embeddings.

Since we have a combination of balanced and unbalanced test data sets, the test metrics reported are Precision, F-score and AUC. We perform 10-fold cross-validation to determine hyperparameters, and report averages of the performance metrics along with their standard deviations.
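A sketch of this evaluation loop using scikit-learn's cross-validation utilities follows; `X` (document embeddings) and `y` (labels) are placeholders for the featurized data sets described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate(X, y, n_splits=10, seed=0):
    """10-fold CV reporting mean and std of precision, F-score and AUC."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=cv,
                            scoring=("precision", "f1", "roc_auc"))
    return {m: (np.mean(scores["test_" + m]), np.std(scores["test_" + m]))
            for m in ("precision", "f1", "roc_auc")}
```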

4 Results and Discussion

From Tables 1 and 2 we see that DA embeddings perform better than concSVD as well as the generic and DS word embeddings, both when used in a standard classification task and when used to initialize a sentence encoding algorithm. As expected, LSA DS embeddings provide better results than word2vec DS embeddings. Note that on the imbalanced A-CHESS data set, in the standard classification task, KCCA embeddings perform better than the other baselines across all three performance metrics. However, in Table 2, GlvCC embeddings achieve a higher average F-score and AUC, while KCCA embeddings obtain the highest precision.

While one can argue that the F-score and AUC are better indicators of classifier performance, note that A-CHESS is highly imbalanced and precision is calculated on the minority (positive) class, which is of most interest. Also note that InferSent is retrained on the balanced NLI data set, which is much larger than the A-CHESS test set and certainly contains more positive instances. Thus, when generic word embeddings are used to initialize the sentence encoder whose outputs feed the classification task, the overall F-score and AUC are better.

Per our hypothesis, KCCA embeddings are expected to perform better than the others because CCA/KCCA provides an intuitively better technique for preserving information from both the generic and DS embeddings. The concSVD based embeddings, on the other hand, do not exploit the correlation between the generic and DS embeddings. Furthermore, (Yin and Schütze, 2016) propose to learn an ‘ensemble’ of meta-embeddings by learning weights to combine different generic word embeddings via a simple neural network. We determine the proper weights for combining the DS and generic embeddings in the CCA/KCCA space using the simple optimization problem given in Equation (3).

Thus, task specific DA embeddings formed by a properly weighted combination of DS and generic word embeddings are expected to do better than the concSVD embeddings and the individual generic and/or DS embeddings, and this is verified empirically. Also note that the LSA DS embeddings do better than the word2vec DS embeddings. This is expected, given the small size of the test sets and the nature of the word2vec algorithm. We expect similar observations when using GloVe DS embeddings, owing to the similarities between word2vec and GloVe.

5 Conclusion

This paper presents a simple yet effective method to learn Domain Adapted word embeddings that generally outperform both generic and Domain Specific word embeddings in sentiment classification experiments on a variety of standard data sets. The CCA/KCCA based DA embeddings also generally outperform a concatenation-SVD based baseline.

Acknowledgments

We would like to thank Ravi Raju for lending computing support for training our neural network baselines. We also thank the anonymous reviewers for their feedback and suggestions.

References

  • Anoop et al. (2015) KR Anoop, Ramanathan Subramanian, Vassilios Vonikakis, KR Ramakrishnan, and Stefan Winkler. 2015. On the utility of canonical correlation analysis for domain adaptation in multi-view headpose estimation. In Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, pages 4708--4712.
  • Blitzer et al. (2007) John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. volume 7, pages 440--447.
  • Blitzer et al. (2011) John Blitzer, Sham Kakade, and Dean Foster. 2011. Domain adaptation with coupled subspaces. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. pages 173--181.
  • Conneau et al. (2017) Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364 .
  • Firth et al. (2017) Joseph Firth, John Torous, Jennifer Nicholas, Rebekah Carney, Simon Rosenbaum, and Jerome Sarris. 2017. Can smartphone mental health interventions reduce symptoms of anxiety? a meta-analysis of randomized controlled trials. Journal of Affective Disorders .
  • Hardoon et al. (2004) David R Hardoon, Sandor Szedmak, and John Shawe-Taylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neural computation 16(12):2639--2664.
  • Hotelling (1936) Harold Hotelling. 1936. Relations between two sets of variates. Biometrika 28(3/4):321--377.
  • Kotzias et al. (2015) Dimitrios Kotzias, Misha Denil, Nando De Freitas, and Padhraic Smyth. 2015. From group to individual labels using deep features. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pages 597--606.
  • Litvin et al. (2013) Erika B Litvin, Ana M Abrantes, and Richard A Brown. 2013. Computer and mobile technology-based interventions for substance use disorders: An organizing framework. Addictive behaviors 38(3):1747--1756.
  • Luo et al. (2014) Yong Luo, Jian Tang, Jun Yan, Chao Xu, and Zheng Chen. 2014. Pre-trained multi-view word embedding using two-side neural network. In AAAI. pages 1982--1988.
  • Mehrkanoon and Suykens (2017) Siamak Mehrkanoon and Johan AK Suykens. 2017. Regularized semipaired kernel cca for domain adaptation. IEEE Transactions on Neural Networks and Learning Systems .
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111--3119.
  • Naslund et al. (2015) John A Naslund, Lisa A Marsch, Gregory J McHugo, and Stephen J Bartels. 2015. Emerging mhealth and ehealth interventions for serious mental illness: a review of the literature. Journal of mental health 24(5):321--332.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532--1543.
  • Quanbeck et al. (2014) Andrew Quanbeck, Ming-Yuan Chih, Andrew Isham, Roberta Johnson, and David Gustafson. 2014. Mobile delivery of treatment for alcohol use disorders: A review of the literature. Alcohol research: current reviews 36(1):111.
  • Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing. pages 1631--1642.
  • Yin and Schütze (2016) Wenpeng Yin and Hinrich Schütze. 2016. Learning word meta-embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1351--1360.