Learning Crosslingual Word Embeddings without Bilingual Corpora

Abstract

Crosslingual word embeddings represent lexical items from different languages in the same vector space, enabling the transfer of NLP tools across languages. However, previous approaches have expensive resource requirements, have difficulty incorporating monolingual data, or are unable to handle polysemy. We address these drawbacks with a method that exploits a high-coverage dictionary in an EM-style training algorithm over monolingual corpora in two languages. Our model achieves state-of-the-art performance on the bilingual lexicon induction task, exceeding models that use large bilingual corpora, and competitive results on monolingual word similarity and crosslingual document classification tasks.

1 Introduction

Monolingual word embeddings have had widespread success in many NLP tasks, including sentiment analysis [25], dependency parsing [6] and machine translation [2]. Crosslingual word embeddings are a natural extension facilitating various crosslingual tasks, e.g. through transfer learning: a model built in a resource-rich source language can then be applied to resource-poor target languages [30]. A key barrier for crosslingual transfer is lexical matching between the source and the target language. Crosslingual word embeddings are a natural remedy, where the lexicons of both the source and target language are represented as dense vectors in the same vector space [13].

Most previous work has focused on downstream crosslingual applications such as document classification and dependency parsing. We argue that good crosslingual embeddings should preserve both monolingual and crosslingual quality, which we use as the main evaluation criteria through monolingual word similarity and bilingual lexicon induction tasks. Moreover, much prior work [3] uses bilingual or comparable corpora, which are expensive to obtain for many low-resource languages. Other work [26] imposes a less onerous data condition in the form of linked Wikipedia entries across several languages; however, this approach tends to underperform other methods. To capture the monolingual distributional properties of words it is crucial to train on large monolingual corpora [18], yet many previous approaches cannot scale up, either because of complicated objective functions or the nature of the algorithm. Other methods use a dictionary as the bridge between languages [19], but they do not adequately handle translation ambiguity.

Our model uses a bilingual dictionary from PanLex [12] as the source of bilingual signal. PanLex covers more than a thousand languages, and therefore our approach applies to many languages, including low-resource ones. Our method selects the translation of each centre word based on its context, using an Expectation-Maximization style training algorithm that explicitly handles polysemy by incorporating multiple dictionary translations (word sense and translation are closely linked [22]). In addition to the dictionary, our method only requires monolingual data, as an extension of the continuous bag-of-words (CBOW) model [20]. We experiment with several variations of our model, whereby we predict only the translation or both the word and its translation, and we consider different ways of using the learned centre-word versus context embeddings in application tasks. We also propose a regularisation method to combine the two embedding matrices during training. Together, these modifications substantially improve performance across several tasks. Our final model achieves state-of-the-art performance on the bilingual lexicon induction task, a large improvement on word similarity tasks compared with previously published crosslingual word embeddings, and competitive results on crosslingual document classification. Notably, our embedding combination techniques are general, also yielding improvements for monolingual word embeddings. Our contributions are:

  • Propose a new crosslingual training method for learning vector embeddings, based only on monolingual corpora and a bilingual dictionary.

  • Evaluate several methods for combining embeddings which help in both crosslingual and monolingual evaluations.

  • Achieve consistently strong results that are competitive across monolingual, bilingual and crosslingual transfer settings.

2 Related work

There is a wealth of prior work on crosslingual word embeddings, all of which exploits some kind of bilingual resource. This often takes the form of a parallel bilingual text, using word alignments as a bridge between tokens in the source and target languages, such that translations are assigned similar embedding vectors [18]. These approaches are affected by errors in the automatic word alignments, motivating other approaches that operate at the sentence level [3] by learning compositional vector representations of sentences, so that the representations of sentences and their translations closely match. The word embeddings learned this way capture translational equivalence despite not using explicit word alignments. Nevertheless, these approaches demand large parallel corpora, which are not available for many language pairs.

Vulić and Moens [28] use bilingual comparable text sourced from Wikipedia. Their approach creates a pseudo-document by forming a bag of words from the lemmatized nouns in each pair of comparable documents, concatenated over both languages. These pseudo-documents are then used for learning vector representations with Word2Vec. Their system, despite its simplicity, performs surprisingly well on a bilingual lexicon induction task (we compare our method with theirs on this task). Their approach is compelling due to its lighter resource requirements, although comparable bilingual data is scarce for many languages. Relatedly, Søgaard et al. [26] exploit the comparable part of Wikipedia, representing each word through Wikipedia entries that are shared across many languages.

A bilingual dictionary is an alternative source of bilingual information. Gouws and Søgaard [9] randomly replace words in a monolingual corpus with one of their translations, and use this corpus for learning word embeddings. Their approach does not handle polysemy, as very few of the translations of each word will be valid in context; for this reason a high-coverage or noisy dictionary with many translations per word may lead to poor outcomes. Other approaches filter the bilingual dictionary for one-to-one translations, thus side-stepping the problem, but discarding much of the information in the dictionary. Our approach also uses a dictionary; however, we use all the translations and explicitly disambiguate among them during training.

Another distinguishing feature of the above-cited research is the method for training the embeddings. Some approaches use a cascade style of training, in which the word embeddings for the source and target languages are trained separately and then combined using the dictionary. Most other work trains multilingual models jointly, which appears to perform better than cascade training [10]. For this reason we also use a form of joint training in our work.

3 Word2Vec

Our model is an extension of the continuous bag-of-words (CBOW) model [20], a method for learning vector representations of words based on their distributional contexts. Specifically, the model describes the probability of a token $w_i$ at position $i$ using logistic regression with a factored parameterisation,

$$p(w_i \mid \mathbf{h}_i) = \frac{\exp(\mathbf{u}_{w_i}^{\top}\mathbf{h}_i)}{\sum_{w' \in V} \exp(\mathbf{u}_{w'}^{\top}\mathbf{h}_i)}, \qquad \mathbf{h}_i = \frac{1}{2k} \sum_{-k \le j \le k,\; j \ne 0} \mathbf{v}_{w_{i+j}} \qquad (1)$$

where $\mathbf{h}_i$ is a vector encoding the context over a window of size $k$ centred around position $i$, $V$ is the vocabulary, and the parameters $\mathbf{V}$ and $\mathbf{U}$ (with rows $\mathbf{v}_w$ and $\mathbf{u}_w$) are matrices referred to as the context and word embeddings. The model is trained to maximise the log pseudo-likelihood of a training corpus; however, due to the high cost of computing the denominator of Equation 1, negative sampling is used as an approximation, learning instead to differentiate data from noise (negative examples). This gives rise to the following optimisation objective

$$\mathcal{O} = \sum_{(w_i, \mathbf{h}_i) \in D} \Big[ \log \sigma(\mathbf{u}_{w_i}^{\top}\mathbf{h}_i) + \sum_{j=1}^{n} \mathbb{E}_{w_j \sim P_n(w)} \log \sigma(-\mathbf{u}_{w_j}^{\top}\mathbf{h}_i) \Big] \qquad (2)$$

where $D$ is the training data, $\sigma$ is the logistic sigmoid, and $n$ is the number of negative examples randomly drawn from a noise distribution $P_n(w)$.
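To make the update rule concrete, the following minimal NumPy sketch performs one stochastic gradient step on this negative-sampling objective; the function and variable names are our own illustration rather than the original word2vec implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_negative_sampling_update(V, U, context_ids, centre_id,
                                  noise_dist, n_neg=5, lr=0.025):
    """One stochastic gradient step on the negative-sampling CBOW objective.
    V, U: context and word embedding matrices, both of shape (|vocab|, dim)."""
    h = V[context_ids].mean(axis=0)                        # context vector h_i
    negatives = np.random.choice(len(U), size=n_neg, p=noise_dist)
    targets = np.concatenate(([centre_id], negatives))
    labels = np.zeros(len(targets))
    labels[0] = 1.0                                        # the centre word is the positive example
    grad = sigmoid(U[targets] @ h) - labels                # d(loss)/d(score) per target
    grad_h = grad @ U[targets]                             # gradient w.r.t. the context vector
    np.add.at(U, targets, -lr * np.outer(grad, h))         # update word embeddings
    np.add.at(V, context_ids, -lr * grad_h / len(context_ids))  # update context embeddings
    return V, U
```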

4 Our Model

Our approach extends CBOW to model bilingual text, using two monolingual corpora and a bilingual dictionary. We believe this data condition is less stringent than requiring parallel or comparable texts as the source of the bilingual signal: it is common for field linguists to construct a bilingual dictionary when studying a new language, as one of the first steps in the language documentation process. Translation dictionaries are a rich information source, capturing much of the lexical ambiguity in a language through translation. For example, the word bank in English might mean a river bank or a financial bank, which correspond to two different Italian translations, sponda and banca. If we are able to learn to select good translations, then this implicitly resolves much of the semantic ambiguity in the language, and accordingly we seek to use this idea to learn better semantic vector representations of words.

4.1 Dictionary replacement

To learn bilingual relations, we use the context in one language to predict the translation of the centre word in the other language. This is motivated by the fact that the context is an excellent means of disambiguating the translation of a word. Our method is closely related to the dictionary replacement approach described in §2; however, we replace only the centre word with a translation while keeping the context fixed. We replace each centre word $w_i$ with a translation $\bar{w}_i$ on the fly during training, predicting $\bar{w}_i$ instead but using the same formulation as Equation 1, albeit with an augmented matrix $\mathbf{U}$ covering the word types of both languages.

The translation $\bar{w}_i$ is selected from the possible translations of $w_i$ listed in the dictionary. The problem of selecting the correct translation from the many options is reminiscent of the setting of expectation maximisation (EM): crosslingual word embeddings would allow accurate translation selection, yet to learn these embeddings we need to know the translations. We therefore propose an EM-inspired algorithm which operates over both monolingual corpora, $D_s$ and $D_t$. A vector combining the representations of both the centre word $w_i$ and its context1 is used to choose the best translation $\bar{w}_i$ into the other language from the bilingual dictionary.2 After selecting the translation, we use $\bar{w}_i$ together with the context vector to make a stochastic gradient update of the CBOW log-likelihood.
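A minimal sketch of this selection step is shown below. It assumes, as one plausible reading of the description above, that the combined representation is the sum of the centre-word embedding and the averaged context embeddings, and that the best translation is the candidate with the highest cosine similarity to it; the helper names are ours.

```python
import numpy as np

def select_translation(V, U, context_ids, centre_id, dictionary):
    """E-step sketch: choose the dictionary translation of the centre word that
    best matches a combined centre-word + context representation.
    dictionary maps a word id to a list of candidate translation ids."""
    candidates = dictionary.get(centre_id, [])
    if not candidates:
        return None                                        # no translation available
    query = U[centre_id] + V[context_ids].mean(axis=0)     # centre word + context
    query = query / (np.linalg.norm(query) + 1e-8)
    cand = U[candidates]
    cand = cand / (np.linalg.norm(cand, axis=1, keepdims=True) + 1e-8)
    best = int(np.argmax(cand @ query))                    # highest cosine similarity
    return candidates[best]
```

The selected translation id then takes the place of the centre word in the CBOW update sketched in §3, which plays the role of the M-step.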

4.2 Joint Training

Words and their translations should appear in very similar contexts. One way to enforce this is to jointly learn to predict both the word and its translation from its monolingual context. This gives rise to the following joint objective function,

$$\mathcal{O} = \sum_{(w_i, \mathbf{h}_i) \in D_s \cup D_t} \Big[ \alpha \log p(w_i \mid \mathbf{h}_i) + (1 - \alpha) \log p(\bar{w}_i \mid \mathbf{h}_i) \Big] \qquad (3)$$

which we optimise with negative sampling as in Equation 2, and where $\alpha$ controls the contribution of the two terms. For our experiments, we set $\alpha = 0.5$. The negative examples are drawn from the combined vocabulary unigram distribution calculated from the combined data $D_s \cup D_t$.
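Under our reading of Equation 3 as a weighted mixture of two negative-sampling terms, one training step might look as follows, reusing cbow_negative_sampling_update from the sketch in §3; since the gradient of a scaled loss is the scaled gradient, weighting each term can be folded into the learning rate.

```python
def joint_update(V, U, context_ids, centre_id, translation_id,
                 noise_dist, alpha=0.5, n_neg=5, lr=0.025):
    """Predict both the centre word and its selected translation from the same
    context; alpha weights the two prediction terms (our reading of Equation 3)."""
    cbow_negative_sampling_update(V, U, context_ids, centre_id,
                                  noise_dist, n_neg=n_neg, lr=lr * alpha)
    if translation_id is not None:                         # skip if no translation was found
        cbow_negative_sampling_update(V, U, context_ids, translation_id,
                                      noise_dist, n_neg=n_neg, lr=lr * (1 - alpha))
    return V, U
```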

4.3 Combining Embeddings

Many vector learning methods learn two embedding spaces, $\mathbf{V}$ and $\mathbf{U}$. Usually only one of them is used in applications; the use of the other is under-studied [15], with the exception of work that uses the linear combination $\mathbf{U} + \mathbf{V}$ [21], with only a minor improvement over a single matrix alone.

We argue that with our model $\mathbf{V}$ is better at capturing monolingual regularities and $\mathbf{U}$ is better at capturing the bilingual signal. The intuition is as follows. Assume we are predicting the word finance and its Italian translation finanze from the context (money, loan, bank, debt, credit), as shown in Figure 1. In $\mathbf{V}$ only the context word representations are updated, and in $\mathbf{U}$ only the representations of finance, finanze and negative samples such as tree and dog are updated. CBOW learns good embeddings because each time it updates the parameters, the words in the context are pushed closer to each other in the space; similarly, the target word and its translation are also pushed closer in the space. This is directly related to the pointwise mutual information between each word and context pair [15].

Figure 1: Example of \mathbf{V} and \mathbf{U} space during training.

Thus, $\mathbf{U}$ is bound to be better at the bilingual lexicon induction task, while $\mathbf{V}$ is better at the monolingual word similarity task.

The question, then, is how to combine $\mathbf{U}$ and $\mathbf{V}$ to produce a better representation. We experiment with several ways of combining them. First, following prior work [21], we can interpolate $\mathbf{U}$ and $\mathbf{V}$ in a post-processing step, i.e.

$$\mathbf{W} = \gamma\, \mathbf{V} + (1 - \gamma)\, \mathbf{U} \qquad (4)$$

where $\gamma$ controls the contribution of each embedding space. Second, we can concatenate $\mathbf{U}$ and $\mathbf{V}$ instead of interpolating, such that $\mathbf{W} = [\mathbf{V}; \mathbf{U}]$, where each row is the concatenation of a word's context and word vectors and the rows range over the combined vocabulary from $D_s \cup D_t$.
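Both post-processing combinations are one-liners; a sketch is given below, with $\gamma$ weighting $\mathbf{V}$ according to our reading of Equation 4.

```python
import numpy as np

def interpolate(V, U, gamma=0.5):
    """Post-processing interpolation of the two embedding spaces (Equation 4)."""
    return gamma * V + (1.0 - gamma) * U

def concatenate(V, U):
    """Concatenation: each word is represented by its context and word vectors."""
    return np.concatenate([V, U], axis=1)
```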

Moreover, we can also fuse $\mathbf{U}$ and $\mathbf{V}$ during training. For each word whose $\mathbf{U}$ representation is considered in Equation 3, including the centre word, its translation and the negative samples, we encourage the model to learn similar representations in both $\mathbf{U}$ and $\mathbf{V}$ by adding a regularization term to the objective in Equation 3 during training,

$$\mathcal{O}_{\text{reg}} = \mathcal{O} - \delta \sum_{w} \lVert \mathbf{u}_w - \mathbf{v}_w \rVert_2^2 \qquad (5)$$

where $\delta$ controls to what degree we bind the two spaces together.
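In stochastic training, such a penalty amounts to an extra step that pulls each updated word's two vectors towards each other; the sketch below assumes the squared-L2 form and the symbol $\delta$ used in our reading of Equation 5.

```python
def regularisation_update(V, U, word_ids, delta=1.0, lr=0.025):
    """Sketch of the extra gradient step implied by a delta * ||u_w - v_w||^2
    penalty (our reading of Equation 5): pull the word and context vectors of
    each updated word towards each other."""
    diff = U[word_ids] - V[word_ids]
    U[word_ids] -= lr * delta * diff
    V[word_ids] += lr * delta * diff
    return V, U
```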

5 Experiment Setup

We want to test the crosslingual properties, monolingual properties and downstream usefulness of our crosslingual word embeddings (CLWE). For the crosslingual property we adopt the bilingual lexicon induction task of Vulić and Moens [28]. For the monolingual property we adopt the word similarity task on common datasets such as WordSim353 and RareWord. To demonstrate the usefulness of our CLWE, we also evaluate on the conventional crosslingual document classification task.

5.1 Monolingual Data

The monolingual data comes mainly from the pre-processed Wikipedia dumps of Al-Rfou et al. [1]. The data is already cleaned and tokenized; we additionally lower-cased all words. Monolingual word embeddings are normally trained on billions of words, but obtaining that much monolingual data is challenging for a low-resource language, so we select only the top 5 million sentences (around 100 million words) for each language.
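For concreteness, the preprocessing amounts to something like the following sketch; the file layout (one tokenised sentence per line) and the function name are assumptions for illustration.

```python
def load_monolingual(path, max_sentences=5_000_000):
    """Lower-case the pre-tokenised Wikipedia text and keep the top N sentences."""
    sentences = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            sentences.append(line.lower().split())
            if len(sentences) >= max_sentences:
                break
    return sentences
```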

5.2 Dictionary

The bilingual dictionary is the only source of bilingual correspondence in our technique. We want a dictionary that covers many languages, so that our approach can be applied widely, including to low-resource languages. We use PanLex, a dictionary which currently covers around 1300 language varieties with about 12 million expressions. The translations in PanLex come from various sources, such as glossaries, dictionaries and automatic inference from other languages. Accordingly, PanLex has high language coverage but often noisy translations.3
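During training we keep every listed translation of a word rather than filtering to one-to-one pairs; the sketch below shows the lookup structure we assume (the pair format is illustrative, e.g. word pairs extracted from a PanLex dump), which would serve as the `dictionary` argument of select_translation above.

```python
from collections import defaultdict

def load_dictionary(pairs, src_vocab, tgt_vocab):
    """Build a many-to-many translation table from (source, target) word pairs,
    keeping *all* candidate translations of each source word."""
    table = defaultdict(list)
    for src, tgt in pairs:
        if src in src_vocab and tgt in tgt_vocab:
            table[src_vocab[src]].append(tgt_vocab[tgt])
    return table
```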

6 Bilingual Lexicon Induction

Given a word in a source language, the bilingual lexicon induction (BLI) task is to predict its translation in the target language. Vulić and Moens [28] proposed this task to test crosslingual word embeddings. The task is difficult because it is evaluated using recall at one, where each source word has only a single gold translation; the model must be very discriminative in order to score well.
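The evaluation reduces to ranking all target-language words by cosine similarity to each source word; a sketch of Recall@k under that reading (the names are ours):

```python
import numpy as np

def bli_recall_at_k(W_src, W_tgt, gold, k=1):
    """Recall@k for bilingual lexicon induction: for each source word, rank all
    target words by cosine similarity and check whether the single gold
    translation is in the top k. gold maps a source word id to its gold target id."""
    W_src = W_src / np.linalg.norm(W_src, axis=1, keepdims=True)
    W_tgt = W_tgt / np.linalg.norm(W_tgt, axis=1, keepdims=True)
    hits = 0
    for s, t in gold.items():
        top_k = np.argsort(-(W_tgt @ W_src[s]))[:k]
        hits += int(t in top_k)
    return hits / len(gold)
```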

We build CLWE for three language pairs, it-en, es-en and nl-en, using similar parameter settings to prior work.4 The remaining tunable parameters in our system are the regularization sensitivity $\delta$ from Equation 5 and the choice of algorithm for combining embeddings (see §8).

Qualitative evaluation. We jointly train the model to predict both the centre word $w_i$ and its translation $\bar{w}_i$, combine $\mathbf{U}$ and $\mathbf{V}$ during training with the chosen regularization sensitivity $\delta$, and use the combined embeddings as the output for each language pair. Table ? shows the top 10 closest words in both the source and target languages according to cosine similarity. Note that the model correctly identifies the translation in English, and that the top 10 words in both source and target languages are highly related. This qualitative evaluation is an initial demonstration of the ability of our CLWE to capture both bilingual and monolingual relationships.

Quantitative evaluation
Table 1: Bilingual lexicon induction performance from Spanish, Italian and Dutch to English, reporting Recall@1 and Recall@5. "+ Panlex" and "+ Wikt" denote our reimplementation of the dictionary replacement baseline [9] using the PanLex and Wiktionary dictionaries, respectively; all our models use PanLex as the dictionary. The best results are those of our combined models (bottom rows).

Model                              es-en          it-en          nl-en          Average
                                   R@1   R@5      R@1   R@5      R@1   R@5      R@1   R@5
Replacement [9] + Panlex           37.6  63.6     26.6  56.3     49.8  76.0     38.0  65.3
Replacement [9] + Wikt             61.6  78.9     62.6  81.1     65.6  79.7     63.3  79.9
BilBOWA [10]                       51.6  -        55.7  -        57.5  -        54.9  -
Vulić and Moens [28]               68.9  -        68.3  -        39.2  -        58.8  -
Our model (random selection)       41.1  62.0     57.4  75.4     34.3  55.5     44.3  64.3
Our model (EM selection)           67.3  79.5     66.8  82.3     64.7  82.4     66.3  81.4
  + joint model                    68.0  80.5     70.5  83.3     68.8  84.0     69.1  82.6
  + combine embeddings             71.6  84.4     78.7  89.5     76.9  90.1     75.7  88.0
  + lemmatization                  71.8  85.0     79.6  90.4     77.1  90.6     76.2  88.7

Table 1 shows our results compared with prior work. We reimplemented the dictionary replacement baseline [9] using the PanLex and Wiktionary dictionaries. The result with PanLex is substantially worse than with Wiktionary, confirming our hypothesis from §2: the context can become very biased if we simply replace words in the training data with random translations, especially with a noisy dictionary such as PanLex.

Our model with random translation selection is similar to the replacement baseline [9] using the PanLex dictionary. The biggest difference is that they replace words throughout the training data (both context and centre words), whereas we fix the context and replace only the centre word. For a high-coverage yet noisy dictionary such as PanLex, our approach gives a better average score. Our non-joint model with EM translation selection5 outperforms random selection by a significant margin.

Our joint model, described in Equation 3, which predicts both the target word and its translation, further improves performance, especially for Dutch. We then use Equation 5 to combine the context embeddings and word embeddings for all three language pairs; this modification during training substantially improves performance. More importantly, all our improvements are consistent across the three language pairs and both evaluation metrics, showing the robustness of our models.

Our combined model outperforms previous approaches by a large margin. Vulić and Moens [28] used bilingual comparable data, which may be hard to obtain for some language pairs; their performance on Dutch is poor because their comparable corpus between English and Dutch is small. They also use a POS tagger and lemmatizer to keep only nouns and to reduce morphological complexity during training, and these tools may not be available for many languages. For a fairer comparison with their work, we also use the same TreeTagger [23] to lemmatize the output of our combined model before evaluation. Table 1 (+ lemmatization) shows only a small improvement, demonstrating that our model is already good at disambiguating morphology. For example, the top 2 translations of the Spanish word lenguas in English are languages and language, correctly preferring the plural translation.

7 Monolingual Word Similarity

Now we consider the efficacy of our CLWE on monolingual word similarity, using an experimental setting similar to prior work [18]. We evaluate English monolingual similarity on WordSim353 (WS-En) and RareWord (RW-En), and on the German version of WordSim353 (WS-De) [8]. Each of these datasets contains word pairs with similarity scores assigned by human annotators; the system must produce scores that correlate with the human judgements.
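The evaluation itself is straightforward: score each pair by the cosine similarity of its embeddings and measure Spearman correlation against the human scores, roughly as in this sketch.

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity(W, vocab, pairs):
    """Spearman correlation between embedding cosine similarities and human
    scores. pairs: list of (word1, word2, human_score); out-of-vocabulary
    pairs are skipped."""
    model_scores, human_scores = [], []
    for w1, w2, score in pairs:
        if w1 in vocab and w2 in vocab:
            v1, v2 = W[vocab[w1]], W[vocab[w2]]
            cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            model_scores.append(cos)
            human_scores.append(score)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```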


We train the model as described in Equation 5, which is exactly the combined model of Table 1. Since the evaluation involves German and English word similarity, we train the CLWE for the English-German pair. Table ? shows the performance of our combined model compared with several baselines. Our combined model outperforms both the best published crosslingual embeddings trained on bitext and the best trained on monolingual data.6

We also train a monolingual CBOW model with the same parameter settings on the monolingual data for each language. Surprisingly, our combined model performs better than this monolingual CBOW baseline, bringing our results closer to the monolingual state of the art on each dataset. However, the best monolingual methods use massive monolingual data [24], WordNet, and the output of commercial search engines [31].

Next we examine where the gain of our combined model over the monolingual CBOW model comes from. First, we compare the combined model and the joint model against the monolingual CBOW model (Table ?). The improvement appears to come mostly from combining $\mathbf{U}$ and $\mathbf{V}$: if we apply the combining algorithm to the monolingual CBOW model (CBOW + combine), we also observe an improvement. Clearly most of the improvement is from combining $\mathbf{U}$ and $\mathbf{V}$; however, our $\mathbf{U}$ and $\mathbf{V}$ are much more complementary. The remaining improvement can be explained by the observation that a dictionary can improve monolingual accuracy by linking synonyms [7]. For example, since plane, airplane and aircraft share the same Italian translation aereo, the model encourages these words to be closer in the embedding space.

8 Model selection

Combining the context embeddings and the word embeddings improves both monolingual similarity and bilingual lexicon induction. In §4.3 we introduced several combination methods, applied either in post-processing (interpolation and concatenation) or during training (regularization). In this section, we justify our parameter and model choices.

Figure 2: Performance of word embeddings interpolated using different values of \gamma evaluated using BLI (Recall@1, Recall@5) and English monolingual WordSim353 (WS-En).

We use the English-Italian pair for tuning purposes, considering the value of $\gamma$ in Equation 4. Figure 2 shows the performance for different values of $\gamma$. The two extremes $\gamma = 0$ and $\gamma = 1$ correspond to no interpolation, where we use only $\mathbf{U}$ or only $\mathbf{V}$, respectively. As $\gamma$ increases, performance on WS-En increases yet BLI performance decreases. These results confirm our hypothesis in §4.3 that $\mathbf{U}$ is better at capturing bilingual relations and $\mathbf{V}$ is better at capturing monolingual relations. As a compromise, we choose an intermediate value of $\gamma$ in our experiments. Similarly, we tune the regularization sensitivity $\delta$ in Equation 5, which combines the embedding spaces during training. We test several values of $\delta$, using $\mathbf{U}$, $\mathbf{V}$ or the interpolation of both as the learned embeddings, evaluated on the same BLI and WS-En tasks, and select the best-performing setting.

Table ? shows the performance with and without the combining algorithms described in §4.3. As a compromise between the monolingual and crosslingual tasks, we choose regularization during training, combined with interpolation of $\mathbf{U}$ and $\mathbf{V}$, as the combination algorithm. All in all, we apply the regularization algorithm for combining $\mathbf{U}$ and $\mathbf{V}$, with the same $\delta$ and the same combined output, for all language pairs without further tuning.

9 Crosslingual Document Classification

In this section, we evaluate our CLWE on a downstream crosslingual document classification (CLDC) task, in which a document classifier is trained on a source language and then applied directly to classify documents in the target language. This is convenient for a low-resource target language where we have no document annotations. The experimental setup follows Klementiev et al. [13].7 The train and test data are from the Reuters RCV1/RCV2 corpora [16].

Documents are represented as bags of word embeddings weighted by tf.idf. A multi-class classifier is trained using the averaged perceptron algorithm on 1000 documents in the source language and tested on 5000 documents in the target language. Because we use CLWE, the document representations in the target language lie in the same space as those in the source language.
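A sketch of this document representation under one plausible reading (a tf.idf-weighted average of the crosslingual embeddings; the normalisation and names are our assumptions):

```python
import numpy as np
from collections import Counter

def document_vector(tokens, W, vocab, idf):
    """tf-idf weighted average of crosslingual word embeddings for one document.
    W: crosslingual embedding matrix, idf: dict mapping word -> idf weight."""
    vec = np.zeros(W.shape[1])
    total = 0.0
    for tok, tf in Counter(tokens).items():
        if tok in vocab and tok in idf:
            weight = tf * idf[tok]
            vec += weight * W[vocab[tok]]
            total += weight
    return vec / total if total > 0 else vec
```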


We build the en-de CLWE using the combined model described in Equation 5. Following prior work, we also use monolingual data from the RCV1/RCV2 corpora [13].8

Table ? shows the CLDC results for various CLWE. Despite its simplicity, our model achieves competitive performance. Note that, aside from our model, all others in Table ? use a large bitext (Europarl), which may not exist for many low-resource languages, limiting their applicability.

10 Conclusion

Previous CLWE methods often impose high resource requirements yet achieve low accuracy. We introduce a simple framework based on a large, noisy dictionary. We model polysemy using EM-based translation selection during training, learning bilingual correspondences from monolingual corpora. Our algorithm trains efficiently on massive amounts of monolingual data, capturing both the monolingual and bilingual properties of language. This allows us to achieve state-of-the-art performance on the bilingual lexicon induction task, and competitive results on monolingual word similarity and crosslingual document classification. Our combination techniques during training, especially regularization, are highly effective and could also be used to improve monolingual word embeddings.

Footnotes

  1. Using both embeddings gives a small improvement compared to using the context vector alone.
  2. We also experimented with using expectations over translations, as per standard EM, with a slight degradation in results.
  3. We also experimented with a growing crowd-sourced dictionary from Wiktionary. Our initial observation is that its translation quality is better but its coverage is lower. For example, for the English-Italian dictionary, PanLex and Wiktionary have coverage of 42.1% and 16.8% respectively for the top 100k most frequent English words from Wikipedia, with an average of 5.2 and 1.9 translations per word respectively. We observed a similar trend when using the PanLex and Wiktionary dictionaries in our model; however, using PanLex results in much better performance. We could run the model on the combined dictionary from both PanLex and Wiktionary, but we leave this for future work.
  4. Default learning rate of 0.025, negative sampling with 25 samples, subsampling enabled, and training for 15 epochs; the embedding dimension and window size follow the settings of the prior work we compare against.
  5. Optimizing Equation 3 with $\alpha = 0$, i.e. predicting only the translation.
  6. This baseline uses the PanLex dictionary.
  7. The data split and code are kindly provided by the authors.
  8. We randomly sampled documents from the RCV1 and RCV2 corpora, selecting around 85k documents to form 400k monolingual sentences for each of English and German. For each document, we perform basic preprocessing: lower-casing, removing tags and tokenizing. These monolingual data are then concatenated with the monolingual data from Wikipedia to form the final training data.

References

  1. Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP.
  2. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate.
  3. Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations.
  4. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections.
  5. Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser.
  6. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory.
  7. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation.
  8. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited.
  9. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings.
  10. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed representations without word alignments.
  11. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics.
  12. David Kamholz, Jonathan Pool, and Susan Colowick. 2014. PanLex: Building a resource for panlingual lexical translation.
  13. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words.
  14. Tomáš Kočiský, Karl Moritz Hermann, and Phil Blunsom. 2014. Learning bilingual word representations by marginalizing alignments.
  15. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization.
  16. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A new benchmark collection for text categorization research.
  17. Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology.
  18. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind.
  19. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation.
  20. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations.
  21. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation.
  22. Philip Resnik and David Yarowsky. 1999. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation.
  23. Helmut Schmid. 1995. Improvements in part-of-speech tagging with an application to German.
  24. Noam Shazeer, Ryan Doherty, Colin Evans, and Chris Waterson. 2016. Swivel: Improving embeddings by noticing what's missing.
  25. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
  26. Anders Søgaard, Željko Agić, Héctor Martínez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for cross-lingual NLP.
  27. Oscar Täckström, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure.
  28. Ivan Vulić and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction.
  29. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119–129. Association for Computational Linguistics.
  30. David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora.
  31. Wen-tau Yih and Vahed Qazvinian. 2012. Measuring word relatedness using heterogeneous vector space models.