Mind Your Language: Abuse and Offense Detection for Code-Switched Languages


Raghav Kapoor, MIDAS Lab, NSIT-Delhi (raghavk.co@nsit.net.in)
Yaman Kumar, Adobe Systems (ykumar@adobe.com)
Kshitij Rajput, MIDAS Lab, NSIT-Delhi (kshitijr.co@nsit.net.in)
Rajiv Ratn Shah, IIIT-Delhi (rajivratn@iiitd.ac.in)
Ponnurangam Kumaraguru, IIIT-Delhi (pk@iiitd.ac.in)
Roger Zimmermann, NUS, Singapore (rogerz@comp.nus.edu.sg)

In multilingual societies like the Indian subcontinent, the use of code-switched languages is widespread and convenient for users. In this paper, we study offense and abuse detection for the code-switched pair of Hindi and English (i.e., Hinglish), the most widely spoken such pair. The task is made difficult by the non-fixed grammar, vocabulary, semantics and spellings of Hinglish. We apply transfer learning and build an LSTM-based model for hate speech classification. This model surpasses the performance of the current best models, establishing itself as the state of the art in the largely unexplored domain of Hinglish offensive text classification. We also release our model and the trained embeddings for research purposes.




Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.


With the penetration of the Internet among the masses, the amount of content posted on social media channels has surged. In the Indian subcontinent specifically, the number of Internet users has crossed 500 million (https://bit.ly/2MCXz2Q) and is rising rapidly due to inexpensive data (https://bit.ly/2NUjOGh). With this rise comes the problem of hateful, offensive and abusive posts on social media. Although there are many previous works dealing with hate speech in Hindi and English (the top two languages in India), very few address the code-switched version (Hinglish) of the two (?). This is partially due to the following reasons: (i) Hinglish has no fixed grammar or vocabulary; it derives part of its semantics from the Devanagari script and another part from the Roman script. (ii) Hinglish speech and writing mix words spoken in Hindi and in English, but written in the Roman script. This makes the spellings variable and dependent on the writer of the text. Code-switched languages therefore present tough challenges for parsing and extracting meaning from text. For instance, the sentence "Modiji foreign yatra par hai" is in Hinglish. A roughly correct translation would be "Mr. Modi is on a foreign tour". Even this translation is imperfect, since there is no direct translation for the word ji, which is used to show respect; a verbatim translation would read "Mr. Modi foreign tour on is". Moreover, the word yatra can have phonetic variations, resulting in multiple spellings such as yatra, yaatra, yaatraa, etc. Meanwhile, the problem of hate speech has been rising in India, and under government policy and the rules of the various social networks, one may not misuse the right to speech to abuse another community or religion. The difficulties associated with the Hinglish language make it challenging to automatically detect and ban such speech.

With this in mind, we build a transfer learning based model for the code-switched language Hinglish, which outperforms the baseline model of (?). We also release the trained embeddings and model.

Hinglish   English   Hinglish   English
acha       good      gunda      thug
s**la      blo*dy    ra*di      h*oker
Table 1: Examples of word pairs in the Hinglish-English dictionary
Category        Tweet                                               Translation
Benign          sache sapooto aap ka balidan hamesha yaad rahega    True sons, your sacrifice will always be remembered.
Hate-inducing   Bik gya Porkistan                                   Porkistan (derogatory term for Pakistan) has been sold.
Abusive         Kis m*darch*d ki he giri hui harkt                  Which m*therf*cker has done this?
Table 2: Examples of tweets in the dataset and their translations


Our methodology consists of the following steps: pre-processing the datasets, training word embeddings, training the classifier model, and then fine-tuning it on the HEOT dataset.


In this work, we use the dataset released by (?) and the HEOT dataset provided by (?). The datasets pass through the following processing steps: (i) removal of punctuation, stopwords, URLs, numbers, emoticons, etc.; (ii) transliteration using the Xlit-Crowd conversion dictionary (https://github.com/chsasank/indic-transliteration) and translation of each word to English using a Hindi-to-English dictionary (http://www.cfilt.iitb.ac.in/~hdict/webinterface_user/). To deal with spelling variations, we manually added common variants of popular Hinglish words. The final dictionary comprised 7,200 word pairs. Additionally, to handle profane words not present in Xlit-Crowd, we built a profanity dictionary of 209 profane words. Table 1 gives some examples from the dictionary.
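The pipeline above can be sketched as follows. This is a minimal illustration, not the released code: the dictionary contents, stopword list, and regular expressions here are illustrative stand-ins for the actual Xlit-Crowd, Hindi-to-English, and profanity dictionaries.

```python
import re

# Hypothetical stand-ins for the Xlit-Crowd transliteration dictionary,
# the Hindi-to-English dictionary, and the profanity dictionary.
XLIT_DICT = {"yatra": "यात्रा", "yaatra": "यात्रा"}   # Roman -> Devanagari
HI_EN_DICT = {"यात्रा": "tour"}                       # Devanagari -> English
PROFANITY_DICT = {}                                   # profane Hinglish -> English

URL_RE = re.compile(r"https?://\S+")
PUNCT_RE = re.compile(r"[^\w\s]")
STOPWORDS = {"hai", "ka", "ki"}   # illustrative Hinglish stopwords

def preprocess(tweet):
    """Clean a Hinglish tweet and map each token to English where possible."""
    text = URL_RE.sub(" ", tweet.lower())
    text = PUNCT_RE.sub(" ", text)            # drops punctuation and emoticons
    tokens = [t for t in text.split() if t not in STOPWORDS and not t.isdigit()]
    out = []
    for tok in tokens:
        if tok in PROFANITY_DICT:             # profane words have their own map
            out.append(PROFANITY_DICT[tok])
        elif tok in XLIT_DICT:                # transliterate, then translate
            out.append(HI_EN_DICT.get(XLIT_DICT[tok], tok))
        else:
            out.append(tok)                   # English word or OOV: keep as-is
    return out
```

Spelling variants are handled simply by adding extra keys (e.g. both "yatra" and "yaatra") that map to the same Devanagari form.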

Training Word Embeddings

We tried the GloVe (?) and Twitter word2vec (?) code for training embeddings on the processed tweets. The embeddings were trained on both the dataset provided by (?) and HEOT; they help the model learn distributed representations of tweets. After experimentation, we fixed the embedding size at 100.
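For illustration, the 100-dimensional vectors produced by the GloVe toolkit can be loaded from its plain-text output format ("word v1 v2 ... v100" per line). The sketch below loads such vectors and averages them into a single tweet vector; note this averaging is only a simplification for illustration, since the actual model consumes token sequences through an embedding layer.

```python
import numpy as np

# Stand-in for the real GloVe output file: one word followed by 100 values.
GLOVE_LINES = [
    "good " + " ".join(["0.1"] * 100),
    "tour " + " ".join(["0.2"] * 100),
]

def load_glove(lines):
    """Parse GloVe's text format into a word -> vector dictionary."""
    vecs = {}
    for line in lines:
        word, *values = line.split()
        vecs[word] = np.asarray(values, dtype=np.float32)
    return vecs

def tweet_vector(tokens, vecs, dim=100):
    """Average the embeddings of in-vocabulary tokens (zeros if none)."""
    hits = [vecs[t] for t in tokens if t in vecs]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)

vecs = load_glove(GLOVE_LINES)
v = tweet_vector(["good", "tour", "oov"], vecs)   # OOV tokens are skipped
```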

Model                                               Accuracy
Davidson et al.                                     0.57
Our model with embeddings trained on GloVe          0.87
Our model with embeddings trained on word2vec       0.82
Our model with pre-trained word2vec embeddings      0.59
Mathur et al.                                       0.83
Table 3: Comparison of accuracy scores on the HEOT dataset
Model                                               Accuracy
Davidson et al.                                     0.90
Our model with embeddings trained on GloVe          0.89
Our model with embeddings trained on word2vec       0.86
Mathur et al.                                       0.75
Table 4: Comparison of accuracy scores on the (?) dataset

Classifier Model

Both the HEOT and (?) datasets contain tweets annotated into three categories: offensive, abusive and none (or benign). Some examples from the dataset are shown in Table 2. We use an LSTM-based classifier to sort tweets into these three categories. An overview of the model is given in Figure 1. The model consists of one LSTM layer followed by three dense layers. The LSTM layer uses a dropout value of 0.2. Categorical cross-entropy loss was used for the last layer due to the presence of multiple classes, and we use the Adam optimizer along with L2 regularisation to prevent overfitting. As indicated in Figure 1, the model was initially trained on the dataset provided by (?), and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage. The model hyperparameters were selected experimentally by trying a large number of combinations through grid search.
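The architecture described above can be sketched in Keras as follows. This is a hedged reconstruction, not the released model: the vocabulary size, sequence length, hidden widths (64, 32), and L2 coefficient are illustrative assumptions; only the 100-d embeddings, the single LSTM layer with dropout 0.2, the three dense layers, categorical cross-entropy, Adam, and L2 regularisation come from the text.

```python
from tensorflow.keras import Input, Model, regularizers
from tensorflow.keras.layers import Dense, Embedding, LSTM

VOCAB_SIZE, EMB_DIM, MAX_LEN = 10000, 100, 30   # illustrative sizes

inp = Input(shape=(MAX_LEN,))
x = Embedding(VOCAB_SIZE, EMB_DIM)(inp)          # initialised from our embeddings
x = LSTM(64, dropout=0.2)(x)                     # dropout 0.2 as described above
x = Dense(64, activation="relu",
          kernel_regularizer=regularizers.l2(1e-3))(x)
x = Dense(32, activation="relu",
          kernel_regularizer=regularizers.l2(1e-3))(x)
out = Dense(3, activation="softmax")(x)          # offensive / abusive / benign
model = Model(inp, out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Transfer learning: first fit on the Davidson et al. data, then re-fit the
# same weights on the smaller HEOT set, e.g.:
# model.fit(davidson_X, davidson_y, ...); model.fit(heot_X, heot_y, ...)
```

Re-fitting the same compiled model on HEOT fine-tunes all layers, which matches the two-stage training the paper describes.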


Table 3 shows the performance of our model (after training on (?)) with the two types of embeddings, compared against the models of (?) and (?) on the HEOT dataset, averaged over three runs. We also report results with pre-trained embeddings. As the table shows, our model with GloVe embeddings outperforms all other models. For comparison purposes, Table 4 evaluates the same models on the dataset of (?).

Figure 1: LSTM based model for tweet classification


In this paper, we presented a pipeline which, given Hinglish text, classifies it into three categories: offensive, abusive and benign. This LSTM-based model performs better than existing systems. We also release the code, the dictionary and the embeddings produced in the process. We believe this model will be useful for hate speech detection in code-switched languages.


  • [Davidson et al. 2017] Davidson, T.; Warmsley, D.; Macy, M.; and Weber, I. 2017. Automated hate speech detection and the problem of offensive language. arXiv preprint arXiv:1703.04009.
  • [Godin et al. 2015] Godin, F.; Vandersmissen, B.; De Neve, W.; and Van de Walle, R. 2015. Multimedia lab acl wnut ner shared task: Named entity recognition for twitter microposts using distributed word representations. In Proceedings of the Workshop on Noisy User-generated Text, 146–153.
  • [Mathur et al. 2018] Mathur, P.; Shah, R.; Sawhney, R.; and Mahata, D. 2018. Detecting offensive tweets in hindi-english code-switched language. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, 18–26.
  • [Pennington, Socher, and Manning 2014] Pennington, J.; Socher, R.; and Manning, C. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 1532–1543.