Abstract: With the rise of online abuse, the NLP community has begun investigating the use of neural architectures to generate counterspeech that can "counter" the vicious tone of abusive speech and dilute/ameliorate its rippling effect over the social network. However, most efforts so far have focused primarily on English. To bridge the gap for low-resource languages such as Bengali and Hindi, we create a benchmark dataset of 5,062 abusive speech/counterspeech pairs, of which 2,460 are in Bengali and 2,602 are in Hindi. To establish an effective benchmark, we implement several baseline models that explore various interlingual transfer mechanisms under different configurations for generating suitable counterspeech. We observe that the monolingual setup yields the best performance. Further, with synthetic transfer, language models can generate counterspeech to some extent; in particular, transferability is better when the languages belong to the same language family.
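As a rough illustration of the kind of baseline the abstract refers to, the sketch below fine-tunes a multilingual sequence-to-sequence model on abusive-speech/counterspeech pairs in a monolingual setup. It is not the authors' implementation: the model choice (mT5), the placeholder texts, and the single training step are assumptions made for illustration only.

```python
# A minimal sketch, not the authors' implementation: one possible monolingual
# baseline that fine-tunes a multilingual seq2seq model on
# abusive-speech/counterspeech pairs. Model name, placeholder texts, and
# generation settings are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# Hypothetical pair; in practice these would be Bengali or Hindi texts
# drawn from the benchmark dataset.
abusive_post = "placeholder abusive post"
counterspeech = "placeholder counterspeech reply"

inputs = tokenizer(abusive_post, return_tensors="pt", truncation=True)
labels = tokenizer(text_target=counterspeech, return_tensors="pt",
                   truncation=True).input_ids

# One supervised training step: cross-entropy loss over counterspeech tokens.
loss = model(**inputs, labels=labels).loss
loss.backward()

# Inference: generate counterspeech for an abusive post with beam search.
model.eval()
generated = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In an interlingual-transfer configuration, the same pipeline would be fine-tuned on pairs from one language and evaluated on the other, which is the setting where the abstract notes transferability improves for languages from the same family.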
Paper Type: long
Research Area: Resources and Evaluation
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: Hindi, Bengali