No offence, Bert - I insult only humans! Multilingual sentence-level attack on toxicity detection networks

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Language Modeling and Analysis of Language Models
Submission Track 2: NLP Applications
Keywords: toxicity detection, adversarial attack, multilingual, neural networks
TL;DR: If we add positive words to a toxic message, we can fool toxicity detection.
Abstract: We introduce a simple yet efficient sentence-level attack on black-box toxicity detector models. By adding several positive words or sentences to the end of a hateful message, we are able to change the prediction of a neural network and bypass the toxicity detection check. This approach is shown to work on seven languages from three different language families. We also describe a defence mechanism against this attack and discuss its limitations.
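The attack described in the abstract can be sketched in a few lines. The snippet below is purely illustrative: the toxicity detector here is a toy keyword heuristic standing in for the black-box neural models evaluated in the paper, and the suffix list, threshold, and function names are assumptions, not the paper's actual setup.

```python
# Illustrative sketch of the sentence-level attack: append positive
# sentences to a toxic message until a (stand-in) detector no longer
# flags it. Not the paper's actual models or data.

POSITIVE_SUFFIXES = ["I love you.", "Have a wonderful day.", "You are amazing."]

def dummy_toxicity_score(text: str) -> float:
    """Toy stand-in for a black-box classifier: fraction of words
    found in a tiny 'toxic' lexicon."""
    toxic_lexicon = {"hate", "stupid", "idiot"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in toxic_lexicon for w in words) / max(len(words), 1)

def attack(message: str, threshold: float = 0.2) -> str:
    """Append positive suffixes until the score drops below the
    decision threshold (or the suffix list is exhausted)."""
    attacked = message
    for suffix in POSITIVE_SUFFIXES:
        if dummy_toxicity_score(attacked) < threshold:
            break
        attacked = attacked + " " + suffix
    return attacked

original = "You are a stupid idiot and I hate you."
adversarial = attack(original)
```

Because the positive suffixes dilute the proportion of toxic tokens without altering the hateful content, the score drops below the threshold while the original message survives intact, which is the core idea of the attack.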
Submission Number: 5417