Don't be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks
TL;DR: We propose adversarial attacks from the perspective of malicious users and introduce pooling strategies to defend against them.
Abstract: Offensive language detection is an important task for filtering out abusive expressions and improving online user experiences. However, malicious users often attempt to evade filtering systems by introducing textual noise. In this paper, we frame these evasion tactics as user-intended adversarial attacks that insert special symbols or exploit distinctive features of the Korean language. Furthermore, we introduce simple yet effective layer-wise pooling strategies to defend against the proposed attacks, drawing on the preceding layers rather than only the last layer to capture both offensiveness and token embeddings. We demonstrate that these pooling strategies remain robust to performance degradation even as the attack rate increases, without training on such patterns. Notably, we find that models pre-trained on clean texts, when equipped with these pooling strategies, can detect attacked offensive language with performance comparable to that of models pre-trained on noisy texts.
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: Korean
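The abstract describes layer-wise pooling over the preceding layers rather than relying only on the final layer, but does not spell out the exact procedure. As a rough illustration only, the sketch below mean-pools the hidden states of the last few transformer layers and then mean-pools over tokens; the checkpoint name (klue/bert-base) and the number of pooled layers k are placeholder assumptions, not the paper's configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder Korean encoder; the paper's actual backbone may differ.
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModel.from_pretrained("klue/bert-base")

text = "example input with n0isy sp3lling"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states: tuple of (num_layers + 1) tensors, each (batch, seq_len, hidden)
hidden_states = torch.stack(outputs.hidden_states, dim=0)

# Layer-wise pooling: average the last k layers instead of using only the final one.
k = 4  # assumed value for illustration
layer_pooled = hidden_states[-k:].mean(dim=0)  # (batch, seq_len, hidden)

# Token pooling: masked mean over tokens to get a sentence-level representation.
mask = inputs["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
sentence_embedding = (layer_pooled * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, hidden)
```

The resulting sentence embedding would then feed a classification head; averaging intermediate layers is one common way to retain token-level cues that the final layer alone may smooth away.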