Learning to love diligent trolls: Accounting for rater effects in the dialogue safety task

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Efficient Methods for NLP
Submission Track 2: Dialogue and Interactive Systems
Keywords: chatbots, trolls, safety, automated essay scoring, latent class analysis
TL;DR: To clean trolled training data of chatbot safety judgments, methodology from automated essay scoring overcomes the limitations of cross-validation: it is efficient (no GPU needed) and robust (accurate even when consistent "diligent" trolls are the majority).
Abstract: Chatbots risk generating offensive utterances, which must be avoided. Post-deployment, one way for a chatbot to continuously improve is to source utterance/label pairs from feedback by live users. However, some users are trolls, who provide training examples with incorrect labels. To de-troll training data, previous work removed training examples that have high user-aggregated cross-validation (CV) error. However, CV is expensive, and in a coordinated attack, CV may be overwhelmed by trolls that are numerous and consistent among themselves. In the present work, I address both limitations by proposing a solution inspired by methodology in automated essay scoring (AES): have multiple users rate each utterance, then perform latent class analysis (LCA) to infer the correct labels. Because it requires no GPU computations, LCA is inexpensive. In experiments, I found that the AES-like solution can infer training labels with high accuracy when trolls are consistent, even when trolls are the majority.
Submission Number: 5137
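
The abstract describes the LCA step only at a high level, so here is a minimal sketch of what such an inference procedure could look like, assuming a Dawid-Skene-style EM formulation of latent class analysis over binary safe/unsafe labels with per-rater confusion matrices. The function name `lca_infer_labels`, its parameters, and the toy simulation are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an LCA/EM step for de-trolling binary safety labels,
# in the spirit of the abstract: several users rate each utterance, and a
# Dawid-Skene-style EM infers the true label. Illustrative, not the paper's code.
import numpy as np

def lca_infer_labels(ratings, n_iter=100, smooth=1.0):
    """Infer posterior P(true label = 1) per item from noisy rater labels.

    ratings: (n_items, n_raters) float array with entries in {0.0, 1.0}
             and np.nan where a rater did not rate that item.
    """
    n_items, n_raters = ratings.shape
    observed = ~np.isnan(ratings)
    r = np.nan_to_num(ratings)  # NaN -> 0.0; masked by `observed` below

    # Initialize the item posteriors with the (soft) majority vote.
    q = np.where(observed, r, 0.0).sum(1) / np.maximum(observed.sum(1), 1)

    for _ in range(n_iter):
        # M-step: class prior and each rater's 2x2 confusion matrix.
        pi1 = np.clip(q.mean(), 1e-6, 1 - 1e-6)  # P(true class = 1)
        w = np.stack([1.0 - q, q], axis=1)       # (n_items, 2) soft class weights
        counts = np.zeros((n_raters, 2, 2))
        for k in (0, 1):
            said_k = (observed & (r == k)).astype(float)
            counts[:, :, k] = said_k.T @ w  # expected "rater j said k on class c"
        theta = (counts + smooth) / (counts + smooth).sum(axis=2, keepdims=True)

        # E-step: per-item log-likelihood of the observed ratings under each class.
        log_lik = np.zeros((n_items, 2))
        for c in (0, 1):
            lp = (r == 0) * np.log(theta[:, c, 0]) + (r == 1) * np.log(theta[:, c, 1])
            log_lik[:, c] = np.where(observed, lp, 0.0).sum(1)
        log_post = log_lik + np.log([1.0 - pi1, pi1])
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        post = np.exp(log_post)
        q = post[:, 1] / post.sum(1)
    return q

# Toy check: 2 helpful raters (90% accurate) vs. 5 consistent trolls (95% flipped).
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=500)

def simulate(accuracy, n):
    correct = rng.random((truth.size, n)) < accuracy
    return np.where(correct, truth[:, None], 1 - truth[:, None]).astype(float)

ratings = np.hstack([simulate(0.90, 2), simulate(0.05, 5)])
q = lca_infer_labels(ratings)
acc = ((q > 0.5) == truth).mean()
# LCA identifies the two latent classes only up to relabeling, so in a
# troll-majority regime the sign may need fixing with a few trusted items.
print(max(acc, 1.0 - acc))
```

Note the design point the toy check surfaces: consistent trolls form a coherent latent class, which is precisely what makes their ratings recoverable rather than mere noise, but the latent classes are identified only up to relabeling. In practice a small set of trusted gold-labeled utterances (or a prior on rater honesty) would be needed to pin down which class corresponds to the correct label; the paper may handle this differently.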