LionGuard: A Contextualized Moderation Classifier to Tackle Localized Unsafe Content

Published: 19 Jan 2025 · Last Modified: 26 Oct 2025 · 31st International Conference on Computational Linguistics: Industry Track · CC BY 4.0
Abstract: As large language models (LLMs) become increasingly prevalent in a wide variety of applications, concerns about the safety of their outputs have grown. Most efforts at safety-tuning or moderation today take a predominantly Western-centric view of safety, especially for toxic, hateful, or violent speech. In this paper, we describe LionGuard, a Singapore-contextualized moderation classifier that can serve as a guardrail against unsafe LLM usage. When assessed on Singlish data, LionGuard outperforms existing widely-used moderation APIs, which are not fine-tuned for the Singapore context, by at least 14% (binary) and up to 51% (multi-label). Our work highlights the benefits of localization for moderation classifiers and presents a practical and scalable approach for low-resource languages, particularly English-based creoles.
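The abstract distinguishes two evaluation settings, binary (safe/unsafe) and multi-label (per-category scores). A minimal sketch of how a consumer might reduce per-category scores to both outputs is below; the `moderate` function, category names, and scores are illustrative placeholders and not LionGuard's actual API or taxonomy.

```python
# Hypothetical sketch: consuming a moderation classifier's per-category
# scores in both binary and multi-label form. Names and values are made up.

def moderate(scores: dict[str, float], threshold: float = 0.5) -> dict:
    """Return a binary unsafe flag plus the list of triggered categories."""
    flagged = [label for label, s in scores.items() if s >= threshold]
    return {"unsafe": bool(flagged), "categories": flagged}

# Example per-category scores such a classifier might emit (illustrative).
example_scores = {"hateful": 0.82, "harassment": 0.10, "violent": 0.61}
result = moderate(example_scores)
print(result)  # -> {'unsafe': True, 'categories': ['hateful', 'violent']}
```

The binary decision here is simply "any category crossed the threshold"; a deployed system might instead use per-category thresholds tuned on local validation data.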