GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace

ACL ARR 2025 May Submission 5386 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: This paper investigates the underlying mechanisms of toxicity generation in Large Language Models (LLMs) and proposes an effective detoxification approach. Prior work typically considers the Feed-Forward Network (FFN) as the main source of toxicity, representing toxic regions as a set of toxic vectors or layer-wise subspaces. However, our in-depth analysis reveals that the **global toxic subspace** offers a more effective and comprehensive representation of toxic regions within the model. Building on this insight, we propose **GloSS** (**Gl**obal T**o**xic **S**ubspace **S**uppression), a lightweight, four-stage method that mitigates toxicity by identifying and removing the global toxic subspace from the FFN parameters. Experiments across a range of LLMs show that GloSS achieves state-of-the-art detoxification performance while preserving the models’ general capabilities, without requiring large-scale data or model retraining. WARNING: This paper contains content which is toxic in nature.
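To make the abstract's core idea concrete, below is a minimal sketch of what "identifying and removing a global toxic subspace from FFN parameters" could look like. It is not the authors' GloSS implementation: the way toxic direction vectors are collected, the subspace rank, the choice of weight matrix, and all names and shapes here are illustrative assumptions. It only shows the generic pattern of estimating a shared low-rank subspace via SVD and projecting a weight matrix onto its orthogonal complement.

```python
import numpy as np

def global_toxic_subspace(toxic_vectors: np.ndarray, rank: int) -> np.ndarray:
    """Estimate a shared (global) subspace from a stack of toxic direction
    vectors of shape (n_vectors, d_model). Returns an orthonormal basis
    U of shape (d_model, rank). (Hypothetical helper, not from the paper.)"""
    # The top right-singular vectors span the dominant directions shared
    # across the collected toxic vectors.
    _, _, vt = np.linalg.svd(toxic_vectors, full_matrices=False)
    return vt[:rank].T  # orthonormal columns

def suppress_subspace(W: np.ndarray, U: np.ndarray) -> np.ndarray:
    """Remove the subspace spanned by U's columns from a weight matrix W
    whose output lives in the d_model residual space: W' = (I - U U^T) W."""
    return W - U @ (U.T @ W)

# Toy usage with random stand-ins for real probe vectors and FFN weights.
rng = np.random.default_rng(0)
d_model, d_ff, n_vec, rank = 64, 256, 32, 4
toxic_vecs = rng.normal(size=(n_vec, d_model))   # assumed pre-extracted toxic directions
W_down = rng.normal(size=(d_model, d_ff))        # stand-in FFN down-projection weight
U = global_toxic_subspace(toxic_vecs, rank)
W_clean = suppress_subspace(W_down, U)
# Components of W_clean along the removed directions are numerically zero.
print(np.abs(U.T @ W_clean).max())
```

In this sketch the edit is training-free and touches only the weight matrix, which is consistent with the abstract's claim of a lightweight method needing no retraining; how GloSS actually performs its four stages is specified in the paper itself.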
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/unfairness mitigation; transparency
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 5386