Bias Neutralization in Non-Parallel Texts: A Cyclic Approach with Auxiliary Guidance

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Submission Track 2: Ethics in NLP
Keywords: Bias Correction, Subjective Bias, Generative Adversarial Networks, Unsupervised Learning, Auxiliary Guidance
TL;DR: A Cyclic Adversarial Network-based approach for automatic subjective bias correction in non-parallel text.
Abstract: Objectivity is a goal for Wikipedia and many news sites, as well as a guiding principle of many large language models. Indeed, several methods have recently been developed for automatic subjective bias neutralization. These methods, however, typically rely on parallel text for training (i.e., a biased sentence paired with a non-biased sentence), transfer poorly to new domains, and can lose important bias-independent context. To expand the reach of bias neutralization, in this paper we propose a new approach called FairBalance. Three of its unique features are: i) a cycle-consistent adversarial network enables bias neutralization without the need for parallel text; ii) the model design preserves bias-independent content; and iii) through auxiliary guidance, the model highlights sequences of bias-inducing words, yielding strong results in terms of bias neutralization quality. Extensive experiments demonstrate that FairBalance significantly improves subjective bias neutralization compared to other methods.
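The cycle-consistency idea mentioned in feature (i) can be illustrated with a minimal sketch. This is not the authors' FairBalance implementation; the generators here are stand-in linear maps, and all names (`G`, `F`, `cycle_consistency_loss`) are illustrative. A forward generator G maps biased text representations to neutral ones, a backward generator F maps them back, and the cycle loss ||F(G(x)) - x||_1 penalizes loss of bias-independent content, which is what removes the need for parallel text:

```python
import numpy as np

# Hedged sketch of a CycleGAN-style cycle-consistency objective, NOT the
# authors' FairBalance model. G: biased -> neutral, F: neutral -> biased;
# the cycle loss encourages F(G(x)) to reconstruct x, so content that is
# independent of bias is preserved even without parallel training pairs.

rng = np.random.default_rng(0)

def G(x, W_g):
    """Stand-in forward generator (a linear map, for illustration only)."""
    return x @ W_g

def F(y, W_f):
    """Stand-in backward generator mapping neutral representations back."""
    return y @ W_f

def cycle_consistency_loss(x, W_g, W_f):
    """Mean L1 reconstruction penalty ||F(G(x)) - x||_1 over the batch."""
    return np.abs(F(G(x, W_g), W_f) - x).mean()

# Toy "sentence embeddings": batch of 4 vectors of dimension 8.
x = rng.normal(size=(4, 8))
W_g = rng.normal(size=(8, 8)) * 0.1
W_f = np.linalg.pinv(W_g)  # a perfect inverse drives the cycle loss to ~0

print(cycle_consistency_loss(x, W_g, W_f))
```

In the actual adversarial setup, the two generators are trained jointly with discriminators on the biased and neutral domains; the cycle term above is only the content-preservation component of the full objective.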
Submission Number: 4228