PEFTDebias: Capturing debiasing information using PEFTs

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Ethics in NLP
Submission Track 2: Efficient Methods for NLP
Keywords: parameter-efficient fine-tuning (PEFT), bias mitigation, debias, bias, gender, group, language model, debiasing, LoRA debias, prompt debias, adapter debias
TL;DR: PEFTDebias is a novel approach for parameter-efficient debiasing of language models, consisting of upstream and downstream phases and employing various PEFTs for efficient bias mitigation across different bias axes.
Abstract: The increasing use of foundation models highlights the urgent need to address and eliminate implicit biases present in them that arise during pretraining. In this paper, we introduce PEFTDebias, a novel approach that employs parameter-efficient fine-tuning (PEFT) to mitigate the biases within foundation models. PEFTDebias consists of two main phases: an upstream phase for acquiring debiasing parameters along a specific bias axis, and a downstream phase where these parameters are incorporated into the model and frozen during the fine-tuning process. By evaluating on four datasets across two bias axes, namely gender and race, we find that downstream biases can be effectively reduced with PEFTs. In addition, we show that these parameters possess axis-specific debiasing characteristics, enabling their effective transferability in mitigating biases in various downstream tasks.
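The two-phase scheme described in the abstract can be illustrated with a minimal sketch using the Hugging Face `transformers` and `peft` libraries with LoRA adapters (one of the PEFTs named in the keywords). The model name, adapter settings, and training steps below are placeholder assumptions for illustration, not the authors' actual configuration or code.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, PeftModel, get_peft_model

# --- Upstream phase (sketch) ---
# Attach LoRA adapters to a frozen base model and train only the adapter
# parameters on a debiasing objective along one bias axis (e.g., gender).
base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # placeholder model and label count
)
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
upstream_model = get_peft_model(base, lora_cfg)
# ... train `upstream_model` on the upstream debiasing data here ...
upstream_model.save_pretrained("debias-lora-gender")  # saves adapter weights only

# --- Downstream phase (sketch) ---
# Load the debiasing adapters into a fresh copy of the base model, keep the
# adapter (debiasing) parameters frozen, and fine-tune the rest on the task.
task_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
task_model = PeftModel.from_pretrained(task_model, "debias-lora-gender")
for name, param in task_model.named_parameters():
    if "lora_" in name:
        param.requires_grad = False  # debiasing parameters stay frozen
    else:
        param.requires_grad = True   # base model is fine-tuned on the task
# ... fine-tune `task_model` on the downstream task data here ...
```

The key point of the sketch is the separation of concerns: the debiasing signal lives entirely in the small PEFT module learned upstream, so it can be reused and frozen across different downstream tasks along the same bias axis.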
Submission Number: 5232