Enhancing Fairness in In-Context Learning: Prioritizing Minority Samples in Demonstrations

Published: 19 Mar 2024, Last Modified: 30 Mar 2024 · Tiny Papers @ ICLR 2024 · CC BY 4.0
Keywords: large language models, LLM fairness, in-context learning, tabular data
Abstract: Recent studies highlight the effectiveness of using in-context learning to steer large language models (LLMs) in processing tabular data, a challenging task given the structured nature of such data. Despite these advances, the fairness implications of this approach remain underexplored. This study investigates how the choice of demonstrations affects LLM fairness, focusing in particular on the distribution of the samples selected for the prompt. We find that deliberately including minority samples in prompts can significantly enhance fairness awareness in LLMs without compromising their predictive performance.
Submission Number: 26