Abstract: As LLMs are increasingly applied in socially impactful settings, concerns about gender bias have prompted growing efforts both to measure and to mitigate such bias. These efforts often rely on evaluation tasks that differ from natural language distributions, as they typically involve carefully constructed task prompts that overtly or covertly signal the presence of bias-related content. In this paper, we examine how signaling the evaluative purpose of a task impacts measured gender bias in LLMs.
Concretely, we test models under prompt conditions that (1) make the testing context salient, and (2) make gender-focused content salient. We then assess prompt sensitivity across four task formats with both token-probability and discrete-choice metrics. We find that even minor prompt changes can substantially alter bias outcomes, sometimes reversing their direction entirely. Discrete-choice metrics, moreover, tend to amplify measured bias relative to probabilistic measures. These findings not only highlight the brittleness of LLM bias evaluations but also open a new puzzle for the NLP benchmarking and development community: to what extent can well-controlled testing designs themselves trigger testing-environment performance, and how do we construct fine-tuning data that minimizes this inference behavior, moving towards more robust bias assessment protocols?
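To make the distinction between the two metric families concrete, the following is a minimal, purely illustrative sketch (not the paper's code or setup): it contrasts a token-probability bias score with a discrete-choice decision for a single cloze-style prompt using a Hugging Face causal LM. The model name, prompt wording, and candidate continuations are hypothetical placeholders.

```python
# Illustrative sketch: token-probability vs. discrete-choice bias measurement.
# Model, prompt, and candidate words are assumptions for demonstration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The nurse said that"      # hypothetical base prompt
candidates = [" he", " she"]        # gendered continuations (single GPT-2 tokens)

with torch.no_grad():
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    logits = model(input_ids).logits[0, -1]       # next-token logits
    log_probs = torch.log_softmax(logits, dim=-1)

cand_ids = [tokenizer.encode(c)[0] for c in candidates]

# Token-probability metric: signed log-probability gap between candidates.
gap = (log_probs[cand_ids[0]] - log_probs[cand_ids[1]]).item()
print(f"log P(' he') - log P(' she') = {gap:.3f}")

# Discrete-choice metric: a hard argmax decision, which collapses small
# probability differences into a categorical outcome and can thereby
# amplify measured bias relative to the probabilistic score above.
choice = candidates[int(log_probs[cand_ids[1]] > log_probs[cand_ids[0]])]
print(f"discrete choice: '{choice.strip()}'")
```

In practice, scores of this kind would be aggregated over many prompts and prompt variants (e.g., with and without cues signaling a testing or gender-focused context) to compare how each metric family responds to such framing.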
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Bias Measurement; Prompt Sensitivity; Gender Bias; Large Language Models (LLMs); Evaluation Metrics
Contribution Types: Model analysis & interpretability
Languages Studied: English
Keywords: Bias Measurement; Prompt Sensitivity; Gender Bias; Large Language Models (LLMs)
Submission Number: 6828