It’s How You Ask: User-Centric Gender Bias in LLM-Generated Emails

ACL ARR 2026 January Submission7157 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: gender bias, email, user bias
Abstract: In this work, we study user-centered gender bias in LLMs: whether the same task yields different outputs when phrased in language patterns historically associated with women versus men. In a realistic email-generation setting, we build controlled prompt pairs by perturbing only gender-correlated stylistic features and evaluate outputs on complexity and sophistication. Across multiple models, women-associated prompts elicit consistently shorter and less lexically sophisticated emails, with implications for disparities in user experience. These differences are strongly correlated with perceived professionalism and authority, potentially further entrenching gender disparities in professional settings.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Ethics, Bias, and Fairness, Computational Social Science, Cultural Analytics, and NLP for Social Good
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 7157