Caged Birds and Cute Bookworms: Feminine Tropes and Implicit Gender Bias in Large Language Models

Published: 29 Apr 2026, Last Modified: 29 Apr 2026
Venue: Eval Eval @ ACL 2026 Poster
License: CC BY 4.0
Keywords: corpus creation, language/cultural bias analysis, model bias/fairness evaluation, NLP datasets
TL;DR: This paper introduces a curated dataset for diagnosing implicit gender bias in trope-based narratives generated by large language models, and demonstrates its use with four open LLMs.
Abstract: This paper introduces a curated dataset for diagnosing implicit gender bias through feminine tropes in narratives generated by large language models. Drawing from a crowd-sourced database of tropes from television media, we create prompts that elicit narratives from LLMs based on historically gendered tropes. We find that LLMs tend to default to feminine characters in these narratives, even when prompted without explicit gender references, and even when prompted with non-binary ("they/them") gender references for the main character. In some cases, even when prompted with masculine pronouns ("he/him"), LLMs still use feminine pronouns to describe the main character. The paper describes our dataset creation process and our evaluation of four open-weight models. We discuss implications for future research on mitigating implicit gender bias and its associated representational harms in LLMs, as well as the complex relationship between language models and societal values.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Type: Research Paper
Archival Status: Archival
Submission Number: 52