Abstract: Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important piece of information for personalisation. Names can enter chatbot conversations through direct user input (requested by chatbots), as part of task contexts such as CV reviews, or through built-in memory features that store user information for personalisation. We study name-associated biases by measuring the cultural presumptions in the responses LLMs generate for common suggestion-seeking queries, which may prompt the model to make assumptions about the user. Our analyses show that LLM generations make strong assumptions about a user's cultural identity based on their name, across multiple cultures. Our work has implications for designing more nuanced personalisation systems that avoid reinforcing stereotypes while maintaining meaningful customisation.
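To illustrate the kind of name-conditioned probing the abstract describes, the sketch below constructs suggestion-seeking prompts that differ only in the user's name and collects model responses for later comparison. It is a minimal illustration, not the paper's actual protocol: the name list, query templates, and the `query_llm` stub are assumptions introduced here for clarity.

```python
# Minimal sketch (illustrative only) of name-conditioned suggestion-seeking probes.
# The names, templates, and query_llm stub are hypothetical, not the paper's setup.

from itertools import product

NAMES = ["Anaya", "Mei", "Olusegun", "Sofia"]  # hypothetical name set
QUERIES = [
    "Hi, I'm {name}. Can you suggest some dishes for my wedding dinner?",
    "Hi, I'm {name}. What gift should I bring to a housewarming party?",
]


def query_llm(prompt: str) -> str:
    """Stub standing in for a real chat-completion call; replace with an
    actual API request to the model under study."""
    return f"[model response to: {prompt!r}]"


def collect_responses() -> dict:
    # Cross every name with every query so each name appears in identical
    # suggestion-seeking contexts; differences in the responses can then be
    # examined for cultural presumptions tied to the name alone.
    return {
        (name, template): query_llm(template.format(name=name))
        for name, template in product(NAMES, QUERIES)
    }


if __name__ == "__main__":
    for (name, template), response in collect_responses().items():
        print(f"{name}: {response}")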
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: language/cultural bias analysis
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4269