Abstract: Large language models (LLMs) are increasingly integrated into our daily lives and personalized. However, LLM personalization may also introduce unintended side effects: recent work suggests that persona prompting can lead models to falsely refuse user requests, yet no prior work has fully quantified the extent of this issue. To address this gap, we measure the impact of 15 sociodemographic personas (based on gender, race, religion, and disability) on false refusal. To control for other factors, we also test 16 different models, 3 tasks (Natural Language Inference, politeness classification, and offensiveness classification), and 9 prompt paraphrases. We propose a Monte Carlo-based method to quantify this issue in a sample-efficient manner.
Our results show that as models become more capable, personas have less impact on the refusal rate. However, we find that the choice of model significantly influences false refusals, especially on tasks involving sensitive content. Certain sociodemographic personas further increase false refusals in some models, which suggests underlying biases in their alignment strategies or safety mechanisms.
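As an illustration of the sample-efficient estimation mentioned in the abstract, the sketch below randomly subsamples the persona × model × task × paraphrase grid to estimate the false-refusal rate together with a Monte Carlo standard error, rather than evaluating every combination exhaustively. This is a minimal sketch of the general idea, not the paper's actual procedure; `query_fn` and `is_false_refusal` are hypothetical placeholders for the real model call and refusal detector.

```python
import random
import statistics

# Experimental grid from the abstract: 15 personas, 16 models, 3 tasks, 9 paraphrases.
PERSONAS = [f"persona_{i}" for i in range(15)]
MODELS = [f"model_{i}" for i in range(16)]
TASKS = ["natural_language_inference", "politeness", "offensiveness"]
PARAPHRASES = [f"paraphrase_{i}" for i in range(9)]


def is_false_refusal(response: str) -> bool:
    """Hypothetical refusal detector: flags responses that decline a benign request."""
    return response.strip().lower().startswith(("i cannot", "i can't", "i'm sorry"))


def monte_carlo_refusal_rate(query_fn, examples, n_samples=500, seed=0):
    """Estimate the false-refusal rate from a random subsample of the full grid.

    `query_fn(model, persona, task, paraphrase, example)` is supplied by the
    caller and should return the model's response string.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_samples):
        config = (
            rng.choice(MODELS),
            rng.choice(PERSONAS),
            rng.choice(TASKS),
            rng.choice(PARAPHRASES),
            rng.choice(examples),
        )
        response = query_fn(*config)
        outcomes.append(1.0 if is_false_refusal(response) else 0.0)
    mean = statistics.mean(outcomes)
    # The Monte Carlo standard error shrinks as 1/sqrt(n_samples), so a modest
    # random sample can stand in for the full persona x model x task x
    # paraphrase x example grid.
    stderr = statistics.stdev(outcomes) / (n_samples ** 0.5)
    return mean, stderr
```

A hypothetical usage would be `rate, err = monte_carlo_refusal_rate(my_query_fn, benign_examples, n_samples=1000)`, where `my_query_fn` wraps whatever LLM API is under evaluation.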
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: false refusals, model evaluation, persona prompting, sociodemographic personas, large language models, model safety
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 2961