When "A Helpful Assistant" Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models

ACL ARR 2024 June Submission 3502 Authors

16 Jun 2024 (modified: 22 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Prompting is the primary way humans interact with Large Language Models (LLMs). Commercial AI systems commonly define the role of the LLM in the system prompt; for example, ChatGPT uses "You are a helpful assistant" as part of its default system prompt. Despite the common practice of adding personas to system prompts, it remains unclear how different personas affect model performance. In this study, we present a systematic evaluation of personas in system prompts. We curate a list of 162 roles covering 6 types of interpersonal relationships and 8 domains of expertise. Through extensive analysis of 4 popular LLMs and 2,410 factual questions, we show that adding personas to system prompts does not improve model performance across a range of questions compared with a control setting in which no persona is added. Nevertheless, further analysis suggests that the gender, type, and domain of the persona can all affect the resulting prediction accuracy. We further experiment with a list of persona search strategies and find that, while aggregating the results from the best persona for each question can lead to significantly higher prediction accuracy, automatically identifying the best persona is challenging and may not be significantly better than random selection. Overall, our results suggest that while adding a persona may lead to performance gains in certain settings, the effect of any given persona can be largely random.
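As a concrete illustration of the setup the abstract describes, below is a minimal sketch (not the authors' released code) of how a persona is prepended as a system prompt in a chat-style API call, with a no-persona control. The persona string, question, and model name are illustrative placeholders, not the paper's actual prompt templates or model list.

    # Minimal sketch of the persona-in-system-prompt setup, assuming the
    # OpenAI Python client (v1). Persona, question, and model name are
    # illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_with_persona(persona: str | None, question: str) -> str:
        """Query the model, optionally setting a persona via the system prompt."""
        messages = []
        if persona is not None:
            messages.append({"role": "system", "content": f"You are {persona}."})
        messages.append({"role": "user", "content": question})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )
        return response.choices[0].message.content

    # Control condition (no persona) vs. a persona condition:
    baseline = ask_with_persona(None, "What is the capital of Australia?")
    with_persona = ask_with_persona("a geographer", "What is the capital of Australia?")

Comparing accuracy between the two conditions over many questions and personas is, at a high level, the kind of controlled comparison the paper performs.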
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Interpretability and Analysis of Models for NLP, Question Answering, Resources and Evaluation
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 3502