Abstract: Large Language Models (LLMs) show impressive conversational abilities but sometimes suffer from `persona drift', where their interaction patterns or styles become inconsistent over time. As this problem has not yet been thoroughly examined, this study investigates the consistency of expressed persona across nine LLMs. Specifically, we (1) investigate whether LLMs can maintain consistent patterns in expressed persona and (2) analyze the effects of model family, parameter size, and the type of assigned persona. Our experiments involve multi-turn conversations on personal themes, analyzed both qualitatively and quantitatively. The results indicate three findings: (1) larger models experience greater persona drift; (2) model-family differences exist, but their effect is weaker than that of parameter size; and (3) assigning a persona may not help maintain consistent persona expression. We hope these three findings can help improve persona consistency in AI-driven dialogue systems, particularly in long-term conversations.
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Psycho-demographic trait prediction, Evaluation and metrics (Dialogue and Interactive Systems), Conversation, Communication
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 757