Keywords: Large Language Models, Role-Playing, Big Five, Personality Traits
TL;DR: Enhancing Role-Playing Abilities of Large Language Models by Integrating Personality Traits
Abstract: Large language models have catalyzed the development of personalized dialogue systems, leading to the emergence of numerous role-playing conversational agents. While previous research has predominantly focused on enhancing models' ability to follow instructions through designed character profiles, it has neglected the psychological factors that drive human conversation. In this paper, we propose Orca, a framework for data processing and for training custom LLM characters by integrating personality traits. Orca comprises four stages: (1) Personality trait inference, leveraging LLMs to infer users' Big Five personality trait reports and scores; (2) Data augmentation, simulating users' profiles, background stories, and psychological activities; (3) Dataset construction, employing personality-conditioned instruction prompting (PCIP) to prompt LLMs to generate the training data; and (4) Modeling and training, applying personality-conditioned instruction tuning (PTIT and PSIT) to enhance existing open-source LLMs with the generated data. We introduce OrcaBench, the first benchmark for evaluating the quality of LLM-generated content on social platforms across multiple scales. Our experiments show that the proposed model achieves superior performance on this benchmark, demonstrating its effectiveness in perceiving personality traits and thereby significantly improving role-playing abilities.
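The abstract describes personality-conditioned instruction prompting (PCIP) only at a high level. Below is a minimal sketch of what such conditioning might look like in practice, assuming a plain text template over Big Five scores; the `pcip_prompt` helper, the trait encoding, and the example character are hypothetical illustrations, not the authors' implementation.

```python
# Minimal PCIP-style sketch: build an instruction that conditions an LLM on
# Big Five personality trait scores. Template and score scale (0 = low,
# 1 = high) are illustrative assumptions, not the paper's exact format.

BIG_FIVE = [
    "Openness", "Conscientiousness", "Extraversion",
    "Agreeableness", "Neuroticism",
]

def pcip_prompt(profile: str, trait_scores: dict[str, float], instruction: str) -> str:
    """Compose a personality-conditioned instruction for a role-playing LLM."""
    # Render each trait with its score; default to a neutral 0.5 if absent.
    trait_block = "\n".join(
        f"- {t}: {trait_scores.get(t, 0.5):.2f} (0 = low, 1 = high)"
        for t in BIG_FIVE
    )
    return (
        "You are role-playing a character with the following profile and "
        "Big Five personality trait scores.\n\n"
        f"Profile:\n{profile}\n\n"
        f"Personality traits:\n{trait_block}\n\n"
        "Respond in a way consistent with these traits.\n\n"
        f"Instruction: {instruction}"
    )

# Example usage with a hypothetical character.
prompt = pcip_prompt(
    profile="A reserved librarian who enjoys amateur astronomy.",
    trait_scores={"Openness": 0.8, "Extraversion": 0.2},
    instruction="Write a short social-media post about tonight's meteor shower.",
)
print(prompt)
```

The same conditioned text could plausibly serve both stages the abstract names: as a generation prompt when constructing the dataset (PCIP) and as the instruction portion of training examples during tuning (PTIT/PSIT).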
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7586