Rethinking Personality Assessment from Human-Agent Dialogues: Fewer Rounds May Be Better Than More

ACL ARR 2025 May Submission899 Authors

16 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Personality assessment is essential for developing user-centered systems and plays a critical role across domains such as hiring, education, and personalized system design. As conversational AI systems become integrated into daily life, automatically assessing human personality through natural language interaction has attracted growing attention. However, existing natural-language personality assessment datasets generally do not account for interactivity. We therefore propose Personality-1260, a Chinese dataset containing 1260 interaction rounds between humans and agents with different personalities, aiming to support research on interactive personality assessment. Based on this dataset, we design experiments to explore how the number of interaction rounds and the agent's personality affect personality assessment. Results show that fewer interaction rounds perform better in most cases, and that agents with different personalities elicit different expressions of users' personalities. These findings provide guidance for the design of interactive personality assessment systems.
Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: human-AI interaction/cooperation, human-centered evaluation, human factors in NLP
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: Chinese
Keywords: LLMs, Personality Assessment, Human-AI Interaction, Big Five Personality Theory
Submission Number: 899