Abstract: Personality induction in LLMs aims to simulate human personality traits, with potential applications in personalized interactions and the generation of high-quality, human-like synthetic data. It is thus a promising but challenging frontier in natural language processing. In our study, we use the Essays Dataset, as its extended narratives are better suited to modeling stable personality traits; shorter texts, by contrast, often reflect mood states rather than personality. We explore two key aspects. First, we show that different fine-tuning methods significantly reduce the variance observed in psychological test-based evaluations, which have previously been shown to be unstable in pre-trained models, thereby making these evaluations more reliable. Second, despite this improvement, our results show that personality induction in LLMs suffers from low accuracy when models are tuned on unguided text, suggesting that such text may lack the nuanced cues essential for an accurate expression of personality.
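For readers unfamiliar with variance-based stability analysis of questionnaire evaluations, the following minimal sketch illustrates one way such variance could be quantified: administer a trait questionnaire to a model several times and compute the per-trait score variance across runs, where lower variance indicates a more stable evaluation. This is not the paper's code; the trait names, the 1-5 Likert scale, and the `ask_model` stub are illustrative assumptions.

```python
# Hedged sketch: run-to-run variance of a questionnaire-based personality evaluation.
# The scoring scale, trait names, and the ask_model stub are assumptions, not the
# authors' method or data.
import random
import statistics

TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]


def ask_model(item: str, seed: int) -> int:
    """Placeholder for querying an LLM (pre-trained or fine-tuned) with one
    questionnaire item and parsing its 1-5 Likert answer."""
    random.seed(hash((item, seed)))
    return random.randint(1, 5)


def administer(items: dict[str, list[str]], seed: int) -> dict[str, float]:
    """One administration of the questionnaire: mean Likert answer per trait."""
    return {
        trait: statistics.mean(ask_model(item, seed) for item in trait_items)
        for trait, trait_items in items.items()
    }


def score_variance(items: dict[str, list[str]], n_runs: int = 10) -> dict[str, float]:
    """Variance of each trait score across repeated administrations;
    lower values indicate a more stable (reliable) evaluation."""
    runs = [administer(items, seed) for seed in range(n_runs)]
    return {trait: statistics.variance(run[trait] for run in runs) for trait in TRAITS}


if __name__ == "__main__":
    # Toy questionnaire with three placeholder items per trait.
    toy_items = {trait: [f"{trait} item {i}" for i in range(3)] for trait in TRAITS}
    print(score_variance(toy_items))
```

Comparing the variance reported by `score_variance` before and after fine-tuning would mirror the kind of stability comparison the abstract describes, though the paper's actual evaluation protocol may differ.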
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: Interpretability and Analysis of Models for NLP, Human-Centered NLP, Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis, Position papers
Languages Studied: English
Submission Number: 3937