Abstract: Inducing Large Language Models (LLMs) to exhibit specified personalities is critical for various applications such as role-playing and social support. Psychological findings suggest that personalities comprise multiple inherently correlated traits with dynamic expression across contexts, yet most existing methods neglect these characteristics, consequently hindering human-like interactions. Inspired by the lexical hypothesis and the trait activation theory of personality, we propose Context-aware Contrastive Lexical Prompting (CACLP), which resolves trait exhibition conflicts via lexical knowledge and dynamically selects context-aware adjectives for multi-trait induction in LLMs. Specifically, CACLP eliminates semantically conflicting adjectives using WordNet to construct a conflict-free adjective set describing multi-trait personalities, considering both target traits and their opposites. It then dynamically selects context-relevant adjectives via Natural Language Inference (NLI) to align responses with various contexts. Extensive experiments across three widely studied personality models on diverse LLMs demonstrate CACLP's general superiority over baseline methods, especially on smaller models.
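The two stages described in the abstract (conflict-free adjective construction, then context-aware selection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the antonym map is a toy stand-in for WordNet antonym lookups, and `score` is a placeholder for a real NLI entailment model; the function names and threshold are assumptions.

```python
# Toy antonym pairs standing in for WordNet antonym lookups (hypothetical data).
ANTONYMS = {
    "talkative": "quiet", "quiet": "talkative",
    "organized": "careless", "careless": "organized",
    "calm": "anxious", "anxious": "calm",
}

def conflict_free(target_adjs, opposite_adjs):
    """Stage 1: drop any target adjective that is itself an opposite-trait
    adjective, or whose antonym also appears among the target adjectives."""
    opposites = set(opposite_adjs)
    kept = []
    for adj in target_adjs:
        antonym = ANTONYMS.get(adj)
        if adj in opposites or (antonym and antonym in target_adjs):
            continue
        kept.append(adj)
    return kept

def select_context_aware(adjs, context, score, threshold=0.5):
    """Stage 2: keep adjectives whose relevance to the context, given by
    `score` (a stand-in for an NLI entailment scorer), exceeds a threshold."""
    return [a for a in adjs if score(context, a) > threshold]

# Usage with a trivial relevance scorer (substring match, purely illustrative):
toy_score = lambda ctx, adj: 1.0 if adj in ctx else 0.0
adjs = conflict_free(["talkative", "quiet", "organized"], ["careless"])
chosen = select_context_aware(adjs + ["calm"], "a calm planning meeting", toy_score)
```

In a real pipeline, `score` would come from an NLI model judging whether the dialogue context entails that a given adjective is situationally appropriate.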
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: computational psycholinguistics
Contribution Types: Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 8221