Abstract: Role-playing requires Large Language Models (LLMs) to generate responses in the manner of a specific character. This task is relatively easy for LLMs, as they excel at simulating human behaviors. Existing works mainly focus on ensuring consistency in a character's personality, knowledge, and tone by fine-tuning models or using specialized prompts. However, these models often fail to fully embody the character's mindset, making it difficult for them to generate responses that align with the character's way of thinking. This limitation leads to a poor user experience. To address this problem, we propose a Thinking Before Speaking (TBS) model, which mimics the character's logical reasoning process and generates reflections before answering a question. We augment the training data for each dialogue by incorporating logical reasoning derived from the character's profile and the conversational context, enabling the model to learn and replicate the character's mindset. Additionally, we include a small number of questions beyond the character's knowledge scope to train the model to appropriately decline to answer. To verify the effectiveness of our model, we construct new evaluation datasets and metrics. Experimental results show that the TBS model achieves the best role-playing performance in terms of tone, knowledge, and mindset.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: spoken dialogue systems, embodied agents
Contribution Types: Data resources, Data analysis, Theory
Languages Studied: English, Chinese
Submission Number: 920