Dream to Chat: Model-based Reinforcement Learning on Dialogues with User Belief Modeling

ACL ARR 2025 May Submission2131 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: World models have been widely used in robotics, gaming, and autonomous driving, but their application to natural language tasks remains relatively limited. In this paper, we construct a dialogue world model that predicts the user's emotion, sentiment, intention, and future utterances. By formulating the task as a POMDP, we argue that emotion, sentiment, and intention can be modeled as the user belief and learned by maximizing the information bottleneck. Building on this user belief modeling, we apply the model-based reinforcement learning framework to dialogue systems and propose a framework called DreamCUB. Experiments show that the pretrained dialogue world model achieves state-of-the-art performance on emotion classification and sentiment identification, while dialogue quality is also enhanced by jointly training the policy, critic, and dialogue world model. Further analysis shows that this approach maintains a reasonable exploration-exploitation balance and also transfers well to out-of-domain scenarios such as empathetic dialogues.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: LLM, dialogue system, MBRL, world model, POMDP, emotion classification, sentiment analysis
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Keywords: LLM, MBRL, World Model, Dialogue System, POMDP, user belief
Submission Number: 2131