Keywords: LLM, Conversation Planning, MCTS, self-improvement
TL;DR: Conversation planning typically uses many LLM queries for look-ahead simulation to select responses that maximize long-term rewards. By learning transition and reward models in text semantic space, we perform conversation planning without needing any LLM queries.
Abstract: Large language models (LLMs) are used in chatbots or AI assistants to hold conversations with a human user. In such applications, the quality (e.g., user engagement, safety) of a conversation is important and can only be known exactly at the end of the conversation. To improve its expected quality, conversation planning reasons about the stochastic transitions within a conversation to select the optimal LLM response at each turn. Existing simulation-based conversation planning algorithms typically select the optimal response by simulating future conversations with a large number of LLM queries at every turn. However, this process is extremely time-consuming and hence impractical to use at inference time for real-time conversations. This paper presents a novel approach called Semantic space COnversation Planning with improved Efficiency (SCOPE) that exploits the dense semantic representation of conversations to automatically learn the rewards associated with each LLM response. In particular, SCOPE models the stochastic transitions in conversation semantics and their associated rewards to plan entirely within the semantic space. By exploring responses in semantic space under the MCTS framework, SCOPE automatically updates the Q-value of each LLM response at inference time, finding the optimal LLM response at every conversation turn without needing additional LLM queries for simulation. As a result, SCOPE can perform conversation planning 70 times faster than conventional simulation-based planning algorithms when applied to a wide variety of conversation starters and two reward functions seen in the real world, while improving the conversation quality within practical planning budgets. Our code can be found at: https://github.com/chenzhiliang94/convo-plan-SCOPE.
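To make the core idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of selecting among candidate LLM responses by simulating entirely in a learned semantic space. The transition and reward models here are hypothetical stand-ins (random linear maps) for the learned components described in the abstract, and the search is a simplified single-level, UCT-style variant of the MCTS procedure, so that the example runs end-to-end without any LLM queries.

```python
# Hedged sketch: planning over candidate LLM responses in a learned semantic space.
# All model parameters below are placeholders; in SCOPE they would be learned offline.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16             # dimensionality of the conversation embedding (assumed)
NUM_CANDIDATES = 4   # candidate LLM responses pre-generated for the current turn
HORIZON = 3          # look-ahead depth, in simulated conversation turns
NUM_SIMULATIONS = 200

# Hypothetical learned models (stand-ins for SCOPE's learned components).
W_trans = rng.normal(scale=0.3, size=(DIM, DIM))  # stochastic transition model
w_reward = rng.normal(size=DIM)                   # reward model in semantic space

def transition(state: np.ndarray) -> np.ndarray:
    """Sample the next conversation embedding from the (learned) transition model."""
    noise = rng.normal(scale=0.1, size=DIM)
    return np.tanh(W_trans @ state + noise)

def reward(state: np.ndarray) -> float:
    """Predicted per-turn reward (e.g., engagement) of a conversation embedding."""
    return float(w_reward @ state)

def rollout(state: np.ndarray) -> float:
    """Accumulate predicted rewards over a fixed horizon, all in semantic space."""
    total = 0.0
    for _ in range(HORIZON):
        state = transition(state)
        total += reward(state)
    return total

# Semantic embeddings of each candidate response appended to the current state;
# in practice these would come from a sentence encoder, here they are random.
candidate_states = [rng.normal(size=DIM) for _ in range(NUM_CANDIDATES)]

# UCT-style selection over candidates, with incremental Q-value updates.
q_values = np.zeros(NUM_CANDIDATES)
visits = np.zeros(NUM_CANDIDATES)

for t in range(1, NUM_SIMULATIONS + 1):
    ucb = q_values + np.sqrt(2 * np.log(t) / np.maximum(visits, 1e-9))
    a = int(np.argmax(np.where(visits == 0, np.inf, ucb)))  # unvisited first
    ret = rollout(candidate_states[a])
    visits[a] += 1
    q_values[a] += (ret - q_values[a]) / visits[a]  # running-mean Q-value update

best = int(np.argmax(q_values))
print(f"Selected candidate {best} with estimated Q-value {q_values[best]:.3f}")
```

Because every simulated step is a cheap matrix operation rather than an LLM query, this kind of semantic-space look-ahead is what allows the planning budget to be spent at inference time without prohibitive latency.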
Submission Number: 74