TL;DR: CollabLLM is a unified fine-tuning framework that optimizes LLMs for effective and efficient multiturn collaboration with users.
Abstract: Large Language Models are typically trained with next-turn rewards, limiting their ability to optimize for long-term interaction. As a result, they often respond passively to ambiguous or open-ended user requests, failing to help users reach their ultimate intents and leading to inefficient conversations. To address these limitations, we introduce CollabLLM, a novel and general training framework that enhances multiturn human-LLM collaboration. Its key innovation is a collaborative simulation that estimates the long-term contribution of responses using Multiturn-aware Rewards. By reinforcement fine-tuning with these rewards, CollabLLM goes beyond responding to user requests: it actively uncovers user intent and offers insightful suggestions, a key step towards more human-centered AI. We also devise a multiturn interaction benchmark with three challenging tasks, including document creation. CollabLLM significantly outperforms baselines, achieving on average 18.5% higher task performance and 46.3% better interactivity as rated by LLM judges. Finally, we conduct a large user study with 201 judges, in which CollabLLM increases user satisfaction by 17.6% and reduces the time users spend by 10.4%.
Lay Summary: Many people use AI chatbots (language models) to write, code, or solve problems, but current language models often fall short in real conversations. They tend to respond passively to vague questions instead of helping users clarify their goals, and because they don't plan ahead, they drag users into frustrating and inefficient interactions.
We introduce CollabLLM, a new training method that teaches the language model to look several turns into the future. During training, we simulate whole conversations and give each reply a “multiturn-aware reward” based on how much it helps the rest of the conversation. This reward encourages the language model to ask clarifying questions, surface missing details, and offer constructive next steps.
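To make this concrete, below is a minimal sketch of how a multiturn-aware reward could be estimated by forward simulation. The function names (`user_simulator`, `assistant`, `judge_score`) and the sampling parameters are illustrative assumptions for exposition, not the exact API of the released code.

```python
# Minimal sketch: estimating a multiturn-aware reward by forward simulation.
# Assumptions (hypothetical, not the repo's exact API): `user_simulator`,
# `assistant`, and `judge_score` are callables supplied by the caller.

from statistics import mean

def multiturn_aware_reward(
    history,            # list of {"role": ..., "content": ...} messages so far
    candidate_reply,    # the assistant reply being scored
    user_simulator,     # fn(messages) -> simulated next user message (str)
    assistant,          # fn(messages) -> assistant reply for later turns (str)
    judge_score,        # fn(messages) -> scalar quality of the full conversation
    num_rollouts=4,     # how many simulated futures to average over
    horizon=3,          # how many extra user/assistant exchanges to simulate
):
    """Estimate the long-term value of `candidate_reply` by sampling
    continuations of the conversation and scoring where they end up."""
    scores = []
    for _ in range(num_rollouts):
        messages = history + [{"role": "assistant", "content": candidate_reply}]
        for _ in range(horizon):
            messages.append({"role": "user", "content": user_simulator(messages)})
            messages.append({"role": "assistant", "content": assistant(messages)})
        scores.append(judge_score(messages))
    # The average rollout score serves as the multiturn-aware reward.
    return mean(scores)
```

During reinforcement fine-tuning, replies whose simulated futures score higher are reinforced, which is what pushes the model toward clarifying questions and constructive suggestions that pay off later in the conversation.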
CollabLLM was tested on writing assistance, coding, and math tutoring. It beat strong baselines both on task success and on how interactive and efficient the conversations are. In a study with 201 real users, it raised satisfaction scores and reduced the time needed to finish tasks by 10%. CollabLLM makes everyday language model assistants more proactive, efficient, and genuinely user-centered.
Link To Code: https://github.com/Wuyxin/collabllm
Primary Area: Social Aspects->Alignment
Keywords: Human-centered Large Language Model, Multiturn Interaction, Collaborative Problem-Solving, Reinforcement Learning
Submission Number: 1940