Keywords: Proactive Dialogue Systems, Task-Oriented Dialogue (TOD), Slot Ontology Learning
TL;DR: The paper introduces a framework for task-oriented dialogue systems that learns proactive agent behaviors from missed user signals, enhancing information-seeking proactivity and improving recall, precision, and slot coverage in dialogues.
Abstract: Task-oriented dialogue (TOD) systems have traditionally emphasized goal completion with fixed slot ontologies and database-backed execution. Recent research highlights the need for proactive agents that can take initiative to elicit missing task information. Prior approaches learn policies for proactive actions but assume a fixed action space defined by a static slot ontology, while other work on slot schema induction identifies what task ontology should be captured yet does not operationalize it into proactive agent behaviors. We introduce a method that learns proactive agent behaviors directly from dialogue interactions by mining missed opportunities---instances where users voluntarily provide unrequested information. Our approach uses large language models (LLMs) to (i) detect such opportunities, (ii) reverse-generate candidate proactive questions, and (iii) incrementally cluster them into a hierarchical slot ontology with priorities and examples. This evolving structure is then integrated into the agent’s action space, enabling domain-adaptive, information-seeking proactivity. Experiments on MultiWOZ 2.4 show that adding our proactive framework on top of a base LLM leads to consistent improvements in recall, precision, and early slot coverage.
Submission Number: 109
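To make the abstract's three-stage pipeline concrete, below is a minimal sketch. The `call_llm` stub, the prompts, the `SlotNode` structure, and the string-similarity merge threshold are illustrative assumptions for exposition only, not the authors' actual prompts or clustering method.

```python
# Minimal sketch of the pipeline described in the abstract:
# (i) detect missed opportunities, (ii) reverse-generate proactive questions,
# (iii) incrementally cluster them into a slot ontology.
# All prompts, names, and thresholds here are hypothetical.
from dataclasses import dataclass, field
from difflib import SequenceMatcher


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns canned answers so the sketch runs."""
    if "unrequested" in prompt:
        return "yes: departure time"
    if "question" in prompt:
        return "What time would you like to leave?"
    return ""


@dataclass
class SlotNode:
    """One induced slot with a usage-based priority and example questions."""
    name: str
    priority: int = 0
    examples: list[str] = field(default_factory=list)


def detect_missed_opportunity(agent_turn: str, user_turn: str) -> str | None:
    """Stage (i): ask the LLM whether the user volunteered unrequested information."""
    answer = call_llm(
        f"Agent: {agent_turn}\nUser: {user_turn}\n"
        "Did the user volunteer unrequested task information? "
        "Reply 'yes: <info>' or 'no'."
    )
    return answer.split(":", 1)[1].strip() if answer.startswith("yes") else None


def reverse_generate_question(info: str) -> str:
    """Stage (ii): generate the proactive question the agent could have asked."""
    return call_llm(f"Write a proactive agent question that elicits: {info}")


def cluster_into_ontology(ontology: list[SlotNode], info: str, question: str,
                          threshold: float = 0.6) -> None:
    """Stage (iii): merge into a similar existing slot, or add a new slot node."""
    for node in ontology:
        if SequenceMatcher(None, node.name, info).ratio() >= threshold:
            node.priority += 1
            node.examples.append(question)
            return
    ontology.append(SlotNode(name=info, priority=1, examples=[question]))


# Usage on one simulated dialogue turn.
ontology: list[SlotNode] = []
info = detect_missed_opportunity(
    "Which station are you departing from?",
    "From Cambridge, and I need to leave by 9am.",
)
if info:
    cluster_into_ontology(ontology, info, reverse_generate_question(info))
print([(n.name, n.priority, n.examples) for n in ontology])
```

In this sketch a simple string-similarity merge stands in for the paper's LLM-based incremental clustering, and the flat list stands in for the hierarchical ontology; both substitutions are made only to keep the example short and runnable.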