Enhancing Large Language Model Powered Task-Oriented Dialogue Systems Through Look-Forward Motivated Goals

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
Abstract: Unlike general dialogue systems, which emphasize semantic performance, task-oriented dialogue (ToD) systems aim to achieve the dialogue goal \textbf{efficiently} and \textbf{successfully} over multiple turns. Moreover, the development of large language models (LLMs) has significantly enhanced question answering and dialogue generation, making LLMs increasingly popular in practical scenarios. Unfortunately, existing LLM-powered ToD systems lack a direct reward signal toward the final dialogue goal and do not account for the proactive aspects of dialogue that can improve efficiency. To fill these gaps, we introduce \textbf{ProToD} (Proactively Goal-Driven LLM-powered ToD), an approach that anticipates future dialogue actions and incorporates a goal-oriented reward signal to enhance ToD systems. In addition, we present a novel evaluation method that assesses ToD systems through goal-driven dialogue simulations. This method allows us to gauge user satisfaction, system efficiency, and success rate, while overcoming the limitations of the current Inform and Success metrics. We conduct empirical experiments on the MultiWoZ 2.1 and SGD datasets. Notably, results on MultiWoZ 2.1 demonstrate that our model achieves superior performance using only 10\% of the data compared to previous end-to-end fully supervised models, accompanied by improved user satisfaction and efficiency.
Paper Type: long
Research Area: Dialogue and Interactive Systems
Contribution Types: NLP engineering experiment
Languages Studied: English