Thoughts to Target: Enhance Planning for Target-driven Conversation

ACL ARR 2024 June Submission 4537 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: In conversational AI, large-scale models excel at a variety of tasks but struggle with target-driven conversation planning. Current methods, such as chain-of-thought reasoning and tree-search policy learning, either neglect plan rationality or require extensive human simulation. To address this, we propose a novel two-stage framework, named EnPL, to improve LLMs' ability to plan conversations toward designated targets, comprising (1) distilling natural language plans from target-driven conversation corpora and (2) generating new plans via demonstration-guided in-context learning. Specifically, we first propose a filtering approach to distill a high-quality plan dataset, ConvPlan (Resources of this paper can be found at https://anonymous.4open.science/r/ConvPlan-2023). With the aid of the corresponding conversational data and support from relevant knowledge bases, we validate the quality and rationality of these plans. These plans are then leveraged as demonstrations to guide LLMs in planning for new targets. Empirical results demonstrate that our method significantly improves the planning ability of LLMs, especially in target-driven conversations. Furthermore, EnPL proves effective for creating large-scale target-driven conversation datasets, paving the way for building extensive target-driven conversational models.
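The abstract's stage (2), demonstration-guided in-context learning, can be illustrated with a minimal sketch. All names here (`Demonstration`, `build_plan_prompt`) are hypothetical, not from the paper: the idea is simply to format distilled target/plan pairs as few-shot examples and prompt an LLM for a plan toward a new target (the LLM call itself is omitted).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Demonstration:
    """One distilled example: a designated target and its natural-language plan."""
    target: str
    plan: str


def build_plan_prompt(demos: List[Demonstration], new_target: str) -> str:
    """Format distilled plans as few-shot demonstrations, then ask for a
    plan toward the new target. The resulting string would be sent to an LLM."""
    parts = ["Plan a conversation that naturally leads toward each target.\n"]
    for i, d in enumerate(demos, 1):
        parts.append(f"Example {i}\nTarget: {d.target}\nPlan: {d.plan}\n")
    parts.append(f"Target: {new_target}\nPlan:")
    return "\n".join(parts)


if __name__ == "__main__":
    demos = [
        Demonstration(
            target="recommend the movie 'Inception'",
            plan="Greet the user; ask about genres they enjoy; "
                 "mention sci-fi thrillers; introduce 'Inception'.",
        )
    ]
    print(build_plan_prompt(demos, "recommend the song 'Imagine'"))
```

This sketch only covers prompt construction; the paper's filtering of ConvPlan and validation against knowledge bases are separate steps not modeled here.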
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: target-driven planning, large language model, dialog generation
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 4537