Investigating Answer Validation Using Noise Identification and Classification in Goal-Oriented Dialogues
Abstract: Goal-oriented conversational systems based on large language models (LLMs) have the potential to gather the requirements needed to solve tasks or develop solutions. In real-world scenarios, however, non-expert users may respond incorrectly to dialogue questions, which can impede the system's ability to elicit accurate information. This paper presents a novel approach to detecting and categorizing noisy answers in goal-oriented conversations, with a focus on modeling linear programming problems. Using a current LLM, Gemini, we construct multi-agent synthetic conversations from problem statements in NL4Opt, a benchmark optimization-modeling dataset, generating dialogues both with and without noisy answers. Our experiments show that the LLM is not sufficiently equipped to detect noisy answers: in almost 59% of the cases where a noisy answer occurs, the LLM continues the conversation without any attempt at resolution.