Evaluating Intention Understanding Capability of Large Language Models in Persuasive Dialogues

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
Abstract: This study aims to verify whether Large Language Models (LLMs) understand intentions from utterances in dialogues. Although LLMs are already applied in various real-world scenarios such as dialogue systems, no representative dialogue dataset exists for verifying the extent to which they understand speakers' intentions. We hypothesize that LLMs understand speakers' intentions during conversations. To verify this, we construct a dataset of multiple-choice questions from persuasive dialogues, each asking for the speaker's intention within its conversational context. To carry on a persuasive conversation smoothly, especially when making a request or giving a reply that is inconvenient for the other party, a speaker must infer the other party's perspective from their utterances. This property makes persuasive dialogue well suited to a dataset for measuring intention understanding. We incorporate the concept of 'face acts', which categorize how utterances affect the psychological states of conversational participants, so that the evaluation focuses on the intentions that matter most. We evaluate the largest available LLMs and measure how well they understand speakers' intentions in persuasive dialogues. The experimental results suggest that LLMs already possess a high capability for intention understanding.
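The abstract describes a multiple-choice evaluation of intention understanding in dialogue context. As a rough illustration of what such an item and its scoring could look like, here is a minimal Python sketch; the data fields (context, utterance, choices, answer), the prompt wording, and the class name IntentionItem are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class IntentionItem:
    """One multiple-choice question about a speaker's intention.

    Hypothetical schema: the paper's actual dataset format may differ.
    """
    context: list[str]   # preceding utterances of the persuasive dialogue
    utterance: str       # the utterance whose intention is being asked about
    choices: list[str]   # candidate intentions (e.g., face-act categories)
    answer: int          # index of the gold intention

def build_prompt(item: IntentionItem) -> str:
    """Format the dialogue context and answer choices into one prompt."""
    dialogue = "\n".join(item.context + [item.utterance])
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(item.choices))
    return (
        f"Dialogue:\n{dialogue}\n\n"
        f"What is the speaker's intention in the last utterance?\n{options}\n"
        "Answer with a single letter."
    )

def accuracy(predictions: list[int], items: list[IntentionItem]) -> float:
    """Fraction of items where the predicted choice matches the gold answer."""
    correct = sum(p == it.answer for p, it in zip(predictions, items))
    return correct / len(items)
```

A real evaluation run would send build_prompt(item) to an LLM, map the returned letter back to a choice index, and pass the collected indices to accuracy; that model-querying step is omitted here since it depends on the particular LLM interface used.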
Paper Type: long
Research Area: Dialogue and Interactive Systems
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models
Languages Studied: English