Make an Offer They Can't Refuse: Grounding Bayesian Persuasion in Real-World Dialogues without Pre-Commitment
Keywords: Bayesian Persuasion, Information Design, Conversational AI, Strategic Dialogue
TL;DR: This work adapts Bayesian Persuasion into a natural language framework for LLMs without the need for pre-commitment, showing its effectiveness in persuading both models and humans.
Abstract: Persuasion, a fundamental social capability for humans, remains a challenge for AI systems such as large language models (LLMs). Existing studies often overlook the strategic use of information asymmetry in message design, or rely on the strong assumption that the persuader's pre-commitment is common knowledge.
In this work, we explore the application of Bayesian Persuasion (BP) to natural language dialogue in order to enhance the strategic persuasion capabilities of LLMs. Our framework incorporates a commitment-communication mechanism in which the persuader explicitly outlines an information schema by narrating their potential types, thereby guiding the persuadee toward the intended Bayesian belief update.
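The Bayesian belief update that the persuadee is guided to perform follows the standard BP setup. A minimal sketch, not the paper's implementation: the persuader commits to a signaling scheme (the probability of each signal in each world state), and the persuadee applies Bayes' rule on observing a signal. All state names, signals, and numbers below are hypothetical.

```python
def posterior(prior, scheme, signal):
    """Bayes' rule: P(state | signal) ∝ P(signal | state) * P(state)."""
    joint = {s: prior[s] * scheme[s][signal] for s in prior}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

# Hypothetical example: two states; the persuader's committed scheme
# maps each state to a distribution over signals.
prior = {"good": 0.3, "bad": 0.7}
scheme = {
    "good": {"recommend": 1.0, "reject": 0.0},
    "bad":  {"recommend": 3 / 7, "reject": 4 / 7},
}
belief = posterior(prior, scheme, "recommend")
# On "recommend": P(good) = 0.3 / (0.3 + 0.7 * 3/7) = 0.5
```

In the paper's setting, the scheme is not given as a table but narrated in natural language (semi-formally in SFNL, fully informally in FNL), and the persuadee is an LLM or human rather than an idealized Bayesian agent.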
We evaluate two variants of our approach: Semi-Formal-Natural-Language (SFNL) BP and Fully-Natural-Language (FNL) BP, benchmarking them against non-BP baselines within a comprehensive evaluation framework.
Experiments show that BP strategies consistently outperform non-BP baselines in both single-turn and multi-turn dialogues. Specifically, SFNL excels in logical credibility, while FNL demonstrates superior emotional resonance and robustness. Furthermore, we show that supervised fine-tuning enables smaller models to achieve persuasion performance comparable to that of larger foundation models.
Track: Long Paper
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 73