Is Your LLM-Based Multi-Agent a Reliable Real-World Planner? Exploring Fraud Detection in Travel Planning
Abstract: The rise of Large Language Model-based Multi-Agent Planning has enabled autonomous, collaborative task execution through advanced frameworks. However, some systems rely on platforms such as review sites and social media, which are prone to fraudulent information such as fake reviews or misleading descriptions. This reliance poses risks, potentially causing financial losses and harming user experiences. To evaluate the risk of planning systems in real-world applications,
we introduce $\textbf{WandaPlan}$, an evaluation environment mirroring real-world data and injected with deceptive content.
We assess system performance across three fraud cases: Misinformation Fraud, Team-Coordinated Multi-Person Fraud, and Level-Escalating Multi-Round Fraud. Our results reveal significant weaknesses in existing frameworks, which prioritize task efficiency over data authenticity. We also validate WandaPlan's generalizability, showing that it can assess the risks of real-world open-source planning frameworks. To mitigate the risk of fraud, we propose integrating an anti-fraud agent, providing a solution for reliable planning.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: LLM Agents, fraud/misinformation detection
Contribution Types: NLP engineering experiment
Languages Studied: English
Keywords: LLM-based multi-agent planning, fraud/misinformation detection
Submission Number: 292