Keywords: AI Safety, Alignment, Large Models
TL;DR: We detect deceptive behaviors in multimodal AI systems via *debate with images*.
Abstract: Are frontier AI systems becoming more capable? *Certainly*.
Yet such progress is not an unalloyed blessing but rather a *Trojan horse*: behind their performance leaps lies a more insidious and destructive safety risk, namely deception.
Unlike hallucination, which arises from insufficient capability and leads to mistakes, deception represents a deeper threat in which models deliberately mislead users through complex reasoning and insincere responses.
As system capabilities advance, deceptive behaviors have spread from textual to multimodal settings, amplifying their potential harm.
**First and foremost, how can we monitor these covert multimodal deceptive behaviors?**
However, current research remains almost entirely confined to text, leaving the deceptive risks of multimodal large language models unexplored. In this work, we systematically reveal and quantify multimodal deception risks, introducing *MM-DeceptionBench*, the first benchmark explicitly designed to evaluate multimodal deception. Covering six categories of deception, MM-DeceptionBench characterizes how models strategically manipulate and mislead through combined visual and textual modalities. Moreover, multimodal deception remains almost a blind spot for existing evaluation methods.
Its stealth, compounded by visual–semantic ambiguity and the complexity of cross-modal reasoning, renders both action monitoring and chain-of-thought monitoring largely ineffective. To tackle this challenge, we propose *debate with images*, a novel multi-agent debate framework for deception monitoring. By compelling models to ground their claims in visual evidence, the framework substantially improves the detectability of deceptive strategies. Experiments show that it consistently increases agreement with human judgments across all tested models, boosting Cohen's kappa by 1.5× and accuracy on GPT-4o by 1.25×.
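For intuition, a debate-style monitor of this kind can be sketched as below. This is a minimal illustration only: the `query_vlm` helper, the prosecutor/defender/judge roles, the prompts, and the round count are hypothetical placeholders for whatever vision-language model API and protocol details the paper actually uses.

```python
# Minimal sketch of a debate-with-images monitor loop. query_vlm, the role
# prompts, and the round count are hypothetical placeholders, not the
# paper's actual implementation.
from typing import List, Tuple

def query_vlm(prompt: str, image_path: str) -> str:
    """Placeholder: send a prompt plus an image to a VLM and return its reply."""
    raise NotImplementedError("Wire this to your vision-language model client.")

def debate_with_images(answer: str, image_path: str, rounds: int = 2) -> str:
    """Two debaters argue over whether `answer` is deceptive, each required
    to cite visual evidence from the shared image; a judge gives the verdict."""
    transcript: List[Tuple[str, str]] = []
    for r in range(1, rounds + 1):
        # Prosecutor must ground the deception charge in the image itself.
        charge = query_vlm(
            f"Round {r}: argue that the answer below is deceptive, citing "
            f"concrete visual evidence.\nAnswer: {answer}\n"
            f"Debate so far: {transcript}",
            image_path,
        )
        transcript.append(("prosecutor", charge))
        # Defender rebuts, also constrained to visual evidence.
        rebuttal = query_vlm(
            f"Round {r}: rebut the deception charge using visual evidence.\n"
            f"Answer: {answer}\nDebate so far: {transcript}",
            image_path,
        )
        transcript.append(("defender", rebuttal))
    # The judge reads the full transcript alongside the image and labels the answer.
    return query_vlm(
        "As judge, read the debate transcript, inspect the image, and output "
        f"exactly one label: 'deceptive' or 'honest'.\nTranscript: {transcript}",
        image_path,
    )
```

The key design choice the sketch reflects is that every agent receives the image, so unsupported textual claims can be challenged against shared visual evidence rather than judged on fluency alone.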
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 11592