Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models
Keywords: multimodal misinformation detection, vision-language models, creator intent
TL;DR: We reveal that state-of-the-art VLMs remain blind to misleading creator intent, establishing the need for intent-aware benchmarks and models as the next frontier in multimodal misinformation detection.
Abstract: The impact of misinformation arises not only from factual inaccuracies but also from the misleading narratives that creators deliberately embed. Interpreting such creator intent is therefore essential for multimodal misinformation detection (MMD) and effective information governance. To this end, we introduce DeceptionDecoded, a large-scale benchmark of 12,000 image–caption pairs grounded in trustworthy reference articles, created using an intent-guided simulation framework that models both the desired influence and the execution plan of news creators. The dataset captures both misleading and non-misleading cases, spanning manipulations across visual and textual modalities, and supports three intent-centric tasks: (1) misleading intent detection, (2) misleading source attribution, and (3) creator desire inference. We evaluate 14 state-of-the-art vision–language models (VLMs) and find that they struggle with intent reasoning, often relying on shallow cues such as surface-level alignment, stylistic polish, or heuristic authenticity signals. These results highlight the limitations of current VLMs and position DeceptionDecoded as a foundation for developing intent-aware models that go beyond shallow cues in MMD.
Primary Area: datasets and benchmarks
Submission Number: 1711