Keywords: Chain-of-Thought, Multimodal Large Language Model
Abstract: Comic-based visual question answering (CVQA) poses distinct challenges to multimodal large language models (MLLMs): it relies on symbolic abstraction, narrative logic, and humor, distinguishing it from conventional VQA tasks. Although Chain-of-Thought (CoT) prompting is widely used to enhance MLLM reasoning, surprisingly, its direct application to CVQA often degrades performance, especially in small-scale models. Our theoretical and empirical analyses reveal that standard CoT in CVQA suffers from state entanglement, spurious transitions, and exploration inefficiency, with small models particularly vulnerable in resource-constrained settings. To address these issues, we propose a novel comic reasoning framework designed to produce more faithful and transferable reasoning chains in small MLLMs. Specifically, our framework combines modular CoT generation with GRPO-based reinforcement fine-tuning and a novel structured reward. Experiments on three comic VQA benchmarks show that our method outperforms state-of-the-art models by an average of $\mathbf{10.4\%}$ (up to $\mathbf{15.2\%}$). When used as a plug-in component, it further yields an average improvement of $\mathbf{12.1\%}$ across different MLLMs.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 11499