Mixture-of-Visual-Thoughts: Exploring Context-Adaptive Reasoning Mode Selection for General Visual Reasoning

ICLR 2026 Conference Submission 12932 Authors

18 Sept 2025 (modified: 25 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: visual reasoning, adaptive reasoning, multimodal large language models
TL;DR: We introduce a mixture-of-visual-thoughts paradigm that unifies different visual reasoning modes within a single model and guides it to adaptively select the appropriate mode based on context, achieving consistent gains across various scenarios.
Abstract: Current visual reasoning methods mainly focus on exploring specific reasoning modes. Although they can achieve improvements in particular domains, they struggle to develop general reasoning capabilities. Motivated by this, we propose a novel adaptive reasoning paradigm, $\underline{\text{M}}$ixture-$\underline{\text{o}}$f-$\underline{\text{V}}$isual-$\underline{\text{T}}$houghts (**MoVT**), which unifies different reasoning modes within a single model and guides it to select the appropriate mode based on context. To achieve this, we introduce **AdaVaR**, a two-stage $\underline{\text{Ada}}$ptive $\underline{\text{V}}$isu$\underline{\text{a}}$l $\underline{\text{R}}$easoning learning framework: different modes are unified and learned during a supervised cold-start stage, and the mode-selection capability is induced via an RL process with a carefully designed AdaGRPO algorithm. Extensive experiments show that AdaVaR effectively guides the model to learn and differentiate multiple modes and to perform context-adaptive mode selection, achieving consistent improvements across various scenarios. These results highlight MoVT as an effective solution for building general visual reasoning models.
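For illustration, the sketch below shows a minimal GRPO-style group-relative advantage computation of the kind such an RL stage typically builds on. The paper does not specify AdaGRPO's internals, so the composite reward here (the `preferred_mode` argument, `mode_bonus` term, and its 0.2 weight) is a hypothetical stand-in for exposition, not the authors' actual design.

```python
# Minimal sketch of a GRPO-style group-relative advantage with a composite
# reward. The mode-selection bonus is an illustrative assumption; AdaGRPO's
# actual reward design is not described on this page.
from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class Rollout:
    answer_correct: bool  # did this sampled reasoning trace reach the right answer?
    mode: str             # reasoning mode the model chose (e.g. "text", "visual")


def composite_reward(r: Rollout, preferred_mode: str) -> float:
    """Hypothetical reward: answer correctness plus a small bonus for
    selecting the mode that suits the current context."""
    answer_reward = 1.0 if r.answer_correct else 0.0
    mode_bonus = 0.2 if r.mode == preferred_mode else 0.0
    return answer_reward + mode_bonus


def group_relative_advantages(rollouts: list[Rollout], preferred_mode: str) -> list[float]:
    """GRPO-style advantages: standardize each rollout's reward against the
    group sampled for the same prompt, with no learned value function."""
    rewards = [composite_reward(r, preferred_mode) for r in rollouts]
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0.0:  # all rollouts tied: no learning signal for this group
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]


if __name__ == "__main__":
    group = [
        Rollout(answer_correct=True, mode="visual"),
        Rollout(answer_correct=True, mode="text"),
        Rollout(answer_correct=False, mode="text"),
    ]
    print(group_relative_advantages(group, preferred_mode="visual"))
```

Under this toy reward, a rollout that is both correct and picks the context-appropriate mode receives the highest within-group advantage, which is the kind of signal that could induce context-adaptive mode selection.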
Primary Area: foundation or frontier models, including LLMs
Submission Number: 12932