HiDe: Rethinking the Zoom-In Method in High-Resolution MLLMs via Hierarchical Decoupling

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Multimodal Large Language Models, Visual Details, Attention, High Resolution
TL;DR: To overcome MLLMs' distraction by complex backgrounds in high-resolution images, our training-free method decouples key objects and reconstructs them into a compact, layout-preserving view that enables accurate reasoning.
Abstract: Multimodal Large Language Models (MLLMs) have made significant strides in visual understanding tasks, yet their performance on high-resolution images remains suboptimal. Existing approaches often attribute this limitation to perceptual constraints, arguing that MLLMs struggle to recognize small objects and therefore adopting "zoom in" strategies to capture finer detail. Our analysis reveals a different cause: the main issue is not object size but interference from complex backgrounds. We systematically analyze the "zoom in" operation through a series of decoupling experiments and propose the Hierarchical Decoupling Framework (HiDe), a training-free framework that first applies Token-wise Attention Decoupling (TAD) to decouple the question tokens, identify the key information tokens, and leverage their attention weights for precise alignment with the target visual regions. It then employs Layout-Preserving Decoupling (LPD) to decouple these regions from the background and reconstruct a compact representation that preserves essential spatial layouts while eliminating background interference. HiDe sets a new SOTA on V\*Bench, HRBench4K, and HRBench8K, boosting Qwen2.5-VL 7B and InternVL3 8B to 92.1\% and 91.6\% on V\*Bench respectively, even surpassing RL-based methods. After optimization, HiDe uses 75\% less memory than the previous training-free approach. Code is provided in the supplementary materials.
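As a rough illustration of the two-stage pipeline described in the abstract, below is a minimal NumPy sketch: attention weights from the key question tokens select the most-attended image patches (the TAD idea), and the selected patches are then packed into a compact view that drops background rows and columns while keeping their relative order (the LPD idea). All function names, tensor shapes, and the `keep_ratio` parameter are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def select_key_regions(attn, patch_grid, keep_ratio=0.1):
    # attn: (num_key_question_tokens, H*W) attention weights over image patches.
    # Average over the key question tokens, then keep the top-scoring patches.
    scores = attn.mean(axis=0)
    k = max(1, int(keep_ratio * scores.size))
    keep = np.zeros(scores.size, dtype=bool)
    keep[np.argsort(scores)[-k:]] = True
    return keep.reshape(patch_grid)          # boolean mask on the patch grid

def layout_preserving_pack(image, mask, patch_size=14):
    # Keep only the patch rows/columns that contain selected patches, so the
    # result is compact while the kept patches stay in their original order.
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    row_px = np.concatenate([np.arange(r * patch_size, (r + 1) * patch_size) for r in rows])
    col_px = np.concatenate([np.arange(c * patch_size, (c + 1) * patch_size) for c in cols])
    return image[np.ix_(row_px, col_px)]     # channel axis is preserved

if __name__ == "__main__":
    H = W = 32                                 # 32x32 patch grid (448 px / 14)
    image = np.random.rand(H * 14, W * 14, 3)  # stand-in for a high-resolution image
    attn = np.random.rand(5, H * W)            # stand-in for question-token attention
    mask = select_key_regions(attn, (H, W))
    compact = layout_preserving_pack(image, mask)
    print(image.shape, "->", compact.shape)    # background rows/columns removed
```

This only conveys the decouple-then-reconstruct structure; the paper's actual pipeline operates on the MLLM's own attention maps and reconstructs the compact view from the identified key regions rather than from random stand-ins.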
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 9014