Explicit Visual Alignment for Multimodal Large Language Model

ACL ARR 2026 January Submission 7935 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Multimodal Large Language Model, Visual Alignment
Abstract: Most existing MLLMs (Multimodal Large Language Models) rely on implicit vision–language feature alignment, which enables strong global reasoning but often struggles with fine-grained visual perception tasks. To bridge this gap, we propose E-Align (Explicit Visual Alignment), a new paradigm that shifts from opaque latent feature mapping to interpretable textual alignment. Building on E-Align, we develop Explicit-VL, which integrates a lightweight, parameter-efficient explicit adapter that converts visual encoder outputs into structured textual hints. Unlike implicit vectors, these hints are naturally aligned with the LLM’s symbolic processing, guiding it to attend to specific visual details. Extensive experiments demonstrate that Explicit-VL excels at knowledge-based VQA and vision-centric understanding, hallucinates less, and performs better on object detection. Overall, our results suggest that E-Align strikes a better balance between global reasoning and fine-grained perception, and highlight its potential as a promising paradigm for more effective vision–language feature alignment.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: cross-modal pretraining, vision question answering, multimodality
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 7935