Keywords: Multimodal large language model, Visual projector, Language-guided visual token selection.
Abstract: The visual projector plays a crucial role in bridging the visual model and the large language model (LLM) in modern multimodal LLMs (MLLMs).
Typically, MLLMs utilize a simple MLP that preserves all visual tokens, incurring a heavy computational burden and redundant visual tokens.
Some recent works adopt either a resampler or adaptive pooling to reduce the number of visual tokens. However, they reduce the visual tokens based solely on image features,
leading to feature misalignment between the visual tokens and text tokens. In this paper, we present a novel Language-Guided Visual Projector (LVP), in which the text
features serve as a guide for selecting the important visual tokens. Specifically, we first adopt a lightweight text encoder to extract the text features. Then, a lightweight
cross-modal feature enhancement module is proposed to strengthen cross-modal feature alignment. Finally, we select the important visual tokens according to the feature similarity between visual tokens and text tokens, and apply
a deformable attention module to integrate the visual features from the visual encoder into the selected visual tokens. We further propose a multi-level language-guided visual projector, which selects visual tokens from different stages of the visual encoder.
Extensive experiments demonstrate that our LVP compresses the visual tokens by 75\%–95\% while achieving competitive or even better performance across diverse benchmarks, with a significant efficiency advantage. For instance, LLaVA1.5-LVP with Qwen2.5-7B
obtains 72.4\% accuracy on VQA$^\text{T}$, achieving a state-of-the-art result. The code and the model will be released.
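The core selection step described above — scoring visual tokens by their similarity to text tokens and keeping only the top-scoring fraction — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes cosine similarity as the scoring function and simple top-k selection, uses random features in place of real encoder outputs, and omits the deformable-attention integration and multi-level variant. All function and variable names are hypothetical.

```python
import numpy as np

def select_visual_tokens(visual_tokens, text_tokens, keep_ratio=0.25):
    """Keep the visual tokens most similar to the text query.

    visual_tokens: (N, D) array of visual token features.
    text_tokens:   (M, D) array of text token features.
    keep_ratio:    fraction of visual tokens kept (0.25 = 75% compression).
    """
    # L2-normalize both sides so the dot product is cosine similarity.
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    t = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    # Score each visual token by its maximum similarity to any text token.
    scores = (v @ t.T).max(axis=1)                       # shape (N,)
    k = max(1, int(round(keep_ratio * len(visual_tokens))))
    keep = np.argsort(scores)[::-1][:k]                  # indices of top-k tokens
    keep = np.sort(keep)                                 # restore spatial order
    return keep, visual_tokens[keep]

# Toy example: 576 visual tokens (a 24x24 grid), 16 text tokens, 64-dim features.
rng = np.random.default_rng(0)
idx, kept = select_visual_tokens(rng.normal(size=(576, 64)),
                                 rng.normal(size=(16, 64)),
                                 keep_ratio=0.25)
print(len(idx), kept.shape)  # 144 tokens survive, i.e. 75% compression
```

In the paper's pipeline the selected tokens would then be refined by deformable attention over the encoder's feature maps before being passed to the LLM; the sketch stops at the selection itself.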
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6390