Abstract: Multi-modal Large Language Models (MLLMs) have recently exhibited impressive general-
purpose capabilities by leveraging vision foundation models to encode the core concepts of
images into representations. These are then combined with instructions and processed by the
language model to generate high-quality responses. Despite significant progress in enhancing
the language component, challenges persist in optimally fusing visual encodings within the
language model for task-specific adaptability. Recent research has focused on improving
this fusion through modality adaptation modules but at the cost of significantly increased
model complexity and training data needs. In this paper, we propose EMMA (Efficient
Multi-Modal Adaptation), a lightweight cross-modality module designed to efficiently fuse
visual and textual encodings, generating instruction-aware visual representations for the
language model. Our key contributions include: (1) an efficient early fusion mechanism
that integrates vision and language representations with minimal added parameters (less
than a 0.2% increase in model size); (2) an in-depth interpretability analysis that sheds light
on the internal mechanisms of the proposed method; and (3) comprehensive experiments that
demonstrate notable improvements on both specialized and general benchmarks for MLLMs.
Empirical results show that EMMA boosts performance across multiple tasks by up to 9.3%
while significantly improving robustness against hallucinations.
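To make the early-fusion idea concrete, here is a minimal sketch of what a lightweight linear fusion module could look like. This is an illustrative assumption, not EMMA's actual implementation: the function name `linear_visual_alignment`, the mean-pooling of instruction tokens, and the additive fusion rule are all hypothetical choices made for this example; only the general idea of linearly conditioning visual tokens on the instruction comes from the abstract.

```python
import numpy as np

def linear_visual_alignment(visual_tokens, text_tokens, W, b):
    # Hypothetical sketch: pool the instruction embedding, project it with a
    # single linear map, and add the result to every visual token so the
    # visual representation becomes instruction-aware ("early fusion").
    instr = text_tokens.mean(axis=0)   # (d,) pooled instruction embedding
    shift = instr @ W + b              # (d,) linear projection
    return visual_tokens + shift       # (n_v, d) instruction-aware tokens

# Toy dimensions for illustration only.
d = 8
rng = np.random.default_rng(0)
V = rng.normal(size=(16, d))           # 16 visual patch tokens
T = rng.normal(size=(5, d))            # 5 instruction tokens
W = rng.normal(size=(d, d)) * 0.01     # the only learned parameters (d*d + d)
b = np.zeros(d)

fused = linear_visual_alignment(V, T, W, b)
print(fused.shape)  # (16, 8)
```

Note that the added parameter count of such a module is just `d*d + d`, which is how a fusion mechanism of this shape can stay well under 1% of the size of a multi-billion-parameter language model.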
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The following revisions have been incorporated into the camera-ready version in response to the Area Chair’s feedback:
- An ablation study has been added in Section 4, comparing EMMA’s Linear Visual Alignment module with a cross-attention variant. The results are presented in Table 4.
- Additional details about the instruction fine-tuning dataset have been included in Section 4.
Code: https://github.com/SaraGhazanfari/emma
Assigned Action Editor: ~Weicheng_Kuo1
Submission Number: 4333