Beyond Language Priors: Enhancing Visual Comprehension and Attention in MLLMs

Published: 07 May 2025, Last Modified: 29 May 2025 · VisCon 2025 Poster · CC BY 4.0
Keywords: Multimodal Alignment, Vision Encoder, Visual Grounding
TL;DR: We address two challenges in current MLLMs, weak visual understanding and over-reliance on language priors, through novel training approaches and a tailored synthetic dataset.
Abstract: Achieving deep alignment between vision and language remains a central challenge for Multimodal Large Language Models (MLLMs). These models often fail to fully leverage visual input, defaulting to strong language priors. Our approach first provides insights into how MLLMs internally build visual understanding of image regions and then introduces techniques to amplify this capability. Specifically, we explore techniques designed both to deepen the model's understanding of visual content and to ensure that these visual insights actively guide language generation. We demonstrate the superior multimodal understanding of our resultant model through a detailed upstream analysis quantifying its ability to predict visually dependent tokens, as well as a more than 10-point improvement on visually challenging tasks.
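
The upstream analysis mentioned in the abstract quantifies how well the model predicts visually dependent tokens. The sketch below is a minimal, hypothetical illustration of one way such a per-token score could be computed, by comparing log-probabilities with and without the image in context; the function name `visual_dependence_scores` and the toy random inputs are assumptions for illustration, not the paper's actual metric or code.

```python
import torch
import torch.nn.functional as F

def visual_dependence_scores(logits_with_image: torch.Tensor,
                             logits_without_image: torch.Tensor,
                             target_ids: torch.Tensor) -> torch.Tensor:
    """Per-token gain in log-probability from conditioning on the image.

    logits_*: (seq_len, vocab_size) next-token logits for the same text,
              produced with and without the image in the model's context.
    target_ids: (seq_len,) ground-truth token ids.
    Returns a (seq_len,) tensor; large positive values mark tokens whose
    prediction depends on the visual input rather than language priors.
    """
    logp_img = F.log_softmax(logits_with_image, dim=-1)
    logp_txt = F.log_softmax(logits_without_image, dim=-1)
    idx = target_ids.unsqueeze(-1)
    gain = logp_img.gather(-1, idx) - logp_txt.gather(-1, idx)
    return gain.squeeze(-1)

# Toy usage with random logits; in practice both logit tensors would come
# from running an MLLM twice on the same caption, once with the image and
# once with it removed or masked.
vocab, seq = 32000, 12
scores = visual_dependence_scores(torch.randn(seq, vocab),
                                  torch.randn(seq, vocab),
                                  torch.randint(0, vocab, (seq,)))
print(scores)
```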
Submission Number: 6