Keywords: Visual Question Answering, Vision-Language Models, Multimodality, Visual Grounding, Cross-modal Application, Autonomous Driving, Maneuver Classification, Benchmarking, Textual Bias, Synthetic Data Generation, Curriculum Learning, Debiasing
Abstract: Multiple-Choice Question Answering (MCQA) benchmarks are an established standard for measuring Vision-Language Model (VLM) performance in driving tasks. However, we observe a known phenomenon: synthetically generated MCQAs are highly susceptible to hidden textual cues that allow models to exploit linguistic patterns rather than visual context. Our results show that a VLM fine-tuned on such data can achieve accuracy comparable to that reported on human-validated benchmarks even without any visual input. Our proposed method reduces blind accuracy from +66.9% above random to +2.9%, eliminating the vast majority of exploitable textual shortcuts. By decoupling the correct answer from linguistic artifacts and employing a curriculum learning strategy, we force the model to rely on visual grounding, ensuring that performance accurately reflects perceptual understanding.
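To make the "blind accuracy" diagnostic in the abstract concrete, below is a minimal sketch of evaluating an MCQA benchmark with the image withheld and reporting the margin over random chance. The `ask_model` callable, the dictionary item format, and the toy example are assumptions for illustration, not the paper's actual evaluation code.

```python
# Sketch of the blind-accuracy probe: query a model with question text and
# answer options only (no image) and compare accuracy to the random baseline.
# `ask_model` is a hypothetical callable standing in for a text-only query to a VLM.
import random
from typing import Callable, Dict, List


def blind_accuracy(items: List[Dict], ask_model: Callable[[str], str]) -> float:
    """Accuracy when the model sees only the question and options, never the image."""
    correct = 0
    for item in items:
        options = "\n".join(f"{letter}. {text}" for letter, text in item["options"].items())
        prompt = f"{item['question']}\n{options}\nAnswer with a single letter."
        if ask_model(prompt).strip().upper().startswith(item["answer"]):
            correct += 1
    return correct / len(items)


def margin_over_random(items: List[Dict], ask_model: Callable[[str], str]) -> float:
    """Blind accuracy minus the random-guess baseline (1 / number of options per item)."""
    chance = sum(1 / len(item["options"]) for item in items) / len(items)
    return blind_accuracy(items, ask_model) - chance


if __name__ == "__main__":
    # Toy item and a random guesser stand in for a real benchmark and a real VLM.
    toy_items = [
        {"question": "What maneuver is the ego vehicle performing?",
         "options": {"A": "Left turn", "B": "Right turn", "C": "Lane keeping", "D": "U-turn"},
         "answer": "C"},
    ]
    guesser = lambda prompt: random.choice(["A", "B", "C", "D"])
    print(f"Margin over random: {margin_over_random(toy_items, guesser):+.1%}")
```

A large positive margin with no visual input indicates textual shortcuts of the kind the abstract describes; a margin near zero is the behavior the proposed debiasing aims for.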
Paper Type: Short
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: vision question answering, cross-modal application, cross-modal information extraction, video processing, multimodality
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6705