Abstract: Despite the widespread adoption of Vision-Language Understanding (VLU) benchmarks such as VQA v2, OKVQA, A-OKVQA, GQA, VCR, SWAG, and VisualCOMET, our analysis reveals a pervasive issue affecting their integrity: these benchmarks contain samples whose answers rely on assumptions unsupported by the provided context. Training models on such data fosters biased learning and hallucinations, as models tend to make similar unwarranted assumptions. To address this issue, we collect contextual data for each sample whenever available and train a context selection module to facilitate evidence-based model predictions. Strong improvements across multiple benchmarks demonstrate the effectiveness of our approach.
Further, we develop a general-purpose Context-AwaRe Abstention (CARA) detector that identifies samples lacking sufficient context and improves model accuracy by abstaining from responding when the required context is absent. CARA generalizes to new benchmarks it was not trained on, underscoring its utility for detecting or cleaning samples with inadequate context in future VLU benchmarks. Finally, we curate a Context Ambiguity and Sufficiency Evaluation (CASE) set to benchmark the performance of insufficient-context detectors. Overall, our work represents a significant step toward ensuring that vision-language models generate trustworthy, evidence-based outputs in complex real-world scenarios.
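To make the abstention idea concrete, below is a minimal illustrative sketch of how a context-aware abstention wrapper could gate a VQA model's answers. It is not the paper's actual CARA implementation; all names here (`Sample`, `sufficiency_scorer`, `vqa_model`, `ABSTAIN`, the 0.5 threshold) are hypothetical placeholders standing in for a trained sufficiency detector and an answering model.

```python
# Sketch only: abstain when a (hypothetical) sufficiency detector judges the
# available context inadequate, otherwise defer to the answering model.
from dataclasses import dataclass
from typing import Callable, Optional

ABSTAIN = "[ABSTAIN: insufficient context]"  # placeholder abstention token


@dataclass
class Sample:
    image_path: str
    question: str
    context: Optional[str]  # extra evidence collected for the sample, if any


def answer_with_abstention(
    sample: Sample,
    sufficiency_scorer: Callable[[Sample], float],  # assumed: returns P(context is sufficient)
    vqa_model: Callable[[Sample], str],             # assumed: returns an answer string
    threshold: float = 0.5,                         # assumed operating point
) -> str:
    """Return the model's answer only when the detector deems the context sufficient."""
    if sample.context is None or sufficiency_scorer(sample) < threshold:
        return ABSTAIN
    return vqa_model(sample)
```

In this framing, the quality of the sufficiency scorer determines the accuracy/coverage trade-off: a higher threshold abstains more often but makes the answers that are produced more evidence-grounded.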
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Multimodal Fusion, [Experience] Multimedia Applications, [Content] Media Interpretation
Relevance To Conference: Our paper is fundamentally linked to multimedia/multimodal processing through its focus on Vision-Language Understanding (VLU), which exemplifies the core of multimodal research by integrating textual and visual data. By tackling challenges in interpreting complex multimodal scenarios, our study addresses key aspects of multimodal processing: the fusion of different data types (images and text) and the contextual analysis necessary for understanding and responding to visual and textual inputs. Furthermore, by introducing a novel detector for assessing the sufficiency of contextual information and a context selection methodology, our work enhances the interpretative capabilities of multimodal systems. This directly advances the field of multimodal processing, highlighting the importance of context and the selective combination of modalities for improved accuracy and reliability in real-world applications.
Supplementary Material: zip
Submission Number: 5104