GIV-CXR: Densely Grounded, Visually Interpretable Chest X-ray Question Answering Dataset

Published: 26 Apr 2026 · Last Modified: 26 Apr 2026 · Med-Reasoner 2026 Poster · CC BY 4.0
Keywords: Medical Visual Question Answering, Spatial Grounding, Chest X-ray, Benchmark Dataset, Clinical AI Evaluation
Abstract: Visual question answering in medical imaging requires models to ground predictions in anatomical locations for clinical verification, yet existing benchmarks lack infrastructure for systematically evaluating spatial reasoning. While recent datasets include bounding boxes as metadata, no standardized metrics exist to quantify whether models correctly associate predictions with image regions. Additionally, opportunistic question generation yields uneven anatomical coverage, potentially encouraging spurious correlations rather than robust spatial understanding. We introduce GIV-CXR, a grounded chest X-ray VQA benchmark that enables quantified localization assessment. The benchmark comprises 355,293 question-answer pairs systematically distributed across 36 anatomical structures, with explicit bounding-box linkage enabling evaluation via Intersection-over-Union (IoU) metrics. We employ structured generation across five reasoning dimensions to ensure comprehensive coverage, implement automated hallucination filtering and radiologist validation for quality control, and provide a bias analysis quantifying performance variation across anatomical locations and disease types. Experimental results demonstrate that explicit spatial supervision substantially improves localization while maintaining answer quality, and that our framework generalizes to assess models trained on diverse VQA datasets. This work provides the vision community with standardized infrastructure for developing spatially aware vision-language models for safety-critical medical domains, where localization precision is essential for clinical trust and deployment.
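
Since localization is scored with Intersection-over-Union between predicted and annotated boxes, the following is a minimal sketch of the metric as it would apply here, assuming axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates; the function name and sample values are illustrative and not taken from the benchmark's released code.

def box_iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the shared intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical example: a model's predicted box scored against a
# radiologist-annotated ground-truth box for the same anatomical finding.
predicted = (120, 80, 260, 210)
ground_truth = (130, 90, 250, 220)
print(f"IoU = {box_iou(predicted, ground_truth):.3f}")  # IoU = 0.742

A model's localization would then be counted correct when this score exceeds a chosen threshold (commonly 0.5 in detection benchmarks), though the paper's exact thresholding protocol is not specified in the abstract.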
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 22