CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays

Published: 18 Sept 2025, Last Modified: 30 Oct 2025
Venue: NeurIPS 2025 Datasets and Benchmarks Track (Spotlight)
License: CC BY-NC 4.0
Keywords: Medical VQA Benchmark, Medical Reasoning Benchmark, Structured Clinical Information, Chest X-ray Understanding, Structured Diagnostic Reasoning, Large Vision-Language Model
TL;DR: We present CheXStruct and CXReasonBench: CheXStruct, an automated pipeline for extracting intermediate reasoning steps directly from chest X-rays, and CXReasonBench, a benchmark for evaluating whether models follow structured diagnostic reasoning.
Abstract: Recent progress in Large Vision-Language Models (LVLMs) has enabled promising applications in medical tasks, such as report generation and visual question answering. However, existing benchmarks focus mainly on the final diagnostic answer, offering limited insight into whether models engage in clinically meaningful reasoning. To address this, we present CheXStruct and CXReasonBench, a structured pipeline and benchmark built on the publicly available MIMIC-CXR-JPG dataset. CheXStruct automatically derives a sequence of intermediate reasoning steps directly from chest X-rays, such as segmenting anatomical regions, deriving anatomical landmarks and diagnostic measurements, computing diagnostic indices, and applying clinical thresholds. CXReasonBench leverages this pipeline to evaluate whether models can perform clinically valid reasoning steps and to what extent they can learn from structured guidance, enabling fine-grained and transparent assessment of diagnostic reasoning. The benchmark comprises 18,988 QA pairs across 12 diagnostic tasks and 1,200 cases, each paired with up to 4 visual inputs, and supports multi-path, multi-stage evaluation, including visual grounding via anatomical region selection and diagnostic measurements. Even the strongest of the 12 evaluated LVLMs struggles with structured reasoning and generalization, often failing to link abstract knowledge with anatomically grounded visual interpretation. The code is available at https://github.com/ttumyche/CXReasonBench.
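To make the pipeline's "diagnostic index + clinical threshold" step concrete, here is a minimal Python sketch of one such reasoning chain: computing the cardiothoracic ratio (CTR) from heart and thorax segmentation masks and applying the conventional 0.5 cardiomegaly cutoff. The CTR definition and threshold are standard clinical knowledge; the function names, mask format, and toy data below are illustrative assumptions, not the CheXStruct implementation.

```python
# Hypothetical sketch of a CheXStruct-style reasoning step: derive a
# diagnostic measurement from anatomical masks, compute an index, and
# apply a clinical threshold. Not the authors' code.
import numpy as np

def horizontal_extent(mask: np.ndarray) -> float:
    """Maximal horizontal extent (in pixels) of a binary mask of shape (H, W)."""
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return 0.0
    return float(cols.max() - cols.min() + 1)

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR = maximal horizontal cardiac width / maximal internal thoracic width."""
    thorax_w = horizontal_extent(thorax_mask)
    return horizontal_extent(heart_mask) / thorax_w if thorax_w > 0 else float("nan")

def classify_cardiomegaly(ctr: float, threshold: float = 0.5) -> str:
    """Conventional clinical rule: CTR > 0.5 on a PA film suggests cardiomegaly."""
    return "cardiomegaly" if ctr > threshold else "normal"

if __name__ == "__main__":
    # Toy 8x16 binary masks standing in for chest X-ray segmentations.
    thorax = np.zeros((8, 16), dtype=bool); thorax[1:7, 1:15] = True
    heart = np.zeros((8, 16), dtype=bool);  heart[3:6, 4:13] = True
    ctr = cardiothoracic_ratio(heart, thorax)
    print(f"CTR = {ctr:.2f} -> {classify_cardiomegaly(ctr)}")  # CTR = 0.64 -> cardiomegaly
```

Each stage of this chain (region selection, measurement, index, threshold) corresponds to an intermediate step that CXReasonBench probes with its own QA items, rather than scoring only the final label.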
Croissant File: json
Dataset URL: https://physionet.org/content/chexstruct-cxreasonbench/1.0.1/
Code URL: https://github.com/ttumyche/CXReasonBench
Primary Area: AI/ML Datasets & Benchmarks for health sciences (e.g. climate, health, life sciences, physics, social sciences)
Submission Number: 1479