Keywords: Evaluation Benchmark, Patient-level Chest X-ray Benchmark, Structured Report, Single Report Evaluation, Longitudinal Report Evaluation, Clinical NLP, Structured Clinical Information, Medical Benchmark, Radiology Report Metric, Report Evaluation, Structured Report Evaluation
TL;DR: LUNGUAGE introduces the first benchmark, structuring framework, and metric for sequential radiology reporting, enabling fine-grained and temporally consistent evaluation of chest X-ray reports.
Abstract: Radiology reports convey detailed clinical observations and capture diagnostic reasoning that evolves over time. However, existing evaluation methods are limited to single-report settings and rely on coarse metrics that fail to capture fine-grained clinical semantics and temporal dependencies. We introduce LUNGUAGE, a benchmark dataset for structured radiology report generation that supports both single-report evaluation and longitudinal patient-level assessment across multiple studies. It contains 1,473 expert-reviewed annotated chest X-ray reports, 80 of which additionally carry expert-reviewed longitudinal annotations capturing disease progression and inter-study intervals. Using this benchmark, we develop a two-stage structuring framework that transforms generated reports into fine-grained, schema-aligned structured reports, enabling longitudinal interpretation. We also propose LUNGUAGESCORE, an interpretable metric that compares structured outputs at the entity, relation, and attribute level while modeling temporal consistency across patient timelines. These contributions establish the first benchmark dataset, structuring framework, and evaluation metric for sequential radiology reporting, with empirical results demonstrating that LUNGUAGESCORE effectively supports structured report evaluation. Code and data are available at: https://anonymous.4open.science/r/lunguage
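To make the entity/relation/attribute comparison and the temporal-consistency idea concrete, here is a minimal Python sketch of how such a structured-report score could be computed. Everything in it is an illustrative assumption, not the paper's actual LUNGUAGESCORE: the `Finding` schema, the equal weighting, the greedy matching, and the coarse new/stable/resolved transition labels are all hypothetical stand-ins for the benchmark's real schema and progression labels.

```python
"""Toy sketch of entity/relation/attribute scoring for structured reports.

NOT the paper's LUNGUAGESCORE: schema fields, weights, matching strategy,
and transition labels below are illustrative assumptions only.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """One schema-aligned finding (hypothetical schema)."""
    entity: str                           # e.g., "pleural effusion"
    attributes: frozenset = frozenset()   # e.g., {"left", "small"}
    relations: frozenset = frozenset()    # e.g., {("located_in", "left lung")}


def _jaccard(a: frozenset, b: frozenset) -> float:
    return 1.0 if not a and not b else len(a & b) / len(a | b)


def finding_similarity(ref: Finding, hyp: Finding) -> float:
    """Entity must match exactly; attributes and relations are compared
    by Jaccard overlap, then all three levels are averaged (equal weights
    assumed)."""
    if ref.entity != hyp.entity:
        return 0.0
    return (1.0
            + _jaccard(ref.attributes, hyp.attributes)
            + _jaccard(ref.relations, hyp.relations)) / 3.0


def report_score(ref: list[Finding], hyp: list[Finding]) -> float:
    """Greedy one-to-one matching of findings; unmatched findings on
    either side pull the score down (one symmetric number for brevity)."""
    pool = list(hyp)
    total = 0.0
    for r in ref:
        best_i, best_s = -1, 0.0
        for i, h in enumerate(pool):
            s = finding_similarity(r, h)
            if s > best_s:
                best_i, best_s = i, s
        if best_i >= 0:
            pool.pop(best_i)      # consume the matched hypothesis finding
        total += best_s
    return total / max(len(ref) + len(pool), 1)  # leftovers are penalized


def _transitions(prev: set[str], curr: set[str]) -> dict[str, str]:
    """Coarse per-entity change labels between consecutive studies."""
    return {e: "stable" if e in prev and e in curr
               else "new" if e in curr else "resolved"
            for e in prev | curr}


def temporal_consistency(ref_tl: list[list[Finding]],
                         hyp_tl: list[list[Finding]]) -> float:
    """Fraction of per-entity transitions (new/stable/resolved) on which
    the generated timeline agrees with the reference timeline."""
    agree = total = 0
    for t in range(1, min(len(ref_tl), len(hyp_tl))):
        r = _transitions({f.entity for f in ref_tl[t - 1]},
                         {f.entity for f in ref_tl[t]})
        h = _transitions({f.entity for f in hyp_tl[t - 1]},
                         {f.entity for f in hyp_tl[t]})
        for e, lab in r.items():
            total += 1
            agree += int(h.get(e) == lab)
    return agree / total if total else 1.0


if __name__ == "__main__":
    ref = [Finding("pleural effusion", frozenset({"left", "small"}))]
    hyp = [Finding("pleural effusion", frozenset({"left"}))]
    print(round(report_score(ref, hyp), 3))             # 0.833
    print(temporal_consistency([ref, []], [hyp, hyp]))  # 0.0
```

The usage lines show the intended behavior under these assumptions: a finding that matches on entity but misses one attribute scores 0.833, and a generated timeline that reports the effusion as persisting when the reference says it resolved scores 0.0 on temporal consistency. The actual metric operates over the benchmark's expert-annotated schema and finer-grained progression labels.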
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 13712