Clinically Grounded Agent-based Report Evaluation: An Interpretable Metric for Radiology Report Generation

Published: 12 Oct 2025 · Last Modified: 12 Nov 2025 · GenAI4Health 2025 Poster · CC BY 4.0
Keywords: Radiology report generation, Clinical evaluation metrics, Agent-based assessment, Radiology AI, Report fidelity evaluation, Large language models (LLMs)
TL;DR: We introduce ICARE, an interpretable, clinically grounded evaluation framework for radiology report generation that disentangles hallucinations from omissions and aligns closely with clinician judgments.
Abstract: Radiological imaging is central to diagnosis, treatment planning, and clinical decision-making. Vision-language foundation models have spurred interest in automated radiology report generation (RRG), but safe deployment requires reliable clinical evaluation of generated reports. Existing metrics often rely on surface-level similarity and/or behave as black boxes, lacking interpretability. We introduce ICARE (Interpretable and Clinically-grounded Agent-based Report Evaluation), an interpretable evaluation framework leveraging large language model agents and dynamic multiple-choice question answering (MCQA). Two agents, each given either the ground-truth or the generated report, generate clinically meaningful questions and quiz each other. Agreement on answers captures preservation and consistency of findings, serving as interpretable proxies for clinical precision and recall. By linking scores to question–answer pairs, ICARE enables transparent and interpretable assessment. Clinician studies show ICARE aligns significantly more with expert judgment than prior metrics, while model comparisons reveal interpretable error patterns.
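A minimal conceptual sketch of the agent-based MCQA protocol described in the abstract, assuming a generic `ask_llm` chat-completion callable and JSON-formatted question output; the function names, prompts, and the mapping of agreement directions to precision/recall proxies are illustrative assumptions, not the authors' implementation.

```python
import json
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class MCQ:
    question: str       # clinically meaningful question about a finding
    options: List[str]  # multiple-choice options
    answer: str         # the asking agent's answer, grounded in its own report

def generate_mcqs(report: str, ask_llm: Callable[[str], str], n: int = 5) -> List[MCQ]:
    """One agent drafts MCQs (with its own answers) from the report it holds."""
    prompt = (
        f"From the radiology report below, write {n} multiple-choice questions about "
        "its clinical findings. Return a JSON list of objects with keys "
        '"question", "options", "answer".\n\n' + report
    )
    items = json.loads(ask_llm(prompt))  # real LLM output may need more robust parsing
    return [MCQ(**item) for item in items]

def answer_mcqs(report: str, mcqs: List[MCQ], ask_llm: Callable[[str], str]) -> List[str]:
    """The other agent answers each question using only the report it holds."""
    answers = []
    for q in mcqs:
        prompt = (
            "Answer using only the report below; reply with the chosen option text.\n\n"
            f"Report:\n{report}\n\nQuestion: {q.question}\nOptions: {'; '.join(q.options)}"
        )
        answers.append(ask_llm(prompt).strip())
    return answers

def agreement(mcqs: List[MCQ], answers: List[str]) -> float:
    """Fraction of questions on which the two agents agree."""
    if not mcqs:
        return 0.0
    return sum(a == q.answer for q, a in zip(mcqs, answers)) / len(mcqs)

def icare_style_scores(gt_report: str, gen_report: str,
                       ask_llm: Callable[[str], str]) -> Dict[str, float]:
    """Score a generated report against the ground truth via cross-agent quizzing."""
    # Questions posed from the ground-truth report, answered from the generated one:
    # agreement acts as a recall-like proxy (are true findings preserved?).
    gt_qs = generate_mcqs(gt_report, ask_llm)
    recall_proxy = agreement(gt_qs, answer_mcqs(gen_report, gt_qs, ask_llm))
    # Questions posed from the generated report, answered from the ground truth:
    # agreement acts as a precision-like proxy (are generated findings supported?).
    gen_qs = generate_mcqs(gen_report, ask_llm)
    precision_proxy = agreement(gen_qs, answer_mcqs(gt_report, gen_qs, ask_llm))
    return {"precision_proxy": precision_proxy, "recall_proxy": recall_proxy}
```

Because each score aggregates explicit question–answer pairs, a disagreement can be traced back to the specific question (and thus the finding) that caused it, which is the interpretability property the abstract highlights.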
Submission Number: 147