Improving Expert Radiology Report Summarization by Prompting Large Language Models with a Layperson Summary

ACL ARR 2025 February Submission 5982 Authors

16 Feb 2025 (modified: 09 May 2025), ACL ARR 2025 February Submission, License: CC BY 4.0
Abstract: Radiology report summarization (RRS) is crucial for patient care, requiring concise "Impressions" to be distilled from detailed "Findings." This paper introduces a novel prompting strategy that enhances RRS by first generating a layperson summary. This intermediate step normalizes key observations and simplifies complex information using non-expert communication techniques inspired by doctor-patient interactions. Combined with few-shot in-context learning, it improves the model's ability to link general terms to specific findings. We evaluate this approach on the MIMIC-CXR, CheXpert, and MIMIC-III datasets, benchmarking it against 7B/8B-parameter state-of-the-art open-source large language models (LLMs) such as Llama-3.1-8B-Instruct. Our results demonstrate improvements in summarization accuracy and accessibility, particularly in out-of-domain tests, with gains as high as 5% on some metrics.
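The abstract describes a two-stage prompting pipeline: generate a layperson summary of the Findings first, then produce the Impression with few-shot in-context learning. The sketch below illustrates that idea only; the prompt wording, the call_llm helper, and the example structure are assumptions for illustration, not the paper's actual prompts or code.

```python
# Illustrative sketch of the two-stage prompting idea from the abstract.
# `call_llm`, the prompt text, and the few-shot format are placeholders.

from typing import List, Tuple


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an instruction-tuned LLM
    (e.g., Llama-3.1-8B-Instruct served locally or via an API)."""
    raise NotImplementedError("Wire this to your model or serving stack.")


def layperson_summary(findings: str) -> str:
    # Stage 1: restate the Findings in plain language, keeping every
    # clinically important observation (non-expert communication style).
    prompt = (
        "Explain the following radiology findings to a patient in plain, "
        "non-technical language, keeping all important observations:\n\n"
        f"{findings}\n\nLayperson summary:"
    )
    return call_llm(prompt)


def summarize_impression(findings: str,
                         few_shot: List[Tuple[str, str]]) -> str:
    # Stage 2: few-shot in-context learning, with the layperson summary
    # included so the model can link general terms to specific findings.
    lay = layperson_summary(findings)
    shots = "\n\n".join(
        f"Findings: {f}\nImpression: {imp}" for f, imp in few_shot
    )
    prompt = (
        "Write a concise radiology Impression from the Findings.\n\n"
        f"{shots}\n\n"
        f"Layperson summary of the new case: {lay}\n"
        f"Findings: {findings}\nImpression:"
    )
    return call_llm(prompt)
```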
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: prompting, retrieval-augmented models, abstractive summarisation, healthcare applications, clinical NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 5982