Position: Retrieval-augmented systems can be dangerous medical communicators

Published: 01 May 2025, Last Modified: 23 Jul 2025 · ICML 2025 Position Paper Track poster · CC BY 4.0
TL;DR: We analyze how current RAG systems answer queries for medical information, and argue that they can be dangerous medical communicators – and should be designed to reason pragmatically about user intent and goals.
Abstract: Patients have long sought health information online, and increasingly, they are turning to generative AI to answer their health-related queries. Given the high stakes of the medical domain, techniques like retrieval-augmented generation and citation grounding have been widely promoted as methods to reduce hallucinations and improve the accuracy of AI-generated responses, and they have been broadly adopted into search engines. However, we argue that even when these methods produce literally accurate content drawn from source documents sans hallucinations, they can still be highly misleading. Patients may derive significantly different interpretations from AI-generated outputs than they would from reading the original source material, let alone consulting a knowledgeable clinician. Through a large-scale query analysis on topics including disputed diagnoses and procedure safety, we support our argument with quantitative and qualitative evidence of the suboptimal answers resulting from current systems. In particular, we highlight how these models tend to decontextualize facts, omit critical relevant sources, and reinforce patient misconceptions or biases. We propose a series of recommendations---such as the incorporation of communication pragmatics and enhanced comprehension of source documents---that could help mitigate these issues and that extend beyond the medical domain.
Lay Summary: For decades, patients have looked online for health information; increasingly, they are using generative AI to answer health queries, e.g., via chatbots or AI-powered search results. Given the importance of providing accurate medical information, large language model developers often incorporate retrieval-augmented generation (RAG), which allows the AI’s response to draw from and cite trusted websites, decreasing the chance that the model makes up information. However, in this paper, we argue that RAG can lead to failure modes where responses are misleading even when they are accurate. While each individual sentence may be factual, the generated response can take information out of context, omit important relevant information, and reinforce patient misconceptions. To substantiate our concern about these failure modes, we provide quantitative and qualitative evidence from current real-world systems on realistic patient searches. We propose a series of recommendations—from changing the underlying algorithms to changing how responses are displayed—that could help address these issues.
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera ready paper with the latest ICML2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: NzFhY
Link To Code: https://github.com/rayarxti/rag-medical-communicator/
Permissions Form: pdf
Primary Area: Social, Ethical, and Environmental Impacts
Keywords: retrieval augmented models, natural language processing, medicine, misinformation
Submission Number: 101