FHIR-AgentBench: Benchmarking LLM Agents for Realistic Interoperable EHR Question Answering

Published: 27 Nov 2025 · Last Modified: 28 Nov 2025 · ML4H 2025 Poster · CC BY 4.0
Keywords: EHR, FHIR, Large Language Models, agentic reasoning, retrieval, question-answering
Track: Proceedings
Abstract: The recent shift toward the Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR) standard opens a new frontier for clinical AI, requiring LLM agents to navigate complex, resource-based data models instead of conventional structured health data. However, existing benchmarks have lagged behind this transition and lack the realism needed to evaluate recent LLMs on interoperable clinical data. To bridge this gap, we introduce FHIR-AgentBench, a benchmark that grounds 2,931 real-world clinical questions in the HL7 FHIR standard. Using this benchmark, we systematically evaluate agentic frameworks, comparing different data retrieval strategies (direct FHIR API calls vs. specialized tools), interaction patterns (single-turn vs. multi-turn), and reasoning strategies (natural language vs. code generation). Our experiments highlight the practical challenges of retrieving data from intricate FHIR resources and the difficulty of reasoning over them, both of which critically affect question-answering performance.
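To make the contrast between retrieval strategies concrete, the sketch below illustrates the kind of direct FHIR REST call an agent without specialized tools would have to issue and parse on its own. It is a minimal illustration of the standard FHIR search API, not the benchmark's implementation; the server URL, patient ID, and LOINC code are hypothetical assumptions.

```python
# Minimal sketch of the "direct FHIR API call" retrieval strategy: the agent queries a
# FHIR server's REST search endpoint and must reason over the returned resource Bundle.
import requests

FHIR_BASE = "http://localhost:8080/fhir"  # hypothetical FHIR server endpoint


def fetch_patient_observations(patient_id: str, loinc_code: str) -> list[dict]:
    """Search Observation resources for one patient via the FHIR search API."""
    params = {"patient": patient_id, "code": loinc_code, "_count": 50}
    resp = requests.get(f"{FHIR_BASE}/Observation", params=params, timeout=30)
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle resource
    # Each Bundle entry wraps one Observation; answering a clinical question still
    # requires reasoning over nested fields such as valueQuantity and effectiveDateTime.
    return [entry["resource"] for entry in bundle.get("entry", [])]


if __name__ == "__main__":
    # Example: hemoglobin observations (LOINC 718-7) for a hypothetical patient.
    for obs in fetch_patient_observations("example-patient-id", "718-7"):
        value = obs.get("valueQuantity", {})
        print(obs.get("effectiveDateTime"), value.get("value"), value.get("unit"))
```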
General Area: Applications and Practice
Specific Subject Areas: Natural Language Processing, Dataset Release & Characterization
Data And Code Availability: Yes
Ethics Board Approval: No
Code URL: https://github.com/glee4810/FHIR-AgentBench
Submission Number: 217