Evaluating Medical LLMs by Levels of Autonomy: A Survey Moving from Benchmarks to Applications

ACL ARR 2026 January Submission6609 Authors

05 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: evaluation and metrics, NLP datasets, evaluation methodologies
Abstract: Medical large language models achieve strong scores on standard benchmarks; however, translating those scores into safe and reliable performance in clinical workflows remains a challenge. This survey reframes evaluation through a levels-of-autonomy lens (L0–L3), spanning informational tools, information transformation and aggregation, decision support, and supervised agents. We align existing benchmarks and metrics with the actions permitted at each level and their associated risks, making the evaluation targets explicit. This alignment motivates a level-conditioned blueprint for selecting metrics, assembling evidence, and reporting claims, alongside directions that link evaluation to oversight. By centering autonomy, the survey moves the field beyond score-based claims toward credible, risk-aware evidence for real clinical use.
Paper Type: Long
Research Area: Clinical and Biomedical Applications
Research Area Keywords: evaluation and metrics, medical question answering, clinical dialogue systems
Contribution Types: Surveys
Languages Studied: English
Submission Number: 6609