RAMBLA: A FRAMEWORK FOR EVALUATING THE RELIABILITY OF LLMS AS ASSISTANTS IN THE BIOMEDICAL DOMAIN

Published: 05 Mar 2024 · Last Modified: 08 May 2024 · ICLR 2024 R2-FM Workshop Poster · CC BY 4.0
Keywords: Reliable Large Language Models (LLMs), Reliability assessment, Biomedical LLM assistants, LLM evaluators, Semantic similarity, Prompt robustness, Recall, Hallucinations, Biomedicine, Artificial Intelligence (AI)
TL;DR: We introduce a framework to evaluate the reliability of LLMs as assistants in the biomedical domain and report results for four state-of-the-art LLMs.
Abstract: Large Language Models (LLMs) increasingly support applications in a wide range of domains, some with potentially high societal impact such as biomedicine, yet their reliability in realistic use cases is under-researched. In this work we introduce the Reliability AssessMent for Biomedical LLM Assistants ($\texttt{RAmBLA}$) framework and evaluate whether four state-of-the-art foundation LLMs can serve as reliable assistants in the biomedical domain. We identify prompt robustness, high recall, and a lack of hallucinations as necessary criteria for this use case. We design shortform tasks and tasks requiring freeform LLM responses that mimic real-world user interactions. We evaluate LLM performance using semantic similarity with a ground-truth response, as judged by an evaluator LLM.
Submission Number: 17
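
The abstract's final evaluation step, grading a freeform response by semantic similarity to a ground-truth answer via an evaluator LLM, could be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, and the judge prompt wording is an assumption, not RAmBLA's actual template.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this to a real chat-completion API."""
    raise NotImplementedError("replace with your LLM provider's client")


def semantically_equivalent(response: str, ground_truth: str) -> bool:
    """Ask an evaluator LLM whether a freeform response conveys the
    same meaning as the ground-truth answer."""
    judge_prompt = (
        "You will see a model response and a reference answer.\n"
        "Reply with exactly 'yes' if they convey the same meaning, "
        "otherwise reply with exactly 'no'.\n\n"
        f"Response: {response}\n"
        f"Reference: {ground_truth}\n"
    )
    verdict = call_llm(judge_prompt).strip().lower()
    return verdict.startswith("yes")


# Illustrative usage (assumed data, not from the paper): aggregate the
# binary judgments over a task's (response, ground_truth) pairs to score
# a model, e.g.
#   accuracy = sum(semantically_equivalent(r, gt) for r, gt in pairs) / len(pairs)
```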