SemioLLM: Assessing Large Language Models for Semiological Analysis in Epilepsy Research

Published: 17 Jun 2024, Last Modified: 22 Jul 2024, ICML 2024 AI4Science Poster, CC BY 4.0
Keywords: AI for Science, Epilepsy, LLMs, Seizure Onset Zone (SOZ), Neuroscience, AI in Healthcare
Abstract: As Large Language Models (LLMs) advance, they have shown promising results in their ability to encode general medical knowledge. However, their potential application in clinical practice warrants rigorous evaluation on domain-specific tasks, where benchmarks are largely missing. In this study, SemioLLM, we test the ability of state-of-the-art LLMs (GPT-3.5, GPT-4, Mixtral 8x7B, and Qwen-72B-Chat) to leverage their internal knowledge and reasoning for epilepsy diagnosis. Specifically, we obtain likelihood estimates linking unstructured text descriptions of seizures to seizure-generating brain regions, using an annotated clinical database containing 1269 entries. We evaluate the LLMs' performance, confidence, reasoning, and citation abilities against clinical evaluation. All models achieve above-chance classification performance, and prompt engineering significantly improves their outcomes, with some models reaching close-to-clinical performance and reasoning. Our analyses also reveal significant pitfalls: several models are highly confident despite poor performance, and some exhibit citation errors and hallucinations. In summary, our work provides the first extensive benchmark comparing current state-of-the-art LLMs in the medical domain of epilepsy and highlights their ability to leverage unstructured texts from patients' medical histories to aid diagnostic processes in healthcare.
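The abstract describes eliciting likelihood estimates over seizure-generating brain regions from an LLM given a free-text seizure description. The sketch below is a hypothetical illustration of how such an elicitation prompt might be issued with an OpenAI-style chat API; it is not the authors' released code, and the candidate region list, prompt wording, and function names are assumptions made for illustration only.

```python
# Hypothetical sketch (not the authors' code): asking a chat-completion LLM to
# distribute likelihood over candidate seizure onset zones for one description.
# Assumes the `openai` Python client (>=1.0) and OPENAI_API_KEY in the environment;
# the region list and prompt wording are illustrative, not from the paper.
import json
from openai import OpenAI

CANDIDATE_REGIONS = ["temporal", "frontal", "parietal", "occipital", "insular", "cingulate"]

def estimate_soz_likelihoods(seizure_description: str, model: str = "gpt-4") -> dict:
    """Ask the model to assign probabilities (summing to 1) to each candidate region."""
    client = OpenAI()
    prompt = (
        "You are assisting with epilepsy semiology analysis. Given the seizure "
        "description below, assign a likelihood (summing to 1) to each candidate "
        f"seizure onset zone: {', '.join(CANDIDATE_REGIONS)}. "
        "Answer with a JSON object mapping region name to probability.\n\n"
        f"Seizure description: {seizure_description}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output makes benchmarking easier
    )
    # Parse the model's JSON answer into a {region: probability} dictionary.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example = "Epigastric rising sensation followed by oral automatisms and impaired awareness."
    print(estimate_soz_likelihoods(example))
```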
Submission Number: 147