Clinic-Prompt: Few-Shot Discrete Clinical Prompt Optimization

Published: 07 Mar 2025 · Last Modified: 25 Mar 2025 · GenAI4Health Poster · CC BY 4.0
Keywords: Prompt Optimization, Clinical Task, Reinforcement Learning
Abstract: Language models have demonstrated remarkably capable intelligent behavior, and incorporating them into clinical healthcare could greatly benefit society. However, they face challenges in clinical tasks: these tasks demand domain-specific knowledge and expertise, and relevant data samples for fine-tuning are scarce. Prompt tuning and optimization with frozen language model weights have emerged as highly effective strategies to address this. These approaches adapt pre-trained language models to diverse downstream tasks, particularly in data-scarce (few-shot) settings. In clinical healthcare, natural-language-level discrete prompt optimization is preferred over continuous, differentiable prompt vectors for its superior interpretability and reliability. However, few-shot discrete clinical prompt optimization remains unexplored. To tackle this challenge, we introduce a novel scheme, Clinic-Prompt, that models non-differentiable discrete prompt optimization as a reinforcement learning problem and incorporates clinical knowledge into the optimization to enhance performance in two clinical applications: multi-label International Classification of Diseases (ICD) code classification and mortality prediction. Furthermore, we demonstrate the applicability of Clinic-Prompt in a large language model (GPT-4o-mini) setting for the Medication Status Extraction task. Experimental results demonstrate the effectiveness of Clinic-Prompt, improving the performance and applicability of pre-trained models for clinical tasks, with a 2.17% increase in F1-micro and a 2.32% increase in accuracy on the two tasks, respectively.
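The abstract's core idea, treating non-differentiable discrete prompt search as a reinforcement learning problem, can be illustrated with a minimal REINFORCE-style sketch. This is not the paper's method: the token vocabulary, the toy reward (a stand-in for a task metric such as few-shot F1), and all hyperparameters below are illustrative assumptions.

```python
import math
import random

# Hypothetical vocabulary of candidate prompt tokens; the paper's actual
# action space and reward function are not specified in the abstract.
VOCAB = ["diagnose", "patient", "record", "classify", "summarize", "code"]
PROMPT_LEN = 3

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reward(prompt):
    # Toy stand-in for a clinical task metric: rewards prompts that
    # contain task-relevant tokens. A real system would score the prompt
    # by running the frozen LM on a few-shot validation set.
    return sum(tok in ("classify", "code") for tok in prompt)

def optimize(iterations=3000, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Per-position token preferences (logits), updated by REINFORCE.
    logits = [[0.0] * len(VOCAB) for _ in range(PROMPT_LEN)]
    baseline = 0.0  # running-mean baseline to reduce gradient variance
    for _ in range(iterations):
        # Sample a prompt from the current stochastic policy.
        idxs = []
        for pos in range(PROMPT_LEN):
            probs = softmax(logits[pos])
            idxs.append(rng.choices(range(len(VOCAB)), weights=probs)[0])
        r = reward([VOCAB[i] for i in idxs])
        adv = r - baseline
        baseline += 0.05 * (r - baseline)
        # Policy-gradient step: grad of log-softmax at each position.
        for pos, chosen in enumerate(idxs):
            probs = softmax(logits[pos])
            for j in range(len(VOCAB)):
                grad = (1.0 if j == chosen else 0.0) - probs[j]
                logits[pos][j] += lr * adv * grad
    # Greedy decode of the learned policy.
    return [VOCAB[max(range(len(VOCAB)), key=lambda j: logits[pos][j])]
            for pos in range(PROMPT_LEN)]

if __name__ == "__main__":
    print(optimize())
```

Because the prompt stays a sequence of natural-language tokens throughout, the result keeps the interpretability the abstract argues for, in contrast to continuous soft-prompt vectors.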
Submission Number: 59