Co-Evolutionary Prompt Optimization for Improving Language Model Performance on Specialized Domains

ACL ARR 2025 February Submission4235 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Prompt engineering is a popular customization method for language models, particularly relevant for tasks and domains with limited access to annotated data for model fine-tuning. Still, the discovery of effective prompts is challenging, driving a desire for general prompt learning methods. This paper presents CoEvo, a prompt learning approach that combines ideas from co-evolutionary computation with the use of relatively small language models for data selection and for emulating genetic crossover and mutation. We evaluate CoEvo on four tasks involving clinical or legal text, comparing different prompting techniques. The results show that CoEvo is capable of discovering effective and human-understandable prompts, with improvements over initial prompts designed manually. The code for replicating our experiments will be made available upon acceptance.
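The abstract describes an evolutionary loop over candidate prompts, with genetic crossover and mutation emulated by small language models. A minimal sketch of such a loop is shown below; this is not the paper's implementation, and the string-based `crossover` and `mutate` operators stand in for what would, per the abstract, be LM-driven operations.

```python
import random

def crossover(p1: str, p2: str) -> str:
    # Stand-in for LM-emulated crossover: splice the first half of one
    # prompt with the second half of another at the word level.
    w1, w2 = p1.split(), p2.split()
    return " ".join(w1[: len(w1) // 2] + w2[len(w2) // 2 :])

def mutate(prompt: str, rng: random.Random) -> str:
    # Stand-in for LM-emulated mutation: append a random instruction
    # fragment (hypothetical fragments, for illustration only).
    fragments = ["Be concise.", "Think step by step.", "Cite the source text."]
    return prompt + " " + rng.choice(fragments)

def evolve(population, fitness, generations=10, seed=0):
    """Simple elitist loop: keep the top half, breed the rest."""
    rng = random.Random(seed)
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: max(2, len(scored) // 2)]
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents)), rng)
            for _ in range(len(population) - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)
```

In a realistic setting, `fitness` would score a prompt by the downstream task accuracy it induces on a small validation set, which is the expensive step that motivates careful data selection.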
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: prompting, optimization methods, clinical NLP, legal NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 4235