Learning from Contrastive Prompts: An Automated Prompt Optimization Framework

ACL ARR 2025 July Submission 132 Authors

23 Jul 2025 (modified: 08 Sept 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: As LLMs evolve, significant effort is spent on manually crafting prompts to unlock their full potential. While existing prompt optimization methods automate this process, they often underperform because they learn exclusively from incorrect samples. We propose the Learning from Contrastive Prompts (LCP) framework, which leverages contrastive learning to generate more effective prompts. Unlike previous methods, LCP analyzes the distinctive patterns between high-performing and low-performing prompts, extracting insights about what makes a prompt successful. This contrastive mechanism enables the framework to identify subtle prompt characteristics that significantly affect model performance. Our evaluation on the Big-Bench Hard benchmark shows that LCP achieves a win rate of over 87% against existing prompt optimization methods. LCP offers a systematic approach to prompt engineering, reducing the manual effort of deploying LLMs across varied contexts.
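
The abstract does not spell out the optimization loop, but the core idea, contrasting high- and low-performing prompts to synthesize a better one, can be sketched as follows. This is a minimal illustration, not the paper's released implementation; `score_prompt`, `call_llm`, and the meta-prompt wording are all hypothetical stand-ins.

```python
# Hypothetical sketch of one contrastive prompt-optimization round in the
# spirit of LCP. `score_prompt` and `call_llm` are assumed helpers supplied
# by the caller; they are not part of the paper.
from typing import Callable, List

META_PROMPT = """Below are prompts for the same task, ranked by accuracy.

High-performing prompts:
{good}

Low-performing prompts:
{bad}

Contrast the two groups, state what the high performers do differently,
and write one new prompt that keeps those strengths."""


def lcp_step(
    prompts: List[str],
    score_prompt: Callable[[str], float],  # evaluates a prompt on a dev set
    call_llm: Callable[[str], str],        # queries the optimizer LLM
    k: int = 3,
) -> str:
    """One optimization round: rank candidate prompts, contrast the top-k
    against the bottom-k, and ask the LLM to synthesize an improved prompt."""
    ranked = sorted(prompts, key=score_prompt, reverse=True)
    good = "\n".join(f"- {p}" for p in ranked[:k])   # high performers
    bad = "\n".join(f"- {p}" for p in ranked[-k:])   # low performers
    return call_llm(META_PROMPT.format(good=good, bad=bad))
```

Iterating `lcp_step` while feeding each generated prompt back into the candidate pool would give a full optimization loop of the kind the abstract describes.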
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: Language Modeling, Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Reproduction study, Approaches to low-resource settings
Languages Studied: English
Submission Number: 132