Adaptive Prompt Optimization for Open-Ended Tasks: Uncertainty Preference as a Secondary Signal

ACL ARR 2026 January Submission 8542 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: language model; prompt engineering; application
Abstract: Prompt optimizers are widely used to produce high-quality prompts for Large Language Models (LLMs), but their effectiveness remains unstable in practice. This instability stems from the misalignment between conservative requirements (e.g., safety compliance) and open-ended goals (e.g., creative writing). To address this, we propose a semantic-entropy-based method that uses task uncertainty to guide prompt optimization. Specifically, we measure a task's uncertainty level with pre-defined templates, then use this measure to steer candidate selection: high-entropy prompt candidates are chosen for creative tasks and low-entropy candidates for conservative ones. Extensive experiments across various model families demonstrate that our method consistently outperforms baselines by effectively adjusting entropy levels. Our approach requires no training, works with black-box models, and integrates easily into existing prompt optimizers.
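
The abstract states the selection rule but not its implementation. Purely as an illustrative sketch, not the authors' actual procedure, the code below shows one way a semantic-entropy score could drive entropy-matched candidate selection against a black-box model. The helpers `sample_outputs` and `cluster` are hypothetical stand-ins for the paper's templates and semantic clustering step.

```python
import math
from collections import Counter
from typing import Callable, List


def semantic_entropy(samples: List[str], cluster: Callable[[str], int]) -> float:
    # Entropy of the empirical distribution over semantic clusters:
    # group sampled outputs by meaning, then compute -sum(p * log p).
    counts = Counter(cluster(s) for s in samples)
    total = len(samples)
    return -sum((c / total) * math.log(c / total) for c in counts.values())


def select_prompt(
    candidates: List[str],
    sample_outputs: Callable[[str], List[str]],  # hypothetical black-box LLM hook
    cluster: Callable[[str], int],               # hypothetical semantic-equivalence hook
    prefer_high: bool,
) -> str:
    # Score each candidate prompt by the semantic entropy of outputs sampled
    # from the model, then keep the candidate whose entropy matches the task:
    # high for creative tasks, low for conservative ones.
    scored = [(semantic_entropy(sample_outputs(p), cluster), p) for p in candidates]
    return max(scored)[1] if prefer_high else min(scored)[1]
```

In practice, `cluster` would group outputs that are semantically equivalent (e.g., by bidirectional entailment, as is common in the semantic-entropy literature) rather than by surface form, and `sample_outputs` would query the model several times at nonzero temperature.
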
Paper Type: Short
Research Area: Semantics: Lexical, Sentence-level Semantics, Textual Inference and Other areas
Research Area Keywords: prompt optimization, generation style probing
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 8542