Submission Type: Long
Keywords: LLM, Prompt Optimization, APE, OPRO, ProTeGi, CoT, APE-OPRO
TL;DR: We propose APE-OPRO, a hybrid APO method improving cost-efficiency by ~18% over OPRO. ProTeGi offers the strongest performance at lower API cost. Results reveal key trade-offs and prompt sensitivity in real-world LLM tasks.
Abstract: Prompt design is a critical factor in the effectiveness of Large Language Models (LLMs), yet it remains largely heuristic, manual, and difficult to scale. This paper presents the first comprehensive evaluation of Automatic Prompt Optimization (APO) methods for real-world, high-stakes multiclass classification in a commercial setting, addressing a critical gap in the existing literature, where most APO frameworks have been validated only on benchmark classification tasks of limited complexity.
We introduce APE-OPRO, a novel hybrid framework that combines the complementary strengths of APE and OPRO, achieving notably better cost-efficiency (an approximately $18\%$ improvement over OPRO) without sacrificing performance. We benchmark APE-OPRO alongside both gradient-free (APE, OPRO) and gradient-based (ProTeGi) methods on a dataset of ~2,500 labeled products.
Our results highlight key trade-offs: ProTeGi offers the strongest absolute performance at lower API cost but at higher computational time, as noted in~\cite{protegi}, while APE-OPRO strikes a compelling balance between performance, API efficiency, and scalability.
We further conduct ablation studies on depth and breadth hyperparameters and reveal notable sensitivity to label formatting, pointing to implicit format dependence in LLM behavior. These findings provide actionable insights for implementing APO in commercial applications and establish a foundation for future research in multi-label, vision, and multimodal prompt optimization scenarios.
Supplementary Material: pdf
Submission Number: 4