Overcoming Missing Label Vocabulary in Black-Box Discrete Prompt Learning

ICLR 2025 Conference Submission 1591 Authors

18 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Prompt learning, LLM
Abstract: Large language models (LLMs) have transformed natural language processing. While their scale makes fine-tuning for downstream tasks challenging, prompt engineering offers a scalable, cost-effective way to optimize their performance. Black-box prompt learning is crucial for leveraging the generative abilities of LLMs, especially in the Language-Model-as-a-Service scenario, where parameters and gradients are inaccessible. LLMs generate output exclusively as encoded tokens processed through their backbone network. Existing black-box prompt learning methods optimize prompts using outputs that correspond to a predefined label vocabulary, a small subset of the LLM's token vocabulary. However, in real-world applications, some datasets lack a specific label vocabulary, and even manually assigned labels may perform inconsistently across different LLMs. To address these challenges, we propose a novel label-vocabulary-free black-box discrete prompt learning method. Our approach employs an alternating optimization strategy to jointly learn discrete prompt tokens and a learnable matrix that directly maps the LLM outputs over the token vocabulary to categories. We provide theoretical convergence guarantees for our method under standard assumptions, ensuring its reliability. Experiments show that our method effectively learns prompts and outperforms existing baselines on datasets without a label vocabulary.
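To make the alternating scheme in the abstract concrete, below is a minimal Python sketch under stated assumptions, not the authors' actual implementation. It assumes the black-box service exposes a single call (the hypothetical `llm_probs`) returning next-token probabilities over the full token vocabulary; the matrix W sits outside the black box, so it can be fit locally by gradient descent on cross-entropy, while the discrete prompt, whose gradients are inaccessible, is updated by a simple accept/reject token edit (`mutate`) standing in for the paper's prompt optimizer. All function names and the optimization details here are illustrative assumptions.

```python
# Hypothetical sketch of label-vocabulary-free alternating optimization.
# Assumptions (not from the paper): `llm_probs` is a placeholder for the
# LLM-as-a-Service call; prompt search uses random single-token edits.
import numpy as np

rng = np.random.default_rng(0)
V, C = 32000, 2            # token vocabulary size, number of categories

def llm_probs(prompt, x):  # placeholder for the black-box LLM call
    z = rng.standard_normal(V)
    return np.exp(z) / np.exp(z).sum()

def predict(W, prompt, x):
    p = llm_probs(prompt, x)           # (V,) distribution over tokens
    return np.argmax(W @ p)            # map vocabulary mass to C classes

def fit_W(prompt, data, steps=100, lr=0.1):
    """Fix the prompt; fit the vocabulary-to-category matrix W by gradient
    descent on softmax cross-entropy. W is local, so its gradients are
    available even though the LLM itself is a black box."""
    W = np.zeros((C, V))
    P = np.stack([llm_probs(prompt, x) for x, _ in data])   # (N, V)
    y = np.array([label for _, label in data])
    for _ in range(steps):
        logits = P @ W.T                                    # (N, C)
        q = np.exp(logits - logits.max(1, keepdims=True))
        q /= q.sum(1, keepdims=True)
        q[np.arange(len(y)), y] -= 1.0                      # dCE/dlogits
        W -= lr * (q.T @ P) / len(y)
    return W

def mutate(prompt):        # random single-token edit (illustrative only)
    i = int(rng.integers(len(prompt)))
    return prompt[:i] + [int(rng.integers(V))] + prompt[i + 1:]

def alternating_opt(data, rounds=10, prompt_len=5):
    prompt = [int(t) for t in rng.integers(V, size=prompt_len)]
    for _ in range(rounds):
        W = fit_W(prompt, data)        # step 1: update W, prompt fixed
        cand = mutate(prompt)          # step 2: update prompt, W fixed
        acc = lambda p: np.mean([predict(W, p, x) == y for x, y in data])
        if acc(cand) >= acc(prompt):
            prompt = cand
    return prompt, W

if __name__ == "__main__":
    toy = [("text-a", 0), ("text-b", 1)] * 4               # dummy data
    prompt, W = alternating_opt(toy)
    print("learned prompt tokens:", prompt)
```

Because W reads the full token distribution rather than a handful of hand-picked label words, no label vocabulary is needed; the alternation mirrors the paper's strategy of interleaving a locally solvable continuous subproblem (W) with a black-box discrete search (the prompt).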
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1591