metaTextGrad: Learning to learn with language models as optimizers

Published: 10 Oct 2024 · Last Modified: 19 Nov 2024 · AFM 2024 Poster · CC BY 4.0
Keywords: programming models, prompting techniques, meta learning
TL;DR: We propose metaTextGrad, a meta-learning approach to inference-time optimization in LLMs that achieves significant performance gains across benchmarks through learned loss functions and initializations.
Abstract: Large language models (LLMs) are increasingly used in learning algorithms, evaluations, and optimization tasks. Recent studies have shown that incorporating self-criticism into LLMs can significantly enhance model performance, with frameworks such as TextGrad illustrating this approach by iteratively refining model outputs through prompting. However, these frameworks often require extensive hand-crafting and are sensitive to instruction wording. To mitigate these challenges, we propose metaTextGrad, a meta-learning approach for LLM-based optimizers, focusing on learning loss functions and templates for inference-time optimization. Our method significantly improves performance across multiple benchmarks, achieving 5-27% gains on question-answering tasks. These results demonstrate the potential of meta-learning to enhance LLM-based systems, reducing manual tuning and improving generalizability.
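To make the idea concrete, the sketch below illustrates one way a meta-learned textual loss could work: an inner TextGrad-style loop critiques and revises an answer using a candidate loss prompt, and an outer loop keeps whichever prompt yields the best score on held-out examples. This is a minimal sketch under assumptions not spelled out in the abstract, not the authors' implementation; `call_llm`, `refine`, `score`, `meta_optimize`, and the candidate prompts are hypothetical placeholders.

```python
# Hypothetical sketch of a metaTextGrad-style two-level loop.
# `call_llm` stands in for any chat-completion client; the loss prompts,
# scoring function, and loop structure are illustrative, not the paper's code.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., an API or local model client)."""
    raise NotImplementedError("plug in your own model client here")


def refine(answer: str, question: str, loss_prompt: str, steps: int = 2) -> str:
    """Inner loop: TextGrad-style critique-and-revise driven by a textual loss."""
    for _ in range(steps):
        critique = call_llm(
            f"{loss_prompt}\n\nQuestion: {question}\nAnswer: {answer}"
        )
        answer = call_llm(
            "Revise the answer using this feedback.\n"
            f"Question: {question}\nAnswer: {answer}\nFeedback: {critique}"
        )
    return answer


def score(answer: str, reference: str) -> float:
    """Placeholder task metric, e.g., exact match or an LLM judge."""
    return float(answer.strip().lower() == reference.strip().lower())


def meta_optimize(candidate_loss_prompts, val_set):
    """Outer loop: keep the loss prompt whose refined answers score best."""
    best_prompt, best_score = None, float("-inf")
    for loss_prompt in candidate_loss_prompts:
        total = 0.0
        for question, reference in val_set:
            draft = call_llm(f"Answer concisely: {question}")
            total += score(refine(draft, question, loss_prompt), reference)
        avg = total / len(val_set)
        if avg > best_score:
            best_prompt, best_score = loss_prompt, avg
    return best_prompt
```

Per the TL;DR above, the full method meta-learns not only the loss function but also initializations (prompt templates) for the optimizer; this sketch shows only the loss-selection portion.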
Submission Number: 61