LLM Program Optimization via Retrieval Augmented Search

ICLR 2026 Conference Submission 21117 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: program optimization, large language models, contextual retrieval
TL;DR: We propose two blackbox adaptation techniques for LLM-guided program optimization using contextual retrieval and beam search.
Abstract: With the advent of large language models (LLMs), there has been great interest in applying them to solve difficult programming tasks. Recent work has demonstrated their potential at program optimization, a key challenge in programming languages research. We propose a blackbox adaptation method called Retrieval Augmented Search (RAS) that performs beam search over candidate optimizations; at each step, it retrieves in-context examples from a given training dataset of slow-fast program pairs to guide the LLM. Critically, we find that performing contextual retrieval based on an LLM-generated natural language description significantly outperforms retrieval based on the source code. In addition, we propose a method called AEGIS for improving interpretability by decomposing training examples into "atomic edits" that are significantly more incremental in nature. We show that RAS performs up to 2.04$\times$ better than prior state-of-the-art blackbox adaptation strategies on optimizing C++ programs, and that AEGIS performs 1.37$\times$ better while making significantly smaller edits. We also show that RAS improves the mean runtime percentile of Python programs by 10.27 points compared to other strategies.
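To make the abstract's description of RAS concrete, the sketch below illustrates one plausible reading of the loop: beam search over candidate programs, where each expansion retrieves slow-fast exemplars by an LLM-generated natural language description rather than by source code. All helper interfaces here (`describe`, `retrieve`, `propose`, `measure`) are hypothetical placeholders standing in for an LLM description step, a retriever over the training pairs, an LLM edit proposal, and a benchmarking harness; none of these names come from the paper.

```python
"""Minimal sketch of a RAS-style loop, assuming injected helpers.

The paper's actual interfaces are not specified in this abstract;
every callable below is an illustrative placeholder.
"""
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Candidate:
    source: str     # current program text
    runtime: float  # measured runtime (lower is better)

def ras_beam_search(
    slow_program: str,
    describe: Callable[[str], str],                        # LLM: program -> NL description
    retrieve: Callable[[str, int], List[Tuple[str, str]]], # description -> slow-fast exemplars
    propose: Callable[[str, List[Tuple[str, str]]], str],  # LLM: program + exemplars -> edit
    measure: Callable[[str], float],                       # benchmark harness
    beam_width: int = 4,
    steps: int = 5,
    branch: int = 2,
    k: int = 2,
) -> Candidate:
    """Beam search over candidate optimizations with contextual retrieval."""
    beam = [Candidate(slow_program, measure(slow_program))]
    for _ in range(steps):
        expansions = []
        for cand in beam:
            # Key idea from the abstract: retrieve in-context exemplars by an
            # LLM-generated natural language description, not by source code.
            desc = describe(cand.source)
            examples = retrieve(desc, k)
            # Sample several candidate optimizations per beam entry.
            for _ in range(branch):
                new_src = propose(cand.source, examples)
                expansions.append(Candidate(new_src, measure(new_src)))
        # Keep only the fastest candidates for the next round.
        beam = sorted(beam + expansions, key=lambda c: c.runtime)[:beam_width]
    return beam[0]  # fastest program found
```

In this reading, `retrieve` would close over the training dataset of slow-fast program pairs, e.g., by embedding the generated description and doing nearest-neighbor search over descriptions of the training programs; the abstract reports that this description-based retrieval significantly outperforms retrieval over raw source code.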
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21117