AuPair: Golden Example Pairs for Code Repair

ICLR 2025 Conference Submission 13725 Authors

28 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: LLM, Coding
TL;DR: Prompting with high-diversity examples of fixes boosts code-repair performance.
Abstract: Scaling up inference-time compute has proven to be a valuable strategy for improving the performance of Large Language Models (LLMs) on several tasks without any fine-tuning. One such task that can benefit from additional inference-time compute is self-repair: given an initial flawed response produced by the LLM, the model must correct its own mistake and produce an improved response. We propose leveraging the in-context learning capability exhibited by LLMs to aid self-repair. The key contribution of this paper is an approach to synthesise and select a golden set of pairs, each of which contains a problem, the initial guess produced by the LLM, and the subsequently generated fix. Each golden example pair, or AuPair, is then provided as an in-context example at inference time to generate a candidate repaired solution with 1-shot prompting; in line with best-of-$N$, the highest-scoring response is selected. Given an inference-time compute budget of $N$ LLM calls, our algorithm selects $N$ AuPairs in a manner that maximises complementarity and usefulness. We demonstrate the results of our algorithm in the coding domain for code repair with 4 LLMs across 7 competitive programming datasets. The AuPairs produced by our approach provide a significant boost in performance compared to best-of-$N$, and also exhibit strong generalisation across datasets and models. Moreover, our approach maintains strong performance as the inference-time compute budget $N$ is scaled up.
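To make the inference-time procedure described in the abstract concrete, here is a minimal Python sketch (not the authors' released code) of best-of-$N$ repair with AuPairs: each of the $N$ selected pairs seeds one 1-shot repair prompt, and the highest-scoring candidate is returned. The names `generate_fix` (an LLM call) and `score` (e.g. fraction of unit tests passed) are hypothetical stand-ins, and the prompt format is an assumption.

```python
from typing import Callable, List, Tuple

# An AuPair: (problem description, flawed initial guess, corrected fix).
AuPair = Tuple[str, str, str]

def build_prompt(aupair: AuPair, problem: str, guess: str) -> str:
    """Format a 1-shot repair prompt from a single golden example pair."""
    ex_problem, ex_guess, ex_fix = aupair
    return (
        f"Problem:\n{ex_problem}\n"
        f"Initial attempt:\n{ex_guess}\n"
        f"Fixed solution:\n{ex_fix}\n\n"
        f"Problem:\n{problem}\n"
        f"Initial attempt:\n{guess}\n"
        f"Fixed solution:\n"
    )

def repair_best_of_n(
    problem: str,
    guess: str,
    aupairs: List[AuPair],               # the N selected golden pairs
    generate_fix: Callable[[str], str],  # hypothetical LLM call: prompt -> candidate fix
    score: Callable[[str, str], float],  # hypothetical scorer: (problem, fix) -> score
) -> str:
    """Spend one LLM call per AuPair; return the highest-scoring candidate repair."""
    candidates = [generate_fix(build_prompt(pair, problem, guess)) for pair in aupairs]
    return max(candidates, key=lambda fix: score(problem, fix))
```

The compute budget here is exactly $N$ LLM calls, one per AuPair, which is what makes the comparison against plain best-of-$N$ sampling like-for-like.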
Primary Area: foundation or frontier models, including LLMs
Submission Number: 13725