R2E: Turning Any GitHub Repository into a Programming Agent Environment

Published: 11 Mar 2024, Last Modified: 24 Apr 2024
Venue: LLMAgents @ ICLR 2024 Poster
License: CC BY 4.0
Keywords: Code Generation, Programming Agents, Real World Evaluation
TL;DR: Turning any GitHub repository into a programming agent test environment
Abstract: While the coding capabilities of Large Language Models have advanced rapidly, corresponding evaluation benchmarks for real-world programming setups have yet to catch up. Building a scalable, interactive testbed for evaluating general-purpose AI coding agents on real-world code has been challenging, particularly due to the lack of available high-quality test suites. In this paper, we present Repository to Environment (R2E), a framework that can turn any GitHub repository into a test environment for evaluating the performance of code-generating systems, both static and interactive. We instantiate our framework to construct R2E-Eval, the first large-scale benchmark of realistic environments for AI coding assistants. Our results demonstrate that even when SOTA models cannot generate correct solutions with advanced prompting techniques, they can effectively use environment feedback, highlighting the need to move from static functional coding to an interactive programming paradigm. We hope our framework (and instantiated dataset) can motivate new research directions by providing web-scale, open-ended coding testbeds.
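The core idea sketched in the abstract, loading a target function from a cloned repository and checking a model-generated candidate against the repository's reference implementation on a set of test inputs, could look roughly like the following. This is a minimal illustrative sketch, not the paper's actual harness; the helper names (`load_function`, `equivalence_test`) and the flat-file loading scheme are assumptions for illustration.

```python
import importlib.util
import pathlib


def load_function(repo_path, module_file, func_name):
    """Dynamically load a target function from a file in a cloned repository.

    Hypothetical helper: assumes the module is importable as a standalone file.
    """
    spec = importlib.util.spec_from_file_location(
        "target_module", pathlib.Path(repo_path) / module_file
    )
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, func_name)


def equivalence_test(reference_fn, candidate_fn, inputs):
    """Judge a candidate by output equivalence with the reference on test inputs."""
    for args in inputs:
        if candidate_fn(*args) != reference_fn(*args):
            return False
    return True
```

In this sketch, the repository's own function serves as the oracle, so no hand-written assertions are needed; the quality of the evaluation then depends on how the test inputs are generated.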
Submission Number: 93