RExBench: Can coding agents autonomously implement AI research extensions?

20 Sept 2025 (modified: 15 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: research agents, LLM agents, AI for science, coding benchmark
TL;DR: We introduce RExBench, a benchmark for evaluating the ability of coding agents to autonomously implement research extensions.
Abstract: Agents based on Large Language Models (LLMs) have shown promise for performing sophisticated software engineering tasks autonomously. In addition, there has been progress towards developing agents that can perform parts of the research pipeline in machine learning and the natural sciences. We argue that extending prior research and implementing those extensions is a critical capability for such systems, and introduce RExBench to support the evaluation of this capability. RExBench is a benchmark consisting of realistic extensions of 12 research papers, each aiming to investigate a novel research hypothesis. Each task is set up as an extension to an existing research paper and codebase, accompanied by domain expert-written instructions. RExBench is robust to data contamination, and supports an automatic evaluation infrastructure that executes agent outputs to determine whether the success criteria are met. We use this benchmark to evaluate 13 LLM agents implemented using three different frameworks: aider, Claude Code, and OpenHands. We find that all agents fail to autonomously implement the majority of the extensions, with the best-performing agent achieving a success rate of around 31%. Although the success rate improves with additional human-written hints, the best performance under this setting remains below 48%. This indicates that current agents are still short of being able to handle realistic research extension tasks without substantial human guidance. Based on analyses of prominent failure modes, we put forward actionable short- and long-horizon recommendations for future research coding agent development.
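The abstract describes an automatic evaluation infrastructure that executes agent outputs and checks whether expert-defined success criteria are met. As a rough illustration only (not the authors' actual implementation), a harness of this kind might look like the following sketch; the entry point `run_experiment.py`, the `results.json` output file, and the tolerance-based check are all hypothetical:

```python
# Illustrative sketch (not the RExBench implementation): run an agent-modified
# codebase and compare a reported metric against an expert-defined criterion.
import json
import subprocess
from pathlib import Path


def evaluate_submission(repo_dir: Path, expected: dict, tolerance: float = 0.01) -> bool:
    """Execute the extended codebase and compare its output metrics
    against values defined by the task author (hypothetical protocol)."""
    # Run the task's assumed entry point; the real benchmark's runner may differ.
    result = subprocess.run(
        ["python", "run_experiment.py", "--output", "results.json"],
        cwd=repo_dir, capture_output=True, text=True, timeout=3600,
    )
    if result.returncode != 0:
        return False  # The agent's output failed to execute.

    metrics = json.loads((repo_dir / "results.json").read_text())
    # Success: every expected metric is reproduced within the tolerance.
    return all(
        abs(metrics.get(name, float("inf")) - value) <= tolerance
        for name, value in expected.items()
    )
```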
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 22726