MLRC-Bench: Can Language Agents Solve Machine Learning Research Challenges?

Published: 18 Sept 2025, Last Modified: 30 Oct 2025 · NeurIPS 2025 Datasets and Benchmarks Track poster · CC BY 4.0
Keywords: Large Language Model, Agentic AI, Automated Scientific Discovery
TL;DR: We present MLRC-Bench, a dynamic benchmark designed to rigorously assess how well language agents address ML research challenges with objective, performance-based evaluations.
Abstract: We introduce **MLRC-Bench**, a benchmark designed to quantify how effectively language agents can tackle challenging **M**achine **L**earning (ML) **R**esearch **C**ompetitions, with a focus on open research problems that demand novel methodologies. Unlike prior work such as AI Scientist, which evaluates the end-to-end agentic pipeline using LLM-as-a-judge, MLRC-Bench measures the key steps of proposing and implementing novel research methods and evaluates them under a rigorous protocol with objective metrics. Our curated suite of 7 competition tasks reveals significant challenges for LLM agents: even the best-performing tested agent (gemini-exp-1206 under MLAB) closes only 9.3% of the gap between baseline and top human participant scores. Furthermore, our analysis reveals a misalignment between the *LLM-judged* innovativeness of agent-proposed methods and their *actual* performance on cutting-edge ML research problems. MLRC-Bench is a dynamic benchmark, designed to grow continually with new ML competitions to encourage rigorous and objective evaluation of AI's research capabilities. Our leaderboard and code are publicly available at https://huggingface.co/spaces/launch/MLRC_Bench.
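To make the headline number concrete, the "gap closed" figure can be read as a relative-improvement score between the provided baseline and the top human participant. The formula below is a minimal sketch of one plausible formulation consistent with the abstract, not necessarily the paper's exact definition; the symbols $s_{\text{agent}}$, $s_{\text{baseline}}$, and $s_{\text{human}}$ are assumed names for the agent's score, the baseline score, and the top human score on a task.

```latex
% Hedged sketch: one plausible way to express "percentage of gap closed".
% s_agent, s_baseline, s_human are the agent, baseline, and top-human scores.
\[
  \text{GapClosed} \;=\;
  \frac{s_{\text{agent}} - s_{\text{baseline}}}
       {s_{\text{human}} - s_{\text{baseline}}} \times 100\%
\]
```

Under this reading, an agent that matches the baseline closes 0% of the gap and one that matches the top human closes 100%; the reported 9.3% indicates the best agent recovers only a small fraction of the human margin.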
Croissant File: json
Dataset URL: https://github.com/yunx-z/MLRC-Bench/
Code URL: https://github.com/yunx-z/MLRC-Bench
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 1443