ReplicatorBench: Benchmarking LLM Agents on Replicability Studies in Social and Behavioral Sciences

Published: 28 Apr 2026, Last Modified: 28 Apr 2026 · MSLD 2026 Poster · CC BY 4.0
Keywords: benchmarks, LLM agents, replicability studies, computational social science
TL;DR: We introduce ReplicatorBench, an end-to-end benchmark including human-verified replicable and non-replicable research claims in social and behavioral sciences, for evaluating AI agents in research replication.
Abstract: The literature has witnessed growing interest in developing and evaluating AI agents for the automated assessment of research claims in scientific papers. Existing benchmarks focus primarily on the computational aspect of this task, testing agents' ability to reproduce or replicate research outcomes when given access to the code and data. This setting, while foundational, (1) fails to capture the inconsistent availability of new data for replication, as opposed to reproduction, and (2) lacks ground-truth diversity by focusing exclusively on fully reproducible or replicable papers, thereby failing to evaluate an agent's ability to identify non-replicable research. Furthermore, most benchmarks evaluate only the final reproducibility or replicability outcomes without assessing the process. In response, we introduce ReplicatorBench, an end-to-end benchmark of human-verified replicable and non-replicable research claims in the social and behavioral sciences, for evaluating AI agents in research replication across three stages: (1) extraction of relevant information and retrieval of replication data; (2) design and execution of computational experiments; and (3) interpretation of replication results. Together, these stages test AI agents' capability to mimic the activities of human replicators in the real world. To establish a baseline of AI agents' capabilities, we develop ReplicatorAgent, an agentic framework equipped with tools such as web search and iterative interaction with sandboxed environments, to accomplish the tasks in ReplicatorBench. We evaluate ReplicatorAgent across four underlying large language models (LLMs), as well as different choices of programming language and levels of code access. Our findings reveal that while current LLM agents can effectively design and execute computational experiments, they struggle to retrieve the resources, such as new data, needed to replicate a claim. All code and data are publicly available in our repository at: https://github.com/CenterForOpenScience/llm-benchmarking.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 10