The More You Automate, the Less You See: The Hidden Pitfalls of AI Scientist Systems

Published: 24 Sept 2025, Last Modified: 15 Oct 2025
Venue: NeurIPS 2025 AI4Science (Spotlight)
License: CC BY 4.0
Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: AI scientist system, AI for science, Responsible AI, Agentic AI, Automated research
TL;DR: We diagnose key empirical failures in prominent AI scientist systems and propose design safeguards to ensure more trustworthy and responsible scientific automation.
Abstract: AI scientist systems, capable of autonomously executing the full research workflow from hypothesis generation and experimentation to paper writing, hold significant potential to accelerate scientific discovery. However, the internal workflows of these systems are often not closely examined. In this paper, we identify four potential failure modes in contemporary AI scientist systems: inappropriate benchmark selection, data leakage, metric misuse, and positive result bias. To examine these risks, we design controlled experiments that isolate each failure mode while addressing challenges unique to evaluating AI scientist systems. Our assessment of two prominent open-source AI scientist systems reveals the presence of these vulnerabilities, which can be easily overlooked in practice. We conclude with concrete recommendations for mitigating these risks, specifically that scientific journals and conferences require the submission of trace logs and code for the entire automated research process to ensure transparency and accountability.
Submission Number: 308
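
To make the data leakage failure mode named in the abstract concrete, the following minimal Python sketch (illustrative only, not taken from the paper; the dataset, models, and variable names are hypothetical) shows how an automated pipeline that fits preprocessing on the full dataset before splitting lets test-set statistics leak into training, contrasted with the safe pattern of fitting preprocessing on the training split alone.

# Illustrative sketch of the "data leakage" failure mode (hypothetical example,
# not code from the paper or from any AI scientist system under study).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Leaky pattern: the scaler is fit on ALL rows, so test-set statistics
# (mean, variance) influence the transformed training data.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)

# Safe pattern: split first, then fit the scaler on the training split only
# and apply the fitted transform to both splits.
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr2)
X_tr2, X_te2 = scaler.transform(X_tr2), scaler.transform(X_te2)

for name, (a, b, c, d) in {
    "leaky": (X_tr, y_tr, X_te, y_te),
    "safe": (X_tr2, y_tr2, X_te2, y_te2),
}.items():
    acc = LogisticRegression().fit(a, b).score(c, d)
    print(f"{name} pipeline test accuracy: {acc:.3f}")

The difference can be subtle in a toy setting, which is exactly why such leakage is easy to overlook when the entire pipeline is generated and executed autonomously; trace logs and code, as the abstract recommends, are what make this pattern auditable after the fact.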