Can Language Models Falsify? The Need for Inverse Benchmarking

Published: 08 Mar 2025, Last Modified: 11 Apr 2025 · SSI-FM Oral · CC BY 4.0
Keywords: code; self-repair; falsification
TL;DR: We test the ability of models to find counterexamples to incorrect solutions, where the counterexamples can be evaluated automatically using code execution
Abstract:

There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be refined iteratively over time. This process requires significant researcher effort, reasoning, and ingenuity. However, current benchmarks for LMs only assess their ability to generate solutions to problems. We argue that we also need benchmarks for the inverse task: creating counterexamples for subtly incorrect solutions. To show this, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, for which human experts were able to create counterexamples. Our analysis finds that the best reasoning agents, even o3-mini (high) with code execution feedback, can create counterexamples for fewer than 9% of the incorrect solutions in REFUTE, even though ratings indicate it can solve up to 48% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions, a capability that is crucial both for accelerating research and for enabling models to self-improve through reliable reflective reasoning.
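As a rough illustration of how such automatic evaluation might work (the file names and helper below are hypothetical, not the paper's actual harness): a candidate input counts as a counterexample if it satisfies the problem's constraints and makes the incorrect submission's output diverge from a correct reference solution.

    # Hypothetical sketch of counterexample verification; "validator.py",
    # "incorrect.py", and "correct.py" are illustrative placeholders.
    import subprocess

    def run_program(path: str, stdin: str, timeout: float = 5.0) -> str:
        """Run a Python program on the given stdin and return its stdout."""
        result = subprocess.run(
            ["python", path], input=stdin, capture_output=True,
            text=True, timeout=timeout,
        )
        return result.stdout.strip()

    def is_valid_counterexample(candidate_input: str) -> bool:
        # 1) The input must satisfy the problem's constraints.
        if run_program("validator.py", candidate_input) != "OK":
            return False
        # 2) The incorrect submission must disagree with a correct reference solution.
        wrong = run_program("incorrect.py", candidate_input)
        right = run_program("correct.py", candidate_input)
        return wrong != right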

Submission Number: 75