Keywords: Large Reasoning Models; Evaluation; Asking for Information; Mathematical Reasoning; Benchmarks
TL;DR: Genuine AI should not only solve math quizzes (existing work) but also ask for information when problems are incomplete; we propose a new dataset, systematically evaluate this ability in LRMs, uncover failure modes, and show the challenges of fine-tuning.
Abstract: The recent development of Large Reasoning Models (LRMs) has demonstrated remarkable problem-solving abilities in mathematics, as measured by existing benchmarks that focus exclusively on well-defined problems. However, this evaluation setup leaves a critical gap: a genuinely intelligent agent should not only know how to solve problems (being a math quiz solver), but also know when to ask for information if a problem is underspecified, enabling it to respond proactively to users' requests. To bridge this gap, we propose a novel dataset consisting of two types of incomplete problems with diverse contexts. Based on this dataset, our systematic evaluation of LRMs reveals their inability to proactively ask for information. In addition, we uncover behaviors related to overthinking and hallucination in LRMs, and highlight the potential and challenges of supervised fine-tuning for learning this ability. We hope to provide new insights into developing LRMs with genuine intelligence, rather than mere problem solvers.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 17826