A Compute-Matched Re-Evaluation of TroVE on MATH

Published: 09 Jul 2025, Last Modified: 19 Jul 2025, AI4Math@ICML25 Poster, License: CC BY-NC-SA 4.0
Keywords: Library Learning, Program Synthesis, Toolbox Learning
TL;DR: We report a corrected version of TroVE, a toolbox-learning approach, compare it against a compute-matched Primitive baseline, and find that toolbox learning yields no benefit on MATH.
Abstract: Reusing established theorems and formulas is central to mathematical problem solving, serving as essential building blocks for tackling increasingly complex challenges. Recent work, TroVE, argues that code-generating Large Language Models (LLMs) can benefit similarly on the MATH benchmark by inducing and reusing higher-level toolboxes. By allocating computational budget across an ensemble of three modes -- directly generating code, creating tools, and reusing tools -- TroVE claims to outperform a Primitive baseline that only performs direct generation. However, recent analysis (Berlot-Attwell et al., 2024) casts doubt on these gains, noting that the tools created are often trivial or rarely reused, and suggesting that the improvements may instead stem from self-consistency or self-correction. In this work, we re-evaluate TroVE on MATH, analyze the impact of each of its modes, and show that its benefit comes not from these mechanisms but simply from the higher computational budget spent on TroVE compared to Primitive. To this end, we also correct a small error in the original implementation of TroVE's selection mechanism, boosting TroVE's accuracy on MATH by 3%. After matching for compute, TroVE's benefit reduces to a marginal 1% improvement, suggesting that this toolbox approach does not provide a significant advantage on MATH.
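For intuition, below is a minimal, hypothetical sketch of the comparison the abstract describes: a TroVE-style ensemble that splits its sampling budget across three generation modes and selects an answer by agreement, versus a Primitive baseline matched to the same total number of LLM calls. The generate interface, mode names, and sample counts are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (not the authors' code): a TroVE-style three-mode
    # ensemble versus a compute-matched Primitive (direct-generation) baseline.
    from collections import Counter
    from typing import Callable, List

    Mode = str  # assumed modes: "reuse" (import tools), "create" (make tools), "direct" (plain code)

    def select_by_agreement(answers: List[str]) -> str:
        """Pick the most frequent answer among the sampled solutions (majority vote)."""
        return Counter(answers).most_common(1)[0][0]

    def trove_style(problem: str, generate: Callable[[str, Mode], str],
                    samples_per_mode: int = 5) -> str:
        """Sample solutions under all three modes, then select one by agreement."""
        answers = [generate(problem, mode)
                   for mode in ("reuse", "create", "direct")
                   for _ in range(samples_per_mode)]
        return select_by_agreement(answers)

    def primitive_matched(problem: str, generate: Callable[[str, Mode], str],
                          total_samples: int = 15) -> str:
        """Compute-matched baseline: the same total budget, direct generation only."""
        answers = [generate(problem, "direct") for _ in range(total_samples)]
        return select_by_agreement(answers)

Setting total_samples equal to 3 x samples_per_mode equalizes the LLM-call budget across the two methods; it is under this kind of matching that the abstract reports the gap shrinking to roughly 1%.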
Supplementary Material: zip
Submission Number: 57