Autonomous Evaluation of LLMs for Truth Maintenance and Reasoning Tasks

Published: 15 Jun 2025, Last Modified: 07 Aug 2025
Venue: AIA 2025
License: CC BY 4.0
Keywords: Large Language Models, Logical Reasoning, Autoformalization, Informalization, Formal Translation, Truth Maintenance
TL;DR: An automatic and scalable benchmark for evaluating LLMs for truth maintenance w.r.t. formal syntax.
Abstract: This paper presents AutoEval, a novel benchmark for scaling Large Language Model (LLM) assessment on formal tasks with clear notions of correctness, such as truth maintenance in translation and logical reasoning. AutoEval is the first benchmarking paradigm that offers several key advantages necessary for scaling objective evaluation of LLMs without human labeling: (a) the ability to evaluate LLMs of increasing sophistication by auto-generating tasks at different levels of difficulty; (b) the auto-generation of ground truth, which eliminates dependence on expensive and time-consuming human annotation; (c) the use of automatically generated, randomized datasets that prevent successive LLMs from overfitting to the static datasets used in many contemporary benchmarks. Empirical analysis shows that an LLM's performance on AutoEval is highly indicative of its performance on a diverse array of other benchmarks focusing on translation and reasoning tasks, making it a valuable autonomous evaluation paradigm in settings where hand-curated datasets can be hard to obtain or update.
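To make the truth-maintenance idea concrete, the sketch below shows one way such an evaluation loop could look for propositional logic: a random formula is generated, sent through a (placeholder) LLM informalize-then-autoformalize round trip, and the result is checked for logical equivalence against the original. This is a hypothetical illustration under assumed names (`random_formula`, `equivalent`, `round_trip`), not the authors' implementation, which is described in the paper itself.

```python
# Illustrative AutoEval-style truth-maintenance check for propositional logic.
# The round_trip placeholder stands in for an LLM informalization followed by
# autoformalization; all names here are assumptions, not the paper's code.
import itertools
import random

VARS = ["p", "q", "r"]
OPS = ["and", "or"]


def random_formula(depth=2):
    """Auto-generate a random propositional formula as a Python expression string."""
    if depth == 0:
        return random.choice(VARS)
    left = random_formula(depth - 1)
    right = random_formula(depth - 1)
    if random.random() < 0.3:
        left = f"(not {left})"
    return f"({left} {random.choice(OPS)} {right})"


def equivalent(f1, f2, variables=VARS):
    """Check logical equivalence by enumerating every truth assignment."""
    for values in itertools.product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if eval(f1, {}, env) != eval(f2, {}, env):  # trusted, auto-generated strings
            return False
    return True


def round_trip(formula):
    """Placeholder for LLM informalization + autoformalization.
    A real harness would prompt the model under test twice and parse its output."""
    return formula  # identity stands in for a perfectly truth-maintaining model


if __name__ == "__main__":
    phi = random_formula()
    phi_hat = round_trip(phi)
    print(phi, "->", phi_hat, "| truth maintained:", equivalent(phi, phi_hat))
```

Because both the formula generator and the equivalence check are automatic, difficulty can be scaled (e.g., by increasing `depth` or the variable set) and fresh randomized instances can be drawn for every evaluation run, without any human-labeled ground truth.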
Paper Type: Previously Published Paper
Venue For Previously Published Paper: ICLR 2025
Submission Number: 14