Keywords: program reasoning, program semantics, type inference
TL;DR: We present a novel pair of benchmarks to evaluate the fundamental deductive reasoning abilities of test-time compute reasoning models on program semantics.
Abstract: Large Language Models (LLMs) are increasingly integrated into the software engineering ecosystem.
Their test-time compute reasoning capabilities promise significant potential in understanding program logic and semantics beyond mere token recognition.
However, current benchmarks for evaluating reasoning LLMs on code lack a formal,
program-centric deductive framework that would ensure sound evaluation,
and therefore cannot assess whether models genuinely reason about program semantics or merely exploit superficial associations between natural-language and code tokens.
To bridge this gap, we introduce TF-Bench,
a benchmark designed to evaluate LLM reasoning based on type inference in System F,
a task we refer to as *program semantics reasoning*.
By employing verified transformations to remove semantically irrelevant natural language,
we construct TF-Bench_pure, a purely semantics-driven variant of TF-Bench.
Our analysis reveals substantial limitations in state-of-the-art LLMs,
with the best-performing LLM (Claude-3.7-sonnet) achieving only $55.85\%$ accuracy on TF-Bench_pure.
Additionally, we propose two novel metrics to assess the robustness and effectiveness of extended reasoning,
underscoring critical limitations in current LLM capabilities and highlighting essential directions for future research.
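To make the task concrete, the sketch below illustrates the kind of type-inference query described in the abstract, written in Haskell-style syntax. The function names, bodies, and renaming scheme here are hypothetical illustrations under our own assumptions, not items drawn from TF-Bench.

```haskell
module Main where

-- Hypothetical TF-Bench-style query (illustrative only): given the
-- definition below, the model must infer the most general polymorphic type
-- of `compose2`.
compose2 f g x y = f (g x y)
-- Expected answer: compose2 :: (c -> d) -> (a -> b -> c) -> a -> b -> d

-- A TF-Bench_pure-style variant: identifiers are alpha-renamed so that no
-- natural-language cues remain, and the type must be recovered from the
-- program's structure alone (the exact renaming scheme is an assumption).
f1 v1 v2 v3 v4 = v1 (v2 v3 v4)
-- f1 :: (c -> d) -> (a -> b -> c) -> a -> b -> d

main :: IO ()
main = return ()
```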
Dataset URL: https://huggingface.co/datasets/SecLabUCD/TF-Bench
Code URL: https://github.com/SecurityLab-UCD/TF-Bench
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 417