Can LLMs Reason About Program Semantics? A Comprehensive Evaluation of LLMs on Formal Specification Inference
Abstract: Large Language Models (LLMs) are increasingly being used to automate programming tasks.
Yet, LLMs' capabilities in reasoning about program semantics remain insufficiently studied, leaving substantial room for further exploration.
This paper introduces FormalBench, a comprehensive benchmark designed to evaluate LLMs' reasoning abilities on program semantics, particularly via the task of synthesizing formal program specifications to assist in verifying program correctness. This task requires both comprehensive reasoning over all possible program executions (i.e., completeness) and the generation of precise, syntactically correct expressions that adhere to formal syntax and semantics (i.e., consistency). Using this benchmark, we evaluate LLMs' ability to synthesize consistent and complete specifications. Our findings show that LLMs perform well on simple control flows but struggle with more complex structures, especially loops, even with advanced prompting. Additionally, LLMs exhibit limited robustness against semantic-preserving transformations. We also highlight common failure patterns and design self-repair prompts, improving success rates by 25\%.
FormalBench is packaged as a pip-installable library and will be released upon publication. An early-access version is available at \url{https://anonymous.4open.science/r/FormalBench-6C2F/README.md}.
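To make the specification-inference task concrete, below is a minimal, hypothetical example (not taken from FormalBench) of a Java method annotated with JML specifications. Given only the method body, the model must synthesize the \texttt{//@}-style annotations so that they are consistent (well-formed, verifier-accepted JML) and complete (strong enough to prove the method correct), with the loop invariant and variant illustrating why loops are the hard case.

```java
// Hypothetical illustration of the target output of specification inference:
// a Java method whose JML pre/postconditions and loop annotations must be
// synthesized by the model. Identifiers here are illustrative, not from the benchmark.
public class ContainsExample {

    /*@ requires a != null;
      @ ensures \result <==> (\exists int k; 0 <= k && k < a.length; a[k] == x);
      @*/
    public static boolean contains(int[] a, int x) {
        /*@ loop_invariant 0 <= i && i <= a.length;
          @ loop_invariant !(\exists int k; 0 <= k && k < i; a[k] == x);
          @ decreases a.length - i;
          @*/
        for (int i = 0; i < a.length; i++) {
            if (a[i] == x) {
                return true;   // witness found; postcondition's existential holds
            }
        }
        return false;          // invariant at exit rules out any witness
    }
}
```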
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Large Language Models, Reasoning, Specification Inference, Formal Verification
Contribution Types: Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: Java Modeling Language (JML)
Submission Number: 1724