Keywords: LLMs, Interpreters, Virtual Machines, Program Execution, Programming language semantics, Small-step operational semantics
Abstract: As large language models (LLMs) excel at code reasoning, a natural question
arises: can an LLM execute programs (i.e., act as an interpreter) purely based on a
programming language’s formal semantics? If so, this would enable rapid prototyping
of new programming languages and language features. We study this question using
the imperative language IMP (a subset of C), formalized via small-step operational
semantics (SOS) and rewriting-based operational semantics (K-semantics). We
introduce three evaluation sets—Human-Written, LLM-Translated, and Fuzzer-Generated—whose
difficulty is controlled by code-complexity metrics spanning
the size, control-flow, and data-flow axes. Given a program and its semantics
formalized with SOS/K-semantics, models are evaluated on three tasks ranging
from coarse to fine: (1) final-state prediction, (2) semantic rule prediction, and (3)
execution trace prediction. To distinguish pretraining memorization from semantic
competence, we define two nonstandard semantics obtained through systematic
mutations of the standard rules. Across strong code/reasoning LLMs, performance
drops under nonstandard semantics despite high performance under the standard
one. We further find that (i) different models exhibit characteristic failure patterns,
(ii) most reasoning models perform exceptionally well on coarse-grained tasks that
involve reasoning about highly complex programs, often with loops nested more than
five levels deep, and, surprisingly, (iii) providing formal semantics helps on simple
programs but often hurts on more complex ones. Overall, the results show promise
that LLMs could serve as programming-language interpreters, but they also point to
a lack of robust semantic understanding.
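To make the formal setup concrete, the display below shows two standard small-step (SOS) rules for IMP, the assignment rule and the while-unrolling rule, in the usual textbook notation over configurations of a statement paired with a store. This is only an illustrative rendering; the paper's exact rule set, notation, and the mutations defining its nonstandard semantics may differ.

```latex
% Two representative small-step rules for IMP (textbook formulation).
% \sigma is the store; \mathcal{A}[\![a]\!]\sigma evaluates the
% arithmetic expression a in store \sigma.
\[
\langle x := a,\; \sigma \rangle \;\longrightarrow\;
\langle \mathtt{skip},\; \sigma[x \mapsto \mathcal{A}[\![a]\!]\sigma] \rangle
\]
\[
\langle \mathtt{while}\; b\; \mathtt{do}\; s,\; \sigma \rangle \;\longrightarrow\;
\langle \mathtt{if}\; b\; \mathtt{then}\; (s;\, \mathtt{while}\; b\; \mathtt{do}\; s)\;
\mathtt{else}\; \mathtt{skip},\; \sigma \rangle
\]
```

A nonstandard mutation in the spirit of the abstract might, for instance, alter which branch the unrolled conditional takes; a model relying on memorized C behavior would then predict the wrong successor configuration.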
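The three tasks can also be pictured against a reference small-step interpreter. The Python sketch below is a minimal, self-contained illustration under our own assumptions (a tuple-encoded AST; helper names `eval_aexp`, `step`, `trace`); it is not the paper's evaluation harness or its K-semantics tooling. `trace` yields the configuration sequence targeted by execution-trace prediction, and the environment of its final configuration is the answer to final-state prediction.

```python
# Minimal small-step interpreter sketch for an IMP-like language.
# The tuple-encoded AST and all names here are illustrative assumptions,
# not the paper's evaluation harness or its K-semantics artifacts.

def eval_aexp(a, env):
    """Evaluate an arithmetic expression in environment `env`."""
    op, *args = a
    if op == "num": return args[0]
    if op == "var": return env[args[0]]
    if op == "add": return eval_aexp(args[0], env) + eval_aexp(args[1], env)
    raise ValueError(f"unknown aexp: {op}")

def eval_bexp(b, env):
    """Evaluate a boolean expression in environment `env`."""
    op, *args = b
    if op == "le": return eval_aexp(args[0], env) <= eval_aexp(args[1], env)
    if op == "not": return not eval_bexp(args[0], env)
    raise ValueError(f"unknown bexp: {op}")

def step(stmt, env):
    """One SOS step <stmt, env> -> <stmt', env'>; None when stmt is skip."""
    op = stmt[0]
    if op == "skip":
        return None
    if op == "assign":            # <x := a, s> -> <skip, s[x := A[a]s]>
        _, x, a = stmt
        return ("skip",), {**env, x: eval_aexp(a, env)}
    if op == "seq":               # step the head, or discard a finished head
        _, s1, s2 = stmt
        if s1 == ("skip",):
            return s2, env
        s1p, envp = step(s1, env)
        return ("seq", s1p, s2), envp
    if op == "if":                # select a branch by evaluating the guard
        _, b, st, sf = stmt
        return (st if eval_bexp(b, env) else sf), env
    if op == "while":             # unroll into a conditional
        _, b, s = stmt
        return ("if", b, ("seq", s, stmt), ("skip",)), env
    raise ValueError(f"unknown stmt: {op}")

def trace(stmt, env):
    """Execution trace: every <stmt, env> configuration until skip."""
    configs = [(stmt, env)]
    while (nxt := step(*configs[-1])) is not None:
        configs.append(nxt)
    return configs

# Example program: x := 0; while x <= 2 do x := x + 1
prog = ("seq",
        ("assign", "x", ("num", 0)),
        ("while", ("le", ("var", "x"), ("num", 2)),
                  ("assign", "x", ("add", ("var", "x"), ("num", 1)))))
print(trace(prog, {})[-1][1])  # final state: {'x': 3}
```

Each transition in the trace corresponds to one fired rule, which is roughly the granularity at which semantic-rule prediction operates.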
Primary Area: datasets and benchmarks
Submission Number: 21690