EVM-QuestBench: An Execution-Grounded Benchmark for Natural-Language Transaction Code Generation

ACL ARR 2026 January Submission7114 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: benchmark, evaluation, execution grounded, code generation, LLM agents, tool use, blockchain, transactions, EVM
Abstract: Large language models are increasingly applied to software-development tasks. In on-chain transaction settings, however, even a minor error can cause irreversible loss for users, and existing evaluations often overlook execution accuracy and safety. We introduce EVM-QuestBench, an execution-grounded benchmark for natural-language transaction-script generation on EVM-compatible chains. The benchmark employs dynamic evaluation: instructions are sampled from template pools, numeric parameters are drawn from predefined intervals, and validators verify outcomes against these instantiated values. EVM-QuestBench contains 107 tasks (62 atomic, 45 composite), and its modular architecture enables rapid task development. We instantiate the benchmark on BNB Smart Chain (chain ID 56) and execute scripts on a forked chain with snapshot isolation; composite tasks apply step-efficiency decay. We evaluate 20 models and find large performance gaps, with split scores revealing a persistent asymmetry between single-action precision and multi-step workflow completion. Code: \url{https://anonymous.4open.science/r/bsc_quest_bench-A9CF/}.
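The dynamic-evaluation loop the abstract describes (template sampling, parameter draws from predefined intervals, outcome validation against instantiated values, and step-efficiency decay for composite tasks) can be sketched as follows. This is a minimal illustration, not the benchmark's implementation: the template strings, parameter ranges, and the decay factor `gamma` are all assumptions chosen for the example.

```python
import random

# Illustrative template pool and parameter interval (hypothetical values;
# the actual pools and intervals are defined per task in the benchmark).
TEMPLATES = [
    "Transfer {amount} BNB to {recipient}",
    "Swap {amount} BNB for USDT",
]
AMOUNT_RANGE = (0.01, 0.5)


def instantiate_task(rng: random.Random) -> dict:
    """Sample an instruction template and draw its numeric parameter.

    The validator later checks the on-chain outcome against the
    instantiated value stored in ``expected_amount``.
    """
    template = rng.choice(TEMPLATES)
    amount = round(rng.uniform(*AMOUNT_RANGE), 4)
    return {
        "instruction": template.format(amount=amount, recipient="0xRecipient"),
        "expected_amount": amount,
    }


def step_decay_score(passed: bool, steps: int, optimal: int,
                     gamma: float = 0.9) -> float:
    """Step-efficiency decay for composite tasks (gamma is an assumed
    decay factor): a passing run loses credit geometrically for each
    step beyond the optimal count; a failing run scores zero."""
    if not passed:
        return 0.0
    return gamma ** max(0, steps - optimal)
```

A run that completes a composite task in the optimal number of steps keeps full credit, while extra steps shrink the score geometrically; failed validations score zero regardless of step count.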
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation, evaluation methodologies, reproducibility, LLM agents, tool use, grounded agents, agent evaluation, environment interaction
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 7114