Abstract: Large language models (LLMs) have transformed code generation.
However, most existing approaches focus on mainstream languages such as Python and Java, neglecting Solidity, the predominant programming language for Ethereum smart contracts.
Due to the lack of adequate benchmarks for Solidity, LLMs' ability to generate secure, cost-effective smart contracts remains unexplored.
To fill this gap, we construct SolEval, the first repository-level benchmark for Solidity smart contract generation, designed to evaluate the performance of LLMs on Solidity.
SolEval consists of 1,507 samples drawn from 28 repositories spanning 6 popular domains, providing a comprehensive evaluation benchmark for LLMs.
Unlike the existing Solidity benchmark, SolEval not only includes complex function calls but also reflects the real-world complexity of the Ethereum ecosystem by incorporating the Gas@k and Vul@k metrics.
We evaluate 16 LLMs on SolEval, and our results show that the best-performing LLM achieves only 26.29% Pass@10, highlighting substantial room for improvement in Solidity code generation by LLMs.
Additionally, we conduct supervised fine-tuning (SFT) on Qwen-7B using SolEval; Pass@5 increases from 16.67% to 58.33%, demonstrating the effectiveness of fine-tuning LLMs on our benchmark.
We release our data and code at https://anonymous.4open.science/r/SolEval-1C06/.
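As a reference for the Pass@k numbers reported above, the sketch below shows the standard unbiased Pass@k estimator (Chen et al., 2021), which benchmarks of this kind typically use; the function name pass_at_k and the example counts are illustrative assumptions, not taken from the SolEval pipeline.

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of which pass the tests) passes.
    n: total generated samples per problem
    c: number of passing samples
    k: sample budget
    """
    if n - c < k:  # every size-k draw necessarily contains a passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative usage (hypothetical numbers, not results from the paper):
print(pass_at_k(n=20, c=3, k=10))  # ~0.89

Per-problem scores computed this way are then averaged over the benchmark to obtain the reported Pass@k.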
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, automatic evaluation, code generation and understanding, generative models
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: Solidity, English
Keywords: benchmarking, automatic evaluation, code generation and understanding, generative models
Submission Number: 1669