LLM4EFFI: Leveraging Large Language Models to Enhance Code Efficiency and Correctness

ACL ARR 2025 May Submission 2303 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models have demonstrated impressive capabilities in generating syntactically and functionally correct code. However, most existing research has primarily focused on the correctness of generated code, while efficiency remains relatively underexplored. Recent efforts have attempted to enhance efficiency by refining the initially generated code. Nonetheless, such post hoc optimizations are inherently constrained by the original algorithmic design and overall logic, often yielding only marginal gains. In this work, we propose LLM4EFFI, a novel framework that enables LLMs to generate code that balances both efficiency and correctness. LLM4EFFI decomposes the efficiency optimization process into two distinct stages: algorithmic exploration at the logical level and implementation optimization at the code level. Correctness is subsequently ensured through an adaptive testing process based on synthetic test cases. By prioritizing efficiency early in the generation process and refining for correctness afterward, LLM4EFFI introduces a new paradigm for efficient code generation. Experimental results show that LLM4EFFI consistently improves both efficiency and correctness of generated code, achieving state-of-the-art performance on three code efficiency benchmarks across five diverse LLM backbones.
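For intuition, a minimal sketch of the two-stage workflow the abstract describes might look like the code below. The `llm` callable, the `solve(x)` naming convention, the prompt wording, and the passing of pre-generated synthetic tests as a parameter are all illustrative assumptions, not details of the LLM4EFFI implementation itself.

```python
import time
from typing import Callable, List, Tuple


def generate_efficient_code(task: str,
                            llm: Callable[[str], str],
                            tests: List[Tuple[object, object]],
                            n_algorithms: int = 3) -> str:
    """Return the fastest candidate implementation that passes all tests.

    Hypothetical sketch: efficiency is addressed first (algorithm exploration,
    then code-level implementation), correctness is checked afterward against
    synthetic test cases, mirroring the paradigm described in the abstract.
    """
    # Stage 1: algorithmic exploration at the logical level --
    # ask the model for several distinct algorithmic strategies.
    strategies = [
        llm(f"Propose algorithm idea #{i + 1} (with complexity analysis) for: {task}")
        for i in range(n_algorithms)
    ]

    # Stage 2: implementation optimization at the code level --
    # turn each strategy into efficiency-oriented Python code.
    # We assume each completion defines a function named `solve(x)`.
    candidates = [
        llm(f"Implement this idea as efficient Python defining solve(x):\n{s}")
        for s in strategies
    ]

    # Correctness check with synthetic test cases, then pick the fastest
    # candidate that passes all of them.
    best_code, best_time = None, float("inf")
    for code in candidates:
        namespace: dict = {}
        try:
            exec(code, namespace)               # compile the candidate
            solve = namespace["solve"]
            start = time.perf_counter()
            passed = all(solve(x) == y for x, y in tests)
            elapsed = time.perf_counter() - start
        except Exception:
            continue                            # discard broken candidates
        if passed and elapsed < best_time:
            best_code, best_time = code, elapsed

    if best_code is None:
        raise RuntimeError("no candidate passed the synthetic tests")
    return best_code
```

In this sketch the test cases are supplied by the caller so the function stays runnable on its own; in the framework described above they would instead be synthesized and refined adaptively.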
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: code models, LLM agents
Languages Studied: programming language, Python
Keywords: code generation, code optimization
Submission Number: 2303