Task Abstention for Large Language Models in Code Generation

ACL ARR 2026 January Submission 7119 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Code Generation, Abstention, Theoretical Guarantee
Abstract: Large language models (LLMs) have revolutionized automated code generation. One serious concern, however, is "hallucination": LLMs may generate seemingly plausible but functionally incorrect code. In this paper, we study the task abstention problem, i.e., determining whether a given LLM should abstain from a specific code generation task to avoid likely hallucination. Our approach features a calibrated abstention rule grounded in the principles of multiple hypothesis testing. The rule assesses generation consistency through code execution outcomes, allowing it to handle the syntactic diversity of semantically equivalent code without relying on oracle test cases or external databases. We prove that our approach provides a rigorous, distribution-free theoretical guarantee on its abstention decisions. We evaluate our method on benchmark datasets using several open-source code LLMs. Results show that, compared to existing techniques, our method lets generative models more accurately and efficiently identify and abstain from tasks that induce hallucination, providing a reliable mechanism for safer and more robust code generation.
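To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of execution-consistency-based abstention: sample several candidate programs, run each on shared inputs, cluster candidates by their execution outcomes, and abstain when no cluster reaches a consistency threshold. The function name `solve`, the threshold `tau`, and the clustering scheme are illustrative assumptions; the paper's actual rule calibrates the threshold via multiple hypothesis testing, which is not reproduced here.

```python
from collections import Counter


def execution_signature(code: str, inputs):
    """Run a candidate program on shared inputs; the tuple of outputs
    acts as a semantic fingerprint, so syntactically different but
    equivalent programs collapse onto the same signature."""
    namespace = {}
    exec(code, namespace)           # assumption: each candidate defines `solve`
    f = namespace["solve"]
    outs = []
    for x in inputs:
        try:
            outs.append(repr(f(x)))
        except Exception:
            outs.append("<error>")  # crashing candidates share an error marker per input
    return tuple(outs)


def should_abstain(candidates, inputs, tau=0.5):
    """Abstain when the largest execution-equivalence cluster covers
    less than a fraction `tau` of the samples. In the paper, `tau`
    would come from calibration; here it is a fixed toy value."""
    sigs = [execution_signature(c, inputs) for c in candidates]
    top_cluster_size = Counter(sigs).most_common(1)[0][1]
    return top_cluster_size / len(sigs) < tau


# Two semantically equivalent candidates and one incorrect one:
candidates = [
    "def solve(x):\n    return x * 2",
    "def solve(x):\n    return x + x",   # same behavior, different syntax
    "def solve(x):\n    return x ** 2",  # functionally different
]
inputs = [1, 3, 5]
print(should_abstain(candidates, inputs, tau=0.7))  # largest cluster = 2/3 < 0.7
```

Note that clustering by execution outcome, rather than by token-level similarity, is what lets this kind of rule treat `x * 2` and `x + x` as the same answer without oracle test cases.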
Paper Type: Long
Research Area: Code Models
Research Area Keywords: Code Models
Contribution Types: NLP engineering experiment
Languages Studied: Python
Submission Number: 7119