Can LLMs Patch Security Issues?

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: We introduce a novel approach, Feedback-Driven Security Patching (FDSP), to enhance Large Language Models (LLMs) for fixing security issues in code.
Abstract: Large Language Models (LLMs) have shown impressive proficiency in code generation. Nonetheless, like human developers, these models may generate code that contains security vulnerabilities and flaws. Writing secure code remains a substantial challenge, as vulnerabilities often arise when programs interact with external systems or services, such as databases and operating systems. In this paper, we propose a novel approach, Feedback-Driven Security Patching (FDSP), in which LLMs receive feedback from a static code analysis tool (Bandit) and then generate potential solutions to resolve the reported security vulnerabilities. Each solution, together with the vulnerable code, is then sent back to the LLM for code refinement. Our approach shows a significant improvement over the baseline and outperforms existing approaches. Furthermore, we introduce a new dataset, PythonSecurityEval, collected from real-world scenarios on Stack Overflow, to evaluate LLMs' ability to generate secure code. Anonymized code and data are available at https://anonymous.4open.science/r/LLM-code-refine-4C34/.
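The sketch below illustrates the feedback loop the abstract describes, not the authors' reference implementation. It invokes the real Bandit CLI (`bandit -q -f json <file>`); `llm_generate` is a hypothetical placeholder for whatever LLM API is used, and the number of refinement rounds and the prompt wording are illustrative assumptions only.

```python
"""Minimal sketch of an FDSP-style loop: Bandit feedback -> LLM solutions -> LLM refinement."""
import json
import subprocess
import tempfile


def run_bandit(code: str) -> list[dict]:
    """Write the code to a temp file and return the issues Bandit reports."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["bandit", "-q", "-f", "json", path],
        capture_output=True, text=True,
    )
    return json.loads(result.stdout).get("results", [])


def llm_generate(prompt: str) -> str:
    """Hypothetical LLM call; swap in an actual model API."""
    raise NotImplementedError


def fdsp_patch(code: str, max_rounds: int = 3) -> str:
    """Refine `code` until Bandit reports no issues or the round budget is spent."""
    for _ in range(max_rounds):
        issues = run_bandit(code)
        if not issues:
            return code  # no remaining vulnerabilities detected
        feedback = "\n".join(issue["issue_text"] for issue in issues)
        # Ask the LLM for candidate fixes given the static-analysis feedback.
        solutions = llm_generate(
            f"Code:\n{code}\n\nBandit feedback:\n{feedback}\n\n"
            "Propose a fix for each reported vulnerability."
        )
        # Send each proposed solution back with the vulnerable code for refinement.
        code = llm_generate(
            f"Vulnerable code:\n{code}\n\nProposed fixes:\n{solutions}\n\n"
            "Return the refined, secure version of the code."
        )
    return code
```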
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: Python