TL;DR: We synthesize a dataset for offline alignment that systematically makes code LLMs generate more secure code without harming correctness.
Abstract: While recent code-specific large language models (LLMs) have greatly enhanced code generation capabilities, the safety of these models remains under-explored, posing potential risks: insecure code generated by these models may introduce vulnerabilities into real-world systems. Existing methods mitigate this issue by collecting security-focused instruction-tuning datasets from real-world vulnerabilities. However, they are largely constrained by the data sparsity of vulnerable code and have limited applicability in the multi-stage post-training workflows of modern LLMs. In this paper, we propose ProSec, a novel proactive security alignment approach designed to align code LLMs with secure coding practices. ProSec systematically exposes the vulnerabilities of a code LLM by synthesizing vulnerability-inducing coding scenarios from Common Weakness Enumerations (CWEs) and generating fixes to the resulting vulnerable code snippets, allowing the model to learn secure practices through preference learning objectives. The scenarios synthesized by ProSec trigger 25× more vulnerable code than a normal instruction-tuning dataset, resulting in a security-focused alignment dataset 7× larger than that of previous work. Experiments show that models trained with ProSec are 25.2% to 35.4% more secure than models trained with previous approaches, without degrading the models' utility.
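To make the preference-learning setup concrete, below is a minimal, hypothetical sketch (not the authors' pipeline or released data format) of how a single alignment record could pair a model's insecure completion with a secure fix for a CWE-derived coding scenario; the example prompt, field names, and JSON-lines output are illustrative assumptions.

```python
# Minimal sketch (illustrative, not ProSec's actual implementation) of a
# preference-learning record: a vulnerability-inducing coding scenario is paired
# with a secure fix (chosen) and the model's original insecure code (rejected),
# the usual input shape for offline alignment objectives such as DPO.
from dataclasses import dataclass, asdict
import json

@dataclass
class PreferencePair:
    prompt: str    # coding scenario synthesized from a CWE
    chosen: str    # secure implementation (fixed code)
    rejected: str  # insecure implementation originally produced by the model

# Hypothetical example for CWE-89 (SQL injection).
pair = PreferencePair(
    prompt="Write a Python function that looks up a user row by username.",
    chosen=(
        "def get_user(conn, username):\n"
        "    cur = conn.execute('SELECT * FROM users WHERE name = ?', (username,))\n"
        "    return cur.fetchone()\n"
    ),
    rejected=(
        "def get_user(conn, username):\n"
        "    cur = conn.execute(f\"SELECT * FROM users WHERE name = '{username}'\")\n"
        "    return cur.fetchone()\n"
    ),
)

# One JSON-lines record per pair, ready for an off-the-shelf preference trainer.
print(json.dumps(asdict(pair)))
```

Records in this shape can be consumed by standard preference-optimization trainers; the actual scenario synthesis and fix-generation steps are described in the paper and the linked repository.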
Lay Summary: AI models are powerful at writing code but may produce insecure code that is vulnerable to attackers. Existing methods rely on scarce real-world bug examples, limiting their coverage. Our system, ProSec, automatically generates realistic and diverse coding tasks where models tend to write insecure code, then creates paired secure and insecure implementations to teach models to generate secure code. This process yields over 20× more vulnerable samples and a dataset 7× larger than previous efforts. Models trained with ProSec are 25–35% more secure without losing their coding performance.
Link To Code: https://github.com/PurCL/ProSec
Primary Area: Deep Learning->Large Language Models
Keywords: code language model, code generation safety, alignment training
Flagged For Ethics Review: true
Submission Number: 2668