From Model to Breach: Towards Actionable LLM-Generated Vulnerabilities Reporting

ICLR 2026 Conference Submission 18683 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM, Cybersecurity, Code Generation, Coding LLMs
TL;DR: Even recent Coding LLMs fail on established vulnerability benchmarks due to the utility–safety tradeoff. We propose methods to prioritize vulnerabilities and introduce new measures to compare models.
Abstract: As Large Language Model (LLM)-based coding assistants become more central to software development, the bugs they generate matter more to the overall cybersecurity landscape. While a number of LLM code-security benchmarks have been proposed, alongside approaches to improve the security of generated code, it remains unclear to what extent they have influenced widely used coding LLMs. Here, we show that even the latest open-weight models still produce vulnerable code in the earliest reported vulnerability scenarios under realistic use, suggesting that the safety-functionality trade-off has so far prevented effective patching of these vulnerabilities. To help address this issue, we introduce Prompt Exposure (PE), a severity metric that reflects the risk posed by an LLM-generated vulnerability by accounting for the vulnerability's severity, the chance it is generated, and the formulation of the prompt that induces the vulnerable code. To encourage mitigation of the most serious and prevalent vulnerabilities, we build on PE to define the Model Exposure (ME) score, which summarizes the severity and prevalence of the vulnerabilities a model generates.
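The abstract describes PE as combining vulnerability severity, generation chance, and prompt formulation, with ME aggregating PE across scenarios, but does not give the exact formulas. The sketch below is therefore only an illustration of how such scores could be computed; the multiplicative weighting, the `[0, 1]` normalizations, and the mean-based aggregation are assumptions, not the paper's definitions.

```python
# Illustrative sketch only: PE combines a vulnerability's severity, how often
# the model generates it, and how plausible the inducing prompt is; ME
# aggregates PE over a benchmark. The exact formulas are not stated in the
# abstract, so the weighting below is an assumption.
from dataclasses import dataclass


@dataclass
class VulnScenario:
    cwe: str              # vulnerability class, e.g. "CWE-89" (SQL injection)
    severity: float       # normalized severity in [0, 1], e.g. CVSS score / 10
    gen_rate: float       # fraction of sampled completions that are vulnerable
    prompt_weight: float  # how natural the inducing prompt is in real use, in [0, 1]


def prompt_exposure(s: VulnScenario) -> float:
    """Assumed PE: risk grows with severity, generation chance, and
    the plausibility of the prompt that triggers the vulnerable code."""
    return s.severity * s.gen_rate * s.prompt_weight


def model_exposure(scenarios: list[VulnScenario]) -> float:
    """Assumed ME: mean PE across benchmark scenarios, summarizing the
    severity and prevalence of vulnerabilities a model generates."""
    return sum(prompt_exposure(s) for s in scenarios) / len(scenarios)


if __name__ == "__main__":
    # Hypothetical benchmark entries with made-up measurements.
    benchmark = [
        VulnScenario("CWE-89", severity=0.98, gen_rate=0.40, prompt_weight=0.9),
        VulnScenario("CWE-798", severity=0.75, gen_rate=0.15, prompt_weight=0.6),
    ]
    for s in benchmark:
        print(f"{s.cwe}: PE = {prompt_exposure(s):.3f}")
    print(f"ME = {model_exposure(benchmark):.3f}")
```

Under this assumed form, a high-severity vulnerability that a model emits often from an everyday prompt dominates the ME score, which matches the abstract's stated goal of prioritizing the most serious and prevalent vulnerabilities.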
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 18683