Large Language Models for Code: Security Hardening and Adversarial Testing

Published: 23 Jun 2023, Last Modified: 12 Jul 2023, Venue: Deployable Generative AI
Keywords: large language models, code generation, security, prompt tuning
TL;DR: This work studies the security of LMs along two important axes: (i) security hardening, which enhances LMs' reliability in generating secure code, and (ii) adversarial testing, which evaluates LMs' security from an adversarial standpoint.
Abstract: Large language models (large LMs) are increasingly used to generate code. However, LMs lack awareness of security and are frequently found to produce unsafe code. This work studies the security of LMs along two important axes: (i) security hardening, which enhances LMs' reliability in generating secure code, and (ii) adversarial testing, which evaluates LMs' security from an adversarial standpoint. To address both, we propose a novel method called SVEN, which leverages continuous prompts to control LMs to generate secure or unsafe code. We optimize these continuous vectors by enforcing specialized loss terms on different code regions, using a high-quality dataset that we carefully curate. Our extensive evaluation shows that SVEN achieves strong security control and preserves functional correctness.
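To make the approach concrete, the sketch below shows how continuous prompts could be prepended to a frozen code LM and optimized with a loss restricted to security-relevant code regions. This is only a minimal illustration under assumed choices, not SVEN's actual implementation: the class name SecurityPrefixLM, the region_mask tensor, the property IDs, and the base model checkpoint are all hypothetical.

```python
# Minimal sketch (assumed PyTorch/HuggingFace setup): learnable continuous prompts
# steer a frozen code LM toward a security property, trained with a loss masked to
# security-relevant code regions. Names here are illustrative, not SVEN's code.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

class SecurityPrefixLM(nn.Module):
    def __init__(self, model_name="Salesforce/codegen-350M-multi", prefix_len=8):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        self.lm.requires_grad_(False)  # base LM stays frozen; only the prompts are trained
        dim = self.lm.config.hidden_size
        # one continuous prompt per property: index 0 = secure, index 1 = unsafe
        self.prefix = nn.Parameter(torch.randn(2, prefix_len, dim) * 0.02)
        self.prefix_len = prefix_len

    def forward(self, input_ids, property_id):
        tok_emb = self.lm.get_input_embeddings()(input_ids)              # (B, T, D)
        pre = self.prefix[property_id].unsqueeze(0).expand(tok_emb.size(0), -1, -1)
        inputs_embeds = torch.cat([pre, tok_emb], dim=1)                 # prepend prompt
        return self.lm(inputs_embeds=inputs_embeds).logits               # (B, P+T, V)

def region_masked_loss(logits, input_ids, region_mask, prefix_len):
    # Cross-entropy only on tokens inside security-relevant code regions,
    # so the continuous prompt is optimized on the code parts that matter.
    logits = logits[:, prefix_len:-1, :]          # predictions for tokens 1..T-1
    targets = input_ids[:, 1:]
    mask = region_mask[:, 1:].float()             # 1 = security-relevant token
    ce = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
    )
    return (ce * mask.reshape(-1)).sum() / mask.reshape(-1).sum().clamp(min=1.0)
```

In this reading, security hardening corresponds to conditioning generation on the "secure" prompt, while adversarial testing uses the "unsafe" prompt; because the base LM is frozen and only small prefix vectors are learned, functional correctness of ordinary completions is largely preserved.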
Submission Number: 8