Combining Foundation Models with Symbolic AI for Automated Detection and Mitigation of Code Vulnerabilities

25 Mar 2023 · AAAI 2023 Spring Symposium Series, EDGeS Submission
Keywords: Large Language Models, Foundation Models, Transformer Models, CODEX, ChatGPT, automatic code generation, code vulnerability detection
TL;DR: A framework for the automated generation of correct-by-construction code at scale by exploiting recent advances in Foundation Models
Abstract: With the increasing reliance on collaborative and cloud-based systems, attack surfaces and code vulnerabilities are growing drastically. Automation is key to fielding and defending software systems at scale. Researchers in Symbolic AI have had considerable success in finding flaws in human-written code, and run-time testing methods such as fuzzing uncover numerous bugs. However, both approaches share a major deficiency: they cannot fix the errors they discover. They also scale poorly and defy automation. Static analysis methods additionally suffer from the false-positive problem: an overwhelming number of reported flaws are not real bugs. This raises an interesting conundrum: symbolic approaches can actually have a detrimental impact on programmer productivity, and therefore do not necessarily contribute to improved code quality. What is needed is a combination of automated code generation using large language models (LLMs) with scalable defect-elimination methods from symbolic AI, creating an environment for the automated generation of defect-free code.