MANDALA: Multi-Agent Network for Backdoor Detection using AST Parsing and Large Language Models

ACL ARR 2024 June Submission 3229 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: This paper presents MANDALA, a system that leverages locally deployed open-source large language models (LLMs) and multi-agent networks to enhance vulnerability detection in codebases. MANDALA uses an abstract syntax tree-based algorithm to parse code into digestible chunks for LLMs, generating code explanations and descriptions. A collaborative multi-agent network, comprising specialized agents for static analysis, security management, and user interaction, then coordinates to analyze the codebase for potential backdoors and vulnerabilities. Evaluations on open-source codebases demonstrate MANDALA's ability to significantly reduce manual effort while increasing detection speed and accuracy over traditional methods across various test cases. MANDALA represents an innovative integration of LLMs and multi-agent systems for efficient, scalable code vulnerability detection.
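The abstract's AST-based chunking step can be illustrated with a minimal sketch (not the authors' implementation): split a source file into function- and class-level chunks small enough for an LLM context window. The function name `chunk_source` and the `max_chars` parameter are hypothetical, chosen here only for illustration.

```python
# Minimal sketch of AST-based code chunking, assuming Python sources.
# Not MANDALA's actual algorithm; names and limits are illustrative.
import ast


def chunk_source(source: str, max_chars: int = 4000) -> list[str]:
    """Split Python source into top-level function/class chunks."""
    tree = ast.parse(source)
    chunks: list[str] = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            segment = ast.get_source_segment(source, node) or ""
            if len(segment) <= max_chars:
                chunks.append(segment)
            else:
                # Oversized definitions are split naively by lines here;
                # a fuller system would recurse into the AST instead.
                buf = ""
                for line in segment.splitlines(keepends=True):
                    if len(buf) + len(line) > max_chars:
                        chunks.append(buf)
                        buf = ""
                    buf += line
                if buf:
                    chunks.append(buf)
    return chunks


if __name__ == "__main__":
    sample = (
        "def add(a, b):\n    return a + b\n\n"
        "class Greeter:\n    def hello(self):\n        return 'hi'\n"
    )
    for i, chunk in enumerate(chunk_source(sample)):
        print(f"--- chunk {i} ---\n{chunk}")
```

Each chunk could then be sent to an LLM to generate the per-chunk explanations and descriptions the abstract mentions, before the multi-agent analysis stage.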
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: open information extraction, entity linking/disambiguation, document-level extraction
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3229