Keywords: AI security, Large Language Models, Security Benchmark, Red Teaming, AI Safety
TL;DR: We establish a foundation for modeling LLM-specific security vulnerabilities and introduce a security benchmark grounded in over 70k crowdsourced attacks on the backbone LLMs of AI agents.
Abstract: AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security.
The non-deterministic sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks.
Existing frameworks address these challenges only partially: they either capture specific vulnerabilities in isolation or require modeling of complete agents.
To address these limitations, we introduce threat snapshots: a framework that isolates specific states in an agent's execution flow where LLM vulnerabilities manifest, enabling the systematic identification and categorization of security risks that propagate from the LLM to the agent level.
We apply this framework to construct the $b^3$ benchmark, a security benchmark based on 79,466 unique crowdsourced adversarial attacks. We then use it to evaluate 27 popular LLMs, revealing, among other insights, that enhanced reasoning capabilities improve security, whereas model size does not correlate with security.
We release our benchmark, dataset, and evaluation code to facilitate widespread adoption by LLM providers and practitioners, offering guidance for agent developers and incentivizing model developers to prioritize backbone security improvements.
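To make the threat-snapshot idea from the abstract concrete, the sketch below shows one plausible way such a snapshot could be represented and replayed against a candidate backbone LLM. This is a hypothetical illustration, not the paper's implementation: the class name `ThreatSnapshot`, its fields, and the `to_messages` helper are assumptions, and the actual benchmark may structure agent state differently.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names and fields are illustrative, not taken from the paper.
@dataclass
class ThreatSnapshot:
    """A frozen point in an agent's execution flow at which the backbone
    LLM is exposed to adversarial input and its response is evaluated."""
    system_prompt: str                                        # agent instructions given to the backbone LLM
    conversation: list[dict] = field(default_factory=list)    # messages leading up to the snapshot point
    tools: list[dict] = field(default_factory=list)           # tool schemas available to the model at this state
    adversarial_input: str = ""                               # crowdsourced attack injected at this state

    def to_messages(self) -> list[dict]:
        """Assemble the chat request that replays this snapshot against a candidate LLM."""
        return (
            [{"role": "system", "content": self.system_prompt}]
            + self.conversation
            + [{"role": "user", "content": self.adversarial_input}]
        )
```

Under this framing, each snapshot can be replayed independently across many backbone LLMs and attacks, without simulating the full agent loop, which is the property the abstract attributes to the framework.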
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 19152