Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)

Published: 30 Apr 2025 · Last Modified: 30 Apr 2025 · Accepted by TMLR · CC BY 4.0
Abstract: Building secure and resilient applications with large language models (LLMs) requires anticipating, adapting to, and countering unforeseen threats. Red-teaming has emerged as a critical technique for identifying vulnerabilities in real-world LLM implementations. This paper presents a detailed threat model and a systematization of knowledge (SoK) of red-teaming attacks on LLMs. We develop a taxonomy of attacks organized by the stages of the LLM development and deployment pipeline and distill insights from prior research. In addition, we compile defense methods and practical red-teaming strategies for practitioners. By delineating prominent attack motifs and shedding light on their entry points, this paper provides a framework for improving the security and robustness of LLM-based systems.
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission URL: https://openreview.net/forum?id=nwQzpEDiud
Code: https://github.com/dapurv5/awesome-red-teaming-llms
Assigned Action Editor: ~Jinwoo_Shin1
Submission Number: 3493