Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)

TMLR Paper 3493 Authors

14 Oct 2024 (modified: 01 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Creating secure and resilient applications with large language models (LLMs) requires anticipating, adapting to, and countering unforeseen threats. Red-teaming has emerged as a critical technique for identifying vulnerabilities in real-world LLM deployments. This paper presents a detailed threat model and provides a systematization of knowledge (SoK) of red-teaming attacks on LLMs. We develop a taxonomy of attacks based on the stages of the LLM development and deployment process and extract various insights from prior research. In addition, we compile methods for defense and practical red-teaming strategies for practitioners. By delineating prominent attack motifs and shedding light on various entry points, this paper provides a framework for improving the security and robustness of LLM-based systems.
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=nwQzpEDiud
Changes Since Last Submission: The author list was incorrect in the previous submission because a couple of authors did not have an account on OpenReview. The author list is complete in this submission. Apologies for any inconvenience.
Assigned Action Editor: ~Jinwoo_Shin1
Submission Number: 3493