Keywords: Alignment, LLM, Safety, Attack
TL;DR: We raise the concern that safety-aligned LLMs can be subverted with as few as 100 examples, posing serious risks to AI safety.
Abstract: Warning: This paper contains examples of harmful language; reader discretion
is recommended. The increasing open release of powerful large language models
(LLMs) has facilitated the development of downstream applications by reducing the
cost of data annotation and computation. To ensure AI safety, extensive
safety-alignment measures have been applied to armor these models against
malicious use (primarily hard prompt attacks). However, beneath the seemingly
resilient facade of this armor, there may lurk a shadow. By simply fine-tuning on 100
malicious examples with 1 GPU hour, these safely aligned LLMs can be easily
subverted to generate harmful content. Formally, we term this new attack Shadow
Alignment: a tiny amount of data suffices to elicit safely aligned models to
adapt to harmful tasks without sacrificing model helpfulness. Remarkably, the
subverted models retain their capability to respond appropriately to regular inquiries.
Experiments across 8 models released by 5 different organizations (LLaMa-2,
Falcon, InternLM, BaiChuan2, Vicuna) demonstrate the effectiveness of the shadow
alignment attack. Moreover, the single-turn, English-only attack successfully transfers
to multi-turn dialogue and other languages. This study serves as a clarion call for a
collective effort to overhaul and fortify the safety of open-source LLMs against
malicious attackers.
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5030