Abstract: Large language models (LLMs) are vulnerable to jailbreak attacks that elicit harmful, unethical, or biased text generations. However, existing jailbreaking methods are computationally costly. In this paper, we propose the **weak-to-strong** jailbreaking attack, an efficient inference-time attack that induces aligned LLMs to produce harmful text. Our key intuition is that jailbroken and aligned models differ only in their initial decoding distributions. The weak-to-strong attack's core technical insight is to use two smaller models (a safe one and an unsafe one) to adversarially modify a significantly larger safe model's decoding probabilities. We evaluate the weak-to-strong attack on 5 diverse open-source LLMs from 3 organizations. The results show our method can increase the misalignment rate to over 99% on two datasets with just one forward pass per example. Our study exposes an urgent safety issue that needs to be addressed when aligning LLMs. As an initial attempt, we propose a defense strategy to protect against such attacks, but creating more advanced defenses remains challenging. The code for replicating the method is available at https://github.com/XuandongZhao/weak-to-strong.
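To make the mechanism described in the abstract concrete, below is a minimal sketch of decoding-time steering with two weak models. It is not the authors' released implementation (see the linked repository for that): it assumes all three models share a tokenizer and vocabulary, uses greedy decoding, and treats the model names, the unsafe-model path, and the amplification factor `alpha` as illustrative placeholders. The log-ratio adjustment shown is one plausible way to "adversarially modify" the strong model's decoding probabilities consistent with the abstract's description.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model names; the unsafe weak model path is hypothetical and
# stands in for a small model whose safety alignment has been removed.
STRONG_NAME = "meta-llama/Llama-2-13b-chat-hf"
WEAK_SAFE_NAME = "meta-llama/Llama-2-7b-chat-hf"
WEAK_UNSAFE_NAME = "path/to/unsafe-7b"

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# All three models must share the same tokenizer/vocabulary so that their
# next-token distributions are defined over the same set of tokens.
tokenizer = AutoTokenizer.from_pretrained(STRONG_NAME)
strong = AutoModelForCausalLM.from_pretrained(STRONG_NAME, torch_dtype=dtype).to(device).eval()
weak_safe = AutoModelForCausalLM.from_pretrained(WEAK_SAFE_NAME, torch_dtype=dtype).to(device).eval()
weak_unsafe = AutoModelForCausalLM.from_pretrained(WEAK_UNSAFE_NAME, torch_dtype=dtype).to(device).eval()


@torch.no_grad()
def weak_to_strong_generate(prompt: str, alpha: float = 1.0, max_new_tokens: int = 128) -> str:
    """Greedy decoding from the strong model, with its next-token distribution
    steered by the log-probability gap between the unsafe and safe weak models."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    for _ in range(max_new_tokens):
        log_p_strong = torch.log_softmax(strong(ids).logits[:, -1, :], dim=-1)
        log_p_safe = torch.log_softmax(weak_safe(ids).logits[:, -1, :], dim=-1)
        log_p_unsafe = torch.log_softmax(weak_unsafe(ids).logits[:, -1, :], dim=-1)

        # log p~(x) = log p_strong(x) + alpha * (log p_unsafe(x) - log p_safe(x));
        # alpha controls how strongly the weak pair steers the strong model.
        steered = log_p_strong + alpha * (log_p_unsafe - log_p_safe)

        next_id = steered.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Note that each decoding step runs the two weak models alongside the strong one, so the overhead is roughly the cost of the small models' forward passes; the strong model itself is never fine-tuned or modified.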
Lay Summary: Large language models (LLMs), like those powering AI chatbots, are trained to avoid producing harmful or dangerous content. However, our study shows that these safety measures can be easily bypassed—even without using powerful computers or advanced technical skills. We introduce a new method called "weak-to-strong jailbreaking", where a small, misaligned AI model (one not trained to be safe) is used to subtly influence a much larger and safer model during text generation. This influence can trick the bigger model into saying things it normally would refuse to. We tested this attack across several widely used AI models and found that it could make even the most advanced models generate unsafe content over 99% of the time. While we also propose a partial defense strategy, our findings reveal serious gaps in current safety systems. This work highlights the urgent need to develop stronger protections as AI becomes more powerful and widely available.
Link To Code: https://github.com/XuandongZhao/weak-to-strong
Primary Area: Social Aspects->Safety
Keywords: LLM, AI safety, Jailbreaking
Submission Number: 7134