Abstract: Ensuring the safety alignment of Large Language Models (LLMs) is critical for generating responses consistent with human values. However, LLMs remain vulnerable to jailbreaking attacks, where carefully crafted prompts manipulate them into producing toxic content. One category of such attacks reformulates the task as an optimization problem, aiming to elicit affirmative responses from the LLM. Yet these methods rely heavily on predefined objectionable behaviors, which limits their effectiveness and adaptability to diverse harmful queries.
In this study, we first identify why the vanilla target loss is suboptimal and then propose enhancements to the loss objective. We introduce the $\textit{DSN}$ (Don't Say No) attack, which combines a cosine-decay weighting schedule with refusal suppression to achieve higher success rates. Extensive experiments demonstrate that $\textit{DSN}$ outperforms baseline attacks and achieves state-of-the-art attack success rates (ASR). $\textit{DSN}$ also shows strong universality and transferability to unseen datasets and black-box models.
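To make the stated objective concrete, below is a minimal sketch of a DSN-style loss, assuming the affirmative-target loss is augmented with a refusal-suppression term whose weight follows a cosine decay schedule over optimization steps. Function and variable names (`dsn_loss`, `cosine_decay`, the unlikelihood-style suppression term) are illustrative assumptions, not the paper's implementation.

```python
import math
import torch
import torch.nn.functional as F

def cosine_decay(step: int, total_steps: int,
                 w_start: float = 1.0, w_end: float = 0.0) -> float:
    """Cosine decay from w_start to w_end over total_steps (assumed schedule form)."""
    progress = min(step / max(total_steps, 1), 1.0)
    return w_end + 0.5 * (w_start - w_end) * (1.0 + math.cos(math.pi * progress))

def dsn_loss(logits: torch.Tensor,
             target_ids: torch.Tensor,
             refusal_ids: torch.Tensor,
             step: int,
             total_steps: int) -> torch.Tensor:
    """Blend the affirmative-target loss with a refusal-suppression term.

    logits:      (seq_len, vocab_size) model logits over the response positions
    target_ids:  token ids of the desired affirmative response
    refusal_ids: token ids of refusal phrases (e.g. "I cannot") to suppress
    """
    # Vanilla target loss: maximize likelihood of the affirmative response.
    target_loss = F.cross_entropy(logits[: target_ids.size(0)], target_ids)

    # Refusal suppression: penalize probability mass assigned to refusal tokens
    # (an unlikelihood-style term; the exact form is an assumption).
    log_probs = F.log_softmax(logits[: refusal_ids.size(0)], dim=-1)
    refusal_logp = log_probs.gather(1, refusal_ids.unsqueeze(1)).squeeze(1)
    refusal_prob = refusal_logp.exp().clamp(max=1 - 1e-6)
    refusal_loss = -torch.log1p(-refusal_prob).mean()

    # Cosine-decayed weight trades off the two terms as optimization proceeds.
    alpha = cosine_decay(step, total_steps)
    return target_loss + alpha * refusal_loss
```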
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: prompting, security and privacy, red teaming, applications, robustness
Languages Studied: English
Submission Number: 7916