Token Highlighter: Inspecting and Mitigating Jailbreak Prompts for Large Language Models

Published: 12 Oct 2024, Last Modified: 14 Nov 2024
Venue: SafeGenAi Poster
License: CC BY 4.0
Keywords: Large Language Models, Jailbreak Defense, AI Alignment and Safety
Abstract: Large Language Models (LLMs) are increasingly being integrated into services such as ChatGPT to provide responses to user queries. To mitigate potential harm and prevent misuse, there have been concerted efforts to align LLMs with human values and legal compliance by incorporating techniques such as Reinforcement Learning from Human Feedback (RLHF) into their training. However, recent research has shown that even aligned LLMs are susceptible to adversarial manipulations known as Jailbreak Attacks. To address this challenge, this paper proposes a method called **Token Highlighter** to inspect and mitigate potential jailbreak threats in the user query. Token Highlighter introduces a concept called $\mathtt{Affirmation}$ $\mathtt{Loss}$ to measure the LLM's willingness to answer the user query. It then uses the gradient of the $\mathtt{Affirmation}$ $\mathtt{Loss}$ with respect to each token in the user query to locate the jailbreak-critical tokens. Further, Token Highlighter applies our proposed ***Soft Removal*** technique to mitigate the jailbreak effects of the critical tokens by shrinking their token embeddings. Experimental results on two aligned LLMs (LLaMA-2 and Vicuna-V1.5) demonstrate that the proposed method can effectively defend against a variety of Jailbreak Attacks while maintaining competent performance on benign questions from the AlpacaEval benchmark. In addition, **Token Highlighter** is a cost-effective and interpretable defense because it only needs to query the protected LLM once to compute the $\mathtt{Affirmation}$ $\mathtt{Loss}$ and can highlight the critical tokens upon refusal.
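To make the two steps described in the abstract concrete, below is a minimal sketch of the Affirmation-Loss-gradient highlighting followed by Soft Removal, assuming a HuggingFace-style causal LM interface. The affirmative target phrase, the highlighted-token fraction `top_frac`, and the shrinking factor `beta` are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of the Token Highlighter idea (not the authors' code).
# Assumes a HuggingFace causal LM; the affirmation phrase, top_frac, and
# beta below are illustrative choices.
import torch

def token_highlighter(model, tokenizer, query,
                      affirmation="Sure, I'd like to help you with this.",
                      top_frac=0.25, beta=0.5):
    device = next(model.parameters()).device
    q_ids = tokenizer(query, return_tensors="pt").input_ids.to(device)
    a_ids = tokenizer(affirmation, return_tensors="pt",
                      add_special_tokens=False).input_ids.to(device)

    # Embed the query tokens and track gradients on the query embeddings only.
    embed = model.get_input_embeddings()
    q_emb = embed(q_ids).detach().requires_grad_(True)
    a_emb = embed(a_ids)

    inputs_embeds = torch.cat([q_emb, a_emb], dim=1)
    logits = model(inputs_embeds=inputs_embeds).logits

    # Affirmation Loss: negative log-likelihood of the affirmative response
    # given the user query (a single forward/backward pass on the protected LLM).
    shift_logits = logits[:, q_ids.shape[1] - 1:-1, :]
    loss = torch.nn.functional.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)), a_ids.reshape(-1)
    )
    loss.backward()

    # Tokens whose embedding gradients have the largest norm are treated as
    # jailbreak-critical ("highlighted").
    grad_norm = q_emb.grad.norm(dim=-1).squeeze(0)
    k = max(1, int(top_frac * grad_norm.numel()))
    critical = grad_norm.topk(k).indices

    # Soft Removal: shrink the embeddings of the critical tokens by beta
    # instead of deleting them, then answer with the edited embeddings.
    softened = q_emb.detach().clone()
    softened[0, critical] *= beta
    out = model.generate(inputs_embeds=softened, max_new_tokens=256)
    return tokenizer.decode(out[0], skip_special_tokens=True), critical
```

In practice the shrinking factor and the fraction of highlighted tokens would be tuned on a validation set; the sketch simply answers the query from the softened embeddings and returns the highlighted token indices so they can be inspected upon refusal.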
Submission Number: 82