Keywords: Trustworthiness
TL;DR: An Uncertainty-Driven Defense Method against Jailbreaks via Shifted Token Distribution
Abstract: Large Language Models (LLMs) are vulnerable to jailbreak prompts. Existing defenses against jailbreak attacks rely primarily on auxiliary models; these strategies, however, often require extensive data or training.
We propose **LightDefense**, a lightweight defense mechanism for white-box models that uses a safety-oriented direction to shift token probabilities over the vocabulary, so that safety disclaimers rank among the most probable next tokens.
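A minimal sketch of this distribution-shifting idea, assuming PyTorch; the function and parameter names (`shift_token_distribution`, `safety_direction`, `alpha`) are illustrative, not the paper's actual interface:

```python
import torch

def shift_token_distribution(logits: torch.Tensor,
                             safety_direction: torch.Tensor,
                             alpha: float) -> torch.Tensor:
    """Shift next-token logits along a safety-oriented direction.

    logits:           (vocab_size,) raw next-token logits from the LLM.
    safety_direction: (vocab_size,) vector with positive weight on
                      safety-disclaimer tokens (assumed precomputed).
    alpha:            defense strength; larger values raise disclaimer
                      tokens higher in the probability ranking.
    """
    shifted = logits + alpha * safety_direction
    return torch.softmax(shifted, dim=-1)
```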
We further leverage the LLM's uncertainty about a prompt to measure its harmfulness and adaptively adjust defense strength, effectively balancing safety and helpfulness.
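The abstract does not specify the uncertainty estimator; one plausible reading, sketched here with hypothetical names (`adaptive_defense_strength`, `alpha_min`, `alpha_max`), uses the normalized predictive entropy of the next-token distribution, with higher uncertainty treated as a sign of a more harmful prompt:

```python
import torch

def adaptive_defense_strength(probs: torch.Tensor,
                              alpha_min: float = 0.0,
                              alpha_max: float = 2.0) -> float:
    """Map the model's uncertainty about a prompt to a defense strength.

    probs: (vocab_size,) next-token distribution produced for the prompt.
    Normalized predictive entropy serves as the uncertainty measure.
    """
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    max_entropy = torch.log(torch.tensor(float(probs.numel())))
    uncertainty = (entropy / max_entropy).item()  # normalized to [0, 1]
    return alpha_min + uncertainty * (alpha_max - alpha_min)
```

The resulting strength would then feed into the distribution shift above, so benign prompts (low uncertainty) are barely perturbed while suspicious ones receive a stronger push toward safety disclaimers.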
LightDefense defends against 5 attack methods across 2 target LLMs without compromising helpfulness on benign user queries, highlighting its potential as a novel, lightweight mechanism for enhancing LLM security.
Submission Number: 76