HumorReject: Decoupling LLM Safety from Refusal Prefix via A Little Humor

ACL ARR 2025 May Submission 2188 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) commonly rely on explicit refusal prefixes for safety, making them vulnerable to prefix injection attacks. We introduce HumorReject, a novel data-driven approach that reimagines LLM safety by decoupling it from refusal prefixes, using humor as an indirect refusal strategy. Rather than explicitly rejecting harmful instructions, HumorReject responds with contextually appropriate humor that naturally defuses potentially dangerous requests. Our approach effectively mitigates common "over-defense" issues while demonstrating superior robustness against various attack vectors. Our findings suggest that improvements in training data design can be as important as the alignment algorithm itself in achieving effective LLM safety.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: LLM safety, LLM jailbreak
Contribution Types: Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 2188