[WIP] Jailbreak Paradox: The Achilles' Heel of LLMs

Published: 01 Jan 2024 · Last Modified: 27 Sept 2024 · CoRR 2024 · CC BY-SA 4.0
Abstract: We introduce two paradoxes concerning the jailbreaking of foundation models: first, that it is impossible to construct a perfect jailbreak classifier; and second, that a weaker model cannot consistently detect whether a stronger (Pareto-dominant) model has been jailbroken. We provide formal proofs for both paradoxes, along with a short case study on Llama and GPT-4o that illustrates them, and discuss the broader theoretical and practical repercussions of these results.
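As a rough illustration of what the first paradox asserts (this formalization is not taken from the paper; the classifier C, the prompt space P, and the ground-truth predicate JB are assumed names used only for exposition), the impossibility claim can be read as a statement of the form

\[
\neg\,\exists\, C : \mathcal{P} \to \{0,1\}
\quad \text{such that} \quad
\forall\, p \in \mathcal{P}:\; C(p) = \mathrm{JB}(p),
\]

where JB(p) = 1 if and only if prompt p jailbreaks the model under consideration. The paper's actual definitions and proof technique are given in the full text.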