An LLM can Fool Itself: A Prompt-Based Adversarial Attack

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · ICLR 2024 · CC BY-SA 4.0