Black Box Adversarial Prompting for Foundation Models

Published: 20 Jun 2023, Last Modified: 07 Aug 2023 · AdvML-Frontiers 2023
Keywords: prompting, adversarial attacks, generative models, text-to-image, text-to-text, foundation models, large language models, black-box optimization, applications of bayesian optimization
TL;DR: We develop a framework to find adversarial prompts for text-to-image and text-to-text generative models.
Abstract: Prompting interfaces allow users to quickly adjust the output of generative models in both vision and language. However, small changes and design choices in the prompt can lead to significant differences in the output. In this work, we develop a black-box framework for generating adversarial prompts for unstructured image and text generation. These prompts, which can be standalone or prepended to benign prompts, induce specific behaviors in the generative process, such as generating images of a particular object or generating high-perplexity text.
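The abstract describes searching for adversarial prompt prefixes using only black-box access to the model (the keywords mention Bayesian optimization as the search strategy). As a minimal illustration of the black-box setting, the sketch below uses simple greedy hill-climbing over a toy vocabulary in place of the paper's actual optimizer; `score_fn`, the vocabulary, and all names here are hypothetical stand-ins, e.g. in practice `score_fn` would query the generative model and score its output with a detector or perplexity measure.

```python
import random

def find_adversarial_prefix(score_fn, vocab, prefix_len=3, iters=200, seed=0):
    """Black-box search for a prompt prefix that maximizes a scalar score.

    score_fn: hypothetical black-box function mapping a prompt string to a
              score (e.g. a classifier's confidence that a generated image
              contains the target object). No gradients are used.
    """
    rng = random.Random(seed)
    best = [rng.choice(vocab) for _ in range(prefix_len)]
    best_score = score_fn(" ".join(best))
    for _ in range(iters):
        cand = list(best)
        # Mutate a single token position; the paper's method would instead
        # propose candidates via Bayesian optimization over a prompt space.
        cand[rng.randrange(prefix_len)] = rng.choice(vocab)
        s = score_fn(" ".join(cand))
        if s > best_score:  # greedy acceptance of improving candidates
            best, best_score = cand, s
    return " ".join(best), best_score

# Toy black-box objective: count occurrences of a "target" token,
# standing in for querying an actual text-to-image or text-to-text model.
vocab = ["cat", "dog", "tree", "car", "sky"]
score = lambda p: p.split().count("cat")
prefix, s = find_adversarial_prefix(score, vocab, prefix_len=4, iters=300)
```

The returned `prefix` could then be prepended to a benign prompt, matching the standalone-or-prepended usage described in the abstract.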
Submission Number: 56