Robustness in Both Domains: CLIP Needs a Robust Text Encoder

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Multimodal Adversarial Robustness, Robust Text Encoders, CLIP, Unsupervised Adversarial Finetuning
TL;DR: We propose an efficient strategy for adversarial finetuning of the CLIP text encoder, enabling robustness in zero-shot classification, text-to-image retrieval, and text-to-image generation.
Abstract: Adversarial input attacks can cause a significant shift in CLIP embeddings. This can affect the downstream robustness of models that incorporate CLIP in their pipeline, such as text-to-image generative models or large vision-language models. While some effort has gone into making CLIP image encoders robust, the robustness of text encoders remains unexplored. In this work, we close this gap in the literature. We propose LEAF: an efficient adversarial finetuning method for the text domain that scales to large CLIP models. Our models significantly improve zero-shot adversarial accuracy in the text domain while maintaining the vision performance provided by robust image encoders. When combined with text-to-image diffusion models, our robust text encoders improve generation quality under adversarial noise. In multimodal retrieval tasks, LEAF improves recall under adversarial noise over standard CLIP models. Finally, we show that robust text encoders facilitate better reconstruction of the input text from its embedding via direct optimization. We open-source our code and models.
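The abstract does not spell out the training objective, but an unsupervised adversarial finetuning loop of this kind can be sketched as follows. Everything here is an assumption for illustration, not the paper's exact formulation: the character-level perturbation model within a small edit budget, the embedding-distance loss against a frozen copy of the encoder, and the helper names `embed`, `text_encoder`, and `frozen_encoder`.

```python
import random
import string
import torch

# Hypothetical handles (assumptions, not the paper's API): `text_encoder` is
# the CLIP text encoder being finetuned, `frozen_encoder` a frozen copy that
# provides clean reference embeddings, and `embed(enc, texts)` tokenizes and
# encodes a list of strings into a [batch, dim] tensor.

def perturb(text: str, budget: int = 2) -> str:
    """Random character-level edit within a small budget (assumed attack model)."""
    chars = list(text)
    if not chars:
        return text
    for _ in range(budget):
        i = random.randrange(len(chars))
        chars[i] = random.choice(string.ascii_lowercase)
    return "".join(chars)

def adversarial_perturbation(texts, embed, encoder, ref, trials: int = 10):
    """Pick, per text, the sampled perturbation that shifts the embedding most."""
    adv = []
    for t, r in zip(texts, ref):
        cands = [perturb(t) for _ in range(trials)]
        with torch.no_grad():
            dists = (embed(encoder, cands) - r).norm(dim=-1)
        adv.append(cands[int(dists.argmax())])
    return adv

def leaf_style_step(texts, embed, text_encoder, frozen_encoder, opt):
    """One unsupervised adversarial finetuning step: keep embeddings of
    adversarially perturbed text close to the frozen encoder's clean ones."""
    with torch.no_grad():
        ref = embed(frozen_encoder, texts)  # clean anchor embeddings
    adv = adversarial_perturbation(texts, embed, text_encoder, ref)
    loss = (embed(text_encoder, adv) - ref).pow(2).sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```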
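Likewise, reconstructing text from its embedding "via direct optimization" admits a minimal sketch along the following lines: gradient descent on continuous token embeddings with a straight-through projection onto the vocabulary. The names `encode_from_embeds` and the `token_embedding` layout are assumptions for illustration, not the paper's method or API.

```python
import torch

# Hypothetical handle (assumption): `encode_from_embeds(text_encoder, e)` runs
# the transformer on pre-computed token embeddings `e` of shape [1, L, D] and
# returns a pooled text embedding of shape [1, D].

def invert_embedding(target, text_encoder, encode_from_embeds,
                     seq_len: int = 16, steps: int = 1000, lr: float = 0.1):
    """Reconstruct text from a CLIP text embedding by gradient descent on
    continuous token embeddings, snapping to the nearest vocabulary rows."""
    vocab = text_encoder.token_embedding.weight.detach()      # [V, D]
    x = torch.randn(1, seq_len, vocab.shape[1], requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        # Project each soft token onto its nearest vocabulary embedding;
        # the straight-through estimator keeps gradients flowing to x.
        ids = torch.cdist(x, vocab[None]).argmin(-1)          # [1, L]
        hard = vocab[ids]                                     # [1, L, D]
        proj = x + (hard - x).detach()
        loss = 1 - torch.cosine_similarity(
            encode_from_embeds(text_encoder, proj), target, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.cdist(x.detach(), vocab[None]).argmin(-1)    # recovered token ids
```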
Primary Area: General machine learning (supervised, unsupervised, online, active, etc.)
Submission Number: 12461