Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning

Published: 20 Jul 2024, Last Modified: 21 Jul 2024, MM 2024 Poster, CC BY 4.0
Abstract:

Multimodal contrastive learning (MCL) has shown remarkable advances in zero-shot classification by learning from millions of image-caption pairs crawled from the Internet. However, this reliance poses privacy risks, as hackers may exploit image-text data for model training without authorization, potentially including personal and privacy-sensitive information. Recent works propose generating unlearnable examples by adding imperceptible perturbations to training images that build shortcuts for protection. However, these methods are designed for unimodal classification, and their use in MCL remains largely unexplored. We first study this setting by evaluating existing methods on image-caption pairs and find that they fail to build effective shortcuts, owing to the absence of labels and the dispersion of image-caption pairs in MCL. In this paper, we propose Multi-step Error Minimization (MEM), a novel optimization process for generating multimodal unlearnable examples. It extends the Error-Minimization (EM) framework to optimize both image noise and an additional text trigger, thereby enlarging the optimization space and effectively misleading the model into learning a shortcut between the noise features and the text trigger. Specifically, we adopt projected gradient descent to solve the noise-minimization problem and use HotFlip to approximate the effect of word replacements via gradients and search for the optimal text trigger. Extensive experiments demonstrate the effectiveness of MEM, with post-protection retrieval performance dropping to nearly half that of random guessing, and its high transferability across different models.
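As described in the abstract, the multi-step optimization alternates between a projected-gradient update on the image noise and a HotFlip-style token replacement for the text trigger, both driven by the image-text contrastive loss. The snippet below is a minimal sketch of that alternation, not the authors' released implementation: it assumes a CLIP-style wrapper with hypothetical `encode_image`, `encode_text`, and `encode_text_from_embeddings` methods and a hypothetical `embedding_matrix`, and the step sizes and trigger positions are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_feat, txt_feat, temperature=0.07):
    # Symmetric InfoNCE loss over a batch of paired image/text features (CLIP-style).
    img_feat = F.normalize(img_feat, dim=-1)
    txt_feat = F.normalize(txt_feat, dim=-1)
    logits = img_feat @ txt_feat.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

def pgd_noise_step(model, images, delta, token_ids, alpha=1 / 255, epsilon=8 / 255):
    # One projected-gradient step that *minimizes* the contrastive loss w.r.t. the
    # image noise delta (error-minimizing direction), then projects delta back into
    # an L_inf ball of radius epsilon. `model.encode_image/encode_text` are assumed.
    delta = delta.detach().requires_grad_(True)
    loss = contrastive_loss(model.encode_image(images + delta),
                            model.encode_text(token_ids))
    loss.backward()
    with torch.no_grad():
        delta = delta - alpha * delta.grad.sign()   # descend: make the loss small
        delta = delta.clamp(-epsilon, epsilon)      # L_inf projection
    return delta

def hotflip_trigger_step(model, images, delta, token_ids, trigger_slots, embedding_matrix):
    # HotFlip-style search: rank candidate replacement tokens by a first-order
    # approximation of the loss change and greedily pick, for each trigger slot,
    # the token that most decreases the loss. `encode_text_from_embeddings` and
    # `embedding_matrix` (vocab_size x dim) are hypothetical names.
    token_embeds = embedding_matrix[token_ids].detach().requires_grad_(True)
    loss = contrastive_loss(model.encode_image(images + delta),
                            model.encode_text_from_embeddings(token_embeds))
    loss.backward()
    with torch.no_grad():
        for pos in trigger_slots:
            grad = token_embeds.grad[:, pos].mean(0)                 # average over the batch
            old = token_embeds[:, pos].mean(0)
            scores = (embedding_matrix - old) @ grad                 # approx. loss change per token
            token_ids[:, pos] = scores.argmin()                      # most loss-decreasing token
    return token_ids
```

In this reading, the two steps are repeated in alternation over the protected image-caption pairs, so the noise and the trigger are jointly driven toward a learnable shortcut that a downstream MCL model picks up instead of the real image-text correspondence.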

Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Content] Vision and Language, [Generation] Social Aspects of Generative AI
Relevance To Conference: Our work contributes to the field of multimodal processing by addressing the pressing issue of privacy and security in large-scale training of multimodal models. With the growing interest in multimodal learning, the reliance on publicly available datasets raises concerns about the inadvertent inclusion of personal and sensitive information. We propose a novel optimization process to generate unlearnable examples for image-caption pairs, thereby protecting users' privacy while still allowing the data to be shared. By leveraging our method, we can prevent unauthorized models from learning users' private features, thus mitigating the risk of privacy leakage from multimodal data.
Supplementary Material: zip
Submission Number: 1603