Abstract: Despite the advanced capabilities of contemporary machine learning (ML) models, they remain vulnerable to adversarial and backdoor attacks. This vulnerability is particularly concerning in real-world deployments, where compromised models may exhibit unpredictable behavior in critical scenarios. Such risks are heightened by the prevalent practice of collecting massive, internet-sourced datasets for training multimodal models, as these datasets may harbor backdoors. Various techniques have been proposed to mitigate the effects of backdooring in multimodal models, such as CleanCLIP, the current state-of-the-art approach. In this work, we demonstrate that the efficacy of CleanCLIP in mitigating backdoors is highly dependent on the particular objective used during model pre-training. We observe that stronger pre-training objectives that lead to higher zero-shot classification performance correlate with backdoors that are harder to remove. We show this by training multimodal models on two large datasets consisting of 3 million (CC3M) and 6 million (CC6M) datapoints, under various pre-training objectives, followed by poison removal using CleanCLIP. We find that CleanCLIP, even with extensive hyperparameter tuning, is ineffective at poison removal when stronger pre-training objectives are used. Our findings underscore critical considerations for ML practitioners who train models using large-scale web-curated data and are concerned about potential backdoor threats.
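To make the distinction between pre-training objectives concrete, below is a minimal sketch of a CLIP-style multimodal contrastive loss combined with an in-modality self-supervised term, the kind of "stronger" objective the abstract refers to. This is an illustrative approximation, not the paper's released code; the function name `clip_ssl_loss` and the weighting parameter `lambda_ssl` are assumptions introduced here.

```python
# Illustrative sketch (assumption): a CLIP contrastive loss plus an
# in-modality self-supervised term over augmented views.
import torch
import torch.nn.functional as F

def clip_ssl_loss(img_emb, txt_emb, img_emb_aug, txt_emb_aug,
                  temperature=0.07, lambda_ssl=1.0):
    """img_emb, txt_emb: L2-normalized embeddings of paired images/captions, shape (N, D).
    img_emb_aug, txt_emb_aug: embeddings of augmented views of the same inputs."""
    n = img_emb.size(0)
    labels = torch.arange(n, device=img_emb.device)

    # Multimodal contrastive (CLIP) term: image i should match caption i.
    logits = img_emb @ txt_emb.t() / temperature
    loss_clip = 0.5 * (F.cross_entropy(logits, labels) +
                       F.cross_entropy(logits.t(), labels))

    # Self-supervised in-modality term: each sample should match its own augmented view.
    logits_img = img_emb @ img_emb_aug.t() / temperature
    logits_txt = txt_emb @ txt_emb_aug.t() / temperature
    loss_ssl = 0.5 * (F.cross_entropy(logits_img, labels) +
                      F.cross_entropy(logits_txt, labels))

    return loss_clip + lambda_ssl * loss_ssl
```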
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Dear Action Editor,
Thank you for accepting our paper. We have implemented all of your recommendations in the camera-ready version of the paper. Specifically,
1. We have replaced the term "stronger pre-training objectives" with "self-supervised loss".
2. We have regenerated all figures with larger marker sizes and updated the legends for clarity.
3. We have added the ViT experiments to the paper as suggested.
4. We do not have access to ground-truth labels because the images come from CC6M; we therefore use SigLIP for prediction.
5. We have fixed the abstract formatting issues.
Thank you
Code: https://github.com/vsahil/attack-cleanclip
Assigned Action Editor: ~Yingzhen_Li1
Submission Number: 2933