Manipulating Pre-Trained Encoder for Targeted Poisoning Attacks in Contrastive Learning

Published: 01 Jan 2024, Last Modified: 30 Jan 2024. IEEE Trans. Inf. Forensics Secur. 2024.
Abstract: In recent years, contrastive learning has become a powerful approach to representation learning on large-scale unlabeled data, producing pre-trained encoders that are then used to train downstream classifiers. However, recent research indicates that contrastive learning is vulnerable to data poisoning attacks, in which an attacker injects maliciously crafted poisoned samples into the unlabeled pre-training data. In this paper, we present a stealthier poisoning attack, dubbed PA-CL, that directly poisons the pre-trained encoder so that the downstream classifier can be manipulated into assigning a single target instance to the attacker-desired class without affecting overall downstream classification performance. We observe that the poisoned pre-trained encoder produces a feature representation for the target sample that is highly similar to the representations of samples from the attacker-desired class, which leads the downstream classifier to misclassify the target sample as the attacker-desired class. We therefore formulate the attack as an optimization problem and design two novel loss functions: a target effectiveness loss, which effectively poisons the pre-trained encoder, and a model utility loss, which maintains downstream classification performance. Experimental results on four real-world datasets demonstrate that the attack success rate of the proposed attack is on average 40% higher than that of three baseline attacks, while the fluctuation of the downstream classifier's prediction accuracy stays within 5%.
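The abstract describes the attack as an optimization over two losses: a target effectiveness loss that pulls the target sample's representation toward those of the attacker-desired class, and a model utility loss that keeps representations of ordinary data unchanged so downstream accuracy is preserved. Below is a minimal, hypothetical PyTorch-style sketch of such an objective; all function names, the cosine-similarity formulation, and the weighting term `lam` are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def target_effectiveness_loss(encoder, target_x, desired_class_x):
    # Hypothetical: pull the target sample's representation toward
    # representations of samples from the attacker-desired class.
    z_t = F.normalize(encoder(target_x), dim=-1)        # shape (1, d)
    z_c = F.normalize(encoder(desired_class_x), dim=-1) # shape (n, d)
    # Maximize cosine similarity, i.e., minimize its negative.
    return -(z_t @ z_c.t()).mean()

def model_utility_loss(encoder, clean_encoder, clean_x):
    # Hypothetical: keep the poisoned encoder's representations close to
    # the clean encoder's on ordinary data, preserving downstream accuracy.
    z_p = F.normalize(encoder(clean_x), dim=-1)
    with torch.no_grad():
        z_ref = F.normalize(clean_encoder(clean_x), dim=-1)
    return -(z_p * z_ref).sum(dim=-1).mean()

def poisoning_objective(encoder, clean_encoder, target_x, desired_class_x,
                        clean_x, lam=1.0):
    # Combined objective minimized over the encoder's parameters;
    # lam trades off attack effectiveness against model utility (assumed).
    return (target_effectiveness_loss(encoder, target_x, desired_class_x)
            + lam * model_utility_loss(encoder, clean_encoder, clean_x))
```

In this sketch the objective would be minimized with a standard optimizer over the encoder's parameters, starting from a copy of the clean pre-trained encoder.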