Collapsing the Learning: Crafting Broadly Transferable Unlearnable Examples

18 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: unlearnable examples, data privacy, data availability attacks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The success of Artificial Intelligence (AI) can be largely attributed to the availability of high-quality data for building machine learning models. The growing importance of data in AI has raised concerns about its secure use, particularly unauthorized exploitation. To counter such exploitation, data unlearning has been introduced as a way to render data unexploitable by generating unlearnable examples. However, existing unlearnable examples lack the generalization needed for broad applicability. In this paper, we propose a novel data protection method that generates robust, transferable unlearnable examples, ensuring their effectiveness across diverse network architectures, even under challenging adversarial training conditions. To the best of our knowledge, our approach is the first to generate transferable unlearnable examples by leveraging data collapse to reduce the information contained in the data. Moreover, we modify the conventional adversarial training process so that our unlearnable examples retain robust transferability even when the targeted model undergoes adversarial training. Comprehensive experiments demonstrate that the unlearnable examples generated by our method exhibit superior robust transferability compared to other state-of-the-art techniques.
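The abstract describes unlearnable examples only at a high level. As a rough, hedged illustration of the generic idea behind such availability attacks (not this paper's data-collapse method, which is not specified here), the following toy sketch generates error-minimizing perturbations against a surrogate logistic-regression model: the perturbation is optimized so the perturbed data makes the training loss artificially small, leaving little for a model to learn. All function names, hyperparameters, and the surrogate model are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_unlearnable(X, y, eps=0.5, outer_steps=20, inner_steps=10,
                     lr_model=0.1, lr_delta=0.1, seed=0):
    """Toy error-minimizing perturbation (illustrative sketch, not the
    paper's method). Alternately fits a surrogate model on the perturbed
    data, then descends the loss w.r.t. the perturbation delta, keeping
    delta inside an L-infinity ball of radius eps."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    delta = np.zeros_like(X)
    for _ in range(outer_steps):
        # Inner loop: train the surrogate model on the perturbed data.
        for _ in range(inner_steps):
            p = sigmoid((X + delta) @ w)
            w -= lr_model * (X + delta).T @ (p - y) / n
        # Outer loop: error-MINIMIZING step on delta (the opposite sign
        # of an adversarial attack), making the data "too easy".
        p = sigmoid((X + delta) @ w)
        delta -= lr_delta * np.outer(p - y, w)
        delta = np.clip(delta, -eps, eps)  # bound the perturbation
    return delta, w

def bce(X, y, w):
    """Binary cross-entropy loss of the surrogate model on (X, y)."""
    p = np.clip(sigmoid(X @ w), 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

On synthetic data, the surrogate's loss on the perturbed set ends up lower than on the clean set, which is the signature of error-minimizing noise: the perturbed data carries less usable training signal.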
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1288