APBench: A Unified Availability Poisoning Attack and Defenses Benchmark

Published: 14 Aug 2024, Last Modified: 14 Aug 2024. Accepted by TMLR. License: CC BY-SA 4.0
Abstract: Availability poisoning, which poisons data by injecting imperceptible perturbations to prevent its use in model training, has been a subject of intense investigation. Early research suggested that such poisoning attacks were difficult to counteract effectively, but the introduction of various defense methods has challenged this notion. Because the field is progressing rapidly and experimental setups vary across studies, the performance of new methods cannot be validated consistently. To further evaluate the attack and defense capabilities of these poisoning methods, we have developed APBench, a benchmark for assessing the efficacy of availability poisoning. APBench comprises 9 state-of-the-art availability poisoning attacks, 8 defense algorithms, and 4 conventional data augmentation techniques. We have also set up experiments across a range of poisoning ratios, and evaluated the attacks on multiple datasets as well as their transferability across model architectures. We further conducted a comprehensive evaluation of 2 additional attacks that specifically target unsupervised models. Our results reveal the glaring inadequacy of existing attacks in safeguarding individual privacy. APBench is open source and available to the deep learning community.
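To make the threat model in the abstract concrete, the following is a minimal Python sketch, not part of APBench or the paper's method, of how an availability poisoning attack constrains its perturbations: each training image receives an additive perturbation bounded in L-infinity norm by a budget eps, keeping it imperceptible. The uniform noise here is a placeholder; real attacks such as error-minimizing noise optimize the perturbation against a surrogate model. The helper name poison_images and its parameters are illustrative assumptions.

# Minimal illustrative sketch of availability poisoning (hypothetical helper,
# not APBench's API): add an L-infinity-bounded perturbation to each image.
import numpy as np

def poison_images(images, eps=8 / 255, rng=None):
    """Return images with an imperceptible, L-inf bounded perturbation.

    images: float array in [0, 1] with shape (N, H, W, C).
    eps:    L-infinity budget; 8/255 is a common choice in this literature.
    """
    rng = rng or np.random.default_rng(0)
    # Placeholder perturbation: uniform noise. A real attack (e.g.,
    # error-minimizing noise) would optimize delta against a surrogate model.
    delta = rng.uniform(-eps, eps, size=images.shape).astype(images.dtype)
    poisoned = np.clip(images + delta, 0.0, 1.0)  # stay in the valid pixel range
    return poisoned

# Usage: poison a toy batch of four 32x32 RGB images.
clean = np.random.default_rng(1).random((4, 32, 32, 3), dtype=np.float32)
poisoned = poison_images(clean)
assert np.abs(poisoned - clean).max() <= 8 / 255 + 1e-6

Note that clipping the poisoned image back to [0, 1] can only shrink its deviation from the clean pixel values, so the L-infinity constraint still holds after clipping. The actual attack and defense implementations are in the repository linked below.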
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version.
Code: https://github.com/lafeat/apbench
Assigned Action Editor: ~Antti_Koskela1
Submission Number: 2657