A Recipe for Disaster: Neural Architecture Search with Search Space Poisoning

Anonymous

30 Sept 2021 (modified: 05 May 2023) · NeurIPS 2021 Workshop MetaLearn Blind Submission
Keywords: Neural Architecture Search, Automated Machine Learning, Poisoning Attacks, Adversarial Deep Learning, Reinforcement Learning
TL;DR: We develop data-agnostic multiple-instance and one-shot poisoning techniques against Neural Architecture Search (NAS) algorithms to progressively inflate error and training time.
Abstract: We assess the robustness of a Neural Architecture Search (NAS) algorithm known as Efficient NAS (ENAS) against data-agnostic poisoning attacks on the original search space with carefully designed ineffective operations. By evaluating algorithm performance on the CIFAR-10 dataset, we empirically demonstrate how our novel search space poisoning (SSP) approach and multiple-instance poisoning attacks exploit design flaws in the ENAS controller, resulting in high prediction error rates for child networks. Furthermore, with just two detrimental operations, our one-shot poisoning approach inflates prediction error rates for child networks up to $90\%$ and $99\%$ on the CIFAR-10 and CIFAR-100 datasets respectively. Our results provide insights into the challenges to surmount in using NAS algorithms with parameter sharing for more adversarially robust architecture search.
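The core idea of search space poisoning can be illustrated with a minimal sketch. Note the operation names, the specific "ineffective" operations, and the uniform sampler standing in for the ENAS controller are all assumptions for illustration; the paper's exact poison operations and controller are not reproduced here.

```python
import random

# Hypothetical sketch of search space poisoning (SSP) against an
# ENAS-style operation space. The clean ops loosely mirror the
# standard ENAS CIFAR-10 search space; the poison ops (e.g. a
# dropout with p=1.0 that zeroes all activations) are illustrative
# stand-ins for "carefully designed ineffective operations".

ORIGINAL_SPACE = [
    "conv3x3", "conv5x5", "sep_conv3x3",
    "sep_conv5x5", "avg_pool3x3", "max_pool3x3",
]

POISON_OPS = ["dropout_p1.0", "gaussian_noise"]  # assumed ineffective ops

def poison_search_space(space, poison_ops, copies=1):
    """Multiple-instance poisoning: inject each detrimental operation
    `copies` times so a sampler over the list picks it more often."""
    return space + [op for op in poison_ops for _ in range(copies)]

def sample_architecture(space, n_layers, rng):
    """Stand-in for the ENAS controller: a uniform sampler over the
    (possibly poisoned) operation list, one op per layer."""
    return [rng.choice(space) for _ in range(n_layers)]

rng = random.Random(0)
poisoned = poison_search_space(ORIGINAL_SPACE, POISON_OPS, copies=6)
arch = sample_architecture(poisoned, n_layers=12, rng=rng)

# With 6 copies of each of 2 poison ops among 6 clean ops, a uniform
# sampler draws a poisoned op with probability 12/18 = 2/3 per layer.
frac_poisoned = sum(op in POISON_OPS for op in arch) / len(arch)
```

The one-shot variant described in the abstract corresponds to `copies=1` with only the two detrimental operations injected; the multiple-instance attack raises `copies` to progressively skew the controller's sampling distribution.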