Poisoning the Search Space in Neural Architecture Search

Published: 21 Jun 2021, Last Modified: 29 Sept 2024 · ICML 2021 Workshop AML Poster
Keywords: Poisoning Attacks, Neural Architecture Search, Adversarial Deep Learning, Automated Machine Learning, Reinforcement Learning
TL;DR: We evaluate the robustness of Efficient NAS (ENAS) against data-agnostic poisoning attacks on the original search space, inflating error rates for child networks and drastically slowing down the training process.
Abstract: Deep learning has proven to be a highly effective problem-solving tool for object detection and image segmentation across various domains such as healthcare and autonomous driving. At the heart of this performance lies neural architecture design, which relies heavily on the domain knowledge and prior experience of researchers. More recently, this process of finding optimal architectures, given an initial search space of possible operations, has been automated by Neural Architecture Search (NAS). In this paper, we evaluate the robustness of one such algorithm, Efficient NAS (ENAS), against data-agnostic poisoning attacks on the original search space with carefully designed ineffective operations. By evaluating algorithm performance on the CIFAR-10 dataset, we empirically demonstrate how our novel search space poisoning (SSP) approach and multiple-instance poisoning attacks exploit design flaws in the ENAS controller, inflating prediction error rates for child networks. Our results provide insight into the challenges that must be surmounted to make NAS more adversarially robust.
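
To make the idea of search space poisoning concrete, here is a minimal, hypothetical Python sketch of what injecting "ineffective" operations into an ENAS-style search space might look like. The operation names, the `IneffectiveOp` module, and the `poison_search_space` helper are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of search space poisoning (SSP) for an ENAS-style
# macro search space. Not the paper's code; shapes and names are assumed.
import torch
import torch.nn as nn

# A typical ENAS CIFAR-10 operation set (for reference).
ORIGINAL_OPS = [
    "conv_3x3", "conv_5x5",
    "sep_conv_3x3", "sep_conv_5x5",
    "avg_pool_3x3", "max_pool_3x3",
]

class IneffectiveOp(nn.Module):
    """A carefully designed 'ineffective' operation: it preserves the
    tensor shape so the controller can still sample it, but destroys
    most of the useful signal (here via near-total dropout)."""
    def __init__(self, p: float = 0.95):
        super().__init__()
        self.drop = nn.Dropout(p=p)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.drop(x)

def poison_search_space(ops, n_copies: int = 3):
    """Multiple-instance poisoning: dilute the search space with several
    copies of ineffective operations, reducing the chance that the
    controller samples a useful one at any decision point."""
    return ops + [f"ineffective_dropout_{i}" for i in range(n_copies)]

POISONED_OPS = poison_search_space(ORIGINAL_OPS)
```

Under these assumptions, the controller's sampling space grows with operations that keep child networks trainable in form but uninformative in practice, which is one plausible way inflated child-network error rates could arise.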
Community Implementations: [1 code implementation on CatalyzeX](https://www.catalyzex.com/paper/poisoning-the-search-space-in-neural/code)