Abstract: Robust neural architecture search (NAS) has emerged as a promising approach to automatically designing neural architectures that are robust to adversarial attacks. However, existing robust NAS methods suffer from performance degradation on real-world data with long-tailed distributions, owing to the limited number of samples in the tail classes. To push robust NAS towards more realistic scenarios, we present a novel robust NAS method called ALTNAS in this paper. Specifically, we propose a simple yet effective data augmentation method that augments the data in the tail classes and makes the dataset more balanced; it generates both natural data and adversarial examples during the search process. Moreover, we propose a neural architecture search method that searches for architectures with the aid of the augmented data. Because the tail classes are enriched with both natural data and adversarial examples, the derived architecture achieves promising performance on the adversarial long-tailed recognition task. We conduct extensive experiments on the CIFAR-10-LT, CIFAR-100-LT, and ImageNet-LT benchmarks against state-of-the-art methods. The experimental results show that ALTNAS is superior at designing well-performing neural architectures for adversarial long-tailed recognition. In addition, analyses and ablation studies are performed, demonstrating the validity and effectiveness of the designed components of ALTNAS.
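Since the abstract only outlines the approach, the following is a minimal, hypothetical sketch (not the authors' code) of the core idea it describes: enriching under-represented tail classes with adversarial examples during the search phase. The PGD hyperparameters, the `tail_quantile` threshold, and the helper names are all assumptions made for illustration.

```python
# Illustrative sketch only; hyperparameters and helpers are assumptions,
# not the ALTNAS implementation.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Generate adversarial examples with an L_inf PGD attack."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def augment_tail_classes(model, x, y, class_counts, tail_quantile=0.5):
    """Append adversarial copies of samples whose classes are under-represented,
    yielding a more balanced batch for the architecture search step."""
    threshold = class_counts.float().quantile(tail_quantile)
    tail_mask = class_counts[y] <= threshold  # samples drawn from tail classes
    if tail_mask.any():
        x_tail, y_tail = x[tail_mask], y[tail_mask]
        x_adv = pgd_attack(model, x_tail, y_tail)  # adversarial views of tail data
        x = torch.cat([x, x_adv], dim=0)
        y = torch.cat([y, y_tail], dim=0)
    return x, y
```

In this sketch, the augmented batch returned by `augment_tail_classes` would feed the supernet/architecture update of whichever differentiable NAS procedure is used; the abstract does not specify those details.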