Using Multiple Self-Supervised Tasks Improves Model Robustness

Published: 25 Mar 2022, Last Modified: 23 May 2023. ICLR 2022 PAIR^2Struct Poster.
Keywords: Adversarial Robustness, Computer Vision, Self-Supervised Learning, Multi-Task Learning
TL;DR: We use multi-task learning and self-supervised learning to reverse adversarial perturbations on CIFAR-10, improving classification accuracy over baseline robustly trained models and over state-of-the-art reversal methods.
Abstract: Deep networks achieve state-of-the-art performance on computer vision tasks, yet they fail under adversarial attacks that are imperceptible to humans. In this paper, we propose a novel defense that can dynamically adapt the input using the intrinsic structure from multiple self-supervised tasks. By simultaneously using many self-supervised tasks, our defense avoids over-fitting the adapted image to one specific self-supervised task and restores more intrinsic structure in the image compared to a single self-supervised task approach. Our approach further improves robustness and clean accuracy significantly compared to the state-of-the-art single task self-supervised defense. Our work is the first to connect multiple self-supervised tasks to robustness, and suggests that we can achieve better robustness with more intrinsic signal from visual data.
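The abstract describes adapting each input at test time so that it better satisfies several self-supervised objectives at once, which is intended to undo adversarial perturbations. The paper does not specify its tasks or architectures here, so the following is only a minimal sketch of that idea under assumed choices: a single hypothetical rotation-prediction head stands in for the self-supervised tasks, and the purification step minimizes the summed self-supervised losses over a bounded correction to the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class RotationHead(nn.Module):
    """Hypothetical stand-in SSL model: predicts which of 4 rotations
    (0/90/180/270 degrees) was applied to the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 4))

    def forward(self, x):
        return self.net(x)

def rotation_loss(head, x):
    # Rotate the batch by each of the four 90-degree rotations and
    # ask the head to recover the rotation label.
    losses = []
    for k in range(4):
        rotated = torch.rot90(x, k, dims=(2, 3))
        target = torch.full((x.size(0),), k, dtype=torch.long)
        losses.append(F.cross_entropy(head(rotated), target))
    return sum(losses) / 4

def purify(x_adv, tasks, steps=5, lr=0.1, eps=8 / 255):
    """Adapt the input by minimizing the sum of self-supervised losses
    from all tasks, keeping the correction inside an eps-ball so the
    image is not changed arbitrarily."""
    delta = torch.zeros_like(x_adv, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(loss_fn(head, x_adv + delta) for head, loss_fn in tasks)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back into the eps-ball
    return (x_adv + delta).detach()

# Usage: a placeholder "attacked" CIFAR-sized batch; more (head, loss)
# pairs in the list would realize the multi-task version.
x_adv = torch.rand(2, 3, 32, 32)
x_purified = purify(x_adv, [(RotationHead(), rotation_loss)])
```

Adding further `(head, loss_fn)` pairs to the `tasks` list is how the multi-task aspect would enter: the correction must then satisfy several self-supervised objectives simultaneously, which, per the abstract, prevents overfitting the adapted image to any single task.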