Is It Time to Redefine the Classification Task for Deep Learning Systems?

Published: 22 Jun 2021, Last Modified: 05 May 2023 · ICML 2021 Workshop AML Oral · Readers: Everyone
Keywords: adversarial robustness, deep learning system
TL;DR: We investigate, for the first time, the impact of the learning task on the robustness of deep learning systems.
Abstract: Many works have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial examples. A deep learning system comprises several elements: the learning task, the data set, the deep model, the loss, and the optimizer. Each element may contribute to the vulnerability of the system, and attributing that vulnerability solely to the deep model may impede progress against adversarial attacks. We therefore redefine the robustness of DNNs as the robustness of the deep learning system as a whole, and we find experimentally that this vulnerability is also rooted in the learning task itself. Concretely, this paper defines the interval-label classification task for deep classification systems, in which labels are predefined non-overlapping intervals rather than a fixed value (hard label) or a probability vector (soft label). The experimental results demonstrate that the interval-label classification task is more robust than the traditional classification task while retaining accuracy.
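To make the interval-label idea concrete, here is a minimal sketch of how such labels might be encoded and decoded. The interval placement, widths, and gap are illustrative assumptions, not the paper's exact construction: each class is assigned a non-overlapping sub-interval of [0, 1], and a scalar model output is mapped back to the class whose interval centre it is nearest.

```python
import numpy as np

def make_intervals(num_classes, gap=0.2):
    # Assign each class a non-overlapping sub-interval of [0, 1].
    # The width and gap here are illustrative choices, not the paper's.
    width = 1.0 / num_classes
    lows = np.arange(num_classes) * width + gap * width / 2
    highs = (np.arange(num_classes) + 1) * width - gap * width / 2
    return np.stack([lows, highs], axis=1)  # shape (num_classes, 2)

def predict(scalar_output, intervals):
    # Decode a scalar output as the class whose interval centre is closest.
    centres = intervals.mean(axis=1)
    return int(np.argmin(np.abs(centres - scalar_output)))

intervals = make_intervals(3)
# With 3 classes, class 1 occupies roughly (0.37, 0.63),
# so a model output of 0.5 decodes to class 1.
print(predict(0.5, intervals))  # 1
```

In contrast with a hard label, small perturbations of the model output within a class's interval leave the decoded class unchanged, which is one plausible intuition for the robustness the paper reports.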
