DIVINE: Diverse-Inconspicuous Feature Learning to Mitigate Abridge Learning

TMLR Paper 1770 Authors

01 Nov 2023 (modified: 30 Dec 2023) · Rejected by TMLR
Abstract: Deep learning algorithms aim to minimize overall error and exhibit impressive performance on test datasets across various domains. However, they often struggle with out-of-distribution data samples. We posit that deep models primarily capture the prominent features beneficial for the task while neglecting other subtle yet discriminative features, a phenomenon we refer to as \textit{Abridge Learning}. To address this issue and promote a more comprehensive learning process, we introduce a novel \textit{DIVerse and INconspicuous feature lEarning} (DIVINE) approach that counteracts Abridge Learning. DIVINE embodies a holistic learning methodology, utilizing data effectively by engaging with its diverse dominant features. Through experiments on ten datasets (MNIST, CIFAR10, CIFAR100, TinyImageNet, and their corrupted and perturbed counterparts CIFAR10-C, CIFAR10-P, CIFAR100-C, CIFAR100-P, TinyImageNet-C, and TinyImageNet-P), we demonstrate that DIVINE encourages the learning of a rich set of features, which in turn improves the model's robustness and generalization. On out-of-distribution (perturbed) datasets, DIVINE achieves mean Flip Rates (mFR) of 5.36\%, 3.10\%, and 21.85\% on CIFAR10-P, CIFAR100-P, and TinyImageNet-P, respectively, whereas Abridge Learning yields 6.53\%, 11.75\%, and 31.90\% mFR on the same datasets.
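The abstract and revision notes indicate that DIVINE suppresses dominant features (e.g., by setting the corresponding pixel values to 0) so that training is pushed toward subtler, less conspicuous cues. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea only; the function name `suppress_dominant_pixels`, the gradient-saliency criterion, and the `top_frac` parameter are illustrative assumptions and not the authors' implementation.

```python
# Conceptual sketch (not the paper's method): zero out the most salient input
# pixels so that subsequent training must rely on less conspicuous features.
import torch
import torch.nn.functional as F

def suppress_dominant_pixels(images, labels, model, top_frac=0.05):
    """Set the top_frac most gradient-salient pixels of each image to 0."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Saliency proxy: magnitude of the input gradient, averaged over channels.
    saliency = images.grad.detach().abs().mean(dim=1, keepdim=True)  # (N,1,H,W)
    n, _, h, w = saliency.shape
    k = max(1, int(top_frac * h * w))
    flat = saliency.view(n, -1)
    # Per-image threshold at the k-th largest saliency value.
    thresh = flat.topk(k, dim=1).values[:, -1].view(n, 1, 1, 1)
    mask = (saliency < thresh).float()   # keep only non-dominant pixels
    return images.detach() * mask        # dominant pixels are set to 0
```

Images with dominant regions suppressed in this way could then be mixed into the training stream so the model cannot rely solely on the most prominent features, which is the spirit of the diverse-feature learning the paper describes.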
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We would like to express our gratitude to the reviewers for their valuable feedback and suggestions to improve our paper. In response to their comments, we have made several revisions and would like to highlight the key changes: 1. Additional experiments on perturbed datasets and discussion of the results. 2. Discussion of how the number of feature maps is identified in the proposed DIVINE method. 3. Discussion on ColoredMNIST. 4. Discussion of the training time of the proposed algorithm. 5. Fixed minor typos and improved the mathematical writing. 6. Updated the definition of Abridge Learning. 7. Discussion of setting pixel values to 0 for feature suppression. 8. Added the limitations of the proposed method.
Assigned Action Editor: ~Charles_Xu1
Submission Number: 1770