DIVINE: Diverse-Inconspicuous Feature Learning to Mitigate Abridge Learning

TMLR Paper 4642 Authors

09 Apr 2025 (modified: 20 Jun 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Deep learning algorithms aim to minimize overall error and exhibit impressive performance on test datasets across various domains. However, they often struggle with out-of-distribution data samples. We posit that deep models primarily capture the prominent features beneficial for the task while neglecting other subtle yet discriminative features, a phenomenon we refer to as Abridge Learning. To address this issue and promote a more comprehensive learning process, we introduce a novel DIVerse and INconspicuous feature lEarning (DIVINE) approach that counteracts Abridge Learning. DIVINE embodies a holistic learning methodology, effectively utilizing the data by engaging with its diverse dominant features. Through experiments on ten datasets, including MNIST, CIFAR10, CIFAR100, TinyImageNet, and their corrupted and perturbed counterparts (CIFAR10-C, CIFAR10-P, CIFAR100-C, CIFAR100-P, TinyImageNet-C, and TinyImageNet-P), we demonstrate that DIVINE encourages the learning of a rich set of features, which in turn boosts the model's robustness and ability to generalize. On out-of-distribution perturbed datasets, DIVINE achieves mean Flip Rates (mFR, lower is better) of 5.36%, 3.10%, and 21.85% on CIFAR10-P, CIFAR100-P, and TinyImageNet-P, respectively, whereas Abridge Learning yields 6.53%, 11.75%, and 31.90% mFR on the same datasets. The proposed DIVINE algorithm achieves state-of-the-art (SOTA) results on the CIFAR100-P dataset compared to existing algorithms.
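For context, the mean Flip Rate reported above follows the perturbation benchmark of Hendrycks & Dietterich (2019): the probability that a model's prediction flips between consecutive frames of a perturbation sequence, normalized by a baseline model's flip probability (AlexNet in the original benchmark) and averaged over perturbation types. Below is a minimal sketch of that computation, assuming NumPy arrays of per-frame class predictions; the function names, array shapes, and example values are illustrative, not the paper's code:

```python
import numpy as np

def flip_probability(preds):
    # preds: (num_sequences, seq_len) integer class predictions along each
    # perturbation sequence; a "flip" is a prediction change between
    # consecutive frames.
    flips = preds[:, 1:] != preds[:, :-1]
    return flips.mean()

def mean_flip_rate(model_fp, baseline_fp):
    # Flip Rate for one perturbation type = model flip probability divided
    # by the baseline's; mFR averages these ratios over perturbation types.
    return float(np.mean([model_fp[p] / baseline_fp[p] for p in model_fp]))

# Hypothetical usage with precomputed flip probabilities for two
# perturbation types (values are made up for illustration).
model_fp = {"gaussian_noise": 0.02, "brightness": 0.01}
baseline_fp = {"gaussian_noise": 0.10, "brightness": 0.05}
print(f"mFR: {100 * mean_flip_rate(model_fp, baseline_fp):.2f}%")
```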
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=K7gICLoCEo&noteId=LuiAyKDNr6
Changes Since Last Submission: We thank the reviewers and the area editor for their valuable feedback, which has helped us improve the quality of our manuscript. Below is a summary of the revisions made in response to their comments:
- **Mathematical formalisation of Abridge Learning** added to Sections 1 & 3.
- **New theory + toy experiment** (Figure 3) showcasing gradient starvation and DIVINE's recovery.
- **Clarified notation** for $F_i$, $M_i$, $D_i$ in Sections 3.1 & 3.2.
- **Complexity-analysis table** (Table 7) detailing the asymptotic complexity and runtime of DIVINE.
- **Expanded identifiability discussion** with causal-representation papers.
- **Limitations & future work**: colour bias, Jacobian approximations, mask reuse, etc.
- **Public code link** included for reproducibility.
- **CelebA "Blond Hair" experiment** on $X$, $X_{s1}$, $X_{s2}$, $X_{s3}$; DIVINE outperforms the AL baseline.
- **Grad-CAM visuals** (Supplementary Figure 1) illustrate broader, more semantic attention under DIVINE.
Assigned Action Editor: ~Charles_Xu1
Submission Number: 4642