Encoding Hierarchical Information in Neural Networks helps in Subpopulation Shift

TMLR Paper236 Authors

05 Jul 2022 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Over the past decade, deep neural networks have proven to be adept at image classification tasks, often surpassing humans in terms of accuracy. However, standard neural networks often fail to capture hierarchical structures and dependencies among different classes in vision-related tasks. Humans, on the other hand, seem to learn categories conceptually, progressively refining their understanding from high-level concepts down to granular levels of categories. One issue arising from the inability of neural networks to encode such dependencies within their learned structure is that of subpopulation shift -- where models are queried with novel unseen classes drawn from a shifted population of the training set categories. Since the neural network treats each class as independent from all others, it struggles to categorize shifting populations that are dependent at higher levels of the hierarchy. In this work, we study the aforementioned problems through the lens of a novel conditional supervised training framework. We tackle subpopulation shift with a structured learning procedure that incorporates hierarchical information conditionally through labels. Furthermore, we introduce a notion of graphical distance to model the catastrophic effect of mispredictions. We show that learning in this structured hierarchical manner results in networks that are more robust against subpopulation shifts, with improvements of up to ~3% in accuracy and up to 11% in hierarchical distance over standard models on subpopulation shift benchmarks.
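The abstract's "graphical distance" between a prediction and the true label can be read as a tree metric over the class hierarchy. The sketch below is an illustrative assumption, not the paper's actual definition: it computes the shortest-path length between two labels in a toy label tree (the `hierarchy` dictionary and all class names are hypothetical).

```python
# Hypothetical sketch: one common way to define a "graphical distance"
# between labels is the shortest-path length between their nodes in the
# class hierarchy tree. The toy hierarchy below is illustrative only.
hierarchy = {  # child -> parent edges of a small label tree
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal",
}

def ancestors(label):
    """Return the path from a label up to the root, inclusive."""
    path = [label]
    while path[-1] in hierarchy:
        path.append(hierarchy[path[-1]])
    return path

def graph_distance(a, b):
    """Shortest-path length between two labels in the tree."""
    pa, pb = ancestors(a), ancestors(b)
    common = set(pa) & set(pb)
    # distance = hops from each label up to their lowest common ancestor
    return min(pa.index(c) for c in common) + min(pb.index(c) for c in common)

print(graph_distance("dog", "cat"))   # siblings under "mammal": 2
print(graph_distance("dog", "bird"))  # different subtrees: 3
```

Under such a metric, confusing a dog with a cat (distance 2) is less "catastrophic" than confusing a dog with a bird (distance 3), which matches the abstract's motivation for penalizing mispredictions by their hierarchical severity.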
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have updated the draft with the changes recommended by the reviewers. Changes from reviewer emQ8 are highlighted in purple. Changes from reviewer iNPr are highlighted in red. Changes from reviewer Z8z2 are highlighted in blue. The major changes include: 1. Adding relevant literature to the introduction, related works, and appendix. 2. Revising the claimed contributions to better reflect prior literature. 3. Repeating all experiments for 5 runs and updating the means in the main text, with standard deviations reported in the appendix. 4. Fixing typographical errors. Thanks once again to all the reviewers for their valuable feedback. It has helped us improve our paper considerably!
Assigned Action Editor: ~Simon_Kornblith1
Submission Number: 236