Rethinking the OoD Generalization for Deep Neural Network: A Frequency Domain Perspective

22 Sept 2023 (modified: 14 Aug 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: OoD Generalization, Explainability, Deep Neural Networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Out-of-distribution (OoD) generalization has long been a challenging problem that remains largely unsolved. Despite numerous attempts to generalize image classification models to OoD datasets, few novel proposals have systematically surpassed the classical Empirical Risk Minimization (ERM) methodology. In this work, we introduce frequency-based analysis into the study of OoD generalization for images. Based on the Shapley value, a theoretical measure from game theory, we quantify the influence of each frequency component on the model's performance, which allows us to explain the model's behavior statistically. We observe that although the model's erroneous outputs on OoD generalization tasks frequently stem from the low-frequency components of OoD images, the interference pattern varies strongly across classes. To exploit this observation, we propose Class-wise Frequency Augmentation (CFA), which amplifies favorable frequency components and suppresses unfavorable ones on a per-class basis. This approach substantially improves the performance of existing OoD generalization algorithms. Extensive experiments with five baseline OoD algorithms across seven OoD datasets demonstrate the effectiveness of CFA on OoD generalization. Notably, CFA achieves its most substantial improvement over the state-of-the-art methods on ColoredMNIST, increasing classification accuracy from 60.2\% to 73.0\%.
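The abstract describes reweighting individual frequency components of an image to amplify favorable bands and suppress unfavorable ones. The paper's exact CFA procedure (including how Shapley values select the per-class gains) is not reproduced here; the following is only a minimal sketch of the underlying frequency-reweighting operation, assuming a hypothetical per-class gain mask over the centered 2-D spectrum. The helper names `frequency_augment` and `radial_band_mask` are illustrative, not from the paper.

```python
import numpy as np

def frequency_augment(image, gain_mask):
    """Scale each spatial-frequency component of a (H, W) image by gain_mask.

    gain_mask has the same shape as the centered 2-D spectrum: entries > 1
    amplify a band, entries < 1 suppress it. (Hypothetical helper; the
    paper derives its class-wise gains from a Shapley-value analysis.)
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # centered frequency spectrum
    reweighted = spectrum * gain_mask                # per-frequency reweighting
    # Invert the shift and the FFT; discard tiny imaginary residue.
    return np.real(np.fft.ifft2(np.fft.ifftshift(reweighted)))

def radial_band_mask(shape, r_lo, r_hi, gain):
    """Gain mask applying `gain` inside a radial band [r_lo, r_hi) of the
    centered spectrum, and 1.0 (identity) elsewhere."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)    # distance from DC component
    mask = np.ones(shape)
    mask[(radius >= r_lo) & (radius < r_hi)] = gain
    return mask

# Toy example: suppress the lowest-frequency band of a random image,
# mimicking the inhibition of an unfavorable low-frequency component.
img = np.random.rand(32, 32)
mask = radial_band_mask(img.shape, 0, 4, 0.5)        # halve the lowest band
aug = frequency_augment(img, mask)
```

In a class-wise scheme, one such gain mask would be maintained per class and applied to training images of that class, with the gains chosen according to each band's measured influence on accuracy.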
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4747