A Note on Quantifying the Influence of Energy Regularization for Imbalanced Classification

22 Sept 2022 (modified: 13 Feb 2023) | ICLR 2023 Conference Withdrawn Submission | Readers: Everyone
Keywords: Influence Function, Energy based model, Imbalanced Dataset
Abstract: For classification problems where classifiers predict $\bar{p}(y|\mathbf{x})$, i.e. the probability of label $y$ given data $\mathbf{x}$, an energy value can be defined (e.g. the LogSumExp of the logits) and used to evaluate the model's estimate of $\bar{p}(\mathbf{x})$, which is widely used in generative learning. However, previous works overlook the relationship between the estimated $\bar{p}(\mathbf{x})$ and the testing accuracy of a classifier when $p(\mathbf{x})$ shifts from the training set to the testing set, \emph{e.g.} in imbalanced dataset learning. In this paper, we propose to evaluate the influence of the energy value regarding $\bar{p}(\mathbf{x})$ on the testing accuracy via the influence function, a standard tool in robust statistics. In particular, we empirically show that the energy value can influence the testing accuracy of a model trained on an imbalanced dataset. Based on our findings, we further propose a technique that regularizes the energy value on the training set to improve imbalanced data learning. We theoretically prove that regularizing the energy value adjusts the margin and re-weights the samples. Experimental results show the effectiveness of our method; in particular, fine-tuning with our method for only a few epochs effectively boosts testing accuracy on popular imbalanced classification benchmarks.
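
The abstract defines the energy value through the LogSumExp of the classifier logits and proposes regularizing it on the training set. Below is a minimal PyTorch sketch of that idea, assuming a simple squared-energy penalty with a hypothetical weight `lambda_energy` added to the cross-entropy loss; the paper's exact regularizer, weighting, and fine-tuning schedule are not specified here.

```python
import torch
import torch.nn.functional as F

def energy(logits: torch.Tensor) -> torch.Tensor:
    # Free energy of the classifier: E(x) = -LogSumExp over the logits,
    # so that p(x) is proportional to exp(-E(x)) in the energy-based view.
    return -torch.logsumexp(logits, dim=-1)

def energy_regularized_loss(logits: torch.Tensor,
                            targets: torch.Tensor,
                            lambda_energy: float = 0.1) -> torch.Tensor:
    # Standard classification loss on p(y|x).
    ce = F.cross_entropy(logits, targets)
    # Hypothetical regularizer: penalize the squared energy of training
    # samples; the paper's actual form of the regularization may differ.
    reg = energy(logits).pow(2).mean()
    return ce + lambda_energy * reg

# Usage sketch: fine-tune a pretrained classifier for a few epochs.
# model = ...  # e.g. a ResNet trained on an imbalanced dataset
# for x, y in train_loader:
#     loss = energy_regularized_loss(model(x), y)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```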
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning