Improving generalization by loss modification

01 Mar 2023 (modified: 29 May 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Bayesian Neural Networks, Generalization, Loss, Outlier suppression, Convergence
TL;DR: An outlier suppression loss derived by Bayesian averaging improves generalization and training convergence for neural networks.
Abstract: Which data points from the available data set should be used for training? Different subsets of the data will generally yield different solutions. We show that a simple loss modification allows finding a single solution that represents the properties of the data set rather than a particular selection of data points, thus improving generalization performance.
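The abstract does not specify the loss modification, but one way a "Bayesian averaging" aggregate can suppress outliers is a soft-min (log-mean-exp) over per-sample losses instead of the arithmetic mean; high-loss samples then contribute exponentially less. The function name, the temperature parameter `beta`, and this particular form are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def outlier_suppressed_loss(per_sample_losses, beta=1.0):
    """Hypothetical outlier-suppressing aggregate loss (not the paper's method).

    Replaces the arithmetic mean with a Bayesian-style soft-min average:
        L = -(1/beta) * log mean_i exp(-beta * l_i)
    Samples with large loss (outliers) are exponentially downweighted, so the
    aggregate tracks typical data set behavior rather than hard examples.
    """
    l = np.asarray(per_sample_losses, dtype=float)
    # log-mean-exp computed stably via the usual max-shift trick
    m = (-beta * l).max()
    return -(m + np.log(np.mean(np.exp(-beta * l - m)))) / beta

losses = [0.1, 0.2, 0.15, 5.0]          # one outlier sample
print(np.mean(losses))                   # plain mean is dominated by the outlier
print(outlier_suppressed_loss(losses))   # soft-min stays near the typical loss
```

As `beta -> 0` this recovers the ordinary mean, and as `beta -> inf` it approaches the minimum per-sample loss, so `beta` controls how aggressively outliers are suppressed.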