Neural Bounds on Bayes Error: Advancing Classification and Generative Models

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: f-Divergence, Bayes Error, Generative Adversarial Networks (GANs), Representation Learning, Neural Networks, Multiclass Classification
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: This paper presents a technique for approximating an upper bound on the Bayes error in classification tasks, covering both binary and multi-class problems. Using f-divergence bounds to measure the dissimilarity between class-conditional distributions, we establish an upper bound on the Bayes error. This bound serves both as a training criterion for a neural network and as a classification rule for test data: the network's output is compared against a fixed threshold, in both the binary and multi-class settings. Experiments substantiate the method's effectiveness in approximating the Bayes error; they focus on Gaussian distributions with different means but identical variance, comparing the estimated bound with the theoretical Bayes error. Finally, the paper explores potential applications of this approach in Generative Adversarial Networks (GANs), offering a promising avenue for future research.
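The abstract does not specify which f-divergence the bound is built from, so the following is a minimal sketch of the experimental comparison it describes, under the assumption that the bound is the classical Bhattacharyya (squared-Hellinger-related) upper bound on the equal-prior binary Bayes error, P_e <= (1/2) BC(p, q), evaluated in closed form for two Gaussians with different means and identical variance. The paper instead estimates such a bound with a trained neural network; the function names and parameter values here are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bhattacharyya_bound(mu1, mu2, sigma):
    """Upper bound on the equal-prior binary Bayes error via the
    Bhattacharyya coefficient: P_e <= 0.5 * BC(p, q).
    For two Gaussians with equal variance sigma^2,
    BC = exp(-(mu1 - mu2)^2 / (8 * sigma^2))."""
    bc = np.exp(-((mu1 - mu2) ** 2) / (8.0 * sigma ** 2))
    return 0.5 * bc

def exact_bayes_error(mu1, mu2, sigma):
    """Closed-form Bayes error for two equal-variance Gaussians with
    equal priors: Phi(-|mu1 - mu2| / (2 * sigma)), i.e. the error of
    thresholding at the midpoint of the two means."""
    return norm.cdf(-abs(mu1 - mu2) / (2.0 * sigma))

if __name__ == "__main__":
    # Hypothetical settings mirroring the experiment described in the
    # abstract: disparate means, identical variance.
    mu1, mu2, sigma = 0.0, 2.0, 1.0
    print(f"f-divergence upper bound: {bhattacharyya_bound(mu1, mu2, sigma):.4f}")
    print(f"theoretical Bayes error : {exact_bayes_error(mu1, mu2, sigma):.4f}")
```

With these values the bound evaluates to about 0.303 against a theoretical Bayes error of about 0.159, illustrating the gap one would expect a learned bound to tighten.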
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9020