Mix-MaxEnt: Improving Accuracy and Uncertainty Estimates of Deterministic Neural Networks

09 Oct 2021, 14:49 (modified: 01 Dec 2021, 23:35) · NeurIPS 2021 Workshop DistShift Poster
Keywords: regularizer, maximum entropy, data-shift, out-of-distribution detection
TL;DR: A maximum-entropy regularizer on interpolated samples that improves accuracy, calibration, robustness to data shift, and out-of-distribution detection.
Abstract: We propose an extremely simple approach to regularizing a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates. On top of the cross-entropy loss, our approach adds an entropy-maximization regularizer on the predictive distribution in the regions of the embedding space between the class clusters. This is achieved by synthetically generating between-cluster samples via the convex combination of two images from different classes and maximizing the entropy of the predictions on these samples. Such data-dependent regularization guides the maximum-likelihood estimation to prefer a solution that (1) maps out-of-distribution samples to high-entropy regions (creating an entropy barrier); and (2) is more robust to superficial input perturbations. We empirically demonstrate that Mix-MaxEnt consistently provides much improved classification accuracy, better-calibrated probabilities for in-distribution data, and reliable uncertainty estimates under domain shift and when exposed to out-of-distribution samples.
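The abstract describes two ingredients: convexly combining pairs of inputs from different classes, and maximizing predictive entropy on those mixed samples alongside the usual cross-entropy loss. A minimal NumPy sketch of that objective is below; the Beta(1,1) mixing distribution, the regularizer weight `lam`, and all function names are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mix_between_classes(x1, y1, x2, y2, rng):
    """Convexly combine paired images; keep only pairs whose labels differ.

    The Beta(1, 1) (i.e. uniform) mixing coefficient is an assumption here;
    the paper may use a different distribution.
    """
    lam = rng.beta(1.0, 1.0, size=(len(x1),) + (1,) * (x1.ndim - 1))
    mixed = lam * x1 + (1.0 - lam) * x2
    keep = y1 != y2  # between-cluster samples must come from different classes
    return mixed[keep]

def mix_maxent_loss(logits_clean, labels, logits_mixed, lam=1.0):
    """Cross-entropy on clean samples minus mean predictive entropy on
    mixed samples, so that minimizing the total loss maximizes entropy
    on the between-class interpolations. `lam` is a hypothetical weight."""
    p = softmax(logits_clean)
    n = len(labels)
    ce = -np.mean(np.log(p[np.arange(n), labels] + 1e-12))
    q = softmax(logits_mixed)
    entropy = -np.sum(q * np.log(q + 1e-12), axis=1).mean()
    return ce - lam * entropy
```

For example, a classifier that is confident and correct on clean samples but maximally uncertain (uniform) on the mixed samples achieves roughly the minimum of this loss, which is the behavior the regularizer encourages.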