On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks (NeurIPS Reproducibility Challenge 2019)

Published: 20 Feb 2020, Last Modified: 05 May 2023 · NeurIPS 2019 Reproducibility Challenge Blind Report
Abstract: Miscalibration of a model is defined as the mismatch between its predicted probability estimates and the true correctness likelihood. In this work we aim to replicate the results reported by [7] in their analysis of the effect of Mixup [5] on a network's calibration. Mixup is a simple yet effective data augmentation technique that generates convex combinations of pairs of training images and their corresponding labels, which serve as the inputs and targets for training a network. We replicate the results reported by the authors for CIFAR-100 [6], Fashion-MNIST [10], STL-10 [2], out-of-distribution data, and random noise data.
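
To illustrate the mixing step described in the abstract, here is a minimal sketch of batch-wise Mixup, assuming PyTorch tensors and one-hot (or soft) labels; the function name `mixup_batch` and the default `alpha=0.2` are illustrative choices, not the authors' exact implementation:

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Return a convex combination of a batch and a shuffled copy of itself.

    x: input images, shape (batch, ...)
    y: one-hot or soft labels, shape (batch, num_classes)
    alpha: concentration parameter of the Beta distribution (illustrative default)
    """
    # Sample the mixing coefficient lambda ~ Beta(alpha, alpha)
    lam = float(np.random.beta(alpha, alpha)) if alpha > 0 else 1.0
    # Pair each example with a randomly chosen partner from the same batch
    perm = torch.randperm(x.size(0))
    # Convex combination of both the inputs and the targets
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

In practice the mixed batch `(x_mix, y_mix)` replaces the original batch in the training loop, and the loss is computed against the soft mixed labels.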
Track: Replicability
NeurIPS Paper Id: https://openreview.net/forum?id=rJgxnSHg8r