On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks (NeurIPS Reproducibility Challenge 2019)

Dec 29, 2019 (edited Oct 13, 2020) · NeurIPS 2019 Reproducibility Challenge Blind Report
  • Abstract: Miscalibration of a model is the mismatch between its predicted probability estimates and the true likelihood of correctness. In this work we aim to replicate the results reported by [7] in their analysis of the effect of Mixup [5] on a network's calibration. Mixup is a simple yet effective data-augmentation technique that generates convex combinations of pairs of training images and their corresponding labels as the inputs and targets for training a network. We replicate the authors' results on CIFAR-100 [6], Fashion-MNIST [10], and STL-10 [2], as well as on out-of-distribution and random-noise data.
  • Track: Replicability
  • NeurIPS Paper Id: https://openreview.net/forum?id=rJgxnSHg8r
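The convex-combination step the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the mixing weight λ is drawn from a Beta(α, α) distribution as in Zhang et al. [5], and the batch shapes, the `alpha` default, and the function name are illustrative assumptions.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return a mixed batch: convex combinations of pairs of
    examples (x) and their one-hot labels (y).

    alpha controls the Beta(alpha, alpha) distribution that the
    mixing weight lambda is drawn from (0.2 is an illustrative
    default, not a value prescribed by the paper).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))     # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam

# Example: mix a toy batch of 4 "images" with one-hot labels.
x = np.arange(4, dtype=float).reshape(4, 1)
y = np.eye(4)
x_mix, y_mix, lam = mixup_batch(x, y)
```

Because each mixed label is a convex combination of two one-hot vectors, every row of `y_mix` still sums to 1, so the mixed targets remain valid probability distributions for cross-entropy training.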