How to Calibrate your Neural Network Classifier: Getting True Probabilities from a Classification Model
Abstract: Research in Machine Learning (ML) for classification tasks has been primarily guided by metrics derived from a confusion matrix (e.g., accuracy, precision, and recall). Several works have highlighted that this has led to training practices that produce over-confident models and invalidate the assumption that the model learns a probability distribution over the classification targets; this is referred to as miscalibration. Consequently, modern ML architectures struggle to perform in applications where a probabilistic forecaster is needed. Research efforts on calibration techniques have explored the possibility of recovering probability distributions from traditional architectures. This tutorial covers the key concepts required to understand the motivations behind calibration, and aims to provide participants with the tools they need to assess the calibration of ML models and to calibrate them when required.
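The abstract itself contains no code; as a minimal sketch of the kind of calibration assessment the tutorial describes, the snippet below estimates the Expected Calibration Error (ECE), a standard miscalibration measure that compares a model's confidence to its empirical accuracy inside confidence bins. The equal-width binning scheme, the function name, and the synthetic data standing in for real model outputs are illustrative assumptions, not part of the tutorial materials.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Equal-width-bin ECE: the average gap |accuracy - confidence| per bin,
    weighted by the fraction of samples that fall in each bin."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            bin_accuracy = (predictions[mask] == labels[mask]).mean()
            bin_confidence = confidences[mask].mean()
            ece += mask.mean() * abs(bin_accuracy - bin_confidence)
    return ece

# Illustrative usage with random data in place of a real model's softmax outputs.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=1000)   # fake predicted distributions
labels = rng.integers(0, 3, size=1000)         # fake ground-truth classes
confidences = probs.max(axis=1)
predictions = probs.argmax(axis=1)
print(f"ECE: {expected_calibration_error(confidences, predictions, labels):.3f}")
```

A well-calibrated model yields an ECE near zero (samples predicted with, say, 80% confidence are correct about 80% of the time); over-confident models show a large gap, which recalibration methods such as temperature scaling aim to reduce.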