Keywords: Calibration, Uncertainty Quantification, Distribution Shift
TL;DR: We propose Frequency-aware Gradient Rectification, a framework that mitigates calibration degradation under distribution shift without relying on target domain data.
Abstract: Deep neural networks often produce overconfident predictions, undermining their reliability in safety-critical visual applications. This miscalibration is further exacerbated under distribution shift at test time. Existing methods improve calibration via training-time regularization or post-hoc adjustment, but they often rely on access to (or simulation of) target domains, which limits their practicality. We propose Frequency-aware Gradient Rectification (FGR), a target-agnostic training framework for robust calibration. From a frequency perspective, FGR applies low-pass filtering to a subset of training images to diminish spurious high-frequency cues and bias learning toward domain-invariant structure. However, the associated information loss can degrade In-Distribution (ID) calibration. To resolve this trade-off, FGR treats ID calibration as a hard optimization constraint and rectifies parameter updates via geometric projection whenever they conflict with calibration. This projection-based update guarantees a first-order non-increase of the ID calibration objective without introducing additional weighting hyperparameters.
Experiments on CIFAR-10/100-C and WILDS show that FGR significantly improves calibration under diverse shifts while preserving ID performance, and it remains compatible with post-hoc temperature scaling.
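The abstract describes two ingredients: low-pass filtering of training images and a projection-based gradient rectification against the ID calibration objective. The following is a minimal sketch of how these two pieces could look, assuming a PyTorch setup; the cutoff radius, the use of clean-data NLL as a stand-in for the paper's unspecified ID calibration surrogate, and the function names (`low_pass_filter`, `rectified_update`) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two FGR ingredients named in the abstract (assumptions noted above).
import torch
import torch.nn.functional as F


def low_pass_filter(images: torch.Tensor, cutoff: float = 0.25) -> torch.Tensor:
    """Keep only low spatial frequencies of a batch of images (B, C, H, W).

    `cutoff` is a relative radius in the centered spectrum; its value here is arbitrary.
    """
    _, _, H, W = images.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, H, device=images.device),
        torch.linspace(-1, 1, W, device=images.device),
        indexing="ij",
    )
    # Radial mask: 1 inside the cutoff radius, 0 outside (removes high-frequency cues).
    mask = ((xx ** 2 + yy ** 2).sqrt() <= cutoff).to(images.dtype)
    filtered = torch.fft.ifft2(torch.fft.ifftshift(spectrum * mask, dim=(-2, -1)))
    return filtered.real


def rectified_update(model, opt, x_clean, x_filt, y):
    """One training step: project the filtered-data gradient so the update does not
    conflict with the ID calibration gradient (first-order non-increase)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the loss on low-pass-filtered inputs (robustness signal).
    g_filt = torch.autograd.grad(F.cross_entropy(model(x_filt), y), params)
    # Gradient of the ID calibration objective on clean inputs; NLL is a placeholder
    # for whatever calibration surrogate the paper actually uses.
    g_cal = torch.autograd.grad(F.cross_entropy(model(x_clean), y), params)

    flat_f = torch.cat([g.flatten() for g in g_filt])
    flat_c = torch.cat([g.flatten() for g in g_cal])
    dot = torch.dot(flat_f, flat_c)
    if dot < 0:  # conflict: drop the component that would increase the ID calibration loss
        flat_f = flat_f - dot / flat_c.pow(2).sum().clamp_min(1e-12) * flat_c

    # Write the rectified gradient back into the parameters and take an optimizer step.
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = flat_f[offset:offset + n].view_as(p)
        offset += n
    opt.step()
    opt.zero_grad()
```

After the projection, the inner product between the rectified gradient and the calibration gradient is non-negative, so a small descent step along it cannot increase the ID calibration objective to first order, which matches the guarantee stated in the abstract.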
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 6613