Keywords: Selective classification, Risk–coverage / AURC, Uncertainty estimation, Post‑hoc confidence estimation, Calibration, Metamodel / post‑hoc selector
TL;DR: MetaModelSelect is a low‑parameter post‑hoc metamodel that combines class embeddings, top‑3 p‑normalized logits, and a logit‑concentration feature to achieve state‑of‑the‑art risk–coverage (AURC) on ImageNet‑1k, Stanford Cars, and iNaturalist‑2019.
Abstract: Selective classification equips neural networks with a reject option, abstaining on low‑confidence inputs. Most post‑hoc selectors compress the logit vector into a single scalar (e.g., maximum softmax probability or energy), discarding structure in the remaining logits. We introduce MetaModelSelect, a lightweight two‑layer metamodel ($\approx 49$k parameters, <1 ms overhead) trained on a frozen backbone to predict per‑example correctness. The metamodel leverages (i) a learnable embedding of the predicted class, (ii) the top‑3 entries of the normalized logit vector $\tilde z = z/\|z\|_{p^*}$, and (iii) a logit‑concentration statistic $h(z) = \tfrac{1}{C}\sum_i \tilde z_i^{\,2}$. On ImageNet‑1k, Stanford Cars, and the long‑tailed iNaturalist‑2019, MetaModelSelect achieves state‑of‑the‑art risk–coverage with relative AURC reductions of 2.0–4.2% over tuned MSP, Energy, and MaxLogit‑$p$‑Norm baselines, without additional data or backbone retraining.
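The abstract's three metamodel inputs can be sketched directly from a logit vector. A minimal illustration, assuming $p^* = 2$ and a NumPy backbone output (the function name and the fixed $p$ are illustrative; the paper presumably tunes $p^*$):

```python
import numpy as np

def selector_features(z, p=2.0, k=3):
    """Illustrative sketch of MetaModelSelect's scalar inputs:
    the predicted-class index (for the learnable embedding),
    the top-k p-normalized logits, and the concentration h(z)."""
    z = np.asarray(z, dtype=float)
    z_tilde = z / np.linalg.norm(z, ord=p)   # normalized logits z / ||z||_p
    top_k = np.sort(z_tilde)[::-1][:k]       # top-3 entries of z_tilde
    h = np.mean(z_tilde ** 2)                # h(z) = (1/C) * sum_i z_tilde_i^2
    pred_class = int(np.argmax(z))           # index fed to the class embedding
    return pred_class, top_k, h
```

Note that with $p^* = 2$ the squared normalized logits sum to one, so $h(z) = 1/C$ exactly; the concentration feature is only informative for other choices of $p^*$.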
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 24050