Abstract: Uncertainty estimation (UE) remains a critical challenge in adapting pre-trained language models to classification tasks, particularly under parameter-efficient fine-tuning approaches such as adapters. We introduce AdUE, an efficient post-hoc UE method that enhances softmax-based uncertainty estimates. Our approach uses a differentiable approximation of the maximum function and applies additional L2-SP regularization, anchoring the fine-tuned head weights. Evaluations on five NLP classification datasets across four language models (RoBERTa, ELECTRA, LLaMA, Qwen) demonstrate that our method consistently outperforms established baselines such as Mahalanobis distance and MaxProb. Our approach is lightweight, requires no modifications to the base model weights, and provides reliable, better-calibrated uncertainty predictions.
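For intuition only, the two ingredients named in the abstract, a differentiable surrogate of the hard maximum over class probabilities and an L2-SP penalty that anchors the head to its starting weights, could be sketched roughly as below. This is a minimal PyTorch illustration under stated assumptions, not the paper's implementation; the names SmoothMaxHead, tau, l2_sp_penalty, and the choice of log-sum-exp as the smooth max are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmoothMaxHead(nn.Module):
    """Illustrative post-hoc confidence head (names are hypothetical).

    Replaces the hard max over class probabilities (MaxProb) with a
    differentiable log-sum-exp approximation, so the confidence score
    can be tuned, while an L2-SP term keeps the head near its
    fine-tuned starting point.
    """

    def __init__(self, classifier: nn.Linear, tau: float = 10.0):
        super().__init__()
        self.classifier = classifier  # fine-tuned classification head
        self.tau = tau                # temperature: larger -> closer to hard max
        # Frozen copy of the initial head weights, used as the L2-SP anchor.
        self.anchor = {
            name: p.detach().clone()
            for name, p in classifier.named_parameters()
        }

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(self.classifier(features), dim=-1)
        # Smooth approximation of max_k p_k:
        #   (1/tau) * log(sum_k exp(tau * p_k)) -> max_k p_k as tau -> inf
        return torch.logsumexp(self.tau * probs, dim=-1) / self.tau

    def l2_sp_penalty(self) -> torch.Tensor:
        # L2-SP: penalize deviation of head weights from their starting point.
        return sum(
            ((p - self.anchor[name]) ** 2).sum()
            for name, p in self.classifier.named_parameters()
        )
```

In such a setup the confidence score returned by the head would be trained with a task-appropriate loss plus a weighted l2_sp_penalty term, leaving the base model weights untouched; the weighting coefficient and training target are likewise assumptions, not details from the submission.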
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: calibration/uncertainty
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4412