Does Alignment Tuning Really Break LLMs’ Internal Confidence?

Published: 21 Sept 2024, Last Modified: 06 Oct 2024 · BlackboxNLP 2024 · CC BY 4.0
Track: Extended abstract
Keywords: Calibration of Large Language Models, Alignment
TL;DR: Our findings highlight the need for careful measurement of model confidences and calibration errors, and for future research into algorithms that help LLMs achieve both instruction-following and calibration without sacrificing either.
Abstract: Large Language Models (LLMs) have shown remarkable progress, but their real-world application requires reliable calibration. This study conducts a comprehensive analysis of calibration degradation across four dimensions: models, calibration metrics, tasks, and confidence extraction methods. Our initial analysis suggested that the relationship between alignment and calibration is not always a trade-off; under stricter evaluation conditions, however, we found that the alignment process consistently harms calibration. This highlights the need for (1) a careful approach when measuring model confidences and calibration errors and (2) future research into algorithms that help LLMs achieve both instruction-following and calibration without sacrificing either.
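A note on measurement, since the abstract does not spell out its metric or confidence-extraction choices: calibration error for LLMs is commonly quantified with Expected Calibration Error (ECE) over confidences extracted from, for example, answer-token probabilities or verbalized self-reports. The Python sketch below is an illustrative equal-width-bin ECE, not the paper's exact procedure.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # confidences: per-example model confidence in [0, 1]; how these are
    # extracted (token probabilities, verbalized scores, ...) is a choice
    # the abstract leaves open.
    # correct: 1 if the prediction was right, else 0.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.digitize(confidences, edges[1:-1])  # bin index in [0, n_bins - 1]
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            # weight each bin's |accuracy - mean confidence| by its share of examples
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Toy usage: a model that is overconfident on these four predictions
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 1, 0, 1]))

Because binning scheme, bin count, and confidence source all change the reported error, comparisons across alignment stages are only meaningful when these choices are held fixed, which is one reason the abstract calls for care in how calibration is measured.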
Submission Number: 51