TL;DR: An approach for restoring calibration in LLMs.
Abstract: One of the key technologies behind the success of Large Language Models (LLMs) is preference alignment. However, a notable side effect of preference alignment is poor calibration: while pre-trained models are typically well-calibrated, LLMs tend to become poorly calibrated after alignment with human preferences. In this paper, we investigate why preference alignment affects calibration and how to address this issue. For the first question, we observe that the preference collapse issue in alignment undesirably generalizes to the calibration scenario, causing LLMs to exhibit overconfidence and poor calibration. To address this, we demonstrate the importance of fine-tuning with domain-specific knowledge to alleviate the overconfidence issue. To further analyze whether this affects the model's performance, we categorize models into two regimes, calibratable and non-calibratable, defined by bounds on the Expected Calibration Error (ECE). In the calibratable regime, we propose a calibration-aware fine-tuning approach that achieves proper calibration without compromising the LLM's performance. However, as models are further fine-tuned for better performance, they enter the non-calibratable regime. For this case, we develop an EM-algorithm-based ECE regularization for the fine-tuning loss to maintain low calibration error. Extensive experiments validate the effectiveness of the proposed methods.
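The abstract defines the calibratable and non-calibratable regimes via bounds on the Expected Calibration Error (ECE). For readers unfamiliar with the metric, below is a minimal sketch of the standard binned ECE computation; the bin count, function name, and example values are illustrative assumptions, not the paper's implementation or its regularizer.

```python
import numpy as np

def expected_calibration_error(confidences, correctness, n_bins=10):
    """ECE = sum_b (|B_b| / N) * |acc(B_b) - conf(B_b)| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correctness = np.asarray(correctness, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Assign each prediction to one equal-width bin by its confidence.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correctness[in_bin].mean()        # empirical accuracy in the bin
        conf = confidences[in_bin].mean()       # average confidence in the bin
        ece += in_bin.mean() * abs(acc - conf)  # weight by bin frequency
    return ece

# Example: an overconfident model (high confidence, mediocre accuracy) yields a large ECE.
print(expected_calibration_error([0.95, 0.90, 0.99, 0.92], [1, 0, 1, 0]))
```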
Lay Summary: Large language models (LLMs), like ChatGPT, often report how confident they are in an answer, but after being trained to follow human preferences these models can become overconfident, even when they are wrong. This is a serious problem in real-world applications such as healthcare or law, where trusting a wrong answer could lead to harmful consequences.
Our research investigates why this happens and how to fix it. We discovered that during preference alignment—when a model is trained to generate human-preferred answers—it can lose its ability to judge uncertainty accurately. We then designed a new method called Calibration-Aware Fine-Tuning (CFT) to correct this issue without hurting the model’s overall performance.
Our experiments show that CFT dramatically improves calibration, making the model’s confidence better reflect reality, and even boosts accuracy in some cases. This means users can better trust what the model says—and how confident it is—especially in high-stakes scenarios.
By restoring this critical property, our work helps make aligned LLMs safer and more reliable.
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models; Calibration; Fine-Tuning
Submission Number: 14329