Abstract: We study 15 large language models (LLMs) fine-tuned for chat and find that their maximum softmax probabilities (MSPs) are consistently miscalibrated on multiple-choice Q&A. However, those MSPs might still encode useful uncertainty information. Specifically, we hypothesized that wrong answers would be associated with smaller MSPs compared to correct answers. Via rigorous statistical testing, we show that this hypothesis holds for models that perform well on the underlying Q&A task. We also find a strong directional correlation between Q&A accuracy and MSP correctness prediction, while finding no correlation between Q&A accuracy and calibration error. This suggests that within the current fine-tuning paradigm, we can expect correctness prediction, but not calibration, to improve as LLM capabilities progress. To demonstrate the utility of correctness prediction, we show that when models have the option to abstain, performance can be improved by selectively abstaining based on the MSP of the initial model response, using only a small amount of labeled data to choose the MSP threshold.
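To make the abstention procedure concrete, here is a minimal sketch, not the paper's exact implementation: the function name, scoring rule, and variable names are illustrative assumptions. Given a small labeled set of MSPs and correctness labels, it sweeps candidate thresholds and answers only when the MSP clears the best one.

```python
import numpy as np

def choose_msp_threshold(msps, correct, abstain_reward=0.0):
    """Pick the MSP threshold maximizing score on a small labeled set.

    msps: array of MSPs, one per model answer.
    correct: boolean array indicating whether each answer was correct.
    abstain_reward: points for abstaining (assumed: 1 for correct, 0 for wrong).
    """
    best_t, best_score = 0.0, -np.inf
    for t in np.unique(msps):
        # Answer when MSP >= t; otherwise abstain and collect abstain_reward.
        answered = msps >= t
        score = np.where(answered, correct.astype(float), abstain_reward).mean()
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Usage on held-out data: answer only when the MSP clears the tuned threshold.
# t = choose_msp_threshold(dev_msps, dev_correct)
# output = answer if msp >= t else "I don't know"
```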
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=z5ka18AYpd
Changes Since Last Submission: We sincerely thank all reviewers for their valuable and constructive feedback. We have performed a major revision of the manuscript in response. The main changes are:
1. Expanding the scope to include base models.
2. Changing how the MSP is computed to align with best practices. This change was also crucial for including base models since strict format-following is no longer required.
3. Expanding the related work section to more thoroughly and accurately credit prior work.
4. Adding several new analyses, including a simple post-hoc calibration analysis, alternative uncertainty metrics (margin and entropy; see the sketch after this list), and how the amount of training data affects our abstention results.
See our top-level comment for details on each of these changes.
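For reference, here is a minimal sketch of how MSP, margin, and entropy can be computed from the logits over the answer options. The restriction to answer-option logits and the function name are our illustrative assumptions, not necessarily the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def uncertainty_metrics(option_logits):
    """Compute MSP, margin, and entropy from logits over the answer options.

    option_logits: 1-D tensor with one logit per multiple-choice option.
    """
    probs = F.softmax(option_logits, dim=-1)
    top2 = torch.topk(probs, k=2).values
    msp = top2[0].item()                           # maximum softmax probability
    margin = (top2[0] - top2[1]).item()            # top-1 minus top-2 probability
    entropy = -(probs * probs.log()).sum().item()  # Shannon entropy (nats)
    return msp, margin, entropy
```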
Code: https://github.com/bplaut/llm-calibration-and-correctness-prediction
Assigned Action Editor: ~Kamalika_Chaudhuri1
Submission Number: 4729