Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs

ACL ARR 2024 June Submission 3411 Authors

16 Jun 2024 (modified: 07 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: In this work, we introduce the Learnable Response Scoring Function (LARS) for Uncertainty Estimation (UE) in generative Large Language Models (LLMs). Current scoring functions for probability-based UE, such as length-normalized scoring and semantic contribution-based weighting, are designed to address specific aspects of the problem but exhibit limitations, including an inability to handle biased probabilities and underperformance in low-resource languages such as Turkish. To address these issues, we propose LARS, a scoring function that leverages supervised data to capture complex dependencies between tokens and probabilities, thereby producing more reliable and better-calibrated response scores when computing the uncertainty of generations. Our extensive experiments across multiple datasets show that LARS substantially outperforms existing scoring functions across a variety of probability-based UE methods.
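To make the contrast in the abstract concrete, the sketch below compares a fixed length-normalized score with a small trainable scorer over token ids and probabilities, trained on supervised correctness labels. This is a rough, hypothetical illustration of the general idea, not the paper's actual LARS architecture; the `LearnableScorer` class, the GRU encoder, the hidden size, and the dummy data are all assumptions.

```python
import torch
import torch.nn as nn

def length_normalized_score(token_logprobs: torch.Tensor) -> torch.Tensor:
    """Fixed baseline: mean token log-probability over the response."""
    return token_logprobs.mean(dim=-1)

class LearnableScorer(nn.Module):
    """Hypothetical trainable scorer: consumes token ids and their
    generation probabilities and outputs one confidence logit per
    response. Illustrative only; not the paper's LARS architecture."""

    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # The GRU sees each token's embedding concatenated with its
        # probability, so the score can depend jointly on tokens and
        # probabilities rather than on probabilities alone.
        self.rnn = nn.GRU(hidden + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids: torch.Tensor,
                token_probs: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.embed(token_ids),
                       token_probs.unsqueeze(-1)], dim=-1)
        _, h = self.rnn(x)                    # final hidden state summarizes the response
        return self.head(h[-1]).squeeze(-1)   # one logit per response

# Supervised training step against correctness labels (dummy data, shapes only).
torch.manual_seed(0)
batch, seq_len, vocab = 8, 12, 1000
token_ids = torch.randint(vocab, (batch, seq_len))
token_probs = torch.rand(batch, seq_len)
labels = torch.randint(2, (batch,)).float()   # 1 = response judged correct

scorer = LearnableScorer(vocab)
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()(scorer(token_ids, token_probs), labels)
loss.backward()
opt.step()
print(f"toy training loss: {loss.item():.3f}")
```

A sequence model is used here simply because it can capture cross-token dependencies that a fixed mean of log-probabilities cannot; the paper's actual encoder and training setup may differ.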
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Uncertainty Estimation, Uncertainty Quantification, Large Language Models, Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English, Turkish
Submission Number: 3411