Abstract: Large Language Models (LLMs) have been widely applied to programming language analysis to enhance human productivity. Yet their reliability can be compromised by various code distribution shifts, leading to inconsistent outputs. While probabilistic methods are known to mitigate such impacts through uncertainty calibration and estimation, their efficacy in the language domain remains underexplored compared to their application in image-based tasks. In this work, we first introduce a large-scale benchmark dataset incorporating three realistic patterns of code distribution shifts at varying intensities. We then thoroughly investigate state-of-the-art probabilistic methods applied to LLMs on these shifted code snippets. We observe that these methods generally improve the uncertainty awareness of LLMs, yielding higher calibration quality and better uncertainty estimation (UE) precision. However, our study also reveals varied performance dynamics across different criteria (e.g., calibration error vs. misclassification detection) and a trade-off between efficacy and efficiency, highlighting the need for methodological selection tailored to specific contexts.
Paper Type: long
Research Area: Machine Learning for NLP
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English, Java
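
To make the evaluation criteria named in the abstract concrete, below is a minimal, hypothetical sketch of how calibration error and misclassification detection can be scored from model confidences. The function names, binning scheme, and synthetic data are illustrative assumptions only and do not reproduce the paper's actual evaluation code.

```python
# Hypothetical sketch: expected calibration error (ECE) and misclassification
# detection AUROC, two criteria of the kind referenced in the abstract.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: average |accuracy - confidence| gap, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

def misclassification_auroc(confidences, correct):
    """AUROC for separating correct from incorrect predictions by confidence."""
    pos = confidences[correct.astype(bool)]    # confidences on correct predictions
    neg = confidences[~correct.astype(bool)]   # confidences on errors
    # Rank-based AUROC (Mann-Whitney U), avoiding an external dependency.
    ranks = np.argsort(np.argsort(np.concatenate([neg, pos]))) + 1
    u = ranks[len(neg):].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

# Toy usage with synthetic predictions (placeholder for LLM outputs on shifted code).
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
corr = (rng.uniform(size=1000) < conf).astype(float)  # calibrated by construction
print(f"ECE:   {expected_calibration_error(conf, corr):.3f}")
print(f"AUROC: {misclassification_auroc(conf, corr):.3f}")
```

The rank-based AUROC keeps the sketch dependency-free; in practice sklearn.metrics.roc_auc_score could be used instead.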