Large language models are not probabilistic reasoners

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Advances in the general capabilities of large language models (LLMs) have opened the possibility of incorporating them into automated decision systems. For such systems to be trustworthy and explainable, the underlying models must represent probabilistic reasoning faithfully. Despite previous work suggesting that LLMs can perform complex reasoning and well-calibrated uncertainty quantification, we find that current versions of this class of model fail to provide consistent and coherent probability estimates. We then suggest directions that future research can take to alleviate this weakness.
Paper Type: short
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English
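
As an illustration of what the abstract means by "consistent and coherent" probability estimates, the sketch below shows two minimal checks one might run on elicited probabilities: that P(A) and P(not A) sum to roughly 1, and that paraphrases of the same question yield similar estimates. This is a hypothetical example written for this summary, not code or data from the paper; the function names, tolerance, and numbers are all assumptions.

```python
# Illustrative only: minimal coherence checks on probability estimates
# elicited from a model. All values below are hypothetical, not from
# the paper under review.

def complement_coherent(p_event: float, p_complement: float,
                        tol: float = 0.05) -> bool:
    """A coherent forecaster's P(A) and P(not A) should sum to ~1."""
    return abs((p_event + p_complement) - 1.0) <= tol

def paraphrase_consistent(estimates: list[float],
                          tol: float = 0.05) -> bool:
    """Estimates for paraphrases of one question should agree closely."""
    return max(estimates) - min(estimates) <= tol

if __name__ == "__main__":
    # Hypothetical elicited values for "Will it rain tomorrow?"
    p_rain, p_no_rain = 0.70, 0.40      # incoherent: sums to 1.10
    paraphrases = [0.70, 0.55, 0.65]    # inconsistent across wordings

    print("complement-coherent:", complement_coherent(p_rain, p_no_rain))
    print("paraphrase-consistent:", paraphrase_consistent(paraphrases))
```

A model whose estimates fail checks like these cannot be read as a calibrated probabilistic reasoner, which is the kind of weakness the abstract reports.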