Program-Aided Reasoners (Better) Know What They Know

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: Program-aided reasoners are not only better reasoners but also better calibrated than their text-based counterparts
Abstract: Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to "know what they know," which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program-Aided Language Models (PaL) and text-based chain-of-thought (CoT) prompting techniques over 5 datasets and 2 model families: OpenAI and LLaMA models. Our results indicate that PaL leads to improved calibration in 75% of the instances. Our analysis uncovers that prompting styles producing less diversity in generations also yield better-calibrated results, so we additionally experiment with inducing lower generation diversity via temperature scaling and find that, for certain temperatures, PaL is not only more accurate but also better calibrated than CoT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners know what they know better than their text-based counterparts.
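To make the calibration notion in the abstract concrete: a model is well calibrated when, among all answers it assigns confidence p, roughly a fraction p are correct, and a standard summary statistic for this is the expected calibration error (ECE). The sketch below is purely illustrative and not taken from the paper; the binning scheme, the confidences/correct arrays, and the idea of deriving confidence from self-consistency (the fraction of sampled generations agreeing with the majority answer) are all assumptions for the example.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # Illustrative ECE sketch (assumed setup, not the paper's code):
        # bin predictions by confidence, then average the gap between each
        # bin's mean confidence and its accuracy, weighted by bin size.
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(confidences[mask].mean() - correct[mask].mean())
                ece += mask.mean() * gap
        return ece

    # Hypothetical example: per-question confidence from self-consistency
    # (share of samples agreeing with the majority answer) and whether
    # that majority answer was correct.
    conf = [0.9, 0.6, 0.8, 1.0, 0.5]
    acc  = [1,   0,   1,   1,   1]
    print(expected_calibration_error(conf, acc))

Under this metric, lower ECE means better calibration, which is the sense in which the abstract says PaL is "more calibrated" than CoT; note that the "temperature scaling" mentioned there refers to the sampling temperature used during generation, not post-hoc logit scaling.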
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability
Languages Studied: English