Large Language Models Can Think and Act Probabilistically

Published: 14 Jun 2025, Last Modified: 19 Jul 2025 · ICML 2025 Workshop PRAL · CC BY 4.0
Keywords: Large Language Models
TL;DR: This research demonstrates that our prompting method can enable agents to reliably execute intended probabilistic behavior.
Track: Short Paper (up to 4 pages)
Abstract: This research demonstrates that our non-trivial prompting method, incorporating programmatic representations, can enable agents to reliably execute their intended probabilistic behavior. This capability is crucial for applications requiring strategic unpredictability (e.g., remaining unpredictable to adversaries) and efficient exploration. Our proposed prompting method, called Random String Manipulation (RSM), leverages the capability of Large Language Models (LLMs) to generate complex strings and arithmetically manipulate them to select an action from a set of actions according to a given probability distribution. Experiments on tasks requiring probabilistic responses show that RSM consistently outperforms baseline prompts across all tested LLMs, and in some cases achieves performance comparable to pseudo-random number generators, demonstrating its effectiveness in ensuring robust and unbiased probabilistic outputs.
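The core idea described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual prompt or pipeline: we assume the LLM has emitted a long, high-entropy string, and the "arithmetic manipulation" is approximated here by hashing that string to a value in [0, 1), which is then used for inverse-CDF sampling over the action set. The function name `rsm_select` and the specific hash-based reduction are illustrative choices, not from the paper.

```python
import hashlib

def rsm_select(random_string, actions, probs):
    """Map a model-generated string to an action sampled per `probs`.

    Hypothetical sketch of the RSM idea: reduce the string to a
    uniform-like value in [0, 1) via a cryptographic hash, then
    perform inverse-CDF sampling over `actions`.
    """
    digest = hashlib.sha256(random_string.encode()).hexdigest()
    # Interpret the hex digest as an integer and normalize to [0, 1).
    u = int(digest, 16) / float(16 ** len(digest))
    cumulative = 0.0
    for action, p in zip(actions, probs):
        cumulative += p
        if u < cumulative:
            return action
    return actions[-1]  # guard against floating-point rounding

# Example: the string stands in for an LLM-generated "complex string".
choice = rsm_select("kqZ93!vLx0pR", ["rock", "paper", "scissors"], [0.5, 0.3, 0.2])
print(choice)
```

Note the design point this sketch highlights: given the same generated string, the mapping is deterministic, so any unpredictability must come from the entropy of the string the LLM produces, while the hash-and-threshold step enforces the target distribution over actions.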
Format: We have read the camera-ready instructions, and our paper is formatted with the provided template.
De-Anonymization: This submission has been de-anonymized.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 6