Keywords: computational finance, stock prediction, large language models, economics
Abstract: Large Language Models (LLMs) have been trained on vast corpora of data, allowing them to learn internal representations of how humans would respond in different scenarios. This makes them well-suited to simulate the actions of market participants, model their collective impact on financial markets, and perform financial forecasting. However, several sources of error can reduce the effectiveness of LLM agent-based market simulations. First, individual market participants do not always make rational decisions, and such behavior may not be captured by the logical reasoning process of LLMs. Second, the numerical and financial literacy of LLMs is not fully reliable, owing to possible gaps in their numerical knowledge and to hallucinations in their outputs. To tackle these issues, we propose our Massively Multi-Agents Role Playing (MMARP) method, which aims to produce highly accurate market simulations through theory-driven prompt designs. To reduce the impact of noisy actions by individual irrational investors, we leverage the LLM-generated next-token weights to simulate repetitive prompting and obtain the aggregated market response. To minimize the effects of gaps in the LLM's numerical knowledge and of hallucinated outputs, we prompt the LLM with a range of price inputs for each trading day. Finally, to produce simulated forecasts of market prices, we apply these prompting strategies across two types of LLM-agent roles, buyers and sellers, and take the intersection price of their response curves. Experimental results show that MMARP outperforms other deep-learning methods and various financial LLMs on forecasting metrics.
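As a rough illustration of the aggregation step described in the abstract, the minimal Python sketch below conditions a probability query on a buyer or seller role, evaluates it over a grid of candidate prices for a trading day, and reads off the intersection of the two response curves as the simulated market price. The function `affirmative_prob`, its logistic stand-in for the LLM's next-token weights, and the price grid are hypothetical placeholders introduced here for illustration; they are not the paper's implementation.

```python
import numpy as np

# Hypothetical stand-in for reading the LLM's next-token weights: the probability
# of an affirmative answer when an agent with the given role is asked whether it
# would trade at `price`. A logistic curve is used purely so the sketch runs
# end to end; a real system would query a role-conditioned prompt instead.
def affirmative_prob(role: str, price: float, fair_value: float = 100.0) -> float:
    if role == "buyer":   # buyers are more willing to trade at lower prices
        return 1.0 / (1.0 + np.exp(0.2 * (price - fair_value)))
    else:                 # sellers are more willing to trade at higher prices
        return 1.0 / (1.0 + np.exp(-0.2 * (price - fair_value)))

def simulated_market_price(price_grid: np.ndarray) -> float:
    """Aggregate buyer and seller response curves over candidate prices and
    return the price where the two curves are closest (their intersection)."""
    buy_curve = np.array([affirmative_prob("buyer", p) for p in price_grid])
    sell_curve = np.array([affirmative_prob("seller", p) for p in price_grid])
    crossing = np.argmin(np.abs(buy_curve - sell_curve))
    return float(price_grid[crossing])

if __name__ == "__main__":
    grid = np.linspace(80.0, 120.0, 81)   # candidate prices for one trading day
    print(simulated_market_price(grid))   # ~100.0 with the dummy curves above
```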
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11504