InvestAlign: Align LLMs with Investor Decision-Making under Herd Behavior

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Alignment, Investment decision, Large language model, Supervised fine-tuning
TL;DR: We propose InvestAlign, a low-cost, high-quality method that constructs large-scale SFT training datasets from theoretical models, to solve complex optimal investment problems and align LLMs with human decision-making processes.
Abstract: Studying investor decision-making processes under herd behavior is of great significance in microeconomics and behavioral finance. Large Language Models (LLMs) can be leveraged to assist in solving specific classes of complex investment problems, but the investment decisions generated by existing LLMs often deviate from real-user data. One way to align LLMs with investor decision-making processes is Supervised Fine-Tuning (SFT), which requires a substantial amount of real-user data that is costly to collect and raises privacy and security concerns. To overcome this data scarcity, we propose **InvestAlign**, a low-cost, high-quality method that constructs large-scale SFT training datasets from the theoretical solution to a simpler optimal investment problem rather than the original complex one. We theoretically demonstrate that fine-tuning LLMs on these datasets yields faster parameter convergence than fine-tuning on real-user data. The fine-tuned LLMs, which we call **InvestAgent**s, align more closely with real-user data than pre-SFT LLMs on both the simple and the original complex problems examined in our study. This highlights **InvestAlign** as a promising approach to addressing complex optimal investment problems and aligning LLMs with investor decision-making processes in economics and finance.
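The abstract does not specify the paper's theoretical model or dataset format, so the following is a minimal, illustrative Python sketch of the general idea: generating a synthetic SFT dataset from a closed-form solution to a simple optimal investment problem. It uses the classic Merton fraction π* = (μ − r)/(γσ²) with a hypothetical linear herd-behavior adjustment; the parameter ranges, the herd weight `w`, and the `prompt`/`response` field names are all assumptions for illustration, not the paper's actual construction.

```python
import json
import random

def merton_fraction(mu, r, sigma, gamma):
    """Closed-form optimal risky-asset share for a CRRA investor
    (the classic Merton solution). This stands in for the paper's
    theoretical solution, whose exact form the abstract does not give."""
    return (mu - r) / (gamma * sigma ** 2)

def make_sft_example(rng):
    """Sample one synthetic investor scenario and render it as a
    prompt-response pair for supervised fine-tuning."""
    mu = rng.uniform(0.04, 0.12)     # expected risky-asset return
    r = rng.uniform(0.00, 0.03)      # risk-free rate
    sigma = rng.uniform(0.10, 0.35)  # risky-asset volatility
    gamma = rng.uniform(1.5, 6.0)    # relative risk aversion
    peers = rng.uniform(0.0, 1.0)    # peers' average risky share (herd signal)
    w = rng.uniform(0.0, 0.5)        # hypothetical weight on herd behavior
    # Illustrative herd-adjusted allocation: a convex combination of the
    # individually optimal share and the peer average, clipped to [0, 1].
    pi = (1 - w) * merton_fraction(mu, r, sigma, gamma) + w * peers
    pi = max(0.0, min(1.0, pi))
    prompt = (
        f"You are an investor with relative risk aversion {gamma:.1f}. "
        f"The risky asset has expected return {mu:.1%} and volatility {sigma:.1%}; "
        f"the risk-free rate is {r:.1%}. Your peers invest {peers:.1%} of their "
        f"wealth in the risky asset. What fraction of your wealth do you invest in it?"
    )
    response = f"I would invest {pi:.1%} of my wealth in the risky asset."
    return {"prompt": prompt, "response": response}

if __name__ == "__main__":
    rng = random.Random(0)  # fixed seed for a reproducible dataset
    with open("investalign_sft.jsonl", "w") as f:
        for _ in range(10_000):
            f.write(json.dumps(make_sft_example(rng)) + "\n")
```

A JSONL file of this shape can then be passed to any standard SFT pipeline (for example, the Hugging Face `trl` SFTTrainer) to produce an InvestAgent-style fine-tuned model; the key point, per the abstract, is that the supervision signal comes from a theoretical solution rather than from costly real-user data.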
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9752