FinThink: An LLM-based Multi-agent System for Financial Reasoning

ICLR 2026 Conference Submission16090 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Multi-Agent Systems, Large Language Models, Financial Reasoning, Algorithmic Trading, Adaptive Markets Hypothesis, Cognitive Architectures
TL;DR: We introduce FinThink, a multi-agent trading system grounded in the Adaptive Markets Hypothesis. It achieves superior risk-adjusted returns via a dynamic reasoning workflow, cross-asset reflective memory, and a protocol preventing shallow reasoning.
Abstract: Multi-Agent Systems (MASs) in finance are often constrained by static workflows and simplistic learning, limiting their adaptability in real-world markets where signals are frequently conflicting and complex. This market dynamism is well captured by the Adaptive Markets Hypothesis (AMH), which states that the core factors behind profitability shift as markets evolve over time. To bridge the gap between existing agent architectures and this market reality, we introduce FinThink, an AMH-grounded MAS designed for dynamic adaptation. FinThink's novelty lies in three core components: (1) A Context-aware Workflow for Reasoning (CWRM), which enables architectural adaptivity by dynamically adjusting reasoning depth based on signal complexity. (2) A Reasoning-Driven Hierarchical Memory (R-Mem), which facilitates evolutionary adaptivity by teaching the system to navigate signal conflicts in varying market regimes. (3) A Sentiment-To-Logic (STL) Prompt Protocol, which ensures reasoning stability by preventing the multi-agent process from degenerating into simplistic voting. In backtests on five major tech stocks (AAPL, GOOG, MSFT, TSLA, and AMZN), FinThink demonstrates significant improvements over contemporary LLM-based agents, achieving a median advantage of 9.2% higher Sharpe Ratio, 113.1% higher Calmar Ratio, and a 46.3% reduction in Maximum Drawdown.
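The abstract's first component, CWRM, adjusts reasoning depth according to how conflicting the incoming signals are. The paper does not specify this mechanism in detail, but the core routing idea can be sketched as follows; the `AgentSignal` type, the confidence-weighted conflict score, and the round counts are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AgentSignal:
    source: str        # e.g. "news", "technical", "fundamental" (hypothetical labels)
    direction: int     # +1 bullish, -1 bearish, 0 neutral
    confidence: float  # in [0, 1]

def signal_complexity(signals: List[AgentSignal]) -> float:
    """Score how conflicting a set of agent signals is.

    Returns 0.0 when confident signals all agree and approaches 1.0
    when equally confident signals cancel each other out.
    """
    directed = [s for s in signals if s.direction != 0]
    if not directed:
        return 0.0
    total = sum(s.confidence for s in directed)
    net = abs(sum(s.direction * s.confidence for s in directed))
    return 1.0 - net / total

def choose_reasoning_depth(signals: List[AgentSignal],
                           shallow_rounds: int = 1,
                           deep_rounds: int = 3,
                           threshold: float = 0.5) -> int:
    """Route to a deeper multi-agent deliberation only when signals conflict."""
    if signal_complexity(signals) > threshold:
        return deep_rounds
    return shallow_rounds
```

Under this sketch, unanimous bullish signals would be resolved in a single shallow pass, while a high-confidence bullish signal clashing with a high-confidence bearish one would trigger the deeper multi-round workflow.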
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 16090