FinDialogLens: A Novel Hybrid LLM Framework for Parsing Financial Dialogues and Unveiling Missed Trade Opportunities

18 Sept 2025 (modified: 03 Oct 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Natural Language Processing (NLP), Natural Language Understanding (NLU), Large Language Models (LLMs), Fine-tuning, Trading Analysis, Dynamic Routing, Conversation Management, Request for Quote (RFQ), Multi-party and Multi-line Conversations, Domain Adaptation, Financial Dialogue Analysis
TL;DR: This paper introduces FinDialogLens, a framework that leverages large language models to enhance the analysis of financial dialogues, achieving high accuracy in trade outcome extraction and optimizing resource allocation through dynamic routing.
Abstract: Unstructured communication channels, such as chatrooms and instant messaging platforms, are essential for financial professionals to share ideas, negotiate trades, discuss pricing, and understand client needs. These dynamic, multi-party conversations are rich in trading opportunities and market insights, but their volume and complexity make manual analysis impractical and error-prone. Extracting actionable trade outcomes from multi-line, multi-threaded dialogues among traders, sales teams, and clients remains a significant challenge, often resulting in missed opportunities and incomplete records. To address this gap, we present FinDialogLens, a novel framework that systematically analyzes financial dialogues and identifies missed trading opportunities. FinDialogLens integrates three specialized modules: (1) a Message-Level Module, (2) an RFQ-Level Module, and (3) a Trade Engine. The Message-Level Module leverages compact, domain-adapted, fine-tuned models to provide precise financial annotations, such as RFQ identification, trade outcome classification, and price extraction, for each message. This metadata is then used by the RFQ-Level Module to construct targeted Request for Quote (RFQ) queries, enabling the Trade Engine to accurately extract final prices and trade outcomes. Our results show that FinDialogLens, integrated with a GPT-4o Trade Engine, substantially improves extraction accuracy, achieving 0.921 for final price and 0.943 for trade outcome. In comparison, Chain-of-Thought (CoT) prompting with GPT-4o achieves only 0.609 and 0.664 in the zero-shot setting, and 0.644 and 0.708 in the few-shot setting. Furthermore, we demonstrate that fine-tuned open-source LLMs, such as Mistral-7B-Instruct, can match the performance of proprietary models like GPT-4o with a modest amount of in-domain data.
Finally, we introduce a dynamic routing module that optimizes the balance between performance and cost, reducing LLM API calls by 85% while maintaining high accuracy, making FinDialogLens suitable for deployment in budget-constrained environments.
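The three-stage pipeline with dynamic routing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the heuristic classifier, the `CONFIDENCE_THRESHOLD` value, and the engine function names are all hypothetical stand-ins for the fine-tuned Message-Level Module and the Trade Engine.

```python
from dataclasses import dataclass

# Hypothetical routing cutoff: high-confidence messages skip the LLM.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Message:
    text: str

def message_level_module(msg):
    """Stand-in for the compact fine-tuned model: tags each message
    with RFQ/price metadata and a confidence score (toy heuristics)."""
    is_rfq = "quote" in msg.text.lower()
    has_price = any(tok.replace(".", "").isdigit() for tok in msg.text.split())
    confidence = 0.95 if has_price else 0.6
    return {"is_rfq": is_rfq, "has_price": has_price, "confidence": confidence}

def trade_engine_llm(query):
    """Placeholder for an expensive LLM call (e.g. GPT-4o)."""
    return {"source": "llm", "text": query["text"]}

def trade_engine_rules(query):
    """Cheap deterministic path used when the router skips the LLM."""
    return {"source": "rules", "text": query["text"]}

def route(messages):
    """Dynamic routing: only low-confidence RFQ queries reach the LLM,
    which is how API-call volume is reduced while accuracy is preserved."""
    results = []
    for msg in messages:
        meta = message_level_module(msg)
        if not meta["is_rfq"]:
            continue  # non-RFQ chatter is filtered out entirely
        query = {"text": msg.text, **meta}
        if meta["confidence"] < CONFIDENCE_THRESHOLD:
            results.append(trade_engine_llm(query))
        else:
            results.append(trade_engine_rules(query))
    return results

msgs = [
    Message("Can I get a quote on the 10yr?"),  # ambiguous -> LLM
    Message("quote is 99.25 done"),             # clear price -> rules
    Message("lunch at noon?"),                  # not an RFQ -> dropped
]
out = route(msgs)
```

In this sketch only one of three messages triggers an LLM call; the reported 85% reduction would depend on the real confidence model and threshold choice.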
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 14543