Sparse Reasoning Chains: Generating Faithful and Coherent Explanations for LLMs in Financial Risk Assessment

Published: 21 Nov 2025, Last Modified: 14 Jan 2026
Venue: GenAI in Finance (Poster)
License: CC BY 4.0
Keywords: Sparse Reasoning Chains, Explainable AI, Faithfulness, Sparse Autoencoders (SAEs), Trustworthy AI, Risk Assessment
TL;DR: We introduce a framework that generates faithful and coherent explanations for financial LLMs by synthesizing narratives from their internal activations.
Abstract: The opacity of Large Language Models (LLMs) hinders their adoption in finance, as current explanation methods fail to be both faithful to the model's internal reasoning and coherent to humans. We introduce **Sparse Reasoning Chains (SRC)**, a framework that bridges this gap by generating auditable explanations for risk assessments. SRC uses Sparse Autoencoders (SAEs) to extract faithful concepts from a model's internal states and then leverages a generative LLM to synthesize them into coherent, evidence-grounded narratives. Evaluations on a large corpus of earnings calls show that SRC's explanations are demonstrably more faithful than self-explanations and more coherent than mechanistic interpretations. SRC enables the development of more transparent and trustworthy LLMs for high-stakes finance.
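The abstract describes a two-stage pipeline: SAE-based concept extraction from internal activations, followed by LLM-based narrative synthesis. Below is a minimal, illustrative sketch of that flow under assumed shapes and hypothetical names (`W_enc`, `concept_labels`, `build_narrative_prompt`, etc.); it is not the authors' implementation, only a plausible instance of the described idea.

```python
# Illustrative SRC-style pipeline (all parameters and names are hypothetical):
# (1) encode an internal activation with a sparse autoencoder,
# (2) keep the top-k active latent features as human-readable "concepts",
# (3) assemble a prompt asking a generative LLM for an evidence-grounded narrative.
import torch

D_MODEL, D_SAE, TOP_K = 768, 16384, 5

# Stand-ins for a trained SAE's encoder parameters (random here for illustration).
W_enc = torch.randn(D_SAE, D_MODEL) * 0.02
b_enc = torch.zeros(D_SAE)

# Hypothetical lookup from SAE feature index to a human-readable concept label.
concept_labels = {i: f"feature_{i}" for i in range(D_SAE)}

def extract_concepts(activation: torch.Tensor, k: int = TOP_K) -> list[str]:
    """Encode a model activation with the SAE and return its top-k active concepts."""
    latents = torch.relu(W_enc @ activation + b_enc)   # sparse feature activations
    top_vals, top_idx = torch.topk(latents, k)
    return [concept_labels[int(i)] for i, v in zip(top_idx, top_vals) if v > 0]

def build_narrative_prompt(concepts: list[str], transcript_excerpt: str) -> str:
    """Compose a prompt asking a generative LLM to synthesize the extracted
    concepts into a coherent, evidence-grounded risk explanation."""
    return (
        "The risk model's internal features indicate these concepts: "
        + ", ".join(concepts)
        + ".\nUsing only evidence from the excerpt below, write a short "
          "explanation of the assessed risk.\n\nExcerpt:\n" + transcript_excerpt
    )

# Example: one residual-stream activation from the financial LLM (random here).
activation = torch.randn(D_MODEL)
prompt = build_narrative_prompt(extract_concepts(activation), "...earnings call excerpt...")
print(prompt)  # this prompt would be sent to the generative LLM that writes the narrative
```

The design point the abstract emphasizes is the division of labor: faithfulness comes from grounding the explanation in SAE features actually active in the model's internal state, while coherence comes from letting a generative LLM phrase those features as a narrative constrained to the source evidence.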
Submission Number: 101