Position: Current XAI Methods Cannot Satisfy Financial AI Explainability Requirements

Published: 01 May 2026, Last Modified: 05 May 2026, ICML 2026 Position Paper, CC BY 4.0
Abstract: **This position paper argues that current explainable AI (XAI) methods cannot satisfy regulatory explainability requirements for LLM-based financial systems**, creating a fundamental incompatibility between technological capability and legal mandate that threatens both consumer protection and financial stability. We demonstrate through systematic analysis across six regulatory frameworks (EU AI Act, US FSOC/CFPB, UK FCA, BIS, MAS, HKMA) that post-hoc explanation techniques fail systematically when applied to large language models. SHAP exhibits $O(2^F)$ computational complexity at token-level granularity---rendering exact computation infeasible for transformer architectures. LIME produces explanations varying 36\% across repeated evaluations. Chain-of-thought prompting generates unfaithful rationalizations: models failed to acknowledge biasing features in 78\% of affected cases. With 72\% of UK financial firms deploying ML and \$2.1 trillion in US consumer credit annually requiring adverse action explanations, this gap creates systemic risk affecting millions of consumers who may receive inadequate explanations for consequential financial decisions. We analyze three high-stakes domains---credit, trading, advisory---with documented regulatory enforcement cases, examine six counterarguments including hybrid architectures and outcome-based regulation, and propose prioritized recommendations with quarterly timelines. The status quo constitutes regulatory compliance theater; we call for either fundamental advances in LLM interpretability or deployment constraints matching current capabilities.
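For context on the $O(2^F)$ claim, the exponential cost follows from the definition of the Shapley value itself. Writing $F$ for the full set of input features (at token-level granularity, the input tokens) and $v(S)$ for the model's output when only the features in coalition $S$ are present, the standard exact attribution for feature $i$ is

\[
\phi_i(v) \;=\; \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,\bigl(|F|-|S|-1\bigr)!}{|F|!}\,\bigl[\, v(S \cup \{i\}) - v(S) \,\bigr].
\]

The sum ranges over all $2^{|F|-1}$ coalitions of the remaining features, so exact evaluation requires on the order of $2^{|F|}$ model queries; for LLM inputs spanning hundreds or thousands of tokens, practitioners must therefore fall back on sampling-based approximations.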