Explaining Financial LLMs: An Attribution-Based Interpretability Study of Multilingual Table QA in Dutch and English
Submission Type: Short paper (4 pages)
Keywords: table question answering, financial NLP, explainable AI, LLMs
Abstract: The reliable deployment of Large Language Models (LLMs) in critical sectors, especially involving structured input like tabular data, necessitates mechanisms for transparency and accountability. This paper investigates the interpretability of domain-specific LLMs applied to Table Question Answering (TQA) tasks in the financial domain. We conduct a comparative attribution study between domain-adapted and general-purpose LLMs in both Dutch and English. The analysis employs parallel datasets sourced from ConvFinQA. Utilizing Input $\times$ Gradient attribution, we segment input tokens based on their semantic and structural roles, focusing particularly on tabular content, numeric values, and financial terminology. Domain adaptation led to more balanced and semantically coherent attributions in Dutch models, but this effect did not consistently extend to English, where attribution remained less aligned with financial content. Overall, attribution patterns were diffuse and offered limited predictive value regarding model correctness. This underscores the fundamental limitations of current interpretability techniques, particularly under long-context conditions. Accordingly, there is a pressing need for more causally grounded and scalable methodologies to ensure transparency and accountability in critical domains such as finance.
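The attribution method named in the abstract, Input $\times$ Gradient, scores each input token by the elementwise product of its embedding and the gradient of a chosen output with respect to that embedding. The minimal sketch below illustrates the idea for a Hugging Face causal LM; the model name ("gpt2"), the example prompt, and the attribution target (the top next-token logit) are illustrative placeholders, not the paper's actual models, data, or target definition.

```python
# Minimal sketch of Input x Gradient attribution for a causal LM.
# Assumptions: "gpt2", the prompt, and the attribution target are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies domain-adapted and general LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Revenue 2020: 120 | Revenue 2021: 150. What is the increase?"
enc = tokenizer(prompt, return_tensors="pt")

# Embed the tokens explicitly so gradients can be taken w.r.t. the embeddings.
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
embeds.requires_grad_(True)

out = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"])
# Attribution target: the most likely next-token logit at the last position.
target_logit = out.logits[0, -1].max()
target_logit.backward()

# Input x Gradient: elementwise product, summed over the embedding dimension,
# yields one attribution score per input token.
scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)

for tok, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), scores.tolist()):
    print(f"{tok:>15s}  {s:+.4f}")
```

Per-token scores of this form can then be aggregated over token groups (e.g., table cells, numeric values, financial terms) to compare how attribution mass is distributed across semantic and structural roles.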
Submission Number: 6