How well do LLMs reason over tabular data, really?

Published: 05 Jun 2025, Last Modified: 05 Jun 2025 | TRL@ACL2025 | CC BY 4.0
Keywords: Tabular Reasoning, Large Language Models (LLMs), Robustness, LLM-as-a-judge
TL;DR: Using an LLM-as-a-judge for evaluation, this paper shows significant shortcomings in large language models' tabular reasoning abilities and highlights their lack of robustness to real-world table perturbations.
Abstract: Large Language Models (LLMs) excel in natural language tasks, but less is known about their reasoning capabilities over tabular data. Prior analyses devise evaluation strategies that poorly reflect an LLM's realistic performance on tabular queries. Moreover, we have a limited understanding of the robustness of LLMs to realistic variations in tabular inputs. Therefore, we ask: Can general-purpose LLMs reason over tabular data, really?, and focus on two questions: 1) are the tabular reasoning capabilities of general-purpose LLMs robust to real-world characteristics of tabular inputs, and 2) how can we realistically evaluate an LLM's performance on analytical tabular queries? Building on a recent tabular reasoning benchmark, we first surface shortcomings of its multiple-choice prompt evaluation strategy, as well as of commonly used free-form text metrics such as SacreBLEU and BERTScore. We show that an LLM-as-a-judge procedure yields more reliable performance insights and unveils a significant deficit in the tabular reasoning performance of LLMs. We then extend the tabular inputs to reflect three characteristics common in practice: 1) missing values, 2) duplicate entities, and 3) structural variations. Experiments show that the tabular reasoning capabilities of general-purpose LLMs suffer under these variations, stressing the importance of improving their robustness to realistic tabular inputs.
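
Illustrative sketch (not from the paper): the abstract describes perturbing tabular inputs with missing values, duplicate entities, and structural variations. The Python/pandas functions below show one minimal, assumed way to produce such perturbations before serializing a table into a prompt; the function names, perturbation rates, and the transposed-layout choice are illustrative assumptions, not the authors' exact procedure.

    # Hypothetical sketch of the three table perturbations named in the abstract:
    # missing values, duplicate entities, and structural variations.
    # Rates and function names are illustrative assumptions, not the paper's setup.
    import numpy as np
    import pandas as pd

    def inject_missing_values(df: pd.DataFrame, frac: float = 0.1, seed: int = 0) -> pd.DataFrame:
        """Blank out a random fraction of cells to simulate missing values."""
        rng = np.random.default_rng(seed)
        mask = rng.random(df.shape) < frac
        return df.mask(mask)

    def duplicate_entities(df: pd.DataFrame, frac: float = 0.1, seed: int = 0) -> pd.DataFrame:
        """Append a random sample of existing rows to simulate duplicate entities."""
        dup = df.sample(frac=frac, random_state=seed)
        return pd.concat([df, dup], ignore_index=True)

    def structural_variation(df: pd.DataFrame) -> str:
        """Serialize the table in an alternative layout (here: transposed CSV)."""
        return df.T.to_csv()

    # Illustrative usage: perturb a table, then serialize it for the LLM prompt.
    # table = pd.read_csv("some_table.csv")
    # perturbed = duplicate_entities(inject_missing_values(table))
    # prompt_table = structural_variation(perturbed)
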
Include In Proceedings: Yes
Submission Number: 30