CoRef: A Collaborative Refinement Framework for Table Question Answering

ACL ARR 2025 February Submission2815 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Table Question Answering (TQA) enables users to query semi-structured tables using natural language. However, current methods struggle with two key challenges: (i) complex table layouts, which hinder accurate reasoning, and (ii) substantial noise, which disrupts table-processing code generation. We propose CoRef, a collaborative refinement framework. To tackle challenge (i), CoRef employs a Planner and multiple Table Curators working alongside a Decision Trace Tree, distributing the burden of decision-making and table curation across specialized agents while enabling backtracking when needed. For challenge (ii), CoRef integrates a Code-Refining Memory module that iteratively refines table-processing code by learning from compiler feedback. In extensive experiments on three public TQA datasets, CoRef outperforms state-of-the-art methods (74.2% on WikiTQ, 88.6% on TabFact, and 74.7% on HiTab), validating its effectiveness.
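To make the compiler-feedback idea behind the Code-Refining Memory concrete, the sketch below shows one way such a refinement loop could look. It is a minimal illustration based only on the abstract: the names refine_with_feedback and llm_generate, the prompt format, and the convention that generated code stores its result in `answer` are assumptions, not the authors' implementation.

    # Hypothetical sketch of an execution-feedback refinement loop in the spirit
    # of the Code-Refining Memory described in the abstract; all names and
    # conventions here are illustrative assumptions.
    import traceback
    from typing import Callable, Optional

    def refine_with_feedback(
        question: str,
        table: list[dict],
        llm_generate: Callable[[str], str],  # caller supplies the actual LLM backend
        max_rounds: int = 3,
    ) -> Optional[object]:
        memory: list[tuple[str, str]] = []   # (failed code, compiler/runtime feedback)
        base_prompt = (
            f"Write Python that reads the variable `table` and stores the answer "
            f"to {question!r} in a variable named `answer`."
        )
        for _ in range(max_rounds):
            # Fold earlier failures back into the prompt so the next attempt can avoid them.
            feedback = "".join(f"\nPrevious attempt:\n{c}\nError: {e}" for c, e in memory)
            code = llm_generate(base_prompt + feedback)
            try:
                scope = {"table": table}
                exec(code, scope)            # run the generated table-processing code
                return scope.get("answer")
            except Exception:
                memory.append((code, traceback.format_exc(limit=1)))
        return None  # every refinement round failed

In the framework described above, such a loop would sit alongside the Planner and Table Curators rather than run in isolation; the sketch only isolates the refine-from-feedback step.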
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Table QA, Agents, LLMs
Contribution Types: Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 2815