TABLE CALL: A New Paradigm for Table Question Answering

ACL ARR 2024 June Submission3034 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large language models (LLMs) have exhibited strong semantic understanding in interpreting and reasoning over tables for table question answering (TQA). However, they struggle with excessively long or complex input tables, especially those with disorganized or hierarchical structures. To address these issues, we propose a new paradigm for TQA, named TABLE CALL, which leverages the tool-using capabilities of LLMs. Specifically, TABLE CALL invokes a different tool, such as SQL, Python, or the LLM itself, depending on the type of table question, to simplify table understanding. Moreover, to enhance the table comprehension capabilities of the LLM, we propose a few-shot library updating technique in which a dynamically updated library supplies higher-quality QA pairs for LLM prompting. Experimental results on both open-domain and domain-specific datasets demonstrate that our approach achieves state-of-the-art performance, significantly outperforming previous methods.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: table QA, few-shot QA
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3034
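The abstract's core idea, routing each table question to the tool best suited to answer it, can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function names (`route_question`, `run_sql`), the keyword heuristic standing in for an LLM's tool-selection call, and the toy table are all assumptions for demonstration.

```python
import sqlite3

# Illustrative sketch of tool routing for table QA. In a TABLE CALL-style
# pipeline, the LLM would choose the tool and generate the query; here a
# keyword heuristic and a fixed query stand in for those model calls.

TABLE_SCHEMA = "CREATE TABLE t (city TEXT, population INTEGER)"
TABLE_ROWS = [("Oslo", 700_000), ("Bergen", 285_000), ("Trondheim", 210_000)]

def route_question(question: str) -> str:
    """Pick a tool for the question (hypothetical heuristic)."""
    q = question.lower()
    if any(k in q for k in ("total", "sum", "count", "average", "how many")):
        return "sql"       # structured aggregation -> SQL
    if any(k in q for k in ("difference", "ratio", "trend")):
        return "python"    # numeric manipulation -> Python code
    return "llm"           # open-ended semantics -> direct LLM reasoning

def run_sql(query: str):
    """Load the table into an in-memory SQLite DB and execute the query."""
    conn = sqlite3.connect(":memory:")
    conn.execute(TABLE_SCHEMA)
    conn.executemany("INSERT INTO t VALUES (?, ?)", TABLE_ROWS)
    return conn.execute(query).fetchall()

if __name__ == "__main__":
    question = "What is the total population across all cities?"
    tool = route_question(question)
    if tool == "sql":
        # In the full pipeline the LLM would generate this SQL from the question.
        answer = run_sql("SELECT SUM(population) FROM t")[0][0]
        print(answer)  # 1195000
```

The point of the routing step is that aggregation questions become cheap, exact SQL executions instead of error-prone free-form reasoning over a serialized table, while questions requiring world knowledge still fall through to the LLM.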