Abstract: Current methods for table reasoning with LLMs fall into two broad categories: text reasoning and code generation, which leverage natural language processing and programming paradigms, respectively. The former is constrained by table scale because of the context-length limit of LLMs; the latter incurs structural bias due to its lack of awareness of table content. This paper proposes A-STAR, a table reasoning architecture that enhances LLMs' table-content awareness at any scale. Since the records relevant to different questions are distributed differently across the original table, a decompose-recombine algorithm is introduced to obtain a refined table: the original table is decomposed into sub-tables, and the question-related records extracted from them are recombined. Based on the characteristics of these tables, an adaptive strategy selects different solvers to generate multiple candidate answers and assigns priorities to them. Finally, a semantic-based voting mechanism fuses these answers into the final response. Experiments show that A-STAR achieves state-of-the-art performance on both table-based fact verification and question answering tasks. Our code is available at https://anonymous.4open.science/r/A-STAR-D9DF/.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Language Modeling, Generation, NLP Applications, Question Answering
Contribution Types: Data analysis
Languages Studied: English
Submission Number: 928