Chain-of-Thought Reasoning in Tabular Language Models

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Question Answering
Submission Track 2: NLP Applications
Keywords: Tabular mathematical reasoning, Chain-of-thought reasoning, Tabular language models
TL;DR: We propose a novel framework that extends chain-of-thought reasoning to tabular language models for the first time.
Abstract: The tabular mathematical reasoning task requires models to perform multi-step operations, including information look-up and numerical calculation, based on heterogeneous data from tables and questions. Existing solutions tend to extend chain-of-thought (CoT) reasoning to powerful large language models (LLMs) to promote multi-hop mathematical reasoning. However, such LLM-based approaches are not viable in scenarios of privatized deployment or limited resources. To address this problem, we revisit small-scale tabular language models (TaLMs) and extend chain-of-thought reasoning to TaLMs for the first time. Specifically, we propose a novel framework, TaCo, which coordinates two TaLMs responsible for CoT generation and answer inference, respectively. In addition, our framework can be combined with an external calculator to enhance accurate numerical calculation. On the TABMWP dataset, TaCo outperforms the state-of-the-art ChatGPT by 9.55\% (82.60\%$\rightarrow$92.15\% in accuracy) with far fewer parameters (0.8B). The code will be released along with the paper.
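The two-stage pipeline described in the abstract (one TaLM generates the chain of thought, a second infers the answer, and arithmetic is delegated to an external calculator) could be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the stub functions `cot_talm` and `answer_talm` stand in for the actual trained TaLMs, and the `<<expr>>` calculator markup is an assumed convention.

```python
import re

def cot_talm(table: str, question: str) -> str:
    """Stub for the CoT-generation TaLM: emits reasoning steps with
    arithmetic left as bracketed expressions, e.g. <<3*4>>."""
    return ("Look up the price of pens: 3 dollars each. "
            "The customer buys 4 pens, so the total is <<3*4>> dollars.")

def calculator(cot: str) -> str:
    """Replace each <<expr>> with its evaluated value, so numerical
    results are computed exactly rather than generated token-by-token."""
    def evaluate(m: re.Match) -> str:
        # Restrict eval to plain arithmetic for safety in this sketch.
        return str(eval(m.group(1), {"__builtins__": {}}))
    return re.sub(r"<<([0-9+\-*/(). ]+)>>", evaluate, cot)

def answer_talm(table: str, question: str, cot: str) -> str:
    """Stub for the answer-inference TaLM: reads the calculated CoT
    and extracts the final answer (here, simply the last number)."""
    return re.findall(r"\d+(?:\.\d+)?", cot)[-1]

def taco_pipeline(table: str, question: str) -> str:
    cot = cot_talm(table, question)           # stage 1: generate CoT
    cot = calculator(cot)                     # external calculator pass
    return answer_talm(table, question, cot)  # stage 2: infer answer

print(taco_pipeline("Item | Price\npen | $3", "Cost of 4 pens?"))  # → 12
```

The key design point the sketch captures is the separation of concerns: the generator model only needs to produce a correct reasoning plan, while exact arithmetic is offloaded to a deterministic tool before the answer model reads the chain.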
Submission Number: 5364