Keywords: Tabular Understanding, Table QA, Prompting
TL;DR: "Textual Gradients" improves tabular understanding
Abstract: Table understanding is a complex task that requires not only grasping the semantics of free-form questions but also accurately reasoning over semi-structured tables. Recently, promising approaches have designed sophisticated prompts that leverage large language models (LLMs), combining Chain-of-Thought strategies with function calls and thereby achieving competitive results without any fine-tuning.
However, creating sufficiently effective prompts remains a challenge. Without fine-tuning, all necessary priors must be incorporated directly into the initial prompt, making prompt design even more critical.
Motivated by recent advances in the "textual gradient" space, we introduce TableTextGrad, a novel framework that enables automatic prompt optimization by "differentiating" prompting pipelines through textual gradients. Concretely, guided by LLM feedback, TableTextGrad iteratively refines each function within the Chain-of-Thought steps and function calls, yielding more accurate and reliable table reasoning. Experiments on table question-answering datasets demonstrate that our integrated approach achieves significant improvements, setting new state-of-the-art results on the WikiTableQA benchmark. TableTextGrad not only enhances the reasoning capabilities of LLMs on table reasoning tasks but, owing to its simplicity and effectiveness, also lays the groundwork for more robust and generalizable prompting pipelines.
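For intuition, below is a minimal sketch of one "textual gradient" step in this spirit. It is an illustrative assumption, not the paper's actual pipeline: the `llm` callable, the `textual_gradient_step` name, and the prompt templates are all hypothetical placeholders.

```python
from typing import Callable

# Hypothetical sketch of one textual-gradient step over a table-QA prompt.
# `llm` is any text-in/text-out completion call supplied by the caller;
# the prompt templates here are illustrative, not the paper's own.

def textual_gradient_step(
    llm: Callable[[str], str],
    system_prompt: str,
    table: str,
    question: str,
    gold_answer: str,
) -> str:
    """Return a revised system prompt after one feedback round."""
    # Forward pass: answer the question with the current prompt.
    answer = llm(f"{system_prompt}\n\nTable:\n{table}\nQuestion: {question}")

    # "Loss": a natural-language critique of the answer vs. the gold label.
    critique = llm(
        "Critique this table-QA answer.\n"
        f"Question: {question}\nPredicted: {answer}\nGold: {gold_answer}\n"
        "State concisely which reasoning step the prompt failed to elicit."
    )

    # "Backward" + "step": fold the critique back into the prompt itself.
    return llm(
        "Revise the system prompt below so the described failure is fixed. "
        "Return only the revised prompt.\n\n"
        f"Prompt:\n{system_prompt}\n\nFeedback:\n{critique}"
    )
```

In the framework described above, a step of this kind would be repeated over training examples, with each stage of the Chain-of-Thought and function-call pipeline maintaining its own refinable prompt.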
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13850