Keywords: large language models, in-context learning, table structure
Abstract: Large language models (LLMs) are increasingly applied to tabular tasks using
in-context learning. The prompt representation of a table may play a role in an
LLM's ability to process the table. Inspired by prior work, we generate a collection
of self-supervised structural tasks (e.g., navigate to a cell or row; transpose the
table) and evaluate performance differences across 8 formats. In contrast
to past work, we introduce 8 noise operations inspired by real-world messy data
and adversarial inputs, and show that such operations can impact LLM performance
across formats on different structural understanding tasks.
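To make the setup concrete, below is a minimal sketch, not from the paper, of the kind of pipeline the abstract describes: serializing a small table into two of the candidate formats and applying one noise operation before building a structural-task prompt. All function names, the table contents, and the choice of column shuffling as the noise operation are illustrative assumptions.

```python
# Illustrative sketch only: hypothetical serializers and one noise operation
# in the spirit of the abstract's setup. Nothing here is the paper's code.
import json
import random

table = {
    "header": ["city", "population"],
    "rows": [["Oslo", "709000"], ["Bergen", "286000"]],
}

def to_markdown(t):
    """Serialize the table as a GitHub-style markdown table (one candidate format)."""
    lines = ["| " + " | ".join(t["header"]) + " |",
             "| " + " | ".join("---" for _ in t["header"]) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in t["rows"]]
    return "\n".join(lines)

def to_json_records(t):
    """Serialize the table as a JSON list of records (another candidate format)."""
    return json.dumps([dict(zip(t["header"], row)) for row in t["rows"]])

def shuffle_columns(t, rng):
    """Noise operation: permute column order, a 'messy data' style perturbation."""
    order = list(range(len(t["header"])))
    rng.shuffle(order)
    return {
        "header": [t["header"][i] for i in order],
        "rows": [[row[i] for i in order] for row in t["rows"]],
    }

rng = random.Random(0)
noisy = shuffle_columns(table, rng)

# A cell-navigation structural task over the noisy markdown serialization.
prompt = (
    f"Table:\n{to_markdown(noisy)}\n\n"
    "What value is in the row for 'Oslo', column 'population'?"
)
print(prompt)
```

Evaluating would then amount to sweeping such prompts over each (format, noise operation, task) combination and scoring the model's answers against the known ground truth.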
Slides: pdf
Submission Number: 25