Exponential Scaling of Factual Inconsistency in Data-to-Text Generation with Fine-Tuned LLMs

Published: 08 Oct 2025, Last Modified: 08 Oct 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Data-to-text (D2T) generation is a core task in text generation that involves converting semi-structured data (e.g., tables, graphs) into text. Recent advances in large language models (LLMs) have led to significant improvements in D2T. Despite these gains, factual inconsistency remains a persistent issue in LLMs for D2T. Understanding how such inconsistencies scale with factors like model size, compute (FLOPs), and data size is crucial for building trustworthy systems. While prior scaling studies focus on generalization error via power law scaling, the impact of these factors on factual inconsistency in D2T remains unexplored. This paper addresses the gap by empirically investigating how factual inconsistency scales with various scaling factors. Unlike prior studies that focus solely on power law scaling, we also examine exponential scaling. To rigorously compare these models, we introduce VaCScal, a three-stage statistical validation framework: (1) predictive performance estimation, (2) goodness-of-fit assessment, and (3) comparative analysis. Experiments are conducted across six diverse LLM families and five D2T datasets. Factual inconsistency is inversely measured using four state-of-the-art consistency metrics, complemented by human evaluation. We employ QLoRA, Prefix-Tuning, and full fine-tuning to fine-tune the LLMs. Our analysis, validated through the VaCScal framework, consistently shows that factual inconsistency in D2T generation follows exponential scaling with respect to model (LLM) size, compute (FLOPs), and fine-tuning data size, challenging the prevailing assumption of power law scaling. To support this finding, a mathematical rationale is also provided, demonstrating why exponential scaling behavior is expected in factual inconsistency under typical D2T conditions.
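The core comparison the abstract describes, whether an inconsistency score decays as a power law or exponentially in a scaling factor, can be sketched with a simple log-space regression. Everything below is illustrative: the synthetic data, decay constants, and goodness-of-fit measure (RMSE of log-space residuals) are assumptions for the sketch, not the paper's VaCScal implementation.

```python
import numpy as np

# Synthetic "factual inconsistency" scores that truly decay
# exponentially with model size (in billions of parameters),
# with small multiplicative noise. Constants are illustrative.
rng = np.random.default_rng(0)
sizes = np.linspace(1.0, 10.0, 20)
scores = 0.8 * np.exp(-0.35 * sizes) * rng.lognormal(0.0, 0.02, sizes.shape)

log_y = np.log(scores)

# Power law  y = a * x^(-b)      ->  log y is linear in log x.
coef_pow = np.polyfit(np.log(sizes), log_y, 1)
pred_pow = np.polyval(coef_pow, np.log(sizes))

# Exponential  y = a * exp(-b*x) ->  log y is linear in x.
coef_exp = np.polyfit(sizes, log_y, 1)
pred_exp = np.polyval(coef_exp, sizes)

# Compare fits by RMSE of the log-space residuals: the better
# scaling law leaves residuals near the noise floor.
rmse_pow = np.sqrt(np.mean((log_y - pred_pow) ** 2))
rmse_exp = np.sqrt(np.mean((log_y - pred_exp) ** 2))
print(f"power-law RMSE: {rmse_pow:.4f}, exponential RMSE: {rmse_exp:.4f}")
```

On exponentially decaying data the exponential fit's residuals stay at the noise level while the power-law fit shows systematic curvature error; the paper's framework additionally validates such comparisons with held-out predictive performance and formal goodness-of-fit tests.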
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The following updates and corrections have been incorporated into our paper:
1. Additional details on the computational resources utilized in our experiments have been included.
2. An acknowledgment section specifying the funding sources has been added.
3. The plots have been refined to ensure uniformity in representation and improved visual clarity.
4. Errors in certain plots, caused by glitches during rendering, have been corrected.
Video: https://youtu.be/LzUBPhZrfTo
Code: https://github.com/joymahapatra/factual-inconsistency-scaling-d2t-llms
Supplementary Material: zip
Assigned Action Editor: ~Colin_Raffel1
Submission Number: 5121