Keywords: Natural Language to SQL (NL2SQL), Dataset Alignment, Structural Alignment, Text-to-SQL, Supervised Fine-Tuning (SFT)
TL;DR: This paper introduces KL-alignment and the Alignment Ratio to predict when supervised fine-tuning on a Text-to-SQL dataset will improve a model’s performance across different target workloads.
Abstract: Supervised Fine-Tuning (SFT) is an effective method for adapting Large Language Models (LLMs) to downstream tasks. However, variability in training data can hinder a model's ability to generalize across domains. This paper studies the problem of dataset alignment for Natural Language to SQL (NL2SQL or text-to-SQL), examining how well SFT training data matches the structural characteristics of target queries and how this alignment impacts model performance.
We hypothesize that alignment can be accurately estimated by comparing the distributions of structural SQL features across the training set, the target data, and the model’s predictions prior to SFT.
Through comprehensive experiments on three large cross-domain NL2SQL benchmarks and multiple model families, we show that structural alignment is a strong predictor of fine-tuning success. When alignment is high, SFT yields substantial gains in accuracy and SQL generation quality; when alignment is low, improvements are marginal or absent. These findings highlight the importance of alignment-aware data selection for effective fine-tuning and generalization in NL2SQL tasks.
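To make the idea concrete, here is a minimal sketch (not the paper's exact procedure) of comparing structural SQL feature distributions between two query sets with a smoothed KL divergence, in the spirit of the KL-alignment measure named in the TL;DR. The feature list, smoothing constant, and function names are illustrative assumptions, not definitions from the paper.

```python
from collections import Counter
import math

# Hypothetical structural features; the paper's exact feature set is not
# specified in this abstract -- these are illustrative stand-ins.
FEATURES = ["JOIN", "GROUP BY", "ORDER BY", "HAVING", "UNION", "EXISTS"]

def feature_distribution(queries):
    """Normalized frequency of structural SQL keywords over a query set."""
    counts = Counter()
    for q in queries:
        q_upper = q.upper()
        for f in FEATURES:
            if f in q_upper:
                counts[f] += 1
    total = sum(counts.values()) or 1  # guard against an empty feature count
    return {f: counts[f] / total for f in FEATURES}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) with additive smoothing; lower values mean closer alignment."""
    return sum(
        p[f] * math.log((p[f] + eps) / (q[f] + eps))
        for f in FEATURES
        if p[f] > 0
    )

# Alignment between SFT training SQL and target-workload SQL (toy inputs):
train_dist = feature_distribution(["SELECT * FROM t JOIN u ON t.id = u.id"])
target_dist = feature_distribution(["SELECT a FROM t GROUP BY a"])
print(kl_divergence(train_dist, target_dist))
```

Under this sketch, a low divergence between the training and target distributions would predict larger SFT gains, matching the abstract's claim; the paper's Alignment Ratio is presumably derived from such comparisons, but its exact formula is not given here.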
Primary Area: datasets and benchmarks
Submission Number: 13895