Keywords: Text-to-SQL, Reasoning Models, Large Language Models, Database
TL;DR: XYZ-Text2SQL-R1 is among the top models on the BIRD leaderboard and achieves the best average accuracy across six different benchmarks, outperforming or matching models such as DeepSeek-V3 and GPT-4o.
Abstract: Translating natural language into SQL (Text2SQL) is a longstanding challenge at the intersection of natural language understanding and structured data access. While large language models (LLMs) have significantly improved fluency in SQL generation, producing correct and executable SQL, particularly for complex queries, remains a bottleneck. We present XYZ-Text2SQL-R1, a reinforcement learning (RL) framework and model family designed to generate accurate, executable SQL using a lightweight reward signal based solely on execution correctness. Our approach avoids brittle intermediate supervision and complex reward shaping, promoting stable training and alignment with the end task. Combined with carefully curated data, strong supervised initialization, and effective training practices, XYZ-Text2SQL-R1 achieves state-of-the-art execution accuracy across six diverse Text2SQL benchmarks and ranks among the leading entries on the BIRD leaderboard. Notably, our 7B model outperforms prior 70B-class systems, highlighting the framework's scalability and efficiency. We further demonstrate inference-time robustness through simple extensions such as value retrieval and majority voting. Extensive experiments and ablation studies offer both positive and negative insights, providing practical guidance for future Text2SQL research.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 14562