ConstrainedSQL: Training LLMs for Text2SQL via Constrained Reinforcement Learning

Published: November 12, 2025 | arXiv ID: 2511.09693v1

By: Weiqin Chen, Nhan Huu Pham, Michael Robert Glass, and more

Potential Business Impact:

Improves how language models translate natural-language questions into SQL queries, making it easier to answer questions directly from data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning (RL) has shown significant promise in enhancing the reasoning capabilities of Text2SQL LLMs, especially with advanced algorithms such as GRPO and DAPO. However, the performance of these methods is highly sensitive to the design of the reward function. Inappropriate rewards can lead to reward hacking, where models exploit loopholes in the reward structure to achieve high scores without genuinely solving the task. This work proposes a constrained RL framework for Text2SQL that incorporates natural, interpretable reward and constraint signals, while dynamically balancing trade-offs among them during training. We establish theoretical guarantees for our constrained RL framework, and our numerical experiments on well-known Text2SQL datasets substantiate the improvement of our approach over state-of-the-art RL-trained LLMs.
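The abstract does not specify how the framework balances reward and constraint signals; a common way to solve such constrained RL problems is a Lagrangian relaxation, where a multiplier on each constraint is raised or lowered by dual ascent depending on whether the constraint is violated. The sketch below illustrates that generic technique under our own assumptions (function names, thresholds, and the "invalid-SQL fraction" cost are hypothetical, not from the paper):

```python
# Hedged sketch: Lagrangian-style constrained RL balancing, a standard
# technique for problems of this shape. NOT the paper's actual algorithm;
# all names and numbers here are illustrative assumptions.

def lagrangian_objective(reward: float, cost: float, lam: float, threshold: float) -> float:
    """Scalar objective: task reward minus a penalty on constraint violation.

    The penalty term lam * (cost - threshold) is positive only when the
    measured constraint cost exceeds its allowed threshold.
    """
    return reward - lam * (cost - threshold)

def update_multiplier(lam: float, cost: float, threshold: float, lr: float = 0.1) -> float:
    """Dual ascent step: grow lam while the constraint is violated,
    shrink it when slack is available, and clip at zero (lam >= 0)."""
    return max(0.0, lam + lr * (cost - threshold))

# Toy training loop: imagine `cost` is the fraction of rollouts producing
# invalid SQL (an assumed constraint signal). While cost sits above the
# threshold, lam rises, which shrinks the effective objective and pushes
# the policy update toward satisfying the constraint.
lam = 0.0
for cost in [0.8, 0.8, 0.5, 0.3]:
    obj = lagrangian_objective(reward=1.0, cost=cost, lam=lam, threshold=0.4)
    lam = update_multiplier(lam, cost, threshold=0.4)
```

In this formulation the trade-off between reward and constraint is not hand-tuned: the multiplier adapts during training, which is one plausible reading of "dynamically balancing trade-offs" in the abstract.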

Page Count
11 pages

Category
Computer Science: Machine Learning (CS)