ConstrainedSQL: Training LLMs for Text2SQL via Constrained Reinforcement Learning
By: Weiqin Chen, Nhan Huu Pham, Michael Robert Glass, and more
Potential Business Impact:
Teaches computers to answer questions about data more accurately.
Reinforcement learning (RL) has demonstrated significant promise in enhancing the reasoning capabilities of Text2SQL LLMs, especially with advanced algorithms such as GRPO and DAPO. However, the performance of these methods is highly sensitive to the design of the reward function: inappropriate rewards can lead to reward hacking, where models exploit loopholes in the reward structure to achieve high scores without genuinely solving the task. This work proposes a constrained RL framework for Text2SQL that incorporates natural, interpretable reward and constraint signals while dynamically balancing the trade-offs among them during training. We establish theoretical guarantees for our constrained RL framework, and numerical experiments on well-known Text2SQL datasets substantiate the improvement of our approach over state-of-the-art RL-trained LLMs.
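The abstract does not spell out the training update, but a standard way to dynamically balance a reward against constraint signals is a primal-dual (Lagrangian) scheme: the policy maximizes reward minus a multiplier-weighted constraint violation, while the multiplier is raised whenever the constraint is violated and decays toward zero otherwise. The sketch below is a toy, bandit-style illustration of that generic idea, not the paper's ConstrainedSQL algorithm; the reward and cost tables, the budget, and all hyperparameters are hypothetical.

```python
# Minimal primal-dual sketch of constrained policy optimization:
# maximize E[reward] subject to E[cost] <= budget.
# Everything here (the toy policy, reward/cost values, learning rates)
# is an illustrative assumption, not the paper's implementation.
import torch

torch.manual_seed(0)

logits = torch.zeros(4, requires_grad=True)  # toy policy over 4 actions
lam = torch.tensor(0.0)                      # Lagrange multiplier (dual variable)
budget = 0.2                                 # constraint threshold: E[cost] <= budget
lr_policy, lr_dual = 0.1, 0.05

# Hypothetical per-action reward and constraint cost (e.g., "SQL answer is
# correct" vs. "output violates a format/validity constraint").
reward = torch.tensor([1.0, 0.2, 0.5, 0.0])
cost = torch.tensor([0.9, 0.1, 0.4, 0.0])

for step in range(200):
    probs = torch.softmax(logits, dim=0)
    actions = torch.multinomial(probs, num_samples=32, replacement=True)
    logp = torch.log(probs[actions])

    # Lagrangian advantage: reward minus lambda-weighted constraint violation.
    adv = reward[actions] - lam * (cost[actions] - budget)
    loss = -(logp * adv.detach()).mean()  # REINFORCE on the Lagrangian

    loss.backward()
    with torch.no_grad():
        logits -= lr_policy * logits.grad  # primal step: improve the policy
        logits.grad.zero_()
        # Dual ascent: raise lambda while the constraint is violated,
        # and let it shrink (down to 0) once the budget is satisfied.
        violation = (cost[actions] - budget).mean()
        lam = torch.clamp(lam + lr_dual * violation, min=0.0)

print(f"final lambda = {lam.item():.3f}")
print(f"final policy = {torch.softmax(logits, 0).detach().numpy().round(3)}")
```

Clamping the multiplier at zero means the scheme reduces to ordinary reward maximization whenever the constraint budget is comfortably met, which is one way such methods avoid over-penalizing already-feasible policies.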
Similar Papers
Reinforcing Code Generation: Improving Text-to-SQL with Execution-Based Learning
Computation and Language
Teaches computers to write correct database queries.
Sparks of Tabular Reasoning via Text2SQL Reinforcement Learning
Computation and Language
Teaches computers to understand and use data tables.
Beyond Query-Level Comparison: Fine-Grained Reinforcement Learning for Text-to-SQL with Automated Interpretable Critiques
Computation and Language
Teaches computers to understand database questions better.