Continual Learning of Domain Knowledge from Human Feedback in Text-to-SQL
By: Thomas Cook, Kelly Patel, Sivapriya Vellaichamy, and more
Potential Business Impact:
Teaches computers to answer questions about data more accurately.
Large Language Models (LLMs) can generate SQL queries from natural language questions but struggle with database-specific schemas and tacit domain knowledge. We introduce a framework for continual learning from human feedback in text-to-SQL, where a learning agent receives natural language feedback to refine queries and distills the revealed knowledge for reuse on future tasks. This distilled knowledge is stored in a structured memory, enabling the agent to improve execution accuracy over time. We design and evaluate several variants of a learning agent architecture that differ in how they capture and retrieve past experiences. Experiments on the BIRD benchmark Dev set show that memory-augmented agents, particularly the Procedural Agent, achieve significant accuracy gains and error reduction by leveraging human-in-the-loop feedback. Our results highlight the importance of transforming tacit human expertise into reusable knowledge, paving the way for more adaptive, domain-aware text-to-SQL systems that continually learn from a human in the loop.
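To make the loop concrete, here is a minimal Python sketch of the cycle the abstract describes: generate a query, accept natural-language feedback, distill it into a reusable note, and store it in a structured memory that is retrieved on future questions. All names here (`MemoryStore`, `KnowledgeNote`, `distill`, `answer_with_feedback`) are hypothetical illustrations rather than the paper's implementation; the LLM calls are stubbed, and retrieval uses plain keyword overlap where the paper's agents may use richer methods.

```python
# Hypothetical sketch of a continual-learning text-to-SQL feedback loop.
# LLM calls are stubbed; names do not come from the paper.
from dataclasses import dataclass, field


@dataclass
class KnowledgeNote:
    """A distilled piece of domain knowledge revealed by human feedback."""
    question: str   # the question that triggered the feedback
    feedback: str   # the natural-language correction from the human
    lesson: str     # the reusable rule distilled from the feedback


@dataclass
class MemoryStore:
    """Structured memory of distilled notes, retrieved by keyword overlap."""
    notes: list[KnowledgeNote] = field(default_factory=list)

    def add(self, note: KnowledgeNote) -> None:
        self.notes.append(note)

    def retrieve(self, question: str, k: int = 3) -> list[KnowledgeNote]:
        # Score stored notes by token overlap with the new question.
        q_tokens = set(question.lower().split())
        scored = [(len(q_tokens & set(n.question.lower().split())), n)
                  for n in self.notes]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [n for score, n in scored[:k] if score > 0]


def generate_sql(question: str, schema: str, lessons: list[str]) -> str:
    """Stub for the LLM call: a real system would prompt a model with the
    schema, the question, and any retrieved lessons."""
    prompt = (f"Schema:\n{schema}\nLessons:\n" + "\n".join(lessons)
              + f"\nQuestion: {question}\nSQL:")
    return f"-- LLM output for prompt of {len(prompt)} chars"


def distill(question: str, feedback: str) -> KnowledgeNote:
    """Stub for distillation: a real agent would ask the LLM to rewrite the
    feedback as a reusable, schema-grounded rule."""
    return KnowledgeNote(question, feedback,
                         lesson=f"Rule derived from: {feedback}")


def answer_with_feedback(question: str, schema: str, memory: MemoryStore,
                         get_feedback) -> str:
    """One episode: retrieve past lessons, generate SQL, and if the human
    supplies feedback, distill the lesson for reuse and refine once."""
    lessons = [n.lesson for n in memory.retrieve(question)]
    sql = generate_sql(question, schema, lessons)
    feedback = get_feedback(question, sql)  # None means the query is accepted
    if feedback:
        memory.add(distill(question, feedback))
        sql = generate_sql(question, schema,
                           [n.lesson for n in memory.retrieve(question)])
    return sql


if __name__ == "__main__":
    memory = MemoryStore()
    sql = answer_with_feedback(
        "How many open tickets are there?",
        "tickets(id, status, opened_at)",
        memory,
        get_feedback=lambda q, s: "'status' values are lowercase here",
    )
    print(sql)
    print(f"memory now holds {len(memory.notes)} note(s)")  # -> 1
```

The design point this sketch illustrates is that feedback is not discarded after fixing one query: it is distilled into a note the agent can retrieve on later, related questions, which is what lets execution accuracy improve over time.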
Similar Papers
End-to-End Text-to-SQL with Dataset Selection: Leveraging LLMs for Adaptive Query Generation
Machine Learning (CS)
Finds the right database for your questions.