Score: 4

SpotIt: Evaluating Text-to-SQL Evaluation with Formal Verification

Published: October 30, 2025 | arXiv ID: 2510.26840v1

By: Rocky Klopfenstein, Yang He, Andrew Tremante, and more

BigTech Affiliations: Broadcom

Potential Business Impact:

Catches errors in AI-generated database queries that standard test-based checks miss.

Business Areas:
Semantic Search, Internet Services

Community-driven Text-to-SQL evaluation platforms play a pivotal role in tracking the state of the art of Text-to-SQL performance. The reliability of the evaluation process is critical for driving progress in the field. Current evaluation methods are largely test-based, which involves comparing the execution results of a generated SQL query and a human-labeled ground-truth on a static test database. Such an evaluation is optimistic, as two queries can coincidentally produce the same output on the test database while actually being different. In this work, we propose a new alternative evaluation pipeline, called SpotIt, where a formal bounded equivalence verification engine actively searches for a database that differentiates the generated and ground-truth SQL queries. We develop techniques to extend existing verifiers to support a richer SQL subset relevant to Text-to-SQL. A performance evaluation of ten Text-to-SQL methods on the high-profile BIRD dataset suggests that test-based methods can often overlook differences between the generated query and the ground-truth. Further analysis of the verification results reveals a more complex picture of the current Text-to-SQL evaluation.
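The abstract's core point, that two different queries can coincidentally agree on a static test database, can be illustrated with a small sketch. The example below uses a naive enumeration of tiny databases to stand in for SpotIt's formal bounded equivalence verifier (which searches the space symbolically rather than by brute force); the table schema and queries are illustrative assumptions, not taken from the paper.

```python
import sqlite3

# Hypothetical ground-truth and model-generated queries that differ only
# at the boundary value age == 18.
GROUND_TRUTH = "SELECT name FROM users WHERE age > 18"
GENERATED = "SELECT name FROM users WHERE age >= 18"

def run(query, rows):
    """Execute a query against a fresh in-memory database seeded with `rows`."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (name TEXT, age INTEGER)")
    con.executemany("INSERT INTO users VALUES (?, ?)", rows)
    result = sorted(con.execute(query).fetchall())
    con.close()
    return result

# Test-based evaluation: on this static database the two queries agree,
# so the generated query would (optimistically) be marked correct.
test_db = [("alice", 25), ("bob", 17)]
assert run(GROUND_TRUTH, test_db) == run(GENERATED, test_db)

def find_counterexample(max_age=30):
    """Bounded search for a database that differentiates the two queries.

    Enumerates single-row databases with small age values; a bounded
    verifier explores an analogous (but symbolic) space of databases.
    """
    for age in range(max_age + 1):
        db = [("x", age)]
        if run(GROUND_TRUTH, db) != run(GENERATED, db):
            return db
    return None

print(find_counterexample())  # a user aged exactly 18 separates the queries
```

A verifier that finds such a counterexample refutes equivalence; if none exists within the bound, the queries are equivalent up to that bound.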

Country of Origin
πŸ‡¨πŸ‡¦ πŸ‡ΊπŸ‡Έ Canada, United States

Repos / Data Links

Page Count
30 pages

Category
Computer Science:
Databases