Score: 2

Is it Bigger than a Breadbox: Efficient Cardinality Estimation for Real World Workloads

Published: October 3, 2025 | arXiv ID: 2510.03386v1

By: Zixuan Yi, Sami Abu-el-Haija, Yawen Wang and more

Potential Business Impact:

Speeds up database queries by predicting result sizes more accurately, so query planners choose better execution plans.

Business Areas:
Big Data, Data and Analytics

DB engines produce efficient query execution plans by relying on cost models. Practical implementations estimate the cardinality of queries using heuristics, with magic numbers tuned to improve average performance on benchmarks. Empirically, estimation error grows significantly with query complexity. Alternatively, learning-based estimators offer improved accuracy, but add operational complexity that prevents their adoption in practice. Recognizing that query workloads contain highly repetitive subquery patterns, we learn many simple regressors online, each localized to a pattern. The regressor corresponding to a pattern can be randomly accessed using a hash of the subquery's graph structure. Our method has negligible overhead and competes with SoTA learning-based approaches on error metrics. Further, amending PostgreSQL with our method achieves notable accuracy and runtime improvements over traditional methods and drastically reduces operational costs compared to other learned cardinality estimators, thereby offering the most practical and efficient solution on the Pareto frontier. Concretely, simulating the JOB-lite workload on IMDb speeds up execution by 7.5 minutes (>30%) while incurring only 37 seconds of overhead for online learning.
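The core idea described in the abstract, keying many tiny online regressors by a structural hash of each subquery, can be illustrated with a minimal sketch. The class name, feature vector, log-linear model form, and SGD update below are assumptions for illustration only, not the paper's exact formulation.

```python
import hashlib
import math


class PatternLocalEstimator:
    """Sketch: one tiny online regressor per subquery pattern,
    looked up by a hash of the subquery's graph structure.
    Names and model form are illustrative assumptions."""

    def __init__(self, fallback):
        # fallback: a traditional heuristic estimator, used for unseen patterns
        self.fallback = fallback
        # pattern hash -> (weights, bias) of a simple log-linear regressor
        self.models = {}

    @staticmethod
    def pattern_hash(subquery_graph):
        # Hash only the structure (tables, join edges, predicate columns),
        # not literal constants, so repeated patterns map to the same model.
        canonical = "|".join(sorted(subquery_graph))
        return hashlib.sha1(canonical.encode()).hexdigest()

    def estimate(self, subquery_graph, features):
        key = self.pattern_hash(subquery_graph)
        model = self.models.get(key)
        if model is None:
            # No pattern-local model yet: fall back to the heuristic estimate.
            return self.fallback(subquery_graph, features)
        w, b = model
        # Predict in log space, since cardinalities span many orders of magnitude.
        log_card = sum(wi * xi for wi, xi in zip(w, features)) + b
        return max(1.0, 2.0 ** log_card)

    def update(self, subquery_graph, features, true_card, lr=0.05):
        # Online learning step after the true cardinality is observed.
        key = self.pattern_hash(subquery_graph)
        w, b = self.models.get(
            key, ([0.0] * len(features), math.log2(max(1.0, true_card)))
        )
        pred = sum(wi * xi for wi, xi in zip(w, features)) + b
        err = pred - math.log2(max(1.0, true_card))
        # One SGD step on squared log-error.
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
        b = b - lr * err
        self.models[key] = (w, b)
```

In this sketch, the planner would call estimate() while building a plan and update() once a subquery's true result size is observed during execution, which is what keeps the per-pattern overhead small.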

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Databases