Relation-Stratified Sampling for Shapley Values Estimation in Relational Databases
By: Amirhossein Alizad, Mostafa Milani
Potential Business Impact:
Explains which database records contributed to each query answer.
Shapley-like values, including the Shapley and Banzhaf values, provide a principled way to quantify how individual tuples contribute to a query result. Their exact computation, however, is intractable because it requires aggregating marginal contributions over exponentially many permutations or subsets. While sampling-based estimators have been studied in cooperative game theory, their direct use for relational query answering remains underexplored and often ignores the structure of schemas and joins. We study tuple-level attribution for relational queries through sampling and introduce Relation-Stratified Sampling (RSS). Instead of stratifying coalitions only by size, RSS partitions the sample space by a relation-wise count vector that records how many tuples are drawn from each relation. This join-aware stratification concentrates samples on structurally valid and informative coalitions and avoids strata that cannot satisfy query conditions. We further develop an adaptive variant, ARSS, that reallocates budget across strata using variance estimates obtained during sampling, improving estimator efficiency without increasing the total number of samples. We analyze these estimators, describe a practical implementation that reuses compiled views to reduce per-sample query cost, and evaluate them on TPC-H workloads. Across diverse queries with multi-relation joins and aggregates, RSS and ARSS consistently outperform classical Monte Carlo sampling (MCS) and size-based Stratified Sampling (SS), yielding lower error and variance with fewer samples. An ablation shows that relation-aware stratification and adaptive allocation contribute complementary gains, making ARSS a simple and effective anytime estimator for database-centric Shapley attribution.
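To make the stratification concrete, the sketch below shows one way a relation-stratified Shapley estimator could be organized. It is a minimal illustration rather than the paper's implementation: the names shapley_rss, relations, value, target, and budget are hypothetical; the query utility v(S) is assumed to be supplied by a callback; and all count-vector strata are enumerated exhaustively, which is only practical for small toy inputs (part of RSS's appeal is that empty or join-infeasible strata can be skipped).

    # Minimal sketch (assumed names, not the paper's code) of relation-stratified
    # Monte Carlo estimation of one tuple's Shapley value for a query answer.
    # `relations`: dict mapping relation name -> list of endogenous tuples.
    # `value`: callback returning the query utility v(S) for a set of tuples.
    # `target`: the tuple whose contribution is being estimated.
    import itertools, math, random, statistics

    def shapley_rss(relations, value, target, budget=2000):
        # Keep the remaining tuples grouped by relation so that each stratum is
        # identified by a relation-wise count vector c = (c_1, ..., c_m).
        pools = {r: [t for t in ts if t != target] for r, ts in relations.items()}
        names = list(pools)
        sizes = [len(pools[r]) for r in names]
        n = sum(sizes) + 1  # total number of endogenous tuples, including target

        # Exact stratum weights: w_c = (1/n) * prod_j C(n_j, c_j) / C(n-1, sum(c)).
        # Summed over all count vectors, the weights add up to 1 (Vandermonde).
        strata = []
        for c in itertools.product(*(range(s + 1) for s in sizes)):
            k = sum(c)
            w = (math.prod(math.comb(s, cj) for s, cj in zip(sizes, c))
                 / (n * math.comb(n - 1, k)))
            strata.append((c, w))

        def marginal(c):
            # Sample a coalition with c_j tuples from relation j and return the
            # marginal contribution v(S ∪ {target}) - v(S) of the target tuple.
            S = [t for r, cj in zip(names, c) for t in random.sample(pools[r], cj)]
            return value(S + [target]) - value(S)

        # Proportional budget allocation; an adaptive variant would refine this
        # in rounds using per-stratum variance estimates (Neyman-style allocation).
        estimate = 0.0
        for c, w in strata:
            m = max(1, round(budget * w))
            estimate += w * statistics.fmean(marginal(c) for _ in range(m))
        return estimate

Under this framing, the adaptive variant described in the abstract would spend a pilot fraction of the budget per stratum, estimate each stratum's variance, and steer the remaining samples toward strata with the largest weight-times-standard-deviation product, keeping the total sample count fixed.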
Similar Papers
Efficient Data Valuation Approximation in Federated Learning: A Sampling-based Approach
Machine Learning (CS)
Fairly values data for smarter AI training.
Regression-adjusted Monte Carlo Estimators for Shapley Values and Probabilistic Values
Machine Learning (CS)
Makes AI explain its decisions more accurately.
Nonparametric Estimation of Joint Entropy through Partitioned Sample-Spacing Method
Statistics Theory
Measures how much information is shared.