Geometric Data Valuation via Leverage Scores
By: Rodrigo Mendoza-Smith
Potential Business Impact:
Finds the most important data for better AI.
Shapley data valuation provides a principled, axiomatic framework for assigning importance to individual datapoints, and has gained traction in dataset curation, pruning, and pricing. However, it is a combinatorial measure that requires evaluating marginal utility across all subsets of the data, making it computationally infeasible at scale. We propose a geometric alternative based on statistical leverage scores, which quantify each datapoint's structural influence in the representation space by measuring how much it extends the span of the dataset and contributes to the effective dimensionality of the training problem. We show that our scores satisfy the dummy, efficiency, and symmetry axioms of Shapley valuation and that extending them to \emph{ridge leverage scores} yields strictly positive marginal gains that connect naturally to classical A- and D-optimal design criteria. We further show that training on a leverage-sampled subset produces a model whose parameters and predictive risk are within $O(\varepsilon)$ of the full-data optimum, thereby providing a rigorous link between data valuation and downstream decision quality. Finally, we conduct an active learning experiment in which we empirically demonstrate that ridge-leverage sampling outperforms standard baselines without requiring access to gradients or backward passes.
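As a concrete illustration of the quantity the abstract builds on: the classical (ridge) leverage score of row $x_i$ of a data matrix $X$ is $x_i^\top (X^\top X + \lambda I)^{-1} x_i$, and with $\lambda = 0$ the scores sum to $\mathrm{rank}(X)$, matching the effective-dimensionality interpretation above. The sketch below is illustrative only (the function name and setup are not from the paper), assuming a plain NumPy implementation:

```python
import numpy as np

def ridge_leverage_scores(X: np.ndarray, lam: float = 0.0) -> np.ndarray:
    """Ridge leverage score of each row: x_i^T (X^T X + lam*I)^{-1} x_i.

    Illustrative sketch of the classical definition; not the paper's code.
    """
    n, d = X.shape
    G_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
    # Diagonal of X G_inv X^T, computed row-wise without forming the n x n matrix.
    return np.einsum("ij,jk,ik->i", X, G_inv, X)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
scores = ridge_leverage_scores(X)
# With lam = 0 and X full rank, the scores lie in [0, 1] and sum to d = 5.
```

Setting `lam > 0` shrinks every score strictly below 1 while keeping it strictly positive, which is the property the abstract exploits to obtain strictly positive marginal gains for ridge leverage scores.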
Similar Papers
A Ratio-Based Shapley Value for Collaborative Machine Learning - Extended Version
CS and Game Theory
Fairly shares credit when computers learn together.
Fast-DataShapley: Neural Modeling for Training Data Valuation
Machine Learning (CS)
Rewards data creators fairly and fast for AI.
Rethinking Data Value: Asymmetric Data Shapley for Structure-Aware Valuation in Data Markets and Machine Learning Pipelines
CS and Game Theory
Values data fairly for AI, even when order matters.