Data Reliability Scoring
By: Yiling Chen, Shi Feng, Paul Kattuman, and more
Potential Business Impact:
Measures data quality without knowing the real answers.
How can we assess the reliability of a dataset without access to ground truth? We introduce the problem of reliability scoring for datasets collected from potentially strategic sources. The true data are unobserved, but we see outcomes of an unknown statistical experiment that depends on them. To benchmark reliability, we define ground-truth-based orderings that capture how much reported data deviate from the truth. We then propose the Gram determinant score, which measures the volume spanned by vectors describing the empirical distribution of the observed data and experiment outcomes. We show that this score preserves several ground-truth-based reliability orderings and, uniquely up to scaling, yields the same reliability ranking of datasets regardless of the experiment; we term this property experiment agnosticism. Experiments on synthetic noise models, CIFAR-10 embeddings, and real employment data demonstrate that the Gram determinant score effectively captures data quality across diverse observation processes.
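As a rough illustration of the geometric idea behind the score (not the paper's exact construction, which is defined over empirical distribution vectors of the reported data and experiment outcomes), the Gram determinant of a set of vectors equals the squared volume of the parallelepiped they span, computed as det(V Vᵀ). The function name and the toy vectors below are illustrative assumptions:

```python
import numpy as np

def gram_determinant_score(vectors):
    """Squared volume of the parallelepiped spanned by the rows of
    `vectors`, computed as the determinant of the Gram matrix V V^T.
    Near-zero values indicate (near-)linearly dependent vectors."""
    V = np.asarray(vectors, dtype=float)
    G = V @ V.T  # Gram matrix: G[i, j] = <v_i, v_j>
    return float(np.linalg.det(G))

# Orthonormal vectors span a unit-volume box: score is 1.
print(gram_determinant_score([[1.0, 0.0], [0.0, 1.0]]))  # 1.0

# Collinear vectors span no volume: score collapses to 0.
print(gram_determinant_score([[1.0, 1.0], [2.0, 2.0]]))  # 0.0
```

In this geometric picture, degenerate (low-information) configurations of distribution vectors yield a score near zero, while well-spread configurations yield larger volumes.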
Similar Papers
Dimension Agnostic Testing of Survey Data Credibility through the Lens of Regression
Machine Learning (CS)
Checks if survey data truly reflects people.
Geometric Data Valuation via Leverage Scores
Machine Learning (CS)
Finds the most important data for better AI.
Geometric Calibration and Neutral Zones for Uncertainty-Aware Multi-Class Classification
Machine Learning (Stat)
Makes AI know when it's unsure.