Benchmarking Synthetic Tabular Data: A Multi-Dimensional Evaluation Framework
By: Andrey Sidorenko, Michael Platzer, Mario Scriminaci, and others
Potential Business Impact:
Checks whether synthetic data is both useful and privacy-safe.
Evaluating the quality of synthetic data remains a key challenge for ensuring privacy and utility in data-driven research. In this work, we present an evaluation framework that quantifies how well synthetic data replicates the original distributional properties while preserving privacy. The proposed approach employs a holdout-based benchmarking strategy that enables quantitative assessment through low- and high-dimensional distribution comparisons, embedding-based similarity measures, and nearest-neighbor distance metrics. The framework supports various data types and structures, including sequential and contextual information, and enables interpretable quality diagnostics through a set of standardized metrics. These contributions aim to support reproducibility and methodological consistency in the benchmarking of synthetic data generation techniques. The code of the framework is available at https://github.com/mostly-ai/mostlyai-qa.
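The holdout-based privacy check built on nearest-neighbor distances can be sketched in a few lines. The idea: if synthetic rows are systematically closer to training records than to equally plausible holdout records, the generator may have memorized training data. The function names below (`nn_distances`, `dcr_share`) are hypothetical for illustration, assuming numeric feature matrices; they are not the mostlyai-qa API.

```python
import numpy as np

def nn_distances(queries, reference):
    # For each query row, the Euclidean distance to its nearest
    # neighbor in the reference set.
    d = np.linalg.norm(queries[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1)

def dcr_share(synthetic, train, holdout):
    # Fraction of synthetic rows whose closest record lies in the
    # training set rather than the holdout set. Values near 0.5
    # suggest no memorization: training and holdout records are
    # equally "close" to the synthetic data. Values near 1.0 are a
    # red flag that synthetic rows copy training records.
    d_train = nn_distances(synthetic, train)
    d_holdout = nn_distances(synthetic, holdout)
    return float(np.mean(d_train < d_holdout))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 5))
    holdout = rng.normal(size=(200, 5))
    synthetic = rng.normal(size=(100, 5))  # independent draws, no memorization
    print(dcr_share(synthetic, train, holdout))
```

On independent samples from the same distribution the share hovers around 0.5, while feeding exact copies of training rows drives it to 1.0, which is the signal the metric is designed to catch.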
Similar Papers
Benchmarking Differentially Private Tabular Data Synthesis
Cryptography and Security
Helps choose best fake data for privacy.
A Consensus Privacy Metrics Framework for Synthetic Data
Cryptography and Security
Protects private information when sharing computer-made data.
Assessing Generative Models for Structured Data
Machine Learning (CS)
Makes fake data that looks like real data.