Beyond Real Data: Synthetic Data through the Lens of Regularization
By: Amitis Shidani, Tyler Farghly, Yang Sun, and more
Potential Business Impact:
Finds the best mix of fake and real data.
Synthetic data can improve generalization when real data is scarce, but excessive reliance may introduce distributional mismatches that degrade performance. In this paper, we present a learning-theoretic framework to quantify the trade-off between synthetic and real data. Our approach leverages algorithmic stability to derive generalization error bounds, characterizing the optimal synthetic-to-real data ratio that minimizes expected test error as a function of the Wasserstein distance between the real and synthetic distributions. We motivate our framework in the setting of kernel ridge regression with mixed data, offering a detailed analysis that may be of independent interest. Our theory predicts the existence of an optimal ratio, leading to a U-shaped behavior of test error with respect to the proportion of synthetic data. Empirically, we validate this prediction on CIFAR-10 and a clinical brain MRI dataset. Our theory extends to the important scenario of domain adaptation, showing that carefully blending synthetic target data with limited source data can mitigate domain shift and enhance generalization. We conclude with practical guidance for applying our results to both in-domain and out-of-domain scenarios.
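The abstract does not include the paper's code, so below is a minimal, self-contained sketch of the kind of experiment it describes: kernel ridge regression trained on a mixture of real and synthetic samples, with the synthetic fraction swept to probe the predicted U-shaped test error. The toy 1-D regression problem, noise levels, kernel bandwidth, and the "biased generator" used to mimic a distributional mismatch are all assumptions for illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Ground-truth regression function for the toy problem."""
    return np.sin(3.0 * x)

def rbf_kernel(A, B, gamma=2.0):
    """RBF kernel for 1-D inputs stored as column vectors of shape (n, 1)."""
    return np.exp(-gamma * (A - B.T) ** 2)

def krr_fit_predict(X_tr, y_tr, X_te, lam=1e-2):
    """Kernel ridge regression: solve (K + lam * I) alpha = y, then predict."""
    K = rbf_kernel(X_tr, X_tr)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
    return rbf_kernel(X_te, X_tr) @ alpha

n_total, n_test = 200, 500
X_te = rng.uniform(-1.0, 1.0, size=(n_test, 1))
y_te = target(X_te[:, 0])

for syn_frac in [0.0, 0.25, 0.5, 0.75, 1.0]:
    n_syn = int(syn_frac * n_total)
    n_real = n_total - n_syn

    # Real data: correct distribution but noisy labels (scarce, high variance).
    X_real = rng.uniform(-1.0, 1.0, size=(n_real, 1))
    y_real = target(X_real[:, 0]) + 0.3 * rng.standard_normal(n_real)

    # Synthetic data: low label noise but drawn from a slightly biased generator,
    # standing in for a nonzero distance between real and synthetic distributions.
    X_syn = rng.uniform(-1.0, 1.0, size=(n_syn, 1))
    y_syn = np.sin(3.3 * X_syn[:, 0]) + 0.05 * rng.standard_normal(n_syn)

    X_tr = np.vstack([X_real, X_syn])
    y_tr = np.concatenate([y_real, y_syn])

    y_hat = krr_fit_predict(X_tr, y_tr, X_te)
    mse = np.mean((y_hat - y_te) ** 2)
    print(f"synthetic fraction {syn_frac:.2f}: test MSE {mse:.4f}")
```

In this toy setting, adding some synthetic data can reduce variance from the noisy real labels, while relying on it too heavily imports the generator's bias; sweeping the fraction is one simple way to look for the trade-off the paper analyzes.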
Similar Papers
High-dimensional Analysis of Synthetic Data Selection
Machine Learning (Stat)
Makes fake data help computers learn better.
Synthetic Data and the Shifting Ground of Truth
Computers and Society
Makes AI smarter with fake, imperfect data.
Data Value in the Age of Scaling: Understanding LLM Scaling Dynamics Under Real-Synthetic Data Mixtures
Machine Learning (CS)
Makes AI learn better from mixed data.