High-dimensional Analysis of Synthetic Data Selection
By: Parham Rezaei, Filip Kovacevic, Francesco Locatello, and more
Potential Business Impact:
Shows which fake data to pick so computers learn better.
Despite progress in the development of generative models, their usefulness in creating synthetic data that improves the prediction performance of classifiers has been called into question. Beyond heuristic principles such as "synthetic data should be close to the real data distribution", it is not clear which specific properties affect the generalization error. Our paper addresses this question through the lens of high-dimensional regression. Theoretically, we show that, for linear models, the covariance shift between the target distribution and the distribution of the synthetic data affects the generalization error but, surprisingly, the mean shift does not. Furthermore, we prove that, in some settings, matching the covariance of the target distribution is optimal. Remarkably, the theoretical insights from linear models carry over to deep neural networks and generative models. We empirically demonstrate that the covariance matching procedure (matching the covariance of the synthetic data with that of the data coming from the target distribution) performs well against several recent approaches for synthetic data selection, across training paradigms, architectures, datasets, and generative models used for augmentation.
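The abstract does not specify how the covariance matching procedure is implemented, but the idea it describes (selecting synthetic samples whose empirical covariance is close to that of the target data) can be sketched as a simple greedy selection. The function names and the Frobenius-norm objective below are illustrative assumptions, not the authors' actual algorithm:

```python
# Hypothetical sketch of covariance-based synthetic data selection:
# greedily pick synthetic samples so that the covariance of the
# selected subset stays close (in Frobenius norm) to the target
# data's covariance. This is an assumed instantiation, not the
# paper's exact procedure.
import numpy as np


def covariance_gap(rows, target_cov):
    """Frobenius distance between the empirical covariance of
    `rows` and `target_cov`; infinite if too few rows."""
    rows = np.asarray(rows)
    if len(rows) < 2:
        return np.inf
    return np.linalg.norm(np.cov(rows, rowvar=False) - target_cov)


def select_by_covariance_matching(synthetic, target, k, seed=0):
    """Greedily choose k row indices of `synthetic` that minimize
    the covariance gap to `target` at each step."""
    target_cov = np.cov(target, rowvar=False)
    rng = np.random.default_rng(seed)
    remaining = list(range(len(synthetic)))
    # Seed with two random points so a covariance is defined.
    chosen = [int(i) for i in rng.choice(remaining, size=2, replace=False)]
    for i in chosen:
        remaining.remove(i)
    while len(chosen) < k:
        # Pick the candidate whose addition best matches the target covariance.
        best = min(
            remaining,
            key=lambda i: covariance_gap(synthetic[chosen + [i]], target_cov),
        )
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

A usage example: with a target set drawn from an isotropic Gaussian and a synthetic pool with inflated variance, the greedy loop tends to retain the synthetic points that bring the subset's covariance back toward the target's. The greedy objective is quadratic in the pool size, so for large pools one would batch or approximate it.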
Similar Papers
Beyond Real Data: Synthetic Data through the Lens of Regularization
Machine Learning (Stat)
Finds best mix of fake and real data.
Privacy Amplification Through Synthetic Data: Insights from Linear Regression
Machine Learning (CS)
Protects private information in made-up data.
Non-Asymptotic Analysis of Data Augmentation for Precision Matrix Estimation
Machine Learning (Stat)
Helps computers learn better from more data.