Using Imperfect Synthetic Data in Downstream Inference Tasks
By: Yewon Byun, Shantanu Gupta, Zachary C. Lipton, and others
Potential Business Impact:
Helps scientists use fake data to understand people better.
Predictions and generations from large language models are increasingly being explored as an aid to computational social science and human subject research in limited data regimes. While previous technical work has explored how to use model-predicted labels for unlabeled data in a principled manner, there is growing interest in using large language models to generate entirely new synthetic samples (also termed synthetic simulations), such as responses to surveys. However, it is not immediately clear how practitioners can combine such data with real data and still draw statistically valid conclusions from them. In this work, we introduce a new estimator based on the generalized method of moments, providing a hyperparameter-free solution with strong theoretical guarantees to address this challenge. Surprisingly, we find that interactions between the moment residuals of synthetic data and those of real data can improve estimates of the target parameter. We empirically validate the finite-sample performance of our estimator across different regression tasks in computational social science applications, demonstrating large empirical gains.
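To make the abstract's idea concrete, here is a minimal toy sketch (not the paper's actual estimator) of combining real and synthetic survey responses via linear GMM. The setup is entirely hypothetical: we estimate a mean response `theta` from a small real sample, with biased but correlated synthetic responses for the same respondents plus a large synthetic-only pool. Because two of the three stacked moments target the same synthetic mean, the optimal weight matrix lets the real/synthetic residual covariance (the "interaction" the abstract mentions) reduce the variance of the estimate of `theta`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-generating process (illustrative only).
theta_true = 2.0          # true mean survey response
n, m = 500, 5000          # n real respondents, m extra synthetic-only draws

y = theta_true + rng.normal(0.0, 1.0, n)               # real responses
s_paired = 0.8 * y + 0.5 + rng.normal(0.0, 0.5, n)     # biased, correlated synthetic responses for the same units
y_unobs = theta_true + rng.normal(0.0, 1.0, m)
s_extra = 0.8 * y_unobs + 0.5 + rng.normal(0.0, 0.5, m)  # synthetic-only pool

# Stacked sample moments: a - G @ beta, with beta = (theta, mu),
# where mu is the (unknown, possibly biased) synthetic mean.
#   moment 1: E[y]        - theta = 0   (real data)
#   moment 2: E[s_paired] - mu    = 0   (paired synthetic)
#   moment 3: E[s_extra]  - mu    = 0   (synthetic-only)
a = np.array([y.mean(), s_paired.mean(), s_extra.mean()])
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

# Covariance of the stacked sample moments. The off-diagonal block is the
# real/synthetic residual interaction; moment 3 comes from an independent sample.
V = np.zeros((3, 3))
V[:2, :2] = np.cov(np.vstack([y, s_paired])) / n
V[2, 2] = s_extra.var(ddof=1) / m

# Optimally weighted linear GMM: minimize (a - G b)' V^{-1} (a - G b).
W = np.linalg.inv(V)
beta_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ a)
theta_hat, mu_hat = beta_hat
print(f"theta_hat = {theta_hat:.3f} (true {theta_true})")
```

Intuitively, the solution takes the form of the raw real-data mean corrected by the gap between the paired and pooled synthetic means, so the synthetic data's bias never contaminates `theta_hat` while its correlation with the real residuals still cuts variance. The paper's estimator is more general; this sketch only illustrates the stacked-moments idea.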
Similar Papers
Beyond Real Data: Synthetic Data through the Lens of Regularization
Machine Learning (Stat)
Finds best mix of fake and real data.
Data Value in the Age of Scaling: Understanding LLM Scaling Dynamics Under Real-Synthetic Data Mixtures
Machine Learning (CS)
Makes AI learn better from mixed data.
Towards Active Synthetic Data Generation for Finetuning Language Models
Machine Learning (CS)
Teaches computers to learn better from examples.