Differentially Private Linear Regression and Synthetic Data Generation with Statistical Guarantees
By: Shurong Lin, Aleksandra Slavković, Deekshith Reddy Bhoomireddy
Potential Business Impact:
Lets researchers run regressions on sensitive datasets, and share synthetic versions of them, without exposing individuals' information.
In social sciences, small- to medium-scale datasets are common and linear regression (LR) is canonical. In privacy-aware settings, much work has focused on differentially private (DP) LR, but mostly on point estimation with limited attention to uncertainty quantification. Meanwhile, synthetic data generation (SDG) is increasingly important for reproducibility studies, yet current DP LR methods do not readily support it. Mainstream SDG approaches are either tailored to discretized data, making them less suitable for continuous regression, or rely on deep models that require large datasets, limiting their use for the smaller, continuous data typical in social science. We propose a method for LR with valid inference under Gaussian DP: a DP bias-corrected estimator with asymptotic confidence intervals (CIs) and a general SDG procedure in which regression on the synthetic data matches our DP regression. Our binning-aggregation strategy is effective in small- to moderate-dimensional settings. Experiments show our method (1) improves accuracy over existing methods, (2) provides valid CIs, and (3) produces more reliable synthetic data for downstream ML tasks than current DP SDGs.
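For readers who want a concrete picture of what differentially private regression on sufficient statistics can look like, here is a minimal illustrative sketch in Python. It is not the paper's bias-corrected estimator, its CI construction, or its binning-aggregation SDG; it simply perturbs X^T X and X^T y with Gaussian noise in the spirit of sufficient-statistic perturbation. The noise scale `sigma` is assumed to have already been calibrated to the Gaussian-DP budget, and the data are assumed to be norm-clipped so the sensitivity analysis applies.

```python
# Illustrative sketch only (not the paper's algorithm): DP linear regression
# via Gaussian-noise perturbation of the sufficient statistics X^T X and X^T y.
import numpy as np

def dp_linear_regression(X, y, sigma, seed=None):
    """Return a noisy OLS estimate from privatized sufficient statistics.

    Assumes rows of X and entries of y are clipped to bounded norm, and that
    `sigma` has been calibrated to the desired Gaussian-DP guarantee.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Symmetrized Gaussian noise added to X^T X.
    noise = rng.normal(scale=sigma, size=(d, d))
    xtx_priv = X.T @ X + (noise + noise.T) / 2.0
    # Independent Gaussian noise added to X^T y.
    xty_priv = X.T @ y + rng.normal(scale=sigma, size=d)
    # Solve the (noisy) normal equations for the regression coefficients.
    return np.linalg.solve(xtx_priv, xty_priv)

# Toy usage: n = 500 observations, d = 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=500)
print(dp_linear_regression(X, y, sigma=5.0, seed=1))
```

A naive estimator like this is biased because the noisy X^T X enters a matrix inverse; correcting that bias and attaching valid asymptotic confidence intervals is precisely the gap the paper addresses.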
Similar Papers
Statistical Inference for Differentially Private Stochastic Gradient Descent
Machine Learning (Stat)
Makes private data safe for computer learning.
Differentially Private Inference for Longitudinal Linear Regression
Statistics Theory
Keeps private data safe in long-term studies.
How to DP-fy Your Data: A Practical Guide to Generating Synthetic Data With Differential Privacy
Cryptography and Security
Creates fake data that protects real people's secrets.