Towards hyperparameter-free optimization with differential privacy
By: Zhiqi Bu, Ruixuan Liu
Potential Business Impact:
Trains AI privately without needing to test many settings.
Differential privacy (DP) is a privacy-preserving paradigm that protects the training data when training deep learning models. Critically, model performance is determined by the training hyperparameters, especially those of the learning rate schedule, which makes fine-grained hyperparameter tuning on the data necessary. In practice, the learning rate hyperparameters are commonly tuned via grid search, which (1) is computationally expensive because multiple runs are needed, and (2) increases the risk of data leakage because the selection of hyperparameters is data-dependent. In this work, we adapt the automatic learning rate schedule to DP optimization for any model and optimizer, so as to significantly mitigate or even eliminate the cost of hyperparameter tuning when it is applied together with automatic per-sample gradient clipping. Our hyperparameter-free DP optimization is almost as computationally efficient as standard non-DP optimization, and achieves state-of-the-art DP performance on various language and vision tasks.
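For intuition, here is a minimal sketch, not the authors' implementation, of one DP optimizer step with automatic per-sample gradient clipping: each per-sample gradient is normalized instead of being clipped to a hand-tuned threshold, and Gaussian noise is added before the parameter update. It assumes a PyTorch setup; the names dp_sgd_step, noise_multiplier, and stability_gamma are illustrative rather than taken from the paper, and the automatic learning rate schedule itself is omitted.

```python
# Minimal sketch of a DP-SGD step with "automatic" per-sample gradient
# clipping (per-sample gradient normalization). Illustrative only.
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1,
                noise_multiplier=1.0, stability_gamma=0.01):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-sample gradients are computed one example at a time for clarity;
    # practical implementations vectorize this step.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        # Automatic clipping: normalize instead of clipping to a tuned threshold.
        scale = 1.0 / (norm + stability_gamma)
        for s, g in zip(summed, grads):
            s += g * scale

    batch_size = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = noise_multiplier * torch.randn_like(s)  # Gaussian mechanism
            p -= lr * (s + noise) / batch_size
```

The design point is that normalization removes the clipping threshold, a data-dependent hyperparameter; combining it with an automatic learning rate rule is what pushes the method toward hyperparameter-free DP training.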
Similar Papers
Forward Learning with Differential Privacy
Machine Learning (CS)
Keeps private data safe while training smart computer programs.
Differential Privacy for Deep Learning in Medicine
Machine Learning (CS)
Keeps patient data safe while training AI.
An Interactive Framework for Finding the Optimal Trade-off in Differential Privacy
Machine Learning (CS)
Finds best privacy for data without losing accuracy.