Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States
By: Eli Chien, Wei-Ning Chen, Pan Li
Potential Business Impact:
Lets AI models learn from private data safely.
Zeroth-order optimization has emerged as a promising approach for fine-tuning large language models on domain-specific data, particularly under differential privacy (DP) and memory constraints. While first-order methods have been extensively studied from a privacy perspective, the privacy analysis and algorithmic design for zeroth-order methods remain significantly underexplored. A critical open question concerns hidden-state DP analysis: although convergent privacy bounds are known for first-order methods, it has remained unclear whether similar guarantees can be established for zeroth-order methods. In this work, we provide an affirmative answer by proving a convergent DP bound for zeroth-order optimization. Our analysis generalizes the celebrated privacy-amplification-by-iteration framework to the setting of smooth loss functions in zeroth-order optimization. Furthermore, it yields improved DP zeroth-order algorithmic designs that were previously unknown in the literature.
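To make the setting concrete, the sketch below shows a generic differentially private zeroth-order update: a two-point finite-difference estimate of the directional derivative along a random direction, clipped and perturbed with Gaussian noise before the parameter step. This is a minimal illustration of the algorithm class the abstract refers to, not the authors' specific method or analysis; the function name `dp_zo_step` and the parameters `mu`, `clip`, and `noise_multiplier` are assumptions for illustration.

```python
import numpy as np

def dp_zo_step(theta, loss_fn, lr=0.01, mu=1e-3, clip=1.0,
               noise_multiplier=1.0, rng=None):
    """One illustrative DP zeroth-order step (hypothetical sketch).

    Estimates the directional derivative of loss_fn at theta with a
    two-point finite difference along a random direction, clips the
    scalar estimate to bound sensitivity, adds Gaussian noise, and
    updates theta along the sampled direction.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random search direction (Gaussian here; other distributions are possible).
    u = rng.standard_normal(theta.shape)
    # Two-point finite-difference estimate of the directional derivative.
    g = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2.0 * mu)
    # Clip the scalar estimate and add calibrated Gaussian noise.
    g_clipped = np.clip(g, -clip, clip)
    g_private = g_clipped + noise_multiplier * clip * rng.standard_normal()
    # Gradient-style update along the sampled direction using the noisy scalar.
    return theta - lr * g_private * u

# Example usage on a simple quadratic loss (stand-in for a model's loss).
if __name__ == "__main__":
    loss = lambda w: float(np.sum(w ** 2))
    w = np.ones(5)
    for _ in range(200):
        w = dp_zo_step(w, loss)
    print("final loss:", loss(w))
```

Releasing only the final iterate, rather than every noisy update, is what the hidden-state analysis exploits; the paper's contribution is showing that the resulting privacy bound converges for such zeroth-order iterations, analogous to known first-order results.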
Similar Papers
Private Zeroth-Order Optimization with Public Data
Machine Learning (CS)
Makes private learning faster and more accurate.
Zeroth-Order Optimization Finds Flat Minima
Machine Learning (CS)
Finds better answers when computers can't see inside.
Quantum Blackwell's Ordering and Differential Privacy
Quantum Physics
Keeps secret quantum computer information safe.