Score: 1

Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States

Published: May 30, 2025 | arXiv ID: 2506.00158v1

By: Eli Chien, Wei-Ning Chen, Pan Li

BigTech Affiliations: Microsoft

Potential Business Impact:

Enables fine-tuning language models on sensitive data with provable privacy guarantees.

Business Areas:
Privacy and Security

Zeroth-order optimization has emerged as a promising approach for fine-tuning large language models on domain-specific data, particularly under differential privacy (DP) and memory constraints. While first-order methods have been extensively studied from a privacy perspective, the privacy analysis and algorithmic design of zeroth-order methods remain significantly underexplored. A critical open question concerns hidden-state DP analysis: although convergent privacy bounds are known for first-order methods, it has remained unclear whether similar guarantees can be established for zeroth-order methods. In this work, we provide an affirmative answer by proving a convergent DP bound for zeroth-order optimization. Our analysis generalizes the celebrated privacy-amplification-by-iteration framework to the setting of smooth loss functions in zeroth-order optimization. Furthermore, it yields improved DP zeroth-order algorithmic designs that were previously unknown in the literature.
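To make the setting concrete, here is a minimal sketch of a differentially private zeroth-order update of the kind the abstract describes: the loss is queried only through function evaluations (no gradients), the two-point finite-difference scalar is clipped to bound its sensitivity, and Gaussian noise is added before the update. This is an illustrative toy, not the paper's algorithm; the function names, step size, clipping threshold, and noise level below are all assumptions chosen for demonstration.

```python
import numpy as np

def dp_zeroth_order_step(f, x, mu=1e-3, lr=0.1, clip=1.0, sigma=0.5, rng=None):
    """One illustrative DP zeroth-order step (hypothetical sketch).

    Only a clipped, noised scalar finite difference depends on the data,
    which is what makes the per-step Gaussian-mechanism accounting possible.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    v = rng.standard_normal(x.shape)        # random probe direction
    v /= np.linalg.norm(v)
    d = (f(x + mu * v) - f(x - mu * v)) / (2 * mu)  # directional derivative estimate
    d = float(np.clip(d, -clip, clip))      # clip to bound sensitivity
    d += sigma * rng.standard_normal()      # Gaussian noise for privacy
    return x - lr * d * v

# Toy usage: minimize a quadratic while privatizing each query.
f = lambda x: 0.5 * np.sum(x ** 2)
x = np.ones(5)
rng = np.random.default_rng(42)
for _ in range(200):
    x = dp_zeroth_order_step(f, x, rng=rng)
```

The hidden-state analysis studied in the paper concerns exactly such iterates: when only the final `x` is released (intermediate states stay hidden), the privacy loss can converge rather than grow with the number of steps.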

Country of Origin
🇺🇸 United States

Page Count
28 pages

Category
Computer Science:
Machine Learning (CS)