Improved Algorithms for Differentially Private Language Model Alignment
By: Keyu Chen, Hao Tang, Qinglin Liu, and more
Potential Business Impact:
Keeps AI helpful and private.
Language model alignment is crucial for ensuring that large language models (LLMs) align with human preferences, yet it often involves sensitive user data, raising significant privacy concerns. While prior work has integrated differential privacy (DP) with alignment techniques, the resulting performance remains limited. In this paper, we propose novel algorithms for privacy-preserving alignment and rigorously analyze their effectiveness across varying privacy budgets and models. Our framework can be applied to two widely used alignment techniques, namely direct preference optimization (DPO) and reinforcement learning from human feedback (RLHF). Through systematic experiments on large-scale language models, we demonstrate that our approach achieves state-of-the-art performance. Notably, one of our algorithms, DP-AdamW, combined with DPO, surpasses existing methods, improving alignment quality by up to 15% under moderate privacy budgets (ε = 2–5). We further investigate the interplay between privacy guarantees, alignment efficacy, and computational demands, providing practical guidelines for optimizing these trade-offs.
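The abstract does not spell out the DP-AdamW update rule, but the name suggests the standard DP-SGD recipe (per-example gradient clipping plus Gaussian noise) feeding a decoupled-weight-decay Adam update. The sketch below illustrates that combination only as an assumption; the function names dp_adamw_step and init_state, the state layout, and hyperparameters such as clip_norm and noise_multiplier are illustrative, not the authors' implementation.

```python
# A minimal sketch of a DP-AdamW-style step, assuming per-example clipping +
# Gaussian noise followed by an AdamW update. Names/defaults are illustrative.
import torch


def init_state(params):
    """Optimizer state: step counter plus first/second moment buffers."""
    return {"t": 0,
            "m": [torch.zeros_like(p) for p in params],
            "v": [torch.zeros_like(p) for p in params]}


@torch.no_grad()
def dp_adamw_step(params, per_sample_grads, state, *,
                  lr=1e-5, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01,
                  clip_norm=1.0, noise_multiplier=1.0):
    """params: list of tensors; per_sample_grads: one (batch, *p.shape) tensor per param."""
    batch = per_sample_grads[0].shape[0]

    # 1) Per-example clipping: bound each example's total gradient norm by clip_norm.
    flat = torch.cat([g.reshape(batch, -1) for g in per_sample_grads], dim=1)
    scale = (clip_norm / (flat.norm(dim=1) + 1e-12)).clamp(max=1.0)  # shape (batch,)

    noisy_grads = []
    for g in per_sample_grads:
        clipped_sum = (g * scale.view(batch, *([1] * (g.dim() - 1)))).sum(dim=0)
        # 2) Gaussian mechanism: noise std = noise_multiplier * clip_norm.
        noise = torch.randn_like(clipped_sum) * (noise_multiplier * clip_norm)
        noisy_grads.append((clipped_sum + noise) / batch)

    # 3) AdamW update on the privatized gradient.
    state["t"] += 1
    t, (b1, b2) = state["t"], betas
    for p, g, m, v in zip(params, noisy_grads, state["m"], state["v"]):
        p.mul_(1.0 - lr * weight_decay)             # decoupled weight decay
        m.mul_(b1).add_(g, alpha=1.0 - b1)          # first moment estimate
        v.mul_(b2).addcmul_(g, g, value=1.0 - b2)   # second moment estimate
        m_hat = m / (1.0 - b1 ** t)
        v_hat = v / (1.0 - b2 ** t)
        p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
```

In a privacy-preserving DPO or RLHF loop, per_sample_grads would be per-example gradients of the preference loss (e.g. obtained with torch.func.vmap over torch.func.grad), and (noise_multiplier, clip_norm) would be calibrated to the target (ε, δ) by a privacy accountant; the specific calibration used in the paper is not given in the abstract.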
Similar Papers
AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Cryptography and Security
Protects smart computer brains from being copied.
PROPS: Progressively Private Self-alignment of Large Language Models
Machine Learning (CS)
Keeps AI learning from people private.
Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model
Machine Learning (CS)
Makes AI better at following instructions.