DP-Adam-AC: Privacy-preserving Fine-Tuning of Localizable Language Models Using Adam Optimization with Adaptive Clipping
By: Ruoxing Yang
Potential Business Impact:
Lets AI learn secrets safely on your own device.
Large language models (LLMs) such as ChatGPT have evolved into powerful and ubiquitous tools. Fine-tuning on small datasets allows LLMs to acquire specialized skills for specific tasks efficiently. Although LLMs provide great utility in both general and task-specific use cases, they are limited by two security-related concerns. First, traditional LLM hardware requirements make them infeasible to run locally on consumer-grade devices; a remote network connection to the LLM provider's server is usually required, making the system vulnerable to network attacks. Second, fine-tuning an LLM for a sensitive task may involve sensitive data, and non-private fine-tuning algorithms produce models vulnerable to training-data reproduction attacks. Our work addresses these security concerns by enhancing differentially private optimization algorithms and applying them to fine-tune localizable language models. We introduce adaptive gradient clipping, along with other engineering enhancements, to the standard DP-Adam optimizer to create DP-Adam-AC. We use our optimizer to fine-tune examples of two localizable LLM designs: a small language model (Qwen2.5-0.5B) and a 1.58-bit quantized model (Bitnet-b1.58-2B). We demonstrate promising improvements in loss through experimentation on two synthetic datasets.
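To make the idea concrete, here is a minimal, hedged sketch of one differentially private Adam step with an adaptive clipping bound. The abstract does not specify DP-Adam-AC's exact update rule, so the quantile-based clip bound, the function name `dp_adam_ac_step`, and all hyperparameters below are illustrative assumptions, not the paper's method:

```python
# Illustrative sketch: one DP-Adam step with adaptive (quantile-based) clipping.
# The adaptive rule here (clip bound = a quantile of per-example gradient norms)
# is an assumed stand-in for the "AC" in DP-Adam-AC, whose details are not given
# in the abstract.
import math
import random
import statistics

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def dp_adam_ac_step(params, per_example_grads, state, *,
                    lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                    noise_multiplier=1.0, clip_quantile=0.5, rng=None):
    """Update `params` in place from a batch of per-example gradients.

    `state` holds Adam's step count and first/second moments:
    {"t": int, "m": [...], "v": [...]}.  Returns (params, clip_bound).
    """
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    d = len(params)

    # Adaptive clipping bound: a quantile of the per-example gradient norms.
    norms = [l2_norm(g) for g in per_example_grads]
    if n > 1:
        clip = statistics.quantiles(norms, n=100)[int(clip_quantile * 100) - 1]
    else:
        clip = norms[0]
    clip = max(clip, 1e-12)

    # Clip each per-example gradient to L2 norm <= clip, then sum.
    summed = [0.0] * d
    for g, gn in zip(per_example_grads, norms):
        scale = min(1.0, clip / max(gn, 1e-12))
        for j in range(d):
            summed[j] += g[j] * scale

    # Add Gaussian noise calibrated to the clip bound, then average over the batch.
    noisy_mean = [(summed[j] + rng.gauss(0.0, noise_multiplier * clip)) / n
                  for j in range(d)]

    # Standard Adam moment updates on the privatized gradient.
    state["t"] += 1
    t = state["t"]
    for j in range(d):
        state["m"][j] = beta1 * state["m"][j] + (1 - beta1) * noisy_mean[j]
        state["v"][j] = beta2 * state["v"][j] + (1 - beta2) * noisy_mean[j] ** 2
        m_hat = state["m"][j] / (1 - beta1 ** t)
        v_hat = state["v"][j] / (1 - beta2 ** t)
        params[j] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return params, clip
```

Because the clip bound tracks the observed gradient-norm distribution rather than staying fixed, it can shrink as training converges, reducing the magnitude of injected noise; the formal privacy accounting (e.g., via a moments accountant) is outside the scope of this sketch.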
Similar Papers
Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Cryptography and Security
Keeps private info safe when AI learns new things.
When FinTech Meets Privacy: Securing Financial LLMs with Differential Private Fine-Tuning
Cryptography and Security
Keeps your money secrets safe on your phone.
Performance Trade-offs of Optimizing Small Language Models for E-Commerce
Artificial Intelligence
Makes small computers understand online shoppers better.