Continual Pretraining on Encrypted Synthetic Data for Privacy-Preserving LLMs
By: Honghao Liu, Xuhui Jiang, Chengjin Xu, and more
Potential Business Impact:
Keeps private info safe when computers learn.
Preserving the privacy of sensitive data while pretraining large language models (LLMs) on small, domain-specific corpora presents a significant challenge. In this work, we take an exploratory step toward privacy-preserving continual pretraining by proposing an entity-based framework that synthesizes encrypted training data to protect personally identifiable information (PII). Our approach constructs a weighted entity graph to guide data synthesis and applies deterministic encryption to PII entities, enabling LLMs to encode new knowledge through continual pretraining while granting authorized access to sensitive data through decryption keys. Our results on limited-scale datasets demonstrate that our pretrained models outperform base models and ensure PII security, while exhibiting a modest performance gap compared to models trained on unencrypted synthetic data. We further show that increasing the number of entities and leveraging graph-based synthesis improves model performance, and that encrypted models retain instruction-following capabilities with long retrieved contexts. We discuss the security implications and limitations of deterministic encryption, positioning this work as an initial investigation into the design space of encrypted data pretraining for privacy-preserving LLMs. Our code is available at https://github.com/DataArcTech/SoE.
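To make the core mechanism concrete, below is a minimal sketch of deterministic PII encryption, not the paper's implementation. It assumes PII entities have already been extracted (e.g., by an upstream NER step) and uses AES-SIV, a deterministic authenticated cipher from the `cryptography` library; the helper names `encrypt_entity`, `decrypt_entity`, and `encrypt_text` are illustrative, not from the paper.

```python
# Sketch only: deterministic encryption of PII entities, assuming AES-SIV as
# the cipher. Determinism means every occurrence of the same entity maps to
# the same ciphertext token, so co-reference structure survives in the
# synthetic corpus while the plaintext stays hidden.
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESSIV

key = AESSIV.generate_key(512)   # the "decryption key" granted to authorized users
cipher = AESSIV(key)

def encrypt_entity(entity: str) -> str:
    """Deterministically encrypt one PII entity into a printable token."""
    ct = cipher.encrypt(entity.encode("utf-8"), None)
    return "<PII:" + base64.urlsafe_b64encode(ct).decode("ascii") + ">"

def decrypt_entity(token: str) -> str:
    """Recover the original entity from a token, given the key."""
    ct = base64.urlsafe_b64decode(token[len("<PII:"):-1])
    return cipher.decrypt(ct, None).decode("utf-8")

def encrypt_text(text: str, pii_entities: list[str]) -> str:
    """Replace every occurrence of each PII entity with its ciphertext token."""
    for entity in pii_entities:
        text = text.replace(entity, encrypt_entity(entity))
    return text

# The same entity always yields the same token, so a model can still learn
# consistent facts about "the entity" without ever seeing its plaintext.
doc = "Alice Chen met Alice Chen's cardiologist on 2024-03-01."
enc = encrypt_text(doc, ["Alice Chen", "2024-03-01"])
assert encrypt_entity("Alice Chen") == encrypt_entity("Alice Chen")
```

The determinism is the design trade-off the abstract flags: it preserves the entity co-occurrence statistics that continual pretraining needs, but it also leaks equality of entities across documents, which is one of the security limitations the paper discusses.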
Similar Papers
RL-Finetuned LLMs for Privacy-Preserving Synthetic Rewriting
Cryptography and Security
Keeps your secrets safe when computers learn.
Privacy-Aware In-Context Learning for Large Language Models
Machine Learning (CS)
Keeps your private writing safe from AI.
Agentic Privacy-Preserving Machine Learning
Cryptography and Security
Makes AI understand private messages safely.