
Continual Pretraining on Encrypted Synthetic Data for Privacy-Preserving LLMs

Published: January 9, 2026 | arXiv ID: 2601.05635v1

By: Honghao Liu, Xuhui Jiang, Chengjin Xu, and more

Potential Business Impact:

Enables organizations to adapt LLMs to sensitive, domain-specific data without exposing personally identifiable information during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Preserving privacy in sensitive data while pretraining large language models on small, domain-specific corpora presents a significant challenge. In this work, we take an exploratory step toward privacy-preserving continual pretraining by proposing an entity-based framework that synthesizes encrypted training data to protect personally identifiable information (PII). Our approach constructs a weighted entity graph to guide data synthesis and applies deterministic encryption to PII entities, enabling LLMs to encode new knowledge through continual pretraining while granting authorized access to sensitive data through decryption keys. Our results on limited-scale datasets demonstrate that our pretrained models outperform base models and ensure PII security, while exhibiting a modest performance gap compared to models trained on unencrypted synthetic data. We further show that increasing the number of entities and leveraging graph-based synthesis improves model performance, and that encrypted models retain instruction-following capabilities with long retrieved contexts. We discuss the security implications and limitations of deterministic encryption, positioning this work as an initial investigation into the design space of encrypted data pretraining for privacy-preserving LLMs. Our code is available at https://github.com/DataArcTech/SoE.
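The core mechanism described above, deterministically encrypting PII entities so the model learns over stable placeholder tokens while a key holder can recover the originals, can be illustrated with a minimal stdlib-only sketch. This is a hypothetical simplification, not the paper's actual scheme: it uses an HMAC-derived token per entity (determinism means the same entity always yields the same token) and a key-holder mapping table to stand in for decryption.

```python
import hashlib
import hmac


class DeterministicPIIEncryptor:
    """Toy deterministic PII substitution (illustrative only).

    The same entity always maps to the same token, so entity
    co-occurrence statistics survive in the training corpus;
    the key holder keeps a reverse table to restore originals.
    """

    def __init__(self, key: bytes):
        self.key = key
        self.reverse = {}  # token -> original entity, held by the key owner

    def encrypt(self, entity: str) -> str:
        # HMAC gives a keyed, deterministic tag; truncate for readability.
        tag = hmac.new(self.key, entity.encode(), hashlib.sha256).hexdigest()[:12]
        token = f"<PII_{tag}>"
        self.reverse[token] = entity
        return token

    def decrypt(self, token: str) -> str:
        # Authorized access: only the key/table holder can invert tokens.
        return self.reverse[token]


enc = DeterministicPIIEncryptor(b"secret-key")
t1 = enc.encrypt("Alice Smith")
t2 = enc.encrypt("Alice Smith")
assert t1 == t2                          # determinism: stable token per entity
assert enc.decrypt(t1) == "Alice Smith"  # round trip for authorized users
```

As the abstract notes, determinism is what lets the model encode knowledge about an entity across documents, but it also leaks equality of entities, which is part of the security trade-off the authors discuss.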

Repos / Data Links
https://github.com/DataArcTech/SoE

Page Count
18 pages

Category
Computer Science:
Cryptography and Security