Score: 1

DP-GENG: Differentially Private Dataset Distillation Guided by DP-Generated Data

Published: November 13, 2025 | arXiv ID: 2511.09876v1

By: Shuo Shi, Jinghuai Zhang, Shijie Jiang, and more

Potential Business Impact:

Protects private data while compressing large machine-learning training sets.

Business Areas:
DRM Content and Publishing, Media and Entertainment, Privacy and Security

Dataset distillation (DD) compresses large datasets into smaller ones while preserving the performance of models trained on them. Although DD is often assumed to enhance data privacy by aggregating over individual examples, recent studies reveal that standard DD can still leak sensitive information from the original dataset because it lacks formal privacy guarantees. Existing differentially private (DP) DD methods attempt to mitigate this risk by injecting noise into the distillation process, but they often fail to fully leverage the original dataset, degrading both realism and utility. This paper introduces DP-GENG, a novel framework that addresses these limitations of current DP-DD by leveraging DP-generated data. Specifically, DP-GENG initializes the distilled dataset with DP-generated data to enhance realism. The generated data then refines a DP feature-matching technique that distills the original dataset under a small privacy budget, and is used to train an expert model that aligns the distilled examples with their class distribution. Furthermore, we design a privacy budget allocation strategy that determines budget consumption across the DP components and provide a theoretical analysis of the overall privacy guarantees. Extensive experiments show that DP-GENG significantly outperforms state-of-the-art DP-DD methods in both dataset utility and robustness against membership inference attacks, establishing a new paradigm for privacy-preserving dataset distillation.
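The abstract does not spell out the mechanics of DP feature matching, but the standard recipe is to release noisy per-class feature statistics of the private data via the Gaussian mechanism and then optimize the distilled examples against those noisy targets. Below is a minimal sketch under that assumption; every name and hyperparameter here (feature_net, clip_norm, sigma, the matching loss) is illustrative and not taken from the paper.

```python
# Minimal sketch of a DP feature-matching step (Gaussian mechanism),
# NOT the authors' implementation; names and constants are assumptions.
import torch

def dp_class_feature_targets(private_feats, labels, num_classes,
                             clip_norm=1.0, sigma=4.0):
    """Noisy per-class mean features: clip each example, then add
    Gaussian noise calibrated to the clipped mean's sensitivity."""
    d = private_feats.shape[1]
    # Per-example clipping bounds each example's contribution (sensitivity).
    norms = private_feats.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = private_feats * (clip_norm / norms).clamp(max=1.0)
    targets = torch.zeros(num_classes, d)
    for c in range(num_classes):
        feats_c = clipped[labels == c]
        n_c = max(len(feats_c), 1)
        mean_c = feats_c.sum(0) / n_c
        # L2 sensitivity of the class mean is clip_norm / n_c.
        targets[c] = mean_c + torch.randn(d) * sigma * clip_norm / n_c
    return targets

def match_step(distilled, distilled_labels, targets, feature_net, lr=0.1):
    """One gradient step pulling distilled examples toward the noisy
    class means; no further privacy cost, since targets are already DP."""
    distilled = distilled.detach().requires_grad_(True)
    feats = feature_net(distilled)
    loss = feats.new_zeros(())
    for c in range(targets.shape[0]):
        f_c = feats[distilled_labels == c]
        if len(f_c):
            loss = loss + (f_c.mean(0) - targets[c]).pow(2).sum()
    loss.backward()
    return (distilled - lr * distilled.grad).detach()
```

In a framework like the one described, each noisy release of this kind consumes part of the total privacy budget, so the paper's allocation strategy would govern how much noise (sigma) each DP component can afford, with the overall guarantee following from composition across components.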

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
14 pages

Category
Computer Science: Cryptography and Security