Targeted Data Protection for Diffusion Model by Matching Training Trajectory

Published: December 11, 2025 | arXiv ID: 2512.10433v1

By: Hojun Lee, Mijin Koo, Yeji Song, and more

Potential Business Impact:

Protects art from being copied by AI.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Recent advancements in diffusion models have made fine-tuning text-to-image models for personalization increasingly accessible, but have also raised significant concerns regarding unauthorized data usage and privacy infringement. Current protection methods are limited to passively degrading image quality, failing to achieve stable control. While Targeted Data Protection (TDP) offers a promising paradigm for active redirection toward user-specified target concepts, existing TDP attempts suffer from poor controllability due to snapshot-matching approaches that fail to account for complete learning dynamics. We introduce TAFAP (Trajectory Alignment via Fine-tuning with Adversarial Perturbations), the first method to successfully achieve effective TDP by controlling the entire training trajectory. Unlike snapshot-based methods whose protective influence is easily diluted as training progresses, TAFAP employs trajectory-matching inspired by dataset distillation to enforce persistent, verifiable transformations throughout fine-tuning. We validate our method through extensive experiments, demonstrating the first successful targeted transformation in diffusion models with simultaneous control over both identity and visual patterns. TAFAP significantly outperforms existing TDP attempts, achieving robust redirection toward target concepts while maintaining high image quality. This work enables verifiable safeguards and provides a new framework for controlling and tracing alterations in diffusion model outputs.
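The core distinction the abstract draws — matching a full training trajectory rather than a single parameter snapshot — can be illustrated with a toy sketch. The code below is not the paper's TAFAP objective; it is a minimal NumPy illustration, with an assumed toy quadratic loss standing in for diffusion fine-tuning, of a trajectory-matching distance in the style of dataset distillation: protection succeeds when fine-tuning on protected data ends up near the endpoint of a reference trajectory toward the target concept.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = rng.normal(size=8)      # shared initial model parameters (toy)
target_end = rng.normal(size=8)  # parameters after fine-tuning on the target concept

def finetune(theta, goal, steps=10, lr=0.3):
    """Toy gradient descent pulling theta toward `goal`; a stand-in for
    diffusion-model fine-tuning. Returns the full parameter trajectory."""
    traj = [theta.copy()]
    for _ in range(steps):
        theta = theta - lr * (theta - goal)  # gradient of 0.5 * ||theta - goal||^2
        traj.append(theta.copy())
    return traj

def tm_loss(traj, ref_traj, theta_start):
    """Normalized endpoint distance between a fine-tuning trajectory and a
    reference trajectory, as used in trajectory-matching dataset distillation."""
    num = np.sum((traj[-1] - ref_traj[-1]) ** 2)
    den = np.sum((theta_start - ref_traj[-1]) ** 2) + 1e-12
    return num / den

# Reference: fine-tuning directly toward the user-specified target concept.
target_traj = finetune(theta0, target_end)

# "Protected" data steers fine-tuning along the same trajectory (slightly
# different dynamics); "unprotected" data drifts somewhere else entirely.
protected_traj = finetune(theta0, target_end, lr=0.25)
unprotected_traj = finetune(theta0, -target_end)

loss_protected = tm_loss(protected_traj, target_traj, theta0)
loss_unprotected = tm_loss(unprotected_traj, target_traj, theta0)
print(loss_protected < loss_unprotected)  # trajectory is redirected toward the target
```

In the actual method, the perturbation added to the protected images would be optimized to minimize such a trajectory-matching loss over the whole fine-tuning run, which is why its influence is not diluted the way a single-snapshot perturbation is.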

Country of Origin
🇰🇷 Korea, Republic of

Page Count
12 pages

Category
Computer Science:
Artificial Intelligence