Critic-Guided Reinforcement Unlearning in Text-to-Image Diffusion

Published: January 6, 2026 | arXiv ID: 2601.03213v1

By: Mykola Vysotskyi, Zahar Kohut, Mariia Shpir, and more

Potential Business Impact:

Removes unwanted concepts from AI image generators while preserving quality on other prompts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Machine unlearning in text-to-image diffusion models aims to remove targeted concepts while preserving overall utility. Prior diffusion unlearning methods typically rely on supervised weight edits or global penalties; reinforcement-learning (RL) approaches, while flexible, often optimize sparse end-of-trajectory rewards, yielding high-variance updates and weak credit assignment. We present a general RL framework for diffusion unlearning that treats denoising as a sequential decision process and introduces a timestep-aware critic with noisy-step rewards. Concretely, we train a CLIP-based reward predictor on noisy latents and use its per-step signal to compute advantage estimates for policy-gradient updates of the reverse diffusion kernel. Our algorithm is simple to implement, supports off-policy reuse, and plugs into standard text-to-image backbones. Across multiple concepts, the method achieves forgetting better than or comparable to strong baselines while maintaining image quality and benign prompt fidelity; ablations show that (i) per-step critics and (ii) noisy-conditioned rewards are key to stability and effectiveness. We release code and evaluation scripts to facilitate reproducibility and future research on RL-based diffusion unlearning.
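To make the abstract's mechanics concrete, below is a minimal, self-contained sketch of the general idea: a timestep-aware critic scores noisy latents at every denoising step, and those per-step rewards drive a policy-gradient update of a Gaussian reverse kernel. This is not the paper's released code; the names (TimestepAwareCritic, ReversePolicy, policy_gradient_step) are hypothetical, toy MLPs stand in for the diffusion backbone and the CLIP-based reward predictor, and the critic is assumed to be already trained on noisy latents.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

LATENT_DIM = 16   # toy latent size; real latents are e.g. 4x64x64
NUM_STEPS = 50    # number of reverse-diffusion steps

class TimestepAwareCritic(nn.Module):
    """Predicts a per-step reward from a NOISY latent and its timestep.
    Hypothetical stand-in for a CLIP-based reward predictor on noisy latents;
    assumed pretrained (e.g., distilled from CLIP concept scores)."""
    def __init__(self, latent_dim, t_dim=32):
        super().__init__()
        self.t_embed = nn.Embedding(NUM_STEPS, t_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + t_dim, 128), nn.SiLU(), nn.Linear(128, 1)
        )

    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, self.t_embed(t)], -1)).squeeze(-1)

class ReversePolicy(nn.Module):
    """Gaussian reverse kernel p_theta(z_{t-1} | z_t, t).
    A toy MLP replaces the text-to-image backbone's denoiser."""
    def __init__(self, latent_dim, t_dim=32, sigma=0.1):
        super().__init__()
        self.t_embed = nn.Embedding(NUM_STEPS, t_dim)
        self.mean = nn.Sequential(
            nn.Linear(latent_dim + t_dim, 128), nn.SiLU(), nn.Linear(128, latent_dim)
        )
        self.sigma = sigma

    def dist(self, z_t, t):
        mu = self.mean(torch.cat([z_t, self.t_embed(t)], -1))
        return Normal(mu, self.sigma)

def policy_gradient_step(policy, critic, opt, batch=8):
    """One on-policy update: denoising is treated as a sequential decision
    process, and the critic supplies a dense reward at EVERY step rather
    than a single end-of-trajectory score."""
    z = torch.randn(batch, LATENT_DIM)  # start the trajectory from pure noise
    loss = torch.zeros(())
    for step in reversed(range(NUM_STEPS)):
        t = torch.full((batch,), step, dtype=torch.long)
        d = policy.dist(z, t)
        z_next = d.sample()                    # sample z_{t-1} from the kernel
        logp = d.log_prob(z_next).sum(-1)      # log p_theta(z_{t-1} | z_t)
        with torch.no_grad():
            r = critic(z_next, t)              # noisy-step reward from the critic
            adv = r - r.mean()                 # simple batch-mean baseline as advantage
        loss = loss - (adv * logp).mean()      # REINFORCE-style per-step objective
        z = z_next
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

policy = ReversePolicy(LATENT_DIM)
critic = TimestepAwareCritic(LATENT_DIM)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
print(policy_gradient_step(policy, critic, opt))
```

The point of the per-step structure is credit assignment: because the critic is conditioned on the noisy latent and the timestep, each denoising action receives its own advantage signal instead of one sparse reward at the end of the trajectory, which the abstract identifies as the source of high-variance updates in prior RL approaches.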

Page Count
22 pages

Category
Computer Science: Machine Learning (CS)