Selective, Controlled and Domain-Agnostic Unlearning in Pretrained CLIP: A Training- and Data-Free Approach
By: Ashish Mishra, Gyanaranjan Nayak, Tarun Kumar, and more
Potential Business Impact:
Removes unwanted knowledge from AI without retraining.
Pretrained models like CLIP have demonstrated impressive zero-shot classification capabilities across diverse visual domains, spanning natural images, artistic renderings, and abstract representations. However, real-world applications often demand the removal (or "unlearning") of specific object classes without additional data, without retraining, and without degrading the model's performance on unrelated tasks. In this paper, we propose a novel training- and data-free unlearning framework that enables three distinct forgetting paradigms: (1) global unlearning of selected objects across all domains, (2) domain-specific knowledge removal (e.g., eliminating sketch representations while preserving photo recognition), and (3) complete unlearning in selective domains. By leveraging a multimodal nullspace built from text prompts and synthesized visual prototypes derived from CLIP's joint embedding space, our method efficiently removes undesired class information while preserving the remaining knowledge. This approach overcomes the limitations of existing retraining-based methods and offers a flexible and computationally efficient solution for controlled model forgetting.
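The core idea of nullspace-based forgetting can be illustrated with a minimal sketch. Assume we already have embedding vectors for the classes to forget (in the paper these come from text prompts and synthesized visual prototypes in CLIP's joint space; here they are random placeholders). Projecting features onto the orthogonal complement of the span of those vectors removes the forgotten directions while leaving orthogonal information untouched. All names below (`nullspace_projector`, the toy dimensions) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nullspace_projector(forget_embeddings: np.ndarray) -> np.ndarray:
    """Return a matrix P projecting features onto the orthogonal
    complement (nullspace) of the span of the forget-class embeddings.

    forget_embeddings: (k, d) array, one row per class to forget.
    NOTE: this is a hypothetical sketch of the general technique,
    not the paper's exact construction.
    """
    # Orthonormal basis U (d, k) for the span of the forget directions.
    U, _, _ = np.linalg.svd(forget_embeddings.T, full_matrices=False)
    # P = I - U U^T zeroes out any component lying in that span.
    return np.eye(forget_embeddings.shape[1]) - U @ U.T

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension (CLIP uses e.g. 512)
forget = rng.normal(size=(2, d))
forget /= np.linalg.norm(forget, axis=1, keepdims=True)

P = nullspace_projector(forget)
feature = rng.normal(size=d)
projected = P @ feature

# Components along the forgotten directions are (numerically) zero,
# so similarity scores against those classes collapse.
print(np.allclose(forget @ projected, 0.0))  # True
```

In a training-free setting, such a projector could be folded directly into the model's final projection weights, which is what makes this style of edit cheap compared to retraining.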
Similar Papers
Erasing CLIP Memories: Non-Destructive, Data-Free Zero-Shot Class Unlearning in CLIP Models
CV and Pattern Recognition
Removes unwanted knowledge from AI models.
Targeted Forgetting of Image Subgroups in CLIP Models
CV and Pattern Recognition
Cleans AI's bad memories without hurting good ones.
Domain Generalization in-the-Wild: Disentangling Classification from Domain-Aware Representations
CV and Pattern Recognition
Helps AI understand new things without seeing them.