CLIP Meets Diffusion: A Synergistic Approach to Anomaly Detection
By: Byeongchan Lee, John Won, Seunghyun Lee, and more
Potential Business Impact:
Finds weird spots in pictures, even with few examples.
Anomaly detection is a complex problem due to the ambiguity in defining anomalies, the diversity of anomaly types (e.g., local and global defects), and the scarcity of training data. As such, it necessitates a comprehensive model capable of capturing both low-level and high-level features, even with limited data. To address this, we propose CLIPFUSION, a method that leverages both discriminative and generative foundation models. Specifically, the CLIP-based discriminative model excels at capturing global features, while the diffusion-based generative model effectively captures local details, creating a synergistic and complementary approach. Notably, we introduce a method for using cross-attention maps and feature maps extracted from diffusion models specifically for anomaly detection. Experimental results on benchmark datasets (MVTec-AD, VisA) demonstrate that CLIPFUSION consistently outperforms baseline methods, achieving strong performance in both anomaly segmentation and classification. We believe that our method underscores the effectiveness of multi-modal and multi-model fusion in tackling the multifaceted challenges of anomaly detection, providing a scalable solution for real-world applications.
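To make the two-branch idea concrete, here is a minimal sketch of how a global, CLIP-style image score and a local, diffusion-derived anomaly map could be combined. The function names, the normalization steps, and the weighted fusion rule are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse a global (CLIP-style) anomaly score with a
# local (diffusion-style) anomaly map. All names and the fusion rule are
# assumptions for illustration, not CLIPFUSION's actual implementation.
import torch
import torch.nn.functional as F


def clip_global_score(image_feat: torch.Tensor,
                      normal_text_feat: torch.Tensor,
                      anomalous_text_feat: torch.Tensor) -> torch.Tensor:
    """Image-level anomaly score from cosine similarity to text prompts."""
    image_feat = F.normalize(image_feat, dim=-1)
    sims = torch.stack([
        (image_feat * F.normalize(normal_text_feat, dim=-1)).sum(-1),
        (image_feat * F.normalize(anomalous_text_feat, dim=-1)).sum(-1),
    ], dim=-1)
    # Softmax over {normal, anomalous}; return probability of "anomalous".
    return sims.softmax(dim=-1)[..., 1]


def diffusion_local_map(cross_attention: torch.Tensor,
                        feature_residual: torch.Tensor) -> torch.Tensor:
    """Pixel-level anomaly map from cross-attention maps and feature residuals."""
    attn = cross_attention / (cross_attention.amax(dim=(-2, -1), keepdim=True) + 1e-8)
    resid = feature_residual / (feature_residual.amax(dim=(-2, -1), keepdim=True) + 1e-8)
    return 0.5 * (attn + resid)


def fuse(global_score: torch.Tensor, local_map: torch.Tensor,
         alpha: float = 0.5):
    """Combine branches into a segmentation map and a fused image-level score."""
    seg = local_map  # segmentation is driven by the local branch
    img_score = alpha * global_score + (1 - alpha) * local_map.amax(dim=(-2, -1))
    return seg, img_score


# Toy shapes: batch of 2 images, 64x64 anomaly maps, 512-d CLIP features.
img_feat = torch.randn(2, 512)
txt_normal, txt_anom = torch.randn(512), torch.randn(512)
attn_map = torch.rand(2, 64, 64)
resid_map = torch.rand(2, 64, 64)

seg_map, score = fuse(clip_global_score(img_feat, txt_normal, txt_anom),
                      diffusion_local_map(attn_map, resid_map))
print(seg_map.shape, score.shape)  # torch.Size([2, 64, 64]) torch.Size([2])
```

The sketch reflects the division of labor described in the abstract: the CLIP branch supplies a global, image-level signal, while the diffusion branch supplies a spatial map for segmentation; a simple weighted average stands in for whatever fusion the paper actually uses.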
Similar Papers
AVadCLIP: Audio-Visual Collaboration for Robust Video Anomaly Detection
CV and Pattern Recognition
Finds weird things in videos using sound and sight.
PA-CLIP: Enhancing Zero-Shot Anomaly Detection through Pseudo-Anomaly Awareness
CV and Pattern Recognition
Finds tiny flaws on products, even with tricky lighting.
AA-CLIP: Enhancing Zero-shot Anomaly Detection via Anomaly-Aware CLIP
CV and Pattern Recognition
Finds hidden problems in pictures better.