Beyond Human-prompting: Adaptive Prompt Tuning with Semantic Alignment for Anomaly Detection
By: Pi-Wei Chen, Jerry Chun-Wei Lin, Wei-Han Chen, and more
Potential Business Impact:
Automatically detects anomalous regions in images.
Pre-trained Vision-Language Models (VLMs) have recently shown promise in detecting anomalies. However, previous approaches are fundamentally limited by their reliance on human-designed prompts and the lack of accessible anomaly samples, leading to significant gaps in context-specific anomaly understanding. In this paper, we propose Adaptive Prompt Tuning with semantic alignment for anomaly detection (APT), a groundbreaking prior-knowledge-free, few-shot framework that overcomes the limitations of traditional prompt-based approaches. APT uses self-generated anomaly samples with noise perturbations to train learnable prompts that capture context-dependent anomalies across different scenarios. To prevent overfitting to synthetic noise, we propose a Self-Optimizing Meta-prompt Guiding Scheme (SMGS) that iteratively aligns the prompts with general anomaly semantics while incorporating diverse synthetic anomalies. Our system not only advances pixel-wise anomaly detection but also achieves state-of-the-art performance on multiple benchmark datasets without requiring prior knowledge for prompt crafting, establishing a robust and versatile solution for real-world anomaly detection.
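The core idea of the abstract — perturbing normal samples with noise to self-generate pseudo-anomalies, then tuning a learnable prompt embedding against them — can be sketched in miniature. This is a hedged illustration, not the paper's actual method: the features, noise scale, and margin-style objective below are assumptions standing in for CLIP-style embeddings and APT's real training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for normalized image features of normal samples
# (in APT these would come from a frozen VLM image encoder).
normal_feats = rng.normal(size=(16, 64))
normal_feats /= np.linalg.norm(normal_feats, axis=1, keepdims=True)

def make_pseudo_anomalies(feats, noise_scale=0.5):
    """Self-generate anomaly samples by perturbing normal features with noise."""
    noisy = feats + noise_scale * rng.normal(size=feats.shape)
    return noisy / np.linalg.norm(noisy, axis=1, keepdims=True)

pseudo_anomalies = make_pseudo_anomalies(normal_feats)

# A learnable "anomaly prompt" embedding, tuned so that pseudo-anomalous
# features score higher (cosine similarity) than normal ones.
prompt = rng.normal(size=64)
prompt /= np.linalg.norm(prompt)

lr = 0.1
for _ in range(200):
    # Gradient of a simple separation objective:
    # maximize mean sim(anomaly, prompt) - mean sim(normal, prompt).
    grad = pseudo_anomalies.mean(axis=0) - normal_feats.mean(axis=0)
    prompt += lr * grad
    prompt /= np.linalg.norm(prompt)

normal_score = normal_feats @ prompt
anomaly_score = pseudo_anomalies @ prompt
print(anomaly_score.mean() > normal_score.mean())
```

The paper's SMGS additionally regularizes such prompts toward general anomaly semantics so they do not overfit to the synthetic noise; that alignment step is omitted here.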
Similar Papers
ANPrompt: Anti-noise Prompt Tuning for Vision-Language Models
CV and Pattern Recognition
Makes AI models better at understanding images and text.
NAP-Tuning: Neural Augmented Prompt Tuning for Adversarially Robust Vision-Language Models
CV and Pattern Recognition
Makes AI understand pictures and words better, safely.