Towards Training-free Anomaly Detection with Vision and Language Foundation Models
By: Jinjin Zhang, Guodong Wang, Yizhou Jin, and more
Potential Business Impact:
Finds unusual things in pictures, even when every part looks normal on its own.
Anomaly detection is valuable for real-world applications such as industrial quality inspection. However, most approaches focus on detecting local structural anomalies while neglecting compositional anomalies that involve logical constraints. In this paper, we introduce LogSAD, a novel multi-modal framework that requires no training for both Logical and Structural Anomaly Detection. First, we propose a match-of-thought architecture that employs advanced large multi-modal models (i.e., GPT-4V) to generate matching proposals, formulating interests and compositional rules of thought for anomaly detection. Second, we elaborate on multi-granularity anomaly detection, consisting of patch tokens, sets of interests, and composition matching with vision and language foundation models. We then present a calibration module to align anomaly scores from the different detectors, followed by integration strategies for the final decision. Consequently, our approach addresses both logical and structural anomaly detection within a unified framework and achieves state-of-the-art results without training, even when compared with supervised approaches, highlighting its robustness and effectiveness. Code is available at https://github.com/zhang0jhon/LogSAD.
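The pipeline sketched in the abstract, multi-granularity scores (patch tokens, sets of interests, composition matching), per-detector calibration, and score integration, can be illustrated with a small example. The sketch below is a minimal, hypothetical rendering of that idea under stated assumptions: the function names, the nearest-neighbor patch scoring, the z-score calibration, and the max-fusion rule are all illustrative choices and are not taken from the LogSAD implementation.

```python
# Minimal sketch of multi-granularity anomaly scoring, calibration, and fusion.
# All names and the specific scoring/calibration rules are assumptions for
# illustration, not the authors' code.
import numpy as np


def patch_anomaly_score(test_patches: np.ndarray, normal_patches: np.ndarray) -> float:
    """Structural score: cosine distance of each test patch embedding to its
    nearest normal patch embedding, aggregated by the worst (max) patch."""
    t = test_patches / np.linalg.norm(test_patches, axis=1, keepdims=True)
    n = normal_patches / np.linalg.norm(normal_patches, axis=1, keepdims=True)
    dist = 1.0 - t @ n.T                  # (num_test_patches, num_normal_patches)
    return float(dist.min(axis=1).max())  # nearest-normal distance, worst patch


def composition_score(observed_counts: dict, expected_counts: dict) -> float:
    """Logical score: mismatch between observed object/part counts and an
    expected composition rule (e.g., a rule proposed by a multi-modal LLM)."""
    keys = set(observed_counts) | set(expected_counts)
    return float(sum(abs(observed_counts.get(k, 0) - expected_counts.get(k, 0)) for k in keys))


def calibrate(score: float, normal_scores: np.ndarray) -> float:
    """Align a raw detector score against its distribution on normal data
    (simple z-score calibration assumed here)."""
    mu, sigma = normal_scores.mean(), normal_scores.std() + 1e-8
    return float((score - mu) / sigma)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Mock embeddings stand in for frozen vision-foundation-model features.
    normal_patches = rng.normal(size=(512, 64))
    test_patches = rng.normal(size=(196, 64)) + 0.5   # shifted: structurally anomalous

    s_struct = patch_anomaly_score(test_patches, normal_patches)
    s_logic = composition_score({"cable": 2, "connector": 1},
                                {"cable": 3, "connector": 1})

    # Calibrate each detector on scores it produced for held-out normal images,
    # then fuse with a simple max rule for the final decision.
    normal_struct_scores = rng.normal(loc=0.8, scale=0.05, size=50)
    normal_logic_scores = rng.integers(0, 2, size=50).astype(float)
    fused = max(calibrate(s_struct, normal_struct_scores),
                calibrate(s_logic, normal_logic_scores))
    print(f"structural={s_struct:.3f}, logical={s_logic:.3f}, fused={fused:.3f}")
```

In the actual framework, the patch embeddings would come from vision and language foundation models and the composition rules from GPT-4V-generated matching proposals; here both are mocked with random data so the sketch runs standalone.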
Similar Papers
Harnessing Vision-Language Models for Time Series Anomaly Detection
CV and Pattern Recognition
Finds unusual patterns in time series data using AI.
Architectural Co-Design for Zero-Shot Anomaly Detection: Decoupling Representation and Dynamically Fusing Features in CLIP
CV and Pattern Recognition
Finds hidden problems in pictures using words.
Language-guided Open-world Video Anomaly Detection
CV and Pattern Recognition
Teaches computers to spot new and changing unusual events in videos.