Foundation Models and Transformers for Anomaly Detection: A Survey
By: Mouïn Ben Ammar, Arturo Mendoza, Nacim Belkhir, and more
Potential Business Impact:
Finds weird spots in pictures better.
In line with the development of deep learning, this survey examines the transformative role of Transformers and foundation models in advancing visual anomaly detection (VAD). We explore how these architectures, with their global receptive fields and adaptability, address challenges such as long-range dependency modeling, contextual modeling, and data scarcity. The survey categorizes VAD methods into reconstruction-based, feature-based, and zero/few-shot approaches, highlighting the paradigm shift brought about by foundation models. By integrating attention mechanisms and leveraging large-scale pre-training, Transformers and foundation models enable more robust, interpretable, and scalable anomaly detection solutions. This work provides a comprehensive review of state-of-the-art techniques, their strengths and limitations, and emerging trends in leveraging these architectures for VAD.
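To make the reconstruction-based category concrete, the sketch below is illustrative only and not taken from the paper: it assumes PyTorch, and all class names, dimensions, and hyperparameters are hypothetical. It shows the general pattern of training a Transformer encoder to reconstruct ViT-style patch tokens and using the per-patch reconstruction error as an anomaly map.

# Minimal sketch (illustrative, not the paper's method): a reconstruction-based
# anomaly detector built from a Transformer encoder. Patch tokens of an image
# are reconstructed, and the per-patch reconstruction error serves as an
# anomaly map. All names and hyperparameters here are assumptions.
import torch
import torch.nn as nn

class TransformerReconstructor(nn.Module):
    def __init__(self, patch_dim=768, num_patches=196, depth=4, heads=8):
        super().__init__()
        # Learnable positional embedding added to the patch tokens.
        self.pos = nn.Parameter(torch.zeros(1, num_patches, patch_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=patch_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Linear head that maps encoded tokens back to the input feature space.
        self.head = nn.Linear(patch_dim, patch_dim)

    def forward(self, patch_tokens):
        # patch_tokens: (batch, num_patches, patch_dim), e.g. ViT patch features.
        z = self.encoder(patch_tokens + self.pos)
        return self.head(z)

def anomaly_map(model, patch_tokens):
    # Per-patch anomaly score = squared reconstruction error of each token.
    with torch.no_grad():
        recon = model(patch_tokens)
    return ((recon - patch_tokens) ** 2).mean(dim=-1)  # (batch, num_patches)

if __name__ == "__main__":
    model = TransformerReconstructor()
    tokens = torch.randn(2, 196, 768)        # stand-in for real patch features
    print(anomaly_map(model, tokens).shape)  # torch.Size([2, 196])

In practice such a model would be trained only on normal samples, so anomalous regions reconstruct poorly and receive high scores; the zero/few-shot methods surveyed instead rely on large pre-trained foundation models rather than per-dataset training.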
Similar Papers
Foundation Models for Time Series: A Survey
Machine Learning (CS)
Helps computers understand patterns in data over time.
Foundation Models for Anomaly Detection: Vision and Challenges
Machine Learning (CS)
Finds weird patterns in data to spot problems.
Simplifying Traffic Anomaly Detection with Video Foundation Models
Computer Vision and Pattern Recognition
Helps cars spot weird traffic using smart computer vision.