Score: 2

Simplifying Traffic Anomaly Detection with Video Foundation Models

Published: July 12, 2025 | arXiv ID: 2507.09338v2

By: Svetlana Orlova, Tommie Kerssies, Brunó B. Englert, et al.

Potential Business Impact:

Helps self-driving cars spot unusual traffic events using efficient, pre-trained video models.

Business Areas:
Autonomous Vehicles, Transportation

Recent methods for ego-centric Traffic Anomaly Detection (TAD) often rely on complex multi-stage or multi-representation fusion architectures, yet it remains unclear whether such complexity is necessary. Recent findings in visual perception suggest that foundation models, enabled by advanced pre-training, allow simple yet flexible architectures to outperform specialized designs. Therefore, in this work, we investigate an architecturally simple encoder-only approach using plain Video Vision Transformers (Video ViTs) and study how pre-training enables strong TAD performance. We find that: (i) advanced pre-training enables simple encoder-only models to match or even surpass the performance of specialized state-of-the-art TAD methods, while also being significantly more efficient; (ii) although weakly- and fully-supervised pre-training are advantageous on standard benchmarks, we find them less effective for TAD. Instead, self-supervised Masked Video Modeling (MVM) provides the strongest signal; and (iii) Domain-Adaptive Pre-Training (DAPT) on unlabeled driving videos further improves downstream performance, without requiring anomalous examples. Our findings highlight the importance of pre-training and show that effective, efficient, and scalable TAD models can be built with minimal architectural complexity. We release our code, domain-adapted encoders, and fine-tuned models to support future work: https://github.com/tue-mps/simple-tad.
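The paper's actual code lives at the GitHub link above; as a purely illustrative sketch of the "architecturally simple encoder-only" idea, the pipeline can be thought of as tubelet-embedding a video clip, running the tokens through a plain Transformer encoder, pooling, and applying a linear anomaly head. The toy class below is a hypothetical NumPy stand-in (random weights, one attention block, no pre-training), not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TinyVideoViT:
    """Illustrative encoder-only anomaly scorer: tubelet embedding,
    one self-attention block, mean pooling, linear head.
    All shapes and names are assumptions for demonstration only."""

    def __init__(self, tube=(2, 8, 8), dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.tube = tube
        patch_elems = tube[0] * tube[1] * tube[2] * 3  # tubelet voxels x RGB
        self.W_embed = rng.normal(0, 0.02, (patch_elems, dim))
        self.W_qkv = rng.normal(0, 0.02, (dim, 3 * dim))
        self.W_head = rng.normal(0, 0.02, (dim, 1))
        self.dim = dim

    def __call__(self, video):
        # video: (T, H, W, 3), with T, H, W divisible by the tubelet size
        tt, th, tw = self.tube
        T, H, W, C = video.shape
        # Split the clip into non-overlapping spatio-temporal tubelets
        tokens = video.reshape(T // tt, tt, H // th, th, W // tw, tw, C)
        tokens = tokens.transpose(0, 2, 4, 1, 3, 5, 6).reshape(-1, tt * th * tw * C)
        x = tokens @ self.W_embed                       # (N_tokens, dim)
        # Single self-attention block with a residual connection
        q, k, v = np.split(x @ self.W_qkv, 3, axis=-1)
        attn = softmax(q @ k.T / np.sqrt(self.dim))
        x = x + attn @ v
        pooled = x.mean(axis=0)                         # global average pool
        logit = pooled @ self.W_head
        return 1.0 / (1.0 + np.exp(-logit))             # anomaly probability
```

In the paper's setting, the encoder weights would come from self-supervised Masked Video Modeling (optionally followed by domain-adaptive pre-training on driving videos), and only the lightweight head is task-specific.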

Country of Origin
🇳🇱 Netherlands

Repos / Data Links
https://github.com/tue-mps/simple-tad

Page Count
15 pages

Category
Computer Science:
CV and Pattern Recognition