Score: 4

Temporal-Guided Visual Foundation Models for Event-Based Vision

Published: November 9, 2025 | arXiv ID: 2511.06238v1

By: Ruihao Xia, Junhong Cai, Luziwei Leng, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Lets event cameras deliver robust perception (segmentation, depth, detection) in challenging conditions by reusing image-pretrained foundation models.

Business Areas:
Image Recognition, Data and Analytics, Software

Event cameras offer unique advantages for vision tasks in challenging environments, yet processing asynchronous event streams remains an open challenge. While existing methods rely on specialized architectures or resource-intensive training, the potential of leveraging modern Visual Foundation Models (VFMs) pretrained on image data remains under-explored for event-based vision. To address this, we propose Temporal-Guided VFM (TGVFM), a novel framework that seamlessly integrates VFMs with our temporal context fusion block to bridge this gap. Our temporal block introduces three key components: (1) Long-Range Temporal Attention to model global temporal dependencies, (2) Dual Spatiotemporal Attention for multi-scale frame correlation, and (3) Deep Feature Guidance Mechanism to fuse semantic-temporal features. By retraining event-to-video models on real-world data and leveraging transformer-based VFMs, TGVFM preserves spatiotemporal dynamics while harnessing pretrained representations. Experiments demonstrate SoTA performance across semantic segmentation, depth estimation, and object detection, with improvements of 16%, 21%, and 16% over existing methods, respectively. Overall, this work unlocks the cross-modality potential of image-based VFMs for event-based vision with temporal reasoning. Code is available at https://github.com/XiaRho/TGVFM.
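
The abstract names the three components of the temporal context fusion block but does not spell out how they connect, so below is a minimal PyTorch sketch of one plausible wiring: long-range attention over a short frame window, spatial self-attention plus cross-attention into the pooled temporal context, and a gated fusion of semantic and temporal features. All class names, shapes, and layer choices are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
# Hypothetical sketch of a temporal context fusion block in the spirit of the
# abstract; names, shapes, and wiring are assumptions, not the paper's code.
import torch
import torch.nn as nn


class TemporalContextFusion(nn.Module):
    """Fuses per-frame VFM features across a short temporal window.

    Assumed input: VFM feature maps of shape (B, T, C, H, W) extracted from
    frames reconstructed by an event-to-video model.
    """

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # (1) Long-range temporal attention: each spatial location attends
        #     over all T frames to capture global temporal dependencies.
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # (2) Dual spatiotemporal attention: spatial self-attention on the
        #     current frame plus cross-attention into a pooled temporal context.
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # (3) Feature guidance (assumed here as a gate) mixing semantic (VFM)
        #     and temporal features before the task head.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        B, T, C, H, W = feats.shape
        # Treat every spatial position as a token sequence over time.
        tokens = feats.permute(0, 3, 4, 1, 2).reshape(B * H * W, T, C)

        # (1) Attend over the temporal axis at every spatial position.
        t_out, _ = self.temporal_attn(tokens, tokens, tokens)
        t_out = self.norm(tokens + t_out)

        # Keep the last frame as the "current" frame to be refined.
        cur = t_out[:, -1:, :].reshape(B, H * W, C)    # (B, HW, C)
        ctx = t_out.mean(dim=1).reshape(B, H * W, C)   # pooled temporal context

        # (2) Spatial self-attention on the current frame, then cross-attention
        #     from the current frame into the pooled temporal context.
        s_out, _ = self.spatial_attn(cur, cur, cur)
        x_out, _ = self.cross_attn(s_out, ctx, ctx)

        # (3) Gated fusion of semantic (current VFM) and temporal features.
        g = self.gate(torch.cat([cur, x_out], dim=-1))
        fused = g * x_out + (1.0 - g) * cur
        return fused.reshape(B, H, W, C).permute(0, 3, 1, 2)  # (B, C, H, W)


if __name__ == "__main__":
    block = TemporalContextFusion(dim=256)
    x = torch.randn(2, 4, 256, 16, 16)   # B=2 clips, T=4 frames
    print(block(x).shape)                # torch.Size([2, 256, 16, 16])
```

The sigmoid gate is only one reading of the "Deep Feature Guidance Mechanism"; the released code may guide fusion differently, e.g. with deeper-layer features steering shallower ones.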

Country of Origin
🇸🇬 🇨🇳 China, Singapore

Repos / Data Links
https://github.com/XiaRho/TGVFM

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition