Temporal-Guided Visual Foundation Models for Event-Based Vision
By: Ruihao Xia, Junhong Cai, Luziwei Leng, and more
Potential Business Impact:
Lets cameras see better in tough conditions.
Event cameras offer unique advantages for vision tasks in challenging environments, yet processing their asynchronous event streams remains an open problem. While existing methods rely on specialized architectures or resource-intensive training, the potential of modern Visual Foundation Models (VFMs) pretrained on image data remains underexplored for event-based vision. To address this, we propose Temporal-Guided VFM (TGVFM), a novel framework that seamlessly integrates VFMs with our temporal context fusion block to bridge this gap. The temporal block introduces three key components: (1) Long-Range Temporal Attention to model global temporal dependencies, (2) Dual Spatiotemporal Attention for multi-scale frame correlation, and (3) a Deep Feature Guidance Mechanism to fuse semantic and temporal features. By retraining event-to-video models on real-world data and building on transformer-based VFMs, TGVFM preserves spatiotemporal dynamics while harnessing pretrained representations. Experiments demonstrate state-of-the-art performance across semantic segmentation, depth estimation, and object detection, with improvements of 16%, 21%, and 16% over existing methods, respectively. Overall, this work unlocks the cross-modality potential of image-based VFMs for event-based vision with temporal reasoning. Code is available at https://github.com/XiaRho/TGVFM.
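The abstract names the three components of the temporal context fusion block but gives no implementation details. The sketch below is a minimal, hypothetical PyTorch illustration of how such a block *might* wire together per-pixel long-range temporal attention, attention at two spatial scales, and guidance from deep VFM features; every class name, shape, and design choice here is an assumption for illustration, not the authors' code (see the linked repository for the official implementation).

```python
# Hypothetical sketch of a temporal context fusion block in the spirit of TGVFM.
# All module names, shapes, and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalContextFusionBlock(nn.Module):
    """Illustrative block: (1) long-range temporal attention across frames,
    (2) dual-scale spatiotemporal attention, (3) fusion with deep VFM features."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # (1) Long-Range Temporal Attention: attend over the T frames at each spatial location.
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # (2) Dual Spatiotemporal Attention: cross-attention between full- and half-resolution tokens.
        self.spatio_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # (3) Deep Feature Guidance: fuse temporal and semantic (VFM) features via a small MLP.
        self.guidance = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor, vfm_feat: torch.Tensor) -> torch.Tensor:
        # feats:    (B, T, C, H, W) per-frame features from the event-to-video branch
        # vfm_feat: (B, C, H, W)    deep semantic features from the pretrained VFM
        B, T, C, H, W = feats.shape

        # (1) long-range temporal attention over the T frames at each pixel
        x = feats.permute(0, 3, 4, 1, 2).reshape(B * H * W, T, C)
        x = x + self.temporal_attn(x, x, x, need_weights=False)[0]
        x = x.reshape(B, H, W, T, C).permute(0, 3, 4, 1, 2)          # back to (B, T, C, H, W)

        # (2) dual spatiotemporal attention: current frame queries a pooled (half-res) view
        cur = x[:, -1]                                               # (B, C, H, W)
        tokens_full = cur.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        tokens_half = F.avg_pool2d(cur, 2).flatten(2).transpose(1, 2)  # (B, H*W/4, C)
        attended = tokens_full + self.spatio_attn(
            tokens_full, tokens_half, tokens_half, need_weights=False)[0]

        # (3) deep feature guidance: fuse temporal tokens with VFM semantics
        vfm_tokens = vfm_feat.flatten(2).transpose(1, 2)             # (B, H*W, C)
        fused = self.guidance(torch.cat([attended, vfm_tokens], dim=-1))
        fused = self.norm(fused + vfm_tokens)
        return fused.transpose(1, 2).reshape(B, C, H, W)


if __name__ == "__main__":
    block = TemporalContextFusionBlock(dim=64, num_heads=4)
    frames = torch.randn(2, 5, 64, 16, 16)   # five reconstructed frames (toy sizes)
    vfm = torch.randn(2, 64, 16, 16)         # deep VFM features for the current frame
    print(block(frames, vfm).shape)          # torch.Size([2, 64, 16, 16])
```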
Similar Papers
Hierarchical Event Memory for Accurate and Low-latency Online Video Temporal Grounding
CV and Pattern Recognition
Finds video moments from text, even without seeing the future.
Exploring The Missing Semantics In Event Modality
CV and Pattern Recognition
Helps cameras see objects even in fast motion.
Spatiotemporal Attention Learning Framework for Event-Driven Object Recognition
CV and Pattern Recognition
Helps cameras see fast-moving things clearly.