Hybrid Spiking Vision Transformer for Object Detection with Event Cameras
By: Qi Xu, Jie Deng, Jiangrong Shen, and more
Potential Business Impact:
Helps cameras see moving things with less power.
Event-based object detection has gained increasing attention due to its advantages such as high temporal resolution, wide dynamic range, and asynchronous address-event representation. Leveraging these advantages, Spiking Neural Networks (SNNs) have emerged as a promising approach, offering low energy consumption and rich spatiotemporal dynamics. To further improve event-based object detection, this study proposes a novel hybrid spiking vision Transformer (HsVT) model. HsVT integrates a spatial feature extraction module that captures local and global features with a temporal feature extraction module that models time dependencies and long-term patterns in event sequences. This combination enables HsVT to capture spatiotemporal features, improving its capability to handle complex event-based object detection tasks. To support research in this area, we developed and publicly released the Fall Detection dataset as a benchmark for event-based object detection. Captured with an event-based camera, the dataset preserves facial privacy and reduces memory usage thanks to its event representation format. We evaluated HsVT on the GEN1 and Fall Detection datasets across various model sizes. Experimental results demonstrate that HsVT achieves significant performance improvements in event-based object detection with fewer parameters.
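To make the hybrid design concrete, below is a minimal PyTorch-style sketch of the spatial-then-temporal structure the abstract describes: a spiking convolution for local features, self-attention for global context, and a recurrent module over the event-frame sequence for temporal dependencies. All class names, layer choices (LIF-style neuron, multi-head attention, GRU), and tensor shapes are assumptions for illustration, not the authors' actual HsVT implementation.

import torch
import torch.nn as nn


class SpikingNeuron(nn.Module):
    # Leaky integrate-and-fire unit with a hard threshold (assumption: an
    # LIF-style neuron; the surrogate gradient used for training is omitted).
    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, x, v):
        v = v + (x - v) / self.tau          # leaky membrane integration
        spikes = (v >= self.v_th).float()   # fire when threshold is crossed
        v = v * (1.0 - spikes)              # hard reset after a spike
        return spikes, v


class HybridSpikingViTSketch(nn.Module):
    # Spatial module (spiking conv for local features + self-attention for
    # global context) followed by a temporal module (GRU over the event-frame
    # sequence). Every layer here is a placeholder for the paper's modules.
    def __init__(self, in_ch=2, dim=64, heads=4):
        super().__init__()
        self.local = nn.Conv2d(in_ch, dim, 3, stride=2, padding=1)        # local features
        self.neuron = SpikingNeuron()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)   # global context
        self.temporal = nn.GRU(dim, dim, batch_first=True)                # long-term dependencies

    def forward(self, events):                 # events: (B, T, C, H, W) event frames
        B, T = events.shape[:2]
        v, tokens = None, []
        for t in range(T):                     # step through the event sequence
            f = self.local(events[:, t])       # (B, dim, H/2, W/2)
            v = torch.zeros_like(f) if v is None else v
            s, v = self.neuron(f, v)           # spiking spatial features
            g = s.flatten(2).transpose(1, 2)   # (B, H*W/4, dim) patch tokens
            g, _ = self.attn(g, g, g)          # global spatial mixing
            tokens.append(g.mean(dim=1))       # pooled per-step descriptor
        seq = torch.stack(tokens, dim=1)       # (B, T, dim)
        out, _ = self.temporal(seq)            # temporal feature extraction
        return out[:, -1]                      # feature vector for a detection head


# Usage: five 64x64 event frames with 2 polarity channels per sample.
model = HybridSpikingViTSketch()
feat = model(torch.randn(2, 5, 2, 64, 64))
print(feat.shape)  # torch.Size([2, 64])

The key design point the abstract emphasizes is the split of labor: spiking convolutions keep spatial processing sparse and energy-efficient, attention supplies the global receptive field of a Transformer, and a dedicated temporal module captures dependencies across event frames that a purely spatial backbone would miss.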
Similar Papers
Spatiotemporal Attention Learning Framework for Event-Driven Object Recognition
CV and Pattern Recognition
Helps cameras see fast-moving things clearly.
Temporal-Guided Spiking Neural Networks for Event-Based Human Action Recognition
CV and Pattern Recognition
Helps computers see actions from tiny motion changes.
Temporal-Guided Visual Foundation Models for Event-Based Vision
CV and Pattern Recognition
Lets cameras see better in tough conditions.