SMTrack: End-to-End Trained Spiking Neural Networks for Multi-Object Tracking in RGB Videos
By: Pengzhi Zhong, Xinzhe Wang, Dan Zeng, and more
Potential Business Impact:
Tracks many moving things better with less power.
Brain-inspired Spiking Neural Networks (SNNs) exhibit significant potential for low-power computation, yet their application to visual tasks remains largely confined to image classification, object detection, and event-based tracking. In contrast, real-world vision systems still widely use conventional RGB video streams, where the potential of directly trained SNNs for complex temporal tasks such as multi-object tracking (MOT) remains underexplored. To address this challenge, we propose SMTrack, the first directly trained deep SNN framework for end-to-end multi-object tracking on standard RGB videos. SMTrack introduces an adaptive and scale-aware Normalized Wasserstein Distance loss (Asa-NWDLoss) to improve detection and localization performance under varying object scales and densities. Specifically, the method computes the average object size within each training batch and dynamically adjusts the normalization factor, thereby enhancing sensitivity to small objects. For the association stage, we incorporate the TrackTrack identity module to maintain robust and consistent object trajectories. Extensive evaluations on BEE24, MOT17, MOT20, and DanceTrack show that SMTrack achieves performance on par with leading ANN-based MOT methods, advancing robust and accurate SNN-based tracking in complex scenarios.
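To make the batch-adaptive normalization idea concrete, the sketch below shows how an NWD-style loss with a per-batch normalization factor could look in PyTorch. It is a minimal illustration assuming the standard Gaussian-box formulation of the Normalized Wasserstein Distance; the function name asa_nwd_loss and the choice of mean ground-truth side length as the batch statistic are assumptions for illustration, not the paper's actual implementation.

```python
import torch

def asa_nwd_loss(pred_boxes: torch.Tensor, gt_boxes: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of an adaptive, scale-aware NWD loss.

    Boxes are (cx, cy, w, h) tensors of shape (N, 4). Each box is modeled as a
    2D Gaussian N([cx, cy], diag(w^2/4, h^2/4)), for which the 2-Wasserstein
    distance has the closed form used below. The normalization factor C is
    taken as the average ground-truth object size of the current batch, which
    is the adaptive step the abstract describes.
    """
    # Batch-adaptive normalization factor: mean object side length in this batch.
    c = torch.sqrt(gt_boxes[:, 2] * gt_boxes[:, 3]).mean().clamp(min=1e-6)

    # Closed-form squared 2-Wasserstein distance between the Gaussian box models.
    center_dist = (pred_boxes[:, :2] - gt_boxes[:, :2]).pow(2).sum(dim=1)
    size_dist = ((pred_boxes[:, 2:] - gt_boxes[:, 2:]) / 2).pow(2).sum(dim=1)
    w2 = torch.sqrt(center_dist + size_dist + 1e-12)

    # Normalized Wasserstein Distance, mapped to a loss in [0, 1].
    nwd = torch.exp(-w2 / c)
    return (1.0 - nwd).mean()
```

Because C is recomputed per batch rather than fixed for the whole dataset, batches dominated by small objects shrink the normalization factor, which steepens the loss around small localization errors and gives small objects a larger share of the gradient.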
Similar Papers
SDTrack: A Baseline for Event-based Tracking via Spiking Neural Networks
Neural and Evolutionary Computing
Tracks moving things faster and using less power.
Data-Driven Object Tracking: Integrating Modular Neural Networks into a Kalman Framework
CV and Pattern Recognition
Helps cars see and follow other cars.
SpikeSMOKE: Spiking Neural Networks for Monocular 3D Object Detection with Cross-Scale Gated Coding
CV and Pattern Recognition
Saves energy for self-driving car vision.