SPARTA: Advancing Sparse Attention in Spiking Neural Networks via Spike-Timing-Based Prioritization
By: Minsuk Jang, Changick Kim
Potential Business Impact:
Makes computers smarter and faster using brain signals.
Current Spiking Neural Networks (SNNs) underutilize the temporal dynamics inherent in spike-based processing, relying primarily on rate coding while overlooking precise timing information that provides rich computational cues. We propose SPARTA (Spiking Priority Attention with Resource-Adaptive Temporal Allocation), a framework that leverages heterogeneous neuron dynamics and spike-timing information to enable efficient sparse attention. SPARTA prioritizes tokens based on temporal cues, including firing patterns, spike timing, and inter-spike intervals, achieving 65.4% sparsity through competitive gating. By selecting only the most salient tokens, SPARTA reduces attention complexity from O(N^2) to O(K^2) with K << N, while maintaining high accuracy. Our method achieves state-of-the-art performance on DVS-Gesture (98.78%) and competitive results on CIFAR10-DVS (83.06%) and CIFAR-10 (95.3%), demonstrating that exploiting spike timing dynamics improves both computational efficiency and accuracy.
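The complexity reduction described above comes from attending over only the K most salient tokens rather than all N. As a minimal illustrative sketch (not the authors' implementation), the example below uses earliest spike time as a hypothetical priority cue to select K tokens, then runs standard scaled dot-product attention restricted to that subset, so the quadratic cost scales with K^2 instead of N^2:

```python
import numpy as np

def sparse_attention_topk(x, spike_times, k):
    """Illustrative top-K sparse attention.

    x           : (N, d) token features
    spike_times : (N,) per-token first-spike times (hypothetical priority cue;
                  earlier spikes are treated as more salient)
    k           : number of tokens to keep, k << N
    """
    n, d = x.shape
    # Prioritize tokens by earliest spike timing and keep the top k.
    keep = np.argsort(spike_times)[:k]            # indices of the k most salient tokens
    xs = x[keep]                                  # (k, d) selected tokens
    # Scaled dot-product attention over the k selected tokens only:
    # cost is O(k^2 * d) instead of O(n^2 * d).
    scores = xs @ xs.T / np.sqrt(d)               # (k, k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ xs, keep                     # (k, d) attended features, kept indices
```

With N = 1024 and K = 64, the attention matrix shrinks from roughly a million entries to about four thousand, which is where the claimed efficiency gain originates.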
Similar Papers
Spatial Spiking Neural Networks Enable Efficient and Robust Temporal Computation
Neural and Evolutionary Computing
Makes smart computers learn faster with less memory.
DTA: Dual Temporal-channel-wise Attention for Spiking Neural Networks
CV and Pattern Recognition
Makes smart computers learn faster and use less power.