Revisiting Direct Encoding: Learnable Temporal Dynamics for Static Image Spiking Neural Networks
By: Huaxu He
Potential Business Impact:
Makes computers "see" moving pictures from still ones.
Handling static images that lack inherent temporal dynamics remains a fundamental challenge for spiking neural networks (SNNs). In directly trained SNNs, static inputs are typically repeated across time steps, causing the temporal dimension to collapse into a rate-like representation and preventing meaningful temporal modeling. This work revisits the reported performance gap between direct and rate-based encodings and shows that it primarily stems from convolutional learnability and surrogate-gradient formulations rather than from the encoding schemes themselves. To illustrate this mechanism-level clarification, we introduce a minimal learnable temporal encoding that adds adaptive phase shifts to induce meaningful temporal variation from static inputs.
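The contrast the abstract draws can be sketched numerically: plain direct encoding repeats the same static input at every time step, while a phase-shifted variant makes each step differ. The sinusoidal modulation form, the `amp` parameter, and the function names below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def direct_encode(x, T):
    """Plain direct encoding: repeat the static input at every time step,
    so the temporal dimension carries no new information."""
    return np.stack([x] * T, axis=0)  # shape (T, C)

def phase_shift_encode(x, T, phase, amp=0.5):
    """Hypothetical learnable temporal encoding: modulate each channel
    with a per-channel phase offset so consecutive time steps differ.

    x     : static input, shape (C,)
    phase : per-channel phase offsets, shape (C,) -- in the paper's setting
            these would be learned; here they are fixed for illustration
    amp   : modulation depth (illustrative choice)
    """
    t = np.arange(T)[:, None]                                   # (T, 1)
    modulation = 1.0 + amp * np.sin(2 * np.pi * t / T + phase)  # (T, C)
    return x[None, :] * modulation

# Two-channel toy input.
x = np.array([0.2, 0.8])
phase = np.array([0.0, np.pi / 2])

flat = direct_encode(x, T=4)              # every row identical
varied = phase_shift_encode(x, T=4, phase=phase)  # rows vary over time
```

Under plain repetition, `flat[0]` equals `flat[3]`, so downstream layers see a rate-like signal; under the phase-shifted sketch, successive time steps carry distinct values, giving the network temporal structure to exploit.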
Similar Papers
Spatio-Temporal Cluster-Triggered Encoding for Spiking Neural Networks
Neural and Evolutionary Computing
Makes computers see and understand pictures better.
Hybrid Temporal-8-Bit Spike Coding for Spiking Neural Network Surrogate Training
Neural and Evolutionary Computing
Makes AI see better with less power.
Learning Scalable Temporal Representations in Spiking Neural Networks Without Labels
Emerging Technologies
Teaches computers to learn from pictures without labels.