Revisiting Direct Encoding: Learnable Temporal Dynamics for Static Image Spiking Neural Networks

Published: December 1, 2025 | arXiv ID: 2512.01687v1

By: Huaxu He

Potential Business Impact:

Helps energy-efficient, brain-inspired networks recognize still images by giving them a sense of time.

Business Areas:
Image Recognition Data and Analytics, Software

Handling static images that lack inherent temporal dynamics remains a fundamental challenge for spiking neural networks (SNNs). In directly trained SNNs, static inputs are typically repeated across time steps, causing the temporal dimension to collapse into a rate-like representation and preventing meaningful temporal modeling. This work revisits the reported performance gap between direct and rate-based encodings and shows that it primarily stems from convolutional learnability and surrogate-gradient formulations rather than the encoding schemes themselves. To illustrate this mechanism-level clarification, we introduce a minimal learnable temporal encoding that adds adaptive phase shifts to induce meaningful temporal variation from static inputs.
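The contrast the abstract draws can be sketched in a few lines. Below, plain direct encoding repeats a static input identically at every time step, while a hypothetical phase-shift encoding modulates it with a sinusoid whose per-channel phases would be learned. The sinusoidal form, the `0.5` modulation depth, and the function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def direct_encode(x, T):
    """Standard direct encoding: repeat the static input at every time step,
    so the temporal dimension carries no new information."""
    return np.stack([x] * T, axis=0)  # shape (T, *x.shape)

def phase_shift_encode(x, T, phi):
    """Illustrative learnable temporal encoding (an assumption about the
    paper's idea): modulate the static input with a sinusoid whose per-channel
    phase `phi` would be learned, inducing genuine variation over time."""
    t = np.arange(T).reshape(T, *([1] * x.ndim))  # broadcasts over channels
    return x * (1.0 + 0.5 * np.sin(2 * np.pi * t / T + phi))

rng = np.random.default_rng(0)
x = rng.random(4)                          # toy 4-channel static input
phi = rng.uniform(0, 2 * np.pi, size=4)    # stand-in for learned phases

plain = direct_encode(x, T=8)              # every slice plain[t] is identical
shifted = phase_shift_encode(x, T=8, phi=phi)  # slices differ across t
```

With the repeated input, any downstream spiking neuron effectively sees a constant drive, which is why the temporal dimension collapses to a rate code; the phase-shifted version hands the network time-varying inputs it can actually exploit.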

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science:
Neural and Evolutionary Computing