Training-Free ANN-to-SNN Conversion for High-Performance Spiking Transformer
By: Jingya Wang, Xin Deng, Wenjie Wei, and more
Potential Business Impact:
Makes smart computer brains use less power.
Leveraging the event-driven paradigm, Spiking Neural Networks (SNNs) offer a promising approach for constructing energy-efficient Transformer architectures. Compared with directly trained Spiking Transformers, ANN-to-SNN conversion methods bypass the high training costs. However, existing conversion methods still have notable limitations: they fail to handle the nonlinear operations in Transformer architectures effectively and require additional fine-tuning of the pre-trained ANN. To address these issues, we propose a high-performance, training-free ANN-to-SNN conversion framework tailored to Transformer architectures. Specifically, we introduce a Multi-basis Exponential Decay (MBE) neuron, which combines an exponential decay strategy with a multi-basis encoding method to efficiently approximate various nonlinear operations, removing the need to modify the weights of the pre-trained ANN. Extensive experiments across diverse tasks (CV, NLU, NLG) and mainstream Transformer architectures (ViT, RoBERTa, GPT-2) demonstrate that our method achieves near-lossless conversion accuracy with significantly lower latency, providing a promising pathway for the efficient and scalable deployment of Spiking Transformers in real-world applications.
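The abstract describes the MBE neuron only at a high level, so the sketch below is an illustrative reading rather than the authors' implementation: a non-negative activation is encoded by binary spikes whose weights decay exponentially over timesteps across several bases, and a nonlinearity (GELU here) is then evaluated on the decoded value. The basis values, decay factor, greedy firing rule, and all function and parameter names are assumptions.

```python
import numpy as np

def mbe_approximate(x, bases=(1.0, 1.0 / 3.0), decay=0.5, timesteps=8):
    """Greedy multi-basis spike encoding with exponentially decaying weights.

    Hypothetical sketch: the basis values, decay factor, and greedy firing
    rule are assumptions, not the paper's exact MBE formulation. Inputs are
    assumed non-negative and within the encodable range.
    """
    x = np.asarray(x, dtype=np.float64)
    residual = x.copy()
    reconstruction = np.zeros_like(x)
    for t in range(timesteps):
        for base in bases:
            weight = base * (decay ** t)   # spike weight decays exponentially over time
            fire = residual >= weight      # binary spike: fire only where it does not overshoot
            residual = residual - fire * weight
            reconstruction = reconstruction + fire * weight
    return reconstruction                  # leftover residual shrinks as timesteps grow

def gelu(x):
    """tanh approximation of GELU, the nonlinear operation being emulated here."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

if __name__ == "__main__":
    x = np.linspace(0.0, 1.9, 9)
    x_hat = mbe_approximate(x)
    print("max encoding error :", np.abs(x - x_hat).max())
    print("max GELU error     :", np.abs(gelu(x) - gelu(x_hat)).max())
```

Under this reading, the number of bases and the decay factor trade spike count against approximation precision, which is consistent with the abstract's claim of near-lossless accuracy at low latency.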
Similar Papers
One-Timestep is Enough: Achieving High-performance ANN-to-SNN Conversion via Scale-and-Fire Neurons
Neural and Evolutionary Computing
Makes AI think faster and use less power.
Hybrid Layer-Wise ANN-SNN With Surrogate Spike Encoding-Decoding Structure
Neural and Evolutionary Computing
Makes smart computers use less power.
Efficient ANN-SNN Conversion with Error Compensation Learning
Machine Learning (CS)
Makes smart computer brains work faster on small devices.