Hardware Efficient Accelerator for Spiking Transformer With Reconfigurable Parallel Time Step Computing

Published: March 25, 2025 | arXiv ID: 2503.19643v1

By: Bo-Yu Chen, Tian-Sheuan Chang

Potential Business Impact:

Enables AI chips that run neural network inference with far less power.

Business Areas:
Application Specific Integrated Circuit (ASIC) Hardware

This paper presents the first low-power hardware accelerator for Spiking Transformers, an emerging alternative to conventional artificial neural networks. The base Spikformer model is modified to replace residual addition with IAND, so the model relies exclusively on spike computation. The hardware employs a fully parallel tick-batching dataflow and a time-step-reconfigurable neuron architecture to address the latency and power overhead of multi-time-step processing in spiking neural networks: outputs for all time steps are produced in parallel, which reduces computation delay and eliminates membrane memory, thereby lowering energy consumption. The accelerator supports 3x3 and 1x1 convolutions as well as matrix operations through vectorized processing, covering the operations required by the model. Implemented in TSMC's 28nm process, it achieves 3.456 TSOPS (tera spike operations per second) with a power efficiency of 38.334 TSOPS/W at 500 MHz, using 198.46K logic gates and 139.25KB of SRAM.
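The sketch below is not the authors' hardware design; it is a minimal Python/NumPy illustration of the two ideas named in the summary, under stated assumptions. The IAND residual is assumed to follow the common SEW-ResNet convention IAND(a, s) = (1 − a)·s, and the tick-batched neuron is modeled as a simple integrate-and-fire unit with hard reset whose membrane value lives only in a local variable (mimicking a register rather than persistent membrane memory). Function names such as `iand_residual` and `if_neuron_tick_batched` are hypothetical.

```python
import numpy as np


def iand_residual(branch_out: np.ndarray, shortcut: np.ndarray) -> np.ndarray:
    """Spike-only residual: IAND(a, s) = (NOT a) AND s, elementwise on {0,1}.

    Keeps the result binary, so the layer output remains a spike tensor
    instead of the multi-bit values produced by residual addition.
    """
    return (1 - branch_out) * shortcut


def if_neuron_tick_batched(currents: np.ndarray, v_th: float = 1.0) -> np.ndarray:
    """Integrate-and-fire over all T time steps in one call (tick-batching).

    currents: shape (T, N) -- synaptic input for every time step, available
              at once because the layer is evaluated for all time steps
              before moving on to the next layer.
    Returns binary spikes of shape (T, N). The membrane potential `v` is a
    transient local value (hard reset to 0 on firing), so no per-step
    membrane memory needs to be kept between layer invocations.
    """
    T, N = currents.shape
    spikes = np.zeros((T, N), dtype=np.uint8)
    v = np.zeros(N)                      # transient membrane, not stored off-chip
    for t in range(T):                   # unrolled across time steps in hardware
        v = v + currents[t]
        fired = v >= v_th
        spikes[t] = fired
        v = np.where(fired, 0.0, v)      # hard reset after a spike
    return spikes


# Toy usage: T = 4 time steps, 8 neurons, binary spike shortcut.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(4, 8)).astype(np.uint8)       # shortcut spikes
branch = if_neuron_tick_batched(rng.normal(size=(4, 8)))    # branch output spikes
y = iand_residual(branch, x)                                # spike-only residual
print(y)
```

In this sketch, tick-batching shows up as the `(T, N)` input shape: all time steps of one layer are processed back to back, so the membrane accumulator never has to be written out and read back per step, which is the mechanism the paper credits for removing membrane memory and its energy cost.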

Country of Origin
🇹🇼 Taiwan, Province of China

Page Count
5 pages

Category
Computer Science:
Hardware Architecture