Hardware Efficient Accelerator for Spiking Transformer With Reconfigurable Parallel Time Step Computing
By: Bo-Yu Chen, Tian-Sheuan Chang
Potential Business Impact:
Makes AI brains use less power to think.
This paper introduces the first low-power hardware accelerator for Spiking Transformers, an emerging alternative to traditional artificial neural networks. The base Spikformer model is modified to use IAND in place of residual addition, so the model relies exclusively on spike computation. The hardware employs a fully parallel tick-batching dataflow and a time-step reconfigurable neuron architecture, addressing the delay and power challenges of multi-timestep processing in spiking neural networks. This approach processes outputs from all time steps in parallel, reducing computation delay and eliminating membrane memory, thereby lowering energy consumption. The accelerator supports 3x3 and 1x1 convolutions as well as matrix operations through vectorized processing, covering all the operations the model requires. Implemented in TSMC's 28nm process, it achieves 3.456 TSOPS (tera spike operations per second) with a power efficiency of 38.334 TSOPS/W at 500MHz, using 198.46K logic gates and 139.25KB of SRAM.
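The sketch below is a minimal NumPy illustration (not the authors' hardware design) of two ideas in the summary: an IAND gate standing in for residual addition so every value stays a binary spike, and an integrate-and-fire neuron evaluated over all T time steps in one pass so the membrane potential never has to be written to memory between steps. The function names, the exact IAND convention (out = NOT a AND b), and the hard-reset neuron model are assumptions for illustration only.

```python
import numpy as np

def iand_residual(shortcut: np.ndarray, branch: np.ndarray) -> np.ndarray:
    """Spike-domain 'residual': IAND(shortcut, branch) = (NOT shortcut) AND branch.

    Both inputs are {0,1} spike tensors, so the output is also {0,1};
    no multi-bit adder is needed, unlike a conventional residual addition.
    (Exact IAND convention assumed for illustration.)
    """
    return (1 - shortcut) * branch

def if_neuron_all_timesteps(currents: np.ndarray, v_th: float = 1.0) -> np.ndarray:
    """Integrate-and-fire over all T time steps in a single call.

    currents: shape (T, N). With all time steps available up front
    (tick-batching), the membrane potential lives only in a local variable
    (a register in hardware) and is never stored to SRAM between steps.
    Returns a (T, N) binary spike train. Hard reset is an assumed rule.
    """
    T, N = currents.shape
    v = np.zeros(N)                      # membrane potential, kept "on chip"
    spikes = np.zeros((T, N), dtype=np.int8)
    for t in range(T):                   # unrolled across time in parallel hardware
        v = v + currents[t]              # integrate
        fired = v >= v_th
        spikes[t] = fired
        v = np.where(fired, 0.0, v)      # hard reset after firing
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N = 4, 8
    x = rng.random((T, N))                       # toy input currents
    s = if_neuron_all_timesteps(x)               # spikes for all T steps at once
    shortcut = rng.integers(0, 2, size=(T, N))   # a binary shortcut path
    print(iand_residual(shortcut, s))            # spike-only "residual" merge
```

Because the merge and the neuron outputs stay binary, downstream layers can keep using spike (AND/accumulate) arithmetic instead of full additions, which is the property the paper's spike-only design exploits.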
Similar Papers
Low Power Vision Transformer Accelerator with Hardware-Aware Pruning and Optimized Dataflow
Hardware Architecture
Makes computer vision faster and use less power.
ASTER: Attention-based Spiking Transformer Engine for Event-driven Reasoning
Hardware Architecture
Makes smart cameras use less power to see.
Design and Implementation of an FPGA-Based Hardware Accelerator for Transformer
Hardware Architecture
Makes AI models run much faster and cheaper.