Lightweight LIF-only SNN accelerator using differential time encoding
By: Daniel Windhager, Lothar Ratschbacher, Bernhard A. Moser and more
Potential Business Impact:
Makes AI run faster and use less power.
Spiking Neural Networks (SNNs) offer a promising solution to the growing computational and energy requirements of modern Machine Learning (ML) applications. Because they represent data as spikes and spike trains, they rely mostly on additions and thresholding operations to achieve results approaching state-of-the-art (SOTA) Artificial Neural Networks (ANNs). This advantage is offset by the fact that their temporal dynamics map poorly onto existing accelerator hardware such as GPUs. This work therefore introduces a hardware accelerator architecture capable of computing feedforward LIF-only SNNs, along with an accompanying encoding method that efficiently converts existing data into spike trains. Together, these yield a design that achieves >99% accuracy on the MNIST dataset, with ~0.29 ms inference times on a Xilinx UltraScale+ FPGA and ~0.17 ms on a custom ASIC using the open-source predictive 7 nm ASAP7 PDK. Furthermore, this work showcases the advantages of the previously presented differential time encoding for spikes, and shows that spike trains from different synapses, given in differential time encoding, can be merged efficiently in hardware.
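The two mechanisms the abstract leans on can be illustrated compactly: a LIF neuron update built from additions and a threshold, and the merging of spike trains stored as inter-spike deltas (differential time encoding). The following is a minimal Python sketch, not the paper's hardware design; the names (`lif_step`, `merge_delta_trains`) and the subtractive-leak model are illustrative assumptions.

```python
def lif_step(v, spikes_in, weights, threshold=1.0, leak=0.0625):
    """One leaky integrate-and-fire update: accumulate weighted input
    spikes (additions only), subtract a leak, and apply a threshold.
    A subtractive leak is assumed here as a hardware-friendly stand-in
    for exponential decay."""
    v = v - leak + sum(w for w, s in zip(weights, spikes_in) if s)
    if v >= threshold:
        return 0.0, True   # reset membrane potential, emit a spike
    return v, False

def merge_delta_trains(a, b):
    """Merge two spike trains given as inter-spike intervals (deltas)
    into one delta-encoded train, without reconstructing absolute
    times. Only comparisons and subtractions on small residuals are
    needed, which is why the operation suits hardware."""
    out, i, j = [], 0, 0
    ra = a[0] if a else None  # residual delta of stream a
    rb = b[0] if b else None  # residual delta of stream b
    while ra is not None or rb is not None:
        if rb is None or (ra is not None and ra <= rb):
            out.append(ra)
            if rb is not None:
                rb -= ra            # advance the other stream's residual
            i += 1
            ra = a[i] if i < len(a) else None
        else:
            out.append(rb)
            if ra is not None:
                ra -= rb
            j += 1
            rb = b[j] if j < len(b) else None
    return out

# Example: deltas [2, 3] (spikes at t=2,5) merged with [4] (spike at t=4)
# give [2, 2, 1] (spikes at t=2,4,5).
assert merge_delta_trains([2, 3], [4]) == [2, 2, 1]
```

The merge works directly on deltas by keeping one residual per stream: emitting the smaller residual and subtracting it from the other reproduces a sorted merge of the absolute spike times while touching only small interval values.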
Similar Papers
A PyTorch-Compatible Spike Encoding Framework for Energy-Efficient Neuromorphic Applications
Machine Learning (CS)
Makes computer brains use less power.
Spiking Neural Networks: The Future of Brain-Inspired Computing
Neural and Evolutionary Computing
Makes computers use less power to think.
Differential Coding for Training-Free ANN-to-SNN Conversion
CV and Pattern Recognition
Makes AI use less power and be faster.