Implementation and Analysis of Thermometer Encoding in DWN FPGA Accelerators
By: Michael Mecik, Martin Kumm
Potential Business Impact:
Makes smart computer chips use less space.
Fully parallel neural network accelerators on field-programmable gate arrays (FPGAs) offer high throughput for latency-critical applications but face tight hardware resource constraints. Weightless neural networks (WNNs) replace costly arithmetic with logic-based inference. Differentiable weightless neural networks (DWNs) further reduce resource usage by learning the connections between input encoders and LUT layers through gradient-based training. However, DWNs rely on thermometer encoding of their inputs, and the hardware cost of this encoding step has not been fully evaluated. We present a DWN hardware generator that models thermometer encoding explicitly. Experiments on the Jet Substructure Classification (JSC) task show that the encoding can increase LUT usage by up to 3.20×, dominating resource costs in small networks and highlighting the need for encoding-aware hardware design in DWN accelerators.
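Thermometer encoding turns each real-valued input into a unary bit pattern: bit t is set iff the input exceeds the t-th threshold, so a T-bit code requires T comparators per feature in hardware, which is where the encoder's LUT cost comes from. The sketch below is a minimal software model of this encoding, assuming per-feature thresholds placed at training-set quantiles (a distributive scheme); the function names and the 16-feature, 8-bit configuration are illustrative choices, not taken from the paper.

```python
import numpy as np

def thermometer_encode(x, thresholds):
    """Thermometer-encode scalar features: bit t is 1 iff x >= thresholds[t].

    x:          (n_samples, n_features) array of real-valued inputs
    thresholds: (n_features, n_bits) array of per-feature thresholds
    Returns a binary array of shape (n_samples, n_features, n_bits).
    """
    return (x[:, :, None] >= thresholds[None, :, :]).astype(np.uint8)

def quantile_thresholds(x_train, n_bits):
    """Place each feature's thresholds at equally populated quantiles
    of the training data (uniform spacing between min and max also works)."""
    qs = np.linspace(0.0, 1.0, n_bits + 2)[1:-1]   # interior quantiles only
    return np.quantile(x_train, qs, axis=0).T       # (n_features, n_bits)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_train = rng.normal(size=(1000, 16))           # e.g. 16 JSC input features
    thr = quantile_thresholds(x_train, n_bits=8)
    bits = thermometer_encode(x_train[:4], thr)
    print(bits.shape)   # (4, 16, 8): 8 thermometer bits per feature
```

Because each threshold comparison synthesizes to a comparator of the full input bit width, the encoder's footprint grows with both the number of features and the bits per thermometer; for small LUT networks this fixed front-end can therefore dominate the overall resource budget, matching the up-to-3.20× LUT increase reported above.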
Similar Papers
Ternary-Input Binary-Weight CNN Accelerator Design for Miniature Object Classification System with Query-Driven Spatial DVS
Hardware Architecture
Lets tiny cameras see more with less power.
Hardwired-Neurons Language Processing Units as General-Purpose Cognitive Substrates
Hardware Architecture
Makes AI understand words much faster and cheaper.
The Energy-Efficient Hierarchical Neural Network with Fast FPGA-Based Incremental Learning
Machine Learning (CS)
Makes AI learn faster and use less power.