Enhancing LUT-based Deep Neural Network Inference through Architecture and Connectivity Optimization
By: Binglei Lou, Ruilin Wu, Philip Leong
Deploying deep neural networks (DNNs) on resource-constrained edge devices such as FPGAs requires a careful balance among latency, power, and hardware resource usage, while maintaining high accuracy. Existing Lookup Table (LUT)-based DNNs -- such as LogicNets, PolyLUT, and NeuraLUT -- face two critical challenges: the exponential growth of LUT size with neuron fan-in, and inefficient random sparse connectivity. This paper presents SparseLUT, a comprehensive framework that addresses these challenges through two orthogonal optimizations. First, we propose an architectural enhancement that aggregates multiple PolyLUT sub-neurons via an adder, reducing LUT consumption by 2.0x-13.9x and inference latency by 1.2x-1.6x while maintaining comparable accuracy. Building on this foundation, we introduce a non-greedy training algorithm that optimizes neuron connectivity by pruning less significant inputs and regrowing more effective ones. This training optimization incurs no additional area or latency overhead and delivers consistent accuracy improvements across benchmarks, achieving gains of up to 2.13% on MNIST and 0.94% on Jet Substructure Classification over existing LUT-DNN approaches.
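To make the adder-based aggregation concrete, the following is a minimal NumPy sketch of one neuron assembled from small-fan-in polynomial sub-neurons whose outputs are summed before the activation. The class name AddLUTNeuron, the fan-in of 4, the two sub-neurons, and the degree-2 feature map are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_features(x, degree=2):
    """Monomials of x up to `degree`: the raw inputs plus, for
    degree >= 2, all pairwise products x_i * x_j (i <= j)."""
    feats = [x]
    if degree >= 2:
        feats.append(np.outer(x, x)[np.triu_indices(len(x))])
    return np.concatenate(feats)

class AddLUTNeuron:
    """One neuron assembled from `num_sub` PolyLUT-style sub-neurons.

    Each sub-neuron sees only `fan_in` inputs, so in hardware it maps
    to a 2**(bits * fan_in)-entry LUT; an adder combines the sub-neuron
    outputs, costing num_sub * 2**(bits * fan_in) LUT entries instead
    of 2**(bits * num_sub * fan_in) for one monolithic wide neuron.
    """
    def __init__(self, n_inputs, fan_in=4, num_sub=2, degree=2):
        # Each sub-neuron gets its own sparse input subset and weights.
        self.conn = [rng.choice(n_inputs, fan_in, replace=False)
                     for _ in range(num_sub)]
        n_feats = poly_features(np.zeros(fan_in), degree).size
        self.w = [rng.normal(size=n_feats) for _ in range(num_sub)]
        self.degree = degree

    def forward(self, x):
        # The hardware adder: sum the sub-neuron outputs, then activate.
        acc = sum(w @ poly_features(x[c], self.degree)
                  for w, c in zip(self.w, self.conn))
        return max(acc, 0.0)  # ReLU before re-quantization

neuron = AddLUTNeuron(n_inputs=16)
print(neuron.forward(rng.normal(size=16)))
```

Splitting a fan-in of 8 into two fan-in-4 sub-neurons, for example, replaces one 2^(8b)-entry table with two 2^(4b)-entry tables plus a cheap adder, which is the source of the reported LUT savings.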
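The connectivity optimization can likewise be sketched as a periodic prune-and-regrow update on each neuron's binary input mask. The abstract does not spell out the selection criterion, so this sketch substitutes a common dynamic-sparse-training rule (magnitude-based pruning, gradient-based regrowth); the helper prune_and_regrow and all sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def prune_and_regrow(weights, grads, mask):
    """One connectivity update on a layer's binary input mask.

    Per neuron (row): prune the active input with the smallest weight
    magnitude and regrow the inactive input with the largest gradient
    magnitude. A stand-in criterion; the paper's non-greedy rule is
    not given in the abstract.
    """
    new_mask = mask.copy()
    for i in range(weights.shape[0]):
        active = np.flatnonzero(mask[i] == 1)
        inactive = np.flatnonzero(mask[i] == 0)
        drop = active[np.argmin(np.abs(weights[i, active]))]
        grow = inactive[np.argmax(np.abs(grads[i, inactive]))]
        new_mask[i, drop], new_mask[i, grow] = 0, 1
    return new_mask

# Toy usage: 8 neurons, each choosing 4 inputs out of 32 candidates.
n_out, n_in, fan_in = 8, 32, 4
mask = np.zeros((n_out, n_in), dtype=np.int8)
for row in mask:
    row[rng.choice(n_in, fan_in, replace=False)] = 1
w = rng.normal(size=(n_out, n_in))
g = rng.normal(size=(n_out, n_in))
mask = prune_and_regrow(w, g, mask)
assert (mask.sum(axis=1) == fan_in).all()
```

Because each neuron's fan-in is held constant, the update changes which inputs feed each LUT but not the LUT's size or depth, which is consistent with the abstract's claim of zero additional area or latency overhead.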