Online Learning Extreme Learning Machine with Low-Complexity Predictive Plasticity Rule and FPGA Implementation
By: Zhenya Zang, Xingda Li, David Day Uei Li
Potential Business Impact:
Teaches computers to learn faster with less power.
We propose a simplified, biologically inspired predictive local learning rule that eliminates the need for global backpropagation in conventional neural networks and for membrane integration in event-based training. Weight updates are triggered only by prediction errors and are performed as sparse, binary-driven vector additions. We integrate this rule into an extreme learning machine (ELM), replacing the conventional, computationally intensive matrix inversion. Compared to standard ELM, our approach reduces training complexity from O(M^3) to O(M), where M is the number of hidden-layer nodes, while maintaining comparable accuracy (within 3.6% and 2.0% degradation on the training and test datasets, respectively). We demonstrate an FPGA implementation and compare it with existing studies, showing significant reductions in computational and memory requirements. This design shows strong potential for energy-efficient online learning on low-cost edge devices.
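The abstract does not include code, but the contrast it draws can be illustrated with a short Python sketch. The version below is one plausible reading, not the authors' implementation: it assumes a binarized (0/1) hidden activation and a perceptron-style update on only the active hidden units, which is one way to realize "sparse, binary-driven vector additions" triggered only by prediction errors. All function names (train_elm_predictive, train_elm_pinv) and hyperparameters (n_hidden, epochs, lr) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def train_elm_predictive(X, y, n_hidden=128, n_classes=10, epochs=5, lr=1.0):
    """Sketch: ELM trained with an error-triggered local rule (assumed form),
    instead of the usual pseudoinverse solve."""
    n_features = X.shape[1]
    # Random, fixed input weights and biases (standard ELM ingredient).
    W_in = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Trainable output weights, changed only when a prediction is wrong.
    W_out = np.zeros((n_hidden, n_classes))

    for _ in range(epochs):
        for x, target in zip(X, y):
            # Binarized hidden activation: a sparse 0/1 vector (assumption).
            h = (x @ W_in + b > 0).astype(np.float64)
            pred = int(np.argmax(h @ W_out))
            if pred != target:  # update triggered only by a prediction error
                # Sparse vector additions: only rows with h == 1 change,
                # so each update costs O(M) for M hidden nodes.
                W_out[:, target] += lr * h
                W_out[:, pred] -= lr * h
    return W_in, b, W_out


def train_elm_pinv(X, y_onehot, W_in, b):
    """Standard ELM baseline for comparison: Moore-Penrose pseudoinverse,
    O(M^3) in the hidden width M (ReLU hidden layer assumed here)."""
    H = np.maximum(X @ W_in + b, 0.0)
    return np.linalg.pinv(H) @ y_onehot


# Toy usage on random data (illustrative only, not the paper's benchmark).
X = rng.standard_normal((200, 16))
y = rng.integers(0, 3, size=200)
W_in, b, W_out = train_elm_predictive(X, y, n_hidden=64, n_classes=3)
```

The key point of the sketch is the cost per update: the error-triggered rule touches each hidden node at most once per sample, whereas the pseudoinverse baseline must factor an M-by-M system, which is where the claimed O(M^3) to O(M) reduction comes from.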
Similar Papers
Fast Learning in Quantitative Finance with Extreme Learning Machine
Computational Finance
Makes computers solve finance problems much faster.
The Energy-Efficient Hierarchical Neural Network with Fast FPGA-Based Incremental Learning
Machine Learning (CS)
Makes AI learn faster and use less power.
Towards On-Device Learning and Reconfigurable Hardware Implementation for Encoded Single-Photon Signal Processing
Machine Learning (CS)
Helps machines understand light signals faster.