A flexible framework for structural plasticity in GPU-accelerated sparse spiking neural networks
By: James C. Knight, Johanna Senk, Thomas Nowotny
Potential Business Impact:
Makes computer brains learn faster and use less power.
The majority of research in both training Artificial Neural Networks (ANNs) and modeling learning in biological brains focuses on synaptic plasticity, where learning equates to changing the strength of existing connections. However, in biological brains, structural plasticity, in which new connections are created and others removed, is also vital, not only for effective learning but also for recovery from damage and optimal resource usage. Inspired by structural plasticity, pruning is often used in machine learning to remove weak connections from trained models and so reduce the computational requirements of inference. However, the machine learning frameworks typically used for backpropagation-based training of both ANNs and Spiking Neural Networks (SNNs) are optimized for dense connectivity, meaning that pruning does not help reduce the training costs of ever-larger models. The GeNN simulator already supports efficient GPU-accelerated simulation of sparse SNNs for computational neuroscience and machine learning. Here, we present a new flexible framework for implementing GPU-accelerated structural plasticity rules and demonstrate it first using the e-prop supervised learning rule and DEEP R to train efficient, sparse SNN classifiers and then, in an unsupervised learning context, to learn topographic maps. Compared to baseline dense models, our sparse classifiers reduce training time by up to 10x, while the DEEP R rewiring enables them to perform as well as the original models. We demonstrate topographic map formation in faster-than-real-time simulations, provide insights into the connectivity evolution, and measure simulation speed versus network size. The proposed framework will enable further research into achieving and maintaining sparsity in network structure and neural communication, as well as exploring the computational benefits of sparsity in a range of neuromorphic applications.
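To make the rewiring idea concrete, here is a minimal numpy sketch of a DEEP R-style prune-and-regrow step. It is an illustration under stated assumptions, not the paper's implementation or the GeNN/PyGeNN API: the layer sizes, learning rate, and the fake_grad array (standing in for the e-prop gradients the paper actually uses) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

num_pre, num_post = 100, 50          # illustrative layer sizes, not from the paper
sparsity = 0.1                       # fraction of possible connections kept active
num_active = int(sparsity * num_pre * num_post)

# Signed connection parameters theta: a connection is "active" while theta > 0.
# DEEP R assigns each connection a fixed sign; effective weight = sign * theta.
theta = np.zeros((num_pre, num_post))
signs = rng.choice([-1.0, 1.0], size=theta.shape)

# Initialise a random sparse set of active connections.
initial = rng.choice(theta.size, size=num_active, replace=False)
theta.flat[initial] = rng.uniform(0.01, 0.1, size=num_active)


def deep_r_step(theta, grad, lr=1e-3, l1=1e-5, noise_scale=1e-4):
    """One DEEP R-style update: gradient step on active connections,
    prune those whose parameter crosses zero, then regrow dormant
    connections at random so the number of active synapses stays fixed."""
    active = theta > 0.0

    # Gradient descent + L1 shrinkage + exploratory noise on active connections only.
    noise = noise_scale * rng.standard_normal(theta.shape)
    theta = np.where(active, theta - lr * (grad + l1) + noise, theta)

    # Prune: connections whose parameter dropped to <= 0 become dormant.
    theta = np.where(theta > 0.0, theta, 0.0)

    # Regrow: reactivate randomly chosen dormant connections to restore sparsity.
    num_missing = num_active - int(np.count_nonzero(theta))
    if num_missing > 0:
        dormant = np.flatnonzero(theta == 0.0)
        reborn = rng.choice(dormant, size=num_missing, replace=False)
        theta.flat[reborn] = 1e-3    # small positive value reactivates the connection
    return theta


# Drive the update with a surrogate "gradient" (stand-in for e-prop updates).
for step in range(100):
    fake_grad = 0.01 * rng.standard_normal(theta.shape)
    theta = deep_r_step(theta, fake_grad)

weights = signs * theta              # effective synaptic weights seen by the network
print("active connections:", int(np.count_nonzero(theta)), "of", theta.size)
```

The property this sketch preserves is the one the abstract relies on: pruned connections are replaced by randomly regrown ones, so the total number of active synapses, and hence the memory and compute cost of the sparse network, stays constant throughout training.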
Similar Papers
Neuro-inspired Ensemble-to-Ensemble Communication Primitives for Sparse and Efficient ANNs
Machine Learning (CS)
Makes AI smarter with less computer power.
Multi-Plasticity Synergy with Adaptive Mechanism Assignment for Training Spiking Neural Networks
Neural and Evolutionary Computing
Teaches computer brains to learn better, faster.
Toward Efficient Spiking Transformers: Synapse Pruning Meets Synergistic Learning-Based Compensation
Machine Learning (CS)
Makes AI smarter, smaller, and faster.