Effects of Introducing Synaptic Scaling on Spiking Neural Network Learning
By: Shinnosuke Touda, Hirotsugu Okuno
Potential Business Impact:
Teaches computers to learn faster from examples.
Spiking neural networks (SNNs) employing unsupervised learning methods inspired by neural plasticity are expected to become a new framework for artificial intelligence. In this study, we investigated the effect of multiple types of neural plasticity, such as spike-timing-dependent plasticity (STDP) and synaptic scaling, on learning in a winner-take-all (WTA) network composed of spiking neurons. We implemented a WTA network with multiple types of neural plasticity in Python. The MNIST and Fashion-MNIST datasets were used for training and testing. We varied the number of neurons, the time constant of STDP, and the normalization method used in synaptic scaling, and compared the resulting classification accuracy. The results demonstrated that synaptic scaling based on the L2 norm was the most effective in improving classification performance. By implementing L2-norm-based synaptic scaling and setting the number of neurons in both the excitatory and inhibitory layers to 400, the network achieved classification accuracies of 88.84% on the MNIST dataset and 68.01% on the Fashion-MNIST dataset after one epoch of training.
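As a rough illustration of the L2-norm-based synaptic scaling the abstract refers to, the sketch below rescales each excitatory neuron's incoming weight vector to a fixed L2 norm using NumPy. The function name, weight-matrix shape (784 MNIST pixels to 400 excitatory neurons), and target norm are assumptions for illustration only, not the authors' actual implementation.

```python
import numpy as np

def l2_synaptic_scaling(weights, target_norm=1.0):
    """Rescale each postsynaptic neuron's incoming weights to a fixed L2 norm.

    weights: array of shape (n_inputs, n_excitatory); column j holds the
    incoming synaptic weights of excitatory neuron j.
    """
    norms = np.linalg.norm(weights, axis=0, keepdims=True)  # L2 norm per neuron
    norms[norms == 0] = 1.0                                  # avoid division by zero
    return weights * (target_norm / norms)

# Illustrative shapes: 784 pixel inputs (MNIST) projecting to 400 excitatory neurons
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.3, size=(784, 400))
w = l2_synaptic_scaling(w, target_norm=1.0)
print(np.linalg.norm(w, axis=0)[:5])  # each column now has unit L2 norm
```

In practice, a scaling step like this would be applied periodically during training so that STDP-driven weight growth in frequently winning neurons is counterbalanced, keeping competition in the WTA layer balanced across neurons.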
Similar Papers
Extending Spike-Timing Dependent Plasticity to Learning Synaptic Delays
Neural and Evolutionary Computing
Teaches computer brains to learn faster.
Multi-Plasticity Synergy with Adaptive Mechanism Assignment for Training Spiking Neural Networks
Neural and Evolutionary Computing
Teaches computer brains to learn better, faster.
Toward Efficient Spiking Transformers: Synapse Pruning Meets Synergistic Learning-Based Compensation
Machine Learning (CS)
Makes AI smarter, smaller, and faster.