Scalable Construction of Spiking Neural Networks using up to thousands of GPUs
By: Bruno Golosio, Gianmarco Tiddia, José Villamar and more
Diverse scientific and engineering research areas deal with discrete, time-stamped changes in large systems of interacting delay differential equations. Simulating such complex systems at scale on high-performance computing clusters demands efficient management of communication and memory. Inspired by the human cerebral cortex -- a sparsely connected network of $\mathcal{O}(10^{10})$ neurons, each forming $\mathcal{O}(10^{3})$--$\mathcal{O}(10^{4})$ synapses and communicating via short electrical pulses called spikes -- we study the simulation of large-scale spiking neural networks for computational neuroscience research. This work presents a novel network construction method for multi-GPU clusters and upcoming exascale supercomputers using the Message Passing Interface (MPI), where each process builds its local connectivity and prepares the data structures for efficient spike exchange across the cluster during state propagation. We demonstrate the scaling performance of two cortical models using point-to-point and collective communication, respectively.
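The central idea in the abstract, each MPI process building only its local connectivity during construction and then exchanging spikes during state propagation, can be illustrated with a minimal sketch. This is not the paper's implementation: the round-robin neuron partition, the constants N_LOCAL and FANOUT, the random connectivity, and the spike-selection rule are assumptions made only to show the pattern of the collective-communication variant (MPI_Alltoall for counts, MPI_Alltoallv for spike ids).

#include <mpi.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int N_LOCAL = 1000;  // neurons owned by this rank (illustrative)
    const int FANOUT  = 10;    // outgoing connections per neuron (illustrative)

    // Construction phase: each rank draws only its own outgoing connections
    // and records, per target rank, which of its neurons project there.
    // No rank ever holds the full connectivity of the network.
    std::vector<std::vector<int>> sources_per_target_rank(nranks);
    std::srand(12345 + rank);
    for (int i = 0; i < N_LOCAL; ++i) {
        int src_gid = rank * N_LOCAL + i;  // global id of a local neuron
        for (int k = 0; k < FANOUT; ++k) {
            int tgt_gid  = std::rand() % (N_LOCAL * nranks);
            int tgt_rank = tgt_gid / N_LOCAL;
            sources_per_target_rank[tgt_rank].push_back(src_gid);
        }
    }

    // State propagation (one step): pretend every 100th recorded projection
    // fired, and ship the source ids to the ranks hosting the targets.
    std::vector<int> sendbuf;
    std::vector<int> sendcounts(nranks), sdispls(nranks);
    for (int r = 0; r < nranks; ++r) {
        sdispls[r] = static_cast<int>(sendbuf.size());
        for (std::size_t j = 0; j < sources_per_target_rank[r].size(); j += 100)
            sendbuf.push_back(sources_per_target_rank[r][j]);
        sendcounts[r] = static_cast<int>(sendbuf.size()) - sdispls[r];
    }

    // Collective exchange: first the per-rank spike counts, then the ids.
    std::vector<int> recvcounts(nranks), rdispls(nranks);
    MPI_Alltoall(sendcounts.data(), 1, MPI_INT,
                 recvcounts.data(), 1, MPI_INT, MPI_COMM_WORLD);
    int total = 0;
    for (int r = 0; r < nranks; ++r) { rdispls[r] = total; total += recvcounts[r]; }
    std::vector<int> recvbuf(total > 0 ? total : 1);
    MPI_Alltoallv(sendbuf.data(), sendcounts.data(), sdispls.data(), MPI_INT,
                  recvbuf.data(), recvcounts.data(), rdispls.data(), MPI_INT,
                  MPI_COMM_WORLD);

    std::printf("rank %d received %d spike ids this step\n", rank, total);
    MPI_Finalize();
    return 0;
}

Compiled with mpic++ and launched with mpirun, each rank reports how many remote spike ids it received in the step. The point-to-point variant mentioned in the abstract would instead exchange messages only between ranks that actually share connections; the construction phase above would stay essentially the same.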