Spatio-Temporal Pruning for Compressed Spiking Large Language Models
By: Yi Jiang, Malyaban Bal, Brian Matejek, and more
Potential Business Impact:
Makes smart computer brains use less power.
Large Language Models (LLMs) present significant challenges for deployment in energy-constrained environments due to their large model sizes and high inference latency. Spiking Neural Networks (SNNs), inspired by the sparse, event-driven neural processing and energy-efficient information transmission in the brain, offer a promising alternative for low-power computing. Integrating the event-driven efficiency of spiking neurons with the advanced capabilities of LLMs is therefore an attractive route to power-efficient LLMs. This work focuses on the design of compressed spiking LLMs. We revisit spatial and temporal pruning from the perspective of SNNs and propose a novel spatio-temporal pruning framework for Spiking LLMs that optimizes computational efficiency while preserving high performance. Our spatial pruning technique reduces the number of active neurons and attention heads, lowering the computational complexity of the model. Temporal pruning minimizes inference latency by dynamically adjusting the number of timesteps required for different layers. By combining these approaches with other compression techniques, we present the first work in the domain of Spiking LLMs to jointly explore spatial pruning, temporal pruning, extreme quantization, and knowledge distillation strategies. Extensive experimental evaluation of our proposed framework for SpikingBERT on the large-scale GLUE benchmark demonstrates the efficacy of our approach in terms of computational operations and inference latency. Our approach offers a compelling solution for real-time, low-power natural language processing applications, making Spiking LLMs more practical for deployment on edge devices and in power-constrained settings.
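To make the two pruning axes concrete, the sketch below shows how spatial pruning (masking low-importance attention heads) and temporal pruning (running a layer for fewer timesteps) could be wired into a single spiking attention layer. This is a minimal illustration, not the authors' implementation: the `PrunedSpikingAttention` and `LIFNeuron` classes, the magnitude-based head-importance score, the soft-reset LIF dynamics, and the retained softmax attention are all simplifying assumptions; SpikingBERT's actual architecture, quantization, and distillation pipeline are omitted.

```python
# Minimal sketch (assumed design, not the paper's code) of spatial + temporal
# pruning in a spiking attention layer, using PyTorch.
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire neuron emitting binary spikes (soft reset)."""
    def __init__(self, tau: float = 2.0, threshold: float = 1.0):
        super().__init__()
        self.tau, self.threshold = tau, threshold

    def forward(self, current, mem):
        mem = mem + (current - mem) / self.tau            # leaky integration
        spikes = (mem >= self.threshold).float()          # fire when threshold reached
        mem = mem - spikes * self.threshold               # soft reset
        return spikes, mem


class PrunedSpikingAttention(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 8, timesteps: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.lif = LIFNeuron()
        # Spatial pruning: binary mask over attention heads (1 = keep head).
        self.register_buffer("head_mask", torch.ones(num_heads))
        # Temporal pruning: this layer may be assigned fewer timesteps than others.
        self.timesteps = timesteps

    def prune_heads(self, keep_ratio: float = 0.5):
        """Assumed criterion: magnitude-based head importance, keep the top-k heads."""
        w = self.qkv.weight.view(3, self.num_heads, self.head_dim, -1)
        importance = w.abs().mean(dim=(0, 2, 3))
        k = max(1, int(self.num_heads * keep_ratio))
        self.head_mask.zero_()
        self.head_mask[importance.topk(k).indices] = 1.0

    def forward(self, x):
        # x: (batch, seq, dim); average spiking outputs over the layer's timesteps.
        b, s, d = x.shape
        mem = torch.zeros(b, s, d, device=x.device)
        out_sum = 0.0
        for _ in range(self.timesteps):                   # fewer steps -> lower latency
            spikes, mem = self.lif(x, mem)
            qkv = self.qkv(spikes).view(b, s, 3, self.num_heads, self.head_dim)
            q, k, v = qkv.unbind(dim=2)
            attn = (q.transpose(1, 2) @ k.transpose(1, 2).transpose(-2, -1)) / self.head_dim ** 0.5
            ctx = attn.softmax(-1) @ v.transpose(1, 2)    # (b, heads, seq, head_dim)
            ctx = ctx * self.head_mask.view(1, -1, 1, 1)  # zero out spatially pruned heads
            out_sum = out_sum + self.proj(ctx.transpose(1, 2).reshape(b, s, d))
        return out_sum / self.timesteps


# Usage: prune half the heads spatially, then halve this layer's timesteps temporally.
layer = PrunedSpikingAttention(dim=64, num_heads=8, timesteps=4)
layer.prune_heads(keep_ratio=0.5)
layer.timesteps = 2
y = layer(torch.randn(2, 16, 64))
print(y.shape)  # torch.Size([2, 16, 64])
```

In this toy setup the two knobs compose independently: the head mask removes whole attention heads (fewer multiply-accumulate operations per timestep), while the per-layer timestep count trades accuracy for latency, which is the spirit of the spatio-temporal trade-off the abstract describes.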
Similar Papers
All in one timestep: Enhancing Sparsity and Energy efficiency in Multi-level Spiking Neural Networks
Neural and Evolutionary Computing
Makes computer brains use less power for thinking.
Spatial Spiking Neural Networks Enable Efficient and Robust Temporal Computation
Neural and Evolutionary Computing
Makes smart computers learn faster with less memory.
Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding
Neural and Evolutionary Computing
Makes brain implants use less power.