SepPrune: Structured Pruning for Efficient Deep Speech Separation
By: Yuqi Li, Kai Li, Xin Yin, and more
Potential Business Impact:
Makes AI hear clear voices in noisy places.
Although deep learning has substantially advanced speech separation in recent years, most existing studies continue to prioritize separation quality while overlooking computational efficiency, an essential factor for low-latency speech processing in real-time applications. In this paper, we propose SepPrune, the first structured pruning framework specifically designed to compress deep speech separation models and reduce their computational cost. SepPrune begins by analyzing the computational structure of a given model to identify layers with the highest computational burden. It then introduces a differentiable masking strategy to enable gradient-driven channel selection. Based on the learned masks, SepPrune prunes redundant channels and fine-tunes the remaining parameters to recover performance. Extensive experiments demonstrate that this learnable pruning paradigm yields substantial advantages for channel pruning in speech separation models, outperforming existing methods. Notably, a model pruned with SepPrune can recover 85% of the performance of a pre-trained model (trained over hundreds of epochs) with only one epoch of fine-tuning, and achieves convergence 36× faster than training from scratch. Code is available at https://github.com/itsnotacie/SepPrune.
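The abstract's central step, gradient-driven channel selection via a differentiable mask, can be illustrated with a short PyTorch sketch. The snippet below is a minimal illustration under assumed names (ChannelMask, sparsity_loss, and kept_channels are hypothetical, not the paper's actual API): a learnable sigmoid gate scales each channel during training, an L1 penalty nudges gates toward zero, and channels whose gates end up near zero are pruned before fine-tuning.

```python
# Minimal sketch of differentiable channel masking for structured pruning,
# loosely following the abstract's description. All names here are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class ChannelMask(nn.Module):
    """Learnable per-channel gate; channels with near-zero gates get pruned."""
    def __init__(self, num_channels: int):
        super().__init__()
        # Logits start large so sigmoid(logits) ≈ 1: all channels kept initially.
        self.logits = nn.Parameter(torch.full((num_channels,), 3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time). A soft sigmoid gate keeps channel
        # selection differentiable, so it trains with ordinary backprop.
        gate = torch.sigmoid(self.logits)
        return x * gate.view(1, -1, 1)

    def sparsity_loss(self) -> torch.Tensor:
        # L1 penalty on the gates encourages channels toward zero.
        return torch.sigmoid(self.logits).sum()

    def kept_channels(self, threshold: float = 0.5) -> torch.Tensor:
        # After training, keep only channels whose gate exceeds the threshold.
        return (torch.sigmoid(self.logits) > threshold).nonzero(as_tuple=True)[0]

# Usage: insert the mask after a computationally heavy conv layer, train with
# the task loss plus a small sparsity penalty, then rebuild the layer using
# only the kept channels and fine-tune the remaining parameters.
conv = nn.Conv1d(64, 128, kernel_size=3, padding=1)
mask = ChannelMask(128)
x = torch.randn(2, 64, 16000)          # dummy waveform features
y = mask(conv(x))
loss = y.pow(2).mean() + 1e-3 * mask.sparsity_loss()  # placeholder task loss
loss.backward()
print("channels kept:", mask.kept_channels().numel())
```

A soft gate rather than a hard 0/1 mask is what makes the selection trainable end to end; the hard pruning decision is deferred until after the gates have converged.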
Similar Papers
Structure-Aware Automatic Channel Pruning by Searching with Graph Embedding
Artificial Intelligence
Makes computer programs run faster and smaller.
SPADE: Structured Pruning and Adaptive Distillation for Efficient LLM-TTS
Audio and Speech Processing
Makes AI voices sound better and faster.