SPADE: Structured Pruning and Adaptive Distillation for Efficient LLM-TTS
By: Tan Dat Nguyen, Jaehun Kim, Ji-Hoon Kim, and more
Potential Business Impact:
Makes AI voice generation faster and lighter to run while preserving quality.
This paper introduces SPADE, a framework for Structured Pruning and Adaptive Distillation for Efficient large language model-based text-to-speech (LLM-TTS). Recent LLM-TTS systems achieve strong controllability and zero-shot generalization, but their large parameter counts and high latency limit real-world deployment. SPADE addresses this by combining (i) a pruning step guided by a word-error-rate-based layer importance index to remove non-essential Transformer layers, with (ii) multi-level knowledge distillation to restore autoregressive coherence. On zero-shot benchmarks, SPADE preserves near-parity perceptual quality while halving Transformer depth, reducing VRAM usage by up to 20%, and achieving up to 1.7x faster real-time factor with less than 5% of the original training data. These results show that compact LLM-TTS models can maintain naturalness and speaker similarity while enabling practical real-time speech generation. Audio samples are available at https://mm.kaist.ac.kr/projects/SPADE/.
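The abstract describes a pruning criterion in which each Transformer layer is scored by how much the word error rate (WER) of synthesized speech degrades when that layer is ablated, and the least important layers are then removed. The following is a minimal sketch of that idea; the `synthesize_without` callback, the function names, and the exact scoring and selection rules are assumptions for illustration, not the paper's published API.

```python
# Hedged sketch of WER-guided layer-importance pruning.
# Assumed/hypothetical: synthesize_without(l) returns the transcript of
# speech generated with layer l ablated; the real SPADE pipeline and its
# interfaces are not published in this summary.

def wer(ref_words, hyp_words):
    """Word error rate: edit distance between word lists / reference length."""
    d = [[0] * (len(hyp_words) + 1) for _ in range(len(ref_words) + 1)]
    for i in range(len(ref_words) + 1):
        d[i][0] = i
    for j in range(len(hyp_words) + 1):
        d[0][j] = j
    for i in range(1, len(ref_words) + 1):
        for j in range(1, len(hyp_words) + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(1, len(ref_words))

def layer_importance(num_layers, synthesize_without, reference, baseline_hyp):
    """Score each layer by the WER increase its ablation causes."""
    base = wer(reference, baseline_hyp)
    return {l: wer(reference, synthesize_without(l)) - base
            for l in range(num_layers)}

def prune_layers(importance, keep_ratio=0.5):
    """Keep the top keep_ratio fraction of layers by importance score."""
    k = max(1, int(len(importance) * keep_ratio))
    kept = sorted(importance, key=importance.get, reverse=True)[:k]
    return sorted(kept)  # layer indices to retain, in depth order
```

Under this sketch, halving the depth corresponds to `keep_ratio=0.5`; the retained sub-network would then be fine-tuned with the multi-level distillation step the abstract mentions.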
Similar Papers
SPADE: A Large Language Model Framework for Soil Moisture Pattern Recognition and Anomaly Detection in Precision Agriculture
Artificial Intelligence
Helps farmers know when to water crops.
A Hybrid Early-Exit Algorithm for Large Language Models Based on Space Alignment Decoding (SPADE)
Computation and Language
Makes smart computer programs faster and cheaper.