Token Pruning in Audio Transformers: Optimizing Performance and Decoding Patch Importance
By: Taehan Lee, Hyukjun Lee
Potential Business Impact:
Makes AI listen to sounds faster and cheaper.
Vision Transformers (ViTs) have achieved state-of-the-art performance across various computer vision tasks, but their high computational cost remains a challenge. Token pruning has been proposed to reduce this cost by selectively removing less important tokens. While effective in vision tasks, where non-object regions can be discarded, applying this technique to audio tasks presents unique challenges, as distinguishing relevant from irrelevant regions in time-frequency representations is less straightforward. In this study, for the first time, we applied token pruning to ViT-based audio classification models using Mel-spectrograms and analyzed the trade-offs between model performance and computational cost: TopK token pruning can reduce the MAC operations of AudioMAE and AST by 30-40%, with less than a 1% drop in accuracy. Our analysis reveals that while high-intensity or high-variation tokens contribute significantly to model accuracy, low-intensity or low-variation tokens also remain important when token pruning is applied; pruning solely based on the intensity or variation of signals in a patch leads to a noticeable drop in accuracy. We support this claim by measuring a high correlation between attention scores and these statistical features, and by showing that retained tokens consistently receive distinct attention compared to pruned ones. We also show that AudioMAE retains more low-intensity tokens than AST. This can be explained by AudioMAE's self-supervised reconstruction objective, which encourages attention to all patches, whereas AST's supervised training focuses on label-relevant tokens.
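The core idea of TopK token pruning can be sketched in a few lines: score each patch token (e.g., by the attention it receives), keep the top fraction, and drop the rest before the remaining transformer layers. The sketch below is a minimal NumPy illustration, not the paper's implementation; the function name, shapes, and the use of a generic importance score are assumptions for demonstration.

```python
import numpy as np

def topk_token_pruning(tokens, scores, keep_ratio=0.7):
    """Keep the highest-scoring tokens and drop the rest.

    tokens:     (N, D) array of patch embeddings (hypothetical shapes).
    scores:     (N,) importance per token, e.g. attention received.
    keep_ratio: fraction of tokens retained; keeping ~60-70% roughly
                matches the 30-40% MAC reduction reported in the abstract.
    """
    n_keep = max(1, int(round(len(scores) * keep_ratio)))
    keep_idx = np.argsort(scores)[::-1][:n_keep]  # indices of top-k scores
    keep_idx.sort()  # restore original time-frequency order of patches
    return tokens[keep_idx], keep_idx

# Toy example: 10 spectrogram-patch tokens of dimension 4.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((10, 4))
scores = rng.random(10)
pruned, kept = topk_token_pruning(tokens, scores, keep_ratio=0.7)
```

Note that pruning purely by patch intensity or variation would amount to replacing `scores` with a per-patch mean or variance of the Mel-spectrogram, which, per the abstract, costs noticeable accuracy.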
Similar Papers
Segmentwise Pruning in Audio-Language Models
Sound
Makes AI understand sounds using less computer power.
The silence of the weights: an investigation of structural pruning strategies for attention-based audio signal architectures
Sound
Makes smart computer programs smaller and faster.
Index-Preserving Lightweight Token Pruning for Efficient Document Understanding in Vision-Language Models
CV and Pattern Recognition
Makes AI understand papers faster and cheaper.