Segmentwise Pruning in Audio-Language Models
By: Marcel Gibier, Raphaël Duroselle, Pierre Serrano, and more
Potential Business Impact:
Makes AI understand sounds using less computer power.
Recent audio-language models have shown impressive performance across a wide range of audio tasks and are increasingly capable of handling long audio inputs. However, the computational cost of these models depends heavily on sequence length, which can become very large given the nature of audio data. In the vision-language domain, token pruning methods have proven effective at reducing token counts while preserving strong performance on standard benchmarks. In this work, we investigate the relevance and effectiveness of such token selection strategies in the context of audio-language models. We also improve on them by proposing a lightweight strategy that takes the time dimension into account. While retaining only a quarter of the initial tokens, our approach incurs at most a 2% relative decrease in CIDEr on Clotho v2 and at most a 4% relative decrease in accuracy on MMAU.
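The abstract does not spell out the selection criterion, but a time-aware pruning strategy of this kind can be sketched as follows: split the token sequence into temporal segments and keep only the highest-scoring tokens within each segment, so the retained tokens remain spread across the whole audio clip rather than clustering in a few loud regions. The scoring function and segment count below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def segmentwise_prune(tokens, scores, num_segments=4, keep_ratio=0.25):
    """Keep the top-scoring tokens within each temporal segment.

    tokens: (T, D) array of time-ordered audio token embeddings.
    scores: (T,) per-token importance scores (e.g., attention received
            from the text query -- a hypothetical choice here).
    Returns the pruned tokens and their original time indices.
    """
    T = tokens.shape[0]
    # Evenly spaced segment boundaries along the time axis.
    bounds = np.linspace(0, T, num_segments + 1, dtype=int)
    kept = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        seg_scores = scores[start:end]
        k = max(1, int(round((end - start) * keep_ratio)))
        # Top-k indices within the segment, restored to temporal order.
        top = np.sort(np.argpartition(seg_scores, -k)[-k:]) + start
        kept.append(top)
    idx = np.concatenate(kept)
    return tokens[idx], idx
```

With `keep_ratio=0.25` this retains a quarter of the tokens, matching the compression level reported in the abstract; per-segment selection is what distinguishes it from global top-k pruning, which could discard entire stretches of the recording.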
Similar Papers
Token Pruning in Audio Transformers: Optimizing Performance and Decoding Patch Importance
Sound
Makes AI listen to sounds faster and cheaper.
Towards Audio Token Compression in Large Audio Language Models
Audio and Speech Processing
Makes AI understand long sounds with less computer power.
StreamingAssistant: Efficient Visual Token Pruning for Accelerating Online Video Understanding
CV and Pattern Recognition
Helps computers understand videos faster.