Towards Audio Token Compression in Large Audio Language Models
By: Saurabhchand Bhati, Samuel Thomas, Hilde Kuehne, and others
Potential Business Impact:
Makes AI understand long sounds with less computer power.
Large Audio Language Models (LALMs) demonstrate impressive performance across diverse tasks, from speech recognition to general audio understanding. However, their scalability is limited by the quadratic complexity of attention and the high token rates of audio signals. These challenges make it difficult to extend LALMs to long-form audio and to deploy them on resource-constrained platforms such as edge devices. In this paper, we explore techniques such as unsupervised segmentation and uniform average pooling to reduce the number of audio tokens after they are generated by the LALM's audio encoder but before they are consumed by the LLM decoder. To mitigate potential performance degradation introduced by the compressed representations, we finetune the model with low-rank adapters. We evaluate the proposed models on two tasks that depend on effectively uncovering the underlying lexical content of the input signal, automatic speech recognition and speech-to-speech translation, and study the effect of downsampling on these tasks. Experimental results show that compressed LALMs can achieve performance close to frame-level LALMs while reducing the input audio token count up to three times before the LLM backbone.
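One of the compression techniques the abstract names, uniform average pooling, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding dimension (768) and pooling factor (3, matching the up-to-3x reduction the abstract reports) are assumptions chosen for the example.

```python
import numpy as np

def uniform_average_pool(frames: np.ndarray, factor: int) -> np.ndarray:
    """Downsample frame-level audio tokens by averaging consecutive frames.

    frames: (T, D) array of audio-encoder outputs.
    factor: pooling ratio; a trailing partial window is averaged as-is.
    Returns a (ceil(T / factor), D) array of compressed tokens.
    """
    T, _ = frames.shape
    pooled = [frames[i:i + factor].mean(axis=0) for i in range(0, T, factor)]
    return np.stack(pooled)

# Illustrative shapes: 100 frame-level tokens pooled 3x before the LLM decoder.
tokens = np.random.randn(100, 768)
compressed = uniform_average_pool(tokens, 3)
print(compressed.shape)  # (34, 768)
```

Pooling happens between the audio encoder and the LLM backbone, so the decoder's attention cost drops with the square of the reduction factor.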
Similar Papers
AudioCodecBench: A Comprehensive Benchmark for Audio Codec Evaluation
Sound
Helps computers understand sounds and music better.
Exploring Fine-Tuning of Large Audio Language Models for Spoken Language Understanding under Limited Speech data
Sound
Teaches computers to understand speech better with less data.