Score: 1

Continuous Audio Language Models

Published: September 8, 2025 | arXiv ID: 2509.06926v1

By: Simon Rouard, Manu Orsini, Axel Roebel, and more

Potential Business Impact:

Makes generated music and voices sound better at lower cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Audio Language Models (ALMs) have emerged as the dominant paradigm for speech and music generation by representing audio as sequences of discrete tokens. Yet, unlike text tokens, which are invertible, audio tokens are extracted from lossy codecs with a limited bitrate. As a consequence, increasing audio quality requires generating more tokens, which imposes a trade-off between fidelity and computational cost. We address this issue by studying Continuous Audio Language Models (CALM). These models instantiate a large Transformer backbone that produces a contextual embedding at every timestep. This sequential information then conditions an MLP that generates the next continuous frame of an audio VAE through consistency modeling. By avoiding lossy compression, CALM achieves higher quality at lower computational cost than its discrete counterparts. Experiments on speech and music demonstrate improved efficiency and fidelity over state-of-the-art discrete audio language models, facilitating lightweight, high-quality audio generation. Samples are available at https://continuous-audio-language-models.github.io
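
The abstract describes a two-part generator: a Transformer backbone that summarizes past continuous VAE frames into a per-timestep embedding, and a small MLP head that, conditioned on that embedding, produces the next frame in a single consistency-model step. The following PyTorch sketch is a minimal, illustrative rendering of that loop; the module sizes, layer counts, the one-step sampler, and all names are assumptions for illustration, not the authors' implementation.

# Minimal sketch of the CALM generation loop described in the abstract.
# All module names, dimensions, and the one-step consistency sampler are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

LATENT_DIM = 64    # assumed dimensionality of one continuous audio-VAE frame
MODEL_DIM = 512    # assumed Transformer width

class Backbone(nn.Module):
    """Causal Transformer turning past VAE frames into a contextual embedding per timestep."""
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(LATENT_DIM, MODEL_DIM)
        layer = nn.TransformerEncoderLayer(MODEL_DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)

    def forward(self, frames):                       # frames: (B, T, LATENT_DIM)
        x = self.in_proj(frames)
        t = x.size(1)
        # additive causal mask so each position only attends to the past
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        return self.encoder(x, mask=mask)            # (B, T, MODEL_DIM)

class ConsistencyHead(nn.Module):
    """MLP mapping a noisy latent + context embedding + noise level to a clean next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + MODEL_DIM + 1, 1024),
            nn.SiLU(),
            nn.Linear(1024, 1024),
            nn.SiLU(),
            nn.Linear(1024, LATENT_DIM),
        )

    def forward(self, noisy, context, sigma):
        # A consistency model is trained so that a single forward pass from any
        # noise level sigma returns (approximately) the clean frame.
        return self.net(torch.cat([noisy, context, sigma], dim=-1))

@torch.no_grad()
def generate(backbone, head, prompt, steps=100, sigma_max=80.0):
    """Autoregressively sample `steps` continuous frames after `prompt` (B, T0, LATENT_DIM)."""
    frames = prompt
    for _ in range(steps):
        ctx = backbone(frames)[:, -1]                         # embedding for the next position
        noise = sigma_max * torch.randn(frames.size(0), LATENT_DIM)
        sigma = torch.full((frames.size(0), 1), sigma_max)
        next_frame = head(noise, ctx, sigma)                  # one-step consistency sample
        frames = torch.cat([frames, next_frame.unsqueeze(1)], dim=1)
    return frames                                             # decode with the audio VAE afterwards

if __name__ == "__main__":
    backbone, head = Backbone(), ConsistencyHead()
    prompt = torch.randn(1, 10, LATENT_DIM)                   # stand-in for encoded audio
    out = generate(backbone, head, prompt, steps=20)
    print(out.shape)                                          # (1, 30, 64)

In this sketch the generated latent frames would still need to be passed through the audio VAE's decoder to obtain a waveform; that decoder, and the consistency training objective itself, are omitted.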

Page Count
17 pages

Category
Computer Science: Sound