Score: 2

ChronosAudio: A Comprehensive Long-Audio Benchmark for Evaluating Audio-Large Language Models

Published: January 8, 2026 | arXiv ID: 2601.04876v1

By: Kaiwen Luo, Liang Lin, Yibo Zhang, and more

Potential Business Impact:

Makes AI understand long audio, not just short clips.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Although Audio Large Language Models (ALLMs) have witnessed substantial advancements, their long-audio understanding capabilities remain largely unexplored. A plethora of benchmarks have been proposed for general audio tasks, but they predominantly focus on short-form clips, leaving no consensus on how to evaluate ALLMs over extended durations. This paper proposes ChronosAudio, the first multi-task benchmark tailored for long-audio understanding in ALLMs. It encompasses six major task categories and comprises 36,000 test instances totaling over 200 hours of audio, stratified into short-, middle-, and long-form categories to comprehensively evaluate length generalization. Extensive experiments on 16 state-of-the-art models using ChronosAudio yield three critical findings: (1) Precipitous Long-Context Collapse: ALLMs exhibit a severe inability to sustain performance, with the transition from short to long contexts triggering a performance degradation of over 90% on specific tasks. (2) Structural Attention Dilution: this degradation stems from a fundamental failure to maintain temporal locality, as attention becomes significantly diffuse over later portions of the sequence. (3) Restorative Ceiling of Mitigation: current mitigation strategies recover only about 50% of the lost performance. These findings reveal significant challenges in long-audio understanding and underscore the urgent need for approaches that achieve robust, document-level audio reasoning.
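
The length-stratified evaluation described above can be illustrated with a small sketch: bucket each test instance by audio duration, compute per-bucket accuracy, and report the relative short-to-long drop. The duration thresholds and helper names below are hypothetical, assumed for illustration only, and are not the authors' code or the benchmark's actual cut-offs.

```python
from collections import defaultdict

# Hypothetical duration cut-offs (seconds); the paper's actual
# short/middle/long-form boundaries may differ.
SHORT_MAX_S = 30
MIDDLE_MAX_S = 300

def duration_bucket(duration_s: float) -> str:
    """Assign a test instance to a duration stratum."""
    if duration_s <= SHORT_MAX_S:
        return "short"
    if duration_s <= MIDDLE_MAX_S:
        return "middle"
    return "long"

def per_bucket_accuracy(results):
    """results: iterable of (duration_s, is_correct) pairs."""
    correct, total = defaultdict(int), defaultdict(int)
    for duration_s, is_correct in results:
        bucket = duration_bucket(duration_s)
        total[bucket] += 1
        correct[bucket] += int(is_correct)
    return {b: correct[b] / total[b] for b in total}

def short_to_long_degradation(acc):
    """Accuracy drop from short- to long-form, as a fraction of short-form accuracy."""
    if not acc.get("short") or "long" not in acc:
        return None
    return (acc["short"] - acc["long"]) / acc["short"]

if __name__ == "__main__":
    # Toy results: (audio duration in seconds, model answered correctly)
    demo = [(12, True), (25, True), (180, True), (240, False),
            (900, False), (1800, False), (2400, True)]
    acc = per_bucket_accuracy(demo)
    print(acc)
    print(f"short-to-long degradation: {short_to_long_degradation(acc):.0%}")
```

A degradation near 1.0 under this kind of metric would correspond to the over-90% collapse the authors report on specific tasks.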

Repos / Data Links

Page Count
26 pages

Category
Computer Science: Sound