The World is Not Mono: Enabling Spatial Understanding in Large Audio-Language Models
By: Yuhuan You, Lai Wei, Xihong Wu, and more
Potential Business Impact:
Helps computers hear sounds from different directions.
Existing large audio-language models perceive the world as "mono": a single stream of audio that ignores the critical spatial dimension (the "where") required for universal acoustic scene analysis. To bridge this gap, we first introduce a hierarchical framework for Auditory Scene Analysis (ASA). Guided by this framework, we build a system that enables models such as Qwen2-Audio to understand and reason about the complex acoustic world, through three core contributions. First, we construct a large-scale synthesized binaural audio dataset that provides rich spatial cues. Second, we design a hybrid feature projector that uses parallel semantic and spatial encoders to extract decoupled representations; these distinct streams are integrated via a dense fusion mechanism, giving the model a holistic view of the acoustic scene. Third, we employ a progressive training curriculum that advances from supervised fine-tuning (SFT) to reinforcement learning via Group Relative Policy Optimization (GRPO), explicitly evolving the model's capabilities toward spatial reasoning. On our comprehensive benchmark, the model demonstrates comparatively strong spatial understanding. By enabling spatial perception, our work provides a clear pathway for applying the powerful reasoning abilities of large models to holistic acoustic scene analysis, advancing from "mono" semantic recognition to spatial intelligence.
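The abstract does not detail how the binaural dataset is synthesized, but the standard approach is to convolve mono sources with head-related impulse response (HRIR) pairs. Below is a minimal sketch of that step; the toy impulse responses and the `spatialize` helper are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of binaural spatialization: convolve a mono source with a
# head-related impulse response (HRIR) pair for a chosen direction. The HRIRs
# below are toy placeholders; real pipelines typically draw HRIRs from a
# measured database and often add simulated room acoustics.
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono signal to binaural stereo for one source direction."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=0)  # shape: (2, num_samples)

# Toy example: a pure interaural time difference of 20 samples with slight
# attenuation at the right ear, approximating a source to the listener's left.
mono = np.random.randn(16000)            # 1 s of audio at 16 kHz
hrir_l = np.zeros(64); hrir_l[0] = 1.0   # left ear: direct arrival
hrir_r = np.zeros(64); hrir_r[20] = 0.8  # right ear: delayed, attenuated
binaural = spatialize(mono, hrir_l, hrir_r)
```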
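To make the hybrid feature projector concrete: the idea is that two parallel encoders produce decoupled "what" and "where" features, and a dense fusion block merges them into embeddings sized for the language model. The sketch below assumes this reading; all module names and dimensions are hypothetical, not the paper's exact design.

```python
# Hedged sketch of a hybrid feature projector: parallel semantic and spatial
# streams are concatenated and densely fused into LLM-sized tokens, so every
# output dimension can draw on both semantic and spatial cues.
import torch
import torch.nn as nn

class HybridProjector(nn.Module):
    def __init__(self, sem_dim=1280, spa_dim=512, llm_dim=3584):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(sem_dim + spa_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, semantic_feats, spatial_feats):
        # semantic_feats: (batch, frames, sem_dim) from a semantic encoder
        # spatial_feats:  (batch, frames, spa_dim) from a spatial-cue encoder
        fused = torch.cat([semantic_feats, spatial_feats], dim=-1)
        return self.fuse(fused)  # (batch, frames, llm_dim) tokens for the LLM

projector = HybridProjector()
sem = torch.randn(2, 100, 1280)
spa = torch.randn(2, 100, 512)
tokens = projector(sem, spa)  # torch.Size([2, 100, 3584])
```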
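Finally, the GRPO stage can be summarized by its core computation: sample a group of responses per prompt, score them, and use the group-normalized reward as the advantage, with no learned value critic. The sketch below shows only that normalization step; the binary reward in the example is a stand-in, since the abstract does not specify the paper's reward design.

```python
# Core of GRPO: advantages are rewards standardized within each sampled group.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_groups, group_size) scalar rewards for sampled responses."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled answers each, rewarded 1.0 if correct else 0.0.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
adv = group_relative_advantages(rewards)
# Positive advantages reinforce answers that beat their group's average.
```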
Similar Papers
OWL: Geometry-Aware Spatial Reasoning for Audio Large Language Models
Sound
Helps computers hear where sounds come from.
Towards Spatial Audio Understanding via Question Answering
Sound
Lets computers understand where sounds come from.
Spatial Audio Motion Understanding and Reasoning
Sound
Lets computers hear where sounds are coming from.