AudioScene: Integrating Object-Event Audio into 3D Scenes
By: Shuaihang Yuan, Congcong Wen, Muhammad Shafique, and more
Potential Business Impact:
Helps robots understand sounds in 3D spaces.
The rapid advances in audio analysis underscore its vast potential for human-computer interaction, environmental monitoring, and public safety; yet, existing audio-only datasets often lack spatial context. To address this gap, we present two novel audio-spatial scene datasets, AudioScanNet and AudioRoboTHOR, designed to explore audio-conditioned tasks within 3D environments. By integrating audio clips with spatially aligned 3D scenes, our datasets enable research on how audio signals interact with spatial context. To associate audio events with corresponding spatial information, we leverage the common-sense reasoning ability of large language models and supplement them with rigorous human verification. This approach offers greater scalability than purely manual annotation while maintaining high standards of accuracy, completeness, and diversity, quantified through inter-annotator agreement and performance on two benchmark tasks: audio-based 3D visual grounding and audio-based robotic zero-shot navigation. The results highlight the limitations of current audio-centric methods and underscore the practical challenges and significance of our datasets in advancing audio-guided spatial learning.
Similar Papers
Sound Source Localization for Spatial Mapping of Surgical Actions in Dynamic Scenes
Sound
Helps robots "hear" where tools touch inside bodies.
MOSPA: Human Motion Generation Driven by Spatial Audio
Graphics
Makes virtual people move to sounds around them.
MRSAudio: A Large-Scale Multimodal Recorded Spatial Audio Dataset with Refined Annotations
Sound
Makes virtual sounds feel like they're all around you.