FoleySpace: Vision-Aligned Binaural Spatial Audio Generation
By: Lei Zhao, Rujin Chen, Chi Zhang, and more
Potential Business Impact:
Makes videos sound like you're really there.
Recently, with the advancement of AIGC, deep learning-based video-to-audio (V2A) technology has garnered significant attention. However, existing research mostly focuses on mono audio generation that lacks spatial perception, while binaural spatial audio generation, which can provide a stronger sense of immersion, remains insufficiently explored. To address this, we propose FoleySpace, a video-to-binaural audio generation framework that produces immersive and spatially consistent stereo sound guided by visual information. Specifically, we develop a sound source estimation method to determine the sound source's 2D coordinates and depth in each video frame, and then employ a coordinate mapping mechanism to convert the 2D source positions into a 3D trajectory. This 3D trajectory, together with monaural audio generated by a pre-trained V2A model, serves as the conditioning input for a diffusion model that generates spatially consistent binaural audio. To support the generation of dynamic sound fields, we construct a training dataset based on recorded Head-Related Impulse Responses (HRIRs) covering various sound source movement scenarios. Experimental results demonstrate that the proposed method outperforms existing approaches in spatial perception consistency, effectively enhancing the immersive quality of the audio-visual experience.
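The abstract mentions a coordinate mapping mechanism that converts per-frame 2D source positions and depth estimates into a 3D trajectory. As a rough illustration only, the Python sketch below back-projects pixel coordinates through a pinhole camera model; the pinhole assumption, the function names, and the camera intrinsics are hypothetical and not taken from the paper.

```python
import numpy as np

def backproject_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D pixel (u, v) with an estimated depth into 3D camera
    coordinates using a pinhole camera model (assumed here; the paper's exact
    coordinate mapping mechanism may differ)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return np.array([x, y, z])

def trajectory_from_frames(detections, intrinsics):
    """Convert per-frame (u, v, depth) sound-source estimates into a 3D
    trajectory with one point per video frame."""
    fx, fy, cx, cy = intrinsics
    return np.stack([backproject_to_3d(u, v, d, fx, fy, cx, cy)
                     for (u, v, d) in detections])

# Hypothetical example: a source drifting left to right across a 640x360 frame
# while slowly receding from the camera.
detections = [(100 + 20 * t, 180, 2.0 + 0.1 * t) for t in range(10)]
trajectory = trajectory_from_frames(detections, intrinsics=(500.0, 500.0, 320.0, 180.0))
print(trajectory.shape)  # (10, 3): per-frame 3D source positions
```

Such a trajectory (however it is actually computed in the paper) would then condition the diffusion model alongside the mono track from the pre-trained V2A model.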
Similar Papers
ViSAudio: End-to-End Video-Driven Binaural Spatial Audio Generation
CV and Pattern Recognition
Makes silent videos sound like you're there.
SpA2V: Harnessing Spatial Auditory Cues for Audio-driven Spatially-aware Video Generation
Graphics
Turns sounds into videos matching noise locations.