StereoFoley: Object-Aware Stereo Audio Generation from Video
By: Tornike Karchkhadze, Kuan-Lin Chen, Mojtaba Heydari, and more
Potential Business Impact:
Makes videos sound like the objects on screen are really there around you.
We present StereoFoley, a video-to-audio generation framework that produces semantically aligned, temporally synchronized, and spatially accurate stereo sound at 48 kHz. While recent generative video-to-audio models achieve strong semantic and temporal fidelity, they largely remain limited to mono or fail to deliver object-aware stereo imaging, constrained by the lack of professionally mixed, spatially accurate video-to-audio datasets. First, we develop and train a base model that generates stereo audio from video, achieving state-of-the-art performance in both semantic accuracy and synchronization. Next, to overcome dataset limitations, we introduce a synthetic data generation pipeline that combines video analysis, object tracking, and audio synthesis with dynamic panning and distance-based loudness controls, enabling spatially accurate, object-aware sound. Finally, we fine-tune the base model on this synthetic dataset, yielding clear object-audio correspondence. Since no established metrics exist for this task, we introduce stereo object-awareness measures and validate them through a human listening study, showing strong correlation with human perception. This work establishes the first end-to-end framework for stereo object-aware video-to-audio generation, addressing a critical gap and setting a new benchmark in the field.
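To make the "dynamic panning and distance-based loudness controls" concrete, here is a minimal sketch of how a tracked object's position could drive stereo spatialization. This is a hypothetical illustration, not the StereoFoley pipeline itself; the function `spatialize_mono` and its parameters are assumptions chosen for clarity. It applies constant-power panning from a normalized horizontal position and an inverse-distance gain to a mono source at 48 kHz.

```python
import numpy as np

def spatialize_mono(audio, x_norm, distance, ref_distance=1.0):
    """Pan a mono source into stereo and scale loudness by distance.

    audio        : 1-D mono signal (float samples)
    x_norm       : per-sample horizontal object position in [-1, 1]
                   (-1 = far left, +1 = far right), e.g. from a tracker
    distance     : per-sample object distance in arbitrary units (> 0)
    ref_distance : distance at which no attenuation is applied

    Hypothetical sketch of dynamic panning with distance-based loudness.
    """
    # Constant-power panning: map x in [-1, 1] to an angle in [0, pi/2].
    theta = (x_norm + 1.0) * np.pi / 4.0
    left_gain = np.cos(theta)
    right_gain = np.sin(theta)

    # Inverse-distance loudness control, clipped so close sources are not boosted.
    loudness = np.minimum(1.0, ref_distance / np.maximum(distance, 1e-6))

    left = audio * left_gain * loudness
    right = audio * right_gain * loudness
    return np.stack([left, right], axis=0)  # shape: (2, num_samples)


# Example: a 1 s, 48 kHz tone panned left-to-right while receding.
sr = 48_000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)
x_path = np.linspace(-1.0, 1.0, sr)      # object moves left -> right
dist_path = np.linspace(1.0, 3.0, sr)    # object moves away from the camera
stereo = spatialize_mono(tone, x_path, dist_path)
```

A synthetic-data pipeline like the one described would presumably derive `x_norm` and `distance` from object tracking on the video frames before mixing the spatialized sources into a stereo track.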
Similar Papers
StereoFoley: Object-Aware Stereo Audio Generation from Video
Sound
Makes videos sound like the objects on screen are really there around you.
StereoSync: Spatially-Aware Stereo Audio Generation from Video
Sound
Makes a video's sound match what you see.