Seeing Soundscapes: Audio-Visual Generation and Separation from Soundscapes Using Audio-Visual Separator
By: Minjae Kang, Martim Brandão
Potential Business Impact:
Generates images from audio that mixes many different sounds at once.
Recent audio-visual generative models have made substantial progress in generating images from audio. However, existing approaches focus on generating images from single-class audio and fail on mixed audio. To address this, we propose an Audio-Visual Generation and Separation model (AV-GAS) for generating images from soundscapes (mixed audio containing multiple sound classes). Our contribution is threefold. First, we pose a new challenge in the audio-visual generation task: generating an image from a multi-class audio input, and we propose a method that solves this task using an audio-visual separator. Second, we introduce a new audio-visual separation task, which involves generating a separate image for each class present in a mixed audio input. Lastly, we propose new evaluation metrics for the audio-visual generation task: the Class Representation Score (CRS) and a modified R@K. Our model is trained and evaluated on the VGGSound dataset. We show that our method outperforms the state of the art, achieving a 7% higher CRS and 4% higher R@2* when generating plausible images from mixed audio.
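The abstract does not define the paper's modified R@K, but the standard Recall@K it builds on is a common retrieval metric: for each audio query, check whether the matching image appears among the top-K retrieved by similarity. A minimal sketch under that assumption (the similarity matrix and the diagonal ground-truth pairing are illustrative, not the authors' implementation):

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """Standard Recall@K for retrieval.

    sim[i, j] is the similarity between query i (e.g. an audio clip)
    and candidate j (e.g. a generated image). Ground truth is assumed
    to be the diagonal pairing: query i matches candidate i.
    """
    n = sim.shape[0]
    # Indices of the top-k most similar candidates per query (descending).
    topk = np.argsort(-sim, axis=1)[:, :k]
    # Count queries whose true match appears in their top-k list.
    hits = sum(i in topk[i] for i in range(n))
    return hits / n

# Toy example: 3 audio queries vs 3 candidate images.
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.3, 0.4, 0.2]])
r1 = recall_at_k(sim, 1)  # queries 0 and 1 rank their match first
r3 = recall_at_k(sim, 3)  # every match is trivially in the top 3
```

The paper's R@2* presumably adapts this to the mixed-audio setting (e.g. multiple correct classes per query), which this generic version does not capture.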
Similar Papers
Seeing Speech and Sound: Distinguishing and Locating Audios in Visual Scenes
CV and Pattern Recognition
Lets computers understand mixed sounds and sights.
Diffusion-Based Unsupervised Audio-Visual Speech Separation in Noisy Environments with Noise Prior
Audio and Speech Processing
Cleans up noisy audio to hear voices better.
OpenAVS: Training-Free Open-Vocabulary Audio Visual Segmentation with Foundational Models
Machine Learning (CS)
Lets computers find sounds in videos.