SeeingSounds: Learning Audio-to-Visual Alignment via Text
By: Simone Carnemolla, Matteo Pennisi, Chiara Russo, and more
Potential Business Impact:
Generates pictures directly from sounds, without needing matched sound-image pairs for training.
We introduce SeeingSounds, a lightweight and modular framework for audio-to-image generation that leverages the interplay between audio, language, and vision, without requiring any paired audio-visual data or training of visual generative models. Rather than treating audio as a substitute for text or relying solely on audio-to-text mappings, our method performs dual alignment: audio is projected into a semantic language space via a frozen language encoder and contextually grounded in the visual domain using a vision-language model. This approach, inspired by cognitive neuroscience, reflects the natural cross-modal associations observed in human perception. The model operates on frozen diffusion backbones and trains only lightweight adapters, enabling efficient and scalable learning. Moreover, it supports fine-grained and interpretable control through procedural text prompt generation, where audio transformations (e.g., volume or pitch shifts) translate into descriptive prompts (e.g., "a distant thunder") that guide visual outputs. Extensive experiments across standard benchmarks confirm that SeeingSounds outperforms existing methods in both zero-shot and supervised settings, establishing a new state of the art in controllable audio-to-visual generation.
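To make the two ideas in the abstract concrete, here is a minimal sketch (not the authors' code): a small trainable adapter that projects an audio embedding into the embedding space of a frozen text encoder used for diffusion conditioning, plus a toy "procedural prompt" rule that turns audio transformations such as volume or pitch shifts into descriptive text modifiers. All module names, dimensions, and thresholds below are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal illustrative sketch; assumes frozen audio/text encoders exist elsewhere.
import torch
import torch.nn as nn


class AudioToTextAdapter(nn.Module):
    """Trainable projection from an audio-encoder embedding to the
    (frozen) language-encoder space that conditions the diffusion backbone."""

    def __init__(self, audio_dim: int = 512, text_dim: int = 768):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, audio_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(audio_emb)


def procedural_prompt(base_caption: str, rms_volume: float, pitch_shift: float) -> str:
    """Map coarse audio transformations to descriptive text modifiers.
    Thresholds and wording are hypothetical, chosen only for illustration."""
    modifiers = []
    if rms_volume < 0.2:
        modifiers.append("distant")
    elif rms_volume > 0.8:
        modifiers.append("very close and loud")
    if pitch_shift > 2.0:
        modifiers.append("high-pitched")
    elif pitch_shift < -2.0:
        modifiers.append("deep, low-pitched")
    return " ".join(modifiers + [base_caption]) if modifiers else base_caption


if __name__ == "__main__":
    adapter = AudioToTextAdapter()
    fake_audio_emb = torch.randn(1, 512)       # stand-in for a frozen audio encoder's output
    text_space_emb = adapter(fake_audio_emb)   # conditioning vector for a frozen diffusion model
    print(text_space_emb.shape)                # torch.Size([1, 768])
    print(procedural_prompt("thunder over a field", rms_volume=0.1, pitch_shift=0.0))
    # -> "distant thunder over a field"
```

The sketch reflects the division of labor described in the abstract: only the adapter's parameters would be trained, while the audio encoder, language encoder, and diffusion backbone stay frozen, and the prompt rule shows how a lowered volume could be verbalized as "distant" to steer the generated image.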
Similar Papers
Seeing Speech and Sound: Distinguishing and Locating Audios in Visual Scenes
CV and Pattern Recognition
Lets computers understand mixed sounds and sights.
Can Sound Replace Vision in LLaVA With Token Substitution?
Multimedia
Makes computers understand sounds and pictures better.
Hearing and Seeing Through CLIP: A Framework for Self-Supervised Sound Source Localization
CV and Pattern Recognition
Finds sounds in videos using AI.