
Audio-Guided Visual Editing with Complex Multi-Modal Prompts

Published: August 28, 2025 | arXiv ID: 2508.20379v1

By: Hyeonyu Kim, Seokhoon Jeong, Seonghee Han, and more

Potential Business Impact:

Lets you edit pictures using sounds and words.

Business Areas:
Media and Entertainment

Visual editing with diffusion models has made significant progress but often struggles with complex scenarios that textual guidance alone cannot adequately describe, highlighting the need for additional non-text editing prompts. In this work, we introduce a novel audio-guided visual editing framework that can handle complex editing tasks with multiple text and audio prompts without requiring additional training. Existing audio-guided visual editing methods often necessitate training on specific datasets to align audio with text, limiting their generalization to real-world situations. We instead leverage a pre-trained multi-modal encoder with strong zero-shot capabilities and integrate diverse audio into visual editing tasks by alleviating the discrepancy between the audio encoder's embedding space and the diffusion model's prompt encoder space. Additionally, we propose a novel approach to handling complex scenarios with multiple, multi-modal editing prompts through separate noise branching and adaptive patch selection. Our comprehensive experiments on diverse editing tasks demonstrate that our framework excels at complicated editing scenarios by incorporating rich information from audio where text-only approaches fail.
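The abstract does not spell out how the audio-to-prompt-space gap is bridged. As a rough illustration only, one common training-free trick is to re-center an audio embedding by the statistical gap between audio and text embeddings produced by the same shared encoder. The sketch below assumes a PyTorch setup; the function and argument names are hypothetical, and the re-centering rule is an illustrative stand-in, not the paper's actual alignment method.

```python
import torch
import torch.nn.functional as F

def align_audio_to_prompt_space(
    audio_emb: torch.Tensor,       # (d,) audio embedding from a shared multi-modal encoder
    audio_emb_mean: torch.Tensor,  # (d,) mean audio embedding over a reference corpus
    text_emb_mean: torch.Tensor,   # (d,) mean text embedding over paired reference prompts
) -> torch.Tensor:
    """Training-free re-centering: shift the audio embedding by the gap
    between the corpus-level audio and text means, then re-normalize so
    it lands in the region the diffusion model's prompt encoder expects.
    An illustrative heuristic, not the paper's exact procedure."""
    shifted = audio_emb - audio_emb_mean + text_emb_mean
    return F.normalize(shifted, dim=-1)
```

The appeal of such a scheme is that it needs only corpus statistics, so no fine-tuning of the audio encoder or the diffusion model is required, consistent with the training-free claim.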
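Likewise, "separate noise branching and adaptive patch selection" could plausibly mean denoising once per prompt branch and stitching the noise predictions patch by patch. The sketch below uses each branch's deviation from the unconditional prediction as a stand-in saliency score; the function name, the patch size, and the selection rule are assumptions for illustration, not the paper's exact method.

```python
import torch

def adaptive_patch_select(
    eps_uncond: torch.Tensor,          # (C, H, W) unconditional noise prediction
    eps_branches: list[torch.Tensor],  # one (C, H, W) prediction per prompt branch
    patch: int = 8,                    # patch edge length; assumes H, W divisible by patch
) -> torch.Tensor:
    """Per patch, keep the branch whose prediction deviates most from the
    unconditional one (deviation magnitude as a stand-in saliency score)."""
    C, H, W = eps_uncond.shape
    eps = torch.stack(eps_branches)                 # (B, C, H, W)
    dev = (eps - eps_uncond).pow(2).sum(dim=1)      # (B, H, W) per-pixel deviation
    # Pool per-pixel deviations into per-patch scores: (B, H//p, W//p).
    scores = dev.reshape(len(eps_branches), H // patch, patch,
                         W // patch, patch).mean(dim=(2, 4))
    winner = scores.argmax(dim=0)                   # best branch index per patch
    # Upsample the patch-level choice to pixel resolution and gather.
    idx = winner.repeat_interleave(patch, 0).repeat_interleave(patch, 1)  # (H, W)
    idx = idx.unsqueeze(0).unsqueeze(0).expand(1, C, H, W)
    return eps.gather(0, idx).squeeze(0)            # (C, H, W) composite prediction
```

Running this once per denoising step would let different image regions follow different text or audio prompts, which matches the multi-prompt editing scenario the abstract describes.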

Country of Origin
🇰🇷 Korea, Republic of

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition