AV-Edit: Multimodal Generative Sound Effect Editing via Audio-Visual Semantic Joint Control
By: Xinyue Guo, Xiaoran Yang, Lipan Zhang, and more
Potential Business Impact:
Changes video sounds using pictures and words.
Sound effect editing, that is, modifying audio by adding, removing, or replacing elements, remains constrained by existing approaches that rely solely on low-level signal processing or coarse text prompts, often resulting in limited flexibility and suboptimal audio quality. To address this, we propose AV-Edit, a generative sound effect editing framework that enables fine-grained editing of existing audio tracks in videos by jointly leveraging visual, audio, and text semantics. Specifically, the proposed method employs a specially designed contrastive audio-visual masked autoencoder (CAV-MAE-Edit) for multimodal pre-training, learning aligned cross-modal representations. These representations are then used to train an editorial Multimodal Diffusion Transformer (MM-DiT) that removes visually irrelevant sounds and generates missing audio elements consistent with the video content, via a correlation-based feature-gating training strategy. Furthermore, we construct a dedicated video-based sound editing dataset as an evaluation benchmark. Experiments demonstrate that AV-Edit generates high-quality audio with precise modifications based on visual content, achieving state-of-the-art performance in sound effect editing and strong competitiveness in audio generation.
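To make the correlation-based feature gating idea concrete, below is a minimal sketch of one plausible reading of the mechanism: audio tokens are softly scaled by their similarity to visual tokens, so that visually irrelevant sounds are suppressed before diffusion-based generation. All function and parameter names here (`correlation_gate`, `temperature`) are hypothetical; the paper's actual CAV-MAE-Edit/MM-DiT architecture is not detailed in this listing, so this is an illustrative sketch, not the authors' implementation.

```python
# Hypothetical sketch of correlation-based feature gating (not the paper's code).
# Audio tokens are gated by their best cosine match against visual tokens,
# suppressing audio content with no visual counterpart.
import torch
import torch.nn.functional as F


def correlation_gate(audio_tokens: torch.Tensor,
                     visual_tokens: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Gate audio tokens by their max cosine similarity to any visual token.

    audio_tokens:  (B, Ta, D) features from an audio encoder
    visual_tokens: (B, Tv, D) features from a video encoder
    Returns gated audio tokens of shape (B, Ta, D).
    """
    a = F.normalize(audio_tokens, dim=-1)      # (B, Ta, D) unit-norm audio
    v = F.normalize(visual_tokens, dim=-1)     # (B, Tv, D) unit-norm visual
    sim = torch.einsum("bad,bvd->bav", a, v)   # (B, Ta, Tv) cosine similarities
    # Each audio token's relevance is its best match over all visual tokens,
    # squashed to (0, 1) so it acts as a soft gate on that token.
    relevance = torch.sigmoid(sim.max(dim=-1).values / temperature)  # (B, Ta)
    return audio_tokens * relevance.unsqueeze(-1)


if __name__ == "__main__":
    gated = correlation_gate(torch.randn(2, 128, 512), torch.randn(2, 64, 512))
    print(gated.shape)  # torch.Size([2, 128, 512])
```

In this reading, the gate is applied during training so the MM-DiT learns to reconstruct only audio supported by the video; at edit time, sounds with low audio-visual correlation are attenuated while missing, visually grounded sounds are generated.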
Similar Papers
Coherent Audio-Visual Editing via Conditional Audio Generation Following Video Edits
Multimedia
Makes videos and sounds match perfectly.
MMEDIT: A Unified Framework for Multi-Type Audio Editing via Audio Language Model
Sound
Changes sounds in audio using text instructions.