AV-Edit: Multimodal Generative Sound Effect Editing via Audio-Visual Semantic Joint Control
By: Xinyue Guo, Xiaoran Yang, Lipan Zhang, and more
Potential Business Impact:
Changes video sounds using pictures and words.
Sound effect editing (modifying audio by adding, removing, or replacing elements) remains constrained by existing approaches that rely solely on low-level signal processing or coarse text prompts, which often yield limited flexibility and suboptimal audio quality. To address this, we propose AV-Edit, a generative sound effect editing framework that enables fine-grained editing of existing audio tracks in videos by jointly leveraging visual, audio, and text semantics. Specifically, the proposed method employs a specially designed contrastive audio-visual masked autoencoder (CAV-MAE-Edit) for multimodal pre-training, learning aligned cross-modal representations. These representations are then used to train an editing Multimodal Diffusion Transformer (MM-DiT) that, through a correlation-based feature gating training strategy, removes visually irrelevant sounds and generates missing audio elements consistent with the video content. Furthermore, we construct a dedicated video-based sound editing dataset as an evaluation benchmark. Experiments demonstrate that AV-Edit generates high-quality audio with precise, visually grounded modifications, achieving state-of-the-art performance in sound effect editing and strong competitiveness in audio generation.
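The abstract does not spell out how correlation-based feature gating works, but the idea can be pictured with a minimal PyTorch sketch: score each audio token against a pooled visual embedding (assuming the two modalities already share an embedding space from CAV-MAE-Edit pre-training) and suppress low-correlation tokens. The function name, mean pooling, and sigmoid-with-temperature gate below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def correlation_feature_gate(audio_tokens: torch.Tensor,
                             visual_tokens: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
    """Gate audio tokens by their correlation with visual content.

    audio_tokens:  (B, Ta, D) audio feature sequence
    visual_tokens: (B, Tv, D) visual feature sequence
    Returns gated audio tokens of shape (B, Ta, D).
    """
    # Pool the visual sequence into a single clip-level embedding.
    visual_global = visual_tokens.mean(dim=1)               # (B, D)

    # Cosine similarity between each audio token and the visual embedding.
    a = F.normalize(audio_tokens, dim=-1)                   # (B, Ta, D)
    v = F.normalize(visual_global, dim=-1).unsqueeze(1)     # (B, 1, D)
    corr = (a * v).sum(dim=-1)                              # (B, Ta)

    # Map correlations to (0, 1) gates; visually irrelevant tokens
    # (low correlation) are damped before conditioning the diffusion model.
    gates = torch.sigmoid(corr / temperature)               # (B, Ta)
    return audio_tokens * gates.unsqueeze(-1)
```

Under this reading, the gate acts as a soft mask: sounds with no visual counterpart are attenuated (supporting removal), while the diffusion transformer fills in audio for visual events the gated features no longer explain.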
Similar Papers
Audio-Guided Visual Editing with Complex Multi-Modal Prompts
CV and Pattern Recognition
Lets you edit pictures using sounds and words.
Object-AVEdit: An Object-level Audio-Visual Editing Model
Multimedia
Changes sounds and pictures of objects in videos.
Training-Free Multimodal Guidance for Video to Audio Generation
Machine Learning (CS)
Makes silent videos talk with realistic sounds.