Coherent Audio-Visual Editing via Conditional Audio Generation Following Video Edits
By: Masato Ishii, Akio Hayakawa, Takashi Shibuya, and more
Potential Business Impact:
Keeps a video's sound matched to its visuals after editing.
We introduce a novel pipeline for joint audio-visual editing that enhances the coherence between an edited video and its accompanying audio. Our approach first applies state-of-the-art video editing techniques to produce the target video, then performs audio editing to align with the visual changes. To achieve this, we present a new video-to-audio generation model that conditions on the source audio, the target video, and a text prompt. We extend the model architecture to incorporate conditional audio input and propose a data augmentation strategy that improves training efficiency. Furthermore, our model dynamically adjusts the influence of the source audio based on the complexity of the edits, preserving the original audio structure where possible. Experimental results demonstrate that our method outperforms existing approaches in maintaining audio-visual alignment and content integrity.
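To make the two-stage structure concrete, here is a minimal sketch in PyTorch. It is not the authors' code: the module and function names (ConditionalV2A, edit_audio_visual) and the scalar edit_strength used to blend source-audio features against video/text conditioning are hypothetical placeholders, standing in for the paper's conditional video-to-audio model and its dynamic adjustment of source-audio influence.

```python
# Hypothetical sketch of the described pipeline, not the paper's implementation.
# Stage 1 (video editing) is assumed done; this shows stage 2: regenerating
# audio conditioned on the source audio, the edited video, and a text prompt.
import torch
import torch.nn as nn


class ConditionalV2A(nn.Module):
    """Toy video-to-audio generator conditioned on source audio and text."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.video_proj = nn.Linear(dim, dim)
        self.audio_proj = nn.Linear(dim, dim)
        self.text_proj = nn.Linear(dim, dim)
        self.decoder = nn.Linear(dim, dim)

    def forward(self, video_feats, source_audio, text_emb, edit_strength):
        # edit_strength in [0, 1]: 0 preserves the source audio structure,
        # 1 relies fully on the edited-video and text conditions. This mimics
        # the paper's dynamic weighting of the source audio (details differ).
        cond = self.video_proj(video_feats) + self.text_proj(text_emb)
        preserved = self.audio_proj(source_audio)
        fused = (1.0 - edit_strength) * preserved + edit_strength * cond
        return self.decoder(fused)


def edit_audio_visual(video_feats, source_audio, text_emb, edit_strength=0.5):
    """Stage 2 of the pipeline: align audio with an already-edited video."""
    model = ConditionalV2A()
    with torch.no_grad():
        return model(video_feats, source_audio, text_emb,
                     torch.tensor(edit_strength))


# Example: a batch of 2 clips with 64-dim features per modality.
video = torch.randn(2, 64)
audio = torch.randn(2, 64)
text = torch.randn(2, 64)
edited_audio = edit_audio_visual(video, audio, text, edit_strength=0.3)
print(edited_audio.shape)  # torch.Size([2, 64])
```

A low edit_strength here corresponds to a light edit where most of the original audio should survive; a high value corresponds to a heavy edit where the audio is regenerated largely from the new video and the prompt.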
Similar Papers
AV-Edit: Multimodal Generative Sound Effect Editing via Audio-Visual Semantic Joint Control
Multimedia
Changes video sounds using pictures and words.
Hear What Matters! Text-conditioned Selective Video-to-Audio Generation
CV and Pattern Recognition
Makes videos play only the sound you want.
Audio-Guided Visual Editing with Complex Multi-Modal Prompts
CV and Pattern Recognition
Lets you edit pictures using sounds and words.