VAInpaint: Zero-Shot Video-Audio inpainting framework with LLMs-driven Module
By: Kam Man Wu, Zeyue Tian, Liya Ji, and more
Potential Business Impact:
Removes a chosen object and its sound from a video while leaving the rest of the scene intact.
Video and audio inpainting for mixed audio-visual content has become a crucial task in multimedia editing. However, precisely removing an object and its corresponding audio from a video without affecting the rest of the scene remains a significant challenge. To address this, we propose VAInpaint, a novel pipeline that first uses a segmentation model to generate masks that guide a video inpainting model in removing the object. In parallel, an LLM analyzes the scene globally while a region-specific model provides localized descriptions. Both the global and regional descriptions are then fed into an LLM, which refines them into text queries for our text-driven audio separation model. The audio separation model is fine-tuned on a customized dataset comprising segmented MUSIC instrument images and VGGSound backgrounds to enhance its generalization performance. Experiments show that our method achieves performance comparable to current benchmarks in both audio and video inpainting.
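The pipeline described above can be sketched as a simple orchestration of its stages. Note that every function below (`segment_object`, `inpaint_video`, `describe_scene`, `describe_region`, `refine_to_query`, `separate_audio`) is a hypothetical placeholder returning stub values — the paper does not publish an API, so this only illustrates how the masks, descriptions, and text query flow between the components:

```python
# Hypothetical stand-ins for the models in the VAInpaint pipeline.
# Each returns a placeholder value so the end-to-end data flow can be traced.

def segment_object(video, target):
    # Segmentation model: per-frame masks for the object to remove.
    return [f"mask(frame={i}, target={target})" for i in range(len(video))]

def inpaint_video(video, masks):
    # Video inpainting model: remove the masked object from each frame.
    return [f"inpainted({frame})" for frame in video]

def describe_scene(video):
    # LLM: global description of the whole scene.
    return "global scene description"

def describe_region(video, masks):
    # Region-specific model: localized description of the masked object.
    return "localized object description"

def refine_to_query(global_desc, region_desc):
    # LLM: refine both descriptions into a text query for audio separation.
    return f"separate the sound of '{region_desc}' from '{global_desc}'"

def separate_audio(audio, query):
    # Text-driven audio separation model conditioned on the query.
    return f"separated({audio}, query={query!r})"

def vainpaint(video, audio, target):
    """Remove `target` and its sound; return (clean_video, clean_audio)."""
    masks = segment_object(video, target)
    clean_video = inpaint_video(video, masks)
    query = refine_to_query(describe_scene(video),
                            describe_region(video, masks))
    clean_audio = separate_audio(audio, query)
    return clean_video, clean_audio

frames = ["f0", "f1", "f2"]
clean_video, clean_audio = vainpaint(frames, "mixed_track", target="violin")
print(len(clean_video))   # one inpainted frame per input frame
```

The key design point this sketch reflects is that the video branch (masks → inpainting) and the audio branch (descriptions → query → separation) share only the segmentation masks, so the text query carries all object information into the audio model.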
Similar Papers
VIP: Video Inpainting Pipeline for Real World Human Removal
CV and Pattern Recognition
Removes people from videos without leaving a trace.
MTV-Inpaint: Multi-Task Long Video Inpainting
CV and Pattern Recognition
Adds or changes things in videos using words.
VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control
CV and Pattern Recognition
Fixes missing parts in videos, even long ones.