I2VWM: Robust Watermarking for Image to Video Generation
By: Guanjie Wang, Zehua Ma, Han Fang, and more
Potential Business Impact:
Tracks fake videos back to their original pictures.
The rapid progress of image-guided video generation (I2V) has raised concerns about its potential misuse in misinformation and fraud, underscoring the urgent need for effective digital watermarking. While existing watermarking methods demonstrate robustness within a single modality, they fail to trace source images in I2V settings. To address this gap, we introduce the concept of Robust Diffusion Distance, which measures the temporal persistence of watermark signals in generated videos. Building on this, we propose I2VWM, a cross-modal watermarking framework designed to enhance watermark robustness across time. I2VWM leverages a video-simulation noise layer during training and employs an optical-flow-based alignment module during inference. Experiments on both open-source and commercial I2V models demonstrate that I2VWM significantly improves robustness while maintaining imperceptibility, establishing a new paradigm for cross-modal watermarking in the era of generative video. Code released: https://github.com/MrCrims/I2VWM-Robust-Watermarking-for-Image-to-Video-Generation
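The abstract's optical-flow-based alignment module presumably warps each generated frame back toward the watermarked source image before watermark extraction. The paper's actual flow estimator and interpolation scheme are not specified here, so the following is only a minimal NumPy sketch of the inverse-warping step, assuming a dense flow field is already available (the function name `warp_back` and nearest-neighbor sampling are illustrative choices, not the authors' implementation):

```python
import numpy as np

def warp_back(frame, flow):
    """Inverse-warp a video frame toward the source image using a dense flow field.

    frame: (H, W) grayscale array from the generated video.
    flow:  (H, W, 2) array of (dy, dx) displacements mapping each source-image
           pixel to its location in the frame (nearest-neighbor sampling for brevity).
    Returns an (H, W) array aligned to the source image's pixel grid.
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # For each source pixel, look up where it moved to in the generated frame,
    # clipping coordinates that fall outside the frame boundary.
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return frame[src_y, src_x]

# Toy check: a frame shifted one pixel right is realigned by a constant flow.
img = np.arange(16.0).reshape(4, 4)
frame = np.roll(img, 1, axis=1)          # simulate motion in the generated frame
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0                        # every source pixel moved +1 in x
aligned = warp_back(frame, flow)
```

Interior pixels of `aligned` then match the source image, so a per-frame watermark decoder can operate on the realigned frame as if it were the original watermarked image; border pixels are clipped and would be masked out in practice.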
Similar Papers
VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking
CV and Pattern Recognition
Protects AI videos from being changed.
Diffusion-Based Image Editing for Breaking Robust Watermarks
CV and Pattern Recognition
Breaks hidden messages in pictures using AI.
WaterFlow: Learning Fast & Robust Watermarks using Stable Diffusion
Image and Video Processing
Makes digital pictures safe from copying.