RobustSora: De-Watermarked Benchmark for Robust AI-Generated Video Detection
By: Zhuo Wang, Xiliang Liu, Ligang Sun
Potential Business Impact:
Finds fake videos even if hidden marks are removed.
The proliferation of AI-generated video technologies poses challenges to information integrity. While recent benchmarks advance AIGC video detection, they overlook a critical factor: many state-of-the-art generative models embed digital watermarks in their outputs, and detectors may partially rely on these patterns. To evaluate this dependency, we present RobustSora, a benchmark designed to assess watermark robustness in AIGC video detection. We systematically construct a dataset of 6,500 videos comprising four types: Authentic-Clean (A-C), Authentic-Spoofed with fake watermarks (A-S), Generated-Watermarked (G-W), and Generated-DeWatermarked (G-DeW). The benchmark introduces two evaluation tasks: Task-I tests detection performance on watermark-removed AI videos, while Task-II assesses false alarm rates on authentic videos carrying fake watermarks. Experiments with ten models spanning specialized AIGC detectors, transformer architectures, and MLLM approaches reveal performance variations of 2-8 pp under watermark manipulation. Transformer-based models show consistent moderate dependency (6-8 pp), while MLLMs exhibit diverse patterns (2-8 pp). These findings indicate partial watermark dependency and highlight the need for watermark-aware training strategies. RobustSora provides essential tools to advance robust AIGC detection research.
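As a minimal sketch of how the two evaluation tasks described above could be scored, the snippet below assumes a generic detector that returns a boolean "AI-generated" verdict per video. The category codes follow the abstract (A-C, A-S, G-W, G-DeW), but the record format, field names, and function names are illustrative assumptions, not RobustSora's actual API.

```python
from typing import Callable, Iterable

# Hypothetical record format: (video_path, category), where category is one of
# "A-C" (Authentic-Clean), "A-S" (Authentic-Spoofed with a fake watermark),
# "G-W" (Generated-Watermarked), "G-DeW" (Generated-DeWatermarked).
Record = tuple[str, str]


def task1_detection_rate(detector: Callable[[str], bool],
                         videos: Iterable[Record]) -> float:
    """Task-I: detection rate on watermark-removed AI-generated videos (G-DeW).

    detector(path) is assumed to return True when it predicts 'AI-generated'.
    """
    dew = [path for path, cat in videos if cat == "G-DeW"]
    hits = sum(detector(path) for path in dew)
    return hits / len(dew) if dew else 0.0


def task2_false_alarm_rate(detector: Callable[[str], bool],
                           videos: Iterable[Record]) -> float:
    """Task-II: false alarm rate on authentic videos with spoofed watermarks (A-S)."""
    spoofed = [path for path, cat in videos if cat == "A-S"]
    false_alarms = sum(detector(path) for path in spoofed)
    return false_alarms / len(spoofed) if spoofed else 0.0
```

Comparing Task-I scores on G-DeW against the same detector's scores on G-W videos would expose how much of its accuracy comes from the watermark pattern rather than generation artifacts, which is the dependency gap the paper reports in percentage points.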
Similar Papers
Robust Image Self-Recovery against Tampering using Watermark Generation with Pixel Shuffling
CV and Pattern Recognition
Recovers original pictures from fake ones.
TriniMark: A Robust Generative Speech Watermarking Method for Trinity-Level Attribution
Multimedia
Marks fake voices so creators keep their work.
On-Device Watermarking: A Socio-Technical Imperative For Authenticity In The Age of Generative AI
Cryptography and Security
Proves real videos came from cameras, not AI.