Tiger200K: Manually Curated High Visual Quality Video Dataset from UGC Platform
By: Xianpan Zhou
Potential Business Impact:
Creates better AI videos from text descriptions.
The recent surge in open-source text-to-video generation models has significantly energized the research community, yet their dependence on proprietary training datasets remains a key constraint. While existing open datasets like Koala-36M employ algorithmic filtering of web-scraped videos from early platforms, they still lack the quality required for fine-tuning advanced video generation models. We present Tiger200K, a manually curated high visual quality video dataset sourced from User-Generated Content (UGC) platforms. By prioritizing visual fidelity and aesthetic quality, Tiger200K underscores the critical role of human expertise in data curation, and provides high-quality, temporally consistent video-text pairs for fine-tuning and optimizing video generation architectures through a simple but effective pipeline comprising shot boundary detection, OCR, border detection, motion filtering, and fine-grained bilingual captioning. The dataset will undergo ongoing expansion and be released as an open-source initiative to advance research and applications in video generative models. Project page: https://tinytigerpan.github.io/tiger200k/
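To make the curation pipeline concrete, below is a minimal sketch of how its filtering stages could be wired up. This is not the paper's actual implementation: it assumes PySceneDetect for shot boundary detection, pytesseract for the OCR-based text filter, simple per-row/column variance for border detection, and Farneback optical flow for the motion filter; all thresholds are illustrative. The bilingual captioning stage and manual review are omitted.

```python
# Sketch of a video curation pipeline in the spirit of the abstract's description.
# Assumed (not from the paper): libraries, helper names, and all threshold values.
import cv2
import numpy as np
import pytesseract
from scenedetect import detect, ContentDetector

def split_shots(video_path):
    """Shot boundary detection: return (start_sec, end_sec) per detected shot."""
    return [(start.get_seconds(), end.get_seconds())
            for start, end in detect(video_path, ContentDetector())]

def has_burned_in_text(frame, min_chars=10):
    """OCR filter: flag frames with heavy overlaid text (subtitles, watermarks)."""
    text = pytesseract.image_to_string(frame)
    return len(text.strip()) >= min_chars

def crop_borders(frame, var_thresh=5.0):
    """Border detection: trim rows/columns whose intensity is near-constant,
    which typically corresponds to black bars or solid padding."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rows = np.where(gray.var(axis=1) > var_thresh)[0]
    cols = np.where(gray.var(axis=0) > var_thresh)[0]
    if rows.size == 0 or cols.size == 0:
        return frame  # nothing recoverable; keep the frame as-is
    return frame[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

def motion_score(prev_gray, curr_gray):
    """Motion filter: mean optical-flow magnitude between consecutive grayscale
    frames; clips with near-zero motion (e.g., static slideshows) can be dropped."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())
```

In a full pipeline, shots passing these filters would then receive bilingual captions and human review before inclusion in the dataset, per the abstract.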
Similar Papers
UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions
CV and Pattern Recognition
Makes computers create super clear, movie-like videos.
NTIRE 2025 Challenge on Short-form UGC Video Quality Assessment and Enhancement: KwaiSR Dataset and Study
CV and Pattern Recognition
Makes blurry phone videos look clear.
VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation
CV and Pattern Recognition
Makes AI create better videos about anything.