Generative Spatiotemporal Data Augmentation
By: Jinfan Zhou, Lixin Luo, Sungmin Eum, and more
Potential Business Impact:
Makes computer vision work better with less data.
We explore spatiotemporal data augmentation using video foundation models to diversify both camera viewpoints and scene dynamics. Unlike existing approaches based on simple geometric transforms or appearance perturbations, our method leverages off-the-shelf video diffusion models to generate realistic 3D spatial and temporal variations from a given image dataset. Incorporating these synthesized video clips as supplemental training data yields consistent performance gains in low-data settings, such as UAV-captured imagery where annotations are scarce. Beyond empirical improvements, we provide practical guidelines for (i) choosing an appropriate spatiotemporal generative setup, (ii) transferring annotations to synthetic frames, and (iii) addressing disocclusion (regions newly revealed in generated views, which carry no labels). Experiments on COCO subsets and UAV-captured datasets show that, when applied judiciously, spatiotemporal augmentation broadens the data distribution along axes underrepresented by traditional and prior generative methods, offering an effective lever for improving model performance in data-scarce regimes.
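To make the pipeline the abstract describes more concrete, below is a minimal sketch of one plausible setup: generate a short clip from a source image with an off-the-shelf video diffusion model, then propagate the source annotation to each generated frame and flag disoccluded pixels. The abstract does not name a specific model or label-transfer method, so the use of Stable Video Diffusion (via Hugging Face diffusers), Farneback optical flow, and the forward-backward consistency check here are illustrative assumptions, not the authors' exact recipe.

```python
# Illustrative sketch only: the paper does not specify a generative model or a
# label-transfer method; Stable Video Diffusion and Farneback flow are assumed.
import numpy as np
import cv2
import torch
from diffusers import StableVideoDiffusionPipeline
from PIL import Image


def generate_views(image: Image.Image, num_frames: int = 14) -> list[Image.Image]:
    """Synthesize a short clip (novel viewpoints / dynamics) from one image."""
    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    ).to("cuda")
    return pipe(image, num_frames=num_frames, decode_chunk_size=8).frames[0]


def transfer_mask(src_mask: np.ndarray, src_rgb: np.ndarray, dst_rgb: np.ndarray):
    """Warp a binary annotation mask from the source frame to a generated frame.

    Pixels that fail a forward-backward flow consistency check are treated as
    disoccluded and returned separately so they can be excluded from the loss.
    """
    g0 = cv2.cvtColor(src_rgb, cv2.COLOR_RGB2GRAY)
    g1 = cv2.cvtColor(dst_rgb, cv2.COLOR_RGB2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(g1, g0, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = g1.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp the source mask into the generated frame's coordinates:
    # each destination pixel looks up its corresponding source location.
    map_x = (xs + bwd[..., 0]).astype(np.float32)
    map_y = (ys + bwd[..., 1]).astype(np.float32)
    warped = cv2.remap(src_mask.astype(np.float32), map_x, map_y,
                       cv2.INTER_NEAREST) > 0.5

    # Forward-backward check: a consistent pixel satisfies
    # bwd(p) + fwd(p + bwd(p)) ~= 0; large drift suggests disocclusion
    # (newly revealed content with no source label to inherit).
    fwd_at = cv2.remap(fwd, map_x, map_y, cv2.INTER_LINEAR)
    drift = np.linalg.norm(fwd_at + bwd, axis=-1)
    disoccluded = drift > 1.5  # pixel threshold; an assumed hyperparameter

    return warped, disoccluded
```

A training loop could then treat the `disoccluded` pixels as ignore regions, or discard generated frames in which they dominate; that is one simple policy for the disocclusion issue raised in guideline (iii), not necessarily the one the paper adopts.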
Similar Papers
Generative Hints
CV and Pattern Recognition
Teaches computers to see things better.
Data Augmentation Strategies for Robust Lane Marking Detection
CV and Pattern Recognition
Helps cars see lane lines better in tricky spots.
Salient Concept-Aware Generative Data Augmentation
CV and Pattern Recognition
Makes AI create better, more varied pictures from words.