Mission Balance: Generating Under-represented Class Samples using Video Diffusion Models
By: Danush Kumar Venkatesh, Isabel Funke, Micha Pfeiffer, and more
Potential Business Impact:
Creates synthetic surgery videos to train medical AI models better.
Computer-assisted interventions can improve intra-operative guidance, particularly through deep learning methods that harness the spatiotemporal information in surgical videos. However, the severe data imbalance often found in surgical video datasets hinders the development of high-performing models. In this work, we aim to overcome the data imbalance by synthesizing surgical videos. We propose a unique two-stage, text-conditioned diffusion-based method to generate high-fidelity surgical videos for under-represented classes. Our approach conditions the generation process on text prompts and decouples spatial and temporal modeling by utilizing a 2D latent diffusion model to capture spatial content and then integrating temporal attention layers to ensure temporal consistency. Furthermore, we introduce a rejection sampling strategy to select the most suitable synthetic samples, effectively augmenting existing datasets to address class imbalance. We evaluate our method on two downstream tasks, surgical action recognition and intra-operative event prediction, demonstrating that incorporating synthetic videos from our approach substantially enhances model performance. We open-source our implementation at https://gitlab.com/nct_tso_public/surgvgen.
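The rejection sampling idea above can be sketched in a few lines: generate a pool of synthetic videos, score each one with a classifier, and keep only the samples the classifier confidently assigns to the intended under-represented class. This is a minimal illustrative sketch, not the paper's implementation; the function name, the dictionary-based classifier output, and the 0.8 threshold are all assumptions.

```python
# Hedged sketch of rejection sampling for synthetic data selection.
# All names (select_synthetic_samples, the stub classifier, the 0.8
# threshold) are illustrative assumptions, not the paper's exact code.

def select_synthetic_samples(samples, classifier, target_class, threshold=0.8):
    """Keep synthetic samples that the classifier confidently assigns
    to the under-represented class they were generated for."""
    selected = []
    for sample in samples:
        probs = classifier(sample)  # assumed: dict mapping class -> probability
        if probs.get(target_class, 0.0) >= threshold:
            selected.append(sample)
    return selected

# Toy usage with a stub classifier that returns fixed probabilities.
synthetic_videos = ["vid_a", "vid_b", "vid_c"]
stub_scores = {
    "vid_a": {"grasp": 0.95, "cut": 0.05},
    "vid_b": {"grasp": 0.40, "cut": 0.60},
    "vid_c": {"grasp": 0.85, "cut": 0.15},
}
kept = select_synthetic_samples(synthetic_videos, lambda v: stub_scores[v], "grasp")
print(kept)  # expect: ['vid_a', 'vid_c']
```

Only the accepted samples would then be merged into the training set for the downstream task, so low-fidelity or off-class generations never dilute the augmented dataset.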
Similar Papers
Ultrasound Image-to-Video Synthesis via Latent Dynamic Diffusion Models
CV and Pattern Recognition
Creates fake ultrasound videos to train doctors better.
Watch and Learn: Leveraging Expert Knowledge and Language for Surgical Video Understanding
CV and Pattern Recognition
Teaches computers to understand surgery videos.
Video Dataset Condensation with Diffusion Models
CV and Pattern Recognition
Makes huge video collections much smaller.