Ultrasound Image-to-Video Synthesis via Latent Dynamic Diffusion Models
By: Tingxiu Chen, Yilei Shi, Zixuan Zheng, and more
Potential Business Impact:
Creates synthetic ultrasound videos from still images to train diagnostic AI models.
Ultrasound video classification enables automated diagnosis and has emerged as an important research area. However, publicly available ultrasound video datasets remain scarce, hindering progress in developing effective video classification models. We propose addressing this shortage by synthesizing plausible ultrasound videos from readily available, abundant ultrasound images. To this end, we introduce a latent dynamic diffusion model (LDDM) to efficiently translate static images to dynamic sequences with realistic video characteristics. We demonstrate strong quantitative results and visually appealing synthesized videos on the BUSV benchmark. Notably, training video classification models on combinations of real and LDDM-synthesized videos substantially improves performance over using real data alone, indicating our method successfully emulates dynamics critical for discrimination. Our image-to-video approach provides an effective data augmentation solution to advance ultrasound video analysis. Code is available at https://github.com/MedAITech/U_I2V.
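The augmentation recipe the abstract describes — training a video classifier on a mix of real clips and LDDM-synthesized clips — can be sketched as below. This is an illustrative sketch only: the `VideoSample` structure, file names, and the `synth_ratio` cap are assumptions, not the authors' implementation (their code is at the linked GitHub repository).

```python
from dataclasses import dataclass
import random

@dataclass
class VideoSample:
    path: str        # path to a video clip (hypothetical file layout)
    label: int       # class index, e.g. benign=0 / malignant=1 on BUSV
    synthetic: bool  # True if the clip was LDDM-generated from an image

def build_training_set(real, synthetic, synth_ratio=1.0, seed=0):
    """Combine real and synthetic samples into one training list.

    At most synth_ratio * len(real) synthetic clips are kept, so the
    synthetic data augments rather than dominates the real data.
    (The ratio is an assumed knob, not a value from the paper.)
    """
    rng = random.Random(seed)
    k = min(len(synthetic), int(synth_ratio * len(real)))
    picked = rng.sample(synthetic, k)
    combined = list(real) + picked
    rng.shuffle(combined)
    return combined

# Toy usage: 10 real clips plus a larger pool of synthesized clips.
real = [VideoSample(f"real_{i}.mp4", i % 2, False) for i in range(10)]
synth = [VideoSample(f"synth_{i}.mp4", i % 2, True) for i in range(30)]
train = build_training_set(real, synth, synth_ratio=1.0)
```

A standard video classifier would then be trained on `train` exactly as on a purely real dataset; the abstract reports that this combination outperforms training on real data alone.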
Similar Papers
Generative deep learning for foundational video translation in ultrasound
CV and Pattern Recognition
Translates ultrasound videos between imaging domains using generative deep learning.
Mission Balance: Generating Under-represented Class Samples using Video Diffusion Models
CV and Pattern Recognition
Creates synthetic surgical videos to balance under-represented classes when training AI models.
Label-free Motion-Conditioned Diffusion Model for Cardiac Ultrasound Synthesis
CV and Pattern Recognition
Synthesizes cardiac ultrasound videos without requiring expert motion labels.