TRACE: Temporally Reliable Anatomically-Conditioned 3D CT Generation with Enhanced Efficiency
By: Minye Shao, Xingyu Miao, Haoran Duan, and more
Potential Business Impact:
Generates realistic 3D body scans by stitching together flat 2D slices.
3D medical image generation is essential for data augmentation and patient privacy, calling for reliable and efficient models suited to clinical practice. However, current methods suffer from limited anatomical fidelity, restricted axial length, and substantial computational cost, placing them beyond reach for regions with limited resources and infrastructure. We introduce TRACE, a framework that generates 3D medical images with spatiotemporal alignment using a 2D multimodal-conditioned diffusion approach. TRACE models sequential 2D slices as video frame pairs, combines segmentation priors and radiology reports for anatomical alignment, and incorporates optical flow to sustain temporal coherence. During inference, an overlapping-frame strategy links frame pairs into a flexible-length sequence, which is then reconstructed into a spatiotemporally and anatomically aligned 3D volume. Experimental results demonstrate that TRACE effectively balances computational efficiency against anatomical fidelity and spatiotemporal consistency. Code is available at: https://github.com/VinyehShaw/TRACE.
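The overlapping-frame inference described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `generate_pair` is a hypothetical stand-in for the 2D conditioned diffusion model (here it just returns random arrays), and the point is only to show how consecutive pairs share a frame so they chain into a slice sequence of arbitrary length that stacks into a 3D volume.

```python
import numpy as np

def generate_pair(prev_frame, h=64, w=64, rng=None):
    """Hypothetical stand-in for the 2D multimodal-conditioned diffusion
    model: returns two consecutive slices. The first output frame is
    conditioned on prev_frame (here simply copied to keep the overlap)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    first = prev_frame if prev_frame is not None else rng.random((h, w)).astype(np.float32)
    nxt = rng.random((h, w)).astype(np.float32)
    return first, nxt

def stitch_volume(num_slices, h=64, w=64):
    """Overlapping-frame strategy: each generated pair reuses the previous
    pair's last frame as its first frame, so pairs link into a
    flexible-length sequence that is stacked into a 3D volume."""
    rng = np.random.default_rng(0)
    slices, prev = [], None
    while len(slices) < num_slices:
        first, nxt = generate_pair(prev, h, w, rng)
        if prev is None:
            slices.append(first)  # only the very first pair contributes both frames
        slices.append(nxt)
        prev = nxt  # overlap: the next pair starts from this frame
    return np.stack(slices[:num_slices], axis=0)  # (depth, height, width)

volume = stitch_volume(16)
print(volume.shape)  # axial length is a free parameter, not fixed by the model
```

Because the axial length is set by the loop rather than by the generator, the same 2D model can produce volumes of any depth, which is how this formulation avoids the restricted axial length of full-3D generators.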
Similar Papers
TraceTrans: Translation and Spatial Tracing for Surgical Prediction
Image and Video Processing
Makes medical pictures show future results accurately.
CTFlow: Video-Inspired Latent Flow Matching for 3D CT Synthesis
CV and Pattern Recognition
Creates fake CT scans from doctor's notes.
Text-to-CT Generation via 3D Latent Diffusion Model with Contrastive Vision-Language Pretraining
CV and Pattern Recognition
Creates realistic CT scans from text descriptions.