MetaVoxel: Joint Diffusion Modeling of Imaging and Clinical Metadata
By: Yihao Liu, Chenyu Gao, Lianrui Zuo, and more
Potential Business Impact:
AI learns to predict health from scans and clinical data.
Modern deep learning methods have achieved impressive results across tasks ranging from disease classification and continuous biomarker estimation to realistic medical image generation. Most of these approaches, however, are trained to model a conditional distribution defined by a specific predictive direction and a specific set of input variables. We introduce MetaVoxel, a generative joint diffusion modeling framework that learns the joint distribution over imaging data and clinical metadata with a single diffusion process spanning all variables. By capturing the joint distribution, MetaVoxel unifies tasks that traditionally require separate conditional models and supports flexible zero-shot inference from arbitrary subsets of inputs without task-specific retraining. Using more than 10,000 T1-weighted MRI scans paired with clinical metadata from nine datasets, we show that a single MetaVoxel model can perform image generation, age estimation, and sex prediction, achieving performance comparable to established task-specific baselines. Additional experiments highlight its capacity for flexible inference. Together, these findings demonstrate that joint multimodal diffusion is a promising direction for unifying medical AI models and enabling broader clinical applicability.
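To make the core idea concrete, below is a minimal, assumption-based PyTorch sketch of joint diffusion over a single concatenated [image, metadata] state, with mask-based zero-shot conditioning. It is not the authors' implementation: the names (`JointDenoiser`, `sample_with_mask`), the toy dimensions, the MLP denoiser, and the simplified replacement-style sampler are all illustrative choices.

```python
# Sketch of joint diffusion over imaging + clinical metadata (illustrative only).
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

IMG_DIM, META_DIM = 64, 2                   # toy flattened image + (age, sex)
DIM = IMG_DIM + META_DIM                    # one joint state vector

class JointDenoiser(nn.Module):
    """Predicts the noise added to the concatenated [image, metadata] vector."""
    def __init__(self, dim=DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),   # +1 for the timestep feature
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t):
        t_feat = (t.float() / T).unsqueeze(-1)       # crude timestep embedding
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def training_step(model, x0):
    """Standard DDPM noise-prediction loss, except x0 spans all variables."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(x_t, t), noise)

@torch.no_grad()
def sample_with_mask(model, known, mask):
    """Zero-shot conditional inference: entries with mask==1 are pinned to their
    known values; only unobserved entries are denoised. Conditioning on the image
    part yields age/sex prediction; conditioning on metadata yields image
    generation -- the same model, just a different mask."""
    x = torch.randn_like(known)
    for i in reversed(range(T)):
        t = torch.full((x.shape[0],), i, dtype=torch.long)
        a = alphas_cumprod[i]
        eps = model(x, t)
        x0_hat = (x - (1 - a).sqrt() * eps) / a.sqrt()    # predicted clean joint state
        x0_hat = mask * known + (1 - mask) * x0_hat       # pin observed variables
        if i > 0:
            a_prev = alphas_cumprod[i - 1]
            x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * torch.randn_like(x)
        else:
            x = x0_hat
    return x
```

Under this sketch, age estimation corresponds to a mask that fixes the image entries and leaves the metadata entries free, while image generation uses the reverse mask, which is the sense in which one joint model replaces several task-specific conditional models.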
Similar Papers
Diffusion Model in Latent Space for Medical Image Segmentation Task
CV and Pattern Recognition
Helps doctors see uncertain details in medical scans.
Deep generative priors for 3D brain analysis
CV and Pattern Recognition
Improves brain scans by learning anatomy from data.
MedDIFT: Multi-Scale Diffusion-Based Correspondence in 3D Medical Imaging
CV and Pattern Recognition
Matches medical scans better without training.