MedDIFT: Multi-Scale Diffusion-Based Correspondence in 3D Medical Imaging
By: Xingyu Zhang, Anna Reithmeir, Fryderyk Kögl, and more
Accurate spatial correspondence between medical images is essential for longitudinal analysis, lesion tracking, and image-guided interventions. Conventional medical image registration methods rely on local intensity-based similarity measures, which fail to capture global semantic structure and often yield mismatches in low-contrast or anatomically variable regions. Recent advances in diffusion models suggest that their intermediate representations encode rich geometric and semantic information. We present MedDIFT, a training-free 3D correspondence framework that leverages multi-scale features from a pretrained latent medical diffusion model as voxel descriptors. MedDIFT fuses diffusion activations across network levels into voxel-wise descriptors and matches them via cosine similarity, optionally constrained by a local-search prior. On a publicly available lung CT dataset, MedDIFT achieves correspondence accuracy comparable to the state-of-the-art learning-based UniGradICON model and surpasses conventional B-spline-based registration, without requiring any task-specific model training. Ablation experiments confirm that multi-level feature fusion and a modest amount of diffusion noise improve performance.
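To make the matching step concrete, below is a minimal sketch of the two operations the abstract names: fusing multi-scale diffusion activations into voxel descriptors and matching them by cosine similarity within a local search window. It assumes per-scale 3D activations have already been extracted from the pretrained diffusion model; the function names and the `search_radius` parameter are illustrative, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F

def fuse_multiscale_features(feature_maps, target_shape):
    """Upsample per-scale 3D activations to a common voxel grid,
    L2-normalize each scale, and concatenate along the channel axis.
    `feature_maps`: list of (C_i, D_i, H_i, W_i) tensors."""
    fused = []
    for f in feature_maps:
        f = F.interpolate(f.unsqueeze(0), size=target_shape,
                          mode="trilinear", align_corners=False).squeeze(0)
        # Unit-normalize each scale so no single level dominates the descriptor.
        fused.append(F.normalize(f, dim=0))
    return torch.cat(fused, dim=0)  # (sum_i C_i, D, H, W)

def match_voxel(desc_fixed, desc_moving, voxel, search_radius=4):
    """Find the moving-image voxel whose descriptor has the highest cosine
    similarity to the given fixed-image voxel, restricted to a local window
    (a simple form of the optional local-search prior)."""
    z, y, x = voxel
    q = F.normalize(desc_fixed[:, z, y, x], dim=0)  # query descriptor
    _, D, H, W = desc_moving.shape
    z0, z1 = max(z - search_radius, 0), min(z + search_radius + 1, D)
    y0, y1 = max(y - search_radius, 0), min(y + search_radius + 1, H)
    x0, x1 = max(x - search_radius, 0), min(x + search_radius + 1, W)
    window = desc_moving[:, z0:z1, y0:y1, x0:x1].reshape(desc_moving.shape[0], -1)
    sims = q @ F.normalize(window, dim=0)  # cosine similarity per candidate voxel
    dz, dy, dx = np.unravel_index(int(sims.argmax()), (z1 - z0, y1 - y0, x1 - x0))
    return (z0 + dz, y0 + dy, x0 + dx)
```

Applied at annotated landmarks in the fixed scan, `match_voxel` yields predicted landmark positions in the moving scan, which is how correspondence accuracy is typically scored on lung CT landmark benchmarks.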
Similar Papers
Diffusion Model in Latent Space for Medical Image Segmentation Task
CV and Pattern Recognition
Helps doctors see uncertain details in medical scans.
Guiding Registration with Emergent Similarity from Pre-Trained Diffusion Models
CV and Pattern Recognition
Helps doctors match corresponding regions across medical images.
GeoDiff: Geometry-Guided Diffusion for Metric Depth Estimation
CV and Pattern Recognition
Recovers true metric distances from single-camera images.