Improved Object-Centric Diffusion Learning with Registers and Contrastive Alignment
By: Bac Nguyen, Yuhta Takida, Naoki Murata, and more
Potential Business Impact:
Teaches computers to pick out individual objects in images.
Slot Attention (SA) with pretrained diffusion models has recently shown promise for object-centric learning (OCL), but suffers from slot entanglement and weak alignment between object slots and image content. We propose Contrastive Object-centric Diffusion Alignment (CODA), a simple extension that (i) employs register slots to absorb residual attention and reduce interference between object slots, and (ii) applies a contrastive alignment loss to explicitly encourage slot-image correspondence. The resulting training objective serves as a tractable surrogate for maximizing mutual information (MI) between slots and inputs, strengthening slot representation quality. On both synthetic (MOVi-C/E) and real-world datasets (VOC, COCO), CODA improves object discovery (e.g., +6.1% FG-ARI on COCO), property prediction, and compositional image generation over strong baselines. Register slots add negligible overhead, keeping CODA efficient and scalable. These results position CODA as an effective framework for robust OCL in complex, real-world scenes.
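To make the two components concrete, here is a minimal PyTorch sketch of the ideas as described in the abstract, not the authors' implementation. Register slots compete in the same attention softmax as object slots (so they can absorb residual attention) but are dropped from the output, and an InfoNCE-style loss aligns pooled slot features with pooled image features as a surrogate for the slot-image MI objective. The class and function names, the number of registers, the pooling choices, and the temperature are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotAttentionWithRegisters(nn.Module):
    """Hypothetical Slot Attention variant with extra register slots.

    Register slots take part in the slot-competition softmax, absorbing
    attention that object slots would otherwise fight over, and are
    discarded from the returned slots. A sketch, not the paper's code.
    """
    def __init__(self, num_slots=7, num_registers=2, dim=64, iters=3):
        super().__init__()
        self.num_slots = num_slots
        self.iters = iters
        self.scale = dim ** -0.5
        total = num_slots + num_registers          # object + register slots
        self.slots_init = nn.Parameter(torch.randn(1, total, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, inputs):                     # inputs: (B, N, dim)
        B, _, D = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = self.slots_init.expand(B, -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over slots: object and register slots compete
            # for each input location.
            attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean
            updates = attn @ v                            # (B, total, dim)
            slots = self.gru(updates.reshape(-1, D),
                             slots.reshape(-1, D)).reshape(B, -1, D)
        return slots[:, :self.num_slots]           # drop register slots


def contrastive_alignment_loss(slots, image_feats, temperature=0.1):
    """InfoNCE-style slot-image alignment over a batch.

    Matched (slots, image) pairs are positives; other images in the
    batch serve as negatives. Mean pooling is an assumed design choice.
    """
    z_s = F.normalize(slots.mean(dim=1), dim=-1)        # (B, dim)
    z_x = F.normalize(image_feats.mean(dim=1), dim=-1)  # (B, dim)
    logits = z_s @ z_x.t() / temperature                # (B, B)
    labels = torch.arange(z_s.shape[0], device=z_s.device)
    return F.cross_entropy(logits, labels)
```

Under this reading, the contrastive term pushes each image's slots toward its own features and away from other images' features, which is the standard tractable lower bound on MI; the registers cost only a couple of extra rows in the attention matrix, consistent with the abstract's claim of negligible overhead.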
Similar Papers
Slot Attention with Re-Initialization and Self-Distillation
CV and Pattern Recognition
Helps computers see objects better, not broken parts.
CoDA: From Text-to-Image Diffusion Models to Training-Free Dataset Distillation
CV and Pattern Recognition
Makes AI learn from less data, faster.