Slot Attention with Re-Initialization and Self-Distillation
By: Rongzhen Zhao, Yi Zhao, Juho Kannala, and more
Potential Business Impact:
Teaches computers to see objects better.
Unlike popular solutions based on dense feature maps, Object-Centric Learning (OCL) represents visual scenes as sub-symbolic object-level feature vectors, termed slots, which are highly versatile for tasks involving visual modalities. OCL typically aggregates object superpixels into slots by iteratively applying competitive cross attention, known as Slot Attention, with the slots as the query. However, once initialized, these slots are reused naively, causing redundant slots to compete with informative ones for representing objects. This often results in objects being erroneously segmented into parts. Additionally, mainstream methods derive supervision signals solely from decoding slots into a reconstruction of the input, overlooking potential supervision based on internal information. To address these issues, we propose Slot Attention with re-Initialization and self-Distillation (DIAS): (i) we reduce redundancy in the aggregated slots and re-initialize extra aggregation to update the remaining slots; (ii) we drive the poor attention map at the first aggregation iteration to approximate the good one at the last iteration, enabling self-distillation. Experiments demonstrate that DIAS achieves state-of-the-art results on OCL tasks such as object discovery and recognition, while also improving advanced visual prediction and reasoning. Our code is available at https://github.com/Genera1Z/DIAS.
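To make the two ideas concrete, below is a minimal PyTorch sketch of a Slot Attention module augmented with them. This is an illustrative reconstruction based only on the abstract, not the authors' released code: the class name `SlotAttentionDIAS`, the cosine-similarity threshold used to flag redundant slots, and the KL form of the self-distillation loss are assumptions made for the example.

```python
# Sketch of the two DIAS ideas on top of vanilla Slot Attention (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotAttentionDIAS(nn.Module):
    def __init__(self, num_slots=7, dim=64, iters=3, sim_thresh=0.9):
        super().__init__()
        self.num_slots, self.iters, self.sim_thresh = num_slots, iters, sim_thresh
        self.scale = dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, num_slots, dim))  # learned init
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def step(self, slots, k, v):
        # One competitive cross-attention iteration: slots are the query and the
        # softmax runs over slots, so they compete for each input feature.
        q = self.to_q(self.norm_slots(slots))
        attn = torch.einsum('bnd,bmd->bnm', q, k) * self.scale
        attn = attn.softmax(dim=1) + 1e-8                       # compete over slots
        updates = torch.einsum('bnm,bmd->bnd',
                               attn / attn.sum(dim=-1, keepdim=True), v)
        slots = self.gru(updates.flatten(0, 1), slots.flatten(0, 1)).view_as(slots)
        return slots, attn

    def dedup_slots(self, slots):
        # (i) Re-initialization: slots that nearly duplicate an earlier slot
        # (high cosine similarity -- an assumed criterion) are replaced with
        # fresh samples from the learned initialization, so redundant slots
        # stop competing with informative ones.
        sim = F.cosine_similarity(slots.unsqueeze(2), slots.unsqueeze(1), dim=-1)
        redundant = (sim.triu(1) > self.sim_thresh).any(dim=1)  # [B, S] bool mask
        fresh = self.slots_mu.expand(slots.size(0), -1, -1)
        return torch.where(redundant.unsqueeze(-1), fresh, slots)

    def forward(self, feats):
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_mu.expand(feats.size(0), -1, -1)

        attn_first = None
        for it in range(self.iters):
            slots, attn = self.step(slots, k, v)
            if it == 0:
                attn_first = attn
        # (i) Remove redundancy, then run an extra aggregation round to update
        # the surviving / re-initialized slots.
        slots = self.dedup_slots(slots)
        slots, attn_last = self.step(slots, k, v)

        # (ii) Self-distillation: push the first-iteration attention map toward
        # the (detached) last-iteration one; the exact loss form is an assumption.
        distill_loss = F.kl_div(attn_first.log(), attn_last.detach(),
                                reduction='batchmean')
        return slots, attn_last, distill_loss


if __name__ == "__main__":
    x = torch.randn(2, 196, 64)                  # e.g. a 14x14 feature map, dim 64
    slots, attn, loss = SlotAttentionDIAS()(x)
    print(slots.shape, attn.shape, loss.item())  # (2, 7, 64), (2, 7, 196), scalar
```

In practice the distillation term would be added to the usual reconstruction loss with some weight; the threshold-based deduplication here is only one plausible way to realize the "reduce redundancy, then re-aggregate" step described in the abstract.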
Similar Papers
Smoothing Slot Attention Iterations and Recurrences
CV and Pattern Recognition
Makes AI better at spotting objects in videos.
Predicting Video Slot Attention Queries from Random Slot-Feature Pairs
CV and Pattern Recognition
Helps computers understand moving objects in videos.