MetaSlot: Break Through the Fixed Number of Slots in Object-Centric Learning
By: Hongjia Liu, Rongzhen Zhao, Haohan Chen, and more
Potential Business Impact:
Helps computers see and understand many objects.
Learning object-level, structured representations is widely regarded as a key to better generalization in vision and underpins the design of next-generation Pre-trained Vision Models (PVMs). Mainstream Object-Centric Learning (OCL) methods adopt Slot Attention or its variants to iteratively aggregate objects' super-pixels into a fixed set of query feature vectors, termed slots. However, because the slot count is fixed, a single object can end up represented as multiple slot fragments whenever the number of objects in a scene differs from that preset count. We introduce MetaSlot, a plug-and-play Slot Attention variant that adapts to variable object counts. MetaSlot (i) maintains a codebook that holds prototypes of objects in a dataset by vector-quantizing the resulting slot representations; (ii) removes duplicate slots from the traditionally aggregated slots by quantizing them against the codebook; and (iii) injects progressively weaker noise into the Slot Attention iterations to accelerate and stabilize the aggregation. MetaSlot is a general Slot Attention variant that can be seamlessly integrated into existing OCL architectures. Across multiple public datasets and tasks, including object discovery and recognition, models equipped with MetaSlot achieve significant performance gains and markedly more interpretable slot representations compared with existing Slot Attention variants.
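To make the three mechanisms concrete, here is a minimal PyTorch sketch of a MetaSlot-style aggregation step. It is not the authors' implementation: the class name `MetaSlotAttention`, all hyperparameters (slot count, codebook size, noise scale, iteration count), and the exact duplicate-removal and codebook-update logic are assumptions chosen only to illustrate the prototype codebook, nearest-prototype de-duplication, and decaying noise described in the abstract.

```python
# Hypothetical sketch of a MetaSlot-style module (assumed names/hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MetaSlotAttention(nn.Module):
    """Slot Attention variant with (i) a prototype codebook, (ii) duplicate-slot
    removal via vector quantization, and (iii) decaying noise across iterations."""

    def __init__(self, dim=64, num_slots=7, codebook_size=256, iters=3, noise_scale=0.1):
        super().__init__()
        self.num_slots, self.iters, self.noise_scale = num_slots, iters, noise_scale
        self.scale = dim ** -0.5
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.slots_init = nn.Parameter(torch.randn(num_slots, dim))
        # Codebook of dataset-level object prototypes; in the paper it is maintained
        # by vector-quantizing slot outputs during training (update rule not shown).
        self.codebook = nn.Embedding(codebook_size, dim)

    def quantize(self, slots):
        # Map each slot to its nearest codebook prototype.
        proto = self.codebook.weight.unsqueeze(0).expand(slots.size(0), -1, -1)
        idx = torch.cdist(slots, proto).argmin(dim=-1)        # (B, S)
        return self.codebook(idx), idx

    def forward(self, inputs):                                 # inputs: (B, N, D)
        B, N, D = inputs.shape
        inputs = self.norm_in(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = self.slots_init.unsqueeze(0).expand(B, -1, -1).contiguous()

        for t in range(self.iters):
            # (iii) Progressively weaker noise: strongest at the first iteration.
            noise = torch.randn_like(slots) * self.noise_scale * (1.0 - t / self.iters)
            q = self.to_q(self.norm_slots(slots + noise))
            attn = F.softmax(q @ k.transpose(-1, -2) * self.scale, dim=1)  # over slots
            attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)
            updates = attn @ v                                 # (B, S, D)
            slots = self.gru(updates.reshape(-1, D), slots.reshape(-1, D)).view(B, -1, D)

        # (i) + (ii): quantize slots against the codebook; slots that collapse onto
        # the same prototype are treated as duplicates and masked out.
        quantized, idx = self.quantize(slots)
        keep = torch.ones_like(idx, dtype=torch.bool)
        for b in range(B):
            seen = set()
            for s in range(self.num_slots):
                code = int(idx[b, s])
                keep[b, s] = code not in seen
                seen.add(code)
        return quantized, keep                                 # keep marks de-duplicated slots


# Usage sketch: 196 feature vectors (e.g. a 14x14 ViT grid) aggregated into slots.
feats = torch.randn(2, 196, 64)
slots, keep_mask = MetaSlotAttention()(feats)
```

The sketch only shows nearest-prototype lookup at the end of aggregation; how the codebook is trained, how gradients flow through the quantization, and where de-duplication happens within the iterations are details of the paper not reproduced here.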
Similar Papers
Slot Attention with Re-Initialization and Self-Distillation
CV and Pattern Recognition
Helps computers see objects better, not broken parts.
Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM
CV and Pattern Recognition
Helps AI understand and create detailed pictures.