TOMCAT: Test-time Comprehensive Knowledge Accumulation for Compositional Zero-Shot Learning
By: Xudong Yan, Songhe Feng
Potential Business Impact:
Teaches computers to recognize new object combinations.
Compositional Zero-Shot Learning (CZSL) aims to recognize novel attribute-object compositions based on the knowledge learned from seen ones. Existing methods suffer from performance degradation caused by the distribution shift in the label space at test time, which stems from the inclusion of unseen compositions recombined from attributes and objects. To overcome this challenge, we propose a novel approach that accumulates comprehensive knowledge in both textual and visual modalities from unsupervised data to update multimodal prototypes at test time. Building on this, we further design an adaptive update weight that controls the degree of prototype adjustment, enabling the model to flexibly adapt to distribution shift during testing. Moreover, we introduce a dynamic priority queue that stores high-confidence images, allowing the model to acquire visual knowledge from historical images during inference. Considering the semantic consistency of multimodal knowledge, we align textual and visual prototypes through multimodal collaborative representation learning. Extensive experiments show that our approach achieves state-of-the-art performance on four benchmark datasets under both closed-world and open-world settings. Code will be available at https://github.com/xud-yan/TOMCAT.
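To make the two test-time mechanisms in the abstract concrete, below is a minimal sketch (not the authors' released code) of a confidence-ordered priority queue of past test images and an adaptive, moving-average-style prototype update. All names here (ConfidenceQueue, adaptive_weight, update_prototype, QUEUE_SIZE, PROTO_DIM) are hypothetical illustrations of the idea, assuming features and prototypes are simple embedding vectors.

```python
# Sketch of test-time knowledge accumulation: a priority queue of
# high-confidence image features plus an adaptively weighted prototype update.
import heapq
import torch

QUEUE_SIZE = 64   # assumed capacity of the priority queue
PROTO_DIM = 512   # assumed prototype dimensionality


class ConfidenceQueue:
    """Keeps the top-k most confident test-time image features seen so far."""

    def __init__(self, capacity: int = QUEUE_SIZE):
        self.capacity = capacity
        self._heap = []       # min-heap of (confidence, counter, feature)
        self._counter = 0     # tie-breaker so tensors are never compared

    def push(self, confidence: float, feature: torch.Tensor) -> None:
        item = (confidence, self._counter, feature.detach())
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        else:
            # Replace the least-confident stored feature if the new one is better.
            heapq.heappushpop(self._heap, item)

    def features(self) -> torch.Tensor:
        if not self._heap:
            return torch.empty(0, PROTO_DIM)
        return torch.stack([f for _, _, f in self._heap])


def adaptive_weight(proto: torch.Tensor, batch_mean: torch.Tensor) -> float:
    """Hypothetical adaptive update weight: update more strongly when the
    incoming statistics disagree with the current prototype (larger shift)."""
    sim = torch.cosine_similarity(proto, batch_mean, dim=-1).clamp(min=0.0)
    return float(1.0 - sim)


def update_prototype(proto: torch.Tensor, queue: ConfidenceQueue) -> torch.Tensor:
    """Moving-average prototype refresh from the high-confidence visual memory."""
    feats = queue.features()
    if feats.numel() == 0:
        return proto
    batch_mean = feats.mean(dim=0)
    w = adaptive_weight(proto, batch_mean)
    return (1.0 - w) * proto + w * batch_mean
```

The priority queue keeps only the most reliable unlabeled test images as the visual memory, and the update weight shrinks when the accumulated statistics already match the prototype, which is one plausible way to realize the "flexible adaptation to distribution shift" described above.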
Similar Papers
Compositional Zero-Shot Learning: A Survey
CV and Pattern Recognition
Teaches computers to see new things by combining parts.
Prompt-Based Continual Compositional Zero-Shot Learning
CV and Pattern Recognition
Teaches AI to learn new things without forgetting old ones.
CAMS: Towards Compositional Zero-Shot Learning via Gated Cross-Attention and Multi-Space Disentanglement
CV and Pattern Recognition
Teaches computers to recognize new things from their parts.