Composition-Incremental Learning for Compositional Generalization
By: Zhen Li, Yuwei Wu, Chenchen Jing, and more
Potential Business Impact:
Teaches computers to keep learning new combinations of concepts over time.
Compositional generalization in computer vision has achieved substantial progress on pre-collected training data. In the real world, however, data emerges continually, and the space of possible compositions is nearly infinite, long-tailed, and never fully observable in advance. An ideal model should therefore improve its compositional generalization capability gradually, in an incremental manner. In this paper, we explore Composition-Incremental Learning for Compositional Generalization (CompIL) in the context of the compositional zero-shot learning (CZSL) task, where models must continually learn new compositions, aiming to progressively improve their compositional generalization capability. To quantitatively evaluate CompIL, we develop a benchmark construction pipeline that leverages existing datasets, yielding MIT-States-CompIL and C-GQA-CompIL. Furthermore, we propose a pseudo-replay framework that uses a visual synthesizer to synthesize visual representations of previously learned compositions, together with a linguistic primitive distillation mechanism to keep primitive representations aligned across the learning process. Extensive experiments demonstrate the effectiveness of the proposed framework.
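The pseudo-replay idea from the abstract can be made concrete with a short sketch. The module names, dimensions, and loss below are illustrative assumptions, not the authors' implementation: a conditional generator produces pseudo visual features for previously learned (attribute, object) compositions, and a distillation term ties the current primitive embeddings to a frozen snapshot saved before the incremental session.

import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, FEAT_DIM = 300, 512  # illustrative embedding and feature sizes

class VisualSynthesizer(nn.Module):
    # Maps the embeddings of a learned (attribute, object) pair to a
    # pseudo visual feature, so old compositions can be replayed
    # without storing their images.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * EMB_DIM, 1024),
            nn.ReLU(),
            nn.Linear(1024, FEAT_DIM),
        )

    def forward(self, attr_emb, obj_emb):
        return self.net(torch.cat([attr_emb, obj_emb], dim=-1))

def primitive_distillation(cur: nn.Embedding, frozen: nn.Embedding):
    # Keeps current primitive embeddings close to the frozen snapshot,
    # preserving the visual-linguistic alignment learned so far.
    return F.mse_loss(cur.weight, frozen.weight.detach())

n_attrs, n_objs = 115, 245  # e.g., MIT-States primitive counts
attr_emb = nn.Embedding(n_attrs, EMB_DIM)
obj_emb = nn.Embedding(n_objs, EMB_DIM)
frozen_attr = nn.Embedding(n_attrs, EMB_DIM)  # snapshot from last session
frozen_obj = nn.Embedding(n_objs, EMB_DIM)

synth = VisualSynthesizer()
replay_pairs = torch.tensor([[3, 17], [42, 8]])  # ids of learned compositions

pseudo_feats = synth(attr_emb(replay_pairs[:, 0]), obj_emb(replay_pairs[:, 1]))
# pseudo_feats would be mixed with real features of new compositions when
# updating the classifier; the distillation term regularizes the primitives.
loss_distill = primitive_distillation(attr_emb, frozen_attr) \
             + primitive_distillation(obj_emb, frozen_obj)

In this sketch, replaying synthesized features stands in for stored exemplars, which is what lets the learner revisit old compositions without keeping raw data; how the synthesizer and distillation weight are trained is left open, as the paper's specifics are not reproduced here.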
Similar Papers
Compositional Zero-Shot Learning: A Survey
CV and Pattern Recognition
Teaches computers to see new things by combining parts.
Scalable Evaluation and Neural Models for Compositional Generalization
Machine Learning (CS)
Teaches computers to understand new ideas.