Semantic Concentration for Self-Supervised Dense Representations Learning
By: Peisong Wen, Qianqian Xu, Siran Dai, and more
Potential Business Impact:
Teaches computers to understand tiny picture parts better.
Recent advances in image-level self-supervised learning (SSL) have made significant progress, yet learning dense representations for patches remains challenging. Mainstream methods encounter an over-dispersion phenomenon in which patches from the same instance/category scatter, harming downstream performance on dense tasks. This work reveals that image-level SSL avoids over-dispersion through implicit semantic concentration. Specifically, the non-strict spatial alignment ensures intra-instance consistency, while shared patterns, i.e., similar parts of within-class instances in the input space, ensure inter-image consistency. Unfortunately, these mechanisms are infeasible for dense SSL due to its spatial sensitivity and complicated scene-centric data. These observations motivate us to explore explicit semantic concentration for dense SSL. First, to break the strict spatial alignment, we propose to distill patch correspondences. Facing noisy and imbalanced pseudo labels, we propose a noise-tolerant ranking loss. The core idea is to extend the Average Precision (AP) loss to continuous targets, such that its decision-agnostic and adaptive focusing properties prevent the student model from being misled. Second, to discriminate shared patterns from complicated scenes, we propose an object-aware filter that maps the output space to an object-based space. Specifically, patches are represented by learnable prototypes of objects via cross-attention. Finally, empirical studies across various tasks support the effectiveness of our method. Code is available at https://github.com/KID-7391/CoTAP.
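The object-aware filter described above can be illustrated with a minimal cross-attention sketch. This is not the paper's actual CoTAP implementation; the shapes, prototype count, and scaled dot-product attention form are assumptions for illustration. Each patch embedding attends over a small set of learnable object prototypes, and its representation in the object-based space is the attention-weighted combination of those prototypes:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def object_aware_filter(patches, prototypes):
    """Map patch features to an object-based space via cross-attention.

    patches:    (N, d) dense patch embeddings      (hypothetical shapes)
    prototypes: (K, d) learnable object prototypes (hypothetical count K)

    Returns:
        attn: (N, K) soft assignment of each patch to the K prototypes
        rep:  (N, d) patch representations rebuilt from the prototypes
    """
    d = patches.shape[-1]
    # Scaled dot-product attention: patches are queries, prototypes
    # serve as both keys and values in this simplified sketch.
    attn = softmax(patches @ prototypes.T / np.sqrt(d), axis=-1)
    rep = attn @ prototypes
    return attn, rep

# Usage with random features standing in for a backbone's patch outputs.
rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 8))     # 16 patches, 8-dim embeddings
prototypes = rng.normal(size=(4, 8))   # 4 object prototypes
attn, rep = object_aware_filter(patches, prototypes)
print(attn.shape, rep.shape)  # (16, 4) (16, 8)
```

Because each row of `attn` sums to 1, every patch is expressed purely in terms of the shared object prototypes, which is the sense in which the output space becomes object-based.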
Similar Papers
Self-supervised structured object representation learning
CV and Pattern Recognition
Helps computers see objects in pictures better.
Beyond Instance Consistency: Investigating View Diversity in Self-supervised Learning
CV and Pattern Recognition
Teaches computers to learn from pictures better.
A theoretical framework for self-supervised contrastive learning for continuous dependent data
Machine Learning (CS)
Teaches computers to understand time-based patterns.