Beyond Instance Consistency: Investigating View Diversity in Self-supervised Learning
By: Huaiyuan Qin, Muli Yang, Siyuan Hu, and more
Potential Business Impact:
Teaches computers to learn from pictures better.
Self-supervised learning (SSL) conventionally relies on the instance consistency paradigm, which assumes that different views of the same image can be treated as positive pairs. However, this assumption breaks down for non-iconic data, where different views may contain distinct objects or semantic information. In this paper, we investigate the effectiveness of SSL when instance consistency is not guaranteed. Through extensive ablation studies, we demonstrate that SSL can still learn meaningful representations even when positive pairs lack strict instance consistency. Our analysis further reveals that increasing view diversity, by enforcing zero overlap between crops or by using smaller crop scales, can enhance downstream performance on classification and dense prediction tasks. However, excessive diversity reduces effectiveness, suggesting an optimal range for view diversity. To quantify this, we adopt the Earth Mover's Distance (EMD) as an estimator of the mutual information between views, finding that moderate EMD values correlate with improved SSL learning and providing insights for future SSL framework design. We validate our findings across a range of settings, highlighting their robustness and applicability to diverse data sources.
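The two operations in the abstract lend themselves to a short sketch. Below is a minimal Python illustration, assuming NumPy/SciPy; the paper does not publish this code, so the function names, the half-image split used to guarantee zero overlap, and the cosine-cost, uniform-weight EMD estimator are all illustrative assumptions. With equal-size, uniformly weighted feature sets, exact EMD reduces to an optimal one-to-one matching, computed here with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def non_overlapping_crops(img, crop=96, rng=None):
    """Sample two crops with zero spatial overlap (a view-diversity augmentation).

    Splits the image in half along a random axis and draws one crop from each
    half, which guarantees the crops never intersect (hypothetical scheme; the
    paper only states that zero overlap is enforced).
    `img` is an (H, W, C) array with min(H, W) >= 2 * crop.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = img.shape
    if rng.random() < 0.5:  # left half vs. right half
        x0 = rng.integers(0, w // 2 - crop + 1)
        x1 = rng.integers(w // 2, w - crop + 1)
        y0, y1 = rng.integers(0, h - crop + 1, size=2)
    else:                   # top half vs. bottom half
        y0 = rng.integers(0, h // 2 - crop + 1)
        y1 = rng.integers(h // 2, h - crop + 1)
        x0, x1 = rng.integers(0, w - crop + 1, size=2)
    return img[y0:y0 + crop, x0:x0 + crop], img[y1:y1 + crop, x1:x1 + crop]

def emd(feats_a, feats_b):
    """EMD between two equal-size sets of L2-normalized features.

    With uniform weights, exact EMD is the minimum-cost one-to-one matching
    under a pairwise cost; cosine distance is an assumed cost choice here.
    `feats_a`, `feats_b`: (n, d) arrays with L2-normalized rows.
    """
    cost = 1.0 - feats_a @ feats_b.T          # pairwise cosine distances
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one transport
    return float(cost[rows, cols].mean())

# Tiny usage example with random stand-ins for patch embeddings.
rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
view1, view2 = non_overlapping_crops(img, crop=96, rng=rng)
f1 = rng.standard_normal((16, 128)); f1 /= np.linalg.norm(f1, axis=1, keepdims=True)
f2 = rng.standard_normal((16, 128)); f2 /= np.linalg.norm(f2, axis=1, keepdims=True)
print(emd(f1, f2))
```

In line with the abstract's finding, such an estimator would be used to sweep crop scale or overlap toward the moderate-EMD regime, rather than maximizing view diversity outright.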
Similar Papers
Maximally Useful and Minimally Redundant: The Key to Self Supervised Learning for Imbalanced Data
CV and Pattern Recognition
Helps computers learn from uneven data better.
Self-supervised structured object representation learning
CV and Pattern Recognition
Helps computers see objects in pictures better.
Consistent View Alignment Improves Foundation Models for 3D Medical Image Segmentation
CV and Pattern Recognition
Teaches computers to learn better from different pictures.