Self-supervised structured object representation learning
By: Oussama Hadjerci, Antoine Letienne, Mohamed Abbas Hedjazi, and more
Potential Business Impact:
Helps computers see objects in pictures better.
Self-supervised learning (SSL) has emerged as a powerful technique for learning visual representations. While recent SSL approaches achieve strong results in global image understanding, they are limited in capturing structured representations of scenes. In this work, we propose a self-supervised approach that progressively builds structured visual representations by combining semantic grouping, instance-level separation, and hierarchical structuring. Our approach, based on a novel ProtoScale module, captures visual elements across multiple spatial scales. Unlike common strategies such as DINO that rely on random cropping and global embeddings, we preserve full scene context across augmented views to improve performance on dense prediction tasks. We validate our method on downstream object detection tasks using a combined subset of multiple datasets (COCO and UA-DETRAC). Experimental results show that our method learns object-centric representations that enhance supervised object detection and outperform state-of-the-art methods, even when trained with limited annotated data and fewer fine-tuning epochs.
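The abstract does not spell out how the ProtoScale module works internally, but the idea of grouping dense features into prototypes across multiple spatial scales can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it soft-assigns each spatial feature to a small set of learnable prototypes via cosine similarity, repeated over average-pooled versions of the feature map. All names (`proto_assign`, `avg_pool`), the number of prototypes, and the pooling factors are assumptions made for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def proto_assign(features, prototypes, temperature=0.1):
    """Soft-assign each spatial feature to prototypes via cosine
    similarity (hypothetical ProtoScale-style semantic grouping)."""
    f = l2_normalize(features)        # (H, W, D) dense features
    p = l2_normalize(prototypes)      # (K, D) learnable prototypes
    logits = f @ p.T / temperature    # (H, W, K) similarities
    return softmax(logits, axis=-1)   # per-location assignment distribution

def avg_pool(features, factor):
    """Average-pool an (H, W, D) feature map by an integer factor,
    giving a coarser spatial scale."""
    H, W, D = features.shape
    h, w = H // factor, W // factor
    return features[:h * factor, :w * factor] \
        .reshape(h, factor, w, factor, D).mean(axis=(1, 3))

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 16, 32))   # stand-in for backbone features
protos = rng.normal(size=(8, 32))       # 8 hypothetical prototypes

# Group features into prototypes at several spatial scales.
assignments = {s: proto_assign(avg_pool(feats, s), protos) for s in (1, 2, 4)}
for s, a in assignments.items():
    print(s, a.shape)
```

In a trained model, the prototypes would be learned jointly with the backbone, and the multi-scale assignments would feed the grouping and hierarchical-structuring objectives the abstract describes; here they merely show the shape of the computation.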
Similar Papers
Scale-Aware Self-Supervised Learning for Segmentation of Small and Sparse Structures
CV and Pattern Recognition
Helps computers see tiny things in pictures.
Seeing the Whole in the Parts in Self-Supervised Representation Learning
Machine Learning (CS)
Teaches computers to see better with less data.
Semantic Concentration for Self-Supervised Dense Representations Learning
CV and Pattern Recognition
Teaches computers to understand tiny picture parts better.