A dataset-free approach for self-supervised learning of 3D reflectional symmetries
By: Isaac Aguirre, Ivan Sipiran, Gabriel Montañana
Potential Business Impact:
Teaches computers to see object symmetry without examples.
In this paper, we explore a self-supervised model that learns to detect the symmetry of a single object without requiring a dataset, relying solely on the input object itself. We hypothesize that the symmetry of an object can be determined from its intrinsic features, eliminating the need for large datasets during training. Additionally, we design a self-supervised learning strategy that removes the need for ground-truth labels. These two key elements make our approach both effective and efficient, addressing the prohibitive cost of constructing large, labeled datasets for this task. The novelty of our method lies in computing features for each point on the object, based on the idea that symmetric points should exhibit similar visual appearances. To achieve this, we leverage features extracted from a foundational image model to compute a visual descriptor for each point. This equips the point cloud with visual features that drive the optimization of our self-supervised model. Experimental results demonstrate that our method surpasses state-of-the-art models trained on large datasets. Furthermore, our model is more efficient and effective, operating with minimal computational and data resources.
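To make the core idea concrete, the sketch below shows one way such a pipeline could be set up: each point carries a visual descriptor (here a random placeholder standing in for features projected from a 2D foundation image model), and a reflection plane is optimized so that every reflected point lands near a point with a similar descriptor. The function names, the soft nearest-neighbour matching, and the loss terms are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): fit a reflection plane
# to a single point cloud by encouraging reflected points to land near points
# with similar visual descriptors.
import torch

def reflect(points, normal, offset):
    """Reflect points across the plane {x : n.x + offset = 0} (n normalized inside)."""
    n = normal / normal.norm()
    dist = points @ n + offset                  # signed distance to the plane
    return points - 2.0 * dist.unsqueeze(1) * n

def symmetry_loss(points, feats, normal, offset, temperature=0.1):
    """Soft-match each reflected point to the cloud; symmetric points should
    coincide geometrically and have similar descriptors."""
    reflected = reflect(points, normal, offset)
    d2 = torch.cdist(reflected, points) ** 2    # (N, N) squared distances
    w = torch.softmax(-d2 / temperature, dim=1) # soft nearest-neighbour weights
    matched_feats = w @ feats                   # expected descriptor of the match
    matched_pts = w @ points                    # expected position of the match
    return ((feats - matched_feats) ** 2).sum(dim=1).mean() + \
           ((reflected - matched_pts) ** 2).sum(dim=1).mean()

# Toy data: N points with D-dimensional placeholder descriptors. In the paper's
# setting these descriptors would come from a foundational image model.
N, D = 1024, 64
points = torch.randn(N, 3)
feats = torch.randn(N, D)

normal = torch.nn.Parameter(torch.tensor([1.0, 0.0, 0.0]))
offset = torch.nn.Parameter(torch.tensor(0.0))
opt = torch.optim.Adam([normal, offset], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = symmetry_loss(points, feats, normal, offset)
    loss.backward()
    opt.step()

print("estimated plane normal:", (normal / normal.norm()).detach().numpy())
```

In this sketch, the soft matching keeps the objective differentiable, so the plane parameters can be fit by gradient descent on a single object, consistent with the dataset-free, label-free setting described in the abstract.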
Similar Papers
Leveraging 3D Geometric Priors in 2D Rotation Symmetry Detection
CV and Pattern Recognition
Finds repeating shapes in pictures, even from different angles.
Symmetria: A Synthetic Dataset for Learning in Point Clouds
CV and Pattern Recognition
Teaches computers to understand 3D shapes better.
Sonata: Self-Supervised Learning of Reliable Point Representations
CV and Pattern Recognition
Teaches computers to understand 3D shapes better.