Unlocking 3D Affordance Segmentation with 2D Semantic Knowledge
By: Yu Huang, Zelin Peng, Changsong Wen, and more
Potential Business Impact:
Helps robots understand object parts for better use.
Affordance segmentation aims to parse 3D objects into functionally distinct parts, bridging recognition and interaction for applications in robotic manipulation, embodied AI, and AR. While recent studies leverage visual or textual prompts to guide this process, they often rely on point cloud encoders as generic feature extractors, overlooking the intrinsic challenges of 3D data such as sparsity, noise, and geometric ambiguity. As a result, 3D features learned in isolation frequently lack clear and semantically consistent functional boundaries. To address this bottleneck, we propose a semantic-grounded learning paradigm that transfers rich semantic knowledge from large-scale 2D Vision Foundation Models (VFMs) into the 3D domain. Specifically, we introduce Cross-Modal Affinity Transfer (CMAT), a pre-training strategy that aligns a 3D encoder with lifted 2D semantics and jointly optimizes reconstruction, affinity, and diversity objectives to yield semantically organized representations. Building on this backbone, we further design the Cross-modal Affordance Segmentation Transformer (CAST), which integrates multi-modal prompts with CMAT-pretrained features to generate precise, prompt-aware segmentation maps. Extensive experiments on standard benchmarks demonstrate that our framework establishes new state-of-the-art results for 3D affordance segmentation.
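To make the CMAT idea more concrete, below is a minimal PyTorch sketch of a pre-training objective that combines reconstruction, cross-modal affinity transfer, and a diversity term. The abstract does not give the exact formulation, so the function name, loss weights, and the precise form of each term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CMAT-style pre-training loss (not the paper's code).
import torch
import torch.nn.functional as F

def cmat_losses(feat3d, feat2d_lifted, recon_pts, gt_pts,
                w_rec=1.0, w_aff=1.0, w_div=0.1):
    """feat3d:        (N, D) per-point features from the 3D encoder
       feat2d_lifted: (N, D) 2D VFM features lifted onto the same points
       recon_pts:     (N, 3) reconstructed coordinates from a decoder head
       gt_pts:        (N, 3) ground-truth point coordinates"""
    # Reconstruction term: preserve geometric detail (MSE stand-in here;
    # a Chamfer distance is another common choice).
    loss_rec = F.mse_loss(recon_pts, gt_pts)

    # Affinity transfer: match pairwise similarities of 3D features to
    # those of the lifted 2D semantic features.
    f3 = F.normalize(feat3d, dim=-1)
    f2 = F.normalize(feat2d_lifted, dim=-1)
    loss_aff = F.mse_loss(f3 @ f3.T, f2 @ f2.T)

    # Diversity term: discourage feature collapse by penalizing
    # off-diagonal covariance between feature dimensions.
    z = feat3d - feat3d.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (feat3d.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    loss_div = (off_diag ** 2).mean()

    return w_rec * loss_rec + w_aff * loss_aff + w_div * loss_div
```

In this sketch the three weighted terms mirror the reconstruction, affinity, and diversity objectives named in the abstract; the resulting encoder features would then feed the prompt-aware CAST segmentation head.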
Similar Papers
Object Affordance Recognition and Grounding via Multi-scale Cross-modal Representation Learning
CV and Pattern Recognition
Teaches robots to grasp and use objects.
Semantic Causality-Aware Vision-Based 3D Occupancy Prediction
CV and Pattern Recognition
Helps robots understand 3D spaces from pictures.
DAG: Unleash the Potential of Diffusion Model for Open-Vocabulary 3D Affordance Grounding
CV and Pattern Recognition
Helps robots know where to touch objects.