Object Affordance Recognition and Grounding via Multi-scale Cross-modal Representation Learning
By: Xinhang Wan, Dongqiang Gou, Xinwang Liu and more
Potential Business Impact:
Teaches robots to grasp and use objects.
A core problem in Embodied AI is learning object manipulation from observation, as humans do. To achieve this, it is important both to localize 3D object affordance areas from observations such as images (3D affordance grounding) and to understand their functionalities (affordance classification). Previous attempts usually tackle these two tasks separately, leading to inconsistent predictions because the dependency between them is not properly modeled. In addition, these methods typically ground only the incomplete affordance areas depicted in images, failing to predict the full potential affordance areas, and they operate at a fixed scale, making it difficult to cope with affordances whose scale varies significantly relative to the whole object. To address these issues, we propose a novel approach that learns an affordance-aware 3D representation and employs a stage-wise inference strategy leveraging the dependency between the grounding and classification tasks. Specifically, we first develop a cross-modal 3D representation through efficient fusion and multi-scale geometric feature propagation, enabling inference of full potential affordance areas at a suitable regional scale. Moreover, we adopt a simple two-stage prediction mechanism that effectively couples grounding and classification for better affordance understanding. Experiments demonstrate the effectiveness of our method, showing improved performance in both affordance grounding and classification.
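The abstract stays high-level, so as a rough illustration of the two-stage idea it describes (ground an affordance region first, then classify using that region), here is a minimal PyTorch sketch. All module names, feature dimensions, and the fusion and pooling details below are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage affordance pipeline (not the paper's code).
# Stage 1 grounds per-point affordance scores on the point cloud, conditioned on
# image features; stage 2 classifies the affordance from the grounded region.
import torch
import torch.nn as nn


class TwoStageAffordance(nn.Module):
    def __init__(self, img_dim=512, pts_dim=128, num_classes=18):
        super().__init__()
        # Placeholder point encoder; the paper instead propagates
        # multi-scale geometric features.
        self.point_encoder = nn.Sequential(
            nn.Linear(3, pts_dim), nn.ReLU(), nn.Linear(pts_dim, pts_dim)
        )
        self.fuse = nn.Linear(pts_dim + img_dim, pts_dim)
        self.ground_head = nn.Linear(pts_dim, 1)          # per-point affordance score
        self.cls_head = nn.Linear(pts_dim, num_classes)   # affordance category

    def forward(self, points, img_feat):
        # points: (B, N, 3); img_feat: (B, img_dim) from some image backbone.
        f = self.point_encoder(points)                            # (B, N, pts_dim)
        img = img_feat.unsqueeze(1).expand(-1, f.size(1), -1)     # broadcast to points
        f = torch.relu(self.fuse(torch.cat([f, img], dim=-1)))    # cross-modal fusion
        # Stage 1: ground the affordance region as a per-point heatmap in [0, 1].
        heat = torch.sigmoid(self.ground_head(f))                 # (B, N, 1)
        # Stage 2: classify from features pooled over the grounded region,
        # so the classification prediction depends on the grounding result.
        region = (f * heat).sum(dim=1) / heat.sum(dim=1).clamp(min=1e-6)
        logits = self.cls_head(region)                            # (B, num_classes)
        return heat.squeeze(-1), logits


if __name__ == "__main__":
    model = TwoStageAffordance()
    pts = torch.randn(2, 2048, 3)       # batch of 2 point clouds, 2048 points each
    img = torch.randn(2, 512)           # pooled image features
    heatmap, logits = model(pts, img)
    print(heatmap.shape, logits.shape)  # torch.Size([2, 2048]) torch.Size([2, 18])
```

The heatmap-weighted pooling in stage 2 is just one simple way to couple the classifier to the grounded region; the paper's actual coupling mechanism and multi-scale propagation may differ.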
Similar Papers
Unlocking 3D Affordance Segmentation with 2D Semantic Knowledge
CV and Pattern Recognition
Helps robots understand object parts for better use.
O$^3$Afford: One-Shot 3D Object-to-Object Affordance Grounding for Generalizable Robotic Manipulation
Robotics
Robots learn to use objects together better.
DAG: Unleash the Potential of Diffusion Model for Open-Vocabulary 3D Affordance Grounding
CV and Pattern Recognition
Helps robots know where to touch objects.