PRISM: Pointcloud Reintegrated Inference via Segmentation and Cross-attention for Manipulation
By: Daqi Huang, Zhehao Cai, Yuzhi Hao, and more
Potential Business Impact:
Teaches robots to grab things in messy rooms.
Robust imitation learning for robot manipulation requires comprehensive 3D perception, yet many existing methods struggle in cluttered environments. Fixed-camera-view approaches are vulnerable to perspective changes, and 3D point cloud techniques often limit themselves to keyframe predictions, reducing their efficacy in dynamic, contact-intensive tasks. To address these challenges, we propose PRISM, an end-to-end framework that learns directly from raw point cloud observations and robot states, eliminating the need for pretrained models or external datasets. PRISM comprises three main components: a segmentation embedding unit that partitions the raw point cloud into distinct object clusters and encodes local geometric details; a cross-attention component that merges these visual features with processed robot joint states to highlight relevant targets; and a diffusion module that translates the fused representation into smooth robot actions. Trained on 100 demonstrations per task, PRISM surpasses both 2D and 3D baseline policies in accuracy and efficiency in our simulated environments, demonstrating strong robustness in complex, object-dense scenarios. Code and demos are available at https://github.com/czknuaa/PRISM.
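The three-stage pipeline described in the abstract (per-cluster point cloud encoding, state-conditioned cross-attention, and a diffusion action head) can be illustrated with a minimal sketch. The code below is an assumed reconstruction, not the authors' implementation: the class names (`SegmentationEmbedding`, `StateCrossAttention`, `DiffusionHead`), the toy equal-chunk clustering, and all dimensions are hypothetical placeholders chosen for readability.

```python
# Minimal PRISM-style sketch (assumed structure; not taken from the authors'
# repository at https://github.com/czknuaa/PRISM).
import torch
import torch.nn as nn


class SegmentationEmbedding(nn.Module):
    """Partition a raw point cloud into K clusters and encode each one."""

    def __init__(self, num_clusters: int = 8, feat_dim: int = 128):
        super().__init__()
        self.num_clusters = num_clusters
        # Per-point MLP followed by max-pooling inside each cluster (PointNet-like).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim)
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) raw xyz coordinates.
        # Toy clustering: split points into equal contiguous chunks.
        # A real system would use an actual segmentation of object instances.
        chunks = points.chunk(self.num_clusters, dim=1)
        feats = []
        for chunk in chunks:
            f = self.point_mlp(chunk)          # (B, N/K, D) per-point features
            feats.append(f.max(dim=1).values)  # (B, D) per-cluster descriptor
        return torch.stack(feats, dim=1)       # (B, K, D)


class StateCrossAttention(nn.Module):
    """Fuse cluster features with the robot joint state via cross-attention."""

    def __init__(self, feat_dim: int = 128, state_dim: int = 7, num_heads: int = 4):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, cluster_feats: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # cluster_feats: (B, K, D); state: (B, state_dim)
        q = self.state_proj(state).unsqueeze(1)        # (B, 1, D) query from robot state
        fused, _ = self.attn(q, cluster_feats, cluster_feats)
        return fused.squeeze(1)                        # (B, D) fused representation


class DiffusionHead(nn.Module):
    """Predict the denoising residual for a noisy action, conditioned on fused features."""

    def __init__(self, feat_dim: int = 128, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + action_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, cond, noisy_action, t):
        # t: (B, 1) diffusion timestep, normalized to [0, 1].
        return self.net(torch.cat([cond, noisy_action, t], dim=-1))


if __name__ == "__main__":
    B = 2
    points = torch.randn(B, 1024, 3)   # raw point cloud observation
    state = torch.randn(B, 7)          # robot joint state
    noisy_action = torch.randn(B, 7)   # noisy action sample for the diffusion step
    t = torch.rand(B, 1)

    cond = StateCrossAttention()(SegmentationEmbedding()(points), state)
    eps_hat = DiffusionHead()(cond, noisy_action, t)
    print(eps_hat.shape)  # torch.Size([2, 7])
```

In this sketch the robot state acts as the attention query over the object clusters, which mirrors the paper's idea of letting the policy highlight task-relevant objects before the diffusion module maps the fused feature to actions; the actual network sizes, clustering method, and diffusion schedule in PRISM may differ.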
Similar Papers
PRISM: Projection-based Reward Integration for Scene-Aware Real-to-Sim-to-Real Transfer with Few Demonstrations
Robotics
Teaches robots to do tasks from few examples.
PRISM-DP: Spatial Pose-based Observations for Diffusion-Policies via Segmentation, Mesh Generation, and Pose Tracking
Robotics
Robots learn to move better with less data.
PRISM: A Unified Framework for Photorealistic Reconstruction and Intrinsic Scene Modeling
Graphics
Makes one AI draw pictures and change them.