PRISM: Pointcloud Reintegrated Inference via Segmentation and Cross-attention for Manipulation

Published: July 7, 2025 | arXiv ID: 2507.04633v1

By: Daqi Huang, Zhehao Cai, Yuzhi Hao, and more

Potential Business Impact:

Teaches robots to grab things in messy rooms.

Business Areas:
Image Recognition, Data and Analytics, Software

Robust imitation learning for robot manipulation requires comprehensive 3D perception, yet many existing methods struggle in cluttered environments. Fixed-camera-view approaches are vulnerable to perspective changes, and 3D point cloud techniques often restrict themselves to keyframe prediction, reducing their efficacy in dynamic, contact-intensive tasks. To address these challenges, we propose PRISM, an end-to-end framework that learns directly from raw point cloud observations and robot states, eliminating the need for pretrained models or external datasets. PRISM comprises three main components: a segmentation embedding unit that partitions the raw point cloud into distinct object clusters and encodes local geometric details; a cross-attention component that merges these visual features with processed robot joint states to highlight relevant targets; and a diffusion module that translates the fused representation into smooth robot actions. Trained on 100 demonstrations per task, PRISM surpasses both 2D and 3D baseline policies in accuracy and efficiency in our simulated environments, demonstrating strong robustness in complex, object-dense scenarios. Code and demos are available at https://github.com/czknuaa/PRISM.
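The abstract outlines a three-stage architecture: per-object encoding of segmented point clusters, cross-attention that lets the robot's joint state attend over object features, and a diffusion head that denoises actions conditioned on the fused representation. Below is a minimal PyTorch sketch of how such a pipeline could fit together; all module names, layer sizes, and the single-query attention design are illustrative assumptions rather than the authors' implementation (see the linked repository for the official code).

```python
# Illustrative sketch of a PRISM-style pipeline. Shapes, dimensions, and
# module designs are assumptions for exposition, not the paper's code.
import torch
import torch.nn as nn

class ClusterEncoder(nn.Module):
    """Encodes each segmented object cluster into one local-geometry token."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, clusters: list[torch.Tensor]) -> torch.Tensor:
        # clusters: list of (N_i, 3) point sets, one per segmented object.
        # Max-pool each cluster's point features -> (K, feat_dim) tokens.
        return torch.stack([self.mlp(c).max(dim=0).values for c in clusters])

class StateVisualFusion(nn.Module):
    """Cross-attention: the robot joint state queries the object tokens."""
    def __init__(self, feat_dim: int = 128, state_dim: int = 7):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, obj_tokens: torch.Tensor, joint_state: torch.Tensor) -> torch.Tensor:
        q = self.state_proj(joint_state).reshape(1, 1, -1)  # (1, 1, feat_dim)
        kv = obj_tokens.unsqueeze(0)                        # (1, K, feat_dim)
        fused, _ = self.attn(q, kv, kv)                     # state attends to objects
        return fused.squeeze(0).squeeze(0)                  # (feat_dim,)

class DiffusionActionHead(nn.Module):
    """One denoising step of a DDPM-style action head on the fused feature."""
    def __init__(self, feat_dim: int = 128, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim + feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, noisy_action, cond, t):
        # Predicts the noise added to the action at diffusion step t.
        return self.net(torch.cat([noisy_action, cond, t], dim=-1))

if __name__ == "__main__":
    clusters = [torch.randn(200, 3), torch.randn(150, 3)]  # two segmented objects
    tokens = ClusterEncoder()(clusters)                    # (2, 128)
    cond = StateVisualFusion()(tokens, torch.randn(7))     # (128,)
    noise = DiffusionActionHead()(torch.randn(7), cond, torch.tensor([0.5]))
    print(noise.shape)  # torch.Size([7])
```

At inference time, the predicted noise would be applied iteratively over the diffusion schedule to recover a smooth action, which is what lets this style of policy produce continuous trajectories rather than isolated keyframes.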

Repos / Data Links
https://github.com/czknuaa/PRISM

Page Count
8 pages

Category
Computer Science:
Robotics