PRISM: Projection-based Reward Integration for Scene-Aware Real-to-Sim-to-Real Transfer with Few Demonstrations
By: Haowen Sun, Han Wang, Chengzhong Ma, and more
Potential Business Impact:
Teaches robots to do tasks from a few examples.
Learning from a few demonstrations to develop policies that are robust to variations in robot initial positions and object poses is a problem of significant practical interest in robotics. Compared to imitation learning, which often struggles to generalize from limited samples, reinforcement learning (RL) can explore autonomously to obtain robust behaviors. However, training RL agents through direct interaction with the real world is often impractical and unsafe, while building simulation environments requires extensive manual effort, such as designing scenes and crafting task-specific reward functions. To address these challenges, we propose an integrated real-to-sim-to-real pipeline that constructs simulation environments from expert demonstrations by identifying scene objects in images and retrieving their corresponding 3D models from existing libraries. We introduce a projection-based reward model for RL policy training that is supervised by a vision-language model (VLM) using human-guided object projection relationships as prompts, with the policy further fine-tuned on expert demonstrations. Overall, our work focuses on constructing simulation environments and training RL policies, ultimately enabling the deployment of reliable robotic control policies in real-world scenarios.
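To make the projection-based reward idea concrete, here is a minimal sketch of how such a reward could be computed: objects tracked in simulation are projected into a camera image, and the reward is the fraction of target 2D projection relationships currently satisfied. This is an illustrative reconstruction, not the paper's implementation; all names (`project_to_image`, `RELATIONS`, `projection_reward`) are hypothetical, and the step where a VLM turns human-guided prompts into relationship triples is abstracted away as a precomputed `goals` list.

```python
# Sketch of a projection-based reward: score how well the scene's 2D object
# projections match target relationships (e.g., "gripper above cube").
# Assumed/hypothetical names throughout; not the authors' actual code.
import numpy as np

def project_to_image(point_3d, K, extrinsic):
    """Project a 3D world point into pixel coordinates (pinhole camera model)."""
    p_cam = extrinsic[:3, :3] @ point_3d + extrinsic[:3, 3]
    uv = K @ p_cam
    return uv[:2] / uv[2]

# Hypothetical relation checks on projected pixel coordinates.
RELATIONS = {
    "above":   lambda a, b: a[1] < b[1],  # smaller v = higher in the image
    "left_of": lambda a, b: a[0] < b[0],
    "near":    lambda a, b: np.linalg.norm(a - b) < 40.0,  # pixel threshold
}

def projection_reward(obj_positions, goals, K, extrinsic):
    """Fraction of target projection relationships currently satisfied.

    obj_positions: dict name -> 3D position reported by the simulator.
    goals: list of (obj_a, relation, obj_b) triples, e.g., produced once by
           a VLM prompted with the human-specified projection relationships.
    """
    pix = {n: project_to_image(p, K, extrinsic) for n, p in obj_positions.items()}
    satisfied = sum(RELATIONS[rel](pix[a], pix[b]) for a, rel, b in goals)
    return satisfied / len(goals)

if __name__ == "__main__":
    K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
    extrinsic = np.eye(4)  # camera at the world origin, looking along +z
    objs = {"gripper": np.array([0.0, -0.05, 1.0]),
            "cube":    np.array([0.0,  0.05, 1.0])}
    goals = [("gripper", "above", "cube")]
    print(projection_reward(objs, goals, K, extrinsic))  # -> 1.0
```

A dense per-step reward of this form can then drive standard RL training in the reconstructed simulation scene, with the resulting policy fine-tuned on the expert demonstrations as the abstract describes.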
Similar Papers
PRISM: Pointcloud Reintegrated Inference via Segmentation and Cross-attention for Manipulation
Robotics
Teaches robots to grab things in messy rooms.
PRISM: Reducing Spurious Implicit Biases in Vision-Language Models with LLM-Guided Embedding Projection
CV and Pattern Recognition
Makes AI see people fairly, not based on looks.
Post-Convergence Sim-to-Real Policy Transfer: A Principled Alternative to Cherry-Picking
Robotics
Makes robot walkers move better in real life.