Visual Prompting for Robotic Manipulation with Annotation-Guided Pick-and-Place Using ACT
By: Muhammad A. Muttaqien, Tomohiro Motoda, Ryo Hanai and more
Potential Business Impact:
Robots learn to pick and place items in stores.
Robotic pick-and-place tasks in convenience stores pose challenges due to dense object arrangements, occlusions, and variations in object properties such as color, shape, size, and texture. These factors complicate trajectory planning and grasping. This paper introduces a perception-action pipeline leveraging annotation-guided visual prompting, where bounding box annotations identify both pickable objects and placement locations, providing structured spatial guidance. Instead of traditional step-by-step planning, we employ Action Chunking with Transformers (ACT) as an imitation learning algorithm, enabling the robotic arm to predict chunked action sequences from human demonstrations. This facilitates smooth, adaptive, and data-driven pick-and-place operations. We evaluate our system based on success rate and visual analysis of grasping behavior, demonstrating improved grasp accuracy and adaptability in retail environments.
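To make the pipeline concrete, below is a minimal sketch of how an ACT-style policy could consume an RGB observation together with bounding-box visual prompts (one box for the pickable object, one for the placement location) and predict a chunk of future actions in a single forward pass. This is not the authors' implementation; the module names, dimensions, box encoding, and the lightweight CNN backbone are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): ACT-style chunked action prediction
# conditioned on an image plus pick/place bounding-box annotations.
import torch
import torch.nn as nn


class AnnotationGuidedACT(nn.Module):
    def __init__(self, action_dim=7, chunk_size=20, d_model=256):
        super().__init__()
        # Small CNN image encoder (stand-in for the ResNet backbone often used with ACT).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Encode the two annotation boxes (pick + place), each as normalized
        # (x1, y1, x2, y2), into the same embedding space as the image token.
        self.box_encoder = nn.Linear(8, d_model)
        # One learned query per action step in the predicted chunk.
        self.action_queries = nn.Parameter(torch.randn(chunk_size, d_model))
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=4)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, image, pick_box, place_box):
        # image: (B, 3, H, W); pick_box, place_box: (B, 4) normalized coordinates.
        img_tok = self.image_encoder(image).unsqueeze(1)                      # (B, 1, D)
        box_tok = self.box_encoder(
            torch.cat([pick_box, place_box], dim=-1)).unsqueeze(1)            # (B, 1, D)
        memory = torch.cat([img_tok, box_tok], dim=1)                         # (B, 2, D)
        queries = self.action_queries.unsqueeze(0).expand(image.size(0), -1, -1)
        decoded = self.decoder(queries, memory)                               # (B, T, D)
        return self.action_head(decoded)                                      # (B, T, action_dim)


if __name__ == "__main__":
    policy = AnnotationGuidedACT()
    img = torch.rand(1, 3, 224, 224)
    pick = torch.tensor([[0.2, 0.3, 0.4, 0.5]])   # annotated pickable object
    place = torch.tensor([[0.6, 0.6, 0.8, 0.8]])  # annotated placement location
    actions = policy(img, pick, place)            # chunk of future arm commands
    print(actions.shape)                          # torch.Size([1, 20, 7])
```

In this sketch the bounding-box annotations act as the structured spatial guidance described in the abstract, while the transformer decoder emits the whole action chunk at once rather than planning step by step; training would fit the predicted chunks to human demonstration trajectories, as in standard ACT imitation learning.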
Similar Papers
Action Chunking with Transformers for Image-Based Spacecraft Guidance and Control
Robotics
Teaches robots to dock spaceships with few lessons.
AFFORD2ACT: Affordance-Guided Automatic Keypoint Selection for Generalizable and Lightweight Robotic Manipulation
Robotics
Robots learn tasks from just words and pictures.
Improving Generalization of Language-Conditioned Robot Manipulation
Robotics
Robots learn to move objects with few examples.