Vision-Guided Loco-Manipulation with a Snake Robot
By: Adarsh Salagame, Sasank Potluri, Keshav Bharadwaj Vaidyanathan, and more
Potential Business Impact:
Snake robot grabs and moves things by itself.
This paper presents the development and integration of a vision-guided loco-manipulation pipeline for Northeastern University's snake robot, COBRA. The system leverages a YOLOv8-based object detection model and depth data from an onboard stereo camera to estimate the 6-DOF pose of target objects in real time. We introduce a framework for autonomous detection and control, enabling closed-loop loco-manipulation for transporting objects to specified goal locations. Additionally, we demonstrate open-loop experiments in which COBRA successfully performs real-time object detection and loco-manipulation tasks.
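The abstract does not give implementation details, but the core geometric step in such a pipeline is back-projecting a detected pixel with its stereo depth into a 3D camera-frame position. Below is a minimal sketch of that step; the intrinsics (`fx`, `fy`, `cx`, `cy`), the YOLOv8 bounding-box center, and the depth value are all hypothetical placeholders, and this recovers only position, not the full 6-DOF pose (orientation requires additional cues such as an object model).

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into a 3D point
    in the camera frame, using a pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical camera intrinsics (focal lengths and principal point, pixels)
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
# Assumed bounding-box center from a YOLOv8 detection, and the stereo
# depth (metres) sampled at that pixel
u, v = 400.0, 300.0
z = 1.5
p_cam = backproject(u, v, z, fx, fy, cx, cy)  # 3D point in camera frame
```

In a closed-loop system like the one described, this camera-frame point would then be transformed into the robot's body or world frame to serve as the loco-manipulation target.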
Similar Papers
Enabling Autonomous Navigation in a Snake Robot through Visual-Inertial Odometry and Closed-Loop Trajectory Tracking Control
Robotics
Snake robots can now explore new places alone.
Optimizing Grasping in Legged Robots: A Deep Learning Approach to Loco-Manipulation
Robotics
Robots with arms learn to grab things better.
WholeBodyVLA: Towards Unified Latent VLA for Whole-Body Loco-Manipulation Control
Robotics
Robots can now reach and grab things anywhere.