High-Precision Transformer-Based Visual Servoing for Humanoid Robots in Aligning Tiny Objects
By: Jialong Xue, Wei Gao, Yu Wang, and more
Potential Business Impact:
Helps robots precisely place tiny tool parts.
High-precision alignment of tiny objects remains a common and critical challenge for humanoid robots in real-world settings. To address this problem, this paper proposes a vision-based framework for precisely estimating and controlling the relative position between a handheld tool and a target object for humanoid robots, e.g., a screwdriver tip and a screw head slot. By fusing images from the robot's head and torso cameras with its head joint angles, the proposed Transformer-based visual servoing method effectively corrects the handheld tool's positional errors, especially at close range. Experiments on M4-M8 screws demonstrate an average convergence error of 0.8-1.3 mm and a success rate of 93%-100%. Through comparative analysis, the results validate that this high-precision tiny-object alignment capability is enabled by the Distance Estimation Transformer architecture and the Multi-Perception-Head mechanism proposed in this paper.
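The closed-loop correction idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: the Transformer-based distance estimator is replaced by a hypothetical noisy offset measurement (`estimate_offset`), and the control law is a simple proportional correction; all function names, gains, and noise levels are assumptions for illustration only.

```python
# Minimal sketch of closed-loop visual servoing toward a target, assuming a
# proportional controller. The paper's actual Distance Estimation Transformer
# is mocked by a noisy relative-position measurement (hypothetical).
import numpy as np

def estimate_offset(tool_pos, target_pos, rng, noise_mm=0.2):
    """Stand-in for the vision-based relative-position estimate (hypothetical):
    returns the true tool-to-target offset corrupted by Gaussian noise."""
    return (target_pos - tool_pos) + rng.normal(0.0, noise_mm, size=3)

def servo_to_target(tool_pos, target_pos, gain=0.8, tol_mm=1.0, max_iters=50):
    """Iteratively move the tool by a fraction of the estimated offset until
    the estimated residual error falls below tol_mm (all units in mm)."""
    rng = np.random.default_rng(42)
    tool = np.asarray(tool_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    for i in range(max_iters):
        offset = estimate_offset(tool, target, rng)
        if np.linalg.norm(offset) < tol_mm:
            return tool, i  # converged within tolerance
        tool = tool + gain * offset  # proportional correction step
    return tool, max_iters

final, iters = servo_to_target([10.0, -5.0, 20.0], [0.0, 0.0, 0.0])
print(f"residual error: {np.linalg.norm(final):.2f} mm after {iters} steps")
```

Under these assumptions the loop converges in a handful of iterations, and the steady-state error is bounded by the measurement noise, which mirrors the abstract's point that estimation accuracy at close range is what determines the final millimeter-level alignment error.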
Similar Papers
Robust Visual Servoing under Human Supervision for Assembly Tasks
Systems and Control
Robots build towers by picking and placing blocks.
ViT-VS: On the Applicability of Pretrained Vision Transformer Features for Generalizable Visual Servoing
Robotics
Robots see and grab things better, even new ones.
Control Architecture and Design for a Multi-robotic Visual Servoing System in Automated Manufacturing Environment
Robotics
Robots can build tiny things more accurately.