Vi-TacMan: Articulated Object Manipulation via Vision and Touch
By: Leiyao Cui, Zihang Zhao, Sirui Xie, and more
Potential Business Impact:
Robots combine vision and touch to reliably open doors, drawers, and other articulated objects.
Autonomous manipulation of articulated objects remains a fundamental challenge for robots in human environments. Vision-based methods can infer hidden kinematics but often yield imprecise estimates on unfamiliar objects. Tactile approaches achieve robust control through contact feedback but require accurate initialization. This suggests a natural synergy: vision for global guidance, touch for local precision. Yet no framework systematically exploits this complementarity for generalized articulated manipulation. Here we present Vi-TacMan, which uses vision to propose grasps and coarse directions that seed a tactile controller for precise execution. By incorporating surface normals as geometric priors and modeling directions via von Mises-Fisher distributions, our approach achieves significant gains over baselines (all p<0.0001). Critically, manipulation succeeds without explicit kinematic models: the tactile controller refines coarse visual estimates through real-time contact regulation. Tests on more than 50,000 simulated and diverse real-world objects confirm robust cross-category generalization. This work establishes that coarse visual cues suffice for reliable manipulation when coupled with tactile feedback, offering a scalable paradigm for autonomous systems in unstructured environments.
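To illustrate the idea of modeling a coarse manipulation direction with a von Mises-Fisher (vMF) distribution whose mean is the estimated surface normal, here is a minimal sketch. It is not the authors' code; the function name, the `concentration` value, and the placeholder `surface_normal` are illustrative assumptions. It uses a standard closed-form vMF sampler for the 2-sphere.

```python
# Sketch: sample a coarse push/pull direction from a vMF distribution on S^2
# centered on the estimated surface normal (geometric prior). Assumed names,
# not from the paper: sample_vmf_s2, surface_normal, kappa value.
import numpy as np


def sample_vmf_s2(mu, kappa, rng=None):
    """Draw one unit vector from a vMF distribution on S^2 with mean
    direction `mu` and concentration `kappa` (closed-form sampler for d=3)."""
    if rng is None:
        rng = np.random.default_rng()
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)

    # Cosine of the angle to the mean direction, drawn from its marginal.
    u = rng.uniform()
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa

    # Uniform azimuth in the plane orthogonal to the (temporary) z-axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    v = np.array([np.cos(theta), np.sin(theta)])
    sample_z = np.concatenate([np.sqrt(1.0 - w**2) * v, [w]])  # frame where mu = e_z

    # Rotate the sample so that e_z maps onto mu (Rodrigues' formula).
    e_z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(e_z, mu)
    if np.linalg.norm(axis) < 1e-8:  # mu is (anti-)parallel to e_z
        return sample_z if mu[2] > 0 else -sample_z
    axis = axis / np.linalg.norm(axis)
    angle = np.arccos(np.clip(np.dot(e_z, mu), -1.0, 1.0))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return R @ sample_z


# Usage: the surface normal at the proposed grasp acts as the prior mean;
# a larger kappa keeps sampled directions tightly clustered around it.
surface_normal = np.array([0.0, 0.0, 1.0])  # placeholder estimate
coarse_direction = sample_vmf_s2(surface_normal, kappa=20.0)
print(coarse_direction, np.linalg.norm(coarse_direction))
```

In this reading, vision only needs to supply the prior mean and a confidence-like concentration; the tactile controller is what corrects any residual error during execution.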
Similar Papers
TacMan-Turbo: Proactive Tactile Control for Robust and Efficient Articulated Object Manipulation
Robotics
Robots learn to move objects smoothly and quickly.
ViTaMIn: Learning Contact-Rich Tasks Through Robot-Free Visuo-Tactile Manipulation Interface
Robotics
Teaches robots to grab things by feeling them.