Vi-TacMan: Articulated Object Manipulation via Vision and Touch

Published: October 7, 2025 | arXiv ID: 2510.06339v1

By: Leiyao Cui, Zihang Zhao, Sirui Xie, and more

Potential Business Impact:

Robots combine camera vision with touch sensing to open and operate hinged or sliding objects (doors, drawers, appliances) they have not seen before, without needing exact models of how the parts move.

Business Areas:
Autonomous Vehicles, Transportation

Autonomous manipulation of articulated objects remains a fundamental challenge for robots in human environments. Vision-based methods can infer hidden kinematics but often yield imprecise estimates on unfamiliar objects. Tactile approaches achieve robust control through contact feedback but require accurate initialization. This suggests a natural synergy: vision for global guidance, touch for local precision. Yet no framework systematically exploits this complementarity for generalized articulated manipulation. Here we present Vi-TacMan, which uses vision to propose grasps and coarse directions that seed a tactile controller for precise execution. By incorporating surface normals as geometric priors and modeling directions via von Mises-Fisher distributions, our approach achieves significant gains over baselines (all p<0.0001). Critically, manipulation succeeds without explicit kinematic models -- the tactile controller refines coarse visual estimates through real-time contact regulation. Tests on more than 50,000 simulated and diverse real-world objects confirm robust cross-category generalization. This work establishes that coarse visual cues suffice for reliable manipulation when coupled with tactile feedback, offering a scalable paradigm for autonomous systems in unstructured environments.
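
To make the direction-modeling idea concrete, below is a minimal Python sketch (not the authors' implementation) of how a coarse manipulation direction could be drawn from a von Mises-Fisher distribution centered on an estimated surface normal and then iteratively adjusted from contact feedback. It assumes SciPy 1.11+ for scipy.stats.vonmises_fisher; the function names, the concentration value kappa, the gain, and the toy contact-regulation rule are illustrative assumptions rather than details taken from the paper.

import numpy as np
from scipy.stats import vonmises_fisher  # available in SciPy >= 1.11

def propose_direction(surface_normal, kappa=50.0, rng=None):
    # Vision stage (illustrative): sample a coarse manipulation direction from a
    # von Mises-Fisher distribution whose mean is the surface-normal prior.
    # Larger kappa concentrates samples more tightly around that prior.
    mu = np.asarray(surface_normal, dtype=float)
    mu /= np.linalg.norm(mu)
    vmf = vonmises_fisher(mu=mu, kappa=kappa)
    return np.atleast_2d(vmf.rvs(1, random_state=rng))[0]

def tactile_refine(direction, contact_force, gain=0.1):
    # Tactile stage (toy contact regulation): force components perpendicular to
    # the commanded direction are treated as constraint reactions from the
    # unknown articulation; steering toward them makes the motion comply.
    lateral = contact_force - np.dot(contact_force, direction) * direction
    refined = direction + gain * lateral
    return refined / np.linalg.norm(refined)

rng = np.random.default_rng(0)
direction = propose_direction(np.array([0.0, 0.0, 1.0]), rng=rng)  # coarse visual seed
for _ in range(5):  # closed-loop refinement from (here, simulated) contact readings
    force = rng.normal(scale=0.2, size=3)
    direction = tactile_refine(direction, force)
print("refined direction:", direction)

In this sketch the vision output only needs to be roughly right: the sampled direction starts near the surface-normal prior, and the compliance-style update repeatedly rotates it toward the direction of least resistance, mirroring the paper's claim that coarse visual cues suffice when paired with tactile feedback.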

Country of Origin
🇨🇳 China

Page Count
7 pages

Category
Computer Science:
Robotics